From xen-devel-bounces@lists.xenproject.org Mon Feb 01 00:56:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 00:56:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79484.144663 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6NVu-0005Wq-Sz; Mon, 01 Feb 2021 00:56:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79484.144663; Mon, 01 Feb 2021 00:56:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6NVu-0005Wj-Py; Mon, 01 Feb 2021 00:56:22 +0000
Received: by outflank-mailman (input) for mailman id 79484;
 Mon, 01 Feb 2021 00:56:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6NVt-0005Wb-9w; Mon, 01 Feb 2021 00:56:21 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6NVs-0002JU-WA; Mon, 01 Feb 2021 00:56:21 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6NVs-0007Jm-OT; Mon, 01 Feb 2021 00:56:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l6NVs-0000BN-O0; Mon, 01 Feb 2021 00:56:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=Lg1vZlx2wEZYTTmKkIJlvtI7YxUIIq+1XiGn9de8rPU=; b=j2QjjiVMsRYkM2+5LdV6jYhCtm
	XnT+t6ozJq+4+TmG/adFhPLrBjLsYHXKjh0LuHaUxLJQdnXCj3ezKTWGt++gkdfLZ5qPTLilHkSbZ
	agvgN4XFWnC5pw5jMcVy6yUng5GA4CBKueNT9LabV8cvhwTuspz6Lrim43cfVwEuqcrs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [linux-5.4 bisection] complete test-amd64-coresched-amd64-xl
Message-Id: <E1l6NVs-0000BN-O0@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 01 Feb 2021 00:56:20 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-coresched-amd64-xl
testid guest-start

Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
  Bug introduced:  a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Bug not present: acc402fa5bf502d471d50e3d495379f093a7f9e4
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/158870/


  commit a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Author: David Woodhouse <dwmw@amazon.co.uk>
  Date:   Wed Jan 13 13:26:02 2021 +0000
  
      xen: Fix event channel callback via INTX/GSI
      
      [ Upstream commit 3499ba8198cad47b731792e5e56b9ec2a78a83a2 ]
      
      For a while, event channel notification via the PCI platform device
      has been broken, because we attempt to communicate with xenstore before
      we even have notifications working, with the xs_reset_watches() call
      in xs_init().
      
      We tend to get away with this on Xen versions below 4.0 because we avoid
      calling xs_reset_watches() anyway, because xenstore might not cope with
      reading a non-existent key. And newer Xen *does* have the vector
      callback support, so we rarely fall back to INTX/GSI delivery.
      
      To fix it, clean up a bit of the mess of xs_init() and xenbus_probe()
      startup. Call xs_init() directly from xenbus_init() only in the !XS_HVM
      case, deferring it to be called from xenbus_probe() in the XS_HVM case
      instead.
      
      Then fix up the invocation of xenbus_probe() to happen either from its
      device_initcall if the callback is available early enough, or when the
      callback is finally set up. This means that the hack of calling
      xenbus_probe() from a workqueue after the first interrupt, or directly
      from the PCI platform device setup, is no longer needed.
      
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Link: https://lore.kernel.org/r/20210113132606.422794-2-dwmw2@infradead.org
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/linux-5.4/test-amd64-coresched-amd64-xl.guest-start.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/linux-5.4/test-amd64-coresched-amd64-xl.guest-start --summary-out=tmp/158870.bisection-summary --basis-template=158387 --blessings=real,real-bisect,real-retry linux-5.4 test-amd64-coresched-amd64-xl guest-start
Searching for failure / basis pass:
 158841 fail [host=pinot0] / 158681 [host=pinot1] 158624 ok.
Failure / basis pass flights: 158841 / 158624
(tree with no url: minios)
Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
Basis pass 09f983f0c7fc0db79a5f6c883ec3510d424c369c c530a75c1e6a472b0eb9558310b518f0dfcd8860 96a9acfc527964dc5ab7298862a0cd8aa5fffc6a 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 452ddbe3592b141b05a7e0676f09c8ae07f98fdd
Generating revisions with ./adhoc-revtuple-generator  git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git#09f983f0c7fc0db79a5f6c883ec3510d424c369c-0fbca6ce4174724f28be5268c5d210f51ed96e31 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#96a9acfc527964dc5ab7298862a0cd8aa5fffc6a-c6be6dab9c4bdf135bc02b61ecc304d5511c3588 git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://xenbits.xen.org/qemu-xen.git#7ea428895af2840d85c524f0bd11a38aac308308-7ea428895af2840d85c524f0bd11a38aac308308 git://xenbits.xen.org/osstest/seabios.git#ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e-ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e git://xenbits.xen.org/xen.git#452ddbe3592b141b05a7e0676f09c8ae07f98fdd-9dc687f155a57216b83b17f9cde55dd43e06b0cd
Loaded 15001 nodes in revision graph
Searching for test results:
 158609 pass irrelevant
 158616 [host=rimava1]
 158624 pass 09f983f0c7fc0db79a5f6c883ec3510d424c369c c530a75c1e6a472b0eb9558310b518f0dfcd8860 96a9acfc527964dc5ab7298862a0cd8aa5fffc6a 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 452ddbe3592b141b05a7e0676f09c8ae07f98fdd
 158681 [host=pinot1]
 158707 fail irrelevant
 158716 fail irrelevant
 158748 fail 131f8d8a889a5ca66a835eea82bba043ac91a7cf c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e f8708b0ed6d549d1d29b8b5cc287f1f2b642bc63
 158765 fail 131f8d8a889a5ca66a835eea82bba043ac91a7cf c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 6677b5a3577c16501fbc51a3341446905bd21c38
 158796 fail irrelevant
 158818 fail irrelevant
 158852 pass 09f983f0c7fc0db79a5f6c883ec3510d424c369c c530a75c1e6a472b0eb9558310b518f0dfcd8860 96a9acfc527964dc5ab7298862a0cd8aa5fffc6a 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 452ddbe3592b141b05a7e0676f09c8ae07f98fdd
 158853 fail irrelevant
 158854 fail 38f35023fd301abeb01cfd81e73caa2e4e7ec0b1 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 158855 pass d8a487e673abf46c69c901bb25da54e9bc7ba45e c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 158856 pass 5a1d7bb7d333849eb7d3ab5ebfbf9805b2cd46c9 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 158858 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 158859 pass 9cec63a3aacbcaee8d09aecac2ca2f8820efcc70 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 158861 pass 8ab3478335ad8fc08f14ec73251b084fe02b3ebb c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 158841 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158862 pass acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 158864 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158865 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 158866 pass acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 158867 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 158869 pass acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 158870 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
Searching for interesting versions
 Result found: flight 158624 (pass), for basis pass
 For basis failure, parent search stopping at acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3, results HASH(0x55aed4caf5b0) HASH(0x55aed4ccd460) HASH(0x55aed4ca0028)
 For basis failure, parent search stopping at 8ab3478335ad8fc08f14ec73251b084fe02b3ebb c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3, results HASH(0x55aed4cba9a8)
 For basis failure, parent search stopping at 9cec63a3aacbcaee8d09aecac2ca2f8820efcc70 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3, results HASH(0x55aed4cd77b0)
 For basis failure, parent search stopping at 5a1d7bb7d333849eb7d3ab5ebfbf9805b2cd46c9 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3, results HASH(0x55aed4cb8b20)
 For basis failure, parent search stopping at d8a487e673abf46c69c901bb25da54e9bc7ba45e c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3, results HASH(0x55aed4cd94b8)
 For basis failure, parent search stopping at 09f983f0c7fc0db79a5f6c883ec3510d424c369c c530a75c1e6a472b0eb9558310b518f0dfcd8860 96a9acfc527964dc5ab7298862a0cd8aa5fffc6a 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 452ddbe3592b141b05a7e0676f09c8ae07f98fdd, results HASH(0x55aed4cad2a8) HASH(0x55aed4cc9c30)
 Result found: flight 158748 (fail), for basis failure (at ancestor ~10985)
 Repro found: flight 158852 (pass), for basis pass
 Repro found: flight 158864 (fail), for basis failure
 0 revisions at acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
No revisions left to test, checking graph state.
 Result found: flight 158862 (pass), for last pass
 Result found: flight 158865 (fail), for first failure
 Repro found: flight 158866 (pass), for last pass
 Repro found: flight 158867 (fail), for first failure
 Repro found: flight 158869 (pass), for last pass
 Repro found: flight 158870 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
  Bug introduced:  a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Bug not present: acc402fa5bf502d471d50e3d495379f093a7f9e4
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/158870/


  commit a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Author: David Woodhouse <dwmw@amazon.co.uk>
  Date:   Wed Jan 13 13:26:02 2021 +0000
  
      xen: Fix event channel callback via INTX/GSI
      
      [ Upstream commit 3499ba8198cad47b731792e5e56b9ec2a78a83a2 ]
      
      For a while, event channel notification via the PCI platform device
      has been broken, because we attempt to communicate with xenstore before
      we even have notifications working, with the xs_reset_watches() call
      in xs_init().
      
      We tend to get away with this on Xen versions below 4.0 because we avoid
      calling xs_reset_watches() anyway, because xenstore might not cope with
      reading a non-existent key. And newer Xen *does* have the vector
      callback support, so we rarely fall back to INTX/GSI delivery.
      
      To fix it, clean up a bit of the mess of xs_init() and xenbus_probe()
      startup. Call xs_init() directly from xenbus_init() only in the !XS_HVM
      case, deferring it to be called from xenbus_probe() in the XS_HVM case
      instead.
      
      Then fix up the invocation of xenbus_probe() to happen either from its
      device_initcall if the callback is available early enough, or when the
      callback is finally set up. This means that the hack of calling
      xenbus_probe() from a workqueue after the first interrupt, or directly
      from the PCI platform device setup, is no longer needed.
      
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Link: https://lore.kernel.org/r/20210113132606.422794-2-dwmw2@infradead.org
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>

pnmtopng: 250 colors found
Revision graph left in /home/logs/results/bisect/linux-5.4/test-amd64-coresched-amd64-xl.guest-start.{dot,ps,png,html,svg}.
----------------------------------------
158870: tolerable ALL FAIL

flight 158870 linux-5.4 real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/158870/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-coresched-amd64-xl 14 guest-start            fail baseline untested


jobs:
 test-amd64-coresched-amd64-xl                                fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Mon Feb 01 00:59:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 00:59:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79492.144681 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6NZF-0005qg-J7; Mon, 01 Feb 2021 00:59:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79492.144681; Mon, 01 Feb 2021 00:59:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6NZF-0005qZ-Fv; Mon, 01 Feb 2021 00:59:49 +0000
Received: by outflank-mailman (input) for mailman id 79492;
 Mon, 01 Feb 2021 00:59:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WfTc=HD=gmail.com=tamas.k.lengyel@srs-us1.protection.inumbo.net>)
 id 1l6NZE-0005qT-0d
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 00:59:48 +0000
Received: from mail-wr1-x431.google.com (unknown [2a00:1450:4864:20::431])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1a2e47ce-d4dd-4209-b150-3625a6697b31;
 Mon, 01 Feb 2021 00:59:46 +0000 (UTC)
Received: by mail-wr1-x431.google.com with SMTP id a1so14806104wrq.6
 for <xen-devel@lists.xenproject.org>; Sun, 31 Jan 2021 16:59:46 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1a2e47ce-d4dd-4209-b150-3625a6697b31
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=yVI+DSMjX7ADoTI85ShtDDtbltiIpAG4FoX74uUSix0=;
        b=CaD1fb64ht9Ft8AEAWiK7zJCLKg6irwCgxgGMomLOniEnYhKaxIXui+ZrF13iVXetg
         Q1mmUWC1AP7HQ86TPBEWPOLN4uzXMtLLAnCRIg1a7yicJe7d0KkyC3m1781oih0ME4Hp
         J132/I9gurcdutMF33OczQ7TNVSRrcHmInENxZsUiZtIx2c0T1tTD6NIMn1JKotVapXu
         iJuk02T0MT6YurPmnDuN4EEWWeF0sZLapE17nLG/v7zV+xhpliDQK3IQcO2DNt6IcenR
         KWh744W8VAjmCCGE1whOPK4XbId09oXiNk5HJekev94VeVF+MvXwVoTSdtZlE8HY6MQq
         keHw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=yVI+DSMjX7ADoTI85ShtDDtbltiIpAG4FoX74uUSix0=;
        b=eJ8rYzqYeEXWPaBf/fyb5HLorF17vvHz5gtHcQiIXUdftXXRgiHrrwJ6RuzDOblSR/
         18kvfF1jnty2gaS91WMIy/2ocPKj+69PMHSfzPtnxUB/iH47XBT/74lGEevSppVS8t+v
         swDgyRiJaXtv8pslRr9Tz/t8y/SkdcMjKg86R39h4awiDPf/pEWhkudu2xjUM+hqmjPD
         wd+4E/UsA4HJlTcBXaOrfbRs2C8YQdgiA6f5ObkJbyf7l8dz3Do5pi9GTlelI0nDtV9O
         B/06bWXfVJdjsJR2h6UUqiwVGu2lCX418g9MbKj3Y9GF9NGFqBzK0OiHDkFi2lGt7ULy
         w28g==
X-Gm-Message-State: AOAM533ruV6cBEuxwycvPyNaLtwuELmkzx41mdFmS8LLmfsErkj24cz9
	X9oqm6cz/kQ3TqFH/CJuKtqf3B9vvep7fGBtwd0=
X-Google-Smtp-Source: ABdhPJwpF4ynxnzTjCxUkhogwbwb0CbkZteTb5NOvGcbr99LVqMpLmjHE8bY+t3HLcERf16aB8Tz+UMxqkRZRkqGsvk=
X-Received: by 2002:adf:ffce:: with SMTP id x14mr15971767wrs.390.1612141185830;
 Sun, 31 Jan 2021 16:59:45 -0800 (PST)
MIME-Version: 1.0
References: <CABfawhnvgFLU=VmaqmKyf8DNeVcXoXTD2=XpiwnL0OikC1_z4w@mail.gmail.com>
 <YBc+Iaf1CCgXO0Aj@mattapan.m5p.com> <CABfawhncPyDKy_G2Lz7MaEb_ZoPadHef7S7KAj-fbCbQq6YNGA@mail.gmail.com>
In-Reply-To: <CABfawhncPyDKy_G2Lz7MaEb_ZoPadHef7S7KAj-fbCbQq6YNGA@mail.gmail.com>
From: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Date: Sun, 31 Jan 2021 19:59:09 -0500
Message-ID: <CABfawh=bwM8B8CH+BQ2OqbStSvaAGjA2JD2GWMru6JS0r_OHDg@mail.gmail.com>
Subject: Re: Xen 4.14.1 on RPI4: device tree generation failed
To: Elliott Mitchell <ehem+undef@m5p.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Content-Type: text/plain; charset="UTF-8"

On Sun, Jan 31, 2021 at 6:50 PM Tamas K Lengyel
<tamas.k.lengyel@gmail.com> wrote:
>
> On Sun, Jan 31, 2021 at 6:33 PM Elliott Mitchell <ehem+undef@m5p.com> wrote:
> >
> > On Sun, Jan 31, 2021 at 02:06:17PM -0500, Tamas K Lengyel wrote:
> > > (XEN) Unable to retrieve address 0 for /scb/pcie@7d500000/pci@1,0/usb@1,0
> > > (XEN) Device tree generation failed (-22).
> >
> > > Does anyone have an idea what might be going wrong here? I tried
> > > building the dtb without using the dtb overlay but it didn't seem to
> > > do anything.
> >
> > If you go to line 1412 of the file xen/arch/arm/domain_build.c and
> > replace the "return res;" with "continue;" that will bypass the issue.
> > The 3 people I'm copying on this message though may wish to ask questions
> > about the state of your build tree.
>
> I'll try that but it's a pretty hacky work-around ;)

That change got Xen to continue, but then I don't see any output from
dom0 afterwards and the system just hangs:

(XEN) *** LOADING DOMAIN 0 ***
(XEN) Loading d0 kernel from boot module @ 0000000000480000
(XEN) Allocating 1:1 mappings totalling 512MB for dom0:
(XEN) BANK[0] 0x00000010000000-0x00000028000000 (384MB)
(XEN) BANK[1] 0x00000030000000-0x00000038000000 (128MB)
(XEN) Grant table range: 0x00000000200000-0x00000000240000
(XEN) Unable to retrieve address 0 for /scb/pcie@7d500000/pci@1,0/usb@1,0
(XEN) Allocating PPI 16 for event channel interrupt
(XEN) Loading zImage from 0000000000480000 to 0000000010000000-0000000010f80000
(XEN) Loading d0 DTB to 0x0000000018000000-0x000000001800bde9
(XEN) Initial low memory virq threshold set at 0x4000 pages.
(XEN) Std. Loglevel: All
(XEN) Guest Loglevel: All
(XEN) ***************************************************
(XEN) WARNING: CONSOLE OUTPUT IS SYNCHRONOUS
(XEN) This option is intended to aid debugging of Xen by ensuring
(XEN) that all output is synchronously delivered on the serial line.
(XEN) However it can introduce SIGNIFICANT latencies and affect
(XEN) timekeeping. It is NOT recommended for production use!
(XEN) ***************************************************
(XEN) No support for ARM_SMCCC_ARCH_WORKAROUND_1.
(XEN) Please update your firmware.
(XEN) ***************************************************
(XEN) No support for ARM_SMCCC_ARCH_WORKAROUND_1.
(XEN) Please update your firmware.
(XEN) ***************************************************
(XEN) No support for ARM_SMCCC_ARCH_WORKAROUND_1.
(XEN) Please update your firmware.
(XEN) ***************************************************
(XEN) 3... 2... 1...
(XEN) *** Serial input to DOM0 (type 'CTRL-a' three times to switch input)
(XEN) Freed 352kB init memory.

Tamas


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 02:00:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 02:00:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79501.144699 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6OVG-0001Qb-Bn; Mon, 01 Feb 2021 01:59:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79501.144699; Mon, 01 Feb 2021 01:59:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6OVG-0001QT-5U; Mon, 01 Feb 2021 01:59:46 +0000
Received: by outflank-mailman (input) for mailman id 79501;
 Mon, 01 Feb 2021 01:59:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0ttS=HD=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1l6OVE-0001QO-H0
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 01:59:44 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cc2b2e3b-772d-412b-a42f-f70154f38a30;
 Mon, 01 Feb 2021 01:59:42 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 1111xRtb011604
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Sun, 31 Jan 2021 20:59:33 -0500 (EST) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 1111xR67011603;
 Sun, 31 Jan 2021 17:59:27 -0800 (PST) (envelope-from ehem)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cc2b2e3b-772d-412b-a42f-f70154f38a30
Date: Sun, 31 Jan 2021 17:59:27 -0800
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
        Stefano Stabellini <sstabellini@kernel.org>,
        Julien Grall <julien@xen.org>,
        Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: Xen 4.14.1 on RPI4: device tree generation failed
Message-ID: <YBdgf4KKcDn0SCOw@mattapan.m5p.com>
References: <CABfawhnvgFLU=VmaqmKyf8DNeVcXoXTD2=XpiwnL0OikC1_z4w@mail.gmail.com>
 <YBc+Iaf1CCgXO0Aj@mattapan.m5p.com>
 <CABfawhncPyDKy_G2Lz7MaEb_ZoPadHef7S7KAj-fbCbQq6YNGA@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CABfawhncPyDKy_G2Lz7MaEb_ZoPadHef7S7KAj-fbCbQq6YNGA@mail.gmail.com>
X-Spam-Status: No, score=0.4 required=10.0 tests=KHOP_HELO_FCRDNS autolearn=no
	autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

On Sun, Jan 31, 2021 at 06:50:36PM -0500, Tamas K Lengyel wrote:
> On Sun, Jan 31, 2021 at 6:33 PM Elliott Mitchell <ehem+undef@m5p.com> wrote:
> >
> > On Sun, Jan 31, 2021 at 02:06:17PM -0500, Tamas K Lengyel wrote:
> > > (XEN) Unable to retrieve address 0 for /scb/pcie@7d500000/pci@1,0/usb@1,0
> > > (XEN) Device tree generation failed (-22).
> >
> > > Does anyone have an idea what might be going wrong here? I tried
> > > building the dtb without using the dtb overlay but it didn't seem to
> > > do anything.
> >
> > If you go to line 1412 of the file xen/arch/arm/domain_build.c and
> > replace the "return res;" with "continue;" that will bypass the issue.
> > The 3 people I'm copying on this message though may wish to ask questions
> > about the state of your build tree.
> 
> I'll try that but it's a pretty hacky work-around ;)

Actually no, it simply causes Xen to ignore these entries.  The patch
I've got ready to submit to this list also adjusts the error message to
avoid misinterpretation, but does pretty much exactly this.

My only concern is whether it should ignore the entries only for Domain 0
or should always ignore them.


> > Presently the rpixen script is grabbing the RPF's 4.19 branch, dates
> > point to that last being touched last year.  Their tree is at
> > cc39f1c9f82f6fe5a437836811d906c709e0661c.
> 
> I've moved the Linux branch up to 5.10 because there had been a fair
> amount of work that went into fixing Xen on RPI4, which got merged
> into 5.9 and I would like to be able to build upstream everything
> without the custom patches coming with the rpixen script repo.

Please keep track of where your kernel source is checked out, since
there was a desire to figure out what was going on with the device-trees.


Including "console=hvc0 console=ttyAMA0 console=ttyS0 console=tty0" in the
kernel command-line should ensure you get output from the kernel if it
manages to start (yes, Linux does support having multiple consoles at the
same time).
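
With the overlay the rpixen script uses, dom0's kernel command line is
normally carried in the /chosen node's xen,dom0-bootargs property (see
docs/misc/arm/device-tree/booting.txt in the Xen tree).  A sketch, with
root= as a placeholder for your actual root device:

```dts
/* Sketch only: dom0's kernel arguments go in xen,dom0-bootargs;
 * Xen's own arguments (already set in the overlay) use
 * xen,xen-bootargs.  The root= value below is a placeholder. */
chosen {
	xen,dom0-bootargs = "console=hvc0 console=ttyAMA0 console=ttyS0 console=tty0 root=/dev/mmcblk0p2 rootwait";
};
```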


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Mon Feb 01 02:43:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 02:43:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79507.144717 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6PBw-0005gn-N7; Mon, 01 Feb 2021 02:43:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79507.144717; Mon, 01 Feb 2021 02:43:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6PBw-0005gg-KB; Mon, 01 Feb 2021 02:43:52 +0000
Received: by outflank-mailman (input) for mailman id 79507;
 Mon, 01 Feb 2021 02:43:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WfTc=HD=gmail.com=tamas.k.lengyel@srs-us1.protection.inumbo.net>)
 id 1l6PBv-0005gb-Ji
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 02:43:51 +0000
Received: from mail-wm1-x333.google.com (unknown [2a00:1450:4864:20::333])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0ec56cf1-978d-46a7-bcef-e079fbb477c1;
 Mon, 01 Feb 2021 02:43:50 +0000 (UTC)
Received: by mail-wm1-x333.google.com with SMTP id j18so11305463wmi.3
 for <xen-devel@lists.xenproject.org>; Sun, 31 Jan 2021 18:43:50 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0ec56cf1-978d-46a7-bcef-e079fbb477c1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=zYtaXle1N7QvSDr4hJgeFHILKHlGu6PfUoeF5ITSgaQ=;
        b=jHEmgEcidd9GH8IM4+fZS/DtEjSWUJTUGfL+9M/Z0QuCNIVu8K8zecPtQMbxz29kdN
         H72ycERPi+f6RJRoUlCZV2SYjsPpj601PYCvSyBqGbpXXyxGsdZs14jVLBMOBkGAjD3X
         zR9AsYH1vNSSIc/7qIS579yeOd1Eblp1Sb0jAn4tpfKGCjda1fXpFvDJL74uDHO/7FKX
         CB4prHbFM90udAlFYV/Z0S2CkWVLjvKunwL4BQeWBagkyFRfVVy6l/kam0mYriiTqzyH
         M2KQ47Qdp7O02MFD7rfiCsAi2RsE/cFxobVBxXaevK9y8KDro9J6ex4jBKWuLKpS+cPN
         PHDg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=zYtaXle1N7QvSDr4hJgeFHILKHlGu6PfUoeF5ITSgaQ=;
        b=Y4lKo8MqwRXBgp2LIbTT1tCkMDmuRsDOIkMT9imKrazoMtlYT1jM1TDnG4dYX5V/1P
         nUojgr/4bTb5GFFTt+INPf3j9urkcKPOo8UC7FampIg/zj1wP2rSPf3L09CMKpBYZce2
         gROzT8ni+V66UGlYXWiE7epmd1IIcv75ARBgKvy466+LvPuoq63ZRqZ71OewDebcylbo
         jyM9y+5Rkwbt7vMHCHIc/a51YWbND1geyb5xoA/KCTcB1X5moE4XdnlH8Pnz14SJqpSm
         +rbZzeXrC6ei9JC5pgzeO7azXqmK6GQUm2gtg5DZF4Cla7BJ0FqNOw0c18bDx2/aIj7G
         z1OQ==
X-Gm-Message-State: AOAM5310494wmJpds6LXF7TLFP7YyiPulqm8fJICy6+vFgFcpGNMy8zS
	X13XJlKApvjBoIei+fM2tCuygKLyUz/1SILzIWA=
X-Google-Smtp-Source: ABdhPJy7uVSLphV+lUd6ZFLjYJwQl6VWhcIDm02YzLPNZJwlWv2CYxt3midZHKhAF9MpmSwqyVpe6yrMc4KFb1nWJPg=
X-Received: by 2002:a1c:e905:: with SMTP id q5mr13088536wmc.84.1612147429482;
 Sun, 31 Jan 2021 18:43:49 -0800 (PST)
MIME-Version: 1.0
References: <CABfawhnvgFLU=VmaqmKyf8DNeVcXoXTD2=XpiwnL0OikC1_z4w@mail.gmail.com>
 <YBc+Iaf1CCgXO0Aj@mattapan.m5p.com> <CABfawhncPyDKy_G2Lz7MaEb_ZoPadHef7S7KAj-fbCbQq6YNGA@mail.gmail.com>
 <YBdgf4KKcDn0SCOw@mattapan.m5p.com>
In-Reply-To: <YBdgf4KKcDn0SCOw@mattapan.m5p.com>
From: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Date: Sun, 31 Jan 2021 21:43:13 -0500
Message-ID: <CABfawhmrWX6tO-bESuF5oGec5cLbkHdyjdCGsfwX5AVrwQBsKA@mail.gmail.com>
Subject: Re: Xen 4.14.1 on RPI4: device tree generation failed
To: Elliott Mitchell <ehem+xen@m5p.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Content-Type: text/plain; charset="UTF-8"

On Sun, Jan 31, 2021 at 8:59 PM Elliott Mitchell <ehem+xen@m5p.com> wrote:
>
> On Sun, Jan 31, 2021 at 06:50:36PM -0500, Tamas K Lengyel wrote:
> > On Sun, Jan 31, 2021 at 6:33 PM Elliott Mitchell <ehem+undef@m5p.com> wrote:
> > >
> > > On Sun, Jan 31, 2021 at 02:06:17PM -0500, Tamas K Lengyel wrote:
> > > > (XEN) Unable to retrieve address 0 for /scb/pcie@7d500000/pci@1,0/usb@1,0
> > > > (XEN) Device tree generation failed (-22).
> > >
> > > > Does anyone have an idea what might be going wrong here? I tried
> > > > building the dtb without using the dtb overlay but it didn't seem to
> > > > do anything.
> > >
> > > If you go to line 1412 of the file xen/arch/arm/domain_build.c and
> > > replace the "return res;" with "continue;" that will bypass the issue.
> > > The 3 people I'm copying on this message though may wish to ask questions
> > > about the state of your build tree.
> >
> > I'll try that but it's a pretty hacky work-around ;)
>
> Actually no, it simply causes Xen to ignore these entries.  The patch
> I've got ready to submit to this list also adjusts the error message to
> avoid misinterpretation, but does pretty much exactly this.
>
> My only concern is whether it should ignore the entries only for Domain 0
> or should always ignore them.
>
>
> > > Presently the rpixen script is grabbing the RPF's 4.19 branch, dates
> > > point to that last being touched last year.  Their tree is at
> > > cc39f1c9f82f6fe5a437836811d906c709e0661c.
> >
> > I've moved the Linux branch up to 5.10 because there had been a fair
> > amount of work that went into fixing Xen on RPI4, which got merged
> > into 5.9 and I would like to be able to build upstream everything
> > without the custom patches coming with the rpixen script repo.
>
> Please keep track of where your kernel source is checked out at since
> there was a desire to figure out what was going on with the device-trees.
>
>
> Including "console=hvc0 console=ttyAMA0 console=ttyS0 console=tty0" in the
> kernel command-line should ensure you get output from the kernel if it
> manages to start (yes, Linux does support having multiple consoles at the
> same time).

No output was received from dom0 even with the added console options
(+earlyprintk=xen). The kernel build was from rpi-5.10.y at
c9226080e513181ffb3909a905e9c23b8a6e8f62. I'll check whether it still
boots with 4.19 next.

Tamas


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 02:55:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 02:55:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79509.144729 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6PMs-0006nb-Px; Mon, 01 Feb 2021 02:55:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79509.144729; Mon, 01 Feb 2021 02:55:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6PMs-0006nU-MU; Mon, 01 Feb 2021 02:55:10 +0000
Received: by outflank-mailman (input) for mailman id 79509;
 Mon, 01 Feb 2021 02:55:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WfTc=HD=gmail.com=tamas.k.lengyel@srs-us1.protection.inumbo.net>)
 id 1l6PMr-0006nP-Iy
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 02:55:09 +0000
Received: from mail-wr1-x433.google.com (unknown [2a00:1450:4864:20::433])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2bd5e5e2-f531-4791-8828-19d01f288b27;
 Mon, 01 Feb 2021 02:55:08 +0000 (UTC)
Received: by mail-wr1-x433.google.com with SMTP id q7so14927027wre.13
 for <xen-devel@lists.xenproject.org>; Sun, 31 Jan 2021 18:55:08 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2bd5e5e2-f531-4791-8828-19d01f288b27
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=S6GWgEoFzvJMBPUCO9dayzaEi9/BbIzTat0tN/bGrKs=;
        b=sRJvyYI4bb9Sl526k+8KnO6OMJEu6pj8FPEFuuY6uluW4rCIShcWRWrwC11n2uWA7i
         3a/jTn16hV8kOJUnVWycCkFEUkBs4ADnzURzD8E45eWfU792/ktK6ikH5R5Oe3MxEHuX
         xEsRe9f+K+ZvVF+pPcbWs+hJdTEsOr9L4PcH97rcZK97dXs+HcMyb7gkxarKwKAo5BCf
         /8adr/xOx0UHu4W78DqoOVCJjJZ0jup88sdzD31Dk0XlVpkATejgb/gwbcqnpKQk32Zi
         hPX+5f3PKX5rBdWYQMmnQpMhjqxBiQ1Ze+ueEuP5QJEwHNb7iFWu6fNWKOYXx70sco+M
         vNHg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=S6GWgEoFzvJMBPUCO9dayzaEi9/BbIzTat0tN/bGrKs=;
        b=Spo35lVmUL4m4V8ueOMai2UECnhi3vRoT/6lgXDPLFTYoNlZd6IthiYRZfgKmhEasI
         pGoLQ+eM7Uh2tfY3i7uj2UTOC9gfl23HJlA4UrkXKW+zcECxIjq97Lkuump/z72x08t5
         daUE0ocAw4J4E7okRuk0/VjpbG5prVT6vBqzQ3j95lU6XwFwhT56W9uHKqRhKpSWJmx0
         jnjTfSr1lGkOcWAAcBBekX5HqGvyOavgVCS4XuZdcNmSOhS0AZe+J/ed6KRSHYOnJhit
         aLoY1V5lL5JUz65GEc4a227nXJpwnRjRnCaEN6/Je9TBhjQvfn5eqsuyydGLJ8fNi/Hy
         A79w==
X-Gm-Message-State: AOAM5331LDcvj8GqnQRn8ZvThtHb69Qkpd3fpYUlO6tLy3X1tsRwY/39
	X8YaJUAF8JrKj6dxTkofSxjKIb5YNoRnHeXAMo4=
X-Google-Smtp-Source: ABdhPJxOJqy+snbCd2pzMFm4jUslYDsOefwlkKyhmvJrrEG5uLTRfkdIyPyMaR9FQxiwlBrlFJPWDLmwSWtUFssmo1A=
X-Received: by 2002:a05:6000:1547:: with SMTP id 7mr15896036wry.301.1612148107571;
 Sun, 31 Jan 2021 18:55:07 -0800 (PST)
MIME-Version: 1.0
References: <CABfawhnvgFLU=VmaqmKyf8DNeVcXoXTD2=XpiwnL0OikC1_z4w@mail.gmail.com>
 <YBc+Iaf1CCgXO0Aj@mattapan.m5p.com> <CABfawhncPyDKy_G2Lz7MaEb_ZoPadHef7S7KAj-fbCbQq6YNGA@mail.gmail.com>
 <YBdgf4KKcDn0SCOw@mattapan.m5p.com> <CABfawhmrWX6tO-bESuF5oGec5cLbkHdyjdCGsfwX5AVrwQBsKA@mail.gmail.com>
In-Reply-To: <CABfawhmrWX6tO-bESuF5oGec5cLbkHdyjdCGsfwX5AVrwQBsKA@mail.gmail.com>
From: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Date: Sun, 31 Jan 2021 21:54:31 -0500
Message-ID: <CABfawhn4YTot8FeZg7zJV0Hc7=kHM0-yiBM7vwo4eqLcKvXnzg@mail.gmail.com>
Subject: Re: Xen 4.14.1 on RPI4: device tree generation failed
To: Elliott Mitchell <ehem+xen@m5p.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Content-Type: text/plain; charset="UTF-8"

On Sun, Jan 31, 2021 at 9:43 PM Tamas K Lengyel
<tamas.k.lengyel@gmail.com> wrote:
>
> On Sun, Jan 31, 2021 at 8:59 PM Elliott Mitchell <ehem+xen@m5p.com> wrote:
> >
> > On Sun, Jan 31, 2021 at 06:50:36PM -0500, Tamas K Lengyel wrote:
> > > On Sun, Jan 31, 2021 at 6:33 PM Elliott Mitchell <ehem+undef@m5p.com> wrote:
> > > >
> > > > On Sun, Jan 31, 2021 at 02:06:17PM -0500, Tamas K Lengyel wrote:
> > > > > (XEN) Unable to retrieve address 0 for /scb/pcie@7d500000/pci@1,0/usb@1,0
> > > > > (XEN) Device tree generation failed (-22).
> > > >
> > > > > Does anyone have an idea what might be going wrong here? I tried
> > > > > building the dtb without using the dtb overlay but it didn't seem to
> > > > > do anything.
> > > >
> > > > If you go to line 1412 of the file xen/arch/arm/domain_build.c and
> > > > replace the "return res;" with "continue;" that will bypass the issue.
> > > > The 3 people I'm copying on this message though may wish to ask questions
> > > > about the state of your build tree.
> > >
> > > I'll try that but it's a pretty hacky work-around ;)
> >
> > Actually no, it simply causes Xen to ignore these entries.  The patch
> > I've got ready to submit to this list also adjusts the error message to
> > avoid misinterpretation, but does pretty much exactly this.
> >
> > My only concern is whether it should ignore the entries only for Domain 0
> > or should always ignore them.
> >
> >
> > > > Presently the rpixen script is grabbing the RPF's 4.19 branch, dates
> > > > point to that last being touched last year.  Their tree is at
> > > > cc39f1c9f82f6fe5a437836811d906c709e0661c.
> > >
> > > I've moved the Linux branch up to 5.10 because there had been a fair
> > > amount of work that went into fixing Xen on RPI4, which got merged
> > > into 5.9 and I would like to be able to build upstream everything
> > > without the custom patches coming with the rpixen script repo.
> >
> > Please keep track of where your kernel source is checked out at since
> > there was a desire to figure out what was going on with the device-trees.
> >
> >
> > Including "console=hvc0 console=ttyAMA0 console=ttyS0 console=tty0" in the
> > kernel command-line should ensure you get output from the kernel if it
> > manages to start (yes, Linux does support having multiple consoles at the
> > same time).
>
> No output from dom0 received even with the added console options
> (+earlyprintk=xen). The kernel build was from rpi-5.10.y
> c9226080e513181ffb3909a905e9c23b8a6e8f62. I'll check if it still boots
> with 4.19 next.

The dtb overlay is giving me the following warnings with both 4.19 and 5.10:

arch/arm64/boot/dts/overlays/pi4-64-xen.dtbo: Warning (pci_bridge):
/fragment@1/__overlay__: node name is not "pci" or "pcie"
arch/arm64/boot/dts/overlays/pi4-64-xen.dtbo: Warning (pci_bridge):
/fragment@1/__overlay__: missing ranges for PCI bridge (or not a
bridge)
arch/arm64/boot/dts/overlays/pi4-64-xen.dtbo: Warning (pci_bridge):
/fragment@1/__overlay__: incorrect #address-cells for PCI bridge
arch/arm64/boot/dts/overlays/pi4-64-xen.dtbo: Warning (pci_bridge):
/fragment@1/__overlay__: incorrect #size-cells for PCI bridge
arch/arm64/boot/dts/overlays/pi4-64-xen.dtbo: Warning
(pci_device_bus_num): Failed prerequisite 'pci_bridge'

The overlays are defined in
https://github.com/dornerworks/xen-rpi4-builder/blob/master/patches/linux/0001-Add-Xen-overlay-for-the-Pi-4.patch
as:

+/dts-v1/;
+/plugin/;
+
+/ {
+ compatible = "brcm,bcm2711";
+
+ fragment@0 {
+ target-path = "/chosen";
+ __overlay__ {
+ #address-cells = <0x1>;
+ #size-cells = <0x1>;
+ xen,xen-bootargs = "console=dtuart dtuart=/soc/serial@7e215040
sync_console dom0_mem=512M dom0_mem=512M bootscrub=0";
+
+ dom0 {
+ compatible = "xen,linux-zimage", "xen,multiboot-module";
+ reg = <0x00400000 0x01000000>;
+ };
+ };
+ };
+
+ fragment@1 {
+ target-path = "/scb/pcie@7d500000";
+ __overlay__ {
+ device_type = "pci";
+ };
+ };
+};
+// Xen configuration for Pi 4

I don't really know what those warnings mean or how to fix them, but
perhaps they are relevant to why Xen complains as well?
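
For reference, my guess from the check name (not verified against dtc
internals): dtc applies its pci_bridge check to any node carrying
device_type = "pci", and in an overlay source that node is literally
/fragment@1/__overlay__, which is not named "pci" or "pcie" and has no
ranges, #address-cells or #size-cells of its own. After the fragment is
merged onto /scb/pcie@7d500000, which does have those properties, the
result should be well-formed, so the warnings look harmless:

```dts
/* fragment@1 as in the DornerWorks patch: the four pci_bridge warnings
 * fire on this literal "__overlay__" node, not on the merged target
 * /scb/pcie@7d500000.  If the noise is unwanted, the check can be
 * disabled when compiling the overlay, e.g. "dtc -W no-pci_bridge". */
fragment@1 {
	target-path = "/scb/pcie@7d500000";
	__overlay__ {
		device_type = "pci";
	};
};
```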

Tamas


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 03:07:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 03:07:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79511.144741 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6PYK-0008Kx-So; Mon, 01 Feb 2021 03:07:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79511.144741; Mon, 01 Feb 2021 03:07:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6PYK-0008Kq-P3; Mon, 01 Feb 2021 03:07:00 +0000
Received: by outflank-mailman (input) for mailman id 79511;
 Mon, 01 Feb 2021 03:06:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WfTc=HD=gmail.com=tamas.k.lengyel@srs-us1.protection.inumbo.net>)
 id 1l6PYJ-0008Kl-QR
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 03:06:59 +0000
Received: from mail-wm1-x329.google.com (unknown [2a00:1450:4864:20::329])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4dff7de0-4625-4442-b5f7-afd7254d7f7c;
 Mon, 01 Feb 2021 03:06:58 +0000 (UTC)
Received: by mail-wm1-x329.google.com with SMTP id f16so11326514wmq.5
 for <xen-devel@lists.xenproject.org>; Sun, 31 Jan 2021 19:06:58 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4dff7de0-4625-4442-b5f7-afd7254d7f7c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=OA5wXvIykS3+3x57uU2aiX9oKjvSpEg2ZJ+yujWKtOU=;
        b=VoLFSoN6UWBuTx5naMlWKxN30w2Ygw+SRTg3KtW9tOh0iSCrfbCUvQbSHQ8fBI4dtM
         eonCNmCoo8wq6VfYPYN88ecsPWNqVwWVAn1a2c1aCcIJJjDa22UK3sCgfW78SZg6PykM
         f7KGV7h9iwarzX4CSemTZXOGHhx8CAsffpFpjz/N4ioiez4DYF/xHdFBXbJ/ZRjVh15w
         y3/C5PWkIdMtdG83Ep+IU96B2hVj2dd7MRpzuUVG52vETwPG8NrpHz2mOL8ulWLJPp1V
         77av/jCqVsF4y9LVZu21Y/shkX+gVTacfvR1JGXSDV8TxKTOhQCPYLHluT1gX0D/c8Kg
         yMqQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=OA5wXvIykS3+3x57uU2aiX9oKjvSpEg2ZJ+yujWKtOU=;
        b=Gbw5qvLKtV+nhYUUar+uIw4h0D2nfywbS2O6v14zrlwxn12zugS006IuksMBqg8oBx
         mM6h0mFm5/tkGooi86lZSwqNUQxlgPxCUQ29GB05NdW4GSImJqMsIrNEFB9hajpOPFKC
         0RvWyerWLkJx7TKXmE9TU340LleMTCa/plJyNuj4Il+YdV4VRQYCG3EPq6K87/WEIh39
         fuWLapov4XXjazW8+Kk5wtRhnDZZSfXnc5xQh6DBWZ8tzFuwfE+i18pOUstKoVc6ULlc
         uFs+XntqO0crYim8WDM3ixx9effYdUpEjhEQQmkDPSxzgIhSCjYEEywLNNKLiSU2IQbD
         cYRQ==
X-Gm-Message-State: AOAM5334me0Cz7AVeqDNlq648JBlx/wIqbCnC8BZiFSF0WEnbaYTGwSE
	2QlM4XHLSaJ3wnW6q9G4KanD4S6J26EsKTES5Cc=
X-Google-Smtp-Source: ABdhPJwo2Fxl3W9m0SGAR63okJjnFwDpecgf3n0+6l3t61F2tCKFboGWsgBLgnT1YnW/UwKZZ9k2jfsoM4NKzFH2ObI=
X-Received: by 2002:a1c:4d05:: with SMTP id o5mr3129251wmh.51.1612148817660;
 Sun, 31 Jan 2021 19:06:57 -0800 (PST)
MIME-Version: 1.0
References: <CABfawhnvgFLU=VmaqmKyf8DNeVcXoXTD2=XpiwnL0OikC1_z4w@mail.gmail.com>
 <YBc+Iaf1CCgXO0Aj@mattapan.m5p.com> <CABfawhncPyDKy_G2Lz7MaEb_ZoPadHef7S7KAj-fbCbQq6YNGA@mail.gmail.com>
 <YBdgf4KKcDn0SCOw@mattapan.m5p.com> <CABfawhmrWX6tO-bESuF5oGec5cLbkHdyjdCGsfwX5AVrwQBsKA@mail.gmail.com>
In-Reply-To: <CABfawhmrWX6tO-bESuF5oGec5cLbkHdyjdCGsfwX5AVrwQBsKA@mail.gmail.com>
From: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Date: Sun, 31 Jan 2021 22:06:21 -0500
Message-ID: <CABfawhmE20u8PpKK6N2SNwOSjeOyfhqa8U48jykswbw9Yhnpvg@mail.gmail.com>
Subject: Re: Xen 4.14.1 on RPI4: device tree generation failed
To: Elliott Mitchell <ehem+xen@m5p.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Content-Type: text/plain; charset="UTF-8"

On Sun, Jan 31, 2021 at 9:43 PM Tamas K Lengyel
<tamas.k.lengyel@gmail.com> wrote:
>
> On Sun, Jan 31, 2021 at 8:59 PM Elliott Mitchell <ehem+xen@m5p.com> wrote:
> >
> > On Sun, Jan 31, 2021 at 06:50:36PM -0500, Tamas K Lengyel wrote:
> > > On Sun, Jan 31, 2021 at 6:33 PM Elliott Mitchell <ehem+undef@m5p.com> wrote:
> > > >
> > > > On Sun, Jan 31, 2021 at 02:06:17PM -0500, Tamas K Lengyel wrote:
> > > > > (XEN) Unable to retrieve address 0 for /scb/pcie@7d500000/pci@1,0/usb@1,0
> > > > > (XEN) Device tree generation failed (-22).
> > > >
> > > > > Does anyone have an idea what might be going wrong here? I tried
> > > > > building the dtb without using the dtb overlay but it didn't seem to
> > > > > do anything.
> > > >
> > > > If you go to line 1412 of the file xen/arch/arm/domain_build.c and
> > > > replace the "return res;" with "continue;" that will bypass the issue.
> > > > The 3 people I'm copying on this message though may wish to ask questions
> > > > about the state of your build tree.
> > >
> > > I'll try that but it's a pretty hacky work-around ;)
> >
> > Actually no, it simply causes Xen to ignore these entries.  The patch
> > I've got ready to submit to this list also adjusts the error message to
> > avoid misinterpretation, but does pretty much exactly this.
> >
> > My only concern is whether it should ignore the entries only for Domain 0
> > or should always ignore them.
> >
> >
> > > > Presently the rpixen script is grabbing the RPF's 4.19 branch, dates
> > > > point to that last being touched last year.  Their tree is at
> > > > cc39f1c9f82f6fe5a437836811d906c709e0661c.
> > >
> > > I've moved the Linux branch up to 5.10 because there had been a fair
> > > amount of work that went into fixing Xen on RPI4, which got merged
> > > into 5.9 and I would like to be able to build upstream everything
> > > without the custom patches coming with the rpixen script repo.
> >
> > Please keep track of where your kernel source is checked out at since
> > there was a desire to figure out what was going on with the device-trees.
> >
> >
> > Including "console=hvc0 console=ttyAMA0 console=ttyS0 console=tty0" in the
> > kernel command-line should ensure you get output from the kernel if it
> > manages to start (yes, Linux does support having multiple consoles at the
> > same time).
>
> No output from dom0 received even with the added console options
> (+earlyprintk=xen). The kernel build was from rpi-5.10.y
> c9226080e513181ffb3909a905e9c23b8a6e8f62. I'll check if it still boots
> with 4.19 next.

With the rpi-4.19.y kernel and dtbs
(cc39f1c9f82f6fe5a437836811d906c709e0661c) Xen boots fine and the
previous error is not present. I get the boot log on the serial console
from dom0 with just console=hvc0, but the kernel ends up in a panic
further down the line:

(XEN) traps.c:1983:d0v0 HSR=0x93860046 pc=0xffffff80085ac97c
gva=0xffffff800b096000 gpa=0x0000003e330000
[    1.242863] Unhandled fault at 0xffffff800b096000
[    1.242871] Mem abort info:
[    1.242879]   ESR = 0x96000000
[    1.242893]   Exception class = DABT (current EL), IL = 32 bits
[    1.242922]   SET = 0, FnV = 0
[    1.242928]   EA = 0, S1PTW = 0
[    1.242934] Data abort info:
[    1.242941]   ISV = 0, ISS = 0x00000000
[    1.242948]   CM = 0, WnR = 0
[    1.242958] swapper pgtable: 4k pages, 39-bit VAs, pgdp = (____ptrval____)
[    1.242965] [ffffff800b096000] pgd=0000000033ffe003,
pud=0000000033ffe003, pmd=000000003230a003, pte=006800003e33070f
[    1.242989] Internal error: ttbr address size fault: 96000000 [#1]
PREEMPT SMP
[    1.242995] Modules linked in:
[    1.243005] Process swapper/0 (pid: 1, stack limit = 0x(____ptrval____))
[    1.243014] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.19.127-v8+ #1
[    1.243019] Hardware name: Raspberry Pi 4 Model B Rev 1.1 (DT)
[    1.243026] pstate: 20000005 (nzCv daif -PAN -UAO)
[    1.243044] pc : cfb_imageblit+0x58c/0x820
[    1.243054] lr : bcm2708_fb_imageblit+0x2c/0x40
[    1.243059] sp : ffffff800802b4e0
[    1.243063] x29: ffffff800802b4e0 x28: 00000000ffffffff
[    1.243073] x27: 0000000000000010 x26: ffffffc03212c000
[    1.243081] x25: 0000000000000020 x24: ffffffc0322c7d80
[    1.243088] x23: 0000000000000008 x22: ffffffc03212a118
[    1.243095] x21: 0000000000000000 x20: ffffff800b096000
[    1.243102] x19: 0000000000000000 x18: 00000000fffffffc
[    1.243109] x17: 0000000000000000 x16: ffffff800b096000
[    1.243116] x15: 0000000000000001 x14: 0000000000001e00
[    1.243124] x13: 0000000000000010 x12: 0000000000000000
[    1.243131] x11: 0000000000000020 x10: 0000000000000001
[    1.243138] x9 : 0000000000000008 x8 : ffffff800b096020
[    1.243145] x7 : ffffffc03212c001 x6 : 0000000000000000
[    1.243152] x5 : ffffff80089e2f78 x4 : 0000000000000000
[    1.243159] x3 : ffffff800b096000 x2 : ffffffc03212c000
[    1.243166] x1 : 0000000000000000 x0 : 0000000000000000
[    1.243173] Call trace:
[    1.243182]  cfb_imageblit+0x58c/0x820
[    1.243190]  bcm2708_fb_imageblit+0x2c/0x40
[    1.243197]  soft_cursor+0x16c/0x200
[    1.243204]  bit_cursor+0x30c/0x53c
[    1.243211]  fbcon_cursor+0x13c/0x1a0
[    1.243220]  hide_cursor+0x44/0xb0
[    1.243228]  redraw_screen+0x218/0x28c
[    1.243234]  fbcon_prepare_logo+0x380/0x3ec
[    1.243241]  fbcon_init+0x364/0x550
[    1.243248]  visual_init+0xbc/0x110
[    1.243256]  do_bind_con_driver.isra.0+0x1c4/0x3a0
[    1.243264]  do_take_over_console+0x148/0x204
[    1.243270]  do_fbcon_takeover+0x7c/0xe4
[    1.243277]  fbcon_event_notify+0x6d4/0x850
[    1.243288]  blocking_notifier_call_chain+0x90/0xc0
[    1.243297]  fb_notifier_call_chain+0x34/0x40
[    1.243303]  register_framebuffer+0x21c/0x300
[    1.243311]  bcm2708_fb_probe+0x340/0x770
[    1.243319]  platform_drv_probe+0x5c/0xb0
[    1.243325]  really_probe+0x290/0x3a4
[    1.243331]  driver_probe_device+0x60/0xf4
[    1.243337]  __driver_attach+0x118/0x13c
[    1.243347]  bus_for_each_dev+0x84/0xe0
[    1.243352]  driver_attach+0x34/0x40
[    1.243358]  bus_add_driver+0x1a8/0x21c
[    1.243364]  driver_register+0x7c/0x124
[    1.243371]  __platform_driver_register+0x58/0x64
[    1.243382]  bcm2708_fb_init+0x24/0x2c
[    1.243390]  do_one_initcall+0x54/0x248
[    1.243399]  kernel_init_freeable+0x2e4/0x384
[    1.243408]  kernel_init+0x1c/0x118
[    1.243415]  ret_from_fork+0x10/0x18
[    1.243425] Code: 8b0608a6 b94050c6 0a060026 4a0400c6 (b9000066)
[    1.243437] ---[ end trace b74230fc2252e944 ]---
[    1.243452] Kernel panic - not syncing: Attempted to kill init!
exitcode=0x0000000b
[    1.243452]
[    1.243464] SMP: stopping secondary CPUs
[    1.243512] Kernel Offset: disabled
[    1.243519] CPU features: 0x0,61006000
[    1.243523] Memory Limit: none
[    1.590409] ---[ end Kernel panic - not syncing: Attempted to kill
init! exitcode=0x0000000b
[    1.590409]  ]---

This seems to have been caused by a monitor being attached to the HDMI
port; with HDMI unplugged, dom0 boots OK.

Tamas


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 04:27:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 04:27:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79516.144759 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6QoP-0007nX-Ny; Mon, 01 Feb 2021 04:27:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79516.144759; Mon, 01 Feb 2021 04:27:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6QoP-0007nQ-KZ; Mon, 01 Feb 2021 04:27:41 +0000
Received: by outflank-mailman (input) for mailman id 79516;
 Mon, 01 Feb 2021 04:27:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6QoO-0007nI-I7; Mon, 01 Feb 2021 04:27:40 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6QoO-0000pZ-Bk; Mon, 01 Feb 2021 04:27:40 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6QoO-00007T-1x; Mon, 01 Feb 2021 04:27:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l6QoO-0005cc-1Q; Mon, 01 Feb 2021 04:27:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=NTcIpU7Q572lVb3yiE0ppeH7GRSBa1rCTeFOGBFWz24=; b=odVoe7SiHgiqgIK+0Y7mG4pM9r
	+2rLNpPrfVOGKbVZxGuK/b03anpWg0/tGGm75Bvcv3vF18fgSuA8eHWTg3k45iW7gsU8TkfEzFArb
	wTOATpzMUUaC6TQ2LMogMm4Cv6ZwGJ2BJcHlqBFLTIgVb12DnOOwqcGLj4nlt8sW2ddo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158860-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 158860: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=74208cd252c5da9d867270a178799abd802b9338
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 01 Feb 2021 04:27:40 +0000

flight 158860 qemu-mainline real [real]
flight 158876 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/158860/
http://logs.test-lab.xenproject.org/osstest/logs/158876/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                74208cd252c5da9d867270a178799abd802b9338
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  164 days
Failing since        152659  2020-08-21 14:07:39 Z  163 days  332 attempts
Testing same since   158816  2021-01-30 13:16:09 Z    1 days    3 attempts

------------------------------------------------------------
372 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 98359 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 05:15:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 05:15:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79524.144780 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6RYT-0004WW-KR; Mon, 01 Feb 2021 05:15:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79524.144780; Mon, 01 Feb 2021 05:15:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6RYT-0004WP-Go; Mon, 01 Feb 2021 05:15:17 +0000
Received: by outflank-mailman (input) for mailman id 79524;
 Mon, 01 Feb 2021 05:15:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6RYS-0004WH-0U; Mon, 01 Feb 2021 05:15:16 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6RYR-000237-PU; Mon, 01 Feb 2021 05:15:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6RYR-0001hG-FD; Mon, 01 Feb 2021 05:15:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l6RYR-0004ap-Eh; Mon, 01 Feb 2021 05:15:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=0DmIQHdOSmCTmag4BMv8bcrTGZBVVzhxVwNyp0iW7K0=; b=elbupxu7m92flVStIlUIgcC5oD
	BHz6GioTCrF6OwdqBftu85ks/+wix6QSZClsMkV/cU8Et5za+Oxg4jdIQasGfGL/OKVpjipRHlr0I
	GPG+tdZ0DlqJa/xQRaECvC+De+Kmc7d2JYIPx9NXkqpLsPpqkB13FrSOZldc9I8fKgsY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158863-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 158863: regressions - FAIL
X-Osstest-Failures:
    linux-5.4:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-multivcpu:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-pvhv2-amd:guest-start:fail:regression
    linux-5.4:test-amd64-coresched-amd64-xl:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-qemuu-freebsd12-amd64:guest-start:fail:regression
    linux-5.4:test-amd64-coresched-i386-xl:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl:guest-start:fail:regression
    linux-5.4:test-amd64-i386-qemut-rhel6hvm-amd:redhat-install:fail:regression
    linux-5.4:test-arm64-arm64-xl-seattle:guest-start:fail:regression
    linux-5.4:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-pvshim:guest-start:fail:regression
    linux-5.4:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-pvhv2-intel:guest-start:fail:regression
    linux-5.4:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    linux-5.4:test-amd64-i386-xl:guest-start:fail:regression
    linux-5.4:test-amd64-i386-libvirt:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-credit1:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-shadow:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-pair:guest-start/debian:fail:regression
    linux-5.4:test-amd64-amd64-xl:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-libvirt:guest-start:fail:regression
    linux-5.4:test-amd64-i386-xl-xsm:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    linux-5.4:test-amd64-amd64-xl-credit2:guest-start:fail:regression
    linux-5.4:test-amd64-i386-pair:guest-start/debian:fail:regression
    linux-5.4:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
    linux-5.4:test-amd64-i386-libvirt-pair:guest-start/debian:fail:regression
    linux-5.4:test-amd64-i386-qemut-rhel6hvm-intel:redhat-install:fail:regression
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
    linux-5.4:test-amd64-amd64-amd64-pvgrub:debian-di-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:windows-install:fail:regression
    linux-5.4:test-amd64-amd64-pygrub:debian-di-install:fail:regression
    linux-5.4:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:windows-install:fail:regression
    linux-5.4:test-amd64-amd64-i386-pvgrub:debian-di-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    linux-5.4:test-arm64-arm64-xl-credit1:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-xsm:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-thunderx:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-credit2:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-amd64:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qcow2:debian-di-install:fail:regression
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemut-debianhvm-amd64:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:windows-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
    linux-5.4:test-amd64-i386-xl-raw:debian-di-install:fail:regression
    linux-5.4:test-amd64-amd64-libvirt-vhd:debian-di-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:windows-install:fail:regression
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-cubietruck:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-libvirt:guest-start:fail:regression
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
    linux-5.4:test-amd64-amd64-qemuu-freebsd11-amd64:guest-start:fail:regression
    linux-5.4:test-amd64-i386-xl-shadow:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    linux-5.4:test-armhf-armhf-xl:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-arndale:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-rtds:guest-start:fail:allowable
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
X-Osstest-Versions-This:
    linux=0fbca6ce4174724f28be5268c5d210f51ed96e31
X-Osstest-Versions-That:
    linux=a829146c3fdcf6d0b76d9c54556a223820f1f73b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 01 Feb 2021 05:15:15 +0000

flight 158863 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158863/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 158387
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 158387
 test-amd64-amd64-xl-multivcpu 14 guest-start             fail REGR. vs. 158387
 test-amd64-amd64-xl-pvhv2-amd 14 guest-start             fail REGR. vs. 158387
 test-amd64-coresched-amd64-xl 14 guest-start             fail REGR. vs. 158387
 test-amd64-amd64-qemuu-freebsd12-amd64 13 guest-start    fail REGR. vs. 158387
 test-amd64-coresched-i386-xl 14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl          14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-qemut-rhel6hvm-amd 12 redhat-install     fail REGR. vs. 158387
 test-arm64-arm64-xl-seattle  14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-freebsd10-amd64 13 guest-start           fail REGR. vs. 158387
 test-amd64-amd64-xl-pvshim   14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-qemuu-rhel6hvm-amd 12 redhat-install     fail REGR. vs. 158387
 test-amd64-amd64-xl-pvhv2-intel 14 guest-start           fail REGR. vs. 158387
 test-amd64-i386-freebsd10-i386 13 guest-start            fail REGR. vs. 158387
 test-amd64-amd64-xl-xsm      14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-xl           14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-libvirt      14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-xl-credit1  14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-xl-shadow   14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-pair        25 guest-start/debian       fail REGR. vs. 158387
 test-amd64-amd64-xl          14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-xl-xsm       14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 158387
 test-amd64-amd64-xl-credit2  14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-pair         25 guest-start/debian       fail REGR. vs. 158387
 test-amd64-i386-libvirt-xsm  14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-libvirt-pair 25 guest-start/debian       fail REGR. vs. 158387
 test-amd64-i386-qemut-rhel6hvm-intel 12 redhat-install   fail REGR. vs. 158387
 test-amd64-amd64-qemuu-nested-amd 12 debian-hvm-install  fail REGR. vs. 158387
 test-amd64-i386-qemuu-rhel6hvm-intel 12 redhat-install   fail REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install  fail REGR. vs. 158387
 test-amd64-amd64-amd64-pvgrub 12 debian-di-install       fail REGR. vs. 158387
 test-amd64-i386-xl-qemut-win7-amd64 12 windows-install   fail REGR. vs. 158387
 test-amd64-amd64-pygrub      12 debian-di-install        fail REGR. vs. 158387
 test-amd64-amd64-qemuu-nested-intel 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemut-win7-amd64 12 windows-install  fail REGR. vs. 158387
 test-amd64-amd64-i386-pvgrub 12 debian-di-install        fail REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-i386-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 158387
 test-arm64-arm64-xl-credit1  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-xsm      14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-thunderx 14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-credit2  14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemut-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qcow2    12 debian-di-install        fail REGR. vs. 158387
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-i386-xl-qemut-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-ws16-amd64 12 windows-install  fail REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemut-ws16-amd64 12 windows-install  fail REGR. vs. 158387
 test-amd64-i386-xl-qemuu-win7-amd64 12 windows-install   fail REGR. vs. 158387
 test-amd64-i386-xl-raw       12 debian-di-install        fail REGR. vs. 158387
 test-amd64-amd64-libvirt-vhd 12 debian-di-install        fail REGR. vs. 158387
 test-amd64-i386-xl-qemut-ws16-amd64 12 windows-install   fail REGR. vs. 158387
 test-armhf-armhf-xl-multivcpu 14 guest-start             fail REGR. vs. 158387
 test-armhf-armhf-xl-cubietruck 14 guest-start            fail REGR. vs. 158387
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-xl-qemuu-ws16-amd64 12 windows-install   fail REGR. vs. 158387
 test-amd64-amd64-qemuu-freebsd11-amd64 13 guest-start    fail REGR. vs. 158387
 test-amd64-i386-xl-shadow    14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 158387
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 158387
 test-armhf-armhf-xl          14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-credit2  14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 158387
 test-armhf-armhf-xl-credit1  14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-arndale  14 guest-start              fail REGR. vs. 158387

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-rtds     14 guest-start              fail REGR. vs. 158387

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass

version targeted for testing:
 linux                0fbca6ce4174724f28be5268c5d210f51ed96e31
baseline version:
 linux                a829146c3fdcf6d0b76d9c54556a223820f1f73b

Last test of basis   158387  2021-01-12 19:40:06 Z   19 days
Failing since        158473  2021-01-17 13:42:20 Z   14 days   24 attempts
Testing same since   158818  2021-01-30 13:48:12 Z    1 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adrian Hunter <adrian.hunter@intel.com>
  Akilesh Kailash <akailash@google.com>
  Al Viro <viro@zeniv.linux.org.uk>
  Alan Maguire <alan.maguire@oracle.com>
  Alan Stern <stern@rowland.harvard.edu>
  Aleksander Jan Bajkowski <olek2@wp.pl>
  Alex Deucher <alexander.deucher@amd.com>
  Alex Leibovich <alexl@marvell.com>
  Alexander Lobakin <alobakin@pm.me>
  Alexander Shishkin <alexander.shishkin@linux.intel.com>
  Alexandru Ardelean <alexandru.ardelean@analog.com>
  Alexey Minnekhanov <alexeymin@postmarketos.org>
  Anders Roxell <anders.roxell@linaro.org>
  Andreas Kemnade <andreas@kemnade.info>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andrey Zhizhikin <andrey.zhizhikin@leica-geosystems.com>
  Andrii Nakryiko <andrii@kernel.org>
  Andy Lutomirski <luto@kernel.org>
  Anthony Iliopoulos <ailiop@suse.com>
  Ard Biesheuvel <ardb@kernel.org>
  Ariel Marcovitch <ariel.marcovitch@gmail.com>
  Ariel Marcovitch <arielmarcovitch@gmail.com>
  Arnaldo Carvalho de Melo <acme@redhat.com>
  Arnd Bergmann <arnd@arndb.de>
  Aya Levin <ayal@nvidia.com>
  Ayush Sawal <ayush.sawal@chelsio.com>
  Baptiste Lepers <baptiste.lepers@gmail.com>
  Bartosz Golaszewski <bgolaszewski@baylibre.com>
  Baruch Siach <baruch@tkos.co.il>
  Ben Skeggs <bskeggs@redhat.com>
  Billy Tsai <billy_tsai@aspeedtech.com>
  Borislav Petkov <bp@suse.de>
  Can Guo <cang@codeaurora.org>
  Catalin Marinas <catalin.marinas@arm.com>
  Cezary Rojewski <cezary.rojewski@intel.com>
  Chris Wilson <chris@chris-wilson.co.uk>
  Christoph Hellwig <hch@lst.de>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Chuck Lever <chuck.lever@oracle.com>
  Chunyan Zhang <chunyan.zhang@unisoc.com>
  Colin Ian King <colin.king@canonical.com>
  Cong Wang <cong.wang@bytedance.com>
  Craig Tatlor <ctatlor97@gmail.com>
  Damien Le Moal <damien.lemoal@wdc.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Daniel Vetter <daniel.vetter@ffwll.ch>
  Daniel Vetter <daniel.vetter@intel.com>
  Dave Wysochanski <dwysocha@redhat.com>
  David Howells <dhowells@redhat.com>
  David Rientjes <rientjes@google.com>
  David S. Miller <davem@davemloft.net>
  David Sterba <dsterba@suse.com>
  David Woodhouse <dwmw@amazon.co.uk>
  David Wu <david.wu@rock-chips.com>
  Dennis Li <Dennis.Li@amd.com>
  Dexuan Cui <decui@microsoft.com>
  Dinghao Liu <dinghao.liu@zju.edu.cn>
  Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
  Enke Chen <enchen@paloaltonetworks.com>
  Eric Biggers <ebiggers@google.com>
  Eric Dumazet <edumazet@google.com>
  Eugene Korenevsky <ekorenevsky@astralinux.ru>
  Ewan D. Milne <emilne@redhat.com>
  Fabian Vogt <fvogt@suse.com>
  Felipe Balbi <balbi@kernel.org>
  Felix Fietkau <nbd@nbd.name>
  Fenghua Yu <fenghua.yu@intel.com>
  Filipe Laíns <lains@archlinux.org>
  Filipe Manana <fdmanana@suse.com>
  Finn Thain <fthain@telegraphics.com.au>
  Florian Fainelli <f.fainelli@gmail.com>
  Florian Westphal <fw@strlen.de>
  Gaurav Kohli <gkohli@codeaurora.org>
  Geert Uytterhoeven <geert+renesas@glider.be>
  Gopal Tiwari <gtiwari@redhat.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Guido Günther <agx@sigxcpu.org>
  Guillaume Nault <gnault@redhat.com>
  Gustavo Pimentel <gustavo.pimentel@synopsys.com>
  Hamish Martin <hamish.martin@alliedtelesis.co.nz>
  Hangbin Liu <liuhangbin@gmail.com>
  Hannes Reinecke <hare@suse.de>
  Hans de Goede <hdegoede@redhat.com>
  Hao Wang <pkuwangh@gmail.com>
  Heikki Krogerus <heikki.krogerus@linux.intel.com>
  Hoang Le <hoang.h.le@dektech.com.au>
  Huazhong Tan <tanhuazhong@huawei.com>
  Ido Schimmel <idosch@nvidia.com>
  Igor Russkikh <irusskikh@marvell.com>
  Ingo Molnar <mingo@kernel.org>
  Ion Agorria <ion@agorria.com>
  Israel Rukshin <israelr@nvidia.com>
  J. Bruce Fields <bfields@redhat.com>
  j.nixdorf@avm.de <j.nixdorf@avm.de>
  Jakub Kicinski <kuba@kernel.org>
  Jamie Iles <jamie@jamieiles.com>
  Jan Kara <jack@suse.cz>
  Jani Nikula <jani.nikula@intel.com>
  Jann Horn <jannh@google.com>
  Jason A. Donenfeld <Jason@zx2c4.com>
  Jason Gerecke <jason.gerecke@wacom.com>
  Jason Gerecke <killertofu@gmail.com>
  Jason Gunthorpe <jgg@nvidia.com>
  JC Kuo <jckuo@nvidia.com>
  Jean Delvare <jdelvare@suse.de>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Axboe <axboe@kernel.dk>
  Jerome Brunet <jbrunet@baylibre.com>
  Jesper Dangaard Brouer <brouer@redhat.com>
  Jethro Beekman <jethro@fortanix.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jiri Kosina <jkosina@suse.cz>
  Jiri Olsa <jolsa@redhat.com>
  Jiri Slaby <jslaby@suse.cz>
  Joel Stanley <joel@jms.id.au>
  Johannes Berg <johannes.berg@intel.com>
  Johannes Nixdorf <j.nixdorf@avm.de>
  John Millikin <john@john-millikin.com>
  Johnathan Smithinovic <johnathan.smithinovic@gmx.at>
  Jon Hunter <jonathanh@nvidia.com>
  Jon Maloy <jmaloy@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Joonsoo Kim <iamjoonsoo.kim@lge.com>
  Josef Bacik <josef@toxicpanda.com>
  Jouni K. Seppänen <jks@iki.fi>
  Jozsef Kadlecsik <kadlec@netfilter.org>
  Juerg Haefliger <juergh@canonical.com>
  Juergen Gross <jgross@suse.com>
  Julian Wiedmann <jwi@linux.ibm.com>
  Kai-Heng Feng <kai.heng.feng@canonical.com>
  Kan Liang <kan.liang@linux.intel.com>
  Kees Cook <keescook@chromium.org>
  Krzysztof Mazur <krzysiek@podlesie.net>
  Krzysztof Piotr Olędzki <ole@ans.pl>
  Lars-Peter Clausen <lars@metafoo.de>
  Lecopzer Chen <lecopzer.chen@mediatek.com>
  Lecopzer Chen <lecopzer@gmail.com>
  Leon Romanovsky <leonro@nvidia.com>
  Leon Schuermann <leon@is.currently.online>
  Linhua Xu <linhua.xu@unisoc.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Longfang Liu <liulongfang@huawei.com>
  Lorenzo Bianconi <lorenzo@kernel.org>
  Lu Baolu <baolu.lu@linux.intel.com>
  Luis Lozano <llozano@google.com>
  Lukas Wunner <lukas@wunner.de>
  Manish Chopra <manishc@marvell.com>
  Manoj Gupta <manojgupta@google.com>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Marc Zyngier <maz@kernel.org>
  Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
  Marcin Wojtas <mw@semihalf.com>
  Mark Bloch <mbloch@nvidia.com>
  Mark Brown <broonie@kernel.org>
  Mark Zhang <markzhang@nvidia.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Martin KaFai Lau <kafai@fb.com>
  Martin Wilck <mwilck@suse.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Masami Hiramatsu <mhiramat@kernel.org>
  Mathias Kresin <dev@kresin.me>
  Mathias Nyman <mathias.nyman@linux.intel.com>
  Matteo Croce <mcroce@microsoft.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Miaohe Lin <linmiaohe@huawei.com>
  Michael Chan <michael.chan@broadcom.com>
  Michael Ellerman <mpe@ellerman.id.au>
  Michael Hennerich <michael.hennerich@analog.com>
  Michael S. Tsirkin <mst@redhat.com>
  Mike Snitzer <snitzer@redhat.com>
  Mikko Perttunen <mperttunen@nvidia.com>
  Mikulas Patocka <mpatocka@redhat.com>
  Ming Lei <ming.lei@redhat.com>
  Mircea Cirjaliu <mcirjaliu@bitdefender.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
  Neal Cardwell <ncardwell@google.com>
  Necip Fazil Yildiran <fazilyildiran@gmail.com>
  Nick Desaulniers <ndesaulniers@google.com>
  Nicolai Stange <nstange@suse.de>
  Nicolas Dichtel <nicolas.dichtel@6wind.com>
  Nilesh Javali <njavali@marvell.com>
  Oded Gabbay <ogabbay@kernel.org>
  Olaf Hering <olaf@aepfle.de>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Pali Rohár <pali@kernel.org>
  Palmer Dabbelt <palmerdabbelt@google.com>
  Pan Bian <bianpan2016@163.com>
  Parav Pandit <parav@nvidia.com>
  Patrik Jakobsson <patrik.r.jakobsson@gmail.com>
  Paul Cercueil <paul@crapouillou.net>
  Paulo Alcantara (SUSE) <pc@cjr.nz>
  Paulo Alcantara <pc@cjr.nz>
  Peter Collingbourne <pcc@google.com>
  Peter Geis <pgwipeout@gmail.com>
  Peter Robinson <pbrobinson@gmail.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Petr Machata <me@pmachata.org>
  Petr Machata <petrm@nvidia.com>
  Phil Oester <kernel@linuxace.com>
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Ping Cheng <ping.cheng@wacom.com>
  Ping Cheng <pinglinux@gmail.com>
  Po-Hsu Lin <po-hsu.lin@canonical.com>
  Qinglang Miao <miaoqinglang@huawei.com>
  Qingqing Zhuo <qingqing.zhuo@amd.com>
  Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Rafael Kitover <rkitover@gmail.com>
  Randy Dunlap <rdunlap@infradead.org>
  Rasmus Villemoes <rasmus.villemoes@prevas.dk>
  Reinette Chatre <reinette.chatre@intel.com>
  Rich Felker <dalias@libc.org>
  Rob Clark <robdclark@chromium.org>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Rohit Maheshwari <rohitm@chelsio.com>
  Roman Guskov <rguskov@dh-electronics.com>
  Ronnie Sahlberg <lsahlber@redhat.com>
  Ross Zwisler <zwisler@google.com>
  Ryan Chen <ryan_chen@aspeedtech.com>
  Saeed Mahameed <saeedm@nvidia.com>
  Sagar Shrikant Kadam <sagar.kadam@sifive.com>
  Sagi Grimberg <sagi@grimberg.me>
  Sameer Pujar <spujar@nvidia.com>
  Samuel Holland <samuel@sholland.org>
  Sasha Levin <sashal@kernel.org>
  Sean Tranchetti <stranche@codeaurora.org>
  Seth Miller <miller.seth@gmail.com>
  Shawn Guo <shawn.guo@linaro.org>
  Shravya Kumbham <shravya.kumbham@xilinx.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Stanislav Fomichev <sdf@google.com>
  Stefan Chulski <stefanc@marvell.com>
  Steffen Klassert <steffen.klassert@secunet.com>
  Stephan Gerhold <stephan@gerhold.net>
  Stephen Boyd <sboyd@kernel.org>
  Steve French <stfrench@microsoft.com>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Su Yue <l@damenly.su>
  Sudip Mukherjee <sudipm.mukherjee@gmail.com>
  Takashi Iwai <tiwai@suse.de>
  Tariq Toukan <tariqt@nvidia.com>
  Theodore Ts'o <tytso@mit.edu>
  Thierry Reding <treding@nvidia.com>
  Thinh Nguyen <Thinh.Nguyen@synopsys.com>
  Thomas Bogendoerfer <tsbogend@alpha.franken.de>
  Thomas Gleixner <tglx@linutronix.de>
  Thomas Hebb <tommyhebb@gmail.com>
  Tobias Waldekranz <tobias@waldekranz.com>
  Toke Høiland-Jørgensen <toke@toke.dk>
  Tom Rix <trix@redhat.com>
  Tony Lindgren <tony@atomide.com>
  Trond Myklebust <trond.myklebust@hammerspace.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
  Vadim Pasternak <vadimp@nvidia.com>
  Valdis Kletnieks <valdis.kletnieks@vt.edu>
  Valdis Klētnieks <valdis.kletnieks@vt.edu>
  Vasily Averin <vvs@virtuozzo.com>
  Victor Zhao <Victor.Zhao@amd.com>
  Vinay Kumar Yadav <vinay.yadav@chelsio.com>
  Vincent Mailhol <mailhol.vincent@wanadoo.fr>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vineet Gupta <vgupta@synopsys.com>
  Vinod Koul <vkoul@kernel.org>
  Viresh Kumar <viresh.kumar@linaro.org>
  Vladimir Oltean <vladimir.oltean@nxp.com>
  Vlastimil Babka <vbabka@suse.cz>
  Wang Hai <wanghai38@huawei.com>
  Wang Hui <john.wanghui@huawei.com>
  Wayne Lin <Wayne.Lin@amd.com>
  Wei Liu <wei.liu@kernel.org>
  Will Deacon <will@kernel.org>
  Willem de Bruijn <willemb@google.com>
  Wolfram Sang <wsa+renesas@sang-engineering.com>
  Wolfram Sang <wsa@kernel.org>
  Xiaolei Wang <xiaolei.wang@windriver.com>
  yangerkun <yangerkun@huawei.com>
  Yazen Ghannam <Yazen.Ghannam@amd.com>
  Yonglong Liu <liuyonglong@huawei.com>
  Youling Tang <tangyouling@loongson.cn>
  YueHaibing <yuehaibing@huawei.com>
  Yufeng Mo <moyufeng@huawei.com>
  zhengbin <zhengbin13@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 8166 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 05:54:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 05:54:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79530.144795 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6SAP-0008PU-Sv; Mon, 01 Feb 2021 05:54:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79530.144795; Mon, 01 Feb 2021 05:54:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6SAP-0008PN-Pn; Mon, 01 Feb 2021 05:54:29 +0000
Received: by outflank-mailman (input) for mailman id 79530;
 Mon, 01 Feb 2021 05:54:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0ttS=HD=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1l6SAO-0008Oz-Lo
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 05:54:28 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 103d0558-48db-406f-b8e7-e141ab72951c;
 Mon, 01 Feb 2021 05:54:23 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 1115s6B6012636
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Mon, 1 Feb 2021 00:54:12 -0500 (EST) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 1115s5Hj012635;
 Sun, 31 Jan 2021 21:54:05 -0800 (PST) (envelope-from ehem)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 103d0558-48db-406f-b8e7-e141ab72951c
Date: Sun, 31 Jan 2021 21:54:05 -0800
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
        Stefano Stabellini <sstabellini@kernel.org>,
        Julien Grall <julien@xen.org>,
        Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: Xen 4.14.1 on RPI4: device tree generation failed
Message-ID: <YBeXfWf8lQ2nwMtI@mattapan.m5p.com>
References: <CABfawhnvgFLU=VmaqmKyf8DNeVcXoXTD2=XpiwnL0OikC1_z4w@mail.gmail.com>
 <YBc+Iaf1CCgXO0Aj@mattapan.m5p.com>
 <CABfawhncPyDKy_G2Lz7MaEb_ZoPadHef7S7KAj-fbCbQq6YNGA@mail.gmail.com>
 <YBdgf4KKcDn0SCOw@mattapan.m5p.com>
 <CABfawhmrWX6tO-bESuF5oGec5cLbkHdyjdCGsfwX5AVrwQBsKA@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CABfawhmrWX6tO-bESuF5oGec5cLbkHdyjdCGsfwX5AVrwQBsKA@mail.gmail.com>
X-Spam-Status: No, score=0.4 required=10.0 tests=KHOP_HELO_FCRDNS autolearn=no
	autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

On Sun, Jan 31, 2021 at 10:06:21PM -0500, Tamas K Lengyel wrote:
> With rpi-4.19.y kernel and dtbs
> (cc39f1c9f82f6fe5a437836811d906c709e0661c) Xen boots fine and the
> previous error is not present. I get the boot log on the serial with
> just console=hvc0 from dom0 but the kernel ends up in a panic down the
> line:

> This seems to have been caused by a monitor being attached to the HDMI
> port, with HDMI unplugged dom0 boots OK.

The balance of reports seems to suggest 5.10 is the way to go if you want
graphics on an RPi4 with Xen.  Even without Xen, 4.19 is looking rickety
on the RPi4.


On Sun, Jan 31, 2021 at 09:43:13PM -0500, Tamas K Lengyel wrote:
> On Sun, Jan 31, 2021 at 8:59 PM Elliott Mitchell <ehem+xen@m5p.com> wrote:
> >
> > On Sun, Jan 31, 2021 at 06:50:36PM -0500, Tamas K Lengyel wrote:
> > > On Sun, Jan 31, 2021 at 6:33 PM Elliott Mitchell <ehem+undef@m5p.com> wrote:
> > > > Presently the rpixen script is grabbing the RPF's 4.19 branch, dates
> > > > point to that last being touched last year.  Their tree is at
> > > > cc39f1c9f82f6fe5a437836811d906c709e0661c.
> > >
> > > I've moved the Linux branch up to 5.10 because there had been a fair
> > > amount of work that went into fixing Xen on RPI4, which got merged
> > > into 5.9 and I would like to be able to build upstream everything
> > > without the custom patches coming with the rpixen script repo.
> >
> > Please keep track of where your kernel source is checked out at since
> > there was a desire to figure out what was going on with the device-trees.
> >
> >
> > Including "console=hvc0 console=AMA0 console=ttyS0 console=tty0" in the
> > kernel command-line should ensure you get output from the kernel if it
> > manages to start (yes, Linux does support having multiple consoles at the
> > same time).
> 
> No output from dom0 received even with the added console options
> (+earlyprintk=xen). The kernel build was from rpi-5.10.y
> c9226080e513181ffb3909a905e9c23b8a6e8f62. I'll check if it still boots
> with 4.19 next.

So, their current HEAD.  This reads like you've got a problematic kernel
configuration.  What procedure are you following to generate the
configuration you use?

Using their upstream as a base and then adding the configuration options
for Xen has worked fairly well for me (`make bcm2711_defconfig`,
`make menuconfig`, `make zImage`).

Notably, the following options should be set to "y":
CONFIG_PARAVIRT
CONFIG_XEN_DOM0
CONFIG_XEN
CONFIG_XEN_BLKDEV_BACKEND
CONFIG_XEN_NETDEV_BACKEND
CONFIG_HVC_XEN
CONFIG_HVC_XEN_FRONTEND
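For what it's worth, one way to capture that option list reproducibly is
a Kconfig fragment merged on top of bcm2711_defconfig.  This is just a
sketch; the fragment filename xen-dom0.config and the merge step are
illustrative, not something anyone in this thread is actually using:

```shell
# Write the Xen dom0 options listed above into a fragment file.
# (The filename xen-dom0.config is arbitrary.)
cat > xen-dom0.config <<'EOF'
CONFIG_PARAVIRT=y
CONFIG_XEN=y
CONFIG_XEN_DOM0=y
CONFIG_XEN_BLKDEV_BACKEND=y
CONFIG_XEN_NETDEV_BACKEND=y
CONFIG_HVC_XEN=y
CONFIG_HVC_XEN_FRONTEND=y
EOF

# In a kernel source tree you would then merge the fragment on top of
# the Raspberry Pi defconfig and let kconfig resolve dependencies:
#   make bcm2711_defconfig
#   scripts/kconfig/merge_config.sh -m .config xen-dom0.config
#   make olddefconfig
#   make zImage

# Quick sanity check: all seven options in the fragment are "=y".
grep -c '=y$' xen-dom0.config    # prints 7
```

The merge_config.sh route avoids re-toggling the options by hand in
menuconfig every time the tree is re-cloned.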


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Mon Feb 01 05:59:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 05:59:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79532.144807 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6SF4-0000Ir-F9; Mon, 01 Feb 2021 05:59:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79532.144807; Mon, 01 Feb 2021 05:59:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6SF4-0000Ij-CC; Mon, 01 Feb 2021 05:59:18 +0000
Received: by outflank-mailman (input) for mailman id 79532;
 Mon, 01 Feb 2021 05:59:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6SF2-0000Ib-VB; Mon, 01 Feb 2021 05:59:17 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6SF2-0002mV-OW; Mon, 01 Feb 2021 05:59:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6SF2-0003FZ-F4; Mon, 01 Feb 2021 05:59:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l6SF2-0006cy-Ea; Mon, 01 Feb 2021 05:59:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Z/oVmKxcdUK3M02/6Y1EQHtwrqVhXOgP4Mux68I3bS8=; b=wp0Hp/QfH5ajQXk8tnQnDx40it
	22tsY2aqOfDbDHveY+MeDHDN/55EmGTPZvibRpjPFKQDFJrHL7MWOPeiPiHupoXEncJBijfeuNbiQ
	5NhrN0BqWs8prGOR+/zRbwPhytFo/QxeBDIsJTfOBCuHXby/CCe/fOJQpcnvn3S+Dh7M=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158878-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 158878: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=35d5b26aa433bd33f4b33be3dbb67313357f97f9
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 01 Feb 2021 05:59:16 +0000

flight 158878 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158878/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              35d5b26aa433bd33f4b33be3dbb67313357f97f9
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  206 days
Failing since        151818  2020-07-11 04:18:52 Z  205 days  200 attempts
Testing same since   158878  2021-02-01 04:18:44 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 38608 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 06:54:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 06:54:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79539.144828 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6T63-0005zl-P9; Mon, 01 Feb 2021 06:54:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79539.144828; Mon, 01 Feb 2021 06:54:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6T63-0005zd-KN; Mon, 01 Feb 2021 06:54:03 +0000
Received: by outflank-mailman (input) for mailman id 79539;
 Mon, 01 Feb 2021 06:54:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=et9n=HD=codiax.se=anders.tornqvist@srs-us1.protection.inumbo.net>)
 id 1l6T61-0005xz-Cs
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 06:54:02 +0000
Received: from mailrelay1-3.pub.mailoutpod1-cph3.one.com (unknown
 [46.30.212.10]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 23867401-90df-48a0-9d6a-be39100a004e;
 Mon, 01 Feb 2021 06:53:59 +0000 (UTC)
Received: from [192.168.101.129] (h87-96-135-119.cust.a3fiber.se
 [87.96.135.119])
 by mailrelay1.pub.mailoutpod1-cph3.one.com (Halon) with ESMTPSA
 id 415b493c-645a-11eb-9242-d0431ea8a283;
 Mon, 01 Feb 2021 06:53:57 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 23867401-90df-48a0-9d6a-be39100a004e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=codiax.se; s=20191106;
	h=content-transfer-encoding:content-type:in-reply-to:mime-version:date:
	 message-id:from:references:to:subject:from;
	bh=9iSWWxPn+xYNB147RqOz9EaHPk6vjfbCLBnxZjD6gxY=;
	b=NO2kyN3ctfG/AcWlHjGUfiwmZHJT/eKH0QMepwBmfz0vj7BlsmM+Nf8NJ8SweenlKgcQcB4n1Sn5z
	 OXzckIbZFVkxKpSQCJRNxuA890fBNHy6+xlkKU43LJbmT4o6PZdXa1/HO40f6IniZqiyN++EB1pR59
	 59hZNVBtNbUeJBvucod7UosLYUw72egYdfKBbNtte0FbVjVvapkBvgz1AfUatCQuZtf3cnyLV6FbnE
	 +BNwj/QH46vjNyl7/OpTskMfJ1poiPy4rTcsXeD4ID78HU0jcTEdHTa5b9gmwpDyYQhKJ6dlQtvSDC
	 oQSKIODi+wSVh3KlHr8gY/s8UyKhykA==
X-HalOne-Cookie: 6c6de1af18e8d22f82e1d197fec1ff80f8342d4c
X-HalOne-ID: 415b493c-645a-11eb-9242-d0431ea8a283
Subject: Re: Null scheduler and vwfi native problem
To: Dario Faggioli <dfaggioli@suse.com>, =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?=
 <jgross@suse.com>, Julien Grall <julien@xen.org>,
 xen-devel@lists.xenproject.org, Stefano Stabellini <sstabellini@kernel.org>
References: <fe3dd9f0-b035-01fe-3e01-ddf065f182ab@codiax.se>
 <207305e4e2998614767fdcc5ad83ced6de982820.camel@suse.com>
 <e85548f4-e03b-4717-3495-9ed472ed03c9@xen.org>
 <e18ba69efd0d12fc489144024305fd3c6102c330.camel@suse.com>
 <e37fe8a9-c633-3572-e273-2fd03b35b791@codiax.se>
 <744ddde6-a228-82fc-76b9-401926d7963b@xen.org>
 <d92c4191fb81e6d1de636f281c8624d68f8d14fc.camel@suse.com>
 <c9a4e132-5bca-aa76-ab8b-bfeee1cd5a9e@codiax.se>
 <f52baf12308d71b96d0d9be1c7c382a3c5efafbc.camel@suse.com>
 <18ef4619-19ae-90d2-459c-9b5282b49176@codiax.se>
 <bfe8b2fe-57c4-79e2-f2e7-3e1cb9b7963b@suse.com>
 <98331cca73412d13d594015c046c64809a7d221c.camel@suse.com>
From: =?UTF-8?Q?Anders_T=c3=b6rnqvist?= <anders.tornqvist@codiax.se>
Message-ID: <29598e84-e26c-793a-d531-02498d73e284@codiax.se>
Date: Mon, 1 Feb 2021 07:53:57 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <98331cca73412d13d594015c046c64809a7d221c.camel@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-GB

On 1/29/21 11:16 AM, Dario Faggioli wrote:
> On Fri, 2021-01-29 at 09:18 +0100, Jürgen Groß wrote:
>> On 29.01.21 09:08, Anders Törnqvist wrote:
>>>> So using it has only downsides (and that's true in general, if you
>>>> ask me, but particularly so if using NULL).
>>> Thanks for the feedback.
>>> I removed dom0_vcpus_pin. And, as you said, it seems to be unrelated
>>> to the problem we're discussing.
> Right. Don't put it back, and stay away from it, if you accept an
> advice. :-)
>
>>> The system still behaves the same.
>>>
> Yeah, that was expected.
>
>>> When dom0_vcpus_pin is removed, xl vcpu-list looks like this:
>>>
>>> Name                                ID  VCPU   CPU State Time(s) Affinity (Hard / Soft)
>>> Domain-0                             0     0    0   r--     29.4   all / all
>>> Domain-0                             0     1    1   r--     28.7   all / all
>>> Domain-0                             0     2    2   r--     28.7   all / all
>>> Domain-0                             0     3    3   r--     28.6   all / all
>>> Domain-0                             0     4    4   r--     28.6   all / all
>>> mydomu                               1     0    5   r--     21.6     5 / all
>>>
> Right, and it makes sense for it to look like this.
>
>>> From this listing (with "all" as hard affinity for dom0) one might
>>> read it like dom0 is not pinned with hard affinity to any specific
>>> pCPUs at all, but mydomu is pinned to pCPU 5.
>>> Will the dom0_max_vcpus=5 in this case guarantee that dom0 will only
>>> run on pCPU 0-4, so that mydomu always will have pCPU 5 for itself?
>> No.
>>
> Well, yes... if you use the NULL scheduler. Which is in use here. :-)
>
> Basically, the NULL scheduler _always_ assigns one and only one vCPU to
> each pCPU. This happens at domain (well, at vCPU) creation time.
> And it _never_ moves a vCPU away from the pCPU to which it has
> assigned it.
>
> And it also _never_ changes this vCPU-->pCPU assignment/relationship,
> unless some special event happens (such as the vCPU and/or the pCPU
> going offline, being removed from the cpupool, you changing the
> affinity [as I'll explain below], etc.).
>
> This is the NULL scheduler's mission and only job, so it does that by
> default, _without_ any need for an affinity to be specified.
>
> So, how can affinity be useful in the NULL scheduler? Well, it's useful
> if you want to control and decide to what pCPU a certain vCPU should
> go.
>
> So, let's make an example. Let's say you are in this situation:
>
> Name                                ID  VCPU   CPU State Time(s) Affinity (Hard / Soft)
> Domain-0                             0     0    0   r--     29.4   all / all
> Domain-0                             0     1    1   r--     28.7   all / all
> Domain-0                             0     2    2   r--     28.7   all / all
> Domain-0                             0     3    3   r--     28.6   all / all
> Domain-0                             0     4    4   r--     28.6   all / all
>
> I.e., you have 6 CPUs, you have only dom0, dom0 has 5 vCPUs and you are
> not using dom0_vcpus_pin.
>
> The NULL scheduler has put d0v0 on pCPU 0. And d0v0 is the only vCPU
> that can run on pCPU 0, despite its affinities being "all"... because
> it's what the NULL scheduler does for you and it's the reason why one
> uses it! :-)
>
> Similarly, it has put d0v1 on pCPU 1, d0v2 on pCPU 2, d0v3 on pCPU 3
> and d0v4 on pCPU 4. And the "exclusivity guarantee" explained above
> for d0v0 and pCPU 0, applies to all these other vCPUs and pCPUs as
> well.
>
> With no affinity being specified, which vCPU is assigned to which pCPU
> is entirely under the NULL scheduler control. It has its heuristics
> inside, to try to do that in a smart way, but that's an
> internal/implementation detail and is not relevant here.
>
> If you now create a domU with 1 vCPU, that vCPU will be assigned to
> pCPU 5.
>
> Now, let's say that, for whatever reason, you absolutely want that d0v2
> to run on pCPU 5, instead of being assigned and run on pCPU 2 (which is
> what the NULL scheduler decided to pick for it). Well, what you do is
> use xl, set the affinity of d0v2 to pCPU 5, and you will get something
> like this as a result:
>
> Name                                ID  VCPU   CPU State Time(s) Affinity (Hard / Soft)
> Domain-0                             0     0    0   r--     29.4   all / all
> Domain-0                             0     1    1   r--     28.7   all / all
> Domain-0                             0     2    5   r--     28.7     5 / all
> Domain-0                             0     3    3   r--     28.6   all / all
> Domain-0                             0     4    4   r--     28.6   all / all
>
> So, affinity is indeed useful, even when using NULL, if you want to
> diverge from the default behavior and enact a certain policy, maybe due
> to the nature of your workload, the characteristics of your hardware,
> or whatever.
>
> It is not, however, necessary to set the affinity to:
>   - have a vCPU to always stay on one --and always the same one too--
>     pCPU;
>   - avoid that any other vCPU would ever run on that pCPU.
>
> That is guaranteed by the NULL scheduler itself. It just can't happen
> that it behaves otherwise, because the whole point of doing it was to
> make it simple (and fast :-)) *exactly* by avoiding teaching it how to
> do such things. It can't do it, because the code for doing it is not
> there... by design! :-D
>
> And, BTW, if you now create a domU with 1 vCPU, that vCPU will be
> assigned to pCPU 2.
Wow, what a great explanation. Thank you very much!
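For reference, the affinity change Dario describes (giving d0v2 a hard affinity of pCPU 5) is done with `xl vcpu-pin`; a minimal sketch, not part of the original thread, assuming the domain names from the listings above:

```shell
# Set the hard affinity of vCPU 2 of Domain-0 to pCPU 5;
# the soft affinity (the optional fourth argument) is left as "all"
xl vcpu-pin Domain-0 2 5

# Verify the new vCPU-->pCPU placement
xl vcpu-list Domain-0
```

Under the NULL scheduler this both moves d0v2 to pCPU 5 and keeps it there, since the scheduler never reassigns a vCPU unless affinity, hotplug, or cpupool membership changes.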
>> What if I would like mydomu to be the only domain that uses pCPU 2?
> Set up a cpupool with that pCPU assigned to it and put your domain into
> that cpupool.
>
> Yes, with any other scheduler that is not NULL, that's the proper way
> of doing it.
>
> Regards
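
The cpupool approach suggested above can be sketched with the standard xl cpupool subcommands; a hypothetical example (the pool name, scheduler choice, and config syntax are assumptions, and pCPU 2 must have no vCPU assigned to it before it can leave Pool-0):

```shell
# Free pCPU 2 from the default pool (Pool-0)
xl cpupool-cpu-remove Pool-0 2

# Define a new pool owning only pCPU 2
cat > mypool.cfg <<'EOF'
name = "mypool"
sched = "credit2"
cpus = ["2"]
EOF
xl cpupool-create mypool.cfg

# Move the domain into the new pool; no other domain can now run on pCPU 2
xl cpupool-migrate mydomu mypool
```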




From xen-devel-bounces@lists.xenproject.org Mon Feb 01 06:56:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 06:56:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79540.144839 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6T7v-00066a-3u; Mon, 01 Feb 2021 06:55:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79540.144839; Mon, 01 Feb 2021 06:55:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6T7v-00066T-11; Mon, 01 Feb 2021 06:55:59 +0000
Received: by outflank-mailman (input) for mailman id 79540;
 Mon, 01 Feb 2021 06:55:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=et9n=HD=codiax.se=anders.tornqvist@srs-us1.protection.inumbo.net>)
 id 1l6T7u-00066N-0B
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 06:55:58 +0000
Received: from mailrelay4-3.pub.mailoutpod1-cph3.one.com (unknown
 [46.30.212.13]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ae0437dd-2408-4af4-b87b-3ef018b8b831;
 Mon, 01 Feb 2021 06:55:56 +0000 (UTC)
Received: from [192.168.101.129] (h87-96-135-119.cust.a3fiber.se
 [87.96.135.119])
 by mailrelay4.pub.mailoutpod1-cph3.one.com (Halon) with ESMTPSA
 id 87916886-645a-11eb-a8e7-d0431ea8bb10;
 Mon, 01 Feb 2021 06:55:55 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ae0437dd-2408-4af4-b87b-3ef018b8b831
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=codiax.se; s=20191106;
	h=content-transfer-encoding:content-type:in-reply-to:mime-version:date:
	 message-id:from:references:to:subject:from;
	bh=KVkCnxlgywxorwQhcaT1l1x1iQFMOPECPdMmbhOGd2A=;
	b=GjHe9wrxWY4g6tNMcYj7epl9LcsyX+vY16/kOwhenYw+TM146HULTdIiQINaEPdgvTo8F7Ajx+S6h
	 zjq/Pr7hoaR1lXVlwSvdyxEm9S8wobXil/NggYC3xILYb7H2O7Ie6bTBPoTjKykJ/gyUVl84E9PkG6
	 zC/41seThUjrWqg6hsqsaxh6zmKPdKtcFQl3w68ATLBSwQV0PcR3ck/W5NCldwdtDwe0kvMPpgkBTo
	 +BvCJr79YjIhbCCtGsZn1R9d7NiHDmYiz9XJcBmRMH+hhLT3FVmzgsOHhUKilfQrX4/XtXoD55FakK
	 9SsSNif7B1p040uSJCGswvBCbsTEALw==
X-HalOne-Cookie: 150a2bb0fd9eed9730d60ae145742d155ea0c4b1
X-HalOne-ID: 87916886-645a-11eb-a8e7-d0431ea8bb10
Subject: Re: Null scheduler and vwfi native problem
To: Dario Faggioli <dfaggioli@suse.com>, Julien Grall <julien@xen.org>,
 xen-devel@lists.xenproject.org, Stefano Stabellini <sstabellini@kernel.org>
References: <fe3dd9f0-b035-01fe-3e01-ddf065f182ab@codiax.se>
 <207305e4e2998614767fdcc5ad83ced6de982820.camel@suse.com>
 <e85548f4-e03b-4717-3495-9ed472ed03c9@xen.org>
 <e18ba69efd0d12fc489144024305fd3c6102c330.camel@suse.com>
 <e37fe8a9-c633-3572-e273-2fd03b35b791@codiax.se>
 <744ddde6-a228-82fc-76b9-401926d7963b@xen.org>
 <d92c4191fb81e6d1de636f281c8624d68f8d14fc.camel@suse.com>
 <c9a4e132-5bca-aa76-ab8b-bfeee1cd5a9e@codiax.se>
 <f52baf12308d71b96d0d9be1c7c382a3c5efafbc.camel@suse.com>
 <18ef4619-19ae-90d2-459c-9b5282b49176@codiax.se>
 <a9d80e262760f6692f7086c9b6a0622caf19e795.camel@suse.com>
From: =?UTF-8?Q?Anders_T=c3=b6rnqvist?= <anders.tornqvist@codiax.se>
Message-ID: <def6307a-9242-f8b2-a4e0-b32652a1b6c0@codiax.se>
Date: Mon, 1 Feb 2021 07:55:54 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <a9d80e262760f6692f7086c9b6a0622caf19e795.camel@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-GB

On 1/30/21 6:59 PM, Dario Faggioli wrote:
> On Fri, 2021-01-29 at 09:08 +0100, Anders Törnqvist wrote:
>> On 1/26/21 11:31 PM, Dario Faggioli wrote:
>>> Thanks again for letting us see these logs.
>> Thanks for the attention to this :-)
>>
>> Any ideas for how to solve it?
>>
> So, you're up for testing patches, right?
Absolutely. I will apply them and be back with the results. :-)

>
> How about applying these two, and letting me know what happens? :-D
>
> They are on top of current staging. I can try to rebase on something
> else, if it's easier for you to test.
>
> Besides being attached, they're also available here:
>
> https://gitlab.com/xen-project/people/dfaggioli/xen/-/tree/rcu-quiet-fix
>
> I could not test them properly on ARM, as I don't have an ARM system
> handy, so everything is possible really... just let me know.
>
> It should at least build fine, AFAICT from here:
>
> https://gitlab.com/xen-project/people/dfaggioli/xen/-/pipelines/249101213
>
> Julien, back in:
>
>   https://lore.kernel.org/xen-devel/315740e1-3591-0e11-923a-718e06c36445@arm.com/
>
>
> you said I should hook in enter_hypervisor_head(),
> leave_hypervisor_tail(). Those functions are gone now and looking at
> how the code changed, this is where I figured I should put the calls
> (see the second patch). But feel free to educate me otherwise.
>
> For x86 people that are listening... Do we have, in our beloved arch,
> equally handy places (i.e., right before leaving Xen for a guest and
> right after entering Xen from one), preferably in a C file, and for
> all guests... like it seems to be the case on ARM?
>
> Regards




From xen-devel-bounces@lists.xenproject.org Mon Feb 01 07:24:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 07:24:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79578.144851 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6TZZ-0000lo-En; Mon, 01 Feb 2021 07:24:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79578.144851; Mon, 01 Feb 2021 07:24:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6TZZ-0000lh-Br; Mon, 01 Feb 2021 07:24:33 +0000
Received: by outflank-mailman (input) for mailman id 79578;
 Mon, 01 Feb 2021 07:24:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6TZY-0000lZ-JC; Mon, 01 Feb 2021 07:24:32 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6TZY-0004FN-BW; Mon, 01 Feb 2021 07:24:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6TZX-0008DV-V0; Mon, 01 Feb 2021 07:24:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l6TZX-0006fH-UX; Mon, 01 Feb 2021 07:24:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ZiLyHwM/8KjRKIIIw6bGFw1hofwEEiMZKFRFGVaXiU8=; b=gGK41QnIJTWNChgQe/INdF5MK1
	7kUmi5s7lCM5P5vvZMZ2t+Swo6EkLkDxnngS8KH3uLaOq/hmER2NXSuDe2JTTcOZixWJ1pjFHj0uP
	0QTuW71joJteiAePh/PVW8qZkOhi8fIRlP0+N65aYzhbbeIR+Y1ARnSn95kl3dINRaPk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158868-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 158868: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:guest-start:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:guest-start:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:guest-start:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-armhf-armhf-libvirt:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=1048ba83fb1c00cd24172e23e8263972f6b5d9ac
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 01 Feb 2021 07:24:31 +0000

flight 158868 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158868/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-armhf-armhf-xl-arndale  14 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1  14 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck 14 guest-start            fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl          14 guest-start              fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     14 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                1048ba83fb1c00cd24172e23e8263972f6b5d9ac
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  184 days
Failing since        152366  2020-08-01 20:49:34 Z  183 days  328 attempts
Testing same since   158868  2021-01-31 22:10:44 Z    0 days    1 attempts

------------------------------------------------------------
4508 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1021215 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 07:37:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 07:37:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79588.144873 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6TmD-0001rc-Om; Mon, 01 Feb 2021 07:37:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79588.144873; Mon, 01 Feb 2021 07:37:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6TmD-0001rV-LX; Mon, 01 Feb 2021 07:37:37 +0000
Received: by outflank-mailman (input) for mailman id 79588;
 Mon, 01 Feb 2021 07:37:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4N3t=HD=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l6TmC-0001rQ-NH
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 07:37:36 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 77de3bb1-3abc-4ba6-a741-6e9017ed9156;
 Mon, 01 Feb 2021 07:37:35 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D2A7BAD18;
 Mon,  1 Feb 2021 07:37:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 77de3bb1-3abc-4ba6-a741-6e9017ed9156
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612165054; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=sTh2n4NnU4Dc05aqukiyC6j0Tu3DgFwJVfulzJ12f0M=;
	b=XZBp5nBnRwyJK85FMN2KvGV2EragKRqas+ZBENpYp1OB7108QIcJ68DLzzs+DbzXZQoH2J
	h++ehRdwqJiEfFTItcuR6RIFG0wmnIk0K7smuPIEtHxM2i14Mjw2f5XZARVxI8TpKu4Svf
	Qls7xv+X2ZrDB5iIAyiCgOCi1IY9MIw=
Subject: Re: [PATCH RFC 2/2] x86/time: don't move TSC backwards in
 time_calibration_tsc_rendezvous()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Claudemir Todo Bom <claudemir@todobom.com>
References: <35443b5a-1410-7099-a937-e9f537bbe989@suse.com>
 <d0f1f249-293c-5a7f-4b6c-1caeb275e7b9@suse.com>
Message-ID: <ee7e4c77-24e3-9ff8-6f7c-e99860328099@suse.com>
Date: Mon, 1 Feb 2021 08:37:31 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <d0f1f249-293c-5a7f-4b6c-1caeb275e7b9@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 29.01.2021 17:20, Jan Beulich wrote:
> @@ -1696,6 +1696,21 @@ static void time_calibration_tsc_rendezv
>                  r->master_stime = read_platform_stime(NULL);
>                  r->master_tsc_stamp = rdtsc_ordered();
>              }
> +            else if ( r->master_tsc_stamp < r->max_tsc_stamp )
> +            {
> +                /*
> +                 * We want to avoid moving the TSC backwards for any CPU.
> +                 * Use the largest value observed anywhere on the first
> +                 * iteration and bump up our previously recorded system
> +                 * time accordingly.
> +                 */
> +                uint64_t delta = r->max_tsc_stamp - r->master_tsc_stamp;
> +
> +                r->master_stime += scale_delta(delta,
> +                                               &this_cpu(cpu_time).tsc_scale);
> +                r->master_tsc_stamp = r->max_tsc_stamp;
> +            }

I went too far here - adjusting ->master_stime like this is
a mistake. Especially in extreme cases like Claudemir's,
this can lead to the read_platform_stime() visible in the
context above reading a value behind the previously recorded
one, leading to NOW() moving backwards (temporarily).

Instead of this I think I will want to move the call to
read_platform_stime() to the last iteration, such that the
gap between the point in time when it was taken and the
point in time the TSCs start counting from their new values
gets minimized. In fact I intend that to also do away with
the unnecessary reading back of the TSC in
time_calibration_rendezvous_tail() - we already know the
closest TSC value we can get hold of (without calculations),
which is the one we wrote a few cycles back.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 07:51:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 07:51:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79591.144885 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6Tzm-0003mY-W3; Mon, 01 Feb 2021 07:51:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79591.144885; Mon, 01 Feb 2021 07:51:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6Tzm-0003mR-Sw; Mon, 01 Feb 2021 07:51:38 +0000
Received: by outflank-mailman (input) for mailman id 79591;
 Mon, 01 Feb 2021 07:51:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4N3t=HD=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l6Tzl-0003mM-E9
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 07:51:37 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ffb23232-0385-411b-9d19-bffda1e715e3;
 Mon, 01 Feb 2021 07:51:36 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 711F0AD87;
 Mon,  1 Feb 2021 07:51:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ffb23232-0385-411b-9d19-bffda1e715e3
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612165895; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=I34PXV5jcXCd0lDNvXoTB1OluD7lbxJLWzB1GJ6LEhY=;
	b=VvKSIpiwHZReKxs3XbFMV/Aen3AWHgmS0+slW2ZJgYST44Qn+VVVhWrEG7dfNCLhy9srIK
	yMjhVJDdK8FB3UgJ7yokey0pt+YqPD4nfPz8YW4vmnvxdcL5rjGPKQuK4ZvlbOPcj69+Tg
	W+FyqiW4rL0mNq/VbAWIG/IyAmLnXIU=
Subject: Re: Problems with APIC on versions 4.9 and later (4.8 works)
To: Claudemir Todo Bom <claudemir@todobom.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <CANyqHYfNBHnUiBiXHdt+R3mZ72oYQBnQcaWuKw5gY0uDb_ZqKw@mail.gmail.com>
 <e1d69914-c6bc-40b9-a9f4-33be4bd022b6@suse.com>
 <CANyqHYcifnCgd5C5vbYoi4CTtoMX5+jzGqHfs6JZ+e=d2Y_dmg@mail.gmail.com>
 <ff799cd4-ba42-e120-107c-5011dc803b5a@suse.com>
 <609a82d8-af12-4764-c4e0-f5ee0e11c130@suse.com>
 <CANyqHYehUWeNfVXqVJX6nrBS_CcKL1DQjyNVa1cUbvbx+zD83w@mail.gmail.com>
 <9d04edfe-0059-6fbf-c1da-2087f6190e64@suse.com>
 <CANyqHYfOC6JY978SRPAQ8Ug3GevFD=jbT6bVVET4+QOv8mv7qA@mail.gmail.com>
 <a0a7bbd0-c4c3-cfb8-5af0-a5a4aff14b76@suse.com>
 <CANyqHYeDR_NUKzPtbfLiUzxAUzerKepbU4B-_6=U-7Y6uy8gpQ@mail.gmail.com>
 <8837c3fb-1e0c-5941-258c-e76551a9e02b@suse.com>
 <8cf69fb3-5b8c-60ea-bd1c-39a0cbd5cb5c@suse.com>
 <CANyqHYeCQc2bt836uyrtm9Eo2T1uPP-+ups-ygfACu6zK36BQg@mail.gmail.com>
 <bd150f4d-4f7e-082e-6b10-03bf1eca7b80@suse.com>
 <CANyqHYeHf8f6G+U2z9A0JC049HPYvWQ+WXZYLCQyWyx5Jvq6BA@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <0dbd6c58-a524-e013-946b-181ed3b40fb4@suse.com>
Date: Mon, 1 Feb 2021 08:51:37 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <CANyqHYeHf8f6G+U2z9A0JC049HPYvWQ+WXZYLCQyWyx5Jvq6BA@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 29.01.2021 20:31, Claudemir Todo Bom wrote:
> On Fri, 29 Jan 2021 at 13:24, Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 28.01.2021 14:08, Claudemir Todo Bom wrote:
>>> On Thu, 28 Jan 2021 at 06:49, Jan Beulich <jbeulich@suse.com> wrote:
>>>>
>>>> On 28.01.2021 10:47, Jan Beulich wrote:
>>>>> On 26.01.2021 14:03, Claudemir Todo Bom wrote:
>>>>>> If this information is good for more tests, please send the patch and
>>>>>> I will test it!
>>>>>
>>>>> Here you go. For simplifying analysis it may be helpful if you
>>>>> could limit the number of CPUs in use, e.g. by "maxcpus=4" or
>>>>> at least "smt=0". Provided the problem still reproduces with
>>>>> such options, of course.
>>>>
>>>> Speaking of command line options - it doesn't look like you have
>>>> told us what else you have on the Xen command line, and without
>>>> a serial log this isn't visible (e.g. in your video).
>>>
>>> All tests are done with xen command line:
>>>
>>> dom0_mem=1024M,max:2048M dom0_max_vcpus=4 dom0_vcpus_pin=true
>>> smt=false vga=text-80x50,keep
>>>
>>> and kernel command line:
>>>
>>> loglevel=0 earlyprintk=xen nomodeset
>>>
>>> this way I can get all xen messages on console.
>>>
>>> Attached are the frames I captured from a video, I manually selected
>>> them starting from the first readable frame.
>>
>> I've just sent a pair of patches, with you Cc-ed on the 2nd one.
>> Please give that one a try, with or without the updated debugging
>> patch below. In case of problems I'd of course want to see the
>> output from the debugging patch as well. I think it's up to you
>> whether you also use the first patch from that series - afaict it
>> shouldn't directly affect your case, but I may be wrong.
> 
> I've applied both patches; the system didn't boot. I used the following parameters:
> 
> xen: dom0_mem=1024M,max:2048M dom0_max_vcpus=4 dom0_vcpus_pin=true smt=true
> kernel: loglevel=3
> 
> The screen cleared right after the initial xen messages and froze
> there for a few minutes until I restarted the system.
> 
> I've added "vga=text-80x25,keep" to the xen command line and
> "nomodeset" to the kernel command line, hoping to get some more info
> and surprisingly this was sufficient to make system boot!

Odd, but as per my reply to the patch submission itself a
few minutes ago, over the weekend I realized a flaw. I do
think this explains the anomalies seen in the log between
CPU0 and all other CPUs; the problem merely isn't as
severe as it was before, it seems. I also realized I
ought to be able to mimic your system's behavior; if so,
I ought to be able to send out an updated series that has
actually had some testing for this specific case. Later
today, hopefully.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 08:06:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 08:06:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79595.144897 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6UEE-0005RV-J1; Mon, 01 Feb 2021 08:06:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79595.144897; Mon, 01 Feb 2021 08:06:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6UEE-0005RO-G0; Mon, 01 Feb 2021 08:06:34 +0000
Received: by outflank-mailman (input) for mailman id 79595;
 Mon, 01 Feb 2021 08:06:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Dub/=HD=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1l6UEC-0005RJ-Mz
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 08:06:32 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f268e6fa-bac4-4ef2-8b3d-d91bbbca46ac;
 Mon, 01 Feb 2021 08:06:31 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f268e6fa-bac4-4ef2-8b3d-d91bbbca46ac
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612166791;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=KhdcpZ6YsT3jClC7sF00+pqqlfLoNCz2hPSAtqdX3Ks=;
  b=CLqNOQ3KqJ0g6ZZ/now18tAckD5wEhnKi+ONxlAxWYASuHHp1fJZwjii
   VYjgWa2UtnP/mCDAgOxSsjvTDY4zZ5fRTnPM+Dk70AAMWIzDAMRd9XREq
   7/H1zNbrHT3IuVhL/EHl3J+GNVUDT8aRzcVLLui6NfDIh7BqXemzcY56P
   4=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: v798ru2Si4AMC9Me2Z4zh+chcNpz/rjcaVN/dMgxcjX7N5IM96Pz7zQyizjrztr69PmVJs/Q/a
 YrkYwPsy6yixEQqPyzJ/8k70v0wlN5lHPWhM7IGcZBOr3ohf7tCcCDCNctHz/HoWn7HtZcM+4V
 V37LTxSYhg5BzgLPjcDDdiv3jvvdXHGoBVjUPaJlfwRSwL92VYzYEoWxdXh0E9Aa9aZNif2Hk4
 7jJSqrnaj4AKO7enrNp4Tt0MBtwLWl2Q7qp0XovLWtfo12Zc6gKpQ5NTPR5ItJr649NIoJyMQn
 qWA=
X-SBRS: 5.2
X-MesageID: 36251608
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,391,1602561600"; 
   d="scan'208";a="36251608"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fhHi7/7PtDTNYkof8eCQZuw6jMJEclgIOhxXr50FIXg81NfrX5nGCj5nMoQKR+q+0PCJfHPqrHAG/NGJ8kP7nXFBEgb9C0jioav1wBmFAt4bR9eAUEYNCNPrr6ZUwe+N/RyPfOQKY9n2mLkEjL/npSBryYQ3D8txdXY0FVij8VdRid/MY5gGcvNbnGnUXwb/DCAPr3jHwd5HOWLJEd8O5c8ttvvhHugX41atZJ6DvtXynzhLFzhTNgThpS2SaDLB4gMee1D8aCjok9PxmMhY74n9nXCwpIjRvVS4+Mc7CalIgItYMC3rdTmpSTVUG4tf07lCEJEF8F03hztj3WeSsw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=76t+WtlYL8DogoPuNoWMKZ2EhhsETnF01kV83mi7V80=;
 b=PGz76xqVJd5PfddOQR1S2TC2+fgp63/hBZGY6H70KjZDQ5IsZi3eEFtVu/cL/ANBEoz2aW0wqrLyXMQ+M5oC0OB3rdi+ggdpPNSfr4wFUKpu7IQJ9w3EbIUjgZ6C/Lcxp/pH6VOWaQkSwwI50/55snact7gaNaGbvmPC5DqMLeQbfsKPNBgeuSyXmYf4V6fjIH7noJhhxg3RbuJcL7qx/uwU0ZgykwmkYUml8cjHhF4bEKnQfv8kH9HwocEpN5Ijhu3QvQzFYQocz0gVLdlwhiSRvGOD2qonPUYEv8CEaUxiweYF1GfHJED/gz9KU3/l3B9wd2IHCqrObblyRJNZmQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=76t+WtlYL8DogoPuNoWMKZ2EhhsETnF01kV83mi7V80=;
 b=aEZiu2/z9I7omedKKdbHBLTZHpksyzcJXN+tdylPdYyTXyQewxdT5EuHOGjjkC6j9PF2H6I1VlIUNLSkY49gBEHL48oL55gQIOzypKVHQ9I46peskYHepYI7oZ3124LCYs09HcOIVS7+3S427Fn+f4Q/5udtwerNnwMlAMTKFnc=
Date: Mon, 1 Feb 2021 09:06:13 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Manuel Bouyer <bouyer@antioche.eu.org>
CC: <xen-devel@lists.xenproject.org>, Ian Jackson <iwj@xenproject.org>, Wei
 Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v2] libs/light: pass some infos to qemu
Message-ID: <YBe2RSZeJBeMybdt@Air-de-Roger>
References: <20210126224800.1246-1-bouyer@netbsd.org>
 <20210126224800.1246-12-bouyer@netbsd.org> <YBKbEhavZlpD75fU@Air-de-Roger>
 <20210130115013.GA2101@antioche.eu.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20210130115013.GA2101@antioche.eu.org>
X-ClientProxiedBy: PR3P189CA0101.EURP189.PROD.OUTLOOK.COM
 (2603:10a6:102:b5::16) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: f3254604-b9fb-4a14-d9f3-08d8c688446d
X-MS-TrafficTypeDiagnostic: DM6PR03MB5081:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB508121DAB726A68AFE1C3F038FB69@DM6PR03MB5081.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 3pnaDrg7IMLceTph8zylUSLWHWCOUNyHRJLIdVyXdqz0a6HWah462MCPPjZ1pAGvFCiSnWmGbORBquEzioVbTIo1FMKUHwK/oKYPY1yu2ZoapX//P6vh6jnkeU44kTFgwvFGAr6ZWHjb5yq7gBL9ku9pjEm9AGlHTDAbauGJRGHb9Uy5KyzxPNSd+1FzXcSNb997Lo+0HRcy0+jFCxNeUktnHjjm6BhL3J7x+pmrWzOg2fG8MFIPakrRDlInu9cIqoSx6C1xP/EQCNh+qQyZIeYV46yslARi9l/kERTlRupRdqru+99um6dAdQbviBveFm7qJT6Zi2X4xR5JccuyB7eN87wtkPpUVaNgxGFtT+6mQIk2xvZYy4DwPHTYqw8+xVwJ8zGTwe/cRhZojiAR5Ffqr+oCHOfrTlULauywDod+pF7nWCUWCQ5+cdNEl8MtiRFw0xkz9XyRYeq34Ikp7mVi3GrtCAB0BK3APj+hIQEiFG3QH/wArHCr9GGKG0bP0QqD2LP3bdDAcJIW4w4kBw==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(376002)(136003)(39850400004)(396003)(366004)(346002)(316002)(33716001)(26005)(8676002)(8936002)(4326008)(6486002)(956004)(66556008)(66476007)(54906003)(86362001)(66946007)(85182001)(107886003)(6916009)(2906002)(6496006)(5660300002)(16526019)(186003)(478600001)(6666004)(9686003);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?aTRzY2VDS2JOQkJnanArb2JWY3QvOXRIejI3cFBuRGJyTGszOXR2U1E4Mk9s?=
 =?utf-8?B?Q24xTDFjUng4Z1JRRDN3ZnhpL2V6Tk4yWjBab2kxbGJnZmF0eDQ5bWVRdk8v?=
 =?utf-8?B?VlY1Z2NMQzBLend3V0szTHFESDZYeUxZZmV4ZEJYdTF4WkgzbHN5UWs3ajZ4?=
 =?utf-8?B?ODFydUhMOC9KaTZvN3l6RWJ1WEp0cnIvNnluNDBnOGd3K2gveWVnZEMzc0pH?=
 =?utf-8?B?L0srOXIzcGFJL01YVHpDVGhVTlNCQXhiWUFRZEpTZ3BLUEdCckJvSFRxYUds?=
 =?utf-8?B?cHd0Nk5zN2FXN1NGU0VTVUpGc0JmQXc2RXRwUUx1NDdReWpJK3pIU0dONEk2?=
 =?utf-8?B?YjhlOE5UUmRmV1RwMmo4TG5RbE4yMGtLb0k0QkZrb3VUQmZTeCtiVC9tS3RR?=
 =?utf-8?B?KzZ4N09ZR0xramxLWlhJT1dQWkI5dVo0QWgxUTRBNjhEdUpTOENERC8yUlE3?=
 =?utf-8?B?dEpVK3N1dVN5NmQ3d0RmdUQxRCsyU3psQWd6Rll5VVpzVW5mQk5keHdwU0xq?=
 =?utf-8?B?ZVpvNUFRVzZIZTc1aWl5U1NHSmVlNDlwdzZCcCtrNnl1emZSNU45Q08xa0pU?=
 =?utf-8?B?eTJsMUx2S0pVUS94Ym52Z3FWQTgzcnhUL05vaGZETEY5cjlCaHczdS83K2V3?=
 =?utf-8?B?WXcreS80Mms1TjlIOVhKWU9ETXh1SnhZQnVBK0pWamNrWnY5UHBxN1luRDZl?=
 =?utf-8?B?VGxNamJjWmxUbU9PMG83MHZiUE9mWWpwSHVqVmhGZXQxZDhrdm4xUzNiNHlJ?=
 =?utf-8?B?cThheG84Tk0vUjUraHI4SGphaVVJY0ZzQlNTVjhucVQ0M0Y1bEgwMmROTDZC?=
 =?utf-8?B?V1JuZjBDcUJuNEp6MHpXb0tRWCs2SzF4NWR4a1JoNHU3dWJmU2l0ZVRxWDVF?=
 =?utf-8?B?bE9ITVhoZlBVUjFqRXJhcDdrUi9ualQyQXU2MG5oU2tvalNBK0xEc3luR0do?=
 =?utf-8?B?b3pZeFNKbFZucE5OSEswOXh6VTBqMmtiUUpQVlE5ZFd2SE9la2Zrby8xMFQy?=
 =?utf-8?B?QmJEM0FqOGlMQjk1QUtaenpBN3I4N2NoZ2xmUEd2YW05SVE5T0lSY3JqWURy?=
 =?utf-8?B?Y3lJSzR5VHlQclNwRFpQS3JpOTIydkNEWmg2UzJMSDR5U2Z6MFhPZ2M2amVK?=
 =?utf-8?B?SVArU0ROUFBOYThpYmxjcGxOQ0kzeUxYZW1QQlJPaDgwT0FtcFhYNmJxWUJr?=
 =?utf-8?B?eFEvNG9SUklmcG9FZXFSZmk1aHZrUjAwbTBzUFM4QU45VnJvK1Z6TWZmZ1dF?=
 =?utf-8?B?NFZVS1RmODNqeUU2VG1EUzgvc3AxY056dDN6aVVvTUgyVDB1ODdIcGFGcU5T?=
 =?utf-8?B?QnVDcWRQd0NjQ3VIRjVFZWhJMk8rN2FQNkpUTURuQW5hYTRIMEJEbUlRL1B3?=
 =?utf-8?B?MGlDY3ZINTZuV3BGNDllaDZYY2tzbHJXbGdFZTNLZEdVM1BSNUFTbHV1ZFdy?=
 =?utf-8?B?WDB5K2xVNlZrd1ZVb0w4T3p1dHZYRFIzY2Y1Y2hBPT0=?=
X-MS-Exchange-CrossTenant-Network-Message-Id: f3254604-b9fb-4a14-d9f3-08d8c688446d
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 Feb 2021 08:06:24.1126
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 4X0NR4+4sEgkrTiNrlR5ARorjzPupNMkt4kwazGW543dq6uGTg8uu2x96DXJcglayU7I3Dr8nY1lRhE0mSt5zg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB5081
X-OriginatorOrg: citrix.com

On Sat, Jan 30, 2021 at 12:50:13PM +0100, Manuel Bouyer wrote:
> On Thu, Jan 28, 2021 at 12:08:02PM +0100, Roger Pau Monné wrote:
> > [...]
> > Also, the qemu-ifup script doesn't seem to be part of the NetBSD
> > scripts that are upstream, is this something carried by the NetBSD
> > package?
> 
> Actually, the script is part of qemu-xen-traditional:
> tools/qemu-xen-traditional/i386-dm/qemu-ifup-NetBSD
> 
> and it's installed as part of 'make install'. The same script can be used
> for both qemu-xen-traditional and qemu-xen as long as we support only
> bridged mode by default.
> 
> qemu-xen-traditional did call the script with the bridge name.
> This patch makes qemu-xen call the script with the same parameters,
> and adds the XEN_DOMAIN_ID environment variable.
> 
> Is it OK to keep the script from qemu-xen-traditional (and installed as
> part of qemu-xen-traditional) for now?

I think you want to move the script into hotplug/NetBSD/, because it
should be possible to install a system without qemu-xen-traditional
(--disable-qemu-traditional) and with only qemu-upstream, and the
script will still be needed in that case.
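For context, the contract discussed in this thread (first argument is the tap interface, second is the bridge, XEN_DOMAIN_ID exported by the toolstack) can be sketched as a minimal bridged-mode qemu-ifup. This is a hedged illustration, not the script shipped in the tree; the NetBSD ifconfig/brconfig calls are commented out so the sketch can be dry-run without root:

```shell
#!/bin/sh
# Hypothetical sketch of a bridged-mode qemu-ifup for NetBSD.
# $1 = tap interface created by qemu, $2 = bridge name,
# XEN_DOMAIN_ID = exported by the toolstack before invocation.
qemu_ifup() {
    tap=$1
    bridge=$2
    echo "qemu-ifup: ${tap} -> ${bridge} (domain ${XEN_DOMAIN_ID:-?})"
    # On a real NetBSD host (root required), the work would be roughly:
    # ifconfig "${tap}" up
    # brconfig "${bridge}" add "${tap}"
}

XEN_DOMAIN_ID=1; export XEN_DOMAIN_ID
qemu_ifup tap0 bridge0
```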

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 08:15:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 08:15:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79599.144909 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6UMe-0006Tr-GS; Mon, 01 Feb 2021 08:15:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79599.144909; Mon, 01 Feb 2021 08:15:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6UMe-0006Tk-Ck; Mon, 01 Feb 2021 08:15:16 +0000
Received: by outflank-mailman (input) for mailman id 79599;
 Mon, 01 Feb 2021 08:15:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Dub/=HD=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1l6UMc-0006Tf-JP
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 08:15:14 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6abc3799-8422-496c-939d-e1111ec35f3f;
 Mon, 01 Feb 2021 08:15:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6abc3799-8422-496c-939d-e1111ec35f3f
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612167312;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=sfJygqxGL57ldSNgJ7sdvK1cZ+s42CO9qcTX01xar84=;
  b=Q7Kr8d01Ko1mOTGj7laUcmNjXmkDoz8Xw8+C4Ik+WA6LbtW2aWcIjUKo
   92ffu+JCA2I23KabztAIg+HtBzF8UwxqqJr714YJxDoDPytX+AXi97hdu
   /aHCfQmkKYquWDlbGSv0V88T5Qg0tanr9s2C7w3gj5wJiZhHFlCdfix5B
   U=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: qLTbDWCk7U/Bb+FqsqmbU1k5TRIkuYF+2du4fCPfJXkzuS74kEzgz+CHSqcu5CUOkjSVgjbxhr
 ghn3oZ0w2tErG0EXap7nQfkvpNaPxsDsG2x7IzKai9qWZNm0ouhhyMNlwvTRT4FbqXx6JWlx6T
 LWwmf9ncxn++TBtmt9JBkcTtJE3cCbrAZE+Y3wZDoiVFoYBKAhspKbtg9W96Ma4kf5JAAqKaFV
 /WR9e3kCw2jnhj/TQ/AQmeF846peXU2aff7BvS4NVgSBq7JIzYQvLMISfgtvaMU1tu4YTZm5Le
 Tkg=
X-SBRS: 5.2
X-MesageID: 36295524
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,391,1602561600"; 
   d="scan'208";a="36295524"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cf+rI8hnZPuVph0CR59JthQ8XpPqmy6OXR/1pYZnutOlXX+oaCHPyAkp6mhiSy7DwSJu8q5mYegUDeG3ZHQZ5Sp19pqiJJ19t42ljVB2DngEo9umqaL6rfreuyO6bbn8U6SYNpqvCt7cYSYukE7MNNpg26wv+s/nIFPt3eEnRHcI1quMfrqNqjRQGrwWdfy8fXZLDwjUtQz0GzgvVrWzyngXXOdJL6ULl3z2Gi6zslFPovBy/UMkzMlwC5oZJq4zKLMHVfZ2cuCoL7/GmHZU5lzRG3E9CA0CSr1rWxb7fD2VCk7OQsFnYNGYTPWJTSszXC1IVXhY99t9b+doyNbzpA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=GUXDxMsGy43OVp3KsnzbywguZBgojdHyN0MWev1RQQ8=;
 b=d+nKZ60+lH3Ub2c9+qI3ihLhVLc5nLUE01/VrDQBCqRZ47EkcT6yAP4tjr9v1o+nO/zjAjlKhyiwL2kD15Giepp5v0bih3OGY2hBS9nutONhMqQF17U0hFkd7j1ZGuy9kPVaOX2gk9dzqf/A77s+kb1B3lEz7LGVHJ/doQoDc8n2fIZxOnOhTcMloev6tlIDW4JXCja+o9b9QobSbmivS8XS2Ay20UgrJFMWfllJefEXSj1fHgwLgWOy0JIkmnfjgSpq0capYZYiWoB/OsKqvhafKfn2kUmdXV/JiDdO2Ie8mzaT0cYtRqM7B5aMD7xXZ8b6XKNUUMUWzeHg0hvpaw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=GUXDxMsGy43OVp3KsnzbywguZBgojdHyN0MWev1RQQ8=;
 b=K5XobmLT9B7BPmR4lqrTFGc3M9gLovhIpkp7gniRXxu4nY1yHb0zaO+niPUUfpbO26VTvKLBrellkAwOjXAex90xqvvg6zsbZ0DIcMOcvPYFNvM3QiR+uY7L7w/mGh3DkH1hpyGpWPkEhfYAl8OvdDld/MDU6sVnhwvRITPjN14=
Date: Mon, 1 Feb 2021 09:15:04 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Elliott Mitchell <ehem+xen@m5p.com>
CC: George Dunlap <George.Dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>, "open list:X86" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>
Subject: Re: [PATCH] x86/pod: Do not fragment PoD memory allocations
Message-ID: <YBe4iNMbmaeUBLcp@Air-de-Roger>
References: <c0a70f39-d529-6ee4-511d-e82730c14879@suse.com>
 <YBBWj7NO+1HVKEgX@mattapan.m5p.com>
 <f6a75725-edc2-ee2d-0565-da1efae05190@suse.com>
 <YBHJS3NEX5+iEqyd@mattapan.m5p.com>
 <67ef8210-a65f-9d6a-bea1-46ce06d47fb7@citrix.com>
 <YBHo/gscAfcAZqst@mattapan.m5p.com>
 <44450edc-9a64-8a6e-e8d3-3a3f726a96bc@suse.com>
 <YBMB1VUhYd3RUuDO@mattapan.m5p.com>
 <DC18947E-BC67-4BF8-A889-04B812DACACC@citrix.com>
 <YBbzXQt2GBAvpvgQ@mattapan.m5p.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <YBbzXQt2GBAvpvgQ@mattapan.m5p.com>
X-ClientProxiedBy: PR0P264CA0226.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:100:1e::22) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 3be08554-cfd8-4b56-7911-08d8c6897cfb
X-MS-TrafficTypeDiagnostic: DM6PR03MB5083:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB5083BF669BEA54FB7A0D7F2C8FB69@DM6PR03MB5083.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 3K0WJvphH9/TRKaiqS9d/DHdYH7vjxay6AHVRWYmS46kATgk2COLbMwlKHOQOfoLcyl/y2mBc1i3LzAS0odQXAzPXim8EERdYQFqkm7+TzLc8uR77I/dJqvC1okvqUq2jMLJMFcqgkGQO2rAKNvxf5fv7doEqx4RAzbCWtJHM44zVxs7L1YBMOYW/qFJmdEaUIbGgjiWfLGO5TS5QL8bSzcc0NbhoFkEXtOuBUJCQRTgqHk8OKleNIH6p7SY2YmtUMBaKOLKAgiBtjt+GNPStPKHSLMBoR1bTrQ3C0CX6eB/2g7cGqlAoScqOpMoaHA3vW1EyX8cnmfIux92aPdZBVf2Cy4XuZyaYJ7P0jZR+uaFiObBAgTHJjPn40BFnMw9iVQg965KYqoz8zgrwOOxfH0dtno4VW5340b/TlvtMmYIR3LYd6VroWgzGdoEQ2ZmxbAPuS1kEK7fYNWMOebnB/PGz5SsEvt/DbfHroFzrkadKcY33wHcBYP6YsS95LuM0qR/UJZPHWzvFvNM9PFWCQ==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(396003)(136003)(39850400004)(346002)(376002)(366004)(86362001)(4326008)(107886003)(186003)(16526019)(85182001)(6496006)(956004)(33716001)(8936002)(66946007)(54906003)(316002)(66476007)(66556008)(6666004)(5660300002)(8676002)(9686003)(478600001)(26005)(53546011)(2906002)(83380400001)(6486002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?NmRscXRlOTFLQVhmMzVnalo0OEdtNGFpbzJtVmdVdXE1SzN2djA0QTAvNXA3?=
 =?utf-8?B?bEVUUHFVTlNhYzVrWmhhVFk4dlVNUFFERHorL3VoeThRNDR2RXRDMDBmM3Z5?=
 =?utf-8?B?OFBvZ0FoTHJNT1pPUHlIcFdQbHY5L1hROXZKZWcvTkVqZmdTUW5qaWhVd1J5?=
 =?utf-8?B?ZW9qSkhETGJ1WTMzeDBqTE1KclJ2S25Oelo0eXFEVTA3OSs3bmwwNkdLVUMz?=
 =?utf-8?B?ZWI4b1oveFh6TzdtTnVZSDRlV3dyK0o4L3RCaU9ORHJwVkRUeUJ2d29FUFNy?=
 =?utf-8?B?SGZ5UU5YbDBhQ2JyZlhIOGxSMVo0V2ZWTWJQdCtpUzFJZUJXL2RHZHNDRVYy?=
 =?utf-8?B?VTNSNkFVYXdTcnFmc1hTdjBReDlqMXhKclRPV2FYNXpsVmE3R3YyK0hzMjcx?=
 =?utf-8?B?L0dIVllFeFNGOGVRbnhLbEFXdHZWQ0ZJZXRCVlVFbEpYeHhJc0FLTkdYbmpS?=
 =?utf-8?B?dnV3ZFM0Uy9nRWVQNXZoVXJJL1VBOXBtanVmKy9LK1dhaG1nNXpVOS9jRTJY?=
 =?utf-8?B?cXpOY0tQQkcrM09wLzBCV0dhaGxXaWhITG1IWTVNYVZaY3pmMjFGSzRXd1I3?=
 =?utf-8?B?M212VE9ZaDdMUTYwaEtONFdSNFcxM1RmKzNrc1Ircjl4L0tOL2hIRzZQdnFr?=
 =?utf-8?B?Q3FMY2RNWldRMko1MkNiUzRoT0NVSGs2OVJ6bWZCWXF4b2VMVElpYjJ4TDJm?=
 =?utf-8?B?Q1gwYjJCdUk5ZDBXVDRoenI2RkpDQkF5RllwVDlWcnJ4dXd5Rm5tL1FFdUZC?=
 =?utf-8?B?S0E3UitPcE1lOEtyTXp3b01vMXR6dzJiZnkzUkN3NWJOSmV1QTFsNnpkZll3?=
 =?utf-8?B?eWIveTlQVG5paUxNNVZkUnllVG1HNnJnV1ZLZlFiRWJZVFZleUt4Q1JUVitM?=
 =?utf-8?B?NXZyMUpLQjFWQzJtYjN3NzhsN0xzRzVEMmhkOWRTM0RCbTBPdEN1UFNlYUFr?=
 =?utf-8?B?b3NMcHUrTHMvZ09kMFBobGUzbnFWK0VkSjZFYVlxWWpoQnFZeDF0N1p0SWlI?=
 =?utf-8?B?YmhSZ21YbkVTc2RuZTFMOU9ta3ZlczFhenBXSkxXc1RDU2NvVzVPNlJuemh6?=
 =?utf-8?B?eW5ZSE1RTzNPZ05Bd3ptUnV3cmZKQXR0a2tNMDZGRjl1aFVzWkVadmhZenFT?=
 =?utf-8?B?TVFxL0c4VnJyRGNPR3ZDdUlVVUpodVM2Nkt6d1dJUUxVUmZFUk8raWhFQzA5?=
 =?utf-8?B?TmdKbEhub2N2ZVZrRTZlQVJUaUVGbFRRZDcrd2JwQVRwbW4yTy9JNFJlcXJr?=
 =?utf-8?B?QWlxUkZaZFdvOUxzSWR0aU9iSmNSM3NCUHp1WjdiRTZ2SG1TOEdEVzZ0ZkVa?=
 =?utf-8?B?SVNoazJZN2dndHplRDd5b0ZKckRudTZvOTh1OGJxWGo2aG9PWS9SR3RrdWR6?=
 =?utf-8?B?N1ZwR2cwQ3ZuR0dwaVl6dm1xVFFIcEtjQnJKc1huVVViZ04vV3dnTDZYSXIz?=
 =?utf-8?B?UU5qejBXNWlPUmFreEVueWZIL053ei9SZjU0aFZ3PT0=?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 3be08554-cfd8-4b56-7911-08d8c6897cfb
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 Feb 2021 08:15:08.5009
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: R0PY5S/uxc3gEGNmnSzCCi/6jPhvCdfon7cZzR9swAbXYIeZNZVEnIwTh5PixGaCnQTyraXr7ojo3qPfWeWz0Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB5083
X-OriginatorOrg: citrix.com

On Sun, Jan 31, 2021 at 10:13:49AM -0800, Elliott Mitchell wrote:
> On Thu, Jan 28, 2021 at 10:42:27PM +0000, George Dunlap wrote:
> > 
> > > On Jan 28, 2021, at 6:26 PM, Elliott Mitchell <ehem+xen@m5p.com> wrote:
> > > type = "hvm"
> > > memory = 1024
> > > maxmem = 1073741824
> > > 
> > > I suspect maxmem > free Xen memory may be sufficient.  The instances I
> > > can be certain of have been maxmem = total host memory *7.
> > 
> > Can you include your Xen version and dom0 command-line?
> 
> > This is on staging-4.14 from a month or two ago (i.e., what I happened to have on a random test  box), and `dom0_mem=1024M,max:1024M` in my command-line.  That rune will give dom0 only 1GiB of RAM, but also prevent it from auto-ballooning down to free up memory for the guest.
> > 
> 
> As this is a server, not a development target, Debian's build of 4.11 is
> in use.  Your domain 0 memory allocation is extremely generous compared
> to mine.  One thing which is on the command-line though is
> "watchdog=true".
> 
> I've got 3 candidates which presently concern me:
> 
> 1> There is a limited range of maxmem values where this occurs.  Perhaps
> 1TB is too high on your machine for the problem to reproduce.  As
> previously stated my sample configuration has maxmem being roughly 7
> times actual machine memory.
> 
> 2> Between issuing the `xl create` command and the machine rebooting a
> few moments of slow response have been observed.  Perhaps the memory
> allocator loop is hogging processor cores long enough for the watchdog to
> trigger?
> 
> 3> Perhaps one of the patches on Debian broke things?  This seems
> unlikely since nearly all of Debian's patches are either strictly for
> packaging or else picks from Xen's main branch, but this is certainly
> possible.
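Candidate 1 is easy to check before attempting a reproduction: compare the guest's maxmem against the hypervisor's free memory. A hedged shell sketch follows; the `xl info` output is stubbed here so it can be dry-run, and on a live host you would substitute `xl_output=$(xl info)`:

```shell
# Stubbed `xl info` output; on a real host: xl_output=$(xl info)
xl_output="total_memory           : 16384
free_memory            : 1024"

# Both values in MiB, matching xl's units.
free_mib=$(printf '%s\n' "$xl_output" | awk -F: '/^free_memory/ { gsub(/ /, "", $2); print $2 }')
maxmem_mib=8192   # maxmem from the guest config under test

if [ "$maxmem_mib" -gt "$free_mib" ]; then
    echo "maxmem ${maxmem_mib} MiB exceeds free Xen memory ${free_mib} MiB"
fi
```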

If you have a reliable way to reproduce this, and the error happens
on one of your server boxes, would it be possible for you to connect
to the serial console and capture the output of the crash?

Since this is a server, I would assume it has some kind of serial
console support, even if only Serial over LAN.

That way we could remove all the speculation about what has gone
wrong.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 08:22:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 08:22:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79603.144921 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6UTI-0007Ux-80; Mon, 01 Feb 2021 08:22:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79603.144921; Mon, 01 Feb 2021 08:22:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6UTI-0007Uq-53; Mon, 01 Feb 2021 08:22:08 +0000
Received: by outflank-mailman (input) for mailman id 79603;
 Mon, 01 Feb 2021 08:22:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Dub/=HD=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1l6UTH-0007Ul-KO
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 08:22:07 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e734a841-f2c7-4f8c-ad5c-b90579097e3e;
 Mon, 01 Feb 2021 08:22:06 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e734a841-f2c7-4f8c-ad5c-b90579097e3e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612167726;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=8maoQfuMHAQoyFK0g/YC9hiRf2n9XZ8hLxdhV0HI+1w=;
  b=KUDUUys5t/YsQ0l09dm5MiLwtvHWMOkNM3GacRphchwMzfjlSPKrKqu9
   0qFUnyDwpWV9mhhpMlDLOcNBrMnuwjuPy5DeCETvcu2iekPzv875W/Bvd
   KTXykD8phwq1HoUmevqXKyKeRrpKtRG6aX0bGhKGMW/Rqx9xr1JJslZA0
   E=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: 85RMr02dFZ0aJKRfviJzq3Tn9ZttArdUDlyl/baJgWAHJ6E+cJaMav1gZ3LZl1LWo+AWyjKD4s
 xHj9lgw4Tl/1C5Cr2UdgdBqi/EhbNkxsv1G4nNfj6coXiodrYQAmMRsgQPgOxDtRBTlikZEX4C
 K4afBcxFg3l04uAmmiWa7jYmJR6vlJLatXdqinTEiUdzu+XKigCoe5kIeZxVVKZIYN1tOeUQo4
 84FeOa+m7dVLPBDlZ94oNxDknEJXgI8+9BKiyhxDsHOoB6RQRYzDde/Zm/BT3swnV70sv/0nxn
 Jvg=
X-SBRS: 5.2
X-MesageID: 36295820
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,391,1602561600"; 
   d="scan'208";a="36295820"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=IGyYJyJ8kM68BglVySDey41oIvYFsJ2SPUErF7VwJ1LbK0teN5fxJDkUBuut+iyaCwqQyAsNYwH0kRZJiMfKEtugIkRj/hgG+HixynxRHLwIXI9UzHEYM9u5kt4vGy2BjXldupHHqMPN/CoOo2I9WaqBqmBQUT1lB57J8ujvvZ6W/OYACXTa4rv+AGTfIIc45fobDx/DuiI4cYLpTQJV0y6BJlmJsHl9aS2zmCY1tU6KpYX40H3gndVFBLekReeSLTu6LPByOBtRY+4A44ogakzNytLt67rY9bYdu346kL+TGuABp52dAV3wX7hz/nbD7mnchIW97dCdX9G/5AT5Ig==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=7s8Sdi+pp7MYe1YNRzd2LwR5pm5wnR6nkgEQQIRj1lI=;
 b=K5aEdlwi2zn1zHqzsKr1t/M945cnAGRomaue51aKFkY1qhgc2enz02/2iBQzCN4LApsyp/2A4X2uqmLU9JcoDkUG00kpLONP+ds5BeBlcVyA6Xyd4R7p1wdgnIfKTf0TNcsl9biSL0B9uj3AYeWldr1y4JtfeJ6i8JnRW58orbE/+n2RYs1EXfsZxTrF2SvUy0ZxjqQqFXxj9Xhk09jQG9Nmry0t3N+L279/dnghh2cIu6TQp0aSK8veSmwhQwPpDh8Ujk5NxBFbMoJNxQXcopHWkhlI1jfB0vJAtsILrL6v9PrqMqFSpYHcTjodv7qLeT6gHL8hevBBvwKp9TfS/w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=7s8Sdi+pp7MYe1YNRzd2LwR5pm5wnR6nkgEQQIRj1lI=;
 b=ud6b9ryNMTlgTuaCjucIHWdHIF9D0IZaca+FxFtXiPPwEUzG++WtSKpyKlbBS6WgiKp1G6cM35cEF/UnBVmhCllglCltgkNBr8QldgGBgB3lqG2fDXbQ+wmIT4SYlCQSYV9nhZAHcWfQO3oAC7utFYvcyFlWAHYvfMnqOkSqS+g=
Date: Mon, 1 Feb 2021 09:21:58 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Manuel Bouyer <bouyer@netbsd.org>
CC: <xen-devel@lists.xenproject.org>, Ian Jackson <iwj@xenproject.org>, Wei
 Liu <wl@xen.org>
Subject: Re: [PATCH v3 2/2] Document qemu-ifup on NetBSD
Message-ID: <YBe6JpR6jOLvYDz6@Air-de-Roger>
References: <20210130230300.11664-1-bouyer@netbsd.org>
 <20210130230300.11664-2-bouyer@netbsd.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20210130230300.11664-2-bouyer@netbsd.org>
X-ClientProxiedBy: PR0P264CA0201.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:100:1f::21) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: c787ab0c-71bf-455c-4ec6-08d8c68a7421
X-MS-TrafficTypeDiagnostic: DM6PR03MB5083:
X-Microsoft-Antispam-PRVS: <DM6PR03MB508384617AA0862F145AF06F8FB69@DM6PR03MB5083.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7691;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: XxAjIENoGaXXEmsNqp65oGLMI1v9480N//bD6QvzmsUCZP+r1U7PlFeuZFwAh3dwPNo16QCiIG55CwbklS5eYO8t4Cu6x3JgwKV6GalSmF/qlym7gISoJCMFKlCxva/pLYFln3TwarKL1h1KlwJzNNqB5+BmSh7m6hlJ648i8CWp1szpaNvc1dAAMCI+vEcrQD8kXNYZY2MqyNmgfuv/R+M3zxgRTKti7g6vcMlV20ZjiKeJliAuCjpe9qGSRG3uT6BHkUtVdxDtAy23J1Ova4LOePLmdHd14HHtPXdl2UO3HkEt7ME0w+jyb20nERLvEykqjp7QT2ccxUmlCMMEwaXnZxDbXJcJysfJFbWiMZ76upL7IyDpZN5WthtvnIlWwXWbP+kGrStFTW9+kNL9SrfMD/S/Pr+0VqGK5UdZcuTmY9irUEAtSjzJ7vssEnFTYlXS+46ODXzhCRgzNWeypw9UWA/bltIXiCNe3HEfHywlZ8rWavhPJxVCJRpJwRBOw6L1Hb+d9HsXd4sK4kWaXw==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(396003)(136003)(39850400004)(346002)(376002)(366004)(86362001)(4326008)(186003)(16526019)(85182001)(6496006)(956004)(33716001)(8936002)(66946007)(54906003)(316002)(66476007)(66556008)(6666004)(5660300002)(6916009)(8676002)(9686003)(478600001)(26005)(2906002)(6486002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?TDN2cUI5d0JkYys0WUhKUXprcVR6SmZMR2lUeEpqTGFqSm8xT3MwLzFQeGsx?=
 =?utf-8?B?dnEvSTZ2OGdGSTdHZnRSZExsL1pNV29NV1lQY05aWWtEOUtvc0d1cFRxNVk0?=
 =?utf-8?B?V1FMN0xXVU5icXVGYUE4VTc1NFNGcGh1RElVcW1MMzhjOFpSUkxFUWFJdmZE?=
 =?utf-8?B?eHlGWWRSKzNrTkVrS0FtdkhkVnRqU013cmo0d0hIallGSjU4VUFXeFRrWUE1?=
 =?utf-8?B?TytHNk4zRmlMOHBhZW1LWVU2MHI2ZmdwSGUveFZ6VEo2bDBGRDBnaS9ldExq?=
 =?utf-8?B?blljcmVJZGc5bGNINHVpdW5GZVVSdFBuVXhJbWg0TllpOU1YSzY3T3Bucmhp?=
 =?utf-8?B?V2JwWWcrT3RXcUVvWm1OUjZZTURqZ2RUOVdBZTBtS0hYUFF6TGh4L2U1Sm5J?=
 =?utf-8?B?R1ZJVmcrU0dIN1c5cldoSnFHMnBFVU1wcHZDY0lhOUxRN1JMTFQ2TzRkczRi?=
 =?utf-8?B?c0pJTXlNcm9NdVc2VE5na1NWd05YSjZYMFdNTkZmU1BPRXlJeFhRWGFlQlJF?=
 =?utf-8?B?WUpXdnhxam9IdXF3alE1R3ZCZkhWWWJsQUxmTWNvelpSWmN3Z29FTitGNFNt?=
 =?utf-8?B?QmY0NjJZU2FhbUMzWGtwN0pScFBscFNaajgxS0lNNlp3QUxBbWNXRnR6dGZz?=
 =?utf-8?B?YlAvWGsxM3hmMUJGbVBVUmN0Nlora2EwazY4bnc3TTc1RjgzeXo1TDRlTGI1?=
 =?utf-8?B?U096eEowWlYwanNzYUNMVzNVNE1aVkJrMnFIbWlKeVVTZkhRaTVkQzZLNjhs?=
 =?utf-8?B?RVF5UzU4VHlzODlFaDNpMVdjVmpLOTN6R25nTmQ1NG5HYnh2dzFEZ2xEQktv?=
 =?utf-8?B?RkM2NmJjNGM5TjNYMjJseEc0Q1pOVnh0dmlIQUNZdlg5L25zS1R4RGQ4Rlh1?=
 =?utf-8?B?VzNMQks2N3pFQ2NvWjRNNktERDBZNG0ySEtDWFdSb2xPbnZTTitsV2dzaHRx?=
 =?utf-8?B?Ny8xekZBQnJJQURvRnN3TnlaUGptNCtRWFNjS0lGc1AzMlZQRHBCMVlqd0E5?=
 =?utf-8?B?WGdOK1VjMlg0WFN6Y0txZ0N0SFV1ZndGWUkveWNIYXoycE52WGtRTmNhbXRi?=
 =?utf-8?B?dWh4eUVicStvR1Iyc3dYb0d2UkNmU0RwaEY0d0FEMm16UE1DSFhTZTZLWFpE?=
 =?utf-8?B?ZTVRWWNaQVErMTRwdnBOeU53cFY3MjRrSlk5RGFXM1UvU0hlb1BtbWZBSEhp?=
 =?utf-8?B?YWFFR2lBTmU0SFdjREdYUElpaGFoZ1lETkVmK2hDZkdIVDN6Q0s0bmlDV2tu?=
 =?utf-8?B?TWJDK2p5Szh6Y21JdkJvQlZ6a3BKSURyeXE5WEI0T3dOdWxNNXdWZ0oyWnMr?=
 =?utf-8?B?ckdNSmVVM3FZU0hvdG1JWXg2RCswN2gyRE1GZmlBT0tiSEZFaTNZUU40U2JV?=
 =?utf-8?B?QlM3ZXR1RjN1b2xKaGJqeGxNZk1tbzJlUGJXbFh4Z0Mwa3p5YkNCSkMyWFVj?=
 =?utf-8?B?RGVZVUU5VElaM0JWRHo4eWpIZzRTK2NRck4rSHdBPT0=?=
X-MS-Exchange-CrossTenant-Network-Message-Id: c787ab0c-71bf-455c-4ec6-08d8c68a7421
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 Feb 2021 08:22:03.1331
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 1PXHQk1HNU8KLQa9RK+8R4MpVKtSNquTqr5skQ7D/ZK6iZ0pJCn5GuMmEacDhTfLHfBu+J5h3KN9wEmKO7mSYQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB5083
X-OriginatorOrg: citrix.com

On Sun, Jan 31, 2021 at 12:03:00AM +0100, Manuel Bouyer wrote:
> Document that on NetBSD, the tap interface will be configured by the
> qemu-ifup script. Document the arguments and the XEN_DOMAIN_ID environment
> variable.

You are missing a Signed-off-by tag here ;).

> ---
>  docs/man/xl-network-configuration.5.pod | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/docs/man/xl-network-configuration.5.pod b/docs/man/xl-network-configuration.5.pod
> index af058d4d3c..f6eb6c31fc 100644
> --- a/docs/man/xl-network-configuration.5.pod
> +++ b/docs/man/xl-network-configuration.5.pod
> @@ -172,6 +172,10 @@ add it to the relevant bridge). Defaults to
>  C<XEN_SCRIPT_DIR/vif-bridge> but can be set to any script. Some example
>  scripts are installed in C<XEN_SCRIPT_DIR>.
>  
> +On NetBSD, HVM guests will always use
> +C<XEN_SCRIPT_DIR/qemu-ifup> to configure the tap interface. The first argument
> +is the tap interface, the second is the bridge name. The environment variable
> +C<XEN_DOMAIN_ID> contains the domU's ID.

LGTM, but I would make it even less technical and more user-focused:

Note that on NetBSD, HVM guests will ignore the script option for tap
(emulated) interfaces and always use C<XEN_SCRIPT_DIR/qemu-ifup> to
configure the interface in bridged mode.
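The calling convention being documented can be demonstrated with a stand-in script (the file name, interface, and bridge names below are illustrative, not taken from the patch):

```shell
# Create a stand-in for XEN_SCRIPT_DIR/qemu-ifup to show the contract:
# argv[1] = tap interface, argv[2] = bridge, XEN_DOMAIN_ID in the environment.
cat > qemu-ifup-demo <<'EOF'
#!/bin/sh
echo "tap=$1 bridge=$2 domid=$XEN_DOMAIN_ID"
EOF
chmod +x qemu-ifup-demo

# This mirrors how qemu-xen would invoke the real script:
XEN_DOMAIN_ID=7 ./qemu-ifup-demo tap0 bridge0
```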

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 08:35:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 08:35:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79608.144933 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6Ufh-0000Be-Gq; Mon, 01 Feb 2021 08:34:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79608.144933; Mon, 01 Feb 2021 08:34:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6Ufh-0000BX-Dr; Mon, 01 Feb 2021 08:34:57 +0000
Received: by outflank-mailman (input) for mailman id 79608;
 Mon, 01 Feb 2021 08:34:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CDK9=HD=redhat.com=pbonzini@srs-us1.protection.inumbo.net>)
 id 1l6Uff-0000BS-Qo
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 08:34:56 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 78dbbe63-da5c-4660-899e-f04534398690;
 Mon, 01 Feb 2021 08:34:54 +0000 (UTC)
Received: from mail-ed1-f71.google.com (mail-ed1-f71.google.com
 [209.85.208.71]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-409-deOd4GvMNliiokeHwSxqtQ-1; Mon, 01 Feb 2021 03:34:52 -0500
Received: by mail-ed1-f71.google.com with SMTP id x13so7592458edi.7
 for <xen-devel@lists.xenproject.org>; Mon, 01 Feb 2021 00:34:52 -0800 (PST)
Received: from ?IPv6:2001:b07:6468:f312:c8dd:75d4:99ab:290a?
 ([2001:b07:6468:f312:c8dd:75d4:99ab:290a])
 by smtp.gmail.com with ESMTPSA id s3sm7441126ejn.47.2021.02.01.00.34.49
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 01 Feb 2021 00:34:50 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 78dbbe63-da5c-4660-899e-f04534398690
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1612168494;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Gm47NOdsiA8w3zVE9sHPDHTLG06a6IG1aKDU8XTax/o=;
	b=Rt6pH9jEf+lxIveAbPeurtv00oSKx4RWB8gAilzVD+ViBfyhFQNE8u/Vp6qL2D8FP5kRgi
	rISzvS1uefL5eYM+dhlFe/5EyebTmp44uQQkA+6xov0vJ6DxouJgDd7YuJG4skIoS8w19l
	9+juDp2NQk3Ta2dYbokEyxG+yRC2UKs=
X-MC-Unique: deOd4GvMNliiokeHwSxqtQ-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=Gm47NOdsiA8w3zVE9sHPDHTLG06a6IG1aKDU8XTax/o=;
        b=B9KZgbVj+cp+gASzJKPAqoxNX7Rt37c8I/9Q9rBv6Ie9eVeBps36c09Bw+UoA9RIPa
         iduwZnpruo2RW6rXuDfsl9z39/L1KlQ1MAccVCrI3boktoY9bVELvrimicavqbYJiti/
         lGlONz1u8TgMcvEKWLsfzsrhUB7X3JQK2hkvdVe7DXFcWWDcKcBVmCkLs6ETIoqnbs7n
         eCGKeIPIeRFGSJXFNECD80E73PqgUfQKoi5qXtvuhEYfi6rKCYkoPbP/qm+NOZYwe3j7
         GJVy3Nf2Jxqptk9JD1BsWnh8RJx3zNIJ2L2h36S0H1GNkECbOy9A9JFXPTNNxLoN0k0e
         NWOQ==
X-Gm-Message-State: AOAM531PFaNnfXCstpLQt4g7eq1tthjTBJpfk3VhQ/vdHvOp9xJyf1uk
	L63+qFEfsXxiH9ojQy42HElrPbiwxYFBr5/4fzdeg87NoehxWJfoJPQ4+gg49tOVUAydVn/N/L9
	ovgrPLioFqyvJ9TIS1Bx1uwPoZDk=
X-Received: by 2002:a05:6402:35c2:: with SMTP id z2mr17435958edc.34.1612168491491;
        Mon, 01 Feb 2021 00:34:51 -0800 (PST)
X-Google-Smtp-Source: ABdhPJxyZ391Du2eae4AAvhEk4ICQIr9fen9gbN1LB/IKhx59RWxvtv3goadRgPEg3msphjMcEQPQg==
X-Received: by 2002:a05:6402:35c2:: with SMTP id z2mr17435937edc.34.1612168491260;
        Mon, 01 Feb 2021 00:34:51 -0800 (PST)
Subject: Re: [PATCH v2 4/4] hw/xen: Have Xen machines select 9pfs
To: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <f4bug@amsat.org>,
 =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>, qemu-devel@nongnu.org
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>, Paul Durrant
 <paul@xen.org>, Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 Anthony Perard <anthony.perard@citrix.com>,
 Eduardo Habkost <ehabkost@redhat.com>
References: <20210131141810.293186-1-f4bug@amsat.org>
 <20210131141810.293186-5-f4bug@amsat.org>
From: Paolo Bonzini <pbonzini@redhat.com>
Message-ID: <565bf0dd-a5de-352e-eec7-68b862ed09e4@redhat.com>
Date: Mon, 1 Feb 2021 09:34:49 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <20210131141810.293186-5-f4bug@amsat.org>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=pbonzini@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 31/01/21 15:18, Philippe Mathieu-Daudé wrote:
> 9pfs is not an accelerator feature but a machine one,
> move the selection on the machine Kconfig (in hw/).
> 
> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
> ---
>   accel/Kconfig       | 1 -
>   hw/i386/xen/Kconfig | 1 +
>   hw/xen/Kconfig      | 1 +
>   3 files changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/accel/Kconfig b/accel/Kconfig
> index 461104c7715..b9e9a2d35b0 100644
> --- a/accel/Kconfig
> +++ b/accel/Kconfig
> @@ -15,4 +15,3 @@ config KVM
>   
>   config XEN
>       bool
> -    select FSDEV_9P if VIRTFS
> diff --git a/hw/i386/xen/Kconfig b/hw/i386/xen/Kconfig
> index ad9d774b9ea..4affd842f28 100644
> --- a/hw/i386/xen/Kconfig
> +++ b/hw/i386/xen/Kconfig
> @@ -3,3 +3,4 @@ config XEN_FV
>       default y if XEN
>       depends on XEN
>       select I440FX
> +    select FSDEV_9P if VIRTFS
> diff --git a/hw/xen/Kconfig b/hw/xen/Kconfig
> index 0b8427d6bd1..825277969fa 100644
> --- a/hw/xen/Kconfig
> +++ b/hw/xen/Kconfig
> @@ -5,3 +5,4 @@ config XEN_PV
>       select PCI
>       select USB
>       select IDE_PIIX
> +    select FSDEV_9P if VIRTFS
> 

I think you can compile without FSDEV_9P selected?  If so, this should 
be "imply".

If on the other hand you cannot, and that is because of some other file 
brought in by CONFIG_XEN, this patch shouldn't be there.
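For readers unfamiliar with the distinction being drawn here, the two
directives can be sketched roughly like this (a hypothetical Kconfig
fragment, not the actual QEMU file):

```kconfig
# "select FOO" forces FOO on whenever this symbol is enabled -- the build
# then breaks if FOO's code cannot actually compile in this configuration.
# "imply FOO" only defaults FOO to enabled; the user (or another
# dependency) may still turn it off, so the build must also work without it.
config XEN_FV
    bool
    select I440FX               # hard requirement
    imply FSDEV_9P if VIRTFS    # optional feature, disabling it must be safe
```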

I can queue the series once you resolve this doubt.

Paolo



From xen-devel-bounces@lists.xenproject.org Mon Feb 01 08:55:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 08:55:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79620.144954 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6Uzi-0002Fh-9q; Mon, 01 Feb 2021 08:55:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79620.144954; Mon, 01 Feb 2021 08:55:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6Uzi-0002Fa-6v; Mon, 01 Feb 2021 08:55:38 +0000
Received: by outflank-mailman (input) for mailman id 79620;
 Mon, 01 Feb 2021 08:55:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4N3t=HD=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l6Uzg-0002FV-Ka
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 08:55:36 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b59ed5d1-3165-432d-b042-b29a35a061da;
 Mon, 01 Feb 2021 08:55:30 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D430CAC3A;
 Mon,  1 Feb 2021 08:55:29 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b59ed5d1-3165-432d-b042-b29a35a061da
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612169730; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=mgNiOk+4kVI/YVXK5HOe7VyQot2fW1PGf6UQgzqHRQQ=;
	b=pFNrLi3eSyS5BYdHmmrESUjG4tQOyE0C3WpaGvG3hcz0izVibbFFrAO0RHLTUA3OEi/ao5
	KGaGV9HozXkPt3PTVnsGhuAKu4aOvo2A3DQn2nDWIa7x1VFyIA7Mk6DAcYX/gL9qaQtWP4
	DwjdVORKxFUGKlT/6oSOh146+QAjJGI=
Subject: Re: [PATCH v7 10/10] x86/vm_event: Carry Processor Trace buffer
 offset in vm_event
To: Tamas K Lengyel <tamas@tklengyel.com>
Cc: Tamas K Lengyel <tamas.lengyel@intel.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Jun Nakajima <jun.nakajima@intel.com>,
 Kevin Tian <kevin.tian@intel.com>,
 =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 Xen-devel <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
References: <20210121212718.2441-1-andrew.cooper3@citrix.com>
 <20210121212718.2441-11-andrew.cooper3@citrix.com>
 <c00b60c5-ba4a-7473-cf26-60b46681279a@suse.com>
 <0a34175c-9bc1-9449-413b-01d743d201fc@citrix.com>
 <CABfawhk4eYA85pgSc6xKbHexBQpJKzAv-KvS_X6X9-eAAqXe4A@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <95f07ddb-7fe8-c6fb-dbc4-9743f82a1d89@suse.com>
Date: Mon, 1 Feb 2021 09:55:29 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <CABfawhk4eYA85pgSc6xKbHexBQpJKzAv-KvS_X6X9-eAAqXe4A@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.01.2021 00:40, Tamas K Lengyel wrote:
> On Fri, Jan 29, 2021 at 6:22 PM Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>>
>> On 26/01/2021 14:27, Jan Beulich wrote:
>>> On 21.01.2021 22:27, Andrew Cooper wrote:
>>>> --- a/xen/arch/x86/vm_event.c
>>>> +++ b/xen/arch/x86/vm_event.c
>>>> @@ -251,6 +251,9 @@ void vm_event_fill_regs(vm_event_request_t *req)
>>>>
>>>>      req->data.regs.x86.shadow_gs = ctxt.shadow_gs;
>>>>      req->data.regs.x86.dr6 = ctxt.dr6;
>>>> +
>>>> +    if ( hvm_vmtrace_output_position(curr, &req->data.regs.x86.pt_offset) != 1 )
>>>> +        req->data.regs.x86.pt_offset = ~0;
>>> Ah. (Regarding my earlier question about this returning -errno or
>>> boolean).
>>>
>>>> --- a/xen/include/public/vm_event.h
>>>> +++ b/xen/include/public/vm_event.h
>>>> @@ -223,6 +223,12 @@ struct vm_event_regs_x86 {
>>>>       */
>>>>      uint64_t npt_base;
>>>>
>>>> +    /*
>>>> +     * Current offset in the Processor Trace buffer. For Intel Processor Trace
>>>> +     * this is MSR_RTIT_OUTPUT_MASK. Set to ~0 if no Processor Trace is active.
>>>> +     */
>>>> +    uint64_t pt_offset;
>>> According to vmtrace_output_position() the value is only one half
>>> of what the named MSR contains. Perhaps "... this is from MSR_..."?
>>> Not sure whether, despite this, there still is a reason to have
>>> this 64-bit wide.
>>
>> This is a vestigial remnant which escaped the "use vmtrace uniformly" work.
>>
>> It should match the domctl_vmtrace_output_position() format, so each
>> interface gives the same content for the attempted-platform-neutral version.
> 
> From the vm_event ABI perspective it's simpler to have a 64-bit value
> here even if the max value it may possibly carry is never going to use
> the whole 64-bit width. I'd rather not play with shortening it just to
> add padding somewhere else.
> 
> As for what it's called, that's not that important from my perspective;
> vmtrace_pos or something like that, for example, is fine by me.

The important thing to me is that the comment not be misleading.
Whether that's arranged for by adjusting the comment or the
commented declaration is up to what you deem better; I take it
it'll be the comment.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 09:06:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 09:06:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79623.144967 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6VAR-0003Ki-B0; Mon, 01 Feb 2021 09:06:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79623.144967; Mon, 01 Feb 2021 09:06:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6VAR-0003Kb-7u; Mon, 01 Feb 2021 09:06:43 +0000
Received: by outflank-mailman (input) for mailman id 79623;
 Mon, 01 Feb 2021 09:06:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IWGu=HD=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l6VAP-0003KW-9O
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 09:06:41 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0064ddd0-f13f-4694-ac5c-a9df2de8c614;
 Mon, 01 Feb 2021 09:06:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0064ddd0-f13f-4694-ac5c-a9df2de8c614
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612170400;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=xkVJHZ3Z0cnda2Ifz1g5Y9SdJvPoShYXfdWDTCZ5Rp0=;
  b=E2ASW/ORZomc3vK0VHu+Nzh1ZB2q1bOB8kcFigRfZVyrwhSf3lf4+DZo
   t/sRc8PFuefW08o/B4x/Uh7Spx/lcBFxPTSIPXGYFepNkbN9DDPg2zipz
   XqTV4m0n6WTLNVnUuzFDwCqeDbDfBBTll9zV0RYa0o/BTcdHvxu8juMhR
   4=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: JjPmpUc7y4gJ0wMtEgJPJ0GIFydIeGEz0FCYCLC6dC+AU3tVXUpT5vytp9AtktnrwOOBcjdqXc
 tzf/d4AQIgDT94+IN1dgzYM66yC6BE5l2iXYlLawhi7Fb74uyKe9r3f9w4Tt4KD3oOrKjqcvGY
 E5hXB7oa2G0nVDaSE5My5h/eDDXDFZFNY4uRIv4nM9a0SdJQ3PqKuAnu/HNOjUWtSnE3dTsuJ5
 xqVP2X6g05zLcHf7v8oEzufNF3TV6IGdLtj7A8JKC+KMGNsGlh+fyNsqhwt1T9nr7FFw1KIrwC
 X7c=
X-SBRS: 5.2
X-MesageID: 36216196
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,392,1602561600"; 
   d="scan'208";a="36216196"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Dmzl8EpwcX3pejtzrAT37HnC+Nf4gCblUATKhO1k/nwmYfKhFeoimjQeGqPOEMY9gt+NLcBLmBqSTU2PUCqXuWU06Pqr7vBip5T9yVIa0jgYYBATkXUlc5g8gdSb3ayyKHbdteGhMD95hDwVKbNnZtCeGmhWSQGm04zLKKvAa2Ltlu3whg6Y1fZS3sM2GVKQDDxXZ/WnGDdgPWmP904E9gBeVvOD3xrhBid/be+qWc8hViQbrHfueCRVrw639fpASSUJYO5x1rpSrHTkFWe/S7SlO9bPScaSUWmFUd4BQt8jVcxUQDp+TkdGJ+D307pIqn4y8lOm1pwNDid+GaSLtA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=sGjRN6tz/BfhPmsSH+7Nomnz4zgbH460+p/U5a3VJfI=;
 b=jOTki7pAZSX5ygC2u9r2lRuJaAMei47ysH25VgAP4W2r7diGGhAs64ouUB9nYipJcrcl3/rC9SMOp6MJOl/ne0gqrLVJlRXtk3y2RweKaVyDkm4LgeU0Z/OB6TdHub8pJN5DcJQUY6YYaVsnBa0kW9W2nBlB+rSDi2t9WGWhyRbX5ogaz5KrAD/jHC9Iu9yk8AyTa/cHMQ+QytbUJtgOLun0PeOM4LyCIKiex36mUEx9LgjauC7lLeCuhm8rtVYkZC9TfGuEhlKk9TIJrg1jvT7YonP4l5XDhlZfg6gz5gqtb4TqCGVJNYTY6sIQrKiqMJuXa+v5Pe/vmUOXWUAyIA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=sGjRN6tz/BfhPmsSH+7Nomnz4zgbH460+p/U5a3VJfI=;
 b=QC6juAVyI92HMZT41G7a4UCL5q9hg3j8pEokpVTfUkEROO7oFOfC1Hd+92y6xtYW2qGsjJYW4V5A5W1T9wJ5nKNGZokMNhVf0NG1NCcjpxoPd+n6F+IfrTaSJCdMkP/4GfPbmX6I8RXZX1/XYUD3VGzAE09tZnO5aEcadMqqsJA=
Subject: Re: [PATCH v7 10/10] x86/vm_event: Carry Processor Trace buffer
 offset in vm_event
To: Jan Beulich <jbeulich@suse.com>, Tamas K Lengyel <tamas@tklengyel.com>
CC: Tamas K Lengyel <tamas.lengyel@intel.com>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian
	<kevin.tian@intel.com>, =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?=
	<michal.leszczynski@cert.pl>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210121212718.2441-1-andrew.cooper3@citrix.com>
 <20210121212718.2441-11-andrew.cooper3@citrix.com>
 <c00b60c5-ba4a-7473-cf26-60b46681279a@suse.com>
 <0a34175c-9bc1-9449-413b-01d743d201fc@citrix.com>
 <CABfawhk4eYA85pgSc6xKbHexBQpJKzAv-KvS_X6X9-eAAqXe4A@mail.gmail.com>
 <95f07ddb-7fe8-c6fb-dbc4-9743f82a1d89@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <39fe9d84-05c9-955d-4938-ab4b30e0ccfb@citrix.com>
Date: Mon, 1 Feb 2021 09:06:27 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
In-Reply-To: <95f07ddb-7fe8-c6fb-dbc4-9743f82a1d89@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0435.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1a9::8) To BYAPR03MB4728.namprd03.prod.outlook.com
 (2603:10b6:a03:13a::24)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 133e5b7a-7eef-401c-ca2f-08d8c690ac58
X-MS-TrafficTypeDiagnostic: SJ0PR03MB5646:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <SJ0PR03MB56468C1611E259195A4EAAA7BAB69@SJ0PR03MB5646.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: Bi9ptCoKhwTil3gQtMeujo23APygMhTMyDw5bYCEX7B6RHgsr6qgwyD+ifSSH0s3Z57iJnEp+tH5pymiEf1O9+hcatEW7JRfm736BrJhn5Yk7Xu/W26jkbBbWa1wIEFY1oHpb0VWvJZ2LyzQzvrr8VgjFnUR15zXqP3TAB/jO+mO3VAPfUGMrjxZeY3yijUAIkL7/6HBkRpkncj3TLjEKnQerpXmvxu0jg7cUW+4rVGL6cfWGHAOknTQX33fNw/1PWdZkAqE1AMnm+rNp8ib8OUQv46EwF/3NPiqfcNXzwR642K5FSS3pNSgR5pHHfwukaKliB4+3WHK+NTF2LyQGY+w44lriLQDHfMTAdmBYdBtDq8x8jgVyxnJ4DYtnvcORGwxuVxkLiM3iPMQFFApTrsTYUgdZuPnA3pYH7XTybQnfxvCNVlRP5+mTuJnU21RC+AMfRvjW17+FRbB945JjuKPX0ihzdzfeb6MzqtQyXuzCpEhhzq5X/jG2NqR2zf9F1OMjtLgxT3anBSIqlAkz9vLD+OUTR/FcbvY7DkMk+m5qugeedm02+G/oGX2SBMhYQD0JSr16DYRSyFc0XR9pzdEtf4C9JKjWLUdR9Y5fd8=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB4728.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39860400002)(136003)(346002)(366004)(396003)(376002)(956004)(2616005)(5660300002)(6486002)(31696002)(86362001)(31686004)(4326008)(478600001)(6666004)(16526019)(8676002)(54906003)(186003)(53546011)(26005)(36756003)(66476007)(66556008)(66946007)(83380400001)(8936002)(2906002)(16576012)(316002)(110136005)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?dzRkUnhRckZlRnlqaG1BbWZCdHZxYW43UmhHRkdqbUdZUW9ZRXlIYStzY0xo?=
 =?utf-8?B?OHVOZitqVElZd1NWdTY2dkRvTm9JOXp6ZHZEcXI3dXM1RllxVFBlSGRRdWwy?=
 =?utf-8?B?Q3lMRGVoOS9jU3YzemZYZmc1eTVqZnpOc1MzUXdXc3NsZGFleDZmeGVkajE3?=
 =?utf-8?B?bHZXV2x1ZW1HR1hlVW5heGVEWWNpM04zazNjTitCZDN1VXBrbXo5Z0hXQ3kw?=
 =?utf-8?B?ZFUwcXBuUVdkQWtBcjU3WStOeU4vLy9tMVFzVmRIbzE3TXdjNnkrREdJYmg1?=
 =?utf-8?B?MTkxNDBjK1VIRVhCNXViS2wrbkcvNW1LMGVPYmNaK1BhalpWbTNxek94aVBL?=
 =?utf-8?B?enI3TCtRaDA1ZzVYS2gyWUp0K015MUJnRDRJNHAxbWNEa0RFVm5wOXpyOUoy?=
 =?utf-8?B?cDFnTzcxTGppN2hjd05PS3ZGOW9qVUx1NGZkTDQ4OTk2MExrQmNBdXBTdU01?=
 =?utf-8?B?TzdRTlhIYktSOVRoVm5YWXFqb29VYW5wMnpaVmxad1EyMWVjRkpnTGdMU2hD?=
 =?utf-8?B?Tm13MVhrTzRmU2dCWjNFek5XMjlHYXc4cXB4Vk9nMUtTQjBGOVQ3dzN5SENp?=
 =?utf-8?B?ZmgxUGpPRjFWV0tZYjM0K01CZm5SdjBCeDBsVjRsQjg5ZktKdUIrbC9DYi96?=
 =?utf-8?B?UlJYTWlaRE1qbmNGU3VhODZUMjF1UmxKbnZiMkwrS2xUYVh5RUs5bCtMWFJN?=
 =?utf-8?B?T3dTTWVFOWltRHJ1TGdSdjlaT2Npbit1U1lkZEVLNDRzR1FyZHJIUExrWFIx?=
 =?utf-8?B?V1h3TnNHcmc1NVRFZnVPRmFueWpVaEpOTkQ0bGlseElPQ0Y5UDJPZm9DYzRI?=
 =?utf-8?B?ZWRxWlQ4TEVJbi8ySm0xZXJPdWtRZlJlV3BhQ2lBTW9KcGcvazY5bVJ0dEhZ?=
 =?utf-8?B?Mll5RUtaQ0d4WElET3pmVWFHdUJtQ0p0U3pFbjVCaWgrZkFObnlIdjRraWdz?=
 =?utf-8?B?cnNldXlpRnN6MGpwc3RhNGgycGdHQ2JCT21DNlJ1NG4wZWZUZFNWRXNBemNj?=
 =?utf-8?B?RkoxVzJhbGRkd0hVWkg2d0E0NzlseGJlZXU5RlhuMlM5OWdCdlBObzFXYUcr?=
 =?utf-8?B?aGdUc2tZWUZWRmgyY1JjWEpEdUppSVFlTEhEVnBIZldWS0cyQ0hhalp2NkxX?=
 =?utf-8?B?U1VNR3h0cVZYVnJoZk9NVVlib1duMHF3VGwzT01VKy93d1hpaFFkR3BNdXlo?=
 =?utf-8?B?eFBJVE8yREtmZy96M0ROeWE4Ni9YTnljTnh0OVNNWkNoMWVkTmYvZW10STAv?=
 =?utf-8?B?bktJWFBUWTNPZEVOZGxPK1BvNDRFSW9pdUJuaVVrdHgvNjdSMVdyZ0VKVC9h?=
 =?utf-8?B?SWdhTDZkTWZNNVVhRmNYOFFwK2lBN2cxQ2thVnVMc3IyNUk2bVFpeXBTQ0h1?=
 =?utf-8?B?bTNla1RHNmlEQ2xuZUhZSnJoZk0yRXRZZVhlQnVvV2hqcnhsT3RIV1prd3hz?=
 =?utf-8?Q?MhKKQzgo?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 133e5b7a-7eef-401c-ca2f-08d8c690ac58
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB4728.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 Feb 2021 09:06:34.3694
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 9nAQSyjgKWoc/cD8lGPY98muB00eOfPIreY32vfeICkpeRqu37HCvcD68B2lTSvUvniffVNeWgerkfOs08795uyRfA5CtH5LFyIlnilRHUc=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5646
X-OriginatorOrg: citrix.com

On 01/02/2021 08:55, Jan Beulich wrote:
> On 30.01.2021 00:40, Tamas K Lengyel wrote:
>> On Fri, Jan 29, 2021 at 6:22 PM Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>>> On 26/01/2021 14:27, Jan Beulich wrote:
>>>> On 21.01.2021 22:27, Andrew Cooper wrote:
>>>>> --- a/xen/arch/x86/vm_event.c
>>>>> +++ b/xen/arch/x86/vm_event.c
>>>>> @@ -251,6 +251,9 @@ void vm_event_fill_regs(vm_event_request_t *req)
>>>>>
>>>>>      req->data.regs.x86.shadow_gs = ctxt.shadow_gs;
>>>>>      req->data.regs.x86.dr6 = ctxt.dr6;
>>>>> +
>>>>> +    if ( hvm_vmtrace_output_position(curr, &req->data.regs.x86.pt_offset) != 1 )
>>>>> +        req->data.regs.x86.pt_offset = ~0;
>>>> Ah. (Regarding my earlier question about this returning -errno or
>>>> boolean).
>>>>
>>>>> --- a/xen/include/public/vm_event.h
>>>>> +++ b/xen/include/public/vm_event.h
>>>>> @@ -223,6 +223,12 @@ struct vm_event_regs_x86 {
>>>>>       */
>>>>>      uint64_t npt_base;
>>>>>
>>>>> +    /*
>>>>> +     * Current offset in the Processor Trace buffer. For Intel Processor Trace
>>>>> +     * this is MSR_RTIT_OUTPUT_MASK. Set to ~0 if no Processor Trace is active.
>>>>> +     */
>>>>> +    uint64_t pt_offset;
>>>> According to vmtrace_output_position() the value is only one half
>>>> of what the named MSR contains. Perhaps "... this is from MSR_..."?
>>>> Not sure whether, despite this, there still is a reason to have
>>>> this 64-bit wide.
>>> This is a vestigial remnant which escaped the "use vmtrace uniformly" work.
>>>
>>> It should match the domctl_vmtrace_output_position() format, so each
>>> interface gives the same content for the attempted-platform-neutral version.
>> From the vm_event ABI perspective it's simpler to have a 64-bit value
>> here even if the max value it may possibly carry is never going to use
>> the whole 64-bit width. I'd rather not play with shortening it just to
>> add padding somewhere else.
>>
>> As for what it's called, that's not that important from my perspective;
>> vmtrace_pos or something like that, for example, is fine by me.
> The important thing to me is that the comment not be misleading.
> Whether that's arranged for by adjusting the comment or the
> commented declaration is up to what you deem better; I take it
> it'll be the comment.

Please see v8.  I rewrote the comment.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 09:18:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 09:18:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79625.144979 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6VMB-0004OJ-Gw; Mon, 01 Feb 2021 09:18:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79625.144979; Mon, 01 Feb 2021 09:18:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6VMB-0004OC-CS; Mon, 01 Feb 2021 09:18:51 +0000
Received: by outflank-mailman (input) for mailman id 79625;
 Mon, 01 Feb 2021 09:18:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jUbX=HD=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1l6VMA-0004O7-GT
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 09:18:50 +0000
Received: from mail-wm1-x330.google.com (unknown [2a00:1450:4864:20::330])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 61ed6e5d-6894-4dc1-b083-adeca233c333;
 Mon, 01 Feb 2021 09:18:49 +0000 (UTC)
Received: by mail-wm1-x330.google.com with SMTP id m2so11948467wmm.1
 for <xen-devel@lists.xenproject.org>; Mon, 01 Feb 2021 01:18:49 -0800 (PST)
Received: from [192.168.1.36] (7.red-83-57-171.dynamicip.rima-tde.net.
 [83.57.171.7])
 by smtp.gmail.com with ESMTPSA id l1sm25560510wrp.40.2021.02.01.01.18.46
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 01 Feb 2021 01:18:47 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
X-Inumbo-ID: 61ed6e5d-6894-4dc1-b083-adeca233c333
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=sender:subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=zwJvXpxzD86IAB50OcEa1h9dCfmRan0v54n4k0R9nDk=;
        b=knZRjue7XA5Kbq0aQZ+f07ZV+LMkatnkgFgqf4pIl5o+gF6rmJWc+DIIkwVkmTFWT+
         8xYuw6VaixRCi3wiMJeBsiPqMY1pbZU85mLtKu3Rv+s36GTJ/X+saX5IHZj6KJdMkl7Z
         8tdsLEG3T+fvN6P/0dz0DIWpXyuA7SelLR3Mf2ocBxE05OT8ro4VD7Ekm2BvSrK0hhf7
         4RZR+OWZR+d9XwWxybSx0ipb0sn3a90f7BW5d5EkjXHw6DXnQbC3Fuv6QFoxp3Jc1d2R
         w/7CB3j3sWZX9hMvovVcnBxG6Hv/ZRCl0gRpUOUh1knPdNT9yO6oaXtTgsIsANhTwPEJ
         OMcw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:sender:subject:to:cc:references:from:message-id
         :date:user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=zwJvXpxzD86IAB50OcEa1h9dCfmRan0v54n4k0R9nDk=;
        b=UPuhzWbhASzm94dTXdUjbjhxiqHCjq+V7TRQX1r89bcmioUxSfZCAuClsSY1fPl+qL
         Ng65ePtFTwY1F11IWSt7RXhKWEq1KjgJP0Wv3XO5KkkJqDSuMtZbaFmOeocahuiPJOo+
         eENjk/2ZitBb4D1aSxswh6JJQVgDkHtkVFVLCH9M2I8dSJT1Yxm1dJ5+SCzZaB0T+UUe
         lm3RuYsFvgBciVNIoYeTvAzHd3dNf9Ta/ogoPel/sWhteIEL/IXPdTxOj5LxN8d7QqUf
         csyC8qRWvTdIoD8nSiZV3O2qxEksFfEQT4vMMxurHuOj7EefmBvClri0hgRSNopkgy5q
         MBRQ==
X-Gm-Message-State: AOAM531qa5SRBi9Wa9e5d5P5mGxX8tPWu2IQmTFEzZjYnUv86FJuwuEg
	iMcjrD7NvchEMDK1u9zTyRVsAlI5Jww=
X-Google-Smtp-Source: ABdhPJwXmLsSklCtG+dQ9zYlkzFgHSP0m68NDId7xXtpz/vVqTTQ8m6tLaZPeP9G1mutPM0o8rwOwA==
X-Received: by 2002:a1c:8181:: with SMTP id c123mr14636704wmd.23.1612171128109;
        Mon, 01 Feb 2021 01:18:48 -0800 (PST)
Sender: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philippe.mathieu.daude@gmail.com>
Subject: Re: [PATCH v2 4/4] hw/xen: Have Xen machines select 9pfs
To: Paolo Bonzini <pbonzini@redhat.com>, =?UTF-8?Q?Alex_Benn=c3=a9e?=
 <alex.bennee@linaro.org>, qemu-devel@nongnu.org, Greg Kurz <groug@kaod.org>,
 Christian Schoenebeck <qemu_oss@crudebyte.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Eduardo Habkost <ehabkost@redhat.com>, Paul Durrant <paul@xen.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
References: <20210131141810.293186-1-f4bug@amsat.org>
 <20210131141810.293186-5-f4bug@amsat.org>
 <565bf0dd-a5de-352e-eec7-68b862ed09e4@redhat.com>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <f4bug@amsat.org>
Message-ID: <f6e1917a-f9cf-9ae3-50b1-9dc0ee4f65f3@amsat.org>
Date: Mon, 1 Feb 2021 10:18:46 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <565bf0dd-a5de-352e-eec7-68b862ed09e4@redhat.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 2/1/21 9:34 AM, Paolo Bonzini wrote:
> On 31/01/21 15:18, Philippe Mathieu-Daudé wrote:
>> 9pfs is not an accelerator feature but a machine one,
>> move the selection on the machine Kconfig (in hw/).
>>
>> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
>> ---
>>   accel/Kconfig       | 1 -
>>   hw/i386/xen/Kconfig | 1 +
>>   hw/xen/Kconfig      | 1 +
>>   3 files changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/accel/Kconfig b/accel/Kconfig
>> index 461104c7715..b9e9a2d35b0 100644
>> --- a/accel/Kconfig
>> +++ b/accel/Kconfig
>> @@ -15,4 +15,3 @@ config KVM
>>   config XEN
>>       bool
>> -    select FSDEV_9P if VIRTFS
>> diff --git a/hw/i386/xen/Kconfig b/hw/i386/xen/Kconfig
>> index ad9d774b9ea..4affd842f28 100644
>> --- a/hw/i386/xen/Kconfig
>> +++ b/hw/i386/xen/Kconfig
>> @@ -3,3 +3,4 @@ config XEN_FV
>>       default y if XEN
>>       depends on XEN
>>       select I440FX
>> +    select FSDEV_9P if VIRTFS
>> diff --git a/hw/xen/Kconfig b/hw/xen/Kconfig
>> index 0b8427d6bd1..825277969fa 100644
>> --- a/hw/xen/Kconfig
>> +++ b/hw/xen/Kconfig
>> @@ -5,3 +5,4 @@ config XEN_PV
>> Â Â Â Â Â  select PCI
>> Â Â Â Â Â  select USB
>> Â Â Â Â Â  select IDE_PIIX
>> +Â Â Â  select FSDEV_9P if VIRTFS
>>
> 
> I think you can compile without FSDEV_9P selected?Â  If so, this should
> be "imply".
> 
> If on the other hand you cannot, and that is because of some other file
> brought in by CONFIG_XEN, this patch shouldn't be there.

FYI using 'imply FSDEV_9P' instead I get:

/usr/bin/ld: libcommon.fa.p/hw_xen_xen-legacy-backend.c.o: in function
`xen_be_register_common':
hw/xen/xen-legacy-backend.c:754: undefined reference to `xen_9pfs_ops'

The function is:

  void xen_be_register_common(void)
  {
      xen_set_dynamic_sysbus();

      xen_be_register("console", &xen_console_ops);
      xen_be_register("vkbd", &xen_kbdmouse_ops);
  #ifdef CONFIG_VIRTFS
      xen_be_register("9pfs", &xen_9pfs_ops);
  #endif
  #ifdef CONFIG_USB_LIBUSB
      xen_be_register("qusb", &xen_usb_ops);
  #endif
  }

The object is compiled with CONFIG_VIRTFS set even when FSDEV_9P is not
selected, so the guard has to test the Kconfig symbol instead:

-- >8 --
-#ifdef CONFIG_VIRTFS
+#ifdef CONFIG_FSDEV_9P
     xen_be_register("9pfs", &xen_9pfs_ops);
 #endif
---

Respin planned.
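For reference, the select-vs-imply distinction under discussion can be
sketched in a Kconfig fragment (hypothetical, not the actual QEMU files):

```
# Hypothetical sketch, not the real hw/i386/xen/Kconfig.
# "select" forces FSDEV_9P on whenever the condition holds, so code
# guarded by CONFIG_FSDEV_9P can rely on it being built.  "imply" only
# defaults the symbol on and lets the user turn it off again, so any
# code referencing xen_9pfs_ops must also compile when it is disabled.
config XEN_FV
    bool
    select I440FX
    select FSDEV_9P if VIRTFS   # hard dependency: link error otherwise
```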

Thanks,

Phil.


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 09:37:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 09:37:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79628.144991 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6Ve6-0006MR-0y; Mon, 01 Feb 2021 09:37:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79628.144991; Mon, 01 Feb 2021 09:37:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6Ve5-0006MK-U9; Mon, 01 Feb 2021 09:37:21 +0000
Received: by outflank-mailman (input) for mailman id 79628;
 Mon, 01 Feb 2021 09:37:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4N3t=HD=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l6Ve5-0006MF-9A
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 09:37:21 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 328f6949-9202-4d8a-92f9-41c1fb6c9c0d;
 Mon, 01 Feb 2021 09:37:20 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 78510AE63;
 Mon,  1 Feb 2021 09:37:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 328f6949-9202-4d8a-92f9-41c1fb6c9c0d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612172239; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=0TUqV3nK4hzpGWbVQxZaKAW4ccsDU8ScO6Q58BIGzOA=;
	b=XRGPfFwt5LvjIwxy7K5megpnnDKvfg/JEx/dnt+vYl9jKpHCDtGVv8a1+p7Tm/Gfew05Bi
	YQef0NdL9gk3sfY2hCO+9yhqqpLAEmO4ao5BiTqFQTalkaaOIBJ74ZPjNUZ7K3V5N7i6me
	M/wNLWpC6uOVwqkV+ygGAC48/2SKBP0=
Subject: Re: [PATCH] x86/debug: fix page-overflow bug in dbg_rw_guest_mem
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 Tamas K Lengyel <tamas@tklengyel.com>
Cc: Elena Ufimtseva <elena.ufimtseva@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
References: <caba05850df644814d75d5de0574c62ce90e8789.1611971959.git.tamas@tklengyel.com>
 <74f3263a-fe12-d365-ad45-e5556b575539@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <044823b7-1bbd-6405-7371-2b06e49cc147@suse.com>
Date: Mon, 1 Feb 2021 10:37:18 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <74f3263a-fe12-d365-ad45-e5556b575539@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.01.2021 03:59, Andrew Cooper wrote:
> On 30/01/2021 01:59, Tamas K Lengyel wrote:
>> When using gdbsx, dbg_rw_guest_mem is used to read/write guest memory. When
>> the buffer being accessed straddles a page boundary, the next page needs to
>> be grabbed to access the correct memory for the buffer's overflowing parts.
>> While dbg_rw_guest_mem has logic to handle that, it broke with 229492e210a:
>> instead of grabbing the next page, the code currently loops back to the
>> start of the first page. This results in errors like the following while
>> trying to use gdb with Linux' lx-dmesg:
>>
>> [    0.114457] PM: hibernation: Registered nosave memory: [mem
>> 0xfdfff000-0xffffffff]
>> [    0.114460] [mem 0x90000000-0xfbffffff] available for PCI demem 0
>> [    0.114462] f]f]
>> Python Exception <class 'ValueError'> embedded null character:
>> Error occurred in Python: embedded null character
>>
>> Fix this bug by taking the variable assignment outside the loop.
>>
>> Signed-off-by: Tamas K Lengyel <tamas@tklengyel.com>
> 
> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

I have to admit that I'm irritated: On January 14th I did submit
a patch ('x86/gdbsx: convert "user" to "guest" accesses') fixing this
as a side effect. I understand that one was taking care of more
issues here, but shouldn't that be preferred? Re-basing isn't going
to be overly difficult, but anyway.

Jan
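The page-crossing behaviour the fix restores can be illustrated with a
self-contained sketch; PAGE_SIZE, map_page() and dbg_read() here are
hypothetical stand-ins for Xen's map_domain_page()-based code, not the
actual hypervisor source:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 16  /* tiny pages, purely for illustration */

/* Hypothetical backing store standing in for guest memory. */
static uint8_t guest_mem[4 * PAGE_SIZE];

/* Stand-in for map_domain_page(): returns the start of the page
 * containing addr.  The crucial point is that it must be invoked once
 * per page, not once for the whole transfer. */
static uint8_t *map_page(uintptr_t addr)
{
    return &guest_mem[(addr / PAGE_SIZE) * PAGE_SIZE];
}

/* Sketch of the fixed copy loop: the page mapping is refreshed on every
 * iteration, so a buffer spanning a boundary reads from the second page
 * instead of looping back to the start of the first one. */
static void dbg_read(uintptr_t addr, uint8_t *buf, size_t len)
{
    while (len) {
        size_t off = addr % PAGE_SIZE;
        size_t chunk = PAGE_SIZE - off;
        if (chunk > len)
            chunk = len;
        memcpy(buf, map_page(addr) + off, chunk);  /* re-map each page */
        addr += chunk;
        buf += chunk;
        len -= chunk;
    }
}
```

With a 16-byte page, a read starting at offset 12 pulls 4 bytes from the
first page and the remainder from the second, which is exactly the case
the regression broke.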


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 09:38:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 09:38:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79630.145003 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6Vej-0006Sm-DA; Mon, 01 Feb 2021 09:38:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79630.145003; Mon, 01 Feb 2021 09:38:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6Vej-0006Sf-A6; Mon, 01 Feb 2021 09:38:01 +0000
Received: by outflank-mailman (input) for mailman id 79630;
 Mon, 01 Feb 2021 09:37:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=npej=HD=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1l6Veh-0006SX-Ii
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 09:37:59 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 229d5425-a32d-4a00-bfe8-480a686f2dfe;
 Mon, 01 Feb 2021 09:37:56 +0000 (UTC)
Received: from rochebonne.antioche.eu.org (rochebonne [10.0.0.1])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTP id 1119blZa005621;
 Mon, 1 Feb 2021 10:37:47 +0100 (MET)
Received: by rochebonne.antioche.eu.org (Postfix, from userid 1210)
 id 66E13281D; Mon,  1 Feb 2021 10:37:47 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 229d5425-a32d-4a00-bfe8-480a686f2dfe
Date: Mon, 1 Feb 2021 10:37:47 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
        Wei Liu <wl@xen.org>
Subject: Re: [PATCH v3 2/2] Document qemu-ifup on NetBSD
Message-ID: <20210201093747.GA624@antioche.eu.org>
References: <20210130230300.11664-1-bouyer@netbsd.org>
 <20210130230300.11664-2-bouyer@netbsd.org>
 <YBe6JpR6jOLvYDz6@Air-de-Roger>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <YBe6JpR6jOLvYDz6@Air-de-Roger>
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Mon, 01 Feb 2021 10:37:47 +0100 (MET)

On Mon, Feb 01, 2021 at 09:21:58AM +0100, Roger Pau Monné wrote:
> On Sun, Jan 31, 2021 at 12:03:00AM +0100, Manuel Bouyer wrote:
> > Document that on NetBSD, the tap interface will be configured by the
> > qemu-ifup script. Document the arguments, and the XEN_DOMAIN_ID
> > environment variable.
> 
> You are missing a Signed-off-by tag here ;).
> 
> > ---
> >  docs/man/xl-network-configuration.5.pod | 4 ++++
> >  1 file changed, 4 insertions(+)
> > 
> > diff --git a/docs/man/xl-network-configuration.5.pod b/docs/man/xl-network-configuration.5.pod
> > index af058d4d3c..f6eb6c31fc 100644
> > --- a/docs/man/xl-network-configuration.5.pod
> > +++ b/docs/man/xl-network-configuration.5.pod
> > @@ -172,6 +172,10 @@ add it to the relevant bridge). Defaults to
> >  C<XEN_SCRIPT_DIR/vif-bridge> but can be set to any script. Some example
> >  scripts are installed in C<XEN_SCRIPT_DIR>.
> >  
> > +On NetBSD, HVM guests will always use
> > +C<XEN_SCRIPT_DIR/qemu-ifup> to configure the tap interface. The first argument
> > +is the tap interface, the second is the bridge name. The environment variable
> > +C<XEN_DOMAIN_ID> contains the domU's ID.
> 
> LGTM, but I would make it even less technical and more user focused:
> 
> Note on NetBSD HVM guests will ignore the script option for tap
> (emulated) interfaces and always use C<XEN_SCRIPT_DIR/qemu-ifup> to
> configure the interface in bridged mode.

Well, as a user, I want to know how the scripts are called, so that I can
tune them ...
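As a sketch of the calling convention being documented, a tunable script
could start from something like the following (the helper and the NetBSD
commands it emits are illustrative, not the actual qemu-ifup-NetBSD
shipped with qemu-xen-traditional):

```shell
#!/bin/sh
# Hypothetical qemu-ifup sketch following the documented convention:
#   $1 = tap interface, $2 = bridge name,
#   XEN_DOMAIN_ID exported by the toolstack.
# Emitting the commands via echo keeps the sketch testable; a real
# script would execute ifconfig/brconfig directly.
build_ifup_cmds() {
    tap="$1"
    bridge="$2"
    echo "ifconfig $tap up"
    echo "brconfig $bridge add $tap # domain $XEN_DOMAIN_ID"
}
```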

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 09:39:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 09:39:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79631.145015 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6VgP-0006kd-Pr; Mon, 01 Feb 2021 09:39:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79631.145015; Mon, 01 Feb 2021 09:39:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6VgP-0006kW-M2; Mon, 01 Feb 2021 09:39:45 +0000
Received: by outflank-mailman (input) for mailman id 79631;
 Mon, 01 Feb 2021 09:39:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=npej=HD=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1l6VgO-0006kQ-ES
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 09:39:44 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 95c75cde-3d39-4f29-b826-75fdddc7ad29;
 Mon, 01 Feb 2021 09:39:43 +0000 (UTC)
Received: from rochebonne.antioche.eu.org (rochebonne
 [IPv6:2001:41d0:fe9d:1100:221:70ff:fe0c:9885])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTP id 1119ddUZ004162;
 Mon, 1 Feb 2021 10:39:39 +0100 (MET)
Received: by rochebonne.antioche.eu.org (Postfix, from userid 1210)
 id 61E18281D; Mon,  1 Feb 2021 10:39:39 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 95c75cde-3d39-4f29-b826-75fdddc7ad29
Date: Mon, 1 Feb 2021 10:39:39 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
        Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v2] libs/light: pass some infos to qemu
Message-ID: <20210201093939.GB624@antioche.eu.org>
References: <20210126224800.1246-1-bouyer@netbsd.org>
 <20210126224800.1246-12-bouyer@netbsd.org>
 <YBKbEhavZlpD75fU@Air-de-Roger>
 <20210130115013.GA2101@antioche.eu.org>
 <YBe2RSZeJBeMybdt@Air-de-Roger>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <YBe2RSZeJBeMybdt@Air-de-Roger>
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [IPv6:2001:41d0:fe9d:1100:a00:20ff:fe1c:276e]); Mon, 01 Feb 2021 10:39:39 +0100 (MET)

On Mon, Feb 01, 2021 at 09:06:13AM +0100, Roger Pau Monné wrote:
> On Sat, Jan 30, 2021 at 12:50:13PM +0100, Manuel Bouyer wrote:
> > On Thu, Jan 28, 2021 at 12:08:02PM +0100, Roger Pau Monné wrote:
> > > [...]
> > > Also, the qemu-ifup script doesn't seem to be part of the NetBSD
> > > scripts that are upstream, is this something carried by the NetBSD
> > > package?
> > 
> > Actually, the script is part of qemu-xen-traditional:
> > tools/qemu-xen-traditional/i386-dm/qemu-ifup-NetBSD
> > 
> > and it's installed as part of 'make install'. The same script can be used
> > for both qemu-xen-traditional and qemu-xen as long as we support only
> > bridged mode by default.
> > 
> > qemu-xen-traditional did call the script with the bridge name.
> > This patch makes qemu-xen call the script with the same parameters,
> > and adds the XEN_DOMAIN_ID environment variable.
> > 
> > Is it OK to keep the script from qemu-xen-traditional (and installed as
> > part of qemu-xen-traditional) for now ?
> 
> I think you want to move the script into hotplug/NetBSD/ because it
> should be possible to install a system without qemu-xen-traditional
> (--disable-qemu-traditional) and only qemu-upstream, and the script
> will still be needed in that case.

I can, but how do I get the script removed from qemu-traditional?
It's a different repo, isn't it?

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 09:46:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 09:46:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79635.145027 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6VnI-0007cq-Hg; Mon, 01 Feb 2021 09:46:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79635.145027; Mon, 01 Feb 2021 09:46:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6VnI-0007cj-EZ; Mon, 01 Feb 2021 09:46:52 +0000
Received: by outflank-mailman (input) for mailman id 79635;
 Mon, 01 Feb 2021 09:46:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4N3t=HD=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l6VnH-0007ce-DF
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 09:46:51 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 551bb446-24ad-4543-889a-2769eb0d55be;
 Mon, 01 Feb 2021 09:46:50 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 755E9AC9B;
 Mon,  1 Feb 2021 09:46:49 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 551bb446-24ad-4543-889a-2769eb0d55be
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612172809; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=lTfjF2ZVLBDFFszI0T+/kJjmJ7xJKd47EbkHp3lLM9E=;
	b=AAiYwnlpviEzTVxmmn0Uv5+yj4sjugVPJRv5HaZ3v3XEmXcF3bELmmFMdUScYklTJGl6e5
	oQANKLEqegfkieQCDLxT9Oq8ZtuHlLj5UBvoKyYnCzErQVPTGZ3CvKFyCiJk2fFMH+8ALk
	Ti6OJITLQbGMl9So50XbvDl8ts9zEzE=
Subject: Re: [PATCH for-4.15] xen/mm: Fix build when CONFIG_HVM=n and
 CONFIG_COVERAGE=y
To: Julien Grall <julien@xen.org>
Cc: Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 xen-devel@lists.xenproject.org
References: <20210130152210.17503-1-julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <174a18ba-25d5-a94c-a85d-4a81b837a936@suse.com>
Date: Mon, 1 Feb 2021 10:46:48 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <20210130152210.17503-1-julien@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.01.2021 16:22, Julien Grall wrote:
> @@ -1442,13 +1447,6 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>          if ( d == NULL )
>              return -ESRCH;
>  
> -        rc = xatp_permission_check(d, xatpb.space);
> -        if ( rc )
> -        {
> -            rcu_unlock_domain(d);
> -            return rc;
> -        }
> -
>          rc = xenmem_add_to_physmap_batch(d, &xatpb, start_extent);
>  
>          rcu_unlock_domain(d);

I'd be okay with the code movement if you did so consistently,
i.e. also for the other invocation. I realize this would have
an effect on the dm-op call of the function, but I wonder
whether this wouldn't even be a good thing. If not, I think
duplicating xenmem_add_to_physmap()'s early ASSERT() into
xenmem_add_to_physmap_batch() would be the better course of
action.
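The alternative of having the shared helper validate its own
precondition, so that no call path can skip it, can be sketched as
follows (the function names and the trivial check are illustrative
stand-ins, not Xen's actual xatp_permission_check logic):

```c
#include <assert.h>
#include <errno.h>

/* Illustrative permission check: succeeds for valid "spaces" only. */
static int permission_check(int space)
{
    return space >= 0 ? 0 : -EACCES;
}

/* The batch helper re-validates the precondition itself instead of
 * trusting each of its callers (memory-op, dm-op, ...) to have done
 * so before invoking it. */
static int add_to_physmap_batch(int space)
{
    int rc = permission_check(space);

    if ( rc )
        return rc;

    /* ... perform the actual mapping work ... */
    return 0;
}
```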

Jan


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 09:51:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 09:51:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79640.145041 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6VsC-0000Bb-5l; Mon, 01 Feb 2021 09:51:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79640.145041; Mon, 01 Feb 2021 09:51:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6VsC-0000BU-2j; Mon, 01 Feb 2021 09:51:56 +0000
Received: by outflank-mailman (input) for mailman id 79640;
 Mon, 01 Feb 2021 09:51:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4N3t=HD=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l6VsA-0000BP-K9
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 09:51:54 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8ab9db54-4460-451b-a6ef-98127d973801;
 Mon, 01 Feb 2021 09:51:53 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 04BC1ACF4;
 Mon,  1 Feb 2021 09:51:53 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8ab9db54-4460-451b-a6ef-98127d973801
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612173113; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=2tnGneDSnu7+cSQ/HFxC8HHQO4QsR5EtasRZB/JHnM4=;
	b=t05sYcMlPdocTZQZGPc2o34ykxBhDhtKtpZkc0IRuDIPG/PMrd5DH5DRloVBEgSaQrNTq9
	boWmP52lR1N8yeEcyGWsqMA480ieTpk1BtcjG9EhNgYNM5g33yBqVBfdEQ/wO/3WnyQzFt
	qvvAz3fVQZ+wb6AC6yozKg0vjjNI898=
Subject: Re: [PATCH v8 16/16] x86/vm_event: Carry the vmtrace buffer position
 in vm_event
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Tamas K Lengyel <tamas.lengyel@intel.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Jun Nakajima <jun.nakajima@intel.com>,
 Kevin Tian <kevin.tian@intel.com>,
 =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 Tamas K Lengyel <tamas@tklengyel.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20210130025852.12430-1-andrew.cooper3@citrix.com>
 <20210130025852.12430-17-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <43d6d4a8-6a3a-4476-6262-122784270d8b@suse.com>
Date: Mon, 1 Feb 2021 10:51:52 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <20210130025852.12430-17-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.01.2021 03:58, Andrew Cooper wrote:
> From: Tamas K Lengyel <tamas.lengyel@intel.com>
> 
> Add vmtrace_pos field to x86 regs in vm_event. Initialized to ~0 if
> vmtrace is not in use.
> 
> Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
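The ~0 sentinel convention from the commit message can be shown with a
minimal sketch (the struct and helper are illustrative, not the actual
vm_event ABI from Xen's public headers):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in for the vm_event x86 register block. */
struct regs_sketch {
    uint64_t vmtrace_pos;
};

/* When vmtrace is not in use, the field is filled with all ones so
 * consumers can distinguish "no trace buffer" from a valid offset of
 * zero. */
static void fill_vmtrace_pos(struct regs_sketch *r, int vmtrace_active,
                             uint64_t pos)
{
    r->vmtrace_pos = vmtrace_active ? pos : ~0ULL;
}
```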


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 10:11:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 10:11:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79717.145111 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6WAc-0002jo-KB; Mon, 01 Feb 2021 10:10:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79717.145111; Mon, 01 Feb 2021 10:10:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6WAc-0002jh-GP; Mon, 01 Feb 2021 10:10:58 +0000
Received: by outflank-mailman (input) for mailman id 79717;
 Mon, 01 Feb 2021 10:10:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Dub/=HD=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1l6WAa-0002jc-Sq
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 10:10:57 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6b849ccb-ab6b-43b6-82c5-e4a0f9663e49;
 Mon, 01 Feb 2021 10:10:55 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6b849ccb-ab6b-43b6-82c5-e4a0f9663e49
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612174255;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=yj2sEYPvzVQBPl6Fnbb6nFc23k1yNLT3EzsJzTZjjx8=;
  b=FLWcN8OKPUINGRUHJmqq5V/YQYnnzGNFKQx3V7EsCZIynm2qFMr+MhIO
   t63/s5CB07VkueszFYvIGuM13LLzioz4fb78K6YAcPrfVUved0gdyJWM7
   zaA1jlGKd068PXkcl5YVn3nbErKAyVJam3NDwT8mgHgjhour0IhX9nMJ5
   E=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 36219869
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,392,1602561600"; 
   d="scan'208";a="36219869"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=a94U5K4YReINhRN2Ptp87Rb8M7egulyNa4NQWEslW35/fNzpsA/IXEo4GAjTXVIQjiEWL4xyuAVk5NkABpMwJ3xhBc/bP867ayTXMJ7sxJ2ErhCRQSHuvsHPUlS5c/cYyEmO9oD2ck+sCGR5Z6Z84MBLmdAbR+XCSDX2mEYnlqgGo6uLnRwxzNSaFIIWwCufIzudQmT6iF4HwOtla33tKgmvJ9WK3puIn9DGMVhdmEUBRj63PpDlfBZ1O6REho7SPrQ46u8F2EO7lFC/N8A89cdNhyglDNdKrfBP+dQtvH7OrZ5WbHx0CnAyuK/p0FqojLyzVIUPvMNsBYis8RIC4w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+Baac9PWgRUHt29mtlrO9nxhPMmzp5MwDuBZNMIfS70=;
 b=SHBNpJsRtztOUc+qU3Kva4qDKCtVpLSGrF7sEXfF0xXlBv1EPZY5Ye5rrjbtUZM1b0mPc1amiBPbeQhQcOy4do2wO1DfRqMvmmiDfmfPj7Fu5B6AV5ekK3nRHSdSvcqr6/seyaYz2C4NiE7HCzAwheDiHauA9fH53LY9Nv+A+Acordj6BcQ1Vx2kO/XT6nJ6CNDVuYQlptmXlG7zTdD1ES+M+IMbbMaN5b129rNqHuYhujh7fFfRLtsIZVo1488PmDw8FU8WZc8NnPzAbC/5QwsYSjRDEEamVUJZYXTHk+HjdISsEbazqye4PKqa6cZqfrr7OFwmMD+gkYSXrsuYnw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+Baac9PWgRUHt29mtlrO9nxhPMmzp5MwDuBZNMIfS70=;
 b=Kt8esl6Qvmqsmq6w0p1lITWQ3c5j1tCJIjtHsF7xPFaiV5AgHLeoYh0OPVscCJ+bRQWX14FTpmJYVYjDSTqpoYnxt8gPsy2aY8evYystveKAlRk1YhGNF6QhyWbOyBu1KsIJfout9mKCRz5zxWJHqNUTjMuqtfVotOCzCehpg3k=
Date: Mon, 1 Feb 2021 11:10:45 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Ian Jackson <iwj@xenproject.org>, Jan Beulich
	<JBeulich@suse.com>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>, Julien Grall <julien@xen.org>, Paul Durrant <paul@xen.org>,
	=?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>, Hubert
 Jasudowicz <hubert.jasudowicz@cert.pl>, Tamas K Lengyel <tamas@tklengyel.com>
Subject: Re: [PATCH v8 06/16] xen/memory: Fix mapping grant tables with
 XENMEM_acquire_resource
Message-ID: <YBfTpTzi+wo7AFSH@Air-de-Roger>
References: <20210130025852.12430-1-andrew.cooper3@citrix.com>
 <20210130025852.12430-7-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20210130025852.12430-7-andrew.cooper3@citrix.com>
X-ClientProxiedBy: MR2P264CA0064.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:31::28) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 81ab56f7-140a-44db-4d58-08d8c699a715
X-MS-TrafficTypeDiagnostic: DM5PR03MB3066:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM5PR03MB3066B6199DF1E55A95A2AAF18FB69@DM5PR03MB3066.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 81ab56f7-140a-44db-4d58-08d8c699a715
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 Feb 2021 10:10:51.0911
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: QODp8QieQPi8HfQq4fjMC5UoIhnYpGKR7nsWAJXb9Tav2GOs+poVEBzgCUUcYYrNaAMtdkw1ARBK7rbKJchd5g==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB3066
X-OriginatorOrg: citrix.com

On Sat, Jan 30, 2021 at 02:58:42AM +0000, Andrew Cooper wrote:
> A guest's default number of grant frames is 64, and XENMEM_acquire_resource
> will reject an attempt to map more than 32 frames.  This limit is caused by
> the size of mfn_list[] on the stack.
> 
> Fix mapping of arbitrary size requests by looping over batches of 32 in
> acquire_resource(), and using hypercall continuations when necessary.
> 
> To start with, break _acquire_resource() out of acquire_resource() to cope
> with type-specific dispatching, and update the return semantics to indicate
> the number of mfns returned.  Update gnttab_acquire_resource() and x86's
> arch_acquire_resource() to match these new semantics.
> 
> Have do_memory_op() pass start_extent into acquire_resource() so it can pick
> up where it left off after a continuation, and loop over batches of 32 until
> all the work is done, or a continuation needs to occur.
> 
> compat_memory_op() is a bit more complicated, because it also has to marshal
> frame_list in the XLAT buffer.  Have it account for continuation information
> itself and hide details from the upper layer, so it can marshal the buffer in
> chunks if necessary.
> 
> With these fixes in place, it is now possible to map the whole grant table for
> a guest.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Just one comment/question regarding a continuation below.

I have to admit I had a hard time reviewing this; all this compat code
plus the continuation stuff is quite hard to follow.
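For readers following along, the batching-plus-continuation pattern the commit message describes can be sketched roughly as below. This is an illustrative stand-alone model, not the actual Xen code; `BATCH`, `process_batch()` and `acquire()` are hypothetical names, and real hypercall continuations re-enter via the hypercall mechanism rather than a simple return value.

```c
#include <assert.h>
#include <stdbool.h>

#define BATCH 32 /* mirrors the 32-frame mfn_list[] limit on the stack */

/* Stub standing in for the per-batch mapping work; returns frames done. */
static unsigned int process_batch(unsigned int nr)
{
    return nr;
}

/*
 * Process nr_frames in batches of BATCH, resuming from start_extent.
 * If preempted, report progress so the caller can re-invoke with
 * start_extent set to the returned value (the "continuation").
 */
static unsigned int acquire(unsigned int nr_frames, unsigned int start_extent,
                            bool *continued)
{
    unsigned int done = start_extent;

    while ( done < nr_frames )
    {
        unsigned int todo = nr_frames - done;

        if ( todo > BATCH )
            todo = BATCH;

        done += process_batch(todo);

        /* Model a preemption check firing after every batch. */
        if ( done < nr_frames )
        {
            *continued = true;
            return done;
        }
    }

    *continued = false;
    return done;
}
```

Mapping all 64 default grant frames then takes two invocations: the first returns after 32 frames with a continuation pending, and the second picks up at `start_extent = 32` and finishes.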

> ---
> CC: George Dunlap <George.Dunlap@eu.citrix.com>
> CC: Ian Jackson <iwj@xenproject.org>
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Wei Liu <wl@xen.org>
> CC: Julien Grall <julien@xen.org>
> CC: Paul Durrant <paul@xen.org>
> CC: Michał Leszczyński <michal.leszczynski@cert.pl>
> CC: Hubert Jasudowicz <hubert.jasudowicz@cert.pl>
> CC: Tamas K Lengyel <tamas@tklengyel.com>
> 
> v8:
>  * nat => cmp change in the start_extent check.
>  * Rebase over 'frame' and ARM/IOREQ series.
> 
> v3:
>  * Spelling fixes
> ---
>  xen/common/compat/memory.c |  94 +++++++++++++++++++++++++++-------
>  xen/common/grant_table.c   |   3 ++
>  xen/common/memory.c        | 124 +++++++++++++++++++++++++++++++++------------
>  3 files changed, 169 insertions(+), 52 deletions(-)
> 
> diff --git a/xen/common/compat/memory.c b/xen/common/compat/memory.c
> index 834c5e19d1..4c9cd9c05a 100644
> --- a/xen/common/compat/memory.c
> +++ b/xen/common/compat/memory.c
> @@ -402,23 +402,10 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
>          case XENMEM_acquire_resource:
>          {
>              xen_pfn_t *xen_frame_list = NULL;
> -            unsigned int max_nr_frames;
>  
>              if ( copy_from_guest(&cmp.mar, compat, 1) )
>                  return -EFAULT;
>  
> -            /*
> -             * The number of frames handled is currently limited to a
> -             * small number by the underlying implementation, so the
> -             * scratch space should be sufficient for bouncing the
> -             * frame addresses.
> -             */
> -            max_nr_frames = (COMPAT_ARG_XLAT_SIZE - sizeof(*nat.mar)) /
> -                sizeof(*xen_frame_list);
> -
> -            if ( cmp.mar.nr_frames > max_nr_frames )
> -                return -E2BIG;
> -
>              /* Marshal the frame list in the remainder of the xlat space. */
>              if ( !compat_handle_is_null(cmp.mar.frame_list) )
>                  xen_frame_list = (xen_pfn_t *)(nat.mar + 1);
> @@ -432,6 +419,28 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
>  
>              if ( xen_frame_list && cmp.mar.nr_frames )
>              {
> +                unsigned int xlat_max_frames =

Could be made static const, I think?

> +                    (COMPAT_ARG_XLAT_SIZE - sizeof(*nat.mar)) /
> +                    sizeof(*xen_frame_list);
> +
> +                if ( start_extent >= cmp.mar.nr_frames )
> +                    return -EINVAL;
> +
> +                /*
> +                 * Adjust nat to account for work done on previous
> +                 * continuations, leaving cmp pristine.  Hide the continuation
> +                 * from the native code to prevent double accounting.
> +                 */
> +                nat.mar->nr_frames -= start_extent;
> +                nat.mar->frame += start_extent;
> +                cmd &= MEMOP_CMD_MASK;
> +
> +                /*
> +                 * If there are too many frames to fit within the xlat buffer,
> +                 * we'll need to loop to marshal them all.
> +                 */
> +                nat.mar->nr_frames = min(nat.mar->nr_frames, xlat_max_frames);
> +
>                  /*
>                   * frame_list is an input for translated guests, and an output
>                   * for untranslated guests.  Only copy in for translated guests.
> @@ -444,14 +453,14 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
>                                               cmp.mar.nr_frames) ||
>                           __copy_from_compat_offset(
>                               compat_frame_list, cmp.mar.frame_list,
> -                             0, cmp.mar.nr_frames) )
> +                             start_extent, nat.mar->nr_frames) )
>                          return -EFAULT;
>  
>                      /*
>                       * Iterate backwards over compat_frame_list[] expanding
>                       * compat_pfn_t to xen_pfn_t in place.
>                       */
> -                    for ( int x = cmp.mar.nr_frames - 1; x >= 0; --x )
> +                    for ( int x = nat.mar->nr_frames - 1; x >= 0; --x )
>                          xen_frame_list[x] = compat_frame_list[x];

Unrelated question, but I don't really see the point of iterating
backwards; wouldn't it be easier to use the existing 'i' loop
counter and a for ( i = 0; i < nat.mar->nr_frames; i++ ) loop?

(Not that you need to fix it here, just curious about why we use that
construct instead).

>                  }
>              }
> @@ -600,9 +609,11 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
>          case XENMEM_acquire_resource:
>          {
>              DEFINE_XEN_GUEST_HANDLE(compat_mem_acquire_resource_t);
> +            unsigned int done;
>  
>              if ( compat_handle_is_null(cmp.mar.frame_list) )
>              {
> +                ASSERT(split == 0 && rc == 0);
>                  if ( __copy_field_to_guest(
>                           guest_handle_cast(compat,
>                                             compat_mem_acquire_resource_t),
> @@ -611,6 +622,21 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
>                  break;
>              }
>  
> +            if ( split < 0 )
> +            {
> +                /* Continuation occurred. */
> +                ASSERT(rc != XENMEM_acquire_resource);
> +                done = cmd >> MEMOP_EXTENT_SHIFT;
> +            }
> +            else
> +            {
> +                /* No continuation. */
> +                ASSERT(rc == 0);
> +                done = nat.mar->nr_frames;
> +            }
> +
> +            ASSERT(done <= nat.mar->nr_frames);
> +
>              /*
>               * frame_list is an input for translated guests, and an output for
>               * untranslated guests.  Only copy out for untranslated guests.
> @@ -626,7 +652,7 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
>                   */
>                  BUILD_BUG_ON(sizeof(compat_pfn_t) > sizeof(xen_pfn_t));
>  
> -                for ( i = 0; i < cmp.mar.nr_frames; i++ )
> +                for ( i = 0; i < done; i++ )
>                  {
>                      compat_pfn_t frame = xen_frame_list[i];
>  
> @@ -636,15 +662,45 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
>                      compat_frame_list[i] = frame;
>                  }
>  
> -                if ( __copy_to_compat_offset(cmp.mar.frame_list, 0,
> +                if ( __copy_to_compat_offset(cmp.mar.frame_list, start_extent,
>                                               compat_frame_list,
> -                                             cmp.mar.nr_frames) )
> +                                             done) )
>                      return -EFAULT;

Is it fine to return with a possibly pending continuation already
encoded here?

Other places seem to crash the domain when that happens.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 10:23:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 10:23:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79720.145123 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6WMZ-0003iI-Sf; Mon, 01 Feb 2021 10:23:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79720.145123; Mon, 01 Feb 2021 10:23:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6WMZ-0003iB-Pb; Mon, 01 Feb 2021 10:23:19 +0000
Received: by outflank-mailman (input) for mailman id 79720;
 Mon, 01 Feb 2021 10:23:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CDK9=HD=redhat.com=pbonzini@srs-us1.protection.inumbo.net>)
 id 1l6WMY-0003i6-JJ
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 10:23:18 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 12dd5824-b4e8-4722-9342-f1cd5d0785fe;
 Mon, 01 Feb 2021 10:23:16 +0000 (UTC)
Received: from mail-wm1-f69.google.com (mail-wm1-f69.google.com
 [209.85.128.69]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-223-GKx9rcpDMOOjuvzfoDn0SA-1; Mon, 01 Feb 2021 05:23:14 -0500
Received: by mail-wm1-f69.google.com with SMTP id y9so145772wmj.7
 for <xen-devel@lists.xenproject.org>; Mon, 01 Feb 2021 02:23:14 -0800 (PST)
Received: from ?IPv6:2001:b07:6468:f312:c8dd:75d4:99ab:290a?
 ([2001:b07:6468:f312:c8dd:75d4:99ab:290a])
 by smtp.gmail.com with ESMTPSA id x9sm4234865wmb.14.2021.02.01.02.23.11
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 01 Feb 2021 02:23:12 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 12dd5824-b4e8-4722-9342-f1cd5d0785fe
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1612174995;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=5aJMOuyPSvjfhIQ04b1GIIwOFY3oexeLpV4TNKMqWiI=;
	b=Ig9VYT4nOehZv3vos89PLHQiUROwoBFC3GmOZTOLmVThxFAUppqdK+garDjTQuXq860h03
	lrTezMa/Wgvnz3t79nhbKzpQ/MBpxYN6NqIV5V6x/cRJQ5DB3sUgr5QrZIaMFqXkdgNx7v
	R7ZmsQJ8c6YaiTcdlNQ7r4gphhHRWow=
X-MC-Unique: GKx9rcpDMOOjuvzfoDn0SA-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=5aJMOuyPSvjfhIQ04b1GIIwOFY3oexeLpV4TNKMqWiI=;
        b=jEZRYL3Qne6GwumK1zONhmxV0A+7sfpH1coxpoNiU7wJI8Pnh+N9UpyjA46LQA0nLU
         1lJHDezWleSO2ixiWmeTfsULfVF6Bq2xctf/fEV/L8vobjEeNUnHwK9xQmPWTXM1qfbo
         7YfB28IPI0craCc4jemUiWKbtBd9Ma8LnqJ139XHJj0RtL4cBX4/t9isgddOOFBHdK0g
         wyVGpXlKzn+OM0qDYZTZuQyllpabJ4ZTVO2ATl6hRWqyBEkoFMV474avnugIf89u+FrB
         pphUGl5VF3F8/Tz3zRNXHBYYAz78s7d3YyvrV1TxrPItNPOir4HLWCnLKJ0HXurnDeZq
         DWlg==
X-Gm-Message-State: AOAM53363JlO6oRhna0vXtFTsznu97n1JOsQhTK0JJnqa2bbwxn60Ns/
	1XhRSfoTIyBUn3LrD76GBTvwBWxkzn3k+luA3u2IoJnwHIrsC8FccA/m12IPcYzb2vQFSlKjS05
	E0zKfc/2A3uTaWTcILm/j3HFutovj8tWIg8oPRs7dhpGGT+HDEqprttbpMC7MfQeHe7GYnrZDNB
	dh79U=
X-Received: by 2002:adf:8b47:: with SMTP id v7mr1554771wra.133.1612174993047;
        Mon, 01 Feb 2021 02:23:13 -0800 (PST)
X-Google-Smtp-Source: ABdhPJxRiU0yK4wM7pHxMScsrUEJgrMLA/P5aBurhjGx5cZ0jxtB++LorBC//x8+uxwvN9hX5go09A==
X-Received: by 2002:adf:8b47:: with SMTP id v7mr1554744wra.133.1612174992844;
        Mon, 01 Feb 2021 02:23:12 -0800 (PST)
Subject: Re: [PATCH v2 4/4] hw/xen: Have Xen machines select 9pfs
To: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <f4bug@amsat.org>,
 =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>,
 qemu-devel@nongnu.org, Greg Kurz <groug@kaod.org>,
 Christian Schoenebeck <qemu_oss@crudebyte.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Eduardo Habkost <ehabkost@redhat.com>, Paul Durrant <paul@xen.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
References: <20210131141810.293186-1-f4bug@amsat.org>
 <20210131141810.293186-5-f4bug@amsat.org>
 <565bf0dd-a5de-352e-eec7-68b862ed09e4@redhat.com>
 <f6e1917a-f9cf-9ae3-50b1-9dc0ee4f65f3@amsat.org>
From: Paolo Bonzini <pbonzini@redhat.com>
Message-ID: <50306fbf-c6f0-e281-248f-de1bc984b113@redhat.com>
Date: Mon, 1 Feb 2021 11:23:11 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <f6e1917a-f9cf-9ae3-50b1-9dc0ee4f65f3@amsat.org>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=pbonzini@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 01/02/21 10:18, Philippe Mathieu-Daudé wrote:
> FYI using 'imply FSDEV_9P' instead I get:
> 
> /usr/bin/ld: libcommon.fa.p/hw_xen_xen-legacy-backend.c.o: in function
> `xen_be_register_common':
> hw/xen/xen-legacy-backend.c:754: undefined reference to `xen_9pfs_ops'

Ok, so then we have the case of a file (hw/xen/xen-legacy-backend.c) 
brought in by CONFIG_XEN.  In that case this patch is incorrect...

> The function is:
> 
>    void xen_be_register_common(void)
>    {
>        xen_set_dynamic_sysbus();
> 
>        xen_be_register("console", &xen_console_ops);
>        xen_be_register("vkbd", &xen_kbdmouse_ops);
>    #ifdef CONFIG_VIRTFS
>        xen_be_register("9pfs", &xen_9pfs_ops);
>    #endif
>    #ifdef CONFIG_USB_LIBUSB
>        xen_be_register("qusb", &xen_usb_ops);
>    #endif
>    }
> 
> The object is compiled using:
> 
> -- >8 --
> -#ifdef CONFIG_VIRTFS
> +#ifdef CONFIG_FSDEV_9P
>       xen_be_register("9pfs", &xen_9pfs_ops);
>   #endif
> ---

... and this is the best fix, together with:

- a "#include CONFIG_DEVICES" at the top (to get CONFIG_FSDEV_9P)

- moving xen-legacy-backend.c from softmmu_ss to specific_ss (to get 
CONFIG_DEVICES)

- changing "select" to "imply" in accel/Kconfig (otherwise the patch has 
no effect)
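For reference, the select/imply distinction underlying the last point can be sketched as follows (illustrative fragment, not the actual QEMU Kconfig):

```
config XEN
    bool
    # 'select FSDEV_9P' would force 9p support on whenever XEN is
    # enabled.  'imply FSDEV_9P' only defaults it to enabled, so the
    # user (or another config) may still disable it -- which is why
    # the C code must keep guarding uses with #ifdef CONFIG_FSDEV_9P.
    imply FSDEV_9P
```

With "imply", a build configured without 9p still links, provided the `xen_9pfs_ops` reference stays behind the ifdef.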

But really, doing nothing and just dropping this patch is perfectly fine.

Paolo



From xen-devel-bounces@lists.xenproject.org Mon Feb 01 10:32:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 10:32:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79727.145135 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6WVO-0004r3-PL; Mon, 01 Feb 2021 10:32:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79727.145135; Mon, 01 Feb 2021 10:32:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6WVO-0004qw-Ll; Mon, 01 Feb 2021 10:32:26 +0000
Received: by outflank-mailman (input) for mailman id 79727;
 Mon, 01 Feb 2021 10:32:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Dub/=HD=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1l6WVN-0004qq-2v
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 10:32:25 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 516746b6-4810-4949-902a-e30f047f6d31;
 Mon, 01 Feb 2021 10:32:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 516746b6-4810-4949-902a-e30f047f6d31
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612175544;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=X/sz+7w5Nnf5+URDWqmPGWLRVyoLUEp035G4YNVDJ7k=;
  b=XGW2+xEXN2LQ8sJWuGCUBCsRgeFW+GiCKqzPE4MIu6VjYza3YUGY4DPO
   UC3xgPF/zcjIeP1D3qiashwAsCYE1Ji0/hMo7/mkVujOaiLUUJC9+I8IG
   4VFeUBvZRqR4Iqng6rwkVZ3LVjF/P0JoKzEKNOVmA8nu3ejFoOyGEvjxA
   w=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: AYQIc5pS2CiQNYu8DXI3SavB64O+AfWGYMSavoS9abeteykp/8jYuP0chGaMTbv6BAaZp1d1il
 LsxWuhlCXbCdCiUsJKqD9u//0s13kOOXQ7CyxLzsjOqllLlvPG3GPQCTNwrxJpoUXNxOlK/Raf
 S3ziPfYbunUUCXAn1ropttJjqFLQuaI+1wBTnrO5oX1JvUn1r8SrPN3v0nlPLFsgUt1tKhFOtW
 hvx8gmpLHI6fIXHpq5a063+kGxOhubJmMIESi6KXryH19+RyWnCiiIwpsP7XJu4NxKKGm04V9u
 6qo=
X-SBRS: 5.2
X-MesageID: 36462139
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,392,1602561600"; 
   d="scan'208";a="36462139"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=il082yrhNgFaMwFqbq4MR8Kd1303Bndi6xeDqv2J3efjDSXhJpyebOZSJFelV3Euy1sgX4Xjg36RO9FpJrs/63n4yjd/HC9icwADDX7sImENkgC+99BvZenbA5AIvWIIOONnjAVKPit0V7lVEiKTFGtML+V6zgDCfvP2Di4tWRJbs59022ki2HkdUl9vxOMrUOpU2i1Ac/LMPhOttw5Vr6gkaFyO/AojYghCqzImGV0emVGw+iNFX4WiqwSVeyjkLPWV/5zGG2IYJLxMgVZJE7GH1mTo1au7ALCF9xgl0fd89VsG2LRd5Yi2WSIJaF/rCRmDUVYuZ5qkhk4BS15Dmg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=91Vhi0/iypvPXtXefs++3QEF8+WC5JsTtXsPN5FWi6k=;
 b=kUQMIDTO52eKdjfHmJs/NK+9BfXyoh8PrifcIOYdTCOBXCJ1gwY/3U8UxPuFLjLSnsvaqW6r2fa4ia+E8ru0EHT0I0Y2CcDkiEiojKFOfTv6gPaCvLhhZSsjmkOHEVvQXqNXp6tXQd/v5qw5Cz1qaUc+vfkY7ghQhnvQyyOZQNf/mtRUIKy5wTaq2xx2P8TAwGd2HeT7uwKIrrM88X0ejdasdk/50WkgCPIkNYCDpuTwZZ/cjWC4EfXYGGx2bt+A15CD2AQ1QnbeGUifPFehq/9Wk650WpKwocUts2nmnzCbTRuk1dOrqKxiO017YxdLGhKxqNhld/1HTKOhoor7pA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=91Vhi0/iypvPXtXefs++3QEF8+WC5JsTtXsPN5FWi6k=;
 b=ZUBFUZ8NMz9DLKlG+Lzgt5btSo5QZZ216bcOqg5vrg+hPK87In3BZbCWbZ4ezRcOj/j7akwvRkskpLCIt3mEmjbE+FhO7JwJ9HyraxkwKkqn+K6PVaesnPOxCP5sw4LEJLuaUPc4K5eUgl4CI3o5ySEx1saaxh5jjfFhxMCgcH8=
Date: Mon, 1 Feb 2021 11:32:13 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich
	<JBeulich@suse.com>, Wei Liu <wl@xen.org>, Ian Jackson <iwj@xenproject.org>,
	=?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>, Tamas
 K Lengyel <tamas@tklengyel.com>
Subject: Re: [PATCH v8 07/16] xen+tools: Introduce XEN_SYSCTL_PHYSCAP_vmtrace
Message-ID: <YBfYraDV6LLDJLCX@Air-de-Roger>
References: <20210130025852.12430-1-andrew.cooper3@citrix.com>
 <20210130025852.12430-8-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20210130025852.12430-8-andrew.cooper3@citrix.com>
X-ClientProxiedBy: MR2P264CA0043.FRAP264.PROD.OUTLOOK.COM (2603:10a6:500::31)
 To DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 08ff36eb-0112-48a0-d21b-08d8c69ca776
X-MS-TrafficTypeDiagnostic: DM6PR03MB3915:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB391554D41E4D5856D80AB4D58FB69@DM6PR03MB3915.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:6790;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: xxBwlIeFsqYxI83UBud0YP9e4Zt6OSJvSst4lrlLShJShhCOFMzyVfNgqcuePtYiRkPiUl4X+1tJ54cGEIitsmcXp3Zb3D27GqDpxJW3DqFG1zlhk8x0f0vYomWyUpGFZDaP46pAFphQr4Ou/SW1XkZexKDIySXzjU22vLvwgpBD3kqcssHaHNrqHgnRIj/pDsp06TROBuEQaOq0xVTabsZKsLtfATAXJ41tm4eD32jsfh+JBN37w6dmgRtW6UOU9Mqc0XTP/3x8W/OF4VgQrvy9DFjxyF6W/LK8rC08/3m1d8Df+BTt11L2FfANLQFCd4hKHdvEbikkP+ipcmPBN8aRe+YKlPh+KGxj8QDizFme9xOSAWn+CyXG/6J9nq45RBz6zyMwVIU+8q7//l5zqUYMXMNO+dAR2z+kDqyaJ6vVPr5KCpn1N8IE7I/X9V1yFDU5ssBfHteF8mBCR2G5eEWiU4TIj1BEnAkqBec7Evxyh0kt5yzkwcxSE991qE7KRzKiZX1NGBGo+dXMTKVSAA==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(366004)(346002)(39850400004)(376002)(136003)(396003)(16526019)(186003)(2906002)(4744005)(8936002)(478600001)(6636002)(66556008)(8676002)(6486002)(66946007)(66476007)(85182001)(6862004)(26005)(316002)(6666004)(5660300002)(33716001)(86362001)(54906003)(9686003)(956004)(6496006)(4326008);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?SWNST01EbVdKMDhzZU9RYjlmYnpCZ1JFSjFmaW1sTEZUODZUUlk2UTErN01H?=
 =?utf-8?B?TGpVaElGS3lzUTVlbTBWcUE2NzFiN3VaNmlQNW1tM0xDdlA5cmJMYldsYnNo?=
 =?utf-8?B?ZjlCbW5IYzM1clhKVFFxalYxZHBMOGhJdW10VFcweEhDM2swZlNBRzVpLzNB?=
 =?utf-8?B?VWJEMmt3SU1TQkZRbktiYUJrVFNNSHM0UEVxSkpmSG44N3craEsxRUlqdzVr?=
 =?utf-8?B?d1RQbnpIQjRxUGMzOVVZQVNkbVM5SUwzV21JV1RvQmpBUXA5VGtvNG1JNHNa?=
 =?utf-8?B?MFRFTS9xK21OYklzdGJRRmRLV0VKMnZlV0lERWVkQ1hydkUyTW5UQ2MrTCs0?=
 =?utf-8?B?dll0VDR4ZWo0R3ozS2xpd0RxWTR5NmxUOCszZWZGYi9JUnpkbmJPNDZZMGVj?=
 =?utf-8?B?dTFJM0h4clNQMXpaWDVrWnBBSnNyL1hWSjJUbWJyTEZaOWpCRFF3Vm1rdGZl?=
 =?utf-8?B?aUkrM1BOcnp6RXc5VUYrVFNzVkc1dDJDeFk2eTY5S2JhdFU2Ty9GUzdib2kz?=
 =?utf-8?B?Z25oYURROERpam9oQXl4Zk9JZ0VIY2NWOXU5TDVzRjlidERyU25VeWVYclVB?=
 =?utf-8?B?T1JyL28ybHhxMElJN3g0WlJtSUJsWG9JS3dpbjdwdkFEWTE0Z1JybTYyWlEx?=
 =?utf-8?B?Q1hLcTJLLzdjNXJhMEdDTVhZdmsyT3BLdnZEY1pNS2M2MVdmY1FqYWtGcVZ3?=
 =?utf-8?B?c1hIZDMxWFhMK2NqUkpYVHcvc0RWSDZGSVF1TDZnK2NzUVgwWWpxYll6bXRm?=
 =?utf-8?B?b2JyaDk5V0ptTVZtaEV5L2s3cTVKSkVkQi8yT2twYTBlNCtoYktKVHlsWFZN?=
 =?utf-8?B?YUl4NjdncWJOM2pvT3o5eUx2MzhtdGtYejgyOGNkZ2xqMk5TNFNDWTUyTStt?=
 =?utf-8?B?WHh0cGFCNlVDd0VDL3JYalZZS09ESm9EbUJ5ejgxWDJrV09kQ0FXaXBOQ0VU?=
 =?utf-8?B?YzJlSzRhWUF5TnJKcFM0aGpnR0xCTXhuMm1ueDdJYkozU0xuQnFNb3J3b1Y3?=
 =?utf-8?B?cWgxcmEyWW5YWFZrRmxJZkl1L2ltQ0hYaFRRRGRNcUVHdnNNR0FFUFNoODBM?=
 =?utf-8?B?UkIvV0NPV2NuK1hNcEFqbDlDR0cvajVGRGFud21ndXpidnQrR04xcE52dTV0?=
 =?utf-8?B?bFl0VUdDYVFaTXpQVlVZaGdVUTBIMFhaTTJWaGMvNjVDUUhTK2MxaDVVUTdG?=
 =?utf-8?B?RUVCdG1Gc0NNSTZXd1oyT1ZCZTNieE43dUphQWkwUmZtc2cyYXFhbzFVclIv?=
 =?utf-8?B?d0lZcHNqWnZiVVU5eE9wNXU3cmlPUkU5YWZqdmx2TXF4c0xKdS85MjZybzhF?=
 =?utf-8?B?RzR1aDY4NFlMdE5vT0s3UVJ6NGdhUXNkMFdYcENqd3hyYkVBNnVKYmZMbGRZ?=
 =?utf-8?B?WSs3T3JjSVNPZ0dYUEZoRE5FNm9VYzVJZTdPNjk0K2RCRXlYZ1J6VVB5M3hT?=
 =?utf-8?B?ajBZRWI4ZStmZzdOakYyNUpVWTdMenVKS3IvYU5RPT0=?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 08ff36eb-0112-48a0-d21b-08d8c69ca776
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 Feb 2021 10:32:20.0979
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: fUa+1qDXv8G7OoR1Rvs8zB/UKGBGkexBbc91sNPtUpH+TW7/SLB3hJk8kZGI++Jz/dOccSoEZ41LPW47cCbCVg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB3915
X-OriginatorOrg: citrix.com

On Sat, Jan 30, 2021 at 02:58:43AM +0000, Andrew Cooper wrote:
> We're about to introduce support for Intel Processor Trace, but similar
> functionality exists in other platforms.
> 
> Aspects of vmtrace can reasonably be common, so start with
> XEN_SYSCTL_PHYSCAP_vmtrace and plumb the signal from Xen all the way down into
> `xl info`.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 10:35:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 10:35:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79728.145147 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6WYG-00050S-7w; Mon, 01 Feb 2021 10:35:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79728.145147; Mon, 01 Feb 2021 10:35:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6WYG-00050L-4G; Mon, 01 Feb 2021 10:35:24 +0000
Received: by outflank-mailman (input) for mailman id 79728;
 Mon, 01 Feb 2021 10:35:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VcwM=HD=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1l6WYD-00050G-W2
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 10:35:22 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 937708e5-1cc5-4d43-baf3-c15752ee3ce5;
 Mon, 01 Feb 2021 10:35:20 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 937708e5-1cc5-4d43-baf3-c15752ee3ce5
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612175720;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=w1rzNtUr0SMeVPcp8/AcCHbGnGHGvDcSbttKrJJDaWw=;
  b=GK1nJg4/KVDlFjFMVytexOF9XZG6FV7v61LD7lTMkY6OlDOpCZzWXMNj
   Tk7TlKZFU+uXA86HEr+hojRB3GJHwS0w50yJ9pjunWl9UxxKPyioGmMHH
   LeDaQb46JB1ZSjR2tH3bomtQ4pBXIUEUs6084r+vlBwf8t9AWsOdlsgWd
   8=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: dtPK9egNufoCR5qvjktNyvRstJT78xV6UtI6PTXjDlDWiUAycvD1bd+Qq7A/bhImqT1WOOTC00
 FuXuYR+II/DO+nt9S6Tbr/j3TRmi3Ed7obk7sUC6AHGMu0HCM7CAOYr00+B8EM28tDuZQv9bFX
 VvTNl7Y0V/0+T6hD+kk2xet0cGeH3QVYfoCS28OOdLsKDxDK7G2wNE0SZz8hotYPLMOs8tvQUM
 w3Fie6z17heWhScVQnZYeozd/pGRrgOYm7/LYqVdqvAVtx/lbsu6IS/nWZmEQREu2nyL9+fKRm
 o2U=
X-SBRS: 5.2
X-MesageID: 37590542
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,392,1602561600"; 
   d="scan'208";a="37590542"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=dKOWYMgYyngDrb3BPCQD0KIojPzC4T5d9ENaOXHOGgbP1KmRHsW1oJXv1gBDBoJ62xBED7Xi5iGkDtEgt0j7bTTAflxg04T6iE4utDTAeY6aRhia/4Wu8GfE7ujLvi2L24NGLqt2/PMRmOjnQTXwrHMzqmfpQuiSCDJ1f7G1pT6wNXlE4EqW7wB8UM6JmIZU2ex9evCizL/66fkkXBaAOBL8svsJN75HHvdSYSf7RbgnpXsCv5EBi+1xYFiCkW/slMr84GL72iuqQJ3/Gv78H8uXa0JQw1iw8qYLq/GF4RYJ7gBoBovQmNzZSAa+gCe8zeKEw3O53KlAEZ7iEb1maQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=w1rzNtUr0SMeVPcp8/AcCHbGnGHGvDcSbttKrJJDaWw=;
 b=OZaTaUhLF+Wo0kKT8ojMnOcIJoMjGpFG9M0ie2RwXmwIe6ndXlqXtR6LKJKNhMKEbJZCYxnpMxctcNPLS4haGqj/gEi2i1DGlH2ATWGL8016ITMfTP2SBPWI1QlxEzoy+2J85ffuxX9R/L/nlR+FCGxReG9gz/+ZuTrDvpu5PO4LJUKyLFJPPWKrjbzfipScVMGQkMc+/VH1lBHBWyCWoWao0TpSQzmf1p/M7Ml7IG94jEe6PmPGdtxC/pHA98GMtfosF7JXEkjcpAUb0sZ4OFyf4vTL8JT/FHe8/5h2YM4/C8ncz1hhRpo42YiT651/We7x1HLLV2JqEe+X2lVWvQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=w1rzNtUr0SMeVPcp8/AcCHbGnGHGvDcSbttKrJJDaWw=;
 b=H9OUx1TGcGNOk+BrjcXgQwuKYqTcn3Cr1d0FdyOxTXyohjkKgqxYTedSVHoBoCG3QG86cXllpNzA6YWL4V2B4EsjTCKV+ePl15EjAkY+EoVuvT3ZAxkpgqM851USsu/sj2aXnPDip5MOqIQf7xzSrjNqzQGVF+9PugALjsKiL/M=
From: George Dunlap <George.Dunlap@citrix.com>
To: Elliott Mitchell <ehem+xen@m5p.com>
CC: Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>, Roger Pau Monne
	<roger.pau@citrix.com>, "open list:X86" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>
Subject: Re: [PATCH] x86/pod: Do not fragment PoD memory allocations
Thread-Topic: [PATCH] x86/pod: Do not fragment PoD memory allocations
Thread-Index: AQHW8qXhfghyfjZCb0iw2DUJHGLmxKo4G2OAgACDU4CAASMUgIAAcJSAgAELJICAAK7PgIAADiAAgAAXqgCAAMbHgIAAiAKAgABHgwCABGvygIABEjOA
Date: Mon, 1 Feb 2021 10:35:15 +0000
Message-ID: <E8806231-28EE-4618-B6A5-1B50813BF4B1@citrix.com>
References: <YA8D85MoJ9lG0KJS@mattapan.m5p.com>
 <c0a70f39-d529-6ee4-511d-e82730c14879@suse.com>
 <YBBWj7NO+1HVKEgX@mattapan.m5p.com>
 <f6a75725-edc2-ee2d-0565-da1efae05190@suse.com>
 <YBHJS3NEX5+iEqyd@mattapan.m5p.com>
 <67ef8210-a65f-9d6a-bea1-46ce06d47fb7@citrix.com>
 <YBHo/gscAfcAZqst@mattapan.m5p.com>
 <44450edc-9a64-8a6e-e8d3-3a3f726a96bc@suse.com>
 <YBMB1VUhYd3RUuDO@mattapan.m5p.com>
 <DC18947E-BC67-4BF8-A889-04B812DACACC@citrix.com>
 <YBbzXQt2GBAvpvgQ@mattapan.m5p.com>
In-Reply-To: <YBbzXQt2GBAvpvgQ@mattapan.m5p.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3654.20.0.2.21)
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: a7b9d2f6-9dc6-4384-0b29-08d8c69d101d
x-ms-traffictypediagnostic: PH0PR03MB5733:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: <PH0PR03MB5733E5DBD471418CA769B6EB99B69@PH0PR03MB5733.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: QQzwEbzhlVvZSQyTnTyq0KACD37SeFhz0fCOYogi+hY8+u+Nnc+51mGttErtFO28WwOoW/OJ19QnE3NnpRfnwqC8bc5r+63mN52L3bmw3YueFIQuvQ/H4+qCNjNMmG5aRF4GgNVgJ4AYAZgYOZtDiheiPkzCM4kvsXcabseC9zdTjft7PApSuwsU0kxHcGudkVWTeGYee09NzeOKzRoA3qvokzFUehGZWeyJYRMI/6EGUU057+9sYv5deheN53595YXDkv7PH2jDmZP+WngMGpVx2BZZ6UOyCPJT5sVrGHSfrqVNzvHFf4QCi9hMxEVNcyX4aQdfnsaaPpdJ0+H5vofC+mAtVLqsv7lBvA7fmj7ftaL6c9mLYsC2mKgsfZxvcNK9/6uDSJBKu3zNOxyvZU8H7zw4P8O57ZwDeRLhpTOep4MuX6l1NH9Xq0ed0/vRdpYEIaeFah0TqZCYcxnB+Iucd+syxx0Z9dfEQeE8yCssam75zjpTPm32ut/xA+kpzfKxeyPgKA++c1aQtQWRFoLjuz4SfnLG7r+eg2R/uAWEFuwyisMLtvravWzId6VhGaQDPQxMYEP33IOgRjyANQ==
x-forefront-antispam-report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:PH0PR03MB5669.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(366004)(376002)(396003)(39850400004)(136003)(346002)(316002)(71200400001)(36756003)(6486002)(6512007)(83380400001)(33656002)(54906003)(478600001)(8936002)(107886003)(76116006)(6506007)(26005)(5660300002)(66446008)(86362001)(2906002)(4326008)(53546011)(66556008)(64756008)(186003)(91956017)(2616005)(8676002)(66946007)(66476007)(45980500001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: =?utf-8?B?L2pXYU5RZWhqaEtiSzFacHJJcnp0dHRXemc3STVhcEVJaWEwczVaZ3JmTmdG?=
 =?utf-8?B?UGZ0QUUreXJ3ZUNUd1BIZSszdVFjTGQ5ZEN3ZHd2RHNTR255MStqanBMVTc5?=
 =?utf-8?B?ZkRqRWd5RjV4V1NHWDAwSllncUtDelp5cUxNUlU4dFdsS1BXcnI4dzM3czhZ?=
 =?utf-8?B?ektoaDlaemtJeE4zK2JDNXppZFQ0cnFVRjVBU0s0TWwvWnVCYzJkalhsZ29t?=
 =?utf-8?B?blBhd2IxYzE1ZkIyRzV2S1Riai9pby9CREtNUFBOWjVXVitaemw5Q0NSc3Vq?=
 =?utf-8?B?YnNBaWxSdm8rYUJkck42bE4yWnFJeWxpV3FGZDhTUkhPMUJWOHlqc2pKRXh5?=
 =?utf-8?B?bGZ2UWVOV0thVnRLQTB5QkIrY1cySWkyY0ZFYW13U25MWVZlSFFqSDBvc3Iw?=
 =?utf-8?B?RDJ5MG9UNTVRbExObnMxN3RwZFBVakJaQ0JPWnNsNUZ4UXBBL25zU1ZiaVl0?=
 =?utf-8?B?TTRNMEEwOWZNLzhjNHhTaENuOFF6Mkc4S0ZPQmZpRC8rVEdMWERFV010N21C?=
 =?utf-8?B?bjlhQzZ5Rm04V2YvWmNBeXlEZUFRcjdnTVQ0cjdhUDVBNmdGbGdwQWh3Uml0?=
 =?utf-8?B?N281U0ZoVUVUYVJTQ1docFBnY2xJZnQzeTVrcllRZ2tTVWZocjBVR3ZhdG9E?=
 =?utf-8?B?aWlvODYwdDFBL1ZCelBZMFJtallrLzAxakd3VGRMamVibUZRMFJqWDRMWW81?=
 =?utf-8?B?amlXd3BKa3A4R05vTDBwVmR5R09ML08zU2tsWk5ld2hKZHdWY05jcU55Y3Bx?=
 =?utf-8?B?QVhwSElYQWZZS0lHcEVUMnFXQmYwSE15N1JLTkNDVGFrYVBtRUg3cEpHSXMr?=
 =?utf-8?B?OGxXSkdKcmxuRjNTNkc4VExiUEZHMVA5K1o4ZTJwVjZNcGFia2J2M1RWUENy?=
 =?utf-8?B?enlJSW9oSlZkam02MEZGUnVncHF3QWtJU1M1ZWRZRWxpVkhFaVhTVmluWU9I?=
 =?utf-8?B?aU5nWkV5TVFoSVRhRG1BRGlUK0o0Vytka3hCUks5dGVqOTJKa01UdDZ3bjFO?=
 =?utf-8?B?d2oxc25md2tjSU9tQVM1QTRQNnF5R1RKUEZlT2U3RDlncHBPQ2RKcUhHWUYx?=
 =?utf-8?B?THcyL25pUjM4THphR3hSSE1yNjlXaGJkWTdVRTltTTVldXRObUFmMEVvV3l3?=
 =?utf-8?B?SnppY2RkdW9lZ0p5cnhja2dpbEJ4KzBScS9WNVlIY1VDQU5CVXF0WTQrVEJ0?=
 =?utf-8?B?MHN5WWxaSTRWa25ac0tsR2U2S0c3RnF5RzJ2TW1EOWl0UE9SMWNpZzlBcWxv?=
 =?utf-8?B?SGZZdmV0dk1WTjJpRVQveEpSWllqM2VUSzg4MFZpdXNhaHpxQ2RsclNSQUR1?=
 =?utf-8?B?VjZxQ0lZT1phQ2MvK2JDdVRpc1JMcW1pOVF3cWU4Z0VFbFNQT09DTjdrUVhn?=
 =?utf-8?B?dVhycFlLeURzNVZUTEpJb2lCc2xFMjhsaXRKOGEvS014YzU0Q0RiR2dacFAx?=
 =?utf-8?B?UDlqbDhhZHpUM0lRNEg5N0NQcERtVGQ0WVhxOU1nPT0=?=
Content-Type: text/plain; charset="utf-8"
Content-ID: <C1D5F62EF48C814CADB5422F09948623@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: PH0PR03MB5669.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a7b9d2f6-9dc6-4384-0b29-08d8c69d101d
X-MS-Exchange-CrossTenant-originalarrivaltime: 01 Feb 2021 10:35:15.3666
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: aFtR9eFM4yQCYQBZC97esXlaIcYfV79T103OfZKj5izDfIqIRX7mvrNmK2zBW62kn5fQkDv3P9WDnpx52q7IpqzwHuGnzjMvHj95qrQmK4A=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB5733
X-OriginatorOrg: citrix.com


> On Jan 31, 2021, at 6:13 PM, Elliott Mitchell <ehem+xen@m5p.com> wrote:
> 
> On Thu, Jan 28, 2021 at 10:42:27PM +0000, George Dunlap wrote:
>> 
>>> On Jan 28, 2021, at 6:26 PM, Elliott Mitchell <ehem+xen@m5p.com> wrote:
>>> type = "hvm"
>>> memory = 1024
>>> maxmem = 1073741824
>>> 
>>> I suspect maxmem > free Xen memory may be sufficient.  The instances I
>>> can be certain of have been maxmem = total host memory *7.
>> 
>> Can you include your Xen version and dom0 command-line?
> 
>> This is on staging-4.14 from a month or two ago (i.e., what I happened to have on a random test box), and `dom0_mem=1024M,max:1024M` in my command-line.  That rune will give dom0 only 1GiB of RAM, but also prevent it from auto-ballooning down to free up memory for the guest.
>> 
> 
> As this is a server, not a development target, Debian's build of 4.11 is
> in use.  Your domain 0 memory allocation is extremely generous compared
> to mine.  One thing which is on the command-line though is
> "watchdog=true".

staging-4.14 is just the stable 4.14 branch which our CI loop tests before pushing to stable-4.14, which is essentially tagged 3 times a year for point releases.  It’s quite stable.  I’ll give 4.11 a try if I get a chance.

It’s not clear from your response — are you allocating a fixed amount to dom0?  How much is it?  In fact, probably the simplest thing to do would be to attach the output of `xl info` and `xl dmesg`; that will save a lot of potential future back-and-forth.

1GiB isn’t particularly generous if you’re running a large number of guests.  My understanding is that XenServer now defaults to 4GiB of RAM for dom0.

> I've got 3 candidates which presently concern me:
> 
> 1> There is a limited range of maxmem values where this occurs.  Perhaps
> 1TB is too high on your machine for the problem to reproduce.  As
> previously stated my sample configuration has maxmem being roughly 7
> times actual machine memory.

In fact I did a number of binary-search-style experiments to try to find out boundary behavior.  I don’t think I did 7x memory, but I certainly did 2x or 3x host memory, and the exact number you gave that caused you problems.  In all cases for me, it either worked or failed with a cryptic error message (the specific message depending on whether I had fixed dom0 memory or autoballooned memory).

> 2> Between issuing the `xl create` command and the machine rebooting a
> few moments of slow response have been observed.  Perhaps the memory
> allocator loop is hogging processor cores long enough for the watchdog to
> trigger?

I don’t know the balloon driver very well, but I’d hope it yielded pretty regularly.  It seems more likely to me that your dom0 is swapping due to low memory / struggling with having to work with no file cache.  Or the OOM killer is doing its calculation trying to figure out which process to shoot?

> 3> Perhaps one of the patches on Debian broke things?  This seems
> unlikely since nearly all of Debian's patches are either strictly for
> packaging or else picks from Xen's main branch, but this is certainly
> possible.

Indeed, I’d consider that unlikely.  Some things I’d consider more likely to cause the difference:

1. The amount of host memory (my test box had only 6GiB)

2. The amount of memory assigned to dom0

3. The number of other VMs running in the background

4. A difference in the version of Linux (I’m also running Debian, but debian-testing)

5. A bug in 4.11 that was fixed by 4.14.

If you’re already allocating a fixed amount of memory to dom0, but it’s significantly less than 1GiB, the first thing I’d try is increasing that to 1GiB.  Also make sure that you’re specifying a ‘max’ for dom0 memory: If you simply put `dom0_mem=X`, dom0 will start with X amount of memory, but allocate enough frame tables such that it could balloon up to the full host memory if requested.  (And frame tables are not free.)  `dom0_mem=X,max=X` will cause dom0 to only make frame tables for X memory.  (At least, so I’m guessing; I haven’t checked.)

If that doesn’t work, please include the output of `xl info` and `xl dmesg`; that will give us a lot more information to work with.

Peace,
 -George
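
[Editorial note: the `dom0_mem=1024M,max:1024M` rune discussed in this message goes on the Xen hypervisor command line, not in the guest config. A hypothetical GRUB fragment for a Debian-style system is sketched below; the file path and variable are assumptions that vary by distro and bootloader.]

```shell
# /etc/default/grub.d/xen.cfg  (illustrative path -- check your distro)
# Pin dom0 to 1GiB and cap its maximum so it neither auto-balloons down
# nor builds frame tables for the full host memory.
GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=1024M,max:1024M watchdog=true"
```

After editing, regenerate the boot config (e.g. `update-grub` on Debian) and reboot for the new Xen command line to take effect.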


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 10:52:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 10:52:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79737.145159 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6WoG-0006xG-PJ; Mon, 01 Feb 2021 10:51:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79737.145159; Mon, 01 Feb 2021 10:51:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6WoG-0006x9-Lb; Mon, 01 Feb 2021 10:51:56 +0000
Received: by outflank-mailman (input) for mailman id 79737;
 Mon, 01 Feb 2021 10:51:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Dub/=HD=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1l6WoF-0006x4-5N
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 10:51:55 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b4b4005f-915e-431b-bf55-61e6327bf8a5;
 Mon, 01 Feb 2021 10:51:53 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b4b4005f-915e-431b-bf55-61e6327bf8a5
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612176713;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=Zil1B4wupZZkBaHmSr/HdAE1skdwfI3C1XPaoopK2/A=;
  b=XlQ/oDprvMhmM9fGQNVMra+HVM+RJM5gGhpyC0jV/9nYyh/0rCv3E1+b
   6UJ6GccU4MQXeQ2VK9C689TnfGjbOj31sUi6Hq6mmKBwgJasYAEfWbb8X
   Lmu3FX1OkITXAmLhrLYcl6AUPtjM9XSffjCMRk/u0L0faF0XAtGQhHMoi
   g=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: 3jBgMnvTeCgPLpUuyhrNiTegVVrAdxvSNhexpm2gNTbol48gS1D737pY/K3Nrlz1HJViOXP12j
 AjOXelFhS+IO4jsJ2gl+KFoUHjtryomFvvjybHoZXRsW8W23FlFBK+0AbmCGAkzwDJveg5Rt7a
 h1zWeSY+QzP5j/N8QkIkVFK1EaLtxzSdBWwfgcqdYScPM+Uw8N/83K4pAUfOwYLXwuCHfsGG8o
 7/HNt/i8v4XRg33edxpgvkZT9Rzyh/2lq7D0BWdJhnmf0novThBbn06ylfIL4NGZwYBV8TFA/7
 OpQ=
X-SBRS: 5.2
X-MesageID: 36643314
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,392,1602561600"; 
   d="scan'208";a="36643314"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=a2IEpwaaUalgnFYdc+kBa1wsK8LQ3gbR/CjyAfYv2mguFYANJRA8oSO2jm4udJK+jMGzFsRXnSeKmQofFcXjdE7kiwbwAkXttN9bZIwA23kgFiE0q6oMdmPR+Te/y6RB0gK8sl0BQA9GrypeApcxqpAUIUpZ/whVeO8ALtihLoYhKD7bFWnvqGb+cc/VAwHS7pwA6KeiOxZu3ZnqYubNrmiuvg+ICNhR6TuCFuCcNPSNha8VDmfsCQRgyeofue9E4qIWifLJYFdwtT/IM5Vt6wH/Ae8eZWtH48870V6eIscsA0QiodNgnTWEtfuaP67naiHyEk5Z43q8XYamsEgVcw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=KYLg1wGSXbQKI4Z4xRkQiVWoIqP80XI6rUhcotB2DRg=;
 b=jOIGhg2wWTLoakI2+7zIL9e0JJXu38Fjbr0I2DPWEvzLiPVwJmY4JeMToQcgLQz9H2NpTsuo0RDIiVryy1Bf07lwx6noI1nWAQVU/+JdlVYCdBEt50QgpZlNkR7upJJYv4WVwiM7kjsm360sinB9k1jfmoyjWZ+fRRW6eaYtooAXkl5S9+XOJXxKjIUxrvrzaNcz4DP/0dE3YaC3i8l3fttSf1GkC6aGhPjoj3jnSK/KKR1b1WFHo1sZ2b316SzXyRs5Egtv5Pfesc5Ltc5wweLPovNlC2nidMLFWNy/S8Z1NzFthR41SkTdmeLdwXTOa8JeHMHndR/IrWJ5cnjBkA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=KYLg1wGSXbQKI4Z4xRkQiVWoIqP80XI6rUhcotB2DRg=;
 b=qrQwqggIGP//6umtv1a8bmVjC9dHDnAq+xwy4c8/38fvY/sEKaJ5bux5ERuCl0oK5NoEoXApqMNcAhsPDKK7r1KYk6VTmJ0aD9Q9jLE3JCePhXNOkS+cjA4MMfVO0BMcAC8jDBS8vd2DFKwCiZWFcXRp6u/R2lAXMBNyHKnWRzA=
Date: Mon, 1 Feb 2021 11:51:40 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>,
	=?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>, Jan
 Beulich <JBeulich@suse.com>, Wei Liu <wl@xen.org>, Anthony PERARD
	<anthony.perard@citrix.com>, Tamas K Lengyel <tamas@tklengyel.com>
Subject: Re: [PATCH v8 08/16] xen/domain: Add vmtrace_size domain creation
 parameter
Message-ID: <YBfc+BaXLm5dSvkG@Air-de-Roger>
References: <20210130025852.12430-1-andrew.cooper3@citrix.com>
 <20210130025852.12430-9-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20210130025852.12430-9-andrew.cooper3@citrix.com>
X-ClientProxiedBy: MR2P264CA0053.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:31::17) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 99b14d32-64e7-4d1a-d91d-08d8c69f5e3b
X-MS-TrafficTypeDiagnostic: DM6PR03MB5324:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB532458CF06F12547EC3C5B548FB69@DM6PR03MB5324.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:6790;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: fj35sBvW4vYsjBT/kpHPrIh5MXaW7oK+FGh/8V4ApFTZOJ2FZFGSAxQOlF1tJ/F6/T555EU3vHW25D77xHbIv535ZmXOwuvhc4VWNpFiHOMZxpVNfJwaYN3TnT/hG+iECrtDWpvGQ1p/RnvLoF6gSnTNkRMrHdFy8eX2RYjrchmQ7ob8W+u298LQM004FHWtC1N4e8j24YRW94F+WktGro88J1MbjD0vX4b50ojwtHhcQqudOigkI5dp9JrOYDNtKET1b+SVUyfx9zKtdL2yb6MYRAx0WCsOpGrk6DDwYqxtkKFS5wl7M/ltLIFs3so4/xRItZ7LWySs3ACxokxml0Z8ja7KLn/xzoA+cOsJfE9k4+K1p25tWeNUNs+iscX7AYObfizbVRMfN0MY3gPvneN/eOgrpPSEm6OtCIgj7zd1obOUgQmypJNcTCKmthoIATZ8VtQE9eY4N+vdBMfjGa9KI1QVezT/XUKQXp2B2nNvmIYBwyUuujyeliBe8pdcqVMROqW5vBGvd4CI59Ss9aJaEgp0GOnGxxCbzi1q0WXlZKnVveHTlKcUaTbHh8qx
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(136003)(376002)(366004)(396003)(346002)(39860400002)(2906002)(956004)(478600001)(5660300002)(6636002)(86362001)(8676002)(33716001)(66476007)(66556008)(6496006)(66946007)(186003)(8936002)(4326008)(316002)(85182001)(16526019)(6862004)(9686003)(54906003)(83380400001)(6666004)(6486002)(26005)(66574015)(67856001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?bFlXemt4WDFQMVF0bUV1b0xrNzNTcy9nWHFLRElwa3pwQWo2Sm15UGNCNk9Y?=
 =?utf-8?B?TEpIMXlLUUhMdHdBdmt5T2xvOEExK1VVQVpXc0pUaWNoa1BEM3ZXVUpsU2tt?=
 =?utf-8?B?OWZsRkt1dzQrY0pkUWY3UmRkZGN1a2ViZDNwYVBRT05SMnpKa1htQ0NZRzFt?=
 =?utf-8?B?VFJkT2hBUVlvcmozbDBRdGpWM0krSGRxdDQ2OTJTdUFkczZHVHl6cVFmZ2I5?=
 =?utf-8?B?Z2puemU1UDZneWF2eXRXRDhCc21FSEtHVWxWYXpGY1AwVmt4QWFrVEg3aDBw?=
 =?utf-8?B?cy8reDc2T0ZpdnRsdm1rcC9tWUl3aWxqdkxXbGtINUdyUlRtSnBvYmtNakhT?=
 =?utf-8?B?eCttbnZsOExGQnVHZ1FSMm1ieFNYclU4TXQrcWdkVWNXaWZiMXpoUlloekNT?=
 =?utf-8?B?TjRFQ0NKcmpPaDhuNVZ3ZzNoWHJ5eWJwOVROWlFVM2trcjNUVDJBMHJlQ2tV?=
 =?utf-8?B?dDNNZnEzTW4weHJvZzhMOGNpYjBXaTRsM3MyWTIzTG5iMkNuUWdlRy9kenBH?=
 =?utf-8?B?USsyUDE5K3lQbWtCL0l4MXp2Y2Q1SHpIelU3Zk5SOG9xYWNkRmsrUExUenFY?=
 =?utf-8?B?TVdYNFdIL2ZMdEVrNU1GL3F3cEZVVjVVcytDYmpoVnlITjdEWTlSVXRxcG8y?=
 =?utf-8?B?b0l3OW9YSElObjA1a0RxeGpOd3NpTVdWS2ZIWUkwTHVNT1dHenBMaHcxblgz?=
 =?utf-8?B?L0V3Z2RlcHRMelZ0ZkpOOFk1YWVZVFJUUnJ1MFFIbkFXQ09rL01qd1hnMmdl?=
 =?utf-8?B?K2R2ZTFORTMvZ3hZUHE2MzdrTjNBZ2xScGNGNERGbXNVWTdqa2lqaS9pZHNx?=
 =?utf-8?B?UGJMVS9NZFdFaGZiSVV2UVdlSHFlaXUvZTZnOUFiUmJ5TUNRd3duZE5BRDBZ?=
 =?utf-8?B?aDZpRFFDRUNIZjZaVE9iTWc1d1U1cXFubE9mNmx5Z0dXYUttaHAydmZCeUpj?=
 =?utf-8?B?WUhGOURsOG5mZTZ1UTRxc2J6WEVmYlNRUkVYOFdEYVIxcGtOc1l5K2FmWUNU?=
 =?utf-8?B?L0FuZ1k1TWVxYUlpN3VldUlyWmxiYzFzc1JKaEI1dlkvM3V1QkNSUmdpV2JK?=
 =?utf-8?B?VFlKZFdLb04wbVRTZjFJZ1g2aDRuNExKRHRvMVRDM0lmQmpiWG4xMXhNdlZv?=
 =?utf-8?B?UE1KMFJUYkRwRVNyOWpsbVNtbFFzbXZxTDV2R3JyU0xtWldrVCt5NFBROWhr?=
 =?utf-8?B?MnpOd0Y2ZTZWdTVuS1dkeHd4SktjeFM0a1VTSE9MbWpYb05qNXBDNHVkSVIz?=
 =?utf-8?B?NkhyT3hRQnpsTCsyY1VyYVJSZ2hQTnZGUjU1ditWVStpQ24vQ1lkRnJsZDN5?=
 =?utf-8?B?allUN0Vnb1RsWXY3WjRFMDB4Z0NzcUhYQVZ6UGk1aXVLOEordU1EYkR2R2FL?=
 =?utf-8?B?d3A2RHZPcGdkZW1WSXhKZW5IUHJLTnBkZlIvL1hXaVBQdGVlMUNuZXZYTVNs?=
 =?utf-8?B?VjhUV2FaUmFNeXhWMnlpcW1lN2VKaThzWU1jeXZ3PT0=?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 99b14d32-64e7-4d1a-d91d-08d8c69f5e3b
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 Feb 2021 10:51:45.8365
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: WpSqazUth7XuwdEYdB+XdjL8ikaKfAaPb9Zea9RpGDsw4VaGu5p1+VHRrvTSKcH4ysfcIM3Ir2yN8ROzAd36NA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB5324
X-OriginatorOrg: citrix.com

On Sat, Jan 30, 2021 at 02:58:44AM +0000, Andrew Cooper wrote:
> From: Michał Leszczyński <michal.leszczynski@cert.pl>
> 
> To use vmtrace, buffers of a suitable size need allocating, and different
> tasks will want different sizes.
> 
> Add a domain creation parameter, and audit it appropriately in the
> {arch_,}sanitise_domain_config() functions.
> 
> For now, the x86 specific auditing is tuned to Processor Trace running in
> Single Output mode, which requires a single contiguous range of memory.
> 
> The size is given an arbitrary limit of 64M which is expected to be enough for
> anticipated usecases, but not large enough to get into long-running-hypercall
> problems.
> 
> Signed-off-by: Michał Leszczyński <michal.leszczynski@cert.pl>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index d1e94d88cf..491b32812e 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -132,6 +132,71 @@ static void vcpu_info_reset(struct vcpu *v)
>      v->vcpu_info_mfn = INVALID_MFN;
>  }
>  
> +static void vmtrace_free_buffer(struct vcpu *v)
> +{
> +    const struct domain *d = v->domain;
> +    struct page_info *pg = v->vmtrace.pg;
> +    unsigned int i;
> +
> +    if ( !pg )
> +        return;
> +
> +    v->vmtrace.pg = NULL;
> +
> +    for ( i = 0; i < (d->vmtrace_size >> PAGE_SHIFT); i++ )
> +    {
> +        put_page_alloc_ref(&pg[i]);
> +        put_page_and_type(&pg[i]);
> +    }
> +}
> +
> +static int vmtrace_alloc_buffer(struct vcpu *v)

You might as well make this return true/false, as the error code is
ignored by the caller (at least in this patch).

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 10:54:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 10:54:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79738.145171 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6Wr1-00076U-74; Mon, 01 Feb 2021 10:54:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79738.145171; Mon, 01 Feb 2021 10:54:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6Wr1-00076N-41; Mon, 01 Feb 2021 10:54:47 +0000
Received: by outflank-mailman (input) for mailman id 79738;
 Mon, 01 Feb 2021 10:54:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Dub/=HD=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1l6Wr0-00076H-2i
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 10:54:46 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e47bf0d8-2e74-4d75-afb9-ca4d5714e311;
 Mon, 01 Feb 2021 10:54:44 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e47bf0d8-2e74-4d75-afb9-ca4d5714e311
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612176884;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=t9YagauD79mi4YmCsOHZ3R/PJt7bZjM72a5/MZdCZIk=;
  b=bjQwtpmu7Asvx45eXE9yc9BzQq9Fju2Ig89/ma9pajtEEU406IS1ARqL
   qu3Q+O9TYiwHsENN+TeVW8u1dyWmSi2lpPQbDr6xhwEx5b8bWbjg87vMW
   0RcU6/UThZhcQw2poxaMfuLGCXOSQVeFr1UEOxm/V4bVl+eJSNrhba4hP
   0=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: y4v+UCg4gpTRFVEbIOaWsOqVCNgA+3MQYGSsPGcd7Pmhxxz+Zh5gL9s6xaD7cKthU4ChLy7JnT
 u7b2Zu9x4zgUG1TarBOlrx1rNNQqxS/Q/IvEOW1t6ilMcnJfFAsICEWgWMhmPqSOw8olM2+4I5
 xdy1pKaB+CffoUdYi6z+MRxUdGQ6jAP2zFTXxJ+3Q3KJIRu0OIwZqWwrI5ts2bE8uNf2REMZdC
 UrZ62ll+FGBh3Iv1FqC57dO+MoOY/ONrbjvqydqJ96enmTxC4r1EEjoYTvGLiRlkLYzNVCU5MQ
 Vec=
X-SBRS: 5.2
X-MesageID: 36303987
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,392,1602561600"; 
   d="scan'208";a="36303987"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=GkuGLNpMsZN4KA0viBOYHkYvXCfsCz8jF6AkIGJCHbb0SF6n7IpEn1fLZosCPIiFPnx4MMauAdcO+7a3jAxIBjSSpGr0hfUYtpnfjcOAakSa/9lTnONKGDHg8a63shgW04PNl2ST/5WWWiD7ZGYoSU7Jh2DiFaFZEOUVyVliMUF/a13ph3Py+9nww/s86w0MC0LAaWFBxTOoid1/2F+cVU3Iw2ndnwpKS3UpFP/Etpvi1NMAAV7p4MwTxpIq6YDR7Jl1QISonJg6rM54UieiTM688Jw+P8qg+7sK4elPX8cS6tDtGAGZCl3hnDT29q4qkrz7K4olZPHzMn5D70tpmw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=tBB2pCXj9pcuxSS5ALOe282uccOem3JuCD5/fK9/WME=;
 b=RI8avMy4KEEhNvv2xnh+Cr3Qvuq3Nfd4yySIMxhOxP77EEJZO937pUPgkPL873AUzViE8c8lR1twHklS4U1nlKpabihRmYXpzGfTFq8bSSHCLuFFVcTAqm0VlGC3sHVLuY0bog/l/P5tvOSvWUMz//rM49EtLYYCTWoIp6jngpZjAVWq2LeP6JRgB0qY/CxAscg0KkXDAHeFftT9D2ZxONjsZAzxlU86nBE/EIRP6/96r1j2dv0r/0KYWIJC0UghOepDF0icRE/wpgLCZfgkRSJf9wfehOwuf4fsKFZrVpguz7WOFTDp0Xlm4zy/Bl20nr65EWSxql0kvaeWsS22ow==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=tBB2pCXj9pcuxSS5ALOe282uccOem3JuCD5/fK9/WME=;
 b=fHO4SsOY5ZO5zQRItLgR+GTZVdKFAoScWGVlQo7hYYaF22wg4KhcIqZ6ACfFCnHVIjr9NOmxoosX2QLsBIUJn/00+Z2Gzvi5BhwENLE6xxlI2bD6ZjIhM9T9zcvQQRVzlUk9MA9NAXmYQA8i8ZgEIups9VhWqFhVReMCKW4YSWQ=
Date: Mon, 1 Feb 2021 11:54:35 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Manuel Bouyer <bouyer@antioche.eu.org>
CC: <xen-devel@lists.xenproject.org>, Ian Jackson <iwj@xenproject.org>, Wei
 Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v2] libs/light: pass some infos to qemu
Message-ID: <YBfd62T1LNh+zyjj@Air-de-Roger>
References: <20210126224800.1246-1-bouyer@netbsd.org>
 <20210126224800.1246-12-bouyer@netbsd.org> <YBKbEhavZlpD75fU@Air-de-Roger>
 <20210130115013.GA2101@antioche.eu.org> <YBe2RSZeJBeMybdt@Air-de-Roger>
 <20210201093939.GB624@antioche.eu.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20210201093939.GB624@antioche.eu.org>
X-ClientProxiedBy: MR2P264CA0088.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:32::28) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 25f12d7c-b20c-4ca9-7ab2-08d8c69fc69c
X-MS-TrafficTypeDiagnostic: DM6PR03MB5324:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB5324C0E7203F874309E3B6B48FB69@DM6PR03MB5324.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 25f12d7c-b20c-4ca9-7ab2-08d8c69fc69c
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 Feb 2021 10:54:40.9597
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB5324
X-OriginatorOrg: citrix.com

On Mon, Feb 01, 2021 at 10:39:39AM +0100, Manuel Bouyer wrote:
> On Mon, Feb 01, 2021 at 09:06:13AM +0100, Roger Pau Monné wrote:
> > On Sat, Jan 30, 2021 at 12:50:13PM +0100, Manuel Bouyer wrote:
> > > On Thu, Jan 28, 2021 at 12:08:02PM +0100, Roger Pau Monné wrote:
> > > > [...]
> > > > Also, the qemu-ifup script doesn't seem to be part of the NetBSD
> > > > scripts that are upstream, is this something carried by the NetBSD
> > > > package?
> > > 
> > > Actually, the script is part of qemu-xen-traditional:
> > > tools/qemu-xen-traditional/i386-dm/qemu-ifup-NetBSD
> > > 
> > > and it's installed as part of 'make install'. The same script can be used
> > > for both qemu-xen-traditional and qemu-xen as long as we support only
> > > bridged mode by default.
> > > 
> > > qemu-xen-traditional did call the script with the bridge name.
> > > This patch makes qemu-xen call the script with the same parameters,
> > > and adds the XEN_DOMAIN_ID environment variable.
> > > 
> > > Is it OK to keep the script from qemu-xen-traditional (and installed as
> > > part of qemu-xen-traditional) for now ?
> > 
> > I think you want to move the script into hotplug/NetBSD/ because it
> > should be possible to install a system without qemu-xen-traditional
> > (--disable-qemu-traditional) and only qemu-upstream, and the script
> > will still be needed in that case.
> 
> I can, but how do I get the script removed from qemu-traditional?
> It's a different repo, isn't it?

Yes, it's:

http://xenbits.xen.org/gitweb/?p=qemu-xen-traditional.git;a=summary

I would remove it from qemu-trad and then install it from
hotplug/NetBSD only if it's not already there. Or maybe just
force-install it from hotplug/NetBSD even if it's already present?

Thanks, Roger.
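The calling convention discussed in this thread — tap interface as the first
argument, bridge name as the second, and XEN_DOMAIN_ID in the environment —
can be sketched as a minimal stand-in for the qemu-ifup script (hypothetical:
the real script shipped with qemu-xen-traditional runs NetBSD's ifconfig and
brconfig directly, while this sketch prints the commands so the convention can
be inspected without a dom0):

```shell
#!/bin/sh
# Hypothetical sketch of the qemu-ifup calling convention, not the shipped
# script: $1 = tap interface, $2 = bridge name, XEN_DOMAIN_ID in the env.
qemu_ifup() {
    tap=$1
    bridge=$2
    # The real NetBSD script would execute these instead of printing them.
    printf 'ifconfig %s up\n' "$tap"
    printf 'brconfig %s add %s\n' "$bridge" "$tap"
    printf '# XEN_DOMAIN_ID=%s\n' "${XEN_DOMAIN_ID:-unset}"
}

if [ $# -ge 2 ]; then
    qemu_ifup "$@"
fi
```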


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 10:58:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 10:58:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79741.145183 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6WuQ-0007Eq-NQ; Mon, 01 Feb 2021 10:58:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79741.145183; Mon, 01 Feb 2021 10:58:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6WuQ-0007Ej-KI; Mon, 01 Feb 2021 10:58:18 +0000
Received: by outflank-mailman (input) for mailman id 79741;
 Mon, 01 Feb 2021 10:58:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Dub/=HD=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1l6WuP-0007Ee-K6
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 10:58:17 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2705bdb0-0d13-4b60-8459-6b4e3d4d231c;
 Mon, 01 Feb 2021 10:58:16 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2705bdb0-0d13-4b60-8459-6b4e3d4d231c
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 36643667
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,392,1602561600"; 
   d="scan'208";a="36643667"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
Date: Mon, 1 Feb 2021 11:58:06 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Manuel Bouyer <bouyer@antioche.eu.org>
CC: <xen-devel@lists.xenproject.org>, Ian Jackson <iwj@xenproject.org>, Wei
 Liu <wl@xen.org>
Subject: Re: [PATCH v3 2/2] Document qemu-ifup on NetBSD
Message-ID: <YBfevtNorE5OtQVR@Air-de-Roger>
References: <20210130230300.11664-1-bouyer@netbsd.org>
 <20210130230300.11664-2-bouyer@netbsd.org> <YBe6JpR6jOLvYDz6@Air-de-Roger>
 <20210201093747.GA624@antioche.eu.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20210201093747.GA624@antioche.eu.org>
X-ClientProxiedBy: MR2P264CA0125.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:30::17) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: cac69d5e-cc15-4be4-345a-08d8c6a04436
X-MS-TrafficTypeDiagnostic: DM6PR03MB5340:
X-Microsoft-Antispam-PRVS: <DM6PR03MB53404C7397415F38C3799F788FB69@DM6PR03MB5340.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: cac69d5e-cc15-4be4-345a-08d8c6a04436
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 Feb 2021 10:58:11.6705
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB5340
X-OriginatorOrg: citrix.com

On Mon, Feb 01, 2021 at 10:37:47AM +0100, Manuel Bouyer wrote:
> On Mon, Feb 01, 2021 at 09:21:58AM +0100, Roger Pau Monné wrote:
> > On Sun, Jan 31, 2021 at 12:03:00AM +0100, Manuel Bouyer wrote:
> > > Document that on NetBSD, the tap interface will be configured by the
> > > qemu-ifup script. Document the arguments and the XEN_DOMAIN_ID
> > > environment variable.
> > 
> > You are missing a Signed-off-by tag here ;).
> > 
> > > ---
> > >  docs/man/xl-network-configuration.5.pod | 4 ++++
> > >  1 file changed, 4 insertions(+)
> > > 
> > > diff --git a/docs/man/xl-network-configuration.5.pod b/docs/man/xl-network-configuration.5.pod
> > > index af058d4d3c..f6eb6c31fc 100644
> > > --- a/docs/man/xl-network-configuration.5.pod
> > > +++ b/docs/man/xl-network-configuration.5.pod
> > > @@ -172,6 +172,10 @@ add it to the relevant bridge). Defaults to
> > >  C<XEN_SCRIPT_DIR/vif-bridge> but can be set to any script. Some example
> > >  scripts are installed in C<XEN_SCRIPT_DIR>.
> > >  
> > > +On NetBSD, HVM guests will always use
> > > +C<XEN_SCRIPT_DIR/qemu-ifup> to configure the tap interface. The first argument
> > > +is the tap interface, the second is the bridge name. The environment variable
> > > +C<XEN_DOMAIN_ID> contains the domU's ID.
> > 
> > LGTM, but I would make it even less technical and more user focused:
> > 
> > Note on NetBSD HVM guests will ignore the script option for tap
> > (emulated) interfaces and always use C<XEN_SCRIPT_DIR/qemu-ifup> to
> > configure the interface in bridged mode.
> 
> Well, as a user, I want to know how the scripts are called, so that I can
> tune them ...

Isn't that information in the header of the script? I would expect
users who want to modify such a script to open it, where the header
should already list the parameters.

IMO I would leave the parameters out of this document because we don't
list them for any other script, so it seems odd to list them for the
qemu-ifup script only.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 10:59:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 10:59:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79746.145195 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6Ww3-0007Wj-9A; Mon, 01 Feb 2021 10:59:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79746.145195; Mon, 01 Feb 2021 10:59:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6Ww3-0007Wc-55; Mon, 01 Feb 2021 10:59:59 +0000
Received: by outflank-mailman (input) for mailman id 79746;
 Mon, 01 Feb 2021 10:59:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jUbX=HD=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1l6Ww2-0007WW-Jm
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 10:59:58 +0000
Received: from mail-wm1-x332.google.com (unknown [2a00:1450:4864:20::332])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 11be89a7-ad55-4b43-96e4-e83228935551;
 Mon, 01 Feb 2021 10:59:57 +0000 (UTC)
Received: by mail-wm1-x332.google.com with SMTP id y187so12786279wmd.3
 for <xen-devel@lists.xenproject.org>; Mon, 01 Feb 2021 02:59:56 -0800 (PST)
Received: from [192.168.1.36] (7.red-83-57-171.dynamicip.rima-tde.net.
 [83.57.171.7])
 by smtp.gmail.com with ESMTPSA id o17sm26583745wrm.52.2021.02.01.02.59.54
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 01 Feb 2021 02:59:55 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
X-Inumbo-ID: 11be89a7-ad55-4b43-96e4-e83228935551
X-Received: by 2002:a05:600c:4f07:: with SMTP id l7mr9122348wmq.111.1612177195926;
        Mon, 01 Feb 2021 02:59:55 -0800 (PST)
Sender: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philippe.mathieu.daude@gmail.com>
Subject: Re: [PATCH v2 3/4] hw/xen/Kconfig: Introduce XEN_PV config
To: =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>,
 qemu-devel@nongnu.org, Paolo Bonzini <pbonzini@redhat.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Eduardo Habkost <ehabkost@redhat.com>, Paul Durrant <paul@xen.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 xen-devel@lists.xenproject.org, Anthony Perard <anthony.perard@citrix.com>
References: <20210131141810.293186-1-f4bug@amsat.org>
 <20210131141810.293186-4-f4bug@amsat.org>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <f4bug@amsat.org>
Message-ID: <d3ad42eb-42bd-2e63-4c99-8eed91216fc5@amsat.org>
Date: Mon, 1 Feb 2021 11:59:53 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <20210131141810.293186-4-f4bug@amsat.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 1/31/21 3:18 PM, Philippe Mathieu-Daudé wrote:
> xenpv machine requires USB, IDE_PIIX and PCI:
> 
>   /usr/bin/ld:
>   libcommon.fa.p/hw_xen_xen-legacy-backend.c.o: in function `xen_be_register_common':
>   hw/xen/xen-legacy-backend.c:757: undefined reference to `xen_usb_ops'
>   libqemu-i386-softmmu.fa.p/hw_i386_xen_xen_platform.c.o: in function `unplug_disks':
>   hw/i386/xen/xen_platform.c:153: undefined reference to `pci_piix3_xen_ide_unplug'
>   libqemu-i386-softmmu.fa.p/hw_i386_xen_xen_platform.c.o: in function `pci_unplug_nics':
>   hw/i386/xen/xen_platform.c:137: undefined reference to `pci_for_each_device'
>   libqemu-i386-softmmu.fa.p/hw_i386_xen_xen_platform.c.o: in function `xen_platform_realize':
>   hw/i386/xen/xen_platform.c:483: undefined reference to `pci_register_bar'
> 
> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
> ---
>  hw/Kconfig     | 1 +
>  hw/xen/Kconfig | 7 +++++++
>  2 files changed, 8 insertions(+)
>  create mode 100644 hw/xen/Kconfig
> 
> diff --git a/hw/Kconfig b/hw/Kconfig
> index 5ad3c6b5a4b..f2a95591d94 100644
> --- a/hw/Kconfig
> +++ b/hw/Kconfig
> @@ -39,6 +39,7 @@ source usb/Kconfig
>  source virtio/Kconfig
>  source vfio/Kconfig
>  source watchdog/Kconfig
> +source xen/Kconfig
>  
>  # arch Kconfig
>  source arm/Kconfig
> diff --git a/hw/xen/Kconfig b/hw/xen/Kconfig
> new file mode 100644
> index 00000000000..0b8427d6bd1
> --- /dev/null
> +++ b/hw/xen/Kconfig
> @@ -0,0 +1,7 @@
> +config XEN_PV
> +    bool
> +    default y if XEN
> +    depends on XEN
> +    select PCI
> +    select USB
> +    select IDE_PIIX

Well, this is not enough; --without-default-devices fails:

/usr/bin/ld: libqemu-x86_64-softmmu.fa.p/softmmu_physmem.c.o: in function `cpu_physical_memory_set_dirty_range':
include/exec/ram_addr.h:333: undefined reference to `xen_hvm_modified_memory'
/usr/bin/ld: libqemu-x86_64-softmmu.fa.p/softmmu_physmem.c.o: in function `ram_block_add':
softmmu/physmem.c:1873: undefined reference to `xen_ram_alloc'
/usr/bin/ld: libqemu-x86_64-softmmu.fa.p/softmmu_physmem.c.o: in function `cpu_physical_memory_set_dirty_range':
include/exec/ram_addr.h:333: undefined reference to `xen_hvm_modified_memory'
/usr/bin/ld: include/exec/ram_addr.h:333: undefined reference to `xen_hvm_modified_memory'
/usr/bin/ld: libqemu-x86_64-softmmu.fa.p/softmmu_memory.c.o: in function `cpu_physical_memory_set_dirty_range':
include/exec/ram_addr.h:333: undefined reference to `xen_hvm_modified_memory'
collect2: error: ld returned 1 exit status
ninja: build stopped: subcommand failed.


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 11:04:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 11:04:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79751.145213 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6Wzs-0008P0-Rj; Mon, 01 Feb 2021 11:03:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79751.145213; Mon, 01 Feb 2021 11:03:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6Wzs-0008Ot-Og; Mon, 01 Feb 2021 11:03:56 +0000
Received: by outflank-mailman (input) for mailman id 79751;
 Mon, 01 Feb 2021 11:03:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jUbX=HD=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1l6Wzr-0008Oo-Dg
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 11:03:55 +0000
Received: from mail-wr1-x432.google.com (unknown [2a00:1450:4864:20::432])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7eaa0002-a232-4e54-8c0f-93fafda32891;
 Mon, 01 Feb 2021 11:03:54 +0000 (UTC)
Received: by mail-wr1-x432.google.com with SMTP id 6so16079553wri.3
 for <xen-devel@lists.xenproject.org>; Mon, 01 Feb 2021 03:03:54 -0800 (PST)
Received: from [192.168.1.36] (7.red-83-57-171.dynamicip.rima-tde.net.
 [83.57.171.7])
 by smtp.gmail.com with ESMTPSA id y24sm20162170wmi.47.2021.02.01.03.03.52
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 01 Feb 2021 03:03:52 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
X-Inumbo-ID: 7eaa0002-a232-4e54-8c0f-93fafda32891
X-Received: by 2002:adf:80c8:: with SMTP id 66mr2705174wrl.344.1612177433516;
        Mon, 01 Feb 2021 03:03:53 -0800 (PST)
Sender: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philippe.mathieu.daude@gmail.com>
Subject: Re: [PATCH v2 4/4] hw/xen: Have Xen machines select 9pfs
To: Paolo Bonzini <pbonzini@redhat.com>, =?UTF-8?Q?Alex_Benn=c3=a9e?=
 <alex.bennee@linaro.org>, qemu-devel@nongnu.org, Greg Kurz <groug@kaod.org>,
 Christian Schoenebeck <qemu_oss@crudebyte.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Eduardo Habkost <ehabkost@redhat.com>, "Michael S. Tsirkin"
 <mst@redhat.com>, Paul Durrant <paul@xen.org>,
 Richard Henderson <richard.henderson@linaro.org>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
References: <20210131141810.293186-1-f4bug@amsat.org>
 <20210131141810.293186-5-f4bug@amsat.org>
 <565bf0dd-a5de-352e-eec7-68b862ed09e4@redhat.com>
 <f6e1917a-f9cf-9ae3-50b1-9dc0ee4f65f3@amsat.org>
 <50306fbf-c6f0-e281-248f-de1bc984b113@redhat.com>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <f4bug@amsat.org>
Message-ID: <83a785a5-6050-a24b-c796-d427c3873a07@amsat.org>
Date: Mon, 1 Feb 2021 12:03:51 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <50306fbf-c6f0-e281-248f-de1bc984b113@redhat.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 2/1/21 11:23 AM, Paolo Bonzini wrote:
> On 01/02/21 10:18, Philippe Mathieu-Daudé wrote:
>> FYI using 'imply FSDEV_9P' instead I get:
>>
>> /usr/bin/ld: libcommon.fa.p/hw_xen_xen-legacy-backend.c.o: in function
>> `xen_be_register_common':
>> hw/xen/xen-legacy-backend.c:754: undefined reference to `xen_9pfs_ops'
> 
> Ok, so then we have the case of a file (hw/xen/xen-legacy-backend.c)
> brought in by CONFIG_XEN. In that case this patch is incorrect...
> 
>> The function is:
>>
>>   void xen_be_register_common(void)
>>   {
>>       xen_set_dynamic_sysbus();
>>
>>       xen_be_register("console", &xen_console_ops);
>>       xen_be_register("vkbd", &xen_kbdmouse_ops);
>>   #ifdef CONFIG_VIRTFS
>>       xen_be_register("9pfs", &xen_9pfs_ops);
>>   #endif
>>   #ifdef CONFIG_USB_LIBUSB
>>       xen_be_register("qusb", &xen_usb_ops);
>>   #endif
>>   }
>>
>> The object is compiled using:
>>
>> -- >8 --
>> -#ifdef CONFIG_VIRTFS
>> +#ifdef CONFIG_FSDEV_9P
>>       xen_be_register("9pfs", &xen_9pfs_ops);
>>  #endif
>> ---
> 
> ... and this is the best fix, together with:
> 
> - a "#include CONFIG_DEVICES" at the top (to get CONFIG_FSDEV_9P)
> 
> - moving xen-legacy-backend.c from softmmu_ss to specific_ss (to get
> CONFIG_DEVICES)
> 
> - changing "select" to "imply" in accel/Kconfig (otherwise the patch has
> no effect)

OK.

> 
> But really, doing nothing and just dropping this patch is perfectly fine.

Yes, I'll respin what I have so far and continue when I find the
time and motivation another weekend.
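Taken together, Paolo's steps plus the #ifdef change Philippe quotes earlier
in the thread would look roughly like the sketch below (untested; hunk
placement is illustrative, and the softmmu_ss-to-specific_ss move and the
accel/Kconfig select-to-imply change would be separate edits in the
corresponding meson.build and Kconfig files):

```diff
--- a/hw/xen/xen-legacy-backend.c
+++ b/hw/xen/xen-legacy-backend.c
@@
+/* Pulls in per-target symbols such as CONFIG_FSDEV_9P; requires the
+ * file to be built per-target (specific_ss, not softmmu_ss). */
+#include CONFIG_DEVICES
@@
-#ifdef CONFIG_VIRTFS
+#ifdef CONFIG_FSDEV_9P
     xen_be_register("9pfs", &xen_9pfs_ops);
 #endif
```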


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 11:12:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 11:12:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79757.145225 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6X7b-00011a-N9; Mon, 01 Feb 2021 11:11:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79757.145225; Mon, 01 Feb 2021 11:11:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6X7b-00011T-JC; Mon, 01 Feb 2021 11:11:55 +0000
Received: by outflank-mailman (input) for mailman id 79757;
 Mon, 01 Feb 2021 11:11:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IWGu=HD=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l6X7Z-00011N-Hh
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 11:11:53 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a74af947-bc18-4698-8a3b-65f25c4febb8;
 Mon, 01 Feb 2021 11:11:52 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a74af947-bc18-4698-8a3b-65f25c4febb8
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 36305186
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,392,1602561600"; 
   d="scan'208";a="36305186"
Subject: Re: [PATCH v8 06/16] xen/memory: Fix mapping grant tables with
 XENMEM_acquire_resource
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Ian Jackson <iwj@xenproject.org>, Jan Beulich
	<JBeulich@suse.com>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>, Julien Grall <julien@xen.org>, Paul Durrant <paul@xen.org>,
	=?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>, Hubert
 Jasudowicz <hubert.jasudowicz@cert.pl>, Tamas K Lengyel <tamas@tklengyel.com>
References: <20210130025852.12430-1-andrew.cooper3@citrix.com>
 <20210130025852.12430-7-andrew.cooper3@citrix.com>
 <YBfTpTzi+wo7AFSH@Air-de-Roger>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <53a26fb9-9c43-d1c4-90cd-bb29d57e106b@citrix.com>
Date: Mon, 1 Feb 2021 11:11:37 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
In-Reply-To: <YBfTpTzi+wo7AFSH@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P123CA0076.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:138::9) To BYAPR03MB4728.namprd03.prod.outlook.com
 (2603:10b6:a03:13a::24)
MIME-Version: 1.0
X-OriginatorOrg: citrix.com

On 01/02/2021 10:10, Roger Pau Monné wrote:
> On Sat, Jan 30, 2021 at 02:58:42AM +0000, Andrew Cooper wrote:
>> A guest's default number of grant frames is 64, and XENMEM_acquire_resource
>> will reject an attempt to map more than 32 frames.  This limit is caused by
>> the size of mfn_list[] on the stack.
>>
>> Fix mapping of arbitrary size requests by looping over batches of 32 in
>> acquire_resource(), and using hypercall continuations when necessary.
>>
>> To start with, break _acquire_resource() out of acquire_resource() to cope
>> with type-specific dispatching, and update the return semantics to indicate
>> the number of mfns returned.  Update gnttab_acquire_resource() and x86's
>> arch_acquire_resource() to match these new semantics.
>>
>> Have do_memory_op() pass start_extent into acquire_resource() so it can pick
>> up where it left off after a continuation, and loop over batches of 32 until
>> all the work is done, or a continuation needs to occur.
>>
>> compat_memory_op() is a bit more complicated, because it also has to marshal
>> frame_list in the XLAT buffer.  Have it account for continuation information
>> itself and hide details from the upper layer, so it can marshal the buffer in
>> chunks if necessary.
>>
>> With these fixes in place, it is now possible to map the whole grant table for
>> a guest.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Just one comment/question regarding a continuation below.
>
> I have to admit I had a hard time reviewing this, all this compat code
> plus the continuation stuff is quite hard to follow.
>
>> ---
>> CC: George Dunlap <George.Dunlap@eu.citrix.com>
>> CC: Ian Jackson <iwj@xenproject.org>
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Stefano Stabellini <sstabellini@kernel.org>
>> CC: Wei Liu <wl@xen.org>
>> CC: Julien Grall <julien@xen.org>
>> CC: Paul Durrant <paul@xen.org>
>> CC: Michał Leszczyński <michal.leszczynski@cert.pl>
>> CC: Hubert Jasudowicz <hubert.jasudowicz@cert.pl>
>> CC: Tamas K Lengyel <tamas@tklengyel.com>
>>
>> v8:
>>  * nat => cmp change in the start_extent check.
>>  * Rebase over 'frame' and ARM/IOREQ series.
>>
>> v3:
>>  * Spelling fixes
>> ---
>>  xen/common/compat/memory.c |  94 +++++++++++++++++++++++++++-------
>>  xen/common/grant_table.c   |   3 ++
>>  xen/common/memory.c        | 124 +++++++++++++++++++++++++++++++++------------
>>  3 files changed, 169 insertions(+), 52 deletions(-)
>>
>> diff --git a/xen/common/compat/memory.c b/xen/common/compat/memory.c
>> index 834c5e19d1..4c9cd9c05a 100644
>> --- a/xen/common/compat/memory.c
>> +++ b/xen/common/compat/memory.c
>> @@ -402,23 +402,10 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
>>          case XENMEM_acquire_resource:
>>          {
>>              xen_pfn_t *xen_frame_list = NULL;
>> -            unsigned int max_nr_frames;
>>  
>>              if ( copy_from_guest(&cmp.mar, compat, 1) )
>>                  return -EFAULT;
>>  
>> -            /*
>> -             * The number of frames handled is currently limited to a
>> -             * small number by the underlying implementation, so the
>> -             * scratch space should be sufficient for bouncing the
>> -             * frame addresses.
>> -             */
>> -            max_nr_frames = (COMPAT_ARG_XLAT_SIZE - sizeof(*nat.mar)) /
>> -                sizeof(*xen_frame_list);
>> -
>> -            if ( cmp.mar.nr_frames > max_nr_frames )
>> -                return -E2BIG;
>> -
>>              /* Marshal the frame list in the remainder of the xlat space. */
>>              if ( !compat_handle_is_null(cmp.mar.frame_list) )
>>                  xen_frame_list = (xen_pfn_t *)(nat.mar + 1);
>> @@ -432,6 +419,28 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
>>  
>>              if ( xen_frame_list && cmp.mar.nr_frames )
>>              {
>> +                unsigned int xlat_max_frames =
> Could be made const static I think?

It is a compile time constant, but the compiler can already figure that
out.  static is definitely out of the question.

>
>> +                    (COMPAT_ARG_XLAT_SIZE - sizeof(*nat.mar)) /
>> +                    sizeof(*xen_frame_list);
>> +
>> +                if ( start_extent >= cmp.mar.nr_frames )
>> +                    return -EINVAL;
>> +
>> +                /*
>> +                 * Adjust nat to account for work done on previous
>> +                 * continuations, leaving cmp pristine.  Hide the continuation
>> +                 * from the native code to prevent double accounting.
>> +                 */
>> +                nat.mar->nr_frames -= start_extent;
>> +                nat.mar->frame += start_extent;
>> +                cmd &= MEMOP_CMD_MASK;
>> +
>> +                /*
>> +                 * If there are too many frames to fit within the xlat buffer,
>> +                 * we'll need to loop to marshal them all.
>> +                 */
>> +                nat.mar->nr_frames = min(nat.mar->nr_frames, xlat_max_frames);
>> +
>>                  /*
>>                   * frame_list is an input for translated guests, and an output
>>                   * for untranslated guests.  Only copy in for translated guests.
>> @@ -444,14 +453,14 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
>>                                               cmp.mar.nr_frames) ||
>>                           __copy_from_compat_offset(
>>                               compat_frame_list, cmp.mar.frame_list,
>> -                             0, cmp.mar.nr_frames) )
>> +                             start_extent, nat.mar->nr_frames) )
>>                          return -EFAULT;
>>  
>>                      /*
>>                       * Iterate backwards over compat_frame_list[] expanding
>>                       * compat_pfn_t to xen_pfn_t in place.
>>                       */
>> -                    for ( int x = cmp.mar.nr_frames - 1; x >= 0; --x )
>> +                    for ( int x = nat.mar->nr_frames - 1; x >= 0; --x )
>>                          xen_frame_list[x] = compat_frame_list[x];
> Unrelated question, but I don't really see the point of iterating
> backwards, wouldn't it be easier to use the existing 'i' loop
> counter with a for ( i = 0; i < nat.mar->nr_frames; i++ )?
>
> (Not that you need to fix it here, just curious about why we use that
> construct instead).

Iterating backwards is totally critical.

xen_frame_list and compat_frame_list are the same numerical pointer.
We've just filled it 50% full with compat_pfn_t's, and need to turn
these into xen_pfn_t's which are double the size.

Iterating forwards would clobber every entry but the first.

>
>>                  }
>>              }
>> @@ -600,9 +609,11 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
>>          case XENMEM_acquire_resource:
>>          {
>>              DEFINE_XEN_GUEST_HANDLE(compat_mem_acquire_resource_t);
>> +            unsigned int done;
>>  
>>              if ( compat_handle_is_null(cmp.mar.frame_list) )
>>              {
>> +                ASSERT(split == 0 && rc == 0);
>>                  if ( __copy_field_to_guest(
>>                           guest_handle_cast(compat,
>>                                             compat_mem_acquire_resource_t),
>> @@ -611,6 +622,21 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
>>                  break;
>>              }
>>  
>> +            if ( split < 0 )
>> +            {
>> +                /* Continuation occurred. */
>> +                ASSERT(rc != XENMEM_acquire_resource);
>> +                done = cmd >> MEMOP_EXTENT_SHIFT;
>> +            }
>> +            else
>> +            {
>> +                /* No continuation. */
>> +                ASSERT(rc == 0);
>> +                done = nat.mar->nr_frames;
>> +            }
>> +
>> +            ASSERT(done <= nat.mar->nr_frames);
>> +
>>              /*
>>               * frame_list is an input for translated guests, and an output for
>>               * untranslated guests.  Only copy out for untranslated guests.
>> @@ -626,7 +652,7 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
>>                   */
>>                  BUILD_BUG_ON(sizeof(compat_pfn_t) > sizeof(xen_pfn_t));
>>  
>> -                for ( i = 0; i < cmp.mar.nr_frames; i++ )
>> +                for ( i = 0; i < done; i++ )
>>                  {
>>                      compat_pfn_t frame = xen_frame_list[i];
>>  
>> @@ -636,15 +662,45 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
>>                      compat_frame_list[i] = frame;
>>                  }
>>  
>> -                if ( __copy_to_compat_offset(cmp.mar.frame_list, 0,
>> +                if ( __copy_to_compat_offset(cmp.mar.frame_list, start_extent,
>>                                               compat_frame_list,
>> -                                             cmp.mar.nr_frames) )
>> +                                             done) )
>>                      return -EFAULT;
> Is it fine to return with a possibly pending continuation already
> encoded here?
>
> Other places seem to crash the domain when that happens.

Hmm.  This is all a total mess.  (Elsewhere the error handling is also
broken - a caller who receives an error can't figure out how to recover.)

But yes - I think you're right - the only thing we can do here is `goto
crash;` and woe betide any 32bit kernel which passes a pointer to a
read-only buffer.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 11:12:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 11:12:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79758.145236 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6X7d-00012k-5L; Mon, 01 Feb 2021 11:11:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79758.145236; Mon, 01 Feb 2021 11:11:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6X7d-00012d-2A; Mon, 01 Feb 2021 11:11:57 +0000
Received: by outflank-mailman (input) for mailman id 79758;
 Mon, 01 Feb 2021 11:11:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CDK9=HD=redhat.com=pbonzini@srs-us1.protection.inumbo.net>)
 id 1l6X7b-00011S-Ls
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 11:11:55 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 29c569bc-7002-45c7-b23a-3f4aadd62203;
 Mon, 01 Feb 2021 11:11:54 +0000 (UTC)
Received: from mail-wm1-f69.google.com (mail-wm1-f69.google.com
 [209.85.128.69]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-344-mZlFu0BdM_CV-adiXS1Fuw-1; Mon, 01 Feb 2021 06:11:52 -0500
Received: by mail-wm1-f69.google.com with SMTP id s10so4410581wme.8
 for <xen-devel@lists.xenproject.org>; Mon, 01 Feb 2021 03:11:52 -0800 (PST)
Received: from ?IPv6:2001:b07:6468:f312:c8dd:75d4:99ab:290a?
 ([2001:b07:6468:f312:c8dd:75d4:99ab:290a])
 by smtp.gmail.com with ESMTPSA id w25sm21241436wmc.42.2021.02.01.03.11.49
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 01 Feb 2021 03:11:50 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 29c569bc-7002-45c7-b23a-3f4aadd62203
Subject: Re: [PATCH v2 3/4] hw/xen/Kconfig: Introduce XEN_PV config
To: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <f4bug@amsat.org>,
 =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>, qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Eduardo Habkost <ehabkost@redhat.com>, Paul Durrant <paul@xen.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 xen-devel@lists.xenproject.org, Anthony Perard <anthony.perard@citrix.com>
References: <20210131141810.293186-1-f4bug@amsat.org>
 <20210131141810.293186-4-f4bug@amsat.org>
 <d3ad42eb-42bd-2e63-4c99-8eed91216fc5@amsat.org>
From: Paolo Bonzini <pbonzini@redhat.com>
Message-ID: <2931aaa6-3850-ca59-73ea-252a524bd63d@redhat.com>
Date: Mon, 1 Feb 2021 12:11:49 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <d3ad42eb-42bd-2e63-4c99-8eed91216fc5@amsat.org>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=pbonzini@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 01/02/21 11:59, Philippe Mathieu-Daudé wrote:
> On 1/31/21 3:18 PM, Philippe Mathieu-Daudé wrote:
>> xenpv machine requires USB, IDE_PIIX and PCI:
>>
>>    /usr/bin/ld:
>>    libcommon.fa.p/hw_xen_xen-legacy-backend.c.o: in function `xen_be_register_common':
>>    hw/xen/xen-legacy-backend.c:757: undefined reference to `xen_usb_ops'
>>    libqemu-i386-softmmu.fa.p/hw_i386_xen_xen_platform.c.o: in function `unplug_disks':
>>    hw/i386/xen/xen_platform.c:153: undefined reference to `pci_piix3_xen_ide_unplug'
>>    libqemu-i386-softmmu.fa.p/hw_i386_xen_xen_platform.c.o: in function `pci_unplug_nics':
>>    hw/i386/xen/xen_platform.c:137: undefined reference to `pci_for_each_device'
>>    libqemu-i386-softmmu.fa.p/hw_i386_xen_xen_platform.c.o: in function `xen_platform_realize':
>>    hw/i386/xen/xen_platform.c:483: undefined reference to `pci_register_bar'
>>
>> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
>> ---
>>   hw/Kconfig     | 1 +
>>   hw/xen/Kconfig | 7 +++++++
>>   2 files changed, 8 insertions(+)
>>   create mode 100644 hw/xen/Kconfig
>>
>> diff --git a/hw/Kconfig b/hw/Kconfig
>> index 5ad3c6b5a4b..f2a95591d94 100644
>> --- a/hw/Kconfig
>> +++ b/hw/Kconfig
>> @@ -39,6 +39,7 @@ source usb/Kconfig
>>   source virtio/Kconfig
>>   source vfio/Kconfig
>>   source watchdog/Kconfig
>> +source xen/Kconfig
>>   
>>   # arch Kconfig
>>   source arm/Kconfig
>> diff --git a/hw/xen/Kconfig b/hw/xen/Kconfig
>> new file mode 100644
>> index 00000000000..0b8427d6bd1
>> --- /dev/null
>> +++ b/hw/xen/Kconfig
>> @@ -0,0 +1,7 @@
>> +config XEN_PV
>> +    bool
>> +    default y if XEN
>> +    depends on XEN
>> +    select PCI
>> +    select USB
>> +    select IDE_PIIX
> 
> Well this is not enough, --without-default-devices fails:
> 
> /usr/bin/ld: libqemu-x86_64-softmmu.fa.p/softmmu_physmem.c.o: in
> function `cpu_physical_memory_set_dirty_range':
> include/exec/ram_addr.h:333: undefined reference to
> `xen_hvm_modified_memory'
> /usr/bin/ld: libqemu-x86_64-softmmu.fa.p/softmmu_physmem.c.o: in
> function `ram_block_add':
> softmmu/physmem.c:1873: undefined reference to `xen_ram_alloc'
> /usr/bin/ld: libqemu-x86_64-softmmu.fa.p/softmmu_physmem.c.o: in
> function `cpu_physical_memory_set_dirty_range':
> include/exec/ram_addr.h:333: undefined reference to
> `xen_hvm_modified_memory'
> /usr/bin/ld: include/exec/ram_addr.h:333: undefined reference to
> `xen_hvm_modified_memory'
> /usr/bin/ld: libqemu-x86_64-softmmu.fa.p/softmmu_memory.c.o: in function
> `cpu_physical_memory_set_dirty_range':
> include/exec/ram_addr.h:333: undefined reference to
> `xen_hvm_modified_memory'
> collect2: error: ld returned 1 exit status
> ninja: build stopped: subcommand failed.

I think you can modify xen_hvm_modified_memory to become a virtual 
function in AccelClass.

Paolo



From xen-devel-bounces@lists.xenproject.org Mon Feb 01 11:20:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 11:20:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79765.145248 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6XFW-0001zF-0H; Mon, 01 Feb 2021 11:20:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79765.145248; Mon, 01 Feb 2021 11:20:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6XFV-0001ye-TQ; Mon, 01 Feb 2021 11:20:05 +0000
Received: by outflank-mailman (input) for mailman id 79765;
 Mon, 01 Feb 2021 11:20:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IWGu=HD=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l6XFU-0001iw-HC
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 11:20:04 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 806c5104-4c36-40c4-a2a1-244b893126cc;
 Mon, 01 Feb 2021 11:20:03 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 806c5104-4c36-40c4-a2a1-244b893126cc
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 36645156
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,392,1602561600"; 
   d="scan'208";a="36645156"
Subject: Re: [PATCH v8 08/16] xen/domain: Add vmtrace_size domain creation
 parameter
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>,
	=?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>, "Jan
 Beulich" <JBeulich@suse.com>, Wei Liu <wl@xen.org>, Anthony PERARD
	<anthony.perard@citrix.com>, Tamas K Lengyel <tamas@tklengyel.com>
References: <20210130025852.12430-1-andrew.cooper3@citrix.com>
 <20210130025852.12430-9-andrew.cooper3@citrix.com>
 <YBfc+BaXLm5dSvkG@Air-de-Roger>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <a549b8c5-e5de-2b03-f8d5-ce5ae5de6067@citrix.com>
Date: Mon, 1 Feb 2021 11:19:46 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
In-Reply-To: <YBfc+BaXLm5dSvkG@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0477.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a2::33) To BYAPR03MB4728.namprd03.prod.outlook.com
 (2603:10b6:a03:13a::24)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: a43b7400-ff44-4938-dcad-08d8c6a34bc5
X-MS-TrafficTypeDiagnostic: BYAPR03MB4904:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB49040F3E1B03C597D872EEFBBAB69@BYAPR03MB4904.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:6790;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: /PQ9RjKJDop6UHzvtPQKJBCOnQr1eerNtdqTlI/FU430jDxLUcjceC2V9jhfuS1wcCscdXZ0mm6hN0yRIWYbb64H1+OZ2fLKfBcvOzGuyK2lcQAwo2VXo9MUaq+iIUPr1zu0QcNb5joGYl36EvlqvqtDZ1V+MgbE+BNdjvYVUW32Uquw74QW00x4xytA8aR3YFzicfoyf8bxwrbsD9/o7RTin5rMKsX6u7mRexMWtF3TVjKrqjmjRtM7rrl9WBahRECRDN/Nov1YZGxJUP9ACqxkWgdzKzXlU2+NyI6SCzw2icdHwznUo0LxhsASRYrDAWPRqjUPBVJQFPDxYlnsaADE2tWPORW+ZqeVPd+n+b7+g3zP7rLhlbRhVpkgqyvSN23no3NVlgY9PTW+XjKw3bhsIjiUvmaxsHHh555Wy+vTstjT4LWliEZUBxQDypi1HhHiySru3MtlJ3x7jl1bgcpfhn6MifedXV7JiZJS9WAqN7cqhOJZZvRPPuc332sHsa05BgJ4ZcXk9Bp8RpS1//Ias2RHuByk/vXwLUndxyzjVgL+wCEzNnYHaGSV5c0+yv4jkRjsiSLh3UO74yAGNLi+0NOUcoR5cnQPI8YIl7c=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB4728.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(346002)(396003)(39860400002)(366004)(376002)(6486002)(956004)(6666004)(8936002)(2906002)(478600001)(8676002)(2616005)(6862004)(66946007)(66476007)(66556008)(26005)(186003)(16526019)(53546011)(16576012)(316002)(37006003)(5660300002)(4326008)(54906003)(31696002)(31686004)(86362001)(66574015)(83380400001)(36756003)(6636002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?ZHJaUmhVbnBlT1FOTU11T3k2MDZ4VThiZkIvQ3EvOW9DM3I4ZGhHQ1BYY1I5?=
 =?utf-8?B?UFoxMVNkb3BHVnNNYzJGb2ZkTEVjdjVSQkFMazMzeDdNT2NxSE9hQzdDaUQ3?=
 =?utf-8?B?SC9SMjg5NGlpQVlmZThzL3Q1ODdpYm9QUWs1WU1oVzB4V25wVXNENjdMaTZQ?=
 =?utf-8?B?M2RWN2hKT245anpqYnZHTTdqWkRJOC9zbHRZdEZMQjYxcERHVjRHZjQ4Z29T?=
 =?utf-8?B?eG9xYXZIeisyRFUvd3lxS1VoUHo1SytYMVNvcmNNWSswVHV0WnFvcVhYbTg4?=
 =?utf-8?B?SHF2bXNSYlZWTklGYjJYWmE3ZG9KaUhJY0JZSzB5QzVtdHFLUUltSUVRNmNK?=
 =?utf-8?B?eUl6cFFyZmRCdmczaEtrcERxSGloVGxGWnltd0tud0dlbURJTlBoWjBRRldE?=
 =?utf-8?B?VnhEcXJUeFJPM3RjeXhlOWpYRnNleXpZMzYvVndCR2ErK1pnemNRSFJUSklV?=
 =?utf-8?B?cFpOVWZnOEVHbnJ0bGRLbGJ5TEFOVVRRSFJURlUwRUdGWjZyZmZXZzZVaHk5?=
 =?utf-8?B?djIwMDYzZ2FaTkZnRy9QNklSdVlBa1ZrVzNCd2Q4ZEtBeTVUUWJjY05hSGcx?=
 =?utf-8?B?RWdyT1dXc01PV0xhbXZla2lLRU9QSHd0aGRWcEJUWHcva1RuZW9Xb3VEekJH?=
 =?utf-8?B?ZS9uL1NWMUJXQU5KNlIwMGNQQ2dVVHJMakZweVJuYXllRzJqUzdONmdsNklj?=
 =?utf-8?B?Zk5ta0xRSmVtVWtwUzVjdVpFaXMrK3JEY3RZV2wyTmZmcjFFaS93U201NHZo?=
 =?utf-8?B?MzNXWUNOWHB0WmgrOURxRXdwM01ONHdGYXM3SVhiaDEvYmJHK2RHYTIwZmsx?=
 =?utf-8?B?ZEFlSGVQdksxRlF4OURPb0g1VEdqY04zbFUxL08wKy9GRVlVdVl1QlZ1QjJj?=
 =?utf-8?B?ejdMaGRRcngxWVNvTk1FUW5kSVlrQTFyWWgxQ2pnNkk2Q29NSFVqQ0JWdllz?=
 =?utf-8?B?ZUdmTUhOcUhOMWFyMm9FYnd3eXdua2o4NUdYRFZua2diYnNOMlNwZnJvbmxY?=
 =?utf-8?B?QVB0dEh2c0FBUmZVVlJyOVMzcGUxblJxai90R1FNSlprUWtsRTR3NUFLOUhD?=
 =?utf-8?B?a3dnUEVNM0ZQUVMvWW1XSHRId1ZidDc3NXRTZ0RicmNWZ1VIVTEvb0IxbjFD?=
 =?utf-8?B?SWtPTGxrTllFeWVLdjRLUnpxN05lZk9lWk1IYU5yaXp4c1BHRmJOVHM4TXBR?=
 =?utf-8?B?bmVuOEJaMlRvMHA2S0VnaFk1SDJvMXRrY2U2UGVkRDRlZHhpenRTcEk0L1Ew?=
 =?utf-8?B?QXdBb1QyemF0QVM3aXNaZnJ5VzloMUdvdkU2YXhXZmRZMFVjZWRnV01jUlln?=
 =?utf-8?B?dXM2aHBscTJpUEU2ZkhVYzV2SE9sNmN3eEMvNTVPVlRpUjU3UW9peEkrNG1F?=
 =?utf-8?B?UitpYm5YQVhUZWV3VjhUc21jTnNNVWFFNERhbXdENGdkMW1VZlNFcGltK0s0?=
 =?utf-8?Q?HYV+tIDL?=
X-MS-Exchange-CrossTenant-Network-Message-Id: a43b7400-ff44-4938-dcad-08d8c6a34bc5
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB4728.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 Feb 2021 11:19:52.8960
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: W+UPzsHZx/uM29g5QLa33CrwypPbwkKRicAWkPXA3rGE3DGHCzzG5lvY47a8T8TmkbrwX9Q+4D4pXEvzBepukVz3hNT6toLiPNndx9B2KHs=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4904
X-OriginatorOrg: citrix.com

On 01/02/2021 10:51, Roger Pau Monné wrote:
> On Sat, Jan 30, 2021 at 02:58:44AM +0000, Andrew Cooper wrote:
>> From: Michał Leszczyński <michal.leszczynski@cert.pl>
>>
>> To use vmtrace, buffers of a suitable size need allocating, and different
>> tasks will want different sizes.
>>
>> Add a domain creation parameter, and audit it appropriately in the
>> {arch_,}sanitise_domain_config() functions.
>>
>> For now, the x86 specific auditing is tuned to Processor Trace running in
>> Single Output mode, which requires a single contiguous range of memory.
>>
>> The size is given an arbitrary limit of 64M which is expected to be enough for
>> anticipated usecases, but not large enough to get into long-running-hypercall
>> problems.
>>
>> Signed-off-by: Michał Leszczyński <michal.leszczynski@cert.pl>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.

>> diff --git a/xen/common/domain.c b/xen/common/domain.c
>> index d1e94d88cf..491b32812e 100644
>> --- a/xen/common/domain.c
>> +++ b/xen/common/domain.c
>> @@ -132,6 +132,71 @@ static void vcpu_info_reset(struct vcpu *v)
>>      v->vcpu_info_mfn = INVALID_MFN;
>>  }
>>  
>> +static void vmtrace_free_buffer(struct vcpu *v)
>> +{
>> +    const struct domain *d = v->domain;
>> +    struct page_info *pg = v->vmtrace.pg;
>> +    unsigned int i;
>> +
>> +    if ( !pg )
>> +        return;
>> +
>> +    v->vmtrace.pg = NULL;
>> +
>> +    for ( i = 0; i < (d->vmtrace_size >> PAGE_SHIFT); i++ )
>> +    {
>> +        put_page_alloc_ref(&pg[i]);
>> +        put_page_and_type(&pg[i]);
>> +    }
>> +}
>> +
>> +static int vmtrace_alloc_buffer(struct vcpu *v)
> You might as well make this return true/false, as the error code is
> ignored by the caller (at least in this patch).

At some point vcpu_create() needs to be fixed not to lose the error
code.  That is the real bug here.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 11:22:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 11:22:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79766.145261 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6XHO-0002I7-EG; Mon, 01 Feb 2021 11:22:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79766.145261; Mon, 01 Feb 2021 11:22:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6XHO-0002I0-9b; Mon, 01 Feb 2021 11:22:02 +0000
Received: by outflank-mailman (input) for mailman id 79766;
 Mon, 01 Feb 2021 11:22:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=npej=HD=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1l6XHN-0002Hv-32
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 11:22:01 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 57af58b9-a3aa-43f2-95b9-3ae5ff4c6585;
 Mon, 01 Feb 2021 11:22:00 +0000 (UTC)
Received: from rochebonne.antioche.eu.org (rochebonne
 [IPv6:2001:41d0:fe9d:1100:221:70ff:fe0c:9885])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTP id 111BLtWw011344;
 Mon, 1 Feb 2021 12:21:55 +0100 (MET)
Received: by rochebonne.antioche.eu.org (Postfix, from userid 1210)
 id 2AD292821; Mon,  1 Feb 2021 12:21:55 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 57af58b9-a3aa-43f2-95b9-3ae5ff4c6585
Date: Mon, 1 Feb 2021 12:21:55 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
        Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v2] libs/light: pass some infos to qemu
Message-ID: <20210201112155.GB832@antioche.eu.org>
References: <20210126224800.1246-1-bouyer@netbsd.org>
 <20210126224800.1246-12-bouyer@netbsd.org>
 <YBKbEhavZlpD75fU@Air-de-Roger>
 <20210130115013.GA2101@antioche.eu.org>
 <YBe2RSZeJBeMybdt@Air-de-Roger>
 <20210201093939.GB624@antioche.eu.org>
 <YBfd62T1LNh+zyjj@Air-de-Roger>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <YBfd62T1LNh+zyjj@Air-de-Roger>
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [IPv6:2001:41d0:fe9d:1101:0:0:0:1]); Mon, 01 Feb 2021 12:21:55 +0100 (MET)

On Mon, Feb 01, 2021 at 11:54:35AM +0100, Roger Pau Monné wrote:
> > I can, but how do I get the script removed from qemu-traditional?
> > It's a different repo, isn't it ?
> 
> Yes, it's:
> 
> http://xenbits.xen.org/gitweb/?p=qemu-xen-traditional.git;a=summary
> 
> I would remove it from qemu-trad and then only install from
> hotplug/NetBSD if it's not already there? Or maybe just force-install
> it from hotplug/NetBSD even if it's already present?

OK will try that

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 11:29:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 11:29:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79770.145273 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6XOK-0002YE-5O; Mon, 01 Feb 2021 11:29:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79770.145273; Mon, 01 Feb 2021 11:29:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6XOK-0002Y7-1a; Mon, 01 Feb 2021 11:29:12 +0000
Received: by outflank-mailman (input) for mailman id 79770;
 Mon, 01 Feb 2021 11:29:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jUbX=HD=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1l6XOI-0002Y2-Ad
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 11:29:10 +0000
Received: from mail-wr1-x432.google.com (unknown [2a00:1450:4864:20::432])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 39cf8ec7-362d-469c-bcbd-fa41da21f339;
 Mon, 01 Feb 2021 11:29:09 +0000 (UTC)
Received: by mail-wr1-x432.google.com with SMTP id v15so16201879wrx.4
 for <xen-devel@lists.xenproject.org>; Mon, 01 Feb 2021 03:29:09 -0800 (PST)
Received: from localhost.localdomain (7.red-83-57-171.dynamicip.rima-tde.net.
 [83.57.171.7])
 by smtp.gmail.com with ESMTPSA id x81sm21819860wmg.40.2021.02.01.03.29.07
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 Feb 2021 03:29:07 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
X-Inumbo-ID: 39cf8ec7-362d-469c-bcbd-fa41da21f339
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=sender:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=TX4T29tQMNqmDsICQt4DTx558hzr8lARDzNE3099YzQ=;
        b=cmfFtwCA+tgOSGKlnimSvXFLT15H8AxoX9KZP875N9P3SFSZc5jw8QxAHtF0UpmvUp
         i8mAwIklyL85PQE2+TqNbKMxp9S7e/K724ACw4np3/cgi/SLI/BsS1zMw9WR5WjX+/Ok
         RpgyUVbzxhIWeOdfC7lBP+BW38aXonPfyMCw2MLBDvP02Ef6Ha/PE/wKb0Wj9USH976u
         3hO+qiXXjTmtrkTcN6AfzSrqUWoX0uIQP9VpV+e4TmKLu4b6kC8BTgKzr6IVeGdaDKvH
         8L9LqVl3FU/ilIjd41mGqsykj/EcDtipVF4VsJ5uh0sHjamWoKv+CZU+tRMQfUBmuosN
         Fb2Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
         :mime-version:content-transfer-encoding;
        bh=TX4T29tQMNqmDsICQt4DTx558hzr8lARDzNE3099YzQ=;
        b=fBkxIojzL5XJDvcvrCOahMaajIbmeWdy/BYxoMKFJsvaYszCc3k46lEWYY3nqFpCM4
         n9CjRbgXy3qgbiKGMTIWqcOcLWttOacQcJMUxC+boptG132o1+Ghc0VFnVeASiB7hrq1
         SOWoHXMUACSgsqyCe5A+uS+X+r/8+xlju/9piD/jVDKfBr1qzE9yCXHdLK82/xPuJ4DD
         +6uEw6FQZ8BT6/JbLBLNJEFOWSZTcKDAG4beJvMY5I7+MMPySPFi6U61MM0KueXLSTLp
         k0ohK92NOf3CvzXm/2/uPlPGZ4JIxjOPIqnyWXUv2edIRidJVadBu5mc5grjGUJ8uShj
         6nYg==
X-Gm-Message-State: AOAM531ZL5RIrYuKv+JbI/HqSbXnBroT7EGZh32hU7cKKaqxPvYL7lVn
	kqlf5gGY3U3vBNK0xhHIIuE=
X-Google-Smtp-Source: ABdhPJw0nJTtf9Bc1689UuS28OIRlRbCYYGpd67R1WBnjeVFdvngEGU5yH3pxA6EJNWEunMJWx8L4w==
X-Received: by 2002:adf:f452:: with SMTP id f18mr17213716wrp.11.1612178948626;
        Mon, 01 Feb 2021 03:29:08 -0800 (PST)
Sender: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philippe.mathieu.daude@gmail.com>
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: =?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Eduardo Habkost <ehabkost@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	xen-devel@lists.xenproject.org,
	Richard Henderson <richard.henderson@linaro.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
Subject: [PATCH v3 0/7] hw/xen: Introduce XEN_FV/XEN_PV Kconfig
Date: Mon,  1 Feb 2021 12:28:58 +0100
Message-Id: <20210201112905.545144-1-f4bug@amsat.org>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit

Sort the Xen buildsys glue a bit.

The first patches are probably ready now.

Since v2:
- Addressed some of Paolo's comments
- More fixes
- XEN_PV still not buildable alone -> postponed

v2: Considered Paolo's comments from v1

Philippe Mathieu-Daudé (7):
  meson: Do not build Xen x86_64-softmmu on Aarch64
  hw/xen: Relax dependency on FSDEV_9P
  accel/xen: Incorporate xen-mapcache.c
  hw/i386/xen: Introduce XEN_FV Kconfig
  hw/xen: Make xen_shutdown_fatal_error() available out of X86 HVM
  hw/xen: Make qmp_xen_set_global_dirty_log() available out of X86 HVM
  NOTFORMERGE hw/xen/Kconfig: Introduce XEN_PV config

 meson.build                           |  8 ++++++--
 accel/xen/trace.h                     |  1 +
 {hw/i386 => accel}/xen/xen-mapcache.c |  0
 hw/i386/xen/xen-hvm.c                 | 24 ------------------------
 hw/xen/xen-legacy-backend.c           |  3 ++-
 hw/xen/xen-migration.c                | 22 ++++++++++++++++++++++
 hw/xen/xen-utils.c                    | 25 +++++++++++++++++++++++++
 accel/Kconfig                         |  2 +-
 accel/xen/meson.build                 |  5 ++++-
 accel/xen/trace-events                | 10 ++++++++++
 hw/Kconfig                            |  1 +
 hw/i386/Kconfig                       |  2 ++
 hw/i386/xen/Kconfig                   |  5 +++++
 hw/i386/xen/meson.build               |  3 +--
 hw/i386/xen/trace-events              |  6 ------
 hw/xen/Kconfig                        |  7 +++++++
 hw/xen/meson.build                    |  4 +++-
 17 files changed, 90 insertions(+), 38 deletions(-)
 create mode 100644 accel/xen/trace.h
 rename {hw/i386 => accel}/xen/xen-mapcache.c (100%)
 create mode 100644 hw/xen/xen-migration.c
 create mode 100644 hw/xen/xen-utils.c
 create mode 100644 accel/xen/trace-events
 create mode 100644 hw/i386/xen/Kconfig
 create mode 100644 hw/xen/Kconfig

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Feb 01 11:29:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 11:29:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79771.145285 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6XOP-0002a3-D2; Mon, 01 Feb 2021 11:29:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79771.145285; Mon, 01 Feb 2021 11:29:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6XOP-0002Zw-9C; Mon, 01 Feb 2021 11:29:17 +0000
Received: by outflank-mailman (input) for mailman id 79771;
 Mon, 01 Feb 2021 11:29:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jUbX=HD=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1l6XON-0002Y2-BJ
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 11:29:15 +0000
Received: from mail-wm1-x331.google.com (unknown [2a00:1450:4864:20::331])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2826e767-af4e-40bf-875f-db97620a9052;
 Mon, 01 Feb 2021 11:29:14 +0000 (UTC)
Received: by mail-wm1-x331.google.com with SMTP id o10so11297372wmc.1
 for <xen-devel@lists.xenproject.org>; Mon, 01 Feb 2021 03:29:14 -0800 (PST)
Received: from localhost.localdomain (7.red-83-57-171.dynamicip.rima-tde.net.
 [83.57.171.7])
 by smtp.gmail.com with ESMTPSA id f4sm26998449wrs.34.2021.02.01.03.29.12
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 Feb 2021 03:29:13 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
X-Inumbo-ID: 2826e767-af4e-40bf-875f-db97620a9052
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=donqcXl7R+CymjCL8vc7v5IAilQU7H4mg0fNs68sCzA=;
        b=e5igdUOKRu+nuBY0Zu2Ohn5JfnspcW8pqSPyZT4CaQx7HCrrRF6/uOf16FFra0u+BJ
         Ip2eqn5lYrT8So/ZzTJBuOK58LV2uQ5OvMNgRK5m2cMRvTX/49GjhKxORVzEa4eJrH74
         C4rzdb13gibI6wNTUMXHZhK73uON9FKBUZnBJhiYzLyhWkCmOv7h2ZqS35nM2cwMg30D
         FH5wZcwae4yATnosrM5VXNPMx3HwFM9aanKYq4QURPzWTywVp1ERBuFqs6BnMYJqn7N5
         PcaTnrymB0a7DiW8jS1VFjqewxxHRSRqy9WA89Acn/1D/R6X3o5X5PvPGCkMXM1ctXvg
         FHgQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
         :in-reply-to:references:mime-version:content-transfer-encoding;
        bh=donqcXl7R+CymjCL8vc7v5IAilQU7H4mg0fNs68sCzA=;
        b=cztcXdWNGfHSeS6hA7wpeny0ZIAnxRsV+96Ja3ZGIcB4n5Mow0BDVkMojRXmc98x/W
         UYer16kpUOLYnlJCBMGbo75f0Rla3omsv8KD0IZnUDCa2V0VgKK+ooShR3XHSq/MO7qJ
         Cv8N614tMRCLjwPK3TEt+Ock6IvP5+H5CoPO7c/aVqtWMzIyYqfCkIuI6VX0V1+/XcCX
         ryd3h4QSK8xP2G0f6r9+1os90pn8KF+1DRvzjCCCMmA/GsQY/X7u+nAdQbUZEQUIZzqk
         RsKNMqWQF41w2zvQsXiasmxdk1PybtfNSLolKhSYF6I8eOTcqJEbBtAlSSTorlqpikTj
         Fe+Q==
X-Gm-Message-State: AOAM533p2qcePZH52MbzPVrZvToUk+7Qm7Sz2moy2aHgZktf02b+sleq
	HH4C5UeFJb6G5r9JReyc6OY=
X-Google-Smtp-Source: ABdhPJx1pElW3YJm/WsOQahhG2HH5xZse6fgcEBep2dVSJwz1qWS4aDb1550CqyGQkCPJFpsp90m2g==
X-Received: by 2002:a7b:c09a:: with SMTP id r26mr1225594wmh.60.1612178953873;
        Mon, 01 Feb 2021 03:29:13 -0800 (PST)
Sender: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philippe.mathieu.daude@gmail.com>
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: =?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Eduardo Habkost <ehabkost@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	xen-devel@lists.xenproject.org,
	Richard Henderson <richard.henderson@linaro.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
Subject: [PATCH v3 1/7] meson: Do not build Xen x86_64-softmmu on Aarch64
Date: Mon,  1 Feb 2021 12:28:59 +0100
Message-Id: <20210201112905.545144-2-f4bug@amsat.org>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210201112905.545144-1-f4bug@amsat.org>
References: <20210201112905.545144-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The Xen on ARM documentation only mentions the i386-softmmu
target. As x86_64-softmmu doesn't seem to be used, remove it
to avoid wasting CPU cycles building it.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 meson.build | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/meson.build b/meson.build
index f00b7754fd4..97a577a7743 100644
--- a/meson.build
+++ b/meson.build
@@ -74,10 +74,10 @@
 endif
 
 accelerator_targets = { 'CONFIG_KVM': kvm_targets }
-if cpu in ['x86', 'x86_64', 'arm', 'aarch64']
+if cpu in ['arm', 'aarch64']
   # i368 emulator provides xenpv machine type for multiple architectures
   accelerator_targets += {
-    'CONFIG_XEN': ['i386-softmmu', 'x86_64-softmmu'],
+    'CONFIG_XEN': ['i386-softmmu'],
   }
 endif
 if cpu in ['x86', 'x86_64']
@@ -85,6 +85,7 @@
     'CONFIG_HAX': ['i386-softmmu', 'x86_64-softmmu'],
     'CONFIG_HVF': ['x86_64-softmmu'],
     'CONFIG_WHPX': ['i386-softmmu', 'x86_64-softmmu'],
+    'CONFIG_XEN': ['i386-softmmu', 'x86_64-softmmu'],
   }
 endif
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Feb 01 11:29:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 11:29:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79772.145297 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6XOT-0002df-OW; Mon, 01 Feb 2021 11:29:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79772.145297; Mon, 01 Feb 2021 11:29:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6XOT-0002dV-Kz; Mon, 01 Feb 2021 11:29:21 +0000
Received: by outflank-mailman (input) for mailman id 79772;
 Mon, 01 Feb 2021 11:29:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jUbX=HD=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1l6XOS-0002d8-UU
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 11:29:20 +0000
Received: from mail-wr1-x435.google.com (unknown [2a00:1450:4864:20::435])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 712290df-d38f-45e8-ae26-fea464031983;
 Mon, 01 Feb 2021 11:29:20 +0000 (UTC)
Received: by mail-wr1-x435.google.com with SMTP id q7so16138853wre.13
 for <xen-devel@lists.xenproject.org>; Mon, 01 Feb 2021 03:29:20 -0800 (PST)
Received: from localhost.localdomain (7.red-83-57-171.dynamicip.rima-tde.net.
 [83.57.171.7])
 by smtp.gmail.com with ESMTPSA id y6sm26305538wrp.6.2021.02.01.03.29.18
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 Feb 2021 03:29:18 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
X-Inumbo-ID: 712290df-d38f-45e8-ae26-fea464031983
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=aPiraoQq+wBjdcIQOiUTo/rKER9nPb2vFzjApKq2T+g=;
        b=ko4bHURsbdywzQeAqTKNRMtP+zzJ6yTACMFwMAle7bWMNtkvJufYJrtPUI/M5VW3/O
         frmacZk8z5EwOYVKlR1W7pU0lJDo4Xtr7KDcSHqPIQGwsREIlJ5kPDQQ61Cr1BYMoUWE
         CF0rc60JUGkcPOKbMV0O0+zHtMU1Tui4QSz+lBYsHlKckNcTiNoyfbBX8hLgAYBcFvOE
         uNIFEwGblR6Um04jIkD8OHxk0Gs5LrqqQzd0x+eiVKhrXYbg0AxzeRgUoumA1uDQTC4m
         FxrgOOgPU5iJBrHon1c+gWeae2DbmRnTY7+cbkCrU59wU2snpSOPEcSb8jlKgXrUPOGp
         zwjA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
         :in-reply-to:references:mime-version:content-transfer-encoding;
        bh=aPiraoQq+wBjdcIQOiUTo/rKER9nPb2vFzjApKq2T+g=;
        b=ojeT7zw2DR0nZAaMP9ChrBI55nFDHgcpMPuarD5KOds/pMTM41uEfhDqHqCNHrfJAn
         qfwJIeZUzDjC8U9fqHonzTBA94hYk4Fsmn7MX14P763olOjv+/A2QxEhjOjxYeLdIF3f
         m5GSuYMI1XuPHd1+pNehtNPC1ReAS7jA/pG5c3Nn/H04xLU1MTPBfKe12X7Wjdsy5AM7
         mLLYsub9HEQaFw0RFqoa3dL9jTYPu4djTQ14tgadEEX3yOKtyEusoKiDPXUWlnjuzMJh
         resaWkw6uxKuBHAtN416dDapcmoUoKsw+fZMCyxaDkmElpIUYHtfcrp4DwHeQFAQ+ZDe
         XNIQ==
X-Gm-Message-State: AOAM530PjKulwDdGeLI1W5LSYaZE4A8mdfZZMIBsJFSJ9psw9cFcC7FX
	w3Zn+xmciA/yGN4jaHiQ6Rg=
X-Google-Smtp-Source: ABdhPJwPt7ERal1fQeDzwH3w64qDIOwi7IRP9XTWdLtX/WYy5c6IDx5O5BK4Mw3ZLOmXAK4MQef0Tw==
X-Received: by 2002:a5d:4e0e:: with SMTP id p14mr17735664wrt.58.1612178959403;
        Mon, 01 Feb 2021 03:29:19 -0800 (PST)
Sender: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philippe.mathieu.daude@gmail.com>
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: =?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Eduardo Habkost <ehabkost@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	xen-devel@lists.xenproject.org,
	Richard Henderson <richard.henderson@linaro.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
Subject: [PATCH v3 2/7] hw/xen: Relax dependency on FSDEV_9P
Date: Mon,  1 Feb 2021 12:29:00 +0100
Message-Id: <20210201112905.545144-3-f4bug@amsat.org>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210201112905.545144-1-f4bug@amsat.org>
References: <20210201112905.545144-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Relax the dependency on 9pfs by using the 'imply' Kconfig rule.
This fixes the build when XEN_PV is selected without XEN_FV:

  /usr/bin/ld: libcommon.fa.p/hw_xen_xen-legacy-backend.c.o: in function
  `xen_be_register_common':
  hw/xen/xen-legacy-backend.c:754: undefined reference to `xen_9pfs_ops'

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 hw/xen/xen-legacy-backend.c | 3 ++-
 accel/Kconfig               | 2 +-
 hw/xen/meson.build          | 2 +-
 3 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/hw/xen/xen-legacy-backend.c b/hw/xen/xen-legacy-backend.c
index b61a4855b7b..338d443a5c0 100644
--- a/hw/xen/xen-legacy-backend.c
+++ b/hw/xen/xen-legacy-backend.c
@@ -33,6 +33,7 @@
 #include "hw/xen/xen-legacy-backend.h"
 #include "hw/xen/xen_pvdev.h"
 #include "monitor/qdev.h"
+#include CONFIG_DEVICES
 
 DeviceState *xen_sysdev;
 BusState *xen_sysbus;
@@ -750,7 +751,7 @@ void xen_be_register_common(void)
 
     xen_be_register("console", &xen_console_ops);
     xen_be_register("vkbd", &xen_kbdmouse_ops);
-#ifdef CONFIG_VIRTFS
+#ifdef CONFIG_FSDEV_9P
     xen_be_register("9pfs", &xen_9pfs_ops);
 #endif
 #ifdef CONFIG_USB_LIBUSB
diff --git a/accel/Kconfig b/accel/Kconfig
index 461104c7715..7565ccf69e6 100644
--- a/accel/Kconfig
+++ b/accel/Kconfig
@@ -15,4 +15,4 @@ config KVM
 
 config XEN
     bool
-    select FSDEV_9P if VIRTFS
+    imply FSDEV_9P
diff --git a/hw/xen/meson.build b/hw/xen/meson.build
index 076954b89ca..3c2062b9b3e 100644
--- a/hw/xen/meson.build
+++ b/hw/xen/meson.build
@@ -2,12 +2,12 @@
   'xen-backend.c',
   'xen-bus-helper.c',
   'xen-bus.c',
-  'xen-legacy-backend.c',
   'xen_devconfig.c',
   'xen_pvdev.c',
 ))
 
 xen_specific_ss = ss.source_set()
+xen_specific_ss.add(files('xen-legacy-backend.c'))
 xen_specific_ss.add(when: 'CONFIG_XEN_PCI_PASSTHROUGH', if_true: files(
   'xen-host-pci-device.c',
   'xen_pt.c',
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Feb 01 11:29:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 11:29:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79773.145309 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6XOa-0002ix-1U; Mon, 01 Feb 2021 11:29:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79773.145309; Mon, 01 Feb 2021 11:29:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6XOZ-0002io-UZ; Mon, 01 Feb 2021 11:29:27 +0000
Received: by outflank-mailman (input) for mailman id 79773;
 Mon, 01 Feb 2021 11:29:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jUbX=HD=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1l6XOY-0002hn-EF
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 11:29:26 +0000
Received: from mail-wm1-x331.google.com (unknown [2a00:1450:4864:20::331])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5cc4cb1c-3b29-4634-bc0a-6dfffa42d20c;
 Mon, 01 Feb 2021 11:29:25 +0000 (UTC)
Received: by mail-wm1-x331.google.com with SMTP id y187so12868892wmd.3
 for <xen-devel@lists.xenproject.org>; Mon, 01 Feb 2021 03:29:25 -0800 (PST)
Received: from localhost.localdomain (7.red-83-57-171.dynamicip.rima-tde.net.
 [83.57.171.7])
 by smtp.gmail.com with ESMTPSA id s23sm20324467wmc.35.2021.02.01.03.29.23
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 Feb 2021 03:29:24 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
X-Inumbo-ID: 5cc4cb1c-3b29-4634-bc0a-6dfffa42d20c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=86kgWxqaVQ459jB6Z+UlH8D45KqeOXWfxBT1ho7ZaU4=;
        b=OvgFBK6bmxqweSp3L77Aj8WaAxT1RQK0LtEk3i36jsBBT/QaFtswrGEa72xZLMd4Yb
         HIUhY+oLNPcVTx0grppbuToABViBp+iIgP1lhjj7VUfVrYskRS511GA7hnokxOH0+1hl
         Llq6UzcwQZKox5D1iLDy4JUoVe9v7MvTlaPAwPhQGGDyHg/dG4Y7wUG4nevlaACcQCNa
         OGhW/bC2jrKwZhFOXWeH02rHFHQET1ePy5uJIr09AQTtEe/LqnRUMrCDsUr7yNukAjNX
         KfV/ZLVI7sFYmKWV4CFTTIr/ggz3eMOkgOHARcOO5pOvHUgqxm9ZDMwGiTNoRja/YO0o
         FkWg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
         :in-reply-to:references:mime-version:content-transfer-encoding;
        bh=86kgWxqaVQ459jB6Z+UlH8D45KqeOXWfxBT1ho7ZaU4=;
        b=FAuKnR2pvYsvQUFztPtDtY9FcUAiTWAuSmhxqI1IdrHCFB2prtLkck4eLyrC+VP5F5
         gG1D6QY3bT79qZHwcVUNOH0+NusDtAbh8ZTiTF/7Pu3mVJQKof11r9ikbHLLz7G7L5U7
         WrQBsEq3t3xVsvNghbFeW2C6rlbEYZudX+7NbKr7cdmBFv5WQiH0H54OTElmzIEJ+6Vi
         cRwczpKPpt4cuIMyiw7UldvXV2Z64sEQZnIS5VNcJNDX5p/pPHR+aZKCHxTWeNsp3oJq
         D1e4oolEa9Jbf62zpNxkFjy/SYQ46SjnOQnEu6Ueb8NQ3hlJ/i6Ptlb43oi1wC69vG/M
         McQQ==
X-Gm-Message-State: AOAM533/Q20nBD6/cguvOgiYMg/UJzy7sYawFcSaY+gxfsVzevm/WowF
	8ovr/v9FvQ2WmR4/VeV+XiY=
X-Google-Smtp-Source: ABdhPJxcRgfDMwb1tHeCu3Tj2p4+drVJi66+iJFM6aft3sJqiy4qEUbcRAahN2gmhZHZ8T/I+s19pw==
X-Received: by 2002:a1c:f415:: with SMTP id z21mr14727242wma.114.1612178964810;
        Mon, 01 Feb 2021 03:29:24 -0800 (PST)
Sender: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philippe.mathieu.daude@gmail.com>
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: =?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Eduardo Habkost <ehabkost@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	xen-devel@lists.xenproject.org,
	Richard Henderson <richard.henderson@linaro.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
Subject: [PATCH v3 3/7] accel/xen: Incorporate xen-mapcache.c
Date: Mon,  1 Feb 2021 12:29:01 +0100
Message-Id: <20210201112905.545144-4-f4bug@amsat.org>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210201112905.545144-1-f4bug@amsat.org>
References: <20210201112905.545144-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

xen-mapcache.c contains accelerator-related routines that are
not particular to the X86 HVM machine. Move this file to
accel/xen/, adapting the build system machinery accordingly.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 meson.build                           |  3 +++
 accel/xen/trace.h                     |  1 +
 {hw/i386 => accel}/xen/xen-mapcache.c |  0
 hw/i386/xen/xen-hvm.c                 |  1 -
 accel/xen/meson.build                 |  5 ++++-
 accel/xen/trace-events                | 10 ++++++++++
 hw/i386/xen/meson.build               |  1 -
 hw/i386/xen/trace-events              |  6 ------
 8 files changed, 18 insertions(+), 9 deletions(-)
 create mode 100644 accel/xen/trace.h
 rename {hw/i386 => accel}/xen/xen-mapcache.c (100%)
 create mode 100644 accel/xen/trace-events

diff --git a/meson.build b/meson.build
index 97a577a7743..f2e778f22cd 100644
--- a/meson.build
+++ b/meson.build
@@ -1706,6 +1706,9 @@
   'crypto',
   'monitor',
 ]
+if 'CONFIG_XEN' in accelerators
+  trace_events_subdirs += [ 'accel/xen' ]
+endif
 if have_user
   trace_events_subdirs += [ 'linux-user' ]
 endif
diff --git a/accel/xen/trace.h b/accel/xen/trace.h
new file mode 100644
index 00000000000..f6be599b187
--- /dev/null
+++ b/accel/xen/trace.h
@@ -0,0 +1 @@
+#include "trace/trace-accel_xen.h"
diff --git a/hw/i386/xen/xen-mapcache.c b/accel/xen/xen-mapcache.c
similarity index 100%
rename from hw/i386/xen/xen-mapcache.c
rename to accel/xen/xen-mapcache.c
diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
index 68821d90f52..7156ab13329 100644
--- a/hw/i386/xen/xen-hvm.c
+++ b/hw/i386/xen/xen-hvm.c
@@ -31,7 +31,6 @@
 #include "sysemu/runstate.h"
 #include "sysemu/sysemu.h"
 #include "sysemu/xen.h"
-#include "sysemu/xen-mapcache.h"
 #include "trace.h"
 #include "exec/address-spaces.h"
 
diff --git a/accel/xen/meson.build b/accel/xen/meson.build
index 002bdb03c62..45671e4bdbf 100644
--- a/accel/xen/meson.build
+++ b/accel/xen/meson.build
@@ -1 +1,4 @@
-specific_ss.add(when: 'CONFIG_XEN', if_true: files('xen-all.c'))
+specific_ss.add(when: 'CONFIG_XEN', if_true: files(
+  'xen-all.c',
+  'xen-mapcache.c',
+))
diff --git a/accel/xen/trace-events b/accel/xen/trace-events
new file mode 100644
index 00000000000..30bf4f42283
--- /dev/null
+++ b/accel/xen/trace-events
@@ -0,0 +1,10 @@
+# See docs/devel/tracing.txt for syntax documentation.
+
+# xen-hvm.c
+xen_ram_alloc(unsigned long ram_addr, unsigned long size) "requested: 0x%lx, size 0x%lx"
+
+# xen-mapcache.c
+xen_map_cache(uint64_t phys_addr) "want 0x%"PRIx64
+xen_remap_bucket(uint64_t index) "index 0x%"PRIx64
+xen_map_cache_return(void* ptr) "%p"
+
diff --git a/hw/i386/xen/meson.build b/hw/i386/xen/meson.build
index be84130300c..2fcc46e6ca1 100644
--- a/hw/i386/xen/meson.build
+++ b/hw/i386/xen/meson.build
@@ -1,6 +1,5 @@
 i386_ss.add(when: 'CONFIG_XEN', if_true: files(
   'xen-hvm.c',
-  'xen-mapcache.c',
   'xen_apic.c',
   'xen_platform.c',
   'xen_pvdevice.c',
diff --git a/hw/i386/xen/trace-events b/hw/i386/xen/trace-events
index ca3a4948baa..f1b36d164d9 100644
--- a/hw/i386/xen/trace-events
+++ b/hw/i386/xen/trace-events
@@ -20,9 +20,3 @@ cpu_ioreq_move(void *req, uint32_t dir, uint32_t df, uint32_t data_is_ptr, uint6
 xen_map_resource_ioreq(uint32_t id, void *addr) "id: %u addr: %p"
 cpu_ioreq_config_read(void *req, uint32_t sbdf, uint32_t reg, uint32_t size, uint32_t data) "I/O=%p sbdf=0x%x reg=%u size=%u data=0x%x"
 cpu_ioreq_config_write(void *req, uint32_t sbdf, uint32_t reg, uint32_t size, uint32_t data) "I/O=%p sbdf=0x%x reg=%u size=%u data=0x%x"
-
-# xen-mapcache.c
-xen_map_cache(uint64_t phys_addr) "want 0x%"PRIx64
-xen_remap_bucket(uint64_t index) "index 0x%"PRIx64
-xen_map_cache_return(void* ptr) "%p"
-
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Feb 01 11:29:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 11:29:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79774.145321 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6XOf-0002pr-Bu; Mon, 01 Feb 2021 11:29:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79774.145321; Mon, 01 Feb 2021 11:29:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6XOf-0002pj-7x; Mon, 01 Feb 2021 11:29:33 +0000
Received: by outflank-mailman (input) for mailman id 79774;
 Mon, 01 Feb 2021 11:29:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jUbX=HD=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1l6XOe-0002mv-1S
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 11:29:32 +0000
Received: from mail-wr1-x433.google.com (unknown [2a00:1450:4864:20::433])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 29a3a60d-7caa-4bd4-ba58-bd1247de4c2d;
 Mon, 01 Feb 2021 11:29:31 +0000 (UTC)
Received: by mail-wr1-x433.google.com with SMTP id l12so16189174wry.2
 for <xen-devel@lists.xenproject.org>; Mon, 01 Feb 2021 03:29:31 -0800 (PST)
Received: from localhost.localdomain (7.red-83-57-171.dynamicip.rima-tde.net.
 [83.57.171.7])
 by smtp.gmail.com with ESMTPSA id o124sm21363300wmb.5.2021.02.01.03.29.29
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 Feb 2021 03:29:30 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
X-Inumbo-ID: 29a3a60d-7caa-4bd4-ba58-bd1247de4c2d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=TsrsxDQVz5tygrPJnMSHXRKo3FHIhV5VlahOLojrTSs=;
        b=tdY/GT4s+Svs7EsgQ8L7VZEN1EU3auIv2pg9z7H40kDst9zeyUzRC0p5gOsl8bvrpe
         YEfky/hcCoc8afoXnaLPVYRZxcU/v8rs/k6MaSMzq+ctcEKeAEQchfrjelV4MrfHwi0D
         eC1Rwq0VU0EF2HasdLW5V7eFRpwPFIhTKRmnxU3el+nAOKtdZ98sCrKCVIybhaq+xOC2
         W1yQWSaG7E6lhP016iJy6W+p1SEDmo8wCHivGUSQQBQIjoJ5GdDEc2vIbE170r2MxnPF
         gPKC/tKyGsVI8pq30CfaNfuu6QhMNJ6IKY0QfXPEW0jFuOpChYSTPDJDreWcOvCxTkpk
         arbw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
         :in-reply-to:references:mime-version:content-transfer-encoding;
        bh=TsrsxDQVz5tygrPJnMSHXRKo3FHIhV5VlahOLojrTSs=;
        b=ill1FsUN/50ORfYoW2yWZQN/GStXFjWrvEDvR5sj9hwAgfKcSr0kfjYPUOI6wutlZP
         Mn0pAQJsAD0XMZYQMNv4KAgZAqTFcrdSt83p7lkkiHsXbwfO/wvgh+GHJLhgvGwJyGr7
         CRW+1xhuAke233/zuxtMGMlibozOR1WySofhkat9UoEApHKCpJReQiwr3QtM0osd4B55
         33QkpoKjqdj7i5G7SbqTC9rbzu9+Pe7EBNiXcpFXapIaF5htr5l18qQTSFDH2CWHbsnY
         0MfDVEN4XGffR3ojYbpOF0l/QXKu0VLMM3b+miAe5k91i93w6NFkWVAc+jqjpfC/dn9u
         d9Yw==
X-Gm-Message-State: AOAM5313DvY5v5z02cwWV/Q5uhYkkhNmJj/JoyL9RNhb+KsMueAJo/vt
	jNg5BBGJWKn9kh9Bn1RwaDs=
X-Google-Smtp-Source: ABdhPJyJ59H9X2zflNDtn/k6K0IuGMyavaSeLpb40ogb98MM80P8GyEJQcxbrbHoCIm5tQ98H9B8gw==
X-Received: by 2002:adf:d1c2:: with SMTP id b2mr17953783wrd.296.1612178970576;
        Mon, 01 Feb 2021 03:29:30 -0800 (PST)
Sender: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philippe.mathieu.daude@gmail.com>
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: =?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Eduardo Habkost <ehabkost@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	xen-devel@lists.xenproject.org,
	Richard Henderson <richard.henderson@linaro.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
Subject: [PATCH v3 4/7] hw/i386/xen: Introduce XEN_FV Kconfig
Date: Mon,  1 Feb 2021 12:29:02 +0100
Message-Id: <20210201112905.545144-5-f4bug@amsat.org>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210201112905.545144-1-f4bug@amsat.org>
References: <20210201112905.545144-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Introduce the XEN_FV Kconfig symbol to differentiate the machine
from the accelerator.

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 hw/i386/Kconfig         | 2 ++
 hw/i386/xen/Kconfig     | 5 +++++
 hw/i386/xen/meson.build | 2 +-
 3 files changed, 8 insertions(+), 1 deletion(-)
 create mode 100644 hw/i386/xen/Kconfig

diff --git a/hw/i386/Kconfig b/hw/i386/Kconfig
index 7f91f30877f..b4c8aa5c242 100644
--- a/hw/i386/Kconfig
+++ b/hw/i386/Kconfig
@@ -1,3 +1,5 @@
+source xen/Kconfig
+
 config SEV
     bool
     depends on KVM
diff --git a/hw/i386/xen/Kconfig b/hw/i386/xen/Kconfig
new file mode 100644
index 00000000000..ad9d774b9ea
--- /dev/null
+++ b/hw/i386/xen/Kconfig
@@ -0,0 +1,5 @@
+config XEN_FV
+    bool
+    default y if XEN
+    depends on XEN
+    select I440FX
diff --git a/hw/i386/xen/meson.build b/hw/i386/xen/meson.build
index 2fcc46e6ca1..37716b42673 100644
--- a/hw/i386/xen/meson.build
+++ b/hw/i386/xen/meson.build
@@ -1,4 +1,4 @@
-i386_ss.add(when: 'CONFIG_XEN', if_true: files(
+i386_ss.add(when: 'CONFIG_XEN_FV', if_true: files(
   'xen-hvm.c',
   'xen_apic.c',
   'xen_platform.c',
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Feb 01 11:29:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 11:29:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79776.145333 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6XOl-0002wg-N9; Mon, 01 Feb 2021 11:29:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79776.145333; Mon, 01 Feb 2021 11:29:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6XOl-0002wZ-Ij; Mon, 01 Feb 2021 11:29:39 +0000
Received: by outflank-mailman (input) for mailman id 79776;
 Mon, 01 Feb 2021 11:29:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jUbX=HD=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1l6XOj-0002vK-R1
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 11:29:37 +0000
Received: from mail-wr1-x42c.google.com (unknown [2a00:1450:4864:20::42c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 37583e1b-ab6b-43b0-8e0e-5006d3c89b0a;
 Mon, 01 Feb 2021 11:29:36 +0000 (UTC)
Received: by mail-wr1-x42c.google.com with SMTP id c12so16174405wrc.7
 for <xen-devel@lists.xenproject.org>; Mon, 01 Feb 2021 03:29:36 -0800 (PST)
Received: from localhost.localdomain (7.red-83-57-171.dynamicip.rima-tde.net.
 [83.57.171.7])
 by smtp.gmail.com with ESMTPSA id d30sm28931639wrc.92.2021.02.01.03.29.34
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 Feb 2021 03:29:35 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
X-Inumbo-ID: 37583e1b-ab6b-43b0-8e0e-5006d3c89b0a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=RGHdflD40CpleI2qTHXo49VNZ9h44Tapyh7dWA7hgYU=;
        b=FmjS7JgVOCypovovv65rLivJnwALFuw37uvTf4RtBSvcme+5wkDBdr5zU4iNwMntrn
         Hgt/tdJZflBiPWoHmBRxbG99i1qLXeOQhLcOcdLq9qXB0TmlTxseS1ch4a/PWI2VGyNc
         KhQhMpM8iJtQ5GJSdlxU9A9hqZKd4pN6UylDzcX8C0OlBu12yLaR09B56wzP2rPTQjLP
         vVOdaRCOy8DqkeRS6zSuluHJq/mKNmWcbYvw77lsOx940Rm9a7a3DW/Tcykdz5FQtilS
         Hw5P7oc0/+gBflEG2AzBjwZyrKln8W4hxuuwJMVvRwX92wB1tr9GJV1L07lkC5+RAABP
         ywOQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
         :in-reply-to:references:mime-version:content-transfer-encoding;
        bh=RGHdflD40CpleI2qTHXo49VNZ9h44Tapyh7dWA7hgYU=;
        b=NBBab+qLAE/ZEXihWRAm0NPtcHqe7fmsKmOqN/BcaW3OEfH+9PC4J9j1eOOscIo9uP
         ufWAyOqlOFvtcWDpzUfSb+b8vJEoZq5ciNDVBGs9NavcAuB1rjaaumOxqa/Cp1UGJiGG
         oSObsmMGBKbXmZSw7iW8iDMR6ST4T0RRtyU9Pi6Wp430DpUqgp7bNFwByOdiHj/9slBe
         AUoOcxuXipU/DScjOcHRTswuzRRqfRaIDgNnaaUbKomwzVlNU0yHVsUkYd/Xgc+eCNJD
         FX3IS+W8PI66Po3N74tUyHfNceQ9E6XX514qzOWElR3E+/NuL7FnhhJkbvs2ot1CuNhm
         cwDw==
X-Gm-Message-State: AOAM530GOaCjnIXO2nEGUPx53gAX5s0de+xu5mJMikr9iT2Tp7xzgMq3
	QU2s7jR1l2WrXdpg+FtWWL0=
X-Google-Smtp-Source: ABdhPJzvuBjmMW0UYZ6d5GxA8BEVDlhSDPc0KocqiT7XJMYbQPMV9DKL5NtfvrqJaptzIHmm77u+zg==
X-Received: by 2002:adf:d1cb:: with SMTP id b11mr17935101wrd.118.1612178976190;
        Mon, 01 Feb 2021 03:29:36 -0800 (PST)
Sender: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philippe.mathieu.daude@gmail.com>
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: =?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Eduardo Habkost <ehabkost@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	xen-devel@lists.xenproject.org,
	Richard Henderson <richard.henderson@linaro.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
Subject: [PATCH v3 5/7] hw/xen: Make xen_shutdown_fatal_error() available out of X86 HVM
Date: Mon,  1 Feb 2021 12:29:03 +0100
Message-Id: <20210201112905.545144-6-f4bug@amsat.org>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210201112905.545144-1-f4bug@amsat.org>
References: <20210201112905.545144-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

xen_shutdown_fatal_error() is also used by XEN_PV.

This fixes the build when XEN_PV is enabled without XEN_FV:

  /usr/bin/ld: libqemu-x86_64-softmmu.fa.p/hw_xen_xen_pt_config_init.c.o: in function `xen_pt_status_reg_init':
  hw/xen/xen_pt_config_init.c:281: undefined reference to `xen_shutdown_fatal_error'
  /usr/bin/ld: hw/xen/xen_pt_config_init.c:275: undefined reference to `xen_shutdown_fatal_error'
  /usr/bin/ld: libqemu-x86_64-softmmu.fa.p/hw_xen_xen_pt.c.o: in function `xen_pt_pci_read_config':
  hw/xen/xen_pt.c:220: undefined reference to `xen_shutdown_fatal_error'
  /usr/bin/ld: libqemu-x86_64-softmmu.fa.p/hw_xen_xen_pt.c.o: in function `xen_pt_pci_write_config':
  hw/xen/xen_pt.c:369: undefined reference to `xen_shutdown_fatal_error'
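Since the helper is now shared by the PV and HVM configurations, its shape is worth noting: a plain variadic formatter followed by a fixed notice and a shutdown request. Below is a minimal, self-contained sketch of that pattern (not code from the tree; the function name and the buffer-based return are illustrative, standing in for the stderr output and the call to qemu_system_shutdown_request()):

```c
#include <stdarg.h>
#include <stdio.h>

/* Sketch of the variadic error-reporting pattern behind
 * xen_shutdown_fatal_error(): format the caller's message, append the
 * fixed "Will destroy the domain." notice, then (in QEMU) request a
 * SHUTDOWN_CAUSE_HOST_ERROR shutdown.  Here the message is written to a
 * caller-supplied buffer instead of stderr so the sketch stays
 * self-contained; the return value is the total message length. */
static int fatal_error_message(char *buf, size_t len, const char *fmt, ...)
{
    va_list ap;
    int n;

    va_start(ap, fmt);
    n = vsnprintf(buf, len, fmt, ap);
    va_end(ap);
    n += snprintf(buf + n, len - n, "Will destroy the domain.\n");
    return n;  /* the real helper would now request the shutdown */
}
```

Keeping the formatter free of machine-specific includes is what lets it live in hw/xen/ and link for both XEN_PV and XEN_FV builds.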

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 hw/i386/xen/xen-hvm.c | 13 -------------
 hw/xen/xen-utils.c    | 25 +++++++++++++++++++++++++
 hw/xen/meson.build    |  1 +
 3 files changed, 26 insertions(+), 13 deletions(-)
 create mode 100644 hw/xen/xen-utils.c

diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
index 7156ab13329..69196754a30 100644
--- a/hw/i386/xen/xen-hvm.c
+++ b/hw/i386/xen/xen-hvm.c
@@ -28,7 +28,6 @@
 #include "qemu/error-report.h"
 #include "qemu/main-loop.h"
 #include "qemu/range.h"
-#include "sysemu/runstate.h"
 #include "sysemu/sysemu.h"
 #include "sysemu/xen.h"
 #include "trace.h"
@@ -1570,18 +1569,6 @@ void xen_register_framebuffer(MemoryRegion *mr)
     framebuffer = mr;
 }
 
-void xen_shutdown_fatal_error(const char *fmt, ...)
-{
-    va_list ap;
-
-    va_start(ap, fmt);
-    vfprintf(stderr, fmt, ap);
-    va_end(ap);
-    fprintf(stderr, "Will destroy the domain.\n");
-    /* destroy the domain */
-    qemu_system_shutdown_request(SHUTDOWN_CAUSE_HOST_ERROR);
-}
-
 void xen_hvm_modified_memory(ram_addr_t start, ram_addr_t length)
 {
     if (unlikely(xen_in_migration)) {
diff --git a/hw/xen/xen-utils.c b/hw/xen/xen-utils.c
new file mode 100644
index 00000000000..d6003782420
--- /dev/null
+++ b/hw/xen/xen-utils.c
@@ -0,0 +1,25 @@
+/*
+ * Copyright (C) 2010       Citrix Ltd.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2.  See
+ * the COPYING file in the top-level directory.
+ *
+ * Contributions after 2012-01-13 are licensed under the terms of the
+ * GNU GPL, version 2 or (at your option) any later version.
+ */
+
+#include "qemu/osdep.h"
+#include "sysemu/runstate.h"
+#include "hw/xen/xen_common.h"
+
+void xen_shutdown_fatal_error(const char *fmt, ...)
+{
+    va_list ap;
+
+    va_start(ap, fmt);
+    vfprintf(stderr, fmt, ap);
+    va_end(ap);
+    fprintf(stderr, "Will destroy the domain.\n");
+    /* destroy the domain */
+    qemu_system_shutdown_request(SHUTDOWN_CAUSE_HOST_ERROR);
+}
diff --git a/hw/xen/meson.build b/hw/xen/meson.build
index 3c2062b9b3e..6c836ae06e4 100644
--- a/hw/xen/meson.build
+++ b/hw/xen/meson.build
@@ -4,6 +4,7 @@
   'xen-bus.c',
   'xen_devconfig.c',
   'xen_pvdev.c',
+  'xen-utils.c',
 ))
 
 xen_specific_ss = ss.source_set()
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Feb 01 11:29:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 11:29:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79777.145345 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6XOq-00031V-1n; Mon, 01 Feb 2021 11:29:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79777.145345; Mon, 01 Feb 2021 11:29:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6XOp-00031N-UQ; Mon, 01 Feb 2021 11:29:43 +0000
Received: by outflank-mailman (input) for mailman id 79777;
 Mon, 01 Feb 2021 11:29:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jUbX=HD=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1l6XOp-00030d-41
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 11:29:43 +0000
Received: from mail-wr1-x42e.google.com (unknown [2a00:1450:4864:20::42e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cf559e46-822b-4443-b85f-65951a19a348;
 Mon, 01 Feb 2021 11:29:42 +0000 (UTC)
Received: by mail-wr1-x42e.google.com with SMTP id g10so16186473wrx.1
 for <xen-devel@lists.xenproject.org>; Mon, 01 Feb 2021 03:29:42 -0800 (PST)
Received: from localhost.localdomain (7.red-83-57-171.dynamicip.rima-tde.net.
 [83.57.171.7])
 by smtp.gmail.com with ESMTPSA id o12sm26789769wrx.82.2021.02.01.03.29.40
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 Feb 2021 03:29:40 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
X-Inumbo-ID: cf559e46-822b-4443-b85f-65951a19a348
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=XRtUPL4N+Vif4bfdhtAFgCO/ETkuGGWOvk7Qu3281fs=;
        b=otz4MVF8iuJeP0O61dnQoLLdWj7yRHoLIPiCQts39X7Iz45omjLW6B5RPKA0Ji8grr
         TaSCvFwiR5a4nyf8A49BwsSX6wD8hGldzi9ObmoQMYTN4EcsB6LLt+JE4crwXoA/Qwkd
         +v/aCY2PpUQZdG6YBQvHtLAhEhdzr6vGVZBvJFEd3QWL7lxOeg367jIC8KtdxkJkEIGa
         bxhcoJZxJVW9PcUVwMCed91ItAi23w2JLJqvnxc4jq/89yFFL+RirWl9fRoTqM/UfJIp
         olTqK+WR7XOw/vm5aC5mEmtouWIpsAPYBPUMZAEJgGJKcfDsc2qwHnZvE6Gp86q3FH26
         Ljzg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
         :in-reply-to:references:mime-version:content-transfer-encoding;
        bh=XRtUPL4N+Vif4bfdhtAFgCO/ETkuGGWOvk7Qu3281fs=;
        b=f3NklxOpMCibNI3WyZCDCWAOxEekFK/X383VHluCRRPfWhK5uswKsmqg7H7ieWFB0L
         rDDOM5wTI26AlQ5bPPmADv2Cuv7WMoPVF0P+2xUf9DAWg2Pbe3IP7upOPkE9S2rzCqqp
         k6j3ywkTLY59Ev7TYekQiXPM2W9iep7H0iJDOEF9WlA7u9xr+o1ecxyaPWNdyHrKzELa
         5nzSDo4I5zUOUYrvVvNFmLze+rsMUS8IofGg8rAyezEga+dyaxJq8UZdXZlAgVjwFMoM
         A7KDpIBWrT6tnrI55CvH9wDKCDbaIp1JrNH7aaGEPE/TRwhVA1rTk2HvVCBgaLYwO46r
         igHw==
X-Gm-Message-State: AOAM531+GVm4HNVrPkEnTjZEbegEdDlbjlVVGScrZvniOsRTRwQvQ1HL
	uTX3b32X3TsZjnmle0It/ok=
X-Google-Smtp-Source: ABdhPJyOAdkL8yCg3T/QnJEGkqZccdLOHBM22xLPE323hR3HQAe/aPubKmDcduC4ijItMwsDSxlZ+w==
X-Received: by 2002:a5d:6b47:: with SMTP id x7mr16884630wrw.170.1612178981514;
        Mon, 01 Feb 2021 03:29:41 -0800 (PST)
Sender: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philippe.mathieu.daude@gmail.com>
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: =?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Eduardo Habkost <ehabkost@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	xen-devel@lists.xenproject.org,
	Richard Henderson <richard.henderson@linaro.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
Subject: [PATCH v3 6/7] hw/xen: Make qmp_xen_set_global_dirty_log() available out of X86 HVM
Date: Mon,  1 Feb 2021 12:29:04 +0100
Message-Id: <20210201112905.545144-7-f4bug@amsat.org>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210201112905.545144-1-f4bug@amsat.org>
References: <20210201112905.545144-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

qmp_xen_set_global_dirty_log() is also used by XEN_PV.

This fixes the build when XEN_PV is enabled without XEN_FV:

  /usr/bin/ld: libqemuutil.a(meson-generated_.._qapi_qapi-commands-migration.c.o): in function `qmp_marshal_xen_set_global_dirty_log':
  qapi/qapi-commands-migration.c:626: undefined reference to `qmp_xen_set_global_dirty_log'

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 hw/i386/xen/xen-hvm.c  | 10 ----------
 hw/xen/xen-migration.c | 22 ++++++++++++++++++++++
 hw/xen/meson.build     |  1 +
 3 files changed, 23 insertions(+), 10 deletions(-)
 create mode 100644 hw/xen/xen-migration.c

diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
index 69196754a30..85859ea0ba3 100644
--- a/hw/i386/xen/xen-hvm.c
+++ b/hw/i386/xen/xen-hvm.c
@@ -24,7 +24,6 @@
 #include "hw/xen/xen-bus.h"
 #include "hw/xen/xen-x86.h"
 #include "qapi/error.h"
-#include "qapi/qapi-commands-migration.h"
 #include "qemu/error-report.h"
 #include "qemu/main-loop.h"
 #include "qemu/range.h"
@@ -1591,12 +1590,3 @@ void xen_hvm_modified_memory(ram_addr_t start, ram_addr_t length)
         }
     }
 }
-
-void qmp_xen_set_global_dirty_log(bool enable, Error **errp)
-{
-    if (enable) {
-        memory_global_dirty_log_start();
-    } else {
-        memory_global_dirty_log_stop();
-    }
-}
diff --git a/hw/xen/xen-migration.c b/hw/xen/xen-migration.c
new file mode 100644
index 00000000000..1c53f1af08d
--- /dev/null
+++ b/hw/xen/xen-migration.c
@@ -0,0 +1,22 @@
+/*
+ * Copyright (C) 2010       Citrix Ltd.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2.  See
+ * the COPYING file in the top-level directory.
+ *
+ * Contributions after 2012-01-13 are licensed under the terms of the
+ * GNU GPL, version 2 or (at your option) any later version.
+ */
+
+#include "qemu/osdep.h"
+#include "exec/memory.h"
+#include "qapi/qapi-commands-migration.h"
+
+void qmp_xen_set_global_dirty_log(bool enable, Error **errp)
+{
+    if (enable) {
+        memory_global_dirty_log_start();
+    } else {
+        memory_global_dirty_log_stop();
+    }
+}
diff --git a/hw/xen/meson.build b/hw/xen/meson.build
index 6c836ae06e4..21f94625dc7 100644
--- a/hw/xen/meson.build
+++ b/hw/xen/meson.build
@@ -4,6 +4,7 @@
   'xen-bus.c',
   'xen_devconfig.c',
   'xen_pvdev.c',
+  'xen-migration.c',
   'xen-utils.c',
 ))
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Feb 01 11:29:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 11:29:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79778.145357 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6XOw-00038n-Gc; Mon, 01 Feb 2021 11:29:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79778.145357; Mon, 01 Feb 2021 11:29:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6XOw-00038c-CX; Mon, 01 Feb 2021 11:29:50 +0000
Received: by outflank-mailman (input) for mailman id 79778;
 Mon, 01 Feb 2021 11:29:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jUbX=HD=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1l6XOu-00036P-Gi
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 11:29:48 +0000
Received: from mail-wr1-x435.google.com (unknown [2a00:1450:4864:20::435])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 02531718-cdf8-4ae8-bf42-0f67091f6174;
 Mon, 01 Feb 2021 11:29:47 +0000 (UTC)
Received: by mail-wr1-x435.google.com with SMTP id a1so16191228wrq.6
 for <xen-devel@lists.xenproject.org>; Mon, 01 Feb 2021 03:29:47 -0800 (PST)
Received: from localhost.localdomain (7.red-83-57-171.dynamicip.rima-tde.net.
 [83.57.171.7])
 by smtp.gmail.com with ESMTPSA id r124sm21261633wmr.16.2021.02.01.03.29.45
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 Feb 2021 03:29:46 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
X-Inumbo-ID: 02531718-cdf8-4ae8-bf42-0f67091f6174
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=s9rQaS8UMqNJGneEaOHvqNQyzFShTSgQ9o1efcMMfnw=;
        b=vN5Dbc8LEBoxXzt4TOZpKQUfFP4d0DUrmJzsY8jZ30ugvsjqLnSTGDuzM7H5Fyrk3f
         abS4D3Ay2zNaJBSGMTGCJOBMPB0RMUXJbL6yqIkQ/k8U3aKlUwIlyhW30wiOETsY+lUU
         VJdwwEkBARJ7eV715aeJDCjTlEzuJ7ft/zc1y32IFpIdtIEhNETmDDSyBCzd5/cL7xO9
         U50ggDjlwUQVizGN8vef1jmyXmIUVrOW/c4xcMiNqJGCJsnBZXQULkYBB67vLBH/7B24
         jEj/VTilekOmxsfiANyo+i9GfbpLzhVXb7nSmdL0Jut2lRCZzFBDzUvdQG9EI5wxHfm6
         AB0Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
         :in-reply-to:references:mime-version:content-transfer-encoding;
        bh=s9rQaS8UMqNJGneEaOHvqNQyzFShTSgQ9o1efcMMfnw=;
        b=jdxjho9T4umlDKls6BvODSQVlQZWgKQkYvkyHmHHJQKQIo9obKP6GCIQGFBMyrVmgq
         kDx6V+K1XacEXF+AepqwU4DfAyEW1OoExll6AsaWM8Nn/6nLc+1VuSM971aJQnV/ltVG
         haTnvI3A5yhjp6GWdCtRndse8bflVq3CuFWZorUb55fB4iHedpK0Y93gJH73bfJO5Vpc
         DO/g5HwAfWIFuU07APb4q7bcubrKFjndQ7A1hXrt93vZuTJrmGtWGXwDQJhnh6XI00R9
         eCviMTI+zqrlDxNoUVRnhMVMoN5mja5/WvxOwTtnNVpfZptodUTssqm4puva+8DFJTpF
         SdlQ==
X-Gm-Message-State: AOAM531YiuG4fBemiDDLQvLjiThwpGpUHofdTYWPCzZRyuG/buoglurk
	5th8ePDKR8CRR99IwOsW9rc=
X-Google-Smtp-Source: ABdhPJzAYGwF+J2Ou4s+1HcJL36utLf3MJcMRSzDMAwMwQQaTKhKo5Am9QETvwv7os+GbvBC+C3N8g==
X-Received: by 2002:adf:bc4b:: with SMTP id a11mr18119220wrh.260.1612178986887;
        Mon, 01 Feb 2021 03:29:46 -0800 (PST)
Sender: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philippe.mathieu.daude@gmail.com>
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: =?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Eduardo Habkost <ehabkost@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	xen-devel@lists.xenproject.org,
	Richard Henderson <richard.henderson@linaro.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
Subject: [PATCH v3 7/7] NOTFORMERGE hw/xen/Kconfig: Introduce XEN_PV config
Date: Mon,  1 Feb 2021 12:29:05 +0100
Message-Id: <20210201112905.545144-8-f4bug@amsat.org>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210201112905.545144-1-f4bug@amsat.org>
References: <20210201112905.545144-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The xenpv machine requires USB, IDE_PIIX and PCI:

  /usr/bin/ld:
  libcommon.fa.p/hw_xen_xen-legacy-backend.c.o: in function `xen_be_register_common':
  hw/xen/xen-legacy-backend.c:757: undefined reference to `xen_usb_ops'
  libqemu-i386-softmmu.fa.p/hw_i386_xen_xen_platform.c.o: in function `unplug_disks':
  hw/i386/xen/xen_platform.c:153: undefined reference to `pci_piix3_xen_ide_unplug'
  libqemu-i386-softmmu.fa.p/hw_i386_xen_xen_platform.c.o: in function `pci_unplug_nics':
  hw/i386/xen/xen_platform.c:137: undefined reference to `pci_for_each_device'
  libqemu-i386-softmmu.fa.p/hw_i386_xen_xen_platform.c.o: in function `xen_platform_realize':
  hw/i386/xen/xen_platform.c:483: undefined reference to `pci_register_bar'

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
Unfinished:

/usr/bin/ld: libqemu-x86_64-softmmu.fa.p/softmmu_physmem.c.o: in
function `cpu_physical_memory_set_dirty_range':
include/exec/ram_addr.h:333: undefined reference to
`xen_hvm_modified_memory'
/usr/bin/ld: libqemu-x86_64-softmmu.fa.p/softmmu_physmem.c.o: in
function `ram_block_add':
softmmu/physmem.c:1873: undefined reference to `xen_ram_alloc'
/usr/bin/ld: libqemu-x86_64-softmmu.fa.p/softmmu_physmem.c.o: in
function `cpu_physical_memory_set_dirty_range':
include/exec/ram_addr.h:333: undefined reference to
`xen_hvm_modified_memory'
/usr/bin/ld: include/exec/ram_addr.h:333: undefined reference to
`xen_hvm_modified_memory'
/usr/bin/ld: libqemu-x86_64-softmmu.fa.p/softmmu_memory.c.o: in function
`cpu_physical_memory_set_dirty_range':
include/exec/ram_addr.h:333: undefined reference to
`xen_hvm_modified_memory'
collect2: error: ld returned 1 exit status
ninja: build stopped: subcommand failed.

TODO another day: Paolo's suggestion:
"modify xen_hvm_modified_memory to become a virtual function in AccelClass."
---
 hw/Kconfig     | 1 +
 hw/xen/Kconfig | 7 +++++++
 2 files changed, 8 insertions(+)
 create mode 100644 hw/xen/Kconfig

diff --git a/hw/Kconfig b/hw/Kconfig
index 5ad3c6b5a4b..f2a95591d94 100644
--- a/hw/Kconfig
+++ b/hw/Kconfig
@@ -39,6 +39,7 @@ source usb/Kconfig
 source virtio/Kconfig
 source vfio/Kconfig
 source watchdog/Kconfig
+source xen/Kconfig
 
 # arch Kconfig
 source arm/Kconfig
diff --git a/hw/xen/Kconfig b/hw/xen/Kconfig
new file mode 100644
index 00000000000..0b8427d6bd1
--- /dev/null
+++ b/hw/xen/Kconfig
@@ -0,0 +1,7 @@
+config XEN_PV
+    bool
+    default y if XEN
+    depends on XEN
+    select PCI
+    select USB
+    select IDE_PIIX
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Feb 01 11:44:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 11:44:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79797.145368 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6XdI-0005F6-RG; Mon, 01 Feb 2021 11:44:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79797.145368; Mon, 01 Feb 2021 11:44:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6XdI-0005Ez-OQ; Mon, 01 Feb 2021 11:44:40 +0000
Received: by outflank-mailman (input) for mailman id 79797;
 Mon, 01 Feb 2021 11:44:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IWGu=HD=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l6XdG-0005Eu-KK
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 11:44:38 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3161d6f3-d6e8-405d-82fc-442175c42a7f;
 Mon, 01 Feb 2021 11:44:36 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3161d6f3-d6e8-405d-82fc-442175c42a7f
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612179876;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=YgIge87ZuT3IZ6iE+AIvwtfW82mSyzQ6YJNFZJEA54Q=;
  b=F1DIcH5Up63iWY5imj1x0tNmhT3jZaTXyPmlAT4GXM7BFCGrZQu9MQnh
   e08MziCDAoIkUMKB54SZwhLzvyeRuIZsp4fL4FXTxiCgMWrpRyfcKwrmR
   KzTctY9dA1jv9pSJze5kAE5+hMkIUj3KQj0bQpf6yBy/OP2mIWWLtEzUA
   o=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: IRsV8HB/g9BkcdiSK0fXUJyaaqJtJvPuP4ne1wrbQc8Bpcfy/Y33tKLQANZ/V+yBXZRXqjzME+
 pyCfEfau+lhiVd4IlDv7QXKvtjIsihiXwI1DnKsG0tgsNWQsEbjfswyBLg9PAkzc7FPf1+EYzN
 z4kLVlO4MIBUlczYseJr0prv27VqY6qge/Dn3/AZr4id37FPrCEtYCqxxAjjL08wkcHuqKTzDb
 DPQc3s6k6AUIA/5cPD5GGuaEn5icWmvJrdtvw+issQt0GurOLTiZ6R5uuhGSvQy1ScepIf2Eiy
 iI8=
X-SBRS: 5.2
X-MesageID: 36466009
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,392,1602561600"; 
   d="scan'208";a="36466009"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=NAfBeRAMMb4HfZASJj3yf/KcUtwyemr9LxvBE6sVwd+VwhQb2TBIwPCsvRfGTycY8yGEMwWKrxa5ipb1WsRp1e9M11tAdZGT4HslR5I/c/SG0RXX2TvFzRhnRrCxB2UnibQI0CdaMhIp+zH1CDM9mASfdmX2MBhUbtUYpT7VY/cbB8tf2h4/a2sxeU+dgeIKClf38aqxdReWhGUq9lc9dDh0HQZaNHyG2n0/eC31G2Yt9zDJqdgbQotHbFiIO8V5PwIRnyI9VMJj5jzAa9ihwayGqcO41HthZBSzirOMm0Knh7UcTEVZ/CbHKGg7mwJYY/0PuHjbX7MZyFIyaVRSyQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=1XeDDXN3b8M4c3Yv0nWTnGkViWyT0skuZLqvFA+Wkgk=;
 b=ItjCNjy6Ehj9eHTrfwVBxqRF2RKYnaLgNFOtaqAxOesjsq56j5k1yG/sD7L+6PWFfEXBvPDygoaC4xsY9HD77ZjQyyHz5g7DvvOBK4lLrmOcIwvIygNPvDoFQnDbNnKB4/lkiCdNK02HRxk/HqHlOjdtUr/EvZJitBrqojlzFBTFBOc1jHGcbw+H2MOd4zHqkL2K718HYaEWTlDm1sTqQKggWiEJndAToaLODXYdlhkJlrUAOX++SUvrlv8inuhIcgD72k/BcvyxLL6Tu76xuKtZaPGpU/IPjpqJtXsKHtC0ylVCWhIraaWk0Fprwz1lXT3/lzGVaQd652r9VIEqJQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=1XeDDXN3b8M4c3Yv0nWTnGkViWyT0skuZLqvFA+Wkgk=;
 b=f4qK2skZJQdQv2kea3oA3qGSbVWgvgyX23h+D2LwLCCF1W26lnfAZtAIx2TAgvYm9zXjRwlfR1cSDjWoJJakOro3HdDJoT0dh1CNcj5JkWhfiz4pJjJ10gaEwFW3YjsEOlCFmcdDpR+A9GfWMn+j6PvhyOH9XJwzTwybQ8d1o00=
Subject: Re: [PATCH] x86/debug: fix page-overflow bug in dbg_rw_guest_mem
To: Jan Beulich <jbeulich@suse.com>, Tamas K Lengyel <tamas@tklengyel.com>
CC: Elena Ufimtseva <elena.ufimtseva@oracle.com>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, <xen-devel@lists.xenproject.org>
References: <caba05850df644814d75d5de0574c62ce90e8789.1611971959.git.tamas@tklengyel.com>
 <74f3263a-fe12-d365-ad45-e5556b575539@citrix.com>
 <044823b7-1bbd-6405-7371-2b06e49cc147@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <0dc1f3c9-6837-ce12-8826-11354346b3c1@citrix.com>
Date: Mon, 1 Feb 2021 11:44:23 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
In-Reply-To: <044823b7-1bbd-6405-7371-2b06e49cc147@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0021.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:62::33) To BYAPR03MB4728.namprd03.prod.outlook.com
 (2603:10b6:a03:13a::24)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 3d593044-5f85-4bd4-8dca-08d8c6a6bc9d
X-MS-TrafficTypeDiagnostic: BYAPR03MB4773:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB477366BC4F0F0A298E6C8CDBBAB69@BYAPR03MB4773.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 7N+rRp4ZDQG8Gr0qh2m7bUxZTX3U2WQRwOIBe60YLXBOnq0OsIqs/khK6Tx4f+X9zOCWyO4NnHI8PAo6PeFWO4Hcl/Xr7r4clBSLJTdVWXZT2mVuRQu9ZzPtlOcgxv8qYn9BieFPO0zpslESiUFDZhtCimuGdT3Ty9ycQOLYjDSHklBpsGITx6ahtt/5oRyFXyGnvnFlix51DUhsy4wNdFKnb6ZScCmMtwIXCqVi0H3CQqi2vPIeWScTZ6XLVJehqONC+7DcmuPFc7ysK+ySeXo0lwDtsXcRZyNKqFSoSKydRynt+ev3D3fJSqAoeZlpi5cjuGnZnyjpUMI4DtQfYkGTpMC0FCA0EtQGXjYshF1waGiPrD21q3ELferJxFY1XIdV2+WxbeAaEfaxcDxqyyfc27D5sQFBhMEZWbvSxiQTUXE1VPROGBirq1Cd7/PMihAnI/qtaw21UgP2rXYT/mHY7ryUCZ2BWgPzaBQ7/YvNu18bR9E6JNsT2AtnH2vhiyWrt7XySM+EIkmtb6UP22RuMBri3QbQUqH0otz9c8WxhcXZFipGGDdI5OKrw3o1Y3Xcsnm527zO5s4JzLrceB85SgB/DtNrmhP6/mZoBcg=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB4728.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(366004)(39860400002)(396003)(346002)(376002)(136003)(83380400001)(6486002)(31686004)(53546011)(478600001)(86362001)(5660300002)(26005)(2616005)(4326008)(956004)(31696002)(2906002)(8676002)(36756003)(16526019)(66946007)(16576012)(66476007)(8936002)(66556008)(110136005)(54906003)(6666004)(186003)(316002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?MlVDU1F5N0x2ZnpVTENuSm5JUGFKUVBURExOY1dRY3R2bmRjZDZTcHpaUEdZ?=
 =?utf-8?B?NnlKcFBHOC90RjRkY2s4eXFkSUdwT21VQ3hNNUFrWE5iTElNNVpma094Ui9R?=
 =?utf-8?B?K2ZRV2czSHBYN25jNDQ2VmdIZVlLWmhpL29uNm04T0dqSW55NnptcSt4WCsx?=
 =?utf-8?B?R2NJMXF5Z0JONTNFbGpKM3hOR0l2VmdhT0xoZ0JUa2E5YkRXWjV2Z0JCZXlB?=
 =?utf-8?B?QWZ1NnNDWEk0NUJXeVcrS3hCNjZtY1I3UTBIc3JFZDR1TWN4ZWxDOEZTRFp3?=
 =?utf-8?B?Uzhoblo1dXRsZVVnWjdKUWw3VWMrSHNoWFd6dnBaVHJmc0IxZG9ySGU4bVRn?=
 =?utf-8?B?dk9mczNKaTllb2xVLzEvcFpDWStVOU0vV1VMTnFpenVGZkcvTkxjWjhhU0NV?=
 =?utf-8?B?UnNTUklIY3pDV2xxLzMzSGM2MnlLNjZxaDIwMzZqeHFrbmRVb0lBSXZXeVFm?=
 =?utf-8?B?b0hWYmVQSkMzR1BYU2U0ejB0QVpJMEhNRXdVSnhzV1FvLzFib3IwVUppaU1q?=
 =?utf-8?B?SjV3R2dPQ0tRemxRaWI0VDJWZlMxckVjV252ZythOG93cG5Hdld2dEFEQnZZ?=
 =?utf-8?B?QjlrdlluQnc5RlorYytqc0hWZytzRFRuTmdDaDkyU2daeTNQNk81ODluc2Y0?=
 =?utf-8?B?VlE0VUNnbXlnV2hnZWtjQXlPSFhxQkJZRnJCVEpBTzFRTktLeFYvZGJJenky?=
 =?utf-8?B?cDZVUVF4VDJpODBDMXlRcHNIVlBKNlNyMlB2VEc0VkdxeS9oay9sRFBTVDRo?=
 =?utf-8?B?T0lVY2lQSU4reVFMUlhFTlBOTFgvRTRlRURjUE9oazZGRUpVWU5xdG42UWV4?=
 =?utf-8?B?aWZIN293QUJiVUJFdVR0eThGZTVDa0xVajZtTXVEalcvUTA3OUhNV3BaWXE5?=
 =?utf-8?B?WmxpcW5wVjVpNzZCdmJZbGhHZ1FYOXlTVlJSVDFYMHp0UTlJWXY5MkdacTgr?=
 =?utf-8?B?bjhuU0duSmFScjBnamp0N0pBUldSS1VRKzBWd1VHZUd6RFlnR2x2aW1PNVU0?=
 =?utf-8?B?ZXk5WUR1bXhhYm14Q0VQVTNFM1MzMUd3ODliL044bkpQM1QwYm5MTnZiTjh2?=
 =?utf-8?B?Z0ZBem9NZ2JTeTZVUHNhd0Z0TzFETmxOVERVcmQwU2RhU1Z5QlBVdmtYMUpL?=
 =?utf-8?B?Vm5HNXRTSlZLTHNFNmxOTjdwcHdteHU0cVhoY1ZMbTZvVDRjbmZtU0VBSzAx?=
 =?utf-8?B?SEJLZFJjZk9hUjUrY2IvOCtDQUpoQlRXUE1ZblRmMHlCVWxmUkFoM3FKbDNT?=
 =?utf-8?B?a3NDUHRSWktPajgvRHJxZHRjUW1XY0I3cTNqKzQ3elRzTzZBRnVTY3BGOEEr?=
 =?utf-8?B?Zzd2dnQ4bFdjNkQxSDRWYmpZaXNrTjRnODBXODllK2VnVmkwVVZPMHNuenMr?=
 =?utf-8?B?WUdCUThTdzlGVWU0N3Q0b0tPRERNbTBEaUdUL0ZxNmp1a3FvNUZZdzZwWE42?=
 =?utf-8?Q?6UCxPfiO?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 3d593044-5f85-4bd4-8dca-08d8c6a6bc9d
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB4728.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 Feb 2021 11:44:30.8385
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Cyn30/T/d6C4iW42uXk7KEieiJMkErhGhvnqZhT1ZvnbPjpJT4wTpdQ+ypCVjLeNmm/GvSmcGbLd3PVuYgLFzQtjvbzJcy5M8ho14EUiu4A=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4773
X-OriginatorOrg: citrix.com

On 01/02/2021 09:37, Jan Beulich wrote:
> On 30.01.2021 03:59, Andrew Cooper wrote:
>> On 30/01/2021 01:59, Tamas K Lengyel wrote:
>>> When using gdbsx, dbg_rw_guest_mem is used to read/write guest memory. When the
>>> buffer being accessed is on a page-boundary, the next page needs to be grabbed
>>> to access the correct memory for the buffer's overflown parts. While
>>> dbg_rw_guest_mem has logic to handle that, it broke with 229492e210a. Instead
>>> of grabbing the next page the code right now is looping back to the
>>> start of the first page. This results in errors like the following while trying
>>> to use gdb with Linux' lx-dmesg:
>>>
>>> [    0.114457] PM: hibernation: Registered nosave memory: [mem
>>> 0xfdfff000-0xffffffff]
>>> [    0.114460] [mem 0x90000000-0xfbffffff] available for PCI demem 0
>>> [    0.114462] f]f]
>>> Python Exception <class 'ValueError'> embedded null character:
>>> Error occurred in Python: embedded null character
>>>
>>> Fix this bug by taking the variable assignment outside the loop.
>>>
>>> Signed-off-by: Tamas K Lengyel <tamas@tklengyel.com>
>> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
> I have to admit that I'm irritated: On January 14th I did submit
> a patch ('x86/gdbsx: convert "user" to "guest" accesses') fixing this
> as a side effect. I understand that one was taking care of more
> issues here, but shouldn't that be preferred? Re-basing isn't going
> to be overly difficult, but anyway.

I'm sorry.  That was sent during the period when I had no email access
(hence I wasn't aware of it - I've been focusing on 4.15 work and this
series wasn't pinged), but it also isn't identified as a bugfix, or
suitable for backporting in that form.

I apologise for the extra work caused unintentionally, but I think this
is the correct way around WRT backports, is it not?

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 12:02:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 12:02:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79809.145381 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6Xu6-0007GC-OX; Mon, 01 Feb 2021 12:02:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79809.145381; Mon, 01 Feb 2021 12:02:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6Xu6-0007G5-KE; Mon, 01 Feb 2021 12:02:02 +0000
Received: by outflank-mailman (input) for mailman id 79809;
 Mon, 01 Feb 2021 12:02:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Dub/=HD=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1l6Xu5-0007G0-3G
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 12:02:01 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f310392f-0058-45d9-810c-ff6cb676d535;
 Mon, 01 Feb 2021 12:01:58 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f310392f-0058-45d9-810c-ff6cb676d535
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612180918;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=pEmJHvnpvhnxSN0K1Pw1fw/sCUSG7qwjuEy7nvEn5lc=;
  b=AeP01b0lZLfoxbs3A/Gx7aKAQvDVbtOMpa7DjP4Ddt/1O0eSE8Lscy8x
   X48AV4m3mPouPkwaEuZN3+fNM3Re++h7utjO04y1QgagfojyjAE0eqnQL
   a6PZd/tobeU3GXJPcuQ2cLopX7O5k0EInNP2sRVVn+gxGDKyhHZ4G8uhs
   w=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: 27X+8H/vSHJyFvUg9iqvwAIbGVc9jT64oyDYHZt8JlQB1+30KEwmFy4sY5124KBcBmWbWSdp8J
 DteaPlZdPgheN4W2Jm/fk3Qc4MyQf+LtZUOwLx0MIidEjROE01wuF/B06Ay0qu8NkDZl0QdhA4
 ZjqVbGJmcGxtLc6yPNeXFk9plgBFGqUAFcWYcoLl4HqozvK2L1f2792QuCJO13GAEXCXviSXxt
 8B2q1X+EIisHxUN5QtbJkMiEPOQV09peMzRzhs8GOgdsMpQOi/QTstEXSjBcWTR7fHgXnTfycX
 ugY=
X-SBRS: 5.2
X-MesageID: 36466771
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,392,1602561600"; 
   d="scan'208";a="36466771"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=PKQq6uAklNpjYkR3dgGGQWXShFJzpbSBhvFhN1V5AElPvERyVdFV7umln+4ReeO/9zBhHlUL24y66DKoW0IXUjgzc0MXBFJAwxHz2p13QhI4x0iLwdljF+yYgPTBz7KE3n3fi0vQlkevBEjKZHDzmtUMOy1tnBV0SKY2NbL9EWJgRLfRr1lK2kMYV4Shw0wkCUq32ciFN0+EsL3Y4DGa+1KVha+bIXSkR8XtGITu0RytpQH96BfgCbweWz5/ZoTNJJuyd9nInDf05gE4ioGOKJiXTPh0ULTeg3edJHKYWtlG0Zry8cNbeMjImhMmUd251D6Nh3QCghMcDESQRZLscg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Wb6ot+QJdkaYDefg/rBunuBoFJoz9ePHqO3a3cXrWYg=;
 b=KSULxET2KxlwOgezFzVoWFV9eRjyJUUhsglQ9peYAf5Xz2WgIBJsCYQQ42H+SMXFbKzMYJb1fzEA73nl07AJ4z5TCkKBszIQUhjnFHRJBHWw8tDSu4468YrRtzUFFlJetAir1KU8XGSocUOmveSMSuFgIO/X4Rb7etLA5rhQkqzdDeZjz4zMFhxjtmJfjhDIrKpCcoBVdZfbS5xuUgNxA90+1CGI54AFB7lZ/VQM90cT+lkjGDy0WtdhzwiDYzFtGWF5dkTV0RinMaVsR0MUEUN3er0eaL1ILTUsMjYBdwv4B97r7A9Z0FNfrZqRTQZCP+cyeRhz7kkK97b578v/rQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Wb6ot+QJdkaYDefg/rBunuBoFJoz9ePHqO3a3cXrWYg=;
 b=tfFhKCE+CsS7204yOp5OHI6xHosMYgrKrdRRw0l2lzrWh4dN8/s59/I68NHvhPzoNkyMPQebn0R5NJRfA1v/y6LYxz1zxgQWYmtBRSAIH122IWeYTyylsLUtUNtGOB/JiTuSwa9O8nEyzVs4+OIqKliP7Ctk7bRI2e6QKG9jFQo=
Date: Mon, 1 Feb 2021 13:01:22 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>,
	=?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>, "Jan
 Beulich" <JBeulich@suse.com>, Wei Liu <wl@xen.org>, Jun Nakajima
	<jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>, Tamas K Lengyel
	<tamas@tklengyel.com>
Subject: Re: [PATCH v8 12/16] xen/domctl: Add XEN_DOMCTL_vmtrace_op
Message-ID: <YBftkqLHfwIzpaN9@Air-de-Roger>
References: <20210130025852.12430-1-andrew.cooper3@citrix.com>
 <20210130025852.12430-13-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20210130025852.12430-13-andrew.cooper3@citrix.com>
X-ClientProxiedBy: PR3P189CA0113.EURP189.PROD.OUTLOOK.COM
 (2603:10a6:102:b5::28) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 12594a62-c8d2-408b-bfac-08d8c6a91b71
X-MS-TrafficTypeDiagnostic: DM6PR03MB3916:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB3916ACBD2AEC52DC1B3E4EF48FB69@DM6PR03MB3916.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:4714;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: V7IEpkf8o1AWl0PfYppMZ4NBT8ylqufaN9AJOnG08SWZ45enpVVGBsJ8irCo7IA7xUDqR1P4B0+v2QL5Iu7vjejd2I4pAVjY64eVyoQKcchBTCj0/RjvAU6rj0kLZGY36SepU1EpiKnRO6eNK4zhg7YcFPrGKxBa4FEcGP9zEOczzVd3LswSRpPbZ0/zWh+Ha4egzvcd0eAmbI4kFY3LBnzMWFMJxLdtU0r4VvHmrWHtJhEmeAozjE3KTql3Ls4bAarrvpw71WwXl9IGQilFhNQ5Izn4RkqQEd2kik6D/Ip3LjfUQF/dn2rmy5T82Mqb7aX+hilj1horHK4nvdx1SP2hvz2OhnPzPh9+B2XAwsRjH8hDdOZ4O4xXYByghYHhn5el3GOqZw4ZpFwBLWU016oH1zNGw8XJbn9pWjln2bafjxNCJz4rVvTa2hwNG6k0lm0pmpWSZ+1ZHbYC+mYGYPhwt+GW1Y5MaZ44Nb2bX0J3yUGJHPW2N8AnEWePwDgRm4sX7UhI5iB/EL8DqXs7cA==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(396003)(346002)(376002)(366004)(39860400002)(136003)(66476007)(316002)(16526019)(186003)(9686003)(956004)(66946007)(6496006)(33716001)(8676002)(478600001)(26005)(85182001)(66556008)(5660300002)(6862004)(86362001)(83380400001)(4326008)(2906002)(6486002)(6636002)(6666004)(54906003)(8936002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?ekI3ZVlEYWR3anlYUkFYVE1LMFY5R2Zhdng1dndtZ0hQblQ1UVE2S1JqYXpV?=
 =?utf-8?B?Zkd0UDlyTnpPRkFweVE2Qld3eFBYWU1SbXk4Q0xseE4xVWFNTUJ1QW11RXNS?=
 =?utf-8?B?MUQ2K3AzVlh3L0dsMTJrWlNkeDJLTUo2Z2F4bEN5VkRtQUJaR3VDa2UzQ1hC?=
 =?utf-8?B?ZGFnTEZSbDFLREVzWUtXeGdCT0VQMEFzUkhnUVBkMGRFbFVUMWVZWXVjTE5L?=
 =?utf-8?B?UUlsWndpSlYrOG5wbElzVGg5YWdlbytYMXltdEgyUlNOaUdmczZTdHd6cWZw?=
 =?utf-8?B?L0c4QVdJZHBabzBVK1BPNWJPU2V1ckNjOFFPeUlKcitGRk5YQ01GeFkvWkxl?=
 =?utf-8?B?VmNPVzBwaDJORHlVVnYwdlByUnpXZHVKVm9NdXFReUdzcTA4YUpDOW9GTHBn?=
 =?utf-8?B?MlFyem9zSEwzKzFPVXdGaGxlVnpzUFZRNE0zMVl3RFNJYUxaYlZCakJIN1lZ?=
 =?utf-8?B?OFo0SzdBeGw2Rm9WZ2VLdDZON0dKT3JLSS8wQ1NSNWEvZU84OE5mNWRVdzN0?=
 =?utf-8?B?TDExVS9JMjRBMVQzWVNGUnF1RmMrSDlLRjNIT3FwWkpnS0lBQWtxdUJ1YnRw?=
 =?utf-8?B?eEg2OHRDVVZmODVYOXdGd1F6eTl5a3hIR1Z0SmRIa1NENmpsMXhNaDlvcXBG?=
 =?utf-8?B?T0FFRFpPVnMrdkxRaG1rd1BDRUVhNlgrY2JNVXBia0UxL0ZDZ3BRMldVWGlB?=
 =?utf-8?B?Skpmd29mc3l6RVNIN1lHdVlyWUt2TUprVk00M3FCNzNCbXgvN3NrT0ZnVXlw?=
 =?utf-8?B?bjl1NFRrMHkyMmlZNSswOU9NYmNreHRPdndxVFdmcmFQNUdnQ2Y2dHdBOThk?=
 =?utf-8?B?enlZbktML0N4OUxaOCtNNnlWRWVoSm13NmdOblBUa0N6MnZHSldEWTdHSHpQ?=
 =?utf-8?B?S1JjM2JFUG5WdUVHMUZzMUIvVjJ5Y3dreDEwSHpTZXlhZVZMYjZwSGgvZnVQ?=
 =?utf-8?B?eTZGVk0zbUJzUXVGTC9VOGhZeGI5SmMyMm1xcVBVcXYxL09CWENqS3RIQTNS?=
 =?utf-8?B?dXhtN3AzdUVCWmgzYjdZWTVUeWxYMVduZGVmUTBJOHNBZjNDbE9ibFlBLzYz?=
 =?utf-8?B?R1NPYVF3UEowMmpuUmlrNGg3N05ZUitDWmNIYjh1UG1Ld2haOU5PVzRhQkx2?=
 =?utf-8?B?Q2xLRWg4R3RvRHpMZEcyUVV5YXpHVUVUbEk0eDdzclA0MnB5QnN4SHExK0lQ?=
 =?utf-8?B?ZjRya1hoM1grQ2JReFdmaElRc1BiU3lwSVdVa0ZmZkRxYmYvUmRJZ2M2d2RL?=
 =?utf-8?B?SjRLc1BVR1JqWFh4RlVVY3NDUGNiUlpIZmk1ZmVXL1Y5N0pnZW5adkFieUE0?=
 =?utf-8?B?ZkJuVFUwWCtYKzVqZGRrQ3JRbVdQcmxsMjFVb2dYd2Rxc0hRSUluOGZOVWtv?=
 =?utf-8?B?cVRvM2l2ZHVpOTQ1SmN3L0o2bGtyRUx2SWNnd01tSVFEQzhrSlRJdkpIQWdl?=
 =?utf-8?B?bFJwU05nd0owSWFxejlWVFZJT2Njb1cyRDNUKy93PT0=?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 12594a62-c8d2-408b-bfac-08d8c6a91b71
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 Feb 2021 12:01:28.7248
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: B0f4XLNtkonW5+qEI1aXHrxXoyxEEHEihcA18egCvOuMFvhAseBqZ13mSt8ucxCbnNFxAqKxy+ov4kjIPY2WtQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB3916
X-OriginatorOrg: citrix.com

On Sat, Jan 30, 2021 at 02:58:48AM +0000, Andrew Cooper wrote:
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index 12b961113e..a64c4e4177 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -2261,6 +2261,157 @@ static bool vmx_get_pending_event(struct vcpu *v, struct x86_event *info)
>      return true;
>  }
>  
> +/*
> + * We only let vmtrace agents see and modify a subset of bits in MSR_RTIT_CTL.
> + * These all pertain to data-emitted into the trace buffer(s).  Must not
> + * include controls pertaining to the structure/position of the trace
> + * buffer(s).
> + */
> +#define RTIT_CTL_MASK                                                   \
> +    (RTIT_CTL_TRACE_EN | RTIT_CTL_OS | RTIT_CTL_USR | RTIT_CTL_TSC_EN | \
> +     RTIT_CTL_DIS_RETC | RTIT_CTL_BRANCH_EN)
> +
> +/*
> + * Status bits restricted to the first-gen subset (i.e. no further CPUID
> + * requirements.)
> + */
> +#define RTIT_STATUS_MASK                                                \
> +    (RTIT_STATUS_FILTER_EN | RTIT_STATUS_CONTEXT_EN | RTIT_STATUS_TRIGGER_EN | \
> +     RTIT_STATUS_ERROR | RTIT_STATUS_STOPPED)
> +
> +static int vmtrace_get_option(struct vcpu *v, uint64_t key, uint64_t *output)
> +{
> +    const struct vcpu_msrs *msrs = v->arch.msrs;
> +
> +    switch ( key )
> +    {
> +    case MSR_RTIT_OUTPUT_MASK:

Is there any value in returning the raw value of this MSR instead of
just using XEN_DOMCTL_vmtrace_output_position?

The size of the buffer should be known to user-space, and then setting
the offset could be done by adding a XEN_DOMCTL_vmtrace_set_output_position?

Also the contents of this MSR depend on whether ToPA mode is used, and
that's not under the control of the guest. So if Xen is switched to
use ToPA mode at some point the value of this MSR might not be what a
user of the interface expects.

From an interface PoV it might be better to offer:

XEN_DOMCTL_vmtrace_get_limit
XEN_DOMCTL_vmtrace_get_output_position
XEN_DOMCTL_vmtrace_set_output_position

IMO, as that would be compatible with ToPA if we ever switch to it.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 12:08:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 12:08:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79814.145393 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6Xzw-0007SP-Dc; Mon, 01 Feb 2021 12:08:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79814.145393; Mon, 01 Feb 2021 12:08:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6Xzw-0007SI-9P; Mon, 01 Feb 2021 12:08:04 +0000
Received: by outflank-mailman (input) for mailman id 79814;
 Mon, 01 Feb 2021 12:08:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Dub/=HD=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1l6Xzu-0007SD-Iy
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 12:08:02 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id adbf6bf6-3d7c-45d3-a573-edb4819fdff5;
 Mon, 01 Feb 2021 12:08:01 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: adbf6bf6-3d7c-45d3-a573-edb4819fdff5
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612181280;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=E9iLSbZmHnemEGyhvYtVBHSgurnKxfn2YJu+Cxe7LwI=;
  b=ZQhRSQD8q3XYLz30frceFXI3Uy7Nvtav1XUCdhVJ5UGYvZU0PW/SVp/R
   Tv5mEfJ94MUXJSyJFWz3rp0kXrh5D7TD9JJa+qM89oDa6GU6YYTSxH8fb
   VgQCZc4ykta64aUyyjHMELTK9foyitwRlfDrjP0y9Y80ehsvAjXPLpAep
   Y=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: V/WQQrj5SSNsqL5z11Pn278BGGzI+5QjT7YQvBAbLUYFJ6fWhC85fBYkzYDn4z/wUoeXGaRyct
 snD/DQ9KJXl/PvR9hivbLZpof0D2GQNJRg8mTbyF+GeyWtvfLy5IBzlYphKsJffkMCvh48+sCu
 EPA/mX761kEAfMmC+BQfwnBDgrhtvKR7BMKFxyz2aVRK0ysdq0sJFvNZVXf32sCPT+WUVxxpao
 38TcnJG/zQhyhYhKEQjaOEcqkwBk7F/uwwh3kRm25kYxd2iKr8xbe+8qOb4OiGoF5QLMurmEvF
 Cco=
X-SBRS: 5.2
X-MesageID: 36308068
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,392,1602561600"; 
   d="scan'208";a="36308068"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=lZrkt3zJbyUIylDbEMz3rT9/MPU+VFPWYmZQL0+A2Uqy7+s0owCnKgN7vB2tDmQP1vqH61rq4EI7MxhgsIZsZp2dt5Zd2QwbADoUVAhZ/xUDgLIzOvMLKfC/a2+DYKUTbiTKG8DTaB1nTeqEkrFSiZG/ngKea1xqvVCugZCSZ12sQKl/ujOm4JcS9pDaBauITdRyMZnI9qLqp7aFUyfzmAiLYHVbhwux23FDL/roCBTeZ5tm4MzqMLUvsYB1P/AV16xdqYfgtYG0cjKpYHox9+3eCnj4aHmEF+4m8h+zlZEQwdiyxLovzv1wSokKsWH/cy4Pr9pdM1QFWy7fXmf0vg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hsO+Z8PnyvvynhfefiMZE7vKd2VJ+d5vXrtaFXjO5Vo=;
 b=aTtR9cE3oT/iEuvTKY1kwPcP5k8fR5sv/BL8s6y+/265LKTmvTR4vVaHT2+Y4wHunry6GPR8ltZYi+sEOsNa47DffXrgVGIj75Ie9SgioHiGiwjGE2lfS11Dn5NkaxD0kjFJ6lBlfPQpABgiQ+YPb7Cx22cwavZyPRPyldKnVwduWYs6aHoRTvFhA17Ti1DYsbaB1YUr/G8+LNQBwUCW7IM1Fai0LGBLaHrkYvQEcqyi6Zy2VrCPnHntkqfNlzzov0su1rVtAW21uyHjgTkS8KTVYpnZq3KjHwd7vQpdeF1VC6DHv7JKp2RP8WT8mm9oFIXqzZPUq+p6RKG+FAiygg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hsO+Z8PnyvvynhfefiMZE7vKd2VJ+d5vXrtaFXjO5Vo=;
 b=VpG4rqgZLTIJbRI6995wB1MHv4tMxTgxPz/qigR4mxJ7flKiODCa9F+oSgW1N3Ah5s5kgHeh3qNIoODFPaDfbaIYd7u7YbVvWTNZtK42ubjdEGwxCuSlIr56DEtbgZbbyA0OI7nWtS6R+o1nxf4JnuGynKzvrRh1s7nVGqwN9gM=
Date: Mon, 1 Feb 2021 13:07:52 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Ian Jackson <iwj@xenproject.org>, Jan Beulich
	<JBeulich@suse.com>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>, Julien Grall <julien@xen.org>, Paul Durrant <paul@xen.org>,
	=?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>,
	"Hubert Jasudowicz" <hubert.jasudowicz@cert.pl>, Tamas K Lengyel
	<tamas@tklengyel.com>
Subject: Re: [PATCH v8 06/16] xen/memory: Fix mapping grant tables with
 XENMEM_acquire_resource
Message-ID: <YBfvGKkQWSisFHNs@Air-de-Roger>
References: <20210130025852.12430-1-andrew.cooper3@citrix.com>
 <20210130025852.12430-7-andrew.cooper3@citrix.com>
 <YBfTpTzi+wo7AFSH@Air-de-Roger>
 <53a26fb9-9c43-d1c4-90cd-bb29d57e106b@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <53a26fb9-9c43-d1c4-90cd-bb29d57e106b@citrix.com>
X-ClientProxiedBy: MRXP264CA0007.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:15::19) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: ccb84ef4-acef-488e-4cc9-08d8c6aa02fa
X-MS-TrafficTypeDiagnostic: DM6PR03MB3916:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB39160790A74FD0AFFDA21A1E8FB69@DM6PR03MB3916.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: ccb84ef4-acef-488e-4cc9-08d8c6aa02fa
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 Feb 2021 12:07:57.0648
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: KjdTYSgucwMavbaW2DE05MPpMIxWjhY+Gd83jDnnZW5ilMdNEm6+mCQNawOQqDFjhJvCOw0ISRSbhw0KDaS+Qw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB3916
X-OriginatorOrg: citrix.com

On Mon, Feb 01, 2021 at 11:11:37AM +0000, Andrew Cooper wrote:
> On 01/02/2021 10:10, Roger Pau Monné wrote:
> > On Sat, Jan 30, 2021 at 02:58:42AM +0000, Andrew Cooper wrote:
> >> +                    (COMPAT_ARG_XLAT_SIZE - sizeof(*nat.mar)) /
> >> +                    sizeof(*xen_frame_list);
> >> +
> >> +                if ( start_extent >= cmp.mar.nr_frames )
> >> +                    return -EINVAL;
> >> +
> >> +                /*
> >> +                 * Adjust nat to account for work done on previous
> >> +                 * continuations, leaving cmp pristine.  Hide the continuation
> >> +                 * from the native code to prevent double accounting.
> >> +                 */
> >> +                nat.mar->nr_frames -= start_extent;
> >> +                nat.mar->frame += start_extent;
> >> +                cmd &= MEMOP_CMD_MASK;
> >> +
> >> +                /*
> >> +                 * If there are too many frames to fit within the xlat buffer,
> >> +                 * we'll need to loop to marshal them all.
> >> +                 */
> >> +                nat.mar->nr_frames = min(nat.mar->nr_frames, xlat_max_frames);
> >> +
> >>                  /*
> >>                   * frame_list is an input for translated guests, and an output
> >>                   * for untranslated guests.  Only copy in for translated guests.
> >> @@ -444,14 +453,14 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
> >>                                               cmp.mar.nr_frames) ||
> >>                           __copy_from_compat_offset(
> >>                               compat_frame_list, cmp.mar.frame_list,
> >> -                             0, cmp.mar.nr_frames) )
> >> +                             start_extent, nat.mar->nr_frames) )
> >>                          return -EFAULT;
> >>  
> >>                      /*
> >>                       * Iterate backwards over compat_frame_list[] expanding
> >>                       * compat_pfn_t to xen_pfn_t in place.
> >>                       */
> >> -                    for ( int x = cmp.mar.nr_frames - 1; x >= 0; --x )
> >> +                    for ( int x = nat.mar->nr_frames - 1; x >= 0; --x )
> >>                          xen_frame_list[x] = compat_frame_list[x];
> > Unrelated question, but I don't really see the point of iterating
> > backwards. Wouldn't it be easier to use the existing 'i' loop
> > counter in a for ( i = 0; i < nat.mar->nr_frames; i++ ) loop?
> >
> > (Not that you need to fix it here, just curious about why we use that
> > construct instead).
> 
> Iterating backwards is totally critical.
> 
> xen_frame_list and compat_frame_list are the same numerical pointer.
> We've just filled it 50% full with compat_pfn_t's, and need to turn
> these into xen_pfn_t's which are double the size.
> 
> Iterating forwards would clobber every entry but the first.

Oh, I didn't realize they point to the same address. A comment would
help (not that you need to add it now).
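As a standalone illustration of why the backwards loop is critical, here is a minimal sketch with stand-in types mirroring compat_pfn_t/xen_pfn_t (this is not the actual Xen code, which reads and writes through typed pointers into the translation buffer; memcpy is used here only to keep the aliasing well-defined):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

typedef uint32_t compat_pfn_t;  /* 32-bit guest frame number  */
typedef uint64_t xen_pfn_t;     /* 64-bit native frame number */

/*
 * buf starts out with n compat_pfn_t entries packed into its first
 * half; widen them to xen_pfn_t in place.  Entry x's widened value
 * lands on bytes overlapping only narrow entries >= x, so walking
 * down from the top reads each narrow entry before any write can
 * clobber it.  A forward loop would clobber every entry but the
 * first.
 */
static void expand_in_place(void *buf, unsigned int n)
{
    unsigned char *p = buf;

    for ( int x = n - 1; x >= 0; --x )
    {
        compat_pfn_t narrow;
        xen_pfn_t wide;

        memcpy(&narrow, p + x * sizeof(narrow), sizeof(narrow));
        wide = narrow;
        memcpy(p + x * sizeof(wide), &wide, sizeof(wide));
    }
}
```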

> >
> >>                  }
> >>              }
> >> @@ -600,9 +609,11 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
> >>          case XENMEM_acquire_resource:
> >>          {
> >>              DEFINE_XEN_GUEST_HANDLE(compat_mem_acquire_resource_t);
> >> +            unsigned int done;
> >>  
> >>              if ( compat_handle_is_null(cmp.mar.frame_list) )
> >>              {
> >> +                ASSERT(split == 0 && rc == 0);
> >>                  if ( __copy_field_to_guest(
> >>                           guest_handle_cast(compat,
> >>                                             compat_mem_acquire_resource_t),
> >> @@ -611,6 +622,21 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
> >>                  break;
> >>              }
> >>  
> >> +            if ( split < 0 )
> >> +            {
> >> +                /* Continuation occurred. */
> >> +                ASSERT(rc != XENMEM_acquire_resource);
> >> +                done = cmd >> MEMOP_EXTENT_SHIFT;
> >> +            }
> >> +            else
> >> +            {
> >> +                /* No continuation. */
> >> +                ASSERT(rc == 0);
> >> +                done = nat.mar->nr_frames;
> >> +            }
> >> +
> >> +            ASSERT(done <= nat.mar->nr_frames);
> >> +
> >>              /*
> >>               * frame_list is an input for translated guests, and an output for
> >>               * untranslated guests.  Only copy out for untranslated guests.
> >> @@ -626,7 +652,7 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
> >>                   */
> >>                  BUILD_BUG_ON(sizeof(compat_pfn_t) > sizeof(xen_pfn_t));
> >>  
> >> -                for ( i = 0; i < cmp.mar.nr_frames; i++ )
> >> +                for ( i = 0; i < done; i++ )
> >>                  {
> >>                      compat_pfn_t frame = xen_frame_list[i];
> >>  
> >> @@ -636,15 +662,45 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
> >>                      compat_frame_list[i] = frame;
> >>                  }
> >>  
> >> -                if ( __copy_to_compat_offset(cmp.mar.frame_list, 0,
> >> +                if ( __copy_to_compat_offset(cmp.mar.frame_list, start_extent,
> >>                                               compat_frame_list,
> >> -                                             cmp.mar.nr_frames) )
> >> +                                             done) )
> >>                      return -EFAULT;
> > Is it fine to return with a possibly pending continuation already
> > encoded here?
> >
> > Other places seem to crash the domain when that happens.
> 
> Hmm.  This is all a total mess.  (Elsewhere the error handling is also
> broken - a caller who receives an error can't figure out how to recover)
> 
> But yes - I think you're right - the only thing we can do here is `goto
> crash;` and woe betide any 32bit kernel which passes a pointer to a
> read-only buffer.
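For reference, the `done` computation in the hunk quoted earlier in this message can be modelled in isolation. This is a simplified sketch: the constants reproduce the shape of Xen's MEMOP_* encoding (low bits carry the sub-command, high bits the continuation offset) purely for illustration:

```c
#include <assert.h>

#define MEMOP_CMD_MASK     0x3f  /* low bits: memop sub-command        */
#define MEMOP_EXTENT_SHIFT 6     /* high bits: continuation progress   */

/*
 * When the native handler created a continuation (split < 0), the
 * number of frames already completed travels in the high bits of cmd;
 * otherwise every requested frame was processed.
 */
static unsigned int frames_done(int split, unsigned int cmd,
                                unsigned int nr_frames)
{
    return split < 0 ? cmd >> MEMOP_EXTENT_SHIFT : nr_frames;
}
```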

With that added:

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 12:10:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 12:10:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79824.145404 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6Y2a-0008Rz-RK; Mon, 01 Feb 2021 12:10:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79824.145404; Mon, 01 Feb 2021 12:10:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6Y2a-0008Rs-OH; Mon, 01 Feb 2021 12:10:48 +0000
Received: by outflank-mailman (input) for mailman id 79824;
 Mon, 01 Feb 2021 12:10:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IWGu=HD=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l6Y2Z-0008Rn-Ti
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 12:10:47 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0d4db2a2-d9ca-4c6a-aff7-a935cf033c2a;
 Mon, 01 Feb 2021 12:10:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0d4db2a2-d9ca-4c6a-aff7-a935cf033c2a
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612181446;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=Jk0VGGDCLdafKuQSgGrvHoVip91gbWGqE1CZd/yuumk=;
  b=JksPSME9OQJ6xXOnEUzneGJK6g9gUnLkujl6VfH1V6kA0cSdU0SmYd9t
   6GgWkF/+zvnJRgg3uQiq28qCAf+uOxHSd89UXhs4c8PQigbBLd4DWY0Sa
   Z9kXa8Pjat/fzF5hKZSg0EKcxtT8zTSCDaDcJ+BuOegAQ587z9uo9GaGp
   s=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: 0WceRi0BsSsTGRbF4IVoS0GkpqJDL3Uu3bzyGaF3LjsJBkRigFuyM4X3nLCwRkDSjELs8rx+0y
 Gogaz5+yTjMG+tLUS1eQauovSxr9lz/ry+Ss6gwqf2Ed0nKiErqrNhZhbbFg6bbnBXWkfdnxcy
 PIDgh6ZMJXKep8xzBE3O9TJaHxom8u9MwHvP/2pFQKg/L4LhLiAUHv3qvl8jljN/GZMzk0F3pV
 xAGyra8TNyIfFUSTpcMcj8EOkUVfb2YMy559SevTx83C3g/d73GPdS6Xx9tfXri0ptrQa/wRuO
 +3c=
X-SBRS: 5.2
X-MesageID: 36647773
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,392,1602561600"; 
   d="scan'208";a="36647773"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fWLfW5btlNdi8ol7SxhSaX8rB5T7hxV3AY801zqFmewXKxJcVYEr2m50l0TpxqQ0M1hHVmfiGxH2AU8uVJTuaSyswPequXFZa0C4SbvPfHwI+TIOYg5vA0Sp8k9hZ+0DoBVjCWgV2Aw6a+RCK1Lcq6Gif7gnix5KWyYlUqonFNBZkXF9Fd5PDmJu+muIGtzwpbEqahE0fZ0ni1csJkAuV48rCFJ5qbgnG6tKOAVk/b6/WSQ33iQOglihoto3/JP49wbpnHmmpj1JCUe+6agHsk305aBIlIUEmSJlhsX/aUEZmwc6+CRqqg7c8PGDV7n3jmNuzF+bgXFjB5fSYhviKA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=pNk1fO7H6/8CzNxzsFbl+/PTqOzZVtHgN/tfSPzX2ec=;
 b=YNSKKGpAGokkRWzsNuqmRgeAl3q7VKe35kmZbVQTy9h2eCi4ELVYu0iU+w8EW4OBmWN1pJwyIdNPYbUk03wM/YFiILhgGy7HJWTnA6xUa5cgUAlEkBbHhSrmEXu6qjHY4VqxYg1LRqib9uZLP7kx7j8oiT4TnlDzShbHXjHF08c1iLDJRXm3iF7M4QZbz0Tw7Gf1Qmhn1ukYpzpim4udBDmrWmP8Qmup0xGcIrBQ3Pvg1S9qayKPvj6nm6l/u5/R+C4q9AKWkdUe/H8nOK+no0gK3oDIsISG/K/Mr1u1+3lwOV2F5Q8xLSfTTBnpMjhQKt90al9LHcbyZXIJ7c1VzA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=pNk1fO7H6/8CzNxzsFbl+/PTqOzZVtHgN/tfSPzX2ec=;
 b=GQlVA/36PuX3kUqNJxRV1W0s49cTnWcBUTi8H6r0w/6T2Fr70s22uQt1o8/w3ulTXht4HrDffDH6iL6AZsCE5/4DYq/m03Cxgjy4NZRcKI3O9bXg1+u3i8o/+sPukbuXJcOt1ZIjgaZSOtYkGOEf6T/AxahNaHAyzNusyDQowNY=
Subject: Re: [PATCH v8 06/16] xen/memory: Fix mapping grant tables with
 XENMEM_acquire_resource
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Ian Jackson <iwj@xenproject.org>, Jan Beulich
	<JBeulich@suse.com>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>, Julien Grall <julien@xen.org>, Paul Durrant <paul@xen.org>,
	=?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>, Hubert
 Jasudowicz <hubert.jasudowicz@cert.pl>, Tamas K Lengyel <tamas@tklengyel.com>
References: <20210130025852.12430-1-andrew.cooper3@citrix.com>
 <20210130025852.12430-7-andrew.cooper3@citrix.com>
 <YBfTpTzi+wo7AFSH@Air-de-Roger>
 <53a26fb9-9c43-d1c4-90cd-bb29d57e106b@citrix.com>
 <YBfvGKkQWSisFHNs@Air-de-Roger>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <932c9ab3-361d-0188-e608-2d686fc6b31a@citrix.com>
Date: Mon, 1 Feb 2021 12:10:33 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
In-Reply-To: <YBfvGKkQWSisFHNs@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0043.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:152::12) To BYAPR03MB4728.namprd03.prod.outlook.com
 (2603:10b6:a03:13a::24)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 62fa8407-56f9-4ca0-cfc1-08d8c6aa64ae
X-MS-TrafficTypeDiagnostic: BYAPR03MB4534:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB4534E5B61428D750BE89A921BAB69@BYAPR03MB4534.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 62fa8407-56f9-4ca0-cfc1-08d8c6aa64ae
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB4728.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 Feb 2021 12:10:41.2091
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: YL+OJQe5hnawBj+2a0MAuNLUIkjEBIX4vybu2Ny/hCRqWdT9DrRFB8GSyq+iEFtCgTyT6akVL8ndbeomdXLQn7pqTEzE7m2UwOwKMFgjBLc=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4534
X-OriginatorOrg: citrix.com

On 01/02/2021 12:07, Roger Pau Monné wrote:
> On Mon, Feb 01, 2021 at 11:11:37AM +0000, Andrew Cooper wrote:
>> On 01/02/2021 10:10, Roger Pau Monné wrote:
>>> On Sat, Jan 30, 2021 at 02:58:42AM +0000, Andrew Cooper wrote:
>>>> +                    (COMPAT_ARG_XLAT_SIZE - sizeof(*nat.mar)) /
>>>> +                    sizeof(*xen_frame_list);
>>>> +
>>>> +                if ( start_extent >= cmp.mar.nr_frames )
>>>> +                    return -EINVAL;
>>>> +
>>>> +                /*
>>>> +                 * Adjust nat to account for work done on previous
>>>> +                 * continuations, leaving cmp pristine.  Hide the continuation
>>>> +                 * from the native code to prevent double accounting.
>>>> +                 */
>>>> +                nat.mar->nr_frames -= start_extent;
>>>> +                nat.mar->frame += start_extent;
>>>> +                cmd &= MEMOP_CMD_MASK;
>>>> +
>>>> +                /*
>>>> +                 * If there are too many frames to fit within the xlat buffer,
>>>> +                 * we'll need to loop to marshal them all.
>>>> +                 */
>>>> +                nat.mar->nr_frames = min(nat.mar->nr_frames, xlat_max_frames);
>>>> +
>>>>                  /*
>>>>                   * frame_list is an input for translated guests, and an output
>>>>                   * for untranslated guests.  Only copy in for translated guests.
>>>> @@ -444,14 +453,14 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
>>>>                                               cmp.mar.nr_frames) ||
>>>>                           __copy_from_compat_offset(
>>>>                               compat_frame_list, cmp.mar.frame_list,
>>>> -                             0, cmp.mar.nr_frames) )
>>>> +                             start_extent, nat.mar->nr_frames) )
>>>>                          return -EFAULT;
>>>>  
>>>>                      /*
>>>>                       * Iterate backwards over compat_frame_list[] expanding
>>>>                       * compat_pfn_t to xen_pfn_t in place.
>>>>                       */
>>>> -                    for ( int x = cmp.mar.nr_frames - 1; x >= 0; --x )
>>>> +                    for ( int x = nat.mar->nr_frames - 1; x >= 0; --x )
>>>>                          xen_frame_list[x] = compat_frame_list[x];
>>> Unrelated question, but I don't really see the point of iterating
>>> backwards. Wouldn't it be easier to use the existing 'i' loop
>>> counter in a for ( i = 0; i < nat.mar->nr_frames; i++ ) loop?
>>>
>>> (Not that you need to fix it here, just curious about why we use that
>>> construct instead).
>> Iterating backwards is totally critical.
>>
>> xen_frame_list and compat_frame_list are the same numerical pointer.
>> We've just filled it 50% full with compat_pfn_t's, and need to turn
>> these into xen_pfn_t's which are double the size.
>>
>> Iterating forwards would clobber every entry but the first.
> Oh, I didn't realize they point to the same address. A comment would
> help (not that you need to add it now).

Well - that's what "expand ... in place" means in the existing comment.
Suggestions for how to make it clearer?

>
>>>>                  }
>>>>              }
>>>> @@ -600,9 +609,11 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
>>>>          case XENMEM_acquire_resource:
>>>>          {
>>>>              DEFINE_XEN_GUEST_HANDLE(compat_mem_acquire_resource_t);
>>>> +            unsigned int done;
>>>>  
>>>>              if ( compat_handle_is_null(cmp.mar.frame_list) )
>>>>              {
>>>> +                ASSERT(split == 0 && rc == 0);
>>>>                  if ( __copy_field_to_guest(
>>>>                           guest_handle_cast(compat,
>>>>                                             compat_mem_acquire_resource_t),
>>>> @@ -611,6 +622,21 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
>>>>                  break;
>>>>              }
>>>>  
>>>> +            if ( split < 0 )
>>>> +            {
>>>> +                /* Continuation occurred. */
>>>> +                ASSERT(rc != XENMEM_acquire_resource);
>>>> +                done = cmd >> MEMOP_EXTENT_SHIFT;
>>>> +            }
>>>> +            else
>>>> +            {
>>>> +                /* No continuation. */
>>>> +                ASSERT(rc == 0);
>>>> +                done = nat.mar->nr_frames;
>>>> +            }
>>>> +
>>>> +            ASSERT(done <= nat.mar->nr_frames);
>>>> +
>>>>              /*
>>>>               * frame_list is an input for translated guests, and an output for
>>>>               * untranslated guests.  Only copy out for untranslated guests.
>>>> @@ -626,7 +652,7 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
>>>>                   */
>>>>                  BUILD_BUG_ON(sizeof(compat_pfn_t) > sizeof(xen_pfn_t));
>>>>  
>>>> -                for ( i = 0; i < cmp.mar.nr_frames; i++ )
>>>> +                for ( i = 0; i < done; i++ )
>>>>                  {
>>>>                      compat_pfn_t frame = xen_frame_list[i];
>>>>  
>>>> @@ -636,15 +662,45 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
>>>>                      compat_frame_list[i] = frame;
>>>>                  }
>>>>  
>>>> -                if ( __copy_to_compat_offset(cmp.mar.frame_list, 0,
>>>> +                if ( __copy_to_compat_offset(cmp.mar.frame_list, start_extent,
>>>>                                               compat_frame_list,
>>>> -                                             cmp.mar.nr_frames) )
>>>> +                                             done) )
>>>>                      return -EFAULT;
>>> Is it fine to return with a possibly pending continuation already
>>> encoded here?
>>>
>>> Other places seem to crash the domain when that happens.
>> Hmm.  This is all a total mess.  (Elsewhere the error handling is also
>> broken - a caller who receives an error can't figure out how to recover)
>>
>> But yes - I think you're right - the only thing we can do here is `goto
>> crash;` and woe betide any 32bit kernel which passes a pointer to a
>> read-only buffer.
> With that added:
>
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 12:30:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 12:30:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79829.145417 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6YL3-0001Jg-Hs; Mon, 01 Feb 2021 12:29:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79829.145417; Mon, 01 Feb 2021 12:29:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6YL3-0001JZ-E2; Mon, 01 Feb 2021 12:29:53 +0000
Received: by outflank-mailman (input) for mailman id 79829;
 Mon, 01 Feb 2021 12:29:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4N3t=HD=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l6YL1-0001JU-Sg
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 12:29:51 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1be26997-e97d-4939-889a-1e818c9328a5;
 Mon, 01 Feb 2021 12:29:50 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E6857ABD5;
 Mon,  1 Feb 2021 12:29:49 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1be26997-e97d-4939-889a-1e818c9328a5
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612182590; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=S3K0ofkJNzRg3JXXy+YxfS3HC+mTazT9FRO6Bg6J2rg=;
	b=utIImy9jcn20xMV9AcFXeeNhDC+N/Rxk9JwDw1Cehw7gXEhBnJN/GmX9luRUD/2VxrVF3j
	evSsoEoiVic1M/D8LVBqOMN1EFJkshr7HKdzOVU3vnQT3Z/Mk1wNWHXso0YNvBF016cKcz
	Xdak3B0sJJKUErw4VMfBHHLJaQZUlsM=
Subject: Re: [PATCH] x86/debug: fix page-overflow bug in dbg_rw_guest_mem
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Elena Ufimtseva <elena.ufimtseva@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org,
 Tamas K Lengyel <tamas@tklengyel.com>
References: <caba05850df644814d75d5de0574c62ce90e8789.1611971959.git.tamas@tklengyel.com>
 <74f3263a-fe12-d365-ad45-e5556b575539@citrix.com>
 <044823b7-1bbd-6405-7371-2b06e49cc147@suse.com>
 <0dc1f3c9-6837-ce12-8826-11354346b3c1@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <65ded62d-9f57-85ed-c333-e301d195c9f2@suse.com>
Date: Mon, 1 Feb 2021 13:29:49 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <0dc1f3c9-6837-ce12-8826-11354346b3c1@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 01.02.2021 12:44, Andrew Cooper wrote:
> On 01/02/2021 09:37, Jan Beulich wrote:
>> On 30.01.2021 03:59, Andrew Cooper wrote:
>>> On 30/01/2021 01:59, Tamas K Lengyel wrote:
>>>> When using gdbsx, dbg_rw_guest_mem is used to read/write guest memory. When
>>>> the buffer being accessed crosses a page boundary, the next page needs to be
>>>> grabbed to access the correct memory for the buffer's overflowing parts. While
>>>> dbg_rw_guest_mem has logic to handle that, it broke with 229492e210a: instead
>>>> of grabbing the next page, the code currently loops back to the start of the
>>>> first page. This results in errors like the following while trying to use gdb
>>>> with Linux' lx-dmesg:
>>>>
>>>> [    0.114457] PM: hibernation: Registered nosave memory: [mem
>>>> 0xfdfff000-0xffffffff]
>>>> [    0.114460] [mem 0x90000000-0xfbffffff] available for PCI demem 0
>>>> [    0.114462] f]f]
>>>> Python Exception <class 'ValueError'> embedded null character:
>>>> Error occurred in Python: embedded null character
>>>>
>>>> Fix this bug by taking the variable assignment outside the loop.
>>>>
>>>> Signed-off-by: Tamas K Lengyel <tamas@tklengyel.com>
>>> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> I have to admit that I'm irritated: On January 14th I did submit
>> a patch ('x86/gdbsx: convert "user" to "guest" accesses') fixing this
>> as a side effect. I understand that one was taking care of more
>> issues here, but shouldn't that be preferred? Re-basing isn't going
>> to be overly difficult, but anyway.
> 
> I'm sorry.  That was sent during the period where I had no email access
> (hence I wasn't aware of it - I've been focusing on 4.15 work, and this
> series wasn't pinged).

Oh, so you had actually lost emails, rather than (as I had
understood so far) only getting them in a very delayed fashion?

Anyway, with the first part of that series having been pretty close
to getting an XSA, I thought you were well aware that at least
that part is very clearly intended for 4.15. (I also mentioned
it to you last week on IRC, when you asked what specifically wants
looking at for 4.15.) Plus, besides bringing up the topic on the
last two or three community calls, over all of January I've
specifically avoided pinging _any_ of the many patches I have
pending, to avoid giving you the feeling of even more pressure.

> but it also isn't identified as a bugfix, or
> suitable for backporting in that form.
> 
> I apologise for the extra work caused unintentionally, but I think this
> is the correct way around WRT backports, is it not?

It didn't occur to me that there could be a consideration of
backporting here. But yes, if so wanted, maybe the split is
helpful. OTOH the full change could just as well be taken,
to stop the abuse of "user" accesses in the stable trees too.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 12:34:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 12:34:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79831.145428 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6YPS-00029x-3j; Mon, 01 Feb 2021 12:34:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79831.145428; Mon, 01 Feb 2021 12:34:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6YPS-00029q-0o; Mon, 01 Feb 2021 12:34:26 +0000
Received: by outflank-mailman (input) for mailman id 79831;
 Mon, 01 Feb 2021 12:34:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LFAu=HD=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1l6YPQ-00029l-Ht
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 12:34:24 +0000
Received: from mail-lj1-x22e.google.com (unknown [2a00:1450:4864:20::22e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6f4fcbad-4f40-4472-a42a-51800129a07b;
 Mon, 01 Feb 2021 12:34:23 +0000 (UTC)
Received: by mail-lj1-x22e.google.com with SMTP id y14so6725897ljn.8
 for <xen-devel@lists.xenproject.org>; Mon, 01 Feb 2021 04:34:23 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id 196sm2968698lfj.219.2021.02.01.04.34.20
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 01 Feb 2021 04:34:21 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6f4fcbad-4f40-4472-a42a-51800129a07b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=XbPw9h0o24mx7IeLb1rrzfnFe0QBg2D5okIs1p7WT1Q=;
        b=EBINm2kyobqcHSJpOMEc/i/ONegS0KpcuwlE+f3bB2HTHP0Fhz7eHmlXTpGv4RNH35
         owa0/eBTlkhKoO2W5jRTS6dmEm4uZnxt6S7/R0H+CWDtDzoY+iYPZYAEDpPISeNiBmr5
         0gKu8EE+StMtsVJrJiVTIVLNeFcaub/CJttHfpvWd0MHeuLGgNSYdq7hxqu6/BEeRvNi
         8GW/SJkCyTsGB/pLM6uuvF95OYLlMEpQaPSVAOBX1QVlccvj2Pvw8k3IxXSf3Wda0ngy
         BEUjzyfC4AIuPhousXGq2QmoOiU0EbE0DDnT9tXJ/Bmv7uPtOwtyEE3mSl0ektwSxmfO
         MABA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=XbPw9h0o24mx7IeLb1rrzfnFe0QBg2D5okIs1p7WT1Q=;
        b=Hl04N7r3U0LKCWV7eBqEVyHmgvdGikNtoxJcPDqv4mtR8BlgEQHh4Ygg+RkzwJr9Su
         c52Rtq3t9PgDi+FoRFvSHDECHjKhFGGiGFo5LuqkO7pvXMk/WQ4Nb5IQfo9ow/pEahIQ
         x/Xnf3vC+8e5diXHyzEeObn6Lvz5xdfj6kcx0k7s1gZFiUuhLI/p99yhoAfl4mUP2Ntf
         Lt8vJ8uI+FNwEenosxNxEXclmMMQbUi4IBFtG5dsJfJymuDwjILPwZ7fMu7m2cANQwWU
         RSyN/j/1vVa/4aYICyDkWGNAfUVRlsywtOpX9aJaFvBa+1gbQN0pNzIjFvJUzF9+oSZ7
         VawQ==
X-Gm-Message-State: AOAM532caXJ47iowspWbyq0drA689vtQDb36eeBQ00ATnOphbrH6ZG8Z
	VKCMSpzOOU9M0sUBy4cVj+Q=
X-Google-Smtp-Source: ABdhPJxisAotvCzYNiMdMYsVKM5JGdtAKt+ibFSa5ytHrFiLZTi6W5fbxbNSEi/8AiSDIvX9oh/tLw==
X-Received: by 2002:a2e:3e19:: with SMTP id l25mr10226149lja.217.1612182862098;
        Mon, 01 Feb 2021 04:34:22 -0800 (PST)
Subject: Re: [PATCH v8 00/16] acquire_resource size and external IPT
 monitoring
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: Jan Beulich <JBeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 Ian Jackson <iwj@xenproject.org>, Anthony PERARD
 <anthony.perard@citrix.com>, Jun Nakajima <jun.nakajima@intel.com>,
 Kevin Tian <kevin.tian@intel.com>,
 =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 Tamas K Lengyel <tamas@tklengyel.com>
References: <20210130025852.12430-1-andrew.cooper3@citrix.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <911270bf-0077-b70e-c224-712dfa535afa@gmail.com>
Date: Mon, 1 Feb 2021 14:34:15 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20210130025852.12430-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 30.01.21 04:58, Andrew Cooper wrote:

Hi Andrew

> Combined series (as they are dependent).  First, the resource size fixes, and
> then the external IPT monitoring built on top.
>
> Posting in full for reference, but several patches are ready to go in.  Those
> in need of review are patch 6, 8 and 12.
>
> See individual patches for changes.  The major work was rebasing over the
> ARM/IOREQ series which moved a load of code which this series was bugfixing.

It looks like some of these patches have already been merged, so I 
did preliminary testing of the current staging 
(9dc687f155a57216b83b17f9cde55dd43e06b0cd x86/debug: fix page-overflow 
bug in dbg_rw_guest_mem) on Arm *with* IOREQ enabled.

I didn't notice any regressions with IOREQ on Arm.


>
> Andrew Cooper (7):
>    xen/memory: Reject out-of-range resource 'frame' values
>    xen/gnttab: Rework resource acquisition
>    xen/memory: Fix acquire_resource size semantics
>    xen/memory: Improve compat XENMEM_acquire_resource handling
>    xen/memory: Indent part of acquire_resource()
>    xen/memory: Fix mapping grant tables with XENMEM_acquire_resource
>    xen+tools: Introduce XEN_SYSCTL_PHYSCAP_vmtrace
>
> Michał Leszczyński (7):
>    xen/domain: Add vmtrace_size domain creation parameter
>    tools/[lib]xl: Add vmtrace_buf_size parameter
>    xen/memory: Add a vmtrace_buf resource type
>    x86/vmx: Add Intel Processor Trace support
>    xen/domctl: Add XEN_DOMCTL_vmtrace_op
>    tools/libxc: Add xc_vmtrace_* functions
>    tools/misc: Add xen-vmtrace tool
>
> Tamas K Lengyel (2):
>    xen/vmtrace: support for VM forks
>    x86/vm_event: Carry the vmtrace buffer position in vm_event
>
>   docs/man/xl.cfg.5.pod.in                    |   9 +
>   tools/golang/xenlight/helpers.gen.go        |   4 +
>   tools/golang/xenlight/types.gen.go          |   2 +
>   tools/include/libxl.h                       |  14 ++
>   tools/include/xenctrl.h                     |  73 ++++++++
>   tools/libs/ctrl/Makefile                    |   1 +
>   tools/libs/ctrl/xc_vmtrace.c                | 128 +++++++++++++
>   tools/libs/light/libxl.c                    |   2 +
>   tools/libs/light/libxl_cpuid.c              |   1 +
>   tools/libs/light/libxl_create.c             |   1 +
>   tools/libs/light/libxl_types.idl            |   5 +
>   tools/misc/.gitignore                       |   1 +
>   tools/misc/Makefile                         |   7 +
>   tools/misc/xen-cpuid.c                      |   2 +-
>   tools/misc/xen-vmtrace.c                    | 154 ++++++++++++++++
>   tools/ocaml/libs/xc/xenctrl.ml              |   1 +
>   tools/ocaml/libs/xc/xenctrl.mli             |   1 +
>   tools/xl/xl_info.c                          |   5 +-
>   tools/xl/xl_parse.c                         |   4 +
>   xen/arch/x86/domain.c                       |  23 +++
>   xen/arch/x86/domctl.c                       |  55 ++++++
>   xen/arch/x86/hvm/vmx/vmcs.c                 |  19 +-
>   xen/arch/x86/hvm/vmx/vmx.c                  | 200 +++++++++++++++++++-
>   xen/arch/x86/mm/mem_sharing.c               |   3 +
>   xen/arch/x86/vm_event.c                     |   3 +
>   xen/common/compat/memory.c                  | 147 +++++++++++----
>   xen/common/domain.c                         |  81 ++++++++
>   xen/common/grant_table.c                    | 112 ++++++++----
>   xen/common/ioreq.c                          |   2 +-
>   xen/common/memory.c                         | 274 +++++++++++++++++++---------
>   xen/common/sysctl.c                         |   2 +
>   xen/include/asm-x86/cpufeature.h            |   1 +
>   xen/include/asm-x86/hvm/hvm.h               |  72 ++++++++
>   xen/include/asm-x86/hvm/vmx/vmcs.h          |   4 +
>   xen/include/asm-x86/msr.h                   |  32 ++++
>   xen/include/public/arch-x86/cpufeatureset.h |   1 +
>   xen/include/public/domctl.h                 |  38 ++++
>   xen/include/public/memory.h                 |  18 +-
>   xen/include/public/sysctl.h                 |   3 +-
>   xen/include/public/vm_event.h               |   7 +
>   xen/include/xen/domain.h                    |   2 +
>   xen/include/xen/grant_table.h               |  21 ++-
>   xen/include/xen/ioreq.h                     |   2 +-
>   xen/include/xen/sched.h                     |   6 +
>   xen/xsm/flask/hooks.c                       |   1 +
>   45 files changed, 1366 insertions(+), 178 deletions(-)
>   create mode 100644 tools/libs/ctrl/xc_vmtrace.c
>   create mode 100644 tools/misc/xen-vmtrace.c
>
-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Mon Feb 01 12:41:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 12:41:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79836.145447 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6YWZ-0003Ay-VP; Mon, 01 Feb 2021 12:41:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79836.145447; Mon, 01 Feb 2021 12:41:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6YWZ-0003Ar-SA; Mon, 01 Feb 2021 12:41:47 +0000
Received: by outflank-mailman (input) for mailman id 79836;
 Mon, 01 Feb 2021 12:41:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4N3t=HD=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l6YWX-0003Am-UU
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 12:41:45 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 864af19d-4885-4521-bd63-36209379be20;
 Mon, 01 Feb 2021 12:41:43 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D2A4DAB92;
 Mon,  1 Feb 2021 12:41:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 864af19d-4885-4521-bd63-36209379be20
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612183302; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=eNLWhHPDsMuX+VaPLPf7HucG8rgFqcAlJfgfgBVrV1o=;
	b=hz08KS8bNOMxQvHLXzdLz303Ow4YoL3Pp1OU9PAdCoCEFa4HelH8jXVjP4oUQgTJteD0ab
	+xOxO7LcUGSvpQd4QArHku3+7v/gU66nooYuQux1FaMan7W5wPsgNRoqlpJBVhRt1eL5TB
	c/AMGgD9Or0OkQAjbleXX9+Nw6I5VNI=
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2 0/3] x86/time: calibration rendezvous adjustments
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Claudemir Todo Bom <claudemir@todobom.com>
Message-ID: <b8d1d4f8-8675-1393-8524-d060ffb1551a@suse.com>
Date: Mon, 1 Feb 2021 13:41:41 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

The latter two patches are meant to address a regression reported on
the list under "Problems with APIC on versions 4.9 and later (4.8
works)". In the course of analyzing output from a debugging patch I
ran into another anomaly again, which I thought I should finally try
to address. Hence patch 1.

While looking closely at the corresponding debugging patch's output I
noticed a suspicious drift between local and master stime: measured not
very precisely, local was behind master by about 200ms in about half an
hour. Interestingly, the recording of ->master_stime (and hence the not
exactly cheap invocation of read_platform_stime()) looks to be
pretty pointless with CONSTANT_TSC - I haven't been able to spot an
actual consumer. IOW the drift may not be a problem, and we might be
able to eliminate the platform timer reads. (When !CONSTANT_TSC, such
drift would get corrected anyway, by local_time_calibration().)

1: change initiation of the calibration timer
2: adjust time recording in time_calibration_tsc_rendezvous()
3: don't move TSC backwards in time_calibration_tsc_rendezvous()

Jan


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 12:42:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 12:42:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79837.145459 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6YXP-0003Gv-Ar; Mon, 01 Feb 2021 12:42:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79837.145459; Mon, 01 Feb 2021 12:42:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6YXP-0003Go-6R; Mon, 01 Feb 2021 12:42:39 +0000
Received: by outflank-mailman (input) for mailman id 79837;
 Mon, 01 Feb 2021 12:42:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4N3t=HD=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l6YXN-0003Gh-TX
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 12:42:37 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 067a2b04-0906-4776-b2c3-2a24514f1815;
 Mon, 01 Feb 2021 12:42:36 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EE3A3ABD5;
 Mon,  1 Feb 2021 12:42:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 067a2b04-0906-4776-b2c3-2a24514f1815
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612183356; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=pQE/B5fzZf3jiK+QiCdMMnScKlarCr8sADBb9isYXQM=;
	b=ogx0M9L/sRFh0TkJYhEmDi/1u4/+aqxSxUPQRicSox3pAZqKQ0ZpEjUfMfKltleZlOEtmK
	kJkKGzh1oJf9PMnYfddaFWWHl6i8PqtR2+VMg4oH9LiKtBMj0ob/cUvScvzz/zJM3+B9if
	cPpH/XqQwjcIJhJyXxLX8PLnbWr1puM=
Subject: [PATCH v2 1/3] x86/time: change initiation of the calibration timer
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Claudemir Todo Bom <claudemir@todobom.com>
References: <b8d1d4f8-8675-1393-8524-d060ffb1551a@suse.com>
Message-ID: <ca624e2e-5a2c-e2a6-6e26-54bc3ac7cc19@suse.com>
Date: Mon, 1 Feb 2021 13:42:35 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <b8d1d4f8-8675-1393-8524-d060ffb1551a@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Setting the timer a second (EPOCH) into the future at a random point
during boot (prior to bringing up APs and prior to launching Dom0) does
not yield predictable results: the timer may expire while we're still
bringing up APs (too early) or when Dom0 is already booting (too late).
Instead, invoke the timer handler function explicitly at a predictable
point in time, once we've established the rendezvous function to use
(and hence also once all APs are online). This will, through the raising
and handling of TIMER_SOFTIRQ, then also have the effect of arming the
timer.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -854,9 +854,7 @@ static void resume_platform_timer(void)
 
 static void __init reset_platform_timer(void)
 {
-    /* Deactivate any timers running */
     kill_timer(&plt_overflow_timer);
-    kill_timer(&calibration_timer);
 
     /* Reset counters and stamps */
     spin_lock_irq(&platform_timer_lock);
@@ -1956,19 +1954,13 @@ static void __init reset_percpu_time(voi
     t->stamp.master_stime = t->stamp.local_stime;
 }
 
-static void __init try_platform_timer_tail(bool late)
+static void __init try_platform_timer_tail(void)
 {
     init_timer(&plt_overflow_timer, plt_overflow, NULL, 0);
     plt_overflow(NULL);
 
     platform_timer_stamp = plt_stamp64;
     stime_platform_stamp = NOW();
-
-    if ( !late )
-        init_percpu_time();
-
-    init_timer(&calibration_timer, time_calibration, NULL, 0);
-    set_timer(&calibration_timer, NOW() + EPOCH);
 }
 
 /* Late init function, after all cpus have booted */
@@ -2009,10 +2001,13 @@ static int __init verify_tsc_reliability
             time_calibration_rendezvous_fn = time_calibration_nop_rendezvous;
 
             /* Finish platform timer switch. */
-            try_platform_timer_tail(true);
+            try_platform_timer_tail();
 
             printk("Switched to Platform timer %s TSC\n",
                    freq_string(plt_src.frequency));
+
+            time_calibration(NULL);
+
             return 0;
         }
     }
@@ -2033,6 +2028,8 @@ static int __init verify_tsc_reliability
          !boot_cpu_has(X86_FEATURE_TSC_RELIABLE) )
         time_calibration_rendezvous_fn = time_calibration_tsc_rendezvous;
 
+    time_calibration(NULL);
+
     return 0;
 }
 __initcall(verify_tsc_reliability);
@@ -2048,7 +2045,11 @@ int __init init_xen_time(void)
     do_settime(get_wallclock_time(), 0, NOW());
 
     /* Finish platform timer initialization. */
-    try_platform_timer_tail(false);
+    try_platform_timer_tail();
+
+    init_percpu_time();
+
+    init_timer(&calibration_timer, time_calibration, NULL, 0);
 
     /*
      * Setup space to track per-socket TSC_ADJUST values. Don't fiddle with



From xen-devel-bounces@lists.xenproject.org Mon Feb 01 12:42:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 12:42:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79838.145471 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6YXV-0003Jo-Ip; Mon, 01 Feb 2021 12:42:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79838.145471; Mon, 01 Feb 2021 12:42:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6YXV-0003Jg-Eg; Mon, 01 Feb 2021 12:42:45 +0000
Received: by outflank-mailman (input) for mailman id 79838;
 Mon, 01 Feb 2021 12:42:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6YXU-0003JQ-PX; Mon, 01 Feb 2021 12:42:44 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6YXU-0001iv-KA; Mon, 01 Feb 2021 12:42:44 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6YXU-0006iv-7J; Mon, 01 Feb 2021 12:42:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l6YXU-00061r-6o; Mon, 01 Feb 2021 12:42:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=PcfVVsuD9NKS+XAKlnXgPfu6RP7QMcIkNWIzG6j2fI8=; b=gVDlP4tmgEY/RlBp+aUmQz533e
	u52pq8Eb8j8dYeZwJPHpu6KPeooYWkT9n3cYcxU3oyCIJ2QDmDbXAs0YEHaDuJK67tjgqNKqV32f1
	doo8ZsvgWwstbc6Wp0cEiXBVLV/YCjwThCVsfIeeuohSUFUqApL96fFFfhNm304CUJug=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158873-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 158873: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=9dc687f155a57216b83b17f9cde55dd43e06b0cd
X-Osstest-Versions-That:
    xen=9dc687f155a57216b83b17f9cde55dd43e06b0cd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 01 Feb 2021 12:42:44 +0000

flight 158873 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158873/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 158835
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 158835
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 158835
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 158835
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 158835
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 158835
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 158835
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 158835
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 158835
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 158835
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 158835
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  9dc687f155a57216b83b17f9cde55dd43e06b0cd
baseline version:
 xen                  9dc687f155a57216b83b17f9cde55dd43e06b0cd

Last test of basis   158873  2021-02-01 01:51:26 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Mon Feb 01 12:43:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 12:43:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79840.145486 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6YXs-0003SK-1Q; Mon, 01 Feb 2021 12:43:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79840.145486; Mon, 01 Feb 2021 12:43:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6YXr-0003SD-TR; Mon, 01 Feb 2021 12:43:07 +0000
Received: by outflank-mailman (input) for mailman id 79840;
 Mon, 01 Feb 2021 12:43:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4N3t=HD=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l6YXq-0003Rw-Eh
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 12:43:06 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 69894d35-6bd9-4c1c-aaea-8908e3e09a16;
 Mon, 01 Feb 2021 12:43:05 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9DC19AC45;
 Mon,  1 Feb 2021 12:43:04 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 69894d35-6bd9-4c1c-aaea-8908e3e09a16
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612183384; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=DnYSxTzJBDvOfiz9wcen1XbR/0uoT6XyzGIZOo8FU/0=;
	b=sh+nbL0yu7qztUDe5BYyZVRiJLK4UzCa3t5KRokxVQ/2hbl4jQHrs7HZDCyHuAwNrhyKqR
	VdRlWaxgZqz5/LE+rq7L1pef+SZ7CJ/rj7yG0NlU3dESUuyMjylj/7QBXdyNW/5/O/7raA
	UsJHeoN6i+vRZ3n4+LH3kQdaZLXQE7Q=
Subject: [PATCH v2 2/3] x86/time: adjust time recording in
 time_calibration_tsc_rendezvous()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Claudemir Todo Bom <claudemir@todobom.com>
References: <b8d1d4f8-8675-1393-8524-d060ffb1551a@suse.com>
Message-ID: <26b71f94-d1c7-d906-5b2a-4e7994d6f7c0@suse.com>
Date: Mon, 1 Feb 2021 13:43:04 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <b8d1d4f8-8675-1393-8524-d060ffb1551a@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

The (stime,tsc) tuple is the basis for extrapolation by get_s_time().
Therefore the two are best taken as close to one another as possible.
This means two things: First, reading platform time on the first
iteration is too early. The closest we can get is on the last
iteration, immediately before telling other CPUs to write their TSCs
(and then also writing CPU0's). While at first glance it may seem
not overly relevant when exactly platform time is read (assuming
that only stime is ever relevant anywhere, and hence the association
with the precise TSC values is of lower interest), both CPU frequency
changes and the effects of SMT make it unpredictable (between individual
rendezvous instances) how long the loop iterations will take. This in
turn leads to a higher error than necessary in how close to linear
stime movement we can get.

Second, re-reading the TSC for local recording also increases the
overall error, when we already know a more precise value - the one just
written.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.

--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -1662,11 +1662,12 @@ struct calibration_rendezvous {
 };
 
 static void
-time_calibration_rendezvous_tail(const struct calibration_rendezvous *r)
+time_calibration_rendezvous_tail(const struct calibration_rendezvous *r,
+                                 uint64_t tsc)
 {
     struct cpu_time_stamp *c = &this_cpu(cpu_calibration);
 
-    c->local_tsc    = rdtsc_ordered();
+    c->local_tsc    = tsc;
     c->local_stime  = get_s_time_fixed(c->local_tsc);
     c->master_stime = r->master_stime;
 
@@ -1691,11 +1692,11 @@ static void time_calibration_tsc_rendezv
             while ( atomic_read(&r->semaphore) != (total_cpus - 1) )
                 cpu_relax();
 
-            if ( r->master_stime == 0 )
-            {
-                r->master_stime = read_platform_stime(NULL);
+            if ( r->master_tsc_stamp == 0 )
                 r->master_tsc_stamp = rdtsc_ordered();
-            }
+            else if ( i == 0 )
+                r->master_stime = read_platform_stime(NULL);
+
             atomic_inc(&r->semaphore);
 
             if ( i == 0 )
@@ -1720,7 +1721,7 @@ static void time_calibration_tsc_rendezv
         }
     }
 
-    time_calibration_rendezvous_tail(r);
+    time_calibration_rendezvous_tail(r, r->master_tsc_stamp);
 }
 
 /* Ordinary rendezvous function which does not modify TSC values. */
@@ -1745,7 +1746,7 @@ static void time_calibration_std_rendezv
         smp_rmb(); /* receive signal /then/ read r->master_stime */
     }
 
-    time_calibration_rendezvous_tail(r);
+    time_calibration_rendezvous_tail(r, rdtsc_ordered());
 }
 
 /*
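
For illustration only (not part of the patch): the extrapolation the
commit message refers to can be sketched as below. This is a simplified
model with hypothetical names (struct time_stamp, extrapolate_stime); it
assumes a fixed 1 GHz TSC so one tick equals one nanosecond, whereas the
real get_s_time_fixed() multiplies the delta by a scale factor derived
from the measured frequency.

```c
#include <stdint.h>

/* Simplified stand-in for Xen's cpu_time_stamp: an (stime, tsc) pair. */
struct time_stamp {
    uint64_t local_tsc;   /* TSC value recorded at calibration */
    int64_t  local_stime; /* system time (ns) recorded at calibration */
};

/*
 * Hypothetical, simplified model of get_s_time_fixed(): extrapolate
 * system time from the recorded pair, here at one nanosecond per TSC
 * tick. The closer together the recorded stime and TSC were taken, the
 * smaller the error of this extrapolation - which is what the patch's
 * reordering of the reads is about.
 */
static int64_t extrapolate_stime(const struct time_stamp *s, uint64_t tsc)
{
    return s->local_stime + (int64_t)(tsc - s->local_tsc);
}
```

Under this model, a TSC read shortly after calibration extrapolates to a
correspondingly later stime.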



From xen-devel-bounces@lists.xenproject.org Mon Feb 01 12:43:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 12:43:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79842.145497 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6YYH-0003Yy-Ac; Mon, 01 Feb 2021 12:43:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79842.145497; Mon, 01 Feb 2021 12:43:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6YYH-0003Yq-7N; Mon, 01 Feb 2021 12:43:33 +0000
Received: by outflank-mailman (input) for mailman id 79842;
 Mon, 01 Feb 2021 12:43:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4N3t=HD=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l6YYF-0003Yd-JB
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 12:43:31 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 50e4a339-9990-4a43-b796-6f872f291cb7;
 Mon, 01 Feb 2021 12:43:30 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 43731AB92;
 Mon,  1 Feb 2021 12:43:29 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 50e4a339-9990-4a43-b796-6f872f291cb7
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612183409; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=pO95WGmT9luUPpVQj5Mfxb1ngFYgSQ2HBQd/KaR67jk=;
	b=SxDTn8Yr2eNncgqM27CwVzLnQR04GX1SPvm2AuWxwmStGoz9j4HlR6Jb1v4d4ati86Jxjv
	KQVK2jp0BCFZl7Fmaeq+TEP5bLZZbBnZ6J5RzYquvxxZLX6rEM5MCnaBiPjIkuMzvoCwzq
	EN9XRLPHwXHTIEQa8vftAcYYKngFIXk=
Subject: [PATCH v2 3/3] x86/time: don't move TSC backwards in
 time_calibration_tsc_rendezvous()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Claudemir Todo Bom <claudemir@todobom.com>
References: <b8d1d4f8-8675-1393-8524-d060ffb1551a@suse.com>
Message-ID: <80d05abb-4d53-3229-8326-21d79e32dfe4@suse.com>
Date: Mon, 1 Feb 2021 13:43:28 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <b8d1d4f8-8675-1393-8524-d060ffb1551a@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

While moving the TSC backwards by small amounts may be okay, the
unconditional use of CPU0's value here has been found to be a problem
when the boot time TSC of the BSP was behind that of all APs by more
than a second. In particular, because get_s_time_fixed() produces
nonsensical output when the calculated delta is negative, we can't
allow this to happen.

On the first iteration have all other CPUs sort out the highest TSC
value any one of them has read. On the second iteration, if that
maximum is higher than CPU0's, update its recorded value from that
taken in the first iteration. Use the resulting value on the last
iteration to write everyone's TSCs.

To account for the possible discontinuity, have
time_calibration_rendezvous_tail() record the newly written value, but
extrapolate local stime using the value read.

Reported-by: Claudemir Todo Bom <claudemir@todobom.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Don't update r->master_stime by calculation. Re-base over new
    earlier patch. Make time_calibration_rendezvous_tail() take two TSC
    values.
---
Since CPU0 reads its TSC last on the first iteration, if TSCs were
perfectly in sync there should never be a need to update. However,
even on the TSC-reliable system I first tested this on (using
"tsc=skewed" to get this rendezvous function into use in the first
place), updates by up to several thousand clocks did happen. I wonder
whether this points at some problem with the approach that I'm not (yet)
seeing.

Considering the sufficiently modern CPU it's using, I suspect the
reporter's system wouldn't even need to turn off TSC_RELIABLE, if only
there wasn't the boot time skew. Hence another approach might be to fix
this boot time skew. Of course, to recognize whether the TSCs still
aren't in sync afterwards, we'd need to run tsc_check_reliability()
sufficiently long after that adjustment. That is in addition to the need
for this "fixing" to be precise enough for the TSCs to no longer look
skewed afterwards.

As per the comment ahead of it, the original purpose of the function was
to deal with TSCs halted in deep C states. While this probably explains
why only forward moves were ever expected, I don't see how this could
have been reliable if CPU0 was deep-sleeping for a sufficiently long
time. My only guess here is a hidden assumption of CPU0 never being
idle for long enough.

--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -1658,17 +1658,17 @@ struct calibration_rendezvous {
     cpumask_t cpu_calibration_map;
     atomic_t semaphore;
     s_time_t master_stime;
-    u64 master_tsc_stamp;
+    uint64_t master_tsc_stamp, max_tsc_stamp;
 };
 
 static void
 time_calibration_rendezvous_tail(const struct calibration_rendezvous *r,
-                                 uint64_t tsc)
+                                 uint64_t old_tsc, uint64_t new_tsc)
 {
     struct cpu_time_stamp *c = &this_cpu(cpu_calibration);
 
-    c->local_tsc    = tsc;
-    c->local_stime  = get_s_time_fixed(c->local_tsc);
+    c->local_tsc    = new_tsc;
+    c->local_stime  = get_s_time_fixed(old_tsc ?: new_tsc);
     c->master_stime = r->master_stime;
 
     raise_softirq(TIME_CALIBRATE_SOFTIRQ);
@@ -1683,6 +1683,7 @@ static void time_calibration_tsc_rendezv
     int i;
     struct calibration_rendezvous *r = _r;
     unsigned int total_cpus = cpumask_weight(&r->cpu_calibration_map);
+    uint64_t tsc = 0;
 
     /* Loop to get rid of cache effects on TSC skew. */
     for ( i = 4; i >= 0; i-- )
@@ -1692,8 +1693,15 @@ static void time_calibration_tsc_rendezv
             while ( atomic_read(&r->semaphore) != (total_cpus - 1) )
                 cpu_relax();
 
-            if ( r->master_tsc_stamp == 0 )
-                r->master_tsc_stamp = rdtsc_ordered();
+            if ( tsc == 0 )
+                r->master_tsc_stamp = tsc = rdtsc_ordered();
+            else if ( r->master_tsc_stamp < r->max_tsc_stamp )
+                /*
+                 * We want to avoid moving the TSC backwards for any CPU.
+                 * Use the largest value observed anywhere on the first
+                 * iteration.
+                 */
+                r->master_tsc_stamp = r->max_tsc_stamp;
             else if ( i == 0 )
                 r->master_stime = read_platform_stime(NULL);
 
@@ -1712,6 +1720,16 @@ static void time_calibration_tsc_rendezv
             while ( atomic_read(&r->semaphore) < total_cpus )
                 cpu_relax();
 
+            if ( tsc == 0 )
+            {
+                uint64_t cur;
+
+                tsc = rdtsc_ordered();
+                while ( tsc > (cur = r->max_tsc_stamp) )
+                    if ( cmpxchg(&r->max_tsc_stamp, cur, tsc) == cur )
+                        break;
+            }
+
             if ( i == 0 )
                 write_tsc(r->master_tsc_stamp);
 
@@ -1719,9 +1737,12 @@ static void time_calibration_tsc_rendezv
             while ( atomic_read(&r->semaphore) > total_cpus )
                 cpu_relax();
         }
+
+        /* Just in case a read above ended up reading zero. */
+        tsc += !tsc;
     }
 
-    time_calibration_rendezvous_tail(r, r->master_tsc_stamp);
+    time_calibration_rendezvous_tail(r, tsc, r->master_tsc_stamp);
 }
 
 /* Ordinary rendezvous function which does not modify TSC values. */
@@ -1746,7 +1767,7 @@ static void time_calibration_std_rendezv
         smp_rmb(); /* receive signal /then/ read r->master_stime */
     }
 
-    time_calibration_rendezvous_tail(r, rdtsc_ordered());
+    time_calibration_rendezvous_tail(r, 0, rdtsc_ordered());
 }
 
 /*
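
For illustration only (not part of the patch): the lock-free "record the
global maximum" pattern used for max_tsc_stamp above can be sketched
with plain C11 atomics as below. The names (record_max, max_stamp) are
hypothetical, and atomic_compare_exchange_weak() plays the role of Xen's
cmpxchg(): on failure it refreshes the expected value, so the loop
re-checks whether our reading is still the larger one.

```c
#include <stdatomic.h>
#include <stdint.h>

/* Shared maximum, analogous to r->max_tsc_stamp in the patch. */
static _Atomic uint64_t max_stamp;

/*
 * Publish our TSC reading, keeping max_stamp at the largest value any
 * CPU has observed. Mirrors the cmpxchg loop the patch adds: retry
 * while our value is still larger than the recorded maximum; a failed
 * exchange reloads 'cur' with the current contents, so we stop as soon
 * as another CPU has stored a value at least as large as ours.
 */
static void record_max(uint64_t tsc)
{
    uint64_t cur = atomic_load(&max_stamp);

    while ( tsc > cur )
        if ( atomic_compare_exchange_weak(&max_stamp, &cur, tsc) )
            break;
}
```

With several concurrent callers, max_stamp only ever increases, which is
exactly the property needed to avoid moving any CPU's TSC backwards.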



From xen-devel-bounces@lists.xenproject.org Mon Feb 01 12:45:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 12:45:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79848.145509 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6YaJ-0003mv-M3; Mon, 01 Feb 2021 12:45:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79848.145509; Mon, 01 Feb 2021 12:45:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6YaJ-0003mo-JE; Mon, 01 Feb 2021 12:45:39 +0000
Received: by outflank-mailman (input) for mailman id 79848;
 Mon, 01 Feb 2021 12:45:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6YaJ-0003mf-8E; Mon, 01 Feb 2021 12:45:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6YaJ-0001m0-2E; Mon, 01 Feb 2021 12:45:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6YaI-0006qV-PA; Mon, 01 Feb 2021 12:45:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l6YaI-0007TX-Oh; Mon, 01 Feb 2021 12:45:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=S602z46TghlwPReHBpwzGwpkXOmoRZQLPAOZePEJDPQ=; b=crz0bgs/PfSClt1fqYl62vDlF3
	6kp61Br/3S0/rS9c4O01CeevR511etvtXMJ6kRfCOHeUl245oX4lysqD9Ra4KpVyrb8BG5aX9E9iu
	ZEYRmx3FmouhJQNdl5c8eJ+eNWCGefpk1aLFy5+lI0bRltJEkg6DDevV9cvMW7W87sZs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158874-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 158874: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=ea56ebf67dd55483105aa9f9996a48213e78337e
X-Osstest-Versions-That:
    ovmf=c6be6dab9c4bdf135bc02b61ecc304d5511c3588
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 01 Feb 2021 12:45:38 +0000

flight 158874 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158874/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 ea56ebf67dd55483105aa9f9996a48213e78337e
baseline version:
 ovmf                 c6be6dab9c4bdf135bc02b61ecc304d5511c3588

Last test of basis   158757  2021-01-29 03:10:14 Z    3 days
Testing same since   158874  2021-02-01 01:54:43 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michael Kubacki <michael.kubacki@microsoft.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   c6be6dab9c..ea56ebf67d  ea56ebf67dd55483105aa9f9996a48213e78337e -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 12:47:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 12:47:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79853.145525 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6Yc3-0003um-3n; Mon, 01 Feb 2021 12:47:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79853.145525; Mon, 01 Feb 2021 12:47:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6Yc3-0003uf-0d; Mon, 01 Feb 2021 12:47:27 +0000
Received: by outflank-mailman (input) for mailman id 79853;
 Mon, 01 Feb 2021 12:47:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4N3t=HD=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l6Yc1-0003uZ-JX
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 12:47:25 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f6f4b139-ca82-4f3c-ae97-171bb9319dd0;
 Mon, 01 Feb 2021 12:47:24 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8F917ABD5;
 Mon,  1 Feb 2021 12:47:23 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f6f4b139-ca82-4f3c-ae97-171bb9319dd0
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612183643; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=06yv9ZhOizDTB6GSm2cGAelDEQnP+st4xtzMqlOQe3I=;
	b=VkUvoROOmD9+xeNxRTkLW6gM/4MlSbdbokzN1bBrkjNSwiNSGq1Y5zZqpuSVjDQdb7uvk9
	WElXiPpAOnRN4m+1t+sWq+dxgEXn1ZjJSBaBF50geLmp2rW8xKof0ieXxjQbUAGviZv4Ba
	m/wVN8585PqVLwTIoOAGZB6ozoGjOjA=
Subject: Re: Problems with APIC on versions 4.9 and later (4.8 works)
To: Claudemir Todo Bom <claudemir@todobom.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <CANyqHYfNBHnUiBiXHdt+R3mZ72oYQBnQcaWuKw5gY0uDb_ZqKw@mail.gmail.com>
 <e1d69914-c6bc-40b9-a9f4-33be4bd022b6@suse.com>
 <CANyqHYcifnCgd5C5vbYoi4CTtoMX5+jzGqHfs6JZ+e=d2Y_dmg@mail.gmail.com>
 <ff799cd4-ba42-e120-107c-5011dc803b5a@suse.com>
 <609a82d8-af12-4764-c4e0-f5ee0e11c130@suse.com>
 <CANyqHYehUWeNfVXqVJX6nrBS_CcKL1DQjyNVa1cUbvbx+zD83w@mail.gmail.com>
 <9d04edfe-0059-6fbf-c1da-2087f6190e64@suse.com>
 <CANyqHYfOC6JY978SRPAQ8Ug3GevFD=jbT6bVVET4+QOv8mv7qA@mail.gmail.com>
 <a0a7bbd0-c4c3-cfb8-5af0-a5a4aff14b76@suse.com>
 <CANyqHYeDR_NUKzPtbfLiUzxAUzerKepbU4B-_6=U-7Y6uy8gpQ@mail.gmail.com>
 <8837c3fb-1e0c-5941-258c-e76551a9e02b@suse.com>
 <8cf69fb3-5b8c-60ea-bd1c-39a0cbd5cb5c@suse.com>
 <CANyqHYeCQc2bt836uyrtm9Eo2T1uPP-+ups-ygfACu6zK36BQg@mail.gmail.com>
 <bd150f4d-4f7e-082e-6b10-03bf1eca7b80@suse.com>
 <CANyqHYeHf8f6G+U2z9A0JC049HPYvWQ+WXZYLCQyWyx5Jvq6BA@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <803a50a9-707f-14db-b523-cd1f6f685ab4@suse.com>
Date: Mon, 1 Feb 2021 13:47:23 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <CANyqHYeHf8f6G+U2z9A0JC049HPYvWQ+WXZYLCQyWyx5Jvq6BA@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 29.01.2021 20:31, Claudemir Todo Bom wrote:
> I've applied both patches; the system didn't boot. I used the following parameters:
> 
> xen: dom0_mem=1024M,max:2048M dom0_max_vcpus=4 dom0_vcpus_pin=true smt=true
> kernel: loglevel=3
> 
> The screen cleared right after the initial xen messages and froze
> there for a few minutes until I restarted the system.
> 
> I've added "vga=text-80x25,keep" to the xen command line and
> "nomodeset" to the kernel command line, hoping to get some more info,
> and surprisingly this was sufficient to make the system boot!
> 
> The system prompt took a long time to appear: the kernel driver for the usb
> keyboard loaded after 3 minutes and the driver for the usb wifi dongle
> I am using loaded about five minutes after kernel boot. I had to
> issue "ifup -a" to get an ip address from the dhcp server, and it took
> almost one minute to get it!

I was able to repro this behavior by deliberately screwing up
CPU0's TSC early during boot. This of course did make it a lot
easier to find and fix the problem. I've Cc-ed you on the full
3-patch series that I've sent a minute ago, because while you
may continue to opt for ignoring the first patch, you'll now
need the latter two. And as before, the updated debugging patch
below.

Jan

--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -1558,6 +1558,12 @@ static void local_time_calibration(void)
  * TSC Reliability check
  */
 
+static struct {//temp
+ unsigned cpu;
+ signed iter;
+ cycles_t prev, now;
+} check_log[NR_CPUS + 4];
+static unsigned check_idx;//temp
 /*
  * The Linux original version of this function is
  * Copyright (c) 2006, Red Hat, Inc., Ingo Molnar
@@ -1566,6 +1572,7 @@ static void check_tsc_warp(unsigned long
 {
     static DEFINE_SPINLOCK(sync_lock);
     static cycles_t last_tsc;
+unsigned idx, cpu = smp_processor_id();//temp
 
     cycles_t start, now, prev, end;
     int i;
@@ -1576,6 +1583,15 @@ static void check_tsc_warp(unsigned long
     end = start + tsc_khz * 20ULL;
     now = start;
 
+{//temp
+ spin_lock(&sync_lock);
+ idx = check_idx++;
+ check_log[idx].cpu = cpu;
+ check_log[idx].iter = -1;
+ check_log[idx].now = now;
+ spin_unlock(&sync_lock);
+}
+
     for ( i = 0; ; i++ )
     {
         /*
@@ -1610,7 +1626,14 @@ static void check_tsc_warp(unsigned long
         {
             spin_lock(&sync_lock);
             if ( *max_warp < prev - now )
+{//temp
                 *max_warp = prev - now;
+ idx = check_idx++;
+ check_log[idx].cpu = cpu;
+ check_log[idx].iter = i;
+ check_log[idx].prev = prev;
+ check_log[idx].now = now;
+}
             spin_unlock(&sync_lock);
         }
     }
@@ -1647,6 +1670,12 @@ static void tsc_check_reliability(void)
         cpu_relax();
 
     spin_unlock(&lock);
+{//temp
+ unsigned i;
+ printk("CHK[%2u] %lx\n", cpu, tsc_max_warp);//temp
+ for(i = 0; i < ARRAY_SIZE(check_log) && check_log[i].now; ++i)
+  printk("chk[%4u] CPU%-2u %016lx %016lx #%d\n", i, check_log[i].cpu, check_log[i].prev, check_log[i].now, check_log[i].iter);
+}
 }
 
 /*
@@ -1661,6 +1690,7 @@ struct calibration_rendezvous {
     uint64_t master_tsc_stamp, max_tsc_stamp;
 };
 
+static bool rdzv_log;//temp
 static void
 time_calibration_rendezvous_tail(const struct calibration_rendezvous *r,
                                  uint64_t old_tsc, uint64_t new_tsc)
@@ -1671,6 +1701,7 @@ time_calibration_rendezvous_tail(const s
     c->local_stime  = get_s_time_fixed(old_tsc ?: new_tsc);
     c->master_stime = r->master_stime;
 
+if(rdzv_log) printk("RDZV[%2u] t=%016lx(%016lx) s=%012lx(%012lx)\n", smp_processor_id(), c->local_tsc, r->master_tsc_stamp, c->local_stime, r->master_stime);//temp
     raise_softirq(TIME_CALIBRATE_SOFTIRQ);
 }
 
@@ -1684,7 +1715,9 @@ static void time_calibration_tsc_rendezv
     struct calibration_rendezvous *r = _r;
     unsigned int total_cpus = cpumask_weight(&r->cpu_calibration_map);
     uint64_t tsc = 0;
+uint64_t adj = 0;//temp
 
+if(rdzv_log) printk("RDZV[%2u] t=%016lx\n", smp_processor_id(), rdtsc_ordered());//temp
     /* Loop to get rid of cache effects on TSC skew. */
     for ( i = 4; i >= 0; i-- )
     {
@@ -1701,6 +1734,7 @@ static void time_calibration_tsc_rendezv
                  * Use the largest value observed anywhere on the first
                  * iteration.
                  */
+adj = r->max_tsc_stamp - r->master_tsc_stamp,//temp
                 r->master_tsc_stamp = r->max_tsc_stamp;
             else if ( i == 0 )
                 r->master_stime = read_platform_stime(NULL);
@@ -1743,6 +1777,13 @@ static void time_calibration_tsc_rendezv
     }
 
     time_calibration_rendezvous_tail(r, tsc, r->master_tsc_stamp);
+if(adj) {//temp
+ static unsigned long cnt, thr;
+ if(++cnt > thr) {
+  thr |= cnt;
+  printk("TSC adjusted by %lx\n", adj);
+ }
+}
 }
 
 /* Ordinary rendezvous function which does not modify TSC values. */
@@ -1794,6 +1835,12 @@ static void time_calibration(void *unuse
     struct calibration_rendezvous r = {
         .semaphore = ATOMIC_INIT(0)
     };
+static unsigned long cnt, thr;//temp
+if(++cnt > thr) {//temp
+ thr |= cnt;
+ printk("TSC: %ps\n", time_calibration_rendezvous_fn);
+ rdzv_log = true;
+}
 
     if ( clocksource_is_tsc() )
     {
@@ -1808,6 +1855,10 @@ static void time_calibration(void *unuse
     on_selected_cpus(&r.cpu_calibration_map,
                      time_calibration_rendezvous_fn,
                      &r, 1);
+if(rdzv_log) {//temp
+ rdzv_log = false;
+ printk("TSC: end rendezvous\n");
+}
 }
 
 static struct cpu_time_stamp ap_bringup_ref;
@@ -1904,6 +1955,7 @@ void init_percpu_time(void)
     }
     t->stamp.local_tsc   = tsc;
     t->stamp.local_stime = now;
+printk("INIT[%2u] t=%016lx s=%012lx m=%012lx\n", smp_processor_id(), tsc, now, t->stamp.master_stime);//temp
 }
 
 /*
@@ -2046,6 +2098,7 @@ static int __init verify_tsc_reliability
      * While with constant-rate TSCs the scale factor can be shared, when TSCs
      * are not marked as 'reliable', re-sync during rendezvous.
      */
+printk("TSC: c=%d r=%d\n", !!boot_cpu_has(X86_FEATURE_CONSTANT_TSC), !!boot_cpu_has(X86_FEATURE_TSC_RELIABLE));//temp
     if ( boot_cpu_has(X86_FEATURE_CONSTANT_TSC) &&
          !boot_cpu_has(X86_FEATURE_TSC_RELIABLE) )
         time_calibration_rendezvous_fn = time_calibration_tsc_rendezvous;
@@ -2061,6 +2114,7 @@ int __init init_xen_time(void)
 {
     tsc_check_writability();
 
+printk("TSC: c=%d r=%d\n", !!boot_cpu_has(X86_FEATURE_CONSTANT_TSC), !!boot_cpu_has(X86_FEATURE_TSC_RELIABLE));//temp
     open_softirq(TIME_CALIBRATE_SOFTIRQ, local_time_calibration);
 
     /* NB. get_wallclock_time() can take over one second to execute. */


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 12:50:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 12:50:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79857.145536 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6Yed-0004p5-Hr; Mon, 01 Feb 2021 12:50:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79857.145536; Mon, 01 Feb 2021 12:50:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6Yed-0004oy-Eh; Mon, 01 Feb 2021 12:50:07 +0000
Received: by outflank-mailman (input) for mailman id 79857;
 Mon, 01 Feb 2021 12:50:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4N3t=HD=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l6Yeb-0004dt-TD
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 12:50:05 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0701a4c7-0ba4-4c56-a9e6-7ff453034b20;
 Mon, 01 Feb 2021 12:50:04 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 79872AB92;
 Mon,  1 Feb 2021 12:50:03 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0701a4c7-0ba4-4c56-a9e6-7ff453034b20
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612183803; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=9tYLAgKn1BFPh0ol2qf612xGo0jMgnl+1NkEFaocbWM=;
	b=ITDCeuVqLg7W/tumWZPDMw5oHaH8ploMK4RgUaRFAfNt6fIfJ78dXL6DsjIGTyoZfTTu/K
	aS+PEWtTAxZTz4OwVAVwA9mcjZBrdglErreem790oyC6hMqiGsPRZeBs5nJr9gUXjyzFMP
	0WC0urZaskRmWg6gQKCyqHSRwliLBWI=
Subject: Re: [PATCH v3] x86/mm: Short circuit damage from "fishy"
 ref/typecount failure
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 Tamas K Lengyel <tamas@tklengyel.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20210119094122.23713-1-andrew.cooper3@citrix.com>
 <20210119130254.27058-1-andrew.cooper3@citrix.com>
 <98f64276-ec5d-7242-f10f-126fe7ee1f7e@suse.com>
 <45f5d1f0-1a89-706f-f202-91ddb1d8b094@citrix.com>
 <dd59ad75-c0f1-4d14-a0b6-06dd9e095b36@suse.com>
 <0c279f99-e74e-ced0-4947-b9a104160671@citrix.com>
 <a2ea0bc0-3644-a2c1-c0b2-f3085b1aa0b5@suse.com>
 <a0456589-4626-fc51-d585-9159d6ea3010@citrix.com>
 <d0fc73c7-3dde-b303-dba0-7cf65e1ef0e8@suse.com>
 <6fb91ae1-4b91-f76a-1d38-1c528ab43a9c@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <222ce537-b4fc-cf45-1bd0-0aaa3293d8ea@suse.com>
Date: Mon, 1 Feb 2021 13:50:02 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <6fb91ae1-4b91-f76a-1d38-1c528ab43a9c@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 29.01.2021 18:17, Andrew Cooper wrote:
> On 29/01/2021 16:31, Jan Beulich wrote:
>> On 29.01.2021 17:17, Andrew Cooper wrote:
>>> On 29/01/2021 11:29, Jan Beulich wrote:
>>>> On 25.01.2021 18:59, Andrew Cooper wrote:
>>>>> On 20/01/2021 08:06, Jan Beulich wrote:
>>>>>> Also, as far as "impossible" here goes - the constructs all
>>>>>> anyway exist only to deal with what we consider impossible.
>>>>>> The question therefore really is of almost exclusively
>>>>>> theoretical nature, and hence something like a counter
>>>>>> possibly overflowing imo needs to be accounted for as
>>>>>> theoretically possible, albeit impossible with today's
>>>>>> computers and realistic timing assumptions. If a counter
>>>>>> overflow occurred, it definitely wouldn't be because of a
>>>>>> bug in Xen, but because of abnormal behavior elsewhere.
>>>>>> Hence I remain unconvinced it is appropriate to deal with
>>>>>> the situation by BUG().
>>>>> I'm not sure how to be any clearer.
>>>>>
>>>>> I am literally not changing the current behaviour.  Xen *will* hit a
>>>>> BUG() if any of these domain_crash() paths are taken.
>>>>>
>>>>> If you do not believe me, then please go and actually check what happens
>>>>> when simulating a ref-acquisition failure.
>>>> So I've now also played the same game on the ioreq path (see
>>>> debugging patch below, and again with some non-"//temp"
>>>> changes actually improving overall behavior in that "impossible"
>>>> case). No BUG()s hit, no leaks (thanks to the extra changes),
>>>> no other anomalies observed.
>>>>
>>>> Hence I'm afraid it is now really up to you to point out the
>>>> specific BUG()s (and additional context as necessary) that you
>>>> either believe could be hit, or that you have observed being hit.
>>> The refcounting logic was taken verbatim from ioreq, with the only
>>> difference being an order greater than 0.Â  The logic is also identical
>>> to the vlapic logic.
>>>
>>> And the reason *why* it bugs is obvious - the cleanup logic
>>> unconditionally put()'s refs it never took to begin with, and hits
>>> underflow bugchecks.
>> In current staging, neither vmx_alloc_vlapic_mapping() nor
>> hvm_alloc_ioreq_mfn() put any refs they couldn't get. Hence
>> my failed attempt to repro your claimed misbehavior.
> 
> I think I've figured out what is going on.
> 
> They *look* as if they do, but the logic is deceptive.
> 
> We skip both puts in free_*() if the typeref failed, and rely on the
> fact that the frame(s) are *also* on the domheap list for
> relinquish_resources() to put the acquire ref.
> 
> Yet another bizarre refcounting rule/behaviour which isn't written down.

But that's not the case - extra pages land on their own
list, which relinquish_resources() doesn't iterate. Hence
me saying we leak these pages on the domain_crash() paths,
and hence my repro attempt patches containing adjustments
to at least try to free those pages on those paths.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 12:56:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 12:56:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79861.145549 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6Yl8-00055V-Fj; Mon, 01 Feb 2021 12:56:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79861.145549; Mon, 01 Feb 2021 12:56:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6Yl8-00055O-Ci; Mon, 01 Feb 2021 12:56:50 +0000
Received: by outflank-mailman (input) for mailman id 79861;
 Mon, 01 Feb 2021 12:56:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4N3t=HD=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l6Yl6-00055J-T0
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 12:56:48 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e2881815-a5ec-4f7b-b9f6-bd593f575327;
 Mon, 01 Feb 2021 12:56:47 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 28B98AC45;
 Mon,  1 Feb 2021 12:56:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e2881815-a5ec-4f7b-b9f6-bd593f575327
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612184206; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=qp4xt7Sbe1Inec8fvqcANn5GNAzlVlkGu5IDLxuLTDk=;
	b=l5NtTT+lBrMvT2BKNhTyUocHM2OFCguZ2amYNDUvPbml/oQ4XK3f7aUopRsUd08Fq2ofsO
	r62gmH76pdOQQnzImkGN68JZweg+oU2J7LXHPVMMHAkZio/uQlSiMA79tigy8fxO8In85Z
	hiPKFr4fAsuIcuWwHlhY3ZihbKWo0J4=
Subject: Re: [PATCH v3 7/7] xen/memory: Fix mapping grant tables with
 XENMEM_acquire_resource
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Julien Grall
 <julien@xen.org>, Paul Durrant <paul@xen.org>,
 =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 Hubert Jasudowicz <hubert.jasudowicz@cert.pl>,
 Tamas K Lengyel <tamas@tklengyel.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20210112194841.1537-1-andrew.cooper3@citrix.com>
 <20210112194841.1537-8-andrew.cooper3@citrix.com>
 <65d256c1-e9c0-3859-b6fc-d3b7a41ef964@suse.com>
 <836bc7bf-7342-96b4-253c-dedb00da92f6@citrix.com>
 <46d73c0a-035c-db7f-3b1a-87ad88bc2518@suse.com>
 <4024cc9e-8c9a-601a-8f9d-fa480c42d569@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <519d6bea-ab4b-cde1-3d84-fcfead98f292@suse.com>
Date: Mon, 1 Feb 2021 13:56:45 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <4024cc9e-8c9a-601a-8f9d-fa480c42d569@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 29.01.2021 19:18, Andrew Cooper wrote:
> On 29/01/2021 09:46, Jan Beulich wrote:
>> On 29.01.2021 00:44, Andrew Cooper wrote:
>>> On 15/01/2021 16:12, Jan Beulich wrote:
>>>> On 12.01.2021 20:48, Andrew Cooper wrote:
>>>>> --- a/xen/arch/x86/mm.c
>>>>> +++ b/xen/arch/x86/mm.c
>>>>> @@ -4628,7 +4628,6 @@ int arch_acquire_resource(struct domain *d, unsigned int type,
>>>>>          if ( id != (unsigned int)ioservid )
>>>>>              break;
>>>>>  
>>>>> -        rc = 0;
>>>>>          for ( i = 0; i < nr_frames; i++ )
>>>>>          {
>>>>>              mfn_t mfn;
>>>> How "good" are our chances that older gcc won't recognize that
>>>> without this initialization ...
>>>>
>>>>> @@ -4639,6 +4638,9 @@ int arch_acquire_resource(struct domain *d, unsigned int type,
>>>>>  
>>>>>              mfn_list[i] = mfn_x(mfn);
>>>>>          }
>>>>> +        if ( i == nr_frames )
>>>>> +            /* Success.  Passed nr_frames back to the caller. */
>>>>> +            rc = nr_frames;
>>>> ... rc will nevertheless not remain uninitialized when nr_frames
>>>> is zero?
>>> I don't anticipate this function getting complicated enough for us to
>>> need to rely on tricks like that to spot bugs.
>>>
>>> AFAICT, it would take a rather larger diffstat to make it "uninitialised
>>> clean" again.
>> Well, we'll see how it goes then. We still allow rather ancient
>> gcc, and I will eventually run into a build issue here once
>> having rebased accordingly, if such old gcc won't cope.
> 
> But those problematic compilers have problems spotting that a variable
> is actually initialised on all paths.
> 
> This case is the opposite.  It is unconditionally initialised to -EINVAL
> (just outside of the context above), which breaks some "the compiler
> would warn us about error paths" logic that we try to use.

Oh, right you are. I'm sorry for the noise then.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 13:01:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 13:01:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79865.145561 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6Yph-00066C-7a; Mon, 01 Feb 2021 13:01:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79865.145561; Mon, 01 Feb 2021 13:01:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6Ypg-000663-W0; Mon, 01 Feb 2021 13:01:32 +0000
Received: by outflank-mailman (input) for mailman id 79865;
 Mon, 01 Feb 2021 13:01:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IWGu=HD=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l6Ypf-00065x-P7
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 13:01:31 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a9dd9674-1a73-480c-b1e5-e583ea8d4812;
 Mon, 01 Feb 2021 13:01:29 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a9dd9674-1a73-480c-b1e5-e583ea8d4812
Subject: Re: [PATCH v8 12/16] xen/domctl: Add XEN_DOMCTL_vmtrace_op
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>,
	=?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>, "Jan
 Beulich" <JBeulich@suse.com>, Wei Liu <wl@xen.org>, Jun Nakajima
	<jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>, Tamas K Lengyel
	<tamas@tklengyel.com>
References: <20210130025852.12430-1-andrew.cooper3@citrix.com>
 <20210130025852.12430-13-andrew.cooper3@citrix.com>
 <YBftkqLHfwIzpaN9@Air-de-Roger>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <316b907e-ff40-039c-374a-c07fbb33bbc2@citrix.com>
Date: Mon, 1 Feb 2021 13:00:47 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
In-Reply-To: <YBftkqLHfwIzpaN9@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
MIME-Version: 1.0

On 01/02/2021 12:01, Roger Pau Monné wrote:
> On Sat, Jan 30, 2021 at 02:58:48AM +0000, Andrew Cooper wrote:
>> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
>> index 12b961113e..a64c4e4177 100644
>> --- a/xen/arch/x86/hvm/vmx/vmx.c
>> +++ b/xen/arch/x86/hvm/vmx/vmx.c
>> @@ -2261,6 +2261,157 @@ static bool vmx_get_pending_event(struct vcpu *v, struct x86_event *info)
>>      return true;
>>  }
>>  
>> +/*
>> + * We only let vmtrace agents see and modify a subset of bits in MSR_RTIT_CTL.
>> + * These all pertain to data-emitted into the trace buffer(s).  Must not
>> + * include controls pertaining to the structure/position of the trace
>> + * buffer(s).
>> + */
>> +#define RTIT_CTL_MASK                                                   \
>> +    (RTIT_CTL_TRACE_EN | RTIT_CTL_OS | RTIT_CTL_USR | RTIT_CTL_TSC_EN | \
>> +     RTIT_CTL_DIS_RETC | RTIT_CTL_BRANCH_EN)
>> +
>> +/*
>> + * Status bits restricted to the first-gen subset (i.e. no further CPUID
>> + * requirements.)
>> + */
>> +#define RTIT_STATUS_MASK                                                \
>> +    (RTIT_STATUS_FILTER_EN | RTIT_STATUS_CONTEXT_EN | RTIT_STATUS_TRIGGER_EN | \
>> +     RTIT_STATUS_ERROR | RTIT_STATUS_STOPPED)
>> +
>> +static int vmtrace_get_option(struct vcpu *v, uint64_t key, uint64_t *output)
>> +{
>> +    const struct vcpu_msrs *msrs = v->arch.msrs;
>> +
>> +    switch ( key )
>> +    {
>> +    case MSR_RTIT_OUTPUT_MASK:
> Is there any value in returning the raw value of this MSR instead of
> just using XEN_DOMCTL_vmtrace_output_position?

Yes, but for interface reasons.

There are deliberately some common interfaces (for the subset of options
expected to be useful), and some platform-specific ones (because there's
no possible way to encode all of the options in some "common" interface).

Yes - there is some overlap between the two sets - that is unavoidable
IMO.  A user of this interface already needs platform-specific knowledge
because it has to interpret the contents of the trace buffer.

Future extensions to this interface would be setting up the CR3 filter
and range filters, which definitely shouldn't be common, and can be
added without new subops in the current model.

> The size of the buffer should be known to user-space, and then setting
> the offset could be done by adding a XEN_DOMCTL_vmtrace_set_output_position?
>
> Also the contents of this MSR depend on whether ToPA mode is used, and
> that's not under the control of the guest. So if Xen is switched to
> use ToPA mode at some point the value of this MSR might not be what a
> user of the interface expects.
>
> From an interface PoV it might be better to offer:
>
> XEN_DOMCTL_vmtrace_get_limit
> XEN_DOMCTL_vmtrace_get_output_position
> XEN_DOMCTL_vmtrace_set_output_position
>
> IMO, as that would be compatible with ToPA if we ever switch to it.

ToPA is definitely more complicated.  We'd need to stitch the disparate
buffers back together into one logical view, at which point
get_output_position becomes more complicated.

As for set_output_position, that's not useful.  You either want to keep
the position as-is, or reset back to 0, hence having a platform-neutral
reset option.

However, based on this reasoning, I think I should drop access to
MSR_RTIT_OUTPUT_MASK entirely.  Neither half is useful for userspace to
access in a platform-specific way, and disallowing access entirely will
simplify adding ToPA support in the future.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 13:03:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 13:03:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79866.145572 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6Yru-0006Dn-Ff; Mon, 01 Feb 2021 13:03:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79866.145572; Mon, 01 Feb 2021 13:03:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6Yru-0006Dg-Cc; Mon, 01 Feb 2021 13:03:50 +0000
Received: by outflank-mailman (input) for mailman id 79866;
 Mon, 01 Feb 2021 13:03:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4N3t=HD=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l6Yrs-0006Db-Re
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 13:03:48 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 40d6fdb7-9160-48cc-92b1-7f08cbbff9a4;
 Mon, 01 Feb 2021 13:03:47 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 35DF4B1A6;
 Mon,  1 Feb 2021 13:03:46 +0000 (UTC)
X-Inumbo-ID: 40d6fdb7-9160-48cc-92b1-7f08cbbff9a4
Subject: Re: [PATCH v8 06/16] xen/memory: Fix mapping grant tables with
 XENMEM_acquire_resource
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Julien Grall <julien@xen.org>,
 Paul Durrant <paul@xen.org>, =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?=
 <michal.leszczynski@cert.pl>, Hubert Jasudowicz <hubert.jasudowicz@cert.pl>,
 Tamas K Lengyel <tamas@tklengyel.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
References: <20210130025852.12430-1-andrew.cooper3@citrix.com>
 <20210130025852.12430-7-andrew.cooper3@citrix.com>
 <YBfTpTzi+wo7AFSH@Air-de-Roger>
 <53a26fb9-9c43-d1c4-90cd-bb29d57e106b@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <28e8d116-9d5f-3c73-b366-63d5b047b085@suse.com>
Date: Mon, 1 Feb 2021 14:03:44 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <53a26fb9-9c43-d1c4-90cd-bb29d57e106b@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 01.02.2021 12:11, Andrew Cooper wrote:
> On 01/02/2021 10:10, Roger Pau Monné wrote:
>> On Sat, Jan 30, 2021 at 02:58:42AM +0000, Andrew Cooper wrote:
>>> @@ -636,15 +662,45 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
>>>                      compat_frame_list[i] = frame;
>>>                  }
>>>  
>>> -                if ( __copy_to_compat_offset(cmp.mar.frame_list, 0,
>>> +                if ( __copy_to_compat_offset(cmp.mar.frame_list, start_extent,
>>>                                               compat_frame_list,
>>> -                                             cmp.mar.nr_frames) )
>>> +                                             done) )
>>>                      return -EFAULT;
>> Is it fine to return with a possibly pending continuation already
>> encoded here?
>>
>> Other places seem to crash the domain when that happens.
> 
> Hmm.  This is all a total mess.  (Elsewhere the error handling is also
> broken - a caller who receives an error can't figure out how to recover)
> 
> But yes - I think you're right - the only thing we can do here is `goto
> crash;` and woe betide any 32bit kernel which passes a pointer to a
> read-only buffer.

I'd like to ask you to reconsider the "goto crash", both the one
you mention above and the other one already present in the patch.
Wiring all the cases where we mean to crash the guest into a
single domain_crash() invocation has the downside that when
observing such a case one can't remotely know which path has led
there. Therefore I'd like to suggest individual domain_crash()
invocations on every affected path. Elsewhere in the file there
already is such an instance, commented "Cannot cancel the
continuation...".

Jan


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 13:07:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 13:07:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79869.145585 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6YvO-0006NY-0i; Mon, 01 Feb 2021 13:07:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79869.145585; Mon, 01 Feb 2021 13:07:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6YvN-0006NR-TJ; Mon, 01 Feb 2021 13:07:25 +0000
Received: by outflank-mailman (input) for mailman id 79869;
 Mon, 01 Feb 2021 13:07:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IWGu=HD=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l6YvM-0006NL-Uy
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 13:07:24 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b9a56ccb-97b0-45f0-b3fd-bfecb4f1ac0a;
 Mon, 01 Feb 2021 13:07:23 +0000 (UTC)
X-Inumbo-ID: b9a56ccb-97b0-45f0-b3fd-bfecb4f1ac0a
Subject: Re: [PATCH v8 00/16] acquire_resource size and external IPT
 monitoring
To: Oleksandr <olekstysh@gmail.com>, Xen-devel
	<xen-devel@lists.xenproject.org>
CC: Jan Beulich <JBeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Ian Jackson
	<iwj@xenproject.org>, Anthony PERARD <anthony.perard@citrix.com>, Jun
 Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>,
	=?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>, Tamas
 K Lengyel <tamas@tklengyel.com>
References: <20210130025852.12430-1-andrew.cooper3@citrix.com>
 <911270bf-0077-b70e-c224-712dfa535afa@gmail.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <fceef592-e637-e985-8217-11546e088027@citrix.com>
Date: Mon, 1 Feb 2021 13:07:04 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
In-Reply-To: <911270bf-0077-b70e-c224-712dfa535afa@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
MIME-Version: 1.0

On 01/02/2021 12:34, Oleksandr wrote:
>
> On 30.01.21 04:58, Andrew Cooper wrote:
>
> Hi Andrew
>
>> Combined series (as they are dependent).  First, the resource size
>> fixes, and
>> then the external IPT monitoring built on top.
>>
>> Posting in full for reference, but several patches are ready to go
>> in.  Those
>> in need of review are patch 6, 8 and 12.
>>
>> See individual patches for changes.  The major work was rebasing over
>> the
>> ARM/IOREQ series which moved a load of code which this series was
>> bugfixing.
>
> Looks like some of these patches have already been merged. So I did a
> preliminary test of current staging
> (9dc687f155a57216b83b17f9cde55dd43e06b0cd x86/debug: fix page-overflow
> bug in dbg_rw_guest_mem) on Arm *with* IOREQ enabled.
>
> I didn't notice any regressions with IOREQ on Arm))

Fantastic!

Tamas and I did do extended testing on the subset which got committed,
before it went in, and it is all fixing of corner cases, rather than
fundamentally changing how things worked.


One query I did leave on IRC, which hasn't had an answer.

What is the maximum number of vcpus in an ARM guest?  You moved an
x86-ism "max 128 vcpus" into common code.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 13:08:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 13:08:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79872.145596 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6Ywa-0006VQ-FB; Mon, 01 Feb 2021 13:08:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79872.145596; Mon, 01 Feb 2021 13:08:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6Ywa-0006VJ-C1; Mon, 01 Feb 2021 13:08:40 +0000
Received: by outflank-mailman (input) for mailman id 79872;
 Mon, 01 Feb 2021 13:08:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jUbX=HD=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1l6YwY-0006VA-Od
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 13:08:38 +0000
Received: from mail-ej1-x62d.google.com (unknown [2a00:1450:4864:20::62d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 27b28c91-9e9f-4dcb-b6e3-623c84fb84b9;
 Mon, 01 Feb 2021 13:08:38 +0000 (UTC)
Received: by mail-ej1-x62d.google.com with SMTP id sa23so7960844ejb.0
 for <xen-devel@lists.xenproject.org>; Mon, 01 Feb 2021 05:08:37 -0800 (PST)
Received: from [192.168.1.36] (7.red-83-57-171.dynamicip.rima-tde.net.
 [83.57.171.7])
 by smtp.gmail.com with ESMTPSA id r26sm9134948edc.95.2021.02.01.05.08.35
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 01 Feb 2021 05:08:36 -0800 (PST)
X-Inumbo-ID: 27b28c91-9e9f-4dcb-b6e3-623c84fb84b9
Sender: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philippe.mathieu.daude@gmail.com>
Subject: Re: [PATCH v3 3/7] accel/xen: Incorporate xen-mapcache.c
To: =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>,
 qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Eduardo Habkost <ehabkost@redhat.com>, Paul Durrant <paul@xen.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 xen-devel@lists.xenproject.org, Anthony Perard <anthony.perard@citrix.com>,
 Paolo Bonzini <pbonzini@redhat.com>
References: <20210201112905.545144-1-f4bug@amsat.org>
 <20210201112905.545144-4-f4bug@amsat.org>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <f4bug@amsat.org>
Message-ID: <8c7281bb-5688-5ef4-4841-3181bdb02bfc@amsat.org>
Date: Mon, 1 Feb 2021 14:08:34 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <20210201112905.545144-4-f4bug@amsat.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 2/1/21 12:29 PM, Philippe Mathieu-Daudé wrote:
> xen-mapcache.c contains accelerator related routines,
> not particular to the X86 HVM machine. Move this file
> to accel/xen/ (adapting the buildsys machinery).
> 
> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
> ---
>  meson.build                           |  3 +++
>  accel/xen/trace.h                     |  1 +
>  {hw/i386 => accel}/xen/xen-mapcache.c |  0
>  hw/i386/xen/xen-hvm.c                 |  1 -
>  accel/xen/meson.build                 |  5 ++++-
>  accel/xen/trace-events                | 10 ++++++++++
>  hw/i386/xen/meson.build               |  1 -
>  hw/i386/xen/trace-events              |  6 ------
>  8 files changed, 18 insertions(+), 9 deletions(-)
>  create mode 100644 accel/xen/trace.h
>  rename {hw/i386 => accel}/xen/xen-mapcache.c (100%)
>  create mode 100644 accel/xen/trace-events
...
> diff --git a/accel/xen/trace-events b/accel/xen/trace-events
> new file mode 100644
> index 00000000000..30bf4f42283
> --- /dev/null
> +++ b/accel/xen/trace-events
> @@ -0,0 +1,10 @@
> +# See docs/devel/tracing.txt for syntax documentation.
> +
> +# xen-hvm.c
> +xen_ram_alloc(unsigned long ram_addr, unsigned long size) "requested: 0x%lx, size 0x%lx"

Self-Nack, this should not be here ^

> +
> +# xen-mapcache.c
> +xen_map_cache(uint64_t phys_addr) "want 0x%"PRIx64
> +xen_remap_bucket(uint64_t index) "index 0x%"PRIx64
> +xen_map_cache_return(void* ptr) "%p"


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 13:09:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 13:09:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79874.145609 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6Yx4-0006e4-O1; Mon, 01 Feb 2021 13:09:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79874.145609; Mon, 01 Feb 2021 13:09:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6Yx4-0006dx-L8; Mon, 01 Feb 2021 13:09:10 +0000
Received: by outflank-mailman (input) for mailman id 79874;
 Mon, 01 Feb 2021 13:09:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jUbX=HD=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1l6Yx3-0006dq-BZ
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 13:09:09 +0000
Received: from mail-ej1-x635.google.com (unknown [2a00:1450:4864:20::635])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 32e97fcf-6f1c-48bb-a6e5-0d55376c0de3;
 Mon, 01 Feb 2021 13:09:08 +0000 (UTC)
Received: by mail-ej1-x635.google.com with SMTP id w1so24205084ejf.11
 for <xen-devel@lists.xenproject.org>; Mon, 01 Feb 2021 05:09:08 -0800 (PST)
Received: from [192.168.1.36] (7.red-83-57-171.dynamicip.rima-tde.net.
 [83.57.171.7])
 by smtp.gmail.com with ESMTPSA id jt8sm8070132ejc.40.2021.02.01.05.09.06
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 01 Feb 2021 05:09:06 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
X-Inumbo-ID: 32e97fcf-6f1c-48bb-a6e5-0d55376c0de3
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=sender:subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=rP9I+61hg7LNIUhfLLrTy63wjzGJQeyog0qeGD9KQOk=;
        b=PqljVBE7qJ3DWkvlnvaUln3090pAAuOSz/DQV8eqwDSLCduh6EOA+HKQcrTK05r7O7
         wOhRqk2exJ1038aBtNY2ugNVEx5Ei7H6zj9E0YuLI786lxJ44EMzOVYuamswZPG/GWnW
         u+mX0q0CPvkWkDXyWYjrTPeO8UwmavQSnDmcYCJHj20fvEG7FCo9FcrK28NH0h+JHESU
         KSJEacavGpsmlcbRRSwC/6lnj5dPr9JHXGC7s6IwcPgr8nosYzQna+4/t+8P+xhE6GHg
         2pKpb+tdPYokxpZVdworJIEMcU4qE+hb+Q1kATfChus1ukNQZUYjiiMAPLS/EfUCu/t0
         Ntfg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:sender:subject:to:cc:references:from:message-id
         :date:user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=rP9I+61hg7LNIUhfLLrTy63wjzGJQeyog0qeGD9KQOk=;
        b=Q5WNfAY/Xjp4zH69hFaFg2vMd1Ldeg0FkUl5jLtz0tYrV4+9YUu76yBYDlvDQL8FL1
         HDOcAmb6yuHOfFEIYAtKQDPSiUHba42owu74yNtAYBBx/OWiKPZ8lwd3HC9pqOKhVDsK
         AjYporJdxp0MOF4VDpOreQrom0QGQDlH4vtA2olBdBz+0LhvYpnfJYS3DiBc//GMYXG5
         RuyrKRz0y6EVWoOyc7Jj4BtKym9shJP+RgiXtDQKiXwEzO3X0dXpNluqtN1anMW8vEmn
         1dCKoxJfeseAZA4gzwjk8fiMs8RZ29oCwsOSHuL14tE2SUUgu2Ps5dVT19VdsWRBpkIt
         /9VQ==
X-Gm-Message-State: AOAM533x1HETmQRdAhilcsc9o5ieUVMJOCgQz/8sPPosTnJZ2GPrqzP/
	uUBJxhVSzq6LUj8nGsw1K7Y=
X-Google-Smtp-Source: ABdhPJw40nPrERTfiMARHLjmWXlD4UMilhu04GifoKIpMYQFjX66d7uewWoAbwIsEoqdkYi/phZtTg==
X-Received: by 2002:a17:906:8410:: with SMTP id n16mr17948338ejx.551.1612184947588;
        Mon, 01 Feb 2021 05:09:07 -0800 (PST)
Sender: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philippe.mathieu.daude@gmail.com>
Subject: Re: [PATCH v3 5/7] hw/xen: Make xen_shutdown_fatal_error() available
 out of X86 HVM
To: =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>,
 qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Eduardo Habkost <ehabkost@redhat.com>, Paul Durrant <paul@xen.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 xen-devel@lists.xenproject.org, Anthony Perard <anthony.perard@citrix.com>,
 Paolo Bonzini <pbonzini@redhat.com>
References: <20210201112905.545144-1-f4bug@amsat.org>
 <20210201112905.545144-6-f4bug@amsat.org>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <f4bug@amsat.org>
Message-ID: <a77a620d-beca-0d13-6ffe-861528c6cbc4@amsat.org>
Date: Mon, 1 Feb 2021 14:09:05 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <20210201112905.545144-6-f4bug@amsat.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 2/1/21 12:29 PM, Philippe Mathieu-Daudé wrote:
> xen_shutdown_fatal_error() is also used by XEN_PV.
> 
> This fixes when XEN_PV without XEN_FV:
> 
>   /usr/bin/ld: libqemu-x86_64-softmmu.fa.p/hw_xen_xen_pt_config_init.c.o: in function `xen_pt_status_reg_init':
>   hw/xen/xen_pt_config_init.c:281: undefined reference to `xen_shutdown_fatal_error'
>   /usr/bin/ld: hw/xen/xen_pt_config_init.c:275: undefined reference to `xen_shutdown_fatal_error'
>   /usr/bin/ld: libqemu-x86_64-softmmu.fa.p/hw_xen_xen_pt.c.o: in function `xen_pt_pci_read_config':
>   hw/xen/xen_pt.c:220: undefined reference to `xen_shutdown_fatal_error'
>   /usr/bin/ld: libqemu-x86_64-softmmu.fa.p/hw_xen_xen_pt.c.o: in function `xen_pt_pci_write_config':
>   hw/xen/xen_pt.c:369: undefined reference to `xen_shutdown_fatal_error'
> 
> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
> ---
>  hw/i386/xen/xen-hvm.c | 13 -------------
>  hw/xen/xen-utils.c    | 25 +++++++++++++++++++++++++
>  hw/xen/meson.build    |  1 +
>  3 files changed, 26 insertions(+), 13 deletions(-)
>  create mode 100644 hw/xen/xen-utils.c
> 
> diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
> index 7156ab13329..69196754a30 100644
> --- a/hw/i386/xen/xen-hvm.c
> +++ b/hw/i386/xen/xen-hvm.c
> @@ -28,7 +28,6 @@
>  #include "qemu/error-report.h"
>  #include "qemu/main-loop.h"
>  #include "qemu/range.h"
> -#include "sysemu/runstate.h"

^ self-nack, was not supposed to remove this.

>  #include "sysemu/sysemu.h"
>  #include "sysemu/xen.h"
>  #include "trace.h"
> @@ -1570,18 +1569,6 @@ void xen_register_framebuffer(MemoryRegion *mr)
>      framebuffer = mr;
>  }


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 13:18:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 13:18:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79880.145621 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6Z6C-0007fs-Ky; Mon, 01 Feb 2021 13:18:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79880.145621; Mon, 01 Feb 2021 13:18:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6Z6C-0007fl-Hy; Mon, 01 Feb 2021 13:18:36 +0000
Received: by outflank-mailman (input) for mailman id 79880;
 Mon, 01 Feb 2021 13:18:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4N3t=HD=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l6Z6B-0007fg-7p
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 13:18:35 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c7c84440-30c2-45f8-8432-b0aea597fe99;
 Mon, 01 Feb 2021 13:18:33 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 737CDABD5;
 Mon,  1 Feb 2021 13:18:32 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c7c84440-30c2-45f8-8432-b0aea597fe99
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612185512; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Au27WK7IriauZdi7YfA2QDsv32IX8X0J3WgdkQPCN3U=;
	b=Wv0OrEc5RvLuLfZs+zdQtz6UOUhTKiDZNfdRcmVfF02n0AY0wmYSdQD69gJWbeYujrqNAf
	PYfuqejPA0PY6Ou4G2Beai0nG/mdcM3v7G/Yjy51lTcNCcdbaE+mYTTyYjDfPcGFMP6wOv
	4S20LiV+3Sz97jNUG6RW0/PhSaleAy0=
Subject: Re: [PATCH v8 08/16] xen/domain: Add vmtrace_size domain creation
 parameter
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 Tamas K Lengyel <tamas@tklengyel.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20210130025852.12430-1-andrew.cooper3@citrix.com>
 <20210130025852.12430-9-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <3cf886f6-db7f-ccc1-5ef0-6fd8ccb38caf@suse.com>
Date: Mon, 1 Feb 2021 14:18:31 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <20210130025852.12430-9-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.01.2021 03:58, Andrew Cooper wrote:
> +static int vmtrace_alloc_buffer(struct vcpu *v)
> +{
> +    struct domain *d = v->domain;
> +    struct page_info *pg;
> +    unsigned int i;
> +
> +    if ( !d->vmtrace_size )
> +        return 0;
> +
> +    pg = alloc_domheap_pages(d, get_order_from_bytes(d->vmtrace_size),
> +                             MEMF_no_refcount);
> +    if ( !pg )
> +        return -ENOMEM;
> +
> +    /*
> +     * Getting the reference counting correct here is hard.
> +     *
> +     * All pages are now on the domlist.  They, or subranges within, will be

"domlist" is too imprecise, as there's no list with this name. It's
extra_page_list in this case (see also below).

> +     * freed when their reference count drops to zero, which may happen any time
> +     * between now and the domain teardown path.
> +     */
> +
> +    for ( i = 0; i < (d->vmtrace_size >> PAGE_SHIFT); i++ )
> +        if ( unlikely(!get_page_and_type(&pg[i], d, PGT_writable_page)) )
> +            goto refcnt_err;
> +
> +    /*
> +     * We must only let vmtrace_free_buffer() take any action in the success
> +     * case when we've taken all the refs it intends to drop.
> +     */
> +    v->vmtrace.pg = pg;
> +
> +    return 0;
> +
> + refcnt_err:
> +    /*
> +     * In the failure case, we must drop all the acquired typerefs thus far,
> +     * skip vmtrace_free_buffer(), and leave domain_relinquish_resources() to
> +     * drop the alloc refs on any remaining pages - some pages could already
> +     * have been freed behind our backs.
> +     */
> +    while ( i-- )
> +        put_page_and_type(&pg[i]);
> +
> +    return -ENODATA;
> +}

As said in reply on the other thread, PGC_extra pages don't get
freed automatically. I too initially thought they would, but
(re-)learned otherwise when trying to repro your claims on that
other thread. For all pages you've managed to get the writable
ref, freeing is easily done by prefixing the loop body above by
put_page_alloc_ref(). For all other pages the best you can do (I
think; see the debugging patches I had sent on that other
thread) is to try get_page() - if it succeeds, calling
put_page_alloc_ref() is allowed. Otherwise we can only leak the
respective page (unless going to further extents with trying to
recover from the "impossible"), or assume the failure here was
because it did get freed already.
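The unwind the patch performs on failure (dropping exactly the typerefs
taken so far, via the `while ( i-- )` loop) can be illustrated outside
Xen with a toy model; the `toy_*` names and plain counters below are
stand-ins for the real get_page_and_type()/put_page_and_type(), not
actual hypervisor code:

```c
#include <assert.h>
#include <stdbool.h>

#define NR_PAGES 8

/* Toy stand-ins for the real refcounting primitives. */
static int type_ref[NR_PAGES];
static int fail_at = 5;           /* simulate a typeref failure on page 5 */

static bool toy_get_page_and_type(unsigned int i)
{
    if ((int)i == fail_at)
        return false;
    type_ref[i]++;
    return true;
}

static void toy_put_page_and_type(unsigned int i)
{
    type_ref[i]--;
}

/* Mirrors the acquire loop and error-path unwind in the patch: on
 * failure at index i, drop only the i refs already taken. */
static int toy_alloc(unsigned int nr)
{
    unsigned int i;

    for ( i = 0; i < nr; i++ )
        if ( !toy_get_page_and_type(i) )
            goto refcnt_err;

    return 0;

 refcnt_err:
    while ( i-- )
        toy_put_page_and_type(i);

    return -1;
}
```

After a failed toy_alloc() every counter is back to zero, which is the
invariant the real error path must preserve for the typerefs (the alloc
refs being the separate problem discussed above).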

Jan


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 13:39:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 13:39:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79887.145638 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6ZQM-0001Pz-FK; Mon, 01 Feb 2021 13:39:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79887.145638; Mon, 01 Feb 2021 13:39:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6ZQM-0001Ps-CO; Mon, 01 Feb 2021 13:39:26 +0000
Received: by outflank-mailman (input) for mailman id 79887;
 Mon, 01 Feb 2021 13:39:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4N3t=HD=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l6ZQK-0001Pn-UZ
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 13:39:24 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 69491309-3bb8-4c86-8196-14ef3dc5bfa0;
 Mon, 01 Feb 2021 13:39:23 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E5860AB92;
 Mon,  1 Feb 2021 13:39:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 69491309-3bb8-4c86-8196-14ef3dc5bfa0
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612186763; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=KUY99SqnKBeXZzlYj2VlDDRXfhI6LC4XP4ZRoVG/cIQ=;
	b=F7UdL39RVWKQ9o2wFgFCvRkLUTgi8ELiMLLjmjqk6e8BK7VoItatrXSE3sP0qvr8x0XqMd
	1Gy/W4aCvutMeNgVPhDsbJASIzEFas4MHqjK4UaL8ugA6fp1uctfZutWFOPq+xHT4TtJrl
	BEbOsPn+NgYYN3AexyGmxzsz6gUOz7s=
Subject: Re: [PATCH for 4.16] xl: Add xl.conf-based dom0 autoballoon limits
To: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <ian.jackson@citrix.com>, Wei Liu <wl@xen.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
References: <20210129164858.3280477-1-george.dunlap@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <606292ed-9083-d9a7-33e9-a02485cbbca0@suse.com>
Date: Mon, 1 Feb 2021 14:39:22 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <20210129164858.3280477-1-george.dunlap@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 29.01.2021 17:48, George Dunlap wrote:
> When in "autoballoon" mode, xl will attempt to balloon dom0 down in
> order to free up memory to start a guest, based on the expected amount
> of memory reported by libxl.  Currently, this is limited in libxl with
> a hard-coded value of 128MiB, to prevent squeezing dom0 so hard that
> it OOMs.  On many systems, however, 128MiB is far too low a limit, and
> dom0 may crash anyway.
> 
> Furthermore, "autoballoon" mode, although strongly recommended
> against, must be the default mode for most distros: Having the memory
> available to Linux drop to 1GiB would be too much of an unexpected
> surprise for those not familiar with Xen.  This leads to a situation,
> however, where starting too many guests on a large-ish system can
> unexpectedly cause the system to slow down and crash with no warning,
> as xl squeezes dom0 until it OOMs.
> 
> Ideally what we want is to retain the "just works after install"
> functionality that we have now with respect to autoballooning, but
> prompt the admin to consider autoballooning issues once they've
> demonstrated that they intend to use a significant percentage of the
> host memory to start guests, and also allow knowledgeable users the
> flexibility to configure the system as they see fit.
> 
> To do this, introduce two new xl.conf-based dom0 autoballoon limits:
> autoballoon_dom0_min_memmb, and autoballoon_dom0_min_mempct.
> 
> When parsing xl.conf, xl will always calculate a minimum value for
> dom0 target.  If autoballoon_dom0_min_memmb is set, it will just use
> that; if that is not set and _min_mempct is set, it will calculate the
> minimum target based on a percentage of host memory.  If neither is
> set, it will default to 25% of host memory.
> 
> Add a more useful error message when autoballoon fails due to missing
> the target.  Additionally, if the autoballoon target was automatic,
> post an additional message prompting the admin to consider autoballoon
> explicitly.  Hopefully this will balance things working out of the box
> (and make it possible for advanced users to configure their systems as
> they wish), yet prompt admins to explore further when it's
> appropriate.
> 
> NB that there's a race in the resulting code between
> libxl_get_memory_target() and libxl_set_memory_target(); but there was
> already a race between the latter and libxl_get_free_memory() anyway;
> this doesn't really make the situation worse.
> 
> While here, reduce the scope of the free_memkb variable, which isn't
> used outside the do{} loop in freemem().
> 
> Signed-off-by: George Dunlap <george.dunlap@citrix.com>
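The selection order described in the commit message (explicit MiB value
first, then percentage, then a 25%-of-host default) can be sketched as
follows; the function name and the use of 0 for "unset" are illustrative
only, not the actual xl implementation:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch of the minimum dom0 target selection; a value of
 * 0 means the corresponding xl.conf key is unset. */
static uint64_t dom0_min_target_kb(uint64_t host_mem_kb,
                                   uint64_t autoballoon_dom0_min_memmb,
                                   unsigned int autoballoon_dom0_min_mempct)
{
    if ( autoballoon_dom0_min_memmb )
        return autoballoon_dom0_min_memmb * 1024;  /* explicit value wins */
    if ( autoballoon_dom0_min_mempct )
        return host_mem_kb * autoballoon_dom0_min_mempct / 100;
    return host_mem_kb / 4;                        /* default: 25% of host */
}
```

E.g. on an 8 GiB host with neither key set, the floor comes out at 2 GiB
rather than the old hard-coded 128 MiB.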

I'm not convinced it is the tool stack's job to set a lower limit here.
Imo the kernel should guard itself from too aggressive ballooning.
In fact, the old XenoLinux driver did, as of
https://xenbits.xen.org/hg/linux-2.6.18-xen.hg/rev/b61443b1bf76,
which in our forward ports we then extended to have exposure in
/proc and /sys, alongside an upper limit (for purely informational
purposes iirc).

Jan


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 13:47:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 13:47:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79889.145650 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6ZYB-0002Hz-9u; Mon, 01 Feb 2021 13:47:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79889.145650; Mon, 01 Feb 2021 13:47:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6ZYB-0002Hs-6q; Mon, 01 Feb 2021 13:47:31 +0000
Received: by outflank-mailman (input) for mailman id 79889;
 Mon, 01 Feb 2021 13:47:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LFAu=HD=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1l6ZY9-0002Hn-S7
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 13:47:29 +0000
Received: from mail-lf1-x12b.google.com (unknown [2a00:1450:4864:20::12b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b1ba89df-cddf-4a84-b968-3d9ec33954d9;
 Mon, 01 Feb 2021 13:47:28 +0000 (UTC)
Received: by mail-lf1-x12b.google.com with SMTP id f1so22817098lfu.3
 for <xen-devel@lists.xenproject.org>; Mon, 01 Feb 2021 05:47:28 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id c7sm3909521ljd.95.2021.02.01.05.47.26
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 01 Feb 2021 05:47:27 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b1ba89df-cddf-4a84-b968-3d9ec33954d9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=jbuJPoipTiUQP7/k+Rmr0WaOAzGMug15ye46jUuJg14=;
        b=NrqXkc5T34QPfoPLnw86SWjAiuhppAmC1XT3UfCFjrmuJnqJL0Fy1Wjju/gY0glebH
         0Es8tmD/T2pxDS8bA3U2IX4XYRuGsjW+QZv04eGgqVKp+P49A+/7HfR32HCyd9b2tLzt
         0BnrbYgICrrP6Od8lr0MPsUsETtBFZuKmeYC2cEa21K0RQhwSXN8kLzgw9EsvaCShDIy
         NhU9Ff7qHnTUp5iGQsNXNOmcWZwwuw2MOeClww7HkobFMdsAl37SNn3E8356oDZKO9tV
         9XsggcCqx64OJh6gHYPqSxcpd/+ZFsrLRDzhxoasJ9MDWHj6hp7vcBfwuLswHUvoZp8N
         VCdg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=jbuJPoipTiUQP7/k+Rmr0WaOAzGMug15ye46jUuJg14=;
        b=i8ouG4RWjFJQ3QbzWXsaP7VWtzoo2B/NptImcYHLLUR5GCydF21FMTNOLhDJaXmIzU
         GRjmaS5/gVXTdj4rh/D6gPodf+RSPyGMdT2CTMIee9fTwx/ZqokOnsoQbfocaJgA+d/S
         vHo+zHapmmNLtCK/viBYPMusMReRNfB5FCzsu2pnWDtGcDuGXhdy44gN/0hwGlEYypbi
         m+VQ7SOri/CePbGFn8GlbXNLRwH+ijVwkDprrS7oqBvmtW2f91/ab79JlAd0nPJV03UU
         R2K6sBdUe9iyYkIvGrQWLINNyhxZqkM6WvjbK3/bUJyVP/nCv4NWtkncmoTNW1GwZkVF
         VH/g==
X-Gm-Message-State: AOAM531XcfvjDml3Anh6hAU+HTHVp2+lFJJcsklL6EVipOmYzA+QPKbg
	SN24CquUvt/gvp02NHnwbL4=
X-Google-Smtp-Source: ABdhPJy6z0KlgYp43GQ3JrzHVxWydzuScJzBi8/BWK4j4Vo3Ykq1/E2PN90O07tlS4m4zUWV47DW9w==
X-Received: by 2002:ac2:5d51:: with SMTP id w17mr8032713lfd.343.1612187247589;
        Mon, 01 Feb 2021 05:47:27 -0800 (PST)
Subject: Re: [PATCH v8 00/16] acquire_resource size and external IPT
 monitoring
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Jan Beulich <JBeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 Ian Jackson <iwj@xenproject.org>, Anthony PERARD
 <anthony.perard@citrix.com>, Jun Nakajima <jun.nakajima@intel.com>,
 Kevin Tian <kevin.tian@intel.com>,
 =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 Tamas K Lengyel <tamas@tklengyel.com>
References: <20210130025852.12430-1-andrew.cooper3@citrix.com>
 <911270bf-0077-b70e-c224-712dfa535afa@gmail.com>
 <fceef592-e637-e985-8217-11546e088027@citrix.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <d5cc17a4-267c-3022-11e5-eb043de121a9@gmail.com>
Date: Mon, 1 Feb 2021 15:47:21 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <fceef592-e637-e985-8217-11546e088027@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 01.02.21 15:07, Andrew Cooper wrote:

Hi Andrew

> On 01/02/2021 12:34, Oleksandr wrote:
>> On 30.01.21 04:58, Andrew Cooper wrote:
>>
>> Hi Andrew
>>
>>> Combined series (as they are dependent).  First, the resource size
>>> fixes, and
>>> then the external IPT monitoring built on top.
>>>
>>> Posting in full for reference, but several patches are ready to go
>>> in.  Those
>>> in need of review are patch 6, 8 and 12.
>>>
>>> See individual patches for changes.  The major work was rebasing over
>>> the
>>> ARM/IOREQ series which moved a load of code which this series was
>>> bugfixing.
>> Looks like some of these patches have already been merged. So I
>> preliminary tested current staging
>> (9dc687f155a57216b83b17f9cde55dd43e06b0cd x86/debug: fix page-overflow
>> bug in dbg_rw_guest_mem) on Arm *with* IOREQ enabled.
>>
>> I didn't notice any regressions with IOREQ on Arm))
> Fantastic!
>
> Tamas and I did do extended testing on the subset which got committed,
> before it went in, and it is all fixing of corner cases, rather than
> fundamentally changing how things worked.
>
>
> One query I did leave on IRC, and hasn't had an answer.
>
> What is the maximum number of vcpus in an ARM guest?

public/arch-arm.h says that the currently supported maximum number of
guest vCPUs is 128.


> You moved an
> x86-ism "max 128 vcpus" into common code.

Ooh, I am not sure I understand where exactly. Could you please clarify 
in which patch?


>
> ~Andrew

-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Mon Feb 01 13:54:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 13:54:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79893.145663 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6Zf6-0003Jl-2k; Mon, 01 Feb 2021 13:54:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79893.145663; Mon, 01 Feb 2021 13:54:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6Zf5-0003Je-Vi; Mon, 01 Feb 2021 13:54:39 +0000
Received: by outflank-mailman (input) for mailman id 79893;
 Mon, 01 Feb 2021 13:54:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VcwM=HD=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1l6Zf4-0003JZ-GH
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 13:54:38 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fc3e95f5-0911-4752-a8e0-d1e8df019178;
 Mon, 01 Feb 2021 13:54:36 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fc3e95f5-0911-4752-a8e0-d1e8df019178
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612187676;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=i3RqQQ2Ww49K89/ICp9Nq50nugf8sRfMQwkDzeBqkAY=;
  b=D/FVohUNEJ3UVdvUzuOU0e0vJGg96nFCjY13Tw0T/JeciHC3XkvvA9mE
   B4ri9QGTvGvWap8DgKp1BjeM83fQYKHlVO+b+HlRkkDIcy9jFHtPY2X6X
   /LAeahWMgQvEuxNGsQQ0wn2swxRd5dpKXKGUbvetdTjZTs+VmJk/sWg4i
   I=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: SdwwFXwsj49KI4AXuF6/J0myesEF16qfJwdXXOYUYT5MxZh5QfPzmTJPYbFgXFHmp2N+MGi1Mw
 KcHvlErnjv5yETnCr7Ih1vRchETSkv/O1nhneMkTLgwhvLZtnPX/fjYnfdpL4clzOv1+x9emFz
 rVaeCFMNsVbMLuPq/NRh+iN3bKqcKLVDEX5S4N2xIjWjWTOuHeMAGBnD4qwfQjNE4DcJKscvMv
 a7AmaECIU1AndQP4oxpToUm8huwt47HjK/W3W+wMKML0TJCKFvZIJYjaa/AmoYX7IfDdcwFGMi
 WAU=
X-SBRS: 5.2
X-MesageID: 37603483
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,392,1602561600"; 
   d="scan'208";a="37603483"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=HZLnJBpE/xnD112p4UCM5GI+Mdo3oN5/ZKddqlk3jvpS3ErzoxPxCVKr/MyJ45obpWnqsopTUZGAfdosrivIeLqEX4scaE4vAYdB9YL5ohhsTT+sPiscTT+2d90NLU+Xx+OS8AIlEK6077ooRbakEvc2AuEofTPjX06t/9Hx1HnaNSjiUgB+sFQD8jfxiteMrIy7fQiGrRmOdS0V0mvzlg43CkjBSdnfL6eqHZxmzOi5xDuRAmwMIvqeHmpGmx2l9eWvHkxRyCYjJDZu/As4e8KwzaFIkmI0y6HOx0hMZSB2GTdxrCQB4UYf60F/WTzyAMGF9aBAfE6HwqV1ug7gFg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=i3RqQQ2Ww49K89/ICp9Nq50nugf8sRfMQwkDzeBqkAY=;
 b=C9uOMEOXkNwXD37wr1vgQqlUZ9JxaDJptttHofBteYcnh2nnB+bT7zc5J6xwFvx4z1Ao7oMJ5yTFSVP0sp0CuVabAU2yhrYSxmo90fKPPPCO+d7v8BCAE15ZoC6rHmbAa527sy0WZG1YZ8nlwBKu8EwNgLB9p42VG6loIz7Q466bSJ4K0lDK4E5OYl2mHjmYhCr42wdpeqfLHmUIYtFjq9m5q+O+W6anlZi163SE81C6YZp5CWje3DYMRt3QZi1nmI2cvGt7ggB3WfLovtoT+0EcZOzLkZSo8ySGiCTLaZQM0s3VT++p+P6X4MPZyWLsVYGH2AbSFCDrlrPnZY2lUg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=i3RqQQ2Ww49K89/ICp9Nq50nugf8sRfMQwkDzeBqkAY=;
 b=ua6tQ9YRg+5QxARA3mBuaGFE+HyhzBZo7BsWb1tEhD7I4FXkEc/v+/w3kzg915HNFTw5EN6Eh1l+rV6FQXKPRTY/oQUpkwRICJKxUgpmvrQbpzc1apZKucFOTWIhwCwzpUU9BF221vQv5Du7pSdTRgW5mjwMPv0XhJ9FSH722BY=
From: George Dunlap <George.Dunlap@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Ian Jackson <Ian.Jackson@citrix.com>, Wei Liu <wl@xen.org>, Anthony Perard
	<anthony.perard@citrix.com>, Stefano Stabellini <sstabellini@kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH for 4.16] xl: Add xl.conf-based dom0 autoballoon limits
Thread-Topic: [PATCH for 4.16] xl: Add xl.conf-based dom0 autoballoon limits
Thread-Index: AQHW9l7snzGwdbg2aUWFV1Hc3k4X76pDUo0AgAAEOAA=
Date: Mon, 1 Feb 2021 13:54:29 +0000
Message-ID: <1F7EC4ED-BFA0-4EC8-A7F4-BE78E2455FAE@citrix.com>
References: <20210129164858.3280477-1-george.dunlap@citrix.com>
 <606292ed-9083-d9a7-33e9-a02485cbbca0@suse.com>
In-Reply-To: <606292ed-9083-d9a7-33e9-a02485cbbca0@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3654.20.0.2.21)
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 06734a5e-6687-40e4-94dc-08d8c6b8e519
x-ms-traffictypediagnostic: PH0PR03MB5720:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: <PH0PR03MB5720855AFB6A9F9B1FE482E499B69@PH0PR03MB5720.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
Content-Type: text/plain; charset="utf-8"
Content-ID: <1EB8D590AD90E24AA39611252A84CE65@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: PH0PR03MB5669.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 06734a5e-6687-40e4-94dc-08d8c6b8e519
X-MS-Exchange-CrossTenant-originalarrivaltime: 01 Feb 2021 13:54:29.1920
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: lQ2L1I90Bgdd3uE7ZdqHk18nIsxZEV+I1uYJGHr+BVBP3uEmmBwsxSMLsndZLmJ7VAfg0KGoDegjKn88KzB9vOTVEMrldqERYyUkB3r22zs=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB5720
X-OriginatorOrg: citrix.com

> On Feb 1, 2021, at 1:39 PM, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 29.01.2021 17:48, George Dunlap wrote:
>> When in "autoballoon" mode, xl will attempt to balloon dom0 down in
>> order to free up memory to start a guest, based on the expected amount
>> of memory reported by libxl.  Currently, this is limited in libxl with
>> a hard-coded value of 128MiB, to prevent squeezing dom0 so hard that
>> it OOMs.  On many systems, however, 128MiB is far too low a limit, and
>> dom0 may crash anyway.
>> 
>> Furthermore, "autoballoon" mode, although strongly recommended
>> against, must be the default mode for most distros: Having the memory
>> available to Linux drop to 1GiB would be too much of an unexpected
>> surprise for those not familiar with Xen.  This leads to a situation,
>> however, where starting too many guests on a large-ish system can
>> unexpectedly cause the system to slow down and crash with no warning,
>> as xl squeezes dom0 until it OOMs.
>> 
>> Ideally what we want is to retain the "just works after install"
>> functionality that we have now with respect to autoballooning, but
>> prompts the admin to consider autoballooning issues once they've
>> demonstrated that they intend to use a significant percentage of the
>> host memory to start guests, and also allow knowledgable users the
>> flexibility to configure the system as they see fit.
>> 
>> To do this, introduce two new xl.conf-based dom0 autoballoon limits:
>> autoballoon_dom0_min_memmb, and autoballoon_dom0_min_mempct.
>> 
>> When parsing xl.conf, xl will always calculate a minimum value for
>> dom0 target.  If autoballoon_dom0_min_memmb is set, it will just use
>> that; if that is not set and _min_mempct is set, it will calculate the
>> minimum target based on a percentage of host memory.  If neither is
>> set, it will default to 25% of host memory.
>> 
>> Add a more useful error message when autoballoon fails due to missing
>> the target.  Additionally, if the autoballoon target was automatic,
>> post an additional message prompting the admin to consider autoballoon
>> explicitly.  Hopefully this will balance things working out of the box
>> (and make it possible for advanced users to configure their systems as
>> they wish), yet prompt admins to explore further when it's
>> appropriate.
>> 
>> NB that there's a race in the resulting code between
>> libxl_get_memory_target() and libxl_set_memory_target(); but there was
>> already a race between the latter and libxl_get_free_memory() anyway;
>> this doesn't really make the situation worse.
>> 
>> While here, reduce the scope of the free_memkb variable, which isn't
>> used outside the do{} loop in freemem().
>> 
>> Signed-off-by: George Dunlap <george.dunlap@citrix.com>
> 
> I'm not convinced it is the tool stack to set a lower limit here.
> Imo the kernel should guard itself from too aggressive ballooning.
> In fact, the old XenoLinux driver did, as of
> https://xenbits.xen.org/hg/linux-2.6.18-xen.hg/rev/b61443b1bf76,
> which in our forward ports we then extended to have exposure in
> /proc and /sys, alongside an upper limit (for purely informational
> purposes iirc).

Just to be clear, the limit in this patch will *only* affect:

1. autoballooning
2. done by xl
3. in response to an `xl create`

You can still crash your dom0 just fine by running `xl mem-set 0 $TOO_SMALL`; and this patch won’t affect the behavior of alternate toolstacks implemented on libxl (e.g., libvirt).

I’d certainly be in favor of getting something like b61443b1bf76 upstream.  I still think a patch like this one would be useful though:

1. The error message will be given right away, rather than timing out on the dom0 balloon driver

2. The error message can be more informative, and point people to the whole “fixed dom0 memory” thing.

 -George
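[Editor's note: the minimum-target selection described in the patch above (explicit _min_memmb wins, else _min_mempct, else a 25% default) can be sketched as below. This is an illustrative sketch only; the function and parameter names are hypothetical, not the actual xl implementation.]

```c
#include <stdint.h>

/* Default used when neither xl.conf key is set, per the patch description. */
#define AUTOBALLOON_DOM0_MIN_MEMPCT_DEFAULT 25

/*
 * Hypothetical helper: compute the minimum dom0 ballooning target in KiB.
 * conf_min_memmb / conf_min_mempct of 0 mean "not set in xl.conf".
 */
static uint64_t autoballoon_dom0_min_kb(uint64_t host_memkb,
                                        uint64_t conf_min_memmb,
                                        unsigned int conf_min_mempct)
{
    if (conf_min_memmb)
        return conf_min_memmb * 1024;           /* explicit MiB value wins */

    if (!conf_min_mempct)
        conf_min_mempct = AUTOBALLOON_DOM0_MIN_MEMPCT_DEFAULT;

    return host_memkb * conf_min_mempct / 100;  /* percentage of host memory */
}
```

For example, on an 8 GiB host with neither key set, the sketch yields a 2 GiB floor below which xl would refuse to balloon dom0.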


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 14:00:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 14:00:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79898.145675 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6Zl8-0004PM-T0; Mon, 01 Feb 2021 14:00:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79898.145675; Mon, 01 Feb 2021 14:00:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6Zl8-0004PF-Of; Mon, 01 Feb 2021 14:00:54 +0000
Received: by outflank-mailman (input) for mailman id 79898;
 Mon, 01 Feb 2021 14:00:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IWGu=HD=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l6Zl6-0004PA-VR
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 14:00:53 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e9f785dc-8710-4321-a0bc-7cfe6d4b1454;
 Mon, 01 Feb 2021 14:00:51 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e9f785dc-8710-4321-a0bc-7cfe6d4b1454
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612188051;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=V3gCxFCSdMi8jsyeVIxWaTf0d6D86exZfF+VqOYQOx8=;
  b=dXRkk55VC5HPkptt66aA//+YDBjkGYWI7BxgLrfxUA58dsC4t3F8RoWK
   LX51KNZhHuPG7VL9EB9yy4t5H409T3FCTEmKA7eerlZEl/ErdduifKBWu
   +hDXMcZBgB+oAJ7pi2fqDtWXk3YirPQ/2mk0+4iIx23LZ5umMDOb2oARD
   w=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 36475573
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,392,1602561600"; 
   d="scan'208";a="36475573"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=R6rL1ls46V5anZ1JuaE/G1Oaj6eb8IWdgEjmJGB7GCMl5QnZC2A6ZFyknZMppKPZm/gdlSFD6nWNvlDB80uFJRIamHT60aTmpwgxDFBm+8wiR8yMew7vbgisxykLpQK8T6NfwwdlDKcCuPknLil/1v5UNpnubEX4AcjT55WfLmlSCphAV4RzoGGvg8GtJkiclHMaaLbs9UPRf2hU3xzV2HRTit2mJ46IN3k2zDhFo2W6KeiAvbXXKDOFo3dTTbMPq3gkBx13XPMj7d7rvuvXK3VCudiBKEE56N59pgcvnyfBQJunW2uD03pW/hcaAPASHx/+BC6AFdtRwRCHEb48CQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=V3gCxFCSdMi8jsyeVIxWaTf0d6D86exZfF+VqOYQOx8=;
 b=n3nVZSnsF5ERX2je8MQA11JGYr1cPagEx1SBw0e+PWD5Bb+/bRo+e11do3YXPQJunui2mEuAfCblMTOjlao8ZhoSjFiNRDyYIcD6OJpL5OM3asaYmyotSejEAk0WbB8SJcxbILJHoluxQc8q+F+Qmt0DCUvdteemfCrsgVfM8wiTV/QsvjbQyIPUlKi1dOoWEePStLEk4x+1eO6m3yAeeqIU+P3Fnsj6S/UtcK1ka+Tc+l1xJYDcPywSSFHOS5pC1uvUj+WUPr8HU5MDHF3snPiaEpzG9bxWkX5TjzsCLKcqL3X97Dioy19cqYJxTYByT+0X+9ZuoaBdgQN+a9nPsw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=V3gCxFCSdMi8jsyeVIxWaTf0d6D86exZfF+VqOYQOx8=;
 b=GIjiC+LEGkynneq4ivyIiudz6LvmZZUjdIhzGvb/yjzbPaeQ+1a30+WH5xkBfqUEb/+1WuEiJAI8LMZiJi4TmretAItTlhJ+jKtYD7gXRECxCxW5jRCHpOukB2/vALeeTlGuO0NUfdx2XAe31UXD0+yqRXqfZQKxVbl4smAgB+o=
Subject: Re: [PATCH v8 00/16] acquire_resource size and external IPT
 monitoring
To: Oleksandr <olekstysh@gmail.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>, Ian Jackson <iwj@xenproject.org>, Anthony PERARD
	<anthony.perard@citrix.com>, Jun Nakajima <jun.nakajima@intel.com>, Kevin
 Tian <kevin.tian@intel.com>, =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?=
	<michal.leszczynski@cert.pl>, Tamas K Lengyel <tamas@tklengyel.com>
References: <20210130025852.12430-1-andrew.cooper3@citrix.com>
 <911270bf-0077-b70e-c224-712dfa535afa@gmail.com>
 <fceef592-e637-e985-8217-11546e088027@citrix.com>
 <d5cc17a4-267c-3022-11e5-eb043de121a9@gmail.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <7384c55e-f996-be08-f8ee-b6d09c9e2eef@citrix.com>
Date: Mon, 1 Feb 2021 14:00:38 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
In-Reply-To: <d5cc17a4-267c-3022-11e5-eb043de121a9@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0380.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:18f::7) To BYAPR03MB4728.namprd03.prod.outlook.com
 (2603:10b6:a03:13a::24)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 345d8f60-ca38-473d-040a-08d8c6b9c5c2
X-MS-TrafficTypeDiagnostic: BYAPR03MB4584:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB4584BEDBA374A160A605DE6EBAB69@BYAPR03MB4584.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7691;
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-CrossTenant-Network-Message-Id: 345d8f60-ca38-473d-040a-08d8c6b9c5c2
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB4728.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 Feb 2021 14:00:46.3840
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: sQ1TckDc6SkcABilpaYzS4nnZlD2EGPR+B2gNSaix2DKzXdIYORrKM8wWY32e2x1ZekF4/Ti9gzAJ+/fSO/M1kUX7mhjOOYF6acWgHJgFb4=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4584
X-OriginatorOrg: citrix.com

On 01/02/2021 13:47, Oleksandr wrote:
>
> On 01.02.21 15:07, Andrew Cooper wrote:
>
> Hi Andrew
>
>> On 01/02/2021 12:34, Oleksandr wrote:
>>> On 30.01.21 04:58, Andrew Cooper wrote:
>>
>> One query I did leave on IRC, and hasn't had an answer.
>>
>> What is the maximum number of vcpus in an ARM guest?
>
> public/arch-arm.h says that the currently supported maximum number of
> guest VCPUs is 128.
>
>
>> You moved an
>> x86-ism "max 128 vcpus" into common code.
>
> Ooh, I am not sure I understand where exactly. Could you please
> clarify in which patch?

ioreq_server_get_frame() hardcodes "there is exactly one non-bufioreq
frame", which in practice means there is 128 vcpus' worth of struct
ioreqs contained within the mapping.

I've coded ioreq_server_max_frames() to perform the calculation
correctly, but ioreq_server_get_frame() will need fixing by whoever
first supports more than 128 vcpus with ioreq servers.
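[Editor's note: the 128 figure falls out of the ioreq ABI: each vcpu gets one 32-byte struct ioreq slot in the shared page, and a 4 KiB frame holds 4096 / 32 = 128 slots. An illustrative sketch (not the Xen source; the helper name is hypothetical):]

```c
/*
 * Illustrative sketch: how many synchronous-ioreq frames a guest with
 * nr_vcpus vcpus would need, given one 32-byte ioreq slot per vcpu.
 */
#define FRAME_SIZE 4096u   /* one 4 KiB page */
#define IOREQ_SIZE 32u     /* size of one struct ioreq slot in the ABI */

static unsigned int ioreq_frames_needed(unsigned int nr_vcpus)
{
    unsigned int per_frame = FRAME_SIZE / IOREQ_SIZE;   /* 128 slots */

    return (nr_vcpus + per_frame - 1) / per_frame;      /* round up */
}
```

With this arithmetic, 128 vcpus fit exactly in one frame; vcpu 129 would be the first to need a second frame, which is the case ioreq_server_get_frame() does not yet handle.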

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 14:04:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 14:04:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79900.145687 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6Zp4-0004Za-DX; Mon, 01 Feb 2021 14:04:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79900.145687; Mon, 01 Feb 2021 14:04:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6Zp4-0004ZT-AQ; Mon, 01 Feb 2021 14:04:58 +0000
Received: by outflank-mailman (input) for mailman id 79900;
 Mon, 01 Feb 2021 14:04:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IWGu=HD=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l6Zp2-0004ZO-2J
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 14:04:56 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id be5a8e3c-0c23-4a6b-99f4-096628b665d4;
 Mon, 01 Feb 2021 14:04:54 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: be5a8e3c-0c23-4a6b-99f4-096628b665d4
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612188294;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=ezH3Gpyp4RfJM+MVTYcXvlw268WLRxhjfpjZKqCoeG8=;
  b=dOPSvYoNoi7m9ZelO8gR2tmG7fAv5LvM2QnO/4Xp91dTa0RaJut3sIX1
   9mIOAPHsImzWwms6hcW9C/rErnEqKj0+ifIa/v4DiYEBldUsxk8LVzlZt
   FZ0VF1+T0TNQy/xaRnwNxGUS9i/0U9jRjC4KksNY6QEYWWDz45c29wajR
   0=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: None
X-MesageID: 36273297
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,392,1602561600"; 
   d="scan'208";a="36273297"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=CZDWpxHR7ZlY8muUBnINGWIga4vxBZJQuSFSsZb89b5jB5APwUJUmZ5mIp5NTE+RFNLfb3UBqMzAFcXqyC8u+OMz+3bd6pAIb1RBOVrN8aaeEdEo1//UxAR/9upEd6UUr8MkxsOQCP9PAzGnpfc64NAsx544XhEZqi2BHfg9S4Tsq3cHeUsqoFHE4TwX59vcvEhwmHSCdS5ylXU9faGtfvmgXEekDTBUdKq2F9auRteDYsWZcom9WZDIKIzPtAXVX31iczFNzkHKCPYxAlaGgUQefFcgxfdVUx+sebXo/I6OlQOCM4/3qzGSD1IC/CRBk7fRXMa8kdBgQDvxkO4xkQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hW4ZU4gQ3UGSA87v9AW0hVKlpA7k7ugSAiZ7f1Na3Rw=;
 b=bQogFHT8CzckcJ0u99myTq12/naZ0XOj90D3v7AO5D/SaqYMv4+dYKf4Q5ZxCQvTOg0croaxPkfaY5XbqAuym6nPGYDO9lRwLKpi1WRDuK5P51uyEiDB3w6ZV3Ey/6KM9ZSFDizrYYcp1/F1mIYSlOTrpkmPcAAJJflU0tzc/fqVI2u1MriKvneoffhxQkjve1eeLItuYrglg8zV/tTtpTc7lSHPMJuF24Hk+DXL8J0wNgTpjooce+PcnYQemDnGOl9zRGPtTA4TldVmr1p+yBX17SUrAy9FUFBAPr6RdYsRaHXKcyGG+fHZhQ7dGeqzagiMzyrufqfIHs7uAv/PcQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hW4ZU4gQ3UGSA87v9AW0hVKlpA7k7ugSAiZ7f1Na3Rw=;
 b=Q3i+V7if+LPf/nVzDA/hj3qudHzh23ffD87jn0ZeO+2MilH60s/5cO3kQ45WFE+Y6dC8FdUfvY2Vu0Qlv7nT60dujtsOgMEhFtMDadg3L39ctPJQMQ+tCiZOZzCuMyvi3JpMZf8L/oXA0nvlVzbXUk17cOgDXe5b3+0KMhc2qrA=
Subject: Re: [PATCH v8 06/16] xen/memory: Fix mapping grant tables with
 XENMEM_acquire_resource
To: Jan Beulich <jbeulich@suse.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Ian Jackson <iwj@xenproject.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Julien Grall
	<julien@xen.org>, Paul Durrant <paul@xen.org>,
	=?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>, Hubert
 Jasudowicz <hubert.jasudowicz@cert.pl>, Tamas K Lengyel
	<tamas@tklengyel.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>
References: <20210130025852.12430-1-andrew.cooper3@citrix.com>
 <20210130025852.12430-7-andrew.cooper3@citrix.com>
 <YBfTpTzi+wo7AFSH@Air-de-Roger>
 <53a26fb9-9c43-d1c4-90cd-bb29d57e106b@citrix.com>
 <28e8d116-9d5f-3c73-b366-63d5b047b085@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <8c4ef206-8378-8fa4-60cd-199491a136c6@citrix.com>
Date: Mon, 1 Feb 2021 14:04:31 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
In-Reply-To: <28e8d116-9d5f-3c73-b366-63d5b047b085@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0092.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:8::32) To BYAPR03MB4728.namprd03.prod.outlook.com
 (2603:10b6:a03:13a::24)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 8ffde1b8-c535-4116-b7d9-08d8c6ba509f
X-MS-TrafficTypeDiagnostic: BYAPR03MB4584:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB4584921BB9142DC5EE937B9CBAB69@BYAPR03MB4584.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:4303;
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-CrossTenant-Network-Message-Id: 8ffde1b8-c535-4116-b7d9-08d8c6ba509f
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB4728.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 Feb 2021 14:04:39.3395
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: wVB3tqcFjkYCAUZyFNMLWvn2wVlVEHxmj/7jsBllj7KvCs+7wBceAaWj7HBvPpzzsyC+aJXKdzInv4p+xKs5H3Jo7mQQe77xkmBXwZge67E=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4584
X-OriginatorOrg: citrix.com

On 01/02/2021 13:03, Jan Beulich wrote:
> On 01.02.2021 12:11, Andrew Cooper wrote:
>> On 01/02/2021 10:10, Roger Pau Monné wrote:
>>> On Sat, Jan 30, 2021 at 02:58:42AM +0000, Andrew Cooper wrote:
>>>> @@ -636,15 +662,45 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
>>>>                      compat_frame_list[i] = frame;
>>>>                  }
>>>>  
>>>> -                if ( __copy_to_compat_offset(cmp.mar.frame_list, 0,
>>>> +                if ( __copy_to_compat_offset(cmp.mar.frame_list, start_extent,
>>>>                                               compat_frame_list,
>>>> -                                             cmp.mar.nr_frames) )
>>>> +                                             done) )
>>>>                      return -EFAULT;
>>> Is it fine to return with a possibly pending continuation already
>>> encoded here?
>>>
>>> Other places seem to crash the domain when that happens.
>> Hmm.  This is all a total mess.  (Elsewhere the error handling is also
>> broken - a caller who receives an error can't figure out how to recover)
>>
>> But yes - I think you're right - the only thing we can do here is `goto
>> crash;` and woe betide any 32bit kernel which passes a pointer to a
>> read-only buffer.
> I'd like to ask you to reconsider the "goto crash", both the one
> you mention above and the other one already present in the patch.
> Wiring all the cases where we mean to crash the guest into a
> single domain_crash() invocation has the downside that when
> observing such a case one can't remotely know which path has led
> there. Therefore I'd like to suggest individual domain_crash()
> invocations on every affected path. Elsewhere in the file there
> already is such an instance, commented "Cannot cancel the
> continuation...".

But they're all logically the same, are they not?

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 14:12:26 2021
Subject: Re: [PATCH v3 5/7] xen/memory: Improve compat XENMEM_acquire_resource
 handling
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Julien Grall
 <julien@xen.org>, Paul Durrant <paul@xen.org>,
 =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 Hubert Jasudowicz <hubert.jasudowicz@cert.pl>,
 Tamas K Lengyel <tamas@tklengyel.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20210112194841.1537-1-andrew.cooper3@citrix.com>
 <20210112194841.1537-6-andrew.cooper3@citrix.com>
 <e8162d0a-b85f-abc4-790e-60ea93a8dc6b@suse.com>
 <cf8408e3-4869-8fae-fb33-b651ee1f8948@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <9659b4f2-ebd7-e398-fc0a-7bd451c4ebe0@suse.com>
Date: Mon, 1 Feb 2021 15:12:15 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <cf8408e3-4869-8fae-fb33-b651ee1f8948@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 29.01.2021 00:32, Andrew Cooper wrote:
> On 15/01/2021 15:37, Jan Beulich wrote:
>> On 12.01.2021 20:48, Andrew Cooper wrote:
>>> @@ -446,6 +430,31 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
>>>  
>>>  #undef XLAT_mem_acquire_resource_HNDL_frame_list
>>>  
>>> +            if ( xen_frame_list && cmp.mar.nr_frames )
>>> +            {
>>> +                /*
>>> +                 * frame_list is an input for translated guests, and an output
>>> +                 * for untranslated guests.  Only copy in for translated guests.
>>> +                 */
>>> +                if ( paging_mode_translate(currd) )
>>> +                {
>>> +                    compat_pfn_t *compat_frame_list = (void *)xen_frame_list;
>>> +
>>> +                    if ( !compat_handle_okay(cmp.mar.frame_list,
>>> +                                             cmp.mar.nr_frames) ||
>>> +                         __copy_from_compat_offset(
>>> +                             compat_frame_list, cmp.mar.frame_list,
>>> +                             0, cmp.mar.nr_frames) )
>>> +                        return -EFAULT;
>>> +
>>> +                    /*
>>> +                     * Iterate backwards over compat_frame_list[] expanding
>>> +                     * compat_pfn_t to xen_pfn_t in place.
>>> +                     */
>>> +                    for ( int x = cmp.mar.nr_frames - 1; x >= 0; --x )
>>> +                        xen_frame_list[x] = compat_frame_list[x];
>> Just as a nit, without requiring you to adjust (but with the
>> request to consider adjusting) - x getting used as array index
>> would generally suggest it wants to be an unsigned type (despite
>> me guessing the compiler ought to be able to avoid an explicit
>> sign-extension for the actual memory accesses):
>>
>>                     for ( unsigned int x = cmp.mar.nr_frames; x--; )
>>                         xen_frame_list[x] = compat_frame_list[x];
> 
> Signed numbers are not inherently evil.  The range of x is between 0 and
> 1020 so there is no issue with failing to enter the loop.
> 
> It is the compiler's job to make this optimisation.  It is a very poor
> use of a developer's time to write logic which takes extra effort to
> figure out whether it is correct or not.

I don't see why my suggested alternative is any more difficult to
understand. It's one less expression, so perhaps even less cognitive
load. But yes, this is easily getting subjective.

> You know what my attitude will be towards a compiler which is incapable
> of making the optimisation, and you've got to go back a decade to find a
> processor old enough to not have identical performance between the
> unoptimised signed and unsigned forms.

I'm not sure I see how the compiler could transform this to using
unsigned int. By observation, gcc10 doesn't, despite -O2 (release
build). It still emits an otherwise unnecessary MOVSXD, and the
loop body is one insn shorter with an unsigned induction variable
(albeit that's likely just a side effect in this specific example).

> Both signs of numbers have their uses, and a rigid policy of using
> unsigned numbers does more harm than good (in this case, concerning the
> simplicity of the code).

Of course. But array accesses are where we'd better limit ourselves
to unsigned indexing variables, imo.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 14:23:15 2021
Subject: Re: [PATCH v8 08/16] xen/domain: Add vmtrace_size domain creation
 parameter
To: Jan Beulich <jbeulich@suse.com>
CC: =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>, Tamas K Lengyel
	<tamas@tklengyel.com>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210130025852.12430-1-andrew.cooper3@citrix.com>
 <20210130025852.12430-9-andrew.cooper3@citrix.com>
 <3cf886f6-db7f-ccc1-5ef0-6fd8ccb38caf@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <f54dec0a-65b6-07bf-9de8-ed96ffd8d791@citrix.com>
Date: Mon, 1 Feb 2021 14:22:56 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
In-Reply-To: <3cf886f6-db7f-ccc1-5ef0-6fd8ccb38caf@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB

On 01/02/2021 13:18, Jan Beulich wrote:
> On 30.01.2021 03:58, Andrew Cooper wrote:
>> +static int vmtrace_alloc_buffer(struct vcpu *v)
>> +{
>> +    struct domain *d = v->domain;
>> +    struct page_info *pg;
>> +    unsigned int i;
>> +
>> +    if ( !d->vmtrace_size )
>> +        return 0;
>> +
>> +    pg = alloc_domheap_pages(d, get_order_from_bytes(d->vmtrace_size),
>> +                             MEMF_no_refcount);
>> +    if ( !pg )
>> +        return -ENOMEM;
>> +
>> +    /*
>> +     * Getting the reference counting correct here is hard.
>> +     *
>> +     * All pages are now on the domlist.  They, or subranges within, will be
> "domlist" is too imprecise, as there's no list with this name. It's
> extra_page_list in this case (see also below).
>
>> +     * freed when their reference count drops to zero, which may any time
>> +     * between now and the domain teardown path.
>> +     */
>> +
>> +    for ( i = 0; i < (d->vmtrace_size >> PAGE_SHIFT); i++ )
>> +        if ( unlikely(!get_page_and_type(&pg[i], d, PGT_writable_page)) )
>> +            goto refcnt_err;
>> +
>> +    /*
>> +     * We must only let vmtrace_free_buffer() take any action in the success
>> +     * case when we've taken all the refs it intends to drop.
>> +     */
>> +    v->vmtrace.pg = pg;
>> +
>> +    return 0;
>> +
>> + refcnt_err:
>> +    /*
>> +     * In the failure case, we must drop all the acquired typerefs thus far,
>> +     * skip vmtrace_free_buffer(), and leave domain_relinquish_resources() to
>> +     * drop the alloc refs on any remaining pages - some pages could already
>> +     * have been freed behind our backs.
>> +     */
>> +    while ( i-- )
>> +        put_page_and_type(&pg[i]);
>> +
>> +    return -ENODATA;
>> +}
> As said in reply on the other thread, PGC_extra pages don't get
> freed automatically. I too initially thought they would, but
> (re-)learned otherwise when trying to repro your claims on that
> other thread. For all pages you've managed to get the writable
> ref, freeing is easily done by prefixing the loop body above by
> put_page_alloc_ref(). For all other pages best you can do (I
> think; see the debugging patches I had sent on that other
> thread) is to try get_page() - if it succeeds, calling
> put_page_alloc_ref() is allowed. Otherwise we can only leak the
> respective page (unless going to further extents with trying to
> recover from the "impossible"), or assume the failure here was
> because it did get freed already.

Right - I'm going to insist on breaking apart orthogonal issues.

This refcounting issue isn't introduced by this series - this series
uses an established pattern, in which we've found a corner case.

The corner case is theoretical, not practical - it is not possible for a
malicious PV domain to take 2^43 refs on any of the pages in this
allocation.  Doing so would require an hours-long SMI, or equivalent,
and even then all malicious activity would be paused after 1s for the
time calibration rendezvous which would livelock the system until the
watchdog kicked in.


I will drop the comments, because in light of this discovery, they're
not correct.

We should fix the corner case, but that should be a separate patch.
Whatever we do needs to start by writing down the refcounting rules
first, because it's totally clear that no one understands them, and then
adjust all instances of this pattern as necessary.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 14:27:35 2021
Date: Mon, 1 Feb 2021 15:27:22 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>,
	=?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>, "Jan
 Beulich" <JBeulich@suse.com>, Wei Liu <wl@xen.org>, Jun Nakajima
	<jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>, Tamas K Lengyel
	<tamas@tklengyel.com>
Subject: Re: [PATCH v8 12/16] xen/domctl: Add XEN_DOMCTL_vmtrace_op
Message-ID: <YBgPyi0IZSRmirTB@Air-de-Roger>
References: <20210130025852.12430-1-andrew.cooper3@citrix.com>
 <20210130025852.12430-13-andrew.cooper3@citrix.com>
 <YBftkqLHfwIzpaN9@Air-de-Roger>
 <316b907e-ff40-039c-374a-c07fbb33bbc2@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <316b907e-ff40-039c-374a-c07fbb33bbc2@citrix.com>

On Mon, Feb 01, 2021 at 01:00:47PM +0000, Andrew Cooper wrote:
> > On 01/02/2021 12:01, Roger Pau Monné wrote:
> > On Sat, Jan 30, 2021 at 02:58:48AM +0000, Andrew Cooper wrote:
> >> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> >> index 12b961113e..a64c4e4177 100644
> >> --- a/xen/arch/x86/hvm/vmx/vmx.c
> >> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> >> @@ -2261,6 +2261,157 @@ static bool vmx_get_pending_event(struct vcpu *v, struct x86_event *info)
> >>      return true;
> >>  }
> >>  
> >> +/*
> >> + * We only let vmtrace agents see and modify a subset of bits in MSR_RTIT_CTL.
> >> + * These all pertain to data-emitted into the trace buffer(s).  Must not
> >> + * include controls pertaining to the structure/position of the trace
> >> + * buffer(s).
> >> + */
> >> +#define RTIT_CTL_MASK                                                   \
> >> +    (RTIT_CTL_TRACE_EN | RTIT_CTL_OS | RTIT_CTL_USR | RTIT_CTL_TSC_EN | \
> >> +     RTIT_CTL_DIS_RETC | RTIT_CTL_BRANCH_EN)
> >> +
> >> +/*
> >> + * Status bits restricted to the first-gen subset (i.e. no further CPUID
> >> + * requirements.)
> >> + */
> >> +#define RTIT_STATUS_MASK                                                \
> >> +    (RTIT_STATUS_FILTER_EN | RTIT_STATUS_CONTEXT_EN | RTIT_STATUS_TRIGGER_EN | \
> >> +     RTIT_STATUS_ERROR | RTIT_STATUS_STOPPED)
> >> +
> >> +static int vmtrace_get_option(struct vcpu *v, uint64_t key, uint64_t *output)
> >> +{
> >> +    const struct vcpu_msrs *msrs = v->arch.msrs;
> >> +
> >> +    switch ( key )
> >> +    {
> >> +    case MSR_RTIT_OUTPUT_MASK:
> > Is there any value in returning the raw value of this MSR instead of
> > just using XEN_DOMCTL_vmtrace_output_position?
> 
> Yes, but for interface reasons.
> 
> There are deliberately some common interfaces (for the subset of options
> expected to be useful), and some platform-specific ones (because there's
> no possible way we encode all of the options in some "common" interface).
> 
> Yes - there is some overlap between the two sets - that is unavoidable
> IMO. A user of this interface already needs platform-specific knowledge
> because it has to interpret the contents of the trace buffer.
> 
> Future extensions to this interface would be setting up the CR3 filter
> and range filters, which definitely shouldn't be common, and can be
> added without new subops in the current model.
> 
> > The size of the buffer should be known to user-space, and then setting
> > the offset could be done by adding a XEN_DOMCTL_vmtrace_set_output_position?
> >
> > Also the contents of this MSR depend on whether ToPA mode is used, and
> > that's not under the control of the guest. So if Xen is switched to
> > use ToPA mode at some point the value of this MSR might not be what a
> > user of the interface expects.
> >
> > From an interface PoV it might be better to offer:
> >
> > XEN_DOMCTL_vmtrace_get_limit
> > XEN_DOMCTL_vmtrace_get_output_position
> > XEN_DOMCTL_vmtrace_set_output_position
> >
> > IMO, as that would be compatible with ToPA if we ever switch to it.
> 
> ToPA is definitely more complicated. We'd need to stitch the disparate
> buffers back together into one logical view, at which point
> get_output_position becomes more complicated.
> 
> As for set_output_position, that's not useful. You either want to keep
> the position as-is, or reset back to 0, hence having a platform-neutral
> reset option.
> 
> However, based on this reasoning, I think I should drop access to
> MSR_RTIT_OUTPUT_MASK entirely. Neither half is useful for userspace to
> access in a platform-specific way, and disallowing access entirely will
> simplify adding ToPA support in the future.

Exactly. Dropping access to MSR_RTIT_OUTPUT_MASK would indeed solve my
concerns. I somehow assumed that setting the offset was needed for the
users of the interface. With that dropped you can add:

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.
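As an aside for readers of the archive, the whitelisting scheme the patch implements can be sketched outside Xen like this. It is a simplified model, not Xen's actual handlers: the bit positions follow the SDM's RTIT_CTL layout but should be treated as illustrative assumptions, and `vmtrace_get_ctl()`/`vmtrace_set_ctl()` are hypothetical stand-ins.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative bit values only; the real definitions live in Xen's headers. */
#define RTIT_CTL_TRACE_EN   (1ULL <<  0)
#define RTIT_CTL_OS         (1ULL <<  2)
#define RTIT_CTL_USR        (1ULL <<  3)
#define RTIT_CTL_TSC_EN     (1ULL << 10)
#define RTIT_CTL_DIS_RETC   (1ULL << 11)
#define RTIT_CTL_BRANCH_EN  (1ULL << 13)

/* Bits a vmtrace agent may see and modify; everything else is hidden. */
#define RTIT_CTL_MASK                                                   \
    (RTIT_CTL_TRACE_EN | RTIT_CTL_OS | RTIT_CTL_USR | RTIT_CTL_TSC_EN | \
     RTIT_CTL_DIS_RETC | RTIT_CTL_BRANCH_EN)

static uint64_t rtit_ctl;   /* stand-in for the vCPU's stored MSR value */

/* Reads only ever expose the whitelisted subset... */
static uint64_t vmtrace_get_ctl(void)
{
    return rtit_ctl & RTIT_CTL_MASK;
}

/* ...and writes touching any bit outside the mask are rejected outright. */
static int vmtrace_set_ctl(uint64_t val)
{
    if ( val & ~RTIT_CTL_MASK )
        return -22; /* -EINVAL */
    rtit_ctl = (rtit_ctl & ~RTIT_CTL_MASK) | val;
    return 0;
}
```

The point of the mask is exactly what the thread settles on: controls affecting trace *content* are exposed, while anything describing the buffer's structure or position stays under Xen's control.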


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 14:32:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 14:32:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79913.145735 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6aFM-0007sA-De; Mon, 01 Feb 2021 14:32:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79913.145735; Mon, 01 Feb 2021 14:32:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6aFM-0007s3-AJ; Mon, 01 Feb 2021 14:32:08 +0000
Received: by outflank-mailman (input) for mailman id 79913;
 Mon, 01 Feb 2021 14:32:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4N3t=HD=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l6aFL-0007ry-3c
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 14:32:07 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 288ab814-50d9-4a0d-b9c7-8d86337b0b91;
 Mon, 01 Feb 2021 14:32:06 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5AAFAABD5;
 Mon,  1 Feb 2021 14:32:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 288ab814-50d9-4a0d-b9c7-8d86337b0b91
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612189925; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=JsQrQiVZAcpllfsI2rXmpiIJAp8HlUSCrYlYpdPS2d0=;
	b=HELZbpW/JKdE91nbmyMfUXMwRTrutbeFmAoRh+Aaz95K1TjF9dKrDMs60YAPefC0GRIn3h
	AOWAa6rywy3INmRw4qeMfGOZX43W4urZfD6exvAn8ydHhE4336Xelh7+4sjQLkOWBHtPwN
	kD4svgE9zz+cm8lHBBzUJFJ1BfAMAdQ=
Subject: Re: [PATCH v8 06/16] xen/memory: Fix mapping grant tables with
 XENMEM_acquire_resource
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Julien Grall <julien@xen.org>,
 Paul Durrant <paul@xen.org>, =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?=
 <michal.leszczynski@cert.pl>, Hubert Jasudowicz <hubert.jasudowicz@cert.pl>,
 Tamas K Lengyel <tamas@tklengyel.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
References: <20210130025852.12430-1-andrew.cooper3@citrix.com>
 <20210130025852.12430-7-andrew.cooper3@citrix.com>
 <YBfTpTzi+wo7AFSH@Air-de-Roger>
 <53a26fb9-9c43-d1c4-90cd-bb29d57e106b@citrix.com>
 <28e8d116-9d5f-3c73-b366-63d5b047b085@suse.com>
 <8c4ef206-8378-8fa4-60cd-199491a136c6@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <af358c3a-8d8c-b576-22e5-d6464e167ef2@suse.com>
Date: Mon, 1 Feb 2021 15:32:04 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <8c4ef206-8378-8fa4-60cd-199491a136c6@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 01.02.2021 15:04, Andrew Cooper wrote:
> On 01/02/2021 13:03, Jan Beulich wrote:
>> On 01.02.2021 12:11, Andrew Cooper wrote:
>>> On 01/02/2021 10:10, Roger Pau Monné wrote:
>>>> On Sat, Jan 30, 2021 at 02:58:42AM +0000, Andrew Cooper wrote:
>>>>> @@ -636,15 +662,45 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
>>>>>                      compat_frame_list[i] = frame;
>>>>>                  }
>>>>>  
>>>>> -                if ( __copy_to_compat_offset(cmp.mar.frame_list, 0,
>>>>> +                if ( __copy_to_compat_offset(cmp.mar.frame_list, start_extent,
>>>>>                                               compat_frame_list,
>>>>> -                                             cmp.mar.nr_frames) )
>>>>> +                                             done) )
>>>>>                      return -EFAULT;
>>>> Is it fine to return with a possibly pending continuation already
>>>> encoded here?
>>>>
>>>> Other places seem to crash the domain when that happens.
>>> Hmm. This is all a total mess. (Elsewhere the error handling is also
>>> broken - a caller who receives an error can't figure out how to recover)
>>>
>>> But yes - I think you're right - the only thing we can do here is `goto
>>> crash;` and woe betide any 32bit kernel which passes a pointer to a
>>> read-only buffer.
>> I'd like to ask you to reconsider the "goto crash", both the one
>> you mention above and the other one already present in the patch.
>> Wiring all the cases where we mean to crash the guest into a
>> single domain_crash() invocation has the downside that when
>> observing such a case one can't remotely know which path has led
>> there. Therefore I'd like to suggest individual domain_crash()
>> invocations on every affected path. Elsewhere in the file there
>> already is such an instance, commented "Cannot cancel the
>> continuation...".
> 
> But they're all logically the same, are they not?

Depends on what "logically the same" here means. To me different
paths and different causes aren't necessarily "the same".

Jan
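Jan's point about distinguishable call sites can be illustrated with a toy model. None of this is Xen code: `domain_crash()`, `copy_results()`, and the logging are hypothetical stand-ins. Because each invocation records its own `__LINE__`, one invocation per failure path stays identifiable in the log, whereas a shared `goto crash` label collapses every path onto a single line number.

```c
#include <assert.h>
#include <stdio.h>

static int last_crash_line;     /* stand-in for the logged call site */
static int crash_lines[2];

/* Toy domain_crash(): the real one logs where it was invoked from. */
#define domain_crash() do {                         \
    last_crash_line = __LINE__;                     \
    printf("crash at %s:%d\n", __func__, __LINE__); \
} while ( 0 )

static int copy_results(int fail_copy, int fail_cancel)
{
    if ( fail_copy )
    {
        domain_crash();         /* distinct site: copy-back failed */
        crash_lines[0] = last_crash_line;
        return -1;
    }
    if ( fail_cancel )
    {
        domain_crash();         /* distinct site: cannot cancel continuation */
        crash_lines[1] = last_crash_line;
        return -1;
    }
    return 0;
}
```

With a single shared label, both failure paths would report the same location, which is precisely the diagnosability Jan is arguing for.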


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 14:36:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 14:36:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79916.145750 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6aJY-00085F-07; Mon, 01 Feb 2021 14:36:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79916.145750; Mon, 01 Feb 2021 14:36:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6aJX-000858-TD; Mon, 01 Feb 2021 14:36:27 +0000
Received: by outflank-mailman (input) for mailman id 79916;
 Mon, 01 Feb 2021 14:36:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4N3t=HD=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l6aJW-000851-6U
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 14:36:26 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5eaea734-a8c9-49da-8cf6-ba2e7234efb8;
 Mon, 01 Feb 2021 14:36:25 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 584B7AB92;
 Mon,  1 Feb 2021 14:36:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5eaea734-a8c9-49da-8cf6-ba2e7234efb8
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612190184; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=6uykghAdfz9re78oxMr0Ju2czSjC0DFr3lBX48I5rJY=;
	b=UIRXS0qGyOKD43/9h6/BoC469Q11Ymo2/Ak3rTprFlzqc/Xg3Lk+HmpH3SSyV3MOCJ9sUX
	QKZYoTu/LowwuyVQFrdLgk/sktMbeMex7CsRS4oaThOSA38aI0YXkIKBP6fPgXInvJEuuZ
	GmO2i6+Gn6b4tQzEx+7lazHOg8hUj4k=
Subject: Re: [PATCH v8 08/16] xen/domain: Add vmtrace_size domain creation
 parameter
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 Tamas K Lengyel <tamas@tklengyel.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20210130025852.12430-1-andrew.cooper3@citrix.com>
 <20210130025852.12430-9-andrew.cooper3@citrix.com>
 <3cf886f6-db7f-ccc1-5ef0-6fd8ccb38caf@suse.com>
 <f54dec0a-65b6-07bf-9de8-ed96ffd8d791@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <296e5ee3-0ae1-fe0b-9ec3-940b78284cdc@suse.com>
Date: Mon, 1 Feb 2021 15:36:23 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <f54dec0a-65b6-07bf-9de8-ed96ffd8d791@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 01.02.2021 15:22, Andrew Cooper wrote:
> On 01/02/2021 13:18, Jan Beulich wrote:
>> On 30.01.2021 03:58, Andrew Cooper wrote:
>>> +static int vmtrace_alloc_buffer(struct vcpu *v)
>>> +{
>>> +    struct domain *d = v->domain;
>>> +    struct page_info *pg;
>>> +    unsigned int i;
>>> +
>>> +    if ( !d->vmtrace_size )
>>> +        return 0;
>>> +
>>> +    pg = alloc_domheap_pages(d, get_order_from_bytes(d->vmtrace_size),
>>> +                             MEMF_no_refcount);
>>> +    if ( !pg )
>>> +        return -ENOMEM;
>>> +
>>> +    /*
>>> +     * Getting the reference counting correct here is hard.
>>> +     *
>>> +     * All pages are now on the domlist.  They, or subranges within, will be
>> "domlist" is too imprecise, as there's no list with this name. It's
>> extra_page_list in this case (see also below).
>>
>>> +     * freed when their reference count drops to zero, which may any time
>>> +     * between now and the domain teardown path.
>>> +     */
>>> +
>>> +    for ( i = 0; i < (d->vmtrace_size >> PAGE_SHIFT); i++ )
>>> +        if ( unlikely(!get_page_and_type(&pg[i], d, PGT_writable_page)) )
>>> +            goto refcnt_err;
>>> +
>>> +    /*
>>> +     * We must only let vmtrace_free_buffer() take any action in the success
>>> +     * case when we've taken all the refs it intends to drop.
>>> +     */
>>> +    v->vmtrace.pg = pg;
>>> +
>>> +    return 0;
>>> +
>>> + refcnt_err:
>>> +    /*
>>> +     * In the failure case, we must drop all the acquired typerefs thus far,
>>> +     * skip vmtrace_free_buffer(), and leave domain_relinquish_resources() to
>>> +     * drop the alloc refs on any remaining pages - some pages could already
>>> +     * have been freed behind our backs.
>>> +     */
>>> +    while ( i-- )
>>> +        put_page_and_type(&pg[i]);
>>> +
>>> +    return -ENODATA;
>>> +}
>> As said in reply on the other thread, PGC_extra pages don't get
>> freed automatically. I too initially thought they would, but
>> (re-)learned otherwise when trying to repro your claims on that
>> other thread. For all pages you've managed to get the writable
>> ref, freeing is easily done by prefixing the loop body above by
>> put_page_alloc_ref(). For all other pages best you can do (I
>> think; see the debugging patches I had sent on that other
>> thread) is to try get_page() - if it succeeds, calling
>> put_page_alloc_ref() is allowed. Otherwise we can only leak the
>> respective page (unless going to further extents with trying to
>> recover from the "impossible"), or assume the failure here was
>> because it did get freed already.
> 
> Right - I'm going to insist on breaking apart orthogonal issues.
> 
> This refcounting issue isn't introduced by this series - this series
> uses an established pattern, in which we've found a corner case.
> 
> The corner case is theoretical, not practical - it is not possible for a
> malicious PV domain to take 2^43 refs on any of the pages in this
> allocation. Doing so would require an hours-long SMI, or equivalent,
> and even then all malicious activity would be paused after 1s for the
> time calibration rendezvous which would livelock the system until the
> watchdog kicked in.

Actually an overflow is only one of the possible reasons here.
Another, which may be more "practical", is that another entity
has already managed to free the page (by dropping its alloc-ref,
and of course implying it did guess at the MFN).

Jan
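The failure-path handling Jan suggests can be modelled with a toy reference counter. The names mirror Xen's API, but this is a simplified sketch under stated assumptions, not Xen's actual refcounting: each page starts with one "alloc" ref, pages that took the writable type ref hold one extra, and a page freed behind our back cannot safely have its alloc ref dropped.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy page: a single counter plus a freed flag. */
struct page_info { int refs; bool freed; };

static bool get_page(struct page_info *pg)
{
    if ( pg->freed )
        return false;          /* another entity already freed it */
    pg->refs++;
    return true;
}

static void put_page(struct page_info *pg)
{
    if ( --pg->refs == 0 )
        pg->freed = true;      /* last ref gone: back to the allocator */
}

/*
 * Failure path for an allocation of nr pages where only the first got_type
 * pages successfully took the writable type ref.  Returns how many pages
 * had to be leaked.
 */
static unsigned int cleanup(struct page_info *pg, unsigned int nr,
                            unsigned int got_type)
{
    unsigned int i, leaked = 0;

    for ( i = 0; i < got_type; i++ )
    {
        put_page(&pg[i]);      /* drop the type ref (put_page_and_type) */
        put_page(&pg[i]);      /* drop the alloc ref (put_page_alloc_ref) */
    }

    for ( ; i < nr; i++ )
    {
        /* Only safe to drop the alloc ref if the page is still live. */
        if ( get_page(&pg[i]) )
        {
            put_page(&pg[i]);  /* undo the probe ref */
            put_page(&pg[i]);  /* drop the alloc ref */
        }
        else
            leaked++;          /* "impossible" case: leak, don't corrupt */
    }

    return leaked;
}
```

The `get_page()` probe is the key step: it distinguishes "we can still drop the alloc ref" from "someone already freed this page", which is the corner Jan points out above.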


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 14:47:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 14:47:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79922.145765 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6aTs-0000ph-2M; Mon, 01 Feb 2021 14:47:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79922.145765; Mon, 01 Feb 2021 14:47:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6aTr-0000pa-V7; Mon, 01 Feb 2021 14:47:07 +0000
Received: by outflank-mailman (input) for mailman id 79922;
 Mon, 01 Feb 2021 14:47:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tQCk=HD=todobom.com=claudemir@srs-us1.protection.inumbo.net>)
 id 1l6aTr-0000pT-35
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 14:47:07 +0000
Received: from mail-qt1-x82f.google.com (unknown [2607:f8b0:4864:20::82f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d2542089-7031-47d4-9d5c-427d0284c030;
 Mon, 01 Feb 2021 14:46:57 +0000 (UTC)
Received: by mail-qt1-x82f.google.com with SMTP id t17so12382884qtq.2
 for <xen-devel@lists.xenproject.org>; Mon, 01 Feb 2021 06:46:57 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d2542089-7031-47d4-9d5c-427d0284c030
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=todobom-com.20150623.gappssmtp.com; s=20150623;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=ux8jRvXs7Hpok0WX+Jd2sq3r6PCrl0Cr5XMNofb2eus=;
        b=lgd5oWSlUSw5jXHP1/RfTKhpT5YLJcrCnRdtmvfzYXoAR4w/oTkaZRjuHm0duUO2cu
         eLj1ooLcfXj8QY+6rG/Sv+0AZjSV32Ney377hrEJzAWczEL9+dwpYYt/puSs+gM7BnmH
         Y13fVEy9EaP7KSqe3Jm0GETh+vQhIx+CWehfEblMJ7ltP7LrwU3xOdC3CL+A+rtXN0Po
         z1yVYdRonGAqK4UZgTgqfYaCA6/mFoCK03CeHQm87zX5J2SMpJG79icr/T6sKjtFbCdT
         cGhbvIZooI3DV0LYAbNQoe4EyxzyGTqNRzf+43HmoxSSxMyQsAS2wNZZ03pfIvUZQwpH
         FmsQ==
X-Google-Smtp-Source: ABdhPJxR9DpcNH7ksLuFmw5Uhhkj4wQcoexuBrcb5aiEb+/dWqGNAs8kfrwWSno0MRycXnkLQsOcaP5kicD9fv8y/RY=
X-Received: by 2002:ac8:5c41:: with SMTP id j1mr15502145qtj.306.1612190817243;
 Mon, 01 Feb 2021 06:46:57 -0800 (PST)
MIME-Version: 1.0
References: <CANyqHYfNBHnUiBiXHdt+R3mZ72oYQBnQcaWuKw5gY0uDb_ZqKw@mail.gmail.com>
 <e1d69914-c6bc-40b9-a9f4-33be4bd022b6@suse.com> <CANyqHYcifnCgd5C5vbYoi4CTtoMX5+jzGqHfs6JZ+e=d2Y_dmg@mail.gmail.com>
 <ff799cd4-ba42-e120-107c-5011dc803b5a@suse.com> <609a82d8-af12-4764-c4e0-f5ee0e11c130@suse.com>
 <CANyqHYehUWeNfVXqVJX6nrBS_CcKL1DQjyNVa1cUbvbx+zD83w@mail.gmail.com>
 <9d04edfe-0059-6fbf-c1da-2087f6190e64@suse.com> <CANyqHYfOC6JY978SRPAQ8Ug3GevFD=jbT6bVVET4+QOv8mv7qA@mail.gmail.com>
 <a0a7bbd0-c4c3-cfb8-5af0-a5a4aff14b76@suse.com> <CANyqHYeDR_NUKzPtbfLiUzxAUzerKepbU4B-_6=U-7Y6uy8gpQ@mail.gmail.com>
 <8837c3fb-1e0c-5941-258c-e76551a9e02b@suse.com> <8cf69fb3-5b8c-60ea-bd1c-39a0cbd5cb5c@suse.com>
 <CANyqHYeCQc2bt836uyrtm9Eo2T1uPP-+ups-ygfACu6zK36BQg@mail.gmail.com>
 <bd150f4d-4f7e-082e-6b10-03bf1eca7b80@suse.com> <CANyqHYeHf8f6G+U2z9A0JC049HPYvWQ+WXZYLCQyWyx5Jvq6BA@mail.gmail.com>
 <803a50a9-707f-14db-b523-cd1f6f685ab4@suse.com>
In-Reply-To: <803a50a9-707f-14db-b523-cd1f6f685ab4@suse.com>
From: Claudemir Todo Bom <claudemir@todobom.com>
Date: Mon, 1 Feb 2021 11:46:45 -0300
Message-ID: <CANyqHYfNjqjm7tFoHD=XDcv_P42wppmx0gjy=--Kz88MZcK6Pw@mail.gmail.com>
Subject: Re: Problems with APIC on versions 4.9 and later (4.8 works)
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Content-Type: multipart/mixed; boundary="000000000000f9517405ba476c04"

--000000000000f9517405ba476c04
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Tested first without the debug patch and with the following parameters:

xen: dom0_mem=1024M,max:2048M dom0_max_vcpus=4 dom0_vcpus_pin=true smt=true
kernel: loglevel=3

Same behaviour as before... black screen right after the xen messages.

Adding earlyprintk=xen to the kernel command line is sufficient to
make it boot. I imagine this could be happening because Xen is not
releasing the console to the kernel at that moment.

The system worked well (with earlyprintk=xen), tested with the "yes
stress test" mentioned before on a guest and on dom0.

Then, I installed the debug patch and booted it again, it also needed
the earlyprintk=3Dxen parameter on the kernel command line. I've also
added console_timestamps=3Dboot to the xen command line in order to get
the time of the messages.

I'm attaching the outputs of "xl dmesg" and "dmesg" on this message.

Think it is almost done! Will wait for the next round of tests!

Thank you very much!

On Mon, Feb 1, 2021 at 09:47, Jan Beulich <jbeulich@suse.com> wrote:
>
> On 29.01.2021 20:31, Claudemir Todo Bom wrote:
> > I've applied both patches; the system didn't boot. I used the following parameters:
> >
> > xen: dom0_mem=1024M,max:2048M dom0_max_vcpus=4 dom0_vcpus_pin=true smt=true
> > kernel: loglevel=3
> >
> > The screen cleared right after the initial xen messages and froze
> > there for a few minutes until I restarted the system.
> >
> > I've added "vga=text-80x25,keep" to the xen command line and
> > "nomodeset" to the kernel command line, hoping to get some more info
> > and surprisingly this was sufficient to make system boot!
> >
> > System prompt took a lot to appear, the kernel driver for the usb
> > keyboard loaded after 3 minutes and the driver for the usb wifi dongle
> > I am using loaded about five minutes after kernel boot, and I had to
> > issue "ifup -a" to get an ip address from the dhcp server, and it took
> > almost one minute to get it!
>
> I was able to repro this behavior, by deliberately screwing up
> CPU0's TSC early during boot. This of course did make it a lot
> easier to find and fix the problem. I've Cc-ed you on the full
> 3-patch series that I've sent a minute ago, because while you
> may continue to opt for ignoring the first patch, you'll now
> need the latter two. And as before, the updated debugging patch
> below.
>
> Jan
>
> --- a/xen/arch/x86/time.c
> +++ b/xen/arch/x86/time.c
> @@ -1558,6 +1558,12 @@ static void local_time_calibration(void)
>   * TSC Reliability check
>   */
>
> +static struct {//temp
> + unsigned cpu;
> + signed iter;
> + cycles_t prev, now;
> +} check_log[NR_CPUS + 4];
> +static unsigned check_idx;//temp
>  /*
>   * The Linux original version of this function is
>   * Copyright (c) 2006, Red Hat, Inc., Ingo Molnar
> @@ -1566,6 +1572,7 @@ static void check_tsc_warp(unsigned long
>  {
>      static DEFINE_SPINLOCK(sync_lock);
>      static cycles_t last_tsc;
> +unsigned idx, cpu = smp_processor_id();//temp
>
>      cycles_t start, now, prev, end;
>      int i;
> @@ -1576,6 +1583,15 @@ static void check_tsc_warp(unsigned long
>      end = start + tsc_khz * 20ULL;
>      now = start;
>
> +{//temp
> + spin_lock(&sync_lock);
> + idx = check_idx++;
> + check_log[idx].cpu = cpu;
> + check_log[idx].iter = -1;
> + check_log[idx].now = now;
> + spin_unlock(&sync_lock);
> +}
> +
>      for ( i = 0; ; i++ )
>      {
>          /*
> @@ -1610,7 +1626,14 @@ static void check_tsc_warp(unsigned long
>          {
>              spin_lock(&sync_lock);
>              if ( *max_warp < prev - now )
> +{//temp
>                  *max_warp = prev - now;
> + idx = check_idx++;
> + check_log[idx].cpu = cpu;
> + check_log[idx].iter = i;
> + check_log[idx].prev = prev;
> + check_log[idx].now = now;
> +}
>              spin_unlock(&sync_lock);
>          }
>      }
> @@ -1647,6 +1670,12 @@ static void tsc_check_reliability(void)
>          cpu_relax();
>
>      spin_unlock(&lock);
> +{//temp
> + unsigned i;
> + printk("CHK[%2u] %lx\n", cpu, tsc_max_warp);//temp
> + for(i = 0; i < ARRAY_SIZE(check_log) && check_log[i].now; ++i)
> +  printk("chk[%4u] CPU%-2u %016lx %016lx #%d\n", i, check_log[i].cpu, check_log[i].prev, check_log[i].now, check_log[i].iter);
> +}
>  }
>
>  /*
> @@ -1661,6 +1690,7 @@ struct calibration_rendezvous {
>      uint64_t master_tsc_stamp, max_tsc_stamp;
>  };
>
> +static bool rdzv_log;//temp
>  static void
>  time_calibration_rendezvous_tail(const struct calibration_rendezvous *r,
>                                   uint64_t old_tsc, uint64_t new_tsc)
> @@ -1671,6 +1701,7 @@ time_calibration_rendezvous_tail(const s
>      c->local_stime  = get_s_time_fixed(old_tsc ?: new_tsc);
>      c->master_stime = r->master_stime;
>
> +if(rdzv_log) printk("RDZV[%2u] t=%016lx(%016lx) s=%012lx(%012lx)\n", smp_processor_id(), c->local_tsc, r->master_tsc_stamp, c->local_stime, r->master_stime);//temp
>      raise_softirq(TIME_CALIBRATE_SOFTIRQ);
>  }
>
> @@ -1684,7 +1715,9 @@ static void time_calibration_tsc_rendezv
>      struct calibration_rendezvous *r = _r;
>      unsigned int total_cpus = cpumask_weight(&r->cpu_calibration_map);
>      uint64_t tsc = 0;
> +uint64_t adj = 0;//temp
>
> +if(rdzv_log) printk("RDZV[%2u] t=%016lx\n", smp_processor_id(), rdtsc_ordered());//temp
>      /* Loop to get rid of cache effects on TSC skew. */
>      for ( i = 4; i >= 0; i-- )
>      {
> @@ -1701,6 +1734,7 @@ static void time_calibration_tsc_rendezv
>                   * Use the largest value observed anywhere on the first
>                   * iteration.
>                   */
> +adj = r->max_tsc_stamp - r->master_tsc_stamp,//temp
>                  r->master_tsc_stamp = r->max_tsc_stamp;
>              else if ( i == 0 )
>                  r->master_stime = read_platform_stime(NULL);
> @@ -1743,6 +1777,13 @@ static void time_calibration_tsc_rendezv
>      }
>
>      time_calibration_rendezvous_tail(r, tsc, r->master_tsc_stamp);
> +if(adj) {//temp
> + static unsigned long cnt, thr;
> + if(++cnt > thr) {
> +  thr |= cnt;
> +  printk("TSC adjusted by %lx\n", adj);
> + }
> +}
>  }
>
>  /* Ordinary rendezvous function which does not modify TSC values. */
> @@ -1794,6 +1835,12 @@ static void time_calibration(void *unuse
>      struct calibration_rendezvous r = {
>          .semaphore = ATOMIC_INIT(0)
>      };
> +static unsigned long cnt, thr;//temp
> +if(++cnt > thr) {//temp
> + thr |= cnt;
> + printk("TSC: %ps\n", time_calibration_rendezvous_fn);
> + rdzv_log = true;
> +}
>
>      if ( clocksource_is_tsc() )
>      {
> @@ -1808,6 +1855,10 @@ static void time_calibration(void *unuse
>      on_selected_cpus(&r.cpu_calibration_map,
>                       time_calibration_rendezvous_fn,
>                       &r, 1);
> +if(rdzv_log) {//temp
> + rdzv_log = false;
> + printk("TSC: end rendezvous\n");
> +}
>  }
>
>  static struct cpu_time_stamp ap_bringup_ref;
> @@ -1904,6 +1955,7 @@ void init_percpu_time(void)
>      }
>      t->stamp.local_tsc   = tsc;
>      t->stamp.local_stime = now;
> +printk("INIT[%2u] t=%016lx s=%012lx m=%012lx\n", smp_processor_id(), tsc, now, t->stamp.master_stime);//temp
>  }
>
>  /*
> @@ -2046,6 +2098,7 @@ static int __init verify_tsc_reliability
>       * While with constant-rate TSCs the scale factor can be shared, whe=
n TSCs
>       * are not marked as 'reliable', re-sync during rendezvous.
>       */
> +printk("TSC: c=3D%d r=3D%d\n", !!boot_cpu_has(X86_FEATURE_CONSTANT_TSC),=
 !!boot_cpu_has(X86_FEATURE_TSC_RELIABLE));//temp
>      if ( boot_cpu_has(X86_FEATURE_CONSTANT_TSC) &&
>           !boot_cpu_has(X86_FEATURE_TSC_RELIABLE) )
>          time_calibration_rendezvous_fn =3D time_calibration_tsc_rendezvo=
us;
> @@ -2061,6 +2114,7 @@ int __init init_xen_time(void)
>  {
>      tsc_check_writability();
>
> +printk("TSC: c=3D%d r=3D%d\n", !!boot_cpu_has(X86_FEATURE_CONSTANT_TSC),=
 !!boot_cpu_has(X86_FEATURE_TSC_RELIABLE));//temp
>      open_softirq(TIME_CALIBRATE_SOFTIRQ, local_time_calibration);
>
>      /* NB. get_wallclock_time() can take over one second to execute. */

--000000000000f9517405ba476c04
Content-Type: text/plain; charset="US-ASCII"; name="xen-dmesg.txt"
Content-Disposition: attachment; filename="xen-dmesg.txt"
Content-Transfer-Encoding: base64
Content-ID: <f_kkmoouso0>
X-Attachment-Id: f_kkmoouso0

KFhFTikgcGFyYW1ldGVyICJwbGFjZWhvbGRlciIgdW5rbm93biEKKFhFTikgWyAgICAwLjAwMDAw
MF0gWGVuIHZlcnNpb24gNC4xMS40IChEZWJpYW4gNC4xMS40KzU3LWc0MWE4MjJjMzkyLTIpIChw
a2cteGVuLWRldmVsQGxpc3RzLmFsaW90aC5kZWJpYW4ub3JnKSAoZ2NjIChEZWJpYW4gOC4zLjAt
NikgOC4zLjApIGRlYnVnPW4gIE1vbiBGZWIgIDEgMTE6MzE6NDQgLTAzIDIwMjEKKFhFTikgWyAg
ICAwLjAwMDAwMF0gQm9vdGxvYWRlcjogR1JVQiAyLjA0LTEyCihYRU4pIFsgICAgMC4wMDAwMDBd
IENvbW1hbmQgbGluZTogcGxhY2Vob2xkZXIgZG9tMF9tZW09MTAyNE0sbWF4OjIwNDhNIGRvbTBf
bWF4X3ZjcHVzPTQgZG9tMF92Y3B1c19waW49dHJ1ZSBzbXQ9dHJ1ZSBjb25zb2xlX3RpbWVzdGFt
cHM9Ym9vdAooWEVOKSBbICAgIDAuMDAwMDAwXSBYZW4gaW1hZ2UgbG9hZCBiYXNlIGFkZHJlc3M6
IDB4YmEyMDAwMDAKKFhFTikgWyAgICAwLjAwMDAwMF0gVmlkZW8gaW5mb3JtYXRpb246CihYRU4p
IFsgICAgMC4wMDAwMDBdICBWR0EgaXMgdGV4dCBtb2RlIDgweDI1LCBmb250IDh4MTYKKFhFTikg
WyAgICAwLjAwMDAwMF0gIFZCRS9EREMgbWV0aG9kczogbm9uZTsgRURJRCB0cmFuc2ZlciB0aW1l
OiAwIHNlY29uZHMKKFhFTikgWyAgICAwLjAwMDAwMF0gIEVESUQgaW5mbyBub3QgcmV0cmlldmVk
IGJlY2F1c2Ugbm8gRERDIHJldHJpZXZhbCBtZXRob2QgZGV0ZWN0ZWQKKFhFTikgWyAgICAwLjAw
MDAwMF0gRGlzYyBpbmZvcm1hdGlvbjoKKFhFTikgWyAgICAwLjAwMDAwMF0gIEZvdW5kIDEgTUJS
IHNpZ25hdHVyZXMKKFhFTikgWyAgICAwLjAwMDAwMF0gIEZvdW5kIDIgRUREIGluZm9ybWF0aW9u
IHN0cnVjdHVyZXMKKFhFTikgWyAgICAwLjAwMDAwMF0gWGVuLWU4MjAgUkFNIG1hcDoKKFhFTikg
WyAgICAwLjAwMDAwMF0gIDAwMDAwMDAwMDAwMDAwMDAgLSAwMDAwMDAwMDAwMDllODAwICh1c2Fi
bGUpCihYRU4pIFsgICAgMC4wMDAwMDBdICAwMDAwMDAwMDAwMDllODAwIC0gMDAwMDAwMDAwMDBh
MDAwMCAocmVzZXJ2ZWQpCihYRU4pIFsgICAgMC4wMDAwMDBdICAwMDAwMDAwMDAwMGUwMDAwIC0g
MDAwMDAwMDAwMDEwMDAwMCAocmVzZXJ2ZWQpCihYRU4pIFsgICAgMC4wMDAwMDBdICAwMDAwMDAw
MDAwMTAwMDAwIC0gMDAwMDAwMDBiYTk1MjAwMCAodXNhYmxlKQooWEVOKSBbICAgIDAuMDAwMDAw
XSAgMDAwMDAwMDBiYTk1MjAwMCAtIDAwMDAwMDAwYmE5OGIwMDAgKHJlc2VydmVkKQooWEVOKSBb
ICAgIDAuMDAwMDAwXSAgMDAwMDAwMDBiYTk4YjAwMCAtIDAwMDAwMDAwYmFiYzMwMDAgKHVzYWJs
ZSkKKFhFTikgWyAgICAwLjAwMDAwMF0gIDAwMDAwMDAwYmFiYzMwMDAgLSAwMDAwMDAwMGJiMWMw
MDAwIChBQ1BJIE5WUykKKFhFTikgWyAgICAwLjAwMDAwMF0gIDAwMDAwMDAwYmIxYzAwMDAgLSAw
MDAwMDAwMGJiODQzMDAwIChyZXNlcnZlZCkKKFhFTikgWyAgICAwLjAwMDAwMF0gIDAwMDAwMDAw
YmI4NDMwMDAgLSAwMDAwMDAwMGJiODQ0MDAwICh1c2FibGUpCihYRU4pIFsgICAgMC4wMDAwMDBd
ICAwMDAwMDAwMGJiODQ0MDAwIC0gMDAwMDAwMDBiYjhjYTAwMCAoQUNQSSBOVlMpCihYRU4pIFsg
ICAgMC4wMDAwMDBdICAwMDAwMDAwMGJiOGNhMDAwIC0gMDAwMDAwMDBiYmQwZjAwMCAodXNhYmxl
KQooWEVOKSBbICAgIDAuMDAwMDAwXSAgMDAwMDAwMDBiYmQwZjAwMCAtIDAwMDAwMDAwYmJmZjQw
MDAgKHJlc2VydmVkKQooWEVOKSBbICAgIDAuMDAwMDAwXSAgMDAwMDAwMDBiYmZmNDAwMCAtIDAw
MDAwMDAwYmMwMDAwMDAgKHVzYWJsZSkKKFhFTikgWyAgICAwLjAwMDAwMF0gIDAwMDAwMDAwZDAw
MDAwMDAgLSAwMDAwMDAwMGUwMDAwMDAwIChyZXNlcnZlZCkKKFhFTikgWyAgICAwLjAwMDAwMF0g
IDAwMDAwMDAwZmVkMWMwMDAgLSAwMDAwMDAwMGZlZDIwMDAwIChyZXNlcnZlZCkKKFhFTikgWyAg
ICAwLjAwMDAwMF0gIDAwMDAwMDAwZmYwMDAwMDAgLSAwMDAwMDAwMTAwMDAwMDAwIChyZXNlcnZl
ZCkKKFhFTikgWyAgICAwLjAwMDAwMF0gIDAwMDAwMDAxMDAwMDAwMDAgLSAwMDAwMDAwNDQwMDAw
MDAwICh1c2FibGUpCihYRU4pIFsgICAgMC4wMDAwMDBdIEFDUEk6IFJTRFAgMDAwRjA0QTAsIDAw
MjQgKHIyIEFMQVNLQSkKKFhFTikgWyAgICAwLjAwMDAwMF0gQUNQSTogWFNEVCBCQjBERDA3MCwg
MDA1QyAocjEgQUxBU0tBICAgIEEgTSBJICAxMDcyMDA5IEFNSSAgICAgMTAwMTMpCihYRU4pIFsg
ICAgMC4wMDAwMDBdIEFDUEk6IEZBQ1AgQkIwRTU3MjgsIDAxMEMgKHI1IEFMQVNLQSAgICBBIE0g
SSAgMTA3MjAwOSBBTUkgICAgIDEwMDEzKQooWEVOKSBbICAgIDAuMDAwMDAwXSBBQ1BJOiBEU0RU
IEJCMEREMTYwLCA4NUM3IChyMiBBTEFTS0EgICAgQSBNIEkgICAgICAgMjAgSU5UTCAyMDA1MTEx
NykKKFhFTikgWyAgICAwLjAwMDAwMF0gQUNQSTogRkFDUyBCQjFCN0Y4MCwgMDA0MAooWEVOKSBb
ICAgIDAuMDAwMDAwXSBBQ1BJOiBBUElDIEJCMEU1ODM4LCAwMUE4IChyMyBBTEFTS0EgICAgQSBN
IEkgIDEwNzIwMDkgQU1JICAgICAxMDAxMykKKFhFTikgWyAgICAwLjAwMDAwMF0gQUNQSTogRlBE
VCBCQjBFNTlFMCwgMDA0NCAocjEgQUxBU0tBICAgIEEgTSBJICAxMDcyMDA5IEFNSSAgICAgMTAw
MTMpCihYRU4pIFsgICAgMC4wMDAwMDBdIEFDUEk6IE1DRkcgQkIwRTVBMjgsIDAwM0MgKHIxIEFM
QVNLQSBPRU1NQ0ZHLiAgMTA3MjAwOSBNU0ZUICAgICAgIDk3KQooWEVOKSBbICAgIDAuMDAwMDAw
XSBBQ1BJOiBIUEVUIEJCMEU1QTY4LCAwMDM4IChyMSBBTEFTS0EgICAgQSBNIEkgIDEwNzIwMDkg
QU1JLiAgICAgICAgNSkKKFhFTikgWyAgICAwLjAwMDAwMF0gQUNQSTogU1NEVCBCQjBFNUFBMCwg
Q0QzODAgKHIyICBJTlRFTCAgICBDcHVQbSAgICAgNDAwMCBJTlRMIDIwMDUxMTE3KQooWEVOKSBb
ICAgIDAuMDAwMDAwXSBBQ1BJOiBETUFSIEJCMUIyRTIwLCAwMEVDIChyMSBBIE0gSSAgIE9FTURN
QVIgICAgICAgIDEgSU5UTCAgICAgICAgMSkKKFhFTikgWyAgICAwLjAwMDAwMF0gU3lzdGVtIFJB
TTogMTYzMDNNQiAoMTY2OTQ3NjBrQikKKFhFTikgWyAgICAwLjAwMDAwMF0gRG9tYWluIGhlYXAg
aW5pdGlhbGlzZWQKKFhFTikgWyAgICAwLjAwMDAwMF0gQUNQSTogMzIvNjRYIEZBQ1MgYWRkcmVz
cyBtaXNtYXRjaCBpbiBGQURUIC0gYmIxYjdmODAvMDAwMDAwMDAwMDAwMDAwMCwgdXNpbmcgMzIK
KFhFTikgWyAgICAwLjAwMDAwMF0gSU9BUElDWzBdOiBhcGljX2lkIDAsIHZlcnNpb24gMzIsIGFk
ZHJlc3MgMHhmZWMwMDAwMCwgR1NJIDAtMjMKKFhFTikgWyAgICAwLjAwMDAwMF0gSU9BUElDWzFd
OiBhcGljX2lkIDIsIHZlcnNpb24gMzIsIGFkZHJlc3MgMHhmZWMwMTAwMCwgR1NJIDI0LTQ3CihY
RU4pIFsgICAgMC4wMDAwMDBdIEVuYWJsaW5nIEFQSUMgbW9kZTogIFBoeXMuICBVc2luZyAyIEkv
TyBBUElDcwooWEVOKSBbICAgIDAuMDAwMDAwXSBTd2l0Y2hlZCB0byBBUElDIGRyaXZlciB4MmFw
aWNfY2x1c3RlcgooWEVOKSBbICAgIDAuMDAwMDAwXSB4c3RhdGU6IHNpemU6IDB4MzQwIGFuZCBz
dGF0ZXM6IDB4NwooWEVOKSBbICAgIDAuMDAwMDAwXSBTcGVjdWxhdGl2ZSBtaXRpZ2F0aW9uIGZh
Y2lsaXRpZXM6CihYRU4pIFsgICAgMC4wMDAwMDBdICAgSGFyZHdhcmUgZmVhdHVyZXM6CihYRU4p
IFsgICAgMC4wMDAwMDBdICAgQ29tcGlsZWQtaW4gc3VwcG9ydDogSU5ESVJFQ1RfVEhVTksgU0hB
RE9XX1BBR0lORwooWEVOKSBbICAgIDAuMDAwMDAwXSAgIFhlbiBzZXR0aW5nczogQlRJLVRodW5r
IFJFVFBPTElORSwgU1BFQ19DVFJMOiBObywgT3RoZXI6CihYRU4pIFsgICAgMC4wMDAwMDBdICAg
TDFURjogYmVsaWV2ZWQgdnVsbmVyYWJsZSwgbWF4cGh5c2FkZHIgTDFEIDQ2LCBDUFVJRCA0Niwg
U2FmZSBhZGRyZXNzIDMwMDAwMDAwMDAwMAooWEVOKSBbICAgIDAuMDAwMDAwXSAgIFN1cHBvcnQg
Zm9yIFZNczogUFY6IFJTQiBFQUdFUl9GUFUsIEhWTTogUlNCIEVBR0VSX0ZQVQooWEVOKSBbICAg
IDAuMDAwMDAwXSAgIFhQVEkgKDY0LWJpdCBQViBvbmx5KTogRG9tMCBlbmFibGVkLCBEb21VIGVu
YWJsZWQKKFhFTikgWyAgICAwLjAwMDAwMF0gICBQViBMMVRGIHNoYWRvd2luZzogRG9tMCBkaXNh
YmxlZCwgRG9tVSBlbmFibGVkCihYRU4pIFsgICAgMC4wMDAwMDBdIFVzaW5nIHNjaGVkdWxlcjog
U01QIENyZWRpdCBTY2hlZHVsZXIgKGNyZWRpdCkKKFhFTikgWyAgICAwLjAwMDAwMF0gUGxhdGZv
cm0gdGltZXIgaXMgMTQuMzE4TUh6IEhQRVQKKFhFTikgWyAgICAwLjQ1ODg4N10gRGV0ZWN0ZWQg
MjQ5NC4zMzggTUh6IHByb2Nlc3Nvci4KKFhFTikgWyAgICAwLjQ2MjkxNl0gSW5pdGluZyBtZW1v
cnkgc2hhcmluZy4KKFhFTikgWyAgICAwLjQ2NTcyOV0gSW50ZWwgVlQtZCBpb21tdSAwIHN1cHBv
cnRlZCBwYWdlIHNpemVzOiA0a0IsIDJNQiwgMUdCLgooWEVOKSBbICAgIDAuNDY3MTMwXSBJbnRl
bCBWVC1kIFNub29wIENvbnRyb2wgZW5hYmxlZC4KKFhFTikgWyAgICAwLjQ2ODUyNV0gSW50ZWwg
VlQtZCBEb20wIERNQSBQYXNzdGhyb3VnaCBub3QgZW5hYmxlZC4KKFhFTikgWyAgICAwLjQ2OTkx
OV0gSW50ZWwgVlQtZCBRdWV1ZWQgSW52YWxpZGF0aW9uIGVuYWJsZWQuCihYRU4pIFsgICAgMC40
NzEzMTNdIEludGVsIFZULWQgSW50ZXJydXB0IFJlbWFwcGluZyBlbmFibGVkLgooWEVOKSBbICAg
IDAuNDcyNzAyXSBJbnRlbCBWVC1kIFBvc3RlZCBJbnRlcnJ1cHQgbm90IGVuYWJsZWQuCihYRU4p
IFsgICAgMC40NzQyMTJdIEludGVsIFZULWQgU2hhcmVkIEVQVCB0YWJsZXMgZW5hYmxlZC4KKFhF
TikgWyAgICAwLjQ4Njg0NF0gSS9PIHZpcnR1YWxpc2F0aW9uIGVuYWJsZWQKKFhFTikgWyAgICAw
LjQ4ODI0OV0gIC0gRG9tMCBtb2RlOiBSZWxheGVkCihYRU4pIFsgICAgMC40ODk2MzVdIEludGVy
cnVwdCByZW1hcHBpbmcgZW5hYmxlZAooWEVOKSBbICAgIDAuNDkxMTU4XSBFbmFibGVkIGRpcmVj
dGVkIEVPSSB3aXRoIGlvYXBpY19hY2tfb2xkIG9uIQooWEVOKSBbICAgIDAuNDkzNDk0XSBFTkFC
TElORyBJTy1BUElDIElSUXMKKFhFTikgWyAgICAwLjQ5NDg3OV0gIC0+IFVzaW5nIG9sZCBBQ0sg
bWV0aG9kCihYRU4pIFsgICAgMC42OTczODJdIFRTQzogYz0xIHI9MQooWEVOKSBbICAgIDEuNDgz
OTc0XSBJTklUWyAwXSB0PTAwMDAwMDE5M2E2Mjk3Yjkgcz0wMDAwNTg3MzljZjQgbT0wMDAwNTg3
MzllOGYKKFhFTikgWyAgICAxLjQ4NTM3NV0gQWxsb2NhdGVkIGNvbnNvbGUgcmluZyBvZiA2NCBL
aUIuCihYRU4pIFsgICAgMS40ODY3NzddIFZNWDogU3VwcG9ydGVkIGFkdmFuY2VkIGZlYXR1cmVz
OgooWEVOKSBbICAgIDEuNDg4MTY1XSAgLSBBUElDIE1NSU8gYWNjZXNzIHZpcnR1YWxpc2F0aW9u
CihYRU4pIFsgICAgMS40ODk1NTZdICAtIEFQSUMgVFBSIHNoYWRvdwooWEVOKSBbICAgIDEuNDkw
OTQwXSAgLSBFeHRlbmRlZCBQYWdlIFRhYmxlcyAoRVBUKQooWEVOKSBbICAgIDEuNDkyMzI3XSAg
LSBWaXJ0dWFsLVByb2Nlc3NvciBJZGVudGlmaWVycyAoVlBJRCkKKFhFTikgWyAgICAxLjQ5Mzcy
MV0gIC0gVmlydHVhbCBOTUkKKFhFTikgWyAgICAxLjQ5NTEwNl0gIC0gTVNSIGRpcmVjdC1hY2Nl
c3MgYml0bWFwCihYRU4pIFsgICAgMS40OTY0OTNdICAtIFVucmVzdHJpY3RlZCBHdWVzdAooWEVO
KSBbICAgIDEuNDk3ODgwXSAgLSBBUElDIFJlZ2lzdGVyIFZpcnR1YWxpemF0aW9uCihYRU4pIFsg
ICAgMS40OTkzODBdICAtIFZpcnR1YWwgSW50ZXJydXB0IERlbGl2ZXJ5CihYRU4pIFsgICAgMS41
MDA3NjhdICAtIFBvc3RlZCBJbnRlcnJ1cHQgUHJvY2Vzc2luZwooWEVOKSBbICAgIDEuNTAyMTYy
XSBIVk06IEFTSURzIGVuYWJsZWQuCihYRU4pIFsgICAgMS41MDM1NDRdIFZNWDogRGlzYWJsaW5n
IGV4ZWN1dGFibGUgRVBUIHN1cGVycGFnZXMgZHVlIHRvIENWRS0yMDE4LTEyMjA3CihYRU4pIFsg
ICAgMS41MDYzMjBdIEhWTTogVk1YIGVuYWJsZWQKKFhFTikgWyAgICAxLjUwNzcwMV0gSFZNOiBI
YXJkd2FyZSBBc3Npc3RlZCBQYWdpbmcgKEhBUCkgZGV0ZWN0ZWQKKFhFTikgWyAgICAxLjUwOTA5
N10gSFZNOiBIQVAgcGFnZSBzaXplczogNGtCLCAyTUIsIDFHQgooWEVOKSBbICAgIDEuNTEwNjMx
XSBJTklUWyAxXSB0PTAwMDAwMDFhMDg2MmM0NDcgcz0wMDAwNWEwYTVjNzIgbT0wMDAwNWEwYTVl
MmYKKFhFTikgWyAgICAxLjUxMjIzMV0gSU5JVFsgMl0gdD0wMDAwMDAxYTA4OWY5ZGZmIHM9MDAw
MDVhMjJjMTg0IG09MDAwMDVhMjJjMzQwCihYRU4pIFsgICAgMS41MTM3ODldIElOSVRbIDNdIHQ9
MDAwMDAwMWEwOGRhZjdmYiBzPTAwMDA1YTNhOGQzMSBtPTAwMDA1YTNhOGViOAooWEVOKSBbICAg
IDEuNTE1NDUzXSBJTklUWyA0XSB0PTAwMDAwMDFhMDkxYTQxZWUgcz0wMDAwNWE1M2VjOTQgbT0w
MDAwNWE1M2VlM2IKKFhFTikgWyAgICAxLjUxNzAwOV0gSU5JVFsgNV0gdD0wMDAwMDAxYTA5NTU4
Y2ZlIHM9MDAwMDVhNmJiMWYxIG09MDAwMDVhNmJiM2IzCihYRU4pIFsgICAgMS41MTg1NjFdIElO
SVRbIDZdIHQ9MDAwMDAwMWEwOTkwOGM5NyBzPTAwMDA1YTgzNTkxYiBtPTAwMDA1YTgzNWFlMgoo
WEVOKSBbICAgIDEuNTIwMTI3XSBJTklUWyA3XSB0PTAwMDAwMDFhMDljYzM2Y2Ygcz0wMDAwNWE5
YjQ0OTUgbT0wMDAwNWE5YjQ2NDYKKFhFTikgWyAgICAxLjUyMTcwMV0gSU5JVFsgOF0gdD0wMDAw
MDAxYTBhMDgxMjllIHM9MDAwMDVhYjM0NDMyIG09MDAwMDVhYjM0NWQ5CihYRU4pIFsgICAgMS41
MjMyNzhdIElOSVRbIDldIHQ9MDAwMDAwMWEwYTQ0MWY4MiBzPTAwMDA1YWNiNTcyNCBtPTAwMDA1
YWNiNThjYwooWEVOKSBbICAgIDEuNTI0ODU0XSBJTklUWzEwXSB0PTAwMDAwMDFhMGE4MDBkOWEg
cz0wMDAwNWFlMzVlMWUgbT0wMDAwNWFlMzVmYmUKKFhFTikgWyAgICAxLjUyNjQyOF0gSU5JVFsx
MV0gdD0wMDAwMDAxYTBhYmJmZjc2IHM9MDAwMDVhZmI2NjM4IG09MDAwMDVhZmI2ODBjCihYRU4p
IFsgICAgMS41Mjc5OTddIElOSVRbMTJdIHQ9MDAwMDAwMWEwYWY3YjI1YSBzPTAwMDA1YjEzNTUy
NyBtPTAwMDA1YjEzNTZmYwooWEVOKSBbICAgIDEuNTI5NTc5XSBJTklUWzEzXSB0PTAwMDAwMDFh
MGIzM2Y1M2Egcz0wMDAwNWIyYjdkZjIgbT0wMDAwNWIyYjdmYzIKKFhFTikgWyAgICAxLjUzMTI1
Ml0gSU5JVFsxNF0gdD0wMDAwMDAxYTBiNzM5N2M1IHM9MDAwMDViNDUwMGUyIG09MDAwMDViNDUw
MmJjCihYRU4pIFsgICAgMS41MzI4MjJdIElOSVRbMTVdIHQ9MDAwMDAwMWEwYmFmNWM4ZCBzPTAw
MDA1YjVjZjcyMSBtPTAwMDA1YjVjZjkwYQooWEVOKSBbICAgIDEuNTM0Mzg5XSBJTklUWzE2XSB0
PTAwMDAwMDFhMGJlYWZjOWUgcz0wMDAwNWI3NGRlZDggbT0wMDAwNWI3NGUwOWIKKFhFTikgWyAg
ICAxLjUzNTk2M10gSU5JVFsxN10gdD0wMDAwMDAxYTBjMjZlOTY2IHM9MDAwMDViOGNlNGY5IG09
MDAwMDViOGNlNmJiCihYRU4pIFsgICAgMS41Mzc1MzFdIElOSVRbMThdIHQ9MDAwMDAwMWEwYzYy
OGQ2ZSBzPTAwMDA1YmE0Y2UyYSBtPTAwMDA1YmE0Y2ZmMAooWEVOKSBbICAgIDEuNTM5MTA1XSBJ
TklUWzE5XSB0PTAwMDAwMDFhMGM5ZTdmNWUgcz0wMDAwNWJiY2Q2NGUgbT0wMDAwNWJiY2Q4M2YK
KFhFTikgWyAgICAxLjU0MDY3Nl0gSU5JVFsyMF0gdD0wMDAwMDAxYTBjZGE0MTdkIHM9MDAwMDVi
ZDRjYjY3IG09MDAwMDViZDRjZDc1CihYRU4pIFsgICAgMS41NDIyNDhdIElOSVRbMjFdIHQ9MDAw
MDAwMWEwZDE2MjRmOSBzPTAwMDA1YmVjY2UzYSBtPTAwMDA1YmVjZDAwOQooWEVOKSBbICAgIDEu
NTQzODI0XSBJTklUWzIyXSB0PTAwMDAwMDFhMGQ1MjE4Nzggcz0wMDAwNWMwNGQ3MDggbT0wMDAw
NWMwNGQ4ZTQKKFhFTikgWyAgICAxLjU0NTQxN10gSU5JVFsyM10gdD0wMDAwMDAxYTBkOGViYzYw
IHM9MDAwMDVjMWQyNjg0IG09MDAwMDVjMWQyODY3CihYRU4pIFsgICAgMS41NDY4MzldIEJyb3Vn
aHQgdXAgMjQgQ1BVcwooWEVOKSBbICAgIDEuNjAxOTIyXSBDSEtbIDBdIGNhMDk5OTdlCihYRU4p
IFsgICAgMS42MDMzMDRdIGNoa1sgICAwXSBDUFUwICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDE5
NDhmMWNiYTUgIy0xCihYRU4pIFsgICAgMS42MDQ3MDNdIGNoa1sgICAxXSBDUFUxICAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDFhMTJmYjY3YmIgIy0xCihYRU4pIFsgICAgMS42MDYxMDVdIGNoa1sg
ICAyXSBDUFU3ICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDFhMTJmYjY4NjEgIy0xCihYRU4pIFsg
ICAgMS42MDc1MDRdIGNoa1sgICAzXSBDUFU2ICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDFhMTJm
YjY4NmQgIy0xCihYRU4pIFsgICAgMS42MDg5MDJdIGNoa1sgICA0XSBDUFUxOCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDFhMTJmYjY4ZDYgIy0xCihYRU4pIFsgICAgMS42MTAzMDJdIGNoa1sgICA1
XSBDUFUxNiAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDFhMTJmYjY4ZDUgIy0xCihYRU4pIFsgICAg
MS42MTE4MTBdIGNoa1sgICA2XSBDUFUxOSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDFhMTJmYjY4
Y2UgIy0xCihYRU4pIFsgICAgMS42MTMyMDZdIGNoa1sgICA3XSBDUFU4ICAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDFhMTJmYjY4OGIgIy0xCihYRU4pIFsgICAgMS42MTQ2MDddIGNoa1sgICA4XSBD
UFUxNSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDFhMTJmYjY5YWIgIy0xCihYRU4pIFsgICAgMS42
MTYwMDldIGNoa1sgICA5XSBDUFU5ICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDFhMTJmYjY4OTcg
Iy0xCihYRU4pIFsgICAgMS42MTc0MDldIGNoa1sgIDEwXSBDUFUyMSAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDFhMTJmYjY4YjQgIy0xCihYRU4pIFsgICAgMS42MTg4MTBdIGNoa1sgIDExXSBDUFUx
NCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDFhMTJmYjY5YTMgIy0xCihYRU4pIFsgICAgMS42MjAy
MTBdIGNoa1sgIDEyXSBDUFUyMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDFhMTJmYjY4YmMgIy0x
CihYRU4pIFsgICAgMS42MjE2MTRdIGNoa1sgIDEzXSBDUFUxNyAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDFhMTJmYjY4YzkgIy0xCihYRU4pIFsgICAgMS42MjMwMTVdIGNoa1sgIDE0XSBDUFUxMyAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDFhMTJmYjY4YmYgIy0xCihYRU4pIFsgICAgMS42MjQ0MTZd
IGNoa1sgIDE1XSBDUFUzICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDFhMTJmYjZhNDMgIy0xCihY
RU4pIFsgICAgMS42MjU4MTVdIGNoa1sgIDE2XSBDUFUyICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDFhMTJmYjZhNGYgIy0xCihYRU4pIFsgICAgMS42MjczMjhdIGNoa1sgIDE3XSBDUFUxMiAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDFhMTJmYjY4Y2IgIy0xCihYRU4pIFsgICAgMS42Mjg3MjZdIGNo
a1sgIDE4XSBDUFUxMSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDFhMTJmYjY5ZDcgIy0xCihYRU4p
IFsgICAgMS42MzAxMzBdIGNoa1sgIDE5XSBDUFU0ICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDFh
MTJmYjZhNjggIy0xCihYRU4pIFsgICAgMS42MzE1MzVdIGNoa1sgIDIwXSBDUFU1ICAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDFhMTJmYjZhNWMgIy0xCihYRU4pIFsgICAgMS42MzI5MzVdIGNoa1sg
IDIxXSBDUFUyMiAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDFhMTJmYjY5YmMgIy0xCihYRU4pIFsg
ICAgMS42MzQzMzZdIGNoa1sgIDIyXSBDUFUyMyAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDFhMTJm
YjY5YjAgIy0xCihYRU4pIFsgICAgMS42MzU3MzddIGNoa1sgIDIzXSBDUFUxMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDFhMTJmYjY5Y2IgIy0xCihYRU4pIFsgICAgMS42MzcxNDBdIGNoa1sgIDI0
XSBDUFUwICAwMDAwMDAxYTEyZmJkMTYzIDAwMDAwMDE5NDhmMjM4ZDUgIzEKKFhFTikgWyAgICAx
LjYzODUzN10gY2hrWyAgMjVdIENQVTAgIDAwMDAwMDFhMTJmZDAyYzcgMDAwMDAwMTk0OGYzNmEx
NSAjNAooWEVOKSBbICAgIDEuNjM5OTM3XSBjaGtbICAyNl0gQ1BVMCAgMDAwMDAwMWExMmZkNjdm
NyAwMDAwMDAxOTQ4ZjNjZWY1ICM1CihYRU4pIFsgICAgMS42NDEzMzVdIGNoa1sgIDI3XSBDUFUw
ICAwMDAwMDAxYTEyZmUzMjE3IDAwMDAwMDE5NDhmNDk4YmQgIzcKKFhFTikgWyAgICAxLjY0Mjcz
NF0gY2hrWyAgMjhdIENQVTAgIDAwMDAwMDFhMTMwNTNmOGYgMDAwMDAwMTk0OGZiYTYyNSAjMjUK
KFhFTikgWyAgICAxLjY0NDI0OV0gY2hrWyAgMjldIENQVTAgIDAwMDAwMDFhMTMxZDcxZmYgMDAw
MDAwMTk0OTEzZDg4MSAjODcKKFhFTikgWyAgICAxLjY0NTY0OV0gVFNDIHdhcnAgZGV0ZWN0ZWQs
IGRpc2FibGluZyBUU0NfUkVMSUFCTEUKKFhFTikgWyAgICAxLjY0NzA0N10gVFNDOiBjPTEgcj0w
CihYRU4pIFsgICAgMS42NDg0MzJdIFRTQzogdGltZS5jI3RpbWVfY2FsaWJyYXRpb25fdHNjX3Jl
bmRlenZvdXMKKFhFTikgWyAgICAxLjY0OTgyOF0gUkRaVlsgMF0gdD0wMDAwMDAxOTUzMGIxZjI5
CihYRU4pIFsgICAgMS42NTEyMTZdIFJEWlZbIDFdIHQ9MDAwMDAwMWExZDE0YzI3MwooWEVOKSBb
ICAgIDEuNjUyNjA2XSBSRFpWWyAyXSB0PTAwMDAwMDFhMWQxNWRjNjkKKFhFTikgWyAgICAxLjY1
Mzk5M10gUkRaVlsgM10gdD0wMDAwMDAxYTFkMTVkYzZkCihYRU4pIFsgICAgMS42NTUzODFdIFJE
WlZbIDVdIHQ9MDAwMDAwMWExZDE1ZTYxMgooWEVOKSBbICAgIDEuNjU2NzY1XSBSRFpWWyA0XSB0
PTAwMDAwMDFhMWQxNWU2MGEKKFhFTikgWyAgICAxLjY1ODE1OF0gUkRaVlsgN10gdD0wMDAwMDAx
YTFkMTVlZjhmCihYRU4pIFsgICAgMS42NTk2MjldIFJEWlZbIDZdIHQ9MDAwMDAwMWExZDE1ZWY4
NwooWEVOKSBbICAgIDEuNjYxMDIyXSBSRFpWWyA4XSB0PTAwMDAwMDFhMWQxNWZjYjkKKFhFTikg
WyAgICAxLjY2MjQwOV0gUkRaVlsgOV0gdD0wMDAwMDAxYTFkMTVmY2JkCihYRU4pIFsgICAgMS42
NjM4MDBdIFJEWlZbMTFdIHQ9MDAwMDAwMWExZDE2MDljYgooWEVOKSBbICAgIDEuNjY1MTkxXSBS
RFpWWzEwXSB0PTAwMDAwMDFhMWQxNjA5ZDMKKFhFTikgWyAgICAxLjY2NjU4NF0gUkRaVlsxMl0g
dD0wMDAwMDAxYTFkMTYxMjdjCihYRU4pIFsgICAgMS42Njc5NzFdIFJEWlZbMTNdIHQ9MDAwMDAw
MWExZDE2MTI3NAooWEVOKSBbICAgIDEuNjY5MzYyXSBSRFpWWzE1XSB0PTAwMDAwMDFhMWQxNjFj
ZWIKKFhFTikgWyAgICAxLjY3MDc0NV0gUkRaVlsxNF0gdD0wMDAwMDAxYTFkMTYxY2YzCihYRU4p
IFsgICAgMS42NzIxMzVdIFJEWlZbMTddIHQ9MDAwMDAwMWExZDE2MjQ4MwooWEVOKSBbICAgIDEu
NjczNTI0XSBSRFpWWzE2XSB0PTAwMDAwMDFhMWQxNjI0N2IKKFhFTikgWyAgICAxLjY3NDkxNF0g
UkRaVlsxOF0gdD0wMDAwMDAxYTFkMTYyZWNhCihYRU4pIFsgICAgMS42NzYzODRdIFJEWlZbMTld
IHQ9MDAwMDAwMWExZDE2MmVkNgooWEVOKSBbICAgIDEuNjc3NzcyXSBSRFpWWzIxXSB0PTAwMDAw
MDFhMWQxNjM3Y2IKKFhFTikgWyAgICAxLjY3OTE1OV0gUkRaVlsyMF0gdD0wMDAwMDAxYTFkMTYz
N2MzCihYRU4pIFsgICAgMS42ODA1NDhdIFJEWlZbMjJdIHQ9MDAwMDAwMWExZDE2NDE3MQooWEVO
KSBbICAgIDEuNjgxOTM1XSBSRFpWWzIzXSB0PTAwMDAwMDFhMWQxNjQxNzkKKFhFTikgWyAgICAx
LjY4MzMyNV0gUkRaVlsyMF0gdD0wMDAwMDAxYTIyMGY3Y2UzKDAwMDAwMDFhMjIwZjdjZTMpIHM9
MDAwMDY0NTU2YWE3KDAwMDA2NDU1YWY1NikKKFhFTikgWyAgICAxLjY4NjA5NV0gUkRaVlsgNF0g
dD0wMDAwMDAxYTIyMGY3Y2UzKDAwMDAwMDFhMjIwZjdjZTMpIHM9MDAwMDY0NTU2YTJkKDAwMDA2
NDU1YWY1NikKKFhFTikgWyAgICAxLjY4ODg3NF0gUkRaVlsxOF0gdD0wMDAwMDAxYTIyMGY3Y2Uz
KDAwMDAwMDFhMjIwZjdjZTMpIHM9MDAwMDY0NTU2YTVhKDAwMDA2NDU1YWY1NikKKFhFTikgWyAg
ICAxLjY5MTczNF0gUkRaVlsxOV0gdD0wMDAwMDAxYTIyMGY3Y2UzKDAwMDAwMDFhMjIwZjdjZTMp
IHM9MDAwMDY0NTU2YTJjKDAwMDA2NDU1YWY1NikKKFhFTikgWyAgICAxLjY5NDUwN10gUkRaVlsy
MV0gdD0wMDAwMDAxYTIyMGY3Y2UzKDAwMDAwMDFhMjIwZjdjZTMpIHM9MDAwMDY0NTU2YWYyKDAw
MDA2NDU1YWY1NikKKFhFTikgWyAgICAxLjY5NzI4Nl0gUkRaVlsgOF0gdD0wMDAwMDAxYTIyMGY3
Y2UzKDAwMDAwMDFhMjIwZjdjZTMpIHM9MDAwMDY0NTU2YTY0KDAwMDA2NDU1YWY1NikKKFhFTikg
WyAgICAxLjcwMDA1N10gUkRaVlsgOV0gdD0wMDAwMDAxYTIyMGY3Y2UzKDAwMDAwMDFhMjIwZjdj
ZTMpIHM9MDAwMDY0NTU2YTNmKDAwMDA2NDU1YWY1NikKKFhFTikgWyAgICAxLjcwMjg0MV0gUkRa
VlsxMl0gdD0wMDAwMDAxYTIyMGY3Y2UzKDAwMDAwMDFhMjIwZjdjZTMpIHM9MDAwMDY0NTU2OWY5
KDAwMDA2NDU1YWY1NikKKFhFTikgWyAgICAxLjcwNTYyNV0gUkRaVlsxM10gdD0wMDAwMDAxYTIy
MGY3Y2UzKDAwMDAwMDFhMjIwZjdjZTMpIHM9MDAwMDY0NTU2YTA5KDAwMDA2NDU1YWY1NikKKFhF
TikgWyAgICAxLjcwODUxNV0gUkRaVlsgN10gdD0wMDAwMDAxYTIyMGY3Y2UzKDAwMDAwMDFhMjIw
ZjdjZTMpIHM9MDAwMDY0NTU2YTQyKDAwMDA2NDU1YWY1NikKKFhFTikgWyAgICAxLjcxMTI5Ml0g
UkRaVlsgNl0gdD0wMDAwMDAxYTIyMGY3Y2UzKDAwMDAwMDFhMjIwZjdjZTMpIHM9MDAwMDY0NTU2
YTViKDAwMDA2NDU1YWY1NikKKFhFTikgWyAgICAxLjcxNDA2NF0gUkRaVlsxNF0gdD0wMDAwMDAx
YTIyMGY3Y2UzKDAwMDAwMDFhMjIwZjdjZTMpIHM9MDAwMDY0NTU2YTM4KDAwMDA2NDU1YWY1NikK
KFhFTikgWyAgICAxLjcxNjg0Ml0gUkRaVlsxNV0gdD0wMDAwMDAxYTIyMGY3Y2UzKDAwMDAwMDFh
MjIwZjdjZTMpIHM9MDAwMDY0NTU2YTNjKDAwMDA2NDU1YWY1NikKKFhFTikgWyAgICAxLjcxOTYy
NV0gUkRaVlsgM10gdD0wMDAwMDAxYTIyMGY3Y2UzKDAwMDAwMDFhMjIwZjdjZTMpIHM9MDAwMDY0
NTU2YTZkKDAwMDA2NDU1YWY1NikKKFhFTikgWyAgICAxLjcyMjQwMF0gUkRaVlsyM10gdD0wMDAw
MDAxYTIyMGY3Y2UzKDAwMDAwMDFhMjIwZjdjZTMpIHM9MDAwMDY0NTU2YTA4KDAwMDA2NDU1YWY1
NikKKFhFTikgWyAgICAxLjcyNTI4NV0gUkRaVlsxNl0gdD0wMDAwMDAxYTIyMGY3Y2UzKDAwMDAw
MDFhMjIwZjdjZTMpIHM9MDAwMDY0NTU2YTRmKDAwMDA2NDU1YWY1NikKKFhFTikgWyAgICAxLjcy
ODA1Nl0gUkRaVlsgMV0gdD0wMDAwMDAxYTIyMGY3Y2UzKDAwMDAwMDFhMjIwZjdjZTMpIHM9MDAw
MDY0NTU2OWY2KDAwMDA2NDU1YWY1NikKKFhFTikgWyAgICAxLjczMDgzOV0gUkRaVlsxMF0gdD0w
MDAwMDAxYTIyMGY3Y2UzKDAwMDAwMDFhMjIwZjdjZTMpIHM9MDAwMDY0NTU2YTU2KDAwMDA2NDU1
YWY1NikKKFhFTikgWyAgICAxLjczMzYxNV0gUkRaVlsxMV0gdD0wMDAwMDAxYTIyMGY3Y2UzKDAw
MDAwMDFhMjIwZjdjZTMpIHM9MDAwMDY0NTU2YTIxKDAwMDA2NDU1YWY1NikKKFhFTikgWyAgICAx
LjczNjM5N10gUkRaVlsxN10gdD0wMDAwMDAxYTIyMGY3Y2UzKDAwMDAwMDFhMjIwZjdjZTMpIHM9
MDAwMDY0NTU2YTJkKDAwMDA2NDU1YWY1NikKKFhFTikgWyAgICAxLjczOTI4NF0gUkRaVlsyMl0g
dD0wMDAwMDAxYTIyMGY3Y2UzKDAwMDAwMDFhMjIwZjdjZTMpIHM9MDAwMDY0NTU2YTJmKDAwMDA2
NDU1YWY1NikKKFhFTikgWyAgICAzLjEwMDk4MV0gUkRaVlsgMF0gdD0wMDAwMDAxYTIyMGY3Y2Uz
KDAwMDAwMDFhMjIwZjdjZTMpIHM9MDAwMDY0NTU2ODdkKDAwMDA2NDU1YWY1NikKKFhFTikgWyAg
ICAxLjc0NDgzMV0gUkRaVlsgMl0gdD0wMDAwMDAxYTIyMGY3Y2UzKDAwMDAwMDFhMjIwZjdjZTMp
IHM9MDAwMDY0NTU2YTIxKDAwMDA2NDU1YWY1NikKKFhFTikgWyAgICAxLjc0NzYwN10gUkRaVlsg
NV0gdD0wMDAwMDAxYTIyMGY3Y2UzKDAwMDAwMDFhMjIwZjdjZTMpIHM9MDAwMDY0NTU2YTFkKDAw
MDA2NDU1YWY1NikKKFhFTikgWyAgICAzLjEwOTMxM10gVFNDIGFkanVzdGVkIGJ5IGNhMDlhMTAy
CihYRU4pIFsgICAgMy4xMTA2OTldIFRTQzogZW5kIHJlbmRlenZvdXMKKFhFTikgWyAgICAzLjEx
MjA4N10gbXRycjogeW91ciBDUFVzIGhhZCBpbmNvbnNpc3RlbnQgZml4ZWQgTVRSUiBzZXR0aW5n
cwooWEVOKSBbICAgIDMuMTEzNTExXSBEb20wIGhhcyBtYXhpbXVtIDgxNiBQSVJRcwooWEVOKSBb
ICAgIDIuNDY0Mzk1XSAgWGVuICBrZXJuZWw6IDY0LWJpdCwgbHNiLCBjb21wYXQzMgooWEVOKSBb
ICAgIDIuNDY1Nzg2XSAgRG9tMCBrZXJuZWw6IDY0LWJpdCwgUEFFLCBsc2IsIHBhZGRyIDB4MTAw
MDAwMCAtPiAweDJjMmMwMDAKKFhFTikgWyAgICAyLjQ2ODk3NV0gUEhZU0lDQUwgTUVNT1JZIEFS
UkFOR0VNRU5UOgooWEVOKSBbICAgIDIuNDcwMzYyXSAgRG9tMCBhbGxvYy46ICAgMDAwMDAwMDQy
ODAwMDAwMC0+MDAwMDAwMDQyYzAwMDAwMCAoMjM3MDY3IHBhZ2VzIHRvIGJlIGFsbG9jYXRlZCkK
KFhFTikgWyAgICAyLjQ3MzE0NV0gIEluaXQuIHJhbWRpc2s6IDAwMDAwMDA0M2RlMGIwMDAtPjAw
MDAwMDA0M2ZmZmY2NjQKKFhFTikgWyAgICAyLjQ3NDU0Nl0gVklSVFVBTCBNRU1PUlkgQVJSQU5H
RU1FTlQ6CihYRU4pIFsgICAgMi40NzYwNTFdICBMb2FkZWQga2VybmVsOiBmZmZmZmZmZjgxMDAw
MDAwLT5mZmZmZmZmZjgyYzJjMDAwCihYRU4pIFsgICAgMi40Nzc0NTBdICBJbml0LiByYW1kaXNr
OiAwMDAwMDAwMDAwMDAwMDAwLT4wMDAwMDAwMDAwMDAwMDAwCihYRU4pIFsgICAgMi40Nzg4NDld
ICBQaHlzLU1hY2ggbWFwOiAwMDAwMDA4MDAwMDAwMDAwLT4wMDAwMDA4MDAwMjAwMDAwCihYRU4p
IFsgICAgMi40ODAyNDhdICBTdGFydCBpbmZvOiAgICBmZmZmZmZmZjgyYzJjMDAwLT5mZmZmZmZm
ZjgyYzJjNGI4CihYRU4pIFsgICAgMi40ODE2NDVdICBYZW5zdG9yZSByaW5nOiAwMDAwMDAwMDAw
MDAwMDAwLT4wMDAwMDAwMDAwMDAwMDAwCihYRU4pIFsgICAgMi40ODMwNDBdICBDb25zb2xlIHJp
bmc6ICAwMDAwMDAwMDAwMDAwMDAwLT4wMDAwMDAwMDAwMDAwMDAwCihYRU4pIFsgICAgMi40ODQ0
MzhdICBQYWdlIHRhYmxlczogICBmZmZmZmZmZjgyYzJkMDAwLT5mZmZmZmZmZjgyYzQ4MDAwCihY
RU4pIFsgICAgMi40ODU4MzddICBCb290IHN0YWNrOiAgICBmZmZmZmZmZjgyYzQ4MDAwLT5mZmZm
ZmZmZjgyYzQ5MDAwCihYRU4pIFsgICAgMi40ODcyMzhdICBUT1RBTDogICAgICAgICBmZmZmZmZm
ZjgwMDAwMDAwLT5mZmZmZmZmZjgzMDAwMDAwCihYRU4pIFsgICAgMi40ODg2MzddICBFTlRSWSBB
RERSRVNTOiBmZmZmZmZmZjgyODJhMTYwCihYRU4pIFsgICAgMi40OTA4NTRdIERvbTAgaGFzIG1h
eGltdW0gNCBWQ1BVcwooWEVOKSBbICAgIDIuODIwODIzXSBUU0M6IHRpbWUuYyN0aW1lX2NhbGli
cmF0aW9uX3RzY19yZW5kZXp2b3VzCihYRU4pIFsgICAgMi44MjIyMjJdIFJEWlZbIDBdIHQ9MDAw
MDAwMWFjYjYyYzI3ZgooWEVOKSBbICAgIDIuODIzNjExXSBSRFpWWyAxXSB0PTAwMDAwMDFhY2I2
MmMzYWIKKFhFTikgWyAgICAyLjgyNDk5N10gUkRaVlsgMl0gdD0wMDAwMDAxYWNiNjNlMzhkCihY
RU4pIFsgICAgMi44MjYzODRdIFJEWlZbIDNdIHQ9MDAwMDAwMWFjYjYzZTNmOQooWEVOKSBbICAg
IDIuODI3ODU1XSBSRFpWWyA1XSB0PTAwMDAwMDFhY2I2M2VkNTUKKFhFTikgWyAgICAyLjgyOTI0
MV0gUkRaVlsgNF0gdD0wMDAwMDAxYWNiNjNlOGYxCihYRU4pIFsgICAgMi44MzA2MzNdIFJEWlZb
IDddIHQ9MDAwMDAwMWFjYjYzZjViMQooWEVOKSBbICAgIDIuODMyMDIzXSBSRFpWWyA2XSB0PTAw
MDAwMDFhY2I2M2Y1YTEKKFhFTikgWyAgICAyLjgzMzQxNF0gUkRaVlsgOV0gdD0wMDAwMDAxYWNi
NjNmZmZkCihYRU4pIFsgICAgMi44MzQ4MDNdIFJEWlZbIDhdIHQ9MDAwMDAwMWFjYjYzZmZmZAoo
WEVOKSBbICAgIDIuODM2MTk3XSBSRFpWWzExXSB0PTAwMDAwMDFhY2I2NDBmYjUKKFhFTikgWyAg
ICAyLjgzNzU4OF0gUkRaVlsxMF0gdD0wMDAwMDAxYWNiNjQwZjg1CihYRU4pIFsgICAgMi44Mzg5
NzRdIFJEWlZbMTNdIHQ9MDAwMDAwMWFjYjY0MTc5ZAooWEVOKSBbICAgIDIuODQwMzYzXSBSRFpW
WzEyXSB0PTAwMDAwMDFhY2I2NDE5NzEKKFhFTikgWyAgICAyLjg0MTc1Nl0gUkRaVlsxNV0gdD0w
MDAwMDAxYWNiNjQyMzNjCihYRU4pIFsgICAgMi44NDMxNDZdIFJEWlZbMTRdIHQ9MDAwMDAwMWFj
YjY0MjFjNAooWEVOKSBbICAgIDIuODQ0NjIwXSBSRFpWWzE3XSB0PTAwMDAwMDFhY2I2NDJiMGUK
KFhFTikgWyAgICAyLjg0NjAwN10gUkRaVlsxNl0gdD0wMDAwMDAxYWNiNjQyYjIyCihYRU4pIFsg
ICAgMi44NDczOTZdIFJEWlZbMThdIHQ9MDAwMDAwMWFjYjY0MzUyNgooWEVOKSBbICAgIDIuODQ4
Nzg1XSBSRFpWWzE5XSB0PTAwMDAwMDFhY2I2NDM1MjIKKFhFTikgWyAgICAyLjg1MDE3Nl0gUkRa
VlsyMF0gdD0wMDAwMDAxYWNiNjQzZWRkCihYRU4pIFsgICAgMi44NTE1NjJdIFJEWlZbMjFdIHQ9
MDAwMDAwMWFjYjY0M2VjZAooWEVOKSBbICAgIDIuODUyOTQ5XSBSRFpWWzIyXSB0PTAwMDAwMDFh
Y2I2NDQ3YjkKKFhFTikgWyAgICAyLjg1NDMzNF0gUkRaVlsyM10gdD0wMDAwMDAxYWNiNjQ0N2Q1
CihYRU4pIFsgICAgMi44NTU3MjJdIFJEWlZbIDBdIHQ9MDAwMDAwMWFkMDVkYWE4NigwMDAwMDAx
YWQwNWRhYTg2KSBzPTAwMDBhYTM2Yzc4NygwMDAwYWEzNzY1YTkpCihYRU4pIFsgICAgMi44NTg1
MDJdIFJEWlZbMjNdIHQ9MDAwMDAwMWFkMDVkYWE4NigwMDAwMDAxYWQwNWRhYTg2KSBzPTAwMDBh
YTM2Yzk4NSgwMDAwYWEzNzY1YTkpCihYRU4pIFsgICAgMi44NjEzNjRdIFJEWlZbMjJdIHQ9MDAw
MDAwMWFkMDVkYWE4NigwMDAwMDAxYWQwNWRhYTg2KSBzPTAwMDBhYTM2Yzk5ZSgwMDAwYWEzNzY1
YTkpCihYRU4pIFsgICAgMi44NjQxNDBdIFJEWlZbIDhdIHQ9MDAwMDAwMWFkMDVkYWE4NigwMDAw
MDAxYWQwNWRhYTg2KSBzPTAwMDBhYTM2YzhmOSgwMDAwYWEzNzY1YTkpCihYRU4pIFsgICAgMi44
NjY5MjFdIFJEWlZbIDldIHQ9MDAwMDAwMWFkMDVkYWE4NigwMDAwMDAxYWQwNWRhYTg2KSBzPTAw
MDBhYTM2YzhkNCgwMDAwYWEzNzY1YTkpCihYRU4pIFsgICAgMi44Njk2OThdIFJEWlZbMjBdIHQ9
MDAwMDAwMWFkMDVkYWE4NigwMDAwMDAxYWQwNWRhYTg2KSBzPTAwMDBhYTM2Y2EzNigwMDAwYWEz
NzY1YTkpCihYRU4pIFsgICAgMi44NzI0NzJdIFJEWlZbMjFdIHQ9MDAwMDAwMWFkMDVkYWE4Nigw
MDAwMDAxYWQwNWRhYTg2KSBzPTAwMDBhYTM2Y2E4NCgwMDAwYWEzNzY1YTkpCihYRU4pIFsgICAg
Mi44NzUyNDhdIFJEWlZbMTRdIHQ9MDAwMDAwMWFkMDVkYWE4NigwMDAwMDAxYWQwNWRhYTg2KSBz
PTAwMDBhYTM2Y2I4NSgwMDAwYWEzNzY1YTkpCihYRU4pIFsgICAgMi44NzgxMzVdIFJEWlZbMTVd
IHQ9MDAwMDAwMWFkMDVkYWE4NigwMDAwMDAxYWQwNWRhYTg2KSBzPTAwMDBhYTM2Y2MxYSgwMDAw
YWEzNzY1YTkpCihYRU4pIFsgICAgMi44ODA5MTVdIFJEWlZbMTddIHQ9MDAwMDAwMWFkMDVkYWE4
NigwMDAwMDAxYWQwNWRhYTg2KSBzPTAwMDBhYTM2Y2M4NCgwMDAwYWEzNzY1YTkpCihYRU4pIFsg
ICAgMi44ODM2ODZdIFJEWlZbMTZdIHQ9MDAwMDAwMWFkMDVkYWE4NigwMDAwMDAxYWQwNWRhYTg2
KSBzPTAwMDBhYTM2Y2NiMSgwMDAwYWEzNzY1YTkpCihYRU4pIFsgICAgMi44ODY0NjRdIFJEWlZb
IDddIHQ9MDAwMDAwMWFkMDVkYWE4NigwMDAwMDAxYWQwNWRhYTg2KSBzPTAwMDBhYTM2Y2E0ZSgw
MDAwYWEzNzY1YTkpCihYRU4pIFsgICAgMi44ODkyNDZdIFJEWlZbIDZdIHQ9MDAwMDAwMWFkMDVk
YWE4NigwMDAwMDAxYWQwNWRhYTg2KSBzPTAwMDBhYTM2Y2E2YSgwMDAwYWEzNzY1YTkpCihYRU4p
IFsgICAgMi44OTIxMzVdIFJEWlZbMTJdIHQ9MDAwMDAwMWFkMDVkYWE4NigwMDAwMDAxYWQwNWRh
YTg2KSBzPTAwMDBhYTM2Y2E0MygwMDAwYWEzNzY1YTkpCihYRU4pIFsgICAgMi44OTQ5MTVdIFJE
WlZbMTNdIHQ9MDAwMDAwMWFkMDVkYWE4NigwMDAwMDAxYWQwNWRhYTg2KSBzPTAwMDBhYTM2Yzk5
MCgwMDAwYWEzNzY1YTkpCihYRU4pIFsgICAgMi44OTc2OTBdIFJEWlZbIDJdIHQ9MDAwMDAwMWFk
MDVkYWE4NigwMDAwMDAxYWQwNWRhYTg2KSBzPTAwMDBhYTM2YzlmMygwMDAwYWEzNzY1YTkpCihY
RU4pIFsgICAgMi45MDA0NjddIFJEWlZbIDFdIHQ9MDAwMDAwMWFkMDVkYWE4NigwMDAwMDAxYWQw
NWRhYTg2KSBzPTAwMDBhYTM2Yzk1YSgwMDAwYWEzNzY1YTkpCihYRU4pIFsgICAgMi45MDMyNTBd
IFJEWlZbMTBdIHQ9MDAwMDAwMWFkMDVkYWE4NigwMDAwMDAxYWQwNWRhYTg2KSBzPTAwMDBhYTM2
YzllZCgwMDAwYWEzNzY1YTkpCihYRU4pIFsgICAgMi45MDYwMzFdIFJEWlZbMTFdIHQ9MDAwMDAw
MWFkMDVkYWE4NigwMDAwMDAxYWQwNWRhYTg2KSBzPTAwMDBhYTM2YzljNSgwMDAwYWEzNzY1YTkp
CihYRU4pIFsgICAgMi45MDg5MjZdIFJEWlZbMThdIHQ9MDAwMDAwMWFkMDVkYWE4NigwMDAwMDAx
YWQwNWRhYTg2KSBzPTAwMDBhYTM2Y2EzNigwMDAwYWEzNzY1YTkpCihYRU4pIFsgICAgMi45MTE2
OTldIFJEWlZbIDNdIHQ9MDAwMDAwMWFkMDVkYWE4NigwMDAwMDAxYWQwNWRhYTg2KSBzPTAwMDBh
YTM2Y2E3MSgwMDAwYWEzNzY1YTkpCihYRU4pIFsgICAgMi45MTQ0NzhdIFJEWlZbIDVdIHQ9MDAw
MDAwMWFkMDVkYWE4NigwMDAwMDAxYWQwNWRhYTg2KSBzPTAwMDBhYTM2Y2ExZigwMDAwYWEzNzY1
YTkpCihYRU4pIFsgICAgMi45MTcyNTZdIFJEWlZbMTldIHQ9MDAwMDAwMWFkMDVkYWE4NigwMDAw
MDAxYWQwNWRhYTg2KSBzPTAwMDBhYTM2Y2EwZCgwMDAwYWEzNzY1YTkpCihYRU4pIFsgICAgMi45
MjAwMzNdIFJEWlZbIDRdIHQ9MDAwMDAwMWFkMDVkYWE4NigwMDAwMDAxYWQwNWRhYTg2KSBzPTAw
MDBhYTM2Yzg2ZCgwMDAwYWEzNzY1YTkpCihYRU4pIFsgICAgMi45MjI4MTBdIFRTQyBhZGp1c3Rl
ZCBieSA4NTcKKFhFTikgWyAgICAyLjkyNDMwNF0gVFNDOiBlbmQgcmVuZGV6dm91cwooWEVOKSBb
ICAgIDMuMzI1NTA1XSBJbml0aWFsIGxvdyBtZW1vcnkgdmlycSB0aHJlc2hvbGQgc2V0IGF0IDB4
NDAwMCBwYWdlcy4KKFhFTikgWyAgICAzLjMyNjkxMF0gU2NydWJiaW5nIEZyZWUgUkFNIG9uIDEg
bm9kZXMgdXNpbmcgMTIgQ1BVcwooWEVOKSBbICAgIDMuNDEwMTIyXSAuLi4uLi4uLi4uLi5kb25l
LgooWEVOKSBbICAgIDQuMTU2NDMyXSBTdGQuIExvZ2xldmVsOiBFcnJvcnMgYW5kIHdhcm5pbmdz
CihYRU4pIFsgICAgNC4xNTc5MjFdIEd1ZXN0IExvZ2xldmVsOiBOb3RoaW5nIChSYXRlLWxpbWl0
ZWQ6IEVycm9ycyBhbmQgd2FybmluZ3MpCihYRU4pIFsgICAgNC4xNTkzMjRdIFhlbiBpcyByZWxp
bnF1aXNoaW5nIFZHQSBjb25zb2xlLgooWEVOKSBbICAgIDQuMTYxMTcwXSAqKiogU2VyaWFsIGlu
cHV0IC0+IERPTTAgKHR5cGUgJ0NUUkwtYScgdGhyZWUgdGltZXMgdG8gc3dpdGNoIGlucHV0IHRv
IFhlbikKKFhFTikgWyAgICA0LjE2MTMzM10gRnJlZWQgNDc2a0IgaW5pdCBtZW1vcnkKKFhFTikg
WyAgICA0Ljk3NjIxOF0gVFNDOiB0aW1lLmMjdGltZV9jYWxpYnJhdGlvbl90c2NfcmVuZGV6dm91
cwooWEVOKSBbICAgIDQuOTc2MjIyXSBSRFpWWyAwXSB0PTAwMDAwMDFjMGJhMTYwNDkKKFhFTikg
WyAgICA0Ljk3NjIyNV0gUkRaVlsgMV0gdD0wMDAwMDAxYzBiYTE2NWI5CihYRU4pIFsgICAgNC45
NzYyNTNdIFJEWlZbIDNdIHQ9MDAwMDAwMWMwYmEyNmRmNwooWEVOKSBbICAgIDQuOTc2MjU1XSBS
RFpWWyAyXSB0PTAwMDAwMDFjMGJhMjZkZTcKKFhFTikgWyAgICA0Ljk3NjI1OF0gUkRaVlsgNF0g
dD0wMDAwMDAxYzBiYTI3NjdjCihYRU4pIFsgICAgNC45NzYyNjBdIFJEWlZbIDVdIHQ9MDAwMDAw
MWMwYmEyNzY4YwooWEVOKSBbICAgIDQuOTc2MjYzXSBSRFpWWyA3XSB0PTAwMDAwMDFjMGJhMjdm
OTIKKFhFTikgWyAgICA0Ljk3NjI2Nl0gUkRaVlsgNl0gdD0wMDAwMDAxYzBiYTI3ZmQ2CihYRU4p
IFsgICAgNC45NzYyNjldIFJEWlZbIDhdIHQ9MDAwMDAwMWMwYmEyOGUyNwooWEVOKSBbICAgIDQu
OTc2MjcxXSBSRFpWWyA5XSB0PTAwMDAwMDFjMGJhMjhlMTcKKFhFTikgWyAgICA0Ljk3NjI3NF0g
UkRaVlsxMF0gdD0wMDAwMDAxYzBiYTI5OWMyCihYRU4pIFsgICAgNC45NzYyNzddIFJEWlZbMTFd
IHQ9MDAwMDAwMWMwYmEyOTliZQooWEVOKSBbICAgIDQuOTc2Mjc5XSBSRFpWWzEzXSB0PTAwMDAw
MDFjMGJhMmEzN2EKKFhFTikgWyAgICA0Ljk3NjI4Ml0gUkRaVlsxMl0gdD0wMDAwMDAxYzBiYTJh
MjhlCihYRU4pIFsgICAgNC45NzYyODVdIFJEWlZbMTVdIHQ9MDAwMDAwMWMwYmEyYWNmZgooWEVO
KSBbICAgIDQuOTc2Mjg4XSBSRFpWWzE0XSB0PTAwMDAwMDFjMGJhMmFjZmYKKFhFTikgWyAgICA0
Ljk3NjI5Ml0gUkRaVlsxN10gdD0wMDAwMDAxYzBiYTJiNTY4CihYRU4pIFsgICAgNC45NzYyOTRd
IFJEWlZbMTZdIHQ9MDAwMDAwMWMwYmEyYjU3OAooWEVOKSBbICAgIDQuOTc2Mjk1XSBSRFpWWzE5
XSB0PTAwMDAwMDFjMGJhMmJlOGQKKFhFTikgWyAgICA0Ljk3NjI5N10gUkRaVlsxOF0gdD0wMDAw
MDAxYzBiYTJiZWE1CihYRU4pIFsgICAgNC45NzYzMDFdIFJEWlZbMjBdIHQ9MDAwMDAwMWMwYmEy
YzgwNgooWEVOKSBbICAgIDQuOTc2MzAzXSBSRFpWWzIxXSB0PTAwMDAwMDFjMGJhMmM3ZDIKKFhF
TikgWyAgICA0Ljk3NjMwNl0gUkRaVlsyMl0gdD0wMDAwMDAxYzBiYTJkMjNhCihYRU4pIFsgICAg
NC45NzYzMDhdIFJEWlZbMjNdIHQ9MDAwMDAwMWMwYmEyZDIzYQooWEVOKSBbICAgIDQuOTc2MzEy
XSBSRFpWWyAwXSB0PTAwMDAwMDFjMGJhNGI0YjIoMDAwMDAwMWMwYmE0YjRiMikgcz0wMDAxMjg5
YzcxM2UoMDAwMTI4OWRhM2NiKQooWEVOKSBbICAgIDQuOTc2MzE1XSBSRFpWWyAxXSB0PTAwMDAw
MDFjMGJhNGI0YjIoMDAwMDAwMWMwYmE0YjRiMikgcz0wMDAxMjg5Yzc0NzcoMDAwMTI4OWRhM2Ni
KQooWEVOKSBbICAgIDQuOTc2MzE4XSBSRFpWWyAyXSB0PTAwMDAwMDFjMGJhNGI0YjIoMDAwMDAw
MWMwYmE0YjRiMikgcz0wMDAxMjg5Yzc1M2UoMDAwMTI4OWRhM2NiKQooWEVOKSBbICAgIDQuOTc2
MzIwXSBSRFpWWyA4XSB0PTAwMDAwMDFjMGJhNGI0YjIoMDAwMDAwMWMwYmE0YjRiMikgcz0wMDAx
Mjg5Yzc1ZmIoMDAwMTI4OWRhM2NiKQooWEVOKSBbICAgIDQuOTc2MzIyXSBSRFpWWyA5XSB0PTAw
MDAwMDFjMGJhNGI0YjIoMDAwMDAwMWMwYmE0YjRiMikgcz0wMDAxMjg5Yzc1ZGEoMDAwMTI4OWRh
M2NiKQooWEVOKSBbICAgIDQuOTc2MzI1XSBSRFpWWzIxXSB0PTAwMDAwMDFjMGJhNGI0YjIoMDAw
MDAwMWMwYmE0YjRiMikgcz0wMDAxMjg5Yzc1OWYoMDAwMTI4OWRhM2NiKQooWEVOKSBbICAgIDQu
OTc2MzI4XSBSRFpWWzE4XSB0PTAwMDAwMDFjMGJhNGI0YjIoMDAwMDAwMWMwYmE0YjRiMikgcz0w
MDAxMjg5Yzc1MGUoMDAwMTI4OWRhM2NiKQooWEVOKSBbICAgIDQuOTc2MzMwXSBSRFpWWzE5XSB0
PTAwMDAwMDFjMGJhNGI0YjIoMDAwMDAwMWMwYmE0YjRiMikgcz0wMDAxMjg5Yzc0Y2QoMDAwMTI4
OWRhM2NiKQooWEVOKSBbICAgIDQuOTc2MzMyXSBSRFpWWyAzXSB0PTAwMDAwMDFjMGJhNGI0YjIo
MDAwMDAwMWMwYmE0YjRiMikgcz0wMDAxMjg5Yzc1Y2EoMDAwMTI4OWRhM2NiKQooWEVOKSBbICAg
IDQuOTc2MzM1XSBSRFpWWzIzXSB0PTAwMDAwMDFjMGJhNGI0YjIoMDAwMDAwMWMwYmE0YjRiMikg
cz0wMDAxMjg5Yzc0OGQoMDAwMTI4OWRhM2NiKQooWEVOKSBbICAgIDQuOTc2MzM4XSBSRFpWWyA2
XSB0PTAwMDAwMDFjMGJhNGI0YjIoMDAwMDAwMWMwYmE0YjRiMikgcz0wMDAxMjg5Yzc2OGMoMDAw
MTI4OWRhM2NiKQooWEVOKSBbICAgIDQuOTc2MzQwXSBSRFpWWyA3XSB0PTAwMDAwMDFjMGJhNGI0
YjIoMDAwMDAwMWMwYmE0YjRiMikgcz0wMDAxMjg5Yzc2NGEoMDAwMTI4OWRhM2NiKQooWEVOKSBb
ICAgIDQuOTc2MzQ0XSBSRFpWWzE3XSB0PTAwMDAwMDFjMGJhNGI0YjIoMDAwMDAwMWMwYmE0YjRi
Mikgcz0wMDAxMjg5YzdjZTMoMDAwMTI4OWRhM2NiKQooWEVOKSBbICAgIDQuOTc2MzQ2XSBSRFpW
WzE2XSB0PTAwMDAwMDFjMGJhNGI0YjIoMDAwMDAwMWMwYmE0YjRiMikgcz0wMDAxMjg5YzdkMTgo
MDAwMTI4OWRhM2NiKQooWEVOKSBbICAgIDQuOTc2MzQ3XSBSRFpWWyA0XSB0PTAwMDAwMDFjMGJh
NGI0YjIoMDAwMDAwMWMwYmE0YjRiMikgcz0wMDAxMjg5YzczMzYoMDAwMTI4OWRhM2NiKQooWEVO
KSBbICAgIDQuOTc2MzUwXSBSRFpWWzIwXSB0PTAwMDAwMDFjMGJhNGI0YjIoMDAwMDAwMWMwYmE0
YjRiMikgcz0wMDAxMjg5Yzc1NjEoMDAwMTI4OWRhM2NiKQooWEVOKSBbICAgIDQuOTc2MzUzXSBS
RFpWWzE1XSB0PTAwMDAwMDFjMGJhNGI0YjIoMDAwMDAwMWMwYmE0YjRiMikgcz0wMDAxMjg5Yzc3
M2EoMDAwMTI4OWRhM2NiKQooWEVOKSBbICAgIDQuOTc2MzU1XSBSRFpWWzE0XSB0PTAwMDAwMDFj
MGJhNGI0YjIoMDAwMDAwMWMwYmE0YjRiMikgcz0wMDAxMjg5Yzc2OWUoMDAwMTI4OWRhM2NiKQoo
WEVOKSBbICAgIDQuOTc2MzU3XSBSRFpWWzEzXSB0PTAwMDAwMDFjMGJhNGI0YjIoMDAwMDAwMWMw
YmE0YjRiMikgcz0wMDAxMjg5Yzc0NzcoMDAwMTI4OWRhM2NiKQooWEVOKSBbICAgIDQuOTc2MzYw
XSBSRFpWWzEyXSB0PTAwMDAwMDFjMGJhNGI0YjIoMDAwMDAwMWMwYmE0YjRiMikgcz0wMDAxMjg5
Yzc0ZDMoMDAwMTI4OWRhM2NiKQooWEVOKSBbICAgIDQuOTc2MzYxXSBSRFpWWzIyXSB0PTAwMDAw
MDFjMGJhNGI0YjIoMDAwMDAwMWMwYmE0YjRiMikgcz0wMDAxMjg5Yzc0YTkoMDAwMTI4OWRhM2Ni
KQooWEVOKSBbICAgIDQuOTc2MzY0XSBSRFpWWyA1XSB0PTAwMDAwMDFjMGJhNGI0YjIoMDAwMDAw
MWMwYmE0YjRiMikgcz0wMDAxMjg5Yzc0ZWQoMDAwMTI4OWRhM2NiKQooWEVOKSBbICAgIDQuOTc2
MzY3XSBSRFpWWzEwXSB0PTAwMDAwMDFjMGJhNGI0YjIoMDAwMDAwMWMwYmE0YjRiMikgcz0wMDAx
Mjg5Yzc4OWYoMDAwMTI4OWRhM2NiKQooWEVOKSBbICAgIDQuOTc2MzY5XSBSRFpWWzExXSB0PTAw
MDAwMDFjMGJhNGI0YjIoMDAwMDAwMWMwYmE0YjRiMikgcz0wMDAxMjg5Yzc4N2IoMDAwMTI4OWRh
M2NiKQooWEVOKSBbICAgIDQuOTc2MzY5XSBUU0MgYWRqdXN0ZWQgYnkgNmJkCihYRU4pIFsgICAg
NC45NzYzNzBdIFRTQzogZW5kIHJlbmRlenZvdXMKKFhFTikgWyAgICA4Ljk3NjUzMl0gVFNDOiB0
aW1lLmMjdGltZV9jYWxpYnJhdGlvbl90c2NfcmVuZGV6dm91cwooWEVOKSBbICAgIDguOTc2NTU2
XSBSRFpWWyAwXSB0PTAwMDAwMDFlNWU2MDdkOGYKKFhFTikgWyAgICA4Ljk3NjU1OV0gUkRaVlsg
MV0gdD0wMDAwMDAxZTVlNjA3ZmRmCihYRU4pIFsgICAgOC45NzY1NzhdIFJEWlZbIDJdIHQ9MDAw
MDAwMWU1ZTYxMzliZQooWEVOKSBbICAgIDguOTc2NTgxXSBSRFpWWyAzXSB0PTAwMDAwMDFlNWU2
MTM5YTYKKFhFTikgWyAgICA4Ljk3NjU4NF0gUkRaVlsgNF0gdD0wMDAwMDAxZTVlNjEzZDQzCihY
RU4pIFsgICAgOC45NzY1ODddIFJEWlZbIDVdIHQ9MDAwMDAwMWU1ZTYxM2QzNwooWEVOKSBbICAg
IDguOTc2NTkwXSBSRFpWWyA2XSB0PTAwMDAwMDFlNWU2MTQ4MWYKKFhFTikgWyAgICA4Ljk3NjU5
Ml0gUkRaVlsgN10gdD0wMDAwMDAxZTVlNjE0ODFmCihYRU4pIFsgICAgOC45NzY1OTVdIFJEWlZb
IDhdIHQ9MDAwMDAwMWU1ZTYxNTM3MAooWEVOKSBbICAgIDguOTc2NTk4XSBSRFpWWyA5XSB0PTAw
MDAwMDFlNWU2MTUzNzAKKFhFTikgWyAgICA4Ljk3NjYwMl0gUkRaVlsxMF0gdD0wMDAwMDAxZTVl
NjE2MjIwCihYRU4pIFsgICAgOC45NzY2MDRdIFJEWlZbMTFdIHQ9MDAwMDAwMWU1ZTYxNjI0NAoo
WEVOKSBbICAgIDguOTc2NjA2XSBSRFpWWzEzXSB0PTAwMDAwMDFlNWU2MWY2ZDgKKFhFTikgWyAg
ICA4Ljk3NjYwOV0gUkRaVlsxMl0gdD0wMDAwMDAxZTVlNjFmNTIwCihYRU4pIFsgICAgOC45NzY2
MTJdIFJEWlZbMTRdIHQ9MDAwMDAwMWU1ZTYyMDIwNQooWEVOKSBbICAgIDguOTc2NjE1XSBSRFpW
WzE1XSB0PTAwMDAwMDFlNWU2MjAyMjUKKFhFTikgWyAgICA4Ljk3NjYxOV0gUkRaVlsxNl0gdD0w
MDAwMDAxZTVlNjIwYmQ0CihYRU4pIFsgICAgOC45NzY2MjJdIFJEWlZbMTddIHQ9MDAwMDAwMWU1
ZTYyMGJkNAooWEVOKSBbICAgIDguOTc2NjIzXSBSRFpWWzE4XSB0PTAwMDAwMDFlNWU2MjE0MTQK
KFhFTikgWyAgICA4Ljk3NjYyNV0gUkRaVlsxOV0gdD0wMDAwMDAxZTVlNjIxNDE0CihYRU4pIFsg
ICAgOC45NzY2MjhdIFJEWlZbMjBdIHQ9MDAwMDAwMWU1ZTYyMjAxNgooWEVOKSBbICAgIDguOTc2
NjMwXSBSRFpWWzIxXSB0PTAwMDAwMDFlNWU2MjIwMDIKKFhFTikgWyAgICA4Ljk3NjYzM10gUkRa
VlsyMl0gdD0wMDAwMDAxZTVlNjIyODYxCihYRU4pIFsgICAgOC45NzY2MzZdIFJEWlZbMjNdIHQ9
MDAwMDAwMWU1ZTYyMjdkNQooWEVOKSBbICAgIDguOTc2NjQwXSBSRFpWWyAwXSB0PTAwMDAwMDFl
NWU2Mzk2NmQoMDAwMDAwMWU1ZTYzOTY2ZCkgcz0wMDAyMTcwYzk5NzEoMDAwMjE3MGVlZjdiKQoo
WEVOKSBbICAgIDguOTc2NjQ0XSBSRFpWWyA2XSB0PTAwMDAwMDFlNWU2Mzk2NmQoMDAwMDAwMWU1
ZTYzOTY2ZCkgcz0wMDAyMTcwY2EwNWUoMDAwMjE3MGVlZjdiKQooWEVOKSBbICAgIDguOTc2NjQ2
XSBSRFpWWyA3XSB0PTAwMDAwMDFlNWU2Mzk2NmQoMDAwMDAwMWU1ZTYzOTY2ZCkgcz0wMDAyMTcw
Y2EwMjUoMDAwMjE3MGVlZjdiKQooWEVOKSBbICAgIDguOTc2NjQ4XSBSRFpWWyAxXSB0PTAwMDAw
MDFlNWU2Mzk2NmQoMDAwMDAwMWU1ZTYzOTY2ZCkgcz0wMDAyMTcwYzllMjQoMDAwMjE3MGVlZjdi
KQooWEVOKSBbICAgIDguOTc2NjUxXSBSRFpWWzE0XSB0PTAwMDAwMDFlNWU2Mzk2NmQoMDAwMDAw
MWU1ZTYzOTY2ZCkgcz0wMDAyMTcwY2EyNGEoMDAwMjE3MGVlZjdiKQooWEVOKSBbICAgIDguOTc2
NjU0XSBSRFpWWzE1XSB0PTAwMDAwMDFlNWU2Mzk2NmQoMDAwMDAwMWU1ZTYzOTY2ZCkgcz0wMDAy
MTcwY2EzMDgoMDAwMjE3MGVlZjdiKQooWEVOKSBbICAgIDguOTc2NjU4XSBSRFpWWzE3XSB0PTAw
MDAwMDFlNWU2Mzk2NmQoMDAwMDAwMWU1ZTYzOTY2ZCkgcz0wMDAyMTcwY2E2YTgoMDAwMjE3MGVl
ZjdiKQooWEVOKSBbICAgIDguOTc2NjU4XSBSRFpWWzE4XSB0PTAwMDAwMDFlNWU2Mzk2NmQoMDAw
MDAwMWU1ZTYzOTY2ZCkgcz0wMDAyMTcwYzllY2QoMDAwMjE3MGVlZjdiKQooWEVOKSBbICAgIDgu
OTc2NjYxXSBSRFpWWzE5XSB0PTAwMDAwMDFlNWU2Mzk2NmQoMDAwMDAwMWU1ZTYzOTY2ZCkgcz0w
MDAyMTcwYzllOTQoMDAwMjE3MGVlZjdiKQooWEVOKSBbICAgIDguOTc2NjYzXSBSRFpWWzIyXSB0
PTAwMDAwMDFlNWU2Mzk2NmQoMDAwMDAwMWU1ZTYzOTY2ZCkgcz0wMDAyMTcwYzlkZWIoMDAwMjE3
MGVlZjdiKQooWEVOKSBbICAgIDguOTc2NjY4XSBSRFpWWzE2XSB0PTAwMDAwMDFlNWU2Mzk2NmQo
MDAwMDAwMWU1ZTYzOTY2ZCkgcz0wMDAyMTcwY2E2ZWMoMDAwMjE3MGVlZjdiKQooWEVOKSBbICAg
IDguOTc2NjY4XSBSRFpWWyA5XSB0PTAwMDAwMDFlNWU2Mzk2NmQoMDAwMDAwMWU1ZTYzOTY2ZCkg
cz0wMDAyMTcwYzlmMTYoMDAwMjE3MGVlZjdiKQooWEVOKSBbICAgIDguOTc2NjcxXSBSRFpWWyA4
XSB0PTAwMDAwMDFlNWU2Mzk2NmQoMDAwMDAwMWU1ZTYzOTY2ZCkgcz0wMDAyMTcwYzlmNDAoMDAw
MjE3MGVlZjdiKQooWEVOKSBbICAgIDguOTc2NjczXSBSRFpWWzIzXSB0PTAwMDAwMDFlNWU2Mzk2
NmQoMDAwMDAwMWU1ZTYzOTY2ZCkgcz0wMDAyMTcwYzlkOWQoMDAwMjE3MGVlZjdiKQooWEVOKSBb
ICAgIDguOTc2Njc1XSBSRFpWWzEzXSB0PTAwMDAwMDFlNWU2Mzk2NmQoMDAwMDAwMWU1ZTYzOTY2
ZCkgcz0wMDAyMTcwYzllMzgoMDAwMjE3MGVlZjdiKQooWEVOKSBbICAgIDguOTc2Njc4XSBSRFpW
WzEyXSB0PTAwMDAwMDFlNWU2Mzk2NmQoMDAwMDAwMWU1ZTYzOTY2ZCkgcz0wMDAyMTcwYzlkZDQo
MDAwMjE3MGVlZjdiKQooWEVOKSBbICAgIDguOTc2NjgwXSBSRFpWWyA0XSB0PTAwMDAwMDFlNWU2
Mzk2NmQoMDAwMDAwMWU1ZTYzOTY2ZCkgcz0wMDAyMTcwY2EwMjIoMDAwMjE3MGVlZjdiKQooWEVO
KSBbICAgIDguOTc2Njg0XSBSRFpWWzEwXSB0PTAwMDAwMDFlNWU2Mzk2NmQoMDAwMDAwMWU1ZTYz
OTY2ZCkgcz0wMDAyMTcwY2ExZGYoMDAwMjE3MGVlZjdiKQooWEVOKSBbICAgIDguOTc2Njg2XSBS
RFpWWzExXSB0PTAwMDAwMDFlNWU2Mzk2NmQoMDAwMDAwMWU1ZTYzOTY2ZCkgcz0wMDAyMTcwY2Ex
YTYoMDAwMjE3MGVlZjdiKQooWEVOKSBbICAgIDguOTc2Njg4XSBSRFpWWyA1XSB0PTAwMDAwMDFl
NWU2Mzk2NmQoMDAwMDAwMWU1ZTYzOTY2ZCkgcz0wMDAyMTcwY2ExZDYoMDAwMjE3MGVlZjdiKQoo
WEVOKSBbICAgIDguOTc2NjkxXSBSRFpWWyAzXSB0PTAwMDAwMDFlNWU2Mzk2NmQoMDAwMDAwMWU1
ZTYzOTY2ZCkgcz0wMDAyMTcwYzlmMmMoMDAwMjE3MGVlZjdiKQooWEVOKSBbICAgIDguOTc2Njky
XSBSRFpWWyAyXSB0PTAwMDAwMDFlNWU2Mzk2NmQoMDAwMDAwMWU1ZTYzOTY2ZCkgcz0wMDAyMTcw
YzllOTMoMDAwMjE3MGVlZjdiKQooWEVOKSBbICAgIDguOTc2Njk1XSBSRFpWWzIxXSB0PTAwMDAw
MDFlNWU2Mzk2NmQoMDAwMDAwMWU1ZTYzOTY2ZCkgcz0wMDAyMTcwYzllYjgoMDAwMjE3MGVlZjdi
KQooWEVOKSBbICAgIDguOTc2Njk3XSBSRFpWWzIwXSB0PTAwMDAwMDFlNWU2Mzk2NmQoMDAwMDAw
MWU1ZTYzOTY2ZCkgcz0wMDAyMTcwYzllNzAoMDAwMjE3MGVlZjdiKQooWEVOKSBbICAgIDguOTc2
Njk3XSBUU0MgYWRqdXN0ZWQgYnkgNWE2CihYRU4pIFsgICAgOC45NzY2OThdIFRTQzogZW5kIHJl
bmRlenZvdXMKKFhFTikgWyAgIDE2Ljk3NzA3OV0gVFNDOiB0aW1lLmMjdGltZV9jYWxpYnJhdGlv
bl90c2NfcmVuZGV6dm91cwooWEVOKSBbICAgMTYuOTc3MTAzXSBSRFpWWyAwXSB0PTAwMDAwMDIz
MDNkOWJiNmEKKFhFTikgWyAgIDE2Ljk3NzEwN10gUkRaVlsgMV0gdD0wMDAwMDAyMzAzZGE1ZGY0
CihYRU4pIFsgICAxNi45NzcxMjldIFJEWlZbIDNdIHQ9MDAwMDAwMjMwM2RiMjQ5NgooWEVOKSBb
ICAgMTYuOTc3MTMxXSBSRFpWWyAyXSB0PTAwMDAwMDIzMDNkYjI0YWEKKFhFTikgWyAgIDE2Ljk3
NzEzNl0gUkRaVlsgNF0gdD0wMDAwMDAyMzAzZGIyNmE2CihYRU4pIFsgICAxNi45NzcxMzhdIFJE
WlZbIDVdIHQ9MDAwMDAwMjMwM2RiMjZiYQooWEVOKSBbICAgMTYuOTc3MTQwXSBSRFpWWyA3XSB0
PTAwMDAwMDIzMDNkYjMxMzEKKFhFTikgWyAgIDE2Ljk3NzE0Ml0gUkRaVlsgNl0gdD0wMDAwMDAy
MzAzZGIzMTFkCihYRU4pIFsgICAxNi45NzcxNDldIFJEWlZbIDhdIHQ9MDAwMDAwMjMwM2RiNDAx
NAooWEVOKSBbICAgMTYuOTc3MTUxXSBSRFpWWyA5XSB0PTAwMDAwMDIzMDNkYjQwMjgKKFhFTikg
WyAgIDE2Ljk3NzE1MV0gUkRaVlsxMV0gdD0wMDAwMDAyMzAzZGI0YTdjCihYRU4pIFsgICAxNi45
NzcxNTNdIFJEWlZbMTBdIHQ9MDAwMDAwMjMwM2RiNGFhNAooWEVOKSBbICAgMTYuOTc3MTU2XSBS
RFpWWzEzXSB0PTAwMDAwMDIzMDNkYjU0NTMKKFhFTikgWyAgIDE2Ljk3NzE1OF0gUkRaVlsxMl0g
dD0wMDAwMDAyMzAzZGI1NDUzCihYRU4pIFsgICAxNi45NzcxNjRdIFJEWlZbMTVdIHQ9MDAwMDAw
MjMwM2RiNWQ4ZgooWEVOKSBbICAgMTYuOTc3MTY2XSBSRFpWWzE0XSB0PTAwMDAwMDIzMDNkYjVl
NGIKKFhFTikgWyAgIDE2Ljk3NzE3MV0gUkRaVlsxNl0gdD0wMDAwMDAyMzAzZGI2NWE0CihYRU4p
IFsgICAxNi45NzcxNzRdIFJEWlZbMTddIHQ9MDAwMDAwMjMwM2RiNjVjNAooWEVOKSBbICAgMTYu
OTc3MTcyXSBSRFpWWzE5XSB0PTAwMDAwMDIzMDNkYjZmNzMKKFhFTikgWyAgIDE2Ljk3NzE3NF0g
UkRaVlsxOF0gdD0wMDAwMDAyMzAzZGI2ZjdmCihYRU4pIFsgICAxNi45NzcxNzddIFJEWlZbMjFd
IHQ9MDAwMDAwMjMwM2RiNzljOQooWEVOKSBbICAgMTYuOTc3MTgwXSBSRFpWWzIwXSB0PTAwMDAw
MDIzMDNkYjc5ZDEKKFhFTikgWyAgIDE2Ljk3NzE4Ml0gUkRaVlsyM10gdD0wMDAwMDAyMzAzZGI4
MzVkCihYRU4pIFsgICAxNi45NzcxODRdIFJEWlZbMjJdIHQ9MDAwMDAwMjMwM2RiODM5MQooWEVO
KSBbICAgMTYuOTc3MTg5XSBSRFpWWyAxXSB0PTAwMDAwMDIzMDNkZDZlZmYoMDAwMDAwMjMwM2Rk
NmVmZikgcz0wMDAzZjNlYjRjYjcoMDAwM2YzZjBiZTU3KQooWEVOKSBbICAgMTYuOTc3MTkyXSBS
RFpWWzIwXSB0PTAwMDAwMDIzMDNkZDZlZmYoMDAwMDAwMjMwM2RkNmVmZikgcz0wMDAzZjNlYjRl
YjgoMDAwM2YzZjBiZTU3KQooWEVOKSBbICAgMTYuOTc3MTk0XSBSRFpWWzIyXSB0PTAwMDAwMDIz
MDNkZDZlZmYoMDAwMDAwMjMwM2RkNmVmZikgcz0wMDAzZjNlYjRkYjcoMDAwM2YzZjBiZTU3KQoo
WEVOKSBbICAgMTYuOTc3MTk3XSBSRFpWWyAzXSB0PTAwMDAwMDIzMDNkZDZlZmYoMDAwMDAwMjMw
M2RkNmVmZikgcz0wMDAzZjNlYjRlNWQoMDAwM2YzZjBiZTU3KQooWEVOKSBbICAgMTYuOTc3MjAw
XSBSRFpWWzExXSB0PTAwMDAwMDIzMDNkZDZlZmYoMDAwMDAwMjMwM2RkNmVmZikgcz0wMDAzZjNl
YjUwNmEoMDAwM2YzZjBiZTU3KQooWEVOKSBbICAgMTYuOTc3MjAzXSBSRFpWWzEwXSB0PTAwMDAw
MDIzMDNkZDZlZmYoMDAwMDAwMjMwM2RkNmVmZikgcz0wMDAzZjNlYjUwYjQoMDAwM2YzZjBiZTU3
KQooWEVOKSBbICAgMTYuOTc3MjA4XSBSRFpWWyA4XSB0PTAwMDAwMDIzMDNkZDZlZmYoMDAwMDAw
MjMwM2RkNmVmZikgcz0wMDAzZjNlYjVmMDEoMDAwM2YzZjBiZTU3KQooWEVOKSBbICAgMTYuOTc3
MjExXSBSRFpWWyA5XSB0PTAwMDAwMDIzMDNkZDZlZmYoMDAwMDAwMjMwM2RkNmVmZikgcz0wMDAz
ZjNlYjVlYWYoMDAwM2YzZjBiZTU3KQooWEVOKSBbICAgMTYuOTc3MjA5XSBSRFpWWzE5XSB0PTAw
MDAwMDIzMDNkZDZlZmYoMDAwMDAwMjMwM2RkNmVmZikgcz0wMDAzZjNlYjRmMDkoMDAwM2YzZjBi
ZTU3KQooWEVOKSBbICAgMTYuOTc3MjEyXSBSRFpWWzE4XSB0PTAwMDAwMDIzMDNkZDZlZmYoMDAw
MDAwMjMwM2RkNmVmZikgcz0wMDAzZjNlYjRmNjkoMDAwM2YzZjBiZTU3KQooWEVOKSBbICAgMTYu
OTc3MjE0XSBSRFpWWzEyXSB0PTAwMDAwMDIzMDNkZDZlZmYoMDAwMDAwMjMwM2RkNmVmZikgcz0w
MDAzZjNlYjRkZDIoMDAwM2YzZjBiZTU3KQooWEVOKSBbICAgMTYuOTc3MjE3XSBSRFpWWzEzXSB0
PTAwMDAwMDIzMDNkZDZlZmYoMDAwMDAwMjMwM2RkNmVmZikgcz0wMDAzZjNlYjRlMjIoMDAwM2Yz
ZjBiZTU3KQooWEVOKSBbICAgMTYuOTc3MjE3XSBSRFpWWyAwXSB0PTAwMDAwMDIzMDNkZDZlZmYo
MDAwMDAwMjMwM2RkNmVmZikgcz0wMDAzZjNlYjQzODEoMDAwM2YzZjBiZTU3KQooWEVOKSBbICAg
MTYuOTc3MjI3XSBSRFpWWzE3XSB0PTAwMDAwMDIzMDNkZDZlZmYoMDAwMDAwMjMwM2RkNmVmZikg
cz0wMDAzZjNlYjYwMzMoMDAwM2YzZjBiZTU3KQooWEVOKSBbICAgMTYuOTc3MjI5XSBSRFpWWzE2
XSB0PTAwMDAwMDIzMDNkZDZlZmYoMDAwMDAwMjMwM2RkNmVmZikgcz0wMDAzZjNlYjVmZWUoMDAw
M2YzZjBiZTU3KQooWEVOKSBbICAgMTYuOTc3MjI3XSBSRFpWWyA2XSB0PTAwMDAwMDIzMDNkZDZl
ZmYoMDAwMDAwMjMwM2RkNmVmZikgcz0wMDAzZjNlYjUwODIoMDAwM2YzZjBiZTU3KQooWEVOKSBb
ICAgMTYuOTc3MjI5XSBSRFpWWyA3XSB0PTAwMDAwMDIzMDNkZDZlZmYoMDAwMDAwMjMwM2RkNmVm
Zikgcz0wMDAzZjNlYjUwNjMoMDAwM2YzZjBiZTU3KQooWEVOKSBbICAgMTYuOTc3MjMxXSBSRFpW
WzIzXSB0PTAwMDAwMDIzMDNkZDZlZmYoMDAwMDAwMjMwM2RkNmVmZikgcz0wMDAzZjNlYjRjZTAo
MDAwM2YzZjBiZTU3KQooWEVOKSBbICAgMTYuOTc3MjMzXSBSRFpWWzIxXSB0PTAwMDAwMDIzMDNk
ZDZlZmYoMDAwMDAwMjMwM2RkNmVmZikgcz0wMDAzZjNlYjRkZjgoMDAwM2YzZjBiZTU3KQooWEVO
KSBbICAgMTYuOTc3MjM5XSBSRFpWWzE1XSB0PTAwMDAwMDIzMDNkZDZlZmYoMDAwMDAwMjMwM2Rk
NmVmZikgcz0wMDAzZjNlYjU4YjQoMDAwM2YzZjBiZTU3KQooWEVOKSBbICAgMTYuOTc3MjQxXSBS
RFpWWzE0XSB0PTAwMDAwMDIzMDNkZDZlZmYoMDAwMDAwMjMwM2RkNmVmZikgcz0wMDAzZjNlYjU4
MWEoMDAwM2YzZjBiZTU3KQooWEVOKSBbICAgMTYuOTc3MjQwXSBSRFpWWyAyXSB0PTAwMDAwMDIz
MDNkZDZlZmYoMDAwMDAwMjMwM2RkNmVmZikgcz0wMDAzZjNlYjRkYzcoMDAwM2YzZjBiZTU3KQoo
WEVOKSBbICAgMTYuOTc3MjQ0XSBSRFpWWyA0XSB0PTAwMDAwMDIzMDNkZDZlZmYoMDAwMDAwMjMw
M2RkNmVmZikgcz0wMDAzZjNlYjU0NDcoMDAwM2YzZjBiZTU3KQooWEVOKSBbICAgMTYuOTc3MjQ2
XSBSRFpWWyA1XSB0PTAwMDAwMDIzMDNkZDZlZmYoMDAwMDAwMjMwM2RkNmVmZikgcz0wMDAzZjNl
YjU0ZjgoMDAwM2YzZjBiZTU3KQooWEVOKSBbICAgMTYuOTc3MjQ0XSBUU0MgYWRqdXN0ZWQgYnkg
NTEzCihYRU4pIFsgICAxNi45NzcyNDZdIFRTQzogZW5kIHJlbmRlenZvdXMKKFhFTikgWyAgIDMy
Ljk3ODE3OV0gVFNDOiB0aW1lLmMjdGltZV9jYWxpYnJhdGlvbl90c2NfcmVuZGV6dm91cwooWEVO
KSBbICAgMzIuOTc4MjAzXSBSRFpWWyAwXSB0PTAwMDAwMDJjNGVjZDQ3NDkKKFhFTikgWyAgIDMy
Ljk3ODIwOV0gUkRaVlsgMV0gdD0wMDAwMDAyYzRlY2RlYzJhCihYRU4pIFsgICAzMi45NzgyMzJd
IFJEWlZbIDNdIHQ9MDAwMDAwMmM0ZWNlYWY4ZQooWEVOKSBbICAgMzIuOTc4MjM0XSBSRFpWWyAy
XSB0PTAwMDAwMDJjNGVjZWFmOGEKKFhFTikgWyAgIDMyLjk3ODIzOV0gUkRaVlsgNF0gdD0wMDAw
MDAyYzRlY2ViMjAwCihYRU4pIFsgICAzMi45NzgyNDFdIFJEWlZbIDVdIHQ9MDAwMDAwMmM0ZWNl
YjFmMAooWEVOKSBbICAgMzIuOTc4MjQ1XSBSRFpWWyA3XSB0PTAwMDAwMDJjNGVjZWJjNzgKKFhF
TikgWyAgIDMyLjk3ODI0N10gUkRaVlsgNl0gdD0wMDAwMDAyYzRlY2ViYjc0CihYRU4pIFsgICAz
Mi45NzgyNTJdIFJEWlZbIDldIHQ9MDAwMDAwMmM0ZWNlYzk1YQooWEVOKSBbICAgMzIuOTc4MjU0
XSBSRFpWWyA4XSB0PTAwMDAwMDJjNGVjZWM5NGEKKFhFTikgWyAgIDMyLjk3ODI1NF0gUkRaVlsx
MV0gdD0wMDAwMDAyYzRlY2VkNGZhCihYRU4pIFsgICAzMi45NzgyNTZdIFJEWlZbMTBdIHQ9MDAw
MDAwMmM0ZWNlZDU2ZQooWEVOKSBbICAgMzIuOTc4MjU4XSBSRFpWWzEyXSB0PTAwMDAwMDJjNGVj
ZWRkMzUKKFhFTikgWyAgIDMyLjk3ODI2MF0gUkRaVlsxM10gdD0wMDAwMDAyYzRlY2VkZTU5CihY
RU4pIFsgICAzMi45NzgyNjddIFJEWlZbMTRdIHQ9MDAwMDAwMmM0ZWNlZTdiNQooWEVOKSBbICAg
MzIuOTc4MjcwXSBSRFpWWzE1XSB0PTAwMDAwMDJjNGVjZWU5MmQKKFhFTikgWyAgIDMyLjk3ODI3
NF0gUkRaVlsxN10gdD0wMDAwMDAyYzRlY2VmMDhmCihYRU4pIFsgICAzMi45NzgyNzZdIFJEWlZb
MTZdIHQ9MDAwMDAwMmM0ZWNlZjA4YgooWEVOKSBbICAgMzIuOTc4Mjc1XSBSRFpWWzE5XSB0PTAw
MDAwMDJjNGVjZWY4ZmQKKFhFTikgWyAgIDMyLjk3ODI3OF0gUkRaVlsxOF0gdD0wMDAwMDAyYzRl
Y2VmYTIxCihYRU4pIFsgICAzMi45NzgyODFdIFJEWlZbMjBdIHQ9MDAwMDAwMmM0ZWNmMDQ5Mgoo
WEVOKSBbICAgMzIuOTc4MjgzXSBSRFpWWzIxXSB0PTAwMDAwMDJjNGVjZjAzMzIKKFhFTikgWyAg
IDMyLjk3ODI4Nl0gUkRaVlsyMl0gdD0wMDAwMDAyYzRlY2YwZWQ5CihYRU4pIFsgICAzMi45Nzgy
ODhdIFJEWlZbMjNdIHQ9MDAwMDAwMmM0ZWNmMGVkZAooWEVOKSBbICAgMzIuOTc4MjkxXSBSRFpW
WyAxXSB0PTAwMDAwMDJjNGVkMGY4MmYoMDAwMDAwMmM0ZWQwZjgyZikgcz0wMDA3YWRhOGJiNTEo
MDAwN2FkYjc2MjA5KQooWEVOKSBbICAgMzIuOTc4Mjk1XSBSRFpWWyAyXSB0PTAwMDAwMDJjNGVk
MGY4MmYoMDAwMDAwMmM0ZWQwZjgyZikgcz0wMDA3YWRhOGMxNGMoMDAwN2FkYjc2MjA5KQooWEVO
KSBbICAgMzIuOTc4Mjk3XSBSRFpWWyAzXSB0PTAwMDAwMDJjNGVkMGY4MmYoMDAwMDAwMmM0ZWQw
ZjgyZikgcz0wMDA3YWRhOGMyMzMoMDAwN2FkYjc2MjA5KQooWEVOKSBbICAgMzIuOTc4MzAzXSBS
RFpWWyA4XSB0PTAwMDAwMDJjNGVkMGY4MmYoMDAwMDAwMmM0ZWQwZjgyZikgcz0wMDA3YWRhOGQx
MzUoMDAwN2FkYjc2MjA5KQooWEVOKSBbICAgMzIuOTc4MzA1XSBSRFpWWyA5XSB0PTAwMDAwMDJj
NGVkMGY4MmYoMDAwMDAwMmM0ZWQwZjgyZikgcz0wMDA3YWRhOGQwZDEoMDAwN2FkYjc2MjA5KQoo
WEVOKSBbICAgMzIuOTc4MzA0XSBSRFpWWzE5XSB0PTAwMDAwMDJjNGVkMGY4MmYoMDAwMDAwMmM0
ZWQwZjgyZikgcz0wMDA3YWRhOGM0MDkoMDAwN2FkYjc2MjA5KQooWEVOKSBbICAgMzIuOTc4MzA2
XSBSRFpWWzEzXSB0PTAwMDAwMDJjNGVkMGY4MmYoMDAwMDAwMmM0ZWQwZjgyZikgcz0wMDA3YWRh
OGJkYzMoMDAwN2FkYjc2MjA5KQooWEVOKSBbICAgMzIuOTc4MzA4XSBSRFpWWzEyXSB0PTAwMDAw
MDJjNGVkMGY4MmYoMDAwMDAwMmM0ZWQwZjgyZikgcz0wMDA3YWRhOGJlMWQoMDAwN2FkYjc2MjA5
KQooWEVOKSBbICAgMzIuOTc4MzE0XSBSRFpWWzE0XSB0PTAwMDAwMDJjNGVkMGY4MmYoMDAwMDAw
MmM0ZWQwZjgyZikgcz0wMDA3YWRhOGNjMzkoMDAwN2FkYjc2MjA5KQooWEVOKSBbICAgMzIuOTc4
MzE3XSBSRFpWWzE1XSB0PTAwMDAwMDJjNGVkMGY4MmYoMDAwMDAwMmM0ZWQwZjgyZikgcz0wMDA3
YWRhOGNkYTcoMDAwN2FkYjc2MjA5KQooWEVOKSBbICAgMzIuOTc4MzE3XSBSRFpWWzIzXSB0PTAw
MDAwMDJjNGVkMGY4MmYoMDAwMDAwMmM0ZWQwZjgyZikgcz0wMDA3YWRhOGM0ZTMoMDAwN2FkYjc2
MjA5KQooWEVOKSBbICAgMzIuOTc4MzE5XSBSRFpWWzIxXSB0PTAwMDAwMDJjNGVkMGY4MmYoMDAw
MDAwMmM0ZWQwZjgyZikgcz0wMDA3YWRhOGMzOTIoMDAwN2FkYjc2MjA5KQooWEVOKSBbICAgMzIu
OTc4MzIzXSBSRFpWWyA0XSB0PTAwMDAwMDJjNGVkMGY4MmYoMDAwMDAwMmM0ZWQwZjgyZikgcz0w
MDA3YWRhOGM3MWMoMDAwN2FkYjc2MjA5KQooWEVOKSBbICAgMzIuOTc4MzI1XSBSRFpWWzIyXSB0
PTAwMDAwMDJjNGVkMGY4MmYoMDAwMDAwMmM0ZWQwZjgyZikgcz0wMDA3YWRhOGM1MGUoMDAwN2Fk
Yjc2MjA5KQooWEVOKSBbICAgMzIuOTc4MzIxXSBSRFpWWyAwXSB0PTAwMDAwMDJjNGVkMGY4MmYo
MDAwMDAwMmM0ZWQwZjgyZikgcz0wMDA3YWRhOGFhMjQoMDAwN2FkYjc2MjA5KQooWEVOKSBbICAg
MzIuOTc4MzMzXSBSRFpWWzE3XSB0PTAwMDAwMDJjNGVkMGY4MmYoMDAwMDAwMmM0ZWQwZjgyZikg
cz0wMDA3YWRhOGQ0MTUoMDAwN2FkYjc2MjA5KQooWEVOKSBbICAgMzIuOTc4MzM2XSBSRFpWWzE2
XSB0PTAwMDAwMDJjNGVkMGY4MmYoMDAwMDAwMmM0ZWQwZjgyZikgcz0wMDA3YWRhOGQ0OWEoMDAw
N2FkYjc2MjA5KQooWEVOKSBbICAgMzIuOTc4MzM2XSBSRFpWWyA3XSB0PTAwMDAwMDJjNGVkMGY4
MmYoMDAwMDAwMmM0ZWQwZjgyZikgcz0wMDA3YWRhOGM5Y2UoMDAwN2FkYjc2MjA5KQooWEVOKSBb
ICAgMzIuOTc4MzM4XSBSRFpWWyA2XSB0PTAwMDAwMDJjNGVkMGY4MmYoMDAwMDAwMmM0ZWQwZjgy
Zikgcz0wMDA3YWRhOGM5NjQoMDAwN2FkYjc2MjA5KQooWEVOKSBbICAgMzIuOTc4MzM5XSBSRFpW
WzIwXSB0PTAwMDAwMDJjNGVkMGY4MmYoMDAwMDAwMmM0ZWQwZjgyZikgcz0wMDA3YWRhOGM1NGMo
MDAwN2FkYjc2MjA5KQooWEVOKSBbICAgMzIuOTc4MzQyXSBSRFpWWzE4XSB0PTAwMDAwMDJjNGVk
MGY4MmYoMDAwMDAwMmM0ZWQwZjgyZikgcz0wMDA3YWRhOGM1ZmIoMDAwN2FkYjc2MjA5KQooWEVO
KSBbICAgMzIuOTc4MzQ0XSBSRFpWWzEwXSB0PTAwMDAwMDJjNGVkMGY4MmYoMDAwMDAwMmM0ZWQw
ZjgyZikgcz0wMDA3YWRhOGMzYTIoMDAwN2FkYjc2MjA5KQooWEVOKSBbICAgMzIuOTc4MzQ2XSBS
RFpWWzExXSB0PTAwMDAwMDJjNGVkMGY4MmYoMDAwMDAwMmM0ZWQwZjgyZikgcz0wMDA3YWRhOGMy
N2UoMDAwN2FkYjc2MjA5KQooWEVOKSBbICAgMzIuOTc4MzUwXSBSRFpWWyA1XSB0PTAwMDAwMDJj
NGVkMGY4MmYoMDAwMDAwMmM0ZWQwZjgyZikgcz0wMDA3YWRhOGM3ZDUoMDAwN2FkYjc2MjA5KQoo
WEVOKSBbICAgMzIuOTc4MzQ0XSBUU0MgYWRqdXN0ZWQgYnkgNWQxCihYRU4pIFsgICAzMi45Nzgz
NDVdIFRTQzogZW5kIHJlbmRlenZvdXMKKFhFTikgWyAgIDY0Ljk4MDM1NF0gVFNDOiB0aW1lLmMj
dGltZV9jYWxpYnJhdGlvbl90c2NfcmVuZGV6dm91cwooWEVOKSBbICAgNjQuOTgwMzU4XSBSRFpW
WyAwXSB0PTAwMDAwMDNlZTRiM2JkNDgKKFhFTikgWyAgIDY0Ljk4MDM2OV0gUkRaVlsgMV0gdD0w
MDAwMDAzZWU0YjNjM2E0CihYRU4pIFsgICA2NC45ODAzODVdIFJEWlZbIDNdIHQ9MDAwMDAwM2Vl
NGIzYzU2OQooWEVOKSBbICAgNjQuOTgwMzg2XSBSRFpWWyAyXSB0PTAwMDAwMDNlZTRiM2M1NjUK
KFhFTikgWyAgIDY0Ljk4MDQwNF0gUkRaVlsgNV0gdD0wMDAwMDAzZWU0YjRiOTU5CihYRU4pIFsg
ICA2NC45ODA0MDZdIFJEWlZbIDRdIHQ9MDAwMDAwM2VlNGI0Yjk2NQooWEVOKSBbICAgNjQuOTgw
NDA5XSBSRFpWWyA2XSB0PTAwMDAwMDNlZTRiNGMyYjAKKFhFTikgWyAgIDY0Ljk4MDQxMl0gUkRa
VlsgN10gdD0wMDAwMDAzZWU0YjRjMmEwCihYRU4pIFsgICA2NC45ODA0MTVdIFJEWlZbIDhdIHQ9
MDAwMDAwM2VlNGI0ZDA4ZgooWEVOKSBbICAgNjQuOTgwNDE4XSBSRFpWWyA5XSB0PTAwMDAwMDNl
ZTRiNGQwYTMKKFhFTikgWyAgIDY0Ljk4MDQxOF0gUkRaVlsxMF0gdD0wMDAwMDAzZWU0YjRkYTA2
CihYRU4pIFsgICA2NC45ODA0MjFdIFJEWlZbMTFdIHQ9MDAwMDAwM2VlNGI0ZGE0MgooWEVOKSBb
ICAgNjQuOTgwNDE5XSBSRFpWWzEzXSB0PTAwMDAwMDNlZTRiNGU2MDIKKFhFTikgWyAgIDY0Ljk4
MDQyMV0gUkRaVlsxMl0gdD0wMDAwMDAzZWU0YjRlNjUyCihYRU4pIFsgICA2NC45ODA0MjhdIFJE
WlZbMTVdIHQ9MDAwMDAwM2VlNGI0ZjA3MAooWEVOKSBbICAgNjQuOTgwNDI5XSBSRFpWWzE0XSB0
PTAwMDAwMDNlZTRiNGYwNzQKKFhFTikgWyAgIDY0Ljk4MDQzNl0gUkRaVlsxNl0gdD0wMDAwMDAz
ZWU0YjRmOTdiCihYRU4pIFsgICA2NC45ODA0MzhdIFJEWlZbMTddIHQ9MDAwMDAwM2VlNGI0Zjk3
NwooWEVOKSBbICAgNjQuOTgwNDM5XSBSRFpWWzE4XSB0PTAwMDAwMDNlZTRiNTAxODYKKFhFTikg
WyAgIDY0Ljk4MDQ0MV0gUkRaVlsxOV0gdD0wMDAwMDAzZWU0YjUwMTkyCihYRU4pIFsgICA2NC45
ODA0NDJdIFJEWlZbMjBdIHQ9MDAwMDAwM2VlNGI1MGFhYwooWEVOKSBbICAgNjQuOTgwNDQ0XSBS
RFpWWzIxXSB0PTAwMDAwMDNlZTRiNTBhYTAKKFhFTikgWyAgIDY0Ljk4MDQ0OF0gUkRaVlsyMl0g
dD0wMDAwMDAzZWU0YjUxNWYwCihYRU4pIFsgICA2NC45ODA0NTFdIFJEWlZbMjNdIHQ9MDAwMDAw
M2VlNGI1MTVjYwooWEVOKSBbICAgNjQuOTgwNDQxXSBSRFpWWyAwXSB0PTAwMDAwMDNlZTRiNmNl
M2UoMDAwMDAwM2VlNGI2Y2UzZSkgcz0wMDBmMjEyMmNhYWMoMDAwZjIxNDU3YzA4KQooWEVOKSBb
ICAgNjQuOTgwNDY2XSBSRFpWWyAzXSB0PTAwMDAwMDNlZTRiNmNlM2UoMDAwMDAwM2VlNGI2Y2Uz
ZSkgcz0wMDBmMjEyMzIxMTkoMDAwZjIxNDU3YzA4KQooWEVOKSBbICAgNjQuOTgwNDYwXSBSRFpW
WzIwXSB0PTAwMDAwMDNlZTRiNmNlM2UoMDAwMDAwM2VlNGI2Y2UzZSkgcz0wMDBmMjEyMmZlZTAo
MDAwZjIxNDU3YzA4KQooWEVOKSBbICAgNjQuOTgwNDYzXSBSRFpWWzIyXSB0PTAwMDAwMDNlZTRi
NmNlM2UoMDAwMDAwM2VlNGI2Y2UzZSkgcz0wMDBmMjEyMzAzNzAoMDAwZjIxNDU3YzA4KQooWEVO
KSBbICAgNjQuOTgwNDY3XSBSRFpWWzE5XSB0PTAwMDAwMDNlZTRiNmNlM2UoMDAwMDAwM2VlNGI2
Y2UzZSkgcz0wMDBmMjEyMzA4NDEoMDAwZjIxNDU3YzA4KQooWEVOKSBbICAgNjQuOTgwNDY5XSBS
RFpWWzE4XSB0PTAwMDAwMDNlZTRiNmNlM2UoMDAwMDAwM2VlNGI2Y2UzZSkgcz0wMDBmMjEyMzA3
ZjkoMDAwZjIxNDU3YzA4KQooWEVOKSBbICAgNjQuOTgwNDY5XSBSRFpWWzIxXSB0PTAwMDAwMDNl
ZTRiNmNlM2UoMDAwMDAwM2VlNGI2Y2UzZSkgcz0wMDBmMjEyMmZkY2MoMDAwZjIxNDU3YzA4KQoo
WEVOKSBbICAgNjQuOTgwNDczXSBSRFpWWzIzXSB0PTAwMDAwMDNlZTRiNmNlM2UoMDAwMDAwM2Vl
NGI2Y2UzZSkgcz0wMDBmMjEyMzA2ODEoMDAwZjIxNDU3YzA4KQooWEVOKSBbICAgNjQuOTgwNDgw
XSBSRFpWWyA1XSB0PTAwMDAwMDNlZTRiNmNlM2UoMDAwMDAwM2VlNGI2Y2UzZSkgcz0wMDBmMjEy
MzE0NWEoMDAwZjIxNDU3YzA4KQooWEVOKSBbICAgNjQuOTgwNDgyXSBSRFpWWyA0XSB0PTAwMDAw
MDNlZTRiNmNlM2UoMDAwMDAwM2VlNGI2Y2UzZSkgcz0wMDBmMjEyMzE0OWYoMDAwZjIxNDU3YzA4
KQooWEVOKSBbICAgNjQuOTgwNDgyXSBSRFpWWzEwXSB0PTAwMDAwMDNlZTRiNmNlM2UoMDAwMDAw
M2VlNGI2Y2UzZSkgcz0wMDBmMjEyMzBhNmQoMDAwZjIxNDU3YzA4KQooWEVOKSBbICAgNjQuOTgw
NDg1XSBSRFpWWzExXSB0PTAwMDAwMDNlZTRiNmNlM2UoMDAwMDAwM2VlNGI2Y2UzZSkgcz0wMDBm
MjEyMzBiZTMoMDAwZjIxNDU3YzA4KQooWEVOKSBbICAgNjQuOTgwNDg0XSBSRFpWWzE0XSB0PTAw
MDAwMDNlZTRiNmNlM2UoMDAwMDAwM2VlNGI2Y2UzZSkgcz0wMDBmMjEyMzAwZjcoMDAwZjIxNDU3
YzA4KQooWEVOKSBbICAgNjQuOTgwNDg4XSBSRFpWWzE1XSB0PTAwMDAwMDNlZTRiNmNlM2UoMDAw
MDAwM2VlNGI2Y2UzZSkgcz0wMDBmMjEyMzA3M2EoMDAwZjIxNDU3YzA4KQooWEVOKSBbICAgNjQu
OTgwNDk0XSBSRFpWWyA2XSB0PTAwMDAwMDNlZTRiNmNlM2UoMDAwMDAwM2VlNGI2Y2UzZSkgcz0w
MDBmMjEyMzExMmYoMDAwZjIxNDU3YzA4KQooWEVOKSBbICAgNjQuOTgwNDk2XSBSRFpWWyA3XSB0
PTAwMDAwMDNlZTRiNmNlM2UoMDAwMDAwM2VlNGI2Y2UzZSkgcz0wMDBmMjEyMzEyZmYoMDAwZjIx
NDU3YzA4KQooWEVOKSBbICAgNjQuOTgwNDk5XSBSRFpWWyA4XSB0PTAwMDAwMDNlZTRiNmNlM2Uo
MDAwMDAwM2VlNGI2Y2UzZSkgcz0wMDBmMjEyMzE1MTUoMDAwZjIxNDU3YzA4KQooWEVOKSBbICAg
NjQuOTgwNTAyXSBSRFpWWyA5XSB0PTAwMDAwMDNlZTRiNmNlM2UoMDAwMDAwM2VlNGI2Y2UzZSkg
cz0wMDBmMjEyMzE3NzMoMDAwZjIxNDU3YzA4KQooWEVOKSBbICAgNjQuOTgwNDk2XSBSRFpWWzEy
XSB0PTAwMDAwMDNlZTRiNmNlM2UoMDAwMDAwM2VlNGI2Y2UzZSkgcz0wMDBmMjEyMmY3Y2QoMDAw
ZjIxNDU3YzA4KQooWEVOKSBbICAgNjQuOTgwNDk5XSBSRFpWWzEzXSB0PTAwMDAwMDNlZTRiNmNl
M2UoMDAwMDAwM2VlNGI2Y2UzZSkgcz0wMDBmMjEyMmY5NzAoMDAwZjIxNDU3YzA4KQooWEVOKSBb
ICAgNjQuOTgwNTA4XSBSRFpWWzE3XSB0PTAwMDAwMDNlZTRiNmNlM2UoMDAwMDAwM2VlNGI2Y2Uz
ZSkgcz0wMDBmMjEyMzExMjYoMDAwZjIxNDU3YzA4KQooWEVOKSBbICAgNjQuOTgwNTEwXSBSRFpW
WzE2XSB0PTAwMDAwMDNlZTRiNmNlM2UoMDAwMDAwM2VlNGI2Y2UzZSkgcz0wMDBmMjEyMzExNDIo
MDAwZjIxNDU3YzA4KQooWEVOKSBbICAgNjQuOTgwNTAzXSBSRFpWWyAxXSB0PTAwMDAwMDNlZTRi
NmNlM2UoMDAwMDAwM2VlNGI2Y2UzZSkgcz0wMDBmMjEyMmVkYzMoMDAwZjIxNDU3YzA4KQooWEVO
KSBbICAgNjQuOTgwNTE4XSBSRFpWWyAyXSB0PTAwMDAwMDNlZTRiNmNlM2UoMDAwMDAwM2VlNGI2
Y2UzZSkgcz0wMDBmMjEyMzFlYzgoMDAwZjIxNDU3YzA4KQooWEVOKSBbICAgNjQuOTgwNDk4XSBU
U0MgYWRqdXN0ZWQgYnkgMTRlCihYRU4pIFsgICA2NC45ODA0OTldIFRTQzogZW5kIHJlbmRlenZv
dXMK
--000000000000f9517405ba476c04
Content-Type: text/plain; charset="UTF-8"; name="kernel-dmesg.txt"
Content-Disposition: attachment; filename="kernel-dmesg.txt"
Content-Transfer-Encoding: base64
Content-ID: <f_kkmoout51>
X-Attachment-Id: f_kkmoout51

WyAgICAwLjAwMDAwMF0gTGludXggdmVyc2lvbiA1LjEwLjAtMS1hbWQ2NCAoZGViaWFuLWtlcm5l
bEBsaXN0cy5kZWJpYW4ub3JnKSAoZ2NjLTEwIChEZWJpYW4gMTAuMi4xLTMpIDEwLjIuMSAyMDIw
MTIyNCwgR05VIGxkIChHTlUgQmludXRpbHMgZm9yIERlYmlhbikgMi4zNS4xKSAjMSBTTVAgRGVi
aWFuIDUuMTAuNC0xICgyMDIwLTEyLTMxKQpbICAgIDAuMDAwMDAwXSBDb21tYW5kIGxpbmU6IHBs
YWNlaG9sZGVyIHJvb3Q9VVVJRD1mZmQyZTQ0ZS0zZWI3LTRjNGQtYTAyOC1kYWQ3ZGEwM2M4MzEg
cm8gbG9nbGV2ZWw9MyBlYXJseXByaW50az14ZW4KWyAgICAwLjAwMDAwMF0geDg2L2ZwdTogU3Vw
cG9ydGluZyBYU0FWRSBmZWF0dXJlIDB4MDAxOiAneDg3IGZsb2F0aW5nIHBvaW50IHJlZ2lzdGVy
cycKWyAgICAwLjAwMDAwMF0geDg2L2ZwdTogU3VwcG9ydGluZyBYU0FWRSBmZWF0dXJlIDB4MDAy
OiAnU1NFIHJlZ2lzdGVycycKWyAgICAwLjAwMDAwMF0geDg2L2ZwdTogU3VwcG9ydGluZyBYU0FW
RSBmZWF0dXJlIDB4MDA0OiAnQVZYIHJlZ2lzdGVycycKWyAgICAwLjAwMDAwMF0geDg2L2ZwdTog
eHN0YXRlX29mZnNldFsyXTogIDU3NiwgeHN0YXRlX3NpemVzWzJdOiAgMjU2ClsgICAgMC4wMDAw
MDBdIHg4Ni9mcHU6IEVuYWJsZWQgeHN0YXRlIGZlYXR1cmVzIDB4NywgY29udGV4dCBzaXplIGlz
IDgzMiBieXRlcywgdXNpbmcgJ3N0YW5kYXJkJyBmb3JtYXQuClsgICAgMC4wMDAwMDBdIFJlbGVh
c2VkIDAgcGFnZShzKQpbICAgIDAuMDAwMDAwXSBCSU9TLXByb3ZpZGVkIHBoeXNpY2FsIFJBTSBt
YXA6ClsgICAgMC4wMDAwMDBdIFhlbjogW21lbSAweDAwMDAwMDAwMDAwMDAwMDAtMHgwMDAwMDAw
MDAwMDlkZmZmXSB1c2FibGUKWyAgICAwLjAwMDAwMF0gWGVuOiBbbWVtIDB4MDAwMDAwMDAwMDA5
ZTgwMC0weDAwMDAwMDAwMDAwZmZmZmZdIHJlc2VydmVkClsgICAgMC4wMDAwMDBdIFhlbjogW21l
bSAweDAwMDAwMDAwMDAxMDAwMDAtMHgwMDAwMDAwMDgwMDYxZmZmXSB1c2FibGUKWyAgICAwLjAw
MDAwMF0gWGVuOiBbbWVtIDB4MDAwMDAwMDBiYTk1MjAwMC0weDAwMDAwMDAwYmE5OGFmZmZdIHJl
c2VydmVkClsgICAgMC4wMDAwMDBdIFhlbjogW21lbSAweDAwMDAwMDAwYmFiYzMwMDAtMHgwMDAw
MDAwMGJiMWJmZmZmXSBBQ1BJIE5WUwpbICAgIDAuMDAwMDAwXSBYZW46IFttZW0gMHgwMDAwMDAw
MGJiMWMwMDAwLTB4MDAwMDAwMDBiYjg0MmZmZl0gcmVzZXJ2ZWQKWyAgICAwLjAwMDAwMF0gWGVu
OiBbbWVtIDB4MDAwMDAwMDBiYjg0NDAwMC0weDAwMDAwMDAwYmI4YzlmZmZdIEFDUEkgTlZTClsg
ICAgMC4wMDAwMDBdIFhlbjogW21lbSAweDAwMDAwMDAwYmJkMGYwMDAtMHgwMDAwMDAwMGJiZmYz
ZmZmXSByZXNlcnZlZApbICAgIDAuMDAwMDAwXSBYZW46IFttZW0gMHgwMDAwMDAwMGQwMDAwMDAw
LTB4MDAwMDAwMDBkZmZmZmZmZl0gcmVzZXJ2ZWQKWyAgICAwLjAwMDAwMF0gWGVuOiBbbWVtIDB4
MDAwMDAwMDBmYmZmYzAwMC0weDAwMDAwMDAwZmJmZmNmZmZdIHJlc2VydmVkClsgICAgMC4wMDAw
MDBdIFhlbjogW21lbSAweDAwMDAwMDAwZmVjMDAwMDAtMHgwMDAwMDAwMGZlYzAxZmZmXSByZXNl
cnZlZApbICAgIDAuMDAwMDAwXSBYZW46IFttZW0gMHgwMDAwMDAwMGZlZDFjMDAwLTB4MDAwMDAw
MDBmZWQxZmZmZl0gcmVzZXJ2ZWQKWyAgICAwLjAwMDAwMF0gWGVuOiBbbWVtIDB4MDAwMDAwMDBm
ZWUwMDAwMC0weDAwMDAwMDAwZmVlZmZmZmZdIHJlc2VydmVkClsgICAgMC4wMDAwMDBdIFhlbjog
W21lbSAweDAwMDAwMDAwZmYwMDAwMDAtMHgwMDAwMDAwMGZmZmZmZmZmXSByZXNlcnZlZApbICAg
IDAuMDAwMDAwXSBOWCAoRXhlY3V0ZSBEaXNhYmxlKSBwcm90ZWN0aW9uOiBhY3RpdmUKWyAgICAw
LjAwMDAwMF0gU01CSU9TIDIuNyBwcmVzZW50LgpbICAgIDAuMDAwMDAwXSBETUk6IFRvIGJlIGZp
bGxlZCBieSBPLkUuTS4gVG8gYmUgZmlsbGVkIGJ5IE8uRS5NLi9JbnRlbCBYNzksIEJJT1MgNC42
LjUgMDcvMTcvMjAxOQpbICAgIDAuMDAwMDAwXSBIeXBlcnZpc29yIGRldGVjdGVkOiBYZW4gUFYK
WyAgICAwLjA0NjgwN10gdHNjOiBGYXN0IFRTQyBjYWxpYnJhdGlvbiB1c2luZyBQSVQKWyAgICAw
LjA0NjgwOV0gdHNjOiBEZXRlY3RlZCAyNDk0LjQwNiBNSHogcHJvY2Vzc29yClsgICAgMC4wNDY4
MTBdIHRzYzogRGV0ZWN0ZWQgMjQ5NC4zMzggTUh6IFRTQwpbICAgIDAuMDUyMjU3XSBlODIwOiB1
cGRhdGUgW21lbSAweDAwMDAwMDAwLTB4MDAwMDBmZmZdIHVzYWJsZSA9PT4gcmVzZXJ2ZWQKWyAg
ICAwLjA1MjI2MF0gZTgyMDogcmVtb3ZlIFttZW0gMHgwMDBhMDAwMC0weDAwMGZmZmZmXSB1c2Fi
bGUKWyAgICAwLjA1MjI2Nl0gbGFzdF9wZm4gPSAweDgwMDYyIG1heF9hcmNoX3BmbiA9IDB4NDAw
MDAwMDAwClsgICAgMC4wNTIyNjddIERpc2FibGVkClsgICAgMC4wNTIyNjhdIHg4Ni9QQVQ6IE1U
UlJzIGRpc2FibGVkLCBza2lwcGluZyBQQVQgaW5pdGlhbGl6YXRpb24gdG9vLgpbICAgIDAuMDUy
Mjc0XSB4ODYvUEFUOiBDb25maWd1cmF0aW9uIFswLTddOiBXQiAgV1QgIFVDLSBVQyAgV0MgIFdQ
ICBVQyAgVUMgIApbICAgIDAuMDY3MDg3XSBLZXJuZWwvVXNlciBwYWdlIHRhYmxlcyBpc29sYXRp
b246IGRpc2FibGVkIG9uIFhFTiBQVi4KWyAgICAwLjY3OTM2OV0gUkFNRElTSzogW21lbSAweDA0
MDAwMDAwLTB4MDYxZjRmZmZdClsgICAgMC42NzkzODJdIEFDUEk6IEVhcmx5IHRhYmxlIGNoZWNr
c3VtIHZlcmlmaWNhdGlvbiBkaXNhYmxlZApbICAgIDAuNjg1NTM2XSBBQ1BJOiBSU0RQIDB4MDAw
MDAwMDAwMDBGMDRBMCAwMDAwMjQgKHYwMiBBTEFTS0EpClsgICAgMC42ODU1NDddIEFDUEk6IFhT
RFQgMHgwMDAwMDAwMEJCMEREMDcwIDAwMDA1QyAodjAxIEFMQVNLQSBBIE0gSSAgICAwMTA3MjAw
OSBBTUkgIDAwMDEwMDEzKQpbICAgIDAuNjg1NTc0XSBBQ1BJOiBGQUNQIDB4MDAwMDAwMDBCQjBF
NTcyOCAwMDAxMEMgKHYwNSBBTEFTS0EgQSBNIEkgICAgMDEwNzIwMDkgQU1JICAwMDAxMDAxMykK
WyAgICAwLjY4NTY0MF0gQUNQSTogRFNEVCAweDAwMDAwMDAwQkIwREQxNjAgMDA4NUM3ICh2MDIg
QUxBU0tBIEEgTSBJICAgIDAwMDAwMDIwIElOVEwgMjAwNTExMTcpClsgICAgMC42ODU2NTVdIEFD
UEk6IEZBQ1MgMHgwMDAwMDAwMEJCMUI3RjgwIDAwMDA0MApbICAgIDAuNjg1NjY5XSBBQ1BJOiBB
UElDIDB4MDAwMDAwMDBCQjBFNTgzOCAwMDAxQTggKHYwMyBBTEFTS0EgQSBNIEkgICAgMDEwNzIw
MDkgQU1JICAwMDAxMDAxMykKWyAgICAwLjY4NTY4M10gQUNQSTogRlBEVCAweDAwMDAwMDAwQkIw
RTU5RTAgMDAwMDQ0ICh2MDEgQUxBU0tBIEEgTSBJICAgIDAxMDcyMDA5IEFNSSAgMDAwMTAwMTMp
ClsgICAgMC42ODU2OThdIEFDUEk6IE1DRkcgMHgwMDAwMDAwMEJCMEU1QTI4IDAwMDAzQyAodjAx
IEFMQVNLQSBPRU1NQ0ZHLiAwMTA3MjAwOSBNU0ZUIDAwMDAwMDk3KQpbICAgIDAuNjg1NzEyXSBB
Q1BJOiBIUEVUIDB4MDAwMDAwMDBCQjBFNUE2OCAwMDAwMzggKHYwMSBBTEFTS0EgQSBNIEkgICAg
MDEwNzIwMDkgQU1JLiAwMDAwMDAwNSkKWyAgICAwLjY4NTcyN10gQUNQSTogU1NEVCAweDAwMDAw
MDAwQkIwRTVBQTAgMENEMzgwICh2MDIgSU5URUwgIENwdVBtICAgIDAwMDA0MDAwIElOVEwgMjAw
NTExMTcpClsgICAgMC42ODU3NDJdIEFDUEk6IFJNQUQgMHgwMDAwMDAwMEJCMUIyRTIwIDAwMDBF
QyAodjAxIEEgTSBJICBPRU1ETUFSICAwMDAwMDAwMSBJTlRMIDAwMDAwMDAxKQpbICAgIDAuNjg1
NzgwXSBBQ1BJOiBMb2NhbCBBUElDIGFkZHJlc3MgMHhmZWUwMDAwMApbICAgIDAuNjg1NzgyXSBT
ZXR0aW5nIEFQSUMgcm91dGluZyB0byBYZW4gUFYuClsgICAgMC42ODU4MTVdIE5VTUEgdHVybmVk
IG9mZgpbICAgIDAuNjg1ODE2XSBGYWtpbmcgYSBub2RlIGF0IFttZW0gMHgwMDAwMDAwMDAwMDAw
MDAwLTB4MDAwMDAwMDA4MDA2MWZmZl0KWyAgICAwLjY4NTgyOV0gTk9ERV9EQVRBKDApIGFsbG9j
YXRlZCBbbWVtIDB4M2ZiZjcwMDAtMHgzZmMyMGZmZl0KWyAgICAwLjY5ODk5N10gWm9uZSByYW5n
ZXM6ClsgICAgMC42OTg5OThdICAgRE1BICAgICAgW21lbSAweDAwMDAwMDAwMDAwMDEwMDAtMHgw
MDAwMDAwMDAwZmZmZmZmXQpbICAgIDAuNjk5MDAwXSAgIERNQTMyICAgIFttZW0gMHgwMDAwMDAw
MDAxMDAwMDAwLTB4MDAwMDAwMDA4MDA2MWZmZl0KWyAgICAwLjY5OTAwMl0gICBOb3JtYWwgICBl
bXB0eQpbICAgIDAuNjk5MDAzXSAgIERldmljZSAgIGVtcHR5ClsgICAgMC42OTkwMDRdIE1vdmFi
bGUgem9uZSBzdGFydCBmb3IgZWFjaCBub2RlClsgICAgMC42OTkwMDhdIEVhcmx5IG1lbW9yeSBu
b2RlIHJhbmdlcwpbICAgIDAuNjk5MDA5XSAgIG5vZGUgICAwOiBbbWVtIDB4MDAwMDAwMDAwMDAw
MTAwMC0weDAwMDAwMDAwMDAwOWRmZmZdClsgICAgMC42OTkwMTBdICAgbm9kZSAgIDA6IFttZW0g
MHgwMDAwMDAwMDAwMTAwMDAwLTB4MDAwMDAwMDA4MDA2MWZmZl0KWyAgICAwLjY5OTMzMl0gWmVy
b2VkIHN0cnVjdCBwYWdlIGluIHVuYXZhaWxhYmxlIHJhbmdlczogMzI3NjkgcGFnZXMKWyAgICAw
LjY5OTMzNF0gSW5pdG1lbSBzZXR1cCBub2RlIDAgW21lbSAweDAwMDAwMDAwMDAwMDEwMDAtMHgw
MDAwMDAwMDgwMDYxZmZmXQpbICAgIDAuNjk5MzM1XSBPbiBub2RlIDAgdG90YWxwYWdlczogNTI0
Mjg3ClsgICAgMC42OTkzMzddICAgRE1BIHpvbmU6IDY0IHBhZ2VzIHVzZWQgZm9yIG1lbW1hcApb
ICAgIDAuNjk5MzM3XSAgIERNQSB6b25lOiAyMSBwYWdlcyByZXNlcnZlZApbICAgIDAuNjk5MzM4
XSAgIERNQSB6b25lOiAzOTk3IHBhZ2VzLCBMSUZPIGJhdGNoOjAKWyAgICAwLjY5OTM4OV0gICBE
TUEzMiB6b25lOiA4MTMwIHBhZ2VzIHVzZWQgZm9yIG1lbW1hcApbICAgIDAuNjk5MzkwXSAgIERN
QTMyIHpvbmU6IDUyMDI5MCBwYWdlcywgTElGTyBiYXRjaDo2MwpbICAgIDAuNzAwMTE4XSBwMm0g
dmlydHVhbCBhcmVhIGF0IChfX19fcHRydmFsX19fXyksIHNpemUgaXMgNDAwMDAwMDAKWyAgICAw
LjkyNjU0OV0gUmVtYXBwZWQgOTggcGFnZShzKQpbICAgIDAuOTI4NjM5XSBBQ1BJOiBQTS1UaW1l
ciBJTyBQb3J0OiAweDQwOApbICAgIDAuOTI4NjQ2XSBBQ1BJOiBMb2NhbCBBUElDIGFkZHJlc3Mg
MHhmZWUwMDAwMApbICAgIDAuOTI4NzE4XSBBQ1BJOiBMQVBJQ19OTUkgKGFjcGlfaWRbMHgwMF0g
aGlnaCBlZGdlIGxpbnRbMHgxXSkKWyAgICAwLjkyODcyMF0gQUNQSTogTEFQSUNfTk1JIChhY3Bp
X2lkWzB4MDJdIGhpZ2ggZWRnZSBsaW50WzB4MV0pClsgICAgMC45Mjg3MjJdIEFDUEk6IExBUElD
X05NSSAoYWNwaV9pZFsweDA0XSBoaWdoIGVkZ2UgbGludFsweDFdKQpbICAgIDAuOTI4NzIzXSBB
Q1BJOiBMQVBJQ19OTUkgKGFjcGlfaWRbMHgwNl0gaGlnaCBlZGdlIGxpbnRbMHgxXSkKWyAgICAw
LjkyODcyNV0gQUNQSTogTEFQSUNfTk1JIChhY3BpX2lkWzB4MDhdIGhpZ2ggZWRnZSBsaW50WzB4
MV0pClsgICAgMC45Mjg3MjZdIEFDUEk6IExBUElDX05NSSAoYWNwaV9pZFsweDBhXSBoaWdoIGVk
Z2UgbGludFsweDFdKQpbICAgIDAuOTI4NzI4XSBBQ1BJOiBMQVBJQ19OTUkgKGFjcGlfaWRbMHgw
Y10gaGlnaCBlZGdlIGxpbnRbMHgxXSkKWyAgICAwLjkyODcyOV0gQUNQSTogTEFQSUNfTk1JIChh
Y3BpX2lkWzB4MGVdIGhpZ2ggZWRnZSBsaW50WzB4MV0pClsgICAgMC45Mjg3MzBdIEFDUEk6IExB
UElDX05NSSAoYWNwaV9pZFsweDEwXSBoaWdoIGVkZ2UgbGludFsweDFdKQpbICAgIDAuOTI4NzMy
XSBBQ1BJOiBMQVBJQ19OTUkgKGFjcGlfaWRbMHgxMl0gaGlnaCBlZGdlIGxpbnRbMHgxXSkKWyAg
ICAwLjkyODczM10gQUNQSTogTEFQSUNfTk1JIChhY3BpX2lkWzB4MTRdIGhpZ2ggZWRnZSBsaW50
WzB4MV0pClsgICAgMC45Mjg3MzVdIEFDUEk6IExBUElDX05NSSAoYWNwaV9pZFsweDE2XSBoaWdo
IGVkZ2UgbGludFsweDFdKQpbICAgIDAuOTI4NzM2XSBBQ1BJOiBMQVBJQ19OTUkgKGFjcGlfaWRb
MHgwMV0gaGlnaCBlZGdlIGxpbnRbMHgxXSkKWyAgICAwLjkyODczOF0gQUNQSTogTEFQSUNfTk1J
IChhY3BpX2lkWzB4MDNdIGhpZ2ggZWRnZSBsaW50WzB4MV0pClsgICAgMC45Mjg3MzldIEFDUEk6
IExBUElDX05NSSAoYWNwaV9pZFsweDA1XSBoaWdoIGVkZ2UgbGludFsweDFdKQpbICAgIDAuOTI4
NzQxXSBBQ1BJOiBMQVBJQ19OTUkgKGFjcGlfaWRbMHgwN10gaGlnaCBlZGdlIGxpbnRbMHgxXSkK
WyAgICAwLjkyODc0Ml0gQUNQSTogTEFQSUNfTk1JIChhY3BpX2lkWzB4MDldIGhpZ2ggZWRnZSBs
aW50WzB4MV0pClsgICAgMC45Mjg3NDNdIEFDUEk6IExBUElDX05NSSAoYWNwaV9pZFsweDBiXSBo
aWdoIGVkZ2UgbGludFsweDFdKQpbICAgIDAuOTI4NzQ1XSBBQ1BJOiBMQVBJQ19OTUkgKGFjcGlf
aWRbMHgwZF0gaGlnaCBlZGdlIGxpbnRbMHgxXSkKWyAgICAwLjkyODc0Nl0gQUNQSTogTEFQSUNf
Tk1JIChhY3BpX2lkWzB4MGZdIGhpZ2ggZWRnZSBsaW50WzB4MV0pClsgICAgMC45Mjg3NDddIEFD
UEk6IExBUElDX05NSSAoYWNwaV9pZFsweDExXSBoaWdoIGVkZ2UgbGludFsweDFdKQpbICAgIDAu
OTI4NzQ5XSBBQ1BJOiBMQVBJQ19OTUkgKGFjcGlfaWRbMHgxM10gaGlnaCBlZGdlIGxpbnRbMHgx
XSkKWyAgICAwLjkyODc1MF0gQUNQSTogTEFQSUNfTk1JIChhY3BpX2lkWzB4MTVdIGhpZ2ggZWRn
ZSBsaW50WzB4MV0pClsgICAgMC45Mjg3NTJdIEFDUEk6IExBUElDX05NSSAoYWNwaV9pZFsweDE3
XSBoaWdoIGVkZ2UgbGludFsweDFdKQpbICAgIDAuOTI4NzgwXSBJT0FQSUNbMF06IGFwaWNfaWQg
MCwgdmVyc2lvbiAzMiwgYWRkcmVzcyAweGZlYzAwMDAwLCBHU0kgMC0yMwpbICAgIDAuOTI4Nzkz
XSBJT0FQSUNbMV06IGFwaWNfaWQgMiwgdmVyc2lvbiAzMiwgYWRkcmVzcyAweGZlYzAxMDAwLCBH
U0kgMjQtNDcKWyAgICAwLjkyODgxMV0gQUNQSTogSU5UX1NSQ19PVlIgKGJ1cyAwIGJ1c19pcnEg
MCBnbG9iYWxfaXJxIDIgZGZsIGRmbCkKWyAgICAwLjkyODgxNF0gQUNQSTogSU5UX1NSQ19PVlIg
KGJ1cyAwIGJ1c19pcnEgOSBnbG9iYWxfaXJxIDkgaGlnaCBsZXZlbCkKWyAgICAwLjkyODgyMl0g
QUNQSTogSVJRMCB1c2VkIGJ5IG92ZXJyaWRlLgpbICAgIDAuOTI4ODI0XSBBQ1BJOiBJUlE5IHVz
ZWQgYnkgb3ZlcnJpZGUuClsgICAgMC45Mjg4NDBdIFVzaW5nIEFDUEkgKE1BRFQpIGZvciBTTVAg
Y29uZmlndXJhdGlvbiBpbmZvcm1hdGlvbgpbICAgIDAuOTI4ODQ1XSBBQ1BJOiBIUEVUIGlkOiAw
eDgwODZhNzAxIGJhc2U6IDB4ZmVkMDAwMDAKWyAgICAwLjkyODg1NF0gc21wYm9vdDogQWxsb3dp
bmcgMjQgQ1BVcywgMCBob3RwbHVnIENQVXMKWyAgICAwLjkyODg3M10gUE06IGhpYmVybmF0aW9u
OiBSZWdpc3RlcmVkIG5vc2F2ZSBtZW1vcnk6IFttZW0gMHgwMDAwMDAwMC0weDAwMDAwZmZmXQpb
ICAgIDAuOTI4ODc1XSBQTTogaGliZXJuYXRpb246IFJlZ2lzdGVyZWQgbm9zYXZlIG1lbW9yeTog
W21lbSAweDAwMDllMDAwLTB4MDAwOWVmZmZdClsgICAgMC45Mjg4NzZdIFBNOiBoaWJlcm5hdGlv
bjogUmVnaXN0ZXJlZCBub3NhdmUgbWVtb3J5OiBbbWVtIDB4MDAwOWYwMDAtMHgwMDBmZmZmZl0K
WyAgICAwLjkyODg3OF0gW21lbSAweDgwMDYyMDAwLTB4YmE5NTFmZmZdIGF2YWlsYWJsZSBmb3Ig
UENJIGRldmljZXMKWyAgICAwLjkyODg4MV0gQm9vdGluZyBwYXJhdmlydHVhbGl6ZWQga2VybmVs
IG9uIFhlbgpbICAgIDAuOTI4ODgyXSBYZW4gdmVyc2lvbjogNC4xMS40IChwcmVzZXJ2ZS1BRCkK
WyAgICAwLjkyODg4NV0gY2xvY2tzb3VyY2U6IHJlZmluZWQtamlmZmllczogbWFzazogMHhmZmZm
ZmZmZiBtYXhfY3ljbGVzOiAweGZmZmZmZmZmLCBtYXhfaWRsZV9uczogNzY0NTUxOTYwMDIxMTU2
OCBucwpbICAgIDAuOTMzMzI4XSBzZXR1cF9wZXJjcHU6IE5SX0NQVVM6ODE5MiBucl9jcHVtYXNr
X2JpdHM6MjQgbnJfY3B1X2lkczoyNCBucl9ub2RlX2lkczoxClsgICAgMC45MzQyODJdIHBlcmNw
dTogRW1iZWRkZWQgNTQgcGFnZXMvY3B1IHMxODM5NjAgcjgxOTIgZDI5MDMyIHUyNjIxNDQKWyAg
ICAwLjkzNDI5MF0gcGNwdS1hbGxvYzogczE4Mzk2MCByODE5MiBkMjkwMzIgdTI2MjE0NCBhbGxv
Yz0xKjIwOTcxNTIKWyAgICAwLjkzNDI5Ml0gcGNwdS1hbGxvYzogWzBdIDAwIDAxIDAyIDAzIDA0
IDA1IDA2IDA3IFswXSAwOCAwOSAxMCAxMSAxMiAxMyAxNCAxNSAKWyAgICAwLjkzNDMwMl0gcGNw
dS1hbGxvYzogWzBdIDE2IDE3IDE4IDE5IDIwIDIxIDIyIDIzIApbICAgIDAuOTM0MzcxXSB4ZW46
IFBWIHNwaW5sb2NrcyBlbmFibGVkClsgICAgMC45MzQzNzVdIFBWIHFzcGlubG9jayBoYXNoIHRh
YmxlIGVudHJpZXM6IDI1NiAob3JkZXI6IDAsIDQwOTYgYnl0ZXMsIGxpbmVhcikKWyAgICAwLjkz
NDM4MF0gQnVpbHQgMSB6b25lbGlzdHMsIG1vYmlsaXR5IGdyb3VwaW5nIG9uLiAgVG90YWwgcGFn
ZXM6IDUxNjA3MgpbICAgIDAuOTM0MzgxXSBQb2xpY3kgem9uZTogRE1BMzIKWyAgICAwLjkzNDM4
Ml0gS2VybmVsIGNvbW1hbmQgbGluZTogcGxhY2Vob2xkZXIgcm9vdD1VVUlEPWZmZDJlNDRlLTNl
YjctNGM0ZC1hMDI4LWRhZDdkYTAzYzgzMSBybyBsb2dsZXZlbD0zIGVhcmx5cHJpbnRrPXhlbgpb
ICAgIDAuOTM0NTk5XSBEZW50cnkgY2FjaGUgaGFzaCB0YWJsZSBlbnRyaWVzOiAyNjIxNDQgKG9y
ZGVyOiA5LCAyMDk3MTUyIGJ5dGVzLCBsaW5lYXIpClsgICAgMC45MzQ2NzldIElub2RlLWNhY2hl
IGhhc2ggdGFibGUgZW50cmllczogMTMxMDcyIChvcmRlcjogOCwgMTA0ODU3NiBieXRlcywgbGlu
ZWFyKQpbICAgIDAuOTM1MTg3XSBtZW0gYXV0by1pbml0OiBzdGFjazpvZmYsIGhlYXAgYWxsb2M6
b24sIGhlYXAgZnJlZTpvZmYKWyAgICAwLjk4MDcxMV0gc29mdHdhcmUgSU8gVExCOiBtYXBwZWQg
W21lbSAweDAwMDAwMDAwMzhjMDAwMDAtMHgwMDAwMDAwMDNjYzAwMDAwXSAoNjRNQikKWyAgICAx
LjAwMDUxMF0gTWVtb3J5OiAxOTc4MDhLLzIwOTcxNDhLIGF2YWlsYWJsZSAoMTIyOTVLIGtlcm5l
bCBjb2RlLCAyNTQwSyByd2RhdGEsIDQwNjBLIHJvZGF0YSwgMjM4MEsgaW5pdCwgMTY5MksgYnNz
LCAxMjI5NjkySyByZXNlcnZlZCwgMEsgY21hLXJlc2VydmVkKQpbICAgIDEuMDAwNTE4XSByYW5k
b206IGdldF9yYW5kb21fdTY0IGNhbGxlZCBmcm9tIF9fa21lbV9jYWNoZV9jcmVhdGUrMHgyZS8w
eDU1MCB3aXRoIGNybmdfaW5pdD0wClsgICAgMS4wMDA4MjFdIFNMVUI6IEhXYWxpZ249NjQsIE9y
ZGVyPTAtMywgTWluT2JqZWN0cz0wLCBDUFVzPTQsIE5vZGVzPTEKWyAgICAxLjAwMTc1OF0gZnRy
YWNlOiBhbGxvY2F0aW5nIDM1OTg4IGVudHJpZXMgaW4gMTQxIHBhZ2VzClsgICAgMS4wMTUzODFd
IGZ0cmFjZTogYWxsb2NhdGVkIDE0MSBwYWdlcyB3aXRoIDQgZ3JvdXBzClsgICAgMS4wMTU3OTNd
IHJjdTogSGllcmFyY2hpY2FsIFJDVSBpbXBsZW1lbnRhdGlvbi4KWyAgICAxLjAxNTc5NV0gcmN1
OiAJUkNVIHJlc3RyaWN0aW5nIENQVXMgZnJvbSBOUl9DUFVTPTgxOTIgdG8gbnJfY3B1X2lkcz00
LgpbICAgIDEuMDE1Nzk2XSAJUnVkZSB2YXJpYW50IG9mIFRhc2tzIFJDVSBlbmFibGVkLgpbICAg
IDEuMDE1Nzk3XSAJVHJhY2luZyB2YXJpYW50IG9mIFRhc2tzIFJDVSBlbmFibGVkLgpbICAgIDEu
MDE1Nzk4XSByY3U6IFJDVSBjYWxjdWxhdGVkIHZhbHVlIG9mIHNjaGVkdWxlci1lbmxpc3RtZW50
IGRlbGF5IGlzIDI1IGppZmZpZXMuClsgICAgMS4wMTU3OTldIHJjdTogQWRqdXN0aW5nIGdlb21l
dHJ5IGZvciByY3VfZmFub3V0X2xlYWY9MTYsIG5yX2NwdV9pZHM9NApbICAgIDEuMDI2NzU1XSBV
c2luZyBOVUxMIGxlZ2FjeSBQSUMKWyAgICAxLjAyNjc1N10gTlJfSVJRUzogNTI0NTQ0LCBucl9p
cnFzOiA4NjQsIHByZWFsbG9jYXRlZCBpcnFzOiAwClsgICAgMS4wMjY4MjBdIHhlbjpldmVudHM6
IFVzaW5nIEZJRk8tYmFzZWQgQUJJClsgICAgMS4wMjY4NTNdIHhlbjogLS0+IHBpcnE9MSAtPiBp
cnE9MSAoZ3NpPTEpClsgICAgMS4wMjY4NjhdIHhlbjogLS0+IHBpcnE9MiAtPiBpcnE9MiAoZ3Np
PTIpClsgICAgMS4wMjY4ODJdIHhlbjogLS0+IHBpcnE9MyAtPiBpcnE9MyAoZ3NpPTMpClsgICAg
MS4wMjY4OTZdIHhlbjogLS0+IHBpcnE9NCAtPiBpcnE9NCAoZ3NpPTQpClsgICAgMS4wMjY5MDld
IHhlbjogLS0+IHBpcnE9NSAtPiBpcnE9NSAoZ3NpPTUpClsgICAgMS4wMjY5MjNdIHhlbjogLS0+
IHBpcnE9NiAtPiBpcnE9NiAoZ3NpPTYpClsgICAgMS4wMjY5MzddIHhlbjogLS0+IHBpcnE9NyAt
PiBpcnE9NyAoZ3NpPTcpClsgICAgMS4wMjY5NTBdIHhlbjogLS0+IHBpcnE9OCAtPiBpcnE9OCAo
Z3NpPTgpClsgICAgMS4wMjY5NjVdIHhlbjogLS0+IHBpcnE9OSAtPiBpcnE9OSAoZ3NpPTkpClsg
ICAgMS4wMjY5NzldIHhlbjogLS0+IHBpcnE9MTAgLT4gaXJxPTEwIChnc2k9MTApClsgICAgMS4w
MjY5OTRdIHhlbjogLS0+IHBpcnE9MTEgLT4gaXJxPTExIChnc2k9MTEpClsgICAgMS4wMjcwMDhd
IHhlbjogLS0+IHBpcnE9MTIgLT4gaXJxPTEyIChnc2k9MTIpClsgICAgMS4wMjcwMjJdIHhlbjog
LS0+IHBpcnE9MTMgLT4gaXJxPTEzIChnc2k9MTMpClsgICAgMS4wMjcwMzZdIHhlbjogLS0+IHBp
cnE9MTQgLT4gaXJxPTE0IChnc2k9MTQpClsgICAgMS4wMjcwNTBdIHhlbjogLS0+IHBpcnE9MTUg
LT4gaXJxPTE1IChnc2k9MTUpClsgICAgMS4wMjcwOTZdIHJhbmRvbTogY3JuZyBkb25lICh0cnVz
dGluZyBDUFUncyBtYW51ZmFjdHVyZXIpClsgICAgMS4wMzIwNjFdIENvbnNvbGU6IGNvbG91ciBW
R0ErIDgweDI1ClsgICAgMS4wMzIwNzNdIHByaW50azogY29uc29sZSBbdHR5MF0gZW5hYmxlZApb
ICAgIDEuMDMyMDg0XSBwcmludGs6IGNvbnNvbGUgW2h2YzBdIGVuYWJsZWQKWyAgICAxLjAzMjEx
Ml0gQUNQSTogQ29yZSByZXZpc2lvbiAyMDIwMDkyNQpbICAgIDEuMTA1MjQxXSBjbG9ja3NvdXJj
ZTogeGVuOiBtYXNrOiAweGZmZmZmZmZmZmZmZmZmZmYgbWF4X2N5Y2xlczogMHgxY2Q0MmU0ZGZm
YiwgbWF4X2lkbGVfbnM6IDg4MTU5MDU5MTQ4MyBucwpbICAgIDEuMTA1MjUzXSBYZW46IHVzaW5n
IHZjcHVvcCB0aW1lciBpbnRlcmZhY2UKWyAgICAxLjEwNTI1OF0gaW5zdGFsbGluZyBYZW4gdGlt
ZXIgZm9yIENQVSAwClsgICAgMS4xMDUyOThdIGNsb2Nrc291cmNlOiB0c2MtZWFybHk6IG1hc2s6
IDB4ZmZmZmZmZmZmZmZmZmZmZiBtYXhfY3ljbGVzOiAweDIzZjQ1NGYwNjI1LCBtYXhfaWRsZV9u
czogNDQwNzk1MjI4NjYxIG5zClsgICAgMS4xMDUzMDFdIENhbGlicmF0aW5nIGRlbGF5IGxvb3Ag
KHNraXBwZWQpLCB2YWx1ZSBjYWxjdWxhdGVkIHVzaW5nIHRpbWVyIGZyZXF1ZW5jeS4uIDQ5ODgu
NjcgQm9nb01JUFMgKGxwaj05OTc3MzUyKQpbICAgIDEuMTA1MzA0XSBwaWRfbWF4OiBkZWZhdWx0
OiAzMjc2OCBtaW5pbXVtOiAzMDEKWyAgICAxLjEwNTM5OF0gTFNNOiBTZWN1cml0eSBGcmFtZXdv
cmsgaW5pdGlhbGl6aW5nClsgICAgMS4xMDU0MjVdIFlhbWE6IGRpc2FibGVkIGJ5IGRlZmF1bHQ7
IGVuYWJsZSB3aXRoIHN5c2N0bCBrZXJuZWwueWFtYS4qClsgICAgMS4xMDU1MjFdIEFwcEFybW9y
OiBBcHBBcm1vciBpbml0aWFsaXplZApbICAgIDEuMTA1NTI2XSBUT01PWU8gTGludXggaW5pdGlh
bGl6ZWQKWyAgICAxLjEwNTU3M10gTW91bnQtY2FjaGUgaGFzaCB0YWJsZSBlbnRyaWVzOiA0MDk2
IChvcmRlcjogMywgMzI3NjggYnl0ZXMsIGxpbmVhcikKWyAgICAxLjEwNTU3OF0gTW91bnRwb2lu
dC1jYWNoZSBoYXNoIHRhYmxlIGVudHJpZXM6IDQwOTYgKG9yZGVyOiAzLCAzMjc2OCBieXRlcywg
bGluZWFyKQpbICAgIDEuMTA2MzU4XSBMYXN0IGxldmVsIGlUTEIgZW50cmllczogNEtCIDUxMiwg
Mk1CIDgsIDRNQiA4ClsgICAgMS4xMDYzNTldIExhc3QgbGV2ZWwgZFRMQiBlbnRyaWVzOiA0S0Ig
NTEyLCAyTUIgMCwgNE1CIDAsIDFHQiA0ClsgICAgMS4xMDYzNjNdIFNwZWN0cmUgVjEgOiBNaXRp
Z2F0aW9uOiB1c2VyY29weS9zd2FwZ3MgYmFycmllcnMgYW5kIF9fdXNlciBwb2ludGVyIHNhbml0
aXphdGlvbgpbICAgIDEuMTA2MzY0XSBTcGVjdHJlIFYyIDogTWl0aWdhdGlvbjogRnVsbCBnZW5l
cmljIHJldHBvbGluZQpbICAgIDEuMTA2MzY1XSBTcGVjdHJlIFYyIDogU3BlY3RyZSB2MiAvIFNw
ZWN0cmVSU0IgbWl0aWdhdGlvbjogRmlsbGluZyBSU0Igb24gY29udGV4dCBzd2l0Y2gKWyAgICAx
LjEwNjM2Nl0gU3BlY3VsYXRpdmUgU3RvcmUgQnlwYXNzOiBWdWxuZXJhYmxlClsgICAgMS4xMDYz
NjhdIE1EUzogVnVsbmVyYWJsZTogQ2xlYXIgQ1BVIGJ1ZmZlcnMgYXR0ZW1wdGVkLCBubyBtaWNy
b2NvZGUKWyAgICAxLjEwNjUwNl0gRnJlZWluZyBTTVAgYWx0ZXJuYXRpdmVzIG1lbW9yeTogMzJL
ClsgICAgMS4xMDk0MzFdIGNwdSAwIHNwaW5sb2NrIGV2ZW50IGlycSA0OQpbICAgIDEuMTA5NDM5
XSBWUE1VIGRpc2FibGVkIGJ5IGh5cGVydmlzb3IuClsgICAgMS4xMDk4MzNdIFBlcmZvcm1hbmNl
IEV2ZW50czogdW5zdXBwb3J0ZWQgcDYgQ1BVIG1vZGVsIDYyIG5vIFBNVSBkcml2ZXIsIHNvZnR3
YXJlIGV2ZW50cyBvbmx5LgpbICAgIDEuMTA5OTM4XSByY3U6IEhpZXJhcmNoaWNhbCBTUkNVIGlt
cGxlbWVudGF0aW9uLgpbICAgIDEuMTEwNDE2XSBOTUkgd2F0Y2hkb2c6IFBlcmYgTk1JIHdhdGNo
ZG9nIHBlcm1hbmVudGx5IGRpc2FibGVkClsgICAgMS4xMTA1NjldIHNtcDogQnJpbmdpbmcgdXAg
c2Vjb25kYXJ5IENQVXMgLi4uClsgICAgMS4xMTA4MzldIGluc3RhbGxpbmcgWGVuIHRpbWVyIGZv
ciBDUFUgMQpbICAgIDEuMTExMjA2XSBjcHUgMSBzcGlubG9jayBldmVudCBpcnEgNTkKWyAgICAx
LjExMTIwNl0gTURTIENQVSBidWcgcHJlc2VudCBhbmQgU01UIG9uLCBkYXRhIGxlYWsgcG9zc2li
bGUuIFNlZSBodHRwczovL3d3dy5rZXJuZWwub3JnL2RvYy9odG1sL2xhdGVzdC9hZG1pbi1ndWlk
ZS9ody12dWxuL21kcy5odG1sIGZvciBtb3JlIGRldGFpbHMuClsgICAgMS4xMTEyMDZdIGluc3Rh
bGxpbmcgWGVuIHRpbWVyIGZvciBDUFUgMgpbICAgIDEuMTExMjA2XSBjcHUgMiBzcGlubG9jayBl
dmVudCBpcnEgNjUKWyAgICAxLjExMTIwNl0gaW5zdGFsbGluZyBYZW4gdGltZXIgZm9yIENQVSAz
ClsgICAgMS4xMTEyMDZdIGNwdSAzIHNwaW5sb2NrIGV2ZW50IGlycSA3MQpbICAgIDEuMTExMjA2
XSBzbXA6IEJyb3VnaHQgdXAgMSBub2RlLCA0IENQVXMKWyAgICAxLjExMTIwNl0gc21wYm9vdDog
TWF4IGxvZ2ljYWwgcGFja2FnZXM6IDYKWyAgICAxLjExNDkyOF0gbm9kZSAwIGRlZmVycmVkIHBh
Z2VzIGluaXRpYWxpc2VkIGluIDBtcwpbICAgIDEuMTE0OTY1XSBkZXZ0bXBmczogaW5pdGlhbGl6
ZWQKWyAgICAxLjExNDk2NV0geDg2L21tOiBNZW1vcnkgYmxvY2sgc2l6ZTogMTI4TUIKWyAgICAx
LjExNDk2NV0gUE06IFJlZ2lzdGVyaW5nIEFDUEkgTlZTIHJlZ2lvbiBbbWVtIDB4YmFiYzMwMDAt
MHhiYjFiZmZmZl0gKDYyNzkxNjggYnl0ZXMpClsgICAgMS4xMTQ5NjVdIFBNOiBSZWdpc3Rlcmlu
ZyBBQ1BJIE5WUyByZWdpb24gW21lbSAweGJiODQ0MDAwLTB4YmI4YzlmZmZdICg1NDg4NjQgYnl0
ZXMpClsgICAgMS4xMTQ5NjVdIGNsb2Nrc291cmNlOiBqaWZmaWVzOiBtYXNrOiAweGZmZmZmZmZm
IG1heF9jeWNsZXM6IDB4ZmZmZmZmZmYsIG1heF9pZGxlX25zOiA3NjQ1MDQxNzg1MTAwMDAwIG5z
ClsgICAgMS4xMTQ5NjVdIGZ1dGV4IGhhc2ggdGFibGUgZW50cmllczogMTAyNCAob3JkZXI6IDQs
IDY1NTM2IGJ5dGVzLCBsaW5lYXIpClsgICAgMS4xMTQ5NjVdIHBpbmN0cmwgY29yZTogaW5pdGlh
bGl6ZWQgcGluY3RybCBzdWJzeXN0ZW0KWyAgICAxLjExNDk2NV0gTkVUOiBSZWdpc3RlcmVkIHBy
b3RvY29sIGZhbWlseSAxNgpbICAgIDEuMTE0OTY1XSB4ZW46Z3JhbnRfdGFibGU6IEdyYW50IHRh
YmxlcyB1c2luZyB2ZXJzaW9uIDEgbGF5b3V0ClsgICAgMS4xMTQ5NjVdIEdyYW50IHRhYmxlIGlu
aXRpYWxpemVkClsgICAgMS4xMTQ5NjVdIGF1ZGl0OiBpbml0aWFsaXppbmcgbmV0bGluayBzdWJz
eXMgKGRpc2FibGVkKQpbICAgIDEuMTE0OTY1XSBhdWRpdDogdHlwZT0yMDAwIGF1ZGl0KDE2MTIx
OTA1MzIuNTgyOjEpOiBzdGF0ZT1pbml0aWFsaXplZCBhdWRpdF9lbmFibGVkPTAgcmVzPTEKWyAg
ICAxLjExNDk2NV0gdGhlcm1hbF9zeXM6IFJlZ2lzdGVyZWQgdGhlcm1hbCBnb3Zlcm5vciAnZmFp
cl9zaGFyZScKWyAgICAxLjExNDk2NV0gdGhlcm1hbF9zeXM6IFJlZ2lzdGVyZWQgdGhlcm1hbCBn
b3Zlcm5vciAnYmFuZ19iYW5nJwpbICAgIDEuMTE0OTY1XSB0aGVybWFsX3N5czogUmVnaXN0ZXJl
ZCB0aGVybWFsIGdvdmVybm9yICdzdGVwX3dpc2UnClsgICAgMS4xMTQ5NjVdIHRoZXJtYWxfc3lz
OiBSZWdpc3RlcmVkIHRoZXJtYWwgZ292ZXJub3IgJ3VzZXJfc3BhY2UnClsgICAgMS4xMTQ5NjVd
IEFDUEk6IGJ1cyB0eXBlIFBDSSByZWdpc3RlcmVkClsgICAgMS4xMTQ5NjVdIGFjcGlwaHA6IEFD
UEkgSG90IFBsdWcgUENJIENvbnRyb2xsZXIgRHJpdmVyIHZlcnNpb246IDAuNQpbICAgIDEuMTE0
OTY1XSBQQ0k6IE1NQ09ORklHIGZvciBkb21haW4gMDAwMCBbYnVzIDAwLWZmXSBhdCBbbWVtIDB4
ZDAwMDAwMDAtMHhkZmZmZmZmZl0gKGJhc2UgMHhkMDAwMDAwMCkKWyAgICAxLjExNDk2NV0gUENJ
OiBNTUNPTkZJRyBhdCBbbWVtIDB4ZDAwMDAwMDAtMHhkZmZmZmZmZl0gcmVzZXJ2ZWQgaW4gRTgy
MApbICAgIDEuMjA3OTAxXSBQQ0k6IFVzaW5nIGNvbmZpZ3VyYXRpb24gdHlwZSAxIGZvciBiYXNl
IGFjY2VzcwpbICAgIDEuMzg5NDcwXSBBQ1BJOiBBZGRlZCBfT1NJKE1vZHVsZSBEZXZpY2UpClsg
ICAgMS4zODk0NzJdIEFDUEk6IEFkZGVkIF9PU0koUHJvY2Vzc29yIERldmljZSkKWyAgICAxLjM4
OTQ3NF0gQUNQSTogQWRkZWQgX09TSSgzLjAgX1NDUCBFeHRlbnNpb25zKQpbICAgIDEuMzg5NDc1
XSBBQ1BJOiBBZGRlZCBfT1NJKFByb2Nlc3NvciBBZ2dyZWdhdG9yIERldmljZSkKWyAgICAxLjM4
OTQ3N10gQUNQSTogQWRkZWQgX09TSShMaW51eC1EZWxsLVZpZGVvKQpbICAgIDEuMzg5NDc4XSBB
Q1BJOiBBZGRlZCBfT1NJKExpbnV4LUxlbm92by1OVi1IRE1JLUF1ZGlvKQpbICAgIDEuMzg5NDgw
XSBBQ1BJOiBBZGRlZCBfT1NJKExpbnV4LUhQSS1IeWJyaWQtR3JhcGhpY3MpClsgICAgMS41NTcz
MzZdIEFDUEk6IDIgQUNQSSBBTUwgdGFibGVzIHN1Y2Nlc3NmdWxseSBhY3F1aXJlZCBhbmQgbG9h
ZGVkClsgICAgMS41NzE0MjBdIHhlbjogcmVnaXN0ZXJpbmcgZ3NpIDkgdHJpZ2dlcmluZyAwIHBv
bGFyaXR5IDAKWyAgICAxLjU3Njc5OV0gQUNQSTogSW50ZXJwcmV0ZXIgZW5hYmxlZApbICAgIDEu
NTc2ODEzXSBBQ1BJOiAoc3VwcG9ydHMgUzAgUzUpClsgICAgMS41NzY4MTRdIEFDUEk6IFVzaW5n
IElPQVBJQyBmb3IgaW50ZXJydXB0IHJvdXRpbmcKWyAgICAxLjU3Njg1NV0gUENJOiBVc2luZyBo
b3N0IGJyaWRnZSB3aW5kb3dzIGZyb20gQUNQSTsgaWYgbmVjZXNzYXJ5LCB1c2UgInBjaT1ub2Ny
cyIgYW5kIHJlcG9ydCBhIGJ1ZwpbICAgIDEuNTc3NDQwXSBBQ1BJOiBFbmFibGVkIDYgR1BFcyBp
biBibG9jayAwMCB0byAzRgpbICAgIDEuNTk1OTc5XSBBQ1BJOiBQQ0kgUm9vdCBCcmlkZ2UgW1BD
STBdIChkb21haW4gMDAwMCBbYnVzIDAwLWZlXSkKWyAgICAxLjU5NTk4NV0gYWNwaSBQTlAwQTA4
OjAwOiBfT1NDOiBPUyBzdXBwb3J0cyBbRXh0ZW5kZWRDb25maWcgQVNQTSBDbG9ja1BNIFNlZ21l
bnRzIE1TSSBIUFgtVHlwZTNdClsgICAgMS41OTYyMDhdIGFjcGkgUE5QMEEwODowMDogX09TQzog
cGxhdGZvcm0gZG9lcyBub3Qgc3VwcG9ydCBbU0hQQ0hvdHBsdWcgUE1FIEFFUiBMVFJdClsgICAg
MS41OTY0MTNdIGFjcGkgUE5QMEEwODowMDogX09TQzogT1Mgbm93IGNvbnRyb2xzIFtQQ0llSG90
cGx1ZyBQQ0llQ2FwYWJpbGl0eV0KWyAgICAxLjU5Njk3OF0gUENJIGhvc3QgYnJpZGdlIHRvIGJ1
cyAwMDAwOjAwClsgICAgMS41OTY5ODFdIHBjaV9idXMgMDAwMDowMDogcm9vdCBidXMgcmVzb3Vy
Y2UgW2lvICAweDAwMDAtMHgwM2FmIHdpbmRvd10KWyAgICAxLjU5Njk4Ml0gcGNpX2J1cyAwMDAw
OjAwOiByb290IGJ1cyByZXNvdXJjZSBbaW8gIDB4MDNlMC0weDBjZjcgd2luZG93XQpbICAgIDEu
NTk2OTgzXSBwY2lfYnVzIDAwMDA6MDA6IHJvb3QgYnVzIHJlc291cmNlIFtpbyAgMHgwM2IwLTB4
MDNkZiB3aW5kb3ddClsgICAgMS41OTY5ODRdIHBjaV9idXMgMDAwMDowMDogcm9vdCBidXMgcmVz
b3VyY2UgW2lvICAweDBkMDAtMHhmZmZmIHdpbmRvd10KWyAgICAxLjU5Njk4NV0gcGNpX2J1cyAw
MDAwOjAwOiByb290IGJ1cyByZXNvdXJjZSBbbWVtIDB4MDAwYTAwMDAtMHgwMDBiZmZmZiB3aW5k
b3ddClsgICAgMS41OTY5ODddIHBjaV9idXMgMDAwMDowMDogcm9vdCBidXMgcmVzb3VyY2UgW21l
bSAweDAwMGMwMDAwLTB4MDAwZGZmZmYgd2luZG93XQpbICAgIDEuNTk2OTg4XSBwY2lfYnVzIDAw
MDA6MDA6IHJvb3QgYnVzIHJlc291cmNlIFttZW0gMHhjYzAwMDAwMC0weGZmZmZmZmZmIHdpbmRv
d10KWyAgICAxLjU5Njk4OV0gcGNpX2J1cyAwMDAwOjAwOiByb290IGJ1cyByZXNvdXJjZSBbbWVt
IDB4NDQwMDAwMDAwLTB4M2ZmZmZmZmZmZmZmIHdpbmRvd10KWyAgICAxLjU5Njk5MV0gcGNpX2J1
cyAwMDAwOjAwOiByb290IGJ1cyByZXNvdXJjZSBbYnVzIDAwLWZlXQpbICAgIDEuNTk3MDQ3XSBw
Y2kgMDAwMDowMDowMC4wOiBbODA4NjowZTAwXSB0eXBlIDAwIGNsYXNzIDB4MDYwMDAwClsgICAg
MS41OTczNDBdIHBjaSAwMDAwOjAwOjAwLjA6IFBNRSMgc3VwcG9ydGVkIGZyb20gRDAgRDNob3Qg
RDNjb2xkClsgICAgMS41OTc1NzhdIHBjaSAwMDAwOjAwOjAxLjA6IFs4MDg2OjBlMDJdIHR5cGUg
MDEgY2xhc3MgMHgwNjA0MDAKWyAgICAxLjU5NzkzMF0gcGNpIDAwMDA6MDA6MDEuMDogUE1FIyBz
dXBwb3J0ZWQgZnJvbSBEMCBEM2hvdCBEM2NvbGQKWyAgICAxLjU5ODIxN10gcGNpIDAwMDA6MDA6
MDEuMTogWzgwODY6MGUwM10gdHlwZSAwMSBjbGFzcyAweDA2MDQwMApbICAgIDEuNTk4NTY4XSBw
Y2kgMDAwMDowMDowMS4xOiBQTUUjIHN1cHBvcnRlZCBmcm9tIEQwIEQzaG90IEQzY29sZApbICAg
IDEuNTk4ODY4XSBwY2kgMDAwMDowMDowMi4wOiBbODA4NjowZTA0XSB0eXBlIDAxIGNsYXNzIDB4
MDYwNDAwClsgICAgMS41OTkyMTldIHBjaSAwMDAwOjAwOjAyLjA6IFBNRSMgc3VwcG9ydGVkIGZy
b20gRDAgRDNob3QgRDNjb2xkClsgICAgMS41OTk0OTBdIHBjaSAwMDAwOjAwOjAyLjE6IFs4MDg2
OjBlMDVdIHR5cGUgMDEgY2xhc3MgMHgwNjA0MDAKWyAgICAxLjU5OTg0MF0gcGNpIDAwMDA6MDA6
MDIuMTogUE1FIyBzdXBwb3J0ZWQgZnJvbSBEMCBEM2hvdCBEM2NvbGQKWyAgICAxLjYwMDExMl0g
cGNpIDAwMDA6MDA6MDIuMjogWzgwODY6MGUwNl0gdHlwZSAwMSBjbGFzcyAweDA2MDQwMApbICAg
IDEuNjAwNDYyXSBwY2kgMDAwMDowMDowMi4yOiBQTUUjIHN1cHBvcnRlZCBmcm9tIEQwIEQzaG90
IEQzY29sZApbICAgIDEuNjAwNzMxXSBwY2kgMDAwMDowMDowMi4zOiBbODA4NjowZTA3XSB0eXBl
IDAxIGNsYXNzIDB4MDYwNDAwClsgICAgMS42MDEwODFdIHBjaSAwMDAwOjAwOjAyLjM6IFBNRSMg
c3VwcG9ydGVkIGZyb20gRDAgRDNob3QgRDNjb2xkClsgICAgMS42MDEzODBdIHBjaSAwMDAwOjAw
OjAzLjA6IFs4MDg2OjBlMDhdIHR5cGUgMDEgY2xhc3MgMHgwNjA0MDAKWyAgICAxLjYwMTU0NV0g
cGNpIDAwMDA6MDA6MDMuMDogZW5hYmxpbmcgRXh0ZW5kZWQgVGFncwpbICAgIDEuNjAxNzU0XSBw
Y2kgMDAwMDowMDowMy4wOiBQTUUjIHN1cHBvcnRlZCBmcm9tIEQwIEQzaG90IEQzY29sZApbICAg
IDEuNjAyMDI2XSBwY2kgMDAwMDowMDowMy4xOiBbODA4NjowZTA5XSB0eXBlIDAxIGNsYXNzIDB4
MDYwNDAwClsgICAgMS42MDIzODJdIHBjaSAwMDAwOjAwOjAzLjE6IFBNRSMgc3VwcG9ydGVkIGZy
b20gRDAgRDNob3QgRDNjb2xkClsgICAgMS42MDI2NTNdIHBjaSAwMDAwOjAwOjAzLjI6IFs4MDg2
OjBlMGFdIHR5cGUgMDEgY2xhc3MgMHgwNjA0MDAKWyAgICAxLjYwMzAwM10gcGNpIDAwMDA6MDA6
MDMuMjogUE1FIyBzdXBwb3J0ZWQgZnJvbSBEMCBEM2hvdCBEM2NvbGQKWyAgICAxLjYwMzI3NV0g
cGNpIDAwMDA6MDA6MDMuMzogWzgwODY6MGUwYl0gdHlwZSAwMSBjbGFzcyAweDA2MDQwMApbICAg
IDEuNjAzNjI1XSBwY2kgMDAwMDowMDowMy4zOiBQTUUjIHN1cHBvcnRlZCBmcm9tIEQwIEQzaG90
IEQzY29sZApbICAgIDEuNjAzODk1XSBwY2kgMDAwMDowMDowNS4wOiBbODA4NjowZTI4XSB0eXBl
IDAwIGNsYXNzIDB4MDg4MDAwClsgICAgMS42MDQzMjFdIHBjaSAwMDAwOjAwOjA1LjI6IFs4MDg2
OjBlMmFdIHR5cGUgMDAgY2xhc3MgMHgwODgwMDAKWyAgICAxLjYwNDc0NF0gcGNpIDAwMDA6MDA6
MDUuNDogWzgwODY6MGUyY10gdHlwZSAwMCBjbGFzcyAweDA4MDAyMApbICAgIDEuNjA0Nzk1XSBw
Y2kgMDAwMDowMDowNS40OiByZWcgMHgxMDogW21lbSAweGZiMzA0MDAwLTB4ZmIzMDRmZmZdClsg
ICAgMS42MDUyNjZdIHBjaSAwMDAwOjAwOjA2LjA6IFs4MDg2OjBlMTBdIHR5cGUgMDAgY2xhc3Mg
MHgwODgwMDAKWyAgICAxLjYwNTY5N10gcGNpIDAwMDA6MDA6MDYuMTogWzgwODY6MGUxMV0gdHlw
ZSAwMCBjbGFzcyAweDA4ODAwMApbICAgIDEuNjA2MTE2XSBwY2kgMDAwMDowMDowNi4yOiBbODA4
NjowZTEyXSB0eXBlIDAwIGNsYXNzIDB4MDg4MDAwClsgICAgMS42MDY1NDVdIHBjaSAwMDAwOjAw
OjA2LjM6IFs4MDg2OjBlMTNdIHR5cGUgMDAgY2xhc3MgMHgwODgwMDAKWyAgICAxLjYwNjk2NF0g
cGNpIDAwMDA6MDA6MDYuNDogWzgwODY6MGUxNF0gdHlwZSAwMCBjbGFzcyAweDA4ODAwMApbICAg
IDEuNjA3Mzg1XSBwY2kgMDAwMDowMDowNi41OiBbODA4NjowZTE1XSB0eXBlIDAwIGNsYXNzIDB4
MDg4MDAwClsgICAgMS42MDc4MDNdIHBjaSAwMDAwOjAwOjA2LjY6IFs4MDg2OjBlMTZdIHR5cGUg
MDAgY2xhc3MgMHgwODgwMDAKWyAgICAxLjYwODIyMF0gcGNpIDAwMDA6MDA6MDYuNzogWzgwODY6
MGUxN10gdHlwZSAwMCBjbGFzcyAweDA4ODAwMApbICAgIDEuNjA4NjQ1XSBwY2kgMDAwMDowMDow
Ny4wOiBbODA4NjowZTE4XSB0eXBlIDAwIGNsYXNzIDB4MDg4MDAwClsgICAgMS42MDkwNjddIHBj
aSAwMDAwOjAwOjA3LjE6IFs4MDg2OjBlMTldIHR5cGUgMDAgY2xhc3MgMHgwODgwMDAKWyAgICAx
LjYwOTQ5M10gcGNpIDAwMDA6MDA6MDcuMjogWzgwODY6MGUxYV0gdHlwZSAwMCBjbGFzcyAweDA4
ODAwMApbICAgIDEuNjA5OTExXSBwY2kgMDAwMDowMDowNy4zOiBbODA4NjowZTFiXSB0eXBlIDAw
IGNsYXNzIDB4MDg4MDAwClsgICAgMS42MTAzMzldIHBjaSAwMDAwOjAwOjA3LjQ6IFs4MDg2OjBl
MWNdIHR5cGUgMDAgY2xhc3MgMHgwODgwMDAKWyAgICAxLjYxMDg2Nl0gcGNpIDAwMDA6MDA6MWEu
MDogWzgwODY6MWUyZF0gdHlwZSAwMCBjbGFzcyAweDBjMDMyMApbICAgIDEuNjEwOTI0XSBwY2kg
MDAwMDowMDoxYS4wOiByZWcgMHgxMDogW21lbSAweGZiMzAyMDAwLTB4ZmIzMDIzZmZdClsgICAg
MS42MTEyNDBdIHBjaSAwMDAwOjAwOjFhLjA6IFBNRSMgc3VwcG9ydGVkIGZyb20gRDAgRDNob3Qg
RDNjb2xkClsgICAgMS42MTE0NTFdIHBjaSAwMDAwOjAwOjFjLjA6IFs4MDg2OjFlMTBdIHR5cGUg
MDEgY2xhc3MgMHgwNjA0MDAKWyAgICAxLjYxMTgwMF0gcGNpIDAwMDA6MDA6MWMuMDogUE1FIyBz
dXBwb3J0ZWQgZnJvbSBEMCBEM2hvdCBEM2NvbGQKWyAgICAxLjYxMTg3Ml0gcGNpIDAwMDA6MDA6
MWMuMDogRW5hYmxpbmcgTVBDIElSQk5DRQpbICAgIDEuNjExODc5XSBwY2kgMDAwMDowMDoxYy4w
OiBJbnRlbCBQQ0ggcm9vdCBwb3J0IEFDUyB3b3JrYXJvdW5kIGVuYWJsZWQKWyAgICAxLjYxMjA3
Ml0gcGNpIDAwMDA6MDA6MWMuMTogWzgwODY6MWUxMl0gdHlwZSAwMSBjbGFzcyAweDA2MDQwMApb
ICAgIDEuNjEyNDIyXSBwY2kgMDAwMDowMDoxYy4xOiBQTUUjIHN1cHBvcnRlZCBmcm9tIEQwIEQz
aG90IEQzY29sZApbICAgIDEuNjEyNDkxXSBwY2kgMDAwMDowMDoxYy4xOiBFbmFibGluZyBNUEMg
SVJCTkNFClsgICAgMS42MTI0OThdIHBjaSAwMDAwOjAwOjFjLjE6IEludGVsIFBDSCByb290IHBv
cnQgQUNTIHdvcmthcm91bmQgZW5hYmxlZApbICAgIDEuNjEyNjk3XSBwY2kgMDAwMDowMDoxYy40
OiBbODA4NjoxZTE4XSB0eXBlIDAxIGNsYXNzIDB4MDYwNDAwClsgICAgMS42MTMwNDZdIHBjaSAw
MDAwOjAwOjFjLjQ6IFBNRSMgc3VwcG9ydGVkIGZyb20gRDAgRDNob3QgRDNjb2xkClsgICAgMS42
MTMxMTZdIHBjaSAwMDAwOjAwOjFjLjQ6IEVuYWJsaW5nIE1QQyBJUkJOQ0UKWyAgICAxLjYxMzEy
Ml0gcGNpIDAwMDA6MDA6MWMuNDogSW50ZWwgUENIIHJvb3QgcG9ydCBBQ1Mgd29ya2Fyb3VuZCBl
bmFibGVkClsgICAgMS42MTMzNDRdIHBjaSAwMDAwOjAwOjFkLjA6IFs4MDg2OjFlMjZdIHR5cGUg
MDAgY2xhc3MgMHgwYzAzMjAKWyAgICAxLjYxMzQwMl0gcGNpIDAwMDA6MDA6MWQuMDogcmVnIDB4
MTA6IFttZW0gMHhmYjMwMTAwMC0weGZiMzAxM2ZmXQpbICAgIDEuNjEzNzE4XSBwY2kgMDAwMDow
MDoxZC4wOiBQTUUjIHN1cHBvcnRlZCBmcm9tIEQwIEQzaG90IEQzY29sZApbICAgIDEuNjEzOTE1
XSBwY2kgMDAwMDowMDoxZS4wOiBbODA4NjoyNDRlXSB0eXBlIDAxIGNsYXNzIDB4MDYwNDAxClsg
ICAgMS42MTQzMjJdIHBjaSAwMDAwOjAwOjFmLjA6IFs4MDg2OjFlNDhdIHR5cGUgMDAgY2xhc3Mg
MHgwNjAxMDAKWyAgICAxLjYxNDg0N10gcGNpIDAwMDA6MDA6MWYuMjogWzgwODY6MWUwMl0gdHlw
ZSAwMCBjbGFzcyAweDAxMDYwMQpbICAgIDEuNjE0OTAzXSBwY2kgMDAwMDowMDoxZi4yOiByZWcg
MHgxMDogW2lvICAweGYwNzAtMHhmMDc3XQpbICAgIDEuNjE0OTMzXSBwY2kgMDAwMDowMDoxZi4y
OiByZWcgMHgxNDogW2lvICAweGYwNjAtMHhmMDYzXQpbICAgIDEuNjE0OTYzXSBwY2kgMDAwMDow
MDoxZi4yOiByZWcgMHgxODogW2lvICAweGYwNTAtMHhmMDU3XQpbICAgIDEuNjE0OTkzXSBwY2kg
MDAwMDowMDoxZi4yOiByZWcgMHgxYzogW2lvICAweGYwNDAtMHhmMDQzXQpbICAgIDEuNjE1MDIz
XSBwY2kgMDAwMDowMDoxZi4yOiByZWcgMHgyMDogW2lvICAweGYwMjAtMHhmMDNmXQpbICAgIDEu
NjE1MDUzXSBwY2kgMDAwMDowMDoxZi4yOiByZWcgMHgyNDogW21lbSAweGZiMzAwMDAwLTB4ZmIz
MDA3ZmZdClsgICAgMS42MTUyMzZdIHBjaSAwMDAwOjAwOjFmLjI6IFBNRSMgc3VwcG9ydGVkIGZy
b20gRDNob3QKWyAgICAxLjYxNTQxOV0gcGNpIDAwMDA6MDA6MWYuMzogWzgwODY6MWUyMl0gdHlw
ZSAwMCBjbGFzcyAweDBjMDUwMApbICAgIDEuNjE1NDkwXSBwY2kgMDAwMDowMDoxZi4zOiByZWcg
MHgxMDogW21lbSAweDNmZmZmZmYwMDAwMC0weDNmZmZmZmYwMDBmZiA2NGJpdF0KWyAgICAxLjYx
NTU3Nl0gcGNpIDAwMDA6MDA6MWYuMzogcmVnIDB4MjA6IFtpbyAgMHhmMDAwLTB4ZjAxZl0KWyAg
ICAxLjYxNTkxNl0gcGNpIDAwMDA6MDA6MDEuMDogUENJIGJyaWRnZSB0byBbYnVzIDAxXQpbICAg
IDEuNjE2MDg0XSBwY2kgMDAwMDowMDowMS4xOiBQQ0kgYnJpZGdlIHRvIFtidXMgMDJdClsgICAg
MS42MTYyODddIHBjaSAwMDAwOjAzOjAwLjA6IFsxMGRlOjAxZDNdIHR5cGUgMDAgY2xhc3MgMHgw
MzAwMDAKWyAgICAxLjYxNjMzOV0gcGNpIDAwMDA6MDM6MDAuMDogcmVnIDB4MTA6IFttZW0gMHhm
YTAwMDAwMC0weGZhZmZmZmZmXQpbICAgIDEuNjE2MzgyXSBwY2kgMDAwMDowMzowMC4wOiByZWcg
MHgxNDogW21lbSAweGUwMDAwMDAwLTB4ZWZmZmZmZmYgNjRiaXQgcHJlZl0KWyAgICAxLjYxNjQy
NV0gcGNpIDAwMDA6MDM6MDAuMDogcmVnIDB4MWM6IFttZW0gMHhmOTAwMDAwMC0weGY5ZmZmZmZm
IDY0Yml0XQpbICAgIDEuNjE2NDc5XSBwY2kgMDAwMDowMzowMC4wOiByZWcgMHgzMDogW21lbSAw
eGZiMDAwMDAwLTB4ZmIwMWZmZmYgcHJlZl0KWyAgICAxLjYxNjc2MF0gcGNpIDAwMDA6MDM6MDAu
MDogZGlzYWJsaW5nIEFTUE0gb24gcHJlLTEuMSBQQ0llIGRldmljZS4gIFlvdSBjYW4gZW5hYmxl
IGl0IHdpdGggJ3BjaWVfYXNwbT1mb3JjZScKWyAgICAxLjYxNjc4N10gcGNpIDAwMDA6MDA6MDIu
MDogUENJIGJyaWRnZSB0byBbYnVzIDAzXQpbICAgIDEuNjE2ODA2XSBwY2kgMDAwMDowMDowMi4w
OiAgIGJyaWRnZSB3aW5kb3cgW21lbSAweGY5MDAwMDAwLTB4ZmIwZmZmZmZdClsgICAgMS42MTY4
MjNdIHBjaSAwMDAwOjAwOjAyLjA6ICAgYnJpZGdlIHdpbmRvdyBbbWVtIDB4ZTAwMDAwMDAtMHhl
ZmZmZmZmZiA2NGJpdCBwcmVmXQpbICAgIDEuNjE2OTU0XSBwY2kgMDAwMDowMDowMi4xOiBQQ0kg
YnJpZGdlIHRvIFtidXMgMDRdClsgICAgMS42MTcxMTldIHBjaSAwMDAwOjAwOjAyLjI6IFBDSSBi
cmlkZ2UgdG8gW2J1cyAwNV0KWyAgICAxLjYxNzI3OV0gcGNpIDAwMDA6MDA6MDIuMzogUENJIGJy
aWRnZSB0byBbYnVzIDA2XQpbICAgIDEuNjE3NDUyXSBwY2kgMDAwMDowMDowMy4wOiBQQ0kgYnJp
ZGdlIHRvIFtidXMgMDddClsgICAgMS42MTc2MTVdIHBjaSAwMDAwOjAwOjAzLjE6IFBDSSBicmlk
Z2UgdG8gW2J1cyAwOF0KWyAgICAxLjYxNzc3NV0gcGNpIDAwMDA6MDA6MDMuMjogUENJIGJyaWRn
ZSB0byBbYnVzIDA5XQpbICAgIDEuNjE3OTM3XSBwY2kgMDAwMDowMDowMy4zOiBQQ0kgYnJpZGdl
IHRvIFtidXMgMGFdClsgICAgMS42MTgwOTRdIHBjaSAwMDAwOjAwOjFjLjA6IFBDSSBicmlkZ2Ug
dG8gW2J1cyAwYl0KWyAgICAxLjYxODMxNl0gcGNpIDAwMDA6MGM6MDAuMDogWzExMDY6MzQ4M10g
dHlwZSAwMCBjbGFzcyAweDBjMDMzMApbICAgIDEuNjE4Mzk5XSBwY2kgMDAwMDowYzowMC4wOiBy
ZWcgMHgxMDogW21lbSAweGZiMjAwMDAwLTB4ZmIyMDBmZmYgNjRiaXRdClsgICAgMS42MTg3NjFd
IHBjaSAwMDAwOjBjOjAwLjA6IFBNRSMgc3VwcG9ydGVkIGZyb20gRDAgRDNjb2xkClsgICAgMS42
MTg5NTNdIHBjaSAwMDAwOjAwOjFjLjE6IFBDSSBicmlkZ2UgdG8gW2J1cyAwY10KWyAgICAxLjYx
ODk3Ml0gcGNpIDAwMDA6MDA6MWMuMTogICBicmlkZ2Ugd2luZG93IFttZW0gMHhmYjIwMDAwMC0w
eGZiMmZmZmZmXQpbICAgIDEuNjE5MTgxXSBwY2kgMDAwMDowZDowMC4wOiBbMTBlYzo4MTY4XSB0
eXBlIDAwIGNsYXNzIDB4MDIwMDAwClsgICAgMS42MTkyNDddIHBjaSAwMDAwOjBkOjAwLjA6IHJl
ZyAweDEwOiBbaW8gIDB4ZTAwMC0weGUwZmZdClsgICAgMS42MTkzMzddIHBjaSAwMDAwOjBkOjAw
LjA6IHJlZyAweDE4OiBbbWVtIDB4ZmIxMDQwMDAtMHhmYjEwNGZmZiA2NGJpdF0KWyAgICAxLjYx
OTM5Ml0gcGNpIDAwMDA6MGQ6MDAuMDogcmVnIDB4MjA6IFttZW0gMHhmYjEwMDAwMC0weGZiMTAz
ZmZmIDY0Yml0XQpbICAgIDEuNjE5NzIyXSBwY2kgMDAwMDowZDowMC4wOiBzdXBwb3J0cyBEMSBE
MgpbICAgIDEuNjE5NzIzXSBwY2kgMDAwMDowZDowMC4wOiBQTUUjIHN1cHBvcnRlZCBmcm9tIEQw
IEQxIEQyIEQzaG90IEQzY29sZApbICAgIDEuNjE5OTg4XSBwY2kgMDAwMDowMDoxYy40OiBQQ0kg
YnJpZGdlIHRvIFtidXMgMGRdClsgICAgMS42MTk5OTldIHBjaSAwMDAwOjAwOjFjLjQ6ICAgYnJp
ZGdlIHdpbmRvdyBbaW8gIDB4ZTAwMC0weGVmZmZdClsgICAgMS42MjAwMDldIHBjaSAwMDAwOjAw
OjFjLjQ6ICAgYnJpZGdlIHdpbmRvdyBbbWVtIDB4ZmIxMDAwMDAtMHhmYjFmZmZmZl0KWyAgICAx
LjYyMDA5NF0gcGNpX2J1cyAwMDAwOjBlOiBleHRlbmRlZCBjb25maWcgc3BhY2Ugbm90IGFjY2Vz
c2libGUKWyAgICAxLjYyMDI1NV0gcGNpIDAwMDA6MDA6MWUuMDogUENJIGJyaWRnZSB0byBbYnVz
IDBlXSAoc3VidHJhY3RpdmUgZGVjb2RlKQpbICAgIDEuNjIwMjkwXSBwY2kgMDAwMDowMDoxZS4w
OiAgIGJyaWRnZSB3aW5kb3cgW2lvICAweDAwMDAtMHgwM2FmIHdpbmRvd10gKHN1YnRyYWN0aXZl
IGRlY29kZSkKWyAgICAxLjYyMDI5MV0gcGNpIDAwMDA6MDA6MWUuMDogICBicmlkZ2Ugd2luZG93
IFtpbyAgMHgwM2UwLTB4MGNmNyB3aW5kb3ddIChzdWJ0cmFjdGl2ZSBkZWNvZGUpClsgICAgMS42
MjAyOTNdIHBjaSAwMDAwOjAwOjFlLjA6ICAgYnJpZGdlIHdpbmRvdyBbaW8gIDB4MDNiMC0weDAz
ZGYgd2luZG93XSAoc3VidHJhY3RpdmUgZGVjb2RlKQpbICAgIDEuNjIwMjk0XSBwY2kgMDAwMDow
MDoxZS4wOiAgIGJyaWRnZSB3aW5kb3cgW2lvICAweDBkMDAtMHhmZmZmIHdpbmRvd10gKHN1YnRy
YWN0aXZlIGRlY29kZSkKWyAgICAxLjYyMDI5NV0gcGNpIDAwMDA6MDA6MWUuMDogICBicmlkZ2Ug
d2luZG93IFttZW0gMHgwMDBhMDAwMC0weDAwMGJmZmZmIHdpbmRvd10gKHN1YnRyYWN0aXZlIGRl
Y29kZSkKWyAgICAxLjYyMDI5Nl0gcGNpIDAwMDA6MDA6MWUuMDogICBicmlkZ2Ugd2luZG93IFtt
ZW0gMHgwMDBjMDAwMC0weDAwMGRmZmZmIHdpbmRvd10gKHN1YnRyYWN0aXZlIGRlY29kZSkKWyAg
ICAxLjYyMDI5N10gcGNpIDAwMDA6MDA6MWUuMDogICBicmlkZ2Ugd2luZG93IFttZW0gMHhjYzAw
MDAwMC0weGZmZmZmZmZmIHdpbmRvd10gKHN1YnRyYWN0aXZlIGRlY29kZSkKWyAgICAxLjYyMDI5
OV0gcGNpIDAwMDA6MDA6MWUuMDogICBicmlkZ2Ugd2luZG93IFttZW0gMHg0NDAwMDAwMDAtMHgz
ZmZmZmZmZmZmZmYgd2luZG93XSAoc3VidHJhY3RpdmUgZGVjb2RlKQpbICAgIDEuNjIwODQzXSB4
ZW46IHJlZ2lzdGVyaW5nIGdzaSAxMyB0cmlnZ2VyaW5nIDEgcG9sYXJpdHkgMApbICAgIDEuNjIx
MDcyXSBBQ1BJOiBQQ0kgUm9vdCBCcmlkZ2UgW1VOQzBdIChkb21haW4gMDAwMCBbYnVzIGZmXSkK
WyAgICAxLjYyMTA3Nl0gYWNwaSBQTlAwQTAzOjAwOiBfT1NDOiBPUyBzdXBwb3J0cyBbRXh0ZW5k
ZWRDb25maWcgQVNQTSBDbG9ja1BNIFNlZ21lbnRzIE1TSSBIUFgtVHlwZTNdClsgICAgMS42MjEx
MDFdIGFjcGkgUE5QMEEwMzowMDogX09TQzogT1Mgbm93IGNvbnRyb2xzIFtQQ0llSG90cGx1ZyBT
SFBDSG90cGx1ZyBQTUUgQUVSIFBDSWVDYXBhYmlsaXR5IExUUl0KWyAgICAxLjYyMTE2NF0gUENJ
IGhvc3QgYnJpZGdlIHRvIGJ1cyAwMDAwOmZmClsgICAgMS42MjExNjZdIHBjaV9idXMgMDAwMDpm
Zjogcm9vdCBidXMgcmVzb3VyY2UgW2J1cyBmZl0KWyAgICAxLjYyMTIyNF0gcGNpIDAwMDA6ZmY6
MDguMDogWzgwODY6MGU4MF0gdHlwZSAwMCBjbGFzcyAweDA4ODAwMApbICAgIDEuNjIxNDc5XSBw
Y2kgMDAwMDpmZjowOC4yOiBbODA4NjowZTMyXSB0eXBlIDAwIGNsYXNzIDB4MTEwMTAwClsgICAg
MS42MjE3MzhdIHBjaSAwMDAwOmZmOjA4LjM6IFs4MDg2OjBlODNdIHR5cGUgMDAgY2xhc3MgMHgw
ODgwMDAKWyAgICAxLjYyMjA4MV0gcGNpIDAwMDA6ZmY6MDguNDogWzgwODY6MGU4NF0gdHlwZSAw
MCBjbGFzcyAweDA4ODAwMApbICAgIDEuNjIyNDQwXSBwY2kgMDAwMDpmZjowOC41OiBbODA4Njow
ZTg1XSB0eXBlIDAwIGNsYXNzIDB4MDg4MDAwClsgICAgMS42MjI3NzhdIHBjaSAwMDAwOmZmOjA4
LjY6IFs4MDg2OjBlODZdIHR5cGUgMDAgY2xhc3MgMHgwODgwMDAKWyAgICAxLjYyMzExNl0gcGNp
IDAwMDA6ZmY6MDguNzogWzgwODY6MGU4N10gdHlwZSAwMCBjbGFzcyAweDA4ODAwMApbICAgIDEu
NjIzNDQ3XSBwY2kgMDAwMDpmZjowOS4wOiBbODA4NjowZTkwXSB0eXBlIDAwIGNsYXNzIDB4MDg4
MDAwClsgICAgMS42MjM2OTddIHBjaSAwMDAwOmZmOjA5LjI6IFs4MDg2OjBlMzNdIHR5cGUgMDAg
Y2xhc3MgMHgxMTAxMDAKWyAgICAxLjYyMzk1NF0gcGNpIDAwMDA6ZmY6MDkuMzogWzgwODY6MGU5
M10gdHlwZSAwMCBjbGFzcyAweDA4ODAwMApbICAgIDEuNjI0MjkyXSBwY2kgMDAwMDpmZjowOS40
OiBbODA4NjowZTk0XSB0eXBlIDAwIGNsYXNzIDB4MDg4MDAwClsgICAgMS42MjQ2MzddIHBjaSAw
MDAwOmZmOjA5LjU6IFs4MDg2OjBlOTVdIHR5cGUgMDAgY2xhc3MgMHgwODgwMDAKWyAgICAxLjYy
NDk3M10gcGNpIDAwMDA6ZmY6MDkuNjogWzgwODY6MGU5Nl0gdHlwZSAwMCBjbGFzcyAweDA4ODAw
MApbICAgIDEuNjI1Mjk3XSBwY2kgMDAwMDpmZjowYS4wOiBbODA4NjowZWMwXSB0eXBlIDAwIGNs
YXNzIDB4MDg4MDAwClsgICAgMS42MjU1NDBdIHBjaSAwMDAwOmZmOjBhLjE6IFs4MDg2OjBlYzFd
IHR5cGUgMDAgY2xhc3MgMHgwODgwMDAKWyAgICAxLjYyNTc3N10gcGNpIDAwMDA6ZmY6MGEuMjog
WzgwODY6MGVjMl0gdHlwZSAwMCBjbGFzcyAweDA4ODAwMApbICAgIDEuNjI2MDIxXSBwY2kgMDAw
MDpmZjowYS4zOiBbODA4NjowZWMzXSB0eXBlIDAwIGNsYXNzIDB4MDg4MDAwClsgICAgMS42MjYy
ODJdIHBjaSAwMDAwOmZmOjBiLjA6IFs4MDg2OjBlMWVdIHR5cGUgMDAgY2xhc3MgMHgwODgwMDAK
WyAgICAxLjYyNjUyM10gcGNpIDAwMDA6ZmY6MGIuMzogWzgwODY6MGUxZl0gdHlwZSAwMCBjbGFz
cyAweDA4ODAwMApbICAgIDEuNjI2NzcxXSBwY2kgMDAwMDpmZjowYy4wOiBbODA4NjowZWUwXSB0
eXBlIDAwIGNsYXNzIDB4MDg4MDAwClsgICAgMS42MjcwMDhdIHBjaSAwMDAwOmZmOjBjLjE6IFs4
MDg2OjBlZTJdIHR5cGUgMDAgY2xhc3MgMHgwODgwMDAKWyAgICAxLjYyNzI0Ml0gcGNpIDAwMDA6
ZmY6MGMuMjogWzgwODY6MGVlNF0gdHlwZSAwMCBjbGFzcyAweDA4ODAwMApbICAgIDEuNjI3NDc2
XSBwY2kgMDAwMDpmZjowYy4zOiBbODA4NjowZWU2XSB0eXBlIDAwIGNsYXNzIDB4MDg4MDAwClsg
ICAgMS42Mjc3MTJdIHBjaSAwMDAwOmZmOjBjLjQ6IFs4MDg2OjBlZThdIHR5cGUgMDAgY2xhc3Mg
MHgwODgwMDAKWyAgICAxLjYyNzk1MF0gcGNpIDAwMDA6ZmY6MGMuNTogWzgwODY6MGVlYV0gdHlw
ZSAwMCBjbGFzcyAweDA4ODAwMApbICAgIDEuNjI4MTkyXSBwY2kgMDAwMDpmZjowZC4wOiBbODA4
NjowZWUxXSB0eXBlIDAwIGNsYXNzIDB4MDg4MDAwClsgICAgMS42Mjg0MjddIHBjaSAwMDAwOmZm
OjBkLjE6IFs4MDg2OjBlZTNdIHR5cGUgMDAgY2xhc3MgMHgwODgwMDAKWyAgICAxLjYyODY2NF0g
cGNpIDAwMDA6ZmY6MGQuMjogWzgwODY6MGVlNV0gdHlwZSAwMCBjbGFzcyAweDA4ODAwMApbICAg
IDEuNjI4ODk4XSBwY2kgMDAwMDpmZjowZC4zOiBbODA4NjowZWU3XSB0eXBlIDAwIGNsYXNzIDB4
MDg4MDAwClsgICAgMS42MjkxMzJdIHBjaSAwMDAwOmZmOjBkLjQ6IFs4MDg2OjBlZTldIHR5cGUg
MDAgY2xhc3MgMHgwODgwMDAKWyAgICAxLjYyOTM3N10gcGNpIDAwMDA6ZmY6MGQuNTogWzgwODY6
MGVlYl0gdHlwZSAwMCBjbGFzcyAweDA4ODAwMApbICAgIDEuNjI5NjIwXSBwY2kgMDAwMDpmZjow
ZS4wOiBbODA4NjowZWEwXSB0eXBlIDAwIGNsYXNzIDB4MDg4MDAwClsgICAgMS42Mjk4NjNdIHBj
aSAwMDAwOmZmOjBlLjE6IFs4MDg2OjBlMzBdIHR5cGUgMDAgY2xhc3MgMHgxMTAxMDAKWyAgICAx
LjYzMDE0Ml0gcGNpIDAwMDA6ZmY6MGYuMDogWzgwODY6MGVhOF0gdHlwZSAwMCBjbGFzcyAweDA4
ODAwMApbICAgIDEuNjMwNDg2XSBwY2kgMDAwMDpmZjowZi4xOiBbODA4NjowZTcxXSB0eXBlIDAw
IGNsYXNzIDB4MDg4MDAwClsgICAgMS42MzA4MjVdIHBjaSAwMDAwOmZmOjBmLjI6IFs4MDg2OjBl
YWFdIHR5cGUgMDAgY2xhc3MgMHgwODgwMDAKWyAgICAxLjYzMTE1OV0gcGNpIDAwMDA6ZmY6MGYu
MzogWzgwODY6MGVhYl0gdHlwZSAwMCBjbGFzcyAweDA4ODAwMApbICAgIDEuNjMxNDkxXSBwY2kg
MDAwMDpmZjowZi40OiBbODA4NjowZWFjXSB0eXBlIDAwIGNsYXNzIDB4MDg4MDAwClsgICAgMS42
MzE4MjRdIHBjaSAwMDAwOmZmOjBmLjU6IFs4MDg2OjBlYWRdIHR5cGUgMDAgY2xhc3MgMHgwODgw
MDAKWyAgICAxLjYzMjE3MF0gcGNpIDAwMDA6ZmY6MTAuMDogWzgwODY6MGViMF0gdHlwZSAwMCBj
bGFzcyAweDA4ODAwMApbICAgIDEuNjMyNTEwXSBwY2kgMDAwMDpmZjoxMC4xOiBbODA4NjowZWIx
XSB0eXBlIDAwIGNsYXNzIDB4MDg4MDAwClsgICAgMS42MzI4NDRdIHBjaSAwMDAwOmZmOjEwLjI6
IFs4MDg2OjBlYjJdIHR5cGUgMDAgY2xhc3MgMHgwODgwMDAKWyAgICAxLjYzMzE3OV0gcGNpIDAw
MDA6ZmY6MTAuMzogWzgwODY6MGViM10gdHlwZSAwMCBjbGFzcyAweDA4ODAwMApbICAgIDEuNjMz
NTE3XSBwY2kgMDAwMDpmZjoxMC40OiBbODA4NjowZWI0XSB0eXBlIDAwIGNsYXNzIDB4MDg4MDAw
ClsgICAgMS42MzM4NTJdIHBjaSAwMDAwOmZmOjEwLjU6IFs4MDg2OjBlYjVdIHR5cGUgMDAgY2xh
c3MgMHgwODgwMDAKWyAgICAxLjYzNDE5MF0gcGNpIDAwMDA6ZmY6MTAuNjogWzgwODY6MGViNl0g
dHlwZSAwMCBjbGFzcyAweDA4ODAwMApbICAgIDEuNjM0NTI4XSBwY2kgMDAwMDpmZjoxMC43OiBb
ODA4NjowZWI3XSB0eXBlIDAwIGNsYXNzIDB4MDg4MDAwClsgICAgMS42MzQ4NTZdIHBjaSAwMDAw
OmZmOjEzLjA6IFs4MDg2OjBlMWRdIHR5cGUgMDAgY2xhc3MgMHgwODgwMDAKWyAgICAxLjYzNTA5
Nl0gcGNpIDAwMDA6ZmY6MTMuMTogWzgwODY6MGUzNF0gdHlwZSAwMCBjbGFzcyAweDExMDEwMApb
ICAgIDEuNjM1MzQwXSBwY2kgMDAwMDpmZjoxMy40OiBbODA4NjowZTgxXSB0eXBlIDAwIGNsYXNz
IDB4MDg4MDAwClsgICAgMS42MzU1NzVdIHBjaSAwMDAwOmZmOjEzLjU6IFs4MDg2OjBlMzZdIHR5
cGUgMDAgY2xhc3MgMHgxMTAxMDAKWyAgICAxLjYzNTgzMF0gcGNpIDAwMDA6ZmY6MTYuMDogWzgw
ODY6MGVjOF0gdHlwZSAwMCBjbGFzcyAweDA4ODAwMApbICAgIDEuNjM2MDY1XSBwY2kgMDAwMDpm
ZjoxNi4xOiBbODA4NjowZWM5XSB0eXBlIDAwIGNsYXNzIDB4MDg4MDAwClsgICAgMS42MzYyOTld
IHBjaSAwMDAwOmZmOjE2LjI6IFs4MDg2OjBlY2FdIHR5cGUgMDAgY2xhc3MgMHgwODgwMDAKWyAg
ICAxLjYzNjU4M10gcGNpIDAwMDA6ZmY6MWMuMDogWzgwODY6MGU2MF0gdHlwZSAwMCBjbGFzcyAw
eDA4ODAwMApbICAgIDEuNjM2ODIyXSBwY2kgMDAwMDpmZjoxYy4xOiBbODA4NjowZTM4XSB0eXBl
IDAwIGNsYXNzIDB4MTEwMTAwClsgICAgMS42MzcxMTJdIHBjaSAwMDAwOmZmOjFkLjA6IFs4MDg2
OjBlNjhdIHR5cGUgMDAgY2xhc3MgMHgwODgwMDAKWyAgICAxLjYzNzQ1Ml0gcGNpIDAwMDA6ZmY6
MWQuMTogWzgwODY6MGU3OV0gdHlwZSAwMCBjbGFzcyAweDA4ODAwMApbICAgIDEuNjM3Nzg3XSBw
Y2kgMDAwMDpmZjoxZC4yOiBbODA4NjowZTZhXSB0eXBlIDAwIGNsYXNzIDB4MDg4MDAwClsgICAg
MS42MzgxMjJdIHBjaSAwMDAwOmZmOjFkLjM6IFs4MDg2OjBlNmJdIHR5cGUgMDAgY2xhc3MgMHgw
ODgwMDAKWyAgICAxLjYzODQ2N10gcGNpIDAwMDA6ZmY6MWQuNDogWzgwODY6MGU2Y10gdHlwZSAw
MCBjbGFzcyAweDA4ODAwMApbICAgIDEuNjM4ODA2XSBwY2kgMDAwMDpmZjoxZC41OiBbODA4Njow
ZTZkXSB0eXBlIDAwIGNsYXNzIDB4MDg4MDAwClsgICAgMS42MzkxNTFdIHBjaSAwMDAwOmZmOjFl
LjA6IFs4MDg2OjBlZjBdIHR5cGUgMDAgY2xhc3MgMHgwODgwMDAKWyAgICAxLjYzOTQ4Nl0gcGNp
IDAwMDA6ZmY6MWUuMTogWzgwODY6MGVmMV0gdHlwZSAwMCBjbGFzcyAweDA4ODAwMApbICAgIDEu
NjM5ODI2XSBwY2kgMDAwMDpmZjoxZS4yOiBbODA4NjowZWYyXSB0eXBlIDAwIGNsYXNzIDB4MDg4
MDAwClsgICAgMS42NDAxNjFdIHBjaSAwMDAwOmZmOjFlLjM6IFs4MDg2OjBlZjNdIHR5cGUgMDAg
Y2xhc3MgMHgwODgwMDAKWyAgICAxLjY0MDQ5NV0gcGNpIDAwMDA6ZmY6MWUuNDogWzgwODY6MGVm
NF0gdHlwZSAwMCBjbGFzcyAweDA4ODAwMApbICAgIDEuNjQwODMxXSBwY2kgMDAwMDpmZjoxZS41
OiBbODA4NjowZWY1XSB0eXBlIDAwIGNsYXNzIDB4MDg4MDAwClsgICAgMS42NDExNjZdIHBjaSAw
MDAwOmZmOjFlLjY6IFs4MDg2OjBlZjZdIHR5cGUgMDAgY2xhc3MgMHgwODgwMDAKWyAgICAxLjY0
MTUwNl0gcGNpIDAwMDA6ZmY6MWUuNzogWzgwODY6MGVmN10gdHlwZSAwMCBjbGFzcyAweDA4ODAw
MApbICAgIDEuNjQxOTcyXSBBQ1BJOiBQQ0kgSW50ZXJydXB0IExpbmsgW0xOS0FdIChJUlFzIDMg
NCA1IDYgNyAxMCAqMTEgMTIgMTQgMTUpClsgICAgMS42NDIwNjBdIEFDUEk6IFBDSSBJbnRlcnJ1
cHQgTGluayBbTE5LQl0gKElSUXMgMyA0IDUgNiA3ICoxMCAxMSAxMiAxNCAxNSkKWyAgICAxLjY0
MjE0NF0gQUNQSTogUENJIEludGVycnVwdCBMaW5rIFtMTktDXSAoSVJRcyAzIDQgKjUgNiAxMCAx
MSAxMiAxNCAxNSkKWyAgICAxLjY0MjIzOF0gQUNQSTogUENJIEludGVycnVwdCBMaW5rIFtMTktE
XSAoSVJRcyAzICo0IDUgNiAxMCAxMSAxMiAxNCAxNSkKWyAgICAxLjY0MjMyMl0gQUNQSTogUENJ
IEludGVycnVwdCBMaW5rIFtMTktFXSAoSVJRcyAzIDQgNSA2IDcgMTAgMTEgMTIgMTQgMTUpICow
LCBkaXNhYmxlZC4KWyAgICAxLjY0MjQwN10gQUNQSTogUENJIEludGVycnVwdCBMaW5rIFtMTktG
XSAoSVJRcyAzIDQgNSA2IDcgMTAgMTEgMTIgMTQgMTUpICowLCBkaXNhYmxlZC4KWyAgICAxLjY0
MjQ5MV0gQUNQSTogUENJIEludGVycnVwdCBMaW5rIFtMTktHXSAoSVJRcyAzIDQgNSA2IDcgMTAg
MTEgMTIgMTQgMTUpICowLCBkaXNhYmxlZC4KWyAgICAxLjY0MjU3Nl0gQUNQSTogUENJIEludGVy
cnVwdCBMaW5rIFtMTktIXSAoSVJRcyAqMyA0IDUgNiA3IDEwIDExIDEyIDE0IDE1KQpbICAgIDEu
NjQ1MzE5XSBBUElDOiBOUl9DUFVTL3Bvc3NpYmxlX2NwdXMgbGltaXQgb2YgNCByZWFjaGVkLiBQ
cm9jZXNzb3IgNC8weDEgaWdub3JlZC4KWyAgICAxLjY0NTMxOV0gQUNQSTogVW5hYmxlIHRvIG1h
cCBsYXBpYyB0byBsb2dpY2FsIGNwdSBudW1iZXIKWyAgICAxLjY0NTQ5MF0gQVBJQzogTlJfQ1BV
Uy9wb3NzaWJsZV9jcHVzIGxpbWl0IG9mIDQgcmVhY2hlZC4gUHJvY2Vzc29yIDUvMHgzIGlnbm9y
ZWQuClsgICAgMS42NDU0OTFdIEFDUEk6IFVuYWJsZSB0byBtYXAgbGFwaWMgdG8gbG9naWNhbCBj
cHUgbnVtYmVyClsgICAgMS42NDU2MzhdIEFQSUM6IE5SX0NQVVMvcG9zc2libGVfY3B1cyBsaW1p
dCBvZiA0IHJlYWNoZWQuIFByb2Nlc3NvciA2LzB4NSBpZ25vcmVkLgpbICAgIDEuNjQ1NjM5XSBB
Q1BJOiBVbmFibGUgdG8gbWFwIGxhcGljIHRvIGxvZ2ljYWwgY3B1IG51bWJlcgpbICAgIDEuNjQ1
Nzg2XSBBUElDOiBOUl9DUFVTL3Bvc3NpYmxlX2NwdXMgbGltaXQgb2YgNCByZWFjaGVkLiBQcm9j
ZXNzb3IgNy8weDcgaWdub3JlZC4KWyAgICAxLjY0NTc4N10gQUNQSTogVW5hYmxlIHRvIG1hcCBs
YXBpYyB0byBsb2dpY2FsIGNwdSBudW1iZXIKWyAgICAxLjY0NTg3Nl0gQVBJQzogTlJfQ1BVUy9w
b3NzaWJsZV9jcHVzIGxpbWl0IG9mIDQgcmVhY2hlZC4gUHJvY2Vzc29yIDgvMHg4IGlnbm9yZWQu
ClsgICAgMS42NDU4NzddIEFDUEk6IFVuYWJsZSB0byBtYXAgbGFwaWMgdG8gbG9naWNhbCBjcHUg
bnVtYmVyClsgICAgMS42NDU5NzBdIEFQSUM6IE5SX0NQVVMvcG9zc2libGVfY3B1cyBsaW1pdCBv
ZiA0IHJlYWNoZWQuIFByb2Nlc3NvciA5LzB4OSBpZ25vcmVkLgpbICAgIDEuNjQ1OTcxXSBBQ1BJ
OiBVbmFibGUgdG8gbWFwIGxhcGljIHRvIGxvZ2ljYWwgY3B1IG51bWJlcgpbICAgIDEuNjQ2MDYw
XSBBUElDOiBOUl9DUFVTL3Bvc3NpYmxlX2NwdXMgbGltaXQgb2YgNCByZWFjaGVkLiBQcm9jZXNz
b3IgMTAvMHhhIGlnbm9yZWQuClsgICAgMS42NDYwNjFdIEFDUEk6IFVuYWJsZSB0byBtYXAgbGFw
aWMgdG8gbG9naWNhbCBjcHUgbnVtYmVyClsgICAgMS42NDYxNTRdIEFQSUM6IE5SX0NQVVMvcG9z
c2libGVfY3B1cyBsaW1pdCBvZiA0IHJlYWNoZWQuIFByb2Nlc3NvciAxMS8weGIgaWdub3JlZC4K
WyAgICAxLjY0NjE1NV0gQUNQSTogVW5hYmxlIHRvIG1hcCBsYXBpYyB0byBsb2dpY2FsIGNwdSBu
dW1iZXIKWyAgICAxLjY0NjI0NF0gQVBJQzogTlJfQ1BVUy9wb3NzaWJsZV9jcHVzIGxpbWl0IG9m
IDQgcmVhY2hlZC4gUHJvY2Vzc29yIDEyLzB4MTAgaWdub3JlZC4KWyAgICAxLjY0NjI0NV0gQUNQ
STogVW5hYmxlIHRvIG1hcCBsYXBpYyB0byBsb2dpY2FsIGNwdSBudW1iZXIKWyAgICAxLjY0NjMz
OV0gQVBJQzogTlJfQ1BVUy9wb3NzaWJsZV9jcHVzIGxpbWl0IG9mIDQgcmVhY2hlZC4gUHJvY2Vz
c29yIDEzLzB4MTEgaWdub3JlZC4KWyAgICAxLjY0NjM0MF0gQUNQSTogVW5hYmxlIHRvIG1hcCBs
YXBpYyB0byBsb2dpY2FsIGNwdSBudW1iZXIKWyAgICAxLjY0NjQzMF0gQVBJQzogTlJfQ1BVUy9w
b3NzaWJsZV9jcHVzIGxpbWl0IG9mIDQgcmVhY2hlZC4gUHJvY2Vzc29yIDE0LzB4MTIgaWdub3Jl
ZC4KWyAgICAxLjY0NjQzMF0gQUNQSTogVW5hYmxlIHRvIG1hcCBsYXBpYyB0byBsb2dpY2FsIGNw
dSBudW1iZXIKWyAgICAxLjY0NjUyNV0gQVBJQzogTlJfQ1BVUy9wb3NzaWJsZV9jcHVzIGxpbWl0
IG9mIDQgcmVhY2hlZC4gUHJvY2Vzc29yIDE1LzB4MTMgaWdub3JlZC4KWyAgICAxLjY0NjUyNl0g
QUNQSTogVW5hYmxlIHRvIG1hcCBsYXBpYyB0byBsb2dpY2FsIGNwdSBudW1iZXIKWyAgICAxLjY0
NjYxNl0gQVBJQzogTlJfQ1BVUy9wb3NzaWJsZV9jcHVzIGxpbWl0IG9mIDQgcmVhY2hlZC4gUHJv
Y2Vzc29yIDE2LzB4MTQgaWdub3JlZC4KWyAgICAxLjY0NjYxN10gQUNQSTogVW5hYmxlIHRvIG1h
cCBsYXBpYyB0byBsb2dpY2FsIGNwdSBudW1iZXIKWyAgICAxLjY0NjcxMV0gQVBJQzogTlJfQ1BV
Uy9wb3NzaWJsZV9jcHVzIGxpbWl0IG9mIDQgcmVhY2hlZC4gUHJvY2Vzc29yIDE3LzB4MTUgaWdu
b3JlZC4KWyAgICAxLjY0NjcxMl0gQUNQSTogVW5hYmxlIHRvIG1hcCBsYXBpYyB0byBsb2dpY2Fs
IGNwdSBudW1iZXIKWyAgICAxLjY0NjgwN10gQVBJQzogTlJfQ1BVUy9wb3NzaWJsZV9jcHVzIGxp
bWl0IG9mIDQgcmVhY2hlZC4gUHJvY2Vzc29yIDE4LzB4MTYgaWdub3JlZC4KWyAgICAxLjY0Njgw
OF0gQUNQSTogVW5hYmxlIHRvIG1hcCBsYXBpYyB0byBsb2dpY2FsIGNwdSBudW1iZXIKWyAgICAx
LjY0NjkwM10gQVBJQzogTlJfQ1BVUy9wb3NzaWJsZV9jcHVzIGxpbWl0IG9mIDQgcmVhY2hlZC4g
UHJvY2Vzc29yIDE5LzB4MTcgaWdub3JlZC4KWyAgICAxLjY0NjkwNF0gQUNQSTogVW5hYmxlIHRv
IG1hcCBsYXBpYyB0byBsb2dpY2FsIGNwdSBudW1iZXIKWyAgICAxLjY0Njk5N10gQVBJQzogTlJf
Q1BVUy9wb3NzaWJsZV9jcHVzIGxpbWl0IG9mIDQgcmVhY2hlZC4gUHJvY2Vzc29yIDIwLzB4MTgg
aWdub3JlZC4KWyAgICAxLjY0Njk5OF0gQUNQSTogVW5hYmxlIHRvIG1hcCBsYXBpYyB0byBsb2dp
Y2FsIGNwdSBudW1iZXIKWyAgICAxLjY0NzA5M10gQVBJQzogTlJfQ1BVUy9wb3NzaWJsZV9jcHVz
IGxpbWl0IG9mIDQgcmVhY2hlZC4gUHJvY2Vzc29yIDIxLzB4MTkgaWdub3JlZC4KWyAgICAxLjY0
NzA5NF0gQUNQSTogVW5hYmxlIHRvIG1hcCBsYXBpYyB0byBsb2dpY2FsIGNwdSBudW1iZXIKWyAg
ICAxLjY0NzE4Nl0gQVBJQzogTlJfQ1BVUy9wb3NzaWJsZV9jcHVzIGxpbWl0IG9mIDQgcmVhY2hl
ZC4gUHJvY2Vzc29yIDIyLzB4MWEgaWdub3JlZC4KWyAgICAxLjY0NzE4N10gQUNQSTogVW5hYmxl
IHRvIG1hcCBsYXBpYyB0byBsb2dpY2FsIGNwdSBudW1iZXIKWyAgICAxLjY0NzI4M10gQVBJQzog
TlJfQ1BVUy9wb3NzaWJsZV9jcHVzIGxpbWl0IG9mIDQgcmVhY2hlZC4gUHJvY2Vzc29yIDIzLzB4
MWIgaWdub3JlZC4KWyAgICAxLjY0NzI4M10gQUNQSTogVW5hYmxlIHRvIG1hcCBsYXBpYyB0byBs
b2dpY2FsIGNwdSBudW1iZXIKWyAgICAxLjY0NzgxNl0geGVuOmJhbGxvb246IEluaXRpYWxpc2lu
ZyBiYWxsb29uIGRyaXZlcgpbICAgIDEuNjQ3ODE2XSBpb21tdTogRGVmYXVsdCBkb21haW4gdHlw
ZTogVHJhbnNsYXRlZCAKWyAgICAxLjY0OTMzOV0gcGNpIDAwMDA6MDM6MDAuMDogdmdhYXJiOiBz
ZXR0aW5nIGFzIGJvb3QgVkdBIGRldmljZQpbICAgIDEuNjQ5MzQwXSBwY2kgMDAwMDowMzowMC4w
OiB2Z2FhcmI6IFZHQSBkZXZpY2UgYWRkZWQ6IGRlY29kZXM9aW8rbWVtLG93bnM9aW8rbWVtLGxv
Y2tzPW5vbmUKWyAgICAxLjY0OTM1NF0gcGNpIDAwMDA6MDM6MDAuMDogdmdhYXJiOiBicmlkZ2Ug
Y29udHJvbCBwb3NzaWJsZQpbICAgIDEuNjQ5MzU1XSB2Z2FhcmI6IGxvYWRlZApbICAgIDEuNjQ5
NDUxXSBFREFDIE1DOiBWZXI6IDMuMC4wClsgICAgMS42NTA0MTddIE5ldExhYmVsOiBJbml0aWFs
aXppbmcKWyAgICAxLjY1MDQxN10gTmV0TGFiZWw6ICBkb21haW4gaGFzaCBzaXplID0gMTI4Clsg
ICAgMS42NTA0MTddIE5ldExhYmVsOiAgcHJvdG9jb2xzID0gVU5MQUJFTEVEIENJUFNPdjQgQ0FM
SVBTTwpbICAgIDEuNjUwNDE3XSBOZXRMYWJlbDogIHVubGFiZWxlZCB0cmFmZmljIGFsbG93ZWQg
YnkgZGVmYXVsdApbICAgIDEuNjUwNDE3XSBQQ0k6IFVzaW5nIEFDUEkgZm9yIElSUSByb3V0aW5n
ClsgICAgMS42NzgxODBdIFBDSTogcGNpX2NhY2hlX2xpbmVfc2l6ZSBzZXQgdG8gNjQgYnl0ZXMK
WyAgICAxLjY3OTAyOV0gZTgyMDogcmVzZXJ2ZSBSQU0gYnVmZmVyIFttZW0gMHgwMDA5ZTAwMC0w
eDAwMDlmZmZmXQpbICAgIDEuNjc5MDMyXSBlODIwOiByZXNlcnZlIFJBTSBidWZmZXIgW21lbSAw
eDgwMDYyMDAwLTB4ODNmZmZmZmZdClsgICAgMS42ODEzMzBdIGNsb2Nrc291cmNlOiBTd2l0Y2hl
ZCB0byBjbG9ja3NvdXJjZSB0c2MtZWFybHkKWyAgICAxLjY5Mjg5OF0gVkZTOiBEaXNrIHF1b3Rh
cyBkcXVvdF82LjYuMApbICAgIDEuNjkyOTE1XSBWRlM6IERxdW90LWNhY2hlIGhhc2ggdGFibGUg
ZW50cmllczogNTEyIChvcmRlciAwLCA0MDk2IGJ5dGVzKQpbICAgIDEuNjkyOTM3XSBodWdldGxi
ZnM6IGRpc2FibGluZyBiZWNhdXNlIHRoZXJlIGFyZSBubyBzdXBwb3J0ZWQgaHVnZXBhZ2Ugc2l6
ZXMKWyAgICAxLjY5MzAyOV0gQXBwQXJtb3I6IEFwcEFybW9yIEZpbGVzeXN0ZW0gRW5hYmxlZApb
ICAgIDEuNjkzMDQ3XSBwbnA6IFBuUCBBQ1BJIGluaXQKWyAgICAxLjY5MzE5M10gc3lzdGVtIDAw
OjAwOiBbbWVtIDB4ZmMwMDAwMDAtMHhmY2ZmZmZmZl0gaGFzIGJlZW4gcmVzZXJ2ZWQKWyAgICAx
LjY5MzE5NV0gc3lzdGVtIDAwOjAwOiBbbWVtIDB4ZmQwMDAwMDAtMHhmZGZmZmZmZl0gaGFzIGJl
ZW4gcmVzZXJ2ZWQKWyAgICAxLjY5MzE5Nl0gc3lzdGVtIDAwOjAwOiBbbWVtIDB4ZmUwMDAwMDAt
MHhmZWFmZmZmZl0gaGFzIGJlZW4gcmVzZXJ2ZWQKWyAgICAxLjY5MzE5OF0gc3lzdGVtIDAwOjAw
OiBbbWVtIDB4ZmViMDAwMDAtMHhmZWJmZmZmZl0gaGFzIGJlZW4gcmVzZXJ2ZWQKWyAgICAxLjY5
MzE5OV0gc3lzdGVtIDAwOjAwOiBbbWVtIDB4ZmVkMDA0MDAtMHhmZWQzZmZmZl0gY291bGQgbm90
IGJlIHJlc2VydmVkClsgICAgMS42OTMyMDFdIHN5c3RlbSAwMDowMDogW21lbSAweGZlZDQ1MDAw
LTB4ZmVkZmZmZmZdIGhhcyBiZWVuIHJlc2VydmVkClsgICAgMS42OTMyMDJdIHN5c3RlbSAwMDow
MDogW21lbSAweGZlZTAwMDAwLTB4ZmVlZmZmZmZdIGhhcyBiZWVuIHJlc2VydmVkClsgICAgMS42
OTMyMDhdIHN5c3RlbSAwMDowMDogUGx1ZyBhbmQgUGxheSBBQ1BJIGRldmljZSwgSURzIFBOUDBj
MDEgKGFjdGl2ZSkKWyAgICAxLjY5MzM1N10gc3lzdGVtIDAwOjAxOiBbbWVtIDB4ZmJmZmMwMDAt
MHhmYmZmZGZmZl0gY291bGQgbm90IGJlIHJlc2VydmVkClsgICAgMS42OTMzNjJdIHN5c3RlbSAw
MDowMTogUGx1ZyBhbmQgUGxheSBBQ1BJIGRldmljZSwgSURzIFBOUDBjMDIgKGFjdGl2ZSkKWyAg
ICAxLjY5MzU5OF0gc3lzdGVtIDAwOjAyOiBbaW8gIDB4MGEwMC0weDBhMWZdIGhhcyBiZWVuIHJl
c2VydmVkClsgICAgMS42OTM1OTldIHN5c3RlbSAwMDowMjogW2lvICAweDBhMjAtMHgwYTJmXSBo
YXMgYmVlbiByZXNlcnZlZApbICAgIDEuNjkzNjAxXSBzeXN0ZW0gMDA6MDI6IFtpbyAgMHgwYTMw
LTB4MGEzZl0gaGFzIGJlZW4gcmVzZXJ2ZWQKWyAgICAxLjY5MzYwNV0gc3lzdGVtIDAwOjAyOiBQ
bHVnIGFuZCBQbGF5IEFDUEkgZGV2aWNlLCBJRHMgUE5QMGMwMiAoYWN0aXZlKQpbICAgIDEuNjkz
NjM0XSB4ZW46IHJlZ2lzdGVyaW5nIGdzaSAxIHRyaWdnZXJpbmcgMSBwb2xhcml0eSAwClsgICAg
MS42OTM2NzldIHBucCAwMDowMzogUGx1ZyBhbmQgUGxheSBBQ1BJIGRldmljZSwgSURzIFBOUDAz
MDMgUE5QMDMwYiAoYWN0aXZlKQpbICAgIDEuNjkzNzE1XSB4ZW46IHJlZ2lzdGVyaW5nIGdzaSAx
MiB0cmlnZ2VyaW5nIDEgcG9sYXJpdHkgMApbICAgIDEuNjkzNzU2XSBwbnAgMDA6MDQ6IFBsdWcg
YW5kIFBsYXkgQUNQSSBkZXZpY2UsIElEcyBQTlAwZjAzIFBOUDBmMTMgKGFjdGl2ZSkKWyAgICAx
LjY5Mzc4M10geGVuOiByZWdpc3RlcmluZyBnc2kgOCB0cmlnZ2VyaW5nIDEgcG9sYXJpdHkgMApb
ICAgIDEuNjkzODIwXSBwbnAgMDA6MDU6IFBsdWcgYW5kIFBsYXkgQUNQSSBkZXZpY2UsIElEcyBQ
TlAwYjAwIChhY3RpdmUpClsgICAgMS42OTM5MTJdIHN5c3RlbSAwMDowNjogW2lvICAweDA0ZDAt
MHgwNGQxXSBoYXMgYmVlbiByZXNlcnZlZApbICAgIDEuNjkzOTE3XSBzeXN0ZW0gMDA6MDY6IFBs
dWcgYW5kIFBsYXkgQUNQSSBkZXZpY2UsIElEcyBQTlAwYzAyIChhY3RpdmUpClsgICAgMS42OTQx
OTFdIHN5c3RlbSAwMDowNzogW2lvICAweDA0MDAtMHgwNDUzXSBoYXMgYmVlbiByZXNlcnZlZApb
ICAgIDEuNjk0MTkyXSBzeXN0ZW0gMDA6MDc6IFtpbyAgMHgwNDU4LTB4MDQ3Zl0gaGFzIGJlZW4g
cmVzZXJ2ZWQKWyAgICAxLjY5NDE5NF0gc3lzdGVtIDAwOjA3OiBbaW8gIDB4MTE4MC0weDExOWZd
IGhhcyBiZWVuIHJlc2VydmVkClsgICAgMS42OTQxOTVdIHN5c3RlbSAwMDowNzogW2lvICAweDA1
MDAtMHgwNTdmXSBoYXMgYmVlbiByZXNlcnZlZApbICAgIDEuNjk0MTk3XSBzeXN0ZW0gMDA6MDc6
IFttZW0gMHhmZWQxYzAwMC0weGZlZDFmZmZmXSBoYXMgYmVlbiByZXNlcnZlZApbICAgIDEuNjk0
MTk5XSBzeXN0ZW0gMDA6MDc6IFttZW0gMHhmZWMwMDAwMC0weGZlY2ZmZmZmXSBjb3VsZCBub3Qg
YmUgcmVzZXJ2ZWQKWyAgICAxLjY5NDIwMV0gc3lzdGVtIDAwOjA3OiBbbWVtIDB4ZmYwMDAwMDAt
MHhmZmZmZmZmZl0gaGFzIGJlZW4gcmVzZXJ2ZWQKWyAgICAxLjY5NDIwNV0gc3lzdGVtIDAwOjA3
OiBQbHVnIGFuZCBQbGF5IEFDUEkgZGV2aWNlLCBJRHMgUE5QMGMwMSAoYWN0aXZlKQpbICAgIDEu
Njk0MzE5XSBzeXN0ZW0gMDA6MDg6IFtpbyAgMHgwNDU0LTB4MDQ1N10gaGFzIGJlZW4gcmVzZXJ2
ZWQKWyAgICAxLjY5NDMyNF0gc3lzdGVtIDAwOjA4OiBQbHVnIGFuZCBQbGF5IEFDUEkgZGV2aWNl
LCBJRHMgSU5UM2YwZCBQTlAwYzAyIChhY3RpdmUpClsgICAgMS42OTQ3ODddIHBucDogUG5QIEFD
UEk6IGZvdW5kIDkgZGV2aWNlcwpbICAgIDEuNzE0MjI1XSBQTS1UaW1lciBmYWlsZWQgY29uc2lz
dGVuY3kgY2hlY2sgICgweGZmZmZmZikgLSBhYm9ydGluZy4KWyAgICAxLjcxNDI4Nl0gTkVUOiBS
ZWdpc3RlcmVkIHByb3RvY29sIGZhbWlseSAyClsgICAgMS43MTQ1OTNdIHRjcF9saXN0ZW5fcG9y
dGFkZHJfaGFzaCBoYXNoIHRhYmxlIGVudHJpZXM6IDEwMjQgKG9yZGVyOiAyLCAxNjM4NCBieXRl
cywgbGluZWFyKQpbICAgIDEuNzE0NjEzXSBUQ1AgZXN0YWJsaXNoZWQgaGFzaCB0YWJsZSBlbnRy
aWVzOiAxNjM4NCAob3JkZXI6IDUsIDEzMTA3MiBieXRlcywgbGluZWFyKQpbICAgIDEuNzE0NjYz
XSBUQ1AgYmluZCBoYXNoIHRhYmxlIGVudHJpZXM6IDE2Mzg0IChvcmRlcjogNiwgMjYyMTQ0IGJ5
dGVzLCBsaW5lYXIpClsgICAgMS43MTQ2ODVdIFRDUDogSGFzaCB0YWJsZXMgY29uZmlndXJlZCAo
ZXN0YWJsaXNoZWQgMTYzODQgYmluZCAxNjM4NCkKWyAgICAxLjcxNDcyOF0gVURQIGhhc2ggdGFi
bGUgZW50cmllczogMTAyNCAob3JkZXI6IDMsIDMyNzY4IGJ5dGVzLCBsaW5lYXIpClsgICAgMS43
MTQ3MzddIFVEUC1MaXRlIGhhc2ggdGFibGUgZW50cmllczogMTAyNCAob3JkZXI6IDMsIDMyNzY4
IGJ5dGVzLCBsaW5lYXIpClsgICAgMS43MTQ5MDVdIE5FVDogUmVnaXN0ZXJlZCBwcm90b2NvbCBm
YW1pbHkgMQpbICAgIDEuNzE0OTE0XSBORVQ6IFJlZ2lzdGVyZWQgcHJvdG9jb2wgZmFtaWx5IDQ0
ClsgICAgMS43MTQ5NDldIHBjaSAwMDAwOjAwOjAxLjA6IFBDSSBicmlkZ2UgdG8gW2J1cyAwMV0K
WyAgICAxLjcxNDk5MF0gcGNpIDAwMDA6MDA6MDEuMTogUENJIGJyaWRnZSB0byBbYnVzIDAyXQpb
ICAgIDEuNzE1MDI4XSBwY2kgMDAwMDowMDowMi4wOiBQQ0kgYnJpZGdlIHRvIFtidXMgMDNdClsg
ICAgMS43MTUwNDJdIHBjaSAwMDAwOjAwOjAyLjA6ICAgYnJpZGdlIHdpbmRvdyBbbWVtIDB4Zjkw
MDAwMDAtMHhmYjBmZmZmZl0KWyAgICAxLjcxNTA1Ml0gcGNpIDAwMDA6MDA6MDIuMDogICBicmlk
Z2Ugd2luZG93IFttZW0gMHhlMDAwMDAwMC0weGVmZmZmZmZmIDY0Yml0IHByZWZdClsgICAgMS43
MTUwNjldIHBjaSAwMDAwOjAwOjAyLjE6IFBDSSBicmlkZ2UgdG8gW2J1cyAwNF0KWyAgICAxLjcx
NTEwN10gcGNpIDAwMDA6MDA6MDIuMjogUENJIGJyaWRnZSB0byBbYnVzIDA1XQpbICAgIDEuNzE1
MTQ1XSBwY2kgMDAwMDowMDowMi4zOiBQQ0kgYnJpZGdlIHRvIFtidXMgMDZdClsgICAgMS43MTUx
ODJdIHBjaSAwMDAwOjAwOjAzLjA6IFBDSSBicmlkZ2UgdG8gW2J1cyAwN10KWyAgICAxLjcxNTIy
MF0gcGNpIDAwMDA6MDA6MDMuMTogUENJIGJyaWRnZSB0byBbYnVzIDA4XQpbICAgIDEuNzE1MjU4
XSBwY2kgMDAwMDowMDowMy4yOiBQQ0kgYnJpZGdlIHRvIFtidXMgMDldClsgICAgMS43MTUyOTVd
IHBjaSAwMDAwOjAwOjAzLjM6IFBDSSBicmlkZ2UgdG8gW2J1cyAwYV0KWyAgICAxLjcxNTMzNF0g
cGNpIDAwMDA6MDA6MWMuMDogUENJIGJyaWRnZSB0byBbYnVzIDBiXQpbICAgIDEuNzE1MzczXSBw
Y2kgMDAwMDowMDoxYy4xOiBQQ0kgYnJpZGdlIHRvIFtidXMgMGNdClsgICAgMS43MTUzODddIHBj
aSAwMDAwOjAwOjFjLjE6ICAgYnJpZGdlIHdpbmRvdyBbbWVtIDB4ZmIyMDAwMDAtMHhmYjJmZmZm
Zl0KWyAgICAxLjcxNTQxNF0gcGNpIDAwMDA6MDA6MWMuNDogUENJIGJyaWRnZSB0byBbYnVzIDBk
XQpbICAgIDEuNzE1NDIwXSBwY2kgMDAwMDowMDoxYy40OiAgIGJyaWRnZSB3aW5kb3cgW2lvICAw
eGUwMDAtMHhlZmZmXQpbICAgIDEuNzE1NDM0XSBwY2kgMDAwMDowMDoxYy40OiAgIGJyaWRnZSB3
aW5kb3cgW21lbSAweGZiMTAwMDAwLTB4ZmIxZmZmZmZdClsgICAgMS43MTU0NjBdIHBjaSAwMDAw
OjAwOjFlLjA6IFBDSSBicmlkZ2UgdG8gW2J1cyAwZV0KWyAgICAxLjcxNTUwMF0gcGNpX2J1cyAw
MDAwOjAwOiByZXNvdXJjZSA0IFtpbyAgMHgwMDAwLTB4MDNhZiB3aW5kb3ddClsgICAgMS43MTU1
MDFdIHBjaV9idXMgMDAwMDowMDogcmVzb3VyY2UgNSBbaW8gIDB4MDNlMC0weDBjZjcgd2luZG93
XQpbICAgIDEuNzE1NTAyXSBwY2lfYnVzIDAwMDA6MDA6IHJlc291cmNlIDYgW2lvICAweDAzYjAt
MHgwM2RmIHdpbmRvd10KWyAgICAxLjcxNTUwM10gcGNpX2J1cyAwMDAwOjAwOiByZXNvdXJjZSA3
IFtpbyAgMHgwZDAwLTB4ZmZmZiB3aW5kb3ddClsgICAgMS43MTU1MDVdIHBjaV9idXMgMDAwMDow
MDogcmVzb3VyY2UgOCBbbWVtIDB4MDAwYTAwMDAtMHgwMDBiZmZmZiB3aW5kb3ddClsgICAgMS43
MTU1MDZdIHBjaV9idXMgMDAwMDowMDogcmVzb3VyY2UgOSBbbWVtIDB4MDAwYzAwMDAtMHgwMDBk
ZmZmZiB3aW5kb3ddClsgICAgMS43MTU1MDddIHBjaV9idXMgMDAwMDowMDogcmVzb3VyY2UgMTAg
W21lbSAweGNjMDAwMDAwLTB4ZmZmZmZmZmYgd2luZG93XQpbICAgIDEuNzE1NTA4XSBwY2lfYnVz
IDAwMDA6MDA6IHJlc291cmNlIDExIFttZW0gMHg0NDAwMDAwMDAtMHgzZmZmZmZmZmZmZmYgd2lu
ZG93XQpbICAgIDEuNzE1NTEwXSBwY2lfYnVzIDAwMDA6MDM6IHJlc291cmNlIDEgW21lbSAweGY5
MDAwMDAwLTB4ZmIwZmZmZmZdClsgICAgMS43MTU1MTFdIHBjaV9idXMgMDAwMDowMzogcmVzb3Vy
Y2UgMiBbbWVtIDB4ZTAwMDAwMDAtMHhlZmZmZmZmZiA2NGJpdCBwcmVmXQpbICAgIDEuNzE1NTEz
XSBwY2lfYnVzIDAwMDA6MGM6IHJlc291cmNlIDEgW21lbSAweGZiMjAwMDAwLTB4ZmIyZmZmZmZd
ClsgICAgMS43MTU1MTRdIHBjaV9idXMgMDAwMDowZDogcmVzb3VyY2UgMCBbaW8gIDB4ZTAwMC0w
eGVmZmZdClsgICAgMS43MTU1MTZdIHBjaV9idXMgMDAwMDowZDogcmVzb3VyY2UgMSBbbWVtIDB4
ZmIxMDAwMDAtMHhmYjFmZmZmZl0KWyAgICAxLjcxNTUxN10gcGNpX2J1cyAwMDAwOjBlOiByZXNv
dXJjZSA0IFtpbyAgMHgwMDAwLTB4MDNhZiB3aW5kb3ddClsgICAgMS43MTU1MThdIHBjaV9idXMg
MDAwMDowZTogcmVzb3VyY2UgNSBbaW8gIDB4MDNlMC0weDBjZjcgd2luZG93XQpbICAgIDEuNzE1
NTIwXSBwY2lfYnVzIDAwMDA6MGU6IHJlc291cmNlIDYgW2lvICAweDAzYjAtMHgwM2RmIHdpbmRv
d10KWyAgICAxLjcxNTUyMV0gcGNpX2J1cyAwMDAwOjBlOiByZXNvdXJjZSA3IFtpbyAgMHgwZDAw
LTB4ZmZmZiB3aW5kb3ddClsgICAgMS43MTU1MjJdIHBjaV9idXMgMDAwMDowZTogcmVzb3VyY2Ug
OCBbbWVtIDB4MDAwYTAwMDAtMHgwMDBiZmZmZiB3aW5kb3ddClsgICAgMS43MTU1MjNdIHBjaV9i
dXMgMDAwMDowZTogcmVzb3VyY2UgOSBbbWVtIDB4MDAwYzAwMDAtMHgwMDBkZmZmZiB3aW5kb3dd
ClsgICAgMS43MTU1MjVdIHBjaV9idXMgMDAwMDowZTogcmVzb3VyY2UgMTAgW21lbSAweGNjMDAw
MDAwLTB4ZmZmZmZmZmYgd2luZG93XQpbICAgIDEuNzE1NTI2XSBwY2lfYnVzIDAwMDA6MGU6IHJl
c291cmNlIDExIFttZW0gMHg0NDAwMDAwMDAtMHgzZmZmZmZmZmZmZmYgd2luZG93XQpbICAgIDEu
NzE1NjY3XSBwY2kgMDAwMDowMDowNS4wOiBkaXNhYmxlZCBib290IGludGVycnVwdHMgb24gZGV2
aWNlIFs4MDg2OjBlMjhdClsgICAgMS43MTU4NTRdIHhlbjogcmVnaXN0ZXJpbmcgZ3NpIDE2IHRy
aWdnZXJpbmcgMCBwb2xhcml0eSAxClsgICAgMS43MTU4ODJdIHhlbjogLS0+IHBpcnE9MTYgLT4g
aXJxPTE2IChnc2k9MTYpClsgICAgMS43MTYxMjRdIHhlbjogcmVnaXN0ZXJpbmcgZ3NpIDIzIHRy
aWdnZXJpbmcgMCBwb2xhcml0eSAxClsgICAgMS43MTYxNDRdIHhlbjogLS0+IHBpcnE9MjMgLT4g
aXJxPTIzIChnc2k9MjMpClsgICAgMS43MTYzNDZdIHBjaSAwMDAwOjAzOjAwLjA6IFZpZGVvIGRl
dmljZSB3aXRoIHNoYWRvd2VkIFJPTSBhdCBbbWVtIDB4MDAwYzAwMDAtMHgwMDBkZmZmZl0KWyAg
ICAxLjcxNjQyOV0geGVuOiByZWdpc3RlcmluZyBnc2kgMTYgdHJpZ2dlcmluZyAwIHBvbGFyaXR5
IDEKWyAgICAxLjcxNjQzM10gQWxyZWFkeSBzZXR1cCB0aGUgR1NJIDoxNgpbICAgIDEuNzE2NDc5
XSB4ZW46IHJlZ2lzdGVyaW5nIGdzaSAxNyB0cmlnZ2VyaW5nIDAgcG9sYXJpdHkgMQpbICAgIDEu
NzE2NDk2XSB4ZW46IC0tPiBwaXJxPTE3IC0+IGlycT0xNyAoZ3NpPTE3KQpbICAgIDEuNzE2OTQ2
XSBQQ0k6IENMUyA2NCBieXRlcywgZGVmYXVsdCA2NApbICAgIDEuNzE3MDAxXSBUcnlpbmcgdG8g
dW5wYWNrIHJvb3RmcyBpbWFnZSBhcyBpbml0cmFtZnMuLi4KWyAgICAyLjI0OTIxNl0gRnJlZWlu
ZyBpbml0cmQgbWVtb3J5OiAzNDc3MksKWyAgICAyLjI0OTI3Ml0gY2xvY2tzb3VyY2U6IHRzYzog
bWFzazogMHhmZmZmZmZmZmZmZmZmZmZmIG1heF9jeWNsZXM6IDB4MjNmNDU0ZjA2MjUsIG1heF9p
ZGxlX25zOiA0NDA3OTUyMjg2NjEgbnMKWyAgICAyLjI0OTM3OF0gY2xvY2tzb3VyY2U6IFN3aXRj
aGVkIHRvIGNsb2Nrc291cmNlIHRzYwpbICAgIDIuMjQ5ODQ4XSBJbml0aWFsaXNlIHN5c3RlbSB0
cnVzdGVkIGtleXJpbmdzClsgICAgMi4yNDk4NjFdIEtleSB0eXBlIGJsYWNrbGlzdCByZWdpc3Rl
cmVkClsgICAgMi4yNTAwMDZdIHdvcmtpbmdzZXQ6IHRpbWVzdGFtcF9iaXRzPTM2IG1heF9vcmRl
cj0xOCBidWNrZXRfb3JkZXI9MApbICAgIDIuMjUxMDk1XSB6YnVkOiBsb2FkZWQKWyAgICAyLjI1
MTU1MV0gaW50ZWdyaXR5OiBQbGF0Zm9ybSBLZXlyaW5nIGluaXRpYWxpemVkClsgICAgMi4yNTE1
NTVdIEtleSB0eXBlIGFzeW1tZXRyaWMgcmVnaXN0ZXJlZApbICAgIDIuMjUxNTU2XSBBc3ltbWV0
cmljIGtleSBwYXJzZXIgJ3g1MDknIHJlZ2lzdGVyZWQKWyAgICAyLjI1MTU2Nl0gQmxvY2sgbGF5
ZXIgU0NTSSBnZW5lcmljIChic2cpIGRyaXZlciB2ZXJzaW9uIDAuNCBsb2FkZWQgKG1ham9yIDI1
MSkKWyAgICAyLjI1MTcyMV0gaW8gc2NoZWR1bGVyIG1xLWRlYWRsaW5lIHJlZ2lzdGVyZWQKWyAg
ICAyLjI1MjA0NF0geGVuOiByZWdpc3RlcmluZyBnc2kgMjYgdHJpZ2dlcmluZyAwIHBvbGFyaXR5
IDEKWyAgICAyLjI1MjA3NF0geGVuOiAtLT4gcGlycT0yNiAtPiBpcnE9MjYgKGdzaT0yNikKWyAg
ICAyLjI1MjYxN10geGVuOiByZWdpc3RlcmluZyBnc2kgMjYgdHJpZ2dlcmluZyAwIHBvbGFyaXR5
IDEKWyAgICAyLjI1MjYyMl0gQWxyZWFkeSBzZXR1cCB0aGUgR1NJIDoyNgpbICAgIDIuMjUzMTIw
XSB4ZW46IHJlZ2lzdGVyaW5nIGdzaSAzMiB0cmlnZ2VyaW5nIDAgcG9sYXJpdHkgMQpbICAgIDIu
MjUzMTM4XSB4ZW46IC0tPiBwaXJxPTMyIC0+IGlycT0zMiAoZ3NpPTMyKQpbICAgIDIuMjUzNjQ2
XSB4ZW46IHJlZ2lzdGVyaW5nIGdzaSAzMiB0cmlnZ2VyaW5nIDAgcG9sYXJpdHkgMQpbICAgIDIu
MjUzNjUwXSBBbHJlYWR5IHNldHVwIHRoZSBHU0kgOjMyClsgICAgMi4yNTQxNDVdIHhlbjogcmVn
aXN0ZXJpbmcgZ3NpIDMyIHRyaWdnZXJpbmcgMCBwb2xhcml0eSAxClsgICAgMi4yNTQxNDldIEFs
cmVhZHkgc2V0dXAgdGhlIEdTSSA6MzIKWyAgICAyLjI1NDYzNV0geGVuOiByZWdpc3RlcmluZyBn
c2kgMzIgdHJpZ2dlcmluZyAwIHBvbGFyaXR5IDEKWyAgICAyLjI1NDY0MF0gQWxyZWFkeSBzZXR1
cCB0aGUgR1NJIDozMgpbICAgIDIuMjU1MTM0XSB4ZW46IHJlZ2lzdGVyaW5nIGdzaSA0MCB0cmln
Z2VyaW5nIDAgcG9sYXJpdHkgMQpbICAgIDIuMjU1MTUyXSB4ZW46IC0tPiBwaXJxPTQwIC0+IGly
cT00MCAoZ3NpPTQwKQpbICAgIDIuMjU1NjUxXSB4ZW46IHJlZ2lzdGVyaW5nIGdzaSA0MCB0cmln
Z2VyaW5nIDAgcG9sYXJpdHkgMQpbICAgIDIuMjU1NjU2XSBBbHJlYWR5IHNldHVwIHRoZSBHU0kg
OjQwClsgICAgMi4yNTYxNDFdIHhlbjogcmVnaXN0ZXJpbmcgZ3NpIDQwIHRyaWdnZXJpbmcgMCBw
b2xhcml0eSAxClsgICAgMi4yNTYxNDZdIEFscmVhZHkgc2V0dXAgdGhlIEdTSSA6NDAKWyAgICAy
LjI1NjYzNV0geGVuOiByZWdpc3RlcmluZyBnc2kgNDAgdHJpZ2dlcmluZyAwIHBvbGFyaXR5IDEK
WyAgICAyLjI1NjY0MF0gQWxyZWFkeSBzZXR1cCB0aGUgR1NJIDo0MApbICAgIDIuMjU3MTI5XSB4
ZW46IHJlZ2lzdGVyaW5nIGdzaSAxNyB0cmlnZ2VyaW5nIDAgcG9sYXJpdHkgMQpbICAgIDIuMjU3
MTM0XSBBbHJlYWR5IHNldHVwIHRoZSBHU0kgOjE3ClsgICAgMi4yNTc3NTNdIHhlbjogcmVnaXN0
ZXJpbmcgZ3NpIDE3IHRyaWdnZXJpbmcgMCBwb2xhcml0eSAxClsgICAgMi4yNTc3NTddIEFscmVh
ZHkgc2V0dXAgdGhlIEdTSSA6MTcKWyAgICAyLjI1ODEyMF0gc2hwY2hwOiBTdGFuZGFyZCBIb3Qg
UGx1ZyBQQ0kgQ29udHJvbGxlciBEcml2ZXIgdmVyc2lvbjogMC40ClsgICAgMi4yNTgxMzJdIGlu
dGVsX2lkbGU6IE1XQUlUIHN1YnN0YXRlczogMHgxMTIwClsgICAgMi4yNTgyNjBdIE1vbml0b3It
TXdhaXQgd2lsbCBiZSB1c2VkIHRvIGVudGVyIEMtMSBzdGF0ZQpbICAgIDIuMjU4MjcwXSBBQ1BJ
OiBcX1NCXy5TQ0swLkMwMDA6IEZvdW5kIDEgaWRsZSBzdGF0ZXMKWyAgICAyLjI1ODI3MV0gaW50
ZWxfaWRsZTogdjAuNS4xIG1vZGVsIDB4M0UKWyAgICAyLjI1ODI3Nl0gaW50ZWxfaWRsZTogaW50
ZWxfaWRsZSB5aWVsZGluZyB0byBub25lClsgICAgMi4yNTg0MTVdIEFDUEk6IFxfU0JfLlNDSzAu
QzAwMDogRm91bmQgMSBpZGxlIHN0YXRlcwpbICAgIDIuMjU4NzAwXSBBQ1BJOiBcX1NCXy5TQ0sw
LkMwMDI6IEZvdW5kIDEgaWRsZSBzdGF0ZXMKWyAgICAyLjI1OTA2NF0gQUNQSTogXF9TQl8uU0NL
MC5DMDA0OiBGb3VuZCAxIGlkbGUgc3RhdGVzClsgICAgMi4yNTkzNjddIEFDUEk6IFxfU0JfLlND
SzAuQzAwNjogRm91bmQgMSBpZGxlIHN0YXRlcwpbICAgIDIuMjYwMDMzXSB4ZW5fbWNlbG9nOiAv
ZGV2L21jZWxvZyByZWdpc3RlcmVkIGJ5IFhlbgpbICAgIDIuMjYwNjA3XSBTZXJpYWw6IDgyNTAv
MTY1NTAgZHJpdmVyLCA0IHBvcnRzLCBJUlEgc2hhcmluZyBlbmFibGVkClsgICAgMi4yNjEzNDBd
IGhwZXRfYWNwaV9hZGQ6IG5vIGFkZHJlc3Mgb3IgaXJxcyBpbiBfQ1JTClsgICAgMi4yNjEzNjNd
IExpbnV4IGFncGdhcnQgaW50ZXJmYWNlIHYwLjEwMwpbICAgIDIuMjYxNDYxXSBBTUQtVmk6IEFN
RCBJT01NVXYyIGRyaXZlciBieSBKb2VyZyBSb2VkZWwgPGpyb2VkZWxAc3VzZS5kZT4KWyAgICAy
LjI2MTQ2Ml0gQU1ELVZpOiBBTUQgSU9NTVV2MiBmdW5jdGlvbmFsaXR5IG5vdCBhdmFpbGFibGUg
b24gdGhpcyBzeXN0ZW0KWyAgICAyLjI2MTg2Nl0gaTgwNDI6IFBOUDogUFMvMiBDb250cm9sbGVy
IFtQTlAwMzAzOlBTMkssUE5QMGYwMzpQUzJNXSBhdCAweDYwLDB4NjQgaXJxIDEsMTIKWyAgICAy
LjI2MjQ5NV0gc2VyaW86IGk4MDQyIEtCRCBwb3J0IGF0IDB4NjAsMHg2NCBpcnEgMQpbICAgIDIu
MjYyNTAxXSBzZXJpbzogaTgwNDIgQVVYIHBvcnQgYXQgMHg2MCwweDY0IGlycSAxMgpbICAgIDIu
MjYyNzA3XSBtb3VzZWRldjogUFMvMiBtb3VzZSBkZXZpY2UgY29tbW9uIGZvciBhbGwgbWljZQpb
ICAgIDIuMjYyNzg4XSBydGNfY21vcyAwMDowNTogUlRDIGNhbiB3YWtlIGZyb20gUzQKWyAgICAy
LjI2MzE1NV0gcnRjX2Ntb3MgMDA6MDU6IHJlZ2lzdGVyZWQgYXMgcnRjMApbICAgIDIuMjYzMjMx
XSBydGNfY21vcyAwMDowNTogc2V0dGluZyBzeXN0ZW0gY2xvY2sgdG8gMjAyMS0wMi0wMVQxNDo0
MjoxMiBVVEMgKDE2MTIxOTA1MzIpClsgICAgMi4yNjMyNTNdIHJ0Y19jbW9zIDAwOjA1OiBhbGFy
bXMgdXAgdG8gb25lIG1vbnRoLCB5M2ssIDExNCBieXRlcyBudnJhbQpbICAgIDIuMjYzMjYxXSBp
bnRlbF9wc3RhdGU6IENQVSBtb2RlbCBub3Qgc3VwcG9ydGVkClsgICAgMi4yNjMzODZdIGxlZHRy
aWctY3B1OiByZWdpc3RlcmVkIHRvIGluZGljYXRlIGFjdGl2aXR5IG9uIENQVXMKWyAgICAyLjI2
NDEzOV0gTkVUOiBSZWdpc3RlcmVkIHByb3RvY29sIGZhbWlseSAxMApbICAgIDIuMjg0NzIxXSBT
ZWdtZW50IFJvdXRpbmcgd2l0aCBJUHY2ClsgICAgMi4yODQ3NDJdIG1pcDY6IE1vYmlsZSBJUHY2
ClsgICAgMi4yODQ3NDVdIE5FVDogUmVnaXN0ZXJlZCBwcm90b2NvbCBmYW1pbHkgMTcKWyAgICAy
LjI4NDg1OF0gbXBsc19nc286IE1QTFMgR1NPIHN1cHBvcnQKWyAgICAyLjI4NTE1MF0gSVBJIHNo
b3J0aGFuZCBicm9hZGNhc3Q6IGVuYWJsZWQKWyAgICAyLjI4NTE1N10gc2NoZWRfY2xvY2s6IE1h
cmtpbmcgc3RhYmxlICgyMjA2ODc0OTA2LCA3ODIzNjc1MiktPigyMjk3NzYzNjUyLCAtMTI2NTE5
OTQpClsgICAgMi4yODU0OTBdIHJlZ2lzdGVyZWQgdGFza3N0YXRzIHZlcnNpb24gMQpbICAgIDIu
Mjg1NDkzXSBMb2FkaW5nIGNvbXBpbGVkLWluIFguNTA5IGNlcnRpZmljYXRlcwpbICAgIDIuMzI1
MjMzXSBMb2FkZWQgWC41MDkgY2VydCAnRGViaWFuIFNlY3VyZSBCb290IENBOiA2Y2NlY2U3ZTRj
NmMwZDFmNjE0OWYzZGQyN2RmY2M1Y2JiNDE5ZWExJwpbICAgIDIuMzI1MjUzXSBMb2FkZWQgWC41
MDkgY2VydCAnRGViaWFuIFNlY3VyZSBCb290IFNpZ25lciAyMDIwOiAwMGI1NWViM2I5JwpbICAg
IDIuMzI1Mjg5XSB6c3dhcDogbG9hZGVkIHVzaW5nIHBvb2wgbHpvL3pidWQKWyAgICAyLjMyNTcz
OF0gS2V5IHR5cGUgLl9mc2NyeXB0IHJlZ2lzdGVyZWQKWyAgICAyLjMyNTczOV0gS2V5IHR5cGUg
LmZzY3J5cHQgcmVnaXN0ZXJlZApbICAgIDIuMzI1NzQwXSBLZXkgdHlwZSBmc2NyeXB0LXByb3Zp
c2lvbmluZyByZWdpc3RlcmVkClsgICAgMi4zMjU3ODZdIEFwcEFybW9yOiBBcHBBcm1vciBzaGEx
IHBvbGljeSBoYXNoaW5nIGVuYWJsZWQKWyAgICAyLjMyODMxOF0gRnJlZWluZyB1bnVzZWQga2Vy
bmVsIGltYWdlIChpbml0bWVtKSBtZW1vcnk6IDIzODBLClsgICAgMi4zNTczNzNdIFdyaXRlIHBy
b3RlY3RpbmcgdGhlIGtlcm5lbCByZWFkLW9ubHkgZGF0YTogMTg0MzJrClsgICAgMi4zNzE0NDBd
IEZyZWVpbmcgdW51c2VkIGtlcm5lbCBpbWFnZSAodGV4dC9yb2RhdGEgZ2FwKSBtZW1vcnk6IDIw
NDBLClsgICAgMi4zNzE1MTFdIEZyZWVpbmcgdW51c2VkIGtlcm5lbCBpbWFnZSAocm9kYXRhL2Rh
dGEgZ2FwKSBtZW1vcnk6IDM2SwpbICAgIDIuODIyMzA3XSB4ODYvbW06IENoZWNrZWQgVytYIG1h
cHBpbmdzOiBwYXNzZWQsIG5vIFcrWCBwYWdlcyBmb3VuZC4KWyAgICAyLjgyMjM4Nl0gUnVuIC9p
bml0IGFzIGluaXQgcHJvY2VzcwpbICAgIDIuODIyMzg5XSAgIHdpdGggYXJndW1lbnRzOgpbICAg
IDIuODIyMzkwXSAgICAgL2luaXQKWyAgICAyLjgyMjM5MV0gICAgIHBsYWNlaG9sZGVyClsgICAg
Mi44MjIzOTFdICAgd2l0aCBlbnZpcm9ubWVudDoKWyAgICAyLjgyMjM5M10gICAgIEhPTUU9Lwpb
ICAgIDIuODIyMzkzXSAgICAgVEVSTT1saW51eApbICAgIDMuMTQ2MTk5XSB4ZW46IHJlZ2lzdGVy
aW5nIGdzaSAxOCB0cmlnZ2VyaW5nIDAgcG9sYXJpdHkgMQpbICAgIDMuMTQ2MjM5XSB4ZW46IC0t
PiBwaXJxPTE4IC0+IGlycT0xOCAoZ3NpPTE4KQpbICAgIDMuMTQ2NDA0XSBpODAxX3NtYnVzIDAw
MDA6MDA6MWYuMzogU01CdXMgdXNpbmcgUENJIGludGVycnVwdApbICAgIDMuMTQ3MTEzXSBpMmMg
aTJjLTA6IDQvNCBtZW1vcnkgc2xvdHMgcG9wdWxhdGVkIChmcm9tIERNSSkKWyAgICAzLjE0ODgy
NV0gQUNQSTogYnVzIHR5cGUgVVNCIHJlZ2lzdGVyZWQKWyAgICAzLjE0ODg2MV0gdXNiY29yZTog
cmVnaXN0ZXJlZCBuZXcgaW50ZXJmYWNlIGRyaXZlciB1c2JmcwpbICAgIDMuMTQ4ODc0XSB1c2Jj
b3JlOiByZWdpc3RlcmVkIG5ldyBpbnRlcmZhY2UgZHJpdmVyIGh1YgpbICAgIDMuMTQ5NDYxXSB1
c2Jjb3JlOiByZWdpc3RlcmVkIG5ldyBkZXZpY2UgZHJpdmVyIHVzYgpbICAgIDMuMTU0Mjc0XSBT
Q1NJIHN1YnN5c3RlbSBpbml0aWFsaXplZApbICAgIDMuMTU2MTQwXSBlaGNpX2hjZDogVVNCIDIu
MCAnRW5oYW5jZWQnIEhvc3QgQ29udHJvbGxlciAoRUhDSSkgRHJpdmVyClsgICAgMy4xNTgzNDJd
IHhlbjogcmVnaXN0ZXJpbmcgZ3NpIDE2IHRyaWdnZXJpbmcgMCBwb2xhcml0eSAxClsgICAgMy4x
NTgzNDhdIEFscmVhZHkgc2V0dXAgdGhlIEdTSSA6MTYKWyAgICAzLjE1OTE2M10gZWhjaS1wY2k6
IEVIQ0kgUENJIHBsYXRmb3JtIGRyaXZlcgpbICAgIDMuMTU5Mjk4XSB4ZW46IHJlZ2lzdGVyaW5n
IGdzaSAxNiB0cmlnZ2VyaW5nIDAgcG9sYXJpdHkgMQpbICAgIDMuMTU5MzA4XSBBbHJlYWR5IHNl
dHVwIHRoZSBHU0kgOjE2ClsgICAgMy4xNjAzMzRdIGVoY2ktcGNpIDAwMDA6MDA6MWEuMDogRUhD
SSBIb3N0IENvbnRyb2xsZXIKWyAgICAzLjE2MDM0NV0gZWhjaS1wY2kgMDAwMDowMDoxYS4wOiBu
ZXcgVVNCIGJ1cyByZWdpc3RlcmVkLCBhc3NpZ25lZCBidXMgbnVtYmVyIDEKWyAgICAzLjE2MDM5
MF0gZWhjaS1wY2kgMDAwMDowMDoxYS4wOiBkZWJ1ZyBwb3J0IDIKWyAgICAzLjE2NDM2OF0gZWhj
aS1wY2kgMDAwMDowMDoxYS4wOiBjYWNoZSBsaW5lIHNpemUgb2YgNjQgaXMgbm90IHN1cHBvcnRl
ZApbICAgIDMuMTY1MzQwXSBlaGNpLXBjaSAwMDAwOjAwOjFhLjA6IGlycSAxNiwgaW8gbWVtIDB4
ZmIzMDIwMDAKWyAgICAzLjE4MjYwNF0gZWhjaS1wY2kgMDAwMDowMDoxYS4wOiBVU0IgMi4wIHN0
YXJ0ZWQsIEVIQ0kgMS4wMApbICAgIDMuMTg0MjYwXSB1c2IgdXNiMTogTmV3IFVTQiBkZXZpY2Ug
Zm91bmQsIGlkVmVuZG9yPTFkNmIsIGlkUHJvZHVjdD0wMDAyLCBiY2REZXZpY2U9IDUuMTAKWyAg
ICAzLjE4NDI2M10gdXNiIHVzYjE6IE5ldyBVU0IgZGV2aWNlIHN0cmluZ3M6IE1mcj0zLCBQcm9k
dWN0PTIsIFNlcmlhbE51bWJlcj0xClsgICAgMy4xODQyNjRdIHVzYiB1c2IxOiBQcm9kdWN0OiBF
SENJIEhvc3QgQ29udHJvbGxlcgpbICAgIDMuMTg0MjY2XSB1c2IgdXNiMTogTWFudWZhY3R1cmVy
OiBMaW51eCA1LjEwLjAtMS1hbWQ2NCBlaGNpX2hjZApbICAgIDMuMTg0MjY3XSB1c2IgdXNiMTog
U2VyaWFsTnVtYmVyOiAwMDAwOjAwOjFhLjAKWyAgICAzLjE4NDQ2M10gaHViIDEtMDoxLjA6IFVT
QiBodWIgZm91bmQKWyAgICAzLjE4NDQ4OF0gaHViIDEtMDoxLjA6IDIgcG9ydHMgZGV0ZWN0ZWQK
WyAgICAzLjE4NDk0NV0geGVuOiByZWdpc3RlcmluZyBnc2kgMjMgdHJpZ2dlcmluZyAwIHBvbGFy
aXR5IDEKWyAgICAzLjE4NDk1MF0gQWxyZWFkeSBzZXR1cCB0aGUgR1NJIDoyMwpbICAgIDMuMTg1
MDc2XSBlaGNpLXBjaSAwMDAwOjAwOjFkLjA6IEVIQ0kgSG9zdCBDb250cm9sbGVyClsgICAgMy4x
ODUwODVdIGVoY2ktcGNpIDAwMDA6MDA6MWQuMDogbmV3IFVTQiBidXMgcmVnaXN0ZXJlZCwgYXNz
aWduZWQgYnVzIG51bWJlciAyClsgICAgMy4xODUxMzBdIGVoY2ktcGNpIDAwMDA6MDA6MWQuMDog
ZGVidWcgcG9ydCAyClsgICAgMy4xODUyODRdIHhlbjogcmVnaXN0ZXJpbmcgZ3NpIDE3IHRyaWdn
ZXJpbmcgMCBwb2xhcml0eSAxClsgICAgMy4xODUyODldIEFscmVhZHkgc2V0dXAgdGhlIEdTSSA6
MTcKWyAgICAzLjE4OTExN10gZWhjaS1wY2kgMDAwMDowMDoxZC4wOiBjYWNoZSBsaW5lIHNpemUg
b2YgNjQgaXMgbm90IHN1cHBvcnRlZApbICAgIDMuMTg5MTc3XSBlaGNpLXBjaSAwMDAwOjAwOjFk
LjA6IGlycSAyMywgaW8gbWVtIDB4ZmIzMDEwMDAKWyAgICAzLjE4OTI2N10gbGliYXRhIHZlcnNp
b24gMy4wMCBsb2FkZWQuClsgICAgMy4xOTQ5MDddIGxpYnBoeTogcjgxNjk6IHByb2JlZApbICAg
IDMuMTk1MTcyXSByODE2OSAwMDAwOjBkOjAwLjAgZXRoMDogUlRMODE2OGgvODExMWgsIDAwOmUw
OjRjOjBhOjUyOjk3LCBYSUQgNTQxLCBJUlEgODkKWyAgICAzLjE5NTE3NV0gcjgxNjkgMDAwMDow
ZDowMC4wIGV0aDA6IGp1bWJvIGZlYXR1cmVzIFtmcmFtZXM6IDkxOTQgYnl0ZXMsIHR4IGNoZWNr
c3VtbWluZzoga29dClsgICAgMy4xOTgwNjJdIGFoY2kgMDAwMDowMDoxZi4yOiB2ZXJzaW9uIDMu
MApbICAgIDMuMTk4MTc2XSB4ZW46IHJlZ2lzdGVyaW5nIGdzaSAxOSB0cmlnZ2VyaW5nIDAgcG9s
YXJpdHkgMQpbICAgIDMuMTk4MTk3XSB4ZW46IC0tPiBwaXJxPTE5IC0+IGlycT0xOSAoZ3NpPTE5
KQpbICAgIDMuMTk4Mzc1XSBhaGNpIDAwMDA6MDA6MWYuMjogQUhDSSAwMDAxLjAzMDAgMzIgc2xv
dHMgNiBwb3J0cyA2IEdicHMgMHgzIGltcGwgU0FUQSBtb2RlClsgICAgMy4xOTgzNzhdIGFoY2kg
MDAwMDowMDoxZi4yOiBmbGFnczogNjRiaXQgbmNxIHNudGYgcG0gbGVkIGNsbyBwaW8gc2x1bSBw
YXJ0IGVtcyBhcHN0IApbICAgIDMuMjAxMzQ1XSBlaGNpLXBjaSAwMDAwOjAwOjFkLjA6IFVTQiAy
LjAgc3RhcnRlZCwgRUhDSSAxLjAwClsgICAgMy4yMDE1MTldIHVzYiB1c2IyOiBOZXcgVVNCIGRl
dmljZSBmb3VuZCwgaWRWZW5kb3I9MWQ2YiwgaWRQcm9kdWN0PTAwMDIsIGJjZERldmljZT0gNS4x
MApbICAgIDMuMjAxNTIyXSB1c2IgdXNiMjogTmV3IFVTQiBkZXZpY2Ugc3RyaW5nczogTWZyPTMs
IFByb2R1Y3Q9MiwgU2VyaWFsTnVtYmVyPTEKWyAgICAzLjIwMTUyNF0gdXNiIHVzYjI6IFByb2R1
Y3Q6IEVIQ0kgSG9zdCBDb250cm9sbGVyClsgICAgMy4yMDE1MjZdIHVzYiB1c2IyOiBNYW51ZmFj
dHVyZXI6IExpbnV4IDUuMTAuMC0xLWFtZDY0IGVoY2lfaGNkClsgICAgMy4yMDE1MjhdIHVzYiB1
c2IyOiBTZXJpYWxOdW1iZXI6IDAwMDA6MDA6MWQuMApbICAgIDMuMjAxNzQ3XSBodWIgMi0wOjEu
MDogVVNCIGh1YiBmb3VuZApbICAgIDMuMjAxNzc3XSBodWIgMi0wOjEuMDogMiBwb3J0cyBkZXRl
Y3RlZApbICAgIDMuMjAyMDYzXSB4aGNpX2hjZCAwMDAwOjBjOjAwLjA6IHhIQ0kgSG9zdCBDb250
cm9sbGVyClsgICAgMy4yMDIwNzJdIHhoY2lfaGNkIDAwMDA6MGM6MDAuMDogbmV3IFVTQiBidXMg
cmVnaXN0ZXJlZCwgYXNzaWduZWQgYnVzIG51bWJlciAzClsgICAgMy4yMDI2MzZdIHhoY2lfaGNk
IDAwMDA6MGM6MDAuMDogaGNjIHBhcmFtcyAweDAwMjg0MWViIGhjaSB2ZXJzaW9uIDB4MTAwIHF1
aXJrcyAweDAwMDAwMDAwMDAwMDA4OTAKWyAgICAzLjIwMzA0MV0gdXNiIHVzYjM6IE5ldyBVU0Ig
ZGV2aWNlIGZvdW5kLCBpZFZlbmRvcj0xZDZiLCBpZFByb2R1Y3Q9MDAwMiwgYmNkRGV2aWNlPSA1
LjEwClsgICAgMy4yMDMwNDNdIHVzYiB1c2IzOiBOZXcgVVNCIGRldmljZSBzdHJpbmdzOiBNZnI9
MywgUHJvZHVjdD0yLCBTZXJpYWxOdW1iZXI9MQpbICAgIDMuMjAzMDQ1XSB1c2IgdXNiMzogUHJv
ZHVjdDogeEhDSSBIb3N0IENvbnRyb2xsZXIKWyAgICAzLjIwMzA0N10gdXNiIHVzYjM6IE1hbnVm
YWN0dXJlcjogTGludXggNS4xMC4wLTEtYW1kNjQgeGhjaS1oY2QKWyAgICAzLjIwMzA0OV0gdXNi
IHVzYjM6IFNlcmlhbE51bWJlcjogMDAwMDowYzowMC4wClsgICAgMy4yMDMyMjddIGh1YiAzLTA6
MS4wOiBVU0IgaHViIGZvdW5kClsgICAgMy4yMDMyNTZdIGh1YiAzLTA6MS4wOiAxIHBvcnQgZGV0
ZWN0ZWQKWyAgICAzLjIwMzU0NF0geGhjaV9oY2QgMDAwMDowYzowMC4wOiB4SENJIEhvc3QgQ29u
dHJvbGxlcgpbICAgIDMuMjAzNTUwXSB4aGNpX2hjZCAwMDAwOjBjOjAwLjA6IG5ldyBVU0IgYnVz
IHJlZ2lzdGVyZWQsIGFzc2lnbmVkIGJ1cyBudW1iZXIgNApbICAgIDMuMjAzNTU1XSB4aGNpX2hj
ZCAwMDAwOjBjOjAwLjA6IEhvc3Qgc3VwcG9ydHMgVVNCIDMuMCBTdXBlclNwZWVkClsgICAgMy4y
MDM3NzZdIHVzYiB1c2I0OiBOZXcgVVNCIGRldmljZSBmb3VuZCwgaWRWZW5kb3I9MWQ2YiwgaWRQ
cm9kdWN0PTAwMDMsIGJjZERldmljZT0gNS4xMApbICAgIDMuMjAzNzc5XSB1c2IgdXNiNDogTmV3
IFVTQiBkZXZpY2Ugc3RyaW5nczogTWZyPTMsIFByb2R1Y3Q9MiwgU2VyaWFsTnVtYmVyPTEKWyAg
ICAzLjIwMzc4MV0gdXNiIHVzYjQ6IFByb2R1Y3Q6IHhIQ0kgSG9zdCBDb250cm9sbGVyClsgICAg
My4yMDM3ODJdIHVzYiB1c2I0OiBNYW51ZmFjdHVyZXI6IExpbnV4IDUuMTAuMC0xLWFtZDY0IHho
Y2ktaGNkClsgICAgMy4yMDM3ODNdIHVzYiB1c2I0OiBTZXJpYWxOdW1iZXI6IDAwMDA6MGM6MDAu
MApbICAgIDMuMjAzOTc4XSBodWIgNC0wOjEuMDogVVNCIGh1YiBmb3VuZApbICAgIDMuMjA0MDA4
XSBodWIgNC0wOjEuMDogNCBwb3J0cyBkZXRlY3RlZApbICAgIDMuMjA5OTk2XSBzY3NpIGhvc3Qw
OiBhaGNpClsgICAgMy4yMTAzNzldIHNjc2kgaG9zdDE6IGFoY2kKWyAgICAzLjIxMDY4M10gc2Nz
aSBob3N0MjogYWhjaQpbICAgIDMuMjExMTkxXSBzY3NpIGhvc3QzOiBhaGNpClsgICAgMy4yMTM0
MjldIHNjc2kgaG9zdDQ6IGFoY2kKWyAgICAzLjIxNDQ3NF0gc2NzaSBob3N0NTogYWhjaQpbICAg
IDMuMjE0NTU2XSBhdGExOiBTQVRBIG1heCBVRE1BLzEzMyBhYmFyIG0yMDQ4QDB4ZmIzMDAwMDAg
cG9ydCAweGZiMzAwMTAwIGlycSA5MApbICAgIDMuMjE0NTU5XSBhdGEyOiBTQVRBIG1heCBVRE1B
LzEzMyBhYmFyIG0yMDQ4QDB4ZmIzMDAwMDAgcG9ydCAweGZiMzAwMTgwIGlycSA5MApbICAgIDMu
MjE0NTYxXSBhdGEzOiBEVU1NWQpbICAgIDMuMjE0NTYyXSBhdGE0OiBEVU1NWQpbICAgIDMuMjE0
NTYzXSBhdGE1OiBEVU1NWQpbICAgIDMuMjE0NTY0XSBhdGE2OiBEVU1NWQpbICAgIDMuMjMxOTkz
XSByODE2OSAwMDAwOjBkOjAwLjAgZW5wMTNzMDogcmVuYW1lZCBmcm9tIGV0aDAKWyAgICAzLjUx
NzM3Nl0gdXNiIDEtMTogbmV3IGhpZ2gtc3BlZWQgVVNCIGRldmljZSBudW1iZXIgMiB1c2luZyBl
aGNpLXBjaQpbICAgIDMuNTI4Nzg0XSBhdGExOiBTQVRBIGxpbmsgdXAgNi4wIEdicHMgKFNTdGF0
dXMgMTMzIFNDb250cm9sIDMwMCkKWyAgICAzLjUyODgwOV0gYXRhMjogU0FUQSBsaW5rIHVwIDMu
MCBHYnBzIChTU3RhdHVzIDEyMyBTQ29udHJvbCAzMDApClsgICAgMy41MjkxOTldIGF0YTEuMDA6
IEFUQS0xMTogS0lOR1NUT04gU1VWNDAwUzM3MTIwRywgMEMzRkQ2U0QsIG1heCBVRE1BLzEzMwpb
ICAgIDMuNTI5MjAyXSBhdGExLjAwOiAyMzQ0NDE2NDggc2VjdG9ycywgbXVsdGkgMTY6IExCQTQ4
IE5DUSAoZGVwdGggMzIpLCBBQQpbICAgIDMuNTMwMDMyXSBhdGExLjAwOiBjb25maWd1cmVkIGZv
ciBVRE1BLzEzMwpbICAgIDMuNTMwMTkyXSBzY3NpIDA6MDowOjA6IERpcmVjdC1BY2Nlc3MgICAg
IEFUQSAgICAgIEtJTkdTVE9OIFNVVjQwMFMgRDZTRCBQUTogMCBBTlNJOiA1ClsgICAgMy41Mzcz
NzNdIHVzYiAzLTE6IG5ldyBoaWdoLXNwZWVkIFVTQiBkZXZpY2UgbnVtYmVyIDIgdXNpbmcgeGhj
aV9oY2QKWyAgICAzLjUzNzM4Nl0gdXNiIDItMTogbmV3IGhpZ2gtc3BlZWQgVVNCIGRldmljZSBu
dW1iZXIgMiB1c2luZyBlaGNpLXBjaQpbICAgIDMuNTQ1NTIyXSBhdGEyLjAwOiBBVEEtODogS0lO
R1NUT04gU1YzMDBTMzdBMTIwRywgNTI1QUJCRjAsIG1heCBVRE1BLzEzMwpbICAgIDMuNTQ1NTI0
XSBhdGEyLjAwOiAyMzQ0NDE2NDggc2VjdG9ycywgbXVsdGkgMTY6IExCQTQ4IE5DUSAoZGVwdGgg
MzIpLCBBQQpbICAgIDMuNTY2NjM0XSBhdGEyLjAwOiBjb25maWd1cmVkIGZvciBVRE1BLzEzMwpb
ICAgIDMuNTY2NzY0XSBzY3NpIDE6MDowOjA6IERpcmVjdC1BY2Nlc3MgICAgIEFUQSAgICAgIEtJ
TkdTVE9OIFNWMzAwUzMgQkJGMCBQUTogMCBBTlNJOiA1ClsgICAgMy41NzcxOTldIHNkIDE6MDow
OjA6IFtzZGJdIDIzNDQ0MTY0OCA1MTItYnl0ZSBsb2dpY2FsIGJsb2NrczogKDEyMCBHQi8xMTIg
R2lCKQpbICAgIDMuNTc3MjE3XSBzZCAwOjA6MDowOiBbc2RhXSAyMzQ0NDE2NDggNTEyLWJ5dGUg
bG9naWNhbCBibG9ja3M6ICgxMjAgR0IvMTEyIEdpQikKWyAgICAzLjU3NzIyMF0gc2QgMDowOjA6
MDogW3NkYV0gNDA5Ni1ieXRlIHBoeXNpY2FsIGJsb2NrcwpbICAgIDMuNTc3MjIzXSBzZCAxOjA6
MDowOiBbc2RiXSBXcml0ZSBQcm90ZWN0IGlzIG9mZgpbICAgIDMuNTc3MjI3XSBzZCAxOjA6MDow
OiBbc2RiXSBNb2RlIFNlbnNlOiAwMCAzYSAwMCAwMApbICAgIDMuNTc3MjQzXSBzZCAwOjA6MDow
OiBbc2RhXSBXcml0ZSBQcm90ZWN0IGlzIG9mZgpbICAgIDMuNTc3MjQ1XSBzZCAwOjA6MDowOiBb
c2RhXSBNb2RlIFNlbnNlOiAwMCAzYSAwMCAwMApbICAgIDMuNTc3MjY4XSBzZCAxOjA6MDowOiBb
c2RiXSBXcml0ZSBjYWNoZTogZW5hYmxlZCwgcmVhZCBjYWNoZTogZW5hYmxlZCwgZG9lc24ndCBz
dXBwb3J0IERQTyBvciBGVUEKWyAgICAzLjU3NzI4Nl0gc2QgMDowOjA6MDogW3NkYV0gV3JpdGUg
Y2FjaGU6IGVuYWJsZWQsIHJlYWQgY2FjaGU6IGVuYWJsZWQsIGRvZXNuJ3Qgc3VwcG9ydCBEUE8g
b3IgRlVBClsgICAgMy41OTQ4MDVdIHNkIDE6MDowOjA6IFtzZGJdIEF0dGFjaGVkIFNDU0kgZGlz
awpbICAgIDMuNjAyMDA2XSAgc2RhOiBzZGExIHNkYTIKWyAgICAzLjYwMjY5MF0gc2QgMDowOjA6
MDogW3NkYV0gQXR0YWNoZWQgU0NTSSBkaXNrClsgICAgMy42NjU0NzZdIGRldmljZS1tYXBwZXI6
IHVldmVudDogdmVyc2lvbiAxLjAuMwpbICAgIDMuNjY1NjAxXSBkZXZpY2UtbWFwcGVyOiBpb2N0
bDogNC40My4wLWlvY3RsICgyMDIwLTEwLTAxKSBpbml0aWFsaXNlZDogZG0tZGV2ZWxAcmVkaGF0
LmNvbQpbICAgIDMuNjczNjU4XSB1c2IgMS0xOiBOZXcgVVNCIGRldmljZSBmb3VuZCwgaWRWZW5k
b3I9ODA4NywgaWRQcm9kdWN0PTAwMjQsIGJjZERldmljZT0gMC4wMApbICAgIDMuNjczNjYxXSB1
c2IgMS0xOiBOZXcgVVNCIGRldmljZSBzdHJpbmdzOiBNZnI9MCwgUHJvZHVjdD0wLCBTZXJpYWxO
dW1iZXI9MApbICAgIDMuNjc0MDA1XSBodWIgMS0xOjEuMDogVVNCIGh1YiBmb3VuZApbICAgIDMu
Njc0MTc5XSBodWIgMS0xOjEuMDogNiBwb3J0cyBkZXRlY3RlZApbICAgIDMuNjkwNjU3XSB1c2Ig
My0xOiBOZXcgVVNCIGRldmljZSBmb3VuZCwgaWRWZW5kb3I9MjEwOSwgaWRQcm9kdWN0PTM0MzEs
IGJjZERldmljZT0gNC4yMApbICAgIDMuNjkwNjYwXSB1c2IgMy0xOiBOZXcgVVNCIGRldmljZSBz
dHJpbmdzOiBNZnI9MCwgUHJvZHVjdD0xLCBTZXJpYWxOdW1iZXI9MApbICAgIDMuNjkwNjYxXSB1
c2IgMy0xOiBQcm9kdWN0OiBVU0IyLjAgSHViClsgICAgMy42OTE2NDVdIGh1YiAzLTE6MS4wOiBV
U0IgaHViIGZvdW5kClsgICAgMy42OTE5ODldIGh1YiAzLTE6MS4wOiA0IHBvcnRzIGRldGVjdGVk
ClsgICAgMy42OTM2NTRdIHVzYiAyLTE6IE5ldyBVU0IgZGV2aWNlIGZvdW5kLCBpZFZlbmRvcj04
MDg3LCBpZFByb2R1Y3Q9MDAyNCwgYmNkRGV2aWNlPSAwLjAwClsgICAgMy42OTM2NTddIHVzYiAy
LTE6IE5ldyBVU0IgZGV2aWNlIHN0cmluZ3M6IE1mcj0wLCBQcm9kdWN0PTAsIFNlcmlhbE51bWJl
cj0wClsgICAgMy42OTQxNTddIGh1YiAyLTE6MS4wOiBVU0IgaHViIGZvdW5kClsgICAgMy42OTQy
ODNdIGh1YiAyLTE6MS4wOiA4IHBvcnRzIGRldGVjdGVkClsgICAgMy44MTE0MzBdIFBNOiBJbWFn
ZSBub3QgZm91bmQgKGNvZGUgLTIyKQpbICAgIDMuOTY5MzMyXSB1c2IgMS0xLjI6IG5ldyBsb3ct
c3BlZWQgVVNCIGRldmljZSBudW1iZXIgMyB1c2luZyBlaGNpLXBjaQpbICAgIDMuOTcwMTM0XSBF
WFQ0LWZzIChzZGEyKTogbW91bnRlZCBmaWxlc3lzdGVtIHdpdGggb3JkZXJlZCBkYXRhIG1vZGUu
IE9wdHM6IChudWxsKQpbICAgIDMuOTg5MzUwXSB1c2IgMi0xLjQ6IG5ldyBoaWdoLXNwZWVkIFVT
QiBkZXZpY2UgbnVtYmVyIDMgdXNpbmcgZWhjaS1wY2kKWyAgICA0LjA4NDE5M10gdXNiIDEtMS4y
OiBOZXcgVVNCIGRldmljZSBmb3VuZCwgaWRWZW5kb3I9MDk5YSwgaWRQcm9kdWN0PTYxMGMsIGJj
ZERldmljZT0gMC4wMQpbICAgIDQuMDg0MTk2XSB1c2IgMS0xLjI6IE5ldyBVU0IgZGV2aWNlIHN0
cmluZ3M6IE1mcj0xLCBQcm9kdWN0PTIsIFNlcmlhbE51bWJlcj0wClsgICAgNC4wODQxOThdIHVz
YiAxLTEuMjogUHJvZHVjdDogVVNCIE11bHRpbWVkaWEgS2V5Ym9hcmQgClsgICAgNC4wODQxOTld
IHVzYiAxLTEuMjogTWFudWZhY3R1cmVyOiAgClsgICAgNC4xMDk1MDRdIHVzYiAyLTEuNDogTmV3
IFVTQiBkZXZpY2UgZm91bmQsIGlkVmVuZG9yPTE0OGYsIGlkUHJvZHVjdD03NjAxLCBiY2REZXZp
Y2U9IDAuMDAKWyAgICA0LjEwOTUwN10gdXNiIDItMS40OiBOZXcgVVNCIGRldmljZSBzdHJpbmdz
OiBNZnI9MSwgUHJvZHVjdD0yLCBTZXJpYWxOdW1iZXI9MwpbICAgIDQuMTA5NTA4XSB1c2IgMi0x
LjQ6IFByb2R1Y3Q6IDgwMi4xMSBuIFdMQU4KWyAgICA0LjEwOTUwOV0gdXNiIDItMS40OiBNYW51
ZmFjdHVyZXI6IE1lZGlhVGVrClsgICAgNC4xMDk1MTBdIHVzYiAyLTEuNDogU2VyaWFsTnVtYmVy
OiAxLjAKWyAgICA0LjIwOTA2OF0gTm90IGFjdGl2YXRpbmcgTWFuZGF0b3J5IEFjY2VzcyBDb250
cm9sIGFzIC9zYmluL3RvbW95by1pbml0IGRvZXMgbm90IGV4aXN0LgpbICAgIDUuNDU5NDMzXSBz
eXN0ZW1kWzFdOiBJbnNlcnRlZCBtb2R1bGUgJ2F1dG9mczQnClsgICAgNS41MDUzMzVdIHN5c3Rl
bWRbMV06IHN5c3RlbWQgMjQ3LjItNSBydW5uaW5nIGluIHN5c3RlbSBtb2RlLiAoK1BBTSArQVVE
SVQgK1NFTElOVVggK0lNQSArQVBQQVJNT1IgK1NNQUNLICtTWVNWSU5JVCArVVRNUCArTElCQ1JZ
UFRTRVRVUCArR0NSWVBUICtHTlVUTFMgK0FDTCArWFogK0xaNCArWlNURCArU0VDQ09NUCArQkxL
SUQgK0VMRlVUSUxTICtLTU9EICtJRE4yIC1JRE4gK1BDUkUyIGRlZmF1bHQtaGllcmFyY2h5PXVu
aWZpZWQpClsgICAgNS41MDU2MThdIHN5c3RlbWRbMV06IERldGVjdGVkIGFyY2hpdGVjdHVyZSB4
ODYtNjQuClsgICAgNS41MDgyMTVdIHN5c3RlbWRbMV06IFNldCBob3N0bmFtZSB0byA8ZGViaWFu
Pi4KWyAgICA1LjU3NjI1N10gc3lzdGVtZC1zeXN2LWdlbmVyYXRvclsyMjNdOiBTeXNWIHNlcnZp
Y2UgJy9ldGMvaW5pdC5kL3hlbicgbGFja3MgYSBuYXRpdmUgc3lzdGVtZCB1bml0IGZpbGUuIEF1
dG9tYXRpY2FsbHkgZ2VuZXJhdGluZyBhIHVuaXQgZmlsZSBmb3IgY29tcGF0aWJpbGl0eS4gUGxl
YXNlIHVwZGF0ZSBwYWNrYWdlIHRvIGluY2x1ZGUgYSBuYXRpdmUgc3lzdGVtZCB1bml0IGZpbGUs
IGluIG9yZGVyIHRvIG1ha2UgaXQgbW9yZSBzYWZlIGFuZCByb2J1c3QuClsgICAgNS41OTM1MjBd
IHN5c3RlbWQtc3lzdi1nZW5lcmF0b3JbMjIzXTogU3lzViBzZXJ2aWNlICcvZXRjL2luaXQuZC9l
eGltNCcgbGFja3MgYSBuYXRpdmUgc3lzdGVtZCB1bml0IGZpbGUuIEF1dG9tYXRpY2FsbHkgZ2Vu
ZXJhdGluZyBhIHVuaXQgZmlsZSBmb3IgY29tcGF0aWJpbGl0eS4gUGxlYXNlIHVwZGF0ZSBwYWNr
YWdlIHRvIGluY2x1ZGUgYSBuYXRpdmUgc3lzdGVtZCB1bml0IGZpbGUsIGluIG9yZGVyIHRvIG1h
a2UgaXQgbW9yZSBzYWZlIGFuZCByb2J1c3QuClsgICAgNS41OTM2NTBdIHN5c3RlbWQtc3lzdi1n
ZW5lcmF0b3JbMjIzXTogU3lzViBzZXJ2aWNlICcvZXRjL2luaXQuZC94ZW5jb21tb25zJyBsYWNr
cyBhIG5hdGl2ZSBzeXN0ZW1kIHVuaXQgZmlsZS4gQXV0b21hdGljYWxseSBnZW5lcmF0aW5nIGEg
dW5pdCBmaWxlIGZvciBjb21wYXRpYmlsaXR5LiBQbGVhc2UgdXBkYXRlIHBhY2thZ2UgdG8gaW5j
bHVkZSBhIG5hdGl2ZSBzeXN0ZW1kIHVuaXQgZmlsZSwgaW4gb3JkZXIgdG8gbWFrZSBpdCBtb3Jl
IHNhZmUgYW5kIHJvYnVzdC4KWyAgICA1LjU5NjgwOF0gc3lzdGVtZC1zeXN2LWdlbmVyYXRvclsy
MjNdOiBTeXNWIHNlcnZpY2UgJy9ldGMvaW5pdC5kL2lzYy1kaGNwLXNlcnZlcicgbGFja3MgYSBu
YXRpdmUgc3lzdGVtZCB1bml0IGZpbGUuIEF1dG9tYXRpY2FsbHkgZ2VuZXJhdGluZyBhIHVuaXQg
ZmlsZSBmb3IgY29tcGF0aWJpbGl0eS4gUGxlYXNlIHVwZGF0ZSBwYWNrYWdlIHRvIGluY2x1ZGUg
YSBuYXRpdmUgc3lzdGVtZCB1bml0IGZpbGUsIGluIG9yZGVyIHRvIG1ha2UgaXQgbW9yZSBzYWZl
IGFuZCByb2J1c3QuClsgICAgNS42NTY3OTRdIHN5c3RlbWRbMV06IC9saWIvc3lzdGVtZC9zeXN0
ZW0vdmlydGxvZ2Quc29ja2V0OjY6IExpc3RlblN0cmVhbT0gcmVmZXJlbmNlcyBhIHBhdGggYmVs
b3cgbGVnYWN5IGRpcmVjdG9yeSAvdmFyL3J1bi8sIHVwZGF0aW5nIC92YXIvcnVuL2xpYnZpcnQv
dmlydGxvZ2Qtc29jayDihpIgL3J1bi9saWJ2aXJ0L3ZpcnRsb2dkLXNvY2s7IHBsZWFzZSB1cGRh
dGUgdGhlIHVuaXQgZmlsZSBhY2NvcmRpbmdseS4KWyAgICA1LjY2NTc0N10gc3lzdGVtZFsxXTog
L2xpYi9zeXN0ZW1kL3N5c3RlbS92aXJ0bG9nZC1hZG1pbi5zb2NrZXQ6NjogTGlzdGVuU3RyZWFt
PSByZWZlcmVuY2VzIGEgcGF0aCBiZWxvdyBsZWdhY3kgZGlyZWN0b3J5IC92YXIvcnVuLywgdXBk
YXRpbmcgL3Zhci9ydW4vbGlidmlydC92aXJ0bG9nZC1hZG1pbi1zb2NrIOKGkiAvcnVuL2xpYnZp
cnQvdmlydGxvZ2QtYWRtaW4tc29jazsgcGxlYXNlIHVwZGF0ZSB0aGUgdW5pdCBmaWxlIGFjY29y
ZGluZ2x5LgpbICAgIDUuNjY2MzI2XSBzeXN0ZW1kWzFdOiAvbGliL3N5c3RlbWQvc3lzdGVtL3Zp
cnRsb2NrZC5zb2NrZXQ6NjogTGlzdGVuU3RyZWFtPSByZWZlcmVuY2VzIGEgcGF0aCBiZWxvdyBs
ZWdhY3kgZGlyZWN0b3J5IC92YXIvcnVuLywgdXBkYXRpbmcgL3Zhci9ydW4vbGlidmlydC92aXJ0
bG9ja2Qtc29jayDihpIgL3J1bi9saWJ2aXJ0L3ZpcnRsb2NrZC1zb2NrOyBwbGVhc2UgdXBkYXRl
IHRoZSB1bml0IGZpbGUgYWNjb3JkaW5nbHkuClsgICAgNS42NjczNTldIHN5c3RlbWRbMV06IC9s
aWIvc3lzdGVtZC9zeXN0ZW0vdmlydGxvY2tkLWFkbWluLnNvY2tldDo2OiBMaXN0ZW5TdHJlYW09
IHJlZmVyZW5jZXMgYSBwYXRoIGJlbG93IGxlZ2FjeSBkaXJlY3RvcnkgL3Zhci9ydW4vLCB1cGRh
dGluZyAvdmFyL3J1bi9saWJ2aXJ0L3ZpcnRsb2NrZC1hZG1pbi1zb2NrIOKGkiAvcnVuL2xpYnZp
cnQvdmlydGxvY2tkLWFkbWluLXNvY2s7IHBsZWFzZSB1cGRhdGUgdGhlIHVuaXQgZmlsZSBhY2Nv
cmRpbmdseS4KWyAgICA1LjcxODk5MF0gc3lzdGVtZFsxXTogUXVldWVkIHN0YXJ0IGpvYiBmb3Ig
ZGVmYXVsdCB0YXJnZXQgR3JhcGhpY2FsIEludGVyZmFjZS4KWyAgICA1LjcyMDk0MF0gc3lzdGVt
ZFsxXTogQ3JlYXRlZCBzbGljZSBWaXJ0dWFsIE1hY2hpbmUgYW5kIENvbnRhaW5lciBTbGljZS4K
WyAgICA1LjcyMjUyN10gc3lzdGVtZFsxXTogQ3JlYXRlZCBzbGljZSBzeXN0ZW0tZ2V0dHkuc2xp
Y2UuClsgICAgNS43MjM0NThdIHN5c3RlbWRbMV06IENyZWF0ZWQgc2xpY2Ugc3lzdGVtLW1vZHBy
b2JlLnNsaWNlLgpbICAgIDUuNzI0MzUwXSBzeXN0ZW1kWzFdOiBDcmVhdGVkIHNsaWNlIHN5c3Rl
bS1zZXJpYWxceDJkZ2V0dHkuc2xpY2UuClsgICAgNS43MjUyMDldIHN5c3RlbWRbMV06IENyZWF0
ZWQgc2xpY2UgVXNlciBhbmQgU2Vzc2lvbiBTbGljZS4KWyAgICA1LjcyNTg1OF0gc3lzdGVtZFsx
XTogU3RhcnRlZCBEaXNwYXRjaCBQYXNzd29yZCBSZXF1ZXN0cyB0byBDb25zb2xlIERpcmVjdG9y
eSBXYXRjaC4KWyAgICA1LjcyNjQ2M10gc3lzdGVtZFsxXTogU3RhcnRlZCBGb3J3YXJkIFBhc3N3
b3JkIFJlcXVlc3RzIHRvIFdhbGwgRGlyZWN0b3J5IFdhdGNoLgpbICAgIDUuNzI3MjI5XSBzeXN0
ZW1kWzFdOiBTZXQgdXAgYXV0b21vdW50IEFyYml0cmFyeSBFeGVjdXRhYmxlIEZpbGUgRm9ybWF0
cyBGaWxlIFN5c3RlbSBBdXRvbW91bnQgUG9pbnQuClsgICAgNS43Mjc3MjRdIHN5c3RlbWRbMV06
IFJlYWNoZWQgdGFyZ2V0IExvY2FsIEVuY3J5cHRlZCBWb2x1bWVzLgpbICAgIDUuNzI4MjE4XSBz
eXN0ZW1kWzFdOiBSZWFjaGVkIHRhcmdldCBQYXRocy4KWyAgICA1LjcyODY4OF0gc3lzdGVtZFsx
XTogUmVhY2hlZCB0YXJnZXQgUmVtb3RlIEZpbGUgU3lzdGVtcy4KWyAgICA1LjcyOTE1OV0gc3lz
dGVtZFsxXTogUmVhY2hlZCB0YXJnZXQgU2xpY2VzLgpbICAgIDUuNzI5Njc0XSBzeXN0ZW1kWzFd
OiBSZWFjaGVkIHRhcmdldCBMaWJ2aXJ0IGd1ZXN0cyBzaHV0ZG93bi4KWyAgICA1LjczMDI2OV0g
c3lzdGVtZFsxXTogTGlzdGVuaW5nIG9uIERldmljZS1tYXBwZXIgZXZlbnQgZGFlbW9uIEZJRk9z
LgpbICAgIDUuNzMwOTAwXSBzeXN0ZW1kWzFdOiBMaXN0ZW5pbmcgb24gTFZNMiBwb2xsIGRhZW1v
biBzb2NrZXQuClsgICAgNS43MzI0NTJdIHN5c3RlbWRbMV06IExpc3RlbmluZyBvbiBTeXNsb2cg
U29ja2V0LgpbICAgIDUuNzMzMDk2XSBzeXN0ZW1kWzFdOiBMaXN0ZW5pbmcgb24gZnNjayB0byBm
c2NrZCBjb21tdW5pY2F0aW9uIFNvY2tldC4KWyAgICA1LjczMzY4N10gc3lzdGVtZFsxXTogTGlz
dGVuaW5nIG9uIGluaXRjdGwgQ29tcGF0aWJpbGl0eSBOYW1lZCBQaXBlLgpbICAgIDUuNzM0NTI4
XSBzeXN0ZW1kWzFdOiBMaXN0ZW5pbmcgb24gSm91cm5hbCBBdWRpdCBTb2NrZXQuClsgICAgNS43
MzUyMDJdIHN5c3RlbWRbMV06IExpc3RlbmluZyBvbiBKb3VybmFsIFNvY2tldCAoL2Rldi9sb2cp
LgpbICAgIDUuNzM1OTM3XSBzeXN0ZW1kWzFdOiBMaXN0ZW5pbmcgb24gSm91cm5hbCBTb2NrZXQu
ClsgICAgNS43MzY2NTldIHN5c3RlbWRbMV06IExpc3RlbmluZyBvbiB1ZGV2IENvbnRyb2wgU29j
a2V0LgpbICAgIDUuNzM3MzI4XSBzeXN0ZW1kWzFdOiBMaXN0ZW5pbmcgb24gdWRldiBLZXJuZWwg
U29ja2V0LgpbICAgIDUuNzM4MTMxXSBzeXN0ZW1kWzFdOiBDb25kaXRpb24gY2hlY2sgcmVzdWx0
ZWQgaW4gSHVnZSBQYWdlcyBGaWxlIFN5c3RlbSBiZWluZyBza2lwcGVkLgpbICAgIDUuNzQwODI1
XSBzeXN0ZW1kWzFdOiBNb3VudGluZyBQT1NJWCBNZXNzYWdlIFF1ZXVlIEZpbGUgU3lzdGVtLi4u
ClsgICAgNS43NDQxNjNdIHN5c3RlbWRbMV06IE1vdW50aW5nIEtlcm5lbCBEZWJ1ZyBGaWxlIFN5
c3RlbS4uLgpbICAgIDUuNzQ3NDU4XSBzeXN0ZW1kWzFdOiBNb3VudGluZyBLZXJuZWwgVHJhY2Ug
RmlsZSBTeXN0ZW0uLi4KWyAgICA1Ljc0ODUxN10gc3lzdGVtZFsxXTogRmluaXNoZWQgQXZhaWxh
YmlsaXR5IG9mIGJsb2NrIGRldmljZXMuClsgICAgNS43NTI1NTFdIHN5c3RlbWRbMV06IFN0YXJ0
aW5nIFNldCB0aGUgY29uc29sZSBrZXlib2FyZCBsYXlvdXQuLi4KWyAgICA1Ljc1NjI3MV0gc3lz
dGVtZFsxXTogU3RhcnRpbmcgQ3JlYXRlIGxpc3Qgb2Ygc3RhdGljIGRldmljZSBub2RlcyBmb3Ig
dGhlIGN1cnJlbnQga2VybmVsLi4uClsgICAgNS43NTkzNjJdIHN5c3RlbWRbMV06IFN0YXJ0aW5n
IE1vbml0b3Jpbmcgb2YgTFZNMiBtaXJyb3JzLCBzbmFwc2hvdHMgZXRjLiB1c2luZyBkbWV2ZW50
ZCBvciBwcm9ncmVzcyBwb2xsaW5nLi4uClsgICAgNS43NjMyMjJdIHN5c3RlbWRbMV06IFN0YXJ0
aW5nIExvYWQgS2VybmVsIE1vZHVsZSBjb25maWdmcy4uLgpbICAgIDUuNzY2OTQzXSBzeXN0ZW1k
WzFdOiBTdGFydGluZyBMb2FkIEtlcm5lbCBNb2R1bGUgZHJtLi4uClsgICAgNS43NzA3NDhdIHN5
c3RlbWRbMV06IFN0YXJ0aW5nIExvYWQgS2VybmVsIE1vZHVsZSBmdXNlLi4uClsgICAgNS43NzIz
NTNdIHN5c3RlbWRbMV06IENvbmRpdGlvbiBjaGVjayByZXN1bHRlZCBpbiBTZXQgVXAgQWRkaXRp
b25hbCBCaW5hcnkgRm9ybWF0cyBiZWluZyBza2lwcGVkLgpbICAgIDUuNzcyNDkwXSBzeXN0ZW1k
WzFdOiBDb25kaXRpb24gY2hlY2sgcmVzdWx0ZWQgaW4gRmlsZSBTeXN0ZW0gQ2hlY2sgb24gUm9v
dCBEZXZpY2UgYmVpbmcgc2tpcHBlZC4KWyAgICA1Ljc3ODk0Nl0gc3lzdGVtZFsxXTogU3RhcnRp
bmcgSm91cm5hbCBTZXJ2aWNlLi4uClsgICAgNS43ODQyNDldIHN5c3RlbWRbMV06IFN0YXJ0aW5n
IExvYWQgS2VybmVsIE1vZHVsZXMuLi4KWyAgICA1Ljc4ODc5Ml0gc3lzdGVtZFsxXTogU3RhcnRp
bmcgUmVtb3VudCBSb290IGFuZCBLZXJuZWwgRmlsZSBTeXN0ZW1zLi4uClsgICAgNS43OTMxNTFd
IHN5c3RlbWRbMV06IFN0YXJ0aW5nIENvbGRwbHVnIEFsbCB1ZGV2IERldmljZXMuLi4KWyAgICA1
Ljc5ODY5Nl0gc3lzdGVtZFsxXTogTW91bnRlZCBQT1NJWCBNZXNzYWdlIFF1ZXVlIEZpbGUgU3lz
dGVtLgpbICAgIDUuNzk5Mzc4XSBzeXN0ZW1kWzFdOiBNb3VudGVkIEtlcm5lbCBEZWJ1ZyBGaWxl
IFN5c3RlbS4KWyAgICA1LjgwMDA2M10gc3lzdGVtZFsxXTogTW91bnRlZCBLZXJuZWwgVHJhY2Ug
RmlsZSBTeXN0ZW0uClsgICAgNS44MDExMzldIHN5c3RlbWRbMV06IEZpbmlzaGVkIENyZWF0ZSBs
aXN0IG9mIHN0YXRpYyBkZXZpY2Ugbm9kZXMgZm9yIHRoZSBjdXJyZW50IGtlcm5lbC4KWyAgICA1
LjgwMjI2OF0gc3lzdGVtZFsxXTogbW9kcHJvYmVAY29uZmlnZnMuc2VydmljZTogU3VjY2VlZGVk
LgpbICAgIDUuODAyNzAyXSBzeXN0ZW1kWzFdOiBGaW5pc2hlZCBMb2FkIEtlcm5lbCBNb2R1bGUg
Y29uZmlnZnMuClsgICAgNS44MDU5MzVdIHN5c3RlbWRbMV06IE1vdW50aW5nIEtlcm5lbCBDb25m
aWd1cmF0aW9uIEZpbGUgU3lzdGVtLi4uClsgICAgNS44MTY2MjhdIHN5c3RlbWRbMV06IE1vdW50
ZWQgS2VybmVsIENvbmZpZ3VyYXRpb24gRmlsZSBTeXN0ZW0uClsgICAgNS44NTE4MzBdIGZ1c2U6
IGluaXQgKEFQSSB2ZXJzaW9uIDcuMzIpClsgICAgNS44NTkzNDFdIHN5c3RlbWRbMV06IG1vZHBy
b2JlQGZ1c2Uuc2VydmljZTogU3VjY2VlZGVkLgpbICAgIDUuODU5OTQ4XSBzeXN0ZW1kWzFdOiBG
aW5pc2hlZCBMb2FkIEtlcm5lbCBNb2R1bGUgZnVzZS4KWyAgICA1Ljg2NDIwNl0gc3lzdGVtZFsx
XTogTW91bnRpbmcgRlVTRSBDb250cm9sIEZpbGUgU3lzdGVtLi4uClsgICAgNS44NjkzNDVdIEVY
VDQtZnMgKHNkYTIpOiByZS1tb3VudGVkLiBPcHRzOiBlcnJvcnM9cmVtb3VudC1ybwpbICAgIDUu
ODcyOTU0XSB4ZW46eGVuX2V2dGNobjogRXZlbnQtY2hhbm5lbCBkZXZpY2UgaW5zdGFsbGVkClsg
ICAgNS44Nzc4NThdIHN5c3RlbWRbMV06IEZpbmlzaGVkIFJlbW91bnQgUm9vdCBhbmQgS2VybmVs
IEZpbGUgU3lzdGVtcy4KWyAgICA1Ljg3OTU3OV0gc3lzdGVtZFsxXTogQ29uZGl0aW9uIGNoZWNr
IHJlc3VsdGVkIGluIFJlYnVpbGQgSGFyZHdhcmUgRGF0YWJhc2UgYmVpbmcgc2tpcHBlZC4KWyAg
ICA1Ljg3OTY3MV0gc3lzdGVtZFsxXTogQ29uZGl0aW9uIGNoZWNrIHJlc3VsdGVkIGluIFBsYXRm
b3JtIFBlcnNpc3RlbnQgU3RvcmFnZSBBcmNoaXZhbCBiZWluZyBza2lwcGVkLgpbICAgIDUuODgy
OTk5XSBzeXN0ZW1kWzFdOiBTdGFydGluZyBMb2FkL1NhdmUgUmFuZG9tIFNlZWQuLi4KWyAgICA1
Ljg5MTc3Ml0gc3lzdGVtZFsxXTogU3RhcnRpbmcgQ3JlYXRlIFN5c3RlbSBVc2Vycy4uLgpbICAg
IDUuODk0MDAyXSBzeXN0ZW1kWzFdOiBGaW5pc2hlZCBNb25pdG9yaW5nIG9mIExWTTIgbWlycm9y
cywgc25hcHNob3RzIGV0Yy4gdXNpbmcgZG1ldmVudGQgb3IgcHJvZ3Jlc3MgcG9sbGluZy4KWyAg
ICA1Ljg5NTQ5MV0gc3lzdGVtZFsxXTogbW9kcHJvYmVAZHJtLnNlcnZpY2U6IFN1Y2NlZWRlZC4K
WyAgICA1Ljg5NjE5MF0gc3lzdGVtZFsxXTogRmluaXNoZWQgTG9hZCBLZXJuZWwgTW9kdWxlIGRy
bS4KWyAgICA1LjkwNjE4MF0gc3lzdGVtZFsxXTogTW91bnRlZCBGVVNFIENvbnRyb2wgRmlsZSBT
eXN0ZW0uClsgICAgNS45MTczNDRdIHhlbl9wY2liYWNrOiBiYWNrZW5kIGlzIHZwY2kKWyAgICA1
Ljk0MTQ5Nl0gc3lzdGVtZFsxXTogRmluaXNoZWQgQ3JlYXRlIFN5c3RlbSBVc2Vycy4KWyAgICA1
Ljk0NDkzNF0gc3lzdGVtZFsxXTogU3RhcnRpbmcgQ3JlYXRlIFN0YXRpYyBEZXZpY2UgTm9kZXMg
aW4gL2Rldi4uLgpbICAgIDUuOTUwODAwXSBzeXN0ZW1kWzFdOiBGaW5pc2hlZCBMb2FkL1NhdmUg
UmFuZG9tIFNlZWQuClsgICAgNS45NTE3MjRdIHN5c3RlbWRbMV06IENvbmRpdGlvbiBjaGVjayBy
ZXN1bHRlZCBpbiBGaXJzdCBCb290IENvbXBsZXRlIGJlaW5nIHNraXBwZWQuClsgICAgNS45ODQ0
NzldIHN5c3RlbWRbMV06IEZpbmlzaGVkIENyZWF0ZSBTdGF0aWMgRGV2aWNlIE5vZGVzIGluIC9k
ZXYuClsgICAgNS45ODk1OTNdIHN5c3RlbWRbMV06IFN0YXJ0aW5nIFJ1bGUtYmFzZWQgTWFuYWdl
ciBmb3IgRGV2aWNlIEV2ZW50cyBhbmQgRmlsZXMuLi4KWyAgICA2LjAwMTA5OV0gc3lzdGVtZFsx
XTogRmluaXNoZWQgU2V0IHRoZSBjb25zb2xlIGtleWJvYXJkIGxheW91dC4KWyAgICA2LjAwMjk3
N10gc3lzdGVtZFsxXTogRmluaXNoZWQgTG9hZCBLZXJuZWwgTW9kdWxlcy4KWyAgICA2LjAwNDAw
M10gc3lzdGVtZFsxXTogUmVhY2hlZCB0YXJnZXQgTG9jYWwgRmlsZSBTeXN0ZW1zIChQcmUpLgpb
ICAgIDYuMDA0NjcyXSBzeXN0ZW1kWzFdOiBDb25kaXRpb24gY2hlY2sgcmVzdWx0ZWQgaW4gVmly
dHVhbCBNYWNoaW5lIGFuZCBDb250YWluZXIgU3RvcmFnZSAoQ29tcGF0aWJpbGl0eSkgYmVpbmcg
c2tpcHBlZC4KWyAgICA2LjAwNDcyNF0gc3lzdGVtZFsxXTogUmVhY2hlZCB0YXJnZXQgTG9jYWwg
RmlsZSBTeXN0ZW1zLgpbICAgIDYuMDA1MzQzXSBzeXN0ZW1kWzFdOiBSZWFjaGVkIHRhcmdldCBD
b250YWluZXJzLgpbICAgIDYuMDA4ODM0XSBzeXN0ZW1kWzFdOiBTdGFydGluZyBMb2FkIEFwcEFy
bW9yIHByb2ZpbGVzLi4uClsgICAgNi4wMTI0NzhdIHN5c3RlbWRbMV06IFN0YXJ0aW5nIFNldCBj
b25zb2xlIGZvbnQgYW5kIGtleW1hcC4uLgpbICAgIDYuMDEzMjMyXSBzeXN0ZW1kWzFdOiBDb25k
aXRpb24gY2hlY2sgcmVzdWx0ZWQgaW4gTWFyayB0aGUgbmVlZCB0byByZWxhYmVsIGFmdGVyIHJl
Ym9vdCBiZWluZyBza2lwcGVkLgpbICAgIDYuMDEzNDQ3XSBzeXN0ZW1kWzFdOiBDb25kaXRpb24g
Y2hlY2sgcmVzdWx0ZWQgaW4gU3RvcmUgYSBTeXN0ZW0gVG9rZW4gaW4gYW4gRUZJIFZhcmlhYmxl
IGJlaW5nIHNraXBwZWQuClsgICAgNi4wMTM1OTFdIHN5c3RlbWRbMV06IENvbmRpdGlvbiBjaGVj
ayByZXN1bHRlZCBpbiBDb21taXQgYSB0cmFuc2llbnQgbWFjaGluZS1pZCBvbiBkaXNrIGJlaW5n
IHNraXBwZWQuClsgICAgNi4wMTg0ODZdIHN5c3RlbWRbMV06IFN0YXJ0aW5nIEFwcGx5IEtlcm5l
bCBWYXJpYWJsZXMuLi4KWyAgICA2LjA1MTAyMV0gc3lzdGVtZFsxXTogRmluaXNoZWQgQXBwbHkg
S2VybmVsIFZhcmlhYmxlcy4KWyAgICA2LjA1ODk5Nl0gc3lzdGVtZFsxXTogRmluaXNoZWQgU2V0
IGNvbnNvbGUgZm9udCBhbmQga2V5bWFwLgpbICAgIDYuMDY2MzcwXSBzeXN0ZW1kWzFdOiBTdGFy
dGVkIFJ1bGUtYmFzZWQgTWFuYWdlciBmb3IgRGV2aWNlIEV2ZW50cyBhbmQgRmlsZXMuClsgICAg
Ni4xMjU0NDBdIHN5c3RlbWRbMV06IFN0YXJ0ZWQgSm91cm5hbCBTZXJ2aWNlLgpbICAgIDYuMTYx
Nzk1XSBhdWRpdDogdHlwZT0xNDAwIGF1ZGl0KDE2MTIxOTA1MzYuMzkyOjIpOiBhcHBhcm1vcj0i
U1RBVFVTIiBvcGVyYXRpb249InByb2ZpbGVfbG9hZCIgcHJvZmlsZT0idW5jb25maW5lZCIgbmFt
ZT0ibHNiX3JlbGVhc2UiIHBpZD0yODIgY29tbT0iYXBwYXJtb3JfcGFyc2VyIgpbICAgIDYuMTY0
NDc3XSBhdWRpdDogdHlwZT0xNDAwIGF1ZGl0KDE2MTIxOTA1MzYuMzk2OjMpOiBhcHBhcm1vcj0i
U1RBVFVTIiBvcGVyYXRpb249InByb2ZpbGVfbG9hZCIgcHJvZmlsZT0idW5jb25maW5lZCIgbmFt
ZT0iL3Vzci9iaW4vbWFuIiBwaWQ9MjgxIGNvbW09ImFwcGFybW9yX3BhcnNlciIKWyAgICA2LjE2
NDQ4M10gYXVkaXQ6IHR5cGU9MTQwMCBhdWRpdCgxNjEyMTkwNTM2LjM5Njo0KTogYXBwYXJtb3I9
IlNUQVRVUyIgb3BlcmF0aW9uPSJwcm9maWxlX2xvYWQiIHByb2ZpbGU9InVuY29uZmluZWQiIG5h
bWU9Im1hbl9maWx0ZXIiIHBpZD0yODEgY29tbT0iYXBwYXJtb3JfcGFyc2VyIgpbICAgIDYuMTY0
NDg3XSBhdWRpdDogdHlwZT0xNDAwIGF1ZGl0KDE2MTIxOTA1MzYuMzk2OjUpOiBhcHBhcm1vcj0i
U1RBVFVTIiBvcGVyYXRpb249InByb2ZpbGVfbG9hZCIgcHJvZmlsZT0idW5jb25maW5lZCIgbmFt
ZT0ibWFuX2dyb2ZmIiBwaWQ9MjgxIGNvbW09ImFwcGFybW9yX3BhcnNlciIKWyAgICA2LjE2NDg5
M10gYXVkaXQ6IHR5cGU9MTQwMCBhdWRpdCgxNjEyMTkwNTM2LjM5Njo2KTogYXBwYXJtb3I9IlNU
QVRVUyIgb3BlcmF0aW9uPSJwcm9maWxlX2xvYWQiIHByb2ZpbGU9InVuY29uZmluZWQiIG5hbWU9
Im52aWRpYV9tb2Rwcm9iZSIgcGlkPTI4MCBjb21tPSJhcHBhcm1vcl9wYXJzZXIiClsgICAgNi4x
NjQ5MDBdIGF1ZGl0OiB0eXBlPTE0MDAgYXVkaXQoMTYxMjE5MDUzNi4zOTY6Nyk6IGFwcGFybW9y
PSJTVEFUVVMiIG9wZXJhdGlvbj0icHJvZmlsZV9sb2FkIiBwcm9maWxlPSJ1bmNvbmZpbmVkIiBu
YW1lPSJudmlkaWFfbW9kcHJvYmUvL2ttb2QiIHBpZD0yODAgY29tbT0iYXBwYXJtb3JfcGFyc2Vy
IgpbICAgIDYuMTY4NzM5XSBhdWRpdDogdHlwZT0xNDAwIGF1ZGl0KDE2MTIxOTA1MzYuNDAwOjgp
OiBhcHBhcm1vcj0iU1RBVFVTIiBvcGVyYXRpb249InByb2ZpbGVfbG9hZCIgcHJvZmlsZT0idW5j
b25maW5lZCIgbmFtZT0iL3Vzci9zYmluL2xpYnZpcnRkIiBwaWQ9Mjc5IGNvbW09ImFwcGFybW9y
X3BhcnNlciIKWyAgICA2LjE2ODc0Nl0gYXVkaXQ6IHR5cGU9MTQwMCBhdWRpdCgxNjEyMTkwNTM2
LjQwMDo5KTogYXBwYXJtb3I9IlNUQVRVUyIgb3BlcmF0aW9uPSJwcm9maWxlX2xvYWQiIHByb2Zp
bGU9InVuY29uZmluZWQiIG5hbWU9Ii91c3Ivc2Jpbi9saWJ2aXJ0ZC8vcWVtdV9icmlkZ2VfaGVs
cGVyIiBwaWQ9Mjc5IGNvbW09ImFwcGFybW9yX3BhcnNlciIKWyAgICA2LjE3NDYyOF0gYXVkaXQ6
IHR5cGU9MTQwMCBhdWRpdCgxNjEyMTkwNTM2LjQwODoxMCk6IGFwcGFybW9yPSJTVEFUVVMiIG9w
ZXJhdGlvbj0icHJvZmlsZV9sb2FkIiBwcm9maWxlPSJ1bmNvbmZpbmVkIiBuYW1lPSJ2aXJ0LWFh
LWhlbHBlciIgcGlkPTI4MyBjb21tPSJhcHBhcm1vcl9wYXJzZXIiClsgICAgNi4yMzE3OTVdIGlu
cHV0OiBQb3dlciBCdXR0b24gYXMgL2RldmljZXMvTE5YU1lTVE06MDAvTE5YU1lCVVM6MDAvUE5Q
MEMwQzowMC9pbnB1dC9pbnB1dDMKWyAgICA2LjI0NTM3Nl0gQUNQSTogUG93ZXIgQnV0dG9uIFtQ
V1JCXQpbICAgIDYuMjQ1NDc4XSBpbnB1dDogUG93ZXIgQnV0dG9uIGFzIC9kZXZpY2VzL0xOWFNZ
U1RNOjAwL0xOWFBXUkJOOjAwL2lucHV0L2lucHV0NApbICAgIDYuMjQ1NTYyXSBBQ1BJOiBQb3dl
ciBCdXR0b24gW1BXUkZdClsgICAgNy43MzgyMTJdIGhpZDogcmF3IEhJRCBldmVudHMgZHJpdmVy
IChDKSBKaXJpIEtvc2luYQpbICAgIDcuNzQyMDA5XSBpVENPX3ZlbmRvcl9zdXBwb3J0OiB2ZW5k
b3Itc3VwcG9ydD0wClsgICAgNy43NDY0MjJdIGlUQ09fd2R0OiBJbnRlbCBUQ08gV2F0Y2hEb2cg
VGltZXIgRHJpdmVyIHYxLjExClsgICAgNy43NDY0NzFdIGlUQ09fd2R0OiBGb3VuZCBhIFBhbnRo
ZXIgUG9pbnQgVENPIGRldmljZSAoVmVyc2lvbj0yLCBUQ09CQVNFPTB4MDQ2MCkKWyAgICA3Ljc0
NjkxOV0gaVRDT193ZHQ6IGluaXRpYWxpemVkLiBoZWFydGJlYXQ9MzAgc2VjIChub3dheW91dD0w
KQpbICAgIDcuNzUyMzk0XSB1c2Jjb3JlOiByZWdpc3RlcmVkIG5ldyBpbnRlcmZhY2UgZHJpdmVy
IHVzYmhpZApbICAgIDcuNzUyMzk2XSB1c2JoaWQ6IFVTQiBISUQgY29yZSBkcml2ZXIKWyAgICA3
Ljc3MTU4Nl0gc2QgMDowOjA6MDogQXR0YWNoZWQgc2NzaSBnZW5lcmljIHNnMCB0eXBlIDAKWyAg
ICA3Ljc3MTY2NV0gc2QgMTowOjA6MDogQXR0YWNoZWQgc2NzaSBnZW5lcmljIHNnMSB0eXBlIDAK
WyAgICA3Ljg2OTk1NF0gY2ZnODAyMTE6IExvYWRpbmcgY29tcGlsZWQtaW4gWC41MDkgY2VydGlm
aWNhdGVzIGZvciByZWd1bGF0b3J5IGRhdGFiYXNlClsgICAgNy44NzAzMjJdIGNmZzgwMjExOiBM
b2FkZWQgWC41MDkgY2VydCAnYmVuaEBkZWJpYW4ub3JnOiA1NzdlMDIxY2I5ODBlMGU4MjA4MjFi
YTdiNTRiNDk2MWI4YjRmYWRmJwpbICAgIDcuODcwNjgyXSBjZmc4MDIxMTogTG9hZGVkIFguNTA5
IGNlcnQgJ3JvbWFpbi5wZXJpZXJAZ21haWwuY29tOiAzYWJiYzZlYzE0NmUwOWQxYjYwMTZhYjlk
NmNmNzFkZDIzM2YwMzI4JwpbICAgIDcuODcxMDI3XSBjZmc4MDIxMTogTG9hZGVkIFguNTA5IGNl
cnQgJ3Nmb3JzaGVlOiAwMGIyOGRkZjQ3YWVmOWNlYTcnClsgICAgNy44NzEzNDVdIHBsYXRmb3Jt
IHJlZ3VsYXRvcnkuMDogZmlybXdhcmU6IGZhaWxlZCB0byBsb2FkIHJlZ3VsYXRvcnkuZGIgKC0y
KQpbICAgIDcuODcxMzQ3XSBmaXJtd2FyZV9jbGFzczogU2VlIGh0dHBzOi8vd2lraS5kZWJpYW4u
b3JnL0Zpcm13YXJlIGZvciBpbmZvcm1hdGlvbiBhYm91dCBtaXNzaW5nIGZpcm13YXJlClsgICAg
Ny44NzEzNDldIHBsYXRmb3JtIHJlZ3VsYXRvcnkuMDogRGlyZWN0IGZpcm13YXJlIGxvYWQgZm9y
IHJlZ3VsYXRvcnkuZGIgZmFpbGVkIHdpdGggZXJyb3IgLTIKWyAgICA3Ljg3MTM1Ml0gY2ZnODAy
MTE6IGZhaWxlZCB0byBsb2FkIHJlZ3VsYXRvcnkuZGIKWyAgICA3LjkyNzkyM10gaW5wdXQ6IFBD
IFNwZWFrZXIgYXMgL2RldmljZXMvcGxhdGZvcm0vcGNzcGtyL2lucHV0L2lucHV0NQpbICAgIDgu
MDYxMzMzXSB1c2IgMi0xLjQ6IHJlc2V0IGhpZ2gtc3BlZWQgVVNCIGRldmljZSBudW1iZXIgMyB1
c2luZyBlaGNpLXBjaQpbICAgIDguMTY0Njg5XSBjcnlwdGQ6IG1heF9jcHVfcWxlbiBzZXQgdG8g
MTAwMApbICAgIDguMTY1NDEwXSBBZGRpbmcgMzkwNTUzMmsgc3dhcCBvbiAvZGV2L3NkYTEuICBQ
cmlvcml0eTotMiBleHRlbnRzOjEgYWNyb3NzOjM5MDU1MzJrIFNTRlMKWyAgICA4LjE3NzU0MF0g
bXQ3NjAxdSAyLTEuNDoxLjA6IEFTSUMgcmV2aXNpb246IDc2MDEwMDAxIE1BQyByZXZpc2lvbjog
NzYwMTA1MDAKWyAgICA4LjE3OTUwNl0gbXQ3NjAxdSAyLTEuNDoxLjA6IGZpcm13YXJlOiBkaXJl
Y3QtbG9hZGluZyBmaXJtd2FyZSBtdDc2MDF1LmJpbgpbICAgIDguMTc5NTE1XSBtdDc2MDF1IDIt
MS40OjEuMDogRmlybXdhcmUgVmVyc2lvbjogMC4xLjAwIEJ1aWxkOiA3NjQwIEJ1aWxkIHRpbWU6
IDIwMTMwMjA1MjE0Nl9fX18KWyAgICA4LjIyMTAzNF0gQVZYIHZlcnNpb24gb2YgZ2NtX2VuYy9k
ZWMgZW5nYWdlZC4KWyAgICA4LjIyMTAzN10gQUVTIENUUiBtb2RlIGJ5OCBvcHRpbWl6YXRpb24g
ZW5hYmxlZApbICAgIDguMjM5MjE3XSBpbnB1dDogICBVU0IgTXVsdGltZWRpYSBLZXlib2FyZCAg
YXMgL2RldmljZXMvcGNpMDAwMDowMC8wMDAwOjAwOjFhLjAvdXNiMS8xLTEvMS0xLjIvMS0xLjI6
MS4wLzAwMDM6MDk5QTo2MTBDLjAwMDEvaW5wdXQvaW5wdXQ2ClsgICAgOC4yOTk0MzBdIGhpZC1n
ZW5lcmljIDAwMDM6MDk5QTo2MTBDLjAwMDE6IGlucHV0LGhpZHJhdzA6IFVTQiBISUQgdjEuMDAg
S2V5Ym9hcmQgWyAgVVNCIE11bHRpbWVkaWEgS2V5Ym9hcmQgXSBvbiB1c2ItMDAwMDowMDoxYS4w
LTEuMi9pbnB1dDAKWyAgICA4LjI5OTgyNl0gaW5wdXQ6ICAgVVNCIE11bHRpbWVkaWEgS2V5Ym9h
cmQgIENvbnN1bWVyIENvbnRyb2wgYXMgL2RldmljZXMvcGNpMDAwMDowMC8wMDAwOjAwOjFhLjAv
dXNiMS8xLTEvMS0xLjIvMS0xLjI6MS4xLzAwMDM6MDk5QTo2MTBDLjAwMDIvaW5wdXQvaW5wdXQ3
ClsgICAgOC4zNjUyMjJdIGlucHV0OiAgIFVTQiBNdWx0aW1lZGlhIEtleWJvYXJkICBTeXN0ZW0g
Q29udHJvbCBhcyAvZGV2aWNlcy9wY2kwMDAwOjAwLzAwMDA6MDA6MWEuMC91c2IxLzEtMS8xLTEu
Mi8xLTEuMjoxLjEvMDAwMzowOTlBOjYxMEMuMDAwMi9pbnB1dC9pbnB1dDgKWyAgICA4LjM2NTU1
OV0gaGlkLWdlbmVyaWMgMDAwMzowOTlBOjYxMEMuMDAwMjogaW5wdXQsaGlkcmF3MTogVVNCIEhJ
RCB2MS4wMCBEZXZpY2UgWyAgVVNCIE11bHRpbWVkaWEgS2V5Ym9hcmQgXSBvbiB1c2ItMDAwMDow
MDoxYS4wLTEuMi9pbnB1dDEKWyAgICA4LjU4Mzc2M10gbXQ3NjAxdSAyLTEuNDoxLjA6IEVFUFJP
TSB2ZXI6MGQgZmFlOjAwClsgICAgOC44MTIxNzFdIGllZWU4MDIxMSBwaHkwOiBTZWxlY3RlZCBy
YXRlIGNvbnRyb2wgYWxnb3JpdGhtICdtaW5zdHJlbF9odCcKWyAgICA4LjgxMjc4M10gdXNiY29y
ZTogcmVnaXN0ZXJlZCBuZXcgaW50ZXJmYWNlIGRyaXZlciBtdDc2MDF1ClsgICAgOC44MjA0Mzhd
IG10NzYwMXUgMi0xLjQ6MS4wIHdseDIwZTAxNjAwNjRiZDogcmVuYW1lZCBmcm9tIHdsYW4wClsg
ICAxMS41NTA5MzBdIHdseDIwZTAxNjAwNjRiZDogYXV0aGVudGljYXRlIHdpdGggNjg6ZmY6N2I6
NDc6ODY6MTcKWyAgIDExLjU4NzA1NV0gd2x4MjBlMDE2MDA2NGJkOiBzZW5kIGF1dGggdG8gNjg6
ZmY6N2I6NDc6ODY6MTcgKHRyeSAxLzMpClsgICAxMS41ODk0NThdIHdseDIwZTAxNjAwNjRiZDog
YXV0aGVudGljYXRlZApbICAgMTEuNTkzMzM3XSB3bHgyMGUwMTYwMDY0YmQ6IGFzc29jaWF0ZSB3
aXRoIDY4OmZmOjdiOjQ3Ojg2OjE3ICh0cnkgMS8zKQpbICAgMTEuNTk4NTY1XSB3bHgyMGUwMTYw
MDY0YmQ6IFJYIEFzc29jUmVzcCBmcm9tIDY4OmZmOjdiOjQ3Ojg2OjE3IChjYXBhYj0weDQzMSBz
dGF0dXM9MCBhaWQ9MikKWyAgIDExLjYzNjU1MV0gd2x4MjBlMDE2MDA2NGJkOiBhc3NvY2lhdGVk
ClsgICAxMS42OTQxOTJdIElQdjY6IEFERFJDT05GKE5FVERFVl9DSEFOR0UpOiB3bHgyMGUwMTYw
MDY0YmQ6IGxpbmsgYmVjb21lcyByZWFkeQpbICAgMTQuOTAwMjk2XSBicmlkZ2U6IGZpbHRlcmlu
ZyB2aWEgYXJwL2lwL2lwNnRhYmxlcyBpcyBubyBsb25nZXIgYXZhaWxhYmxlIGJ5IGRlZmF1bHQu
IFVwZGF0ZSB5b3VyIHNjcmlwdHMgdG8gbG9hZCBicl9uZXRmaWx0ZXIgaWYgeW91IG5lZWQgdGhp
cy4KWyAgIDE0LjkzNTk5OV0gYnItbGFuOiBwb3J0IDEoZW5wMTNzMCkgZW50ZXJlZCBibG9ja2lu
ZyBzdGF0ZQpbICAgMTQuOTM2MDAyXSBici1sYW46IHBvcnQgMShlbnAxM3MwKSBlbnRlcmVkIGRp
c2FibGVkIHN0YXRlClsgICAxNC45MzYwODZdIGRldmljZSBlbnAxM3MwIGVudGVyZWQgcHJvbWlz
Y3VvdXMgbW9kZQpbICAgMTQuOTQxMjQ0XSByODE2OSAwMDAwOjBkOjAwLjA6IGZpcm13YXJlOiBk
aXJlY3QtbG9hZGluZyBmaXJtd2FyZSBydGxfbmljL3J0bDgxNjhoLTIuZncKWyAgIDE0Ljk2OTM0
OV0gR2VuZXJpYyBGRS1HRSBSZWFsdGVrIFBIWSByODE2OS1kMDA6MDA6IGF0dGFjaGVkIFBIWSBk
cml2ZXIgW0dlbmVyaWMgRkUtR0UgUmVhbHRlayBQSFldIChtaWlfYnVzOnBoeV9hZGRyPXI4MTY5
LWQwMDowMCwgaXJxPUlHTk9SRSkKWyAgIDE1LjE2OTQ3Ml0gcjgxNjkgMDAwMDowZDowMC4wIGVu
cDEzczA6IExpbmsgaXMgRG93bgpbICAgMTUuMTczNDQ3XSBici1sYW46IHBvcnQgMShlbnAxM3Mw
KSBlbnRlcmVkIGJsb2NraW5nIHN0YXRlClsgICAxNS4xNzM0NTFdIGJyLWxhbjogcG9ydCAxKGVu
cDEzczApIGVudGVyZWQgZm9yd2FyZGluZyBzdGF0ZQpbICAgMTUuOTE3Mzk2XSBici1sYW46IHBv
cnQgMShlbnAxM3MwKSBlbnRlcmVkIGRpc2FibGVkIHN0YXRlCg==
--000000000000f9517405ba476c04--


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 14:55:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 14:55:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79929.145777 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6abW-0001yT-38; Mon, 01 Feb 2021 14:55:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79929.145777; Mon, 01 Feb 2021 14:55:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6abV-0001yM-Vr; Mon, 01 Feb 2021 14:55:01 +0000
Received: by outflank-mailman (input) for mailman id 79929;
 Mon, 01 Feb 2021 14:55:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l6abU-0001yH-9P
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 14:55:00 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l6abU-00040v-57
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 14:55:00 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l6abU-00035Q-4I
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 14:55:00 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l6abP-0007eV-Dq; Mon, 01 Feb 2021 14:54:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=dKfvyWOtU1ds8JEL7uFi5qrZvBqg/UGn1drgGsjXRHs=; b=Bnf8YRUn4HKi2MMzHs246C6RWk
	jD9F7VGIpKdrrwEzDay+KbqKQl0JstH/3bIHkWlrG84knM970cHfUNUXPLJ1HzjGvSQHysx7i6Dt0
	8PHNN6Rpwjg0Hh6oFw2LzTsWh5Y8h9SBjHe4zhepnwG2XfaDpKVKD/BRFQp4oDuXgt5w=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24600.5695.143342.713995@mariner.uk.xensource.com>
Date: Mon, 1 Feb 2021 14:54:55 +0000
To: Jan Beulich <jbeulich@suse.com>,
    Julien Grall <julien@xen.org>
Cc: xen-devel@lists.xenproject.org,
    Julien Grall <jgrall@amazon.com>,
    Andrew Cooper <andrew.cooper3@citrix.com>,
    George Dunlap <george.dunlap@citrix.com>,
    Stefano Stabellini <sstabellini@kernel.org>,
    Wei Liu <wl@xen.org>,
    Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Subject: Re: [PATCH for-4.15] xen/mm: Fix build when CONFIG_HVM=n and CONFIG_COVERAGE=y [and 1 more messages]
In-Reply-To: <20210130152210.17503-1-julien@xen.org>,
	<174a18ba-25d5-a94c-a85d-4a81b837a936@suse.com>
References: <20210130152210.17503-1-julien@xen.org>
	<174a18ba-25d5-a94c-a85d-4a81b837a936@suse.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Julien Grall writes ("[PATCH for-4.15] xen/mm: Fix build when CONFIG_HVM=n and CONFIG_COVERAGE=y"):
> Xen relies heavily on the DCE stage to remove unused code, so that the
> linker doesn't throw an error when a function has a prototype defined
> but is not implemented yet.

Thanks for the clear explanation.

> It is not entirely clear why the compiler's DCE is not detecting the
> unused code. However, moving the permission check from do_memory_op()
> to xenmem_add_to_physmap_batch() does the trick.

How unfortunate.

> Fixes: d4f699a0df6c ("x86/mm: p2m_add_foreign() is HVM-only")
> Reported-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> Signed-off-by: Julien Grall <jgrall@amazon.com>

I have reviewed the diff, but not the code in context.

> The gitlab CI is used to provide basic testing on a per-series basis. So
> I would like to request that this patch be merged into Xen 4.15 in order
> to reduce the number of failures not related to the series under test.

Quite so.

Jan Beulich writes ("Re: [PATCH for-4.15] xen/mm: Fix build when CONFIG_HVM=n and CONFIG_COVERAGE=y"):
> On 30.01.2021 16:22, Julien Grall wrote:
> > @@ -1442,13 +1447,6 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
> >          if ( d == NULL )
> >              return -ESRCH;
> >  
> > -        rc = xatp_permission_check(d, xatpb.space);
> > -        if ( rc )
> > -        {
> > -            rcu_unlock_domain(d);
> > -            return rc;
> > -        }
> > -
> >          rc = xenmem_add_to_physmap_batch(d, &xatpb, start_extent);
> >  
> >          rcu_unlock_domain(d);
> 
> I'd be okay with the code movement if you did so consistently,
> i.e. also for the other invocation. I realize this would have
> an effect on the dm-op call of the function, but I wonder
> whether this wouldn't even be a good thing. If not, I think
> duplicating xenmem_add_to_physmap()'s early ASSERT() into
> xenmem_add_to_physmap_batch() would be the better course of
> action.

Jan, can you confirm whether, in your opinion, this patch as originally
posted by Julien is *correct* as is?  In particular, Julien did not
intend a functional change.  Have you satisfied yourself that there is
no functional change here?

I understand your objection above to relate to style or neatness,
rather than function.  Is that correct?  And that your proposed
additional change would have some impact which would have to be
assessed?

In which case I think it would be better to defer the style
improvement until after the release.

IOW, the original patch

Release-Acked-by: Ian Jackson <iwj@xenproject.org>

assuming a favourable functional code review from a relevant
maintainer.

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 14:56:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 14:56:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79930.145788 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6adE-00025f-Ds; Mon, 01 Feb 2021 14:56:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79930.145788; Mon, 01 Feb 2021 14:56:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6adE-00025Y-Ar; Mon, 01 Feb 2021 14:56:48 +0000
Received: by outflank-mailman (input) for mailman id 79930;
 Mon, 01 Feb 2021 14:56:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l6adC-00025S-AG
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 14:56:46 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l6adC-00042e-9R
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 14:56:46 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l6adC-0003NF-7Y
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 14:56:46 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l6ad9-0007fT-0k; Mon, 01 Feb 2021 14:56:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=MgBV62szT89Tk4OKE4cEbWfTA81OWdN7qE+5TyGn8Q0=; b=3kZBEqVkrJ0MnOBBTemHQexHDE
	Lo5uFJs+X764kHC6k/S0mCMmVACSdgLX0FvLT1KS/hEHwhp8QpH0ADRCYR3GaGl12skHdBCiRzPsw
	a1iYBWXLslhnkOyWIoHP0l7N4yFFu19nnURfUxjXCgtd0H7QBY/e0rMWertFKeYRDeoY=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 8bit
Message-ID: <24600.5802.791792.705035@mariner.uk.xensource.com>
Date: Mon, 1 Feb 2021 14:56:42 +0000
To: Manuel Bouyer <bouyer@netbsd.org>
Cc: xen-devel@lists.xenproject.org,
    Wei Liu <wl@xen.org>,
    Anthony PERARD <anthony.perard@citrix.com>,
    Roger Pau =?utf-8?Q?Monn=E9?= <roger.pau@citrix.com>
Subject: Re: [PATCH v3 1/2] libs/light: pass some infos to qemu
In-Reply-To: <20210130230300.11664-1-bouyer@netbsd.org>
References: <20210130230300.11664-1-bouyer@netbsd.org>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Manuel Bouyer writes ("[PATCH v3 1/2] libs/light: pass some infos to qemu"):
> Pass the bridge name to qemu as a command-line option.
> When starting qemu, set an environment variable XEN_DOMAIN_ID,
> to be used by qemu helper scripts.
> The only functional difference of using the br parameter is that the
> bridge name gets passed to the QEMU script.
> NetBSD doesn't have the ioctl to rename network interfaces implemented, and
> thus cannot rename the interface from tapX to vifX.Y-emu. Only qemu knows
> the tap interface name, so we need to use the qemu script from qemu itself.
> 
> Signed-off-by: Manuel Bouyer <bouyer@netbsd.org>
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

I think this is a bugfix but it has implications for non-NetBSD
systems and I think it would be best for it to get (or not get) an
explicit release-ack.

I think it is sufficiently low risk to take it now.  We don't think
this will cause trouble for other platforms but if it proves to, that
should be fairly obvious and caught in our testing.  So:

Release-Acked-by: Ian Jackson <iwj@xenproject.org>

Ian.


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 14:57:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 14:57:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79931.145801 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6adT-0002A3-NE; Mon, 01 Feb 2021 14:57:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79931.145801; Mon, 01 Feb 2021 14:57:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6adT-00029w-K2; Mon, 01 Feb 2021 14:57:03 +0000
Received: by outflank-mailman (input) for mailman id 79931;
 Mon, 01 Feb 2021 14:57:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4N3t=HD=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l6adS-00029l-JC
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 14:57:02 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e9c5d348-d9cd-422d-a2be-e0cf2a827521;
 Mon, 01 Feb 2021 14:57:00 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id CD1BAAB92;
 Mon,  1 Feb 2021 14:56:59 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e9c5d348-d9cd-422d-a2be-e0cf2a827521
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612191419; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=gpjsoXyymi3tg3gNQWwGrxdoOaO5dOb3D73T4UALbJY=;
	b=O4jT9VS+Vw7V5cpYT9lciX+jT/8jHDht0cw27NOF7UCQLEainrdSOaNGOvaRB+ZJiljoYE
	LoAbHbV8uhV2xWmmlyxWrxP5zhy9Ih39kc+rmJMoov71Z3Bl18IzCufXUz5SXHL02CyH5m
	Kye9Ycd/Uy1LVsc+szPH+qUW7qddjFY=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Julien Grall <julien@xen.org>, Ian Jackson <iwj@xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86/build: correctly record dependencies of asm-offsets.s
Message-ID: <b3b57f6b-3ed9-18f6-2a87-6af3304c6645@suse.com>
Date: Mon, 1 Feb 2021 15:56:59 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Going through an intermediate *.new file requires telling the compiler
what the real target is, so that the inclusion of the resulting .*.d
file will actually be useful.

Fixes: 7d2d7a43d014 ("x86/build: limit rebuilding of asm-offsets.h")
Reported-by: Julien Grall <julien@xen.org>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Already on the original patch I did suggest that perhaps Arm would want
to follow suit. So again - perhaps the rules should be unified by moving
to common code?

--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -241,7 +241,7 @@ efi/buildid.o efi/relocs-dummy.o: $(BASE
 efi/buildid.o efi/relocs-dummy.o: ;
 
 asm-offsets.s: $(TARGET_SUBARCH)/asm-offsets.c $(BASEDIR)/include/asm-x86/asm-macros.h
-	$(CC) $(filter-out -Wa$(comma)% -flto,$(c_flags)) -S -g0 -o $@.new $<
+	$(CC) $(filter-out -Wa$(comma)% -flto,$(c_flags)) -S -g0 -o $@.new -MQ $@ $<
 	$(call move-if-changed,$@.new,$@)
 
 asm-macros.i: CFLAGS-y += -D__ASSEMBLY__ -P


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 14:59:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 14:59:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79940.145813 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6ag1-0002WR-4Z; Mon, 01 Feb 2021 14:59:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79940.145813; Mon, 01 Feb 2021 14:59:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6ag1-0002WK-1Y; Mon, 01 Feb 2021 14:59:41 +0000
Received: by outflank-mailman (input) for mailman id 79940;
 Mon, 01 Feb 2021 14:59:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l6ag0-0002WA-33
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 14:59:40 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l6ag0-00046x-2G
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 14:59:40 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l6ag0-0003Wo-1N
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 14:59:40 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l6afv-0007gD-Cb; Mon, 01 Feb 2021 14:59:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=B5q6o1TGLLRrNrSxomCbQfBR0hkmteq+1huEep3awn4=; b=3lUNaioN/e5TsKwcIoRWgoTuoQ
	7aBvh8xqoaf3C2geRO/EywgKckWAkCrJn+yTgW9+mjhuQZZe00lLnaCi3t8yDa/jF4A9Kw8cGP9kt
	673cszVZaIz0k1vzayM6rWM0GsQZGWkzYwbE83NiiUfz+yUj8+4UHAbfo4fW18GIynUc=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 8bit
Message-ID: <24600.5975.179890.232213@mariner.uk.xensource.com>
Date: Mon, 1 Feb 2021 14:59:35 +0000
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Cc: Manuel Bouyer <bouyer@antioche.eu.org>,
    <xen-devel@lists.xenproject.org>,
    Wei  Liu <wl@xen.org>
Subject: Re: [PATCH v3 2/2] Document qemu-ifup on NetBSD
In-Reply-To: <YBfevtNorE5OtQVR@Air-de-Roger>
References: <20210130230300.11664-1-bouyer@netbsd.org>
	<20210130230300.11664-2-bouyer@netbsd.org>
	<YBe6JpR6jOLvYDz6@Air-de-Roger>
	<20210201093747.GA624@antioche.eu.org>
	<YBfevtNorE5OtQVR@Air-de-Roger>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Roger Pau Monné writes ("Re: [PATCH v3 2/2] Document qemu-ifup on NetBSD"):
> On Mon, Feb 01, 2021 at 10:37:47AM +0100, Manuel Bouyer wrote:
> > Well, as a user, I want to know how the scripts are called, so that I can
> > tune them ...
> 
> Isn't that information in the header of the script? I would expect
> users who want to modify such a script to open it, and the header
> should already list the parameters.
> 
> IMO I would leave the parameters out of this document because we don't
> list them for any other script, so it seems odd to list them for the
> qemu-ifup script only.

I think it is shown there, indeed.  That may not be the best place for
this information.  But maybe putting it somewhere else should be done
systematically rather than ad hoc.  IOW with my maintainer hat on I
don't feel I have a strong opinion.

OTOH, with my RM hat on:

Release-Acked-by: Ian Jackson <iwj@xenproject.org>

since this is a documentation change.

Ian.


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 15:02:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 15:02:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79944.145824 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6aj3-0003Ny-Jr; Mon, 01 Feb 2021 15:02:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79944.145824; Mon, 01 Feb 2021 15:02:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6aj3-0003Nr-Gn; Mon, 01 Feb 2021 15:02:49 +0000
Received: by outflank-mailman (input) for mailman id 79944;
 Mon, 01 Feb 2021 15:02:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l6aj2-0003Nm-Jd
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 15:02:48 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l6aj2-0004D6-EI
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 15:02:48 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l6aj2-0003tW-Av
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 15:02:48 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l6ait-0007h1-5v; Mon, 01 Feb 2021 15:02:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=zY76ypF3Q8Qeil0EJm2iCzobjcE/L4TqYeEZkySCwbY=; b=GVa9Atn6I3wxWrvp8eFjkaP59C
	y1+mZXI8gR6zRq+s8Jfxhzo/CMcxyk8FD5c0kKWuQ3wfv1gONxa400Euw5Pdl5HPperwgqDDx12p4
	To47t+0Tml07v7vGaQyJqb0FX53zgUldUH/tJoABPe7ryLIZqHx/PlX+6jNQltRKPMcQ=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Message-ID: <24600.6158.963964.960004@mariner.uk.xensource.com>
Date: Mon, 1 Feb 2021 15:02:38 +0000
To: George Dunlap <George.Dunlap@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>,
    Wei Liu <wl@xen.org>,
    Anthony Perard <anthony.perard@citrix.com>,
    Stefano Stabellini <sstabellini@kernel.org>,
    "xen-devel\@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH for 4.16] xl: Add xl.conf-based dom0 autoballoon limits
In-Reply-To: <1F7EC4ED-BFA0-4EC8-A7F4-BE78E2455FAE@citrix.com>
References: <20210129164858.3280477-1-george.dunlap@citrix.com>
	<606292ed-9083-d9a7-33e9-a02485cbbca0@suse.com>
	<1F7EC4ED-BFA0-4EC8-A7F4-BE78E2455FAE@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

George Dunlap writes ("Re: [PATCH for 4.16] xl: Add xl.conf-based dom0 autoballoon limits"):
> I'd certainly be in favor of getting something like b61443b1bf76
> upstream.  I still think a patch like this one would be useful
> though:
> 
> 1. The error message will be given right away, rather than timing
> out on the dom0 balloon driver
> 
> 2. The error message can be more informative, and point people to
> the whole "fixed dom0 memory" thing.

I agree.

If the memory ballooning accounting area wasn't already a swamp I
would be considering this patch for 4.15.  As it is I think it unwise
to make this change now because I don't see how to be confident it
won't break anything.

Ian.


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 15:02:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 15:02:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79945.145837 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6aj9-0003Q9-Ro; Mon, 01 Feb 2021 15:02:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79945.145837; Mon, 01 Feb 2021 15:02:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6aj9-0003Q1-Oj; Mon, 01 Feb 2021 15:02:55 +0000
Received: by outflank-mailman (input) for mailman id 79945;
 Mon, 01 Feb 2021 15:02:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4N3t=HD=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l6aj8-0003Pe-EO
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 15:02:54 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9ceb8407-dc7d-43cf-beb8-57e6eb1aa0dd;
 Mon, 01 Feb 2021 15:02:53 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 66258ABD5;
 Mon,  1 Feb 2021 15:02:52 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9ceb8407-dc7d-43cf-beb8-57e6eb1aa0dd
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612191772; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=/cxaZe5HibYKi5X4hVRGu6pgg1gFZKdO6lE5Jqra6LE=;
	b=fGf7NQQ0y6XB2l9IEbF+ZYQhqJ47RKHe1BSKrM80chiTbIB0TN8AhypUgySXIoE7397KU/
	vxy7Y1egI69m/2ATa9waIh71vi890CbK4JcdCESw/uPy4ipI7GrUtUHLP4lSpdnbg4w4lg
	6Epv9WMI5PjzPPybTjrlxGmBmKf21vI=
Subject: Re: [PATCH for-4.15] xen/mm: Fix build when CONFIG_HVM=n and
 CONFIG_COVERAGE=y [and 1 more messages]
To: Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>
Cc: xen-devel@lists.xenproject.org, Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
References: <20210130152210.17503-1-julien@xen.org>
 <174a18ba-25d5-a94c-a85d-4a81b837a936@suse.com>
 <24600.5695.143342.713995@mariner.uk.xensource.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <321c06d3-106a-cfab-5ac8-df629e600dfe@suse.com>
Date: Mon, 1 Feb 2021 16:02:51 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <24600.5695.143342.713995@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 01.02.2021 15:54, Ian Jackson wrote:
> Julien Grall writes ("[PATCH for-4.15] xen/mm: Fix build when CONFIG_HVM=n and CONFIG_COVERAGE=y"):
>> Xen relies heavily on the DCE stage to remove unused code, so the
>> linker doesn't throw an error when a prototype is defined for a
>> function that is not implemented.
> 
> Thanks for the clear explanation.
> 
>> It is not entirely clear why the compiler DCE is not detecting the
>> unused code. However, moving the permission check from do_memory_op()
>> to xenmem_add_to_physmap_batch() does the trick.
> 
> How unfortunate.
> 
>> Fixes: d4f699a0df6c ("x86/mm: p2m_add_foreign() is HVM-only")
>> Reported-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> I have reviewed the diff, but not the code in context.
> 
>> The gitlab CI is used to provide basic testing on a per-series basis. So
>> I would like to request this patch to be merged in Xen 4.15 in order to
>> reduce the number of failures not related to the series being tested.
> 
> Quite so.
> 
> Jan Beulich writes ("Re: [PATCH for-4.15] xen/mm: Fix build when CONFIG_HVM=n and CONFIG_COVERAGE=y"):
>> On 30.01.2021 16:22, Julien Grall wrote:
>>> @@ -1442,13 +1447,6 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>>>          if ( d == NULL )
>>>              return -ESRCH;
>>>  
>>> -        rc = xatp_permission_check(d, xatpb.space);
>>> -        if ( rc )
>>> -        {
>>> -            rcu_unlock_domain(d);
>>> -            return rc;
>>> -        }
>>> -
>>>          rc = xenmem_add_to_physmap_batch(d, &xatpb, start_extent);
>>>  
>>>          rcu_unlock_domain(d);
>>
>> I'd be okay with the code movement if you did so consistently,
>> i.e. also for the other invocation. I realize this would have
>> an effect on the dm-op call of the function, but I wonder
>> whether this wouldn't even be a good thing. If not, I think
>> duplicating xenmem_add_to_physmap()'s early ASSERT() into
>> xenmem_add_to_physmap_batch() would be the better course of
>> action.
> 
> Jan, can you confirm whether in your opinion this patch as originally
> posted by Julien is *correct* as is?  In particular, Julien did not
> intend a functional change.  Have you satisfied yourself that there is
> no functional change here?

Yes and yes.

> I understand your objection above to relate to style or neatness,
> rather than function.  Is that correct?

Yes.

>  And that your proposed
> additional change would have some impact which would have to be
> assessed.

The first of the proposed alternatives may need further
investigation, yes. The second would shrink this patch to a
2-line one, i.e. far less code churn, and is not in need of
any assessment as far as I'm aware. In
fact I believe this latter alternative was discussed as
the approach to take here, before the patch was submitted.

Jan

> In which case I think it would be better to defer the style
> improvement until after the release.
> 
> IOW, the original patch
> 
> Release-Acked-by: Ian Jackson <iwj@xenproject.org>
> 
> assuming a favourable functional code review from a relevant
> maintainer.
> 
> Thanks,
> Ian.
> 



From xen-devel-bounces@lists.xenproject.org Mon Feb 01 15:06:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 15:06:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79948.145849 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6an2-0003eu-DH; Mon, 01 Feb 2021 15:06:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79948.145849; Mon, 01 Feb 2021 15:06:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6an2-0003en-AD; Mon, 01 Feb 2021 15:06:56 +0000
Received: by outflank-mailman (input) for mailman id 79948;
 Mon, 01 Feb 2021 15:06:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IWGu=HD=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l6an0-0003ei-2E
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 15:06:54 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f5f63a9b-5c1a-423c-83bb-ebddb6c74494;
 Mon, 01 Feb 2021 15:06:52 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f5f63a9b-5c1a-423c-83bb-ebddb6c74494
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612192012;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=CmujobCdXS0rXkjlW1uqM7HHHGcadyi+aill032zNUs=;
  b=ZpVbxefhl9Djz+C5K62BldOg6Zw2FyMJ65UNlNP+9W4XRD+RvDtCTDbS
   MwWakulLV22XmkAVdaYPQgDb3qMzHP6axzPjdfUZIqq+hMThvnPDWXovo
   4t5pa+eLMMLk46BOMGjGeRz20/p2ZQVVucWpGx+51679cz1QVzMHNCupE
   k=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: D+qiPcmJdTvK/MMSKIwebaB5Am/s7qJyoundTtE7WG7qVAH1drLb3QBBg73sTlfGvP0cdI9KUl
 2/v2fXWi5ZNZgAllXQ+yMOWi6ghg8jduHwYkMPjJzb/oQZM3CLgS+bR9v4+j/Yrof0pWHX1LhC
 XWBDR19RVKL7dPLC5/A1KEf1lQO9PRB16BSl03ZvpikBgtNsJQT/9PsFyZv5Kg8iFko16iwfnL
 /m2H460lM8Qnq3DHWjlt3ceaZ7vLtcFNESMm2S0oa/fGSg15m/vLsOZjPcSNAm6wHShxDESW0p
 mNw=
X-SBRS: 5.2
X-MesageID: 36482386
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,392,1602561600"; 
   d="scan'208";a="36482386"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ILQ8kmlPAednR6jK1EeYa6nSOHIMj6OpNIbvUBFLCmWEm+Oyl2/UGMTSnA+wnmaDV83z3RpydhTKWT/MAoTtRbQ11LRy9IbuQtMYFQFW96clK4kNfgeLyrEqB7voYepnFp+YBW0VZsvCuG6GVGhBwYVFx5F0tOcWRc/8INz77WBCkYyOw4/TGhRQ0xT4jkZTH9lpApZK+RSNTM38EheXiXpUlA6EcvOwl+eXONy539JD4yFdh+nxR/Rdqk3p2ggNjRI37b97gnKaH/U5anbPcjvAmbXjmQno1nIiZqsnNzzfYeKkQAgpKATSr+5fB9oDXQDIGadN9mAo0hAuBmZgtw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=atXN1GxkEwQE8MMSncvh83/fWaw7vuSzU/q8wa+hX3g=;
 b=E1zFr9vBeLw9JR1Xi3MeEg0Xq96wY5AvcVEgveKa2ZusUrEqOuXwr16ceBbtI5HybjN+kP3KyKLSxNqDbHWvaF5hkyri0jmeRhqtTJM9T/GaabBh0SGAl6Qr6pLaAinyV0l3n/VSMXxcfbQZdcETA7oje141Ne4/TP5lTWFvZMCUkntZfCUQkoZu+0e+P+ySQB8jw7s8YEpx6N9VbiqaEHnwrmsg87wsL5pJGtMhTlkxZomXfrk6L0OrIUjhmGz8fbtyLxkkFgzedHgxqOHmkv/5jFKrltJM9zWWTUgwzsSmvD9q2Koo43Cu0ydc7DkWkJylh+Aka7bDwkXUHThXpg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=atXN1GxkEwQE8MMSncvh83/fWaw7vuSzU/q8wa+hX3g=;
 b=BMG5WmTyqUfEiQeZx1semoZ80iJT73V+cNPW+Jvjk8M3rsQeAN29ueqnzFkD8i3qhA0xiMbL9DI9BZmkRKyAabQ6bhnqDsyf6YoNyLwXCe8tSGXU2HF1A1YvGtzn8Oy4Gsxf5HXYKeHlwFKAqwbqK0DCOS1k67SCo1bO6Qotvbw=
Subject: Re: [PATCH] x86/build: correctly record dependencies of asm-offsets.s
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, Julien Grall <julien@xen.org>, Ian Jackson
	<iwj@xenproject.org>
References: <b3b57f6b-3ed9-18f6-2a87-6af3304c6645@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <0cbbdb3a-5681-10df-aeee-ac185d7033cc@citrix.com>
Date: Mon, 1 Feb 2021 15:06:41 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
In-Reply-To: <b3b57f6b-3ed9-18f6-2a87-6af3304c6645@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0131.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:193::10) To BYAPR03MB4728.namprd03.prod.outlook.com
 (2603:10b6:a03:13a::24)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: b6707d29-31e3-4b95-36b8-08d8c6c2ff14
X-MS-TrafficTypeDiagnostic: BYAPR03MB4727:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB47270F358FB3D5D5987E28A7BAB69@BYAPR03MB4727.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:1468;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: b6707d29-31e3-4b95-36b8-08d8c6c2ff14
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB4728.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 Feb 2021 15:06:47.9854
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: YHEN7OOU8ktHBijsrRx5FiDoDWAxHlj8MOYpRll8f7uX9siHy+m0A0W3v7w0GgrJaDhv9YBYFAjEEVkHDLX5dCRIpEcQpzBVIYDihISgqls=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4727
X-OriginatorOrg: citrix.com

On 01/02/2021 14:56, Jan Beulich wrote:
> Going through an intermediate *.new file requires telling the compiler
> what the real target is, so that the inclusion of the resulting .*.d
> file will actually be useful.
>
> Fixes: 7d2d7a43d014 ("x86/build: limit rebuilding of asm-offsets.h")
> Reported-by: Julien Grall <julien@xen.org>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>, whatever the
outcome of the discussion below.

> ---
> Already on the original patch I did suggest that perhaps Arm would want
> to follow suit. So again - perhaps the rules should be unified by moving
> to common code?
>
> --- a/xen/arch/x86/Makefile
> +++ b/xen/arch/x86/Makefile
> @@ -241,7 +241,7 @@ efi/buildid.o efi/relocs-dummy.o: $(BASE
>  efi/buildid.o efi/relocs-dummy.o: ;
>  
>  asm-offsets.s: $(TARGET_SUBARCH)/asm-offsets.c $(BASEDIR)/include/asm-x86/asm-macros.h
> -	$(CC) $(filter-out -Wa$(comma)% -flto,$(c_flags)) -S -g0 -o $@.new $<
> +	$(CC) $(filter-out -Wa$(comma)% -flto,$(c_flags)) -S -g0 -o $@.new -MQ $@ $<
>  	$(call move-if-changed,$@.new,$@)
>  
>  asm-macros.i: CFLAGS-y += -D__ASSEMBLY__ -P



From xen-devel-bounces@lists.xenproject.org Mon Feb 01 15:09:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 15:09:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79953.145868 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6apQ-0003yG-BA; Mon, 01 Feb 2021 15:09:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79953.145868; Mon, 01 Feb 2021 15:09:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6apQ-0003y1-4g; Mon, 01 Feb 2021 15:09:24 +0000
Received: by outflank-mailman (input) for mailman id 79953;
 Mon, 01 Feb 2021 15:09:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l6apO-0003xb-TO
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 15:09:22 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l6apO-0004Jf-P5
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 15:09:22 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l6apO-0004R7-NQ
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 15:09:22 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l6apL-0007k9-FO; Mon, 01 Feb 2021 15:09:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=dGGSACJOMJZtTzt+/JjXmWZTZXWco5h2qLf96Gs5OkY=; b=HWZ/xrFjDH8npH1gyCWisY86Y3
	F0dcTUH30s9hexmtIiD1WUaORoEfMqL/Hc2LKM9kkIPG1lifYNplZ1hTGvx80KMqqWow3FOmxtanl
	3NoxukuIBZYdGUTDW6cGTvI0YNlnazgkQmr+uOYrnPjXgKafmzzAQ+OGsyPGT6g9HjHA=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24600.6559.225650.207177@mariner.uk.xensource.com>
Date: Mon, 1 Feb 2021 15:09:19 +0000
To: Manuel Bouyer <bouyer@netbsd.org>
Cc: <xen-devel@lists.xenproject.org>,
    Wei  Liu <wl@xen.org>,
    Ian Jackson <ian.jackson@eu.citrix.com>
Subject: Re: [PATCH v3 1/2] xenpmd.c: use dynamic allocation
In-Reply-To: <20210130182711.2473-1-bouyer@netbsd.org>
References: <20210130182711.2473-1-bouyer@netbsd.org>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Manuel Bouyer writes ("[PATCH v3 1/2] xenpmd.c: use dynamic allocation"):
> On NetBSD, d_name is larger than 256, so file_name[284] may not be large
> enough (and gcc emits a format-truncation error).
> Use asprintf() instead of snprintf() on a static on-stack buffer.
> 
> Signed-off-by: Manuel Bouyer <bouyer@netbsd.org>
> Reviewed-by: Ian Jackson <ian.jackson@eu.citrix.com>

I consider this a bugfix, so

Release-Acked-by: Ian Jackson <iwj@xenproject.org>

I think Roger's ack got dropped, so I have added it back.

Ian.


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 15:09:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 15:09:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79952.145860 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6apP-0003xn-Vb; Mon, 01 Feb 2021 15:09:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79952.145860; Mon, 01 Feb 2021 15:09:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6apP-0003xg-Sd; Mon, 01 Feb 2021 15:09:23 +0000
Received: by outflank-mailman (input) for mailman id 79952;
 Mon, 01 Feb 2021 15:09:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4N3t=HD=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l6apO-0003xW-EE
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 15:09:22 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ebf6e760-b35b-4046-bb6d-a7e01e55b497;
 Mon, 01 Feb 2021 15:09:20 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 47A77ABD5;
 Mon,  1 Feb 2021 15:09:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ebf6e760-b35b-4046-bb6d-a7e01e55b497
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612192159; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Tzm3Ir4Xb3LXI0gewPtMCGXTGNN2FaOl2nuWVwTF40c=;
	b=qDGXoTs+NlA33B4DWSXl/pUB3IZe5GIEMbex0aGkgj8nzilG7QEeCJ0an2mai7dkkyhsaV
	bycMJfSPGK2pUe3vR+iEJaqmoueroPwkghC8SuUvbBeUwJPgTZ0PwoJyMeidSHQAZ3T2cy
	ZxAcxXkSfqto0ppBqKJIas2k13l0Vi4=
Subject: Re: Problems with APIC on versions 4.9 and later (4.8 works)
To: Claudemir Todo Bom <claudemir@todobom.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <CANyqHYfNBHnUiBiXHdt+R3mZ72oYQBnQcaWuKw5gY0uDb_ZqKw@mail.gmail.com>
 <e1d69914-c6bc-40b9-a9f4-33be4bd022b6@suse.com>
 <CANyqHYcifnCgd5C5vbYoi4CTtoMX5+jzGqHfs6JZ+e=d2Y_dmg@mail.gmail.com>
 <ff799cd4-ba42-e120-107c-5011dc803b5a@suse.com>
 <609a82d8-af12-4764-c4e0-f5ee0e11c130@suse.com>
 <CANyqHYehUWeNfVXqVJX6nrBS_CcKL1DQjyNVa1cUbvbx+zD83w@mail.gmail.com>
 <9d04edfe-0059-6fbf-c1da-2087f6190e64@suse.com>
 <CANyqHYfOC6JY978SRPAQ8Ug3GevFD=jbT6bVVET4+QOv8mv7qA@mail.gmail.com>
 <a0a7bbd0-c4c3-cfb8-5af0-a5a4aff14b76@suse.com>
 <CANyqHYeDR_NUKzPtbfLiUzxAUzerKepbU4B-_6=U-7Y6uy8gpQ@mail.gmail.com>
 <8837c3fb-1e0c-5941-258c-e76551a9e02b@suse.com>
 <8cf69fb3-5b8c-60ea-bd1c-39a0cbd5cb5c@suse.com>
 <CANyqHYeCQc2bt836uyrtm9Eo2T1uPP-+ups-ygfACu6zK36BQg@mail.gmail.com>
 <bd150f4d-4f7e-082e-6b10-03bf1eca7b80@suse.com>
 <CANyqHYeHf8f6G+U2z9A0JC049HPYvWQ+WXZYLCQyWyx5Jvq6BA@mail.gmail.com>
 <803a50a9-707f-14db-b523-cd1f6f685ab4@suse.com>
 <CANyqHYfNjqjm7tFoHD=XDcv_P42wppmx0gjy=--Kz88MZcK6Pw@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <96a23d4a-b29f-46e2-a0f5-568a5d1f4b9e@suse.com>
Date: Mon, 1 Feb 2021 16:09:18 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <CANyqHYfNjqjm7tFoHD=XDcv_P42wppmx0gjy=--Kz88MZcK6Pw@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 01.02.2021 15:46, Claudemir Todo Bom wrote:
> Tested first without the debug patch and with following parameters:

And this test was with all three of the non-debugging patches?

> xen: dom0_mem=1024M,max:2048M dom0_max_vcpus=4 dom0_vcpus_pin=true smt=true
> kernel: loglevel=3
> 
> same behaviour as before... black screen right after the xen messages.
> 
> adding earlyprintk=xen to the kernel command line is sufficient to
> make it boot; I can imagine this is happening because Xen is not
> releasing the console to the kernel at that moment.

If the answer to the above question is "yes", then I start
suspecting this to be a different problem. I'm not sure I
see a way to debug this without having access to any output
(i.e. neither video nor serial). Without "earlyprintk=xen"
and instead with "vga=keep watchdog" on the Xen command
line, is there anything helpful (without or if need be with
the debugging patch in place)?

> The system worked well (with earlyprintk=xen), tested with the "yes
> stress test" mentioned before on a guest and on dom0.
> 
> Then, I installed the debug patch and booted it again, it also needed
> the earlyprintk=xen parameter on the kernel command line. I've also
> added console_timestamps=boot to the xen command line in order to get
> the time of the messages.
> 
> I'm attaching the outputs of "xl dmesg" and "dmesg" on this message.
> 
> Think it is almost done! Will wait for the next round of tests!

As per above, not sure if there's going to be one. Thanks
for your patient testing!

Jan


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 15:09:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 15:09:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79956.145889 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6apk-00048d-Jf; Mon, 01 Feb 2021 15:09:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79956.145889; Mon, 01 Feb 2021 15:09:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6apk-00048W-G9; Mon, 01 Feb 2021 15:09:44 +0000
Received: by outflank-mailman (input) for mailman id 79956;
 Mon, 01 Feb 2021 15:09:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l6apj-00048D-Ch
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 15:09:43 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l6apj-0004KD-Bj
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 15:09:43 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l6apj-0004TP-9u
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 15:09:43 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l6apg-0007kO-5D; Mon, 01 Feb 2021 15:09:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=HV79K8JazcK1uQVJzyPbSIA4SSZtI828CihczDwLmQ4=; b=Coz+RlLRgb+2FLsZ0Esyr9EaCp
	Ehy+6lgp9XD+BJai1W0glUvGPcNfwupmj4dyaYaD3zfAAZPrMdIXw3L9KImgQwc9MItQHTwU0Zy1k
	9PKYhFh7Lw0WRj7aSBr3s39Fw3dEVMpTLLPe2J0OCSv7rpY+Q3AvKjmiJFtS/2KHP9mA=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24600.6579.879861.695018@mariner.uk.xensource.com>
Date: Mon, 1 Feb 2021 15:09:39 +0000
To: Manuel Bouyer <bouyer@netbsd.org>
Cc: xen-devel@lists.xenproject.org,
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH v3 2/2] define GNU_SOURCE for asprintf()
In-Reply-To: <20210130182711.2473-2-bouyer@netbsd.org>
References: <20210130182711.2473-1-bouyer@netbsd.org>
	<20210130182711.2473-2-bouyer@netbsd.org>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Manuel Bouyer writes ("[PATCH v3 2/2] define GNU_SOURCE for asprintf()"):
> #define _GNU_SOURCE to get the asprintf() prototype on Linux.
> Harmless on NetBSD.

Release-Acked-by: Ian Jackson <iwj@xenproject.org>

This needs to be folded into the previous patch so I will do so.

Ian.


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 15:15:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 15:15:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79961.145901 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6av3-00053s-9G; Mon, 01 Feb 2021 15:15:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79961.145901; Mon, 01 Feb 2021 15:15:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6av3-00053l-5L; Mon, 01 Feb 2021 15:15:13 +0000
Received: by outflank-mailman (input) for mailman id 79961;
 Mon, 01 Feb 2021 15:15:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l6av2-00053g-CM
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 15:15:12 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l6av2-0004Q8-8V
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 15:15:12 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l6av2-0004zz-5U
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 15:15:12 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l6aux-0007mB-HN; Mon, 01 Feb 2021 15:15:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=ujP6Rh16nsukNtKjM63EsqjzhjYIB57ZeV6enttUqVs=; b=j+74toKT0lZ8ohL0kTUQlTikGS
	0bFsnL1uWPwIVZzHR9r7IsuhiebGV4rX9IJRH7IwjTWOeG5vBANBlu4tJyPu1LSCjBPwpTe3pxqnm
	/q3qKkAY7b+QmcOGQqixTpIoj+ZG0nQewtxeDASLKOLEi4oLjJOhAa+ueeLautQDLFew=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24600.6907.258459.936054@mariner.uk.xensource.com>
Date: Mon, 1 Feb 2021 15:15:07 +0000
To: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>,
    xen-devel@lists.xenproject.org,
    Julien Grall <jgrall@amazon.com>,
    Andrew Cooper <andrew.cooper3@citrix.com>,
    George Dunlap <george.dunlap@citrix.com>,
    Stefano Stabellini <sstabellini@kernel.org>,
    Wei Liu <wl@xen.org>,
    Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Subject: Re: [PATCH for-4.15] xen/mm: Fix build when CONFIG_HVM=n and
 CONFIG_COVERAGE=y [and 1 more messages]
In-Reply-To: <321c06d3-106a-cfab-5ac8-df629e600dfe@suse.com>
References: <20210130152210.17503-1-julien@xen.org>
	<174a18ba-25d5-a94c-a85d-4a81b837a936@suse.com>
	<24600.5695.143342.713995@mariner.uk.xensource.com>
	<321c06d3-106a-cfab-5ac8-df629e600dfe@suse.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Jan Beulich writes ("Re: [PATCH for-4.15] xen/mm: Fix build when CONFIG_HVM=n and CONFIG_COVERAGE=y [and 1 more messages]"):
> On 01.02.2021 15:54, Ian Jackson wrote:
> > Julien Grall writes ("[PATCH for-4.15] xen/mm: Fix build when CONFIG_HVM=n and CONFIG_COVERAGE=y"):
...
> > Jan, can you confirm whether in your opinion this patch as originally
> > posted by Julien is *correct* as is ?  In particular, Julien did not
> > intend a functional change.  Have you satisfied yourself that there is
> > no functional change here ?
> 
> Yes and yes.
> 
> > I understand your objection above to relate to style or neatness,
> > rather than function.  Is that correct ?
> 
> Yes.

Right, thanks.

> >  And that your proposed
> > additional change would have some impact which would have to be
> > assessed.
> 
> The first of the proposed alternatives may need further
> investigation, yes. The second of the alternatives would
> shrink this patch to a 2-line one, i.e. far less code
> churn, and is not in need of any assessment afaia. In
> fact I believe this latter alternative was discussed as
> the approach to take here, before the patch was submitted.

Sorry, I missed that part.  I would be happy with that other approach
too, so for that approach (adding a duplicated ASSERT) this is also

Release-Acked-by: Ian Jackson <iwj@xenproject.org>

I'm not a huge fan of code duplication, in general.  I suggest that if
the ASSERT is duplicated it might be worth leaving comment(s) by each
one pointing to the other.

Ian.


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 15:16:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 15:16:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79962.145913 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6aw8-00059z-IU; Mon, 01 Feb 2021 15:16:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79962.145913; Mon, 01 Feb 2021 15:16:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6aw8-00059s-FJ; Mon, 01 Feb 2021 15:16:20 +0000
Received: by outflank-mailman (input) for mailman id 79962;
 Mon, 01 Feb 2021 15:16:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l6aw7-00059m-H5
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 15:16:19 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l6aw7-0004RN-GA
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 15:16:19 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l6aw7-00054z-FY
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 15:16:19 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l6aw2-0007mt-SJ; Mon, 01 Feb 2021 15:16:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=xAJ0H1W7F6Zc9EUBrxRiO0yUaKI7jIqhIhCKb1SPPB4=; b=Wr9MnSV39M9x5pSidjLUO+syKs
	KDp4vdzGUxQsk7o9R6Mq5rwrJdBdYiLHo9kHl4PE6mJhM/v1OyybdaQoLfcFRrFQnnPDpd1NeZYpb
	f2/5LcHAsyMmT50T+WsSaKvQvlvI9sI0yZvd4/ZyvgrYAPu+MW16518tKeyhSu9Havh4=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24600.6974.503961.950273@mariner.uk.xensource.com>
Date: Mon, 1 Feb 2021 15:16:14 +0000
To: Andrew Cooper <andrew.cooper3@citrix.com>,
    Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel\@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
    Wei Liu <wl@xen.org>,
    Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
    Julien Grall <julien@xen.org>
Subject: Re: [PATCH] x86/build: correctly record dependencies of asm-offsets.s [and 1 more messages]
In-Reply-To: <b3b57f6b-3ed9-18f6-2a87-6af3304c6645@suse.com>,
	<0cbbdb3a-5681-10df-aeee-ac185d7033cc@citrix.com>
References: <b3b57f6b-3ed9-18f6-2a87-6af3304c6645@suse.com>
	<0cbbdb3a-5681-10df-aeee-ac185d7033cc@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Andrew Cooper writes ("Re: [PATCH] x86/build: correctly record dependencies of asm-offsets.s"):
> On 01/02/2021 14:56, Jan Beulich wrote:
> > Going through an intermediate *.new file requires telling the compiler
> > what the real target is, so that the inclusion of the resulting .*.d
> > file will actually be useful.
> >
> > Fixes: 7d2d7a43d014 ("x86/build: limit rebuilding of asm-offsets.h")
> > Reported-by: Julien Grall <julien@xen.org>
> > Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>, whatever the
> outcome of the discussion below.

This is a bugfix and does not need a release ack, but FTAOD

Release-Acked-by: Ian Jackson <iwj@xenproject.org>

> > Already on the original patch I did suggest that perhaps Arm would want
> > to follow suit. So again - perhaps the rules should be unified by moving
> > to common code?

Quite so.

Ian.


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 15:22:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 15:22:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79967.145925 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6b1h-0006Bv-8c; Mon, 01 Feb 2021 15:22:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79967.145925; Mon, 01 Feb 2021 15:22:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6b1h-0006Bo-5L; Mon, 01 Feb 2021 15:22:05 +0000
Received: by outflank-mailman (input) for mailman id 79967;
 Mon, 01 Feb 2021 15:22:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4N3t=HD=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l6b1f-0006Bi-Nl
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 15:22:03 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e6f1b027-791e-4140-974e-c648fca96b60;
 Mon, 01 Feb 2021 15:22:02 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E35BFABD5;
 Mon,  1 Feb 2021 15:22:01 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e6f1b027-791e-4140-974e-c648fca96b60
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612192922; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=sG4LHCqEmYFjhoTglfEjWuO4Ko9nlkVCLkCQZ2mXyvQ=;
	b=TZaxPwXJc9p7X5Y5dAK8oDDnqIoxoXRR1di1ZUM/J+qC5u5UAUprwAj1HsmqdQycnhc6Dt
	eFsQoW/g4OP/OsY4N21zUfIW8aOxIlnMEk3QyHUGRjy2WmthSoSgljCnE4e5XckWEt3R/f
	S7c07wUCIwLED6np/nBKLWDW/9EEKys=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <iwj@xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] ioreq: don't (deliberately) crash Dom0
Message-ID: <1dc6fe4c-3435-462d-a339-085014ae0deb@suse.com>
Date: Mon, 1 Feb 2021 16:22:01 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

We consider this error path of hvm_alloc_ioreq_mfn() impossible to
take, or otherwise an indication of abuse or a bug somewhere. If there
is abuse of some kind, crashing Dom0 here would mean a system-wide DoS.
Only crash the emulator domain if it's not the (global) control domain;
crash only the guest being serviced otherwise.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -274,7 +274,7 @@ static int hvm_alloc_ioreq_mfn(struct hv
          * The domain can't possibly know about this page yet, so failure
          * here is a clear indication of something fishy going on.
          */
-        domain_crash(s->emulator);
+        domain_crash(is_control_domain(s->emulator) ? s->target : s->emulator);
         return -ENODATA;
     }
 


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 15:24:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 15:24:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79970.145944 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6b3l-0006Zs-CO; Mon, 01 Feb 2021 15:24:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79970.145944; Mon, 01 Feb 2021 15:24:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6b3l-0006Zl-8s; Mon, 01 Feb 2021 15:24:13 +0000
Received: by outflank-mailman (input) for mailman id 79970;
 Mon, 01 Feb 2021 15:24:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WfTc=HD=gmail.com=tamas.k.lengyel@srs-us1.protection.inumbo.net>)
 id 1l6b3k-0006ZW-5f
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 15:24:12 +0000
Received: from mail-wr1-x42c.google.com (unknown [2a00:1450:4864:20::42c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id df22a960-350d-491c-864a-28077ccffca7;
 Mon, 01 Feb 2021 15:24:11 +0000 (UTC)
Received: by mail-wr1-x42c.google.com with SMTP id c4so14291951wru.9
 for <xen-devel@lists.xenproject.org>; Mon, 01 Feb 2021 07:24:11 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: df22a960-350d-491c-864a-28077ccffca7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=JomcygMoFAs9Y2GP+k0r376V1/eph2W1tdPkMO1ncJw=;
        b=OBpBjLOxdQLBtqhkZHgcVllHh0btVEdpIxHw6bQ81Fo4MnYpsKNuTAxVyj4cDpzqOv
         22CV6rTZLP7Gb5DsJfWcsHOhm++MoS4qecJuWJxkb8ycpg56CTyxKeEkqOlQBWeyfuhR
         K9w9s0BQ5yjrXTvvI9EU8corn0pouoFLV8l765wntdEczupvsZvxtD+YdpBow0ZyK0Le
         uukBfxd+wM6jJpRBuvXcjRTEE1+r4jvRc90vgaAU9Nyes3Y3kv5SLadhYqQJGyNZyu3K
         1D1ukqcmI/276GBUXTbCadRBpI5bBhzK4CEhEwc78mSk0VOR+/Tr4n8idzjSIRg4ewOf
         +aSw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=JomcygMoFAs9Y2GP+k0r376V1/eph2W1tdPkMO1ncJw=;
        b=fOhxtIu4uvHkMREY28xOV073Jm/h/MiN/ErO6FhH5rwIMdaIy3QbuAE5K5LxK1E58e
         RwvJuHK7bmVnT/JBKk6ylxOF2EmSIGn9zuYfuWcWLHsmARmW1pzffSSurxkbDj32Y4QA
         khAK7N/4cTO2OnHSr79sr8oRHs2FSlbqgBZJP9cvuwOzrBTkXeG8CvTnMc35g92dhttC
         iENyU+zVB94MXciqVOSANUaeziFnWYH465ybKZs8LAOtf0uicGGFNalbZZoDbjUd9bGJ
         IYNcU0M1swVj46tRsIfNugFEhdaOGPExU36raib5zVn2yotTid/h1huWoSnj/TT6p4Ul
         qSYQ==
X-Gm-Message-State: AOAM5328MotRncgje4Y2hyqMPT3nP8+ndRxLlWsjCoyy3Sa4gzpv6gFT
	BV7BbbzFTZb81tQ8L/YdDkRPsyJTy+tmb7ym8DI=
X-Google-Smtp-Source: ABdhPJyTeAodM5OiaoEu+u/eFdsZCWL9qjderSYLEMXGfnhLDaRu1ujwiemKtsV/4CqAzf8Im4wlO/lH56MtY0SbIKU=
X-Received: by 2002:a05:6000:1547:: with SMTP id 7mr18921939wry.301.1612193050345;
 Mon, 01 Feb 2021 07:24:10 -0800 (PST)
MIME-Version: 1.0
References: <CABfawhnvgFLU=VmaqmKyf8DNeVcXoXTD2=XpiwnL0OikC1_z4w@mail.gmail.com>
 <YBc+Iaf1CCgXO0Aj@mattapan.m5p.com> <CABfawhncPyDKy_G2Lz7MaEb_ZoPadHef7S7KAj-fbCbQq6YNGA@mail.gmail.com>
 <YBdgf4KKcDn0SCOw@mattapan.m5p.com> <CABfawhmrWX6tO-bESuF5oGec5cLbkHdyjdCGsfwX5AVrwQBsKA@mail.gmail.com>
 <YBeXfWf8lQ2nwMtI@mattapan.m5p.com>
In-Reply-To: <YBeXfWf8lQ2nwMtI@mattapan.m5p.com>
From: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Date: Mon, 1 Feb 2021 10:23:34 -0500
Message-ID: <CABfawhn74W88nJz5bCvA=VMxX_QqKv1ZaDQxOEtNZu5Pr8mFag@mail.gmail.com>
Subject: Re: Xen 4.14.1 on RPI4: device tree generation failed
To: Elliott Mitchell <ehem+xen@m5p.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Content-Type: text/plain; charset="UTF-8"

On Mon, Feb 1, 2021 at 12:54 AM Elliott Mitchell <ehem+xen@m5p.com> wrote:
>
> On Sun, Jan 31, 2021 at 10:06:21PM -0500, Tamas K Lengyel wrote:
> > With rpi-4.19.y kernel and dtbs
> > (cc39f1c9f82f6fe5a437836811d906c709e0661c) Xen boots fine and the
> > previous error is not present. I get the boot log on the serial with
> > just console=hvc0 from dom0 but the kernel ends up in a panic down the
> > line:
>
> > This seems to have been caused by a monitor being attached to the HDMI
> > port, with HDMI unplugged dom0 boots OK.
>
> The balance of reports seems to suggest 5.10 is the way to go if you want
> graphics on a RP4 with Xen.  Even without Xen, 4.19 is looking rickety on
> RP4.
>
>
> On Sun, Jan 31, 2021 at 09:43:13PM -0500, Tamas K Lengyel wrote:
> > On Sun, Jan 31, 2021 at 8:59 PM Elliott Mitchell <ehem+xen@m5p.com> wrote:
> > >
> > > On Sun, Jan 31, 2021 at 06:50:36PM -0500, Tamas K Lengyel wrote:
> > > > On Sun, Jan 31, 2021 at 6:33 PM Elliott Mitchell <ehem+undef@m5p.com> wrote:
> > > > > Presently the rpixen script is grabbing the RPF's 4.19 branch, dates
> > > > > point to that last being touched last year.  Their tree is at
> > > > > cc39f1c9f82f6fe5a437836811d906c709e0661c.
> > > >
> > > > I've moved the Linux branch up to 5.10 because there had been a fair
> > > > amount of work that went into fixing Xen on RPI4, which got merged
> > > > into 5.9 and I would like to be able to build upstream everything
> > > > without the custom patches coming with the rpixen script repo.
> > >
> > > Please keep track of where your kernel source is checked out, since
> > > there was a desire to figure out what was going on with the device-trees.
> > >
> > >
> > > Including "console=hvc0 console=AMA0 console=ttyS0 console=tty0" in the
> > > kernel command-line should ensure you get output from the kernel if it
> > > manages to start (yes, Linux does support having multiple consoles at the
> > > same time).
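[Editor's aside: on a system that does come up, a quick way to confirm which of those consoles the kernel actually registered is `/proc/consoles`; a minimal sketch, assuming a booted Linux system (flag meanings abbreviated):]

```shell
# /proc/consoles lists each registered console driver with flags,
# e.g. 'E' = enabled, 'C' = preferred console (backs /dev/console).
# With multiple console= arguments, several entries appear here.
if [ -r /proc/consoles ]; then
    cat /proc/consoles
else
    echo "/proc/consoles not available on this system"
fi
```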
> >
> > No output from dom0 received even with the added console options
> > (+earlyprintk=xen). The kernel build was from rpi-5.10.y
> > c9226080e513181ffb3909a905e9c23b8a6e8f62. I'll check if it still boots
> > with 4.19 next.
>
> So, their current HEAD.  This reads like you've got a problematic kernel
> configuration.  What procedure are you following to generate the
> configuration you use?
>
> Using their upstream as a base and then adding the configuration options
> for Xen has worked fairly well for me (`make bcm2711_defconfig`,
> `make menuconfig`, `make zImage`).
>
> Notably the options:
> CONFIG_PARAVIRT
> CONFIG_XEN_DOM0
> CONFIG_XEN
> CONFIG_XEN_BLKDEV_BACKEND
> CONFIG_XEN_NETDEV_BACKEND
> CONFIG_HVC_XEN
> CONFIG_HVC_XEN_FRONTEND
>
> Should be set to "y".

Yes, these configs are all set the same way for all Linux builds by the script:
        make O=.build-arm64 ARCH=arm64 \
            CROSS_COMPILE=aarch64-none-linux-gnu- bcm2711_defconfig xen.config

I tried both the rpi-5.10.y and rpi-5.9.y branches; neither boots up as
dom0. So far only 4.19 boots.

Tamas
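[Editor's aside: the options Elliott listed can be verified mechanically against the generated .config; a minimal sketch, where the build-directory path is an assumption matching the rpixen invocation above:]

```shell
# Report any of the Xen-related options that are not built-in (=y)
# in a given kernel .config file.
check_xen_config() {
    cfg="$1"
    for opt in CONFIG_PARAVIRT CONFIG_XEN_DOM0 CONFIG_XEN \
               CONFIG_XEN_BLKDEV_BACKEND CONFIG_XEN_NETDEV_BACKEND \
               CONFIG_HVC_XEN CONFIG_HVC_XEN_FRONTEND; do
        grep -q "^${opt}=y" "$cfg" || echo "MISSING: $opt"
    done
}

# Example (path is an assumption; point it at your own build tree):
# check_xen_config .build-arm64/.config
```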


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 15:24:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 15:24:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79971.145957 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6b4R-0006xA-B6; Mon, 01 Feb 2021 15:24:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79971.145957; Mon, 01 Feb 2021 15:24:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6b4R-0006x1-4x; Mon, 01 Feb 2021 15:24:55 +0000
Received: by outflank-mailman (input) for mailman id 79971;
 Mon, 01 Feb 2021 15:24:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4N3t=HD=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l6b4Q-0006wJ-FJ
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 15:24:54 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 68241d7c-7b3c-4b6c-bb40-8f12f823bd8a;
 Mon, 01 Feb 2021 15:24:53 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A1AF8AB92;
 Mon,  1 Feb 2021 15:24:52 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 68241d7c-7b3c-4b6c-bb40-8f12f823bd8a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612193092; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=N5ty0huTZtI0G0yLVdIqAvLN3l71HCYk3tPLXHwt23w=;
	b=MpxDllO6j2J8RnFOMAiI5I2f0AzULk0kRY5y82McSo2MijflZP4uFOnsdnfjM9LDJDlC5M
	0iPO3QmTJ2AYUrSSSV8rarPnlHSAHe5HlosSG2OnsrCknUbXXxlZJdZD9nFmvLQX9gQffi
	BwGuiL11YAazDW7iHpAuLPzTplJMegg=
Subject: Re: [PATCH] x86/build: correctly record dependencies of asm-offsets.s
 [and 1 more messages]
To: Ian Jackson <iwj@xenproject.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Julien Grall <julien@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
References: <b3b57f6b-3ed9-18f6-2a87-6af3304c6645@suse.com>
 <0cbbdb3a-5681-10df-aeee-ac185d7033cc@citrix.com>
 <24600.6974.503961.950273@mariner.uk.xensource.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <aed2dfba-3b1c-7e54-7996-766b100375f9@suse.com>
Date: Mon, 1 Feb 2021 16:24:52 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <24600.6974.503961.950273@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 01.02.2021 16:16, Ian Jackson wrote:
> Andrew Cooper writes ("Re: [PATCH] x86/build: correctly record dependencies of asm-offsets.s"):
>> On 01/02/2021 14:56, Jan Beulich wrote:
>>> Going through an intermediate *.new file requires telling the compiler
>>> what the real target is, so that the inclusion of the resulting .*.d
>>> file will actually be useful.
>>>
>>> Fixes: 7d2d7a43d014 ("x86/build: limit rebuilding of asm-offsets.h")
>>> Reported-by: Julien Grall <julien@xen.org>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>
>> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>, whatever the
>> outcome of the discussion below.
> 
> This is a bugfix and does not need a release ack, but FTAOD

Oh, this used to be different in prior releases once we were
past the full freeze point. Are you intending to allow bug fixes
without a release ack until the actual release (minus commit
moratorium periods, of course), or will this change at some
(un?)predictable point?

> Release-Acked-by: Ian Jackson <iwj@xenproject.org>

Thanks.

Jan
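[Editor's aside: the fix under discussion boils down to telling the compiler's dependency generator what the real target is when output goes to an intermediate *.new file, via gcc's -MT/-MQ. A standalone sketch; the file names here are illustrative, not Xen's actual makefile rules:]

```shell
# Compile to an intermediate dep-demo.s.new, but record dep-demo.s as
# the dependency target, so the emitted .dep-demo.s.d rule names the
# real goal rather than the intermediate .new file.
echo '#define DEP_DEMO_VALUE 42' > dep-demo.h
cat > dep-demo.c <<'EOF'
#include "dep-demo.h"
int answer(void) { return DEP_DEMO_VALUE; }
EOF
cc -S -o dep-demo.s.new -MD -MF .dep-demo.s.d -MQ dep-demo.s dep-demo.c
cat .dep-demo.s.d
```

Without -MQ (or -MT), the rule in the .d file would target dep-demo.s.new, so including it in a makefile would never cause dep-demo.s to be rebuilt when dep-demo.h changes.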


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 15:26:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 15:26:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79974.145969 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6b66-0007If-E2; Mon, 01 Feb 2021 15:26:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79974.145969; Mon, 01 Feb 2021 15:26:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6b66-0007IY-9X; Mon, 01 Feb 2021 15:26:38 +0000
Received: by outflank-mailman (input) for mailman id 79974;
 Mon, 01 Feb 2021 15:26:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tQCk=HD=todobom.com=claudemir@srs-us1.protection.inumbo.net>)
 id 1l6b65-0007IT-GO
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 15:26:37 +0000
Received: from mail-qv1-xf2f.google.com (unknown [2607:f8b0:4864:20::f2f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 79b6c5c8-4293-4c94-9f22-bcbea5da011f;
 Mon, 01 Feb 2021 15:26:29 +0000 (UTC)
Received: by mail-qv1-xf2f.google.com with SMTP id u20so4239519qvx.7
 for <xen-devel@lists.xenproject.org>; Mon, 01 Feb 2021 07:26:29 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 79b6c5c8-4293-4c94-9f22-bcbea5da011f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=todobom-com.20150623.gappssmtp.com; s=20150623;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=w9Wav7mSXUzERH0kMWqcOTYflSBldGnTMG9/W+BcZoQ=;
        b=jvj+u16d1v/Pe+A+pbaAkxug+hsptrSDwPoyniy/NOy/TxUJCtjA8JOwxtvflCuchE
         XoTtJyjI2oJ9/Lvtrk1iRMJLDucXOMWKkxil3yjcuHX9KHTgmwOk830zD5tTXa1hFFCT
         YCbixtlegHkn8B8U/aQZuPLOlc56AB9ffYabFhfGIhLNpbqTEAYEn12wmIbhoQXGljm4
         6lBd02I6ZhuEhQeGOZvPzmDBm1LZX87DDkWGuV7OooxWpodHqcXvnvMLGmturPPeQVoK
         0NrGyPGWXqtesXR7FsmkUZiTal/szYt1Oxh7trCKyKyUXnQvLYRQTAjNUB+6LfaPWzNA
         d3EA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=w9Wav7mSXUzERH0kMWqcOTYflSBldGnTMG9/W+BcZoQ=;
        b=ahj5ewOzdnGsNpwpDiwraMj91DM0p15Wy1hNV5uqOfHlvsxwBXeo9PY0wX4A80iVKT
         Uqei0hTs+/A52eL9Y3Vpvv6sD8Apl8t0lfHVWLInPdtxUPiVy3ZAJGJ2XY0gtKk7DK+U
         fd+2mFh1G4lKZQEDVFvn2yU97Sdnk7++rR5t74om9wdzhRVpRjC4KaAi0Wr+KBiZJ8RE
         VDoR7v3+B3VcNjHMNwXLOPtvxIO5Dag/Y+L5Zm+Atdmm58N2t3MCoXwWFFFJU8Dv314e
         mjY5i9MjgbLEabZnmJA+00IplkcJL0l1KeEp/3oYNIIig/7KDrLK0yUGUlXISb+lX2iB
         cdBg==
X-Gm-Message-State: AOAM533gx/So8n3DZgjBOdMRkNzBRYBpA9fgqLB0DFnfEtKBcYTAavHK
	fF1s7CdQBIcvl0sowQsBLEGeguSn+ma5E2fr+uNY6A==
X-Google-Smtp-Source: ABdhPJwDXBkwK/F6a97PZU14V8Mzpmm4GoJNcYqREpyus6peEylawJFH8vMG6Frhfh6DU28nCAtFHkwL7QUjA0r6nHw=
X-Received: by 2002:a0c:f582:: with SMTP id k2mr16088298qvm.46.1612193189297;
 Mon, 01 Feb 2021 07:26:29 -0800 (PST)
MIME-Version: 1.0
References: <CANyqHYfNBHnUiBiXHdt+R3mZ72oYQBnQcaWuKw5gY0uDb_ZqKw@mail.gmail.com>
 <e1d69914-c6bc-40b9-a9f4-33be4bd022b6@suse.com> <CANyqHYcifnCgd5C5vbYoi4CTtoMX5+jzGqHfs6JZ+e=d2Y_dmg@mail.gmail.com>
 <ff799cd4-ba42-e120-107c-5011dc803b5a@suse.com> <609a82d8-af12-4764-c4e0-f5ee0e11c130@suse.com>
 <CANyqHYehUWeNfVXqVJX6nrBS_CcKL1DQjyNVa1cUbvbx+zD83w@mail.gmail.com>
 <9d04edfe-0059-6fbf-c1da-2087f6190e64@suse.com> <CANyqHYfOC6JY978SRPAQ8Ug3GevFD=jbT6bVVET4+QOv8mv7qA@mail.gmail.com>
 <a0a7bbd0-c4c3-cfb8-5af0-a5a4aff14b76@suse.com> <CANyqHYeDR_NUKzPtbfLiUzxAUzerKepbU4B-_6=U-7Y6uy8gpQ@mail.gmail.com>
 <8837c3fb-1e0c-5941-258c-e76551a9e02b@suse.com> <8cf69fb3-5b8c-60ea-bd1c-39a0cbd5cb5c@suse.com>
 <CANyqHYeCQc2bt836uyrtm9Eo2T1uPP-+ups-ygfACu6zK36BQg@mail.gmail.com>
 <bd150f4d-4f7e-082e-6b10-03bf1eca7b80@suse.com> <CANyqHYeHf8f6G+U2z9A0JC049HPYvWQ+WXZYLCQyWyx5Jvq6BA@mail.gmail.com>
 <803a50a9-707f-14db-b523-cd1f6f685ab4@suse.com> <CANyqHYfNjqjm7tFoHD=XDcv_P42wppmx0gjy=--Kz88MZcK6Pw@mail.gmail.com>
 <96a23d4a-b29f-46e2-a0f5-568a5d1f4b9e@suse.com>
In-Reply-To: <96a23d4a-b29f-46e2-a0f5-568a5d1f4b9e@suse.com>
From: Claudemir Todo Bom <claudemir@todobom.com>
Date: Mon, 1 Feb 2021 12:26:17 -0300
Message-ID: <CANyqHYfue3mPKESc6_U79=ckCHrJo6rEJg0TgXi8-g6=peM01A@mail.gmail.com>
Subject: Re: Problems with APIC on versions 4.9 and later (4.8 works)
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Content-Type: multipart/mixed; boundary="0000000000005bcc5605ba47fa81"

--0000000000005bcc5605ba47fa81
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, Feb 1, 2021 at 12:09, Jan Beulich <jbeulich@suse.com> wrote:
>
> On 01.02.2021 15:46, Claudemir Todo Bom wrote:
> > Tested first without the debug patch and with following parameters:
>
> And this test was all three of the non-debugging patches?

Yes, all three patches.

> > xen: dom0_mem=1024M,max:2048M dom0_max_vcpus=4 dom0_vcpus_pin=true smt=true
> > kernel: loglevel=3
> >
> > same behaviour as before... black screen right after the xen messages.
> >
> > adding earlyprintk=xen to the kernel command line is sufficient to
> > make it boot; I imagine this could be happening because Xen is not
> > releasing the console to the kernel at that moment.
>
> If the answer to the above question is "yes", then I start
> suspecting this to be a different problem. I'm not sure I
> see a way to debug this without having access to any output
> (i.e. neither video nor serial). Without "earlyprintk=xen"
> and instead with "vga=keep watchdog" on the Xen command
> line, is there anything helpful (without or if need be with
> the debugging patch in place)?

With "vga=text-80x25,keep watchdog" and without the earlyprintk, the
system booted. I'm attaching the "xl dmesg" and "dmesg" outputs here.

>
> > The system worked well (with earlyprintk=xen), tested with the "yes
> > stress test" mentioned before on a guest and on dom0.
> >
> > Then, I installed the debug patch and booted it again; it also needed
> > the earlyprintk=xen parameter on the kernel command line. I've also
> > added console_timestamps=boot to the Xen command line in order to get
> > timestamps on the messages.
> >
> > I'm attaching the outputs of "xl dmesg" and "dmesg" on this message.
> >
> > I think it is almost done! Will wait for the next round of tests!
>
> As per above, not sure if there's going to be one. Thanks
> for your patient testing!

I can live with "earlyprintk=xen" or any other solution that makes
it boot. I'm pretty sure this problem will show up in tests by other
people with a setup that allows deeper debugging.

Thank you for your work!

Best regards,
Claudemir

--0000000000005bcc5605ba47fa81
Content-Type: text/plain; charset="US-ASCII"; name="xen-dmesg.txt"
Content-Disposition: attachment; filename="xen-dmesg.txt"
Content-Transfer-Encoding: base64
Content-ID: <f_kkmq4i3q0>
X-Attachment-Id: f_kkmq4i3q0

KFhFTikgcGFyYW1ldGVyICJwbGFjZWhvbGRlciIgdW5rbm93biEKKFhFTikgWGVuIHZlcnNpb24g
NC4xMS40IChEZWJpYW4gNC4xMS40KzU3LWc0MWE4MjJjMzkyLTIpIChwa2cteGVuLWRldmVsQGxp
c3RzLmFsaW90aC5kZWJpYW4ub3JnKSAoZ2NjIChEZWJpYW4gOC4zLjAtNikgOC4zLjApIGRlYnVn
PW4gIE1vbiBGZWIgIDEgMTE6MzE6NDQgLTAzIDIwMjEKKFhFTikgQm9vdGxvYWRlcjogR1JVQiAy
LjA0LTEyCihYRU4pIENvbW1hbmQgbGluZTogcGxhY2Vob2xkZXIgZG9tMF9tZW09MTAyNE0sbWF4
OjIwNDhNIGRvbTBfbWF4X3ZjcHVzPTQgZG9tMF92Y3B1c19waW49dHJ1ZSBzbXQ9dHJ1ZSB2Z2E9
dGV4dC04MHgyNSxrZWVwIHdhdGNoZG9nCihYRU4pIFhlbiBpbWFnZSBsb2FkIGJhc2UgYWRkcmVz
czogMHhiYTIwMDAwMAooWEVOKSBWaWRlbyBpbmZvcm1hdGlvbjoKKFhFTikgIFZHQSBpcyB0ZXh0
IG1vZGUgODB4MjUsIGZvbnQgOHgxNgooWEVOKSAgVkJFL0REQyBtZXRob2RzOiBub25lOyBFRElE
IHRyYW5zZmVyIHRpbWU6IDAgc2Vjb25kcwooWEVOKSAgRURJRCBpbmZvIG5vdCByZXRyaWV2ZWQg
YmVjYXVzZSBubyBEREMgcmV0cmlldmFsIG1ldGhvZCBkZXRlY3RlZAooWEVOKSBEaXNjIGluZm9y
bWF0aW9uOgooWEVOKSAgRm91bmQgMSBNQlIgc2lnbmF0dXJlcwooWEVOKSAgRm91bmQgMiBFREQg
aW5mb3JtYXRpb24gc3RydWN0dXJlcwooWEVOKSBYZW4tZTgyMCBSQU0gbWFwOgooWEVOKSAgMDAw
MDAwMDAwMDAwMDAwMCAtIDAwMDAwMDAwMDAwOWU4MDAgKHVzYWJsZSkKKFhFTikgIDAwMDAwMDAw
MDAwOWU4MDAgLSAwMDAwMDAwMDAwMGEwMDAwIChyZXNlcnZlZCkKKFhFTikgIDAwMDAwMDAwMDAw
ZTAwMDAgLSAwMDAwMDAwMDAwMTAwMDAwIChyZXNlcnZlZCkKKFhFTikgIDAwMDAwMDAwMDAxMDAw
MDAgLSAwMDAwMDAwMGJhOTUyMDAwICh1c2FibGUpCihYRU4pICAwMDAwMDAwMGJhOTUyMDAwIC0g
MDAwMDAwMDBiYTk4YjAwMCAocmVzZXJ2ZWQpCihYRU4pICAwMDAwMDAwMGJhOThiMDAwIC0gMDAw
MDAwMDBiYWJjMzAwMCAodXNhYmxlKQooWEVOKSAgMDAwMDAwMDBiYWJjMzAwMCAtIDAwMDAwMDAw
YmIxYzAwMDAgKEFDUEkgTlZTKQooWEVOKSAgMDAwMDAwMDBiYjFjMDAwMCAtIDAwMDAwMDAwYmI4
NDMwMDAgKHJlc2VydmVkKQooWEVOKSAgMDAwMDAwMDBiYjg0MzAwMCAtIDAwMDAwMDAwYmI4NDQw
MDAgKHVzYWJsZSkKKFhFTikgIDAwMDAwMDAwYmI4NDQwMDAgLSAwMDAwMDAwMGJiOGNhMDAwIChB
Q1BJIE5WUykKKFhFTikgIDAwMDAwMDAwYmI4Y2EwMDAgLSAwMDAwMDAwMGJiZDBmMDAwICh1c2Fi
bGUpCihYRU4pICAwMDAwMDAwMGJiZDBmMDAwIC0gMDAwMDAwMDBiYmZmNDAwMCAocmVzZXJ2ZWQp
CihYRU4pICAwMDAwMDAwMGJiZmY0MDAwIC0gMDAwMDAwMDBiYzAwMDAwMCAodXNhYmxlKQooWEVO
KSAgMDAwMDAwMDBkMDAwMDAwMCAtIDAwMDAwMDAwZTAwMDAwMDAgKHJlc2VydmVkKQooWEVOKSAg
MDAwMDAwMDBmZWQxYzAwMCAtIDAwMDAwMDAwZmVkMjAwMDAgKHJlc2VydmVkKQooWEVOKSAgMDAw
MDAwMDBmZjAwMDAwMCAtIDAwMDAwMDAxMDAwMDAwMDAgKHJlc2VydmVkKQooWEVOKSAgMDAwMDAw
MDEwMDAwMDAwMCAtIDAwMDAwMDA0NDAwMDAwMDAgKHVzYWJsZSkKKFhFTikgQUNQSTogUlNEUCAw
MDBGMDRBMCwgMDAyNCAocjIgQUxBU0tBKQooWEVOKSBBQ1BJOiBYU0RUIEJCMEREMDcwLCAwMDVD
IChyMSBBTEFTS0EgICAgQSBNIEkgIDEwNzIwMDkgQU1JICAgICAxMDAxMykKKFhFTikgQUNQSTog
RkFDUCBCQjBFNTcyOCwgMDEwQyAocjUgQUxBU0tBICAgIEEgTSBJICAxMDcyMDA5IEFNSSAgICAg
MTAwMTMpCihYRU4pIEFDUEk6IERTRFQgQkIwREQxNjAsIDg1QzcgKHIyIEFMQVNLQSAgICBBIE0g
SSAgICAgICAyMCBJTlRMIDIwMDUxMTE3KQooWEVOKSBBQ1BJOiBGQUNTIEJCMUI3RjgwLCAwMDQw
CihYRU4pIEFDUEk6IEFQSUMgQkIwRTU4MzgsIDAxQTggKHIzIEFMQVNLQSAgICBBIE0gSSAgMTA3
MjAwOSBBTUkgICAgIDEwMDEzKQooWEVOKSBBQ1BJOiBGUERUIEJCMEU1OUUwLCAwMDQ0IChyMSBB
TEFTS0EgICAgQSBNIEkgIDEwNzIwMDkgQU1JICAgICAxMDAxMykKKFhFTikgQUNQSTogTUNGRyBC
QjBFNUEyOCwgMDAzQyAocjEgQUxBU0tBIE9FTU1DRkcuICAxMDcyMDA5IE1TRlQgICAgICAgOTcp
CihYRU4pIEFDUEk6IEhQRVQgQkIwRTVBNjgsIDAwMzggKHIxIEFMQVNLQSAgICBBIE0gSSAgMTA3
MjAwOSBBTUkuICAgICAgICA1KQooWEVOKSBBQ1BJOiBTU0RUIEJCMEU1QUEwLCBDRDM4MCAocjIg
IElOVEVMICAgIENwdVBtICAgICA0MDAwIElOVEwgMjAwNTExMTcpCihYRU4pIEFDUEk6IERNQVIg
QkIxQjJFMjAsIDAwRUMgKHIxIEEgTSBJICAgT0VNRE1BUiAgICAgICAgMSBJTlRMICAgICAgICAx
KQooWEVOKSBTeXN0ZW0gUkFNOiAxNjMwM01CICgxNjY5NDc2MGtCKQooWEVOKSBEb21haW4gaGVh
cCBpbml0aWFsaXNlZAooWEVOKSBBQ1BJOiAzMi82NFggRkFDUyBhZGRyZXNzIG1pc21hdGNoIGlu
IEZBRFQgLSBiYjFiN2Y4MC8wMDAwMDAwMDAwMDAwMDAwLCB1c2luZyAzMgooWEVOKSBJT0FQSUNb
MF06IGFwaWNfaWQgMCwgdmVyc2lvbiAzMiwgYWRkcmVzcyAweGZlYzAwMDAwLCBHU0kgMC0yMwoo
WEVOKSBJT0FQSUNbMV06IGFwaWNfaWQgMiwgdmVyc2lvbiAzMiwgYWRkcmVzcyAweGZlYzAxMDAw
LCBHU0kgMjQtNDcKKFhFTikgRW5hYmxpbmcgQVBJQyBtb2RlOiAgUGh5cy4gIFVzaW5nIDIgSS9P
IEFQSUNzCihYRU4pIFN3aXRjaGVkIHRvIEFQSUMgZHJpdmVyIHgyYXBpY19jbHVzdGVyCihYRU4p
IHhzdGF0ZTogc2l6ZTogMHgzNDAgYW5kIHN0YXRlczogMHg3CihYRU4pIFNwZWN1bGF0aXZlIG1p
dGlnYXRpb24gZmFjaWxpdGllczoKKFhFTikgICBIYXJkd2FyZSBmZWF0dXJlczoKKFhFTikgICBD
b21waWxlZC1pbiBzdXBwb3J0OiBJTkRJUkVDVF9USFVOSyBTSEFET1dfUEFHSU5HCihYRU4pICAg
WGVuIHNldHRpbmdzOiBCVEktVGh1bmsgUkVUUE9MSU5FLCBTUEVDX0NUUkw6IE5vLCBPdGhlcjoK
KFhFTikgICBMMVRGOiBiZWxpZXZlZCB2dWxuZXJhYmxlLCBtYXhwaHlzYWRkciBMMUQgNDYsIENQ
VUlEIDQ2LCBTYWZlIGFkZHJlc3MgMzAwMDAwMDAwMDAwCihYRU4pICAgU3VwcG9ydCBmb3IgVk1z
OiBQVjogUlNCIEVBR0VSX0ZQVSwgSFZNOiBSU0IgRUFHRVJfRlBVCihYRU4pICAgWFBUSSAoNjQt
Yml0IFBWIG9ubHkpOiBEb20wIGVuYWJsZWQsIERvbVUgZW5hYmxlZAooWEVOKSAgIFBWIEwxVEYg
c2hhZG93aW5nOiBEb20wIGRpc2FibGVkLCBEb21VIGVuYWJsZWQKKFhFTikgVXNpbmcgc2NoZWR1
bGVyOiBTTVAgQ3JlZGl0IFNjaGVkdWxlciAoY3JlZGl0KQooWEVOKSBQbGF0Zm9ybSB0aW1lciBp
cyAxNC4zMThNSHogSFBFVAooWEVOKSBEZXRlY3RlZCAyNDk0LjM0MSBNSHogcHJvY2Vzc29yLgoo
WEVOKSBJbml0aW5nIG1lbW9yeSBzaGFyaW5nLgooWEVOKSBJbnRlbCBWVC1kIGlvbW11IDAgc3Vw
cG9ydGVkIHBhZ2Ugc2l6ZXM6IDRrQiwgMk1CLCAxR0IuCihYRU4pIEludGVsIFZULWQgU25vb3Ag
Q29udHJvbCBlbmFibGVkLgooWEVOKSBJbnRlbCBWVC1kIERvbTAgRE1BIFBhc3N0aHJvdWdoIG5v
dCBlbmFibGVkLgooWEVOKSBJbnRlbCBWVC1kIFF1ZXVlZCBJbnZhbGlkYXRpb24gZW5hYmxlZC4K
KFhFTikgSW50ZWwgVlQtZCBJbnRlcnJ1cHQgUmVtYXBwaW5nIGVuYWJsZWQuCihYRU4pIEludGVs
IFZULWQgUG9zdGVkIEludGVycnVwdCBub3QgZW5hYmxlZC4KKFhFTikgSW50ZWwgVlQtZCBTaGFy
ZWQgRVBUIHRhYmxlcyBlbmFibGVkLgooWEVOKSBJL08gdmlydHVhbGlzYXRpb24gZW5hYmxlZAoo
WEVOKSAgLSBEb20wIG1vZGU6IFJlbGF4ZWQKKFhFTikgSW50ZXJydXB0IHJlbWFwcGluZyBlbmFi
bGVkCihYRU4pIEVuYWJsZWQgZGlyZWN0ZWQgRU9JIHdpdGggaW9hcGljX2Fja19vbGQgb24hCihY
RU4pIEVOQUJMSU5HIElPLUFQSUMgSVJRcwooWEVOKSAgLT4gVXNpbmcgb2xkIEFDSyBtZXRob2QK
KFhFTikgVFNDOiBjPTEgcj0xCihYRU4pIElOSVRbIDBdIHQ9MDAwMDAwMTllZGNiODc2YiBzPTAw
MDAyZTZjMDBmNiBtPTAwMDAyZTZjMDI5MQooWEVOKSBBbGxvY2F0ZWQgY29uc29sZSByaW5nIG9m
IDY0IEtpQi4KKFhFTikgVk1YOiBTdXBwb3J0ZWQgYWR2YW5jZWQgZmVhdHVyZXM6CihYRU4pICAt
IEFQSUMgTU1JTyBhY2Nlc3MgdmlydHVhbGlzYXRpb24KKFhFTikgIC0gQVBJQyBUUFIgc2hhZG93
CihYRU4pICAtIEV4dGVuZGVkIFBhZ2UgVGFibGVzIChFUFQpCihYRU4pICAtIFZpcnR1YWwtUHJv
Y2Vzc29yIElkZW50aWZpZXJzIChWUElEKQooWEVOKSAgLSBWaXJ0dWFsIE5NSQooWEVOKSAgLSBN
U1IgZGlyZWN0LWFjY2VzcyBiaXRtYXAKKFhFTikgIC0gVW5yZXN0cmljdGVkIEd1ZXN0CihYRU4p
ICAtIEFQSUMgUmVnaXN0ZXIgVmlydHVhbGl6YXRpb24KKFhFTikgIC0gVmlydHVhbCBJbnRlcnJ1
cHQgRGVsaXZlcnkKKFhFTikgIC0gUG9zdGVkIEludGVycnVwdCBQcm9jZXNzaW5nCihYRU4pIEhW
TTogQVNJRHMgZW5hYmxlZC4KKFhFTikgVk1YOiBEaXNhYmxpbmcgZXhlY3V0YWJsZSBFUFQgc3Vw
ZXJwYWdlcyBkdWUgdG8gQ1ZFLTIwMTgtMTIyMDcKKFhFTikgSFZNOiBWTVggZW5hYmxlZAooWEVO
KSBIVk06IEhhcmR3YXJlIEFzc2lzdGVkIFBhZ2luZyAoSEFQKSBkZXRlY3RlZAooWEVOKSBIVk06
IEhBUCBwYWdlIHNpemVzOiA0a0IsIDJNQiwgMUdCCihYRU4pIElOSVRbIDFdIHQ9MDAwMDAwMWFi
YmI5NDRiZSBzPTAwMDAyZmVjNzIyOSBtPTAwMDAyZmVjNzQxZgooWEVOKSBJTklUWyAyXSB0PTAw
MDAwMDFhYmJmNWUyZDkgcz0wMDAwMzAwNGJmOGQgbT0wMDAwMzAwNGMxNzMKKFhFTikgSU5JVFsg
M10gdD0wMDAwMDAxYWJjMzU5MGU5IHM9MDAwMDMwMWU0Nzg5IG09MDAwMDMwMWU0OTU3CihYRU4p
IElOSVRbIDRdIHQ9MDAwMDAwMWFiYzcwNDFlNCBzPTAwMDAzMDM1Y2VmYyBtPTAwMDAzMDM1ZDBl
MQooWEVOKSBJTklUWyA1XSB0PTAwMDAwMDFhYmNhYjYwMTAgcz0wMDAwMzA0ZDgyNjcgbT0wMDAw
MzA0ZDg0NTcKKFhFTikgSU5JVFsgNl0gdD0wMDAwMDAxYWJjZTYyNDUzIHM9MDAwMDMwNjUxMWI4
IG09MDAwMDMwNjUxMzg0CihYRU4pIElOSVRbIDddIHQ9MDAwMDAwMWFiZDIxNzhjMyBzPTAwMDAz
MDdjZGFjYiBtPTAwMDAzMDdjZGNjZQooWEVOKSBJTklUWyA4XSB0PTAwMDAwMDFhYmQ1Y2U3OTcg
cz0wMDAwMzA5NGFlN2YgbT0wMDAwMzA5NGIwNzUKKFhFTikgSU5JVFsgOV0gdD0wMDAwMDAxYWJk
OThhYjgzIHM9MDAwMDMwYWNhNDcyIG09MDAwMDMwYWNhNjdjCihYRU4pIElOSVRbMTBdIHQ9MDAw
MDAwMWFiZGQ0NGNjYyBzPTAwMDAzMGM0OGM1YSBtPTAwMDAzMGM0OGU1NAooWEVOKSBJTklUWzEx
XSB0PTAwMDAwMDFhYmUxMDZhODQgcz0wMDAwMzBkY2E2N2QgbT0wMDAwMzBkY2E4NWYKKFhFTikg
SU5JVFsxMl0gdD0wMDAwMDAxYWJlNGM0OGJmIHM9MDAwMDMwZjRhNjlhIG09MDAwMDMwZjRhOGM0
CihYRU4pIElOSVRbMTNdIHQ9MDAwMDAwMWFiZThiZDRlNyBzPTAwMDAzMTBlMjE4MSBtPTAwMDAz
MTBlMjJiZQooWEVOKSBJTklUWzE0XSB0PTAwMDAwMDFhYmVjN2NlMmQgcz0wMDAwMzEyNjJiZTMg
bT0wMDAwMzEyNjJlMGQKKFhFTikgSU5JVFsxNV0gdD0wMDAwMDAxYWJmMDNhY2Y5IHM9MDAwMDMx
M2UyYzc5IG09MDAwMDMxM2UyZWI4CihYRU4pIElOSVRbMTZdIHQ9MDAwMDAwMWFiZjNmNDNhYSBz
PTAwMDAzMTU2MTA3MCBtPTAwMDAzMTU2MTI3OAooWEVOKSBJTklUWzE3XSB0PTAwMDAwMDFhYmY3
YjBhZjIgcz0wMDAwMzE2ZTA3ODggbT0wMDAwMzE2ZTA5OTcKKFhFTikgSU5JVFsxOF0gdD0wMDAw
MDAxYWJmYjZiZjI3IHM9MDAwMDMxODVmNmU4IG09MDAwMDMxODVmOTEyCihYRU4pIElOSVRbMTld
IHQ9MDAwMDAwMWFiZmYyNjNjYiBzPTAwMDAzMTlkZTA4YiBtPTAwMDAzMTlkZTI4ZAooWEVOKSBJ
TklUWzIwXSB0PTAwMDAwMDFhYzAyZGZmNmUgcz0wMDAwMzFiNWM2MDMgbT0wMDAwMzFiNWM4MzYK
KFhFTikgSU5JVFsyMV0gdD0wMDAwMDAxYWMwNjlkNDVhIHM9MDAwMDMxY2RjMmEzIG09MDAwMDMx
Y2RjNGM5CihYRU4pIElOSVRbMjJdIHQ9MDAwMDAwMWFjMGE1YTY1NCBzPTAwMDAzMWU1YmU1YyBt
PTAwMDAzMWU1YzA4YgooWEVOKSBJTklUWzIzXSB0PTAwMDAwMDFhYzBlMWFmNjQgcz0wMDAwMzFm
ZGNmYzggbT0wMDAwMzFmZGQyMjEKKFhFTikgQnJvdWdodCB1cCAyNCBDUFVzCihYRU4pIFRlc3Rp
bmcgTk1JIHdhdGNoZG9nIG9uIGFsbCBDUFVzOiBvawooWEVOKSBDSEtbIDBdIGNhMmVjY2Y4CihY
RU4pIGNoa1sgICAwXSBDUFUwICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDE5ZmY0ZmFiNGIgIy0x
CihYRU4pIGNoa1sgICAxXSBDUFUxICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDFhYzk3ZTdhYmEg
Iy0xCihYRU4pIGNoa1sgICAyXSBDUFUxMiAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDFhYzk3ZTdi
NjEgIy0xCihYRU4pIGNoa1sgICAzXSBDUFU3ICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDFhYzk3
ZTdjNDMgIy0xCihYRU4pIGNoa1sgICA0XSBDUFU2ICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDFh
Yzk3ZTdjMmYgIy0xCihYRU4pIGNoa1sgICA1XSBDUFUxNCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDFhYzk3ZTdjMTggIy0xCihYRU4pIGNoa1sgICA2XSBDUFUxOCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDFhYzk3ZTdkNzcgIy0xCihYRU4pIGNoa1sgICA3XSBDUFUzICAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDFhYzk3ZTdjNTMgIy0xCihYRU4pIGNoa1sgICA4XSBDUFUyICAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDFhYzk3ZTdjNWYgIy0xCihYRU4pIGNoa1sgICA5XSBDUFUxMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDFhYzk3ZTdkNzcgIy0xCihYRU4pIGNoa1sgIDEwXSBDUFU5ICAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDFhYzk3ZTdkNTcgIy0xCihYRU4pIGNoa1sgIDExXSBDUFUyMiAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDFhYzk3ZTdkYzQgIy0xCihYRU4pIGNoa1sgIDEyXSBDUFUyMyAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDFhYzk3ZTdkYjggIy0xCihYRU4pIGNoa1sgIDEzXSBDUFUx
NiAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDFhYzk3ZTdkMmQgIy0xCihYRU4pIGNoa1sgIDE0XSBD
UFUxOSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDFhYzk3ZTdkODMgIy0xCihYRU4pIGNoa1sgIDE1
XSBDUFUxMSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDFhYzk3ZTdkNmIgIy0xCihYRU4pIGNoa1sg
IDE2XSBDUFUyMSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDFhYzk3ZTdjNTYgIy0xCihYRU4pIGNo
a1sgIDE3XSBDUFUxMyAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDFhYzk3ZTdiNTUgIy0xCihYRU4p
IGNoa1sgIDE4XSBDUFUxNSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDFhYzk3ZTdjMTAgIy0xCihY
RU4pIGNoa1sgIDE5XSBDUFU4ICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDFhYzk3ZTdkNjMgIy0x
CihYRU4pIGNoa1sgIDIwXSBDUFUxNyAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDFhYzk3ZTdkMjEg
Iy0xCihYRU4pIGNoa1sgIDIxXSBDUFU1ICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDFhYzk3ZTdk
ZjAgIy0xCihYRU4pIGNoa1sgIDIyXSBDUFU0ICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDFhYzk3
ZTdkZmMgIy0xCihYRU4pIGNoa1sgIDIzXSBDUFUyMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDFh
Yzk3ZTdjNWUgIy0xCihYRU4pIGNoa1sgIDI0XSBDUFUwICAwMDAwMDAxYWM5N2U5MDJhIDAwMDAw
MDE5ZmY0ZmU2NjMgIzEKKFhFTikgY2hrWyAgMjVdIENQVTAgIDAwMDAwMDFhYzk3ZjIxNzMgMDAw
MDAwMTlmZjUwNTRmZiAjMgooWEVOKSBjaGtbICAyNl0gQ1BVMCAgMDAwMDAwMWFjOTgxMWJjYiAw
MDAwMDAxOWZmNTI0ZjJiICM3CihYRU4pIGNoa1sgIDI3XSBDUFUwICAwMDAwMDAxYWM5ODMxNDdi
IDAwMDAwMDE5ZmY1NDQ3ZDcgIzEyCihYRU4pIGNoa1sgIDI4XSBDUFUwICAwMDAwMDAxYWM5ODQ0
MWUzIDAwMDAwMDE5ZmY1NTc1MzMgIzE1CihYRU4pIGNoa1sgIDI5XSBDUFUwICAwMDAwMDAxYWM5
OTNlYzU3IDAwMDAwMDE5ZmY2NTFmOWYgIzU1CihYRU4pIGNoa1sgIDMwXSBDUFUwICAwMDAwMDAx
YWNhMGQ1OWFiIDAwMDAwMDE5ZmZkZThjZWYgIzM2NgooWEVOKSBjaGtbICAzMV0gQ1BVMCAgMDAw
MDAwMWFjYTQ1ZDg3MyAwMDAwMDAxYTAwMTcwYmIzICM1MTAKKFhFTikgY2hrWyAgMzJdIENQVTAg
IDAwMDAwMDFhY2M3N2UzZGYgMDAwMDAwMWEwMjQ5MTZlNyAjMTkzNAooWEVOKSBUU0Mgd2FycCBk
ZXRlY3RlZCwgZGlzYWJsaW5nIFRTQ19SRUxJQUJMRQooWEVOKSBUU0M6IGM9MSByPTAKKFhFTikg
VFNDOiB0aW1lLmMjdGltZV9jYWxpYnJhdGlvbl90c2NfcmVuZGV6dm91cwooWEVOKSBSRFpWWyAw
XSB0PTAwMDAwMDFhMGEwNTdiNTcKKFhFTikgUkRaVlsgMV0gdD0wMDAwMDAxYWQ0MzQ1MjllCihY
RU4pIFJEWlZbIDNdIHQ9MDAwMDAwMWFkNDM1NjBiMgooWEVOKSBSRFpWWyAyXSB0PTAwMDAwMDFh
ZDQzNTYwYWEKKFhFTikgUkRaVlsgNF0gdD0wMDAwMDAxYWQ0MzU2OTNiCihYRU4pIFJEWlZbIDVd
IHQ9MDAwMDAwMWFkNDM1Njk0MwooWEVOKSBSRFpWWyA3XSB0PTAwMDAwMDFhZDQzNTcyZmMKKFhF
TikgUkRaVlsgNl0gdD0wMDAwMDAxYWQ0MzU3MzA0CihYRU4pIFJEWlZbIDldIHQ9MDAwMDAwMWFk
NDM1ODBhNgooWEVOKSBSRFpWWyA4XSB0PTAwMDAwMDFhZDQzNTgwOWUKKFhFTikgUkRaVlsxMV0g
dD0wMDAwMDAxYWQ0MzU4Y2FjCihYRU4pIFJEWlZbMTBdIHQ9MDAwMDAwMWFkNDM1OGNiNAooWEVO
KSBSRFpWWzEzXSB0PTAwMDAwMDFhZDQzNTk2MDUKKFhFTikgUkRaVlsxMl0gdD0wMDAwMDAxYWQ0
MzU5NjBkCihYRU4pIFJEWlZbMTRdIHQ9MDAwMDAwMWFkNDM1OWZmZgooWEVOKSBSRFpWWzE1XSB0
PTAwMDAwMDFhZDQzNWEwMDcKKFhFTikgUkRaVlsxN10gdD0wMDAwMDAxYWQ0MzVhODUzCihYRU4p
IFJEWlZbMTZdIHQ9MDAwMDAwMWFkNDM1YTg1NwooWEVOKSBSRFpWWzE4XSB0PTAwMDAwMDFhZDQz
NWIxZGEKKFhFTikgUkRaVlsxOV0gdD0wMDAwMDAxYWQ0MzViMWQyCihYRU4pIFJEWlZbMjFdIHQ9
MDAwMDAwMWFkNDM1YmIzOAooWEVOKSBSRFpWWzIwXSB0PTAwMDAwMDFhZDQzNWJiM2MKKFhFTikg
UkRaVlsyM10gdD0wMDAwMDAxYWQ0MzVjNThlCihYRU4pIFJEWlZbMjJdIHQ9MDAwMDAwMWFkNDM1
YzU5YQooWEVOKSBSRFpWWzIyXSB0PTAwMDAwMDFhZDkyOWI1NjcoMDAwMDAwMWFkOTI5YjU2Nykg
cz0wMDAwM2JiOWRiMmEoMDAwMDNiYmEyMGNmKQooWEVOKSBSRFpWWzIzXSB0PTAwMDAwMDFhZDky
OWI1NjcoMDAwMDAwMWFkOTI5YjU2Nykgcz0wMDAwM2JiOWRiMDkoMDAwMDNiYmEyMGNmKQooWEVO
KSBSRFpWWzIwXSB0PTAwMDAwMDFhZDkyOWI1NjcoMDAwMDAwMWFkOTI5YjU2Nykgcz0wMDAwM2Ji
OWRiMDYoMDAwMDNiYmEyMGNmKQooWEVOKSBSRFpWWzEyXSB0PTAwMDAwMDFhZDkyOWI1NjcoMDAw
MDAwMWFkOTI5YjU2Nykgcz0wMDAwM2JiOWRhZTgoMDAwMDNiYmEyMGNmKQooWEVOKSBSRFpWWzEz
XSB0PTAwMDAwMDFhZDkyOWI1NjcoMDAwMDAwMWFkOTI5YjU2Nykgcz0wMDAwM2JiOWRiYzEoMDAw
MDNiYmEyMGNmKQooWEVOKSBSRFpWWyA4XSB0PTAwMDAwMDFhZDkyOWI1NjcoMDAwMDAwMWFkOTI5
YjU2Nykgcz0wMDAwM2JiOWRhZTcoMDAwMDNiYmEyMGNmKQooWEVOKSBSRFpWWyA5XSB0PTAwMDAw
MDFhZDkyOWI1NjcoMDAwMDAwMWFkOTI5YjU2Nykgcz0wMDAwM2JiOWRhZmQoMDAwMDNiYmEyMGNm
KQooWEVOKSBSRFpWWzE0XSB0PTAwMDAwMDFhZDkyOWI1NjcoMDAwMDAwMWFkOTI5YjU2Nykgcz0w
MDAwM2JiOWRiMDYoMDAwMDNiYmEyMGNmKQooWEVOKSBSRFpWWzE1XSB0PTAwMDAwMDFhZDkyOWI1
NjcoMDAwMDAwMWFkOTI5YjU2Nykgcz0wMDAwM2JiOWRhZmYoMDAwMDNiYmEyMGNmKQooWEVOKSBS
RFpWWyA2XSB0PTAwMDAwMDFhZDkyOWI1NjcoMDAwMDAwMWFkOTI5YjU2Nykgcz0wMDAwM2JiOWRi
MjEoMDAwMDNiYmEyMGNmKQooWEVOKSBSRFpWWyA3XSB0PTAwMDAwMDFhZDkyOWI1NjcoMDAwMDAw
MWFkOTI5YjU2Nykgcz0wMDAwM2JiOWRiMDkoMDAwMDNiYmEyMGNmKQooWEVOKSBSRFpWWzIxXSB0
PTAwMDAwMDFhZDkyOWI1NjcoMDAwMDAwMWFkOTI5YjU2Nykgcz0wMDAwM2JiOWRhZjIoMDAwMDNi
YmEyMGNmKQooWEVOKSBSRFpWWzE3XSB0PTAwMDAwMDFhZDkyOWI1NjcoMDAwMDAwMWFkOTI5YjU2
Nykgcz0wMDAwM2JiOWRiYzgoMDAwMDNiYmEyMGNmKQooWEVOKSBSRFpWWzE2XSB0PTAwMDAwMDFh
ZDkyOWI1NjcoMDAwMDAwMWFkOTI5YjU2Nykgcz0wMDAwM2JiOWRiZTkoMDAwMDNiYmEyMGNmKQoo
WEVOKSBSRFpWWyAyXSB0PTAwMDAwMDFhZDkyOWI1NjcoMDAwMDAwMWFkOTI5YjU2Nykgcz0wMDAw
M2JiOWRhY2MoMDAwMDNiYmEyMGNmKQooWEVOKSBSRFpWWyAxXSB0PTAwMDAwMDFhZDkyOWI1Njco
MDAwMDAwMWFkOTI5YjU2Nykgcz0wMDAwM2JiOWRhOTEoMDAwMDNiYmEyMGNmKQooWEVOKSBSRFpW
WzE5XSB0PTAwMDAwMDFhZDkyOWI1NjcoMDAwMDAwMWFkOTI5YjU2Nykgcz0wMDAwM2JiOWRiNTIo
MDAwMDNiYmEyMGNmKQooWEVOKSBSRFpWWzE4XSB0PTAwMDAwMDFhZDkyOWI1NjcoMDAwMDAwMWFk
OTI5YjU2Nykgcz0wMDAwM2JiOWRhZmMoMDAwMDNiYmEyMGNmKQooWEVOKSBSRFpWWyAzXSB0PTAw
MDAwMDFhZDkyOWI1NjcoMDAwMDAwMWFkOTI5YjU2Nykgcz0wMDAwM2JiOWRiMjUoMDAwMDNiYmEy
MGNmKQooWEVOKSBSRFpWWzEwXSB0PTAwMDAwMDFhZDkyOWI1NjcoMDAwMDAwMWFkOTI5YjU2Nykg
cz0wMDAwM2JiOWRiYzYoMDAwMDNiYmEyMGNmKQooWEVOKSBSRFpWWzExXSB0PTAwMDAwMDFhZDky
OWI1NjcoMDAwMDAwMWFkOTI5YjU2Nykgcz0wMDAwM2JiOWRjMDYoMDAwMDNiYmEyMGNmKQooWEVO
KSBSRFpWWyAwXSB0PTAwMDAwMDFhZDkyOWI1NjcoMDAwMDAwMWFkOTI5YjU2Nykgcz0wMDAwM2Ji
OWQ5NWQoMDAwMDNiYmEyMGNmKQooWEVOKSBSRFpWWyA0XSB0PTAwMDAwMDFhZDkyOWI1NjcoMDAw
MDAwMWFkOTI5YjU2Nykgcz0wMDAwM2JiOWRjODkoMDAwMDNiYmEyMGNmKQooWEVOKSBSRFpWWyA1
XSB0PTAwMDAwMDFhZDkyOWI1NjcoMDAwMDAwMWFkOTI5YjU2Nykgcz0wMDAwM2JiOWRjOTAoMDAw
MDNiYmEyMGNmKQooWEVOKSBUU0MgYWRqdXN0ZWQgYnkgY2EyZWQ2ODQKKFhFTikgVFNDOiBlbmQg
cmVuZGV6dm91cwooWEVOKSBtdHJyOiB5b3VyIENQVXMgaGFkIGluY29uc2lzdGVudCBmaXhlZCBN
VFJSIHNldHRpbmdzCihYRU4pIERvbTAgaGFzIG1heGltdW0gODE2IFBJUlFzCihYRU4pICBYZW4g
IGtlcm5lbDogNjQtYml0LCBsc2IsIGNvbXBhdDMyCihYRU4pICBEb20wIGtlcm5lbDogNjQtYml0
LCBQQUUsIGxzYiwgcGFkZHIgMHgxMDAwMDAwIC0+IDB4MmMyYzAwMAooWEVOKSBQSFlTSUNBTCBN
RU1PUlkgQVJSQU5HRU1FTlQ6CihYRU4pICBEb20wIGFsbG9jLjogICAwMDAwMDAwNDI4MDAwMDAw
LT4wMDAwMDAwNDJjMDAwMDAwICgyMzcwNjcgcGFnZXMgdG8gYmUgYWxsb2NhdGVkKQooWEVOKSAg
SW5pdC4gcmFtZGlzazogMDAwMDAwMDQzZGUwYjAwMC0+MDAwMDAwMDQzZmZmZjY2NAooWEVOKSBW
SVJUVUFMIE1FTU9SWSBBUlJBTkdFTUVOVDoKKFhFTikgIExvYWRlZCBrZXJuZWw6IGZmZmZmZmZm
ODEwMDAwMDAtPmZmZmZmZmZmODJjMmMwMDAKKFhFTikgIEluaXQuIHJhbWRpc2s6IDAwMDAwMDAw
MDAwMDAwMDAtPjAwMDAwMDAwMDAwMDAwMDAKKFhFTikgIFBoeXMtTWFjaCBtYXA6IDAwMDAwMDgw
MDAwMDAwMDAtPjAwMDAwMDgwMDAyMDAwMDAKKFhFTikgIFN0YXJ0IGluZm86ICAgIGZmZmZmZmZm
ODJjMmMwMDAtPmZmZmZmZmZmODJjMmM0YjgKKFhFTikgIFhlbnN0b3JlIHJpbmc6IDAwMDAwMDAw
MDAwMDAwMDAtPjAwMDAwMDAwMDAwMDAwMDAKKFhFTikgIENvbnNvbGUgcmluZzogIDAwMDAwMDAw
MDAwMDAwMDAtPjAwMDAwMDAwMDAwMDAwMDAKKFhFTikgIFBhZ2UgdGFibGVzOiAgIGZmZmZmZmZm
ODJjMmQwMDAtPmZmZmZmZmZmODJjNDgwMDAKKFhFTikgIEJvb3Qgc3RhY2s6ICAgIGZmZmZmZmZm
ODJjNDgwMDAtPmZmZmZmZmZmODJjNDkwMDAKKFhFTikgIFRPVEFMOiAgICAgICAgIGZmZmZmZmZm
ODAwMDAwMDAtPmZmZmZmZmZmODMwMDAwMDAKKFhFTikgIEVOVFJZIEFERFJFU1M6IGZmZmZmZmZm
ODI4MmExNjAKKFhFTikgRG9tMCBoYXMgbWF4aW11bSA0IFZDUFVzCihYRU4pIFRTQzogdGltZS5j
I3RpbWVfY2FsaWJyYXRpb25fdHNjX3JlbmRlenZvdXMKKFhFTikgUkRaVlsgMF0gdD0wMDAwMDAx
YjdkNWY0MGFiCihYRU4pIFJEWlZbIDFdIHQ9MDAwMDAwMWI3ZDVmNDU1NwooWEVOKSBSRFpWWyAy
XSB0PTAwMDAwMDFiN2Q2MDU5MjQKKFhFTikgUkRaVlsgM10gdD0wMDAwMDAxYjdkNjA1OTE0CihY
RU4pIFJEWlZbIDVdIHQ9MDAwMDAwMWI3ZDYwNWYwMgooWEVOKSBSRFpWWyA0XSB0PTAwMDAwMDFi
N2Q2MDVmMTYKKFhFTikgUkRaVlsgNl0gdD0wMDAwMDAxYjdkNjA2YTRlCihYRU4pIFJEWlZbIDdd
IHQ9MDAwMDAwMWI3ZDYwNmE0YQooWEVOKSBSRFpWWyA4XSB0PTAwMDAwMDFiN2Q2MDc4NWEKKFhF
TikgUkRaVlsgOV0gdD0wMDAwMDAxYjdkNjA3N2I2CihYRU4pIFJEWlZbMTBdIHQ9MDAwMDAwMWI3
ZDYwODJhYQooWEVOKSBSRFpWWzExXSB0PTAwMDAwMDFiN2Q2MDgyOTIKKFhFTikgUkRaVlsxMl0g
dD0wMDAwMDAxYjdkNjA4Y2UyCihYRU4pIFJEWlZbMTNdIHQ9MDAwMDAwMWI3ZDYwOGQ3MgooWEVO
KSBSRFpWWzE1XSB0PTAwMDAwMDFiN2Q2MDk3ODkKKFhFTikgUkRaVlsxNF0gdD0wMDAwMDAxYjdk
NjA5NzhkCihYRU4pIFJEWlZbMTddIHQ9MDAwMDAwMWI3ZDYwOWYxMgooWEVOKSBSRFpWWzE2XSB0
PTAwMDAwMDFiN2Q2MDllZmEKKFhFTikgUkRaVlsxOV0gdD0wMDAwMDAxYjdkNjBhODhjCihYRU4p
IFJEWlZbMThdIHQ9MDAwMDAwMWI3ZDYwYTg4YwooWEVOKSBSRFpWWzIxXSB0PTAwMDAwMDFiN2Q2
MGIxMmIKKFhFTikgUkRaVlsyMF0gdD0wMDAwMDAxYjdkNjBiMTEzCihYRU4pIFJEWlZbMjJdIHQ9
MDAwMDAwMWI3ZDYwYmFhZQooWEVOKSBSRFpWWzIzXSB0PTAwMDAwMDFiN2Q2MGJhYWEKKFhFTikg
UkRaVlsgMF0gdD0wMDAwMDAxYjgyNTQ4Nzg0KDAwMDAwMDFiODI1NDg3ODQpIHM9MDAwMDdmOGJk
ZDFlKDAwMDA3ZjhjN2M3YikKKFhFTikgUkRaVlsgMV0gdD0wMDAwMDAxYjgyNTQ4Nzg0KDAwMDAw
MDFiODI1NDg3ODQpIHM9MDAwMDdmOGJkZjExKDAwMDA3ZjhjN2M3YikKKFhFTikgUkRaVlsgN10g
dD0wMDAwMDAxYjgyNTQ4Nzg0KDAwMDAwMDFiODI1NDg3ODQpIHM9MDAwMDdmOGJkZjZjKDAwMDA3
ZjhjN2M3YikKKFhFTikgUkRaVlsgNl0gdD0wMDAwMDAxYjgyNTQ4Nzg0KDAwMDAwMDFiODI1NDg3
ODQpIHM9MDAwMDdmOGJkZjdkKDAwMDA3ZjhjN2M3YikKKFhFTikgUkRaVlsgOV0gdD0wMDAwMDAx
YjgyNTQ4Nzg0KDAwMDAwMDFiODI1NDg3ODQpIHM9MDAwMDdmOGJkZWZjKDAwMDA3ZjhjN2M3YikK
KFhFTikgUkRaVlsgOF0gdD0wMDAwMDAxYjgyNTQ4Nzg0KDAwMDAwMDFiODI1NDg3ODQpIHM9MDAw
MDdmOGJkZjIwKDAwMDA3ZjhjN2M3YikKKFhFTikgUkRaVlsgNF0gdD0wMDAwMDAxYjgyNTQ4Nzg0
KDAwMDAwMDFiODI1NDg3ODQpIHM9MDAwMDdmOGJlMGE1KDAwMDA3ZjhjN2M3YikKKFhFTikgUkRa
VlsxM10gdD0wMDAwMDAxYjgyNTQ4Nzg0KDAwMDAwMDFiODI1NDg3ODQpIHM9MDAwMDdmOGJlMDEy
KDAwMDA3ZjhjN2M3YikKKFhFTikgUkRaVlsxMl0gdD0wMDAwMDAxYjgyNTQ4Nzg0KDAwMDAwMDFi
ODI1NDg3ODQpIHM9MDAwMDdmOGJkZWYyKDAwMDA3ZjhjN2M3YikKKFhFTikgUkRaVlsxNF0gdD0w
MDAwMDAxYjgyNTQ4Nzg0KDAwMDAwMDFiODI1NDg3ODQpIHM9MDAwMDdmOGJkZjUyKDAwMDA3Zjhj
N2M3YikKKFhFTikgUkRaVlsxNV0gdD0wMDAwMDAxYjgyNTQ4Nzg0KDAwMDAwMDFiODI1NDg3ODQp
IHM9MDAwMDdmOGJkZjRiKDAwMDA3ZjhjN2M3YikKKFhFTikgUkRaVlsxN10gdD0wMDAwMDAxYjgy
NTQ4Nzg0KDAwMDAwMDFiODI1NDg3ODQpIHM9MDAwMDdmOGJlMTA4KDAwMDA3ZjhjN2M3YikKKFhF
TikgUkRaVlsxNl0gdD0wMDAwMDAxYjgyNTQ4Nzg0KDAwMDAwMDFiODI1NDg3ODQpIHM9MDAwMDdm
OGJlMTQxKDAwMDA3ZjhjN2M3YikKKFhFTikgUkRaVlsxOF0gdD0wMDAwMDAxYjgyNTQ4Nzg0KDAw
MDAwMDFiODI1NDg3ODQpIHM9MDAwMDdmOGJkZWQ0KDAwMDA3ZjhjN2M3YikKKFhFTikgUkRaVlsy
MF0gdD0wMDAwMDAxYjgyNTQ4Nzg0KDAwMDAwMDFiODI1NDg3ODQpIHM9MDAwMDdmOGJlMDI4KDAw
MDA3ZjhjN2M3YikKKFhFTikgUkRaVlsyMl0gdD0wMDAwMDAxYjgyNTQ4Nzg0KDAwMDAwMDFiODI1
NDg3ODQpIHM9MDAwMDdmOGJlMDIyKDAwMDA3ZjhjN2M3YikKKFhFTikgUkRaVlsgMl0gdD0wMDAw
MDAxYjgyNTQ4Nzg0KDAwMDAwMDFiODI1NDg3ODQpIHM9MDAwMDdmOGJlMTIwKDAwMDA3ZjhjN2M3
YikKKFhFTikgUkRaVlsgNV0gdD0wMDAwMDAxYjgyNTQ4Nzg0KDAwMDAwMDFiODI1NDg3ODQpIHM9
MDAwMDdmOGJlMGEyKDAwMDA3ZjhjN2M3YikKKFhFTikgUkRaVlsgM10gdD0wMDAwMDAxYjgyNTQ4
Nzg0KDAwMDAwMDFiODI1NDg3ODQpIHM9MDAwMDdmOGJlMTdlKDAwMDA3ZjhjN2M3YikKKFhFTikg
UkRaVlsxMF0gdD0wMDAwMDAxYjgyNTQ4Nzg0KDAwMDAwMDFiODI1NDg3ODQpIHM9MDAwMDdmOGJk
ZmM4KDAwMDA3ZjhjN2M3YikKKFhFTikgUkRaVlsxMV0gdD0wMDAwMDAxYjgyNTQ4Nzg0KDAwMDAw
MDFiODI1NDg3ODQpIHM9MDAwMDdmOGJlMDA3KDAwMDA3ZjhjN2M3YikKKFhFTikgUkRaVlsyMV0g
dD0wMDAwMDAxYjgyNTQ4Nzg0KDAwMDAwMDFiODI1NDg3ODQpIHM9MDAwMDdmOGJlMDFlKDAwMDA3
ZjhjN2M3YikKKFhFTikgUkRaVlsxOV0gdD0wMDAwMDAxYjgyNTQ4Nzg0KDAwMDAwMDFiODI1NDg3
ODQpIHM9MDAwMDdmOGJkZjI3KDAwMDA3ZjhjN2M3YikKKFhFTikgUkRaVlsyM10gdD0wMDAwMDAx
YjgyNTQ4Nzg0KDAwMDAwMDFiODI1NDg3ODQpIHM9MDAwMDdmOGJlMDAxKDAwMDA3ZjhjN2M3YikK
KFhFTikgVFNDIGFkanVzdGVkIGJ5IDY3OQooWEVOKSBUU0M6IGVuZCByZW5kZXp2b3VzCihYRU4p
IEluaXRpYWwgbG93IG1lbW9yeSB2aXJxIHRocmVzaG9sZCBzZXQgYXQgMHg0MDAwIHBhZ2VzLgoo
WEVOKSBTY3J1YmJpbmcgRnJlZSBSQU0gb24gMSBub2RlcyB1c2luZyAxMiBDUFVzCihYRU4pIC4u
Li4uLi4uLi4uLmRvbmUuCihYRU4pIFN0ZC4gTG9nbGV2ZWw6IEVycm9ycyBhbmQgd2FybmluZ3MK
KFhFTikgR3Vlc3QgTG9nbGV2ZWw6IE5vdGhpbmcgKFJhdGUtbGltaXRlZDogRXJyb3JzIGFuZCB3
YXJuaW5ncykKKFhFTikgWGVuIGlzIGtlZXBpbmcgVkdBIGNvbnNvbGUuCihYRU4pICoqKiBTZXJp
YWwgaW5wdXQgLT4gRE9NMCAodHlwZSAnQ1RSTC1hJyB0aHJlZSB0aW1lcyB0byBzd2l0Y2ggaW5w
dXQgdG8gWGVuKQooWEVOKSBGcmVlZCA0NzZrQiBpbml0IG1lbW9yeQooWEVOKSBUU0M6IHRpbWUu
YyN0aW1lX2NhbGlicmF0aW9uX3RzY19yZW5kZXp2b3VzCihYRU4pIFJEWlZbIDBdIHQ9MDAwMDAw
MWNiOGUxM2Y0YQooWEVOKSBSRFpWWyAxXSB0PTAwMDAwMDFjYjhlMTQ0NjYKKFhFTikgUkRaVlsg
M10gdD0wMDAwMDAxY2I4ZTI1NzQzCihYRU4pIFJEWlZbIDJdIHQ9MDAwMDAwMWNiOGUyNTczZgoo
WEVOKSBSRFpWWyA1XSB0PTAwMDAwMDFjYjhlMjVmNTIKKFhFTikgUkRaVlsgNF0gdD0wMDAwMDAx
Y2I4ZTI1ZjQ2CihYRU4pIFJEWlZbIDZdIHQ9MDAwMDAwMWNiOGUyNjk1MwooWEVOKSBSRFpWWyA3
XSB0PTAwMDAwMDFjYjhlMjY5OGIKKFhFTikgUkRaVlsgOV0gdD0wMDAwMDAxY2I4ZTI3N2I3CihY
RU4pIFJEWlZbIDhdIHQ9MDAwMDAwMWNiOGUyNzgwMwooWEVOKSBSRFpWWzExXSB0PTAwMDAwMDFj
YjhlMjgyOWIKKFhFTikgUkRaVlsxMF0gdD0wMDAwMDAxY2I4ZTI4MjRmCihYRU4pIFJEWlZbMTNd
IHQ9MDAwMDAwMWNiOGUyOGNjYwooWEVOKSBSRFpWWzEyXSB0PTAwMDAwMDFjYjhlMjhjY2MKKFhF
TikgUkRaVlsxNF0gdD0wMDAwMDAxY2I4ZTI5NjdiCihYRU4pIFJEWlZbMTVdIHQ9MDAwMDAwMWNi
OGUyOTYxYgooWEVOKSBSRFpWWzE3XSB0PTAwMDAwMDFjYjhlMjllMTUKKFhFTikgUkRaVlsxNl0g
dD0wMDAwMDAxY2I4ZTI5ZTM5CihYRU4pIFJEWlZbMTldIHQ9MDAwMDAwMWNiOGUyYTc5MAooWEVO
KSBSRFpWWzE4XSB0PTAwMDAwMDFjYjhlMmE3OWMKKFhFTikgUkRaVlsyMV0gdD0wMDAwMDAxY2I4
ZTJiMWQ0CihYRU4pIFJEWlZbMjBdIHQ9MDAwMDAwMWNiOGUyYjFhMAooWEVOKSBSRFpWWzIzXSB0
PTAwMDAwMDFjYjhlMmI5ODQKKFhFTikgUkRaVlsyMl0gdD0wMDAwMDAxY2I4ZTJiOTc4CihYRU4p
IFJEWlZbIDBdIHQ9MDAwMDAwMWNiZGQ2YzM4OSgwMDAwMDAxY2JkZDZjMzg5KSBzPTAwMDBmZTA5
M2IwMygwMDAwZmUwYTc5Y2QpCihYRU4pIFJEWlZbIDFdIHQ9MDAwMDAwMWNiZGQ2YzM4OSgwMDAw
MDAxY2JkZDZjMzg5KSBzPTAwMDBmZTA5M2Q5OCgwMDAwZmUwYTc5Y2QpCihYRU4pIFJEWlZbMjJd
IHQ9MDAwMDAwMWNiZGQ2YzM4OSgwMDAwMDAxY2JkZDZjMzg5KSBzPTAwMDBmZTA5M2Y2MygwMDAw
ZmUwYTc5Y2QpCihYRU4pIFJEWlZbMThdIHQ9MDAwMDAwMWNiZGQ2YzM4OSgwMDAwMDAxY2JkZDZj
Mzg5KSBzPTAwMDBmZTA5M2RlZSgwMDAwZmUwYTc5Y2QpCihYRU4pIFJEWlZbMTldIHQ9MDAwMDAw
MWNiZGQ2YzM4OSgwMDAwMDAxY2JkZDZjMzg5KSBzPTAwMDBmZTA5M2UzYSgwMDAwZmUwYTc5Y2Qp
CihYRU4pIFJEWlZbMTZdIHQ9MDAwMDAwMWNiZGQ2YzM4OSgwMDAwMDAxY2JkZDZjMzg5KSBzPTAw
MDBmZTA5NDE0MigwMDAwZmUwYTc5Y2QpCihYRU4pIFJEWlZbMTddIHQ9MDAwMDAwMWNiZGQ2YzM4
OSgwMDAwMDAxY2JkZDZjMzg5KSBzPTAwMDBmZTA5NDBlYygwMDAwZmUwYTc5Y2QpCihYRU4pIFJE
WlZbIDddIHQ9MDAwMDAwMWNiZGQ2YzM4OSgwMDAwMDAxY2JkZDZjMzg5KSBzPTAwMDBmZTA5M2Y3
ZCgwMDAwZmUwYTc5Y2QpCihYRU4pIFJEWlZbIDZdIHQ9MDAwMDAwMWNiZGQ2YzM4OSgwMDAwMDAx
Y2JkZDZjMzg5KSBzPTAwMDBmZTA5M2Y2OSgwMDAwZmUwYTc5Y2QpCihYRU4pIFJEWlZbMTFdIHQ9
MDAwMDAwMWNiZGQ2YzM4OSgwMDAwMDAxY2JkZDZjMzg5KSBzPTAwMDBmZTA5M2U4YygwMDAwZmUw
YTc5Y2QpCihYRU4pIFJEWlZbMTBdIHQ9MDAwMDAwMWNiZGQ2YzM4OSgwMDAwMDAxY2JkZDZjMzg5
KSBzPTAwMDBmZTA5M2UzNSgwMDAwZmUwYTc5Y2QpCihYRU4pIFJEWlZbIDNdIHQ9MDAwMDAwMWNi
ZGQ2YzM4OSgwMDAwMDAxY2JkZDZjMzg5KSBzPTAwMDBmZTA5NDEwNSgwMDAwZmUwYTc5Y2QpCihY
RU4pIFJEWlZbMjFdIHQ9MDAwMDAwMWNiZGQ2YzM4OSgwMDAwMDAxY2JkZDZjMzg5KSBzPTAwMDBm
ZTA5M2VmZCgwMDAwZmUwYTc5Y2QpCihYRU4pIFJEWlZbMTRdIHQ9MDAwMDAwMWNiZGQ2YzM4OSgw
MDAwMDAxY2JkZDZjMzg5KSBzPTAwMDBmZTA5M2RkYygwMDAwZmUwYTc5Y2QpCihYRU4pIFJEWlZb
MTVdIHQ9MDAwMDAwMWNiZGQ2YzM4OSgwMDAwMDAxY2JkZDZjMzg5KSBzPTAwMDBmZTA5M2RhYygw
MDAwZmUwYTc5Y2QpCihYRU4pIFJEWlZbMjBdIHQ9MDAwMDAwMWNiZGQ2YzM4OSgwMDAwMDAxY2Jk
ZDZjMzg5KSBzPTAwMDBmZTA5M2VmYigwMDAwZmUwYTc5Y2QpCihYRU4pIFJEWlZbIDVdIHQ9MDAw
MDAwMWNiZGQ2YzM4OSgwMDAwMDAxY2JkZDZjMzg5KSBzPTAwMDBmZTA5NDE1NygwMDAwZmUwYTc5
Y2QpCihYRU4pIFJEWlZbIDJdIHQ9MDAwMDAwMWNiZGQ2YzM4OSgwMDAwMDAxY2JkZDZjMzg5KSBz
PTAwMDBmZTA5NDBhYigwMDAwZmUwYTc5Y2QpCihYRU4pIFJEWlZbMTJdIHQ9MDAwMDAwMWNiZGQ2
YzM4OSgwMDAwMDAxY2JkZDZjMzg5KSBzPTAwMDBmZTA5M2Q3NSgwMDAwZmUwYTc5Y2QpCihYRU4p
IFJEWlZbMTNdIHQ9MDAwMDAwMWNiZGQ2YzM4OSgwMDAwMDAxY2JkZDZjMzg5KSBzPTAwMDBmZTA5
M2U4ZSgwMDAwZmUwYTc5Y2QpCihYRU4pIFJEWlZbMjNdIHQ9MDAwMDAwMWNiZGQ2YzM4OSgwMDAw
MDAxY2JkZDZjMzg5KSBzPTAwMDBmZTA5M2Y0NigwMDAwZmUwYTc5Y2QpCihYRU4pIFJEWlZbIDld
IHQ9MDAwMDAwMWNiZGQ2YzM4OSgwMDAwMDAxY2JkZDZjMzg5KSBzPTAwMDBmZTA5M2RiNigwMDAw
ZmUwYTc5Y2QpCihYRU4pIFJEWlZbIDhdIHQ9MDAwMDAwMWNiZGQ2YzM4OSgwMDAwMDAxY2JkZDZj
Mzg5KSBzPTAwMDBmZTA5M2UwNigwMDAwZmUwYTc5Y2QpCihYRU4pIFJEWlZbIDRdIHQ9MDAwMDAw
MWNiZGQ2YzM4OSgwMDAwMDAxY2JkZDZjMzg5KSBzPTAwMDBmZTA5NDE2MygwMDAwZmUwYTc5Y2Qp
CihYRU4pIFRTQyBhZGp1c3RlZCBieSA1MzMKKFhFTikgVFNDOiBlbmQgcmVuZGV6dm91cwooWEVO
KSBUU0M6IHRpbWUuYyN0aW1lX2NhbGlicmF0aW9uX3RzY19yZW5kZXp2b3VzCihYRU4pIFJEWlZb
IDNdIHQ9MDAwMDAwMWYxNjM3YzZlYgooWEVOKSBSRFpWWyAyXSB0PTAwMDAwMDFmMTYzN2M2ZWYK
KFhFTikgUkRaVlsgMV0gdD0wMDAwMDAxZjE2MzdjZDc5CihYRU4pIFJEWlZbIDBdIHQ9MDAwMDAw
MWYxNjM3Y2ZlOQooWEVOKSBSRFpWWyA1XSB0PTAwMDAwMDFmMTYzOGQwNzgKKFhFTikgUkRaVlsg
NF0gdD0wMDAwMDAxZjE2MzhkMDg4CihYRU4pIFJEWlZbIDddIHQ9MDAwMDAwMWYxNjM4ZGEyMQoo
WEVOKSBSRFpWWyA2XSB0PTAwMDAwMDFmMTYzOGRhNDkKKFhFTikgUkRaVlsgOF0gdD0wMDAwMDAx
ZjE2MzhlN2NhCihYRU4pIFJEWlZbIDldIHQ9MDAwMDAwMWYxNjM4ZTdiYQooWEVOKSBSRFpWWzEx
XSB0PTAwMDAwMDFmMTYzOGY0M2EKKFhFTikgUkRaVlsxMF0gdD0wMDAwMDAxZjE2MzhmMzI2CihY
RU4pIFJEWlZbMTNdIHQ9MDAwMDAwMWYxNjM5MDA1ZgooWEVOKSBSRFpWWzEyXSB0PTAwMDAwMDFm
MTYzOTAwOGYKKFhFTikgUkRaVlsxNV0gdD0wMDAwMDAxZjE2MzkwOGM5CihYRU4pIFJEWlZbMTRd
IHQ9MDAwMDAwMWYxNjM5MDhjZAooWEVOKSBSRFpWWzE2XSB0PTAwMDAwMDFmMTYzOTExZmUKKFhF
TikgUkRaVlsxN10gdD0wMDAwMDAxZjE2MzkxMWZlCihYRU4pIFJEWlZbMThdIHQ9MDAwMDAwMWYx
NjM5MWE0MwooWEVOKSBSRFpWWzE5XSB0PTAwMDAwMDFmMTYzOTFiNTMKKFhFTikgUkRaVlsyMV0g
dD0wMDAwMDAxZjE2MzkyNWQyCihYRU4pIFJEWlZbMjBdIHQ9MDAwMDAwMWYxNjM5MjU0YQooWEVO
KSBSRFpWWzIyXSB0PTAwMDAwMDFmMTYzOTJkMGUKKFhFTikgUkRaVlsyM10gdD0wMDAwMDAxZjE2
MzkyZDAyCihYRU4pIFJEWlZbIDBdIHQ9MDAwMDAwMWYxYjI4MDgyMygwMDAwMDAxZjFiMjgwODIz
KSBzPTAwMDFmMGI2NTA2ZSgwMDAxZjBiOGM3ZDEpCihYRU4pIFJEWlZbIDFdIHQ9MDAwMDAwMWYx
YjI4MDgyMygwMDAwMDAxZjFiMjgwODIzKSBzPTAwMDFmMGI2NTRmNigwMDAxZjBiOGM3ZDEpCihY
RU4pIFJEWlZbIDZdIHQ9MDAwMDAwMWYxYjI4MDgyMygwMDAwMDAxZjFiMjgwODIzKSBzPTAwMDFm
MGI2NTgwNigwMDAxZjBiOGM3ZDEpCihYRU4pIFJEWlZbIDddIHQ9MDAwMDAwMWYxYjI4MDgyMygw
MDAwMDAxZjFiMjgwODIzKSBzPTAwMDFmMGI2NTgwZCgwMDAxZjBiOGM3ZDEpCihYRU4pIFJEWlZb
MjFdIHQ9MDAwMDAwMWYxYjI4MDgyMygwMDAwMDAxZjFiMjgwODIzKSBzPTAwMDFmMGI2NTkwZigw
MDAxZjBiOGM3ZDEpCihYRU4pIFJEWlZbIDRdIHQ9MDAwMDAwMWYxYjI4MDgyMygwMDAwMDAxZjFi
MjgwODIzKSBzPTAwMDFmMGI2NWJiMSgwMDAxZjBiOGM3ZDEpCihYRU4pIFJEWlZbMTddIHQ9MDAw
MDAwMWYxYjI4MDgyMygwMDAwMDAxZjFiMjgwODIzKSBzPTAwMDFmMGI2NThmYigwMDAxZjBiOGM3
ZDEpCihYRU4pIFJEWlZbMTZdIHQ9MDAwMDAwMWYxYjI4MDgyMygwMDAwMDAxZjFiMjgwODIzKSBz
PTAwMDFmMGI2NTk0YSgwMDAxZjBiOGM3ZDEpCihYRU4pIFJEWlZbIDNdIHQ9MDAwMDAwMWYxYjI4
MDgyMygwMDAwMDAxZjFiMjgwODIzKSBzPTAwMDFmMGI2NWE0OSgwMDAxZjBiOGM3ZDEpCihYRU4p
IFJEWlZbMTJdIHQ9MDAwMDAwMWYxYjI4MDgyMygwMDAwMDAxZjFiMjgwODIzKSBzPTAwMDFmMGI2
NTg5ZSgwMDAxZjBiOGM3ZDEpCihYRU4pIFJEWlZbMTNdIHQ9MDAwMDAwMWYxYjI4MDgyMygwMDAw
MDAxZjFiMjgwODIzKSBzPTAwMDFmMGI2NTlhMygwMDAxZjBiOGM3ZDEpCihYRU4pIFJEWlZbIDhd
IHQ9MDAwMDAwMWYxYjI4MDgyMygwMDAwMDAxZjFiMjgwODIzKSBzPTAwMDFmMGI2NTUxMCgwMDAx
ZjBiOGM3ZDEpCihYRU4pIFJEWlZbIDldIHQ9MDAwMDAwMWYxYjI4MDgyMygwMDAwMDAxZjFiMjgw
ODIzKSBzPTAwMDFmMGI2NTU4NigwMDAxZjBiOGM3ZDEpCihYRU4pIFJEWlZbMjJdIHQ9MDAwMDAw
MWYxYjI4MDgyMygwMDAwMDAxZjFiMjgwODIzKSBzPTAwMDFmMGI2NTk3YygwMDAxZjBiOGM3ZDEp
CihYRU4pIFJEWlZbMTVdIHQ9MDAwMDAwMWYxYjI4MDgyMygwMDAwMDAxZjFiMjgwODIzKSBzPTAw
MDFmMGI2NTRmYigwMDAxZjBiOGM3ZDEpCihYRU4pIFJEWlZbMTRdIHQ9MDAwMDAwMWYxYjI4MDgy
MygwMDAwMDAxZjFiMjgwODIzKSBzPTAwMDFmMGI2NTUzMSgwMDAxZjBiOGM3ZDEpCihYRU4pIFJE
WlZbMjBdIHQ9MDAwMDAwMWYxYjI4MDgyMygwMDAwMDAxZjFiMjgwODIzKSBzPTAwMDFmMGI2NThl
MSgwMDAxZjBiOGM3ZDEpCihYRU4pIFJEWlZbMThdIHQ9MDAwMDAwMWYxYjI4MDgyMygwMDAwMDAx
ZjFiMjgwODIzKSBzPTAwMDFmMGI2NTRlNSgwMDAxZjBiOGM3ZDEpCihYRU4pIFJEWlZbMTldIHQ9
MDAwMDAwMWYxYjI4MDgyMygwMDAwMDAxZjFiMjgwODIzKSBzPTAwMDFmMGI2NTU5YygwMDAxZjBi
OGM3ZDEpCihYRU4pIFJEWlZbMTFdIHQ9MDAwMDAwMWYxYjI4MDgyMygwMDAwMDAxZjFiMjgwODIz
KSBzPTAwMDFmMGI2NTY4NygwMDAxZjBiOGM3ZDEpCihYRU4pIFJEWlZbMTBdIHQ9MDAwMDAwMWYx
YjI4MDgyMygwMDAwMDAxZjFiMjgwODIzKSBzPTAwMDFmMGI2NTVjYygwMDAxZjBiOGM3ZDEpCihY
RU4pIFJEWlZbIDVdIHQ9MDAwMDAwMWYxYjI4MDgyMygwMDAwMDAxZjFiMjgwODIzKSBzPTAwMDFm
MGI2NWJiOCgwMDAxZjBiOGM3ZDEpCihYRU4pIFJEWlZbIDJdIHQ9MDAwMDAwMWYxYjI4MDgyMygw
MDAwMDAxZjFiMjgwODIzKSBzPTAwMDFmMGI2NTlmMCgwMDAxZjBiOGM3ZDEpCihYRU4pIFJEWlZb
MjNdIHQ9MDAwMDAwMWYxYjI4MDgyMygwMDAwMDAxZjFiMjgwODIzKSBzPTAwMDFmMGI2NTk1OCgw
MDAxZjBiOGM3ZDEpCihYRU4pIFRTQyBhZGp1c3RlZCBieSA1NWEKKFhFTikgVFNDOiBlbmQgcmVu
ZGV6dm91cwooWEVOKSBUU0M6IHRpbWUuYyN0aW1lX2NhbGlicmF0aW9uX3RzY19yZW5kZXp2b3Vz
CihYRU4pIFJEWlZbIDBdIHQ9MDAwMDAwMjNjNjNmYmY5NgooWEVOKSBSRFpWWyAxXSB0PTAwMDAw
MDIzYzYzZmNhOTIKKFhFTikgUkRaVlsgM10gdD0wMDAwMDAyM2M2NDBlZGU5CihYRU4pIFJEWlZb
IDJdIHQ9MDAwMDAwMjNjNjQwZWRmNQooWEVOKSBSRFpWWyA0XSB0PTAwMDAwMDIzYzY0MGVmMzMK
KFhFTikgUkRaVlsgNV0gdD0wMDAwMDAyM2M2NDBlZjNmCihYRU4pIFJEWlZbIDZdIHQ9MDAwMDAw
MjNjNjQwZmI0MgooWEVOKSBSRFpWWyA3XSB0PTAwMDAwMDIzYzY0MGY4ZDYKKFhFTikgUkRaVlsg
OF0gdD0wMDAwMDAyM2M2NDEwOTUxCihYRU4pIFJEWlZbIDldIHQ9MDAwMDAwMjNjNjQxMDk1MQoo
WEVOKSBSRFpWWzEwXSB0PTAwMDAwMDIzYzY0MTE0OTEKKFhFTikgUkRaVlsxMV0gdD0wMDAwMDAy
M2M2NDExMzQxCihYRU4pIFJEWlZbMTNdIHQ9MDAwMDAwMjNjNjQxMWU3OAooWEVOKSBSRFpWWzEy
XSB0PTAwMDAwMDIzYzY0MTFlYjQKKFhFTikgUkRaVlsxNF0gdD0wMDAwMDAyM2M2NDEyOTA3CihY
RU4pIFJEWlZbMTVdIHQ9MDAwMDAwMjNjNjQxMjk0MwooWEVOKSBSRFpWWzE3XSB0PTAwMDAwMDIz
YzY0MTJmZjMKKFhFTikgUkRaVlsxNl0gdD0wMDAwMDAyM2M2NDEyZWZiCihYRU4pIFJEWlZbMThd
IHQ9MDAwMDAwMjNjNjQxMzljNQooWEVOKSBSRFpWWzE5XSB0PTAwMDAwMDIzYzY0MTM5YjUKKFhF
TikgUkRaVlsyMV0gdD0wMDAwMDAyM2M2NDE0MzM1CihYRU4pIFJEWlZbMjBdIHQ9MDAwMDAwMjNj
NjQxNDFmZAooWEVOKSBSRFpWWzIzXSB0PTAwMDAwMDIzYzY0MTRmZTUKKFhFTikgUkRaVlsyMl0g
dD0wMDAwMDAyM2M2NDE0ZmQ1CihYRU4pIFJEWlZbIDBdIHQ9MDAwMDAwMjNjYjJmODJjMygwMDAw
MDAyM2NiMmY4MmMzKSBzPTAwMDNkMWNmZmNkNygwMDAzZDFkNWVlOTcpCihYRU4pIFJEWlZbIDFd
IHQ9MDAwMDAwMjNjYjJmODJjMygwMDAwMDAyM2NiMmY4MmMzKSBzPTAwMDNkMWQwMDRlNCgwMDAz
ZDFkNWVlOTcpCihYRU4pIFJEWlZbMjJdIHQ9MDAwMDAwMjNjYjJmODJjMygwMDAwMDAyM2NiMmY4
MmMzKSBzPTAwMDNkMWQwMGRkNigwMDAzZDFkNWVlOTcpCihYRU4pIFJEWlZbMjNdIHQ9MDAwMDAw
MjNjYjJmODJjMygwMDAwMDAyM2NiMmY4MmMzKSBzPTAwMDNkMWQwMGQ5NygwMDAzZDFkNWVlOTcp
CihYRU4pIFJEWlZbMTddIHQ9MDAwMDAwMjNjYjJmODJjMygwMDAwMDAyM2NiMmY4MmMzKSBzPTAw
MDNkMWQwMDlhNigwMDAzZDFkNWVlOTcpCihYRU4pIFJEWlZbMTZdIHQ9MDAwMDAwMjNjYjJmODJj
MygwMDAwMDAyM2NiMmY4MmMzKSBzPTAwMDNkMWQwMDljMygwMDAzZDFkNWVlOTcpCihYRU4pIFJE
WlZbMTBdIHQ9MDAwMDAwMjNjYjJmODJjMygwMDAwMDAyM2NiMmY4MmMzKSBzPTAwMDNkMWQwMDc3
ZigwMDAzZDFkNWVlOTcpCihYRU4pIFJEWlZbMTFdIHQ9MDAwMDAwMjNjYjJmODJjMygwMDAwMDAy
M2NiMmY4MmMzKSBzPTAwMDNkMWQwMDg5MCgwMDAzZDFkNWVlOTcpCihYRU4pIFJEWlZbIDVdIHQ9
MDAwMDAwMjNjYjJmODJjMygwMDAwMDAyM2NiMmY4MmMzKSBzPTAwMDNkMWQwMGY0MigwMDAzZDFk
NWVlOTcpCihYRU4pIFJEWlZbIDhdIHQ9MDAwMDAwMjNjYjJmODJjMygwMDAwMDAyM2NiMmY4MmMz
KSBzPTAwMDNkMWQwMDUxMCgwMDAzZDFkNWVlOTcpCihYRU4pIFJEWlZbIDldIHQ9MDAwMDAwMjNj
YjJmODJjMygwMDAwMDAyM2NiMmY4MmMzKSBzPTAwMDNkMWQwMDViMygwMDAzZDFkNWVlOTcpCihY
RU4pIFJEWlZbMjFdIHQ9MDAwMDAwMjNjYjJmODJjMygwMDAwMDAyM2NiMmY4MmMzKSBzPTAwMDNk
MWQwMDdjOSgwMDAzZDFkNWVlOTcpCihYRU4pIFJEWlZbIDddIHQ9MDAwMDAwMjNjYjJmODJjMygw
MDAwMDAyM2NiMmY4MmMzKSBzPTAwMDNkMWQwMGFkZCgwMDAzZDFkNWVlOTcpCihYRU4pIFJEWlZb
IDZdIHQ9MDAwMDAwMjNjYjJmODJjMygwMDAwMDAyM2NiMmY4MmMzKSBzPTAwMDNkMWQwMGJkYSgw
MDAzZDFkNWVlOTcpCihYRU4pIFJEWlZbMjBdIHQ9MDAwMDAwMjNjYjJmODJjMygwMDAwMDAyM2Ni
MmY4MmMzKSBzPTAwMDNkMWQwMDdlOCgwMDAzZDFkNWVlOTcpCihYRU4pIFJEWlZbMTldIHQ9MDAw
MDAwMjNjYjJmODJjMygwMDAwMDAyM2NiMmY4MmMzKSBzPTAwMDNkMWQwMDQ1NygwMDAzZDFkNWVl
OTcpCihYRU4pIFJEWlZbMThdIHQ9MDAwMDAwMjNjYjJmODJjMygwMDAwMDAyM2NiMmY4MmMzKSBz
PTAwMDNkMWQwMDM3ZigwMDAzZDFkNWVlOTcpCihYRU4pIFJEWlZbMTVdIHQ9MDAwMDAwMjNjYjJm
ODJjMygwMDAwMDAyM2NiMmY4MmMzKSBzPTAwMDNkMWQwMDkxNCgwMDAzZDFkNWVlOTcpCihYRU4p
IFJEWlZbMTRdIHQ9MDAwMDAwMjNjYjJmODJjMygwMDAwMDAyM2NiMmY4MmMzKSBzPTAwMDNkMWQw
MDk1MCgwMDAzZDFkNWVlOTcpCihYRU4pIFJEWlZbIDJdIHQ9MDAwMDAwMjNjYjJmODJjMygwMDAw
MDAyM2NiMmY4MmMzKSBzPTAwMDNkMWQwMGY1OCgwMDAzZDFkNWVlOTcpCihYRU4pIFJEWlZbMTNd
IHQ9MDAwMDAwMjNjYjJmODJjMygwMDAwMDAyM2NiMmY4MmMzKSBzPTAwMDNkMWQwMGIxYigwMDAz
ZDFkNWVlOTcpCihYRU4pIFJEWlZbMTJdIHQ9MDAwMDAwMjNjYjJmODJjMygwMDAwMDAyM2NiMmY4
MmMzKSBzPTAwMDNkMWQwMGE3YSgwMDAzZDFkNWVlOTcpCihYRU4pIFJEWlZbIDNdIHQ9MDAwMDAw
MjNjYjJmODJjMygwMDAwMDAyM2NiMmY4MmMzKSBzPTAwMDNkMWQwMGY4YSgwMDAzZDFkNWVlOTcp
CihYRU4pIFJEWlZbIDRdIHQ9MDAwMDAwMjNjYjJmODJjMygwMDAwMDAyM2NiMmY4MmMzKSBzPTAw
MDNkMWQwMGY0NCgwMDAzZDFkNWVlOTcpCihYRU4pIFRTQyBhZGp1c3RlZCBieSAyNDkKKFhFTikg
VFNDOiBlbmQgcmVuZGV6dm91cwooWEVOKSBUU0M6IHRpbWUuYyN0aW1lX2NhbGlicmF0aW9uX3Rz
Y19yZW5kZXp2b3VzCihYRU4pIFJEWlZbIDBdIHQ9MDAwMDAwMmQxYmJmYmZlYgooWEVOKSBSRFpW
WyAxXSB0PTAwMDAwMDJkMWJiZmM5YWYKKFhFTikgUkRaVlsgMl0gdD0wMDAwMDAyZDFiYzBlMjBj
CihYRU4pIFJEWlZbIDNdIHQ9MDAwMDAwMmQxYmMwZTIwYwooWEVOKSBSRFpWWyA1XSB0PTAwMDAw
MDJkMWJjMGU1ODYKKFhFTikgUkRaVlsgNF0gdD0wMDAwMDAyZDFiYzBlNTc2CihYRU4pIFJEWlZb
IDZdIHQ9MDAwMDAwMmQxYmMwZWYyNgooWEVOKSBSRFpWWyA3XSB0PTAwMDAwMDJkMWJjMGVmMzYK
KFhFTikgUkRaVlsgOV0gdD0wMDAwMDAyZDFiYzBmZDA0CihYRU4pIFJEWlZbIDhdIHQ9MDAwMDAw
MmQxYmMwZmQwNAooWEVOKSBSRFpWWzEwXSB0PTAwMDAwMDJkMWJjMTA2MWMKKFhFTikgUkRaVlsx
MV0gdD0wMDAwMDAyZDFiYzEwNjU4CihYRU4pIFJEWlZbMTNdIHQ9MDAwMDAwMmQxYmMxMTE1Ygoo
WEVOKSBSRFpWWzEyXSB0PTAwMDAwMDJkMWJjMTExODcKKFhFTikgUkRaVlsxNV0gdD0wMDAwMDAy
ZDFiYzExYjgwCihYRU4pIFJEWlZbMTRdIHQ9MDAwMDAwMmQxYmMxMWI4YwooWEVOKSBSRFpWWzE3
XSB0PTAwMDAwMDJkMWJjMTIxZGEKKFhFTikgUkRaVlsxNl0gdD0wMDAwMDAyZDFiYzEyMWVhCihY
RU4pIFJEWlZbMThdIHQ9MDAwMDAwMmQxYmMxMmE4OAooWEVOKSBSRFpWWzE5XSB0PTAwMDAwMDJk
MWJjMTJhOWMKKFhFTikgUkRaVlsyMV0gdD0wMDAwMDAyZDFiYzEzNjFhCihYRU4pIFJEWlZbMjBd
IHQ9MDAwMDAwMmQxYmMxMzYxZQooWEVOKSBSRFpWWzIyXSB0PTAwMDAwMDJkMWJjMTQxZGMKKFhF
TikgUkRaVlsyM10gdD0wMDAwMDAyZDFiYzE0MWRjCihYRU4pIFJEWlZbMjNdIHQ9MDAwMDAwMmQy
MGFmMjJmYSgwMDAwMDAyZDIwYWYyMmZhKSBzPTAwMDc4ZmM3YTgxMSgwMDA3OGZkODI1NjUpCihY
RU4pIFJEWlZbMTBdIHQ9MDAwMDAwMmQyMGFmMjJmYSgwMDAwMDAyZDIwYWYyMmZhKSBzPTAwMDc4
ZmM3YTA1NSgwMDA3OGZkODI1NjUpCihYRU4pIFJEWlZbIDZdIHQ9MDAwMDAwMmQyMGFmMjJmYSgw
MDAwMDAyZDIwYWYyMmZhKSBzPTAwMDc4ZmM3YTQ4YSgwMDA3OGZkODI1NjUpCihYRU4pIFJEWlZb
IDddIHQ9MDAwMDAwMmQyMGFmMjJmYSgwMDAwMDAyZDIwYWYyMmZhKSBzPTAwMDc4ZmM3YTUzYigw
MDA3OGZkODI1NjUpCihYRU4pIFJEWlZbMjBdIHQ9MDAwMDAwMmQyMGFmMjJmYSgwMDAwMDAyZDIw
YWYyMmZhKSBzPTAwMDc4ZmM3YTNhNygwMDA3OGZkODI1NjUpCihYRU4pIFJEWlZbMjJdIHQ9MDAw
MDAwMmQyMGFmMjJmYSgwMDAwMDAyZDIwYWYyMmZhKSBzPTAwMDc4ZmM3YWIzNygwMDA3OGZkODI1
NjUpCihYRU4pIFJEWlZbMThdIHQ9MDAwMDAwMmQyMGFmMjJmYSgwMDAwMDAyZDIwYWYyMmZhKSBz
PTAwMDc4ZmM3YTA5OCgwMDA3OGZkODI1NjUpCihYRU4pIFJEWlZbMTldIHQ9MDAwMDAwMmQyMGFm
MjJmYSgwMDAwMDAyZDIwYWYyMmZhKSBzPTAwMDc4ZmM3YTExYSgwMDA3OGZkODI1NjUpCihYRU4p
IFJEWlZbMjFdIHQ9MDAwMDAwMmQyMGFmMjJmYSgwMDAwMDAyZDIwYWYyMmZhKSBzPTAwMDc4ZmM3
YTA3YSgwMDA3OGZkODI1NjUpCihYRU4pIFJEWlZbIDFdIHQ9MDAwMDAwMmQyMGFmMjJmYSgwMDAw
MDAyZDIwYWYyMmZhKSBzPTAwMDc4ZmM3OTllOCgwMDA3OGZkODI1NjUpCihYRU4pIFJEWlZbMTFd
IHQ9MDAwMDAwMmQyMGFmMjJmYSgwMDAwMDAyZDIwYWYyMmZhKSBzPTAwMDc4ZmM3YTQ0OSgwMDA3
OGZkODI1NjUpCihYRU4pIFJEWlZbIDhdIHQ9MDAwMDAwMmQyMGFmMjJmYSgwMDAwMDAyZDIwYWYy
MmZhKSBzPTAwMDc4ZmM3OWNjZCgwMDA3OGZkODI1NjUpCihYRU4pIFJEWlZbIDldIHQ9MDAwMDAw
MmQyMGFmMjJmYSgwMDAwMDAyZDIwYWYyMmZhKSBzPTAwMDc4ZmM3OWZjYygwMDA3OGZkODI1NjUp
CihYRU4pIFJEWlZbIDVdIHQ9MDAwMDAwMmQyMGFmMjJmYSgwMDAwMDAyZDIwYWYyMmZhKSBzPTAw
MDc4ZmM3YWM0MSgwMDA3OGZkODI1NjUpCihYRU4pIFJEWlZbMTNdIHQ9MDAwMDAwMmQyMGFmMjJm
YSgwMDAwMDAyZDIwYWYyMmZhKSBzPTAwMDc4ZmM3YTU0ZCgwMDA3OGZkODI1NjUpCihYRU4pIFJE
WlZbMTJdIHQ9MDAwMDAwMmQyMGFmMjJmYSgwMDAwMDAyZDIwYWYyMmZhKSBzPTAwMDc4ZmM3YTRm
ZSgwMDA3OGZkODI1NjUpCihYRU4pIFJEWlZbIDBdIHQ9MDAwMDAwMmQyMGFmMjJmYSgwMDAwMDAy
ZDIwYWYyMmZhKSBzPTAwMDc4ZmM3ODc0ZSgwMDA3OGZkODI1NjUpCihYRU4pIFJEWlZbIDJdIHQ9
MDAwMDAwMmQyMGFmMjJmYSgwMDAwMDAyZDIwYWYyMmZhKSBzPTAwMDc4ZmM3YWMzZCgwMDA3OGZk
ODI1NjUpCihYRU4pIFJEWlZbMTZdIHQ9MDAwMDAwMmQyMGFmMjJmYSgwMDAwMDAyZDIwYWYyMmZh
KSBzPTAwMDc4ZmM3YTI5ZCgwMDA3OGZkODI1NjUpCihYRU4pIFJEWlZbMTddIHQ9MDAwMDAwMmQy
MGFmMjJmYSgwMDAwMDAyZDIwYWYyMmZhKSBzPTAwMDc4ZmM3YTI0MigwMDA3OGZkODI1NjUpCihY
RU4pIFJEWlZbIDNdIHQ9MDAwMDAwMmQyMGFmMjJmYSgwMDAwMDAyZDIwYWYyMmZhKSBzPTAwMDc4
ZmM3YWJmNygwMDA3OGZkODI1NjUpCihYRU4pIFJEWlZbIDRdIHQ9MDAwMDAwMmQyMGFmMjJmYSgw
MDAwMDAyZDIwYWYyMmZhKSBzPTAwMDc4ZmM3YWJkNigwMDA3OGZkODI1NjUpCihYRU4pIFJEWlZb
MTRdIHQ9MDAwMDAwMmQyMGFmMjJmYSgwMDAwMDAyZDIwYWYyMmZhKSBzPTAwMDc4ZmM3YTAzMCgw
MDA3OGZkODI1NjUpCihYRU4pIFJEWlZbMTVdIHQ9MDAwMDAwMmQyMGFmMjJmYSgwMDAwMDAyZDIw
YWYyMmZhKSBzPTAwMDc4ZmM3OWUyYSgwMDA3OGZkODI1NjUpCihYRU4pIFRTQyBhZGp1c3RlZCBi
eSAzZWYKKFhFTikgVFNDOiBlbmQgcmVuZGV6dm91cwooWEVOKSBUU0M6IHRpbWUuYyN0aW1lX2Nh
bGlicmF0aW9uX3RzY19yZW5kZXp2b3VzCihYRU4pIFJEWlZbIDBdIHQ9MDAwMDAwM2ZiYzMzNTU3
MwooWEVOKSBSRFpWWyAxXSB0PTAwMDAwMDNmYmMzMzYxYmIKKFhFTikgUkRaVlsgMl0gdD0wMDAw
MDAzZmJjMzQ1YzUwCihYRU4pIFJEWlZbIDNdIHQ9MDAwMDAwM2ZiYzM0NWM0YwooWEVOKSBSRFpW
WyA0XSB0PTAwMDAwMDNmYmMzNDY5MmYKKFhFTikgUkRaVlsgNV0gdD0wMDAwMDAzZmJjMzQ2OTNm
CihYRU4pIFJEWlZbIDZdIHQ9MDAwMDAwM2ZiYzM0NzA5NQooWEVOKSBSRFpWWyA3XSB0PTAwMDAw
MDNmYmMzNDcwOTUKKFhFTikgUkRaVlsgOV0gdD0wMDAwMDAzZmJjMzQ3ZTk1CihYRU4pIFJEWlZb
IDhdIHQ9MDAwMDAwM2ZiYzM0N2U5NQooWEVOKSBSRFpWWzExXSB0PTAwMDAwMDNmYmMzNDhiZjgK
KFhFTikgUkRaVlsxMF0gdD0wMDAwMDAzZmJjMzQ4YzY4CihYRU4pIFJEWlZbMTNdIHQ9MDAwMDAw
M2ZiYzM0OTQ1NwooWEVOKSBSRFpWWzEyXSB0PTAwMDAwMDNmYmMzNDk0OTMKKFhFTikgUkRaVlsx
NF0gdD0wMDAwMDAzZmJjMzQ5ZjVlCihYRU4pIFJEWlZbMTVdIHQ9MDAwMDAwM2ZiYzM0OWY2YQoo
WEVOKSBSRFpWWzE3XSB0PTAwMDAwMDNmYmMzNGE1ZTUKKFhFTikgUkRaVlsxNl0gdD0wMDAwMDAz
ZmJjMzRhNWNkCihYRU4pIFJEWlZbMThdIHQ9MDAwMDAwM2ZiYzM0YjBhNwooWEVOKSBSRFpWWzE5
XSB0PTAwMDAwMDNmYmMzNGIwN2YKKFhFTikgUkRaVlsyMV0gdD0wMDAwMDAzZmJjMzRiOTM2CihY
RU4pIFJEWlZbMjBdIHQ9MDAwMDAwM2ZiYzM0YjkzNgooWEVOKSBSRFpWWzIzXSB0PTAwMDAwMDNm
YmMzNGM1YWUKKFhFTikgUkRaVlsyMl0gdD0wMDAwMDAzZmJjMzRjNWUyCihYRU4pIFJEWlZbIDBd
IHQ9MDAwMDAwM2ZjMTIyZDRjYSgwMDAwMDAzZmMxMjJkNGNhKSBzPTAwMGYwNzdjMzlhMigwMDBm
MDdhMjljYTkpCihYRU4pIFJEWlZbMTVdIHQ9MDAwMDAwM2ZjMTIyZDRjYSgwMDAwMDAzZmMxMjJk
NGNhKSBzPTAwMGYwNzdjMmM3OCgwMDBmMDdhMjljYTkpCihYRU4pIFJEWlZbMTRdIHQ9MDAwMDAw
M2ZjMTIyZDRjYSgwMDAwMDAzZmMxMjJkNGNhKSBzPTAwMGYwNzdjMmRhYygwMDBmMDdhMjljYTkp
CihYRU4pIFJEWlZbMTZdIHQ9MDAwMDAwM2ZjMTIyZDRjYSgwMDAwMDAzZmMxMjJkNGNhKSBzPTAw
MGYwNzdjMzc4MygwMDBmMDdhMjljYTkpCihYRU4pIFJEWlZbMTddIHQ9MDAwMDAwM2ZjMTIyZDRj
YSgwMDAwMDAzZmMxMjJkNGNhKSBzPTAwMGYwNzdjMzczYSgwMDBmMDdhMjljYTkpCihYRU4pIFJE
WlZbMThdIHQ9MDAwMDAwM2ZjMTIyZDRjYSgwMDAwMDAzZmMxMjJkNGNhKSBzPTAwMGYwNzdjM2My
NygwMDBmMDdhMjljYTkpCihYRU4pIFJEWlZbIDFdIHQ9MDAwMDAwM2ZjMTIyZDRjYSgwMDAwMDAz
ZmMxMjJkNGNhKSBzPTAwMGYwNzdjMTlkNygwMDBmMDdhMjljYTkpCihYRU4pIFJEWlZbMTNdIHQ9
MDAwMDAwM2ZjMTIyZDRjYSgwMDAwMDAzZmMxMjJkNGNhKSBzPTAwMGYwNzdjMmNlMSgwMDBmMDdh
MjljYTkpCihYRU4pIFJEWlZbMTJdIHQ9MDAwMDAwM2ZjMTIyZDRjYSgwMDAwMDAzZmMxMjJkNGNh
KSBzPTAwMGYwNzdjMmNlZSgwMDBmMDdhMjljYTkpCihYRU4pIFJEWlZbIDZdIHQ9MDAwMDAwM2Zj
MTIyZDRjYSgwMDAwMDAzZmMxMjJkNGNhKSBzPTAwMGYwNzdjNDBlYigwMDBmMDdhMjljYTkpCihY
RU4pIFJEWlZbIDddIHQ9MDAwMDAwM2ZjMTIyZDRjYSgwMDAwMDAzZmMxMjJkNGNhKSBzPTAwMGYw
NzdjNDJmZigwMDBmMDdhMjljYTkpCihYRU4pIFJEWlZbMjFdIHQ9MDAwMDAwM2ZjMTIyZDRjYSgw
MDAwMDAzZmMxMjJkNGNhKSBzPTAwMGYwNzdjMzhhYigwMDBmMDdhMjljYTkpCihYRU4pIFJEWlZb
MTldIHQ9MDAwMDAwM2ZjMTIyZDRjYSgwMDAwMDAzZmMxMjJkNGNhKSBzPTAwMGYwNzdjM2QwMygw
MDBmMDdhMjljYTkpCihYRU4pIFJEWlZbIDhdIHQ9MDAwMDAwM2ZjMTIyZDRjYSgwMDAwMDAzZmMx
MjJkNGNhKSBzPTAwMGYwNzdjMjhlZigwMDBmMDdhMjljYTkpCihYRU4pIFJEWlZbIDldIHQ9MDAw
MDAwM2ZjMTIyZDRjYSgwMDAwMDAzZmMxMjJkNGNhKSBzPTAwMGYwNzdjMmM0ZigwMDBmMDdhMjlj
YTkpCihYRU4pIFJEWlZbMTFdIHQ9MDAwMDAwM2ZjMTIyZDRjYSgwMDAwMDAzZmMxMjJkNGNhKSBz
PTAwMGYwNzdjNDFjYygwMDBmMDdhMjljYTkpCihYRU4pIFJEWlZbMTBdIHQ9MDAwMDAwM2ZjMTIy
ZDRjYSgwMDAwMDAzZmMxMjJkNGNhKSBzPTAwMGYwNzdjM2NmMygwMDBmMDdhMjljYTkpCihYRU4p
IFJEWlZbMjBdIHQ9MDAwMDAwM2ZjMTIyZDRjYSgwMDAwMDAzZmMxMjJkNGNhKSBzPTAwMGYwNzdj
MzhlZSgwMDBmMDdhMjljYTkpCihYRU4pIFJEWlZbMjJdIHQ9MDAwMDAwM2ZjMTIyZDRjYSgwMDAw
MDAzZmMxMjJkNGNhKSBzPTAwMGYwNzdjNDE0MSgwMDBmMDdhMjljYTkpCihYRU4pIFJEWlZbIDNd
IHQ9MDAwMDAwM2ZjMTIyZDRjYSgwMDAwMDAzZmMxMjJkNGNhKSBzPTAwMGYwNzdjMzc0ZCgwMDBm
MDdhMjljYTkpCihYRU4pIFJEWlZbIDVdIHQ9MDAwMDAwM2ZjMTIyZDRjYSgwMDAwMDAzZmMxMjJk
NGNhKSBzPTAwMGYwNzdjMzc5YigwMDBmMDdhMjljYTkpCihYRU4pIFJEWlZbMjNdIHQ9MDAwMDAw
M2ZjMTIyZDRjYSgwMDAwMDAzZmMxMjJkNGNhKSBzPTAwMGYwNzdjM2VmMCgwMDBmMDdhMjljYTkp
CihYRU4pIFJEWlZbIDJdIHQ9MDAwMDAwM2ZjMTIyZDRjYSgwMDAwMDAzZmMxMjJkNGNhKSBzPTAw
MGYwNzdjMzQyNSgwMDBmMDdhMjljYTkpCihYRU4pIFJEWlZbIDRdIHQ9MDAwMDAwM2ZjMTIyZDRj
YSgwMDAwMDAzZmMxMjJkNGNhKSBzPTAwMGYwNzdjMzU0OCgwMDBmMDdhMjljYTkpCihYRU4pIFRT
QzogZW5kIHJlbmRlenZvdXMKKFhFTikgVFNDIGFkanVzdGVkIGJ5IDk3NAooWEVOKSBUU0M6IHRp
bWUuYyN0aW1lX2NhbGlicmF0aW9uX3RzY19yZW5kZXp2b3VzCihYRU4pIFJEWlZbIDBdIHQ9MDAw
MDAwNjRmMjg1MDlmMQooWEVOKSBSRFpWWyAxXSB0PTAwMDAwMDY0ZjI4NTEzYjEKKFhFTikgUkRa
VlsgOF0gdD0wMDAwMDA2NGYyODYxMGZhCihYRU4pIFJEWlZbIDldIHQ9MDAwMDAwNjRmMjg2MTU5
YQooWEVOKSBSRFpWWyAzXSB0PTAwMDAwMDY0ZjI4NjJmOTMKKFhFTikgUkRaVlsgMl0gdD0wMDAw
MDA2NGYyODYyZjhiCihYRU4pIFJEWlZbIDVdIHQ9MDAwMDAwNjRmMjg2MzlhMQooWEVOKSBSRFpW
WyA0XSB0PTAwMDAwMDY0ZjI4NjM4ZWQKKFhFTikgUkRaVlsgNl0gdD0wMDAwMDA2NGYyODY0Mzk5
CihYRU4pIFJEWlZbIDddIHQ9MDAwMDAwNjRmMjg2NDM0OQooWEVOKSBSRFpWWzEwXSB0PTAwMDAw
MDY0ZjI4NjUxNWYKKFhFTikgUkRaVlsxMV0gdD0wMDAwMDA2NGYyODY1MTZmCihYRU4pIFJEWlZb
MTNdIHQ9MDAwMDAwNjRmMjg2NTk5MgooWEVOKSBSRFpWWzEyXSB0PTAwMDAwMDY0ZjI4NjVjMTIK
KFhFTikgUkRaVlsxNV0gdD0wMDAwMDA2NGYyODY2NTg0CihYRU4pIFJEWlZbMTRdIHQ9MDAwMDAw
NjRmMjg2NjU2NAooWEVOKSBSRFpWWzE3XSB0PTAwMDAwMDY0ZjI4NjZlNTMKKFhFTikgUkRaVlsx
Nl0gdD0wMDAwMDA2NGYyODY2ZTUzCihYRU4pIFJEWlZbMTldIHQ9MDAwMDAwNjRmMjg2Nzg1Ygoo
WEVOKSBSRFpWWzE4XSB0PTAwMDAwMDY0ZjI4Njc4NmYKKFhFTikgUkRaVlsyMV0gdD0wMDAwMDA2
NGYyODY4MjBiCihYRU4pIFJEWlZbMjBdIHQ9MDAwMDAwNjRmMjg2ODIwZgooWEVOKSBSRFpWWzIz
XSB0PTAwMDAwMDY0ZjI4NjhhZDYKKFhFTikgUkRaVlsyMl0gdD0wMDAwMDA2NGYyODY4YmIyCihY
RU4pIFJEWlZbIDNdIHQ9MDAwMDAwNjRmNzc0NDkzMygwMDAwMDA2NGY3NzQ0OTMzKSBzPTAwMWRm
MmE3ODk4MigwMDFkZjJmN2Q2NWIpCihYRU4pIFJEWlZbMThdIHQ9MDAwMDAwNjRmNzc0NDkzMygw
MDAwMDA2NGY3NzQ0OTMzKSBzPTAwMWRmMmE3ODhjZigwMDFkZjJmN2Q2NWIpCihYRU4pIFJEWlZb
MTldIHQ9MDAwMDAwNjRmNzc0NDkzMygwMDAwMDA2NGY3NzQ0OTMzKSBzPTAwMWRmMmE3ODlmOCgw
MDFkZjJmN2Q2NWIpCihYRU4pIFJEWlZbMTZdIHQ9MDAwMDAwNjRmNzc0NDkzMygwMDAwMDA2NGY3
NzQ0OTMzKSBzPTAwMWRmMmE3ODA4OCgwMDFkZjJmN2Q2NWIpCihYRU4pIFJEWlZbMTddIHQ9MDAw
MDAwNjRmNzc0NDkzMygwMDAwMDA2NGY3NzQ0OTMzKSBzPTAwMWRmMmE3ODMyYSgwMDFkZjJmN2Q2
NWIpCihYRU4pIFJEWlZbMjJdIHQ9MDAwMDAwNjRmNzc0NDkzMygwMDAwMDA2NGY3NzQ0OTMzKSBz
PTAwMWRmMmE3OWYzMigwMDFkZjJmN2Q2NWIpCihYRU4pIFJEWlZbIDJdIHQ9MDAwMDAwNjRmNzc0
NDkzMygwMDAwMDA2NGY3NzQ0OTMzKSBzPTAwMWRmMmE3ODUxYSgwMDFkZjJmN2Q2NWIpCihYRU4p
IFJEWlZbIDVdIHQ9MDAwMDAwNjRmNzc0NDkzMygwMDAwMDA2NGY3NzQ0OTMzKSBzPTAwMWRmMmE3
ODE4NygwMDFkZjJmN2Q2NWIpCihYRU4pIFJEWlZbIDFdIHQ9MDAwMDAwNjRmNzc0NDkzMygwMDAw
MDA2NGY3NzQ0OTMzKSBzPTAwMWRmMmE3NDgxYigwMDFkZjJmN2Q2NWIpCihYRU4pIFJEWlZbMjNd
IHQ9MDAwMDAwNjRmNzc0NDkzMygwMDAwMDA2NGY3NzQ0OTMzKSBzPTAwMWRmMmE3OTZlOSgwMDFk
ZjJmN2Q2NWIpCihYRU4pIFJEWlZbMTJdIHQ9MDAwMDAwNjRmNzc0NDkzMygwMDAwMDA2NGY3NzQ0
OTMzKSBzPTAwMWRmMmE3NzdhYSgwMDFkZjJmN2Q2NWIpCihYRU4pIFJEWlZbMTNdIHQ9MDAwMDAw
NjRmNzc0NDkzMygwMDAwMDA2NGY3NzQ0OTMzKSBzPTAwMWRmMmE3Nzc4ZigwMDFkZjJmN2Q2NWIp
CihYRU4pIFJEWlZbIDBdIHQ9MDAwMDAwNjRmNzc0NDkzMygwMDAwMDA2NGY3NzQ0OTMzKSBzPTAw
MWRmMmE3M2NmZCgwMDFkZjJmN2Q2NWIpCihYRU4pIFJEWlZbMjBdIHQ9MDAwMDAwNjRmNzc0NDkz
MygwMDAwMDA2NGY3NzQ0OTMzKSBzPTAwMWRmMmE3MzczYSgwMDFkZjJmN2Q2NWIpCihYRU4pIFJE
WlZbIDldIHQ9MDAwMDAwNjRmNzc0NDkzMygwMDAwMDA2NGY3NzQ0OTMzKSBzPTAwMWRmMmE3Nzll
NSgwMDFkZjJmN2Q2NWIpCihYRU4pIFJEWlZbIDhdIHQ9MDAwMDAwNjRmNzc0NDkzMygwMDAwMDA2
NGY3NzQ0OTMzKSBzPTAwMWRmMmE3NzM0NCgwMDFkZjJmN2Q2NWIpCihYRU4pIFJEWlZbMTFdIHQ9
MDAwMDAwNjRmNzc0NDkzMygwMDAwMDA2NGY3NzQ0OTMzKSBzPTAwMWRmMmE3OTgxNygwMDFkZjJm
N2Q2NWIpCihYRU4pIFJEWlZbMTBdIHQ9MDAwMDAwNjRmNzc0NDkzMygwMDAwMDA2NGY3NzQ0OTMz
KSBzPTAwMWRmMmE3OGUzNygwMDFkZjJmN2Q2NWIpCihYRU4pIFJEWlZbMTVdIHQ9MDAwMDAwNjRm
Nzc0NDkzMygwMDAwMDA2NGY3NzQ0OTMzKSBzPTAwMWRmMmE3NmYwYygwMDFkZjJmN2Q2NWIpCihY
RU4pIFJEWlZbMTRdIHQ9MDAwMDAwNjRmNzc0NDkzMygwMDAwMDA2NGY3NzQ0OTMzKSBzPTAwMWRm
MmE3NmVkOSgwMDFkZjJmN2Q2NWIpCihYRU4pIFJEWlZbIDddIHQ9MDAwMDAwNjRmNzc0NDkzMygw
MDAwMDA2NGY3NzQ0OTMzKSBzPTAwMWRmMmE3OTk5MCgwMDFkZjJmN2Q2NWIpCihYRU4pIFJEWlZb
IDZdIHQ9MDAwMDAwNjRmNzc0NDkzMygwMDAwMDA2NGY3NzQ0OTMzKSBzPTAwMWRmMmE3OTQ5OCgw
MDFkZjJmN2Q2NWIpCihYRU4pIFJEWlZbMjFdIHQ9MDAwMDAwNjRmNzc0NDkzMygwMDAwMDA2NGY3
NzQ0OTMzKSBzPTAwMWRmMmE3NDI3OSgwMDFkZjJmN2Q2NWIpCihYRU4pIFJEWlZbIDRdIHQ9MDAw
MDAwNjRmNzc0NDkzMygwMDAwMDA2NGY3NzQ0OTMzKSBzPTAwMWRmMmE3N2NmMigwMDFkZjJmN2Q2
NWIpCihYRU4pIFRTQzogZW5kIHJlbmRlenZvdXMKKFhFTikgVFNDIGFkanVzdGVkIGJ5IDczNAo=
--0000000000005bcc5605ba47fa81
Content-Type: text/plain; charset="UTF-8"; name="kernel-dmesg.txt"
Content-Disposition: attachment; filename="kernel-dmesg.txt"
Content-Transfer-Encoding: base64
Content-ID: <f_kkmq4i421>
X-Attachment-Id: f_kkmq4i421

WyAgICAwLjAwMDAwMF0gTGludXggdmVyc2lvbiA1LjEwLjAtMS1hbWQ2NCAoZGViaWFuLWtlcm5l
bEBsaXN0cy5kZWJpYW4ub3JnKSAoZ2NjLTEwIChEZWJpYW4gMTAuMi4xLTMpIDEwLjIuMSAyMDIw
MTIyNCwgR05VIGxkIChHTlUgQmludXRpbHMgZm9yIERlYmlhbikgMi4zNS4xKSAjMSBTTVAgRGVi
aWFuIDUuMTAuNC0xICgyMDIwLTEyLTMxKQpbICAgIDAuMDAwMDAwXSBDb21tYW5kIGxpbmU6IHBs
YWNlaG9sZGVyIHJvb3Q9VVVJRD1mZmQyZTQ0ZS0zZWI3LTRjNGQtYTAyOC1kYWQ3ZGEwM2M4MzEg
cm8gbG9nbGV2ZWw9MwpbICAgIDAuMDAwMDAwXSB4ODYvZnB1OiBTdXBwb3J0aW5nIFhTQVZFIGZl
YXR1cmUgMHgwMDE6ICd4ODcgZmxvYXRpbmcgcG9pbnQgcmVnaXN0ZXJzJwpbICAgIDAuMDAwMDAw
XSB4ODYvZnB1OiBTdXBwb3J0aW5nIFhTQVZFIGZlYXR1cmUgMHgwMDI6ICdTU0UgcmVnaXN0ZXJz
JwpbICAgIDAuMDAwMDAwXSB4ODYvZnB1OiBTdXBwb3J0aW5nIFhTQVZFIGZlYXR1cmUgMHgwMDQ6
ICdBVlggcmVnaXN0ZXJzJwpbICAgIDAuMDAwMDAwXSB4ODYvZnB1OiB4c3RhdGVfb2Zmc2V0WzJd
OiAgNTc2LCB4c3RhdGVfc2l6ZXNbMl06ICAyNTYKWyAgICAwLjAwMDAwMF0geDg2L2ZwdTogRW5h
YmxlZCB4c3RhdGUgZmVhdHVyZXMgMHg3LCBjb250ZXh0IHNpemUgaXMgODMyIGJ5dGVzLCB1c2lu
ZyAnc3RhbmRhcmQnIGZvcm1hdC4KWyAgICAwLjAwMDAwMF0gUmVsZWFzZWQgMCBwYWdlKHMpClsg
ICAgMC4wMDAwMDBdIEJJT1MtcHJvdmlkZWQgcGh5c2ljYWwgUkFNIG1hcDoKWyAgICAwLjAwMDAw
MF0gWGVuOiBbbWVtIDB4MDAwMDAwMDAwMDAwMDAwMC0weDAwMDAwMDAwMDAwOWRmZmZdIHVzYWJs
ZQpbICAgIDAuMDAwMDAwXSBYZW46IFttZW0gMHgwMDAwMDAwMDAwMDllODAwLTB4MDAwMDAwMDAw
MDBmZmZmZl0gcmVzZXJ2ZWQKWyAgICAwLjAwMDAwMF0gWGVuOiBbbWVtIDB4MDAwMDAwMDAwMDEw
MDAwMC0weDAwMDAwMDAwODAwNjFmZmZdIHVzYWJsZQpbICAgIDAuMDAwMDAwXSBYZW46IFttZW0g
MHgwMDAwMDAwMGJhOTUyMDAwLTB4MDAwMDAwMDBiYTk4YWZmZl0gcmVzZXJ2ZWQKWyAgICAwLjAw
MDAwMF0gWGVuOiBbbWVtIDB4MDAwMDAwMDBiYWJjMzAwMC0weDAwMDAwMDAwYmIxYmZmZmZdIEFD
UEkgTlZTClsgICAgMC4wMDAwMDBdIFhlbjogW21lbSAweDAwMDAwMDAwYmIxYzAwMDAtMHgwMDAw
MDAwMGJiODQyZmZmXSByZXNlcnZlZApbICAgIDAuMDAwMDAwXSBYZW46IFttZW0gMHgwMDAwMDAw
MGJiODQ0MDAwLTB4MDAwMDAwMDBiYjhjOWZmZl0gQUNQSSBOVlMKWyAgICAwLjAwMDAwMF0gWGVu
OiBbbWVtIDB4MDAwMDAwMDBiYmQwZjAwMC0weDAwMDAwMDAwYmJmZjNmZmZdIHJlc2VydmVkClsg
ICAgMC4wMDAwMDBdIFhlbjogW21lbSAweDAwMDAwMDAwZDAwMDAwMDAtMHgwMDAwMDAwMGRmZmZm
ZmZmXSByZXNlcnZlZApbICAgIDAuMDAwMDAwXSBYZW46IFttZW0gMHgwMDAwMDAwMGZiZmZjMDAw
LTB4MDAwMDAwMDBmYmZmY2ZmZl0gcmVzZXJ2ZWQKWyAgICAwLjAwMDAwMF0gWGVuOiBbbWVtIDB4
MDAwMDAwMDBmZWMwMDAwMC0weDAwMDAwMDAwZmVjMDFmZmZdIHJlc2VydmVkClsgICAgMC4wMDAw
MDBdIFhlbjogW21lbSAweDAwMDAwMDAwZmVkMWMwMDAtMHgwMDAwMDAwMGZlZDFmZmZmXSByZXNl
cnZlZApbICAgIDAuMDAwMDAwXSBYZW46IFttZW0gMHgwMDAwMDAwMGZlZTAwMDAwLTB4MDAwMDAw
MDBmZWVmZmZmZl0gcmVzZXJ2ZWQKWyAgICAwLjAwMDAwMF0gWGVuOiBbbWVtIDB4MDAwMDAwMDBm
ZjAwMDAwMC0weDAwMDAwMDAwZmZmZmZmZmZdIHJlc2VydmVkClsgICAgMC4wMDAwMDBdIE5YIChF
eGVjdXRlIERpc2FibGUpIHByb3RlY3Rpb246IGFjdGl2ZQpbICAgIDAuMDAwMDAwXSBTTUJJT1Mg
Mi43IHByZXNlbnQuClsgICAgMC4wMDAwMDBdIERNSTogVG8gYmUgZmlsbGVkIGJ5IE8uRS5NLiBU
byBiZSBmaWxsZWQgYnkgTy5FLk0uL0ludGVsIFg3OSwgQklPUyA0LjYuNSAwNy8xNy8yMDE5Clsg
ICAgMC4wMDAwMDBdIEh5cGVydmlzb3IgZGV0ZWN0ZWQ6IFhlbiBQVgpbICAgIDAuMDQ2ODAzXSB0
c2M6IEZhc3QgVFNDIGNhbGlicmF0aW9uIHVzaW5nIFBJVApbICAgIDAuMDQ2ODA1XSB0c2M6IERl
dGVjdGVkIDI0OTQuMjc5IE1IeiBwcm9jZXNzb3IKWyAgICAwLjA0NjgwNl0gdHNjOiBEZXRlY3Rl
ZCAyNDk0LjM0MCBNSHogVFNDClsgICAgMC4wNTIyOTBdIGU4MjA6IHVwZGF0ZSBbbWVtIDB4MDAw
MDAwMDAtMHgwMDAwMGZmZl0gdXNhYmxlID09PiByZXNlcnZlZApbICAgIDAuMDUyMjkzXSBlODIw
OiByZW1vdmUgW21lbSAweDAwMGEwMDAwLTB4MDAwZmZmZmZdIHVzYWJsZQpbICAgIDAuMDUyMjk4
XSBsYXN0X3BmbiA9IDB4ODAwNjIgbWF4X2FyY2hfcGZuID0gMHg0MDAwMDAwMDAKWyAgICAwLjA1
MjI5OV0gRGlzYWJsZWQKWyAgICAwLjA1MjMwMF0geDg2L1BBVDogTVRSUnMgZGlzYWJsZWQsIHNr
aXBwaW5nIFBBVCBpbml0aWFsaXphdGlvbiB0b28uClsgICAgMC4wNTIzMDZdIHg4Ni9QQVQ6IENv
bmZpZ3VyYXRpb24gWzAtN106IFdCICBXVCAgVUMtIFVDICBXQyAgV1AgIFVDICBVQyAgClsgICAg
MC4wNjcwOTBdIEtlcm5lbC9Vc2VyIHBhZ2UgdGFibGVzIGlzb2xhdGlvbjogZGlzYWJsZWQgb24g
WEVOIFBWLgpbICAgIDAuNjc5NjQwXSBSQU1ESVNLOiBbbWVtIDB4MDQwMDAwMDAtMHgwNjFmNGZm
Zl0KWyAgICAwLjY3OTY1M10gQUNQSTogRWFybHkgdGFibGUgY2hlY2tzdW0gdmVyaWZpY2F0aW9u
IGRpc2FibGVkClsgICAgMC42ODU5MDddIEFDUEk6IFJTRFAgMHgwMDAwMDAwMDAwMEYwNEEwIDAw
MDAyNCAodjAyIEFMQVNLQSkKWyAgICAwLjY4NTkxOF0gQUNQSTogWFNEVCAweDAwMDAwMDAwQkIw
REQwNzAgMDAwMDVDICh2MDEgQUxBU0tBIEEgTSBJICAgIDAxMDcyMDA5IEFNSSAgMDAwMTAwMTMp
ClsgICAgMC42ODU5NDVdIEFDUEk6IEZBQ1AgMHgwMDAwMDAwMEJCMEU1NzI4IDAwMDEwQyAodjA1
IEFMQVNLQSBBIE0gSSAgICAwMTA3MjAwOSBBTUkgIDAwMDEwMDEzKQpbICAgIDAuNjg2MDEwXSBB
Q1BJOiBEU0RUIDB4MDAwMDAwMDBCQjBERDE2MCAwMDg1QzcgKHYwMiBBTEFTS0EgQSBNIEkgICAg
MDAwMDAwMjAgSU5UTCAyMDA1MTExNykKWyAgICAwLjY4NjAyNV0gQUNQSTogRkFDUyAweDAwMDAw
MDAwQkIxQjdGODAgMDAwMDQwClsgICAgMC42ODYwMzldIEFDUEk6IEFQSUMgMHgwMDAwMDAwMEJC
MEU1ODM4IDAwMDFBOCAodjAzIEFMQVNLQSBBIE0gSSAgICAwMTA3MjAwOSBBTUkgIDAwMDEwMDEz
KQpbICAgIDAuNjg2MDUzXSBBQ1BJOiBGUERUIDB4MDAwMDAwMDBCQjBFNTlFMCAwMDAwNDQgKHYw
MSBBTEFTS0EgQSBNIEkgICAgMDEwNzIwMDkgQU1JICAwMDAxMDAxMykKWyAgICAwLjY4NjA2OF0g
QUNQSTogTUNGRyAweDAwMDAwMDAwQkIwRTVBMjggMDAwMDNDICh2MDEgQUxBU0tBIE9FTU1DRkcu
IDAxMDcyMDA5IE1TRlQgMDAwMDAwOTcpClsgICAgMC42ODYwODJdIEFDUEk6IEhQRVQgMHgwMDAw
MDAwMEJCMEU1QTY4IDAwMDAzOCAodjAxIEFMQVNLQSBBIE0gSSAgICAwMTA3MjAwOSBBTUkuIDAw
MDAwMDA1KQpbICAgIDAuNjg2MDk3XSBBQ1BJOiBTU0RUIDB4MDAwMDAwMDBCQjBFNUFBMCAwQ0Qz
ODAgKHYwMiBJTlRFTCAgQ3B1UG0gICAgMDAwMDQwMDAgSU5UTCAyMDA1MTExNykKWyAgICAwLjY4
NjExMl0gQUNQSTogUk1BRCAweDAwMDAwMDAwQkIxQjJFMjAgMDAwMEVDICh2MDEgQSBNIEkgIE9F
TURNQVIgIDAwMDAwMDAxIElOVEwgMDAwMDAwMDEpClsgICAgMC42ODYxNTFdIEFDUEk6IExvY2Fs
IEFQSUMgYWRkcmVzcyAweGZlZTAwMDAwClsgICAgMC42ODYxNTJdIFNldHRpbmcgQVBJQyByb3V0
aW5nIHRvIFhlbiBQVi4KWyAgICAwLjY4NjE4NV0gTlVNQSB0dXJuZWQgb2ZmClsgICAgMC42ODYx
ODddIEZha2luZyBhIG5vZGUgYXQgW21lbSAweDAwMDAwMDAwMDAwMDAwMDAtMHgwMDAwMDAwMDgw
MDYxZmZmXQpbICAgIDAuNjg2MTk5XSBOT0RFX0RBVEEoMCkgYWxsb2NhdGVkIFttZW0gMHgzZmJm
NzAwMC0weDNmYzIwZmZmXQpbICAgIDAuNjk5MjQ5XSBab25lIHJhbmdlczoKWyAgICAwLjY5OTI1
MV0gICBETUEgICAgICBbbWVtIDB4MDAwMDAwMDAwMDAwMTAwMC0weDAwMDAwMDAwMDBmZmZmZmZd
ClsgICAgMC42OTkyNTNdICAgRE1BMzIgICAgW21lbSAweDAwMDAwMDAwMDEwMDAwMDAtMHgwMDAw
MDAwMDgwMDYxZmZmXQpbICAgIDAuNjk5MjU0XSAgIE5vcm1hbCAgIGVtcHR5ClsgICAgMC42OTky
NTVdICAgRGV2aWNlICAgZW1wdHkKWyAgICAwLjY5OTI1N10gTW92YWJsZSB6b25lIHN0YXJ0IGZv
ciBlYWNoIG5vZGUKWyAgICAwLjY5OTI2MF0gRWFybHkgbWVtb3J5IG5vZGUgcmFuZ2VzClsgICAg
MC42OTkyNjFdICAgbm9kZSAgIDA6IFttZW0gMHgwMDAwMDAwMDAwMDAxMDAwLTB4MDAwMDAwMDAw
MDA5ZGZmZl0KWyAgICAwLjY5OTI2Ml0gICBub2RlICAgMDogW21lbSAweDAwMDAwMDAwMDAxMDAw
MDAtMHgwMDAwMDAwMDgwMDYxZmZmXQpbICAgIDAuNjk5NTg3XSBaZXJvZWQgc3RydWN0IHBhZ2Ug
aW4gdW5hdmFpbGFibGUgcmFuZ2VzOiAzMjc2OSBwYWdlcwpbICAgIDAuNjk5NTg5XSBJbml0bWVt
IHNldHVwIG5vZGUgMCBbbWVtIDB4MDAwMDAwMDAwMDAwMTAwMC0weDAwMDAwMDAwODAwNjFmZmZd
ClsgICAgMC42OTk1OTBdIE9uIG5vZGUgMCB0b3RhbHBhZ2VzOiA1MjQyODcKWyAgICAwLjY5OTU5
Ml0gICBETUEgem9uZTogNjQgcGFnZXMgdXNlZCBmb3IgbWVtbWFwClsgICAgMC42OTk1OTJdICAg
RE1BIHpvbmU6IDIxIHBhZ2VzIHJlc2VydmVkClsgICAgMC42OTk1OTNdICAgRE1BIHpvbmU6IDM5
OTcgcGFnZXMsIExJRk8gYmF0Y2g6MApbICAgIDAuNjk5NjQ0XSAgIERNQTMyIHpvbmU6IDgxMzAg
cGFnZXMgdXNlZCBmb3IgbWVtbWFwClsgICAgMC42OTk2NDVdICAgRE1BMzIgem9uZTogNTIwMjkw
IHBhZ2VzLCBMSUZPIGJhdGNoOjYzClsgICAgMC43MDAzNzVdIHAybSB2aXJ0dWFsIGFyZWEgYXQg
KF9fX19wdHJ2YWxfX19fKSwgc2l6ZSBpcyA0MDAwMDAwMApbICAgIDAuOTk4MTc4XSBSZW1hcHBl
ZCA5OCBwYWdlKHMpClsgICAgMS4wMDAyNjddIEFDUEk6IFBNLVRpbWVyIElPIFBvcnQ6IDB4NDA4
ClsgICAgMS4wMDAyNzRdIEFDUEk6IExvY2FsIEFQSUMgYWRkcmVzcyAweGZlZTAwMDAwClsgICAg
MS4wMDAzNDddIEFDUEk6IExBUElDX05NSSAoYWNwaV9pZFsweDAwXSBoaWdoIGVkZ2UgbGludFsw
eDFdKQpbICAgIDEuMDAwMzQ5XSBBQ1BJOiBMQVBJQ19OTUkgKGFjcGlfaWRbMHgwMl0gaGlnaCBl
ZGdlIGxpbnRbMHgxXSkKWyAgICAxLjAwMDM1MF0gQUNQSTogTEFQSUNfTk1JIChhY3BpX2lkWzB4
MDRdIGhpZ2ggZWRnZSBsaW50WzB4MV0pClsgICAgMS4wMDAzNTJdIEFDUEk6IExBUElDX05NSSAo
YWNwaV9pZFsweDA2XSBoaWdoIGVkZ2UgbGludFsweDFdKQpbICAgIDEuMDAwMzUzXSBBQ1BJOiBM
QVBJQ19OTUkgKGFjcGlfaWRbMHgwOF0gaGlnaCBlZGdlIGxpbnRbMHgxXSkKWyAgICAxLjAwMDM1
NF0gQUNQSTogTEFQSUNfTk1JIChhY3BpX2lkWzB4MGFdIGhpZ2ggZWRnZSBsaW50WzB4MV0pClsg
ICAgMS4wMDAzNTZdIEFDUEk6IExBUElDX05NSSAoYWNwaV9pZFsweDBjXSBoaWdoIGVkZ2UgbGlu
dFsweDFdKQpbICAgIDEuMDAwMzU3XSBBQ1BJOiBMQVBJQ19OTUkgKGFjcGlfaWRbMHgwZV0gaGln
aCBlZGdlIGxpbnRbMHgxXSkKWyAgICAxLjAwMDM1OV0gQUNQSTogTEFQSUNfTk1JIChhY3BpX2lk
WzB4MTBdIGhpZ2ggZWRnZSBsaW50WzB4MV0pClsgICAgMS4wMDAzNjBdIEFDUEk6IExBUElDX05N
SSAoYWNwaV9pZFsweDEyXSBoaWdoIGVkZ2UgbGludFsweDFdKQpbICAgIDEuMDAwMzYyXSBBQ1BJ
OiBMQVBJQ19OTUkgKGFjcGlfaWRbMHgxNF0gaGlnaCBlZGdlIGxpbnRbMHgxXSkKWyAgICAxLjAw
MDM2M10gQUNQSTogTEFQSUNfTk1JIChhY3BpX2lkWzB4MTZdIGhpZ2ggZWRnZSBsaW50WzB4MV0p
ClsgICAgMS4wMDAzNjVdIEFDUEk6IExBUElDX05NSSAoYWNwaV9pZFsweDAxXSBoaWdoIGVkZ2Ug
bGludFsweDFdKQpbICAgIDEuMDAwMzY2XSBBQ1BJOiBMQVBJQ19OTUkgKGFjcGlfaWRbMHgwM10g
aGlnaCBlZGdlIGxpbnRbMHgxXSkKWyAgICAxLjAwMDM2N10gQUNQSTogTEFQSUNfTk1JIChhY3Bp
X2lkWzB4MDVdIGhpZ2ggZWRnZSBsaW50WzB4MV0pClsgICAgMS4wMDAzNjldIEFDUEk6IExBUElD
X05NSSAoYWNwaV9pZFsweDA3XSBoaWdoIGVkZ2UgbGludFsweDFdKQpbICAgIDEuMDAwMzcwXSBB
Q1BJOiBMQVBJQ19OTUkgKGFjcGlfaWRbMHgwOV0gaGlnaCBlZGdlIGxpbnRbMHgxXSkKWyAgICAx
LjAwMDM3Ml0gQUNQSTogTEFQSUNfTk1JIChhY3BpX2lkWzB4MGJdIGhpZ2ggZWRnZSBsaW50WzB4
MV0pClsgICAgMS4wMDAzNzNdIEFDUEk6IExBUElDX05NSSAoYWNwaV9pZFsweDBkXSBoaWdoIGVk
Z2UgbGludFsweDFdKQpbICAgIDEuMDAwMzc0XSBBQ1BJOiBMQVBJQ19OTUkgKGFjcGlfaWRbMHgw
Zl0gaGlnaCBlZGdlIGxpbnRbMHgxXSkKWyAgICAxLjAwMDM3Nl0gQUNQSTogTEFQSUNfTk1JIChh
Y3BpX2lkWzB4MTFdIGhpZ2ggZWRnZSBsaW50WzB4MV0pClsgICAgMS4wMDAzNzddIEFDUEk6IExB
UElDX05NSSAoYWNwaV9pZFsweDEzXSBoaWdoIGVkZ2UgbGludFsweDFdKQpbICAgIDEuMDAwMzc5
XSBBQ1BJOiBMQVBJQ19OTUkgKGFjcGlfaWRbMHgxNV0gaGlnaCBlZGdlIGxpbnRbMHgxXSkKWyAg
ICAxLjAwMDM4MF0gQUNQSTogTEFQSUNfTk1JIChhY3BpX2lkWzB4MTddIGhpZ2ggZWRnZSBsaW50
WzB4MV0pClsgICAgMS4wMDA0MDldIElPQVBJQ1swXTogYXBpY19pZCAwLCB2ZXJzaW9uIDMyLCBh
ZGRyZXNzIDB4ZmVjMDAwMDAsIEdTSSAwLTIzClsgICAgMS4wMDA0MjJdIElPQVBJQ1sxXTogYXBp
Y19pZCAyLCB2ZXJzaW9uIDMyLCBhZGRyZXNzIDB4ZmVjMDEwMDAsIEdTSSAyNC00NwpbICAgIDEu
MDAwNDQwXSBBQ1BJOiBJTlRfU1JDX09WUiAoYnVzIDAgYnVzX2lycSAwIGdsb2JhbF9pcnEgMiBk
ZmwgZGZsKQpbICAgIDEuMDAwNDQzXSBBQ1BJOiBJTlRfU1JDX09WUiAoYnVzIDAgYnVzX2lycSA5
IGdsb2JhbF9pcnEgOSBoaWdoIGxldmVsKQpbICAgIDEuMDAwNDUyXSBBQ1BJOiBJUlEwIHVzZWQg
Ynkgb3ZlcnJpZGUuClsgICAgMS4wMDA0NTNdIEFDUEk6IElSUTkgdXNlZCBieSBvdmVycmlkZS4K
WyAgICAxLjAwMDQ2OV0gVXNpbmcgQUNQSSAoTUFEVCkgZm9yIFNNUCBjb25maWd1cmF0aW9uIGlu
Zm9ybWF0aW9uClsgICAgMS4wMDA0NzVdIEFDUEk6IEhQRVQgaWQ6IDB4ODA4NmE3MDEgYmFzZTog
MHhmZWQwMDAwMApbICAgIDEuMDAwNDgzXSBzbXBib290OiBBbGxvd2luZyAyNCBDUFVzLCAwIGhv
dHBsdWcgQ1BVcwpbICAgIDEuMDAwNTAzXSBQTTogaGliZXJuYXRpb246IFJlZ2lzdGVyZWQgbm9z
YXZlIG1lbW9yeTogW21lbSAweDAwMDAwMDAwLTB4MDAwMDBmZmZdClsgICAgMS4wMDA1MDVdIFBN
OiBoaWJlcm5hdGlvbjogUmVnaXN0ZXJlZCBub3NhdmUgbWVtb3J5OiBbbWVtIDB4MDAwOWUwMDAt
MHgwMDA5ZWZmZl0KWyAgICAxLjAwMDUwNV0gUE06IGhpYmVybmF0aW9uOiBSZWdpc3RlcmVkIG5v
c2F2ZSBtZW1vcnk6IFttZW0gMHgwMDA5ZjAwMC0weDAwMGZmZmZmXQpbICAgIDEuMDAwNTA3XSBb
bWVtIDB4ODAwNjIwMDAtMHhiYTk1MWZmZl0gYXZhaWxhYmxlIGZvciBQQ0kgZGV2aWNlcwpbICAg
IDEuMDAwNTEwXSBCb290aW5nIHBhcmF2aXJ0dWFsaXplZCBrZXJuZWwgb24gWGVuClsgICAgMS4w
MDA1MTFdIFhlbiB2ZXJzaW9uOiA0LjExLjQgKHByZXNlcnZlLUFEKQpbICAgIDEuMDAwNTE1XSBj
bG9ja3NvdXJjZTogcmVmaW5lZC1qaWZmaWVzOiBtYXNrOiAweGZmZmZmZmZmIG1heF9jeWNsZXM6
IDB4ZmZmZmZmZmYsIG1heF9pZGxlX25zOiA3NjQ1NTE5NjAwMjExNTY4IG5zClsgICAgMS4wMDUw
NzddIHNldHVwX3BlcmNwdTogTlJfQ1BVUzo4MTkyIG5yX2NwdW1hc2tfYml0czoyNCBucl9jcHVf
aWRzOjI0IG5yX25vZGVfaWRzOjEKWyAgICAxLjAwNTg5N10gcGVyY3B1OiBFbWJlZGRlZCA1NCBw
YWdlcy9jcHUgczE4Mzk2MCByODE5MiBkMjkwMzIgdTI2MjE0NApbICAgIDEuMDA1OTA0XSBwY3B1
LWFsbG9jOiBzMTgzOTYwIHI4MTkyIGQyOTAzMiB1MjYyMTQ0IGFsbG9jPTEqMjA5NzE1MgpbICAg
IDEuMDA1OTA2XSBwY3B1LWFsbG9jOiBbMF0gMDAgMDEgMDIgMDMgMDQgMDUgMDYgMDcgWzBdIDA4
IDA5IDEwIDExIDEyIDEzIDE0IDE1IApbICAgIDEuMDA1OTE2XSBwY3B1LWFsbG9jOiBbMF0gMTYg
MTcgMTggMTkgMjAgMjEgMjIgMjMgClsgICAgMS4wMDU5ODZdIHhlbjogUFYgc3BpbmxvY2tzIGVu
YWJsZWQKWyAgICAxLjAwNTk5MV0gUFYgcXNwaW5sb2NrIGhhc2ggdGFibGUgZW50cmllczogMjU2
IChvcmRlcjogMCwgNDA5NiBieXRlcywgbGluZWFyKQpbICAgIDEuMDA1OTk1XSBCdWlsdCAxIHpv
bmVsaXN0cywgbW9iaWxpdHkgZ3JvdXBpbmcgb24uICBUb3RhbCBwYWdlczogNTE2MDcyClsgICAg
MS4wMDU5OTZdIFBvbGljeSB6b25lOiBETUEzMgpbICAgIDEuMDA1OTk4XSBLZXJuZWwgY29tbWFu
ZCBsaW5lOiBwbGFjZWhvbGRlciByb290PVVVSUQ9ZmZkMmU0NGUtM2ViNy00YzRkLWEwMjgtZGFk
N2RhMDNjODMxIHJvIGxvZ2xldmVsPTMKWyAgICAxLjAwNjIwMl0gRGVudHJ5IGNhY2hlIGhhc2gg
dGFibGUgZW50cmllczogMjYyMTQ0IChvcmRlcjogOSwgMjA5NzE1MiBieXRlcywgbGluZWFyKQpb
ICAgIDEuMDA2MjgzXSBJbm9kZS1jYWNoZSBoYXNoIHRhYmxlIGVudHJpZXM6IDEzMTA3MiAob3Jk
ZXI6IDgsIDEwNDg1NzYgYnl0ZXMsIGxpbmVhcikKWyAgICAxLjAwNjc4OV0gbWVtIGF1dG8taW5p
dDogc3RhY2s6b2ZmLCBoZWFwIGFsbG9jOm9uLCBoZWFwIGZyZWU6b2ZmClsgICAgMS4wNTE1NThd
IHNvZnR3YXJlIElPIFRMQjogbWFwcGVkIFttZW0gMHgwMDAwMDAwMDM4YzAwMDAwLTB4MDAwMDAw
MDAzY2MwMDAwMF0gKDY0TUIpClsgICAgMS4wNzIwNTddIE1lbW9yeTogMTk3ODA4Sy8yMDk3MTQ4
SyBhdmFpbGFibGUgKDEyMjk1SyBrZXJuZWwgY29kZSwgMjU0MEsgcndkYXRhLCA0MDYwSyByb2Rh
dGEsIDIzODBLIGluaXQsIDE2OTJLIGJzcywgMTIyOTY5MksgcmVzZXJ2ZWQsIDBLIGNtYS1yZXNl
cnZlZCkKWyAgICAxLjA3MjA2NV0gcmFuZG9tOiBnZXRfcmFuZG9tX3U2NCBjYWxsZWQgZnJvbSBf
X2ttZW1fY2FjaGVfY3JlYXRlKzB4MmUvMHg1NTAgd2l0aCBjcm5nX2luaXQ9MApbICAgIDEuMDcy
MzY4XSBTTFVCOiBIV2FsaWduPTY0LCBPcmRlcj0wLTMsIE1pbk9iamVjdHM9MCwgQ1BVcz00LCBO
b2Rlcz0xClsgICAgMS4wNzMzMDVdIGZ0cmFjZTogYWxsb2NhdGluZyAzNTk4OCBlbnRyaWVzIGlu
IDE0MSBwYWdlcwpbICAgIDEuMDg2OTI0XSBmdHJhY2U6IGFsbG9jYXRlZCAxNDEgcGFnZXMgd2l0
aCA0IGdyb3VwcwpbICAgIDEuMDg3MzM1XSByY3U6IEhpZXJhcmNoaWNhbCBSQ1UgaW1wbGVtZW50
YXRpb24uClsgICAgMS4wODczMzddIHJjdTogCVJDVSByZXN0cmljdGluZyBDUFVzIGZyb20gTlJf
Q1BVUz04MTkyIHRvIG5yX2NwdV9pZHM9NC4KWyAgICAxLjA4NzMzOF0gCVJ1ZGUgdmFyaWFudCBv
ZiBUYXNrcyBSQ1UgZW5hYmxlZC4KWyAgICAxLjA4NzMzOV0gCVRyYWNpbmcgdmFyaWFudCBvZiBU
YXNrcyBSQ1UgZW5hYmxlZC4KWyAgICAxLjA4NzM0MF0gcmN1OiBSQ1UgY2FsY3VsYXRlZCB2YWx1
ZSBvZiBzY2hlZHVsZXItZW5saXN0bWVudCBkZWxheSBpcyAyNSBqaWZmaWVzLgpbICAgIDEuMDg3
MzQxXSByY3U6IEFkanVzdGluZyBnZW9tZXRyeSBmb3IgcmN1X2Zhbm91dF9sZWFmPTE2LCBucl9j
cHVfaWRzPTQKWyAgICAxLjA5ODMyNV0gVXNpbmcgTlVMTCBsZWdhY3kgUElDClsgICAgMS4wOTgz
MjZdIE5SX0lSUVM6IDUyNDU0NCwgbnJfaXJxczogODY0LCBwcmVhbGxvY2F0ZWQgaXJxczogMApb
ICAgIDEuMDk4MzkxXSB4ZW46ZXZlbnRzOiBVc2luZyBGSUZPLWJhc2VkIEFCSQpbICAgIDEuMDk4
NDI0XSB4ZW46IC0tPiBwaXJxPTEgLT4gaXJxPTEgKGdzaT0xKQpbICAgIDEuMDk4NDM5XSB4ZW46
IC0tPiBwaXJxPTIgLT4gaXJxPTIgKGdzaT0yKQpbICAgIDEuMDk4NDUzXSB4ZW46IC0tPiBwaXJx
PTMgLT4gaXJxPTMgKGdzaT0zKQpbICAgIDEuMDk4NDY3XSB4ZW46IC0tPiBwaXJxPTQgLT4gaXJx
PTQgKGdzaT00KQpbICAgIDEuMDk4NDgwXSB4ZW46IC0tPiBwaXJxPTUgLT4gaXJxPTUgKGdzaT01
KQpbICAgIDEuMDk4NDk0XSB4ZW46IC0tPiBwaXJxPTYgLT4gaXJxPTYgKGdzaT02KQpbICAgIDEu
MDk4NTA4XSB4ZW46IC0tPiBwaXJxPTcgLT4gaXJxPTcgKGdzaT03KQpbICAgIDEuMDk4NTIxXSB4
ZW46IC0tPiBwaXJxPTggLT4gaXJxPTggKGdzaT04KQpbICAgIDEuMDk4NTM2XSB4ZW46IC0tPiBw
aXJxPTkgLT4gaXJxPTkgKGdzaT05KQpbICAgIDEuMDk4NTUwXSB4ZW46IC0tPiBwaXJxPTEwIC0+
IGlycT0xMCAoZ3NpPTEwKQpbICAgIDEuMDk4NTY1XSB4ZW46IC0tPiBwaXJxPTExIC0+IGlycT0x
MSAoZ3NpPTExKQpbICAgIDEuMDk4NTc5XSB4ZW46IC0tPiBwaXJxPTEyIC0+IGlycT0xMiAoZ3Np
PTEyKQpbICAgIDEuMDk4NTkzXSB4ZW46IC0tPiBwaXJxPTEzIC0+IGlycT0xMyAoZ3NpPTEzKQpb
ICAgIDEuMDk4NjA3XSB4ZW46IC0tPiBwaXJxPTE0IC0+IGlycT0xNCAoZ3NpPTE0KQpbICAgIDEu
MDk4NjIxXSB4ZW46IC0tPiBwaXJxPTE1IC0+IGlycT0xNSAoZ3NpPTE1KQpbICAgIDEuMDk4NjY3
XSByYW5kb206IGNybmcgZG9uZSAodHJ1c3RpbmcgQ1BVJ3MgbWFudWZhY3R1cmVyKQpbICAgIDEu
MTAzNjM1XSBDb25zb2xlOiBjb2xvdXIgVkdBKyA4MHgyNQpbICAgIDEuMTAzNjQ3XSBwcmludGs6
IGNvbnNvbGUgW3R0eTBdIGVuYWJsZWQKWyAgICAxLjEwMzY1OF0gcHJpbnRrOiBjb25zb2xlIFto
dmMwXSBlbmFibGVkClsgICAgMS4xMDM2ODZdIEFDUEk6IENvcmUgcmV2aXNpb24gMjAyMDA5MjUK
WyAgICAxLjE3Njk5OF0gY2xvY2tzb3VyY2U6IHhlbjogbWFzazogMHhmZmZmZmZmZmZmZmZmZmZm
IG1heF9jeWNsZXM6IDB4MWNkNDJlNGRmZmIsIG1heF9pZGxlX25zOiA4ODE1OTA1OTE0ODMgbnMK
WyAgICAxLjE3NzAxMF0gWGVuOiB1c2luZyB2Y3B1b3AgdGltZXIgaW50ZXJmYWNlClsgICAgMS4x
NzcwMTVdIGluc3RhbGxpbmcgWGVuIHRpbWVyIGZvciBDUFUgMApbICAgIDEuMTc3MDU1XSBjbG9j
a3NvdXJjZTogdHNjLWVhcmx5OiBtYXNrOiAweGZmZmZmZmZmZmZmZmZmZmYgbWF4X2N5Y2xlczog
MHgyM2Y0NTcyNWYzMiwgbWF4X2lkbGVfbnM6IDQ0MDc5NTI3Mzc3MCBucwpbICAgIDEuMTc3MDU4
XSBDYWxpYnJhdGluZyBkZWxheSBsb29wIChza2lwcGVkKSwgdmFsdWUgY2FsY3VsYXRlZCB1c2lu
ZyB0aW1lciBmcmVxdWVuY3kuLiA0OTg4LjY4IEJvZ29NSVBTIChscGo9OTk3NzM2MCkKWyAgICAx
LjE3NzA2MV0gcGlkX21heDogZGVmYXVsdDogMzI3NjggbWluaW11bTogMzAxClsgICAgMS4xNzcx
NTVdIExTTTogU2VjdXJpdHkgRnJhbWV3b3JrIGluaXRpYWxpemluZwpbICAgIDEuMTc3MTgyXSBZ
YW1hOiBkaXNhYmxlZCBieSBkZWZhdWx0OyBlbmFibGUgd2l0aCBzeXNjdGwga2VybmVsLnlhbWEu
KgpbICAgIDEuMTc3Mjc3XSBBcHBBcm1vcjogQXBwQXJtb3IgaW5pdGlhbGl6ZWQKWyAgICAxLjE3
NzI4M10gVE9NT1lPIExpbnV4IGluaXRpYWxpemVkClsgICAgMS4xNzczMzBdIE1vdW50LWNhY2hl
IGhhc2ggdGFibGUgZW50cmllczogNDA5NiAob3JkZXI6IDMsIDMyNzY4IGJ5dGVzLCBsaW5lYXIp
ClsgICAgMS4xNzczMzVdIE1vdW50cG9pbnQtY2FjaGUgaGFzaCB0YWJsZSBlbnRyaWVzOiA0MDk2
IChvcmRlcjogMywgMzI3NjggYnl0ZXMsIGxpbmVhcikKWyAgICAxLjE3ODEwMl0gTGFzdCBsZXZl
bCBpVExCIGVudHJpZXM6IDRLQiA1MTIsIDJNQiA4LCA0TUIgOApbICAgIDEuMTc4MTA0XSBMYXN0
IGxldmVsIGRUTEIgZW50cmllczogNEtCIDUxMiwgMk1CIDAsIDRNQiAwLCAxR0IgNApbICAgIDEu
MTc4MTA4XSBTcGVjdHJlIFYxIDogTWl0aWdhdGlvbjogdXNlcmNvcHkvc3dhcGdzIGJhcnJpZXJz
IGFuZCBfX3VzZXIgcG9pbnRlciBzYW5pdGl6YXRpb24KWyAgICAxLjE3ODEwOV0gU3BlY3RyZSBW
MiA6IE1pdGlnYXRpb246IEZ1bGwgZ2VuZXJpYyByZXRwb2xpbmUKWyAgICAxLjE3ODExMF0gU3Bl
Y3RyZSBWMiA6IFNwZWN0cmUgdjIgLyBTcGVjdHJlUlNCIG1pdGlnYXRpb246IEZpbGxpbmcgUlNC
IG9uIGNvbnRleHQgc3dpdGNoClsgICAgMS4xNzgxMTFdIFNwZWN1bGF0aXZlIFN0b3JlIEJ5cGFz
czogVnVsbmVyYWJsZQpbICAgIDEuMTc4MTEzXSBNRFM6IFZ1bG5lcmFibGU6IENsZWFyIENQVSBi
dWZmZXJzIGF0dGVtcHRlZCwgbm8gbWljcm9jb2RlClsgICAgMS4xNzgyNTNdIEZyZWVpbmcgU01Q
IGFsdGVybmF0aXZlcyBtZW1vcnk6IDMySwpbICAgIDEuMTgxMTg3XSBjcHUgMCBzcGlubG9jayBl
dmVudCBpcnEgNDkKWyAgICAxLjE4MTE5NF0gVlBNVSBkaXNhYmxlZCBieSBoeXBlcnZpc29yLgpb
ICAgIDEuMTgxNTg4XSBQZXJmb3JtYW5jZSBFdmVudHM6IHVuc3VwcG9ydGVkIHA2IENQVSBtb2Rl
bCA2MiBubyBQTVUgZHJpdmVyLCBzb2Z0d2FyZSBldmVudHMgb25seS4KWyAgICAxLjE4MTY5M10g
cmN1OiBIaWVyYXJjaGljYWwgU1JDVSBpbXBsZW1lbnRhdGlvbi4KWyAgICAxLjE4MjE2OV0gTk1J
IHdhdGNoZG9nOiBQZXJmIE5NSSB3YXRjaGRvZyBwZXJtYW5lbnRseSBkaXNhYmxlZApbICAgIDEu
MTgyMzIxXSBzbXA6IEJyaW5naW5nIHVwIHNlY29uZGFyeSBDUFVzIC4uLgpbICAgIDEuMTgyNTkw
XSBpbnN0YWxsaW5nIFhlbiB0aW1lciBmb3IgQ1BVIDEKWyAgICAxLjE4Mjk2MF0gY3B1IDEgc3Bp
bmxvY2sgZXZlbnQgaXJxIDU5ClsgICAgMS4xODI5NjBdIE1EUyBDUFUgYnVnIHByZXNlbnQgYW5k
IFNNVCBvbiwgZGF0YSBsZWFrIHBvc3NpYmxlLiBTZWUgaHR0cHM6Ly93d3cua2VybmVsLm9yZy9k
b2MvaHRtbC9sYXRlc3QvYWRtaW4tZ3VpZGUvaHctdnVsbi9tZHMuaHRtbCBmb3IgbW9yZSBkZXRh
aWxzLgpbICAgIDEuMTgyOTYwXSBpbnN0YWxsaW5nIFhlbiB0aW1lciBmb3IgQ1BVIDIKWyAgICAx
LjE4Mjk2MF0gY3B1IDIgc3BpbmxvY2sgZXZlbnQgaXJxIDY1ClsgICAgMS4xODI5NjBdIGluc3Rh
bGxpbmcgWGVuIHRpbWVyIGZvciBDUFUgMwpbICAgIDEuMTgyOTYwXSBjcHUgMyBzcGlubG9jayBl
dmVudCBpcnEgNzEKWyAgICAxLjE4Mjk2MF0gc21wOiBCcm91Z2h0IHVwIDEgbm9kZSwgNCBDUFVz
ClsgICAgMS4xODI5NjBdIHNtcGJvb3Q6IE1heCBsb2dpY2FsIHBhY2thZ2VzOiA2ClsgICAgMS4x
ODY3NzZdIG5vZGUgMCBkZWZlcnJlZCBwYWdlcyBpbml0aWFsaXNlZCBpbiAwbXMKWyAgICAxLjE4
NjgxNF0gZGV2dG1wZnM6IGluaXRpYWxpemVkClsgICAgMS4xODY4MTRdIHg4Ni9tbTogTWVtb3J5
IGJsb2NrIHNpemU6IDEyOE1CClsgICAgMS4xODY4MTRdIFBNOiBSZWdpc3RlcmluZyBBQ1BJIE5W
UyByZWdpb24gW21lbSAweGJhYmMzMDAwLTB4YmIxYmZmZmZdICg2Mjc5MTY4IGJ5dGVzKQpbICAg
IDEuMTg2ODE0XSBQTTogUmVnaXN0ZXJpbmcgQUNQSSBOVlMgcmVnaW9uIFttZW0gMHhiYjg0NDAw
MC0weGJiOGM5ZmZmXSAoNTQ4ODY0IGJ5dGVzKQpbICAgIDEuMTg2ODE0XSBjbG9ja3NvdXJjZTog
amlmZmllczogbWFzazogMHhmZmZmZmZmZiBtYXhfY3ljbGVzOiAweGZmZmZmZmZmLCBtYXhfaWRs
ZV9uczogNzY0NTA0MTc4NTEwMDAwMCBucwpbICAgIDEuMTg2ODE0XSBmdXRleCBoYXNoIHRhYmxl
IGVudHJpZXM6IDEwMjQgKG9yZGVyOiA0LCA2NTUzNiBieXRlcywgbGluZWFyKQpbICAgIDEuMTg2
ODE0XSBwaW5jdHJsIGNvcmU6IGluaXRpYWxpemVkIHBpbmN0cmwgc3Vic3lzdGVtClsgICAgMS4x
ODY4MTRdIE5FVDogUmVnaXN0ZXJlZCBwcm90b2NvbCBmYW1pbHkgMTYKWyAgICAxLjE4NjgxNF0g
eGVuOmdyYW50X3RhYmxlOiBHcmFudCB0YWJsZXMgdXNpbmcgdmVyc2lvbiAxIGxheW91dApbICAg
IDEuMTg2ODE0XSBHcmFudCB0YWJsZSBpbml0aWFsaXplZApbICAgIDEuMTg2ODE0XSBhdWRpdDog
aW5pdGlhbGl6aW5nIG5ldGxpbmsgc3Vic3lzIChkaXNhYmxlZCkKWyAgICAxLjE4NjgxNF0gYXVk
aXQ6IHR5cGU9MjAwMCBhdWRpdCgxNjEyMTkyNzk0Ljk0NzoxKTogc3RhdGU9aW5pdGlhbGl6ZWQg
YXVkaXRfZW5hYmxlZD0wIHJlcz0xClsgICAgMS4xODY4MTRdIHRoZXJtYWxfc3lzOiBSZWdpc3Rl
cmVkIHRoZXJtYWwgZ292ZXJub3IgJ2ZhaXJfc2hhcmUnClsgICAgMS4xODY4MTRdIHRoZXJtYWxf
c3lzOiBSZWdpc3RlcmVkIHRoZXJtYWwgZ292ZXJub3IgJ2JhbmdfYmFuZycKWyAgICAxLjE4Njgx
NF0gdGhlcm1hbF9zeXM6IFJlZ2lzdGVyZWQgdGhlcm1hbCBnb3Zlcm5vciAnc3RlcF93aXNlJwpb
ICAgIDEuMTg2ODE0XSB0aGVybWFsX3N5czogUmVnaXN0ZXJlZCB0aGVybWFsIGdvdmVybm9yICd1
c2VyX3NwYWNlJwpbICAgIDEuMTg2ODE0XSBBQ1BJOiBidXMgdHlwZSBQQ0kgcmVnaXN0ZXJlZApb
ICAgIDEuMTg2ODE0XSBhY3BpcGhwOiBBQ1BJIEhvdCBQbHVnIFBDSSBDb250cm9sbGVyIERyaXZl
ciB2ZXJzaW9uOiAwLjUKWyAgICAxLjE4NjgxNF0gUENJOiBNTUNPTkZJRyBmb3IgZG9tYWluIDAw
MDAgW2J1cyAwMC1mZl0gYXQgW21lbSAweGQwMDAwMDAwLTB4ZGZmZmZmZmZdIChiYXNlIDB4ZDAw
MDAwMDApClsgICAgMS4xODY4MTRdIFBDSTogTU1DT05GSUcgYXQgW21lbSAweGQwMDAwMDAwLTB4
ZGZmZmZmZmZdIHJlc2VydmVkIGluIEU4MjAKWyAgICAxLjI3NzU5MV0gUENJOiBVc2luZyBjb25m
aWd1cmF0aW9uIHR5cGUgMSBmb3IgYmFzZSBhY2Nlc3MKWyAgICAxLjQ2MTI0OV0gQUNQSTogQWRk
ZWQgX09TSShNb2R1bGUgRGV2aWNlKQpbICAgIDEuNDYxMjUxXSBBQ1BJOiBBZGRlZCBfT1NJKFBy
b2Nlc3NvciBEZXZpY2UpClsgICAgMS40NjEyNTJdIEFDUEk6IEFkZGVkIF9PU0koMy4wIF9TQ1Ag
RXh0ZW5zaW9ucykKWyAgICAxLjQ2MTI1NF0gQUNQSTogQWRkZWQgX09TSShQcm9jZXNzb3IgQWdn
cmVnYXRvciBEZXZpY2UpClsgICAgMS40NjEyNTVdIEFDUEk6IEFkZGVkIF9PU0koTGludXgtRGVs
bC1WaWRlbykKWyAgICAxLjQ2MTI1N10gQUNQSTogQWRkZWQgX09TSShMaW51eC1MZW5vdm8tTlYt
SERNSS1BdWRpbykKWyAgICAxLjQ2MTI1OV0gQUNQSTogQWRkZWQgX09TSShMaW51eC1IUEktSHli
cmlkLUdyYXBoaWNzKQpbICAgIDEuNjI5NDA4XSBBQ1BJOiAyIEFDUEkgQU1MIHRhYmxlcyBzdWNj
ZXNzZnVsbHkgYWNxdWlyZWQgYW5kIGxvYWRlZApbICAgIDEuNjQzNzMxXSB4ZW46IHJlZ2lzdGVy
aW5nIGdzaSA5IHRyaWdnZXJpbmcgMCBwb2xhcml0eSAwClsgICAgMS42NDkwOTVdIEFDUEk6IElu
dGVycHJldGVyIGVuYWJsZWQKWyAgICAxLjY0OTEwOF0gQUNQSTogKHN1cHBvcnRzIFMwIFM1KQpb
ICAgIDEuNjQ5MTEwXSBBQ1BJOiBVc2luZyBJT0FQSUMgZm9yIGludGVycnVwdCByb3V0aW5nClsg
ICAgMS42NDkxNDldIFBDSTogVXNpbmcgaG9zdCBicmlkZ2Ugd2luZG93cyBmcm9tIEFDUEk7IGlm
IG5lY2Vzc2FyeSwgdXNlICJwY2k9bm9jcnMiIGFuZCByZXBvcnQgYSBidWcKWyAgICAxLjY0OTcx
OV0gQUNQSTogRW5hYmxlZCA2IEdQRXMgaW4gYmxvY2sgMDAgdG8gM0YKWyAgICAxLjY2ODI3MV0g
QUNQSTogUENJIFJvb3QgQnJpZGdlIFtQQ0kwXSAoZG9tYWluIDAwMDAgW2J1cyAwMC1mZV0pClsg
ICAgMS42NjgyNzddIGFjcGkgUE5QMEEwODowMDogX09TQzogT1Mgc3VwcG9ydHMgW0V4dGVuZGVk
Q29uZmlnIEFTUE0gQ2xvY2tQTSBTZWdtZW50cyBNU0kgSFBYLVR5cGUzXQpbICAgIDEuNjY4NTAx
XSBhY3BpIFBOUDBBMDg6MDA6IF9PU0M6IHBsYXRmb3JtIGRvZXMgbm90IHN1cHBvcnQgW1NIUENI
b3RwbHVnIFBNRSBBRVIgTFRSXQpbICAgIDEuNjY4NzA3XSBhY3BpIFBOUDBBMDg6MDA6IF9PU0M6
IE9TIG5vdyBjb250cm9scyBbUENJZUhvdHBsdWcgUENJZUNhcGFiaWxpdHldClsgICAgMS42Njky
NzNdIFBDSSBob3N0IGJyaWRnZSB0byBidXMgMDAwMDowMApbICAgIDEuNjY5Mjc2XSBwY2lfYnVz
IDAwMDA6MDA6IHJvb3QgYnVzIHJlc291cmNlIFtpbyAgMHgwMDAwLTB4MDNhZiB3aW5kb3ddClsg
ICAgMS42NjkyNzhdIHBjaV9idXMgMDAwMDowMDogcm9vdCBidXMgcmVzb3VyY2UgW2lvICAweDAz
ZTAtMHgwY2Y3IHdpbmRvd10KWyAgICAxLjY2OTI3OV0gcGNpX2J1cyAwMDAwOjAwOiByb290IGJ1
cyByZXNvdXJjZSBbaW8gIDB4MDNiMC0weDAzZGYgd2luZG93XQpbICAgIDEuNjY5MjgwXSBwY2lf
YnVzIDAwMDA6MDA6IHJvb3QgYnVzIHJlc291cmNlIFtpbyAgMHgwZDAwLTB4ZmZmZiB3aW5kb3dd
ClsgICAgMS42NjkyODJdIHBjaV9idXMgMDAwMDowMDogcm9vdCBidXMgcmVzb3VyY2UgW21lbSAw
eDAwMGEwMDAwLTB4MDAwYmZmZmYgd2luZG93XQpbICAgIDEuNjY5MjgzXSBwY2lfYnVzIDAwMDA6
MDA6IHJvb3QgYnVzIHJlc291cmNlIFttZW0gMHgwMDBjMDAwMC0weDAwMGRmZmZmIHdpbmRvd10K
WyAgICAxLjY2OTI4NF0gcGNpX2J1cyAwMDAwOjAwOiByb290IGJ1cyByZXNvdXJjZSBbbWVtIDB4
Y2MwMDAwMDAtMHhmZmZmZmZmZiB3aW5kb3ddClsgICAgMS42NjkyODVdIHBjaV9idXMgMDAwMDow
MDogcm9vdCBidXMgcmVzb3VyY2UgW21lbSAweDQ0MDAwMDAwMC0weDNmZmZmZmZmZmZmZiB3aW5k
b3ddClsgICAgMS42NjkyODddIHBjaV9idXMgMDAwMDowMDogcm9vdCBidXMgcmVzb3VyY2UgW2J1
cyAwMC1mZV0KWyAgICAxLjY2OTM0NF0gcGNpIDAwMDA6MDA6MDAuMDogWzgwODY6MGUwMF0gdHlw
ZSAwMCBjbGFzcyAweDA2MDAwMApbICAgIDEuNjY5NjI1XSBwY2kgMDAwMDowMDowMC4wOiBQTUUj
IHN1cHBvcnRlZCBmcm9tIEQwIEQzaG90IEQzY29sZApbICAgIDEuNjY5ODc3XSBwY2kgMDAwMDow
MDowMS4wOiBbODA4NjowZTAyXSB0eXBlIDAxIGNsYXNzIDB4MDYwNDAwClsgICAgMS42NzAyMjhd
IHBjaSAwMDAwOjAwOjAxLjA6IFBNRSMgc3VwcG9ydGVkIGZyb20gRDAgRDNob3QgRDNjb2xkClsg
ICAgMS42NzA1MTBdIHBjaSAwMDAwOjAwOjAxLjE6IFs4MDg2OjBlMDNdIHR5cGUgMDEgY2xhc3Mg
MHgwNjA0MDAKWyAgICAxLjY3MDg2MF0gcGNpIDAwMDA6MDA6MDEuMTogUE1FIyBzdXBwb3J0ZWQg
ZnJvbSBEMCBEM2hvdCBEM2NvbGQKWyAgICAxLjY3MTE1OV0gcGNpIDAwMDA6MDA6MDIuMDogWzgw
ODY6MGUwNF0gdHlwZSAwMSBjbGFzcyAweDA2MDQwMApbICAgIDEuNjcxNTA5XSBwY2kgMDAwMDow
MDowMi4wOiBQTUUjIHN1cHBvcnRlZCBmcm9tIEQwIEQzaG90IEQzY29sZApbICAgIDEuNjcxNzgw
XSBwY2kgMDAwMDowMDowMi4xOiBbODA4NjowZTA1XSB0eXBlIDAxIGNsYXNzIDB4MDYwNDAwClsg
ICAgMS42NzIxMzBdIHBjaSAwMDAwOjAwOjAyLjE6IFBNRSMgc3VwcG9ydGVkIGZyb20gRDAgRDNo
b3QgRDNjb2xkClsgICAgMS42NzI0MDFdIHBjaSAwMDAwOjAwOjAyLjI6IFs4MDg2OjBlMDZdIHR5
cGUgMDEgY2xhc3MgMHgwNjA0MDAKWyAgICAxLjY3Mjc1MV0gcGNpIDAwMDA6MDA6MDIuMjogUE1F
IyBzdXBwb3J0ZWQgZnJvbSBEMCBEM2hvdCBEM2NvbGQKWyAgICAxLjY3MzAyMV0gcGNpIDAwMDA6
MDA6MDIuMzogWzgwODY6MGUwN10gdHlwZSAwMSBjbGFzcyAweDA2MDQwMApbICAgIDEuNjczMzcz
XSBwY2kgMDAwMDowMDowMi4zOiBQTUUjIHN1cHBvcnRlZCBmcm9tIEQwIEQzaG90IEQzY29sZApb
ICAgIDEuNjczNjYyXSBwY2kgMDAwMDowMDowMy4wOiBbODA4NjowZTA4XSB0eXBlIDAxIGNsYXNz
IDB4MDYwNDAwClsgICAgMS42NzM4MzNdIHBjaSAwMDAwOjAwOjAzLjA6IGVuYWJsaW5nIEV4dGVu
ZGVkIFRhZ3MKWyAgICAxLjY3NDA0MV0gcGNpIDAwMDA6MDA6MDMuMDogUE1FIyBzdXBwb3J0ZWQg
ZnJvbSBEMCBEM2hvdCBEM2NvbGQKWyAgICAxLjY3NDMxN10gcGNpIDAwMDA6MDA6MDMuMTogWzgw
ODY6MGUwOV0gdHlwZSAwMSBjbGFzcyAweDA2MDQwMApbICAgIDEuNjc0NjY2XSBwY2kgMDAwMDow
MDowMy4xOiBQTUUjIHN1cHBvcnRlZCBmcm9tIEQwIEQzaG90IEQzY29sZApbICAgIDEuNjc0OTM1
XSBwY2kgMDAwMDowMDowMy4yOiBbODA4NjowZTBhXSB0eXBlIDAxIGNsYXNzIDB4MDYwNDAwClsg
ICAgMS42NzUyODVdIHBjaSAwMDAwOjAwOjAzLjI6IFBNRSMgc3VwcG9ydGVkIGZyb20gRDAgRDNo
b3QgRDNjb2xkClsgICAgMS42NzU1NTNdIHBjaSAwMDAwOjAwOjAzLjM6IFs4MDg2OjBlMGJdIHR5
cGUgMDEgY2xhc3MgMHgwNjA0MDAKWyAgICAxLjY3NTkwM10gcGNpIDAwMDA6MDA6MDMuMzogUE1F
IyBzdXBwb3J0ZWQgZnJvbSBEMCBEM2hvdCBEM2NvbGQKWyAgICAxLjY3NjE3M10gcGNpIDAwMDA6
MDA6MDUuMDogWzgwODY6MGUyOF0gdHlwZSAwMCBjbGFzcyAweDA4ODAwMApbICAgIDEuNjc2NTk4
XSBwY2kgMDAwMDowMDowNS4yOiBbODA4NjowZTJhXSB0eXBlIDAwIGNsYXNzIDB4MDg4MDAwClsg
ICAgMS42NzcwMjFdIHBjaSAwMDAwOjAwOjA1LjQ6IFs4MDg2OjBlMmNdIHR5cGUgMDAgY2xhc3Mg
MHgwODAwMjAKWyAgICAxLjY3NzA4Nl0gcGNpIDAwMDA6MDA6MDUuNDogcmVnIDB4MTA6IFttZW0g
MHhmYjMwNDAwMC0weGZiMzA0ZmZmXQpbICAgIDEuNjc3NTYyXSBwY2kgMDAwMDowMDowNi4wOiBb
ODA4NjowZTEwXSB0eXBlIDAwIGNsYXNzIDB4MDg4MDAwClsgICAgMS42NzgwMDBdIHBjaSAwMDAw
OjAwOjA2LjE6IFs4MDg2OjBlMTFdIHR5cGUgMDAgY2xhc3MgMHgwODgwMDAKWyAgICAxLjY3ODQy
MF0gcGNpIDAwMDA6MDA6MDYuMjogWzgwODY6MGUxMl0gdHlwZSAwMCBjbGFzcyAweDA4ODAwMApb
ICAgIDEuNjc4ODM5XSBwY2kgMDAwMDowMDowNi4zOiBbODA4NjowZTEzXSB0eXBlIDAwIGNsYXNz
IDB4MDg4MDAwClsgICAgMS42NzkyNTZdIHBjaSAwMDAwOjAwOjA2LjQ6IFs4MDg2OjBlMTRdIHR5
cGUgMDAgY2xhc3MgMHgwODgwMDAKWyAgICAxLjY3OTY3M10gcGNpIDAwMDA6MDA6MDYuNTogWzgw
ODY6MGUxNV0gdHlwZSAwMCBjbGFzcyAweDA4ODAwMApbICAgIDEuNjgwMDkwXSBwY2kgMDAwMDow
MDowNi42OiBbODA4NjowZTE2XSB0eXBlIDAwIGNsYXNzIDB4MDg4MDAwClsgICAgMS42ODA1MDZd
IHBjaSAwMDAwOjAwOjA2Ljc6IFs4MDg2OjBlMTddIHR5cGUgMDAgY2xhc3MgMHgwODgwMDAKWyAg
ICAxLjY4MDkyM10gcGNpIDAwMDA6MDA6MDcuMDogWzgwODY6MGUxOF0gdHlwZSAwMCBjbGFzcyAw
eDA4ODAwMApbICAgIDEuNjgxMzUzXSBwY2kgMDAwMDowMDowNy4xOiBbODA4NjowZTE5XSB0eXBl
IDAwIGNsYXNzIDB4MDg4MDAwClsgICAgMS42ODE3NzddIHBjaSAwMDAwOjAwOjA3LjI6IFs4MDg2
OjBlMWFdIHR5cGUgMDAgY2xhc3MgMHgwODgwMDAKWyAgICAxLjY4MjE5OF0gcGNpIDAwMDA6MDA6
MDcuMzogWzgwODY6MGUxYl0gdHlwZSAwMCBjbGFzcyAweDA4ODAwMApbICAgIDEuNjgyNjE2XSBw
Y2kgMDAwMDowMDowNy40OiBbODA4NjowZTFjXSB0eXBlIDAwIGNsYXNzIDB4MDg4MDAwClsgICAg
MS42ODMxNDBdIHBjaSAwMDAwOjAwOjFhLjA6IFs4MDg2OjFlMmRdIHR5cGUgMDAgY2xhc3MgMHgw
YzAzMjAKWyAgICAxLjY4MzE5OF0gcGNpIDAwMDA6MDA6MWEuMDogcmVnIDB4MTA6IFttZW0gMHhm
YjMwMjAwMC0weGZiMzAyM2ZmXQpbICAgIDEuNjgzNTEzXSBwY2kgMDAwMDowMDoxYS4wOiBQTUUj
IHN1cHBvcnRlZCBmcm9tIEQwIEQzaG90IEQzY29sZApbICAgIDEuNjgzNzIzXSBwY2kgMDAwMDow
MDoxYy4wOiBbODA4NjoxZTEwXSB0eXBlIDAxIGNsYXNzIDB4MDYwNDAwClsgICAgMS42ODQwNzJd
IHBjaSAwMDAwOjAwOjFjLjA6IFBNRSMgc3VwcG9ydGVkIGZyb20gRDAgRDNob3QgRDNjb2xkClsg
ICAgMS42ODQxNDRdIHBjaSAwMDAwOjAwOjFjLjA6IEVuYWJsaW5nIE1QQyBJUkJOQ0UKWyAgICAx
LjY4NDE1MV0gcGNpIDAwMDA6MDA6MWMuMDogSW50ZWwgUENIIHJvb3QgcG9ydCBBQ1Mgd29ya2Fy
b3VuZCBlbmFibGVkClsgICAgMS42ODQzNDhdIHBjaSAwMDAwOjAwOjFjLjE6IFs4MDg2OjFlMTJd
IHR5cGUgMDEgY2xhc3MgMHgwNjA0MDAKWyAgICAxLjY4NDY5OF0gcGNpIDAwMDA6MDA6MWMuMTog
UE1FIyBzdXBwb3J0ZWQgZnJvbSBEMCBEM2hvdCBEM2NvbGQKWyAgICAxLjY4NDc2N10gcGNpIDAw
MDA6MDA6MWMuMTogRW5hYmxpbmcgTVBDIElSQk5DRQpbICAgIDEuNjg0NzczXSBwY2kgMDAwMDow
MDoxYy4xOiBJbnRlbCBQQ0ggcm9vdCBwb3J0IEFDUyB3b3JrYXJvdW5kIGVuYWJsZWQKWyAgICAx
LjY4NDk3NF0gcGNpIDAwMDA6MDA6MWMuNDogWzgwODY6MWUxOF0gdHlwZSAwMSBjbGFzcyAweDA2
MDQwMApbICAgIDEuNjg1MzI1XSBwY2kgMDAwMDowMDoxYy40OiBQTUUjIHN1cHBvcnRlZCBmcm9t
IEQwIEQzaG90IEQzY29sZApbICAgIDEuNjg1Mzk1XSBwY2kgMDAwMDowMDoxYy40OiBFbmFibGlu
ZyBNUEMgSVJCTkNFClsgICAgMS42ODU0MDJdIHBjaSAwMDAwOjAwOjFjLjQ6IEludGVsIFBDSCBy
b290IHBvcnQgQUNTIHdvcmthcm91bmQgZW5hYmxlZApbICAgIDEuNjg1NjIzXSBwY2kgMDAwMDow
MDoxZC4wOiBbODA4NjoxZTI2XSB0eXBlIDAwIGNsYXNzIDB4MGMwMzIwClsgICAgMS42ODU2ODFd
IHBjaSAwMDAwOjAwOjFkLjA6IHJlZyAweDEwOiBbbWVtIDB4ZmIzMDEwMDAtMHhmYjMwMTNmZl0K
WyAgICAxLjY4NjAwM10gcGNpIDAwMDA6MDA6MWQuMDogUE1FIyBzdXBwb3J0ZWQgZnJvbSBEMCBE
M2hvdCBEM2NvbGQKWyAgICAxLjY4NjIwMl0gcGNpIDAwMDA6MDA6MWUuMDogWzgwODY6MjQ0ZV0g
dHlwZSAwMSBjbGFzcyAweDA2MDQwMQpbICAgIDEuNjg2NjAwXSBwY2kgMDAwMDowMDoxZi4wOiBb
ODA4NjoxZTQ4XSB0eXBlIDAwIGNsYXNzIDB4MDYwMTAwClsgICAgMS42ODcxMjVdIHBjaSAwMDAw
OjAwOjFmLjI6IFs4MDg2OjFlMDJdIHR5cGUgMDAgY2xhc3MgMHgwMTA2MDEKWyAgICAxLjY4NzE4
MV0gcGNpIDAwMDA6MDA6MWYuMjogcmVnIDB4MTA6IFtpbyAgMHhmMDcwLTB4ZjA3N10KWyAgICAx
LjY4NzIxMV0gcGNpIDAwMDA6MDA6MWYuMjogcmVnIDB4MTQ6IFtpbyAgMHhmMDYwLTB4ZjA2M10K
WyAgICAxLjY4NzI0MV0gcGNpIDAwMDA6MDA6MWYuMjogcmVnIDB4MTg6IFtpbyAgMHhmMDUwLTB4
ZjA1N10KWyAgICAxLjY4NzI3MV0gcGNpIDAwMDA6MDA6MWYuMjogcmVnIDB4MWM6IFtpbyAgMHhm
MDQwLTB4ZjA0M10KWyAgICAxLjY4NzMwMV0gcGNpIDAwMDA6MDA6MWYuMjogcmVnIDB4MjA6IFtp
byAgMHhmMDIwLTB4ZjAzZl0KWyAgICAxLjY4NzMzMV0gcGNpIDAwMDA6MDA6MWYuMjogcmVnIDB4
MjQ6IFttZW0gMHhmYjMwMDAwMC0weGZiMzAwN2ZmXQpbICAgIDEuNjg3NTE0XSBwY2kgMDAwMDow
MDoxZi4yOiBQTUUjIHN1cHBvcnRlZCBmcm9tIEQzaG90ClsgICAgMS42ODc2OTddIHBjaSAwMDAw
OjAwOjFmLjM6IFs4MDg2OjFlMjJdIHR5cGUgMDAgY2xhc3MgMHgwYzA1MDAKWyAgICAxLjY4Nzc2
OF0gcGNpIDAwMDA6MDA6MWYuMzogcmVnIDB4MTA6IFttZW0gMHgzZmZmZmZmMDAwMDAtMHgzZmZm
ZmZmMDAwZmYgNjRiaXRdClsgICAgMS42ODc4NTRdIHBjaSAwMDAwOjAwOjFmLjM6IHJlZyAweDIw
OiBbaW8gIDB4ZjAwMC0weGYwMWZdClsgICAgMS42ODgxOTRdIHBjaSAwMDAwOjAwOjAxLjA6IFBD
SSBicmlkZ2UgdG8gW2J1cyAwMV0KWyAgICAxLjY4ODM1Nl0gcGNpIDAwMDA6MDA6MDEuMTogUENJ
IGJyaWRnZSB0byBbYnVzIDAyXQpbICAgIDEuNjg4NTYwXSBwY2kgMDAwMDowMzowMC4wOiBbMTBk
ZTowMWQzXSB0eXBlIDAwIGNsYXNzIDB4MDMwMDAwClsgICAgMS42ODg2MTJdIHBjaSAwMDAwOjAz
OjAwLjA6IHJlZyAweDEwOiBbbWVtIDB4ZmEwMDAwMDAtMHhmYWZmZmZmZl0KWyAgICAxLjY4ODY1
NV0gcGNpIDAwMDA6MDM6MDAuMDogcmVnIDB4MTQ6IFttZW0gMHhlMDAwMDAwMC0weGVmZmZmZmZm
IDY0Yml0IHByZWZdClsgICAgMS42ODg2OThdIHBjaSAwMDAwOjAzOjAwLjA6IHJlZyAweDFjOiBb
bWVtIDB4ZjkwMDAwMDAtMHhmOWZmZmZmZiA2NGJpdF0KWyAgICAxLjY4ODc1Ml0gcGNpIDAwMDA6
MDM6MDAuMDogcmVnIDB4MzA6IFttZW0gMHhmYjAwMDAwMC0weGZiMDFmZmZmIHByZWZdClsgICAg
MS42ODkwMzJdIHBjaSAwMDAwOjAzOjAwLjA6IGRpc2FibGluZyBBU1BNIG9uIHByZS0xLjEgUENJ
ZSBkZXZpY2UuICBZb3UgY2FuIGVuYWJsZSBpdCB3aXRoICdwY2llX2FzcG09Zm9yY2UnClsgICAg
MS42ODkwNTddIHBjaSAwMDAwOjAwOjAyLjA6IFBDSSBicmlkZ2UgdG8gW2J1cyAwM10KWyAgICAx
LjY4OTA4MV0gcGNpIDAwMDA6MDA6MDIuMDogICBicmlkZ2Ugd2luZG93IFttZW0gMHhmOTAwMDAw
MC0weGZiMGZmZmZmXQpbICAgIDEuNjg5MDk5XSBwY2kgMDAwMDowMDowMi4wOiAgIGJyaWRnZSB3
aW5kb3cgW21lbSAweGUwMDAwMDAwLTB4ZWZmZmZmZmYgNjRiaXQgcHJlZl0KWyAgICAxLjY4OTIz
M10gcGNpIDAwMDA6MDA6MDIuMTogUENJIGJyaWRnZSB0byBbYnVzIDA0XQpbICAgIDEuNjg5Mzk0
XSBwY2kgMDAwMDowMDowMi4yOiBQQ0kgYnJpZGdlIHRvIFtidXMgMDVdClsgICAgMS42ODk1NTZd
IHBjaSAwMDAwOjAwOjAyLjM6IFBDSSBicmlkZ2UgdG8gW2J1cyAwNl0KWyAgICAxLjY4OTcyM10g
cGNpIDAwMDA6MDA6MDMuMDogUENJIGJyaWRnZSB0byBbYnVzIDA3XQpbICAgIDEuNjg5ODkyXSBw
Y2kgMDAwMDowMDowMy4xOiBQQ0kgYnJpZGdlIHRvIFtidXMgMDhdClsgICAgMS42OTAwNTZdIHBj
aSAwMDAwOjAwOjAzLjI6IFBDSSBicmlkZ2UgdG8gW2J1cyAwOV0KWyAgICAxLjY5MDIxOF0gcGNp
IDAwMDA6MDA6MDMuMzogUENJIGJyaWRnZSB0byBbYnVzIDBhXQpbICAgIDEuNjkwMzc3XSBwY2kg
MDAwMDowMDoxYy4wOiBQQ0kgYnJpZGdlIHRvIFtidXMgMGJdClsgICAgMS42OTA1ODhdIHBjaSAw
MDAwOjBjOjAwLjA6IFsxMTA2OjM0ODNdIHR5cGUgMDAgY2xhc3MgMHgwYzAzMzAKWyAgICAxLjY5
MDY3MF0gcGNpIDAwMDA6MGM6MDAuMDogcmVnIDB4MTA6IFttZW0gMHhmYjIwMDAwMC0weGZiMjAw
ZmZmIDY0Yml0XQpbICAgIDEuNjkxMDMzXSBwY2kgMDAwMDowYzowMC4wOiBQTUUjIHN1cHBvcnRl
ZCBmcm9tIEQwIEQzY29sZApbICAgIDEuNjkxMjI0XSBwY2kgMDAwMDowMDoxYy4xOiBQQ0kgYnJp
ZGdlIHRvIFtidXMgMGNdClsgICAgMS42OTEyNDNdIHBjaSAwMDAwOjAwOjFjLjE6ICAgYnJpZGdl
IHdpbmRvdyBbbWVtIDB4ZmIyMDAwMDAtMHhmYjJmZmZmZl0KWyAgICAxLjY5MTQ1Ml0gcGNpIDAw
MDA6MGQ6MDAuMDogWzEwZWM6ODE2OF0gdHlwZSAwMCBjbGFzcyAweDAyMDAwMApbICAgIDEuNjkx
NTE4XSBwY2kgMDAwMDowZDowMC4wOiByZWcgMHgxMDogW2lvICAweGUwMDAtMHhlMGZmXQpbICAg
IDEuNjkxNjA5XSBwY2kgMDAwMDowZDowMC4wOiByZWcgMHgxODogW21lbSAweGZiMTA0MDAwLTB4
ZmIxMDRmZmYgNjRiaXRdClsgICAgMS42OTE2NjRdIHBjaSAwMDAwOjBkOjAwLjA6IHJlZyAweDIw
OiBbbWVtIDB4ZmIxMDAwMDAtMHhmYjEwM2ZmZiA2NGJpdF0KWyAgICAxLjY5MTk5Nl0gcGNpIDAw
MDA6MGQ6MDAuMDogc3VwcG9ydHMgRDEgRDIKWyAgICAxLjY5MTk5N10gcGNpIDAwMDA6MGQ6MDAu
MDogUE1FIyBzdXBwb3J0ZWQgZnJvbSBEMCBEMSBEMiBEM2hvdCBEM2NvbGQKWyAgICAxLjY5MjI2
Ml0gcGNpIDAwMDA6MDA6MWMuNDogUENJIGJyaWRnZSB0byBbYnVzIDBkXQpbICAgIDEuNjkyMjcz
XSBwY2kgMDAwMDowMDoxYy40OiAgIGJyaWRnZSB3aW5kb3cgW2lvICAweGUwMDAtMHhlZmZmXQpb
ICAgIDEuNjkyMjgzXSBwY2kgMDAwMDowMDoxYy40OiAgIGJyaWRnZSB3aW5kb3cgW21lbSAweGZi
MTAwMDAwLTB4ZmIxZmZmZmZdClsgICAgMS42OTIzNjldIHBjaV9idXMgMDAwMDowZTogZXh0ZW5k
ZWQgY29uZmlnIHNwYWNlIG5vdCBhY2Nlc3NpYmxlClsgICAgMS42OTI1MjldIHBjaSAwMDAwOjAw
OjFlLjA6IFBDSSBicmlkZ2UgdG8gW2J1cyAwZV0gKHN1YnRyYWN0aXZlIGRlY29kZSkKWyAgICAx
LjY5MjU2M10gcGNpIDAwMDA6MDA6MWUuMDogICBicmlkZ2Ugd2luZG93IFtpbyAgMHgwMDAwLTB4
MDNhZiB3aW5kb3ddIChzdWJ0cmFjdGl2ZSBkZWNvZGUpClsgICAgMS42OTI1NjVdIHBjaSAwMDAw
OjAwOjFlLjA6ICAgYnJpZGdlIHdpbmRvdyBbaW8gIDB4MDNlMC0weDBjZjcgd2luZG93XSAoc3Vi
dHJhY3RpdmUgZGVjb2RlKQpbICAgIDEuNjkyNTY2XSBwY2kgMDAwMDowMDoxZS4wOiAgIGJyaWRn
ZSB3aW5kb3cgW2lvICAweDAzYjAtMHgwM2RmIHdpbmRvd10gKHN1YnRyYWN0aXZlIGRlY29kZSkK
WyAgICAxLjY5MjU2N10gcGNpIDAwMDA6MDA6MWUuMDogICBicmlkZ2Ugd2luZG93IFtpbyAgMHgw
ZDAwLTB4ZmZmZiB3aW5kb3ddIChzdWJ0cmFjdGl2ZSBkZWNvZGUpClsgICAgMS42OTI1NjhdIHBj
aSAwMDAwOjAwOjFlLjA6ICAgYnJpZGdlIHdpbmRvdyBbbWVtIDB4MDAwYTAwMDAtMHgwMDBiZmZm
ZiB3aW5kb3ddIChzdWJ0cmFjdGl2ZSBkZWNvZGUpClsgICAgMS42OTI1NjldIHBjaSAwMDAwOjAw
OjFlLjA6ICAgYnJpZGdlIHdpbmRvdyBbbWVtIDB4MDAwYzAwMDAtMHgwMDBkZmZmZiB3aW5kb3dd
IChzdWJ0cmFjdGl2ZSBkZWNvZGUpClsgICAgMS42OTI1NzFdIHBjaSAwMDAwOjAwOjFlLjA6ICAg
YnJpZGdlIHdpbmRvdyBbbWVtIDB4Y2MwMDAwMDAtMHhmZmZmZmZmZiB3aW5kb3ddIChzdWJ0cmFj
dGl2ZSBkZWNvZGUpClsgICAgMS42OTI1NzJdIHBjaSAwMDAwOjAwOjFlLjA6ICAgYnJpZGdlIHdp
bmRvdyBbbWVtIDB4NDQwMDAwMDAwLTB4M2ZmZmZmZmZmZmZmIHdpbmRvd10gKHN1YnRyYWN0aXZl
IGRlY29kZSkKWyAgICAxLjY5MzEyMl0geGVuOiByZWdpc3RlcmluZyBnc2kgMTMgdHJpZ2dlcmlu
ZyAxIHBvbGFyaXR5IDAKWyAgICAxLjY5MzM1MF0gQUNQSTogUENJIFJvb3QgQnJpZGdlIFtVTkMw
XSAoZG9tYWluIDAwMDAgW2J1cyBmZl0pClsgICAgMS42OTMzNTRdIGFjcGkgUE5QMEEwMzowMDog
X09TQzogT1Mgc3VwcG9ydHMgW0V4dGVuZGVkQ29uZmlnIEFTUE0gQ2xvY2tQTSBTZWdtZW50cyBN
U0kgSFBYLVR5cGUzXQpbICAgIDEuNjkzMzc5XSBhY3BpIFBOUDBBMDM6MDA6IF9PU0M6IE9TIG5v
dyBjb250cm9scyBbUENJZUhvdHBsdWcgU0hQQ0hvdHBsdWcgUE1FIEFFUiBQQ0llQ2FwYWJpbGl0
eSBMVFJdClsgICAgMS42OTM0NDJdIFBDSSBob3N0IGJyaWRnZSB0byBidXMgMDAwMDpmZgpbICAg
IDEuNjkzNDQ0XSBwY2lfYnVzIDAwMDA6ZmY6IHJvb3QgYnVzIHJlc291cmNlIFtidXMgZmZdClsg
ICAgMS42OTM1MDNdIHBjaSAwMDAwOmZmOjA4LjA6IFs4MDg2OjBlODBdIHR5cGUgMDAgY2xhc3Mg
MHgwODgwMDAKWyAgICAxLjY5Mzc1N10gcGNpIDAwMDA6ZmY6MDguMjogWzgwODY6MGUzMl0gdHlw
ZSAwMCBjbGFzcyAweDExMDEwMApbICAgIDEuNjk0MDI0XSBwY2kgMDAwMDpmZjowOC4zOiBbODA4
NjowZTgzXSB0eXBlIDAwIGNsYXNzIDB4MDg4MDAwClsgICAgMS42OTQzNjNdIHBjaSAwMDAwOmZm
OjA4LjQ6IFs4MDg2OjBlODRdIHR5cGUgMDAgY2xhc3MgMHgwODgwMDAKWyAgICAxLjY5NDcwOV0g
cGNpIDAwMDA6ZmY6MDguNTogWzgwODY6MGU4NV0gdHlwZSAwMCBjbGFzcyAweDA4ODAwMApbICAg
IDEuNjk1MDQ1XSBwY2kgMDAwMDpmZjowOC42OiBbODA4NjowZTg2XSB0eXBlIDAwIGNsYXNzIDB4
MDg4MDAwClsgICAgMS42OTUzODNdIHBjaSAwMDAwOmZmOjA4Ljc6IFs4MDg2OjBlODddIHR5cGUg
MDAgY2xhc3MgMHgwODgwMDAKWyAgICAxLjY5NTcxMl0gcGNpIDAwMDA6ZmY6MDkuMDogWzgwODY6
MGU5MF0gdHlwZSAwMCBjbGFzcyAweDA4ODAwMApbICAgIDEuNjk1OTYwXSBwY2kgMDAwMDpmZjow
OS4yOiBbODA4NjowZTMzXSB0eXBlIDAwIGNsYXNzIDB4MTEwMTAwClsgICAgMS42OTYyMTZdIHBj
aSAwMDAwOmZmOjA5LjM6IFs4MDg2OjBlOTNdIHR5cGUgMDAgY2xhc3MgMHgwODgwMDAKWyAgICAx
LjY5NjU1M10gcGNpIDAwMDA6ZmY6MDkuNDogWzgwODY6MGU5NF0gdHlwZSAwMCBjbGFzcyAweDA4
ODAwMApbICAgIDEuNjk2ODk4XSBwY2kgMDAwMDpmZjowOS41OiBbODA4NjowZTk1XSB0eXBlIDAw
IGNsYXNzIDB4MDg4MDAwClsgICAgMS42OTcyMzldIHBjaSAwMDAwOmZmOjA5LjY6IFs4MDg2OjBl
OTZdIHR5cGUgMDAgY2xhc3MgMHgwODgwMDAKWyAgICAxLjY5NzU2M10gcGNpIDAwMDA6ZmY6MGEu
MDogWzgwODY6MGVjMF0gdHlwZSAwMCBjbGFzcyAweDA4ODAwMApbICAgIDEuNjk3ODEwXSBwY2kg
MDAwMDpmZjowYS4xOiBbODA4NjowZWMxXSB0eXBlIDAwIGNsYXNzIDB4MDg4MDAwClsgICAgMS42
OTgwNDddIHBjaSAwMDAwOmZmOjBhLjI6IFs4MDg2OjBlYzJdIHR5cGUgMDAgY2xhc3MgMHgwODgw
MDAKWyAgICAxLjY5ODI5Nl0gcGNpIDAwMDA6ZmY6MGEuMzogWzgwODY6MGVjM10gdHlwZSAwMCBj
bGFzcyAweDA4ODAwMApbICAgIDEuNjk4NTQ3XSBwY2kgMDAwMDpmZjowYi4wOiBbODA4NjowZTFl
XSB0eXBlIDAwIGNsYXNzIDB4MDg4MDAwClsgICAgMS42OTg3ODZdIHBjaSAwMDAwOmZmOjBiLjM6
IFs4MDg2OjBlMWZdIHR5cGUgMDAgY2xhc3MgMHgwODgwMDAKWyAgICAxLjY5OTAzM10gcGNpIDAw
MDA6ZmY6MGMuMDogWzgwODY6MGVlMF0gdHlwZSAwMCBjbGFzcyAweDA4ODAwMApbICAgIDEuNjk5
MjY2XSBwY2kgMDAwMDpmZjowYy4xOiBbODA4NjowZWUyXSB0eXBlIDAwIGNsYXNzIDB4MDg4MDAw
ClsgICAgMS42OTk1MDNdIHBjaSAwMDAwOmZmOjBjLjI6IFs4MDg2OjBlZTRdIHR5cGUgMDAgY2xh
c3MgMHgwODgwMDAKWyAgICAxLjY5OTczNl0gcGNpIDAwMDA6ZmY6MGMuMzogWzgwODY6MGVlNl0g
dHlwZSAwMCBjbGFzcyAweDA4ODAwMApbICAgIDEuNjk5OTY5XSBwY2kgMDAwMDpmZjowYy40OiBb
ODA4NjowZWU4XSB0eXBlIDAwIGNsYXNzIDB4MDg4MDAwClsgICAgMS43MDAyMDZdIHBjaSAwMDAw
OmZmOjBjLjU6IFs4MDg2OjBlZWFdIHR5cGUgMDAgY2xhc3MgMHgwODgwMDAKWyAgICAxLjcwMDQ0
N10gcGNpIDAwMDA6ZmY6MGQuMDogWzgwODY6MGVlMV0gdHlwZSAwMCBjbGFzcyAweDA4ODAwMApb
ICAgIDEuNzAwNjgwXSBwY2kgMDAwMDpmZjowZC4xOiBbODA4NjowZWUzXSB0eXBlIDAwIGNsYXNz
IDB4MDg4MDAwClsgICAgMS43MDA5MTRdIHBjaSAwMDAwOmZmOjBkLjI6IFs4MDg2OjBlZTVdIHR5
cGUgMDAgY2xhc3MgMHgwODgwMDAKWyAgICAxLjcwMTExM10gcGNpIDAwMDA6ZmY6MGQuMzogWzgw
ODY6MGVlN10gdHlwZSAwMCBjbGFzcyAweDA4ODAwMApbICAgIDEuNzAxMzQ4XSBwY2kgMDAwMDpm
ZjowZC40OiBbODA4NjowZWU5XSB0eXBlIDAwIGNsYXNzIDB4MDg4MDAwClsgICAgMS43MDE1ODFd
IHBjaSAwMDAwOmZmOjBkLjU6IFs4MDg2OjBlZWJdIHR5cGUgMDAgY2xhc3MgMHgwODgwMDAKWyAg
ICAxLjcwMTgzNV0gcGNpIDAwMDA6ZmY6MGUuMDogWzgwODY6MGVhMF0gdHlwZSAwMCBjbGFzcyAw
eDA4ODAwMApbICAgIDEuNzAyMDc4XSBwY2kgMDAwMDpmZjowZS4xOiBbODA4NjowZTMwXSB0eXBl
IDAwIGNsYXNzIDB4MTEwMTAwClsgICAgMS43MDIzNjBdIHBjaSAwMDAwOmZmOjBmLjA6IFs4MDg2
OjBlYThdIHR5cGUgMDAgY2xhc3MgMHgwODgwMDAKWyAgICAxLjcwMjY5NF0gcGNpIDAwMDA6ZmY6
MGYuMTogWzgwODY6MGU3MV0gdHlwZSAwMCBjbGFzcyAweDA4ODAwMApbICAgIDEuNzAzMDI5XSBw
Y2kgMDAwMDpmZjowZi4yOiBbODA4NjowZWFhXSB0eXBlIDAwIGNsYXNzIDB4MDg4MDAwClsgICAg
MS43MDMzNjJdIHBjaSAwMDAwOmZmOjBmLjM6IFs4MDg2OjBlYWJdIHR5cGUgMDAgY2xhc3MgMHgw
ODgwMDAKWyAgICAxLjcwMzY5NV0gcGNpIDAwMDA6ZmY6MGYuNDogWzgwODY6MGVhY10gdHlwZSAw
MCBjbGFzcyAweDA4ODAwMApbICAgIDEuNzA0MDI4XSBwY2kgMDAwMDpmZjowZi41OiBbODA4Njow
ZWFkXSB0eXBlIDAwIGNsYXNzIDB4MDg4MDAwClsgICAgMS43MDQzNjldIHBjaSAwMDAwOmZmOjEw
LjA6IFs4MDg2OjBlYjBdIHR5cGUgMDAgY2xhc3MgMHgwODgwMDAKWyAgICAxLjcwNDcwN10gcGNp
IDAwMDA6ZmY6MTAuMTogWzgwODY6MGViMV0gdHlwZSAwMCBjbGFzcyAweDA4ODAwMApbICAgIDEu
NzA1MDQzXSBwY2kgMDAwMDpmZjoxMC4yOiBbODA4NjowZWIyXSB0eXBlIDAwIGNsYXNzIDB4MDg4
MDAwClsgICAgMS43MDUzODBdIHBjaSAwMDAwOmZmOjEwLjM6IFs4MDg2OjBlYjNdIHR5cGUgMDAg
Y2xhc3MgMHgwODgwMDAKWyAgICAxLjcwNTcxOV0gcGNpIDAwMDA6ZmY6MTAuNDogWzgwODY6MGVi
NF0gdHlwZSAwMCBjbGFzcyAweDA4ODAwMApbICAgIDEuNzA2MDU5XSBwY2kgMDAwMDpmZjoxMC41
OiBbODA4NjowZWI1XSB0eXBlIDAwIGNsYXNzIDB4MDg4MDAwClsgICAgMS43MDYzOTNdIHBjaSAw
MDAwOmZmOjEwLjY6IFs4MDg2OjBlYjZdIHR5cGUgMDAgY2xhc3MgMHgwODgwMDAKWyAgICAxLjcw
NjcyNl0gcGNpIDAwMDA6ZmY6MTAuNzogWzgwODY6MGViN10gdHlwZSAwMCBjbGFzcyAweDA4ODAw
MApbICAgIDEuNzA3MDU0XSBwY2kgMDAwMDpmZjoxMy4wOiBbODA4NjowZTFkXSB0eXBlIDAwIGNs
YXNzIDB4MDg4MDAwClsgICAgMS43MDcyOTNdIHBjaSAwMDAwOmZmOjEzLjE6IFs4MDg2OjBlMzRd
IHR5cGUgMDAgY2xhc3MgMHgxMTAxMDAKWyAgICAxLjcwNzUzOF0gcGNpIDAwMDA6ZmY6MTMuNDog
WzgwODY6MGU4MV0gdHlwZSAwMCBjbGFzcyAweDA4ODAwMApbICAgIDEuNzA3NzczXSBwY2kgMDAw
MDpmZjoxMy41OiBbODA4NjowZTM2XSB0eXBlIDAwIGNsYXNzIDB4MTEwMTAwClsgICAgMS43MDgw
MjddIHBjaSAwMDAwOmZmOjE2LjA6IFs4MDg2OjBlYzhdIHR5cGUgMDAgY2xhc3MgMHgwODgwMDAK
WyAgICAxLjcwODI2MV0gcGNpIDAwMDA6ZmY6MTYuMTogWzgwODY6MGVjOV0gdHlwZSAwMCBjbGFz
cyAweDA4ODAwMApbICAgIDEuNzA4NDkzXSBwY2kgMDAwMDpmZjoxNi4yOiBbODA4NjowZWNhXSB0
eXBlIDAwIGNsYXNzIDB4MDg4MDAwClsgICAgMS43MDg3NzddIHBjaSAwMDAwOmZmOjFjLjA6IFs4
MDg2OjBlNjBdIHR5cGUgMDAgY2xhc3MgMHgwODgwMDAKWyAgICAxLjcwOTAxNl0gcGNpIDAwMDA6
ZmY6MWMuMTogWzgwODY6MGUzOF0gdHlwZSAwMCBjbGFzcyAweDExMDEwMApbICAgIDEuNzA5MzEx
XSBwY2kgMDAwMDpmZjoxZC4wOiBbODA4NjowZTY4XSB0eXBlIDAwIGNsYXNzIDB4MDg4MDAwClsg
ICAgMS43MDk2NDZdIHBjaSAwMDAwOmZmOjFkLjE6IFs4MDg2OjBlNzldIHR5cGUgMDAgY2xhc3Mg
MHgwODgwMDAKWyAgICAxLjcwOTk5MV0gcGNpIDAwMDA6ZmY6MWQuMjogWzgwODY6MGU2YV0gdHlw
ZSAwMCBjbGFzcyAweDA4ODAwMApbICAgIDEuNzEwMzI3XSBwY2kgMDAwMDpmZjoxZC4zOiBbODA4
NjowZTZiXSB0eXBlIDAwIGNsYXNzIDB4MDg4MDAwClsgICAgMS43MTA2NjBdIHBjaSAwMDAwOmZm
OjFkLjQ6IFs4MDg2OjBlNmNdIHR5cGUgMDAgY2xhc3MgMHgwODgwMDAKWyAgICAxLjcxMDk5OV0g
cGNpIDAwMDA6ZmY6MWQuNTogWzgwODY6MGU2ZF0gdHlwZSAwMCBjbGFzcyAweDA4ODAwMApbICAg
IDEuNzExMzQ2XSBwY2kgMDAwMDpmZjoxZS4wOiBbODA4NjowZWYwXSB0eXBlIDAwIGNsYXNzIDB4
MDg4MDAwClsgICAgMS43MTE2ODBdIHBjaSAwMDAwOmZmOjFlLjE6IFs4MDg2OjBlZjFdIHR5cGUg
MDAgY2xhc3MgMHgwODgwMDAKWyAgICAxLjcxMjAxOV0gcGNpIDAwMDA6ZmY6MWUuMjogWzgwODY6
MGVmMl0gdHlwZSAwMCBjbGFzcyAweDA4ODAwMApbICAgIDEuNzEyMzUzXSBwY2kgMDAwMDpmZjox
ZS4zOiBbODA4NjowZWYzXSB0eXBlIDAwIGNsYXNzIDB4MDg4MDAwClsgICAgMS43MTI2ODddIHBj
aSAwMDAwOmZmOjFlLjQ6IFs4MDg2OjBlZjRdIHR5cGUgMDAgY2xhc3MgMHgwODgwMDAKWyAgICAx
LjcxMzAyNl0gcGNpIDAwMDA6ZmY6MWUuNTogWzgwODY6MGVmNV0gdHlwZSAwMCBjbGFzcyAweDA4
ODAwMApbICAgIDEuNzEzMzY5XSBwY2kgMDAwMDpmZjoxZS42OiBbODA4NjowZWY2XSB0eXBlIDAw
IGNsYXNzIDB4MDg4MDAwClsgICAgMS43MTM3MDNdIHBjaSAwMDAwOmZmOjFlLjc6IFs4MDg2OjBl
ZjddIHR5cGUgMDAgY2xhc3MgMHgwODgwMDAKWyAgICAxLjcxNDE3NV0gQUNQSTogUENJIEludGVy
cnVwdCBMaW5rIFtMTktBXSAoSVJRcyAzIDQgNSA2IDcgMTAgKjExIDEyIDE0IDE1KQpbICAgIDEu
NzE0MjYxXSBBQ1BJOiBQQ0kgSW50ZXJydXB0IExpbmsgW0xOS0JdIChJUlFzIDMgNCA1IDYgNyAq
MTAgMTEgMTIgMTQgMTUpClsgICAgMS43MTQzNDVdIEFDUEk6IFBDSSBJbnRlcnJ1cHQgTGluayBb
TE5LQ10gKElSUXMgMyA0ICo1IDYgMTAgMTEgMTIgMTQgMTUpClsgICAgMS43MTQ0MjhdIEFDUEk6
IFBDSSBJbnRlcnJ1cHQgTGluayBbTE5LRF0gKElSUXMgMyAqNCA1IDYgMTAgMTEgMTIgMTQgMTUp
ClsgICAgMS43MTQ1MTFdIEFDUEk6IFBDSSBJbnRlcnJ1cHQgTGluayBbTE5LRV0gKElSUXMgMyA0
IDUgNiA3IDEwIDExIDEyIDE0IDE1KSAqMCwgZGlzYWJsZWQuClsgICAgMS43MTQ1OTRdIEFDUEk6
IFBDSSBJbnRlcnJ1cHQgTGluayBbTE5LRl0gKElSUXMgMyA0IDUgNiA3IDEwIDExIDEyIDE0IDE1
KSAqMCwgZGlzYWJsZWQuClsgICAgMS43MTQ2NzddIEFDUEk6IFBDSSBJbnRlcnJ1cHQgTGluayBb
TE5LR10gKElSUXMgMyA0IDUgNiA3IDEwIDExIDEyIDE0IDE1KSAqMCwgZGlzYWJsZWQuClsgICAg
MS43MTQ3NjBdIEFDUEk6IFBDSSBJbnRlcnJ1cHQgTGluayBbTE5LSF0gKElSUXMgKjMgNCA1IDYg
NyAxMCAxMSAxMiAxNCAxNSkKWyAgICAxLjcxNzExOF0gQVBJQzogTlJfQ1BVUy9wb3NzaWJsZV9j
cHVzIGxpbWl0IG9mIDQgcmVhY2hlZC4gUHJvY2Vzc29yIDQvMHgxIGlnbm9yZWQuClsgICAgMS43
MTcxMThdIEFDUEk6IFVuYWJsZSB0byBtYXAgbGFwaWMgdG8gbG9naWNhbCBjcHUgbnVtYmVyClsg
ICAgMS43MTcyNDNdIEFQSUM6IE5SX0NQVVMvcG9zc2libGVfY3B1cyBsaW1pdCBvZiA0IHJlYWNo
ZWQuIFByb2Nlc3NvciA1LzB4MyBpZ25vcmVkLgpbICAgIDEuNzE3MjQ0XSBBQ1BJOiBVbmFibGUg
dG8gbWFwIGxhcGljIHRvIGxvZ2ljYWwgY3B1IG51bWJlcgpbICAgIDEuNzE3MzkyXSBBUElDOiBO
Ul9DUFVTL3Bvc3NpYmxlX2NwdXMgbGltaXQgb2YgNCByZWFjaGVkLiBQcm9jZXNzb3IgNi8weDUg
aWdub3JlZC4KWyAgICAxLjcxNzM5M10gQUNQSTogVW5hYmxlIHRvIG1hcCBsYXBpYyB0byBsb2dp
Y2FsIGNwdSBudW1iZXIKWyAgICAxLjcxNzU0MF0gQVBJQzogTlJfQ1BVUy9wb3NzaWJsZV9jcHVz
IGxpbWl0IG9mIDQgcmVhY2hlZC4gUHJvY2Vzc29yIDcvMHg3IGlnbm9yZWQuClsgICAgMS43MTc1
NDBdIEFDUEk6IFVuYWJsZSB0byBtYXAgbGFwaWMgdG8gbG9naWNhbCBjcHUgbnVtYmVyClsgICAg
MS43MTc2MzBdIEFQSUM6IE5SX0NQVVMvcG9zc2libGVfY3B1cyBsaW1pdCBvZiA0IHJlYWNoZWQu
IFByb2Nlc3NvciA4LzB4OCBpZ25vcmVkLgpbICAgIDEuNzE3NjMxXSBBQ1BJOiBVbmFibGUgdG8g
bWFwIGxhcGljIHRvIGxvZ2ljYWwgY3B1IG51bWJlcgpbICAgIDEuNzE3NzI0XSBBUElDOiBOUl9D
UFVTL3Bvc3NpYmxlX2NwdXMgbGltaXQgb2YgNCByZWFjaGVkLiBQcm9jZXNzb3IgOS8weDkgaWdu
b3JlZC4KWyAgICAxLjcxNzcyNV0gQUNQSTogVW5hYmxlIHRvIG1hcCBsYXBpYyB0byBsb2dpY2Fs
IGNwdSBudW1iZXIKWyAgICAxLjcxNzgxNV0gQVBJQzogTlJfQ1BVUy9wb3NzaWJsZV9jcHVzIGxp
bWl0IG9mIDQgcmVhY2hlZC4gUHJvY2Vzc29yIDEwLzB4YSBpZ25vcmVkLgpbICAgIDEuNzE3ODE2
XSBBQ1BJOiBVbmFibGUgdG8gbWFwIGxhcGljIHRvIGxvZ2ljYWwgY3B1IG51bWJlcgpbICAgIDEu
NzE3OTEwXSBBUElDOiBOUl9DUFVTL3Bvc3NpYmxlX2NwdXMgbGltaXQgb2YgNCByZWFjaGVkLiBQ
cm9jZXNzb3IgMTEvMHhiIGlnbm9yZWQuClsgICAgMS43MTc5MTBdIEFDUEk6IFVuYWJsZSB0byBt
YXAgbGFwaWMgdG8gbG9naWNhbCBjcHUgbnVtYmVyClsgICAgMS43MTgwMDVdIEFQSUM6IE5SX0NQ
VVMvcG9zc2libGVfY3B1cyBsaW1pdCBvZiA0IHJlYWNoZWQuIFByb2Nlc3NvciAxMi8weDEwIGln
bm9yZWQuClsgICAgMS43MTgwMDZdIEFDUEk6IFVuYWJsZSB0byBtYXAgbGFwaWMgdG8gbG9naWNh
bCBjcHUgbnVtYmVyClsgICAgMS43MTgxMDBdIEFQSUM6IE5SX0NQVVMvcG9zc2libGVfY3B1cyBs
aW1pdCBvZiA0IHJlYWNoZWQuIFByb2Nlc3NvciAxMy8weDExIGlnbm9yZWQuClsgICAgMS43MTgx
MDFdIEFDUEk6IFVuYWJsZSB0byBtYXAgbGFwaWMgdG8gbG9naWNhbCBjcHUgbnVtYmVyClsgICAg
MS43MTgxOTJdIEFQSUM6IE5SX0NQVVMvcG9zc2libGVfY3B1cyBsaW1pdCBvZiA0IHJlYWNoZWQu
IFByb2Nlc3NvciAxNC8weDEyIGlnbm9yZWQuClsgICAgMS43MTgxOTNdIEFDUEk6IFVuYWJsZSB0
byBtYXAgbGFwaWMgdG8gbG9naWNhbCBjcHUgbnVtYmVyClsgICAgMS43MTgyODhdIEFQSUM6IE5S
X0NQVVMvcG9zc2libGVfY3B1cyBsaW1pdCBvZiA0IHJlYWNoZWQuIFByb2Nlc3NvciAxNS8weDEz
IGlnbm9yZWQuClsgICAgMS43MTgyODldIEFDUEk6IFVuYWJsZSB0byBtYXAgbGFwaWMgdG8gbG9n
aWNhbCBjcHUgbnVtYmVyClsgICAgMS43MTgzODBdIEFQSUM6IE5SX0NQVVMvcG9zc2libGVfY3B1
cyBsaW1pdCBvZiA0IHJlYWNoZWQuIFByb2Nlc3NvciAxNi8weDE0IGlnbm9yZWQuClsgICAgMS43
MTgzODFdIEFDUEk6IFVuYWJsZSB0byBtYXAgbGFwaWMgdG8gbG9naWNhbCBjcHUgbnVtYmVyClsg
ICAgMS43MTg0NzVdIEFQSUM6IE5SX0NQVVMvcG9zc2libGVfY3B1cyBsaW1pdCBvZiA0IHJlYWNo
ZWQuIFByb2Nlc3NvciAxNy8weDE1IGlnbm9yZWQuClsgICAgMS43MTg0NzZdIEFDUEk6IFVuYWJs
ZSB0byBtYXAgbGFwaWMgdG8gbG9naWNhbCBjcHUgbnVtYmVyClsgICAgMS43MTg1NjddIEFQSUM6
IE5SX0NQVVMvcG9zc2libGVfY3B1cyBsaW1pdCBvZiA0IHJlYWNoZWQuIFByb2Nlc3NvciAxOC8w
eDE2IGlnbm9yZWQuClsgICAgMS43MTg1NjddIEFDUEk6IFVuYWJsZSB0byBtYXAgbGFwaWMgdG8g
bG9naWNhbCBjcHUgbnVtYmVyClsgICAgMS43MTg2NjJdIEFQSUM6IE5SX0NQVVMvcG9zc2libGVf
Y3B1cyBsaW1pdCBvZiA0IHJlYWNoZWQuIFByb2Nlc3NvciAxOS8weDE3IGlnbm9yZWQuClsgICAg
MS43MTg2NjNdIEFDUEk6IFVuYWJsZSB0byBtYXAgbGFwaWMgdG8gbG9naWNhbCBjcHUgbnVtYmVy
ClsgICAgMS43MTg3NTRdIEFQSUM6IE5SX0NQVVMvcG9zc2libGVfY3B1cyBsaW1pdCBvZiA0IHJl
YWNoZWQuIFByb2Nlc3NvciAyMC8weDE4IGlnbm9yZWQuClsgICAgMS43MTg3NTVdIEFDUEk6IFVu
YWJsZSB0byBtYXAgbGFwaWMgdG8gbG9naWNhbCBjcHUgbnVtYmVyClsgICAgMS43MTg4NTBdIEFQ
SUM6IE5SX0NQVVMvcG9zc2libGVfY3B1cyBsaW1pdCBvZiA0IHJlYWNoZWQuIFByb2Nlc3NvciAy
MS8weDE5IGlnbm9yZWQuClsgICAgMS43MTg4NTFdIEFDUEk6IFVuYWJsZSB0byBtYXAgbGFwaWMg
dG8gbG9naWNhbCBjcHUgbnVtYmVyClsgICAgMS43MTg5NDNdIEFQSUM6IE5SX0NQVVMvcG9zc2li
bGVfY3B1cyBsaW1pdCBvZiA0IHJlYWNoZWQuIFByb2Nlc3NvciAyMi8weDFhIGlnbm9yZWQuClsg
ICAgMS43MTg5NDRdIEFDUEk6IFVuYWJsZSB0byBtYXAgbGFwaWMgdG8gbG9naWNhbCBjcHUgbnVt
YmVyClsgICAgMS43MTkwNTddIEFQSUM6IE5SX0NQVVMvcG9zc2libGVfY3B1cyBsaW1pdCBvZiA0
IHJlYWNoZWQuIFByb2Nlc3NvciAyMy8weDFiIGlnbm9yZWQuClsgICAgMS43MTkwNThdIEFDUEk6
IFVuYWJsZSB0byBtYXAgbGFwaWMgdG8gbG9naWNhbCBjcHUgbnVtYmVyClsgICAgMS43MTk1NjRd
IHhlbjpiYWxsb29uOiBJbml0aWFsaXNpbmcgYmFsbG9vbiBkcml2ZXIKWyAgICAxLjcxOTU2NF0g
aW9tbXU6IERlZmF1bHQgZG9tYWluIHR5cGU6IFRyYW5zbGF0ZWQgClsgICAgMS43MjExMDZdIHBj
aSAwMDAwOjAzOjAwLjA6IHZnYWFyYjogc2V0dGluZyBhcyBib290IFZHQSBkZXZpY2UKWyAgICAx
LjcyMTEwNl0gcGNpIDAwMDA6MDM6MDAuMDogdmdhYXJiOiBWR0EgZGV2aWNlIGFkZGVkOiBkZWNv
ZGVzPWlvK21lbSxvd25zPWlvK21lbSxsb2Nrcz1ub25lClsgICAgMS43MjExMDddIHBjaSAwMDAw
OjAzOjAwLjA6IHZnYWFyYjogYnJpZGdlIGNvbnRyb2wgcG9zc2libGUKWyAgICAxLjcyMTEwOF0g
dmdhYXJiOiBsb2FkZWQKWyAgICAxLjcyMTE5OV0gRURBQyBNQzogVmVyOiAzLjAuMApbICAgIDEu
NzIyMTczXSBOZXRMYWJlbDogSW5pdGlhbGl6aW5nClsgICAgMS43MjIxNzNdIE5ldExhYmVsOiAg
ZG9tYWluIGhhc2ggc2l6ZSA9IDEyOApbICAgIDEuNzIyMTczXSBOZXRMYWJlbDogIHByb3RvY29s
cyA9IFVOTEFCRUxFRCBDSVBTT3Y0IENBTElQU08KWyAgICAxLjcyMjE3M10gTmV0TGFiZWw6ICB1
bmxhYmVsZWQgdHJhZmZpYyBhbGxvd2VkIGJ5IGRlZmF1bHQKWyAgICAxLjcyMjE3M10gUENJOiBV
c2luZyBBQ1BJIGZvciBJUlEgcm91dGluZwpbICAgIDEuNzUwNjgyXSBQQ0k6IHBjaV9jYWNoZV9s
aW5lX3NpemUgc2V0IHRvIDY0IGJ5dGVzClsgICAgMS43NTE1MjddIGU4MjA6IHJlc2VydmUgUkFN
IGJ1ZmZlciBbbWVtIDB4MDAwOWUwMDAtMHgwMDA5ZmZmZl0KWyAgICAxLjc1MTUyOV0gZTgyMDog
cmVzZXJ2ZSBSQU0gYnVmZmVyIFttZW0gMHg4MDA2MjAwMC0weDgzZmZmZmZmXQpbICAgIDEuNzUz
MDY0XSBjbG9ja3NvdXJjZTogU3dpdGNoZWQgdG8gY2xvY2tzb3VyY2UgdHNjLWVhcmx5ClsgICAg
MS43NjQ2NzVdIFZGUzogRGlzayBxdW90YXMgZHF1b3RfNi42LjAKWyAgICAxLjc2NDY5Ml0gVkZT
OiBEcXVvdC1jYWNoZSBoYXNoIHRhYmxlIGVudHJpZXM6IDUxMiAob3JkZXIgMCwgNDA5NiBieXRl
cykKWyAgICAxLjc2NDcxM10gaHVnZXRsYmZzOiBkaXNhYmxpbmcgYmVjYXVzZSB0aGVyZSBhcmUg
bm8gc3VwcG9ydGVkIGh1Z2VwYWdlIHNpemVzClsgICAgMS43NjQ4MDZdIEFwcEFybW9yOiBBcHBB
cm1vciBGaWxlc3lzdGVtIEVuYWJsZWQKWyAgICAxLjc2NDgyM10gcG5wOiBQblAgQUNQSSBpbml0
ClsgICAgMS43NjQ5NzJdIHN5c3RlbSAwMDowMDogW21lbSAweGZjMDAwMDAwLTB4ZmNmZmZmZmZd
IGhhcyBiZWVuIHJlc2VydmVkClsgICAgMS43NjQ5NzRdIHN5c3RlbSAwMDowMDogW21lbSAweGZk
MDAwMDAwLTB4ZmRmZmZmZmZdIGhhcyBiZWVuIHJlc2VydmVkClsgICAgMS43NjQ5NzVdIHN5c3Rl
bSAwMDowMDogW21lbSAweGZlMDAwMDAwLTB4ZmVhZmZmZmZdIGhhcyBiZWVuIHJlc2VydmVkClsg
ICAgMS43NjQ5NzZdIHN5c3RlbSAwMDowMDogW21lbSAweGZlYjAwMDAwLTB4ZmViZmZmZmZdIGhh
cyBiZWVuIHJlc2VydmVkClsgICAgMS43NjQ5NzhdIHN5c3RlbSAwMDowMDogW21lbSAweGZlZDAw
NDAwLTB4ZmVkM2ZmZmZdIGNvdWxkIG5vdCBiZSByZXNlcnZlZApbICAgIDEuNzY0OTc5XSBzeXN0
ZW0gMDA6MDA6IFttZW0gMHhmZWQ0NTAwMC0weGZlZGZmZmZmXSBoYXMgYmVlbiByZXNlcnZlZApb
ICAgIDEuNzY0OTgxXSBzeXN0ZW0gMDA6MDA6IFttZW0gMHhmZWUwMDAwMC0weGZlZWZmZmZmXSBo
YXMgYmVlbiByZXNlcnZlZApbICAgIDEuNzY0OTg3XSBzeXN0ZW0gMDA6MDA6IFBsdWcgYW5kIFBs
YXkgQUNQSSBkZXZpY2UsIElEcyBQTlAwYzAxIChhY3RpdmUpClsgICAgMS43NjUxMzRdIHN5c3Rl
bSAwMDowMTogW21lbSAweGZiZmZjMDAwLTB4ZmJmZmRmZmZdIGNvdWxkIG5vdCBiZSByZXNlcnZl
ZApbICAgIDEuNzY1MTM5XSBzeXN0ZW0gMDA6MDE6IFBsdWcgYW5kIFBsYXkgQUNQSSBkZXZpY2Us
IElEcyBQTlAwYzAyIChhY3RpdmUpClsgICAgMS43NjUzNjldIHN5c3RlbSAwMDowMjogW2lvICAw
eDBhMDAtMHgwYTFmXSBoYXMgYmVlbiByZXNlcnZlZApbICAgIDEuNzY1MzcwXSBzeXN0ZW0gMDA6
MDI6IFtpbyAgMHgwYTIwLTB4MGEyZl0gaGFzIGJlZW4gcmVzZXJ2ZWQKWyAgICAxLjc2NTM3Ml0g
c3lzdGVtIDAwOjAyOiBbaW8gIDB4MGEzMC0weDBhM2ZdIGhhcyBiZWVuIHJlc2VydmVkClsgICAg
MS43NjUzNzZdIHN5c3RlbSAwMDowMjogUGx1ZyBhbmQgUGxheSBBQ1BJIGRldmljZSwgSURzIFBO
UDBjMDIgKGFjdGl2ZSkKWyAgICAxLjc2NTQwNV0geGVuOiByZWdpc3RlcmluZyBnc2kgMSB0cmln
Z2VyaW5nIDEgcG9sYXJpdHkgMApbICAgIDEuNzY1NDUwXSBwbnAgMDA6MDM6IFBsdWcgYW5kIFBs
YXkgQUNQSSBkZXZpY2UsIElEcyBQTlAwMzAzIFBOUDAzMGIgKGFjdGl2ZSkKWyAgICAxLjc2NTQ4
N10geGVuOiByZWdpc3RlcmluZyBnc2kgMTIgdHJpZ2dlcmluZyAxIHBvbGFyaXR5IDAKWyAgICAx
Ljc2NTUyOF0gcG5wIDAwOjA0OiBQbHVnIGFuZCBQbGF5IEFDUEkgZGV2aWNlLCBJRHMgUE5QMGYw
MyBQTlAwZjEzIChhY3RpdmUpClsgICAgMS43NjU1NTZdIHhlbjogcmVnaXN0ZXJpbmcgZ3NpIDgg
dHJpZ2dlcmluZyAxIHBvbGFyaXR5IDAKWyAgICAxLjc2NTU5MV0gcG5wIDAwOjA1OiBQbHVnIGFu
ZCBQbGF5IEFDUEkgZGV2aWNlLCBJRHMgUE5QMGIwMCAoYWN0aXZlKQpbICAgIDEuNzY1Njg2XSBz
eXN0ZW0gMDA6MDY6IFtpbyAgMHgwNGQwLTB4MDRkMV0gaGFzIGJlZW4gcmVzZXJ2ZWQKWyAgICAx
Ljc2NTY5MV0gc3lzdGVtIDAwOjA2OiBQbHVnIGFuZCBQbGF5IEFDUEkgZGV2aWNlLCBJRHMgUE5Q
MGMwMiAoYWN0aXZlKQpbICAgIDEuNzY1OTY3XSBzeXN0ZW0gMDA6MDc6IFtpbyAgMHgwNDAwLTB4
MDQ1M10gaGFzIGJlZW4gcmVzZXJ2ZWQKWyAgICAxLjc2NTk2OV0gc3lzdGVtIDAwOjA3OiBbaW8g
IDB4MDQ1OC0weDA0N2ZdIGhhcyBiZWVuIHJlc2VydmVkClsgICAgMS43NjU5NzBdIHN5c3RlbSAw
MDowNzogW2lvICAweDExODAtMHgxMTlmXSBoYXMgYmVlbiByZXNlcnZlZApbICAgIDEuNzY1OTcy
XSBzeXN0ZW0gMDA6MDc6IFtpbyAgMHgwNTAwLTB4MDU3Zl0gaGFzIGJlZW4gcmVzZXJ2ZWQKWyAg
ICAxLjc2NTk3NF0gc3lzdGVtIDAwOjA3OiBbbWVtIDB4ZmVkMWMwMDAtMHhmZWQxZmZmZl0gaGFz
IGJlZW4gcmVzZXJ2ZWQKWyAgICAxLjc2NTk3NV0gc3lzdGVtIDAwOjA3OiBbbWVtIDB4ZmVjMDAw
MDAtMHhmZWNmZmZmZl0gY291bGQgbm90IGJlIHJlc2VydmVkClsgICAgMS43NjU5NzddIHN5c3Rl
bSAwMDowNzogW21lbSAweGZmMDAwMDAwLTB4ZmZmZmZmZmZdIGhhcyBiZWVuIHJlc2VydmVkClsg
ICAgMS43NjU5ODFdIHN5c3RlbSAwMDowNzogUGx1ZyBhbmQgUGxheSBBQ1BJIGRldmljZSwgSURz
IFBOUDBjMDEgKGFjdGl2ZSkKWyAgICAxLjc2NjA5Nl0gc3lzdGVtIDAwOjA4OiBbaW8gIDB4MDQ1
NC0weDA0NTddIGhhcyBiZWVuIHJlc2VydmVkClsgICAgMS43NjYxMDFdIHN5c3RlbSAwMDowODog
UGx1ZyBhbmQgUGxheSBBQ1BJIGRldmljZSwgSURzIElOVDNmMGQgUE5QMGMwMiAoYWN0aXZlKQpb
ICAgIDEuNzY2NTYyXSBwbnA6IFBuUCBBQ1BJOiBmb3VuZCA5IGRldmljZXMKWyAgICAxLjc4NTM1
Nl0gUE0tVGltZXIgZmFpbGVkIGNvbnNpc3RlbmN5IGNoZWNrICAoMHhmZmZmZmYpIC0gYWJvcnRp
bmcuClsgICAgMS43ODU0MTldIE5FVDogUmVnaXN0ZXJlZCBwcm90b2NvbCBmYW1pbHkgMgpbICAg
IDEuNzg1NTk1XSB0Y3BfbGlzdGVuX3BvcnRhZGRyX2hhc2ggaGFzaCB0YWJsZSBlbnRyaWVzOiAx
MDI0IChvcmRlcjogMiwgMTYzODQgYnl0ZXMsIGxpbmVhcikKWyAgICAxLjc4NTYxNF0gVENQIGVz
dGFibGlzaGVkIGhhc2ggdGFibGUgZW50cmllczogMTYzODQgKG9yZGVyOiA1LCAxMzEwNzIgYnl0
ZXMsIGxpbmVhcikKWyAgICAxLjc4NTY2M10gVENQIGJpbmQgaGFzaCB0YWJsZSBlbnRyaWVzOiAx
NjM4NCAob3JkZXI6IDYsIDI2MjE0NCBieXRlcywgbGluZWFyKQpbICAgIDEuNzg1Njg1XSBUQ1A6
IEhhc2ggdGFibGVzIGNvbmZpZ3VyZWQgKGVzdGFibGlzaGVkIDE2Mzg0IGJpbmQgMTYzODQpClsg
ICAgMS43ODU3MjRdIFVEUCBoYXNoIHRhYmxlIGVudHJpZXM6IDEwMjQgKG9yZGVyOiAzLCAzMjc2
OCBieXRlcywgbGluZWFyKQpbICAgIDEuNzg1NzM0XSBVRFAtTGl0ZSBoYXNoIHRhYmxlIGVudHJp
ZXM6IDEwMjQgKG9yZGVyOiAzLCAzMjc2OCBieXRlcywgbGluZWFyKQpbICAgIDEuNzg1OTE5XSBO
RVQ6IFJlZ2lzdGVyZWQgcHJvdG9jb2wgZmFtaWx5IDEKWyAgICAxLjc4NTkyNF0gTkVUOiBSZWdp
c3RlcmVkIHByb3RvY29sIGZhbWlseSA0NApbICAgIDEuNzg1OTUyXSBwY2kgMDAwMDowMDowMS4w
OiBQQ0kgYnJpZGdlIHRvIFtidXMgMDFdClsgICAgMS43ODU5OTFdIHBjaSAwMDAwOjAwOjAxLjE6
IFBDSSBicmlkZ2UgdG8gW2J1cyAwMl0KWyAgICAxLjc4NjAyOV0gcGNpIDAwMDA6MDA6MDIuMDog
UENJIGJyaWRnZSB0byBbYnVzIDAzXQpbICAgIDEuNzg2MDQzXSBwY2kgMDAwMDowMDowMi4wOiAg
IGJyaWRnZSB3aW5kb3cgW21lbSAweGY5MDAwMDAwLTB4ZmIwZmZmZmZdClsgICAgMS43ODYwNTNd
IHBjaSAwMDAwOjAwOjAyLjA6ICAgYnJpZGdlIHdpbmRvdyBbbWVtIDB4ZTAwMDAwMDAtMHhlZmZm
ZmZmZiA2NGJpdCBwcmVmXQpbICAgIDEuNzg2MDcxXSBwY2kgMDAwMDowMDowMi4xOiBQQ0kgYnJp
ZGdlIHRvIFtidXMgMDRdClsgICAgMS43ODYxMDhdIHBjaSAwMDAwOjAwOjAyLjI6IFBDSSBicmlk
Z2UgdG8gW2J1cyAwNV0KWyAgICAxLjc4NjE0NV0gcGNpIDAwMDA6MDA6MDIuMzogUENJIGJyaWRn
ZSB0byBbYnVzIDA2XQpbICAgIDEuNzg2MTgzXSBwY2kgMDAwMDowMDowMy4wOiBQQ0kgYnJpZGdl
IHRvIFtidXMgMDddClsgICAgMS43ODYyMjBdIHBjaSAwMDAwOjAwOjAzLjE6IFBDSSBicmlkZ2Ug
dG8gW2J1cyAwOF0KWyAgICAxLjc4NjI1N10gcGNpIDAwMDA6MDA6MDMuMjogUENJIGJyaWRnZSB0
byBbYnVzIDA5XQpbICAgIDEuNzg2Mjk0XSBwY2kgMDAwMDowMDowMy4zOiBQQ0kgYnJpZGdlIHRv
IFtidXMgMGFdClsgICAgMS43ODYzMzJdIHBjaSAwMDAwOjAwOjFjLjA6IFBDSSBicmlkZ2UgdG8g
W2J1cyAwYl0KWyAgICAxLjc4NjM3MV0gcGNpIDAwMDA6MDA6MWMuMTogUENJIGJyaWRnZSB0byBb
YnVzIDBjXQpbICAgIDEuNzg2Mzg1XSBwY2kgMDAwMDowMDoxYy4xOiAgIGJyaWRnZSB3aW5kb3cg
W21lbSAweGZiMjAwMDAwLTB4ZmIyZmZmZmZdClsgICAgMS43ODY0MTJdIHBjaSAwMDAwOjAwOjFj
LjQ6IFBDSSBicmlkZ2UgdG8gW2J1cyAwZF0KWyAgICAxLjc4NjQxOF0gcGNpIDAwMDA6MDA6MWMu
NDogICBicmlkZ2Ugd2luZG93IFtpbyAgMHhlMDAwLTB4ZWZmZl0KWyAgICAxLjc4NjQzMl0gcGNp
IDAwMDA6MDA6MWMuNDogICBicmlkZ2Ugd2luZG93IFttZW0gMHhmYjEwMDAwMC0weGZiMWZmZmZm
XQpbICAgIDEuNzg2NDU4XSBwY2kgMDAwMDowMDoxZS4wOiBQQ0kgYnJpZGdlIHRvIFtidXMgMGVd
ClsgICAgMS43ODY0OTddIHBjaV9idXMgMDAwMDowMDogcmVzb3VyY2UgNCBbaW8gIDB4MDAwMC0w
eDAzYWYgd2luZG93XQpbICAgIDEuNzg2NDk4XSBwY2lfYnVzIDAwMDA6MDA6IHJlc291cmNlIDUg
W2lvICAweDAzZTAtMHgwY2Y3IHdpbmRvd10KWyAgICAxLjc4NjQ5OV0gcGNpX2J1cyAwMDAwOjAw
OiByZXNvdXJjZSA2IFtpbyAgMHgwM2IwLTB4MDNkZiB3aW5kb3ddClsgICAgMS43ODY1MDFdIHBj
aV9idXMgMDAwMDowMDogcmVzb3VyY2UgNyBbaW8gIDB4MGQwMC0weGZmZmYgd2luZG93XQpbICAg
IDEuNzg2NTAyXSBwY2lfYnVzIDAwMDA6MDA6IHJlc291cmNlIDggW21lbSAweDAwMGEwMDAwLTB4
MDAwYmZmZmYgd2luZG93XQpbICAgIDEuNzg2NTAzXSBwY2lfYnVzIDAwMDA6MDA6IHJlc291cmNl
IDkgW21lbSAweDAwMGMwMDAwLTB4MDAwZGZmZmYgd2luZG93XQpbICAgIDEuNzg2NTA0XSBwY2lf
YnVzIDAwMDA6MDA6IHJlc291cmNlIDEwIFttZW0gMHhjYzAwMDAwMC0weGZmZmZmZmZmIHdpbmRv
d10KWyAgICAxLjc4NjUwNV0gcGNpX2J1cyAwMDAwOjAwOiByZXNvdXJjZSAxMSBbbWVtIDB4NDQw
MDAwMDAwLTB4M2ZmZmZmZmZmZmZmIHdpbmRvd10KWyAgICAxLjc4NjUwN10gcGNpX2J1cyAwMDAw
OjAzOiByZXNvdXJjZSAxIFttZW0gMHhmOTAwMDAwMC0weGZiMGZmZmZmXQpbICAgIDEuNzg2NTA4
XSBwY2lfYnVzIDAwMDA6MDM6IHJlc291cmNlIDIgW21lbSAweGUwMDAwMDAwLTB4ZWZmZmZmZmYg
NjRiaXQgcHJlZl0KWyAgICAxLjc4NjUxMF0gcGNpX2J1cyAwMDAwOjBjOiByZXNvdXJjZSAxIFtt
ZW0gMHhmYjIwMDAwMC0weGZiMmZmZmZmXQpbICAgIDEuNzg2NTExXSBwY2lfYnVzIDAwMDA6MGQ6
IHJlc291cmNlIDAgW2lvICAweGUwMDAtMHhlZmZmXQpbICAgIDEuNzg2NTEzXSBwY2lfYnVzIDAw
MDA6MGQ6IHJlc291cmNlIDEgW21lbSAweGZiMTAwMDAwLTB4ZmIxZmZmZmZdClsgICAgMS43ODY1
MTRdIHBjaV9idXMgMDAwMDowZTogcmVzb3VyY2UgNCBbaW8gIDB4MDAwMC0weDAzYWYgd2luZG93
XQpbICAgIDEuNzg2NTE1XSBwY2lfYnVzIDAwMDA6MGU6IHJlc291cmNlIDUgW2lvICAweDAzZTAt
MHgwY2Y3IHdpbmRvd10KWyAgICAxLjc4NjUxNl0gcGNpX2J1cyAwMDAwOjBlOiByZXNvdXJjZSA2
IFtpbyAgMHgwM2IwLTB4MDNkZiB3aW5kb3ddClsgICAgMS43ODY1MTddIHBjaV9idXMgMDAwMDow
ZTogcmVzb3VyY2UgNyBbaW8gIDB4MGQwMC0weGZmZmYgd2luZG93XQpbICAgIDEuNzg2NTE4XSBw
Y2lfYnVzIDAwMDA6MGU6IHJlc291cmNlIDggW21lbSAweDAwMGEwMDAwLTB4MDAwYmZmZmYgd2lu
ZG93XQpbICAgIDEuNzg2NTE5XSBwY2lfYnVzIDAwMDA6MGU6IHJlc291cmNlIDkgW21lbSAweDAw
MGMwMDAwLTB4MDAwZGZmZmYgd2luZG93XQpbICAgIDEuNzg2NTIxXSBwY2lfYnVzIDAwMDA6MGU6
IHJlc291cmNlIDEwIFttZW0gMHhjYzAwMDAwMC0weGZmZmZmZmZmIHdpbmRvd10KWyAgICAxLjc4
NjUyMl0gcGNpX2J1cyAwMDAwOjBlOiByZXNvdXJjZSAxMSBbbWVtIDB4NDQwMDAwMDAwLTB4M2Zm
ZmZmZmZmZmZmIHdpbmRvd10KWyAgICAxLjc4NjY1Ml0gcGNpIDAwMDA6MDA6MDUuMDogZGlzYWJs
ZWQgYm9vdCBpbnRlcnJ1cHRzIG9uIGRldmljZSBbODA4NjowZTI4XQpbICAgIDEuNzg2ODI0XSB4
ZW46IHJlZ2lzdGVyaW5nIGdzaSAxNiB0cmlnZ2VyaW5nIDAgcG9sYXJpdHkgMQpbICAgIDEuNzg2
ODQ4XSB4ZW46IC0tPiBwaXJxPTE2IC0+IGlycT0xNiAoZ3NpPTE2KQpbICAgIDEuNzg3MDg1XSB4
ZW46IHJlZ2lzdGVyaW5nIGdzaSAyMyB0cmlnZ2VyaW5nIDAgcG9sYXJpdHkgMQpbICAgIDEuNzg3
MTAzXSB4ZW46IC0tPiBwaXJxPTIzIC0+IGlycT0yMyAoZ3NpPTIzKQpbICAgIDEuNzg3MzA0XSBw
Y2kgMDAwMDowMzowMC4wOiBWaWRlbyBkZXZpY2Ugd2l0aCBzaGFkb3dlZCBST00gYXQgW21lbSAw
eDAwMGMwMDAwLTB4MDAwZGZmZmZdClsgICAgMS43ODczODZdIHhlbjogcmVnaXN0ZXJpbmcgZ3Np
IDE2IHRyaWdnZXJpbmcgMCBwb2xhcml0eSAxClsgICAgMS43ODczOTBdIEFscmVhZHkgc2V0dXAg
dGhlIEdTSSA6MTYKWyAgICAxLjc4NzQzMl0geGVuOiByZWdpc3RlcmluZyBnc2kgMTcgdHJpZ2dl
cmluZyAwIHBvbGFyaXR5IDEKWyAgICAxLjc4NzQ0OF0geGVuOiAtLT4gcGlycT0xNyAtPiBpcnE9
MTcgKGdzaT0xNykKWyAgICAxLjc4Nzg5NF0gUENJOiBDTFMgNjQgYnl0ZXMsIGRlZmF1bHQgNjQK
WyAgICAxLjc4Nzk0NV0gVHJ5aW5nIHRvIHVucGFjayByb290ZnMgaW1hZ2UgYXMgaW5pdHJhbWZz
Li4uClsgICAgMi4zMjA3ODBdIEZyZWVpbmcgaW5pdHJkIG1lbW9yeTogMzQ3NzJLClsgICAgMi4z
MjA4MzBdIGNsb2Nrc291cmNlOiB0c2M6IG1hc2s6IDB4ZmZmZmZmZmZmZmZmZmZmZiBtYXhfY3lj
bGVzOiAweDIzZjQ1NzI1ZjMyLCBtYXhfaWRsZV9uczogNDQwNzk1MjczNzcwIG5zClsgICAgMi4z
MjA5MTRdIGNsb2Nrc291cmNlOiBTd2l0Y2hlZCB0byBjbG9ja3NvdXJjZSB0c2MKWyAgICAyLjMy
MTUwNV0gSW5pdGlhbGlzZSBzeXN0ZW0gdHJ1c3RlZCBrZXlyaW5ncwpbICAgIDIuMzIxNTE3XSBL
ZXkgdHlwZSBibGFja2xpc3QgcmVnaXN0ZXJlZApbICAgIDIuMzIxNjY1XSB3b3JraW5nc2V0OiB0
aW1lc3RhbXBfYml0cz0zNiBtYXhfb3JkZXI9MTggYnVja2V0X29yZGVyPTAKWyAgICAyLjMyMjc2
MV0gemJ1ZDogbG9hZGVkClsgICAgMi4zMjMyMThdIGludGVncml0eTogUGxhdGZvcm0gS2V5cmlu
ZyBpbml0aWFsaXplZApbICAgIDIuMzIzMjIxXSBLZXkgdHlwZSBhc3ltbWV0cmljIHJlZ2lzdGVy
ZWQKWyAgICAyLjMyMzIyM10gQXN5bW1ldHJpYyBrZXkgcGFyc2VyICd4NTA5JyByZWdpc3RlcmVk
ClsgICAgMi4zMjMyMzNdIEJsb2NrIGxheWVyIFNDU0kgZ2VuZXJpYyAoYnNnKSBkcml2ZXIgdmVy
c2lvbiAwLjQgbG9hZGVkIChtYWpvciAyNTEpClsgICAgMi4zMjMzODldIGlvIHNjaGVkdWxlciBt
cS1kZWFkbGluZSByZWdpc3RlcmVkClsgICAgMi4zMjM3MjBdIHhlbjogcmVnaXN0ZXJpbmcgZ3Np
IDI2IHRyaWdnZXJpbmcgMCBwb2xhcml0eSAxClsgICAgMi4zMjM3NDhdIHhlbjogLS0+IHBpcnE9
MjYgLT4gaXJxPTI2IChnc2k9MjYpClsgICAgMi4zMjQyOTBdIHhlbjogcmVnaXN0ZXJpbmcgZ3Np
IDI2IHRyaWdnZXJpbmcgMCBwb2xhcml0eSAxClsgICAgMi4zMjQyOTVdIEFscmVhZHkgc2V0dXAg
dGhlIEdTSSA6MjYKWyAgICAyLjMyNDc5MV0geGVuOiByZWdpc3RlcmluZyBnc2kgMzIgdHJpZ2dl
cmluZyAwIHBvbGFyaXR5IDEKWyAgICAyLjMyNDgwOV0geGVuOiAtLT4gcGlycT0zMiAtPiBpcnE9
MzIgKGdzaT0zMikKWyAgICAyLjMyNTI5N10geGVuOiByZWdpc3RlcmluZyBnc2kgMzIgdHJpZ2dl
cmluZyAwIHBvbGFyaXR5IDEKWyAgICAyLjMyNTMwMl0gQWxyZWFkeSBzZXR1cCB0aGUgR1NJIDoz
MgpbICAgIDIuMzI1Nzg5XSB4ZW46IHJlZ2lzdGVyaW5nIGdzaSAzMiB0cmlnZ2VyaW5nIDAgcG9s
YXJpdHkgMQpbICAgIDIuMzI1NzkzXSBBbHJlYWR5IHNldHVwIHRoZSBHU0kgOjMyClsgICAgMi4z
MjYyNzddIHhlbjogcmVnaXN0ZXJpbmcgZ3NpIDMyIHRyaWdnZXJpbmcgMCBwb2xhcml0eSAxClsg
ICAgMi4zMjYyODJdIEFscmVhZHkgc2V0dXAgdGhlIEdTSSA6MzIKWyAgICAyLjMyNjc2Nl0geGVu
OiByZWdpc3RlcmluZyBnc2kgNDAgdHJpZ2dlcmluZyAwIHBvbGFyaXR5IDEKWyAgICAyLjMyNjc4
NF0geGVuOiAtLT4gcGlycT00MCAtPiBpcnE9NDAgKGdzaT00MCkKWyAgICAyLjMyNzI4NV0geGVu
OiByZWdpc3RlcmluZyBnc2kgNDAgdHJpZ2dlcmluZyAwIHBvbGFyaXR5IDEKWyAgICAyLjMyNzI4
OV0gQWxyZWFkeSBzZXR1cCB0aGUgR1NJIDo0MApbICAgIDIuMzI3NzY4XSB4ZW46IHJlZ2lzdGVy
aW5nIGdzaSA0MCB0cmlnZ2VyaW5nIDAgcG9sYXJpdHkgMQpbICAgIDIuMzI3NzczXSBBbHJlYWR5
IHNldHVwIHRoZSBHU0kgOjQwClsgICAgMi4zMjgyNjJdIHhlbjogcmVnaXN0ZXJpbmcgZ3NpIDQw
IHRyaWdnZXJpbmcgMCBwb2xhcml0eSAxClsgICAgMi4zMjgyNjddIEFscmVhZHkgc2V0dXAgdGhl
IEdTSSA6NDAKWyAgICAyLjMyODc1Nl0geGVuOiByZWdpc3RlcmluZyBnc2kgMTcgdHJpZ2dlcmlu
ZyAwIHBvbGFyaXR5IDEKWyAgICAyLjMyODc2MF0gQWxyZWFkeSBzZXR1cCB0aGUgR1NJIDoxNwpb
ICAgIDIuMzI5NDMyXSB4ZW46IHJlZ2lzdGVyaW5nIGdzaSAxNyB0cmlnZ2VyaW5nIDAgcG9sYXJp
dHkgMQpbICAgIDIuMzI5NDM3XSBBbHJlYWR5IHNldHVwIHRoZSBHU0kgOjE3ClsgICAgMi4zMjk3
OTldIHNocGNocDogU3RhbmRhcmQgSG90IFBsdWcgUENJIENvbnRyb2xsZXIgRHJpdmVyIHZlcnNp
b246IDAuNApbICAgIDIuMzI5ODExXSBpbnRlbF9pZGxlOiBNV0FJVCBzdWJzdGF0ZXM6IDB4MTEy
MApbICAgIDIuMzI5OTkxXSBNb25pdG9yLU13YWl0IHdpbGwgYmUgdXNlZCB0byBlbnRlciBDLTEg
c3RhdGUKWyAgICAyLjMzMDAyMF0gQUNQSTogXF9TQl8uU0NLMC5DMDAwOiBGb3VuZCAxIGlkbGUg
c3RhdGVzClsgICAgMi4zMzAwMjJdIGludGVsX2lkbGU6IHYwLjUuMSBtb2RlbCAweDNFClsgICAg
Mi4zMzAwMjddIGludGVsX2lkbGU6IGludGVsX2lkbGUgeWllbGRpbmcgdG8gbm9uZQpbICAgIDIu
MzMwMTk3XSBBQ1BJOiBcX1NCXy5TQ0swLkMwMDA6IEZvdW5kIDEgaWRsZSBzdGF0ZXMKWyAgICAy
LjMzMDUyMF0gQUNQSTogXF9TQl8uU0NLMC5DMDAyOiBGb3VuZCAxIGlkbGUgc3RhdGVzClsgICAg
Mi4zMzA4MzNdIEFDUEk6IFxfU0JfLlNDSzAuQzAwNDogRm91bmQgMSBpZGxlIHN0YXRlcwpbICAg
IDIuMzMxMTM2XSBBQ1BJOiBcX1NCXy5TQ0swLkMwMDY6IEZvdW5kIDEgaWRsZSBzdGF0ZXMKWyAg
ICAyLjMzMTc5MV0geGVuX21jZWxvZzogL2Rldi9tY2Vsb2cgcmVnaXN0ZXJlZCBieSBYZW4KWyAg
ICAyLjMzMjM2OV0gU2VyaWFsOiA4MjUwLzE2NTUwIGRyaXZlciwgNCBwb3J0cywgSVJRIHNoYXJp
bmcgZW5hYmxlZApbICAgIDIuMzMzMDk0XSBocGV0X2FjcGlfYWRkOiBubyBhZGRyZXNzIG9yIGly
cXMgaW4gX0NSUwpbICAgIDIuMzMzMTE3XSBMaW51eCBhZ3BnYXJ0IGludGVyZmFjZSB2MC4xMDMK
WyAgICAyLjMzMzIxNV0gQU1ELVZpOiBBTUQgSU9NTVV2MiBkcml2ZXIgYnkgSm9lcmcgUm9lZGVs
IDxqcm9lZGVsQHN1c2UuZGU+ClsgICAgMi4zMzMyMTZdIEFNRC1WaTogQU1EIElPTU1VdjIgZnVu
Y3Rpb25hbGl0eSBub3QgYXZhaWxhYmxlIG9uIHRoaXMgc3lzdGVtClsgICAgMi4zMzM2MThdIGk4
MDQyOiBQTlA6IFBTLzIgQ29udHJvbGxlciBbUE5QMDMwMzpQUzJLLFBOUDBmMDM6UFMyTV0gYXQg
MHg2MCwweDY0IGlycSAxLDEyClsgICAgMi4zMzQyNTldIHNlcmlvOiBpODA0MiBLQkQgcG9ydCBh
dCAweDYwLDB4NjQgaXJxIDEKWyAgICAyLjMzNDI2NV0gc2VyaW86IGk4MDQyIEFVWCBwb3J0IGF0
IDB4NjAsMHg2NCBpcnEgMTIKWyAgICAyLjMzNDQ2OF0gbW91c2VkZXY6IFBTLzIgbW91c2UgZGV2
aWNlIGNvbW1vbiBmb3IgYWxsIG1pY2UKWyAgICAyLjMzNDU0Nl0gcnRjX2Ntb3MgMDA6MDU6IFJU
QyBjYW4gd2FrZSBmcm9tIFM0ClsgICAgMi4zMzQ5MTJdIHJ0Y19jbW9zIDAwOjA1OiByZWdpc3Rl
cmVkIGFzIHJ0YzAKWyAgICAyLjMzNDk4OV0gcnRjX2Ntb3MgMDA6MDU6IHNldHRpbmcgc3lzdGVt
IGNsb2NrIHRvIDIwMjEtMDItMDFUMTU6MTk6NTYgVVRDICgxNjEyMTkyNzk2KQpbICAgIDIuMzM1
MDE0XSBydGNfY21vcyAwMDowNTogYWxhcm1zIHVwIHRvIG9uZSBtb250aCwgeTNrLCAxMTQgYnl0
ZXMgbnZyYW0KWyAgICAyLjMzNTAyM10gaW50ZWxfcHN0YXRlOiBDUFUgbW9kZWwgbm90IHN1cHBv
cnRlZApbICAgIDIuMzM1MTQ2XSBsZWR0cmlnLWNwdTogcmVnaXN0ZXJlZCB0byBpbmRpY2F0ZSBh
Y3Rpdml0eSBvbiBDUFVzClsgICAgMi4zMzU5MDVdIE5FVDogUmVnaXN0ZXJlZCBwcm90b2NvbCBm
YW1pbHkgMTAKWyAgICAyLjM1Njc3OF0gU2VnbWVudCBSb3V0aW5nIHdpdGggSVB2NgpbICAgIDIu
MzU2ODAzXSBtaXA2OiBNb2JpbGUgSVB2NgpbICAgIDIuMzU2ODA2XSBORVQ6IFJlZ2lzdGVyZWQg
cHJvdG9jb2wgZmFtaWx5IDE3ClsgICAgMi4zNTY5MTldIG1wbHNfZ3NvOiBNUExTIEdTTyBzdXBw
b3J0ClsgICAgMi4zNTcyMThdIElQSSBzaG9ydGhhbmQgYnJvYWRjYXN0OiBlbmFibGVkClsgICAg
Mi4zNTcyMjZdIHNjaGVkX2Nsb2NrOiBNYXJraW5nIHN0YWJsZSAoMjI3ODc2MDUyNSwgNzg0MjI2
MzEpLT4oMjM3MTM4ODgzMSwgLTE0MjA1Njc1KQpbICAgIDIuMzU3NTE2XSByZWdpc3RlcmVkIHRh
c2tzdGF0cyB2ZXJzaW9uIDEKWyAgICAyLjM1NzUxOV0gTG9hZGluZyBjb21waWxlZC1pbiBYLjUw
OSBjZXJ0aWZpY2F0ZXMKWyAgICAyLjM5NzMzM10gTG9hZGVkIFguNTA5IGNlcnQgJ0RlYmlhbiBT
ZWN1cmUgQm9vdCBDQTogNmNjZWNlN2U0YzZjMGQxZjYxNDlmM2RkMjdkZmNjNWNiYjQxOWVhMScK
WyAgICAyLjM5NzM1M10gTG9hZGVkIFguNTA5IGNlcnQgJ0RlYmlhbiBTZWN1cmUgQm9vdCBTaWdu
ZXIgMjAyMDogMDBiNTVlYjNiOScKWyAgICAyLjM5NzM5MF0genN3YXA6IGxvYWRlZCB1c2luZyBw
b29sIGx6by96YnVkClsgICAgMi4zOTc3OThdIEtleSB0eXBlIC5fZnNjcnlwdCByZWdpc3RlcmVk
ClsgICAgMi4zOTc3OTldIEtleSB0eXBlIC5mc2NyeXB0IHJlZ2lzdGVyZWQKWyAgICAyLjM5Nzgw
MF0gS2V5IHR5cGUgZnNjcnlwdC1wcm92aXNpb25pbmcgcmVnaXN0ZXJlZApbICAgIDIuMzk3ODQ2
XSBBcHBBcm1vcjogQXBwQXJtb3Igc2hhMSBwb2xpY3kgaGFzaGluZyBlbmFibGVkClsgICAgMi40
MDA0MTRdIEZyZWVpbmcgdW51c2VkIGtlcm5lbCBpbWFnZSAoaW5pdG1lbSkgbWVtb3J5OiAyMzgw
SwpbICAgIDIuNDMzMTUxXSBXcml0ZSBwcm90ZWN0aW5nIHRoZSBrZXJuZWwgcmVhZC1vbmx5IGRh
dGE6IDE4NDMyawpbICAgIDIuNDQ3MjQ2XSBGcmVlaW5nIHVudXNlZCBrZXJuZWwgaW1hZ2UgKHRl
eHQvcm9kYXRhIGdhcCkgbWVtb3J5OiAyMDQwSwpbICAgIDIuNDQ3MzE3XSBGcmVlaW5nIHVudXNl
ZCBrZXJuZWwgaW1hZ2UgKHJvZGF0YS9kYXRhIGdhcCkgbWVtb3J5OiAzNksKWyAgICAyLjg5OTE4
NF0geDg2L21tOiBDaGVja2VkIFcrWCBtYXBwaW5nczogcGFzc2VkLCBubyBXK1ggcGFnZXMgZm91
bmQuClsgICAgMi44OTkxOTJdIFJ1biAvaW5pdCBhcyBpbml0IHByb2Nlc3MKWyAgICAyLjg5OTE5
NF0gICB3aXRoIGFyZ3VtZW50czoKWyAgICAyLjg5OTE5NF0gICAgIC9pbml0ClsgICAgMi44OTkx
OTVdICAgICBwbGFjZWhvbGRlcgpbICAgIDIuODk5MTk2XSAgIHdpdGggZW52aXJvbm1lbnQ6Clsg
ICAgMi44OTkxOTddICAgICBIT01FPS8KWyAgICAyLjg5OTE5OF0gICAgIFRFUk09bGludXgKWyAg
ICAzLjE4NTk2NF0geGVuOiByZWdpc3RlcmluZyBnc2kgMTggdHJpZ2dlcmluZyAwIHBvbGFyaXR5
IDEKWyAgICAzLjE4NTk5N10geGVuOiAtLT4gcGlycT0xOCAtPiBpcnE9MTggKGdzaT0xOCkKWyAg
ICAzLjE4NjE1OV0gaTgwMV9zbWJ1cyAwMDAwOjAwOjFmLjM6IFNNQnVzIHVzaW5nIFBDSSBpbnRl
cnJ1cHQKWyAgICAzLjE4Njg3OV0gaTJjIGkyYy0wOiA0LzQgbWVtb3J5IHNsb3RzIHBvcHVsYXRl
ZCAoZnJvbSBETUkpClsgICAgMy4yMjI2MzBdIHhlbjogcmVnaXN0ZXJpbmcgZ3NpIDE2IHRyaWdn
ZXJpbmcgMCBwb2xhcml0eSAxClsgICAgMy4yMjI2MzZdIEFscmVhZHkgc2V0dXAgdGhlIEdTSSA6
MTYKWyAgICAzLjIzMTAzN10gQUNQSTogYnVzIHR5cGUgVVNCIHJlZ2lzdGVyZWQKWyAgICAzLjIz
MTA2MF0gdXNiY29yZTogcmVnaXN0ZXJlZCBuZXcgaW50ZXJmYWNlIGRyaXZlciB1c2JmcwpbICAg
IDMuMjMxMDY3XSB1c2Jjb3JlOiByZWdpc3RlcmVkIG5ldyBpbnRlcmZhY2UgZHJpdmVyIGh1Ygpb
ICAgIDMuMjMxMTc1XSB1c2Jjb3JlOiByZWdpc3RlcmVkIG5ldyBkZXZpY2UgZHJpdmVyIHVzYgpb
ICAgIDMuMjMxNTg3XSBTQ1NJIHN1YnN5c3RlbSBpbml0aWFsaXplZApbICAgIDMuMjQzMTcxXSBl
aGNpX2hjZDogVVNCIDIuMCAnRW5oYW5jZWQnIEhvc3QgQ29udHJvbGxlciAoRUhDSSkgRHJpdmVy
ClsgICAgMy4yNDQ3MDddIGVoY2ktcGNpOiBFSENJIFBDSSBwbGF0Zm9ybSBkcml2ZXIKWyAgICAz
LjI0NTA0OF0geGVuOiByZWdpc3RlcmluZyBnc2kgMTYgdHJpZ2dlcmluZyAwIHBvbGFyaXR5IDEK
WyAgICAzLjI0NTA1M10gQWxyZWFkeSBzZXR1cCB0aGUgR1NJIDoxNgpbICAgIDMuMjQ1MTgyXSBl
aGNpLXBjaSAwMDAwOjAwOjFhLjA6IEVIQ0kgSG9zdCBDb250cm9sbGVyClsgICAgMy4yNDUxOTNd
IGVoY2ktcGNpIDAwMDA6MDA6MWEuMDogbmV3IFVTQiBidXMgcmVnaXN0ZXJlZCwgYXNzaWduZWQg
YnVzIG51bWJlciAxClsgICAgMy4yNDUyMjhdIGVoY2ktcGNpIDAwMDA6MDA6MWEuMDogZGVidWcg
cG9ydCAyClsgICAgMy4yNDkxOThdIGVoY2ktcGNpIDAwMDA6MDA6MWEuMDogY2FjaGUgbGluZSBz
aXplIG9mIDY0IGlzIG5vdCBzdXBwb3J0ZWQKWyAgICAzLjI0OTQ5M10gZWhjaS1wY2kgMDAwMDow
MDoxYS4wOiBpcnEgMTYsIGlvIG1lbSAweGZiMzAyMDAwClsgICAgMy4yNTU4NDBdIGxpYmF0YSB2
ZXJzaW9uIDMuMDAgbG9hZGVkLgpbICAgIDMuMjU2NTQ0XSBsaWJwaHk6IHI4MTY5OiBwcm9iZWQK
WyAgICAzLjI1Njc4N10gcjgxNjkgMDAwMDowZDowMC4wIGV0aDA6IFJUTDgxNjhoLzgxMTFoLCAw
MDplMDo0YzowYTo1Mjo5NywgWElEIDU0MSwgSVJRIDg5ClsgICAgMy4yNTY3OTBdIHI4MTY5IDAw
MDA6MGQ6MDAuMCBldGgwOiBqdW1ibyBmZWF0dXJlcyBbZnJhbWVzOiA5MTk0IGJ5dGVzLCB0eCBj
aGVja3N1bW1pbmc6IGtvXQpbICAgIDMuMjY1MTIxXSBlaGNpLXBjaSAwMDAwOjAwOjFhLjA6IFVT
QiAyLjAgc3RhcnRlZCwgRUhDSSAxLjAwClsgICAgMy4yNjUyOTRdIHVzYiB1c2IxOiBOZXcgVVNC
IGRldmljZSBmb3VuZCwgaWRWZW5kb3I9MWQ2YiwgaWRQcm9kdWN0PTAwMDIsIGJjZERldmljZT0g
NS4xMApbICAgIDMuMjY1Mjk3XSB1c2IgdXNiMTogTmV3IFVTQiBkZXZpY2Ugc3RyaW5nczogTWZy
PTMsIFByb2R1Y3Q9MiwgU2VyaWFsTnVtYmVyPTEKWyAgICAzLjI2NTI5OV0gdXNiIHVzYjE6IFBy
b2R1Y3Q6IEVIQ0kgSG9zdCBDb250cm9sbGVyClsgICAgMy4yNjUzMDNdIHVzYiB1c2IxOiBNYW51
ZmFjdHVyZXI6IExpbnV4IDUuMTAuMC0xLWFtZDY0IGVoY2lfaGNkClsgICAgMy4yNjUzMTFdIHVz
YiB1c2IxOiBTZXJpYWxOdW1iZXI6IDAwMDA6MDA6MWEuMApbICAgIDMuMjY1NTI3XSBodWIgMS0w
OjEuMDogVVNCIGh1YiBmb3VuZApbICAgIDMuMjY2MTI5XSBodWIgMS0wOjEuMDogMiBwb3J0cyBk
ZXRlY3RlZApbICAgIDMuMjY2NTY0XSB4ZW46IHJlZ2lzdGVyaW5nIGdzaSAyMyB0cmlnZ2VyaW5n
IDAgcG9sYXJpdHkgMQpbICAgIDMuMjY2NTcwXSBBbHJlYWR5IHNldHVwIHRoZSBHU0kgOjIzClsg
ICAgMy4yNjY2NTJdIGVoY2ktcGNpIDAwMDA6MDA6MWQuMDogRUhDSSBIb3N0IENvbnRyb2xsZXIK
WyAgICAzLjI2NjY2MV0gZWhjaS1wY2kgMDAwMDowMDoxZC4wOiBuZXcgVVNCIGJ1cyByZWdpc3Rl
cmVkLCBhc3NpZ25lZCBidXMgbnVtYmVyIDIKWyAgICAzLjI2NjY5Nl0gZWhjaS1wY2kgMDAwMDow
MDoxZC4wOiBkZWJ1ZyBwb3J0IDIKWyAgICAzLjI2NzM3NV0geGVuOiByZWdpc3RlcmluZyBnc2kg
MTcgdHJpZ2dlcmluZyAwIHBvbGFyaXR5IDEKWyAgICAzLjI2NzM4MV0gQWxyZWFkeSBzZXR1cCB0
aGUgR1NJIDoxNwpbICAgIDMuMjcwNjUxXSBlaGNpLXBjaSAwMDAwOjAwOjFkLjA6IGNhY2hlIGxp
bmUgc2l6ZSBvZiA2NCBpcyBub3Qgc3VwcG9ydGVkClsgICAgMy4yNzE3NDhdIGVoY2ktcGNpIDAw
MDA6MDA6MWQuMDogaXJxIDIzLCBpbyBtZW0gMHhmYjMwMTAwMApbICAgIDMuMjcyNTMxXSBhaGNp
IDAwMDA6MDA6MWYuMjogdmVyc2lvbiAzLjAKWyAgICAzLjI3MjY1Ml0geGVuOiByZWdpc3Rlcmlu
ZyBnc2kgMTkgdHJpZ2dlcmluZyAwIHBvbGFyaXR5IDEKWyAgICAzLjI3MjY4Nl0geGVuOiAtLT4g
cGlycT0xOSAtPiBpcnE9MTkgKGdzaT0xOSkKWyAgICAzLjI3Mjg5N10gYWhjaSAwMDAwOjAwOjFm
LjI6IEFIQ0kgMDAwMS4wMzAwIDMyIHNsb3RzIDYgcG9ydHMgNiBHYnBzIDB4MyBpbXBsIFNBVEEg
bW9kZQpbICAgIDMuMjcyOTAxXSBhaGNpIDAwMDA6MDA6MWYuMjogZmxhZ3M6IDY0Yml0IG5jcSBz
bnRmIHBtIGxlZCBjbG8gcGlvIHNsdW0gcGFydCBlbXMgYXBzdCAKWyAgICAzLjI4NTEyOV0gZWhj
aS1wY2kgMDAwMDowMDoxZC4wOiBVU0IgMi4wIHN0YXJ0ZWQsIEVIQ0kgMS4wMApbICAgIDMuMjg1
Mjg5XSB1c2IgdXNiMjogTmV3IFVTQiBkZXZpY2UgZm91bmQsIGlkVmVuZG9yPTFkNmIsIGlkUHJv
ZHVjdD0wMDAyLCBiY2REZXZpY2U9IDUuMTAKWyAgICAzLjI4NTI5Ml0gdXNiIHVzYjI6IE5ldyBV
U0IgZGV2aWNlIHN0cmluZ3M6IE1mcj0zLCBQcm9kdWN0PTIsIFNlcmlhbE51bWJlcj0xClsgICAg
My4yODUyOTRdIHVzYiB1c2IyOiBQcm9kdWN0OiBFSENJIEhvc3QgQ29udHJvbGxlcgpbICAgIDMu
Mjg1Mjk2XSB1c2IgdXNiMjogTWFudWZhY3R1cmVyOiBMaW51eCA1LjEwLjAtMS1hbWQ2NCBlaGNp
X2hjZApbICAgIDMuMjg1Mjk3XSB1c2IgdXNiMjogU2VyaWFsTnVtYmVyOiAwMDAwOjAwOjFkLjAK
WyAgICAzLjI4NTUxMl0gaHViIDItMDoxLjA6IFVTQiBodWIgZm91bmQKWyAgICAzLjI4NTUzOF0g
aHViIDItMDoxLjA6IDIgcG9ydHMgZGV0ZWN0ZWQKWyAgICAzLjI4NjIxOV0geGhjaV9oY2QgMDAw
MDowYzowMC4wOiB4SENJIEhvc3QgQ29udHJvbGxlcgpbICAgIDMuMjg2MjI4XSB4aGNpX2hjZCAw
MDAwOjBjOjAwLjA6IG5ldyBVU0IgYnVzIHJlZ2lzdGVyZWQsIGFzc2lnbmVkIGJ1cyBudW1iZXIg
MwpbICAgIDMuMjg2NTIwXSB4aGNpX2hjZCAwMDAwOjBjOjAwLjA6IGhjYyBwYXJhbXMgMHgwMDI4
NDFlYiBoY2kgdmVyc2lvbiAweDEwMCBxdWlya3MgMHgwMDAwMDAwMDAwMDAwODkwClsgICAgMy4y
ODcyOTJdIHVzYiB1c2IzOiBOZXcgVVNCIGRldmljZSBmb3VuZCwgaWRWZW5kb3I9MWQ2YiwgaWRQ
cm9kdWN0PTAwMDIsIGJjZERldmljZT0gNS4xMApbICAgIDMuMjg3Mjk0XSB1c2IgdXNiMzogTmV3
IFVTQiBkZXZpY2Ugc3RyaW5nczogTWZyPTMsIFByb2R1Y3Q9MiwgU2VyaWFsTnVtYmVyPTEKWyAg
ICAzLjI4NzI5Nl0gdXNiIHVzYjM6IFByb2R1Y3Q6IHhIQ0kgSG9zdCBDb250cm9sbGVyClsgICAg
My4yODcyOThdIHVzYiB1c2IzOiBNYW51ZmFjdHVyZXI6IExpbnV4IDUuMTAuMC0xLWFtZDY0IHho
Y2ktaGNkClsgICAgMy4yODczMDBdIHVzYiB1c2IzOiBTZXJpYWxOdW1iZXI6IDAwMDA6MGM6MDAu
MApbICAgIDMuMjg3Njk1XSBodWIgMy0wOjEuMDogVVNCIGh1YiBmb3VuZApbICAgIDMuMjg3NzI2
XSBodWIgMy0wOjEuMDogMSBwb3J0IGRldGVjdGVkClsgICAgMy4yODgwMTBdIHhoY2lfaGNkIDAw
MDA6MGM6MDAuMDogeEhDSSBIb3N0IENvbnRyb2xsZXIKWyAgICAzLjI4ODAxNl0geGhjaV9oY2Qg
MDAwMDowYzowMC4wOiBuZXcgVVNCIGJ1cyByZWdpc3RlcmVkLCBhc3NpZ25lZCBidXMgbnVtYmVy
IDQKWyAgICAzLjI4ODAyMF0geGhjaV9oY2QgMDAwMDowYzowMC4wOiBIb3N0IHN1cHBvcnRzIFVT
QiAzLjAgU3VwZXJTcGVlZApbICAgIDMuMjg4MjA2XSB1c2IgdXNiNDogTmV3IFVTQiBkZXZpY2Ug
Zm91bmQsIGlkVmVuZG9yPTFkNmIsIGlkUHJvZHVjdD0wMDAzLCBiY2REZXZpY2U9IDUuMTAKWyAg
ICAzLjI4ODIwOV0gdXNiIHVzYjQ6IE5ldyBVU0IgZGV2aWNlIHN0cmluZ3M6IE1mcj0zLCBQcm9k
dWN0PTIsIFNlcmlhbE51bWJlcj0xClsgICAgMy4yODgyMTBdIHVzYiB1c2I0OiBQcm9kdWN0OiB4
SENJIEhvc3QgQ29udHJvbGxlcgpbICAgIDMuMjg4MjEyXSB1c2IgdXNiNDogTWFudWZhY3R1cmVy
OiBMaW51eCA1LjEwLjAtMS1hbWQ2NCB4aGNpLWhjZApbICAgIDMuMjg4MjE0XSB1c2IgdXNiNDog
U2VyaWFsTnVtYmVyOiAwMDAwOjBjOjAwLjAKWyAgICAzLjI4ODU4Ml0gaHViIDQtMDoxLjA6IFVT
QiBodWIgZm91bmQKWyAgICAzLjI4ODYxNV0gaHViIDQtMDoxLjA6IDQgcG9ydHMgZGV0ZWN0ZWQK
WyAgICAzLjI4OTUwMl0gc2NzaSBob3N0MDogYWhjaQpbICAgIDMuMjg5ODY0XSBzY3NpIGhvc3Qx
OiBhaGNpClsgICAgMy4yOTI4NjhdIHI4MTY5IDAwMDA6MGQ6MDAuMCBlbnAxM3MwOiByZW5hbWVk
IGZyb20gZXRoMApbICAgIDMuMjk2MzQxXSBzY3NpIGhvc3QyOiBhaGNpClsgICAgMy4yOTcxNThd
IHNjc2kgaG9zdDM6IGFoY2kKWyAgICAzLjI5NzY4Ml0gc2NzaSBob3N0NDogYWhjaQpbICAgIDMu
Mjk4MTQ3XSBzY3NpIGhvc3Q1OiBhaGNpClsgICAgMy4yOTgyMzZdIGF0YTE6IFNBVEEgbWF4IFVE
TUEvMTMzIGFiYXIgbTIwNDhAMHhmYjMwMDAwMCBwb3J0IDB4ZmIzMDAxMDAgaXJxIDkwClsgICAg
My4yOTgyMzldIGF0YTI6IFNBVEEgbWF4IFVETUEvMTMzIGFiYXIgbTIwNDhAMHhmYjMwMDAwMCBw
b3J0IDB4ZmIzMDAxODAgaXJxIDkwClsgICAgMy4yOTgyNDBdIGF0YTM6IERVTU1ZClsgICAgMy4y
OTgyNDFdIGF0YTQ6IERVTU1ZClsgICAgMy4yOTgyNDNdIGF0YTU6IERVTU1ZClsgICAgMy4yOTgy
NDRdIGF0YTY6IERVTU1ZClsgICAgMy42MDExMzJdIHVzYiAxLTE6IG5ldyBoaWdoLXNwZWVkIFVT
QiBkZXZpY2UgbnVtYmVyIDIgdXNpbmcgZWhjaS1wY2kKWyAgICAzLjYxMjEyOV0gYXRhMTogU0FU
QSBsaW5rIHVwIDYuMCBHYnBzIChTU3RhdHVzIDEzMyBTQ29udHJvbCAzMDApClsgICAgMy42MTIy
NDldIGF0YTI6IFNBVEEgbGluayB1cCAzLjAgR2JwcyAoU1N0YXR1cyAxMjMgU0NvbnRyb2wgMzAw
KQpbICAgIDMuNjEyNDI3XSBhdGExLjAwOiBBVEEtMTE6IEtJTkdTVE9OIFNVVjQwMFMzNzEyMEcs
IDBDM0ZENlNELCBtYXggVURNQS8xMzMKWyAgICAzLjYxMjQyOV0gYXRhMS4wMDogMjM0NDQxNjQ4
IHNlY3RvcnMsIG11bHRpIDE2OiBMQkE0OCBOQ1EgKGRlcHRoIDMyKSwgQUEKWyAgICAzLjYxMjk2
M10gYXRhMS4wMDogY29uZmlndXJlZCBmb3IgVURNQS8xMzMKWyAgICAzLjYxMzEyMl0gc2NzaSAw
OjA6MDowOiBEaXJlY3QtQWNjZXNzICAgICBBVEEgICAgICBLSU5HU1RPTiBTVVY0MDBTIEQ2U0Qg
UFE6IDAgQU5TSTogNQpbICAgIDMuNjIxMTM1XSB1c2IgMi0xOiBuZXcgaGlnaC1zcGVlZCBVU0Ig
ZGV2aWNlIG51bWJlciAyIHVzaW5nIGVoY2ktcGNpClsgICAgMy42MjExMzldIHVzYiAzLTE6IG5l
dyBoaWdoLXNwZWVkIFVTQiBkZXZpY2UgbnVtYmVyIDIgdXNpbmcgeGhjaV9oY2QKWyAgICAzLjYy
ODg1Nl0gYXRhMi4wMDogQVRBLTg6IEtJTkdTVE9OIFNWMzAwUzM3QTEyMEcsIDUyNUFCQkYwLCBt
YXggVURNQS8xMzMKWyAgICAzLjYyODg1OV0gYXRhMi4wMDogMjM0NDQxNjQ4IHNlY3RvcnMsIG11
bHRpIDE2OiBMQkE0OCBOQ1EgKGRlcHRoIDMyKSwgQUEKWyAgICAzLjY1MDEyNV0gYXRhMi4wMDog
Y29uZmlndXJlZCBmb3IgVURNQS8xMzMKWyAgICAzLjY1MDI3MF0gc2NzaSAxOjA6MDowOiBEaXJl
Y3QtQWNjZXNzICAgICBBVEEgICAgICBLSU5HU1RPTiBTVjMwMFMzIEJCRjAgUFE6IDAgQU5TSTog
NQpbICAgIDMuNjYxMTQ3XSBzZCAwOjA6MDowOiBbc2RhXSAyMzQ0NDE2NDggNTEyLWJ5dGUgbG9n
aWNhbCBibG9ja3M6ICgxMjAgR0IvMTEyIEdpQikKWyAgICAzLjY2MTE1MF0gc2QgMDowOjA6MDog
W3NkYV0gNDA5Ni1ieXRlIHBoeXNpY2FsIGJsb2NrcwpbICAgIDMuNjYxMTc5XSBzZCAwOjA6MDow
OiBbc2RhXSBXcml0ZSBQcm90ZWN0IGlzIG9mZgpbICAgIDMuNjYxMTgyXSBzZCAwOjA6MDowOiBb
c2RhXSBNb2RlIFNlbnNlOiAwMCAzYSAwMCAwMApbICAgIDMuNjYxMTk5XSBzZCAxOjA6MDowOiBb
c2RiXSAyMzQ0NDE2NDggNTEyLWJ5dGUgbG9naWNhbCBibG9ja3M6ICgxMjAgR0IvMTEyIEdpQikK
WyAgICAzLjY2MTIyNl0gc2QgMTowOjA6MDogW3NkYl0gV3JpdGUgUHJvdGVjdCBpcyBvZmYKWyAg
ICAzLjY2MTIyOF0gc2QgMTowOjA6MDogW3NkYl0gTW9kZSBTZW5zZTogMDAgM2EgMDAgMDAKWyAg
ICAzLjY2MTIzOF0gc2QgMDowOjA6MDogW3NkYV0gV3JpdGUgY2FjaGU6IGVuYWJsZWQsIHJlYWQg
Y2FjaGU6IGVuYWJsZWQsIGRvZXNuJ3Qgc3VwcG9ydCBEUE8gb3IgRlVBClsgICAgMy42NjEyNzdd
IHNkIDE6MDowOjA6IFtzZGJdIFdyaXRlIGNhY2hlOiBlbmFibGVkLCByZWFkIGNhY2hlOiBlbmFi
bGVkLCBkb2Vzbid0IHN1cHBvcnQgRFBPIG9yIEZVQQpbICAgIDMuNjk1NTU3XSAgc2RhOiBzZGEx
IHNkYTIKWyAgICAzLjY5NjE0OV0gc2QgMDowOjA6MDogW3NkYV0gQXR0YWNoZWQgU0NTSSBkaXNr
ClsgICAgMy42OTg1MTBdIHNkIDE6MDowOjA6IFtzZGJdIEF0dGFjaGVkIFNDU0kgZGlzawpbICAg
IDMuNzYxMTgyXSBkZXZpY2UtbWFwcGVyOiB1ZXZlbnQ6IHZlcnNpb24gMS4wLjMKWyAgICAzLjc2
MTMxMF0gZGV2aWNlLW1hcHBlcjogaW9jdGw6IDQuNDMuMC1pb2N0bCAoMjAyMC0xMC0wMSkgaW5p
dGlhbGlzZWQ6IGRtLWRldmVsQHJlZGhhdC5jb20KWyAgICAzLjc2OTQzMF0gdXNiIDEtMTogTmV3
IFVTQiBkZXZpY2UgZm91bmQsIGlkVmVuZG9yPTgwODcsIGlkUHJvZHVjdD0wMDI0LCBiY2REZXZp
Y2U9IDAuMDAKWyAgICAzLjc2OTQzM10gdXNiIDEtMTogTmV3IFVTQiBkZXZpY2Ugc3RyaW5nczog
TWZyPTAsIFByb2R1Y3Q9MCwgU2VyaWFsTnVtYmVyPTAKWyAgICAzLjc2OTg5Ml0gaHViIDEtMTox
LjA6IFVTQiBodWIgZm91bmQKWyAgICAzLjc3MDA2Nl0gaHViIDEtMToxLjA6IDYgcG9ydHMgZGV0
ZWN0ZWQKWyAgICAzLjc3MDgzM10gdXNiIDMtMTogTmV3IFVTQiBkZXZpY2UgZm91bmQsIGlkVmVu
ZG9yPTIxMDksIGlkUHJvZHVjdD0zNDMxLCBiY2REZXZpY2U9IDQuMjAKWyAgICAzLjc3MDgzNl0g
dXNiIDMtMTogTmV3IFVTQiBkZXZpY2Ugc3RyaW5nczogTWZyPTAsIFByb2R1Y3Q9MSwgU2VyaWFs
TnVtYmVyPTAKWyAgICAzLjc3MDgzOF0gdXNiIDMtMTogUHJvZHVjdDogVVNCMi4wIEh1YgpbICAg
IDMuNzcxOTk3XSBodWIgMy0xOjEuMDogVVNCIGh1YiBmb3VuZApbICAgIDMuNzcyMzA3XSBodWIg
My0xOjEuMDogNCBwb3J0cyBkZXRlY3RlZApbICAgIDMuNzgxNDM4XSB1c2IgMi0xOiBOZXcgVVNC
IGRldmljZSBmb3VuZCwgaWRWZW5kb3I9ODA4NywgaWRQcm9kdWN0PTAwMjQsIGJjZERldmljZT0g
MC4wMApbICAgIDMuNzgxNDQxXSB1c2IgMi0xOiBOZXcgVVNCIGRldmljZSBzdHJpbmdzOiBNZnI9
MCwgUHJvZHVjdD0wLCBTZXJpYWxOdW1iZXI9MApbICAgIDMuNzgxNzQ1XSBodWIgMi0xOjEuMDog
VVNCIGh1YiBmb3VuZApbICAgIDMuNzgxODMyXSBodWIgMi0xOjEuMDogOCBwb3J0cyBkZXRlY3Rl
ZApbICAgIDMuOTE1MjUzXSBQTTogSW1hZ2Ugbm90IGZvdW5kIChjb2RlIC0yMikKWyAgICA0LjA2
NTEwNF0gdXNiIDEtMS4yOiBuZXcgbG93LXNwZWVkIFVTQiBkZXZpY2UgbnVtYmVyIDMgdXNpbmcg
ZWhjaS1wY2kKWyAgICA0LjA3MzA5M10gdXNiIDItMS40OiBuZXcgaGlnaC1zcGVlZCBVU0IgZGV2
aWNlIG51bWJlciAzIHVzaW5nIGVoY2ktcGNpClsgICAgNC4wODMwMzNdIEVYVDQtZnMgKHNkYTIp
OiBtb3VudGVkIGZpbGVzeXN0ZW0gd2l0aCBvcmRlcmVkIGRhdGEgbW9kZS4gT3B0czogKG51bGwp
ClsgICAgNC4xNzM5ODNdIE5vdCBhY3RpdmF0aW5nIE1hbmRhdG9yeSBBY2Nlc3MgQ29udHJvbCBh
cyAvc2Jpbi90b21veW8taW5pdCBkb2VzIG5vdCBleGlzdC4KWyAgICA0LjE4NzMwNF0gdXNiIDEt
MS4yOiBOZXcgVVNCIGRldmljZSBmb3VuZCwgaWRWZW5kb3I9MDk5YSwgaWRQcm9kdWN0PTYxMGMs
IGJjZERldmljZT0gMC4wMQpbICAgIDQuMTg3MzA3XSB1c2IgMS0xLjI6IE5ldyBVU0IgZGV2aWNl
IHN0cmluZ3M6IE1mcj0xLCBQcm9kdWN0PTIsIFNlcmlhbE51bWJlcj0wClsgICAgNC4xODczMDld
IHVzYiAxLTEuMjogUHJvZHVjdDogVVNCIE11bHRpbWVkaWEgS2V5Ym9hcmQgClsgICAgNC4xODcz
MTBdIHVzYiAxLTEuMjogTWFudWZhY3R1cmVyOiAgClsgICAgNC4xOTI1MzhdIHVzYiAyLTEuNDog
TmV3IFVTQiBkZXZpY2UgZm91bmQsIGlkVmVuZG9yPTE0OGYsIGlkUHJvZHVjdD03NjAxLCBiY2RE
ZXZpY2U9IDAuMDAKWyAgICA0LjE5MjU0MF0gdXNiIDItMS40OiBOZXcgVVNCIGRldmljZSBzdHJp
bmdzOiBNZnI9MSwgUHJvZHVjdD0yLCBTZXJpYWxOdW1iZXI9MwpbICAgIDQuMTkyNTQxXSB1c2Ig
Mi0xLjQ6IFByb2R1Y3Q6IDgwMi4xMSBuIFdMQU4KWyAgICA0LjE5MjU0Ml0gdXNiIDItMS40OiBN
YW51ZmFjdHVyZXI6IE1lZGlhVGVrClsgICAgNC4xOTI1NDNdIHVzYiAyLTEuNDogU2VyaWFsTnVt
YmVyOiAxLjAKWyAgICA0LjI2NDQzMV0gc3lzdGVtZFsxXTogSW5zZXJ0ZWQgbW9kdWxlICdhdXRv
ZnM0JwpbICAgIDQuMzEwMjQyXSBzeXN0ZW1kWzFdOiBzeXN0ZW1kIDI0Ny4yLTUgcnVubmluZyBp
biBzeXN0ZW0gbW9kZS4gKCtQQU0gK0FVRElUICtTRUxJTlVYICtJTUEgK0FQUEFSTU9SICtTTUFD
SyArU1lTVklOSVQgK1VUTVAgK0xJQkNSWVBUU0VUVVAgK0dDUllQVCArR05VVExTICtBQ0wgK1ha
ICtMWjQgK1pTVEQgK1NFQ0NPTVAgK0JMS0lEICtFTEZVVElMUyArS01PRCArSUROMiAtSUROICtQ
Q1JFMiBkZWZhdWx0LWhpZXJhcmNoeT11bmlmaWVkKQpbICAgIDQuMzEwNTIyXSBzeXN0ZW1kWzFd
OiBEZXRlY3RlZCBhcmNoaXRlY3R1cmUgeDg2LTY0LgpbICAgIDQuMzEzMTI0XSBzeXN0ZW1kWzFd
OiBTZXQgaG9zdG5hbWUgdG8gPGRlYmlhbj4uClsgICAgNC4zODcwNjNdIHN5c3RlbWQtc3lzdi1n
ZW5lcmF0b3JbMjI1XTogU3lzViBzZXJ2aWNlICcvZXRjL2luaXQuZC94ZW4nIGxhY2tzIGEgbmF0
aXZlIHN5c3RlbWQgdW5pdCBmaWxlLiBBdXRvbWF0aWNhbGx5IGdlbmVyYXRpbmcgYSB1bml0IGZp
bGUgZm9yIGNvbXBhdGliaWxpdHkuIFBsZWFzZSB1cGRhdGUgcGFja2FnZSB0byBpbmNsdWRlIGEg
bmF0aXZlIHN5c3RlbWQgdW5pdCBmaWxlLCBpbiBvcmRlciB0byBtYWtlIGl0IG1vcmUgc2FmZSBh
bmQgcm9idXN0LgpbICAgIDQuNDAyMjc2XSBzeXN0ZW1kLXN5c3YtZ2VuZXJhdG9yWzIyNV06IFN5
c1Ygc2VydmljZSAnL2V0Yy9pbml0LmQvZXhpbTQnIGxhY2tzIGEgbmF0aXZlIHN5c3RlbWQgdW5p
dCBmaWxlLiBBdXRvbWF0aWNhbGx5IGdlbmVyYXRpbmcgYSB1bml0IGZpbGUgZm9yIGNvbXBhdGli
aWxpdHkuIFBsZWFzZSB1cGRhdGUgcGFja2FnZSB0byBpbmNsdWRlIGEgbmF0aXZlIHN5c3RlbWQg
dW5pdCBmaWxlLCBpbiBvcmRlciB0byBtYWtlIGl0IG1vcmUgc2FmZSBhbmQgcm9idXN0LgpbICAg
IDQuNDAyNDAzXSBzeXN0ZW1kLXN5c3YtZ2VuZXJhdG9yWzIyNV06IFN5c1Ygc2VydmljZSAnL2V0
Yy9pbml0LmQveGVuY29tbW9ucycgbGFja3MgYSBuYXRpdmUgc3lzdGVtZCB1bml0IGZpbGUuIEF1
dG9tYXRpY2FsbHkgZ2VuZXJhdGluZyBhIHVuaXQgZmlsZSBmb3IgY29tcGF0aWJpbGl0eS4gUGxl
YXNlIHVwZGF0ZSBwYWNrYWdlIHRvIGluY2x1ZGUgYSBuYXRpdmUgc3lzdGVtZCB1bml0IGZpbGUs
IGluIG9yZGVyIHRvIG1ha2UgaXQgbW9yZSBzYWZlIGFuZCByb2J1c3QuClsgICAgNC40MDQ5NzZd
IHN5c3RlbWQtc3lzdi1nZW5lcmF0b3JbMjI1XTogU3lzViBzZXJ2aWNlICcvZXRjL2luaXQuZC9p
c2MtZGhjcC1zZXJ2ZXInIGxhY2tzIGEgbmF0aXZlIHN5c3RlbWQgdW5pdCBmaWxlLiBBdXRvbWF0
aWNhbGx5IGdlbmVyYXRpbmcgYSB1bml0IGZpbGUgZm9yIGNvbXBhdGliaWxpdHkuIFBsZWFzZSB1
cGRhdGUgcGFja2FnZSB0byBpbmNsdWRlIGEgbmF0aXZlIHN5c3RlbWQgdW5pdCBmaWxlLCBpbiBv
cmRlciB0byBtYWtlIGl0IG1vcmUgc2FmZSBhbmQgcm9idXN0LgpbICAgIDQuNDY3MDQ5XSBzeXN0
ZW1kWzFdOiAvbGliL3N5c3RlbWQvc3lzdGVtL3ZpcnRsb2dkLnNvY2tldDo2OiBMaXN0ZW5TdHJl
YW09IHJlZmVyZW5jZXMgYSBwYXRoIGJlbG93IGxlZ2FjeSBkaXJlY3RvcnkgL3Zhci9ydW4vLCB1
cGRhdGluZyAvdmFyL3J1bi9saWJ2aXJ0L3ZpcnRsb2dkLXNvY2sg4oaSIC9ydW4vbGlidmlydC92
aXJ0bG9nZC1zb2NrOyBwbGVhc2UgdXBkYXRlIHRoZSB1bml0IGZpbGUgYWNjb3JkaW5nbHkuClsg
ICAgNC40NzU5ODRdIHN5c3RlbWRbMV06IC9saWIvc3lzdGVtZC9zeXN0ZW0vdmlydGxvZ2QtYWRt
aW4uc29ja2V0OjY6IExpc3RlblN0cmVhbT0gcmVmZXJlbmNlcyBhIHBhdGggYmVsb3cgbGVnYWN5
IGRpcmVjdG9yeSAvdmFyL3J1bi8sIHVwZGF0aW5nIC92YXIvcnVuL2xpYnZpcnQvdmlydGxvZ2Qt
YWRtaW4tc29jayDihpIgL3J1bi9saWJ2aXJ0L3ZpcnRsb2dkLWFkbWluLXNvY2s7IHBsZWFzZSB1
cGRhdGUgdGhlIHVuaXQgZmlsZSBhY2NvcmRpbmdseS4KWyAgICA0LjQ3NjU2MV0gc3lzdGVtZFsx
XTogL2xpYi9zeXN0ZW1kL3N5c3RlbS92aXJ0bG9ja2Quc29ja2V0OjY6IExpc3RlblN0cmVhbT0g
cmVmZXJlbmNlcyBhIHBhdGggYmVsb3cgbGVnYWN5IGRpcmVjdG9yeSAvdmFyL3J1bi8sIHVwZGF0
aW5nIC92YXIvcnVuL2xpYnZpcnQvdmlydGxvY2tkLXNvY2sg4oaSIC9ydW4vbGlidmlydC92aXJ0
bG9ja2Qtc29jazsgcGxlYXNlIHVwZGF0ZSB0aGUgdW5pdCBmaWxlIGFjY29yZGluZ2x5LgpbICAg
IDQuNDc3NjA2XSBzeXN0ZW1kWzFdOiAvbGliL3N5c3RlbWQvc3lzdGVtL3ZpcnRsb2NrZC1hZG1p
bi5zb2NrZXQ6NjogTGlzdGVuU3RyZWFtPSByZWZlcmVuY2VzIGEgcGF0aCBiZWxvdyBsZWdhY3kg
ZGlyZWN0b3J5IC92YXIvcnVuLywgdXBkYXRpbmcgL3Zhci9ydW4vbGlidmlydC92aXJ0bG9ja2Qt
YWRtaW4tc29jayDihpIgL3J1bi9saWJ2aXJ0L3ZpcnRsb2NrZC1hZG1pbi1zb2NrOyBwbGVhc2Ug
dXBkYXRlIHRoZSB1bml0IGZpbGUgYWNjb3JkaW5nbHkuClsgICAgNC41MjkwMTZdIHN5c3RlbWRb
MV06IFF1ZXVlZCBzdGFydCBqb2IgZm9yIGRlZmF1bHQgdGFyZ2V0IEdyYXBoaWNhbCBJbnRlcmZh
Y2UuClsgICAgNC41MzA5NTRdIHN5c3RlbWRbMV06IENyZWF0ZWQgc2xpY2UgVmlydHVhbCBNYWNo
aW5lIGFuZCBDb250YWluZXIgU2xpY2UuClsgICAgNC41MzI1NDNdIHN5c3RlbWRbMV06IENyZWF0
ZWQgc2xpY2Ugc3lzdGVtLWdldHR5LnNsaWNlLgpbICAgIDQuNTMzNDk3XSBzeXN0ZW1kWzFdOiBD
cmVhdGVkIHNsaWNlIHN5c3RlbS1tb2Rwcm9iZS5zbGljZS4KWyAgICA0LjUzNDM5MV0gc3lzdGVt
ZFsxXTogQ3JlYXRlZCBzbGljZSBzeXN0ZW0tc2VyaWFsXHgyZGdldHR5LnNsaWNlLgpbICAgIDQu
NTM1MjUxXSBzeXN0ZW1kWzFdOiBDcmVhdGVkIHNsaWNlIFVzZXIgYW5kIFNlc3Npb24gU2xpY2Uu
ClsgICAgNC41MzU4NzJdIHN5c3RlbWRbMV06IFN0YXJ0ZWQgRGlzcGF0Y2ggUGFzc3dvcmQgUmVx
dWVzdHMgdG8gQ29uc29sZSBEaXJlY3RvcnkgV2F0Y2guClsgICAgNC41MzY0NzhdIHN5c3RlbWRb
MV06IFN0YXJ0ZWQgRm9yd2FyZCBQYXNzd29yZCBSZXF1ZXN0cyB0byBXYWxsIERpcmVjdG9yeSBX
YXRjaC4KWyAgICA0LjUzNzI3Ml0gc3lzdGVtZFsxXTogU2V0IHVwIGF1dG9tb3VudCBBcmJpdHJh
cnkgRXhlY3V0YWJsZSBGaWxlIEZvcm1hdHMgRmlsZSBTeXN0ZW0gQXV0b21vdW50IFBvaW50Lgpb
ICAgIDQuNTM3NzgyXSBzeXN0ZW1kWzFdOiBSZWFjaGVkIHRhcmdldCBMb2NhbCBFbmNyeXB0ZWQg
Vm9sdW1lcy4KWyAgICA0LjUzODI4Ml0gc3lzdGVtZFsxXTogUmVhY2hlZCB0YXJnZXQgUGF0aHMu
ClsgICAgNC41Mzg3NTRdIHN5c3RlbWRbMV06IFJlYWNoZWQgdGFyZ2V0IFJlbW90ZSBGaWxlIFN5
c3RlbXMuClsgICAgNC41MzkyMjBdIHN5c3RlbWRbMV06IFJlYWNoZWQgdGFyZ2V0IFNsaWNlcy4K
WyAgICA0LjUzOTcwOV0gc3lzdGVtZFsxXTogUmVhY2hlZCB0YXJnZXQgTGlidmlydCBndWVzdHMg
c2h1dGRvd24uClsgICAgNC41NDAzMDZdIHN5c3RlbWRbMV06IExpc3RlbmluZyBvbiBEZXZpY2Ut
bWFwcGVyIGV2ZW50IGRhZW1vbiBGSUZPcy4KWyAgICA0LjU0MDk0MF0gc3lzdGVtZFsxXTogTGlz
dGVuaW5nIG9uIExWTTIgcG9sbCBkYWVtb24gc29ja2V0LgpbICAgIDQuNTQyNTAzXSBzeXN0ZW1k
WzFdOiBMaXN0ZW5pbmcgb24gU3lzbG9nIFNvY2tldC4KWyAgICA0LjU0MzE0N10gc3lzdGVtZFsx
XTogTGlzdGVuaW5nIG9uIGZzY2sgdG8gZnNja2QgY29tbXVuaWNhdGlvbiBTb2NrZXQuClsgICAg
NC41NDM3MTZdIHN5c3RlbWRbMV06IExpc3RlbmluZyBvbiBpbml0Y3RsIENvbXBhdGliaWxpdHkg
TmFtZWQgUGlwZS4KWyAgICA0LjU0NDU2Ml0gc3lzdGVtZFsxXTogTGlzdGVuaW5nIG9uIEpvdXJu
YWwgQXVkaXQgU29ja2V0LgpbICAgIDQuNTQ1MjY2XSBzeXN0ZW1kWzFdOiBMaXN0ZW5pbmcgb24g
Sm91cm5hbCBTb2NrZXQgKC9kZXYvbG9nKS4KWyAgICA0LjU0NjAwMF0gc3lzdGVtZFsxXTogTGlz
dGVuaW5nIG9uIEpvdXJuYWwgU29ja2V0LgpbICAgIDQuNTQ2NzI2XSBzeXN0ZW1kWzFdOiBMaXN0
ZW5pbmcgb24gdWRldiBDb250cm9sIFNvY2tldC4KWyAgICA0LjU0NzM3Nl0gc3lzdGVtZFsxXTog
TGlzdGVuaW5nIG9uIHVkZXYgS2VybmVsIFNvY2tldC4KWyAgICA0LjU0ODE3OF0gc3lzdGVtZFsx
XTogQ29uZGl0aW9uIGNoZWNrIHJlc3VsdGVkIGluIEh1Z2UgUGFnZXMgRmlsZSBTeXN0ZW0gYmVp
bmcgc2tpcHBlZC4KWyAgICA0LjU1MDk0Nl0gc3lzdGVtZFsxXTogTW91bnRpbmcgUE9TSVggTWVz
c2FnZSBRdWV1ZSBGaWxlIFN5c3RlbS4uLgpbICAgIDQuNTU0ODcwXSBzeXN0ZW1kWzFdOiBNb3Vu
dGluZyBLZXJuZWwgRGVidWcgRmlsZSBTeXN0ZW0uLi4KWyAgICA0LjU1OTI2OV0gc3lzdGVtZFsx
XTogTW91bnRpbmcgS2VybmVsIFRyYWNlIEZpbGUgU3lzdGVtLi4uClsgICAgNC41NjA0NzVdIHN5
c3RlbWRbMV06IEZpbmlzaGVkIEF2YWlsYWJpbGl0eSBvZiBibG9jayBkZXZpY2VzLgpbICAgIDQu
NTY0Mzg2XSBzeXN0ZW1kWzFdOiBTdGFydGluZyBTZXQgdGhlIGNvbnNvbGUga2V5Ym9hcmQgbGF5
b3V0Li4uClsgICAgNC41NjgzODldIHN5c3RlbWRbMV06IFN0YXJ0aW5nIENyZWF0ZSBsaXN0IG9m
IHN0YXRpYyBkZXZpY2Ugbm9kZXMgZm9yIHRoZSBjdXJyZW50IGtlcm5lbC4uLgpbICAgIDQuNTcx
NzY4XSBzeXN0ZW1kWzFdOiBTdGFydGluZyBNb25pdG9yaW5nIG9mIExWTTIgbWlycm9ycywgc25h
cHNob3RzIGV0Yy4gdXNpbmcgZG1ldmVudGQgb3IgcHJvZ3Jlc3MgcG9sbGluZy4uLgpbICAgIDQu
NTc1NTE5XSBzeXN0ZW1kWzFdOiBTdGFydGluZyBMb2FkIEtlcm5lbCBNb2R1bGUgY29uZmlnZnMu
Li4KWyAgICA0LjU3OTMxMl0gc3lzdGVtZFsxXTogU3RhcnRpbmcgTG9hZCBLZXJuZWwgTW9kdWxl
IGRybS4uLgpbICAgIDQuNTgzMTM4XSBzeXN0ZW1kWzFdOiBTdGFydGluZyBMb2FkIEtlcm5lbCBN
b2R1bGUgZnVzZS4uLgpbICAgIDQuNTg0NzExXSBzeXN0ZW1kWzFdOiBDb25kaXRpb24gY2hlY2sg
cmVzdWx0ZWQgaW4gU2V0IFVwIEFkZGl0aW9uYWwgQmluYXJ5IEZvcm1hdHMgYmVpbmcgc2tpcHBl
ZC4KWyAgICA0LjU4NDg1OV0gc3lzdGVtZFsxXTogQ29uZGl0aW9uIGNoZWNrIHJlc3VsdGVkIGlu
IEZpbGUgU3lzdGVtIENoZWNrIG9uIFJvb3QgRGV2aWNlIGJlaW5nIHNraXBwZWQuClsgICAgNC41
OTE0NzVdIHN5c3RlbWRbMV06IFN0YXJ0aW5nIEpvdXJuYWwgU2VydmljZS4uLgpbICAgIDQuNjA4
MTI3XSBzeXN0ZW1kWzFdOiBTdGFydGluZyBMb2FkIEtlcm5lbCBNb2R1bGVzLi4uClsgICAgNC42
MTEzNjldIHN5c3RlbWRbMV06IFN0YXJ0aW5nIFJlbW91bnQgUm9vdCBhbmQgS2VybmVsIEZpbGUg
U3lzdGVtcy4uLgpbICAgIDQuNjE2MDc3XSBzeXN0ZW1kWzFdOiBTdGFydGluZyBDb2xkcGx1ZyBB
bGwgdWRldiBEZXZpY2VzLi4uClsgICAgNC42MTYzMzFdIGZ1c2U6IGluaXQgKEFQSSB2ZXJzaW9u
IDcuMzIpClsgICAgNC42MjU2MjFdIHN5c3RlbWRbMV06IE1vdW50ZWQgUE9TSVggTWVzc2FnZSBR
dWV1ZSBGaWxlIFN5c3RlbS4KWyAgICA0LjYyNjkxNF0gc3lzdGVtZFsxXTogTW91bnRlZCBLZXJu
ZWwgRGVidWcgRmlsZSBTeXN0ZW0uClsgICAgNC42Mjc2ODZdIHN5c3RlbWRbMV06IE1vdW50ZWQg
S2VybmVsIFRyYWNlIEZpbGUgU3lzdGVtLgpbICAgIDQuNjI4OTg1XSBzeXN0ZW1kWzFdOiBGaW5p
c2hlZCBDcmVhdGUgbGlzdCBvZiBzdGF0aWMgZGV2aWNlIG5vZGVzIGZvciB0aGUgY3VycmVudCBr
ZXJuZWwuClsgICAgNC42MzA1MTldIHN5c3RlbWRbMV06IG1vZHByb2JlQGNvbmZpZ2ZzLnNlcnZp
Y2U6IFN1Y2NlZWRlZC4KWyAgICA0LjYzMTExNF0gc3lzdGVtZFsxXTogRmluaXNoZWQgTG9hZCBL
ZXJuZWwgTW9kdWxlIGNvbmZpZ2ZzLgpbICAgIDQuNjMyNDM0XSBzeXN0ZW1kWzFdOiBtb2Rwcm9i
ZUBmdXNlLnNlcnZpY2U6IFN1Y2NlZWRlZC4KWyAgICA0LjYzMzAxN10gc3lzdGVtZFsxXTogRmlu
aXNoZWQgTG9hZCBLZXJuZWwgTW9kdWxlIGZ1c2UuClsgICAgNC42Mzc1MjRdIHN5c3RlbWRbMV06
IE1vdW50aW5nIEZVU0UgQ29udHJvbCBGaWxlIFN5c3RlbS4uLgpbICAgIDQuNjQxNDAyXSBzeXN0
ZW1kWzFdOiBNb3VudGluZyBLZXJuZWwgQ29uZmlndXJhdGlvbiBGaWxlIFN5c3RlbS4uLgpbICAg
IDQuNjQzNDcxXSBzeXN0ZW1kWzFdOiBtb2Rwcm9iZUBkcm0uc2VydmljZTogU3VjY2VlZGVkLgpb
ICAgIDQuNjQ0MTUzXSBzeXN0ZW1kWzFdOiBGaW5pc2hlZCBMb2FkIEtlcm5lbCBNb2R1bGUgZHJt
LgpbICAgIDQuNjUxMjkwXSBzeXN0ZW1kWzFdOiBNb3VudGVkIEZVU0UgQ29udHJvbCBGaWxlIFN5
c3RlbS4KWyAgICA0LjY1NTA2NV0geGVuOnhlbl9ldnRjaG46IEV2ZW50LWNoYW5uZWwgZGV2aWNl
IGluc3RhbGxlZApbICAgIDQuNjU1NjIxXSBFWFQ0LWZzIChzZGEyKTogcmUtbW91bnRlZC4gT3B0
czogZXJyb3JzPXJlbW91bnQtcm8KWyAgICA0LjY1ODgxMF0gc3lzdGVtZFsxXTogRmluaXNoZWQg
UmVtb3VudCBSb290IGFuZCBLZXJuZWwgRmlsZSBTeXN0ZW1zLgpbICAgIDQuNjYwNDg2XSBzeXN0
ZW1kWzFdOiBDb25kaXRpb24gY2hlY2sgcmVzdWx0ZWQgaW4gUmVidWlsZCBIYXJkd2FyZSBEYXRh
YmFzZSBiZWluZyBza2lwcGVkLgpbICAgIDQuNjYwNTc4XSBzeXN0ZW1kWzFdOiBDb25kaXRpb24g
Y2hlY2sgcmVzdWx0ZWQgaW4gUGxhdGZvcm0gUGVyc2lzdGVudCBTdG9yYWdlIEFyY2hpdmFsIGJl
aW5nIHNraXBwZWQuClsgICAgNC42NjQwOTBdIHN5c3RlbWRbMV06IFN0YXJ0aW5nIExvYWQvU2F2
ZSBSYW5kb20gU2VlZC4uLgpbICAgIDQuNjc1OTMzXSBzeXN0ZW1kWzFdOiBTdGFydGluZyBDcmVh
dGUgU3lzdGVtIFVzZXJzLi4uClsgICAgNC42ODEzMDVdIHN5c3RlbWRbMV06IEZpbmlzaGVkIE1v
bml0b3Jpbmcgb2YgTFZNMiBtaXJyb3JzLCBzbmFwc2hvdHMgZXRjLiB1c2luZyBkbWV2ZW50ZCBv
ciBwcm9ncmVzcyBwb2xsaW5nLgpbICAgIDQuNjgyMzA2XSBzeXN0ZW1kWzFdOiBNb3VudGVkIEtl
cm5lbCBDb25maWd1cmF0aW9uIEZpbGUgU3lzdGVtLgpbICAgIDQuNzA0NjE0XSB4ZW5fcGNpYmFj
azogYmFja2VuZCBpcyB2cGNpClsgICAgNC43MjMxNDVdIHN5c3RlbWRbMV06IEZpbmlzaGVkIExv
YWQvU2F2ZSBSYW5kb20gU2VlZC4KWyAgICA0LjcyNDUxMV0gc3lzdGVtZFsxXTogQ29uZGl0aW9u
IGNoZWNrIHJlc3VsdGVkIGluIEZpcnN0IEJvb3QgQ29tcGxldGUgYmVpbmcgc2tpcHBlZC4KWyAg
ICA0LjcyNzk4MV0gc3lzdGVtZFsxXTogRmluaXNoZWQgQ3JlYXRlIFN5c3RlbSBVc2Vycy4KWyAg
ICA0LjczMTMwMF0gc3lzdGVtZFsxXTogU3RhcnRpbmcgQ3JlYXRlIFN0YXRpYyBEZXZpY2UgTm9k
ZXMgaW4gL2Rldi4uLgpbICAgIDQuNzY3MDQ3XSBzeXN0ZW1kWzFdOiBGaW5pc2hlZCBDcmVhdGUg
U3RhdGljIERldmljZSBOb2RlcyBpbiAvZGV2LgpbICAgIDQuNzY4NjMzXSBzeXN0ZW1kWzFdOiBG
aW5pc2hlZCBMb2FkIEtlcm5lbCBNb2R1bGVzLgpbICAgIDQuNzcyMDI2XSBzeXN0ZW1kWzFdOiBT
dGFydGluZyBBcHBseSBLZXJuZWwgVmFyaWFibGVzLi4uClsgICAgNC43NzcxNTRdIHN5c3RlbWRb
MV06IFN0YXJ0aW5nIFJ1bGUtYmFzZWQgTWFuYWdlciBmb3IgRGV2aWNlIEV2ZW50cyBhbmQgRmls
ZXMuLi4KWyAgICA0LjgyMDI1Ml0gc3lzdGVtZFsxXTogRmluaXNoZWQgU2V0IHRoZSBjb25zb2xl
IGtleWJvYXJkIGxheW91dC4KWyAgICA0LjgyMTAyNF0gc3lzdGVtZFsxXTogUmVhY2hlZCB0YXJn
ZXQgTG9jYWwgRmlsZSBTeXN0ZW1zIChQcmUpLgpbICAgIDQuODIxNjU0XSBzeXN0ZW1kWzFdOiBD
b25kaXRpb24gY2hlY2sgcmVzdWx0ZWQgaW4gVmlydHVhbCBNYWNoaW5lIGFuZCBDb250YWluZXIg
U3RvcmFnZSAoQ29tcGF0aWJpbGl0eSkgYmVpbmcgc2tpcHBlZC4KWyAgICA0LjgyMTY5OV0gc3lz
dGVtZFsxXTogUmVhY2hlZCB0YXJnZXQgTG9jYWwgRmlsZSBTeXN0ZW1zLgpbICAgIDQuODIyMzA2
XSBzeXN0ZW1kWzFdOiBSZWFjaGVkIHRhcmdldCBDb250YWluZXJzLgpbICAgIDQuODI1NjAwXSBz
eXN0ZW1kWzFdOiBTdGFydGluZyBMb2FkIEFwcEFybW9yIHByb2ZpbGVzLi4uClsgICAgNC44Mjg2
MDFdIHN5c3RlbWRbMV06IFN0YXJ0aW5nIFNldCBjb25zb2xlIGZvbnQgYW5kIGtleW1hcC4uLgpb
ICAgIDQuODI5MzA2XSBzeXN0ZW1kWzFdOiBDb25kaXRpb24gY2hlY2sgcmVzdWx0ZWQgaW4gTWFy
ayB0aGUgbmVlZCB0byByZWxhYmVsIGFmdGVyIHJlYm9vdCBiZWluZyBza2lwcGVkLgpbICAgIDQu
ODI5NDk5XSBzeXN0ZW1kWzFdOiBDb25kaXRpb24gY2hlY2sgcmVzdWx0ZWQgaW4gU3RvcmUgYSBT
eXN0ZW0gVG9rZW4gaW4gYW4gRUZJIFZhcmlhYmxlIGJlaW5nIHNraXBwZWQuClsgICAgNC44Mjk2
NDhdIHN5c3RlbWRbMV06IENvbmRpdGlvbiBjaGVjayByZXN1bHRlZCBpbiBDb21taXQgYSB0cmFu
c2llbnQgbWFjaGluZS1pZCBvbiBkaXNrIGJlaW5nIHNraXBwZWQuClsgICAgNC44MzEzNDNdIHN5
c3RlbWRbMV06IEZpbmlzaGVkIEFwcGx5IEtlcm5lbCBWYXJpYWJsZXMuClsgICAgNC45Mjg2MTJd
IHN5c3RlbWRbMV06IFN0YXJ0ZWQgUnVsZS1iYXNlZCBNYW5hZ2VyIGZvciBEZXZpY2UgRXZlbnRz
IGFuZCBGaWxlcy4KWyAgICA0Ljk0NjI5OF0gc3lzdGVtZFsxXTogRmluaXNoZWQgU2V0IGNvbnNv
bGUgZm9udCBhbmQga2V5bWFwLgpbICAgIDQuOTY1ODY5XSBzeXN0ZW1kWzFdOiBTdGFydGVkIEpv
dXJuYWwgU2VydmljZS4KWyAgICA1LjA1ODQyMF0gYXVkaXQ6IHR5cGU9MTQwMCBhdWRpdCgxNjEy
MTkyNzk5LjIxOToyKTogYXBwYXJtb3I9IlNUQVRVUyIgb3BlcmF0aW9uPSJwcm9maWxlX2xvYWQi
IHByb2ZpbGU9InVuY29uZmluZWQiIG5hbWU9Im52aWRpYV9tb2Rwcm9iZSIgcGlkPTI4MiBjb21t
PSJhcHBhcm1vcl9wYXJzZXIiClsgICAgNS4wNTg0MjddIGF1ZGl0OiB0eXBlPTE0MDAgYXVkaXQo
MTYxMjE5Mjc5OS4yMTk6Myk6IGFwcGFybW9yPSJTVEFUVVMiIG9wZXJhdGlvbj0icHJvZmlsZV9s
b2FkIiBwcm9maWxlPSJ1bmNvbmZpbmVkIiBuYW1lPSJudmlkaWFfbW9kcHJvYmUvL2ttb2QiIHBp
ZD0yODIgY29tbT0iYXBwYXJtb3JfcGFyc2VyIgpbICAgIDUuMDU4NTY4XSBhdWRpdDogdHlwZT0x
NDAwIGF1ZGl0KDE2MTIxOTI3OTkuMjE5OjQpOiBhcHBhcm1vcj0iU1RBVFVTIiBvcGVyYXRpb249
InByb2ZpbGVfbG9hZCIgcHJvZmlsZT0idW5jb25maW5lZCIgbmFtZT0ibHNiX3JlbGVhc2UiIHBp
ZD0yODQgY29tbT0iYXBwYXJtb3JfcGFyc2VyIgpbICAgIDUuMDU5MTczXSBhdWRpdDogdHlwZT0x
NDAwIGF1ZGl0KDE2MTIxOTI3OTkuMjE5OjUpOiBhcHBhcm1vcj0iU1RBVFVTIiBvcGVyYXRpb249
InByb2ZpbGVfbG9hZCIgcHJvZmlsZT0idW5jb25maW5lZCIgbmFtZT0iL3Vzci9zYmluL2xpYnZp
cnRkIiBwaWQ9MjgxIGNvbW09ImFwcGFybW9yX3BhcnNlciIKWyAgICA1LjA1OTE3OF0gYXVkaXQ6
IHR5cGU9MTQwMCBhdWRpdCgxNjEyMTkyNzk5LjIxOTo2KTogYXBwYXJtb3I9IlNUQVRVUyIgb3Bl
cmF0aW9uPSJwcm9maWxlX2xvYWQiIHByb2ZpbGU9InVuY29uZmluZWQiIG5hbWU9Ii91c3Ivc2Jp
bi9saWJ2aXJ0ZC8vcWVtdV9icmlkZ2VfaGVscGVyIiBwaWQ9MjgxIGNvbW09ImFwcGFybW9yX3Bh
cnNlciIKWyAgICA1LjA2MzM1Nl0gYXVkaXQ6IHR5cGU9MTQwMCBhdWRpdCgxNjEyMTkyNzk5LjIy
Mzo3KTogYXBwYXJtb3I9IlNUQVRVUyIgb3BlcmF0aW9uPSJwcm9maWxlX2xvYWQiIHByb2ZpbGU9
InVuY29uZmluZWQiIG5hbWU9Ii91c3IvYmluL21hbiIgcGlkPTI4MyBjb21tPSJhcHBhcm1vcl9w
YXJzZXIiClsgICAgNS4wNjMzNjFdIGF1ZGl0OiB0eXBlPTE0MDAgYXVkaXQoMTYxMjE5Mjc5OS4y
MjM6OCk6IGFwcGFybW9yPSJTVEFUVVMiIG9wZXJhdGlvbj0icHJvZmlsZV9sb2FkIiBwcm9maWxl
PSJ1bmNvbmZpbmVkIiBuYW1lPSJtYW5fZmlsdGVyIiBwaWQ9MjgzIGNvbW09ImFwcGFybW9yX3Bh
cnNlciIKWyAgICA1LjA2MzM2NV0gYXVkaXQ6IHR5cGU9MTQwMCBhdWRpdCgxNjEyMTkyNzk5LjIy
Mzo5KTogYXBwYXJtb3I9IlNUQVRVUyIgb3BlcmF0aW9uPSJwcm9maWxlX2xvYWQiIHByb2ZpbGU9
InVuY29uZmluZWQiIG5hbWU9Im1hbl9ncm9mZiIgcGlkPTI4MyBjb21tPSJhcHBhcm1vcl9wYXJz
ZXIiClsgICAgNS4wNzU4MzJdIGF1ZGl0OiB0eXBlPTE0MDAgYXVkaXQoMTYxMjE5Mjc5OS4yMzU6
MTApOiBhcHBhcm1vcj0iU1RBVFVTIiBvcGVyYXRpb249InByb2ZpbGVfbG9hZCIgcHJvZmlsZT0i
dW5jb25maW5lZCIgbmFtZT0idmlydC1hYS1oZWxwZXIiIHBpZD0yODUgY29tbT0iYXBwYXJtb3Jf
cGFyc2VyIgpbICAgIDUuMDg2MjM4XSBpbnB1dDogUG93ZXIgQnV0dG9uIGFzIC9kZXZpY2VzL0xO
WFNZU1RNOjAwL0xOWFNZQlVTOjAwL1BOUDBDMEM6MDAvaW5wdXQvaW5wdXQzClsgICAgNS4xMDEx
NTRdIEFDUEk6IFBvd2VyIEJ1dHRvbiBbUFdSQl0KWyAgICA1LjEwMTI4MV0gaW5wdXQ6IFBvd2Vy
IEJ1dHRvbiBhcyAvZGV2aWNlcy9MTlhTWVNUTTowMC9MTlhQV1JCTjowMC9pbnB1dC9pbnB1dDQK
WyAgICA1LjEwMTM2Ml0gQUNQSTogUG93ZXIgQnV0dG9uIFtQV1JGXQpbICAgIDYuNjMyMzU2XSBo
aWQ6IHJhdyBISUQgZXZlbnRzIGRyaXZlciAoQykgSmlyaSBLb3NpbmEKWyAgICA2LjYzNzg2M10g
aVRDT192ZW5kb3Jfc3VwcG9ydDogdmVuZG9yLXN1cHBvcnQ9MApbICAgIDYuNjQ0NDU3XSBpVENP
X3dkdDogSW50ZWwgVENPIFdhdGNoRG9nIFRpbWVyIERyaXZlciB2MS4xMQpbICAgIDYuNjQ0NTA2
XSBpVENPX3dkdDogRm91bmQgYSBQYW50aGVyIFBvaW50IFRDTyBkZXZpY2UgKFZlcnNpb249Miwg
VENPQkFTRT0weDA0NjApClsgICAgNi42NDQ3NTVdIGlUQ09fd2R0OiBpbml0aWFsaXplZC4gaGVh
cnRiZWF0PTMwIHNlYyAobm93YXlvdXQ9MCkKWyAgICA2LjY1MDA1N10gdXNiY29yZTogcmVnaXN0
ZXJlZCBuZXcgaW50ZXJmYWNlIGRyaXZlciB1c2JoaWQKWyAgICA2LjY1MDA2MF0gdXNiaGlkOiBV
U0IgSElEIGNvcmUgZHJpdmVyClsgICAgNi42NjkwNTBdIHNkIDA6MDowOjA6IEF0dGFjaGVkIHNj
c2kgZ2VuZXJpYyBzZzAgdHlwZSAwClsgICAgNi42ODYyNTRdIHNkIDE6MDowOjA6IEF0dGFjaGVk
IHNjc2kgZ2VuZXJpYyBzZzEgdHlwZSAwClsgICAgNi43MjgwNTldIGNmZzgwMjExOiBMb2FkaW5n
IGNvbXBpbGVkLWluIFguNTA5IGNlcnRpZmljYXRlcyBmb3IgcmVndWxhdG9yeSBkYXRhYmFzZQpb
ICAgIDYuNzI4Mzk1XSBjZmc4MDIxMTogTG9hZGVkIFguNTA5IGNlcnQgJ2JlbmhAZGViaWFuLm9y
ZzogNTc3ZTAyMWNiOTgwZTBlODIwODIxYmE3YjU0YjQ5NjFiOGI0ZmFkZicKWyAgICA2LjcyODcz
N10gY2ZnODAyMTE6IExvYWRlZCBYLjUwOSBjZXJ0ICdyb21haW4ucGVyaWVyQGdtYWlsLmNvbTog
M2FiYmM2ZWMxNDZlMDlkMWI2MDE2YWI5ZDZjZjcxZGQyMzNmMDMyOCcKWyAgICA2LjcyOTIzMl0g
Y2ZnODAyMTE6IExvYWRlZCBYLjUwOSBjZXJ0ICdzZm9yc2hlZTogMDBiMjhkZGY0N2FlZjljZWE3
JwpbICAgIDYuNzMwMzE1XSBwbGF0Zm9ybSByZWd1bGF0b3J5LjA6IGZpcm13YXJlOiBmYWlsZWQg
dG8gbG9hZCByZWd1bGF0b3J5LmRiICgtMikKWyAgICA2LjczMDMxN10gZmlybXdhcmVfY2xhc3M6
IFNlZSBodHRwczovL3dpa2kuZGViaWFuLm9yZy9GaXJtd2FyZSBmb3IgaW5mb3JtYXRpb24gYWJv
dXQgbWlzc2luZyBmaXJtd2FyZQpbICAgIDYuNzMwMzIwXSBwbGF0Zm9ybSByZWd1bGF0b3J5LjA6
IERpcmVjdCBmaXJtd2FyZSBsb2FkIGZvciByZWd1bGF0b3J5LmRiIGZhaWxlZCB3aXRoIGVycm9y
IC0yClsgICAgNi43MzAzMjNdIGNmZzgwMjExOiBmYWlsZWQgdG8gbG9hZCByZWd1bGF0b3J5LmRi
ClsgICAgNi44MzQwOTJdIGlucHV0OiBQQyBTcGVha2VyIGFzIC9kZXZpY2VzL3BsYXRmb3JtL3Bj
c3Brci9pbnB1dC9pbnB1dDUKWyAgICA2Ljk1OTEyMV0gdXNiIDItMS40OiByZXNldCBoaWdoLXNw
ZWVkIFVTQiBkZXZpY2UgbnVtYmVyIDMgdXNpbmcgZWhjaS1wY2kKWyAgICA3LjA2ODA5OF0gbXQ3
NjAxdSAyLTEuNDoxLjA6IEFTSUMgcmV2aXNpb246IDc2MDEwMDAxIE1BQyByZXZpc2lvbjogNzYw
MTA1MDAKWyAgICA3LjA2OTk0MF0gbXQ3NjAxdSAyLTEuNDoxLjA6IGZpcm13YXJlOiBkaXJlY3Qt
bG9hZGluZyBmaXJtd2FyZSBtdDc2MDF1LmJpbgpbICAgIDcuMDY5OTQ4XSBtdDc2MDF1IDItMS40
OjEuMDogRmlybXdhcmUgVmVyc2lvbjogMC4xLjAwIEJ1aWxkOiA3NjQwIEJ1aWxkIHRpbWU6IDIw
MTMwMjA1MjE0Nl9fX18KWyAgICA3LjA4NjM4Nl0gaW5wdXQ6ICAgVVNCIE11bHRpbWVkaWEgS2V5
Ym9hcmQgIGFzIC9kZXZpY2VzL3BjaTAwMDA6MDAvMDAwMDowMDoxYS4wL3VzYjEvMS0xLzEtMS4y
LzEtMS4yOjEuMC8wMDAzOjA5OUE6NjEwQy4wMDAxL2lucHV0L2lucHV0NgpbICAgIDcuMDg5OTA0
XSBjcnlwdGQ6IG1heF9jcHVfcWxlbiBzZXQgdG8gMTAwMApbICAgIDcuMDk0NTQ0XSBBZGRpbmcg
MzkwNTUzMmsgc3dhcCBvbiAvZGV2L3NkYTEuICBQcmlvcml0eTotMiBleHRlbnRzOjEgYWNyb3Nz
OjM5MDU1MzJrIFNTRlMKWyAgICA3LjE0ODQ2OF0gQVZYIHZlcnNpb24gb2YgZ2NtX2VuYy9kZWMg
ZW5nYWdlZC4KWyAgICA3LjE0ODQ3MF0gQUVTIENUUiBtb2RlIGJ5OCBvcHRpbWl6YXRpb24gZW5h
YmxlZApbICAgIDcuMTQ5NzEwXSBoaWQtZ2VuZXJpYyAwMDAzOjA5OUE6NjEwQy4wMDAxOiBpbnB1
dCxoaWRyYXcwOiBVU0IgSElEIHYxLjAwIEtleWJvYXJkIFsgIFVTQiBNdWx0aW1lZGlhIEtleWJv
YXJkIF0gb24gdXNiLTAwMDA6MDA6MWEuMC0xLjIvaW5wdXQwClsgICAgNy4xNTAwNzJdIGlucHV0
OiAgIFVTQiBNdWx0aW1lZGlhIEtleWJvYXJkICBDb25zdW1lciBDb250cm9sIGFzIC9kZXZpY2Vz
L3BjaTAwMDA6MDAvMDAwMDowMDoxYS4wL3VzYjEvMS0xLzEtMS4yLzEtMS4yOjEuMS8wMDAzOjA5
OUE6NjEwQy4wMDAyL2lucHV0L2lucHV0NwpbICAgIDcuMjEzMjY0XSBpbnB1dDogICBVU0IgTXVs
dGltZWRpYSBLZXlib2FyZCAgU3lzdGVtIENvbnRyb2wgYXMgL2RldmljZXMvcGNpMDAwMDowMC8w
MDAwOjAwOjFhLjAvdXNiMS8xLTEvMS0xLjIvMS0xLjI6MS4xLzAwMDM6MDk5QTo2MTBDLjAwMDIv
aW5wdXQvaW5wdXQ4ClsgICAgNy4yMTM3NDJdIGhpZC1nZW5lcmljIDAwMDM6MDk5QTo2MTBDLjAw
MDI6IGlucHV0LGhpZHJhdzE6IFVTQiBISUQgdjEuMDAgRGV2aWNlIFsgIFVTQiBNdWx0aW1lZGlh
IEtleWJvYXJkIF0gb24gdXNiLTAwMDA6MDA6MWEuMC0xLjIvaW5wdXQxClsgICAgNy40NzU3MTNd
IG10NzYwMXUgMi0xLjQ6MS4wOiBFRVBST00gdmVyOjBkIGZhZTowMApbICAgIDcuNzA0NTA2XSBp
ZWVlODAyMTEgcGh5MDogU2VsZWN0ZWQgcmF0ZSBjb250cm9sIGFsZ29yaXRobSAnbWluc3RyZWxf
aHQnClsgICAgNy43MDYxOTJdIHVzYmNvcmU6IHJlZ2lzdGVyZWQgbmV3IGludGVyZmFjZSBkcml2
ZXIgbXQ3NjAxdQpbICAgIDcuNzE4MzI1XSBtdDc2MDF1IDItMS40OjEuMCB3bHgyMGUwMTYwMDY0
YmQ6IHJlbmFtZWQgZnJvbSB3bGFuMApbICAgMTAuNDU5MTQyXSB3bHgyMGUwMTYwMDY0YmQ6IGF1
dGhlbnRpY2F0ZSB3aXRoIDY4OmZmOjdiOjQ3Ojg2OjE3ClsgICAxMC40OTM4NTVdIHdseDIwZTAx
NjAwNjRiZDogc2VuZCBhdXRoIHRvIDY4OmZmOjdiOjQ3Ojg2OjE3ICh0cnkgMS8zKQpbICAgMTAu
NDk3NjEzXSB3bHgyMGUwMTYwMDY0YmQ6IGF1dGhlbnRpY2F0ZWQKWyAgIDEwLjUwMTExMF0gd2x4
MjBlMDE2MDA2NGJkOiBhc3NvY2lhdGUgd2l0aCA2ODpmZjo3Yjo0Nzo4NjoxNyAodHJ5IDEvMykK
WyAgIDEwLjUwODA1OF0gd2x4MjBlMDE2MDA2NGJkOiBSWCBBc3NvY1Jlc3AgZnJvbSA2ODpmZjo3
Yjo0Nzo4NjoxNyAoY2FwYWI9MHg0MzEgc3RhdHVzPTAgYWlkPTIpClsgICAxMC41NDU4NDddIHds
eDIwZTAxNjAwNjRiZDogYXNzb2NpYXRlZApbICAgMTAuNTk5MjMzXSBJUHY2OiBBRERSQ09ORihO
RVRERVZfQ0hBTkdFKTogd2x4MjBlMDE2MDA2NGJkOiBsaW5rIGJlY29tZXMgcmVhZHkKWyAgIDE0
LjE1NjQxOF0gYnJpZGdlOiBmaWx0ZXJpbmcgdmlhIGFycC9pcC9pcDZ0YWJsZXMgaXMgbm8gbG9u
Z2VyIGF2YWlsYWJsZSBieSBkZWZhdWx0LiBVcGRhdGUgeW91ciBzY3JpcHRzIHRvIGxvYWQgYnJf
bmV0ZmlsdGVyIGlmIHlvdSBuZWVkIHRoaXMuClsgICAxNC4xOTA2MDldIGJyLWxhbjogcG9ydCAx
KGVucDEzczApIGVudGVyZWQgYmxvY2tpbmcgc3RhdGUKWyAgIDE0LjE5MDYxMV0gYnItbGFuOiBw
b3J0IDEoZW5wMTNzMCkgZW50ZXJlZCBkaXNhYmxlZCBzdGF0ZQpbICAgMTQuMTkwNjk3XSBkZXZp
Y2UgZW5wMTNzMCBlbnRlcmVkIHByb21pc2N1b3VzIG1vZGUKWyAgIDE0LjE5NTkzM10gcjgxNjkg
MDAwMDowZDowMC4wOiBmaXJtd2FyZTogZGlyZWN0LWxvYWRpbmcgZmlybXdhcmUgcnRsX25pYy9y
dGw4MTY4aC0yLmZ3ClsgICAxNC4yMjExNDRdIEdlbmVyaWMgRkUtR0UgUmVhbHRlayBQSFkgcjgx
NjktZDAwOjAwOiBhdHRhY2hlZCBQSFkgZHJpdmVyIFtHZW5lcmljIEZFLUdFIFJlYWx0ZWsgUEhZ
XSAobWlpX2J1czpwaHlfYWRkcj1yODE2OS1kMDA6MDAsIGlycT1JR05PUkUpClsgICAxNC40MDEy
NTldIHI4MTY5IDAwMDA6MGQ6MDAuMCBlbnAxM3MwOiBMaW5rIGlzIERvd24KWyAgIDE0LjQwNTI3
Ml0gYnItbGFuOiBwb3J0IDEoZW5wMTNzMCkgZW50ZXJlZCBibG9ja2luZyBzdGF0ZQpbICAgMTQu
NDA1Mjc0XSBici1sYW46IHBvcnQgMShlbnAxM3MwKSBlbnRlcmVkIGZvcndhcmRpbmcgc3RhdGUK
WyAgIDE1LjE4OTE2NV0gYnItbGFuOiBwb3J0IDEoZW5wMTNzMCkgZW50ZXJlZCBkaXNhYmxlZCBz
dGF0ZQo=
--0000000000005bcc5605ba47fa81--


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 15:27:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 15:27:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79975.145981 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6b6T-0007Nt-RL; Mon, 01 Feb 2021 15:27:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79975.145981; Mon, 01 Feb 2021 15:27:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6b6T-0007Nl-Nz; Mon, 01 Feb 2021 15:27:01 +0000
Received: by outflank-mailman (input) for mailman id 79975;
 Mon, 01 Feb 2021 15:27:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RPu5=HD=dingwall.me.uk=james@srs-us1.protection.inumbo.net>)
 id 1l6b6S-0007NV-Cs
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 15:27:00 +0000
Received: from know-smtprelay-omc-8.server.virginmedia.net (unknown
 [80.0.253.72]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fab9a2d6-f1c4-4731-9161-e950e7c1e34b;
 Mon, 01 Feb 2021 15:26:57 +0000 (UTC)
Received: from mail0.xen.dingwall.me.uk ([82.38.249.212]) by cmsmtp with ESMTPA
 id 6b6OlNtrFCZ836b6OlQsUS; Mon, 01 Feb 2021 15:26:56 +0000
Received: from localhost (localhost [IPv6:::1])
 by mail0.xen.dingwall.me.uk (Postfix) with ESMTP id DBDAB307E31
 for <xen-devel@lists.xenproject.org>; Mon,  1 Feb 2021 15:26:55 +0000 (GMT)
Received: from mail0.xen.dingwall.me.uk ([127.0.0.1])
 by localhost (mail0.xen.dingwall.me.uk [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id aVY9Yro3_jVC for <xen-devel@lists.xenproject.org>;
 Mon,  1 Feb 2021 15:26:55 +0000 (GMT)
Received: from ghoul.dingwall.me.uk (ghoul.dingwall.me.uk [192.168.1.200])
 by dingwall.me.uk (Postfix) with ESMTP id B086D307E2E
 for <xen-devel@lists.xenproject.org>; Mon,  1 Feb 2021 15:26:55 +0000 (GMT)
Received: by ghoul.dingwall.me.uk (Postfix, from userid 1000)
 id ABB2790C; Mon,  1 Feb 2021 15:26:55 +0000 (GMT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fab9a2d6-f1c4-4731-9161-e950e7c1e34b
X-Originating-IP: [82.38.249.212]
X-Authenticated-User: james.dingwall@blueyonder.co.uk
X-Spam: 0
X-Authority: v=2.3 cv=HNHt6Llv c=1 sm=1 tr=0 a=gXUefieqlD6GaZBkXOTlrw==:117
 a=gXUefieqlD6GaZBkXOTlrw==:17 a=xqWC_Br6kY4A:10 a=kj9zAlcOel0A:10
 a=qa6Q16uM49sA:10 a=gUfT_g-z_Vnq6cCTFasA:9 a=CjuIK1q_8ugA:10
 a=Z5ABNNGmrOfJ6cZ5bIyy:22 a=jd6J4Gguk5HxikPWLKER:22
X-Virus-Scanned: Debian amavisd-new at dingwall.me.uk
Date: Mon, 1 Feb 2021 15:26:55 +0000
From: James Dingwall <james-xen@dingwall.me.uk>
To: xen-devel@lists.xenproject.org
Subject: VIRIDIAN CRASH: 3b c0000096 75b12c5 9e7f1580 0
Message-ID: <20210201152655.GA3922797@dingwall.me.uk>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
X-CMAE-Envelope: MS4wfJzlh0jqB6E7T+qYl/IiNdpIx2LGiG9mqOjFltnD0T468NOXAk0dKbuzlljiwe0H4wH3XcygyRECWOu5gUFM5bGxNLAsar73xf1ZptJkcLNvYaw6al/i
 qh7Xp01lSlqI/UbSyLx7wr9G8W6UDftj2exIlNyYcaUbDy+sTJ3mhSbn37ZxGTla36YJREa3Ca7lizLaqjt/tV5bQf+5pst8yAU=

Hi,

I am building the xen 4.11 branch at
310ab79875cb705cc2c7daddff412b5a4899f8c9, which includes commit
3b5de119f0399cbe745502cb6ebd5e6633cc139c "x86/msr: fix handling of
MSR_IA32_PERF_{STATUS/CTL}".  I think this should address the error
recorded in xen's dmesg:

(XEN) d11v0 VIRIDIAN CRASH: 3b c0000096 75b12c5 9e7f1580 0

I have removed `viridian = [..]` from the xen config but still get this
reliably when launching PassMark PerformanceTest, at the point where it
is collecting CPU information.
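
For reference, the crash line above can be read as a standard Windows bug
check (a sketch; the constant names below come from public Windows
bug-check/NTSTATUS documentation, not from Xen, and match the
SYSTEM_SERVICE_EXCEPTION line in the qemu-dm log further down):

```python
# Decode the fields of "(XEN) dNvN VIRIDIAN CRASH: <code> <p1> <p2> <p3> <p4>".
# For bug check 0x3B the parameters are: exception code, faulting address,
# context record address, and a reserved zero.
BUGCHECKS = {0x3B: "SYSTEM_SERVICE_EXCEPTION"}
NTSTATUS = {0xC0000096: "STATUS_PRIVILEGED_INSTRUCTION"}

def decode(fields):
    code, p1, p2, p3, p4 = (int(x, 16) for x in fields.split())
    return {
        "bugcheck": BUGCHECKS.get(code, hex(code)),
        "exception": NTSTATUS.get(p1, hex(p1)),
        "faulting_address": hex(p2),
        "context_record": hex(p3),
        "reserved": hex(p4),
    }

info = decode("3b c0000096 75b12c5 9e7f1580 0")
print(info["bugcheck"], info["exception"])
```

0xC0000096 (STATUS_PRIVILEGED_INSTRUCTION) is consistent with the guest
faulting on an MSR access, which is why the MSR_IA32_PERF_{STATUS/CTL}
fix looked relevant (note RCX = 0x199, the MSR_IA32_PERF_CTL index, in
the CONTEXT dump below).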

This is recorded in the domain qemu-dm log:

21244@1612191983.279616:xen_platform_log xen platform: XEN|BUGCHECK: ====>
21244@1612191983.279819:xen_platform_log xen platform: XEN|BUGCHECK: SYSTEM_SERVICE_EXCEPTION: 00000000C0000096 FFFFF800A43C72C5 FFFFD0014343D580 0000000000000000
21244@1612191983.279959:xen_platform_log xen platform: XEN|BUGCHECK: EXCEPTION (FFFFF800A43C72C5):
21244@1612191983.280075:xen_platform_log xen platform: XEN|BUGCHECK: - Code = C148320F
21244@1612191983.280205:xen_platform_log xen platform: XEN|BUGCHECK: - Flags = 0B4820E2
21244@1612191983.280346:xen_platform_log xen platform: XEN|BUGCHECK: - Address = 0000A824948D4800
21244@1612191983.280504:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[0] = 8B00000769850F07
21244@1612191983.280633:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[1] = 46B70F4024448906
21244@1612191983.280754:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[2] = 0F44442444896604
21244@1612191983.280876:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[3] = E983C88B410646B6
21244@1612191983.281012:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[4] = 0D7401E9831E7401
21244@1612191983.281172:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[5] = 54B70F217502F983
21244@1612191983.281304:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[6] = 54B70F15EBED4024
21244@1612191983.281426:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[7] = EBC0B70FED664024
21244@1612191983.281547:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[8] = 0FEC402454B70F09
21244@1612191983.281668:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[9] = 448B42244489C0B6
21244@1612191983.281809:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[10] = 2444B70F06894024
21244@1612191983.281932:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[11] = 4688440446896644
21244@1612191983.282052:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[12] = 0000073846C74906
21244@1612191983.282185:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[13] = F8830000070AE900
21244@1612191983.282340:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[14] = 8B000006F9850F07
21244@1612191983.282480:xen_platform_log xen platform: XEN|BUGCHECK: EXCEPTION (0000A824848948C2):
21244@1612191983.282617:xen_platform_log xen platform: XEN|BUGCHECK: CONTEXT (FFFFD0014343D580):
21244@1612191983.282717:xen_platform_log xen platform: XEN|BUGCHECK: - GS = 002B
21244@1612191983.282816:xen_platform_log xen platform: XEN|BUGCHECK: - FS = 0053
21244@1612191983.282914:xen_platform_log xen platform: XEN|BUGCHECK: - ES = 002B
21244@1612191983.283011:xen_platform_log xen platform: XEN|BUGCHECK: - DS = 002B
21244@1612191983.283127:xen_platform_log xen platform: XEN|BUGCHECK: - SS = 0018
21244@1612191983.283226:xen_platform_log xen platform: XEN|BUGCHECK: - CS = 0010
21244@1612191983.283332:xen_platform_log xen platform: XEN|BUGCHECK: - EFLAGS = 00000202
21244@1612191983.283444:xen_platform_log xen platform: XEN|BUGCHECK: - RDI = 00000000F64D5C20
21244@1612191983.283555:xen_platform_log xen platform: XEN|BUGCHECK: - RSI = 00000000F6367280
21244@1612191983.283666:xen_platform_log xen platform: XEN|BUGCHECK: - RBX = 000000008011E060
21244@1612191983.283810:xen_platform_log xen platform: XEN|BUGCHECK: - RDX = 00000000F64D5C20
21244@1612191983.283972:xen_platform_log xen platform: XEN|BUGCHECK: - RCX = 0000000000000199
21244@1612191983.284350:xen_platform_log xen platform: XEN|BUGCHECK: - RAX = 0000000000000004
21244@1612191983.284523:xen_platform_log xen platform: XEN|BUGCHECK: - RBP = 000000004343E891
21244@1612191983.284658:xen_platform_log xen platform: XEN|BUGCHECK: - RIP = 00000000A43C72C5
21244@1612191983.284842:xen_platform_log xen platform: XEN|BUGCHECK: - RSP = 000000004343DFA0
21244@1612191983.284959:xen_platform_log xen platform: XEN|BUGCHECK: - R8 = 0000000000000008
21244@1612191983.285073:xen_platform_log xen platform: XEN|BUGCHECK: - R9 = 000000000000000E
21244@1612191983.285188:xen_platform_log xen platform: XEN|BUGCHECK: - R10 = 0000000000000002
21244@1612191983.285304:xen_platform_log xen platform: XEN|BUGCHECK: - R11 = 000000004343E808
21244@1612191983.285420:xen_platform_log xen platform: XEN|BUGCHECK: - R12 = 0000000000000000
21244@1612191983.285564:xen_platform_log xen platform: XEN|BUGCHECK: - R13 = 00000000F7964E50
21244@1612191983.285680:xen_platform_log xen platform: XEN|BUGCHECK: - R14 = 00000000F64D5C20
21244@1612191983.285796:xen_platform_log xen platform: XEN|BUGCHECK: - R15 = 00000000F7964E50
21244@1612191983.285888:xen_platform_log xen platform: XEN|BUGCHECK: STACK:
21244@1612191983.286105:xen_platform_log xen platform: XEN|BUGCHECK: 000000004343E810: (0000000000000000 000000004343E891 0000000000000002 00000000F75F08A0) ntoskrnl.exe + 0000000000485507
21244@1612191983.286340:xen_platform_log xen platform: XEN|BUGCHECK: 000000004343E8E0: (00000000F75F0805 000000004343EB80 00000000F6A62CC0 00000000F75F08A0) ntoskrnl.exe + 0000000000486468
21244@1612191983.286547:xen_platform_log xen platform: XEN|BUGCHECK: 000000004343EA20: (0000000000000000 0000000000000000 0000000000000000 0000000000000000) ntoskrnl.exe + 0000000000458CAE
21244@1612191983.286755:xen_platform_log xen platform: XEN|BUGCHECK: 000000004343EA90: (0000000000000000 0000000000000000 000000007DBED000 000000007DA00028) ntoskrnl.exe + 00000000001501A3
21244@1612191983.286976:xen_platform_log xen platform: XEN|BUGCHECK: 0000000009ABE388: (00000000587D5673 0000000058F40000 0000000006002D2B 0000000000000000) 00007FFB5B3207CA
21244@1612191983.287171:xen_platform_log xen platform: XEN|BUGCHECK: 0000000009ABE390: (0000000058F40000 0000000006002D2B 0000000000000000 00000000160C86D8) 00007FFB587D5673
21244@1612191983.287390:xen_platform_log xen platform: XEN|BUGCHECK: 0000000009ABE398: (0000000006002D2B 0000000000000000 00000000160C86D8 0000000009ABE3E0) 00007FFB58F40000
21244@1612191983.287584:xen_platform_log xen platform: XEN|BUGCHECK: 0000000009ABE3A0: (0000000000000000 00000000160C86D8 0000000009ABE3E0 000000008011E060) 00007FFB06002D2B
21244@1612191983.287777:xen_platform_log xen platform: XEN|BUGCHECK: 0000000009ABE3A8: (00000000160C86D8 0000000009ABE3E0 000000008011E060 0000000009ABE4A0) 0000000000000000
21244@1612191983.287898:xen_platform_log xen platform: XEN|BUGCHECK: <====

The Windows guest is running winpv drivers 8.2.1.

I'm not quite sure what else to examine or change at this point, so any
guidance would be welcome.

Thanks,
James


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 15:28:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 15:28:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79980.145993 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6b8B-0007Xc-7I; Mon, 01 Feb 2021 15:28:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79980.145993; Mon, 01 Feb 2021 15:28:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6b8B-0007XV-4C; Mon, 01 Feb 2021 15:28:47 +0000
Received: by outflank-mailman (input) for mailman id 79980;
 Mon, 01 Feb 2021 15:28:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l6b89-0007XQ-Qv
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 15:28:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l6b89-0004gO-PV
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 15:28:45 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l6b89-00065H-O8
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 15:28:45 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l6b86-0007pj-GS; Mon, 01 Feb 2021 15:28:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=HqB98nLbZIIhhuBMHm87xXko2EUVIbeNy11RSKhULm0=; b=xHpppG0XgBfYrc/fUADHYOZK+z
	lSjz9fOUKjFQq519JD++koPSAvPaCqQfYaa6T3+Yo/Q42TmC7eqsDfhhKnl30P5Ep4huIaVP+CD54
	NyCIYcnsYc0uwi+WwXn80V4lubqtTAtSq7pZCtR7IX01zi173W0usW7aJHHfgNIwERK8=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 8bit
Message-ID: <24600.7722.256518.556806@mariner.uk.xensource.com>
Date: Mon, 1 Feb 2021 15:28:42 +0000
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel\@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
    Wei Liu <wl@xen.org>,
    Roger Pau =?iso-8859-1?Q?Monn=E9?=  <roger.pau@citrix.com>,
    Julien Grall <julien@xen.org>,
    Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86/build: correctly record dependencies of asm-offsets.s
 [and 1 more messages]
In-Reply-To: <aed2dfba-3b1c-7e54-7996-766b100375f9@suse.com>
References: <b3b57f6b-3ed9-18f6-2a87-6af3304c6645@suse.com>
	<0cbbdb3a-5681-10df-aeee-ac185d7033cc@citrix.com>
	<24600.6974.503961.950273@mariner.uk.xensource.com>
	<aed2dfba-3b1c-7e54-7996-766b100375f9@suse.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Jan Beulich writes ("Re: [PATCH] x86/build: correctly record dependencies of asm-offsets.s [and 1 more messages]"):
> Oh, this used to be different on prior releases once we were
> past the full freeze point. Are you intending to allow bug fixes
> without release ack until the actual release (minus commit
> moratorium periods, of course), or will this change at some
> (un?)predictable point?

> >    Friday 29th January    Feature freeze
> > 
> >        Patches adding new features should be committed by this date.
> >        Straightforward bugfixes may continue to be accepted by maintainers.
> > 
> >    Friday 12th February **tentative**   Code freeze
> > 
> >        Bugfixes only, all changes to be approved by the Release Manager.

I will send a proper announcement.

Ian.


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 15:29:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 15:29:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79982.146005 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6b9K-0007qG-IE; Mon, 01 Feb 2021 15:29:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79982.146005; Mon, 01 Feb 2021 15:29:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6b9K-0007q9-Es; Mon, 01 Feb 2021 15:29:58 +0000
Received: by outflank-mailman (input) for mailman id 79982;
 Mon, 01 Feb 2021 15:29:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4N3t=HD=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l6b9J-0007px-5B
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 15:29:57 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b8d9ccc6-4a65-4e07-8520-3501842ef03e;
 Mon, 01 Feb 2021 15:29:56 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7B248B14D;
 Mon,  1 Feb 2021 15:29:55 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b8d9ccc6-4a65-4e07-8520-3501842ef03e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612193395; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=MQ68E43jyhkHfGMTsQG8E0vxVLN8gjNoABSisXJaxCo=;
	b=GTTkoEG/IPXdgXwap0jXm1bPO5y3CafyIYzXDmouBE5JPkWiGa/wDZ9M9pbl7LVsKfN9Kz
	M1vmWi1KZz3srJWdNEWjM9z+s9i6CDYI7YN7aoAnKWgMyzJnYEz9mahuxl9rr6FBRHo5Lb
	rFOlLV4g1N8apj0QtQqig2ahvVH+LDk=
Subject: Re: [PATCH] x86/build: correctly record dependencies of asm-offsets.s
 [and 1 more messages]
To: Ian Jackson <iwj@xenproject.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Julien Grall <julien@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
References: <b3b57f6b-3ed9-18f6-2a87-6af3304c6645@suse.com>
 <0cbbdb3a-5681-10df-aeee-ac185d7033cc@citrix.com>
 <24600.6974.503961.950273@mariner.uk.xensource.com>
 <aed2dfba-3b1c-7e54-7996-766b100375f9@suse.com>
 <24600.7722.256518.556806@mariner.uk.xensource.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <6c09f5d9-22b3-07f6-3af3-ed76c6d15c75@suse.com>
Date: Mon, 1 Feb 2021 16:29:54 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <24600.7722.256518.556806@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 01.02.2021 16:28, Ian Jackson wrote:
> Jan Beulich writes ("Re: [PATCH] x86/build: correctly record dependencies of asm-offsets.s [and 1 more messages]"):
>> Oh, this used to be different on prior releases once we were
>> past the full freeze point. Are you intending to allow bug fixes
>> without release ack until the actual release (minus commit
>> moratorium periods, of course), or will this change at some
>> (un?)predictable point?
> 
>>>    Friday 29th January    Feature freeze
>>>
>>>        Patches adding new features should be committed by this date.
>>>        Straightforward bugfixes may continue to be accepted by maintainers.
>>>
>>>    Friday 12th February **tentative**   Code freeze
>>>
>>>        Bugfixes only, all changes to be approved by the Release Manager.

Oh, looks like I forgot we have three freezes, not two.

> I will send a proper announcement.

Thanks.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 15:34:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 15:34:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79987.146016 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6bD9-0000Fw-2s; Mon, 01 Feb 2021 15:33:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79987.146016; Mon, 01 Feb 2021 15:33:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6bD8-0000Fp-WB; Mon, 01 Feb 2021 15:33:54 +0000
Received: by outflank-mailman (input) for mailman id 79987;
 Mon, 01 Feb 2021 15:33:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l6bD8-0000Fk-Fi
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 15:33:54 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l6bD8-0004mp-C8
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 15:33:54 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l6bD8-0006Yr-Ak
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 15:33:54 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l6bD5-0007qn-1G; Mon, 01 Feb 2021 15:33:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Subject:To:Date:Message-ID:
	Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=nkBRSlzmBwKRZLlNLozRCLgU5gDK5KyhbACypTekpcM=; b=4ELd+8uSHu1bG/mw4nPVI/10+h
	Byxu8SH75vNazZaF9K3Oaqf7fvWgJuY26AjP+eHyT+XbjWWqnxY2pWkb95BFyLgTw6cE/FcE0Be4N
	7w1pVl5+QsVDKBkScv6dDy6eJAlB9Zx70CRCzmG15XcXUkkfkpluU0H0GwiANK7Alb0c=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24600.8030.769396.165224@mariner.uk.xensource.com>
Date: Mon, 1 Feb 2021 15:33:50 +0000
To: committers@xenproject.org,
    xen-devel@lists.xenproject.org
Subject: [ANNOUNCE] Xen 4.15 release schedule - feature freeze now in effect

Thanks everyone for your hard work so far.  We are now in feature
freeze, as of the end of Friday.  No new features should be committed
to xen.git#staging.

You may continue to commit straightforward bugfixes, docs changes, and
new tests, without a release-ack.  Anything involving reorganisation
or refactoring should get a release ack.  If in doubt, please ask me
and I will grant (or withhold) permission.

The code freeze will occur on the 12th of February as planned.

  As a reminder, here is the release schedule
  (unchanged information indented with spaces):

   Friday 15th January    Last posting date

       Patches adding new features should be posted to the mailing list
       by this date, although perhaps not in their final version.

   Friday 29th January    Feature freeze

       Patches adding new features should be committed by this date.
       Straightforward bugfixes may continue to be accepted by
       maintainers.

+  Friday 12th February   Code freeze

       Bugfixes only, all changes to be approved by the Release Manager.

   Week of 12th March **tentative**    Release
       (probably Tuesday or Wednesday)

  Any patches containing substantial refactoring are to be treated as
  new features, even if their intent is to fix bugs.

  Freeze exceptions will not be routine, but may be granted in
  exceptional cases for small changes on the basis of risk assessment.
  Large series will not get exceptions.  Contributors *must not* rely on
  getting, or expect, a freeze exception.

  New or improved tests (provided they do not involve refactoring or
  even build-system reorganisation) and documentation improvements
  will generally be treated as bugfixes.

  The release date is provisional and will be adjusted in light
  of apparent code quality etc.

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 15:37:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 15:37:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79991.146034 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6bH2-0000RJ-M1; Mon, 01 Feb 2021 15:37:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79991.146034; Mon, 01 Feb 2021 15:37:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6bH2-0000RC-Ie; Mon, 01 Feb 2021 15:37:56 +0000
Received: by outflank-mailman (input) for mailman id 79991;
 Mon, 01 Feb 2021 15:37:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IWGu=HD=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l6bH1-0000R7-Df
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 15:37:55 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1b47b2dd-4db6-43ad-8e39-40832e83f484;
 Mon, 01 Feb 2021 15:37:54 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1b47b2dd-4db6-43ad-8e39-40832e83f484
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612193874;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=EIzritZu05SnuimgO8QHGuCk8dlARUFYVnTJ1chxz1M=;
  b=cMYwPv6O+jKwjXB1gILL/qIGaC+qOaEh5mKeymPeQXSC6lKYy480HFd+
   n/Q94oyShrVjA8Nc6utZHkaZoiVTNdyBTMqUmbiE1PfdoQJpb3PlwqRxJ
   V797MEMSUsu/n8cHyExkRUSsNUsgHxiPuSb0aR5IscvmxkygjmpKhdv3g
   Q=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: 5chuxRv0pEc2w2U/VxD2C+uyuCXQaZMqhwX1ZilrJjXTzIrphgnfEhM3HjnLZ8mlrs0v7mll/N
 lQ/W2C6t0B+cGvmEZzq1MLDaIwF6yo9mAsF4zHaalQKegwkWCrrzTyhg0301gMts/4pjgly1zw
 JzPO1iVcpBqrZVkL/ne0F+DDqF+omjl+T/z9r96LAxLTcHHzzPqnJO5SxR2AAOW2af8fi5rDck
 tean62E638PfYC13O9BwEprlyO0oTdJlZlkrQ8zQkOLoffsox8x+1WlEaFvgapwV9wgLSEooOk
 M8Y=
X-SBRS: 5.2
X-MesageID: 36667757
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,392,1602561600"; 
   d="scan'208";a="36667757"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ZOLnakT0oALS5FFE9ZF0uDzYPwW5F0uG/kItqJ0WkHvfLuAmzF449BMKzQEstasOU6tXHdkUX6qbfk5uBCwZiy0Wvmfppcnk+w6EnfBuNm3ykrO8xy4E4650AM4Tna6duJLVMbvQxawyG2+STQU3oa4Fcq65x+5GpA+smFkApoI01qvpJ066EYeicH0jbh/6PBBJv79wfrqveEt/BL3QANeZLEMuQY0MOrlvgrqBKIGucmjoqzk9NHa1Gc2aOx2cT4/SjWQy5t0NUKJSep1BxDcaBJw+MBfCdkJtJjpaOKHZY3RplgEmNJwT/2gJJFBJeRiePPyENeSgsqwDbFtMGg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Z1Nk6ujDwrukE7XA630RTM5uYNvmKaDCjAVPbpv/zyo=;
 b=Do8F3Qtj5IBtzWPc52C+4JUQASxcGglX/1WBGc4o+gvmoDrUWvmeF7uhP6zBO4IFBAWlLy/KGBmn4Qs/g4QOdejegroMSAIKONeADEGaPMXmrUM+FHLacbHbJCTwse//5+ICUVA+hmHOF+U+O2v+j8x9vsfHFhhEltabdaQOuw8gDH0fsCouu1KEdDg3uiQuO0rwRmSRNxNt/GKfwosWT1hTR1CBUKWyafqui5J0RACyxGwCVD9Ip/Rc1LagcLgTC4EdWUkDkbQB/6HjqQeO8hfSu2Pi92AiMe4SBfUDZZbe3Q8IKlz9XuqFLLgThaOnyVkKD7h+4COyaakVafU+1Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Z1Nk6ujDwrukE7XA630RTM5uYNvmKaDCjAVPbpv/zyo=;
 b=gt7eNCk9/ytdcWPSktMtJZgYVVcepcSJh9/3d+UL6rp629ybjQsdxKZ9T3lQSfUHjn6B3yHV5+hhJrHncj7guf4F9Q1YevfKvOW4V9ftaMdWXtS7wpCFsgqdgBOYUQJ8jvJj7DU6rRL6TYJFoCaLo6uiHxUlKYgxNQN4dWe6df4=
Subject: Re: [PATCH] ioreq: don't (deliberately) crash Dom0
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Paul Durrant <paul@xen.org>, Ian Jackson <iwj@xenproject.org>
References: <1dc6fe4c-3435-462d-a339-085014ae0deb@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <f6c0504f-c83f-5477-0797-d6e6da616fc5@citrix.com>
Date: Mon, 1 Feb 2021 15:37:41 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
In-Reply-To: <1dc6fe4c-3435-462d-a339-085014ae0deb@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P123CA0066.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1::30) To BYAPR03MB4728.namprd03.prod.outlook.com
 (2603:10b6:a03:13a::24)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 4755d4e0-4f73-4a5e-77ac-08d8c6c753a3
X-MS-TrafficTypeDiagnostic: BYAPR03MB3413:
X-Microsoft-Antispam-PRVS: <BYAPR03MB341349E44C7EED6AD7B68224BAB69@BYAPR03MB3413.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:3173;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: UyzfuL1LJBH+wuGjdUn3di5VTr6o2oJy4W8oyI6ZNrqAPiWn0DbEg4aIM2Bz5gFhd4ngFnVFncoIlJKifKolY91Sb76RppT0CecYmt/N29akywQBHUL+NubZADLxZZ1Jy4/YiHPVoJzMZ70mHm96o0IJjsFTVg8VKekpdhSR1Jl4/rhrU6HfJPe82oCTwy5lX00iLW/TvSEqODzcl0r+blZZTFL1ALpVDddcjTj+93mUUhGCe4vRF18Qki7nDv1IKkcKW7HNmXnSag88nxot3pfDErb60MgqTlU6wsf7K6tXqDj2lGz/BwPZQrqUyqLRmq9TnPegf40ICf1jFKnneEtSMNEn51DqcYpX5A+ykB9EldVV4yccY5/Ghjq4QWrFiBS0DHHJjrJHIkmsiVTz63lg1wtlYITmloBFQ871Rrk/cHKSK64LGT6LvA9NMT/22heYkv6kDw1XjcpSCrUmTqQsyfp4e7l91E1lT5NVB7WmGro0zXOJpEDr5PzIjfQSJQ8sj4rOqMYbKs5IP2f80LEiNNOiV4vq9/QkifMw+kkCYj5Zq+SJq/IOHY5/vQm/0GNhvBSG906Nj/sPy5Y1nHnqUbhuRPn8XTyC3i+gdEQ=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB4728.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(346002)(376002)(39860400002)(136003)(366004)(66946007)(66476007)(66556008)(5660300002)(54906003)(316002)(16576012)(110136005)(6666004)(83380400001)(26005)(186003)(16526019)(86362001)(2616005)(53546011)(956004)(8936002)(31686004)(8676002)(4326008)(36756003)(6486002)(31696002)(478600001)(2906002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?ckRrR1hRWkVEYU1zSng2S2FlWlhSQWxQSTB3QkxIaEU5Zldyd0gxQkh6U1d1?=
 =?utf-8?B?NURtOERuN0dLdFptaEkrR1RaNWRzenZMd0g0TXNRTll4S3RqZTJUbGdrN3dt?=
 =?utf-8?B?NWlHS2JZK2xrN3RZR1FiMWRCZFpUUkdMTWtJRWdpcjcwWGhobTBUNkxLZVRl?=
 =?utf-8?B?Y0ZDL05CTXFPQVAxM1ltbU4wcjZVRVh4ampiblY2Um9nd0MxclQ0cmN5RE5B?=
 =?utf-8?B?S1FLa3JPM2dsc2FYb29hdW4zcTQ4c2tMbVJZdW5xM0x5ejVCNFg0UGlQa3pT?=
 =?utf-8?B?bkJFeGkwTFJ6dTNmMUt0RTl3WjcwVUZybnZub3JUcVc0b0dyMFVLMDJEZUE2?=
 =?utf-8?B?NmY2aXhXaDZwZGJvZnBiTDlYeVhaNGo4NGxVYm80aDNDcUo4RnJqckxDK2J0?=
 =?utf-8?B?QUt0NnFIVDdpRy9BMEVPblRlbFFhN0xPTjk3L3FYMk9xZjV3SklSWTdzMVA1?=
 =?utf-8?B?R2MwUHFPK0kyMEJhbHBWWFhWTzN0WFgyRkhCYlRhTXYxR2lPZXU5MlhsUmpm?=
 =?utf-8?B?V1Z6dUdEbUFhWEw4WEg3N09EZDNoVjRoQ1MyaEdVY2tPWURtYUVybytTSVZB?=
 =?utf-8?B?cVZjelpxQk9NQ0RJcDZ5QVFuTlRLMVQxZXFpeDQrNHBEaGhPSDdiZWpHZVJG?=
 =?utf-8?B?NnNRYU5yQyt1elk5MDU4SzBZWlZFbGF1YXRlVjFuVEZ4dE8vQWRtd1pIZ1VD?=
 =?utf-8?B?Y1Z1WGM2ZlZYMURLQllsN0lHZ1hBbGZrV3EzV0dPQjRteDZtN1R5c0FWbXpi?=
 =?utf-8?B?UVhBQmJSZmczMlNPS2Q5YjlCMUg0Wld1UU50WGMvK0tDaXBsMVdrQy9SMnl1?=
 =?utf-8?B?dlJyTWlocHZrcXAwTi9xWkRkTkRSRVRaZ0VwaUxJOVhnbUxocXNFNVl2RGh2?=
 =?utf-8?B?dTNXZTBXbmlOeXp4bVFxeHlhdUo0RjJvWVVmMnBvR3NJWi9ETlRjbWxuVEZV?=
 =?utf-8?B?eTNqdER0RkpWRkRsd0hSdXYxVzdSeGVQaW5MaVVHdnBPWG9CRXIyNXdPUWJl?=
 =?utf-8?B?R1dNZmZJMGhPdFNtSmxhUUpjQzlPQ0pnQUE0VU5Uc0Z4ZEF1WUdxRHJsM2w2?=
 =?utf-8?B?NjhNWVMwRlk1SkJ1MC9zL3ptTmtYaG5zcFFaMUFFb2Q4N1BqNUMxMithb2w1?=
 =?utf-8?B?alh4NzJRSTV4UGJ0U1dsNDU3MVlVcjlaSUZpWmxIWWZlZjQzakdpNEVzQ1VV?=
 =?utf-8?B?VFNoVGJ6Mzh5UTFNUFAySU8wSHN4WUpTQjBoR3dJRy9oZWVtckJPa3dEaWtz?=
 =?utf-8?B?bFRlOVI4Z3E1SkxvUDk3SVB4Q0QwK0ZFdERNSUw5NGRXZE5nbGt3NnNnMk5W?=
 =?utf-8?B?U2RMMDFyM3lpYkttRnFCTDR4cS9wYW5iMm9PaG8yejdKWnBwV1M3SzF3Mjh3?=
 =?utf-8?B?S3RyREY4TEU0anp0aUNpcTl1WFpDYndzL1oxY2VSMEpiYzJJQjNiOFUxNjRN?=
 =?utf-8?B?WEJNZW50aDdIbHNoNFpaZlVMVTVDdlNHbkcwWkJRPT0=?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 4755d4e0-4f73-4a5e-77ac-08d8c6c753a3
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB4728.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 Feb 2021 15:37:47.8337
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: +EnMC/GcZN5zX/IOPRF+FRSjkME41cf0pNif4IYb9bw9jif4eyrZDWSuJ8dsR40mRFIxOhTQMbU6NGRLDKa2bxj4+6MfWsfFcBw5wFM3iLs=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB3413
X-OriginatorOrg: citrix.com

On 01/02/2021 15:22, Jan Beulich wrote:
> We consider this error path of hvm_alloc_ioreq_mfn() to not be possible
> to be taken, or otherwise to indicate abuse or a bug somewhere. If there
> is abuse of some kind, crashing Dom0 here would mean a system-wide DoS.
> Only crash the emulator domain if it's not the (global) control domain;
> crash only the guest being serviced otherwise.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Honestly, I'm -1 towards this.

Asymmetrically shooting things which aren't dom0 only complicates
investigations, and doesn't remove the fact that this is an XSA.

I do not subscribe to the opinion that keeping dom0 running at all
possible costs is the best thing for the system.

In this particular case, the theoretical cases where it can go wrong
might not be the fault of either domain.

~Andrew

>
> --- a/xen/common/ioreq.c
> +++ b/xen/common/ioreq.c
> @@ -274,7 +274,7 @@ static int hvm_alloc_ioreq_mfn(struct hv
>           * The domain can't possibly know about this page yet, so failure
>           * here is a clear indication of something fishy going on.
>           */
> -        domain_crash(s->emulator);
> +        domain_crash(is_control_domain(s->emulator) ? s->target : s->emulator);
>          return -ENODATA;
>      }
>  
>



From xen-devel-bounces@lists.xenproject.org Mon Feb 01 15:39:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 15:39:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79994.146047 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6bIf-0000c7-5H; Mon, 01 Feb 2021 15:39:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79994.146047; Mon, 01 Feb 2021 15:39:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6bIf-0000c0-23; Mon, 01 Feb 2021 15:39:37 +0000
Received: by outflank-mailman (input) for mailman id 79994;
 Mon, 01 Feb 2021 15:39:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1l6bId-0000be-BA
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 15:39:35 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l6bIZ-0004sr-Qn; Mon, 01 Feb 2021 15:39:31 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l6bIZ-0006xh-Io; Mon, 01 Feb 2021 15:39:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=vwDt3r++4xn6rNRb7Y2vRUxUWBY36G0ldfZ6OSmnceQ=; b=pALZyd48hjQaLOfm/YRn/vgI+B
	9Dm9gIGSQfKHXUiJnEINJY47SDSFYdAbm9Uz3319Q84bZBqzaQog+iyOYwhsnmqU7xaKx72Xzo7fE
	9eveOrDgyvp8aUM5dns0Xpynqq0zTY030K+aYPnWJsL3UG6w+6pROTawEHCxdszCZnXQ=;
Subject: Re: [PATCH for-4.15] xen/mm: Fix build when CONFIG_HVM=n and
 CONFIG_COVERAGE=y [and 1 more messages]
To: Jan Beulich <jbeulich@suse.com>, Ian Jackson <iwj@xenproject.org>
Cc: xen-devel@lists.xenproject.org, Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
References: <20210130152210.17503-1-julien@xen.org>
 <174a18ba-25d5-a94c-a85d-4a81b837a936@suse.com>
 <24600.5695.143342.713995@mariner.uk.xensource.com>
 <321c06d3-106a-cfab-5ac8-df629e600dfe@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <00328618-61c8-3fc7-45ce-ee99b71c85b5@xen.org>
Date: Mon, 1 Feb 2021 15:39:29 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <321c06d3-106a-cfab-5ac8-df629e600dfe@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 01/02/2021 15:02, Jan Beulich wrote:
> On 01.02.2021 15:54, Ian Jackson wrote:
>> Julien Grall writes ("[PATCH for-4.15] xen/mm: Fix build when CONFIG_HVM=n and CONFIG_COVERAGE=y"):
>>> Xen relies heavily on the DCE stage to remove unused code, so the
>>> linker doesn't throw an error when a function has a prototype but no
>>> implementation.
>>
>> Thanks for the clear explanation.
>>
>>> It is not entirely clear why the compiler DCE is not detecting the
>>> unused code. However, moving the permission check from do_memory_op()
>>> to xenmem_add_to_physmap_batch() does the trick.
>>
>> How unfortunate.
>>
>>> Fixes: d4f699a0df6c ("x86/mm: p2m_add_foreign() is HVM-only")
>>> Reported-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>
>> I have reviewed the diff, but not the code in context.
>>
>>> The gitlab CI is used to provide basic testing on a per-series basis. So
>>> I would like to request that this patch be merged in Xen 4.15 in order to
>>> reduce the number of failures not related to the series tested.
>>
>> Quite so.
>>
>> Jan Beulich writes ("Re: [PATCH for-4.15] xen/mm: Fix build when CONFIG_HVM=n and CONFIG_COVERAGE=y"):
>>> On 30.01.2021 16:22, Julien Grall wrote:
>>>> @@ -1442,13 +1447,6 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>>>>           if ( d == NULL )
>>>>               return -ESRCH;
>>>>   
>>>> -        rc = xatp_permission_check(d, xatpb.space);
>>>> -        if ( rc )
>>>> -        {
>>>> -            rcu_unlock_domain(d);
>>>> -            return rc;
>>>> -        }
>>>> -
>>>>           rc = xenmem_add_to_physmap_batch(d, &xatpb, start_extent);
>>>>   
>>>>           rcu_unlock_domain(d);
>>>
>>> I'd be okay with the code movement if you did so consistently,
>>> i.e. also for the other invocation. I realize this would have
>>> an effect on the dm-op call of the function, but I wonder
>>> whether this wouldn't even be a good thing. If not, I think
>>> duplicating xenmem_add_to_physmap()'s early ASSERT() into
>>> xenmem_add_to_physmap_batch() would be the better course of
>>> action.
>>
>> Jan, can you confirm whether in your opinion this patch as originally
>> posted by Julien is *correct* as is ?  In particular, Julien did not
>> intend a functional change.  Have you satisfied yourself that there is
>> no functional change here ?
> 
> Yes and yes.
> 
>> I understand your objection above to relate to style or neatness,
>> rather than function.  Is that correct ?
> 
> Yes.
> 
>>   And that your proposed
>> additional change would have some impact which would have to be
>> assessed.
> 
> The first of the proposed alternatives may need further
> investigation, yes. The second of the alternatives would
> shrink this patch to a 2-line one, i.e. far less code
> churn, and is not in need of any assessment afaia. In
> fact I believe this latter alternative was discussed as
> the approach to take here, before the patch was submitted.

Right, I chose this approach over the one discussed previously because 
it doesn't duplicate the check for auto-translated domains, and I couldn't 
really find a good way to justify the duplication in the code (you had 
requested a justification).

Regarding calling xatp_permission_check() from 
xenmem_add_to_physmap(): it would mean that the DM code would go through 
two XSM checks (one XSM_DM_PRIV, one XSM_TARGET). It is not clear to me 
whether this would introduce more headaches for the XSM folks.

Therefore, for Xen 4.15, I would prefer to stick with my patch.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 15:53:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 15:53:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.79999.146058 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6bW8-0002WY-CL; Mon, 01 Feb 2021 15:53:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 79999.146058; Mon, 01 Feb 2021 15:53:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6bW8-0002WR-9N; Mon, 01 Feb 2021 15:53:32 +0000
Received: by outflank-mailman (input) for mailman id 79999;
 Mon, 01 Feb 2021 15:53:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Dub/=HD=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1l6bW6-0002WM-JT
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 15:53:30 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c5f9866f-56a6-418b-94ce-f214d1f72f71;
 Mon, 01 Feb 2021 15:53:29 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c5f9866f-56a6-418b-94ce-f214d1f72f71
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612194809;
  h=from:to:cc:subject:date:message-id:
   content-transfer-encoding:mime-version;
  bh=37NP3hA19mATChF6+CPr6RY9nxCTxiiI2Dv5z+vtpbE=;
  b=eUT0oZjFI1uJy+C4Q7l2/ElRnZXXNH1TYZTsFB4foIhNTMySAx2/alqT
   fjMzqqnhj/4iITcJGECM2RfdErss5gYIlrna5GjyTV96D8+OD+wfwUfqP
   hP516AogKGRbq5AZnEcPxGWfJlLdBOzg2iRo1sWm1Y6qJxRX6SUGO13aE
   0=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: RPBavhJpHSuOHx0j+Bng8nXyIOYuVzWnqDUqjnRXdXvDVzCZFWpWWVDVplfscdt0eKUBvhshIg
 ZcT8FEunAdUn8i6rYwLhUmHudxw2o/Na1go4Q4Pp4Rh7GNi/y8IpzohkfE25JZdcVjCsjLb71P
 4on6y4uAihir+cx2zczE0LqlvMtP2V8DT9ReE/baEjT2wpavKH1VI2DmTty2rAxbBSP3A9TYBw
 q7u7cOuHxHsMhRbPqQnp23mdkpBjxSH3bf86/W+i9fslJTxmQ5ARitTFTvrHdflbq5w+MBbSGq
 vOM=
X-SBRS: 5.2
X-MesageID: 36245957
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,392,1602561600"; 
   d="scan'208";a="36245957"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=BsbH68arlPt3ocUXg97ZmN6V7XNDXi0wLyPJAwc0x+INvi5ImnnIy4TQpUvoUvJqF2kcKn9tE3KQOLfHguYZ9tx5Irx+6S9RzqBPkrxbh4Yg6Ezm5BdiqAWKWESH9RKN/xGRIVtZH7QEE1+08a5EIixmePqRy4T+BJ7MnHmJ89JCt1eW94nnbm2K2uPFofU85QO6/r18dMVyz7pNO5El7a/CuS2GgR1xRplDXsge1SXy4SzMOBp8z+TNXmNrAJSOH65MODM8DWMzx+1Ui6ZnvXFwcsPADDRF3kG+MKAhVSjfZQPWGv+SC4fd83NlWV3UIGcIBqGZW8BuuKc9KYGJbA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+DPZlqJy76uCRvHmXUZW2JcqPlPoOx4QIRUbMcfhIqg=;
 b=SyrTSbySOgGqcALJ61oi+W7md9LOKtt1P2YKRAxNM302+3kMO9LYhNSzawcuLsIWKGmtz77Zv6Efuvf9rINkNa41Bwo3+h2bL7LDzYrvYl0WLWCdyHEmYLWofU8aOIHnZEfcapQGAXtt18JGpJlxHvhkhJPyiiFv6hhFW4UWQIY6caeffEPBwMhOl5hFqv+AerIHqlk4C1kNoYJ+8SNYDQUDKnLvBhdP5Iq5TathV2yaydBOgT0WFFaGuLaLtszq2bJSuSozdKGYAdCSCPwdoK3YdToWr59FDEp8YJyuzq8RncgZXEKHn6z7AMkwMhCTsYJYgEOGeD25R6rpOqWJTg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+DPZlqJy76uCRvHmXUZW2JcqPlPoOx4QIRUbMcfhIqg=;
 b=Tr+iiFiGXFACYa2OFlfqaJ3kdZ6aRNdpV6ILHQbXK72Y8P+ZOrqFT8ZZxkpX+xTZ/3+hxUY/m3ZItYEGfTG4K5NasgAAW/qnRb+ibK4Iu65UCjquDOxZrIfq3s6fDCgSD7GbeivCGJ0dHfvQVgeXObU2GJBtQFukcwcI68Fe3cU=
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Roger Pau Monne <roger.pau@citrix.com>, Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>, Juergen Gross <jgross@suse.com>
Subject: [PATCH for-4.15] xenstore: fix build on {Net/Free}BSD
Date: Mon,  1 Feb 2021 16:53:17 +0100
Message-ID: <20210201155317.57748-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.29.2
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: AM5PR1001CA0069.EURPRD10.PROD.OUTLOOK.COM
 (2603:10a6:206:15::46) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: d24a9e2d-5eb6-4f08-61f6-08d8c6c98249
X-MS-TrafficTypeDiagnostic: DM6PR03MB4393:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB4393394248E3CF0E0707DD198FB69@DM6PR03MB4393.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7691;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: AcG9YU1hkEolQkSRw3cwrsdAowtCmJIMQxLwH4io7Yi9FhVXSsr/yBVizUy42wsX3kFZDfnCpFjcSuH+XrFXSHhSbMISBohtX1NB6YsizhEkJ8hAzKgxJxwTzNdSfbF1qw10m2n/qRso4ShWsLYSmXDv5B3kFM65Y8GgVdvLNDfcj4ZYjI3wKI2qyuE6kb7ZchkfkBvvckNecQfBDQoiF9+Db7FrosOb64PCrSAEjc+nYFcBukpb8gCWwhx18idHFaiqPiC9at7H0jQ9zjU409kgdBYOJ/mytkqRCryxVB3whG354OYY3cU5kh7iuJP0qD3BCxobtA55rdbcr7kaMDv0fI8+yNt53nHxTh5agf9IJDgb39J3piTrzXdg3wjgEbOMEEaB8YvaUKbAX3Msfflc8aZp/khKb1JNX2ccBvY1SSoANmOc+L93oEgv5hd6d1iIVtATeo+XmcgVcNlnHjCVlh1kHabjO449YbOGqKjbkjjUxpvswgN56WZYcymX1ii3Kx5tamGpRs6YfLjnDg==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(376002)(39860400002)(136003)(346002)(366004)(1076003)(26005)(2616005)(4744005)(66556008)(66476007)(6916009)(66946007)(6666004)(5660300002)(8676002)(8936002)(2906002)(6496006)(478600001)(86362001)(316002)(186003)(6486002)(36756003)(4326008)(16526019)(54906003)(956004);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?UElvS08reEllOS9mQ2Nhby81YURrcmZrSVJDVnB0emxoTEhlenlKcWN5N2ZE?=
 =?utf-8?B?enRXRW5NaVREK3NndlU4eWhzU3FVREtVb2YvSEZKYkZKVFBEMWFYNGpLQmNy?=
 =?utf-8?B?azVLeDR0RHBTSFQwOUhiT0xSK2p4bVBDQUZTMHNkNEw1S01pN2NGR0dBVGhY?=
 =?utf-8?B?UXY2cGdXUGsrdXNDTkZYOFh6Zm11VzhqemxFVHJCY0dFNzE4VS9Zd1RkY1Fm?=
 =?utf-8?B?NXJYUjg1aUZMOWJJajhQVURpQlJ0VGpGWGNST2ZkcVpOOE5FMVg4ZFgzTlJo?=
 =?utf-8?B?aGVQNXBuVVhCdDhSbE1HTjlFblExaWVmSUp0T0tHQVNOd2NCcW5ZcDRpUGk5?=
 =?utf-8?B?clFUTCtQbG5OUVZ4SE5vdzByOWZXbWdUbFhmZSt4RnVuNzhSNFdhWUNYWUg0?=
 =?utf-8?B?Q0xMdXJHRlN4MThmT3lFN0E0Q1hzRXNqSVdxUkJINUtEWFZEUUpxcC93WjJu?=
 =?utf-8?B?bjVRZk1NRi8va1VybjF2elNnbEZNNGV4ZmFEbUNwQnIzbEN5ZEwwakJBazE2?=
 =?utf-8?B?YzVEb21IZVN3VitWUUs4WkdidktGSWtNbElsU3MvVVREdU82em8zRXJXK2da?=
 =?utf-8?B?RGR3ekcyNXBGbzNsMFJBMlU1b1VZc3RxOGM0dkFQWVZWalRqYTBMWnBWNzcw?=
 =?utf-8?B?VE9WSmN0SWxUeEl1OVNWcFV2N005dHI2ZTB5WHQwdEZrTWxWRGw4RkxzbWor?=
 =?utf-8?B?SjBxdEord2g1dENoSnpkelVFaGN4dUp6UVFBcnVDQStqM3Z2cTlTN0RPb0R4?=
 =?utf-8?B?NlZqOFIrU0xCZWsxOW9ZaWlnRnNGNHdsUGhsenFMZDhCQlkvc1BSNmp6WjRi?=
 =?utf-8?B?V2ZaYWtMenpGZzBOWjBIdXdqK3B3TWFnUU9PT29jNHdORlZyamN1TmNZQVNM?=
 =?utf-8?B?WUkxK3d0KzBKdEJ0Rkk5c204Skl6alZ2bzJUVi9zd0hVbFE3YllIYldVLzdY?=
 =?utf-8?B?ZlpyU1poUmVhV3NCRjFaQjBXYm5zaFdPVlQ5VzBLcmdYMm5MUlRtRUcvdjRj?=
 =?utf-8?B?RHVkMW1GczNsakxlWUxYUDVqcCtRbWRkN294cVdjdEQrcUdUU3F3UC9Oc2gw?=
 =?utf-8?B?eEVZMjVWTzZLZGFwaGdSUDJ2ZkV5V25jckRCV3JpM21uaUswem1lQ3hsalVv?=
 =?utf-8?B?NUlSZFlTR2NaSzFydytJSElFSzJ0cjAxWjBMa0lEZTJtcmVlcXMvNE05N2k1?=
 =?utf-8?B?QXd6UitVb2RsQktKZi94dCtpNFRLZ1lFL1orTE1VSHJPVm9OQUpKMTVDbDNh?=
 =?utf-8?B?cVFRZjIyMUNwOGxjb1FETVlESHdQNVdIMEY5VnV1aDQwZENJVmw0T1lGeHU5?=
 =?utf-8?B?ckNoYmN3NkRUNE1FcTZyWUthOGNiRi9yK0RWajhZREYwd3VSNWxjd0k4dDlU?=
 =?utf-8?B?QndhTkJ5Z1Z2Vy9ESWF6Q0h0VnMwOEQ2a1dDeWkxU01DbG1zbUxDMTg5Y0kv?=
 =?utf-8?B?SjV6MTlsTm5iUVBNbE01a2FwNEZXZjExZnlPcHV3PT0=?=
X-MS-Exchange-CrossTenant-Network-Message-Id: d24a9e2d-5eb6-4f08-61f6-08d8c6c98249
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 Feb 2021 15:53:25.0623
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: tz0J3Qjy0NlkDVPA2FD5Br7jvL+v2QXs3iQfdjkKSTdk829NPxMzPi61NoBI7Q8mLITfchcz/QzAMdpEHdBVuw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4393
X-OriginatorOrg: citrix.com

The endian.h header is in sys/ on NetBSD and FreeBSD.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Only tested on FreeBSD, but from my reading the header is in the same
place on NetBSD.
---
 tools/xenstore/include/xenstore_state.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/tools/xenstore/include/xenstore_state.h b/tools/xenstore/include/xenstore_state.h
index d2a9307400..1bd443f61a 100644
--- a/tools/xenstore/include/xenstore_state.h
+++ b/tools/xenstore/include/xenstore_state.h
@@ -21,7 +21,11 @@
 #ifndef XENSTORE_STATE_H
 #define XENSTORE_STATE_H
 
+#if defined(__FreeBSD__) || defined(__NetBSD__)
+#include <sys/endian.h>
+#else
 #include <endian.h>
+#endif
 #include <sys/types.h>
 
 #ifndef htobe32
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Feb 01 16:03:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 16:03:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80002.146070 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6bfU-00047I-Al; Mon, 01 Feb 2021 16:03:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80002.146070; Mon, 01 Feb 2021 16:03:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6bfU-00047B-7R; Mon, 01 Feb 2021 16:03:12 +0000
Received: by outflank-mailman (input) for mailman id 80002;
 Mon, 01 Feb 2021 16:03:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4N3t=HD=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l6bfT-000476-Ht
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 16:03:11 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 42394945-7a1c-4c62-a0f7-2e02ee46fb43;
 Mon, 01 Feb 2021 16:03:10 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1FCCFAB92;
 Mon,  1 Feb 2021 16:03:09 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 42394945-7a1c-4c62-a0f7-2e02ee46fb43
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612195389; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ffS5DjfBHdCzkAI9Vf/Eo5smSllFsqKw1wxIXYB8H3c=;
	b=JRK7lhYDLR2pezkKNnh19V12njyVIqaNs0YcP0hiqSmP0zOdl/Nzmp9DdcDE2V2BZQpSIO
	4pof5R8Wy1lDVFadUMfKba6XNYS9uMaYtJyYTXLt0I5/NwyFuaoVEkIInUFEamV3X64iTl
	02KOIT7MqixCGTg6JnP606zsU514Vk8=
Subject: Re: [PATCH] ioreq: don't (deliberately) crash Dom0
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Paul Durrant <paul@xen.org>, Ian Jackson <iwj@xenproject.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <1dc6fe4c-3435-462d-a339-085014ae0deb@suse.com>
 <f6c0504f-c83f-5477-0797-d6e6da616fc5@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <f68a1cf2-1c0a-813e-018e-9f5bda956074@suse.com>
Date: Mon, 1 Feb 2021 17:03:08 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <f6c0504f-c83f-5477-0797-d6e6da616fc5@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 01.02.2021 16:37, Andrew Cooper wrote:
> On 01/02/2021 15:22, Jan Beulich wrote:
>> We consider this error path of hvm_alloc_ioreq_mfn() impossible to
>> take, or otherwise an indication of abuse or a bug somewhere. If there
>> is abuse of some kind, crashing Dom0 here would mean a system-wide DoS.
>> Only crash the emulator domain if it's not the (global) control domain;
>> crash only the guest being serviced otherwise.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Honestly, I'm -1 towards this.
> 
> Asymmetrically shooting things which aren't dom0 only complicates
> investigations, and doesn't remove the fact that this is an XSA.
> 
> I do not subscribe to the opinion that keeping dom0 running at all
> possible costs is the best thing for the system.
> 
> In this particular case, the theoretical cases where it can go wrong
> might not be the fault of either domain.

I agree with "might", but I don't think we should consider a buggy
Xen as the first explanation for these errors. As said on another
related thread, failure here could come from the page having been
freed very quickly (by guessing its MFN). It could be the guest's,
the emulator's, or yet some other domain's misbehavior. In none of
these cases do I consider it appropriate to kill Dom0. The guest
not starting (or crashing, if this is dynamic insertion of an
ioreq server) is a clear enough sign that there's an issue that
needs looking into. No need to also penalize all the other domains
running on that host.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 16:21:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 16:21:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80008.146088 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6bwf-00062V-UQ; Mon, 01 Feb 2021 16:20:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80008.146088; Mon, 01 Feb 2021 16:20:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6bwf-00062O-RX; Mon, 01 Feb 2021 16:20:57 +0000
Received: by outflank-mailman (input) for mailman id 80008;
 Mon, 01 Feb 2021 16:20:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4N3t=HD=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l6bwe-00062J-Hv
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 16:20:56 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b1e149c4-393a-49e8-8ba6-e8a9b5b1b409;
 Mon, 01 Feb 2021 16:20:55 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BCA2BAB92;
 Mon,  1 Feb 2021 16:20:54 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b1e149c4-393a-49e8-8ba6-e8a9b5b1b409
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612196454; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=GvGyOpHjxePA3adDxLLV4FpwWSt7Uar0rJSh1I4ST+U=;
	b=ie48RAdB4XksWzNkS/bCeIoDEds/o1OgfW28Fd15bq0vn6MfN+NrSyH1XhUn0J4pHDBjto
	QrPswOwnMKsc0YjRzCfUDXWuqE2aHR1Mf6SCa8nxJEYfkGO0Zq7LmBXJuo/eDuWvdkLzyS
	NXc37/rTyTvFkH+pgCJP+RW14Osh5Nk=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] memory: fix build with COVERAGE but !HVM
Message-ID: <84a05b9e-a0c3-7860-4a59-a591a873b884@suse.com>
Date: Mon, 1 Feb 2021 17:20:54 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Xen relies heavily on the DCE stage to remove unused code, so the
linker doesn't throw an error when a function has a prototype but no
implementation.

On some GCC versions (such as 9.4 provided by Debian sid), the compiler
DCE stage will not manage to figure that out for
xenmem_add_to_physmap_batch():

ld: ld: prelink.o: in function `xenmem_add_to_physmap_batch':
/xen/xen/common/memory.c:942: undefined reference to `xenmem_add_to_physmap_one'
/xen/xen/common/memory.c:942:(.text+0x22145): relocation truncated
to fit: R_X86_64_PLT32 against undefined symbol `xenmem_add_to_physmap_one'
prelink-efi.o: in function `xenmem_add_to_physmap_batch':
/xen/xen/common/memory.c:942: undefined reference to `xenmem_add_to_physmap_one'
make[2]: *** [Makefile:215: /root/xen/xen/xen.efi] Error 1
make[2]: *** Waiting for unfinished jobs....
ld: /xen/xen/.xen-syms.0: hidden symbol `xenmem_add_to_physmap_one' isn't defined
ld: final link failed: bad value

It is not entirely clear why the compiler DCE is not detecting the
unused code. However, cloning the check introduced by the commit below
into xenmem_add_to_physmap_batch() does the trick.

No functional change intended.

Fixes: d4f699a0df6c ("x86/mm: p2m_add_foreign() is HVM-only")
Reported-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Signed-off-by: Julien Grall <jgrall@amazon.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Release-Acked-by: Ian Jackson <iwj@xenproject.org>
---
Julien, since I reused most of your patch's description, I've kept your
S-o-b. Please let me know if you want me to drop it.

--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -904,6 +904,19 @@ static int xenmem_add_to_physmap_batch(s
 {
     union add_to_physmap_extra extra = {};
 
+    /*
+     * While, unlike xenmem_add_to_physmap(), this function is static, cases
+     * have still been observed where xatp_permission_check(), invoked by our
+     * caller, doesn't lead to elimination of this entire function when the
+     * compile-time evaluation of paging_mode_translate(d) is false. Guard
+     * against this by replicating the same check here.
+     */
+    if ( !paging_mode_translate(d) )
+    {
+        ASSERT_UNREACHABLE();
+        return -EACCES;
+    }
+
     if ( unlikely(xatpb->size < extent) )
         return -EILSEQ;
 


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 16:32:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 16:32:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80011.146101 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6c7y-0007Ch-1C; Mon, 01 Feb 2021 16:32:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80011.146101; Mon, 01 Feb 2021 16:32:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6c7x-0007Ca-UU; Mon, 01 Feb 2021 16:32:37 +0000
Received: by outflank-mailman (input) for mailman id 80011;
 Mon, 01 Feb 2021 16:32:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l6c7w-0007CC-16
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 16:32:36 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l6c7v-0006GQ-TU
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 16:32:35 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l6c7v-0003AA-Sh
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 16:32:35 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l6c7s-0007yU-G6; Mon, 01 Feb 2021 16:32:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=mD9K0ja73FntMSDedb+moNvmJiBIdbM4YuBQENcV2EQ=; b=wIimVN4DX5fdHJWmN5uAIdz7s1
	6QNzFi5vR0sSkYlMR0XqoJ8PKiw8b7wIIHiwB2i1Yq7W+SNPa/LzfCnrvpayk9bHzjWhjSb3HPAaO
	NZMXzGW6O0BaVZoME+WJfh/xxXHF+I6T+tSCD9luyWnZfDPPtoEyHWkvKWNSqk7GPDQM=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24600.11552.182964.601654@mariner.uk.xensource.com>
Date: Mon, 1 Feb 2021 16:32:32 +0000
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: <xen-devel@lists.xenproject.org>,
    Wei Liu <wl@xen.org>,
    Juergen Gross <jgross@suse.com>
Subject: [PATCH for-4.15] xenstore: fix build on {Net/Free}BSD
In-Reply-To: <20210201155317.57748-1-roger.pau@citrix.com>
References: <20210201155317.57748-1-roger.pau@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Roger Pau Monne writes ("[PATCH for-4.15] xenstore: fix build on {Net/Free}BSD"):
> The endian.h header is in sys/ on NetBSD and FreeBSD.

That's a bit irritating.  Ah well.

Acked-by: Ian Jackson <iwj@xenproject.org>
Release-Acked-by: Ian Jackson <iwj@xenproject.org>

Ian.


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 16:37:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 16:37:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80013.146113 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6cCW-0007Mc-Js; Mon, 01 Feb 2021 16:37:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80013.146113; Mon, 01 Feb 2021 16:37:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6cCW-0007MV-Gl; Mon, 01 Feb 2021 16:37:20 +0000
Received: by outflank-mailman (input) for mailman id 80013;
 Mon, 01 Feb 2021 16:37:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mXSv=HD=tklengyel.com=tamas@srs-us1.protection.inumbo.net>)
 id 1l6cCU-0007MQ-NP
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 16:37:18 +0000
Received: from MTA-08-4.privateemail.com (unknown [198.54.122.58])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 82bfd945-ae78-43aa-a983-8b0dd615e2b3;
 Mon, 01 Feb 2021 16:37:17 +0000 (UTC)
Received: from MTA-08.privateemail.com (localhost [127.0.0.1])
 by MTA-08.privateemail.com (Postfix) with ESMTP id D819D60078
 for <xen-devel@lists.xenproject.org>; Mon,  1 Feb 2021 11:37:16 -0500 (EST)
Received: from mail-wm1-f51.google.com (unknown [10.20.151.218])
 by MTA-08.privateemail.com (Postfix) with ESMTPA id 99DC16004C
 for <xen-devel@lists.xenproject.org>; Mon,  1 Feb 2021 16:37:16 +0000 (UTC)
Received: by mail-wm1-f51.google.com with SMTP id y187so13747274wmd.3
 for <xen-devel@lists.xenproject.org>; Mon, 01 Feb 2021 08:37:16 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 82bfd945-ae78-43aa-a983-8b0dd615e2b3
X-Gm-Message-State: AOAM530wT7WKzgXQ0VxX+C+nqWRnttcqIbvgjQMmyqcydibKv7mk5R4M
	l4q8vHjg+OGTfjsKgXCQMmQFMILKfOmuSU0qmS8=
X-Google-Smtp-Source: ABdhPJw55lAGPsvAzoKS/Wj+V4tSwCJ0BZJtsGajVfpkkEG3OevZhCKSlbEXLcZ2c79xzskQFoWBEBHXqvl4ooD0YRc=
X-Received: by 2002:a05:600c:214d:: with SMTP id v13mr15950431wml.186.1612197435173;
 Mon, 01 Feb 2021 08:37:15 -0800 (PST)
MIME-Version: 1.0
References: <caba05850df644814d75d5de0574c62ce90e8789.1611971959.git.tamas@tklengyel.com>
 <74f3263a-fe12-d365-ad45-e5556b575539@citrix.com> <044823b7-1bbd-6405-7371-2b06e49cc147@suse.com>
 <0dc1f3c9-6837-ce12-8826-11354346b3c1@citrix.com> <65ded62d-9f57-85ed-c333-e301d195c9f2@suse.com>
In-Reply-To: <65ded62d-9f57-85ed-c333-e301d195c9f2@suse.com>
From: Tamas K Lengyel <tamas@tklengyel.com>
Date: Mon, 1 Feb 2021 11:36:39 -0500
X-Gmail-Original-Message-ID: <CABfawhk6ob5Az6p++-uCwBDkL+nC_SvtWWyHOuGF1WzJabirnA@mail.gmail.com>
Message-ID: <CABfawhk6ob5Az6p++-uCwBDkL+nC_SvtWWyHOuGF1WzJabirnA@mail.gmail.com>
Subject: Re: [PATCH] x86/debug: fix page-overflow bug in dbg_rw_guest_mem
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Elena Ufimtseva <elena.ufimtseva@oracle.com>, 
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>, 
	Xen-devel <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"
X-Virus-Scanned: ClamAV using ClamSMTP

On Mon, Feb 1, 2021 at 7:29 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 01.02.2021 12:44, Andrew Cooper wrote:
> > On 01/02/2021 09:37, Jan Beulich wrote:
> >> On 30.01.2021 03:59, Andrew Cooper wrote:
> >>> On 30/01/2021 01:59, Tamas K Lengyel wrote:
> >>>> When using gdbsx, dbg_rw_guest_mem is used to read/write guest memory. When
> >>>> the buffer being accessed spans a page boundary, the next page needs to be
> >>>> grabbed to access the correct memory for the buffer's overflowing parts.
> >>>> While dbg_rw_guest_mem has logic to handle that, it broke with 229492e210a.
> >>>> Instead of grabbing the next page, the code currently loops back to the
> >>>> start of the first page. This results in errors like the following while
> >>>> trying to use gdb with Linux's lx-dmesg:
> >>>>
> >>>> [    0.114457] PM: hibernation: Registered nosave memory: [mem
> >>>> 0xfdfff000-0xffffffff]
> >>>> [    0.114460] [mem 0x90000000-0xfbffffff] available for PCI demem 0
> >>>> [    0.114462] f]f]
> >>>> Python Exception <class 'ValueError'> embedded null character:
> >>>> Error occurred in Python: embedded null character
> >>>>
> >>>> Fix this bug by taking the variable assignment outside the loop.
> >>>>
> >>>> Signed-off-by: Tamas K Lengyel <tamas@tklengyel.com>
> >>> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
> >> I have to admit that I'm irritated: On January 14th I did submit
> >> a patch ('x86/gdbsx: convert "user" to "guest" accesses') fixing this
> >> as a side effect. I understand that one was taking care of more
> >> issues here, but shouldn't that be preferred? Re-basing isn't going
> >> to be overly difficult, but anyway.
> >
> > I'm sorry.  That was sent during the period where I had no email access
> > (hence I wasn't aware of it; I've been focusing on 4.15 work and this
> > series wasn't pinged),
>
> Oh, so you had actually lost emails, rather than (as I did
> understand so far) only getting them in a very delayed fashion?
>
> Anyway, the first part of that series having been pretty close
> to getting an XSA, I thought you were well aware that at least
> that part is very clearly intended for 4.15. (I also mentioned
> it to you last week on IRC, when you asked what specifically
> wants looking at for 4.15.) Plus, besides bringing the topic
> up on the last two or three community calls, throughout all
> of January I've specifically avoided pinging _any_ of the
> many patches I have pending, to avoid giving you the feeling
> of even more pressure.
>
> > but it also isn't identified as a bugfix, or
> > suitable for backporting in that form.
> >
> > I apologise for the extra work caused unintentionally, but I think this
> > is the correct way around WRT backports, is it not?
>
> It didn't occur to me that there could be a consideration of
> backporting here. But yes, if so wanted, maybe the split is
> helpful. Otoh then the full change could as well be taken,
> to stop the abuse of "user" accesses also in the stable trees.

IMHO this should be backported because it breaks the use of gdbsx for all
affected releases.

Tamas


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 16:48:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 16:48:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80018.146124 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6cMw-0008So-NO; Mon, 01 Feb 2021 16:48:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80018.146124; Mon, 01 Feb 2021 16:48:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6cMw-0008Sh-Jy; Mon, 01 Feb 2021 16:48:06 +0000
Received: by outflank-mailman (input) for mailman id 80018;
 Mon, 01 Feb 2021 16:48:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4N3t=HD=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l6cMv-0008Sc-O3
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 16:48:05 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5def02a2-896b-4c72-8b7a-25691fb14c0f;
 Mon, 01 Feb 2021 16:48:03 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 94E73AB92;
 Mon,  1 Feb 2021 16:48:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5def02a2-896b-4c72-8b7a-25691fb14c0f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612198082; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=YOVHCqx27sDKeippwBQDyPRkXExzEzerkHgT2dfxMis=;
	b=mVcjSUsBS73TmY7PymTduRcqVQfo4ROJSkVOWCOVbVwu1XeUNEMTvV2PVwGbMFe25xQMsF
	54mVXlW/+6TuKnHcIH0mUkB4Idp/WCqnxaFRCoPphJXNd1pSnCgNeQyyK2DEDRgIFadQxM
	OzOgcJtXP1ma041eLroDRDHXBWiFb6k=
Subject: Re: Problems with APIC on versions 4.9 and later (4.8 works)
To: Claudemir Todo Bom <claudemir@todobom.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <CANyqHYfNBHnUiBiXHdt+R3mZ72oYQBnQcaWuKw5gY0uDb_ZqKw@mail.gmail.com>
 <ff799cd4-ba42-e120-107c-5011dc803b5a@suse.com>
 <609a82d8-af12-4764-c4e0-f5ee0e11c130@suse.com>
 <CANyqHYehUWeNfVXqVJX6nrBS_CcKL1DQjyNVa1cUbvbx+zD83w@mail.gmail.com>
 <9d04edfe-0059-6fbf-c1da-2087f6190e64@suse.com>
 <CANyqHYfOC6JY978SRPAQ8Ug3GevFD=jbT6bVVET4+QOv8mv7qA@mail.gmail.com>
 <a0a7bbd0-c4c3-cfb8-5af0-a5a4aff14b76@suse.com>
 <CANyqHYeDR_NUKzPtbfLiUzxAUzerKepbU4B-_6=U-7Y6uy8gpQ@mail.gmail.com>
 <8837c3fb-1e0c-5941-258c-e76551a9e02b@suse.com>
 <8cf69fb3-5b8c-60ea-bd1c-39a0cbd5cb5c@suse.com>
 <CANyqHYeCQc2bt836uyrtm9Eo2T1uPP-+ups-ygfACu6zK36BQg@mail.gmail.com>
 <bd150f4d-4f7e-082e-6b10-03bf1eca7b80@suse.com>
 <CANyqHYeHf8f6G+U2z9A0JC049HPYvWQ+WXZYLCQyWyx5Jvq6BA@mail.gmail.com>
 <803a50a9-707f-14db-b523-cd1f6f685ab4@suse.com>
 <CANyqHYfNjqjm7tFoHD=XDcv_P42wppmx0gjy=--Kz88MZcK6Pw@mail.gmail.com>
 <96a23d4a-b29f-46e2-a0f5-568a5d1f4b9e@suse.com>
 <CANyqHYfue3mPKESc6_U79=ckCHrJo6rEJg0TgXi8-g6=peM01A@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <73a79bef-2c5d-cd94-8eaa-bd4139123a0f@suse.com>
Date: Mon, 1 Feb 2021 17:48:01 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <CANyqHYfue3mPKESc6_U79=ckCHrJo6rEJg0TgXi8-g6=peM01A@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 01.02.2021 16:26, Claudemir Todo Bom wrote:
> On Mon, Feb 1, 2021 at 12:09, Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 01.02.2021 15:46, Claudemir Todo Bom wrote:
>>> Tested first without the debug patch and with following parameters:
>>
>> And this test was all three of the non-debugging patches?
> 
> Yes, all three patches.
> 
>>> xen: dom0_mem=1024M,max:2048M dom0_max_vcpus=4 dom0_vcpus_pin=true smt=true
>>> kernel: loglevel=3
>>>
>>> same behaviour as before... black screen right after the xen messages.
>>>
>>> adding earlyprintk=xen to the kernel command line is sufficient to
>>> make it boot; I imagine this may be happening because Xen is not
>>> releasing the console to the kernel at that moment.
>>
>> If the answer to the above question is "yes", then I start
>> suspecting this to be a different problem. I'm not sure I
>> see a way to debug this without having access to any output
>> (i.e. neither video nor serial). Without "earlyprintk=xen"
>> and instead with "vga=keep watchdog" on the Xen command
>> line, is there anything helpful (without or if need be with
>> the debugging patch in place)?
> 
> with "vga=text-80x25,keep watchdog" and without the earlyprintk,
> system booted.

Well, you clearly don't want to keep "vga=keep". There has to
be something that's still going wrong, but this may now be a
kernel-side issue. In the logs you provided I couldn't spot
anything odd, but these were from working cases after all. So
as said, for now I'm lost, and you may need to live with some
form of workaround (which you've said you're okay with).

Jan


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 18:24:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 18:24:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80072.146165 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6dsP-0001if-35; Mon, 01 Feb 2021 18:24:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80072.146165; Mon, 01 Feb 2021 18:24:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6dsP-0001iY-02; Mon, 01 Feb 2021 18:24:41 +0000
Received: by outflank-mailman (input) for mailman id 80072;
 Mon, 01 Feb 2021 18:24:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6dsO-0001iP-7X; Mon, 01 Feb 2021 18:24:40 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6dsN-00005X-Ue; Mon, 01 Feb 2021 18:24:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6dsN-0002cN-KB; Mon, 01 Feb 2021 18:24:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l6dsN-0001WU-Jh; Mon, 01 Feb 2021 18:24:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=83cLAWfZmRY0/DtxBsgN+vm8chzd+cdXTGGfXvoM/RI=; b=YV/uBNt25tFOLef2oryaenh/JR
	Gt0D0/5sAQiLpJm5S5mdRROtj6jVgRq6xkcQewOY0HMMGGK+Ko+QqGuJzLIg7f7vBo+tjFao1DVpI
	blsx00y96s2a5xfoIuEl10lkT0ojZu4whSnocLPcYaTFlTgKsTK2h7ipjxaaA6jFKK4k=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [linux-5.4 bisection] complete test-amd64-amd64-qemuu-freebsd12-amd64
Message-Id: <E1l6dsN-0001WU-Jh@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 01 Feb 2021 18:24:39 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-amd64-qemuu-freebsd12-amd64
testid guest-start

Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
  Bug introduced:  a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Bug not present: acc402fa5bf502d471d50e3d495379f093a7f9e4
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/158894/


  commit a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Author: David Woodhouse <dwmw@amazon.co.uk>
  Date:   Wed Jan 13 13:26:02 2021 +0000
  
      xen: Fix event channel callback via INTX/GSI
      
      [ Upstream commit 3499ba8198cad47b731792e5e56b9ec2a78a83a2 ]
      
      For a while, event channel notification via the PCI platform device
      has been broken, because we attempt to communicate with xenstore before
      we even have notifications working, with the xs_reset_watches() call
      in xs_init().
      
      We tend to get away with this on Xen versions below 4.0 because we avoid
      calling xs_reset_watches() anyway, because xenstore might not cope with
      reading a non-existent key. And newer Xen *does* have the vector
      callback support, so we rarely fall back to INTX/GSI delivery.
      
      To fix it, clean up a bit of the mess of xs_init() and xenbus_probe()
      startup. Call xs_init() directly from xenbus_init() only in the !XS_HVM
      case, deferring it to be called from xenbus_probe() in the XS_HVM case
      instead.
      
      Then fix up the invocation of xenbus_probe() to happen either from its
      device_initcall if the callback is available early enough, or when the
      callback is finally set up. This means that the hack of calling
      xenbus_probe() from a workqueue after the first interrupt, or directly
      from the PCI platform device setup, is no longer needed.
      
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Link: https://lore.kernel.org/r/20210113132606.422794-2-dwmw2@infradead.org
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/linux-5.4/test-amd64-amd64-qemuu-freebsd12-amd64.guest-start.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/linux-5.4/test-amd64-amd64-qemuu-freebsd12-amd64.guest-start --summary-out=tmp/158894.bisection-summary --basis-template=158387 --blessings=real,real-bisect,real-retry linux-5.4 test-amd64-amd64-qemuu-freebsd12-amd64 guest-start
Searching for failure / basis pass:
 158863 fail [host=pinot1] / 158681 [host=chardonnay1] 158624 [host=huxelrebe1] 158616 [host=fiano0] 158609 [host=godello1] 158603 [host=albana0] 158593 [host=rimava1] 158583 [host=elbling0] 158563 [host=elbling1] 158552 [host=fiano1] 158544 [host=chardonnay0] 158533 [host=huxelrebe1] 158520 [host=albana0] 158500 [host=godello1] 158487 ok.
Failure / basis pass flights: 158863 / 158487
(tree with no url: minios)
Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
Basis pass ceed81a883dc43e2073ecdcfd273fa179e24df5b c530a75c1e6a472b0eb9558310b518f0dfcd8860 c88736f8605eab3b0877d9301f8e845291c6fdd9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e db9c4ad1b1abaef3c38027b9b2700d9250d13125
Generating revisions with ./adhoc-revtuple-generator  git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git#ceed81a883dc43e2073ecdcfd273fa179e24df5b-0fbca6ce4174724f28be5268c5d210f51ed96e31 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#c88736f8605eab3b0877d9301f8e845291c6fdd9-c6be6dab9c4bdf135bc02b61ecc304d5511c3588 git://xenbits.xen.org/qemu-xen-traditional\
 .git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://xenbits.xen.org/qemu-xen.git#7ea428895af2840d85c524f0bd11a38aac308308-7ea428895af2840d85c524f0bd11a38aac308308 git://xenbits.xen.org/osstest/seabios.git#ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e-ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e git://xenbits.xen.org/xen.git#db9c4ad1b1abaef3c38027b9b2700d9250d13125-9dc687f155a57216b83b17f9cde55dd43e06b0cd
Loaded 15001 nodes in revision graph
Searching for test results:
 158477 [host=huxelrebe0]
 158487 pass ceed81a883dc43e2073ecdcfd273fa179e24df5b c530a75c1e6a472b0eb9558310b518f0dfcd8860 c88736f8605eab3b0877d9301f8e845291c6fdd9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e db9c4ad1b1abaef3c38027b9b2700d9250d13125
 158500 [host=godello1]
 158520 [host=albana0]
 158533 [host=huxelrebe1]
 158544 [host=chardonnay0]
 158552 [host=fiano1]
 158563 [host=elbling1]
 158583 [host=elbling0]
 158593 [host=rimava1]
 158603 [host=albana0]
 158609 [host=godello1]
 158616 [host=fiano0]
 158624 [host=huxelrebe1]
 158681 [host=chardonnay1]
 158707 fail irrelevant
 158716 fail irrelevant
 158748 fail 131f8d8a889a5ca66a835eea82bba043ac91a7cf c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e f8708b0ed6d549d1d29b8b5cc287f1f2b642bc63
 158765 fail 131f8d8a889a5ca66a835eea82bba043ac91a7cf c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 6677b5a3577c16501fbc51a3341446905bd21c38
 158796 fail irrelevant
 158818 fail irrelevant
 158841 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158871 pass ceed81a883dc43e2073ecdcfd273fa179e24df5b c530a75c1e6a472b0eb9558310b518f0dfcd8860 c88736f8605eab3b0877d9301f8e845291c6fdd9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e db9c4ad1b1abaef3c38027b9b2700d9250d13125
 158872 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158891 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 158875 pass d26b3110041a9fddc6c6e36398f53f7eab8cff82 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3b769c5110384fb33bcfeddced80f721ec7838cc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e bb9afb7a465d3b7b438f2e11105409d24400f8f4
 158877 pass 1607adf1ac413927b6fe19963ebdb52cf9664d9f c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 158863 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158880 fail 391187744436ec5bf00bab9952d145fc9fb652d1 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 158882 pass b1b943f5b65e374df982d9d37bfdddcb4870372f c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 158883 fail 8c3d3b385ed868660c7dff0336da1bd5a9fb134d c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 158885 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 158886 pass c074680653e27f19eb584522df06758607277f77 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 158887 pass 8ab3478335ad8fc08f14ec73251b084fe02b3ebb c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 158888 pass acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 158889 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 158890 pass acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 158893 pass acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 158894 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
Searching for interesting versions
 Result found: flight 158487 (pass), for basis pass
 For basis failure, parent search stopping at acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3, results HASH(0x562b4a4ce6f0) HASH(0x562b4a489778) HASH(0x562b4a462f20) For basis failure, parent search stopping at 8ab3478335ad8fc08f14ec73251b084fe02b3ebb c530a75c1\
 e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3, results HASH(0x562b4a4b14f8) For basis failure, parent search stopping at c074680653e27f19eb584522df06758607277f77 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0\
 bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3, results HASH(0x562b4a4c70c0) For basis failure, parent search stopping at b1b943f5b65e374df982d9d37bfdddcb4870372f c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3, results HASH(0x562b494e5488) For basis\
  failure, parent search stopping at 1607adf1ac413927b6fe19963ebdb52cf9664d9f c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3, results HASH(0x562b4875ac50) For basis failure, parent search stopping at d26b3110041a9fddc6c6e36398f53f7eab8cff82 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3b769c5110384fb33bcf\
 eddced80f721ec7838cc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e bb9afb7a465d3b7b438f2e11105409d24400f8f4, results HASH(0x562b4a4d9040) For basis failure, parent search stopping at ceed81a883dc43e2073ecdcfd273fa179e24df5b c530a75c1e6a472b0eb9558310b518f0dfcd8860 c88736f8605eab3b0877d9301f8e845291c6fdd9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcf\
 fa45e db9c4ad1b1abaef3c38027b9b2700d9250d13125, results HASH(0x562b4a480028) HASH(0x562b4a49b008) Result found: flight 158748 (fail), for basis failure (at ancestor ~10985)
 Repro found: flight 158871 (pass), for basis pass
 Repro found: flight 158872 (fail), for basis failure
 0 revisions at acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
No revisions left to test, checking graph state.
 Result found: flight 158888 (pass), for last pass
 Result found: flight 158889 (fail), for first failure
 Repro found: flight 158890 (pass), for last pass
 Repro found: flight 158891 (fail), for first failure
 Repro found: flight 158893 (pass), for last pass
 Repro found: flight 158894 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
  Bug introduced:  a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Bug not present: acc402fa5bf502d471d50e3d495379f093a7f9e4
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/158894/


  commit a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Author: David Woodhouse <dwmw@amazon.co.uk>
  Date:   Wed Jan 13 13:26:02 2021 +0000
  
      xen: Fix event channel callback via INTX/GSI
      
      [ Upstream commit 3499ba8198cad47b731792e5e56b9ec2a78a83a2 ]
      
      For a while, event channel notification via the PCI platform device
      has been broken, because we attempt to communicate with xenstore before
      we even have notifications working, with the xs_reset_watches() call
      in xs_init().
      
      We tend to get away with this on Xen versions below 4.0 because we avoid
      calling xs_reset_watches() anyway, because xenstore might not cope with
      reading a non-existent key. And newer Xen *does* have the vector
      callback support, so we rarely fall back to INTX/GSI delivery.
      
      To fix it, clean up a bit of the mess of xs_init() and xenbus_probe()
      startup. Call xs_init() directly from xenbus_init() only in the !XS_HVM
      case, deferring it to be called from xenbus_probe() in the XS_HVM case
      instead.
      
      Then fix up the invocation of xenbus_probe() to happen either from its
      device_initcall if the callback is available early enough, or when the
      callback is finally set up. This means that the hack of calling
      xenbus_probe() from a workqueue after the first interrupt, or directly
      from the PCI platform device setup, is no longer needed.
      
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Link: https://lore.kernel.org/r/20210113132606.422794-2-dwmw2@infradead.org
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>

pnmtopng: 134 colors found
Revision graph left in /home/logs/results/bisect/linux-5.4/test-amd64-amd64-qemuu-freebsd12-amd64.guest-start.{dot,ps,png,html,svg}.
----------------------------------------
158894: tolerable ALL FAIL

flight 158894 linux-5.4 real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/158894/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd12-amd64 13 guest-start   fail baseline untested


jobs:
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Mon Feb 01 18:26:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 18:26:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80080.146180 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6duJ-0001rh-Hj; Mon, 01 Feb 2021 18:26:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80080.146180; Mon, 01 Feb 2021 18:26:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6duJ-0001ra-EN; Mon, 01 Feb 2021 18:26:39 +0000
Received: by outflank-mailman (input) for mailman id 80080;
 Mon, 01 Feb 2021 18:26:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HCN7=HD=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1l6duI-0001rV-Oj
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 18:26:38 +0000
Received: from mail-wm1-f52.google.com (unknown [209.85.128.52])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3689bfb6-838f-498a-b1e0-3a8918503333;
 Mon, 01 Feb 2021 18:26:38 +0000 (UTC)
Received: by mail-wm1-f52.google.com with SMTP id c127so133192wmf.5
 for <xen-devel@lists.xenproject.org>; Mon, 01 Feb 2021 10:26:37 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id q7sm28880579wrx.18.2021.02.01.10.26.36
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 Feb 2021 10:26:36 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3689bfb6-838f-498a-b1e0-3a8918503333
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=jJEaWxu0zKpeLIV6D0wE2eRdohtKy6L9pNjk+x70Wp8=;
        b=fvgfKXgL5QCjd/K3jX59LGwkQas2GKUvdENCDY4K5GYmsMjKDYZ/q2RhOQGMKd8BAp
         diQAwvN6n+xvLLjZjXQFhF5mtGuPIVVddrPz+Z1Zw/jKbXjJ19yC5f6w+GRPee1QlEl4
         DVSwINqqJZL+w/sfh87MYwPxJ3YRnG+AeBDJqOJiWSgcnjqofkV8o9s8dK82n7FXQmge
         4fQ7Qs5LS+ex4JRUD9KsA/bSmre0PKHvaYnbrrABVWxjEf0DmyjbX70riyds8GfnLvPI
         uIKsV295fPcnEEXPpHg6PpESYXOpitTQhaTYNpPIuXlcligQuKS9v20jm+/VWeoQd+vQ
         EBkQ==
X-Gm-Message-State: AOAM530oxfs6ADVHJFYWbZmwdu1PnIJ9mhAhSkscSz7aJWZ5eO2M9EwX
	Skfxc5XLpKum3n130EuHo0U=
X-Google-Smtp-Source: ABdhPJzsjSF9AItcHM0JM82B83x8qRjwGQNpPEQp9FJUe0TfnMExtwpkwZmaZMqVXvFWwiz5Dq0reQ==
X-Received: by 2002:a1c:bcd4:: with SMTP id m203mr172766wmf.120.1612203997162;
        Mon, 01 Feb 2021 10:26:37 -0800 (PST)
Date: Mon, 1 Feb 2021 18:26:35 +0000
From: Wei Liu <wl@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] memory: fix build with COVERAGE but !HVM
Message-ID: <20210201182635.kqtdth6zwtvergbu@liuwe-devbox-debian-v2>
References: <84a05b9e-a0c3-7860-4a59-a591a873b884@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <84a05b9e-a0c3-7860-4a59-a591a873b884@suse.com>
User-Agent: NeoMutt/20180716

On Mon, Feb 01, 2021 at 05:20:54PM +0100, Jan Beulich wrote:
> Xen relies heavily on the DCE stage to remove unused code, so the
> linker doesn't throw an error when a function we have defined a
> prototype for is not implemented.
> 
> On some GCC versions (such as 9.4 provided by Debian sid), the compiler
> DCE stage will not manage to figure that out for
> xenmem_add_to_physmap_batch():
> 
> ld: ld: prelink.o: in function `xenmem_add_to_physmap_batch':
> /xen/xen/common/memory.c:942: undefined reference to `xenmem_add_to_physmap_one'
> /xen/xen/common/memory.c:942:(.text+0x22145): relocation truncated
> to fit: R_X86_64_PLT32 against undefined symbol `xenmem_add_to_physmap_one'
> prelink-efi.o: in function `xenmem_add_to_physmap_batch':
> /xen/xen/common/memory.c:942: undefined reference to `xenmem_add_to_physmap_one'
> make[2]: *** [Makefile:215: /root/xen/xen/xen.efi] Error 1
> make[2]: *** Waiting for unfinished jobs....
> ld: /xen/xen/.xen-syms.0: hidden symbol `xenmem_add_to_physmap_one' isn't defined
> ld: final link failed: bad value
> 
> It is not entirely clear why the compiler DCE is not detecting the
> unused code. However, cloning the check introduced by the commit below
> into xenmem_add_to_physmap_batch() does the trick.
> 
> No functional change intended.
> 
> Fixes: d4f699a0df6c ("x86/mm: p2m_add_foreign() is HVM-only")
> Reported-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Release-Acked-by: Ian Jackson <iwj@xenproject.org>

Reviewed-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 18:26:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 18:26:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80081.146192 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6duV-0001wB-UF; Mon, 01 Feb 2021 18:26:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80081.146192; Mon, 01 Feb 2021 18:26:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6duV-0001w3-Qz; Mon, 01 Feb 2021 18:26:51 +0000
Received: by outflank-mailman (input) for mailman id 80081;
 Mon, 01 Feb 2021 18:26:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IWGu=HD=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l6duT-0001vW-Su
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 18:26:50 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7313bb1e-9402-4f9a-9120-38ecb80355fd;
 Mon, 01 Feb 2021 18:26:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7313bb1e-9402-4f9a-9120-38ecb80355fd
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612204008;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=arVrXoL+C248ZsaLgLTHSqwkl3EUXRTWbZXpN3xpU+c=;
  b=ZxeaJWeUovf7LFPVXLVEMfKFvka5VGxKyHFECk4dK99Ef7kCIQWCXTLu
   65baePcPywsFU74vXBtTx9iAYcfGJ0WoG9JgkWhjwC/8uX7Ck3XWehIEh
   y+LTAnei7zdXJUnYPFtkJtBv+TWNcfFN5aO/7pJgd4JLHWUA1iQzgGS+J
   Y=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: jRdklUyo6Irdf6VSpNNCBncG3MiX2zf6WwVNnbTD5E5PWUEXLee0ezjsGd7HCgRJ4RRyP+UC9s
 kByQY1JLrSQSJ3Iwleq3DP1pBVKwoOZ+ehQZwlqQ8jvIMdAc44mfpNsf3IfIX/p2XfXMvEG86X
 DAiQEn9o4djNXLU8Z3PJjKoEkGm3mn1QwMwWMk3fPQakZTJ0hcf8zPL6KEIpq2KWK1uM2JotfZ
 qzmuYACkeLracertJormAvejFWs4BWYbF9DeUE1wVvqepbNpKcyeuTz8imLg+WQ1e5OaJgLvb8
 rTw=
X-SBRS: 5.2
X-MesageID: 36260110
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,393,1602561600"; 
   d="scan'208";a="36260110"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=b6jSzKexu37FDwYlsbXZVfKRhx8iElhxfSnsmKiyFhrO/kOr3Lk5NaAaXFyF3jH+NsUGcDZ6Y2FSQ/e30NGjmdsqwIFSjtLFkLwHXaeCODVRa44I5niSJfv7TTVw1EmO/JqBFgCkoSL+m9SwVlF+uQ8Ln56FHuLYYkMt1fEIVqR6DtmvkzspTd74c4CGm4wzMDviDNUJZiRg4rDMoxPkWA5lN2FYGxQueZYlcO6B0N7P69l7BVYgM48iP51jqrrVRkuJlLsJSldCeHBNf++h2cErKnKdJ8cHxduh28U3QOIxfjP9X2dIsIXR7GRPfJmwLhmnbpzK1QQcBWfaJkofpw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ZzeBeog67ofe8nEnuWFZTQD0XYXfMZHV8KiLcfzrO+0=;
 b=Vyk02z5c4Q3wVDcQyeQUqZiD9Sdl/mLOA9q4+jxy+pB1vhgVij3Ra8v6DYFM2u05XD3hPZ2k2F6e6NyXpNacvmrwnUboFjrhj7nwWZt5/47f28a9yMgMfYBEp0EPMoX8b4ZKPzQMTgEDDjfXeRMLVn4OPdDdA8cmoNlQIzcEhCIkzxafV4ncwFjx4qLN7yqIQFWhaigwbsIQFnaDG9PalMa2jFof9jVK0v/tOXzKeN8HEdT4RqPHUPSoV3nM8Lgs/GfFCnpAvSmQXgIxZTf0FA/yZ80rZXSK2sL5tGoyFObKgbE2xPWF4QQyzh9gGq0II1SJwzAHhncIJrDmOqJgfA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ZzeBeog67ofe8nEnuWFZTQD0XYXfMZHV8KiLcfzrO+0=;
 b=RKYygpSa2lFRpGo4u1o4CcoP0rdqdVh3Junn+AmrEJ5BUVLSEV4/JoDUzZGIOEHCSd7g2USb19uRdD9o2JR5UF6YDDcrWIaI+pMT7JlUfDWSAg1cFqIvDpHnL9U2SmWZTBdgxyXNgmyg0xA8jeI7h4zljEXZcQ8hJ6rmAf4ybsQ=
Subject: Re: [PATCH] memory: fix build with COVERAGE but !HVM
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: George Dunlap <george.dunlap@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <84a05b9e-a0c3-7860-4a59-a591a873b884@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <a2fff2f5-70d2-19f2-b7d3-01e4db50f709@citrix.com>
Date: Mon, 1 Feb 2021 18:26:29 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
In-Reply-To: <84a05b9e-a0c3-7860-4a59-a591a873b884@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0487.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:13a::12) To BYAPR03MB4728.namprd03.prod.outlook.com
 (2603:10b6:a03:13a::24)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 0d750821-1f92-4521-7dd2-08d8c6dee844
X-MS-TrafficTypeDiagnostic: BYAPR03MB3478:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB3478EA56F976530D46097EEEBAB69@BYAPR03MB3478.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: ts+jpXwdqi5ZFKG8UnrsJFS+jgmWBBhe+Y0/DggmsWFJnw0pzTlp/hKMZiSTPiv39eHQcMlbiOdnRwwiT4DHY5d/QQcObWELZ4V9JBbLIqJflpQc8Uar7dwujV3fCMQlD5mvZ4O8PRN95XDUXuY8ETaZiVb22kTRqpX4t+597b70mvbELvNSQ2BWdJN5LvLqEoJYU+dqEQN5Kv7V9VWYP/gNQJwURFrWkInU/7Djb5Umix/ps243qVSwxVzqvCA4pXENuHlJknlcnq86sY602EcU2uDTLfot3uakXOAmHsLnKW1BOOF0WG4bELmP98YFjWf6Uewr88gPThivVOCuDSigFvFmVCWkVDxrFzBglv/zzXi5NrOVY8LLYSbznHDjVZBCnXZcl4MI+At+GPhJn6d3v7yNvY10LxyeLl9zjeZTiUV6k1lm7P4CTmNlgksV19ptTWaPu2sIxLmdwQMJcPBgqWUqTocjRcbZNG0Nr3BqC8vOmqopjhQMIIoA0MthwvJulZDcTI3T05lfI6zVpEJnv2cRGplpDQRASKzU032VrOR0H25BdDQ9/mi9WDyHykLU+ndgz7SWfIVvVBGja5kUJCxJspXdOoLD21fmAe0=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB4728.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(346002)(136003)(396003)(366004)(39860400002)(31696002)(478600001)(66476007)(4326008)(36756003)(5660300002)(66556008)(2906002)(66946007)(6486002)(8936002)(8676002)(6666004)(110136005)(316002)(16576012)(54906003)(83380400001)(53546011)(31686004)(16526019)(186003)(86362001)(2616005)(956004)(26005)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?dmZiOFB6VWYyNVU5WTcvaDlpenJaV2dpc1Y1VStTQmZMRUFwbTZIVE1Nb3Nu?=
 =?utf-8?B?V3ZjbXo4Yys2cWZRb2NFNTI5dSs2U0pWNmdSbFhneGRlcExDMHhRdU5VUTRn?=
 =?utf-8?B?YjZxSVhXang3MXkwWTlSZXN4S05FMFRiMDFVeFRoUkc3L0JPWFhCcitLM1lo?=
 =?utf-8?B?NHhCbHl3RW5lOG1iZ0NiUlh3NmRWeVphMlgwMkRUTUpzNmpTNjhPVnQzb2ZL?=
 =?utf-8?B?aTF3Vjc3MGN0MW5rU01xL1c0c0szdWNrdk44WXo3N2Z0NGdWdWFvZjV5QUZW?=
 =?utf-8?B?YW91cWFmVi9ZQ1JJbVB1b1dXeWd1bFlxQlhLK25GK05wbWpLWENTYnplOUk4?=
 =?utf-8?B?aS9Lci9iYXQvOE55SjhMMXo5NEVvY00yT2lSelBWMTVpN0JUSENCd0RXeDlq?=
 =?utf-8?B?YWkwZFVRczR4b1BCbUZhcnFWNzZqMzF4bVhsSkhoU1ZJVXBsOXg1ZTcyditU?=
 =?utf-8?B?alptd2cxY2x0d3dYVGlja1dFRUJGekJxSnY4RGJTS3JPY1ptTG1TK2x0T0I0?=
 =?utf-8?B?MWZCM0FzWURxSUY0UmVYMS9rZFJibkx5alZnbVl5cmF2d1AyYnBLdVJoSGhY?=
 =?utf-8?B?TkFFQVdlNE1NRGIwekdJZmtvRThLWnhxbmFzd3dLeFhGMEhlZGFaS3h0cm41?=
 =?utf-8?B?eHo3b3hNbmFJeUVXemV4ZkplY0tRWWEwSmt3Vnc4ZFVmb0FOVHM0b04reGNW?=
 =?utf-8?B?bHptMG5KSkgzTVJJL1FlMG1vREYxa0JqNGpabU9KTnN1cWw1Y1FXUWNIZ2Zv?=
 =?utf-8?B?SE9SdTlEZUlvU0dYWnVjMmExYWFFaVJHVnBNKzZIMzhQYUdNRzhNdkJmaThs?=
 =?utf-8?B?NSt6VXpPL3BjZGlaYitBUXRNdDRGK0x4Vm8wVE83RjJRczRTTlJFWElnYjA5?=
 =?utf-8?B?K3BhVWloM2lST2l2eFpybXJwVFdKMmVkOUZvRTUxdDAvMGE1cW0zTFdlcEhS?=
 =?utf-8?B?ZjRYNDhnVjYrRzVDdHVsdVluK2IxbFVMNFhJUityQm5McVVSZnp4T1hoNzhI?=
 =?utf-8?B?Q21kbytFcWp1akx6VWdwa1FvYm1SWTBMSlpMQU9mZVZlQjBxL0tNQWhKTWVR?=
 =?utf-8?B?MkFsSkU1U2ViUUFaSnZuQ1k2Q0hlMFJ3MmRhaW4rV0dHL0dCcXhDdDVBTURh?=
 =?utf-8?B?NGk2Z1ZWU1hrZzArc0JOcEM5MU9qbDFEa2JoWmgwZ2hSUUlrbGNQR21oclRy?=
 =?utf-8?B?dGhwWnF1dHhCYmJWamhkNGtUNnUzL1hFYXZTTy9Cd3JFS3BHUnBzNmRHMnB5?=
 =?utf-8?B?ZGt2TDdBZnM3UnRqaC9GdVVUV0lxRldxSFhkK21FZFhqOXpLaWxQQ0UxY095?=
 =?utf-8?B?Yjd1QmZNa0JZZXRBTi9YOFlHZ0F6UTRzcnd2Q1prbXM4WjA2NjJTTERJWk5j?=
 =?utf-8?B?QUJ2V08zQVpzZGRTdElMYnNVdmdLMVZtT1QwUkxXVGovV3RaK3ByOVBNamRC?=
 =?utf-8?B?Ymk2dEp6bzRTeWRJbXllazUyY3dDYmpvb2N3djRnPT0=?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 0d750821-1f92-4521-7dd2-08d8c6dee844
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB4728.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 Feb 2021 18:26:35.6156
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: fdymTA+vfHcoxeagtJqW+Tfn7q3Mr1A0ODyJcjXk1gE+hbs9pSUMyEXdq1qKsBXOAI/rnmeXP4Ro9uQ2AdJpK0gGhDwZRozlhPQDgcHKTKk=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB3478
X-OriginatorOrg: citrix.com

On 01/02/2021 16:20, Jan Beulich wrote:
> Xen relies heavily on the DCE stage to remove unused code, so the
> linker doesn't throw an error when a function we have defined a
> prototype for is not implemented.
>
> On some GCC versions (such as 9.4 provided by Debian sid), the compiler
> DCE stage will not manage to figure that out for
> xenmem_add_to_physmap_batch():
>
> ld: ld: prelink.o: in function `xenmem_add_to_physmap_batch':
> /xen/xen/common/memory.c:942: undefined reference to `xenmem_add_to_physmap_one'
> /xen/xen/common/memory.c:942:(.text+0x22145): relocation truncated
> to fit: R_X86_64_PLT32 against undefined symbol `xenmem_add_to_physmap_one'
> prelink-efi.o: in function `xenmem_add_to_physmap_batch':
> /xen/xen/common/memory.c:942: undefined reference to `xenmem_add_to_physmap_one'
> make[2]: *** [Makefile:215: /root/xen/xen/xen.efi] Error 1
> make[2]: *** Waiting for unfinished jobs....
> ld: /xen/xen/.xen-syms.0: hidden symbol `xenmem_add_to_physmap_one' isn't defined
> ld: final link failed: bad value
>
> It is not entirely clear why the compiler DCE is not detecting the
> unused code. However, cloning the check introduced by the commit below
> into xenmem_add_to_physmap_batch() does the trick.
>
> No functional change intended.
>
> Fixes: d4f699a0df6c ("x86/mm: p2m_add_foreign() is HVM-only")
> Reported-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Release-Acked-by: Ian Jackson <iwj@xenproject.org>
> ---
> Julien, since I reused most of your patch'es description, I've kept your
> S-o-b. Please let me know if you want me to drop it.
>
> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -904,6 +904,19 @@ static int xenmem_add_to_physmap_batch(s
>  {
>      union add_to_physmap_extra extra = {};
>  
> +    /*
> +     * While, unlike xenmem_add_to_physmap(), this function is static, there
> +     * still have been cases observed where xatp_permission_check(), invoked
> +     * by our caller, doesn't lead to elimination of this entire function when
> +     * the compile time evaluation of paging_mode_translate(d) is false. Guard
> +     * against this by replicating the same check here.
> +     */

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>, but I feel this
comment can be far more precise/concise.

/* In some configurations (!HVM, COVERAGE), the
xenmem_add_to_physmap_one() call doesn't succumb to
dead-code-elimination.  Duplicate the short-circuit from
xatp_permission_check() to try and help the compiler out. */

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 18:52:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 18:52:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80094.146215 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6eIs-0004vw-5R; Mon, 01 Feb 2021 18:52:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80094.146215; Mon, 01 Feb 2021 18:52:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6eIs-0004vp-0S; Mon, 01 Feb 2021 18:52:02 +0000
Received: by outflank-mailman (input) for mailman id 80094;
 Mon, 01 Feb 2021 18:52:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lIMN=HD=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1l6eIq-0004vk-AF
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 18:52:00 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 25eb2018-c5f6-4e41-a5dd-f774b7cfe454;
 Mon, 01 Feb 2021 18:51:58 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id D647464DDF;
 Mon,  1 Feb 2021 18:51:57 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 25eb2018-c5f6-4e41-a5dd-f774b7cfe454
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1612205518;
	bh=S6ELnD9R8YsJ3bwtYYpTrt6x7n/ZBii0HHtjMePRs30=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=e4O5jeImRhPeoiNW44J2TMs+gufMcxE/uqHsGZmXlQj655SF8X82gYJRBl7dHQseR
	 QuDi7aLK06oU3Y188akcPTqgndngkTU98MPKTxQABaDp3RK7Tbhfx1DhDXglfWjgdA
	 i3sN/Ony0TARZwulZ6+ysCBLEUnwXQ7en+ooTAO5+zTVMRdzAu8ap8GAsESfjLpxrO
	 YgGBabCyLYLssbSm6SfvzdXGzj6SQ65ujB0rThFvo1CwWqhWMEA1TNQPzDdn7Nzr4W
	 gaMoDl5RsQ+eF9VwL1LEZ8R0jVRh7dC1cdF5eIZCKpENTYcWGIVGC32Z+HzMUQkfKy
	 sKKboJCz7HttA==
Date: Mon, 1 Feb 2021 10:51:50 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
cc: Elliott Mitchell <ehem+xen@m5p.com>, 
    Xen-devel <xen-devel@lists.xenproject.org>, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: Xen 4.14.1 on RPI4: device tree generation failed
In-Reply-To: <CABfawhmE20u8PpKK6N2SNwOSjeOyfhqa8U48jykswbw9Yhnpvg@mail.gmail.com>
Message-ID: <alpine.DEB.2.21.2102011050400.29047@sstabellini-ThinkPad-T480s>
References: <CABfawhnvgFLU=VmaqmKyf8DNeVcXoXTD2=XpiwnL0OikC1_z4w@mail.gmail.com> <YBc+Iaf1CCgXO0Aj@mattapan.m5p.com> <CABfawhncPyDKy_G2Lz7MaEb_ZoPadHef7S7KAj-fbCbQq6YNGA@mail.gmail.com> <YBdgf4KKcDn0SCOw@mattapan.m5p.com> <CABfawhmrWX6tO-bESuF5oGec5cLbkHdyjdCGsfwX5AVrwQBsKA@mail.gmail.com>
 <CABfawhmE20u8PpKK6N2SNwOSjeOyfhqa8U48jykswbw9Yhnpvg@mail.gmail.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Sun, 31 Jan 2021, Tamas K Lengyel wrote:
> This seems to have been caused by a monitor being attached to the HDMI
> port, with HDMI unplugged dom0 boots OK.

FYI others have reported issues with swiotlb-xen when using graphics:
https://marc.info/?l=xen-devel&m=161201486021603
Disabling swiotlb-xen makes it work for them.


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 18:56:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 18:56:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80096.146227 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6eMz-00055z-LP; Mon, 01 Feb 2021 18:56:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80096.146227; Mon, 01 Feb 2021 18:56:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6eMz-00055s-IN; Mon, 01 Feb 2021 18:56:17 +0000
Received: by outflank-mailman (input) for mailman id 80096;
 Mon, 01 Feb 2021 18:56:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6eMz-00055k-16; Mon, 01 Feb 2021 18:56:17 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6eMy-0000cT-PE; Mon, 01 Feb 2021 18:56:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6eMy-0003ad-HA; Mon, 01 Feb 2021 18:56:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l6eMy-0004m3-Gd; Mon, 01 Feb 2021 18:56:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=YS3AUZBlYLDRY1OKHnqbHh7XGTphZ0mVLS49H5D+zIk=; b=iIKgUfwYBM5U9ZT/qCOEjyCnTn
	U1Y8A9KRbbX4A/Lh9QS1Keprx439k3nWIt5nv+PsB4/eUmTOIF+JV2OjpK7XBsce4WU6X0WinBd+C
	TXtflDA4ohuF5bBXTnRw631cYaH0JCYuwxjqD285xSIPebFN/NnQBO5o7w5KVgon2P+U=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158892-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 158892: trouble: broken/pass
X-Osstest-Failures:
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:<job status>:broken:regression
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:host-install(5):broken:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=419cd07895891c6642f29085aee07be72413f08c
X-Osstest-Versions-That:
    xen=9dc687f155a57216b83b17f9cde55dd43e06b0cd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 01 Feb 2021 18:56:16 +0000

flight 158892 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158892/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm         <job status>                 broken
 test-arm64-arm64-xl-xsm       5 host-install(5)        broken REGR. vs. 158804

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  419cd07895891c6642f29085aee07be72413f08c
baseline version:
 xen                  9dc687f155a57216b83b17f9cde55dd43e06b0cd

Last test of basis   158804  2021-01-30 04:00:24 Z    2 days
Testing same since   158892  2021-02-01 16:00:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <iwj@xenproject.org>
  Manuel Bouyer <bouyer@netbsd.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      broken  
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-arm64-arm64-xl-xsm broken
broken-step test-arm64-arm64-xl-xsm host-install(5)

Not pushing.

------------------------------------------------------------
commit 419cd07895891c6642f29085aee07be72413f08c
Author: Ian Jackson <iwj@xenproject.org>
Date:   Mon Feb 1 15:18:36 2021 +0000

    xenpmd.c: Remove hard tab
    
    bbed98e7cedc "xenpmd.c: use dynamic allocation" had a hard tab.
    I thought we had fixed that and I thought I had checked.
    Remove it now.
    
    Signed-off-by: Ian Jackson <iwj@xenproject.org>

commit bbed98e7cedcd5072671c21605330075740382d3
Author: Manuel Bouyer <bouyer@netbsd.org>
Date:   Sat Jan 30 19:27:10 2021 +0100

    xenpmd.c: use dynamic allocation
    
    On NetBSD, d_name is larger than 256, so file_name[284] may not be large
    enough (and gcc emits a format-truncation error).
    Use asprintf() instead of snprintf() on a static on-stack buffer.
    
    Signed-off-by: Manuel Bouyer <bouyer@netbsd.org>
    Reviewed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>
    
    Plus
    
    define GNU_SOURCE for asprintf()
    
    Harmless on NetBSD.
    
    Signed-off-by: Manuel Bouyer <bouyer@netbsd.org>
    Reviewed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 19:26:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 19:26:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80101.146242 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6epj-000844-52; Mon, 01 Feb 2021 19:25:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80101.146242; Mon, 01 Feb 2021 19:25:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6epj-00083x-0r; Mon, 01 Feb 2021 19:25:59 +0000
Received: by outflank-mailman (input) for mailman id 80101;
 Mon, 01 Feb 2021 19:25:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lIMN=HD=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1l6eph-00083s-Io
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 19:25:57 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 40d18937-f513-4709-831c-c282826c9f0f;
 Mon, 01 Feb 2021 19:25:56 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 539CC64EC1;
 Mon,  1 Feb 2021 19:25:55 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 40d18937-f513-4709-831c-c282826c9f0f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1612207555;
	bh=uu+QFxb2DiukTYh09ZUgJJQq/9ic5vWtR3/tgsBGaj4=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=KPp81cWvP3mcauxW8s9ZZ6fFq3nHT5B7PnJ4NJc7Em6HNpmeFCrjSEXkkaHB2Di7x
	 EXwAX6J5ZrLqQK7IT42RnTAFXo1fY6qu7ogqVA7rl7iBDQc/iMWNJkw0n1dx99TXBq
	 YP8hFBFqJEFKtp2E0vOIuhdwgUGJ4hoVeb+3eENNCYXJligNr3m44wY71jSIVh6djn
	 14fVBC8WVvP8CONwQlV+Ke4/k7h+xEIVRmSMfCzx9YxHROyneR1yFSC/L4hKVNUAQD
	 qF1qrWzG00Y/sREnt9DicfcJmXeHNCxGsKK05Ev1Ybf9D3jytKVzW6DA5OxgVCDwjf
	 622oob2Q3KEug==
Date: Mon, 1 Feb 2021 11:25:54 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Jukka Kaartinen <jukka.kaartinen@unikie.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Xen-devel <xen-devel@lists.xenproject.org>, 
    Roman Shaposhnik <roman@zededa.com>, 
    Bertrand Marquis <Bertrand.Marquis@arm.com>
Subject: Re: Question about xen and Rasp 4B
In-Reply-To: <c44d45ed-f03e-e901-4a46-0ce57504703f@xen.org>
Message-ID: <alpine.DEB.2.21.2102011055080.29047@sstabellini-ThinkPad-T480s>
References: <CAFnJQOouOUox_kBB66sfSuriUiUSvmhSjPxucJjgrB9DdLOwkg@mail.gmail.com> <alpine.DEB.2.21.2101221449480.14612@sstabellini-ThinkPad-T480s> <CAFnJQOqoqj6mWwR61ZsZj1JxRrdisFtH_87YXCeW619GM+L21Q@mail.gmail.com> <alpine.DEB.2.21.2101251646470.20638@sstabellini-ThinkPad-T480s>
 <CAFnJQOpuehAWde5Ta4ud9CGufwZ-K+=60epzSdKc_DnS75O2iA@mail.gmail.com> <alpine.DEB.2.21.2101261149210.2568@sstabellini-ThinkPad-T480s> <CAFnJQOpgRM-3_aZsnv36w+aQV=gMcBA18ZEw_-man7zmYb4O4Q@mail.gmail.com> <5a33e663-4a6d-6247-769a-8f14db4810f2@xen.org>
 <b9247831-335a-f791-1664-abed6b400a42@unikie.com> <c44d45ed-f03e-e901-4a46-0ce57504703f@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-411723651-1612206594=:29047"
Content-ID: <alpine.DEB.2.21.2102011111550.29047@sstabellini-ThinkPad-T480s>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-411723651-1612206594=:29047
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.21.2102011111551.29047@sstabellini-ThinkPad-T480s>

On Sat, 30 Jan 2021, Julien Grall wrote:
> > > On 27/01/2021 11:47, Jukka Kaartinen wrote:
> > > > 
> > > > 
> > > > On Tue, Jan 26, 2021 at 10:22 PM Stefano Stabellini
> > > > <sstabellini@kernel.org <mailto:sstabellini@kernel.org>> wrote:
> > > > 
> > > >     On Tue, 26 Jan 2021, Jukka Kaartinen wrote:
> > > >      > On Tue, Jan 26, 2021 at 2:54 AM Stefano Stabellini
> > > >     <sstabellini@kernel.org <mailto:sstabellini@kernel.org>> wrote:
> > > >      >        On Sat, 23 Jan 2021, Jukka Kaartinen wrote:
> > > >      >        > Thanks for the response!
> > > >      >        >
> > > >      >        > On Sat, Jan 23, 2021 at 2:27 AM Stefano Stabellini
> > > >     <sstabellini@kernel.org <mailto:sstabellini@kernel.org>> wrote:
> > > >      >        >        + xen-devel, Roman,
> > > >      >        >
> > > >      >        >
> > > >      >        >        On Fri, 22 Jan 2021, Jukka Kaartinen wrote:
> > > >      >        >        > Hi Stefano,
> > > >      >        >        > I'm Jukka Kaartinen, a SW developer working on
> > > >     enabling hypervisors on mobile platforms. One of the HW platforms we use
> > > > in
> > > >      >        >        development is
> > > >      >        >        > Raspberry Pi 4B. I wonder if you could help me a
> > > >     bit :).
> > > >      >        >        >
> > > >      >        >        > I'm trying to enable the GPU with Xen + Raspberry
> > > >     Pi for
> > > >      >        >        dom0.
> > > > https://www.raspberrypi.org/forums/viewtopic.php?f=63&t=232323#p1797605
> > > > <https://www.raspberrypi.org/forums/viewtopic.php?f=63&t=232323#p1797605>
> > > >      >        >        >
> > > >      >        >        > I got as far as the GPU drivers (v3d &
> > > >     vc4) loading without errors. But now Xen returns an error when X is starting:
> > > >      >        >        > (XEN) traps.c:1986:d0v1 HSR=0x93880045
> > > >     pc=0x00007f97b14e70 gva=0x7f7f817000 gpa=0x0000401315d000
> > > >      >        >        > I tried to debug what causes this and it looks
> > > >     like find_mmio_handler cannot find a handler.
> > > >      >        >        > (See more here:
> > > > https://www.raspberrypi.org/forums/viewtopic.php?f=63&t=232323&start=25#p1801691
> > > > <https://www.raspberrypi.org/forums/viewtopic.php?f=63&t=232323&start=25#p1801691>
> > > >     )
> > > >      >        >        >
> > > >      >        >        > Any ideas why the handler is not found?
> > > >      >        >
> > > >      >        >
> > > >      >        >        Hi Jukka,
> > > >      >        >
> > > >      >        >        I am glad to hear that you are interested in Xen on
> > > >     Raspberry Pi :-)  I
> > > >      >        >        haven't tried the GPU yet; I have been using the
> > > >     serial console only.
> > > >      >        >        Roman, did you ever get the GPU working?
> > > >      >        >
> > > >      >        >
> > > >      >        >        The error is a data abort: Linux is trying to
> > > >     access an address
> > > >      >        >        which is not mapped to dom0. The address seems to
> > > >     be 0x401315d000. It is
> > > >      >        >        a pretty high address; I looked in the device tree but
> > > >     couldn't spot it.
> > > >      >        >
> > > >      >        >        From the HSR (the syndrome register) it looks like
> > > >     it is a translation
> > > >      >        >        fault at EL1 on stage 1, as if the Linux address
> > > >     mapping was wrong.
> > > >      >        >        Does anyone have ideas how this could happen? Maybe a
> > > >     reserved-memory
> > > >      >        >        misconfiguration?
> > > >      >        >
> > > >      >        > I had issues with loading the driver in the first place.
> > > >     Apparently swiotlb is used; maybe it can cause this. I also tried to
> > > >      >        enable CMA.
> > > >      >        > config.txt:
> > > >      >        > dtoverlay=vc4-fkms-v3d,cma=320M@0x0-0x40000000
> > > >      >        > gpu_mem=128
> > > >      >
> > > >      >        Also looking at your other reply and the implementation of
> > > >      >        vc4_bo_create, it looks like this is a CMA problem.
> > > >      >
> > > >      >        It would be good to run a test with swiotlb-xen
> > > > disabled:
> > > >      >
> > > >      >        diff --git a/arch/arm/xen/mm.c b/arch/arm/xen/mm.c
> > > >      >        index 467fa225c3d0..2bdd12785d14 100644
> > > >      >        --- a/arch/arm/xen/mm.c
> > > >      >        +++ b/arch/arm/xen/mm.c
> > > >      >        @@ -138,8 +138,7 @@ void
> > > >     xen_destroy_contiguous_region(phys_addr_t pstart, unsigned int
> > > > order)
> > > >      >         static int __init xen_mm_init(void)
> > > >      >         {
> > > >      >                 struct gnttab_cache_flush cflush;
> > > >      >        -        if (!xen_initial_domain())
> > > >      >        -                return 0;
> > > >      >        +        return 0;
> > > >      >                 xen_swiotlb_init(1, false);
> > > >      >
> > > >      >                 cflush.op = 0;
> > > >      >
> > > >      > With this change the kernel does not boot. (BTW, I'm using a
> > > >     USB SSD for my OS.)
> > > >      > [    0.071081] bcm2835-dma fe007000.dma: Unable to set DMA mask
> > > >      > [    0.076277] bcm2835-dma fe007b00.dma: Unable to set DMA mask
> > > >      > (XEN) physdev.c:16:d0v0 PHYSDEVOP cmd=25: not implemented
> > > >      > (XEN) physdev.c:16:d0v0 PHYSDEVOP cmd=15: not implemented
> > > >      > [    0.592695] pci 0000:00:00.0: Failed to add - passthrough or
> > > >     MSI/MSI-X might fail!
> > > >      > (XEN) physdev.c:16:d0v0 PHYSDEVOP cmd=15: not implemented
> > > >      > [    0.606819] pci 0000:01:00.0: Failed to add - passthrough or
> > > >     MSI/MSI-X might fail!
> > > >      > [    1.212820] usb 1-1: device descriptor read/64, error 18
> > > >      > [    1.452815] usb 1-1: device descriptor read/64, error 18
> > > >      > [    1.820813] usb 1-1: device descriptor read/64, error 18
> > > >      > [    2.060815] usb 1-1: device descriptor read/64, error 18
> > > >      > [    2.845548] usb 1-1: device descriptor read/8, error -61
> > > >      > [    2.977603] usb 1-1: device descriptor read/8, error -61
> > > >      > [    3.237530] usb 1-1: device descriptor read/8, error -61
> > > >      > [    3.369585] usb 1-1: device descriptor read/8, error -61
> > > >      > [    3.480765] usb usb1-port1: unable to enumerate USB device
> > > >      >
> > > >      > Traces stop here. I could try with a memory card; maybe it makes
> > > >     a difference.
> > > > 
> > > >     This is very surprising. Disabling swiotlb-xen should make things
> > > > better,
> > > >     not worse. The only reason I can think of why it could make things
> > > > worse
> > > >     is if Linux runs out of low memory. Julien's patch
> > > >     437b0aa06a014ce174e24c0d3530b3e9ab19b18b for Xen should have
> > > > addressed
> > > >     that issue though. Julien, any ideas?
> > > 
> > > I think Stefano's small patch is not enough to disable the swiotlb, as we
> > > will still override the DMA ops. You also likely want:
> > > 
> > > diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
> > > index 8a8949174b1c..aa43e249ecdd 100644
> > > --- a/arch/arm/mm/dma-mapping.c
> > > +++ b/arch/arm/mm/dma-mapping.c
> > > @@ -2279,7 +2279,7 @@ void arch_setup_dma_ops(struct device *dev, u64
> > > dma_base, u64 size,
> > >          set_dma_ops(dev, dma_ops);
> > > 
> > >  #ifdef CONFIG_XEN
> > > -        if (xen_initial_domain())
> > > +        if (0 || xen_initial_domain())
> > >                  dev->dma_ops = &xen_swiotlb_dma_ops;
> > >  #endif
> > >          dev->archdata.dma_ops_setup = true;
> > > 
> > > Otherwise, you would still be using the swiotlb DMA ops, which would not be
> > > functional since we disabled the swiotlb.
> > > 
> > > This would explain the following error because it will check whether the
> > > mask is valid using the callback dma_supported():
> > > 
> > > [    0.071081] bcm2835-dma fe007000.dma: Unable to set DMA mask
> > > 
> > Good catch.
> > GPU works now and I can start X! Thanks! I was also able to create a domU
> > that runs Raspbian OS.
> 
> Glad to hear it works! IIRC, the swiotlb may become necessary when running
> guests, if guest memory ends up being used for DMA transactions.

It is necessary if you are using PV network or PV disk: memory shared by
another domU could end up being used in a DMA transaction. For that, you
need swiotlb-xen.

 
> > Now that swiotlb is disabled what does it mean?
> 
> I can see two reasons:
>   1) You have limited memory below the 30-bit mark, so swiotlb and CMA may
> end up fighting over the low memory.
>   2) We found a few conversion bugs in the swiotlb on RPi4 last year (IIRC the
> DMA and physical addresses may differ). I looked at the Linux branch you
> are using and the fixes all seem to be present. So there might be another bug.
> 
> I am not sure how to figure out where the problem is. Stefano, do you have a
> suggestion where to start?

Both 1) and 2) are possible. It is also possible that another driver,
probably something related to CMA or DRM, has some special dma_ops
handling that doesn't work well together with swiotlb-xen.

Given that the original error seemed to be related to vc4_bo_create,
which calls dma_alloc_wc, I would add a couple of printks to
xen_swiotlb_alloc_coherent to help us figure it out:


diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 2b385c1b4a99..cac8b09af603 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -295,6 +295,7 @@ xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
 	/* Convert the size to actually allocated. */
 	size = 1UL << (order + XEN_PAGE_SHIFT);
 
+	printk("DEBUG %s %d size=%zu flags=%x attrs=%lx\n", __func__, __LINE__, size, flags, attrs);
 	/* On ARM this function returns an ioremap'ped virtual address for
 	 * which virt_to_phys doesn't return the corresponding physical
 	 * address. In fact on ARM virt_to_phys only works for kernel direct
@@ -315,16 +316,20 @@ xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
 	phys = dma_to_phys(hwdev, *dma_handle);
 	dev_addr = xen_phys_to_dma(hwdev, phys);
 	if (((dev_addr + size - 1 <= dma_mask)) &&
-	    !range_straddles_page_boundary(phys, size))
+	    !range_straddles_page_boundary(phys, size)) {
 		*dma_handle = dev_addr;
-	else {
+		printk("DEBUG %s %d phys=%llx dma=%llx\n", __func__, __LINE__, (unsigned long long)phys, (unsigned long long)dev_addr);
+	} else {
 		if (xen_create_contiguous_region(phys, order,
 						 fls64(dma_mask), dma_handle) != 0) {
+			printk("DEBUG %s %d\n",__func__,__LINE__);
 			xen_free_coherent_pages(hwdev, size, ret, (dma_addr_t)phys, attrs);
 			return NULL;
 		}
 		*dma_handle = phys_to_dma(hwdev, *dma_handle);
 		SetPageXenRemapped(virt_to_page(ret));
+		printk("DEBUG %s %d in_mask=%d straddles=%d phys=%llx dma=%llx\n", __func__, __LINE__,
+			(dev_addr + size - 1 <= dma_mask), range_straddles_page_boundary(phys, size), (unsigned long long)phys, (unsigned long long)*dma_handle);
 	}
 	memset(ret, 0, size);
 	return ret;




> > And also can I pass the GPU to domU? Raspberry Pi 4 is limited HW and
> > doesn't have IOMMU. I'm trying to create similar OS like QubesOS where GPU,
> > Network, keyboard/mouse, ... are isolated to their own VMs.
> 
> Without an IOMMU or any other HW mechanism (e.g. an MPU), it would not be safe
> to assign a DMA-capable device to a non-trusted VM.
> 
> If you trust the VM to which you assigned a device, then a possible approach
> would be to have the VM direct mapped (i.e. guest physical address == host
> physical address). Although, I can foresee some issues if you have multiple
> VMs requiring memory below 30 bits (there seems to be a limited amount of it).
> 
> If you don't trust the VM to which you assigned a device, then your best
> option is to expose a PV interface for the device and have your backend
> sanitize the requests and issue them on behalf of the guest.

FYI you could do that with the existing PVFB drivers that only support 2D
graphics.
--8323329-411723651-1612206594=:29047--


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 19:28:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 19:28:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80103.146253 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6esF-0008CE-M1; Mon, 01 Feb 2021 19:28:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80103.146253; Mon, 01 Feb 2021 19:28:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6esF-0008C7-J2; Mon, 01 Feb 2021 19:28:35 +0000
Received: by outflank-mailman (input) for mailman id 80103;
 Mon, 01 Feb 2021 19:28:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6esE-0008Bz-1O; Mon, 01 Feb 2021 19:28:34 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6esD-0001DA-Pt; Mon, 01 Feb 2021 19:28:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6esD-0004VR-GK; Mon, 01 Feb 2021 19:28:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l6esD-0008TA-Fr; Mon, 01 Feb 2021 19:28:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=KohISRuccSuwVcXh8mccbxy+nl+FowFVxzFb0rVsIwY=; b=KOuteP/5DtwlXA56JGB3ER+RE/
	w9Jb8sase2g3tU64wNNO5X7DC6qVsYWyvz/8rhGhH7PmaZ+9kPY0qaUUhEM1XXa7lde6gSNgpUg4U
	69GWMLPBD+oHlnP7dTPR+26MULoyvO6Z9Uj//p8Asde0e3kTWjhDXv+theEs6e++0NUA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158897-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 158897: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=ffbb8aa282de262403275f2395d8540818cf576e
X-Osstest-Versions-That:
    xen=9dc687f155a57216b83b17f9cde55dd43e06b0cd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 01 Feb 2021 19:28:33 +0000

flight 158897 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158897/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 158804
 build-arm64-xsm               6 xen-build                fail REGR. vs. 158804
 build-armhf                   6 xen-build                fail REGR. vs. 158804

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  ffbb8aa282de262403275f2395d8540818cf576e
baseline version:
 xen                  9dc687f155a57216b83b17f9cde55dd43e06b0cd

Last test of basis   158804  2021-01-30 04:00:24 Z    2 days
Failing since        158892  2021-02-01 16:00:25 Z    0 days    2 attempts
Testing same since   158897  2021-02-01 19:02:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <iwj@xenproject.org>
  Manuel Bouyer <bouyer@netbsd.org>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit ffbb8aa282de262403275f2395d8540818cf576e
Author: Roger Pau Monne <roger.pau@citrix.com>
Date:   Mon Feb 1 16:53:17 2021 +0100

    xenstore: fix build on {Net/Free}BSD
    
    The endian.h header is in sys/ on NetBSD and FreeBSD.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit 419cd07895891c6642f29085aee07be72413f08c
Author: Ian Jackson <iwj@xenproject.org>
Date:   Mon Feb 1 15:18:36 2021 +0000

    xenpmd.c: Remove hard tab
    
    bbed98e7cedc "xenpmd.c: use dynamic allocation" had a hard tab.
    I thought we had fixed that and I thought I had checked.
    Remove it now.
    
    Signed-off-by: Ian Jackson <iwj@xenproject.org>

commit bbed98e7cedcd5072671c21605330075740382d3
Author: Manuel Bouyer <bouyer@netbsd.org>
Date:   Sat Jan 30 19:27:10 2021 +0100

    xenpmd.c: use dynamic allocation
    
    On NetBSD, d_name is larger than 256, so file_name[284] may not be large
    enough (and gcc emits a format-truncation error).
    Use asprintf() instead of snprintf() on a static on-stack buffer.
    
    Signed-off-by: Manuel Bouyer <bouyer@netbsd.org>
    Reviewed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>
    
    Plus
    
    define GNU_SOURCE for asprintf()
    
    Harmless on NetBSD.
    
    Signed-off-by: Manuel Bouyer <bouyer@netbsd.org>
    Reviewed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 19:31:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 19:31:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80107.146269 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6ev2-0000mE-4U; Mon, 01 Feb 2021 19:31:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80107.146269; Mon, 01 Feb 2021 19:31:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6ev2-0000m7-1S; Mon, 01 Feb 2021 19:31:28 +0000
Received: by outflank-mailman (input) for mailman id 80107;
 Mon, 01 Feb 2021 19:31:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6ev0-0000ly-MV; Mon, 01 Feb 2021 19:31:26 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6ev0-0001FU-GT; Mon, 01 Feb 2021 19:31:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6ev0-0004Zn-8K; Mon, 01 Feb 2021 19:31:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l6ev0-0000ZO-7o; Mon, 01 Feb 2021 19:31:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=yv4PwHm8W71mwvOdoey6HtU8hoVhrafLlx0oqx9ozLs=; b=JkP0U77LtFGiVfu8kbwy3jfAxA
	xF2wnX8WJXEM36sch8R7eKnKPSZlDxRg5MV+VLAiVW/XEjcHh30jVd54Bl95XYpmitSlvNOLS+MLq
	bTzclJPT2PZXZaELJZmbBY31OkNzv3f/KqnnrQhSFqvvZXJ4KlAPgBQVAX68jlkNCE/U=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158881-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 158881: regressions - FAIL
X-Osstest-Failures:
    linux-5.4:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-multivcpu:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-pvhv2-amd:guest-start:fail:regression
    linux-5.4:test-amd64-coresched-amd64-xl:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-qemuu-freebsd12-amd64:guest-start:fail:regression
    linux-5.4:test-amd64-coresched-i386-xl:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-seattle:guest-start:fail:regression
    linux-5.4:test-amd64-i386-qemut-rhel6hvm-amd:redhat-install:fail:regression
    linux-5.4:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
    linux-5.4:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-pvshim:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-pvhv2-intel:guest-start:fail:regression
    linux-5.4:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    linux-5.4:test-amd64-i386-libvirt:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-credit1:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-pair:guest-start/debian:fail:regression
    linux-5.4:test-amd64-i386-xl:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-libvirt:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-shadow:guest-start:fail:regression
    linux-5.4:test-amd64-i386-xl-xsm:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-credit2:guest-start:fail:regression
    linux-5.4:test-amd64-i386-pair:guest-start/debian:fail:regression
    linux-5.4:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    linux-5.4:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
    linux-5.4:test-amd64-i386-libvirt-pair:guest-start/debian:fail:regression
    linux-5.4:test-amd64-i386-qemut-rhel6hvm-intel:redhat-install:fail:regression
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:windows-install:fail:regression
    linux-5.4:test-amd64-amd64-amd64-pvgrub:debian-di-install:fail:regression
    linux-5.4:test-amd64-amd64-pygrub:debian-di-install:fail:regression
    linux-5.4:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:windows-install:fail:regression
    linux-5.4:test-amd64-amd64-i386-pvgrub:debian-di-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    linux-5.4:test-arm64-arm64-xl-xsm:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-credit1:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-thunderx:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-credit2:guest-start:fail:regression
    linux-5.4:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-amd64:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qcow2:debian-di-install:fail:regression
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemut-debianhvm-amd64:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
    linux-5.4:test-amd64-amd64-libvirt-vhd:debian-di-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:windows-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
    linux-5.4:test-amd64-i386-xl-raw:debian-di-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:windows-install:fail:regression
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-cubietruck:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-libvirt:guest-start:fail:regression
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
    linux-5.4:test-amd64-amd64-qemuu-freebsd11-amd64:guest-start:fail:regression
    linux-5.4:test-amd64-i386-xl-shadow:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    linux-5.4:test-armhf-armhf-xl:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-arndale:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-rtds:guest-start:fail:allowable
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
X-Osstest-Versions-This:
    linux=0fbca6ce4174724f28be5268c5d210f51ed96e31
X-Osstest-Versions-That:
    linux=a829146c3fdcf6d0b76d9c54556a223820f1f73b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 01 Feb 2021 19:31:26 +0000

flight 158881 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158881/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 158387
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 158387
 test-amd64-amd64-xl-multivcpu 14 guest-start             fail REGR. vs. 158387
 test-amd64-amd64-xl-pvhv2-amd 14 guest-start             fail REGR. vs. 158387
 test-amd64-coresched-amd64-xl 14 guest-start             fail REGR. vs. 158387
 test-amd64-amd64-qemuu-freebsd12-amd64 13 guest-start    fail REGR. vs. 158387
 test-amd64-coresched-i386-xl 14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl          14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-seattle  14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-qemut-rhel6hvm-amd 12 redhat-install     fail REGR. vs. 158387
 test-amd64-i386-qemuu-rhel6hvm-amd 12 redhat-install     fail REGR. vs. 158387
 test-amd64-i386-freebsd10-amd64 13 guest-start           fail REGR. vs. 158387
 test-amd64-amd64-xl-pvshim   14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-xl-pvhv2-intel 14 guest-start           fail REGR. vs. 158387
 test-amd64-i386-freebsd10-i386 13 guest-start            fail REGR. vs. 158387
 test-amd64-amd64-xl-xsm      14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-libvirt      14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-xl-credit1  14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-pair        25 guest-start/debian       fail REGR. vs. 158387
 test-amd64-i386-xl           14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-xl          14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-xl-shadow   14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-xl-xsm       14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-xl-credit2  14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-pair         25 guest-start/debian       fail REGR. vs. 158387
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 158387
 test-amd64-i386-libvirt-xsm  14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-libvirt-pair 25 guest-start/debian       fail REGR. vs. 158387
 test-amd64-i386-qemut-rhel6hvm-intel 12 redhat-install   fail REGR. vs. 158387
 test-amd64-amd64-qemuu-nested-amd 12 debian-hvm-install  fail REGR. vs. 158387
 test-amd64-i386-qemuu-rhel6hvm-intel 12 redhat-install   fail REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install  fail REGR. vs. 158387
 test-amd64-i386-xl-qemut-win7-amd64 12 windows-install   fail REGR. vs. 158387
 test-amd64-amd64-amd64-pvgrub 12 debian-di-install       fail REGR. vs. 158387
 test-amd64-amd64-pygrub      12 debian-di-install        fail REGR. vs. 158387
 test-amd64-amd64-qemuu-nested-intel 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemut-win7-amd64 12 windows-install  fail REGR. vs. 158387
 test-amd64-amd64-i386-pvgrub 12 debian-di-install        fail REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-i386-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 158387
 test-arm64-arm64-xl-xsm      14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-credit1  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-thunderx 14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-credit2  14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemut-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qcow2    12 debian-di-install        fail REGR. vs. 158387
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-i386-xl-qemut-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-ws16-amd64 12 windows-install  fail REGR. vs. 158387
 test-amd64-amd64-libvirt-vhd 12 debian-di-install        fail REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemut-ws16-amd64 12 windows-install  fail REGR. vs. 158387
 test-amd64-i386-xl-qemuu-win7-amd64 12 windows-install   fail REGR. vs. 158387
 test-amd64-i386-xl-raw       12 debian-di-install        fail REGR. vs. 158387
 test-amd64-i386-xl-qemut-ws16-amd64 12 windows-install   fail REGR. vs. 158387
 test-armhf-armhf-xl-multivcpu 14 guest-start             fail REGR. vs. 158387
 test-armhf-armhf-xl-credit2  14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-cubietruck 14 guest-start            fail REGR. vs. 158387
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-xl-qemuu-ws16-amd64 12 windows-install   fail REGR. vs. 158387
 test-amd64-amd64-qemuu-freebsd11-amd64 13 guest-start    fail REGR. vs. 158387
 test-amd64-i386-xl-shadow    14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 158387
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 158387
 test-armhf-armhf-xl          14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 158387
 test-armhf-armhf-xl-credit1  14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-arndale  14 guest-start              fail REGR. vs. 158387

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-rtds     14 guest-start              fail REGR. vs. 158387

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass

version targeted for testing:
 linux                0fbca6ce4174724f28be5268c5d210f51ed96e31
baseline version:
 linux                a829146c3fdcf6d0b76d9c54556a223820f1f73b

Last test of basis   158387  2021-01-12 19:40:06 Z   19 days
Failing since        158473  2021-01-17 13:42:20 Z   15 days   25 attempts
Testing same since   158818  2021-01-30 13:48:12 Z    2 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adrian Hunter <adrian.hunter@intel.com>
  Akilesh Kailash <akailash@google.com>
  Al Viro <viro@zeniv.linux.org.uk>
  Alan Maguire <alan.maguire@oracle.com>
  Alan Stern <stern@rowland.harvard.edu>
  Aleksander Jan Bajkowski <olek2@wp.pl>
  Alex Deucher <alexander.deucher@amd.com>
  Alex Leibovich <alexl@marvell.com>
  Alexander Lobakin <alobakin@pm.me>
  Alexander Shishkin <alexander.shishkin@linux.intel.com>
  Alexandru Ardelean <alexandru.ardelean@analog.com>
  Alexey Minnekhanov <alexeymin@postmarketos.org>
  Anders Roxell <anders.roxell@linaro.org>
  Andreas Kemnade <andreas@kemnade.info>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andrey Zhizhikin <andrey.zhizhikin@leica-geosystems.com>
  Andrii Nakryiko <andrii@kernel.org>
  Andy Lutomirski <luto@kernel.org>
  Anthony Iliopoulos <ailiop@suse.com>
  Ard Biesheuvel <ardb@kernel.org>
  Ariel Marcovitch <ariel.marcovitch@gmail.com>
  Ariel Marcovitch <arielmarcovitch@gmail.com>
  Arnaldo Carvalho de Melo <acme@redhat.com>
  Arnd Bergmann <arnd@arndb.de>
  Aya Levin <ayal@nvidia.com>
  Ayush Sawal <ayush.sawal@chelsio.com>
  Baptiste Lepers <baptiste.lepers@gmail.com>
  Bartosz Golaszewski <bgolaszewski@baylibre.com>
  Baruch Siach <baruch@tkos.co.il>
  Ben Skeggs <bskeggs@redhat.com>
  Billy Tsai <billy_tsai@aspeedtech.com>
  Borislav Petkov <bp@suse.de>
  Can Guo <cang@codeaurora.org>
  Catalin Marinas <catalin.marinas@arm.com>
  Cezary Rojewski <cezary.rojewski@intel.com>
  Chris Wilson <chris@chris-wilson.co.uk>
  Christoph Hellwig <hch@lst.de>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Chuck Lever <chuck.lever@oracle.com>
  Chunyan Zhang <chunyan.zhang@unisoc.com>
  Colin Ian King <colin.king@canonical.com>
  Cong Wang <cong.wang@bytedance.com>
  Craig Tatlor <ctatlor97@gmail.com>
  Damien Le Moal <damien.lemoal@wdc.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Daniel Vetter <daniel.vetter@ffwll.ch>
  Daniel Vetter <daniel.vetter@intel.com>
  Dave Wysochanski <dwysocha@redhat.com>
  David Howells <dhowells@redhat.com>
  David Rientjes <rientjes@google.com>
  David S. Miller <davem@davemloft.net>
  David Sterba <dsterba@suse.com>
  David Woodhouse <dwmw@amazon.co.uk>
  David Wu <david.wu@rock-chips.com>
  Dennis Li <Dennis.Li@amd.com>
  Dexuan Cui <decui@microsoft.com>
  Dinghao Liu <dinghao.liu@zju.edu.cn>
  Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
  Enke Chen <enchen@paloaltonetworks.com>
  Eric Biggers <ebiggers@google.com>
  Eric Dumazet <edumazet@google.com>
  Eugene Korenevsky <ekorenevsky@astralinux.ru>
  Ewan D. Milne <emilne@redhat.com>
  Fabian Vogt <fvogt@suse.com>
  Felipe Balbi <balbi@kernel.org>
  Felix Fietkau <nbd@nbd.name>
  Fenghua Yu <fenghua.yu@intel.com>
  Filipe Laíns <lains@archlinux.org>
  Filipe Manana <fdmanana@suse.com>
  Finn Thain <fthain@telegraphics.com.au>
  Florian Fainelli <f.fainelli@gmail.com>
  Florian Westphal <fw@strlen.de>
  Gaurav Kohli <gkohli@codeaurora.org>
  Geert Uytterhoeven <geert+renesas@glider.be>
  Gopal Tiwari <gtiwari@redhat.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Guido Günther <agx@sigxcpu.org>
  Guillaume Nault <gnault@redhat.com>
  Gustavo Pimentel <gustavo.pimentel@synopsys.com>
  Hamish Martin <hamish.martin@alliedtelesis.co.nz>
  Hangbin Liu <liuhangbin@gmail.com>
  Hannes Reinecke <hare@suse.de>
  Hans de Goede <hdegoede@redhat.com>
  Hao Wang <pkuwangh@gmail.com>
  Heikki Krogerus <heikki.krogerus@linux.intel.com>
  Hoang Le <hoang.h.le@dektech.com.au>
  Huazhong Tan <tanhuazhong@huawei.com>
  Ido Schimmel <idosch@nvidia.com>
  Igor Russkikh <irusskikh@marvell.com>
  Ingo Molnar <mingo@kernel.org>
  Ion Agorria <ion@agorria.com>
  Israel Rukshin <israelr@nvidia.com>
  J. Bruce Fields <bfields@redhat.com>
  j.nixdorf@avm.de <j.nixdorf@avm.de>
  Jakub Kicinski <kuba@kernel.org>
  Jamie Iles <jamie@jamieiles.com>
  Jan Kara <jack@suse.cz>
  Jani Nikula <jani.nikula@intel.com>
  Jann Horn <jannh@google.com>
  Jason A. Donenfeld <Jason@zx2c4.com>
  Jason Gerecke <jason.gerecke@wacom.com>
  Jason Gerecke <killertofu@gmail.com>
  Jason Gunthorpe <jgg@nvidia.com>
  JC Kuo <jckuo@nvidia.com>
  Jean Delvare <jdelvare@suse.de>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Axboe <axboe@kernel.dk>
  Jerome Brunet <jbrunet@baylibre.com>
  Jesper Dangaard Brouer <brouer@redhat.com>
  Jethro Beekman <jethro@fortanix.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jiri Kosina <jkosina@suse.cz>
  Jiri Olsa <jolsa@redhat.com>
  Jiri Slaby <jslaby@suse.cz>
  Joel Stanley <joel@jms.id.au>
  Johannes Berg <johannes.berg@intel.com>
  Johannes Nixdorf <j.nixdorf@avm.de>
  John Millikin <john@john-millikin.com>
  Johnathan Smithinovic <johnathan.smithinovic@gmx.at>
  Jon Hunter <jonathanh@nvidia.com>
  Jon Maloy <jmaloy@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Joonsoo Kim <iamjoonsoo.kim@lge.com>
  Josef Bacik <josef@toxicpanda.com>
  Jouni K. Seppänen <jks@iki.fi>
  Jozsef Kadlecsik <kadlec@netfilter.org>
  Juerg Haefliger <juergh@canonical.com>
  Juergen Gross <jgross@suse.com>
  Julian Wiedmann <jwi@linux.ibm.com>
  Kai-Heng Feng <kai.heng.feng@canonical.com>
  Kan Liang <kan.liang@linux.intel.com>
  Kees Cook <keescook@chromium.org>
  Krzysztof Mazur <krzysiek@podlesie.net>
  Krzysztof Piotr Olędzki <ole@ans.pl>
  Lars-Peter Clausen <lars@metafoo.de>
  Lecopzer Chen <lecopzer.chen@mediatek.com>
  Lecopzer Chen <lecopzer@gmail.com>
  Leon Romanovsky <leonro@nvidia.com>
  Leon Schuermann <leon@is.currently.online>
  Linhua Xu <linhua.xu@unisoc.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Longfang Liu <liulongfang@huawei.com>
  Lorenzo Bianconi <lorenzo@kernel.org>
  Lu Baolu <baolu.lu@linux.intel.com>
  Luis Lozano <llozano@google.com>
  Lukas Wunner <lukas@wunner.de>
  Manish Chopra <manishc@marvell.com>
  Manoj Gupta <manojgupta@google.com>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Marc Zyngier <maz@kernel.org>
  Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
  Marcin Wojtas <mw@semihalf.com>
  Mark Bloch <mbloch@nvidia.com>
  Mark Brown <broonie@kernel.org>
  Mark Zhang <markzhang@nvidia.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Martin KaFai Lau <kafai@fb.com>
  Martin Wilck <mwilck@suse.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Masami Hiramatsu <mhiramat@kernel.org>
  Mathias Kresin <dev@kresin.me>
  Mathias Nyman <mathias.nyman@linux.intel.com>
  Matteo Croce <mcroce@microsoft.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Miaohe Lin <linmiaohe@huawei.com>
  Michael Chan <michael.chan@broadcom.com>
  Michael Ellerman <mpe@ellerman.id.au>
  Michael Hennerich <michael.hennerich@analog.com>
  Michael S. Tsirkin <mst@redhat.com>
  Mike Snitzer <snitzer@redhat.com>
  Mikko Perttunen <mperttunen@nvidia.com>
  Mikulas Patocka <mpatocka@redhat.com>
  Ming Lei <ming.lei@redhat.com>
  Mircea Cirjaliu <mcirjaliu@bitdefender.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
  Neal Cardwell <ncardwell@google.com>
  Necip Fazil Yildiran <fazilyildiran@gmail.com>
  Nick Desaulniers <ndesaulniers@google.com>
  Nicolai Stange <nstange@suse.de>
  Nicolas Dichtel <nicolas.dichtel@6wind.com>
  Nilesh Javali <njavali@marvell.com>
  Oded Gabbay <ogabbay@kernel.org>
  Olaf Hering <olaf@aepfle.de>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Pali Rohár <pali@kernel.org>
  Palmer Dabbelt <palmerdabbelt@google.com>
  Pan Bian <bianpan2016@163.com>
  Parav Pandit <parav@nvidia.com>
  Patrik Jakobsson <patrik.r.jakobsson@gmail.com>
  Paul Cercueil <paul@crapouillou.net>
  Paulo Alcantara (SUSE) <pc@cjr.nz>
  Paulo Alcantara <pc@cjr.nz>
  Peter Collingbourne <pcc@google.com>
  Peter Geis <pgwipeout@gmail.com>
  Peter Robinson <pbrobinson@gmail.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Petr Machata <me@pmachata.org>
  Petr Machata <petrm@nvidia.com>
  Phil Oester <kernel@linuxace.com>
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Ping Cheng <ping.cheng@wacom.com>
  Ping Cheng <pinglinux@gmail.com>
  Po-Hsu Lin <po-hsu.lin@canonical.com>
  Qinglang Miao <miaoqinglang@huawei.com>
  Qingqing Zhuo <qingqing.zhuo@amd.com>
  Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Rafael Kitover <rkitover@gmail.com>
  Randy Dunlap <rdunlap@infradead.org>
  Rasmus Villemoes <rasmus.villemoes@prevas.dk>
  Reinette Chatre <reinette.chatre@intel.com>
  Rich Felker <dalias@libc.org>
  Rob Clark <robdclark@chromium.org>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Rohit Maheshwari <rohitm@chelsio.com>
  Roman Guskov <rguskov@dh-electronics.com>
  Ronnie Sahlberg <lsahlber@redhat.com>
  Ross Zwisler <zwisler@google.com>
  Ryan Chen <ryan_chen@aspeedtech.com>
  Saeed Mahameed <saeedm@nvidia.com>
  Sagar Shrikant Kadam <sagar.kadam@sifive.com>
  Sagi Grimberg <sagi@grimberg.me>
  Sameer Pujar <spujar@nvidia.com>
  Samuel Holland <samuel@sholland.org>
  Sasha Levin <sashal@kernel.org>
  Sean Tranchetti <stranche@codeaurora.org>
  Seth Miller <miller.seth@gmail.com>
  Shawn Guo <shawn.guo@linaro.org>
  Shravya Kumbham <shravya.kumbham@xilinx.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Stanislav Fomichev <sdf@google.com>
  Stefan Chulski <stefanc@marvell.com>
  Steffen Klassert <steffen.klassert@secunet.com>
  Stephan Gerhold <stephan@gerhold.net>
  Stephen Boyd <sboyd@kernel.org>
  Steve French <stfrench@microsoft.com>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Su Yue <l@damenly.su>
  Sudip Mukherjee <sudipm.mukherjee@gmail.com>
  Takashi Iwai <tiwai@suse.de>
  Tariq Toukan <tariqt@nvidia.com>
  Theodore Ts'o <tytso@mit.edu>
  Thierry Reding <treding@nvidia.com>
  Thinh Nguyen <Thinh.Nguyen@synopsys.com>
  Thomas Bogendoerfer <tsbogend@alpha.franken.de>
  Thomas Gleixner <tglx@linutronix.de>
  Thomas Hebb <tommyhebb@gmail.com>
  Tobias Waldekranz <tobias@waldekranz.com>
  Toke Høiland-Jørgensen <toke@toke.dk>
  Tom Rix <trix@redhat.com>
  Tony Lindgren <tony@atomide.com>
  Trond Myklebust <trond.myklebust@hammerspace.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
  Vadim Pasternak <vadimp@nvidia.com>
  Valdis Kletnieks <valdis.kletnieks@vt.edu>
  Valdis Klētnieks <valdis.kletnieks@vt.edu>
  Vasily Averin <vvs@virtuozzo.com>
  Victor Zhao <Victor.Zhao@amd.com>
  Vinay Kumar Yadav <vinay.yadav@chelsio.com>
  Vincent Mailhol <mailhol.vincent@wanadoo.fr>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vineet Gupta <vgupta@synopsys.com>
  Vinod Koul <vkoul@kernel.org>
  Viresh Kumar <viresh.kumar@linaro.org>
  Vladimir Oltean <vladimir.oltean@nxp.com>
  Vlastimil Babka <vbabka@suse.cz>
  Wang Hai <wanghai38@huawei.com>
  Wang Hui <john.wanghui@huawei.com>
  Wayne Lin <Wayne.Lin@amd.com>
  Wei Liu <wei.liu@kernel.org>
  Will Deacon <will@kernel.org>
  Willem de Bruijn <willemb@google.com>
  Wolfram Sang <wsa+renesas@sang-engineering.com>
  Wolfram Sang <wsa@kernel.org>
  Xiaolei Wang <xiaolei.wang@windriver.com>
  yangerkun <yangerkun@huawei.com>
  Yazen Ghannam <Yazen.Ghannam@amd.com>
  Yonglong Liu <liuyonglong@huawei.com>
  Youling Tang <tangyouling@loongson.cn>
  YueHaibing <yuehaibing@huawei.com>
  Yufeng Mo <moyufeng@huawei.com>
  zhengbin <zhengbin13@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 8166 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 19:34:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 19:34:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80112.146283 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6exh-0000x3-QW; Mon, 01 Feb 2021 19:34:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80112.146283; Mon, 01 Feb 2021 19:34:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6exh-0000ww-NK; Mon, 01 Feb 2021 19:34:13 +0000
Received: by outflank-mailman (input) for mailman id 80112;
 Mon, 01 Feb 2021 19:34:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WfTc=HD=gmail.com=tamas.k.lengyel@srs-us1.protection.inumbo.net>)
 id 1l6exg-0000wp-Bj
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 19:34:12 +0000
Received: from mail-wm1-x329.google.com (unknown [2a00:1450:4864:20::329])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1067ac81-1368-46cf-90a6-f513217c2e6e;
 Mon, 01 Feb 2021 19:34:11 +0000 (UTC)
Received: by mail-wm1-x329.google.com with SMTP id c127so281991wmf.5
 for <xen-devel@lists.xenproject.org>; Mon, 01 Feb 2021 11:34:11 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1067ac81-1368-46cf-90a6-f513217c2e6e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=CNxUoNjWhiavfXkFSSymJaiViS6oRJzcYybxl80N3nk=;
        b=gwUcxavhokGLM2SU/lpegkHm8EneQkP5BKQZGl4uVaZbE1vENt0EYE5cJ2URZMxvGX
         ml4fE3+ftlIieWrG27CSgpqs5vVVIMfbnOQ3cZkEpi5tH82OQHdttQzkBsfFEskaicsA
         sf0vd/r57UXostTCp7VV5Q8hsgMsjIdyINOd5F7lXU/pfv0zupDjiy4zM72ynQYQHLns
         p57PmARnizYS4yryl4HnP3zw+d/0RAh7F5jQhvl4kju1NndwTAeQpUuSs5p3WznuS0VH
         WSEyJ/CtBqMsgkzFbV2hqPVXtfOSyec0AkJO/5MLtN3uQP2yW2wr3WA+NPWc3L4YHszj
         okSA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=CNxUoNjWhiavfXkFSSymJaiViS6oRJzcYybxl80N3nk=;
        b=DO6cHDsZdD6SLOmM7eKhc9HZsNLTljlV+RARNt2o9BZPvzSRQ/QFU0t2ufXSCKvdte
         INFbR2LzaDy6V088K1yk/nxMe/Q7TxyNTOusOxG+Wrrs++aBXggVm/M8eiHyt84LmnZL
         gGtg5fn0la4xnBhrmWL9g56LYffLg3Gue5yx6kWESeqNTcYVQCa8MpuUZdfPc4mkHO5y
         Tf3c9Fav9Rd43Cl7IPqrNqgeIVBjgUy3ZCcVGkruJy1Gc2tLtRSAEFUmifGCbPij7HGg
         LPyvm4Nr7NMf/KEC4W2BMbKx2UfsGyczT1pll9k12E8ulEcJz3Dp2n9d+wVWhwO4CoE+
         pWVA==
X-Gm-Message-State: AOAM5315a6X2ft7EkKFrrPGaiAItf9Qi1SkLT4EBm86honE7C7J+XcTR
	PL1kXtSkQ4zsGUd51NLkbtuf2YHfqSOSMCFVMHU=
X-Google-Smtp-Source: ABdhPJxRfb0jeyKEB//h8w/pR+hDhLRxkFNu4AjS8FKgWMJJACFEk4FibLlCf6Aa+/NE6qq5gCTaofj+6NKbPk1eGT0=
X-Received: by 2002:a1c:f70c:: with SMTP id v12mr359301wmh.77.1612208050103;
 Mon, 01 Feb 2021 11:34:10 -0800 (PST)
MIME-Version: 1.0
References: <CABfawhnvgFLU=VmaqmKyf8DNeVcXoXTD2=XpiwnL0OikC1_z4w@mail.gmail.com>
 <YBc+Iaf1CCgXO0Aj@mattapan.m5p.com> <CABfawhncPyDKy_G2Lz7MaEb_ZoPadHef7S7KAj-fbCbQq6YNGA@mail.gmail.com>
 <YBdgf4KKcDn0SCOw@mattapan.m5p.com> <CABfawhmrWX6tO-bESuF5oGec5cLbkHdyjdCGsfwX5AVrwQBsKA@mail.gmail.com>
 <YBeXfWf8lQ2nwMtI@mattapan.m5p.com> <CABfawhn74W88nJz5bCvA=VMxX_QqKv1ZaDQxOEtNZu5Pr8mFag@mail.gmail.com>
In-Reply-To: <CABfawhn74W88nJz5bCvA=VMxX_QqKv1ZaDQxOEtNZu5Pr8mFag@mail.gmail.com>
From: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Date: Mon, 1 Feb 2021 14:33:34 -0500
Message-ID: <CABfawhktGwwXNdMrm4uShKSs5MvaUz2hG63wzcDA97z9pGL=Ug@mail.gmail.com>
Subject: Re: Xen 4.14.1 on RPI4: device tree generation failed
To: Elliott Mitchell <ehem+xen@m5p.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Content-Type: text/plain; charset="UTF-8"

On Mon, Feb 1, 2021 at 10:23 AM Tamas K Lengyel
<tamas.k.lengyel@gmail.com> wrote:
>
> On Mon, Feb 1, 2021 at 12:54 AM Elliott Mitchell <ehem+xen@m5p.com> wrote:
> >
> > On Sun, Jan 31, 2021 at 10:06:21PM -0500, Tamas K Lengyel wrote:
> > > With rpi-4.19.y kernel and dtbs
> > > (cc39f1c9f82f6fe5a437836811d906c709e0661c) Xen boots fine and the
> > > previous error is not present. I get the boot log on the serial with
> > > just console=hvc0 from dom0 but the kernel ends up in a panic down the
> > > line:
> >
> > > This seems to have been caused by a monitor being attached to the HDMI
> > > port; with HDMI unplugged, dom0 boots OK.
> >
> > The balance of reports suggests 5.10 is the way to go if you want
> > graphics on an RPi4 with Xen.  Even without Xen, 4.19 is looking
> > rickety on the RPi4.
> >
> >
> > On Sun, Jan 31, 2021 at 09:43:13PM -0500, Tamas K Lengyel wrote:
> > > On Sun, Jan 31, 2021 at 8:59 PM Elliott Mitchell <ehem+xen@m5p.com> wrote:
> > > >
> > > > On Sun, Jan 31, 2021 at 06:50:36PM -0500, Tamas K Lengyel wrote:
> > > > > On Sun, Jan 31, 2021 at 6:33 PM Elliott Mitchell <ehem+undef@m5p.com> wrote:
> > > > > > Presently the rpixen script is grabbing the RPF's 4.19 branch;
> > > > > > dates point to it last being touched last year.  Their tree is at
> > > > > > cc39f1c9f82f6fe5a437836811d906c709e0661c.
> > > > >
> > > > > I've moved the Linux branch up to 5.10 because a fair amount of
> > > > > work went into fixing Xen on the RPi4, which got merged into 5.9,
> > > > > and I would like to be able to build everything from upstream
> > > > > without the custom patches that come with the rpixen script repo.
> > > >
> > > > Please keep track of where your kernel source is checked out, since
> > > > there was a desire to figure out what was going on with the device-trees.
> > > >
> > > >
> > > > Including "console=hvc0 console=AMA0 console=ttyS0 console=tty0" in the
> > > > kernel command-line should ensure you get output from the kernel if it
> > > > manages to start (yes, Linux does support having multiple consoles at the
> > > > same time).
> > >
> > > No output was received from dom0 even with the added console options
> > > (+earlyprintk=xen). The kernel build was from rpi-5.10.y
> > > c9226080e513181ffb3909a905e9c23b8a6e8f62. I'll check if it still boots
> > > with 4.19 next.
> >
> > So, their current HEAD.  This reads like you've got a problematic kernel
> > configuration.  What procedure are you following to generate the
> > configuration you use?
> >
> > Using their upstream as a base and then adding the configuration options
> > for Xen has worked fairly well for me (`make bcm2711_defconfig`,
> > `make menuconfig`, `make zImage`).
> >
> > Notably the options:
> > CONFIG_PARAVIRT
> > CONFIG_XEN_DOM0
> > CONFIG_XEN
> > CONFIG_XEN_BLKDEV_BACKEND
> > CONFIG_XEN_NETDEV_BACKEND
> > CONFIG_HVC_XEN
> > CONFIG_HVC_XEN_FRONTEND
> >
> > Should be set to "y".
>
> Yes, these configs are all set the same way for all Linux builds by the script:
>         make O=.build-arm64 ARCH=arm64
> CROSS_COMPILE=aarch64-none-linux-gnu- bcm2711_defconfig xen.config
>
> I tried both the rpi-5.10.y and rpi-5.9.y branches; neither boots as
> dom0. So far only 4.19 boots.
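The configuration step quoted above can be sanity-checked mechanically. The POSIX shell sketch below is illustrative only: the sample .config fragment is a hypothetical stand-in, and on a real build tree `CONFIG_FILE` would instead point at `.build-arm64/.config` after running the quoted `make ... bcm2711_defconfig xen.config` command. It reports which of the options Elliott listed are not enabled:

```shell
# Hypothetical stand-in for a generated kernel config; on a real build,
# point CONFIG_FILE at .build-arm64/.config instead.
CONFIG_FILE=/tmp/sample.config
cat > "$CONFIG_FILE" <<'EOF'
CONFIG_PARAVIRT=y
CONFIG_XEN=y
CONFIG_XEN_DOM0=y
CONFIG_XEN_BLKDEV_BACKEND=y
CONFIG_HVC_XEN=y
EOF

# Check each option from the list above; collect any not set to "y".
missing=""
for opt in PARAVIRT XEN XEN_DOM0 XEN_BLKDEV_BACKEND \
           XEN_NETDEV_BACKEND HVC_XEN HVC_XEN_FRONTEND; do
    grep -q "^CONFIG_${opt}=y" "$CONFIG_FILE" || missing="$missing CONFIG_$opt"
done
echo "missing:$missing"
```

With the sample fragment above this flags CONFIG_XEN_NETDEV_BACKEND and CONFIG_HVC_XEN_FRONTEND, i.e. options whose absence would leave dom0 without network backends or the Xen console.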

rpi-5.4.y boots but ends up in yet another kernel panic:

(XEN) d0v1: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER0
(XEN) d0v2: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER0
(XEN) d0v3: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER0
[    0.354800] Detected PIPT I-cache on CPU1
[    0.360473] Xen: initializing cpu1
[    0.360508] CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
[    0.361674] Detected PIPT I-cache on CPU2
[    0.367323] Xen: initializing cpu2
[    0.367357] CPU2: Booted secondary processor 0x0000000002 [0x410fd083]
[    0.368460] Detected PIPT I-cache on CPU3
[    0.374110] Xen: initializing cpu3
[    0.374144] CPU3: Booted secondary processor 0x0000000003 [0x410fd083]
[    0.374357] smp: Brought up 1 node, 4 CPUs
[    0.421250] SMP: Total of 4 processors activated.
[    0.426051] CPU features: detected: 32-bit EL0 Support
[    0.431344] CPU features: detected: CRC32 instructions
[    0.459837] ------------[ cut here ]------------
[    0.463848] CPU: CPUs started in inconsistent modes
[    0.463925] Internal error: aarch64 BRK: f2000800 [#1] PREEMPT SMP
[    0.475143] Modules linked in:
[    0.478308] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.4.83-v8+ #1
[    0.484712] Hardware name: Raspberry Pi 4 Model B Rev 1.1 (DT)
[    0.490686] pstate: 60000005 (nZCv daif -PAN -UAO)
[    0.495615] pc : smp_cpus_done+0x74/0x98
[    0.499642] lr : smp_cpus_done+0x74/0x98
[    0.503687] sp : ffffffc01002bdf0
[    0.507102] x29: ffffffc01002bdf0 x28: 0000000000000000
[    0.512545] x27: 0000000000000000 x26: ffffffc010f08958
[    0.517989] x25: ffffffc010f286c8 x24: 0000000000000040
[    0.523442] x23: ffffffc010f08000 x22: ffffffc010f28000
[    0.528876] x21: ffffffc010f08918 x20: ffffffc010f08aa0
[    0.534331] x19: 0000000000000100 x18: 0000000000000000
[    0.539764] x17: 000000004d4bcabc x16: 00000000deadbeef
[    0.545209] x15: 0000000000000030 x14: ffffffffffffffff
[    0.550652] x13: ffffffc09002ba87 x12: ffffffc01002ba8f
[    0.556096] x11: ffffffc01002bdf0 x10: ffffffc01002bdf0
[    0.561540] x9 : 00000000ffffffc8 x8 : 636e69206e692064
[    0.566984] x7 : 6574726174732073 x6 : ffffffc010f09000
[    0.572427] x5 : ffffffc010f09098 x4 : ffffffc01002c000
[    0.577871] x3 : 0000000000000000 x2 : 0000000000000000
[    0.583315] x1 : 0000000000000000 x0 : ffffff8036948000
[    0.588759] Call trace:
[    0.591310]  smp_cpus_done+0x74/0x98
[    0.594999]  smp_init+0xe4/0xfc
[    0.598245]  kernel_init_freeable+0x134/0x27c
[    0.602726]  kernel_init+0x1c/0x118
[    0.606326]  ret_from_fork+0x10/0x18
[    0.610016] Code: 540000c0 90fff480 9125a000 97cb8545 (d4210000)
[    0.616251] ---[ end trace 828ddf3cc765197a ]---
[    0.620987] note: swapper/0[1] exited with preempt_count 1
[    0.626610] Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
[    0.634425] SMP: stopping secondary CPUs
[    0.638511] ---[ end Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b ]---

Tamas


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 19:47:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 19:47:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80118.146299 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6fAA-00022R-2G; Mon, 01 Feb 2021 19:47:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80118.146299; Mon, 01 Feb 2021 19:47:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6fA9-00022K-UF; Mon, 01 Feb 2021 19:47:05 +0000
Received: by outflank-mailman (input) for mailman id 80118;
 Mon, 01 Feb 2021 19:47:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6fA8-00022C-92; Mon, 01 Feb 2021 19:47:04 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6fA8-0001VL-0v; Mon, 01 Feb 2021 19:47:04 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6fA7-0005Kv-PZ; Mon, 01 Feb 2021 19:47:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l6fA7-0004wm-P7; Mon, 01 Feb 2021 19:47:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=L/RyGeykhGokWnnjkZc44CoztMgivFcwlZ+iya+4ezk=; b=y2Q/gJZ1oNeeCSeaw2MlCjLpUR
	9DCQ4zEO0sA/72UoPgM6vlI7ueQjY8DH1r//xfDH+NNZU4BqOzlCMUzdRgm3Qk2sWVMOzGWpD36jk
	MzENqWg9M1u3gWA3ZHFf1YLHrnnC5GmdkXv9Pb+9lbS/4PlBBy08sLz6ncIa/5VJhtwA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158879-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 158879: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=74208cd252c5da9d867270a178799abd802b9338
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 01 Feb 2021 19:47:03 +0000

flight 158879 qemu-mainline real [real]
flight 158896 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/158879/
http://logs.test-lab.xenproject.org/osstest/logs/158896/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                74208cd252c5da9d867270a178799abd802b9338
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  165 days
Failing since        152659  2020-08-21 14:07:39 Z  164 days  333 attempts
Testing same since   158816  2021-01-30 13:16:09 Z    2 days    4 attempts

------------------------------------------------------------
372 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 98359 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 19:48:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 19:48:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80120.146314 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6fBc-00029V-Do; Mon, 01 Feb 2021 19:48:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80120.146314; Mon, 01 Feb 2021 19:48:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6fBc-00029N-Ai; Mon, 01 Feb 2021 19:48:36 +0000
Received: by outflank-mailman (input) for mailman id 80120;
 Mon, 01 Feb 2021 19:48:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Hg0o=HD=alien8.de=bp@srs-us1.protection.inumbo.net>)
 id 1l6fBa-00029E-3Q
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 19:48:35 +0000
Received: from mail.skyhub.de (unknown [2a01:4f8:190:11c2::b:1457])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 17e9759c-d063-45fd-8660-11c6ed79e5c7;
 Mon, 01 Feb 2021 19:48:31 +0000 (UTC)
Received: from zn.tnic (p200300ec2f06fe00e55f3102cc5eb27e.dip0.t-ipconnect.de
 [IPv6:2003:ec:2f06:fe00:e55f:3102:cc5e:b27e])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.skyhub.de (SuperMail on ZX Spectrum 128k) with ESMTPSA id 26BD61EC0253;
 Mon,  1 Feb 2021 20:48:30 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 17e9759c-d063-45fd-8660-11c6ed79e5c7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=alien8.de; s=dkim;
	t=1612208910;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=lOkZrEeAn9oVnNn743NqAyVIKQ3k9/Lp3V74SDhkd64=;
	b=EM+rZpLJ9tNc2VbuYUaopFlP03mUsRMmq8qq/pgZ3JJsI6Y/KowJvU44oGPkeeGAD2qPsa
	j3K801BVFENduL2KRFMJ4dbHjY93xQqSdkCzV0oRKi2WduQAxPNz6//O0rJUMn7L41a83g
	pwy9/FLLvDRay4tdxWs6a0Sjt8mlIoM=
Date: Mon, 1 Feb 2021 20:48:28 +0100
From: Borislav Petkov <bp@alien8.de>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, x86@kernel.org,
	linux-kernel@vger.kernel.org, linux-hyperv@vger.kernel.org,
	virtualization@lists.linux-foundation.org, kvm@vger.kernel.org,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, "H. Peter Anvin" <hpa@zytor.com>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Stephen Hemminger <sthemmin@microsoft.com>,
	Wei Liu <wei.liu@kernel.org>, Deep Shah <sdeep@vmware.com>,
	"VMware, Inc." <pv-drivers@vmware.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Sean Christopherson <seanjc@google.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Wanpeng Li <wanpengli@tencent.com>,
	Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Daniel Lezcano <daniel.lezcano@linaro.org>
Subject: Re: [PATCH v4 07/15] x86/paravirt: switch time pvops functions to
 use static_call()
Message-ID: <20210201194828.GB14590@zn.tnic>
References: <20210120135555.32594-1-jgross@suse.com>
 <20210120135555.32594-8-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20210120135555.32594-8-jgross@suse.com>

On Wed, Jan 20, 2021 at 02:55:47PM +0100, Juergen Gross wrote:
> The time pvops functions are the only ones left which might be
> used in 32-bit mode and which return a 64-bit value.
> 
> Switch them to use the static_call() mechanism instead of pvops, as
> this allows quite some simplification of the pvops implementation.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
> V4:
> - drop paravirt_time.h again
> - don't move Hyper-V code (Michael Kelley)
> ---
>  arch/x86/Kconfig                      |  1 +
>  arch/x86/include/asm/mshyperv.h       |  2 +-
>  arch/x86/include/asm/paravirt.h       | 17 ++++++++++++++---
>  arch/x86/include/asm/paravirt_types.h |  6 ------
>  arch/x86/kernel/cpu/vmware.c          |  5 +++--
>  arch/x86/kernel/kvm.c                 |  2 +-
>  arch/x86/kernel/kvmclock.c            |  2 +-
>  arch/x86/kernel/paravirt.c            | 16 ++++++++++++----
>  arch/x86/kernel/tsc.c                 |  2 +-
>  arch/x86/xen/time.c                   | 11 ++++-------
>  drivers/clocksource/hyperv_timer.c    |  5 +++--
>  drivers/xen/time.c                    |  2 +-
>  12 files changed, 42 insertions(+), 29 deletions(-)
> 
> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> index 21f851179ff0..7ccd4a80788c 100644
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -771,6 +771,7 @@ if HYPERVISOR_GUEST
>  
>  config PARAVIRT
>  	bool "Enable paravirtualization code"
> +	depends on HAVE_STATIC_CALL
>  	help
>  	  This changes the kernel so it can modify itself when it is run
>  	  under a hypervisor, potentially improving performance significantly
> diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h
> index 30f76b966857..b4ee331d29a7 100644
> --- a/arch/x86/include/asm/mshyperv.h
> +++ b/arch/x86/include/asm/mshyperv.h
> @@ -63,7 +63,7 @@ typedef int (*hyperv_fill_flush_list_func)(
>  static __always_inline void hv_setup_sched_clock(void *sched_clock)
>  {
>  #ifdef CONFIG_PARAVIRT
> -	pv_ops.time.sched_clock = sched_clock;
> +	paravirt_set_sched_clock(sched_clock);
>  #endif
>  }
>  
> diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
> index 4abf110e2243..1e45b46fae84 100644
> --- a/arch/x86/include/asm/paravirt.h
> +++ b/arch/x86/include/asm/paravirt.h
> @@ -15,11 +15,22 @@
>  #include <linux/bug.h>
>  #include <linux/types.h>
>  #include <linux/cpumask.h>
> +#include <linux/static_call_types.h>
>  #include <asm/frame.h>
>  
> -static inline unsigned long long paravirt_sched_clock(void)
> +u64 dummy_steal_clock(int cpu);
> +u64 dummy_sched_clock(void);
> +
> +DECLARE_STATIC_CALL(pv_steal_clock, dummy_steal_clock);
> +DECLARE_STATIC_CALL(pv_sched_clock, dummy_sched_clock);

Did you build this before sending?

I'm test-applying this on rc6 + tip/master, so I'm probably using a
different tree; it looks like something has changed in the meantime.
-rc6 has a couple of Xen changes which meant applying those needed some
wiggling in...

Maybe you should redo them on top of tip/master. That is, *if* they're
going to eventually go through tip. The diffstat has Xen stuff too, so we
might need some synchronization here about what goes where and how...

./arch/x86/include/asm/paravirt.h:24:1: warning: data definition has no type or storage class
   24 | DECLARE_STATIC_CALL(pv_steal_clock, dummy_steal_clock);
      | ^~~~~~~~~~~~~~~~~~~
./arch/x86/include/asm/paravirt.h:24:1: error: type defaults to ‘int’ in declaration of ‘DECLARE_STATIC_CALL’ [-Werror=implicit-int]
./arch/x86/include/asm/paravirt.h:24:1: warning: parameter names (without types) in function declaration
./arch/x86/include/asm/paravirt.h:25:1: warning: data definition has no type or storage class
   25 | DECLARE_STATIC_CALL(pv_sched_clock, dummy_sched_clock);
      | ^~~~~~~~~~~~~~~~~~~
./arch/x86/include/asm/paravirt.h:25:1: error: type defaults to ‘int’ in declaration of ‘DECLARE_STATIC_CALL’ [-Werror=implicit-int]
./arch/x86/include/asm/paravirt.h:25:1: warning: parameter names (without types) in function declaration
./arch/x86/include/asm/paravirt.h: In function ‘paravirt_sched_clock’:
./arch/x86/include/asm/paravirt.h:33:9: error: implicit declaration of function ‘static_call’ [-Werror=implicit-function-declaration]
   33 |  return static_call(pv_sched_clock)();
      |         ^~~~~~~~~~~
./arch/x86/include/asm/paravirt.h:33:21: error: ‘pv_sched_clock’ undeclared (first use in this function); did you mean ‘dummy_sched_clock’?
   33 |  return static_call(pv_sched_clock)();
      |                     ^~~~~~~~~~~~~~
      |                     dummy_sched_clock
./arch/x86/include/asm/paravirt.h:33:21: note: each undeclared identifier is reported only once for each function it appears in
./arch/x86/include/asm/paravirt.h: In function ‘paravirt_steal_clock’:
./arch/x86/include/asm/paravirt.h:47:21: error: ‘pv_steal_clock’ undeclared (first use in this function); did you mean ‘dummy_steal_clock’?
   47 |  return static_call(pv_steal_clock)(cpu);
      |                     ^~~~~~~~~~~~~~
      |                     dummy_steal_clock
cc1: some warnings being treated as errors
make[1]: *** [scripts/Makefile.build:117: arch/x86/kernel/asm-offsets.s] Error 1
make: *** [Makefile:1200: prepare0] Error 2

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 21:00:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 21:00:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80144.146343 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6gJ0-0001XO-2f; Mon, 01 Feb 2021 21:00:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80144.146343; Mon, 01 Feb 2021 21:00:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6gIz-0001XH-Vp; Mon, 01 Feb 2021 21:00:17 +0000
Received: by outflank-mailman (input) for mailman id 80144;
 Mon, 01 Feb 2021 21:00:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6gIy-0001X9-IF; Mon, 01 Feb 2021 21:00:16 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6gIy-0002pI-DV; Mon, 01 Feb 2021 21:00:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6gIy-0008Vh-42; Mon, 01 Feb 2021 21:00:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l6gIy-0006LD-3W; Mon, 01 Feb 2021 21:00:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=+vnIL6Jw7wQc91cBGcRRdnFNVsW/lNrzqNqh+OwJ7Ls=; b=v+8LPY/65tcsBPB06btNMww5Yi
	GoCsK2y7pSND9cfE7yzLGHeUt/fhWUX6xkaJY6z6Ues2CWobNgn815V+kVaqgKW7k0sG1bGJgq11D
	yw1sLcPQclqQ0sLyaZGeIZ2Edqc9OCc4UsM/VubjXYKzzk11r9V5y5kjFOmZkcres99k=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158900-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 158900: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=ffbb8aa282de262403275f2395d8540818cf576e
X-Osstest-Versions-That:
    xen=9dc687f155a57216b83b17f9cde55dd43e06b0cd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 01 Feb 2021 21:00:16 +0000

flight 158900 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158900/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 158804
 build-arm64-xsm               6 xen-build                fail REGR. vs. 158804
 build-armhf                   6 xen-build                fail REGR. vs. 158804

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  ffbb8aa282de262403275f2395d8540818cf576e
baseline version:
 xen                  9dc687f155a57216b83b17f9cde55dd43e06b0cd

Last test of basis   158804  2021-01-30 04:00:24 Z    2 days
Failing since        158892  2021-02-01 16:00:25 Z    0 days    3 attempts
Testing same since   158897  2021-02-01 19:02:25 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <iwj@xenproject.org>
  Manuel Bouyer <bouyer@netbsd.org>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit ffbb8aa282de262403275f2395d8540818cf576e
Author: Roger Pau Monne <roger.pau@citrix.com>
Date:   Mon Feb 1 16:53:17 2021 +0100

    xenstore: fix build on {Net/Free}BSD
    
    The endian.h header is in sys/ on NetBSD and FreeBSD.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit 419cd07895891c6642f29085aee07be72413f08c
Author: Ian Jackson <iwj@xenproject.org>
Date:   Mon Feb 1 15:18:36 2021 +0000

    xenpmd.c: Remove hard tab
    
    bbed98e7cedc "xenpmd.c: use dynamic allocation" had a hard tab.
    I thought we had fixed that and I thought I had checked.
    Remove it now.
    
    Signed-off-by: Ian Jackson <iwj@xenproject.org>

commit bbed98e7cedcd5072671c21605330075740382d3
Author: Manuel Bouyer <bouyer@netbsd.org>
Date:   Sat Jan 30 19:27:10 2021 +0100

    xenpmd.c: use dynamic allocation
    
    On NetBSD, d_name is larger than 256, so file_name[284] may not be large
    enough (and gcc emits a format-truncation error).
    Use asprintf() instead of snprintf() on a static on-stack buffer.
    
    Signed-off-by: Manuel Bouyer <bouyer@netbsd.org>
    Reviewed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>
    
    Plus
    
    define GNU_SOURCE for asprintf()
    
    Harmless on NetBSD.
    
    Signed-off-by: Manuel Bouyer <bouyer@netbsd.org>
    Reviewed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 21:25:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 21:25:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80153.146371 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6ggz-0003dL-9O; Mon, 01 Feb 2021 21:25:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80153.146371; Mon, 01 Feb 2021 21:25:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6ggz-0003dE-5K; Mon, 01 Feb 2021 21:25:05 +0000
Received: by outflank-mailman (input) for mailman id 80153;
 Mon, 01 Feb 2021 21:25:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0ttS=HD=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1l6ggx-0003d9-T1
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 21:25:03 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3d886e01-5ac5-4c6c-9e7f-f4232cca3192;
 Mon, 01 Feb 2021 21:24:58 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 111LOi9v017556
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Mon, 1 Feb 2021 16:24:50 -0500 (EST) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 111LOgTQ017555;
 Mon, 1 Feb 2021 13:24:42 -0800 (PST) (envelope-from ehem)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3d886e01-5ac5-4c6c-9e7f-f4232cca3192
Date: Mon, 1 Feb 2021 13:24:42 -0800
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
        Stefano Stabellini <sstabellini@kernel.org>,
        Julien Grall <julien@xen.org>,
        Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: Xen 4.14.1 on RPI4: device tree generation failed
Message-ID: <YBhxmo5sFyTs/XTr@mattapan.m5p.com>
References: <CABfawhnvgFLU=VmaqmKyf8DNeVcXoXTD2=XpiwnL0OikC1_z4w@mail.gmail.com>
 <YBc+Iaf1CCgXO0Aj@mattapan.m5p.com>
 <CABfawhncPyDKy_G2Lz7MaEb_ZoPadHef7S7KAj-fbCbQq6YNGA@mail.gmail.com>
 <YBdgf4KKcDn0SCOw@mattapan.m5p.com>
 <CABfawhmrWX6tO-bESuF5oGec5cLbkHdyjdCGsfwX5AVrwQBsKA@mail.gmail.com>
 <YBeXfWf8lQ2nwMtI@mattapan.m5p.com>
 <CABfawhn74W88nJz5bCvA=VMxX_QqKv1ZaDQxOEtNZu5Pr8mFag@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CABfawhn74W88nJz5bCvA=VMxX_QqKv1ZaDQxOEtNZu5Pr8mFag@mail.gmail.com>
X-Spam-Status: No, score=0.4 required=10.0 tests=KHOP_HELO_FCRDNS autolearn=no
	autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

On Mon, Feb 01, 2021 at 10:23:34AM -0500, Tamas K Lengyel wrote:
> On Mon, Feb 1, 2021 at 12:54 AM Elliott Mitchell <ehem+xen@m5p.com> wrote:
> > On Sun, Jan 31, 2021 at 09:43:13PM -0500, Tamas K Lengyel wrote:
> > > No output from dom0 received even with the added console options
> > > (+earlyprintk=xen). The kernel build was from rpi-5.10.y
> > > c9226080e513181ffb3909a905e9c23b8a6e8f62. I'll check if it still boots
> > > with 4.19 next.
> >
> > So, their current HEAD.  This reads like you've got a problematic kernel
> > configuration.  What procedure are you following to generate the
> > configuration you use?
> >
> > Using their upstream as a base and then adding the configuration options
> > for Xen has worked fairly well for me (`make bcm2711_defconfig`,
> > `make menuconfig`, `make zImage`).
> >
> > Notably the options:
> > CONFIG_PARAVIRT
> > CONFIG_XEN_DOM0
> > CONFIG_XEN
> > CONFIG_XEN_BLKDEV_BACKEND
> > CONFIG_XEN_NETDEV_BACKEND
> > CONFIG_HVC_XEN
> > CONFIG_HVC_XEN_FRONTEND
> >
> > Should be set to "y".
> 
> Yes, these configs are all set the same way for all Linux builds by the script:
>         make O=.build-arm64 ARCH=arm64
> CROSS_COMPILE=aarch64-none-linux-gnu- bcm2711_defconfig xen.config
> 
> I tried with both the rpi-5.10.y and rpi-5.9.y, neither boot up as
> dom0. So far only 4.19 boots.

So you're using a scripted procedure to generate the configuration.  The
actual kernel configuration is saved in the file ".config" in the build
directory.  Could you confirm whether those are actually being set?

Try running `grep -eCONFIG_PARAVIRT -eCONFIG_XEN_DOM0 -eCONFIG_XEN
-eCONFIG_HVC_XEN -eCONFIG_HVC_XEN_FRONTEND .config`; those 5 must
be "=y".  Various kernel configuration options depend upon others, so
it is possible you need to set another option before those get enabled.


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Mon Feb 01 22:14:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 22:14:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80162.146400 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6hSn-00005k-75; Mon, 01 Feb 2021 22:14:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80162.146400; Mon, 01 Feb 2021 22:14:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6hSn-00005c-47; Mon, 01 Feb 2021 22:14:29 +0000
Received: by outflank-mailman (input) for mailman id 80162;
 Mon, 01 Feb 2021 22:14:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WfTc=HD=gmail.com=tamas.k.lengyel@srs-us1.protection.inumbo.net>)
 id 1l6hSl-00005X-Qv
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 22:14:27 +0000
Received: from mail-wm1-x32a.google.com (unknown [2a00:1450:4864:20::32a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e734d697-f43a-46d2-8722-8d8448923f1e;
 Mon, 01 Feb 2021 22:14:26 +0000 (UTC)
Received: by mail-wm1-x32a.google.com with SMTP id o5so649868wmq.2
 for <xen-devel@lists.xenproject.org>; Mon, 01 Feb 2021 14:14:26 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e734d697-f43a-46d2-8722-8d8448923f1e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=TW6/+i6xxLsh0EqOzgpV/nui3BqyZzoaQpTf8n/t8Fo=;
        b=Cf7LpT0pqUxYYkmIY/8zzH+AjPGoFVdwEtbm0JMG9nDPPj/mEnaH08powxAAjOi2k+
         m7xqDIoKlQKxsxXyYxmbnf0HYAu7kaImc4pHFMakfW4znAWeeOImGXOreajYwME+6+rt
         8DPdiaTwb/TWmSr3y144x3hEXYLOKMX7eRIqkP5Afz51vIibDT2WNVTTmijViVPY+sN9
         CdsNCXSr+JsczQkCtB6OtGK3iUO890KUEKxW6EGxf4sldyFKTAt8dpy7mbRY7rzlhvAk
         zZv/gDRdvO5siw8JpwDuHGamNLRUDemGZ7lJ6S7PDyQaEABKBRX1XRZeSBjz+WrTVGTV
         1jWQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=TW6/+i6xxLsh0EqOzgpV/nui3BqyZzoaQpTf8n/t8Fo=;
        b=revZRlP+Rgo96oQBN3KqriRAhLmHaBgJA7TgxPHlre0CVFGD6gtoXpp5Dbh9Jq8tz9
         Mb7C5wif0yRNRqMzq9Xa3u7qaTfcm+l/M9KX2NqmyVV7r9SVu5jJt9y9vuLHowSEw8Gl
         j+vMHL9LpJVkGSpA351z34sNvugCJ7DV6VRuHddD4OaImTUE8oIK13JaKTXU4K2TP9yF
         rRaZtM4r4ig1xw9RM62gJdr7IfqX9ok9f373d7ysJW/RDvChtK2Ya5OSWZEdLhLj1bg1
         dkdSkkbPnhYXCDx70GyKqWi7inn8BPLXFm6keQhcCXIIJN1jClzq+3k57QcaG40S0+3R
         fBGA==
X-Gm-Message-State: AOAM533jNlJVKtSTQ+NzW8Wa8XHgUC9IuNT243WbtT5Btfe3GWaYIKqj
	vYm4Yc9ZY1iUG93PuiDH49N0s27wi9+ogieCXWA=
X-Google-Smtp-Source: ABdhPJwKQ+p2lSjCC8U2eSOiulb3ai5FDO3HKkOsTFN5/4H52/RjazbxKdTXx4bdhqggkxygrYtAHCEzvqJzJmgIv+s=
X-Received: by 2002:a05:600c:214d:: with SMTP id v13mr824621wml.186.1612217666007;
 Mon, 01 Feb 2021 14:14:26 -0800 (PST)
MIME-Version: 1.0
References: <CABfawhnvgFLU=VmaqmKyf8DNeVcXoXTD2=XpiwnL0OikC1_z4w@mail.gmail.com>
 <YBc+Iaf1CCgXO0Aj@mattapan.m5p.com> <CABfawhncPyDKy_G2Lz7MaEb_ZoPadHef7S7KAj-fbCbQq6YNGA@mail.gmail.com>
 <YBdgf4KKcDn0SCOw@mattapan.m5p.com> <CABfawhmrWX6tO-bESuF5oGec5cLbkHdyjdCGsfwX5AVrwQBsKA@mail.gmail.com>
 <YBeXfWf8lQ2nwMtI@mattapan.m5p.com> <CABfawhn74W88nJz5bCvA=VMxX_QqKv1ZaDQxOEtNZu5Pr8mFag@mail.gmail.com>
 <YBhxmo5sFyTs/XTr@mattapan.m5p.com>
In-Reply-To: <YBhxmo5sFyTs/XTr@mattapan.m5p.com>
From: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Date: Mon, 1 Feb 2021 17:13:48 -0500
Message-ID: <CABfawhmWDGM3bNQ3K_GRbnOmHoE1nT=-V8w8NkeceQqMB-Zfgg@mail.gmail.com>
Subject: Re: Xen 4.14.1 on RPI4: device tree generation failed
To: Elliott Mitchell <ehem+xen@m5p.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Content-Type: text/plain; charset="UTF-8"

On Mon, Feb 1, 2021 at 4:24 PM Elliott Mitchell <ehem+xen@m5p.com> wrote:
>
> On Mon, Feb 01, 2021 at 10:23:34AM -0500, Tamas K Lengyel wrote:
> > On Mon, Feb 1, 2021 at 12:54 AM Elliott Mitchell <ehem+xen@m5p.com> wrote:
> > > On Sun, Jan 31, 2021 at 09:43:13PM -0500, Tamas K Lengyel wrote:
> > > > No output from dom0 received even with the added console options
> > > > (+earlyprintk=xen). The kernel build was from rpi-5.10.y
> > > > c9226080e513181ffb3909a905e9c23b8a6e8f62. I'll check if it still boots
> > > > with 4.19 next.
> > >
> > > So, their current HEAD.  This reads like you've got a problematic kernel
> > > configuration.  What procedure are you following to generate the
> > > configuration you use?
> > >
> > > Using their upstream as a base and then adding the configuration options
> > > for Xen has worked fairly well for me (`make bcm2711_defconfig`,
> > > `make menuconfig`, `make zImage`).
> > >
> > > Notably the options:
> > > CONFIG_PARAVIRT
> > > CONFIG_XEN_DOM0
> > > CONFIG_XEN
> > > CONFIG_XEN_BLKDEV_BACKEND
> > > CONFIG_XEN_NETDEV_BACKEND
> > > CONFIG_HVC_XEN
> > > CONFIG_HVC_XEN_FRONTEND
> > >
> > > Should be set to "y".
> >
> > Yes, these configs are all set the same way for all Linux builds by the script:
> >         make O=.build-arm64 ARCH=arm64
> > CROSS_COMPILE=aarch64-none-linux-gnu- bcm2711_defconfig xen.config
> >
> > I tried with both the rpi-5.10.y and rpi-5.9.y, neither boot up as
> > dom0. So far only 4.19 boots.
>
> So you're using a scripted procedure to generate the configuration.  The
> actual kernel configuration is saved in the file ".config" in the build
> directory.  Could you confirm whether those are actually being set?
>
> Try running `grep -eCONFIG_PARAVIRT -eCONFIG_XEN_DOM0 -eCONFIG_XEN
> -eCONFIG_HVC_XEN -eCONFIG_HVC_XEN_FRONTEND .config`; those 5 must
> be "=y".  Various kernel configuration options depend upon others, so
> it is possible you need to set another option before those get enabled.

These options are all set; confirming that in the actual config file
was one of the first things I did. There is no output from 5.9 or
5.10. With 4.19 and 5.4 there is, but only 4.19 actually manages to
boot to a workable state.

Tamas


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 22:14:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 22:14:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80163.146413 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6hTE-0000As-Ge; Mon, 01 Feb 2021 22:14:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80163.146413; Mon, 01 Feb 2021 22:14:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6hTE-0000Ak-D7; Mon, 01 Feb 2021 22:14:56 +0000
Received: by outflank-mailman (input) for mailman id 80163;
 Mon, 01 Feb 2021 22:14:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IWGu=HD=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l6hTD-0000Aa-9M
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 22:14:55 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2fbb9e76-d2ee-4df9-a185-fa669aa16b08;
 Mon, 01 Feb 2021 22:14:53 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2fbb9e76-d2ee-4df9-a185-fa669aa16b08
Subject: Re: [PATCH v8 08/16] xen/domain: Add vmtrace_size domain creation
 parameter
To: Jan Beulich <jbeulich@suse.com>
CC: =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>, Tamas K Lengyel
	<tamas@tklengyel.com>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210130025852.12430-1-andrew.cooper3@citrix.com>
 <20210130025852.12430-9-andrew.cooper3@citrix.com>
 <3cf886f6-db7f-ccc1-5ef0-6fd8ccb38caf@suse.com>
 <f54dec0a-65b6-07bf-9de8-ed96ffd8d791@citrix.com>
 <296e5ee3-0ae1-fe0b-9ec3-940b78284cdc@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <511115b6-5549-49fe-f6ac-6fdb3c66f605@citrix.com>
Date: Mon, 1 Feb 2021 22:14:32 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
In-Reply-To: <296e5ee3-0ae1-fe0b-9ec3-940b78284cdc@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
MIME-Version: 1.0

On 01/02/2021 14:36, Jan Beulich wrote:
> On 01.02.2021 15:22, Andrew Cooper wrote:
>> On 01/02/2021 13:18, Jan Beulich wrote:
>>> On 30.01.2021 03:58, Andrew Cooper wrote:
>>>> +static int vmtrace_alloc_buffer(struct vcpu *v)
>>>> +{
>>>> +    struct domain *d = v->domain;
>>>> +    struct page_info *pg;
>>>> +    unsigned int i;
>>>> +
>>>> +    if ( !d->vmtrace_size )
>>>> +        return 0;
>>>> +
>>>> +    pg = alloc_domheap_pages(d, get_order_from_bytes(d->vmtrace_size),
>>>> +                             MEMF_no_refcount);
>>>> +    if ( !pg )
>>>> +        return -ENOMEM;
>>>> +
>>>> +    /*
>>>> +     * Getting the reference counting correct here is hard.
>>>> +     *
>>>> +     * All pages are now on the domlist.  They, or subranges within, will be
>>> "domlist" is too imprecise, as there's no list with this name. It's
>>> extra_page_list in this case (see also below).
>>>
>>>> +     * freed when their reference count drops to zero, which may any time
>>>> +     * between now and the domain teardown path.
>>>> +     */
>>>> +
>>>> +    for ( i = 0; i < (d->vmtrace_size >> PAGE_SHIFT); i++ )
>>>> +        if ( unlikely(!get_page_and_type(&pg[i], d, PGT_writable_page)) )
>>>> +            goto refcnt_err;
>>>> +
>>>> +    /*
>>>> +     * We must only let vmtrace_free_buffer() take any action in the success
>>>> +     * case when we've taken all the refs it intends to drop.
>>>> +     */
>>>> +    v->vmtrace.pg = pg;
>>>> +
>>>> +    return 0;
>>>> +
>>>> + refcnt_err:
>>>> +    /*
>>>> +     * In the failure case, we must drop all the acquired typerefs thus far,
>>>> +     * skip vmtrace_free_buffer(), and leave domain_relinquish_resources() to
>>>> +     * drop the alloc refs on any remaining pages - some pages could already
>>>> +     * have been freed behind our backs.
>>>> +     */
>>>> +    while ( i-- )
>>>> +        put_page_and_type(&pg[i]);
>>>> +
>>>> +    return -ENODATA;
>>>> +}
>>> As said in reply on the other thread, PGC_extra pages don't get
>>> freed automatically. I too initially thought they would, but
>>> (re-)learned otherwise when trying to repro your claims on that
>>> other thread. For all pages you've managed to get the writable
>>> ref, freeing is easily done by prefixing the loop body above by
>>> put_page_alloc_ref(). For all other pages best you can do (I
>>> think; see the debugging patches I had sent on that other
>>> thread) is to try get_page() - if it succeeds, calling
>>> put_page_alloc_ref() is allowed. Otherwise we can only leak the
>>> respective page (unless going to further extents with trying to
>>> recover from the "impossible"), or assume the failure here was
>>> because it did get freed already.
>> Right - I'm going to insist on breaking apart orthogonal issues.
>>
>> This refcounting issue isn't introduced by this series - this series
>> uses an established pattern, in which we've found a corner case.
>>
>> The corner case is theoretical, not practical - it is not possible for a
>> malicious PV domain to take 2^43 refs on any of the pages in this
>> allocation.  Doing so would require an hours-long SMI, or equivalent,
>> and even then all malicious activity would be paused after 1s for the
>> time calibration rendezvous which would livelock the system until the
>> watchdog kicked in.
> Actually an overflow is only one of the possible reasons here.
> Another, which may be more "practical", is that another entity
> has already managed to free the page (by dropping its alloc-ref,
> and of course implying it did guess at the MFN).

Yes, but in this case it did get dropped from the extra page list, in
which case looping over the remaining ones in relinquish_resource would
be safe.
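
The unwind Jan is describing can be sketched as a self-contained toy model. All the names here (toy_page, get_page_and_type, put_page_and_type, put_page_alloc_ref, alloc_buffer) mirror the Xen idiom but are stand-ins, not the real hypervisor API:

```c
/* Toy model of the vmtrace buffer refcounting pattern: take a writable
 * typeref on each page, and on failure unwind by dropping both the
 * typeref and the alloc ref for every page we successfully typed. */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct toy_page {
    unsigned int refcount;   /* the alloc ref counts as 1 */
    bool type_taken;
};

static bool get_page_and_type(struct toy_page *pg)
{
    if (pg->refcount == ~0u)   /* model the "impossible" failure */
        return false;
    pg->refcount++;
    pg->type_taken = true;
    return true;
}

static void put_page_and_type(struct toy_page *pg)
{
    assert(pg->type_taken);
    pg->type_taken = false;
    pg->refcount--;
}

static void put_page_alloc_ref(struct toy_page *pg)
{
    assert(pg->refcount > 0);
    pg->refcount--;
}

static int alloc_buffer(struct toy_page *pages, size_t n)
{
    size_t i;

    for (i = 0; i < n; i++)
        if (!get_page_and_type(&pages[i]))
            goto refcnt_err;

    return 0;

 refcnt_err:
    /* Unwind: for each page whose typeref we hold, drop the typeref and
     * the alloc ref, so nothing is left dangling for the relinquish path. */
    while (i--) {
        put_page_and_type(&pages[i]);
        put_page_alloc_ref(&pages[i]);
    }
    return -1;
}
```

The page on which get_page_and_type() failed is deliberately left alone: dropping its alloc ref is only safe after a successful get_page(), which is exactly the leak corner case under discussion.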

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 22:51:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 22:51:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80172.146434 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6i2t-000468-H9; Mon, 01 Feb 2021 22:51:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80172.146434; Mon, 01 Feb 2021 22:51:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6i2t-000461-Dv; Mon, 01 Feb 2021 22:51:47 +0000
Received: by outflank-mailman (input) for mailman id 80172;
 Mon, 01 Feb 2021 22:51:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WfTc=HD=gmail.com=tamas.k.lengyel@srs-us1.protection.inumbo.net>)
 id 1l6i2r-00045w-IF
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 22:51:45 +0000
Received: from mail-wr1-x429.google.com (unknown [2a00:1450:4864:20::429])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9fc6b0f7-00ef-43df-9b8d-6503dff700c5;
 Mon, 01 Feb 2021 22:51:44 +0000 (UTC)
Received: by mail-wr1-x429.google.com with SMTP id 7so18415796wrz.0
 for <xen-devel@lists.xenproject.org>; Mon, 01 Feb 2021 14:51:44 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9fc6b0f7-00ef-43df-9b8d-6503dff700c5
MIME-Version: 1.0
From: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Date: Mon, 1 Feb 2021 17:51:05 -0500
Message-ID: <CABfawhkxEKOha7RYpSvTaJEtxyBsio9Pcc=xtRD7DszHm2k2pw@mail.gmail.com>
Subject: staging: unable to restore HVM with Viridian param set
To: Xen-devel <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"

Hi all,
trying to restore a Windows VM saved on Xen 4.14 with Xen staging results in:

# xl restore -p /shared/cfg/windows10.save
Loading new save file /shared/cfg/windows10.save (new xl fmt info 0x3/0x0/1475)
 Savefile contains xl domain config in JSON format
Parsing config from <saved>
xc: info: Found x86 HVM domain from Xen 4.14
xc: info: Restoring domain
xc: error: set HVM param 9 = 0x0000000000000065 (17 = File exists):
Internal error
xc: error: Restore failed (17 = File exists): Internal error
libxl: error: libxl_stream_read.c:850:libxl__xc_domain_restore_done:
restoring domain: File exists
libxl: error: libxl_create.c:1581:domcreate_rebuild_done: Domain
8:cannot (re-)build domain: -3
libxl: error: libxl_domain.c:1182:libxl__destroy_domid: Domain
8:Non-existant domain
libxl: error: libxl_domain.c:1136:domain_destroy_callback: Domain
8:Unable to destroy guest
libxl: error: libxl_domain.c:1063:domain_destroy_cb: Domain
8:Destruction of domain failed

Running on staging 419cd07895891c6642f29085aee07be72413f08c

Tamas


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 22:57:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 22:57:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80174.146446 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6i8Y-0004Gn-5L; Mon, 01 Feb 2021 22:57:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80174.146446; Mon, 01 Feb 2021 22:57:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6i8Y-0004Gg-1e; Mon, 01 Feb 2021 22:57:38 +0000
Received: by outflank-mailman (input) for mailman id 80174;
 Mon, 01 Feb 2021 22:57:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IWGu=HD=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l6i8W-0004Gb-GS
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 22:57:36 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9a51033c-f72c-499b-8b77-9495d8ef4ec1;
 Mon, 01 Feb 2021 22:57:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9a51033c-f72c-499b-8b77-9495d8ef4ec1
Subject: Re: staging: unable to restore HVM with Viridian param set
To: Tamas K Lengyel <tamas.k.lengyel@gmail.com>, Xen-devel
	<xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>, Ian Jackson
	<iwj@xenproject.org>, Anthony PERARD <anthony.perard@citrix.com>, "Paul
 Durrant" <paul@xen.org>, Igor Druzhinin <igor.druzhinin@citrix.com>
References: <CABfawhkxEKOha7RYpSvTaJEtxyBsio9Pcc=xtRD7DszHm2k2pw@mail.gmail.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <12e17af4-3502-0047-36e2-3c1262602747@citrix.com>
Date: Mon, 1 Feb 2021 22:57:18 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
In-Reply-To: <CABfawhkxEKOha7RYpSvTaJEtxyBsio9Pcc=xtRD7DszHm2k2pw@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
MIME-Version: 1.0

On 01/02/2021 22:51, Tamas K Lengyel wrote:
> Hi all,
> trying to restore a Windows VM saved on Xen 4.14 with Xen staging results in:
>
> # xl restore -p /shared/cfg/windows10.save
> Loading new save file /shared/cfg/windows10.save (new xl fmt info 0x3/0x0/1475)
>  Savefile contains xl domain config in JSON format
> Parsing config from <saved>
> xc: info: Found x86 HVM domain from Xen 4.14
> xc: info: Restoring domain
> xc: error: set HVM param 9 = 0x0000000000000065 (17 = File exists):
> Internal error
> xc: error: Restore failed (17 = File exists): Internal error
> libxl: error: libxl_stream_read.c:850:libxl__xc_domain_restore_done:
> restoring domain: File exists
> libxl: error: libxl_create.c:1581:domcreate_rebuild_done: Domain
> 8:cannot (re-)build domain: -3
> libxl: error: libxl_domain.c:1182:libxl__destroy_domid: Domain
> 8:Non-existant domain
> libxl: error: libxl_domain.c:1136:domain_destroy_callback: Domain
> 8:Unable to destroy guest
> libxl: error: libxl_domain.c:1063:domain_destroy_cb: Domain
> 8:Destruction of domain failed
>
> Running on staging 419cd07895891c6642f29085aee07be72413f08c

CC'ing maintainers and those who've edited the code recently.

What is happening is xl/libxl is selecting some viridian settings,
applying them to the domain, and then the migrations stream has a
different set of viridian settings.

For a migrating-in VM, nothing should be set during domain build.
Viridian state has been part of the migrate stream since before mig-v2,
so can be considered to be everywhere relevant now.
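
A minimal sketch of the one-shot parameter semantics that produce this symptom (toy names throughout; the real check lives in the hypervisor's HVM param handling, and -EEXIST is what surfaces as "17 = File exists" above):

```c
/* Toy model: an HVM param that, once set at domain build, rejects a
 * differing value from the incoming migration stream with EEXIST. */
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <stdint.h>

struct toy_param {
    bool set;
    uint64_t value;
};

static int set_param(struct toy_param *p, uint64_t value)
{
    /* Re-setting the same value is tolerated; changing it is not. */
    if (p->set && p->value != value)
        return -EEXIST;
    p->set = true;
    p->value = value;
    return 0;
}
```

Under this model, xl/libxl writing its own viridian settings at build time makes the later, different value from the Xen 4.14 stream fail, matching the restore error Tamas reports.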

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 23:19:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 23:19:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80180.146461 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6iTb-0006Tm-1s; Mon, 01 Feb 2021 23:19:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80180.146461; Mon, 01 Feb 2021 23:19:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6iTa-0006Tf-UP; Mon, 01 Feb 2021 23:19:22 +0000
Received: by outflank-mailman (input) for mailman id 80180;
 Mon, 01 Feb 2021 23:19:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6iTZ-0006TX-UJ; Mon, 01 Feb 2021 23:19:21 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6iTZ-00053G-Ny; Mon, 01 Feb 2021 23:19:21 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6iTZ-0003bD-Cx; Mon, 01 Feb 2021 23:19:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l6iTZ-0000Kn-CT; Mon, 01 Feb 2021 23:19:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158884-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 158884: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-libvirt-xsm:<job status>:broken:regression
    linux-linus:test-arm64-arm64-xl-xsm:<job status>:broken:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:guest-start:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:guest-start:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-armhf-armhf-libvirt:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-localmigrate/x10:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
    linux-linus:test-arm64-arm64-libvirt-xsm:host-install(5):broken:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=1048ba83fb1c00cd24172e23e8263972f6b5d9ac
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 01 Feb 2021 23:19:21 +0000

flight 158884 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158884/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm    <job status>                 broken
 test-arm64-arm64-xl-xsm         <job status>                 broken
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu 14 guest-start             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-armhf-armhf-xl-arndale  14 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1  14 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck 14 guest-start            fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl          14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1 10 host-ping-check-xen fail in 158868 REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot       fail in 158868 REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop    fail in 158868 REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot       fail in 158868 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-examine      8 reboot           fail in 158868 pass in 158884
 test-arm64-arm64-xl-credit1   8 xen-boot                   fail pass in 158868
 test-amd64-amd64-amd64-pvgrub 19 guest-localmigrate/x10    fail pass in 158868
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 158868

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     14 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-xsm  5 host-install(5)       broken blocked in 152332
 test-arm64-arm64-xl-xsm       5 host-install(5)       broken blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                1048ba83fb1c00cd24172e23e8263972f6b5d9ac
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  185 days
Failing since        152366  2020-08-01 20:49:34 Z  184 days  329 attempts
Testing same since   158868  2021-01-31 22:10:44 Z    1 days    2 attempts

------------------------------------------------------------
4508 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 broken  
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      broken  
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-arm64-arm64-libvirt-xsm broken
broken-job test-arm64-arm64-xl-xsm broken
broken-step test-arm64-arm64-libvirt-xsm host-install(5)
broken-step test-arm64-arm64-xl-xsm host-install(5)

Not pushing.

(No revision log; it would be 1021215 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Feb 01 23:27:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 23:27:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80188.146482 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6ibW-0007PJ-4w; Mon, 01 Feb 2021 23:27:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80188.146482; Mon, 01 Feb 2021 23:27:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6ibV-0007PC-Vb; Mon, 01 Feb 2021 23:27:33 +0000
Received: by outflank-mailman (input) for mailman id 80188;
 Mon, 01 Feb 2021 23:27:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IWGu=HD=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l6ibV-0007P6-0R
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 23:27:33 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 913af046-38af-40b3-83dc-937cf1f68d92;
 Mon, 01 Feb 2021 23:27:31 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 913af046-38af-40b3-83dc-937cf1f68d92
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612222050;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=yyB7IEX91e5BgUihcc8rw4F6xaxM9TfCWmmgF4h+/LI=;
  b=CFvMY5+F9R9VND5Itfx6LGX5uIasTK2hc2ad+UkucMAcUUe+FEFu6D9b
   8HXi0UYtHqaogOfgJIqLMdgGdKcfPM7GflrX86NOYqZGt3V0iwxsjrQ3/
   sANxYNBPDMTccchXDbqgDqusqZH6s7Lm0eZTa0ynYQPrei31+D4ifyL5Q
   M=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: YEENkP44gzMZtTh5zS4/cGoSBfGzd33S8ERdyX2jzq/bBs4QGabWaOEJ/mFSIEyph8x50neiVi
 KAqLlL3R861Wfcm5JFExOEOYhMujndOEyXK/FYmoAN/ZIwaMmS5mLNN/gjg/RFORH1EhPyU8Yr
 eissbn/PAIMQh+HA2acPq90Wn6tiyhpPlnyhnW+ucL00bIE/+mUSRI2mt1Kj0ZIo2VvECq5F+e
 HwR2ZNjl995nSiQ+ZIN+Gej3vnjNVHHc56sxbh4D5y06SN1JBILuqUy5xsp83leAHnbuFm8Wih
 44E=
X-SBRS: 5.1
X-MesageID: 36523025
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,393,1602561600"; 
   d="scan'208";a="36523025"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: =?UTF-8?q?Micha=C5=82=20Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>, "Tamas K
 Lengyel" <tamas@tklengyel.com>
Subject: [PATCH v9 03/11] tools/[lib]xl: Add vmtrace_buf_size parameter
Date: Mon, 1 Feb 2021 23:26:55 +0000
Message-ID: <20210201232703.29275-4-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210201232703.29275-1-andrew.cooper3@citrix.com>
References: <20210201232703.29275-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

From: Michał Leszczyński <michal.leszczynski@cert.pl>

Allow the size of the per-vCPU trace buffer to be specified at domain
creation time.  This defaults to zero, meaning the feature is disabled.

Signed-off-by: Michał Leszczyński <michal.leszczynski@cert.pl>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
CC: Ian Jackson <iwj@xenproject.org>
CC: Wei Liu <wl@xen.org>
CC: Anthony PERARD <anthony.perard@citrix.com>
CC: Michał Leszczyński <michal.leszczynski@cert.pl>
CC: Tamas K Lengyel <tamas@tklengyel.com>

v8:
 * Rebase over vmtrace_size change.
v7:
 * Use the name 'vmtrace' consistently.
---
 docs/man/xl.cfg.5.pod.in             | 9 +++++++++
 tools/golang/xenlight/helpers.gen.go | 2 ++
 tools/golang/xenlight/types.gen.go   | 1 +
 tools/include/libxl.h                | 7 +++++++
 tools/libs/light/libxl_create.c      | 1 +
 tools/libs/light/libxl_types.idl     | 4 ++++
 tools/xl/xl_parse.c                  | 4 ++++
 7 files changed, 28 insertions(+)

diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index 7cdb8595d3..040374dcd6 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -681,6 +681,15 @@ Windows).
 
 If this option is not specified then it will default to B<false>.
 
+=item B<vmtrace_buf_kb=KBYTES>
+
+Specifies the size of the vmtrace buffer to be allocated for each
+vCPU belonging to this domain.  Disabled (i.e.  B<vmtrace_buf_kb=0>) by
+default.
+
+B<NOTE>: Acceptable values are platform specific.  For Intel Processor
+Trace, this value must be a power of 2 between 4k and 16M.
+
 =back
 
 =head2 Devices
diff --git a/tools/golang/xenlight/helpers.gen.go b/tools/golang/xenlight/helpers.gen.go
index 63e2876463..4c60d27a9c 100644
--- a/tools/golang/xenlight/helpers.gen.go
+++ b/tools/golang/xenlight/helpers.gen.go
@@ -1114,6 +1114,7 @@ return fmt.Errorf("invalid union key '%v'", x.Type)}
 x.ArchArm.GicVersion = GicVersion(xc.arch_arm.gic_version)
 x.ArchArm.Vuart = VuartType(xc.arch_arm.vuart)
 x.Altp2M = Altp2MMode(xc.altp2m)
+x.VmtraceBufKb = int(xc.vmtrace_buf_kb)
 
  return nil}
 
@@ -1589,6 +1590,7 @@ return fmt.Errorf("invalid union key '%v'", x.Type)}
 xc.arch_arm.gic_version = C.libxl_gic_version(x.ArchArm.GicVersion)
 xc.arch_arm.vuart = C.libxl_vuart_type(x.ArchArm.Vuart)
 xc.altp2m = C.libxl_altp2m_mode(x.Altp2M)
+xc.vmtrace_buf_kb = C.int(x.VmtraceBufKb)
 
  return nil
  }
diff --git a/tools/golang/xenlight/types.gen.go b/tools/golang/xenlight/types.gen.go
index 5851c38057..cb13002fdb 100644
--- a/tools/golang/xenlight/types.gen.go
+++ b/tools/golang/xenlight/types.gen.go
@@ -514,6 +514,7 @@ GicVersion GicVersion
 Vuart VuartType
 }
 Altp2M Altp2MMode
+VmtraceBufKb int
 }
 
 type domainBuildInfoTypeUnion interface {
diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index f48d0c5e8a..a7b673e89d 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -489,6 +489,13 @@
 #define LIBXL_HAVE_PHYSINFO_CAP_VMTRACE 1
 
 /*
+ * LIBXL_HAVE_VMTRACE_BUF_KB indicates that libxl_domain_build_info has a
+ * vmtrace_buf_kb parameter, which allows pre-allocation of processor
+ * tracing buffers of the given size.
+ */
+#define LIBXL_HAVE_VMTRACE_BUF_KB 1
+
+/*
  * libxl ABI compatibility
  *
  * The only guarantee which libxl makes regarding ABI compatibility
diff --git a/tools/libs/light/libxl_create.c b/tools/libs/light/libxl_create.c
index 9848d65f36..46f68da697 100644
--- a/tools/libs/light/libxl_create.c
+++ b/tools/libs/light/libxl_create.c
@@ -607,6 +607,7 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
             .max_evtchn_port = b_info->event_channels,
             .max_grant_frames = b_info->max_grant_frames,
             .max_maptrack_frames = b_info->max_maptrack_frames,
+            .vmtrace_size = ROUNDUP(b_info->vmtrace_buf_kb << 10, XC_PAGE_SHIFT),
         };
 
         if (info->type != LIBXL_DOMAIN_TYPE_PV) {
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index dacb7df6b7..5b85a7419f 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -648,6 +648,10 @@ libxl_domain_build_info = Struct("domain_build_info",[
     # supported by x86 HVM and ARM support is planned.
     ("altp2m", libxl_altp2m_mode),
 
+    # Size of preallocated vmtrace buffers (in KBYTES).
+    # Use a zero value to disable this feature.
+    ("vmtrace_buf_kb", integer),
+
     ], dir=DIR_IN,
        copy_deprecated_fn="libxl__domain_build_info_copy_deprecated",
 )
diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index 867e4d068a..1893cfc086 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -1863,6 +1863,10 @@ void parse_config_data(const char *config_source,
         }
     }
 
+    if (!xlu_cfg_get_long(config, "vmtrace_buf_kb", &l, 1) && l) {
+        b_info->vmtrace_buf_kb = l;
+    }
+
     if (!xlu_cfg_get_list(config, "ioports", &ioports, &num_ioports, 0)) {
         b_info->num_ioports = num_ioports;
         b_info->ioports = calloc(num_ioports, sizeof(*b_info->ioports));
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Feb 01 23:27:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 23:27:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80189.146493 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6ibX-0007QH-AZ; Mon, 01 Feb 2021 23:27:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80189.146493; Mon, 01 Feb 2021 23:27:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6ibX-0007QA-7Z; Mon, 01 Feb 2021 23:27:35 +0000
Received: by outflank-mailman (input) for mailman id 80189;
 Mon, 01 Feb 2021 23:27:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IWGu=HD=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l6ibW-0007PB-0l
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 23:27:34 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5bda2d77-d5a5-4294-9fd1-d4c2be43d2e3;
 Mon, 01 Feb 2021 23:27:32 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5bda2d77-d5a5-4294-9fd1-d4c2be43d2e3
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612222052;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=EvfjqR50MUjZfV+wJxte5vJxLsTztJ8urztBWoZwrE4=;
  b=a64ywfGPWzZrgGIZ9nz5YSx5JWVPNBK9O+cLzYjkjQ0XabURaXYhTRRE
   YqibgUDpj+qcAeC6ObSohMbGGtMzC8aLnsjcasW6qcG5UbLQWwA/3t6//
   llOg3BnNbSBN1hAWoUx8FNUPmD1VV/efdFecmW76lA7vZvtXdcpEcBfqR
   c=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: p+bwOS9yhAo0dKZ1+LmmyuytD6afidLE/c6n6GRnIcBSgmr0eDty1Gtj6MxTt4mjmczbooMOYP
 U8rfJeV3iX3LGukV840Cc9jx8pL50wshDUT/vR4nxdOdMCYbLVAWa8066eaRYUDU7k1WhQ3m2U
 Vb5TxY/FC4fFeJR7F1oB40hFd4ohKIziSaAhvJPZPjvW1FiFy1rSNEJEhd4BoXWMtXZQQTi/mx
 Zt9tJLlcB6Xfx/vz09iN1F0ae8yvRdQnQwJ8Qpo6/KP5xliDfdWxsWlsW3WIh+TnTn+txqSZvu
 +jk=
X-SBRS: 5.1
X-MesageID: 36523027
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,393,1602561600"; 
   d="scan'208";a="36523027"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: =?UTF-8?q?Micha=C5=82=20Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>, Tamas K Lengyel <tamas@tklengyel.com>
Subject: [PATCH v9 07/11] tools/libxc: Add xc_vmtrace_* functions
Date: Mon, 1 Feb 2021 23:26:59 +0000
Message-ID: <20210201232703.29275-8-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210201232703.29275-1-andrew.cooper3@citrix.com>
References: <20210201232703.29275-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

From: Michał Leszczyński <michal.leszczynski@cert.pl>

Add functions in libxc that use the new XEN_DOMCTL_vmtrace interface.
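A usage sketch (pseudocode, not from this series; it assumes an open xc_interface handle `xch`, and that the final parameter of xc_vmtrace_output_position, truncated below, is a pointer receiving the byte offset):

```
/* Pseudocode: trace vCPU 0 of `domid` and find out how much data
 * the processor wrote. */
if ( xc_vmtrace_reset_and_enable(xch, domid, 0) )
    /* error: returns -1 with errno set */ ;

/* ... let the guest run for a while ... */

uint64_t pos;
if ( xc_vmtrace_output_position(xch, domid, 0, &pos) == 0 )
    /* `pos` bytes of trace data are now in the vCPU's buffer */ ;

xc_vmtrace_disable(xch, domid, 0);
```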

Signed-off-by: Michał Leszczyński <michal.leszczynski@cert.pl>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
CC: Ian Jackson <iwj@xenproject.org>
CC: Wei Liu <wl@xen.org>
CC: Michał Leszczyński <michal.leszczynski@cert.pl>
CC: Tamas K Lengyel <tamas@tklengyel.com>

v7:
 * Use the name 'vmtrace' consistently.
---
 tools/include/xenctrl.h      |  73 ++++++++++++++++++++++++
 tools/libs/ctrl/Makefile     |   1 +
 tools/libs/ctrl/xc_vmtrace.c | 128 +++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 202 insertions(+)
 create mode 100644 tools/libs/ctrl/xc_vmtrace.c

diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
index 3796425e1e..0efcdae8b4 100644
--- a/tools/include/xenctrl.h
+++ b/tools/include/xenctrl.h
@@ -1583,6 +1583,79 @@ int xc_tbuf_set_cpu_mask(xc_interface *xch, xc_cpumap_t mask);
 
 int xc_tbuf_set_evt_mask(xc_interface *xch, uint32_t mask);
 
+/**
+ * Enable vmtrace for the given vCPU.
+ *
+ * @parm xch a handle to an open hypervisor interface
+ * @parm domid domain identifier
+ * @parm vcpu vcpu identifier
+ * @return 0 on success, -1 on failure
+ */
+int xc_vmtrace_enable(xc_interface *xch, uint32_t domid, uint32_t vcpu);
+
+/**
+ * Disable vmtrace for the given vCPU.
+ *
+ * @parm xch a handle to an open hypervisor interface
+ * @parm domid domain identifier
+ * @parm vcpu vcpu identifier
+ * @return 0 on success, -1 on failure
+ */
+int xc_vmtrace_disable(xc_interface *xch, uint32_t domid, uint32_t vcpu);
+
+/**
+ * Enable vmtrace for a given vCPU, along with resetting status/offset
+ * details.
+ *
+ * @parm xch a handle to an open hypervisor interface
+ * @parm domid domain identifier
+ * @parm vcpu vcpu identifier
+ * @return 0 on success, -1 on failure
+ */
+int xc_vmtrace_reset_and_enable(xc_interface *xch, uint32_t domid,
+                                uint32_t vcpu);
+
+/**
+ * Get current output position inside the trace buffer.
+ *
+ * Repeated calls will return different values if tracing is enabled.  It is
+ * platform specific what happens when the buffer fills completely.
+ *
+ * @parm xch a handle to an open hypervisor interface
+ * @parm domid domain identifier
+ * @parm vcpu vcpu identifier
+ * @parm pos current output position in bytes
+ * @return 0 on success, -1 on failure
+ */
+int xc_vmtrace_output_position(xc_interface *xch, uint32_t domid,
+                               uint32_t vcpu, uint64_t *pos);
+
+/**
+ * Get platform specific vmtrace options.
+ *
+ * @parm xch a handle to an open hypervisor interface
+ * @parm domid domain identifier
+ * @parm vcpu vcpu identifier
+ * @parm key platform-specific input
+ * @parm value platform-specific output
+ * @return 0 on success, -1 on failure
+ */
+int xc_vmtrace_get_option(xc_interface *xch, uint32_t domid,
+                          uint32_t vcpu, uint64_t key, uint64_t *value);
+
+/**
+ * Set platform specific vmtrace options.
+ *
+ * @parm xch a handle to an open hypervisor interface
+ * @parm domid domain identifier
+ * @parm vcpu vcpu identifier
+ * @parm key platform-specific input
+ * @parm value platform-specific input
+ * @return 0 on success, -1 on failure
+ */
+int xc_vmtrace_set_option(xc_interface *xch, uint32_t domid,
+                          uint32_t vcpu, uint64_t key, uint64_t value);
+
 int xc_domctl(xc_interface *xch, struct xen_domctl *domctl);
 int xc_sysctl(xc_interface *xch, struct xen_sysctl *sysctl);
 
diff --git a/tools/libs/ctrl/Makefile b/tools/libs/ctrl/Makefile
index 6106e36c49..ce9ecae710 100644
--- a/tools/libs/ctrl/Makefile
+++ b/tools/libs/ctrl/Makefile
@@ -22,6 +22,7 @@ SRCS-y       += xc_pm.c
 SRCS-y       += xc_cpu_hotplug.c
 SRCS-y       += xc_resume.c
 SRCS-y       += xc_vm_event.c
+SRCS-y       += xc_vmtrace.c
 SRCS-y       += xc_monitor.c
 SRCS-y       += xc_mem_paging.c
 SRCS-y       += xc_mem_access.c
diff --git a/tools/libs/ctrl/xc_vmtrace.c b/tools/libs/ctrl/xc_vmtrace.c
new file mode 100644
index 0000000000..602502367f
--- /dev/null
+++ b/tools/libs/ctrl/xc_vmtrace.c
@@ -0,0 +1,128 @@
+/******************************************************************************
+ * xc_vmtrace.c
+ *
+ * API for manipulating hardware tracing features
+ *
+ * Copyright (c) 2020, Michal Leszczynski
+ *
+ * Copyright 2020 CERT Polska. All rights reserved.
+ * Use is subject to license terms.
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation;
+ * version 2.1 of the License.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "xc_private.h"
+
+int xc_vmtrace_enable(
+    xc_interface *xch, uint32_t domid, uint32_t vcpu)
+{
+    struct xen_domctl domctl = {
+        .cmd = XEN_DOMCTL_vmtrace_op,
+        .domain = domid,
+        .u.vmtrace_op = {
+            .cmd = XEN_DOMCTL_vmtrace_enable,
+            .vcpu = vcpu,
+        },
+    };
+
+    return do_domctl(xch, &domctl);
+}
+
+int xc_vmtrace_disable(
+    xc_interface *xch, uint32_t domid, uint32_t vcpu)
+{
+    struct xen_domctl domctl = {
+        .cmd = XEN_DOMCTL_vmtrace_op,
+        .domain = domid,
+        .u.vmtrace_op = {
+            .cmd = XEN_DOMCTL_vmtrace_disable,
+            .vcpu = vcpu,
+        },
+    };
+
+    return do_domctl(xch, &domctl);
+}
+
+int xc_vmtrace_reset_and_enable(
+    xc_interface *xch, uint32_t domid, uint32_t vcpu)
+{
+    struct xen_domctl domctl = {
+        .cmd = XEN_DOMCTL_vmtrace_op,
+        .domain = domid,
+        .u.vmtrace_op = {
+            .cmd = XEN_DOMCTL_vmtrace_reset_and_enable,
+            .vcpu = vcpu,
+        },
+    };
+
+    return do_domctl(xch, &domctl);
+}
+
+int xc_vmtrace_output_position(
+    xc_interface *xch, uint32_t domid, uint32_t vcpu, uint64_t *pos)
+{
+    struct xen_domctl domctl = {
+        .cmd = XEN_DOMCTL_vmtrace_op,
+        .domain = domid,
+        .u.vmtrace_op = {
+            .cmd = XEN_DOMCTL_vmtrace_output_position,
+            .vcpu = vcpu,
+        },
+    };
+    int rc = do_domctl(xch, &domctl);
+
+    if ( !rc )
+        *pos = domctl.u.vmtrace_op.value;
+
+    return rc;
+}
+
+int xc_vmtrace_get_option(
+    xc_interface *xch, uint32_t domid, uint32_t vcpu,
+    uint64_t key, uint64_t *value)
+{
+    struct xen_domctl domctl = {
+        .cmd = XEN_DOMCTL_vmtrace_op,
+        .domain = domid,
+        .u.vmtrace_op = {
+            .cmd = XEN_DOMCTL_vmtrace_get_option,
+            .vcpu = vcpu,
+            .key = key,
+        },
+    };
+    int rc = do_domctl(xch, &domctl);
+
+    if ( !rc )
+        *value = domctl.u.vmtrace_op.value;
+
+    return rc;
+}
+
+int xc_vmtrace_set_option(
+    xc_interface *xch, uint32_t domid, uint32_t vcpu,
+    uint64_t key, uint64_t value)
+{
+    struct xen_domctl domctl = {
+        .cmd = XEN_DOMCTL_vmtrace_op,
+        .domain = domid,
+        .u.vmtrace_op = {
+            .cmd = XEN_DOMCTL_vmtrace_set_option,
+            .vcpu = vcpu,
+            .key = key,
+            .value = value,
+        },
+    };
+
+    return do_domctl(xch, &domctl);
+}
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Feb 01 23:27:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 23:27:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80190.146506 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6ibb-0007Tb-JZ; Mon, 01 Feb 2021 23:27:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80190.146506; Mon, 01 Feb 2021 23:27:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6ibb-0007TU-Fr; Mon, 01 Feb 2021 23:27:39 +0000
Received: by outflank-mailman (input) for mailman id 80190;
 Mon, 01 Feb 2021 23:27:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IWGu=HD=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l6ibZ-0007P6-Qt
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 23:27:37 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3d35c18d-68bf-48f9-993b-8fdff7993348;
 Mon, 01 Feb 2021 23:27:31 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3d35c18d-68bf-48f9-993b-8fdff7993348
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612222051;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=SJRgavZh5UboqFIes1Vgd8L9o/aFp0WpSJ+Ons+P7bw=;
  b=VJ+XdxhDljjq4eO4QmvqjbxS6JuTELmP9M433yE/rB9ZrmQT8UYelJug
   isRTKaF5D1SX4t3QJGklg22yWPQ9ZyTZ09Yvj8Udk0zK1ngsZvJtKWFp9
   80doVvdWnbGYSliXDz8y15BscDyES05ImsSU1K6z+VkdanKe/n9Q+/l4b
   4=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: vJwJmHhGesYA1E5eg5NT9NMRm0hNc4DFSfh8WkGBNeByiHoRntitHyhtv7W48wl75fZNCSFR3B
 l9eRQSWp4qQmrLqYrLWODTXN9UNTiUjDrEp7G0kq+sHrEWRc+ICtme/0/aUzmrfSgDIEz2rMLk
 lUXlRYWLzIsS0rcb+lNOvTEZTm+srPwcMS1jXr2ZSvEWFy1KkNuqy72JYfcR3GpQ+EfCejWROa
 r7n1y1Z7WeO62A8bLBkWM+tzoB0vu7Rvg0cL80hwuqN0PLvA/vBjziyx34mzBD28nm+4uWukyf
 JHQ=
X-SBRS: 5.1
X-MesageID: 36319798
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,393,1602561600"; 
   d="scan'208";a="36319798"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: =?UTF-8?q?Micha=C5=82=20Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Tamas K Lengyel <tamas@tklengyel.com>
Subject: [PATCH v9 04/11] xen/memory: Add a vmtrace_buf resource type
Date: Mon, 1 Feb 2021 23:26:56 +0000
Message-ID: <20210201232703.29275-5-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210201232703.29275-1-andrew.cooper3@citrix.com>
References: <20210201232703.29275-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

From: Michał Leszczyński <michal.leszczynski@cert.pl>

Allow mapping the processor trace buffer using acquire_resource().

Signed-off-by: Michał Leszczyński <michal.leszczynski@cert.pl>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Michał Leszczyński <michal.leszczynski@cert.pl>
CC: Tamas K Lengyel <tamas@tklengyel.com>

v8:
 * Rebase over 'fault' and ARM/IOREQ series.

v7:
 * Rebase over changes elsewhere in the series
---
 xen/common/memory.c         | 29 +++++++++++++++++++++++++++++
 xen/include/public/memory.h |  1 +
 2 files changed, 30 insertions(+)

diff --git a/xen/common/memory.c b/xen/common/memory.c
index 128718b31c..fada97a79f 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -1085,6 +1085,9 @@ static unsigned int resource_max_frames(const struct domain *d,
     case XENMEM_resource_ioreq_server:
         return ioreq_server_max_frames(d);
 
+    case XENMEM_resource_vmtrace_buf:
+        return d->vmtrace_size >> PAGE_SHIFT;
+
     default:
         return -EOPNOTSUPP;
     }
@@ -1125,6 +1128,29 @@ static int acquire_ioreq_server(struct domain *d,
 #endif
 }
 
+static int acquire_vmtrace_buf(
+    struct domain *d, unsigned int id, unsigned int frame,
+    unsigned int nr_frames, xen_pfn_t mfn_list[])
+{
+    const struct vcpu *v = domain_vcpu(d, id);
+    unsigned int i;
+    mfn_t mfn;
+
+    if ( !v )
+        return -ENOENT;
+
+    if ( !v->vmtrace.pg ||
+         (frame + nr_frames) > (d->vmtrace_size >> PAGE_SHIFT) )
+        return -EINVAL;
+
+    mfn = page_to_mfn(v->vmtrace.pg);
+
+    for ( i = 0; i < nr_frames; i++ )
+        mfn_list[i] = mfn_x(mfn) + frame + i;
+
+    return nr_frames;
+}
+
 /*
  * Returns -errno on error, or positive in the range [1, nr_frames] on
  * success.  Returning less than nr_frames constitutes a request for a
@@ -1142,6 +1168,9 @@ static int _acquire_resource(
     case XENMEM_resource_ioreq_server:
         return acquire_ioreq_server(d, id, frame, nr_frames, mfn_list);
 
+    case XENMEM_resource_vmtrace_buf:
+        return acquire_vmtrace_buf(d, id, frame, nr_frames, mfn_list);
+
     default:
         return -EOPNOTSUPP;
     }
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index 020c79d757..50e73eef98 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -625,6 +625,7 @@ struct xen_mem_acquire_resource {
 
 #define XENMEM_resource_ioreq_server 0
 #define XENMEM_resource_grant_table 1
+#define XENMEM_resource_vmtrace_buf 2
 
     /*
      * IN - a type-specific resource identifier, which must be zero
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Feb 01 23:27:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 23:27:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80191.146512 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6ibb-0007UN-VW; Mon, 01 Feb 2021 23:27:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80191.146512; Mon, 01 Feb 2021 23:27:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6ibb-0007UB-P4; Mon, 01 Feb 2021 23:27:39 +0000
Received: by outflank-mailman (input) for mailman id 80191;
 Mon, 01 Feb 2021 23:27:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IWGu=HD=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l6iba-0007PB-S1
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 23:27:38 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 530ec581-18cd-4167-be04-6ebe22ae1b1a;
 Mon, 01 Feb 2021 23:27:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 530ec581-18cd-4167-be04-6ebe22ae1b1a
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612222053;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=ndDeJS/U1dfYy3KhxJrt2mDw759yVCsV1w6A2jxC4Vc=;
  b=ONwkkz4bp/Rs/aDmWEzl3ejg2gfVuUGHPmueZ8g+Y0eohiCZ9YSzCSvy
   x6RI8SyEU9dObXAgtHppbolQJk2VYQCkKWu6zRvbggnNtXpZUySDJOsRC
   AIVTcG3E8V/LhFACCqWdwyTKXCE8O3bxv9jD6ZZuh6tM0r50meMvFiW75
   Q=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: qPaTmDrkP1LKm8wqBC3XVmHClBioSLKitKOvQ8gdeQSj8QZTBSD0gl6IF4SE3Jk6eaauqodLPf
 XeAAr0udY8Y/XhFkyeew+ppbCt3K4DrTK8SQVhInP7O29toYZdCSQagGB67guO7cxm+C0NKqp/
 rv8MFkIvoCSWNEEw0sPtuejPeiHoMs7vhYjvPXAgOwdoLcySy2iAShIBmLfgwwFvjlkqoIlD3C
 bKWkohFeKuCfe5aJbhnBrE+7bhd1hGuI44pzxwYMICvfuAEnZORRz+WBWlQPkaA8gsO5S8UJzD
 6hc=
X-SBRS: 5.1
X-MesageID: 36319797
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,393,1602561600"; 
   d="scan'208";a="36319797"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: =?UTF-8?q?Micha=C5=82=20Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>, Tamas K Lengyel
	<tamas@tklengyel.com>
Subject: [PATCH v9 02/11] xen/domain: Add vmtrace_size domain creation parameter
Date: Mon, 1 Feb 2021 23:26:54 +0000
Message-ID: <20210201232703.29275-3-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210201232703.29275-1-andrew.cooper3@citrix.com>
References: <20210201232703.29275-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

From: Michał Leszczyński <michal.leszczynski@cert.pl>

To use vmtrace, buffers of a suitable size need to be allocated, and
different tasks will want different sizes.

Add a domain creation parameter, and audit it appropriately in the
{arch_,}sanitise_domain_config() functions.

For now, the x86 specific auditing is tuned to Processor Trace running in
Single Output mode, which requires a single contiguous range of memory.

The size is given an arbitrary limit of 64M, which is expected to be enough
for anticipated use cases, but not large enough to run into
long-running-hypercall problems.

Signed-off-by: Michał Leszczyński <michal.leszczynski@cert.pl>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Anthony PERARD <anthony.perard@citrix.com>
CC: Michał Leszczyński <michal.leszczynski@cert.pl>
CC: Tamas K Lengyel <tamas@tklengyel.com>

When support for later generations of IPT gets added, we can in principle
start to use ToPA, which is a scatter list of smaller trace regions, if we
need to massively increase the available buffer size.

v9:
 * Drop misleading comments in vmtrace_alloc_buffer().  Memory still gets
   leaked in theoretical corner cases, but the pattern needs fixing across the
   board when we figure out a solution.

v8:
 * Rename vmtrace_frames to vmtrace_size.  Reposition to fill a hole.
 * Rename vmtrace.buf to vmtrace.pg.
 * Rework the refcounting logic and comment it *very* clearly.

v7:
 * Major chop&change within the series.
 * Use the name 'vmtrace' consistently.
 * Use the (new) common vcpu_teardown() functionality, rather than leaving a
   latent memory leak on ARM.
---
 xen/arch/x86/domain.c       | 23 ++++++++++++++++
 xen/common/domain.c         | 64 +++++++++++++++++++++++++++++++++++++++++++++
 xen/include/public/domctl.h |  3 +++
 xen/include/xen/sched.h     |  6 +++++
 4 files changed, 96 insertions(+)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index b9ba04633e..6c7ee25f3b 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -660,6 +660,29 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
         return -EINVAL;
     }
 
+    if ( config->vmtrace_size )
+    {
+        unsigned int size = config->vmtrace_size;
+
+        ASSERT(vmtrace_available); /* Checked by common code. */
+
+        /*
+         * For now, vmtrace is restricted to HVM guests, and using a
+         * power-of-2 buffer between 4k and 64M in size.
+         */
+        if ( !hvm )
+        {
+            dprintk(XENLOG_INFO, "vmtrace not supported for PV\n");
+            return -EINVAL;
+        }
+
+        if ( size < PAGE_SIZE || size > MB(64) || (size & (size - 1)) )
+        {
+            dprintk(XENLOG_INFO, "Unsupported vmtrace size: %#x\n", size);
+            return -EINVAL;
+        }
+    }
+
     return 0;
 }
 
diff --git a/xen/common/domain.c b/xen/common/domain.c
index d1e94d88cf..b6f8d2f536 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -132,6 +132,56 @@ static void vcpu_info_reset(struct vcpu *v)
     v->vcpu_info_mfn = INVALID_MFN;
 }
 
+static void vmtrace_free_buffer(struct vcpu *v)
+{
+    const struct domain *d = v->domain;
+    struct page_info *pg = v->vmtrace.pg;
+    unsigned int i;
+
+    if ( !pg )
+        return;
+
+    v->vmtrace.pg = NULL;
+
+    for ( i = 0; i < (d->vmtrace_size >> PAGE_SHIFT); i++ )
+    {
+        put_page_alloc_ref(&pg[i]);
+        put_page_and_type(&pg[i]);
+    }
+}
+
+static int vmtrace_alloc_buffer(struct vcpu *v)
+{
+    struct domain *d = v->domain;
+    struct page_info *pg;
+    unsigned int i;
+
+    if ( !d->vmtrace_size )
+        return 0;
+
+    pg = alloc_domheap_pages(d, get_order_from_bytes(d->vmtrace_size),
+                             MEMF_no_refcount);
+    if ( !pg )
+        return -ENOMEM;
+
+    for ( i = 0; i < (d->vmtrace_size >> PAGE_SHIFT); i++ )
+        if ( unlikely(!get_page_and_type(&pg[i], d, PGT_writable_page)) )
+            goto refcnt_err;
+
+    /*
+     * We must only let vmtrace_free_buffer() take any action in the success
+     * case when we've taken all the refs it intends to drop.
+     */
+    v->vmtrace.pg = pg;
+    return 0;
+
+ refcnt_err:
+    while ( i-- )
+        put_page_and_type(&pg[i]);
+
+    return -ENODATA;
+}
+
 /*
  * Release resources held by a vcpu.  There may or may not be live references
  * to the vcpu, and it may or may not be fully constructed.
@@ -140,6 +190,8 @@ static void vcpu_info_reset(struct vcpu *v)
  */
 static int vcpu_teardown(struct vcpu *v)
 {
+    vmtrace_free_buffer(v);
+
     return 0;
 }
 
@@ -201,6 +253,9 @@ struct vcpu *vcpu_create(struct domain *d, unsigned int vcpu_id)
     if ( sched_init_vcpu(v) != 0 )
         goto fail_wq;
 
+    if ( vmtrace_alloc_buffer(v) != 0 )
+        goto fail_wq;
+
     if ( arch_vcpu_create(v) != 0 )
         goto fail_sched;
 
@@ -449,6 +504,12 @@ static int sanitise_domain_config(struct xen_domctl_createdomain *config)
         }
     }
 
+    if ( config->vmtrace_size && !vmtrace_available )
+    {
+        dprintk(XENLOG_INFO, "vmtrace requested but not available\n");
+        return -EINVAL;
+    }
+
     return arch_sanitise_domain_config(config);
 }
 
@@ -474,7 +535,10 @@ struct domain *domain_create(domid_t domid,
     ASSERT(is_system_domain(d) ? config == NULL : config != NULL);
 
     if ( config )
+    {
         d->options = config->flags;
+        d->vmtrace_size = config->vmtrace_size;
+    }
 
     /* Sort out our idea of is_control_domain(). */
     d->is_privileged = is_priv;
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 666aeb71bf..88a5b1ef5d 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -95,6 +95,9 @@ struct xen_domctl_createdomain {
     int32_t max_grant_frames;
     int32_t max_maptrack_frames;
 
+    /* Per-vCPU buffer size in bytes.  0 to disable. */
+    uint32_t vmtrace_size;
+
     struct xen_arch_domainconfig arch;
 };
 
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 06dba1a397..bc78a09a53 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -272,6 +272,10 @@ struct vcpu
     /* vPCI per-vCPU area, used to store data for long running operations. */
     struct vpci_vcpu vpci;
 
+    struct {
+        struct page_info *pg; /* One contiguous allocation of d->vmtrace_size */
+    } vmtrace;
+
     struct arch_vcpu arch;
 
 #ifdef CONFIG_IOREQ_SERVER
@@ -547,6 +551,8 @@ struct domain
         unsigned int guest_request_sync          : 1;
     } monitor;
 
+    unsigned int vmtrace_size; /* Buffer size in bytes, or 0 to disable. */
+
 #ifdef CONFIG_ARGO
     /* Argo interdomain communication support */
     struct argo_domain *argo;
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Feb 01 23:27:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 23:27:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80192.146530 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6ibg-0007az-EX; Mon, 01 Feb 2021 23:27:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80192.146530; Mon, 01 Feb 2021 23:27:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6ibg-0007ao-AW; Mon, 01 Feb 2021 23:27:44 +0000
Received: by outflank-mailman (input) for mailman id 80192;
 Mon, 01 Feb 2021 23:27:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IWGu=HD=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l6ibe-0007P6-Qz
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 23:27:42 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 38cdcfa4-521c-43b8-a188-6a21d06b37d7;
 Mon, 01 Feb 2021 23:27:32 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 38cdcfa4-521c-43b8-a188-6a21d06b37d7
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612222052;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=FBfo6N9u3Sukj0MKNcU3Z9nO9VLLQT0Q251N7HMr1t4=;
  b=W+AMHjrZtM0JdgByJWxIl+Q2o2kC0Hp+Z9c35pgJCUOdxofBHtd98ugt
   RyZDO/x/pM4EgnPbFfZCGkc55tUQmV4eFGJoOUA6CBObgj8HKNCgP4qOR
   kcP4/TbGNemXwKraGvRA8t4gwVByqsLzui2w2iE3zM4UD6yoANyCv7Ovu
   c=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 12MhwBfguQk0bLxFSPpqPdTqs4n6Fxn3l25XKaRAqX09hFa34kLWoantVeiJPyoIOd9c4ScBOK
 XdoOLk0DmbE9RXIcx5/P8DrSsQjR2Z5G7N8+/5fREE/G/xf2Hs2kZMC7PP0dz0TRULbtnT9GDN
 UeuaOsfeX6ntwJ9t/j1Sier4gog6YGqKhufTg89OcsascmaJjD595nU4buluHAhcmM/AbDDKaG
 VEwpqi+7FCZgnV4tL58dn2dChMjNxvo2jrOSElEgx+2fxxPWFzJCgsl0j7HNderxoke4jPvCPH
 Y+4=
X-SBRS: 5.1
X-MesageID: 36523028
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,393,1602561600"; 
   d="scan'208";a="36523028"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Tamas K Lengyel <tamas.lengyel@intel.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian
	<kevin.tian@intel.com>, =?UTF-8?q?Micha=C5=82=20Leszczy=C5=84ski?=
	<michal.leszczynski@cert.pl>, Tamas K Lengyel <tamas@tklengyel.com>
Subject: [PATCH v9 09/11] xen/vmtrace: support for VM forks
Date: Mon, 1 Feb 2021 23:27:01 +0000
Message-ID: <20210201232703.29275-10-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210201232703.29275-1-andrew.cooper3@citrix.com>
References: <20210201232703.29275-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

From: Tamas K Lengyel <tamas.lengyel@intel.com>

Implement the vmtrace_reset() function.  Properly set IPT state for VM
forks.

Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Jun Nakajima <jun.nakajima@intel.com>
CC: Kevin Tian <kevin.tian@intel.com>
CC: Michał Leszczyński <michal.leszczynski@cert.pl>
CC: Tamas K Lengyel <tamas@tklengyel.com>

v7:
 * New
---
 tools/misc/xen-vmtrace.c      |  2 +-
 xen/arch/x86/hvm/vmx/vmx.c    | 11 +++++++++++
 xen/arch/x86/mm/mem_sharing.c |  3 +++
 xen/include/asm-x86/hvm/hvm.h |  9 +++++++++
 4 files changed, 24 insertions(+), 1 deletion(-)

diff --git a/tools/misc/xen-vmtrace.c b/tools/misc/xen-vmtrace.c
index cc58a0707b..7572e880c5 100644
--- a/tools/misc/xen-vmtrace.c
+++ b/tools/misc/xen-vmtrace.c
@@ -43,7 +43,7 @@ static uint32_t domid, vcpu;
 static size_t size;
 static char *buf;
 
-static sig_atomic_t interrupted = 0;
+static sig_atomic_t interrupted;
 static void int_handler(int signum)
 {
     interrupted = 1;
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index beb5692b8b..faba95d057 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2392,6 +2392,16 @@ static int vmtrace_output_position(struct vcpu *v, uint64_t *pos)
     return v->arch.hvm.vmx.ipt_active;
 }
 
+static int vmtrace_reset(struct vcpu *v)
+{
+    if ( !v->arch.hvm.vmx.ipt_active )
+        return -EINVAL;
+
+    v->arch.msrs->rtit.output_offset = 0;
+    v->arch.msrs->rtit.status = 0;
+    return 0;
+}
+
 static struct hvm_function_table __initdata vmx_function_table = {
     .name                 = "VMX",
     .cpu_up_prepare       = vmx_cpu_up_prepare,
@@ -2451,6 +2461,7 @@ static struct hvm_function_table __initdata vmx_function_table = {
     .vmtrace_output_position = vmtrace_output_position,
     .vmtrace_set_option = vmtrace_set_option,
     .vmtrace_get_option = vmtrace_get_option,
+    .vmtrace_reset = vmtrace_reset,
     .tsc_scaling = {
         .max_ratio = VMX_TSC_MULTIPLIER_MAX,
     },
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index adaeab4612..00ada05c10 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -1632,6 +1632,8 @@ static int copy_vcpu_settings(struct domain *cd, const struct domain *d)
             copy_domain_page(new_vcpu_info_mfn, vcpu_info_mfn);
         }
 
+        hvm_vmtrace_reset(cd_vcpu);
+
         /*
          * TODO: to support VMs with PV interfaces copy additional
          * settings here, such as PV timers.
@@ -1782,6 +1784,7 @@ static int fork(struct domain *cd, struct domain *d)
         cd->max_pages = d->max_pages;
         *cd->arch.cpuid = *d->arch.cpuid;
         *cd->arch.msr = *d->arch.msr;
+        cd->vmtrace_size = d->vmtrace_size;
         cd->parent = d;
     }
 
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index 960ec03917..150746de66 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -219,6 +219,7 @@ struct hvm_function_table {
     int (*vmtrace_output_position)(struct vcpu *v, uint64_t *pos);
     int (*vmtrace_set_option)(struct vcpu *v, uint64_t key, uint64_t value);
     int (*vmtrace_get_option)(struct vcpu *v, uint64_t key, uint64_t *value);
+    int (*vmtrace_reset)(struct vcpu *v);
 
     /*
      * Parameters and callbacks for hardware-assisted TSC scaling,
@@ -696,6 +697,14 @@ static inline int hvm_vmtrace_get_option(
     return -EOPNOTSUPP;
 }
 
+static inline int hvm_vmtrace_reset(struct vcpu *v)
+{
+    if ( hvm_funcs.vmtrace_reset )
+        return hvm_funcs.vmtrace_reset(v);
+
+    return -EOPNOTSUPP;
+}
+
 /*
  * This must be defined as a macro instead of an inline function,
  * because it uses 'struct vcpu' and 'struct domain' which have
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Feb 01 23:27:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 23:27:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80193.146542 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6ibh-0007dl-Oy; Mon, 01 Feb 2021 23:27:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80193.146542; Mon, 01 Feb 2021 23:27:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6ibh-0007dY-Ki; Mon, 01 Feb 2021 23:27:45 +0000
Received: by outflank-mailman (input) for mailman id 80193;
 Mon, 01 Feb 2021 23:27:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IWGu=HD=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l6ibf-0007PB-SJ
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 23:27:43 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 537d15c5-bb30-4d98-9b4f-c825c6125c33;
 Mon, 01 Feb 2021 23:27:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 537d15c5-bb30-4d98-9b4f-c825c6125c33
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612222054;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=gjwIZ7tSRC53L46IQqr59AJJ4+W6spW+dFwdwkZvhUU=;
  b=B4oSTp8MF/rSgYgtGoYEqfIsPSii1Uw4J8TOdFTR3YyEoNLwLoWZFVQl
   Q0Mt96rOY2luWb+YSDCwBb9aF9UItv1aiYjfZXJSLvW099t6ddCETjOoQ
   XcTco8cLwssuxYMf051xfX7vfuKRO2w6ZOGU9kLgyfgfyHrzeGbHl7TFP
   g=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: +u4gKq1njIZq4gAJqR+TaWr7sQ2k0elD+4u8/m4xGO9B+F/HKIDmaKjAha0VOR3wqvIR/5cYZ3
 HvzpU6TRS/zfPlKF4jrk2IEeWU8DAIuAomqHevt8rpYcdr0M7w29hvrVQ+VxpS+AWdu0QG7wnL
 EVCHxvgJseACWphXAxAcP7nKsUUEdOaAFvfsrQXNraCsRts3nk68+TjOogGjHLOuy2YTatXmMX
 t3z1vpFA2iu2M0Gu7qZfkTqVnrD9q7Nm/6jTmg5FS4BWLkjTg6kPuGtXu3nnoVM3jjvIsxO//2
 3CE=
X-SBRS: 5.1
X-MesageID: 36319802
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,393,1602561600"; 
   d="scan'208";a="36319802"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: =?UTF-8?q?Micha=C5=82=20Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian
	<kevin.tian@intel.com>, Tamas K Lengyel <tamas@tklengyel.com>
Subject: [PATCH v9 05/11] x86/vmx: Add Intel Processor Trace support
Date: Mon, 1 Feb 2021 23:26:57 +0000
Message-ID: <20210201232703.29275-6-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210201232703.29275-1-andrew.cooper3@citrix.com>
References: <20210201232703.29275-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

From: Michał Leszczyński <michal.leszczynski@cert.pl>

Add CPUID/MSR enumeration details for Processor Trace.  For now, we will only
support its use inside VMX operation.  Fill in the vmtrace_available boolean
to activate the newly introduced common infrastructure for allocating trace
buffers.

For now, Processor Trace is going to be operated in Single Output mode behind
the guest's back.  Add the MSRs to struct vcpu_msrs, and set up the buffer
limit in vmx_init_ipt() as it is fixed for the lifetime of the domain.

Context switch most of the MSRs in and out of vCPU context, but the main
control register needs to reside in the MSR load/save lists.  Explicitly pull
the msrs pointer out into a local variable, because the optimiser cannot keep
it live across the memory clobbers in the MSR accesses.
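
The Single Output layout this relies on can be illustrated with a small
standalone sketch (not Xen source): on little-endian x86, MSR_RTIT_OUTPUT_MASK
packs the size mask in bits 31:0 and the current write offset in bits 63:32,
which is why the raw MSR value can be aliased with an
{ output_limit, output_offset } pair as done in struct vcpu_msrs below.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Illustrative only: mirrors the union used for MSR_RTIT_OUTPUT_MASK.
 * Assumes a little-endian layout, so output_limit aliases bits 31:0 of
 * the raw MSR and output_offset aliases bits 63:32.
 */
typedef union {
    uint64_t output_mask;
    struct {
        uint32_t output_limit;   /* buffer size - 1; must be 2^n - 1 */
        uint32_t output_offset;  /* next byte hardware will write */
    };
} rtit_output_mask_t;

/* Pack limit/offset into the raw 64-bit MSR value. */
static uint64_t rtit_pack(uint32_t limit, uint32_t offset)
{
    rtit_output_mask_t m = { .output_limit = limit,
                             .output_offset = offset };
    return m.output_mask;
}

/* Recover the current output offset from a raw MSR read. */
static uint32_t rtit_offset(uint64_t raw)
{
    rtit_output_mask_t m = { .output_mask = raw };
    return m.output_offset;
}
```

This is also why vmx_init_ipt() below only needs to write size - 1 into
output_limit once at vCPU initialisation.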

Signed-off-by: Michał Leszczyński <michal.leszczynski@cert.pl>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Jun Nakajima <jun.nakajima@intel.com>
CC: Kevin Tian <kevin.tian@intel.com>
CC: Michał Leszczyński <michal.leszczynski@cert.pl>
CC: Tamas K Lengyel <tamas@tklengyel.com>

v8:
 * Plumb bsp boolean down into vmx_init_vmcs_config()
 * Rename vmx_init_pt() to vmx_init_ipt()
 * Rebase over vmtrace_size/pg renames.

v7:
 * Major chop&change within the series.
 * Move MSRs to vcpu_msrs, which is where they'll definitely want to live when
   we offer PT to VMs for their own use.
---
 tools/libs/light/libxl_cpuid.c              |  1 +
 tools/misc/xen-cpuid.c                      |  2 +-
 xen/arch/x86/hvm/vmx/vmcs.c                 | 19 +++++++++++++---
 xen/arch/x86/hvm/vmx/vmx.c                  | 34 ++++++++++++++++++++++++++++-
 xen/include/asm-x86/cpufeature.h            |  1 +
 xen/include/asm-x86/hvm/vmx/vmcs.h          |  4 ++++
 xen/include/asm-x86/msr.h                   | 32 +++++++++++++++++++++++++++
 xen/include/public/arch-x86/cpufeatureset.h |  1 +
 8 files changed, 89 insertions(+), 5 deletions(-)

diff --git a/tools/libs/light/libxl_cpuid.c b/tools/libs/light/libxl_cpuid.c
index 259612834e..289c59c742 100644
--- a/tools/libs/light/libxl_cpuid.c
+++ b/tools/libs/light/libxl_cpuid.c
@@ -188,6 +188,7 @@ int libxl_cpuid_parse_config(libxl_cpuid_policy_list *cpuid, const char* str)
         {"avx512-ifma",  0x00000007,  0, CPUID_REG_EBX, 21,  1},
         {"clflushopt",   0x00000007,  0, CPUID_REG_EBX, 23,  1},
         {"clwb",         0x00000007,  0, CPUID_REG_EBX, 24,  1},
+        {"proc-trace",   0x00000007,  0, CPUID_REG_EBX, 25,  1},
         {"avx512pf",     0x00000007,  0, CPUID_REG_EBX, 26,  1},
         {"avx512er",     0x00000007,  0, CPUID_REG_EBX, 27,  1},
         {"avx512cd",     0x00000007,  0, CPUID_REG_EBX, 28,  1},
diff --git a/tools/misc/xen-cpuid.c b/tools/misc/xen-cpuid.c
index c81aa93055..2d04162d8d 100644
--- a/tools/misc/xen-cpuid.c
+++ b/tools/misc/xen-cpuid.c
@@ -106,7 +106,7 @@ static const char *const str_7b0[32] =
     [18] = "rdseed",   [19] = "adx",
     [20] = "smap",     [21] = "avx512-ifma",
     [22] = "pcommit",  [23] = "clflushopt",
-    [24] = "clwb",     [25] = "pt",
+    [24] = "clwb",     [25] = "proc-trace",
     [26] = "avx512pf", [27] = "avx512er",
     [28] = "avx512cd", [29] = "sha",
     [30] = "avx512bw", [31] = "avx512vl",
diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 164535f8f0..f9f9bc18cd 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -243,7 +243,7 @@ static bool_t cap_check(const char *name, u32 expected, u32 saw)
     return saw != expected;
 }
 
-static int vmx_init_vmcs_config(void)
+static int vmx_init_vmcs_config(bool bsp)
 {
     u32 vmx_basic_msr_low, vmx_basic_msr_high, min, opt;
     u32 _vmx_pin_based_exec_control;
@@ -291,6 +291,20 @@ static int vmx_init_vmcs_config(void)
         _vmx_cpu_based_exec_control &=
             ~(CPU_BASED_CR8_LOAD_EXITING | CPU_BASED_CR8_STORE_EXITING);
 
+    rdmsrl(MSR_IA32_VMX_MISC, _vmx_misc_cap);
+
+    /* Check whether IPT is supported in VMX operation. */
+    if ( bsp )
+        vmtrace_available = cpu_has_proc_trace &&
+                            (_vmx_misc_cap & VMX_MISC_PROC_TRACE);
+    else if ( vmtrace_available &&
+              !(_vmx_misc_cap & VMX_MISC_PROC_TRACE) )
+    {
+        printk("VMX: IPT capabilities differ between CPU%u and BSP\n",
+               smp_processor_id());
+        return -EINVAL;
+    }
+
     if ( _vmx_cpu_based_exec_control & CPU_BASED_ACTIVATE_SECONDARY_CONTROLS )
     {
         min = 0;
@@ -305,7 +319,6 @@ static int vmx_init_vmcs_config(void)
                SECONDARY_EXEC_ENABLE_VIRT_EXCEPTIONS |
                SECONDARY_EXEC_XSAVES |
                SECONDARY_EXEC_TSC_SCALING);
-        rdmsrl(MSR_IA32_VMX_MISC, _vmx_misc_cap);
         if ( _vmx_misc_cap & VMX_MISC_VMWRITE_ALL )
             opt |= SECONDARY_EXEC_ENABLE_VMCS_SHADOWING;
         if ( opt_vpid_enabled )
@@ -715,7 +728,7 @@ static int _vmx_cpu_up(bool bsp)
         wrmsr(MSR_IA32_FEATURE_CONTROL, eax, 0);
     }
 
-    if ( (rc = vmx_init_vmcs_config()) != 0 )
+    if ( (rc = vmx_init_vmcs_config(bsp)) != 0 )
         return rc;
 
     INIT_LIST_HEAD(&this_cpu(active_vmcs_list));
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 2d4475ee3d..12b961113e 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -428,6 +428,20 @@ static void vmx_domain_relinquish_resources(struct domain *d)
     vmx_free_vlapic_mapping(d);
 }
 
+static void vmx_init_ipt(struct vcpu *v)
+{
+    unsigned int size = v->domain->vmtrace_size;
+
+    if ( !size )
+        return;
+
+    /* Checked by domain creation logic. */
+    ASSERT(v->vmtrace.pg);
+    ASSERT(size >= PAGE_SIZE && (size & (size - 1)) == 0);
+
+    v->arch.msrs->rtit.output_limit = size - 1;
+}
+
 static int vmx_vcpu_initialise(struct vcpu *v)
 {
     int rc;
@@ -470,6 +484,7 @@ static int vmx_vcpu_initialise(struct vcpu *v)
     }
 
     vmx_install_vlapic_mapping(v);
+    vmx_init_ipt(v);
 
     return 0;
 }
@@ -508,22 +523,39 @@ static void vmx_restore_host_msrs(void)
 
 static void vmx_save_guest_msrs(struct vcpu *v)
 {
+    struct vcpu_msrs *msrs = v->arch.msrs;
+
     /*
      * We cannot cache SHADOW_GS_BASE while the VCPU runs, as it can
      * be updated at any time via SWAPGS, which we cannot trap.
      */
     v->arch.hvm.vmx.shadow_gs = read_gs_shadow();
+
+    if ( v->arch.hvm.vmx.ipt_active )
+    {
+        rdmsrl(MSR_RTIT_OUTPUT_MASK, msrs->rtit.output_mask);
+        rdmsrl(MSR_RTIT_STATUS, msrs->rtit.status);
+    }
 }
 
 static void vmx_restore_guest_msrs(struct vcpu *v)
 {
+    const struct vcpu_msrs *msrs = v->arch.msrs;
+
     write_gs_shadow(v->arch.hvm.vmx.shadow_gs);
     wrmsrl(MSR_STAR,           v->arch.hvm.vmx.star);
     wrmsrl(MSR_LSTAR,          v->arch.hvm.vmx.lstar);
     wrmsrl(MSR_SYSCALL_MASK,   v->arch.hvm.vmx.sfmask);
 
     if ( cpu_has_msr_tsc_aux )
-        wrmsr_tsc_aux(v->arch.msrs->tsc_aux);
+        wrmsr_tsc_aux(msrs->tsc_aux);
+
+    if ( v->arch.hvm.vmx.ipt_active )
+    {
+        wrmsrl(MSR_RTIT_OUTPUT_BASE, page_to_maddr(v->vmtrace.pg));
+        wrmsrl(MSR_RTIT_OUTPUT_MASK, msrs->rtit.output_mask);
+        wrmsrl(MSR_RTIT_STATUS, msrs->rtit.status);
+    }
 }
 
 void vmx_update_cpu_exec_control(struct vcpu *v)
diff --git a/xen/include/asm-x86/cpufeature.h b/xen/include/asm-x86/cpufeature.h
index f62e526a96..33b2257888 100644
--- a/xen/include/asm-x86/cpufeature.h
+++ b/xen/include/asm-x86/cpufeature.h
@@ -105,6 +105,7 @@
 #define cpu_has_clwb            boot_cpu_has(X86_FEATURE_CLWB)
 #define cpu_has_avx512er        boot_cpu_has(X86_FEATURE_AVX512ER)
 #define cpu_has_avx512cd        boot_cpu_has(X86_FEATURE_AVX512CD)
+#define cpu_has_proc_trace      boot_cpu_has(X86_FEATURE_PROC_TRACE)
 #define cpu_has_sha             boot_cpu_has(X86_FEATURE_SHA)
 #define cpu_has_avx512bw        boot_cpu_has(X86_FEATURE_AVX512BW)
 #define cpu_has_avx512vl        boot_cpu_has(X86_FEATURE_AVX512VL)
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index 906810592f..8073af323b 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -156,6 +156,9 @@ struct vmx_vcpu {
     /* Do we need to tolerate a spurious EPT_MISCONFIG VM exit? */
     bool_t               ept_spurious_misconfig;
 
+    /* Processor Trace configured and enabled for the vcpu. */
+    bool                 ipt_active;
+
     /* Is the guest in real mode? */
     uint8_t              vmx_realmode;
     /* Are we emulating rather than VMENTERing? */
@@ -283,6 +286,7 @@ extern u32 vmx_secondary_exec_control;
 #define VMX_VPID_INVVPID_SINGLE_CONTEXT_RETAINING_GLOBAL 0x80000000000ULL
 extern u64 vmx_ept_vpid_cap;
 
+#define VMX_MISC_PROC_TRACE                     0x00004000
 #define VMX_MISC_CR3_TARGET                     0x01ff0000
 #define VMX_MISC_VMWRITE_ALL                    0x20000000
 
diff --git a/xen/include/asm-x86/msr.h b/xen/include/asm-x86/msr.h
index 16f95e7344..1d3eca9063 100644
--- a/xen/include/asm-x86/msr.h
+++ b/xen/include/asm-x86/msr.h
@@ -306,6 +306,38 @@ struct vcpu_msrs
         };
     } misc_features_enables;
 
+    /*
+     * 0x00000560 ... 57x - MSR_RTIT_*
+     *
+     * "Real Time Instruction Trace", now called Processor Trace.
+     *
+     * These MSRs are not exposed to guests.  They are controlled by Xen
+     * behind the scenes, when vmtrace is enabled for the domain.
+     *
+     * MSR_RTIT_OUTPUT_BASE is not stored here.  It is fixed per vcpu, and
+     * derived from v->vmtrace.pg.
+     */
+    struct {
+        /*
+         * Placed in the MSR load/save lists.  Only modified by hypercall in
+         * the common case.
+         */
+        uint64_t ctl;
+
+        /*
+         * Updated by hardware in non-root mode.  Synchronised here on vcpu
+         * context switch.
+         */
+        uint64_t status;
+        union {
+            uint64_t output_mask;
+            struct {
+                uint32_t output_limit;
+                uint32_t output_offset;
+            };
+        };
+    } rtit;
+
     /* 0x00000da0 - MSR_IA32_XSS */
     struct {
         uint64_t raw;
diff --git a/xen/include/public/arch-x86/cpufeatureset.h b/xen/include/public/arch-x86/cpufeatureset.h
index 6f7efaad6d..a501479820 100644
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -217,6 +217,7 @@ XEN_CPUFEATURE(SMAP,          5*32+20) /*S  Supervisor Mode Access Prevention */
 XEN_CPUFEATURE(AVX512_IFMA,   5*32+21) /*A  AVX-512 Integer Fused Multiply Add */
 XEN_CPUFEATURE(CLFLUSHOPT,    5*32+23) /*A  CLFLUSHOPT instruction */
 XEN_CPUFEATURE(CLWB,          5*32+24) /*A  CLWB instruction */
+XEN_CPUFEATURE(PROC_TRACE,    5*32+25) /*   Processor Trace */
 XEN_CPUFEATURE(AVX512PF,      5*32+26) /*A  AVX-512 Prefetch Instructions */
 XEN_CPUFEATURE(AVX512ER,      5*32+27) /*A  AVX-512 Exponent & Reciprocal Instrs */
 XEN_CPUFEATURE(AVX512CD,      5*32+28) /*A  AVX-512 Conflict Detection Instrs */
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Feb 01 23:27:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 23:27:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80194.146554 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6ibl-0007je-74; Mon, 01 Feb 2021 23:27:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80194.146554; Mon, 01 Feb 2021 23:27:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6ibl-0007jR-2F; Mon, 01 Feb 2021 23:27:49 +0000
Received: by outflank-mailman (input) for mailman id 80194;
 Mon, 01 Feb 2021 23:27:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IWGu=HD=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l6ibj-0007P6-R7
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 23:27:47 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c06dd5f6-d926-4d97-b9e7-c3c1e9e49841;
 Mon, 01 Feb 2021 23:27:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c06dd5f6-d926-4d97-b9e7-c3c1e9e49841
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612222053;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=TDLzcFIKoSOVt0T44fLv76oX9OLOeZU1LgI0QoM4uAM=;
  b=br7pFkh3Yxfga1vsyta+ZOBhdcbIx/sVkdnoQRXdqLZsvDV2hJsgSznC
   lvscJ8DtUlvsY6yofYg50pOg2YnO+oDu9emseILjH+xUag6uczt9jnBXd
   OHNylUS56fBramPCa6tv//Ii6NEtHlN4MmB+p2D3E9u0jDsNiMVTemq6p
   I=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: mNnxg+ANT/zlDaBh9WfHRXCilKPBs3sVM8W70uTCXls00Pxz1RVTGxZKZf2zugHUSi7fZZlsSy
 xFXIeUqtRd0scZ5Unh5CP3+us5n+2eJqpEaN8TcBk+yQrm/yNVJzmRVS5QwSTsqnvk+4Q+6aMu
 6HVmbgb3qG+lSg1Ejo0yJjLWTwGOO8KcZQ8fuZ9ZAtAOS88HIzxQXgscE4uH4GinvWkShfH5Ol
 fYtlm6BCE434KxkSJzvp667yuUrtbo2QqVlp8RE4r7sEU7T6HhIhEB2eI8WsC12Z1DCrD20HyP
 OUg=
X-SBRS: 5.1
X-MesageID: 36319801
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,393,1602561600"; 
   d="scan'208";a="36319801"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: =?UTF-8?q?Micha=C5=82=20Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian
	<kevin.tian@intel.com>, Tamas K Lengyel <tamas@tklengyel.com>
Subject: [PATCH v9 06/11] xen/domctl: Add XEN_DOMCTL_vmtrace_op
Date: Mon, 1 Feb 2021 23:26:58 +0000
Message-ID: <20210201232703.29275-7-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210201232703.29275-1-andrew.cooper3@citrix.com>
References: <20210201232703.29275-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

From: Michał Leszczyński <michal.leszczynski@cert.pl>

Implement an interface to configure and control tracing operations.  Reuse the
existing SETDEBUGGING flask vector rather than inventing a new one.

Userspace using this interface is going to need platform specific knowledge
anyway to interpret the contents of the trace buffer.  While some operations
(e.g. enable/disable) can reasonably be generic, others cannot.  Provide an
explicitly platform-specific pair of get/set operations to reduce API churn as
new options get added/enabled.

For the VMX specific Processor Trace implementation, tolerate reading and
modifying a safe subset of bits in CTL, STATUS and OUTPUT_MASK.  This permits
userspace to control the content which gets logged, but prevents modification
of details such as the position/size of the output buffer.
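
The filtering rule can be sketched in isolation (illustrative code, not Xen
source; the bit positions follow the Intel PT MSR layout, and the real
definitions live in Xen's msr-index.h): any request touching a bit outside
the safe subset is rejected outright, otherwise only the masked bits are
merged into the stored value.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative subset of MSR_RTIT_CTL bits (Intel PT layout). */
#define RTIT_CTL_TRACE_EN   (1ULL << 0)
#define RTIT_CTL_OS         (1ULL << 2)
#define RTIT_CTL_USR        (1ULL << 3)
#define RTIT_CTL_TOPA       (1ULL << 8)  /* buffer layout: deliberately excluded */

/* Bits a vmtrace agent is allowed to see and modify. */
#define CTL_SAFE_MASK  (RTIT_CTL_TRACE_EN | RTIT_CTL_OS | RTIT_CTL_USR)

/*
 * Sketch of the set_option filtering: -1 stands in for -EINVAL.
 * Bits outside CTL_SAFE_MASK (e.g. ToPA enable) can never be set or
 * cleared through this path, protecting the buffer configuration.
 */
static int set_ctl_filtered(uint64_t *ctl, uint64_t value)
{
    if ( value & ~CTL_SAFE_MASK )
        return -1;

    *ctl = (*ctl & ~CTL_SAFE_MASK) | value;
    return 0;
}
```

The get side is symmetrical: reads are masked with the same constant, so
hidden bits are never reported to userspace either.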

Signed-off-by: Michał Leszczyński <michal.leszczynski@cert.pl>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Jun Nakajima <jun.nakajima@intel.com>
CC: Kevin Tian <kevin.tian@intel.com>
CC: Michał Leszczyński <michal.leszczynski@cert.pl>
CC: Tamas K Lengyel <tamas@tklengyel.com>

v9:
 * Drop platform-specific access to MSR_RTIT_OUTPUT_MASK.  It's not necessary
   for current usecases, and will simplify adding ToPA support in the future.

v8:
 * Reposition mask constants.

v7:
 * Major chop&change within the series.
---
 xen/arch/x86/domctl.c         |  55 +++++++++++++++++
 xen/arch/x86/hvm/vmx/vmx.c    | 135 ++++++++++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/hvm/hvm.h |  63 ++++++++++++++++++++
 xen/include/public/domctl.h   |  35 +++++++++++
 xen/xsm/flask/hooks.c         |   1 +
 5 files changed, 289 insertions(+)

diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index b28cfe9817..b464465230 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -155,6 +155,55 @@ void arch_get_domain_info(const struct domain *d,
     info->arch_config.emulation_flags = d->arch.emulation_flags;
 }
 
+static int do_vmtrace_op(struct domain *d, struct xen_domctl_vmtrace_op *op,
+                         XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
+{
+    struct vcpu *v;
+    int rc;
+
+    if ( !d->vmtrace_size || d == current->domain /* No vcpu_pause() */ )
+        return -EINVAL;
+
+    ASSERT(is_hvm_domain(d)); /* Restricted by domain creation logic. */
+
+    v = domain_vcpu(d, op->vcpu);
+    if ( !v )
+        return -ENOENT;
+
+    vcpu_pause(v);
+    switch ( op->cmd )
+    {
+    case XEN_DOMCTL_vmtrace_enable:
+    case XEN_DOMCTL_vmtrace_disable:
+    case XEN_DOMCTL_vmtrace_reset_and_enable:
+        rc = hvm_vmtrace_control(
+            v, op->cmd != XEN_DOMCTL_vmtrace_disable,
+            op->cmd == XEN_DOMCTL_vmtrace_reset_and_enable);
+        break;
+
+    case XEN_DOMCTL_vmtrace_output_position:
+        rc = hvm_vmtrace_output_position(v, &op->value);
+        if ( rc >= 0 )
+            rc = 0;
+        break;
+
+    case XEN_DOMCTL_vmtrace_get_option:
+        rc = hvm_vmtrace_get_option(v, op->key, &op->value);
+        break;
+
+    case XEN_DOMCTL_vmtrace_set_option:
+        rc = hvm_vmtrace_set_option(v, op->key, op->value);
+        break;
+
+    default:
+        rc = -EOPNOTSUPP;
+        break;
+    }
+    vcpu_unpause(v);
+
+    return rc;
+}
+
 #define MAX_IOPORTS 0x10000
 
 long arch_do_domctl(
@@ -1320,6 +1369,12 @@ long arch_do_domctl(
         domain_unpause(d);
         break;
 
+    case XEN_DOMCTL_vmtrace_op:
+        ret = do_vmtrace_op(d, &domctl->u.vmtrace_op, u_domctl);
+        if ( !ret )
+            copyback = true;
+        break;
+
     default:
         ret = iommu_do_domctl(domctl, d, u_domctl);
         break;
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 12b961113e..beb5692b8b 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2261,6 +2261,137 @@ static bool vmx_get_pending_event(struct vcpu *v, struct x86_event *info)
     return true;
 }
 
+/*
+ * We only let vmtrace agents see and modify a subset of bits in MSR_RTIT_CTL.
+ * These all pertain to data emitted into the trace buffer(s).  Must not
+ * include controls pertaining to the structure/position of the trace
+ * buffer(s).
+ */
+#define RTIT_CTL_MASK                                                   \
+    (RTIT_CTL_TRACE_EN | RTIT_CTL_OS | RTIT_CTL_USR | RTIT_CTL_TSC_EN | \
+     RTIT_CTL_DIS_RETC | RTIT_CTL_BRANCH_EN)
+
+/*
+ * Status bits restricted to the first-gen subset (i.e. no further CPUID
+ * requirements).
+ */
+#define RTIT_STATUS_MASK                                                \
+    (RTIT_STATUS_FILTER_EN | RTIT_STATUS_CONTEXT_EN | RTIT_STATUS_TRIGGER_EN | \
+     RTIT_STATUS_ERROR | RTIT_STATUS_STOPPED)
+
+static int vmtrace_get_option(struct vcpu *v, uint64_t key, uint64_t *output)
+{
+    const struct vcpu_msrs *msrs = v->arch.msrs;
+
+    switch ( key )
+    {
+    case MSR_RTIT_CTL:
+        *output = msrs->rtit.ctl & RTIT_CTL_MASK;
+        break;
+
+    case MSR_RTIT_STATUS:
+        *output = msrs->rtit.status & RTIT_STATUS_MASK;
+        break;
+
+    default:
+        *output = 0;
+        return -EINVAL;
+    }
+    return 0;
+}
+
+static int vmtrace_set_option(struct vcpu *v, uint64_t key, uint64_t value)
+{
+    struct vcpu_msrs *msrs = v->arch.msrs;
+    bool new_en, old_en = msrs->rtit.ctl & RTIT_CTL_TRACE_EN;
+
+    switch ( key )
+    {
+    case MSR_RTIT_CTL:
+        if ( value & ~RTIT_CTL_MASK )
+            return -EINVAL;
+
+        msrs->rtit.ctl &= ~RTIT_CTL_MASK;
+        msrs->rtit.ctl |= (value & RTIT_CTL_MASK);
+        break;
+
+    case MSR_RTIT_STATUS:
+        if ( value & ~RTIT_STATUS_MASK )
+            return -EINVAL;
+
+        msrs->rtit.status &= ~RTIT_STATUS_MASK;
+        msrs->rtit.status |= (value & RTIT_STATUS_MASK);
+        break;
+
+    default:
+        return -EINVAL;
+    }
+
+    new_en = msrs->rtit.ctl & RTIT_CTL_TRACE_EN;
+
+    /* ctl.trace_en changed => update MSR load/save lists appropriately. */
+    if ( !old_en && new_en )
+    {
+        if ( vmx_add_guest_msr(v, MSR_RTIT_CTL, msrs->rtit.ctl) ||
+             vmx_add_host_load_msr(v, MSR_RTIT_CTL, 0) )
+        {
+            /*
+             * The only failure cases here are failing the
+             * singleton-per-domain memory allocation, or exceeding the space
+             * in the allocation.  We could unwind in principle, but there is
+             * nothing userspace can usefully do to continue using this VM.
+             */
+            domain_crash(v->domain);
+            return -ENXIO;
+        }
+    }
+    else if ( old_en && !new_en )
+    {
+        vmx_del_msr(v, MSR_RTIT_CTL, VMX_MSR_GUEST);
+        vmx_del_msr(v, MSR_RTIT_CTL, VMX_MSR_HOST);
+    }
+
+    return 0;
+}
+
+static int vmtrace_control(struct vcpu *v, bool enable, bool reset)
+{
+    struct vcpu_msrs *msrs = v->arch.msrs;
+    uint64_t new_ctl;
+    int rc;
+
+    /*
+     * Absolutely nothing good will come of Xen's and userspace's idea of
+     * whether ipt is enabled getting out of sync.
+     */
+    if ( v->arch.hvm.vmx.ipt_active == enable )
+        return -EINVAL;
+
+    if ( reset )
+    {
+        msrs->rtit.status = 0;
+        msrs->rtit.output_offset = 0;
+    }
+
+    new_ctl = msrs->rtit.ctl & ~RTIT_CTL_TRACE_EN;
+    if ( enable )
+        new_ctl |= RTIT_CTL_TRACE_EN;
+
+    rc = vmtrace_set_option(v, MSR_RTIT_CTL, new_ctl);
+    if ( rc )
+        return rc;
+
+    v->arch.hvm.vmx.ipt_active = enable;
+
+    return 0;
+}
+
+static int vmtrace_output_position(struct vcpu *v, uint64_t *pos)
+{
+    *pos = v->arch.msrs->rtit.output_offset;
+    return v->arch.hvm.vmx.ipt_active;
+}
+
 static struct hvm_function_table __initdata vmx_function_table = {
     .name                 = "VMX",
     .cpu_up_prepare       = vmx_cpu_up_prepare,
@@ -2316,6 +2447,10 @@ static struct hvm_function_table __initdata vmx_function_table = {
     .altp2m_vcpu_update_vmfunc_ve = vmx_vcpu_update_vmfunc_ve,
     .altp2m_vcpu_emulate_ve = vmx_vcpu_emulate_ve,
     .altp2m_vcpu_emulate_vmfunc = vmx_vcpu_emulate_vmfunc,
+    .vmtrace_control = vmtrace_control,
+    .vmtrace_output_position = vmtrace_output_position,
+    .vmtrace_set_option = vmtrace_set_option,
+    .vmtrace_get_option = vmtrace_get_option,
     .tsc_scaling = {
         .max_ratio = VMX_TSC_MULTIPLIER_MAX,
     },
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index 334bd573b9..960ec03917 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -214,6 +214,12 @@ struct hvm_function_table {
     bool_t (*altp2m_vcpu_emulate_ve)(struct vcpu *v);
     int (*altp2m_vcpu_emulate_vmfunc)(const struct cpu_user_regs *regs);
 
+    /* vmtrace */
+    int (*vmtrace_control)(struct vcpu *v, bool enable, bool reset);
+    int (*vmtrace_output_position)(struct vcpu *v, uint64_t *pos);
+    int (*vmtrace_set_option)(struct vcpu *v, uint64_t key, uint64_t value);
+    int (*vmtrace_get_option)(struct vcpu *v, uint64_t key, uint64_t *value);
+
     /*
      * Parameters and callbacks for hardware-assisted TSC scaling,
      * which are valid only when the hardware feature is available.
@@ -655,6 +661,41 @@ static inline bool altp2m_vcpu_emulate_ve(struct vcpu *v)
     return false;
 }
 
+static inline int hvm_vmtrace_control(struct vcpu *v, bool enable, bool reset)
+{
+    if ( hvm_funcs.vmtrace_control )
+        return hvm_funcs.vmtrace_control(v, enable, reset);
+
+    return -EOPNOTSUPP;
+}
+
+/* Returns -errno, or a boolean of whether tracing is currently active. */
+static inline int hvm_vmtrace_output_position(struct vcpu *v, uint64_t *pos)
+{
+    if ( hvm_funcs.vmtrace_output_position )
+        return hvm_funcs.vmtrace_output_position(v, pos);
+
+    return -EOPNOTSUPP;
+}
+
+static inline int hvm_vmtrace_set_option(
+    struct vcpu *v, uint64_t key, uint64_t value)
+{
+    if ( hvm_funcs.vmtrace_set_option )
+        return hvm_funcs.vmtrace_set_option(v, key, value);
+
+    return -EOPNOTSUPP;
+}
+
+static inline int hvm_vmtrace_get_option(
+    struct vcpu *v, uint64_t key, uint64_t *value)
+{
+    if ( hvm_funcs.vmtrace_get_option )
+        return hvm_funcs.vmtrace_get_option(v, key, value);
+
+    return -EOPNOTSUPP;
+}
+
 /*
  * This must be defined as a macro instead of an inline function,
  * because it uses 'struct vcpu' and 'struct domain' which have
@@ -751,6 +792,28 @@ static inline bool hvm_has_set_descriptor_access_exiting(void)
     return false;
 }
 
+static inline int hvm_vmtrace_control(struct vcpu *v, bool enable, bool reset)
+{
+    return -EOPNOTSUPP;
+}
+
+static inline int hvm_vmtrace_output_position(struct vcpu *v, uint64_t *pos)
+{
+    return -EOPNOTSUPP;
+}
+
+static inline int hvm_vmtrace_set_option(
+    struct vcpu *v, uint64_t key, uint64_t value)
+{
+    return -EOPNOTSUPP;
+}
+
+static inline int hvm_vmtrace_get_option(
+    struct vcpu *v, uint64_t key, uint64_t *value)
+{
+    return -EOPNOTSUPP;
+}
+
 #define is_viridian_domain(d) ((void)(d), false)
 #define is_viridian_vcpu(v) ((void)(v), false)
 #define has_viridian_time_ref_count(d) ((void)(d), false)
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 88a5b1ef5d..4dbf107785 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -1135,6 +1135,39 @@ struct xen_domctl_vuart_op {
                                  */
 };
 
+/* XEN_DOMCTL_vmtrace_op: Perform VM tracing operations. */
+struct xen_domctl_vmtrace_op {
+    uint32_t cmd;           /* IN */
+    uint32_t vcpu;          /* IN */
+    uint64_aligned_t key;   /* IN     - @cmd specific data. */
+    uint64_aligned_t value; /* IN/OUT - @cmd specific data. */
+
+    /*
+     * General enable/disable of tracing.
+     *
+     * XEN_DOMCTL_vmtrace_reset_and_enable is provided as an optimisation
+     * for the common use case of resetting status and position information
+     * when turning tracing back on.
+     */
+#define XEN_DOMCTL_vmtrace_enable             1
+#define XEN_DOMCTL_vmtrace_disable            2
+#define XEN_DOMCTL_vmtrace_reset_and_enable   3
+
+    /* Obtain the current output position within the buffer.  Fills @value. */
+#define XEN_DOMCTL_vmtrace_output_position    4
+
+    /*
+     * Get/Set platform specific configuration.
+     *
+     * For Intel Processor Trace, @key/@value are interpreted as MSR
+     * reads/writes to MSR_RTIT_*, filtered to a safe subset.
+     */
+#define XEN_DOMCTL_vmtrace_get_option         5
+#define XEN_DOMCTL_vmtrace_set_option         6
+};
+typedef struct xen_domctl_vmtrace_op xen_domctl_vmtrace_op_t;
+DEFINE_XEN_GUEST_HANDLE(xen_domctl_vmtrace_op_t);
+
 struct xen_domctl {
     uint32_t cmd;
 #define XEN_DOMCTL_createdomain                   1
@@ -1219,6 +1252,7 @@ struct xen_domctl {
 #define XEN_DOMCTL_vuart_op                      81
 #define XEN_DOMCTL_get_cpu_policy                82
 #define XEN_DOMCTL_set_cpu_policy                83
+#define XEN_DOMCTL_vmtrace_op                    84
 #define XEN_DOMCTL_gdbsx_guestmemio            1000
 #define XEN_DOMCTL_gdbsx_pausevcpu             1001
 #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
@@ -1279,6 +1313,7 @@ struct xen_domctl {
         struct xen_domctl_monitor_op        monitor_op;
         struct xen_domctl_psr_alloc         psr_alloc;
         struct xen_domctl_vuart_op          vuart_op;
+        struct xen_domctl_vmtrace_op        vmtrace_op;
         uint8_t                             pad[128];
     } u;
 };
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 11784d7425..3b7313b949 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -703,6 +703,7 @@ static int flask_domctl(struct domain *d, int cmd)
         return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__VM_EVENT);
 
     case XEN_DOMCTL_debug_op:
+    case XEN_DOMCTL_vmtrace_op:
     case XEN_DOMCTL_gdbsx_guestmemio:
     case XEN_DOMCTL_gdbsx_pausevcpu:
     case XEN_DOMCTL_gdbsx_unpausevcpu:
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Feb 01 23:27:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 23:27:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80195.146566 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6ibm-0007nU-Rq; Mon, 01 Feb 2021 23:27:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80195.146566; Mon, 01 Feb 2021 23:27:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6ibm-0007nD-ME; Mon, 01 Feb 2021 23:27:50 +0000
Received: by outflank-mailman (input) for mailman id 80195;
 Mon, 01 Feb 2021 23:27:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IWGu=HD=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l6ibk-0007PB-SZ
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 23:27:48 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7290a8cc-2697-4c86-a1de-a9c8fc86c84b;
 Mon, 01 Feb 2021 23:27:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7290a8cc-2697-4c86-a1de-a9c8fc86c84b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612222054;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=vuuwq+6fSuNyCmrq1LxmH6Aqiqy1A+s02+fVKEkgO+4=;
  b=OatCVzKqX4fMD5FwX+vlL5/K9WYg5uS95lqVNRHI1JBpYR/9PJR+4Aiw
   CwtErrZTlER4W/8a/x+HSJny943Z4U8VdmRGBBtK5zhFaaWSc/sdZQrUs
   GXNjEncR2OPQ40JeComWBDMh5Ot6t6X/6oTFx3Bi4TNhDfwPyUOngv4+l
   c=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: +RXHQME0JbXlIVKVhTdFc35TdYsQ/l2+XYnNQNL2+IQShsdxGo+SWoIc/Ds9PwB0Zr4BG4pIKp
 Iix87JetSgQNyyjwFqcco0J39yiqp/ruPnzu8SfPsVAqwNktkQXo7SWw2fdrwhIV5tCMM/3/zl
 ZuuYJjLQS7PSTcx5/Yt5OsBYAVPWFRusTVR/dWiAqwosaKQXFrvl/ulK3bxY+cqE0SPkQUuorV
 pQR6oFYYcmRRfgkOapMEnraQT6udL4EcGDRyFeNceBYZuon7OHNv07Go66OFDY7X9/PbVGJTS8
 aro=
X-SBRS: 5.1
X-MesageID: 36319805
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,393,1602561600"; 
   d="scan'208";a="36319805"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Ian Jackson
	<iwj@xenproject.org>, Anthony PERARD <anthony.perard@citrix.com>, "Jun
 Nakajima" <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>,
	=?UTF-8?q?Micha=C5=82=20Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>,
	Tamas K Lengyel <tamas@tklengyel.com>
Subject: [PATCH v9 00/11] acquire_resource size and external IPT monitoring
Date: Mon, 1 Feb 2021 23:26:52 +0000
Message-ID: <20210201232703.29275-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Combined series (as they are dependent).  First, the resource size fixes, and
then the external IPT monitoring built on top.  Some patches got committed
before the feature freeze date last Friday.  This is the remainder.

Everything is suitably reviewed now, unless anyone has any last minute urgent
issues.

Therefore, I'd like to request a release exception.

Patch 1 is a bugfix, and the last in a long line of fixes to the
acquire_resource hypercall.  Technically it ought not to need a release ack at
this point.

The rest of the patches are a feature, originally contributed by CERT.PL for a
project they are working on, which got blocked for reasons outside of their
control (blocked on my acquire_resource fixes, and the extreme quantity of
security work this release cycle).

Intel Processor Trace is a debugging/diagnostic feature, which allows for
reconstruction of the exact execution path of the target.  As implemented
here, a monitoring agent can trace execution within the guest.

There are two production users of this already.

1) KFX - https://github.com/intel/kernel-fuzzer-for-xen-project

   This project, led by Tamas, is a fuzzer based on Xen, with AFL running in
   dom0, backed by introspection and VMFork/reset for injecting data and
   parallel testing.  It uses IPT (this series) to feed the taken path back to
   AFL, which is far more convenient than recompiling the subject-under-test,
   and far faster than using breakpoints for path reconstruction.

2) Drakvuf Sandbox - https://github.com/CERT-Polska/drakvuf-sandbox

   This project, led by a team at CERT, is an automated malware-analysis SaaS
   offering, which inspects suspicious files and attempts to provoke them into
   extracting their payload, with introspection stepping in once the payload
   is fully unpacked, to inspect and classify the malware.

Both are very exciting projects, and the addition of IPT support like this
helps keep Xen at the forefront of hypervisor introspection technologies.

When I've got enough free time to do some paperwork, I intend to add IPT as a
tech preview (in particular, there are some hardware errata which concern me,
and an as-yet uninvestigated exclusion vs LBR as a hardware restriction).

It has active downstream users and extensive testing, and is fairly isolated
in terms of interactions with the rest of Xen, so the chance of a showstopper
affecting other features is very slim.


Andrew Cooper (1):
  xen/memory: Fix mapping grant tables with XENMEM_acquire_resource

Michał Leszczyński (7):
  xen/domain: Add vmtrace_size domain creation parameter
  tools/[lib]xl: Add vmtrace_buf_size parameter
  xen/memory: Add a vmtrace_buf resource type
  x86/vmx: Add Intel Processor Trace support
  xen/domctl: Add XEN_DOMCTL_vmtrace_op
  tools/libxc: Add xc_vmtrace_* functions
  tools/misc: Add xen-vmtrace tool

Tamas K Lengyel (3):
  xen/vmtrace: support for VM forks
  x86/vm_event: Carry the vmtrace buffer position in vm_event
  x86/vm_event: add response flag to reset vmtrace buffer

 docs/man/xl.cfg.5.pod.in                    |   9 ++
 tools/golang/xenlight/helpers.gen.go        |   2 +
 tools/golang/xenlight/types.gen.go          |   1 +
 tools/include/libxl.h                       |   7 ++
 tools/include/xenctrl.h                     |  73 +++++++++++
 tools/libs/ctrl/Makefile                    |   1 +
 tools/libs/ctrl/xc_vmtrace.c                | 128 ++++++++++++++++++++
 tools/libs/light/libxl_cpuid.c              |   1 +
 tools/libs/light/libxl_create.c             |   1 +
 tools/libs/light/libxl_types.idl            |   4 +
 tools/misc/.gitignore                       |   1 +
 tools/misc/Makefile                         |   7 ++
 tools/misc/xen-cpuid.c                      |   2 +-
 tools/misc/xen-vmtrace.c                    | 166 +++++++++++++++++++++++++
 tools/xl/xl_parse.c                         |   4 +
 xen/arch/x86/domain.c                       |  23 ++++
 xen/arch/x86/domctl.c                       |  55 +++++++++
 xen/arch/x86/hvm/vmx/vmcs.c                 |  19 ++-
 xen/arch/x86/hvm/vmx/vmx.c                  | 180 +++++++++++++++++++++++++++-
 xen/arch/x86/mm/mem_sharing.c               |   3 +
 xen/arch/x86/vm_event.c                     |  10 ++
 xen/common/compat/memory.c                  | 114 ++++++++++++++----
 xen/common/domain.c                         |  64 ++++++++++
 xen/common/grant_table.c                    |   3 +
 xen/common/memory.c                         | 153 ++++++++++++++++++-----
 xen/common/vm_event.c                       |   3 +
 xen/include/asm-arm/vm_event.h              |   6 +
 xen/include/asm-x86/cpufeature.h            |   1 +
 xen/include/asm-x86/hvm/hvm.h               |  72 +++++++++++
 xen/include/asm-x86/hvm/vmx/vmcs.h          |   4 +
 xen/include/asm-x86/msr.h                   |  32 +++++
 xen/include/asm-x86/vm_event.h              |   2 +
 xen/include/public/arch-x86/cpufeatureset.h |   1 +
 xen/include/public/domctl.h                 |  38 ++++++
 xen/include/public/memory.h                 |   1 +
 xen/include/public/vm_event.h               |  11 ++
 xen/include/xen/sched.h                     |   6 +
 xen/xsm/flask/hooks.c                       |   1 +
 38 files changed, 1150 insertions(+), 59 deletions(-)
 create mode 100644 tools/libs/ctrl/xc_vmtrace.c
 create mode 100644 tools/misc/xen-vmtrace.c

-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Feb 01 23:27:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 23:27:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80196.146578 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6ibq-0007tk-98; Mon, 01 Feb 2021 23:27:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80196.146578; Mon, 01 Feb 2021 23:27:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6ibq-0007tY-4m; Mon, 01 Feb 2021 23:27:54 +0000
Received: by outflank-mailman (input) for mailman id 80196;
 Mon, 01 Feb 2021 23:27:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IWGu=HD=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l6ibo-0007P6-RR
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 23:27:52 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4974a30f-43f4-47f0-bfa7-ad5966e08e58;
 Mon, 01 Feb 2021 23:27:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4974a30f-43f4-47f0-bfa7-ad5966e08e58
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612222055;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=ooOeFTFX01JiGDLs2zOj6QNxJzo/EvnmXjy1QpJOiP4=;
  b=a+NT2DtUex+pSs0j0IktlIAZRKnsKiiEVDrDBiSiBjUQRJkMY1VS7Toz
   upWRHN2MSqbcsR8p6Z3l67d05V4qo/zQ6wCb2T5YAc7NuxR88ebW0UQQR
   MwRBLXl/njKInQefWZca0Xf7VE2DKx7/Ci6yDNqwejWJ6znqEGi36vImI
   s=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: GkpqpvXlBS0xQRr0Iymb1U4xQHvVP8qukJrzdmUmf5j2EtFLx/ZR8st79ap0XebBopMenwStB5
 YK4Nc2TFUMM7caQwQ9KmRudisXuuPmgAMGDyWHmyj4kR0TI2fncs39KX6OYO15Myz6g4Uukf5D
 j+IW0IFwoqJ1NWMxWZqsHsCLZJueQJr3k4gV+0BIfQCpahOKXaVIujqLCKUAeSe9xR9eL4g8kO
 vNQXK/w3Hil2eZd/f0n2wixXZ9LeuawymIziKwkEdS1rUXuUemWdeYzeJvtVaiVWhIP0bH/HY1
 Leo=
X-SBRS: 5.1
X-MesageID: 36319806
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,393,1602561600"; 
   d="scan'208";a="36319806"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Ian Jackson <iwj@xenproject.org>, Jan Beulich
	<JBeulich@suse.com>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>, Julien Grall <julien@xen.org>, Paul Durrant <paul@xen.org>,
	=?UTF-8?q?Micha=C5=82=20Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>,
	Hubert Jasudowicz <hubert.jasudowicz@cert.pl>, Tamas K Lengyel
	<tamas@tklengyel.com>
Subject: [PATCH v9 01/11] xen/memory: Fix mapping grant tables with XENMEM_acquire_resource
Date: Mon, 1 Feb 2021 23:26:53 +0000
Message-ID: <20210201232703.29275-2-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210201232703.29275-1-andrew.cooper3@citrix.com>
References: <20210201232703.29275-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

A guest's default number of grant frames is 64, and XENMEM_acquire_resource
will reject an attempt to map more than 32 frames.  This limit is caused by
the size of mfn_list[] on the stack.

Fix mapping of arbitrary size requests by looping over batches of 32 in
acquire_resource(), and using hypercall continuations when necessary.

To start with, break _acquire_resource() out of acquire_resource() to cope
with type-specific dispatching, and update the return semantics to indicate
the number of mfns returned.  Update gnttab_acquire_resource() and x86's
arch_acquire_resource() to match these new semantics.

Have do_memory_op() pass start_extent into acquire_resource() so it can pick
up where it left off after a continuation, and loop over batches of 32 until
all the work is done, or a continuation needs to occur.

compat_memory_op() is a bit more complicated, because it also has to marshal
frame_list in the XLAT buffer.  Have it account for continuation information
itself and hide details from the upper layer, so it can marshal the buffer in
chunks if necessary.

With these fixes in place, it is now possible to map the whole grant table for
a guest.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
---
CC: George Dunlap <George.Dunlap@eu.citrix.com>
CC: Ian Jackson <iwj@xenproject.org>
CC: Jan Beulich <JBeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Wei Liu <wl@xen.org>
CC: Julien Grall <julien@xen.org>
CC: Paul Durrant <paul@xen.org>
CC: Michał Leszczyński <michal.leszczynski@cert.pl>
CC: Hubert Jasudowicz <hubert.jasudowicz@cert.pl>
CC: Tamas K Lengyel <tamas@tklengyel.com>

v9:
 * Crash domain rather than returning late with -ERANGE/-EFAULT.

v8:
 * nat => cmp change in the start_extent check.
 * Rebase over 'frame' and ARM/IOREQ series.

v3:
 * Spelling fixes
---
 xen/common/compat/memory.c | 114 +++++++++++++++++++++++++++++++++--------
 xen/common/grant_table.c   |   3 ++
 xen/common/memory.c        | 124 +++++++++++++++++++++++++++++++++------------
 3 files changed, 187 insertions(+), 54 deletions(-)

diff --git a/xen/common/compat/memory.c b/xen/common/compat/memory.c
index 834c5e19d1..c43fa97cf1 100644
--- a/xen/common/compat/memory.c
+++ b/xen/common/compat/memory.c
@@ -402,23 +402,10 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
         case XENMEM_acquire_resource:
         {
             xen_pfn_t *xen_frame_list = NULL;
-            unsigned int max_nr_frames;
 
             if ( copy_from_guest(&cmp.mar, compat, 1) )
                 return -EFAULT;
 
-            /*
-             * The number of frames handled is currently limited to a
-             * small number by the underlying implementation, so the
-             * scratch space should be sufficient for bouncing the
-             * frame addresses.
-             */
-            max_nr_frames = (COMPAT_ARG_XLAT_SIZE - sizeof(*nat.mar)) /
-                sizeof(*xen_frame_list);
-
-            if ( cmp.mar.nr_frames > max_nr_frames )
-                return -E2BIG;
-
             /* Marshal the frame list in the remainder of the xlat space. */
             if ( !compat_handle_is_null(cmp.mar.frame_list) )
                 xen_frame_list = (xen_pfn_t *)(nat.mar + 1);
@@ -432,6 +419,28 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
 
             if ( xen_frame_list && cmp.mar.nr_frames )
             {
+                unsigned int xlat_max_frames =
+                    (COMPAT_ARG_XLAT_SIZE - sizeof(*nat.mar)) /
+                    sizeof(*xen_frame_list);
+
+                if ( start_extent >= cmp.mar.nr_frames )
+                    return -EINVAL;
+
+                /*
+                 * Adjust nat to account for work done on previous
+                 * continuations, leaving cmp pristine.  Hide the continuation
+                 * from the native code to prevent double accounting.
+                 */
+                nat.mar->nr_frames -= start_extent;
+                nat.mar->frame += start_extent;
+                cmd &= MEMOP_CMD_MASK;
+
+                /*
+                 * If there are too many frames to fit within the xlat buffer,
+                 * we'll need to loop to marshal them all.
+                 */
+                nat.mar->nr_frames = min(nat.mar->nr_frames, xlat_max_frames);
+
                 /*
                  * frame_list is an input for translated guests, and an output
                  * for untranslated guests.  Only copy in for translated guests.
@@ -444,14 +453,14 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
                                              cmp.mar.nr_frames) ||
                          __copy_from_compat_offset(
                              compat_frame_list, cmp.mar.frame_list,
-                             0, cmp.mar.nr_frames) )
+                             start_extent, nat.mar->nr_frames) )
                         return -EFAULT;
 
                     /*
                      * Iterate backwards over compat_frame_list[] expanding
                      * compat_pfn_t to xen_pfn_t in place.
                      */
-                    for ( int x = cmp.mar.nr_frames - 1; x >= 0; --x )
+                    for ( int x = nat.mar->nr_frames - 1; x >= 0; --x )
                         xen_frame_list[x] = compat_frame_list[x];
                 }
             }
@@ -600,9 +609,11 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
         case XENMEM_acquire_resource:
         {
             DEFINE_XEN_GUEST_HANDLE(compat_mem_acquire_resource_t);
+            unsigned int done;
 
             if ( compat_handle_is_null(cmp.mar.frame_list) )
             {
+                ASSERT(split == 0 && rc == 0);
                 if ( __copy_field_to_guest(
                          guest_handle_cast(compat,
                                            compat_mem_acquire_resource_t),
@@ -611,6 +622,21 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
                 break;
             }
 
+            if ( split < 0 )
+            {
+                /* Continuation occurred. */
+                ASSERT(rc != XENMEM_acquire_resource);
+                done = cmd >> MEMOP_EXTENT_SHIFT;
+            }
+            else
+            {
+                /* No continuation. */
+                ASSERT(rc == 0);
+                done = nat.mar->nr_frames;
+            }
+
+            ASSERT(done <= nat.mar->nr_frames);
+
             /*
              * frame_list is an input for translated guests, and an output for
              * untranslated guests.  Only copy out for untranslated guests.
@@ -626,21 +652,67 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
                  */
                 BUILD_BUG_ON(sizeof(compat_pfn_t) > sizeof(xen_pfn_t));
 
-                for ( i = 0; i < cmp.mar.nr_frames; i++ )
+                rc = 0;
+                for ( i = 0; i < done; i++ )
                 {
                     compat_pfn_t frame = xen_frame_list[i];
 
                     if ( frame != xen_frame_list[i] )
-                        return -ERANGE;
+                    {
+                        rc = -ERANGE;
+                        break;
+                    }
 
                     compat_frame_list[i] = frame;
                 }
 
-                if ( __copy_to_compat_offset(cmp.mar.frame_list, 0,
-                                             compat_frame_list,
-                                             cmp.mar.nr_frames) )
-                    return -EFAULT;
+                if ( !rc && __copy_to_compat_offset(
+                         cmp.mar.frame_list, start_extent,
+                         compat_frame_list, done) )
+                    rc = -EFAULT;
+
+                if ( rc )
+                {
+                    if ( split < 0 )
+                    {
+                        gdprintk(XENLOG_ERR,
+                                 "Cannot cancel continuation: %ld\n", rc);
+                        domain_crash(current->domain);
+                    }
+                    return rc;
+                }
+            }
+
+            start_extent += done;
+
+            /* Completely done. */
+            if ( start_extent == cmp.mar.nr_frames )
+                break;
+
+            /*
+             * Done a "full" batch, but we were limited by space in the xlat
+             * area.  Go around the loop again without necessarily returning
+             * to guest context.
+             */
+            if ( done == nat.mar->nr_frames )
+            {
+                split = 1;
+                break;
             }
+
+            /* Explicit continuation request from a higher level. */
+            if ( done < nat.mar->nr_frames )
+                return hypercall_create_continuation(
+                    __HYPERVISOR_memory_op, "ih",
+                    op | (start_extent << MEMOP_EXTENT_SHIFT), compat);
+
+            /*
+             * Well... Something's gone wrong with the two levels of chunking.
+             * My condolences to whoever next has to debug this mess.
+             */
+            ASSERT_UNREACHABLE();
+            domain_crash(current->domain);
+            split = 0;
             break;
         }
 
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 280b7969b6..b95403695f 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -4053,6 +4053,9 @@ int gnttab_acquire_resource(
     for ( i = 0; i < nr_frames; ++i )
         mfn_list[i] = virt_to_mfn(vaddrs[frame + i]);
 
+    /* Success.  Pass nr_frames back to the caller. */
+    rc = nr_frames;
+
  out:
     grant_write_unlock(gt);
 
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 01cab7e493..128718b31c 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -1118,23 +1118,41 @@ static int acquire_ioreq_server(struct domain *d,
         mfn_list[i] = mfn_x(mfn);
     }
 
-    return 0;
+    /* Success.  Pass nr_frames back to the caller. */
+    return nr_frames;
 #else
     return -EOPNOTSUPP;
 #endif
 }
 
+/*
+ * Returns -errno on error, or positive in the range [1, nr_frames] on
+ * success.  Returning fewer than nr_frames constitutes a request for a
+ * continuation.  Callers can depend on frame + nr_frames not overflowing.
+ */
+static int _acquire_resource(
+    struct domain *d, unsigned int type, unsigned int id, unsigned int frame,
+    unsigned int nr_frames, xen_pfn_t mfn_list[])
+{
+    switch ( type )
+    {
+    case XENMEM_resource_grant_table:
+        return gnttab_acquire_resource(d, id, frame, nr_frames, mfn_list);
+
+    case XENMEM_resource_ioreq_server:
+        return acquire_ioreq_server(d, id, frame, nr_frames, mfn_list);
+
+    default:
+        return -EOPNOTSUPP;
+    }
+}
+
 static int acquire_resource(
-    XEN_GUEST_HANDLE_PARAM(xen_mem_acquire_resource_t) arg)
+    XEN_GUEST_HANDLE_PARAM(xen_mem_acquire_resource_t) arg,
+    unsigned long start_extent)
 {
     struct domain *d, *currd = current->domain;
     xen_mem_acquire_resource_t xmar;
-    /*
-     * The mfn_list and gfn_list (below) arrays are ok on stack for the
-     * moment since they are small, but if they need to grow in future
-     * use-cases then per-CPU arrays or heap allocations may be required.
-     */
-    xen_pfn_t mfn_list[32];
     unsigned int max_frames;
     int rc;
 
@@ -1147,9 +1165,6 @@ static int acquire_resource(
     if ( xmar.pad != 0 )
         return -EINVAL;
 
-    if ( xmar.nr_frames > ARRAY_SIZE(mfn_list) )
-        return -E2BIG;
-
     /*
      * The ABI is rather unfortunate.  nr_frames (and therefore the total size
      * of the resource) is 32bit, while frame (the offset within the resource
@@ -1179,7 +1194,7 @@ static int acquire_resource(
 
     if ( guest_handle_is_null(xmar.frame_list) )
     {
-        if ( xmar.nr_frames )
+        if ( xmar.nr_frames || start_extent )
             goto out;
 
         xmar.nr_frames = max_frames;
@@ -1187,30 +1202,47 @@ static int acquire_resource(
         goto out;
     }
 
-    do {
-        switch ( xmar.type )
-        {
-        case XENMEM_resource_grant_table:
-            rc = gnttab_acquire_resource(d, xmar.id, xmar.frame, xmar.nr_frames,
-                                         mfn_list);
-            break;
+    /*
+     * Limiting nr_frames at (UINT_MAX >> MEMOP_EXTENT_SHIFT) isn't ideal.  If
+     * it ever becomes a practical problem, we can switch to mutating
+     * xmar.{frame,nr_frames,frame_list} in guest memory.
+     */
+    rc = -EINVAL;
+    if ( start_extent >= xmar.nr_frames ||
+         xmar.nr_frames > (UINT_MAX >> MEMOP_EXTENT_SHIFT) )
+        goto out;
 
-        case XENMEM_resource_ioreq_server:
-            rc = acquire_ioreq_server(d, xmar.id, xmar.frame, xmar.nr_frames,
-                                      mfn_list);
-            break;
+    /* Adjust for work done on previous continuations. */
+    xmar.nr_frames -= start_extent;
+    xmar.frame += start_extent;
+    guest_handle_add_offset(xmar.frame_list, start_extent);
 
-        default:
-            rc = -EOPNOTSUPP;
-            break;
-        }
+    do {
+        /*
+         * Arbitrary size.  Not too much stack space, and a reasonable stride
+         * for continuation checks.
+         */
+        xen_pfn_t mfn_list[32];
+        unsigned int todo = MIN(ARRAY_SIZE(mfn_list), xmar.nr_frames), done;
 
-        if ( rc )
+        rc = _acquire_resource(d, xmar.type, xmar.id, xmar.frame,
+                               todo, mfn_list);
+        if ( rc < 0 )
+            goto out;
+
+        done = rc;
+        rc = 0;
+        if ( done == 0 || done > todo )
+        {
+            ASSERT_UNREACHABLE();
+            rc = -EINVAL;
             goto out;
+        }
 
+        /* Adjust guest frame_list appropriately. */
         if ( !paging_mode_translate(currd) )
         {
-            if ( copy_to_guest(xmar.frame_list, mfn_list, xmar.nr_frames) )
+            if ( copy_to_guest(xmar.frame_list, mfn_list, done) )
                 rc = -EFAULT;
         }
         else
@@ -1218,10 +1250,10 @@ static int acquire_resource(
             xen_pfn_t gfn_list[ARRAY_SIZE(mfn_list)];
             unsigned int i;
 
-            if ( copy_from_guest(gfn_list, xmar.frame_list, xmar.nr_frames) )
+            if ( copy_from_guest(gfn_list, xmar.frame_list, done) )
                 rc = -EFAULT;
 
-            for ( i = 0; !rc && i < xmar.nr_frames; i++ )
+            for ( i = 0; !rc && i < done; i++ )
             {
                 rc = set_foreign_p2m_entry(currd, d, gfn_list[i],
                                            _mfn(mfn_list[i]));
@@ -1230,7 +1262,32 @@ static int acquire_resource(
                     rc = -EIO;
             }
         }
-    } while ( 0 );
+
+        if ( rc )
+            goto out;
+
+        xmar.nr_frames -= done;
+        xmar.frame += done;
+        guest_handle_add_offset(xmar.frame_list, done);
+        start_extent += done;
+
+        /*
+         * Explicit continuation request from _acquire_resource(), or we've
+         * still got more work to do.
+         */
+        if ( done < todo ||
+             (xmar.nr_frames && hypercall_preempt_check()) )
+        {
+            rc = hypercall_create_continuation(
+                __HYPERVISOR_memory_op, "lh",
+                XENMEM_acquire_resource | (start_extent << MEMOP_EXTENT_SHIFT),
+                arg);
+            goto out;
+        }
+
+    } while ( xmar.nr_frames );
+
+    rc = 0;
 
  out:
     rcu_unlock_domain(d);
@@ -1697,7 +1754,8 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 
     case XENMEM_acquire_resource:
         rc = acquire_resource(
-            guest_handle_cast(arg, xen_mem_acquire_resource_t));
+            guest_handle_cast(arg, xen_mem_acquire_resource_t),
+            start_extent);
         break;
 
     default:
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Feb 01 23:27:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 23:27:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80197.146590 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6ibr-0007wv-Oq; Mon, 01 Feb 2021 23:27:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80197.146590; Mon, 01 Feb 2021 23:27:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6ibr-0007wh-Gx; Mon, 01 Feb 2021 23:27:55 +0000
Received: by outflank-mailman (input) for mailman id 80197;
 Mon, 01 Feb 2021 23:27:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IWGu=HD=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l6ibp-0007PB-Sp
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 23:27:53 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c308b7bc-9a82-4428-9e41-2107868277a2;
 Mon, 01 Feb 2021 23:27:36 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c308b7bc-9a82-4428-9e41-2107868277a2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612222056;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=njVGh0BG2zEjGyol+qwxOLZnJzEyLf5QMa4qeRMwOaI=;
  b=J3Zc47eOJRMQTCgV3iIkux1S4SyWV86QbIvq6euaYaN+crmQntJDYHpX
   IdB/qGgIAwLdCJaI/CIftkJrnCFVyz5v8Sa9IBljosnh45OlnhgkLBv4L
   L7y7ZheU0QOpS57/ou9Yrus8yigPi480YvNh0wlp33mAcxQ207jxYBvkH
   4=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: SgNfmBguvZHKQHBqSDQjqZXWEQLQaZg26ryWUzmAlwkA4p7i4zv+Ivvf4NSSC0BQ//Zp0QQfZG
 Et15UDHnHhEAjHmzK+4iAVTf+hbKTX4EFhA+h84fmFgcL+N4FySXpL37FYol1YUzbsrJsfDBnl
 TlKbVVWjZr6s9gp0PYWL3Mwz4j1OrI6pLxc0bYFv/FqGtKyLpLq3iKnSsICVbBh59iYysXCRZv
 0tjVSppl+zZGA6hBdZo1kh7BYFKhzYmCHdOk8ho5VWD3s14FxzwPPPosztVuLDpXWqmW2DdplT
 yO0=
X-SBRS: 5.1
X-MesageID: 36319807
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,393,1602561600"; 
   d="scan'208";a="36319807"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: =?UTF-8?q?Micha=C5=82=20Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>, Tamas K Lengyel <tamas@tklengyel.com>
Subject: [PATCH v9 08/11] tools/misc: Add xen-vmtrace tool
Date: Mon, 1 Feb 2021 23:27:00 +0000
Message-ID: <20210201232703.29275-9-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210201232703.29275-1-andrew.cooper3@citrix.com>
References: <20210201232703.29275-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

From: Michał Leszczyński <michal.leszczynski@cert.pl>

Add a demonstration tool which uses the xc_vmtrace_* calls to manage
external IPT monitoring of a DomU.
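The heart of the tool is draining the trace ring buffer: it remembers the last output position, and a position that moved backwards means the buffer wrapped. A minimal Python sketch of that logic (the buffer and positions here are illustrative, not the libxc API):

```python
def drain(buf, size, last_pos, pos):
    """Return the bytes produced since last_pos, handling buffer wrap."""
    if pos > last_pos:
        # Linear case: new data lies between the two positions.
        return buf[last_pos:pos]
    if pos < last_pos:
        # The buffer wrapped: emit the tail, then the refilled head.
        return buf[last_pos:size] + buf[:pos]
    return b""  # No new data.

# Example: an 8-byte ring whose write position wrapped past the end.
buf = b"ABCDEFGH"
print(drain(buf, len(buf), last_pos=6, pos=2))  # b'GHAB'
```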

Signed-off-by: Michał Leszczyński <michal.leszczynski@cert.pl>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Ian Jackson <iwj@xenproject.org>
---
CC: Ian Jackson <iwj@xenproject.org>
CC: Wei Liu <wl@xen.org>
CC: Michał Leszczyński <michal.leszczynski@cert.pl>
CC: Tamas K Lengyel <tamas@tklengyel.com>

v9:
 * Fix truncated data on clean exit

v8:
 * Switch to being a build-only target
---
 tools/misc/.gitignore    |   1 +
 tools/misc/Makefile      |   7 ++
 tools/misc/xen-vmtrace.c | 166 +++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 174 insertions(+)
 create mode 100644 tools/misc/xen-vmtrace.c

diff --git a/tools/misc/.gitignore b/tools/misc/.gitignore
index b2c3b21f57..ce6f937d0c 100644
--- a/tools/misc/.gitignore
+++ b/tools/misc/.gitignore
@@ -1,3 +1,4 @@
 xen-access
 xen-memshare
 xen-ucode
+xen-vmtrace
diff --git a/tools/misc/Makefile b/tools/misc/Makefile
index 912c5d4f0e..2b683819d4 100644
--- a/tools/misc/Makefile
+++ b/tools/misc/Makefile
@@ -50,6 +50,10 @@ TARGETS_COPY += xenpvnetboot
 # Everything which needs to be built
 TARGETS_BUILD := $(filter-out $(TARGETS_COPY),$(TARGETS_ALL))
 
+# ... including build-only targets
+TARGETS_BUILD-$(CONFIG_X86)    += xen-vmtrace
+TARGETS_BUILD += $(TARGETS_BUILD-y)
+
 .PHONY: all build
 all build: $(TARGETS_BUILD)
 
@@ -90,6 +94,9 @@ xen-hvmcrash: xen-hvmcrash.o
 xen-memshare: xen-memshare.o
 	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenctrl) $(APPEND_LDFLAGS)
 
+xen-vmtrace: xen-vmtrace.o
+	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenctrl) $(LDLIBS_libxenforeignmemory) $(APPEND_LDFLAGS)
+
 xenperf: xenperf.o
 	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenctrl) $(APPEND_LDFLAGS)
 
diff --git a/tools/misc/xen-vmtrace.c b/tools/misc/xen-vmtrace.c
new file mode 100644
index 0000000000..cc58a0707b
--- /dev/null
+++ b/tools/misc/xen-vmtrace.c
@@ -0,0 +1,166 @@
+/******************************************************************************
+ * tools/misc/xen-vmtrace.c
+ *
+ * Demonstrative tool for collecting Intel Processor Trace data from Xen.
+ *  Could be used to externally monitor a given vCPU in a given DomU.
+ *
+ * Copyright (C) 2020 by CERT Polska - NASK PIB
+ *
+ * Authors: Michał Leszczyński, michal.leszczynski@cert.pl
+ * Date:    June, 2020
+ *
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of the GNU General Public License as published by
+ *  the Free Software Foundation; under version 2 of the License.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *  GNU General Public License for more details.
+ *
+ *  You should have received a copy of the GNU General Public License
+ *  along with this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <err.h>
+#include <errno.h>
+#include <signal.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <sys/mman.h>
+#include <unistd.h>
+
+#include <xenctrl.h>
+#include <xenforeignmemory.h>
+
+#define MSR_RTIT_CTL                        0x00000570
+#define  RTIT_CTL_OS                        (1 <<  2)
+#define  RTIT_CTL_USR                       (1 <<  3)
+#define  RTIT_CTL_BRANCH_EN                 (1 << 13)
+
+static xc_interface *xch;
+static xenforeignmemory_handle *fh;
+static uint32_t domid, vcpu;
+static size_t size;
+static char *buf;
+
+static volatile sig_atomic_t interrupted = 0;
+static void int_handler(int signum)
+{
+    interrupted = 1;
+}
+
+static int get_more_data(void)
+{
+    static uint64_t last_pos;
+    uint64_t pos;
+
+    if ( xc_vmtrace_output_position(xch, domid, vcpu, &pos) )
+    {
+        perror("xc_vmtrace_output_position()");
+        return -1;
+    }
+
+    if ( pos > last_pos )
+        fwrite(buf + last_pos, pos - last_pos, 1, stdout);
+    else if ( pos < last_pos )
+    {
+        /* buffer wrapped */
+        fwrite(buf + last_pos, size - last_pos, 1, stdout);
+        fwrite(buf, pos, 1, stdout);
+    }
+
+    last_pos = pos;
+    return 0;
+}
+
+int main(int argc, char **argv)
+{
+    int rc, exit = 1;
+    xenforeignmemory_resource_handle *fres = NULL;
+
+    if ( signal(SIGINT, int_handler) == SIG_ERR )
+        err(1, "Failed to register signal handler\n");
+
+    if ( argc != 3 )
+    {
+        fprintf(stderr, "Usage: %s <domid> <vcpu_id>\n", argv[0]);
+    fprintf(stderr, "It's recommended to redirect this program's output to a file,\n");
+    fprintf(stderr, "or to pipe its output to xxd or another program.\n");
+        return 1;
+    }
+
+    domid = atoi(argv[1]);
+    vcpu  = atoi(argv[2]);
+
+    xch = xc_interface_open(NULL, NULL, 0);
+    fh = xenforeignmemory_open(NULL, 0);
+
+    if ( !xch )
+        err(1, "xc_interface_open()");
+    if ( !fh )
+        err(1, "xenforeignmemory_open()");
+
+    rc = xenforeignmemory_resource_size(
+        fh, domid, XENMEM_resource_vmtrace_buf, vcpu, &size);
+    if ( rc )
+        err(1, "xenforeignmemory_resource_size()");
+
+    fres = xenforeignmemory_map_resource(
+        fh, domid, XENMEM_resource_vmtrace_buf, vcpu,
+        0, size >> XC_PAGE_SHIFT, (void **)&buf, PROT_READ, 0);
+    if ( !fres )
+        err(1, "xenforeignmemory_map_resource()");
+
+    if ( xc_vmtrace_set_option(
+             xch, domid, vcpu, MSR_RTIT_CTL,
+             RTIT_CTL_BRANCH_EN | RTIT_CTL_USR | RTIT_CTL_OS) )
+    {
+        perror("xc_vmtrace_set_option()");
+        goto out;
+    }
+
+    if ( xc_vmtrace_enable(xch, domid, vcpu) )
+    {
+        perror("xc_vmtrace_enable()");
+        goto out;
+    }
+
+    while ( !interrupted )
+    {
+        xc_dominfo_t dominfo;
+
+        if ( get_more_data() )
+            goto out;
+
+        usleep(1000 * 100);
+
+        if ( xc_domain_getinfo(xch, domid, 1, &dominfo) != 1 ||
+             dominfo.domid != domid || dominfo.shutdown )
+        {
+            if ( get_more_data() )
+                goto out;
+            break;
+        }
+    }
+
+    exit = 0;
+
+ out:
+    if ( xc_vmtrace_disable(xch, domid, vcpu) )
+        perror("xc_vmtrace_disable()");
+
+    if ( fres && xenforeignmemory_unmap_resource(fh, fres) )
+        perror("xenforeignmemory_unmap_resource()");
+
+    return exit;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Feb 01 23:35:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 23:35:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80214.146608 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6ijM-00014o-VV; Mon, 01 Feb 2021 23:35:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80214.146608; Mon, 01 Feb 2021 23:35:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6ijM-00014h-RY; Mon, 01 Feb 2021 23:35:40 +0000
Received: by outflank-mailman (input) for mailman id 80214;
 Mon, 01 Feb 2021 23:35:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IWGu=HD=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l6ijM-00014Z-74
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 23:35:40 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d46be8c9-ecc5-4d3c-b1a3-2e52b88c4e01;
 Mon, 01 Feb 2021 23:35:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d46be8c9-ecc5-4d3c-b1a3-2e52b88c4e01
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612222539;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=lSsGlZ3RV0J0u057SkJDf9q9z6vf4wraLwwgAQ/Unkc=;
  b=VXNdHOiPUaADkLgl0Guqv8XQHiQPNpiKUcvCVmHsLWuZ8n3OxLA+Ldb8
   Nh1v6TK+WHcfzycHpSWEYe9ER+rt7YYHTRRY6/958Pa2+se3XniYn3kHA
   AHlJEtP5Apcx+wFadrhDOR6g6kzUY680CulwjVsjhtwwgVATyhcgmDrg6
   s=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: uIVeiblegCeEtOCvCc2XeXfZzbg2CvcG5MIecgpfclTJ2uBXIAj3Nx4jNd2E0bV/Uk7VSbScxX
 Cfp2JAIVzjJVvCOb6AaKUN4k8/P4qgA5T1+0efq+mT9tK72NdH+ZwUEnKwW0g+Of7qwM+8MtcT
 wXyb7CSDsBwOk9Lf2H7/2cG8wn77x8+a6eKdKQpuueyj9t0qHRkC6twRNKNDbRL2QaiCZ5gD09
 eNAvzqipyTQElAp+1Ey/is06Vb9qy3vNW2iU2OCF/6SKLuD4J5QJQl87Kf5qnyldoVpRnJ2rjQ
 kIk=
X-SBRS: 5.1
X-MesageID: 36523363
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,393,1602561600"; 
   d="scan'208";a="36523363"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Ian Jackson
	<iwj@xenproject.org>
Subject: [PATCH] xenstore: Fix all builds
Date: Mon, 1 Feb 2021 23:35:13 +0000
Message-ID: <20210201233513.30923-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

This diff is more easily viewed through `cat -A`:

  diff --git a/tools/xenstore/include/xenstore_state.h b/tools/xenstore/include/xenstore_state.h$
  index 1bd443f61a..f7e4da2b2c 100644$
  --- a/tools/xenstore/include/xenstore_state.h$
  +++ b/tools/xenstore/include/xenstore_state.h$
  @@ -21,7 +21,7 @@$
   #ifndef XENSTORE_STATE_H$
   #define XENSTORE_STATE_H$
   $
  -#if defined(__FreeBSD__) ||M-BM- defined(__NetBSD__)$
  +#if defined(__FreeBSD__) || defined(__NetBSD__)$
   #include <sys/endian.h>$
   #else$
   #include <endian.h>$

A non-breaking space isn't a valid C preprocessor token.
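A quick way to hunt for this class of bug is to scan sources for U+00A0 directly; a small Python sketch (the file contents are inlined here for illustration):

```python
def find_nbsp(text):
    """Yield (line, column) positions of non-breaking spaces (U+00A0)."""
    for lineno, line in enumerate(text.splitlines(), 1):
        for col, ch in enumerate(line, 1):
            if ch == "\u00a0":
                yield lineno, col

# The broken line from the patch: '||' followed by a non-breaking space.
src = "#ifndef XENSTORE_STATE_H\n#if defined(__FreeBSD__) ||\u00a0defined(__NetBSD__)\n"
print(list(find_nbsp(src)))  # [(2, 28)]
```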

Fixes: ffbb8aa282de ("xenstore: fix build on {Net/Free}BSD")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Ian Jackson <iwj@xenproject.org>
---
 tools/xenstore/include/xenstore_state.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/xenstore/include/xenstore_state.h b/tools/xenstore/include/xenstore_state.h
index 1bd443f61a..f7e4da2b2c 100644
--- a/tools/xenstore/include/xenstore_state.h
+++ b/tools/xenstore/include/xenstore_state.h
@@ -21,7 +21,7 @@
 #ifndef XENSTORE_STATE_H
 #define XENSTORE_STATE_H
 
-#if defined(__FreeBSD__) ||Â defined(__NetBSD__)
+#if defined(__FreeBSD__) || defined(__NetBSD__)
 #include <sys/endian.h>
 #else
 #include <endian.h>
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Feb 01 23:52:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 23:52:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80222.146638 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6izk-00035E-ME; Mon, 01 Feb 2021 23:52:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80222.146638; Mon, 01 Feb 2021 23:52:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6izk-000357-IP; Mon, 01 Feb 2021 23:52:36 +0000
Received: by outflank-mailman (input) for mailman id 80222;
 Mon, 01 Feb 2021 23:52:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IWGu=HD=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l6izi-00033V-Os
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 23:52:34 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7098a47b-7da7-49cb-a644-40ccc243acb2;
 Mon, 01 Feb 2021 23:52:29 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7098a47b-7da7-49cb-a644-40ccc243acb2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612223549;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=FaRlj2zjPcVI+8D5Vii4wydz4lnh2fPaipEPLCTPTI0=;
  b=SZjlb4NuwRAMCtg82JY9sAJv2tOki2WqnQUHWovDr2ooY3Ft0PTDOSQQ
   COwLrqjkSslcangajo0ZfJ7X8p12lcrgYDl/2cxv1jw0AtKV1wpKOQ3YG
   bte92Ap7myYnCkskdApBzthBgj1WPle0fv3ROrvZX4D6KKQCewtjyM3N8
   4=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: VGG41gzWPGgR1D9d6Nqyc5pFf0yGaeZWBxJlOgIReLPjTnRPrebnQEN95TkhRZyBAujKjfDekZ
 YzR7jscQ4lQCpafFPq03Kf8N17oWuQskY9kytiNqWjA4Mao4Fufa4p6VULuaK/2I3XzMUB6l/Q
 +ozcBcfxVVjVIstbc/gqrhgmsQiWuZ92j5vJPITm10QwRJELJ1iDZlLUnKn6iZ518xad3Eh59P
 ttUhk37grwqnotT97cEkWRuYKGh60sNigdU/BnWUA15/8ZctB0k6yKTU514RXRbLuKPAqlkxwG
 DdE=
X-SBRS: 5.1
X-MesageID: 36320760
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,393,1602561600"; 
   d="scan'208";a="36320760"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Tamas K Lengyel <tamas.lengyel@intel.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian
	<kevin.tian@intel.com>, =?UTF-8?q?Micha=C5=82=20Leszczy=C5=84ski?=
	<michal.leszczynski@cert.pl>, Tamas K Lengyel <tamas@tklengyel.com>
Subject: [PATCH v9 10/11] x86/vm_event: Carry the vmtrace buffer position in vm_event
Date: Mon, 1 Feb 2021 23:27:02 +0000
Message-ID: <20210201232703.29275-11-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210201232703.29275-1-andrew.cooper3@citrix.com>
References: <20210201232703.29275-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

From: Tamas K Lengyel <tamas.lengyel@intel.com>

Add a vmtrace_pos field to the x86 registers in vm_event. It is
initialized to ~0 if vmtrace is not in use.
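On the consumer side the field acts as a sentinel: ~0 in a uint64_t field is 2**64 - 1. A sketch of the check a monitor might perform (the constant mirrors the patch; the helper name is hypothetical):

```python
VMTRACE_POS_INVALID = (1 << 64) - 1   # ~0 in a uint64_t field

def vmtrace_active(vmtrace_pos):
    """True if the vm_event carried a valid vmtrace buffer position."""
    return vmtrace_pos != VMTRACE_POS_INVALID
```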

Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Jun Nakajima <jun.nakajima@intel.com>
CC: Kevin Tian <kevin.tian@intel.com>
CC: Michał Leszczyński <michal.leszczynski@cert.pl>
CC: Tamas K Lengyel <tamas@tklengyel.com>

v8:
 * Use 'vmtrace' consistently.

v7:
 * New
---
 xen/arch/x86/vm_event.c       | 3 +++
 xen/include/public/vm_event.h | 7 +++++++
 2 files changed, 10 insertions(+)

diff --git a/xen/arch/x86/vm_event.c b/xen/arch/x86/vm_event.c
index 848d69c1b0..36272e9316 100644
--- a/xen/arch/x86/vm_event.c
+++ b/xen/arch/x86/vm_event.c
@@ -251,6 +251,9 @@ void vm_event_fill_regs(vm_event_request_t *req)
 
     req->data.regs.x86.shadow_gs = ctxt.shadow_gs;
     req->data.regs.x86.dr6 = ctxt.dr6;
+
+    if ( hvm_vmtrace_output_position(curr, &req->data.regs.x86.vmtrace_pos) != 1 )
+        req->data.regs.x86.vmtrace_pos = ~0;
 #endif
 }
 
diff --git a/xen/include/public/vm_event.h b/xen/include/public/vm_event.h
index 141ea024a3..147dc3ea73 100644
--- a/xen/include/public/vm_event.h
+++ b/xen/include/public/vm_event.h
@@ -223,6 +223,13 @@ struct vm_event_regs_x86 {
      */
     uint64_t npt_base;
 
+    /*
+     * Current position in the vmtrace buffer, or ~0 if vmtrace is not active.
+     *
+     * For Intel Processor Trace, it is the upper half of MSR_RTIT_OUTPUT_MASK.
+     */
+    uint64_t vmtrace_pos;
+
     uint32_t cs_base;
     uint32_t ss_base;
     uint32_t ds_base;
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Feb 01 23:52:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 23:52:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80221.146625 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6izf-00033h-Dg; Mon, 01 Feb 2021 23:52:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80221.146625; Mon, 01 Feb 2021 23:52:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6izf-00033a-AO; Mon, 01 Feb 2021 23:52:31 +0000
Received: by outflank-mailman (input) for mailman id 80221;
 Mon, 01 Feb 2021 23:52:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IWGu=HD=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l6izd-00033V-UV
 for xen-devel@lists.xenproject.org; Mon, 01 Feb 2021 23:52:29 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7c1e5e0b-07ee-49dd-ac0c-ef390b2db081;
 Mon, 01 Feb 2021 23:52:27 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7c1e5e0b-07ee-49dd-ac0c-ef390b2db081
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612223547;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version;
  bh=E5JUPU1j2pT/veM8SWF/+k6jtm1Vy8PzPBbksm2vA/E=;
  b=O6IMRZtqj7HX/MQqtncNB9oTXGohww3OwNZqSf0m5lT2KpyrCYAFot/H
   U68ResoE33UL6dzR1TEM3rumW2TQaN0bi0LF6IzGq9SQzicsSwCUGzOyz
   f9B09zAV3CqnqSp89MZrnap0VEtwxz6bKPmfEunwjCH9DqVx9BnKkgRgw
   g=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 47SmiOIP1y9uF8049fSXl693rHNEwXfXK3cqeRsTSw1Sgj369cMxOTtxcf4mvOSU3MeW8dhel+
 fpgs/MMEY38E0ReJm8aqZ0TESzIYiNji4o3AJoKTXbRq1I5w4sCnPau4RISHaYRArAyWwUNkj1
 JRf1mZyMPraBr16a8A8Dy91DcYmvcx4YOJyNhmw2Tn7QvWF7V8NVmp9hNP1JcSJtSgKh6vK6f8
 EzMZ8NCs5weUZGuFLHVZo6K58Z5RjFw2MEQ9Y0G08431PB5z3jLKNc3DN6rGdzxn7oEVSXModI
 lDM=
X-SBRS: 5.1
X-MesageID: 36363241
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,393,1602561600"; 
   d="scan'208";a="36363241"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Tamas K Lengyel <tamas.lengyel@intel.com>
Subject: [PATCH v9 11/11] x86/vm_event: add response flag to reset vmtrace buffer
Date: Mon, 1 Feb 2021 23:27:03 +0000
Message-ID: <20210201232703.29275-12-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210201232703.29275-1-andrew.cooper3@citrix.com>
References: <20210201232703.29275-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain

From: Tamas K Lengyel <tamas.lengyel@intel.com>

Allow resetting the vmtrace buffer in response to a vm_event. This can be used
to optimize use-cases where detecting that the vmtrace buffer has wrapped is
important.
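From a monitor's point of view, the new flag is just another bit OR'ed into the response flags before the vCPU is unpaused. A sketch using the bit values from the public header (VM_EVENT_FLAG_VCPU_PAUSED as bit 0 is my reading of the existing header; the helper is hypothetical):

```python
# Bit positions from xen/include/public/vm_event.h; this patch adds bit 13.
VM_EVENT_FLAG_VCPU_PAUSED   = 1 << 0
VM_EVENT_FLAG_RESET_VMTRACE = 1 << 13

def build_response_flags(reset_vmtrace):
    """Compose vm_event response flags for a paused vCPU."""
    flags = VM_EVENT_FLAG_VCPU_PAUSED     # resume the paused vCPU
    if reset_vmtrace:
        flags |= VM_EVENT_FLAG_RESET_VMTRACE
    return flags
```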

Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
 xen/arch/x86/vm_event.c        | 7 +++++++
 xen/common/vm_event.c          | 3 +++
 xen/include/asm-arm/vm_event.h | 6 ++++++
 xen/include/asm-x86/vm_event.h | 2 ++
 xen/include/public/vm_event.h  | 4 ++++
 5 files changed, 22 insertions(+)

diff --git a/xen/arch/x86/vm_event.c b/xen/arch/x86/vm_event.c
index 36272e9316..8f73a73e2e 100644
--- a/xen/arch/x86/vm_event.c
+++ b/xen/arch/x86/vm_event.c
@@ -300,6 +300,13 @@ void vm_event_emulate_check(struct vcpu *v, vm_event_response_t *rsp)
     };
 }
 
+void vm_event_reset_vmtrace(struct vcpu *v)
+{
+#ifdef CONFIG_HVM
+    hvm_vmtrace_reset(v);
+#endif
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
index 127f2d58f1..44d542f23e 100644
--- a/xen/common/vm_event.c
+++ b/xen/common/vm_event.c
@@ -424,6 +424,9 @@ static int vm_event_resume(struct domain *d, struct vm_event_domain *ved)
             if ( rsp.flags & VM_EVENT_FLAG_GET_NEXT_INTERRUPT )
                 vm_event_monitor_next_interrupt(v);
 
+            if ( rsp.flags & VM_EVENT_FLAG_RESET_VMTRACE )
+                vm_event_reset_vmtrace(v);
+
             if ( rsp.flags & VM_EVENT_FLAG_VCPU_PAUSED )
                 vm_event_vcpu_unpause(v);
         }
diff --git a/xen/include/asm-arm/vm_event.h b/xen/include/asm-arm/vm_event.h
index 14d1d341cc..abe7db1970 100644
--- a/xen/include/asm-arm/vm_event.h
+++ b/xen/include/asm-arm/vm_event.h
@@ -58,4 +58,10 @@ void vm_event_sync_event(struct vcpu *v, bool value)
     /* Not supported on ARM. */
 }
 
+static inline
+void vm_event_reset_vmtrace(struct vcpu *v)
+{
+    /* Not supported on ARM. */
+}
+
 #endif /* __ASM_ARM_VM_EVENT_H__ */
diff --git a/xen/include/asm-x86/vm_event.h b/xen/include/asm-x86/vm_event.h
index 785e741fba..0756124075 100644
--- a/xen/include/asm-x86/vm_event.h
+++ b/xen/include/asm-x86/vm_event.h
@@ -54,4 +54,6 @@ void vm_event_emulate_check(struct vcpu *v, vm_event_response_t *rsp);
 
 void vm_event_sync_event(struct vcpu *v, bool value);
 
+void vm_event_reset_vmtrace(struct vcpu *v);
+
 #endif /* __ASM_X86_VM_EVENT_H__ */
diff --git a/xen/include/public/vm_event.h b/xen/include/public/vm_event.h
index 147dc3ea73..36135ba4f1 100644
--- a/xen/include/public/vm_event.h
+++ b/xen/include/public/vm_event.h
@@ -123,6 +123,10 @@
  * Set if the event comes from a nested VM and thus npt_base is valid.
  */
 #define VM_EVENT_FLAG_NESTED_P2M         (1 << 12)
+/*
+ * Reset the vmtrace buffer (if vmtrace is enabled)
+ */
+#define VM_EVENT_FLAG_RESET_VMTRACE      (1 << 13)
 
 /*
  * Reasons for the vm event request
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Feb 01 23:53:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 Feb 2021 23:53:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80223.146650 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6j0L-0003EO-VR; Mon, 01 Feb 2021 23:53:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80223.146650; Mon, 01 Feb 2021 23:53:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6j0L-0003EH-SK; Mon, 01 Feb 2021 23:53:13 +0000
Received: by outflank-mailman (input) for mailman id 80223;
 Mon, 01 Feb 2021 23:53:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6j0J-0003E4-U0; Mon, 01 Feb 2021 23:53:11 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6j0J-0005d0-N7; Mon, 01 Feb 2021 23:53:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6j0J-0004Rc-FV; Mon, 01 Feb 2021 23:53:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l6j0J-0000N6-F1; Mon, 01 Feb 2021 23:53:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=XQyRPeOhUekwhQ2ecBD1qGvEAjqcvxQhyIPHaIDGbrU=; b=rHhcXPkYIeHWnIS/w0A3oHtYbR
	mW/iGdS3eZp+o0yIZF9SlPz1mDQr9uMBsvG36rBOerzI4lgIAMYJ6f4BhOcwDs1v+e8Y2M6Fqi1U8
	iOSgTiwVhFxOCXaAE2O8K4OpCcdM2+CzETMtIeEFIMw0QCZVjL+oxOnhSlkc0homeNt0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158909-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 158909: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=ffbb8aa282de262403275f2395d8540818cf576e
X-Osstest-Versions-That:
    xen=9dc687f155a57216b83b17f9cde55dd43e06b0cd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 01 Feb 2021 23:53:11 +0000

flight 158909 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158909/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 158804
 build-arm64-xsm               6 xen-build                fail REGR. vs. 158804
 build-armhf                   6 xen-build                fail REGR. vs. 158804

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  ffbb8aa282de262403275f2395d8540818cf576e
baseline version:
 xen                  9dc687f155a57216b83b17f9cde55dd43e06b0cd

Last test of basis   158804  2021-01-30 04:00:24 Z    2 days
Failing since        158892  2021-02-01 16:00:25 Z    0 days    4 attempts
Testing same since   158897  2021-02-01 19:02:25 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <iwj@xenproject.org>
  Manuel Bouyer <bouyer@netbsd.org>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit ffbb8aa282de262403275f2395d8540818cf576e
Author: Roger Pau Monne <roger.pau@citrix.com>
Date:   Mon Feb 1 16:53:17 2021 +0100

    xenstore: fix build on {Net/Free}BSD
    
    The endian.h header is in sys/ on NetBSD and FreeBSD.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit 419cd07895891c6642f29085aee07be72413f08c
Author: Ian Jackson <iwj@xenproject.org>
Date:   Mon Feb 1 15:18:36 2021 +0000

    xenpmd.c: Remove hard tab
    
    bbed98e7cedc "xenpmd.c: use dynamic allocation" had a hard tab.
    I thought we had fixed that and I thought I had checked.
    Remove it now.
    
    Signed-off-by: Ian Jackson <iwj@xenproject.org>

commit bbed98e7cedcd5072671c21605330075740382d3
Author: Manuel Bouyer <bouyer@netbsd.org>
Date:   Sat Jan 30 19:27:10 2021 +0100

    xenpmd.c: use dynamic allocation
    
    On NetBSD, d_name is larger than 256, so file_name[284] may not be large
    enough (and gcc emits a format-truncation error).
    Use asprintf() instead of snprintf() on a static on-stack buffer.
    
    Signed-off-by: Manuel Bouyer <bouyer@netbsd.org>
    Reviewed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>
    
    Plus
    
    define GNU_SOURCE for asprintf()
    
    Harmless on NetBSD.
    
    Signed-off-by: Manuel Bouyer <bouyer@netbsd.org>
    Reviewed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Feb 02 00:20:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 00:20:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80236.146670 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6jQl-0006qg-Q7; Tue, 02 Feb 2021 00:20:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80236.146670; Tue, 02 Feb 2021 00:20:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6jQl-0006qZ-N2; Tue, 02 Feb 2021 00:20:31 +0000
Received: by outflank-mailman (input) for mailman id 80236;
 Tue, 02 Feb 2021 00:20:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=33Vw=HE=citrix.com=igor.druzhinin@srs-us1.protection.inumbo.net>)
 id 1l6jQk-0006qU-FE
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 00:20:30 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bfb0b5e2-c7d7-489e-986b-8efad4bbf45f;
 Tue, 02 Feb 2021 00:20:29 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bfb0b5e2-c7d7-489e-986b-8efad4bbf45f
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612225228;
  h=subject:to:references:from:message-id:date:mime-version:
   in-reply-to:content-transfer-encoding;
  bh=qYRCDf454/9gViQXW6sCARkf8D4a98DJsrtCD/deB/Y=;
  b=PRo/mFU+pG5SNLEvcfuAPzlehRUz3B95W+lRsDXqmVKCbe8xO4KCQvDC
   7UMGvJXy1yRQBc13o2g8KtLoJpsaE+00AZ+slaJJ0ZpMWyRs/qXSLAjl2
   LhPawqFrzhfGyJ6BWxvQEK6pdkJGbuhJIgb1hVuLmkUMexzqp9A1LaUnl
   Y=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: pSXi1n/ArSFDmDmP3Wj2/HrEBHF7dcBtMvr/FAXyN/chPv76xa5CkPeVyBtw39o6e6mszFoD6o
 6FHeGSyazeKksF5VOTvdaRUFWyOWrNjIz2AzvOTF9mOkymtfsaPwkIWTE8/TfO/mthalkGuzRC
 s1Uii+9OacAcitJPQt2Yts71dN+3XDLvE5UT/lqpEqfC4rhkIfjxhFpDMMHqMJxddwUhVsgwC5
 +Wat+FpxpVNbIoKkEyn1IEkFDlCr6cQi3aQqfpD4yzPrOsC8+ktX1pbP+mo0sT/ckgKboEL6VO
 OS0=
X-SBRS: 5.1
X-MesageID: 36322045
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,393,1602561600"; 
   d="scan'208";a="36322045"
Subject: Re: staging: unable to restore HVM with Viridian param set
To: Andrew Cooper <andrew.cooper3@citrix.com>, Tamas K Lengyel
	<tamas.k.lengyel@gmail.com>, Xen-devel <xen-devel@lists.xenproject.org>, "Wei
 Liu" <wl@xen.org>, Ian Jackson <iwj@xenproject.org>, Anthony PERARD
	<anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>
References: <CABfawhkxEKOha7RYpSvTaJEtxyBsio9Pcc=xtRD7DszHm2k2pw@mail.gmail.com>
 <12e17af4-3502-0047-36e2-3c1262602747@citrix.com>
From: Igor Druzhinin <igor.druzhinin@citrix.com>
Message-ID: <7ea14fac-7832-fe68-529e-03a8f9812f88@citrix.com>
Date: Tue, 2 Feb 2021 00:20:24 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <12e17af4-3502-0047-36e2-3c1262602747@citrix.com>
Content-Type: text/plain; charset="utf-8"
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 01/02/2021 22:57, Andrew Cooper wrote:
> On 01/02/2021 22:51, Tamas K Lengyel wrote:
>> Hi all,
>> trying to restore a Windows VM saved on Xen 4.14 with Xen staging results in:
>>
>> # xl restore -p /shared/cfg/windows10.save
>> Loading new save file /shared/cfg/windows10.save (new xl fmt info 0x3/0x0/1475)
>>  Savefile contains xl domain config in JSON format
>> Parsing config from <saved>
>> xc: info: Found x86 HVM domain from Xen 4.14
>> xc: info: Restoring domain
>> xc: error: set HVM param 9 = 0x0000000000000065 (17 = File exists):
>> Internal error
>> xc: error: Restore failed (17 = File exists): Internal error
>> libxl: error: libxl_stream_read.c:850:libxl__xc_domain_restore_done:
>> restoring domain: File exists
>> libxl: error: libxl_create.c:1581:domcreate_rebuild_done: Domain
>> 8:cannot (re-)build domain: -3
>> libxl: error: libxl_domain.c:1182:libxl__destroy_domid: Domain
>> 8:Non-existant domain
>> libxl: error: libxl_domain.c:1136:domain_destroy_callback: Domain
>> 8:Unable to destroy guest
>> libxl: error: libxl_domain.c:1063:domain_destroy_cb: Domain
>> 8:Destruction of domain failed
>>
>> Running on staging 419cd07895891c6642f29085aee07be72413f08c
> 
> CC'ing maintainers and those who've edited the code recently.
> 
> What is happening is xl/libxl is selecting some viridian settings,
> applying them to the domain, and then the migrations stream has a
> different set of viridian settings.
> 
> For a migrating-in VM, nothing should be set during domain build.
> Viridian state has been part of the migrate stream since before mig-v2,
> so can be considered to be everywhere relevant now.

The fallout is likely from my changes that modified the default set of viridian
settings. The relevant commits:
983524671031fcfdb24a6c0da988203ebb47aebe
7e5cffcd1e9300cab46a1816b5eb676caaeed2c1

The same config on migrated-in domains now implies a different set of viridian
extensions than the one set on the source side. That creates an inconsistency in
libxl. I don't see how to address this properly in libxl other than by never
extending the default set.

Igor


From xen-devel-bounces@lists.xenproject.org Tue Feb 02 01:06:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 01:06:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80246.146695 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6k9a-0002y9-JT; Tue, 02 Feb 2021 01:06:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80246.146695; Tue, 02 Feb 2021 01:06:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6k9a-0002y2-EM; Tue, 02 Feb 2021 01:06:50 +0000
Received: by outflank-mailman (input) for mailman id 80246;
 Tue, 02 Feb 2021 01:06:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6k9Z-0002xu-4B; Tue, 02 Feb 2021 01:06:49 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6k9Y-0007VU-Vg; Tue, 02 Feb 2021 01:06:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6k9Y-0007KP-Pb; Tue, 02 Feb 2021 01:06:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l6k9Y-0007X0-P7; Tue, 02 Feb 2021 01:06:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Zkfq1gV8ircImS2Ydg3Z1JkiovdGXZ6RBUa5l6geKIo=; b=yC1dFlX9TgFjumW8dQbzyWPQZP
	H7eqMSH01jlOg1r5PWQUyU9PJrgmpkzM9bq8m0VR0c+msKM96fXm9wNHYd27+RfMkraulTll0Nh8l
	DIbfZfQQ/m+JHwYjGeHi9cW30ZWGRPafg+UxRpI0rqmlLmzriIBZ0fLuRTOlQW0T5ZDs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158918-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 158918: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=ffbb8aa282de262403275f2395d8540818cf576e
X-Osstest-Versions-That:
    xen=9dc687f155a57216b83b17f9cde55dd43e06b0cd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 02 Feb 2021 01:06:48 +0000

flight 158918 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158918/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 158804
 build-arm64-xsm               6 xen-build                fail REGR. vs. 158804
 build-armhf                   6 xen-build                fail REGR. vs. 158804

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  ffbb8aa282de262403275f2395d8540818cf576e
baseline version:
 xen                  9dc687f155a57216b83b17f9cde55dd43e06b0cd

Last test of basis   158804  2021-01-30 04:00:24 Z    2 days
Failing since        158892  2021-02-01 16:00:25 Z    0 days    5 attempts
Testing same since   158897  2021-02-01 19:02:25 Z    0 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <iwj@xenproject.org>
  Manuel Bouyer <bouyer@netbsd.org>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit ffbb8aa282de262403275f2395d8540818cf576e
Author: Roger Pau Monne <roger.pau@citrix.com>
Date:   Mon Feb 1 16:53:17 2021 +0100

    xenstore: fix build on {Net/Free}BSD
    
    The endian.h header is in sys/ on NetBSD and FreeBSD.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit 419cd07895891c6642f29085aee07be72413f08c
Author: Ian Jackson <iwj@xenproject.org>
Date:   Mon Feb 1 15:18:36 2021 +0000

    xenpmd.c: Remove hard tab
    
    bbed98e7cedc "xenpmd.c: use dynamic allocation" had a hard tab.
    I thought we had fixed that and I thought I had checked.
    Remove it now.
    
    Signed-off-by: Ian Jackson <iwj@xenproject.org>

commit bbed98e7cedcd5072671c21605330075740382d3
Author: Manuel Bouyer <bouyer@netbsd.org>
Date:   Sat Jan 30 19:27:10 2021 +0100

    xenpmd.c: use dynamic allocation
    
    On NetBSD, d_name is larger than 256, so file_name[284] may not be large
    enough (and gcc emits a format-truncation error).
    Use asprintf() instead of snprintf() on a static on-stack buffer.
    
    Signed-off-by: Manuel Bouyer <bouyer@netbsd.org>
    Reviewed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>
    
    Plus
    
    define GNU_SOURCE for asprintf()
    
    Harmless on NetBSD.
    
    Signed-off-by: Manuel Bouyer <bouyer@netbsd.org>
    Reviewed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Feb 02 01:40:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 01:40:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80256.146716 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6kgC-0006j3-7l; Tue, 02 Feb 2021 01:40:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80256.146716; Tue, 02 Feb 2021 01:40:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6kgC-0006iw-3E; Tue, 02 Feb 2021 01:40:32 +0000
Received: by outflank-mailman (input) for mailman id 80256;
 Tue, 02 Feb 2021 01:40:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gsWk=HE=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1l6kgA-0006ir-TW
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 01:40:30 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c301a209-cc2a-4ba4-87d9-4e7eaf9f0244;
 Tue, 02 Feb 2021 01:40:29 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id A59FC64E8D;
 Tue,  2 Feb 2021 01:40:28 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c301a209-cc2a-4ba4-87d9-4e7eaf9f0244
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1612230028;
	bh=MhpsANIwunldzDBQYrWg2ZUdOzY9nRyWcaPYT5HqPaw=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=mzpDfLCWRZqVZ91GLne+hgLf3ncIWgz2bt69rkg1/7QooDWGzMBisMaW3zHhkxTXo
	 SiGbz9AtHbUAq3se1FFcJorJWMAb1oaaUlnRA1ysIUDbK3LTVMUbpw+iAJUlaXdyfX
	 6uDa5CXpzCamq+fe3UcgTHEmqAQtNcTtVmigzb/+H0AbFHgLQu7BQmAnXkUBaVG+QM
	 i1W7wKplMFjIifeEn0i584ELxMzX5P4U4gxgRc9TpjD4pf82mm/QmFuhZI6QQThd7T
	 tzlk0yr25RbK7PhYGEtx1yGPO+Nj2kbTlmCED488R9CJofeJrQXppS8p7OSrKVFGnf
	 YQbbDdz37u88Q==
Date: Mon, 1 Feb 2021 17:40:28 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
cc: Elliott Mitchell <ehem+xen@m5p.com>, 
    Xen-devel <xen-devel@lists.xenproject.org>, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: Xen 4.14.1 on RPI4: device tree generation failed
In-Reply-To: <CABfawhktGwwXNdMrm4uShKSs5MvaUz2hG63wzcDA97z9pGL=Ug@mail.gmail.com>
Message-ID: <alpine.DEB.2.21.2102011737010.29047@sstabellini-ThinkPad-T480s>
References: <CABfawhnvgFLU=VmaqmKyf8DNeVcXoXTD2=XpiwnL0OikC1_z4w@mail.gmail.com> <YBc+Iaf1CCgXO0Aj@mattapan.m5p.com> <CABfawhncPyDKy_G2Lz7MaEb_ZoPadHef7S7KAj-fbCbQq6YNGA@mail.gmail.com> <YBdgf4KKcDn0SCOw@mattapan.m5p.com> <CABfawhmrWX6tO-bESuF5oGec5cLbkHdyjdCGsfwX5AVrwQBsKA@mail.gmail.com>
 <YBeXfWf8lQ2nwMtI@mattapan.m5p.com> <CABfawhn74W88nJz5bCvA=VMxX_QqKv1ZaDQxOEtNZu5Pr8mFag@mail.gmail.com> <CABfawhktGwwXNdMrm4uShKSs5MvaUz2hG63wzcDA97z9pGL=Ug@mail.gmail.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 1 Feb 2021, Tamas K Lengyel wrote:
> On Mon, Feb 1, 2021 at 10:23 AM Tamas K Lengyel
> <tamas.k.lengyel@gmail.com> wrote:
> >
> > On Mon, Feb 1, 2021 at 12:54 AM Elliott Mitchell <ehem+xen@m5p.com> wrote:
> > >
> > > On Sun, Jan 31, 2021 at 10:06:21PM -0500, Tamas K Lengyel wrote:
> > > > With rpi-4.19.y kernel and dtbs
> > > > (cc39f1c9f82f6fe5a437836811d906c709e0661c) Xen boots fine and the
> > > > previous error is not present. I get the boot log on the serial with
> > > > just console=hvc0 from dom0 but the kernel ends up in a panic down the
> > > > line:
> > >
> > > > This seems to have been caused by a monitor being attached to the HDMI
> > > > port, with HDMI unplugged dom0 boots OK.
> > >
> > > The balance of reports seem to suggest 5.10 is the way to go if you want
> > > graphics on a RP4 with Xen.  Even without Xen 4.19 is looking rickety on
> > > RP4.
> > >
> > >
> > > On Sun, Jan 31, 2021 at 09:43:13PM -0500, Tamas K Lengyel wrote:
> > > > On Sun, Jan 31, 2021 at 8:59 PM Elliott Mitchell <ehem+xen@m5p.com> wrote:
> > > > >
> > > > > On Sun, Jan 31, 2021 at 06:50:36PM -0500, Tamas K Lengyel wrote:
> > > > > > On Sun, Jan 31, 2021 at 6:33 PM Elliott Mitchell <ehem+undef@m5p.com> wrote:
> > > > > > > Presently the rpixen script is grabbing the RPF's 4.19 branch, dates
> > > > > > > point to that last being touched last year.  Their tree is at
> > > > > > > cc39f1c9f82f6fe5a437836811d906c709e0661c.
> > > > > >
> > > > > > I've moved the Linux branch up to 5.10 because there had been a fair
> > > > > > amount of work that went into fixing Xen on RPI4, which got merged
> > > > > > into 5.9 and I would like to be able to build upstream everything
> > > > > > without the custom patches coming with the rpixen script repo.
> > > > >
> > > > > Please keep track of where your kernel source is checked out at since
> > > > > there was a desire to figure out what was going on with the device-trees.
> > > > >
> > > > >
> > > > > Including "console=hvc0 console=AMA0 console=ttyS0 console=tty0" in the
> > > > > kernel command-line should ensure you get output from the kernel if it
> > > > > manages to start (yes, Linux does support having multiple consoles at the
> > > > > same time).
> > > >
> > > > No output from dom0 received even with the added console options
> > > > (+earlyprintk=xen). The kernel build was from rpi-5.10.y
> > > > c9226080e513181ffb3909a905e9c23b8a6e8f62. I'll check if it still boots
> > > > with 4.19 next.
> > >
> > > So, their current HEAD.  This reads like you've got a problematic kernel
> > > configuration.  What procedure are you following to generate the
> > > configuration you use?
> > >
> > > Using their upstream as a base and then adding the configuration options
> > > for Xen has worked fairly well for me (`make bcm2711_defconfig`,
> > > `make menuconfig`, `make zImage`).
> > >
> > > Notably the options:
> > > CONFIG_PARAVIRT
> > > CONFIG_XEN_DOM0
> > > CONFIG_XEN
> > > CONFIG_XEN_BLKDEV_BACKEND
> > > CONFIG_XEN_NETDEV_BACKEND
> > > CONFIG_HVC_XEN
> > > CONFIG_HVC_XEN_FRONTEND
> > >
> > > Should be set to "y".
> >
> > Yes, these configs are all set the same way for all Linux builds by the script:
> >         make O=.build-arm64 ARCH=arm64
> > CROSS_COMPILE=aarch64-none-linux-gnu- bcm2711_defconfig xen.config
> >
> > I tried with both the rpi-5.10.y and rpi-5.9.y, neither boot up as
> > dom0. So far only 4.19 boots.
> 
> rpi-5.4.y boots but ends up in yet another different kernel panic:

That's an interesting error. However, I can tell you that I can boot
rpi-5.9.y just fine (without a monitor attached) with:

cd linux
KERNEL=kernel7l
make bcm2711_defconfig

As mentioned here:

https://www.raspberrypi.org/documentation/linux/kernel/building.md

and also taking the device tree from arch/arm64/boot/dts/broadcom/.
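The defconfig-plus-fragment build discussed in this thread can be sketched as below. This is an illustrative command sequence, not the rpixen script's actual recipe: it assumes a raspberrypi/linux checkout and an aarch64 cross toolchain on PATH (the prefix here is an example; the thread uses aarch64-none-linux-gnu-). xen.config is the in-tree fragment under kernel/configs/ that merge_config applies on top of the defconfig.

```shell
# Generate the merged config (defconfig + Xen fragment), out of tree:
make O=.build-arm64 ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- \
     bcm2711_defconfig xen.config

# Check that the dom0-relevant options Elliott listed actually landed:
for opt in PARAVIRT XEN XEN_DOM0 XEN_BLKDEV_BACKEND XEN_NETDEV_BACKEND \
           HVC_XEN HVC_XEN_FRONTEND; do
    grep -q "^CONFIG_${opt}=y" .build-arm64/.config || echo "missing: CONFIG_${opt}"
done

# Build the kernel image and the device trees (taken from
# arch/arm64/boot/dts/broadcom/ as mentioned above):
make O=.build-arm64 ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- \
     -j"$(nproc)" Image dtbs
```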


From xen-devel-bounces@lists.xenproject.org Tue Feb 02 02:10:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 02:10:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80283.146727 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6l95-0001mC-IZ; Tue, 02 Feb 2021 02:10:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80283.146727; Tue, 02 Feb 2021 02:10:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6l95-0001m5-F9; Tue, 02 Feb 2021 02:10:23 +0000
Received: by outflank-mailman (input) for mailman id 80283;
 Tue, 02 Feb 2021 02:10:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wa+X=HE=zededa.com=roman@srs-us1.protection.inumbo.net>)
 id 1l6l94-0001m0-Pu
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 02:10:22 +0000
Received: from mail-qv1-xf29.google.com (unknown [2607:f8b0:4864:20::f29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b842fb71-f809-4ac8-a757-09cec9ad65e1;
 Tue, 02 Feb 2021 02:10:21 +0000 (UTC)
Received: by mail-qv1-xf29.google.com with SMTP id h21so9228388qvb.8
 for <xen-devel@lists.xenproject.org>; Mon, 01 Feb 2021 18:10:21 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b842fb71-f809-4ac8-a757-09cec9ad65e1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=zededa.com; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=cUQJC01+rQDSXrxsiio4QQOir8Awvbgt5TFoYrXYai0=;
        b=VXRjH//AiA21cU1QoThPpipayyPTgF9rIOAE3ho17W3tSgT39Ekx2Gd603a+JRYh9p
         f4ZjfneVgzAw4jwhPSm4Xe2Tb91nqxXwrFUOLw5bqTwYz2ss4AZZ9EwZqy1IFmpaOO1C
         MEV/bO2YbMqgA30AWOBAhkBgWrSVGFVlPTiA/yH1h0T28nJI5OPVzGKdd6KIWh6ROjDu
         BeOtQww6X+nhqgkKDc1eZg8110D9pL9gdTeJpmToEfhQAbs3PYh/zv6fj/Ff23p87GeA
         rJF0WdUH87xqltRY8G7VV1FeKDk7QZYkptQb7dOS2gGR+Z8WZvvzpUmbdvU+ZxJiug8A
         pSCg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=cUQJC01+rQDSXrxsiio4QQOir8Awvbgt5TFoYrXYai0=;
        b=tq/TcC7Zz0H6j/G75Yh1jB7qSR31Se4ndOF/ahVT99JwjPGGxgux9lKTfah1fPdfcN
         AGggpUuRNTUaz9zOralek+TU+DInJ4oiCgq6UaatfsUBJCAOCF9Q8j7np/VoMYtCoSZG
         Ulr/5qYIJdtmBNLB71eW7mfkc0cIhCtJ6NHUTNQarV1cUH7yC99Tz954hEqN+cfHmr+A
         3wIUxecPmQ/azcnqDCR2mqQvn2wwVbssE8E71XdsWMzzVnGVkub1esjJU68dpWBwmE6Z
         UcmoUnTXTjRvOuXMnK4xidjPz6pD7y4DbLjK0f/cjKDDGcSmjYRlzfA/e59f02VSPMC1
         q2zQ==
X-Gm-Message-State: AOAM530M7lG0axIwBYapZyryBo2hutcra/KjSMSeoXberugGzpHgkbnQ
	++UqB0bhF1rlQkpB+ZlATw3GXFo6J8mcfiYZIW+77Q==
X-Google-Smtp-Source: ABdhPJxOARiN5aBhjIp1SMa8EfD+uwIBF3RKmefTbFJWHHJ1SRokTF+Mp4AMSmwpTZfsTE/IHmfPlXUtkorBEvwiPyA=
X-Received: by 2002:a05:6214:18f0:: with SMTP id ep16mr11093208qvb.0.1612231821129;
 Mon, 01 Feb 2021 18:10:21 -0800 (PST)
MIME-Version: 1.0
References: <CABfawhnvgFLU=VmaqmKyf8DNeVcXoXTD2=XpiwnL0OikC1_z4w@mail.gmail.com>
 <YBc+Iaf1CCgXO0Aj@mattapan.m5p.com> <CABfawhncPyDKy_G2Lz7MaEb_ZoPadHef7S7KAj-fbCbQq6YNGA@mail.gmail.com>
 <YBdgf4KKcDn0SCOw@mattapan.m5p.com> <CABfawhmrWX6tO-bESuF5oGec5cLbkHdyjdCGsfwX5AVrwQBsKA@mail.gmail.com>
 <YBeXfWf8lQ2nwMtI@mattapan.m5p.com> <CABfawhn74W88nJz5bCvA=VMxX_QqKv1ZaDQxOEtNZu5Pr8mFag@mail.gmail.com>
 <CABfawhktGwwXNdMrm4uShKSs5MvaUz2hG63wzcDA97z9pGL=Ug@mail.gmail.com> <alpine.DEB.2.21.2102011737010.29047@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2102011737010.29047@sstabellini-ThinkPad-T480s>
From: Roman Shaposhnik <roman@zededa.com>
Date: Mon, 1 Feb 2021 18:10:10 -0800
Message-ID: <CAMmSBy--93yDWZcfhkDHHPxmydvJ4tyymwTzHCC4apObD4983Q@mail.gmail.com>
Subject: Re: Xen 4.14.1 on RPI4: device tree generation failed
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Tamas K Lengyel <tamas.k.lengyel@gmail.com>, Elliott Mitchell <ehem+xen@m5p.com>, 
	Xen-devel <xen-devel@lists.xenproject.org>, Julien Grall <julien@xen.org>, 
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Content-Type: text/plain; charset="UTF-8"

On Mon, Feb 1, 2021 at 5:40 PM Stefano Stabellini
<sstabellini@kernel.org> wrote:
>
> On Mon, 1 Feb 2021, Tamas K Lengyel wrote:
> > On Mon, Feb 1, 2021 at 10:23 AM Tamas K Lengyel
> > <tamas.k.lengyel@gmail.com> wrote:
> > >
> > > On Mon, Feb 1, 2021 at 12:54 AM Elliott Mitchell <ehem+xen@m5p.com> wrote:
> > > >
> > > > On Sun, Jan 31, 2021 at 10:06:21PM -0500, Tamas K Lengyel wrote:
> > > > > With rpi-4.19.y kernel and dtbs
> > > > > (cc39f1c9f82f6fe5a437836811d906c709e0661c) Xen boots fine and the
> > > > > previous error is not present. I get the boot log on the serial with
> > > > > just console=hvc0 from dom0 but the kernel ends up in a panic down the
> > > > > line:
> > > >
> > > > > This seems to have been caused by a monitor being attached to the HDMI
> > > > > port, with HDMI unplugged dom0 boots OK.
> > > >
> > > > The balance of reports seems to suggest 5.10 is the way to go if you want
> > > > graphics on an RP4 with Xen.  Even without Xen, 4.19 is looking rickety on
> > > > the RP4.
> > > >
> > > >
> > > > On Sun, Jan 31, 2021 at 09:43:13PM -0500, Tamas K Lengyel wrote:
> > > > > On Sun, Jan 31, 2021 at 8:59 PM Elliott Mitchell <ehem+xen@m5p.com> wrote:
> > > > > >
> > > > > > On Sun, Jan 31, 2021 at 06:50:36PM -0500, Tamas K Lengyel wrote:
> > > > > > > On Sun, Jan 31, 2021 at 6:33 PM Elliott Mitchell <ehem+undef@m5p.com> wrote:
> > > > > > > > Presently the rpixen script is grabbing the RPF's 4.19 branch, dates
> > > > > > > > point to that last being touched last year.  Their tree is at
> > > > > > > > cc39f1c9f82f6fe5a437836811d906c709e0661c.
> > > > > > >
> > > > > > > I've moved the Linux branch up to 5.10 because a fair amount of
> > > > > > > work went into fixing Xen on RPI4, which got merged into 5.9, and
> > > > > > > I would like to be able to build everything from upstream without
> > > > > > > the custom patches coming with the rpixen script repo.
> > > > > >
> > > > > > Please keep track of where your kernel source is checked out, since
> > > > > > there was a desire to figure out what was going on with the device-trees.
> > > > > >
> > > > > >
> > > > > > Including "console=hvc0 console=AMA0 console=ttyS0 console=tty0" in the
> > > > > > kernel command-line should ensure you get output from the kernel if it
> > > > > > manages to start (yes, Linux does support having multiple consoles at the
> > > > > > same time).
> > > > >
> > > > > No output from dom0 received even with the added console options
> > > > > (+earlyprintk=xen). The kernel build was from rpi-5.10.y
> > > > > c9226080e513181ffb3909a905e9c23b8a6e8f62. I'll check if it still boots
> > > > > with 4.19 next.
> > > >
> > > > So, their current HEAD.  This reads like you've got a problematic kernel
> > > > configuration.  What procedure are you following to generate the
> > > > configuration you use?
> > > >
> > > > Using their upstream as a base and then adding the configuration options
> > > > for Xen has worked fairly well for me (`make bcm2711_defconfig`,
> > > > `make menuconfig`, `make zImage`).
> > > >
> > > > Notably the options:
> > > > CONFIG_PARAVIRT
> > > > CONFIG_XEN_DOM0
> > > > CONFIG_XEN
> > > > CONFIG_XEN_BLKDEV_BACKEND
> > > > CONFIG_XEN_NETDEV_BACKEND
> > > > CONFIG_HVC_XEN
> > > > CONFIG_HVC_XEN_FRONTEND
> > > >
> > > > Should be set to "y".
> > >
> > > Yes, these configs are all set the same way for all Linux builds by the script:
> > >         make O=.build-arm64 ARCH=arm64
> > > CROSS_COMPILE=aarch64-none-linux-gnu- bcm2711_defconfig xen.config
> > >
> > > I tried with both rpi-5.10.y and rpi-5.9.y; neither boots up as
> > > dom0. So far only 4.19 boots.
> >
> > rpi-5.4.y boots but ends up in yet another kernel panic:
>
> That's an interesting error. However, I can tell you that I can boot
> rpi-5.9.y just fine (without a monitor attached) with:
>
> cd linux
> KERNEL=kernel7l
> make bcm2711_defconfig
>
> As mentioned here:
>
> https://www.raspberrypi.org/documentation/linux/kernel/building.md
>
> and also taking the device tree from arch/arm64/boot/dts/broadcom/.

FWIW: I see the same results with stock upstream 5.10.7, effectively
following the steps you're doing.

However, as I keep saying -- the combination of firmware and u-boot
(in my case) is very sensitive -- hence in EVE we're relying on a very
particular set of bits and will refuse to work with anything else.

It may be helpful to take that combination outside of EVE's context and
try it out in your experiments, Tamas.

Thanks,
Roman.

P.S. I'm running into a DomU issue in certain places when running with 5.10.7
but that's a subject for a different thread.


From xen-devel-bounces@lists.xenproject.org Tue Feb 02 02:23:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 02:23:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80285.146740 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6lLE-0002qL-Nw; Tue, 02 Feb 2021 02:22:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80285.146740; Tue, 02 Feb 2021 02:22:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6lLE-0002qE-Ki; Tue, 02 Feb 2021 02:22:56 +0000
Received: by outflank-mailman (input) for mailman id 80285;
 Tue, 02 Feb 2021 02:22:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wa+X=HE=zededa.com=roman@srs-us1.protection.inumbo.net>)
 id 1l6lLD-0002q9-L2
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 02:22:55 +0000
Received: from mail-qt1-x835.google.com (unknown [2607:f8b0:4864:20::835])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ae7bdef8-1278-45af-bd3a-cc458dac47cd;
 Tue, 02 Feb 2021 02:22:54 +0000 (UTC)
Received: by mail-qt1-x835.google.com with SMTP id t14so13923078qto.8
 for <xen-devel@lists.xenproject.org>; Mon, 01 Feb 2021 18:22:54 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ae7bdef8-1278-45af-bd3a-cc458dac47cd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=zededa.com; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=+OKJAguCpfNQDJWPH186SyUiA9w+kRtBMsZ1drKwIgA=;
        b=YsLJ0rLn3w7TgEWcM5IIZ22nrtyWkUcPZfJ4uqFfe1QoyMOG//7wJ204xkMih3Qo0r
         mYq88f2x8DD0GVqp9CZFKyXeSBLt/ft7IeWzlcCdWIp46/T1ijDQA7VfBAbtffj08cVn
         68wu6AiYBZ8p7v2WReQhknLVaM6jgbWkk9rUyGTdmMVu7f3FOCR6dpvCjsdh48edVjqS
         12GaaNfLAcKYB2AJT93eaBaPzn++3jo04Mxkmc+rWqqhbQNLh2B5djpHEBanGGR/NKpI
         cqWFnzHN7rE6r+aP+qGJxwwb5zQqrvIHt75Ri/qzBRx+FyZuXerlotOA+SnTWpTFSVL/
         Z70A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=+OKJAguCpfNQDJWPH186SyUiA9w+kRtBMsZ1drKwIgA=;
        b=qoYkkjb+2M6a0luTSYq4NYvrQ3qRtzH0qU0YA4XfZtpLdYP3W0ZUccekXXPb30lYsE
         63xZ930GOC4VSedblrinxV9/R2Acx80mOCsMev75Josq3m8TZmfd2Pmc3Nqv019V7d9O
         jJzXNKt/DPmS6J9qzS+1RvsN8SJ0Bb4VQYxqvsi1jt85Alrqn4s311MJmm8NunqjQN/A
         GTNuni9sHTa1rYzib35uBal7ZeQzJOgnnKZnAqJVUiNmeehN6x/hsOH2hBKdkmEebSz7
         2IJkoTAptRQiPeZhbhPwc6pH08dj8I7p8kTUncved+FN+zF8EYpKw6PNKVa4JqQAzmy6
         mZ2g==
X-Gm-Message-State: AOAM532xRYmG1XriOJMWeVZDiJL14qBQ7x+jH7dLX4OGOH/vPdWnO9gR
	4n8k5Oe/b8YZghkY2rxnlP6gUD2XL6pSqLqDzY3DUg==
X-Google-Smtp-Source: ABdhPJwoVyn182H4cK2krOHz3RcxkvPdl2fNkeXjDsWrGaHwZ6S8ImDEDD8INbhWnLuC/RJSOJZMnxMg20AUvOmiBa8=
X-Received: by 2002:ac8:4f52:: with SMTP id i18mr17346821qtw.113.1612232574090;
 Mon, 01 Feb 2021 18:22:54 -0800 (PST)
MIME-Version: 1.0
References: <CAFnJQOouOUox_kBB66sfSuriUiUSvmhSjPxucJjgrB9DdLOwkg@mail.gmail.com>
 <alpine.DEB.2.21.2101221449480.14612@sstabellini-ThinkPad-T480s>
 <CAFnJQOqoqj6mWwR61ZsZj1JxRrdisFtH_87YXCeW619GM+L21Q@mail.gmail.com>
 <alpine.DEB.2.21.2101251646470.20638@sstabellini-ThinkPad-T480s>
 <CAFnJQOpuehAWde5Ta4ud9CGufwZ-K+=60epzSdKc_DnS75O2iA@mail.gmail.com>
 <alpine.DEB.2.21.2101261149210.2568@sstabellini-ThinkPad-T480s>
 <CAFnJQOpgRM-3_aZsnv36w+aQV=gMcBA18ZEw_-man7zmYb4O4Q@mail.gmail.com>
 <5a33e663-4a6d-6247-769a-8f14db4810f2@xen.org> <b9247831-335a-f791-1664-abed6b400a42@unikie.com>
In-Reply-To: <b9247831-335a-f791-1664-abed6b400a42@unikie.com>
From: Roman Shaposhnik <roman@zededa.com>
Date: Mon, 1 Feb 2021 18:22:43 -0800
Message-ID: <CAMmSBy-54qtu_oVVT=KB8GeKP0SW0uK+4wQ_LooHE0y_MZKJQg@mail.gmail.com>
Subject: Re: Question about xen and Rasp 4B
To: Jukka Kaartinen <jukka.kaartinen@unikie.com>
Cc: Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
	Xen-devel <xen-devel@lists.xenproject.org>, 
	Bertrand Marquis <Bertrand.Marquis@arm.com>
Content-Type: text/plain; charset="UTF-8"

On Sat, Jan 30, 2021 at 5:53 AM Jukka Kaartinen
<jukka.kaartinen@unikie.com> wrote:
> > On 27/01/2021 11:47, Jukka Kaartinen wrote:
> >>
> >>
> >> On Tue, Jan 26, 2021 at 10:22 PM Stefano Stabellini
> >> <sstabellini@kernel.org <mailto:sstabellini@kernel.org>> wrote:
> >>
> >>     On Tue, 26 Jan 2021, Jukka Kaartinen wrote:
> >>      > On Tue, Jan 26, 2021 at 2:54 AM Stefano Stabellini
> >>     <sstabellini@kernel.org <mailto:sstabellini@kernel.org>> wrote:
> >>      >       On Sat, 23 Jan 2021, Jukka Kaartinen wrote:
> >>      >       > Thanks for the response!
> >>      >       >
> >>      >       > On Sat, Jan 23, 2021 at 2:27 AM Stefano Stabellini
> >>     <sstabellini@kernel.org <mailto:sstabellini@kernel.org>> wrote:
> >>      >       >       + xen-devel, Roman,
> >>      >       >
> >>      >       >
> >>      >       >       On Fri, 22 Jan 2021, Jukka Kaartinen wrote:
> >>      >       >       > Hi Stefano,
> >>      >       >       > I'm Jukka Kaartinen a SW developer working on
> >>     enabling hypervisors on mobile platforms. One of our HW that we
> >> use on
> >>      >       >       development is
> >>      >       >       > Raspberry Pi 4B. I wonder if you could help me a
> >>     bit :).
> >>      >       >       >
> >>      >       >       > I'm trying to enable the GPU with Xen + Raspberry
> >>     Pi for
> >>      >       >       dom0.
> >>
> >> https://www.raspberrypi.org/forums/viewtopic.php?f=63&t=232323#p1797605
> >>
> >> <https://www.raspberrypi.org/forums/viewtopic.php?f=63&t=232323#p1797605>
> >>      >       >       >
> >>      >       >       > I got so far that GPU drivers are loaded (v3d &
> >>     vc4) without errors. But now Xen returns error when X is starting:
> >>      >       >       > (XEN) traps.c:1986:d0v1 HSR=0x93880045
> >>     pc=0x00007f97b14e70 gva=0x7f7f817000 gpa=0x0000401315d000
> >>      >       >       >  I tried to debug what causes this and looks
> >>     like find_mmio_handler cannot find handler.
> >>      >       >       > (See more here:
> >>
> >> https://www.raspberrypi.org/forums/viewtopic.php?f=63&t=232323&start=25#p1801691
> >>
> >>
> >> <https://www.raspberrypi.org/forums/viewtopic.php?f=63&t=232323&start=25#p1801691>
> >>
> >>     )
> >>      >       >       >
> >>      >       >       > Any ideas why the handler is not found?
> >>      >       >
> >>      >       >
> >>      >       >       Hi Jukka,
> >>      >       >
> >>      >       >       I am glad to hear that you are interested in Xen on
> >>     RaspberryPi :-)  I
> >>      >       >       haven't tried the GPU yet, I have been using the
> >>     serial only.
> >>      >       >       Roman, did you ever get the GPU working?
> >>      >       >
> >>      >       >
> >>      >       >       The error is a data abort error: Linux is trying to
> >>     access an address
> >>      >       >       which is not mapped to dom0. The address seems to
> >>     be 0x401315d000. It is
> >>      >       >       a pretty high address; I looked in device tree but
> >>     couldn't spot it.
> >>      >       >
> >>      >       >       From the HSR (the syndrome register) it looks like
> >>     it is a translation
> >>      >       >       fault at EL1 on stage1. As if the Linux address
> >>     mapping was wrong.
> >>      >       >       Does anyone have ideas how this could happen? Maybe a
> >>     reserved-memory
> >>      >       >       misconfiguration?
> >>      >       >
> >>      >       > I had issues with loading the driver in the first place.
> >>     Apparently swiotlb is used, maybe it can cause this. I also tried to
> >>      >       enable CMA.
> >>      >       > config.txt:
> >>      >       > dtoverlay=vc4-fkms-v3d,cma=320M@0x0-0x40000000
> >>      >       > gpu_mem=128
> >>      >
> >>      >       Also looking at your other reply and the implementation of
> >>      >       vc4_bo_create, it looks like this is a CMA problem.
> >>      >
> >>      >       It would be good to run a test with the swiotlb-xen
> >> disabled:
> >>      >
> >>      >       diff --git a/arch/arm/xen/mm.c b/arch/arm/xen/mm.c
> >>      >       index 467fa225c3d0..2bdd12785d14 100644
> >>      >       --- a/arch/arm/xen/mm.c
> >>      >       +++ b/arch/arm/xen/mm.c
> >>      >       @@ -138,8 +138,7 @@ void
> >>     xen_destroy_contiguous_region(phys_addr_t pstart, unsigned int order)
> >>      >        static int __init xen_mm_init(void)
> >>      >        {
> >>      >               struct gnttab_cache_flush cflush;
> >>      >       -       if (!xen_initial_domain())
> >>      >       -               return 0;
> >>      >       +       return 0;
> >>      >               xen_swiotlb_init(1, false);
> >>      >
> >>      >               cflush.op = 0;
> >>      >
> >>      > With this change the kernel is not booting up. (btw. I'm using
> >>     a USB SSD for my OS.)
> >>      > [    0.071081] bcm2835-dma fe007000.dma: Unable to set DMA mask
> >>      > [    0.076277] bcm2835-dma fe007b00.dma: Unable to set DMA mask
> >>      > (XEN) physdev.c:16:d0v0 PHYSDEVOP cmd=25: not implemented
> >>      > (XEN) physdev.c:16:d0v0 PHYSDEVOP cmd=15: not implemented
> >>      > [    0.592695] pci 0000:00:00.0: Failed to add - passthrough or
> >>     MSI/MSI-X might fail!
> >>      > (XEN) physdev.c:16:d0v0 PHYSDEVOP cmd=15: not implemented
> >>      > [    0.606819] pci 0000:01:00.0: Failed to add - passthrough or
> >>     MSI/MSI-X might fail!
> >>      > [    1.212820] usb 1-1: device descriptor read/64, error 18
> >>      > [    1.452815] usb 1-1: device descriptor read/64, error 18
> >>      > [    1.820813] usb 1-1: device descriptor read/64, error 18
> >>      > [    2.060815] usb 1-1: device descriptor read/64, error 18
> >>      > [    2.845548] usb 1-1: device descriptor read/8, error -61
> >>      > [    2.977603] usb 1-1: device descriptor read/8, error -61
> >>      > [    3.237530] usb 1-1: device descriptor read/8, error -61
> >>      > [    3.369585] usb 1-1: device descriptor read/8, error -61
> >>      > [    3.480765] usb usb1-port1: unable to enumerate USB device
> >>      >
> >>      > Traces stop here. I could try with a memory card. Maybe it makes
> >>     a difference.
> >>
> >>     This is very surprising. Disabling swiotlb-xen should make things
> >>     better, not worse. The only reason I can think of why it could make
> >>     things worse is if Linux runs out of low memory. Julien's patch
> >>     437b0aa06a014ce174e24c0d3530b3e9ab19b18b for Xen should have
> >>     addressed that issue though. Julien, any ideas?
> >
> > I think Stefano's small patch is not enough to disable the swiotlb, as
> > we will still override the DMA ops. You also likely want:
> >
> > diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
> > index 8a8949174b1c..aa43e249ecdd 100644
> > --- a/arch/arm/mm/dma-mapping.c
> > +++ b/arch/arm/mm/dma-mapping.c
> > @@ -2279,7 +2279,7 @@ void arch_setup_dma_ops(struct device *dev, u64
> > dma_base, u64 size,
> >          set_dma_ops(dev, dma_ops);
> >
> >   #ifdef CONFIG_XEN
> > -       if (xen_initial_domain())
> > +       if (0 || xen_initial_domain())
> >                  dev->dma_ops = &xen_swiotlb_dma_ops;
> >   #endif
> >          dev->archdata.dma_ops_setup = true;
> >
> > Otherwise, you would still use the swiotlb DMA ops that would not be
> > functional as we disabled the swiotlb.
> >
> > This would explain the following error because it will check whether the
> > mask is valid using the callback dma_supported():
> >
> > [    0.071081] bcm2835-dma fe007000.dma: Unable to set DMA mask
> >
> Good catch.
> GPU works now and I can start X! Thanks! I was also able to create a domU
> that runs Raspbian OS.

It is very interesting that you were able to achieve that -- congrats!

Now, sorry to be a bit dense -- but since this thread went into all
sorts of interesting directions at once -- I have one very particular
question: what is the exact combination of Xen version, Linux kernel
version, and set of patches on top that allowed you to do that? I'd
obviously love to see it productized in Xen upstream, but for now I'd
love to make it available to the Project EVE/Xen community, since
there seem to be a few folks interested in the EVE/Xen combo being
able to do that.

Thanks,
Roman.


From xen-devel-bounces@lists.xenproject.org Tue Feb 02 02:29:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 02:29:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80289.146757 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6lRJ-00036l-DE; Tue, 02 Feb 2021 02:29:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80289.146757; Tue, 02 Feb 2021 02:29:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6lRJ-00036e-AH; Tue, 02 Feb 2021 02:29:13 +0000
Received: by outflank-mailman (input) for mailman id 80289;
 Tue, 02 Feb 2021 02:29:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=33Vw=HE=citrix.com=igor.druzhinin@srs-us1.protection.inumbo.net>)
 id 1l6lRH-00036Z-T5
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 02:29:12 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 829d8355-4b57-4e84-93de-54556559ae70;
 Tue, 02 Feb 2021 02:29:11 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 829d8355-4b57-4e84-93de-54556559ae70
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612232950;
  h=to:from:subject:message-id:date:mime-version:
   content-transfer-encoding;
  bh=aBk6kPNay1v13xNGzgI3rqpwZ9zBuxanpuYIpdG7v2o=;
  b=IFb3NqLY6NlTm8wRTui7yuoYJOm5izWUF9rMpQ4+ew1elPhGEiKFhAIO
   AqTVO9fYz4pEhUJ41Sf1szPqb9mSCY5WI35EQMAoaVI8LSWLqSU7Y5g1W
   f+FeKDnKI/jQyDix1x92WN1wcZV7emyZfxRA6wmI4PAPmScJ+3du4cC7e
   4=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: vTLfqJPo0/XPxXfapal6Ck3ylc+JRO1WtY0vx9VD1dVhk+tIP2Hphd1efDgvBFRQrK9A71HzBw
 vPe48PhqjnWxI5pV/7EpgfdF9kNSN5Q8q0dGMZLixw7xaCIkKKqwhaTI3rKLWgcMX+tbdllHWl
 uVtIym+DRFRpJPtjlw5OTRy14IsOm7Vda/nYXOvtDwG9yT3EVzeB5qwOQ3nfDxqJ8Wwu5tq18w
 /N6adRS31X+ZcksZUNpADtsWp4DkpKYJNRmbUs+kRVXTXhCE/demms8wB4TBUMks2R4h6B9egZ
 iUw=
X-SBRS: 5.1
X-MesageID: 37661405
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,393,1602561600"; 
   d="scan'208";a="37661405"
To: 'Juergen Gross' <jgross@suse.com>, xen-devel
	<xen-devel@lists.xenproject.org>
From: Igor Druzhinin <igor.druzhinin@citrix.com>
Subject: dom0 crash in xenvif_rx_ring_slots_available
Message-ID: <c7dea44e-8e6d-1b98-b2a4-7e9623a8e3fb@citrix.com>
Date: Tue, 2 Feb 2021 02:29:05 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Juergen,

We've got a crash report from one of our customers (see below) running a 4.4 kernel.
The functions involved seem to be the new ones that came with XSA-332, and nothing
like that has been reported in their cloud before. It appears there is some
use-after-free happening on the skb in the following code fragment:

static bool xenvif_rx_ring_slots_available(struct xenvif_queue *queue)
{
        RING_IDX prod, cons;
        struct sk_buff *skb;
        int needed;

        skb = skb_peek(&queue->rx_queue);
        if (!skb)
                return false;

        needed = DIV_ROUND_UP(skb->len, XEN_PAGE_SIZE);
        if (skb_is_gso(skb))  <== skb points to 0-ed memory
                needed++;

Has something similar been reported before? Any ideas?

Igor


[2721961.681180]  ALERT: BUG: unable to handle kernel NULL pointer dereference at 0000000000000002
[2721961.681222]  ALERT: IP: [<ffffffff8146c4da>] xenvif_rx_ring_slots_available+0x4a/0xa0
[2721961.681252]   WARN: PGD 121cf5067 PUD 10095a067 PMD 0 
[2721961.681274]   WARN: Oops: 0000 [#1] SMP 
[2721961.681291]   WARN: Modules linked in: iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi vport_vxlan(O) vport_stt(O) tun sch_sfq sch_htb 8021q garp mrp stp llc openvswitch(O) tunnel6 nf_conntrack_ipv6 nf_nat_ipv6 nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_defrag_ipv6 nf_nat nf_conntrack libcrc32c xt_tcpudp iptable_filter dm_multipath ipmi_devintf dm_mod x86_pkg_temp_thermal coretemp crc32_pclmul aesni_intel aes_x86_64 ablk_helper cryptd lrw gf128mul glue_helper psmouse sb_edac sg edac_core lpc_ich mfd_core hpilo shpchp wmi tpm_tis tpm nls_utf8 isofs ipmi_si ipmi_msghandler ip_tables x_tables sd_mod uhci_hcd serio_raw ehci_pci ehci_hcd ixgbe(O) hpsa(O) vxlan scsi_transport_sas ip6_udp_tunnel udp_tunnel ptp pps_core scsi_dh_rdac scsi_dh_hp_sw scsi_dh_emc scsi_dh_alua scsi_mod ipv6 autofs4
[2721961.681538]   WARN: CPU: 12 PID: 0 Comm: swapper/12 Tainted: G           O    4.4.0+2 #1
[2721961.681551]   WARN: Hardware name: HP ProLiant DL380p Gen8, BIOS P70 05/21/2018
[2721961.681562]   WARN: task: ffff880149231c00 ti: ffff880149238000 task.ti: ffff880149238000
[2721961.681574]   WARN: RIP: e030:[<ffffffff8146c4da>]  [<ffffffff8146c4da>] xenvif_rx_ring_slots_available+0x4a/0xa0
[2721961.681591]   WARN: RSP: e02b:ffff880149b83db8  EFLAGS: 00010002
[2721961.681601]   WARN: RAX: 0000000000000000 RBX: ffffc90040a0e000 RCX: 0000000000000000
[2721961.681615]   WARN: RDX: 0000000000000001 RSI: 0000000000000000 RDI: ffffc90040a0e000
[2721961.681629]   WARN: RBP: ffff880149b83db8 R08: 0000000000000000 R09: ffff880057bcc000
[2721961.681647]   WARN: R10: ffff880057bcc0e0 R11: 0000000000000000 R12: 0000000000000000
[2721961.681671]   WARN: R13: 0000000000000157 R14: 0000000000000001 R15: 0000000000000000
[2721961.681742]   WARN: FS:  00007f5cc4c98780(0000) GS:ffff880149b80000(0000) knlGS:0000000000000000
[2721961.681762]   WARN: CS:  e033 DS: 002b ES: 002b CR0: 0000000080050033
[2721961.681777]   WARN: CR2: 0000000000000002 CR3: 0000000101006000 CR4: 0000000000040660
[2721961.681802]   WARN: Stack:
[2721961.681812]   WARN:  ffff880149b83dd8 ffffffff8146cdf7 ffffc90040a0e000 ffffc90040a0e028
[2721961.681878]   WARN:  ffff880149b83e08 ffffffff8146aa71 ffff88009ca7ca80 0000000000000157
[2721961.681911]   WARN:  ffff88006315c020 0000000000000000 ffff880149b83e58 ffffffff810c046f
[2721961.681971]   WARN: Call Trace:
[2721961.681981]   WARN:  <IRQ> 
[2721961.681994]   WARN:  [<ffffffff8146cdf7>] xenvif_have_rx_work+0x17/0x80
[2721961.682044]   WARN:  [<ffffffff8146aa71>] xenvif_interrupt+0x61/0xb0
[2721961.682065]   WARN:  [<ffffffff810c046f>] handle_irq_event_percpu+0x7f/0x1e0
[2721961.682108]   WARN:  [<ffffffff810c060b>] handle_irq_event+0x3b/0x60
[2721961.682125]   WARN:  [<ffffffff810c35ef>] handle_edge_irq+0xff/0x130
[2721961.682178]   WARN:  [<ffffffff810bfc72>] generic_handle_irq+0x22/0x30
[2721961.682196]   WARN:  [<ffffffff813c6a0d>] handle_irq_for_port+0xbd/0xd0
[2721961.682248]   WARN:  [<ffffffff813c7c23>] __evtchn_fifo_handle_events+0x143/0x170
[2721961.682266]   WARN:  [<ffffffff813c7c7e>] evtchn_fifo_handle_events+0xe/0x10
[2721961.682285]   WARN:  [<ffffffff813c4af0>] __xen_evtchn_do_upcall+0x50/0x90
[2721961.682339]   WARN:  [<ffffffff813c6a50>] xen_evtchn_do_upcall+0x30/0x50
[2721961.682359]   WARN:  [<ffffffff815a456e>] xen_do_hypervisor_callback+0x1e/0x40
[2721961.682414]   WARN:  <EOI> 
[2721961.682425]   WARN:  [<ffffffff810013aa>] ? xen_hypercall_sched_op+0xa/0x20
[2721961.682443]   WARN:  [<ffffffff810013aa>] ? xen_hypercall_sched_op+0xa/0x20
[2721961.682492]   WARN:  [<ffffffff8100c570>] ? xen_safe_halt+0x10/0x20
[2721961.682506]   WARN:  [<ffffffff81020d27>] ? default_idle+0x57/0xf0
[2721961.682518]   WARN:  [<ffffffff8102145f>] ? arch_cpu_idle+0xf/0x20
[2721961.682569]   WARN:  [<ffffffff810ab292>] ? default_idle_call+0x32/0x40
[2721961.682582]   WARN:  [<ffffffff810ab4ec>] ? cpu_startup_entry+0x1ec/0x330
[2721961.682633]   WARN:  [<ffffffff81013dd8>] ? cpu_bringup_and_idle+0x18/0x20
[2721961.682645]   WARN: Code: 73 48 85 c0 74 f7 8b 90 80 00 00 00 8b 88 cc 00 00 00 4c 8b 80 d0 00 00 00 0f b6 80 91 00 00 00 48 81 c2 ff 0f 00 00 48 c1 ea 0c <66> 41 83 7c 08 02 00 8d 72 01 0f 44 f2 48 8b 97 d8 a9 00 00 83 
[2721961.682944]  ALERT: RIP  [<ffffffff8146c4da>] xenvif_rx_ring_slots_available+0x4a/0xa0
[2721961.682998]   WARN:  RSP <ffff880149b83db8>
[2721961.683006]   WARN: CR2: 0000000000000002
[2721961.683808]   WARN: ---[ end trace b82c754a03a6c6e8 ]---
[2721961.683830]  EMERG: Kernel panic - not syncing: Fatal exception in interrupt


From xen-devel-bounces@lists.xenproject.org Tue Feb 02 02:40:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 02:40:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80294.146775 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6lcK-0004yc-Kk; Tue, 02 Feb 2021 02:40:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80294.146775; Tue, 02 Feb 2021 02:40:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6lcK-0004yV-HU; Tue, 02 Feb 2021 02:40:36 +0000
Received: by outflank-mailman (input) for mailman id 80294;
 Tue, 02 Feb 2021 02:40:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6lcI-0004yL-Ot; Tue, 02 Feb 2021 02:40:34 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6lcI-00010P-It; Tue, 02 Feb 2021 02:40:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6lcI-00036U-C1; Tue, 02 Feb 2021 02:40:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l6lcI-0000xm-BU; Tue, 02 Feb 2021 02:40:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=n58zz5vGqXahK1LZxfFviKffIjPktbCvM7AQ8Y8jX+M=; b=exYbCfcNE0LkTJtaQYbU8JhBX0
	3k0nc95c9Bb1f8/uyHm0ojV1y55Wdw8E/zxJHRJkooKD5TYgkmzjf62M99ha9WxHycS9iqgpBjO/6
	JSxeeCQmV/Qf0/2mu2YS9vLFIzdiIyD4ALfMlkmrchK7dVfHkybZQb6a2siPm3zu2pKQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158923-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 158923: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=ffbb8aa282de262403275f2395d8540818cf576e
X-Osstest-Versions-That:
    xen=9dc687f155a57216b83b17f9cde55dd43e06b0cd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 02 Feb 2021 02:40:34 +0000

flight 158923 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158923/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 158804
 build-arm64-xsm               6 xen-build                fail REGR. vs. 158804
 build-armhf                   6 xen-build                fail REGR. vs. 158804

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  ffbb8aa282de262403275f2395d8540818cf576e
baseline version:
 xen                  9dc687f155a57216b83b17f9cde55dd43e06b0cd

Last test of basis   158804  2021-01-30 04:00:24 Z    2 days
Failing since        158892  2021-02-01 16:00:25 Z    0 days    6 attempts
Testing same since   158897  2021-02-01 19:02:25 Z    0 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <iwj@xenproject.org>
  Manuel Bouyer <bouyer@netbsd.org>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit ffbb8aa282de262403275f2395d8540818cf576e
Author: Roger Pau Monne <roger.pau@citrix.com>
Date:   Mon Feb 1 16:53:17 2021 +0100

    xenstore: fix build on {Net/Free}BSD
    
    The endian.h header is in sys/ on NetBSD and FreeBSD.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit 419cd07895891c6642f29085aee07be72413f08c
Author: Ian Jackson <iwj@xenproject.org>
Date:   Mon Feb 1 15:18:36 2021 +0000

    xenpmd.c: Remove hard tab
    
    bbed98e7cedc "xenpmd.c: use dynamic allocation" had a hard tab.
    I thought we had fixed that and I thought I had checked.
    Remove it now.
    
    Signed-off-by: Ian Jackson <iwj@xenproject.org>

commit bbed98e7cedcd5072671c21605330075740382d3
Author: Manuel Bouyer <bouyer@netbsd.org>
Date:   Sat Jan 30 19:27:10 2021 +0100

    xenpmd.c: use dynamic allocation
    
    On NetBSD, d_name is larger than 256, so file_name[284] may not be large
    enough (and gcc emits a format-truncation error).
    Use asprintf() instead of snprintf() on a static on-stack buffer.
    
    Signed-off-by: Manuel Bouyer <bouyer@netbsd.org>
    Reviewed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>
    
    Plus
    
    define _GNU_SOURCE for asprintf()
    
    Harmless on NetBSD.
    
    Signed-off-by: Manuel Bouyer <bouyer@netbsd.org>
    Reviewed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Feb 02 02:53:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 02:53:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80304.146810 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6los-00066x-TE; Tue, 02 Feb 2021 02:53:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80304.146810; Tue, 02 Feb 2021 02:53:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6los-00066q-Q8; Tue, 02 Feb 2021 02:53:34 +0000
Received: by outflank-mailman (input) for mailman id 80304;
 Tue, 02 Feb 2021 02:53:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=46ur=HE=gmail.com=tamas.k.lengyel@srs-us1.protection.inumbo.net>)
 id 1l6lor-00066l-Lv
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 02:53:33 +0000
Received: from mail-wr1-x436.google.com (unknown [2a00:1450:4864:20::436])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 35998ec5-4221-44d3-bb0d-4ab472f4bff1;
 Tue, 02 Feb 2021 02:53:32 +0000 (UTC)
Received: by mail-wr1-x436.google.com with SMTP id q7so18780748wre.13
 for <xen-devel@lists.xenproject.org>; Mon, 01 Feb 2021 18:53:32 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 35998ec5-4221-44d3-bb0d-4ab472f4bff1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=0DMF/kOzW6VsTETjB9wkPrPZebkGXhjUK4k4jG+hBcU=;
        b=WZ63WUPwN3GWiCeACGQYCWmM+tXZDEw0a8lfKSSXhoa4cJt/Pjz5EStbISMtXUfjU0
         YRyDxsuFnyObDCH6b3Sp+Fmnsx+Djf3DauJtMPyQowM1ZAG6YnoovxvOZ28H1e/Zt9so
         1Mtka1kPCTOuEzGSERj3cWOvwyYS0QE7YQDI4TQ4AeqAzFrlkgZoITvA16y0BM4DEZpx
         pZ9vm+Ww4OiZ9ycRUAYXNwBl/TmV7R8ossdHwkr40BsubBkqCSFKhR4lz/vp/+2WZm7Z
         hmluJ8pjf46lOhcbiOzzEdsW5IY8CMcOERwvkw5Gyq3lOwoSitUDUCz3ebyagbgtkg3O
         ySow==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=0DMF/kOzW6VsTETjB9wkPrPZebkGXhjUK4k4jG+hBcU=;
        b=tmmxOHN3IgVPtkx2EaboviULb8TKBvIaVE/O0c5RJpJTrLHfrep7GGRf11rLCFPUhQ
         z8rnmZEA3nt6fj2ZhOCwwYZqyV5jCbhY4SgStoM1Fzd8b2RhqVPy7M5e7+y4gVFKMszU
         cNHePQ7zrLZXxcRvxsI09mcdjCqajuJLjZPLT6lBN8WzQeSr6hO5D/GBPzHq9XbVNxgm
         t2d16IAIFksp5AEZ32cVBU+n1gbBoBFz5uDpuLR0P3H3ASXRmWHXyd0XFGWbSXGr5dEv
         baBCwRgN+KRiqwv0wzn3jsxh4klWDNvP+1OzCOqiBWNPdNf1JJ3hI9mh20pxvhCumE/l
         kpvg==
X-Gm-Message-State: AOAM531Uiv95kGrrezEzuvlaPNdZRx2K3wwhBeVSBcnMGTvAezNuXTvb
	DE6D5bFGnHauDY6CdTFumqxd3rhaB14Ckig2E6Q=
X-Google-Smtp-Source: ABdhPJzwCo0WIi3Ucf+KKYiyulT5R5pHWdCdF7by0jioNVDY6kUAEXzhTK/rPiDDm0NeP15YhD1tmIe5lmITD0rcF4o=
X-Received: by 2002:adf:ffce:: with SMTP id x14mr21801322wrs.390.1612234411741;
 Mon, 01 Feb 2021 18:53:31 -0800 (PST)
MIME-Version: 1.0
References: <CABfawhnvgFLU=VmaqmKyf8DNeVcXoXTD2=XpiwnL0OikC1_z4w@mail.gmail.com>
 <YBc+Iaf1CCgXO0Aj@mattapan.m5p.com> <CABfawhncPyDKy_G2Lz7MaEb_ZoPadHef7S7KAj-fbCbQq6YNGA@mail.gmail.com>
 <YBdgf4KKcDn0SCOw@mattapan.m5p.com> <CABfawhmrWX6tO-bESuF5oGec5cLbkHdyjdCGsfwX5AVrwQBsKA@mail.gmail.com>
 <YBeXfWf8lQ2nwMtI@mattapan.m5p.com> <CABfawhn74W88nJz5bCvA=VMxX_QqKv1ZaDQxOEtNZu5Pr8mFag@mail.gmail.com>
 <CABfawhktGwwXNdMrm4uShKSs5MvaUz2hG63wzcDA97z9pGL=Ug@mail.gmail.com>
 <alpine.DEB.2.21.2102011737010.29047@sstabellini-ThinkPad-T480s> <CAMmSBy--93yDWZcfhkDHHPxmydvJ4tyymwTzHCC4apObD4983Q@mail.gmail.com>
In-Reply-To: <CAMmSBy--93yDWZcfhkDHHPxmydvJ4tyymwTzHCC4apObD4983Q@mail.gmail.com>
From: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Date: Mon, 1 Feb 2021 21:52:53 -0500
Message-ID: <CABfawhkYmFB9Xc+zmvzA6ispiryfX1EgMWzYTSvHxD9Gmu=TUg@mail.gmail.com>
Subject: Re: Xen 4.14.1 on RPI4: device tree generation failed
To: Roman Shaposhnik <roman@zededa.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Elliott Mitchell <ehem+xen@m5p.com>, 
	Xen-devel <xen-devel@lists.xenproject.org>, Julien Grall <julien@xen.org>, 
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Content-Type: text/plain; charset="UTF-8"

On Mon, Feb 1, 2021 at 9:10 PM Roman Shaposhnik <roman@zededa.com> wrote:
>
> On Mon, Feb 1, 2021 at 5:40 PM Stefano Stabellini
> <sstabellini@kernel.org> wrote:
> >
> > On Mon, 1 Feb 2021, Tamas K Lengyel wrote:
> > > On Mon, Feb 1, 2021 at 10:23 AM Tamas K Lengyel
> > > <tamas.k.lengyel@gmail.com> wrote:
> > > >
> > > > On Mon, Feb 1, 2021 at 12:54 AM Elliott Mitchell <ehem+xen@m5p.com> wrote:
> > > > >
> > > > > On Sun, Jan 31, 2021 at 10:06:21PM -0500, Tamas K Lengyel wrote:
> > > > > > With rpi-4.19.y kernel and dtbs
> > > > > > (cc39f1c9f82f6fe5a437836811d906c709e0661c) Xen boots fine and the
> > > > > > previous error is not present. I get the boot log on the serial with
> > > > > > just console=hvc0 from dom0 but the kernel ends up in a panic down the
> > > > > > line:
> > > > >
> > > > > > This seems to have been caused by a monitor being attached to the HDMI
> > > > > > port, with HDMI unplugged dom0 boots OK.
> > > > >
> > > > > The balance of reports seems to suggest 5.10 is the way to go if you want
> > > > > graphics on a RP4 with Xen.  Even without Xen, 4.19 is looking rickety
> > > > > on RP4.
> > > > >
> > > > >
> > > > > On Sun, Jan 31, 2021 at 09:43:13PM -0500, Tamas K Lengyel wrote:
> > > > > > On Sun, Jan 31, 2021 at 8:59 PM Elliott Mitchell <ehem+xen@m5p.com> wrote:
> > > > > > >
> > > > > > > On Sun, Jan 31, 2021 at 06:50:36PM -0500, Tamas K Lengyel wrote:
> > > > > > > > On Sun, Jan 31, 2021 at 6:33 PM Elliott Mitchell <ehem+undef@m5p.com> wrote:
> > > > > > > > > Presently the rpixen script is grabbing the RPF's 4.19 branch, dates
> > > > > > > > > point to that last being touched last year.  Their tree is at
> > > > > > > > > cc39f1c9f82f6fe5a437836811d906c709e0661c.
> > > > > > > >
> > > > > > > > I've moved the Linux branch up to 5.10 because a fair amount of
> > > > > > > > work went into fixing Xen on RPI4, which got merged into 5.9, and
> > > > > > > > I would like to be able to build everything upstream without the
> > > > > > > > custom patches that come with the rpixen script repo.
> > > > > > >
> > > > > > > Please keep track of where your kernel source is checked out, since
> > > > > > > there was a desire to figure out what was going on with the device-trees.
> > > > > > >
> > > > > > >
> > > > > > > Including "console=hvc0 console=AMA0 console=ttyS0 console=tty0" in the
> > > > > > > kernel command-line should ensure you get output from the kernel if it
> > > > > > > manages to start (yes, Linux does support having multiple consoles at the
> > > > > > > same time).
> > > > > >
> > > > > > No output from dom0 received even with the added console options
> > > > > > (+earlyprintk=xen). The kernel build was from rpi-5.10.y
> > > > > > c9226080e513181ffb3909a905e9c23b8a6e8f62. I'll check if it still boots
> > > > > > with 4.19 next.
> > > > >
> > > > > So, their current HEAD.  This reads like you've got a problematic kernel
> > > > > configuration.  What procedure are you following to generate the
> > > > > configuration you use?
> > > > >
> > > > > Using their upstream as a base and then adding the configuration options
> > > > > for Xen has worked fairly well for me (`make bcm2711_defconfig`,
> > > > > `make menuconfig`, `make zImage`).
> > > > >
> > > > > Notably the options:
> > > > > CONFIG_PARAVIRT
> > > > > CONFIG_XEN_DOM0
> > > > > CONFIG_XEN
> > > > > CONFIG_XEN_BLKDEV_BACKEND
> > > > > CONFIG_XEN_NETDEV_BACKEND
> > > > > CONFIG_HVC_XEN
> > > > > CONFIG_HVC_XEN_FRONTEND
> > > > >
> > > > > Should be set to "y".
> > > >
> > > > Yes, these configs are all set the same way for all Linux builds by the script:
> > > >         make O=.build-arm64 ARCH=arm64
> > > > CROSS_COMPILE=aarch64-none-linux-gnu- bcm2711_defconfig xen.config
> > > >
> > > > I tried with both the rpi-5.10.y and rpi-5.9.y, neither boot up as
> > > > dom0. So far only 4.19 boots.
> > >
> > > rpi-5.4.y boots but ends up in yet another different kernel panic:
> >
> > That's an interesting error. However, I can tell you that I can boot
> > rpi-5.9.y just fine (without a monitor attached) with:
> >
> > cd linux
> > KERNEL=kernel7l
> > make bcm2711_defconfig
> >
> > As mentioned here:
> >
> > https://www.raspberrypi.org/documentation/linux/kernel/building.md
> >
> > and also taking the device tree from arch/arm64/boot/dts/broadcom/.
>
> FWIW: I see the same results with stock upstream 5.10.7 effectively
> following the steps you're doing.
>
> However, as I keep saying -- the combination of firmware and u-boot
> (in my case) is very sensitive -- hence we're relying on a very
> particular set of bits for it in EVE, and it will refuse to work
> with anything else.
>
> It may be helpful to take that combination outside of EVE's context and
> try it out in your experiments, Tamas.

Well, I'm giving up on this for now. I've run out of ideas to try, and I
don't see any useful suggestions on how to debug this further. It looks
like the setup is extremely fragile and works only under specific
conditions that are poorly documented - if documented at all.

Tamas


From xen-devel-bounces@lists.xenproject.org Tue Feb 02 02:57:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 02:57:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80306.146822 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6lsf-0006Gv-Ex; Tue, 02 Feb 2021 02:57:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80306.146822; Tue, 02 Feb 2021 02:57:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6lsf-0006Go-B2; Tue, 02 Feb 2021 02:57:29 +0000
Received: by outflank-mailman (input) for mailman id 80306;
 Tue, 02 Feb 2021 02:57:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6lse-0006Gg-A0; Tue, 02 Feb 2021 02:57:28 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6lse-0001Go-0P; Tue, 02 Feb 2021 02:57:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6lsd-0003oz-NN; Tue, 02 Feb 2021 02:57:27 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l6lsd-0005ew-Ms; Tue, 02 Feb 2021 02:57:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=oNBypP+EMW2zlsfTbZTVAcojtyVOslzqC4vrZqNP4ZM=; b=uhZgUtbevGCgHZ5Voy6g9rB/y0
	i83UhDS6lWMBhAYY8AiEoOwFuCAPVNL/POPFX+X/TnkR/Bss/yx84TuTVig0o71svga0R44r6OxPy
	ZbNletTwiQlR52v2creZAAmwcKGHQhF+dkJ2TLzjgDt6zRxV/YI1wYYYUs5ZZqAc8zf4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158898-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 158898: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    linux-5.4:test-arm64-arm64-xl:<job status>:broken:regression
    linux-5.4:test-arm64-arm64-xl-credit2:<job status>:broken:regression
    linux-5.4:test-arm64-arm64-xl-xsm:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-multivcpu:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-pvhv2-amd:guest-start:fail:regression
    linux-5.4:test-amd64-coresched-amd64-xl:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-qemuu-freebsd12-amd64:guest-start:fail:regression
    linux-5.4:test-amd64-coresched-i386-xl:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-seattle:guest-start:fail:regression
    linux-5.4:test-amd64-i386-qemut-rhel6hvm-amd:redhat-install:fail:regression
    linux-5.4:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
    linux-5.4:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-pvshim:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-pvhv2-intel:guest-start:fail:regression
    linux-5.4:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    linux-5.4:test-amd64-i386-libvirt:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-credit1:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-pair:guest-start/debian:fail:regression
    linux-5.4:test-amd64-amd64-xl:guest-start:fail:regression
    linux-5.4:test-amd64-i386-xl:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-libvirt:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-shadow:guest-start:fail:regression
    linux-5.4:test-amd64-i386-xl-xsm:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-credit2:guest-start:fail:regression
    linux-5.4:test-amd64-i386-pair:guest-start/debian:fail:regression
    linux-5.4:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
    linux-5.4:test-amd64-i386-libvirt-pair:guest-start/debian:fail:regression
    linux-5.4:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    linux-5.4:test-amd64-i386-qemut-rhel6hvm-intel:redhat-install:fail:regression
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:windows-install:fail:regression
    linux-5.4:test-amd64-amd64-amd64-pvgrub:debian-di-install:fail:regression
    linux-5.4:test-amd64-amd64-pygrub:debian-di-install:fail:regression
    linux-5.4:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:windows-install:fail:regression
    linux-5.4:test-amd64-amd64-i386-pvgrub:debian-di-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    linux-5.4:test-arm64-arm64-xl-credit1:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-thunderx:guest-start:fail:regression
    linux-5.4:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-amd64:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qcow2:debian-di-install:fail:regression
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemut-debianhvm-amd64:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-libvirt-vhd:debian-di-install:fail:regression
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:windows-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
    linux-5.4:test-amd64-i386-xl-raw:debian-di-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:windows-install:fail:regression
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-cubietruck:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-libvirt:guest-start:fail:regression
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
    linux-5.4:test-amd64-amd64-qemuu-freebsd11-amd64:guest-start:fail:regression
    linux-5.4:test-amd64-i386-xl-shadow:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    linux-5.4:test-armhf-armhf-xl:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-arndale:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-xsm:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-credit2:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl:host-install(5):broken:heisenbug
    linux-5.4:test-arm64-arm64-xl-xsm:host-install(5):broken:heisenbug
    linux-5.4:test-arm64-arm64-xl-credit2:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-xl-rtds:guest-start:fail:allowable
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
X-Osstest-Versions-This:
    linux=0fbca6ce4174724f28be5268c5d210f51ed96e31
X-Osstest-Versions-That:
    linux=a829146c3fdcf6d0b76d9c54556a223820f1f73b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 02 Feb 2021 02:57:27 +0000

flight 158898 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158898/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl             <job status>                 broken
 test-arm64-arm64-xl-credit2     <job status>                 broken
 test-arm64-arm64-xl-xsm         <job status>                 broken
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 158387
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 158387
 test-amd64-amd64-xl-multivcpu 14 guest-start             fail REGR. vs. 158387
 test-amd64-amd64-xl-pvhv2-amd 14 guest-start             fail REGR. vs. 158387
 test-amd64-coresched-amd64-xl 14 guest-start             fail REGR. vs. 158387
 test-amd64-amd64-qemuu-freebsd12-amd64 13 guest-start    fail REGR. vs. 158387
 test-amd64-coresched-i386-xl 14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-seattle  14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-qemut-rhel6hvm-amd 12 redhat-install     fail REGR. vs. 158387
 test-amd64-i386-qemuu-rhel6hvm-amd 12 redhat-install     fail REGR. vs. 158387
 test-amd64-i386-freebsd10-amd64 13 guest-start           fail REGR. vs. 158387
 test-amd64-amd64-xl-pvshim   14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-xl-pvhv2-intel 14 guest-start           fail REGR. vs. 158387
 test-amd64-i386-freebsd10-i386 13 guest-start            fail REGR. vs. 158387
 test-amd64-amd64-xl-xsm      14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-libvirt      14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-xl-credit1  14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-pair        25 guest-start/debian       fail REGR. vs. 158387
 test-amd64-amd64-xl          14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-xl           14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-xl-shadow   14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-xl-xsm       14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-xl-credit2  14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-pair         25 guest-start/debian       fail REGR. vs. 158387
 test-amd64-i386-libvirt-xsm  14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-libvirt-pair 25 guest-start/debian       fail REGR. vs. 158387
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 158387
 test-amd64-i386-qemut-rhel6hvm-intel 12 redhat-install   fail REGR. vs. 158387
 test-amd64-amd64-qemuu-nested-amd 12 debian-hvm-install  fail REGR. vs. 158387
 test-amd64-i386-qemuu-rhel6hvm-intel 12 redhat-install   fail REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install  fail REGR. vs. 158387
 test-amd64-i386-xl-qemut-win7-amd64 12 windows-install   fail REGR. vs. 158387
 test-amd64-amd64-amd64-pvgrub 12 debian-di-install       fail REGR. vs. 158387
 test-amd64-amd64-pygrub      12 debian-di-install        fail REGR. vs. 158387
 test-amd64-amd64-qemuu-nested-intel 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemut-win7-amd64 12 windows-install  fail REGR. vs. 158387
 test-amd64-amd64-i386-pvgrub 12 debian-di-install        fail REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-i386-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 158387
 test-arm64-arm64-xl-credit1  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-thunderx 14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemut-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qcow2    12 debian-di-install        fail REGR. vs. 158387
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-i386-xl-qemut-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-libvirt-vhd 12 debian-di-install        fail REGR. vs. 158387
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-ws16-amd64 12 windows-install  fail REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemut-ws16-amd64 12 windows-install  fail REGR. vs. 158387
 test-amd64-i386-xl-qemuu-win7-amd64 12 windows-install   fail REGR. vs. 158387
 test-amd64-i386-xl-raw       12 debian-di-install        fail REGR. vs. 158387
 test-amd64-i386-xl-qemut-ws16-amd64 12 windows-install   fail REGR. vs. 158387
 test-armhf-armhf-xl-multivcpu 14 guest-start             fail REGR. vs. 158387
 test-armhf-armhf-xl-credit2  14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-cubietruck 14 guest-start            fail REGR. vs. 158387
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-xl-qemuu-ws16-amd64 12 windows-install   fail REGR. vs. 158387
 test-amd64-amd64-qemuu-freebsd11-amd64 13 guest-start    fail REGR. vs. 158387
 test-amd64-i386-xl-shadow    14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 158387
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 158387
 test-armhf-armhf-xl          14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 158387
 test-armhf-armhf-xl-credit1  14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-arndale  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl          14 guest-start    fail in 158881 REGR. vs. 158387
 test-arm64-arm64-xl-xsm      14 guest-start    fail in 158881 REGR. vs. 158387
 test-arm64-arm64-xl-credit2  14 guest-start    fail in 158881 REGR. vs. 158387

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl           5 host-install(5)          broken pass in 158881
 test-arm64-arm64-xl-xsm       5 host-install(5)          broken pass in 158881
 test-arm64-arm64-xl-credit2   5 host-install(5)          broken pass in 158881

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-rtds     14 guest-start              fail REGR. vs. 158387

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass

version targeted for testing:
 linux                0fbca6ce4174724f28be5268c5d210f51ed96e31
baseline version:
 linux                a829146c3fdcf6d0b76d9c54556a223820f1f73b

Last test of basis   158387  2021-01-12 19:40:06 Z   20 days
Failing since        158473  2021-01-17 13:42:20 Z   15 days   26 attempts
Testing same since   158818  2021-01-30 13:48:12 Z    2 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adrian Hunter <adrian.hunter@intel.com>
  Akilesh Kailash <akailash@google.com>
  Al Viro <viro@zeniv.linux.org.uk>
  Alan Maguire <alan.maguire@oracle.com>
  Alan Stern <stern@rowland.harvard.edu>
  Aleksander Jan Bajkowski <olek2@wp.pl>
  Alex Deucher <alexander.deucher@amd.com>
  Alex Leibovich <alexl@marvell.com>
  Alexander Lobakin <alobakin@pm.me>
  Alexander Shishkin <alexander.shishkin@linux.intel.com>
  Alexandru Ardelean <alexandru.ardelean@analog.com>
  Alexey Minnekhanov <alexeymin@postmarketos.org>
  Anders Roxell <anders.roxell@linaro.org>
  Andreas Kemnade <andreas@kemnade.info>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andrey Zhizhikin <andrey.zhizhikin@leica-geosystems.com>
  Andrii Nakryiko <andrii@kernel.org>
  Andy Lutomirski <luto@kernel.org>
  Anthony Iliopoulos <ailiop@suse.com>
  Ard Biesheuvel <ardb@kernel.org>
  Ariel Marcovitch <ariel.marcovitch@gmail.com>
  Ariel Marcovitch <arielmarcovitch@gmail.com>
  Arnaldo Carvalho de Melo <acme@redhat.com>
  Arnd Bergmann <arnd@arndb.de>
  Aya Levin <ayal@nvidia.com>
  Ayush Sawal <ayush.sawal@chelsio.com>
  Baptiste Lepers <baptiste.lepers@gmail.com>
  Bartosz Golaszewski <bgolaszewski@baylibre.com>
  Baruch Siach <baruch@tkos.co.il>
  Ben Skeggs <bskeggs@redhat.com>
  Billy Tsai <billy_tsai@aspeedtech.com>
  Borislav Petkov <bp@suse.de>
  Can Guo <cang@codeaurora.org>
  Catalin Marinas <catalin.marinas@arm.com>
  Cezary Rojewski <cezary.rojewski@intel.com>
  Chris Wilson <chris@chris-wilson.co.uk>
  Christoph Hellwig <hch@lst.de>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Chuck Lever <chuck.lever@oracle.com>
  Chunyan Zhang <chunyan.zhang@unisoc.com>
  Colin Ian King <colin.king@canonical.com>
  Cong Wang <cong.wang@bytedance.com>
  Craig Tatlor <ctatlor97@gmail.com>
  Damien Le Moal <damien.lemoal@wdc.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Daniel Vetter <daniel.vetter@ffwll.ch>
  Daniel Vetter <daniel.vetter@intel.com>
  Dave Wysochanski <dwysocha@redhat.com>
  David Howells <dhowells@redhat.com>
  David Rientjes <rientjes@google.com>
  David S. Miller <davem@davemloft.net>
  David Sterba <dsterba@suse.com>
  David Woodhouse <dwmw@amazon.co.uk>
  David Wu <david.wu@rock-chips.com>
  Dennis Li <Dennis.Li@amd.com>
  Dexuan Cui <decui@microsoft.com>
  Dinghao Liu <dinghao.liu@zju.edu.cn>
  Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
  Enke Chen <enchen@paloaltonetworks.com>
  Eric Biggers <ebiggers@google.com>
  Eric Dumazet <edumazet@google.com>
  Eugene Korenevsky <ekorenevsky@astralinux.ru>
  Ewan D. Milne <emilne@redhat.com>
  Fabian Vogt <fvogt@suse.com>
  Felipe Balbi <balbi@kernel.org>
  Felix Fietkau <nbd@nbd.name>
  Fenghua Yu <fenghua.yu@intel.com>
  Filipe Laíns <lains@archlinux.org>
  Filipe Manana <fdmanana@suse.com>
  Finn Thain <fthain@telegraphics.com.au>
  Florian Fainelli <f.fainelli@gmail.com>
  Florian Westphal <fw@strlen.de>
  Gaurav Kohli <gkohli@codeaurora.org>
  Geert Uytterhoeven <geert+renesas@glider.be>
  Gopal Tiwari <gtiwari@redhat.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Guido Günther <agx@sigxcpu.org>
  Guillaume Nault <gnault@redhat.com>
  Gustavo Pimentel <gustavo.pimentel@synopsys.com>
  Hamish Martin <hamish.martin@alliedtelesis.co.nz>
  Hangbin Liu <liuhangbin@gmail.com>
  Hannes Reinecke <hare@suse.de>
  Hans de Goede <hdegoede@redhat.com>
  Hao Wang <pkuwangh@gmail.com>
  Heikki Krogerus <heikki.krogerus@linux.intel.com>
  Hoang Le <hoang.h.le@dektech.com.au>
  Huazhong Tan <tanhuazhong@huawei.com>
  Ido Schimmel <idosch@nvidia.com>
  Igor Russkikh <irusskikh@marvell.com>
  Ingo Molnar <mingo@kernel.org>
  Ion Agorria <ion@agorria.com>
  Israel Rukshin <israelr@nvidia.com>
  J. Bruce Fields <bfields@redhat.com>
  j.nixdorf@avm.de <j.nixdorf@avm.de>
  Jakub Kicinski <kuba@kernel.org>
  Jamie Iles <jamie@jamieiles.com>
  Jan Kara <jack@suse.cz>
  Jani Nikula <jani.nikula@intel.com>
  Jann Horn <jannh@google.com>
  Jason A. Donenfeld <Jason@zx2c4.com>
  Jason Gerecke <jason.gerecke@wacom.com>
  Jason Gerecke <killertofu@gmail.com>
  Jason Gunthorpe <jgg@nvidia.com>
  JC Kuo <jckuo@nvidia.com>
  Jean Delvare <jdelvare@suse.de>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Axboe <axboe@kernel.dk>
  Jerome Brunet <jbrunet@baylibre.com>
  Jesper Dangaard Brouer <brouer@redhat.com>
  Jethro Beekman <jethro@fortanix.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jiri Kosina <jkosina@suse.cz>
  Jiri Olsa <jolsa@redhat.com>
  Jiri Slaby <jslaby@suse.cz>
  Joel Stanley <joel@jms.id.au>
  Johannes Berg <johannes.berg@intel.com>
  Johannes Nixdorf <j.nixdorf@avm.de>
  John Millikin <john@john-millikin.com>
  Johnathan Smithinovic <johnathan.smithinovic@gmx.at>
  Jon Hunter <jonathanh@nvidia.com>
  Jon Maloy <jmaloy@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Joonsoo Kim <iamjoonsoo.kim@lge.com>
  Josef Bacik <josef@toxicpanda.com>
  Jouni K. Seppänen <jks@iki.fi>
  Jozsef Kadlecsik <kadlec@netfilter.org>
  Juerg Haefliger <juergh@canonical.com>
  Juergen Gross <jgross@suse.com>
  Julian Wiedmann <jwi@linux.ibm.com>
  Kai-Heng Feng <kai.heng.feng@canonical.com>
  Kan Liang <kan.liang@linux.intel.com>
  Kees Cook <keescook@chromium.org>
  Krzysztof Mazur <krzysiek@podlesie.net>
  Krzysztof Piotr Olędzki <ole@ans.pl>
  Lars-Peter Clausen <lars@metafoo.de>
  Lecopzer Chen <lecopzer.chen@mediatek.com>
  Lecopzer Chen <lecopzer@gmail.com>
  Leon Romanovsky <leonro@nvidia.com>
  Leon Schuermann <leon@is.currently.online>
  Linhua Xu <linhua.xu@unisoc.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Longfang Liu <liulongfang@huawei.com>
  Lorenzo Bianconi <lorenzo@kernel.org>
  Lu Baolu <baolu.lu@linux.intel.com>
  Luis Lozano <llozano@google.com>
  Lukas Wunner <lukas@wunner.de>
  Manish Chopra <manishc@marvell.com>
  Manoj Gupta <manojgupta@google.com>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Marc Zyngier <maz@kernel.org>
  Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
  Marcin Wojtas <mw@semihalf.com>
  Mark Bloch <mbloch@nvidia.com>
  Mark Brown <broonie@kernel.org>
  Mark Zhang <markzhang@nvidia.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Martin KaFai Lau <kafai@fb.com>
  Martin Wilck <mwilck@suse.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Masami Hiramatsu <mhiramat@kernel.org>
  Mathias Kresin <dev@kresin.me>
  Mathias Nyman <mathias.nyman@linux.intel.com>
  Matteo Croce <mcroce@microsoft.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Miaohe Lin <linmiaohe@huawei.com>
  Michael Chan <michael.chan@broadcom.com>
  Michael Ellerman <mpe@ellerman.id.au>
  Michael Hennerich <michael.hennerich@analog.com>
  Michael S. Tsirkin <mst@redhat.com>
  Mike Snitzer <snitzer@redhat.com>
  Mikko Perttunen <mperttunen@nvidia.com>
  Mikulas Patocka <mpatocka@redhat.com>
  Ming Lei <ming.lei@redhat.com>
  Mircea Cirjaliu <mcirjaliu@bitdefender.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
  Neal Cardwell <ncardwell@google.com>
  Necip Fazil Yildiran <fazilyildiran@gmail.com>
  Nick Desaulniers <ndesaulniers@google.com>
  Nicolai Stange <nstange@suse.de>
  Nicolas Dichtel <nicolas.dichtel@6wind.com>
  Nilesh Javali <njavali@marvell.com>
  Oded Gabbay <ogabbay@kernel.org>
  Olaf Hering <olaf@aepfle.de>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Pali Rohár <pali@kernel.org>
  Palmer Dabbelt <palmerdabbelt@google.com>
  Pan Bian <bianpan2016@163.com>
  Parav Pandit <parav@nvidia.com>
  Patrik Jakobsson <patrik.r.jakobsson@gmail.com>
  Paul Cercueil <paul@crapouillou.net>
  Paulo Alcantara (SUSE) <pc@cjr.nz>
  Paulo Alcantara <pc@cjr.nz>
  Peter Collingbourne <pcc@google.com>
  Peter Geis <pgwipeout@gmail.com>
  Peter Robinson <pbrobinson@gmail.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Petr Machata <me@pmachata.org>
  Petr Machata <petrm@nvidia.com>
  Phil Oester <kernel@linuxace.com>
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Ping Cheng <ping.cheng@wacom.com>
  Ping Cheng <pinglinux@gmail.com>
  Po-Hsu Lin <po-hsu.lin@canonical.com>
  Qinglang Miao <miaoqinglang@huawei.com>
  Qingqing Zhuo <qingqing.zhuo@amd.com>
  Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Rafael Kitover <rkitover@gmail.com>
  Randy Dunlap <rdunlap@infradead.org>
  Rasmus Villemoes <rasmus.villemoes@prevas.dk>
  Reinette Chatre <reinette.chatre@intel.com>
  Rich Felker <dalias@libc.org>
  Rob Clark <robdclark@chromium.org>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Rohit Maheshwari <rohitm@chelsio.com>
  Roman Guskov <rguskov@dh-electronics.com>
  Ronnie Sahlberg <lsahlber@redhat.com>
  Ross Zwisler <zwisler@google.com>
  Ryan Chen <ryan_chen@aspeedtech.com>
  Saeed Mahameed <saeedm@nvidia.com>
  Sagar Shrikant Kadam <sagar.kadam@sifive.com>
  Sagi Grimberg <sagi@grimberg.me>
  Sameer Pujar <spujar@nvidia.com>
  Samuel Holland <samuel@sholland.org>
  Sasha Levin <sashal@kernel.org>
  Sean Tranchetti <stranche@codeaurora.org>
  Seth Miller <miller.seth@gmail.com>
  Shawn Guo <shawn.guo@linaro.org>
  Shravya Kumbham <shravya.kumbham@xilinx.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Stanislav Fomichev <sdf@google.com>
  Stefan Chulski <stefanc@marvell.com>
  Steffen Klassert <steffen.klassert@secunet.com>
  Stephan Gerhold <stephan@gerhold.net>
  Stephen Boyd <sboyd@kernel.org>
  Steve French <stfrench@microsoft.com>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Su Yue <l@damenly.su>
  Sudip Mukherjee <sudipm.mukherjee@gmail.com>
  Takashi Iwai <tiwai@suse.de>
  Tariq Toukan <tariqt@nvidia.com>
  Theodore Ts'o <tytso@mit.edu>
  Thierry Reding <treding@nvidia.com>
  Thinh Nguyen <Thinh.Nguyen@synopsys.com>
  Thomas Bogendoerfer <tsbogend@alpha.franken.de>
  Thomas Gleixner <tglx@linutronix.de>
  Thomas Hebb <tommyhebb@gmail.com>
  Tobias Waldekranz <tobias@waldekranz.com>
  Toke Høiland-Jørgensen <toke@toke.dk>
  Tom Rix <trix@redhat.com>
  Tony Lindgren <tony@atomide.com>
  Trond Myklebust <trond.myklebust@hammerspace.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
  Vadim Pasternak <vadimp@nvidia.com>
  Valdis Kletnieks <valdis.kletnieks@vt.edu>
  Valdis Klētnieks <valdis.kletnieks@vt.edu>
  Vasily Averin <vvs@virtuozzo.com>
  Victor Zhao <Victor.Zhao@amd.com>
  Vinay Kumar Yadav <vinay.yadav@chelsio.com>
  Vincent Mailhol <mailhol.vincent@wanadoo.fr>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vineet Gupta <vgupta@synopsys.com>
  Vinod Koul <vkoul@kernel.org>
  Viresh Kumar <viresh.kumar@linaro.org>
  Vladimir Oltean <vladimir.oltean@nxp.com>
  Vlastimil Babka <vbabka@suse.cz>
  Wang Hai <wanghai38@huawei.com>
  Wang Hui <john.wanghui@huawei.com>
  Wayne Lin <Wayne.Lin@amd.com>
  Wei Liu <wei.liu@kernel.org>
  Will Deacon <will@kernel.org>
  Willem de Bruijn <willemb@google.com>
  Wolfram Sang <wsa+renesas@sang-engineering.com>
  Wolfram Sang <wsa@kernel.org>
  Xiaolei Wang <xiaolei.wang@windriver.com>
  yangerkun <yangerkun@huawei.com>
  Yazen Ghannam <Yazen.Ghannam@amd.com>
  Yonglong Liu <liuyonglong@huawei.com>
  Youling Tang <tangyouling@loongson.cn>
  YueHaibing <yuehaibing@huawei.com>
  Yufeng Mo <moyufeng@huawei.com>
  zhengbin <zhengbin13@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          broken  
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      broken  
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  broken  
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-arm64-arm64-xl broken
broken-job test-arm64-arm64-xl-credit2 broken
broken-job test-arm64-arm64-xl-xsm broken
broken-step test-arm64-arm64-xl host-install(5)
broken-step test-arm64-arm64-xl-xsm host-install(5)
broken-step test-arm64-arm64-xl-credit2 host-install(5)

Not pushing.

(No revision log; it would be 8166 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Feb 02 03:00:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 03:00:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80310.146837 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6lvj-0007Ij-2Z; Tue, 02 Feb 2021 03:00:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80310.146837; Tue, 02 Feb 2021 03:00:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6lvi-0007Ic-Uy; Tue, 02 Feb 2021 03:00:38 +0000
Received: by outflank-mailman (input) for mailman id 80310;
 Tue, 02 Feb 2021 03:00:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wa+X=HE=zededa.com=roman@srs-us1.protection.inumbo.net>)
 id 1l6lvh-0007IW-Fd
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 03:00:37 +0000
Received: from mail-qt1-x835.google.com (unknown [2607:f8b0:4864:20::835])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ffc1f792-79da-4844-b328-714c441af365;
 Tue, 02 Feb 2021 03:00:36 +0000 (UTC)
Received: by mail-qt1-x835.google.com with SMTP id o18so13989368qtp.10
 for <xen-devel@lists.xenproject.org>; Mon, 01 Feb 2021 19:00:36 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ffc1f792-79da-4844-b328-714c441af365
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=zededa.com; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=FpyXLBtamoyeVhEeer0d08+iA40GBj5AQwepqD5vymY=;
        b=DNcqeiKeAEqfTEab4FdDwjxCC6kqIseL0vvMy0ChKBciVsQtSCjvlcR+7fmob5qugm
         uvIvArslRRcbql417lHTQcciEBvGOgY+Cf1pZZvpmKala/e+gILOu9fBb4cGLFhfMGYg
         4/M4hcXM/aiadvU4FGkrIOStfwiCg/xcoaAXDX7mkw16s6q+af5HlQ+6rxRHGxBLzcS5
         BbQEYwyEpWO/3WHB/7sq/vGTa0L8hZePHJktpUjryrV/mtNT3i6UV0o6nyurjDMdAU7z
         BmgWhNWlNcyuguDC0wrWPAq9yB7rZHyaXlJ1KFlc90APGLiO0FMUEqZCy4G+9R1+FsmJ
         ZM/g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=FpyXLBtamoyeVhEeer0d08+iA40GBj5AQwepqD5vymY=;
        b=k5xXIR5ZyQ+1I1CfjQt8VNSU82zahTp2nFRF6GCf5VrucMAU8boxv8M/gJAophvtZK
         lczuKhzSZ0MCckylF4fl7zQnPT8ZdRUuAxctb9XdXkOH6Cdkl9rv8YI11cTNwWpjOOCz
         y9z1i6ycg41gMrsSH9Cgm/VMhwpovCJohkU101yvBjpoGnPIK/o535AQ5ft2SkuSXvII
         OmVzCJLLDDdqZdtNjWsjoPzjUOgFrQT9QDHfGNsSC0nu9FFLJ3yP7YMbsqK+IxGyMx6v
         6E/0j9qAK1+FCitj8eM7Q7W+HMn588sFiNrC7Kj6cw6ajb7AfM3FZJ0ykzFaGNZxUSDn
         ehKQ==
X-Gm-Message-State: AOAM531cFp79ekQvZaRXdw97Jo4o4+6ex18YaL+PIiI991oR4sbi4TcP
	LaDoLcyjl1x3iFL0hry4j2ZUMMG5x473OY7ddFONGA==
X-Google-Smtp-Source: ABdhPJzeqSXzaFj2nnVZLoidEbN1pxnCAP8GlgW3BcL/BlxRxy99sW1qBkxzXV0ZAnq486jGLju7aZDM4Wp0plzH0V0=
X-Received: by 2002:ac8:524f:: with SMTP id y15mr18578044qtn.266.1612234836038;
 Mon, 01 Feb 2021 19:00:36 -0800 (PST)
MIME-Version: 1.0
References: <CABfawhnvgFLU=VmaqmKyf8DNeVcXoXTD2=XpiwnL0OikC1_z4w@mail.gmail.com>
 <YBc+Iaf1CCgXO0Aj@mattapan.m5p.com> <CABfawhncPyDKy_G2Lz7MaEb_ZoPadHef7S7KAj-fbCbQq6YNGA@mail.gmail.com>
 <YBdgf4KKcDn0SCOw@mattapan.m5p.com> <CABfawhmrWX6tO-bESuF5oGec5cLbkHdyjdCGsfwX5AVrwQBsKA@mail.gmail.com>
 <YBeXfWf8lQ2nwMtI@mattapan.m5p.com> <CABfawhn74W88nJz5bCvA=VMxX_QqKv1ZaDQxOEtNZu5Pr8mFag@mail.gmail.com>
 <CABfawhktGwwXNdMrm4uShKSs5MvaUz2hG63wzcDA97z9pGL=Ug@mail.gmail.com>
 <alpine.DEB.2.21.2102011737010.29047@sstabellini-ThinkPad-T480s>
 <CAMmSBy--93yDWZcfhkDHHPxmydvJ4tyymwTzHCC4apObD4983Q@mail.gmail.com> <CABfawhkYmFB9Xc+zmvzA6ispiryfX1EgMWzYTSvHxD9Gmu=TUg@mail.gmail.com>
In-Reply-To: <CABfawhkYmFB9Xc+zmvzA6ispiryfX1EgMWzYTSvHxD9Gmu=TUg@mail.gmail.com>
From: Roman Shaposhnik <roman@zededa.com>
Date: Mon, 1 Feb 2021 19:00:24 -0800
Message-ID: <CAMmSBy8343hi0PBRwtkCqpCsf2QOEcRzVOSo42Efn8oCaCYm+A@mail.gmail.com>
Subject: Re: Xen 4.14.1 on RPI4: device tree generation failed
To: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Elliott Mitchell <ehem+xen@m5p.com>, 
	Xen-devel <xen-devel@lists.xenproject.org>, Julien Grall <julien@xen.org>, 
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Content-Type: text/plain; charset="UTF-8"

On Mon, Feb 1, 2021 at 6:53 PM Tamas K Lengyel
<tamas.k.lengyel@gmail.com> wrote:
>
> On Mon, Feb 1, 2021 at 9:10 PM Roman Shaposhnik <roman@zededa.com> wrote:
> >
> > On Mon, Feb 1, 2021 at 5:40 PM Stefano Stabellini
> > <sstabellini@kernel.org> wrote:
> > >
> > > On Mon, 1 Feb 2021, Tamas K Lengyel wrote:
> > > > On Mon, Feb 1, 2021 at 10:23 AM Tamas K Lengyel
> > > > <tamas.k.lengyel@gmail.com> wrote:
> > > > >
> > > > > On Mon, Feb 1, 2021 at 12:54 AM Elliott Mitchell <ehem+xen@m5p.com> wrote:
> > > > > >
> > > > > > On Sun, Jan 31, 2021 at 10:06:21PM -0500, Tamas K Lengyel wrote:
> > > > > > > With rpi-4.19.y kernel and dtbs
> > > > > > > (cc39f1c9f82f6fe5a437836811d906c709e0661c) Xen boots fine and the
> > > > > > > previous error is not present. I get the boot log on the serial with
> > > > > > > just console=hvc0 from dom0 but the kernel ends up in a panic down the
> > > > > > > line:
> > > > > >
> > > > > > > This seems to have been caused by a monitor being attached to the HDMI
> > > > > > > port; with HDMI unplugged, dom0 boots OK.
> > > > > >
> > > > > > The balance of reports seems to suggest 5.10 is the way to go if you want
> > > > > > graphics on a RP4 with Xen.  Even without Xen, 4.19 is looking rickety on
> > > > > > RP4.
> > > > > >
> > > > > >
> > > > > > On Sun, Jan 31, 2021 at 09:43:13PM -0500, Tamas K Lengyel wrote:
> > > > > > > On Sun, Jan 31, 2021 at 8:59 PM Elliott Mitchell <ehem+xen@m5p.com> wrote:
> > > > > > > >
> > > > > > > > On Sun, Jan 31, 2021 at 06:50:36PM -0500, Tamas K Lengyel wrote:
> > > > > > > > > On Sun, Jan 31, 2021 at 6:33 PM Elliott Mitchell <ehem+undef@m5p.com> wrote:
> > > > > > > > > > Presently the rpixen script is grabbing the RPF's 4.19 branch; dates
> > > > > > > > > > point to it last being touched last year.  Their tree is at
> > > > > > > > > > cc39f1c9f82f6fe5a437836811d906c709e0661c.
> > > > > > > > >
> > > > > > > > > I've moved the Linux branch up to 5.10 because there had been a fair
> > > > > > > > > amount of work that went into fixing Xen on RPI4, which got merged
> > > > > > > > > into 5.9 and I would like to be able to build upstream everything
> > > > > > > > > without the custom patches coming with the rpixen script repo.
> > > > > > > >
> > > > > > > > Please keep track of where your kernel source is checked out, since
> > > > > > > > there was a desire to figure out what was going on with the device-trees.
> > > > > > > >
> > > > > > > >
> > > > > > > > Including "console=hvc0 console=AMA0 console=ttyS0 console=tty0" in the
> > > > > > > > kernel command-line should ensure you get output from the kernel if it
> > > > > > > > manages to start (yes, Linux does support having multiple consoles at the
> > > > > > > > same time).
> > > > > > >
> > > > > > > No output from dom0 received even with the added console options
> > > > > > > (+earlyprintk=xen). The kernel build was from rpi-5.10.y
> > > > > > > c9226080e513181ffb3909a905e9c23b8a6e8f62. I'll check if it still boots
> > > > > > > with 4.19 next.
> > > > > >
> > > > > > So, their current HEAD.  This reads like you've got a problematic kernel
> > > > > > configuration.  What procedure are you following to generate the
> > > > > > configuration you use?
> > > > > >
> > > > > > Using their upstream as a base and then adding the configuration options
> > > > > > for Xen has worked fairly well for me (`make bcm2711_defconfig`,
> > > > > > `make menuconfig`, `make zImage`).
> > > > > >
> > > > > > Notably the options:
> > > > > > CONFIG_PARAVIRT
> > > > > > CONFIG_XEN_DOM0
> > > > > > CONFIG_XEN
> > > > > > CONFIG_XEN_BLKDEV_BACKEND
> > > > > > CONFIG_XEN_NETDEV_BACKEND
> > > > > > CONFIG_HVC_XEN
> > > > > > CONFIG_HVC_XEN_FRONTEND
> > > > > >
> > > > > > Should be set to "y".
> > > > >
> > > > > Yes, these configs are all set the same way for all Linux builds by the script:
> > > > >         make O=.build-arm64 ARCH=arm64
> > > > > CROSS_COMPILE=aarch64-none-linux-gnu- bcm2711_defconfig xen.config
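[The configuration step quoted above can be sanity-checked after the fact. Below is a minimal, hypothetical shell sketch; the helper name and the sample file are illustrative, and in real use the argument would be the generated .config in the build directory, e.g. .build-arm64/.config from the command above.]

```shell
# Editor's sketch (hypothetical helper): confirm the Xen-related options
# listed earlier in the thread ended up enabled in a kernel .config.
check_xen_config() {
    local config="$1" opt missing=0
    for opt in PARAVIRT XEN XEN_DOM0 XEN_BLKDEV_BACKEND \
               XEN_NETDEV_BACKEND HVC_XEN HVC_XEN_FRONTEND; do
        # Each option must appear as a built-in (=y), not a module or unset.
        if ! grep -q "^CONFIG_${opt}=y" "$config"; then
            echo "CONFIG_${opt} missing or not =y"
            missing=1
        fi
    done
    return $missing
}

# Demo against a deliberately incomplete sample .config
# (real use: check_xen_config .build-arm64/.config)
sample=$(mktemp)
printf 'CONFIG_PARAVIRT=y\nCONFIG_XEN=y\nCONFIG_XEN_DOM0=y\n' > "$sample"
check_xen_config "$sample" || echo "some required Xen options are absent"
rm -f "$sample"
```

A check like this catches the common failure mode where a defconfig or config fragment silently drops an option due to unmet Kconfig dependencies.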
> > > > >
> > > > > I tried with both the rpi-5.10.y and rpi-5.9.y, neither boot up as
> > > > > dom0. So far only 4.19 boots.
> > > >
> > > > rpi-5.4.y boots but ends up in yet another different kernel panic:
> > >
> > > That's an interesting error. However, I can tell you that I can boot
> > > rpi-5.9.y just fine (without a monitor attached) with:
> > >
> > > cd linux
> > > KERNEL=kernel7l
> > > make bcm2711_defconfig
> > >
> > > As mentioned here:
> > >
> > > https://www.raspberrypi.org/documentation/linux/kernel/building.md
> > >
> > > and also taking the device tree from arch/arm64/boot/dts/broadcom/.
> >
> > FWIW: I see the same results with stock upstream 5.10.7 effectively
> > following the steps you're doing.
> >
> > However, as I keep saying -- the combination of firmware and u-boot
> > (in my case) is a very sensitive combination -- hence we're relying
> > on a very particular set of bits for there in EVE and will refuse to work
> > with anything else.
> >
> > It may be helpful to take that combination outside of EVE's context and
> > try it out in your experiments Tamas.
>
> Well, I'm giving up on this for now. I ran out of ideas to try and I
> don't see any useful suggestions on how to debug this further. Looks
> like it's super fragile and works only under specific conditions
> that are not well documented - if documented at all.

That's fair -- at the same time, I honestly don't see any practical approach
other than documenting a BOM of versions that are known to work together.

I mean -- at the end of the day, that's why no user (well, no sane user ;-)) takes
a kernel from kernel.org directly -- most of them come through a Linux distro's BOM.

Thanks,
Roman.


From xen-devel-bounces@lists.xenproject.org Tue Feb 02 04:08:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 04:08:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80329.146876 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6mzQ-0006R3-FT; Tue, 02 Feb 2021 04:08:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80329.146876; Tue, 02 Feb 2021 04:08:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6mzQ-0006Qw-Bp; Tue, 02 Feb 2021 04:08:32 +0000
Received: by outflank-mailman (input) for mailman id 80329;
 Tue, 02 Feb 2021 04:08:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6mzO-0006Qo-UJ; Tue, 02 Feb 2021 04:08:30 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6mzO-0002YR-P8; Tue, 02 Feb 2021 04:08:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6mzO-00089d-Im; Tue, 02 Feb 2021 04:08:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l6mzO-000311-IE; Tue, 02 Feb 2021 04:08:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=SP2mRNvxIy+dapOEmJVaZSWo64nLNNsBrBDFQGuR2wI=; b=dF/n8f8V8to89QHkm2SlJ/FxS9
	xn6I1fgUn15ffc6HNbKniUhmG9K0Ihn4THukk8bQNgpYxRFwfNBn0BLNzgH9uOgLM9gomRo1c+MT6
	IMLLHEvtLtNHw6Vzc+olDHNFJ4rPqx0OsHTH2a0HNz0SYiF5Dd3xi7pcPZoejWjGjckA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158930-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 158930: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=ffbb8aa282de262403275f2395d8540818cf576e
X-Osstest-Versions-That:
    xen=9dc687f155a57216b83b17f9cde55dd43e06b0cd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 02 Feb 2021 04:08:30 +0000

flight 158930 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158930/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 158804
 build-arm64-xsm               6 xen-build                fail REGR. vs. 158804
 build-armhf                   6 xen-build                fail REGR. vs. 158804

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  ffbb8aa282de262403275f2395d8540818cf576e
baseline version:
 xen                  9dc687f155a57216b83b17f9cde55dd43e06b0cd

Last test of basis   158804  2021-01-30 04:00:24 Z    3 days
Failing since        158892  2021-02-01 16:00:25 Z    0 days    7 attempts
Testing same since   158897  2021-02-01 19:02:25 Z    0 days    6 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <iwj@xenproject.org>
  Manuel Bouyer <bouyer@netbsd.org>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit ffbb8aa282de262403275f2395d8540818cf576e
Author: Roger Pau Monne <roger.pau@citrix.com>
Date:   Mon Feb 1 16:53:17 2021 +0100

    xenstore: fix build on {Net/Free}BSD
    
    The endian.h header is in sys/ on NetBSD and FreeBSD.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>
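
    [Editor's note: the usual portable pattern for the header split the
    commit describes looks like the sketch below. This is a generic
    illustration, not the actual patch; the use of htobe32()/be32toh()
    is only to demonstrate that the header resolved correctly.]

    ```c
    /* On NetBSD and FreeBSD the endian conversion header lives at
     * <sys/endian.h>; on glibc-based systems it is the top-level
     * <endian.h>.  Select the right one at compile time. */
    #if defined(__NetBSD__) || defined(__FreeBSD__)
    # include <sys/endian.h>
    #else
    # include <endian.h>
    #endif

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t v = 0x11223344u;

        /* Round-trip through the big-endian representation; a no-op on
         * big-endian hosts, a byte swap and swap back on little-endian. */
        assert(be32toh(htobe32(v)) == v);
        printf("endian header found, round-trip ok\n");
        return 0;
    }
    ```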

commit 419cd07895891c6642f29085aee07be72413f08c
Author: Ian Jackson <iwj@xenproject.org>
Date:   Mon Feb 1 15:18:36 2021 +0000

    xenpmd.c: Remove hard tab
    
    bbed98e7cedc "xenpmd.c: use dynamic allocation" had a hard tab.
    I thought we had fixed that and I thought I had checked.
    Remove it now.
    
    Signed-off-by: Ian Jackson <iwj@xenproject.org>

commit bbed98e7cedcd5072671c21605330075740382d3
Author: Manuel Bouyer <bouyer@netbsd.org>
Date:   Sat Jan 30 19:27:10 2021 +0100

    xenpmd.c: use dynamic allocation
    
    On NetBSD, d_name is larger than 256, so file_name[284] may not be large
    enough (and gcc emits a format-truncation error).
    Use asprintf() instead of snprintf() on a static on-stack buffer.
    
    Signed-off-by: Manuel Bouyer <bouyer@netbsd.org>
    Reviewed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>
    
    Plus
    
    define _GNU_SOURCE for asprintf()
    
    Harmless on NetBSD.
    
    Signed-off-by: Manuel Bouyer <bouyer@netbsd.org>
    Reviewed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Feb 02 04:17:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 04:17:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80335.146897 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6n81-0007VN-Dh; Tue, 02 Feb 2021 04:17:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80335.146897; Tue, 02 Feb 2021 04:17:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6n81-0007VG-AE; Tue, 02 Feb 2021 04:17:25 +0000
Received: by outflank-mailman (input) for mailman id 80335;
 Tue, 02 Feb 2021 04:17:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6n80-0007V8-0l; Tue, 02 Feb 2021 04:17:24 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6n7z-0002hN-Qx; Tue, 02 Feb 2021 04:17:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6n7z-00005g-F5; Tue, 02 Feb 2021 04:17:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l6n7z-0001nB-Eb; Tue, 02 Feb 2021 04:17:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=a+1TD2R8zqpo6AN8wxmd1LoA7+m96HIqtQRLKlMSJf0=; b=5y8IDNkZFpAu+cYNI1oEsG03CH
	nPm5Qw3vJ0Onluqie0ah0IUK3QlREVAW524L02l6K7cg5Ol4V9J8Mh1DyebiLVDzBQn9Jho7C5Kn/
	0tu94ifWdTCQBwdahiIj0KT6E5mUWNXkt0Z4U6zRNGD0dxpb4fjlD781GKc0eidy2fvY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [xen-unstable-smoke bisection] complete build-arm64-xsm
Message-Id: <E1l6n7z-0001nB-Eb@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 02 Feb 2021 04:17:23 +0000

branch xen-unstable-smoke
xenbranch xen-unstable-smoke
job build-arm64-xsm
testid xen-build

Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  ffbb8aa282de262403275f2395d8540818cf576e
  Bug not present: 419cd07895891c6642f29085aee07be72413f08c
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/158933/


  commit ffbb8aa282de262403275f2395d8540818cf576e
  Author: Roger Pau Monne <roger.pau@citrix.com>
  Date:   Mon Feb 1 16:53:17 2021 +0100
  
      xenstore: fix build on {Net/Free}BSD
      
      The endian.h header is in sys/ on NetBSD and FreeBSD.
      
      Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
      Acked-by: Ian Jackson <iwj@xenproject.org>
      Release-Acked-by: Ian Jackson <iwj@xenproject.org>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable-smoke/build-arm64-xsm.xen-build.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-unstable-smoke/build-arm64-xsm.xen-build --summary-out=tmp/158933.bisection-summary --basis-template=158804 --blessings=real,real-bisect,real-retry xen-unstable-smoke build-arm64-xsm xen-build
Searching for failure / basis pass:
 158930 fail [host=laxton0] / 158804 [host=rochester1] 158798 ok.
Failure / basis pass flights: 158930 / 158798
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 7ea428895af2840d85c524f0bd11a38aac308308 ffbb8aa282de262403275f2395d8540818cf576e
Basis pass 7ea428895af2840d85c524f0bd11a38aac308308 e402441d4c02908cea9c14392fd7c2831c0456d0
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/qemu-xen.git#7ea428895af2840d85c524f0bd11a38aac308308-7ea428895af2840d85c524f0bd11a38aac308308 git://xenbits.xen.org/xen.git#e402441d4c02908cea9c14392fd7c2831c0456d0-ffbb8aa282de262403275f2395d8540818cf576e
Loaded 5001 nodes in revision graph
Searching for test results:
 158798 pass 7ea428895af2840d85c524f0bd11a38aac308308 e402441d4c02908cea9c14392fd7c2831c0456d0
 158804 [host=rochester1]
 158897 fail 7ea428895af2840d85c524f0bd11a38aac308308 ffbb8aa282de262403275f2395d8540818cf576e
 158900 fail 7ea428895af2840d85c524f0bd11a38aac308308 ffbb8aa282de262403275f2395d8540818cf576e
 158911 pass 7ea428895af2840d85c524f0bd11a38aac308308 e402441d4c02908cea9c14392fd7c2831c0456d0
 158913 fail 7ea428895af2840d85c524f0bd11a38aac308308 ffbb8aa282de262403275f2395d8540818cf576e
 158914 pass 7ea428895af2840d85c524f0bd11a38aac308308 6fe64b150ce519d1952edc5da452e1d143cef4cc
 158909 [host=laxton1]
 158916 pass 7ea428895af2840d85c524f0bd11a38aac308308 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158917 [host=laxton1]
 158918 fail 7ea428895af2840d85c524f0bd11a38aac308308 ffbb8aa282de262403275f2395d8540818cf576e
 158919 [host=laxton1]
 158921 pass 7ea428895af2840d85c524f0bd11a38aac308308 bbed98e7cedcd5072671c21605330075740382d3
 158924 pass 7ea428895af2840d85c524f0bd11a38aac308308 419cd07895891c6642f29085aee07be72413f08c
 158923 fail 7ea428895af2840d85c524f0bd11a38aac308308 ffbb8aa282de262403275f2395d8540818cf576e
 158925 fail 7ea428895af2840d85c524f0bd11a38aac308308 ffbb8aa282de262403275f2395d8540818cf576e
 158926 pass 7ea428895af2840d85c524f0bd11a38aac308308 419cd07895891c6642f29085aee07be72413f08c
 158927 fail 7ea428895af2840d85c524f0bd11a38aac308308 ffbb8aa282de262403275f2395d8540818cf576e
 158930 fail 7ea428895af2840d85c524f0bd11a38aac308308 ffbb8aa282de262403275f2395d8540818cf576e
 158931 pass 7ea428895af2840d85c524f0bd11a38aac308308 419cd07895891c6642f29085aee07be72413f08c
 158933 fail 7ea428895af2840d85c524f0bd11a38aac308308 ffbb8aa282de262403275f2395d8540818cf576e
Searching for interesting versions
 Result found: flight 158798 (pass), for basis pass
 For basis failure, parent search stopping at 7ea428895af2840d85c524f0bd11a38aac308308 419cd07895891c6642f29085aee07be72413f08c, results HASH(0x55b718daeee0) HASH(0x55b718dce0e0) HASH(0x55b718dd0a10) For basis failure, parent search stopping at 7ea428895af2840d85c524f0bd11a38aac308308 bbed98e7cedcd5072671c21605330075740382d3, results HASH(0x55b718dc5018) For basis failure, parent search stopping at 7ea428895af2840d85c524f0bd11a38aac308308 9dc687f155a57216b83b17f9cde55dd43e06b0cd, results HASH(0x\
 55b718dbf2d8) For basis failure, parent search stopping at 7ea428895af2840d85c524f0bd11a38aac308308 6fe64b150ce519d1952edc5da452e1d143cef4cc, results HASH(0x55b718db3698) For basis failure, parent search stopping at 7ea428895af2840d85c524f0bd11a38aac308308 e402441d4c02908cea9c14392fd7c2831c0456d0, results HASH(0x55b718db0be8) HASH(0x55b718dbacc8) Result found: flight 158897 (fail), for basis failure (at ancestor ~988)
 Repro found: flight 158911 (pass), for basis pass
 Repro found: flight 158913 (fail), for basis failure
 0 revisions at 7ea428895af2840d85c524f0bd11a38aac308308 419cd07895891c6642f29085aee07be72413f08c
No revisions left to test, checking graph state.
 Result found: flight 158924 (pass), for last pass
 Result found: flight 158925 (fail), for first failure
 Repro found: flight 158926 (pass), for last pass
 Repro found: flight 158927 (fail), for first failure
 Repro found: flight 158931 (pass), for last pass
 Repro found: flight 158933 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  ffbb8aa282de262403275f2395d8540818cf576e
  Bug not present: 419cd07895891c6642f29085aee07be72413f08c
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/158933/


  commit ffbb8aa282de262403275f2395d8540818cf576e
  Author: Roger Pau Monne <roger.pau@citrix.com>
  Date:   Mon Feb 1 16:53:17 2021 +0100
  
      xenstore: fix build on {Net/Free}BSD
      
      The endian.h header is in sys/ on NetBSD and FreeBSD.
      
      Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
      Acked-by: Ian Jackson <iwj@xenproject.org>
      Release-Acked-by: Ian Jackson <iwj@xenproject.org>

Revision graph left in /home/logs/results/bisect/xen-unstable-smoke/build-arm64-xsm.xen-build.{dot,ps,png,html,svg}.
----------------------------------------
158933: tolerable ALL FAIL

flight 158933 xen-unstable-smoke real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/158933/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 build-arm64-xsm               6 xen-build               fail baseline untested


jobs:
 build-arm64-xsm                                              fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Tue Feb 02 05:59:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 05:59:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80350.146930 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6oiW-0000kB-03; Tue, 02 Feb 2021 05:59:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80350.146930; Tue, 02 Feb 2021 05:59:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6oiV-0000k4-TA; Tue, 02 Feb 2021 05:59:11 +0000
Received: by outflank-mailman (input) for mailman id 80350;
 Tue, 02 Feb 2021 05:59:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TyQM=HE=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1l6oiV-0000jz-6h
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 05:59:11 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 575553cc-0f5a-42f4-9c70-75cb4e06bc2f;
 Tue, 02 Feb 2021 05:59:07 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 1125wsaL019305
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Tue, 2 Feb 2021 00:58:59 -0500 (EST) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 1125wrwD019304;
 Mon, 1 Feb 2021 21:58:53 -0800 (PST) (envelope-from ehem)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 575553cc-0f5a-42f4-9c70-75cb4e06bc2f
Date: Mon, 1 Feb 2021 21:58:53 -0800
From: Elliott Mitchell <ehem+xen@m5p.com>
To: George Dunlap <George.Dunlap@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>,
        Roger Pau Monne <roger.pau@citrix.com>,
        "open list:X86" <xen-devel@lists.xenproject.org>,
        Andrew Cooper <Andrew.Cooper3@citrix.com>
Subject: Re: [PATCH] x86/pod: Do not fragment PoD memory allocations
Message-ID: <YBjqHZIRCzdW7RX7@mattapan.m5p.com>
References: <YBBWj7NO+1HVKEgX@mattapan.m5p.com>
 <f6a75725-edc2-ee2d-0565-da1efae05190@suse.com>
 <YBHJS3NEX5+iEqyd@mattapan.m5p.com>
 <67ef8210-a65f-9d6a-bea1-46ce06d47fb7@citrix.com>
 <YBHo/gscAfcAZqst@mattapan.m5p.com>
 <44450edc-9a64-8a6e-e8d3-3a3f726a96bc@suse.com>
 <YBMB1VUhYd3RUuDO@mattapan.m5p.com>
 <DC18947E-BC67-4BF8-A889-04B812DACACC@citrix.com>
 <YBbzXQt2GBAvpvgQ@mattapan.m5p.com>
 <E8806231-28EE-4618-B6A5-1B50813BF4B1@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <E8806231-28EE-4618-B6A5-1B50813BF4B1@citrix.com>
X-Spam-Status: No, score=0.4 required=10.0 tests=KHOP_HELO_FCRDNS autolearn=no
	autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

On Mon, Feb 01, 2021 at 10:35:15AM +0000, George Dunlap wrote:
> 
> 
> > On Jan 31, 2021, at 6:13 PM, Elliott Mitchell <ehem+xen@m5p.com> wrote:
> > 
> > On Thu, Jan 28, 2021 at 10:42:27PM +0000, George Dunlap wrote:
> >> 
> >>> On Jan 28, 2021, at 6:26 PM, Elliott Mitchell <ehem+xen@m5p.com> wrote:
> >>> type = "hvm"
> >>> memory = 1024
> >>> maxmem = 1073741824
> >>> 
> >>> I suspect maxmem > free Xen memory may be sufficient.  The instances I
> >>> can be certain of have been maxmem = total host memory *7.
> >> 
> >> Can you include your Xen version and dom0 command-line?
> > 
> >> This is on staging-4.14 from a month or two ago (i.e., what I happened to have on a random test  box), and `dom0_mem=1024M,max:1024M` in my command-line.  That rune will give dom0 only 1GiB of RAM, but also prevent it from auto-ballooning down to free up memory for the guest.
> >> 
> > 
> > As this is a server, not a development target, Debian's build of 4.11 is
> > in use.  Your domain 0 memory allocation is extremely generous compared
> > to mine.  One thing which is on the command-line though is
> > "watchdog=true".
> 
> staging-4.14 is just the stable 4.14 branch which our CI loop tests before pushing to stable-4.14, which is essentially tagged 3 times a year for point releases.  It's quite stable.  I'll give 4.11 a try if I get a chance.
> 
> It's not clear from your response -- are you allocating a fixed amount to dom0?  How much is it?  In fact, probably the simplest thing to do would be to attach the output of `xl info` and `xl dmesg`; that will save a lot of potential future back-and-forth.
> 
> 1GiB isn't particularly generous if you're running a large number of guests.  My understanding is that XenServer now defaults to 4GiB of RAM for dom0.
> 

I guess it comes down to setup: how careful one is at pruning unneeded
services, and whether one takes steps to ensure there aren't extra qemu
processes hanging around (avoiding HVM VMs in most cases).


release                : 4.19.160-2
version                : #5 SMP Sat Dec 5 09:58:41 PST 2020
machine                : x86_64
nr_cpus                : 8
max_cpu_id             : 7
nr_nodes               : 1
cores_per_socket       : 4
threads_per_core       : 2
cpu_mhz                : 4018.086
hw_caps                : 178bf3ff:b698320b:2e500800:0069bfff:00000000:00000008:00000000:00000500
virt_caps              : hvm
total_memory           : 16110
free_memory            : 781
sharing_freed_memory   : 0
sharing_used_memory    : 0
outstanding_claims     : 0
free_cpus              : 0
xen_major              : 4
xen_minor              : 11
xen_extra              : .4
xen_version            : 4.11.4
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64 
xen_scheduler          : credit
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          : 
xen_commandline        : placeholder watchdog=true loglvl=info iommu=verbose cpuidle dom0_mem=384M,max:640M dom0_max_vcpus=8
cc_compiler            : gcc (Debian 8.3.0-6) 8.3.0
cc_compile_by          : pkg-xen-devel
cc_compile_domain      : lists.alioth.debian.org
cc_compile_date        : Fri Dec 11 21:33:51 UTC 2020
build_id               : 6d8e0fa3ddb825695eb6c6832631b4fa2331fe41
xend_config_format     : 4


> > I've got 3 candidates which presently concern me:
> > 
> > 1> There is a limited range of maxmem values where this occurs.  Perhaps
> > 1TB is too high on your machine for the problem to reproduce.  As
> > previously stated my sample configuration has maxmem being roughly 7
> > times actual machine memory.
> 
> In fact I did a number of binary-search-style experiments to try to find out boundary behavior.  I don't think I did 7x memory, but I certainly did 2x or 3x host memory, and the exact number you gave that caused you problems.  In all cases for me, it either worked or failed with a cryptic error message (the specific message depending on whether I had fixed dom0 memory or autoballooned memory).
> 

Hmm, I may have to mem-set Dom0 to its maximum and then retry the crash
configuration with maxmem just greater than machine memory...  I do have
that downtime available due to the kernel update...


> > 2> Between issuing the `xl create` command and the machine rebooting a
> > few moments of slow response have been observed.  Perhaps the memory
> > allocator loop is hogging processor cores long enough for the watchdog to
> > trigger?
> 
> I don't know the balloon driver very well, but I'd hope it yielded pretty regularly.  It seems more likely to me that your dom0 is swapping due to low memory / struggling with having to work with no file cache.  Or the OOM killer is doing its calculation trying to figure out which process to shoot?
> 

I know which process it shoots.  One ideal is to set memory just high
enough that the OOM-killer doesn't trigger.  Under this approach you
*want* to use some swap, since some portions of process address space sit
idle 99.99% of the time and needn't waste RAM.  That gets a bit awkward
now that SSDs, for which writes are more precious, are taking over.
The difficulty then becomes some of Xen's odd Dom0 memory behavior.

According to `xl list` it isn't possible to set Dom0's memory to its maximum.
I've been theorizing this might be due to memory used for DMA needing to
be inside the maxmem region while not being counted by `xl list`...


> > 3> Perhaps one of the patches on Debian broke things?  This seems
> > unlikely since nearly all of Debian's patches are either strictly for
> > packaging or else picks from Xen's main branch, but this is certainly
> > possible.
> 
> Indeed, I'd consider that unlikely.  Some things I'd consider more likely to cause the difference:
> 
> 1. The amount of host memory (my test box had only 6GiB)
> 
> 2. The amount of memory assigned to dom0 

I consider this unlikely.  Due to a downtime I got a chance to try this
issue from the console and *nothing* appeared.  If there was a memory
issue with domain 0 then I would have expected messages from the
OOM-killer before restart.


> 3. The number of other VMs running in the background

During that downtime other VMs had been saved to storage (I didn't want
to lose their runtimes).  As such all memory was available to domain 0
and the problematic VM configuration.


> 4. A difference in the version of Linux (I'm also running Debian, but debian-testing)
> 

Not impossible, but seems improbable to me.  This has also been observed
when domain 0 had a 4.9 kernel.  Perhaps 5.x includes some fix which
works around the issue, but I'm very doubtful of this.

> 5. A bug in 4.11 that was fixed by 4.14.

This isn't confined to 4.11.  I observed this with 4.8 and I recall
running into suspiciously similar things with 4.4.  The bug may well have
been fixed between 4.11 and 4.14 though.


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Tue Feb 02 06:05:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 06:05:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80352.146942 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6ooA-0001mt-PU; Tue, 02 Feb 2021 06:05:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80352.146942; Tue, 02 Feb 2021 06:05:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6ooA-0001mm-MU; Tue, 02 Feb 2021 06:05:02 +0000
Received: by outflank-mailman (input) for mailman id 80352;
 Tue, 02 Feb 2021 06:05:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6oo9-0001me-EG; Tue, 02 Feb 2021 06:05:01 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6oo9-0006D0-7R; Tue, 02 Feb 2021 06:05:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6oo8-0004aq-WF; Tue, 02 Feb 2021 06:05:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l6oo8-00068i-Vj; Tue, 02 Feb 2021 06:05:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ANU+ItRc3YblHBUc48DcLgHSQg78UmQ8PnjkfQDuKjU=; b=rRk0/R1Wa8J/gVOa6nph9FZwWq
	a1Y1SgiNqZl4zpkg8zjPq5ND7ZR9hxI1JzVH7/MlHfKRoJyLVlRxZwJCIm4Hgo4BkzMjVNNFlzq6w
	BMHqVD18gojfNoCWCOp2NWDQ/8u6f0EkQBg7JtI0fGQwvNvunX2bAUvI+pt+iyyXTew4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158937-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 158937: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=ffbb8aa282de262403275f2395d8540818cf576e
X-Osstest-Versions-That:
    xen=9dc687f155a57216b83b17f9cde55dd43e06b0cd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 02 Feb 2021 06:05:00 +0000

flight 158937 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158937/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 158804
 build-arm64-xsm               6 xen-build                fail REGR. vs. 158804
 build-armhf                   6 xen-build                fail REGR. vs. 158804

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  ffbb8aa282de262403275f2395d8540818cf576e
baseline version:
 xen                  9dc687f155a57216b83b17f9cde55dd43e06b0cd

Last test of basis   158804  2021-01-30 04:00:24 Z    3 days
Failing since        158892  2021-02-01 16:00:25 Z    0 days    8 attempts
Testing same since   158897  2021-02-01 19:02:25 Z    0 days    7 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <iwj@xenproject.org>
  Manuel Bouyer <bouyer@netbsd.org>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit ffbb8aa282de262403275f2395d8540818cf576e
Author: Roger Pau Monne <roger.pau@citrix.com>
Date:   Mon Feb 1 16:53:17 2021 +0100

    xenstore: fix build on {Net/Free}BSD
    
    The endian.h header is in sys/ on NetBSD and FreeBSD.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>
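
    [Editorial sketch, not part of the commit: the portability pattern this
    fix relies on can be illustrated as below. The helper name roundtrip32
    is hypothetical.]

    ```c
    /* NetBSD and FreeBSD ship the byte-order helpers as <sys/endian.h>,
     * while glibc ships them as <endian.h>; picking the right header per
     * platform is the whole fix. */
    #if defined(__NetBSD__) || defined(__FreeBSD__)
    #include <sys/endian.h>
    #else
    #include <endian.h>
    #endif
    #include <stdint.h>

    /* Round-trip a value through the little-endian conversion helpers. */
    static uint32_t roundtrip32(uint32_t v)
    {
        return le32toh(htole32(v));
    }
    ```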

commit 419cd07895891c6642f29085aee07be72413f08c
Author: Ian Jackson <iwj@xenproject.org>
Date:   Mon Feb 1 15:18:36 2021 +0000

    xenpmd.c: Remove hard tab
    
    bbed98e7cedc "xenpmd.c: use dynamic allocation" had a hard tab.
    I thought we had fixed that and I thought I had checked.
    Remove it now.
    
    Signed-off-by: Ian Jackson <iwj@xenproject.org>

commit bbed98e7cedcd5072671c21605330075740382d3
Author: Manuel Bouyer <bouyer@netbsd.org>
Date:   Sat Jan 30 19:27:10 2021 +0100

    xenpmd.c: use dynamic allocation
    
    On NetBSD, d_name is larger than 256, so file_name[284] may not be large
    enough (and gcc emits a format-truncation error).
    Use asprintf() instead of snprintf() on a static on-stack buffer.
    
    Signed-off-by: Manuel Bouyer <bouyer@netbsd.org>
    Reviewed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>
    
    Plus
    
    define _GNU_SOURCE for asprintf()
    
    Harmless on NetBSD.
    
    Signed-off-by: Manuel Bouyer <bouyer@netbsd.org>
    Reviewed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>
(qemu changes not included)
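
[Editorial sketch, not the actual xenpmd.c change: the snprintf-to-asprintf
pattern the commit above describes looks roughly like this. The function
name make_path and the example strings are illustrative.]

```c
#define _GNU_SOURCE   /* asprintf() is a GNU/BSD extension on glibc */
#include <stdio.h>
#include <stdlib.h>

/* Build "<dir>/<name>" in a heap buffer sized by asprintf(), instead of
 * snprintf() into a fixed on-stack char file_name[N], which truncates
 * (and trips gcc's format-truncation warning) on platforms where
 * d_name can be larger than 256 bytes.
 * Returns a malloc'ed string the caller must free(), or NULL on error. */
static char *make_path(const char *dir, const char *name)
{
    char *path = NULL;

    if (asprintf(&path, "%s/%s", dir, name) < 0)
        return NULL;
    return path;
}
```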


From xen-devel-bounces@lists.xenproject.org Tue Feb 02 06:50:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 06:50:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80362.146963 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6pW2-0006fz-4z; Tue, 02 Feb 2021 06:50:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80362.146963; Tue, 02 Feb 2021 06:50:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6pW2-0006fs-1r; Tue, 02 Feb 2021 06:50:22 +0000
Received: by outflank-mailman (input) for mailman id 80362;
 Tue, 02 Feb 2021 06:50:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6pW0-0006fk-Jg; Tue, 02 Feb 2021 06:50:20 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6pW0-0006zK-DR; Tue, 02 Feb 2021 06:50:20 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6pW0-00062B-0n; Tue, 02 Feb 2021 06:50:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l6pW0-0004H1-0J; Tue, 02 Feb 2021 06:50:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=OqAMnLLXd7mA9U57uxpg0fGpVcYPThBrQfnB15gIx1Q=; b=6h2AF+HBj5cH+SXNBgfNP59grK
	l5vivUzsAAEWeew6an7OyMfDM2uv/gdmNO/9MNTXivzd+bpVqmpJUbDN9aQD4eZk5gB4g48xQxnLA
	0DDm64n3jxbGexDBVoT9x8adOo/LoE6F5zB2VtlPgx1v9ZkriqEYxcA/GueEd6WJpaLA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158901-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 158901: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    qemu-mainline:test-arm64-arm64-xl-credit1:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:guest-start:fail:heisenbug
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start:fail:heisenbug
    qemu-mainline:test-arm64-arm64-xl-credit1:host-install(5):broken:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=74208cd252c5da9d867270a178799abd802b9338
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 02 Feb 2021 06:50:20 +0000

flight 158901 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158901/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-credit1     <job status>                 broken
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd 17 guest-start/debian.repeat fail in 158879 REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-libvirt-raw 13 guest-start                fail pass in 158879
 test-armhf-armhf-xl-vhd      13 guest-start                fail pass in 158879

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit1   5 host-install(5)       broken blocked in 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail in 158879 like 152631
 test-arm64-arm64-xl-credit1 15 migrate-support-check fail in 158879 never pass
 test-arm64-arm64-xl-credit1 16 saverestore-support-check fail in 158879 never pass
 test-armhf-armhf-xl-vhd     14 migrate-support-check fail in 158879 never pass
 test-armhf-armhf-xl-vhd 15 saverestore-support-check fail in 158879 never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check fail in 158879 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                74208cd252c5da9d867270a178799abd802b9338
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  165 days
Failing since        152659  2020-08-21 14:07:39 Z  164 days  334 attempts
Testing same since   158816  2021-01-30 13:16:09 Z    2 days    5 attempts

------------------------------------------------------------
372 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  broken  
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-arm64-arm64-xl-credit1 broken
broken-step test-arm64-arm64-xl-credit1 host-install(5)

Not pushing.

(No revision log; it would be 98359 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Feb 02 07:09:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 07:09:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80367.146977 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6poJ-0007zs-Rv; Tue, 02 Feb 2021 07:09:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80367.146977; Tue, 02 Feb 2021 07:09:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6poJ-0007zl-OQ; Tue, 02 Feb 2021 07:09:15 +0000
Received: by outflank-mailman (input) for mailman id 80367;
 Tue, 02 Feb 2021 07:09:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=OBIp=HE=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1l6poI-0007zg-Mh
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 07:09:14 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 45f48be8-72fb-4211-ad30-f181e052cf47;
 Tue, 02 Feb 2021 07:09:12 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8D644ACB7;
 Tue,  2 Feb 2021 07:09:11 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 45f48be8-72fb-4211-ad30-f181e052cf47
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612249751; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=4dh4Q5FYrIakWqXKf5ONncovH+2kNiot2gUc6eT6xWE=;
	b=lzZK6q12mp2TXt9vhhxOCyeYHd426NIcClX9cqCYvXkLflYV5H3BLCbHmpRC1rzHGTxIG5
	cal8nxj33J+iuRfs5+wNOJ96XB48S8UaRpHcThlX/dut9HBEiTUZdUuVbSYmakGa9zG02z
	/QRtPVlr2YuZ2dUyCYOmNPTliHFmsgo=
To: Igor Druzhinin <igor.druzhinin@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>
References: <c7dea44e-8e6d-1b98-b2a4-7e9623a8e3fb@citrix.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: dom0 crash in xenvif_rx_ring_slots_available
Message-ID: <15253c8c-6391-bec8-b161-d2d6e8d067e2@suse.com>
Date: Tue, 2 Feb 2021 08:09:10 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
MIME-Version: 1.0
In-Reply-To: <c7dea44e-8e6d-1b98-b2a4-7e9623a8e3fb@citrix.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="W4r86Cqr1toa4syKIlnsJdTPSQVuZcZ0x"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--W4r86Cqr1toa4syKIlnsJdTPSQVuZcZ0x
Content-Type: multipart/mixed; boundary="3ZmydOoyo9936MThrob9EaQIgXO87Ol1j";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Igor Druzhinin <igor.druzhinin@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>
Message-ID: <15253c8c-6391-bec8-b161-d2d6e8d067e2@suse.com>
Subject: Re: dom0 crash in xenvif_rx_ring_slots_available
References: <c7dea44e-8e6d-1b98-b2a4-7e9623a8e3fb@citrix.com>
In-Reply-To: <c7dea44e-8e6d-1b98-b2a4-7e9623a8e3fb@citrix.com>

--3ZmydOoyo9936MThrob9EaQIgXO87Ol1j
Content-Type: multipart/mixed;
 boundary="------------C34BD29822F32F2A0450E7D8"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------C34BD29822F32F2A0450E7D8
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 02.02.21 03:29, Igor Druzhinin wrote:
> Juergen,
> 
> We've got a crash report from one of our customers (see below) running a 4.4 kernel.
> The functions seem to be the new ones that came with XSA-332, and nothing like that has been
> reported before in their cloud. It appears there is some use-after-free happening on skb
> in the following code fragment:
> 
> static bool xenvif_rx_ring_slots_available(struct xenvif_queue *queue)
> {
>          RING_IDX prod, cons;
>          struct sk_buff *skb;
>          int needed;
> 
>          skb = skb_peek(&queue->rx_queue);
>          if (!skb)
>                  return false;
> 
>          needed = DIV_ROUND_UP(skb->len, XEN_PAGE_SIZE);
>          if (skb_is_gso(skb))  <== skb points to 0-ed memory
>                  needed++;
> 
> Has something similar been reported before? Any ideas?

I haven't seen that before, but I think your analysis regarding use
after free is correct. xenvif_rx_ring_slots_available() is now called
from the interrupt handler, too, so it needs to take the queue lock
before peeking at the queue.

Patch is coming.
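
[Editorial sketch of the fix described above, not the actual kernel patch:
the kernel code would take the queue's spinlock (e.g. spin_lock_irqsave on
rx_queue.lock); this userspace analog uses a pthread mutex, and all names
here are illustrative.]

```c
#include <pthread.h>
#include <stddef.h>

/* Once the "are enough ring slots available?" check can also run from
 * interrupt context, the peek must happen inside the critical section,
 * so the head element cannot be dequeued and freed between the peek and
 * the read of its length. */

#define PAGE_SZ 4096

struct pkt {
    struct pkt *next;
    size_t len;
};

struct demo_queue {
    pthread_mutex_t lock;
    struct pkt *head;
};

static void demo_queue_init(struct demo_queue *q)
{
    pthread_mutex_init(&q->lock, NULL);
    q->head = NULL;
}

static void demo_enqueue(struct demo_queue *q, struct pkt *p)
{
    pthread_mutex_lock(&q->lock);
    p->next = q->head;
    q->head = p;
    pthread_mutex_unlock(&q->lock);
}

/* Slots needed by the head packet, or 0 if the queue is empty.  Both
 * the peek and the length read are performed under the lock. */
static int locked_slots_needed(struct demo_queue *q)
{
    int needed = 0;

    pthread_mutex_lock(&q->lock);
    if (q->head)
        needed = (int)((q->head->len + PAGE_SZ - 1) / PAGE_SZ);
    pthread_mutex_unlock(&q->lock);
    return needed;
}
```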


Juergen


--------------C34BD29822F32F2A0450E7D8--

--3ZmydOoyo9936MThrob9EaQIgXO87Ol1j--

--W4r86Cqr1toa4syKIlnsJdTPSQVuZcZ0x
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmAY+pYFAwAAAAAACgkQsN6d1ii/Ey9h
pAf+JqP0+7sX2n9oC2HnoHZfOjwJ5KxRa0Cl4OeyoArH3Hf8to+kEgJ+YEDmFKNrC7LTY0HlYIDZ
reHokjQ5gcz2CzpCZNWZS0fhQplxSxHaBEez+JrLJUcjS3NZYZBM5MC0OOleVD7WCBHPMyuKC9FJ
PpfPaEj+VJ+pHi+JSGtvw6fDKVLvXuf283LmGDUD/HXPQIJuSlPKxyRu0goGXfK+uMFnm78kj6Rq
xsqceGi67vPa3DPiBiwlGqczrMzRMcZkcM1uYOtdSQGSBfTMSJl35lPm2bKO7Pw21Bg2/VxTDGXi
XTJHgvlUZxVMR2CqHyeCbb2P4CQospPyNO7Dc+EY5g==
=UG/Z
-----END PGP SIGNATURE-----

--W4r86Cqr1toa4syKIlnsJdTPSQVuZcZ0x--


From xen-devel-bounces@lists.xenproject.org Tue Feb 02 07:09:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 07:09:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80368.146990 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6pom-0008Ef-8t; Tue, 02 Feb 2021 07:09:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80368.146990; Tue, 02 Feb 2021 07:09:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6pom-0008EY-5G; Tue, 02 Feb 2021 07:09:44 +0000
Received: by outflank-mailman (input) for mailman id 80368;
 Tue, 02 Feb 2021 07:09:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=OBIp=HE=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1l6pok-0008EL-Ro
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 07:09:42 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4fcbd9d4-3e3a-4c79-9329-a3a3f6d62f12;
 Tue, 02 Feb 2021 07:09:42 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 46D2EACB7;
 Tue,  2 Feb 2021 07:09:41 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4fcbd9d4-3e3a-4c79-9329-a3a3f6d62f12
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612249781; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=1tpaqvOitmoTdUOKCltTkXQMrQMduhfR/V/K0TJ91jk=;
	b=pbaIptIDBcZvfobat6HNoXje6mR3kf8+MuWpibPjXaHBmoQvt3LBZ1S4k+WfN+6v3yXqqG
	CnLh7G8uXWy5d/2yQhTslehE4QmbeheYR1UmRM7pfxMgd1ZiuvafUH6s9i4k7bnpbVZHHm
	wGiaWxu3itzFTHNe0z01xg0UqKte3eQ=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wei.liu@kernel.org>,
	Paul Durrant <paul@xen.org>,
	"David S. Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>,
	Igor Druzhinin <igor.druzhinin@citrix.com>,
	stable@vger.kernel.org
Subject: [PATCH] xen/netback: avoid race in xenvif_rx_ring_slots_available()
Date: Tue,  2 Feb 2021 08:09:38 +0100
Message-Id: <20210202070938.7863-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Since commit 23025393dbeb3b8b3 ("xen/netback: use lateeoi irq binding")
xenvif_rx_ring_slots_available() is no longer called only from the rx
queue kernel thread, so it needs to access the rx queue with the
associated queue lock held.

Reported-by: Igor Druzhinin <igor.druzhinin@citrix.com>
Fixes: 23025393dbeb3b8b3 ("xen/netback: use lateeoi irq binding")
Cc: stable@vger.kernel.org
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 drivers/net/xen-netback/rx.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/net/xen-netback/rx.c b/drivers/net/xen-netback/rx.c
index b8febe1d1bfd..accc991d153f 100644
--- a/drivers/net/xen-netback/rx.c
+++ b/drivers/net/xen-netback/rx.c
@@ -38,10 +38,15 @@ static bool xenvif_rx_ring_slots_available(struct xenvif_queue *queue)
 	RING_IDX prod, cons;
 	struct sk_buff *skb;
 	int needed;
+	unsigned long flags;
+
+	spin_lock_irqsave(&queue->rx_queue.lock, flags);
 
 	skb = skb_peek(&queue->rx_queue);
-	if (!skb)
+	if (!skb) {
+		spin_unlock_irqrestore(&queue->rx_queue.lock, flags);
 		return false;
+	}
 
 	needed = DIV_ROUND_UP(skb->len, XEN_PAGE_SIZE);
 	if (skb_is_gso(skb))
@@ -49,6 +54,8 @@ static bool xenvif_rx_ring_slots_available(struct xenvif_queue *queue)
 	if (skb->sw_hash)
 		needed++;
 
+	spin_unlock_irqrestore(&queue->rx_queue.lock, flags);
+
 	do {
 		prod = queue->rx.sring->req_prod;
 		cons = queue->rx.req_cons;
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Feb 02 07:23:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 07:23:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80373.147008 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6q2C-0001gl-IY; Tue, 02 Feb 2021 07:23:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80373.147008; Tue, 02 Feb 2021 07:23:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6q2C-0001ge-FZ; Tue, 02 Feb 2021 07:23:36 +0000
Received: by outflank-mailman (input) for mailman id 80373;
 Tue, 02 Feb 2021 07:23:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6q2B-0001gW-3P; Tue, 02 Feb 2021 07:23:35 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6q2A-0007Xa-Vn; Tue, 02 Feb 2021 07:23:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6q2A-0007o3-O0; Tue, 02 Feb 2021 07:23:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l6q2A-0000rJ-NT; Tue, 02 Feb 2021 07:23:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=tFQYqLIgHaetpsUJWtNLnq9Zxca8zDqUbFDlufu7IdQ=; b=QC6PmlL3YtksvFTxt2xMCFStwa
	x/kUp4u7KQst40k5AFbQX/3zfoNchkYr8D0aujpmSo70UDLs8yL/+SJHKU9Q1aolRNZY4gOWE8KFx
	jg9p5+E2Dkq8LmWFn09j+z1Zqh5L5XHLSjZlz2jWATISX4ea6g3plNlOl1pR4yNU0w+c=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158941-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 158941: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=ffbb8aa282de262403275f2395d8540818cf576e
X-Osstest-Versions-That:
    xen=9dc687f155a57216b83b17f9cde55dd43e06b0cd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 02 Feb 2021 07:23:34 +0000

flight 158941 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158941/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 158804
 build-arm64-xsm               6 xen-build                fail REGR. vs. 158804
 build-armhf                   6 xen-build                fail REGR. vs. 158804

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  ffbb8aa282de262403275f2395d8540818cf576e
baseline version:
 xen                  9dc687f155a57216b83b17f9cde55dd43e06b0cd

Last test of basis   158804  2021-01-30 04:00:24 Z    3 days
Failing since        158892  2021-02-01 16:00:25 Z    0 days    9 attempts
Testing same since   158897  2021-02-01 19:02:25 Z    0 days    8 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <iwj@xenproject.org>
  Manuel Bouyer <bouyer@netbsd.org>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit ffbb8aa282de262403275f2395d8540818cf576e
Author: Roger Pau Monne <roger.pau@citrix.com>
Date:   Mon Feb 1 16:53:17 2021 +0100

    xenstore: fix build on {Net/Free}BSD
    
    The endian.h header is in sys/ on NetBSD and FreeBSD.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit 419cd07895891c6642f29085aee07be72413f08c
Author: Ian Jackson <iwj@xenproject.org>
Date:   Mon Feb 1 15:18:36 2021 +0000

    xenpmd.c: Remove hard tab
    
    bbed98e7cedc "xenpmd.c: use dynamic allocation" had a hard tab.
    I thought we had fixed that and I thought I had checked.
    Remove it now.
    
    Signed-off-by: Ian Jackson <iwj@xenproject.org>

commit bbed98e7cedcd5072671c21605330075740382d3
Author: Manuel Bouyer <bouyer@netbsd.org>
Date:   Sat Jan 30 19:27:10 2021 +0100

    xenpmd.c: use dynamic allocation
    
    On NetBSD, d_name is larger than 256, so file_name[284] may not be large
    enough (and gcc emits a format-truncation error).
    Use asprintf() instead of snprintf() on a static on-stack buffer.
    
    Signed-off-by: Manuel Bouyer <bouyer@netbsd.org>
    Reviewed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>
    
    Plus
    
    define GNU_SOURCE for asprintf()
    
    Harmless on NetBSD.
    
    Signed-off-by: Manuel Bouyer <bouyer@netbsd.org>
    Reviewed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Feb 02 07:38:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 07:38:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80377.147023 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6qG2-0002qr-TU; Tue, 02 Feb 2021 07:37:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80377.147023; Tue, 02 Feb 2021 07:37:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6qG2-0002qk-Pr; Tue, 02 Feb 2021 07:37:54 +0000
Received: by outflank-mailman (input) for mailman id 80377;
 Tue, 02 Feb 2021 07:37:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6qG1-0002qc-Ac; Tue, 02 Feb 2021 07:37:53 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6qG1-0007oV-2b; Tue, 02 Feb 2021 07:37:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6qG0-0008Qp-Rp; Tue, 02 Feb 2021 07:37:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l6qG0-0005fA-RH; Tue, 02 Feb 2021 07:37:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Vk0hctR72txMyq9OoYV/DGzEG1YGBF1Vkdr7cNjqp6Y=; b=AmVHSNwgKIm5cyk++ITLSHrovN
	EDtxURbf5WUv9sZmbVI6Ou1YzqbldsCBEESYjArXiB+zXd0bdrzut+6k7oS6Kh8woi6fALvg/fdgY
	O6fkHeEpHqvrHOQaO3Sr2dwOclFqlWMzE2tb8mMRdqO2v9tfB4BsaX7wkOK/dcTWEYjY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158935-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 158935: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=31d1835428004add9f58b0ac03715263ad869858
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 02 Feb 2021 07:37:52 +0000

flight 158935 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158935/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              31d1835428004add9f58b0ac03715263ad869858
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  207 days
Failing since        151818  2020-07-11 04:18:52 Z  206 days  201 attempts
Testing same since   158935  2021-02-02 04:19:51 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 38658 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Feb 02 07:47:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 07:47:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80384.147038 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6qPa-0003vX-Sl; Tue, 02 Feb 2021 07:47:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80384.147038; Tue, 02 Feb 2021 07:47:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6qPa-0003vQ-PN; Tue, 02 Feb 2021 07:47:46 +0000
Received: by outflank-mailman (input) for mailman id 80384;
 Tue, 02 Feb 2021 07:47:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IB5w=HE=unikie.com=jukka.kaartinen@srs-us1.protection.inumbo.net>)
 id 1l6qPY-0003vL-O4
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 07:47:44 +0000
Received: from mail-lj1-x236.google.com (unknown [2a00:1450:4864:20::236])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dacb6bd1-760e-41b4-b31f-d761e7fd7628;
 Tue, 02 Feb 2021 07:47:43 +0000 (UTC)
Received: by mail-lj1-x236.google.com with SMTP id r14so22883101ljc.2
 for <xen-devel@lists.xenproject.org>; Mon, 01 Feb 2021 23:47:43 -0800 (PST)
Received: from [192.168.1.76] (91-153-193-91.elisa-laajakaista.fi.
 [91.153.193.91])
 by smtp.gmail.com with ESMTPSA id z2sm3178846lfd.142.2021.02.01.23.47.41
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 01 Feb 2021 23:47:41 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dacb6bd1-760e-41b4-b31f-d761e7fd7628
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=unikie-com.20150623.gappssmtp.com; s=20150623;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=MISwNRhozcbsn1vZzO9Kicf25LlHf5cmWJcAyf8y+X0=;
        b=sP9KRokis3IDfBDnbQbbi3N+ppG/Dk/pm4+70EfwkR8RFp3whk6TCEWMrHUKz6vBvx
         +PeKk2U6Y405E61CNZg9x0xZAUOpcWlyWLF6d6JiGxARRZl1Ap8+DP40KqbJlKF2yE34
         EmWyN/xWWBTvsk3SoTd71O0QmKS0f8prcHUR8qUUVpUFj/5Bes8uKmzFQU4CTw20rDfz
         xJA5ywbC2XvU6ezK5qugYTH9owJ+aicNDOOumRiGjRVVjPmhFCxRjMIogq16byC3SIvz
         OxGC/RHpCNjCi7p4b8g1knszUPOGPRGfUbOquT1OqXhq1SL9jG9sdeVQqIw1RuGlkaR/
         2/wA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=MISwNRhozcbsn1vZzO9Kicf25LlHf5cmWJcAyf8y+X0=;
        b=RR7ePJMGlAw7KeQJLeCVxDe6bH++6NvYsE6HRmNWQNuLP3G+Jny6mUnt7jDRemKN5z
         qk+KRSNnPYh3Gxzpfy4B54oScyqM8mtJKYZhSyyddRIxCuvXJCAAw40UeMMiEFh09CIw
         9n5Y3oGVKBRWygUyDQoeOPvppEiHfaHKmmuBTJxBvpz/8Euf1c8zEpw/2L0b8RKlaU0e
         wJfAqPQQ3WCYAa04pElmka3cpMufydzNZFxNk6CRxVWbLbW6dvOXz+ZchytSHUqVWRu6
         Kb80zI2S9Tqt3Ukh+QcQsgm9ZPyySQ8RilTOhFSxwyejEI280ItXpYNGMn/MnUDmHY8Z
         +I9g==
X-Gm-Message-State: AOAM530Ij4MZaRL3QixkF5Ac4uJZLES4E7nFQX8l5S/t2qtf7oXVj/Iu
	oN72yvMNVwWpkkHe0NkxC8ZV2Q==
X-Google-Smtp-Source: ABdhPJwqYKKMQCLLe3ZqxcIQ6ido/lqBX1NWN+cCPEiFR1wvzfou1Pxk7X1ADz0OjKeoZtFnMQIoGw==
X-Received: by 2002:a2e:988d:: with SMTP id b13mr11849823ljj.176.1612252062382;
        Mon, 01 Feb 2021 23:47:42 -0800 (PST)
Subject: Re: Question about xen and Rasp 4B
To: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Roman Shaposhnik <roman@zededa.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>
References: <CAFnJQOouOUox_kBB66sfSuriUiUSvmhSjPxucJjgrB9DdLOwkg@mail.gmail.com>
 <alpine.DEB.2.21.2101221449480.14612@sstabellini-ThinkPad-T480s>
 <CAFnJQOqoqj6mWwR61ZsZj1JxRrdisFtH_87YXCeW619GM+L21Q@mail.gmail.com>
 <alpine.DEB.2.21.2101251646470.20638@sstabellini-ThinkPad-T480s>
 <CAFnJQOpuehAWde5Ta4ud9CGufwZ-K+=60epzSdKc_DnS75O2iA@mail.gmail.com>
 <alpine.DEB.2.21.2101261149210.2568@sstabellini-ThinkPad-T480s>
 <CAFnJQOpgRM-3_aZsnv36w+aQV=gMcBA18ZEw_-man7zmYb4O4Q@mail.gmail.com>
 <5a33e663-4a6d-6247-769a-8f14db4810f2@xen.org>
 <b9247831-335a-f791-1664-abed6b400a42@unikie.com>
 <c44d45ed-f03e-e901-4a46-0ce57504703f@xen.org>
 <alpine.DEB.2.21.2102011055080.29047@sstabellini-ThinkPad-T480s>
From: Jukka Kaartinen <jukka.kaartinen@unikie.com>
Message-ID: <598633a0-28b7-6bb2-73b7-c0cb320561d6@unikie.com>
Date: Tue, 2 Feb 2021 09:47:40 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2102011055080.29047@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit


>>
>> Glad to hear it works! IIRC, the swiotlb may become necessary when running
>> guests if guest memory ends up being used for a DMA transaction.
> 
> It is necessary if you are using PV network or PV disk: memory shared by
> another domU could end up being used in a DMA transaction. For that, you
> need swiotlb-xen.

Sounds bad :).


>>> Now that swiotlb is disabled what does it mean?
>>
>> I can see two reasons:
>>    1) You have limited memory below the 30 bits mark. So Swiotlb and CMA may
>> try to fight for the low memory.
>>    2) We found a few conversion bugs in the swiotlb on RPI4 last year (IIRC the
>> DMA and physical addresses may differ). I looked at the Linux branch you
>> are using and the fixes all seem to be there. So there might be another bug.
>>
>> I am not sure how to figure out where the problem is. Stefano, do you have a
>> suggestion where to start?
> 
> Both 1) and 2) are possible. It is also possible that another driver,
> probably something related to CMA or DRM, has some special dma_ops
> handling that doesn't work well together with swiotlb-xen.
> 
> Given that the original error seemed to be related to vc4_bo_create,
> which calls dma_alloc_wc, I would add a couple of printks to
> xen_swiotlb_alloc_coherent to help us figure it out:
> 
> 
> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> index 2b385c1b4a99..cac8b09af603 100644
> --- a/drivers/xen/swiotlb-xen.c
> +++ b/drivers/xen/swiotlb-xen.c
> @@ -295,6 +295,7 @@ xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
>   	/* Convert the size to actually allocated. */
>   	size = 1UL << (order + XEN_PAGE_SHIFT);
>   
> +	printk("DEBUG %s %d size=%lu flags=%lx attr=%lx\n",__func__,__LINE__,size,flags,attrs);
>   	/* On ARM this function returns an ioremap'ped virtual address for
>   	 * which virt_to_phys doesn't return the corresponding physical
>   	 * address. In fact on ARM virt_to_phys only works for kernel direct
> @@ -315,16 +316,20 @@ xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
>   	phys = dma_to_phys(hwdev, *dma_handle);
>   	dev_addr = xen_phys_to_dma(hwdev, phys);
>   	if (((dev_addr + size - 1 <= dma_mask)) &&
> -	    !range_straddles_page_boundary(phys, size))
> +	    !range_straddles_page_boundary(phys, size)) {
>   		*dma_handle = dev_addr;
> -	else {
> +		printk("DEBUG %s %d phys=%lx dma=%lx\n",__func__,__LINE__,phys,dev_addr);
> +	} else {
>   		if (xen_create_contiguous_region(phys, order,
>   						 fls64(dma_mask), dma_handle) != 0) {
> +			printk("DEBUG %s %d\n",__func__,__LINE__);
>   			xen_free_coherent_pages(hwdev, size, ret, (dma_addr_t)phys, attrs);
>   			return NULL;
>   		}
>   		*dma_handle = phys_to_dma(hwdev, *dma_handle);
>   		SetPageXenRemapped(virt_to_page(ret));
> +		printk("DEBUG %s %d dma_mask=%d page_boundary=%d phys=%lx dma=%lx\n",__func__,__LINE__,
> +			((dev_addr + size - 1 <= dma_mask)),range_straddles_page_boundary(phys, size),phys,*dma_handle);
>   	}
>   	memset(ret, 0, size);
>   	return ret;
> 
Thanks I will try this.

> 
> 
> 
>>> And also, can I pass the GPU to domU? The Raspberry Pi 4 is limited HW and
>>> doesn't have an IOMMU. I'm trying to create an OS similar to QubesOS, where
>>> the GPU, network, keyboard/mouse, ... are isolated in their own VMs.
>>
>> Without an IOMMU or any other HW mechanisms (e.g. MPU), it would not be safe
>> to assign a DMA-capable device to a non-trusted VM.
>>
>> If you trust the VM where you assigned a device, then a possible approach
>> would be to have the VM direct mapped (e.g. guest physical address == host
>> physical address). Although, I can foresee some issues if you have multiple
>> VMs requiring memory below 30 bits (there seems to be a limited amount).
>>
>> If you don't trust the VM where you assigned a device, then your best option
>> will be to expose a PV interface of the device and have your backend
>> sanitizing the request and issuing it on behalf of the guest.
> 
> FYI you could do that with the existing PVFB drivers that only support 2D
> graphics
I'll keep that in mind, thanks.



From xen-devel-bounces@lists.xenproject.org Tue Feb 02 07:59:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 07:59:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80388.147056 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6qat-000556-6W; Tue, 02 Feb 2021 07:59:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80388.147056; Tue, 02 Feb 2021 07:59:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6qat-00054z-38; Tue, 02 Feb 2021 07:59:27 +0000
Received: by outflank-mailman (input) for mailman id 80388;
 Tue, 02 Feb 2021 07:59:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1l6qar-00054u-QJ
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 07:59:25 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l6qaq-0008Bt-Ug; Tue, 02 Feb 2021 07:59:24 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l6qaq-0005Vl-P8; Tue, 02 Feb 2021 07:59:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:To:Subject;
	bh=PMmzMAArqVH1l6YZzvwo1qCQH2Z4azKlIqLB2VxigfA=; b=peXsMGrjbb0hYJLmsGEVnUoqoS
	vh8onzuImGs9EcpLqmHPitSZYO0f2ZNsKDaO2efvwlbiUajLMGGiolWG7DCgI+sy0vc47RNP+uSlM
	5Ir2G4SPu2LCVuE9hhFhsvZPdROcbyJh4Y6Vq3/ED3jhPFL82JGN13+VWdt3+hxvM9YY=;
Subject: Re: Null scheduler and vwfi native problem
To: Dario Faggioli <dfaggioli@suse.com>,
 =?UTF-8?Q?Anders_T=c3=b6rnqvist?= <anders.tornqvist@codiax.se>,
 xen-devel@lists.xenproject.org, Stefano Stabellini <sstabellini@kernel.org>
References: <fe3dd9f0-b035-01fe-3e01-ddf065f182ab@codiax.se>
 <207305e4e2998614767fdcc5ad83ced6de982820.camel@suse.com>
 <e85548f4-e03b-4717-3495-9ed472ed03c9@xen.org>
 <e18ba69efd0d12fc489144024305fd3c6102c330.camel@suse.com>
 <e37fe8a9-c633-3572-e273-2fd03b35b791@codiax.se>
 <744ddde6-a228-82fc-76b9-401926d7963b@xen.org>
 <d92c4191fb81e6d1de636f281c8624d68f8d14fc.camel@suse.com>
 <c9a4e132-5bca-aa76-ab8b-bfeee1cd5a9e@codiax.se>
 <f52baf12308d71b96d0d9be1c7c382a3c5efafbc.camel@suse.com>
 <18ef4619-19ae-90d2-459c-9b5282b49176@codiax.se>
 <a9d80e262760f6692f7086c9b6a0622caf19e795.camel@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <4760cbac-b006-78bc-b064-3265384f6707@xen.org>
Date: Tue, 2 Feb 2021 07:59:23 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <a9d80e262760f6692f7086c9b6a0622caf19e795.camel@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Dario,

On 30/01/2021 17:59, Dario Faggioli wrote:
> On Fri, 2021-01-29 at 09:08 +0100, Anders Törnqvist wrote:
>> On 1/26/21 11:31 PM, Dario Faggioli wrote:
>>> Thanks again for letting us see these logs.
>>
>> Thanks for the attention to this :-)
>>
>> Any ideas for how to solve it?
>>
> So, you're up for testing patches, right?
> 
> How about applying these two, and letting me know what happens? :-D
> 
> They are on top of current staging. I can try to rebase on something
> else, if it's easier for you to test.
> 
> Besides being attached, they're also available here:
> 
> https://gitlab.com/xen-project/people/dfaggioli/xen/-/tree/rcu-quiet-fix
> 
> I could not test them properly on ARM, as I don't have an ARM system
> handy, so everything is possible really... just let me know.
> 
> It should at least build fine, AFAICT from here:
> 
> https://gitlab.com/xen-project/people/dfaggioli/xen/-/pipelines/249101213
> 
> Julien, back in:
> 
>   https://lore.kernel.org/xen-devel/315740e1-3591-0e11-923a-718e06c36445@arm.com/
> 
> 
> you said I should hook in enter_hypervisor_head(),
> leave_hypervisor_tail(). Those functions are gone now and looking at
> how the code changed, this is where I figured I should put the calls
> (see the second patch). But feel free to educate me otherwise.

enter_hypervisor_from_guest() and leave_hypervisor_to_guest() are the 
new functions.

I have had a quick look at your placement. The RCU call in 
leave_hypervisor_to_guest() needs to be placed just after the last call 
to check_for_pcpu_work().

Otherwise, you may be preempted and keep the RCU quiet.

The placement in enter_hypervisor_from_guest() doesn't matter too much, 
although I would consider calling it as late as possible.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Feb 02 08:12:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 08:12:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80392.147068 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6qn7-0007ez-Ey; Tue, 02 Feb 2021 08:12:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80392.147068; Tue, 02 Feb 2021 08:12:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6qn7-0007es-C4; Tue, 02 Feb 2021 08:12:05 +0000
Received: by outflank-mailman (input) for mailman id 80392;
 Tue, 02 Feb 2021 08:12:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IB5w=HE=unikie.com=jukka.kaartinen@srs-us1.protection.inumbo.net>)
 id 1l6qn6-0007en-0X
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 08:12:04 +0000
Received: from mail-lf1-x12f.google.com (unknown [2a00:1450:4864:20::12f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0ae982b5-e2d2-40f2-8c3c-abf95db9ca2c;
 Tue, 02 Feb 2021 08:12:02 +0000 (UTC)
Received: by mail-lf1-x12f.google.com with SMTP id e15so6489830lft.13
 for <xen-devel@lists.xenproject.org>; Tue, 02 Feb 2021 00:12:02 -0800 (PST)
Received: from [192.168.1.76] (91-153-193-91.elisa-laajakaista.fi.
 [91.153.193.91])
 by smtp.gmail.com with ESMTPSA id i2sm4236978ljn.39.2021.02.02.00.12.00
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 02 Feb 2021 00:12:00 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0ae982b5-e2d2-40f2-8c3c-abf95db9ca2c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=unikie-com.20150623.gappssmtp.com; s=20150623;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=mBeMdGCzCotqagX+j+uVvuRd6wPgDYEDThhCzjwhkKk=;
        b=zVkG3XPssK5AwfuM4qaZRznpmwf9bN7Ncsd8o8+OZ2sCEhSraA02l2u9/yF48Y7KfB
         53Mzkgxz6pZmQ9WSGI8L8shP1bMIrvlNy/VHOROE4gBPk3ynVWITWwpupSos6p2nfDAu
         IE5YJtR3tKUporJx5CWcIf/LpmMPpaGveEf+roedMEAgdraPejuNQeRnhnKMOoYvPQfT
         Rf4NA1yEzfyDmmYBzTkwRQqoAKAH64kHBBlfWf2HR7KMj5my1geLgvufWlsVncmp8m4t
         K1T/bGz5KHWvnV44YA6EsO5lAL2R+GfJqfqYNFYqqz8IyTN87GKJuoPt2R+UDY6S4rDv
         MgdA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=mBeMdGCzCotqagX+j+uVvuRd6wPgDYEDThhCzjwhkKk=;
        b=p6gATpfto6RpZMLtZi6wx2WZ61rHY4DZ3ngIALfh9GbFPhtMZLU3WPc/0SYrHc3Df7
         lGZjmLJeDOLWNVX58M5m3SNpTQTAl/vRs3PsM0JfOLtE99hYIJC1pD36B85uCSMgwme7
         d8dFwoopSafPF9LcRQAbNYR2KxGmrcDEG6RJvxks8lvvcwNmuZgZpwdPK6HHW2yTUaGc
         uRmm4WTwbjCv9xW8PeWlYP6Ay3G3R3jUzJXJK0sEoSu/jfWU6VZqGg+KOzHHJXMaWAw6
         ht3rXK/ELulg3dWK/RU8fk249ta+tpIQxiEmWtPPCAUnaMF2znB04KIYAXXEwaHr9leO
         Zm8A==
X-Gm-Message-State: AOAM532j0VcH0411LsBcoQSSRTh3Xnby7nk02FlwTfyNOHToYvpor9yQ
	sdHgF5VqA5l5naFcrne/2KnxsQ==
X-Google-Smtp-Source: ABdhPJyyjJgzZxu+y0g66+uj+Jn1Ixix1LE7vHy8gVURNnNn70/dxYOIg7d/0E1x8Swwya18iDiyVA==
X-Received: by 2002:a19:54d:: with SMTP id 74mr10585440lff.258.1612253521340;
        Tue, 02 Feb 2021 00:12:01 -0800 (PST)
Subject: Re: Question about xen and Rasp 4B
To: Roman Shaposhnik <roman@zededa.com>
Cc: Julien Grall <julien@xen.org>, Stefano Stabellini
 <sstabellini@kernel.org>, Xen-devel <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>
References: <CAFnJQOouOUox_kBB66sfSuriUiUSvmhSjPxucJjgrB9DdLOwkg@mail.gmail.com>
 <alpine.DEB.2.21.2101221449480.14612@sstabellini-ThinkPad-T480s>
 <CAFnJQOqoqj6mWwR61ZsZj1JxRrdisFtH_87YXCeW619GM+L21Q@mail.gmail.com>
 <alpine.DEB.2.21.2101251646470.20638@sstabellini-ThinkPad-T480s>
 <CAFnJQOpuehAWde5Ta4ud9CGufwZ-K+=60epzSdKc_DnS75O2iA@mail.gmail.com>
 <alpine.DEB.2.21.2101261149210.2568@sstabellini-ThinkPad-T480s>
 <CAFnJQOpgRM-3_aZsnv36w+aQV=gMcBA18ZEw_-man7zmYb4O4Q@mail.gmail.com>
 <5a33e663-4a6d-6247-769a-8f14db4810f2@xen.org>
 <b9247831-335a-f791-1664-abed6b400a42@unikie.com>
 <CAMmSBy-54qtu_oVVT=KB8GeKP0SW0uK+4wQ_LooHE0y_MZKJQg@mail.gmail.com>
From: Jukka Kaartinen <jukka.kaartinen@unikie.com>
Message-ID: <3ec2b0cb-3685-384e-94df-28eaf8b57c42@unikie.com>
Date: Tue, 2 Feb 2021 10:12:00 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <CAMmSBy-54qtu_oVVT=KB8GeKP0SW0uK+4wQ_LooHE0y_MZKJQg@mail.gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Hi Roman,

>>>
>> Good catch.
>> The GPU works now and I can start X! Thanks! I was also able to create a
>> domU that runs Raspbian OS.
> 
> This is very interesting that you were able to achieve that - congrats!
> 
> Now, sorry to be a bit dense -- but since this thread went into all
> sorts of interesting directions all at once -- I just have a very
> particular question: what is the exact combination of versions of Xen,
> the Linux kernel, and the set of patches on top that allowed you to do
> that? I'd obviously love to see it productized in Xen upstream, but for
> now I'd love to make it available to the Project EVE/Xen community,
> since there seem to be a few folks interested in the EVE/Xen combo
> being able to do that.

I have tried Xen Release 4.14.0, 4.14.1 and master (from week 4, 2021).

Kernel rpi-5.9.y and rpi-5.10.y branches from 
https://github.com/raspberrypi/linux

and

U-boot (master).

For the GPU to work it was enough to disable swiotlb from the kernel(s) 
as suggested in this thread.

If you use Xen master then you need to revert commit 
25849c8b16f2a5b7fcd0a823e80a5f1b590291f9. Apparently v3d uses the same 
resources and will not start.

I was able to get most of the combinations to work without any big effort.
In case you use a USB SSD, U-Boot needs a fix if you use the 5.9 kernel.
The 5.10 kernel works with the latest Xen master, but then you need one 
small workaround in Xen, since there is an address that Xen cannot map. 
Some USB address cannot be found (the address was 0 if I recall 
correctly). I just bypassed the error:

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index e824ba34b0..3e8a175f2e 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1409,7 +1409,7 @@ static int __init handle_device(struct domain *d, struct dt_device_node *dev,
          {
              printk(XENLOG_ERR "Unable to retrieve address %u for %s\n",
                     i, dt_node_full_name(dev));
-            return res;
+            continue; //return res;
          }

          res = map_range_to_domain(dev, addr, size, &mr_data);



From xen-devel-bounces@lists.xenproject.org Tue Feb 02 08:16:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 08:16:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80394.147080 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6qrU-0007rZ-2J; Tue, 02 Feb 2021 08:16:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80394.147080; Tue, 02 Feb 2021 08:16:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6qrT-0007rS-UY; Tue, 02 Feb 2021 08:16:35 +0000
Received: by outflank-mailman (input) for mailman id 80394;
 Tue, 02 Feb 2021 08:16:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=licP=HE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l6qrS-0007rN-F6
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 08:16:34 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ad1a5445-1e8b-48b6-beba-0382da2c07d2;
 Tue, 02 Feb 2021 08:16:33 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C272CAD57;
 Tue,  2 Feb 2021 08:16:32 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ad1a5445-1e8b-48b6-beba-0382da2c07d2
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612253792; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=o1jo4rfmACOXREMgoKonmem5GnxsE2wCTSrHFtLGgDI=;
	b=mDlujh96hgx8/qCAtPDa8pNIiNOS+y/8flFruo9jZwfK2lW2TJVLhrVwfoEvjn6mQ6TyGz
	y5Sa5G8Uw7NMgoBQMwhfhV1I211tlwEHrOAtCBJjlA6qynC4VX5LRA6dpiQRIStpbO0q6r
	hW6ZCzDuzrj37eepooFpm8gDn4GqWr8=
Subject: Re: [PATCH v2 3/3] x86/time: don't move TSC backwards in
 time_calibration_tsc_rendezvous()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Claudemir Todo Bom <claudemir@todobom.com>
References: <b8d1d4f8-8675-1393-8524-d060ffb1551a@suse.com>
 <80d05abb-4d53-3229-8326-21d79e32dfe4@suse.com>
Message-ID: <22d172b7-bee6-79da-f194-e504ada14871@suse.com>
Date: Tue, 2 Feb 2021 09:16:29 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <80d05abb-4d53-3229-8326-21d79e32dfe4@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 01.02.2021 13:43, Jan Beulich wrote:
> As per the comment ahead of it, the original purpose of the function was
> to deal with TSCs halted in deep C states. While this probably explains
> why only forward moves were ever expected, I don't see how this could
> have been reliable in case CPU0 was deep-sleeping for a sufficiently
> long time. My only guess here is a hidden assumption of CPU0 never being
> idle for long enough.

Furthermore that comment looks to be contradicting the actual use of
the function: It gets installed when !RELIABLE_TSC, while the comment
would suggest !NONSTOP_TSC. I suppose the comment is simply misleading,
because RELIABLE_TSC implies NONSTOP_TSC according to all the places
where either of the two feature bits gets played with. Plus in the
!NONSTOP_TSC case we write the TSC explicitly anyway when coming back
out of a (deep; see below) C-state.

As an implication from the above, mwait_idle_cpu_init() then looks to
pointlessly clear "reliable" when "nonstop" is clear.

It further looks odd that mwait_idle() (unlike acpi_processor_idle())
calls cstate_restore_tsc() independent of what C-state was active.

> @@ -1719,9 +1737,12 @@ static void time_calibration_tsc_rendezv
>              while ( atomic_read(&r->semaphore) > total_cpus )
>                  cpu_relax();
>          }
> +
> +        /* Just in case a read above ended up reading zero. */
> +        tsc += !tsc;
>      }
>  
> -    time_calibration_rendezvous_tail(r, r->master_tsc_stamp);
> +    time_calibration_rendezvous_tail(r, tsc, r->master_tsc_stamp);

This, in particular, wouldn't be valid when !NONSTOP_TSC without
cstate_restore_tsc(). We then wouldn't have a way to know whether
the observed gap is because of the TSC having been halted for a
while (as the comment ahead of the function - imo wrongly, as per
above - suggests), or whether - like in Claudemir's case - the
individual TSCs were offset against one another.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Feb 02 08:28:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 08:28:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80403.147092 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6r39-0000WX-4x; Tue, 02 Feb 2021 08:28:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80403.147092; Tue, 02 Feb 2021 08:28:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6r39-0000WQ-1U; Tue, 02 Feb 2021 08:28:39 +0000
Received: by outflank-mailman (input) for mailman id 80403;
 Tue, 02 Feb 2021 08:28:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=licP=HE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l6r37-0000WL-Na
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 08:28:37 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7997f814-da3c-488b-b126-aa9a9420ac75;
 Tue, 02 Feb 2021 08:28:36 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D609DAF13;
 Tue,  2 Feb 2021 08:28:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7997f814-da3c-488b-b126-aa9a9420ac75
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612254516; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=zA32hojje5gaJd4Wl5W98aGCHXdaqpWoeZBN3Vk/yg4=;
	b=S6g9WwToMRAilnLtwKdaAx7sQ56qJn+mpW/mEYpb9UY2ttXfT46C4hNHyyXeXs8Hl1+8Bh
	Mkvfw5EwnGLdzKYZVkxfOwD1npr38kp9TwOtRcdEMrC3WBI9ZuCOzgaPrt8q0soGQMsjB2
	t0iXhgAw/MAat7K2CC7i2F1V50uthmw=
Subject: Re: [PATCH] memory: fix build with COVERAGE but !HVM
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <84a05b9e-a0c3-7860-4a59-a591a873b884@suse.com>
 <a2fff2f5-70d2-19f2-b7d3-01e4db50f709@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c26ae312-2e08-6266-ea0d-a7a6c901f866@suse.com>
Date: Tue, 2 Feb 2021 09:28:36 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <a2fff2f5-70d2-19f2-b7d3-01e4db50f709@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 01.02.2021 19:26, Andrew Cooper wrote:
> On 01/02/2021 16:20, Jan Beulich wrote:
>> Xen is heavily relying on the DCE stage to remove unused code, so the
>> linker doesn't throw an error when a function is not implemented yet
>> but we have defined a prototype for it.
>>
>> On some GCC versions (such as 9.4 provided by Debian sid), the compiler
>> DCE stage will not manage to figure that out for
>> xenmem_add_to_physmap_batch():
>>
>> ld: ld: prelink.o: in function `xenmem_add_to_physmap_batch':
>> /xen/xen/common/memory.c:942: undefined reference to `xenmem_add_to_physmap_one'
>> /xen/xen/common/memory.c:942:(.text+0x22145): relocation truncated
>> to fit: R_X86_64_PLT32 against undefined symbol `xenmem_add_to_physmap_one'
>> prelink-efi.o: in function `xenmem_add_to_physmap_batch':
>> /xen/xen/common/memory.c:942: undefined reference to `xenmem_add_to_physmap_one'
>> make[2]: *** [Makefile:215: /root/xen/xen/xen.efi] Error 1
>> make[2]: *** Waiting for unfinished jobs....
>> ld: /xen/xen/.xen-syms.0: hidden symbol `xenmem_add_to_physmap_one' isn't defined
>> ld: final link failed: bad value
>>
>> It is not entirely clear why the compiler DCE is not detecting the
>> unused code. However, cloning the check introduced by the commit below
>> into xenmem_add_to_physmap_batch() does the trick.
>>
>> No functional change intended.
>>
>> Fixes: d4f699a0df6c ("x86/mm: p2m_add_foreign() is HVM-only")
>> Reported-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> Release-Acked-by: Ian Jackson <iwj@xenproject.org>
>> ---
> >> Julien, since I reused most of your patch's description, I've kept your
>> S-o-b. Please let me know if you want me to drop it.
>>
>> --- a/xen/common/memory.c
>> +++ b/xen/common/memory.c
>> @@ -904,6 +904,19 @@ static int xenmem_add_to_physmap_batch(s
>>  {
>>      union add_to_physmap_extra extra = {};
>>  
>> +    /*
>> +     * While, unlike xenmem_add_to_physmap(), this function is static, there
>> +     * still have been cases observed where xatp_permission_check(), invoked
>> +     * by our caller, doesn't lead to elimination of this entire function when
>> +     * the compile time evaluation of paging_mode_translate(d) is false. Guard
> >> +     * against this by replicating the same check here.
>> +     */
> 
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>,

Thanks.

> but I feel this
> comment can be far more precise/concise.
> 
> /* In some configurations, (!HVM, COVERAGE), the
> xenmem_add_to_physmap_one() call doesn't succumb to
> dead-code-elimination.  Duplicate the short-circuit from
> xatp_permission_check() to try and help the compiler out. */

I'm perfectly fine to take this one. I have to admit though that
I first needed to look up "succumb" ...

Jan


From xen-devel-bounces@lists.xenproject.org Tue Feb 02 08:35:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 08:35:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80406.147105 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6r9O-0001ZP-UX; Tue, 02 Feb 2021 08:35:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80406.147105; Tue, 02 Feb 2021 08:35:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6r9O-0001ZI-PD; Tue, 02 Feb 2021 08:35:06 +0000
Received: by outflank-mailman (input) for mailman id 80406;
 Tue, 02 Feb 2021 08:35:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XtUW=HE=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1l6r9N-0001ZD-Bm
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 08:35:05 +0000
Received: from mail-wr1-x429.google.com (unknown [2a00:1450:4864:20::429])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9788af28-c41c-4b6c-83f7-3d9382faed3e;
 Tue, 02 Feb 2021 08:35:04 +0000 (UTC)
Received: by mail-wr1-x429.google.com with SMTP id d16so19445101wro.11
 for <xen-devel@lists.xenproject.org>; Tue, 02 Feb 2021 00:35:04 -0800 (PST)
Received: from CBGR90WXYV0 (host86-190-149-163.range86-190.btcentralplus.com.
 [86.190.149.163])
 by smtp.gmail.com with ESMTPSA id v4sm34648305wrw.42.2021.02.02.00.35.02
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 02 Feb 2021 00:35:02 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9788af28-c41c-4b6c-83f7-3d9382faed3e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:thread-index
         :content-language;
        bh=fJhlXqWKKXV3CwRrJvNt4+y3ONiq0lXyBm2W2gSYULg=;
        b=VR7SBEc8+xr99XlM7/imame9l9moEJE80EHhYJZQ4KC4Ie3zM8ZK2a/zKiql8hw2Pm
         QQ/TiVixquDhxkAYVCglUZjL5OiDbshEWq2p/vmrOfk8hsxtFjqiQGHdOvSVAFOlLjNS
         EIDyNKkTGUVMl0lA2BN6yjlCqVNwkT4t94vA6SQOdygpaoG/VpLjkJc/9jKe2HKeZ8RM
         T5wZM9kbe59CkHzM3ALauexuLFgqCb+vkqW0pZcZCb04e6g/hHnQw5ieO01mVg+UmlAE
         n1n3EkHWOrYCS/uBpGxtliNLUpn4m6Tu/iLRzbIDEOAroTT7+sLiLOENHRZjbDVIQ9/Y
         rQYg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:references:in-reply-to:subject
         :date:message-id:mime-version:content-transfer-encoding:thread-index
         :content-language;
        bh=fJhlXqWKKXV3CwRrJvNt4+y3ONiq0lXyBm2W2gSYULg=;
        b=g90el+yPkFuDUIVWkqSsTNKRdUFtnPT6jHPN+XCAioAxwuDV1uhzB3syfVXiLBQ9jS
         RLVR0OuyfMdKYIxgYNNGWcF98fF/zG8jNglJiKhcT+y+xMy7+dZ4XDsmA60FnTHZ7IGK
         ghFpkQOJ8cBI7QuvhSVbw93AvzPnsC19dmMQeXYznxVyXbnKPajR3nYnz+w5vBEEx3+S
         nWhUvILGcg1v5bX26NfFpkWFSg7Jd68ciixnxScP9XoKgHXKLQR2Xgk57TBDhv56PQdK
         a70ZshfTUm1jJVZLOxAPxh1kx7rDOWMLUMWV4AyN7Q9gPB5TSuiQTI7mOZVfSQZsCMTe
         kU6g==
X-Gm-Message-State: AOAM533rZ20Hs/8HZ0lmeQFwG6HAmkUzmTmcyrdWHe/fo1LeRkzrvLi1
	AGe7/P1l9vmLY1457AaHlnY=
X-Google-Smtp-Source: ABdhPJySN61g56h7QmXQE1/nxJzAtj6MxUcF74FqO2cH8Armb+6ZbTWlHhby8apDPvI//le7Jy3gkA==
X-Received: by 2002:adf:f4c1:: with SMTP id h1mr22488665wrp.102.1612254903537;
        Tue, 02 Feb 2021 00:35:03 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Igor Druzhinin'" <igor.druzhinin@citrix.com>,
	"'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	"'Tamas K Lengyel'" <tamas.k.lengyel@gmail.com>,
	"'Xen-devel'" <xen-devel@lists.xenproject.org>,
	"'Wei Liu'" <wl@xen.org>,
	"'Ian Jackson'" <iwj@xenproject.org>,
	"'Anthony PERARD'" <anthony.perard@citrix.com>
References: <CABfawhkxEKOha7RYpSvTaJEtxyBsio9Pcc=xtRD7DszHm2k2pw@mail.gmail.com> <12e17af4-3502-0047-36e2-3c1262602747@citrix.com> <7ea14fac-7832-fe68-529e-03a8f9812f88@citrix.com>
In-Reply-To: <7ea14fac-7832-fe68-529e-03a8f9812f88@citrix.com>
Subject: RE: staging: unable to restore HVM with Viridian param set
Date: Tue, 2 Feb 2021 08:35:01 -0000
Message-ID: <035301d6f93e$4d03c6b0$e70b5410$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQJ3Ltpo4mDg/loyO5st+hbbC6UbfgDvdyJ3AnP3zDOo6QKkoA==
Content-Language: en-gb

> -----Original Message-----
> From: Igor Druzhinin <igor.druzhinin@citrix.com>
> Sent: 02 February 2021 00:20
> To: Andrew Cooper <andrew.cooper3@citrix.com>; Tamas K Lengyel <tamas.k.lengyel@gmail.com>; Xen-devel
> <xen-devel@lists.xenproject.org>; Wei Liu <wl@xen.org>; Ian Jackson <iwj@xenproject.org>; Anthony
> PERARD <anthony.perard@citrix.com>; Paul Durrant <paul@xen.org>
> Subject: Re: staging: unable to restore HVM with Viridian param set
>
> On 01/02/2021 22:57, Andrew Cooper wrote:
> > On 01/02/2021 22:51, Tamas K Lengyel wrote:
> >> Hi all,
> >> trying to restore a Windows VM saved on Xen 4.14 with Xen staging results in:
> >>
> >> # xl restore -p /shared/cfg/windows10.save
> >> Loading new save file /shared/cfg/windows10.save (new xl fmt info 0x3/0x0/1475)
> >>  Savefile contains xl domain config in JSON format
> >> Parsing config from <saved>
> >> xc: info: Found x86 HVM domain from Xen 4.14
> >> xc: info: Restoring domain
> >> xc: error: set HVM param 9 = 0x0000000000000065 (17 = File exists):
> >> Internal error
> >> xc: error: Restore failed (17 = File exists): Internal error
> >> libxl: error: libxl_stream_read.c:850:libxl__xc_domain_restore_done:
> >> restoring domain: File exists
> >> libxl: error: libxl_create.c:1581:domcreate_rebuild_done: Domain
> >> 8:cannot (re-)build domain: -3
> >> libxl: error: libxl_domain.c:1182:libxl__destroy_domid: Domain
> >> 8:Non-existant domain
> >> libxl: error: libxl_domain.c:1136:domain_destroy_callback: Domain
> >> 8:Unable to destroy guest
> >> libxl: error: libxl_domain.c:1063:domain_destroy_cb: Domain
> >> 8:Destruction of domain failed
> >>
> >> Running on staging 419cd07895891c6642f29085aee07be72413f08c
> >
> > CC'ing maintainers and those who've edited the code recently.
> >
> > What is happening is xl/libxl is selecting some viridian settings,
> > applying them to the domain, and then the migrations stream has a
> > different set of viridian settings.
> >
> > For a migrating-in VM, nothing should be set during domain build.
> > Viridian state has been part of the migrate stream since before mig-v2,
> > so can be considered to be everywhere relevant now.
>
> The fallout is likely from my changes that modified the default set of viridian
> settings. The relevant commits:
> 983524671031fcfdb24a6c0da988203ebb47aebe
> 7e5cffcd1e9300cab46a1816b5eb676caaeed2c1
>
> The same config from migrated domains now implies a different set of viridian
> extensions than those set at the source side. That creates an inconsistency in
> libxl. I don't really know how to address it properly in libxl other than
> never extending the default set.
>

Surely it should be addressed properly in libxl by not messing with the
viridian settings on migrate-in/resume, as Andrew says? TBH I thought
this was already the case. There should be no problem with adding to the
default set, as this is just an xl/libxl concept; the param flags in Xen
always define the *exact* set of enabled enlightenments.

  Paul


> Igor



From xen-devel-bounces@lists.xenproject.org Tue Feb 02 08:49:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 08:49:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80409.147116 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6rN1-0002pG-5a; Tue, 02 Feb 2021 08:49:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80409.147116; Tue, 02 Feb 2021 08:49:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6rN1-0002p9-1q; Tue, 02 Feb 2021 08:49:11 +0000
Received: by outflank-mailman (input) for mailman id 80409;
 Tue, 02 Feb 2021 08:49:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=licP=HE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l6rMz-0002iY-GK
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 08:49:09 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a964888f-61c2-4ab4-bfcc-dda8f3237e8e;
 Tue, 02 Feb 2021 08:49:07 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id AE553AE89;
 Tue,  2 Feb 2021 08:49:06 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a964888f-61c2-4ab4-bfcc-dda8f3237e8e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612255746; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=fHxAoQG2as0o0ecdoloiK5EPBvhs1eUJHVM5PjMQWOs=;
	b=Lu3mSdcH0wyLm9Q/OktgcIxkRE/sKCjjGBQR5lMYyQ9inx1BQRdnBiO/OzEfoMbGk3U/Lx
	6D4Y+f2dJ5tBOmxsKrfRG1YH6d7n2WwEjORnuKGO7x7k3ckl22vGs8dbc8fWoBvwCpM4IT
	65Hanh7EOCZPkx0YYqcyOM57Oh9tnDA=
Subject: Re: [PATCH] xenstore: Fix all builds
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210201233513.30923-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <5c957319-e0cd-4d9f-505b-cceb1a83bb00@suse.com>
Date: Tue, 2 Feb 2021 09:49:07 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <20210201233513.30923-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 02.02.2021 00:35, Andrew Cooper wrote:
> This diff is easier viewed through `cat -A`
> 
>   diff --git a/tools/xenstore/include/xenstore_state.h b/tools/xenstore/include/xenstore_state.h$
>   index 1bd443f61a..f7e4da2b2c 100644$
>   --- a/tools/xenstore/include/xenstore_state.h$
>   +++ b/tools/xenstore/include/xenstore_state.h$
>   @@ -21,7 +21,7 @@$
>    #ifndef XENSTORE_STATE_H$
>    #define XENSTORE_STATE_H$
>    $
>   -#if defined(__FreeBSD__) ||M-BM- defined(__NetBSD__)$
>   +#if defined(__FreeBSD__) || defined(__NetBSD__)$
>    #include <sys/endian.h>$
>    #else$
>    #include <endian.h>$
> 
> A non-breaking space isn't a valid C preprocessor token.
> 
> Fixes: ffbb8aa282de ("xenstore: fix build on {Net/Free}BSD")
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

I wonder why you didn't throw this in right away, without
waiting for any acks.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Feb 02 09:04:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 09:04:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80414.147127 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6rbw-0004hW-Lk; Tue, 02 Feb 2021 09:04:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80414.147127; Tue, 02 Feb 2021 09:04:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6rbw-0004hP-IT; Tue, 02 Feb 2021 09:04:36 +0000
Received: by outflank-mailman (input) for mailman id 80414;
 Tue, 02 Feb 2021 09:04:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=licP=HE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l6rbv-0004hK-Ck
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 09:04:35 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id aeaa9761-b23c-41d0-95e9-7ff2e60737b7;
 Tue, 02 Feb 2021 09:04:33 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 428EDAF9C;
 Tue,  2 Feb 2021 09:04:32 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aeaa9761-b23c-41d0-95e9-7ff2e60737b7
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612256672; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=jJY700A6YPRFHrbaR6No2EjJvsrjURGXNXHYWtWql7E=;
	b=kgbt52l5RlwpofZ4EjIdreBytQyo5HyV4zleGUaXnuALofh3ta84nI0oZTKI7mR3LacADZ
	Vl3Ap+NxLTCUK6jJRN8SEaoYoRJm9upL+hEn1lHuJ7u5fmuAqBDwbFwb7UDzJIv4xXGUsR
	kw/UO1hv5v/9iTIyBzXnkTmMYaJWINo=
Subject: Re: [PATCH v9 02/11] xen/domain: Add vmtrace_size domain creation
 parameter
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 Tamas K Lengyel <tamas@tklengyel.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20210201232703.29275-1-andrew.cooper3@citrix.com>
 <20210201232703.29275-3-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <7a27c313-2c7c-8394-3749-e2f4d671fdab@suse.com>
Date: Tue, 2 Feb 2021 10:04:32 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <20210201232703.29275-3-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 02.02.2021 00:26, Andrew Cooper wrote:
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -132,6 +132,56 @@ static void vcpu_info_reset(struct vcpu *v)
>      v->vcpu_info_mfn = INVALID_MFN;
>  }
>  
> +static void vmtrace_free_buffer(struct vcpu *v)
> +{
> +    const struct domain *d = v->domain;
> +    struct page_info *pg = v->vmtrace.pg;
> +    unsigned int i;
> +
> +    if ( !pg )
> +        return;
> +
> +    v->vmtrace.pg = NULL;
> +
> +    for ( i = 0; i < (d->vmtrace_size >> PAGE_SHIFT); i++ )
> +    {
> +        put_page_alloc_ref(&pg[i]);
> +        put_page_and_type(&pg[i]);
> +    }
> +}
> +
> +static int vmtrace_alloc_buffer(struct vcpu *v)
> +{
> +    struct domain *d = v->domain;
> +    struct page_info *pg;
> +    unsigned int i;
> +
> +    if ( !d->vmtrace_size )
> +        return 0;
> +
> +    pg = alloc_domheap_pages(d, get_order_from_bytes(d->vmtrace_size),
> +                             MEMF_no_refcount);
> +    if ( !pg )
> +        return -ENOMEM;
> +
> +    for ( i = 0; i < (d->vmtrace_size >> PAGE_SHIFT); i++ )
> +        if ( unlikely(!get_page_and_type(&pg[i], d, PGT_writable_page)) )
> +            goto refcnt_err;
> +
> +    /*
> +     * We must only let vmtrace_free_buffer() take any action in the success
> +     * case when we've taken all the refs it intends to drop.
> +     */
> +    v->vmtrace.pg = pg;
> +    return 0;
> +
> + refcnt_err:
> +    while ( i-- )
> +        put_page_and_type(&pg[i]);
> +
> +    return -ENODATA;

Would you mind at least logging how many pages may be leaked
here? I also don't understand why you don't call
put_page_alloc_ref() in the loop - that's fine to do prior to
the put_page_and_type(), and will at least limit the leak.
The buffer size here typically isn't insignificant, and it
may be helpful to not unnecessarily defer the freeing to
relinquish_resources() (assuming we will make that one also
traverse the list of "extra" pages, but I understand that's
not going to happen for 4.15 anymore anyway).

Additionally, while I understand you're not in favor of the
comments we have on all three similar code paths, I think
replicating their comments here would help easily spotting
(by grep-ing e.g. for "fishy") all instances that will need
adjusting once we have figured out a better overall solution.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Feb 02 09:20:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 09:20:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80423.147143 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6rr1-0006gT-1m; Tue, 02 Feb 2021 09:20:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80423.147143; Tue, 02 Feb 2021 09:20:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6rr0-0006gM-Un; Tue, 02 Feb 2021 09:20:10 +0000
Received: by outflank-mailman (input) for mailman id 80423;
 Tue, 02 Feb 2021 09:20:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=b4+r=HE=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1l6rqz-0006gH-AQ
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 09:20:09 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 527fa2dd-1ac9-4881-b439-3491e4bc59dc;
 Tue, 02 Feb 2021 09:20:07 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 527fa2dd-1ac9-4881-b439-3491e4bc59dc
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612257606;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=FBi1CK6rYrpxHlFiKbA46nvvpFlcW0fleWlCni9utU8=;
  b=M54BmubbM6P52cnBJe+nATKm6y7gqiY/7lrcFhPNna+Q8gch1Yxaj4ch
   KoF7gqMcSxHWu43NQ8P/jE2weMTHJjLt4awp1E5tGX203okX7J97v9wue
   YVMVPMp/nI8Q0wJlY+YBWIa6pUPD6cgvjQGY1XEqEdUDZlleRswKxlnUL
   0=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: 4ji27tSKyerhD8pdwJKNEkWpvrlmc1GBAwHSZymylFBw+mAefh3e89drl8HQzhcjcMdYH4lZHl
 tXobT1gqcdqk4OSiXVYswZGOI+AAPsr9tLntVyjWLbaeh+hGJFoSylDdQdTqK7XGpd5neAtujg
 vtURDdh5WhilQdEprGN1XJkAdD1FPdsBSmNgyArz/52JhxJ4+/cgOq6jFq44DwX/suFNuyvI6e
 mql51KvZAGED8RsMxCFuA557rK0+OAAJTig5zsE12Q0FSG7E3YqYPwBepfwfVJ/MA+RO2ar7sO
 6Co=
X-SBRS: 5.2
X-MesageID: 37679449
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,394,1602561600"; 
   d="scan'208";a="37679449"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=DBoAP88Q+CmLZ/+f60jQeqq2dhFTgzgg2fmRtN18Uc8MOkCR826pUvgH2Cj+dZNWo/MoHKGSPZEPIWTKMhdpcB4yjNHg1rpHdOKMWKZnGI/beDkW/Qxkxl0NfiwIJXmLvDIeNKy2mCbLr0SOx+BAtVXtZ/8BBZUSjmkzYAyszVjmhG3yqyYrfaxEom/AQDFcklmIFhtm+1c0aV3VuLr9pXBn0b10a/ZDgx82l1EyNkND11Q9KMjXOeZxkrDGSkiJO3KIdreR1SSmsd5PIWHawkZlKjYcGoVTOrhBKPObsMJ89niq68oorzs/bGtPnuTEq9i7sOG7U3fsTGDw9whwyg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3/gmgu9GdMiIcGKeSjmPU54VuSLrpAkpme844jmWvr4=;
 b=GBSvQlYn5jbPC/1h5j4nzyd7wbhgPha8g9Ge5g015XjJNYFrrRjBVBffohak3wxFEJNsG9JGT6p/UtWr15zuFG62huBQf252sII6g9ST0wi2uTKTcNGzEDTaqyDZrzIWP4d8Zm4nUT1RMFlBXLag6cBG1OeUT4lurKTR+14oTGEfkI0pfedj5DpE0T6dVvoddMbA2zraDmrW7SXA8Li+Utegm+OSeuK3HT2+xrmBACSEJw3bReTuCs/cYluytSlN8FLjQyUqK5AuuftKZquTyq4zIdAJppJRRiyrKECA0lHgFhxMOqY6KCRiz8nl+MtHZK1Nbt3lsU4g7nFmhhvdtQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3/gmgu9GdMiIcGKeSjmPU54VuSLrpAkpme844jmWvr4=;
 b=h57Nts5ODTAF3DERXWtR7DvWIe7ndvdJcb6IFA35s1YM3QPwkbWUnGX4cnPzCpKDbyM4jFIiQESMTxmJhdedNfnYD1/i+JHCJVN80KH/uNVXd2cxnm53WlFOE8nvbTCWGgodfbXUeGIHKVIxizGnA6Xt20oNIhFlKctv/bff4Tg=
Date: Tue, 2 Feb 2021 10:19:55 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Ian Jackson
	<iwj@xenproject.org>
Subject: Re: [PATCH] xenstore: Fix all builds
Message-ID: <YBkZO9rV4rT3yzUI@Air-de-Roger>
References: <20210201233513.30923-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20210201233513.30923-1-andrew.cooper3@citrix.com>
X-ClientProxiedBy: MRXP264CA0014.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:15::26) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: a8407496-de13-4952-30a9-08d8c75bb8bd
X-MS-TrafficTypeDiagnostic: DM5PR03MB2969:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM5PR03MB296918A8B15F22128B8FC4748FB59@DM5PR03MB2969.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: xNSP+B7BrPgJcS6wKXVCZtLp5AyXz8z9vtae8H5M8L2pZd9wPhbXZlgkE3arOXQ8bqrfYVuOeumqDG0dQ4Mvfs3soCcqgyStZ0c86sskHju0RVo1b0fJd4FN4jCOSGpfoSi7dEgmaUQMi2/8R36+5froW3EVs+Cm7NCz4vYMYrqBKm25IgtB4Ixv2/YXlLln/Ol+d78qIHqVrE75sRaumnoYSeRpxcai5RvC9VAhSnlz3jAmm2+DeQ/TIaRB38BXXcHAoSnJhBuwWbgmfVRNsv+xyE2NMXD4vNzt8/P0bO1ommbOyHB2uMZTXiP9mfKZty2RC78W0/rVG/4uB88ZVujM/eex2+HwkQce8nXhb16jZf7XvICWzF/EBGsj3h1Cpv8B0DsCejQyHiLjFPc8OPtj3Whp4XOuhlLQp2oinwhT9kQ6VH64x2uUOCbXA3iclzhOTGHn5ZSUWVdTtdShMNZcaYV/KiqsA8nc7wOnSbovnbNEsuAiKNQ2sgaMqP0p9puIoQtYT8yGn5YNBFsD1w==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(39860400002)(376002)(136003)(346002)(366004)(396003)(9686003)(6486002)(6636002)(6862004)(4744005)(66946007)(2906002)(66476007)(66556008)(5660300002)(6666004)(54906003)(26005)(16526019)(186003)(956004)(316002)(85182001)(8936002)(33716001)(478600001)(4326008)(6496006)(8676002)(86362001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?STJZUldzeUVjNXYxNHJvZlc3N1FLWHVKWVVKSWtCMmdHZjJlSGZHbHVpSEY3?=
 =?utf-8?B?VXpmWUVHWHIzSHRqZ3ZuWHdPTnFjWE15ZnprckY3TjlDaVQyU2JmVlJ3c3Jp?=
 =?utf-8?B?MlB2M0ljZ1d3YUZGKzZ0VU1ZekQ2UWV3eUFjZmJnUmFvTW5BR3pVdXlLZ2VF?=
 =?utf-8?B?VjRlTUV4YzVxcy9GR05VK3BKOVBWOHBybFg3UjhySmQrMTNYQVJHckU2QTRy?=
 =?utf-8?B?NktPdG5wcW5FaUF5L3ZiMGdlekRMWkpDdCszZUJYdk1GOGJ5NXJtNGNYcXJP?=
 =?utf-8?B?Z2J4dDlSYkhSUGRYb2VnWmJHRVJKYnZsdDhuRkxLVEYvRVNuOTZDZFN4V2NS?=
 =?utf-8?B?YnlLeW1SMlJGVG1tM0RRYkRtMk1mZ01QeUhHZW9oN0REOVdTQzdFRXZNZXhk?=
 =?utf-8?B?ZWMzVm5DZTRtcmNwVlhzVEVFWEM5alBtTmY1MTVNdE5KOTRrUHMwbkRKdXZD?=
 =?utf-8?B?ZTlNR0t5U1FBY2I4Q1p1elBiZDdhQXBBYy9Oa0w5T3RmUjFFNVNrckEvcU5x?=
 =?utf-8?B?MXpSandlY1E5V2UzVDBnYk11dkJpV0JQWm9rcnZCNTZSd0tUeWZ0UVFQU2c3?=
 =?utf-8?B?MWlpRVVyOWh3UWtoL0hDWCt6dkJUU0N0WnFIR1k0VlRDdHlDQTYxODcvTmds?=
 =?utf-8?B?ZGRnbGtBZkUzM3hRbFAzVkhYSDdTc3h6TW0xVzdnNDZxUXRKbDExOUtncVZY?=
 =?utf-8?B?am9TSmpSbk9ySW5qekdoSjQ3TnVxSkdmbEEzbkNJRWNiZzZBcjI4OWkybXlJ?=
 =?utf-8?B?SzJJZGtCdlNKaE42aHhSMTZCMjhtcFRycC96WHowa1lQOEZIT3NUSGRIM3do?=
 =?utf-8?B?eFFTTFFjQlF5WjhqQ1hvQmowaTM0YVJTclFHb051dEZTS1FJbGVzUTJHMXo5?=
 =?utf-8?B?bzBaYjBKRWx5dEtiZXJIUXdkdzByQVcvN1ZhTWcyQS8zVkxjbituYURXMkp4?=
 =?utf-8?B?TnE3bWduYzR1OG81b01kVEtiZGJOYk4zK3JKRkV2ck96Q2hEVzhjSElWV0tV?=
 =?utf-8?B?WW0wOUdlNEVHc3dCamdsc043U2JjdzlyaUlqSysySjNBdWxORDE4ZngvTC9Z?=
 =?utf-8?B?cUhKT2NVam5LMlZkYmt3M1M4T1FTTGJXMzJKYSsxL1JtYm1lb3RwNDY1SjdG?=
 =?utf-8?B?Y2ZlTzgrY0pyOW9abkEzUVVBWGpodktpUlNZL0VTNFJUUlNrV3dzOTNTT3Ux?=
 =?utf-8?B?aTY5RmwzL2ttTmtkNTV3ZlhjSG5CVmF4dm5qTGlSb2hGbFk5ZWRkMDZDeUJN?=
 =?utf-8?B?eWVwTjVXZXl3cXZkRHlTdDBDMUExT0FiWFptNmNQMFprTmJYV2FWTE9va004?=
 =?utf-8?B?cXZQZWRRdFU5bEdiQjVOYTJmVU1oK2VCNGRFVkNUSWsrU3E5UUZtNURrWVpC?=
 =?utf-8?B?NWY0RXhCUHRYbmtxTG9hb2o1TVdFLzZhM3dVd2dpSFdZemhwRGJjeFR5SHJG?=
 =?utf-8?B?N2UwWGprcXpISFhKRWI0TEpjWUxNTVQ0K2J6aTZBPT0=?=
X-MS-Exchange-CrossTenant-Network-Message-Id: a8407496-de13-4952-30a9-08d8c75bb8bd
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Feb 2021 09:20:03.1409
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: qvjqxs/pwczaWxYrHKJNRaXQV6doVBli04t9Hn44nlcHqnEMOF8cfk28ZqeuUDdRnLp0221EHhMLPOzND6SC7A==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB2969
X-OriginatorOrg: citrix.com

On Mon, Feb 01, 2021 at 11:35:13PM +0000, Andrew Cooper wrote:
> This diff is easier viewed through `cat -A`
> 
>   diff --git a/tools/xenstore/include/xenstore_state.h b/tools/xenstore/include/xenstore_state.h$
>   index 1bd443f61a..f7e4da2b2c 100644$
>   --- a/tools/xenstore/include/xenstore_state.h$
>   +++ b/tools/xenstore/include/xenstore_state.h$
>   @@ -21,7 +21,7 @@$
>    #ifndef XENSTORE_STATE_H$
>    #define XENSTORE_STATE_H$
>    $
>   -#if defined(__FreeBSD__) ||M-BM- defined(__NetBSD__)$
>   +#if defined(__FreeBSD__) || defined(__NetBSD__)$
>    #include <sys/endian.h>$
>    #else$
>    #include <endian.h>$
> 
> A non-breaking space isn't a valid C preprocessor token.
> 
> Fixes: ffbb8aa282de ("xenstore: fix build on {Net/Free}BSD")

Sorry. I fixed this locally but forgot to refresh the patch and ended
up sending the broken version. I need to figure out a way to make
format-patch fail if there are uncommitted changes in the repository.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Feb 02 09:36:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 09:36:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80428.147158 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6s7A-0007p4-Ee; Tue, 02 Feb 2021 09:36:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80428.147158; Tue, 02 Feb 2021 09:36:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6s7A-0007ox-Am; Tue, 02 Feb 2021 09:36:52 +0000
Received: by outflank-mailman (input) for mailman id 80428;
 Tue, 02 Feb 2021 09:36:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XtUW=HE=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1l6s79-0007os-Gw
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 09:36:51 +0000
Received: from mail-wr1-x431.google.com (unknown [2a00:1450:4864:20::431])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 87c542bf-64f7-41a9-b0d9-67533f15158d;
 Tue, 02 Feb 2021 09:36:50 +0000 (UTC)
Received: by mail-wr1-x431.google.com with SMTP id 7so19690214wrz.0
 for <xen-devel@lists.xenproject.org>; Tue, 02 Feb 2021 01:36:49 -0800 (PST)
Received: from CBGR90WXYV0 (host86-190-149-163.range86-190.btcentralplus.com.
 [86.190.149.163])
 by smtp.gmail.com with ESMTPSA id d16sm30262030wrr.59.2021.02.02.01.36.47
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 02 Feb 2021 01:36:48 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 87c542bf-64f7-41a9-b0d9-67533f15158d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:thread-index
         :content-language;
        bh=Zz8FFykTcux2eAHU4qruCPE/CaDgws9rPlVFrgLBmDE=;
        b=n0na40lt2YrJjOE1hvFtNiO/wlcvGHgllxcJfOb+wkGhd8mAeV4zcdMNFuVWvf5bxm
         blPJ61MgpDxWvC8snrZbeUJTpzZhBCBI7ILLW+J/fEcA+xEEKDeGtq7MBMZEivS4Hr6E
         /sv427EwM7KRx7YOKAWVOKpatV3A5cxBS77ubD0H5jiPQWp7fJWwjmwtA2IlSV/2SEXb
         mHuve3Y0Vp9+WrzkydbX35K1NEBzrDN8GcICCqavgLmVPucYDQ5qUzZlfq96HmrrHIu+
         fupr60uAveOSPbzlF7fGoY3gggQCDNXsU2+nG2dAMw9TiRzBwg9M5E+g2HwP9BiVlA5y
         XwIg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :thread-index:content-language;
        bh=Zz8FFykTcux2eAHU4qruCPE/CaDgws9rPlVFrgLBmDE=;
        b=NoVfx3V3Q6FaGEmZMe4gMp60NvQvghuZioNT6luXx4Qh6KAusa2YBbp5Rzi02Rryil
         vXv6QWhh1XWySIyzOxmW8Qy5zifoI3McJ8zb/tet5pN0tcPa9hyStlSt+W916SWeAopr
         w7sIOp+nebNVO1uvQTo6O3Y0dHYqbDCt9AEs1BhKua8zFWXnwHooMoWSPkv1R+AoB8XK
         0ntTdpL8ZXTfEss40jwhmR39dNygQA1zML0DiaPYczZHQgZQ2SwSdFLzPUg8IgJ/728b
         64qVaObmx87FrbXBrGmZ5hRHO9MI63SoCNBfpuJ87QsoOQjkEePI22cG6OTl1LT9ap6/
         F8XQ==
X-Gm-Message-State: AOAM530WPRSzyFddt5O40PRUj5xW8C3WF8mQbHVbC0NPzxLM/r68MC1N
	09UHM7AmptQ4IcN//VDIRfk=
X-Google-Smtp-Source: ABdhPJz4fEifub8TGwz6LNU29pnusFYU5k+lNNiZwnGQJkpMxAQLzpZDkwPWXtgHYaCpE0U2sOL2/A==
X-Received: by 2002:a5d:47af:: with SMTP id 15mr22541568wrb.205.1612258609190;
        Tue, 02 Feb 2021 01:36:49 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Dongli Zhang'" <dongli.zhang@oracle.com>,
	=?utf-8?Q?'J=C3=BCrgen_Gro=C3=9F'?= <jgross@suse.com>,
	=?utf-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>
Cc: "'Paul Durrant'" <pdurrant@amazon.com>,
	"'Konrad Rzeszutek Wilk'" <konrad.wilk@oracle.com>,
	"'Jens Axboe'" <axboe@kernel.dk>,
	<linux-block@vger.kernel.org>,
	<xen-devel@lists.xenproject.org>,
	<linux-kernel@vger.kernel.org>
References: <20210128130441.11744-1-paul@xen.org> <c3a476c5-c671-4429-73d5-0bf7ced1a06b@oracle.com> <7fb64e2f-141a-c848-0f8a-2313d2e821b6@suse.com> <02d901d6f616$b0004750$1000d5f0$@xen.org> <c10b539b-0f86-ac60-d289-4e3b7ded25fb@oracle.com>
In-Reply-To: <c10b539b-0f86-ac60-d289-4e3b7ded25fb@oracle.com>
Subject: RE: [PATCH v2] xen-blkback: fix compatibility bug with single page rings
Date: Tue, 2 Feb 2021 09:36:47 -0000
Message-ID: <035701d6f946$edb90180$c92b0480$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQESWKuFsxkk/iz7Hd1VT64wNwq0EAGAqAtPAflt50QCuTnTiQKN7zH0q4fRvEA=
Content-Language: en-gb

> -----Original Message-----
> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Dongli Zhang
> Sent: 30 January 2021 05:09
> To: paul@xen.org; 'Jürgen Groß' <jgross@suse.com>; xen-devel@lists.xenproject.org; linux-block@vger.kernel.org; linux-kernel@vger.kernel.org
> Cc: 'Paul Durrant' <pdurrant@amazon.com>; 'Konrad Rzeszutek Wilk' <konrad.wilk@oracle.com>; 'Roger Pau Monné' <roger.pau@citrix.com>; 'Jens Axboe' <axboe@kernel.dk>
> Subject: Re: [PATCH v2] xen-blkback: fix compatibility bug with single page rings
>
>
>
> On 1/29/21 12:13 AM, Paul Durrant wrote:
> >> -----Original Message-----
> >> From: Jürgen Groß <jgross@suse.com>
> >> Sent: 29 January 2021 07:35
> >> To: Dongli Zhang <dongli.zhang@oracle.com>; Paul Durrant <paul@xen.org>; xen-devel@lists.xenproject.org; linux-block@vger.kernel.org; linux-kernel@vger.kernel.org
> >> Cc: Paul Durrant <pdurrant@amazon.com>; Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>; Roger Pau Monné <roger.pau@citrix.com>; Jens Axboe <axboe@kernel.dk>
> >> Subject: Re: [PATCH v2] xen-blkback: fix compatibility bug with single page rings
> >>
> >> On 29.01.21 07:20, Dongli Zhang wrote:
> >>>
> >>>
> >>> On 1/28/21 5:04 AM, Paul Durrant wrote:
> >>>> From: Paul Durrant <pdurrant@amazon.com>
> >>>>
> >>>> Prior to commit 4a8c31a1c6f5 ("xen/blkback: rework connect_ring() to avoid
> >>>> inconsistent xenstore 'ring-page-order' set by malicious blkfront"), the
> >>>> behaviour of xen-blkback when connecting to a frontend was:
> >>>>
> >>>> - read 'ring-page-order'
> >>>> - if not present then expect a single page ring specified by 'ring-ref'
> >>>> - else expect a ring specified by 'ring-refX' where X is between 0 and
> >>>>    1 << ring-page-order
> >>>>
> >>>> This was correct behaviour, but was broken by the aforementioned commit to
> >>>> become:
> >>>>
> >>>> - read 'ring-page-order'
> >>>> - if not present then expect a single page ring (i.e. ring-page-order = 0)
> >>>> - expect a ring specified by 'ring-refX' where X is between 0 and
> >>>>    1 << ring-page-order
> >>>> - if that didn't work then see if there's a single page ring specified by
> >>>>    'ring-ref'
> >>>>
> >>>> This incorrect behaviour works most of the time but fails when a frontend
> >>>> that sets 'ring-page-order' is unloaded and replaced by one that does not
> >>>> because, instead of reading 'ring-ref', xen-blkback will read the stale
> >>>> 'ring-ref0' left around by the previous frontend and will try to map the
> >>>> wrong grant reference.
> >>>>
> >>>> This patch restores the original behaviour.
> >>>>
> >>>> Fixes: 4a8c31a1c6f5 ("xen/blkback: rework connect_ring() to avoid inconsistent xenstore 'ring-page-order' set by malicious blkfront")
> >>>> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> >>>> ---
> >>>> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> >>>> Cc: "Roger Pau Monné" <roger.pau@citrix.com>
> >>>> Cc: Jens Axboe <axboe@kernel.dk>
> >>>> Cc: Dongli Zhang <dongli.zhang@oracle.com>
> >>>>
> >>>> v2:
> >>>>   - Remove now-spurious error path special-case when nr_grefs == 1
> >>>> ---
> >>>>   drivers/block/xen-blkback/common.h |  1 +
> >>>>   drivers/block/xen-blkback/xenbus.c | 38 +++++++++++++-----------------
> >>>>   2 files changed, 17 insertions(+), 22 deletions(-)
> >>>>
> >>>> diff --git a/drivers/block/xen-blkback/common.h b/drivers/block/xen-blkback/common.h
> >>>> index b0c71d3a81a0..524a79f10de6 100644
> >>>> --- a/drivers/block/xen-blkback/common.h
> >>>> +++ b/drivers/block/xen-blkback/common.h
> >>>> @@ -313,6 +313,7 @@ struct xen_blkif {
> >>>>
> >>>>   	struct work_struct	free_work;
> >>>>   	unsigned int 		nr_ring_pages;
> >>>> +	bool                    multi_ref;
> >>>
> >>> Is it really necessary to introduce 'multi_ref' here, or may we just re-use
> >>> 'nr_ring_pages'?
> >>>
> >>> According to blkfront code, 'ring-page-order' is set only when it is not zero,
> >>> that is, only when (info->nr_ring_pages > 1).
> >>
> >
> > That's how it is *supposed* to be. Windows certainly behaves that way too.
> >
> >> Did you look into all other OS's (Windows, OpenBSD, FreeBSD, NetBSD,
> >> Solaris, Netware, other proprietary systems) implementations to verify
> >> that claim?
> >>
> >> I don't think so. So better safe than sorry.
> >>
> >
> > Indeed. It was unfortunate that the commit to blkif.h documenting multi-page
> > (829f2a9c6dfae) was not crystal clear and (possibly as a consequence) blkback
> > was implemented to read ring-ref0 rather than ring-ref if ring-page-order was
> > present and 0. Hence the only safe thing to do is to restore that behaviour.
> >
>
> Thank you very much for the explanation!
>
> Reviewed-by: Dongli Zhang <dongli.zhang@oracle.com>
>

Thanks.

Roger, Konrad, can I get a maintainer ack or otherwise, please?

  Paul
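
The negotiation restored by the patch above can be modelled outside the kernel. The sketch below is a simplified Python model of connect_ring() (xenstore mocked as a dict; not the actual C code) showing why 'ring-ref' must be read whenever 'ring-page-order' is absent, rather than falling back to it only after 'ring-ref0' fails:

```python
# Simplified model of xen-blkback's restored ring negotiation.
# `xenstore` stands in for the frontend's xenstore directory.
def connect_ring(xenstore: dict):
    """Return the list of grant references the backend should map."""
    if "ring-page-order" not in xenstore:
        # Legacy frontend: single page ring, named 'ring-ref'.
        return [xenstore["ring-ref"]]
    # Multi-page aware frontend: always use 'ring-refX',
    # even when the order is 0 (a single page).
    nr_pages = 1 << xenstore["ring-page-order"]
    return [xenstore["ring-ref%u" % i] for i in range(nr_pages)]

# A legacy frontend replacing a multi-page one: the stale
# 'ring-ref0' must be ignored in favour of 'ring-ref'.
stale = {"ring-ref0": 8, "ring-ref": 9}
print(connect_ring(stale))  # [9]
```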




From xen-devel-bounces@lists.xenproject.org Tue Feb 02 09:48:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 09:48:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80432.147169 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6sI5-0000TN-Ei; Tue, 02 Feb 2021 09:48:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80432.147169; Tue, 02 Feb 2021 09:48:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6sI5-0000TG-Bo; Tue, 02 Feb 2021 09:48:09 +0000
Received: by outflank-mailman (input) for mailman id 80432;
 Tue, 02 Feb 2021 09:48:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6sI4-0000T8-F8; Tue, 02 Feb 2021 09:48:08 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6sI4-00029P-9b; Tue, 02 Feb 2021 09:48:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6sI4-0006JF-21; Tue, 02 Feb 2021 09:48:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l6sI4-0007J6-1a; Tue, 02 Feb 2021 09:48:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=WZ9eTNhjS/6Z1S9jMFRtTqAp8PMaqSvQoXXQzD2Ngts=; b=FJieqnkQ0iLGSpQkWojcwpL+O/
	yWe5gHNsRRarD6MWvFpnGSJQIqWPqTDEDl4iAMbdL0yDyGPlBMWbdT42S0SOb2J+sQaeiowwr15nF
	RujRn7JWdyYwt8a1tclbqPXH9rmOD6YzQjEVvkvI6Y9i76ggybHPuue6aElPRyfzbTh8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158944-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 158944: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=ffbb8aa282de262403275f2395d8540818cf576e
X-Osstest-Versions-That:
    xen=9dc687f155a57216b83b17f9cde55dd43e06b0cd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 02 Feb 2021 09:48:08 +0000

flight 158944 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158944/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 158804
 build-arm64-xsm               6 xen-build                fail REGR. vs. 158804
 build-armhf                   6 xen-build                fail REGR. vs. 158804

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  ffbb8aa282de262403275f2395d8540818cf576e
baseline version:
 xen                  9dc687f155a57216b83b17f9cde55dd43e06b0cd

Last test of basis   158804  2021-01-30 04:00:24 Z    3 days
Failing since        158892  2021-02-01 16:00:25 Z    0 days   10 attempts
Testing same since   158897  2021-02-01 19:02:25 Z    0 days    9 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <iwj@xenproject.org>
  Manuel Bouyer <bouyer@netbsd.org>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit ffbb8aa282de262403275f2395d8540818cf576e
Author: Roger Pau Monne <roger.pau@citrix.com>
Date:   Mon Feb 1 16:53:17 2021 +0100

    xenstore: fix build on {Net/Free}BSD
    
    The endian.h header is in sys/ on NetBSD and FreeBSD.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit 419cd07895891c6642f29085aee07be72413f08c
Author: Ian Jackson <iwj@xenproject.org>
Date:   Mon Feb 1 15:18:36 2021 +0000

    xenpmd.c: Remove hard tab
    
    bbed98e7cedc "xenpmd.c: use dynamic allocation" had a hard tab.
    I thought we had fixed that and I thought I had checked.
    Remove it now.
    
    Signed-off-by: Ian Jackson <iwj@xenproject.org>

commit bbed98e7cedcd5072671c21605330075740382d3
Author: Manuel Bouyer <bouyer@netbsd.org>
Date:   Sat Jan 30 19:27:10 2021 +0100

    xenpmd.c: use dynamic allocation
    
    On NetBSD, d_name is larger than 256, so file_name[284] may not be large
    enough (and gcc emits a format-truncation error).
    Use asprintf() instead of snprintf() on a static on-stack buffer.
    
    Signed-off-by: Manuel Bouyer <bouyer@netbsd.org>
    Reviewed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>
    
    Plus
    
    define _GNU_SOURCE for asprintf()
    
    Harmless on NetBSD.
    
    Signed-off-by: Manuel Bouyer <bouyer@netbsd.org>
    Reviewed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Feb 02 10:24:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 10:24:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80453.147191 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6sqp-0004Ut-Ir; Tue, 02 Feb 2021 10:24:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80453.147191; Tue, 02 Feb 2021 10:24:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6sqp-0004Um-EC; Tue, 02 Feb 2021 10:24:03 +0000
Received: by outflank-mailman (input) for mailman id 80453;
 Tue, 02 Feb 2021 10:24:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6sqn-0004TG-S2; Tue, 02 Feb 2021 10:24:01 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6sqn-0002nG-LC; Tue, 02 Feb 2021 10:24:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6sqn-0007oK-7o; Tue, 02 Feb 2021 10:24:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l6sqn-0002DI-7O; Tue, 02 Feb 2021 10:24:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=TqQT9LGR/csinFBNIMY1qFwiiO4PnRKTBn/XxXFSA2c=; b=1/MWvb0WaBuCrT3INV5APHzHYa
	ew5U7Kk0OQDl+VD/cPt7svzWrE+CNCzaLs/8l+sexIZVRy4Q1rzz8dvyBPdDBncqf/RfmPFYU+4BU
	erxgxvYFR8903IxWkypECCfe7sVYz/tYThQ3rNNt8LNH8juZ9u7VHhH2iJdcSFsZwI5w=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158915-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 158915: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-xl:<job status>:broken:regression
    linux-linus:test-arm64-arm64-xl-seattle:<job status>:broken:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:guest-start:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:guest-start:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-armhf-armhf-libvirt:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:host-ping-check-xen:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl:guest-start:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
    linux-linus:test-arm64-arm64-xl-seattle:host-install(5):broken:nonblocking
    linux-linus:test-arm64-arm64-xl:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=88bb507a74ea7d75fa49edd421eaa710a7d80598
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 02 Feb 2021 10:24:01 +0000

flight 158915 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158915/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl             <job status>                 broken
 test-arm64-arm64-xl-seattle     <job status>                 broken
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 14 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1  14 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-arndale  14 guest-start              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck 14 guest-start            fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  10 host-ping-check-xen      fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl          14 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw 13 guest-start              fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     14 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-seattle   5 host-install(5)       broken blocked in 152332
 test-arm64-arm64-xl           5 host-install(5)       broken blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                88bb507a74ea7d75fa49edd421eaa710a7d80598
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  185 days
Failing since        152366  2020-08-01 20:49:34 Z  184 days  330 attempts
Testing same since   158915  2021-02-01 23:39:22 Z    0 days    1 attempts

------------------------------------------------------------
4508 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          broken  
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  broken  
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-arm64-arm64-xl broken
broken-job test-arm64-arm64-xl-seattle broken
broken-step test-arm64-arm64-xl-seattle host-install(5)
broken-step test-arm64-arm64-xl host-install(5)

Not pushing.

(No revision log; it would be 1021361 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Feb 02 10:31:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 10:31:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80460.147209 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6sxz-0005Yy-CK; Tue, 02 Feb 2021 10:31:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80460.147209; Tue, 02 Feb 2021 10:31:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6sxz-0005Yr-8w; Tue, 02 Feb 2021 10:31:27 +0000
Received: by outflank-mailman (input) for mailman id 80460;
 Tue, 02 Feb 2021 10:31:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6sxy-0005Yj-EZ; Tue, 02 Feb 2021 10:31:26 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6sxy-0002vO-96; Tue, 02 Feb 2021 10:31:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6sxx-0008Ia-VN; Tue, 02 Feb 2021 10:31:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l6sxx-0007rI-Uq; Tue, 02 Feb 2021 10:31:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=evqPNll06rVnZGbBZShBqQX5nWAd0rT3zZyuikjSxhg=; b=RQjhq/M9Yl2/haHLG6ZGvB2s/R
	KxmTQ4X/wQb0ghhh3KvIXs9tcHeTsKINcPXyVcK2GUUGwFplKOQXHAHJrRhO1yRC3t15myszD8iKP
	Wh/dQgtrcj0fxS00N+NLB8wMpAxOPHGUZ35jj9TUQnMY1Cfz6JrMJV030GiGaWG2h9hE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158946-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 158946: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=ffbb8aa282de262403275f2395d8540818cf576e
X-Osstest-Versions-That:
    xen=9dc687f155a57216b83b17f9cde55dd43e06b0cd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 02 Feb 2021 10:31:25 +0000

flight 158946 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158946/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 158804
 build-arm64-xsm               6 xen-build                fail REGR. vs. 158804
 build-armhf                   6 xen-build                fail REGR. vs. 158804

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  ffbb8aa282de262403275f2395d8540818cf576e
baseline version:
 xen                  9dc687f155a57216b83b17f9cde55dd43e06b0cd

Last test of basis   158804  2021-01-30 04:00:24 Z    3 days
Failing since        158892  2021-02-01 16:00:25 Z    0 days   11 attempts
Testing same since   158897  2021-02-01 19:02:25 Z    0 days   10 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <iwj@xenproject.org>
  Manuel Bouyer <bouyer@netbsd.org>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit ffbb8aa282de262403275f2395d8540818cf576e
Author: Roger Pau Monne <roger.pau@citrix.com>
Date:   Mon Feb 1 16:53:17 2021 +0100

    xenstore: fix build on {Net/Free}BSD
    
    The endian.h header is in sys/ on NetBSD and FreeBSD.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit 419cd07895891c6642f29085aee07be72413f08c
Author: Ian Jackson <iwj@xenproject.org>
Date:   Mon Feb 1 15:18:36 2021 +0000

    xenpmd.c: Remove hard tab
    
    bbed98e7cedc "xenpmd.c: use dynamic allocation" had a hard tab.
    I thought we had fixed that and I thought I had checked.
    Remove it now.
    
    Signed-off-by: Ian Jackson <iwj@xenproject.org>

commit bbed98e7cedcd5072671c21605330075740382d3
Author: Manuel Bouyer <bouyer@netbsd.org>
Date:   Sat Jan 30 19:27:10 2021 +0100

    xenpmd.c: use dynamic allocation
    
    On NetBSD, d_name is larger than 256, so file_name[284] may not be large
    enough (and gcc emits a format-truncation error).
    Use asprintf() instead of snprintf() on a static on-stack buffer.
    
    Signed-off-by: Manuel Bouyer <bouyer@netbsd.org>
    Reviewed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>
    
    Plus
    
    define _GNU_SOURCE for asprintf()
    
    Harmless on NetBSD.
    
    Signed-off-by: Manuel Bouyer <bouyer@netbsd.org>
    Reviewed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Feb 02 10:41:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 10:41:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80446.147226 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6t84-0006gr-DY; Tue, 02 Feb 2021 10:41:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80446.147226; Tue, 02 Feb 2021 10:41:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6t84-0006gk-Ag; Tue, 02 Feb 2021 10:41:52 +0000
Received: by outflank-mailman (input) for mailman id 80446;
 Tue, 02 Feb 2021 10:18:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OJTJ=HE=linux.alibaba.com=jiapeng.chong@srs-us1.protection.inumbo.net>)
 id 1l6sl6-0003Y6-Oi
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 10:18:08 +0000
Received: from out30-54.freemail.mail.aliyun.com (unknown [115.124.30.54])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 26385483-e477-4f9f-8c2b-1f1de3014811;
 Tue, 02 Feb 2021 10:18:05 +0000 (UTC)
Received: from
 j63c13417.sqa.eu95.tbsite.net(mailfrom:jiapeng.chong@linux.alibaba.com
 fp:SMTPD_---0UNfdAcH_1612261072) by smtp.aliyun-inc.com(127.0.0.1);
 Tue, 02 Feb 2021 18:17:59 +0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 26385483-e477-4f9f-8c2b-1f1de3014811
X-Alimail-AntiSpam:AC=PASS;BC=-1|-1;BR=01201311R821e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=e01e01424;MF=jiapeng.chong@linux.alibaba.com;NM=1;PH=DS;RN=19;SR=0;TI=SMTPD_---0UNfdAcH_1612261072;
From: Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
To: boris.ostrovsky@oracle.com
Cc: jgross@suse.com,
	sstabellini@kernel.org,
	davem@davemloft.net,
	kuba@kernel.org,
	ast@kernel.org,
	daniel@iogearbox.net,
	hawk@kernel.org,
	john.fastabend@gmail.com,
	andrii@kernel.org,
	kafai@fb.com,
	songliubraving@fb.com,
	yhs@fb.com,
	kpsingh@kernel.org,
	xen-devel@lists.xenproject.org,
	netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	bpf@vger.kernel.org,
	Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
Subject: [PATCH] drivers: net: xen-netfront: Simplify the calculation of variables
Date: Tue,  2 Feb 2021 18:17:49 +0800
Message-Id: <1612261069-13315-1-git-send-email-jiapeng.chong@linux.alibaba.com>
X-Mailer: git-send-email 1.8.3.1

Fix the following coccicheck warnings:

./drivers/net/xen-netfront.c:1816:52-54: WARNING !A || A && B is
equivalent to !A || B.

Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
---
 drivers/net/xen-netfront.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index b01848e..5158841 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -1813,7 +1813,7 @@ static int setup_netfront(struct xenbus_device *dev,
 	 *  a) feature-split-event-channels == 0
 	 *  b) feature-split-event-channels == 1 but failed to setup
 	 */
-	if (!feature_split_evtchn || (feature_split_evtchn && err))
+	if (!feature_split_evtchn || err)
 		err = setup_netfront_single(queue);
 
 	if (err)
-- 
1.8.3.1



From xen-devel-bounces@lists.xenproject.org Tue Feb 02 11:21:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 11:21:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80495.147264 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6tkW-0002LI-Vt; Tue, 02 Feb 2021 11:21:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80495.147264; Tue, 02 Feb 2021 11:21:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6tkW-0002LB-Sz; Tue, 02 Feb 2021 11:21:36 +0000
Received: by outflank-mailman (input) for mailman id 80495;
 Tue, 02 Feb 2021 11:21:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l6tkV-0002L6-Hl
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 11:21:35 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l6tkV-0003jP-Ga
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 11:21:35 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l6tkV-000125-DG
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 11:21:35 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l6tkS-0001rb-54; Tue, 02 Feb 2021 11:21:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=a6OB6ReDxGsFWtrMM+RFjzdpQ9rx5NiYkJ75O5McMwU=; b=M5C8ywgg91p0R5jN2v8jalRDK/
	50FN9x0LJmfs/z/CLMnWg4NjrJEFKvYUfXUad/5JRh9/yKdlJyxQcDuP+VZXwr/SXOMSH9SK9hdrR
	f44m+dose21gl6h0lSCFbmmWvGU7XAP6k5Lx6resx5ZNBOvLh7Yc6BBUCLjXmJJerqnc=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24601.13755.876115.891026@mariner.uk.xensource.com>
Date: Tue, 2 Feb 2021 11:21:31 +0000
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
    Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Subject: [PATCH] xenstore: Fix all builds
In-Reply-To: <20210201233513.30923-1-andrew.cooper3@citrix.com>
References: <20210201233513.30923-1-andrew.cooper3@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Andrew Cooper writes ("[PATCH] xenstore: Fix all builds"):
> This diff is easier viewed through `cat -A`
...
> A non-breaking space isn't a valid C preprocessor token.

Yikes.

Thanks!

Ian.


From xen-devel-bounces@lists.xenproject.org Tue Feb 02 11:40:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 11:40:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80504.147283 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6u2g-0004L2-Lx; Tue, 02 Feb 2021 11:40:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80504.147283; Tue, 02 Feb 2021 11:40:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6u2g-0004Kv-IG; Tue, 02 Feb 2021 11:40:22 +0000
Received: by outflank-mailman (input) for mailman id 80504;
 Tue, 02 Feb 2021 11:40:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=33Vw=HE=citrix.com=igor.druzhinin@srs-us1.protection.inumbo.net>)
 id 1l6u2g-0004Kq-3I
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 11:40:22 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5cf433e3-6bba-4bb0-a376-66de3294a7a0;
 Tue, 02 Feb 2021 11:40:20 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5cf433e3-6bba-4bb0-a376-66de3294a7a0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612266020;
  h=subject:to:references:from:message-id:date:mime-version:
   in-reply-to:content-transfer-encoding;
  bh=+kTizDiF4wV7/9RFfLTm8OjoqBaK7bkQnA7o6h0Mck8=;
  b=Ju4NjJZbZkqM1yDCu3SpDgXBPm4XFPmpPVBhfoQeR7W+LZ1YuAnGXFGE
   u6i7OQBitZ2YgXbMXZu7elNLKdic8lR+puYtSA3+PW+/l7A459uVsAIvK
   Ie0dg+kU4HQ3GTe7zt+db1MGQ91Oy//Ox15fdyrbTcyCYIwiQGtZfzdKc
   Q=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: M4zBCpaNZ3+L6gJcZmnpqcy1J8OIfxaO0ESYP0qQ47leepw/plLxTjbAOj3A8Lzgy4InxOetui
 Z6BQAPTU2Ry/tDfbjsHb99HSVGYPWZHrE8iRKyPXRlWVizgphi1xWryS7c9WNpicxl0MTx0QnH
 b3HSSV8DwLNhY8reeerbJdPsOIop3QYK0JLluTDF89WIEilfW4H6rZvKbPKAQlQ3343T3H5Nm7
 G4BiqfvN+hJr7cIDfMprjZUYbTVPrP7KJAgs+k0t5xq4Mgp3fy69sTM8eX/6b7pIxdeByKyiQi
 1Xg=
X-SBRS: 5.1
X-MesageID: 36556528
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,394,1602561600"; 
   d="scan'208";a="36556528"
Subject: Re: staging: unable to restore HVM with Viridian param set
To: <paul@xen.org>, 'Andrew Cooper' <andrew.cooper3@citrix.com>, "'Tamas K
 Lengyel'" <tamas.k.lengyel@gmail.com>, 'Xen-devel'
	<xen-devel@lists.xenproject.org>, 'Wei Liu' <wl@xen.org>, 'Ian Jackson'
	<iwj@xenproject.org>, 'Anthony PERARD' <anthony.perard@citrix.com>
References: <CABfawhkxEKOha7RYpSvTaJEtxyBsio9Pcc=xtRD7DszHm2k2pw@mail.gmail.com>
 <12e17af4-3502-0047-36e2-3c1262602747@citrix.com>
 <7ea14fac-7832-fe68-529e-03a8f9812f88@citrix.com>
 <035301d6f93e$4d03c6b0$e70b5410$@xen.org>
From: Igor Druzhinin <igor.druzhinin@citrix.com>
Message-ID: <e8e7d041-3196-9387-df84-16176459d0ff@citrix.com>
Date: Tue, 2 Feb 2021 11:40:11 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <035301d6f93e$4d03c6b0$e70b5410$@xen.org>
Content-Type: text/plain; charset="utf-8"
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 02/02/2021 08:35, Paul Durrant wrote:
>> -----Original Message-----
>> From: Igor Druzhinin <igor.druzhinin@citrix.com>
>> Sent: 02 February 2021 00:20
>> To: Andrew Cooper <andrew.cooper3@citrix.com>; Tamas K Lengyel <tamas.k.lengyel@gmail.com>; Xen-devel
>> <xen-devel@lists.xenproject.org>; Wei Liu <wl@xen.org>; Ian Jackson <iwj@xenproject.org>; Anthony
>> PERARD <anthony.perard@citrix.com>; Paul Durrant <paul@xen.org>
>> Subject: Re: staging: unable to restore HVM with Viridian param set
>>
>> On 01/02/2021 22:57, Andrew Cooper wrote:
>>> On 01/02/2021 22:51, Tamas K Lengyel wrote:
>>>> Hi all,
>>>> trying to restore a Windows VM saved on Xen 4.14 with Xen staging results in:
>>>>
>>>> # xl restore -p /shared/cfg/windows10.save
>>>> Loading new save file /shared/cfg/windows10.save (new xl fmt info 0x3/0x0/1475)
>>>>  Savefile contains xl domain config in JSON format
>>>> Parsing config from <saved>
>>>> xc: info: Found x86 HVM domain from Xen 4.14
>>>> xc: info: Restoring domain
>>>> xc: error: set HVM param 9 = 0x0000000000000065 (17 = File exists):
>>>> Internal error
>>>> xc: error: Restore failed (17 = File exists): Internal error
>>>> libxl: error: libxl_stream_read.c:850:libxl__xc_domain_restore_done:
>>>> restoring domain: File exists
>>>> libxl: error: libxl_create.c:1581:domcreate_rebuild_done: Domain
>>>> 8:cannot (re-)build domain: -3
>>>> libxl: error: libxl_domain.c:1182:libxl__destroy_domid: Domain
>>>> 8:Non-existant domain
>>>> libxl: error: libxl_domain.c:1136:domain_destroy_callback: Domain
>>>> 8:Unable to destroy guest
>>>> libxl: error: libxl_domain.c:1063:domain_destroy_cb: Domain
>>>> 8:Destruction of domain failed
>>>>
>>>> Running on staging 419cd07895891c6642f29085aee07be72413f08c
>>>
>>> CC'ing maintainers and those who've edited the code recently.
>>>
>>> What is happening is xl/libxl is selecting some viridian settings,
>>> applying them to the domain, and then the migrations stream has a
>>> different set of viridian settings.
>>>
>>> For a migrating-in VM, nothing should be set during domain build.
>>> Viridian state has been part of the migrate stream since before mig-v2,
>>> so can be considered to be everywhere relevant now.
>>
>> The fallout is likely from my changes that modified default set of viridian
>> settings. The relevant commits:
>> 983524671031fcfdb24a6c0da988203ebb47aebe
>> 7e5cffcd1e9300cab46a1816b5eb676caaeed2c1
>>
>> The same config from migrated domains now implies a different set of viridian
>> extensions than those set on the source side. That creates an inconsistency in
>> libxl. I don't really know how to address it properly in libxl other than
>> by never extending the default set.
>>
> 
> Surely it should be addressed properly in libxl by not messing with the viridian settings on migrate-in/resume, as Andrew says? TBH I thought this was already the case. There should be no problem with adding to the default set as this is just an xl/libxl concept; the param flags in Xen always define the *exact* set of enabled enlightenments.

If maintainers agree with this approach I can make a patch.

Igor


From xen-devel-bounces@lists.xenproject.org Tue Feb 02 11:52:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 11:52:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80511.147295 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6uDv-0005Pm-OB; Tue, 02 Feb 2021 11:51:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80511.147295; Tue, 02 Feb 2021 11:51:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6uDv-0005Pf-Ky; Tue, 02 Feb 2021 11:51:59 +0000
Received: by outflank-mailman (input) for mailman id 80511;
 Tue, 02 Feb 2021 11:51:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l6uDu-0005Pa-7A
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 11:51:58 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l6uDu-0004Di-5P
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 11:51:58 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l6uDu-0002wk-44
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 11:51:58 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l6uDm-0001w8-Ff; Tue, 02 Feb 2021 11:51:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=e9Z+A/k6izIwfzJ4SG7hVctYyCtPqRTDMc8GHPZXz2Q=; b=T1RnQYgANfYc9Rnnb7kpmMoJzw
	Y/yWsE3JcFOEaLP1MZQE/Y0fNdDpXkWwD8L1KtTI6g2p4JXrTTNX8AfjRkast/nto4Yj3yv10T+es
	58nELHGZ5U3wIYO1N9z1cU18P3aQ/AR0aA2TYYA2t8L0ggpqGrUJpabQ2dAzV/mSCPp8=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24601.15574.213980.576056@mariner.uk.xensource.com>
Date: Tue, 2 Feb 2021 11:51:50 +0000
To: Igor Druzhinin <igor.druzhinin@citrix.com>
Cc: <paul@xen.org>,
    'Andrew Cooper' <andrew.cooper3@citrix.com>,
    "'Tamas K  Lengyel'" <tamas.k.lengyel@gmail.com>,
    'Xen-devel' <xen-devel@lists.xenproject.org>,
    'Wei Liu' <wl@xen.org>,
    'Anthony PERARD' <anthony.perard@citrix.com>
Subject: Re: staging: unable to restore HVM with Viridian param set
In-Reply-To: <e8e7d041-3196-9387-df84-16176459d0ff@citrix.com>
References: <CABfawhkxEKOha7RYpSvTaJEtxyBsio9Pcc=xtRD7DszHm2k2pw@mail.gmail.com>
	<12e17af4-3502-0047-36e2-3c1262602747@citrix.com>
	<7ea14fac-7832-fe68-529e-03a8f9812f88@citrix.com>
	<035301d6f93e$4d03c6b0$e70b5410$@xen.org>
	<e8e7d041-3196-9387-df84-16176459d0ff@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Igor Druzhinin writes ("Re: staging: unable to restore HVM with Viridian param set"):
> On 02/02/2021 08:35, Paul Durrant wrote:
> > Surely it should be addressed properly in libxl by not messing with the viridian settings on migrate-in/resume, as Andrew says? TBH I thought this was already the case. There should be no problem with adding to the default set as this is just an xl/libxl concept; the param flags in Xen always define the *exact* set of enabled enlightenments.
> 
> If maintainers agree with this approach I can make a patch.

If Andy is in favour of this approach then certainly it is fine by me.

FTR, preemptively,

Release-Acked-by: Ian Jackson <iwj@xenproject.org>

although as this is a bugfix it probably doesn't need one.

Ian.


From xen-devel-bounces@lists.xenproject.org Tue Feb 02 11:55:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 11:55:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80513.147307 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6uHi-0005Zj-8X; Tue, 02 Feb 2021 11:55:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80513.147307; Tue, 02 Feb 2021 11:55:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6uHi-0005Zc-5W; Tue, 02 Feb 2021 11:55:54 +0000
Received: by outflank-mailman (input) for mailman id 80513;
 Tue, 02 Feb 2021 11:55:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hKoN=HE=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l6uHg-0005ZX-EO
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 11:55:52 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 20413ec0-c83d-4546-9aab-8e6f40e3c6c0;
 Tue, 02 Feb 2021 11:55:51 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 20413ec0-c83d-4546-9aab-8e6f40e3c6c0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612266951;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=eYnZcQ6pmNweY6iauGn946d+ZuiIupbjWLAvjA6Ju5U=;
  b=QBNu8X7b1zqDNFDcsW2wpQ66/wvnMbkL+2KI13DLp7n/Ov5+0TrImAZz
   X4mkZ4F/GY3hiqL7H90F5crbv+OerS3dMDBeyIwsk78MVhO9zWKXrFdqf
   vGspED2LkIbmdHKh3d1pwYyEytB9mJ1Q2KQLM9jqj3qUpNNZ2txTzyvkc
   U=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: dqJePxJfYUpHBHVG7lu5SAgIat4XdakfuX5F0hk7W6R/zpTOrNa7SUp4Ub8OXG1FBKLCdx9LEg
 10OIzzZcZvJ/bBaLD+pYsYlPrxZk3OvqwBNyHs3J0OU5V3eiNuNzgXlKU/BotPv+YSvZuwYSG6
 oLKYjxDIaeRlcrZB+0t/0+SxrKHEeLVeR/fMYt9qNSaZlBUDaVfSSud274oFVjFvsCHpxOXCKC
 84Ib70W9savSh0DgYQQExKU6nvJYjBobtkVnk1obxmNOn7O9Jh7ghoQLLXt6hTDhet+C/NFPr0
 +3U=
X-SBRS: 5.2
X-MesageID: 36736465
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,394,1602561600"; 
   d="scan'208";a="36736465"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=hSAbLa4/+x0ErPZLXZDDR4J4h/Xyn+J2sE1yPiFY2e21cv7M0bAq3EKMRWSf7jWvcfFaAjrzWnvRE2zXuCnzDcHPIAtcbRaggdYkGYY5T4wC8NeV52Wk/k7mUINDwnkwUxJfBSUp6hmMJmulnQVpio9KQLDPvwOMk9l6KePitaDfsC8E1Wuv2hCGXe1thPOurOwsj0bkZ3tb/5z2FFB+G1WY5nnOAFOPmePel1CMW2LT2DP10cymH1NKqhIuTelvWv5oxrbpiXi/Ij7KQMlGQwG6vYsOMGzDyTMDAkj1K+ocNzwTHG0NLq6Gb47L+QE1PgXoYcegp8N7GDGVCFxo9g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=eYnZcQ6pmNweY6iauGn946d+ZuiIupbjWLAvjA6Ju5U=;
 b=AgxCb5zH+ZOoqqg4ZJpDDg2yGv4SNpTdJ8T03M10Qpv07DjI+oeQp0S7K4HoaIlSjwNLYY+HO/iBazx5Q1ZILXlSw16cGpDv6qp85qFxFYmvv/i3gMRhnAxBQZZ+NvdGopKWZxnj/NusRRCHfUNUIngXRy23oxMLJEwGKvFYY8MwEjSRkh5iUKnjIU9p5vKab7JBPHODtc6LIto2DLMGOlhoJbc7KxL6T3DdKS4r6qlMXnWztxWw/h4JE/HfmckJ3vcQ02pGGnhgCk5HwUhiTKH7ofo6e7Rot7ld+UKvWPtun+j2omUCmaIcyltrssf7otWpkIJOT5TbhDtfgpvYDA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=eYnZcQ6pmNweY6iauGn946d+ZuiIupbjWLAvjA6Ju5U=;
 b=c1bqGN/nMgi5RB4ObZ3geVgaXzQw3L83vzA52OIxwQN328kH7Xh/imNNEuvCAXCIKsD8sj6BAoaJ7Ju8QUKyz5u+gmdHTErUHDDcHPpQZL1BjaSSyrMRDOiR6nRrnP89UYa/6QAPNfoSAHTPKHXlgCiBsHZ/R3RSLLNF7YZD5T0=
Subject: Re: staging: unable to restore HVM with Viridian param set
To: Ian Jackson <iwj@xenproject.org>, Igor Druzhinin
	<igor.druzhinin@citrix.com>
CC: <paul@xen.org>, 'Tamas K Lengyel' <tamas.k.lengyel@gmail.com>, 'Xen-devel'
	<xen-devel@lists.xenproject.org>, 'Wei Liu' <wl@xen.org>, 'Anthony PERARD'
	<anthony.perard@citrix.com>
References: <CABfawhkxEKOha7RYpSvTaJEtxyBsio9Pcc=xtRD7DszHm2k2pw@mail.gmail.com>
 <12e17af4-3502-0047-36e2-3c1262602747@citrix.com>
 <7ea14fac-7832-fe68-529e-03a8f9812f88@citrix.com>
 <035301d6f93e$4d03c6b0$e70b5410$@xen.org>
 <e8e7d041-3196-9387-df84-16176459d0ff@citrix.com>
 <24601.15574.213980.576056@mariner.uk.xensource.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <c3383689-0a6e-add6-6275-236e89d2775e@citrix.com>
Date: Tue, 2 Feb 2021 11:55:37 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
In-Reply-To: <24601.15574.213980.576056@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0228.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:b::24) To BYAPR03MB4728.namprd03.prod.outlook.com
 (2603:10b6:a03:13a::24)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 446bdbfe-c528-4f1d-18fd-08d8c7717873
X-MS-TrafficTypeDiagnostic: BYAPR03MB4055:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB4055CAE71DAA87FFB8A9CCB4BAB59@BYAPR03MB4055.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 446bdbfe-c528-4f1d-18fd-08d8c7717873
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB4728.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Feb 2021 11:55:44.1384
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: mIoXRRA6RdoG7vNwl5hZD1MuYVKAcOXEp+MUlRt2sRPYUOQFOOZS6wwk2SbcG8KYuRlOrHq/76JNWpbyb2vTQ7RmYT0/iIgX9JDSKrilLBM=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4055
X-OriginatorOrg: citrix.com

On 02/02/2021 11:51, Ian Jackson wrote:
> Igor Druzhinin writes ("Re: staging: unable to restore HVM with Viridian param set"):
>> On 02/02/2021 08:35, Paul Durrant wrote:
>>> Surely it should be addressed properly in libxl by not messing with the viridian settings on migrate-in/resume, as Andrew says? TBH I thought this was already the case. There should be no problem with adding to the default set as this is just an xl/libxl concept; the param flags in Xen always define the *exact* set of enabled enlightenments.
>> If maintainers agree with this approach I can make a patch.
> If Andy is in favour of this approach then certainly it is fine by me.

Yeah - in the case that we're restoring from a suspend image or migrate,
just skip setting the viridian flags in build.  They are in the
stream, and will be restored later.

~Andrew

P.S. This is going to get even more complicated when it all shifts into
CPUID settings, but it needs to happen at some point.


From xen-devel-bounces@lists.xenproject.org Tue Feb 02 12:09:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 12:09:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80525.147319 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6uV6-0006uv-QS; Tue, 02 Feb 2021 12:09:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80525.147319; Tue, 02 Feb 2021 12:09:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6uV6-0006uo-N1; Tue, 02 Feb 2021 12:09:44 +0000
Received: by outflank-mailman (input) for mailman id 80525;
 Tue, 02 Feb 2021 12:09:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jjjr=HE=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1l6uV5-0006uj-JX
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 12:09:43 +0000
Received: from mail-lj1-x231.google.com (unknown [2a00:1450:4864:20::231])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 46e7f4ef-2ac6-4f2b-86ee-ef4a8a69c451;
 Tue, 02 Feb 2021 12:09:42 +0000 (UTC)
Received: by mail-lj1-x231.google.com with SMTP id a25so23681782ljn.0
 for <xen-devel@lists.xenproject.org>; Tue, 02 Feb 2021 04:09:42 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id d28sm3273106lfn.15.2021.02.02.04.09.40
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 02 Feb 2021 04:09:40 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 46e7f4ef-2ac6-4f2b-86ee-ef4a8a69c451
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=cklQJu8+KYVHEIxLg9m2wh/uxfhxTC5GXImJ4sfcMrY=;
        b=mFoyQkcX+lb3A/7GziV07+mOAfWQlbGDWi7Uy8dZ6p0R3ETVY8uHKLKPRxcvfHTUqM
         J8DBZt4UCZW4ESP36WlspXX5v3Qn9nK87F17bnzkGzmotkku/LLmFGH+tYh2vCUsBFvN
         bPNKtzzavKhPEhJI7thTde2ylmBjtnvnzE/9klZ/usZ6DpwMpaG2fnj3+HhJKKw5IBgi
         OA8bCrjbVy80Y4jdFrYDnYEgImsmPnFTuRNL5McSOBL5tg06qJJ+qXYGG8htI5qs1P0R
         neDo0AuFAyD1o/M6oXIhKso0drcKyBd2cr8bPCBNkSTZDRoaVbdQfew7u5nMdfKEbzWH
         ZqEg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=cklQJu8+KYVHEIxLg9m2wh/uxfhxTC5GXImJ4sfcMrY=;
        b=TCPQrngb00V9D0LDY/ORm9Mjq/nhZJ/ADNEIp+g7BQpQoL8yYxCUX4xTnLMb9Wy5fM
         LDQ3U4nyhV6nsTMs3kk0ffy7qFJhcnmqX/09E7Vb+2vYfHFMpuZ1+T05JAoz+TFw2p5N
         gFzFjdf8IYAyntXKCaSTLmNnMzi8YNp1Xkj9I3nvW2+5dNmcTuNZacoufMuM2wG9v/WM
         0st2KfBn6CxQmM6H42Vrw4uIkX7ItwMSe2tlhztG2QyA+a2bxKo4dLFY6oAq9NfgouY0
         mVIQmCIhN9sZTq60xVscqzlDUfnac+s/TgBf6QzEsZUSezxEyf127KvDcpV8rWz0y6VL
         AwZA==
X-Gm-Message-State: AOAM5312ebPJr92UkdYkJ4sTCALjX7C9xPz/Bh1SSKUfWO3dJF5i5VAa
	fRbn/za6myEXAZv4EWR/j4M=
X-Google-Smtp-Source: ABdhPJwtplTISQ/VughfoBDpUhlOEQ/8PjQR5p7Yn+2tVifUH2Xr9P0CTs34CRH2P1rorjr8VMuglA==
X-Received: by 2002:a05:651c:1029:: with SMTP id w9mr13366979ljm.142.1612267781378;
        Tue, 02 Feb 2021 04:09:41 -0800 (PST)
Subject: Re: [PATCH v8 00/16] acquire_resource size and external IPT
 monitoring
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Jan Beulich <JBeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 Ian Jackson <iwj@xenproject.org>, Anthony PERARD
 <anthony.perard@citrix.com>, Jun Nakajima <jun.nakajima@intel.com>,
 Kevin Tian <kevin.tian@intel.com>,
 =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 Tamas K Lengyel <tamas@tklengyel.com>
References: <20210130025852.12430-1-andrew.cooper3@citrix.com>
 <911270bf-0077-b70e-c224-712dfa535afa@gmail.com>
 <fceef592-e637-e985-8217-11546e088027@citrix.com>
 <d5cc17a4-267c-3022-11e5-eb043de121a9@gmail.com>
 <7384c55e-f996-be08-f8ee-b6d09c9e2eef@citrix.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <7350812f-a222-8f82-c2dc-06c939e64249@gmail.com>
Date: Tue, 2 Feb 2021 14:09:39 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <7384c55e-f996-be08-f8ee-b6d09c9e2eef@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 01.02.21 16:00, Andrew Cooper wrote:

Hi Andrew

> On 01/02/2021 13:47, Oleksandr wrote:
>> On 01.02.21 15:07, Andrew Cooper wrote:
>>
>> Hi Andrew
>>
>>> On 01/02/2021 12:34, Oleksandr wrote:
>>>> On 30.01.21 04:58, Andrew Cooper wrote:
>>> One query I did leave on IRC, and hasn't had an answer.
>>>
>>> What is the maximum number of vcpus in an ARM guest?
>> public/arch-arm.h says that the current maximum number of supported
>> guest VCPUs is 128.
>>
>>
>>> You moved an
>>> x86-ism "max 128 vcpus" into common code.
>> Ooh, I am not sure I understand where exactly. Could you please
>> clarify in which patch?
> ioreq_server_get_frame() hardcodes "there is exactly one non-bufioreq
> frame", which in practice means there are 128 vcpus' worth of struct
> ioreqs contained within the mapping.
>
> I've coded ioreq_server_max_frames() to perform the calculation
> correctly, but ioreq_server_get_frame() will need fixing by whoever
> supports more than 128 vcpus with ioreq servers first.

Thank you for the explanation. Now it is clear what you meant.

-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Tue Feb 02 12:16:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 12:16:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80527.147331 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6ubx-0007mD-IT; Tue, 02 Feb 2021 12:16:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80527.147331; Tue, 02 Feb 2021 12:16:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6ubx-0007m6-FR; Tue, 02 Feb 2021 12:16:49 +0000
Received: by outflank-mailman (input) for mailman id 80527;
 Tue, 02 Feb 2021 12:16:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l6ubw-0007m1-Ep
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 12:16:48 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l6ubw-0004fB-BI
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 12:16:48 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l6ubw-0004jL-9e
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 12:16:48 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l6ubt-000200-35; Tue, 02 Feb 2021 12:16:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=viDI+kHkzZd9GUY5Vttquz2sCdCxMbggPYxeDQ/Ydp8=; b=zvd3i2hrJGMDtNY99H0YuxZpO8
	GV2RvTtciHOEuIbAjN015mCapsadp5THXcOoQQ24neULOL6vwd6IGyDt616T1qSCGFtqNhVr3U85a
	WEc53qWwbi5tQPl8YAxq8STcavOsbjAwTAnzuZxqBzEC5cTO4y0T3CdY24Hm1DGY+5wM=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-2
Content-Transfer-Encoding: 8bit
Message-ID: <24601.17068.666597.295268@mariner.uk.xensource.com>
Date: Tue, 2 Feb 2021 12:16:44 +0000
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
    =?iso-8859-2?Q?Micha=B3_Leszczy=F1ski?= <michal.leszczynski@cert.pl>,
    Wei Liu <wl@xen.org>,
    Anthony PERARD <anthony.perard@citrix.com>,
    "Tamas K  Lengyel" <tamas@tklengyel.com>
Subject: Re: [PATCH v9 03/11] tools/[lib]xl: Add vmtrace_buf_size parameter
In-Reply-To: <20210201232703.29275-4-andrew.cooper3@citrix.com>
References: <20210201232703.29275-1-andrew.cooper3@citrix.com>
	<20210201232703.29275-4-andrew.cooper3@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Andrew Cooper writes ("[PATCH v9 03/11] tools/[lib]xl: Add vmtrace_buf_size parameter"):
> From: Michał Leszczyński <michal.leszczynski@cert.pl>
> 
> Allow specifying the size of the per-vCPU trace buffer upon
> domain creation. This is zero by default (meaning: not enabled).
...

Wearing my maintainer/reviewer hat:

Release risk assessment for this patch:

 * This contains golang changes which might break the build or need
   updates to golang generated files.  This ought to be detected by
   our tests so we can fix it.  At this stage of the release that is
   probably OK.  The risk of actually shipping a broken build is low.

 * The patch introduces a new libxl config parameter.  That has API
   and UI implications.  But it is a very small change and the
   semantics are fairly obvious.  The name likewise is fine.  So I am
   very comfortable with recommending this late addition to these
   APIs.

 * The patch contains buffer size handling code.  In the general case
   that might produce a risk of buffer overruns.  But at least here in
   this patch this is actually just the configured size of a buffer,
   and actual length/use checks are done elsewhere, so this is not
   a real risk.

Ian.


From xen-devel-bounces@lists.xenproject.org Tue Feb 02 12:17:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 12:17:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80528.147343 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6ucZ-0007rT-S8; Tue, 02 Feb 2021 12:17:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80528.147343; Tue, 02 Feb 2021 12:17:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6ucZ-0007rM-OP; Tue, 02 Feb 2021 12:17:27 +0000
Received: by outflank-mailman (input) for mailman id 80528;
 Tue, 02 Feb 2021 12:17:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l6ucY-0007rE-4g
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 12:17:26 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l6ucY-0004fi-33
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 12:17:26 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l6ucY-0004o0-1w
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 12:17:26 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l6ucU-00020Y-RR; Tue, 02 Feb 2021 12:17:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:To:Date:
	Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=VSW2mzEMMOyTeQivmM7OnpDGG5T1WNuga1TTwwehR/M=; b=BHM8ckjojBdJJC4oXizZrNFByk
	DteLT/Vh2RWAlwDgAjGDlr3za+nU6uMK0azZx8cFi5t3SZiXB0PvejZSLa2HRZfZoO07LPW4P6NNW
	r9oF3XzDbKhept7o4XnEQ3Xm9v1mqcTgLqyrsm2zhaXz6Gnr1ZFLqBItGe0GCXP/lkH8=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-2
Content-Transfer-Encoding: 8bit
Message-ID: <24601.17106.625570.402466@mariner.uk.xensource.com>
Date: Tue, 2 Feb 2021 12:17:22 +0000
To: Andrew Cooper <andrew.cooper3@citrix.com>,
    Xen-devel <xen-devel@lists.xenproject.org>,
    =?iso-8859-2?Q?Micha=B3_Leszczy=F1ski?= <michal.leszczynski@cert.pl>,
    Wei Liu <wl@xen.org>,
    Anthony PERARD <anthony.perard@citrix.com>,
    "Tamas K  Lengyel" <tamas@tklengyel.com>
Subject: Re: [PATCH v9 03/11] tools/[lib]xl: Add vmtrace_buf_size parameter
In-Reply-To: <24601.17068.666597.295268@mariner.uk.xensource.com>
References: <20210201232703.29275-1-andrew.cooper3@citrix.com>
	<20210201232703.29275-4-andrew.cooper3@citrix.com>
	<24601.17068.666597.295268@mariner.uk.xensource.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Ian Jackson writes ("Re: [PATCH v9 03/11] tools/[lib]xl: Add vmtrace_buf_size parameter"):
> Andrew Cooper writes ("[PATCH v9 03/11] tools/[lib]xl: Add vmtrace_buf_size parameter"):
> > From: Michał Leszczyński <michal.leszczynski@cert.pl>
> > 
> > Allow specifying the size of the per-vCPU trace buffer upon
> > domain creation. This is zero by default (meaning: not enabled).
> ...
> 
> Wearing my maintainer/reviewer hat:
> 
> Release risk assessment for this patch:
> 
>  * This contains golang changes which might break the build or need
>    updates to golang generated files.  This ought to be detected by
>    our tests so we can fix it.  At this stage of the release that is
>    probably OK.  The risk of actually shipping a broken build is low.
> 
>  * The patch introduces a new libxl config parameter.  That has API
>    and UI implications.  But it is a very small change and the
>    semantics are fairly obvious.  The name likewise is fine.  So I am
>    very comfortable with recommending this late addition to these
>    APIs.
> 
>  * The patch contains buffer size handling code.  In the general case
>    that might produce a risk of buffer overruns.  But at least here in
>    this patch this is actually just the configured size of a buffer,
>    and actual length/use checks are done elsewhere, so this is not
>    a real risk.

Consequently, wearing my RM hat, this patch:

Release-Acked-by: Ian Jackson <iwj@xenproject.org>
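
For reference, the parameter being acked surfaces in a domain
configuration roughly like this (a sketch only; the exact xl.cfg key
name, assumed here to be vmtrace_buf_kb, is defined by the patch series
itself):

```
# Hypothetical xl domain configuration fragment.
# Per-vCPU processor trace buffer size, in KiB; 0 (the default) leaves
# vmtrace disabled.
vmtrace_buf_kb = 64
```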


From xen-devel-bounces@lists.xenproject.org Tue Feb 02 12:20:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 12:20:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80531.147355 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6ufU-0000RG-9l; Tue, 02 Feb 2021 12:20:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80531.147355; Tue, 02 Feb 2021 12:20:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6ufU-0000R9-6d; Tue, 02 Feb 2021 12:20:28 +0000
Received: by outflank-mailman (input) for mailman id 80531;
 Tue, 02 Feb 2021 12:20:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l6ufS-0000R3-Os
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 12:20:26 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l6ufS-0004iM-O6
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 12:20:26 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l6ufS-0004zK-NN
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 12:20:26 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l6ufP-00021H-GQ; Tue, 02 Feb 2021 12:20:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=Tv5QIy12FDCUPsCxutoRvGxqSmV8/9YDlxqN1gWL/tA=; b=gWeJYKwbx2RvWbfpi7t2IiywZ0
	ne3zN0Lz/6qGragGn3rstAtSXV+X04MMZiAag87SHKU4saEQzEx8bL2iU6WFlmC1Uyr/NCAfcIU67
	m0cONGJtqtX3FbCN68E+mbqcSgld7XLS9Maturr/k+i/rHuXxby7akmBzvPrut9ofBLI=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24601.17287.280124.602809@mariner.uk.xensource.com>
Date: Tue, 2 Feb 2021 12:20:23 +0000
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
    Jan Beulich <JBeulich@suse.com>,
    Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
    Wei Liu <wl@xen.org>,
    Anthony PERARD <anthony.perard@citrix.com>,
    "Jun  Nakajima" <jun.nakajima@intel.com>,
    Kevin Tian <kevin.tian@intel.com>,
    =?iso-8859-2?Q?Micha=B3_Leszczy=F1ski?= <michal.leszczynski@cert.pl>,
    Tamas K Lengyel <tamas@tklengyel.com>
Subject: Re: [PATCH v9 00/11] acquire_resource size and external IPT monitoring
In-Reply-To: <20210201232703.29275-1-andrew.cooper3@citrix.com>
References: <20210201232703.29275-1-andrew.cooper3@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Andrew Cooper writes ("[PATCH v9 00/11] acquire_resource size and external IPT monitoring"):
...
> Therefore, I'd like to request a release exception.

Thanks for this writeup.

There is discussion here of the upside of granting an exception, which
certainly seems substantial enough to give this serious consideration.

> It [is] fairly isolated in terms of interactions with the rest of
> Xen, so the chances of a showstopper affecting other features are
> very slim.

This is encouraging (optimistic, even) but very general.  I would like
to see a frank and detailed assessment of the downside risks, ideally
based on analysis of the individual patches.

When I say a "frank and detailed assessment" I'm hoping to have a list
of the specific design and code changes that pose a risk to non-IPT
configurations, in decreasing order of risk.

For each one there should be a brief discussion of the measures that
exist to control that risk (eg, additional review, additional
testing), and a characterisation of the resulting risk (both in terms
of likelihood and severity of the resulting bug).

All risks that would come to a diligent reviewer's mind should be
mentioned and explicitly dealt with, even if it is immediately clear
that they are not a real risk.

Do you think that would be feasible?  We would want to make a
decision ASAP so it would have to be done quickly too - in the next
few days and certainly by the end of the week.


Since you mentioned patch 1 and asserted it didn't need a release-ack,
I looked at it in a little more detail.  It seems to contain a
moderate amount of (fairly localised) restructuring.  IDK whether
XENMEM_acquire_resource is used by non-IPT configurations but I didn't
see an assertion anywhere that it isn't.

I appreciate that whether something is "straightforward" on the one
hand, vs involving "substantial refactoring" on the other, is
a matter of judgement, which I have left up to the committers during
this part of the freeze.  But for the record my view is that this
patch is not a "straightforward bugfix" and needs a release ack.


To give you an idea of what kind of thing I am looking for in a risk
assessment, I have written one up for
  [PATCH v9 03/11] tools/[lib]xl: Add vmtrace_buf_size parameter

Ideally I would like to go through a similar process for the other
patches.


I appreciate that this is rather a more thorough process than we have
adopted in the past.

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Tue Feb 02 12:45:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 12:45:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80537.147373 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6v3I-0002V7-7b; Tue, 02 Feb 2021 12:45:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80537.147373; Tue, 02 Feb 2021 12:45:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6v3I-0002V0-3z; Tue, 02 Feb 2021 12:45:04 +0000
Received: by outflank-mailman (input) for mailman id 80537;
 Tue, 02 Feb 2021 12:45:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hKoN=HE=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l6v3H-0002Uv-Gd
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 12:45:03 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0f6aa90b-7798-46d4-9d8a-f731c7feb2ce;
 Tue, 02 Feb 2021 12:45:01 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0f6aa90b-7798-46d4-9d8a-f731c7feb2ce
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612269901;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=Nxglflffc7e195qgy6DACSPUl7PMna6H60+G1Cp6nLg=;
  b=biu2Q+wUYlblmwddEuj0+2kLrfNnvcGm9NKrq0N6JCEltASnQijTIu+0
   5W2hK7bOYeXy9UmvIgr99WOdoprt1F1rAjBpZvdFDnkTzdGfGC2GIUhn3
   CR+aph4Hkzhvjixo+Z9+kXrfNgR2fHc/WDdghc8nAVyND6ph9E3lxaby6
   g=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 36742024
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,395,1602561600"; 
   d="scan'208";a="36742024"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=PVvD8oORX6qyupbnz/gYZrLddHJ5MvZnHTvI1dgSFs5XJvYON7WQeSVwIquYpK1kIqP45WvKRIL/8PhNJLm0J9avuXKJ8w3DMJbo7LjqenXHT3Qqrxp44KTMalenbeheH8VB61Jad/iE5rvAr7fc6rDV/9yW1EgN6EjsIihFlLXWIaws6VqRfsfWanPot4oTaU75lNSKqv1S/0kSgj5Yas0bfJBDK1I3AkO3Wtn0pdCYbU5mYwbuLor/+SDAw/Vr4RxJ2Vk6QwHFL/hVf2FTtTZvfURqrhNU1qXIU9NstcME1WHG2P1/pMbmclTigUmDWYM1oOymyGxsLa2dEMEpDQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=uFCNrdhEgxKpnC8BCo9ZS1EsEJGW/wgh2lGPWHcanj8=;
 b=jWLFK+HU2nlY63vMc/qLiUunO1Sggkf+xfjdcPCIMUWLA9i7H7f2J3N75HZl9Uo6w6J6iSmiE8CFghVse4woeyBwycEHrAWuuNZGG1t2s4xaPCE11f0PoyJqeKkjkRVXQ1QS+/pnyMkfNzhsto1WDf3RJiz5z/ZMJ9FPih4MrO6KsUH/NdMtVCZnCkxpyIhADMqivs3IeeUDw/aq+u7iWJJMhD5SXRPy+dGrHtll+BVQsZCIZdjXmhiFpWv0O7g/5asdT5T90kmQPkABhxk8JW+gZ9j6ir8OF517hBl/V8sSsqcmZa2N9Er2w6Goxu90+NZQwDVJ0294RsM2G9wbFQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=uFCNrdhEgxKpnC8BCo9ZS1EsEJGW/wgh2lGPWHcanj8=;
 b=xl/1fg5wSKlacP50DTBxJW9O2DvEWHlj66n1gxIHLEmKD/MCfuDhadaPhlTJTeS/T6V5/5D9tNV5bKbseh3WDe+qx7ozvlRkvMCgUB8UGj8+EwagHKGIXnMJMvk+MIKnhP5608npcEDO3J7JIzBHWQLNZqTxJbqFKva2h+7mOok=
Subject: Re: [PATCH v9 00/11] acquire_resource size and external IPT
 monitoring
To: Ian Jackson <iwj@xenproject.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>, "Jun
 Nakajima" <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>,
	=?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>, "Tamas
 K Lengyel" <tamas@tklengyel.com>
References: <20210201232703.29275-1-andrew.cooper3@citrix.com>
 <24601.17287.280124.602809@mariner.uk.xensource.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <85dc5ead-f381-d9f9-3b19-bbafe9712a58@citrix.com>
Date: Tue, 2 Feb 2021 12:44:11 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
In-Reply-To: <24601.17287.280124.602809@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0458.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1aa::13) To BYAPR03MB4728.namprd03.prod.outlook.com
 (2603:10b6:a03:13a::24)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 86454e07-7f1b-42a3-9a14-08d8c7784172
X-MS-TrafficTypeDiagnostic: BYAPR03MB4407:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB440780C1380182A0E5D7A0A4BAB59@BYAPR03MB4407.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:3513;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 86454e07-7f1b-42a3-9a14-08d8c7784172
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB4728.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Feb 2021 12:44:18.3676
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 7CesucCAJLTmqOxThRurhBl3tZPnUlofUm5eVfIHZ695eAatg3LTnP5ifRGFh1eermFEXb3bw3AoZmoZ3ljb2gjpDQCX0//5DcUUyzu3DLo=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4407
X-OriginatorOrg: citrix.com

On 02/02/2021 12:20, Ian Jackson wrote:
> Since you mentioned patch 1 and asserted it didn't need a release-ack,
> I looked at it in a little more detail.  It seems to contain a
> moderate amount of (fairly localised) restructuring.  IDK whether
> XENMEM_acquire_resource is used by non-IPT configurations but I didn't
> see an assertion anywhere that it isn't.

Acquire resource is used by Qemu/demu/varstored/etc (for IO emulation)
and the domain builder (seeding the grant table with
xenstore/console details).

None of these use cases performed a size calculation; they made blind
mapping calls of 1 page in size.

IPT is the first use case to want to map more than a single page in one go.

> I appreciate that whether something is "straightforward" on the one
> hand, vs involving "substantial refactoring" on the other, is
> a matter of judgement, which I have left up to the committers during
> this part of the freeze.  But for the record my view is that this
> patch is not a "straightforward bugfix" and needs a release ack.

I have extensive testing, demonstrating the bug already present in
staging (unable to map the guest's whole grant table in default
configurations), and demonstrating the correctness of the fix.

Some of this testing (specifically, the tools/tests/* binary) is
something I plan to fix up over the ARM IOREQ and other series, and
submit later this week.  It will demonstrate the current bug in staging,
and show it fixed with patch 1 committed.  (This is something I want to
become an autotest in due course.)

Other parts of this testing cannot be submitted.  To get the compat
layer correct, I needed an XTF test and a modified Xen which had a known
pattern, to check the marshalling logic didn't lose anything when a
continuation hit an interesting boundary.  I can talk you through these
tests and assert that I have run them, but it's not testing logic we can
commit into Xen, and it's not anything which gets tested by OSSTest
because we don't test 32-bit PVH dom0s in anger.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Feb 02 13:25:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 13:25:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80545.147391 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6vgP-0006QW-Bh; Tue, 02 Feb 2021 13:25:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80545.147391; Tue, 02 Feb 2021 13:25:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6vgP-0006QP-8j; Tue, 02 Feb 2021 13:25:29 +0000
Received: by outflank-mailman (input) for mailman id 80545;
 Tue, 02 Feb 2021 13:25:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6vgO-0006QH-9K; Tue, 02 Feb 2021 13:25:28 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6vgO-0005mI-0R; Tue, 02 Feb 2021 13:25:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6vgN-00006e-L1; Tue, 02 Feb 2021 13:25:27 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l6vgN-0003lu-KV; Tue, 02 Feb 2021 13:25:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=J9GQO9nWPeb4hf8PgzFwvdnTyMHyP1i9jnykTY/XHq4=; b=Uo65Alpr0K1VU716xBT+mMPIy7
	Kx7YZkRlqi1+IqeMJliysGbXC+0K+RKjgaiKglq+lT1l+WG1ufbPvqcznUavVSb9zgQXDEUxwDVLi
	UsO86eTsdFwA36vpTBmTk/oRwubUnUUlvMZETXOm6pKT/hhBod/UXJPiYSwltsZMwE/w=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158950-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 158950: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=5e7aa904405fa2f268c3af213516bae271de3265
X-Osstest-Versions-That:
    xen=9dc687f155a57216b83b17f9cde55dd43e06b0cd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 02 Feb 2021 13:25:27 +0000

flight 158950 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158950/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  5e7aa904405fa2f268c3af213516bae271de3265
baseline version:
 xen                  9dc687f155a57216b83b17f9cde55dd43e06b0cd

Last test of basis   158804  2021-01-30 04:00:24 Z    3 days
Failing since        158892  2021-02-01 16:00:25 Z    0 days   12 attempts
Testing same since   158950  2021-02-02 11:01:24 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Manuel Bouyer <bouyer@netbsd.org>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   9dc687f155..5e7aa90440  5e7aa904405fa2f268c3af213516bae271de3265 -> smoke


From xen-devel-bounces@lists.xenproject.org Tue Feb 02 14:20:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 14:20:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80567.147412 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6wXh-0003nr-Gi; Tue, 02 Feb 2021 14:20:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80567.147412; Tue, 02 Feb 2021 14:20:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6wXh-0003nk-Dg; Tue, 02 Feb 2021 14:20:33 +0000
Received: by outflank-mailman (input) for mailman id 80567;
 Tue, 02 Feb 2021 14:20:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6wXg-0003nc-8e; Tue, 02 Feb 2021 14:20:32 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6wXg-0006uD-1w; Tue, 02 Feb 2021 14:20:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6wXf-0001tl-Qb; Tue, 02 Feb 2021 14:20:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l6wXf-0002pO-Q4; Tue, 02 Feb 2021 14:20:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=RFLYNn8Ql7kjM0xfqsYLllRa+2PecZhkvz1zgbW1CJk=; b=IFUWeXwRDLdV/gwvmMMvclSzYU
	NpJpoIJ6DNf8RyJwMWvs/qYJpz8IIqdgeU17HGOiqI5fS5qs2OUVpP7r0JqnBgB9J/lCqoUVjqaYZ
	CGU4ZpkQTKXdX6ZqwONzN2CFOexjPPRkdsIT3JQNqT3aeMOKOEUVDHrJSVleQCbs/igw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [linux-5.4 bisection] complete test-amd64-coresched-i386-xl
Message-Id: <E1l6wXf-0002pO-Q4@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 02 Feb 2021 14:20:31 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-coresched-i386-xl
testid guest-start

Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
  Bug introduced:  a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Bug not present: acc402fa5bf502d471d50e3d495379f093a7f9e4
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/158955/


  commit a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Author: David Woodhouse <dwmw@amazon.co.uk>
  Date:   Wed Jan 13 13:26:02 2021 +0000
  
      xen: Fix event channel callback via INTX/GSI
      
      [ Upstream commit 3499ba8198cad47b731792e5e56b9ec2a78a83a2 ]
      
      For a while, event channel notification via the PCI platform device
      has been broken, because we attempt to communicate with xenstore before
      we even have notifications working, with the xs_reset_watches() call
      in xs_init().
      
      We tend to get away with this on Xen versions below 4.0 because we avoid
      calling xs_reset_watches() anyway, because xenstore might not cope with
      reading a non-existent key. And newer Xen *does* have the vector
      callback support, so we rarely fall back to INTX/GSI delivery.
      
      To fix it, clean up a bit of the mess of xs_init() and xenbus_probe()
      startup. Call xs_init() directly from xenbus_init() only in the !XS_HVM
      case, deferring it to be called from xenbus_probe() in the XS_HVM case
      instead.
      
      Then fix up the invocation of xenbus_probe() to happen either from its
      device_initcall if the callback is available early enough, or when the
      callback is finally set up. This means that the hack of calling
      xenbus_probe() from a workqueue after the first interrupt, or directly
      from the PCI platform device setup, is no longer needed.
      
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Link: https://lore.kernel.org/r/20210113132606.422794-2-dwmw2@infradead.org
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/linux-5.4/test-amd64-coresched-i386-xl.guest-start.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/linux-5.4/test-amd64-coresched-i386-xl.guest-start --summary-out=tmp/158955.bisection-summary --basis-template=158387 --blessings=real,real-bisect,real-retry linux-5.4 test-amd64-coresched-i386-xl guest-start
Searching for failure / basis pass:
 158881 fail [host=pinot0] / 158681 [host=rimava1] 158624 [host=pinot1] 158616 ok.
Failure / basis pass flights: 158881 / 158616
(tree with no url: minios)
Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
Basis pass 09f983f0c7fc0db79a5f6c883ec3510d424c369c c530a75c1e6a472b0eb9558310b518f0dfcd8860 96a9acfc527964dc5ab7298862a0cd8aa5fffc6a 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 452ddbe3592b141b05a7e0676f09c8ae07f98fdd
Generating revisions with ./adhoc-revtuple-generator  git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git#09f983f0c7fc0db79a5f6c883ec3510d424c369c-0fbca6ce4174724f28be5268c5d210f51ed96e31 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#96a9acfc527964dc5ab7298862a0cd8aa5fffc6a-c6be6dab9c4bdf135bc02b61ecc304d5511c3588 git://xenbits.xen.org/qemu-xen-traditional\
 .git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://xenbits.xen.org/qemu-xen.git#7ea428895af2840d85c524f0bd11a38aac308308-7ea428895af2840d85c524f0bd11a38aac308308 git://xenbits.xen.org/osstest/seabios.git#ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e-ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e git://xenbits.xen.org/xen.git#452ddbe3592b141b05a7e0676f09c8ae07f98fdd-9dc687f155a57216b83b17f9cde55dd43e06b0cd
Loaded 15001 nodes in revision graph
Searching for test results:
 158609 [host=rimava1]
 158616 pass 09f983f0c7fc0db79a5f6c883ec3510d424c369c c530a75c1e6a472b0eb9558310b518f0dfcd8860 96a9acfc527964dc5ab7298862a0cd8aa5fffc6a 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 452ddbe3592b141b05a7e0676f09c8ae07f98fdd
 158624 [host=pinot1]
 158681 [host=rimava1]
 158707 fail irrelevant
 158716 fail irrelevant
 158748 fail 131f8d8a889a5ca66a835eea82bba043ac91a7cf c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e f8708b0ed6d549d1d29b8b5cc287f1f2b642bc63
 158765 fail 131f8d8a889a5ca66a835eea82bba043ac91a7cf c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 6677b5a3577c16501fbc51a3341446905bd21c38
 158796 fail irrelevant
 158818 fail irrelevant
 158841 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158863 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158881 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158895 pass 09f983f0c7fc0db79a5f6c883ec3510d424c369c c530a75c1e6a472b0eb9558310b518f0dfcd8860 96a9acfc527964dc5ab7298862a0cd8aa5fffc6a 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 452ddbe3592b141b05a7e0676f09c8ae07f98fdd
 158902 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158906 fail 38f35023fd301abeb01cfd81e73caa2e4e7ec0b1 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 158912 pass d8a487e673abf46c69c901bb25da54e9bc7ba45e c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 158920 pass 5a1d7bb7d333849eb7d3ab5ebfbf9805b2cd46c9 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 158928 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 158936 pass 9cec63a3aacbcaee8d09aecac2ca2f8820efcc70 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 158939 pass 8ab3478335ad8fc08f14ec73251b084fe02b3ebb c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 158942 pass acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 158945 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 158948 pass acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 158952 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 158954 pass acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 158955 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
Searching for interesting versions
 Result found: flight 158616 (pass), for basis pass
 For basis failure, parent search stopping at acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3, results HASH(0x55a52c07deb0) HASH(0x55a52c070930) HASH(0x55a52c08e060) For basis failure, parent search stopping at 8ab3478335ad8fc08f14ec73251b084fe02b3ebb c530a75c1\
 e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3, results HASH(0x55a52c062428) For basis failure, parent search stopping at 9cec63a3aacbcaee8d09aecac2ca2f8820efcc70 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0\
 bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3, results HASH(0x55a52c078198) For basis failure, parent search stopping at 5a1d7bb7d333849eb7d3ab5ebfbf9805b2cd46c9 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3, results HASH(0x55a52c08dbe0) For basis\
  failure, parent search stopping at d8a487e673abf46c69c901bb25da54e9bc7ba45e c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3, results HASH(0x55a52c08da38) For basis failure, parent search stopping at 09f983f0c7fc0db79a5f6c883ec3510d424c369c c530a75c1e6a472b0eb9558310b518f0dfcd8860 96a9acfc527964dc5ab7\
 298862a0cd8aa5fffc6a 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 452ddbe3592b141b05a7e0676f09c8ae07f98fdd, results HASH(0x55a52c0557d0) HASH(0x55a52c0816e0) Result found: flight 158748 (fail), for basis failure (at ancestor ~10985)
 Repro found: flight 158895 (pass), for basis pass
 Repro found: flight 158902 (fail), for basis failure
 0 revisions at acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
No revisions left to test, checking graph state.
 Result found: flight 158942 (pass), for last pass
 Result found: flight 158945 (fail), for first failure
 Repro found: flight 158948 (pass), for last pass
 Repro found: flight 158952 (fail), for first failure
 Repro found: flight 158954 (pass), for last pass
 Repro found: flight 158955 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
  Bug introduced:  a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Bug not present: acc402fa5bf502d471d50e3d495379f093a7f9e4
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/158955/


  commit a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Author: David Woodhouse <dwmw@amazon.co.uk>
  Date:   Wed Jan 13 13:26:02 2021 +0000
  
      xen: Fix event channel callback via INTX/GSI
      
      [ Upstream commit 3499ba8198cad47b731792e5e56b9ec2a78a83a2 ]
      
      For a while, event channel notification via the PCI platform device
      has been broken, because we attempt to communicate with xenstore before
      we even have notifications working, with the xs_reset_watches() call
      in xs_init().
      
      We tend to get away with this on Xen versions below 4.0 because we avoid
      calling xs_reset_watches() anyway, because xenstore might not cope with
      reading a non-existent key. And newer Xen *does* have the vector
      callback support, so we rarely fall back to INTX/GSI delivery.
      
      To fix it, clean up a bit of the mess of xs_init() and xenbus_probe()
      startup. Call xs_init() directly from xenbus_init() only in the !XS_HVM
      case, deferring it to be called from xenbus_probe() in the XS_HVM case
      instead.
      
      Then fix up the invocation of xenbus_probe() to happen either from its
      device_initcall if the callback is available early enough, or when the
      callback is finally set up. This means that the hack of calling
      xenbus_probe() from a workqueue after the first interrupt, or directly
      from the PCI platform device setup, is no longer needed.
      
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Link: https://lore.kernel.org/r/20210113132606.422794-2-dwmw2@infradead.org
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>

pnmtopng: 251 colors found
Revision graph left in /home/logs/results/bisect/linux-5.4/test-amd64-coresched-i386-xl.guest-start.{dot,ps,png,html,svg}.
----------------------------------------
158955: tolerable ALL FAIL

flight 158955 linux-5.4 real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/158955/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-coresched-i386-xl 14 guest-start             fail baseline untested


jobs:
 test-amd64-coresched-i386-xl                                 fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Tue Feb 02 14:29:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 14:29:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80574.147430 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6wg4-0004B2-Dj; Tue, 02 Feb 2021 14:29:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80574.147430; Tue, 02 Feb 2021 14:29:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6wg4-0004Av-AW; Tue, 02 Feb 2021 14:29:12 +0000
Received: by outflank-mailman (input) for mailman id 80574;
 Tue, 02 Feb 2021 14:29:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6wg2-0004An-VN; Tue, 02 Feb 2021 14:29:10 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6wg2-00073C-MV; Tue, 02 Feb 2021 14:29:10 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6wg2-0002B7-9Z; Tue, 02 Feb 2021 14:29:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l6wg2-0000Z7-94; Tue, 02 Feb 2021 14:29:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=XZmBDiHo7durbnXmvmSGrzz5MIhDswhoDOBDn0nvejo=; b=YQU3W3x/+yIt4+nHM1ssneeUtQ
	DuWGegve5RcNzLPGk9qsmJbGfG14BpVkdrRXMdcfrEsj9duQTHPJAAVU3G8+fVMmU14Imuc4qlb0/
	O3H5Tnak4+s/aeNnmrdqwJQO/5DgM7j6ZPzeSAFDsAh4WjL/g5Ad5dqcb/k3Qo7t/Btk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158922-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 158922: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-libvirt-vhd:guest-saverestore:fail:heisenbug
    xen-unstable:test-armhf-armhf-libvirt-raw:guest-start:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-vhd:guest-start:fail:heisenbug
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=9dc687f155a57216b83b17f9cde55dd43e06b0cd
X-Osstest-Versions-That:
    xen=9dc687f155a57216b83b17f9cde55dd43e06b0cd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 02 Feb 2021 14:29:10 +0000

flight 158922 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158922/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-vhd 16 guest-saverestore          fail pass in 158873
 test-armhf-armhf-libvirt-raw 13 guest-start                fail pass in 158873
 test-armhf-armhf-xl-vhd      13 guest-start                fail pass in 158873

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail in 158873 like 158835
 test-armhf-armhf-libvirt-raw 14 migrate-support-check fail in 158873 never pass
 test-armhf-armhf-xl-vhd     14 migrate-support-check fail in 158873 never pass
 test-armhf-armhf-xl-vhd 15 saverestore-support-check fail in 158873 never pass
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 158873
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 158873
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 158873
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 158873
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 158873
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 158873
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 158873
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 158873
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 158873
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 158873
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  9dc687f155a57216b83b17f9cde55dd43e06b0cd
baseline version:
 xen                  9dc687f155a57216b83b17f9cde55dd43e06b0cd

Last test of basis   158922  2021-02-02 01:51:30 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Tue Feb 02 15:03:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 15:03:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80587.147447 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6xD9-0007tL-6A; Tue, 02 Feb 2021 15:03:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80587.147447; Tue, 02 Feb 2021 15:03:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6xD9-0007tE-3A; Tue, 02 Feb 2021 15:03:23 +0000
Received: by outflank-mailman (input) for mailman id 80587;
 Tue, 02 Feb 2021 15:03:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=58F4=HE=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1l6xD7-0007t9-VN
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 15:03:21 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ff164533-b49f-4ddd-b103-126440e6d3a3;
 Tue, 02 Feb 2021 15:03:20 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EADCBAD57;
 Tue,  2 Feb 2021 15:03:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ff164533-b49f-4ddd-b103-126440e6d3a3
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612278200; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=68SE39HbaZaq/EHFcUnvvpfmGpnlieWm24yT1YRTzWg=;
	b=V/TreyBkVM0RSNZfKFebGXM5zcg4e+90YAsxZhPAeqz2wTwN3WnjsvtTqhSQRLvtMA238A
	87OH9tzy+9TOJHU7N3NFVabCpYONOYwD/mL6yKcR0T82LcvI8opL4jTRycZN+qyU7cwrNC
	DlDQugwT+pbPOTc5nf2Oz3THF2IzPKk=
Message-ID: <311bb201bcacfd356f0c0b67856754eceae39e37.camel@suse.com>
Subject: Re: Null scheduler and vwfi native problem
From: Dario Faggioli <dfaggioli@suse.com>
To: Julien Grall <julien@xen.org>, Anders =?ISO-8859-1?Q?T=F6rnqvist?=
	 <anders.tornqvist@codiax.se>, xen-devel@lists.xenproject.org, Stefano
	Stabellini <sstabellini@kernel.org>
Date: Tue, 02 Feb 2021 16:03:18 +0100
In-Reply-To: <4760cbac-b006-78bc-b064-3265384f6707@xen.org>
References: <fe3dd9f0-b035-01fe-3e01-ddf065f182ab@codiax.se>
	 <207305e4e2998614767fdcc5ad83ced6de982820.camel@suse.com>
	 <e85548f4-e03b-4717-3495-9ed472ed03c9@xen.org>
	 <e18ba69efd0d12fc489144024305fd3c6102c330.camel@suse.com>
	 <e37fe8a9-c633-3572-e273-2fd03b35b791@codiax.se>
	 <744ddde6-a228-82fc-76b9-401926d7963b@xen.org>
	 <d92c4191fb81e6d1de636f281c8624d68f8d14fc.camel@suse.com>
	 <c9a4e132-5bca-aa76-ab8b-bfeee1cd5a9e@codiax.se>
	 <f52baf12308d71b96d0d9be1c7c382a3c5efafbc.camel@suse.com>
	 <18ef4619-19ae-90d2-459c-9b5282b49176@codiax.se>
	 <a9d80e262760f6692f7086c9b6a0622caf19e795.camel@suse.com>
	 <4760cbac-b006-78bc-b064-3265384f6707@xen.org>
Content-Type: multipart/signed; micalg="pgp-sha256";
	protocol="application/pgp-signature"; boundary="=-R/S8TuxHfJahqHl9r7DO"
User-Agent: Evolution 3.38.3 (by Flathub.org) 
MIME-Version: 1.0


--=-R/S8TuxHfJahqHl9r7DO
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Tue, 2021-02-02 at 07:59 +0000, Julien Grall wrote:
> Hi Dario,
>
Hi!

> I have had a quick look at your patch. The RCU call in
> leave_hypervisor_to_guest() needs to be placed just after the last call
> to check_for_pcpu_work().
>
> Otherwise, you may be preempted and keep the RCU quiet.
>
Ok, makes sense. I'll move it.
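
IOW, something along these lines, I guess (rough pseudocode, not the
actual code; the name of the RCU helper here is an assumption):

```c
static void leave_hypervisor_to_guest(struct cpu_user_regs *regs)
{
    local_irq_disable();

    /*
     * check_for_pcpu_work() may process pending work and can preempt,
     * and it may run more than once before we really return to guest.
     */
    check_for_pcpu_work();
    /* ... */
    check_for_pcpu_work();   /* the last such call */

    /*
     * Announce quiescence only after the last check_for_pcpu_work():
     * if done earlier, a preemption inside it could end up running
     * hypervisor work while RCU already considers this CPU quiet.
     */
    rcu_quiet_enter();       /* assumed name for the RCU call */
}
```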

> The placement in enter_hypervisor_from_guest() doesn't matter too
> much, although I would consider calling it as late as possible.
>
Mmmm... Can I ask why? In fact, I would have said "as soon as
possible".

Thanks and Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)

--=-R/S8TuxHfJahqHl9r7DO
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAmAZabYACgkQFkJ4iaW4
c+5FLg/+JZW0aeDvv0sWbH+C7aKYkyA2LGmft0QaEvYE/guNn8AlDt/0cazmDIem
N3IxwCsHa2B8hdrBM8s4p/5hTsEggoIS78A7p9VfKqfZloHASwuOtG6bEuPb7ZTN
MBH9rjrqOg7l//nEc0rgnsoRi0wTuWIZ3yJp1JthVmtrtAuul+jNryvfvWEHz+jE
TBuwbF3t5SYh7CfHFYQzcYgF+DfOKDhY9UzUktZyQFbtDMhhqxMfPklJNcHuA7SQ
Eurcx12G0jz/n8MCoJbvcwt/HeFLNE+8qeAz35Am/Zm1hDpJiZ0Zsunr6tyiO06L
lKL8wp+4OF7ql0bBvJuHCdtUXxPnOXGJnkBpwBHnkZeuh5Cl7tLJ9Wk7HMyYMMa8
fKRCqGljWBFzneiRRggzlc3ijvJE/UeQKB0Idn73UDeQW9yPnfkJObKC20qKA33h
Tg5nOVPz2hkV7/tNJ7u1plC+CTWoaVzTlNoRccjxh8QJjsl4Ln89UzQmmR4tk4ha
YkOV3ENChK0K7PJ03cizlmmwkkQc6f0yXdbM9O5rq+qmFEhsA8Ru4U/69GQEsYqm
dx65gyAMRbc1ODCHD0NuG/84/8fSS3GIc2PZigKR/t8bYttX9PzA00pvCxIiAl3D
XhOiDgwB4xPITVSXR9ozJRXbsopbbz8lD/B3Wvhc/9V10wdcXi4=
=IVOe
-----END PGP SIGNATURE-----

--=-R/S8TuxHfJahqHl9r7DO--



From xen-devel-bounces@lists.xenproject.org Tue Feb 02 15:13:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 15:13:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80592.147460 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6xMV-0000VW-3e; Tue, 02 Feb 2021 15:13:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80592.147460; Tue, 02 Feb 2021 15:13:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6xMU-0000VP-W5; Tue, 02 Feb 2021 15:13:02 +0000
Received: by outflank-mailman (input) for mailman id 80592;
 Tue, 02 Feb 2021 15:13:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=licP=HE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l6xMT-0000VK-I0
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 15:13:01 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3be95d28-36fb-4b15-8d37-c9ee70c859b2;
 Tue, 02 Feb 2021 15:13:00 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 81D17AD3E;
 Tue,  2 Feb 2021 15:12:59 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3be95d28-36fb-4b15-8d37-c9ee70c859b2
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612278779; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=Z01t98c2lAkI9ynOf8Dr+sYJGnNib0dWzziTUSbDUgs=;
	b=LOATRheeZQUTPtcAJkoCO1/kZsaCOHwUtfAcaKV5F/63+M+eEsufLPS9Vgb0AHeGZ38AzW
	p9osKcPkDkPe5bqe4AcD58ZskaAKBv+jorjMG31Gy1gXkyvVkbw0E73JNg4h8BBBCJjA7c
	ZVvPn4nFIVxyMKigQX/wRgszmBtu5lU=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Paul Durrant <paul@xen.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 George Dunlap <george.dunlap@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2 0/2] IOREQ: mapcache invalidation request sending
 corrections
Message-ID: <0e7265fe-8d89-facb-790d-9232c742c3fa@suse.com>
Date: Tue, 2 Feb 2021 16:13:00 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

I'm sorry it took so long to prepare v2. I had some trouble figuring
out a reasonable way to address the main earlier review
requests in what is now patch 2; see there for what changed.
Patch 1 is new.

1: fix waiting for broadcast completion
2: refine when to send mapcache invalidation request

Jan


From xen-devel-bounces@lists.xenproject.org Tue Feb 02 15:14:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 15:14:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80593.147472 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6xNg-0000dQ-DL; Tue, 02 Feb 2021 15:14:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80593.147472; Tue, 02 Feb 2021 15:14:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6xNg-0000dJ-AC; Tue, 02 Feb 2021 15:14:16 +0000
Received: by outflank-mailman (input) for mailman id 80593;
 Tue, 02 Feb 2021 15:14:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=licP=HE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l6xNe-0000dD-GI
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 15:14:14 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f6c7ac7e-148c-4287-b701-906a1675bf95;
 Tue, 02 Feb 2021 15:14:13 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0F69AABDA;
 Tue,  2 Feb 2021 15:14:13 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f6c7ac7e-148c-4287-b701-906a1675bf95
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612278853; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=bSUSrC4kKy6M1cRdxifAihMQi5w0aWPsjPzG6ahbp9A=;
	b=geWPBSFwqPxOveS6dUnWc6g8a63zMCzhOtwQlN3hxjI+eGjS5EOTOzCOWt3krAZppoOEkT
	HhLXIYA+ZkEkDFVcKG2xKzoEPMT/qjmp1OuOuOTMR6LRpxIFfF2iuIBiDttGXOVcKXUSxB
	OCvBq5W/j2ABhIXZoyts+dOBGK9qJtM=
Subject: [PATCH v2 1/2] IOREQ: fix waiting for broadcast completion
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Paul Durrant <paul@xen.org>, George Dunlap <george.dunlap@citrix.com>
References: <0e7265fe-8d89-facb-790d-9232c742c3fa@suse.com>
Message-ID: <3365a9a1-92c0-8917-1632-b88f1c055392@suse.com>
Date: Tue, 2 Feb 2021 16:14:14 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <0e7265fe-8d89-facb-790d-9232c742c3fa@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Checking just a single server is not enough - all of them must have
signaled that they're done processing the request.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.

--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -213,9 +213,9 @@ bool vcpu_ioreq_handle_completion(struct
         return false;
     }
 
-    sv = get_pending_vcpu(v, &s);
-    if ( sv && !wait_for_io(sv, get_ioreq(s, v)) )
-        return false;
+    while ( (sv = get_pending_vcpu(v, &s)) != NULL )
+        if ( !wait_for_io(sv, get_ioreq(s, v)) )
+            return false;
 
     vio->req.state = ioreq_needs_completion(&vio->req) ?
         STATE_IORESP_READY : STATE_IOREQ_NONE;



From xen-devel-bounces@lists.xenproject.org Tue Feb 02 15:14:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 15:14:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80594.147484 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6xOI-0000jR-MI; Tue, 02 Feb 2021 15:14:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80594.147484; Tue, 02 Feb 2021 15:14:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6xOI-0000jJ-Iy; Tue, 02 Feb 2021 15:14:54 +0000
Received: by outflank-mailman (input) for mailman id 80594;
 Tue, 02 Feb 2021 15:14:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=licP=HE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l6xOH-0000jB-2o
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 15:14:53 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fe867e53-0065-4840-bdb5-4ca1e89e753d;
 Tue, 02 Feb 2021 15:14:51 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EB41EABDA;
 Tue,  2 Feb 2021 15:14:50 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fe867e53-0065-4840-bdb5-4ca1e89e753d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612278891; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=TeRk8iTaKeNpeQPP3n3RrlbXOFqNj3jioAO6kau3rEA=;
	b=HWi6/qYfuHeQ34uppAIGDdzmTj4cBKoym/aW5Vm6wOhtJQRcLKxHOb1YSjthDuuZ7DNr/I
	wpHgA6XuAxsZXHP9C4KnDunzCcQmDQoc+xAGYCnbN2PKQxDuPR6c9p/H5oQ8QRWZpEM5Ts
	/DsMTQWIjSM5xeCe/PILW5X+RlClUNs=
Subject: [PATCH v2 2/2] IOREQ: refine when to send mapcache invalidation
 request
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Paul Durrant <paul@xen.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 George Dunlap <george.dunlap@citrix.com>
References: <0e7265fe-8d89-facb-790d-9232c742c3fa@suse.com>
Message-ID: <e2682f84-b3bb-b9fb-edd8-863b9036de95@suse.com>
Date: Tue, 2 Feb 2021 16:14:52 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <0e7265fe-8d89-facb-790d-9232c742c3fa@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

XENMEM_decrease_reservation isn't the only means by which pages can get
removed from a guest, yet all removals ought to be signaled to qemu.
Move the setting of the flag into the central p2m_remove_page(), which
underlies all the respective hypercalls, as well as into a few similar
places, mainly in PoD code.

Additionally, there's no point in sending the request for the local
domain when the domain acted upon is a different one: it is the latter
domain's ioreq server mapcaches that need invalidating. We assume that
domain to be paused at the point the operation takes place, so in this
case sending the request happens from the hvm_do_resume() path, which
calls handle_hvm_io_completion() as one of its first steps.

Even without the remote operation aspect, a single domain-wide flag
doesn't suffice: guests may, e.g., decrease-reservation on multiple
vCPU-s in parallel. Each of them needs to issue an invalidation request
in due course, in particular because exiting to guest context should
not happen before the request was actually seen by (all) the
emulator(s).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Preemption related adjustment split off. Make flag per-vCPU. More
    places to set the flag. Also handle acting on a remote domain.
    Re-base.
---
I'm still unconvinced of moving the flag setting into p2m_set_entry().
Besides likely causing false positives, we'd also need to make the
function retrieve the prior entry's type in order to do this for
displaced RAM entries only. Instead I've identified more places where
the flag should be set.

--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -759,10 +759,9 @@ static void p2m_free_entry(struct p2m_do
          * has failed (error case).
          * So, at worst, the spurious mapcache invalidation might be sent.
          */
-        if ( (p2m->domain == current->domain) &&
-              domain_has_ioreq_server(p2m->domain) &&
-              p2m_is_ram(entry.p2m.type) )
-            p2m->domain->mapcache_invalidate = true;
+        if ( p2m_is_ram(entry.p2m.type) &&
+             domain_has_ioreq_server(p2m->domain) )
+            ioreq_request_mapcache_invalidate(p2m->domain);
 #endif
 
         p2m->stats.mappings[level]--;
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1509,8 +1509,8 @@ static void do_trap_hypercall(struct cpu
      * Note that sending the invalidation request causes the vCPU to block
      * until all the IOREQ servers have acknowledged the invalidation.
      */
-    if ( unlikely(curr->domain->mapcache_invalidate) &&
-         test_and_clear_bool(curr->domain->mapcache_invalidate) )
+    if ( unlikely(curr->mapcache_invalidate) &&
+         test_and_clear_bool(curr->mapcache_invalidate) )
         ioreq_signal_mapcache_invalidate();
 #endif
 }
--- a/xen/arch/x86/hvm/hypercall.c
+++ b/xen/arch/x86/hvm/hypercall.c
@@ -32,7 +32,6 @@
 
 static long hvm_memory_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
-    const struct vcpu *curr = current;
     long rc;
 
     switch ( cmd & MEMOP_CMD_MASK )
@@ -42,14 +41,11 @@ static long hvm_memory_op(int cmd, XEN_G
         return -ENOSYS;
     }
 
-    if ( !curr->hcall_compat )
+    if ( !current->hcall_compat )
         rc = do_memory_op(cmd, arg);
     else
         rc = compat_memory_op(cmd, arg);
 
-    if ( (cmd & MEMOP_CMD_MASK) == XENMEM_decrease_reservation )
-        curr->domain->mapcache_invalidate = true;
-
     return rc;
 }
 
@@ -327,9 +323,11 @@ int hvm_hypercall(struct cpu_user_regs *
 
     HVM_DBG_LOG(DBG_LEVEL_HCALL, "hcall%lu -> %lx", eax, regs->rax);
 
-    if ( unlikely(currd->mapcache_invalidate) &&
-         test_and_clear_bool(currd->mapcache_invalidate) )
+    if ( unlikely(curr->mapcache_invalidate) )
+    {
+        curr->mapcache_invalidate = false;
         ioreq_signal_mapcache_invalidate();
+    }
 
     return curr->hcall_preempted ? HVM_HCALL_preempted : HVM_HCALL_completed;
 }
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -28,6 +28,7 @@
 #include <xen/vm_event.h>
 #include <xen/event.h>
 #include <xen/grant_table.h>
+#include <xen/ioreq.h>
 #include <xen/param.h>
 #include <public/vm_event.h>
 #include <asm/domain.h>
@@ -815,6 +816,8 @@ p2m_remove_page(struct p2m_domain *p2m,
         }
     }
 
+    ioreq_request_mapcache_invalidate(p2m->domain);
+
     return p2m_set_entry(p2m, gfn, INVALID_MFN, page_order, p2m_invalid,
                          p2m->default_access);
 }
@@ -1301,6 +1304,8 @@ static int set_typed_p2m_entry(struct do
             ASSERT(mfn_valid(mfn_add(omfn, i)));
             set_gpfn_from_mfn(mfn_x(omfn) + i, INVALID_M2P_ENTRY);
         }
+
+        ioreq_request_mapcache_invalidate(d);
     }
 
     P2M_DEBUG("set %d %lx %lx\n", gfn_p2mt, gfn_l, mfn_x(mfn));
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -20,6 +20,7 @@
  */
 
 #include <xen/event.h>
+#include <xen/ioreq.h>
 #include <xen/mm.h>
 #include <xen/sched.h>
 #include <xen/trace.h>
@@ -647,6 +648,8 @@ p2m_pod_decrease_reservation(struct doma
                 set_gpfn_from_mfn(mfn_x(mfn), INVALID_M2P_ENTRY);
             p2m_pod_cache_add(p2m, page, cur_order);
 
+            ioreq_request_mapcache_invalidate(d);
+
             steal_for_cache =  ( p2m->pod.entry_count > p2m->pod.count );
 
             ram -= n;
@@ -835,6 +838,8 @@ p2m_pod_zero_check_superpage(struct p2m_
     p2m_pod_cache_add(p2m, mfn_to_page(mfn0), PAGE_ORDER_2M);
     p2m->pod.entry_count += SUPERPAGE_PAGES;
 
+    ioreq_request_mapcache_invalidate(d);
+
     ret = SUPERPAGE_PAGES;
 
 out_reset:
@@ -997,6 +1002,8 @@ p2m_pod_zero_check(struct p2m_domain *p2
             /* Add to cache, and account for the new p2m PoD entry */
             p2m_pod_cache_add(p2m, mfn_to_page(mfns[i]), PAGE_ORDER_4K);
             p2m->pod.entry_count++;
+
+            ioreq_request_mapcache_invalidate(d);
         }
     }
 
@@ -1315,6 +1322,8 @@ guest_physmap_mark_populate_on_demand(st
         p2m->pod.entry_count -= pod_count;
         BUG_ON(p2m->pod.entry_count < 0);
         pod_unlock(p2m);
+
+        ioreq_request_mapcache_invalidate(d);
     }
 
 out:
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -35,6 +35,17 @@
 #include <public/hvm/ioreq.h>
 #include <public/hvm/params.h>
 
+void ioreq_request_mapcache_invalidate(const struct domain *d)
+{
+    struct vcpu *v = current;
+
+    if ( d == v->domain )
+        v->mapcache_invalidate = true;
+    else if ( d->creation_finished )
+        for_each_vcpu ( d, v )
+            v->mapcache_invalidate = true;
+}
+
 /* Ask ioemu mapcache to invalidate mappings. */
 void ioreq_signal_mapcache_invalidate(void)
 {
@@ -206,6 +217,7 @@ bool vcpu_ioreq_handle_completion(struct
     struct ioreq_server *s;
     struct ioreq_vcpu *sv;
     enum vio_completion completion;
+    bool res = true;
 
     if ( has_vpci(d) && vpci_process_pending(v) )
     {
@@ -232,17 +244,27 @@ bool vcpu_ioreq_handle_completion(struct
         break;
 
     case VIO_mmio_completion:
-        return arch_ioreq_complete_mmio();
+        res = arch_ioreq_complete_mmio();
+        break;
 
     case VIO_pio_completion:
-        return handle_pio(vio->req.addr, vio->req.size,
-                          vio->req.dir);
+        res = handle_pio(vio->req.addr, vio->req.size,
+                         vio->req.dir);
+        break;
 
     default:
-        return arch_vcpu_ioreq_completion(completion);
+        res = arch_vcpu_ioreq_completion(completion);
+        break;
     }
 
-    return true;
+    if ( res && unlikely(v->mapcache_invalidate) )
+    {
+        v->mapcache_invalidate = false;
+        ioreq_signal_mapcache_invalidate();
+        res = false;
+    }
+
+    return res;
 }
 
 static int ioreq_server_alloc_mfn(struct ioreq_server *s, bool buf)
--- a/xen/include/xen/ioreq.h
+++ b/xen/include/xen/ioreq.h
@@ -103,6 +103,7 @@ struct ioreq_server *ioreq_server_select
 int ioreq_send(struct ioreq_server *s, ioreq_t *proto_p,
                bool buffered);
 unsigned int ioreq_broadcast(ioreq_t *p, bool buffered);
+void ioreq_request_mapcache_invalidate(const struct domain *d);
 void ioreq_signal_mapcache_invalidate(void);
 
 void ioreq_domain_init(struct domain *d);
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -225,6 +225,14 @@ struct vcpu
     bool             hcall_compat;
 #endif
 
+#ifdef CONFIG_IOREQ_SERVER
+    /*
+     * Indicates that mapcache invalidation request should be sent to
+     * the device emulator.
+     */
+    bool             mapcache_invalidate;
+#endif
+
     /* The CPU, if any, which is holding onto this VCPU's state. */
 #define VCPU_CPU_CLEAN (~0u)
     unsigned int     dirty_cpu;
@@ -444,11 +452,6 @@ struct domain
      * unpaused for the first time by the systemcontroller.
      */
     bool             creation_finished;
-    /*
-     * Indicates that mapcache invalidation request should be sent to
-     * the device emulator.
-     */
-    bool             mapcache_invalidate;
 
     /* Which guest this guest has privileges on */
     struct domain   *target;



From xen-devel-bounces@lists.xenproject.org Tue Feb 02 15:24:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 15:24:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80600.147496 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6xXB-0001nP-Jc; Tue, 02 Feb 2021 15:24:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80600.147496; Tue, 02 Feb 2021 15:24:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6xXB-0001nI-Er; Tue, 02 Feb 2021 15:24:05 +0000
Received: by outflank-mailman (input) for mailman id 80600;
 Tue, 02 Feb 2021 15:24:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1l6xX9-0001n6-El
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 15:24:03 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l6xX7-0007yS-Om; Tue, 02 Feb 2021 15:24:01 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l6xX7-0000Nz-Iw; Tue, 02 Feb 2021 15:24:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Subject: Re: Null scheduler and vwfi native problem
To: Dario Faggioli <dfaggioli@suse.com>,
 =?UTF-8?Q?Anders_T=c3=b6rnqvist?= <anders.tornqvist@codiax.se>,
 xen-devel@lists.xenproject.org, Stefano Stabellini <sstabellini@kernel.org>
References: <fe3dd9f0-b035-01fe-3e01-ddf065f182ab@codiax.se>
 <207305e4e2998614767fdcc5ad83ced6de982820.camel@suse.com>
 <e85548f4-e03b-4717-3495-9ed472ed03c9@xen.org>
 <e18ba69efd0d12fc489144024305fd3c6102c330.camel@suse.com>
 <e37fe8a9-c633-3572-e273-2fd03b35b791@codiax.se>
 <744ddde6-a228-82fc-76b9-401926d7963b@xen.org>
 <d92c4191fb81e6d1de636f281c8624d68f8d14fc.camel@suse.com>
 <c9a4e132-5bca-aa76-ab8b-bfeee1cd5a9e@codiax.se>
 <f52baf12308d71b96d0d9be1c7c382a3c5efafbc.camel@suse.com>
 <18ef4619-19ae-90d2-459c-9b5282b49176@codiax.se>
 <a9d80e262760f6692f7086c9b6a0622caf19e795.camel@suse.com>
 <4760cbac-b006-78bc-b064-3265384f6707@xen.org>
 <311bb201bcacfd356f0c0b67856754eceae39e37.camel@suse.com>
From: Julien Grall <julien@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, Juergen Gross <jgross@suse.com>
Message-ID: <7f2ec84a-9814-ffb1-0940-e936a80afbbb@xen.org>
Date: Tue, 2 Feb 2021 15:23:59 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <311bb201bcacfd356f0c0b67856754eceae39e37.camel@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

(Adding Andrew, Jan, Juergen for visibility)

Hi Dario,

On 02/02/2021 15:03, Dario Faggioli wrote:
> On Tue, 2021-02-02 at 07:59 +0000, Julien Grall wrote:
>> Hi Dario,
>>
>> I have had a quick look at your patch. The RCU call in
>> leave_hypervisor_to_guest() needs to be placed just after the last
>> call
>> to check_for_pcpu_work().
>>
>> Otherwise, you may be preempted and keep the RCU quiet.
>>
> Ok, makes sense. I'll move it.
> 
>> The placement in enter_hypervisor_from_guest() doesn't matter too
>> much,
>> although I would consider calling it as late as possible.
>>
> Mmmm... Can I ask why? In fact, I would have said "as soon as
> possible".

Because those functions only access data for the current vCPU/domain,
which is already protected by the fact that the domain is running.

By staying in the "quiesce" mode for longer, you give the RCU an
opportunity to release memory earlier.

In reality, it is probably still too early, as a pCPU can be considered
quiesced until a call to rcu_lock*() (such as rcu_lock_domain()).

But this would require some investigation to check whether we
effectively protect all the regions with the RCU helpers. This is
likely too complicated for 4.15.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Feb 02 15:26:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 15:26:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80601.147508 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6xZG-0001uR-VM; Tue, 02 Feb 2021 15:26:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80601.147508; Tue, 02 Feb 2021 15:26:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6xZG-0001uK-SM; Tue, 02 Feb 2021 15:26:14 +0000
Received: by outflank-mailman (input) for mailman id 80601;
 Tue, 02 Feb 2021 15:26:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=33Vw=HE=citrix.com=igor.druzhinin@srs-us1.protection.inumbo.net>)
 id 1l6xZF-0001uF-Iz
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 15:26:13 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id df92750e-b45a-4609-8554-b56b2bcd6ee3;
 Tue, 02 Feb 2021 15:26:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: df92750e-b45a-4609-8554-b56b2bcd6ee3
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 36576887
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,395,1602561600"; 
   d="scan'208";a="36576887"
Subject: Re: [PATCH] xen/netback: avoid race in
 xenvif_rx_ring_slots_available()
To: Juergen Gross <jgross@suse.com>, <xen-devel@lists.xenproject.org>,
	<netdev@vger.kernel.org>, <linux-kernel@vger.kernel.org>
CC: Wei Liu <wei.liu@kernel.org>, Paul Durrant <paul@xen.org>, "David S.
 Miller" <davem@davemloft.net>, Jakub Kicinski <kuba@kernel.org>,
	<stable@vger.kernel.org>
References: <20210202070938.7863-1-jgross@suse.com>
From: Igor Druzhinin <igor.druzhinin@citrix.com>
Message-ID: <c17d4e45-cad1-510d-0e7b-9d95af89ff01@citrix.com>
Date: Tue, 2 Feb 2021 15:26:03 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20210202070938.7863-1-jgross@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 02/02/2021 07:09, Juergen Gross wrote:
> Since commit 23025393dbeb3b8b3 ("xen/netback: use lateeoi irq binding")
> xenvif_rx_ring_slots_available() is no longer called only from the rx
> queue kernel thread, so it needs to access the rx queue with the
> associated queue lock held.
> 
> Reported-by: Igor Druzhinin <igor.druzhinin@citrix.com>
> Fixes: 23025393dbeb3b8b3 ("xen/netback: use lateeoi irq binding")
> Cc: stable@vger.kernel.org
> Signed-off-by: Juergen Gross <jgross@suse.com>

Appreciate the quick fix! Is this the only place where that sort of race
could happen now?

Igor


From xen-devel-bounces@lists.xenproject.org Tue Feb 02 15:57:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 15:57:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80614.147523 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6y2y-0004sV-Jl; Tue, 02 Feb 2021 15:56:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80614.147523; Tue, 02 Feb 2021 15:56:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6y2y-0004sO-Ge; Tue, 02 Feb 2021 15:56:56 +0000
Received: by outflank-mailman (input) for mailman id 80614;
 Tue, 02 Feb 2021 15:56:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Y9eS=HE=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1l6y2x-0004s2-Bs
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 15:56:55 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 0810bf4a-0376-4a38-baf6-545f2149b5f5;
 Tue, 02 Feb 2021 15:56:54 +0000 (UTC)
Received: from mail-wm1-f70.google.com (mail-wm1-f70.google.com
 [209.85.128.70]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-59-SAE8JIHWPN2Un753zj4IYA-1; Tue, 02 Feb 2021 10:56:50 -0500
Received: by mail-wm1-f70.google.com with SMTP id b201so1624354wmb.9
 for <xen-devel@lists.xenproject.org>; Tue, 02 Feb 2021 07:56:50 -0800 (PST)
Received: from x1w.redhat.com (7.red-83-57-171.dynamicip.rima-tde.net.
 [83.57.171.7])
 by smtp.gmail.com with ESMTPSA id u206sm3928158wme.12.2021.02.02.07.56.45
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 02 Feb 2021 07:56:46 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0810bf4a-0376-4a38-baf6-545f2149b5f5
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: Richard Henderson <richard.henderson@linaro.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Eduardo Habkost <ehabkost@redhat.com>,
	qemu-trivial@nongnu.org,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	xen-devel@lists.xenproject.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Subject: [PATCH] hw/i386/xen: Remove dead code
Date: Tue,  2 Feb 2021 16:56:44 +0100
Message-Id: <20210202155644.998812-1-philmd@redhat.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

'drivers_blacklisted' is never accessed, remove it.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 hw/i386/xen/xen_platform.c | 13 ++-----------
 1 file changed, 2 insertions(+), 11 deletions(-)

diff --git a/hw/i386/xen/xen_platform.c b/hw/i386/xen/xen_platform.c
index 7c4db35debb..01ae1fb1618 100644
--- a/hw/i386/xen/xen_platform.c
+++ b/hw/i386/xen/xen_platform.c
@@ -60,7 +60,6 @@ struct PCIXenPlatformState {
     MemoryRegion bar;
     MemoryRegion mmio_bar;
     uint8_t flags; /* used only for version_id == 2 */
-    int drivers_blacklisted;
     uint16_t driver_product_version;
 
     /* Log from guest drivers */
@@ -245,18 +244,10 @@ static void platform_fixed_ioport_writeb(void *opaque, uint32_t addr, uint32_t v
 
 static uint32_t platform_fixed_ioport_readw(void *opaque, uint32_t addr)
 {
-    PCIXenPlatformState *s = opaque;
-
     switch (addr) {
     case 0:
-        if (s->drivers_blacklisted) {
-            /* The drivers will recognise this magic number and refuse
-             * to do anything. */
-            return 0xd249;
-        } else {
-            /* Magic value so that you can identify the interface. */
-            return 0x49d2;
-        }
+        /* Magic value so that you can identify the interface. */
+        return 0x49d2;
     default:
         return 0xffff;
     }
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Feb 02 16:05:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 16:05:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80621.147541 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6yBQ-0006Tq-JU; Tue, 02 Feb 2021 16:05:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80621.147541; Tue, 02 Feb 2021 16:05:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6yBQ-0006Tj-Ed; Tue, 02 Feb 2021 16:05:40 +0000
Received: by outflank-mailman (input) for mailman id 80621;
 Tue, 02 Feb 2021 16:05:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XtUW=HE=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1l6yBO-0006Te-Lu
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 16:05:38 +0000
Received: from mail-wm1-x329.google.com (unknown [2a00:1450:4864:20::329])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3f2ff961-e135-408b-934e-95516fd27975;
 Tue, 02 Feb 2021 16:05:37 +0000 (UTC)
Received: by mail-wm1-x329.google.com with SMTP id y187so2774978wmd.3
 for <xen-devel@lists.xenproject.org>; Tue, 02 Feb 2021 08:05:37 -0800 (PST)
Received: from CBGR90WXYV0 (host86-190-149-163.range86-190.btcentralplus.com.
 [86.190.149.163])
 by smtp.gmail.com with ESMTPSA id c62sm3845103wmd.43.2021.02.02.08.05.34
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 02 Feb 2021 08:05:35 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3f2ff961-e135-408b-934e-95516fd27975
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: =?utf-8?Q?'Philippe_Mathieu-Daud=C3=A9'?= <philmd@redhat.com>,
	<qemu-devel@nongnu.org>
Cc: "'Richard Henderson'" <richard.henderson@linaro.org>,
	"'Paolo Bonzini'" <pbonzini@redhat.com>,
	"'Eduardo Habkost'" <ehabkost@redhat.com>,
	<qemu-trivial@nongnu.org>,
	"'Michael S. Tsirkin'" <mst@redhat.com>,
	"'Marcel Apfelbaum'" <marcel.apfelbaum@gmail.com>,
	<xen-devel@lists.xenproject.org>,
	"'Anthony Perard'" <anthony.perard@citrix.com>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>
References: <20210202155644.998812-1-philmd@redhat.com>
In-Reply-To: <20210202155644.998812-1-philmd@redhat.com>
Subject: RE: [PATCH] hw/i386/xen: Remove dead code
Date: Tue, 2 Feb 2021 16:05:34 -0000
Message-ID: <036801d6f97d$3d9f0bf0$b8dd23d0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQHU213CX1E5gjnLtJG8R95suaIaVKpJQttA
Content-Language: en-gb

> -----Original Message-----
> From: Philippe Mathieu-Daudé <philmd@redhat.com>
> Sent: 02 February 2021 15:57
> To: qemu-devel@nongnu.org
> Cc: Richard Henderson <richard.henderson@linaro.org>; Paolo Bonzini <pbonzini@redhat.com>; Eduardo
> Habkost <ehabkost@redhat.com>; qemu-trivial@nongnu.org; Michael S. Tsirkin <mst@redhat.com>; Marcel
> Apfelbaum <marcel.apfelbaum@gmail.com>; xen-devel@lists.xenproject.org; Paul Durrant <paul@xen.org>;
> Anthony Perard <anthony.perard@citrix.com>; Stefano Stabellini <sstabellini@kernel.org>; Philippe
> Mathieu-Daudé <philmd@redhat.com>
> Subject: [PATCH] hw/i386/xen: Remove dead code
> 
> 'drivers_blacklisted' is never accessed, remove it.
> 
> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>

FTR this is a vestige of an ancient mechanism that's not used any more
(see https://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=docs/misc/hvm-emulated-unplug.pandoc
step 5).

Reviewed-by: Paul Durrant <paul@xen.org>

> ---
>  hw/i386/xen/xen_platform.c | 13 ++-----------
>  1 file changed, 2 insertions(+), 11 deletions(-)
> 
> diff --git a/hw/i386/xen/xen_platform.c b/hw/i386/xen/xen_platform.c
> index 7c4db35debb..01ae1fb1618 100644
> --- a/hw/i386/xen/xen_platform.c
> +++ b/hw/i386/xen/xen_platform.c
> @@ -60,7 +60,6 @@ struct PCIXenPlatformState {
>      MemoryRegion bar;
>      MemoryRegion mmio_bar;
>      uint8_t flags; /* used only for version_id == 2 */
> -    int drivers_blacklisted;
>      uint16_t driver_product_version;
> 
>      /* Log from guest drivers */
> @@ -245,18 +244,10 @@ static void platform_fixed_ioport_writeb(void *opaque, uint32_t addr, uint32_t v
> 
>  static uint32_t platform_fixed_ioport_readw(void *opaque, uint32_t addr)
>  {
> -    PCIXenPlatformState *s = opaque;
> -
>      switch (addr) {
>      case 0:
> -        if (s->drivers_blacklisted) {
> -            /* The drivers will recognise this magic number and refuse
> -             * to do anything. */
> -            return 0xd249;
> -        } else {
> -            /* Magic value so that you can identify the interface. */
> -            return 0x49d2;
> -        }
> +        /* Magic value so that you can identify the interface. */
> +        return 0x49d2;
>      default:
>          return 0xffff;
>      }
> --
> 2.26.2




From xen-devel-bounces@lists.xenproject.org Tue Feb 02 16:07:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 16:07:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80622.147552 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6yDP-0006b3-UG; Tue, 02 Feb 2021 16:07:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80622.147552; Tue, 02 Feb 2021 16:07:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6yDP-0006aw-RC; Tue, 02 Feb 2021 16:07:43 +0000
Received: by outflank-mailman (input) for mailman id 80622;
 Tue, 02 Feb 2021 16:07:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=g09W=HE=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1l6yDP-0006ar-6I
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 16:07:43 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com (unknown
 [40.107.20.61]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 229f7e3a-1167-46bc-bf03-a4129ff3c3e8;
 Tue, 02 Feb 2021 16:07:41 +0000 (UTC)
Received: from DB6P191CA0007.EURP191.PROD.OUTLOOK.COM (2603:10a6:6:28::17) by
 VI1PR08MB3471.eurprd08.prod.outlook.com (2603:10a6:803:7d::22) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3805.19; Tue, 2 Feb 2021 16:07:27 +0000
Received: from DB5EUR03FT034.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:6:28:cafe::69) by DB6P191CA0007.outlook.office365.com
 (2603:10a6:6:28::17) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3805.17 via Frontend
 Transport; Tue, 2 Feb 2021 16:07:26 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT034.mail.protection.outlook.com (10.152.20.87) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3784.11 via Frontend Transport; Tue, 2 Feb 2021 16:07:26 +0000
Received: ("Tessian outbound 8418c949a3fa:v71");
 Tue, 02 Feb 2021 16:07:26 +0000
Received: from 73672a4cf0c8.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 32A22B64-DDE4-4298-900A-5E9B8C8AA937.1; 
 Tue, 02 Feb 2021 16:07:19 +0000
Received: from EUR02-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 73672a4cf0c8.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 02 Feb 2021 16:07:19 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com (2603:10a6:10:49::10)
 by DB7PR08MB3881.eurprd08.prod.outlook.com (2603:10a6:10:77::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3805.19; Tue, 2 Feb
 2021 16:07:17 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::f5c1:9694:9263:d90d]) by DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::f5c1:9694:9263:d90d%2]) with mapi id 15.20.3805.027; Tue, 2 Feb 2021
 16:07:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 229f7e3a-1167-46bc-bf03-a4129ff3c3e8
From: Rahul Singh <Rahul.Singh@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: xen-devel <xen-devel@lists.xenproject.org>, Julien Grall <julien@xen.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, "brian.woods@xilinx.com"
	<brian.woods@xilinx.com>, Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: Re: [PATCH v3 1/3] arm,smmu: switch to using iommu_fwspec functions
Thread-Topic: [PATCH v3 1/3] arm,smmu: switch to using iommu_fwspec functions
Thread-Index: AQHW9DbwAflFP7R5g0GKnact3rvcxKpFEoUA
Date: Tue, 2 Feb 2021 16:07:17 +0000
Message-ID: <DF3A4D93-17BE-4E59-92B1-5BA356C22EB1@arm.com>
References: <alpine.DEB.2.21.2101261435550.2568@sstabellini-ThinkPad-T480s>
 <20210126225836.6017-1-sstabellini@kernel.org>
In-Reply-To: <20210126225836.6017-1-sstabellini@kernel.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [80.1.41.211]
x-ms-publictraffictype: Email
Content-Type: text/plain; charset="us-ascii"
Content-ID: <4E0BD89213380545A308C490C1AC9619@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3881
Original-Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT034.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	e6c6a53f-5427-46bd-ac85-08d8c7949ce9
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	TrTZaXtHNlcQJwiKHqTWhSjj0tgEEv3DHuxKS0yDJ526kx7ySUplL+Tn6vpXjq8WsvH5Wog5+Sq7JpLxrenz5kOi+oNXj/7rKpKYT7r5t5FXmnBJlEV7BLJyIFlDPcKthVYhtIypzzxtfOyAANKxdNofBxJCHs9GQzN0LbeM10D9ea2JLaVCKbMP5my3seWUdB8Vdw/6J1GuHRYB995DtGRFrKSrAjiBjJwqC1eRCd7vFvdJ4Mn6Ani1va/hBDRM9nnnwns4mt7vuq+5a3Aiz06s/nF7tTxs7RsvAi3emOrDWTdG50X2107jGCksuI3zG9PffXcKyb7TSuWNXGTABYtmo6quNbn1OrM5FNiHMilzcuUMhsa4FYyVR8uO0B2+ylL1wFThB3PP+FnEUJuMiftGjJTQOlVHRBlZDdSTgtwHFCjJbCpMITZNyEMDoY+0eJ2Cenb1kKksrPsu8SVbzkTBfPkuQhmpzo9Im28DWBcvZueysOXtnW3b/A4K/3f/3NDSJXb5BQLMTGKHDwZtEJn0lZFIeTz6wOMRbzluzoCOeh2JQ0nfWoJ/VTKBVQYOikSqmacCiVBaG7GOWLNLenllDSVSO5xmT49XzAe+NzlGntK8qbEEudyswiF0PAhdMts85WXe9onNFdiM0ZffvEqksQEuq/dRLDyb546Q5/Y=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(376002)(396003)(346002)(39860400002)(136003)(46966006)(36840700001)(33656002)(30864003)(86362001)(6486002)(6512007)(2616005)(47076005)(6862004)(107886003)(186003)(2906002)(4326008)(26005)(53546011)(316002)(478600001)(8936002)(54906003)(83380400001)(6506007)(82740400003)(36860700001)(356005)(8676002)(36756003)(81166007)(5660300002)(70586007)(82310400003)(336012)(70206006);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Feb 2021 16:07:26.8375
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d33b0680-b2c6-4166-49e5-08d8c794a282
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT034.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB3471

Hello Stefano,

> On 26 Jan 2021, at 10:58 pm, Stefano Stabellini <sstabellini@kernel.org> wrote:
> 
> From: Brian Woods <brian.woods@xilinx.com>
> 
> Modify the smmu driver so that it uses the iommu_fwspec helper
> functions.  This means both ARM IOMMU drivers will use the
> iommu_fwspec helper functions, making enabling generic device tree
> bindings in the SMMU driver much cleaner.
> 
> Signed-off-by: Brian Woods <brian.woods@xilinx.com>
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>

Reviewed-by: Rahul Singh <rahul.singh@arm.com>
Tested-by: Rahul Singh <rahul.singh@arm.com>

Regards,
Rahul
> ---
> Changes in v3:
> - add a comment in iommu_add_dt_device
> - don't allocate fwspec twice in arm_smmu_add_device
> - reuse existing fwspec pointer, don't add a second one
> - add comment about supporting fwspec at the top of the file
> ---
> xen/drivers/passthrough/arm/smmu.c    | 98 ++++++++++++++++-----------
> xen/drivers/passthrough/device_tree.c |  7 ++
> 2 files changed, 66 insertions(+), 39 deletions(-)
> 
> diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
> index 3e8aa37866..3898d1d737 100644
> --- a/xen/drivers/passthrough/arm/smmu.c
> +++ b/xen/drivers/passthrough/arm/smmu.c
> @@ -32,6 +32,9 @@
>  *	- 4k and 64k pages, with contiguous pte hints.
>  *	- Up to 48-bit addressing (dependent on VA_BITS)
>  *	- Context fault reporting
> + *
> + * Changes compared to Linux driver:
> + *	- support for fwspec
>  */
> 
> 
> @@ -49,6 +52,7 @@
> #include <asm/atomic.h>
> #include <asm/device.h>
> #include <asm/io.h>
> +#include <asm/iommu_fwspec.h>
> #include <asm/platform.h>
> 
> /* Xen: The below defines are redefined within the file. Undef it */
> @@ -302,9 +306,6 @@ static struct iommu_group *iommu_group_get(struct device *dev)
> 
> /***** Start of Linux SMMU code *****/
> 
> -/* Maximum number of stream IDs assigned to a single device */
> -#define MAX_MASTER_STREAMIDS		MAX_PHANDLE_ARGS
> -
> /* Maximum number of context banks per SMMU */
> #define ARM_SMMU_MAX_CBS		128
> 
> @@ -597,8 +598,6 @@ struct arm_smmu_smr {
> };
> 
> struct arm_smmu_master_cfg {
> -	int				num_streamids;
> -	u16				streamids[MAX_MASTER_STREAMIDS];
> 	struct arm_smmu_smr		*smrs;
> };
> 
> @@ -686,6 +685,14 @@ static struct arm_smmu_option_prop arm_smmu_options[] = {
> 	{ 0, NULL},
> };
> 
> +static inline struct iommu_fwspec *
> +arm_smmu_get_fwspec(struct arm_smmu_master_cfg *cfg)
> +{
> +	struct arm_smmu_master *master = container_of(cfg,
> +			                                      struct arm_smmu_master, cfg);
> +	return dev_iommu_fwspec_get(&master->of_node->dev);
> +}
> +
> static void parse_driver_options(struct arm_smmu_device *smmu)
> {
> 	int i = 0;
> @@ -779,8 +786,9 @@ static int register_smmu_master(struct arm_smmu_device *smmu,
> 				struct device *dev,
> 				struct of_phandle_args *masterspec)
> {
> -	int i;
> +	int i, ret = 0;
> 	struct arm_smmu_master *master;
> +	struct iommu_fwspec *fwspec;
> 
> 	master = find_smmu_master(smmu, masterspec->np);
> 	if (master) {
> @@ -790,34 +798,37 @@ static int register_smmu_master(struct arm_smmu_device *smmu,
> 		return -EBUSY;
> 	}
> 
> -	if (masterspec->args_count > MAX_MASTER_STREAMIDS) {
> -		dev_err(dev,
> -			"reached maximum number (%d) of stream IDs for master device %s\n",
> -			MAX_MASTER_STREAMIDS, masterspec->np->name);
> -		return -ENOSPC;
> -	}
> -
> 	master = devm_kzalloc(dev, sizeof(*master), GFP_KERNEL);
> 	if (!master)
> 		return -ENOMEM;
> +	master->of_node = masterspec->np;
> 
> -	master->of_node			= masterspec->np;
> -	master->cfg.num_streamids	= masterspec->args_count;
> +	ret = iommu_fwspec_init(&master->of_node->dev, smmu->dev);
> +	if (ret) {
> +		kfree(master);
> +		return ret;
> +	}
> +	fwspec = dev_iommu_fwspec_get(dev);
> +
> +	/* adding the ids here */
> +	ret = iommu_fwspec_add_ids(&masterspec->np->dev,
> +				   masterspec->args,
> +				   masterspec->args_count);
> +	if (ret)
> +		return ret;
> 
> 	/* Xen: Let Xen know that the device is protected by an SMMU */
> 	dt_device_set_protected(masterspec->np);
> 
> -	for (i = 0; i < master->cfg.num_streamids; ++i) {
> -		u16 streamid = masterspec->args[i];
> -
> -		if (!(smmu->features & ARM_SMMU_FEAT_STREAM_MATCH) &&
> -		     (streamid >= smmu->num_mapping_groups)) {
> -			dev_err(dev,
> -				"stream ID for master device %s greater than maximum allowed (%d)\n",
> -				masterspec->np->name, smmu->num_mapping_groups);
> -			return -ERANGE;
> +	if (!(smmu->features & ARM_SMMU_FEAT_STREAM_MATCH)) {
> +		for (i = 0; i < fwspec->num_ids; ++i) {
> +			if (masterspec->args[i] >= smmu->num_mapping_groups) {
> +				dev_err(dev,
> +					"stream ID for master device %s greater than maximum allowed (%d)\n",
> +					masterspec->np->name, smmu->num_mapping_groups);
> +				return -ERANGE;
> +			}
> 		}
> -		master->cfg.streamids[i] = streamid;
> 	}
> 	return insert_smmu_master(smmu, master);
> }
> @@ -1390,6 +1401,7 @@ static int arm_smmu_master_configure_smrs(struct arm_smmu_device *smmu,
> 	int i;
> 	struct arm_smmu_smr *smrs;
> 	void __iomem *gr0_base = ARM_SMMU_GR0(smmu);
> +	struct iommu_fwspec *fwspec = arm_smmu_get_fwspec(cfg);
> 
> 	if (!(smmu->features & ARM_SMMU_FEAT_STREAM_MATCH))
> 		return 0;
> @@ -1397,15 +1409,14 @@ static int arm_smmu_master_configure_smrs(struct arm_smmu_device *smmu,
> 	if (cfg->smrs)
> 		return -EEXIST;
> 
> -	smrs = kmalloc_array(cfg->num_streamids, sizeof(*smrs), GFP_KERNEL);
> +	smrs = kmalloc_array(fwspec->num_ids, sizeof(*smrs), GFP_KERNEL);
> 	if (!smrs) {
> -		dev_err(smmu->dev, "failed to allocate %d SMRs\n",
> -			cfg->num_streamids);
> +		dev_err(smmu->dev, "failed to allocate %d SMRs\n", fwspec->num_ids);
> 		return -ENOMEM;
> 	}
> 
> 	/* Allocate the SMRs on the SMMU */
> -	for (i = 0; i < cfg->num_streamids; ++i) {
> +	for (i = 0; i < fwspec->num_ids; ++i) {
> 		int idx = __arm_smmu_alloc_bitmap(smmu->smr_map, 0,
> 						  smmu->num_mapping_groups);
> 		if (IS_ERR_VALUE(idx)) {
> @@ -1416,12 +1427,12 @@ static int arm_smmu_master_configure_smrs(struct arm_smmu_device *smmu,
> 		smrs[i] = (struct arm_smmu_smr) {
> 			.idx	= idx,
> 			.mask	= 0, /* We don't currently share SMRs */
> -			.id	= cfg->streamids[i],
> +			.id	= fwspec->ids[i],
> 		};
> 	}
> 
> 	/* It worked! Now, poke the actual hardware */
> -	for (i = 0; i < cfg->num_streamids; ++i) {
> +	for (i = 0; i < fwspec->num_ids; ++i) {
> 		u32 reg = SMR_VALID | smrs[i].id << SMR_ID_SHIFT |
> 			  smrs[i].mask << SMR_MASK_SHIFT;
> 		writel_relaxed(reg, gr0_base + ARM_SMMU_GR0_SMR(smrs[i].idx));
> @@ -1443,12 +1454,13 @@ static void arm_smmu_master_free_smrs(struct arm_smmu_device *smmu,
> 	int i;
> 	void __iomem *gr0_base = ARM_SMMU_GR0(smmu);
> 	struct arm_smmu_smr *smrs = cfg->smrs;
> +	struct iommu_fwspec *fwspec = arm_smmu_get_fwspec(cfg);
> 
> 	if (!smrs)
> 		return;
> 
> 	/* Invalidate the SMRs before freeing back to the allocator */
> -	for (i = 0; i < cfg->num_streamids; ++i) {
> +	for (i = 0; i < fwspec->num_ids; ++i) {
> 		u8 idx = smrs[i].idx;
> 
> 		writel_relaxed(~SMR_VALID, gr0_base + ARM_SMMU_GR0_SMR(idx));
> @@ -1465,16 +1477,17 @@ static int arm_smmu_domain_add_master(struct arm_smmu_domain *smmu_domain,
> 	int i, ret;
> 	struct arm_smmu_device *smmu = smmu_domain->smmu;
> 	void __iomem *gr0_base = ARM_SMMU_GR0(smmu);
> +	struct iommu_fwspec *fwspec = arm_smmu_get_fwspec(cfg);
> 
> 	/* Devices in an IOMMU group may already be configured */
> 	ret = arm_smmu_master_configure_smrs(smmu, cfg);
> 	if (ret)
> 		return ret == -EEXIST ? 0 : ret;
> 
> -	for (i = 0; i < cfg->num_streamids; ++i) {
> +	for (i = 0; i < fwspec->num_ids; ++i) {
> 		u32 idx, s2cr;
> 
> -		idx = cfg->smrs ? cfg->smrs[i].idx : cfg->streamids[i];
> +		idx = cfg->smrs ? cfg->smrs[i].idx : fwspec->ids[i];
> 		s2cr = S2CR_TYPE_TRANS |
> 		       (smmu_domain->cfg.cbndx << S2CR_CBNDX_SHIFT);
> 		writel_relaxed(s2cr, gr0_base + ARM_SMMU_GR0_S2CR(idx));
> @@ -1489,6 +1502,7 @@ static void arm_smmu_domain_remove_master(struct arm_smmu_domain *smmu_domain,
> 	int i;
> 	struct arm_smmu_device *smmu = smmu_domain->smmu;
> 	void __iomem *gr0_base = ARM_SMMU_GR0(smmu);
> +	struct iommu_fwspec *fwspec = arm_smmu_get_fwspec(cfg);
> 
> 	/* An IOMMU group is torn down by the first device to be removed */
> 	if ((smmu->features & ARM_SMMU_FEAT_STREAM_MATCH) && !cfg->smrs)
> @@ -1499,8 +1513,8 @@ static void arm_smmu_domain_remove_master(struct arm_smmu_domain *smmu_domain,
> 	 * that it can be re-allocated immediately.
> 	 * Xen: Unlike Linux, any access to non-configured stream will fault.
> 	 */
> -	for (i = 0; i < cfg->num_streamids; ++i) {
> -		u32 idx = cfg->smrs ? cfg->smrs[i].idx : cfg->streamids[i];
> +	for (i = 0; i < fwspec->num_ids; ++i) {
> +		u32 idx = cfg->smrs ? cfg->smrs[i].idx : fwspec->ids[i];
> 
> 		writel_relaxed(S2CR_TYPE_FAULT,
> 			       gr0_base + ARM_SMMU_GR0_S2CR(idx));
> @@ -1903,9 +1917,9 @@ static int arm_smmu_add_device(struct device *dev)
> 	struct arm_smmu_device *smmu;
> 	struct arm_smmu_master_cfg *cfg;
> 	struct iommu_group *group;
> +	struct iommu_fwspec *fwspec;
> 	void (*releasefn)(void *) = NULL;
> 	int ret;
> -
> 	smmu = find_smmu_for_device(dev);
> 	if (!smmu)
> 		return -ENODEV;
> @@ -1925,13 +1939,19 @@ static int arm_smmu_add_device(struct device *dev)
> 			goto out_put_group;
> 		}
> 
> -		cfg->num_streamids = 1;
> +		ret = iommu_fwspec_init(dev, smmu->dev);
> +		if (ret) {
> +			kfree(cfg);
> +			goto out_put_group;
> +		}
> +		fwspec = dev_iommu_fwspec_get(dev);
> +
> 		/*
> 		 * Assume Stream ID == Requester ID for now.
> 		 * We need a way to describe the ID mappings in FDT.
> 		 */
> 		pci_for_each_dma_alias(pdev, __arm_smmu_get_pci_sid,
> -				       &cfg->streamids[0]);
> +				       &fwspec->ids[0]);
> 		releasefn = __arm_smmu_release_pci_iommudata;
> 	} else {
> 		struct arm_smmu_master *master;
> diff --git a/xen/drivers/passthrough/device_tree.c b/xen/drivers/passthrough/device_tree.c
> index 999b831d90..a51ae3c9c3 100644
> --- a/xen/drivers/passthrough/device_tree.c
> +++ b/xen/drivers/passthrough/device_tree.c
> @@ -140,6 +140,13 @@ int iommu_add_dt_device(struct dt_device_node *np)
>     if ( !ops )
>         return -EINVAL;
> 
> +	/*
> +	 * This is needed in case a device has both the iommus property and
> +	 * also appears in the mmu-masters list.
> +	 */
> +    if ( dt_device_is_protected(np) )
> +        return 0;
> +
>     if ( dev_iommu_fwspec_get(dev) )
>         return -EEXIST;
> 
> -- 
> 2.17.1
> 
> 



From xen-devel-bounces@lists.xenproject.org Tue Feb 02 16:07:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 16:07:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80623.147564 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6yDZ-0006eZ-Aw; Tue, 02 Feb 2021 16:07:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80623.147564; Tue, 02 Feb 2021 16:07:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6yDZ-0006eS-7y; Tue, 02 Feb 2021 16:07:53 +0000
Received: by outflank-mailman (input) for mailman id 80623;
 Tue, 02 Feb 2021 16:07:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=g09W=HE=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1l6yDX-0006dn-TG
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 16:07:51 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com (unknown
 [40.107.15.54]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 712f1330-0ad1-430e-a5de-7c06e6687c7f;
 Tue, 02 Feb 2021 16:07:50 +0000 (UTC)
Received: from DB6PR07CA0014.eurprd07.prod.outlook.com (2603:10a6:6:2d::24) by
 VI1PR0801MB1856.eurprd08.prod.outlook.com (2603:10a6:800:57::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3805.23; Tue, 2 Feb
 2021 16:07:28 +0000
Received: from DB5EUR03FT023.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:6:2d:cafe::8f) by DB6PR07CA0014.outlook.office365.com
 (2603:10a6:6:2d::24) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3825.8 via Frontend
 Transport; Tue, 2 Feb 2021 16:07:27 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT023.mail.protection.outlook.com (10.152.20.68) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3784.11 via Frontend Transport; Tue, 2 Feb 2021 16:07:27 +0000
Received: ("Tessian outbound 2b57fdd78668:v71");
 Tue, 02 Feb 2021 16:07:27 +0000
Received: from d6c15b7036ca.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 DDD9D202-4167-4623-94B8-1EF00DCDA1B8.1; 
 Tue, 02 Feb 2021 16:07:10 +0000
Received: from EUR02-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id d6c15b7036ca.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 02 Feb 2021 16:07:10 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com (2603:10a6:10:49::10)
 by DB7PR08MB3881.eurprd08.prod.outlook.com (2603:10a6:10:77::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3805.19; Tue, 2 Feb
 2021 16:07:08 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::f5c1:9694:9263:d90d]) by DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::f5c1:9694:9263:d90d%2]) with mapi id 15.20.3805.027; Tue, 2 Feb 2021
 16:07:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 712f1330-0ad1-430e-a5de-7c06e6687c7f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=EztLR4nuMtzO5tLprYC7gPR+MPCHYmw5ocLwT05Q2N0=;
 b=163jTqINCBgbw7yg73JJMqKadFyoKRDB7WllaLyfow20pieqkPl9AqbnopSAPo4+n17fZc+fOw9i0EsQF8RqDNHwm6bs7m5bTSLzWVXAZH98HoSyNtuJP2hnUZA88JgvYpyz3xa1PAcK6r69NmPOWXIdanFODegcozlNbu6+k3I=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 4455107c76d9732c
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=SpvafVGqkJpJSh6afI15GpR8s3kRyN6EGxe4rc8v5APehPeZpHlno1WCj8bWUzF7njIUl209uCdLlV7FwD2fm+9w2QgKODaTn1/rdpnHZ/9TmZRuWdaE0l/5hGYQ4sDAM0ErqIIwT27qns/9giZTmpJHh41ka53mj/ftDAJ+z2meDF7bXcQ7HPAvPSZVD127e65JyRruGc8HWd8B4Gd2bm1I/nzpptAu8/JjzWPJ2D+kmzmAnuBGd4jX5jfBAKJOuGhD1ioVztPvw3JHZsnTxVLMIl20VgA+AZ+6YLzWyDEVkBiex9H4vsPuJeGmLbUuGZ68Pl4gAlyKvdjtxRlzkA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=EztLR4nuMtzO5tLprYC7gPR+MPCHYmw5ocLwT05Q2N0=;
 b=nVihpXCYkk7NDXLdwRvSlAu0urF0QMiVk0JokrXdy/h668Se7U8P5BfgPmNbuJ5VjbZuWfOn903A7gWDg6g7BZwmd2pmCTiqsYumwdwNmA/BkxbR6sRVppFe1EIJcSuaWJnQ9cGR1qOgU8FOGZfRy+RHywkr0D/h1gWqq1mdeGu/Dy9WtNFG142694aWf9OFniwjyaqPkj0bkceU1HDO2pTpPVKpxYEkl3rIWxc4roRprT3cfjl5zYlHLzqyz/bn786XNnZIBrvWdOAAmZFGP4vNlN7yVAWF9oslFXgRK2ouE39myL+iGgFUER1EAB5Peq/iceiiyBnvPBCnyeHVDg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Rahul Singh <Rahul.Singh@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: xen-devel <xen-devel@lists.xenproject.org>, Julien Grall <julien@xen.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, "Volodymyr_Babchuk@epam.com"
	<Volodymyr_Babchuk@epam.com>, "brian.woods@xilinx.com"
	<brian.woods@xilinx.com>
Subject: Re: [PATCH v3 0/3] Generic SMMU Bindings
Thread-Topic: [PATCH v3 0/3] Generic SMMU Bindings
Thread-Index: AQHW9DbnaDtcsjSAAUiKSXHzYBU+iapFEnqA
Date: Tue, 2 Feb 2021 16:07:08 +0000
Message-ID: <C094E054-885F-4363-ABF3-E0FB4DDD7A2A@arm.com>
References: <alpine.DEB.2.21.2101261435550.2568@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2101261435550.2568@sstabellini-ThinkPad-T480s>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [80.1.41.211]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 68d91f85-cb66-414c-ceda-08d8c794a2bb
x-ms-traffictypediagnostic: DB7PR08MB3881:|VI1PR0801MB1856:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR0801MB1856F20691B2535090A0326DFCB59@VI1PR0801MB1856.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:5516;OLM:5516;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <832E06F993C0A344BE7B3AA325D89B2C@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3881
Original-Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT023.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	6a9e2212-3c77-4b15-837b-08d8c794975e
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Feb 2021 16:07:27.2139
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 68d91f85-cb66-414c-ceda-08d8c794a2bb
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT023.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0801MB1856

Hello Stefano,

> On 26 Jan 2021, at 10:58 pm, Stefano Stabellini <sstabellini@kernel.org> wrote:
> 
> Hi all,
> 
> This series introduces support for the generic SMMU bindings to
> xen/drivers/passthrough/arm/smmu.c.
> 
> The last version of the series was
> https://marc.info/?l=xen-devel&m=159539053406643
> 
> I realize that it is late for 4.15 -- I think it is OK if this series
> goes in afterwards.

I tested the series on the Juno board and it is working fine.
I found one issue in the SMMU driver while testing this series; it is not
related to this series but is a pre-existing issue in the SMMU driver.

If more than one device sits behind the SMMU and the devices share the same
stream ID, the SMMU driver creates a new SMR entry without first checking
whether an SMR entry for that stream ID is already configured. Because of
this I observed stream match conflicts on the Juno board.

(XEN) smmu: /iommu@7fb30000: Unexpected global fault, this could be serious
(XEN) smmu: /iommu@7fb30000: 	GFSR 0x00000004, GFSYNR0 0x00000006, GFSYNR1 0x00000000, GFSYNR2 0x00000000


The two patches below need to be ported from the Linux driver to Xen to fix
the issue:

https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/drivers/iommu/arm-smmu.c?h=linux-5.8.y&id=1f3d5ca43019bff1105838712d55be087d93c0da
https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/drivers/iommu/arm-smmu.c?h=linux-5.8.y&id=21174240e4f4439bb8ed6c116cdbdc03eba2126e

Regards,
Rahul
> 
> Cheers,
> 
> Stefano
> 
> 
> Brian Woods (3):
>      arm,smmu: switch to using iommu_fwspec functions
>      arm,smmu: restructure code in preparation to new bindings support
>      arm,smmu: add support for generic DT bindings. Implement add_device and dt_xlate.
> 
> xen/drivers/passthrough/arm/smmu.c    | 162 ++++++++++++++++++++++++----------
> xen/drivers/passthrough/device_tree.c |  24 ++---
> 2 files changed, 123 insertions(+), 63 deletions(-)



From xen-devel-bounces@lists.xenproject.org Tue Feb 02 16:09:49 2021
From: Rahul Singh <Rahul.Singh@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: xen-devel <xen-devel@lists.xenproject.org>, Julien Grall <julien@xen.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, "Volodymyr_Babchuk@epam.com"
	<Volodymyr_Babchuk@epam.com>, "brian.woods@xilinx.com"
	<brian.woods@xilinx.com>, Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: Re: [PATCH v3 3/3] arm,smmu: add support for generic DT bindings.
 Implement add_device and dt_xlate.
Thread-Topic: [PATCH v3 3/3] arm,smmu: add support for generic DT bindings.
 Implement add_device and dt_xlate.
Thread-Index: AQHW9DbZcc+nHEHmhUCG1Y8eIX96RapFEyUA
Date: Tue, 2 Feb 2021 16:09:31 +0000
Message-ID: <446A1B3B-4465-4AB1-91BC-17DFB32B3A42@arm.com>
References: <alpine.DEB.2.21.2101261435550.2568@sstabellini-ThinkPad-T480s>
 <20210126225836.6017-3-sstabellini@kernel.org>
In-Reply-To: <20210126225836.6017-3-sstabellini@kernel.org>
Accept-Language: en-US
Content-Language: en-US

Hello Stefano,

> On 26 Jan 2021, at 10:58 pm, Stefano Stabellini <sstabellini@kernel.org> wrote:
>
> From: Brian Woods <brian.woods@xilinx.com>
>
> Now that all arm iommu drivers support generic bindings we can remove
> the workaround from iommu_add_dt_device().
>
> Note that if both legacy bindings and generic bindings are present in
> device tree, the legacy bindings are the ones that are used.
>
> Signed-off-by: Brian Woods <brian.woods@xilinx.com>
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>

Reviewed-by: Rahul Singh <rahul.singh@arm.com>
Tested-by: Rahul Singh <rahul.singh@arm.com>

Regards,
Rahul
> ---
> Changes in v3:
> - split patch
> - make find_smmu return non-const so that we can use it in arm_smmu_dt_add_device_generic
> - use dt_phandle_args
> - update commit message
> ---
> xen/drivers/passthrough/arm/smmu.c    | 40 ++++++++++++++++++++++++++-
> xen/drivers/passthrough/device_tree.c | 17 +-----------
> 2 files changed, 40 insertions(+), 17 deletions(-)
>
> diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
> index 9687762283..620ba5a4b5 100644
> --- a/xen/drivers/passthrough/arm/smmu.c
> +++ b/xen/drivers/passthrough/arm/smmu.c
> @@ -254,6 +254,8 @@ struct iommu_group
> 	atomic_t ref;
> };
>
> +static struct arm_smmu_device *find_smmu(const struct device *dev);
> +
> static struct iommu_group *iommu_group_alloc(void)
> {
> 	struct iommu_group *group = xzalloc(struct iommu_group);
> @@ -843,6 +845,40 @@ static int register_smmu_master(struct arm_smmu_device *smmu,
> 					     fwspec);
> }
>
> +static int arm_smmu_dt_add_device_generic(u8 devfn, struct device *dev)
> +{
> +	struct arm_smmu_device *smmu;
> +	struct iommu_fwspec *fwspec;
> +
> +	fwspec = dev_iommu_fwspec_get(dev);
> +	if (fwspec == NULL)
> +		return -ENXIO;
> +
> +	smmu = find_smmu(fwspec->iommu_dev);
> +	if (smmu == NULL)
> +		return -ENXIO;
> +
> +	return arm_smmu_dt_add_device_legacy(smmu, dev, fwspec);
> +}
> +
> +static int arm_smmu_dt_xlate_generic(struct device *dev,
> +				    const struct dt_phandle_args *spec)
> +{
> +	uint32_t mask, fwid = 0;
> +
> +	if (spec->args_count > 0)
> +		fwid |= (SMR_ID_MASK & spec->args[0]) << SMR_ID_SHIFT;
> +
> +	if (spec->args_count > 1)
> +		fwid |= (SMR_MASK_MASK & spec->args[1]) << SMR_MASK_SHIFT;
> +	else if (!of_property_read_u32(spec->np, "stream-match-mask", &mask))
> +		fwid |= (SMR_MASK_MASK & mask) << SMR_MASK_SHIFT;
> +
> +	return iommu_fwspec_add_ids(dev,
> +				    &fwid,
> +				    1);
> +}
> +
> static struct arm_smmu_device *find_smmu_for_device(struct device *dev)
> {
> 	struct arm_smmu_device *smmu;
> @@ -2766,6 +2802,7 @@ static void arm_smmu_iommu_domain_teardown(struct domain *d)
> static const struct iommu_ops arm_smmu_iommu_ops = {
>     .init = arm_smmu_iommu_domain_init,
>     .hwdom_init = arm_smmu_iommu_hwdom_init,
> +    .add_device = arm_smmu_dt_add_device_generic,
>     .teardown = arm_smmu_iommu_domain_teardown,
>     .iotlb_flush = arm_smmu_iotlb_flush,
>     .iotlb_flush_all = arm_smmu_iotlb_flush_all,
> @@ -2773,9 +2810,10 @@ static const struct iommu_ops arm_smmu_iommu_ops = {
>     .reassign_device = arm_smmu_reassign_dev,
>     .map_page = arm_iommu_map_page,
>     .unmap_page = arm_iommu_unmap_page,
> +    .dt_xlate = arm_smmu_dt_xlate_generic,
> };
>
> -static __init const struct arm_smmu_device *find_smmu(const struct device *dev)
> +static struct arm_smmu_device *find_smmu(const struct device *dev)
> {
> 	struct arm_smmu_device *smmu;
> 	bool found = false;
> diff --git a/xen/drivers/passthrough/device_tree.c b/xen/drivers/passthrough/device_tree.c
> index a51ae3c9c3..ae07f272e1 100644
> --- a/xen/drivers/passthrough/device_tree.c
> +++ b/xen/drivers/passthrough/device_tree.c
> @@ -162,22 +162,7 @@ int iommu_add_dt_device(struct dt_device_node *np)
>          * these callback implemented.
>          */
>         if ( !ops->add_device || !ops->dt_xlate )
> -        {
> -            /*
> -             * Some Device Trees may expose both legacy SMMU and generic
> -             * IOMMU bindings together. However, the SMMU driver is only
> -             * supporting the former and will protect them during the
> -             * initialization. So we need to skip them and not return
> -             * error here.
> -             *
> -             * XXX: This can be dropped when the SMMU is able to deal
> -             * with generic bindings.
> -             */
> -            if ( dt_device_is_protected(np) )
> -                return 0;
> -            else
> -                return -EINVAL;
> -        }
> +            return -EINVAL;
>
>         if ( !dt_device_is_available(iommu_spec.np) )
>             break;
> --
> 2.17.1
>
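[Editor's note: the `arm_smmu_dt_xlate_generic()` hunk in this patch packs the cells of an `iommus` specifier (stream ID, optional stream-match mask) into one firmware ID. The bit-packing can be sketched standalone as below; the `SMR_*` shift and mask values are illustrative stand-ins for the driver's actual constants.]

```c
#include <stdint.h>

/* Illustrative stand-ins; the real SMR_* values live in the SMMU driver. */
#define SMR_ID_SHIFT    0
#define SMR_ID_MASK     0x7fff
#define SMR_MASK_SHIFT  16
#define SMR_MASK_MASK   0x7fff

/*
 * Pack up to two "iommus" specifier cells into a single fwid:
 * cell 0 is the stream ID (low half), cell 1 the optional
 * stream-match mask (high half), mirroring the xlate logic.
 */
uint32_t pack_fwid(const uint32_t *args, int args_count)
{
    uint32_t fwid = 0;

    if (args_count > 0)
        fwid |= (SMR_ID_MASK & args[0]) << SMR_ID_SHIFT;
    if (args_count > 1)
        fwid |= (SMR_MASK_MASK & args[1]) << SMR_MASK_SHIFT;

    return fwid;
}
```

So a two-cell specifier `<0x42 0x3>` yields `0x00030042`: ID in the low bits, match mask shifted into the high half, which is the single value then handed to `iommu_fwspec_add_ids()`.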



From xen-devel-bounces@lists.xenproject.org Tue Feb 02 16:10:09 2021
From: Rahul Singh <Rahul.Singh@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: xen-devel <xen-devel@lists.xenproject.org>, Julien Grall <julien@xen.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, "Volodymyr_Babchuk@epam.com"
	<Volodymyr_Babchuk@epam.com>, "brian.woods@xilinx.com"
	<brian.woods@xilinx.com>, Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: Re: [PATCH v3 2/3] arm,smmu: restructure code in preparation to new
 bindings support
Thread-Topic: [PATCH v3 2/3] arm,smmu: restructure code in preparation to new
 bindings support
Thread-Index: AQHW9DbzkpN/GsglfUeaXlWwDlkioapFExCA
Date: Tue, 2 Feb 2021 16:09:14 +0000
Message-ID: <36D1F0B5-C811-4A21-97F5-7C233649FEBF@arm.com>
References: <alpine.DEB.2.21.2101261435550.2568@sstabellini-ThinkPad-T480s>
 <20210126225836.6017-2-sstabellini@kernel.org>
In-Reply-To: <20210126225836.6017-2-sstabellini@kernel.org>
Accept-Language: en-US
Content-Language: en-US

Hello Stefano,

> On 26 Jan 2021, at 10:58 pm, Stefano Stabellini <sstabellini@kernel.org> wrote:
>
> From: Brian Woods <brian.woods@xilinx.com>
>
> Restructure some of the code and add supporting functions for adding
> generic device tree (DT) binding support.  This will allow for using
> current Linux device trees with just modifying the chosen field to
> enable Xen.
>
> Signed-off-by: Brian Woods <brian.woods@xilinx.com>
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>

Reviewed-by: Rahul Singh <rahul.singh@arm.com>
Tested-by: Rahul Singh <rahul.singh@arm.com>

Regards,
Rahul
> ---
> Changes in v3:
> - split patch
> ---
> xen/drivers/passthrough/arm/smmu.c | 60 +++++++++++++++++-------------
> 1 file changed, 35 insertions(+), 25 deletions(-)
>
> diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
> index 3898d1d737..9687762283 100644
> --- a/xen/drivers/passthrough/arm/smmu.c
> +++ b/xen/drivers/passthrough/arm/smmu.c
> @@ -782,50 +782,36 @@ static int insert_smmu_master(struct arm_smmu_device *smmu,
> 	return 0;
> }
>
> -static int register_smmu_master(struct arm_smmu_device *smmu,
> -				struct device *dev,
> -				struct of_phandle_args *masterspec)
> +static int arm_smmu_dt_add_device_legacy(struct arm_smmu_device *smmu,
> +					 struct device *dev,
> +					 struct iommu_fwspec *fwspec)
> {
> -	int i, ret = 0;
> +	int i;
> 	struct arm_smmu_master *master;
> -	struct iommu_fwspec *fwspec;
> +	struct device_node *dev_node = dev_get_dev_node(dev);
>
> -	master = find_smmu_master(smmu, masterspec->np);
> +	master = find_smmu_master(smmu, dev_node);
> 	if (master) {
> 		dev_err(dev,
> 			"rejecting multiple registrations for master device %s\n",
> -			masterspec->np->name);
> +			dev_node->name);
> 		return -EBUSY;
> 	}
>
> 	master = devm_kzalloc(dev, sizeof(*master), GFP_KERNEL);
> 	if (!master)
> 		return -ENOMEM;
> -	master->of_node = masterspec->np;
> -
> -	ret = iommu_fwspec_init(&master->of_node->dev, smmu->dev);
> -	if (ret) {
> -		kfree(master);
> -		return ret;
> -	}
> -	fwspec = dev_iommu_fwspec_get(dev);
> -
> -	/* adding the ids here */
> -	ret = iommu_fwspec_add_ids(&masterspec->np->dev,
> -				   masterspec->args,
> -				   masterspec->args_count);
> -	if (ret)
> -		return ret;
> +	master->of_node = dev_node;
>
> 	/* Xen: Let Xen know that the device is protected by an SMMU */
> -	dt_device_set_protected(masterspec->np);
> +	dt_device_set_protected(dev_node);
>
> 	if (!(smmu->features & ARM_SMMU_FEAT_STREAM_MATCH)) {
> 		for (i = 0; i < fwspec->num_ids; ++i) {
> -			if (masterspec->args[i] >= smmu->num_mapping_groups) {
> +			if (fwspec->ids[i] >= smmu->num_mapping_groups) {
> 				dev_err(dev,
> 					"stream ID for master device %s greater than maximum allowed (%d)\n",
> -					masterspec->np->name, smmu->num_mapping_groups);
> +					dev_node->name, smmu->num_mapping_groups);
> 				return -ERANGE;
> 			}
> 		}
> @@ -833,6 +819,30 @@ static int register_smmu_master(struct arm_smmu_device *smmu,
> 	return insert_smmu_master(smmu, master);
> }
>
> +static int register_smmu_master(struct arm_smmu_device *smmu,
> +				struct device *dev,
> +				struct of_phandle_args *masterspec)
> +{
> +	int ret = 0;
> +	struct iommu_fwspec *fwspec;
> +
> +	ret = iommu_fwspec_init(&masterspec->np->dev, smmu->dev);
> +	if (ret)
> +		return ret;
> +
> +	fwspec = dev_iommu_fwspec_get(&masterspec->np->dev);
> +
> +	ret = iommu_fwspec_add_ids(&masterspec->np->dev,
> +				   masterspec->args,
> +				   masterspec->args_count);
> +	if (ret)
> +		return ret;
> +
> +	return arm_smmu_dt_add_device_legacy(smmu,
> +					     &masterspec->np->dev,
> +					     fwspec);
> +}
> +
> static struct arm_smmu_device *find_smmu_for_device(struct device *dev)
> {
> 	struct arm_smmu_device *smmu;
> --
> 2.17.1
>
>
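[Editorial note] The shape of the refactor in the diff above — one helper builds a firmware spec (the stream IDs) from the DT master specifier, a second validates and registers it — can be sketched in plain C. All types, names, and limits below are illustrative stand-ins, not the real Xen/Linux IOMMU API:

```c
#include <assert.h>

#define MAX_IDS 8
#define NUM_MAPPING_GROUPS 4    /* illustrative SMMU stream-indexing limit */

/* Stand-in for struct iommu_fwspec: just the stream IDs. */
struct fwspec {
    unsigned int num_ids;
    unsigned int ids[MAX_IDS];
};

/* Step 1 (register_smmu_master in the patch): copy the DT master
 * specifier arguments into the firmware spec. */
static int fwspec_add_ids(struct fwspec *fw, const unsigned int *args,
                          unsigned int count)
{
    for (unsigned int i = 0; i < count; i++) {
        if (fw->num_ids >= MAX_IDS)
            return -1;          /* -ENOMEM in the real code */
        fw->ids[fw->num_ids++] = args[i];
    }
    return 0;
}

/* Step 2 (arm_smmu_dt_add_device_legacy): validate the IDs against the
 * SMMU's mapping-group limit before registering the master. */
static int add_device_legacy(const struct fwspec *fw)
{
    for (unsigned int i = 0; i < fw->num_ids; i++)
        if (fw->ids[i] >= NUM_MAPPING_GROUPS)
            return -2;          /* -ERANGE in the real code */
    return 0;
}
```

The point of the split is that step 1 stays tied to the legacy `of_phandle_args` path, while step 2 only consumes the generic fwspec, so a generic DT binding path can reuse it.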



From xen-devel-bounces@lists.xenproject.org Tue Feb 02 16:12:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 16:12:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80633.147601 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6yIM-0007wZ-LR; Tue, 02 Feb 2021 16:12:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80633.147601; Tue, 02 Feb 2021 16:12:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6yIM-0007wS-IH; Tue, 02 Feb 2021 16:12:50 +0000
Received: by outflank-mailman (input) for mailman id 80633;
 Tue, 02 Feb 2021 16:12:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=OBIp=HE=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1l6yIL-0007wM-Uk
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 16:12:49 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 98b59979-6da2-4044-bfd9-52455257bdeb;
 Tue, 02 Feb 2021 16:12:48 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B7787AC6E;
 Tue,  2 Feb 2021 16:12:47 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 98b59979-6da2-4044-bfd9-52455257bdeb
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612282367; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=0ArKDfnqR+KqyNjNG4BqJNXocLWHqj+DCceD9FDV9WY=;
	b=aD1cAZyOUrpe7Lt6CeVmfLuFfX5EcgIWrZNntPLLSmwStLHUZ5Az7eNbKxPftGfaKIN6iE
	LhPH+G7MRTMnQRBRxIMAU4C+ysApU7ZLBXRSyJHmfyJQUboa+u+hmwyz6hImnYYVIn45cT
	pTOxyhuIzYG7o7c3PDtH10GFgteHsvc=
Subject: Re: [PATCH] xen/netback: avoid race in
 xenvif_rx_ring_slots_available()
To: Igor Druzhinin <igor.druzhinin@citrix.com>,
 xen-devel@lists.xenproject.org, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org
Cc: Wei Liu <wei.liu@kernel.org>, Paul Durrant <paul@xen.org>,
 "David S. Miller" <davem@davemloft.net>, Jakub Kicinski <kuba@kernel.org>,
 stable@vger.kernel.org
References: <20210202070938.7863-1-jgross@suse.com>
 <c17d4e45-cad1-510d-0e7b-9d95af89ff01@citrix.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <b27dc022-7233-03e9-59bf-819338a80308@suse.com>
Date: Tue, 2 Feb 2021 17:12:46 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
MIME-Version: 1.0
In-Reply-To: <c17d4e45-cad1-510d-0e7b-9d95af89ff01@citrix.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="escPDdfwUENdW7Uy0xL9kJ81jObPW2Rm9"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--escPDdfwUENdW7Uy0xL9kJ81jObPW2Rm9
Content-Type: multipart/mixed; boundary="YVA3BwPtEvOPWnkHy6E8MYSJ7u5r1itUt";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Igor Druzhinin <igor.druzhinin@citrix.com>,
 xen-devel@lists.xenproject.org, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org
Cc: Wei Liu <wei.liu@kernel.org>, Paul Durrant <paul@xen.org>,
 "David S. Miller" <davem@davemloft.net>, Jakub Kicinski <kuba@kernel.org>,
 stable@vger.kernel.org
Message-ID: <b27dc022-7233-03e9-59bf-819338a80308@suse.com>
Subject: Re: [PATCH] xen/netback: avoid race in
 xenvif_rx_ring_slots_available()
References: <20210202070938.7863-1-jgross@suse.com>
 <c17d4e45-cad1-510d-0e7b-9d95af89ff01@citrix.com>
In-Reply-To: <c17d4e45-cad1-510d-0e7b-9d95af89ff01@citrix.com>

--YVA3BwPtEvOPWnkHy6E8MYSJ7u5r1itUt
Content-Type: multipart/mixed;
 boundary="------------C6EDB9B863674721206F1046"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------C6EDB9B863674721206F1046
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 02.02.21 16:26, Igor Druzhinin wrote:
> On 02/02/2021 07:09, Juergen Gross wrote:
>> Since commit 23025393dbeb3b8b3 ("xen/netback: use lateeoi irq binding")
>> xenvif_rx_ring_slots_available() is no longer called only from the rx
>> queue kernel thread, so it needs to access the rx queue with the
>> associated queue lock held.
>>
>> Reported-by: Igor Druzhinin <igor.druzhinin@citrix.com>
>> Fixes: 23025393dbeb3b8b3 ("xen/netback: use lateeoi irq binding")
>> Cc: stable@vger.kernel.org
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>
> Appreciate a quick fix! Is this the only place that sort of race could
> happen now?

I checked and didn't find any other similar problem.


Juergen


--------------C6EDB9B863674721206F1046--

--YVA3BwPtEvOPWnkHy6E8MYSJ7u5r1itUt--


--escPDdfwUENdW7Uy0xL9kJ81jObPW2Rm9--


From xen-devel-bounces@lists.xenproject.org Tue Feb 02 16:22:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 16:22:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80640.147613 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6yRr-0000Zb-KP; Tue, 02 Feb 2021 16:22:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80640.147613; Tue, 02 Feb 2021 16:22:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6yRr-0000ZU-HQ; Tue, 02 Feb 2021 16:22:39 +0000
Received: by outflank-mailman (input) for mailman id 80640;
 Tue, 02 Feb 2021 16:22:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VJ8X=HE=gmail.com=wei.liu.linux@srs-us1.protection.inumbo.net>)
 id 1l6yRq-0000ZP-5z
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 16:22:38 +0000
Received: from mail-wr1-f48.google.com (unknown [209.85.221.48])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4b0bc5fb-ee8b-4d05-ba2f-1fc7e7fc013b;
 Tue, 02 Feb 2021 16:22:37 +0000 (UTC)
Received: by mail-wr1-f48.google.com with SMTP id p15so21052182wrq.8
 for <xen-devel@lists.xenproject.org>; Tue, 02 Feb 2021 08:22:36 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id e11sm32813367wrt.35.2021.02.02.08.22.35
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 02 Feb 2021 08:22:35 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4b0bc5fb-ee8b-4d05-ba2f-1fc7e7fc013b
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=0Tl/pMxs9MIy87xsuWIVEOhHXa7tZ9dZIJ0KCeGa4jI=;
        b=PtlYwQaAuINVwj23D2FdunHBRsPE1FxkYD8nrSOVu7hIqpxqNyDIDJXAnfqUMXkLCi
         xK5XIfX1dVfbSueVOPqUcz/DD/+3YF28eC29jemIvmxZjut25I0ku1mO6sXD9oEYqWQh
         TYgkhfTuf6qag2QomSzRBSuyP9ShufnVV3Tqcgz5qQllayp8u+USSLVORcIDX+QGK4vr
         8QDLw7rdWqIyP6XOKKlMzC8eC7dDgRlqG6NabewWJxCXdDFOYgHhFv9yduX68YJFZpME
         aThET7H8obuLn8UXySCFecZt7xuITfsrIb5RUCL7nayGz4EyHP67Uj4MWtBsGLKAyh1h
         1hHA==
X-Gm-Message-State: AOAM532N9sBm+aC5N36U3pZxRVfJvmX9GOcwprwqvM4N/iv++/JrOzvG
	HnOb5blKaXFJcmE6v3vhsh8=
X-Google-Smtp-Source: ABdhPJx6LQHNUFui3giapOyyc5YGOC6dtOEQkMgOyDblBANYn5obn77kkqsoELxUiLKfWRiM9E+4kA==
X-Received: by 2002:adf:dd43:: with SMTP id u3mr25063060wrm.396.1612282956206;
        Tue, 02 Feb 2021 08:22:36 -0800 (PST)
Date: Tue, 2 Feb 2021 16:22:34 +0000
From: Wei Liu <wei.liu@kernel.org>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, Wei Liu <wei.liu@kernel.org>,
	Paul Durrant <paul@xen.org>,
	"David S. Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>,
	Igor Druzhinin <igor.druzhinin@citrix.com>, stable@vger.kernel.org
Subject: Re: [PATCH] xen/netback: avoid race in
 xenvif_rx_ring_slots_available()
Message-ID: <20210202162234.sf575hwoj4bngvpt@liuwe-devbox-debian-v2>
References: <20210202070938.7863-1-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210202070938.7863-1-jgross@suse.com>
User-Agent: NeoMutt/20180716

On Tue, Feb 02, 2021 at 08:09:38AM +0100, Juergen Gross wrote:
> Since commit 23025393dbeb3b8b3 ("xen/netback: use lateeoi irq binding")
> xenvif_rx_ring_slots_available() is no longer called only from the rx
> queue kernel thread, so it needs to access the rx queue with the
> associated queue lock held.
> 
> Reported-by: Igor Druzhinin <igor.druzhinin@citrix.com>
> Fixes: 23025393dbeb3b8b3 ("xen/netback: use lateeoi irq binding")
> Cc: stable@vger.kernel.org
> Signed-off-by: Juergen Gross <jgross@suse.com>

Acked-by: Wei Liu <wl@xen.org>
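[Editorial note] The fix being acked here boils down to a standard locking pattern: a predicate that used to run only on the rx queue kernel thread gained a second caller context, so it must snapshot the ring indices under the queue's lock. A minimal user-space sketch, with a pthread mutex standing in for the kernel spinlock and all structure and field names illustrative rather than the real xen-netback ones:

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* Illustrative ring-queue state; not the real xen-netback structs. */
struct rx_queue {
    pthread_mutex_t lock;      /* stands in for the kernel's spinlock */
    unsigned int prod, cons, size;
};

/* Once a second caller context exists, prod/cons must be read under
 * the lock so the pair is a consistent snapshot rather than two values
 * racing with concurrent producers and consumers. */
static bool rx_ring_slots_available(struct rx_queue *q, unsigned int needed)
{
    bool ok;

    pthread_mutex_lock(&q->lock);
    ok = (q->size - (q->prod - q->cons)) >= needed;
    pthread_mutex_unlock(&q->lock);
    return ok;
}
```

In the kernel the lock was already required by the other accessors of the queue; the bug was only that this one check had been left outside it when it grew a new caller.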


From xen-devel-bounces@lists.xenproject.org Tue Feb 02 16:29:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 16:29:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80646.147624 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6yYf-0000vb-Cq; Tue, 02 Feb 2021 16:29:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80646.147624; Tue, 02 Feb 2021 16:29:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6yYf-0000vU-9h; Tue, 02 Feb 2021 16:29:41 +0000
Received: by outflank-mailman (input) for mailman id 80646;
 Tue, 02 Feb 2021 16:29:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=b4+r=HE=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1l6yYd-0000vP-QL
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 16:29:40 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4a6b6893-0f6f-4e74-97a3-8cb1c0c4eccb;
 Tue, 02 Feb 2021 16:29:37 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4a6b6893-0f6f-4e74-97a3-8cb1c0c4eccb
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612283377;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=RNY5drRKfQDqnT6tZEtwyUVCWsnHI+a2Y0+0kZ9nnZg=;
  b=L+02HfzzKAIY2WAmD7F5pa8DEz2Wlg0RizepqPucYOCpRwHmgMvTouCB
   OcKp23vkuMxy0I9bFodFiibgZMHgGQ4OpDqPqEGkqwzDy/ArHwSbYgU16
   Scd43mq3u/HG85779V1+g586NjD5a9WQZx7gMshba2zDTpfL5T0Ecd2Lo
   c=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: JQ93LLghNXf1Y8neF05Kz5wNtUOiKwNdjXI3niqK8NhImntjVl55u8MT3spGic9X3ByUSKhTFL
 hQxmhxbyAUN5wq/baQWYxDE0TuGxrJIhS79nzyBF3mQfyzqaVy/sOjeIObqLH7c54TvOnNIN7u
 dXHBTdyK6yrA3uLr3VMlIjjuknu7XU6y4ymDxmCQiVgi6h8QUQhC/KtMaA24ad4u9gguqACZSs
 zSc7PBwPfNlQYbJV2Xg1GQaP/LUxQVF1nqWznMMnme2aH/otsbmtsi+IvZCbq/+gGtLQHO2xDg
 axQ=
X-SBRS: 5.2
X-MesageID: 36385231
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,395,1602561600"; 
   d="scan'208";a="36385231"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=477fdphrfKsIiITpaPq/t+BDg5TfuMxQzRB9zww6uxs=;
 b=inQPcaCY3DKIfucd8iH0nKG0KducpDiJdoCXKzFW60iq70J2AUQPxINCQX9KWr7nn1GKTwhR8hoQm07F34vMVnuRpD/y6w7WuS+7N0cq2QDUEUkmAN5hksy0Imzru/P0CllFWk85Vs8zTigFB0Wk+zSg3ARCUQrD/gUkqHIh+GE=
Date: Tue, 2 Feb 2021 17:28:57 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Paul Durrant <paul@xen.org>
CC: <xen-devel@lists.xenproject.org>, <linux-block@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, Paul Durrant <pdurrant@amazon.com>, "Konrad
 Rzeszutek Wilk" <konrad.wilk@oracle.com>, Jens Axboe <axboe@kernel.dk>,
	"Dongli Zhang" <dongli.zhang@oracle.com>
Subject: Re: [PATCH v2] xen-blkback: fix compatibility bug with single page
 rings
Message-ID: <YBl9ycif3bG/Y+eR@Air-de-Roger>
References: <20210128130441.11744-1-paul@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20210128130441.11744-1-paul@xen.org>
X-ClientProxiedBy: PR3P192CA0027.EURP192.PROD.OUTLOOK.COM
 (2603:10a6:102:56::32) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: aaf24ce3-0811-499b-ea3b-08d8c797a6fc
X-MS-TrafficTypeDiagnostic: DM6PR03MB4300:
X-Microsoft-Antispam-PRVS: <DM6PR03MB430067ACC6BC7A29FDFA1DC78FB59@DM6PR03MB4300.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:2331;
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-CrossTenant-Network-Message-Id: aaf24ce3-0811-499b-ea3b-08d8c797a6fc
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Feb 2021 16:29:03.0448
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: k5HPjKISI7cin9PzFQz8pweIauiFYd+nfFn7nXrA+weeIbvWp2DA2m6RaXV+0oDWLofjyn/99iPx7AQEi+QNvA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4300
X-OriginatorOrg: citrix.com

On Thu, Jan 28, 2021 at 01:04:41PM +0000, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> Prior to commit 4a8c31a1c6f5 ("xen/blkback: rework connect_ring() to avoid
> inconsistent xenstore 'ring-page-order' set by malicious blkfront"), the
> behaviour of xen-blkback when connecting to a frontend was:
> 
> - read 'ring-page-order'
> - if not present then expect a single page ring specified by 'ring-ref'
> - else expect a ring specified by 'ring-refX' where X is between 0 and
>   1 << ring-page-order
> 
> This was correct behaviour, but was broken by the aforementioned commit to
> become:
> 
> - read 'ring-page-order'
> - if not present then expect a single page ring (i.e. ring-page-order = 0)
> - expect a ring specified by 'ring-refX' where X is between 0 and
>   1 << ring-page-order
> - if that didn't work then see if there's a single page ring specified by
>   'ring-ref'
> 
> This incorrect behaviour works most of the time but fails when a frontend
> that sets 'ring-page-order' is unloaded and replaced by one that does not
> because, instead of reading 'ring-ref', xen-blkback will read the stale
> 'ring-ref0' left around by the previous frontend and will try to map the
> wrong grant reference.
> 
> This patch restores the original behaviour.
> 
> Fixes: 4a8c31a1c6f5 ("xen/blkback: rework connect_ring() to avoid inconsistent xenstore 'ring-page-order' set by malicious blkfront")
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> ---
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Cc: "Roger Pau Monné" <roger.pau@citrix.com>
> Cc: Jens Axboe <axboe@kernel.dk>
> Cc: Dongli Zhang <dongli.zhang@oracle.com>
> 
> v2:
>  - Remove now-spurious error path special-case when nr_grefs == 1
> ---
>  drivers/block/xen-blkback/common.h |  1 +
>  drivers/block/xen-blkback/xenbus.c | 38 +++++++++++++-----------------
>  2 files changed, 17 insertions(+), 22 deletions(-)
> 
> diff --git a/drivers/block/xen-blkback/common.h b/drivers/block/xen-blkback/common.h
> index b0c71d3a81a0..524a79f10de6 100644
> --- a/drivers/block/xen-blkback/common.h
> +++ b/drivers/block/xen-blkback/common.h
> @@ -313,6 +313,7 @@ struct xen_blkif {
>  
>  	struct work_struct	free_work;
>  	unsigned int 		nr_ring_pages;
> +	bool                    multi_ref;

You seem to have used spaces between the type and the variable name
here, while the neighbouring fields use hard tabs.

The rest LGTM:

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

We should have forbidden the usage of ring-page-order = 0 and we could
have avoided having to add the multi_ref variable, but that's too late
now.

Thanks, Roger.
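[Editorial note] The restored before/after negotiation described in the quoted commit message condenses to a few lines of logic. The sketch below is illustrative only: `MAX_RING_PAGE_ORDER`, the struct, and the function names are made up for the example, and the flattened `frontend_keys` stands in for the actual xenstore reads; it is not the real xen-blkback code.

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_RING_PAGE_ORDER 4    /* illustrative limit */

/* What the frontend wrote to xenstore, flattened for the sketch. */
struct frontend_keys {
    bool has_ring_page_order;    /* was "ring-page-order" present? */
    unsigned int ring_page_order;
};

/* Restored behaviour: decide how many grant refs to map and whether to
 * read "ring-ref%u" (multi_ref true) or the legacy single "ring-ref"
 * (multi_ref false), mirroring the flag added by the patch.
 * Returns 0 on success, -1 on a bogus order. */
static int negotiate_ring(const struct frontend_keys *fe,
                          unsigned int *nr_grefs, bool *multi_ref)
{
    if (!fe->has_ring_page_order) {
        *nr_grefs = 1;           /* legacy frontend: plain "ring-ref" */
        *multi_ref = false;
        return 0;
    }
    if (fe->ring_page_order > MAX_RING_PAGE_ORDER)
        return -1;
    *nr_grefs = 1u << fe->ring_page_order;
    *multi_ref = true;           /* read "ring-ref0".."ring-refN-1" */
    return 0;
}
```

Note that, per the review comment above, a frontend writing ring-page-order = 0 still takes the `multi_ref` path (a single "ring-ref0"), which is why the key choice cannot be inferred from `nr_grefs == 1` alone.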


From xen-devel-bounces@lists.xenproject.org Tue Feb 02 16:32:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 16:32:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80648.147637 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6yau-0001jm-UQ; Tue, 02 Feb 2021 16:32:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80648.147637; Tue, 02 Feb 2021 16:32:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6yau-0001jf-RD; Tue, 02 Feb 2021 16:32:00 +0000
Received: by outflank-mailman (input) for mailman id 80648;
 Tue, 02 Feb 2021 16:31:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6yat-0001jW-Kp; Tue, 02 Feb 2021 16:31:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6yat-0001DM-DT; Tue, 02 Feb 2021 16:31:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6yat-00062J-7A; Tue, 02 Feb 2021 16:31:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l6yat-0008CT-6d; Tue, 02 Feb 2021 16:31:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=oICqhxSqTdEPFa00iTzHCzpTQRG8cQiDmtvz0r3dKww=; b=44AfggLHJ3egK60/izeqF5Qpa9
	bHS3QgII+dVDZc774kd9qGHLlh+UBS2ZBr9i5pBsWliliLFWBjyW9HSK2zpyIvMPno2hWiXfiL8zJ
	DmzZYjHW7yYyWbvb/cVJbs0AvGTN6gaNk7IAWi9wqxxWy7l4XUo5ynRUX7r8I/tiXQDA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158932-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 158932: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=3b468095cd3dfcd1aa4ae63bc623f534bc2390d2
X-Osstest-Versions-That:
    ovmf=ea56ebf67dd55483105aa9f9996a48213e78337e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 02 Feb 2021 16:31:59 +0000

flight 158932 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158932/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 3b468095cd3dfcd1aa4ae63bc623f534bc2390d2
baseline version:
 ovmf                 ea56ebf67dd55483105aa9f9996a48213e78337e

Last test of basis   158874  2021-02-01 01:54:43 Z    1 days
Testing same since   158932  2021-02-02 03:10:23 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Kun Qin <kun.q@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   ea56ebf67d..3b468095cd  3b468095cd3dfcd1aa4ae63bc623f534bc2390d2 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Feb 02 16:39:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 16:39:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80655.147652 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6yhx-00026Y-OV; Tue, 02 Feb 2021 16:39:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80655.147652; Tue, 02 Feb 2021 16:39:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6yhx-00026R-LI; Tue, 02 Feb 2021 16:39:17 +0000
Received: by outflank-mailman (input) for mailman id 80655;
 Tue, 02 Feb 2021 16:39:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2iTi=HE=redhat.com=pbonzini@srs-us1.protection.inumbo.net>)
 id 1l6yhw-00026M-Aq
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 16:39:16 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 34753819-a19f-4505-9d2b-c72fed66008a;
 Tue, 02 Feb 2021 16:39:15 +0000 (UTC)
Received: from mail-ed1-f69.google.com (mail-ed1-f69.google.com
 [209.85.208.69]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-500-cpCsUiTMPzWIWHKI7sQoTA-1; Tue, 02 Feb 2021 11:39:13 -0500
Received: by mail-ed1-f69.google.com with SMTP id f21so9907421edx.23
 for <xen-devel@lists.xenproject.org>; Tue, 02 Feb 2021 08:39:12 -0800 (PST)
Received: from ?IPv6:2001:b07:6468:f312:c8dd:75d4:99ab:290a?
 ([2001:b07:6468:f312:c8dd:75d4:99ab:290a])
 by smtp.gmail.com with ESMTPSA id b1sm3313895eju.15.2021.02.02.08.39.09
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 02 Feb 2021 08:39:10 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 34753819-a19f-4505-9d2b-c72fed66008a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1612283955;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=upnt39CyCVwBoTycKyW7rsDh22PJi9fn/RtxLu9lyuM=;
	b=XJIj4CVuOXIbXba3uXtKtrnaOq+uQ44BWiothUsTonfEyZyMDOa0wyaJllX1KM1j0rebSs
	NlRe+86rO/kD88PQ7gQREl3DxCPnGaQ2Hj933Hz4irAPL7Mnhg2XCCDqasiOG6JkOTsAj6
	GbrJ964JhnmSXAV8qWOtJl8Gq46V168=
X-MC-Unique: cpCsUiTMPzWIWHKI7sQoTA-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=upnt39CyCVwBoTycKyW7rsDh22PJi9fn/RtxLu9lyuM=;
        b=jNN54bI9P6cTotgnOZvga7m0i35xIGIYp511SIPc5vvGStWnNIYF34YQBnTNvarUBO
         bp2//dFPwO39xOW91JCNhbpmJELIYClZVM20aEhkTxGyVqsGgFz9Z5O76YcYwCVFGl00
         c5WS15URWWjMvgTC9PHba0ZFzH23/vluMKNu8wNmH6KednAR+8BXKkjr/zGc48Zwxxyc
         gPKUCdN9fIF+BY0QFSfzjEBj0kc9Xs966kEq8oLZhtVS0XO7oxhcBe+cX8WObczKqyF5
         He3adK+IIUKcl+/7RtFrG1grcufHCBmh5f2dQYFJtJgxmwsIllMGEdrbdQ1BkCdl91Te
         /bWQ==
X-Gm-Message-State: AOAM530jIDUUrVmRyBVK/sdfMQgHQ/T4idfy3Qi8QolOrFy9/en1DEn9
	nZJUFm+a5NRS+qoAQvgzwHPV+61aNAsynW88e8rXzlKpq5l5/Llw216OpIqSp1xpYM6/Bbpe6Gs
	lmQr9PzrRP3G6TMn/qHBAMUOQxzk=
X-Received: by 2002:a17:906:c049:: with SMTP id bm9mr22307617ejb.535.1612283951845;
        Tue, 02 Feb 2021 08:39:11 -0800 (PST)
X-Google-Smtp-Source: ABdhPJz2tklvRYuU9l0qDykZvY93YU17wJ9cDOV5rFXjVQAhMAYZL/Panns/oGzBvmoCNoRhjmIqiA==
X-Received: by 2002:a17:906:c049:: with SMTP id bm9mr22307596ejb.535.1612283951672;
        Tue, 02 Feb 2021 08:39:11 -0800 (PST)
Subject: Re: [PATCH] hw/i386/xen: Remove dead code
To: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>,
 qemu-devel@nongnu.org
Cc: Richard Henderson <richard.henderson@linaro.org>,
 Eduardo Habkost <ehabkost@redhat.com>, qemu-trivial@nongnu.org,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 xen-devel@lists.xenproject.org, Paul Durrant <paul@xen.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "qemu-trivial@nongnu.org" <qemu-trivial@nongnu.org>
References: <20210202155644.998812-1-philmd@redhat.com>
From: Paolo Bonzini <pbonzini@redhat.com>
Message-ID: <13e06470-e13d-31ef-f7a7-9370a01d8b1c@redhat.com>
Date: Tue, 2 Feb 2021 17:39:08 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <20210202155644.998812-1-philmd@redhat.com>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=pbonzini@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 02/02/21 16:56, Philippe Mathieu-Daudé wrote:
> 'drivers_blacklisted' is never accessed, remove it.
> 
> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> ---
>   hw/i386/xen/xen_platform.c | 13 ++-----------
>   1 file changed, 2 insertions(+), 11 deletions(-)
> 
> diff --git a/hw/i386/xen/xen_platform.c b/hw/i386/xen/xen_platform.c
> index 7c4db35debb..01ae1fb1618 100644
> --- a/hw/i386/xen/xen_platform.c
> +++ b/hw/i386/xen/xen_platform.c
> @@ -60,7 +60,6 @@ struct PCIXenPlatformState {
>       MemoryRegion bar;
>       MemoryRegion mmio_bar;
>       uint8_t flags; /* used only for version_id == 2 */
> -    int drivers_blacklisted;
>       uint16_t driver_product_version;
>   
>       /* Log from guest drivers */
> @@ -245,18 +244,10 @@ static void platform_fixed_ioport_writeb(void *opaque, uint32_t addr, uint32_t v
>   
>   static uint32_t platform_fixed_ioport_readw(void *opaque, uint32_t addr)
>   {
> -    PCIXenPlatformState *s = opaque;
> -
>       switch (addr) {
>       case 0:
> -        if (s->drivers_blacklisted) {
> -            /* The drivers will recognise this magic number and refuse
> -             * to do anything. */
> -            return 0xd249;
> -        } else {
> -            /* Magic value so that you can identify the interface. */
> -            return 0x49d2;
> -        }
> +        /* Magic value so that you can identify the interface. */
> +        return 0x49d2;
>       default:
>           return 0xffff;
>       }
> 

Cc: qemu-trivial@nongnu.org
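For readers skimming the diff above: with the never-written `drivers_blacklisted` field gone, a read of the fixed I/O port at offset 0 unconditionally returns the identification magic. A toy stand-alone sketch of the resulting behaviour (the probe helper and its name are illustrative, not QEMU's or any guest driver's actual API):

```c
#include <assert.h>
#include <stdint.h>

#define XEN_PLATFORM_MAGIC 0x49d2  /* "identify the interface", per the patch */

/* Simplified model of platform_fixed_ioport_readw() after the change:
 * only offset 0 carries information, other offsets read as all-ones. */
static uint32_t fixed_ioport_readw(uint32_t addr)
{
    switch (addr) {
    case 0:
        return XEN_PLATFORM_MAGIC;
    default:
        return 0xffff;
    }
}

/* Illustrative guest-side probe: the platform interface is considered
 * present iff the magic word reads back. */
static int xen_platform_present(void)
{
    return fixed_ioport_readw(0) == XEN_PLATFORM_MAGIC;
}
```

Since the 0xd249 "drivers blacklisted" value could never be returned (the field was never set), removing the branch changes no observable guest behaviour.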



From xen-devel-bounces@lists.xenproject.org Tue Feb 02 16:42:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 16:42:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80656.147663 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6yl1-0002ub-7S; Tue, 02 Feb 2021 16:42:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80656.147663; Tue, 02 Feb 2021 16:42:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6yl1-0002uU-4S; Tue, 02 Feb 2021 16:42:27 +0000
Received: by outflank-mailman (input) for mailman id 80656;
 Tue, 02 Feb 2021 16:42:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XtUW=HE=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1l6yl0-0002uO-9b
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 16:42:26 +0000
Received: from mail-wr1-x42a.google.com (unknown [2a00:1450:4864:20::42a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a09bd907-abea-44ab-8d69-0271c9986d12;
 Tue, 02 Feb 2021 16:42:25 +0000 (UTC)
Received: by mail-wr1-x42a.google.com with SMTP id z6so21121453wrq.10
 for <xen-devel@lists.xenproject.org>; Tue, 02 Feb 2021 08:42:25 -0800 (PST)
Received: from CBGR90WXYV0 (host86-190-149-163.range86-190.btcentralplus.com.
 [86.190.149.163])
 by smtp.gmail.com with ESMTPSA id b4sm32480479wrn.12.2021.02.02.08.42.23
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 02 Feb 2021 08:42:23 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a09bd907-abea-44ab-8d69-0271c9986d12
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:thread-index
         :content-language;
        bh=STneyShm0vCPYM4l3wpr4D69jWlxGiJ4g4tzZ7/33lE=;
        b=Z8h1xH9LrCIPbnUzDhB6+8OgUVkaq3lMCCgyLN0ZPUv8uKh1CGeCtexXpijdWGeEdR
         GmuMpknz9Xi/KKlCXJNdmCykqb7XMrDbl1hQeONsplw0h5K7DxwcmeBHPTqhf/n1FwSl
         twSkencRK6PL1ThnkyhuV9IKTwtSLwmBLvCqYEzPgYOevXeJRrABOjSSgyK7nG+nfMv7
         LDAUDusE6mxPyKTsXFh2FcyPBev6/DzxmoHcZbXvChChWJxHY74FbieBOcnEIessfNAs
         mk6DBnsjmySGjFkdSOqoy39WYd6OgG1pU+/6SX5SG2SNErZj1nSPVoQdbah0wjMck+ji
         b7YQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :thread-index:content-language;
        bh=STneyShm0vCPYM4l3wpr4D69jWlxGiJ4g4tzZ7/33lE=;
        b=brBOZpxpdZz3PUytMXP9WfMnRBt6ODFu2GX4Gx/GC3ZdOd1gS719SnSEDmf3SLfvYi
         pxZuMyMPLwUIZBntECg/o8GkgK0js+GWgBMHdXDxCUmAiv1cEZ3NQaiffSguNnJrOC03
         rW3yWJwVM9HECGKLeOnfi0zhwVbKL9gS8Dp+tOVRHbi1qGb5RgLTONOAyTbt8p7y2utt
         Ya5Q4gP3UsZTNK1kaT7u0/VA2ZvQ6VSO1km/7vZQvXLDSQn/IOrywfApql2vRmQnvGCu
         e7dunp47unJY8hfEluSUeBAtQvyQ3h8IT++LPwnrUSDpck7kMaLqnZ4ZCGxfcaomAgOx
         qyEQ==
X-Gm-Message-State: AOAM530Km5r1M5KDaxm/vKnsOKwjzOqGMjQcDZ8efp4rFfdT1Iv5IZOl
	B7ilq+P17DUKkXTbtwzuUFU=
X-Google-Smtp-Source: ABdhPJy2nnGmtTJ3TUqoRthgCxgYu0/DthvxcvL9h7iYFq5Z3OEkSPZ7bmrdl5r8eH8nbB7nJSHkNg==
X-Received: by 2002:adf:f891:: with SMTP id u17mr24929982wrp.253.1612284144489;
        Tue, 02 Feb 2021 08:42:24 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: =?utf-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>
Cc: <xen-devel@lists.xenproject.org>,
	<linux-block@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>,
	"'Paul Durrant'" <pdurrant@amazon.com>,
	"'Konrad Rzeszutek Wilk'" <konrad.wilk@oracle.com>,
	"'Jens Axboe'" <axboe@kernel.dk>,
	"'Dongli Zhang'" <dongli.zhang@oracle.com>
References: <20210128130441.11744-1-paul@xen.org> <YBl9ycif3bG/Y+eR@Air-de-Roger>
In-Reply-To: <YBl9ycif3bG/Y+eR@Air-de-Roger>
Subject: RE: [PATCH v2] xen-blkback: fix compatibility bug with single page rings
Date: Tue, 2 Feb 2021 16:42:22 -0000
Message-ID: <037601d6f982$61e34f80$25a9ee80$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQESWKuFsxkk/iz7Hd1VT64wNwq0EAFvpAqOq8LV10A=
Content-Language: en-gb

> -----Original Message-----
> From: Roger Pau Monné <roger.pau@citrix.com>
> Sent: 02 February 2021 16:29
> To: Paul Durrant <paul@xen.org>
> Cc: xen-devel@lists.xenproject.org; linux-block@vger.kernel.org; linux-kernel@vger.kernel.org; Paul
> Durrant <pdurrant@amazon.com>; Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>; Jens Axboe
> <axboe@kernel.dk>; Dongli Zhang <dongli.zhang@oracle.com>
> Subject: Re: [PATCH v2] xen-blkback: fix compatibility bug with single page rings
>
> On Thu, Jan 28, 2021 at 01:04:41PM +0000, Paul Durrant wrote:
> > From: Paul Durrant <pdurrant@amazon.com>
> >
> > Prior to commit 4a8c31a1c6f5 ("xen/blkback: rework connect_ring() to avoid
> > inconsistent xenstore 'ring-page-order' set by malicious blkfront"), the
> > behaviour of xen-blkback when connecting to a frontend was:
> >
> > - read 'ring-page-order'
> > - if not present then expect a single page ring specified by 'ring-ref'
> > - else expect a ring specified by 'ring-refX' where X is between 0 and
> >   1 << ring-page-order
> >
> > This was correct behaviour, but was broken by the aforementioned commit to
> > become:
> >
> > - read 'ring-page-order'
> > - if not present then expect a single page ring (i.e. ring-page-order = 0)
> > - expect a ring specified by 'ring-refX' where X is between 0 and
> >   1 << ring-page-order
> > - if that didn't work then see if there's a single page ring specified by
> >   'ring-ref'
> >
> > This incorrect behaviour works most of the time but fails when a frontend
> > that sets 'ring-page-order' is unloaded and replaced by one that does not
> > because, instead of reading 'ring-ref', xen-blkback will read the stale
> > 'ring-ref0' left around by the previous frontend and will try to map the
> > wrong grant reference.
> >
> > This patch restores the original behaviour.
> >
> > Fixes: 4a8c31a1c6f5 ("xen/blkback: rework connect_ring() to avoid inconsistent xenstore 'ring-page-
> order' set by malicious blkfront")
> > Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> > ---
> > Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > Cc: "Roger Pau Monné" <roger.pau@citrix.com>
> > Cc: Jens Axboe <axboe@kernel.dk>
> > Cc: Dongli Zhang <dongli.zhang@oracle.com>
> >
> > v2:
> >  - Remove now-spurious error path special-case when nr_grefs == 1
> > ---
> >  drivers/block/xen-blkback/common.h |  1 +
> >  drivers/block/xen-blkback/xenbus.c | 38 +++++++++++++-----------------
> >  2 files changed, 17 insertions(+), 22 deletions(-)
> >
> > diff --git a/drivers/block/xen-blkback/common.h b/drivers/block/xen-blkback/common.h
> > index b0c71d3a81a0..524a79f10de6 100644
> > --- a/drivers/block/xen-blkback/common.h
> > +++ b/drivers/block/xen-blkback/common.h
> > @@ -313,6 +313,7 @@ struct xen_blkif {
> >
> >  	struct work_struct	free_work;
> >  	unsigned int 		nr_ring_pages;
> > +	bool                    multi_ref;
>
> You seem to have used spaces between the type and the variable name
> here, while neighbors also use hard tabs.
>

Oops. Xen vs. Linux coding style :-( I'll send a v3 with the whitespace fixed.

> The rest LGTM:
>
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
>
> We should have forbidden the usage of ring-page-order = 0 and we could
> have avoided having to add the multi_ref variable, but that's too late
> now.

Thanks. Yes, that cat is out of the bag and has been for a while unfortunately.

  Paul

>
> Thanks, Roger.
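The restored lookup order described in the commit message above can be condensed into a small stand-alone sketch. The sentinel-terminated key/value list and `lookup()` below are toy stand-ins for xenstore and `xenbus_scanf()`; names and values are illustrative only, not the real xen-blkback code:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

struct kv { const char *key; int val; };

/* Toy xenstore: linear scan of a sentinel-terminated list, -1 if absent. */
static int lookup(const struct kv *store, const char *key)
{
    for (; store->key != NULL; store++)
        if (strcmp(store->key, key) == 0)
            return store->val;
    return -1;
}

/* Restored behaviour: the presence of 'ring-page-order' decides which key
 * family is read, so a legacy frontend that never wrote it is served from
 * plain 'ring-ref' even if a stale 'ring-ref0' lingers in xenstore. */
static int first_ring_ref(const struct kv *store)
{
    if (lookup(store, "ring-page-order") < 0)
        return lookup(store, "ring-ref");   /* single-page ring */
    return lookup(store, "ring-ref0");      /* first of the multi-page refs */
}

/* The failure case from the commit message: a multi-page frontend left a
 * stale 'ring-ref0' behind, then a single-page frontend wrote 'ring-ref'. */
static const struct kv after_reload[] = {
    { "ring-ref",  7 },   /* written by the current, legacy frontend */
    { "ring-ref0", 3 },   /* stale, from the unloaded frontend */
    { NULL, 0 }
};
```

With the pre-4a8c31a1c6f5 order restored, `first_ring_ref(after_reload)` returns the current frontend's grant (7 here) instead of the stale `ring-ref0` entry that the broken fallback order would have mapped.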



From xen-devel-bounces@lists.xenproject.org Tue Feb 02 17:01:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 17:01:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80661.147676 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6z3g-00054l-Sf; Tue, 02 Feb 2021 17:01:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80661.147676; Tue, 02 Feb 2021 17:01:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6z3g-00054e-PR; Tue, 02 Feb 2021 17:01:44 +0000
Received: by outflank-mailman (input) for mailman id 80661;
 Tue, 02 Feb 2021 17:01:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1xXB=HE=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1l6z3e-00054Z-Oi
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 17:01:42 +0000
Received: from userp2120.oracle.com (unknown [156.151.31.85])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 88a52f97-ed8e-4d35-a616-f4ae4b13cf18;
 Tue, 02 Feb 2021 17:01:41 +0000 (UTC)
Received: from pps.filterd (userp2120.oracle.com [127.0.0.1])
 by userp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 112GxaLt004533;
 Tue, 2 Feb 2021 17:01:38 GMT
Received: from aserp3020.oracle.com (aserp3020.oracle.com [141.146.126.70])
 by userp2120.oracle.com with ESMTP id 36dn4whq4g-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Tue, 02 Feb 2021 17:01:38 +0000
Received: from pps.filterd (aserp3020.oracle.com [127.0.0.1])
 by aserp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 112H174C141785;
 Tue, 2 Feb 2021 17:01:37 GMT
Received: from nam12-mw2-obe.outbound.protection.outlook.com
 (mail-mw2nam12lp2048.outbound.protection.outlook.com [104.47.66.48])
 by aserp3020.oracle.com with ESMTP id 36dhbyjndn-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Tue, 02 Feb 2021 17:01:37 +0000
Received: from BYAPR10MB3288.namprd10.prod.outlook.com (2603:10b6:a03:156::21)
 by BY5PR10MB4227.namprd10.prod.outlook.com (2603:10b6:a03:208::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3805.16; Tue, 2 Feb
 2021 17:01:34 +0000
Received: from BYAPR10MB3288.namprd10.prod.outlook.com
 ([fe80::fcc2:62e8:e4e1:b4cb]) by BYAPR10MB3288.namprd10.prod.outlook.com
 ([fe80::fcc2:62e8:e4e1:b4cb%5]) with mapi id 15.20.3805.027; Tue, 2 Feb 2021
 17:01:34 +0000
Received: from [10.74.98.193] (138.3.200.1) by
 SJ0PR13CA0074.namprd13.prod.outlook.com (2603:10b6:a03:2c4::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3825.15 via Frontend
 Transport; Tue, 2 Feb 2021 17:01:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 88a52f97-ed8e-4d35-a616-f4ae4b13cf18
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : in-reply-to : content-type :
 content-transfer-encoding : mime-version; s=corp-2020-01-29;
 bh=PIlvO+rgtSQKk4bfc122PlbsI9JtFxoKSIR4QI30r6I=;
 b=paGPN7zBCXoMz3uQ6LNLgHSZbS2QP4/O+mgwiLoLtdLrMeKN4Fr2iia/E6c/avptZHYG
 KqNy/0Y4NeJGirBhVBpc57GwNAI0OVei02nxgWb+JI0Vbefi0CZIL+sbBYilBmYNLeK6
 YQa6IEt7lVjKdOUP2nVJ98agoECvCQ1GWwj3pWguZsvg0JohD9wyva5lmSV+uqva8P2y
 CIQQMhyjOYXvNWKkqUA3idF0z+zz5cqLfeI73mozz5D+4k8pRFOHu4Oi9U9xz4aFJRh6
 UqiJIo7lbv+mwaVaEfnJbEf2ERuqkbvIXTqR+qpI50z+tDgS/To4+Pil7YLqZ+MOxQ7t RQ== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=oJJ0pjmVN9/hfxhXbkndi3dcKfNs3KwTazkgqupsHLC8hq/b7FvPgQQthQ6r13KQL5nFItIxMDhi41If3o/9OrgPW44Ql69aFXrMVhYV4Ooi13TqA/qU6aNlFIR+jkQVIkua+b8fCbT2HsLCDaW+TIeA8XbZHV4Vyr2SHJodxoQ0qTcG5aNfufbt3qqrv5g2jmXrfTYwWFRW7E1cF7Dyq/2e0ogZLe0OijXQhZuRGp7xUMqoRojYJYIj6Ng+U2qopzqYCnv9DvWQaqDAzPdGotYvaHodbFCuabFfYTeB1KJzRh7XnWRe9U6i2GhYIm0Kvf91k9a49+LRIWnC+2KnWw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=PIlvO+rgtSQKk4bfc122PlbsI9JtFxoKSIR4QI30r6I=;
 b=PoJxpjD/TfCNpvhZRuwxFRNNcVHVOOKCoD5sracG6Vd/xNhkknOhbBgQNBLGHVwX9r4Vk0Tr5AF41qvOGklhXmCcVnpzE7DccznlO2m65mfAGA7b9mQ9uqLoXD2qJ9nB37Qp0di9BEg0jZ81qIAMbfSw1AWAIdGp6fhZ84Dw8edLOzt/c8t6iKFGCeXai365F/LyGbPm14wXOwUsy/tJsAsW2KRv6Y83r2vjuYjMv4riaysUQYraXXme1q+FuCV4C45AKkT2cy+IcjFVYqCw6Lo5VuBkAqqb5R9Su6SODknOTjAf1J/8nmqro2Jg2oS9tqYPIaKIyqgYbArTh79ksg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=PIlvO+rgtSQKk4bfc122PlbsI9JtFxoKSIR4QI30r6I=;
 b=u7crYMOSBthqxR48syBaKzd0bwJA1SG0ouc9uPFByLGd1x0bsEjofubmPrAIwcoKgeEC5eSOtzHzO/gPGvsaZU6+GeMg52UYuf4PQgAHmSugH3+ukcVFAtD6pjeUhXAxtPndnoyqDhE5KdtZimpAEZ8ImeL7kYUtqNagYYkepkE=
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=oracle.com;
Subject: Re: [PATCH v2 2/4] x86: Introduce MSR_UNHANDLED
To: Jan Beulich <jbeulich@suse.com>, andrew.cooper3@citrix.com
Cc: iwj@xenproject.org, wl@xen.org, anthony.perard@citrix.com,
        roger.pau@citrix.com, jun.nakajima@intel.com, kevin.tian@intel.com,
        xen-devel@lists.xenproject.org
References: <1611182952-9941-1-git-send-email-boris.ostrovsky@oracle.com>
 <1611182952-9941-3-git-send-email-boris.ostrovsky@oracle.com>
 <ba472bc6-a4e4-2e94-6388-0f9bf8eef3b3@suse.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <fe920a45-7a40-5eda-7fbd-06b28fc545a9@oracle.com>
Date: Tue, 2 Feb 2021 12:01:30 -0500
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.0
In-Reply-To: <ba472bc6-a4e4-2e94-6388-0f9bf8eef3b3@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Originating-IP: [138.3.200.1]
X-ClientProxiedBy: SJ0PR13CA0074.namprd13.prod.outlook.com
 (2603:10b6:a03:2c4::19) To BYAPR10MB3288.namprd10.prod.outlook.com
 (2603:10b6:a03:156::21)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: c11956f6-26a5-4bed-d5e7-08d8c79c3234
X-MS-TrafficTypeDiagnostic: BY5PR10MB4227:
X-Microsoft-Antispam-PRVS: 
	<BY5PR10MB422711B1102B8B61F16E33438AB59@BY5PR10MB4227.namprd10.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:605;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 
	Mn4h2wh9+EoneDDSfsg1CaYjaPK/jc7T9VNnudENdQYoiNqGFVgr7xqMAmPpNqESQnRRN6v+koMt6TX9jU6rwx6KrjLbLIfTrPTCrkA9GYiGqugTZvRhUU+N5ndPwLEqfr16O38SDVlRG7ziSw9cQ94Q45kHLw+D+uOy91Oq8jIfl2NLNqUtpk/NRVI8zVZMm9iY5ujQeYm83C2dinr6ZOQmbSB7mfDGBv0JiA6kcNIVTiMsp1+UNdwqp/q5itjHRhPVzf2AjfYfC5ps0unzzPIPFziQDZP9OBYiDxP0I+6s7KVExcdIautMsR6JV1Q0QwkwKDH6no22cPrkfaBIRybVG3T01Ws4adLXWdr4rtUMmoM6m1c4niKRSFwscFEdikOKazpTDICBJCfk/vwCzYS65kbAOeORH2m6vd1AxpLgGhGDsFg3t2ZUBXBX+VBvzX2Ziqn990QfdxFyQqBZ+e4A638kvrI3K9IjWTNUGDgidh5F5/o3i8stWTAOEY1kLVXCcpNPcOwMh08yZPhaZQbeU0F0JAxgzResNwrEVJVX4m31QzXAnNudc2Pck8i2IQPgFWj90cHA/OoHr/DZvKLPglLv4C4O6q8WZPmNMec=
X-Forefront-Antispam-Report: 
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR10MB3288.namprd10.prod.outlook.com;PTR:;CAT:NONE;SFS:(396003)(136003)(39860400002)(346002)(366004)(376002)(36756003)(31686004)(478600001)(16576012)(316002)(31696002)(66946007)(44832011)(86362001)(8936002)(956004)(66556008)(6486002)(26005)(2906002)(16526019)(4326008)(186003)(53546011)(4744005)(5660300002)(66476007)(2616005)(8676002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: 
	=?utf-8?B?L2dFWnluOU96N011QUYrdkFmK05JVXAreWxIYzAvSG5JTWdISUg2WUJQQm9Y?=
 =?utf-8?B?SjhkbmM5a2h2cWt3aWQyREhWakgrVkUrYjVVUGhwU1UrbTVWTEh6ZkFzR3dV?=
 =?utf-8?B?aUlHQnc4WWY0SnlEYWpQQjVoZDRqV3pjSy9aUmw1KzV5UFM1cFhrcDlBUmVV?=
 =?utf-8?B?STQxVFh5UVB5VW9PdFVqTCt2RTlEMnQ4YmRXdWpJUVZvSW4ycW16OWFYQXRi?=
 =?utf-8?B?V1lwbEdVMUZKUnUxWmRFMlYzMG1YY2pYa0JCbzYxMDRSQWlnRXlVTzR3ZlpU?=
 =?utf-8?B?VGtQQ2tUSjNnOG5Fd1RpOXpEazM3b2lZdnUrSkl1WWZCc0lNSTJZR2hoWFlS?=
 =?utf-8?B?WnEyWHF0Rlg0VmtaT1p4SldNeWcxc2E0ZzlhSE1rVVJNenJZNzl0Zk5yb29t?=
 =?utf-8?B?UkEveXdMbS9ZMlNwLzMwNmEwQXMyK2RJZHU2NWoxVHNnbWFrKzBSOUM4Zkp2?=
 =?utf-8?B?ZXdUeWtjWHo0em5NdEh1U3FWOUhjSzlNSG1ySGUwUUErMnVoMXJJY1JTUU5M?=
 =?utf-8?B?QmpyNjQ5ZU9uQWwwMkhsNnpZZGtacXovTGVWSlJnZGdLMG5hRmhCYStmdmdB?=
 =?utf-8?B?ck1saTZoa2hieGtTVXdRVGlsYkhGa24xb1FkWlcvOWlpL3c4eGdYSHRyYXVa?=
 =?utf-8?B?RU1MUXprOVhDSFYrWGgyS1pUQVZyU002L0ZvT0RwV1NoNUdoUUhEQXBVa2Jw?=
 =?utf-8?B?eTI4RFdTd0hYSXVRZXRFd2xzenJ4Q1BDZFdLQTNkejUybm9rTE9pZk1uWisv?=
 =?utf-8?B?OE4xd0dVZ2MyVXUyZ01DMjF3N0tlWUJnV3VXS3VhYVZ0czlwSWNpSk1KMlA4?=
 =?utf-8?B?MHJQTWxpM2lVcGlPZ2NuUi9GRU91UHh0K0hXSTVNd1NjUUhlU3BzTXo0RzdV?=
 =?utf-8?B?MjdFRHlIRnFPS3dJcGhUa2NmRm9hTnBjVjdSNjFUZ1V5TlNSZ3MyVkZzOG1C?=
 =?utf-8?B?ZXJONXRIYlFYaVBQb1hvUkpPbFVNZnhaUFRKRmF1QnpwaVRzNTlRVEdXQXZE?=
 =?utf-8?B?eWxwYVlkTHhCY0JMVkxHS1Z2dlNGdTNyN2wwa3BDVUFoNWtVbGliUEdSVVMx?=
 =?utf-8?B?Z2gzTm8rSTcra3V2UU11c1piVms3Tit6dkQ2NDJDN1dUY3RYRWcvbGJBNUZD?=
 =?utf-8?B?WmlHTWRrTDB3bVdSVHFabmpYbnNxL2R0cElHNXFSUk9YMnN6Y1Q3Ry84dHRh?=
 =?utf-8?B?T0YwblduZXJlcTkwZk1Cd2ZYZGpxUlpDbFZ3b3FJbWZ2bmJ5Rm5LLzRXM01s?=
 =?utf-8?B?OERlTnkxZ2hnSjVRWTF0OFhWQ29qN0szSlMvcUpsTjBXTkpKckZmeVdOYUIz?=
 =?utf-8?B?UzhRckR5NjM3WGgwZzBaTml1aHJNSDJXdzRKOHpjZ1ZwWmRSN2hsT2crTEt5?=
 =?utf-8?B?WHgzaW1kMzYzUjhMV3AyWXl6WHBXZlFQVVFyZThwQ2szWU5NTU1TbDZIMm41?=
 =?utf-8?Q?nw1k04AW?=
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c11956f6-26a5-4bed-d5e7-08d8c79c3234
X-MS-Exchange-CrossTenant-AuthSource: BYAPR10MB3288.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Feb 2021 17:01:34.6103
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: GT8j19AxwwznKON+i8gIrmhs3TTrhyid/Vrl+JaJQcVUO1Is6oUh6BxTZpKk8YGPh/7bBNB60UshHDdt43GQsPWiudr2MBrkdIruIkrNbk0=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR10MB4227
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9883 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 bulkscore=0 adultscore=0 suspectscore=0
 spamscore=0 mlxscore=0 malwarescore=0 mlxlogscore=999 phishscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2102020112
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9883 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 impostorscore=0 lowpriorityscore=0
 spamscore=0 priorityscore=1501 suspectscore=0 phishscore=0 mlxlogscore=999
 malwarescore=0 clxscore=1015 bulkscore=0 adultscore=0 mlxscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2102020112


On 1/22/21 6:51 AM, Jan Beulich wrote:
> Also, Andrew, (I think I did say so before) - I definitely
> would want your general consent with this model, as what gets
> altered here is almost all relatively recent contributions
> by you. Nor would I exclude the approach being controversial.
>

Andrew, ping?


-boris



From xen-devel-bounces@lists.xenproject.org Tue Feb 02 17:12:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 17:12:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80665.147688 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6zEI-0006Cp-UA; Tue, 02 Feb 2021 17:12:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80665.147688; Tue, 02 Feb 2021 17:12:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6zEI-0006Ci-QI; Tue, 02 Feb 2021 17:12:42 +0000
Received: by outflank-mailman (input) for mailman id 80665;
 Tue, 02 Feb 2021 17:12:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6zEH-0006Ca-Ic; Tue, 02 Feb 2021 17:12:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6zEH-0001tl-AA; Tue, 02 Feb 2021 17:12:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l6zEG-0007nC-VS; Tue, 02 Feb 2021 17:12:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l6zEG-0002PQ-Uv; Tue, 02 Feb 2021 17:12:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=rUEzL1DkDFdq1rCQv0+pHrW1Zd1twncwAdkZY7q6LIE=; b=53ykAz2fPEqvYf8fIesIPE7VWB
	OaR+S6T/evpWQzJJ4HZT7F1Y+GyJa47ivqRgXGUL0rMbG8OiKTA+HqcHapsF/FNjHXrhnpw0nmyxI
	193xF8cKRNlCgt14AEQHZBAp3g5hmgeSQ6HlJw0XhR1AGv1oBAdsHti6V7PbSDiqKNrU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158929-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 158929: regressions - FAIL
X-Osstest-Failures:
    linux-5.4:test-arm64-arm64-xl:<job status>:broken:regression
    linux-5.4:test-arm64-arm64-xl-credit2:<job status>:broken:regression
    linux-5.4:test-arm64-arm64-xl-xsm:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-multivcpu:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-pvhv2-amd:guest-start:fail:regression
    linux-5.4:test-amd64-coresched-amd64-xl:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-qemuu-freebsd12-amd64:guest-start:fail:regression
    linux-5.4:test-amd64-coresched-i386-xl:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-seattle:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl:guest-start:fail:regression
    linux-5.4:test-amd64-i386-qemut-rhel6hvm-amd:redhat-install:fail:regression
    linux-5.4:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
    linux-5.4:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-pvshim:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-pvhv2-intel:guest-start:fail:regression
    linux-5.4:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    linux-5.4:test-amd64-i386-libvirt:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-libvirt:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-credit1:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-pair:guest-start/debian:fail:regression
    linux-5.4:test-amd64-amd64-xl:guest-start:fail:regression
    linux-5.4:test-amd64-i386-xl:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-shadow:guest-start:fail:regression
    linux-5.4:test-amd64-i386-xl-xsm:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-credit2:guest-start:fail:regression
    linux-5.4:test-amd64-i386-pair:guest-start/debian:fail:regression
    linux-5.4:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
    linux-5.4:test-amd64-i386-libvirt-pair:guest-start/debian:fail:regression
    linux-5.4:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    linux-5.4:test-amd64-i386-qemut-rhel6hvm-intel:redhat-install:fail:regression
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:windows-install:fail:regression
    linux-5.4:test-amd64-amd64-amd64-pvgrub:debian-di-install:fail:regression
    linux-5.4:test-amd64-amd64-pygrub:debian-di-install:fail:regression
    linux-5.4:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:windows-install:fail:regression
    linux-5.4:test-amd64-amd64-i386-pvgrub:debian-di-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    linux-5.4:test-arm64-arm64-xl-xsm:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-credit1:guest-start:fail:regression
    linux-5.4:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    linux-5.4:test-arm64-arm64-xl-thunderx:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-credit2:guest-start:fail:regression
    linux-5.4:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-amd64:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qcow2:debian-di-install:fail:regression
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemut-debianhvm-amd64:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-libvirt-vhd:debian-di-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:windows-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
    linux-5.4:test-amd64-i386-xl-raw:debian-di-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:windows-install:fail:regression
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-cubietruck:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-libvirt:guest-start:fail:regression
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
    linux-5.4:test-amd64-amd64-qemuu-freebsd11-amd64:guest-start:fail:regression
    linux-5.4:test-amd64-i386-xl-shadow:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    linux-5.4:test-armhf-armhf-xl:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-arndale:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl:host-install(5):broken:heisenbug
    linux-5.4:test-arm64-arm64-xl-xsm:host-install(5):broken:heisenbug
    linux-5.4:test-arm64-arm64-xl-credit2:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-xl-rtds:xen-boot:fail:heisenbug
    linux-5.4:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
    linux-5.4:test-amd64-amd64-xl-rtds:guest-start:fail:allowable
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
X-Osstest-Versions-This:
    linux=0fbca6ce4174724f28be5268c5d210f51ed96e31
X-Osstest-Versions-That:
    linux=a829146c3fdcf6d0b76d9c54556a223820f1f73b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 02 Feb 2021 17:12:40 +0000

flight 158929 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158929/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl             <job status>                 broken  in 158898
 test-arm64-arm64-xl-credit2     <job status>                 broken  in 158898
 test-arm64-arm64-xl-xsm         <job status>                 broken  in 158898
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 158387
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 158387
 test-amd64-amd64-xl-multivcpu 14 guest-start             fail REGR. vs. 158387
 test-amd64-amd64-xl-pvhv2-amd 14 guest-start             fail REGR. vs. 158387
 test-amd64-coresched-amd64-xl 14 guest-start             fail REGR. vs. 158387
 test-amd64-amd64-qemuu-freebsd12-amd64 13 guest-start    fail REGR. vs. 158387
 test-amd64-coresched-i386-xl 14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-seattle  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl          14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-qemut-rhel6hvm-amd 12 redhat-install     fail REGR. vs. 158387
 test-amd64-i386-qemuu-rhel6hvm-amd 12 redhat-install     fail REGR. vs. 158387
 test-amd64-i386-freebsd10-amd64 13 guest-start           fail REGR. vs. 158387
 test-amd64-amd64-xl-pvshim   14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-xl-pvhv2-intel 14 guest-start           fail REGR. vs. 158387
 test-amd64-i386-freebsd10-i386 13 guest-start            fail REGR. vs. 158387
 test-amd64-amd64-xl-xsm      14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-libvirt      14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-xl-credit1  14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-pair        25 guest-start/debian       fail REGR. vs. 158387
 test-amd64-amd64-xl          14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-xl           14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-xl-shadow   14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-xl-xsm       14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-xl-credit2  14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-pair         25 guest-start/debian       fail REGR. vs. 158387
 test-amd64-i386-libvirt-xsm  14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-libvirt-pair 25 guest-start/debian       fail REGR. vs. 158387
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 158387
 test-amd64-i386-qemut-rhel6hvm-intel 12 redhat-install   fail REGR. vs. 158387
 test-amd64-amd64-qemuu-nested-amd 12 debian-hvm-install  fail REGR. vs. 158387
 test-amd64-i386-qemuu-rhel6hvm-intel 12 redhat-install   fail REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install  fail REGR. vs. 158387
 test-amd64-i386-xl-qemut-win7-amd64 12 windows-install   fail REGR. vs. 158387
 test-amd64-amd64-amd64-pvgrub 12 debian-di-install       fail REGR. vs. 158387
 test-amd64-amd64-pygrub      12 debian-di-install        fail REGR. vs. 158387
 test-amd64-amd64-qemuu-nested-intel 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemut-win7-amd64 12 windows-install  fail REGR. vs. 158387
 test-amd64-amd64-i386-pvgrub 12 debian-di-install        fail REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-i386-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 158387
 test-arm64-arm64-xl-xsm      14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-credit1  14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 158387
 test-arm64-arm64-xl-thunderx 14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-credit2  14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemut-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qcow2    12 debian-di-install        fail REGR. vs. 158387
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-i386-xl-qemut-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-libvirt-vhd 12 debian-di-install        fail REGR. vs. 158387
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-ws16-amd64 12 windows-install  fail REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemut-ws16-amd64 12 windows-install  fail REGR. vs. 158387
 test-amd64-i386-xl-qemuu-win7-amd64 12 windows-install   fail REGR. vs. 158387
 test-amd64-i386-xl-raw       12 debian-di-install        fail REGR. vs. 158387
 test-amd64-i386-xl-qemut-ws16-amd64 12 windows-install   fail REGR. vs. 158387
 test-armhf-armhf-xl-multivcpu 14 guest-start             fail REGR. vs. 158387
 test-armhf-armhf-xl-credit2  14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-cubietruck 14 guest-start            fail REGR. vs. 158387
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-xl-qemuu-ws16-amd64 12 windows-install   fail REGR. vs. 158387
 test-amd64-amd64-qemuu-freebsd11-amd64 13 guest-start    fail REGR. vs. 158387
 test-amd64-i386-xl-shadow    14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 158387
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 158387
 test-armhf-armhf-xl          14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 158387
 test-armhf-armhf-xl-credit1  14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-arndale  14 guest-start              fail REGR. vs. 158387

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl          5 host-install(5) broken in 158898 pass in 158929
 test-arm64-arm64-xl-xsm      5 host-install(5) broken in 158898 pass in 158929
 test-arm64-arm64-xl-credit2  5 host-install(5) broken in 158898 pass in 158929
 test-amd64-amd64-xl-rtds      8 xen-boot                   fail pass in 158898
 test-amd64-amd64-examine      4 memdisk-try-append         fail pass in 158898

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-xl-rtds     14 guest-start    fail in 158898 REGR. vs. 158387

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass

version targeted for testing:
 linux                0fbca6ce4174724f28be5268c5d210f51ed96e31
baseline version:
 linux                a829146c3fdcf6d0b76d9c54556a223820f1f73b

Last test of basis   158387  2021-01-12 19:40:06 Z   20 days
Failing since        158473  2021-01-17 13:42:20 Z   16 days   27 attempts
Testing same since   158818  2021-01-30 13:48:12 Z    3 days    6 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adrian Hunter <adrian.hunter@intel.com>
  Akilesh Kailash <akailash@google.com>
  Al Viro <viro@zeniv.linux.org.uk>
  Alan Maguire <alan.maguire@oracle.com>
  Alan Stern <stern@rowland.harvard.edu>
  Aleksander Jan Bajkowski <olek2@wp.pl>
  Alex Deucher <alexander.deucher@amd.com>
  Alex Leibovich <alexl@marvell.com>
  Alexander Lobakin <alobakin@pm.me>
  Alexander Shishkin <alexander.shishkin@linux.intel.com>
  Alexandru Ardelean <alexandru.ardelean@analog.com>
  Alexey Minnekhanov <alexeymin@postmarketos.org>
  Anders Roxell <anders.roxell@linaro.org>
  Andreas Kemnade <andreas@kemnade.info>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andrey Zhizhikin <andrey.zhizhikin@leica-geosystems.com>
  Andrii Nakryiko <andrii@kernel.org>
  Andy Lutomirski <luto@kernel.org>
  Anthony Iliopoulos <ailiop@suse.com>
  Ard Biesheuvel <ardb@kernel.org>
  Ariel Marcovitch <ariel.marcovitch@gmail.com>
  Ariel Marcovitch <arielmarcovitch@gmail.com>
  Arnaldo Carvalho de Melo <acme@redhat.com>
  Arnd Bergmann <arnd@arndb.de>
  Aya Levin <ayal@nvidia.com>
  Ayush Sawal <ayush.sawal@chelsio.com>
  Baptiste Lepers <baptiste.lepers@gmail.com>
  Bartosz Golaszewski <bgolaszewski@baylibre.com>
  Baruch Siach <baruch@tkos.co.il>
  Ben Skeggs <bskeggs@redhat.com>
  Billy Tsai <billy_tsai@aspeedtech.com>
  Borislav Petkov <bp@suse.de>
  Can Guo <cang@codeaurora.org>
  Catalin Marinas <catalin.marinas@arm.com>
  Cezary Rojewski <cezary.rojewski@intel.com>
  Chris Wilson <chris@chris-wilson.co.uk>
  Christoph Hellwig <hch@lst.de>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Chuck Lever <chuck.lever@oracle.com>
  Chunyan Zhang <chunyan.zhang@unisoc.com>
  Colin Ian King <colin.king@canonical.com>
  Cong Wang <cong.wang@bytedance.com>
  Craig Tatlor <ctatlor97@gmail.com>
  Damien Le Moal <damien.lemoal@wdc.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Daniel Vetter <daniel.vetter@ffwll.ch>
  Daniel Vetter <daniel.vetter@intel.com>
  Dave Wysochanski <dwysocha@redhat.com>
  David Howells <dhowells@redhat.com>
  David Rientjes <rientjes@google.com>
  David S. Miller <davem@davemloft.net>
  David Sterba <dsterba@suse.com>
  David Woodhouse <dwmw@amazon.co.uk>
  David Wu <david.wu@rock-chips.com>
  Dennis Li <Dennis.Li@amd.com>
  Dexuan Cui <decui@microsoft.com>
  Dinghao Liu <dinghao.liu@zju.edu.cn>
  Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
  Enke Chen <enchen@paloaltonetworks.com>
  Eric Biggers <ebiggers@google.com>
  Eric Dumazet <edumazet@google.com>
  Eugene Korenevsky <ekorenevsky@astralinux.ru>
  Ewan D. Milne <emilne@redhat.com>
  Fabian Vogt <fvogt@suse.com>
  Felipe Balbi <balbi@kernel.org>
  Felix Fietkau <nbd@nbd.name>
  Fenghua Yu <fenghua.yu@intel.com>
  Filipe Laíns <lains@archlinux.org>
  Filipe Manana <fdmanana@suse.com>
  Finn Thain <fthain@telegraphics.com.au>
  Florian Fainelli <f.fainelli@gmail.com>
  Florian Westphal <fw@strlen.de>
  Gaurav Kohli <gkohli@codeaurora.org>
  Geert Uytterhoeven <geert+renesas@glider.be>
  Gopal Tiwari <gtiwari@redhat.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Guido Günther <agx@sigxcpu.org>
  Guillaume Nault <gnault@redhat.com>
  Gustavo Pimentel <gustavo.pimentel@synopsys.com>
  Hamish Martin <hamish.martin@alliedtelesis.co.nz>
  Hangbin Liu <liuhangbin@gmail.com>
  Hannes Reinecke <hare@suse.de>
  Hans de Goede <hdegoede@redhat.com>
  Hao Wang <pkuwangh@gmail.com>
  Heikki Krogerus <heikki.krogerus@linux.intel.com>
  Hoang Le <hoang.h.le@dektech.com.au>
  Huazhong Tan <tanhuazhong@huawei.com>
  Ido Schimmel <idosch@nvidia.com>
  Igor Russkikh <irusskikh@marvell.com>
  Ingo Molnar <mingo@kernel.org>
  Ion Agorria <ion@agorria.com>
  Israel Rukshin <israelr@nvidia.com>
  J. Bruce Fields <bfields@redhat.com>
  j.nixdorf@avm.de <j.nixdorf@avm.de>
  Jakub Kicinski <kuba@kernel.org>
  Jamie Iles <jamie@jamieiles.com>
  Jan Kara <jack@suse.cz>
  Jani Nikula <jani.nikula@intel.com>
  Jann Horn <jannh@google.com>
  Jason A. Donenfeld <Jason@zx2c4.com>
  Jason Gerecke <jason.gerecke@wacom.com>
  Jason Gerecke <killertofu@gmail.com>
  Jason Gunthorpe <jgg@nvidia.com>
  JC Kuo <jckuo@nvidia.com>
  Jean Delvare <jdelvare@suse.de>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Axboe <axboe@kernel.dk>
  Jerome Brunet <jbrunet@baylibre.com>
  Jesper Dangaard Brouer <brouer@redhat.com>
  Jethro Beekman <jethro@fortanix.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jiri Kosina <jkosina@suse.cz>
  Jiri Olsa <jolsa@redhat.com>
  Jiri Slaby <jslaby@suse.cz>
  Joel Stanley <joel@jms.id.au>
  Johannes Berg <johannes.berg@intel.com>
  Johannes Nixdorf <j.nixdorf@avm.de>
  John Millikin <john@john-millikin.com>
  Johnathan Smithinovic <johnathan.smithinovic@gmx.at>
  Jon Hunter <jonathanh@nvidia.com>
  Jon Maloy <jmaloy@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Joonsoo Kim <iamjoonsoo.kim@lge.com>
  Josef Bacik <josef@toxicpanda.com>
  Jouni K. Seppänen <jks@iki.fi>
  Jozsef Kadlecsik <kadlec@netfilter.org>
  Juerg Haefliger <juergh@canonical.com>
  Juergen Gross <jgross@suse.com>
  Julian Wiedmann <jwi@linux.ibm.com>
  Kai-Heng Feng <kai.heng.feng@canonical.com>
  Kan Liang <kan.liang@linux.intel.com>
  Kees Cook <keescook@chromium.org>
  Krzysztof Mazur <krzysiek@podlesie.net>
  Krzysztof Piotr Olędzki <ole@ans.pl>
  Lars-Peter Clausen <lars@metafoo.de>
  Lecopzer Chen <lecopzer.chen@mediatek.com>
  Lecopzer Chen <lecopzer@gmail.com>
  Leon Romanovsky <leonro@nvidia.com>
  Leon Schuermann <leon@is.currently.online>
  Linhua Xu <linhua.xu@unisoc.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Longfang Liu <liulongfang@huawei.com>
  Lorenzo Bianconi <lorenzo@kernel.org>
  Lu Baolu <baolu.lu@linux.intel.com>
  Luis Lozano <llozano@google.com>
  Lukas Wunner <lukas@wunner.de>
  Manish Chopra <manishc@marvell.com>
  Manoj Gupta <manojgupta@google.com>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Marc Zyngier <maz@kernel.org>
  Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
  Marcin Wojtas <mw@semihalf.com>
  Mark Bloch <mbloch@nvidia.com>
  Mark Brown <broonie@kernel.org>
  Mark Zhang <markzhang@nvidia.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Martin KaFai Lau <kafai@fb.com>
  Martin Wilck <mwilck@suse.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Masami Hiramatsu <mhiramat@kernel.org>
  Mathias Kresin <dev@kresin.me>
  Mathias Nyman <mathias.nyman@linux.intel.com>
  Matteo Croce <mcroce@microsoft.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Miaohe Lin <linmiaohe@huawei.com>
  Michael Chan <michael.chan@broadcom.com>
  Michael Ellerman <mpe@ellerman.id.au>
  Michael Hennerich <michael.hennerich@analog.com>
  Michael S. Tsirkin <mst@redhat.com>
  Mike Snitzer <snitzer@redhat.com>
  Mikko Perttunen <mperttunen@nvidia.com>
  Mikulas Patocka <mpatocka@redhat.com>
  Ming Lei <ming.lei@redhat.com>
  Mircea Cirjaliu <mcirjaliu@bitdefender.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
  Neal Cardwell <ncardwell@google.com>
  Necip Fazil Yildiran <fazilyildiran@gmail.com>
  Nick Desaulniers <ndesaulniers@google.com>
  Nicolai Stange <nstange@suse.de>
  Nicolas Dichtel <nicolas.dichtel@6wind.com>
  Nilesh Javali <njavali@marvell.com>
  Oded Gabbay <ogabbay@kernel.org>
  Olaf Hering <olaf@aepfle.de>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Pali Rohár <pali@kernel.org>
  Palmer Dabbelt <palmerdabbelt@google.com>
  Pan Bian <bianpan2016@163.com>
  Parav Pandit <parav@nvidia.com>
  Patrik Jakobsson <patrik.r.jakobsson@gmail.com>
  Paul Cercueil <paul@crapouillou.net>
  Paulo Alcantara (SUSE) <pc@cjr.nz>
  Paulo Alcantara <pc@cjr.nz>
  Peter Collingbourne <pcc@google.com>
  Peter Geis <pgwipeout@gmail.com>
  Peter Robinson <pbrobinson@gmail.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Petr Machata <me@pmachata.org>
  Petr Machata <petrm@nvidia.com>
  Phil Oester <kernel@linuxace.com>
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Ping Cheng <ping.cheng@wacom.com>
  Ping Cheng <pinglinux@gmail.com>
  Po-Hsu Lin <po-hsu.lin@canonical.com>
  Qinglang Miao <miaoqinglang@huawei.com>
  Qingqing Zhuo <qingqing.zhuo@amd.com>
  Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Rafael Kitover <rkitover@gmail.com>
  Randy Dunlap <rdunlap@infradead.org>
  Rasmus Villemoes <rasmus.villemoes@prevas.dk>
  Reinette Chatre <reinette.chatre@intel.com>
  Rich Felker <dalias@libc.org>
  Rob Clark <robdclark@chromium.org>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Rohit Maheshwari <rohitm@chelsio.com>
  Roman Guskov <rguskov@dh-electronics.com>
  Ronnie Sahlberg <lsahlber@redhat.com>
  Ross Zwisler <zwisler@google.com>
  Ryan Chen <ryan_chen@aspeedtech.com>
  Saeed Mahameed <saeedm@nvidia.com>
  Sagar Shrikant Kadam <sagar.kadam@sifive.com>
  Sagi Grimberg <sagi@grimberg.me>
  Sameer Pujar <spujar@nvidia.com>
  Samuel Holland <samuel@sholland.org>
  Sasha Levin <sashal@kernel.org>
  Sean Tranchetti <stranche@codeaurora.org>
  Seth Miller <miller.seth@gmail.com>
  Shawn Guo <shawn.guo@linaro.org>
  Shravya Kumbham <shravya.kumbham@xilinx.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Stanislav Fomichev <sdf@google.com>
  Stefan Chulski <stefanc@marvell.com>
  Steffen Klassert <steffen.klassert@secunet.com>
  Stephan Gerhold <stephan@gerhold.net>
  Stephen Boyd <sboyd@kernel.org>
  Steve French <stfrench@microsoft.com>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Su Yue <l@damenly.su>
  Sudip Mukherjee <sudipm.mukherjee@gmail.com>
  Takashi Iwai <tiwai@suse.de>
  Tariq Toukan <tariqt@nvidia.com>
  Theodore Ts'o <tytso@mit.edu>
  Thierry Reding <treding@nvidia.com>
  Thinh Nguyen <Thinh.Nguyen@synopsys.com>
  Thomas Bogendoerfer <tsbogend@alpha.franken.de>
  Thomas Gleixner <tglx@linutronix.de>
  Thomas Hebb <tommyhebb@gmail.com>
  Tobias Waldekranz <tobias@waldekranz.com>
  Toke Høiland-Jørgensen <toke@toke.dk>
  Tom Rix <trix@redhat.com>
  Tony Lindgren <tony@atomide.com>
  Trond Myklebust <trond.myklebust@hammerspace.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
  Vadim Pasternak <vadimp@nvidia.com>
  Valdis Kletnieks <valdis.kletnieks@vt.edu>
  Valdis Klētnieks <valdis.kletnieks@vt.edu>
  Vasily Averin <vvs@virtuozzo.com>
  Victor Zhao <Victor.Zhao@amd.com>
  Vinay Kumar Yadav <vinay.yadav@chelsio.com>
  Vincent Mailhol <mailhol.vincent@wanadoo.fr>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vineet Gupta <vgupta@synopsys.com>
  Vinod Koul <vkoul@kernel.org>
  Viresh Kumar <viresh.kumar@linaro.org>
  Vladimir Oltean <vladimir.oltean@nxp.com>
  Vlastimil Babka <vbabka@suse.cz>
  Wang Hai <wanghai38@huawei.com>
  Wang Hui <john.wanghui@huawei.com>
  Wayne Lin <Wayne.Lin@amd.com>
  Wei Liu <wei.liu@kernel.org>
  Will Deacon <will@kernel.org>
  Willem de Bruijn <willemb@google.com>
  Wolfram Sang <wsa+renesas@sang-engineering.com>
  Wolfram Sang <wsa@kernel.org>
  Xiaolei Wang <xiaolei.wang@windriver.com>
  yangerkun <yangerkun@huawei.com>
  Yazen Ghannam <Yazen.Ghannam@amd.com>
  Yonglong Liu <liuyonglong@huawei.com>
  Youling Tang <tangyouling@loongson.cn>
  YueHaibing <yuehaibing@huawei.com>
  Yufeng Mo <moyufeng@huawei.com>
  zhengbin <zhengbin13@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-arm64-arm64-xl broken
broken-job test-arm64-arm64-xl-credit2 broken
broken-job test-arm64-arm64-xl-xsm broken

Not pushing.

(No revision log; it would be 8166 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Feb 02 17:44:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 17:44:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80674.147703 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6zit-0000ni-Ka; Tue, 02 Feb 2021 17:44:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80674.147703; Tue, 02 Feb 2021 17:44:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6zit-0000nb-H9; Tue, 02 Feb 2021 17:44:19 +0000
Received: by outflank-mailman (input) for mailman id 80674;
 Tue, 02 Feb 2021 17:44:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gsWk=HE=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1l6zis-0000nW-GR
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 17:44:18 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e46cfcb1-616d-4c9d-9b1b-592c69505ea2;
 Tue, 02 Feb 2021 17:44:17 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 95D3464DA3;
 Tue,  2 Feb 2021 17:44:16 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e46cfcb1-616d-4c9d-9b1b-592c69505ea2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1612287856;
	bh=kAPUxvEPOSjvbzuW7d4DzlEofbmKMDPLH5RHl5JCa84=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=Lu0lpEgV2tbCP2WhvEk4HQCnDd1OKYESzDAjRt3K1UzWBjFRtKQy3lGZxdSPyi89h
	 oTDZUViaWM5dL2jx3jf8yfdoueiAknt5W8HNI1sQl1Sr6WjiCevhlRQ7DKcu5mntRu
	 qYyxGK27EGZN4WCp3VCPWOsMnVctYqOxS+2daD0pX1UrkQgVPfq7gOI0GVDOyUuU9T
	 qL/2ZFIHyQh3OWVKe9KGcXO5ynV2ZevXYUm2cpC1j7v0f+Cldy2Xe56j2k/XzkNmDp
	 qE7BgjhzdoHOwdPh2Ffw71ZNh0I5W/opq8wIbX0oWScdKNhIOquDLln0D8ya5L+rtO
	 Y8c1+JHpD/QDA==
Date: Tue, 2 Feb 2021 09:44:15 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Rahul Singh <Rahul.Singh@arm.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel <xen-devel@lists.xenproject.org>, Julien Grall <julien@xen.org>, 
    Bertrand Marquis <Bertrand.Marquis@arm.com>, 
    "Volodymyr_Babchuk@epam.com" <Volodymyr_Babchuk@epam.com>, 
    "brian.woods@xilinx.com" <brian.woods@xilinx.com>
Subject: Re: [PATCH v3 0/3] Generic SMMU Bindings
In-Reply-To: <C094E054-885F-4363-ABF3-E0FB4DDD7A2A@arm.com>
Message-ID: <alpine.DEB.2.21.2102020937480.29047@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2101261435550.2568@sstabellini-ThinkPad-T480s> <C094E054-885F-4363-ABF3-E0FB4DDD7A2A@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 2 Feb 2021, Rahul Singh wrote:
> Hello Stefano,
> 
> > On 26 Jan 2021, at 10:58 pm, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > 
> > Hi all,
> > 
> > This series introduces support for the generic SMMU bindings to
> > xen/drivers/passthrough/arm/smmu.c.
> > 
> > The last version of the series was
> > https://marc.info/?l=xen-devel&m=159539053406643
> > 
> > I realize that it is late for 4.15 -- I think it is OK if this series
> > goes in afterwards.
> 
> I tested the series on the Juno board and it is working fine.
> While testing this series I found one issue in the SMMU driver; it is not related to this series but is a pre-existing issue in the SMMU driver.
> 
> If there is more than one device behind the SMMU and they share the same Stream-ID, the SMMU driver creates a new SMR entry without first checking whether an SMR entry for that Stream-ID is already configured.  Because of this I observed stream match conflicts on the Juno board.
> 
> (XEN) smmu: /iommu@7fb30000: Unexpected global fault, this could be serious
> (XEN) smmu: /iommu@7fb30000: 	GFSR 0x00000004, GFSYNR0 0x00000006, GFSYNR1 0x00000000, GFSYNR2 0x00000000
> 
> 
> The two patches below need to be ported to Xen from the Linux driver to fix the issue.
> 
> https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/drivers/iommu/arm-smmu.c?h=linux-5.8.y&id=1f3d5ca43019bff1105838712d55be087d93c0da
> https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/drivers/iommu/arm-smmu.c?h=linux-5.8.y&id=21174240e4f4439bb8ed6c116cdbdc03eba2126e


Good catch and thanks for the pointers! Do you have any interest in
backporting these two patches or should I put them on my TODO list?

Unrelated to who does the job, we should discuss if it makes sense to
try to fix the bug for 4.15. The patches don't seem trivial so I am
tempted to say that it might be best to leave the bug unfixed for 4.15
and fix it later.


From xen-devel-bounces@lists.xenproject.org Tue Feb 02 17:48:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 17:48:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80675.147715 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6zmY-0000vn-5G; Tue, 02 Feb 2021 17:48:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80675.147715; Tue, 02 Feb 2021 17:48:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6zmY-0000vg-1Q; Tue, 02 Feb 2021 17:48:06 +0000
Received: by outflank-mailman (input) for mailman id 80675;
 Tue, 02 Feb 2021 17:48:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TyQM=HE=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1l6zmW-0000vb-Fy
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 17:48:04 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c5c730fc-f88f-4553-aaf0-ffa11e86cc42;
 Tue, 02 Feb 2021 17:48:01 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 112HllcC022963
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Tue, 2 Feb 2021 12:47:53 -0500 (EST) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 112HllHT022962;
 Tue, 2 Feb 2021 09:47:47 -0800 (PST) (envelope-from ehem)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c5c730fc-f88f-4553-aaf0-ffa11e86cc42
Date: Tue, 2 Feb 2021 09:47:47 -0800
From: Elliott Mitchell <ehem+xen@m5p.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
        Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH] xen/arm: domain_build: Ignore device nodes with invalid
 addresses
Message-ID: <YBmQQ3Tzu++AadKx@mattapan.m5p.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
X-Spam-Status: No, score=0.4 required=10.0 tests=KHOP_HELO_FCRDNS autolearn=no
	autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

The handle_device() function has been returning failure upon
encountering a device address which was invalid.  A device tree with
such an entry has now been seen in the wild.  Since simply ignoring
these entries causes no failures, ignore them.

Signed-off-by: Elliott Mitchell <ehem+xen@m5p.com>

---
I'm starting to suspect there are an awful lot of places in the various
domain_build.c files which should simply ignore errors.  This is now the
second place I've encountered in 2 months where ignoring errors was the
correct action.  I know failing in case of error is an engineer's
favorite approach, but there seem to be an awful lot of harmless
failures causing panics.

This started as the thread "[RFC PATCH] xen/arm: domain_build: Ignore
empty memory bank".  Now it seems clear the correct approach is to simply
ignore these entries.

This seems a good candidate for backport to 4.14 and certainly should be
in 4.15.
---
 xen/arch/arm/domain_build.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 374bf655ee..c0568b7579 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1407,9 +1407,9 @@ static int __init handle_device(struct domain *d, struct dt_device_node *dev,
         res = dt_device_get_address(dev, i, &addr, &size);
         if ( res )
         {
-            printk(XENLOG_ERR "Unable to retrieve address %u for %s\n",
-                   i, dt_node_full_name(dev));
-            return res;
+            printk(XENLOG_ERR "Unable to retrieve address of %s, index %u\n",
+                   dt_node_full_name(dev), i);
+            continue;
         }
 
         res = map_range_to_domain(dev, addr, size, &mr_data);
-- 
2.20.1


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Tue Feb 02 17:50:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 17:50:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80679.147730 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6zox-0001uN-Ik; Tue, 02 Feb 2021 17:50:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80679.147730; Tue, 02 Feb 2021 17:50:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6zox-0001uG-FB; Tue, 02 Feb 2021 17:50:35 +0000
Received: by outflank-mailman (input) for mailman id 80679;
 Tue, 02 Feb 2021 17:50:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1l6zov-0001uB-Nh
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 17:50:33 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l6zou-0002Vy-Na; Tue, 02 Feb 2021 17:50:32 +0000
Received: from 54-240-197-231.amazon.com ([54.240.197.231]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l6zou-0005qx-HV; Tue, 02 Feb 2021 17:50:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=lcy2N9XuXSWSqalYRLS1T/ccGfxxM6G/C6AQaaGTGwM=; b=UFiRPikbD1sqNmtUchdtO+inFo
	UM3dLL8WkLKuk6oOfXYiNK9CrS1m5NQ4rX6w5GhKShb5U9AsDbDiU+sqI77fLqdWexEWVjineBQjr
	Ee95YYckgzFMHfVUgfgxUMZ0UjXy2s3NbKl8Q7djWV4KfS/izXJ4yDvlMyIPxnBBVzH4=;
Subject: Re: [PATCH v3 0/3] Generic SMMU Bindings
To: Stefano Stabellini <sstabellini@kernel.org>,
 Rahul Singh <Rahul.Singh@arm.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 "Volodymyr_Babchuk@epam.com" <Volodymyr_Babchuk@epam.com>,
 "brian.woods@xilinx.com" <brian.woods@xilinx.com>
References: <alpine.DEB.2.21.2101261435550.2568@sstabellini-ThinkPad-T480s>
 <C094E054-885F-4363-ABF3-E0FB4DDD7A2A@arm.com>
 <alpine.DEB.2.21.2102020937480.29047@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <7ddc6e1b-41ce-37ae-f86e-39893f53a0ec@xen.org>
Date: Tue, 2 Feb 2021 17:50:29 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2102020937480.29047@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 02/02/2021 17:44, Stefano Stabellini wrote:
> On Tue, 2 Feb 2021, Rahul Singh wrote:
>> Hello Stefano,
>>
>>> On 26 Jan 2021, at 10:58 pm, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>>
>>> Hi all,
>>>
>>> This series introduces support for the generic SMMU bindings to
>>> xen/drivers/passthrough/arm/smmu.c.
>>>
>>> The last version of the series was
>>> https://marc.info/?l=xen-devel&m=159539053406643
>>>
>>> I realize that it is late for 4.15 -- I think it is OK if this series
>>> goes in afterwards.
>>
>> I tested the series on the Juno board and it is working fine.
>> While testing this series I found one issue in the SMMU driver; it is not related to this series but is a pre-existing issue in the SMMU driver.
>>
>> If there is more than one device behind the SMMU and they share the same Stream-ID, the SMMU driver creates a new SMR entry without first checking whether an SMR entry for that Stream-ID is already configured.  Because of this I observed stream match conflicts on the Juno board.
>>
>> (XEN) smmu: /iommu@7fb30000: Unexpected global fault, this could be serious
>> (XEN) smmu: /iommu@7fb30000: 	GFSR 0x00000004, GFSYNR0 0x00000006, GFSYNR1 0x00000000, GFSYNR2 0x00000000
>>
>>
>> The two patches below need to be ported to Xen from the Linux driver to fix the issue.
>>
>> https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/drivers/iommu/arm-smmu.c?h=linux-5.8.y&id=1f3d5ca43019bff1105838712d55be087d93c0da
>> https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/drivers/iommu/arm-smmu.c?h=linux-5.8.y&id=21174240e4f4439bb8ed6c116cdbdc03eba2126e
> 
> 
> Good catch and thanks for the pointers! Do you have any interest in
> backporting these two patches or should I put them on my TODO list?
> 
> Unrelated to who does the job, we should discuss if it makes sense to
> try to fix the bug for 4.15. The patches don't seem trivial so I am
> tempted to say that it might be best to leave the bug unfixed for 4.15
> and fix it later.

SMMU support on Juno is not that interesting because IIRC the stream-ID 
is the same for all the devices, so it is all-or-nothing passthrough.

For other HW, this may be a useful feature. Yet we would need a way to 
group the devices for passthrough.

In this context, I would consider it more a feature than a bug because 
the SMMU driver has never remotely worked on such HW.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Feb 02 17:57:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 17:57:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80683.147745 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6zvW-00026J-Bi; Tue, 02 Feb 2021 17:57:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80683.147745; Tue, 02 Feb 2021 17:57:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l6zvW-00026C-8W; Tue, 02 Feb 2021 17:57:22 +0000
Received: by outflank-mailman (input) for mailman id 80683;
 Tue, 02 Feb 2021 17:57:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1l6zvV-000267-5h
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 17:57:21 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1l6zvU-0002dh-MJ; Tue, 02 Feb 2021 17:57:20 +0000
Received: from host86-190-149-163.range86-190.btcentralplus.com
 ([86.190.149.163] helo=ubuntu.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1l6zvU-0006HU-Ac; Tue, 02 Feb 2021 17:57:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	Message-Id:Date:Subject:Cc:To:From;
	bh=U9LFyj567L8ODDmpt5H9dtpTYleJRBeis5xuYap+UEk=; b=IUMNPYDqTYI1m8BX1jWz/P2whH
	oNjab1nTmbSWf0fqWGhwZLs6Uc9oaLgtwwALS9GMpNWg813Ba3WGqoY7+YjCijfTBCWYjCI1WyMsn
	cso0MzQS3/uM9+n28VjVJBo9A0oF72aK2lv2qOP7xALJK5tiE97jvyjlF+mJbRox/L1U=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org,
	linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Jens Axboe <axboe@kernel.dk>
Subject: [PATCH v3] xen-blkback: fix compatibility bug with single page rings
Date: Tue,  2 Feb 2021 17:56:59 +0000
Message-Id: <20210202175659.18452-1-paul@xen.org>
X-Mailer: git-send-email 2.17.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

Prior to commit 4a8c31a1c6f5 ("xen/blkback: rework connect_ring() to avoid
inconsistent xenstore 'ring-page-order' set by malicious blkfront"), the
behaviour of xen-blkback when connecting to a frontend was:

- read 'ring-page-order'
- if not present then expect a single page ring specified by 'ring-ref'
- else expect a ring specified by 'ring-refX' where X is between 0 and
  1 << ring-page-order

This was correct behaviour, but was broken by the aforementioned commit to
become:

- read 'ring-page-order'
- if not present then expect a single page ring (i.e. ring-page-order = 0)
- expect a ring specified by 'ring-refX' where X is between 0 and
  1 << ring-page-order
- if that didn't work then see if there's a single page ring specified by
  'ring-ref'

This incorrect behaviour works most of the time but fails when a frontend
that sets 'ring-page-order' is unloaded and replaced by one that does not
because, instead of reading 'ring-ref', xen-blkback will read the stale
'ring-ref0' left around by the previous frontend and try to map the wrong
grant reference.

This patch restores the original behaviour.

Fixes: 4a8c31a1c6f5 ("xen/blkback: rework connect_ring() to avoid inconsistent xenstore 'ring-page-order' set by malicious blkfront")
Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Reviewed-by: Dongli Zhang <dongli.zhang@oracle.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
---
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Jens Axboe <axboe@kernel.dk>

v3:
 - Whitespace fix

v2:
 - Remove now-spurious error path special-case when nr_grefs == 1
---
 drivers/block/xen-blkback/common.h |  1 +
 drivers/block/xen-blkback/xenbus.c | 38 +++++++++++++-----------------
 2 files changed, 17 insertions(+), 22 deletions(-)

diff --git a/drivers/block/xen-blkback/common.h b/drivers/block/xen-blkback/common.h
index b0c71d3a81a0..bda5c815e441 100644
--- a/drivers/block/xen-blkback/common.h
+++ b/drivers/block/xen-blkback/common.h
@@ -313,6 +313,7 @@ struct xen_blkif {
 
 	struct work_struct	free_work;
 	unsigned int 		nr_ring_pages;
+	bool			multi_ref;
 	/* All rings for this device. */
 	struct xen_blkif_ring	*rings;
 	unsigned int		nr_rings;
diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
index 9860d4842f36..6c5e9373e91c 100644
--- a/drivers/block/xen-blkback/xenbus.c
+++ b/drivers/block/xen-blkback/xenbus.c
@@ -998,14 +998,17 @@ static int read_per_ring_refs(struct xen_blkif_ring *ring, const char *dir)
 	for (i = 0; i < nr_grefs; i++) {
 		char ring_ref_name[RINGREF_NAME_LEN];
 
-		snprintf(ring_ref_name, RINGREF_NAME_LEN, "ring-ref%u", i);
+		if (blkif->multi_ref)
+			snprintf(ring_ref_name, RINGREF_NAME_LEN, "ring-ref%u", i);
+		else {
+			WARN_ON(i != 0);
+			snprintf(ring_ref_name, RINGREF_NAME_LEN, "ring-ref");
+		}
+
 		err = xenbus_scanf(XBT_NIL, dir, ring_ref_name,
 				   "%u", &ring_ref[i]);
 
 		if (err != 1) {
-			if (nr_grefs == 1)
-				break;
-
 			err = -EINVAL;
 			xenbus_dev_fatal(dev, err, "reading %s/%s",
 					 dir, ring_ref_name);
@@ -1013,18 +1016,6 @@ static int read_per_ring_refs(struct xen_blkif_ring *ring, const char *dir)
 		}
 	}
 
-	if (err != 1) {
-		WARN_ON(nr_grefs != 1);
-
-		err = xenbus_scanf(XBT_NIL, dir, "ring-ref", "%u",
-				   &ring_ref[0]);
-		if (err != 1) {
-			err = -EINVAL;
-			xenbus_dev_fatal(dev, err, "reading %s/ring-ref", dir);
-			return err;
-		}
-	}
-
 	err = -ENOMEM;
 	for (i = 0; i < nr_grefs * XEN_BLKIF_REQS_PER_PAGE; i++) {
 		req = kzalloc(sizeof(*req), GFP_KERNEL);
@@ -1129,10 +1120,15 @@ static int connect_ring(struct backend_info *be)
 		 blkif->nr_rings, blkif->blk_protocol, protocol,
 		 blkif->vbd.feature_gnt_persistent ? "persistent grants" : "");
 
-	ring_page_order = xenbus_read_unsigned(dev->otherend,
-					       "ring-page-order", 0);
-
-	if (ring_page_order > xen_blkif_max_ring_order) {
+	err = xenbus_scanf(XBT_NIL, dev->otherend, "ring-page-order", "%u",
+			   &ring_page_order);
+	if (err != 1) {
+		blkif->nr_ring_pages = 1;
+		blkif->multi_ref = false;
+	} else if (ring_page_order <= xen_blkif_max_ring_order) {
+		blkif->nr_ring_pages = 1 << ring_page_order;
+		blkif->multi_ref = true;
+	} else {
 		err = -EINVAL;
 		xenbus_dev_fatal(dev, err,
 				 "requested ring page order %d exceed max:%d",
@@ -1141,8 +1137,6 @@ static int connect_ring(struct backend_info *be)
 		return err;
 	}
 
-	blkif->nr_ring_pages = 1 << ring_page_order;
-
 	if (blkif->nr_rings == 1)
 		return read_per_ring_refs(&blkif->rings[0], dev->otherend);
 	else {
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Feb 02 18:12:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 18:12:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80685.147757 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l709u-00046N-Nb; Tue, 02 Feb 2021 18:12:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80685.147757; Tue, 02 Feb 2021 18:12:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l709u-00046G-K2; Tue, 02 Feb 2021 18:12:14 +0000
Received: by outflank-mailman (input) for mailman id 80685;
 Tue, 02 Feb 2021 18:12:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1l709t-00046B-85
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 18:12:13 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l709r-0002yA-8y; Tue, 02 Feb 2021 18:12:11 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l709r-0007QK-1e; Tue, 02 Feb 2021 18:12:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=3ao1A6dDpSArxpBrWSwd2BFC+8CJxPMrIxJHfLjHuuQ=; b=ieZG9MYo4OpgSIbNeVWkeH9LLh
	GPHWXzwlws5kNYCW5bbIBSaW3eOkSD0e9clXcjV2Y6gtERXj5+39UpL7ckcMfK6fCpUQ5sJQniQ6a
	AP/cL0qOSUyimJuGyLZQ04tpgFoR1LRtx2mFiU1zQs5kqfTX5ahwkaoHfBdVKNMiU5QE=;
Subject: Re: [PATCH] xen/arm: domain_build: Ignore device nodes with invalid
 addresses
To: Elliott Mitchell <ehem+xen@m5p.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <YBmQQ3Tzu++AadKx@mattapan.m5p.com>
From: Julien Grall <julien@xen.org>
Message-ID: <a422c04c-f908-6fb6-f2de-fea7b18a6e7d@xen.org>
Date: Tue, 2 Feb 2021 18:12:09 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <YBmQQ3Tzu++AadKx@mattapan.m5p.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 02/02/2021 17:47, Elliott Mitchell wrote:
> The handle_device() function has been returning failure upon
> encountering a device address which was invalid.  A device tree which
> had such an entry has now been seen in the wild.  As it causes no
> failures to simply ignore the entries, ignore them.
>
> Signed-off-by: Elliott Mitchell <ehem+xen@m5p.com>
> 
> ---
> I'm starting to suspect there are an awful lot of places in the various
> domain_build.c files which should simply ignore errors.  This is now the
> second place I've encountered in 2 months where ignoring errors was the
> correct action.

Right, as a counterpoint: we have been running Xen on Arm HW for several 
years now and this is the first time I have heard of an issue parsing the 
DT. So while I appreciate that you are eager to run Xen on the RPi...

>  I know failing in case of error is an engineer's
> favorite approach, but there seem to be an awful lot of harmless failures
> causing panics.
> 
> This started as the thread "[RFC PATCH] xen/arm: domain_build: Ignore
> empty memory bank".  Now it seems clear the correct approach is to simply
> ignore these entries.

... we first need to fully understand the issues. Here are a few questions:
    1) Can you provide more information on why you believe the address 
is invalid?
    2) How does Linux use the node?
    3) Is it happening with all the RPi DTs? If not, what are the 
differences?

> 
> This seems a good candidate for backport to 4.14 and certainly should be
> in 4.15.
> ---
>   xen/arch/arm/domain_build.c | 6 +++---
>   1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 374bf655ee..c0568b7579 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -1407,9 +1407,9 @@ static int __init handle_device(struct domain *d, struct dt_device_node *dev,
>           res = dt_device_get_address(dev, i, &addr, &size);
>           if ( res )
>           {
> -            printk(XENLOG_ERR "Unable to retrieve address %u for %s\n",
> -                   i, dt_node_full_name(dev));
> -            return res;
> +            printk(XENLOG_ERR "Unable to retrieve address of %s, index %u\n",
> +                   dt_node_full_name(dev), i);
> +            continue;
>           }
>   
>           res = map_range_to_domain(dev, addr, size, &mr_data);
> 

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Feb 02 18:27:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 18:27:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80687.147769 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l70OY-0005EW-3k; Tue, 02 Feb 2021 18:27:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80687.147769; Tue, 02 Feb 2021 18:27:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l70OY-0005EP-09; Tue, 02 Feb 2021 18:27:22 +0000
Received: by outflank-mailman (input) for mailman id 80687;
 Tue, 02 Feb 2021 18:27:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gsWk=HE=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1l70OX-0005EC-2L
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 18:27:21 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d7c15d5a-4cdd-4df8-b5d2-8bafc3c15eb9;
 Tue, 02 Feb 2021 18:27:20 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 3D97764F68;
 Tue,  2 Feb 2021 18:27:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d7c15d5a-4cdd-4df8-b5d2-8bafc3c15eb9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1612290439;
	bh=4VmhOap821i5h0fmMi7LKFBaQp9enkkSEshesdaVjw4=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=GjsBHYrXgmuyvYBZ31Qa2c5eK7vTmgmh/COeMQ4EKJD8wTOxCGlRMjB0ld0pOxaQR
	 Oz4uBrpORvjB724Msm0/b1dHUDNXree3STn90miBzIW1ZDmI665k1/mjBUq+YJq76G
	 ikXUHq9SNmgf8R4pSNN59j+5zmGUONCMFq0iCjm8AXv/f+cNv65Gn7SZTwKUjRx46r
	 +PdorPizufaj5b5BBoz9456ebavm8gqOhcoO/VUbZ7uh8nmo2ACjnGaqbTbX/801jR
	 EmCmFjYybML8Yynezqn33Rv6GKbTk8MuM8GH/eXZjxpByQQryaMvlLpFyh3MrjRv1W
	 SChmSh/kvM/Pw==
Date: Tue, 2 Feb 2021 10:27:18 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Rahul Singh <Rahul.Singh@arm.com>, 
    xen-devel <xen-devel@lists.xenproject.org>, 
    Bertrand Marquis <Bertrand.Marquis@arm.com>, 
    "Volodymyr_Babchuk@epam.com" <Volodymyr_Babchuk@epam.com>, 
    "brian.woods@xilinx.com" <brian.woods@xilinx.com>
Subject: Re: [PATCH v3 0/3] Generic SMMU Bindings
In-Reply-To: <7ddc6e1b-41ce-37ae-f86e-39893f53a0ec@xen.org>
Message-ID: <alpine.DEB.2.21.2102021024100.29047@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2101261435550.2568@sstabellini-ThinkPad-T480s> <C094E054-885F-4363-ABF3-E0FB4DDD7A2A@arm.com> <alpine.DEB.2.21.2102020937480.29047@sstabellini-ThinkPad-T480s> <7ddc6e1b-41ce-37ae-f86e-39893f53a0ec@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 2 Feb 2021, Julien Grall wrote:
> On 02/02/2021 17:44, Stefano Stabellini wrote:
> > On Tue, 2 Feb 2021, Rahul Singh wrote:
> > > Hello Stefano,
> > > 
> > > > On 26 Jan 2021, at 10:58 pm, Stefano Stabellini <sstabellini@kernel.org>
> > > > wrote:
> > > > 
> > > > Hi all,
> > > > 
> > > > This series introduces support for the generic SMMU bindings to
> > > > xen/drivers/passthrough/arm/smmu.c.
> > > > 
> > > > The last version of the series was
> > > > https://marc.info/?l=xen-devel&m=159539053406643
> > > > 
> > > > I realize that it is late for 4.15 -- I think it is OK if this series
> > > > goes in afterwards.
> > > 
> > > I tested the series on the Juno board and it is working fine.
> > > While testing it I found one issue in the SMMU driver that is not
> > > related to this series but is a pre-existing issue in the SMMU driver.
> > > 
> > > If there is more than one device behind the SMMU and they share the
> > > same Stream-ID, the SMMU driver creates a new SMR entry without
> > > checking whether an SMR entry for that Stream-ID is already
> > > configured.  Because of this I observed stream match conflicts on the
> > > Juno board.
> > > 
> > > (XEN) smmu: /iommu@7fb30000: Unexpected global fault, this could be
> > > serious
> > > (XEN) smmu: /iommu@7fb30000: 	GFSR 0x00000004, GFSYNR0 0x00000006,
> > > GFSYNR1 0x00000000, GFSYNR2 0x00000000
> > > 
> > > 
> > > The two patches below need to be ported to Xen to fix the issue in
> > > the Linux driver.
> > > 
> > > https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/drivers/iommu/arm-smmu.c?h=linux-5.8.y&id=1f3d5ca43019bff1105838712d55be087d93c0da
> > > https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/drivers/iommu/arm-smmu.c?h=linux-5.8.y&id=21174240e4f4439bb8ed6c116cdbdc03eba2126e
> > 
> > 
> > Good catch and thanks for the pointers! Do you have any interest in
> > backporting these two patches or should I put them on my TODO list?
> > 
> > Unrelated to who does the job, we should discuss if it makes sense to
> > try to fix the bug for 4.15. The patches don't seem trivial so I am
> > tempted to say that it might be best to leave the bug unfixed for 4.15
> > and fix it later.
> 
> SMMU support on Juno is not that interesting because IIRC the stream-ID is the
> same for all the devices. So it is all-or-nothing passthrough.
> 
> For other HW, this may be a useful feature. Yet we would need a way to group
> the devices for passthrough.
> 
> In this context, I would consider it more a feature than a bug because the
> SMMU driver never remotely worked on such HW.

I see. To be honest I wasn't thinking of Juno (I wasn't aware of its
limitations) but of potential genuine situations where stream-ids are
the same for 2 devices. I know it can happen with PCI devices for
instance, although I am aware we don't have PCI passthrough yet. I don't
know if it is possible for it to happen with non-PCI devices but I
wouldn't be surprised if it can.


From xen-devel-bounces@lists.xenproject.org Tue Feb 02 18:37:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 18:37:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80692.147781 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l70YY-0006JU-7I; Tue, 02 Feb 2021 18:37:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80692.147781; Tue, 02 Feb 2021 18:37:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l70YY-0006JN-4Q; Tue, 02 Feb 2021 18:37:42 +0000
Received: by outflank-mailman (input) for mailman id 80692;
 Tue, 02 Feb 2021 18:37:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eq/0=HE=lip6.fr=manuel.bouyer@srs-us1.protection.inumbo.net>)
 id 1l70YW-0006JI-At
 for xen-devel@lists.xen.org; Tue, 02 Feb 2021 18:37:40 +0000
Received: from isis.lip6.fr (unknown [2001:660:3302:283c::2])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2c3b8160-42e0-4667-b88a-7629e24dddfd;
 Tue, 02 Feb 2021 18:37:38 +0000 (UTC)
Received: from asim.lip6.fr (asim.lip6.fr [132.227.86.2])
 by isis.lip6.fr (8.15.2/8.15.2) with ESMTP id 112Ibaqt002576
 for <xen-devel@lists.xen.org>; Tue, 2 Feb 2021 19:37:36 +0100 (CET)
Received: from armandeche.soc.lip6.fr (armandeche [132.227.63.133])
 by asim.lip6.fr (8.15.2/8.14.4) with ESMTP id 112IbaKq013885
 for <xen-devel@lists.xen.org>; Tue, 2 Feb 2021 19:37:36 +0100 (MET)
Received: by armandeche.soc.lip6.fr (Postfix, from userid 20331)
 id 00A467120; Tue,  2 Feb 2021 19:37:35 +0100 (MET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2c3b8160-42e0-4667-b88a-7629e24dddfd
Date: Tue, 2 Feb 2021 19:37:35 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: xen-devel@lists.xen.org
Subject: xenstored file descriptor leak
Message-ID: <20210202183735.GA25046@mail.soc.lip6.fr>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (isis.lip6.fr [132.227.60.2]); Tue, 02 Feb 2021 19:37:36 +0100 (CET)
X-Scanned-By: MIMEDefang 2.78 on 132.227.60.2

Hello,
on NetBSD I'm tracking down an issue where xenstored never closes its
file descriptors for connections to /var/run/xenstored/socket and instead
loops at 100% CPU on these descriptors.

xenstored loops because poll(2) returns a POLLIN event for these
descriptors but they are marked is_ignored = true. 

I'm seeing this with Xen 4.15 and 4.13, and it has also been reported with
4.11 with the latest security patches.
It seems to have started with the patches for XSA-115 (a user reported this
for 4.11).

I've tracked it down to a difference in the poll(2) implementation: it seems
that Linux will return something that is not (POLLIN|POLLOUT) when a
socket is closed, while NetBSD returns POLLIN (this matches NetBSD's
man page).

First, I think there may be a security issue here: even on Linux it should
be possible for a client to cause a socket to go to the is_ignored state
while still sending data, driving xenstored into a 100% CPU loop.
I think this is needed anyway:

--- xenstored_core.c.orig	2021-02-02 18:06:33.389316841 +0100
+++ xenstored_core.c	2021-02-02 19:27:43.761877371 +0100
@@ -397,9 +397,12 @@
 			     !list_empty(&conn->out_list)))
 				*ptimeout = 0;
 		} else {
-			short events = POLLIN|POLLPRI;
-			if (!list_empty(&conn->out_list))
-				events |= POLLOUT;
+			short events = 0;
+			if (!conn->is_ignored) {
+				events |= POLLIN|POLLPRI;
+			        if (!list_empty(&conn->out_list))
+				        events |= POLLOUT;
+			}
 			conn->pollfd_idx = set_fd(conn->fd, events);
 		}
 	}

Now I wonder if, on NetBSD at least, a read error or short read shouldn't
cause the socket to be closed, as with:

@@ -1561,6 +1565,8 @@
 
 bad_client:
 	ignore_connection(conn);
+	/* we don't want to keep this connection alive */
+	talloc_free(conn);
 }
 
 static void handle_output(struct connection *conn)


What do you think?

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 ans d'experience feront toujours la difference


From xen-devel-bounces@lists.xenproject.org Tue Feb 02 18:44:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 18:44:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80696.147799 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l70ee-0007Kc-06; Tue, 02 Feb 2021 18:44:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80696.147799; Tue, 02 Feb 2021 18:43:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l70ed-0007KV-Sn; Tue, 02 Feb 2021 18:43:59 +0000
Received: by outflank-mailman (input) for mailman id 80696;
 Tue, 02 Feb 2021 18:43:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1l70ec-0007KQ-M0
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 18:43:58 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l70eb-0003TV-AX; Tue, 02 Feb 2021 18:43:57 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l70eb-00015b-4V; Tue, 02 Feb 2021 18:43:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=kRrhMXrhYGAVHBtrbarY6P89Mz2xJnk+0P1iItuTPZw=; b=eCMJW1wOdeHzNvtr138Ipygpeb
	gx+WhBUUkaLvOQ0uHQ/uLteo+GwZikaSgKkpEBhwMs1bwH/3yVdPX2QWgSqgb6+EoB89LwR1x3wid
	SJ6z83ypWRvzaeyjn8G84Kc5AdatxR4nmxo/AwO+p20Owazp3JrViKntwCZdThP975RM=;
Subject: Re: [PATCH v3 0/3] Generic SMMU Bindings
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Rahul Singh <Rahul.Singh@arm.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 "Volodymyr_Babchuk@epam.com" <Volodymyr_Babchuk@epam.com>,
 "brian.woods@xilinx.com" <brian.woods@xilinx.com>
References: <alpine.DEB.2.21.2101261435550.2568@sstabellini-ThinkPad-T480s>
 <C094E054-885F-4363-ABF3-E0FB4DDD7A2A@arm.com>
 <alpine.DEB.2.21.2102020937480.29047@sstabellini-ThinkPad-T480s>
 <7ddc6e1b-41ce-37ae-f86e-39893f53a0ec@xen.org>
 <alpine.DEB.2.21.2102021024100.29047@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <cd8dc216-987f-4dd8-88ab-4ee455456f81@xen.org>
Date: Tue, 2 Feb 2021 18:43:55 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2102021024100.29047@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 02/02/2021 18:27, Stefano Stabellini wrote:
> On Tue, 2 Feb 2021, Julien Grall wrote:
>> On 02/02/2021 17:44, Stefano Stabellini wrote:
>>> On Tue, 2 Feb 2021, Rahul Singh wrote:
>>>> Hello Stefano,
>>>>
>>>>> On 26 Jan 2021, at 10:58 pm, Stefano Stabellini <sstabellini@kernel.org>
>>>>> wrote:
>>>>>
>>>>> Hi all,
>>>>>
>>>>> This series introduces support for the generic SMMU bindings to
>>>>> xen/drivers/passthrough/arm/smmu.c.
>>>>>
>>>>> The last version of the series was
>>>>> https://marc.info/?l=xen-devel&m=159539053406643
>>>>>
>>>>> I realize that it is late for 4.15 -- I think it is OK if this series
>>>>> goes in afterwards.
>>>>
>>>> I tested the series on the Juno board and it is working fine.
>>>> While testing it I found one issue in the SMMU driver that is not
>>>> related to this series but is a pre-existing issue in the SMMU driver.
>>>>
>>>> If there is more than one device behind the SMMU and they share the
>>>> same Stream-ID, the SMMU driver creates a new SMR entry without
>>>> checking whether an SMR entry for that Stream-ID is already
>>>> configured.  Because of this I observed stream match conflicts on the
>>>> Juno board.
>>>>
>>>> (XEN) smmu: /iommu@7fb30000: Unexpected global fault, this could be
>>>> serious
>>>> (XEN) smmu: /iommu@7fb30000: 	GFSR 0x00000004, GFSYNR0 0x00000006,
>>>> GFSYNR1 0x00000000, GFSYNR2 0x00000000
>>>>
>>>>
>>>> The two patches below need to be ported to Xen to fix the issue in
>>>> the Linux driver.
>>>>
>>>> https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/drivers/iommu/arm-smmu.c?h=linux-5.8.y&id=1f3d5ca43019bff1105838712d55be087d93c0da
>>>> https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/drivers/iommu/arm-smmu.c?h=linux-5.8.y&id=21174240e4f4439bb8ed6c116cdbdc03eba2126e
>>>
>>>
>>> Good catch and thanks for the pointers! Do you have any interest in
>>> backporting these two patches or should I put them on my TODO list?
>>>
>>> Unrelated to who does the job, we should discuss if it makes sense to
>>> try to fix the bug for 4.15. The patches don't seem trivial so I am
>>> tempted to say that it might be best to leave the bug unfixed for 4.15
>>> and fix it later.
>>
>> SMMU support on Juno is not that interesting because IIRC the stream-ID is the
>> same for all the devices. So it is all-or-nothing passthrough.
>>
>> For other HW, this may be a useful feature. Yet we would need a way to group
>> the devices for passthrough.
>>
>> In this context, I would consider it more a feature than a bug because the
>> SMMU driver never remotely worked on such HW.
> 
> I see. To be honest I wasn't thinking of Juno (I wasn't aware of its
> limitations) but of potential genuine situations where stream-ids are
> the same for 2 devices. I know it can happen with PCI devices for
> instance, although I am aware we don't have PCI passthrough yet. I don't
> know if it is possible for it to happen with non-PCI devices but I
> wouldn't be surprised if it can.

I merely pointed out Juno because this is where the discussion started. 
Although my conclusion wasn't solely based on this system, nor on PCI devices.

It was based on the fact that this could never have worked with the 
current SMMU driver. So this is not a regression but rather an improvement 
of the driver to support passthrough for devices using the same stream-ID.

At this stage of the release, I would only consider trivial improvements 
to be merged.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Feb 02 19:11:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 19:11:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80701.147810 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l714o-0001qp-5H; Tue, 02 Feb 2021 19:11:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80701.147810; Tue, 02 Feb 2021 19:11:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l714o-0001qi-24; Tue, 02 Feb 2021 19:11:02 +0000
Received: by outflank-mailman (input) for mailman id 80701;
 Tue, 02 Feb 2021 19:11:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hKoN=HE=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l714n-0001qd-3F
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 19:11:01 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5fcb1b71-6134-4c64-8799-d21583c05e78;
 Tue, 02 Feb 2021 19:10:59 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5fcb1b71-6134-4c64-8799-d21583c05e78
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612293059;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=wHP3tAWKLhgeTGf6oNIIFZkRbXg7nIN94ZGYk8lgsYY=;
  b=eirMNJHIMMZs9BaSQ7G20/fETMs6k2Rw3ge3T3iFdfODWjtXOx2a1IYc
   cFMS07F+9FtuNTBWNb/Zxe9dcl2lCQZ6MRX4MJsVVa10jyPdSYikM7t9f
   PCIlmGJEeK2wyX7oVw/YC9cXZm0fq06fSmwbArqkBga6Jc6mW1lF/5lvw
   o=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: Z/fiUyJ6WxUu3WkwTjrMezJ3Auls+vSANv7qeldeoLPX3cFLSUxflO+3SXCuHjvaTvm50CeKME
 Pq3tDfc+mICtijQrPrd7Vt6I3T62B8+7vh2tsomNsMxrYknAy0d89czdgZIqshfZfMFzQuV4Ot
 io+Ofks6YgP0l6Su8D6dOnW+YL1UTae5w2m2tEhZ8fNxJLAQ1Cd110irSDJ/T0/6sTRJDXSawc
 TP4U5hlajnZ8mt5vhUH2T4rHVy4pIrFcfUxmOUCJ2alj6zCKk31lZxil0955VgIQLdoD1wnCyQ
 1As=
X-SBRS: 5.1
X-MesageID: 36400885
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,396,1602561600"; 
   d="scan'208";a="36400885"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
	"Volodymyr Babchuk" <Volodymyr_Babchuk@epam.com>, Oleksandr
	<olekstysh@gmail.com>
Subject: [PATCH for-4.15] tools/tests: Introduce a test for acquire_resource
Date: Tue, 2 Feb 2021 19:09:37 +0000
Message-ID: <20210202190937.30206-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

For now, simply try to map 40 frames of grant table.  This catches most of the
basic errors with resource sizes found and fixed through the 4.15 dev window.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Ian Jackson <iwj@xenproject.org>
CC: Wei Liu <wl@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>
CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
CC: Oleksandr <olekstysh@gmail.com>

Fails against current staging:

  XENMEM_acquire_resource tests
  Test x86 PV
    d7: grant table
      Fail: Map 7 - Argument list too long
  Test x86 PVH
    d8: grant table
      Fail: Map 7 - Argument list too long

The fix has already been posted:
  [PATCH v9 01/11] xen/memory: Fix mapping grant tables with XENMEM_acquire_resource

and the fixed run is:

  XENMEM_acquire_resource tests
  Test x86 PV
    d7: grant table
  Test x86 PVH
    d8: grant table

ARM folk: would you mind testing this?  I'm pretty sure the create parameters
are suitable, but I don't have any way to test this.

I've got more plans for this, but insufficient time right now.
---
 tools/tests/Makefile                 |   1 +
 tools/tests/resource/.gitignore      |   1 +
 tools/tests/resource/Makefile        |  40 ++++++++++
 tools/tests/resource/test-resource.c | 138 +++++++++++++++++++++++++++++++++++
 4 files changed, 180 insertions(+)
 create mode 100644 tools/tests/resource/.gitignore
 create mode 100644 tools/tests/resource/Makefile
 create mode 100644 tools/tests/resource/test-resource.c

diff --git a/tools/tests/Makefile b/tools/tests/Makefile
index fc9b715951..c45b5fbc1d 100644
--- a/tools/tests/Makefile
+++ b/tools/tests/Makefile
@@ -2,6 +2,7 @@ XEN_ROOT = $(CURDIR)/../..
 include $(XEN_ROOT)/tools/Rules.mk
 
 SUBDIRS-y :=
+SUBDIRS-y := resource
 SUBDIRS-$(CONFIG_X86) += cpu-policy
 SUBDIRS-$(CONFIG_X86) += mce-test
 ifneq ($(clang),y)
diff --git a/tools/tests/resource/.gitignore b/tools/tests/resource/.gitignore
new file mode 100644
index 0000000000..4872e97d4b
--- /dev/null
+++ b/tools/tests/resource/.gitignore
@@ -0,0 +1 @@
+test-resource
diff --git a/tools/tests/resource/Makefile b/tools/tests/resource/Makefile
new file mode 100644
index 0000000000..8a3373e786
--- /dev/null
+++ b/tools/tests/resource/Makefile
@@ -0,0 +1,40 @@
+XEN_ROOT = $(CURDIR)/../../..
+include $(XEN_ROOT)/tools/Rules.mk
+
+TARGET := test-resource
+
+.PHONY: all
+all: $(TARGET)
+
+.PHONY: run
+run: $(TARGET)
+	./$(TARGET)
+
+.PHONY: clean
+clean:
+	$(RM) -f -- *.o .*.d .*.d2 $(TARGET)
+
+.PHONY: distclean
+distclean: clean
+	$(RM) -f -- *~
+
+.PHONY: install
+install: all
+
+.PHONY: uninstall
+uninstall:
+
+CFLAGS += -Werror -D__XEN_TOOLS__
+CFLAGS += $(CFLAGS_xeninclude)
+CFLAGS += $(CFLAGS_libxenctrl)
+CFLAGS += $(CFLAGS_libxenforeignmemory)
+CFLAGS += $(APPEND_CFLAGS)
+
+LDFLAGS += $(LDLIBS_libxenctrl)
+LDFLAGS += $(LDLIBS_libxenforeignmemory)
+LDFLAGS += $(APPEND_LDFLAGS)
+
+test-resource: test-resource.o
+	$(CC) $(LDFLAGS) -o $@ $<
+
+-include $(DEPS_INCLUDE)
diff --git a/tools/tests/resource/test-resource.c b/tools/tests/resource/test-resource.c
new file mode 100644
index 0000000000..81a2a5cd12
--- /dev/null
+++ b/tools/tests/resource/test-resource.c
@@ -0,0 +1,138 @@
+#include <err.h>
+#include <errno.h>
+#include <error.h>
+#include <stdio.h>
+#include <string.h>
+#include <sys/mman.h>
+
+#include <xenctrl.h>
+#include <xenforeignmemory.h>
+#include <xendevicemodel.h>
+#include <xen-tools/libs.h>
+
+static unsigned int nr_failures;
+#define fail(fmt, ...)                          \
+({                                              \
+    nr_failures++;                              \
+    (void)printf(fmt, ##__VA_ARGS__);           \
+})
+
+static xc_interface *xch;
+static xenforeignmemory_handle *fh;
+static xendevicemodel_handle *dh;
+
+static void test_gnttab(uint32_t domid, unsigned int nr_frames)
+{
+    xenforeignmemory_resource_handle *res;
+    void *addr = NULL;
+    size_t size;
+    int rc;
+
+    printf("  d%u: grant table\n", domid);
+
+    rc = xenforeignmemory_resource_size(
+        fh, domid, XENMEM_resource_grant_table,
+        XENMEM_resource_grant_table_id_shared, &size);
+    if ( rc )
+        return fail("    Fail: Get size: %d - %s\n", errno, strerror(errno));
+
+    if ( (size >> XC_PAGE_SHIFT) != nr_frames )
+        return fail("    Fail: Get size: expected %u frames, got %zu\n",
+                    nr_frames, size >> XC_PAGE_SHIFT);
+
+    res = xenforeignmemory_map_resource(
+        fh, domid, XENMEM_resource_grant_table,
+        XENMEM_resource_grant_table_id_shared, 0, size >> XC_PAGE_SHIFT,
+        &addr, PROT_READ | PROT_WRITE, 0);
+    if ( !res )
+        return fail("    Fail: Map %d - %s\n", errno, strerror(errno));
+
+    rc = xenforeignmemory_unmap_resource(fh, res);
+    if ( rc )
+        return fail("    Fail: Unmap %d - %s\n", errno, strerror(errno));
+}
+
+static void test_domain_configurations(void)
+{
+    static struct test {
+        const char *name;
+        struct xen_domctl_createdomain create;
+    } tests[] = {
+#if defined(__x86_64__) || defined(__i386__)
+        {
+            .name = "x86 PV",
+            .create = {
+                .max_vcpus = 2,
+                .max_grant_frames = 40,
+            },
+        },
+        {
+            .name = "x86 PVH",
+            .create = {
+                .flags = XEN_DOMCTL_CDF_hvm,
+                .max_vcpus = 2,
+                .max_grant_frames = 40,
+                .arch = {
+                    .emulation_flags = XEN_X86_EMU_LAPIC,
+                },
+            },
+        },
+#elif defined(__aarch64__) || defined(__arm__)
+        {
+            .name = "ARM",
+            .create = {
+                .flags = XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap,
+                .max_vcpus = 2,
+                .max_grant_frames = 40,
+            },
+        },
+#endif
+    };
+
+    for ( unsigned int i = 0; i < ARRAY_SIZE(tests); ++i )
+    {
+        struct test *t = &tests[i];
+        uint32_t domid = 0;
+        int rc;
+
+        printf("Test %s\n", t->name);
+
+        rc = xc_domain_create(xch, &domid, &t->create);
+        if ( rc )
+        {
+            if ( errno == EINVAL || errno == EOPNOTSUPP )
+                printf("  Skip: %d - %s\n", errno, strerror(errno));
+            else
+                fail("  Domain create failure: %d - %s\n",
+                     errno, strerror(errno));
+            continue;
+        }
+
+        test_gnttab(domid, t->create.max_grant_frames);
+
+        rc = xc_domain_destroy(xch, domid);
+        if ( rc )
+            fail("  Failed to destroy domain: %d - %s\n",
+                 errno, strerror(errno));
+    }
+}
+
+int main(int argc, char **argv)
+{
+    printf("XENMEM_acquire_resource tests\n");
+
+    xch = xc_interface_open(NULL, NULL, 0);
+    fh = xenforeignmemory_open(NULL, 0);
+    dh = xendevicemodel_open(NULL, 0);
+
+    if ( !xch )
+        err(1, "xc_interface_open");
+    if ( !fh )
+        err(1, "xenforeignmemory_open");
+    if ( !dh )
+        err(1, "xendevicemodel_open");
+
+    test_domain_configurations();
+
+    return !!nr_failures;
+}
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Feb 02 19:21:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 19:21:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80703.147823 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l71El-0002td-5v; Tue, 02 Feb 2021 19:21:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80703.147823; Tue, 02 Feb 2021 19:21:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l71El-0002tW-1t; Tue, 02 Feb 2021 19:21:19 +0000
Received: by outflank-mailman (input) for mailman id 80703;
 Tue, 02 Feb 2021 19:21:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1l71Ej-0002tR-Dw
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 19:21:17 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l71Ei-000461-1U; Tue, 02 Feb 2021 19:21:16 +0000
Received: from 54-240-197-231.amazon.com ([54.240.197.231]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l71Eh-0003nu-Ns; Tue, 02 Feb 2021 19:21:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:References:Cc:To:From:Subject;
	bh=tDuQlJufbyrkxG6R+9PVQYZDZUQUiOSapiecrWdniUU=; b=W4VH4UaNqTVeE4v+oqQ7pJOkJx
	m0MFcVHbxa2rZ2qyfAhc0AI37K/abPeez97p1zlQ2IziRqftW27aSH5VeFJloPPa9LSPMpXCjw/uv
	SuOQ53YCnQgat9lrGGXxT/3cB1xIZ1PNb8YJJi3qLzS6lgWx1i5COuJs7zFL2wpgK/1Q=;
Subject: Re: [PATCH] xen/arm: domain_build: Ignore device nodes with invalid
 addresses
From: Julien Grall <julien@xen.org>
To: Elliott Mitchell <ehem+xen@m5p.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <YBmQQ3Tzu++AadKx@mattapan.m5p.com>
 <a422c04c-f908-6fb6-f2de-fea7b18a6e7d@xen.org>
Message-ID: <b6d342f8-c833-db88-9808-cdc946999300@xen.org>
Date: Tue, 2 Feb 2021 19:21:14 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <a422c04c-f908-6fb6-f2de-fea7b18a6e7d@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit



On 02/02/2021 18:12, Julien Grall wrote:
> Hi,
> 
> On 02/02/2021 17:47, Elliott Mitchell wrote:
>> The handle_device() function has been returning failure upon
>> encountering a device address which was invalid.  A device tree which
>> had such an entry has now been seen in the wild.  As it causes no
>> failures to simply ignore the entries, ignore them.
>
>> Signed-off-by: Elliott Mitchell <ehem+xenn@m5p.com>
>>
>> ---
>> I'm starting to suspect there are an awful lot of places in the various
>> domain_build.c files which should simply ignore errors.  This is now the
>> second place I've encountered in 2 months where ignoring errors was the
>> correct action.
> 
> Right, as a counterpoint, we have been running Xen on Arm HW for several 
> years now and this is the first time I have heard about an issue parsing 
> the DT. So while I appreciate that you are eager to run Xen on the RPI...
> 
>> I know failing in case of error is an engineer's
>> favorite approach, but there seem to be an awful lot of harmless failures
>> causing panics.
>>
>> This started as the thread "[RFC PATCH] xen/arm: domain_build: Ignore
>> empty memory bank".  Now it seems clear the correct approach is to simply
>> ignore these entries.
> 
> ... we first need to fully understand the issues. Here are a few questions:
>     1) Can you provide more information on why you believe the address is 
> invalid?
>     2) How does Linux use the node?
>     3) Is it happening with all the RPI DTs? If not, what are the 
> differences?

So I had another look at the device-tree you provided earlier on. The 
node is the following (copied directly from the DTS):

&pcie0 {
         pci@1,0 {
                 #address-cells = <3>;
                 #size-cells = <2>;
                 ranges;

                 reg = <0 0 0 0 0>;

                 usb@1,0 {
                         reg = <0x10000 0 0 0 0>;
                         resets = <&reset RASPBERRYPI_FIRMWARE_RESET_ID_USB>;
                 };
         };
};

pcie0: pcie@7d500000 {
    compatible = "brcm,bcm2711-pcie";
    reg = <0x0 0x7d500000  0x0 0x9310>;
    device_type = "pci";
    #address-cells = <3>;
    #interrupt-cells = <1>;
    #size-cells = <2>;
    interrupts = <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>,
                 <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>;
    interrupt-names = "pcie", "msi";
    interrupt-map-mask = <0x0 0x0 0x0 0x7>;
    interrupt-map = <0 0 0 1 &gicv2 GIC_SPI 143 IRQ_TYPE_LEVEL_HIGH>;
    msi-controller;
    msi-parent = <&pcie0>;

    ranges = <0x02000000 0x0 0xc0000000 0x6 0x00000000
              0x0 0x40000000>;
    /*
     * The wrapper around the PCIe block has a bug
     * preventing it from accessing beyond the first 3GB of
     * memory.
     */
    dma-ranges = <0x02000000 0x0 0x00000000 0x0 0x00000000
                  0x0 0xc0000000>;
    brcm,enable-ssc;
};

The interpretation of "reg" depends on the context. In this case, we are 
trying to interpret it as a memory address from the CPU PoV when it has a 
different meaning (I am not exactly sure what).

In fact, you are lucky that Xen doesn't manage to interpret it. Xen 
should really stop trying to look for regions to map when it discovers a 
PCI bus. I wrote a quick hack patch that should ignore it:

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 374bf655ee34..937fd1e387b7 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1426,7 +1426,7 @@ static int __init handle_device(struct domain *d, struct dt_device_node *dev,

  static int __init handle_node(struct domain *d, struct kernel_info *kinfo,
                                struct dt_device_node *node,
-                              p2m_type_t p2mt)
+                              p2m_type_t p2mt, bool pci_bus)
  {
      static const struct dt_device_match skip_matches[] __initconst =
      {
@@ -1532,9 +1532,14 @@ static int __init handle_node(struct domain *d, struct kernel_info *kinfo,
                "WARNING: Path %s is reserved, skip the node as we may re-use the path.\n",
                 path);

-    res = handle_device(d, node, p2mt);
-    if ( res)
-        return res;
+    if ( !pci_bus )
+    {
+        res = handle_device(d, node, p2mt);
+        if ( res )
+            return res;
+
+        pci_bus = dt_device_type_is_equal(node, "pci");
+    }

      /*
       * The property "name" is used to have a different name on older FDT
@@ -1554,7 +1559,7 @@ static int __init handle_node(struct domain *d, struct kernel_info *kinfo,

      for ( child = node->child; child != NULL; child = child->sibling )
      {
-        res = handle_node(d, kinfo, child, p2mt);
+        res = handle_node(d, kinfo, child, p2mt, pci_bus);
          if ( res )
              return res;
      }
@@ -2192,7 +2197,7 @@ static int __init prepare_dtb_hwdom(struct domain *d, struct kernel_info *kinfo)

      fdt_finish_reservemap(kinfo->fdt);

-    ret = handle_node(d, kinfo, dt_host, default_p2mt);
+    ret = handle_node(d, kinfo, dt_host, default_p2mt, false);
      if ( ret )
          goto err;

A less hackish possibility would be to modify dt_number_of_address() to 
return 0 when the device is a child of a PCI bus.

Stefano, do you have any opinions?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Feb 02 20:20:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 Feb 2021 20:20:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80719.147844 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l729o-0000L4-M1; Tue, 02 Feb 2021 20:20:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80719.147844; Tue, 02 Feb 2021 20:20:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l729o-0000Kx-Is; Tue, 02 Feb 2021 20:20:16 +0000
Received: by outflank-mailman (input) for mailman id 80719;
 Tue, 02 Feb 2021 20:20:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hKoN=HE=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l729m-0000Ks-IV
 for xen-devel@lists.xenproject.org; Tue, 02 Feb 2021 20:20:14 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 48f434b9-a5a7-473b-ae9a-0ab9bc10e455;
 Tue, 02 Feb 2021 20:20:13 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 48f434b9-a5a7-473b-ae9a-0ab9bc10e455
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612297213;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=QgP6yjKg+mlauxL/HyT5hLy0ghIAYuihbD/tCZr3B7c=;
  b=MJs/Dej/pwdYiMxC9/IMPNBorKffCLdiyQkT5WKVTrfcaiEp0pw3kqXK
   hhf5Denl2BYuDKKMdno6D2SYHopN7PH62d666PFHp94nHKe4V0M8WtbxI
   RDf+eKfzNRTblmuxZ0z+nT+ll+OV6megEe57qACjsD628OKV2H4QwPwi1
   s=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: 05MAMcZDHTXpvnN7QUdfKdGQCNESx24zxLjArGvTN5AtaHhxA9aaUqYruoy5ION8STlVJGXNAm
 u2Jy+pM9+jRKiaoT+6sQpR5X90cmiwxK0wjxuRb4xlHGvPClFNdDnT5sr46Y9vcLuSL85z9swo
 fVgDaz8MKFJ2GEeDgXWp+u6Rpzf396vl9k2f1jCIUp2LhtpPhSdhBPLvAO4dJvAU7Au7m+vdA3
 Au8dWi2h1KfpLfwbzSsaCblsb+zNBU243jXeBBabsJTRVlEO7QJ74enJ0tW56WV5RilGtzr3A4
 3zo=
X-SBRS: 5.2
X-MesageID: 36361996
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,396,1602561600"; 
   d="scan'208";a="36361996"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Q/eG/z9QhHDlDYrgXuln820Xwt/gwyo+0lgKYryQxmtJ0FxgcGhzKmLcVAKlwiuBDTiPT9BKRYNVDXL1Wj6Q/738TgKpRNSNyUNATLdeVFmiu4xc2CQbzC09o3340Vx1Oa0OONaMsnd91K4Vjme0ez63P13h3yhLRbzRe+M5o20mpTDst1aHQfGNOWVijQwAs3U5zRt8dmQvRBg8Wo2VOGONw/NOPMFaxHNjSDT3kX2/V+ROnVWMtSflaO2TdyboKwKkpcNGpOBeBm6wKtZYitG0K+O39RGxhuvx9UK91xtFObxGN72bERRnwgC1PEq84pUBxbjLaLuZ87vVigIRrw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=jWFxTOYSry2yTzeXDwa9CxCFiIJ9+9gB22vrE95pGKA=;
 b=LXcbxAf9jThKZA34PqxDplemnPK22Xb3XPLfZ4U+MjgfqTxlhBF/NFMx+iIYvdWHby+KRdG5RS6cYr3AbUmN4/6xxsEMSQjO31VoFRDtwUECeXTGIPISkqTPh3yo1pslIUOcDYeph/JzbjVQMUMttYwKqewG0cURFY23b4M71H5dy45GKji1bY/SAGI5VpD6W55YC9YIkshFT2btj4oL1zZz7lHcFuylkCF4P1Zs9EOC/QL6yzT5nMgDo7I3gPrYu8i0s6ur3Zvuv+heGOCRDV+Zyzd1dxSA4AAuwg9t6fU4xr78yVuhWzp3+pPlURIegB6+4VzrairjZ2BX5Bqy9w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=jWFxTOYSry2yTzeXDwa9CxCFiIJ9+9gB22vrE95pGKA=;
 b=Q9mZz444Z3CTa32dmI3FD8kkWsHjiqWh6bgX/Wo6THyFjxnwF5nt36knagF33a2/rkiE+tsMaOlCGJSYtAah8FrLP88xV6N0EvjYvmpA1FO+V7Vxj+4YmSHlBCpzO/Fnq5gniACIbhdUCZp5vqpF1c2LbvN6hxLbnMquCSGSSu0=
Subject: Re: [PATCH v9 00/11] acquire_resource size and external IPT
 monitoring
To: Ian Jackson <iwj@xenproject.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>, Jun
 Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>,
	=?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>, Tamas
 K Lengyel <tamas@tklengyel.com>
References: <20210201232703.29275-1-andrew.cooper3@citrix.com>
 <24601.17287.280124.602809@mariner.uk.xensource.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <e2cecfe6-2cd0-0d92-8f17-0283bc1f8503@citrix.com>
Date: Tue, 2 Feb 2021 20:19:28 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
In-Reply-To: <24601.17287.280124.602809@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0453.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:e::33) To BYAPR03MB4728.namprd03.prod.outlook.com
 (2603:10b6:a03:13a::24)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: d6d2bdbf-4420-4b4e-f869-08d8c7b7dbeb
X-MS-TrafficTypeDiagnostic: BYAPR03MB3975:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB39753D99890C2186D83FB660BAB59@BYAPR03MB3975.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 3MvCagPPMUYOWZyAuBOjAHGVki6SmvSZY3Rr7pZgZDWt9q0Rt6ZZuycBiKnhYmJW0K2acmvxSeXH53HUCr778ePerWD2p2hCtpQAXHjCsKovP8WfCmC8pKqliyj8Gt/ggHkXhROpeFaKoD33Je82uZpIsif7rp/hyzOJnD9WSe+emlfcFSvLG0Foi1fAZ7RsLjSZMyfQ95MYMqAppWCVzjAST3wsszMplbA70NcTZmDiSgf9aCWD+XA5VFJcPf9QYvza7pKJa+670p95D8HrqWWwkfRIjBg+/9wpZJsA6RXoGvzBkL/3awF6z16Hkst4gwi7zRQFohRVuxZG/5ut1j/mBtGE1Y4JXNl1/m4h2MkyoqUl15Vc7f9IPrWrfFcifUGn85jzk0Zte/eMgnsOjvuSGQ9ApDq+KQpbP9qGOm13kT0vRKinEIkPb64ZhQttbBit5GbE9HUmXlKvkqyedpi3yzZ8hdkAWViehV4qlw429xdsp+fb777TG4y5C1YRd8/DvVasn55jKvy79rH1lw7/2wDVCcmHmNL6n6LRtcnW1xeIG2qLqUyMg5cpAbPnEvnCvlS1sN/K4DoXw/VgkFfeMX4rGJwk1fFpLkIhax4=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB4728.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(366004)(346002)(136003)(39860400002)(396003)(376002)(16526019)(956004)(26005)(8936002)(54906003)(186003)(5660300002)(16576012)(36756003)(31686004)(316002)(6916009)(86362001)(53546011)(8676002)(2906002)(2616005)(66556008)(66476007)(31696002)(83380400001)(6666004)(6486002)(4326008)(66946007)(478600001)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?NVBEZWs2UHJ0cTNUQU9UbGFzNmxHWHl0NWl3NnNWMmdGRElLUWJWek8wbEI3?=
 =?utf-8?B?eCtPQWRwZTdaQkxwaGFOdEpSUllnczZ2VVBvSVl1dno0bUNQWFlMSXRabHh2?=
 =?utf-8?B?WmM1c2ZJcnQxZ1FvL3Y1T3gxZ2ZJRjUvMjQzTjY1bi9XanlNQ21CQ1lKN3Ix?=
 =?utf-8?B?SURkL05CdjBwZFEvbENYdjJ1T3o5SkdSUWFnbll4UVQ1c2dmVm5LbFFpZ3h2?=
 =?utf-8?B?clBpMFhuT2RTcDZrNUU0UmpwdGx2Q0srbE5EelovSk5PWEp2K2JHdDNObCtt?=
 =?utf-8?B?MVZUNVdYWWQ0Ymk5aHA1WFJwS04wMkpON2dFRllyQzd6Wk9vY1JURDgxQVhC?=
 =?utf-8?B?N3Uvck5IQXhFVTBMa2F0WnBQQkpsRHBEYTVKSzBtdGYrNm8rMVVtRkp3aG1o?=
 =?utf-8?B?MVhjOHlOeXVZb2dCajFEUVVGbDFYUUFIVEZwNzErSWZhSUtoK0xsVGV4b2ll?=
 =?utf-8?B?NU44QU9iZ0x2SWtWNVMvYnZWWTB5YlNGdU9oZnVCWThUL3dQaDZKRjhuMm1J?=
 =?utf-8?B?UlVnMVpmdXkrVDgxSC8rT3RteUNyTVprVzJWL0x0QkM0eXdhaUhMc3VwM0tk?=
 =?utf-8?B?amFCYmNpejRtNmMwNGlGMmFNaTI0d3h0TGJFMHpWZi9GMm92STN1OVp1TEQy?=
 =?utf-8?B?aWM4Q2wvSC9qd2k3cE0rY2NnNDdxRnAvdlEyM25pekhDYjFIQmNNSUd3c2lw?=
 =?utf-8?B?WVFoU0pGK2E1SHcvVkIxbkxhaU9JMkNyMmFKZUd0bFg0MXduNGZNeXc2Rkpv?=
 =?utf-8?B?OWUzVGQzNWF6ck9uQVpKTGROekh4VGJBUlBUWmg5K1JsV2tvanBkZ29CK3hL?=
 =?utf-8?B?RnpkQk5nZWl2TjBtUDBuN1FRWklZeGJROUdtakhnWFBjeU1maUJyODlPVEth?=
 =?utf-8?B?b2FGelI1M3NZOHNnQjE1ZjgyT1ppa0VQeklCSmR1TVRRRGV2cDBuMHJEektX?=
 =?utf-8?B?SU5tUy9kZFMvN0tWVEQ2cTJaTSt0WDRhdVFyWGpmQnU0QU1XejZ3MXY1V1lN?=
 =?utf-8?B?dVFmb0wvRXBOMjQ2Ny9TdkNHYjljT3ZJR1VWMGNHTG8rY0xJQ0tKZ1kxcG1q?=
 =?utf-8?B?NkdSN3QxeTNYYmlTUE1lb3lHMi81MXZacUREY25ER2w3YVhPQ0Z2em1NSk5L?=
 =?utf-8?B?aWlQdDNxSjlyNVNPcXljaVY2S0NaSmNXUmhjZjhWaHVsMkl2VmVtdFA5bVAz?=
 =?utf-8?B?OXNGcGh6Ui9ZZ29GNjFlYXZ3UUxFbFNYdUZXRXh6dGZDVjZGbGJObUp3MzFX?=
 =?utf-8?B?VnNtWVFMczF6Vm93aDBTR1FZSUlUNVdnMGphak9OUktmN0xlb3I1V0dkZzh4?=
 =?utf-8?B?ajhXYThvQUUyOHRvc0RHMzZSMWYydHA1ZzFOVVZ2OVZWZGdDRDdBUkY2QXVH?=
 =?utf-8?B?aDJDMDY2WnR2ZDNXdFFtWmRQd2dJWStlR1VHSXFiaU8zeXVManNZQ3hxZDVu?=
 =?utf-8?Q?mLVdBvYI?=
X-MS-Exchange-CrossTenant-Network-Message-Id: d6d2bdbf-4420-4b4e-f869-08d8c7b7dbeb
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB4728.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Feb 2021 20:19:35.9140
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: W8LGu65JAoL3kv3fASM6vTaQJfu6+y4pZSndz1dQ00YPg9RcUDiiuzA/J2RaiMIUIBs2qTojqQ0VdBDHr7rZp07aVDd61i1pLRk2JjRd5vs=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB3975
X-OriginatorOrg: citrix.com

On 02/02/2021 12:20, Ian Jackson wrote:
> Andrew Cooper writes ("[PATCH v9 00/11] acquire_resource size and external IPT monitoring"):
> ...
>> Therefore, I'd like to request a release exception.
> Thanks for this writeup.
>
> There is discussion here of the upside of granting an exception, which
> certainly seems substantial enough to give this serious consideration.
>
>> It [is] fairly isolated in terms of interactions with the rest of
>> Xen, so the chances of a showstopper affecting other features are
>> very slim.
> This is encouraging (optimistic, even) but very general.  I would like
> to see a frank and detailed assessment of the downside risks, ideally
> based on analysis of the individual patches.
>
> When I say a "frank and detailed assessment" I'm hoping to have a list
> of the specific design and code changes that pose a risk to non-IPT
> configurations, in decreasing order of risk.
>
> For each one there should be a brief discussion of the measures that
> exist to control that risk (eg, additional review, additional
> testing), and a characterisation of the resulting risk (both in terms
> of likelihood and severity of the resulting bug).
>
> All risks that would come to a diligent reviewer's mind should be
> mentioned and explicitly dealt with, even if it is immediately clear
> that they are not a real risk.
>
> Do you think that would be feasible ?  We would want to make a
> decision ASAP so it would have to be done quickly too - in the next
> few days and certainly by the end of the week.

Honestly, I think this is an unreasonably large paperwork expectation,
particularly for changes this clearly isolated in terms of functionality.

I'm going to explicitly disregard build/compile issues because we're not
even at code freeze or -rc1 yet, with multiple weeks yet before any
potential release, and loads of tooling.


Patch 2 adds a new domain creation parameter, which is an internal
tools/xen interface.  Default is off, and it needs an explicit opt-in
(patch 3), so it will get all the testing it needs in an OSSTest smoke run.

This patch does introduce one new use of a preexisting refcounting
pattern currently under discussion.  It may leak memory in theoretical
circumstances, not practical ones.  Work to figure out how to unbreak
this pattern is in progress, orthogonal, and needs applying uniformly.

Patch 4 adds a new resource type, which is an API/ABI with userspace.
It is a new type/index so has no current users.

Patch 5 adds enumeration for the IPT feature in hardware, as well as
context switching logic.  All context switching changes are behind an
opt-in flag, so a smoke run will be sufficient to prove no adverse
interaction in !vmtrace case.

Patch 6 adds a new domctl and subops for controlling vmtrace.  All brand
new functionality with no users, and bounded by the opt-ins from patch 2
and 5.

Patch 7 adds libxc library functions wrapping the domctl of patch 6.  No
users.

Patch 8 is example code demonstrating how to use all of the new
functionality.  It is built, but not installed.

Patch 9 extends the existing VMFork feature to cope with VMs configured
with this new functionality.  It is a no-op for regular VMs.

Patch 10 extends vm_event requests with additional optional metadata
about the tail pointer of data in the vmtrace buffer.  Doesn't alter the
behaviour for regular VMs.

Patch 11 extends vm_event responses with an optional request to reset
the vmtrace buffer position.  No users, and a no-op for regular VMs.


All of this new functionality is off-by-default and needs an explicit
opt-in for any behavioural changes to occur.  While there is no
guarantee that the implementation of the new functionality is perfect,
the development of it has found and fixed a whole slew of bugs elsewhere
in Xen, and the new functionality does have extensive testing itself.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 00:04:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 00:04:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80754.147885 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l75eu-0005LN-Dx; Wed, 03 Feb 2021 00:04:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80754.147885; Wed, 03 Feb 2021 00:04:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l75eu-0005LG-B2; Wed, 03 Feb 2021 00:04:36 +0000
Received: by outflank-mailman (input) for mailman id 80754;
 Wed, 03 Feb 2021 00:04:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qRRE=HF=apertussolutions.com=dpsmith@srs-us1.protection.inumbo.net>)
 id 1l75et-0005LB-6g
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 00:04:35 +0000
Received: from sender4-of-o51.zoho.com (unknown [136.143.188.51])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 542bc3b4-139e-467d-9b18-7e8b3d4304bf;
 Wed, 03 Feb 2021 00:04:33 +0000 (UTC)
Received: from sisyou.hme. (c-73-129-147-140.hsd1.md.comcast.net
 [73.129.147.140]) by mx.zohomail.com
 with SMTPS id 1612310661678893.5908610764848;
 Tue, 2 Feb 2021 16:04:21 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 542bc3b4-139e-467d-9b18-7e8b3d4304bf
ARC-Seal: i=1; a=rsa-sha256; t=1612310664; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=JCSq1DGmhMj0Djic194oFkeMu9Jl4qOQc02x+VBPH8o8Ov/nR/yK0VszGXhFbO28KqmOcQkmLe+jCS5axWqlFvQER6yiPn0J9HSSM0HXYtUJ5EcrgH9pel/b/0VO6Fk+tuSEAWOhJUcAqtAc8zWz9v9aahO3SYy4KWyq8DHauT4=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1612310664; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:MIME-Version:Message-ID:Subject:To; 
	bh=Ia2V4e6myGKZ7WbhSc4tiAaaHWQaNTe8d6gesk3Qs+Q=; 
	b=RlMccXACklaUR0l9Wo/tkQyEgJxu14oxRB9ZUzge0oevR2L98PKCh7w0f+qwFi21JdWWPb+fxh8L9B180Q3kBN9lKxqRueo07fmi8mKKyW1PobX3/8sKqBpBmSPjED5tUL2qEZsbhgaJMN3YQIPjkEV6Mx0Ja78QQ3oAUDk0FqY=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com> header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1612310664;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=From:To:Cc:Message-ID:Subject:Date:MIME-Version:Content-Type:Content-Transfer-Encoding;
	bh=Ia2V4e6myGKZ7WbhSc4tiAaaHWQaNTe8d6gesk3Qs+Q=;
	b=lf2ecu0SKg+ZJPiz9KEjdsq15PX9IFEhJQDUg/S1VjJH8jGvchEPDIYku6VtYHpV
	LNrLcy5x51Ta/WgQuKnAJIHiak6FQVCTnIKCVCicUzrUU32oDkSibX+yPqhLmy4JUEG
	Rk516tAKi1w7TksP84Qxq34xbFnRi/hT+qlDIOvU=
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
To: xen-devel@lists.xenproject.org
Cc: christopher.w.clark@gmail.com,
	andrew.cooper3@citrix.com,
	stefano.stabellini@xilinx.com,
	jgrall@amazon.com,
	iwj@xenproject.org,
	wl@xen.org,
	george.dunlap@citrix.com,
	jbeulich@suse.com,
	persaur@gmail.com,
	adam.schwalm@starlab.io
Message-ID: <20210203000952.31695-1-dpsmith@apertussolutions.com>
Subject: [RFC PATCH v2] docs/design: boot domain device tree design
Date: Tue,  2 Feb 2021 19:09:52 -0500
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
X-ZohoMailClient: External

This is a Request For Comments on the adoption of Device Tree as the
format for the Launch Control Module as described in the previously
posted DomB RFC.

For RFC purposes, a rendered copy of this file can be found here:
https://drive.google.com/file/d/1l3fo4FylvZCQs1V00DcwifyLjl5Jw3W8/view?usp=sharing

Details on the DomB boot domain can be found on Xen wiki:
https://wiki.xenproject.org/wiki/DomB_mode_of_dom0less

Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>
Signed-off-by: Christopher Clark <christopher.clark@starlab.io>

Version 2
---------

 - cleaned up wording
 - updated example to reflect a real configuration
 - add explanation of mb2 modules that would be loaded
 - add the config node
---
 docs/designs/boot-domain-device-tree.rst | 235 +++++++++++++++++++
 1 file changed, 235 insertions(+)
 create mode 100644 docs/designs/boot-domain-device-tree.rst

diff --git a/docs/designs/boot-domain-device-tree.rst b/docs/designs/boot-domain-device-tree.rst
new file mode 100644
index 0000000000..558d75a796
--- /dev/null
+++ b/docs/designs/boot-domain-device-tree.rst
@@ -0,0 +1,235 @@
+====================================
+Xen Boot Domain Device Tree Bindings
+====================================
+
+The Xen Boot Domain device tree adopts the dom0less device tree structure and
+extends it to meet the requirements for the Boot Domain capability. The primary
+difference is the introduction of the ``xen`` node that is under the ``/chosen``
+node. The move to a dedicated node was driven by:
+
+1. Reduces the need to walk over nodes that are not of interest, e.g. only
+nodes of interest should be in ``/chosen/xen``
+
+2. Enables the use of the ``#address-cells`` and ``#size-cells`` fields on the
+xen node.
+
+3. Allows for the domain construction information to be easily sanitized by
+simply removing the ``/chosen/xen`` node.
+
+Below is an example device tree definition for a ``xen`` node followed by an
+explanation of each section and field:
+::
+    xen {
+        #address-cells = <1>;
+        #size-cells = <0>;
+
+        // Configuration container
+        config@0 {
+            #address-cells = <1>;
+            #size-cells = <0>;
+            compatible = "xen,config";
+
+            // reg is required but ignored for a "xen,config" node
+            reg = <0>;
+
+            module@1 {
+                compatible = "multiboot,microcode", "multiboot,module";
+                reg = <1>;
+            };
+
+            module@2 {
+                compatible = "multiboot,xsm-policy", "multiboot,module";
+                reg = <2>;
+            };
+        };
+
+        // Boot Domain definition
+        domain@0x7FF5 {
+            #address-cells = <1>;
+            #size-cells = <0>;
+            compatible = "xen,domain";
+
+            reg = <0x7FF5>;
+            memory = <0x0 0x20000>;
+            cpus = <1>;
+
+            module@3 {
+                compatible = "multiboot,kernel", "multiboot,module";
+                reg = <3>;
+            };
+
+            module@4 {
+                compatible = "multiboot,ramdisk", "multiboot,module";
+                reg = <4>;
+            };
+
+            module@5 {
+                compatible = "multiboot,config", "multiboot,module";
+                reg = <5>;
+            };
+        };
+
+        // Classic Dom0 definition
+        domain@0 {
+            #address-cells = <1>;
+            #size-cells = <0>;
+            compatible = "xen,domain";
+
+            reg = <0>;
+
+            // PERMISSION_NONE          (0)
+            // PERMISSION_CONTROL       (1 << 0)
+            // PERMISSION_HARDWARE      (1 << 1)
+            permissions = <3>;
+
+            // FUNCTION_NONE            (0)
+            // FUNCTION_BOOT            (1 << 1)
+            // FUNCTION_CRASH           (1 << 2)
+            // FUNCTION_CONSOLE         (1 << 3)
+            // FUNCTION_XENSTORE        (1 << 30)
+            // FUNCTION_LEGACY_DOM0     (1 << 31)
+            functions = <0xFFFFFFFF>;
+
+            // MODE_PARAVIRTUALIZED     (1 << 0) /* PV | PVH/HVM */
+            // MODE_ENABLE_DEVICE_MODEL (1 << 1) /* HVM | PVH */
+            // MODE_LONG                (1 << 2) /* 64 BIT | 32 BIT */
+            mode = <5>; /* 64 BIT, PV */
+
+            // UUID
+            domain-handle = [B3 FB 98 FB 8F 9F 67 A3];
+
+            cpus = <1>;
+            memory = <0x0 0x20000>;
+            security-id = <0>;
+
+            module@6 {
+                compatible = "multiboot,kernel", "multiboot,module";
+                reg = <6>;
+                bootargs = "console=hvc0";
+            };
+
+            module@7 {
+                compatible = "multiboot,ramdisk", "multiboot,module";
+                reg = <7>;
+            };
+        };
+    };
+
+The multiboot modules that would be supplied when using the above config would
+be, in order:
+
+ - (the above config, compiled)
+ - CPU microcode
+ - XSM policy
+ - kernel for boot domain
+ - ramdisk for boot domain
+ - boot domain configuration file
+ - kernel for the classic dom0 domain
+ - ramdisk for the classic dom0 domain
+
+The Xen node
+------------
+The ``xen`` node is a top-level container for the domains that will be built by
+the hypervisor on startup. On the ``xen`` node, ``#address-cells`` is set to
+one and ``#size-cells`` is set to zero. This enforces that each domain node
+must define a ``reg`` property, which the hypervisor will use to determine the
+``domid`` for the domain.
+
+The Config node
+---------------
+
+A config node details any multiboot modules that are of interest to Xen
+itself. For example, this is where Xen would be informed of the microcode or
+XSM policy locations. For locating the multiboot modules, ``#address-cells``
+and ``#size-cells`` are set according to how the multiboot modules are
+identified and located. If the multiboot modules are located by index within
+the module chain, the values should be "1" and "0" respectively. If the
+multiboot modules are located by physical memory address, the values should be
+"1" and "1" respectively.
+
+\#address-cells
+  Identifies number of fields for address. Required.
+
+\#size-cells
+  Identifies number of fields for size. Required.
+
+compatible
+  Identifies the type of node. Required.
+
+reg
+  Unused. Required.
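+
+As an illustrative sketch of the alternative addressing scheme described above
+(the module address and size values here are hypothetical, not taken from any
+real configuration), a config node locating its modules by physical memory
+address rather than by module-chain index could look like:
+::
+    config@0 {
+        #address-cells = <1>;
+        #size-cells = <1>;
+        compatible = "xen,config";
+
+        // reg is required but ignored for a "xen,config" node
+        reg = <0 0>;
+
+        // module located by physical address and size (hypothetical values)
+        module@76000000 {
+            compatible = "multiboot,microcode", "multiboot,module";
+            reg = <0x76000000 0x10000>;
+        };
+    };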
+
+The Domain node
+---------------
+A domain node describes the construction of a domain. It is free to set
+``#address-cells`` and ``#size-cells`` depending on how the multiboot modules
+are identified and located. If the multiboot modules are located by index
+within the module chain, the values should be "1" and "0" respectively. If the
+multiboot modules are located by physical memory address, the values should be
+"1" and "1" respectively.
+
+As previously mentioned, a domain node must have a ``reg`` property, which
+will be used as the requested domain id for the domain, with a value of "0"
+signifying to use the next available domain id. A domain configuration is not
+able to request a domid of "0". Beyond that, a domain node may have any of the
+following parameters:
+
+\#address-cells
+  Identifies number of fields for address. Required.
+
+\#size-cells
+  Identifies number of fields for size. Required.
+
+compatible
+  Identifies the type of node. Required.
+
+reg
+  Identifies the domid requested to assign to the domain. Required.
+
+permissions
+  This sets what Discretionary Access Control permissions
+  a domain is assigned. Optional, default is none.
+
+functions
+  This identifies what system functions a domain will fulfill.
+  Optional, the default is none.
+
+.. note::  The ``functions`` bits that have been selected to indicate
+   ``FUNCTION_XENSTORE`` and ``FUNCTION_LEGACY_DOM0`` are the last two bits
+   (30, 31) such that, should these features ever be fully retired, the flags
+   may be dropped without leaving a gap in the flag set.
+
+mode
+  The mode the domain will be executed under. Required.
+
+domain-handle
+  A globally unique identifier for the domain. Optional,
+  the default is NULL.
+
+cpus
+  The number of vCPUs to be assigned to the domain. Optional,
+  the default is "1".
+
+memory
+  The amount of memory to assign to the domain, in KB.
+  Required.
+
+security-id
+  The security identity to be assigned to the domain when XSM
+  is the access control mechanism being used. Optional,
+  the default is "domu".
+
+The Module node
+---------------
+This node describes a multiboot module loaded by the boot loader. The required
+compatible property follows the format ``multiboot,<type>``, where type can be
+"module", "kernel", "ramdisk", "device-tree", "microcode", "xsm-policy" or
+"config". The reg property is required and identifies how to locate the
+multiboot module.
+
+compatible
+  This identifies what the module is and thus what the hypervisor
+  should use the module for during domain construction. Required.
+
+reg
+  This identifies where this module is located within the multiboot
+  module chain. Required.
+
+bootargs
+  This is used to override the boot parameters carried with the
+  multiboot module. Optional.
+
+.. note::  The bootargs property is intended for situations where the same
+   kernel multiboot module is used for more than one domain.
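+
+As an illustrative sketch of that situation (the domain ids, module index and
+arguments are hypothetical), two domain nodes could reference the same kernel
+multiboot module while overriding its boot parameters per domain:
+::
+    domain@1 {
+        compatible = "xen,domain";
+        reg = <1>;
+
+        // shared kernel module, per-domain command line
+        module@2 {
+            compatible = "multiboot,kernel", "multiboot,module";
+            reg = <2>;
+            bootargs = "console=hvc0 root=/dev/xvda1";
+        };
+    };
+
+    domain@2 {
+        compatible = "xen,domain";
+        reg = <2>;
+
+        // same module index, different arguments
+        module@2 {
+            compatible = "multiboot,kernel", "multiboot,module";
+            reg = <2>;
+            bootargs = "console=hvc0 root=/dev/xvda2";
+        };
+    };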
+
--=20
2.11.0




From xen-devel-bounces@lists.xenproject.org Wed Feb 03 00:18:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 00:18:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80756.147898 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l75se-0006Pq-Pa; Wed, 03 Feb 2021 00:18:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80756.147898; Wed, 03 Feb 2021 00:18:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l75se-0006Pj-KW; Wed, 03 Feb 2021 00:18:48 +0000
Received: by outflank-mailman (input) for mailman id 80756;
 Wed, 03 Feb 2021 00:18:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qtc=HF=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1l75sd-0006Pe-Kw
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 00:18:47 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b2c44d46-5c53-4c44-90f6-a0b483c01606;
 Wed, 03 Feb 2021 00:18:46 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 63B2264DF2;
 Wed,  3 Feb 2021 00:18:45 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b2c44d46-5c53-4c44-90f6-a0b483c01606
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1612311525;
	bh=BBeVyJ1xqUGjOrP+LecHUrqgLayyqMyZ4uABjyP3H00=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=VyIuS40MkOZPMbKXA9jT1IhMaMM2Ffxd0B2wPI8/S1T+m04NSo9J/JMhOs4DXfEDX
	 dBz98tYUA92nC+z9CfYSbUroVFZ9MXTElzP5tKOjwNh82KBJLrmbkt4T90TNHOiRz3
	 Cn7J36mzBKwkwWUQA07LVrSOIM0qLRbfuv6EbKgQWx/gEjZysv7uur2vyk5IO1W66n
	 l1thgpYZq9Nleib2BGntiGLpljFVLkjERjmUls33uOFsL0vg1o/WOD1JU7PgXZPnL8
	 wA5SZ6m5QfL1Ky+kBt/NmFFL5RDAz+zRz82qVvZSuxjhHIQbvhhcpAAoVLhDmmVb+T
	 WT59fsv7ARDtQ==
Date: Tue, 2 Feb 2021 16:18:44 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Jukka Kaartinen <jukka.kaartinen@unikie.com>
cc: Roman Shaposhnik <roman@zededa.com>, Julien Grall <julien@xen.org>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Xen-devel <xen-devel@lists.xenproject.org>, 
    Bertrand Marquis <Bertrand.Marquis@arm.com>
Subject: Re: Question about xen and Rasp 4B
In-Reply-To: <3ec2b0cb-3685-384e-94df-28eaf8b57c42@unikie.com>
Message-ID: <alpine.DEB.2.21.2102021552380.29047@sstabellini-ThinkPad-T480s>
References: <CAFnJQOouOUox_kBB66sfSuriUiUSvmhSjPxucJjgrB9DdLOwkg@mail.gmail.com> <alpine.DEB.2.21.2101221449480.14612@sstabellini-ThinkPad-T480s> <CAFnJQOqoqj6mWwR61ZsZj1JxRrdisFtH_87YXCeW619GM+L21Q@mail.gmail.com> <alpine.DEB.2.21.2101251646470.20638@sstabellini-ThinkPad-T480s>
 <CAFnJQOpuehAWde5Ta4ud9CGufwZ-K+=60epzSdKc_DnS75O2iA@mail.gmail.com> <alpine.DEB.2.21.2101261149210.2568@sstabellini-ThinkPad-T480s> <CAFnJQOpgRM-3_aZsnv36w+aQV=gMcBA18ZEw_-man7zmYb4O4Q@mail.gmail.com> <5a33e663-4a6d-6247-769a-8f14db4810f2@xen.org>
 <b9247831-335a-f791-1664-abed6b400a42@unikie.com> <CAMmSBy-54qtu_oVVT=KB8GeKP0SW0uK+4wQ_LooHE0y_MZKJQg@mail.gmail.com> <3ec2b0cb-3685-384e-94df-28eaf8b57c42@unikie.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 2 Feb 2021, Jukka Kaartinen wrote:
> Hi Roman,
> 
> > > > 
> > > Good catch.
> > > GPU works now and I can start X! Thanks! I was also able to create domU
> > > that runs Raspian OS.
> > 
> > This is very interesting that you were able to achieve that - congrats!
> > 
> > Now, sorry to be a bit dense -- but since this thread went into all
> > sorts of interesting
> > directions all at once -- I just have a very particular question: what is
> > exact
> > combination of versions of Xen, Linux kernel and a set of patches that went
> > on top that allowed you to do that? I'd love to obviously see it
> > productized in Xen
> > upstream, but for now -- I'd love to make available to Project EVE/Xen
> > community
> > since there seems to be a few folks interested in EVE/Xen combo being able
> > to
> > do that.
> 
> I have tried Xen Release 4.14.0, 4.14.1 and master (from week 4, 2021).
> 
> Kernel rpi-5.9.y and rpi-5.10.y branches from
> https://github.com/raspberrypi/linux
> 
> and
> 
> U-boot (master).
> 
> For the GPU to work it was enough to disable swiotlb from the kernel(s) as
> suggested in this thread.

How are you configuring and installing the kernel?

make bcm2711_defconfig
make Image.gz
make modules_install

?

The device tree is the one from the rpi-5.9.y build? How are you loading
the kernel and device tree with uboot? Do you have any interesting
changes to config.txt?

I am asking because I cannot get to the point of reproducing what you
are seeing: I can boot my rpi-5.9.y kernel on recent Xen but I cannot
get any graphics output on my screen. (The serial works.) I am using the
default Ubuntu Desktop rpi-install target as rootfs and uboot master.


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 00:18:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 00:18:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80757.147910 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l75sk-0006Rv-W1; Wed, 03 Feb 2021 00:18:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80757.147910; Wed, 03 Feb 2021 00:18:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l75sk-0006Ro-Sv; Wed, 03 Feb 2021 00:18:54 +0000
Received: by outflank-mailman (input) for mailman id 80757;
 Wed, 03 Feb 2021 00:18:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qtc=HF=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1l75sj-0006RS-Qm
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 00:18:53 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7911f58f-0040-4802-ae56-3161a006c8fc;
 Wed, 03 Feb 2021 00:18:52 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 0552964E11;
 Wed,  3 Feb 2021 00:18:51 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7911f58f-0040-4802-ae56-3161a006c8fc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1612311532;
	bh=mx1C+CFztb9VdgnMKD+epikE78i+nTfZa4NOt62pGcw=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=HGAE8PPeKCrgbzP2B5emDWln0XlfDXRsRbhAMVnuuqfa0hWj9tIw5yn4Ar3XO1z5n
	 fPIjYtf9CDqk2qEHGItJGf1O0bjg/i6zRu4Khef5kIorv3hDIFv9rosD676FwGhA+2
	 ZeapyyTRTkARbTzDjmtQjpMZXWBdC9E4XqB/iwMu3eML0G38kMGHFeQ+x/DAz2C80K
	 a5CGk6MmXw3oRNVQP0pTLOF6M8x5BA0Sd6CovnvuHpPAYp3cYDwBYd2GTYoaOYTF2W
	 NU33WF2q8Bp1fszXB7bp5M8PLPQ7oD94hWvG4W5omx2BdRfqRCKi8cJMFU/bScDnE3
	 KqKVUvT329RVw==
Date: Tue, 2 Feb 2021 16:18:51 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Elliott Mitchell <ehem+xen@m5p.com>, xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: domain_build: Ignore device nodes with invalid
 addresses
In-Reply-To: <b6d342f8-c833-db88-9808-cdc946999300@xen.org>
Message-ID: <alpine.DEB.2.21.2102021412480.29047@sstabellini-ThinkPad-T480s>
References: <YBmQQ3Tzu++AadKx@mattapan.m5p.com> <a422c04c-f908-6fb6-f2de-fea7b18a6e7d@xen.org> <b6d342f8-c833-db88-9808-cdc946999300@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-470387924-1612304319=:29047"
Content-ID: <alpine.DEB.2.21.2102021418410.29047@sstabellini-ThinkPad-T480s>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-470387924-1612304319=:29047
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.21.2102021418411.29047@sstabellini-ThinkPad-T480s>

On Tue, 2 Feb 2021, Julien Grall wrote:
> On 02/02/2021 18:12, Julien Grall wrote:
> > On 02/02/2021 17:47, Elliott Mitchell wrote:
> > > The handle_device() function has been returning failure upon
> > > encountering a device address which was invalid.  A device tree which
> > > had such an entry has now been seen in the wild.  As it causes no
> > > failures to simply ignore the entries, ignore them.
> > >
> > > Signed-off-by: Elliott Mitchell <ehem+xen@m5p.com>
> > > 
> > > ---
> > > I'm starting to suspect there are an awful lot of places in the various
> > > domain_build.c files which should simply ignore errors.  This is now the
> > > second place I've encountered in 2 months where ignoring errors was the
> > > correct action.
> > 
> > Right, as a counterpoint, we run Xen on Arm HW for several years now and
> > this is the first time I heard about issue parsing the DT. So while I
> > appreciate that you are eager to run Xen on the RPI...
> > 
> > > I know failing in case of error is an engineer's
> > > favorite approach, but there seem to be an awful lot of harmless failures
> > > causing panics.
> > > 
> > > This started as the thread "[RFC PATCH] xen/arm: domain_build: Ignore
> > > empty memory bank".  Now it seems clear the correct approach is to simply
> > > ignore these entries.
> > 
> > ... we first need to fully understand the issues. Here are a few questions:
> >    1) Can you provide more information why you believe the address is
> > invalid?
> >    2) How does Linux use the node?
> >    3) Is it happening with all the RPI DT? If not, what are the
> > differences?
> 
> So I had another look at the device-tree you provided earlier on. The node is
> the following (copied directly from the DTS):
> 
> &pcie0 {
>         pci@1,0 {
>                 #address-cells = <3>;
>                 #size-cells = <2>;
>                 ranges;
> 
>                 reg = <0 0 0 0 0>;
> 
>                 usb@1,0 {
>                         reg = <0x10000 0 0 0 0>;
>                         resets = <&reset RASPBERRYPI_FIRMWARE_RESET_ID_USB>;
>                 };
>         };
> };
> 
> pcie0: pcie@7d500000 {
>    compatible = "brcm,bcm2711-pcie";
>    reg = <0x0 0x7d500000  0x0 0x9310>;
>    device_type = "pci";
>    #address-cells = <3>;
>    #interrupt-cells = <1>;
>    #size-cells = <2>;
>    interrupts = <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>,
>                 <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>;
>    interrupt-names = "pcie", "msi";
>    interrupt-map-mask = <0x0 0x0 0x0 0x7>;
>    interrupt-map = <0 0 0 1 &gicv2 GIC_SPI 143
>                                                      IRQ_TYPE_LEVEL_HIGH>;
>    msi-controller;
>    msi-parent = <&pcie0>;
> 
>    ranges = <0x02000000 0x0 0xc0000000 0x6 0x00000000
>              0x0 0x40000000>;
>    /*
>     * The wrapper around the PCIe block has a bug
>     * preventing it from accessing beyond the first 3GB of
>     * memory.
>     */
>    dma-ranges = <0x02000000 0x0 0x00000000 0x0 0x00000000
>                  0x0 0xc0000000>;
>    brcm,enable-ssc;
> };
> 
> The interpretation of "reg" depends on the context. In this case, we are
> trying to interpret as a memory address from the CPU PoV when it has a
> different meaning (I am not exactly sure what).
> 
> In fact, you are lucky that Xen doesn't manage to interpret it. Xen should
> really stop trying to look for regions to map when it discovers a PCI bus. I
> wrote a quick hack patch that should ignore it:

Yes, I think you are right. There are a few instances where "reg" is not
an address ready to be remapped. It is not just PCI, although that's the
most common. Maybe we need a list, like skip_matches in handle_node.


> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 374bf655ee34..937fd1e387b7 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -1426,7 +1426,7 @@ static int __init handle_device(struct domain *d, struct
> dt_device_node *dev,
> 
>  static int __init handle_node(struct domain *d, struct kernel_info *kinfo,
>                                struct dt_device_node *node,
> -                              p2m_type_t p2mt)
> +                              p2m_type_t p2mt, bool pci_bus)
>  {
>      static const struct dt_device_match skip_matches[] __initconst =
>      {
> @@ -1532,9 +1532,14 @@ static int __init handle_node(struct domain *d, struct
> kernel_info *kinfo,
>                 "WARNING: Path %s is reserved, skip the node as we may re-use
> the path.\n",
>                 path);
> 
> -    res = handle_device(d, node, p2mt);
> -    if ( res)
> -        return res;
> +    if ( !pci_bus )
> +    {
> +        res = handle_device(d, node, p2mt);
> +        if ( res)
> +           return res;
> +
> +        pci_bus = dt_device_type_is_equal(node, "pci");
> +    }
> 
>      /*
>       * The property "name" is used to have a different name on older FDT
> @@ -1554,7 +1559,7 @@ static int __init handle_node(struct domain *d, struct
> kernel_info *kinfo,
> 
>      for ( child = node->child; child != NULL; child = child->sibling )
>      {
> -        res = handle_node(d, kinfo, child, p2mt);
> +        res = handle_node(d, kinfo, child, p2mt, pci_bus);
>          if ( res )
>              return res;
>      }
> @@ -2192,7 +2197,7 @@ static int __init prepare_dtb_hwdom(struct domain *d,
> struct kernel_info *kinfo)
> 
>      fdt_finish_reservemap(kinfo->fdt);
> 
> -    ret = handle_node(d, kinfo, dt_host, default_p2mt);
> +    ret = handle_node(d, kinfo, dt_host, default_p2mt, false);
>      if ( ret )
>          goto err;
> 
> A less hackish possibility would be to modify dt_number_of_address() and
> return 0 when the device is a child of a PCI below.
> 
> Stefano, do you have any opinions?

Would PCIe even work today? Because if it doesn't, we could just add it
to skip_matches until we get PCI passthrough properly supported.

But aside from PCIe, let's say that we know of a few nodes for which
"reg" needs a special treatment. I am not sure it makes sense to proceed
with parsing those nodes without knowing how to deal with that. So maybe
we should add those nodes to skip_matches until we know what to do with
them. At that point, I would imagine we would introduce a special
handle_device function that knows what to do. In the case of PCIe,
something like "handle_device_pcie".
--8323329-470387924-1612304319=:29047--


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 00:28:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 00:28:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80760.147922 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l761m-0007ZN-V1; Wed, 03 Feb 2021 00:28:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80760.147922; Wed, 03 Feb 2021 00:28:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l761m-0007ZG-Qr; Wed, 03 Feb 2021 00:28:14 +0000
Received: by outflank-mailman (input) for mailman id 80760;
 Wed, 03 Feb 2021 00:28:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l761l-0007Z8-Jy; Wed, 03 Feb 2021 00:28:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l761l-0001HU-B8; Wed, 03 Feb 2021 00:28:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l761k-0005fv-VV; Wed, 03 Feb 2021 00:28:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l761k-0007Nm-V0; Wed, 03 Feb 2021 00:28:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Sy6BeDpCvK7dPX/Vfh2P5KlYVeroWQB1VdEVbJVS2jU=; b=RDYf94TTHb/lMVL0Rl4BuKEsIX
	qlg4leyC+oI7KFcefK4yJh9NWsakAHkqS9qscwa1QjQ8YKmblDleU6XrGEqyZ53KwdakiGrryVt7p
	viFwUo9C7x05Un+gdvP1IVkY3OsN9wUikOPcz6Sg3hMw6hSN5xJ7A2Ii/XO/bNwp9zMU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158940-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 158940: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-arm64-arm64-xl-credit1:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:guest-start:fail:heisenbug
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-xsm:guest-saverestore:fail:heisenbug
    qemu-mainline:test-arm64-arm64-xl-credit1:host-install(5):broken:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=74208cd252c5da9d867270a178799abd802b9338
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 03 Feb 2021 00:28:12 +0000

flight 158940 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158940/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-credit1     <job status>                 broken  in 158901
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd 17 guest-start/debian.repeat fail in 158879 REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-libvirt-raw 13 guest-start                fail pass in 158879
 test-armhf-armhf-xl-vhd      13 guest-start                fail pass in 158879
 test-amd64-amd64-xl-xsm      17 guest-saverestore          fail pass in 158901

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit1 5 host-install(5) broken in 158901 blocked in 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail in 158879 like 152631
 test-armhf-armhf-xl-vhd     14 migrate-support-check fail in 158879 never pass
 test-armhf-armhf-xl-vhd 15 saverestore-support-check fail in 158879 never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check fail in 158879 never pass
 test-armhf-armhf-xl-rtds 18 guest-start/debian.repeat fail in 158901 like 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                74208cd252c5da9d867270a178799abd802b9338
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  166 days
Failing since        152659  2020-08-21 14:07:39 Z  165 days  335 attempts
Testing same since   158816  2021-01-30 13:16:09 Z    3 days    6 attempts

------------------------------------------------------------
372 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-arm64-arm64-xl-credit1 broken

Not pushing.

(No revision log; it would be 98359 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 01:16:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 01:16:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80767.147942 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l76lr-0005xi-OB; Wed, 03 Feb 2021 01:15:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80767.147942; Wed, 03 Feb 2021 01:15:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l76lr-0005xb-Kw; Wed, 03 Feb 2021 01:15:51 +0000
Received: by outflank-mailman (input) for mailman id 80767;
 Wed, 03 Feb 2021 01:15:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l76lq-0005xT-Ll; Wed, 03 Feb 2021 01:15:50 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l76lq-0003aT-G0; Wed, 03 Feb 2021 01:15:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l76lq-0007Wr-8W; Wed, 03 Feb 2021 01:15:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l76lq-0002tl-85; Wed, 03 Feb 2021 01:15:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=LJB63DMhEQitPH/B1ZbAUwJhAYEimoyzLcjiqhk1eYA=; b=ZAsXA2ljmgk+tAasyGmQJLjYas
	47gc5LzGsVxYHbPQvLr3NZ72GVhDX6bn7HaYdntZpptnTZa+B3F5+8gIcziHgFvgOKtnJ0p3fR9iM
	WfsxZ5zrcUq0T0bXnlYGKuZnXlv162qh2tzwYo6eVigXG+/URvVHIUit+m0Cm88hfy5M=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158949-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 158949: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-xl-seattle:<job status>:broken:regression
    linux-linus:test-arm64-arm64-xl:<job status>:broken:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:debian-install:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:guest-start:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:guest-start:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-armhf-armhf-libvirt:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl:guest-start:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
    linux-linus:test-arm64-arm64-xl-seattle:host-install(5):broken:nonblocking
    linux-linus:test-arm64-arm64-xl:host-install(5):broken:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=88bb507a74ea7d75fa49edd421eaa710a7d80598
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 03 Feb 2021 01:15:50 +0000

flight 158949 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158949/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-seattle     <job status>                 broken  in 158915
 test-arm64-arm64-xl             <job status>                 broken  in 158915
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle  12 debian-install           fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 14 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1  14 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-arndale  14 guest-start              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck 14 guest-start            fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl          14 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1 10 host-ping-check-xen fail in 158915 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-credit2 10 host-ping-check-xen fail in 158915 pass in 158949
 test-arm64-arm64-xl-credit1   8 xen-boot                   fail pass in 158915

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     14 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-seattle 5 host-install(5) broken in 158915 blocked in 152332
 test-arm64-arm64-xl       5 host-install(5) broken in 158915 blocked in 152332
 test-arm64-arm64-xl-credit2  11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                88bb507a74ea7d75fa49edd421eaa710a7d80598
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  186 days
Failing since        152366  2020-08-01 20:49:34 Z  185 days  331 attempts
Testing same since   158915  2021-02-01 23:39:22 Z    1 days    2 attempts

------------------------------------------------------------
4508 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-arm64-arm64-xl-seattle broken
broken-job test-arm64-arm64-xl broken

Not pushing.

(No revision log; it would be 1021361 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 01:59:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 01:59:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80773.147958 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l77RX-000216-4A; Wed, 03 Feb 2021 01:58:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80773.147958; Wed, 03 Feb 2021 01:58:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l77RW-00020z-WA; Wed, 03 Feb 2021 01:58:54 +0000
Received: by outflank-mailman (input) for mailman id 80773;
 Wed, 03 Feb 2021 01:58:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=vpkB=HF=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1l77RW-00020u-2u
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 01:58:54 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6775a40d-27da-40b2-b786-1cbc93b04f3b;
 Wed, 03 Feb 2021 01:58:49 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 1131wUR6024610
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Tue, 2 Feb 2021 20:58:36 -0500 (EST) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 1131wTmg024609;
 Tue, 2 Feb 2021 17:58:29 -0800 (PST) (envelope-from ehem)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6775a40d-27da-40b2-b786-1cbc93b04f3b
Date: Tue, 2 Feb 2021 17:58:29 -0800
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Jukka Kaartinen <jukka.kaartinen@unikie.com>,
        Roman Shaposhnik <roman@zededa.com>, Julien Grall <julien@xen.org>,
        Xen-devel <xen-devel@lists.xenproject.org>,
        Bertrand Marquis <Bertrand.Marquis@arm.com>
Subject: Re: Question about xen and Rasp 4B
Message-ID: <YBoDRSQMCAk/qbAf@mattapan.m5p.com>
References: <CAFnJQOqoqj6mWwR61ZsZj1JxRrdisFtH_87YXCeW619GM+L21Q@mail.gmail.com>
 <alpine.DEB.2.21.2101251646470.20638@sstabellini-ThinkPad-T480s>
 <CAFnJQOpuehAWde5Ta4ud9CGufwZ-K+=60epzSdKc_DnS75O2iA@mail.gmail.com>
 <alpine.DEB.2.21.2101261149210.2568@sstabellini-ThinkPad-T480s>
 <CAFnJQOpgRM-3_aZsnv36w+aQV=gMcBA18ZEw_-man7zmYb4O4Q@mail.gmail.com>
 <5a33e663-4a6d-6247-769a-8f14db4810f2@xen.org>
 <b9247831-335a-f791-1664-abed6b400a42@unikie.com>
 <CAMmSBy-54qtu_oVVT=KB8GeKP0SW0uK+4wQ_LooHE0y_MZKJQg@mail.gmail.com>
 <3ec2b0cb-3685-384e-94df-28eaf8b57c42@unikie.com>
 <alpine.DEB.2.21.2102021552380.29047@sstabellini-ThinkPad-T480s>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.21.2102021552380.29047@sstabellini-ThinkPad-T480s>
X-Spam-Status: No, score=0.4 required=10.0 tests=KHOP_HELO_FCRDNS autolearn=no
	autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

On Tue, Feb 02, 2021 at 04:18:44PM -0800, Stefano Stabellini wrote:
> How are you configuring and installing the kernel?
> 
> make bcm2711_defconfig
> make Image.gz
> make modules_install
> 
> ?
> 
> The device tree is the one from the rpi-5.9.y build? How are you loading
> the kernel and device tree with uboot? Do you have any interesting
> changes to config.txt?
> 
> I am asking because I cannot get to the point of reproducing what you
> are seeing: I can boot my rpi-5.9.y kernel on recent Xen but I cannot
> get any graphics output on my screen. (The serial works.) I am using the
> default Ubuntu Desktop rpi-install target as rootfs and uboot master.

I've been experimenting with pieces from various sources to get things
working.  Since my goal has been a Debian variant, I use Debian-packaged
versions of things wherever possible; sticking to packaged versions is
more maintainable over the long run.

My starting point was SuSE's Raspberry Pi 4B installation medium, and I'm
still using pieces from SuSE's installation.  Notably, SuSE's device-tree
overlays have worked rather better than the RPF or kernel versions.

Debian's u-boot-rpi:arm64 package is functional; it provides u-boot.bin,
which config.txt loads as the kernel.
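The firmware-side wiring described above can be sketched as a config.txt
fragment.  This is illustrative only: the option names are the standard
Raspberry Pi firmware ones, not copied from this installation.

```
# Illustrative config.txt sketch for chain-loading U-Boot on a Pi 4B;
# not the actual file from this setup.
arm_64bit=1          # run the ARM cores in 64-bit mode
kernel=u-boot.bin    # firmware loads U-Boot in place of a Linux kernel
enable_uart=1        # serial console for early debugging
```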

Debian's grub-efi-arm64 package is also functional.  Installing it is a
bit awkward since U-Boot's EFI environment is incomplete; nonetheless it
is simply a matter of installing it in EFI/BOOT as the primary boot
entry, rather than EFI/Debian where it would normally install.
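Putting the two packages together, the boot partition ends up looking
roughly like the sketch below.  The paths are the conventional ones
(BOOTAA64.EFI is the standard arm64 removable-media fallback name), not
verified against this particular install.

```
# Illustrative boot-partition layout for this U-Boot + GRUB chain.
config.txt             # firmware config; kernel= points at u-boot.bin
u-boot.bin             # from Debian's u-boot-rpi:arm64 package
EFI/BOOT/BOOTAA64.EFI  # GRUB from grub-efi-arm64, as the primary boot entry
```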

The base device-tree files from the RPF kernel function reasonably well
(unlike the overlays).  I'm actually doing
`make O=<build-dir> bcm2711_defconfig menuconfig bindeb-pkg` and then
installing the resultant package.  This places bcm2711-rpi-4-b.dtb in
/usr/lib/linux-image-<rev>/broadcom/; I'm presently copying it into the
Raspberry Pi boot area.

If you're unable to get graphics output, note that HDMI MUST be plugged
in *during* *boot*.  On Broadcom's chips the graphics core is in control
of rather more than one might expect (Qualcomm follows this pattern by
putting their modems in control).  In fact I've observed that my monitor
needs to be displaying the RP4's input for it to complete the handshake
and for the RP4 to do graphics.


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Wed Feb 03 04:02:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 04:02:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80799.147976 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l79MV-0006Rb-WF; Wed, 03 Feb 2021 04:01:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80799.147976; Wed, 03 Feb 2021 04:01:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l79MV-0006RU-SZ; Wed, 03 Feb 2021 04:01:51 +0000
Received: by outflank-mailman (input) for mailman id 80799;
 Wed, 03 Feb 2021 04:01:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u1O+=HF=citrix.com=igor.druzhinin@srs-us1.protection.inumbo.net>)
 id 1l79MU-0006RP-6f
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 04:01:50 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c8f78852-189c-454d-b59e-d2992d4cbd2d;
 Wed, 03 Feb 2021 04:01:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c8f78852-189c-454d-b59e-d2992d4cbd2d
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612324908;
  h=from:to:cc:subject:date:message-id:mime-version;
  bh=xn/6xO8m4ShHK03f/t/BD4tc59kJO62bHplznEoYNZo=;
  b=EF9x/yez1HG34Vg/KRPRtBxkiokx0brURgHqtRHeUKfLODoqgvW8Y06q
   UjArvSmZh0gMEn/8S83nNbjLR7gi43BR/o5oBt35LKdRHF2xWTkffumU/
   mAKoaraPAZ8s7crzaBdA/Ejiu5cMBfHs3UlOC/7mV0ErkW6KCWzsd9iTh
   8=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: qtrqTeT6IsDhu2KYKR6HMfg0X5okI/fJ93wJedDLVjcoqqLidFa/AkOHVMAG2RXNrWZgEGrDZy
 EFqxU1LveUtZHH7h1iTmkuHNl6epr9XCGC56wPq+Vw2DHvL6ZKbL8zJR0ike8ThLlK1zHgeG/G
 9B0uUWCLZaEYS9Jb3UNXE/PxdRVe6mpjJtXgyI6z7dP9C8jyeord7n1iX8+SoH9Ge9oUv61uTr
 IU4xvZzhMCrSpFcStquBElrYQYmqZ6r7domu6/FOQ3Wuh/o8txjrg3JMMbrPkems7I+qfkgsup
 iiw=
X-SBRS: 4.0
X-MesageID: 36808232
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,397,1602561600"; 
   d="scan'208";a="36808232"
From: Igor Druzhinin <igor.druzhinin@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: <iwj@xenproject.org>, <wl@xen.org>, <anthony.perard@citrix.com>,
	<tamas.k.lengyel@gmail.com>, Igor Druzhinin <igor.druzhinin@citrix.com>
Subject: [PATCH] tools/libxl: only set viridian flags on new domains
Date: Wed, 3 Feb 2021 04:01:29 +0000
Message-ID: <1612324889-20942-1-git-send-email-igor.druzhinin@citrix.com>
X-Mailer: git-send-email 2.7.4
MIME-Version: 1.0
Content-Type: text/plain

Domains being migrated or restored already have the viridian HVM param
key in the migration stream, and setting it twice results in Xen
returning -EEXIST on the second attempt (during migration stream parsing)
if the values don't match.  That causes the migration/restore operation
to fail on the destination side.

This issue has resurfaced with the latest commits (983524671 and
7e5cffcd1e) extending the default viridian feature set, which makes the
values from previous migration streams differ from those set at domain
construction.

Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
---
 tools/libs/light/libxl_arch.h |  6 ++++--
 tools/libs/light/libxl_arm.c  |  4 +++-
 tools/libs/light/libxl_dom.c  |  2 +-
 tools/libs/light/libxl_x86.c  | 11 ++++++++---
 4 files changed, 16 insertions(+), 7 deletions(-)

diff --git a/tools/libs/light/libxl_arch.h b/tools/libs/light/libxl_arch.h
index 6a91775..c305d70 100644
--- a/tools/libs/light/libxl_arch.h
+++ b/tools/libs/light/libxl_arch.h
@@ -30,8 +30,10 @@ int libxl__arch_domain_save_config(libxl__gc *gc,
 
 /* arch specific internal domain creation function */
 _hidden
-int libxl__arch_domain_create(libxl__gc *gc, libxl_domain_config *d_config,
-               uint32_t domid);
+int libxl__arch_domain_create(libxl__gc *gc,
+                              libxl_domain_config *d_config,
+                              libxl__domain_build_state *state,
+                              uint32_t domid);
 
 /* setup arch specific hardware description, i.e. DTB on ARM */
 _hidden
diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
index 66e8a06..8c4eda3 100644
--- a/tools/libs/light/libxl_arm.c
+++ b/tools/libs/light/libxl_arm.c
@@ -126,7 +126,9 @@ int libxl__arch_domain_save_config(libxl__gc *gc,
     return 0;
 }
 
-int libxl__arch_domain_create(libxl__gc *gc, libxl_domain_config *d_config,
+int libxl__arch_domain_create(libxl__gc *gc,
+                              libxl_domain_config *d_config,
+                              libxl__domain_build_state *state,
                               uint32_t domid)
 {
     return 0;
diff --git a/tools/libs/light/libxl_dom.c b/tools/libs/light/libxl_dom.c
index 1916857..842a51c 100644
--- a/tools/libs/light/libxl_dom.c
+++ b/tools/libs/light/libxl_dom.c
@@ -378,7 +378,7 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
     state->store_port = xc_evtchn_alloc_unbound(ctx->xch, domid, state->store_domid);
     state->console_port = xc_evtchn_alloc_unbound(ctx->xch, domid, state->console_domid);
 
-    rc = libxl__arch_domain_create(gc, d_config, domid);
+    rc = libxl__arch_domain_create(gc, d_config, state, domid);
 
     /* Construct a CPUID policy, but only for brand new domains.  Domains
      * being migrated-in/restored have CPUID handled during the
diff --git a/tools/libs/light/libxl_x86.c b/tools/libs/light/libxl_x86.c
index 91a9fc7..58187ed 100644
--- a/tools/libs/light/libxl_x86.c
+++ b/tools/libs/light/libxl_x86.c
@@ -453,8 +453,10 @@ static int hvm_set_conf_params(libxl__gc *gc, uint32_t domid,
     return ret;
 }
 
-int libxl__arch_domain_create(libxl__gc *gc, libxl_domain_config *d_config,
-        uint32_t domid)
+int libxl__arch_domain_create(libxl__gc *gc,
+                              libxl_domain_config *d_config,
+                              libxl__domain_build_state *state,
+                              uint32_t domid)
 {
     const libxl_domain_build_info *info = &d_config->b_info;
     int ret = 0;
@@ -466,7 +468,10 @@ int libxl__arch_domain_create(libxl__gc *gc, libxl_domain_config *d_config,
         (ret = hvm_set_conf_params(gc, domid, info)) != 0)
         goto out;
 
-    if (info->type == LIBXL_DOMAIN_TYPE_HVM &&
+    /* Viridian flags are already a part of the migration stream so set
+     * them here only for brand new domains. */
+    if (!state->restore &&
+        info->type == LIBXL_DOMAIN_TYPE_HVM &&
         (ret = hvm_set_viridian_features(gc, domid, info)) != 0)
         goto out;
 
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Wed Feb 03 04:10:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 04:10:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80801.147988 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l79V5-0007VD-0Y; Wed, 03 Feb 2021 04:10:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80801.147988; Wed, 03 Feb 2021 04:10:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l79V4-0007V6-Sw; Wed, 03 Feb 2021 04:10:42 +0000
Received: by outflank-mailman (input) for mailman id 80801;
 Wed, 03 Feb 2021 04:10:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u1O+=HF=citrix.com=igor.druzhinin@srs-us1.protection.inumbo.net>)
 id 1l79V4-0007V1-3U
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 04:10:42 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dc16e286-c82e-428c-8143-4c6f3b4d622d;
 Wed, 03 Feb 2021 04:10:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dc16e286-c82e-428c-8143-4c6f3b4d622d
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612325440;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=Iv3CuZMIEwErChhvqgZmbCzl6kWEUQR7ijsEZbz0szs=;
  b=h75r9jhVdgoxH2ZMgxWvF0fwot7e9qQEq5ZdW+copuwdG/+BqCc1uj7V
   4SZ67JQY2x4K90EeqbvUtQsLGOaCPeEzFlr0/M7/XkT69bfYV1iQIt8Hv
   4wlAtrsUmFlhWyYnN6tQ2GqWp+53386wA795aCY3gFcx7TAYo7UQNkfqO
   w=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: RqvNpyff6R377OuhA1L2I4fTRSMuHR+8sPxH0bLVhEJ4n8JZMNpkDfC864eUHdhA/oc3nwhMLL
 x6w6mmNmFKSgNjtN7m4yUXT0e8ttrALPNUuYXnzYbr4vJ1qajlU+rXVaVE8Yuvq9OfS0ecPVdy
 PqmivGmb1pCLHeTExe5FC7mlMfL0Gcsh6ZaLQCWV2R4NOg0cliLnKutrkbR86R4J0GDjBii6sO
 TTPU+iOaNbCazzuUTbQdnnZyQqXfudGJcGuMk+QQhFtnEZlI8CHa/Pxg65O3VT09AwsMX1Ytun
 4HQ=
X-SBRS: 4.0
X-MesageID: 36427480
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,397,1602561600"; 
   d="scan'208";a="36427480"
Subject: Re: [PATCH] tools/libxl: only set viridian flags on new domains
To: <xen-devel@lists.xenproject.org>
CC: <iwj@xenproject.org>, <wl@xen.org>, <anthony.perard@citrix.com>,
	<tamas.k.lengyel@gmail.com>
References: <1612324889-20942-1-git-send-email-igor.druzhinin@citrix.com>
From: Igor Druzhinin <igor.druzhinin@citrix.com>
Message-ID: <7ccc0b02-1400-2ef4-3a01-92c2fd765591@citrix.com>
Date: Wed, 3 Feb 2021 04:10:36 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <1612324889-20942-1-git-send-email-igor.druzhinin@citrix.com>
Content-Type: text/plain; charset="utf-8"
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 03/02/2021 04:01, Igor Druzhinin wrote:
> Domains being migrated or restored already have the viridian HVM param
> key in the migration stream, and setting it twice results in Xen
> returning -EEXIST on the second attempt (during migration stream parsing)
> if the values don't match.  That causes the migration/restore operation
> to fail on the destination side.
> 
> This issue has resurfaced with the latest commits (983524671 and
> 7e5cffcd1e) extending the default viridian feature set, which makes the
> values from previous migration streams differ from those set at domain
> construction.
> 
> Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>

Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>

Igor


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 05:37:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 05:37:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80809.148009 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7Aqa-0007Il-BS; Wed, 03 Feb 2021 05:37:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80809.148009; Wed, 03 Feb 2021 05:37:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7Aqa-0007Ie-8H; Wed, 03 Feb 2021 05:37:00 +0000
Received: by outflank-mailman (input) for mailman id 80809;
 Wed, 03 Feb 2021 05:36:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uxJz=HF=gmail.com=pankaj.gupta.linux@srs-us1.protection.inumbo.net>)
 id 1l7AqZ-0007IZ-C6
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 05:36:59 +0000
Received: from mail-io1-xd2c.google.com (unknown [2607:f8b0:4864:20::d2c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4a000cab-8888-4dde-8355-6bf540b869ad;
 Wed, 03 Feb 2021 05:36:58 +0000 (UTC)
Received: by mail-io1-xd2c.google.com with SMTP id u17so24021949iow.1
 for <xen-devel@lists.xenproject.org>; Tue, 02 Feb 2021 21:36:57 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4a000cab-8888-4dde-8355-6bf540b869ad
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=QOAFG5e83xgmf8k0DWJGMDL7RfKZ3CHVYa0GrtuyQBw=;
        b=WZXVoX6SBgnC737ppEU8/8migVv7IeXjPzQUQq2rQmRHGbhY89dPWxCwx+Wmj9UwGC
         lc/ACKgBxCQOBOR8RUH5eS+ZPszj1UvGL6thVaamq885PN7G9pFlhBvZt/UjpdnwUrZg
         FDRP03+Qs+KKS0ZCfxVEv4+Z4/KVByyF72r3O200gqOcdZmCVoRhsevpy2Kep1gmtYrX
         Q+vutsQ/UKBFRtLGqSfujBUn6z8YXtBb9x+4Re1py20oDhkh+JYoKAnMoNlcqsZD6Vhz
         bSwoRNFfqOu/YSNSWO+PLMoufPlB/CQGXyPhvVpQUklr9CoM4qVkcvUe703e7h5kz2Ur
         uszA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=QOAFG5e83xgmf8k0DWJGMDL7RfKZ3CHVYa0GrtuyQBw=;
        b=BcIgkP6sNxKDY6mLd4hlqMoMFFsMDvxdtjxiwZMCm9IqJPjCjei2KNr4sEOqZZTCYC
         mqXu2y0aHIpMRtGyclUMo6zvpCsrh/glHB+MSGT3qNQ0WbCVldGfKxDr/JTVSeYtmmF6
         k8jMkBDmrTHf+D5Nqd5pyR1TQ0ecJFWziY4iLgiGDAPcH3YLKLHBRGXeQiH2WuK5pWQ9
         UTQucq5HBp1S9CPNOCxNMNNLXZBeLCLSVkRwm8Pj6NcYGFjrRT74SQAyprKYiISl7bdE
         QSO0KnVxU81LlvjT+mgiLdCuKyrCIO+R1u7SF3/o/Rf7A0k7J5YlfGQTAdO2p+5rHcwd
         CBXQ==
X-Gm-Message-State: AOAM5316cVmpzhJViHgeU/Pu/aTAAG9OZo0FsyLoSDgc6nHPaAmvlX0T
	32LaFoM3rgXA/Ku1gOMV7II6ZzzLPQBVNM7fPdw=
X-Google-Smtp-Source: ABdhPJzDAiiFwtpxbYTCU0PBMsJeg/lZf0roQ6HxhxuikJq6gsCG0p+pEyAta6N+UlSj2M5K1LhBFUYNd+rK32Rpbt8=
X-Received: by 2002:a05:6638:33aa:: with SMTP id h42mr1547914jav.124.1612330617400;
 Tue, 02 Feb 2021 21:36:57 -0800 (PST)
MIME-Version: 1.0
References: <20210126115829.10909-1-david@redhat.com>
In-Reply-To: <20210126115829.10909-1-david@redhat.com>
From: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
Date: Wed, 3 Feb 2021 06:36:45 +0100
Message-ID: <CAM9Jb+hQMqBmHOQoME+ro4K82v6bVe9Fhcjmkp4bxFtighVo8w@mail.gmail.com>
Subject: Re: [PATCH v1] mm/memory_hotplug: MEMHP_MERGE_RESOURCE -> MHP_MERGE_RESOURCE
To: David Hildenbrand <david@redhat.com>
Cc: LKML <linux-kernel@vger.kernel.org>, Linux MM <linux-mm@kvack.org>, 
	Andrew Morton <akpm@linux-foundation.org>, "K. Y. Srinivasan" <kys@microsoft.com>, 
	Haiyang Zhang <haiyangz@microsoft.com>, Stephen Hemminger <sthemmin@microsoft.com>, 
	Wei Liu <wei.liu@kernel.org>, "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>, 
	Boris Ostrovsky <boris.ostrovsky@oracle.com>, Juergen Gross <jgross@suse.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Michal Hocko <mhocko@kernel.org>, 
	Oscar Salvador <osalvador@suse.de>, Anshuman Khandual <anshuman.khandual@arm.com>, 
	Wei Yang <richard.weiyang@linux.alibaba.com>, linux-hyperv@vger.kernel.org, 
	virtualization@lists.linux-foundation.org, xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"

> Let's make "MEMHP_MERGE_RESOURCE" consistent with "MHP_NONE", "mhp_t" and
> "mhp_flags". As discussed recently [1], "mhp" is our internal
> acronym for memory hotplug now.
>
> [1] https://lore.kernel.org/linux-mm/c37de2d0-28a1-4f7d-f944-cfd7d81c334d@redhat.com/
>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: "K. Y. Srinivasan" <kys@microsoft.com>
> Cc: Haiyang Zhang <haiyangz@microsoft.com>
> Cc: Stephen Hemminger <sthemmin@microsoft.com>
> Cc: Wei Liu <wei.liu@kernel.org>
> Cc: "Michael S. Tsirkin" <mst@redhat.com>
> Cc: Jason Wang <jasowang@redhat.com>
> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Cc: Juergen Gross <jgross@suse.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: Anshuman Khandual <anshuman.khandual@arm.com>
> Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
> Cc: linux-hyperv@vger.kernel.org
> Cc: virtualization@lists.linux-foundation.org
> Cc: xen-devel@lists.xenproject.org
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>  drivers/hv/hv_balloon.c        | 2 +-
>  drivers/virtio/virtio_mem.c    | 2 +-
>  drivers/xen/balloon.c          | 2 +-
>  include/linux/memory_hotplug.h | 2 +-
>  mm/memory_hotplug.c            | 2 +-
>  5 files changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/hv/hv_balloon.c b/drivers/hv/hv_balloon.c
> index 8c471823a5af..2f776d78e3c1 100644
> --- a/drivers/hv/hv_balloon.c
> +++ b/drivers/hv/hv_balloon.c
> @@ -726,7 +726,7 @@ static void hv_mem_hot_add(unsigned long start, unsigned long size,
>
>                 nid = memory_add_physaddr_to_nid(PFN_PHYS(start_pfn));
>                 ret = add_memory(nid, PFN_PHYS((start_pfn)),
> -                               (HA_CHUNK << PAGE_SHIFT), MEMHP_MERGE_RESOURCE);
> +                               (HA_CHUNK << PAGE_SHIFT), MHP_MERGE_RESOURCE);
>
>                 if (ret) {
>                         pr_err("hot_add memory failed error is %d\n", ret);
> diff --git a/drivers/virtio/virtio_mem.c b/drivers/virtio/virtio_mem.c
> index 85a272c9978e..148bea39b09a 100644
> --- a/drivers/virtio/virtio_mem.c
> +++ b/drivers/virtio/virtio_mem.c
> @@ -623,7 +623,7 @@ static int virtio_mem_add_memory(struct virtio_mem *vm, uint64_t addr,
>         /* Memory might get onlined immediately. */
>         atomic64_add(size, &vm->offline_size);
>         rc = add_memory_driver_managed(vm->nid, addr, size, vm->resource_name,
> -                                      MEMHP_MERGE_RESOURCE);
> +                                      MHP_MERGE_RESOURCE);
>         if (rc) {
>                 atomic64_sub(size, &vm->offline_size);
>                 dev_warn(&vm->vdev->dev, "adding memory failed: %d\n", rc);
> diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
> index b57b2067ecbf..671c71245a7b 100644
> --- a/drivers/xen/balloon.c
> +++ b/drivers/xen/balloon.c
> @@ -331,7 +331,7 @@ static enum bp_state reserve_additional_memory(void)
>         mutex_unlock(&balloon_mutex);
>         /* add_memory_resource() requires the device_hotplug lock */
>         lock_device_hotplug();
> -       rc = add_memory_resource(nid, resource, MEMHP_MERGE_RESOURCE);
> +       rc = add_memory_resource(nid, resource, MHP_MERGE_RESOURCE);
>         unlock_device_hotplug();
>         mutex_lock(&balloon_mutex);
>
> diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
> index 3d99de0db2dd..4b834f5d032e 100644
> --- a/include/linux/memory_hotplug.h
> +++ b/include/linux/memory_hotplug.h
> @@ -53,7 +53,7 @@ typedef int __bitwise mhp_t;
>   * with this flag set, the resource pointer must no longer be used as it
>   * might be stale, or the resource might have changed.
>   */
> -#define MEMHP_MERGE_RESOURCE   ((__force mhp_t)BIT(0))
> +#define MHP_MERGE_RESOURCE     ((__force mhp_t)BIT(0))
>
>  /*
>   * Extended parameters for memory hotplug:
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 710e469fb3a1..ae497e3ff77c 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -1153,7 +1153,7 @@ int __ref add_memory_resource(int nid, struct resource *res, mhp_t mhp_flags)
>          * In case we're allowed to merge the resource, flag it and trigger
>          * merging now that adding succeeded.
>          */
> -       if (mhp_flags & MEMHP_MERGE_RESOURCE)
> +       if (mhp_flags & MHP_MERGE_RESOURCE)
>                 merge_system_ram_resource(res);
>
>         /* online pages if requested */

 Reviewed-by: Pankaj Gupta <pankaj.gupta@cloud.ionos.com>


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 06:19:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 06:19:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80811.148020 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7BVA-0002rr-Ca; Wed, 03 Feb 2021 06:18:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80811.148020; Wed, 03 Feb 2021 06:18:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7BVA-0002rk-9e; Wed, 03 Feb 2021 06:18:56 +0000
Received: by outflank-mailman (input) for mailman id 80811;
 Wed, 03 Feb 2021 06:18:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dUjn=HF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1l7BV9-0002pp-4C
 for xen-devel@lists.xen.org; Wed, 03 Feb 2021 06:18:55 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id daf6892a-fbc7-458b-a8fd-3d62486c466a;
 Wed, 03 Feb 2021 06:18:53 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8805CAC55;
 Wed,  3 Feb 2021 06:18:52 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: daf6892a-fbc7-458b-a8fd-3d62486c466a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612333132; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=agnCdG+yE8W9o2rLyktiMQiKSvmoPTMMb/dtschoNQU=;
	b=mnoXJs8GYbKWEG8XhWIZvzAnHaTUEDDEZM5zZUlhoPxgkCEbFv5Xa/qtrezaBeOKSwR33s
	6UiBx8WL9SmZlidXQwkHhvP3SsbUKMeeMOiZHkfy4Faq03gA0zLzHSGKaGsn7NS/jj2PHB
	dQKgwG/vTw1/GtQL6YelQ4tfVEIwVkw=
To: Manuel Bouyer <bouyer@antioche.eu.org>, xen-devel@lists.xen.org
References: <20210202183735.GA25046@mail.soc.lip6.fr>
From: =?UTF-8?Q?J=C3=BCrgen_Gro=C3=9F?= <jgross@suse.com>
Subject: Re: xenstored file descriptor leak
Message-ID: <b6ed10d4-7976-6a61-434d-35e467763deb@suse.com>
Date: Wed, 3 Feb 2021 07:18:51 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
MIME-Version: 1.0
In-Reply-To: <20210202183735.GA25046@mail.soc.lip6.fr>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="HMyAnnFbcawsWSY5v1w65020AJa86Xat5"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--HMyAnnFbcawsWSY5v1w65020AJa86Xat5
Content-Type: multipart/mixed; boundary="UBpALOHG4koNejao7WISFnVg2yKlct4lY";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Manuel Bouyer <bouyer@antioche.eu.org>, xen-devel@lists.xen.org
Message-ID: <b6ed10d4-7976-6a61-434d-35e467763deb@suse.com>
Subject: Re: xenstored file descriptor leak
References: <20210202183735.GA25046@mail.soc.lip6.fr>
In-Reply-To: <20210202183735.GA25046@mail.soc.lip6.fr>

--UBpALOHG4koNejao7WISFnVg2yKlct4lY
Content-Type: multipart/mixed;
 boundary="------------F325049DE409B717F4744AFE"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------F325049DE409B717F4744AFE
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 02.02.21 19:37, Manuel Bouyer wrote:
> Hello,
> on NetBSD I'm tracking down an issue where xenstored never closes its
> file descriptor to /var/run/xenstored/socket and instead loops at 100%
> CPU on these descriptors.
> 
> xenstored loops because poll(2) returns a POLLIN event for these
> descriptors even though they are marked is_ignored = true.
> 
> I'm seeing this with Xen 4.15 and 4.13, and it has also been reported
> with 4.11 with the latest security patches. It seems to have started
> with the patches for XSA-115 (a user reported this for 4.11).
> 
> I've tracked it down to a difference in the poll(2) implementation: it
> seems that Linux returns something that is not (POLLIN|POLLOUT) when a
> socket is closed, while NetBSD returns POLLIN (this matches NetBSD's
> man page).

Yeah, Linux seems to return POLLHUP additionally.

> 
> First, I think there may be a security issue here: even on Linux it
> should be possible for a client to drive a socket into the is_ignored
> state while still sending data, causing xenstored to go into a 100%
> CPU loop.

No security issue, as sockets are restricted to dom0 and user root.

In case you suspect a security issue, please don't send such mails to
xen-devel in the future, but to security@lists.xenproject.org.

> I think this is needed anyway:
> 
> --- xenstored_core.c.orig	2021-02-02 18:06:33.389316841 +0100
> +++ xenstored_core.c	2021-02-02 19:27:43.761877371 +0100
> @@ -397,9 +397,12 @@
>   			     !list_empty(&conn->out_list)))
>   				*ptimeout = 0;
>   		} else {
> -			short events = POLLIN|POLLPRI;
> -			if (!list_empty(&conn->out_list))
> -				events |= POLLOUT;
> +			short events = 0;
> +			if (!conn->is_ignored) {
> +				events |= POLLIN|POLLPRI;
> +				if (!list_empty(&conn->out_list))
> +					events |= POLLOUT;
> +			}
>   			conn->pollfd_idx = set_fd(conn->fd, events);
>   		}
>   	}

Yes, I think this is a good idea.

> 
> Now I wonder whether, on NetBSD at least, a read error or short read
> shouldn't cause the socket to be closed, as with:
> 
> @@ -1561,6 +1565,8 @@
>  
>   bad_client:
>   	ignore_connection(conn);
> +	/* we don't want to keep this connection alive */
> +	talloc_free(conn);
>   }

This is wrong for non-socket connections, as we want the domain in
question to remain known to xenstored.

For socket connections this should be fine, though.


Juergen

--------------F325049DE409B717F4744AFE--

--UBpALOHG4koNejao7WISFnVg2yKlct4lY--


--HMyAnnFbcawsWSY5v1w65020AJa86Xat5--


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 07:12:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 07:12:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80814.148033 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7CKR-0000D1-Dg; Wed, 03 Feb 2021 07:11:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80814.148033; Wed, 03 Feb 2021 07:11:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7CKR-0000Cu-9k; Wed, 03 Feb 2021 07:11:55 +0000
Received: by outflank-mailman (input) for mailman id 80814;
 Wed, 03 Feb 2021 07:11:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7CKP-0000Cl-Sm; Wed, 03 Feb 2021 07:11:53 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7CKP-0001vG-Mg; Wed, 03 Feb 2021 07:11:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7CKP-0006yI-AF; Wed, 03 Feb 2021 07:11:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l7CKP-0004xA-9n; Wed, 03 Feb 2021 07:11:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=IAq08xCMMCDdruQZnbLuyuuylENSLthrYw3XUxNGL7Q=; b=JktLP/2CU4tWDBN9sJw0TmJBRS
	2/R/R8WuotK9RzQjlTHoJ50c2Os9Cm6gOB1mf/v4fNvaIaMr63WbUFUufvo3bQn/+LQU+QwwiPotV
	oPvyytbIw85sAq6dDmOGdRcC/zgQjC8I3Rmrd1KLybp43wsYPPZuCbVLzlnVBwV+AKNQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158959-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 158959: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=3f90ac3ec03512e2374cd2968c047a7e856a8965
X-Osstest-Versions-That:
    ovmf=3b468095cd3dfcd1aa4ae63bc623f534bc2390d2
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 03 Feb 2021 07:11:53 +0000

flight 158959 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158959/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 3f90ac3ec03512e2374cd2968c047a7e856a8965
baseline version:
 ovmf                 3b468095cd3dfcd1aa4ae63bc623f534bc2390d2

Last test of basis   158932  2021-02-02 03:10:23 Z    1 days
Testing same since   158959  2021-02-02 16:40:59 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aiden Park <aiden.park@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   3b468095cd..3f90ac3ec0  3f90ac3ec03512e2374cd2968c047a7e856a8965 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 07:25:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 07:25:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80820.148054 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7CXz-0001LF-P1; Wed, 03 Feb 2021 07:25:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80820.148054; Wed, 03 Feb 2021 07:25:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7CXz-0001L8-KR; Wed, 03 Feb 2021 07:25:55 +0000
Received: by outflank-mailman (input) for mailman id 80820;
 Wed, 03 Feb 2021 07:25:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7CXy-0001L0-Ct; Wed, 03 Feb 2021 07:25:54 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7CXy-0002AG-3O; Wed, 03 Feb 2021 07:25:54 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7CXx-0007N5-Rk; Wed, 03 Feb 2021 07:25:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l7CXx-0005ai-RI; Wed, 03 Feb 2021 07:25:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=cyfvavoGBoGrc8xiwYr2Lsbb5eK2vTHMxFrJi1/oIcw=; b=gpcZcFUuHiYhDfTX1QvgFV2lMF
	2vkdRkGeP+O3DkixVHdg5f07FyDky0F6tXP92sypXpebfcRuTD7xLsQIkKQolGu7l679ujt2mPvoh
	D+fV7diTdtqnyyWLZIK3/LMhK9CG+7x8Cik18aP7xzt8igt/SGK4E9DRIB1krlr+/tDI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158973-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 158973: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=d115019b6a52bfd18cd5ee87cfe8d0811a06d725
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 03 Feb 2021 07:25:53 +0000

flight 158973 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158973/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              d115019b6a52bfd18cd5ee87cfe8d0811a06d725
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  208 days
Failing since        151818  2020-07-11 04:18:52 Z  207 days  202 attempts
Testing same since   158973  2021-02-03 04:18:52 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 39270 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 07:31:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 07:31:29 +0000
Message-ID: <501664dbdb736eaa4d9c05255dedfd7ad3e694fa.camel@suse.com>
Subject: Re: Null scheduler and vwfi native problem
From: Dario Faggioli <dfaggioli@suse.com>
To: Julien Grall <julien@xen.org>, Anders =?ISO-8859-1?Q?T=F6rnqvist?=
	 <anders.tornqvist@codiax.se>, xen-devel@lists.xenproject.org, Stefano
	Stabellini <sstabellini@kernel.org>
Cc: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>,  Juergen Gross <jgross@suse.com>
Date: Wed, 03 Feb 2021 08:31:19 +0100
In-Reply-To: <7f2ec84a-9814-ffb1-0940-e936a80afbbb@xen.org>
References: <fe3dd9f0-b035-01fe-3e01-ddf065f182ab@codiax.se>
	 <207305e4e2998614767fdcc5ad83ced6de982820.camel@suse.com>
	 <e85548f4-e03b-4717-3495-9ed472ed03c9@xen.org>
	 <e18ba69efd0d12fc489144024305fd3c6102c330.camel@suse.com>
	 <e37fe8a9-c633-3572-e273-2fd03b35b791@codiax.se>
	 <744ddde6-a228-82fc-76b9-401926d7963b@xen.org>
	 <d92c4191fb81e6d1de636f281c8624d68f8d14fc.camel@suse.com>
	 <c9a4e132-5bca-aa76-ab8b-bfeee1cd5a9e@codiax.se>
	 <f52baf12308d71b96d0d9be1c7c382a3c5efafbc.camel@suse.com>
	 <18ef4619-19ae-90d2-459c-9b5282b49176@codiax.se>
	 <a9d80e262760f6692f7086c9b6a0622caf19e795.camel@suse.com>
	 <4760cbac-b006-78bc-b064-3265384f6707@xen.org>
	 <311bb201bcacfd356f0c0b67856754eceae39e37.camel@suse.com>
	 <7f2ec84a-9814-ffb1-0940-e936a80afbbb@xen.org>
Content-Type: multipart/signed; micalg="pgp-sha256";
	protocol="application/pgp-signature"; boundary="=-hIOvxr7sBUz5B6FO3JQj"
User-Agent: Evolution 3.38.3 (by Flathub.org) 
MIME-Version: 1.0


--=-hIOvxr7sBUz5B6FO3JQj
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi again,

On Tue, 2021-02-02 at 15:23 +0000, Julien Grall wrote:
> (Adding Andrew, Jan, Juergen for visibility)
>
Thanks! :-)

> On 02/02/2021 15:03, Dario Faggioli wrote:
> > On Tue, 2021-02-02 at 07:59 +0000, Julien Grall wrote:
> > > The placement in enter_hypervisor_from_guest() doesn't matter too
> > > much,
> > > although I would consider to call it as a late as possible.
> > >
> > Mmmm... Can I ask why? In fact, I would have said "as soon as
> > possible".
>
> Because those functions only access data for the current vCPU/domain.
> This is already protected by the fact that the domain is running.
>=20
Mmm.. ok, yes, I think it makes sense.

> By leaving the "quiesce" mode later, you give an opportunity to the
> RCU to release memory earlier.
>
Yeah. What I wanted to be sure is that we put the CPU "back in the
race" :-) before any current or future use of RCUs.

> In reality, it is probably still too early as a pCPU can be
> considered quiesced until a call to rcu_lock*() (such as
> rcu_lock_domain()).
>
Well, yes, in theory we could track down the first RCU read-side
critical section on this path, and put the call right before that (if
I understood what you mean).

To me, however, this indeed looks too complex and difficult to
maintain, not only for 4.15 but in general. E.g., suppose we find such
a use of RCU in function foo(), called by bar(), called by
enter_hypervisor_from_guest().

If someone at some point wants to use RCU in bar(), how does she know
that she should also move the call to rcu_quiet_enter() from foo() to
there?

So, yes, I'll move it a little further down, but still within
enter_hypervisor_from_guest().

In the meanwhile, I had a quick chat with Juergen about x86. In fact,
I had a look and could not find a place for the
rcu_quiet_{exit,enter}() calls as convenient as what you have here on
ARM, i.e., two nice C functions that we traverse for all kinds of
guests, for HVM and SVM, etc.

Actually, I was quite skeptical about it but, you know, one can hope!
Juergen confirmed that there is no such thing, so I'll look at the
various entry.S files for the proper spots.

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)

--=-hIOvxr7sBUz5B6FO3JQj

--=-hIOvxr7sBUz5B6FO3JQj--



From xen-devel-bounces@lists.xenproject.org Wed Feb 03 07:36:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 07:36:44 +0000
Subject: Re: Question about xen and Rasp 4B
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Roman Shaposhnik <roman@zededa.com>, Julien Grall <julien@xen.org>,
 Xen-devel <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>
References: <CAFnJQOouOUox_kBB66sfSuriUiUSvmhSjPxucJjgrB9DdLOwkg@mail.gmail.com>
 <alpine.DEB.2.21.2101221449480.14612@sstabellini-ThinkPad-T480s>
 <CAFnJQOqoqj6mWwR61ZsZj1JxRrdisFtH_87YXCeW619GM+L21Q@mail.gmail.com>
 <alpine.DEB.2.21.2101251646470.20638@sstabellini-ThinkPad-T480s>
 <CAFnJQOpuehAWde5Ta4ud9CGufwZ-K+=60epzSdKc_DnS75O2iA@mail.gmail.com>
 <alpine.DEB.2.21.2101261149210.2568@sstabellini-ThinkPad-T480s>
 <CAFnJQOpgRM-3_aZsnv36w+aQV=gMcBA18ZEw_-man7zmYb4O4Q@mail.gmail.com>
 <5a33e663-4a6d-6247-769a-8f14db4810f2@xen.org>
 <b9247831-335a-f791-1664-abed6b400a42@unikie.com>
 <CAMmSBy-54qtu_oVVT=KB8GeKP0SW0uK+4wQ_LooHE0y_MZKJQg@mail.gmail.com>
 <3ec2b0cb-3685-384e-94df-28eaf8b57c42@unikie.com>
 <alpine.DEB.2.21.2102021552380.29047@sstabellini-ThinkPad-T480s>
From: Jukka Kaartinen <jukka.kaartinen@unikie.com>
Message-ID: <3c98d8d0-ca4e-b177-1e2b-5f3eb454722d@unikie.com>
Date: Wed, 3 Feb 2021 09:36:34 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2102021552380.29047@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit



On 3.2.2021 2.18, Stefano Stabellini wrote:
> On Tue, 2 Feb 2021, Jukka Kaartinen wrote:
>> Hi Roman,
>>
>>>> Good catch.
>>>> GPU works now and I can start X! Thanks! I was also able to create domU
>>>> that runs Raspian OS.
>>>
>>> This is very interesting that you were able to achieve that - congrats!
>>>
>>> Now, sorry to be a bit dense -- but since this thread went into all
>>> sorts of interesting directions all at once -- I just have a very
>>> particular question: what is the exact combination of versions of
>>> Xen, Linux kernel and the set of patches on top that allowed you to
>>> do that? I'd obviously love to see it productized in Xen upstream,
>>> but for now I'd love to make it available to the Project EVE/Xen
>>> community, since there seem to be a few folks interested in the
>>> EVE/Xen combo being able to do that.
>>
>> I have tried Xen Release 4.14.0, 4.14.1 and master (from week 4, 2021).
>>
>> Kernel rpi-5.9.y and rpi-5.10.y branches from
>> https://github.com/raspberrypi/linux
>>
>> and
>>
>> U-boot (master).
>>
>> For the GPU to work, it was enough to disable swiotlb in the
>> kernel(s), as suggested in this thread.
> 
> How are you configuring and installing the kernel?
> 
> make bcm2711_defconfig
> make Image.gz
> make modules_install
> 
> ?
> 
> The device tree is the one from the rpi-5.9.y build? How are you loading
> the kernel and device tree with uboot? Do you have any interesting
> changes to config.txt?
> 
> I am asking because I cannot get to the point of reproducing what you
> are seeing: I can boot my rpi-5.9.y kernel on recent Xen but I cannot
> get any graphics output on my screen. (The serial works.) I am using the
> default Ubuntu Desktop rpi-install target as rootfs and uboot master.
> 

This is what I do:

make bcm2711_defconfig
cat "xen_additions" >> .config
make Image  modules dtbs

make INSTALL_MOD_PATH=rootfs modules_install
depmod -a

cp arch/arm64/boot/dts/broadcom/bcm2711-rpi-4-b.dtb boot/
cp arch/arm64/boot/dts/overlays/*.dtbo boot/overlays/

config.txt:

[pi4]
max_framebuffers=2
enable_uart=1
arm_freq=1500
force_turbo=1

[all]
arm_64bit=1
kernel=u-boot.bin

start_file=start4.elf
fixup_file=fixup4.dat

# Enable the audio output, I2C and SPI interfaces on the GPIO header
dtparam=audio=on
dtparam=i2c_arm=on
dtparam=spi=on

# Enable the FKMS ("Fake" KMS) graphics overlay, enable the camera firmware
# and allocate 128MB to the GPU memory
dtoverlay=vc4-fkms-v3d,cma-64
gpu_mem=128

# Comment out the following line if the edges of the desktop appear outside
# the edges of your display
disable_overscan=1


boot.source:
setenv serverip 10.42.0.1
setenv ipaddr 10.42.0.231
tftpb 0xC00000 boot2.scr
source 0xC00000

boot2.source:
tftpb 0xE00000 xen
tftpb 0x1000000 Image
setenv lin_size $filesize

fdt addr ${fdt_addr}
fdt resize 1024

fdt set /chosen xen,xen-bootargs "console=dtuart dtuart=serial0 sync_console dom0_mem=1024M dom0_max_vcpus=1 bootscrub=0 vwfi=native sched=credit2"

fdt mknod /chosen dom0

# These will break the default framebuffer@3e2fe000 that
# is in the same /chosen node.
#fdt set /chosen/dom0 \#address-cells <0x2>
#fdt set /chosen/dom0 \#size-cells <0x2>

fdt set /chosen/dom0 compatible "xen,linux-zimage" "xen,multiboot-module"
fdt set /chosen/dom0 reg <0x1000000 0x${lin_size}>

fdt set /chosen xen,dom0-bootargs "dwc_otg.lpm_enable=0 console=hvc0 earlycon=xen earlyprintk=xen root=/dev/sda4 elevator=deadline rootwait fixrtc quiet splash"

setenv fdt_high 0xffffffffffffffff

fdt print /chosen

#xen
booti 0xE00000 - ${fdt_addr}


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 07:36:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 07:36:46 +0000
From: "Tian, Kevin" <kevin.tian@intel.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: "Cooper, Andrew" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	=?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, "Nakajima, Jun"
	<jun.nakajima@intel.com>
Subject: RE: [PATCH] x86/HVM: support emulated UMIP
Thread-Topic: [PATCH] x86/HVM: support emulated UMIP
Thread-Index: AQHW9jRI88PZs9TFZkae/jqZeEYzrapGEf4g
Date: Wed, 3 Feb 2021 07:36:27 +0000
Message-ID: <MWHPR11MB18868671133984DBA0BBFAAD8CB49@MWHPR11MB1886.namprd11.prod.outlook.com>
References: <5a8b1c37-5f53-746f-ba87-778d4d980d99@suse.com>
In-Reply-To: <5a8b1c37-5f53-746f-ba87-778d4d980d99@suse.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0

PiBGcm9tOiBKYW4gQmV1bGljaCA8amJldWxpY2hAc3VzZS5jb20+DQo+IFNlbnQ6IEZyaWRheSwg
SmFudWFyeSAyOSwgMjAyMSA3OjQ1IFBNDQo+IA0KPiBUaGVyZSBhcmUgdGhyZWUgbm90ZXdvcnRo
eSBkcmF3YmFja3M6DQo+IDEpIFRoZSBpbnRlcmNlcHRzIHdlIG5lZWQgdG8gZW5hYmxlIGhlcmUg
YXJlIENQTC1pbmRlcGVuZGVudCwgaS5lLiB3ZQ0KPiAgICBub3cgaGF2ZSB0byBlbXVsYXRlIGNl
cnRhaW4gaW5zdHJ1Y3Rpb25zIGZvciByaW5nIDAuDQo+IDIpIE9uIFZNWCB0aGVyZSdzIG5vIGlu
dGVyY2VwdCBmb3IgU01TVywgc28gdGhlIGVtdWxhdGlvbiBpc24ndCByZWFsbHkNCj4gICAgY29t
cGxldGUgdGhlcmUuDQo+IDMpIFRoZSBDUjQgd3JpdGUgaW50ZXJjZXB0IG9uIFNWTSBpcyBsb3dl
ciBwcmlvcml0eSB0aGFuIGFsbCBleGNlcHRpb24NCj4gICAgY2hlY2tzLCBzbyB3ZSBuZWVkIHRv
IGludGVyY2VwdCAjR1AuDQo+IFRoZXJlZm9yZSB0aGlzIGVtdWxhdGlvbiBkb2Vzbid0IGdldCBv
ZmZlcmVkIHRvIGd1ZXN0cyBieSBkZWZhdWx0Lg0KPiANCj4gU2lnbmVkLW9mZi1ieTogSmFuIEJl
dWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPg0KDQpSZXZpZXdlZC1ieTogS2V2aW4gVGlhbiA8a2V2
aW4udGlhbkBpbnRlbC5jb20+DQoNCj4gLS0tDQo+IHYzOiBEb24ndCBvZmZlciBlbXVsYXRpb24g
YnkgZGVmYXVsdC4gUmUtYmFzZS4NCj4gdjI6IFNwbGl0IG9mZiB0aGUgeDg2IGluc24gZW11bGF0
b3IgcGFydC4gUmUtYmFzZS4gVXNlIGh2bV9mZWF0dXJlc2V0DQo+ICAgICBpbiBodm1fY3I0X2d1
ZXN0X3Jlc2VydmVkX2JpdHMoKS4NCj4gDQo+IC0tLSBhL3hlbi9hcmNoL3g4Ni9jcHVpZC5jDQo+
ICsrKyBiL3hlbi9hcmNoL3g4Ni9jcHVpZC5jDQo+IEBAIC00NTMsNiArNDUzLDEzIEBAIHN0YXRp
YyB2b2lkIF9faW5pdCBjYWxjdWxhdGVfaHZtX21heF9wb2wNCj4gICAgICBfX3NldF9iaXQoWDg2
X0ZFQVRVUkVfWDJBUElDLCBodm1fZmVhdHVyZXNldCk7DQo+IA0KPiAgICAgIC8qDQo+ICsgICAg
ICogWGVuIGNhbiBvZnRlbiBwcm92aWRlIFVNSVAgZW11bGF0aW9uIHRvIEhWTSBndWVzdHMgZXZl
biBpZiB0aGUgaG9zdA0KPiArICAgICAqIGRvZXNuJ3QgaGF2ZSBzdWNoIGZ1bmN0aW9uYWxpdHku
DQo+ICsgICAgICovDQo+ICsgICAgaWYgKCBodm1fZnVuY3Muc2V0X2Rlc2NyaXB0b3JfYWNjZXNz
X2V4aXRpbmcgKQ0KPiArICAgICAgICBfX3NldF9iaXQoWDg2X0ZFQVRVUkVfVU1JUCwgaHZtX2Zl
YXR1cmVzZXQpOw0KPiArDQo+ICsgICAgLyoNCj4gICAgICAgKiBPbiBBTUQsIFBWIGd1ZXN0cyBh
cmUgZW50aXJlbHkgdW5hYmxlIHRvIHVzZSBTWVNFTlRFUiBhcyBYZW4gcnVucyBpbg0KPiAgICAg
ICAqIGxvbmcgbW9kZSAoYW5kIGluaXRfYW1kKCkgaGFzIGNsZWFyZWQgaXQgb3V0IG9mIGhvc3Qg
Y2FwYWJpbGl0aWVzKSwgYnV0DQo+ICAgICAgICogSFZNIGd1ZXN0cyBhcmUgYWJsZSBpZiBydW5u
aW5nIGluIHByb3RlY3RlZCBtb2RlLg0KPiBAQCAtNTA0LDYgKzUxMSwxMCBAQCBzdGF0aWMgdm9p
ZCBfX2luaXQgY2FsY3VsYXRlX2h2bV9kZWZfcG9sDQo+ICAgICAgZm9yICggaSA9IDA7IGkgPCBB
UlJBWV9TSVpFKGh2bV9mZWF0dXJlc2V0KTsgKytpICkNCj4gICAgICAgICAgaHZtX2ZlYXR1cmVz
ZXRbaV0gJj0gaHZtX2ZlYXR1cmVtYXNrW2ldOw0KPiANCj4gKyAgICAvKiBEb24ndCBvZmZlciBV
TUlQIGVtdWxhdGlvbiBieSBkZWZhdWx0LiAqLw0KPiArICAgIGlmICggIWNwdV9oYXNfdW1pcCAp
DQo+ICsgICAgICAgIF9fY2xlYXJfYml0KFg4Nl9GRUFUVVJFX1VNSVAsIGh2bV9mZWF0dXJlc2V0
KTsNCj4gKw0KPiAgICAgIGd1ZXN0X2NvbW1vbl9mZWF0dXJlX2FkanVzdG1lbnRzKGh2bV9mZWF0
dXJlc2V0KTsNCj4gICAgICBndWVzdF9jb21tb25fZGVmYXVsdF9mZWF0dXJlX2FkanVzdG1lbnRz
KGh2bV9mZWF0dXJlc2V0KTsNCj4gDQo+IC0tLSBhL3hlbi9hcmNoL3g4Ni9odm0vaHZtLmMNCj4g
KysrIGIveGVuL2FyY2gveDg2L2h2bS9odm0uYw0KPiBAQCAtOTkxLDcgKzk5MSw4IEBAIHVuc2ln
bmVkIGxvbmcgaHZtX2NyNF9ndWVzdF92YWxpZF9iaXRzKGMNCj4gICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgWDg2X0NSNF9QQ0UgICAgICAgICAgICAgICAgICAgIHwNCj4gICAgICAg
ICAgICAgIChwLT5iYXNpYy5meHNyICAgID8gWDg2X0NSNF9PU0ZYU1IgICAgICAgICAgICA6IDAp
IHwNCj4gICAgICAgICAgICAgIChwLT5iYXNpYy5zc2UgICAgID8gWDg2X0NSNF9PU1hNTUVYQ1BU
ICAgICAgICA6IDApIHwNCj4gLSAgICAgICAgICAgIChwLT5mZWF0LnVtaXAgICAgID8gWDg2X0NS
NF9VTUlQICAgICAgICAgICAgICA6IDApIHwNCj4gKyAgICAgICAgICAgICgocCA9PSAmaG9zdF9j
cHVpZF9wb2xpY3kgPyAmaHZtX21heF9jcHVpZF9wb2xpY3kgOiBwKS0+ZmVhdC51bWlwDQo+ICsg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICA/IFg4Nl9DUjRfVU1JUCAgICAgICAgICAgICAg
: 0) |
>              (vmxe             ? X86_CR4_VMXE              : 0) |
>              (p->feat.fsgsbase ? X86_CR4_FSGSBASE          : 0) |
>              (p->basic.pcid    ? X86_CR4_PCIDE             : 0) |
> @@ -3731,6 +3732,13 @@ int hvm_descriptor_access_intercept(uint
>      struct vcpu *curr = current;
>      struct domain *currd = curr->domain;
> 
> +    if ( (is_write || curr->arch.hvm.guest_cr[4] & X86_CR4_UMIP) &&
> +         hvm_get_cpl(curr) )
> +    {
> +        hvm_inject_hw_exception(TRAP_gp_fault, 0);
> +        return X86EMUL_OKAY;
> +    }
> +
>      if ( currd->arch.monitor.descriptor_access_enabled )
>      {
>          ASSERT(curr->arch.vm_event);
> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -547,6 +547,28 @@ void svm_update_guest_cr(struct vcpu *v,
>              value &= ~(X86_CR4_SMEP | X86_CR4_SMAP);
>          }
> 
> +        if ( v->domain->arch.cpuid->feat.umip && !cpu_has_umip )
> +        {
> +            u32 general1_intercepts = vmcb_get_general1_intercepts(vmcb);
> +
> +            if ( v->arch.hvm.guest_cr[4] & X86_CR4_UMIP )
> +            {
> +                value &= ~X86_CR4_UMIP;
> +                ASSERT(vmcb_get_cr_intercepts(vmcb) & CR_INTERCEPT_CR0_READ);
> +                general1_intercepts |= GENERAL1_INTERCEPT_IDTR_READ |
> +                                       GENERAL1_INTERCEPT_GDTR_READ |
> +                                       GENERAL1_INTERCEPT_LDTR_READ |
> +                                       GENERAL1_INTERCEPT_TR_READ;
> +            }
> +            else if ( !v->domain->arch.monitor.descriptor_access_enabled )
> +                general1_intercepts &= ~(GENERAL1_INTERCEPT_IDTR_READ |
> +                                         GENERAL1_INTERCEPT_GDTR_READ |
> +                                         GENERAL1_INTERCEPT_LDTR_READ |
> +                                         GENERAL1_INTERCEPT_TR_READ);
> +
> +            vmcb_set_general1_intercepts(vmcb, general1_intercepts);
> +        }
> +
>          vmcb_set_cr4(vmcb, value);
>          break;
>      default:
> @@ -883,7 +905,14 @@ static void svm_set_descriptor_access_ex
>      if ( enable )
>          general1_intercepts |= mask;
>      else
> +    {
>          general1_intercepts &= ~mask;
> +        if ( (v->arch.hvm.guest_cr[4] & X86_CR4_UMIP) && !cpu_has_umip )
> +            general1_intercepts |= GENERAL1_INTERCEPT_IDTR_READ |
> +                                   GENERAL1_INTERCEPT_GDTR_READ |
> +                                   GENERAL1_INTERCEPT_LDTR_READ |
> +                                   GENERAL1_INTERCEPT_TR_READ;
> +    }
> 
>      vmcb_set_general1_intercepts(vmcb, general1_intercepts);
>  }
> @@ -1781,6 +1810,16 @@ static void svm_vmexit_do_cr_access(
>          __update_guest_eip(regs, vmcb->nextrip - vmcb->rip);
>  }
> 
> +static bool is_cr4_write(const struct x86_emulate_state *state,
> +                         const struct x86_emulate_ctxt *ctxt)
> +{
> +    unsigned int cr;
> +
> +    return ctxt->opcode == X86EMUL_OPC(0x0f, 0x22) &&
> +           x86_insn_modrm(state, NULL, &cr) == 3 &&
> +           cr == 4;
> +}
> +
>  static void svm_dr_access(struct vcpu *v, struct cpu_user_regs *regs)
>  {
>      struct vmcb_struct *vmcb = vcpu_nestedhvm(v).nv_n1vmcx;
> @@ -2728,6 +2767,14 @@ void svm_vmexit_handler(struct cpu_user_
>          svm_fpu_dirty_intercept();
>          break;
> 
> +    case VMEXIT_EXCEPTION_GP:
> +        HVMTRACE_1D(TRAP, TRAP_gp_fault);
> +        /* We only care about ring 0 faults with error code zero. */
> +        if ( vmcb->exitinfo1 || vmcb_get_cpl(vmcb) ||
> +             !hvm_emulate_one_insn(is_cr4_write, "CR4 write") )
> +            hvm_inject_hw_exception(TRAP_gp_fault, vmcb->exitinfo1);
> +        break;
> +
>      case VMEXIT_EXCEPTION_PF:
>      {
>          unsigned long va;
> @@ -2873,7 +2920,16 @@ void svm_vmexit_handler(struct cpu_user_
>              hvm_inject_hw_exception(TRAP_gp_fault, 0);
>          break;
> 
> -    case VMEXIT_CR0_READ ... VMEXIT_CR15_READ:
> +    case VMEXIT_CR0_READ:
> +        if ( (v->arch.hvm.guest_cr[4] & X86_CR4_UMIP) &&
> +             vmcb_get_cpl(vmcb) )
> +        {
> +            ASSERT(!cpu_has_umip);
> +            hvm_inject_hw_exception(TRAP_gp_fault, 0);
> +            break;
> +        }
> +        /* fall through */
> +    case VMEXIT_CR1_READ ... VMEXIT_CR15_READ:
>      case VMEXIT_CR0_WRITE ... VMEXIT_CR15_WRITE:
>          if ( cpu_has_svm_decode && (vmcb->exitinfo1 & (1ULL << 63)) )
>              svm_vmexit_do_cr_access(vmcb, regs);
> --- a/xen/arch/x86/hvm/svm/vmcb.c
> +++ b/xen/arch/x86/hvm/svm/vmcb.c
> @@ -141,6 +141,10 @@ static int construct_vmcb(struct vcpu *v
>          HVM_TRAP_MASK |
>          (v->arch.fully_eager_fpu ? 0 : (1U << TRAP_no_device));
> 
> +    /* For UMIP emulation intercept #GP to catch faulting CR4 writes. */
> +    if ( v->domain->arch.cpuid->feat.umip && !cpu_has_umip )
> +        vmcb->_exception_intercepts |= 1U << TRAP_gp_fault;
> +
>      if ( paging_mode_hap(v->domain) )
>      {
>          vmcb->_np_enable = 1; /* enable nested paging */
> --- a/xen/arch/x86/hvm/vmx/vmcs.c
> +++ b/xen/arch/x86/hvm/vmx/vmcs.c
> @@ -1074,7 +1074,8 @@ static int construct_vmcs(struct vcpu *v
> 
>      /*
>       * Disable features which we don't want active by default:
> -     *  - Descriptor table exiting only if wanted by introspection
> +     *  - Descriptor table exiting only if needed for CR4.UMIP write
> +     *    emulation or wanted by introspection
>       *  - x2APIC - default is xAPIC mode
>       *  - VPID settings chosen at VMEntry time
>       *  - VMCS Shadowing only when in nested VMX mode
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -1264,7 +1264,7 @@ static void vmx_set_descriptor_access_ex
>      if ( enable )
>          v->arch.hvm.vmx.secondary_exec_control |=
>              SECONDARY_EXEC_DESCRIPTOR_TABLE_EXITING;
> -    else
> +    else if ( !(v->arch.hvm.guest_cr[4] & X86_CR4_UMIP) || cpu_has_umip )
>          v->arch.hvm.vmx.secondary_exec_control &=
>              ~SECONDARY_EXEC_DESCRIPTOR_TABLE_EXITING;
> 
> @@ -1514,6 +1514,21 @@ static void vmx_update_guest_cr(struct v
>              v->arch.hvm.hw_cr[4] &= ~(X86_CR4_SMEP | X86_CR4_SMAP);
>          }
> 
> +        if ( v->domain->arch.cpuid->feat.umip && !cpu_has_umip )
> +        {
> +            if ( (v->arch.hvm.guest_cr[4] & X86_CR4_UMIP) )
> +            {
> +                ASSERT(cpu_has_vmx_dt_exiting);
> +                v->arch.hvm.hw_cr[4] &= ~X86_CR4_UMIP;
> +                v->arch.hvm.vmx.secondary_exec_control |=
> +                    SECONDARY_EXEC_DESCRIPTOR_TABLE_EXITING;
> +            }
> +            else if ( !v->domain->arch.monitor.descriptor_access_enabled )
> +                v->arch.hvm.vmx.secondary_exec_control &=
> +                    ~SECONDARY_EXEC_DESCRIPTOR_TABLE_EXITING;
> +            vmx_update_secondary_exec_control(v);
> +        }
> +
>          __vmwrite(GUEST_CR4, v->arch.hvm.hw_cr[4]);
> 
>          /*
> @@ -1537,6 +1552,7 @@ static void vmx_update_guest_cr(struct v
>                                                (X86_CR4_PSE | X86_CR4_SMEP |
>                                                 X86_CR4_SMAP)
>                                               : 0;
> +            v->arch.hvm.vmx.cr4_host_mask |= cpu_has_umip ? 0 : X86_CR4_UMIP;
>              if ( v->domain->arch.monitor.write_ctrlreg_enabled &
>                   monitor_ctrlreg_bitmask(VM_EVENT_X86_CR4) )
>                  v->arch.hvm.vmx.cr4_host_mask |=
> --- a/xen/include/asm-x86/cpufeature.h
> +++ b/xen/include/asm-x86/cpufeature.h
> @@ -110,6 +110,7 @@
> 
>  /* CPUID level 0x00000007:0.ecx */
>  #define cpu_has_avx512_vbmi     boot_cpu_has(X86_FEATURE_AVX512_VBMI)
> +#define cpu_has_umip            boot_cpu_has(X86_FEATURE_UMIP)
>  #define cpu_has_avx512_vbmi2    boot_cpu_has(X86_FEATURE_AVX512_VBMI2)
>  #define cpu_has_gfni            boot_cpu_has(X86_FEATURE_GFNI)
>  #define cpu_has_vaes            boot_cpu_has(X86_FEATURE_VAES)


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 07:57:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 07:57:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80833.148105 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7D2U-0004f2-Cx; Wed, 03 Feb 2021 07:57:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80833.148105; Wed, 03 Feb 2021 07:57:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7D2U-0004ev-9Y; Wed, 03 Feb 2021 07:57:26 +0000
Received: by outflank-mailman (input) for mailman id 80833;
 Wed, 03 Feb 2021 07:57:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3m8k=HF=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1l7D2T-0004ep-6L
 for xen-devel@lists.xen.org; Wed, 03 Feb 2021 07:57:25 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7feda2ca-62a1-42d6-a67d-0c2e9bff1dd0;
 Wed, 03 Feb 2021 07:57:23 +0000 (UTC)
Received: from rochebonne.antioche.eu.org (rochebonne [10.0.0.1])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTP id 1137vLQg013493;
 Wed, 3 Feb 2021 08:57:21 +0100 (MET)
Received: by rochebonne.antioche.eu.org (Postfix, from userid 1210)
 id 9FC62281D; Wed,  3 Feb 2021 08:57:21 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7feda2ca-62a1-42d6-a67d-0c2e9bff1dd0
Date: Wed, 3 Feb 2021 08:57:21 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: =?iso-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>
Cc: xen-devel@lists.xen.org
Subject: Re: xenstored file descriptor leak
Message-ID: <20210203075721.GB445@antioche.eu.org>
References: <20210202183735.GA25046@mail.soc.lip6.fr>
 <b6ed10d4-7976-6a61-434d-35e467763deb@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <b6ed10d4-7976-6a61-434d-35e467763deb@suse.com>
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Wed, 03 Feb 2021 08:57:21 +0100 (MET)

On Wed, Feb 03, 2021 at 07:18:51AM +0100, Jürgen Groß wrote:
> On 02.02.21 19:37, Manuel Bouyer wrote:
> > Hello,
> > on NetBSD I'm tracking down an issue where xenstored never closes its
> > file descriptor to /var/run/xenstored/socket and instead loops at 100%
> > CPU on these descriptors.
> > 
> > xenstored loops because poll(2) returns a POLLIN event for these
> > descriptors but they are marked is_ignored = true.
> > 
> > I'm seeing this with Xen 4.15 and 4.13, and it has also been reported with
> > 4.11 with the latest security patches.
> > It seems to have started with the patches for XSA-115 (a user reported this
> > for 4.11).
> > 
> > I've tracked it down to a difference in poll(2) implementation; it seems
> > that Linux will return something that is not (POLLIN|POLLOUT) when a
> > socket is closed, while NetBSD returns POLLIN (this matches NetBSD's
> > man page).
> 
> Yeah, Linux seems to return POLLHUP additionally.

Overall, it seems that the poll(2) behavior with non-regular files
is highly OS-dependent when it comes to border cases (like a remote
close of a socket).

> 
> > 
> > First, I think there may be a security issue here, as, even on Linux, it
> > should be possible for a client to cause a socket to go to the is_ignored
> > state while still sending data, causing xenstored to go into a 100% CPU loop.
> 
> No security issue, as sockets are restricted to dom0 and user root.
> 
> In case you are suspecting a security issue, please don't send such
> mails to xen-devel in future, but to security@lists.xenproject.org.

Yes, sorry. Will do next time.

> 
> > I think this is needed anyway:
> > 
> > --- xenstored_core.c.orig	2021-02-02 18:06:33.389316841 +0100
> > +++ xenstored_core.c	2021-02-02 19:27:43.761877371 +0100
> > @@ -397,9 +397,12 @@
> >   			     !list_empty(&conn->out_list)))
> >   				*ptimeout = 0;
> >   		} else {
> > -			short events = POLLIN|POLLPRI;
> > -			if (!list_empty(&conn->out_list))
> > -				events |= POLLOUT;
> > +			short events = 0;
> > +			if (!conn->is_ignored) {
> > +				events |= POLLIN|POLLPRI;
> > +			        if (!list_empty(&conn->out_list))
> > +				        events |= POLLOUT;
> > +			}
> >   			conn->pollfd_idx = set_fd(conn->fd, events);
> >   		}
> >   	}
> 
> Yes, I think this is a good idea.

Well, after some sleep I don't think it is. We should always keep at least
POLLIN, or we will never notice a socket close otherwise.

> 
> > 
> > Now I wonder if, on NetBSD at least, a read error or short read shouldn't
> > cause the socket to be closed, as with:
> > 
> > @@ -1561,6 +1565,8 @@
> >   bad_client:
> >   	ignore_connection(conn);
> > +	/* we don't want to keep this connection alive */
> > +	talloc_free(conn);
> >   }
> 
> This is wrong for non-socket connections, as we want to keep the domain
> in question known to xenstored.
> 
> For socket connections this should be okay, though.

What are "non-socket connections", BTW? I don't think I've seen one
in my test.

Is there a way to know if a connection is a socket or non-socket one?

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 08:05:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 08:05:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80836.148117 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7DAP-0006GG-9t; Wed, 03 Feb 2021 08:05:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80836.148117; Wed, 03 Feb 2021 08:05:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7DAP-0006G9-6t; Wed, 03 Feb 2021 08:05:37 +0000
Received: by outflank-mailman (input) for mailman id 80836;
 Wed, 03 Feb 2021 08:05:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dUjn=HF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1l7DAN-0006G2-3i
 for xen-devel@lists.xen.org; Wed, 03 Feb 2021 08:05:35 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 594fd395-f28e-42d3-9d82-85847e2e7629;
 Wed, 03 Feb 2021 08:05:29 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 348ECAF24;
 Wed,  3 Feb 2021 08:05:28 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 594fd395-f28e-42d3-9d82-85847e2e7629
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612339528; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=8Z6kPXkuN8m7pIKyFSb5tGREEclWYqoG0xjVfACj8I8=;
	b=dk5+hduupkfuSUa/luTB3/rC/wcLKmonXMzX24zP3Th67HbjJsNPacHyKKu4m+dk1VWiF3
	Y3dtyu1+nwOwoMciX+FSkLcnMyNjUgJlbxzFSM6YBEnBpi0cjkTYQ3PAQSRPs4V5z9kdve
	Dnm/as7sDNaAQn0FMfar2Dva6KPqHs4=
Subject: Re: xenstored file descriptor leak
To: Manuel Bouyer <bouyer@antioche.eu.org>
Cc: xen-devel@lists.xen.org
References: <20210202183735.GA25046@mail.soc.lip6.fr>
 <b6ed10d4-7976-6a61-434d-35e467763deb@suse.com>
 <20210203075721.GB445@antioche.eu.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <7768ff4b-837d-965b-61c7-b6794f677d9e@suse.com>
Date: Wed, 3 Feb 2021 09:05:27 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <20210203075721.GB445@antioche.eu.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="4Lnc8Y4g91tVWddJRcZrPfwplW8y75FBS"

On 03.02.21 08:57, Manuel Bouyer wrote:
> On Wed, Feb 03, 2021 at 07:18:51AM +0100, Jürgen Groß wrote:
>> On 02.02.21 19:37, Manuel Bouyer wrote:
>>> Hello,
>>> on NetBSD I'm tracking down an issue where xenstored never closes its
>>> file descriptor to /var/run/xenstored/socket and instead loops at 100%
>>> CPU on these descriptors.
>>>
>>> xenstored loops because poll(2) returns a POLLIN event for these
>>> descriptors but they are marked is_ignored = true.
>>>
>>> I'm seeing this with Xen 4.15 and 4.13, and it has also been reported with
>>> 4.11 with the latest security patches.
>>> It seems to have started with the patches for XSA-115 (a user reported this
>>> for 4.11).
>>>
>>> I've tracked it down to a difference in poll(2) implementation; it seems
>>> that Linux will return something that is not (POLLIN|POLLOUT) when a
>>> socket is closed, while NetBSD returns POLLIN (this matches NetBSD's
>>> man page).
>>
>> Yeah, Linux seems to return POLLHUP additionally.
> 
> Overall, it seems that the poll(2) behavior with non-regular files
> is highly OS-dependent when it comes to border cases (like a remote
> close of a socket).
> 
>>
>>>
>>> First, I think there may be a security issue here, as, even on Linux, it
>>> should be possible for a client to cause a socket to go to the is_ignored
>>> state while still sending data, causing xenstored to go into a 100% CPU loop.
>>
>> No security issue, as sockets are restricted to dom0 and user root.
>>
>> In case you are suspecting a security issue, please don't send such
>> mails to xen-devel in future, but to security@lists.xenproject.org.
> 
> Yes, sorry. Will do next time.
> 
>>
>>> I think this is needed anyway:
>>>
>>> --- xenstored_core.c.orig	2021-02-02 18:06:33.389316841 +0100
>>> +++ xenstored_core.c	2021-02-02 19:27:43.761877371 +0100
>>> @@ -397,9 +397,12 @@
>>>    			     !list_empty(&conn->out_list)))
>>>    				*ptimeout = 0;
>>>    		} else {
>>> -			short events = POLLIN|POLLPRI;
>>> -			if (!list_empty(&conn->out_list))
>>> -				events |= POLLOUT;
>>> +			short events = 0;
>>> +			if (!conn->is_ignored) {
>>> +				events |= POLLIN|POLLPRI;
>>> +			        if (!list_empty(&conn->out_list))
>>> +				        events |= POLLOUT;
>>> +			}
>>>    			conn->pollfd_idx = set_fd(conn->fd, events);
>>>    		}
>>>    	}
>>
>> Yes, I think this is a good idea.
> 
> Well, after some sleep I don't think it is. We should always keep at least
> POLLIN, or we will never notice a socket close otherwise.

Adding the fd of an ignored socket connection to the list is the real
problem here. Why should that be done?

> 
>>
>>>
>>> Now I wonder if, on NetBSD at least, a read error or short read shouldn't
>>> cause the socket to be closed, as with:
>>>
>>> @@ -1561,6 +1565,8 @@
>>>    bad_client:
>>>    	ignore_connection(conn);
>>> +	/* we don't want to keep this connection alive */
>>> +	talloc_free(conn);
>>>    }
>>
>> This is wrong for non-socket connections, as we want to keep the domain
>> in question known to xenstored.
>>
>> For socket connections this should be okay, though.
> 
> What are "non-socket connections", BTW? I don't think I've seen one
> in my test.

Every connection to another domain.

> Is there a way to know if a connection is a socket or non-socket one?

Active socket connections have conn->fd >= 0.


Juergen



From xen-devel-bounces@lists.xenproject.org Wed Feb 03 08:06:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 08:06:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80837.148129 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7DBK-0006MK-Kk; Wed, 03 Feb 2021 08:06:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80837.148129; Wed, 03 Feb 2021 08:06:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7DBK-0006MD-Gn; Wed, 03 Feb 2021 08:06:34 +0000
Received: by outflank-mailman (input) for mailman id 80837;
 Wed, 03 Feb 2021 08:06:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7DBK-0006M5-1u; Wed, 03 Feb 2021 08:06:34 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7DBJ-0003MQ-QN; Wed, 03 Feb 2021 08:06:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7DBJ-000085-H6; Wed, 03 Feb 2021 08:06:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l7DBJ-0001My-GX; Wed, 03 Feb 2021 08:06:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=gLaJIYIYYmlrMnONAT6PItox3DXdaZ6jNLD2dmzLnOk=; b=Bi5qFgf9fvc+0bCMWPoxPFhF42
	yHLiZO820ahMhi1TjXqGHo4FPUTqhHkJmx8x/DqoLlzmS8wslI0V9Hg/BmhKdSMpB08b/vO0IuktG
	v2JJVYWU+O5zTBZJeWK0G9b+aYjWo7wSgND56MQQqddNHNWGSwDds5SOv98xWd6dmez8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158957-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 158957: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:guest-start:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=5e7aa904405fa2f268c3af213516bae271de3265
X-Osstest-Versions-That:
    xen=9dc687f155a57216b83b17f9cde55dd43e06b0cd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 03 Feb 2021 08:06:33 +0000

flight 158957 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158957/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    18 guest-start/debian.repeat fail REGR. vs. 158922

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 158922
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 158922
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 158922
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 158922
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 158922
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 158922
 test-armhf-armhf-libvirt-raw 13 guest-start                  fail  like 158922
 test-armhf-armhf-xl-vhd      13 guest-start                  fail  like 158922
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 158922
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 158922
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 158922
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 158922
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  5e7aa904405fa2f268c3af213516bae271de3265
baseline version:
 xen                  9dc687f155a57216b83b17f9cde55dd43e06b0cd

Last test of basis   158922  2021-02-02 01:51:30 Z    1 days
Testing same since   158957  2021-02-02 14:38:10 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Manuel Bouyer <bouyer@netbsd.org>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   9dc687f155..5e7aa90440  5e7aa904405fa2f268c3af213516bae271de3265 -> master


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 08:16:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 08:16:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80851.148176 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7DKs-0007bp-4R; Wed, 03 Feb 2021 08:16:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80851.148176; Wed, 03 Feb 2021 08:16:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7DKs-0007bi-0n; Wed, 03 Feb 2021 08:16:26 +0000
Received: by outflank-mailman (input) for mailman id 80851;
 Wed, 03 Feb 2021 08:16:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3m8k=HF=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1l7DKq-0007bd-Pc
 for xen-devel@lists.xen.org; Wed, 03 Feb 2021 08:16:24 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1b95fc45-8c51-4bcc-89ea-9ced71b9ffb5;
 Wed, 03 Feb 2021 08:16:23 +0000 (UTC)
Received: from rochebonne.antioche.eu.org (rochebonne
 [IPv6:2001:41d0:fe9d:1100:221:70ff:fe0c:9885])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTP id 1138GM8f010307;
 Wed, 3 Feb 2021 09:16:22 +0100 (MET)
Received: by rochebonne.antioche.eu.org (Postfix, from userid 1210)
 id 0342D281D; Wed,  3 Feb 2021 09:16:21 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1b95fc45-8c51-4bcc-89ea-9ced71b9ffb5
Date: Wed, 3 Feb 2021 09:16:21 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: =?iso-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>
Cc: xen-devel@lists.xen.org
Subject: Re: xenstored file descriptor leak
Message-ID: <20210203081621.GD445@antioche.eu.org>
References: <20210202183735.GA25046@mail.soc.lip6.fr>
 <b6ed10d4-7976-6a61-434d-35e467763deb@suse.com>
 <20210203075721.GB445@antioche.eu.org>
 <7768ff4b-837d-965b-61c7-b6794f677d9e@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <7768ff4b-837d-965b-61c7-b6794f677d9e@suse.com>
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [IPv6:2001:41d0:fe9d:1100:a00:20ff:fe1c:276e]); Wed, 03 Feb 2021 09:16:22 +0100 (MET)

On Wed, Feb 03, 2021 at 09:05:27AM +0100, Jürgen Groß wrote:
> > > [...]
> > > Yes, I think this is a good idea.
> > 
> > Well, after some sleep I don't think it is. We should always keep at least
> > POLLIN or we will never notice a socket close otherwise.
> 
> Adding the fd of an ignored socket connection to the list is the real
> problem here. Why should that be done?

If we don't do it, we never notice when the socket is closed and the file
descriptor will stay forever. When I tried it, I had about 50 zombie
file descriptors open in xenstored, after starting only 2 domains.

> > > > 
> > > > Now I wonder if, on NetBSD at least, a read error or short read shouldn't
> > > > cause the socket to be closed, as with:
> > > > 
> > > > @@ -1561,6 +1565,8 @@
> > > >    bad_client:
> > > >    	ignore_connection(conn);
> > > > +	/* we don't want to keep this connection alive */
> > > > +	talloc_free(conn);
> > > >    }
> > > 
> > > This is wrong for non-socket connections, as we want to keep the domain
> > > in question to be known to xenstored.
> > > 
> > > For socket connections this should be okay, though.
> > 
> > What are "non-socket connections" BTW ? I don't think I've seen one
> > in my test.
> 
> Every connection to another domain.
> 
> > Is there a way to know if a connection is socket or non-socket ?
> 
> Active socket connections have conn->fd >= 0.

OK, I'll rework my patch. Thanks 

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 08:21:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 08:21:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80855.148194 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7DPs-0000BM-Qy; Wed, 03 Feb 2021 08:21:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80855.148194; Wed, 03 Feb 2021 08:21:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7DPs-0000BF-Na; Wed, 03 Feb 2021 08:21:36 +0000
Received: by outflank-mailman (input) for mailman id 80855;
 Wed, 03 Feb 2021 08:21:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dUjn=HF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1l7DPs-0000B9-7r
 for xen-devel@lists.xen.org; Wed, 03 Feb 2021 08:21:36 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 18d89d6e-50c4-4a0e-ac77-99c9aab2b3cc;
 Wed, 03 Feb 2021 08:21:34 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 83BA2AD8C;
 Wed,  3 Feb 2021 08:21:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 18d89d6e-50c4-4a0e-ac77-99c9aab2b3cc
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612340493; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=0z+hIKnGW33WHrmxYCb+AAFs0RHrOl9yIOSOUh1VHY0=;
	b=gzh7Y+tAae8KmeKWAUEEbatwTUCYvVxdm/GMZtj1z2FFsFIHvUYKLurasuUkvCXHY5Zxdi
	yZ8Wtszcw/jsmX7i5SKrKBkh6TlId6NRq6LSrWkwKPuUSBsiT+EFowzETED6sQjVtAwg5s
	/1Z2WAE67m976FRUytsJSQLFpMtTvuU=
Subject: Re: xenstored file descriptor leak
To: Manuel Bouyer <bouyer@antioche.eu.org>
Cc: xen-devel@lists.xen.org
References: <20210202183735.GA25046@mail.soc.lip6.fr>
 <b6ed10d4-7976-6a61-434d-35e467763deb@suse.com>
 <20210203075721.GB445@antioche.eu.org>
 <7768ff4b-837d-965b-61c7-b6794f677d9e@suse.com>
 <20210203081621.GD445@antioche.eu.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <89ddaac0-eb05-8ddb-465a-60d78e4009eb@suse.com>
Date: Wed, 3 Feb 2021 09:21:32 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <20210203081621.GD445@antioche.eu.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="5eVcxWH0F2KO2DDn2cbNKw9FlWyy3zJSs"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--5eVcxWH0F2KO2DDn2cbNKw9FlWyy3zJSs
Content-Type: multipart/mixed; boundary="3ch6lJFzaxTQ3aSObDCCbh0gK6EP44AVf";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Manuel Bouyer <bouyer@antioche.eu.org>
Cc: xen-devel@lists.xen.org
Message-ID: <89ddaac0-eb05-8ddb-465a-60d78e4009eb@suse.com>
Subject: Re: xenstored file descriptor leak
References: <20210202183735.GA25046@mail.soc.lip6.fr>
 <b6ed10d4-7976-6a61-434d-35e467763deb@suse.com>
 <20210203075721.GB445@antioche.eu.org>
 <7768ff4b-837d-965b-61c7-b6794f677d9e@suse.com>
 <20210203081621.GD445@antioche.eu.org>
In-Reply-To: <20210203081621.GD445@antioche.eu.org>

--3ch6lJFzaxTQ3aSObDCCbh0gK6EP44AVf
Content-Type: multipart/mixed;
 boundary="------------1BF4F136D569AD914BF6ED03"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------1BF4F136D569AD914BF6ED03
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 03.02.21 09:16, Manuel Bouyer wrote:
> On Wed, Feb 03, 2021 at 09:05:27AM +0100, Jürgen Groß wrote:
>>>> [...]
>>>> Yes, I think this is a good idea.
>>>
>>> Well, after some sleep I don't think it is. We should always keep at least
>>> POLLIN or we will never notice a socket close otherwise.
>>
>> Adding the fd of an ignored socket connection to the list is the real
>> problem here. Why should that be done?
> 
> If we don't do it, we never notice when the socket is closed and the file
> descriptor will stay forever. When I tried it, I had about 50 zombie
> file descriptors open in xenstored, after starting only 2 domains.

This shouldn't happen if we are closing the socket actively.

In the end we should just do a talloc_free(conn) in
ignore_connection() if it is a socket based one. This should revert
the critical modification of the XSA-115 fixes for sockets while
keeping the desired effect for domain connections.


Juergen

--------------1BF4F136D569AD914BF6ED03
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------1BF4F136D569AD914BF6ED03--

--3ch6lJFzaxTQ3aSObDCCbh0gK6EP44AVf--

--5eVcxWH0F2KO2DDn2cbNKw9FlWyy3zJSs
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmAaXQwFAwAAAAAACgkQsN6d1ii/Ey8Z
SggAiaDp7ruzMjIM9ZqdV4Okcpj35M2lIdFR9ikvan65BhK9blwpZkeprhtSh3bE3UaEoOtQmOey
NXu8hOg07w4PFZ94BmPC1G0O4Flps/2erKlL17SqOqoDk3YGNXWBGYLT8MgYZpHhcQHOZQGB4g3j
6nayAluWoz6+8LRq8GXehUNQBfEIuIDsQRqhZqcRCKp2G4dQeaG6Iwg5igy3gskzCDvIZaFBTFXo
tvCWXacfwO/A/rOTUOXN7xV5jdO+uSU4Nbyf8PPRnIqHiE0w+BEw3c/rgVeZr8tkyL56V0diWBqR
GmhVDi0AElUAzgOVJ66Ecq349X9fSRuGVUUe/fOvfg==
=YoCX
-----END PGP SIGNATURE-----

--5eVcxWH0F2KO2DDn2cbNKw9FlWyy3zJSs--


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 09:19:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 09:19:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80859.148206 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7EJc-0005Iy-Cg; Wed, 03 Feb 2021 09:19:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80859.148206; Wed, 03 Feb 2021 09:19:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7EJc-0005Ir-96; Wed, 03 Feb 2021 09:19:12 +0000
Received: by outflank-mailman (input) for mailman id 80859;
 Wed, 03 Feb 2021 09:19:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1l7EJb-0005Im-1y
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 09:19:11 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l7EJY-0004Xt-Qv; Wed, 03 Feb 2021 09:19:08 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l7EJY-00078V-Hg; Wed, 03 Feb 2021 09:19:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=Y5xbycL6If9nruYgguxOqo7byWRB1Be/Kv3cglmw/hA=; b=JTH58skLf9xQv7vVE7pWroXQv9
	ALsM/YjdxVYwctdA59WeQBc6E1R3630OtdHE9JPFrSl/2TUYN0+y4rUTvgnh56BJ74yRcz5CP5fDA
	Uiqn666LaFvBOJQzE6YQC3HHh2JjV1PKtyv+/zfc3yi2qz0rcwZUe5nFkGnHdBm6vvQQ=;
Subject: Re: Null scheduler and vwfi native problem
To: Dario Faggioli <dfaggioli@suse.com>,
 =?UTF-8?Q?Anders_T=c3=b6rnqvist?= <anders.tornqvist@codiax.se>,
 xen-devel@lists.xenproject.org, Stefano Stabellini <sstabellini@kernel.org>
Cc: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, Juergen Gross <jgross@suse.com>
References: <fe3dd9f0-b035-01fe-3e01-ddf065f182ab@codiax.se>
 <207305e4e2998614767fdcc5ad83ced6de982820.camel@suse.com>
 <e85548f4-e03b-4717-3495-9ed472ed03c9@xen.org>
 <e18ba69efd0d12fc489144024305fd3c6102c330.camel@suse.com>
 <e37fe8a9-c633-3572-e273-2fd03b35b791@codiax.se>
 <744ddde6-a228-82fc-76b9-401926d7963b@xen.org>
 <d92c4191fb81e6d1de636f281c8624d68f8d14fc.camel@suse.com>
 <c9a4e132-5bca-aa76-ab8b-bfeee1cd5a9e@codiax.se>
 <f52baf12308d71b96d0d9be1c7c382a3c5efafbc.camel@suse.com>
 <18ef4619-19ae-90d2-459c-9b5282b49176@codiax.se>
 <a9d80e262760f6692f7086c9b6a0622caf19e795.camel@suse.com>
 <4760cbac-b006-78bc-b064-3265384f6707@xen.org>
 <311bb201bcacfd356f0c0b67856754eceae39e37.camel@suse.com>
 <7f2ec84a-9814-ffb1-0940-e936a80afbbb@xen.org>
 <501664dbdb736eaa4d9c05255dedfd7ad3e694fa.camel@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <1ab7ad80-c027-ffdd-8188-e1ab1fd53335@xen.org>
Date: Wed, 3 Feb 2021 09:19:06 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <501664dbdb736eaa4d9c05255dedfd7ad3e694fa.camel@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 03/02/2021 07:31, Dario Faggioli wrote:
> On Tue, 2021-02-02 at 15:23 +0000, Julien Grall wrote:
>> In reality, it is probably still too early as a pCPU can be
>> considered
>> quiesced until a call to rcu_lock*() (such as rcu_lock_domain()).
>>
> Well, yes, in theory, we could track down which is the first RCU read
> side crit. section on this path, and put the call right before that (if
> I understood what you mean).

Oh, that's not what I meant. This will indeed be far more complex than I 
originally had in mind.

AFAIU, RCU uses critical sections to protect data. So "entering" a 
critical section could be taken to mean "the pCPU is not quiesced", and 
"exiting" to mean "the pCPU is quiesced".

The concern with my approach is that we would need to make sure Xen 
uses the RCU helpers correctly. I know Juergen worked on that recently, 
but I don't know whether that work is fully complete.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 09:48:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 09:48:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80864.148223 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7ElZ-00085O-Po; Wed, 03 Feb 2021 09:48:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80864.148223; Wed, 03 Feb 2021 09:48:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7ElZ-00085H-MZ; Wed, 03 Feb 2021 09:48:05 +0000
Received: by outflank-mailman (input) for mailman id 80864;
 Wed, 03 Feb 2021 09:48:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7ElY-00084t-SK; Wed, 03 Feb 2021 09:48:04 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7ElY-0004yi-KP; Wed, 03 Feb 2021 09:48:04 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7ElY-0003PU-C0; Wed, 03 Feb 2021 09:48:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l7ElY-0001Cn-8z; Wed, 03 Feb 2021 09:48:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=9qZBxdqYUJ0CN7wUsLZPITLu2p7jPi8amF9tOdkorUI=; b=VEg58hfxS6gWMD3nrzBmqaBvI3
	2QaMtHOpWt0xBCP/gypcHpC3Ttp6azf2Rs6Lm8C2k0x3KbHojL6s1TrUwyhUAG5XHE2k6tVTiKweR
	qJLKlTjErjoOMleF7I7RNaudQCAUVI2MoDoUMVv7CG1JQ2jXoQr+Gsn0pFOBKA0rDUGQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158979-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 158979: all pass - PUSHED
X-Osstest-Versions-This:
    xen=5e7aa904405fa2f268c3af213516bae271de3265
X-Osstest-Versions-That:
    xen=9dc687f155a57216b83b17f9cde55dd43e06b0cd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 03 Feb 2021 09:48:04 +0000

flight 158979 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158979/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  5e7aa904405fa2f268c3af213516bae271de3265
baseline version:
 xen                  9dc687f155a57216b83b17f9cde55dd43e06b0cd

Last test of basis   158849  2021-01-31 09:18:27 Z    3 days
Testing same since   158979  2021-02-03 09:18:36 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Manuel Bouyer <bouyer@netbsd.org>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   9dc687f155..5e7aa90440  5e7aa904405fa2f268c3af213516bae271de3265 -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 09:58:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 09:58:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80868.148239 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7Evp-0000ip-Ry; Wed, 03 Feb 2021 09:58:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80868.148239; Wed, 03 Feb 2021 09:58:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7Evp-0000ii-Oi; Wed, 03 Feb 2021 09:58:41 +0000
Received: by outflank-mailman (input) for mailman id 80868;
 Wed, 03 Feb 2021 09:58:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7Evp-0000ia-4z; Wed, 03 Feb 2021 09:58:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7Evo-00058D-SX; Wed, 03 Feb 2021 09:58:40 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7Evo-00041I-M8; Wed, 03 Feb 2021 09:58:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l7Evo-0000vp-Le; Wed, 03 Feb 2021 09:58:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=9Z4O3AXTz26cn5146jgus64pArKN8Kc8lbeaEapBpY4=; b=p593nrn3TD/ETIfId4cClqnb4f
	iJr2r2v61VHfPHJXT21ggJ9ov4UIcspcRcUHO+v7M8KSOhNWc68DYw2+FewG0dbnkRScAJovfExgk
	9Plh/4FzdqTcI8NLVDQKcy1Kjy/xwfo3ySk6M6QzcuBw47xzcYJdUuUBOk3c3eKafbv8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158962-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 158962: regressions - FAIL
X-Osstest-Failures:
    linux-5.4:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-multivcpu:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-pvhv2-amd:guest-start:fail:regression
    linux-5.4:test-amd64-coresched-amd64-xl:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-qemuu-freebsd12-amd64:guest-start:fail:regression
    linux-5.4:test-amd64-coresched-i386-xl:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-libvirt:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl:guest-start:fail:regression
    linux-5.4:test-amd64-i386-qemut-rhel6hvm-amd:redhat-install:fail:regression
    linux-5.4:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
    linux-5.4:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-pvshim:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-pvhv2-intel:guest-start:fail:regression
    linux-5.4:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    linux-5.4:test-amd64-i386-libvirt:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-credit1:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-pair:guest-start/debian:fail:regression
    linux-5.4:test-amd64-amd64-xl:guest-start:fail:regression
    linux-5.4:test-amd64-i386-xl:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-shadow:guest-start:fail:regression
    linux-5.4:test-amd64-i386-xl-xsm:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-credit2:guest-start:fail:regression
    linux-5.4:test-amd64-i386-pair:guest-start/debian:fail:regression
    linux-5.4:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-seattle:guest-start:fail:regression
    linux-5.4:test-amd64-i386-libvirt-pair:guest-start/debian:fail:regression
    linux-5.4:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    linux-5.4:test-amd64-i386-qemut-rhel6hvm-intel:redhat-install:fail:regression
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:windows-install:fail:regression
    linux-5.4:test-amd64-amd64-amd64-pvgrub:debian-di-install:fail:regression
    linux-5.4:test-amd64-amd64-pygrub:debian-di-install:fail:regression
    linux-5.4:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:windows-install:fail:regression
    linux-5.4:test-amd64-amd64-i386-pvgrub:debian-di-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    linux-5.4:test-arm64-arm64-xl-xsm:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-credit1:guest-start:fail:regression
    linux-5.4:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    linux-5.4:test-arm64-arm64-xl-credit2:guest-start:fail:regression
    linux-5.4:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-amd64:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qcow2:debian-di-install:fail:regression
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemut-debianhvm-amd64:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-libvirt-vhd:debian-di-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:windows-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
    linux-5.4:test-arm64-arm64-xl-thunderx:guest-start:fail:regression
    linux-5.4:test-amd64-i386-xl-raw:debian-di-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:windows-install:fail:regression
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-cubietruck:guest-start:fail:regression
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
    linux-5.4:test-armhf-armhf-libvirt:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-qemuu-freebsd11-amd64:guest-start:fail:regression
    linux-5.4:test-amd64-i386-xl-shadow:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    linux-5.4:test-armhf-armhf-xl:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-arndale:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-rtds:guest-start:fail:allowable
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
X-Osstest-Versions-This:
    linux=0fbca6ce4174724f28be5268c5d210f51ed96e31
X-Osstest-Versions-That:
    linux=a829146c3fdcf6d0b76d9c54556a223820f1f73b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 03 Feb 2021 09:58:40 +0000

flight 158962 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158962/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 158387
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 158387
 test-amd64-amd64-xl-multivcpu 14 guest-start             fail REGR. vs. 158387
 test-amd64-amd64-xl-pvhv2-amd 14 guest-start             fail REGR. vs. 158387
 test-amd64-coresched-amd64-xl 14 guest-start             fail REGR. vs. 158387
 test-amd64-amd64-qemuu-freebsd12-amd64 13 guest-start    fail REGR. vs. 158387
 test-amd64-coresched-i386-xl 14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl          14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-qemut-rhel6hvm-amd 12 redhat-install     fail REGR. vs. 158387
 test-amd64-i386-qemuu-rhel6hvm-amd 12 redhat-install     fail REGR. vs. 158387
 test-amd64-i386-freebsd10-amd64 13 guest-start           fail REGR. vs. 158387
 test-amd64-amd64-xl-pvshim   14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-xl-pvhv2-intel 14 guest-start           fail REGR. vs. 158387
 test-amd64-i386-freebsd10-i386 13 guest-start            fail REGR. vs. 158387
 test-amd64-amd64-xl-xsm      14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-libvirt      14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-xl-credit1  14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-pair        25 guest-start/debian       fail REGR. vs. 158387
 test-amd64-amd64-xl          14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-xl           14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-xl-shadow   14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-xl-xsm       14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-xl-credit2  14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-pair         25 guest-start/debian       fail REGR. vs. 158387
 test-amd64-i386-libvirt-xsm  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-seattle  14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-libvirt-pair 25 guest-start/debian       fail REGR. vs. 158387
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 158387
 test-amd64-i386-qemut-rhel6hvm-intel 12 redhat-install   fail REGR. vs. 158387
 test-amd64-amd64-qemuu-nested-amd 12 debian-hvm-install  fail REGR. vs. 158387
 test-amd64-i386-qemuu-rhel6hvm-intel 12 redhat-install   fail REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install  fail REGR. vs. 158387
 test-amd64-i386-xl-qemut-win7-amd64 12 windows-install   fail REGR. vs. 158387
 test-amd64-amd64-amd64-pvgrub 12 debian-di-install       fail REGR. vs. 158387
 test-amd64-amd64-pygrub      12 debian-di-install        fail REGR. vs. 158387
 test-amd64-amd64-qemuu-nested-intel 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemut-win7-amd64 12 windows-install  fail REGR. vs. 158387
 test-amd64-amd64-i386-pvgrub 12 debian-di-install        fail REGR. vs. 158387
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-i386-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 158387
 test-arm64-arm64-xl-xsm      14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-credit1  14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 158387
 test-arm64-arm64-xl-credit2  14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemut-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qcow2    12 debian-di-install        fail REGR. vs. 158387
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-i386-xl-qemut-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-libvirt-vhd 12 debian-di-install        fail REGR. vs. 158387
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-ws16-amd64 12 windows-install  fail REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemut-ws16-amd64 12 windows-install  fail REGR. vs. 158387
 test-amd64-i386-xl-qemuu-win7-amd64 12 windows-install   fail REGR. vs. 158387
 test-arm64-arm64-xl-thunderx 14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-xl-raw       12 debian-di-install        fail REGR. vs. 158387
 test-amd64-i386-xl-qemut-ws16-amd64 12 windows-install   fail REGR. vs. 158387
 test-armhf-armhf-xl-multivcpu 14 guest-start             fail REGR. vs. 158387
 test-armhf-armhf-xl-credit2  14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-cubietruck 14 guest-start            fail REGR. vs. 158387
 test-amd64-i386-xl-qemuu-ws16-amd64 12 windows-install   fail REGR. vs. 158387
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-qemuu-freebsd11-amd64 13 guest-start    fail REGR. vs. 158387
 test-amd64-i386-xl-shadow    14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 158387
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 158387
 test-armhf-armhf-xl          14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 158387
 test-armhf-armhf-xl-credit1  14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-arndale  14 guest-start              fail REGR. vs. 158387

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-rtds     14 guest-start              fail REGR. vs. 158387

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass

version targeted for testing:
 linux                0fbca6ce4174724f28be5268c5d210f51ed96e31
baseline version:
 linux                a829146c3fdcf6d0b76d9c54556a223820f1f73b

Last test of basis   158387  2021-01-12 19:40:06 Z   21 days
Failing since        158473  2021-01-17 13:42:20 Z   16 days   28 attempts
Testing same since   158818  2021-01-30 13:48:12 Z    3 days    7 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adrian Hunter <adrian.hunter@intel.com>
  Akilesh Kailash <akailash@google.com>
  Al Viro <viro@zeniv.linux.org.uk>
  Alan Maguire <alan.maguire@oracle.com>
  Alan Stern <stern@rowland.harvard.edu>
  Aleksander Jan Bajkowski <olek2@wp.pl>
  Alex Deucher <alexander.deucher@amd.com>
  Alex Leibovich <alexl@marvell.com>
  Alexander Lobakin <alobakin@pm.me>
  Alexander Shishkin <alexander.shishkin@linux.intel.com>
  Alexandru Ardelean <alexandru.ardelean@analog.com>
  Alexey Minnekhanov <alexeymin@postmarketos.org>
  Anders Roxell <anders.roxell@linaro.org>
  Andreas Kemnade <andreas@kemnade.info>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andrey Zhizhikin <andrey.zhizhikin@leica-geosystems.com>
  Andrii Nakryiko <andrii@kernel.org>
  Andy Lutomirski <luto@kernel.org>
  Anthony Iliopoulos <ailiop@suse.com>
  Ard Biesheuvel <ardb@kernel.org>
  Ariel Marcovitch <ariel.marcovitch@gmail.com>
  Ariel Marcovitch <arielmarcovitch@gmail.com>
  Arnaldo Carvalho de Melo <acme@redhat.com>
  Arnd Bergmann <arnd@arndb.de>
  Aya Levin <ayal@nvidia.com>
  Ayush Sawal <ayush.sawal@chelsio.com>
  Baptiste Lepers <baptiste.lepers@gmail.com>
  Bartosz Golaszewski <bgolaszewski@baylibre.com>
  Baruch Siach <baruch@tkos.co.il>
  Ben Skeggs <bskeggs@redhat.com>
  Billy Tsai <billy_tsai@aspeedtech.com>
  Borislav Petkov <bp@suse.de>
  Can Guo <cang@codeaurora.org>
  Catalin Marinas <catalin.marinas@arm.com>
  Cezary Rojewski <cezary.rojewski@intel.com>
  Chris Wilson <chris@chris-wilson.co.uk>
  Christoph Hellwig <hch@lst.de>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Chuck Lever <chuck.lever@oracle.com>
  Chunyan Zhang <chunyan.zhang@unisoc.com>
  Colin Ian King <colin.king@canonical.com>
  Cong Wang <cong.wang@bytedance.com>
  Craig Tatlor <ctatlor97@gmail.com>
  Damien Le Moal <damien.lemoal@wdc.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Daniel Vetter <daniel.vetter@ffwll.ch>
  Daniel Vetter <daniel.vetter@intel.com>
  Dave Wysochanski <dwysocha@redhat.com>
  David Howells <dhowells@redhat.com>
  David Rientjes <rientjes@google.com>
  David S. Miller <davem@davemloft.net>
  David Sterba <dsterba@suse.com>
  David Woodhouse <dwmw@amazon.co.uk>
  David Wu <david.wu@rock-chips.com>
  Dennis Li <Dennis.Li@amd.com>
  Dexuan Cui <decui@microsoft.com>
  Dinghao Liu <dinghao.liu@zju.edu.cn>
  Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
  Enke Chen <enchen@paloaltonetworks.com>
  Eric Biggers <ebiggers@google.com>
  Eric Dumazet <edumazet@google.com>
  Eugene Korenevsky <ekorenevsky@astralinux.ru>
  Ewan D. Milne <emilne@redhat.com>
  Fabian Vogt <fvogt@suse.com>
  Felipe Balbi <balbi@kernel.org>
  Felix Fietkau <nbd@nbd.name>
  Fenghua Yu <fenghua.yu@intel.com>
  Filipe Laíns <lains@archlinux.org>
  Filipe Manana <fdmanana@suse.com>
  Finn Thain <fthain@telegraphics.com.au>
  Florian Fainelli <f.fainelli@gmail.com>
  Florian Westphal <fw@strlen.de>
  Gaurav Kohli <gkohli@codeaurora.org>
  Geert Uytterhoeven <geert+renesas@glider.be>
  Gopal Tiwari <gtiwari@redhat.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Guido Günther <agx@sigxcpu.org>
  Guillaume Nault <gnault@redhat.com>
  Gustavo Pimentel <gustavo.pimentel@synopsys.com>
  Hamish Martin <hamish.martin@alliedtelesis.co.nz>
  Hangbin Liu <liuhangbin@gmail.com>
  Hannes Reinecke <hare@suse.de>
  Hans de Goede <hdegoede@redhat.com>
  Hao Wang <pkuwangh@gmail.com>
  Heikki Krogerus <heikki.krogerus@linux.intel.com>
  Hoang Le <hoang.h.le@dektech.com.au>
  Huazhong Tan <tanhuazhong@huawei.com>
  Ido Schimmel <idosch@nvidia.com>
  Igor Russkikh <irusskikh@marvell.com>
  Ingo Molnar <mingo@kernel.org>
  Ion Agorria <ion@agorria.com>
  Israel Rukshin <israelr@nvidia.com>
  J. Bruce Fields <bfields@redhat.com>
  j.nixdorf@avm.de <j.nixdorf@avm.de>
  Jakub Kicinski <kuba@kernel.org>
  Jamie Iles <jamie@jamieiles.com>
  Jan Kara <jack@suse.cz>
  Jani Nikula <jani.nikula@intel.com>
  Jann Horn <jannh@google.com>
  Jason A. Donenfeld <Jason@zx2c4.com>
  Jason Gerecke <jason.gerecke@wacom.com>
  Jason Gerecke <killertofu@gmail.com>
  Jason Gunthorpe <jgg@nvidia.com>
  JC Kuo <jckuo@nvidia.com>
  Jean Delvare <jdelvare@suse.de>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Axboe <axboe@kernel.dk>
  Jerome Brunet <jbrunet@baylibre.com>
  Jesper Dangaard Brouer <brouer@redhat.com>
  Jethro Beekman <jethro@fortanix.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jiri Kosina <jkosina@suse.cz>
  Jiri Olsa <jolsa@redhat.com>
  Jiri Slaby <jslaby@suse.cz>
  Joel Stanley <joel@jms.id.au>
  Johannes Berg <johannes.berg@intel.com>
  Johannes Nixdorf <j.nixdorf@avm.de>
  John Millikin <john@john-millikin.com>
  Johnathan Smithinovic <johnathan.smithinovic@gmx.at>
  Jon Hunter <jonathanh@nvidia.com>
  Jon Maloy <jmaloy@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Joonsoo Kim <iamjoonsoo.kim@lge.com>
  Josef Bacik <josef@toxicpanda.com>
  Jouni K. Seppänen <jks@iki.fi>
  Jozsef Kadlecsik <kadlec@netfilter.org>
  Juerg Haefliger <juergh@canonical.com>
  Juergen Gross <jgross@suse.com>
  Julian Wiedmann <jwi@linux.ibm.com>
  Kai-Heng Feng <kai.heng.feng@canonical.com>
  Kan Liang <kan.liang@linux.intel.com>
  Kees Cook <keescook@chromium.org>
  Krzysztof Mazur <krzysiek@podlesie.net>
  Krzysztof Piotr Olędzki <ole@ans.pl>
  Lars-Peter Clausen <lars@metafoo.de>
  Lecopzer Chen <lecopzer.chen@mediatek.com>
  Lecopzer Chen <lecopzer@gmail.com>
  Leon Romanovsky <leonro@nvidia.com>
  Leon Schuermann <leon@is.currently.online>
  Linhua Xu <linhua.xu@unisoc.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Longfang Liu <liulongfang@huawei.com>
  Lorenzo Bianconi <lorenzo@kernel.org>
  Lu Baolu <baolu.lu@linux.intel.com>
  Luis Lozano <llozano@google.com>
  Lukas Wunner <lukas@wunner.de>
  Manish Chopra <manishc@marvell.com>
  Manoj Gupta <manojgupta@google.com>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Marc Zyngier <maz@kernel.org>
  Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
  Marcin Wojtas <mw@semihalf.com>
  Mark Bloch <mbloch@nvidia.com>
  Mark Brown <broonie@kernel.org>
  Mark Zhang <markzhang@nvidia.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Martin KaFai Lau <kafai@fb.com>
  Martin Wilck <mwilck@suse.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Masami Hiramatsu <mhiramat@kernel.org>
  Mathias Kresin <dev@kresin.me>
  Mathias Nyman <mathias.nyman@linux.intel.com>
  Matteo Croce <mcroce@microsoft.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Miaohe Lin <linmiaohe@huawei.com>
  Michael Chan <michael.chan@broadcom.com>
  Michael Ellerman <mpe@ellerman.id.au>
  Michael Hennerich <michael.hennerich@analog.com>
  Michael S. Tsirkin <mst@redhat.com>
  Mike Snitzer <snitzer@redhat.com>
  Mikko Perttunen <mperttunen@nvidia.com>
  Mikulas Patocka <mpatocka@redhat.com>
  Ming Lei <ming.lei@redhat.com>
  Mircea Cirjaliu <mcirjaliu@bitdefender.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
  Neal Cardwell <ncardwell@google.com>
  Necip Fazil Yildiran <fazilyildiran@gmail.com>
  Nick Desaulniers <ndesaulniers@google.com>
  Nicolai Stange <nstange@suse.de>
  Nicolas Dichtel <nicolas.dichtel@6wind.com>
  Nilesh Javali <njavali@marvell.com>
  Oded Gabbay <ogabbay@kernel.org>
  Olaf Hering <olaf@aepfle.de>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Pali Rohár <pali@kernel.org>
  Palmer Dabbelt <palmerdabbelt@google.com>
  Pan Bian <bianpan2016@163.com>
  Parav Pandit <parav@nvidia.com>
  Patrik Jakobsson <patrik.r.jakobsson@gmail.com>
  Paul Cercueil <paul@crapouillou.net>
  Paulo Alcantara (SUSE) <pc@cjr.nz>
  Paulo Alcantara <pc@cjr.nz>
  Peter Collingbourne <pcc@google.com>
  Peter Geis <pgwipeout@gmail.com>
  Peter Robinson <pbrobinson@gmail.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Petr Machata <me@pmachata.org>
  Petr Machata <petrm@nvidia.com>
  Phil Oester <kernel@linuxace.com>
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Ping Cheng <ping.cheng@wacom.com>
  Ping Cheng <pinglinux@gmail.com>
  Po-Hsu Lin <po-hsu.lin@canonical.com>
  Qinglang Miao <miaoqinglang@huawei.com>
  Qingqing Zhuo <qingqing.zhuo@amd.com>
  Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Rafael Kitover <rkitover@gmail.com>
  Randy Dunlap <rdunlap@infradead.org>
  Rasmus Villemoes <rasmus.villemoes@prevas.dk>
  Reinette Chatre <reinette.chatre@intel.com>
  Rich Felker <dalias@libc.org>
  Rob Clark <robdclark@chromium.org>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Rohit Maheshwari <rohitm@chelsio.com>
  Roman Guskov <rguskov@dh-electronics.com>
  Ronnie Sahlberg <lsahlber@redhat.com>
  Ross Zwisler <zwisler@google.com>
  Ryan Chen <ryan_chen@aspeedtech.com>
  Saeed Mahameed <saeedm@nvidia.com>
  Sagar Shrikant Kadam <sagar.kadam@sifive.com>
  Sagi Grimberg <sagi@grimberg.me>
  Sameer Pujar <spujar@nvidia.com>
  Samuel Holland <samuel@sholland.org>
  Sasha Levin <sashal@kernel.org>
  Sean Tranchetti <stranche@codeaurora.org>
  Seth Miller <miller.seth@gmail.com>
  Shawn Guo <shawn.guo@linaro.org>
  Shravya Kumbham <shravya.kumbham@xilinx.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Stanislav Fomichev <sdf@google.com>
  Stefan Chulski <stefanc@marvell.com>
  Steffen Klassert <steffen.klassert@secunet.com>
  Stephan Gerhold <stephan@gerhold.net>
  Stephen Boyd <sboyd@kernel.org>
  Steve French <stfrench@microsoft.com>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Su Yue <l@damenly.su>
  Sudip Mukherjee <sudipm.mukherjee@gmail.com>
  Takashi Iwai <tiwai@suse.de>
  Tariq Toukan <tariqt@nvidia.com>
  Theodore Ts'o <tytso@mit.edu>
  Thierry Reding <treding@nvidia.com>
  Thinh Nguyen <Thinh.Nguyen@synopsys.com>
  Thomas Bogendoerfer <tsbogend@alpha.franken.de>
  Thomas Gleixner <tglx@linutronix.de>
  Thomas Hebb <tommyhebb@gmail.com>
  Tobias Waldekranz <tobias@waldekranz.com>
  Toke Høiland-Jørgensen <toke@toke.dk>
  Tom Rix <trix@redhat.com>
  Tony Lindgren <tony@atomide.com>
  Trond Myklebust <trond.myklebust@hammerspace.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
  Vadim Pasternak <vadimp@nvidia.com>
  Valdis Kletnieks <valdis.kletnieks@vt.edu>
  Valdis Klētnieks <valdis.kletnieks@vt.edu>
  Vasily Averin <vvs@virtuozzo.com>
  Victor Zhao <Victor.Zhao@amd.com>
  Vinay Kumar Yadav <vinay.yadav@chelsio.com>
  Vincent Mailhol <mailhol.vincent@wanadoo.fr>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vineet Gupta <vgupta@synopsys.com>
  Vinod Koul <vkoul@kernel.org>
  Viresh Kumar <viresh.kumar@linaro.org>
  Vladimir Oltean <vladimir.oltean@nxp.com>
  Vlastimil Babka <vbabka@suse.cz>
  Wang Hai <wanghai38@huawei.com>
  Wang Hui <john.wanghui@huawei.com>
  Wayne Lin <Wayne.Lin@amd.com>
  Wei Liu <wei.liu@kernel.org>
  Will Deacon <will@kernel.org>
  Willem de Bruijn <willemb@google.com>
  Wolfram Sang <wsa+renesas@sang-engineering.com>
  Wolfram Sang <wsa@kernel.org>
  Xiaolei Wang <xiaolei.wang@windriver.com>
  yangerkun <yangerkun@huawei.com>
  Yazen Ghannam <Yazen.Ghannam@amd.com>
  Yonglong Liu <liuyonglong@huawei.com>
  Youling Tang <tangyouling@loongson.cn>
  YueHaibing <yuehaibing@huawei.com>
  Yufeng Mo <moyufeng@huawei.com>
  zhengbin <zhengbin13@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 8166 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 11:01:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 11:01:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80877.148260 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7FuA-0007VJ-Qr; Wed, 03 Feb 2021 11:01:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80877.148260; Wed, 03 Feb 2021 11:01:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7FuA-0007VC-NB; Wed, 03 Feb 2021 11:01:02 +0000
Received: by outflank-mailman (input) for mailman id 80877;
 Wed, 03 Feb 2021 11:01:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dUjn=HF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1l7Fu9-0007V7-6f
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 11:01:01 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 71be47ef-49ca-4e16-b71b-4bc84dedfc28;
 Wed, 03 Feb 2021 11:01:00 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1A247AF78;
 Wed,  3 Feb 2021 11:00:59 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 71be47ef-49ca-4e16-b71b-4bc84dedfc28
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612350059; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=q3KE9TfPPTGcvsD79BQ0xkUg5qvgl+n+GeEjBrtJUTQ=;
	b=ldc7uadkWkVhpHyEOxvGwOxb7RHboVcDmqoQYhWqS44mzA45woaJRIssi4zeuxMj1rPzV8
	W6i2hfSx9O3GvRdgI+zG9tVg4pHJeUla8qM8LEj6TB3yfZudeRAIqRckd2EWWMTy7Z5zWw
	HjQ75RF+CZhB2XzHyeh6vmZR/Xx/Aps=
To: Julien Grall <julien@xen.org>, Dario Faggioli <dfaggioli@suse.com>,
 =?UTF-8?Q?Anders_T=c3=b6rnqvist?= <anders.tornqvist@codiax.se>,
 xen-devel@lists.xenproject.org, Stefano Stabellini <sstabellini@kernel.org>
Cc: Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>
References: <fe3dd9f0-b035-01fe-3e01-ddf065f182ab@codiax.se>
 <207305e4e2998614767fdcc5ad83ced6de982820.camel@suse.com>
 <e85548f4-e03b-4717-3495-9ed472ed03c9@xen.org>
 <e18ba69efd0d12fc489144024305fd3c6102c330.camel@suse.com>
 <e37fe8a9-c633-3572-e273-2fd03b35b791@codiax.se>
 <744ddde6-a228-82fc-76b9-401926d7963b@xen.org>
 <d92c4191fb81e6d1de636f281c8624d68f8d14fc.camel@suse.com>
 <c9a4e132-5bca-aa76-ab8b-bfeee1cd5a9e@codiax.se>
 <f52baf12308d71b96d0d9be1c7c382a3c5efafbc.camel@suse.com>
 <18ef4619-19ae-90d2-459c-9b5282b49176@codiax.se>
 <a9d80e262760f6692f7086c9b6a0622caf19e795.camel@suse.com>
 <4760cbac-b006-78bc-b064-3265384f6707@xen.org>
 <311bb201bcacfd356f0c0b67856754eceae39e37.camel@suse.com>
 <7f2ec84a-9814-ffb1-0940-e936a80afbbb@xen.org>
 <501664dbdb736eaa4d9c05255dedfd7ad3e694fa.camel@suse.com>
 <1ab7ad80-c027-ffdd-8188-e1ab1fd53335@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: Null scheduler and vwfi native problem
Message-ID: <5ad8befd-75a1-4995-e0bb-e1a438f7556d@suse.com>
Date: Wed, 3 Feb 2021 12:00:58 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <1ab7ad80-c027-ffdd-8188-e1ab1fd53335@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="1lnwlIH2lKrIgSY5B9gyffH7R1kVJtffb"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--1lnwlIH2lKrIgSY5B9gyffH7R1kVJtffb
Content-Type: multipart/mixed; boundary="ZTGhjvmEWTexT4YQaGjqlqe0yhO9njnuy";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Julien Grall <julien@xen.org>, Dario Faggioli <dfaggioli@suse.com>,
 =?UTF-8?Q?Anders_T=c3=b6rnqvist?= <anders.tornqvist@codiax.se>,
 xen-devel@lists.xenproject.org, Stefano Stabellini <sstabellini@kernel.org>
Cc: Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <5ad8befd-75a1-4995-e0bb-e1a438f7556d@suse.com>
Subject: Re: Null scheduler and vwfi native problem
References: <fe3dd9f0-b035-01fe-3e01-ddf065f182ab@codiax.se>
 <207305e4e2998614767fdcc5ad83ced6de982820.camel@suse.com>
 <e85548f4-e03b-4717-3495-9ed472ed03c9@xen.org>
 <e18ba69efd0d12fc489144024305fd3c6102c330.camel@suse.com>
 <e37fe8a9-c633-3572-e273-2fd03b35b791@codiax.se>
 <744ddde6-a228-82fc-76b9-401926d7963b@xen.org>
 <d92c4191fb81e6d1de636f281c8624d68f8d14fc.camel@suse.com>
 <c9a4e132-5bca-aa76-ab8b-bfeee1cd5a9e@codiax.se>
 <f52baf12308d71b96d0d9be1c7c382a3c5efafbc.camel@suse.com>
 <18ef4619-19ae-90d2-459c-9b5282b49176@codiax.se>
 <a9d80e262760f6692f7086c9b6a0622caf19e795.camel@suse.com>
 <4760cbac-b006-78bc-b064-3265384f6707@xen.org>
 <311bb201bcacfd356f0c0b67856754eceae39e37.camel@suse.com>
 <7f2ec84a-9814-ffb1-0940-e936a80afbbb@xen.org>
 <501664dbdb736eaa4d9c05255dedfd7ad3e694fa.camel@suse.com>
 <1ab7ad80-c027-ffdd-8188-e1ab1fd53335@xen.org>
In-Reply-To: <1ab7ad80-c027-ffdd-8188-e1ab1fd53335@xen.org>

--ZTGhjvmEWTexT4YQaGjqlqe0yhO9njnuy
Content-Type: multipart/mixed;
 boundary="------------18CF0D851AD2BAB757E8F521"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------18CF0D851AD2BAB757E8F521
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 03.02.21 10:19, Julien Grall wrote:
> Hi,
> 
> On 03/02/2021 07:31, Dario Faggioli wrote:
>> On Tue, 2021-02-02 at 15:23 +0000, Julien Grall wrote:
>>> In reality, it is probably still too early as a pCPU can be
>>> considered
>>> quiesced until a call to rcu_lock*() (such rcu_lock_domain()).
>>>
>> Well, yes, in theory, we could track down which is the first RCU read
>> side crit. section on this path, and put the call right before that (if
>> I understood what you mean).
> 
> Oh, that's not what I meant. This will indeed be far more complex than I
> originally had in mind.
> 
> AFAIU, the RCU uses critical section to protect data. So the "entering"
> could be used as "the pCPU is not quiesced" and "exiting" could be used
> as "the pCPU is quiesced".
> 
> The concern with my approach is we would need to make sure that Xen
> correctly uses the rcu helpers. I know Juergen worked on that recently,
> but I don't know whether this is fully complete.

I think it is complete, but I can't be sure, of course.

One bit missing (for catching some wrong uses of the helpers) is this
patch:

https://lists.xen.org/archives/html/xen-devel/2020-03/msg01759.html

I don't remember why it hasn't been taken, but I think there was a
specific reason for that.


Juergen

--------------18CF0D851AD2BAB757E8F521
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------18CF0D851AD2BAB757E8F521--

--ZTGhjvmEWTexT4YQaGjqlqe0yhO9njnuy--

--1lnwlIH2lKrIgSY5B9gyffH7R1kVJtffb
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmAagmoFAwAAAAAACgkQsN6d1ii/Ey9X
4wf/eCxrr0wGKXjVMonA9VTFqHDs7Ti2BxLJ5J2qCU/p++7HTzA+piAysw/MfZ3kS91+HYdZWssA
2/ZQhIi1NE6ZDAiShlKumvX9tiugo/MaYYpsbnGHYPmOXDjwfvcaTehk/2getv2BRUvy0KYkC/lV
6nqVpdpJcD77DIvjSAcqkLEUXLnhaosLZCVCRMpTmSfFB/no7kuDyvzF67z2vWXX8vANfX2UocTc
syuD7n87gox8dCcOjHfVOri+xc3pVAKkIe2NmqDbeMBCn3hBnVuMA+JToYQl3qMSFaXAEZ1KPNj1
jwuqQw1POQIoU5W8f3BRGGSho88jlt+BSOfGSnTb0A==
=HBIH
-----END PGP SIGNATURE-----

--1lnwlIH2lKrIgSY5B9gyffH7R1kVJtffb--


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 11:21:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 11:21:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80880.148275 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7GDU-000159-IZ; Wed, 03 Feb 2021 11:21:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80880.148275; Wed, 03 Feb 2021 11:21:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7GDU-000152-F9; Wed, 03 Feb 2021 11:21:00 +0000
Received: by outflank-mailman (input) for mailman id 80880;
 Wed, 03 Feb 2021 11:20:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1l7GDT-00014x-Fa
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 11:20:59 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l7GDR-0006Y4-Gp; Wed, 03 Feb 2021 11:20:57 +0000
Received: from 54-240-197-225.amazon.com ([54.240.197.225]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l7GDR-0006MR-9K; Wed, 03 Feb 2021 11:20:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=SBcLEcpvO3k+svE+RRLJ+wnouCQimxvuZz8XhlodnlE=; b=lVjGxmUZHITggI+yPkySqp6Sve
	EjlxqxwQahaRYWG/U/odQtxVC9k8gFPqLotX1kAXK2eybQNaMpm3/sSQCwQpTAKz9lGU+b/6lxDI/
	B+/M4Qp+gEeXpIaEEgXe5MTHddcxEbMREMJeA2zvHqpCVhBzpspU6Y3qVcwVBskrlwxc=;
Subject: Re: Null scheduler and vwfi native problem
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 Dario Faggioli <dfaggioli@suse.com>,
 =?UTF-8?Q?Anders_T=c3=b6rnqvist?= <anders.tornqvist@codiax.se>,
 xen-devel@lists.xenproject.org, Stefano Stabellini <sstabellini@kernel.org>
Cc: Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>
References: <fe3dd9f0-b035-01fe-3e01-ddf065f182ab@codiax.se>
 <207305e4e2998614767fdcc5ad83ced6de982820.camel@suse.com>
 <e85548f4-e03b-4717-3495-9ed472ed03c9@xen.org>
 <e18ba69efd0d12fc489144024305fd3c6102c330.camel@suse.com>
 <e37fe8a9-c633-3572-e273-2fd03b35b791@codiax.se>
 <744ddde6-a228-82fc-76b9-401926d7963b@xen.org>
 <d92c4191fb81e6d1de636f281c8624d68f8d14fc.camel@suse.com>
 <c9a4e132-5bca-aa76-ab8b-bfeee1cd5a9e@codiax.se>
 <f52baf12308d71b96d0d9be1c7c382a3c5efafbc.camel@suse.com>
 <18ef4619-19ae-90d2-459c-9b5282b49176@codiax.se>
 <a9d80e262760f6692f7086c9b6a0622caf19e795.camel@suse.com>
 <4760cbac-b006-78bc-b064-3265384f6707@xen.org>
 <311bb201bcacfd356f0c0b67856754eceae39e37.camel@suse.com>
 <7f2ec84a-9814-ffb1-0940-e936a80afbbb@xen.org>
 <501664dbdb736eaa4d9c05255dedfd7ad3e694fa.camel@suse.com>
 <1ab7ad80-c027-ffdd-8188-e1ab1fd53335@xen.org>
 <5ad8befd-75a1-4995-e0bb-e1a438f7556d@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <aa46d1ce-cc41-4bda-acbf-0d39a30be289@xen.org>
Date: Wed, 3 Feb 2021 11:20:55 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <5ad8befd-75a1-4995-e0bb-e1a438f7556d@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Juergen,

On 03/02/2021 11:00, Jürgen Groß wrote:
> On 03.02.21 10:19, Julien Grall wrote:
>> Hi,
>>
>> On 03/02/2021 07:31, Dario Faggioli wrote:
>>> On Tue, 2021-02-02 at 15:23 +0000, Julien Grall wrote:
>>>> In reality, it is probably still too early as a pCPU can be
>>>> considered
>>>> quiesced until a call to rcu_lock*() (such rcu_lock_domain()).
>>>>
>>> Well, yes, in theory, we could track down which is the first RCU read
>>> side crit. section on this path, and put the call right before that (if
>>> I understood what you mean).
>>
>> Oh, that's not what I meant. This will indeed be far more complex than 
>> I originally had in mind.
>>
>> AFAIU, RCU uses critical sections to protect data. So "entering" 
>> could be treated as "the pCPU is not quiesced" and "exiting" as 
>> "the pCPU is quiesced".
>>
>> The concern with my approach is that we would need to make sure that 
>> Xen correctly uses the RCU helpers. I know Juergen worked on that 
>> recently, but I don't know whether this is fully complete.
> 
> I think it is complete, but I can't be sure, of course.
> 
> One bit missing (for catching some wrong uses of the helpers) is this
> patch:
> 
> https://lists.xen.org/archives/html/xen-devel/2020-03/msg01759.html
> 
> I don't remember why it hasn't been taken, but I think there was a
> specific reason for that.

Looking at v8, the patch was suitably reviewed by Jan. So I am a bit 
puzzled as to why this wasn't committed... I had to go back to v6 to 
notice the following message:

"albeit to be honest I'm not fully convinced we need to go this far."

Was the implication that his Reviewed-by was conditional on someone else 
answering the e-mail?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 11:48:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 11:48:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80888.148293 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7GeE-0003Np-03; Wed, 03 Feb 2021 11:48:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80888.148293; Wed, 03 Feb 2021 11:48:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7GeD-0003Ni-Sy; Wed, 03 Feb 2021 11:48:37 +0000
Received: by outflank-mailman (input) for mailman id 80888;
 Wed, 03 Feb 2021 11:48:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3m8k=HF=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1l7GeC-0003Nd-0E
 for xen-devel@lists.xen.org; Wed, 03 Feb 2021 11:48:36 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9a4c9ca2-3f2a-4598-9f7b-ab8122802911;
 Wed, 03 Feb 2021 11:48:34 +0000 (UTC)
Received: from rochebonne.antioche.eu.org (rochebonne
 [IPv6:2001:41d0:fe9d:1100:221:70ff:fe0c:9885])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTP id 113BmWBO026723;
 Wed, 3 Feb 2021 12:48:32 +0100 (MET)
Received: by rochebonne.antioche.eu.org (Postfix, from userid 1210)
 id 8CD48281D; Wed,  3 Feb 2021 12:48:32 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9a4c9ca2-3f2a-4598-9f7b-ab8122802911
Date: Wed, 3 Feb 2021 12:48:32 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: =?iso-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>
Cc: xen-devel@lists.xen.org
Subject: Re: xenstored file descriptor leak
Message-ID: <20210203114832.GA1549@antioche.eu.org>
References: <20210202183735.GA25046@mail.soc.lip6.fr>
 <b6ed10d4-7976-6a61-434d-35e467763deb@suse.com>
 <20210203075721.GB445@antioche.eu.org>
 <7768ff4b-837d-965b-61c7-b6794f677d9e@suse.com>
 <20210203081621.GD445@antioche.eu.org>
 <89ddaac0-eb05-8ddb-465a-60d78e4009eb@suse.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="liOOAslEiF7prFVr"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <89ddaac0-eb05-8ddb-465a-60d78e4009eb@suse.com>
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [IPv6:2001:41d0:fe9d:1100:a00:20ff:fe1c:276e]); Wed, 03 Feb 2021 12:48:32 +0100 (MET)


--liOOAslEiF7prFVr
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit

On Wed, Feb 03, 2021 at 09:21:32AM +0100, Jürgen Groß wrote:
> [...]
> This shouldn't happen in case we are closing the socket actively.
> 
> In the end we should just do a talloc_free(conn) in
> ignore_connection() if it is a socket based one. This should revert
> the critical modification of the XSA-115 fixes for sockets while
> keeping the desired effect for domain connections.

Hello,
here's an updated patch which works for me. Does anyone see a problem
with it? If not, I will submit it for commit.

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--

--liOOAslEiF7prFVr
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="xenstored_core.diff"

--- xenstored_core.c.orig	2021-02-02 18:06:33.389316841 +0100
+++ xenstored_core.c	2021-02-03 12:46:17.204376338 +0100
@@ -397,9 +397,12 @@
 			     !list_empty(&conn->out_list)))
 				*ptimeout = 0;
 		} else {
-			short events = POLLIN|POLLPRI;
-			if (!list_empty(&conn->out_list))
-				events |= POLLOUT;
+			short events = 0;
+			if (!conn->is_ignored) {
+				events |= POLLIN|POLLPRI;
+			        if (!list_empty(&conn->out_list))
+				        events |= POLLOUT;
+			}
 			conn->pollfd_idx = set_fd(conn->fd, events);
 		}
 	}
@@ -1440,6 +1443,9 @@
 
 	talloc_free(conn->in);
 	conn->in = NULL;
+	/* if this is a socket connection, drop it now */
+	if (conn->fd >= 0)
+		talloc_free(conn);
 }
 
 static const char *sockmsg_string(enum xsd_sockmsg_type type)

--liOOAslEiF7prFVr--


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 11:50:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 11:50:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80889.148305 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7GgO-0004Lk-Ci; Wed, 03 Feb 2021 11:50:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80889.148305; Wed, 03 Feb 2021 11:50:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7GgO-0004Ld-9X; Wed, 03 Feb 2021 11:50:52 +0000
Received: by outflank-mailman (input) for mailman id 80889;
 Wed, 03 Feb 2021 11:50:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7GgM-0004LU-H6; Wed, 03 Feb 2021 11:50:50 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7GgM-00073D-B5; Wed, 03 Feb 2021 11:50:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7GgM-0002TE-0y; Wed, 03 Feb 2021 11:50:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l7GgM-0008DD-0a; Wed, 03 Feb 2021 11:50:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=Nvfy2dQFoXWqKpmvM/DxV8hqz1MQxmJXrUD/vevPDL4=; b=Bz0NX3jYljWk7a/7t0B4mN55Mt
	Tz2ZandYo5CWK4bI9I/Y3aPNacdtHG/dDFPmtAlBqfJWVd6LGYyzN3btKeMf9EUpyFpaDxWjSa+4m
	W2ZEPz3hS9/ivys2u6QN9qd2xF++zJUeKR5Au3ECNtu4GAwq3N2xCW4A7aZNEolo2UFI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [linux-5.4 bisection] complete test-arm64-arm64-xl
Message-Id: <E1l7GgM-0008DD-0a@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 03 Feb 2021 11:50:50 +0000

branch xen-unstable
xenbranch xen-unstable
job test-arm64-arm64-xl
testid guest-start

Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
  Bug introduced:  a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Bug not present: acc402fa5bf502d471d50e3d495379f093a7f9e4
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/158983/


  commit a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Author: David Woodhouse <dwmw@amazon.co.uk>
  Date:   Wed Jan 13 13:26:02 2021 +0000
  
      xen: Fix event channel callback via INTX/GSI
      
      [ Upstream commit 3499ba8198cad47b731792e5e56b9ec2a78a83a2 ]
      
      For a while, event channel notification via the PCI platform device
      has been broken, because we attempt to communicate with xenstore before
      we even have notifications working, with the xs_reset_watches() call
      in xs_init().
      
      We tend to get away with this on Xen versions below 4.0 because we avoid
      calling xs_reset_watches() anyway, because xenstore might not cope with
      reading a non-existent key. And newer Xen *does* have the vector
      callback support, so we rarely fall back to INTX/GSI delivery.
      
      To fix it, clean up a bit of the mess of xs_init() and xenbus_probe()
      startup. Call xs_init() directly from xenbus_init() only in the !XS_HVM
      case, deferring it to be called from xenbus_probe() in the XS_HVM case
      instead.
      
      Then fix up the invocation of xenbus_probe() to happen either from its
      device_initcall if the callback is available early enough, or when the
      callback is finally set up. This means that the hack of calling
      xenbus_probe() from a workqueue after the first interrupt, or directly
      from the PCI platform device setup, is no longer needed.
      
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Link: https://lore.kernel.org/r/20210113132606.422794-2-dwmw2@infradead.org
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/linux-5.4/test-arm64-arm64-xl.guest-start.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/linux-5.4/test-arm64-arm64-xl.guest-start --summary-out=tmp/158983.bisection-summary --basis-template=158387 --blessings=real,real-bisect,real-retry linux-5.4 test-arm64-arm64-xl guest-start
Searching for failure / basis pass:
 158962 fail [host=laxton1] / 158681 [host=rochester1] 158624 [host=rochester0] 158616 [host=rochester1] 158609 [host=rochester0] 158603 [host=rochester1] 158593 [host=laxton0] 158583 ok.
Failure / basis pass flights: 158962 / 158583
Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3b468095cd3dfcd1aa4ae63bc623f534bc2390d2 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
Basis pass d26b3110041a9fddc6c6e36398f53f7eab8cff82 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5b4a97bbc39ed8e7eb50038b9cffe2e948e49995 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e e8adbf680b56a3f4b9600c7bcc04fec1877a6213
Generating revisions with ./adhoc-revtuple-generator  git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git#d26b3110041a9fddc6c6e36398f53f7eab8cff82-0fbca6ce4174724f28be5268c5d210f51ed96e31 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#5b4a97bbc39ed8e7eb50038b9cffe2e948e49995-3b468095cd3dfcd1aa4ae63bc623f534bc2390d2 git://xenbits.xen.org/qemu-xen.git#7ea4288\
 95af2840d85c524f0bd11a38aac308308-7ea428895af2840d85c524f0bd11a38aac308308 git://xenbits.xen.org/osstest/seabios.git#ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e-ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e git://xenbits.xen.org/xen.git#e8adbf680b56a3f4b9600c7bcc04fec1877a6213-9dc687f155a57216b83b17f9cde55dd43e06b0cd
Loaded 15001 nodes in revision graph
Searching for test results:
 158563 [host=laxton0]
 158583 pass d26b3110041a9fddc6c6e36398f53f7eab8cff82 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5b4a97bbc39ed8e7eb50038b9cffe2e948e49995 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e e8adbf680b56a3f4b9600c7bcc04fec1877a6213
 158593 [host=laxton0]
 158603 [host=rochester1]
 158609 [host=rochester0]
 158616 [host=rochester1]
 158624 [host=rochester0]
 158681 [host=rochester1]
 158707 fail irrelevant
 158716 fail irrelevant
 158748 fail 131f8d8a889a5ca66a835eea82bba043ac91a7cf c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e f8708b0ed6d549d1d29b8b5cc287f1f2b642bc63
 158765 fail 131f8d8a889a5ca66a835eea82bba043ac91a7cf c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 6677b5a3577c16501fbc51a3341446905bd21c38
 158796 fail irrelevant
 158818 fail irrelevant
 158841 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158863 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158881 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158956 pass d26b3110041a9fddc6c6e36398f53f7eab8cff82 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5b4a97bbc39ed8e7eb50038b9cffe2e948e49995 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e e8adbf680b56a3f4b9600c7bcc04fec1877a6213
 158929 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea56ebf67dd55483105aa9f9996a48213e78337e 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158958 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158963 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea56ebf67dd55483105aa9f9996a48213e78337e 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158964 pass 09f983f0c7fc0db79a5f6c883ec3510d424c369c c530a75c1e6a472b0eb9558310b518f0dfcd8860 3b769c5110384fb33bcfeddced80f721ec7838cc 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 452ddbe3592b141b05a7e0676f09c8ae07f98fdd
 158965 fail 38f35023fd301abeb01cfd81e73caa2e4e7ec0b1 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 158966 pass d8a487e673abf46c69c901bb25da54e9bc7ba45e c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 158967 pass 5a1d7bb7d333849eb7d3ab5ebfbf9805b2cd46c9 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 158968 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 158970 pass 9cec63a3aacbcaee8d09aecac2ca2f8820efcc70 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 158972 pass 8ab3478335ad8fc08f14ec73251b084fe02b3ebb c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 158974 pass acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 158976 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 158978 pass acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 158962 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3b468095cd3dfcd1aa4ae63bc623f534bc2390d2 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158980 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 158982 pass acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 158983 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
Searching for interesting versions
 Result found: flight 158583 (pass), for basis pass
 For basis failure, parent search stopping at acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3, results HASH(0x558e51e4cac8) HASH(0x558e51e44600) HASH(0x558e51e4d548) For basis failure, parent search stopping at 8ab3478335ad8fc08f14ec73251b084fe02b3ebb c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36\
 fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3, results HASH(0x558e51e4b8a0) For basis failure, parent search stopping at 9cec63a3aacbcaee8d09aecac2ca2f8820efcc70 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3, results HASH(0x558e51e4\
 a198) For basis failure, parent search stopping at 5a1d7bb7d333849eb7d3ab5ebfbf9805b2cd46c9 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3, results HASH(0x558e51e439d8) For basis failure, parent search stopping at d8a487e673abf46c69c901bb25da54e9bc7ba45e c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea42\
 8895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3, results HASH(0x558e51df3578) For basis failure, parent search stopping at 09f983f0c7fc0db79a5f6c883ec3510d424c369c c530a75c1e6a472b0eb9558310b518f0dfcd8860 3b769c5110384fb33bcfeddced80f721ec7838cc 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 452ddbe3592b141b05a7e0676f09c8ae07f98fdd, results HASH(0x558e51e48310) For basis failure, parent searc\
 h stopping at d26b3110041a9fddc6c6e36398f53f7eab8cff82 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5b4a97bbc39ed8e7eb50038b9cffe2e948e49995 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e e8adbf680b56a3f4b9600c7bcc04fec1877a6213, results HASH(0x558e51e02800) HASH(0x558e51e1a9f8) Result found: flight 158748 (fail), for basis failure (at ancestor ~6126)
 Repro found: flight 158956 (pass), for basis pass
 Repro found: flight 158962 (fail), for basis failure
 0 revisions at acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
No revisions left to test, checking graph state.
 Result found: flight 158974 (pass), for last pass
 Result found: flight 158976 (fail), for first failure
 Repro found: flight 158978 (pass), for last pass
 Repro found: flight 158980 (fail), for first failure
 Repro found: flight 158982 (pass), for last pass
 Repro found: flight 158983 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
  Bug introduced:  a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Bug not present: acc402fa5bf502d471d50e3d495379f093a7f9e4
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/158983/


  commit a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Author: David Woodhouse <dwmw@amazon.co.uk>
  Date:   Wed Jan 13 13:26:02 2021 +0000
  
      xen: Fix event channel callback via INTX/GSI
      
      [ Upstream commit 3499ba8198cad47b731792e5e56b9ec2a78a83a2 ]
      
      For a while, event channel notification via the PCI platform device
      has been broken, because we attempt to communicate with xenstore before
      we even have notifications working, with the xs_reset_watches() call
      in xs_init().
      
      We tend to get away with this on Xen versions below 4.0 because we avoid
      calling xs_reset_watches() anyway, because xenstore might not cope with
      reading a non-existent key. And newer Xen *does* have the vector
      callback support, so we rarely fall back to INTX/GSI delivery.
      
      To fix it, clean up a bit of the mess of xs_init() and xenbus_probe()
      startup. Call xs_init() directly from xenbus_init() only in the !XS_HVM
      case, deferring it to be called from xenbus_probe() in the XS_HVM case
      instead.
      
      Then fix up the invocation of xenbus_probe() to happen either from its
      device_initcall if the callback is available early enough, or when the
      callback is finally set up. This means that the hack of calling
      xenbus_probe() from a workqueue after the first interrupt, or directly
      from the PCI platform device setup, is no longer needed.
      
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Link: https://lore.kernel.org/r/20210113132606.422794-2-dwmw2@infradead.org
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>

pnmtopng: 150 colors found
Revision graph left in /home/logs/results/bisect/linux-5.4/test-arm64-arm64-xl.guest-start.{dot,ps,png,html,svg}.
----------------------------------------
158983: tolerable ALL FAIL

flight 158983 linux-5.4 real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/158983/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-arm64-arm64-xl          14 guest-start             fail baseline untested


jobs:
 test-arm64-arm64-xl                                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Wed Feb 03 11:54:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 11:54:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80895.148320 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7Gjs-0004XQ-50; Wed, 03 Feb 2021 11:54:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80895.148320; Wed, 03 Feb 2021 11:54:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7Gjs-0004XJ-0n; Wed, 03 Feb 2021 11:54:28 +0000
Received: by outflank-mailman (input) for mailman id 80895;
 Wed, 03 Feb 2021 11:54:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dUjn=HF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1l7Gjq-0004XE-R3
 for xen-devel@lists.xen.org; Wed, 03 Feb 2021 11:54:26 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e9ba26b0-9ed2-407d-8e85-0f78477dc66b;
 Wed, 03 Feb 2021 11:54:25 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 72F1CAFFA;
 Wed,  3 Feb 2021 11:54:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e9ba26b0-9ed2-407d-8e85-0f78477dc66b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612353264; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=pWhnz+bTxdbZTyJEHH9/dFp0wsiLakxt/gAbvkMOkOQ=;
	b=s6lpqYCZk6Ew0WTkUwsg3UPKOzQyuOPZZhbx7dxjsJgWyVZpD/FLZ8IS/lP4tPOv1UeDZD
	b6S7MZJECbtJWsuD0DrOcpyEhjraJBM5j/xNK08q1sk0wwCalvn73jZ9N0Pc5Mm3KPvdAY
	b8clGfmwezmNh3hsB9//CdOx+2BtAR0=
Subject: Re: xenstored file descriptor leak
To: Manuel Bouyer <bouyer@antioche.eu.org>
Cc: xen-devel@lists.xen.org
References: <20210202183735.GA25046@mail.soc.lip6.fr>
 <b6ed10d4-7976-6a61-434d-35e467763deb@suse.com>
 <20210203075721.GB445@antioche.eu.org>
 <7768ff4b-837d-965b-61c7-b6794f677d9e@suse.com>
 <20210203081621.GD445@antioche.eu.org>
 <89ddaac0-eb05-8ddb-465a-60d78e4009eb@suse.com>
 <20210203114832.GA1549@antioche.eu.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <73c0dd26-d3a5-221f-84b8-06055ee62889@suse.com>
Date: Wed, 3 Feb 2021 12:54:23 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <20210203114832.GA1549@antioche.eu.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="TYhajP37nsuqwMSyTBE6PLtPSD4JknCgC"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--TYhajP37nsuqwMSyTBE6PLtPSD4JknCgC
Content-Type: multipart/mixed; boundary="PdBqg37MD83ieF1ftk1LroFx5vD8ZJhtO";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Manuel Bouyer <bouyer@antioche.eu.org>
Cc: xen-devel@lists.xen.org
Message-ID: <73c0dd26-d3a5-221f-84b8-06055ee62889@suse.com>
Subject: Re: xenstored file descriptor leak
References: <20210202183735.GA25046@mail.soc.lip6.fr>
 <b6ed10d4-7976-6a61-434d-35e467763deb@suse.com>
 <20210203075721.GB445@antioche.eu.org>
 <7768ff4b-837d-965b-61c7-b6794f677d9e@suse.com>
 <20210203081621.GD445@antioche.eu.org>
 <89ddaac0-eb05-8ddb-465a-60d78e4009eb@suse.com>
 <20210203114832.GA1549@antioche.eu.org>
In-Reply-To: <20210203114832.GA1549@antioche.eu.org>

--PdBqg37MD83ieF1ftk1LroFx5vD8ZJhtO
Content-Type: multipart/mixed;
 boundary="------------23149E6BAA186FEA49F90BD9"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------23149E6BAA186FEA49F90BD9
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Transfer-Encoding: quoted-printable

On 03.02.21 12:48, Manuel Bouyer wrote:
> On Wed, Feb 03, 2021 at 09:21:32AM +0100, Jürgen Groß wrote:
>> [...]
>> This shouldn't happen in case we are closing the socket actively.
>>
>> In the end we should just do a talloc_free(conn) in
>> ignore_connection() if it is a socket based one. This should revert
>> the critical modification of the XSA-115 fixes for sockets while
>> keeping the desired effect for domain connections.
>
> Hello
> here's an updated patch which works for me. Does anyone see a problem
> with it ? If not I will submit it for commit.
>

Do you really need the first hunk? I would have thought just freeing
conn in ignore_connection() is enough.

In case you are seeing problems without the first hunk, please say so
in a comment added to this hunk in order to avoid it being removed
sometime in the future.
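
For context, the change being discussed can be illustrated with a hypothetical, self-contained sketch (the struct, field names, and the plain free() are stand-ins; the real xenstored code in tools/xenstore uses talloc and its own connection list):

```c
#include <stdlib.h>

/* Hypothetical stand-in for xenstored's connection type. */
struct connection {
    int fd;          /* socket fd, or -1 for a domain (ring) connection */
    int is_ignored;
};

/*
 * Sketch of the proposed fix: a socket-based connection is torn down
 * immediately (talloc_free(conn) in the real code), so its file
 * descriptor is not leaked; a domain connection is merely marked
 * ignored, keeping the behaviour introduced by the XSA-115 fixes.
 */
static struct connection *ignore_connection(struct connection *conn)
{
    if (conn->fd >= 0) {
        /* socket based: close and free right away */
        conn->fd = -1;   /* stands in for close(conn->fd) */
        free(conn);
        return NULL;
    }
    /* domain connection: keep it, but stop processing its requests */
    conn->is_ignored = 1;
    return conn;
}
```

The sketch only captures the control flow under discussion, not the actual cleanup a talloc destructor would perform.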


Juergen

--------------23149E6BAA186FEA49F90BD9
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------23149E6BAA186FEA49F90BD9--

--PdBqg37MD83ieF1ftk1LroFx5vD8ZJhtO--

--TYhajP37nsuqwMSyTBE6PLtPSD4JknCgC
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmAaju8FAwAAAAAACgkQsN6d1ii/Ey9Z
RQf/aSFmW8qoPopN2sy79FfSn8RNatyEO6F7ZT45I4lxy6B7cdohwm4kI9u9uGin7ZpaugRvBxwA
v9pqyaI8Hlzwe7hs53wyV6c8iUOaTfcRkB/AzNC388eoIX+Gz7AuyFflQOxDrBons1OxOYbgvM4H
KyJiHBHZqdPZJmLhTIGiNFBgcD+vlNFAjrk5bzMeOo9uX561vEJIa1qaXlBRSM9YmkbAd9lU1G2V
RLpyZCmPCG9a6ykFzEX30tp1i/aTp2JYsf6PbfqnOa0cK/aGQS+bw5W0Pbwh5uWkRnJQpsemsGiv
DMm61/TRSIRTvoRbuxGSSlTvGR0B0CJTohSt6kMflg==
=grwF
-----END PGP SIGNATURE-----

--TYhajP37nsuqwMSyTBE6PLtPSD4JknCgC--


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 12:02:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 12:02:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80900.148335 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7Grn-0005d5-6e; Wed, 03 Feb 2021 12:02:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80900.148335; Wed, 03 Feb 2021 12:02:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7Grn-0005cy-3I; Wed, 03 Feb 2021 12:02:39 +0000
Received: by outflank-mailman (input) for mailman id 80900;
 Wed, 03 Feb 2021 12:02:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dUjn=HF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1l7Grl-0005ct-4n
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 12:02:37 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9ad6f6a4-78ec-4a2f-85f4-a115add573fb;
 Wed, 03 Feb 2021 12:02:36 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 113EDAD78;
 Wed,  3 Feb 2021 12:02:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9ad6f6a4-78ec-4a2f-85f4-a115add573fb
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612353755; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=2kTGFphc2LZJ2ATGCnVNW5wh5u84pzQBs/EFF3UgR/k=;
	b=m5xiyPN/zmrdlZOrczt+uqGnGR6zP43X4ZuOTXyaLakz1+NGQUrcunbdcm+j5FiR32O8gK
	PTTHDzAyYBKw5psqHVqS6Gv1Vcr1myMwkj6D29TwiJDFpTky2XjYu0VdI8oiT/D1iMRDBF
	dSCKv+bK6GOG6pGfqBKREC5f62QF11U=
To: Julien Grall <julien@xen.org>, Dario Faggioli <dfaggioli@suse.com>,
 =?UTF-8?Q?Anders_T=c3=b6rnqvist?= <anders.tornqvist@codiax.se>,
 xen-devel@lists.xenproject.org, Stefano Stabellini <sstabellini@kernel.org>
Cc: Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>
References: <fe3dd9f0-b035-01fe-3e01-ddf065f182ab@codiax.se>
 <207305e4e2998614767fdcc5ad83ced6de982820.camel@suse.com>
 <e85548f4-e03b-4717-3495-9ed472ed03c9@xen.org>
 <e18ba69efd0d12fc489144024305fd3c6102c330.camel@suse.com>
 <e37fe8a9-c633-3572-e273-2fd03b35b791@codiax.se>
 <744ddde6-a228-82fc-76b9-401926d7963b@xen.org>
 <d92c4191fb81e6d1de636f281c8624d68f8d14fc.camel@suse.com>
 <c9a4e132-5bca-aa76-ab8b-bfeee1cd5a9e@codiax.se>
 <f52baf12308d71b96d0d9be1c7c382a3c5efafbc.camel@suse.com>
 <18ef4619-19ae-90d2-459c-9b5282b49176@codiax.se>
 <a9d80e262760f6692f7086c9b6a0622caf19e795.camel@suse.com>
 <4760cbac-b006-78bc-b064-3265384f6707@xen.org>
 <311bb201bcacfd356f0c0b67856754eceae39e37.camel@suse.com>
 <7f2ec84a-9814-ffb1-0940-e936a80afbbb@xen.org>
 <501664dbdb736eaa4d9c05255dedfd7ad3e694fa.camel@suse.com>
 <1ab7ad80-c027-ffdd-8188-e1ab1fd53335@xen.org>
 <5ad8befd-75a1-4995-e0bb-e1a438f7556d@suse.com>
 <aa46d1ce-cc41-4bda-acbf-0d39a30be289@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: Null scheduler and vwfi native problem
Message-ID: <9a144000-e429-b9be-6562-ef8ba11b54ba@suse.com>
Date: Wed, 3 Feb 2021 13:02:33 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <aa46d1ce-cc41-4bda-acbf-0d39a30be289@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="7wxiMZchvXXR5GW2xwgbBGrPq9ZxYCspR"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--7wxiMZchvXXR5GW2xwgbBGrPq9ZxYCspR
Content-Type: multipart/mixed; boundary="3nxcv8NOxFK7IFjUud9Ja1aV6lqovGwjW";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Julien Grall <julien@xen.org>, Dario Faggioli <dfaggioli@suse.com>,
 =?UTF-8?Q?Anders_T=c3=b6rnqvist?= <anders.tornqvist@codiax.se>,
 xen-devel@lists.xenproject.org, Stefano Stabellini <sstabellini@kernel.org>
Cc: Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <9a144000-e429-b9be-6562-ef8ba11b54ba@suse.com>
Subject: Re: Null scheduler and vwfi native problem
References: <fe3dd9f0-b035-01fe-3e01-ddf065f182ab@codiax.se>
 <207305e4e2998614767fdcc5ad83ced6de982820.camel@suse.com>
 <e85548f4-e03b-4717-3495-9ed472ed03c9@xen.org>
 <e18ba69efd0d12fc489144024305fd3c6102c330.camel@suse.com>
 <e37fe8a9-c633-3572-e273-2fd03b35b791@codiax.se>
 <744ddde6-a228-82fc-76b9-401926d7963b@xen.org>
 <d92c4191fb81e6d1de636f281c8624d68f8d14fc.camel@suse.com>
 <c9a4e132-5bca-aa76-ab8b-bfeee1cd5a9e@codiax.se>
 <f52baf12308d71b96d0d9be1c7c382a3c5efafbc.camel@suse.com>
 <18ef4619-19ae-90d2-459c-9b5282b49176@codiax.se>
 <a9d80e262760f6692f7086c9b6a0622caf19e795.camel@suse.com>
 <4760cbac-b006-78bc-b064-3265384f6707@xen.org>
 <311bb201bcacfd356f0c0b67856754eceae39e37.camel@suse.com>
 <7f2ec84a-9814-ffb1-0940-e936a80afbbb@xen.org>
 <501664dbdb736eaa4d9c05255dedfd7ad3e694fa.camel@suse.com>
 <1ab7ad80-c027-ffdd-8188-e1ab1fd53335@xen.org>
 <5ad8befd-75a1-4995-e0bb-e1a438f7556d@suse.com>
 <aa46d1ce-cc41-4bda-acbf-0d39a30be289@xen.org>
In-Reply-To: <aa46d1ce-cc41-4bda-acbf-0d39a30be289@xen.org>

--3nxcv8NOxFK7IFjUud9Ja1aV6lqovGwjW
Content-Type: multipart/mixed;
 boundary="------------35435530B3D7394E53ECF621"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------35435530B3D7394E53ECF621
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 03.02.21 12:20, Julien Grall wrote:
> Hi Juergen,
>
> On 03/02/2021 11:00, Jürgen Groß wrote:
>> On 03.02.21 10:19, Julien Grall wrote:
>>> Hi,
>>>
>>> On 03/02/2021 07:31, Dario Faggioli wrote:
>>>> On Tue, 2021-02-02 at 15:23 +0000, Julien Grall wrote:
>>>>> In reality, it is probably still too early as a pCPU can be
>>>>> considered
>>>>> quiesced until a call to rcu_lock*() (such rcu_lock_domain()).
>>>>>
>>>> Well, yes, in theory, we could track down which is the first RCU read
>>>> side crit. section on this path, and put the call right before that (if
>>>> I understood what you mean).
>>>
>>> Oh, that's not what I meant. This will indeed be far more complex
>>> than I originally had in mind.
>>>
>>> AFAIU, the RCU uses critical section to protect data. So the
>>> "entering" could be used as "the pCPU is not quiesced" and "exiting"
>>> could be used as "the pCPU is quiesced".
>>>
>>> The concern with my approach is we would need to make sure that Xen
>>> correctly uses the rcu helpers. I know Juergen worked on that
>>> recently, but I don't know whether this is fully complete.
>>
>> I think it is complete, but I can't be sure, of course.
>>
>> One bit missing (for catching some wrong uses of the helpers) is this
>> patch:
>>
>> https://lists.xen.org/archives/html/xen-devel/2020-03/msg01759.html
>>
>> I don't remember why it hasn't been taken, but I think there was a
>> specific reason for that.
>
> Looking at v8 and the patch is suitably reviewed by Jan. So I am a bit
> puzzled as to why this wasn't committed... I had to go to v6 and notice the
> following message:
>
> "albeit to be honest I'm not fully convinced we need to go this far."
>
> Was the implication that his reviewed-by was conditional to someone else
> answering to the e-mail?

I have no record for that being the case.

Patches 1-3 of that series were needed for getting rid of
stop_machine_run() in rcu handling and to fix other problems. Patch 4
was adding some additional ASSERT()s for making sure no potential
deadlocks due to wrong rcu usage could creep in again.

Patch 5 was more a "nice to have" addition in order to avoid any
wrong usage of rcu which should have no real negative impact on the
system stability.

So I believe Jan as the committer didn't want to commit it himself, but
was fine with the overall idea and implementation.

I still think for code sanity it would be nice, but I was rather busy
with Xenstore and event channel security work at that time, so I didn't
urge anyone to take this patch.
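
The quoted idea of treating "inside an RCU read-side critical section" as "not quiesced" can be illustrated with a toy sketch (the counter and function names below are hypothetical and per-CPU in real code, not Xen's actual rcu_read_lock() implementation):

```c
#include <assert.h>

/* Hypothetical per-pCPU nesting counter: the CPU counts as quiesced
 * exactly while it holds no RCU read-side lock. */
static int rcu_read_depth;

static void my_rcu_read_lock(void)
{
    rcu_read_depth++;            /* entering: CPU is no longer quiesced */
}

static void my_rcu_read_unlock(void)
{
    assert(rcu_read_depth > 0);  /* catches unbalanced unlock, in the
                                    spirit of the ASSERT()s mentioned */
    rcu_read_depth--;            /* outermost exit: CPU quiesced again */
}

static int cpu_is_quiesced(void)
{
    return rcu_read_depth == 0;
}
```

A grace period may only complete once every pCPU has passed through a quiesced state in this sense; the debug ASSERT()s discussed in the thread are about making sure all code paths keep this counter balanced.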


Juergen

--------------35435530B3D7394E53ECF621
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------35435530B3D7394E53ECF621--

--3nxcv8NOxFK7IFjUud9Ja1aV6lqovGwjW--

--7wxiMZchvXXR5GW2xwgbBGrPq9ZxYCspR
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmAakNoFAwAAAAAACgkQsN6d1ii/Ey8u
Ewf+PJYeZ/LnUh25BGqGhFLRIEN69n2k74WbllemBha+ICE1nSa+EPz8/ejQmfdGScOXqivQNY9c
hs85z9b3eV7cfTvpMhQ9xPbTgXDxm9bFatYge2D2daL8QLXDJGfyOYDE2qTFqgBC4EdjoBkZtTmE
s0fwgFr3fKh0wUGxmHWoJMt+2C6rPAAvgqyG9WVKlA72nRh8E9XhyA/bljcqek6m53ww+P1Z3tEz
GjS3/uW57UspyKy5W5ES2PterQ3zlDdJHIZ4dIPkdTNJgJSY2n5aaWNwyS7+mQdhJiRktgZ2GxCs
aX2tXNjC/A5ChQtPDHznqxE9Pw1RfcYHPkPh4J8SMw==
=5k71
-----END PGP SIGNATURE-----

--7wxiMZchvXXR5GW2xwgbBGrPq9ZxYCspR--


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 12:03:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 12:03:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80901.148346 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7Gsm-0005ij-H9; Wed, 03 Feb 2021 12:03:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80901.148346; Wed, 03 Feb 2021 12:03:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7Gsm-0005ic-E6; Wed, 03 Feb 2021 12:03:40 +0000
Received: by outflank-mailman (input) for mailman id 80901;
 Wed, 03 Feb 2021 12:03:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3m8k=HF=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1l7Gsl-0005iT-Bd
 for xen-devel@lists.xen.org; Wed, 03 Feb 2021 12:03:39 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4ce66d3f-a6e3-47f3-845d-7751d62a8e09;
 Wed, 03 Feb 2021 12:03:38 +0000 (UTC)
Received: from rochebonne.antioche.eu.org (rochebonne [10.0.0.1])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTP id 113C3a5K026802;
 Wed, 3 Feb 2021 13:03:36 +0100 (MET)
Received: by rochebonne.antioche.eu.org (Postfix, from userid 1210)
 id A6117281D; Wed,  3 Feb 2021 13:03:36 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4ce66d3f-a6e3-47f3-845d-7751d62a8e09
Date: Wed, 3 Feb 2021 13:03:36 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: =?iso-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>
Cc: xen-devel@lists.xen.org
Subject: Re: xenstored file descriptor leak
Message-ID: <20210203120336.GA2511@antioche.eu.org>
References: <20210202183735.GA25046@mail.soc.lip6.fr>
 <b6ed10d4-7976-6a61-434d-35e467763deb@suse.com>
 <20210203075721.GB445@antioche.eu.org>
 <7768ff4b-837d-965b-61c7-b6794f677d9e@suse.com>
 <20210203081621.GD445@antioche.eu.org>
 <89ddaac0-eb05-8ddb-465a-60d78e4009eb@suse.com>
 <20210203114832.GA1549@antioche.eu.org>
 <73c0dd26-d3a5-221f-84b8-06055ee62889@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <73c0dd26-d3a5-221f-84b8-06055ee62889@suse.com>
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Wed, 03 Feb 2021 13:03:37 +0100 (MET)

On Wed, Feb 03, 2021 at 12:54:23PM +0100, Jürgen Groß wrote:
> On 03.02.21 12:48, Manuel Bouyer wrote:
> > On Wed, Feb 03, 2021 at 09:21:32AM +0100, Jürgen Groß wrote:
> > > [...]
> > > This shouldn't happen in case we are closing the socket actively.
> > > 
> > > In the end we should just do a talloc_free(conn) in
> > > ignore_connection() if it is a socket based one. This should revert
> > > the critical modification of the XSA-115 fixes for sockets while
> > > keeping the desired effect for domain connections.
> > 
> > Hello
> > here's an updated patch which works for me. Does anyone see a problem
> > with it ? If not I will submit it for commit.
> > 
> 
> Do you really need the first hunk? I would have thought just freeing
> conn in ignore_connection() is enough.
> 
> In case you are seeing problems without the first hunk, please say so
> in a comment added to this hunk in order to avoid it being removed
> sometime in the future.

No I don't need it. From your previous comments I thought it was a good idea
to keep it, but I can remove it if you think it's better.

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 ans d'experience feront toujours la difference
--


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 12:14:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 12:14:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80904.148359 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7H2l-0006ne-GH; Wed, 03 Feb 2021 12:13:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80904.148359; Wed, 03 Feb 2021 12:13:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7H2l-0006nX-D8; Wed, 03 Feb 2021 12:13:59 +0000
Received: by outflank-mailman (input) for mailman id 80904;
 Wed, 03 Feb 2021 12:13:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dUjn=HF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1l7H2k-0006nS-9m
 for xen-devel@lists.xen.org; Wed, 03 Feb 2021 12:13:58 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 305990be-0b98-42f2-9e82-d03a93f5cfc2;
 Wed, 03 Feb 2021 12:13:55 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7547AAD26;
 Wed,  3 Feb 2021 12:13:54 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 305990be-0b98-42f2-9e82-d03a93f5cfc2
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612354434; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=iVGXOyF7HVFqu80y6fjOPcCAFU1P1go4UCuwi+wl5Jk=;
	b=B4GRQwb9spIwtX17Z0/+6WJwMU+dsd3pXuIMJ2HMBWlDsE4AWhdP6blpHmr+wPrVv/lwX5
	JQfmvUcR8/7F3lRHBVnUhmEH2XuvGjCaWRPpPHxA5xuki9rfS1A2t8HVvXelvHDi53ik+A
	PLZLd5NyzddZa7eyCSYTfeZU+xLHsCs=
Subject: Re: xenstored file descriptor leak
To: Manuel Bouyer <bouyer@antioche.eu.org>
Cc: xen-devel@lists.xen.org
References: <20210202183735.GA25046@mail.soc.lip6.fr>
 <b6ed10d4-7976-6a61-434d-35e467763deb@suse.com>
 <20210203075721.GB445@antioche.eu.org>
 <7768ff4b-837d-965b-61c7-b6794f677d9e@suse.com>
 <20210203081621.GD445@antioche.eu.org>
 <89ddaac0-eb05-8ddb-465a-60d78e4009eb@suse.com>
 <20210203114832.GA1549@antioche.eu.org>
 <73c0dd26-d3a5-221f-84b8-06055ee62889@suse.com>
 <20210203120336.GA2511@antioche.eu.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <8e4f159a-9519-3576-8f6b-3800a0e84500@suse.com>
Date: Wed, 3 Feb 2021 13:13:53 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <20210203120336.GA2511@antioche.eu.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="bAFJgMTKr2fjnuyYUXo25dZkUa81jmLlf"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--bAFJgMTKr2fjnuyYUXo25dZkUa81jmLlf
Content-Type: multipart/mixed; boundary="fNwDouoacOfXS2pFB2GLlERVJUAromO7z";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Manuel Bouyer <bouyer@antioche.eu.org>
Cc: xen-devel@lists.xen.org
Message-ID: <8e4f159a-9519-3576-8f6b-3800a0e84500@suse.com>
Subject: Re: xenstored file descriptor leak
References: <20210202183735.GA25046@mail.soc.lip6.fr>
 <b6ed10d4-7976-6a61-434d-35e467763deb@suse.com>
 <20210203075721.GB445@antioche.eu.org>
 <7768ff4b-837d-965b-61c7-b6794f677d9e@suse.com>
 <20210203081621.GD445@antioche.eu.org>
 <89ddaac0-eb05-8ddb-465a-60d78e4009eb@suse.com>
 <20210203114832.GA1549@antioche.eu.org>
 <73c0dd26-d3a5-221f-84b8-06055ee62889@suse.com>
 <20210203120336.GA2511@antioche.eu.org>
In-Reply-To: <20210203120336.GA2511@antioche.eu.org>

--fNwDouoacOfXS2pFB2GLlERVJUAromO7z
Content-Type: multipart/mixed;
 boundary="------------7AB700267E334A3B4A1E50E2"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------7AB700267E334A3B4A1E50E2
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 03.02.21 13:03, Manuel Bouyer wrote:
> On Wed, Feb 03, 2021 at 12:54:23PM +0100, Jürgen Groß wrote:
>> On 03.02.21 12:48, Manuel Bouyer wrote:
>>> On Wed, Feb 03, 2021 at 09:21:32AM +0100, Jürgen Groß wrote:
>>>> [...]
>>>> This shouldn't happen in case we are closing the socket actively.
>>>>
>>>> In the end we should just do a talloc_free(conn) in
>>>> ignore_connection() if it is a socket based one. This should revert
>>>> the critical modification of the XSA-115 fixes for sockets while
>>>> keeping the desired effect for domain connections.
>>>
>>> Hello
>>> here's an updated patch which works for me. Does anyone see a problem
>>> with it ? If not I will submit it for commit.
>>>
>>
>> Do you really need the first hunk? I would have thought just freeing
>> conn in ignore_connection() is enough.
>>
>> In case you are seeing problems without the first hunk, please say so
>> in a comment added to this hunk in order to avoid it being removed
>> sometime in the future.
>
> No I don't need it. From your previous comments I thought it was a good idea
> to keep it, but I can remove it if you think it's better.

Yes, please remove it.


Juergen

--------------7AB700267E334A3B4A1E50E2
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------7AB700267E334A3B4A1E50E2--

--fNwDouoacOfXS2pFB2GLlERVJUAromO7z--

--bAFJgMTKr2fjnuyYUXo25dZkUa81jmLlf
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmAak4EFAwAAAAAACgkQsN6d1ii/Ey/f
aQf+PnwXVfnyTP46pvz1WdgvFFQxapcWltAych83xPixq0FsfH4p9GrYhISCkPMwsXOVTJ9TjIRy
FsA6Vl+FeG2OfNzGTFM0hhEcGqknbMG5j5lQIJhewCSWT5MiMC4B5lfiqnAJYh85r5dTIOsvZcvU
qWLFi9lsUrpNn3XLOb/dX28mIZ+GyzP/tQ9PS4u9GsHe/vaiHYqIbCq/Dssvk7Shrby+vlEE1FzX
8bKOPDb31C2kh+qdY5YHS04L75NUnImqpAt0fGIZLnkU308lJ99lbaJk9VJw7YmltyPUwBFx8ESY
WFkbmI8JV5EhvIz5gCSF0weCaz1sklHX6ccGujJXUw==
=6/RA
-----END PGP SIGNATURE-----

--bAFJgMTKr2fjnuyYUXo25dZkUa81jmLlf--


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 12:17:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 12:17:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80906.148371 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7H6M-0006xV-0W; Wed, 03 Feb 2021 12:17:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80906.148371; Wed, 03 Feb 2021 12:17:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7H6L-0006xO-TL; Wed, 03 Feb 2021 12:17:41 +0000
Received: by outflank-mailman (input) for mailman id 80906;
 Wed, 03 Feb 2021 12:17:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3m8k=HF=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1l7H6K-0006xJ-En
 for xen-devel@lists.xen.org; Wed, 03 Feb 2021 12:17:40 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b047a686-7b42-404c-b8fe-557aded63f29;
 Wed, 03 Feb 2021 12:17:39 +0000 (UTC)
Received: from rochebonne.antioche.eu.org (rochebonne [10.0.0.1])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTP id 113CHcQ2026559;
 Wed, 3 Feb 2021 13:17:38 +0100 (MET)
Received: by rochebonne.antioche.eu.org (Postfix, from userid 1210)
 id 23DC9281D; Wed,  3 Feb 2021 13:17:38 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b047a686-7b42-404c-b8fe-557aded63f29
Date: Wed, 3 Feb 2021 13:17:38 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: =?iso-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>
Cc: xen-devel@lists.xen.org
Subject: Re: xenstored file descriptor leak
Message-ID: <20210203121738.GA2610@antioche.eu.org>
References: <20210202183735.GA25046@mail.soc.lip6.fr>
 <b6ed10d4-7976-6a61-434d-35e467763deb@suse.com>
 <20210203075721.GB445@antioche.eu.org>
 <7768ff4b-837d-965b-61c7-b6794f677d9e@suse.com>
 <20210203081621.GD445@antioche.eu.org>
 <89ddaac0-eb05-8ddb-465a-60d78e4009eb@suse.com>
 <20210203114832.GA1549@antioche.eu.org>
 <73c0dd26-d3a5-221f-84b8-06055ee62889@suse.com>
 <20210203120336.GA2511@antioche.eu.org>
 <8e4f159a-9519-3576-8f6b-3800a0e84500@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <8e4f159a-9519-3576-8f6b-3800a0e84500@suse.com>
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Wed, 03 Feb 2021 13:17:38 +0100 (MET)

On Wed, Feb 03, 2021 at 01:13:53PM +0100, Jürgen Groß wrote:
> On 03.02.21 13:03, Manuel Bouyer wrote:
> > On Wed, Feb 03, 2021 at 12:54:23PM +0100, Jürgen Groß wrote:
> > > On 03.02.21 12:48, Manuel Bouyer wrote:
> > > > On Wed, Feb 03, 2021 at 09:21:32AM +0100, Jürgen Groß wrote:
> > > > > [...]
> > > > > This shouldn't happen in case we are closing the socket actively.
> > > > > 
> > > > > In the end we should just do a talloc_free(conn) in
> > > > > ignore_connection() if it is a socket based one. This should revert
> > > > > the critical modification of the XSA-115 fixes for sockets while
> > > > > keeping the desired effect for domain connections.
> > > > 
> > > > Hello
> > > > here's an updated patch which works for me. Does anyone see a problem
> > > > with it ? If not I will submit it for commit.
> > > > 
> > > 
> > > Do you really need the first hunk? I would have thought just freeing
> > > conn in ignore_connection() is enough.
> > > 
> > > In case you are seeing problems without the first hunk, please say so
> > > in a comment added to this hunk in order to avoid it being removed
> > > sometimes in the future.
> > 
> > No I don't need it. From your previous comments I thought it was a good idea
> > to keep it, but I can remove it if you think it's better.
> 
> Yes, please remove it.

Will do

In the patch I'm going to submit, may I add

Reviewed-by: Jürgen Groß <jgross@suse.com>
?

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 ans d'experience feront toujours la difference
--


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 12:22:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 12:22:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80908.148383 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7HAZ-0007wh-Hx; Wed, 03 Feb 2021 12:22:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80908.148383; Wed, 03 Feb 2021 12:22:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7HAZ-0007wa-ES; Wed, 03 Feb 2021 12:22:03 +0000
Received: by outflank-mailman (input) for mailman id 80908;
 Wed, 03 Feb 2021 12:22:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dUjn=HF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1l7HAY-0007wV-Dn
 for xen-devel@lists.xen.org; Wed, 03 Feb 2021 12:22:02 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9ab38da2-717b-4953-a11d-b19c97af2a39;
 Wed, 03 Feb 2021 12:22:01 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7DFD9AEE7;
 Wed,  3 Feb 2021 12:22:00 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9ab38da2-717b-4953-a11d-b19c97af2a39
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612354920; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=VwIUpkNamluGG5cDWQAEaHPOjzv1rFKNI2wjcey9h/4=;
	b=goN3IectAS6uBJjsTObaqIwS0berpLSGKYI6jkftPvsC39suT7xmldUXIkEKkovBtQ0ZZs
	urXZBOMb3N+nlXLNG70HqcYiDDSdQv8ceCnOXzoUtNloLlRsu6ELUUh6l7ezwQmQcM/iTb
	FnPeZU2ILrVGLaHld/5+519Ievsja54=
Subject: Re: xenstored file descriptor leak
To: Manuel Bouyer <bouyer@antioche.eu.org>
Cc: xen-devel@lists.xen.org
References: <20210202183735.GA25046@mail.soc.lip6.fr>
 <b6ed10d4-7976-6a61-434d-35e467763deb@suse.com>
 <20210203075721.GB445@antioche.eu.org>
 <7768ff4b-837d-965b-61c7-b6794f677d9e@suse.com>
 <20210203081621.GD445@antioche.eu.org>
 <89ddaac0-eb05-8ddb-465a-60d78e4009eb@suse.com>
 <20210203114832.GA1549@antioche.eu.org>
 <73c0dd26-d3a5-221f-84b8-06055ee62889@suse.com>
 <20210203120336.GA2511@antioche.eu.org>
 <8e4f159a-9519-3576-8f6b-3800a0e84500@suse.com>
 <20210203121738.GA2610@antioche.eu.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <cbf97615-a81e-4a5d-6cc2-ce3b850ed2f5@suse.com>
Date: Wed, 3 Feb 2021 13:21:59 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <20210203121738.GA2610@antioche.eu.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="dE2Xx7OnGdgonja0yAECc1fMZyYikr0r8"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--dE2Xx7OnGdgonja0yAECc1fMZyYikr0r8
Content-Type: multipart/mixed; boundary="3FmQZ2Z9Q9fITSrbGyWmd6N4uqn59D9A2";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Manuel Bouyer <bouyer@antioche.eu.org>
Cc: xen-devel@lists.xen.org
Message-ID: <cbf97615-a81e-4a5d-6cc2-ce3b850ed2f5@suse.com>
Subject: Re: xenstored file descriptor leak
References: <20210202183735.GA25046@mail.soc.lip6.fr>
 <b6ed10d4-7976-6a61-434d-35e467763deb@suse.com>
 <20210203075721.GB445@antioche.eu.org>
 <7768ff4b-837d-965b-61c7-b6794f677d9e@suse.com>
 <20210203081621.GD445@antioche.eu.org>
 <89ddaac0-eb05-8ddb-465a-60d78e4009eb@suse.com>
 <20210203114832.GA1549@antioche.eu.org>
 <73c0dd26-d3a5-221f-84b8-06055ee62889@suse.com>
 <20210203120336.GA2511@antioche.eu.org>
 <8e4f159a-9519-3576-8f6b-3800a0e84500@suse.com>
 <20210203121738.GA2610@antioche.eu.org>
In-Reply-To: <20210203121738.GA2610@antioche.eu.org>

--3FmQZ2Z9Q9fITSrbGyWmd6N4uqn59D9A2
Content-Type: multipart/mixed;
 boundary="------------701DF3D0F8A84529FA650EDC"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------701DF3D0F8A84529FA650EDC
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 03.02.21 13:17, Manuel Bouyer wrote:
> On Wed, Feb 03, 2021 at 01:13:53PM +0100, Jürgen Groß wrote:
>> On 03.02.21 13:03, Manuel Bouyer wrote:
>>> On Wed, Feb 03, 2021 at 12:54:23PM +0100, Jürgen Groß wrote:
>>>> On 03.02.21 12:48, Manuel Bouyer wrote:
>>>>> On Wed, Feb 03, 2021 at 09:21:32AM +0100, Jürgen Groß wrote:
>>>>>> [...]
>>>>>> This shouldn't happen in case we are closing the socket actively.
>>>>>>
>>>>>> In the end we should just do a talloc_free(conn) in
>>>>>> ignore_connection() if it is a socket based one. This should revert
>>>>>> the critical modification of the XSA-115 fixes for sockets while
>>>>>> keeping the desired effect for domain connections.
>>>>>
>>>>> Hello
>>>>> here's an updated patch which works for me. Does anyone see a problem
>>>>> with it ? If not I will submit it for commit.
>>>>>
>>>>
>>>> Do you really need the first hunk? I would have thought just freeing
>>>> conn in ignore_connection() is enough.
>>>>
>>>> In case you are seeing problems without the first hunk, please say so
>>>> in a comment added to this hunk in order to avoid it being removed
>>>> sometimes in the future.
>>>
>>> No I don't need it. From your previous comments I thought it was a good idea
>>> to keep it, but I can remove it if you think it's better.
>>
>> Yes, please remove it.
>
> Will do
>
> In the patch I'm going to submit, may I add
>
> Reviewed-by: Jürgen Groß <jgross@suse.com>
> ?
>

Let me see the patch (including commit message) first, please.

So just send out as a regular patch, and I'll respond accordingly. :-)


Juergen

--------------701DF3D0F8A84529FA650EDC
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------701DF3D0F8A84529FA650EDC--

--3FmQZ2Z9Q9fITSrbGyWmd6N4uqn59D9A2--

--dE2Xx7OnGdgonja0yAECc1fMZyYikr0r8
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmAalWcFAwAAAAAACgkQsN6d1ii/Ey9O
Jgf8DUjNoZmav0MJTHm3reJkRMcuCo1k56d1YZtI3sa0hXpVOsw5WXFATzg3RRWw1F1tLDHRZpqa
pF5JJ+BWrh+QpCxXMmPPJSk8iLAH7cNq9DJHl8tyFVzWvdIpzfbgsTWftD7Zdqk3KPtDyTpLf7Gq
kA5zYtyR+eQQepMm115x7WBVi/tEBOCOH7rb49kJg6upopvmKtq4C54ciU09ErIUEguycOAOpKKE
VZOkx3HiexD/y4byR8F0PPP4cmrQYIvc246Gn2hriTxuhR/uUh/fTVI5Tyzkuamq4bT9wBrpxP1w
V3Z7ckUImgirPin910D7WZ5gpfib18Ps4wRfKUnVTQ==
=J/0t
-----END PGP SIGNATURE-----

--dE2Xx7OnGdgonja0yAECc1fMZyYikr0r8--


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 12:33:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 12:33:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80911.148395 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7HLQ-0000bT-Nb; Wed, 03 Feb 2021 12:33:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80911.148395; Wed, 03 Feb 2021 12:33:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7HLQ-0000bM-KZ; Wed, 03 Feb 2021 12:33:16 +0000
Received: by outflank-mailman (input) for mailman id 80911;
 Wed, 03 Feb 2021 12:33:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3m8k=HF=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1l7HLP-0000bH-3Z
 for xen-devel@lists.xen.org; Wed, 03 Feb 2021 12:33:15 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9b429eff-50ba-4fc9-8e6f-edfafd693c2b;
 Wed, 03 Feb 2021 12:33:13 +0000 (UTC)
Received: from rochebonne.antioche.eu.org (rochebonne
 [IPv6:2001:41d0:fe9d:1100:221:70ff:fe0c:9885])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTP id 113CXC52000249;
 Wed, 3 Feb 2021 13:33:12 +0100 (MET)
Received: by rochebonne.antioche.eu.org (Postfix, from userid 1210)
 id 229AB281D; Wed,  3 Feb 2021 13:33:12 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9b429eff-50ba-4fc9-8e6f-edfafd693c2b
Date: Wed, 3 Feb 2021 13:33:12 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: =?iso-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>
Cc: xen-devel@lists.xen.org
Subject: Re: xenstored file descriptor leak
Message-ID: <20210203123312.GA2906@antioche.eu.org>
References: <20210203075721.GB445@antioche.eu.org>
 <7768ff4b-837d-965b-61c7-b6794f677d9e@suse.com>
 <20210203081621.GD445@antioche.eu.org>
 <89ddaac0-eb05-8ddb-465a-60d78e4009eb@suse.com>
 <20210203114832.GA1549@antioche.eu.org>
 <73c0dd26-d3a5-221f-84b8-06055ee62889@suse.com>
 <20210203120336.GA2511@antioche.eu.org>
 <8e4f159a-9519-3576-8f6b-3800a0e84500@suse.com>
 <20210203121738.GA2610@antioche.eu.org>
 <cbf97615-a81e-4a5d-6cc2-ce3b850ed2f5@suse.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="d6Gm4EdcadzBjdND"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <cbf97615-a81e-4a5d-6cc2-ce3b850ed2f5@suse.com>
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [IPv6:2001:41d0:fe9d:1100:a00:20ff:fe1c:276e]); Wed, 03 Feb 2021 13:33:12 +0100 (MET)


--d6Gm4EdcadzBjdND
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit

On Wed, Feb 03, 2021 at 01:21:59PM +0100, Jürgen Groß wrote:
> > Will do
> > 
> > In the patch I'm going to submit, may I add
> > 
> > Reviewed-by: Jürgen Groß <jgross@suse.com>
> > ?
> > 
> 
> Let me see the patch (including commit message) first, please.
> 
> So just send out as a regular patch, and I'll respond accordingly. :-)

Attached is the new patch.

As commit message I suggest:

xenstored: close socket connections on error

On error, don't keep socket connections in the ignored state but close them.
When the remote end of a socket is closed, xenstored will flag it as an
error and switch the connection to ignored. But on some OSes (e.g.
NetBSD), poll(2) will return only POLLIN in this case, so sockets in the
ignored state will stay open forever in xenstored (and it will spin with
100% CPU usage).

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 ans d'experience feront toujours la difference
--

--d6Gm4EdcadzBjdND
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="xenstored_core.diff"

--- xenstored_core.c.orig	2021-02-03 13:25:38.568308454 +0100
+++ xenstored_core.c	2021-02-03 13:25:44.282429872 +0100
@@ -1440,6 +1440,9 @@
 
 	talloc_free(conn->in);
 	conn->in = NULL;
+	/* if this is a socket connection, drop it now */
+	if (conn->fd >= 0)
+		talloc_free(conn);
 }
 
 static const char *sockmsg_string(enum xsd_sockmsg_type type)

--d6Gm4EdcadzBjdND--


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 12:42:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 12:42:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80913.148407 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7HU7-0001ei-LG; Wed, 03 Feb 2021 12:42:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80913.148407; Wed, 03 Feb 2021 12:42:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7HU7-0001eb-I9; Wed, 03 Feb 2021 12:42:15 +0000
Received: by outflank-mailman (input) for mailman id 80913;
 Wed, 03 Feb 2021 12:42:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dUjn=HF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1l7HU6-0001eW-RT
 for xen-devel@lists.xen.org; Wed, 03 Feb 2021 12:42:14 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 65e17e11-91e0-4295-a011-c4e1b6017d45;
 Wed, 03 Feb 2021 12:42:13 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 188CBAD26;
 Wed,  3 Feb 2021 12:42:13 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 65e17e11-91e0-4295-a011-c4e1b6017d45
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612356133; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=91LYg4tGYNtPQ9Oa2Z/x4yec1JUVP0RzZjcMFpFb2w8=;
	b=U/migWGX5VE87spMeRUyJD6zZ8ocPmXvKd+MiEQlAhjM9kxmogkQsqmC94HoAsGvaJkJiR
	QspNzuV4AERiI5ykfVMVRtdHc96Q+nkqvMJAypUuSiy3SbaFy4GI2Eiky09o2GXpRdoL3D
	UJjIT4tmeQDj3EO+hwOOhczn3AHk3og=
Subject: Re: xenstored file descriptor leak
To: Manuel Bouyer <bouyer@antioche.eu.org>
Cc: xen-devel@lists.xen.org
References: <20210203075721.GB445@antioche.eu.org>
 <7768ff4b-837d-965b-61c7-b6794f677d9e@suse.com>
 <20210203081621.GD445@antioche.eu.org>
 <89ddaac0-eb05-8ddb-465a-60d78e4009eb@suse.com>
 <20210203114832.GA1549@antioche.eu.org>
 <73c0dd26-d3a5-221f-84b8-06055ee62889@suse.com>
 <20210203120336.GA2511@antioche.eu.org>
 <8e4f159a-9519-3576-8f6b-3800a0e84500@suse.com>
 <20210203121738.GA2610@antioche.eu.org>
 <cbf97615-a81e-4a5d-6cc2-ce3b850ed2f5@suse.com>
 <20210203123312.GA2906@antioche.eu.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <25ba02f1-ec03-3bdd-b378-946889271969@suse.com>
Date: Wed, 3 Feb 2021 13:42:12 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <20210203123312.GA2906@antioche.eu.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="ffMcAP9bsnB8FxgIf9Ujp56vHVV5CmbcL"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--ffMcAP9bsnB8FxgIf9Ujp56vHVV5CmbcL
Content-Type: multipart/mixed; boundary="uBCt7eytPmlC2sArBPgLX7mbF43VdfmK6";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Manuel Bouyer <bouyer@antioche.eu.org>
Cc: xen-devel@lists.xen.org
Message-ID: <25ba02f1-ec03-3bdd-b378-946889271969@suse.com>
Subject: Re: xenstored file descriptor leak
References: <20210203075721.GB445@antioche.eu.org>
 <7768ff4b-837d-965b-61c7-b6794f677d9e@suse.com>
 <20210203081621.GD445@antioche.eu.org>
 <89ddaac0-eb05-8ddb-465a-60d78e4009eb@suse.com>
 <20210203114832.GA1549@antioche.eu.org>
 <73c0dd26-d3a5-221f-84b8-06055ee62889@suse.com>
 <20210203120336.GA2511@antioche.eu.org>
 <8e4f159a-9519-3576-8f6b-3800a0e84500@suse.com>
 <20210203121738.GA2610@antioche.eu.org>
 <cbf97615-a81e-4a5d-6cc2-ce3b850ed2f5@suse.com>
 <20210203123312.GA2906@antioche.eu.org>
In-Reply-To: <20210203123312.GA2906@antioche.eu.org>

--uBCt7eytPmlC2sArBPgLX7mbF43VdfmK6
Content-Type: multipart/mixed;
 boundary="------------84524E00F3DAB670F147AEA8"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------84524E00F3DAB670F147AEA8
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Transfer-Encoding: 8bit

On 03.02.21 13:33, Manuel Bouyer wrote:
> On Wed, Feb 03, 2021 at 01:21:59PM +0100, Jürgen Groß wrote:
>>> Will do
>>>
>>> In the patch I'm going to submit, may I add
>>>
>>> Reviewed-by: Jürgen Groß <jgross@suse.com>
>>> ?
>>>
>>
>> Let me see the patch (including commit message) first, please.
>>
>> So just send out as a regular patch, and I'll respond accordingly. :-)
>
> Attached is the new patch.
>
> As commit message I suggest:
>
> xenstored: close socket connections on error
>
> On error, don't keep socket connections in the ignored state but close them.
> When the remote end of a socket is closed, xenstored will flag it as an
> error and switch the connection to ignored. But on some OSes (e.g.
> NetBSD), poll(2) will return only POLLIN in this case, so sockets in the
> ignored state will stay open forever in xenstored (and it will spin with
> 100% CPU usage).
>

Uh, this is no regular patch. You've sent correct patches before,
haven't you? The patch should have a Signed-off-by: and in this case a
Fixes: tag as well (hint: the patch breaking your use case was
commit d2fa370d3ef9cbe22d7256c608671cdcdf6e0083). See

https://wiki.xenproject.org/wiki/Submitting_Xen_Project_Patches
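
For reference, a commit message following that guidance would end with a
trailer block along these lines (a sketch only: the quoted subject of the
Fixes: commit is a placeholder and must be copied verbatim from git log, and
a Reviewed-by: tag is added only once the reviewer grants it):

```
xenstored: close socket connections on error

<commit message body as suggested earlier in the thread>

Fixes: d2fa370d3ef9 ("<subject of the offending commit, from git log>")
Signed-off-by: Manuel Bouyer <bouyer@antioche.eu.org>
```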


Juergen

--------------84524E00F3DAB670F147AEA8
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------84524E00F3DAB670F147AEA8--

--uBCt7eytPmlC2sArBPgLX7mbF43VdfmK6--

--ffMcAP9bsnB8FxgIf9Ujp56vHVV5CmbcL
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmAamiQFAwAAAAAACgkQsN6d1ii/Ey+9
qAf+JOIicE6OdczwAAU9M37NPtjrh1gjYHufaBUjB73Ym48odISQHlz+pQ1EDdZaYKBIZH9acB9c
jQ1HK54T1s7arWkvXrHjPg2ldGrMFXRlxXzAUaUNJgbQWWF8/jAzHBNw4k8n5vkGZJO9hmUO0gTl
kMTwJa5ySAa0rOWaFre8GgMs3aN+uOnaAEsd06Ob9Ou9ONE9REpwIKI9Ptb9UKFAlzTvHWeGItFN
0lAufCIxDJ7rEkgKKxo6QBe4KOkX1bolxb304GTH/uIitXeTn125OOtWCLUNfhGBC3McFGIBSc87
EUyBGZ60d/ImHurMfU9XmBzUd4nY65Lhx/Kn2/93Og==
=qBzP
-----END PGP SIGNATURE-----

--ffMcAP9bsnB8FxgIf9Ujp56vHVV5CmbcL--


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 12:47:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 12:47:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80915.148419 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7HZC-0001pO-9R; Wed, 03 Feb 2021 12:47:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80915.148419; Wed, 03 Feb 2021 12:47:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7HZC-0001pH-6F; Wed, 03 Feb 2021 12:47:30 +0000
Received: by outflank-mailman (input) for mailman id 80915;
 Wed, 03 Feb 2021 12:47:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3m8k=HF=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1l7HZA-0001pC-Ji
 for xen-devel@lists.xen.org; Wed, 03 Feb 2021 12:47:28 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9e538226-0e32-4fb6-8b9c-01844935557d;
 Wed, 03 Feb 2021 12:47:27 +0000 (UTC)
Received: from rochebonne.antioche.eu.org (rochebonne [10.0.0.1])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTP id 113ClQaN001145;
 Wed, 3 Feb 2021 13:47:26 +0100 (MET)
Received: by rochebonne.antioche.eu.org (Postfix, from userid 1210)
 id 45730281D; Wed,  3 Feb 2021 13:47:26 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9e538226-0e32-4fb6-8b9c-01844935557d
Date: Wed, 3 Feb 2021 13:47:26 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: =?iso-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>
Cc: xen-devel@lists.xen.org
Subject: Re: xenstored file descriptor leak
Message-ID: <20210203124726.GA3034@antioche.eu.org>
References: <20210203081621.GD445@antioche.eu.org>
 <89ddaac0-eb05-8ddb-465a-60d78e4009eb@suse.com>
 <20210203114832.GA1549@antioche.eu.org>
 <73c0dd26-d3a5-221f-84b8-06055ee62889@suse.com>
 <20210203120336.GA2511@antioche.eu.org>
 <8e4f159a-9519-3576-8f6b-3800a0e84500@suse.com>
 <20210203121738.GA2610@antioche.eu.org>
 <cbf97615-a81e-4a5d-6cc2-ce3b850ed2f5@suse.com>
 <20210203123312.GA2906@antioche.eu.org>
 <25ba02f1-ec03-3bdd-b378-946889271969@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <25ba02f1-ec03-3bdd-b378-946889271969@suse.com>
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Wed, 03 Feb 2021 13:47:26 +0100 (MET)

On Wed, Feb 03, 2021 at 01:42:12PM +0100, Jürgen Groß wrote:
> Uh, this is no regular patch.

I thought that by "regular patch" you meant a plain diff -u.

> You've sent correct patches before,

Yes, and it's very time-consuming. This is why I want to have it right the
first time and not go through several iterations with this protocol.

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 ans d'experience feront toujours la difference
--


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 12:56:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 12:56:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80917.148431 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7HhY-0002sW-5J; Wed, 03 Feb 2021 12:56:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80917.148431; Wed, 03 Feb 2021 12:56:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7HhY-0002sP-27; Wed, 03 Feb 2021 12:56:08 +0000
Received: by outflank-mailman (input) for mailman id 80917;
 Wed, 03 Feb 2021 12:56:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7HhW-0002s1-J9; Wed, 03 Feb 2021 12:56:06 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7HhW-00089K-0N; Wed, 03 Feb 2021 12:56:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7HhV-0004e6-M7; Wed, 03 Feb 2021 12:56:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l7HhV-0004on-Li; Wed, 03 Feb 2021 12:56:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=t4C6iUzN+/Vu3LsMI5lE14X0Fgyj7RU43O73sUZ2xsw=; b=5GY1LzwXCPs3BV4kRcpEoPX2Yo
	2HxxxW43njSZJpddLjytCRTnXYgUnWv3Kb0z2cl9cZ0wK4tpmiYsDdTEIyrXXnxiP0FMFKWvST/SR
	CbP9qpWw+9H/bMfKKZVO8u5dRlshsqWekCdEh0bwOhnsQu9cGuliqWr4uWTVkypeFQjA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158975-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 158975: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=618e6a1f21a11eaee0a92e19c753969eb4a1b198
X-Osstest-Versions-That:
    ovmf=3f90ac3ec03512e2374cd2968c047a7e856a8965
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 03 Feb 2021 12:56:05 +0000

flight 158975 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158975/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 618e6a1f21a11eaee0a92e19c753969eb4a1b198
baseline version:
 ovmf                 3f90ac3ec03512e2374cd2968c047a7e856a8965

Last test of basis   158959  2021-02-02 16:40:59 Z    0 days
Testing same since   158975  2021-02-03 07:14:36 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  gechao <gechao@greatwall.com.cn>
  Marc Moisson-Franckhauser <marc.moisson-franckhauser@arm.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Vijayenthiran Subramaniam <vijayenthiran.subramaniam@arm.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   3f90ac3ec0..618e6a1f21  618e6a1f21a11eaee0a92e19c753969eb4a1b198 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 12:58:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 12:58:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80920.148446 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7Hjl-00030F-Hf; Wed, 03 Feb 2021 12:58:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80920.148446; Wed, 03 Feb 2021 12:58:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7Hjl-000307-E6; Wed, 03 Feb 2021 12:58:25 +0000
Received: by outflank-mailman (input) for mailman id 80920;
 Wed, 03 Feb 2021 12:58:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dUjn=HF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1l7Hjj-000301-9i
 for xen-devel@lists.xen.org; Wed, 03 Feb 2021 12:58:23 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 516582c6-847d-41df-9d95-bb99cffd336e;
 Wed, 03 Feb 2021 12:58:21 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 664DEAD57;
 Wed,  3 Feb 2021 12:58:20 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 516582c6-847d-41df-9d95-bb99cffd336e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612357100; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=IpqIkmH9UMEK6IApALt6vZdV3DT5fATDQFbcCCrrq9U=;
	b=NWAlEo2Gs55oxDqVlXMvtjt/GoGvneZKsQUSJoKxbxKLekIztqQS0YP59Z3fy6Ewo4lBLt
	oH5oja8v7lcNkrddbSoQve4M7xympdDTDTw4Bqe9iHlNJkaE4rBK7sMZwhpEms8/hViYNt
	5/blxfsk0rsH9D1G/rdv0u98Pc3s/2I=
To: Manuel Bouyer <bouyer@antioche.eu.org>
Cc: xen-devel@lists.xen.org
References: <20210203081621.GD445@antioche.eu.org>
 <89ddaac0-eb05-8ddb-465a-60d78e4009eb@suse.com>
 <20210203114832.GA1549@antioche.eu.org>
 <73c0dd26-d3a5-221f-84b8-06055ee62889@suse.com>
 <20210203120336.GA2511@antioche.eu.org>
 <8e4f159a-9519-3576-8f6b-3800a0e84500@suse.com>
 <20210203121738.GA2610@antioche.eu.org>
 <cbf97615-a81e-4a5d-6cc2-ce3b850ed2f5@suse.com>
 <20210203123312.GA2906@antioche.eu.org>
 <25ba02f1-ec03-3bdd-b378-946889271969@suse.com>
 <20210203124726.GA3034@antioche.eu.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: xenstored file descriptor leak
Message-ID: <411aa7b7-3006-61ce-e2e2-9b9d51658b99@suse.com>
Date: Wed, 3 Feb 2021 13:58:19 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <20210203124726.GA3034@antioche.eu.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="CJhONyXnIOZaTMVf6VnLVnCJzRNarobx1"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--CJhONyXnIOZaTMVf6VnLVnCJzRNarobx1
Content-Type: multipart/mixed; boundary="mN879XEnM1Pq3GOUiyfXoAdIluWaeMFRP";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Manuel Bouyer <bouyer@antioche.eu.org>
Cc: xen-devel@lists.xen.org
Message-ID: <411aa7b7-3006-61ce-e2e2-9b9d51658b99@suse.com>
Subject: Re: xenstored file descriptor leak
References: <20210203081621.GD445@antioche.eu.org>
 <89ddaac0-eb05-8ddb-465a-60d78e4009eb@suse.com>
 <20210203114832.GA1549@antioche.eu.org>
 <73c0dd26-d3a5-221f-84b8-06055ee62889@suse.com>
 <20210203120336.GA2511@antioche.eu.org>
 <8e4f159a-9519-3576-8f6b-3800a0e84500@suse.com>
 <20210203121738.GA2610@antioche.eu.org>
 <cbf97615-a81e-4a5d-6cc2-ce3b850ed2f5@suse.com>
 <20210203123312.GA2906@antioche.eu.org>
 <25ba02f1-ec03-3bdd-b378-946889271969@suse.com>
 <20210203124726.GA3034@antioche.eu.org>
In-Reply-To: <20210203124726.GA3034@antioche.eu.org>

--mN879XEnM1Pq3GOUiyfXoAdIluWaeMFRP
Content-Type: multipart/mixed;
 boundary="------------7B63DBC5ACB62949CA541D97"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------7B63DBC5ACB62949CA541D97
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 03.02.21 13:47, Manuel Bouyer wrote:
> On Wed, Feb 03, 2021 at 01:42:12PM +0100, Jürgen Groß wrote:
>> Uh, this is no regular patch.
>
> I thought that by "regular patch" you meant a plain diff -u.
>
>> You've sent correct patches before,
>
> Yes, and it's very time-consuming. This is why I want to have it right the
> first time and not go through several iterations with this protocol.
>

It's not that hard if you have a proper git tree...


Juergen

--------------7B63DBC5ACB62949CA541D97--

--mN879XEnM1Pq3GOUiyfXoAdIluWaeMFRP--

--CJhONyXnIOZaTMVf6VnLVnCJzRNarobx1
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmAanesFAwAAAAAACgkQsN6d1ii/Ey/I
+AgAmvIP9pmHGNNBNA94aE0sb07HW8DUp9gU0fcE6H1CiDQkhV/3l8upHFod8zHMGt1yR8LKApAu
nM/oquZBTWulXOM7hmqAMfeKkMMFdUnToRNlsnL0xz8bm6ZtqFPDabAdxg9b2LhVD2eMs52wzUFD
/9ToPLNMQv6xNtSL6AuCH7DF1mUlqqREd+LGFT79G9119XZMXY0SJfmWR2bYNdA8xuI4vHFmWQW2
N0uWl7vgH62IYlh8Vzs+aIm72G6yFLvn/m9ZaIGNhbPF7X7BCiX1UEGd6sIrE/2pysJEMl7xNR5x
18hjPUHKX3IChwlbXShs/c0oN5vMS7xjpezBOu6pLw==
=eWD1
-----END PGP SIGNATURE-----

--CJhONyXnIOZaTMVf6VnLVnCJzRNarobx1--


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 13:03:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 13:03:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80924.148458 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7Hob-00043P-8x; Wed, 03 Feb 2021 13:03:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80924.148458; Wed, 03 Feb 2021 13:03:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7Hob-00043I-56; Wed, 03 Feb 2021 13:03:25 +0000
Received: by outflank-mailman (input) for mailman id 80924;
 Wed, 03 Feb 2021 13:03:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3m8k=HF=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1l7Hoa-00043C-20
 for xen-devel@lists.xen.org; Wed, 03 Feb 2021 13:03:24 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6971aad3-600f-4c43-99d0-d9c44f6beb1a;
 Wed, 03 Feb 2021 13:03:22 +0000 (UTC)
Received: from rochebonne.antioche.eu.org (rochebonne
 [IPv6:2001:41d0:fe9d:1100:221:70ff:fe0c:9885])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTP id 113D3LLL001690;
 Wed, 3 Feb 2021 14:03:21 +0100 (MET)
Received: by rochebonne.antioche.eu.org (Postfix, from userid 1210)
 id 268B6281D; Wed,  3 Feb 2021 14:03:21 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6971aad3-600f-4c43-99d0-d9c44f6beb1a
Date: Wed, 3 Feb 2021 14:03:21 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: =?iso-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>
Cc: xen-devel@lists.xen.org
Subject: Re: xenstored file descriptor leak
Message-ID: <20210203130321.GA2579@antioche.eu.org>
References: <20210203114832.GA1549@antioche.eu.org>
 <73c0dd26-d3a5-221f-84b8-06055ee62889@suse.com>
 <20210203120336.GA2511@antioche.eu.org>
 <8e4f159a-9519-3576-8f6b-3800a0e84500@suse.com>
 <20210203121738.GA2610@antioche.eu.org>
 <cbf97615-a81e-4a5d-6cc2-ce3b850ed2f5@suse.com>
 <20210203123312.GA2906@antioche.eu.org>
 <25ba02f1-ec03-3bdd-b378-946889271969@suse.com>
 <20210203124726.GA3034@antioche.eu.org>
 <411aa7b7-3006-61ce-e2e2-9b9d51658b99@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <411aa7b7-3006-61ce-e2e2-9b9d51658b99@suse.com>
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [IPv6:2001:41d0:fe9d:1100:a00:20ff:fe1c:276e]); Wed, 03 Feb 2021 14:03:21 +0100 (MET)

On Wed, Feb 03, 2021 at 01:58:19PM +0100, Jürgen Groß wrote:
> On 03.02.21 13:47, Manuel Bouyer wrote:
> > On Wed, Feb 03, 2021 at 01:42:12PM +0100, Jürgen Groß wrote:
> > > Uh, this is no regular patch.
> > 
> > I though by regular patch, you meants a plain diff -u
> > 
> > > You've sent correct patches before,
> > 
> > Yes, and it's very time-consuming. This is why I want to have it right the
> > first time and not go through sevreral iterations with this protocol.
> > 
> 
> It's not that hard if you have a proper git tree...

git is the problem, actually. I'm not used to it ...

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 ans d'experience feront toujours la difference
--


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 13:11:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 13:11:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80926.148470 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7Hwh-00057I-3k; Wed, 03 Feb 2021 13:11:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80926.148470; Wed, 03 Feb 2021 13:11:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7Hwh-00057B-03; Wed, 03 Feb 2021 13:11:47 +0000
Received: by outflank-mailman (input) for mailman id 80926;
 Wed, 03 Feb 2021 13:11:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dUjn=HF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1l7Hwg-000576-7B
 for xen-devel@lists.xen.org; Wed, 03 Feb 2021 13:11:46 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b416a267-f376-4fb7-9be3-41c5d7a8b3d8;
 Wed, 03 Feb 2021 13:11:45 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 779A6AFD2;
 Wed,  3 Feb 2021 13:11:44 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b416a267-f376-4fb7-9be3-41c5d7a8b3d8
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612357904; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=j5TSYKegSXrOL6TFFVJoBJ6ygumWjZUTfZ85PyGalh8=;
	b=JMCO/Xbaz2xnBhAD6HU0rfCpsJUkp6cmvg2wh2tyZyFPNwdGsewlRp6IiftCY0/E3Ms9l2
	lYjMCo0mnrhFeanC6l6fgpm2g6goGtSKFzomtk+zhPulfmkRCGGocTc4S2wXF+fK2iztBT
	22cwM4RUB+RFQzodmjR895MU7DjoTv0=
Subject: Re: xenstored file descriptor leak
To: Manuel Bouyer <bouyer@antioche.eu.org>
Cc: xen-devel@lists.xen.org
References: <20210203114832.GA1549@antioche.eu.org>
 <73c0dd26-d3a5-221f-84b8-06055ee62889@suse.com>
 <20210203120336.GA2511@antioche.eu.org>
 <8e4f159a-9519-3576-8f6b-3800a0e84500@suse.com>
 <20210203121738.GA2610@antioche.eu.org>
 <cbf97615-a81e-4a5d-6cc2-ce3b850ed2f5@suse.com>
 <20210203123312.GA2906@antioche.eu.org>
 <25ba02f1-ec03-3bdd-b378-946889271969@suse.com>
 <20210203124726.GA3034@antioche.eu.org>
 <411aa7b7-3006-61ce-e2e2-9b9d51658b99@suse.com>
 <20210203130321.GA2579@antioche.eu.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <cb9d1b06-9777-73a7-fdbc-c25ffe45d8dd@suse.com>
Date: Wed, 3 Feb 2021 14:11:43 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <20210203130321.GA2579@antioche.eu.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="yrBctnWO9doKo49OsWql6vOjC6tXdumGq"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--yrBctnWO9doKo49OsWql6vOjC6tXdumGq
Content-Type: multipart/mixed; boundary="Q5uog5s70fe0omjaWE0oNVKTWiBfbMzUh";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Manuel Bouyer <bouyer@antioche.eu.org>
Cc: xen-devel@lists.xen.org
Message-ID: <cb9d1b06-9777-73a7-fdbc-c25ffe45d8dd@suse.com>
Subject: Re: xenstored file descriptor leak
References: <20210203114832.GA1549@antioche.eu.org>
 <73c0dd26-d3a5-221f-84b8-06055ee62889@suse.com>
 <20210203120336.GA2511@antioche.eu.org>
 <8e4f159a-9519-3576-8f6b-3800a0e84500@suse.com>
 <20210203121738.GA2610@antioche.eu.org>
 <cbf97615-a81e-4a5d-6cc2-ce3b850ed2f5@suse.com>
 <20210203123312.GA2906@antioche.eu.org>
 <25ba02f1-ec03-3bdd-b378-946889271969@suse.com>
 <20210203124726.GA3034@antioche.eu.org>
 <411aa7b7-3006-61ce-e2e2-9b9d51658b99@suse.com>
 <20210203130321.GA2579@antioche.eu.org>
In-Reply-To: <20210203130321.GA2579@antioche.eu.org>

--Q5uog5s70fe0omjaWE0oNVKTWiBfbMzUh
Content-Type: multipart/mixed;
 boundary="------------F674F058869B31BF55DB1CDE"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------F674F058869B31BF55DB1CDE
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 03.02.21 14:03, Manuel Bouyer wrote:
> On Wed, Feb 03, 2021 at 01:58:19PM +0100, Jürgen Groß wrote:
>> On 03.02.21 13:47, Manuel Bouyer wrote:
>>> On Wed, Feb 03, 2021 at 01:42:12PM +0100, Jürgen Groß wrote:
>>>> Uh, this is no regular patch.
>>>
>>> I thought that by "regular patch" you meant a plain diff -u.
>>>
>>>> You've sent correct patches before,
>>>
>>> Yes, and it's very time-consuming. This is why I want to have it right the
>>> first time and not go through several iterations with this protocol.
>>>
>>
>> It's not that hard if you have a proper git tree...
>
> git is the problem, actually. I'm not used to it ...
>

If you are not using git on a daily basis, I really suggest using stgit as an
add-on:

https://wiki.xenproject.org/wiki/Managing_Xen_Patches_with_StGit

It makes handling multiple iterations of a patch rather easy.
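For readers not familiar with that workflow, a minimal sketch of producing a
mail-ready patch with plain git follows. The repository, file name, and commit
message below are made up for illustration; they are not the actual patch
discussed in this thread.

```shell
# Minimal sketch: turn a committed change into a mail-ready patch file.
# All names here (temp repo, xenstored.c, commit message) are illustrative
# placeholders, not the real patch from this thread.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
echo "fix" > xenstored.c
git add xenstored.c
git -c user.name="Dev" -c user.email="dev@example.org" \
    commit -q -m "xenstored: fix fd leak"
# format-patch writes one RFC 2822-style file per commit, suitable for
# mailing with git send-email or attaching to a message.
git format-patch -1 HEAD
```

stgit, as described on the wiki page above, layers a patch-stack model on top
of this, so reworking a patch for the next iteration is a matter of refreshing
it in place rather than redoing the diff by hand.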


Juergen

--------------F674F058869B31BF55DB1CDE--

--Q5uog5s70fe0omjaWE0oNVKTWiBfbMzUh--


--yrBctnWO9doKo49OsWql6vOjC6tXdumGq--


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 13:25:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 13:25:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80929.148482 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7I9V-0006E8-A6; Wed, 03 Feb 2021 13:25:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80929.148482; Wed, 03 Feb 2021 13:25:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7I9V-0006E1-6k; Wed, 03 Feb 2021 13:25:01 +0000
Received: by outflank-mailman (input) for mailman id 80929;
 Wed, 03 Feb 2021 13:24:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3m8k=HF=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1l7I9T-0006Dw-II
 for xen-devel@lists.xen.org; Wed, 03 Feb 2021 13:24:59 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7c2c5866-c55f-4a6e-8a4c-e17a0f47a6bc;
 Wed, 03 Feb 2021 13:24:58 +0000 (UTC)
Received: from rochebonne.antioche.eu.org (rochebonne
 [IPv6:2001:41d0:fe9d:1100:221:70ff:fe0c:9885])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTP id 113DOv1L003985;
 Wed, 3 Feb 2021 14:24:57 +0100 (MET)
Received: by rochebonne.antioche.eu.org (Postfix, from userid 1210)
 id 57267281D; Wed,  3 Feb 2021 14:24:57 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7c2c5866-c55f-4a6e-8a4c-e17a0f47a6bc
Date: Wed, 3 Feb 2021 14:24:57 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: =?iso-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>
Cc: xen-devel@lists.xen.org
Subject: Re: xenstored file descriptor leak
Message-ID: <20210203132457.GA3237@antioche.eu.org>
References: <20210203120336.GA2511@antioche.eu.org>
 <8e4f159a-9519-3576-8f6b-3800a0e84500@suse.com>
 <20210203121738.GA2610@antioche.eu.org>
 <cbf97615-a81e-4a5d-6cc2-ce3b850ed2f5@suse.com>
 <20210203123312.GA2906@antioche.eu.org>
 <25ba02f1-ec03-3bdd-b378-946889271969@suse.com>
 <20210203124726.GA3034@antioche.eu.org>
 <411aa7b7-3006-61ce-e2e2-9b9d51658b99@suse.com>
 <20210203130321.GA2579@antioche.eu.org>
 <cb9d1b06-9777-73a7-fdbc-c25ffe45d8dd@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <cb9d1b06-9777-73a7-fdbc-c25ffe45d8dd@suse.com>
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [IPv6:2001:41d0:fe9d:1100:a00:20ff:fe1c:276e]); Wed, 03 Feb 2021 14:24:57 +0100 (MET)

On Wed, Feb 03, 2021 at 02:11:43PM +0100, Jürgen Groß wrote:
> As you are not using git on a daily basis, I really suggest using stgit
> as an add-on:
> 
> https://wiki.xenproject.org/wiki/Managing_Xen_Patches_with_StGit
> 
> It makes handling multiple iterations of a patch rather easy.

Thanks. When I started, I looked at the wiki for instructions about
patches, but didn't find any ...
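
(For reference, the plain-git flow the wiki page wraps is short; the sketch below uses a throwaway repository and an illustrative commit subject, so paths and names are made up:)

```shell
# Sketch of the plain-git patch flow: make a commit, then turn it into
# a mail-ready file with git format-patch.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git -c user.name=demo -c user.email=demo@example.org \
    commit -q --allow-empty -m "xenstored: fix fd leak (demo)"
# -1 takes the last commit; -v2 would mark a re-spin as [PATCH v2].
git format-patch -1
ls 0001-*.patch
```

The resulting file is what `git send-email` (or stgit's mail support) then posts to the list.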

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 13:50:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 13:50:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80935.148494 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7IXc-0008SB-9B; Wed, 03 Feb 2021 13:49:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80935.148494; Wed, 03 Feb 2021 13:49:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7IXc-0008S4-5O; Wed, 03 Feb 2021 13:49:56 +0000
Received: by outflank-mailman (input) for mailman id 80935;
 Wed, 03 Feb 2021 13:49:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=c0lb=HF=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l7IXZ-0008Rz-TC
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 13:49:54 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c0945355-5cd7-47fa-9a63-74b7e697c97f;
 Wed, 03 Feb 2021 13:49:51 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c0945355-5cd7-47fa-9a63-74b7e697c97f
Subject: Re: [PATCH] tools/libxl: only set viridian flags on new domains
To: Igor Druzhinin <igor.druzhinin@citrix.com>,
	<xen-devel@lists.xenproject.org>
CC: <iwj@xenproject.org>, <wl@xen.org>, <anthony.perard@citrix.com>,
	<tamas.k.lengyel@gmail.com>
References: <1612324889-20942-1-git-send-email-igor.druzhinin@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <968f9f91-72c6-2070-075a-308a2870d6ce@citrix.com>
Date: Wed, 3 Feb 2021 13:49:24 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
In-Reply-To: <1612324889-20942-1-git-send-email-igor.druzhinin@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0078.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:8::18) To BYAPR03MB4728.namprd03.prod.outlook.com
 (2603:10b6:a03:13a::24)
MIME-Version: 1.0

On 03/02/2021 04:01, Igor Druzhinin wrote:
> Domains migrating or restoring should already have the viridian HVM
> param key in the migration stream, and setting it twice results in Xen
> returning -EEXIST on the second attempt (during migration stream
> parsing) in case the values don't match. That causes the
> migration/restore operation to fail on the destination side.
>
> That issue has now resurfaced with the latest commits (983524671 and
> 7e5cffcd1e) extending the default viridian feature set, which makes the
> values from previous migration streams differ from those set at domain
> construction.
>
> Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 14:22:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 14:22:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80940.148506 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7J2w-0003lN-S5; Wed, 03 Feb 2021 14:22:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80940.148506; Wed, 03 Feb 2021 14:22:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7J2w-0003lG-OR; Wed, 03 Feb 2021 14:22:18 +0000
Received: by outflank-mailman (input) for mailman id 80940;
 Wed, 03 Feb 2021 14:22:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ulgz=HF=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l7J2v-0003lB-Sr
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 14:22:17 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a171fe5c-f184-4827-a383-fa72376b7b1f;
 Wed, 03 Feb 2021 14:22:16 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id ECE29AC6E;
 Wed,  3 Feb 2021 14:22:15 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a171fe5c-f184-4827-a383-fa72376b7b1f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86/string: correct memmove()'s forwarding to memcpy()
Message-ID: <a23d148a-25d0-cc5b-6050-88345188ef5a@suse.com>
Date: Wed, 3 Feb 2021 15:22:15 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

With memcpy() expanding to the compiler builtin, we may not hand it
overlapping source and destination. We strictly mean to forward to our
own implementation (a few lines up in the same source file).

Fixes: 78825e1c60fa ("x86/string: Clean up x86/string.h")
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
An alternative would be to "#undef memcpy" near the top of the file. But
I think the way it's done now is more explicit to the reader. An #undef
would be the only way if the macro were an object-like one.

At least with gcc10 this does alter generated code: The builtin gets
expanded into a tail call, while after this change our memcpy() gets
inlined into memmove(). This would change again once we separate the 3
functions here into their own CUs for placing them in an archive.

--- a/xen/arch/x86/string.c
+++ b/xen/arch/x86/string.c
@@ -43,7 +43,7 @@ void *(memmove)(void *dest, const void *
         return dest;
 
     if ( dest < src )
-        return memcpy(dest, src, n);
+        return (memcpy)(dest, src, n);
 
     asm volatile (
         "   std         ; "


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 14:31:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 14:31:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80942.148518 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7JC8-0004pi-S2; Wed, 03 Feb 2021 14:31:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80942.148518; Wed, 03 Feb 2021 14:31:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7JC8-0004pb-N1; Wed, 03 Feb 2021 14:31:48 +0000
Received: by outflank-mailman (input) for mailman id 80942;
 Wed, 03 Feb 2021 14:31:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7JC7-0004pW-Nj
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 14:31:47 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7JC7-0001Nu-JM
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 14:31:47 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7JC7-0000ea-GO
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 14:31:47 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l7JC4-0005AW-5Q; Wed, 03 Feb 2021 14:31:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24602.46031.904844.532072@mariner.uk.xensource.com>
Date: Wed, 3 Feb 2021 14:31:43 +0000
To: Igor Druzhinin <igor.druzhinin@citrix.com>
Cc: <xen-devel@lists.xenproject.org>,
    <wl@xen.org>,
    <anthony.perard@citrix.com>,
    <tamas.k.lengyel@gmail.com>
Subject: Re: [PATCH] tools/libxl: only set viridian flags on new domains
In-Reply-To: <1612324889-20942-1-git-send-email-igor.druzhinin@citrix.com>
References: <1612324889-20942-1-git-send-email-igor.druzhinin@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Igor Druzhinin writes ("[PATCH] tools/libxl: only set viridian flags on new domains"):
> Domains migrating or restoring should already have the viridian HVM
> param key in the migration stream, and setting it twice results in Xen
> returning -EEXIST on the second attempt (during migration stream
> parsing) in case the values don't match. That causes the
> migration/restore operation to fail on the destination side.
> 
> That issue has now resurfaced with the latest commits (983524671 and
> 7e5cffcd1e) extending the default viridian feature set, which makes the
> values from previous migration streams differ from those set at domain
> construction.

I am OK with this in principle but I would have preferred the prep
work of passing libxl__domain_build_state* through to more places to
be split out into its own patch.

As it is, it's not easy to see the wood for the trees.  If we weren't
in the freeze I would probably just shrug and ack it, but I think a bit
more care is needed at this stage.

Would you mind splitting the patch in two, the first of which is "no
intended functional change"?

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 14:55:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 14:55:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80945.148530 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7JYl-0006ui-SW; Wed, 03 Feb 2021 14:55:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80945.148530; Wed, 03 Feb 2021 14:55:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7JYl-0006ub-PL; Wed, 03 Feb 2021 14:55:11 +0000
Received: by outflank-mailman (input) for mailman id 80945;
 Wed, 03 Feb 2021 14:55:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ulgz=HF=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l7JYk-0006uW-N6
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 14:55:10 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0bcce33f-7796-4242-bc5a-c16df0dc54c8;
 Wed, 03 Feb 2021 14:55:08 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C483AAE6D;
 Wed,  3 Feb 2021 14:55:07 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0bcce33f-7796-4242-bc5a-c16df0dc54c8
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: VIRIDIAN CRASH: 3b c0000096 75b12c5 9e7f1580 0
To: James Dingwall <james-xen@dingwall.me.uk>
References: <20210201152655.GA3922797@dingwall.me.uk>
Cc: xen-devel@lists.xenproject.org
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d30b5ee3-1fd9-a64b-1d9a-f79b6b333169@suse.com>
Date: Wed, 3 Feb 2021 15:55:07 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <20210201152655.GA3922797@dingwall.me.uk>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 01.02.2021 16:26, James Dingwall wrote:
> I am building the xen 4.11 branch at 
> 310ab79875cb705cc2c7daddff412b5a4899f8c9 which includes commit 
> 3b5de119f0399cbe745502cb6ebd5e6633cc139c "x86/msr: fix handling of 
> MSR_IA32_PERF_{STATUS/CTL}".  I think this should address this error 
> recorded in xen's dmesg:
> 
> (XEN) d11v0 VIRIDIAN CRASH: 3b c0000096 75b12c5 9e7f1580 0

It seems to me that you imply some information here which might
better be spelled out. As it stands I do not see the immediate
connection between the cited commit and the crash. C0000096 is
STATUS_PRIVILEGED_INSTRUCTION, which to me ought to be impossible
for code running in ring 0. Of course I may simply not know enough
about modern Windows' internals to understand the connection.

> I have removed `viridian = [..]` from the xen config but still get this 
> reliably when launching PassMark Performance Test while it is collecting 
> CPU information.
> 
> This is recorded in the domain qemu-dm log:
> 
> 21244@1612191983.279616:xen_platform_log xen platform: XEN|BUGCHECK: ====>
> 21244@1612191983.279819:xen_platform_log xen platform: XEN|BUGCHECK: SYSTEM_SERVICE_EXCEPTION: 00000000C0000096 FFFFF800A43C72C5 FFFFD0014343D580 0000000000000000
> 21244@1612191983.279959:xen_platform_log xen platform: XEN|BUGCHECK: EXCEPTION (FFFFF800A43C72C5):
> 21244@1612191983.280075:xen_platform_log xen platform: XEN|BUGCHECK: - Code = C148320F
> 21244@1612191983.280205:xen_platform_log xen platform: XEN|BUGCHECK: - Flags = 0B4820E2
> 21244@1612191983.280346:xen_platform_log xen platform: XEN|BUGCHECK: - Address = 0000A824948D4800
> 21244@1612191983.280504:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[0] = 8B00000769850F07
> 21244@1612191983.280633:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[1] = 46B70F4024448906
> 21244@1612191983.280754:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[2] = 0F44442444896604
> 21244@1612191983.280876:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[3] = E983C88B410646B6
> 21244@1612191983.281012:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[4] = 0D7401E9831E7401
> 21244@1612191983.281172:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[5] = 54B70F217502F983
> 21244@1612191983.281304:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[6] = 54B70F15EBED4024
> 21244@1612191983.281426:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[7] = EBC0B70FED664024
> 21244@1612191983.281547:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[8] = 0FEC402454B70F09
> 21244@1612191983.281668:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[9] = 448B42244489C0B6
> 21244@1612191983.281809:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[10] = 2444B70F06894024
> 21244@1612191983.281932:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[11] = 4688440446896644
> 21244@1612191983.282052:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[12] = 0000073846C74906
> 21244@1612191983.282185:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[13] = F8830000070AE900
> 21244@1612191983.282340:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[14] = 8B000006F9850F07
> 21244@1612191983.282480:xen_platform_log xen platform: XEN|BUGCHECK: EXCEPTION (0000A824848948C2):
> 21244@1612191983.282617:xen_platform_log xen platform: XEN|BUGCHECK: CONTEXT (FFFFD0014343D580):
> 21244@1612191983.282717:xen_platform_log xen platform: XEN|BUGCHECK: - GS = 002B
> 21244@1612191983.282816:xen_platform_log xen platform: XEN|BUGCHECK: - FS = 0053
> 21244@1612191983.282914:xen_platform_log xen platform: XEN|BUGCHECK: - ES = 002B
> 21244@1612191983.283011:xen_platform_log xen platform: XEN|BUGCHECK: - DS = 002B
> 21244@1612191983.283127:xen_platform_log xen platform: XEN|BUGCHECK: - SS = 0018
> 21244@1612191983.283226:xen_platform_log xen platform: XEN|BUGCHECK: - CS = 0010
> 21244@1612191983.283332:xen_platform_log xen platform: XEN|BUGCHECK: - EFLAGS = 00000202
> 21244@1612191983.283444:xen_platform_log xen platform: XEN|BUGCHECK: - RDI = 00000000F64D5C20
> 21244@1612191983.283555:xen_platform_log xen platform: XEN|BUGCHECK: - RSI = 00000000F6367280
> 21244@1612191983.283666:xen_platform_log xen platform: XEN|BUGCHECK: - RBX = 000000008011E060
> 21244@1612191983.283810:xen_platform_log xen platform: XEN|BUGCHECK: - RDX = 00000000F64D5C20
> 21244@1612191983.283972:xen_platform_log xen platform: XEN|BUGCHECK: - RCX = 0000000000000199
> 21244@1612191983.284350:xen_platform_log xen platform: XEN|BUGCHECK: - RAX = 0000000000000004
> 21244@1612191983.284523:xen_platform_log xen platform: XEN|BUGCHECK: - RBP = 000000004343E891
> 21244@1612191983.284658:xen_platform_log xen platform: XEN|BUGCHECK: - RIP = 00000000A43C72C5
> 21244@1612191983.284842:xen_platform_log xen platform: XEN|BUGCHECK: - RSP = 000000004343DFA0
> 21244@1612191983.284959:xen_platform_log xen platform: XEN|BUGCHECK: - R8 = 0000000000000008
> 21244@1612191983.285073:xen_platform_log xen platform: XEN|BUGCHECK: - R9 = 000000000000000E
> 21244@1612191983.285188:xen_platform_log xen platform: XEN|BUGCHECK: - R10 = 0000000000000002
> 21244@1612191983.285304:xen_platform_log xen platform: XEN|BUGCHECK: - R11 = 000000004343E808
> 21244@1612191983.285420:xen_platform_log xen platform: XEN|BUGCHECK: - R12 = 0000000000000000
> 21244@1612191983.285564:xen_platform_log xen platform: XEN|BUGCHECK: - R13 = 00000000F7964E50
> 21244@1612191983.285680:xen_platform_log xen platform: XEN|BUGCHECK: - R14 = 00000000F64D5C20
> 21244@1612191983.285796:xen_platform_log xen platform: XEN|BUGCHECK: - R15 = 00000000F7964E50

I'm also confused by this - the pointer given for CONTEXT suggests this
is a 64-bit kernel, yet none of the registers - including RIP and RSP -
have non-zero upper 32 bits. Or is qemu truncating these values?

> 21244@1612191983.285888:xen_platform_log xen platform: XEN|BUGCHECK: STACK:
> 21244@1612191983.286105:xen_platform_log xen platform: XEN|BUGCHECK: 000000004343E810: (0000000000000000 000000004343E891 0000000000000002 00000000F75F08A0) ntoskrnl.exe + 0000000000485507
> 21244@1612191983.286340:xen_platform_log xen platform: XEN|BUGCHECK: 000000004343E8E0: (00000000F75F0805 000000004343EB80 00000000F6A62CC0 00000000F75F08A0) ntoskrnl.exe + 0000000000486468
> 21244@1612191983.286547:xen_platform_log xen platform: XEN|BUGCHECK: 000000004343EA20: (0000000000000000 0000000000000000 0000000000000000 0000000000000000) ntoskrnl.exe + 0000000000458CAE
> 21244@1612191983.286755:xen_platform_log xen platform: XEN|BUGCHECK: 000000004343EA90: (0000000000000000 0000000000000000 000000007DBED000 000000007DA00028) ntoskrnl.exe + 00000000001501A3
> 21244@1612191983.286976:xen_platform_log xen platform: XEN|BUGCHECK: 0000000009ABE388: (00000000587D5673 0000000058F40000 0000000006002D2B 0000000000000000) 00007FFB5B3207CA
> 21244@1612191983.287171:xen_platform_log xen platform: XEN|BUGCHECK: 0000000009ABE390: (0000000058F40000 0000000006002D2B 0000000000000000 00000000160C86D8) 00007FFB587D5673
> 21244@1612191983.287390:xen_platform_log xen platform: XEN|BUGCHECK: 0000000009ABE398: (0000000006002D2B 0000000000000000 00000000160C86D8 0000000009ABE3E0) 00007FFB58F40000
> 21244@1612191983.287584:xen_platform_log xen platform: XEN|BUGCHECK: 0000000009ABE3A0: (0000000000000000 00000000160C86D8 0000000009ABE3E0 000000008011E060) 00007FFB06002D2B
> 21244@1612191983.287777:xen_platform_log xen platform: XEN|BUGCHECK: 0000000009ABE3A8: (00000000160C86D8 0000000009ABE3E0 000000008011E060 0000000009ABE4A0) 0000000000000000
> 21244@1612191983.287898:xen_platform_log xen platform: XEN|BUGCHECK: <====
> 
> The Windows guest is running winpv drivers 8.2.1.
> 
> I'm not quite sure what else to examine or change at this point so any 
> guidance would be welcome.

The hypervisor log (at maximum log levels) accompanying this might
help some. And of course, if possible, trying on a newer Xen (ideally
current master).

Jan


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 15:01:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 15:01:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80947.148542 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7JfD-0007x4-Jt; Wed, 03 Feb 2021 15:01:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80947.148542; Wed, 03 Feb 2021 15:01:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7JfD-0007wx-GR; Wed, 03 Feb 2021 15:01:51 +0000
Received: by outflank-mailman (input) for mailman id 80947;
 Wed, 03 Feb 2021 15:01:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=c0lb=HF=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l7JfB-0007ws-8u
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 15:01:49 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1848eee9-4d10-4460-9432-530d60c46b39;
 Wed, 03 Feb 2021 15:01:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1848eee9-4d10-4460-9432-530d60c46b39
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 36418240
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,398,1602561600"; 
   d="scan'208";a="36418240"
Subject: Re: [PATCH] x86/string: correct memmove()'s forwarding to memcpy()
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, Roger Pau Monné <roger.pau@citrix.com>
References: <a23d148a-25d0-cc5b-6050-88345188ef5a@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <0d537de8-09f4-9971-466f-8bf42964171e@citrix.com>
Date: Wed, 3 Feb 2021 15:01:36 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
In-Reply-To: <a23d148a-25d0-cc5b-6050-88345188ef5a@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0353.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:d::29) To BYAPR03MB4728.namprd03.prod.outlook.com
 (2603:10b6:a03:13a::24)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 7e22d68d-9799-4338-1d11-08d8c8549dec
X-MS-TrafficTypeDiagnostic: BYAPR03MB4583:
X-MS-Exchange-Transport-Forked: True
X-MS-Exchange-CrossTenant-Network-Message-Id: 7e22d68d-9799-4338-1d11-08d8c8549dec
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB4728.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Feb 2021 15:01:42.6839
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: YC7olGAHS6BJgaiscK1S7dTh9akMeFxjBBQx970lTThRJ157mDeEdWStzvFGQzn+aztQOsa2XAvMaZHkT26hAsZZNWkiG6wJh0DPsQi9niw=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4583
X-OriginatorOrg: citrix.com

On 03/02/2021 14:22, Jan Beulich wrote:
> With memcpy() expanding to the compiler builtin, we may not hand it
> overlapping source and destination. We strictly mean to forward to our
> own implementation (a few lines up in the same source file).
>
> Fixes: 78825e1c60fa ("x86/string: Clean up x86/string.h")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

I agree that the current logic is buggy, but I'm not sure this is an
improvement.

You've switched from relying on GCC's builtin to operate forwards, to
relying on Xen's implementation operating forwards.

At the very least, can we get a code comment stating something like
"depends on Xen's implementation operating forwards" ?

> ---
> An alternative would be to "#undef memcpy" near the top of the file. But
> I think the way it's done now is more explicit to the reader. An #undef
> would be the only way if the macro was an object-like one.

I chose not to use #undef to avoid impacting the optimisation of other
functions in this file.  I can't remember if this made a difference in
practice.

> At least with gcc10 this does alter generated code: The builtin gets
> expanded into a tail call, while after this change our memcpy() gets
> inlined into memmove(). This would change again once we separate the 3
> functions here into their own CUs for placing them in an archive.

As (perhaps) a tangent, how do we plan to provide x86-optimised versions
in combination with the library work?  We're long overdue needing to
refresh our fast-strings support to include fast rep-mov/stosb.

~Andrew

>
> --- a/xen/arch/x86/string.c
> +++ b/xen/arch/x86/string.c
> @@ -43,7 +43,7 @@ void *(memmove)(void *dest, const void *
>          return dest;
>  
>      if ( dest < src )
> -        return memcpy(dest, src, n);
> +        return (memcpy)(dest, src, n);
>  
>      asm volatile (
>          "   std         ; "



From xen-devel-bounces@lists.xenproject.org Wed Feb 03 15:04:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 15:04:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80948.148554 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7Jhz-00086J-27; Wed, 03 Feb 2021 15:04:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80948.148554; Wed, 03 Feb 2021 15:04:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7Jhy-000869-Uq; Wed, 03 Feb 2021 15:04:42 +0000
Received: by outflank-mailman (input) for mailman id 80948;
 Wed, 03 Feb 2021 15:04:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+PM6=HF=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1l7Jhy-000864-Gn
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 15:04:42 +0000
Received: from mail-wr1-x42e.google.com (unknown [2a00:1450:4864:20::42e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3e51bcac-c86b-4381-8f67-fc45582140a0;
 Wed, 03 Feb 2021 15:04:41 +0000 (UTC)
Received: by mail-wr1-x42e.google.com with SMTP id p15so24658266wrq.8
 for <xen-devel@lists.xenproject.org>; Wed, 03 Feb 2021 07:04:41 -0800 (PST)
Received: from CBGR90WXYV0 (host86-190-149-163.range86-190.btcentralplus.com.
 [86.190.149.163])
 by smtp.gmail.com with ESMTPSA id h1sm4082605wrr.73.2021.02.03.07.04.39
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Wed, 03 Feb 2021 07:04:39 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3e51bcac-c86b-4381-8f67-fc45582140a0
X-Google-Smtp-Source: ABdhPJzjEsp0/dl9rrwtMXuUUKjgGCGEwJCbn3sJczQvFCrTdELVHeJFaUHc4YQHNIt69ahGMQRlqA==
X-Received: by 2002:a5d:6752:: with SMTP id l18mr4038462wrw.209.1612364680493;
        Wed, 03 Feb 2021 07:04:40 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>,
	"'James Dingwall'" <james-xen@dingwall.me.uk>
Cc: <xen-devel@lists.xenproject.org>
References: <20210201152655.GA3922797@dingwall.me.uk> <d30b5ee3-1fd9-a64b-1d9a-f79b6b333169@suse.com>
In-Reply-To: <d30b5ee3-1fd9-a64b-1d9a-f79b6b333169@suse.com>
Subject: RE: VIRIDIAN CRASH: 3b c0000096 75b12c5 9e7f1580 0
Date: Wed, 3 Feb 2021 15:04:38 -0000
Message-ID: <03d501d6fa3d$e5227cc0$af677640$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQJhmlK/cLfJLwTp9TTyTj8ctTVv3wG1IORhqSOdFuA=
Content-Language: en-gb

> -----Original Message-----
> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Jan Beulich
> Sent: 03 February 2021 14:55
> To: James Dingwall <james-xen@dingwall.me.uk>
> Cc: xen-devel@lists.xenproject.org
> Subject: Re: VIRIDIAN CRASH: 3b c0000096 75b12c5 9e7f1580 0
>
> On 01.02.2021 16:26, James Dingwall wrote:
> > I am building the xen 4.11 branch at
> > 310ab79875cb705cc2c7daddff412b5a4899f8c9 which includes commit
> > 3b5de119f0399cbe745502cb6ebd5e6633cc139c "x86/msr: fix handling of
> > MSR_IA32_PERF_{STATUS/CTL}".  I think this should address this error
> > recorded in xen's dmesg:
> >
> > (XEN) d11v0 VIRIDIAN CRASH: 3b c0000096 75b12c5 9e7f1580 0
>
> It seems to me that you imply some information here which might
> better be spelled out. As it stands I do not see the immediate
> connection between the cited commit and the crash. C0000096 is
> STATUS_PRIVILEGED_INSTRUCTION, which to me ought to be impossible
> for code running in ring 0. Of course I may simply not know enough
> about modern Windows' internals to understand the connection.
>
> > I have removed `viridian = [..]` from the xen config but still get this
> > reliably when launching PassMark Performance Test and it is collecting
> > CPU information.
> >
> > This is recorded in the domain qemu-dm log:
> >
> > 21244@1612191983.279616:xen_platform_log xen platform: XEN|BUGCHECK: ====>
> > 21244@1612191983.279819:xen_platform_log xen platform: XEN|BUGCHECK: SYSTEM_SERVICE_EXCEPTION: 00000000C0000096 FFFFF800A43C72C5 FFFFD0014343D580 0000000000000000
> > 21244@1612191983.279959:xen_platform_log xen platform: XEN|BUGCHECK: EXCEPTION (FFFFF800A43C72C5):
> > 21244@1612191983.280075:xen_platform_log xen platform: XEN|BUGCHECK: - Code = C148320F
> > 21244@1612191983.280205:xen_platform_log xen platform: XEN|BUGCHECK: - Flags = 0B4820E2
> > 21244@1612191983.280346:xen_platform_log xen platform: XEN|BUGCHECK: - Address = 0000A824948D4800
> > 21244@1612191983.280504:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[0] = 8B00000769850F07
> > 21244@1612191983.280633:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[1] = 46B70F4024448906
> > 21244@1612191983.280754:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[2] = 0F44442444896604
> > 21244@1612191983.280876:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[3] = E983C88B410646B6
> > 21244@1612191983.281012:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[4] = 0D7401E9831E7401
> > 21244@1612191983.281172:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[5] = 54B70F217502F983
> > 21244@1612191983.281304:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[6] = 54B70F15EBED4024
> > 21244@1612191983.281426:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[7] = EBC0B70FED664024
> > 21244@1612191983.281547:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[8] = 0FEC402454B70F09
> > 21244@1612191983.281668:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[9] = 448B42244489C0B6
> > 21244@1612191983.281809:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[10] = 2444B70F06894024
> > 21244@1612191983.281932:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[11] = 4688440446896644
> > 21244@1612191983.282052:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[12] = 0000073846C74906
> > 21244@1612191983.282185:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[13] = F8830000070AE900
> > 21244@1612191983.282340:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[14] = 8B000006F9850F07
> > 21244@1612191983.282480:xen_platform_log xen platform: XEN|BUGCHECK: EXCEPTION (0000A824848948C2):
> > 21244@1612191983.282617:xen_platform_log xen platform: XEN|BUGCHECK: CONTEXT (FFFFD0014343D580):
> > 21244@1612191983.282717:xen_platform_log xen platform: XEN|BUGCHECK: - GS = 002B
> > 21244@1612191983.282816:xen_platform_log xen platform: XEN|BUGCHECK: - FS = 0053
> > 21244@1612191983.282914:xen_platform_log xen platform: XEN|BUGCHECK: - ES = 002B
> > 21244@1612191983.283011:xen_platform_log xen platform: XEN|BUGCHECK: - DS = 002B
> > 21244@1612191983.283127:xen_platform_log xen platform: XEN|BUGCHECK: - SS = 0018
> > 21244@1612191983.283226:xen_platform_log xen platform: XEN|BUGCHECK: - CS = 0010
> > 21244@1612191983.283332:xen_platform_log xen platform: XEN|BUGCHECK: - EFLAGS = 00000202
> > 21244@1612191983.283444:xen_platform_log xen platform: XEN|BUGCHECK: - RDI = 00000000F64D5C20
> > 21244@1612191983.283555:xen_platform_log xen platform: XEN|BUGCHECK: - RSI = 00000000F6367280
> > 21244@1612191983.283666:xen_platform_log xen platform: XEN|BUGCHECK: - RBX = 000000008011E060
> > 21244@1612191983.283810:xen_platform_log xen platform: XEN|BUGCHECK: - RDX = 00000000F64D5C20
> > 21244@1612191983.283972:xen_platform_log xen platform: XEN|BUGCHECK: - RCX = 0000000000000199
> > 21244@1612191983.284350:xen_platform_log xen platform: XEN|BUGCHECK: - RAX = 0000000000000004
> > 21244@1612191983.284523:xen_platform_log xen platform: XEN|BUGCHECK: - RBP = 000000004343E891
> > 21244@1612191983.284658:xen_platform_log xen platform: XEN|BUGCHECK: - RIP = 00000000A43C72C5
> > 21244@1612191983.284842:xen_platform_log xen platform: XEN|BUGCHECK: - RSP = 000000004343DFA0
> > 21244@1612191983.284959:xen_platform_log xen platform: XEN|BUGCHECK: - R8 = 0000000000000008
> > 21244@1612191983.285073:xen_platform_log xen platform: XEN|BUGCHECK: - R9 = 000000000000000E
> > 21244@1612191983.285188:xen_platform_log xen platform: XEN|BUGCHECK: - R10 = 0000000000000002
> > 21244@1612191983.285304:xen_platform_log xen platform: XEN|BUGCHECK: - R11 = 000000004343E808
> > 21244@1612191983.285420:xen_platform_log xen platform: XEN|BUGCHECK: - R12 = 0000000000000000
> > 21244@1612191983.285564:xen_platform_log xen platform: XEN|BUGCHECK: - R13 = 00000000F7964E50
> > 21244@1612191983.285680:xen_platform_log xen platform: XEN|BUGCHECK: - R14 = 00000000F64D5C20
> > 21244@1612191983.285796:xen_platform_log xen platform: XEN|BUGCHECK: - R15 = 00000000F7964E50
>
> I'm also confused by this - the pointer given for CONTEXT suggests this
> is a 64-bit kernel, yet none of the registers - including RIP and RSP -
> have non-zero upper 32 bits. Or is qemu truncating these values?

The logging is coming from the PV drivers (in
https://xenbits.xen.org/gitweb/?p=pvdrivers/win/xenbus.git;a=blob;f=src/xen/bug_check.c).
The truncated values may just be due to a 32-bit user process I guess.

  Paul

>
> > 21244@1612191983.285888:xen_platform_log xen platform: XEN|BUGCHECK: STACK:
> > 21244@1612191983.286105:xen_platform_log xen platform: XEN|BUGCHECK: 000000004343E810: (0000000000000000 000000004343E891 0000000000000002 00000000F75F08A0) ntoskrnl.exe + 0000000000485507
> > 21244@1612191983.286340:xen_platform_log xen platform: XEN|BUGCHECK: 000000004343E8E0: (00000000F75F0805 000000004343EB80 00000000F6A62CC0 00000000F75F08A0) ntoskrnl.exe + 0000000000486468
> > 21244@1612191983.286547:xen_platform_log xen platform: XEN|BUGCHECK: 000000004343EA20: (0000000000000000 0000000000000000 0000000000000000 0000000000000000) ntoskrnl.exe + 0000000000458CAE
> > 21244@1612191983.286755:xen_platform_log xen platform: XEN|BUGCHECK: 000000004343EA90: (0000000000000000 0000000000000000 000000007DBED000 000000007DA00028) ntoskrnl.exe + 00000000001501A3
> > 21244@1612191983.286976:xen_platform_log xen platform: XEN|BUGCHECK: 0000000009ABE388: (00000000587D5673 0000000058F40000 0000000006002D2B 0000000000000000) 00007FFB5B3207CA
> > 21244@1612191983.287171:xen_platform_log xen platform: XEN|BUGCHECK: 0000000009ABE390: (0000000058F40000 0000000006002D2B 0000000000000000 00000000160C86D8) 00007FFB587D5673
> > 21244@1612191983.287390:xen_platform_log xen platform: XEN|BUGCHECK: 0000000009ABE398: (0000000006002D2B 0000000000000000 00000000160C86D8 0000000009ABE3E0) 00007FFB58F40000
> > 21244@1612191983.287584:xen_platform_log xen platform: XEN|BUGCHECK: 0000000009ABE3A0: (0000000000000000 00000000160C86D8 0000000009ABE3E0 000000008011E060) 00007FFB06002D2B
> > 21244@1612191983.287777:xen_platform_log xen platform: XEN|BUGCHECK: 0000000009ABE3A8: (00000000160C86D8 0000000009ABE3E0 000000008011E060 0000000009ABE4A0) 0000000000000000
> > 21244@1612191983.287898:xen_platform_log xen platform: XEN|BUGCHECK: <====
> >
> > The Windows guest is running winpv drivers 8.2.1.
> >
> > I'm not quite sure what else to examine or change at this point so any
> > guidance would be welcome.
>
> The hypervisor log (at maximum log levels) accompanying this might
> help some. And of course, if possible, trying on a newer Xen (ideally
> current master).
>
> Jan




From xen-devel-bounces@lists.xenproject.org Wed Feb 03 15:31:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 15:31:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80954.148566 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7K7b-0002dU-C1; Wed, 03 Feb 2021 15:31:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80954.148566; Wed, 03 Feb 2021 15:31:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7K7b-0002dN-8s; Wed, 03 Feb 2021 15:31:11 +0000
Received: by outflank-mailman (input) for mailman id 80954;
 Wed, 03 Feb 2021 15:31:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ulgz=HF=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l7K7Z-0002dI-S3
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 15:31:09 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1e336f85-fffd-4a54-af05-f1d98a279676;
 Wed, 03 Feb 2021 15:31:08 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id AFFD4B14C;
 Wed,  3 Feb 2021 15:31:07 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1e336f85-fffd-4a54-af05-f1d98a279676
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612366267; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=hVz7R3R3FbScC3OBnxHSq/y6rko/K9bkHSZZKtpUDmI=;
	b=K0nUksY5+zdWXexcEY6vxVyfNLSXnNRwB4BHYzzqsjymV9uYawveuqrOK8F4l9Ip6iI0lA
	+VGhqy3PDfjsJhBGfi2IfZvnayy7XhKnHr/QW8x/f7bjuD2Lppi/MaLEp081w9DrIJ26XG
	RUZGH6Ld6JyOMhcgy0dW+0+VCggrRws=
Subject: Re: [PATCH] x86/string: correct memmove()'s forwarding to memcpy()
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <a23d148a-25d0-cc5b-6050-88345188ef5a@suse.com>
 <0d537de8-09f4-9971-466f-8bf42964171e@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1556e428-be59-2035-8406-a54576c4d41d@suse.com>
Date: Wed, 3 Feb 2021 16:31:07 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <0d537de8-09f4-9971-466f-8bf42964171e@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 03.02.2021 16:01, Andrew Cooper wrote:
> On 03/02/2021 14:22, Jan Beulich wrote:
>> With memcpy() expanding to the compiler builtin, we may not hand it
>> overlapping source and destination. We strictly mean to forward to our
>> own implementation (a few lines up in the same source file).
>>
>> Fixes: 78825e1c60fa ("x86/string: Clean up x86/string.h")
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> I agree that the current logic is buggy, but I'm not sure this is an
> improvement.
> 
> You've switched from relying on GCC's builtin to operate forwards, to
> relying on Xen's implementation operating forwards.

Is there such a guarantee for the compiler builtin? If so - no
need for this patch indeed. But I couldn't find any doc saying
so.

> At the very least, can we get a code comment stating something like
> "depends on Xen's implementation operating forwards" ?

No problem at all.

>> ---
>> An alternative would be to "#undef memcpy" near the top of the file. But
>> I think the way it's done now is more explicit to the reader. An #undef
>> would be the only way if the macro was an object-like one.
> 
> I chose not to use undef's to avoid impacting the optimisation of other
> functions in this file.  I can't remember if this made a difference in
> practice.
> 
>> At least with gcc10 this does alter generated code: The builtin gets
>> expanded into a tail call, while after this change our memcpy() gets
>> inlined into memmove(). This would change again once we separate the 3
>> functions here into their own CUs for placing them in an archive.
> 
> As (perhaps) a tangent, how do we plan to provide x86-optimised versions
> in combination with the library work?

By specifying the per-arch lib.a first.

>  We're long overdue needing to
> refresh our fast-strings support to include fast rep-mov/stosb.

That's pretty much orthogonal to the code movement though.

Jan
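For readers following the thread, the overlap hazard under discussion can be sketched as below. fwd_memcpy() and safe_memmove() are hypothetical stand-ins, not Xen's or GCC's actual implementations; the point is that forwarding memmove() to a memcpy() is only valid when that memcpy() is known to copy forwards and the direction is checked first.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Stand-in for a memcpy() that is known to operate forwards. */
static void *fwd_memcpy(void *dst, const void *src, size_t n)
{
    char *d = dst;
    const char *s = src;

    while ( n-- )
        *d++ = *s++;

    return dst;
}

/*
 * A forward copy is only safe for overlapping buffers when the
 * destination starts at or below the source; otherwise bytes still
 * to be read get overwritten, and the copy must run backwards.
 */
static void *safe_memmove(void *dst, const void *src, size_t n)
{
    if ( (uintptr_t)dst <= (uintptr_t)src )
        return fwd_memcpy(dst, src, n);

    {
        char *d = (char *)dst + n;
        const char *s = (const char *)src + n;

        while ( n-- )
            *--d = *--s;
    }

    return dst;
}
```

With buf = "abcdef", safe_memmove(buf + 1, buf, 5) yields "aabcde", whereas handing the same overlapping arguments to the forward-only copy smears the first byte across the whole region.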


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 15:37:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 15:37:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80956.148578 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7KDC-0002nt-0l; Wed, 03 Feb 2021 15:36:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80956.148578; Wed, 03 Feb 2021 15:36:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7KDB-0002nm-Tf; Wed, 03 Feb 2021 15:36:57 +0000
Received: by outflank-mailman (input) for mailman id 80956;
 Wed, 03 Feb 2021 15:36:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=c0lb=HF=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l7KDA-0002nh-4I
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 15:36:56 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 581f6b03-a22e-4b01-a123-c422c3c327d7;
 Wed, 03 Feb 2021 15:36:54 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 581f6b03-a22e-4b01-a123-c422c3c327d7
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612366614;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=FX6FwoNVhsfj7C4kPKB0/qKC8cFRTtjcevGezhqt2Hc=;
  b=SEMB3iH6UcOGL69f9pheFtQiR281xGj/FYhfjchzpr+O38Fs9i8AYyfq
   cRSQOAGJ0tIymGt2VB2OQWOnUA/ByJfzBH9GMcUlnHBcwqLrMLk+GFJAC
   8bqFszcJv+N45hjAqWsKkwF4IPP2A310WOJYuudTNFQS3CQwlTyr+Z7V+
   o=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: ZolqkgHUWX0jB/D7M9wWi4VoA0dBkR7wd0TFtTlkrsgXXDWTekACBG68hDYis1efwPNCEc96wg
 d4Oxq7loGDiu36OPc4/DXVPFImgpoeKg2J0cpezQKK2BK8brHWSjZH/9F+hWmpMsQcGw74HHOP
 DiQemBEP4AUk7P9gYLvHmq0BV7ZX3e5VcoKFMqsWqkgXp8Z7IQLD3l8EaufevenfdNQowHRWGq
 O7YYWrCvJNq5UXZGuJh3f1cpBZpIuTxWnr/EYPDjyM8Azt5DZxTYcvuS4LO/y8wqH4sQcZXbRZ
 s94=
X-SBRS: 5.2
X-MesageID: 36422234
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,398,1602561600"; 
   d="scan'208";a="36422234"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=RbTSkS8p7a3x6aIMa6q9zCGaZNfT7UK0YZkEijCUI2tFpx/M1TZuMeoyPM8RxeSVxpfrNv0FYqc5NVjmu24ZjpjDrjnpZXl/pPr5BYLRdeKKjaeQbEChxhdR8Qo+LF949h0uwLq8gHMmEoNrYzKVi5NoSnfABuhHh7Uv5bDcLs9waR7GQdyRgU7g0ABUGnMAGSWbdGNjc8AJ34ktXIFNYfn3uPu/KJ6D5LdOnS527GFpceNI9yEEvxagkO6TDgwYC/6a/0jN6DiW84ajcxJY5k3657o7YwD+RpO8RyVOpNXAT4guyZALdUeKtxDEShIcUtO+CHdQZcYys50KbuJISg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=FX6FwoNVhsfj7C4kPKB0/qKC8cFRTtjcevGezhqt2Hc=;
 b=Hpy1iAc1wqLdfolBdK3iGn1noCj4Wl9y2J7gIfUVSC/46/krs1uzlFppYHUXXUbE7uAaiMMxBhs1MzcWIVM0wNTVOz6Nkb1ajPVWgiZHUKPHseKEbR12S7LKuOgpEzKY0T7ZCmMRJghzTrCbm9g7QQicG5MZOh2twbfcR+dZcPOfqXzUQ6pz2ckDgHJwDiMmVOeRp3aaSZdCf6wJOrmW/UrrR7ecjsOrwJiFvhaCcxhG1fEw4hvK8Wn8Hs8xwcbcKXUp5koYVJbW0n2AKI1uoUygb3IvJ+1qBndeVum7p3Eug4WWYwSDGlMlOVOiGIIpW+AWfqp+AqBbRNG5rbTB7Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=FX6FwoNVhsfj7C4kPKB0/qKC8cFRTtjcevGezhqt2Hc=;
 b=jgd3fpUDVPpZIX9EZhrqmtI3SS32PFyo0UG+oS4wBA4VhVsYT/dXZYyu1PIkrPgs6MmzU7XPFQYwsRWyBMC50oSD9FbjgSIZ2yALD8YIcrhuHdlhyTZN4xUunWfMfcLWHNMAd7nrvp+Zy34VkuboiwcaSirFQo2jFTJJFYRusa4=
Subject: Re: [PATCH] x86/string: correct memmove()'s forwarding to memcpy()
To: Jan Beulich <jbeulich@suse.com>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
References: <a23d148a-25d0-cc5b-6050-88345188ef5a@suse.com>
 <0d537de8-09f4-9971-466f-8bf42964171e@citrix.com>
 <1556e428-be59-2035-8406-a54576c4d41d@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <84f171f0-39b4-9ee0-9536-d8e6ff794dbf@citrix.com>
Date: Wed, 3 Feb 2021 15:36:33 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
In-Reply-To: <1556e428-be59-2035-8406-a54576c4d41d@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0489.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1ab::8) To BYAPR03MB4728.namprd03.prod.outlook.com
 (2603:10b6:a03:13a::24)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: e9aa1bba-14f5-4337-dd95-08d8c8597f70
X-MS-TrafficTypeDiagnostic: BY5PR03MB5064:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BY5PR03MB50648C4AC3B8A4784AAED783BAB49@BY5PR03MB5064.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: e9aa1bba-14f5-4337-dd95-08d8c8597f70
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB4728.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Feb 2021 15:36:39.0881
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: VyZXZ/xCvIbd0Uyc0KiVSo0uwesOk5/Qxbt4Cgi4wIo7AIl6IkARuuQaFbRv7XMqP9vB5I/c/+FRW1zFnnGCZYVh0DMHo19rzdhepdoBJO4=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR03MB5064
X-OriginatorOrg: citrix.com

On 03/02/2021 15:31, Jan Beulich wrote:
> On 03.02.2021 16:01, Andrew Cooper wrote:
>> On 03/02/2021 14:22, Jan Beulich wrote:
>>> With memcpy() expanding to the compiler builtin, we may not hand it
>>> overlapping source and destination. We strictly mean to forward to our
>>> own implementation (a few lines up in the same source file).
>>>
>>> Fixes: 78825e1c60fa ("x86/string: Clean up x86/string.h")
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> I agree that the current logic is buggy, but I'm not sure this is an
>> improvement.
>>
>> You've switched from relying on GCC's builtin to operate forwards, to
>> relying on Xen's implementation operating forwards.
> Is there such a guarantee for the compiler builtin? If so - no
> need for this patch indeed. But I couldn't find any doc saying
> so.

I've never seen it emit anything which isn't a forwards operation (i.e.
I think the compiled result tended to be safe in practice), but C's
flexibility does explicitly permit a backwards implementation.

>
>> At the very least, can we get a code comment stating something like
>> "depends on Xen's implementation operating forwards" ?
> No problem at all.

In which case Acked-by: Andrew Cooper <andrew.cooper3@citrix.com> to
avoid a round trip.

>
>>> ---
>>> An alternative would be to "#undef memcpy" near the top of the file. But
>>> I think the way it's done now is more explicit to the reader. An #undef
>>> would be the only way if the macro was an object-like one.
>> I chose not to use undef's to avoid impacting the optimisation of other
>> functions in this file.  I can't remember if this made a difference in
>> practice.
>>
>>> At least with gcc10 this does alter generated code: The builtin gets
>>> expanded into a tail call, while after this change our memcpy() gets
>>> inlined into memmove(). This would change again once we separate the 3
>>> functions here into their own CUs for placing them in an archive.
>> As (perhaps) a tangent, how do we plan to provide x86-optimised versions
>> in combination with the library work?
> By specifying the per-arch lib.a first.

Ok - good to hear.

>>  We're long overdue needing to
>> refresh our fast-strings support to include fast rep-mov/stosb.
> That's pretty much orthogonal to the code movement though.

Yes, but it does need doing.  We're perpetually playing catchup, and
there are meaningful improvements to be had for logic such as
clear_page() which is fairly poor, optimisation wise atm.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 15:42:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 15:42:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80958.148590 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7KIj-0003oa-Lz; Wed, 03 Feb 2021 15:42:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80958.148590; Wed, 03 Feb 2021 15:42:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7KIj-0003oT-Iv; Wed, 03 Feb 2021 15:42:41 +0000
Received: by outflank-mailman (input) for mailman id 80958;
 Wed, 03 Feb 2021 15:42:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ulgz=HF=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l7KIh-0003oO-Dm
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 15:42:39 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2e2c4872-5cbd-4127-a618-a56aa0dbb9fb;
 Wed, 03 Feb 2021 15:42:38 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 61C9EACB0;
 Wed,  3 Feb 2021 15:42:37 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2e2c4872-5cbd-4127-a618-a56aa0dbb9fb
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612366957; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=dn2+aABa7j7AiXXY9eZJoZmiQPmfDFh4agY5laGov4o=;
	b=f80s3UDVBOEf5FvFEw8FTSjV5E122YKp1Dm/fJ7lZrBFo0hF1Mt1v4CSWVLVg7AoA9dBgl
	rcijtE6ZbHnpMFj6sa6w7GlxSdbf63ynA1czW06mk4bfLz68y+vvi52NBQZY5KlFe/Quqv
	/ZaojyRprhOpjF8svXac0V9evt/C+hI=
Subject: Re: VIRIDIAN CRASH: 3b c0000096 75b12c5 9e7f1580 0
To: paul@xen.org
Cc: xen-devel@lists.xenproject.org,
 'James Dingwall' <james-xen@dingwall.me.uk>
References: <20210201152655.GA3922797@dingwall.me.uk>
 <d30b5ee3-1fd9-a64b-1d9a-f79b6b333169@suse.com>
 <03d501d6fa3d$e5227cc0$af677640$@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <94d4af04-9f3c-8c40-8acd-705259ec91fa@suse.com>
Date: Wed, 3 Feb 2021 16:42:37 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <03d501d6fa3d$e5227cc0$af677640$@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 03.02.2021 16:04, Paul Durrant wrote:
>> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Jan Beulich
>> Sent: 03 February 2021 14:55
>>
>> On 01.02.2021 16:26, James Dingwall wrote:
>>> 21244@1612191983.282480:xen_platform_log xen platform: XEN|BUGCHECK: EXCEPTION (0000A824848948C2):
>>> 21244@1612191983.282617:xen_platform_log xen platform: XEN|BUGCHECK: CONTEXT (FFFFD0014343D580):
>>> 21244@1612191983.282717:xen_platform_log xen platform: XEN|BUGCHECK: - GS = 002B
>>> 21244@1612191983.282816:xen_platform_log xen platform: XEN|BUGCHECK: - FS = 0053
>>> 21244@1612191983.282914:xen_platform_log xen platform: XEN|BUGCHECK: - ES = 002B
>>> 21244@1612191983.283011:xen_platform_log xen platform: XEN|BUGCHECK: - DS = 002B
>>> 21244@1612191983.283127:xen_platform_log xen platform: XEN|BUGCHECK: - SS = 0018
>>> 21244@1612191983.283226:xen_platform_log xen platform: XEN|BUGCHECK: - CS = 0010
>>> 21244@1612191983.283332:xen_platform_log xen platform: XEN|BUGCHECK: - EFLAGS = 00000202
>>> 21244@1612191983.283444:xen_platform_log xen platform: XEN|BUGCHECK: - RDI = 00000000F64D5C20
>>> 21244@1612191983.283555:xen_platform_log xen platform: XEN|BUGCHECK: - RSI = 00000000F6367280
>>> 21244@1612191983.283666:xen_platform_log xen platform: XEN|BUGCHECK: - RBX = 000000008011E060
>>> 21244@1612191983.283810:xen_platform_log xen platform: XEN|BUGCHECK: - RDX = 00000000F64D5C20
>>> 21244@1612191983.283972:xen_platform_log xen platform: XEN|BUGCHECK: - RCX = 0000000000000199
>>> 21244@1612191983.284350:xen_platform_log xen platform: XEN|BUGCHECK: - RAX = 0000000000000004
>>> 21244@1612191983.284523:xen_platform_log xen platform: XEN|BUGCHECK: - RBP = 000000004343E891
>>> 21244@1612191983.284658:xen_platform_log xen platform: XEN|BUGCHECK: - RIP = 00000000A43C72C5
>>> 21244@1612191983.284842:xen_platform_log xen platform: XEN|BUGCHECK: - RSP = 000000004343DFA0
>>> 21244@1612191983.284959:xen_platform_log xen platform: XEN|BUGCHECK: - R8 = 0000000000000008
>>> 21244@1612191983.285073:xen_platform_log xen platform: XEN|BUGCHECK: - R9 = 000000000000000E
>>> 21244@1612191983.285188:xen_platform_log xen platform: XEN|BUGCHECK: - R10 = 0000000000000002
>>> 21244@1612191983.285304:xen_platform_log xen platform: XEN|BUGCHECK: - R11 = 000000004343E808
>>> 21244@1612191983.285420:xen_platform_log xen platform: XEN|BUGCHECK: - R12 = 0000000000000000
>>> 21244@1612191983.285564:xen_platform_log xen platform: XEN|BUGCHECK: - R13 = 00000000F7964E50
>>> 21244@1612191983.285680:xen_platform_log xen platform: XEN|BUGCHECK: - R14 = 00000000F64D5C20
>>> 21244@1612191983.285796:xen_platform_log xen platform: XEN|BUGCHECK: - R15 = 00000000F7964E50
>>
>> I'm also confused by this - the pointer given for CONTEXT suggests this
>> is a 64-bit kernel, yet none of the registers - including RIP and RSP -
>> have non-zero upper 32 bits. Or is qemu truncating these values?
> 
> The logging is coming from the PV drivers (in https://xenbits.xen.org/gitweb/?p=pvdrivers/win/xenbus.git;a=blob;f=src/xen/bug_check.c). The truncated values may just be due to a 32-bit user process I guess.

Since you pointed me at the code, and since truncation inside a
string is not likely to be due to some user process, I went and looked:
The driver uses %016X, instead of e.g. converting to (PVOID)
and using %p like code elsewhere in the file does (presumably
because there's no really convenient way to print 64-bit values
in Windows, short of using their custom "%016I64X" format
specifier, and the absence of a uniform specifier allowing to
format pointer-sized integers independent of architecture).

Jan
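The truncation described above is easy to model in portable C. This is an illustrative sketch, not the driver's code: fmt_like_pct_x() mimics what a "%016X" conversion effectively does to a 64-bit value (the conversion consumes an unsigned int, so the upper half is lost), while fmt_full() uses a proper 64-bit specifier.

```c
#include <assert.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Model of "%016X" applied to a 64-bit value: only 32 bits survive. */
static void fmt_like_pct_x(char *buf, size_t len, uint64_t val)
{
    snprintf(buf, len, "%016X", (unsigned int)val);
}

/* Width-correct alternative; converting to (void *) and using "%p",
 * as code elsewhere in the file does, achieves much the same. */
static void fmt_full(char *buf, size_t len, uint64_t val)
{
    snprintf(buf, len, "%016" PRIX64, val);
}
```

Feeding the CONTEXT pointer from the log (FFFFD0014343D580) through the first helper produces 000000004343D580, i.e. exactly the zero-extended upper halves seen in the reported register dump.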


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 15:48:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 15:48:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80960.148602 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7KO3-0003zF-Ap; Wed, 03 Feb 2021 15:48:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80960.148602; Wed, 03 Feb 2021 15:48:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7KO3-0003z8-7X; Wed, 03 Feb 2021 15:48:11 +0000
Received: by outflank-mailman (input) for mailman id 80960;
 Wed, 03 Feb 2021 15:48:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+PM6=HF=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1l7KO2-0003z3-AE
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 15:48:10 +0000
Received: from mail-wm1-x32c.google.com (unknown [2a00:1450:4864:20::32c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ae1c98af-fc59-4aa1-babc-4ebab13f2744;
 Wed, 03 Feb 2021 15:48:09 +0000 (UTC)
Received: by mail-wm1-x32c.google.com with SMTP id l12so109256wmq.2
 for <xen-devel@lists.xenproject.org>; Wed, 03 Feb 2021 07:48:09 -0800 (PST)
Received: from CBGR90WXYV0 (host86-190-149-163.range86-190.btcentralplus.com.
 [86.190.149.163])
 by smtp.gmail.com with ESMTPSA id d16sm3999152wrr.59.2021.02.03.07.48.07
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Wed, 03 Feb 2021 07:48:07 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ae1c98af-fc59-4aa1-babc-4ebab13f2744
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:thread-index
         :content-language;
        bh=QcD9ujADNh4gEP4T6CQjKgXk1e4/dGZeakSlRkYvYCs=;
        b=b2JAdHN/a2ODF4O7H3zTAWXdg4st156Vtf9axC0IN5nt9l5fP1L03+MY8trsQoQdQf
         Z20re5MWWd4Ax7pzzWDqCD7Bhwi0U/EpN1MlPG3jEsJtm7QEtJ0o+z+zD4lbyQxgMGps
         GkBDQQPP+tQiHLy6LPsfNExU3FlljfYyre5hBqJd9B4YdGju9eWfbvddwTiQW50s+12w
         JSZl8Q+bGsdgtlb+1r4Qff3eZnreuLP57m8mLh82GJ0NatLzKXnxtOH7KIlk1oKol4fx
         vVO2Bxsw8UpMhoxwQXh7Oo2pNjTMIKrHZM8hGK2BlZzX3JXjiOlSSmY77uDUW2PpbvEH
         Umbg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :thread-index:content-language;
        bh=QcD9ujADNh4gEP4T6CQjKgXk1e4/dGZeakSlRkYvYCs=;
        b=cNlOKbkcaAecn0yFqV10JvZ0n2NO4btAEbSf04shuXg2wEaqGgBPEkhb9JPs2bL5Cu
         coWeZdFqRifmTIXhOFxEy+slecGRMW2qIRUochctdkhU0E2nGK3/icj+90BvJmdEdAWk
         Z5z+hP5R9HK5DAFa54YLGIB5J+yZF+Ll5DnqbxHlnRuVPU1I/DaqWaUMfBvVc0abq49E
         rsUHUBxpp7q8lQMyUykfAM8lsApI8Qloe2UALhGT62M8npK+zRiaFTLGuPHQk74l+BLx
         G3XTqvvc0c3/9cP0W1qHEoosPljUJHg5XkhZxfIMc9eIFIYcECyjEEzrSylufv7A3kwB
         Tumg==
X-Gm-Message-State: AOAM532K5StzjYcqzXxO7mv2SMac9Y15mt5jWevT8S8OSKGtCu5G0IWN
	OrglulXniN98DswcfkFIw+E=
X-Google-Smtp-Source: ABdhPJxbxhzDZQljCNUMUqTauwihM/GY/eA/0SqSETqf3Urgsoac2WdYw+dDwhNLd8XBJHC6nZ6/ig==
X-Received: by 2002:a05:600c:414b:: with SMTP id h11mr3006567wmm.125.1612367288326;
        Wed, 03 Feb 2021 07:48:08 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>
Cc: <xen-devel@lists.xenproject.org>,
	"'James Dingwall'" <james-xen@dingwall.me.uk>
References: <20210201152655.GA3922797@dingwall.me.uk> <d30b5ee3-1fd9-a64b-1d9a-f79b6b333169@suse.com> <03d501d6fa3d$e5227cc0$af677640$@xen.org> <94d4af04-9f3c-8c40-8acd-705259ec91fa@suse.com>
In-Reply-To: <94d4af04-9f3c-8c40-8acd-705259ec91fa@suse.com>
Subject: RE: VIRIDIAN CRASH: 3b c0000096 75b12c5 9e7f1580 0
Date: Wed, 3 Feb 2021 15:48:07 -0000
Message-ID: <03dc01d6fa43$f798fe00$e6cafa00$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQJhmlK/cLfJLwTp9TTyTj8ctTVv3wG1IORhAfTq6uYCUcbZMakBdGpg
Content-Language: en-gb

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 03 February 2021 15:43
> To: paul@xen.org
> Cc: xen-devel@lists.xenproject.org; 'James Dingwall' <james-xen@dingwall.me.uk>
> Subject: Re: VIRIDIAN CRASH: 3b c0000096 75b12c5 9e7f1580 0
> 
> On 03.02.2021 16:04, Paul Durrant wrote:
> >> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Jan Beulich
> >> Sent: 03 February 2021 14:55
> >>
> >> On 01.02.2021 16:26, James Dingwall wrote:
> >>> 21244@1612191983.282480:xen_platform_log xen platform: XEN|BUGCHECK: EXCEPTION (0000A824848948C2):
> >>> 21244@1612191983.282617:xen_platform_log xen platform: XEN|BUGCHECK: CONTEXT (FFFFD0014343D580):
> >>> 21244@1612191983.282717:xen_platform_log xen platform: XEN|BUGCHECK: - GS = 002B
> >>> 21244@1612191983.282816:xen_platform_log xen platform: XEN|BUGCHECK: - FS = 0053
> >>> 21244@1612191983.282914:xen_platform_log xen platform: XEN|BUGCHECK: - ES = 002B
> >>> 21244@1612191983.283011:xen_platform_log xen platform: XEN|BUGCHECK: - DS = 002B
> >>> 21244@1612191983.283127:xen_platform_log xen platform: XEN|BUGCHECK: - SS = 0018
> >>> 21244@1612191983.283226:xen_platform_log xen platform: XEN|BUGCHECK: - CS = 0010
> >>> 21244@1612191983.283332:xen_platform_log xen platform: XEN|BUGCHECK: - EFLAGS = 00000202
> >>> 21244@1612191983.283444:xen_platform_log xen platform: XEN|BUGCHECK: - RDI = 00000000F64D5C20
> >>> 21244@1612191983.283555:xen_platform_log xen platform: XEN|BUGCHECK: - RSI = 00000000F6367280
> >>> 21244@1612191983.283666:xen_platform_log xen platform: XEN|BUGCHECK: - RBX = 000000008011E060
> >>> 21244@1612191983.283810:xen_platform_log xen platform: XEN|BUGCHECK: - RDX = 00000000F64D5C20
> >>> 21244@1612191983.283972:xen_platform_log xen platform: XEN|BUGCHECK: - RCX = 0000000000000199
> >>> 21244@1612191983.284350:xen_platform_log xen platform: XEN|BUGCHECK: - RAX = 0000000000000004
> >>> 21244@1612191983.284523:xen_platform_log xen platform: XEN|BUGCHECK: - RBP = 000000004343E891
> >>> 21244@1612191983.284658:xen_platform_log xen platform: XEN|BUGCHECK: - RIP = 00000000A43C72C5
> >>> 21244@1612191983.284842:xen_platform_log xen platform: XEN|BUGCHECK: - RSP = 000000004343DFA0
> >>> 21244@1612191983.284959:xen_platform_log xen platform: XEN|BUGCHECK: - R8 = 0000000000000008
> >>> 21244@1612191983.285073:xen_platform_log xen platform: XEN|BUGCHECK: - R9 = 000000000000000E
> >>> 21244@1612191983.285188:xen_platform_log xen platform: XEN|BUGCHECK: - R10 = 0000000000000002
> >>> 21244@1612191983.285304:xen_platform_log xen platform: XEN|BUGCHECK: - R11 = 000000004343E808
> >>> 21244@1612191983.285420:xen_platform_log xen platform: XEN|BUGCHECK: - R12 = 0000000000000000
> >>> 21244@1612191983.285564:xen_platform_log xen platform: XEN|BUGCHECK: - R13 = 00000000F7964E50
> >>> 21244@1612191983.285680:xen_platform_log xen platform: XEN|BUGCHECK: - R14 = 00000000F64D5C20
> >>> 21244@1612191983.285796:xen_platform_log xen platform: XEN|BUGCHECK: - R15 = 00000000F7964E50
> >>
> >> I'm also confused by this - the pointer given for CONTEXT suggests this
> >> is a 64-bit kernel, yet none of the registers - including RIP and RSP -
> >> have non-zero upper 32 bits. Or is qemu truncating these values?
> >
> > The logging is coming from the PV drivers (in
> https://xenbits.xen.org/gitweb/?p=pvdrivers/win/xenbus.git;a=blob;f=src/xen/bug_check.c). The
> truncated values may just be due to a 32-bit user process I guess.
> 
> Since you pointed me at the code, and truncation inside a string
> is not likely to be due to some user process, I went and looked:
> the driver uses %016X instead of e.g. converting to (PVOID)
> and using %p like code elsewhere in the file does (presumably
> because there's no really convenient way to print 64-bit values
> on Windows, short of their custom "%016I64X" format specifier,
> given the absence of a uniform specifier for formatting
> pointer-sized integers independently of architecture).

Oh yes, good point... Other places in the code use the %p trick. It should be changed.

  Paul

> 
> Jan



From xen-devel-bounces@lists.xenproject.org Wed Feb 03 16:04:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 16:04:48 +0000
Subject: Re: [PATCH v9 02/11] xen/domain: Add vmtrace_size domain creation
 parameter
To: Jan Beulich <jbeulich@suse.com>
CC: =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>, Tamas K Lengyel
	<tamas@tklengyel.com>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210201232703.29275-1-andrew.cooper3@citrix.com>
 <20210201232703.29275-3-andrew.cooper3@citrix.com>
 <7a27c313-2c7c-8394-3749-e2f4d671fdab@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <c6762201-eceb-39c1-fa2a-4bf2af5e0758@citrix.com>
Date: Wed, 3 Feb 2021 16:04:26 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
In-Reply-To: <7a27c313-2c7c-8394-3749-e2f4d671fdab@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
MIME-Version: 1.0

On 02/02/2021 09:04, Jan Beulich wrote:
> On 02.02.2021 00:26, Andrew Cooper wrote:
>> --- a/xen/common/domain.c
>> +++ b/xen/common/domain.c
>> @@ -132,6 +132,56 @@ static void vcpu_info_reset(struct vcpu *v)
>>      v->vcpu_info_mfn = INVALID_MFN;
>>  }
>>  
>> +static void vmtrace_free_buffer(struct vcpu *v)
>> +{
>> +    const struct domain *d = v->domain;
>> +    struct page_info *pg = v->vmtrace.pg;
>> +    unsigned int i;
>> +
>> +    if ( !pg )
>> +        return;
>> +
>> +    v->vmtrace.pg = NULL;
>> +
>> +    for ( i = 0; i < (d->vmtrace_size >> PAGE_SHIFT); i++ )
>> +    {
>> +        put_page_alloc_ref(&pg[i]);
>> +        put_page_and_type(&pg[i]);
>> +    }
>> +}
>> +
>> +static int vmtrace_alloc_buffer(struct vcpu *v)
>> +{
>> +    struct domain *d = v->domain;
>> +    struct page_info *pg;
>> +    unsigned int i;
>> +
>> +    if ( !d->vmtrace_size )
>> +        return 0;
>> +
>> +    pg = alloc_domheap_pages(d, get_order_from_bytes(d->vmtrace_size),
>> +                             MEMF_no_refcount);
>> +    if ( !pg )
>> +        return -ENOMEM;
>> +
>> +    for ( i = 0; i < (d->vmtrace_size >> PAGE_SHIFT); i++ )
>> +        if ( unlikely(!get_page_and_type(&pg[i], d, PGT_writable_page)) )
>> +            goto refcnt_err;
>> +
>> +    /*
>> +     * We must only let vmtrace_free_buffer() take any action in the success
>> +     * case when we've taken all the refs it intends to drop.
>> +     */
>> +    v->vmtrace.pg = pg;
>> +    return 0;
>> +
>> + refcnt_err:
>> +    while ( i-- )
>> +        put_page_and_type(&pg[i]);
>> +
>> +    return -ENODATA;
> Would you mind at least logging how many pages may be leaked
> here? I also don't understand why you don't call
> put_page_alloc_ref() in the loop - that's fine to do prior to
> the put_page_and_type(), and will at least limit the leak.
> The buffer size here typically isn't insignificant, and it
> may be helpful to not unnecessarily defer the freeing to
> relinquish_resources() (assuming we will make that one also
> traverse the list of "extra" pages, but I understand that's
> not going to happen for 4.15 anymore anyway).
>
> Additionally, while I understand you're not in favor of the
> comments we have on all three similar code paths, I think
> replicating their comments here would help easily spotting
> (by grep-ing e.g. for "fishy") all instances that will need
> adjusting once we have figured a better overall solution.

How is:

    for ( i = 0; i < (d->vmtrace_size >> PAGE_SHIFT); i++ )
        if ( unlikely(!get_page_and_type(&pg[i], d, PGT_writable_page)) )
            /*
             * The domain can't possibly know about this page yet, so failure
             * here is a clear indication of something fishy going on.
             */
            goto refcnt_err;

    /*
     * We must only let vmtrace_free_buffer() take any action in the success
     * case when we've taken all the refs it intends to drop.
     */
    v->vmtrace.pg = pg;
    return 0;

 refcnt_err:
    /*
     * We can theoretically reach this point if someone has taken 2^43 refs on
     * the frames in the time the above loop takes to execute, or someone has
     * made a blind decrease reservation hypercall and managed to pick the
     * right mfn.  Free the memory we safely can, and leak the rest.
     */
    while ( i-- )
    {
        put_page_alloc_ref(&pg[i]);
        put_page_and_type(&pg[i]);
    }

    return -ENODATA;

this?

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 16:38:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 16:38:26 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Manuel Bouyer
	<bouyer@netbsd.org>
Subject: [PATCH for-4.15 1/2] libs/foreignmem: Drop useless and/or misleading logging
Date: Wed, 3 Feb 2021 16:37:49 +0000
Message-ID: <20210203163750.7564-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

These log lines are all in response to single system calls, and do not provide
any information which the immediate caller can't determine itself.  It is,
however, exceedingly rude to put junk like this onto stderr, especially as
system call failures are not even error conditions in certain circumstances.

The FreeBSD logging has stale function names in it, and the Solaris logging
shouldn't have passed code review to start with.

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Ian Jackson <iwj@xenproject.org>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Manuel Bouyer <bouyer@netbsd.org>

The errno constructs in osdep_xenforeignmemory_map_resource() are addressed in
the following patch, to avoid adding complexity to this one.

This reduces the quantity of noise from unit tests, where certain syscall
failures are definitely not errors.
---
 tools/libs/foreignmemory/freebsd.c | 7 ++-----
 tools/libs/foreignmemory/linux.c   | 6 +-----
 tools/libs/foreignmemory/netbsd.c  | 2 --
 tools/libs/foreignmemory/solaris.c | 2 +-
 4 files changed, 4 insertions(+), 13 deletions(-)

diff --git a/tools/libs/foreignmemory/freebsd.c b/tools/libs/foreignmemory/freebsd.c
index 9a2796f0b7..04bfa806b0 100644
--- a/tools/libs/foreignmemory/freebsd.c
+++ b/tools/libs/foreignmemory/freebsd.c
@@ -65,10 +65,7 @@ void *osdep_xenforeignmemory_map(xenforeignmemory_handle *fmem,
 
     addr = mmap(addr, num << PAGE_SHIFT, prot, flags | MAP_SHARED, fd, 0);
     if ( addr == MAP_FAILED )
-    {
-        PERROR("xc_map_foreign_bulk: mmap failed");
         return NULL;
-    }
 
     ioctlx.num = num;
     ioctlx.dom = dom;
@@ -80,7 +77,7 @@ void *osdep_xenforeignmemory_map(xenforeignmemory_handle *fmem,
     if ( rc < 0 )
     {
         int saved_errno = errno;
-        PERROR("xc_map_foreign_bulk: ioctl failed");
+
         (void)munmap(addr, num << PAGE_SHIFT);
         errno = saved_errno;
         return NULL;
@@ -137,7 +134,7 @@ int osdep_xenforeignmemory_map_resource(xenforeignmemory_handle *fmem,
         int saved_errno;
 
         if ( errno != ENOSYS )
-            PERROR("mmap resource ioctl failed");
+            ;
         else
             errno = EOPNOTSUPP;
 
diff --git a/tools/libs/foreignmemory/linux.c b/tools/libs/foreignmemory/linux.c
index d0eead1196..050b9ed3a5 100644
--- a/tools/libs/foreignmemory/linux.c
+++ b/tools/libs/foreignmemory/linux.c
@@ -171,10 +171,7 @@ void *osdep_xenforeignmemory_map(xenforeignmemory_handle *fmem,
     addr = mmap(addr, num << PAGE_SHIFT, prot, flags | MAP_SHARED,
                 fd, 0);
     if ( addr == MAP_FAILED )
-    {
-        PERROR("mmap failed");
         return NULL;
-    }
 
     ioctlx.num = num;
     ioctlx.dom = dom;
@@ -273,7 +270,6 @@ void *osdep_xenforeignmemory_map(xenforeignmemory_handle *fmem,
     {
         int saved_errno = errno;
 
-        PERROR("ioctl failed");
         (void)munmap(addr, num << PAGE_SHIFT);
         errno = saved_errno;
         return NULL;
@@ -330,7 +326,7 @@ int osdep_xenforeignmemory_map_resource(
         int saved_errno;
 
         if ( errno != fmem->unimpl_errno && errno != EOPNOTSUPP )
-            PERROR("ioctl failed");
+            ;
         else
             errno = EOPNOTSUPP;
 
diff --git a/tools/libs/foreignmemory/netbsd.c b/tools/libs/foreignmemory/netbsd.c
index 4ae60aafdd..565682e064 100644
--- a/tools/libs/foreignmemory/netbsd.c
+++ b/tools/libs/foreignmemory/netbsd.c
@@ -147,8 +147,6 @@ int osdep_xenforeignmemory_map_resource(
     rc = ioctl(fmem->fd, IOCTL_PRIVCMD_MMAP_RESOURCE, &mr);
     if ( rc )
     {
-        PERROR("ioctl failed");
-
         if ( fres->addr )
         {
             int saved_errno = errno;
diff --git a/tools/libs/foreignmemory/solaris.c b/tools/libs/foreignmemory/solaris.c
index ee8aae4fbd..958fb01f6d 100644
--- a/tools/libs/foreignmemory/solaris.c
+++ b/tools/libs/foreignmemory/solaris.c
@@ -83,7 +83,7 @@ void *osdep_map_foreign_batch(xenforeignmem_handle *fmem, uint32_t dom,
     if ( ioctl(fd, IOCTL_PRIVCMD_MMAPBATCH, &ioctlx) < 0 )
     {
         int saved_errno = errno;
-        PERROR("XXXXXXXX");
+
         (void)munmap(addr, num*XC_PAGE_SIZE);
         errno = saved_errno;
         return NULL;
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed Feb 03 16:38:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 16:38:27 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Manuel Bouyer
	<bouyer@netbsd.org>
Subject: [PATCH for-4.15 2/2] libs/foreignmem: Fix/simplify errno handling for map_resource
Date: Wed, 3 Feb 2021 16:37:50 +0000
Message-ID: <20210203163750.7564-2-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210203163750.7564-1-andrew.cooper3@citrix.com>
References: <20210203163750.7564-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Simplify the FreeBSD logic, and duplicate it for NetBSD, as the userspace ABI
appears to be to consistently provide EOPNOTSUPP for missing Xen/kernel
support.

The Linux logic was contorted in what appears to be a deliberate attempt to
skip the now-deleted logic for the EOPNOTSUPP case.  Simplify it.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Ian Jackson <iwj@xenproject.org>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Manuel Bouyer <bouyer@netbsd.org>
---
 tools/libs/foreignmemory/freebsd.c | 4 +---
 tools/libs/foreignmemory/linux.c   | 4 +---
 tools/libs/foreignmemory/netbsd.c  | 3 +++
 3 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/tools/libs/foreignmemory/freebsd.c b/tools/libs/foreignmemory/freebsd.c
index 04bfa806b0..d94ea07862 100644
--- a/tools/libs/foreignmemory/freebsd.c
+++ b/tools/libs/foreignmemory/freebsd.c
@@ -133,9 +133,7 @@ int osdep_xenforeignmemory_map_resource(xenforeignmemory_handle *fmem,
     {
         int saved_errno;
 
-        if ( errno != ENOSYS )
-            ;
-        else
+        if ( errno == ENOSYS )
             errno = EOPNOTSUPP;
 
         if ( fres->addr )
diff --git a/tools/libs/foreignmemory/linux.c b/tools/libs/foreignmemory/linux.c
index 050b9ed3a5..c1f35e2db7 100644
--- a/tools/libs/foreignmemory/linux.c
+++ b/tools/libs/foreignmemory/linux.c
@@ -325,9 +325,7 @@ int osdep_xenforeignmemory_map_resource(
     {
         int saved_errno;
 
-        if ( errno != fmem->unimpl_errno && errno != EOPNOTSUPP )
-            ;
-        else
+        if ( errno == fmem->unimpl_errno )
             errno = EOPNOTSUPP;
 
         if ( fres->addr )
diff --git a/tools/libs/foreignmemory/netbsd.c b/tools/libs/foreignmemory/netbsd.c
index 565682e064..c0b1b8f79d 100644
--- a/tools/libs/foreignmemory/netbsd.c
+++ b/tools/libs/foreignmemory/netbsd.c
@@ -147,6 +147,9 @@ int osdep_xenforeignmemory_map_resource(
     rc = ioctl(fmem->fd, IOCTL_PRIVCMD_MMAP_RESOURCE, &mr);
     if ( rc )
     {
+        if ( errno == ENOSYS )
+            errno = EOPNOTSUPP;
+
         if ( fres->addr )
         {
             int saved_errno = errno;
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed Feb 03 16:54:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 16:54:49 +0000
From: Manuel Bouyer <bouyer@netbsd.org>
To: xen-devel@lists.xenproject.org
Cc: Manuel Bouyer <bouyer@netbsd.org>, Ian Jackson <iwj@xenproject.org>,
        Wei Liu <wl@xen.org>
Subject: [PATCH] add a qemu-ifup script on NetBSD
Date: Wed,  3 Feb 2021 17:54:18 +0100
Message-Id: <20210203165421.1550-1-bouyer@netbsd.org>
X-Mailer: git-send-email 2.29.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (isis.lip6.fr [132.227.60.2]); Wed, 03 Feb 2021 17:54:39 +0100 (CET)
X-Scanned-By: MIMEDefang 2.78 on 132.227.60.2

On NetBSD, qemu-xen will use a qemu-ifup script to set up the tap interfaces
(as qemu-xen-traditional used to). Copy the script from qemu-xen-traditional
and install it on NetBSD. While there, document the parameters and environment
variables.

Signed-off-by: Manuel Bouyer <bouyer@netbsd.org>
---
 tools/hotplug/NetBSD/Makefile  | 1 +
 tools/hotplug/NetBSD/qemu-ifup | 9 +++++++++
 2 files changed, 10 insertions(+)
 create mode 100644 tools/hotplug/NetBSD/qemu-ifup

diff --git a/tools/hotplug/NetBSD/Makefile b/tools/hotplug/NetBSD/Makefile
index 114b223207..f909ffa367 100644
--- a/tools/hotplug/NetBSD/Makefile
+++ b/tools/hotplug/NetBSD/Makefile
@@ -7,6 +7,7 @@ XEN_SCRIPTS += locking.sh
 XEN_SCRIPTS += block
 XEN_SCRIPTS += vif-bridge
 XEN_SCRIPTS += vif-ip
+XEN_SCRIPTS += qemu-ifup
 
 XEN_SCRIPT_DATA =
 XEN_RCD_PROG = rc.d/xencommons rc.d/xendomains rc.d/xen-watchdog rc.d/xendriverdomain
diff --git a/tools/hotplug/NetBSD/qemu-ifup b/tools/hotplug/NetBSD/qemu-ifup
new file mode 100644
index 0000000000..4305419f44
--- /dev/null
+++ b/tools/hotplug/NetBSD/qemu-ifup
@@ -0,0 +1,9 @@
+#!/bin/sh
+
+# Called by qemu when an HVM domU is started.
+# The first parameter is the tap interface, the second is the bridge name.
+# The environment variable $XEN_DOMAIN_ID contains the domU's ID,
+# which can be used to retrieve extra parameters from the xenstore.
+
+ifconfig "$1" up
+exec /sbin/brconfig "$2" add "$1"
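
The calling convention described in the script's comments can be sketched as
follows; this is a hypothetical helper for illustration (not part of the
patch), which only prints the commands the script would run, so it can be
exercised without root privileges or a real tap interface:

```shell
#!/bin/sh
# Hypothetical sketch, not from the patch: qemu invokes the script as
#   qemu-ifup <tap> <bridge>
# with XEN_DOMAIN_ID set in the environment. This helper just echoes
# the commands the real script would execute.
build_ifup_cmds() {
    tap=$1
    bridge=$2
    printf 'ifconfig %s up\n' "$tap"
    printf '/sbin/brconfig %s add %s\n' "$bridge" "$tap"
}

build_ifup_cmds tap0 bridge0
```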
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Wed Feb 03 16:54:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 16:54:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80970.148662 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7LQW-0003FV-Hf; Wed, 03 Feb 2021 16:54:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80970.148662; Wed, 03 Feb 2021 16:54:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7LQW-0003FO-EO; Wed, 03 Feb 2021 16:54:48 +0000
Received: by outflank-mailman (input) for mailman id 80970;
 Wed, 03 Feb 2021 16:54:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+rCV=HF=lip6.fr=manuel.bouyer@srs-us1.protection.inumbo.net>)
 id 1l7LQV-0003Do-Lq
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 16:54:47 +0000
Received: from isis.lip6.fr (unknown [2001:660:3302:283c::2])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 153216e8-a24f-4064-8feb-d0305b03f932;
 Wed, 03 Feb 2021 16:54:45 +0000 (UTC)
Received: from asim.lip6.fr (asim.lip6.fr [132.227.86.2])
 by isis.lip6.fr (8.15.2/8.15.2) with ESMTP id 113GsiS3013800;
 Wed, 3 Feb 2021 17:54:44 +0100 (CET)
Received: from borneo.soc.lip6.fr (borneo [132.227.103.47])
 by asim.lip6.fr (8.15.2/8.14.4) with ESMTP id 113Gsi37000552;
 Wed, 3 Feb 2021 17:54:44 +0100 (MET)
Received: by borneo.soc.lip6.fr (Postfix, from userid 373)
 id 07683AA8BB; Wed,  3 Feb 2021 17:54:44 +0100 (MET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 153216e8-a24f-4064-8feb-d0305b03f932
From: Manuel Bouyer <bouyer@netbsd.org>
To: xen-devel@lists.xenproject.org
Cc: Manuel Bouyer <bouyer@netbsd.org>, Ian Jackson <iwj@xenproject.org>,
        Wei Liu <wl@xen.org>, Juergen Gross <jgross@suse.com>
Subject: [PATCH] xenstored: close socket connections on error
Date: Wed,  3 Feb 2021 17:54:19 +0100
Message-Id: <20210203165421.1550-2-bouyer@netbsd.org>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20210203165421.1550-1-bouyer@netbsd.org>
References: <20210203165421.1550-1-bouyer@netbsd.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (isis.lip6.fr [132.227.60.2]); Wed, 03 Feb 2021 17:54:44 +0100 (CET)
X-Scanned-By: MIMEDefang 2.78 on 132.227.60.2

On error, don't keep socket connections in the ignored state; close them
instead. When the remote end of a socket is closed, xenstored flags it as an
error and switches the connection to the ignored state. But on some OSes (e.g.
NetBSD), poll(2) will return only POLLIN in this case, so sockets in the
ignored state stay open forever in xenstored (and it loops at 100% CPU).

Signed-off-by: Manuel Bouyer <bouyer@netbsd.org>
Fixes: d2fa370d3ef9cbe22d7256c608671cdcdf6e0083
---
 tools/xenstore/xenstored_core.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 1ab6f162cb..0fea598352 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -1440,6 +1440,9 @@ static void ignore_connection(struct connection *conn)
 
 	talloc_free(conn->in);
 	conn->in = NULL;
+	/* if this is a socket connection, drop it now */
+	if (conn->fd >= 0)
+		talloc_free(conn);
 }
 
 static const char *sockmsg_string(enum xsd_sockmsg_type type)
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Wed Feb 03 16:54:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 16:54:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80971.148674 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7LQe-0003KX-WC; Wed, 03 Feb 2021 16:54:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80971.148674; Wed, 03 Feb 2021 16:54:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7LQe-0003KP-S6; Wed, 03 Feb 2021 16:54:56 +0000
Received: by outflank-mailman (input) for mailman id 80971;
 Wed, 03 Feb 2021 16:54:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+rCV=HF=lip6.fr=manuel.bouyer@srs-us1.protection.inumbo.net>)
 id 1l7LQd-0003Jr-QC
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 16:54:55 +0000
Received: from isis.lip6.fr (unknown [2001:660:3302:283c::2])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cf125cb7-1473-43ae-b2db-b1daf8ee0b89;
 Wed, 03 Feb 2021 16:54:54 +0000 (UTC)
Received: from asim.lip6.fr (asim.lip6.fr [132.227.86.2])
 by isis.lip6.fr (8.15.2/8.15.2) with ESMTP id 113GsrDn009511;
 Wed, 3 Feb 2021 17:54:53 +0100 (CET)
Received: from borneo.soc.lip6.fr (borneo [132.227.103.47])
 by asim.lip6.fr (8.15.2/8.14.4) with ESMTP id 113GsrfB012214;
 Wed, 3 Feb 2021 17:54:53 +0100 (MET)
Received: by borneo.soc.lip6.fr (Postfix, from userid 373)
 id ACEECAA8BB; Wed,  3 Feb 2021 17:54:53 +0100 (MET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cf125cb7-1473-43ae-b2db-b1daf8ee0b89
From: Manuel Bouyer <bouyer@netbsd.org>
To: xen-devel@lists.xenproject.org
Cc: Manuel Bouyer <bouyer@netbsd.org>, Wei Liu <wl@xen.org>,
        Ian Jackson <iwj@xenproject.org>
Subject: [PATCH v3] Document qemu-ifup on NetBSD
Date: Wed,  3 Feb 2021 17:54:20 +0100
Message-Id: <20210203165421.1550-3-bouyer@netbsd.org>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20210203165421.1550-1-bouyer@netbsd.org>
References: <20210203165421.1550-1-bouyer@netbsd.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (isis.lip6.fr [132.227.60.2]); Wed, 03 Feb 2021 17:54:53 +0100 (CET)
X-Scanned-By: MIMEDefang 2.78 on 132.227.60.2

Document that on NetBSD, the tap interface will be configured by the
qemu-ifup script.

Signed-off-by: Manuel Bouyer <bouyer@netbsd.org>
Release-Acked-by: Ian Jackson <iwj@xenproject.org>
---
 docs/man/xl-network-configuration.5.pod | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/docs/man/xl-network-configuration.5.pod b/docs/man/xl-network-configuration.5.pod
index af058d4d3c..8e5fd909fa 100644
--- a/docs/man/xl-network-configuration.5.pod
+++ b/docs/man/xl-network-configuration.5.pod
@@ -172,6 +172,9 @@ add it to the relevant bridge). Defaults to
 C<XEN_SCRIPT_DIR/vif-bridge> but can be set to any script. Some example
 scripts are installed in C<XEN_SCRIPT_DIR>.
 
+Note that on NetBSD, HVM guests ignore the script option for tap
+(emulated) interfaces and always use
+C<XEN_SCRIPT_DIR/qemu-ifup> to configure the interface in bridged mode.
 
 =head2 ip
 
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Wed Feb 03 16:55:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 16:55:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80972.148686 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7LQj-0003Of-9b; Wed, 03 Feb 2021 16:55:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80972.148686; Wed, 03 Feb 2021 16:55:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7LQj-0003OV-5n; Wed, 03 Feb 2021 16:55:01 +0000
Received: by outflank-mailman (input) for mailman id 80972;
 Wed, 03 Feb 2021 16:55:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+rCV=HF=lip6.fr=manuel.bouyer@srs-us1.protection.inumbo.net>)
 id 1l7LQi-0003Jr-1I
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 16:55:00 +0000
Received: from isis.lip6.fr (unknown [2001:660:3302:283c::2])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3172e2eb-b1c9-42c2-83ab-81a1ba84c68f;
 Wed, 03 Feb 2021 16:54:57 +0000 (UTC)
Received: from asim.lip6.fr (asim.lip6.fr [132.227.86.2])
 by isis.lip6.fr (8.15.2/8.15.2) with ESMTP id 113GsuUJ001216;
 Wed, 3 Feb 2021 17:54:56 +0100 (CET)
Received: from borneo.soc.lip6.fr (borneo [132.227.103.47])
 by asim.lip6.fr (8.15.2/8.14.4) with ESMTP id 113GsuSS003390;
 Wed, 3 Feb 2021 17:54:56 +0100 (MET)
Received: by borneo.soc.lip6.fr (Postfix, from userid 373)
 id 1ADF8AA8BB; Wed,  3 Feb 2021 17:54:56 +0100 (MET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3172e2eb-b1c9-42c2-83ab-81a1ba84c68f
From: Manuel Bouyer <bouyer@netbsd.org>
To: xen-devel@lists.xenproject.org
Cc: Manuel Bouyer <bouyer@netbsd.org>,
        Elena Ufimtseva <elena.ufimtseva@oracle.com>,
        Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: [PATCH v3] NetBSD: use system-provided headers
Date: Wed,  3 Feb 2021 17:54:21 +0100
Message-Id: <20210203165421.1550-4-bouyer@netbsd.org>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20210203165421.1550-1-bouyer@netbsd.org>
References: <20210203165421.1550-1-bouyer@netbsd.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (isis.lip6.fr [132.227.60.2]); Wed, 03 Feb 2021 17:54:56 +0100 (CET)
X-Scanned-By: MIMEDefang 2.78 on 132.227.60.2

On NetBSD, use the system-provided headers for ioctl and related definitions:
they are up to date and more likely to match the kernel's idea of the ioctls
and structures.
Remove the now-unused NetBSD/evtchn.h and NetBSD/privcmd.h.
Don't fail the install if xen/sys/*.h are not present.

Signed-off-by: Manuel Bouyer <bouyer@netbsd.org>
---
 tools/debugger/gdbsx/xg/xg_main.c      |   4 +
 tools/include/Makefile                 |   2 +
 tools/include/xen-sys/NetBSD/evtchn.h  |  86 --------------------
 tools/include/xen-sys/NetBSD/privcmd.h | 106 -------------------------
 tools/libs/call/private.h              |   4 +
 tools/libs/ctrl/xc_private.h           |   4 +
 tools/libs/foreignmemory/private.h     |   6 ++
 7 files changed, 20 insertions(+), 192 deletions(-)
 delete mode 100644 tools/include/xen-sys/NetBSD/evtchn.h
 delete mode 100644 tools/include/xen-sys/NetBSD/privcmd.h

diff --git a/tools/debugger/gdbsx/xg/xg_main.c b/tools/debugger/gdbsx/xg/xg_main.c
index 4576c762af..903d60baed 100644
--- a/tools/debugger/gdbsx/xg/xg_main.c
+++ b/tools/debugger/gdbsx/xg/xg_main.c
@@ -49,7 +49,11 @@
 #include "xg_public.h"
 #include <xen/version.h>
 #include <xen/domctl.h>
+#ifdef __NetBSD__
+#include <xen/xenio.h>
+#else
 #include <xen/sys/privcmd.h>
+#endif
 #include <xen/foreign/x86_32.h>
 #include <xen/foreign/x86_64.h>
 
diff --git a/tools/include/Makefile b/tools/include/Makefile
index 4d4ec5f974..65fb67a771 100644
--- a/tools/include/Makefile
+++ b/tools/include/Makefile
@@ -68,7 +68,9 @@ install: all
 	$(INSTALL_DATA) xen/foreign/*.h $(DESTDIR)$(includedir)/xen/foreign
 	$(INSTALL_DATA) xen/hvm/*.h $(DESTDIR)$(includedir)/xen/hvm
 	$(INSTALL_DATA) xen/io/*.h $(DESTDIR)$(includedir)/xen/io
+ifneq ($(wildcard xen/sys/*.h),)
 	$(INSTALL_DATA) xen/sys/*.h $(DESTDIR)$(includedir)/xen/sys
+endif
 	$(INSTALL_DATA) xen/xsm/*.h $(DESTDIR)$(includedir)/xen/xsm
 
 .PHONY: uninstall
diff --git a/tools/include/xen-sys/NetBSD/evtchn.h b/tools/include/xen-sys/NetBSD/evtchn.h
deleted file mode 100644
index 2d8a1f9164..0000000000
--- a/tools/include/xen-sys/NetBSD/evtchn.h
+++ /dev/null
@@ -1,86 +0,0 @@
-/* $NetBSD: evtchn.h,v 1.1.1.1 2007/06/14 19:39:45 bouyer Exp $ */
-/******************************************************************************
- * evtchn.h
- * 
- * Interface to /dev/xen/evtchn.
- * 
- * Copyright (c) 2003-2005, K A Fraser
- * 
- * This file may be distributed separately from the Linux kernel, or
- * incorporated into other software packages, subject to the following license:
- * 
- * Permission is hereby granted, free of charge, to any person obtaining a copy
- * of this source file (the "Software"), to deal in the Software without
- * restriction, including without limitation the rights to use, copy, modify,
- * merge, publish, distribute, sublicense, and/or sell copies of the Software,
- * and to permit persons to whom the Software is furnished to do so, subject to
- * the following conditions:
- * 
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- * 
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
- * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- */
-
-#ifndef __NetBSD_EVTCHN_H__
-#define __NetBSD_EVTCHN_H__
-
-/*
- * Bind a fresh port to VIRQ @virq.
- */
-#define IOCTL_EVTCHN_BIND_VIRQ				\
-	_IOWR('E', 4, struct ioctl_evtchn_bind_virq)
-struct ioctl_evtchn_bind_virq {
-	unsigned int virq;
-	unsigned int port;
-};
-
-/*
- * Bind a fresh port to remote <@remote_domain, @remote_port>.
- */
-#define IOCTL_EVTCHN_BIND_INTERDOMAIN			\
-	_IOWR('E', 5, struct ioctl_evtchn_bind_interdomain)
-struct ioctl_evtchn_bind_interdomain {
-	unsigned int remote_domain, remote_port;
-	unsigned int port;
-};
-
-/*
- * Allocate a fresh port for binding to @remote_domain.
- */
-#define IOCTL_EVTCHN_BIND_UNBOUND_PORT			\
-	_IOWR('E', 6, struct ioctl_evtchn_bind_unbound_port)
-struct ioctl_evtchn_bind_unbound_port {
-	unsigned int remote_domain;
-	unsigned int port;
-};
-
-/*
- * Unbind previously allocated @port.
- */
-#define IOCTL_EVTCHN_UNBIND				\
-	_IOW('E', 7, struct ioctl_evtchn_unbind)
-struct ioctl_evtchn_unbind {
-	unsigned int port;
-};
-
-/*
- * Send event to previously allocated @port.
- */
-#define IOCTL_EVTCHN_NOTIFY				\
-	_IOW('E', 8, struct ioctl_evtchn_notify)
-struct ioctl_evtchn_notify {
-	unsigned int port;
-};
-
-/* Clear and reinitialise the event buffer. Clear error condition. */
-#define IOCTL_EVTCHN_RESET				\
-	_IO('E', 9)
-
-#endif /* __NetBSD_EVTCHN_H__ */
diff --git a/tools/include/xen-sys/NetBSD/privcmd.h b/tools/include/xen-sys/NetBSD/privcmd.h
deleted file mode 100644
index 555bad973e..0000000000
--- a/tools/include/xen-sys/NetBSD/privcmd.h
+++ /dev/null
@@ -1,106 +0,0 @@
-/*	NetBSD: xenio.h,v 1.3 2005/05/24 12:07:12 yamt Exp $	*/
-
-/******************************************************************************
- * privcmd.h
- * 
- * Copyright (c) 2003-2004, K A Fraser
- * 
- * This file may be distributed separately from the Linux kernel, or
- * incorporated into other software packages, subject to the following license:
- * 
- * Permission is hereby granted, free of charge, to any person obtaining a copy
- * of this source file (the "Software"), to deal in the Software without
- * restriction, including without limitation the rights to use, copy, modify,
- * merge, publish, distribute, sublicense, and/or sell copies of the Software,
- * and to permit persons to whom the Software is furnished to do so, subject to
- * the following conditions:
- * 
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- * 
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
- * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- */
-
-#ifndef __NetBSD_PRIVCMD_H__
-#define __NetBSD_PRIVCMD_H__
-
-/* Interface to /dev/xen/privcmd */
-
-typedef struct privcmd_hypercall
-{
-    unsigned long op;
-    unsigned long arg[5];
-    long retval;
-} privcmd_hypercall_t;
-
-typedef struct privcmd_mmap_entry {
-    unsigned long va;
-    unsigned long mfn;
-    unsigned long npages;
-} privcmd_mmap_entry_t; 
-
-typedef struct privcmd_mmap {
-    int num;
-    domid_t dom; /* target domain */
-    privcmd_mmap_entry_t *entry;
-} privcmd_mmap_t; 
-
-typedef struct privcmd_mmapbatch {
-    int num;     /* number of pages to populate */
-    domid_t dom; /* target domain */
-    unsigned long addr;  /* virtual address */
-    unsigned long *arr; /* array of mfns - top nibble set on err */
-} privcmd_mmapbatch_t; 
-
-typedef struct privcmd_blkmsg
-{
-    unsigned long op;
-    void         *buf;
-    int           buf_size;
-} privcmd_blkmsg_t;
-
-/*
- * @cmd: IOCTL_PRIVCMD_HYPERCALL
- * @arg: &privcmd_hypercall_t
- * Return: Value returned from execution of the specified hypercall.
- */
-#define IOCTL_PRIVCMD_HYPERCALL         \
-    _IOWR('P', 0, privcmd_hypercall_t)
-
-#if defined(_KERNEL)
-/* compat */
-#define IOCTL_PRIVCMD_INITDOMAIN_EVTCHN_OLD \
-    _IO('P', 1)
-#endif /* defined(_KERNEL) */
-    
-#define IOCTL_PRIVCMD_MMAP             \
-    _IOW('P', 2, privcmd_mmap_t)
-#define IOCTL_PRIVCMD_MMAPBATCH        \
-    _IOW('P', 3, privcmd_mmapbatch_t)
-#define IOCTL_PRIVCMD_GET_MACH2PHYS_START_MFN \
-    _IOR('P', 4, unsigned long)
-
-/*
- * @cmd: IOCTL_PRIVCMD_INITDOMAIN_EVTCHN
- * @arg: n/a
- * Return: Port associated with domain-controller end of control event channel
- *         for the initial domain.
- */
-#define IOCTL_PRIVCMD_INITDOMAIN_EVTCHN \
-    _IOR('P', 5, int)
-
-/* Interface to /dev/xenevt */
-/* EVTCHN_RESET: Clear and reinit the event buffer. Clear error condition. */
-#define EVTCHN_RESET  _IO('E', 1)
-/* EVTCHN_BIND: Bind to the specified event-channel port. */
-#define EVTCHN_BIND   _IOW('E', 2, unsigned long)
-/* EVTCHN_UNBIND: Unbind from the specified event-channel port. */
-#define EVTCHN_UNBIND _IOW('E', 3, unsigned long)
-
-#endif /* __NetBSD_PRIVCMD_H__ */
diff --git a/tools/libs/call/private.h b/tools/libs/call/private.h
index 7944ac5baf..96922e03d5 100644
--- a/tools/libs/call/private.h
+++ b/tools/libs/call/private.h
@@ -7,7 +7,11 @@
 #include <xencall.h>
 
 #include <xen/xen.h>
+#ifdef __NetBSD__
+#include <xen/xenio.h>
+#else
 #include <xen/sys/privcmd.h>
+#endif
 
 #ifndef PAGE_SHIFT /* Mini-os, Yukk */
 #define PAGE_SHIFT           12
diff --git a/tools/libs/ctrl/xc_private.h b/tools/libs/ctrl/xc_private.h
index f0b5f83ac8..68e388f488 100644
--- a/tools/libs/ctrl/xc_private.h
+++ b/tools/libs/ctrl/xc_private.h
@@ -39,7 +39,11 @@
 #include <xenforeignmemory.h>
 #include <xendevicemodel.h>
 
+#ifdef __NetBSD__
+#include <xen/xenio.h>
+#else
 #include <xen/sys/privcmd.h>
+#endif
 
 #include <xen-tools/libs.h>
 
diff --git a/tools/libs/foreignmemory/private.h b/tools/libs/foreignmemory/private.h
index 1ee3626dd2..581d8b1eef 100644
--- a/tools/libs/foreignmemory/private.h
+++ b/tools/libs/foreignmemory/private.h
@@ -8,7 +8,13 @@
 #include <xentoolcore_internal.h>
 
 #include <xen/xen.h>
+
+#ifdef __NetBSD__
+#include <xen/xen.h>
+#include <xen/xenio.h>
+#else
 #include <xen/sys/privcmd.h>
+#endif
 
 #ifndef PAGE_SHIFT /* Mini-os, Yukk */
 #define PAGE_SHIFT           12
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Wed Feb 03 17:15:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 17:15:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80978.148697 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7LkF-0005eH-1i; Wed, 03 Feb 2021 17:15:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80978.148697; Wed, 03 Feb 2021 17:15:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7LkE-0005eA-Uu; Wed, 03 Feb 2021 17:15:10 +0000
Received: by outflank-mailman (input) for mailman id 80978;
 Wed, 03 Feb 2021 17:15:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eIhc=HF=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1l7LkD-0005dm-BW
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 17:15:09 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.21.72]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3b668b52-59b7-4391-86a3-39d0926cf479;
 Wed, 03 Feb 2021 17:15:05 +0000 (UTC)
Received: from DBBPR09CA0044.eurprd09.prod.outlook.com (2603:10a6:10:d4::32)
 by VI1PR08MB4302.eurprd08.prod.outlook.com (2603:10a6:803:fb::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3805.17; Wed, 3 Feb
 2021 17:15:03 +0000
Received: from DB5EUR03FT018.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:d4:cafe::42) by DBBPR09CA0044.outlook.office365.com
 (2603:10a6:10:d4::32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3825.20 via Frontend
 Transport; Wed, 3 Feb 2021 17:15:03 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT018.mail.protection.outlook.com (10.152.20.69) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3784.11 via Frontend Transport; Wed, 3 Feb 2021 17:15:02 +0000
Received: ("Tessian outbound 4d8113405d55:v71");
 Wed, 03 Feb 2021 17:15:02 +0000
Received: from a6cc49a97c2a.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 83011BE2-D7DE-490E-9E25-3A1674217C4B.1; 
 Wed, 03 Feb 2021 17:14:47 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id a6cc49a97c2a.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 03 Feb 2021 17:14:47 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com (2603:10a6:10:49::10)
 by DBBPR08MB6172.eurprd08.prod.outlook.com (2603:10a6:10:1f4::6) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3805.24; Wed, 3 Feb
 2021 17:14:45 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::f5c1:9694:9263:d90d]) by DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::f5c1:9694:9263:d90d%2]) with mapi id 15.20.3805.027; Wed, 3 Feb 2021
 17:14:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3b668b52-59b7-4391-86a3-39d0926cf479
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zzN44KkCJ3krwqY6/PWdHUONNqbR+L3AonRvJq6pmlA=;
 b=QaSmsqsg1FBeEcwbMf63DysKFVwibwxQiHpAhxK3h8LtB4PT72sO/ztiZ+oF3wOlpLNCo0QH4JBrE+0CPI6qtSIUPIE8gevHQscSll3scnP8D8k0rta4UvfomsxxEdFCFp0URlGpzAQ0IK0OS1WFS9x2YZQRfrRpqtbrR2gK/+o=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 05a47b3c064fad7f
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=A7Zg04LMF6akPsGdAXXSuh6UXe1Ms3NYV4JV+Z/wLph+iXjPetip1V+bklbdtK2WjJ3SBLms6+Mm7XbziSwwj41mqJsNSJqkDQT4zY9lugI6fza8pZfjmDzL+Yz6jSsTm1f/UvAkv1/5dhjxfppnomhkLaMNGHSo9Sjpz/GkbQgi9yDEbFbLPI6Vkj1QpyFdHi7DqINV6yjK288EesNoYNYayFRojrssy92kk9YLMhpH2jRilo1CoqHV2mQDu1/deyNuL/mZgd+FbNrLlStG7zi68vrclgeAnE+4WRjJgHLy19FI8xhtITKQXR7lmelEE5H4S3QJyTbfb+n8GYaDmA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zzN44KkCJ3krwqY6/PWdHUONNqbR+L3AonRvJq6pmlA=;
 b=LX2Jb9ozk3x0+31MiOP1VrD5+QCFjDdULD/css5w/DmVaurk2EBvIlUVaEjHuWJF2XGcKz1juJH4BJbXmnvEj8TsBh5Dww3PFAt8DaUHhFeuN5REs4rKX8IJSNAO4A0pSBfVMSoTRvqAHmvFaKvrPcs3bC8rzir5UH8IHYrpU0HXsPCojBjM2G2gIomFEwnTd7LApF2o9XXBRHyJBkS9roQqxBA6OpGex3DjovDVdE1gfe6UVmbX9HMlGe5bhLKIhSNqOAstnc/8pKWrHG+AywctYrwWaxy6gr6L1kR4VPpLMvkZ3zUx/BuA+kuf3H6C+3SirzlI7OHb+IKMWZOdng==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zzN44KkCJ3krwqY6/PWdHUONNqbR+L3AonRvJq6pmlA=;
 b=QaSmsqsg1FBeEcwbMf63DysKFVwibwxQiHpAhxK3h8LtB4PT72sO/ztiZ+oF3wOlpLNCo0QH4JBrE+0CPI6qtSIUPIE8gevHQscSll3scnP8D8k0rta4UvfomsxxEdFCFp0URlGpzAQ0IK0OS1WFS9x2YZQRfrRpqtbrR2gK/+o=
From: Rahul Singh <Rahul.Singh@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: xen-devel <xen-devel@lists.xenproject.org>, Julien Grall <julien@xen.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, "Volodymyr_Babchuk@epam.com"
	<Volodymyr_Babchuk@epam.com>, "brian.woods@xilinx.com"
	<brian.woods@xilinx.com>
Subject: Re: [PATCH v3 0/3] Generic SMMU Bindings
Thread-Topic: [PATCH v3 0/3] Generic SMMU Bindings
Thread-Index: AQHW9DbnaDtcsjSAAUiKSXHzYBU+iapFEnqAgAAbJICAAYoWgA==
Date: Wed, 3 Feb 2021 17:14:45 +0000
Message-ID: <86C6469B-2E50-41A0-A667-73E1E37E5C32@arm.com>
References: <alpine.DEB.2.21.2101261435550.2568@sstabellini-ThinkPad-T480s>
 <C094E054-885F-4363-ABF3-E0FB4DDD7A2A@arm.com>
 <alpine.DEB.2.21.2102020937480.29047@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2102020937480.29047@sstabellini-ThinkPad-T480s>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [80.1.41.211]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 33071f18-173f-4df9-127b-08d8c8673e65
x-ms-traffictypediagnostic: DBBPR08MB6172:|VI1PR08MB4302:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR08MB43029654B779C1C481C02585FCB49@VI1PR08MB4302.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:6790;OLM:6790;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <288BC99701E45342AE002B52E05287C7@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB6172
Original-Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT018.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	2e12a96a-ba57-4cbf-c9bc-08d8c867345c
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Feb 2021 17:15:02.6863
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 33071f18-173f-4df9-127b-08d8c8673e65
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT018.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB4302

Hello Stefano,

> On 2 Feb 2021, at 5:44 pm, Stefano Stabellini <sstabellini@kernel.org> wrote:
>
> On Tue, 2 Feb 2021, Rahul Singh wrote:
>> Hello Stefano,
>>
>>> On 26 Jan 2021, at 10:58 pm, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>>
>>> Hi all,
>>>
>>> This series introduces support for the generic SMMU bindings to
>>> xen/drivers/passthrough/arm/smmu.c.
>>>
>>> The last version of the series was
>>> https://marc.info/?l=xen-devel&m=159539053406643
>>>
>>> I realize that it is late for 4.15 -- I think it is OK if this series
>>> goes in afterwards.
>>
>> I tested the series on the Juno board and it is working fine.
>> While testing this series I found one issue in the SMMU driver; it is
>> not related to this series but an already existing issue in the driver.
>>
>> If there is more than one device behind an SMMU and they share the same
>> Stream-ID, the SMMU driver creates a new SMR entry without checking
>> whether an SMR entry for that Stream-ID is already configured. Because
>> of this I observed stream match conflicts on the Juno board:
>>
>> (XEN) smmu: /iommu@7fb30000: Unexpected global fault, this could be serious
>> (XEN) smmu: /iommu@7fb30000: 	GFSR 0x00000004, GFSYNR0 0x00000006, GFSYNR1 0x00000000, GFSYNR2 0x00000000
>>
>> The following two patches from the Linux driver need to be ported to
>> Xen to fix the issue:
>>
>> https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/drivers/iommu/arm-smmu.c?h=linux-5.8.y&id=1f3d5ca43019bff1105838712d55be087d93c0da
>> https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/drivers/iommu/arm-smmu.c?h=linux-5.8.y&id=21174240e4f4439bb8ed6c116cdbdc03eba2126e
>
>
> Good catch and thanks for the pointers! Do you have any interest in
> backporting these two patches or should I put them on my TODO list?

Yes, I am happy to backport these patches to Xen.
I will send the patch for review once the 4.15 release branch is cut from master.

Regards,
Rahul

>
> Unrelated to who does the job, we should discuss if it makes sense to
> try to fix the bug for 4.15. The patches don't seem trivial so I am
> tempted to say that it might be best to leave the bug unfixed for 4.15
> and fix it later.
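The conflict described above comes down to find-or-reuse logic over the Stream Match Register table. The following is an illustrative, self-contained model of that technique, not the actual Linux or Xen driver code; names such as `smr_get` and the toy `struct smr` are invented for the sketch:

```c
#include <assert.h>
#include <stdint.h>

#define SMR_INVALID (-1)

/* Toy model of a Stream Match Register table entry. */
struct smr {
    uint16_t mask, id;
    int valid;
    int count;      /* how many devices share this entry */
};

/* Return the index of an SMR already matching (mask, id), else the
 * first free slot, else SMR_INVALID when the table is full. */
static int smr_find(const struct smr *smrs, int n, uint16_t mask, uint16_t id)
{
    int free_idx = SMR_INVALID;

    for (int i = 0; i < n; i++) {
        if (!smrs[i].valid) {
            if (free_idx == SMR_INVALID)
                free_idx = i;
            continue;
        }
        if (smrs[i].mask == mask && smrs[i].id == id)
            return i;   /* reuse: avoids a duplicate stream match */
    }
    return free_idx;
}

/* Allocate-or-share an SMR. Without the smr_find() check, a second
 * device with the same Stream-ID would program a duplicate entry,
 * which is what triggers the stream match conflict (global fault). */
static int smr_get(struct smr *smrs, int n, uint16_t mask, uint16_t id)
{
    int i = smr_find(smrs, n, mask, id);

    if (i == SMR_INVALID)
        return SMR_INVALID;
    if (!smrs[i].valid) {
        smrs[i].mask = mask;
        smrs[i].id = id;
        smrs[i].valid = 1;
    }
    smrs[i].count++;
    return i;
}
```

A second request for the same (mask, id) pair returns the existing slot with its refcount bumped, rather than programming a conflicting duplicate.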



From xen-devel-bounces@lists.xenproject.org Wed Feb 03 17:16:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 17:16:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80980.148710 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7Llv-0005lp-Kl; Wed, 03 Feb 2021 17:16:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80980.148710; Wed, 03 Feb 2021 17:16:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7Llv-0005li-Gs; Wed, 03 Feb 2021 17:16:55 +0000
Received: by outflank-mailman (input) for mailman id 80980;
 Wed, 03 Feb 2021 17:16:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7Llt-0005lb-SB
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 17:16:53 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7Llt-0004dj-PB
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 17:16:53 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7Llt-0008OR-O1
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 17:16:53 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l7Llq-0005XX-EB; Wed, 03 Feb 2021 17:16:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=W+T1v1TOuUdK1kF5e2BVX7zV5OzlU3rVeTz1+HvQYAs=; b=lOfF2bFtZmPtXpdr4gMaSapYRp
	35diEc0bam5AO3eYIs9dy2QKFfv+hyoGJczYVOqujjd5XorR0rbXxPNtC4uWBnWvbSmgo4g+ovLkU
	quB/6KUychceeU0vgjpGqBQy3wXwB0yvIRiMjxsdKbwDUyQxQ3RVy9JBUmOFQCFrDdL0=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24602.55938.208939.124502@mariner.uk.xensource.com>
Date: Wed, 3 Feb 2021 17:16:50 +0000
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
    Wei Liu <wl@xen.org>,
    Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
    Manuel Bouyer <bouyer@netbsd.org>
Subject: Re: [PATCH for-4.15 1/2] libs/foreignmem: Drop useless and/or misleading logging
In-Reply-To: <20210203163750.7564-1-andrew.cooper3@citrix.com>
References: <20210203163750.7564-1-andrew.cooper3@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Andrew Cooper writes ("[PATCH for-4.15 1/2] libs/foreignmem: Drop useless and/or misleading logging"):
> These log lines are all in response to single system calls, and do not provide
> any information which the immediate caller can't determine themselves.  It is
> however exceedingly rude to put junk like this onto stderr, especially as
> system call failures are not even error conditions in certain circumstances.
> 
> The FreeBSD logging has stale function names in, and solaris shouldn't have
> passed code review to start with.
> 
> No functional change.

Thanks.

Reviewed-by: Ian Jackson <iwj@xenproject.org>
Release-Acked-by: Ian Jackson <iwj@xenproject.org>

>          int saved_errno = errno;
> -        PERROR("XXXXXXXX");
> +

That's particularly wtf...

Ian.


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 17:18:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 17:18:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80981.148721 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7LnV-0005tQ-VL; Wed, 03 Feb 2021 17:18:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80981.148721; Wed, 03 Feb 2021 17:18:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7LnV-0005tJ-SI; Wed, 03 Feb 2021 17:18:33 +0000
Received: by outflank-mailman (input) for mailman id 80981;
 Wed, 03 Feb 2021 17:18:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7LnT-0005tC-W0
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 17:18:31 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7LnT-0004fL-V8
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 17:18:31 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7LnT-00005M-UL
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 17:18:31 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l7LnQ-0005Y3-PR; Wed, 03 Feb 2021 17:18:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=AvmqkPSIncWKobbYCr2mpGNBcQljOX/hM9cFApdEL1Q=; b=m6QYCLOuriBzV/5XePJavAJG/O
	lOj4TnukZy1KfAAIYYBKe17/AZau48Nm0eSsN033w5I9/AWifgoaQibhHurejjgcG6S6rk57aAUsj
	b4g496xVrLuXS/kC+rZTXbCuTYCQa6zOXpXXj47SOG4euPJay93pYzPRxOnYeQP/sMqo=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24602.56036.522316.35851@mariner.uk.xensource.com>
Date: Wed, 3 Feb 2021 17:18:28 +0000
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
    Wei Liu <wl@xen.org>,
    Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
    Manuel Bouyer <bouyer@netbsd.org>
Subject: Re: [PATCH for-4.15 2/2] libs/foreignmem: Fix/simplify errno handling for map_resource
In-Reply-To: <20210203163750.7564-2-andrew.cooper3@citrix.com>
References: <20210203163750.7564-1-andrew.cooper3@citrix.com>
	<20210203163750.7564-2-andrew.cooper3@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Andrew Cooper writes ("[PATCH for-4.15 2/2] libs/foreignmem: Fix/simplify errno handling for map_resource"):
> Simplify the FreeBSD logic, and duplicate it for NetBSD as the userspace ABI
> appears to be to consistently provide EOPNOTSUPP for missing Xen/Kernel
> support.
> 
> The Linux logic was contorted in what appears to be a deliberate attempt to
> skip the now-deleted logic for the EOPNOTSUPP case.  Simplify it.

AFAICT this is a mixture of cleanup/refactoring, and bugfix.  Is that
correct ?

> diff --git a/tools/libs/foreignmemory/netbsd.c b/tools/libs/foreignmemory/netbsd.c
> index 565682e064..c0b1b8f79d 100644
> --- a/tools/libs/foreignmemory/netbsd.c
> +++ b/tools/libs/foreignmemory/netbsd.c
> @@ -147,6 +147,9 @@ int osdep_xenforeignmemory_map_resource(
>      rc = ioctl(fmem->fd, IOCTL_PRIVCMD_MMAP_RESOURCE, &mr);
>      if ( rc )
>      {
> +        if ( errno == ENOSYS )
> +            errno = EOPNOTSUPP;
> +
>          if ( fres->addr )
>          {
>              int saved_errno = errno;

Specifically, I guess this is the bugfix ?
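For readers following along, the `saved_errno` idiom visible in the quoted context preserves the primary failure code across cleanup calls that may themselves overwrite errno. A generic sketch of the pattern, not the foreignmemory code itself (`cleanup_may_clobber_errno` is a hypothetical stand-in):

```c
#include <errno.h>

/* Stand-in for a cleanup call (munmap(), close(), ...) that may
 * clobber errno on its way out. */
static void cleanup_may_clobber_errno(void)
{
    errno = EINVAL;
}

/* Fail an operation with EIO, run cleanup, and still report EIO. */
static int failing_op_with_cleanup(void)
{
    errno = EIO;                      /* pretend the ioctl failed */

    {
        int saved_errno = errno;      /* snapshot the real cause */

        cleanup_may_clobber_errno();  /* may overwrite errno */
        errno = saved_errno;          /* restore it for the caller */
    }
    return -1;
}
```

The caller then sees the original EIO rather than whatever the cleanup path happened to leave behind.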

Ian.


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 17:20:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 17:20:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80984.148733 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7Lov-0006Ct-AP; Wed, 03 Feb 2021 17:20:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80984.148733; Wed, 03 Feb 2021 17:20:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7Lov-0006Cm-7V; Wed, 03 Feb 2021 17:20:01 +0000
Received: by outflank-mailman (input) for mailman id 80984;
 Wed, 03 Feb 2021 17:20:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7Lou-0006Cg-0e
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 17:20:00 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7Lot-0004iK-W5
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 17:19:59 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7Lot-0000Bb-V1
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 17:19:59 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l7Loq-0005YW-Nk; Wed, 03 Feb 2021 17:19:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=AasWy0jCq3vSAVendwvm0UcUa9vzgAR1nBfHlYFPYco=; b=WOXu5JfI8iTgKkXQtKjcPsPec1
	9KabKVx1obFY0/yAsLGP6vW7tuW6Na9TSouQVPd7v0BwjPMQtP2QBAZjgaijiKtD8iALNCSemDbrh
	EXTTfYqNvmax1TKtHNkBzGVXciY2nqrzy123Sk4hv2z9OE4IBg1q5CjOmpmlcWRU6wTw=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24602.56124.395139.528377@mariner.uk.xensource.com>
Date: Wed, 3 Feb 2021 17:19:56 +0000
To: Manuel Bouyer <bouyer@netbsd.org>
Cc: xen-devel@lists.xenproject.org,
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH] add a qemu-ifup script on NetBSD
In-Reply-To: <20210203165421.1550-1-bouyer@netbsd.org>
References: <20210203165421.1550-1-bouyer@netbsd.org>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Manuel Bouyer writes ("[PATCH] add a qemu-ifup script on NetBSD"):
> On NetBSD, qemu-xen will use a qemu-ifup script to set up the tap interfaces
> (as qemu-xen-traditional used to). Copy the script from qemu-xen-traditional,
> and install it on NetBSD. While there, document parameters and environment
> variables.

Release-Acked-by: Ian Jackson <iwj@xenproject.org>

> +++ b/tools/hotplug/NetBSD/qemu-ifup
> @@ -0,0 +1,9 @@
> +#!/bin/sh
> +
> +# called by qemu when an HVM domU is started.
> +# first parameter is the tap interface, second is the bridge name
> +# the environment variable $XEN_DOMAIN_ID contains the domU's ID,
> +# which can be used to retrieve extra parameters from the xenstore.
> +
> +ifconfig $1 up
> +exec /sbin/brconfig $2 add $1

Acked-by: Ian Jackson <iwj@xenproject.org>

Ian.


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 17:24:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 17:24:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80987.148746 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7LtO-00073i-Uf; Wed, 03 Feb 2021 17:24:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80987.148746; Wed, 03 Feb 2021 17:24:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7LtO-00073b-Qy; Wed, 03 Feb 2021 17:24:38 +0000
Received: by outflank-mailman (input) for mailman id 80987;
 Wed, 03 Feb 2021 17:24:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7LtN-00073W-Ny
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 17:24:37 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7LtN-0004ll-KY
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 17:24:37 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7LtN-0000aQ-EU
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 17:24:37 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l7LtK-0005ZH-AG; Wed, 03 Feb 2021 17:24:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=A874jRM+JuyBHXdJiKm1Rbp+44N3dVhe2nB49IAhQHc=; b=XwVY9wb8pFcdf6xYY2XM8Aiv+F
	MrE86WX+6yjGR9Qi4Uodg/j6Z4WHlRP2m+CqkThZI37EJLnWStXvl+oaxjRN8SMIM8Vc3ClMok5cW
	r8tSXtGbGjf9p9XfPpd/M5xTCxUjhE+7jsnmQIwScgLXBw7iCZiVkPMWEyxXDmPrhS0A=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24602.56402.42441.687037@mariner.uk.xensource.com>
Date: Wed, 3 Feb 2021 17:24:34 +0000
To: Manuel Bouyer <bouyer@netbsd.org>
Cc: xen-devel@lists.xenproject.org,
    Wei Liu <wl@xen.org>,
    Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH] xenstored: close socket connections on error
In-Reply-To: <20210203165421.1550-2-bouyer@netbsd.org>
References: <20210203165421.1550-1-bouyer@netbsd.org>
	<20210203165421.1550-2-bouyer@netbsd.org>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Manuel Bouyer writes ("[PATCH] xenstored: close socket connections on error"):
> On error, don't keep socket connections in the ignored state; close them.
> When the remote end of a socket is closed, xenstored will flag it as an
> error and switch the connection to ignored. But on some OSes (e.g.
> NetBSD), poll(2) will return only POLLIN in this case, so sockets in the
> ignored state will stay open forever in xenstored (and it will loop at
> 100% CPU).

Juergen, I think you probably know this code the best.  Would you be
able to review this ?

I'm not sure I understand what the specific behaviour on NetBSD is
that is upsetting xenstored.  Or rather, what it is that xenstored is
using to tell when the socket should be closed.

I grepped for POLLERR and nothing came up.

Or to put it another way, is this commit

> Fixes: d2fa370d3ef9cbe22d7256c608671cdcdf6e0083

broken on Linux too ?
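The portability point under discussion, that a peer close may surface as plain POLLIN rather than POLLERR/POLLHUP, can be demonstrated with a small standalone sketch. This is illustrative only, not xenstored code; `peer_closed` is an invented helper:

```c
#include <poll.h>
#include <sys/socket.h>
#include <unistd.h>

/* Return 1 if the peer has closed fd, 0 otherwise.  The portable
 * closure test is a zero-length read on a POLLIN-ready descriptor;
 * only some OSes additionally report POLLERR/POLLHUP. */
static int peer_closed(int fd)
{
    struct pollfd p = { .fd = fd, .events = POLLIN };

    if (poll(&p, 1, 0) <= 0)
        return 0;                     /* nothing pending */
    if (p.revents & (POLLERR | POLLHUP))
        return 1;                     /* reported on some OSes only */
    if (p.revents & POLLIN) {
        char c;

        /* MSG_PEEK so real data is not consumed; EOF reads as 0. */
        return recv(fd, &c, 1, MSG_PEEK) == 0;
    }
    return 0;
}
```

A loop that treats POLLIN as "readable" without ever performing the read is exactly what can spin forever on a dead peer.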

Ian.


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 17:26:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 17:26:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80988.148757 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7Lv0-0007Ab-9Q; Wed, 03 Feb 2021 17:26:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80988.148757; Wed, 03 Feb 2021 17:26:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7Lv0-0007AU-68; Wed, 03 Feb 2021 17:26:18 +0000
Received: by outflank-mailman (input) for mailman id 80988;
 Wed, 03 Feb 2021 17:26:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=c0lb=HF=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l7Luz-0007AP-3v
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 17:26:17 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1e23f8b3-68cb-463a-9059-8af6f2fd6dae;
 Wed, 03 Feb 2021 17:26:15 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1e23f8b3-68cb-463a-9059-8af6f2fd6dae
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612373175;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=6y+IWSCPuht68Kt1rqKSydh47wdhXdI6U/4bh8ybAUE=;
  b=L59SFOWmqSFayv8LeWWehsErss8ASb39iflWZsgNgzkFemuCzIYV867R
   gxbH6ZguWJZ6F6ome5nxGv2YNpagH9Q9JIuqDUJrHJK4zrSHE43Z6MNan
   /e+ZNMxREipWDnOljvgJRJNW6Kfgcd9LdzPY/v6ahAOn5Ib2RvyqHimGE
   0=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 36862857
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,399,1602561600"; 
   d="scan'208";a="36862857"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=lFfDnu08iuf3HYulGPOLXa/kFbNPgV6faieoDOqqYU7PEC/JtZpCxNvXTFOr2gk+mCfmre6rQLJLTtjyqDO9b0W+h6n4WnZIm/9AUIymCQrsVuZqmB4aHzceIYCMC2HW7Cfd7ksnp+kjRy6U/EPBlexIsQFVTzRAT7oY7Kv3CUlb2r5g3xXKPUPBhrk2/g5Afj5C9HuMCK1RDFy/i+Qzi9uwG63hDthea7GcOnn2/AkkO6boE5+ZlwS2PMetPSFXh8omLrcRJDb+RJs1e4qc3Rio+eied9ea5XjF8jkZXZHAX3fIfcLJRLHXbOB7clWpIxKRSQrXtm3NNXBp4MOacg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+JLhdE93bHB9SpQ6drUbuNUQHREO/ZYr9kTsziNRNYk=;
 b=Q1He+dBrcEn83HBdHmxBNhicRI3xUDoYGPGLzdPFNmk1pqSqOBqRK4uaNIUfklGbs5sIH9DrV17oFHviTwh5SWaAA8miSA64SXI+BS9Xtg0vyJYSCoaI+2oda+vryPS4Ur+DSJumNjTkoqg0iKdZ+QQJC1X5WlvN6mMcsi6NecMmNeN1Wk0gbhsGNWjbYXC5WSU0Oz9XsJUe0XS+g5zxE8draN22r9l+OjJd6IT3uiuyIPlqnlquIM3+jNM4mZo4/41aduIg8pv2sc/QeZrf4VYMWALBrXhNgsUZ4z1d8cE9TEP3oYQH5j8/9IJ8MTUJOBIpLnZtsD+9UcsI82wcjQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+JLhdE93bHB9SpQ6drUbuNUQHREO/ZYr9kTsziNRNYk=;
 b=IGvDPUglywypcaeln/CU/7bjmCr5uHhnu4xYL3klMDQkJnUqHbMgix16sdJhyAADB5XDAY2Xt0j3tfJZRmWbIPWS6WsEWNtTjyJl4CLes/n6xcA62NDDXGsUP2P4/WmJ2467BfbwWvtac45tIESnN2tg7j4CQBactnRTyFRzAHE=
Subject: Re: [PATCH for-4.15 1/2] libs/foreignmem: Drop useless and/or
 misleading logging
To: Ian Jackson <iwj@xenproject.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Manuel Bouyer
	<bouyer@netbsd.org>
References: <20210203163750.7564-1-andrew.cooper3@citrix.com>
 <24602.55938.208939.124502@mariner.uk.xensource.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <684c5f26-64ac-77cf-4b24-8a83376f8f70@citrix.com>
Date: Wed, 3 Feb 2021 17:26:01 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
In-Reply-To: <24602.55938.208939.124502@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: LNXP265CA0036.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:5c::24) To BYAPR03MB4728.namprd03.prod.outlook.com
 (2603:10b6:a03:13a::24)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 9e75b243-12b7-4933-ca9d-08d8c868ca6b
X-MS-TrafficTypeDiagnostic: BYAPR03MB4535:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB453540628B83BDF5C528BB5CBAB49@BYAPR03MB4535.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7691;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 9e75b243-12b7-4933-ca9d-08d8c868ca6b
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB4728.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Feb 2021 17:26:07.3269
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: gsUQlPtZgzuqfXmluRLZO6GGFmPVl0SF/cNjnpNxLJW8FMl6sFsWz1nqXPym8QDnzxozMnxjiEAhFc535gnVThqexSQi5CEu93Xlr78mMxA=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4535
X-OriginatorOrg: citrix.com

On 03/02/2021 17:16, Ian Jackson wrote:
> Andrew Cooper writes ("[PATCH for-4.15 1/2] libs/foreignmem: Drop useless and/or misleading logging"):
>> These log lines are all in response to single system calls, and do not provide
>> any information which the immediate caller can't determine themselves.  It is
>> however exceedingly rude to put junk like this onto stderr, especially as
>> system call failures are not even error conditions in certain circumstances.
>>
>> The FreeBSD logging has stale function names in, and the Solaris logging
>> shouldn't have passed code review to start with.
>>
>> No functional change.
> Thanks.
>
> Reviewed-by: Ian Jackson <iwj@xenproject.org>
> Release-Acked-by: Ian Jackson <iwj@xenproject.org>

Thanks,

>
>>          int saved_errno = errno;
>> -        PERROR("XXXXXXXX");
>> +
> That's particularly wtf...

My thoughts exactly.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 17:26:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 17:26:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80989.148770 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7LvW-0007G7-Ig; Wed, 03 Feb 2021 17:26:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80989.148770; Wed, 03 Feb 2021 17:26:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7LvW-0007G0-FU; Wed, 03 Feb 2021 17:26:50 +0000
Received: by outflank-mailman (input) for mailman id 80989;
 Wed, 03 Feb 2021 17:26:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7LvV-0007Fr-6R
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 17:26:49 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7LvV-0004p9-5j
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 17:26:49 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7LvV-0000gl-4q
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 17:26:49 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l7LvQ-0005Zq-Ca; Wed, 03 Feb 2021 17:26:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:CC:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=MvQxSNbrKt2Zb6XufKKBOCD/1HuYdGafTzpPSfhEj3c=; b=AsNCNLLhwzzmwN6qeGfyfW8237
	sUA00NBUyL+FQfM/bhskL7NXdM7SIWFk5QAnp1LTPmKb3xb6Ka1E07+wUKXh6mv3iRapa19zbNI5Y
	VQnrRy2Te3Nf1MzgLJgVw3L1paKFAdNaW617NwZlyVHVdWQciDZzJZSZWcIS/R4t1IAY=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24602.56532.169889.71270@mariner.uk.xensource.com>
Date: Wed, 3 Feb 2021 17:26:44 +0000
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
CC: Manuel Bouyer <bouyer@netbsd.org>,
    xen-devel@lists.xenproject.org,
    Elena Ufimtseva <elena.ufimtseva@oracle.com>,
    Ian Jackson <iwj@xenproject.org>,
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH v3] NetBSD: use system-provided headers
In-Reply-To: <20210203165421.1550-4-bouyer@netbsd.org>
References: <20210203165421.1550-1-bouyer@netbsd.org>
	<20210203165421.1550-4-bouyer@netbsd.org>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Manuel Bouyer writes ("[PATCH v3] NetBSD: use system-provided headers"):
> +#ifdef __NetBSD__
> +#include <xen/xenio.h>
> +#else
>  #include <xen/sys/privcmd.h>
> +#endif
>  #include <xen/foreign/x86_32.h>
>  #include <xen/foreign/x86_64.h>
Manuel, thanks.  I think this is a bugfix and ought in principle to go
in, but I think we probably want to do this with configure rather than
ad-hoc ifdefs.

Roger, what do you think ?  Were you going to add a configure test for
the #ifdef that we put in earlier ?

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 17:27:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 17:27:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80990.148781 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7LwN-0007NS-Sz; Wed, 03 Feb 2021 17:27:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80990.148781; Wed, 03 Feb 2021 17:27:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7LwN-0007NL-Pu; Wed, 03 Feb 2021 17:27:43 +0000
Received: by outflank-mailman (input) for mailman id 80990;
 Wed, 03 Feb 2021 17:27:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=c0lb=HF=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l7LwN-0007NF-3k
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 17:27:43 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 649045c1-783e-4110-a551-7e53b0df63d7;
 Wed, 03 Feb 2021 17:27:41 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 649045c1-783e-4110-a551-7e53b0df63d7
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612373261;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=NjYpEzNQazpJGj4a3dfpfbot4Kja9+8s7lhf+XKS/zY=;
  b=cXuNMZF75TYNbLQZys17ZnCpE6t9iaMLLGP3NroL2Gf1Kr1FBIMVg6q9
   dJF8JRA0UEhZ1AAXU4kJ96SJomquXZD3szJpukjXb9h1ULycmDuZ6QRFY
   2eB+jnVie+gLQkuUsYVSjinXz5E9GPYNAlSRHChBAPO3Mf/KixGuaiX90
   c=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: uQqKJc+TQayzNJHo8/PCrIwLEsdjrYb/PfC6PZA9BUNiw1x5/GCiSlo7KmvQNaFKMNuRboZwvS
 mfGpl5KkMyjCHJoVIlrtE0QIJG9eFa50i1CXrXf71zfedKBAbkUe3FtVG5wESUzKMQUTnZqKJI
 uufGXDVBJGvLyD4TG2Wvo9y1m2CTXc9OBWAdvBnzph37m1rA5n522JUBrnPHqATFTQ7ODiQJ/E
 xTj5zL7RFuUnTMYXnTFWk6kw20S3bCZ9NIt6CBEP6SiWbbOC2ABTJ5Ysjk29iiEdvhh2ywUwpy
 Ajg=
X-SBRS: 5.2
X-MesageID: 36482864
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,399,1602561600"; 
   d="scan'208";a="36482864"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=SEDqIKplDlH+FvFgVMaU5pVF/Yb4uVIUjwX5+zBuIvvtN3m42xCrT1BN5Ws4JTR0KWfKUDhZ7UrRw3+jir8y+3WoF1u9TVgZPoz5KvXG3FM6fw5lYzcWcFtICRg562cAaIOTk+Nv8qtPuhVMEZXY8Vj2t4y+YOYy/71j+iz/RwJDLEWoVYk4jqfcPOxgag4lYV4eXQTVSTbsdjsxGcHteP6Q6Bfl0zEbR3Rk+Y6twby2ZgXOqCthQu3aPdX9s1qTklreyvPJnOxqWBd5ob3TkIcdB8LV12b4AfHyxvx65n7nxLiK+ceVX50CcbxoFIRlEuORccBumO7ZSj3kfs4XqA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=4EkyiE2hrhjR80MlJZncWxzBHVWtp46zqd3s+A0L+Y0=;
 b=NNiLHwqGjEZ4RWcy0hOvwFKGnwWJ4aysePF/hwTXJe+61m3rYJvjwPtt509zkT/bGD+S10SJCOm0PU/3P0lT/F+rjCvSni2PSFcObAG8RtKEzZq9p65p4dgHYjzVeL+fndUQi2S6sta5VXw11mmnvEgKoY38aCiORWrdPMI4Ge7MvXGee3X97QwL3s87z/K5dx2dKUV6NnyGcps3JqZksk+lTENBJ07mFqWv5Ffd97FzPc1DNPlf+o8ZIj8kQs+NupxX8hd94KBqZ9k3s40CbOzDe+1CYpDOEFYh4njoJtdrHXQAg144xnuQdvaOPH+iH53me3naGy42BJSBtO1Qzg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=4EkyiE2hrhjR80MlJZncWxzBHVWtp46zqd3s+A0L+Y0=;
 b=X/LvIBLRUNRivrZvLH78f0jzh9ElMW072tZoqAyVA6J22ERbuQ7hd8iOhWhMxP8RO87Hf4J94uediMo5BAS+8j+Fa4pE3UsottMKCt3TsGSZ+XF7/vXeo2F5HKiJXVbn+VXyghzdpVmriUEWSah+iy42jCkyq/O3kdg+ps8c6wg=
Subject: Re: [PATCH for-4.15 2/2] libs/foreignmem: Fix/simplify errno handling
 for map_resource
To: Ian Jackson <iwj@xenproject.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Manuel Bouyer
	<bouyer@netbsd.org>
References: <20210203163750.7564-1-andrew.cooper3@citrix.com>
 <20210203163750.7564-2-andrew.cooper3@citrix.com>
 <24602.56036.522316.35851@mariner.uk.xensource.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <2a895f2c-76ec-4636-9a99-20ad3522936e@citrix.com>
Date: Wed, 3 Feb 2021 17:27:01 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
In-Reply-To: <24602.56036.522316.35851@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: LNXP265CA0033.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:5c::21) To BYAPR03MB4728.namprd03.prod.outlook.com
 (2603:10b6:a03:13a::24)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: b2ea18bb-d9ba-4bd9-ace3-08d8c868ee73
X-MS-TrafficTypeDiagnostic: BY5PR03MB5061:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BY5PR03MB5061383ABD49641EFC9141B1BAB49@BY5PR03MB5061.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: oeI2bV97lg5CAuBXflePzHz2Keygzc4RLzsLsEqKpaTDeXqFozY8KS09NLG9BqM6T4wfB1wVxcS9jJg31DWUOYrlvcKMsHdoi4MHD+gavc/hVWQqHjOP6x4Kf+VZ4fBPZHJSXEqlV+L9Td8fePk1ANVRdf2HW0pGCs1ZTWhNfKMlXb2359vA+fNDIYx4eWuW9ImNqH2LsfHWVGBwu8h/ITEa2leE45EyVPiuW3FLPueGyez+PwdJCMA/4jKkGPir/Tx/mhuLcvMEuuYnQy3z0jvTS7/oFrRSrQ4TmsbOJxiS9TlM+q3SMg7WojhOW9FzUBGtwaawhkaLaY3XjHlI12FLOEzUWq+RY2abMrKVxjZ5eJHmrEJyK8TkxDxD9BhpUZZ9s7H/+yZugvN+5tJN1rUsLWCkGF44QhqUIR+tB9PCXENESQbN1xJoLGrtJit3yq4OOW2Ve9z2Mw/IPAPZd8sM1c9Dd1Az5vC1BNfF9msAmFXXVm2RFu4a23fsqIrICAekw9Halg9aeZG8H2+c0VrbH2gYYcePa89m21SEGXur6KddHU0csq9hfvthRGmLqT6dHlvSVPvF+/nmJyU2gFttM15eC/kbecIRYX992uk=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB4728.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39860400002)(376002)(346002)(366004)(136003)(396003)(6666004)(316002)(54906003)(36756003)(31686004)(16526019)(26005)(2906002)(8936002)(83380400001)(6486002)(8676002)(16576012)(5660300002)(31696002)(86362001)(478600001)(6916009)(53546011)(4326008)(956004)(66476007)(66556008)(186003)(2616005)(66946007)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?YTdjMWdGWmU1djBHeXZjWXdiNGw5UWx4dG1ReFZRZFNyV002dlpPakEraHNS?=
 =?utf-8?B?N2JnWmEvek9tS3d4VHVlS3VLUlYvM2cwMFZ3anFrQXZYWmYxN2NTOE5WRmFJ?=
 =?utf-8?B?SllUQk5oMHYrbzZzVko0cE1BYUpBb2F2OEJrYjlVOHVRZHZ0VlZkN3VxMVhm?=
 =?utf-8?B?R1J6NWNOeXYxK2JJOC90L002RzFIWFNGL0VOSXZoU1hyN3NiczJmelN0bnV6?=
 =?utf-8?B?azk5UHRVL0FTTW4zSnNFNkRzWE1OMVFDckU3bE5EcGYxbXhrSmdKUUFWN0p5?=
 =?utf-8?B?bTNMRzNjVVh3SndvcTYyeFdVbFNaUVpZS0RhRHM5YktoT2tjU3QvK1JSRWpn?=
 =?utf-8?B?WS9xaWNtOXpPV0svUWU5RmVOeVM5R0lWcEl3RDkyRHBmVnZZNVZFdG0yUFRI?=
 =?utf-8?B?OU8xeWpzNTQ2SVl5azVkMUhwOGNGckVKV3RWN2sxY0pXMUowSjRsd2x3VFo0?=
 =?utf-8?B?bzduNGtaTjRucVdVUjVFb1lsbHhYNGFWd3lhTDZVSDFxcTVQTTIydDk1WWlY?=
 =?utf-8?B?M1FpMXRrN3BSZ1FKYWo0V1BMTmx4U0sxazBtOVdZRXNxeTVxRHRuT1ZUZzF3?=
 =?utf-8?B?TmVrWWlwNmlLaWhpVFVGM1ZSdlpYUWVOWnJ5bFpwc2t6bUVvb1p1ZkVXS0FP?=
 =?utf-8?B?T1Y2MkFsVGw4MWhjVVlOdWF3aEMyODlJSjdUUkVoZGFPTkVHTjlKQmMzNHUr?=
 =?utf-8?B?aFZ2U09ySGl1bnBTYlB1T3FiaGVSOE56YWhqT3NqOFkzWWFJQmxmMTJTOEY1?=
 =?utf-8?B?Q3FWc0w0eDlCbEtQMTVmNXhtNnpmTFU0N0hRS05jdFVLNVl5T1hxUWZ2czI4?=
 =?utf-8?B?cDZSNXdLNHBFWlR2VEhQOHIvTUU5b0ozWXN0Qk1BZVcrcDNSTnJQZUcxTDBz?=
 =?utf-8?B?ZzVHdllNbHZYaGJYMUp5R1MwNXlIbi9BQlRMYk04SkEyWk5PYzhyUFdyZVRZ?=
 =?utf-8?B?azYrYjVSdkdXQmMzcEdGQTB0b25vd0plZHZrLzdXZ3Nqa3gxR2o5WEhmVHpp?=
 =?utf-8?B?U05kTjdHdHpqODA2eCtxblBwSnhNVUxuZkoyRHM0eU56RllDK1BXVXJoM0py?=
 =?utf-8?B?OGxtcDVCWTZwYzltZVNwUUw4VitaN3FTRklKR3k5VkREZEFXeUdaUW85VGtK?=
 =?utf-8?B?UHFHMExuRC9iN0JxWEZvQjRaRFFmVHFEWmZGS3VFeDQ2eWRPdzNPRGpPemVz?=
 =?utf-8?B?NElsRUk5ckhCOVVwbmViR2lsYUE3RFZIV0g0NXAyamhWQ01YUEc1eEg0M3Bk?=
 =?utf-8?B?eWxxSEFLMDVJRXk1bkhvMFZEY0ExcTZhS0hVZWxTczVsQ05adVh5VjBOREcx?=
 =?utf-8?B?M0JxbkY0eDBJUUhFZEh3cVNTQVc1emJDQkptb2MxRUI1K1hHTFpGMzNGakdR?=
 =?utf-8?B?WEpBV1FodU1zSnJ5YTdhYlZnUUEwNjQ4Q2tYWVJyUWRTWjZzWDlmUUgzeXAx?=
 =?utf-8?Q?BJDVw2Ow?=
X-MS-Exchange-CrossTenant-Network-Message-Id: b2ea18bb-d9ba-4bd9-ace3-08d8c868ee73
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB4728.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Feb 2021 17:27:07.6996
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Cd6wZUx+o/h6E5RYoMxNbaIN/jj60DRIQf5K/ACTDWmlF+PAUVDSg1bhh+5PytwMUD+SEER3cxvzXCI1/jHbpxIBX4iFp6TfU2O2retFJbA=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR03MB5061
X-OriginatorOrg: citrix.com

On 03/02/2021 17:18, Ian Jackson wrote:
> Andrew Cooper writes ("[PATCH for-4.15 2/2] libs/foreignmem: Fix/simplify errno handling for map_resource"):
>> Simplify the FreeBSD logic, and duplicate it for NetBSD as the userspace ABI
>> appears to be to consistently provide EOPNOTSUPP for missing Xen/Kernel
>> support.
>>
>> The Linux logic was contorted in what appears to be a deliberate attempt to
>> skip the now-deleted logic for the EOPNOTSUPP case.  Simplify it.
> AFAICT this is a mixture of cleanup/refactoring, and bugfix.  Is that
> correct ?
>
>> diff --git a/tools/libs/foreignmemory/netbsd.c b/tools/libs/foreignmemory/netbsd.c
>> index 565682e064..c0b1b8f79d 100644
>> --- a/tools/libs/foreignmemory/netbsd.c
>> +++ b/tools/libs/foreignmemory/netbsd.c
>> @@ -147,6 +147,9 @@ int osdep_xenforeignmemory_map_resource(
>>      rc = ioctl(fmem->fd, IOCTL_PRIVCMD_MMAP_RESOURCE, &mr);
>>      if ( rc )
>>      {
>> +        if ( errno == ENOSYS )
>> +            errno = EOPNOTSUPP;
>> +
>>          if ( fres->addr )
>>          {
>>              int saved_errno = errno;
> Specifically, I guess this is the bugfix ?

It is a bugfix on NetBSD (for brand new functionality), and cleanup on
FreeBSD/Linux specifically split out of the previous patch.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 17:36:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 17:36:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80998.148801 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7M4Z-0008Uw-6K; Wed, 03 Feb 2021 17:36:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80998.148801; Wed, 03 Feb 2021 17:36:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7M4Y-0008Ul-Vr; Wed, 03 Feb 2021 17:36:10 +0000
Received: by outflank-mailman (input) for mailman id 80998;
 Wed, 03 Feb 2021 17:36:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=c0lb=HF=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l7M4Y-0008Tz-5S
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 17:36:10 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8b008ad3-6a96-418a-91a1-c75a1770c9a8;
 Wed, 03 Feb 2021 17:36:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8b008ad3-6a96-418a-91a1-c75a1770c9a8
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612373768;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=g+a0zjEE5v0WInnTQX+eiSAhYh9ndjnBsYijZ422tKA=;
  b=fs8xPXZKpH1sZfIJlXXr+ldYqu/g/tiqtxvJDMOXUddIMOtmmgbBk6hy
   WM5ZlfpvNSeUQDxVxMmT9INJl83SCUz6kCsAJtjzmKnG5HjO35MN54khp
   nE++qJZROaJZ8+jn0EUrIEZ2Y75sB3Ofo9GF0p11FC+rKug1WnZm2uuZq
   k=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: +x0IaZ3Lv3ThQBPVJscpHNRMxzB/BYwmXJhwj5OFmM6hJOqn45xZqQCsWXfPxdtUitE8b+bm0r
 YOKqsNPk7vmN1z3mTSA+c/oNaMdzlREZdwzF1aKNdeHVqh1HXe/rs3UxGM1NCbI/L2lHHI6Pdx
 EblhuLgnJO5COnFhalVab6pGjV+lo00qV+TVJXOLSgE6D2FgfVFah6car82xxYk4GAlclweyQT
 Idq1iFqQaucakg53vqqKN+aBawbcP1E6tJcyhzb8VpK4TJuQWqW+Ln7Hzus+Xw3te1q04myv6T
 ues=
X-SBRS: 4.0
X-MesageID: 36517500
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,399,1602561600"; 
   d="scan'208";a="36517500"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>, "Christian
 Lindig" <christian.lindig@citrix.com>, Ian Jackson <iwj@xenproject.org>, "Wei
 Liu" <wl@xen.org>
Subject: [PATCH 3/3] tools/oxenstored: mkdir conflicts were sometimes missed
Date: Wed, 3 Feb 2021 17:35:49 +0000
Message-ID: <20210203173549.21159-4-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210203173549.21159-1-andrew.cooper3@citrix.com>
References: <20210203173549.21159-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

From: Edwin Török <edvin.torok@citrix.com>

Due to how set_write_lowpath was used here, it didn't detect create/delete
conflicts.  When we create an entry, we must mark its parent as modified
(this is what creating a new node via write does).

Otherwise two transactions, one creating and one deleting a node, can both
succeed depending on timing.  Or one transaction can read an entry, conclude
it doesn't exist, do some other work based on that information, and
successfully commit, even though another transaction creates the node via
mkdir in the meantime.

Signed-off-by: Edwin Török <edvin.torok@citrix.com>
---
CC: Christian Lindig <christian.lindig@citrix.com>
CC: Ian Jackson <iwj@xenproject.org>
CC: Wei Liu <wl@xen.org>
---
 tools/ocaml/xenstored/transaction.ml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/ocaml/xenstored/transaction.ml b/tools/ocaml/xenstored/transaction.ml
index 25bc8c3b4a..17b1bdf2ea 100644
--- a/tools/ocaml/xenstored/transaction.ml
+++ b/tools/ocaml/xenstored/transaction.ml
@@ -165,7 +165,7 @@ let write t perm path value =
 
 let mkdir ?(with_watch=true) t perm path =
 	Store.mkdir t.store perm path;
-	set_write_lowpath t path;
+	set_write_lowpath t (Store.Path.get_parent path);
 	if with_watch then
 		add_wop t Xenbus.Xb.Op.Mkdir path
 
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed Feb 03 17:36:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 17:36:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80997.148794 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7M4Y-0008UR-Qz; Wed, 03 Feb 2021 17:36:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80997.148794; Wed, 03 Feb 2021 17:36:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7M4Y-0008UK-Nw; Wed, 03 Feb 2021 17:36:10 +0000
Received: by outflank-mailman (input) for mailman id 80997;
 Wed, 03 Feb 2021 17:36:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=c0lb=HF=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l7M4X-0008Tu-Jr
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 17:36:09 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 58c0bd68-f56d-4cab-82e4-e7127bb1aace;
 Wed, 03 Feb 2021 17:36:07 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 58c0bd68-f56d-4cab-82e4-e7127bb1aace
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612373767;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=889AXvG19V0DSakGW3KQHUHk1PEmlP7uNjR4YWM4jWA=;
  b=JPJkjgP1C0rWovARN0J3PoSLv3GHDmwG3eoFZ3pVgZiPhNTFqa4w3L7P
   C6nAFHSxvNHJASwj2WkWLYsd99tFF5JMa9EofSurPvGwKbTwVLna2npQI
   Gj4bJXjW2fWV4V/CPS4SsE/I9KKdLMydRk7Cgm7OhfsRrLaLkVagZ6f9q
   0=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: GHlpCchcGiZaGyMFSLcqrjrMkMAx7YnISs1C2/f8fe3gDyooaOdOelMf6aFFDzwG8iadmKZNs8
 edjC4SEnD/fIPDmb6pWVjnHXl9bwREqYnNI3ghAepI7EHHhVlueIX9LlvooRxXZOo8Yz8+pNOF
 WlD4C1aRO0OaOl6X+tWv6tUdQ9UM8DHrt9zsgphtpUa0KYYImq6jFrsmmZ2pOwJoZPebOwU6sw
 nqqrc51lUpiQJ9nc/N0a7a5DshMrEWJ2lm4ekDbFu8zN7wRRanPskcnx6alPgLCrIoqqcE1z1R
 Nrk=
X-SBRS: 4.0
X-MesageID: 36863742
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,399,1602561600"; 
   d="scan'208";a="36863742"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>, "Christian
 Lindig" <christian.lindig@citrix.com>, Ian Jackson <iwj@xenproject.org>, "Wei
 Liu" <wl@xen.org>
Subject: [PATCH 1/3] tools/oxenstored: Fix quota calculation for mkdir EEXIST
Date: Wed, 3 Feb 2021 17:35:47 +0000
Message-ID: <20210203173549.21159-2-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210203173549.21159-1-andrew.cooper3@citrix.com>
References: <20210203173549.21159-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

From: Edwin Török <edvin.torok@citrix.com>

We increment the domain's quota on mkdir even when the node already exists.
This leads to a quota inconsistency after live update, where reconstructing
the tree from scratch yields a different quota.

Not a security issue, because the domain uses up quota faster, so it will
only get a Quota error sooner than it should.

Found by the structured fuzzer.

Signed-off-by: Edwin Török <edvin.torok@citrix.com>
---
CC: Christian Lindig <christian.lindig@citrix.com>
CC: Ian Jackson <iwj@xenproject.org>
CC: Wei Liu <wl@xen.org>
---
 tools/ocaml/xenstored/store.ml | 1 +
 1 file changed, 1 insertion(+)

diff --git a/tools/ocaml/xenstored/store.ml b/tools/ocaml/xenstored/store.ml
index 1bd0c81f6f..20e67b1427 100644
--- a/tools/ocaml/xenstored/store.ml
+++ b/tools/ocaml/xenstored/store.ml
@@ -419,6 +419,7 @@ let mkdir store perm path =
 	(* It's upt to the mkdir logic to decide what to do with existing path *)
 	if not (existing || (Perms.Connection.is_dom0 perm)) then Quota.check store.quota owner 0;
 	store.root <- path_mkdir store perm path;
+	if not existing then
 	Quota.add_entry store.quota owner
 
 let rm store perm path =
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed Feb 03 17:36:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 17:36:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.80999.148818 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7M4e-00006x-DL; Wed, 03 Feb 2021 17:36:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 80999.148818; Wed, 03 Feb 2021 17:36:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7M4e-00006k-97; Wed, 03 Feb 2021 17:36:16 +0000
Received: by outflank-mailman (input) for mailman id 80999;
 Wed, 03 Feb 2021 17:36:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=c0lb=HF=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l7M4c-0008Tu-G1
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 17:36:14 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a0a5488c-c076-4082-9369-f2956828899f;
 Wed, 03 Feb 2021 17:36:09 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a0a5488c-c076-4082-9369-f2956828899f
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612373769;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=iWbegkhTjcNGUgdqrYfEshcnIaVsgGnEz2AcHo7KIcY=;
  b=dKnUUku5tD558XMnKtqYFR/yWgpORCxlgd5cmicMt+a2WacXzsdJ8VnZ
   BT4xDcMSqulSCtfoFSSJx8OE4sZCONk0j2S5LF5Dt3CFBcPMN0txFJHAd
   ncqyUHU34UymYQB5kspHkLEEs+AKvJ/dqPtyJgJMxp52P13SGdFnaXOCL
   U=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: r32Bzh2LpWWAZ/gaiDa31aRZLbOBH++7daewSEVhnMNKhqNahmiNye6xlvnoCzRBknZer2IpQK
 hcEK48uQm3BS4Nm9eainJwLBP3PCkmiQZBBkQXkcqBxo0pvgXMfPsy4+bQD1J1u9O+VLVc3xNJ
 Lz0HbvjqamrM68n9AaDdEPjjcST734fxuVLmW0oPdX0kz0nhnm2D1IEviqPIfiBUUsWsKogXK6
 BaU8/gv0lE2n8i6XAY5gDPAvEItv5YW5KXaEgYQRrb9/uP2arfCfe6r4cm0iUlNaRRUYpwRfuX
 DSo=
X-SBRS: 4.0
X-MesageID: 36863743
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,399,1602561600"; 
   d="scan'208";a="36863743"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Christian Lindig
	<christian.lindig@citrix.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>
Subject: [PATCH for-4.15 0/3] tools/oxenstored bugfixes
Date: Wed, 3 Feb 2021 17:35:46 +0000
Message-ID: <20210203173549.21159-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

All of these have been posted before, but were tangled in other content which is
not appropriate for 4.15 any more.  As a consequence, I didn't get around to
committing them before the code freeze.

They were all found with unit testing, specifically fuzzing the
serialising/deserialising logic introduced for restartability, asserting that
the tree before and after was identical.
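
The core property the fuzzer checks can be sketched as follows (an
illustrative Python toy, not oxenstored's actual OCaml code; the flat
{path: value} tree representation and the function names are invented
for this sketch):

```python
# Toy sketch of the fuzzer's roundtrip property: serialise the store
# tree, deserialise it again, and assert the result is identical to
# the original tree.

def serialise(tree):
    # One "path=value" line per node, sorted for determinism.
    return "\n".join(f"{path}={value}" for path, value in sorted(tree.items()))

def deserialise(dump):
    tree = {}
    for line in dump.splitlines():
        path, _, value = line.partition("=")
        tree[path] = value
    return tree

def roundtrip_ok(tree):
    # The property asserted by the fuzzer: pre- and post-serialisation
    # trees must be identical.
    return deserialise(serialise(tree)) == tree
```

Any input tree for which this property fails points at a bug in either
the serialiser or the deserialiser, which is how the three fixes below
were found.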

The unit testing/fuzzing content isn't suitable for 4.15, but these bugfixes
want backporting to all releases, and should therefore be considered for 4.15
at this point.

Edwin Török (3):
  tools/oxenstored: Fix quota calculation for mkdir EEXIST
  tools/oxenstored: Reject invalid watch paths early
  tools/oxenstored: mkdir conflicts were sometimes missed

 tools/ocaml/xenstored/connection.ml  | 5 ++---
 tools/ocaml/xenstored/connections.ml | 4 +++-
 tools/ocaml/xenstored/store.ml       | 1 +
 tools/ocaml/xenstored/transaction.ml | 2 +-
 4 files changed, 7 insertions(+), 5 deletions(-)

-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed Feb 03 17:36:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 17:36:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81000.148823 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7M4e-00007q-Op; Wed, 03 Feb 2021 17:36:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81000.148823; Wed, 03 Feb 2021 17:36:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7M4e-00007V-I4; Wed, 03 Feb 2021 17:36:16 +0000
Received: by outflank-mailman (input) for mailman id 81000;
 Wed, 03 Feb 2021 17:36:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=c0lb=HF=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l7M4d-0008Tz-2W
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 17:36:15 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id adc4e1d1-95bd-4935-8ece-b9fabf13efa6;
 Wed, 03 Feb 2021 17:36:10 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: adc4e1d1-95bd-4935-8ece-b9fabf13efa6
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612373770;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=jygmUT9ZgGv9XOH9leiyk/f2jyGpdMf+qKEjhDs7RYI=;
  b=hUAZxBfXhjv4XwruRSQv/O1Zb5KU5RL9xPfHW++fdsduOt1Ik+3sfxhh
   eQqkoH2TJ2PHDsP6USL/UkW/EYZdYBiFRSDt+ew8QCARlTVaWJhAVOE6S
   63iRut9ukT6//s4sTxFWVNpPBCjOpi4mT+VT1hIwjgFOcaAdGcBBgCq5a
   g=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: ps/T2ELTmh5sbe8wJPIhhRDNfIPLPal8p3N/07kJv29melQyOj0h8RWyJIhLIdUTpR4XNNS/7Z
 7ziBh1l1UWLsB7UWnqDf3f6heXzFR1dPKUSMhBc6fjHlkqspctyKwG98mSNKUqXpKFPg+GjQHo
 gHFHONUbOP7V9S3HoZHqjyxMr3TYgPWRcPpB15Rk7ZEvn6TwKdyOuhpvJh1BsE6Eb7qp3ACYQD
 qNN11o2T1Aki7W63juGTtXGA1EOEBuHow7MdUFIEMfHlY/nhQvFElpmsfFSY+PKb7PD3BpetBf
 /+8=
X-SBRS: 4.0
X-MesageID: 36863744
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,399,1602561600"; 
   d="scan'208";a="36863744"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>, "Christian
 Lindig" <christian.lindig@citrix.com>, Ian Jackson <iwj@xenproject.org>, "Wei
 Liu" <wl@xen.org>
Subject: [PATCH 2/3] tools/oxenstored: Reject invalid watch paths early
Date: Wed, 3 Feb 2021 17:35:48 +0000
Message-ID: <20210203173549.21159-3-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210203173549.21159-1-andrew.cooper3@citrix.com>
References: <20210203173549.21159-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

From: Edwin Török <edvin.torok@citrix.com>

Watches on invalid paths were accepted, but they would never trigger.  The
client also got no notification that its watch was bad.

Found again by the structured fuzzer, due to an error on live update reload:
the invalid watch paths would get rejected during live update and the list of
watches would be different pre/post live update.

The test case is a watch on `//`, which is an invalid path.
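
The class of path being rejected can be sketched like this (a
simplified Python check for illustration only; oxenstored's real
validation is OCaml and also enforces length quotas and other rules):

```python
def is_valid_watch_path(path):
    # Special watches such as @releaseDomain are always acceptable.
    if path.startswith("@"):
        return len(path) > 1
    # An ordinary watch path must be absolute and must not contain
    # empty components: "//" splits into ['', '', ''] and is rejected.
    if not path.startswith("/"):
        return False
    if path == "/":
        return True
    return all(component != "" for component in path.split("/")[1:])
```

Running this check before the watch is registered (rather than never)
is the behaviour the patch moves towards: the client gets an error up
front instead of a watch that silently never fires.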

Signed-off-by: Edwin Török <edvin.torok@citrix.com>
---
CC: Christian Lindig <christian.lindig@citrix.com>
CC: Ian Jackson <iwj@xenproject.org>
CC: Wei Liu <wl@xen.org>
---
 tools/ocaml/xenstored/connection.ml  | 5 ++---
 tools/ocaml/xenstored/connections.ml | 4 +++-
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/tools/ocaml/xenstored/connection.ml b/tools/ocaml/xenstored/connection.ml
index d09a0fa405..65f99ea6f2 100644
--- a/tools/ocaml/xenstored/connection.ml
+++ b/tools/ocaml/xenstored/connection.ml
@@ -158,18 +158,17 @@ let get_children_watches con path =
 let is_dom0 con =
 	Perms.Connection.is_dom0 (get_perm con)
 
-let add_watch con path token =
+let add_watch con (path, apath) token =
 	if !Quota.activate && !Define.maxwatch > 0 &&
 	   not (is_dom0 con) && con.nb_watches > !Define.maxwatch then
 		raise Quota.Limit_reached;
-	let apath = get_watch_path con path in
 	let l = get_watches con apath in
 	if List.exists (fun w -> w.token = token) l then
 		raise Define.Already_exist;
 	let watch = watch_create ~con ~token ~path in
 	Hashtbl.replace con.watches apath (watch :: l);
 	con.nb_watches <- con.nb_watches + 1;
-	apath, watch
+	watch
 
 let del_watch con path token =
 	let apath = get_watch_path con path in
diff --git a/tools/ocaml/xenstored/connections.ml b/tools/ocaml/xenstored/connections.ml
index 8a66eeec3a..3c7429fe7f 100644
--- a/tools/ocaml/xenstored/connections.ml
+++ b/tools/ocaml/xenstored/connections.ml
@@ -114,8 +114,10 @@ let key_of_path path =
 	"" :: Store.Path.to_string_list path
 
 let add_watch cons con path token =
-	let apath, watch = Connection.add_watch con path token in
+	let apath = Connection.get_watch_path con path in
+	(* fail on invalid paths early by calling key_of_str before adding watch *)
 	let key = key_of_str apath in
+	let watch = Connection.add_watch con (path, apath) token in
 	let watches =
  		if Trie.mem cons.watches key
  		then Trie.find cons.watches key
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed Feb 03 17:39:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 17:39:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81001.148841 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7M7M-0000hk-8s; Wed, 03 Feb 2021 17:39:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81001.148841; Wed, 03 Feb 2021 17:39:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7M7M-0000hd-5b; Wed, 03 Feb 2021 17:39:04 +0000
Received: by outflank-mailman (input) for mailman id 81001;
 Wed, 03 Feb 2021 17:39:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7M7L-0000gg-1A
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 17:39:03 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7M7K-000522-VO
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 17:39:02 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7M7K-0001UW-Tm
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 17:39:02 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l7M7H-0005cJ-Mc; Wed, 03 Feb 2021 17:38:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:To:Date:
	Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=auFCto3O62Vvvpe0SQoM+WERRBM/Mb/CReAMXBAiD7U=; b=asbFQMrN6XSwRB0acKuiu1Yciw
	xIL2ShJnEwbqGMH0MnUtjTixWPRXKQSgb9PpQ5F/+pzQa/yS2JhFmKIwsAZqyOV9EX5aBb/tm3Rt/
	ZIeTKhueIjRAW0rfIW01H0W2Gtivz66TrEApDNiixieJZdkaQsTpTRi4k8ApcmhgJLRw=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24602.57267.471477.281218@mariner.uk.xensource.com>
Date: Wed, 3 Feb 2021 17:38:59 +0000
To: Manuel Bouyer <bouyer@netbsd.org>,
    xen-devel@lists.xenproject.org,
    Wei Liu <wl@xen.org>,
    Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH] xenstored: close socket connections on error
In-Reply-To: <24602.56402.42441.687037@mariner.uk.xensource.com>
References: <20210203165421.1550-1-bouyer@netbsd.org>
	<20210203165421.1550-2-bouyer@netbsd.org>
	<24602.56402.42441.687037@mariner.uk.xensource.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Ian Jackson writes ("Re: [PATCH] xenstored: close socket connections on error"):
> Manuel Bouyer writes ("[PATCH] xenstored: close socket connections on error"):
> > On error, don't keep socket connections in the ignored state but close them.
> > When the remote end of a socket is closed, xenstored will flag it as an
> > error and switch the connection to ignored. But on some OSes (e.g.
> > NetBSD), poll(2) will return only POLLIN in this case, so sockets in ignored
> > state will stay open forever in xenstored (and it will loop at 100% CPU).
> 
> Juergen, I think you probably know this code the best.  Would you be
> able to review this ?
> 
> I'm not sure I understand what the specific behaviour on NetBSD is
> that is upsetting xenstored.  Or rather, what it is that xenstored is
> using to tell when the socket should be closed.
> 
> I grepped for POLLERR and nothing came up.
> 
> Or to put it another way, is this commit
> 
> > Fixes: d2fa370d3ef9cbe22d7256c608671cdcdf6e0083
> 
> broken on Linux too ?

Andy pointed me to the recent thread "xenstored file descriptor leak"
which answers all these questions.  I think it would have been nice if
some tools maintainer(s) had been CC'd on that :-).

Juergen, I guess I will get a formal R-b from you ?

Release-Acked-by: Ian Jackson <iwj@xenproject.org>


Manuel, in response to this:

> When I started, I looked at the wiki for instructions about
> patches, but didn't find any ...

Earlier I offered you help with git, in private email.  I agree that
git is confusing and sometimes impenetrable.  But it seems that what
you are doing now is worse!  Please take me up on my offer of help.

Our wiki doesn't give instructions on how to use git to maintain a
patch series.  Those instructions would not be Xen-specific.  Perhaps
we could have a pointer or two, but everyone has their own pet methods
and tooling, so the result might be more confusing than helpful.

Regards,
Ian.


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 17:40:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 17:40:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81006.148853 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7M98-0001VA-Kq; Wed, 03 Feb 2021 17:40:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81006.148853; Wed, 03 Feb 2021 17:40:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7M98-0001V3-Hk; Wed, 03 Feb 2021 17:40:54 +0000
Received: by outflank-mailman (input) for mailman id 81006;
 Wed, 03 Feb 2021 17:40:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7M97-0001Uy-0A
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 17:40:53 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7M96-00054g-VX
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 17:40:52 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7M96-0001bU-Un
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 17:40:52 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l7M93-0005cv-ON; Wed, 03 Feb 2021 17:40:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=+pPG7fWcaxATMKgpOrR36TfBtaqoEQHGcUgT6kBeq6A=; b=IIHMTnrtd4B4U+fqcRI6PXl+11
	ZfW97+uwemjhHi9IorbRZZSK+BTYW4lEcqzzb6WAlPE1Hs8f4/cIakuziCazIDadH92m2FegqxtuW
	mzXyNmaMiEcsq5F+SQ5bdkFtzfhz23X+8GWFwTnL60/57dcknqMd94HxCqXvUETbny6U=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 8bit
Message-ID: <24602.57377.524847.811116@mariner.uk.xensource.com>
Date: Wed, 3 Feb 2021 17:40:49 +0000
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
    Edwin =?iso-8859-1?Q?T=F6r=F6k?= <edvin.torok@citrix.com>,
    "Christian  Lindig" <christian.lindig@citrix.com>,
    "Wei  Liu" <wl@xen.org>
Subject: Re: [PATCH 1/3] tools/oxenstored: Fix quota calculation for mkdir EEXIST
In-Reply-To: <20210203173549.21159-2-andrew.cooper3@citrix.com>
References: <20210203173549.21159-1-andrew.cooper3@citrix.com>
	<20210203173549.21159-2-andrew.cooper3@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Andrew Cooper writes ("[PATCH 1/3] tools/oxenstored: Fix quota calculation for mkdir EEXIST"):
> From: Edwin Török <edvin.torok@citrix.com>
> 
> We increment the domain's quota on mkdir even when the node already exists.
> This results in a quota inconsistency after live update, where reconstructing
> the tree from scratch results in a different quota.
> 
> Not a security issue because the domain uses up quota faster, so it will only
> get a Quota error sooner than it should.
> 
> Found by the structured fuzzer.
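
The corrected behaviour can be paraphrased in a few lines (hypothetical
Python using a dict for the store and a per-domain counter; the real
code is OCaml and the names here are invented):

```python
# Sketch of the fix: mkdir charges the domain's node quota only when a
# node is actually created. The EEXIST case creates nothing, so it must
# not change the count; otherwise rebuilding the tree on live update
# would arrive at a different quota than the running daemon holds.

def mkdir(store, quota, domid, path):
    if path in store:
        return False          # EEXIST: no new node, quota unchanged
    store[path] = ""
    quota[domid] = quota.get(domid, 0) + 1
    return True
```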

Thanks for these.  They look like straightforward bugfixes, so they
don't need a release ack, but FTR

Release-Acked-by: Ian Jackson <iwj@xenproject.org>

I don't feel qualified to give a maintainer-ack...

Ian.


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 17:41:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 17:41:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81007.148866 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7MA0-0001bc-Vr; Wed, 03 Feb 2021 17:41:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81007.148866; Wed, 03 Feb 2021 17:41:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7MA0-0001bV-S6; Wed, 03 Feb 2021 17:41:48 +0000
Received: by outflank-mailman (input) for mailman id 81007;
 Wed, 03 Feb 2021 17:41:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7M9z-0001bQ-FX
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 17:41:47 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7M9z-00055W-Em
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 17:41:47 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7M9z-0001dk-E1
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 17:41:47 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l7M9w-0005dL-99; Wed, 03 Feb 2021 17:41:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=ayb0T0Fvvd9i7BGUWA3i7gIWyFpIhf+Ysfr4i8JpAKk=; b=QG/9M8uLpyh0sS9llvb90f7hEP
	1h4uDQltGHbhCg6626b70lx0Y+YgXFl5R3iIKKw3YpF+C476st0tUJT3lTwSLrPSWJnJ9Dq4+fZRi
	XVk8+8ZZglUNXwNr38RG21G5hMLlozAfDRh4rNndOdbEIKUnHcEnPuWzGxMAQe9WVTTI=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24602.57432.3548.55315@mariner.uk.xensource.com>
Date: Wed, 3 Feb 2021 17:41:44 +0000
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
    Christian Lindig <christian.lindig@citrix.com>,
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH for-4.15 0/3] tools/oxenstored bugfixes
In-Reply-To: <20210203173549.21159-1-andrew.cooper3@citrix.com>
References: <20210203173549.21159-1-andrew.cooper3@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Andrew Cooper writes ("[PATCH for-4.15 0/3] tools/oxenstored bugfixes"):
> All of these have been posted before, but were tangled in other content which is
> not appropriate for 4.15 any more.  As a consequence, I didn't get around to
> committing them before the code freeze.
> 
> They were all found with unit testing, specifically fuzzing the
> serialising/deserialising logic introduced for restartability, asserting that
> the tree before and after was identical.
> 
> The unit testing/fuzzing content isn't suitable for 4.15, but these bugfixes
> want backporting to all releases, and should therefore be considered for 4.15
> at this point.

I just gave my

Release-Acked-by: Ian Jackson <iwj@xenproject.org>

in that other mail.  FTAOD that applies to all three.

Christian, would you be able to do a maintainer review ?

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 17:44:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 17:44:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81010.148878 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7MCI-0001lX-C3; Wed, 03 Feb 2021 17:44:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81010.148878; Wed, 03 Feb 2021 17:44:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7MCI-0001lQ-8n; Wed, 03 Feb 2021 17:44:10 +0000
Received: by outflank-mailman (input) for mailman id 81010;
 Wed, 03 Feb 2021 17:44:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1l7MCH-0001lL-5t
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 17:44:09 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l7MCG-00057l-PU; Wed, 03 Feb 2021 17:44:08 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l7MCG-0001t7-Af; Wed, 03 Feb 2021 17:44:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=ibIgOyTY5/w/hahocjYsAt+ck6Orz6+VxIGYAsMk5x0=; b=uO2g5HBEJUoK9gOgWrTrcurcBA
	vc+3XTQy27aiCiE02gwsRVCS8qjl+0Q6q4Lt91YjAI+WXCmc25YmUBXtTbu8ZYFrtBZqMXgtL5hEJ
	YqNKWIc4DMWFCxWJg3pQesFTkbtMidxuHECDou2WVnzWk7a7/aJGqrRFnZocUI6rCSUY=;
Subject: Re: [PATCH] xen/arm: domain_build: Ignore device nodes with invalid
 addresses
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Elliott Mitchell <ehem+xen@m5p.com>, xen-devel@lists.xenproject.org,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <YBmQQ3Tzu++AadKx@mattapan.m5p.com>
 <a422c04c-f908-6fb6-f2de-fea7b18a6e7d@xen.org>
 <b6d342f8-c833-db88-9808-cdc946999300@xen.org>
 <alpine.DEB.2.21.2102021412480.29047@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <06d6b9ec-0db9-d6da-e30b-df9f9381157d@xen.org>
Date: Wed, 3 Feb 2021 17:44:05 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2102021412480.29047@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit



On 03/02/2021 00:18, Stefano Stabellini wrote:
> On Tue, 2 Feb 2021, Julien Grall wrote:
>> On 02/02/2021 18:12, Julien Grall wrote:
>>> On 02/02/2021 17:47, Elliott Mitchell wrote:
>>>> The handle_device() function has been returning failure upon
>>>> encountering a device address which was invalid.  A device tree which
>>>> had such an entry has now been seen in the wild.  As it causes no
>>>> failures to simply ignore the entries, ignore them.
>>>>
>>>> Signed-off-by: Elliott Mitchell <ehem+xenn@m5p.com>
>>>>
>>>> ---
>>>> I'm starting to suspect there are an awful lot of places in the various
>>>> domain_build.c files which should simply ignore errors.  This is now the
>>>> second place I've encountered in 2 months where ignoring errors was the
>>>> correct action.
>>>
>>> Right, as a counterpoint, we have run Xen on Arm HW for several years now
>>> and this is the first time I have heard about an issue parsing the DT. So
>>> while I appreciate that you are eager to run Xen on the RPI...
>>>
>>>> I know failing in case of error is an engineer's
>>>> favorite approach, but there seem to be an awful lot of harmless
>>>> failures causing panics.
>>>>
>>>> This started as the thread "[RFC PATCH] xen/arm: domain_build: Ignore
>>>> empty memory bank".  Now it seems clear the correct approach is to simply
>>>> ignore these entries.
>>>
>>> ... we first need to fully understand the issues. Here are a few questions:
>>>     1) Can you provide more information why you believe the address is
>>> invalid?
>>>     2) How does Linux use the node?
>>>     3) Is it happening with all the RPI DT? If not, what are the
>>> differences?
>>
>> So I had another look at the device-tree you provided earlier on. The node is
>> the following (copied directly from the DTS):
>>
>> &pcie0 {
>>          pci@1,0 {
>>                  #address-cells = <3>;
>>                  #size-cells = <2>;
>>                  ranges;
>>
>>                  reg = <0 0 0 0 0>;
>>
>>                  usb@1,0 {
>>                          reg = <0x10000 0 0 0 0>;
>>                          resets = <&reset RASPBERRYPI_FIRMWARE_RESET_ID_USB>;
>>                  };
>>          };
>> };
>>
>> pcie0: pcie@7d500000 {
>>     compatible = "brcm,bcm2711-pcie";
>>     reg = <0x0 0x7d500000  0x0 0x9310>;
>>     device_type = "pci";
>>     #address-cells = <3>;
>>     #interrupt-cells = <1>;
>>     #size-cells = <2>;
>>     interrupts = <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>,
>>                  <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>;
>>     interrupt-names = "pcie", "msi";
>>     interrupt-map-mask = <0x0 0x0 0x0 0x7>;
>>     interrupt-map = <0 0 0 1 &gicv2 GIC_SPI 143
>>                                                       IRQ_TYPE_LEVEL_HIGH>;
>>     msi-controller;
>>     msi-parent = <&pcie0>;
>>
>>     ranges = <0x02000000 0x0 0xc0000000 0x6 0x00000000
>>               0x0 0x40000000>;
>>     /*
>>      * The wrapper around the PCIe block has a bug
>>      * preventing it from accessing beyond the first 3GB of
>>      * memory.
>>      */
>>     dma-ranges = <0x02000000 0x0 0x00000000 0x0 0x00000000
>>                   0x0 0xc0000000>;
>>     brcm,enable-ssc;
>> };
>>
>> The interpretation of "reg" depends on the context. In this case, we are
>> trying to interpret it as a memory address from the CPU PoV when it has a
>> different meaning (I am not exactly sure what).
>>
>> In fact, you are lucky that Xen doesn't manage to interpret it. Xen should
>> really stop trying to look for regions to map when it discovers a PCI bus. I
>> wrote a quick hack patch that should ignore it:
> 
> Yes, I think you are right. There are a few instances where "reg" is not
> an address ready to be remapped. It is not just PCI, although that's the
> most common.  Maybe we need a list, like skip_matches in handle_node.

 From my understanding, "reg" can be considered an MMIO region only if
all the parents up to the root have the property "ranges" and none of
them is on a different bus (e.g. PCI).

Do you have example where this is not the case?

Whether Xen does it correctly is another question :).

> 
> 
>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>> index 374bf655ee34..937fd1e387b7 100644
>> --- a/xen/arch/arm/domain_build.c
>> +++ b/xen/arch/arm/domain_build.c
>> @@ -1426,7 +1426,7 @@ static int __init handle_device(struct domain *d, struct
>> dt_device_node *dev,
>>
>>   static int __init handle_node(struct domain *d, struct kernel_info *kinfo,
>>                                 struct dt_device_node *node,
>> -                              p2m_type_t p2mt)
>> +                              p2m_type_t p2mt, bool pci_bus)
>>   {
>>       static const struct dt_device_match skip_matches[] __initconst =
>>       {
>> @@ -1532,9 +1532,14 @@ static int __init handle_node(struct domain *d, struct
>> kernel_info *kinfo,
>>                  "WARNING: Path %s is reserved, skip the node as we may re-use
>> the path.\n",
>>                  path);
>>
>> -    res = handle_device(d, node, p2mt);
>> -    if ( res)
>> -        return res;
>> +    if ( !pci_bus )
>> +    {
>> +        res = handle_device(d, node, p2mt);
>> +        if ( res)
>> +           return res;
>> +
>> +        pci_bus = dt_device_type_is_equal(node, "pci");
>> +    }
>>
>>       /*
>>        * The property "name" is used to have a different name on older FDT
>> @@ -1554,7 +1559,7 @@ static int __init handle_node(struct domain *d, struct
>> kernel_info *kinfo,
>>
>>       for ( child = node->child; child != NULL; child = child->sibling )
>>       {
>> -        res = handle_node(d, kinfo, child, p2mt);
>> +        res = handle_node(d, kinfo, child, p2mt, pci_bus);
>>           if ( res )
>>               return res;
>>       }
>> @@ -2192,7 +2197,7 @@ static int __init prepare_dtb_hwdom(struct domain *d,
>> struct kernel_info *kinfo)
>>
>>       fdt_finish_reservemap(kinfo->fdt);
>>
>> -    ret = handle_node(d, kinfo, dt_host, default_p2mt);
>> +    ret = handle_node(d, kinfo, dt_host, default_p2mt, false);
>>       if ( ret )
>>           goto err;
>>
>> A less hackish possibility would be to modify dt_number_of_address() and
>> return 0 when the device is a child of a PCI below.
>>
>> Stefano, do you have any opinions?
> 
> Would PCIe even work today? Because if it doesn't, we could just add it
> to skip_matches until we get PCI passthrough properly supported.
PCIe (or PCI) definitely works in dom0 today, but Xen is not aware of the 
hostbridge. So you would break quite a few use cases by skipping the nodes.

> 
> But aside from PCIe, let's say that we know of a few nodes for which
> "reg" needs special treatment. I am not sure it makes sense to proceed
> with parsing those nodes without knowing how to deal with that.

I believe that most of the time the "special" treatment would be to 
ignore the property "reg", as it will not be a CPU memory address.

> So maybe
> we should add those nodes to skip_matches until we know what to do with
> them. At that point, I would imagine we would introduce a special
> handle_device function that knows what to do. In the case of PCIe,
> something like "handle_device_pcie".
Could you outline how "handle_device_pcie()" would differ from handle_node()?

In fact, the problem is not the PCIe node directly. Instead, it is the 
second level of nodes below it (e.g. usb@...).

The current implementation of dt_number_of_address() only looks at the 
bus type of the parent. As the parent has no bus type but does have 
"ranges", it thinks this is something we can translate to a CPU address.

However, this is below a PCI bus so the meaning of "reg" is completely 
different. In this case, we only need to ignore "reg".

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 17:48:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 17:48:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81014.148889 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7MGO-0001vW-36; Wed, 03 Feb 2021 17:48:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81014.148889; Wed, 03 Feb 2021 17:48:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7MGN-0001vP-W9; Wed, 03 Feb 2021 17:48:23 +0000
Received: by outflank-mailman (input) for mailman id 81014;
 Wed, 03 Feb 2021 17:48:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3m8k=HF=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1l7MGN-0001vJ-7S
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 17:48:23 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c29c372e-45da-405d-82a1-ec7707f27b3d;
 Wed, 03 Feb 2021 17:48:20 +0000 (UTC)
Received: from rochebonne.antioche.eu.org (rochebonne [10.0.0.1])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTP id 113HmJS3022557;
 Wed, 3 Feb 2021 18:48:19 +0100 (MET)
Received: by rochebonne.antioche.eu.org (Postfix, from userid 1210)
 id B8330281D; Wed,  3 Feb 2021 18:48:18 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c29c372e-45da-405d-82a1-ec7707f27b3d
Date: Wed, 3 Feb 2021 18:48:11 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Ian Jackson <iwj@xenproject.org>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>,
        Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH] xenstored: close socket connections on error
Message-ID: <20210203174811.GB192@antioche.eu.org>
References: <20210203165421.1550-1-bouyer@netbsd.org>
 <20210203165421.1550-2-bouyer@netbsd.org>
 <24602.56402.42441.687037@mariner.uk.xensource.com>
 <24602.57267.471477.281218@mariner.uk.xensource.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <24602.57267.471477.281218@mariner.uk.xensource.com>
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Wed, 03 Feb 2021 18:48:19 +0100 (MET)

On Wed, Feb 03, 2021 at 05:38:59PM +0000, Ian Jackson wrote:
> > [...]
> > broken on Linux too ?
> 
> Andy pointed me to the recent thread "xenstored file descriptor leak"
> which answers all these questions.  I think it would have been nice if
> some tools maintainer(s) had been CC'd on that :-).

I did use add_maintainers.pl against it (or at least that was my intent).

> 
> Juergen, I guess I will get a formal R-b from you ?
> 
> Release-Acked-by: Ian Jackson <iwj@xenproject.org>
> 
> 
> Manuel, in response to this:
> 
> > When I started, I looked at the wiki for instructions about
> > patches, but didn't find any ...
> 
> Earlier I offered you help with git, in private email.  I agree that
> git is confusing and sometimes impenetrable.  But it seems that what
> you are doing now is worse!  Please take me up on my offer of help.

I didn't forget. It's just that I don't even know what to ask to start
with.  It seems that StGit will help a lot though.

> 
> Our wiki doesn't give instructions on how to use git to maintain a
> patch series.  Those instructions would not be Xen-specific.  Perhaps
> we could have a pointer or two, but everyone has their own pet methods
> and tooling so the result would perhaps be more confusing than
> helpful.

A howto is always helpful. Even if it's not the one and only way
to do it, at least it gives a starting point.

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 17:51:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 17:51:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81016.148902 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7MJD-0002w4-Hh; Wed, 03 Feb 2021 17:51:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81016.148902; Wed, 03 Feb 2021 17:51:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7MJD-0002vx-Ee; Wed, 03 Feb 2021 17:51:19 +0000
Received: by outflank-mailman (input) for mailman id 81016;
 Wed, 03 Feb 2021 17:51:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7MJB-0002vr-SJ
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 17:51:17 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7MJB-0005Gk-R9
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 17:51:17 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7MJB-0002FE-Pz
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 17:51:17 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l7MJ8-0005et-JZ; Wed, 03 Feb 2021 17:51:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=f1PPwg52iFe6CeB8p0HVZUd00ZOwwpd/rT0J5pYLUfc=; b=Nq/PQ0jyy27xFFtiN+exz2YkYl
	tsu5gGOUGpOC0uu3GO527YZOrJy4QhSDEqaCtPcHFAaOCmtmcWSNpIqUVqq2lRfUx/am0ALgoDv/E
	NdMbp6LdqLOajO79iOACaed6o/l4nvuSVYiS2wK2yTL15evqsQrUR97zBJ2CuJvB2eX0=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24602.58002.389945.787614@mariner.uk.xensource.com>
Date: Wed, 3 Feb 2021 17:51:14 +0000
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
    Wei Liu <wl@xen.org>,
    Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
    Manuel Bouyer <bouyer@netbsd.org>
Subject: Re: [PATCH for-4.15 2/2] libs/foreignmem: Fix/simplify errno handling for map_resource
In-Reply-To: <20210203163750.7564-2-andrew.cooper3@citrix.com>
References: <20210203163750.7564-1-andrew.cooper3@citrix.com>
	<20210203163750.7564-2-andrew.cooper3@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Andrew Cooper writes ("[PATCH for-4.15 2/2] libs/foreignmem: Fix/simplify errno handling for map_resource"):
> Simplify the FreeBSD logic, and duplicate it for NetBSD as the userspace ABI
> appears to be to consistently provide EOPNOTSUPP for missing Xen/Kernel
> support.
> 
> The Linux logic was contorted in what appears to be a deliberate attempt to
> skip the now-deleted logic for the EOPNOTSUPP case.  Simplify it.

Release-Acked-by: Ian Jackson <iwj@xenproject.org>

Sorry for my earlier confusion.  I had lost the context between the
two patches.  I will explain my reasoning for the R-A:

For the first two hunks (freebsd.c): these are consequential cleanup
from patch 1/2 of this series.  Splitting this up made this easier to
review and we don't want to leave the rather unfortunate constructs
which arise from some hunks of 1/1.  IOW, the combination of 1/1 plus
the first two hunks here is definitely release-worthy and the split
has helped review.

The final hunk is a straightforward bugfix.

This combination of two completely different kinds of thing is a bit
confusing but now that I have explained it to myself I'm satisfied.

Ian.


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 17:53:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 17:53:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81018.148913 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7MLP-00033a-UR; Wed, 03 Feb 2021 17:53:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81018.148913; Wed, 03 Feb 2021 17:53:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7MLP-00033T-RJ; Wed, 03 Feb 2021 17:53:35 +0000
Received: by outflank-mailman (input) for mailman id 81018;
 Wed, 03 Feb 2021 17:53:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=c0lb=HF=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l7MLO-00033M-GD
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 17:53:34 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0cf56b8d-183e-4496-b642-36e61c393a8c;
 Wed, 03 Feb 2021 17:53:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0cf56b8d-183e-4496-b642-36e61c393a8c
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612374812;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=+3gRgyzegJYh7p68ifCETicxsBKAJlnn3+Zs9hKqvB0=;
  b=aWton2qDz4hx/NxA2oNAvWoKWKNpXBAj2h1YOg0SVayOaPgbjIcPeYnw
   Mk43lHwGkX5T1Ub8Bz1lUi1KuzlTvvuhxKVBHyQ0XgCphFgLepVLHo/fi
   YT69nGJkK/LuFBoDOieg3LrQO77+8QDWdXLgLqARsL99AhjtGOIV8s+aT
   Y=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: 6dAi2n9oqQPgZdbf7QqYdjFl5NVlssJb0Rf9VbtOfWZF4PsNFbTtHzeFXBwfTJaRJy4ZJipFno
 JVWw6ocRfxJomMvug0oRCs6brfzXDlgOl5wY9ySdZIThPgtzD61bdLJ9Xb9olrYtmWnqnyhCQI
 lPW1ZniTbnNqanZbVjz2zl2VKNPkWyweRSdeRG5FdCGJ2SIH1iWOYYqzPLLRt8yelYTUY2xRYx
 e9Su5/GyPQJIfL8PA8z/BqCIXmJjrsF0ivbRsipwN4LMge4Kx3a0vLLDY6bkLPr4mCyTIK7dyn
 dWc=
X-SBRS: 5.2
X-MesageID: 36683542
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,399,1602561600"; 
   d="scan'208";a="36683542"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=aSX2DzvZEqudm1AArxNF4e0AcmZ5WcJUWepkDz7GLdiCE1eqcg1lMdMQjwoWF7TQqMP/ciNzHuMjpMyfSI4d9neTM6hzy8erUo3k5C0X2BnZmwYXKWeYbGYNX3HpMvhWAIWE3qhGqnUGHSxhXwHxQTMNdnK1vkaXzD9KeRZT873+hriAH8Kowv0tY702TcdTivq863lFZ13GS/O0ba4yuvxOlbxbSCDE6Z8QzgJwXH3wta2RT3+b21yJbGLVoR7ZKkXwYUWCBd51RiRHbKhFEcTX365O465DK+WWgaOjK8QwID3cmnJqO8xDlvkPDc/vsbdL7yzJsufKTRX3fhkXGg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=S1RuMOxAsejss2NVD2n4KkH0wh3MGAfBL0OKR5o0nWA=;
 b=QZEd4joBnvx9KZ4MIecP6PRlgbJlXmLAlFYEGPXeNYgFuj6tXDqijn8VQdJOjBlI4sLXcfCBpoQ5MBqHyArd5cZhx6JxR5A69J5ZWrqP3jo0sVqYf+d0mBeun1HGs3TUNDiCwdi+0BWXwGytlnaJ5cAJe4zCHYiBj9LvW9SPWXtGIboUOlLZqjnKKW5YUcSxAGd2ieSHPxKhGLH2xl/EzNxB6QeNNYzbZTV9SrLflVfq7ISoRNuyk2+GnisKy8OHX3L/pK0jdMEWwD1go2OnZ89XW7+7ptupX1vd80t1syVm5AEIRlBNM/OqiwlhY9Ln0CbyPRtUpjs9G6Xwea5/2A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=S1RuMOxAsejss2NVD2n4KkH0wh3MGAfBL0OKR5o0nWA=;
 b=BMGeU4QKCfjJ41/bBfFfZcPViccfpYzNoev80oylF6XDPpZsuiTk3SabPd4NJsosA84a9ZvSx2pGDU0MfMGd5fEGXwv+REM5HhkIcxH1D5gCTBgfiN3s1IIfkqh8yDL3yoJt+yzBBhe/8mzJHqGm2UPIcI2RaQWLkOtZrcTqHAI=
Subject: Re: [PATCH for-4.15 2/2] libs/foreignmem: Fix/simplify errno handling
 for map_resource
To: Ian Jackson <iwj@xenproject.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Manuel Bouyer
	<bouyer@netbsd.org>
References: <20210203163750.7564-1-andrew.cooper3@citrix.com>
 <20210203163750.7564-2-andrew.cooper3@citrix.com>
 <24602.58002.389945.787614@mariner.uk.xensource.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <e6311666-80af-c0f0-ae22-66b4d140e7ea@citrix.com>
Date: Wed, 3 Feb 2021 17:52:58 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
In-Reply-To: <24602.58002.389945.787614@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0015.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:150::20) To BYAPR03MB4728.namprd03.prod.outlook.com
 (2603:10b6:a03:13a::24)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 20007945-bf37-4c70-a58e-08d8c86c8e67
X-MS-TrafficTypeDiagnostic: BYAPR03MB4839:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB48392288381109B97B7B8DB5BAB49@BYAPR03MB4839.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 20007945-bf37-4c70-a58e-08d8c86c8e67
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB4728.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Feb 2021 17:53:04.6654
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: W5VlLGWPavqS+kd4LSrf2e+yBx6VZ0sZ8Q2tlvbxi+YW+VGZuAOjoygWbeEa7qorrXvXfv2utn3Uyqo5s24WAIT4T+SQCl0G0xJicEIaaeY=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4839
X-OriginatorOrg: citrix.com

On 03/02/2021 17:51, Ian Jackson wrote:
> Andrew Cooper writes ("[PATCH for-4.15 2/2] libs/foreignmem: Fix/simplify errno handling for map_resource"):
>> Simplify the FreeBSD logic, and duplicate it for NetBSD as the userspace ABI
>> appears to be to consistently provide EOPNOTSUPP for missing Xen/Kernel
>> support.
>>
>> The Linux logic was contorted in what appears to be a deliberate attempt to
>> skip the now-deleted logic for the EOPNOTSUPP case.  Simplify it.
> Release-Acked-by: Ian Jackson <iwj@xenproject.org>
>
> Sorry for my earlier confusion.  I had lost the context between the
> two patches.  I will explain my reasoning for the R-A:
>
> For the first two hunks (freebsd.c): these are consequential cleanup

FreeBSD and Linux.

> from patch 1/2 of this series.  Splitting this up made this easier to
> review and we don't want to leave the rather unfortunate constructs
> which arise from some hunks of 1/1.  IOW, the combination of 1/1 plus
> the first two hunks here is definitely release-worthy and the split
> has helped review.
>
> The final hunk is a straightforward bugfix.
>
> This combination of two completely different kinds of thing is a bit
> confusing but now that I have explained it to myself I'm satisfied.

Thanks,

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 17:58:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 17:58:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81021.148926 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7MQI-0003E7-IL; Wed, 03 Feb 2021 17:58:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81021.148926; Wed, 03 Feb 2021 17:58:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7MQI-0003E0-Et; Wed, 03 Feb 2021 17:58:38 +0000
Received: by outflank-mailman (input) for mailman id 81021;
 Wed, 03 Feb 2021 17:58:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=L1oB=HF=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1l7MQH-0003Dv-F4
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 17:58:37 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 201e75a8-f5af-46b6-9180-77516bb62f80;
 Wed, 03 Feb 2021 17:58:36 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 201e75a8-f5af-46b6-9180-77516bb62f80
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612375116;
  h=from:to:cc:subject:date:message-id:
   content-transfer-encoding:mime-version;
  bh=trcoQUEy5txRTrkPduMpJ/zANyByYqabtUa0dtboCho=;
  b=KDdEYxdhLW4vp1tnHZ7iGPdgSav+aSLZQrlDE8LN5StQ15d7l4zDxUmB
   Jzih2rUkQfF4jCUjlOD5VKgNp4z1rhl4hIiyFZar0uPltccd57rMjeWA4
   vJySMKuI5pDzWgC9oJhOAWJSfqr0zRG7R/Y7nXquovEPCBl97C9zhVE1U
   g=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: vQIsZrlU8rjQ4RHzpTnfcjUbuRHoAJj4LFKkYEXTrVSMzpFLtqL34QLRmknlQxGDIz+iOeWqh+
 s0hkX6lmFgzw+UwfCdaVZC25ziynybjqyD3C8r1c7sIb5WmHHaTyIwSTl7i4wHtu32nWOBXNui
 Mrep6DLd7vcQtk6TlJRnRpPnwlm4NA5i4yhg6FgX9g2Qgdmfi3aAI+R1uFZHgN3BofHnipNOui
 ADt5cAl2Ou3jYJnuj1nhxWzgzzy0ENYZiIUbzd1dlronveRoIw3ghmAlPW490EKBQGvm3ZJJsV
 8ik=
X-SBRS: 5.2
X-MesageID: 37823703
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,399,1602561600"; 
   d="scan'208";a="37823703"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Y29yYlYHYQ0x01KG4+/4leStVdhwQ48Yk1q/4DdE8q8c5k+LW6/jJGHRRE6hcXUP1FiCdYwNtsL9Yss4eGMxbkzFuYcifxlkfLMH1oIvw0boN94s9zZmT/uVX/U0HZr6/wBlw/8A06vkF12Wr20bmMtYZN7mwViPUxtYke1K8NCrgdr2huZTL8/8ojhFulxnqNkYDoCmqRT6Ikj/KojoTwwx0+6gd2Uq2A/yj7CuqHBNXQXXyIfVlff1F/FnsysJbyL4OPQSaAIdts82669Ms2Rnei7P52MBBhodMKMWN0HWliXfXHyCKewXEhSF6+TJmVgWlINWfUgMHSIbhO7YPg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=KmNkCv8sutdkYan5lMTj0IWcXRQ7sSkRkyHjZ+tlES4=;
 b=Mg8yzhq7V2l1cmX3wwwaq0Fh0gRN9mku506t/DrJ9beWgMKAkIha8GrW2T0QucjrV/uQYaidcOqMX1ANsZNQpbNH/NnOYw/XfoRuHGVl3SrbDgzU8iBddVgQ+ZVkEjGEFcIQRYxCxZ+HrIPsXuXEKvd4XwB9UNTtCH6Wu1aTZyXFx1dCR6RfmdGevH0rQGIU/RHBbpDqhXuuw8BkEhJpjVxr2kS5yF1dNEbmFDjq5V5MBxQ40HZRCB66OwyCvOCvAm010vEozzE4KLugyLPFsSNWuujJYm0t9zPdvHMpolY3jz2grO0sXfZ232TcnKzFxMFnfnFQz8YeIhVr7YOjjA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=KmNkCv8sutdkYan5lMTj0IWcXRQ7sSkRkyHjZ+tlES4=;
 b=sGRPDC2Wz3AtQPfj79s/VI4lqJB7vz0qfNckRpARc9K3/U2tQ3Wz5+BwprGiyDJbovXPa0TGx9YtvgjZ2iXQrsJRxEsNTyzSJdR5wnrf8D08xB6UQ7uoGG6qjXmPD2zOVDW6PDrDrR36mBDQF7b5rz4VHIX0468ulYaOHjSLKUc=
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Roger Pau Monne <roger.pau@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Ian Jackson
	<iwj@xenproject.org>
Subject: [PATCH for-4.15] x86/efi: enable MS ABI attribute on clang
Date: Wed,  3 Feb 2021 18:58:05 +0100
Message-ID: <20210203175805.86465-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.29.2
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: MR2P264CA0090.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:32::30) To SA0PR03MB5610.namprd03.prod.outlook.com
 (2603:10b6:806:b2::9)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 7141ae79-a15e-4322-6d31-08d8c86d4a49
X-MS-TrafficTypeDiagnostic: SA0PR03MB5609:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <SA0PR03MB5609451C893E84377946BB3E8FB49@SA0PR03MB5609.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 7141ae79-a15e-4322-6d31-08d8c86d4a49
X-MS-Exchange-CrossTenant-AuthSource: SA0PR03MB5610.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Feb 2021 17:58:19.8780
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: qhYBoYNw0Auxh/za7PeAyktaeErsja0ptV6IvLJgxu30Twq028CYnZsE39bPMUa3AqhpGdkJlOsQOViou3pZlA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA0PR03MB5609
X-OriginatorOrg: citrix.com

Or else the EFI service calls will use the wrong calling convention.

The __ms_abi__ attribute is available on all supported versions of
clang.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Cc: Ian Jackson <iwj@xenproject.org>

Without this a Xen built with clang won't be able to correctly use the
EFI services, leading to weird messages from the firmware and crashes.
The impact of this fix for GCC users is exactly zero, and it fixes the
EFI calling convention when building with clang.

The biggest fallout from this could be using the attribute on a
compiler that doesn't support it, which would translate into a build
failure, but the gitlab tests have shown no issues.
---
 xen/include/asm-x86/x86_64/efibind.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/include/asm-x86/x86_64/efibind.h b/xen/include/asm-x86/x86_64/efibind.h
index b013db175d..ddcfae07ec 100644
--- a/xen/include/asm-x86/x86_64/efibind.h
+++ b/xen/include/asm-x86/x86_64/efibind.h
@@ -172,7 +172,7 @@ typedef uint64_t   UINTN;
 #ifndef EFIAPI                  // Forces EFI calling conventions reguardless of compiler options
     #ifdef _MSC_EXTENSIONS
         #define EFIAPI __cdecl  // Force C calling convention for Microsoft C compiler
-    #elif __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 4)
+    #elif __clang__ || __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 4)
         #define EFIAPI __attribute__((__ms_abi__))  // Force Microsoft ABI
     #else
         #define EFIAPI          // Substitute expresion to force C calling convention
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Wed Feb 03 18:00:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 18:00:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81022.148938 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7MRf-0003xX-Tp; Wed, 03 Feb 2021 18:00:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81022.148938; Wed, 03 Feb 2021 18:00:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7MRf-0003wv-Ql; Wed, 03 Feb 2021 18:00:03 +0000
Received: by outflank-mailman (input) for mailman id 81022;
 Wed, 03 Feb 2021 18:00:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7MRe-0003lr-Ki; Wed, 03 Feb 2021 18:00:02 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7MRe-0005WR-B2; Wed, 03 Feb 2021 18:00:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7MRe-0000x8-2g; Wed, 03 Feb 2021 18:00:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l7MRe-0004gv-2E; Wed, 03 Feb 2021 18:00:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=5sWxJmjD1C21Fd5L6Ek2EJrl/uIwm3+FCJYEr7GCZkI=; b=ykOUNIgGcBr4t9ZgJzRj2tlCMr
	hEv+BQDItXbjRIZJQpAsphuhWhq/yY+ic9QEcmWKdOafdcX0jjgYubFX7OIz6SgzMtW6V6yGFw9f0
	gLZcZ/19LTd2c23NMEzjSPpuEwy24Ux34K2ja/3PnsQjylCHzmWQKjsjPWcBZVxqoCSU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158971-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 158971: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:guest-start:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:guest-start:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-armhf-armhf-libvirt:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl:guest-start:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=3aaf0a27ffc29b19a62314edd684b9bc6346f9a8
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 03 Feb 2021 18:00:02 +0000

flight 158971 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158971/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 14 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1  14 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-arndale  14 guest-start              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck 14 guest-start            fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl          14 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw 13 guest-start              fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     14 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-seattle  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-credit2  11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                3aaf0a27ffc29b19a62314edd684b9bc6346f9a8
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  186 days
Failing since        152366  2020-08-01 20:49:34 Z  185 days  332 attempts
Testing same since   158971  2021-02-03 01:39:37 Z    0 days    1 attempts

------------------------------------------------------------
4515 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1022596 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 18:28:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 18:28:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81031.148959 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7Msc-0006Vv-Dl; Wed, 03 Feb 2021 18:27:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81031.148959; Wed, 03 Feb 2021 18:27:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7Msc-0006Vo-Aj; Wed, 03 Feb 2021 18:27:54 +0000
Received: by outflank-mailman (input) for mailman id 81031;
 Wed, 03 Feb 2021 18:27:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=L1oB=HF=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1l7Msa-0006Vj-3j
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 18:27:52 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a7ba0b80-9523-4518-8721-f81bc8f12f6c;
 Wed, 03 Feb 2021 18:27:50 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a7ba0b80-9523-4518-8721-f81bc8f12f6c
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612376870;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=Smtte2Bv11lAulYHj5mml6sMiAjm0cgQD5a5sQTZc08=;
  b=M0puG4cEV/yeMEWT45rZ8Ge42isaguaBzKfus7DNt+t2KcJk+0Wvr45s
   A4XOIEvlSa641TZHjywQL1atTQ1tgnU6yskjod2U9HZPmPmrNG4uPlrAQ
   0ph98Ju2/AcTmpsz4PIDV/XRe8b28dWw209e+P+KRl8h2xHsIDrkh9Ona
   U=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: 1CnpsB47DER3/O/js1NieQnMx0ISOHvm49TdGH4pWLIblRVqU4UQh8SH4Fnbjd0gMS+R1phFFD
 3LJyxetsHLm0bDc6jO/rQ79CJhMGd76lbBVO0TqoVxoJCKsUhf9Kk5iG9lWhYKTuk8CXNjMgq+
 86MeBeXGUNyW7fqbAwQ+b6nz75mUuvsMY9D0xe+13Uar9l381HOMjmzvI+b/FCfa6w3gZ1VYAZ
 MXEof5tYy17z08gLhgF37QRwz8EM/mmmc+KVsJ+UYW7/MLgQOt4dTmSZ5f0m90gGIotC5ht4wd
 y34=
X-SBRS: 5.2
X-MesageID: 36488278
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,399,1602561600"; 
   d="scan'208";a="36488278"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=f4jlGYzPHH/Y/dEl0bdsYVFHxwMe42EYxS1rwuiqr2anVo0cFU2NqYMEKMImaEF/M8yfrjn0sUOY8G1rtnpezEu22ZH/kAYilH6YK7bbL/VMHjqOebrXRBcd6/nZaaC9WYnjGdqJwub5igmgZvsnur3oSQJSZl8efh/2EwhbOc18G5pnbafIjDCKPVrd8bdbBrYyMlWnoebkCAGMG4oZFC3AWWoWk/t/Kj0ETTrZe6WQKmBjjYeNVmlyxp0nLOK6FdX39mgwRpr7wprT56HO6F6L9IL2h5dRYQs+j99dO/AkWmzG5/d1aWSy5JSjKjqV/anSNb+XehVvgPznpKOQhg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=p4dcvWQS3pySTSt+XdeIGhGDdTXQFonS/J+U2VJyU6M=;
 b=Yx7rlanD6bS0RPwcS3hJ9TYm6ppNY3BKoJewTBBT1n3U49FH50MJ+CcXQ+Ml6L1qdR4LRVf5V9ntDn6eBafMKkh4AdApP1B1oOuFagBYn3Mjg8PyBAOvnOAX0wrgvVwfwTQYrRAQcy120sNcWthLV4zkWMKOwCrD+50BFKFldRssfQMr/OkbLg2YeKVE3a8nKYJ5HP4P3zBE5YG1Fi8hHatsXZb+jMiyFaRKUY2PN1is9KbumxnY7liU0B2CfqdHUnWfN8QF6KFGmuTqEwXguVWrC8Q03VcGnvS1zxq9R1WKgzm2joIboA4NPW20opCy72rn/d6fZoz0a0JK8J/bLw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=p4dcvWQS3pySTSt+XdeIGhGDdTXQFonS/J+U2VJyU6M=;
 b=D/ufqujM9cOY2tZsj3gd9qgjQ8sH1QMwtJdXwheLhH1Fv6ZRH0zetHjmh2q5wLfptAGswG/k0yOq64G4BV+EZMeTZmd2uzyMsDIrhrtIIf+m5dknOFuob5iea+bQSnyBeEQ305cGWmDB8IOM44SRLbneFxguB8H34sxjWcIBiII=
Date: Wed, 3 Feb 2021 19:27:30 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Ian Jackson <iwj@xenproject.org>
CC: Manuel Bouyer <bouyer@netbsd.org>, <xen-devel@lists.xenproject.org>,
	"Elena Ufimtseva" <elena.ufimtseva@oracle.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v3] NetBSD: use system-provided headers
Message-ID: <YBrrEulPi+27bV2u@Air-de-Roger>
References: <20210203165421.1550-1-bouyer@netbsd.org>
 <20210203165421.1550-4-bouyer@netbsd.org>
 <24602.56532.169889.71270@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <24602.56532.169889.71270@mariner.uk.xensource.com>
X-ClientProxiedBy: MRXP264CA0017.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:15::29) To SA0PR03MB5610.namprd03.prod.outlook.com
 (2603:10b6:806:b2::9)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 6ab2f880-11e3-4b04-65e9-08d8c87160be
X-MS-TrafficTypeDiagnostic: SA2PR03MB5692:
X-Microsoft-Antispam-PRVS: <SA2PR03MB569263D2198F072EB9D664F88FB49@SA2PR03MB5692.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: oC0xksk/ttUDVPfSxT2+1biRD3XffoyDon9Vg5VXqY1pecU3bb3EAlz5rFUBFy/8tmidvFfqQADbYQzYk/LUpGzXgmOvCVtHXWOn2t9OToTDPih6gf5lYG2QJwVj2UmJi3NXfgWTaVWNSVZsrI+gD6GJ/uVYw/Jv8fMvlNR9tVTbCTW810Qp4hj0+xwufoUqgB1EyLpkCxVL9hL/WBAbEiB2MzVPBF7WF4go6qXSqDDaot/L0b/Y7MeVGpysd3QsDevpPW6Q0r2fPM89PVSLLYSgWI+qPbnpZxS1T+qLgAcRMmXwHbRUXT+CsgzlwG+MP7yngUhUdIxpFZQ5AJ3b7fGnPQNkDtvEMwcBxL8zXPPXaV6roQ48W1hDUxOX2w8Hcdqiv+C7h0aDuKlTSr7Z3270AwP/DacM3FbKqLhVPhxxwJKjAI5HN0DmFnLZ8r9EiuufQU6Q9A74zb66k7TgyfI8diy7Uzso0vqYFIikKoXl4YC0LrcN1zF+EDxW1o1FuInNVm2RQz/85U78NmOW4UnGsGcnXBQ7G3w7VNcr88bSbP+mh9HewsybgglAGQuj
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SA0PR03MB5610.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(136003)(39860400002)(346002)(376002)(366004)(396003)(16526019)(956004)(26005)(9686003)(86362001)(6916009)(186003)(33716001)(6496006)(6486002)(66476007)(66556008)(5660300002)(478600001)(4744005)(85182001)(4326008)(8936002)(8676002)(316002)(54906003)(6666004)(66946007)(2906002)(67856001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?RjVmcnl4NnhsdWdlYXlzQzAxenYrQTZBZUFiT1R1SjM1ZkRmNjI1OFRUNWt1?=
 =?utf-8?B?Ky8wd1ZpWEg4TndObmlnL2FKKzhxQUhnTUNZREo2a2ZvQ24yaGYvUk5haVp0?=
 =?utf-8?B?UHRpcU9wVytNL1ByTU5vTUg4Zm8wL3RWWEl6ck50ZGoxbFRxWFBtcnVnQjZl?=
 =?utf-8?B?ZnV2d0lzSUdCbHhkRXZWY0k3a3NhM3VCNmhwVVdlTFVuQzVGRVRhT0hKTHNm?=
 =?utf-8?B?QjgzV0ppQURBVFo4bnhWOGwyS3I5UWdkdTBUTXpoVnVSQ1ZCTjZUa0Facmg1?=
 =?utf-8?B?MlFWNUQzdEZCSkpKaHlScHdqcEFhTDFIdlUzalppM3NPT0VXazBGVXRUMTU3?=
 =?utf-8?B?a3owYWd5OC9iWXpnQlpiZVk0cHRyV2ZLSCs3aWxsc0M5UVRBdFJWNlFaQTE0?=
 =?utf-8?B?SnlGaHk4dW1seDYvdEY5aDFTclVyQ3pmNnh6Y2FVUjd4RGhNempEZXQzWnNj?=
 =?utf-8?B?d1dOLzZRR0oxYUhtbXRRVFAydnRQWVJSbGE5WFAwSDlHSTJOOGR3NHZNaDAy?=
 =?utf-8?B?TkM0VEl2WUhTVjdvcEc5U1cxMi9OWmE4L2F2cWxGQlVkbzdYeUkxY29TUTBx?=
 =?utf-8?B?aHZlZ1pCdzk5dFpLTzQ3d3RGbDFqTTI1SlhFcTNVK1JncXZuKy9ISC9IWTQ3?=
 =?utf-8?B?Ni93QWx1MlBtUEVEOUQxQVBwbkoxSCs5M0hWTnJXL2xPd3ZrL0Q3MDFFQk1r?=
 =?utf-8?B?cWZWQml6UkIwMEJ2NEZ0TFB0YTFaWktNUnRLcXU5aTBnMTY0UUFJTVplNFFW?=
 =?utf-8?B?WFFPdXdTczVVK0hFYlMyVkJpMmF1Q0ZHSXh4WUR4dUkwdVJsaGdBdGtYV3hF?=
 =?utf-8?B?a2dES29rRUJ4dHpmSVZPVE9KeEhQcG5NSVhnanlxbVZQWkF3QWtxYi9rUkV1?=
 =?utf-8?B?YXZHcXdOWmlQbmhmeUZZYlNVc0xMdExsMlZUeW0rbkVMMFpoMzNaQlM4VjVW?=
 =?utf-8?B?T2x6ZXBvYXh3cUlWWGFMS0wwZlR5OWFOVDFQUUw2NnkxcGVCdXp6SThtYkNR?=
 =?utf-8?B?VFl0Y1gyWmF2VDZJNGhIQjBsdDEyTHk0cnlDN2F3b2Q4ektZN0RyamV2L0xS?=
 =?utf-8?B?WVMxSUdCQ1p0YlZTWEhmMFlyZEtselpQelVYNjFtRXhCOHJRbnF6bnptV1ht?=
 =?utf-8?B?bHZ4TFUrLzcwY1BhdFY4UUtQZTZISFBxSXBTRGlGK2NBMjdPU0NNQ0EyeENF?=
 =?utf-8?B?UHhWaFFsbFptNTdxRnlmT2UzdXEzOXFSRkJLRlhvaTZpZHJ4cUozVlhyaytU?=
 =?utf-8?B?YVNzZG5qejg3TGNLR1dsZ1NJRlJoTGVDTWllYmNRcEVmRVJqVitvV1pDMGpN?=
 =?utf-8?B?R2xCazQ2WEtTUVhSdll4WlJFVFpnVzM1d0JDNkNhUDBqYXIrV0Q1TVJ6RWN6?=
 =?utf-8?B?dUQ1MU5ET3VOUFZGQ3dDVURySFNuUjNrL2tOMDJ0U21BRWU1cFo5Uk9kQ2hi?=
 =?utf-8?B?dElCNkVQaFl0TmNScmdlVkVvQ1hSQlRQZCttQ2x3PT0=?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 6ab2f880-11e3-4b04-65e9-08d8c87160be
X-MS-Exchange-CrossTenant-AuthSource: SA0PR03MB5610.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Feb 2021 18:27:35.4040
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: lOu+6ckca0EBvOr/upZomrgw//QM0L+z6CtPC8JQVW7W7+pTLo9QkUQMkOZSze73ldAiixy+0R/ysx/0lroWbg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA2PR03MB5692
X-OriginatorOrg: citrix.com

On Wed, Feb 03, 2021 at 05:26:44PM +0000, Ian Jackson wrote:
> Manuel Bouyer writes ("[PATCH v3] NetBSD: use system-provided headers"):
> > +#ifdef __NetBSD__
> > +#include <xen/xenio.h>
> > +#else
> >  #include <xen/sys/privcmd.h>
> > +#endif
> >  #include <xen/foreign/x86_32.h>
> >  #include <xen/foreign/x86_64.h>
> Manuel, thanks.  I think this is a bugfix and ought in principle to go
> in, but I think we probably want to do this with configure rather than
> ad-hoc ifdefs.
> 
> Roger, what do you think?  Were you going to add a configure test for
> the #ifdef that we put in earlier?

Yes, sorry, I owe you that patch. I will try to do it tomorrow so that we
can have a model for other headers. AFAICT I will have to build
something around AC_CHECK_HEADER so that we can get a define that
contains a path that can be used with an #include directive.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 18:42:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 18:42:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81036.148971 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7N6e-00005I-Mt; Wed, 03 Feb 2021 18:42:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81036.148971; Wed, 03 Feb 2021 18:42:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7N6e-00005B-JA; Wed, 03 Feb 2021 18:42:24 +0000
Received: by outflank-mailman (input) for mailman id 81036;
 Wed, 03 Feb 2021 18:42:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7N6d-00004x-OP; Wed, 03 Feb 2021 18:42:23 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7N6d-0006AN-Iw; Wed, 03 Feb 2021 18:42:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7N6d-0004MI-Bu; Wed, 03 Feb 2021 18:42:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l7N6d-0004KJ-BP; Wed, 03 Feb 2021 18:42:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=VSTuviVVcI8LbqjSgDbE1pX76xuth/oRrQssgYCQKz8=; b=5ACwK4S0ezr5+PO1/W3Lo9u7hq
	fHruC6D2AWt65rXHij/n6xTRP+uXbq/U6fcL2MCCOntmc8hRwtctUiyI4qLbbj8yrU/IqQDobiEgr
	VUAMKGyMSE9fKgF/9UzjqOao5kmZk+8uPmvJIrMIM3dQg9qn4G2uaPDtBtiZQ+DeVZug=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158969-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 158969: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=77f3804ab7ed94b471a14acb260e5aeacf26193f
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 03 Feb 2021 18:42:23 +0000

flight 158969 qemu-mainline real [real]
flight 158986 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/158969/
http://logs.test-lab.xenproject.org/osstest/logs/158986/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 13 guest-start              fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                77f3804ab7ed94b471a14acb260e5aeacf26193f
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  167 days
Failing since        152659  2020-08-21 14:07:39 Z  166 days  336 attempts
Testing same since   158969  2021-02-03 00:38:03 Z    0 days    1 attempts

------------------------------------------------------------
372 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 98850 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 19:11:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 19:11:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81042.148989 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7NYW-0003CV-6s; Wed, 03 Feb 2021 19:11:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81042.148989; Wed, 03 Feb 2021 19:11:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7NYW-0003CO-3t; Wed, 03 Feb 2021 19:11:12 +0000
Received: by outflank-mailman (input) for mailman id 81042;
 Wed, 03 Feb 2021 19:11:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=c0lb=HF=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l7NYU-0003C0-UC
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 19:11:10 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dd1dc9f7-dfbf-404a-81e1-03c4088b6821;
 Wed, 03 Feb 2021 19:11:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dd1dc9f7-dfbf-404a-81e1-03c4088b6821
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612379468;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=L+dtwTpfiUA9Jrs29DLC3iQ37Wh7rLi2dpTVGGU2vOM=;
  b=L/SKWULRFvG40rnE79zxkQpocYHNxs6FoKVMS7p8pFAqRf9/4ezeaU8r
   bv/5uAeOxuFByMldE0JmkHzdsaG82qyj4o9514UPaY0hiC01j9QyhcN+r
   xEbp1p35VYO8Hq8MfNue6ftjNPZMNWmzISl9WxJ3wDynSMc/k0gypPT0t
   E=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: RIK8rXJgwVN8lN1f6khucoH41vSVUvaSTdYfvlOHApGOFaVT5a2k7X6dpEoROEqBIerMXmubE1
 abPbzsoiYZfRw3pueKW2XWdH6fymF/uzmke3Lvw/4THXifjdp9Pha5RSo3V5RIvd7ba9UoTksA
 XKin55ALPTrGF9/14th6/D1pb5YsvWVmn9sK3sP/8HmewFNv5l4vUQyhru0yTgXOHJBl498uGr
 He+aNMCWtcdAvOitBT4xVdXln2vlM6KXeEV7dsLzB4E0VvOGaP2kcMp8HvP7826lElc1M4Ma/K
 5h8=
X-SBRS: 5.2
X-MesageID: 36491746
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,399,1602561600"; 
   d="scan'208";a="36491746"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=QfFtU9hZEOMeHWXjmv1kPGgIfa2l0aGdVfBAuXHGEAlG4yedtqfMCN3lLM/rCPkl19axoc3rbTtAw0pptGDyRQdiYQKNqJCHjzJECTc0WowI8nVS+CUPbhGU4+doYnJ5XiH53FglmnWR+E2RyPeyNFABl4Bkl1VMMRTub3rB0k5pT+vKpDI69iyea1fBUbneciOvrhFOoO2gm2G72uQ1OTvICbVrwFFgdfrgb791My9PUs+ZmW4uRhOZxik1lb2p9q45eJ+nbHqDYV28uORJ6X2JtHgmYzMSGppVxKRiRcB9xC6f2u2GN3j0NZmNXWQJEPgzr4pVFXd92HyBfeV2OA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=L+dtwTpfiUA9Jrs29DLC3iQ37Wh7rLi2dpTVGGU2vOM=;
 b=U6s0Wvj+rLpBX7FHNawJdMuy8S/325SI4ONG8vIyhTxpahJBs9jZWCgdvwwdhM8RB/te5gVWXuK8x19ELYZWQNDjSJuj/Aabf0Fy7XCQXz3tWO1CngKBebqzYFOSoJet26Mlq2aGoHqntvrgm+WPi4fEayZYPKBvrbnf/g3HTGoC9bYx9o9xpYRiHILw2zdrB22NcMpu0L9LaEc8XuxmF/ZrVD+zJiN0ZLv393ATRpzobmuN940hou7wew6GYKOhvYslP+BSj3IK8CpppHkWaszSzPh6G7kQfPPROLe6KFbpK1tALSrb619jNqyIGNbETP20NsV5sAPYLPTjviz75w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=L+dtwTpfiUA9Jrs29DLC3iQ37Wh7rLi2dpTVGGU2vOM=;
 b=r9O11xVBrtpMRTZq6HZpViPKCZGevnPj5O8XDXEfuu3LKEoF/yfY4bVfsjHewT2u948U4zwL/r8KzpCCKIQODDxC8tDdVbyQ+43xsNlJ3zQxst622H//Pst5eEEl9F9gJUVpxvgwZlx/oUb5Ir2TCBjVlZTWh3qhrHoE4ccd9Wo=
Subject: Re: [PATCH for-4.15] x86/efi: enable MS ABI attribute on clang
To: Roger Pau Monne <roger.pau@citrix.com>, <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>, Ian Jackson
	<iwj@xenproject.org>
References: <20210203175805.86465-1-roger.pau@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <234bbdc1-d8a0-b1a9-63e4-83cbd46d0220@citrix.com>
Date: Wed, 3 Feb 2021 19:10:49 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
In-Reply-To: <20210203175805.86465-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0026.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:61::14) To BYAPR03MB4728.namprd03.prod.outlook.com
 (2603:10b6:a03:13a::24)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 81326eba-d69f-48d0-30a8-08d8c8776e5f
X-MS-TrafficTypeDiagnostic: BYAPR03MB3415:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB341594A7689C76E81201644DBAB49@BYAPR03MB3415.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: qTqfvTNlQ/yeB425h+Z9M6tBsXJMUQUs3cTEi2tZlyV4DDwuLzKDIIW5Ydz90GXMdT6CvxtJPvV8XzRtVne/88Cd002Q6VIfd4pcFu38HDn2vJlytVU/MJN0gHSgaJzKydZ+bnQAhZmU2c2P93Wd6+JC7Pn8cY/Hsf2E/wg87dxytotvWmCcKg2Va2j+elqw1OLqXAilWi043YJ2VRqSLbxc3YOlX713xtiaPtjIoyL5HQ/1IUQ4Ovjz/eqfjkgfshdPCL6A6D+QuJw1vIAftNQ10Nd4lN1elMhHekm/tg5XEL5ucBYplPsWmIki4DrYDvKlgmn1MoZBw4j5hh1WroG/h1iWxRkkel04Zh+ENWkNm7fgeKf9k2LWqcjHRxor230hhUYQVvYLrDReu3XIu8XYwEBmmK1untAgXL2GdtRN8vBRrsJp5D7/3nyVkI3uMjoXhjHonNh+mpwndtQLtKQwhmwYNp7tJBtDnAs0uvNdXSEQoGTPGwNYjHJd4YyZ/OAj/ssUqlfF07XO3iyhsqKiJzF0pBAnRBZA5zSCefdIbtydZnapx+NlTrLyDRv7PM+g6xpMSI8kSP20uAfwisUFSxc8nDcZJI7hDHZIeuo=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB4728.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(366004)(396003)(39860400002)(346002)(376002)(316002)(6666004)(54906003)(2616005)(956004)(4326008)(36756003)(16576012)(8936002)(16526019)(8676002)(2906002)(66476007)(186003)(53546011)(478600001)(31696002)(4744005)(66556008)(66946007)(26005)(6486002)(83380400001)(31686004)(86362001)(5660300002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-CrossTenant-Network-Message-Id: 81326eba-d69f-48d0-30a8-08d8c8776e5f
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB4728.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Feb 2021 19:10:55.3689
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 3dZW0aiyYQPJi8107KT4bbFuoVltmbozJz6AV9BSahQ66SRPhg+tpmwoHmlZf8cfNyyv2e9LpvNv/UCoE4dHLMUDNsvV1nEM/w99d/AEXhQ=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB3415
X-OriginatorOrg: citrix.com

On 03/02/2021 17:58, Roger Pau Monne wrote:
> Or else the EFI service calls will use the wrong calling convention.
>
> The __ms_abi__ attribute is available on all supported versions of
> clang.
>
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

> ---
> Cc: Ian Jackson <iwj@xenproject.org>
>
> Without this a Xen built with clang won't be able to correctly use the
> EFI services, leading to weird messages from the firmware and crashes.
> The impact of this fix for GCC users is exactly 0, and will fix the
> build on clang.

DYM "fix the compiled binary"?

The build on clang isn't broken atm, but it provably has the wrong ABI.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 20:07:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 20:07:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81048.149007 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7OQj-0008W2-Q3; Wed, 03 Feb 2021 20:07:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81048.149007; Wed, 03 Feb 2021 20:07:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7OQj-0008Vv-Ly; Wed, 03 Feb 2021 20:07:13 +0000
Received: by outflank-mailman (input) for mailman id 81048;
 Wed, 03 Feb 2021 20:07:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nuD1=HF=linuxfoundation.org=torvalds@srs-us1.protection.inumbo.net>)
 id 1l7OQi-0008VX-0S
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 20:07:12 +0000
Received: from mail-lf1-x135.google.com (unknown [2a00:1450:4864:20::135])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0d5ebe7d-8693-4de2-b060-390a9d736581;
 Wed, 03 Feb 2021 20:07:11 +0000 (UTC)
Received: by mail-lf1-x135.google.com with SMTP id a8so984478lfi.8
 for <xen-devel@lists.xenproject.org>; Wed, 03 Feb 2021 12:07:11 -0800 (PST)
Received: from mail-lf1-f48.google.com (mail-lf1-f48.google.com.
 [209.85.167.48])
 by smtp.gmail.com with ESMTPSA id c9sm357961ljk.130.2021.02.03.12.07.08
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 03 Feb 2021 12:07:08 -0800 (PST)
Received: by mail-lf1-f48.google.com with SMTP id d3so962664lfg.10
 for <xen-devel@lists.xenproject.org>; Wed, 03 Feb 2021 12:07:08 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0d5ebe7d-8693-4de2-b060-390a9d736581
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linux-foundation.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=Kpfh9SxqhbXVAvduWXLkt3nG5WWOIvz3WkEIu4NNz7c=;
        b=O9vxx80gl/k9keknQ6fiZPXocJOOncAcdeIZ6akawRcVRBFNzF5hhkdAGj3EKxTDVO
         0tXEKP9WD5HDHhwvfLBaLBjdXeGl1DsU5Pi/lyCQ2ckXXfJb/LYkljHjpjOKECkyDWdg
         3Go0y3vjTAMTEfVdFupLN71dukSnZnXC92wMg=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=Kpfh9SxqhbXVAvduWXLkt3nG5WWOIvz3WkEIu4NNz7c=;
        b=ct6sXJw1Ux8TcZBjOOwjjOTRPs+63bLPw0gUIygi8b940K0P9g7ChRDOpSxUiNgP8S
         Bh/IRLYbdhqSbKrzUsihbEVEYswEkHTnYovZu1+9edXb1pjMpsgC4EqQ3aBe1Yj8GsK1
         RJzdcFG6Xf/1sJoZZnXnG8B3SSIwT8BDvinagEv6f2hnr7tkUe8zEFVNPWGNwS76oGL4
         2Wws8WNa5YCQun1GWTmbQJTMY9LxIyfp4wTiOYkBzPLU9y90fPdf738VwjH170VkHbha
         a0KzWqLJnM2e2lFkdV95z8coRGYPKRkxGxspkZrj85dQlPxHjPm7uYfkKxucXFlHGI2U
         PZEQ==
X-Gm-Message-State: AOAM5321JyVOGdDmsa3xE+bMPVmVKnXmKYEErmyOQ5uAYWOuhALnkuwD
	FCKp0lR78nEavaFiHDIsQ+Zt+tfT9QZ4Gw==
X-Google-Smtp-Source: ABdhPJzGMVr+Z8LSgxgfvQaDmIC06doQ+30XqakYuconezfOfNK3a/iobo/yl3PYEafyUQ8eDuvWdQ==
X-Received: by 2002:a19:8156:: with SMTP id c83mr2688027lfd.546.1612382829590;
        Wed, 03 Feb 2021 12:07:09 -0800 (PST)
X-Received: by 2002:ac2:4436:: with SMTP id w22mr2548053lfl.41.1612382828438;
 Wed, 03 Feb 2021 12:07:08 -0800 (PST)
MIME-Version: 1.0
References: <8a358ee4-56bc-8e64-3176-88fd0d66176f@linuxfoundation.org>
In-Reply-To: <8a358ee4-56bc-8e64-3176-88fd0d66176f@linuxfoundation.org>
From: Linus Torvalds <torvalds@linux-foundation.org>
Date: Wed, 3 Feb 2021 12:06:52 -0800
X-Gmail-Original-Message-ID: <CAHk-=wg983RfAiSSo4zLMADEfzLEuoBi+rye30Zrq7Bor8zg_Q@mail.gmail.com>
Message-ID: <CAHk-=wg983RfAiSSo4zLMADEfzLEuoBi+rye30Zrq7Bor8zg_Q@mail.gmail.com>
Subject: Re: Linux 5.11-rc6 compile error
To: Shuah Khan <skhan@linuxfoundation.org>
Cc: Linux Kernel Mailing List <linux-kernel@vger.kernel.org>, 
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"

On Wed, Feb 3, 2021 at 11:58 AM Shuah Khan <skhan@linuxfoundation.org> wrote:
>
> ld: arch/x86/built-in.a: member arch/x86/kernel/pci-swiotlb.o in archive
> is not an object
> make: *** [Makefile:1170: vmlinux] Error 1

That honestly sounds like something went wrong earlier - things like
doing a system upgrade in the middle of the build, or perhaps running
out of disk space or similar.

I've not seen any other reports of the same, and google doesn't find
anything like that either.

Does it keep happening if you do a "git clean -dqfx" to make sure you
have no old corrupt object files around and re-do the whole build?

            Linus


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 20:07:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 20:07:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81049.149018 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7OQy-00007W-1w; Wed, 03 Feb 2021 20:07:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81049.149018; Wed, 03 Feb 2021 20:07:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7OQx-00007O-V0; Wed, 03 Feb 2021 20:07:27 +0000
Received: by outflank-mailman (input) for mailman id 81049;
 Wed, 03 Feb 2021 20:07:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u1O+=HF=citrix.com=igor.druzhinin@srs-us1.protection.inumbo.net>)
 id 1l7OQx-000079-5k
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 20:07:27 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4e528a65-e106-4e81-ad5f-82f4218a94de;
 Wed, 03 Feb 2021 20:07:25 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4e528a65-e106-4e81-ad5f-82f4218a94de
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612382845;
  h=from:to:cc:subject:date:message-id:mime-version;
  bh=MITj/cWLh6YxfT29z0NOD+rmVZ8NCm5//wvOqP2KyUk=;
  b=IdnH1eTYWRSY+3VigfL5vEeidgy37RQz4URBXPlmdon6t3HNXZLz7pjJ
   pSw1u74ccz9TeLCuvcKrRwtou6h5rjJqyJT5ntTRg6k92RETZpcvCog6q
   AftfuNP1+RR86oQgd4wu6jJ/CJ2bwIvEeTYASxa727D1LAHskZHqjusB8
   A=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: hIP17ujBP+/tAu1gzbWzgP6zqDXD9V+CusUUyPvVzqJnLeOYF27W9t7XN/Ms/LrgPU7AqAoOw/
 okQCZmc7FoD0+0GAgq+8cqZ+lmzxvW/+qds4atbYWZQJyCyAjttAtKEIn/Je96nLiEqHeM+HUZ
 Blx/pTFLi+uWNkwSmdSmnm97XQLUhzwyH1d6hCfG9MG0yQqkS7oJTkrR+t/klFbCOO9by7bFIy
 gNPJXPtpceFWChJMSP1dgdYDPpJNy2w3Kcujl8bMWcAutJDdnTqty7Mt9NvMmUJwzdWpXeHdYh
 lRI=
X-SBRS: 4.0
X-MesageID: 37834476
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,399,1602561600"; 
   d="scan'208";a="37834476"
From: Igor Druzhinin <igor.druzhinin@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: <iwj@xenproject.org>, <wl@xen.org>, <anthony.perard@citrix.com>,
	<tamas.k.lengyel@gmail.com>, Igor Druzhinin <igor.druzhinin@citrix.com>
Subject: [PATCH v2 1/2] tools/libxl: pass libxl__domain_build_state to libxl__arch_domain_create
Date: Wed, 3 Feb 2021 20:07:03 +0000
Message-ID: <1612382824-20232-1-git-send-email-igor.druzhinin@citrix.com>
X-Mailer: git-send-email 2.7.4
MIME-Version: 1.0
Content-Type: text/plain

No functional change.

Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
---
New patch in v2 as requested.
---
 tools/libs/light/libxl_arch.h | 6 ++++--
 tools/libs/light/libxl_arm.c  | 4 +++-
 tools/libs/light/libxl_dom.c  | 2 +-
 tools/libs/light/libxl_x86.c  | 6 ++++--
 4 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/tools/libs/light/libxl_arch.h b/tools/libs/light/libxl_arch.h
index 6a91775..c305d70 100644
--- a/tools/libs/light/libxl_arch.h
+++ b/tools/libs/light/libxl_arch.h
@@ -30,8 +30,10 @@ int libxl__arch_domain_save_config(libxl__gc *gc,
 
 /* arch specific internal domain creation function */
 _hidden
-int libxl__arch_domain_create(libxl__gc *gc, libxl_domain_config *d_config,
-               uint32_t domid);
+int libxl__arch_domain_create(libxl__gc *gc,
+                              libxl_domain_config *d_config,
+                              libxl__domain_build_state *state,
+                              uint32_t domid);
 
 /* setup arch specific hardware description, i.e. DTB on ARM */
 _hidden
diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
index 66e8a06..8c4eda3 100644
--- a/tools/libs/light/libxl_arm.c
+++ b/tools/libs/light/libxl_arm.c
@@ -126,7 +126,9 @@ int libxl__arch_domain_save_config(libxl__gc *gc,
     return 0;
 }
 
-int libxl__arch_domain_create(libxl__gc *gc, libxl_domain_config *d_config,
+int libxl__arch_domain_create(libxl__gc *gc,
+                              libxl_domain_config *d_config,
+                              libxl__domain_build_state *state,
                               uint32_t domid)
 {
     return 0;
diff --git a/tools/libs/light/libxl_dom.c b/tools/libs/light/libxl_dom.c
index 1916857..842a51c 100644
--- a/tools/libs/light/libxl_dom.c
+++ b/tools/libs/light/libxl_dom.c
@@ -378,7 +378,7 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
     state->store_port = xc_evtchn_alloc_unbound(ctx->xch, domid, state->store_domid);
     state->console_port = xc_evtchn_alloc_unbound(ctx->xch, domid, state->console_domid);
 
-    rc = libxl__arch_domain_create(gc, d_config, domid);
+    rc = libxl__arch_domain_create(gc, d_config, state, domid);
 
     /* Construct a CPUID policy, but only for brand new domains.  Domains
      * being migrated-in/restored have CPUID handled during the
diff --git a/tools/libs/light/libxl_x86.c b/tools/libs/light/libxl_x86.c
index 91a9fc7..91169d1 100644
--- a/tools/libs/light/libxl_x86.c
+++ b/tools/libs/light/libxl_x86.c
@@ -453,8 +453,10 @@ static int hvm_set_conf_params(libxl__gc *gc, uint32_t domid,
     return ret;
 }
 
-int libxl__arch_domain_create(libxl__gc *gc, libxl_domain_config *d_config,
-        uint32_t domid)
+int libxl__arch_domain_create(libxl__gc *gc,
+                              libxl_domain_config *d_config,
+                              libxl__domain_build_state *state,
+                              uint32_t domid)
 {
     const libxl_domain_build_info *info = &d_config->b_info;
     int ret = 0;
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Wed Feb 03 20:07:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 20:07:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81050.149031 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7OR8-0000CC-BP; Wed, 03 Feb 2021 20:07:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81050.149031; Wed, 03 Feb 2021 20:07:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7OR8-0000C0-7R; Wed, 03 Feb 2021 20:07:38 +0000
Received: by outflank-mailman (input) for mailman id 81050;
 Wed, 03 Feb 2021 20:07:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u1O+=HF=citrix.com=igor.druzhinin@srs-us1.protection.inumbo.net>)
 id 1l7OR7-0000Bi-Hb
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 20:07:37 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 19246fc6-9e83-44af-808f-9044396c1725;
 Wed, 03 Feb 2021 20:07:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 19246fc6-9e83-44af-808f-9044396c1725
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612382855;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version;
  bh=EyvH51Bn6pq0zdCFJ+K3Cfpfi1Q7rbnOK1TQDTsUE8M=;
  b=RcdmPx3C9CDwLmZehsq8nZ38IlW6cED3EFsqsXuKhml/3dsScbMePHhy
   U5RUAie/6d6IRBzaQpaB1or8OJpd59xAVDJ+xoqDuAGHVdn+S0Nm1QK28
   zOdeZxHaPqGVL8AixbE+JZFB+SPasblD//BU358MBDDd/jDynOhhY6AXC
   E=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: y0vrFMvfz/3F9VBGEL2ptjYDNwIkFipgqd+cfKDaRytnjYsPTCef31rc5xJ/HbNAXLEchisjiv
 3gyu/5xpUt9m1QotYqw6k1SELRN2HXkYxPIVXamaUo3AzBYn1ddhDP53UaDR36MH+gsYsslNGU
 5Ol65laLUoWwMf1LTp+ENVwtN0buuh8qixcHI0qu/6xQdvVvmZNwxZmKgBha5em6yrm1Hvg6bA
 7SjrpfMoYkW3mU/KJ1XRI3IRRjihdfCMFL3IZyIKlZJ1tmx+fyZYw2l3070rEaHaQQhEOXBbEr
 Aeg=
X-SBRS: 4.0
X-MesageID: 36694182
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,399,1602561600"; 
   d="scan'208";a="36694182"
From: Igor Druzhinin <igor.druzhinin@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: <iwj@xenproject.org>, <wl@xen.org>, <anthony.perard@citrix.com>,
	<tamas.k.lengyel@gmail.com>, Igor Druzhinin <igor.druzhinin@citrix.com>
Subject: [PATCH v2 2/2] tools/libxl: only set viridian flags on new domains
Date: Wed, 3 Feb 2021 20:07:04 +0000
Message-ID: <1612382824-20232-2-git-send-email-igor.druzhinin@citrix.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1612382824-20232-1-git-send-email-igor.druzhinin@citrix.com>
References: <1612382824-20232-1-git-send-email-igor.druzhinin@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain

Domains being migrated or restored already have the viridian HVM param
key in the migration stream, and setting it twice results in Xen
returning -EEXIST on the second attempt later (during migration stream
parsing) in case the values don't match. That causes the
migration/restore operation to fail on the destination side.

That issue has now resurfaced with the latest commits (983524671 and
7e5cffcd1e) extending the default viridian feature set, which makes the
values from previous migration streams differ from those set at domain
construction.

Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
---
 tools/libs/light/libxl_x86.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/tools/libs/light/libxl_x86.c b/tools/libs/light/libxl_x86.c
index 91169d1..58187ed 100644
--- a/tools/libs/light/libxl_x86.c
+++ b/tools/libs/light/libxl_x86.c
@@ -468,7 +468,10 @@ int libxl__arch_domain_create(libxl__gc *gc,
         (ret = hvm_set_conf_params(gc, domid, info)) != 0)
         goto out;
 
-    if (info->type == LIBXL_DOMAIN_TYPE_HVM &&
+    /* Viridian flags are already a part of the migration stream so set
+     * them here only for brand new domains. */
+    if (!state->restore &&
+        info->type == LIBXL_DOMAIN_TYPE_HVM &&
         (ret = hvm_set_viridian_features(gc, domid, info)) != 0)
         goto out;
 
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Wed Feb 03 20:16:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 20:16:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81046.149051 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7OZC-0001PY-8A; Wed, 03 Feb 2021 20:15:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81046.149051; Wed, 03 Feb 2021 20:15:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7OZC-0001PR-5E; Wed, 03 Feb 2021 20:15:58 +0000
Received: by outflank-mailman (input) for mailman id 81046;
 Wed, 03 Feb 2021 19:58:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mAwm=HF=linuxfoundation.org=skhan@srs-us1.protection.inumbo.net>)
 id 1l7OIa-0007TD-3U
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 19:58:48 +0000
Received: from mail-io1-xd2b.google.com (unknown [2607:f8b0:4864:20::d2b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9bccb927-4718-42d0-a33e-76dde9e36808;
 Wed, 03 Feb 2021 19:58:47 +0000 (UTC)
Received: by mail-io1-xd2b.google.com with SMTP id j5so607005iog.11
 for <xen-devel@lists.xenproject.org>; Wed, 03 Feb 2021 11:58:47 -0800 (PST)
Received: from [192.168.1.112] (c-24-9-64-241.hsd1.co.comcast.net.
 [24.9.64.241])
 by smtp.gmail.com with ESMTPSA id 14sm1594104ioe.3.2021.02.03.11.58.45
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 03 Feb 2021 11:58:45 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9bccb927-4718-42d0-a33e-76dde9e36808
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linuxfoundation.org; s=google;
        h=cc:from:subject:to:message-id:date:user-agent:mime-version
         :content-language:content-transfer-encoding;
        bh=iBumuQ2xl7PWUunfr+5XCG2wfvDl+oJsEntLHQUnRoM=;
        b=PJYmBRC8/jAuvC7KPetZ+KazUHm905Rkga+lhU7oVaWL37SKwbs2WrYgYOc64tvwLi
         28FbdLAlO0glF5VEYVFO0YesNLN7sNRCvoxYrXN4aGoEG6FF60DGHnarjT1QvnGpaG+g
         yXSrq8lb+smqn4r188pVtEopzCif3Nu73X8pc=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:cc:from:subject:to:message-id:date:user-agent
         :mime-version:content-language:content-transfer-encoding;
        bh=iBumuQ2xl7PWUunfr+5XCG2wfvDl+oJsEntLHQUnRoM=;
        b=sWmte8Gk2SBBSQItZIt6nYUNW7qs01jK5glx58YaS6ZeVx1ue9VY2xn2qlFTq3puhm
         DHQmEM1Vj5cfM5jcgOJOqOiq5tFZYZqUM88PdQ64b87BaCKJb3+yCAjOY8rkOYoEeTx+
         wATofBjYITC7kxIvogMAB2zYY8FGWVbrOwfCP5hRznRdryZi2i/6+eoL6GNvs61lAlHK
         DvoYrh3JMqesxAYQRAF20Z8hrGD5p38NKrfdZveERBF3dhB4EWdwuAJZQPlS7lQY6CWX
         1KS7x82OMWsvgdz+rAm5fvsNKhIDTrOGeZJPo2QYXFvUfOMy6xJwyQDVjctrToocg3r5
         ubOQ==
X-Gm-Message-State: AOAM5324vcMVxllytVL5wrjjMdgeUuyAKDTd0hTYAZXRcs1zRjoiVh1d
	ku2vqyF3KHiLk800+jjkrbI2Y8boB1wJWg==
X-Google-Smtp-Source: ABdhPJzgpUqBG5vdviXmVeBBOuvyp8nIPkwGaW3le/t3FUwlPckmjmGnUWHIoLQkBK24H7NGNvzLMw==
X-Received: by 2002:a05:6602:2b01:: with SMTP id p1mr3774752iov.156.1612382326457;
        Wed, 03 Feb 2021 11:58:46 -0800 (PST)
Cc: Linus Torvalds <torvalds@linux-foundation.org>,
 xen-devel@lists.xenproject.org
From: Shuah Khan <skhan@linuxfoundation.org>
Subject: Linux 5.11-rc6 compile error
To: Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <8a358ee4-56bc-8e64-3176-88fd0d66176f@linuxfoundation.org>
Date: Wed, 3 Feb 2021 12:58:45 -0700
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

I am seeing the following compile error on Linux 5.11-rc6.
No issues on 5.11.0-rc5 with the same config.

ld: arch/x86/built-in.a: member arch/x86/kernel/pci-swiotlb.o in archive 
is not an object
make: *** [Makefile:1170: vmlinux] Error 1

CONFIG_SWIOTLB_XEN=y
CONFIG_SWIOTLB=y

I can debug further later on today. Checking to see if there are any
known problems.

thanks,
-- Shuah



From xen-devel-bounces@lists.xenproject.org Wed Feb 03 20:48:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 20:48:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81058.149066 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7P4Q-0004UG-NN; Wed, 03 Feb 2021 20:48:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81058.149066; Wed, 03 Feb 2021 20:48:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7P4Q-0004U9-KE; Wed, 03 Feb 2021 20:48:14 +0000
Received: by outflank-mailman (input) for mailman id 81058;
 Wed, 03 Feb 2021 20:48:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=c0lb=HF=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l7P4O-0004U4-UP
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 20:48:13 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 68e15af5-dc03-45f9-b444-ecf44685bdf1;
 Wed, 03 Feb 2021 20:48:11 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 68e15af5-dc03-45f9-b444-ecf44685bdf1
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612385291;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=hfvo2o5oAnfmFGbVgAajbGLECLlQJM7Qv3dgtr2MtPI=;
  b=W99/6qyQklcEw9Nlr0z0Ow/jGkmI31/kmvzPPLTHaF1mkiBo4Kj99vUN
   Vhb23asivymqHmfmiTKZxdL5O+gOOK4xGioEF9iqhjzKxSs3WJGfSPPuI
   zCoqdrk8mosGNZDszAkHJupwvg2PBKwKQSvg7GZy91K5KtPlowPT60nzL
   g=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: J8W5GzqPsHr3Y97JsKDn4XoU0asK6UaQdJx3R/HEl69t6vF9NhijpyFW/6HKFGf+xoLUL5Iizg
 AnfShKY/nMZ7p/qpc13nJ+6mGpg3WnkvDSm1d8Pyg4a/GJjIXkexzIBmESUuddYqPmZco7av/M
 tz8WxeTZ/B23cAw/dmOi1rPClQYic4VaX5TY/NUwfLQlX5Mg0hbea/BZPDrTC7uteAvEBfvdEg
 kF7bafdV6hhhmzL+u5gbyGr905i6BsRhP0oN82EehG8hcaagGR3GOYz+/5mOsF6sH3axaY+nxn
 BPI=
X-SBRS: 5.2
X-MesageID: 36879825
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,399,1602561600"; 
   d="scan'208";a="36879825"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=a7IteZYoU7f9t+HoNzz7zglJq1LAg7hfUn48ReByZk9r/iBv9cz5fcYUIHAiBeMWduZSRi96D5QNbd9Ws6jQiVd9spgarQqr8op+KGUIn15SqtHWxq0VGM3pjI9+P/ntdo3fHqS7IzfkOdGuDLnsvMvIdPWvs5xzbrzLeNUc7QhErzmdYpUQufGEYqL9jJwUn+1APdfguAplbee0Vf2cOzY5F/e9FxyASeXlHLK6++WzhDGB5TvwvYZKWgOm+6+RcwYT6p9d7ygJ3d+BmXVEsHLXCeCJAk7mtuBlNqfhZvecXlgzrPioJQmOfXoZJTv2EmIIOZftclEYwkigQn+xKg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hfvo2o5oAnfmFGbVgAajbGLECLlQJM7Qv3dgtr2MtPI=;
 b=NXKYUdfF7PjD4x+iAtt4vsWqhw0mBxNEcQQL5LGDSvIczyloWgrb31bQ6SWd4HezhNI1rBFgiAzgix+5mbJD2OadJfIRVNmvQUiVK/d150oqfHenFSjoFjy6NQyEt2YnatQDMHm/k8FQIf1IPUPV63/6xzF+QRwaTVpt1k0vrZEy1KcRQr/ycCUq3gzGraPIK39Ka976oAZfp2K46ok2obvuCqOHwJNAySujn1ClDapA2TVL2bj1mWc5srfDj7Hw8ThuJKhxB5j7vEnJVz+dHBIEC05I+lca9supAynrbgJlf8ytYF/lulCm8hX6YA+oX9WSZs0Q0trZ9UYgRQLXIw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hfvo2o5oAnfmFGbVgAajbGLECLlQJM7Qv3dgtr2MtPI=;
 b=cB5DeKgTUHygCqXK0G/jsexnynUigX4MwnYTy0uvmofWLTVP+vCL7zMQLpmzbYler9mbIaBFXVfY8hgVgkC/IZpB0VSnVr3NRIN5PEPqCSv1kNji+R7YAzXCnw4xbLzRu3SGRtOJntHFoz22SvConY5RH52g30tjXbju5T2hnRc=
Subject: Re: XSA-332 kernel patch - huge network performance on pfSense VMs
To: Samuel Verschelde <samuel.verschelde@vates.fr>,
	<xen-devel@lists.xenproject.org>
CC: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Juergen Gross
	<jgross@suse.com>, Ian Jackson <iwj@xenproject.org>
References: <dc5d60d7-1ada-5eb1-ff91-495d663ca89e@vates.fr>
 <20210118100340.6vryyk52f5pyxgwv@Air-de-Roger>
 <48ac8598-1799-3b80-73c0-210076639fbc@vates.fr>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <0e7b6c47-2fb3-2231-9bcc-13a036dc0766@citrix.com>
Date: Wed, 3 Feb 2021 20:47:25 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
In-Reply-To: <48ac8598-1799-3b80-73c0-210076639fbc@vates.fr>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0410.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:189::19) To BYAPR03MB4728.namprd03.prod.outlook.com
 (2603:10b6:a03:13a::24)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 40ff521e-502d-4873-33fb-08d8c884ed25
X-MS-TrafficTypeDiagnostic: BY5PR03MB5218:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BY5PR03MB52184E48EE1834E749693E0ABAB49@BY5PR03MB5218.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: fhyMNvQ8smFP5MdYB7jBOrMOfTbtlJPfYnpT6b7v5B4i6OIAQP44aJqKU9uAseNY2bSoySA6pRpx0UjiV+oNf90r7j3LbRd+kxoytEI2die9RNDakKd8SvOfH1zAdsGwbgSQFS/NFd6QUfOKr0g/AbtbWanXbOx3N+A6eIH5pYwyAZ0GFvPqb1SJO9WYYVIn+maVcGx8adDHcmE8oPKvdcUpp6RJfk41Z7JTHOfCbcX3FIP+7SDE4QOCnRfpbzCK1H90BjasyEGg2NFElJlRKGCWG5LAIyy2oeUirdTbsblf35ZcrWzbQK7Mo4qztBZeLS3SCrdueyjIEIprUQ6UIqAGeDtLJ5oWL4Vf5wm/MgCz3KYhefKfMeosYZa7BT7fqq0x8ch2U8qqtfjEjShj+wuCMs71BpZNM2C3dTksMZfmCQnEYXeowmVVln8EmK5suV8382kjGRHOWjaGBbFaZdsSoD7Dqc2p9tLVDuMvkl8LhR9Ccq5mEaoE4F0aL4H9QXlqr35CYw1WLwI9rFJjS/W5ZnQAKs/G8//zeaHOb67Y1KnF+SbPbxvdcPHR3tqBeH+mXOMpA/n1nyW48k56dhtyozECPbhIQ+78M1lq1+umAzQsWZKL0YQcQnzACPz6MsPUkWIxoGnhj+EwrurwdccGaDoanT1Q9JngQrbLerQoRVEzCW78Ur2+qcU9SnEzOt8Cz3HBBJi8V03809prPQ==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB4728.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(346002)(136003)(366004)(396003)(39860400002)(31696002)(66556008)(83380400001)(5660300002)(8936002)(86362001)(66574015)(2906002)(36756003)(31686004)(66476007)(186003)(16526019)(54906003)(6486002)(956004)(16576012)(66946007)(6666004)(8676002)(966005)(478600001)(4326008)(26005)(53546011)(2616005)(316002)(14583001)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?bU54YTBmZXhhYzU1T0VIak9QRHZJTUluNXUrT3BJVE5yUGxRR0EybHpyemJY?=
 =?utf-8?B?N09xa3FZK1VhbFRRUUszbWhNWkZDZWFTdDU4M3BoZHd4NEFSbFc4Um1WT1pK?=
 =?utf-8?B?ajNJbTBwNmZqYTRnUTVZM3NUeGhaTDdCVUx3V25jTTh2MUFXRjlPMjJJbURF?=
 =?utf-8?B?WTNEYkdIOHRQajY4WW5sZ2t4OXJic25Ia3pmRkora2E4K3paSmZhUlZUaVVx?=
 =?utf-8?B?cHlrQWJxVStQblFuSEFnS3N2bUVzZnFUMGtTYjJ0V3p0R1p2MStnQUhzTitt?=
 =?utf-8?B?TGJuRXgvZHMwbUoxdDhlNGhxc1pjNkszbHBIOG9XNEdCTnkvc3NMVlF3VmhI?=
 =?utf-8?B?UkJnTlFpY1FwN3UyNi95UnVxUUZTRG1KTnljemN4TUpERG9HS3B2RFp2cW5x?=
 =?utf-8?B?eTZITGZrbnpZRy9ySzFkcVdpdzlaWWFKTG82K2RubGtDNDBNaWZLQVIwRmtk?=
 =?utf-8?B?bnpCTWt0YzdmRDZBdkdXYzRvMW13dWNrTHdkZzFUV08zclN2cTU1WHFRWS8y?=
 =?utf-8?B?eW1yei9CVFhBRzVLbXFoM2VTMklScXQ3aUtpc3Avbldxa1JTWkNNd1grTlMx?=
 =?utf-8?B?RnRlTXpzMkJnYjJjd25vWjFuVWFlTzVMcGNaWWxVZ0o5aW9pNUlQRE50ZFRV?=
 =?utf-8?B?ME8vL1JPS3M3Y1NhZVVNZk5mT1pvYms4akZvdEpSRXRDQkdCVmlyRFdMYzda?=
 =?utf-8?B?ckhhUnVoWHJoQ0JDMTYybC9wOS9vYnJmU0R5aFQxdDVuRGFRc1d5aVVndmZZ?=
 =?utf-8?B?QytmYkF0WjB6Y2p6d1gyNCtJUG5yd3NNWThpOFhUR0c2b0RQOXNCR2l0cmtN?=
 =?utf-8?B?YktkaDRJelJDWG9HK2I0V2pjNEUreEJsQWpueHhNYjFna051ak5pRlZ4OFVx?=
 =?utf-8?B?Sng3cnMyRTVjbHJSeTZJSmRWWlR5MEJKS0toN2pQZFJqaXFjVHJKZlFIenNQ?=
 =?utf-8?B?MTNDZjlKOEhCNVA5ZDQ5UjlaaU9iNktla3dRUUdzblJXVm5qaDBBekFkWis3?=
 =?utf-8?B?QXBEUzZoaHBHUGVDeGVLVzFnbFpObExUdGNla1FWcXFGdWhqLzdwV2lRd0o2?=
 =?utf-8?B?ZXhqUHF4Slp1eFYyNzZFcUZLeEdiL29DZ1RlOTh0WUxueXBVUXM1c2VPMEtj?=
 =?utf-8?B?U2krWElwYWQvMlJRRnJVdHlCS1BETGtGZ1FESmo4RThDUXRjUlRMZm1SZG8x?=
 =?utf-8?B?UVJMaVlxc2lFTkgrTm8zMzQ5UU1CZ3FxQi9xTTQzM1FIV293ai9BOUdwYnhV?=
 =?utf-8?B?bENEQ3BRM0x3a0FhNEN5NnU0cnRlL2Q1QkltWSttL2FtOFg4cHg2QldVMS83?=
 =?utf-8?B?eUZpNG1kSDh3MlFMWmp0cENzZVAyQkdIYmlnejE2R2tEQWUyYWUza2wzdXRw?=
 =?utf-8?B?dWh4QzFWTm9BejROSXpYbjFtenFqNDh1c2pnMU56RWowWXR3SUgrRUp2L3pa?=
 =?utf-8?Q?4Q5VjwC9?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 40ff521e-502d-4873-33fb-08d8c884ed25
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB4728.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Feb 2021 20:47:31.5390
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: XxTiVOWS/uIhwnYxs+VH9/wt4E4BpwIpc2kuYhINo9k5iDU/vqqowc4iYAJGLFEtFKEEfhQ0KtoLdXe2n+Q/LuLmOU9FAhR042iD4PeJdI8=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR03MB5218
X-OriginatorOrg: citrix.com

On 26/01/2021 15:04, Samuel Verschelde wrote:
> On 18/01/2021 at 11:03, Roger Pau Monné wrote:
>> On Fri, Jan 15, 2021 at 03:03:26PM +0000, Samuel Verschelde wrote:
>>> Hi list,
>>>
>>> Another "popular" thread on the XCP-ng forum [1], started in October
>>> 2020, allowed us to detect that patch 12 from the XSA-332 advisory [2]
>>> had a very significant impact on network performance in the case of
>>> pfSense VMs.
>>>
>>> We reproduced the issue internally (well, we reproduced "something";
>>> the user setups in this thread are diverse) and our findings seem to
>>> confirm what the users reported. Running iperf3 from the pfSense VM to
>>> a Debian VM gives results around 5 times slower than before. Reverting
>>> this single patch brings the performance back. In the Debian-to-pfSense
>>> direction, the drop is about 25%.
>>
>> pfSense is based on FreeBSD, so I would bet that whatever performance
>> degradation you are seeing would also happen with plain FreeBSD. I
>> would assume netfront in FreeBSD is triggering the ratelimit on Linux,
>> and hence it gets throttled.
>>
>> Do you think you have the bandwidth to look into the FreeBSD side and
>> try to provide a fix? I'm happy to review and commit in upstream
>> FreeBSD, but it would be nice to have someone else in the loop too, as
>> ATM I'm the only one doing FreeBSD/Xen development AFAIK.
>>
>> Thanks, Roger.
>>
>
> (sorry about the previous email, looks like my mail client hates me)
>
> I would personally not be able to hack on Xen, the Linux kernel or
> FreeBSD in any efficient way. My role here is limited to packaging,
> testing and acting as a relay between users and developers. We
> currently don't have anyone at Vates who could hack on FreeBSD either.
>
> What currently puts FreeBSD on our radar is the large number of users
> who run FreeNAS/TrueNAS or pfSense VMs, and the recent bugs those
> users helped detect (XSA-360 and this performance drop).
>
> Additionally, regarding this performance issue, some users report an
> impact of that same patch 12 on the network performance of their
> non-BSD VMs [1][2]. So the FreeBSD case, being easier to reproduce,
> may help identify what in that patch caused the throttling (if that's
> what happens), but I'm not sure fixes would only be needed in FreeBSD.
>
> Best regards,
>
> Samuel Verschelde
>
> [1] https://xcp-ng.org/forum/post/35521 mentions Debian-based Untangle
> OS and inter-VLAN traffic
> [2] https://xcp-ng.org/forum/post/35476 general slowdown affecting all
> VMs (VM-to-workstation traffic), from the first user who identified
> patch 12 as the cause.

Further to this, XenServer testing has also observed a ~5x drop in
intra-host VM->VM network performance between PV VMs running under
PV-Shim.

As one specific case has been bisected to patch 11, it's obvious that
FreeBSD's netfront is hitting dom0's new spurious-event detection and
mitigation.  It is also reasonable to presume that the other ~5x hits
are related, which means the behaviour isn't unique to FreeBSD's
netfront.

The next step is to figure out whether the event is genuinely spurious
(i.e. the frontend really is sending too many notifications), or whether
dom0's judgement of spuriosity is wrong.

~Andrew
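[Editorial note: the throttling Andrew describes can be modelled with a toy
sketch. All names, thresholds and the delay-doubling policy below are
invented for illustration; the real logic lives in Linux's event-channel
code and differs in detail. The idea: a notification that arrives with no
work pending counts as spurious, and repeated spurious notifications earn
the channel a growing EOI deferral, which throttles the sender.]

```python
# Toy model of per-event-channel spurious-event throttling.
# All names and numbers are illustrative, NOT the actual Linux/Xen code.

SPURIOUS_THRESHOLD = 2   # spurious notifications tolerated before throttling
MAX_DELAY_MS = 1000      # cap on the EOI deferral

class Channel:
    def __init__(self):
        self.spurious_cnt = 0
        self.eoi_delay_ms = 0

    def event(self, had_work: bool) -> int:
        """Process one notification; return the EOI delay applied (ms)."""
        if had_work:
            # Genuine event: forget past spuriousness, ack immediately.
            self.spurious_cnt = 0
            self.eoi_delay_ms = 0
            return 0
        self.spurious_cnt += 1
        if self.spurious_cnt <= SPURIOUS_THRESHOLD:
            return 0
        # Escalate: double the deferral for each further spurious event.
        self.eoi_delay_ms = min(MAX_DELAY_MS, max(1, self.eoi_delay_ms * 2))
        return self.eoi_delay_ms
```

Under such a heuristic, a frontend that legitimately notifies faster than
the backend drains the ring still looks "spurious", which would match the
observed throughput collapse without any bug in netfront itself.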


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 20:55:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 20:55:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81063.149081 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7PBf-0005aR-Nz; Wed, 03 Feb 2021 20:55:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81063.149081; Wed, 03 Feb 2021 20:55:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7PBf-0005aK-Kg; Wed, 03 Feb 2021 20:55:43 +0000
Received: by outflank-mailman (input) for mailman id 81063;
 Wed, 03 Feb 2021 20:55:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qtc=HF=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1l7PBe-0005aF-P9
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 20:55:42 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id efc4fdcd-4f7d-4e4f-bfe5-22d3a2af4533;
 Wed, 03 Feb 2021 20:55:42 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id F005164F72;
 Wed,  3 Feb 2021 20:55:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: efc4fdcd-4f7d-4e4f-bfe5-22d3a2af4533
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1612385741;
	bh=gV+nMwrCxAT9PKJ9Gxqe+CL4rmTL/GrD7qCOU20yzew=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=nXu6DY1qUbf5gBh1TNqeodpecWn4LnK9Ft1DK673C9GhzUgoMkeGw/o+e4hHOfuXr
	 /1KhDtqwLnvWNA/C5lrchWNtC7SH2Jp8U9xA60EmGm5d2IKfDwPW+sY81rIrBgC/mo
	 uCJg2XZS4lhrEAl/aJpgnGuksQJJIEUPnAP3Ka6uBHyH1D1rkIdoYKj2ejsc3xgPlX
	 1KQl+X1g6l1S4nZ9xpTcyUzsS0GqLsFWEci/f5hO4ogtvY1f/FWRCjD5RLVgPn8FBp
	 Ey4SjSxLlu22clz0Di/pqbFzW/dHXF7g46yKC/zjgKSwEXSXQ3lDlPVidM5aeCgC7e
	 7mqZEwm8iNvVg==
Date: Wed, 3 Feb 2021 12:55:40 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Jukka Kaartinen <jukka.kaartinen@unikie.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Roman Shaposhnik <roman@zededa.com>, Julien Grall <julien@xen.org>, 
    Xen-devel <xen-devel@lists.xenproject.org>, 
    Bertrand Marquis <Bertrand.Marquis@arm.com>
Subject: Re: Question about xen and Rasp 4B
In-Reply-To: <3c98d8d0-ca4e-b177-1e2b-5f3eb454722d@unikie.com>
Message-ID: <alpine.DEB.2.21.2102031249090.29047@sstabellini-ThinkPad-T480s>
References: <CAFnJQOouOUox_kBB66sfSuriUiUSvmhSjPxucJjgrB9DdLOwkg@mail.gmail.com> <alpine.DEB.2.21.2101221449480.14612@sstabellini-ThinkPad-T480s> <CAFnJQOqoqj6mWwR61ZsZj1JxRrdisFtH_87YXCeW619GM+L21Q@mail.gmail.com> <alpine.DEB.2.21.2101251646470.20638@sstabellini-ThinkPad-T480s>
 <CAFnJQOpuehAWde5Ta4ud9CGufwZ-K+=60epzSdKc_DnS75O2iA@mail.gmail.com> <alpine.DEB.2.21.2101261149210.2568@sstabellini-ThinkPad-T480s> <CAFnJQOpgRM-3_aZsnv36w+aQV=gMcBA18ZEw_-man7zmYb4O4Q@mail.gmail.com> <5a33e663-4a6d-6247-769a-8f14db4810f2@xen.org>
 <b9247831-335a-f791-1664-abed6b400a42@unikie.com> <CAMmSBy-54qtu_oVVT=KB8GeKP0SW0uK+4wQ_LooHE0y_MZKJQg@mail.gmail.com> <3ec2b0cb-3685-384e-94df-28eaf8b57c42@unikie.com> <alpine.DEB.2.21.2102021552380.29047@sstabellini-ThinkPad-T480s>
 <3c98d8d0-ca4e-b177-1e2b-5f3eb454722d@unikie.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 3 Feb 2021, Jukka Kaartinen wrote:
> On 3.2.2021 2.18, Stefano Stabellini wrote:
> > On Tue, 2 Feb 2021, Jukka Kaartinen wrote:
> > > > > Good catch.
> > > > > GPU works now and I can start X! Thanks! I was also able to create
> > > > > a domU that runs Raspbian OS.
> > > > 
> > > > This is very interesting that you were able to achieve that - congrats!
> > > > 
> > > > Now, sorry to be a bit dense -- but since this thread went into all
> > > > sorts of interesting directions at once -- I just have one very
> > > > specific question: what is the exact combination of Xen version,
> > > > Linux kernel version, and set of patches on top that allowed you to
> > > > do that? I'd obviously love to see it productized in upstream Xen,
> > > > but for now I'd love to make it available to the Project EVE/Xen
> > > > community, since there seem to be a few folks interested in the
> > > > EVE/Xen combo being able to do that.
> > > 
> > > I have tried Xen Release 4.14.0, 4.14.1 and master (from week 4, 2021).
> > > 
> > > Kernel rpi-5.9.y and rpi-5.10.y branches from
> > > https://github.com/raspberrypi/linux
> > > 
> > > and
> > > 
> > > U-boot (master).
> > > 
> > > For the GPU to work it was enough to disable swiotlb from the kernel(s) as
> > > suggested in this thread.
> > 
> > How are you configuring and installing the kernel?
> > 
> > make bcm2711_defconfig
> > make Image.gz
> > make modules_install
> > 
> > ?
> > 
> > The device tree is the one from the rpi-5.9.y build? How are you
> > loading the kernel and device tree with U-Boot? Do you have any
> > interesting changes to config.txt?
> > 
> > I am asking because I cannot reproduce what you are seeing: I can boot
> > my rpi-5.9.y kernel on recent Xen, but I cannot get any graphics
> > output on my screen. (The serial works.) I am using the default
> > Ubuntu Desktop rpi-install target as rootfs and U-Boot master.
> > 
> 
> This is what I do:
> 
> make bcm2711_defconfig
> cat "xen_additions" >> .config
> make Image  modules dtbs
> 
> make INSTALL_MOD_PATH=rootfs modules_install
> depmod -a
> 
> cp arch/arm64/boot/dts/broadcom/bcm2711-rpi-4-b.dtb boot/
> cp arch/arm64/boot/dts/overlays/*.dtbo boot/overlays/

Thanks for the detailed instructions, this helps a lot. I saw below in
boot2.source that you pass ${fdt_addr} as the DTB address instead of
loading a file, which means you are using the DTB that U-Boot provides
at runtime.

With those two cp commands, I take it you meant to update the first
partition on the SD card, the one where config.txt lives, right? So Xen
is getting the DTB and overlays from the rpi-5.9.y kernel tree, but
passed down by the RPi firmware loader and U-Boot?

I think the DTB must be the issue, as I wasn't applying any overlays
before. I ran a test using the DTB and overlays from U-Boot, but maybe I
haven't updated them properly, because I still don't see any output.


> config.txt:
> 
> [pi4]
> max_framebuffers=2
> enable_uart=1
> arm_freq=1500
> force_turbo=1
> 
> [all]
> arm_64bit=1
> kernel=u-boot.bin
> 
> start_file=start4.elf
> fixup_file=fixup4.dat
> 
> # Enable the audio output, I2C and SPI interfaces on the GPIO header
> dtparam=audio=on
> dtparam=i2c_arm=on
> dtparam=spi=on
> 
> # Enable the FKMS ("Fake" KMS) graphics overlay, enable the camera firmware
> # and allocate 128MB to the GPU memory
> dtoverlay=vc4-fkms-v3d,cma-64
> gpu_mem=128
> 
> # Comment out the following line if the edges of the desktop appear outside
> # the edges of your display
> disable_overscan=1
> 
> 
> boot.source:
> setenv serverip 10.42.0.1
> setenv ipaddr 10.42.0.231
> tftpb 0xC00000 boot2.scr
> source 0xC00000
> 
> boot2.source:
> tftpb 0xE00000 xen
> tftpb 0x1000000 Image
> setenv lin_size $filesize
> 
> fdt addr ${fdt_addr}
> fdt resize 1024
> 
> fdt set /chosen xen,xen-bootargs "console=dtuart dtuart=serial0 sync_console
> dom0_mem=1024M dom0_max_vcpus=1 bootscrub=0 vwfi=native sched=credit2"
> 
> fdt mknod /chosen dom0
> 
> # These would break the default framebuffer@3e2fe000 node that
> # lives under the same /chosen node.
> #fdt set /chosen/dom0 \#address-cells <0x2>
> #fdt set /chosen/dom0 \#size-cells <0x2>
> 
> fdt set /chosen/dom0 compatible "xen,linux-zimage" "xen,multiboot-module"
> fdt set /chosen/dom0 reg <0x1000000 0x${lin_size}>
> 
> fdt set /chosen xen,dom0-bootargs "dwc_otg.lpm_enable=0 console=hvc0
> earlycon=xen earlyprintk=xen root=/dev/sda4 elevator=deadline rootwait fixrtc
> quiet splash"
> 
> setenv fdt_high 0xffffffffffffffff
> 
> fdt print /chosen
> 
> #xen
> booti 0xE00000 - ${fdt_addr}
> 


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 21:06:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 21:06:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81065.149092 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7PLl-0006jQ-NK; Wed, 03 Feb 2021 21:06:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81065.149092; Wed, 03 Feb 2021 21:06:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7PLl-0006jJ-KC; Wed, 03 Feb 2021 21:06:09 +0000
Received: by outflank-mailman (input) for mailman id 81065;
 Wed, 03 Feb 2021 21:06:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DaK9=HF=infradead.org=rdunlap@srs-us1.protection.inumbo.net>)
 id 1l7PLj-0006jC-JN
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 21:06:08 +0000
Received: from merlin.infradead.org (unknown [2001:8b0:10b:1231::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bb56a75f-4fe9-484c-a239-a35ce530000e;
 Wed, 03 Feb 2021 21:06:03 +0000 (UTC)
Received: from [2601:1c0:6280:3f0::aec2]
 by merlin.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1l7PLd-00070g-Ac; Wed, 03 Feb 2021 21:06:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bb56a75f-4fe9-484c-a239-a35ce530000e
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=merlin.20170209; h=Content-Transfer-Encoding:Content-Type:
	In-Reply-To:MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender
	:Reply-To:Content-ID:Content-Description;
	bh=oZBuJOenngoomghCK7kOaxz4eUOh9dK/zu7ReamXwRw=; b=WqTbYln7KEBuj5REqZLx/+qn5d
	EYp78dgxS8hJpncZvcCNQTtQdv02rKO9IBx+9gRZYRhtT+bYvKOTs6cpoaCedKZx/iSLPMzB+2O23
	4UY+J5SOwCzRQI2dXS5H6uZL2Om85x4T9ZgWmS0Coz4UV99NMucKEulkug+oTUafI8TTc8RxDh5mc
	HQpYu/MxNl2VpX03uS1OCZRVBb1PAX+mXYmqdpjfOcSiRL4Ln73mKJwpgSgRcMD1k3LpcA+jmzyiw
	naDL2kmKVUvOYTSx1vz0ReG/nOlpwiTDbe49f+5gX4FuKjMemY5tpGHeKQoIZ8VIEke+vJ+QScyp7
	fI+bclfw==;
Subject: Re: Linux 5.11-rc6 compile error
To: Shuah Khan <skhan@linuxfoundation.org>,
 Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>,
 xen-devel@lists.xenproject.org
References: <8a358ee4-56bc-8e64-3176-88fd0d66176f@linuxfoundation.org>
From: Randy Dunlap <rdunlap@infradead.org>
Message-ID: <cbaa9611-ad63-94c3-5205-c8e28a3211d5@infradead.org>
Date: Wed, 3 Feb 2021 13:05:57 -0800
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <8a358ee4-56bc-8e64-3176-88fd0d66176f@linuxfoundation.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 2/3/21 11:58 AM, Shuah Khan wrote:
> I am seeing the following compile error on Linux 5.11-rc6.
> No issues on 5.11.0-rc5 with the same config.
> 
> ld: arch/x86/built-in.a: member arch/x86/kernel/pci-swiotlb.o in archive is not an object
> make: *** [Makefile:1170: vmlinux] Error 1
> 
> CONFIG_SWIOTLB_XEN=y
> CONFIG_SWIOTLB=y

An allmodconfig build with those config settings works for me.


> I can debug further later on today. Checking to see if there are any
> known problems.


-- 
~Randy
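[Editorial note: "member ... in archive is not an object" means one member
of arch/x86/built-in.a is not a valid object file -- typically a truncated
or stale .o left by an interrupted or corrupted build, which a clean
rebuild of the affected directory usually clears. As an illustration only
(a hypothetical helper, not an actual kernel tool), such a member can be
spotted by checking each archive entry's magic bytes:]

```python
# Hypothetical checker: scan a classic "ar" archive and report members
# whose payload does not start with the ELF magic, i.e. members that the
# linker would reject as "not an object".

AR_MAGIC = b"!<arch>\n"
ELF_MAGIC = b"\x7fELF"

def bad_members(data: bytes) -> list:
    """Return names of archive members that do not look like ELF objects."""
    if not data.startswith(AR_MAGIC):
        raise ValueError("not an ar archive")
    bad, off = [], len(AR_MAGIC)
    while off + 60 <= len(data):
        hdr = data[off:off + 60]           # fixed-size ASCII member header
        name = hdr[0:16].decode().strip()
        size = int(hdr[48:58].decode().strip())
        payload = data[off + 60:off + 60 + size]
        # Skip the symbol table ("/") and extended-name table ("//").
        if name not in ("/", "//") and not payload.startswith(ELF_MAGIC):
            bad.append(name.rstrip("/"))
        off += 60 + size + (size & 1)      # member data is 2-byte aligned
    return bad
```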



From xen-devel-bounces@lists.xenproject.org Wed Feb 03 21:47:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 21:47:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81067.149105 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7PzY-0002RN-Na; Wed, 03 Feb 2021 21:47:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81067.149105; Wed, 03 Feb 2021 21:47:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7PzY-0002RG-KK; Wed, 03 Feb 2021 21:47:16 +0000
Received: by outflank-mailman (input) for mailman id 81067;
 Wed, 03 Feb 2021 21:47:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7PzX-0002R8-VY; Wed, 03 Feb 2021 21:47:16 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7PzX-0000pw-P2; Wed, 03 Feb 2021 21:47:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7PzX-0002iE-E3; Wed, 03 Feb 2021 21:47:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l7PzX-0007Ux-DP; Wed, 03 Feb 2021 21:47:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=3qcL2/W4NPoibyah8M3T1MUmh2Ma5OBA51eO8QchKhI=; b=LCtumLPZIyefBnuydz8Ab6UK7q
	844+wb5DX+r3WNDX3FhGQUMssRIFwSlhux3hZ4JlsXXZhyveOFGshPX1z0CDS/sPgVWXqEtltrt+R
	S7lliTAXRYxqZZ//pT9r7LejM7aodF4jdcuP+Flb6SBNZIX1iyceyuv9BVlwqsDn70Q8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158977-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 158977: trouble: broken/fail/pass
X-Osstest-Failures:
    xen-unstable:test-arm64-arm64-xl-credit1:<job status>:broken:regression
    xen-unstable:test-arm64-arm64-xl-seattle:<job status>:broken:regression
    xen-unstable:test-arm64-arm64-xl-xsm:<job status>:broken:regression
    xen-unstable:test-arm64-arm64-xl-seattle:host-install(5):broken:heisenbug
    xen-unstable:test-arm64-arm64-xl-xsm:host-install(5):broken:heisenbug
    xen-unstable:test-arm64-arm64-xl-credit1:host-install(5):broken:heisenbug
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:heisenbug
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:guest-start:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=5e7aa904405fa2f268c3af213516bae271de3265
X-Osstest-Versions-That:
    xen=5e7aa904405fa2f268c3af213516bae271de3265
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 03 Feb 2021 21:47:15 +0000

flight 158977 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158977/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-credit1     <job status>                 broken
 test-arm64-arm64-xl-seattle     <job status>                 broken
 test-arm64-arm64-xl-xsm         <job status>                 broken

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-seattle   5 host-install(5)          broken pass in 158957
 test-arm64-arm64-xl-xsm       5 host-install(5)          broken pass in 158957
 test-arm64-arm64-xl-credit1   5 host-install(5)          broken pass in 158957
 test-armhf-armhf-xl-rtds 18 guest-start/debian.repeat fail in 158957 pass in 158977
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail pass in 158957

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-seattle 15 migrate-support-check fail in 158957 never pass
 test-arm64-arm64-xl-seattle 16 saverestore-support-check fail in 158957 never pass
 test-arm64-arm64-xl-xsm     15 migrate-support-check fail in 158957 never pass
 test-arm64-arm64-xl-xsm 16 saverestore-support-check fail in 158957 never pass
 test-arm64-arm64-xl-credit1 15 migrate-support-check fail in 158957 never pass
 test-arm64-arm64-xl-credit1 16 saverestore-support-check fail in 158957 never pass
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 158957
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 158957
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 158957
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 158957
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 158957
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 158957
 test-armhf-armhf-xl-vhd      13 guest-start                  fail  like 158957
 test-armhf-armhf-libvirt-raw 13 guest-start                  fail  like 158957
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 158957
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 158957
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 158957
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 158957
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  5e7aa904405fa2f268c3af213516bae271de3265
baseline version:
 xen                  5e7aa904405fa2f268c3af213516bae271de3265

Last test of basis   158977  2021-02-03 08:08:44 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      broken  
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  broken  
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  broken  
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-arm64-arm64-xl-credit1 broken
broken-job test-arm64-arm64-xl-seattle broken
broken-job test-arm64-arm64-xl-xsm broken
broken-step test-arm64-arm64-xl-seattle host-install(5)
broken-step test-arm64-arm64-xl-xsm host-install(5)
broken-step test-arm64-arm64-xl-credit1 host-install(5)

Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Wed Feb 03 22:04:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 22:04:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81074.149126 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7QG4-0004Ya-DS; Wed, 03 Feb 2021 22:04:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81074.149126; Wed, 03 Feb 2021 22:04:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7QG4-0004YT-AV; Wed, 03 Feb 2021 22:04:20 +0000
Received: by outflank-mailman (input) for mailman id 81074;
 Wed, 03 Feb 2021 22:04:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mAwm=HF=linuxfoundation.org=skhan@srs-us1.protection.inumbo.net>)
 id 1l7QG2-0004YO-B2
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 22:04:18 +0000
Received: from mail-il1-x135.google.com (unknown [2607:f8b0:4864:20::135])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6cc6e749-c74e-43b5-b41c-1df236d64522;
 Wed, 03 Feb 2021 22:04:17 +0000 (UTC)
Received: by mail-il1-x135.google.com with SMTP id g7so743994iln.2
 for <xen-devel@lists.xenproject.org>; Wed, 03 Feb 2021 14:04:17 -0800 (PST)
Received: from [192.168.1.112] (c-24-9-64-241.hsd1.co.comcast.net.
 [24.9.64.241])
 by smtp.gmail.com with ESMTPSA id x1sm1530260ilj.61.2021.02.03.14.04.16
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 03 Feb 2021 14:04:16 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6cc6e749-c74e-43b5-b41c-1df236d64522
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linuxfoundation.org; s=google;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=TS3LrDoJycQjV29QvWKYiqYYnfjPrvlJnAgde44hakc=;
        b=JPfgN/cQckDAKBwY/QR67Thxr7yUmncDVdAEsxV3tW1GqAhZtNxf/7bZqZJVPEURM6
         gNf0epse2/nHHkPtKoQQWAnhZ3lTN5HVHAGrSumWmnDcgNuIZSHw05vo4N0d8nomOXBu
         KzI/w+18TtoWfFSVKCquRr94u7944j42U7Pv0=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=TS3LrDoJycQjV29QvWKYiqYYnfjPrvlJnAgde44hakc=;
        b=kob4VGDzwGNzM+BJ7Wr26tTsDkMLf/m0w1j1t5ioOynY0IdQxt3KFpL6OHNNu/8mWN
         762udUzjEu4j17A5m3bBjgVQ2Z8S2M1CMdTv1ixZYmA+kf6oAZTNV2SCL8QE41JdF/DM
         4cFkQ1AgK8xUZJMboGId8MNhAEYF8CM2+uBXw3BXWrR3UKrZvSXYSyCFRpNBv/T5pmZv
         zAQIE1ntFZBH0cej0UbfFKhgmSidAr9G8bCpRmIpokW0wnfiJOGWX88AuDLjF9xlgIZo
         V+pbOjZ4YI3liPTsz4co2NtFlja2oq5Cmhd5DQkwYzA4PboRfuLESzbYjTj5PL53snZL
         0Y/A==
X-Gm-Message-State: AOAM5314Sf01Hi6dai99zPqquhSkQPmHgFh5WyOugosILWB1+47v7JBk
	VkHjvxZiahhuyUqgngb4zxIPng==
X-Google-Smtp-Source: ABdhPJyvZIEoiVKF4EWNvGXWyeEJte1usTtNJqqzhvl2jiJPmeCqXs4hHX0a7+80gvQ0IkJH1UvQjw==
X-Received: by 2002:a05:6e02:2189:: with SMTP id j9mr4197510ila.98.1612389856937;
        Wed, 03 Feb 2021 14:04:16 -0800 (PST)
Subject: Re: Linux 5.11-rc6 compile error
To: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 xen-devel@lists.xenproject.org, Shuah Khan <skhan@linuxfoundation.org>
References: <8a358ee4-56bc-8e64-3176-88fd0d66176f@linuxfoundation.org>
 <CAHk-=wg983RfAiSSo4zLMADEfzLEuoBi+rye30Zrq7Bor8zg_Q@mail.gmail.com>
From: Shuah Khan <skhan@linuxfoundation.org>
Message-ID: <7e05fe6d-9bf3-7d8d-5cda-8f4745bb144d@linuxfoundation.org>
Date: Wed, 3 Feb 2021 15:04:15 -0700
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
MIME-Version: 1.0
In-Reply-To: <CAHk-=wg983RfAiSSo4zLMADEfzLEuoBi+rye30Zrq7Bor8zg_Q@mail.gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 2/3/21 1:06 PM, Linus Torvalds wrote:
> On Wed, Feb 3, 2021 at 11:58 AM Shuah Khan <skhan@linuxfoundation.org> wrote:
>>
>> ld: arch/x86/built-in.a: member arch/x86/kernel/pci-swiotlb.o in archive
>> is not an object
>> make: *** [Makefile:1170: vmlinux] Error 1
> 
> That honestly sounds like something went wrong earlier - things like
> doing a system upgrade in the middle of the build, or perhaps running
> out of disk space or similar.
> 
> I've not seen any other reports of the same, and google doesn't find
> anything like that either.
> 
> Does it keep happening if you do a "git clean -dqfx" to make sure you
> have no old corrupt object files around and re-do the whole build?
> 
>              Linus
> 

My bad. I was playing with two test systems this morning and totally
lost track. All is well after a "make clean" and make.

Sorry for the noise.

thanks,
-- Shuah


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 22:12:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 22:12:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81076.149137 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7QOH-0005eF-7t; Wed, 03 Feb 2021 22:12:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81076.149137; Wed, 03 Feb 2021 22:12:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7QOH-0005e8-54; Wed, 03 Feb 2021 22:12:49 +0000
Received: by outflank-mailman (input) for mailman id 81076;
 Wed, 03 Feb 2021 22:12:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=vpkB=HF=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1l7QOF-0005e2-88
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 22:12:47 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a677cac6-3a78-45b8-b6e9-97e4b4bffafe;
 Wed, 03 Feb 2021 22:12:43 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 113MCTuu030536
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Wed, 3 Feb 2021 17:12:35 -0500 (EST) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 113MCThF030535;
 Wed, 3 Feb 2021 14:12:29 -0800 (PST) (envelope-from ehem)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a677cac6-3a78-45b8-b6e9-97e4b4bffafe
Date: Wed, 3 Feb 2021 14:12:29 -0800
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Jukka Kaartinen <jukka.kaartinen@unikie.com>,
        Roman Shaposhnik <roman@zededa.com>, Julien Grall <julien@xen.org>,
        Xen-devel <xen-devel@lists.xenproject.org>,
        Bertrand Marquis <Bertrand.Marquis@arm.com>
Subject: Re: Question about xen and Rasp 4B
Message-ID: <YBsfzZ6fI40bXo7/@mattapan.m5p.com>
References: <CAFnJQOpuehAWde5Ta4ud9CGufwZ-K+=60epzSdKc_DnS75O2iA@mail.gmail.com>
 <alpine.DEB.2.21.2101261149210.2568@sstabellini-ThinkPad-T480s>
 <CAFnJQOpgRM-3_aZsnv36w+aQV=gMcBA18ZEw_-man7zmYb4O4Q@mail.gmail.com>
 <5a33e663-4a6d-6247-769a-8f14db4810f2@xen.org>
 <b9247831-335a-f791-1664-abed6b400a42@unikie.com>
 <CAMmSBy-54qtu_oVVT=KB8GeKP0SW0uK+4wQ_LooHE0y_MZKJQg@mail.gmail.com>
 <3ec2b0cb-3685-384e-94df-28eaf8b57c42@unikie.com>
 <alpine.DEB.2.21.2102021552380.29047@sstabellini-ThinkPad-T480s>
 <3c98d8d0-ca4e-b177-1e2b-5f3eb454722d@unikie.com>
 <alpine.DEB.2.21.2102031249090.29047@sstabellini-ThinkPad-T480s>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.21.2102031249090.29047@sstabellini-ThinkPad-T480s>
X-Spam-Status: No, score=0.4 required=10.0 tests=KHOP_HELO_FCRDNS autolearn=no
	autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

On Wed, Feb 03, 2021 at 12:55:40PM -0800, Stefano Stabellini wrote:
> On Wed, 3 Feb 2021, Jukka Kaartinen wrote:
> > On 3.2.2021 2.18, Stefano Stabellini wrote:
> > > How are you configuring and installing the kernel?
> > > 
> > > make bcm2711_defconfig
> > > make Image.gz
> > > make modules_install
> > > 
> > > ?
> > > 
> > > The device tree is the one from the rpi-5.9.y build? How are you loading
> > > the kernel and device tree with uboot? Do you have any interesting
> > > changes to config.txt?
> > > 
> > > I am asking because I cannot get to the point of reproducing what you
> > > are seeing: I can boot my rpi-5.9.y kernel on recent Xen but I cannot
> > > get any graphics output on my screen. (The serial works.) I am using the
> > > default Ubuntu Desktop rpi-install target as rootfs and uboot master.
> > > 
> > 
> > This is what I do:
> > 
> > make bcm2711_defconfig
> > cat "xen_additions" >> .config
> > make Image  modules dtbs
> > 
> > make INSTALL_MOD_PATH=rootfs modules_install
> > depmod -a
> > 
> > cp arch/arm64/boot/dts/broadcom/bcm2711-rpi-4-b.dtb boot/
> > cp arch/arm64/boot/dts/overlays/*.dtbo boot/overlays/
> 
> Thanks for the detailed instructions. This helps a lot. I saw below in
> boot2.source that you are using ${fdt_addr} as DTB source (instead of
> loading one), which means you are using the DTB as provided by U-Boot at
> runtime, instead of loading your own file.
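> 
For readers following along, the "${fdt_addr} as DTB source" approach mentioned above can be sketched as a minimal U-Boot fragment; the kernel file name and device numbers are illustrative assumptions, not taken from the actual boot2.source:

```shell
# Reuse the DTB that the RPi firmware already handed to U-Boot (${fdt_addr})
# instead of loading a .dtb file from the boot partition.
fdt addr ${fdt_addr}
fatload mmc 0:1 ${kernel_addr_r} Image   # kernel file name is an assumption
booti ${kernel_addr_r} - ${fdt_addr}
```

With this approach, any overlays applied by the RPi firmware (per config.txt) are preserved in the DTB that the kernel, or Xen, eventually sees.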
> 
> With these two copies, I take it you meant to update the first partition on
> the SD card, the one where config.txt lives, right? So that Xen is
> getting the DTB and overlays from the rpi-5.9.y kernel tree but passed
> down by the RPi loader and U-Boot?
> 
> I think the DTB must be the issue as I wasn't applying any overlays
> before. I ran a test to use the DTB and overlay from U-Boot but maybe I
> haven't updated them properly because I still don't see any output.

Seeing no graphics output from U-Boot is okay.  If the device-tree files
get sufficiently updated you can end up with no output from U-Boot, but
will get output once the Linux kernel's driver is operational (I've seen
this occur).

The most important part is having an HDMI display plugged in during the
early boot stages.  Unless the bootloader sees the display, the output
won't get initialized, and the Linux driver doesn't handle that.


> > dtoverlay=vc4-fkms-v3d,cma-64

This is odd.  My understanding is this is appropriate for RP3, but not
RP4.  For RP4 you can have "dtoverlay=disable-vc4" and still get graphics
output (hmm, I'm starting to think I need to double-check this...).


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Wed Feb 03 22:18:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 22:18:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81078.149150 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7QU6-0005qu-UC; Wed, 03 Feb 2021 22:18:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81078.149150; Wed, 03 Feb 2021 22:18:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7QU6-0005qn-Qa; Wed, 03 Feb 2021 22:18:50 +0000
Received: by outflank-mailman (input) for mailman id 81078;
 Wed, 03 Feb 2021 22:18:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qtc=HF=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1l7QU6-0005qi-95
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 22:18:50 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 339fe853-52fc-4719-a9e8-a4bc45d04afd;
 Wed, 03 Feb 2021 22:18:49 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id EBC2664DF0;
 Wed,  3 Feb 2021 22:18:47 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 339fe853-52fc-4719-a9e8-a4bc45d04afd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1612390728;
	bh=yyIRAdp9HTCLKYMLp8vzMTGuEyMMqbW0akmu5TfQk9E=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=jdb28UDAbopek7XmGT5z1jxAQOX0/dYcCOROTXdMftB/hM4+mDKx+nSqEWJQe4lSQ
	 kE+VIHIAUNZyl9WYGui+1ISgGAwhMAO06NLr/4k2pT9XEJUHhVHIV7PsUQq0jP43yO
	 bSB//6vTMv12D1qkPZSrKx7we63sjN6zqUehHMrl/HiMGhOWUug3Hzdfi7ikcJdF8V
	 cMj7rZ/O4DI/9DoFKLa0JDFrIbggHbPVG1/FW8MpOA7uS7FuamK8BEMUcSbDLSpXKt
	 kSgeonKaurg2PVcXE655IDjUA7Gq1bIOgOeiIdZ6zxb6hDqy2Dm90j3fCE1t7QnlfL
	 RcTs67gkCUkBg==
Date: Wed, 3 Feb 2021 14:18:47 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Elliott Mitchell <ehem+xen@m5p.com>, xen-devel@lists.xenproject.org, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: domain_build: Ignore device nodes with invalid
 addresses
In-Reply-To: <06d6b9ec-0db9-d6da-e30b-df9f9381157d@xen.org>
Message-ID: <alpine.DEB.2.21.2102031315350.29047@sstabellini-ThinkPad-T480s>
References: <YBmQQ3Tzu++AadKx@mattapan.m5p.com> <a422c04c-f908-6fb6-f2de-fea7b18a6e7d@xen.org> <b6d342f8-c833-db88-9808-cdc946999300@xen.org> <alpine.DEB.2.21.2102021412480.29047@sstabellini-ThinkPad-T480s> <06d6b9ec-0db9-d6da-e30b-df9f9381157d@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-1596983874-1612387817=:29047"
Content-ID: <alpine.DEB.2.21.2102031331460.29047@sstabellini-ThinkPad-T480s>


On Wed, 3 Feb 2021, Julien Grall wrote:
> On 03/02/2021 00:18, Stefano Stabellini wrote:
> > On Tue, 2 Feb 2021, Julien Grall wrote:
> > > On 02/02/2021 18:12, Julien Grall wrote:
> > > > On 02/02/2021 17:47, Elliott Mitchell wrote:
> > > > > The handle_device() function has been returning failure upon
> > > > > encountering a device address which was invalid.  A device tree which
> > > > > had such an entry has now been seen in the wild.  As it causes no
> > > > > failures to simply ignore the entries, ignore them.
> > > > >
> > > > > Signed-off-by: Elliott Mitchell <ehem+xenn@m5p.com>
> > > > > 
> > > > > ---
> > > > > I'm starting to suspect there are an awful lot of places in the various
> > > > > domain_build.c files which should simply ignore errors.  This is now the
> > > > > second place I've encountered in 2 months where ignoring errors was the
> > > > > correct action.
> > > > 
> > > > Right, as a counterpoint, we have run Xen on Arm HW for several years now
> > > > and this is the first time I have heard about issues parsing the DT. So
> > > > while I appreciate that you are eager to run Xen on the RPI...
> > > > 
> > > > > I know failing in case of error is an engineer's
> > > > > favorite approach, but there seem to be an awful lot of harmless failures
> > > > > causing panics.
> > > > > 
> > > > > This started as the thread "[RFC PATCH] xen/arm: domain_build: Ignore
> > > > > empty memory bank".  Now it seems clear the correct approach is to simply
> > > > > ignore these entries.
> > > > 
> > > > ... we first need to fully understand the issues. Here are a few questions:
> > > >     1) Can you provide more information why you believe the address is
> > > > invalid?
> > > >     2) How does Linux use the node?
> > > >     3) Is it happening with all the RPI DT? If not, what are the
> > > > differences?
> > > 
> > > So I had another look at the device-tree you provided earlier on. The node
> > > is the following (copied directly from the DTS):
> > > 
> > > &pcie0 {
> > >          pci@1,0 {
> > >                  #address-cells = <3>;
> > >                  #size-cells = <2>;
> > >                  ranges;
> > > 
> > >                  reg = <0 0 0 0 0>;
> > > 
> > >                  usb@1,0 {
> > >                          reg = <0x10000 0 0 0 0>;
> > >                          resets = <&reset RASPBERRYPI_FIRMWARE_RESET_ID_USB>;
> > >                  };
> > >          };
> > > };
> > > 
> > > pcie0: pcie@7d500000 {
> > >     compatible = "brcm,bcm2711-pcie";
> > >     reg = <0x0 0x7d500000  0x0 0x9310>;
> > >     device_type = "pci";
> > >     #address-cells = <3>;
> > >     #interrupt-cells = <1>;
> > >     #size-cells = <2>;
> > >     interrupts = <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>,
> > >                  <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>;
> > >     interrupt-names = "pcie", "msi";
> > >     interrupt-map-mask = <0x0 0x0 0x0 0x7>;
> > >     interrupt-map = <0 0 0 1 &gicv2 GIC_SPI 143 IRQ_TYPE_LEVEL_HIGH>;
> > >     msi-controller;
> > >     msi-parent = <&pcie0>;
> > > 
> > >     ranges = <0x02000000 0x0 0xc0000000 0x6 0x00000000
> > >               0x0 0x40000000>;
> > >     /*
> > >      * The wrapper around the PCIe block has a bug
> > >      * preventing it from accessing beyond the first 3GB of
> > >      * memory.
> > >      */
> > >     dma-ranges = <0x02000000 0x0 0x00000000 0x0 0x00000000
> > >                   0x0 0xc0000000>;
> > >     brcm,enable-ssc;
> > > };
> > > 
> > > The interpretation of "reg" depends on the context. In this case, we are
> > > trying to interpret it as a memory address from the CPU PoV when it has a
> > > different meaning (I am not exactly sure what).
> > > 
> > > In fact, you are lucky that Xen doesn't manage to interpret it. Xen should
> > > really stop trying to look for regions to map when it discovers a PCI bus.
> > > I wrote a quick hack patch that should ignore it:
> > 
> > Yes, I think you are right. There are a few instances where "reg" is not
> > an address ready to be remapped. It is not just PCI, although that's the
> > most common.  Maybe we need a list, like skip_matches in handle_node.
> 
> From my understanding, "reg" can be considered as an MMIO region only if all
> the *parents up to the root have the property "ranges" and they are not on a
> different bus (e.g. pci).
> 
> Do you have example where this is not the case?

The famous one is CPUs. These are some other examples I could find with
a quick search:

			nvmem_firmware {
				compatible = "xlnx,zynqmp-nvmem-fw";
				#address-cells = <0x1>;
				#size-cells = <0x1>;

				soc_revision@0 {
					reg = <0x0 0x4>;
				};
			};

		cci@fd6e0000 {
			compatible = "arm,cci-400";
			reg = <0x0 0xfd6e0000 0x0 0x9000>;
			ranges = <0x0 0x0 0xfd6e0000 0x10000>;
			#address-cells = <0x1>;
			#size-cells = <0x1>;

			pmu@9000 {
				compatible = "arm,cci-400-pmu,r1";
				reg = <0x9000 0x5000>;
				interrupt-parent = <0x4>;
				interrupts = <0x0 0x7b 0x4 0x0 0x7b 0x4 0x0 0x7b 0x4 0x0 0x7b 0x4 0x0 0x7b 0x4>;
			};
		};


		i2c@ff020000 {
			compatible = "cdns,i2c-r1p14";
			status = "okay";
			interrupt-parent = <0x4>;
			interrupts = <0x0 0x11 0x4>;
			reg = <0x0 0xff020000 0x0 0x1000>;
			#address-cells = <0x1>;
			#size-cells = <0x0>;
			clocks = <0x3 0x3d>;
			clock-frequency = <0x61a80>;

			gpio@20 {
				compatible = "ti,tca6416";
				reg = <0x20>;
			};
		};

		ethernet@ff0e0000 {
			compatible = "cdns,zynqmp-gem", "cdns,gem";
			status = "okay";
			interrupt-parent = <0x4>;
			interrupts = <0x0 0x3f 0x4 0x0 0x3f 0x4>;
			reg = <0x0 0xff0e0000 0x0 0x1000>;
			clock-names = "pclk", "hclk", "tx_clk";
			#address-cells = <0x1>;
			#size-cells = <0x0>;
			iommus = <0xd 0x877>;
			clocks = <0xc 0xc 0xc>;
			phy-handle = <0xe>;
			phy-mode = "rgmii-id";
			#stream-id-cells = <0x1>;
			phandle = <0x16>;

			ethernet-phy@c {
				reg = <0xc>;
				ti,rx-internal-delay = <0x8>;
				ti,tx-internal-delay = <0xa>;
				ti,fifo-depth = <0x1>;
				ti,dp83867-rxctrl-strap-quirk;
				phandle = <0xe>;
			};
		};


The rule that *parents up to the root have the property "ranges"* seems
to apply correctly to all these cases. Good.
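
As a rough illustration, the rule can be sketched as a short walk over the parents; this is a toy model for discussion, not Xen's actual struct dt_device_node or its helpers:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/*
 * Toy model of the rule above: a node's "reg" can be treated as a
 * CPU-mappable MMIO region only if every parent up to the root has a
 * "ranges" property and none of the parents is a different bus (e.g. PCI).
 * This is NOT Xen's struct dt_device_node; the field names are made up.
 */
struct toy_node {
    const char *device_type;       /* e.g. "pci", or NULL when absent */
    bool has_ranges;               /* does the node carry "ranges"? */
    const struct toy_node *parent;
};

static bool reg_is_cpu_mmio(const struct toy_node *node)
{
    const struct toy_node *p;

    for ( p = node->parent; p != NULL; p = p->parent )
    {
        if ( !p->has_ranges )
            return false;          /* no translation to the parent bus */
        if ( p->device_type != NULL && strcmp(p->device_type, "pci") == 0 )
            return false;          /* "reg" has PCI semantics, not MMIO */
    }

    return true;
}
```

On the toy RPi4 layout (root <- pcie0 <- pci@1,0 <- usb@1,0) this rejects the usb node because pcie0 is a PCI bus, while a plain soc-child device with "ranges" all the way up is accepted.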


> Whether Xen does it correctly is another question :).
> 
> > 
> > 
> > > diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> > > index 374bf655ee34..937fd1e387b7 100644
> > > --- a/xen/arch/arm/domain_build.c
> > > +++ b/xen/arch/arm/domain_build.c
> > > @@ -1426,7 +1426,7 @@ static int __init handle_device(struct domain *d, struct dt_device_node *dev,
> > > 
> > >   static int __init handle_node(struct domain *d, struct kernel_info *kinfo,
> > >                                 struct dt_device_node *node,
> > > -                              p2m_type_t p2mt)
> > > +                              p2m_type_t p2mt, bool pci_bus)
> > >   {
> > >       static const struct dt_device_match skip_matches[] __initconst =
> > >       {
> > > @@ -1532,9 +1532,14 @@ static int __init handle_node(struct domain *d, struct kernel_info *kinfo,
> > >                  "WARNING: Path %s is reserved, skip the node as we may re-use the path.\n",
> > >                  path);
> > > 
> > > -    res = handle_device(d, node, p2mt);
> > > -    if ( res)
> > > -        return res;
> > > +    if ( !pci_bus )
> > > +    {
> > > +        res = handle_device(d, node, p2mt);
> > > +        if ( res)
> > > +           return res;
> > > +
> > > +        pci_bus = dt_device_type_is_equal(node, "pci");
> > > +    }
> > > 
> > >       /*
> > >        * The property "name" is used to have a different name on older FDT
> > > @@ -1554,7 +1559,7 @@ static int __init handle_node(struct domain *d, struct kernel_info *kinfo,
> > > 
> > >       for ( child = node->child; child != NULL; child = child->sibling )
> > >       {
> > > -        res = handle_node(d, kinfo, child, p2mt);
> > > +        res = handle_node(d, kinfo, child, p2mt, pci_bus);
> > >           if ( res )
> > >               return res;
> > >       }
> > > @@ -2192,7 +2197,7 @@ static int __init prepare_dtb_hwdom(struct domain *d, struct kernel_info *kinfo)
> > > 
> > >       fdt_finish_reservemap(kinfo->fdt);
> > > 
> > > -    ret = handle_node(d, kinfo, dt_host, default_p2mt);
> > > +    ret = handle_node(d, kinfo, dt_host, default_p2mt, false);
> > >       if ( ret )
> > >           goto err;
> > > 
> > > A less hackish possibility would be to modify dt_number_of_address() and
> > > return 0 when the device is below a PCI bus.
> > > 
> > > Stefano, do you have any opinions?
> > 
> > Would PCIe even work today? Because if it doesn't, we could just add it
> > to skip_matches until we get PCI passthrough properly supported.
> PCIe (or PCI) definitely works in dom0 today but Xen is not aware of the
> hostbridge. So you would break quite a few uses cases by skipping the nodes.

Never mind my suggestion


> > But aside from PCIe, let's say that we know of a few nodes for which
> > "reg" needs a special treatment. I am not sure it makes sense to proceed
> > with parsing those nodes without knowing how to deal with that.
> 
> I believe that most of the time the "special" treatment would be to ignore the
> property "reg" as it will not be a CPU memory address.
> 
> > So maybe
> > we should add those nodes to skip_matches until we know what to do with
> > them. At that point, I would imagine we would introduce a special
> > handle_device function that knows what to do. In the case of PCIe,
> > something like "handle_device_pcie".
> Could you outline how "handle_device_pcie()" will differ from handle_node()?
> 
> In fact, the problem is not the PCIe node directly. Instead, it is the second
> level of nodes below it (i.e. usb@...).
> 
> The current implementation of dt_number_of_address() only looks at the bus type
> of the parent. As the parent has no bus type but has "ranges", it thinks this
> is something we can translate to a CPU address.
> 
> However, this is below a PCI bus so the meaning of "reg" is completely
> different. In this case, we only need to ignore "reg".

I see what you are saying and I agree: if we had to introduce a special
case for PCI, then dt_number_of_address() seems to be a good place.  In
fact, we already have special PCI handling, see our
__dt_translate_address function and xen/common/device_tree.c:dt_busses.

Which raises the question: why is this actually failing?

pcie0 {
     ranges = <0x02000000 0x0 0xc0000000 0x6 0x00000000 0x0 0x40000000>;

Which means that PCI addresses 0xc0000000-0x100000000 become 0x600000000-0x640000000.

The offending DT is:

&pcie0 {
         pci@1,0 {
                 #address-cells = <3>;
                 #size-cells = <2>;
                 ranges;

                 reg = <0 0 0 0 0>;

                 usb@1,0 {
                         reg = <0x10000 0 0 0 0>;
                         resets = <&reset RASPBERRYPI_FIRMWARE_RESET_ID_USB>;
                 };
         };
};


reg = <0x10000 0 0 0 0> means that usb@1,0 is PCI device 01:00.0.
However, the rest of the reg cells are left as zero. It shouldn't be an
issue because usb@1,0 is a child of pci@1,0 but pci@1,0 is not a bus. So
in theory dt_number_of_address() should already return 0 for it.

Maybe reg = <0 0 0 0 0> is the problem. In that case, we could simply
add a check to skip 0 size ranges. Just a hack to explain what I mean:

diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index 18825e333e..236b30675b 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -866,6 +866,10 @@ int dt_device_get_address(const struct dt_device_node *dev, unsigned int index,
     unsigned int flags;
 
     addrp = dt_get_address(dev, index, size, &flags);
     if ( addrp == NULL )
         return -EINVAL;
+
+    /* Hack: skip zero-sized regions; checked after the NULL test so *size is valid */
+    if ( size && *size == 0 )
+        return 0;
 


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 22:53:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 22:53:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81082.149162 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7R18-0001X5-LT; Wed, 03 Feb 2021 22:52:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81082.149162; Wed, 03 Feb 2021 22:52:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7R18-0001Wy-IW; Wed, 03 Feb 2021 22:52:58 +0000
Received: by outflank-mailman (input) for mailman id 81082;
 Wed, 03 Feb 2021 22:52:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JooJ=HF=gmail.com=julien.grall.oss@srs-us1.protection.inumbo.net>)
 id 1l7R17-0001Wt-BO
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 22:52:57 +0000
Received: from mail-ej1-x630.google.com (unknown [2a00:1450:4864:20::630])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9fbccfa8-004a-47c0-a63a-435a22cef54f;
 Wed, 03 Feb 2021 22:52:56 +0000 (UTC)
Received: by mail-ej1-x630.google.com with SMTP id i8so1789124ejc.7
 for <xen-devel@lists.xenproject.org>; Wed, 03 Feb 2021 14:52:56 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9fbccfa8-004a-47c0-a63a-435a22cef54f
X-Received: by 2002:a17:906:296a:: with SMTP id x10mr5341214ejd.240.1612392775402;
 Wed, 03 Feb 2021 14:52:55 -0800 (PST)
MIME-Version: 1.0
References: <YBmQQ3Tzu++AadKx@mattapan.m5p.com> <a422c04c-f908-6fb6-f2de-fea7b18a6e7d@xen.org>
 <b6d342f8-c833-db88-9808-cdc946999300@xen.org> <alpine.DEB.2.21.2102021412480.29047@sstabellini-ThinkPad-T480s>
 <06d6b9ec-0db9-d6da-e30b-df9f9381157d@xen.org> <alpine.DEB.2.21.2102031315350.29047@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2102031315350.29047@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien.grall.oss@gmail.com>
Date: Wed, 3 Feb 2021 22:52:44 +0000
Message-ID: <CAJ=z9a1LsqOMFXV5GLYEkF7=akMx7fT_vpgVtT6xP6MPfmP9vQ@mail.gmail.com>
Subject: Re: [PATCH] xen/arm: domain_build: Ignore device nodes with invalid addresses
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Elliott Mitchell <ehem+xen@m5p.com>, xen-devel <xen-devel@lists.xenproject.org>, 
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Content-Type: text/plain; charset="UTF-8"

On Wed, 3 Feb 2021 at 22:18, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > > But aside from PCIe, let's say that we know of a few nodes for which
> > > "reg" needs a special treatment. I am not sure it makes sense to proceed
> > > with parsing those nodes without knowing how to deal with that.
> >
> > I believe that most of the time the "special" treatment would be to ignore the
> > property "reg" as it will not be a CPU memory address.
> >
> > > So maybe
> > > we should add those nodes to skip_matches until we know what to do with
> > > them. At that point, I would imagine we would introduce a special
> > > handle_device function that knows what to do. In the case of PCIe,
> > > something like "handle_device_pcie".
> > Could you outline how "handle_device_pcie()" will differ from handle_node()?
> >
> > In fact, the problem is not the PCIe node directly. Instead, it is the second
> > level of nodes below it (i.e. usb@...).
> >
> > The current implementation of dt_number_of_address() only looks at the bus type
> > of the parent. As the parent has no bus type but has "ranges", it thinks this
> > is something we can translate to a CPU address.
> >
> > However, this is below a PCI bus so the meaning of "reg" is completely
> > different. In this case, we only need to ignore "reg".
>
> I see what you are saying and I agree: if we had to introduce a special
> case for PCI, then dt_number_of_address() seems to be a good place.  In
> fact, we already have special PCI handling, see our
> __dt_translate_address function and xen/common/device_tree.c:dt_busses.
>
> Which raises the question: why is this actually failing?

I already hinted at the reason in my previous e-mail :). Let me expand
a bit more.

>
> pcie0 {
>      ranges = <0x02000000 0x0 0xc0000000 0x6 0x00000000 0x0 0x40000000>;
>
> Which means that PCI addresses 0xc0000000-0x100000000 become 0x600000000-0x640000000.
>
> The offending DT is:
>
> &pcie0 {
>          pci@1,0 {
>                  #address-cells = <3>;
>                  #size-cells = <2>;
>                  ranges;
>
>                  reg = <0 0 0 0 0>;
>
>                  usb@1,0 {
>                          reg = <0x10000 0 0 0 0>;
>                          resets = <&reset RASPBERRYPI_FIRMWARE_RESET_ID_USB>;
>                  };
>          };
> };
>
>
> reg = <0x10000 0 0 0 0> means that usb@1,0 is PCI device 01:00.0.
> However, the rest of the reg cells are left as zero. It shouldn't be an
> issue because usb@1,0 is a child of pci@1,0 but pci@1,0 is not a bus.

The property "ranges" is used to define a mapping or translation
between the address space of the "bus" (here pci@1,0) and the address
space of the bus node's parent (&pcie0).
IOW, it means "reg" in usb@1,0 is an address on the PCI bus (i.e. BDF).

The problem is dt_number_of_address() will only look at the "bus" type
of the parent using dt_match_bus(). This will return the default bus
(see dt_bus_default_match()), because there is a "ranges" property in
the parent node (i.e. pci@1,0). Therefore...

> So
> in theory dt_number_of_address() should already return 0 for it.

... dt_number_of_address() will return 1 even if the address is not a
CPU address. So when Xen tries to translate it, the translation will fail.

>
> Maybe reg = <0 0 0 0 0> is the problem. In that case, we could simply
> add a check to skip 0 size ranges. Just a hack to explain what I mean:

The parent of pci@1,0 is a PCI bridge (see the "device_type" property), so
the CPU addresses are found not via "reg" but via "assigned-addresses".

In this situation, "reg" has a different meaning and therefore there is
no guarantee that the size will be 0.

Cheers,


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 23:37:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 23:37:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81091.149180 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7RiX-0005tu-3P; Wed, 03 Feb 2021 23:37:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81091.149180; Wed, 03 Feb 2021 23:37:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7RiX-0005tn-0A; Wed, 03 Feb 2021 23:37:49 +0000
Received: by outflank-mailman (input) for mailman id 81091;
 Wed, 03 Feb 2021 23:37:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7RiU-0005tf-S6; Wed, 03 Feb 2021 23:37:46 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7RiU-0002cv-L6; Wed, 03 Feb 2021 23:37:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7RiU-0006ZL-DL; Wed, 03 Feb 2021 23:37:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l7RiU-0005BM-Ct; Wed, 03 Feb 2021 23:37:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158993-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 158993: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=d203dbd69f1a02577dd6fe571d72beb980c548a6
X-Osstest-Versions-That:
    xen=5e7aa904405fa2f268c3af213516bae271de3265
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 03 Feb 2021 23:37:46 +0000

flight 158993 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158993/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  d203dbd69f1a02577dd6fe571d72beb980c548a6
baseline version:
 xen                  5e7aa904405fa2f268c3af213516bae271de3265

Last test of basis   158950  2021-02-02 11:01:24 Z    1 days
Testing same since   158993  2021-02-03 21:00:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   5e7aa90440..d203dbd69f  d203dbd69f1a02577dd6fe571d72beb980c548a6 -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 23:38:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 23:38:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81093.149194 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7Rjb-00060b-EQ; Wed, 03 Feb 2021 23:38:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81093.149194; Wed, 03 Feb 2021 23:38:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7Rjb-00060S-BA; Wed, 03 Feb 2021 23:38:55 +0000
Received: by outflank-mailman (input) for mailman id 81093;
 Wed, 03 Feb 2021 23:38:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x9Sd=HF=oracle.com=dongli.zhang@srs-us1.protection.inumbo.net>)
 id 1l7Rja-00060K-O9
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 23:38:54 +0000
Received: from aserp2120.oracle.com (unknown [141.146.126.78])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 15696a5c-44f9-4663-b39b-48261f3da0e6;
 Wed, 03 Feb 2021 23:38:53 +0000 (UTC)
Received: from pps.filterd (aserp2120.oracle.com [127.0.0.1])
 by aserp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 113N8wg9103664;
 Wed, 3 Feb 2021 23:37:46 GMT
Received: from userp3030.oracle.com (userp3030.oracle.com [156.151.31.80])
 by aserp2120.oracle.com with ESMTP id 36cydm2q16-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 03 Feb 2021 23:37:46 +0000
Received: from pps.filterd (userp3030.oracle.com [127.0.0.1])
 by userp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 113NAbHS112134;
 Wed, 3 Feb 2021 23:37:45 GMT
Received: from nam11-co1-obe.outbound.protection.outlook.com
 (mail-co1nam11lp2171.outbound.protection.outlook.com [104.47.56.171])
 by userp3030.oracle.com with ESMTP id 36dhd0h98t-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 03 Feb 2021 23:37:45 +0000
Received: from BYAPR10MB2663.namprd10.prod.outlook.com (2603:10b6:a02:a9::20)
 by SJ0PR10MB4784.namprd10.prod.outlook.com (2603:10b6:a03:2d4::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3805.19; Wed, 3 Feb
 2021 23:37:42 +0000
Received: from BYAPR10MB2663.namprd10.prod.outlook.com
 ([fe80::b52e:bdd3:d193:4d14]) by BYAPR10MB2663.namprd10.prod.outlook.com
 ([fe80::b52e:bdd3:d193:4d14%7]) with mapi id 15.20.3805.026; Wed, 3 Feb 2021
 23:37:42 +0000
Received: from localhost.localdomain (138.3.200.16) by
 CH0PR03CA0008.namprd03.prod.outlook.com (2603:10b6:610:b0::13) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3825.17 via Frontend Transport; Wed, 3 Feb 2021 23:37:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 15696a5c-44f9-4663-b39b-48261f3da0e6
From: Dongli Zhang <dongli.zhang@oracle.com>
To: dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
        iommu@lists.linux-foundation.org, linux-mips@vger.kernel.org,
        linux-mmc@vger.kernel.org, linux-pci@vger.kernel.org,
        linuxppc-dev@lists.ozlabs.org, nouveau@lists.freedesktop.org,
        x86@kernel.org, xen-devel@lists.xenproject.org
Cc: linux-kernel@vger.kernel.org, adrian.hunter@intel.com,
        akpm@linux-foundation.org, benh@kernel.crashing.org,
        bskeggs@redhat.com, bhelgaas@google.com, bp@alien8.de,
        boris.ostrovsky@oracle.com, hch@lst.de, chris@chris-wilson.co.uk,
        daniel@ffwll.ch, airlied@linux.ie, hpa@zytor.com, mingo@kernel.org,
        mingo@redhat.com, jani.nikula@linux.intel.com,
        joonas.lahtinen@linux.intel.com, jgross@suse.com,
        konrad.wilk@oracle.com, m.szyprowski@samsung.com,
        matthew.auld@intel.com, mpe@ellerman.id.au, rppt@kernel.org,
        paulus@samba.org, peterz@infradead.org, robin.murphy@arm.com,
        rodrigo.vivi@intel.com, sstabellini@kernel.org, bauerman@linux.ibm.com,
        tsbogend@alpha.franken.de, tglx@linutronix.de, ulf.hansson@linaro.org,
        joe.jin@oracle.com, thomas.lendacky@amd.com
Subject: [PATCH RFC v1 1/6] swiotlb: define new enumerated type
Date: Wed,  3 Feb 2021 15:37:04 -0800
Message-Id: <20210203233709.19819-2-dongli.zhang@oracle.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210203233709.19819-1-dongli.zhang@oracle.com>
References: <20210203233709.19819-1-dongli.zhang@oracle.com>
Content-Type: text/plain
X-Originating-IP: [138.3.200.16]
X-ClientProxiedBy: CH0PR03CA0008.namprd03.prod.outlook.com
 (2603:10b6:610:b0::13) To BYAPR10MB2663.namprd10.prod.outlook.com
 (2603:10b6:a02:a9::20)
MIME-Version: 1.0

This just defines a new enumerated type, without functional change.

'SWIOTLB_LO' indexes the legacy 32-bit swiotlb buffer, while
'SWIOTLB_HI' indexes the 64-bit buffer.

This prepares for enabling a 64-bit swiotlb.

Cc: Joe Jin <joe.jin@oracle.com>
Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com>
---
 include/linux/swiotlb.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index d9c9fc9ca5d2..ca125c1b1281 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -17,6 +17,12 @@ enum swiotlb_force {
 	SWIOTLB_NO_FORCE,	/* swiotlb=noforce */
 };
 
+enum swiotlb_t {
+	SWIOTLB_LO,
+	SWIOTLB_HI,
+	SWIOTLB_MAX,
+};
+
 /*
  * Maximum allowable number of contiguous slabs to map,
  * must be a power of 2.  What is the appropriate value ?
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Feb 03 23:39:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 23:39:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81094.149207 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7Rjg-00064G-O9; Wed, 03 Feb 2021 23:39:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81094.149207; Wed, 03 Feb 2021 23:39:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7Rjg-00063X-Jf; Wed, 03 Feb 2021 23:39:00 +0000
Received: by outflank-mailman (input) for mailman id 81094;
 Wed, 03 Feb 2021 23:38:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x9Sd=HF=oracle.com=dongli.zhang@srs-us1.protection.inumbo.net>)
 id 1l7Rjf-00060K-GA
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 23:38:59 +0000
Received: from aserp2120.oracle.com (unknown [141.146.126.78])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f5630453-ea40-4622-bde7-75818b1e8c84;
 Wed, 03 Feb 2021 23:38:53 +0000 (UTC)
Received: from pps.filterd (aserp2120.oracle.com [127.0.0.1])
 by aserp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 113N8xlG103727;
 Wed, 3 Feb 2021 23:37:40 GMT
Received: from aserp3020.oracle.com (aserp3020.oracle.com [141.146.126.70])
 by aserp2120.oracle.com with ESMTP id 36cydm2q11-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 03 Feb 2021 23:37:40 +0000
Received: from pps.filterd (aserp3020.oracle.com [127.0.0.1])
 by aserp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 113NB3jw025155;
 Wed, 3 Feb 2021 23:37:40 GMT
Received: from nam11-co1-obe.outbound.protection.outlook.com
 (mail-co1nam11lp2175.outbound.protection.outlook.com [104.47.56.175])
 by aserp3020.oracle.com with ESMTP id 36dhc1wj19-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 03 Feb 2021 23:37:39 +0000
Received: from BYAPR10MB2663.namprd10.prod.outlook.com (2603:10b6:a02:a9::20)
 by SJ0PR10MB4784.namprd10.prod.outlook.com (2603:10b6:a03:2d4::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3805.19; Wed, 3 Feb
 2021 23:37:36 +0000
Received: from BYAPR10MB2663.namprd10.prod.outlook.com
 ([fe80::b52e:bdd3:d193:4d14]) by BYAPR10MB2663.namprd10.prod.outlook.com
 ([fe80::b52e:bdd3:d193:4d14%7]) with mapi id 15.20.3805.026; Wed, 3 Feb 2021
 23:37:36 +0000
Received: from localhost.localdomain (138.3.200.16) by
 CH0PR03CA0008.namprd03.prod.outlook.com (2603:10b6:610:b0::13) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3825.17 via Frontend Transport; Wed, 3 Feb 2021 23:37:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f5630453-ea40-4622-bde7-75818b1e8c84
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=from : to : cc :
 subject : date : message-id : content-type : mime-version;
 s=corp-2020-01-29; bh=qsEGKVlwuthB6IlsIaAceDAu5rgeHIJP4Sy0WStKjL4=;
 b=YVQ3mdsiofTOsSsz6szMGB1tjZoyfyhYTGQuHfiuq9fnn7O+mT2ZMf/oFEeq9m6/1Ud8
 o2l0M8hto7uYeKodKBVIBwQKP6dPH62mTjQrhTiRiwShRrcgkXTocWM10UTh5O8FaLzY
 cDm6mIFe4nrZ7wqdI5YrPfkPEWctS1yngSQqguMBRZpy9j3l72yZRsCmPLxckDD4ZdjL
 NemMVLhXDkv5uVNcDKgW0BhYnhrsw/SUIhuaT8VuFqxU0uatOGWiBOiSJC2SXIjYIKI3
 1uu01zsQ4CYwqe/cQJp7tcQCzY9P4GR47cw5tuCu6lJ4ZDws6Y24EvMaRcipWL25/sIZ DQ== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=aWsB8XXRpIpWWjUxGxaqItGk4rsX0gm4lOVAzXof6dvlDsAo6PxpE9LVo2mJYQX88GxxMz/SPPYnHAGjpnetONjUMHvWPYlNYB5+K8mECnmgGR/OgoGPDM7hinW4OtGlYoheUU6AQKW2+Gk7U8E1FWdigM4y8/WdKmg1ZXA1NMJmISTZVfGOUm01hX2hqlwZpi/LPdRyovi8ZZ0dEUk06oUAKSZeCOLU2gO7EuD9JFazPloB5l4xmm3xyelKDmdnUD472lImPCb5h8YLrO2J0ghn5gFY1O1d8ixBS2xgkqbiXNpB1/35t9gRaM4EmmPo7kkV52DZyOzrO3wvhXPuaA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=qsEGKVlwuthB6IlsIaAceDAu5rgeHIJP4Sy0WStKjL4=;
 b=e7pmhtT0iG6+1IPSSPrbtp/zCk+zXY4TfkNpxVkx2o+dJmQtno2dBbDYDmSSGy4EsndZFUkazg5pxHymYmOy+Ufp94V6WDZjkriVc4dbZnMrYkoExkxG+e3F+4e2JneeWDzhNj513lROwKlisWp3mYFb2ur40JI/0dX1lRh9DmEc+yRX+knwd1LO2EyVHo7J+YmlIQiaVkfCeMy+h0HW7BUUhM8vPfXWFf6Pqv36yAxYTDB+lHTtTHEpzUuKrJA/pVIdeub0rs/mu3UGaxeamjLNVoqbOQBavlIkYptqDChoBCtnbGnR2LRqn+Tud5yGPqQCONjQ4qzl/J6RL8KT5A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=qsEGKVlwuthB6IlsIaAceDAu5rgeHIJP4Sy0WStKjL4=;
 b=cExWitoEK2u27ChY5w571nzM0Hj9zOqeABpvwhLDCdOdmoyNkuS9Pl9jgif4I26Ev2rVL2IO++rYmVZOJt/SwoVWVdXXhtpq3KnJ8ZwoCQpMEGpnkMChSKLJnq+ub5E5HpyN8sJzB0MxM1+FQQnpOjS3KgLuGx8xR2GIll4TFME=
Authentication-Results: lists.freedesktop.org; dkim=none (message not signed)
 header.d=none;lists.freedesktop.org; dmarc=none action=none
 header.from=oracle.com;
From: Dongli Zhang <dongli.zhang@oracle.com>
To: dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
        iommu@lists.linux-foundation.org, linux-mips@vger.kernel.org,
        linux-mmc@vger.kernel.org, linux-pci@vger.kernel.org,
        linuxppc-dev@lists.ozlabs.org, nouveau@lists.freedesktop.org,
        x86@kernel.org, xen-devel@lists.xenproject.org
Cc: linux-kernel@vger.kernel.org, adrian.hunter@intel.com,
        akpm@linux-foundation.org, benh@kernel.crashing.org,
        bskeggs@redhat.com, bhelgaas@google.com, bp@alien8.de,
        boris.ostrovsky@oracle.com, hch@lst.de, chris@chris-wilson.co.uk,
        daniel@ffwll.ch, airlied@linux.ie, hpa@zytor.com, mingo@kernel.org,
        mingo@redhat.com, jani.nikula@linux.intel.com,
        joonas.lahtinen@linux.intel.com, jgross@suse.com,
        konrad.wilk@oracle.com, m.szyprowski@samsung.com,
        matthew.auld@intel.com, mpe@ellerman.id.au, rppt@kernel.org,
        paulus@samba.org, peterz@infradead.org, robin.murphy@arm.com,
        rodrigo.vivi@intel.com, sstabellini@kernel.org, bauerman@linux.ibm.com,
        tsbogend@alpha.franken.de, tglx@linutronix.de, ulf.hansson@linaro.org,
        joe.jin@oracle.com, thomas.lendacky@amd.com
Subject: [PATCH RFC v1 0/6] swiotlb: 64-bit DMA buffer
Date: Wed,  3 Feb 2021 15:37:03 -0800
Message-Id: <20210203233709.19819-1-dongli.zhang@oracle.com>
X-Mailer: git-send-email 2.17.1
Content-Type: text/plain
X-Originating-IP: [138.3.200.16]
X-ClientProxiedBy: CH0PR03CA0008.namprd03.prod.outlook.com
 (2603:10b6:610:b0::13) To BYAPR10MB2663.namprd10.prod.outlook.com
 (2603:10b6:a02:a9::20)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: aeb1ff20-7d9b-47ac-6403-08d8c89cafd9
X-MS-TrafficTypeDiagnostic: SJ0PR10MB4784:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: 
	<SJ0PR10MB47843F855991F25A4A45BF6DF0B49@SJ0PR10MB4784.namprd10.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 
	LY9Lq3NT6DmOC7z0WPolLyKaKsLauZx2aBzBON7L226+ab0hT/EWg3ZiW5wlM+VnbehKdSdVWZpbBs/TbGgyX4HezNqQ5CMlR+HH/rpWlSoRmL4fXeIGRcA5vf1RTqeTAKM2YFRYOZBo7Rb41zCbdAnPBKUdIYGdeXLu5MAoybkFdWZpSussZ68XVeaMxnJQSEiGUhVeKMMwEfDGIGyHkhar0bqMbr46E3MWFmdZ9kZ7o4IqMS2dZosCg8CFI14P7CDsspaCa7SUTll8jeVG8WArKNzxJn+TUbqCjLSd75TH9TOaGjaFYnbVlbksGTPa7pqSfNaqr/79nK5uiNUJ2POLcMyTnhZq9HFlEICzAPOIn1lYEGuLax0VpiszYoDm0Vp3Xw1DiD5/v/ylc8vsKeR8rvysabStgohdaM+Or06NxxooDiBrWjU72PiHQk9zI8Rnz75OAZzmyEQ5uJjLWW0xPlzHzGVRNOp0gUFNXeDl+kzC115+kmB45dykxnqvQmrRurijE/xz1S6wLsytw91XYB/lvBRuoc7XC/cxJRQAb02illeVoDS9wkRIuIjVKejYiNGxC/1AIi5G5TSR8MGb8i6gkehd1M6JjM96xrM=
X-Forefront-Antispam-Report: 
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR10MB2663.namprd10.prod.outlook.com;PTR:;CAT:NONE;SFS:(136003)(366004)(39860400002)(396003)(376002)(346002)(83380400001)(6666004)(316002)(5660300002)(956004)(66946007)(6506007)(86362001)(7406005)(1076003)(66476007)(8676002)(186003)(26005)(36756003)(7416002)(478600001)(2906002)(66556008)(8936002)(6486002)(16526019)(69590400011)(4326008)(6512007)(44832011)(921005)(2616005)(52116002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: 
	=?us-ascii?Q?14CfgRF8o8XCWWseZbNUuFwSrYhs6AQTmHA1kkzZvm8HuSSgbEinFwcxm/Ei?=
 =?us-ascii?Q?YdCxSImF5U5UcCrk3/o7TbA0bHe+tJ9GnE9vn2dKeGvgf9ErntjaWFLqBLlQ?=
 =?us-ascii?Q?WYPYUmK3PE/gPkWS6JtnZ1UCeT8h3LrPzSiq7ycKrX1zUAEvxk5t0MOpopL7?=
 =?us-ascii?Q?GJalZ9E7v4pjStDMFVbSSTVJwCVAf/hVyN0iG3cSFppknj0A2FrYwHKQn11O?=
 =?us-ascii?Q?mcB55ilx4Wdy7BzXH4dnoCPXY/KSks0n+mbGvFnkvEmH7AO19R68vmXXxjEJ?=
 =?us-ascii?Q?SSAu67eZxoUujfsENdqYp2eNxw9b2Lbgcovp7tgz5CxkLxm17M1csryrw9M8?=
 =?us-ascii?Q?g30/xAUlYRvTNHfw9G4wwlbT85/08RhpFoqbVa5pZRjZKSWpQH1ZA5Km6IRa?=
 =?us-ascii?Q?KOQ5gc8w5HeXEte/29PvfnaI64YQlv+cg9LIdslXs4irBbX1SsCeXozPAe6M?=
 =?us-ascii?Q?d0lKn0G5wH8WnxDnyUicYklTHA6MwHol1/5qxv4n2SRQ1k60vYNllTJyexlR?=
 =?us-ascii?Q?8dcmhixT07GuHj1nYE1jGn4140LzPPfSYkbzczJbmItbPJ0ZhMfjzpxzoy6o?=
 =?us-ascii?Q?VllOGJZ2SbFkPTJLp9ZNEgO9Wz6NE8d2lqQrFvjWN0dEaCHOUBjweqeC/OTz?=
 =?us-ascii?Q?coZCwbCJ16o83K9oQaEwPz5AXiX3AM7hVDlC7AIIMMg6Uci0W1pYj9fTQ5xb?=
 =?us-ascii?Q?b2EcHraiOqOgLbCldF4Hz9qDkSrKgTuPkQ9yeXN98+VndGRXkrI4bnrlydoQ?=
 =?us-ascii?Q?AqKCiGqnuiTfkvKE6hg5ODpB5h6V1NxPbaL0jBCquCpSR0sMiXB0eU2Wr7o7?=
 =?us-ascii?Q?tclMeJftJfE08BZuuUQUdWR31kjGkQA+ubPYU80Wcsp3FQekGk8uWh75KQ2w?=
 =?us-ascii?Q?HCs/bn3R455tQR8SN0b/L9s9csLTr+DTNXuILdxnXdgPC5xcHFwA9Zxd+a44?=
 =?us-ascii?Q?Syx/l8kY+zpsn/qjhLKWGJgmjveuxGGOcWVSQ/mkeODUd79uxhyTgYge8//t?=
 =?us-ascii?Q?pDUWhLDoC41G/NxpES9c5Jry+glB3shCr5cMzDr7sJLkUr0RTSolzKkYkzLo?=
 =?us-ascii?Q?YnQwYasH?=
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: aeb1ff20-7d9b-47ac-6403-08d8c89cafd9
X-MS-Exchange-CrossTenant-AuthSource: BYAPR10MB2663.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Feb 2021 23:37:36.5357
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: SjFAEv41/MntsA4fKs/ZtEmt2lYp+bejHO36bqzO1VLLyXuSm8FNEpw3/9n6Tfgy8/V89Pne75iBjlycVoHr+Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR10MB4784
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9884 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 bulkscore=0 adultscore=0 suspectscore=0
 spamscore=0 mlxscore=0 malwarescore=0 mlxlogscore=999 phishscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2102030143
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9884 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 bulkscore=0 adultscore=0
 priorityscore=1501 impostorscore=0 malwarescore=0 clxscore=1011
 spamscore=0 lowpriorityscore=0 phishscore=0 mlxlogscore=999 mlxscore=0
 suspectscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2102030143

This RFC introduces a second swiotlb buffer for 64-bit DMA access.  The
prototype is based on v5.11-rc6.

The current swiotlb pre-allocates <=32-bit memory in order to meet the
DMA mask requirement of some legacy 32-bit devices. Considering that most
devices nowadays support 64-bit DMA and an IOMMU is available, the swiotlb
is unused most of the time, except:

1. A Xen PV domain requires DMA addresses to be both (1) within the device
DMA mask, and (2) contiguous in machine address. Therefore, a 64-bit
device may still require the swiotlb in a PV domain.

2. AMD SME/SEV sets SWIOTLB_FORCE, so allocations must always come from
the swiotlb buffer even when the device DMA mask is 64-bit:

sme_early_init()
-> if (sev_active())
       swiotlb_force = SWIOTLB_FORCE;


Therefore, this RFC introduces a second swiotlb buffer for 64-bit DMA
access.  For instance, swiotlb_tbl_map_single() allocates from the second
(64-bit) buffer if the effective device DMA limit,
min_not_zero(*hwdev->dma_mask, hwdev->bus_dma_limit), is 64-bit.  With
this RFC, Xen/AMD guests will be able to allocate a swiotlb buffer larger
than 4GB.

Since the second buffer is 64-bit, it could also be allocated at runtime
(not in this patch set, but certainly possible), meaning the size could
change depending on the device MMIO buffers, etc.
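
A minimal, self-contained sketch of the selection policy described above,
assuming the effective limit is computed as min_not_zero(dma_mask,
bus_dma_limit); the function name and the userspace macro below are
illustrative, not the actual kernel code:

```c
#include <assert.h>
#include <stdint.h>

enum swiotlb_t { SWIOTLB_LO, SWIOTLB_HI, SWIOTLB_MAX };

/* Smaller of two values, ignoring zeroes (mirrors the kernel helper). */
#define min_not_zero(x, y) \
	((x) == 0 ? (y) : ((y) == 0 ? (x) : ((x) < (y) ? (x) : (y))))

/* Pick which swiotlb buffer to allocate from, given the device DMA
 * mask and the bus DMA limit (0 means "no limit set"). */
enum swiotlb_t swiotlb_choose(uint64_t dma_mask, uint64_t bus_dma_limit)
{
	uint64_t limit = min_not_zero(dma_mask, bus_dma_limit);

	/* Only a device that can address the full 64-bit space may use
	 * the high buffer; everything else stays in the low buffer. */
	return limit == UINT64_MAX ? SWIOTLB_HI : SWIOTLB_LO;
}
```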


I have tested the patch set on a Xen PV dom0 booted via QEMU. The dom0 is
booted via:

qemu-system-x86_64 -smp 8 -m 20G -enable-kvm -vnc :9 \
-net nic -net user,hostfwd=tcp::5029-:22 \
-hda disk.img \
-device nvme,drive=nvme0,serial=deudbeaf1,max_ioqpairs=16 \
-drive file=test.qcow2,if=none,id=nvme0 \
-serial stdio

The "swiotlb=65536,1048576,force" parameter configures the 32-bit swiotlb
as 128MB and the 64-bit swiotlb as 2048MB, and forces the use of swiotlb.
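
To illustrate the arithmetic (this is a sketch, not the actual kernel
parser; struct and function names are assumptions), two comma-separated
slab counts translate into buffer sizes at 2KB (1 << 11) per slab:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define IO_TLB_SHIFT 11	/* each swiotlb slab is 2KB */

struct swiotlb_setup {
	unsigned long nslabs_lo;	/* 32-bit buffer, in slabs */
	unsigned long nslabs_hi;	/* 64-bit buffer, in slabs */
	int force;
};

/* Illustrative parser for "swiotlb=<lo>,<hi>[,force]". */
void parse_swiotlb_param(const char *str, struct swiotlb_setup *s)
{
	char *end;

	s->nslabs_lo = strtoul(str, &end, 0);
	if (*end == ',')
		s->nslabs_hi = strtoul(end + 1, &end, 0);
	s->force = (*end == ',' && strcmp(end + 1, "force") == 0);
}

/* Convert a slab count to megabytes. */
unsigned long slabs_to_mb(unsigned long nslabs)
{
	return (nslabs << IO_TLB_SHIFT) >> 20;
}
```

With this arithmetic, 65536 slabs is 128MB and 1048576 slabs is 2048MB,
matching the sizes quoted above.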

vm# cat /proc/cmdline 
placeholder root=UUID=4e942d60-c228-4caf-b98e-f41c365d9703 ro text
swiotlb=65536,1048576,force quiet splash

[    5.119877] Booting paravirtualized kernel on Xen
... ...
[    5.190423] software IO TLB: Low Mem mapped [mem 0x0000000234e00000-0x000000023ce00000] (128MB)
[    6.276161] software IO TLB: High Mem mapped [mem 0x0000000166f33000-0x00000001e6f33000] (2048MB)

0x0000000234e00000 is mapped to 0x00000000001c0000 (32-bit machine address)
0x000000023ce00000-1 is mapped to 0x000000000ff3ffff (32-bit machine address)
0x0000000166f33000 is mapped to 0x00000004b7280000 (64-bit machine address)
0x00000001e6f33000-1 is mapped to 0x000000033a07ffff (64-bit machine address)


While running fio against the emulated NVMe device, the swiotlb allocates
from the 64-bit buffer, as shown by io_tlb_used-highmem:

vm# cat /sys/kernel/debug/swiotlb/io_tlb_nslabs
65536
vm# cat /sys/kernel/debug/swiotlb/io_tlb_used
258
vm# cat /sys/kernel/debug/swiotlb/io_tlb_nslabs-highmem
1048576
vm# cat /sys/kernel/debug/swiotlb/io_tlb_used-highmem
58880


I also tested virtio-scsi (with "disable-legacy=on,iommu_platform=true")
on a VM with AMD SEV enabled.

qemu-system-x86_64 -enable-kvm -machine q35 -smp 36 -m 20G \
-drive if=pflash,format=raw,unit=0,file=OVMF_CODE.pure-efi.fd,readonly \
-drive if=pflash,format=raw,unit=1,file=OVMF_VARS.fd \
-hda ol7-uefi.qcow2 -serial stdio -vnc :9 \
-net nic -net user,hostfwd=tcp::5029-:22 \
-cpu EPYC -object sev-guest,id=sev0,cbitpos=47,reduced-phys-bits=1 \
-machine memory-encryption=sev0 \
-device virtio-scsi-pci,id=scsi,disable-legacy=on,iommu_platform=true \
-device scsi-hd,drive=disk0 \
-drive file=test.qcow2,if=none,id=disk0,format=qcow2

The "swiotlb=65536,1048576" parameter configures the 32-bit swiotlb as
128MB and the 64-bit swiotlb as 2048MB. There is no need to force swiotlb
because AMD SEV sets SWIOTLB_FORCE.

# cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-5.11.0-rc6swiotlb+ root=/dev/mapper/ol-root ro
crashkernel=auto rd.lvm.lv=ol/root rd.lvm.lv=ol/swap rhgb quiet
LANG=en_US.UTF-8 swiotlb=65536,1048576

[    0.729790] AMD Memory Encryption Features active: SEV
... ...
[    2.113147] software IO TLB: Low Mem mapped [mem 0x0000000073e1e000-0x000000007be1e000] (128MB)
[    2.113151] software IO TLB: High Mem mapped [mem 0x00000004e8400000-0x0000000568400000] (2048MB)

While running fio against virtio-scsi, the swiotlb allocates from the
64-bit buffer, as shown by io_tlb_used-highmem:

vm# cat /sys/kernel/debug/swiotlb/io_tlb_nslabs
65536
vm# cat /sys/kernel/debug/swiotlb/io_tlb_used
0
vm# cat /sys/kernel/debug/swiotlb/io_tlb_nslabs-highmem
1048576
vm# cat /sys/kernel/debug/swiotlb/io_tlb_used-highmem
64647


Please let me know if you have any feedback on this idea and RFC.


Dongli Zhang (6):
   swiotlb: define new enumerated type
   swiotlb: convert variables to arrays
   swiotlb: introduce swiotlb_get_type() to calculate swiotlb buffer type
   swiotlb: enable 64-bit swiotlb
   xen-swiotlb: convert variables to arrays
   xen-swiotlb: enable 64-bit xen-swiotlb

 arch/mips/cavium-octeon/dma-octeon.c         |   3 +-
 arch/powerpc/kernel/dma-swiotlb.c            |   2 +-
 arch/powerpc/platforms/pseries/svm.c         |   8 +-
 arch/x86/kernel/pci-swiotlb.c                |   5 +-
 arch/x86/pci/sta2x11-fixup.c                 |   2 +-
 drivers/gpu/drm/i915/gem/i915_gem_internal.c |   4 +-
 drivers/gpu/drm/i915/i915_scatterlist.h      |   2 +-
 drivers/gpu/drm/nouveau/nouveau_ttm.c        |   2 +-
 drivers/mmc/host/sdhci.c                     |   2 +-
 drivers/pci/xen-pcifront.c                   |   2 +-
 drivers/xen/swiotlb-xen.c                    | 123 ++++---
 include/linux/swiotlb.h                      |  49 ++-
 kernel/dma/swiotlb.c                         | 382 +++++++++++++---------
 13 files changed, 363 insertions(+), 223 deletions(-)


Thank you very much!

Dongli Zhang




From xen-devel-bounces@lists.xenproject.org Wed Feb 03 23:39:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 23:39:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81095.149219 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7Rjm-0006DZ-6r; Wed, 03 Feb 2021 23:39:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81095.149219; Wed, 03 Feb 2021 23:39:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7Rjm-0006DQ-1Q; Wed, 03 Feb 2021 23:39:06 +0000
Received: by outflank-mailman (input) for mailman id 81095;
 Wed, 03 Feb 2021 23:39:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x9Sd=HF=oracle.com=dongli.zhang@srs-us1.protection.inumbo.net>)
 id 1l7Rjk-00060K-GM
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 23:39:04 +0000
Received: from aserp2120.oracle.com (unknown [141.146.126.78])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cdbe6280-1a3a-4c32-8b14-6afa799c6e13;
 Wed, 03 Feb 2021 23:38:53 +0000 (UTC)
Received: from pps.filterd (aserp2120.oracle.com [127.0.0.1])
 by aserp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 113N9nvp104113;
 Wed, 3 Feb 2021 23:37:57 GMT
Received: from aserp3030.oracle.com (aserp3030.oracle.com [141.146.126.71])
 by aserp2120.oracle.com with ESMTP id 36cydm2q1h-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 03 Feb 2021 23:37:57 +0000
Received: from pps.filterd (aserp3030.oracle.com [127.0.0.1])
 by aserp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 113NBZPD083375;
 Wed, 3 Feb 2021 23:37:56 GMT
Received: from nam11-co1-obe.outbound.protection.outlook.com
 (mail-co1nam11lp2168.outbound.protection.outlook.com [104.47.56.168])
 by aserp3030.oracle.com with ESMTP id 36dh1rp03y-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 03 Feb 2021 23:37:56 +0000
Received: from BYAPR10MB2663.namprd10.prod.outlook.com (2603:10b6:a02:a9::20)
 by SJ0PR10MB4784.namprd10.prod.outlook.com (2603:10b6:a03:2d4::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3805.19; Wed, 3 Feb
 2021 23:37:54 +0000
Received: from BYAPR10MB2663.namprd10.prod.outlook.com
 ([fe80::b52e:bdd3:d193:4d14]) by BYAPR10MB2663.namprd10.prod.outlook.com
 ([fe80::b52e:bdd3:d193:4d14%7]) with mapi id 15.20.3805.026; Wed, 3 Feb 2021
 23:37:54 +0000
Received: from localhost.localdomain (138.3.200.16) by
 CH0PR03CA0008.namprd03.prod.outlook.com (2603:10b6:610:b0::13) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3825.17 via Frontend Transport; Wed, 3 Feb 2021 23:37:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cdbe6280-1a3a-4c32-8b14-6afa799c6e13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=from : to : cc :
 subject : date : message-id : in-reply-to : references : content-type :
 mime-version; s=corp-2020-01-29;
 bh=0whQxZzUvZ1N9ogSDF2hvQXVIr9wVe/wZlYPojmN3Dw=;
 b=zphOpY06kD3+7Df7x1mRIRAudIRvGVbxfV7cm1VVALzVaZSFojCB+pC1TdF6+kyhyfzb
 zw9oF/i9Jchi8EOMKPzhFIW1kNwc+TDKDxJnkptpBsmZnoaVtsUoiBM7FejgZ6Wb5N2U
 VvUVF4YiNFcEyDlZHFFYMHqBWlAzWfcK3gbJfhbfLToNfdyKRvjeTLmRoHGcpsXK/P06
 YabOHIUEmZ+/pENHIs5BZwh9W6R0gEZns8pQy97rO+ULFv37KBdlsoF47pA3QDP2wFeV
 pXGKqZ/W+qJrFG46V14gJuom5a/zmWjo4xljodpZnaKmieEerTBfh9roxByjB7vqJvSD yA== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=gEf0apA+fXDQBhkVC7p6RZiPW2beDGTJR3LkceZiUkSsKDlLcNRDEu8E1RUZ3dgWW3myJscl18/EraARgi7vRA5bI07NqJRHhXCRH75UJW8H5ew8k7XI6IKjGY5l1aUeMLYQA9ud84B2ewTHlqDc55qRJPCaIO0PI8C23xF717q5jori67CvUsACD4t/z+b63w8PfWcda/EZE/M9GpNSmIw7FGjsXH2u6t127XHqmaeVW+s4pUYJmTXN/tXZ1W9Jw3YhV5H8eR53Wb5Hm+YXFEIlpeRNwowt6QgMV4Rg+oHWldmDkiXyed3LFNrUTYu20+kHcG5zl9omEJiAOTEfPg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0whQxZzUvZ1N9ogSDF2hvQXVIr9wVe/wZlYPojmN3Dw=;
 b=EiRIIAV+sSfHbA+ePNhanxwPg1PWkEuScf9GKzP3zcbHvD+RyrfCz6G8PQrE2Sv5vKM6R/S8DZuSj/l9X2gUFLSoLZpoD8HGsTBJZEGRitUGeud7S2nUxbRrvLCiTRXSS/A1kLjDwRzY8FJAozzdCmWL5lGIxS7lAY09nbUCTmcEPxji1fV2UBcxH0cuxnJZnx+WNaHvwa84+aVB6sTxxt0lxUo5HPkmyWAH+JKPx9C8vnb3+DFO+V4eFIgg7mdjZ8mOJk+WNq476bTRNxHyKEBhrqxKdYO9opot2nR4drhsRxAEfVBrCnseyftZL6CcWr8QjKkdP2ReukhKdC1KEQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0whQxZzUvZ1N9ogSDF2hvQXVIr9wVe/wZlYPojmN3Dw=;
 b=fIEZVuK8vikYtyA8/X72InpkX1VCyox7gcB4puyv3oe+AQIFBZB8t7LAmvH/gJtpDdyOUb0CM7ki4FYRV6TQX1ynrT3iJDkCW8ENf6cVoDcVaoMkMQJ7+zib8Mr7Qc1ebqDdOUUaun9cdrR7Z9khq7AC+KgVRdYiQ8b+2cnaA7E=
Authentication-Results: lists.freedesktop.org; dkim=none (message not signed)
 header.d=none;lists.freedesktop.org; dmarc=none action=none
 header.from=oracle.com;
From: Dongli Zhang <dongli.zhang@oracle.com>
To: dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
        iommu@lists.linux-foundation.org, linux-mips@vger.kernel.org,
        linux-mmc@vger.kernel.org, linux-pci@vger.kernel.org,
        linuxppc-dev@lists.ozlabs.org, nouveau@lists.freedesktop.org,
        x86@kernel.org, xen-devel@lists.xenproject.org
Cc: linux-kernel@vger.kernel.org, adrian.hunter@intel.com,
        akpm@linux-foundation.org, benh@kernel.crashing.org,
        bskeggs@redhat.com, bhelgaas@google.com, bp@alien8.de,
        boris.ostrovsky@oracle.com, hch@lst.de, chris@chris-wilson.co.uk,
        daniel@ffwll.ch, airlied@linux.ie, hpa@zytor.com, mingo@kernel.org,
        mingo@redhat.com, jani.nikula@linux.intel.com,
        joonas.lahtinen@linux.intel.com, jgross@suse.com,
        konrad.wilk@oracle.com, m.szyprowski@samsung.com,
        matthew.auld@intel.com, mpe@ellerman.id.au, rppt@kernel.org,
        paulus@samba.org, peterz@infradead.org, robin.murphy@arm.com,
        rodrigo.vivi@intel.com, sstabellini@kernel.org, bauerman@linux.ibm.com,
        tsbogend@alpha.franken.de, tglx@linutronix.de, ulf.hansson@linaro.org,
        joe.jin@oracle.com, thomas.lendacky@amd.com
Subject: [PATCH RFC v1 3/6] swiotlb: introduce swiotlb_get_type() to calculate swiotlb buffer type
Date: Wed,  3 Feb 2021 15:37:06 -0800
Message-Id: <20210203233709.19819-4-dongli.zhang@oracle.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210203233709.19819-1-dongli.zhang@oracle.com>
References: <20210203233709.19819-1-dongli.zhang@oracle.com>
Content-Type: text/plain
X-Originating-IP: [138.3.200.16]
X-ClientProxiedBy: CH0PR03CA0008.namprd03.prod.outlook.com
 (2603:10b6:610:b0::13) To BYAPR10MB2663.namprd10.prod.outlook.com
 (2603:10b6:a02:a9::20)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: e2add3f4-4994-4ff3-7c96-08d8c89cba39
X-MS-TrafficTypeDiagnostic: SJ0PR10MB4784:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: 
	<SJ0PR10MB47845B559CD5D0A96B095B3CF0B49@SJ0PR10MB4784.namprd10.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:4714;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 
	RLsEchawOksK0kvC0O+w3T4N1PAe5DU17fjViBpDniy2b9oX4Ufj/Tfq8EWhO4frB7qnX0pHSdDvzHQ0ACCSnNWfnsvXXVueYSMQ5bSKOZehrHUM+9L4t9jd07zu4gKv68MoOqFldDvMmK/RiQtDnCKrSM2k9IjjrO+HS1Rlbcm5FzAPQvsMhdbyVoX7IEnuLpd15jB8NLzIr8vnz81spY0Jx9+eVAjhguDVPT9rRmXtSMC60dsfHS55lCeB2Vj8k4aIvEOCTXTSZp1WxfoYpfjD1hwoDYtSG88vstblEjYKKMuirJd4Js35MVoTNqUwk1P1X1Tj8qKpqNzOOpZRVFed6WoSshwWCHcsJNCGyLxjNYzcVdPLuONOZB908nFsFlXunhMqmi/lRV97Ls5v1hy4GLXqaK3/U1a54rGH4ZkOo5lV264z0LOuJRpB36z47uumkrbQ5UUuslWuE+eRzmwO+YcKeLFn4t+gWi1VYhe4eiRqpTWdJidXGRDw8aGQs8A3hCrTQL7tDzrV5iwAsjl0Ko2cLCh5WY7v/FkOx3gVL/dv/Q+VQhrCbgmuT3dI8Gj5pg09+18zfV4CFs9ZM9xmRdm24wkCCVIxO4FyP+g=
X-Forefront-Antispam-Report: 
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR10MB2663.namprd10.prod.outlook.com;PTR:;CAT:NONE;SFS:(136003)(366004)(39860400002)(396003)(376002)(346002)(83380400001)(6666004)(316002)(5660300002)(956004)(66946007)(6506007)(86362001)(7406005)(1076003)(66476007)(8676002)(186003)(26005)(36756003)(7416002)(478600001)(2906002)(66556008)(8936002)(6486002)(16526019)(69590400011)(4326008)(6512007)(44832011)(921005)(2616005)(52116002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: 
	=?us-ascii?Q?BcXfwKqgNX8nkepJT4kFsuW0EIqFSZAcYDSzc+4S9iOuag1d/bCyiGVHpKCl?=
 =?us-ascii?Q?0xddX9Vt5u5HL+mT7aWb9G412UZ2PypY26RbGIwvafGZmfYU2ipDw60fgBI+?=
 =?us-ascii?Q?2vlGJqSUe5+9mVeMO7ZexQhODHbppbGlB0dJiqh0hzET7CtRzkcaPOEBLead?=
 =?us-ascii?Q?eyD2nlmSlc5No2kjos3hDOdsYfccm3bdhp4+HospEB8JYSFBQXqymJD7f4Rv?=
 =?us-ascii?Q?f55Ab6RpmXbwmZyMu7dHwmJz3Y8FcNNB0hWUXoaypkZRGSBlRVkZR9EmJd1p?=
 =?us-ascii?Q?BxOl+Qww8yUt2ra7p1vhgyXbi0uzLhyHb+wMaAk5dmzqIdI6pvJLt2lQ0gmq?=
 =?us-ascii?Q?XPRM8W01An6c1LGPJt98NXX3Vr2i2Ewg4A/ZYxzQxp8p7EOlopuUkptkk0dT?=
 =?us-ascii?Q?sJSEkHiLb5d5VGS0uPxWoj2AWBgOtdoUSVkm4wnGdkVUTjl6bICOnosAxKIS?=
 =?us-ascii?Q?lt/aNZpaJfN1OdoUJDCwvK0FV06TLlUWCvgL7pQ3jkx04TFwALqHEw2h6pY8?=
 =?us-ascii?Q?UjsnjBGnDqRx5abwprSSt98LprW8Ku7VTHgBjJTpkIS0lmowSvcGN+6GR/Ny?=
 =?us-ascii?Q?jjxnQLNfehLBjybnbFKE3xBGxAxm7JeEJx2T2+76mWmughTZ8blz9CHrXPVS?=
 =?us-ascii?Q?jg2y+9HokLfKQTXsYCSR/6EVLJq9uq6awiLWDE0QCQckSWSHGVHuKG6JyAuG?=
 =?us-ascii?Q?3C7KIfe6ui6VdJZs08ZGxFpymfTPuhIccI4SHGRlfi14K/rYU/MuaFJisAyq?=
 =?us-ascii?Q?cQNK95FM7wog6nn4BC/mfaofm4rmej2BXxc2lB1iyJbjx1/wM2+GNYv7yTCj?=
 =?us-ascii?Q?CN0hT5a+kx0Bw7AJfaYQ6O5V9qeLoIf28Y3J2IOqHv7lD3GlMsPXfQiTMPFN?=
 =?us-ascii?Q?ToBdrV4ZsFelex4fxzffuc7L6v4rbAo0nqBqbXlUY754pa3VgZv5RlUmqSzE?=
 =?us-ascii?Q?FOPk8qIkDFXXYDZ0OJhx0HlhLiscF+5PyapcsSgifjkTUZkUBcvOV8YJadhJ?=
 =?us-ascii?Q?2uP7WM1UsT0zHDD99sInRccYSNY1fvpPMQehmQx4uO9DwTC14NuAIjcqgFbu?=
 =?us-ascii?Q?0tNfNU2D?=
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e2add3f4-4994-4ff3-7c96-08d8c89cba39
X-MS-Exchange-CrossTenant-AuthSource: BYAPR10MB2663.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Feb 2021 23:37:53.9367
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: f2I/HMS0YUJZk/sk/Cx1bLMo0oJIxcGZe+VqnPykaL7Pm15QLnMEIREVpFCxosN1Z6mZbgwEQo0EBiVXG7K/oQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR10MB4784
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9884 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 adultscore=0 spamscore=0 phishscore=0
 suspectscore=0 mlxlogscore=999 bulkscore=0 mlxscore=0 malwarescore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2102030143
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9884 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 bulkscore=0 adultscore=0
 priorityscore=1501 impostorscore=0 malwarescore=0 clxscore=1015
 spamscore=0 lowpriorityscore=0 phishscore=0 mlxlogscore=999 mlxscore=0
 suspectscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2102030143

This patch introduces swiotlb_get_type() in order to calculate which
swiotlb buffer a given DMA address belongs to.

This prepares for enabling the 64-bit swiotlb.
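
The lookup can be sketched as a self-contained userspace version (the
arrays below are stand-ins for the kernel's io_tlb_start/io_tlb_end, and
the values are merely the example "Low Mem"/"High Mem" ranges from the
cover letter):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

enum swiotlb_t { SWIOTLB_LO, SWIOTLB_HI, SWIOTLB_MAX };

typedef uint64_t phys_addr_t;

/* Stand-ins for the kernel's per-buffer bounds. */
phys_addr_t io_tlb_start[SWIOTLB_MAX] = { 0x234e00000ULL, 0x166f33000ULL };
phys_addr_t io_tlb_end[SWIOTLB_MAX]   = { 0x23ce00000ULL, 0x1e6f33000ULL };
int swiotlb_nr = SWIOTLB_MAX;

/* Return which buffer contains paddr, or -ENOENT if none does. */
int swiotlb_get_type(phys_addr_t paddr)
{
	int i;

	for (i = 0; i < swiotlb_nr; i++)
		if (paddr >= io_tlb_start[i] && paddr < io_tlb_end[i])
			return i;

	return -ENOENT;
}
```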

Cc: Joe Jin <joe.jin@oracle.com>
Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com>
---
 include/linux/swiotlb.h | 14 ++++++++++++++
 kernel/dma/swiotlb.c    |  2 ++
 2 files changed, 16 insertions(+)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 777046cd4d1b..3d5980d77810 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -3,6 +3,7 @@
 #define __LINUX_SWIOTLB_H
 
 #include <linux/dma-direction.h>
+#include <linux/errno.h>
 #include <linux/init.h>
 #include <linux/types.h>
 #include <linux/limits.h>
@@ -23,6 +24,8 @@ enum swiotlb_t {
 	SWIOTLB_MAX,
 };
 
+extern int swiotlb_nr;
+
 /*
  * Maximum allowable number of contiguous slabs to map,
  * must be a power of 2.  What is the appropriate value ?
@@ -84,6 +87,17 @@ static inline bool is_swiotlb_buffer(phys_addr_t paddr)
 	       paddr < io_tlb_end[SWIOTLB_LO];
 }
 
+static inline int swiotlb_get_type(phys_addr_t paddr)
+{
+	int i;
+
+	for (i = 0; i < swiotlb_nr; i++)
+		if (paddr >= io_tlb_start[i] && paddr < io_tlb_end[i])
+			return i;
+
+	return -ENOENT;
+}
+
 void __init swiotlb_exit(void);
 unsigned int swiotlb_max_segment(void);
 size_t swiotlb_max_mapping_size(struct device *dev);
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 1fbb65daa2dd..c91d3d2c3936 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -109,6 +109,8 @@ static DEFINE_SPINLOCK(io_tlb_lock);
 
 static int late_alloc;
 
+int swiotlb_nr = 1;
+
 static int __init
 setup_io_tlb_npages(char *str)
 {
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Feb 03 23:39:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 23:39:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Dongli Zhang <dongli.zhang@oracle.com>
To: dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
        iommu@lists.linux-foundation.org, linux-mips@vger.kernel.org,
        linux-mmc@vger.kernel.org, linux-pci@vger.kernel.org,
        linuxppc-dev@lists.ozlabs.org, nouveau@lists.freedesktop.org,
        x86@kernel.org, xen-devel@lists.xenproject.org
Cc: linux-kernel@vger.kernel.org, adrian.hunter@intel.com,
        akpm@linux-foundation.org, benh@kernel.crashing.org,
        bskeggs@redhat.com, bhelgaas@google.com, bp@alien8.de,
        boris.ostrovsky@oracle.com, hch@lst.de, chris@chris-wilson.co.uk,
        daniel@ffwll.ch, airlied@linux.ie, hpa@zytor.com, mingo@kernel.org,
        mingo@redhat.com, jani.nikula@linux.intel.com,
        joonas.lahtinen@linux.intel.com, jgross@suse.com,
        konrad.wilk@oracle.com, m.szyprowski@samsung.com,
        matthew.auld@intel.com, mpe@ellerman.id.au, rppt@kernel.org,
        paulus@samba.org, peterz@infradead.org, robin.murphy@arm.com,
        rodrigo.vivi@intel.com, sstabellini@kernel.org, bauerman@linux.ibm.com,
        tsbogend@alpha.franken.de, tglx@linutronix.de, ulf.hansson@linaro.org,
        joe.jin@oracle.com, thomas.lendacky@amd.com
Subject: [PATCH RFC v1 2/6] swiotlb: convert variables to arrays
Date: Wed,  3 Feb 2021 15:37:05 -0800
Message-Id: <20210203233709.19819-3-dongli.zhang@oracle.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210203233709.19819-1-dongli.zhang@oracle.com>
References: <20210203233709.19819-1-dongli.zhang@oracle.com>
Content-Type: text/plain
MIME-Version: 1.0

Convert several swiotlb-related variables to arrays so that separate
state can be maintained for each swiotlb buffer. The variables
involved are:

- io_tlb_start and io_tlb_end
- io_tlb_nslabs and io_tlb_used
- io_tlb_list
- io_tlb_index
- max_segment
- io_tlb_orig_addr
- no_iotlb_memory

There is no functional change; this prepares for enabling 64-bit
swiotlb.

Cc: Joe Jin <joe.jin@oracle.com>
Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com>
---
 arch/powerpc/platforms/pseries/svm.c |   6 +-
 drivers/xen/swiotlb-xen.c            |   4 +-
 include/linux/swiotlb.h              |   5 +-
 kernel/dma/swiotlb.c                 | 257 ++++++++++++++-------------
 4 files changed, 140 insertions(+), 132 deletions(-)

diff --git a/arch/powerpc/platforms/pseries/svm.c b/arch/powerpc/platforms/pseries/svm.c
index 7b739cc7a8a9..9f8842d0da1f 100644
--- a/arch/powerpc/platforms/pseries/svm.c
+++ b/arch/powerpc/platforms/pseries/svm.c
@@ -55,9 +55,9 @@ void __init svm_swiotlb_init(void)
 	if (vstart && !swiotlb_init_with_tbl(vstart, io_tlb_nslabs, false))
 		return;
 
-	if (io_tlb_start)
-		memblock_free_early(io_tlb_start,
-				    PAGE_ALIGN(io_tlb_nslabs << IO_TLB_SHIFT));
+	if (io_tlb_start[SWIOTLB_LO])
+		memblock_free_early(io_tlb_start[SWIOTLB_LO],
+				    PAGE_ALIGN(io_tlb_nslabs[SWIOTLB_LO] << IO_TLB_SHIFT));
 	panic("SVM: Cannot allocate SWIOTLB buffer");
 }
 
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 2b385c1b4a99..3261880ad859 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -192,8 +192,8 @@ int __ref xen_swiotlb_init(int verbose, bool early)
 	/*
 	 * IO TLB memory already allocated. Just use it.
 	 */
-	if (io_tlb_start != 0) {
-		xen_io_tlb_start = phys_to_virt(io_tlb_start);
+	if (io_tlb_start[SWIOTLB_LO] != 0) {
+		xen_io_tlb_start = phys_to_virt(io_tlb_start[SWIOTLB_LO]);
 		goto end;
 	}
 
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index ca125c1b1281..777046cd4d1b 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -76,11 +76,12 @@ dma_addr_t swiotlb_map(struct device *dev, phys_addr_t phys,
 
 #ifdef CONFIG_SWIOTLB
 extern enum swiotlb_force swiotlb_force;
-extern phys_addr_t io_tlb_start, io_tlb_end;
+extern phys_addr_t io_tlb_start[], io_tlb_end[];
 
 static inline bool is_swiotlb_buffer(phys_addr_t paddr)
 {
-	return paddr >= io_tlb_start && paddr < io_tlb_end;
+	return paddr >= io_tlb_start[SWIOTLB_LO] &&
+	       paddr < io_tlb_end[SWIOTLB_LO];
 }
 
 void __init swiotlb_exit(void);
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 7c42df6e6100..1fbb65daa2dd 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -69,38 +69,38 @@ enum swiotlb_force swiotlb_force;
  * swiotlb_tbl_sync_single_*, to see if the memory was in fact allocated by this
  * API.
  */
-phys_addr_t io_tlb_start, io_tlb_end;
+phys_addr_t io_tlb_start[SWIOTLB_MAX], io_tlb_end[SWIOTLB_MAX];
 
 /*
  * The number of IO TLB blocks (in groups of 64) between io_tlb_start and
  * io_tlb_end.  This is command line adjustable via setup_io_tlb_npages.
  */
-static unsigned long io_tlb_nslabs;
+static unsigned long io_tlb_nslabs[SWIOTLB_MAX];
 
 /*
  * The number of used IO TLB block
  */
-static unsigned long io_tlb_used;
+static unsigned long io_tlb_used[SWIOTLB_MAX];
 
 /*
  * This is a free list describing the number of free entries available from
  * each index
  */
-static unsigned int *io_tlb_list;
-static unsigned int io_tlb_index;
+static unsigned int *io_tlb_list[SWIOTLB_MAX];
+static unsigned int io_tlb_index[SWIOTLB_MAX];
 
 /*
  * Max segment that we can provide which (if pages are contingous) will
  * not be bounced (unless SWIOTLB_FORCE is set).
  */
-static unsigned int max_segment;
+static unsigned int max_segment[SWIOTLB_MAX];
 
 /*
  * We need to save away the original address corresponding to a mapped entry
  * for the sync operations.
  */
 #define INVALID_PHYS_ADDR (~(phys_addr_t)0)
-static phys_addr_t *io_tlb_orig_addr;
+static phys_addr_t *io_tlb_orig_addr[SWIOTLB_MAX];
 
 /*
  * Protect the above data structures in the map and unmap calls
@@ -113,9 +113,9 @@ static int __init
 setup_io_tlb_npages(char *str)
 {
 	if (isdigit(*str)) {
-		io_tlb_nslabs = simple_strtoul(str, &str, 0);
+		io_tlb_nslabs[SWIOTLB_LO] = simple_strtoul(str, &str, 0);
 		/* avoid tail segment of size < IO_TLB_SEGSIZE */
-		io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
+		io_tlb_nslabs[SWIOTLB_LO] = ALIGN(io_tlb_nslabs[SWIOTLB_LO], IO_TLB_SEGSIZE);
 	}
 	if (*str == ',')
 		++str;
@@ -123,40 +123,40 @@ setup_io_tlb_npages(char *str)
 		swiotlb_force = SWIOTLB_FORCE;
 	} else if (!strcmp(str, "noforce")) {
 		swiotlb_force = SWIOTLB_NO_FORCE;
-		io_tlb_nslabs = 1;
+		io_tlb_nslabs[SWIOTLB_LO] = 1;
 	}
 
 	return 0;
 }
 early_param("swiotlb", setup_io_tlb_npages);
 
-static bool no_iotlb_memory;
+static bool no_iotlb_memory[SWIOTLB_MAX];
 
 unsigned long swiotlb_nr_tbl(void)
 {
-	return unlikely(no_iotlb_memory) ? 0 : io_tlb_nslabs;
+	return unlikely(no_iotlb_memory[SWIOTLB_LO]) ? 0 : io_tlb_nslabs[SWIOTLB_LO];
 }
 EXPORT_SYMBOL_GPL(swiotlb_nr_tbl);
 
 unsigned int swiotlb_max_segment(void)
 {
-	return unlikely(no_iotlb_memory) ? 0 : max_segment;
+	return unlikely(no_iotlb_memory[SWIOTLB_LO]) ? 0 : max_segment[SWIOTLB_LO];
 }
 EXPORT_SYMBOL_GPL(swiotlb_max_segment);
 
 void swiotlb_set_max_segment(unsigned int val)
 {
 	if (swiotlb_force == SWIOTLB_FORCE)
-		max_segment = 1;
+		max_segment[SWIOTLB_LO] = 1;
 	else
-		max_segment = rounddown(val, PAGE_SIZE);
+		max_segment[SWIOTLB_LO] = rounddown(val, PAGE_SIZE);
 }
 
 unsigned long swiotlb_size_or_default(void)
 {
 	unsigned long size;
 
-	size = io_tlb_nslabs << IO_TLB_SHIFT;
+	size = io_tlb_nslabs[SWIOTLB_LO] << IO_TLB_SHIFT;
 
 	return size ? size : (IO_TLB_DEFAULT_SIZE);
 }
@@ -170,10 +170,10 @@ void __init swiotlb_adjust_size(unsigned long new_size)
 	 * architectures such as those supporting memory encryption to
 	 * adjust/expand SWIOTLB size for their use.
 	 */
-	if (!io_tlb_nslabs) {
+	if (!io_tlb_nslabs[SWIOTLB_LO]) {
 		size = ALIGN(new_size, 1 << IO_TLB_SHIFT);
-		io_tlb_nslabs = size >> IO_TLB_SHIFT;
-		io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
+		io_tlb_nslabs[SWIOTLB_LO] = size >> IO_TLB_SHIFT;
+		io_tlb_nslabs[SWIOTLB_LO] = ALIGN(io_tlb_nslabs[SWIOTLB_LO], IO_TLB_SEGSIZE);
 
 		pr_info("SWIOTLB bounce buffer size adjusted to %luMB", size >> 20);
 	}
@@ -181,15 +181,16 @@ void __init swiotlb_adjust_size(unsigned long new_size)
 
 void swiotlb_print_info(void)
 {
-	unsigned long bytes = io_tlb_nslabs << IO_TLB_SHIFT;
+	unsigned long bytes = io_tlb_nslabs[SWIOTLB_LO] << IO_TLB_SHIFT;
 
-	if (no_iotlb_memory) {
+	if (no_iotlb_memory[SWIOTLB_LO]) {
 		pr_warn("No low mem\n");
 		return;
 	}
 
-	pr_info("mapped [mem %pa-%pa] (%luMB)\n", &io_tlb_start, &io_tlb_end,
-	       bytes >> 20);
+	pr_info("mapped [mem %pa-%pa] (%luMB)\n",
+		&io_tlb_start[SWIOTLB_LO], &io_tlb_end[SWIOTLB_LO],
+		bytes >> 20);
 }
 
 /*
@@ -203,11 +204,11 @@ void __init swiotlb_update_mem_attributes(void)
 	void *vaddr;
 	unsigned long bytes;
 
-	if (no_iotlb_memory || late_alloc)
+	if (no_iotlb_memory[SWIOTLB_LO] || late_alloc)
 		return;
 
-	vaddr = phys_to_virt(io_tlb_start);
-	bytes = PAGE_ALIGN(io_tlb_nslabs << IO_TLB_SHIFT);
+	vaddr = phys_to_virt(io_tlb_start[SWIOTLB_LO]);
+	bytes = PAGE_ALIGN(io_tlb_nslabs[SWIOTLB_LO] << IO_TLB_SHIFT);
 	set_memory_decrypted((unsigned long)vaddr, bytes >> PAGE_SHIFT);
 	memset(vaddr, 0, bytes);
 }
@@ -219,38 +220,38 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
 
 	bytes = nslabs << IO_TLB_SHIFT;
 
-	io_tlb_nslabs = nslabs;
-	io_tlb_start = __pa(tlb);
-	io_tlb_end = io_tlb_start + bytes;
+	io_tlb_nslabs[SWIOTLB_LO] = nslabs;
+	io_tlb_start[SWIOTLB_LO] = __pa(tlb);
+	io_tlb_end[SWIOTLB_LO] = io_tlb_start[SWIOTLB_LO] + bytes;
 
 	/*
 	 * Allocate and initialize the free list array.  This array is used
 	 * to find contiguous free memory regions of size up to IO_TLB_SEGSIZE
 	 * between io_tlb_start and io_tlb_end.
 	 */
-	alloc_size = PAGE_ALIGN(io_tlb_nslabs * sizeof(int));
-	io_tlb_list = memblock_alloc(alloc_size, PAGE_SIZE);
-	if (!io_tlb_list)
+	alloc_size = PAGE_ALIGN(io_tlb_nslabs[SWIOTLB_LO] * sizeof(int));
+	io_tlb_list[SWIOTLB_LO] = memblock_alloc(alloc_size, PAGE_SIZE);
+	if (!io_tlb_list[SWIOTLB_LO])
 		panic("%s: Failed to allocate %zu bytes align=0x%lx\n",
 		      __func__, alloc_size, PAGE_SIZE);
 
-	alloc_size = PAGE_ALIGN(io_tlb_nslabs * sizeof(phys_addr_t));
-	io_tlb_orig_addr = memblock_alloc(alloc_size, PAGE_SIZE);
-	if (!io_tlb_orig_addr)
+	alloc_size = PAGE_ALIGN(io_tlb_nslabs[SWIOTLB_LO] * sizeof(phys_addr_t));
+	io_tlb_orig_addr[SWIOTLB_LO] = memblock_alloc(alloc_size, PAGE_SIZE);
+	if (!io_tlb_orig_addr[SWIOTLB_LO])
 		panic("%s: Failed to allocate %zu bytes align=0x%lx\n",
 		      __func__, alloc_size, PAGE_SIZE);
 
-	for (i = 0; i < io_tlb_nslabs; i++) {
-		io_tlb_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
-		io_tlb_orig_addr[i] = INVALID_PHYS_ADDR;
+	for (i = 0; i < io_tlb_nslabs[SWIOTLB_LO]; i++) {
+		io_tlb_list[SWIOTLB_LO][i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
+		io_tlb_orig_addr[SWIOTLB_LO][i] = INVALID_PHYS_ADDR;
 	}
-	io_tlb_index = 0;
-	no_iotlb_memory = false;
+	io_tlb_index[SWIOTLB_LO] = 0;
+	no_iotlb_memory[SWIOTLB_LO] = false;
 
 	if (verbose)
 		swiotlb_print_info();
 
-	swiotlb_set_max_segment(io_tlb_nslabs << IO_TLB_SHIFT);
+	swiotlb_set_max_segment(io_tlb_nslabs[SWIOTLB_LO] << IO_TLB_SHIFT);
 	return 0;
 }
 
@@ -265,25 +266,25 @@ swiotlb_init(int verbose)
 	unsigned char *vstart;
 	unsigned long bytes;
 
-	if (!io_tlb_nslabs) {
-		io_tlb_nslabs = (default_size >> IO_TLB_SHIFT);
-		io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
+	if (!io_tlb_nslabs[SWIOTLB_LO]) {
+		io_tlb_nslabs[SWIOTLB_LO] = (default_size >> IO_TLB_SHIFT);
+		io_tlb_nslabs[SWIOTLB_LO] = ALIGN(io_tlb_nslabs[SWIOTLB_LO], IO_TLB_SEGSIZE);
 	}
 
-	bytes = io_tlb_nslabs << IO_TLB_SHIFT;
+	bytes = io_tlb_nslabs[SWIOTLB_LO] << IO_TLB_SHIFT;
 
 	/* Get IO TLB memory from the low pages */
 	vstart = memblock_alloc_low(PAGE_ALIGN(bytes), PAGE_SIZE);
-	if (vstart && !swiotlb_init_with_tbl(vstart, io_tlb_nslabs, verbose))
+	if (vstart && !swiotlb_init_with_tbl(vstart, io_tlb_nslabs[SWIOTLB_LO], verbose))
 		return;
 
-	if (io_tlb_start) {
-		memblock_free_early(io_tlb_start,
-				    PAGE_ALIGN(io_tlb_nslabs << IO_TLB_SHIFT));
-		io_tlb_start = 0;
+	if (io_tlb_start[SWIOTLB_LO]) {
+		memblock_free_early(io_tlb_start[SWIOTLB_LO],
+				    PAGE_ALIGN(io_tlb_nslabs[SWIOTLB_LO] << IO_TLB_SHIFT));
+		io_tlb_start[SWIOTLB_LO] = 0;
 	}
 	pr_warn("Cannot allocate buffer");
-	no_iotlb_memory = true;
+	no_iotlb_memory[SWIOTLB_LO] = true;
 }
 
 /*
@@ -294,22 +295,22 @@ swiotlb_init(int verbose)
 int
 swiotlb_late_init_with_default_size(size_t default_size)
 {
-	unsigned long bytes, req_nslabs = io_tlb_nslabs;
+	unsigned long bytes, req_nslabs = io_tlb_nslabs[SWIOTLB_LO];
 	unsigned char *vstart = NULL;
 	unsigned int order;
 	int rc = 0;
 
-	if (!io_tlb_nslabs) {
-		io_tlb_nslabs = (default_size >> IO_TLB_SHIFT);
-		io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
+	if (!io_tlb_nslabs[SWIOTLB_LO]) {
+		io_tlb_nslabs[SWIOTLB_LO] = (default_size >> IO_TLB_SHIFT);
+		io_tlb_nslabs[SWIOTLB_LO] = ALIGN(io_tlb_nslabs[SWIOTLB_LO], IO_TLB_SEGSIZE);
 	}
 
 	/*
 	 * Get IO TLB memory from the low pages
 	 */
-	order = get_order(io_tlb_nslabs << IO_TLB_SHIFT);
-	io_tlb_nslabs = SLABS_PER_PAGE << order;
-	bytes = io_tlb_nslabs << IO_TLB_SHIFT;
+	order = get_order(io_tlb_nslabs[SWIOTLB_LO] << IO_TLB_SHIFT);
+	io_tlb_nslabs[SWIOTLB_LO] = SLABS_PER_PAGE << order;
+	bytes = io_tlb_nslabs[SWIOTLB_LO] << IO_TLB_SHIFT;
 
 	while ((SLABS_PER_PAGE << order) > IO_TLB_MIN_SLABS) {
 		vstart = (void *)__get_free_pages(GFP_DMA | __GFP_NOWARN,
@@ -320,15 +321,15 @@ swiotlb_late_init_with_default_size(size_t default_size)
 	}
 
 	if (!vstart) {
-		io_tlb_nslabs = req_nslabs;
+		io_tlb_nslabs[SWIOTLB_LO] = req_nslabs;
 		return -ENOMEM;
 	}
 	if (order != get_order(bytes)) {
 		pr_warn("only able to allocate %ld MB\n",
 			(PAGE_SIZE << order) >> 20);
-		io_tlb_nslabs = SLABS_PER_PAGE << order;
+		io_tlb_nslabs[SWIOTLB_LO] = SLABS_PER_PAGE << order;
 	}
-	rc = swiotlb_late_init_with_tbl(vstart, io_tlb_nslabs);
+	rc = swiotlb_late_init_with_tbl(vstart, io_tlb_nslabs[SWIOTLB_LO]);
 	if (rc)
 		free_pages((unsigned long)vstart, order);
 
@@ -337,10 +338,10 @@ swiotlb_late_init_with_default_size(size_t default_size)
 
 static void swiotlb_cleanup(void)
 {
-	io_tlb_end = 0;
-	io_tlb_start = 0;
-	io_tlb_nslabs = 0;
-	max_segment = 0;
+	io_tlb_end[SWIOTLB_LO] = 0;
+	io_tlb_start[SWIOTLB_LO] = 0;
+	io_tlb_nslabs[SWIOTLB_LO] = 0;
+	max_segment[SWIOTLB_LO] = 0;
 }
 
 int
@@ -350,9 +351,9 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 
 	bytes = nslabs << IO_TLB_SHIFT;
 
-	io_tlb_nslabs = nslabs;
-	io_tlb_start = virt_to_phys(tlb);
-	io_tlb_end = io_tlb_start + bytes;
+	io_tlb_nslabs[SWIOTLB_LO] = nslabs;
+	io_tlb_start[SWIOTLB_LO] = virt_to_phys(tlb);
+	io_tlb_end[SWIOTLB_LO] = io_tlb_start[SWIOTLB_LO] + bytes;
 
 	set_memory_decrypted((unsigned long)tlb, bytes >> PAGE_SHIFT);
 	memset(tlb, 0, bytes);
@@ -362,37 +363,37 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 	 * to find contiguous free memory regions of size up to IO_TLB_SEGSIZE
 	 * between io_tlb_start and io_tlb_end.
 	 */
-	io_tlb_list = (unsigned int *)__get_free_pages(GFP_KERNEL,
-	                              get_order(io_tlb_nslabs * sizeof(int)));
-	if (!io_tlb_list)
+	io_tlb_list[SWIOTLB_LO] = (unsigned int *)__get_free_pages(GFP_KERNEL,
+			get_order(io_tlb_nslabs[SWIOTLB_LO] * sizeof(int)));
+	if (!io_tlb_list[SWIOTLB_LO])
 		goto cleanup3;
 
-	io_tlb_orig_addr = (phys_addr_t *)
+	io_tlb_orig_addr[SWIOTLB_LO] = (phys_addr_t *)
 		__get_free_pages(GFP_KERNEL,
-				 get_order(io_tlb_nslabs *
+				 get_order(io_tlb_nslabs[SWIOTLB_LO] *
 					   sizeof(phys_addr_t)));
-	if (!io_tlb_orig_addr)
+	if (!io_tlb_orig_addr[SWIOTLB_LO])
 		goto cleanup4;
 
-	for (i = 0; i < io_tlb_nslabs; i++) {
-		io_tlb_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
-		io_tlb_orig_addr[i] = INVALID_PHYS_ADDR;
+	for (i = 0; i < io_tlb_nslabs[SWIOTLB_LO]; i++) {
+		io_tlb_list[SWIOTLB_LO][i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
+		io_tlb_orig_addr[SWIOTLB_LO][i] = INVALID_PHYS_ADDR;
 	}
-	io_tlb_index = 0;
-	no_iotlb_memory = false;
+	io_tlb_index[SWIOTLB_LO] = 0;
+	no_iotlb_memory[SWIOTLB_LO] = false;
 
 	swiotlb_print_info();
 
 	late_alloc = 1;
 
-	swiotlb_set_max_segment(io_tlb_nslabs << IO_TLB_SHIFT);
+	swiotlb_set_max_segment(io_tlb_nslabs[SWIOTLB_LO] << IO_TLB_SHIFT);
 
 	return 0;
 
 cleanup4:
-	free_pages((unsigned long)io_tlb_list, get_order(io_tlb_nslabs *
+	free_pages((unsigned long)io_tlb_list[SWIOTLB_LO], get_order(io_tlb_nslabs[SWIOTLB_LO] *
 	                                                 sizeof(int)));
-	io_tlb_list = NULL;
+	io_tlb_list[SWIOTLB_LO] = NULL;
 cleanup3:
 	swiotlb_cleanup();
 	return -ENOMEM;
@@ -400,23 +401,23 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 
 void __init swiotlb_exit(void)
 {
-	if (!io_tlb_orig_addr)
+	if (!io_tlb_orig_addr[SWIOTLB_LO])
 		return;
 
 	if (late_alloc) {
-		free_pages((unsigned long)io_tlb_orig_addr,
-			   get_order(io_tlb_nslabs * sizeof(phys_addr_t)));
-		free_pages((unsigned long)io_tlb_list, get_order(io_tlb_nslabs *
-								 sizeof(int)));
-		free_pages((unsigned long)phys_to_virt(io_tlb_start),
-			   get_order(io_tlb_nslabs << IO_TLB_SHIFT));
+		free_pages((unsigned long)io_tlb_orig_addr[SWIOTLB_LO],
+			   get_order(io_tlb_nslabs[SWIOTLB_LO] * sizeof(phys_addr_t)));
+		free_pages((unsigned long)io_tlb_list[SWIOTLB_LO],
+			   get_order(io_tlb_nslabs[SWIOTLB_LO] * sizeof(int)));
+		free_pages((unsigned long)phys_to_virt(io_tlb_start[SWIOTLB_LO]),
+			   get_order(io_tlb_nslabs[SWIOTLB_LO] << IO_TLB_SHIFT));
 	} else {
-		memblock_free_late(__pa(io_tlb_orig_addr),
-				   PAGE_ALIGN(io_tlb_nslabs * sizeof(phys_addr_t)));
-		memblock_free_late(__pa(io_tlb_list),
-				   PAGE_ALIGN(io_tlb_nslabs * sizeof(int)));
-		memblock_free_late(io_tlb_start,
-				   PAGE_ALIGN(io_tlb_nslabs << IO_TLB_SHIFT));
+		memblock_free_late(__pa(io_tlb_orig_addr[SWIOTLB_LO]),
+				   PAGE_ALIGN(io_tlb_nslabs[SWIOTLB_LO] * sizeof(phys_addr_t)));
+		memblock_free_late(__pa(io_tlb_list[SWIOTLB_LO]),
+				   PAGE_ALIGN(io_tlb_nslabs[SWIOTLB_LO] * sizeof(int)));
+		memblock_free_late(io_tlb_start[SWIOTLB_LO],
+				   PAGE_ALIGN(io_tlb_nslabs[SWIOTLB_LO] << IO_TLB_SHIFT));
 	}
 	swiotlb_cleanup();
 }
@@ -465,7 +466,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
 		size_t mapping_size, size_t alloc_size,
 		enum dma_data_direction dir, unsigned long attrs)
 {
-	dma_addr_t tbl_dma_addr = phys_to_dma_unencrypted(hwdev, io_tlb_start);
+	dma_addr_t tbl_dma_addr = phys_to_dma_unencrypted(hwdev, io_tlb_start[SWIOTLB_LO]);
 	unsigned long flags;
 	phys_addr_t tlb_addr;
 	unsigned int nslots, stride, index, wrap;
@@ -475,7 +476,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
 	unsigned long max_slots;
 	unsigned long tmp_io_tlb_used;
 
-	if (no_iotlb_memory)
+	if (no_iotlb_memory[SWIOTLB_LO])
 		panic("Can not allocate SWIOTLB buffer earlier and can't now provide you with the DMA bounce buffer");
 
 	if (mem_encrypt_active())
@@ -518,11 +519,11 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
 	 */
 	spin_lock_irqsave(&io_tlb_lock, flags);
 
-	if (unlikely(nslots > io_tlb_nslabs - io_tlb_used))
+	if (unlikely(nslots > io_tlb_nslabs[SWIOTLB_LO] - io_tlb_used[SWIOTLB_LO]))
 		goto not_found;
 
-	index = ALIGN(io_tlb_index, stride);
-	if (index >= io_tlb_nslabs)
+	index = ALIGN(io_tlb_index[SWIOTLB_LO], stride);
+	if (index >= io_tlb_nslabs[SWIOTLB_LO])
 		index = 0;
 	wrap = index;
 
@@ -530,7 +531,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
 		while (iommu_is_span_boundary(index, nslots, offset_slots,
 					      max_slots)) {
 			index += stride;
-			if (index >= io_tlb_nslabs)
+			if (index >= io_tlb_nslabs[SWIOTLB_LO])
 				index = 0;
 			if (index == wrap)
 				goto not_found;
@@ -541,39 +542,42 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
 		 * contiguous buffers, we allocate the buffers from that slot
 		 * and mark the entries as '0' indicating unavailable.
 		 */
-		if (io_tlb_list[index] >= nslots) {
+		if (io_tlb_list[SWIOTLB_LO][index] >= nslots) {
 			int count = 0;
 
 			for (i = index; i < (int) (index + nslots); i++)
-				io_tlb_list[i] = 0;
-			for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE - 1) && io_tlb_list[i]; i--)
-				io_tlb_list[i] = ++count;
-			tlb_addr = io_tlb_start + (index << IO_TLB_SHIFT);
+				io_tlb_list[SWIOTLB_LO][i] = 0;
+			for (i = index - 1;
+			     (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE - 1) &&
+			     io_tlb_list[SWIOTLB_LO][i];
+			     i--)
+				io_tlb_list[SWIOTLB_LO][i] = ++count;
+			tlb_addr = io_tlb_start[SWIOTLB_LO] + (index << IO_TLB_SHIFT);
 
 			/*
 			 * Update the indices to avoid searching in the next
 			 * round.
 			 */
-			io_tlb_index = ((index + nslots) < io_tlb_nslabs
+			io_tlb_index[SWIOTLB_LO] = ((index + nslots) < io_tlb_nslabs[SWIOTLB_LO]
 					? (index + nslots) : 0);
 
 			goto found;
 		}
 		index += stride;
-		if (index >= io_tlb_nslabs)
+		if (index >= io_tlb_nslabs[SWIOTLB_LO])
 			index = 0;
 	} while (index != wrap);
 
 not_found:
-	tmp_io_tlb_used = io_tlb_used;
+	tmp_io_tlb_used = io_tlb_used[SWIOTLB_LO];
 
 	spin_unlock_irqrestore(&io_tlb_lock, flags);
 	if (!(attrs & DMA_ATTR_NO_WARN) && printk_ratelimit())
 		dev_warn(hwdev, "swiotlb buffer is full (sz: %zd bytes), total %lu (slots), used %lu (slots)\n",
-			 alloc_size, io_tlb_nslabs, tmp_io_tlb_used);
+			 alloc_size, io_tlb_nslabs[SWIOTLB_LO], tmp_io_tlb_used);
 	return (phys_addr_t)DMA_MAPPING_ERROR;
 found:
-	io_tlb_used += nslots;
+	io_tlb_used[SWIOTLB_LO] += nslots;
 	spin_unlock_irqrestore(&io_tlb_lock, flags);
 
 	/*
@@ -582,7 +586,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
 	 * needed.
 	 */
 	for (i = 0; i < nslots; i++)
-		io_tlb_orig_addr[index+i] = orig_addr + (i << IO_TLB_SHIFT);
+		io_tlb_orig_addr[SWIOTLB_LO][index+i] = orig_addr + (i << IO_TLB_SHIFT);
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
 	    (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
 		swiotlb_bounce(orig_addr, tlb_addr, mapping_size, DMA_TO_DEVICE);
@@ -599,8 +603,8 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 {
 	unsigned long flags;
 	int i, count, nslots = ALIGN(alloc_size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
-	int index = (tlb_addr - io_tlb_start) >> IO_TLB_SHIFT;
-	phys_addr_t orig_addr = io_tlb_orig_addr[index];
+	int index = (tlb_addr - io_tlb_start[SWIOTLB_LO]) >> IO_TLB_SHIFT;
+	phys_addr_t orig_addr = io_tlb_orig_addr[SWIOTLB_LO][index];
 
 	/*
 	 * First, sync the memory before unmapping the entry
@@ -619,23 +623,26 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 	spin_lock_irqsave(&io_tlb_lock, flags);
 	{
 		count = ((index + nslots) < ALIGN(index + 1, IO_TLB_SEGSIZE) ?
-			 io_tlb_list[index + nslots] : 0);
+			 io_tlb_list[SWIOTLB_LO][index + nslots] : 0);
 		/*
 		 * Step 1: return the slots to the free list, merging the
 		 * slots with superceeding slots
 		 */
 		for (i = index + nslots - 1; i >= index; i--) {
-			io_tlb_list[i] = ++count;
-			io_tlb_orig_addr[i] = INVALID_PHYS_ADDR;
+			io_tlb_list[SWIOTLB_LO][i] = ++count;
+			io_tlb_orig_addr[SWIOTLB_LO][i] = INVALID_PHYS_ADDR;
 		}
 		/*
 		 * Step 2: merge the returned slots with the preceding slots,
 		 * if available (non zero)
 		 */
-		for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE -1) && io_tlb_list[i]; i--)
-			io_tlb_list[i] = ++count;
+		for (i = index - 1;
+		     (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE - 1) &&
+		     io_tlb_list[SWIOTLB_LO][i];
+		     i--)
+			io_tlb_list[SWIOTLB_LO][i] = ++count;
 
-		io_tlb_used -= nslots;
+		io_tlb_used[SWIOTLB_LO] -= nslots;
 	}
 	spin_unlock_irqrestore(&io_tlb_lock, flags);
 }
@@ -644,8 +651,8 @@ void swiotlb_tbl_sync_single(struct device *hwdev, phys_addr_t tlb_addr,
 			     size_t size, enum dma_data_direction dir,
 			     enum dma_sync_target target)
 {
-	int index = (tlb_addr - io_tlb_start) >> IO_TLB_SHIFT;
-	phys_addr_t orig_addr = io_tlb_orig_addr[index];
+	int index = (tlb_addr - io_tlb_start[SWIOTLB_LO]) >> IO_TLB_SHIFT;
+	phys_addr_t orig_addr = io_tlb_orig_addr[SWIOTLB_LO][index];
 
 	if (orig_addr == INVALID_PHYS_ADDR)
 		return;
@@ -716,7 +723,7 @@ bool is_swiotlb_active(void)
 	 * When SWIOTLB is initialized, even if io_tlb_start points to physical
 	 * address zero, io_tlb_end surely doesn't.
 	 */
-	return io_tlb_end != 0;
+	return io_tlb_end[SWIOTLB_LO] != 0;
 }
 
 #ifdef CONFIG_DEBUG_FS
@@ -726,8 +733,8 @@ static int __init swiotlb_create_debugfs(void)
 	struct dentry *root;
 
 	root = debugfs_create_dir("swiotlb", NULL);
-	debugfs_create_ulong("io_tlb_nslabs", 0400, root, &io_tlb_nslabs);
-	debugfs_create_ulong("io_tlb_used", 0400, root, &io_tlb_used);
+	debugfs_create_ulong("io_tlb_nslabs", 0400, root, &io_tlb_nslabs[SWIOTLB_LO]);
+	debugfs_create_ulong("io_tlb_used", 0400, root, &io_tlb_used[SWIOTLB_LO]);
 	return 0;
 }
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Feb 03 23:39:16 2021
From: Dongli Zhang <dongli.zhang@oracle.com>
To: dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
        iommu@lists.linux-foundation.org, linux-mips@vger.kernel.org,
        linux-mmc@vger.kernel.org, linux-pci@vger.kernel.org,
        linuxppc-dev@lists.ozlabs.org, nouveau@lists.freedesktop.org,
        x86@kernel.org, xen-devel@lists.xenproject.org
Cc: linux-kernel@vger.kernel.org, adrian.hunter@intel.com,
        akpm@linux-foundation.org, benh@kernel.crashing.org,
        bskeggs@redhat.com, bhelgaas@google.com, bp@alien8.de,
        boris.ostrovsky@oracle.com, hch@lst.de, chris@chris-wilson.co.uk,
        daniel@ffwll.ch, airlied@linux.ie, hpa@zytor.com, mingo@kernel.org,
        mingo@redhat.com, jani.nikula@linux.intel.com,
        joonas.lahtinen@linux.intel.com, jgross@suse.com,
        konrad.wilk@oracle.com, m.szyprowski@samsung.com,
        matthew.auld@intel.com, mpe@ellerman.id.au, rppt@kernel.org,
        paulus@samba.org, peterz@infradead.org, robin.murphy@arm.com,
        rodrigo.vivi@intel.com, sstabellini@kernel.org, bauerman@linux.ibm.com,
        tsbogend@alpha.franken.de, tglx@linutronix.de, ulf.hansson@linaro.org,
        joe.jin@oracle.com, thomas.lendacky@amd.com
Subject: [PATCH RFC v1 4/6] swiotlb: enable 64-bit swiotlb
Date: Wed,  3 Feb 2021 15:37:07 -0800
Message-Id: <20210203233709.19819-5-dongli.zhang@oracle.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210203233709.19819-1-dongli.zhang@oracle.com>
References: <20210203233709.19819-1-dongli.zhang@oracle.com>
Content-Type: text/plain
MIME-Version: 1.0

This patch enables a 64-bit swiotlb buffer.

The current swiotlb pre-allocates <=32-bit memory in order to meet the DMA
mask requirement of legacy 32-bit devices. Since most devices nowadays
support 64-bit DMA and an IOMMU is usually available, swiotlb is rarely
used, except:

1. A Xen PV domain requires DMA addresses to be both (1) below the DMA
mask and (2) contiguous in machine address space. Therefore, a 64-bit
device may still require swiotlb in a PV domain.

2. AMD SME/SEV enables SWIOTLB_FORCE, so allocations always go through
the swiotlb buffer even when the device DMA mask is 64-bit:

sme_early_init()
-> if (sev_active())
   swiotlb_force = SWIOTLB_FORCE;

Therefore, this patch introduces a second swiotlb buffer for 64-bit DMA
access. For instance, swiotlb_tbl_map_single() allocates from the second
(64-bit) buffer when the effective device DMA mask,
min_not_zero(*hwdev->dma_mask, hwdev->bus_dma_limit), covers the full
64-bit range.

An example configuration of the 64-bit swiotlb is "swiotlb=65536,524288,force"
or "swiotlb=,524288,force", where 524288 is the slab count of the 64-bit buffer.

With this patch, the kernel is able to allocate a swiotlb buffer larger
than 4GB. The change covers swiotlb only, not xen-swiotlb.

Cc: Joe Jin <joe.jin@oracle.com>
Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com>
---
 arch/mips/cavium-octeon/dma-octeon.c         |   3 +-
 arch/powerpc/kernel/dma-swiotlb.c            |   2 +-
 arch/powerpc/platforms/pseries/svm.c         |   2 +-
 arch/x86/kernel/pci-swiotlb.c                |   5 +-
 arch/x86/pci/sta2x11-fixup.c                 |   2 +-
 drivers/gpu/drm/i915/gem/i915_gem_internal.c |   4 +-
 drivers/gpu/drm/i915/i915_scatterlist.h      |   2 +-
 drivers/gpu/drm/nouveau/nouveau_ttm.c        |   2 +-
 drivers/mmc/host/sdhci.c                     |   2 +-
 drivers/pci/xen-pcifront.c                   |   2 +-
 drivers/xen/swiotlb-xen.c                    |   9 +-
 include/linux/swiotlb.h                      |  28 +-
 kernel/dma/swiotlb.c                         | 339 +++++++++++--------
 13 files changed, 238 insertions(+), 164 deletions(-)

diff --git a/arch/mips/cavium-octeon/dma-octeon.c b/arch/mips/cavium-octeon/dma-octeon.c
index df70308db0e6..3480555d908a 100644
--- a/arch/mips/cavium-octeon/dma-octeon.c
+++ b/arch/mips/cavium-octeon/dma-octeon.c
@@ -245,6 +245,7 @@ void __init plat_swiotlb_setup(void)
 		panic("%s: Failed to allocate %zu bytes align=%lx\n",
 		      __func__, swiotlbsize, PAGE_SIZE);
 
-	if (swiotlb_init_with_tbl(octeon_swiotlb, swiotlb_nslabs, 1) == -ENOMEM)
+	if (swiotlb_init_with_tbl(octeon_swiotlb, swiotlb_nslabs,
+				  SWIOTLB_LO, 1) == -ENOMEM)
 		panic("Cannot allocate SWIOTLB buffer");
 }
diff --git a/arch/powerpc/kernel/dma-swiotlb.c b/arch/powerpc/kernel/dma-swiotlb.c
index fc7816126a40..88113318c53f 100644
--- a/arch/powerpc/kernel/dma-swiotlb.c
+++ b/arch/powerpc/kernel/dma-swiotlb.c
@@ -20,7 +20,7 @@ void __init swiotlb_detect_4g(void)
 static int __init check_swiotlb_enabled(void)
 {
 	if (ppc_swiotlb_enable)
-		swiotlb_print_info();
+		swiotlb_print_info(SWIOTLB_LO);
 	else
 		swiotlb_exit();
 
diff --git a/arch/powerpc/platforms/pseries/svm.c b/arch/powerpc/platforms/pseries/svm.c
index 9f8842d0da1f..77910e4ffad8 100644
--- a/arch/powerpc/platforms/pseries/svm.c
+++ b/arch/powerpc/platforms/pseries/svm.c
@@ -52,7 +52,7 @@ void __init svm_swiotlb_init(void)
 	bytes = io_tlb_nslabs << IO_TLB_SHIFT;
 
 	vstart = memblock_alloc(PAGE_ALIGN(bytes), PAGE_SIZE);
-	if (vstart && !swiotlb_init_with_tbl(vstart, io_tlb_nslabs, false))
+	if (vstart && !swiotlb_init_with_tbl(vstart, io_tlb_nslabs, SWIOTLB_LO, false))
 		return;
 
 	if (io_tlb_start[SWIOTLB_LO])
diff --git a/arch/x86/kernel/pci-swiotlb.c b/arch/x86/kernel/pci-swiotlb.c
index c2cfa5e7c152..950624fd95a4 100644
--- a/arch/x86/kernel/pci-swiotlb.c
+++ b/arch/x86/kernel/pci-swiotlb.c
@@ -67,12 +67,15 @@ void __init pci_swiotlb_init(void)
 
 void __init pci_swiotlb_late_init(void)
 {
+	int i;
+
 	/* An IOMMU turned us off. */
 	if (!swiotlb)
 		swiotlb_exit();
 	else {
 		printk(KERN_INFO "PCI-DMA: "
 		       "Using software bounce buffering for IO (SWIOTLB)\n");
-		swiotlb_print_info();
+		for (i = 0; i < swiotlb_nr; i++)
+			swiotlb_print_info(i);
 	}
 }
diff --git a/arch/x86/pci/sta2x11-fixup.c b/arch/x86/pci/sta2x11-fixup.c
index 7d2525691854..c440520b2055 100644
--- a/arch/x86/pci/sta2x11-fixup.c
+++ b/arch/x86/pci/sta2x11-fixup.c
@@ -57,7 +57,7 @@ static void sta2x11_new_instance(struct pci_dev *pdev)
 		int size = STA2X11_SWIOTLB_SIZE;
 		/* First instance: register your own swiotlb area */
 		dev_info(&pdev->dev, "Using SWIOTLB (size %i)\n", size);
-		if (swiotlb_late_init_with_default_size(size))
+		if (swiotlb_late_init_with_default_size(size, SWIOTLB_LO))
 			dev_emerg(&pdev->dev, "init swiotlb failed\n");
 	}
 	list_add(&instance->list, &sta2x11_instance_list);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.c b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
index ad22f42541bd..947683f2e568 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_internal.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
@@ -42,10 +42,10 @@ static int i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj)
 
 	max_order = MAX_ORDER;
 #ifdef CONFIG_SWIOTLB
-	if (swiotlb_nr_tbl()) {
+	if (swiotlb_nr_tbl(SWIOTLB_LO)) {
 		unsigned int max_segment;
 
-		max_segment = swiotlb_max_segment();
+		max_segment = swiotlb_max_segment(SWIOTLB_LO);
 		if (max_segment) {
 			max_segment = max_t(unsigned int, max_segment,
 					    PAGE_SIZE) >> PAGE_SHIFT;
diff --git a/drivers/gpu/drm/i915/i915_scatterlist.h b/drivers/gpu/drm/i915/i915_scatterlist.h
index 9cb26a224034..c63c7f6941f6 100644
--- a/drivers/gpu/drm/i915/i915_scatterlist.h
+++ b/drivers/gpu/drm/i915/i915_scatterlist.h
@@ -118,7 +118,7 @@ static inline unsigned int i915_sg_page_sizes(struct scatterlist *sg)
 
 static inline unsigned int i915_sg_segment_size(void)
 {
-	unsigned int size = swiotlb_max_segment();
+	unsigned int size = swiotlb_max_segment(SWIOTLB_LO);
 
 	if (size == 0)
 		size = UINT_MAX;
diff --git a/drivers/gpu/drm/nouveau/nouveau_ttm.c b/drivers/gpu/drm/nouveau/nouveau_ttm.c
index a37bc3d7b38b..0919b207ac47 100644
--- a/drivers/gpu/drm/nouveau/nouveau_ttm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_ttm.c
@@ -321,7 +321,7 @@ nouveau_ttm_init(struct nouveau_drm *drm)
 	}
 
 #if IS_ENABLED(CONFIG_SWIOTLB) && IS_ENABLED(CONFIG_X86)
-	need_swiotlb = !!swiotlb_nr_tbl();
+	need_swiotlb = !!swiotlb_nr_tbl(SWIOTLB_LO);
 #endif
 
 	ret = ttm_bo_device_init(&drm->ttm.bdev, &nouveau_bo_driver,
diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
index 646823ddd317..1f7fb912d5a9 100644
--- a/drivers/mmc/host/sdhci.c
+++ b/drivers/mmc/host/sdhci.c
@@ -4582,7 +4582,7 @@ int sdhci_setup_host(struct sdhci_host *host)
 		mmc->max_segs = SDHCI_MAX_SEGS;
 	} else if (host->flags & SDHCI_USE_SDMA) {
 		mmc->max_segs = 1;
-		if (swiotlb_max_segment()) {
+		if (swiotlb_max_segment(SWIOTLB_LO)) {
 			unsigned int max_req_size = (1 << IO_TLB_SHIFT) *
 						IO_TLB_SEGSIZE;
 			mmc->max_req_size = min(mmc->max_req_size,
diff --git a/drivers/pci/xen-pcifront.c b/drivers/pci/xen-pcifront.c
index c6fe0cfec0f6..9509ed9b4126 100644
--- a/drivers/pci/xen-pcifront.c
+++ b/drivers/pci/xen-pcifront.c
@@ -693,7 +693,7 @@ static int pcifront_connect_and_init_dma(struct pcifront_device *pdev)
 
 	spin_unlock(&pcifront_dev_lock);
 
-	if (!err && !swiotlb_nr_tbl()) {
+	if (!err && !swiotlb_nr_tbl(SWIOTLB_LO)) {
 		err = pci_xen_swiotlb_init_late();
 		if (err)
 			dev_err(&pdev->xdev->dev, "Could not setup SWIOTLB!\n");
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 3261880ad859..662638093542 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -184,7 +184,7 @@ int __ref xen_swiotlb_init(int verbose, bool early)
 	enum xen_swiotlb_err m_ret = XEN_SWIOTLB_UNKNOWN;
 	unsigned int repeat = 3;
 
-	xen_io_tlb_nslabs = swiotlb_nr_tbl();
+	xen_io_tlb_nslabs = swiotlb_nr_tbl(SWIOTLB_LO);
 retry:
 	bytes = xen_set_nslabs(xen_io_tlb_nslabs);
 	order = get_order(xen_io_tlb_nslabs << IO_TLB_SHIFT);
@@ -245,16 +245,17 @@ int __ref xen_swiotlb_init(int verbose, bool early)
 	}
 	if (early) {
 		if (swiotlb_init_with_tbl(xen_io_tlb_start, xen_io_tlb_nslabs,
-			 verbose))
+					  SWIOTLB_LO, verbose))
 			panic("Cannot allocate SWIOTLB buffer");
 		rc = 0;
 	} else
-		rc = swiotlb_late_init_with_tbl(xen_io_tlb_start, xen_io_tlb_nslabs);
+		rc = swiotlb_late_init_with_tbl(xen_io_tlb_start,
+						xen_io_tlb_nslabs, SWIOTLB_LO);
 
 end:
 	xen_io_tlb_end = xen_io_tlb_start + bytes;
 	if (!rc)
-		swiotlb_set_max_segment(PAGE_SIZE);
+		swiotlb_set_max_segment(PAGE_SIZE, SWIOTLB_LO);
 
 	return rc;
 error:
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 3d5980d77810..8ba45fddfb14 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -43,11 +43,14 @@ extern int swiotlb_nr;
 #define IO_TLB_DEFAULT_SIZE (64UL<<20)
 
 extern void swiotlb_init(int verbose);
-int swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose);
-extern unsigned long swiotlb_nr_tbl(void);
+int swiotlb_init_with_tbl(char *tlb, unsigned long nslabs,
+			  enum swiotlb_t type, int verbose);
+extern unsigned long swiotlb_nr_tbl(enum swiotlb_t type);
 unsigned long swiotlb_size_or_default(void);
-extern int swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs);
-extern int swiotlb_late_init_with_default_size(size_t default_size);
+extern int swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs,
+				      enum swiotlb_t type);
+extern int swiotlb_late_init_with_default_size(size_t default_size,
+					       enum swiotlb_t type);
 extern void __init swiotlb_update_mem_attributes(void);
 
 /*
@@ -83,8 +86,13 @@ extern phys_addr_t io_tlb_start[], io_tlb_end[];
 
 static inline bool is_swiotlb_buffer(phys_addr_t paddr)
 {
-	return paddr >= io_tlb_start[SWIOTLB_LO] &&
-	       paddr < io_tlb_end[SWIOTLB_LO];
+	int i;
+
+	for (i = 0; i < swiotlb_nr; i++)
+		if (paddr >= io_tlb_start[i] && paddr < io_tlb_end[i])
+			return true;
+
+	return false;
 }
 
 static inline int swiotlb_get_type(phys_addr_t paddr)
@@ -99,7 +107,7 @@ static inline int swiotlb_get_type(phys_addr_t paddr)
 }
 
 void __init swiotlb_exit(void);
-unsigned int swiotlb_max_segment(void);
+unsigned int swiotlb_max_segment(enum swiotlb_t type);
 size_t swiotlb_max_mapping_size(struct device *dev);
 bool is_swiotlb_active(void);
 void __init swiotlb_adjust_size(unsigned long new_size);
@@ -112,7 +120,7 @@ static inline bool is_swiotlb_buffer(phys_addr_t paddr)
 static inline void swiotlb_exit(void)
 {
 }
-static inline unsigned int swiotlb_max_segment(void)
+static inline unsigned int swiotlb_max_segment(enum swiotlb_t type)
 {
 	return 0;
 }
@@ -131,7 +139,7 @@ static inline void swiotlb_adjust_size(unsigned long new_size)
 }
 #endif /* CONFIG_SWIOTLB */
 
-extern void swiotlb_print_info(void);
-extern void swiotlb_set_max_segment(unsigned int);
+extern void swiotlb_print_info(enum swiotlb_t type);
+extern void swiotlb_set_max_segment(unsigned int val, enum swiotlb_t type);
 
 #endif /* __LINUX_SWIOTLB_H */
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index c91d3d2c3936..cd28db5b016a 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -111,6 +111,11 @@ static int late_alloc;
 
 int swiotlb_nr = 1;
 
+static const char * const swiotlb_name[] = {
+	"Low Mem",
+	"High Mem"
+};
+
 static int __init
 setup_io_tlb_npages(char *str)
 {
@@ -121,11 +126,25 @@ setup_io_tlb_npages(char *str)
 	}
 	if (*str == ',')
 		++str;
+
+	if (isdigit(*str)) {
+		io_tlb_nslabs[SWIOTLB_HI] = simple_strtoul(str, &str, 0);
+		/* avoid tail segment of size < IO_TLB_SEGSIZE */
+		io_tlb_nslabs[SWIOTLB_HI] = ALIGN(io_tlb_nslabs[SWIOTLB_HI], IO_TLB_SEGSIZE);
+
+		swiotlb_nr = 2;
+	}
+
+	if (*str == ',')
+		++str;
+
 	if (!strcmp(str, "force")) {
 		swiotlb_force = SWIOTLB_FORCE;
 	} else if (!strcmp(str, "noforce")) {
 		swiotlb_force = SWIOTLB_NO_FORCE;
 		io_tlb_nslabs[SWIOTLB_LO] = 1;
+		if (swiotlb_nr > 1)
+			io_tlb_nslabs[SWIOTLB_HI] = 1;
 	}
 
 	return 0;
@@ -134,24 +153,24 @@ early_param("swiotlb", setup_io_tlb_npages);
 
 static bool no_iotlb_memory[SWIOTLB_MAX];
 
-unsigned long swiotlb_nr_tbl(void)
+unsigned long swiotlb_nr_tbl(enum swiotlb_t type)
 {
-	return unlikely(no_iotlb_memory[SWIOTLB_LO]) ? 0 : io_tlb_nslabs[SWIOTLB_LO];
+	return unlikely(no_iotlb_memory[type]) ? 0 : io_tlb_nslabs[type];
 }
 EXPORT_SYMBOL_GPL(swiotlb_nr_tbl);
 
-unsigned int swiotlb_max_segment(void)
+unsigned int swiotlb_max_segment(enum swiotlb_t type)
 {
-	return unlikely(no_iotlb_memory[SWIOTLB_LO]) ? 0 : max_segment[SWIOTLB_LO];
+	return unlikely(no_iotlb_memory[type]) ? 0 : max_segment[type];
 }
 EXPORT_SYMBOL_GPL(swiotlb_max_segment);
 
-void swiotlb_set_max_segment(unsigned int val)
+void swiotlb_set_max_segment(unsigned int val, enum swiotlb_t type)
 {
 	if (swiotlb_force == SWIOTLB_FORCE)
-		max_segment[SWIOTLB_LO] = 1;
+		max_segment[type] = 1;
 	else
-		max_segment[SWIOTLB_LO] = rounddown(val, PAGE_SIZE);
+		max_segment[type] = rounddown(val, PAGE_SIZE);
 }
 
 unsigned long swiotlb_size_or_default(void)
@@ -181,18 +200,18 @@ void __init swiotlb_adjust_size(unsigned long new_size)
 	}
 }
 
-void swiotlb_print_info(void)
+void swiotlb_print_info(enum swiotlb_t type)
 {
-	unsigned long bytes = io_tlb_nslabs[SWIOTLB_LO] << IO_TLB_SHIFT;
+	unsigned long bytes = io_tlb_nslabs[type] << IO_TLB_SHIFT;
 
-	if (no_iotlb_memory[SWIOTLB_LO]) {
-		pr_warn("No low mem\n");
+	if (no_iotlb_memory[type]) {
+		pr_warn("No low mem with %s\n", swiotlb_name[type]);
 		return;
 	}
 
-	pr_info("mapped [mem %pa-%pa] (%luMB)\n",
-		&io_tlb_start[SWIOTLB_LO], &io_tlb_end[SWIOTLB_LO],
-		bytes >> 20);
+	pr_info("%s mapped [mem %pa-%pa] (%luMB)\n",
+		swiotlb_name[type],
+		&io_tlb_start[type], &io_tlb_end[type], bytes >> 20);
 }
 
 /*
@@ -205,88 +224,104 @@ void __init swiotlb_update_mem_attributes(void)
 {
 	void *vaddr;
 	unsigned long bytes;
+	int i;
 
-	if (no_iotlb_memory[SWIOTLB_LO] || late_alloc)
-		return;
+	for (i = 0; i < swiotlb_nr; i++) {
+		if (no_iotlb_memory[i] || late_alloc)
+			continue;
 
-	vaddr = phys_to_virt(io_tlb_start[SWIOTLB_LO]);
-	bytes = PAGE_ALIGN(io_tlb_nslabs[SWIOTLB_LO] << IO_TLB_SHIFT);
-	set_memory_decrypted((unsigned long)vaddr, bytes >> PAGE_SHIFT);
-	memset(vaddr, 0, bytes);
+		vaddr = phys_to_virt(io_tlb_start[i]);
+		bytes = PAGE_ALIGN(io_tlb_nslabs[i] << IO_TLB_SHIFT);
+		set_memory_decrypted((unsigned long)vaddr, bytes >> PAGE_SHIFT);
+		memset(vaddr, 0, bytes);
+	}
 }
 
-int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
+int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs,
+				 enum swiotlb_t type, int verbose)
 {
 	unsigned long i, bytes;
 	size_t alloc_size;
 
 	bytes = nslabs << IO_TLB_SHIFT;
 
-	io_tlb_nslabs[SWIOTLB_LO] = nslabs;
-	io_tlb_start[SWIOTLB_LO] = __pa(tlb);
-	io_tlb_end[SWIOTLB_LO] = io_tlb_start[SWIOTLB_LO] + bytes;
+	io_tlb_nslabs[type] = nslabs;
+	io_tlb_start[type] = __pa(tlb);
+	io_tlb_end[type] = io_tlb_start[type] + bytes;
 
 	/*
 	 * Allocate and initialize the free list array.  This array is used
 	 * to find contiguous free memory regions of size up to IO_TLB_SEGSIZE
 	 * between io_tlb_start and io_tlb_end.
 	 */
-	alloc_size = PAGE_ALIGN(io_tlb_nslabs[SWIOTLB_LO] * sizeof(int));
-	io_tlb_list[SWIOTLB_LO] = memblock_alloc(alloc_size, PAGE_SIZE);
-	if (!io_tlb_list[SWIOTLB_LO])
+	alloc_size = PAGE_ALIGN(io_tlb_nslabs[type] * sizeof(int));
+	io_tlb_list[type] = memblock_alloc(alloc_size, PAGE_SIZE);
+	if (!io_tlb_list[type])
 		panic("%s: Failed to allocate %zu bytes align=0x%lx\n",
 		      __func__, alloc_size, PAGE_SIZE);
 
-	alloc_size = PAGE_ALIGN(io_tlb_nslabs[SWIOTLB_LO] * sizeof(phys_addr_t));
-	io_tlb_orig_addr[SWIOTLB_LO] = memblock_alloc(alloc_size, PAGE_SIZE);
-	if (!io_tlb_orig_addr[SWIOTLB_LO])
+	alloc_size = PAGE_ALIGN(io_tlb_nslabs[type] * sizeof(phys_addr_t));
+	io_tlb_orig_addr[type] = memblock_alloc(alloc_size, PAGE_SIZE);
+	if (!io_tlb_orig_addr[type])
 		panic("%s: Failed to allocate %zu bytes align=0x%lx\n",
 		      __func__, alloc_size, PAGE_SIZE);
 
-	for (i = 0; i < io_tlb_nslabs[SWIOTLB_LO]; i++) {
-		io_tlb_list[SWIOTLB_LO][i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
-		io_tlb_orig_addr[SWIOTLB_LO][i] = INVALID_PHYS_ADDR;
+	for (i = 0; i < io_tlb_nslabs[type]; i++) {
+		io_tlb_list[type][i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
+		io_tlb_orig_addr[type][i] = INVALID_PHYS_ADDR;
 	}
-	io_tlb_index[SWIOTLB_LO] = 0;
-	no_iotlb_memory[SWIOTLB_LO] = false;
+	io_tlb_index[type] = 0;
+	no_iotlb_memory[type] = false;
 
 	if (verbose)
-		swiotlb_print_info();
+		swiotlb_print_info(type);
 
-	swiotlb_set_max_segment(io_tlb_nslabs[SWIOTLB_LO] << IO_TLB_SHIFT);
+	swiotlb_set_max_segment(io_tlb_nslabs[type] << IO_TLB_SHIFT, type);
 	return 0;
 }
 
-/*
- * Statically reserve bounce buffer space and initialize bounce buffer data
- * structures for the software IO TLB used to implement the DMA API.
- */
-void  __init
-swiotlb_init(int verbose)
+static void __init
+swiotlb_init_type(enum swiotlb_t type, int verbose)
 {
 	size_t default_size = IO_TLB_DEFAULT_SIZE;
 	unsigned char *vstart;
 	unsigned long bytes;
 
-	if (!io_tlb_nslabs[SWIOTLB_LO]) {
-		io_tlb_nslabs[SWIOTLB_LO] = (default_size >> IO_TLB_SHIFT);
-		io_tlb_nslabs[SWIOTLB_LO] = ALIGN(io_tlb_nslabs[SWIOTLB_LO], IO_TLB_SEGSIZE);
+	if (!io_tlb_nslabs[type]) {
+		io_tlb_nslabs[type] = (default_size >> IO_TLB_SHIFT);
+		io_tlb_nslabs[type] = ALIGN(io_tlb_nslabs[type], IO_TLB_SEGSIZE);
 	}
 
-	bytes = io_tlb_nslabs[SWIOTLB_LO] << IO_TLB_SHIFT;
+	bytes = io_tlb_nslabs[type] << IO_TLB_SHIFT;
+
+	if (type == SWIOTLB_LO)
+		vstart = memblock_alloc_low(PAGE_ALIGN(bytes), PAGE_SIZE);
+	else
+		vstart = memblock_alloc(PAGE_ALIGN(bytes), PAGE_SIZE);
 
-	/* Get IO TLB memory from the low pages */
-	vstart = memblock_alloc_low(PAGE_ALIGN(bytes), PAGE_SIZE);
-	if (vstart && !swiotlb_init_with_tbl(vstart, io_tlb_nslabs[SWIOTLB_LO], verbose))
+	if (vstart && !swiotlb_init_with_tbl(vstart, io_tlb_nslabs[type], type, verbose))
 		return;
 
-	if (io_tlb_start[SWIOTLB_LO]) {
-		memblock_free_early(io_tlb_start[SWIOTLB_LO],
-				    PAGE_ALIGN(io_tlb_nslabs[SWIOTLB_LO] << IO_TLB_SHIFT));
-		io_tlb_start[SWIOTLB_LO] = 0;
+	if (io_tlb_start[type]) {
+		memblock_free_early(io_tlb_start[type],
+				    PAGE_ALIGN(io_tlb_nslabs[type] << IO_TLB_SHIFT));
+		io_tlb_start[type] = 0;
 	}
-	pr_warn("Cannot allocate buffer");
-	no_iotlb_memory[SWIOTLB_LO] = true;
+	pr_warn("Cannot allocate buffer %s\n", swiotlb_name[type]);
+	no_iotlb_memory[type] = true;
+}
+
+/*
+ * Statically reserve bounce buffer space and initialize bounce buffer data
+ * structures for the software IO TLB used to implement the DMA API.
+ */
+void  __init
+swiotlb_init(int verbose)
+{
+	int i;
+
+	for (i = 0; i < swiotlb_nr; i++)
+		swiotlb_init_type(i, verbose);
 }
 
 /*
@@ -295,67 +330,68 @@ swiotlb_init(int verbose)
  * This should be just like above, but with some error catching.
  */
 int
-swiotlb_late_init_with_default_size(size_t default_size)
+swiotlb_late_init_with_default_size(size_t default_size, enum swiotlb_t type)
 {
-	unsigned long bytes, req_nslabs = io_tlb_nslabs[SWIOTLB_LO];
+	unsigned long bytes, req_nslabs = io_tlb_nslabs[type];
 	unsigned char *vstart = NULL;
 	unsigned int order;
 	int rc = 0;
+	gfp_t gfp_mask = (type == SWIOTLB_LO) ? GFP_DMA | __GFP_NOWARN :
+						GFP_KERNEL | __GFP_NOWARN;
 
-	if (!io_tlb_nslabs[SWIOTLB_LO]) {
-		io_tlb_nslabs[SWIOTLB_LO] = (default_size >> IO_TLB_SHIFT);
-		io_tlb_nslabs[SWIOTLB_LO] = ALIGN(io_tlb_nslabs[SWIOTLB_LO], IO_TLB_SEGSIZE);
+	if (!io_tlb_nslabs[type]) {
+		io_tlb_nslabs[type] = (default_size >> IO_TLB_SHIFT);
+		io_tlb_nslabs[type] = ALIGN(io_tlb_nslabs[type], IO_TLB_SEGSIZE);
 	}
 
 	/*
 	 * Get IO TLB memory from the low pages
 	 */
-	order = get_order(io_tlb_nslabs[SWIOTLB_LO] << IO_TLB_SHIFT);
-	io_tlb_nslabs[SWIOTLB_LO] = SLABS_PER_PAGE << order;
-	bytes = io_tlb_nslabs[SWIOTLB_LO] << IO_TLB_SHIFT;
+	order = get_order(io_tlb_nslabs[type] << IO_TLB_SHIFT);
+	io_tlb_nslabs[type] = SLABS_PER_PAGE << order;
+	bytes = io_tlb_nslabs[type] << IO_TLB_SHIFT;
 
 	while ((SLABS_PER_PAGE << order) > IO_TLB_MIN_SLABS) {
-		vstart = (void *)__get_free_pages(GFP_DMA | __GFP_NOWARN,
-						  order);
+		vstart = (void *)__get_free_pages(gfp_mask, order);
 		if (vstart)
 			break;
 		order--;
 	}
 
 	if (!vstart) {
-		io_tlb_nslabs[SWIOTLB_LO] = req_nslabs;
+		io_tlb_nslabs[type] = req_nslabs;
 		return -ENOMEM;
 	}
 	if (order != get_order(bytes)) {
 		pr_warn("only able to allocate %ld MB\n",
 			(PAGE_SIZE << order) >> 20);
-		io_tlb_nslabs[SWIOTLB_LO] = SLABS_PER_PAGE << order;
+		io_tlb_nslabs[type] = SLABS_PER_PAGE << order;
 	}
-	rc = swiotlb_late_init_with_tbl(vstart, io_tlb_nslabs[SWIOTLB_LO]);
+	rc = swiotlb_late_init_with_tbl(vstart, io_tlb_nslabs[type], type);
 	if (rc)
 		free_pages((unsigned long)vstart, order);
 
 	return rc;
 }
 
-static void swiotlb_cleanup(void)
+static void swiotlb_cleanup(enum swiotlb_t type)
 {
-	io_tlb_end[SWIOTLB_LO] = 0;
-	io_tlb_start[SWIOTLB_LO] = 0;
-	io_tlb_nslabs[SWIOTLB_LO] = 0;
-	max_segment[SWIOTLB_LO] = 0;
+	io_tlb_end[type] = 0;
+	io_tlb_start[type] = 0;
+	io_tlb_nslabs[type] = 0;
+	max_segment[type] = 0;
 }
 
 int
-swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
+swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs, enum swiotlb_t type)
 {
 	unsigned long i, bytes;
 
 	bytes = nslabs << IO_TLB_SHIFT;
 
-	io_tlb_nslabs[SWIOTLB_LO] = nslabs;
-	io_tlb_start[SWIOTLB_LO] = virt_to_phys(tlb);
-	io_tlb_end[SWIOTLB_LO] = io_tlb_start[SWIOTLB_LO] + bytes;
+	io_tlb_nslabs[type] = nslabs;
+	io_tlb_start[type] = virt_to_phys(tlb);
+	io_tlb_end[type] = io_tlb_start[type] + bytes;
 
 	set_memory_decrypted((unsigned long)tlb, bytes >> PAGE_SHIFT);
 	memset(tlb, 0, bytes);
@@ -365,63 +401,67 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 	 * to find contiguous free memory regions of size up to IO_TLB_SEGSIZE
 	 * between io_tlb_start and io_tlb_end.
 	 */
-	io_tlb_list[SWIOTLB_LO] = (unsigned int *)__get_free_pages(GFP_KERNEL,
-			get_order(io_tlb_nslabs[SWIOTLB_LO] * sizeof(int)));
-	if (!io_tlb_list[SWIOTLB_LO])
+	io_tlb_list[type] = (unsigned int *)__get_free_pages(GFP_KERNEL,
+			get_order(io_tlb_nslabs[type] * sizeof(int)));
+	if (!io_tlb_list[type])
 		goto cleanup3;
 
-	io_tlb_orig_addr[SWIOTLB_LO] = (phys_addr_t *)
+	io_tlb_orig_addr[type] = (phys_addr_t *)
 		__get_free_pages(GFP_KERNEL,
-				 get_order(io_tlb_nslabs[SWIOTLB_LO] *
+				 get_order(io_tlb_nslabs[type] *
 					   sizeof(phys_addr_t)));
-	if (!io_tlb_orig_addr[SWIOTLB_LO])
+	if (!io_tlb_orig_addr[type])
 		goto cleanup4;
 
-	for (i = 0; i < io_tlb_nslabs[SWIOTLB_LO]; i++) {
-		io_tlb_list[SWIOTLB_LO][i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
-		io_tlb_orig_addr[SWIOTLB_LO][i] = INVALID_PHYS_ADDR;
+	for (i = 0; i < io_tlb_nslabs[type]; i++) {
+		io_tlb_list[type][i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
+		io_tlb_orig_addr[type][i] = INVALID_PHYS_ADDR;
 	}
-	io_tlb_index[SWIOTLB_LO] = 0;
-	no_iotlb_memory[SWIOTLB_LO] = false;
+	io_tlb_index[type] = 0;
+	no_iotlb_memory[type] = false;
 
-	swiotlb_print_info();
+	swiotlb_print_info(type);
 
 	late_alloc = 1;
 
-	swiotlb_set_max_segment(io_tlb_nslabs[SWIOTLB_LO] << IO_TLB_SHIFT);
+	swiotlb_set_max_segment(io_tlb_nslabs[type] << IO_TLB_SHIFT, type);
 
 	return 0;
 
 cleanup4:
-	free_pages((unsigned long)io_tlb_list[SWIOTLB_LO], get_order(io_tlb_nslabs[SWIOTLB_LO] *
+	free_pages((unsigned long)io_tlb_list[type], get_order(io_tlb_nslabs[type] *
 	                                                 sizeof(int)));
-	io_tlb_list[SWIOTLB_LO] = NULL;
+	io_tlb_list[type] = NULL;
 cleanup3:
-	swiotlb_cleanup();
+	swiotlb_cleanup(type);
 	return -ENOMEM;
 }
 
 void __init swiotlb_exit(void)
 {
-	if (!io_tlb_orig_addr[SWIOTLB_LO])
-		return;
+	int i;
 
-	if (late_alloc) {
-		free_pages((unsigned long)io_tlb_orig_addr[SWIOTLB_LO],
-			   get_order(io_tlb_nslabs[SWIOTLB_LO] * sizeof(phys_addr_t)));
-		free_pages((unsigned long)io_tlb_list[SWIOTLB_LO],
-			   get_order(io_tlb_nslabs[SWIOTLB_LO] * sizeof(int)));
-		free_pages((unsigned long)phys_to_virt(io_tlb_start[SWIOTLB_LO]),
-			   get_order(io_tlb_nslabs[SWIOTLB_LO] << IO_TLB_SHIFT));
-	} else {
-		memblock_free_late(__pa(io_tlb_orig_addr[SWIOTLB_LO]),
-				   PAGE_ALIGN(io_tlb_nslabs[SWIOTLB_LO] * sizeof(phys_addr_t)));
-		memblock_free_late(__pa(io_tlb_list[SWIOTLB_LO]),
-				   PAGE_ALIGN(io_tlb_nslabs[SWIOTLB_LO] * sizeof(int)));
-		memblock_free_late(io_tlb_start[SWIOTLB_LO],
-				   PAGE_ALIGN(io_tlb_nslabs[SWIOTLB_LO] << IO_TLB_SHIFT));
+	for (i = 0; i < swiotlb_nr; i++) {
+		if (!io_tlb_orig_addr[i])
+			continue;
+
+		if (late_alloc) {
+			free_pages((unsigned long)io_tlb_orig_addr[i],
+				   get_order(io_tlb_nslabs[i] * sizeof(phys_addr_t)));
+			free_pages((unsigned long)io_tlb_list[i],
+				   get_order(io_tlb_nslabs[i] * sizeof(int)));
+			free_pages((unsigned long)phys_to_virt(io_tlb_start[i]),
+				   get_order(io_tlb_nslabs[i] << IO_TLB_SHIFT));
+		} else {
+			memblock_free_late(__pa(io_tlb_orig_addr[i]),
+					   PAGE_ALIGN(io_tlb_nslabs[i] * sizeof(phys_addr_t)));
+			memblock_free_late(__pa(io_tlb_list[i]),
+					   PAGE_ALIGN(io_tlb_nslabs[i] * sizeof(int)));
+			memblock_free_late(io_tlb_start[i],
+					   PAGE_ALIGN(io_tlb_nslabs[i] << IO_TLB_SHIFT));
+		}
+		swiotlb_cleanup(i);
 	}
-	swiotlb_cleanup();
 }
 
 /*
@@ -468,7 +508,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
 		size_t mapping_size, size_t alloc_size,
 		enum dma_data_direction dir, unsigned long attrs)
 {
-	dma_addr_t tbl_dma_addr = phys_to_dma_unencrypted(hwdev, io_tlb_start[SWIOTLB_LO]);
+	dma_addr_t tbl_dma_addr;
 	unsigned long flags;
 	phys_addr_t tlb_addr;
 	unsigned int nslots, stride, index, wrap;
@@ -477,8 +517,16 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
 	unsigned long offset_slots;
 	unsigned long max_slots;
 	unsigned long tmp_io_tlb_used;
+	unsigned long dma_mask = min_not_zero(*hwdev->dma_mask,
+					      hwdev->bus_dma_limit);
+	int type;
 
-	if (no_iotlb_memory[SWIOTLB_LO])
+	if (swiotlb_nr > 1 && dma_mask == DMA_BIT_MASK(64))
+		type = SWIOTLB_HI;
+	else
+		type = SWIOTLB_LO;
+
+	if (no_iotlb_memory[type])
 		panic("Can not allocate SWIOTLB buffer earlier and can't now provide you with the DMA bounce buffer");
 
 	if (mem_encrypt_active())
@@ -490,6 +538,8 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
 		return (phys_addr_t)DMA_MAPPING_ERROR;
 	}
 
+	tbl_dma_addr = phys_to_dma_unencrypted(hwdev, io_tlb_start[type]);
+
 	mask = dma_get_seg_boundary(hwdev);
 
 	tbl_dma_addr &= mask;
@@ -521,11 +571,11 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
 	 */
 	spin_lock_irqsave(&io_tlb_lock, flags);
 
-	if (unlikely(nslots > io_tlb_nslabs[SWIOTLB_LO] - io_tlb_used[SWIOTLB_LO]))
+	if (unlikely(nslots > io_tlb_nslabs[type] - io_tlb_used[type]))
 		goto not_found;
 
-	index = ALIGN(io_tlb_index[SWIOTLB_LO], stride);
-	if (index >= io_tlb_nslabs[SWIOTLB_LO])
+	index = ALIGN(io_tlb_index[type], stride);
+	if (index >= io_tlb_nslabs[type])
 		index = 0;
 	wrap = index;
 
@@ -533,7 +583,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
 		while (iommu_is_span_boundary(index, nslots, offset_slots,
 					      max_slots)) {
 			index += stride;
-			if (index >= io_tlb_nslabs[SWIOTLB_LO])
+			if (index >= io_tlb_nslabs[type])
 				index = 0;
 			if (index == wrap)
 				goto not_found;
@@ -544,42 +594,42 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
 		 * contiguous buffers, we allocate the buffers from that slot
 		 * and mark the entries as '0' indicating unavailable.
 		 */
-		if (io_tlb_list[SWIOTLB_LO][index] >= nslots) {
+		if (io_tlb_list[type][index] >= nslots) {
 			int count = 0;
 
 			for (i = index; i < (int) (index + nslots); i++)
-				io_tlb_list[SWIOTLB_LO][i] = 0;
+				io_tlb_list[type][i] = 0;
 			for (i = index - 1;
 			     (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE - 1) &&
-			     io_tlb_list[SWIOTLB_LO][i];
+			     io_tlb_list[type][i];
 			     i--)
-				io_tlb_list[SWIOTLB_LO][i] = ++count;
-			tlb_addr = io_tlb_start[SWIOTLB_LO] + (index << IO_TLB_SHIFT);
+				io_tlb_list[type][i] = ++count;
+			tlb_addr = io_tlb_start[type] + (index << IO_TLB_SHIFT);
 
 			/*
 			 * Update the indices to avoid searching in the next
 			 * round.
 			 */
-			io_tlb_index[SWIOTLB_LO] = ((index + nslots) < io_tlb_nslabs[SWIOTLB_LO]
+			io_tlb_index[type] = ((index + nslots) < io_tlb_nslabs[type]
 					? (index + nslots) : 0);
 
 			goto found;
 		}
 		index += stride;
-		if (index >= io_tlb_nslabs[SWIOTLB_LO])
+		if (index >= io_tlb_nslabs[type])
 			index = 0;
 	} while (index != wrap);
 
 not_found:
-	tmp_io_tlb_used = io_tlb_used[SWIOTLB_LO];
+	tmp_io_tlb_used = io_tlb_used[type];
 
 	spin_unlock_irqrestore(&io_tlb_lock, flags);
 	if (!(attrs & DMA_ATTR_NO_WARN) && printk_ratelimit())
-		dev_warn(hwdev, "swiotlb buffer is full (sz: %zd bytes), total %lu (slots), used %lu (slots)\n",
-			 alloc_size, io_tlb_nslabs[SWIOTLB_LO], tmp_io_tlb_used);
+		dev_warn(hwdev, "%s swiotlb buffer is full (sz: %zd bytes), total %lu (slots), used %lu (slots)\n",
+			 swiotlb_name[type], alloc_size, io_tlb_nslabs[type], tmp_io_tlb_used);
 	return (phys_addr_t)DMA_MAPPING_ERROR;
 found:
-	io_tlb_used[SWIOTLB_LO] += nslots;
+	io_tlb_used[type] += nslots;
 	spin_unlock_irqrestore(&io_tlb_lock, flags);
 
 	/*
@@ -588,7 +638,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
 	 * needed.
 	 */
 	for (i = 0; i < nslots; i++)
-		io_tlb_orig_addr[SWIOTLB_LO][index+i] = orig_addr + (i << IO_TLB_SHIFT);
+		io_tlb_orig_addr[type][index+i] = orig_addr + (i << IO_TLB_SHIFT);
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
 	    (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
 		swiotlb_bounce(orig_addr, tlb_addr, mapping_size, DMA_TO_DEVICE);
@@ -605,8 +655,9 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 {
 	unsigned long flags;
 	int i, count, nslots = ALIGN(alloc_size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
-	int index = (tlb_addr - io_tlb_start[SWIOTLB_LO]) >> IO_TLB_SHIFT;
-	phys_addr_t orig_addr = io_tlb_orig_addr[SWIOTLB_LO][index];
+	int type = swiotlb_get_type(tlb_addr);
+	int index = (tlb_addr - io_tlb_start[type]) >> IO_TLB_SHIFT;
+	phys_addr_t orig_addr = io_tlb_orig_addr[type][index];
 
 	/*
 	 * First, sync the memory before unmapping the entry
@@ -625,14 +676,14 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 	spin_lock_irqsave(&io_tlb_lock, flags);
 	{
 		count = ((index + nslots) < ALIGN(index + 1, IO_TLB_SEGSIZE) ?
-			 io_tlb_list[SWIOTLB_LO][index + nslots] : 0);
+			 io_tlb_list[type][index + nslots] : 0);
 		/*
 		 * Step 1: return the slots to the free list, merging the
 		 * slots with superceeding slots
 		 */
 		for (i = index + nslots - 1; i >= index; i--) {
-			io_tlb_list[SWIOTLB_LO][i] = ++count;
-			io_tlb_orig_addr[SWIOTLB_LO][i] = INVALID_PHYS_ADDR;
+			io_tlb_list[type][i] = ++count;
+			io_tlb_orig_addr[type][i] = INVALID_PHYS_ADDR;
 		}
 		/*
 		 * Step 2: merge the returned slots with the preceding slots,
@@ -640,11 +691,11 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 		 */
 		for (i = index - 1;
 		     (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE - 1) &&
-		     io_tlb_list[SWIOTLB_LO][i];
+		     io_tlb_list[type][i];
 		     i--)
-			io_tlb_list[SWIOTLB_LO][i] = ++count;
+			io_tlb_list[type][i] = ++count;
 
-		io_tlb_used[SWIOTLB_LO] -= nslots;
+		io_tlb_used[type] -= nslots;
 	}
 	spin_unlock_irqrestore(&io_tlb_lock, flags);
 }
@@ -653,8 +704,9 @@ void swiotlb_tbl_sync_single(struct device *hwdev, phys_addr_t tlb_addr,
 			     size_t size, enum dma_data_direction dir,
 			     enum dma_sync_target target)
 {
-	int index = (tlb_addr - io_tlb_start[SWIOTLB_LO]) >> IO_TLB_SHIFT;
-	phys_addr_t orig_addr = io_tlb_orig_addr[SWIOTLB_LO][index];
+	int type = swiotlb_get_type(tlb_addr);
+	int index = (tlb_addr - io_tlb_start[type]) >> IO_TLB_SHIFT;
+	phys_addr_t orig_addr = io_tlb_orig_addr[type][index];
 
 	if (orig_addr == INVALID_PHYS_ADDR)
 		return;
@@ -737,6 +789,15 @@ static int __init swiotlb_create_debugfs(void)
 	root = debugfs_create_dir("swiotlb", NULL);
 	debugfs_create_ulong("io_tlb_nslabs", 0400, root, &io_tlb_nslabs[SWIOTLB_LO]);
 	debugfs_create_ulong("io_tlb_used", 0400, root, &io_tlb_used[SWIOTLB_LO]);
+
+	if (swiotlb_nr == 1)
+		return 0;
+
+	debugfs_create_ulong("io_tlb_nslabs-highmem", 0400,
+			     root, &io_tlb_nslabs[SWIOTLB_HI]);
+	debugfs_create_ulong("io_tlb_used-highmem", 0400,
+			     root, &io_tlb_used[SWIOTLB_HI]);
+
 	return 0;
 }
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Feb 03 23:39:49 2021
From: Dongli Zhang <dongli.zhang@oracle.com>
To: dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
        iommu@lists.linux-foundation.org, linux-mips@vger.kernel.org,
        linux-mmc@vger.kernel.org, linux-pci@vger.kernel.org,
        linuxppc-dev@lists.ozlabs.org, nouveau@lists.freedesktop.org,
        x86@kernel.org, xen-devel@lists.xenproject.org
Cc: linux-kernel@vger.kernel.org, adrian.hunter@intel.com,
        akpm@linux-foundation.org, benh@kernel.crashing.org,
        bskeggs@redhat.com, bhelgaas@google.com, bp@alien8.de,
        boris.ostrovsky@oracle.com, hch@lst.de, chris@chris-wilson.co.uk,
        daniel@ffwll.ch, airlied@linux.ie, hpa@zytor.com, mingo@kernel.org,
        mingo@redhat.com, jani.nikula@linux.intel.com,
        joonas.lahtinen@linux.intel.com, jgross@suse.com,
        konrad.wilk@oracle.com, m.szyprowski@samsung.com,
        matthew.auld@intel.com, mpe@ellerman.id.au, rppt@kernel.org,
        paulus@samba.org, peterz@infradead.org, robin.murphy@arm.com,
        rodrigo.vivi@intel.com, sstabellini@kernel.org, bauerman@linux.ibm.com,
        tsbogend@alpha.franken.de, tglx@linutronix.de, ulf.hansson@linaro.org,
        joe.jin@oracle.com, thomas.lendacky@amd.com
Subject: [PATCH RFC v1 5/6] xen-swiotlb: convert variables to arrays
Date: Wed,  3 Feb 2021 15:37:08 -0800
Message-Id: <20210203233709.19819-6-dongli.zhang@oracle.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210203233709.19819-1-dongli.zhang@oracle.com>
References: <20210203233709.19819-1-dongli.zhang@oracle.com>
Content-Type: text/plain

This patch converts several xen-swiotlb related variables to arrays, in
order to maintain state for each of the different swiotlb buffers. The
variables involved are:

- xen_io_tlb_start and xen_io_tlb_end
- xen_io_tlb_nslabs
- MAX_DMA_BITS

There is no functional change; this prepares for enabling 64-bit
xen-swiotlb.

Cc: Joe Jin <joe.jin@oracle.com>
Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com>
---
 drivers/xen/swiotlb-xen.c | 75 +++++++++++++++++++++------------------
 1 file changed, 40 insertions(+), 35 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 662638093542..e18cae693cdc 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -39,15 +39,17 @@
 #include <asm/xen/page-coherent.h>
 
 #include <trace/events/swiotlb.h>
-#define MAX_DMA_BITS 32
 /*
  * Used to do a quick range check in swiotlb_tbl_unmap_single and
  * swiotlb_tbl_sync_single_*, to see if the memory was in fact allocated by this
  * API.
  */
 
-static char *xen_io_tlb_start, *xen_io_tlb_end;
-static unsigned long xen_io_tlb_nslabs;
+static char *xen_io_tlb_start[SWIOTLB_MAX], *xen_io_tlb_end[SWIOTLB_MAX];
+static unsigned long xen_io_tlb_nslabs[SWIOTLB_MAX];
+
+static int max_dma_bits[] = {32, 64};
+
 /*
  * Quick lookup value of the bus address of the IOTLB.
  */
@@ -112,8 +114,8 @@ static int is_xen_swiotlb_buffer(struct device *dev, dma_addr_t dma_addr)
 	 * in our domain. Therefore _only_ check address within our domain.
 	 */
 	if (pfn_valid(PFN_DOWN(paddr))) {
-		return paddr >= virt_to_phys(xen_io_tlb_start) &&
-		       paddr < virt_to_phys(xen_io_tlb_end);
+		return paddr >= virt_to_phys(xen_io_tlb_start[SWIOTLB_LO]) &&
+		       paddr < virt_to_phys(xen_io_tlb_end[SWIOTLB_LO]);
 	}
 	return 0;
 }
@@ -137,7 +139,7 @@ xen_swiotlb_fixup(void *buf, size_t size, unsigned long nslabs)
 				p + (i << IO_TLB_SHIFT),
 				get_order(slabs << IO_TLB_SHIFT),
 				dma_bits, &dma_handle);
-		} while (rc && dma_bits++ < MAX_DMA_BITS);
+		} while (rc && dma_bits++ < max_dma_bits[SWIOTLB_LO]);
 		if (rc)
 			return rc;
 
@@ -148,12 +150,13 @@ xen_swiotlb_fixup(void *buf, size_t size, unsigned long nslabs)
 static unsigned long xen_set_nslabs(unsigned long nr_tbl)
 {
 	if (!nr_tbl) {
-		xen_io_tlb_nslabs = (64 * 1024 * 1024 >> IO_TLB_SHIFT);
-		xen_io_tlb_nslabs = ALIGN(xen_io_tlb_nslabs, IO_TLB_SEGSIZE);
+		xen_io_tlb_nslabs[SWIOTLB_LO] = (64 * 1024 * 1024 >> IO_TLB_SHIFT);
+		xen_io_tlb_nslabs[SWIOTLB_LO] = ALIGN(xen_io_tlb_nslabs[SWIOTLB_LO],
+						      IO_TLB_SEGSIZE);
 	} else
-		xen_io_tlb_nslabs = nr_tbl;
+		xen_io_tlb_nslabs[SWIOTLB_LO] = nr_tbl;
 
-	return xen_io_tlb_nslabs << IO_TLB_SHIFT;
+	return xen_io_tlb_nslabs[SWIOTLB_LO] << IO_TLB_SHIFT;
 }
 
 enum xen_swiotlb_err {
@@ -184,16 +187,16 @@ int __ref xen_swiotlb_init(int verbose, bool early)
 	enum xen_swiotlb_err m_ret = XEN_SWIOTLB_UNKNOWN;
 	unsigned int repeat = 3;
 
-	xen_io_tlb_nslabs = swiotlb_nr_tbl(SWIOTLB_LO);
+	xen_io_tlb_nslabs[SWIOTLB_LO] = swiotlb_nr_tbl(SWIOTLB_LO);
 retry:
-	bytes = xen_set_nslabs(xen_io_tlb_nslabs);
-	order = get_order(xen_io_tlb_nslabs << IO_TLB_SHIFT);
+	bytes = xen_set_nslabs(xen_io_tlb_nslabs[SWIOTLB_LO]);
+	order = get_order(xen_io_tlb_nslabs[SWIOTLB_LO] << IO_TLB_SHIFT);
 
 	/*
 	 * IO TLB memory already allocated. Just use it.
 	 */
 	if (io_tlb_start[SWIOTLB_LO] != 0) {
-		xen_io_tlb_start = phys_to_virt(io_tlb_start[SWIOTLB_LO]);
+		xen_io_tlb_start[SWIOTLB_LO] = phys_to_virt(io_tlb_start[SWIOTLB_LO]);
 		goto end;
 	}
 
@@ -201,76 +204,78 @@ int __ref xen_swiotlb_init(int verbose, bool early)
 	 * Get IO TLB memory from any location.
 	 */
 	if (early) {
-		xen_io_tlb_start = memblock_alloc(PAGE_ALIGN(bytes),
+		xen_io_tlb_start[SWIOTLB_LO] = memblock_alloc(PAGE_ALIGN(bytes),
 						  PAGE_SIZE);
-		if (!xen_io_tlb_start)
+		if (!xen_io_tlb_start[SWIOTLB_LO])
 			panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
 			      __func__, PAGE_ALIGN(bytes), PAGE_SIZE);
 	} else {
 #define SLABS_PER_PAGE (1 << (PAGE_SHIFT - IO_TLB_SHIFT))
 #define IO_TLB_MIN_SLABS ((1<<20) >> IO_TLB_SHIFT)
 		while ((SLABS_PER_PAGE << order) > IO_TLB_MIN_SLABS) {
-			xen_io_tlb_start = (void *)xen_get_swiotlb_free_pages(order);
-			if (xen_io_tlb_start)
+			xen_io_tlb_start[SWIOTLB_LO] = (void *)xen_get_swiotlb_free_pages(order);
+			if (xen_io_tlb_start[SWIOTLB_LO])
 				break;
 			order--;
 		}
 		if (order != get_order(bytes)) {
 			pr_warn("Warning: only able to allocate %ld MB for software IO TLB\n",
 				(PAGE_SIZE << order) >> 20);
-			xen_io_tlb_nslabs = SLABS_PER_PAGE << order;
-			bytes = xen_io_tlb_nslabs << IO_TLB_SHIFT;
+			xen_io_tlb_nslabs[SWIOTLB_LO] = SLABS_PER_PAGE << order;
+			bytes = xen_io_tlb_nslabs[SWIOTLB_LO] << IO_TLB_SHIFT;
 		}
 	}
-	if (!xen_io_tlb_start) {
+	if (!xen_io_tlb_start[SWIOTLB_LO]) {
 		m_ret = XEN_SWIOTLB_ENOMEM;
 		goto error;
 	}
 	/*
 	 * And replace that memory with pages under 4GB.
 	 */
-	rc = xen_swiotlb_fixup(xen_io_tlb_start,
+	rc = xen_swiotlb_fixup(xen_io_tlb_start[SWIOTLB_LO],
 			       bytes,
-			       xen_io_tlb_nslabs);
+			       xen_io_tlb_nslabs[SWIOTLB_LO]);
 	if (rc) {
 		if (early)
-			memblock_free(__pa(xen_io_tlb_start),
+			memblock_free(__pa(xen_io_tlb_start[SWIOTLB_LO]),
 				      PAGE_ALIGN(bytes));
 		else {
-			free_pages((unsigned long)xen_io_tlb_start, order);
-			xen_io_tlb_start = NULL;
+			free_pages((unsigned long)xen_io_tlb_start[SWIOTLB_LO], order);
+			xen_io_tlb_start[SWIOTLB_LO] = NULL;
 		}
 		m_ret = XEN_SWIOTLB_EFIXUP;
 		goto error;
 	}
 	if (early) {
-		if (swiotlb_init_with_tbl(xen_io_tlb_start, xen_io_tlb_nslabs,
+		if (swiotlb_init_with_tbl(xen_io_tlb_start[SWIOTLB_LO],
+					  xen_io_tlb_nslabs[SWIOTLB_LO],
 					  SWIOTLB_LO, verbose))
 			panic("Cannot allocate SWIOTLB buffer");
 		rc = 0;
 	} else
-		rc = swiotlb_late_init_with_tbl(xen_io_tlb_start,
-						xen_io_tlb_nslabs, SWIOTLB_LO);
+		rc = swiotlb_late_init_with_tbl(xen_io_tlb_start[SWIOTLB_LO],
+						xen_io_tlb_nslabs[SWIOTLB_LO],
+						SWIOTLB_LO);
 
 end:
-	xen_io_tlb_end = xen_io_tlb_start + bytes;
+	xen_io_tlb_end[SWIOTLB_LO] = xen_io_tlb_start[SWIOTLB_LO] + bytes;
 	if (!rc)
 		swiotlb_set_max_segment(PAGE_SIZE, SWIOTLB_LO);
 
 	return rc;
 error:
 	if (repeat--) {
-		xen_io_tlb_nslabs = max(1024UL, /* Min is 2MB */
-					(xen_io_tlb_nslabs >> 1));
+		xen_io_tlb_nslabs[SWIOTLB_LO] = max(1024UL, /* Min is 2MB */
+					(xen_io_tlb_nslabs[SWIOTLB_LO] >> 1));
 		pr_info("Lowering to %luMB\n",
-			(xen_io_tlb_nslabs << IO_TLB_SHIFT) >> 20);
+			(xen_io_tlb_nslabs[SWIOTLB_LO] << IO_TLB_SHIFT) >> 20);
 		goto retry;
 	}
 	pr_err("%s (rc:%d)\n", xen_swiotlb_error(m_ret), rc);
 	if (early)
 		panic("%s (rc:%d)", xen_swiotlb_error(m_ret), rc);
 	else
-		free_pages((unsigned long)xen_io_tlb_start, order);
+		free_pages((unsigned long)xen_io_tlb_start[SWIOTLB_LO], order);
 	return rc;
 }
 
@@ -561,7 +566,7 @@ xen_swiotlb_sync_sg_for_device(struct device *dev, struct scatterlist *sgl,
 static int
 xen_swiotlb_dma_supported(struct device *hwdev, u64 mask)
 {
-	return xen_virt_to_bus(hwdev, xen_io_tlb_end - 1) <= mask;
+	return xen_virt_to_bus(hwdev, xen_io_tlb_end[SWIOTLB_LO] - 1) <= mask;
 }
 
 const struct dma_map_ops xen_swiotlb_dma_ops = {
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Feb 03 23:41:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 23:41:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81106.149267 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7Rlm-0007b7-1D; Wed, 03 Feb 2021 23:41:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81106.149267; Wed, 03 Feb 2021 23:41:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7Rll-0007b0-SH; Wed, 03 Feb 2021 23:41:09 +0000
Received: by outflank-mailman (input) for mailman id 81106;
 Wed, 03 Feb 2021 23:41:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x9Sd=HF=oracle.com=dongli.zhang@srs-us1.protection.inumbo.net>)
 id 1l7Rll-0007as-0R
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 23:41:09 +0000
Received: from userp2130.oracle.com (unknown [156.151.31.86])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ce2f8cf3-523f-4670-ab35-31b8a9e56a40;
 Wed, 03 Feb 2021 23:41:07 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ce2f8cf3-523f-4670-ab35-31b8a9e56a40
From: Dongli Zhang <dongli.zhang@oracle.com>
To: dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
        iommu@lists.linux-foundation.org, linux-mips@vger.kernel.org,
        linux-mmc@vger.kernel.org, linux-pci@vger.kernel.org,
        linuxppc-dev@lists.ozlabs.org, nouveau@lists.freedesktop.org,
        x86@kernel.org, xen-devel@lists.xenproject.org
Cc: linux-kernel@vger.kernel.org, adrian.hunter@intel.com,
        akpm@linux-foundation.org, benh@kernel.crashing.org,
        bskeggs@redhat.com, bhelgaas@google.com, bp@alien8.de,
        boris.ostrovsky@oracle.com, hch@lst.de, chris@chris-wilson.co.uk,
        daniel@ffwll.ch, airlied@linux.ie, hpa@zytor.com, mingo@kernel.org,
        mingo@redhat.com, jani.nikula@linux.intel.com,
        joonas.lahtinen@linux.intel.com, jgross@suse.com,
        konrad.wilk@oracle.com, m.szyprowski@samsung.com,
        matthew.auld@intel.com, mpe@ellerman.id.au, rppt@kernel.org,
        paulus@samba.org, peterz@infradead.org, robin.murphy@arm.com,
        rodrigo.vivi@intel.com, sstabellini@kernel.org, bauerman@linux.ibm.com,
        tsbogend@alpha.franken.de, tglx@linutronix.de, ulf.hansson@linaro.org,
        joe.jin@oracle.com, thomas.lendacky@amd.com
Subject: [PATCH RFC v1 6/6] xen-swiotlb: enable 64-bit xen-swiotlb
Date: Wed,  3 Feb 2021 15:37:09 -0800
Message-Id: <20210203233709.19819-7-dongli.zhang@oracle.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210203233709.19819-1-dongli.zhang@oracle.com>
References: <20210203233709.19819-1-dongli.zhang@oracle.com>
Content-Type: text/plain
MIME-Version: 1.0

This patch enables the 64-bit xen-swiotlb buffer.

With Xen PVM DMA addressing, 64-bit capable devices will be able to
allocate bounce buffers from the 64-bit swiotlb buffer.
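
The pool-selection idea can be sketched in a minimal user-space form
(illustrative only -- the enum mirrors SWIOTLB_LO/SWIOTLB_HI from this
series, but the helper itself is hypothetical, not kernel code): a device
whose DMA mask cannot address beyond 4GB must bounce through the low pool,
while a 64-bit capable device may use the high pool.

```c
#include <assert.h>
#include <stdint.h>

/* Names mirror the series' SWIOTLB_LO/SWIOTLB_HI; the selection
 * helper below is an illustrative sketch, not the kernel logic. */
enum swiotlb_t { SWIOTLB_LO, SWIOTLB_HI, SWIOTLB_MAX };

static enum swiotlb_t pick_pool(uint64_t dma_mask)
{
	/* Devices that cannot address beyond 4GB must use the
	 * below-4GB (low) bounce buffer pool. */
	if (dma_mask <= 0xffffffffULL)
		return SWIOTLB_LO;
	return SWIOTLB_HI;
}
```

A 32-bit mask (DMA_BIT_MASK(32)) selects the low pool; a full 64-bit mask
may use the high one.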

Cc: Joe Jin <joe.jin@oracle.com>
Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com>
---
 drivers/xen/swiotlb-xen.c | 117 ++++++++++++++++++++++++--------------
 1 file changed, 74 insertions(+), 43 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index e18cae693cdc..c9ab07809e32 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -108,27 +108,36 @@ static int is_xen_swiotlb_buffer(struct device *dev, dma_addr_t dma_addr)
 	unsigned long bfn = XEN_PFN_DOWN(dma_to_phys(dev, dma_addr));
 	unsigned long xen_pfn = bfn_to_local_pfn(bfn);
 	phys_addr_t paddr = (phys_addr_t)xen_pfn << XEN_PAGE_SHIFT;
+	int i;
 
 	/* If the address is outside our domain, it CAN
 	 * have the same virtual address as another address
 	 * in our domain. Therefore _only_ check address within our domain.
 	 */
-	if (pfn_valid(PFN_DOWN(paddr))) {
-		return paddr >= virt_to_phys(xen_io_tlb_start[SWIOTLB_LO]) &&
-		       paddr < virt_to_phys(xen_io_tlb_end[SWIOTLB_LO]);
-	}
+	if (!pfn_valid(PFN_DOWN(paddr)))
+		return 0;
+
+	for (i = 0; i < swiotlb_nr; i++)
+		if (paddr >= virt_to_phys(xen_io_tlb_start[i]) &&
+		    paddr < virt_to_phys(xen_io_tlb_end[i]))
+			return 1;
+
 	return 0;
 }
 
 static int
-xen_swiotlb_fixup(void *buf, size_t size, unsigned long nslabs)
+xen_swiotlb_fixup(void *buf, size_t size, unsigned long nslabs,
+		  enum swiotlb_t type)
 {
 	int i, rc;
 	int dma_bits;
 	dma_addr_t dma_handle;
 	phys_addr_t p = virt_to_phys(buf);
 
-	dma_bits = get_order(IO_TLB_SEGSIZE << IO_TLB_SHIFT) + PAGE_SHIFT;
+	if (type == SWIOTLB_HI)
+		dma_bits = max_dma_bits[SWIOTLB_HI];
+	else
+		dma_bits = get_order(IO_TLB_SEGSIZE << IO_TLB_SHIFT) + PAGE_SHIFT;
 
 	i = 0;
 	do {
@@ -139,7 +148,7 @@ xen_swiotlb_fixup(void *buf, size_t size, unsigned long nslabs)
 				p + (i << IO_TLB_SHIFT),
 				get_order(slabs << IO_TLB_SHIFT),
 				dma_bits, &dma_handle);
-		} while (rc && dma_bits++ < max_dma_bits[SWIOTLB_LO]);
+		} while (rc && dma_bits++ < max_dma_bits[type]);
 		if (rc)
 			return rc;
 
@@ -147,16 +156,17 @@ xen_swiotlb_fixup(void *buf, size_t size, unsigned long nslabs)
 	} while (i < nslabs);
 	return 0;
 }
-static unsigned long xen_set_nslabs(unsigned long nr_tbl)
+
+static unsigned long xen_set_nslabs(unsigned long nr_tbl, enum swiotlb_t type)
 {
 	if (!nr_tbl) {
-		xen_io_tlb_nslabs[SWIOTLB_LO] = (64 * 1024 * 1024 >> IO_TLB_SHIFT);
-		xen_io_tlb_nslabs[SWIOTLB_LO] = ALIGN(xen_io_tlb_nslabs[SWIOTLB_LO],
+		xen_io_tlb_nslabs[type] = (64 * 1024 * 1024 >> IO_TLB_SHIFT);
+		xen_io_tlb_nslabs[type] = ALIGN(xen_io_tlb_nslabs[type],
 						      IO_TLB_SEGSIZE);
 	} else
-		xen_io_tlb_nslabs[SWIOTLB_LO] = nr_tbl;
+		xen_io_tlb_nslabs[type] = nr_tbl;
 
-	return xen_io_tlb_nslabs[SWIOTLB_LO] << IO_TLB_SHIFT;
+	return xen_io_tlb_nslabs[type] << IO_TLB_SHIFT;
 }
 
 enum xen_swiotlb_err {
@@ -180,23 +190,24 @@ static const char *xen_swiotlb_error(enum xen_swiotlb_err err)
 	}
 	return "";
 }
-int __ref xen_swiotlb_init(int verbose, bool early)
+
+static int xen_swiotlb_init_type(int verbose, bool early, enum swiotlb_t type)
 {
 	unsigned long bytes, order;
 	int rc = -ENOMEM;
 	enum xen_swiotlb_err m_ret = XEN_SWIOTLB_UNKNOWN;
 	unsigned int repeat = 3;
 
-	xen_io_tlb_nslabs[SWIOTLB_LO] = swiotlb_nr_tbl(SWIOTLB_LO);
+	xen_io_tlb_nslabs[type] = swiotlb_nr_tbl(type);
 retry:
-	bytes = xen_set_nslabs(xen_io_tlb_nslabs[SWIOTLB_LO]);
-	order = get_order(xen_io_tlb_nslabs[SWIOTLB_LO] << IO_TLB_SHIFT);
+	bytes = xen_set_nslabs(xen_io_tlb_nslabs[type], type);
+	order = get_order(xen_io_tlb_nslabs[type] << IO_TLB_SHIFT);
 
 	/*
 	 * IO TLB memory already allocated. Just use it.
 	 */
-	if (io_tlb_start[SWIOTLB_LO] != 0) {
-		xen_io_tlb_start[SWIOTLB_LO] = phys_to_virt(io_tlb_start[SWIOTLB_LO]);
+	if (io_tlb_start[type] != 0) {
+		xen_io_tlb_start[type] = phys_to_virt(io_tlb_start[type]);
 		goto end;
 	}
 
@@ -204,81 +215,95 @@ int __ref xen_swiotlb_init(int verbose, bool early)
 	 * Get IO TLB memory from any location.
 	 */
 	if (early) {
-		xen_io_tlb_start[SWIOTLB_LO] = memblock_alloc(PAGE_ALIGN(bytes),
+		xen_io_tlb_start[type] = memblock_alloc(PAGE_ALIGN(bytes),
 						  PAGE_SIZE);
-		if (!xen_io_tlb_start[SWIOTLB_LO])
+		if (!xen_io_tlb_start[type])
 			panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
 			      __func__, PAGE_ALIGN(bytes), PAGE_SIZE);
 	} else {
 #define SLABS_PER_PAGE (1 << (PAGE_SHIFT - IO_TLB_SHIFT))
 #define IO_TLB_MIN_SLABS ((1<<20) >> IO_TLB_SHIFT)
 		while ((SLABS_PER_PAGE << order) > IO_TLB_MIN_SLABS) {
-			xen_io_tlb_start[SWIOTLB_LO] = (void *)xen_get_swiotlb_free_pages(order);
-			if (xen_io_tlb_start[SWIOTLB_LO])
+			xen_io_tlb_start[type] = (void *)xen_get_swiotlb_free_pages(order);
+			if (xen_io_tlb_start[type])
 				break;
 			order--;
 		}
 		if (order != get_order(bytes)) {
 			pr_warn("Warning: only able to allocate %ld MB for software IO TLB\n",
 				(PAGE_SIZE << order) >> 20);
-			xen_io_tlb_nslabs[SWIOTLB_LO] = SLABS_PER_PAGE << order;
-			bytes = xen_io_tlb_nslabs[SWIOTLB_LO] << IO_TLB_SHIFT;
+			xen_io_tlb_nslabs[type] = SLABS_PER_PAGE << order;
+			bytes = xen_io_tlb_nslabs[type] << IO_TLB_SHIFT;
 		}
 	}
-	if (!xen_io_tlb_start[SWIOTLB_LO]) {
+	if (!xen_io_tlb_start[type]) {
 		m_ret = XEN_SWIOTLB_ENOMEM;
 		goto error;
 	}
 	/*
 	 * And replace that memory with pages under 4GB.
 	 */
-	rc = xen_swiotlb_fixup(xen_io_tlb_start[SWIOTLB_LO],
+	rc = xen_swiotlb_fixup(xen_io_tlb_start[type],
 			       bytes,
-			       xen_io_tlb_nslabs[SWIOTLB_LO]);
+			       xen_io_tlb_nslabs[type],
+			       type);
 	if (rc) {
 		if (early)
-			memblock_free(__pa(xen_io_tlb_start[SWIOTLB_LO]),
+			memblock_free(__pa(xen_io_tlb_start[type]),
 				      PAGE_ALIGN(bytes));
 		else {
-			free_pages((unsigned long)xen_io_tlb_start[SWIOTLB_LO], order);
-			xen_io_tlb_start[SWIOTLB_LO] = NULL;
+			free_pages((unsigned long)xen_io_tlb_start[type], order);
+			xen_io_tlb_start[type] = NULL;
 		}
 		m_ret = XEN_SWIOTLB_EFIXUP;
 		goto error;
 	}
 	if (early) {
-		if (swiotlb_init_with_tbl(xen_io_tlb_start[SWIOTLB_LO],
-					  xen_io_tlb_nslabs[SWIOTLB_LO],
-					  SWIOTLB_LO, verbose))
+		if (swiotlb_init_with_tbl(xen_io_tlb_start[type],
+					  xen_io_tlb_nslabs[type],
+					  type, verbose))
 			panic("Cannot allocate SWIOTLB buffer");
 		rc = 0;
 	} else
-		rc = swiotlb_late_init_with_tbl(xen_io_tlb_start[SWIOTLB_LO],
-						xen_io_tlb_nslabs[SWIOTLB_LO],
-						SWIOTLB_LO);
+		rc = swiotlb_late_init_with_tbl(xen_io_tlb_start[type],
+						xen_io_tlb_nslabs[type],
+						type);
 
 end:
-	xen_io_tlb_end[SWIOTLB_LO] = xen_io_tlb_start[SWIOTLB_LO] + bytes;
+	xen_io_tlb_end[type] = xen_io_tlb_start[type] + bytes;
 	if (!rc)
-		swiotlb_set_max_segment(PAGE_SIZE, SWIOTLB_LO);
+		swiotlb_set_max_segment(PAGE_SIZE, type);
 
 	return rc;
 error:
 	if (repeat--) {
-		xen_io_tlb_nslabs[SWIOTLB_LO] = max(1024UL, /* Min is 2MB */
-					(xen_io_tlb_nslabs[SWIOTLB_LO] >> 1));
+		xen_io_tlb_nslabs[type] = max(1024UL, /* Min is 2MB */
+					(xen_io_tlb_nslabs[type] >> 1));
 		pr_info("Lowering to %luMB\n",
-			(xen_io_tlb_nslabs[SWIOTLB_LO] << IO_TLB_SHIFT) >> 20);
+			(xen_io_tlb_nslabs[type] << IO_TLB_SHIFT) >> 20);
 		goto retry;
 	}
 	pr_err("%s (rc:%d)\n", xen_swiotlb_error(m_ret), rc);
 	if (early)
 		panic("%s (rc:%d)", xen_swiotlb_error(m_ret), rc);
 	else
-		free_pages((unsigned long)xen_io_tlb_start[SWIOTLB_LO], order);
+		free_pages((unsigned long)xen_io_tlb_start[type], order);
 	return rc;
 }
 
+int __ref xen_swiotlb_init(int verbose, bool early)
+{
+	int i, rc;
+
+	for (i = 0; i < swiotlb_nr; i++) {
+		rc = xen_swiotlb_init_type(verbose, early, i);
+		if (rc)
+			return rc;
+	}
+
+	return 0;
+}
+
 static void *
 xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
 			   dma_addr_t *dma_handle, gfp_t flags,
@@ -566,7 +591,13 @@ xen_swiotlb_sync_sg_for_device(struct device *dev, struct scatterlist *sgl,
 static int
 xen_swiotlb_dma_supported(struct device *hwdev, u64 mask)
 {
-	return xen_virt_to_bus(hwdev, xen_io_tlb_end[SWIOTLB_LO] - 1) <= mask;
+	int i;
+
+	for (i = 0; i < swiotlb_nr; i++)
+		if (xen_virt_to_bus(hwdev, xen_io_tlb_end[i] - 1) <= mask)
+			return true;
+
+	return false;
 }
 
 const struct dma_map_ops xen_swiotlb_dma_ops = {
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Feb 03 23:48:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 23:48:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81117.149279 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7RsS-0007rN-Rn; Wed, 03 Feb 2021 23:48:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81117.149279; Wed, 03 Feb 2021 23:48:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7RsS-0007rG-Nu; Wed, 03 Feb 2021 23:48:04 +0000
Received: by outflank-mailman (input) for mailman id 81117;
 Wed, 03 Feb 2021 23:48:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=t9I1=HF=kernel.org=kuba@srs-us1.protection.inumbo.net>)
 id 1l7RsR-0007rB-5v
 for xen-devel@lists.xenproject.org; Wed, 03 Feb 2021 23:48:03 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 296cec91-f91b-4204-ab45-854ab0c743ff;
 Wed, 03 Feb 2021 23:48:02 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 52C8464F60;
 Wed,  3 Feb 2021 23:48:01 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 296cec91-f91b-4204-ab45-854ab0c743ff
Date: Wed, 3 Feb 2021 15:48:00 -0800
From: Jakub Kicinski <kuba@kernel.org>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org, Wei Liu <wei.liu@kernel.org>, Paul Durrant
 <paul@xen.org>, "David S. Miller" <davem@davemloft.net>, Igor Druzhinin
 <igor.druzhinin@citrix.com>, stable@vger.kernel.org
Subject: Re: [PATCH] xen/netback: avoid race in
 xenvif_rx_ring_slots_available()
Message-ID: <20210203154800.4c6959d6@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
In-Reply-To: <20210202070938.7863-1-jgross@suse.com>
References: <20210202070938.7863-1-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

On Tue,  2 Feb 2021 08:09:38 +0100 Juergen Gross wrote:
> Since commit 23025393dbeb3b8b3 ("xen/netback: use lateeoi irq binding")
> xenvif_rx_ring_slots_available() is no longer called only from the rx
> queue kernel thread, so it needs to access the rx queue with the
> associated queue lock held.
> 
> Reported-by: Igor Druzhinin <igor.druzhinin@citrix.com>
> Fixes: 23025393dbeb3b8b3 ("xen/netback: use lateeoi irq binding")
> Cc: stable@vger.kernel.org
> Signed-off-by: Juergen Gross <jgross@suse.com>

Should we route this change via networking trees? I see the bug did not
go through networking :)
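
The locking requirement described in the quoted commit message can be
sketched in plain pthreads (an illustrative stand-in, not the xen-netback
code -- the struct and helper are hypothetical): once a slots-available
predicate can run from more than one context, it must sample the shared
producer/consumer indices under the same lock the other contexts take.

```c
#include <assert.h>
#include <pthread.h>

/* Hypothetical ring, standing in for the shared rx queue state. */
struct ring {
	pthread_mutex_t lock;
	int cons, prod;	/* consumer/producer indices */
};

/* Evaluate "enough slots?" under the ring lock so a concurrent
 * producer cannot update prod/cons mid-check. */
static int slots_available_locked(struct ring *r, int needed)
{
	int ok;

	pthread_mutex_lock(&r->lock);
	ok = (r->prod - r->cons) >= needed;
	pthread_mutex_unlock(&r->lock);
	return ok;
}
```

Without the lock, a caller on a second thread could read prod and cons
while a producer updates them, deciding on a mix of old and new state.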


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 23:53:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 23:53:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81119.149290 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7Rxo-0000VD-Fb; Wed, 03 Feb 2021 23:53:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81119.149290; Wed, 03 Feb 2021 23:53:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7Rxo-0000V6-CA; Wed, 03 Feb 2021 23:53:36 +0000
Received: by outflank-mailman (input) for mailman id 81119;
 Wed, 03 Feb 2021 23:53:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7Rxn-0000Uy-5m; Wed, 03 Feb 2021 23:53:35 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7Rxm-0002ty-Ro; Wed, 03 Feb 2021 23:53:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7Rxm-0007Nz-IG; Wed, 03 Feb 2021 23:53:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l7Rxm-0006xN-Hg; Wed, 03 Feb 2021 23:53:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158985-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 158985: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=e806bb29cfde1b242bb37e72e77364dd812830e0
X-Osstest-Versions-That:
    ovmf=618e6a1f21a11eaee0a92e19c753969eb4a1b198
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 03 Feb 2021 23:53:34 +0000

flight 158985 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158985/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 e806bb29cfde1b242bb37e72e77364dd812830e0
baseline version:
 ovmf                 618e6a1f21a11eaee0a92e19c753969eb4a1b198

Last test of basis   158975  2021-02-03 07:14:36 Z    0 days
Testing same since   158985  2021-02-03 13:10:40 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jason Lou <yun.lou@intel.com>
  Lou, Yun <Yun.Lou@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   618e6a1f21..e806bb29cf  e806bb29cfde1b242bb37e72e77364dd812830e0 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Feb 03 23:56:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 Feb 2021 23:56:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81122.149306 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7S0e-0000fm-V0; Wed, 03 Feb 2021 23:56:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81122.149306; Wed, 03 Feb 2021 23:56:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7S0e-0000ff-RG; Wed, 03 Feb 2021 23:56:32 +0000
Received: by outflank-mailman (input) for mailman id 81122;
 Wed, 03 Feb 2021 23:56:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7S0d-0000fV-JW; Wed, 03 Feb 2021 23:56:31 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7S0d-0002xi-CN; Wed, 03 Feb 2021 23:56:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7S0d-0007U9-3R; Wed, 03 Feb 2021 23:56:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l7S0d-0000M3-2x; Wed, 03 Feb 2021 23:56:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=5D1Xh7Ee0bgNVpH75+n1RoAp/bw6kt77jkTZjCxIwrI=; b=eMiUhtgKBg1JOVQja/d4mNR/Yp
	GooZaPnRQBIg/jZhe5xr8h7WsoB10w+aulPep30zqW4ZdomevE7QV01MLX8douXJ/0/ivZnLWoQqA
	S23AlAMnMnDVpcImU+hG5mmxSgWgTV+rJwhYoPIvRU4R2caJLw9LAt70UzVFKcWzI/jw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158981-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 158981: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    linux-5.4:test-arm64-arm64-xl-credit1:<job status>:broken:regression
    linux-5.4:test-arm64-arm64-xl-seattle:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-multivcpu:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-pvhv2-amd:guest-start:fail:regression
    linux-5.4:test-amd64-coresched-amd64-xl:guest-start:fail:regression
    linux-5.4:test-amd64-coresched-i386-xl:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-qemuu-freebsd12-amd64:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-libvirt:guest-start:fail:regression
    linux-5.4:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl:guest-start:fail:regression
    linux-5.4:test-amd64-i386-pair:guest-start/debian:fail:regression
    linux-5.4:test-amd64-i386-qemut-rhel6hvm-amd:redhat-install:fail:regression
    linux-5.4:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
    linux-5.4:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-pvshim:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-pvhv2-intel:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    linux-5.4:test-amd64-i386-libvirt:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    linux-5.4:test-amd64-i386-xl-shadow:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-credit1:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-pair:guest-start/debian:fail:regression
    linux-5.4:test-amd64-amd64-xl:guest-start:fail:regression
    linux-5.4:test-amd64-i386-xl:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-shadow:guest-start:fail:regression
    linux-5.4:test-amd64-i386-xl-xsm:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-credit2:guest-start:fail:regression
    linux-5.4:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
    linux-5.4:test-amd64-i386-libvirt-pair:guest-start/debian:fail:regression
    linux-5.4:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    linux-5.4:test-amd64-i386-qemut-rhel6hvm-intel:redhat-install:fail:regression
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:windows-install:fail:regression
    linux-5.4:test-amd64-amd64-amd64-pvgrub:debian-di-install:fail:regression
    linux-5.4:test-amd64-amd64-pygrub:debian-di-install:fail:regression
    linux-5.4:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:windows-install:fail:regression
    linux-5.4:test-amd64-amd64-i386-pvgrub:debian-di-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    linux-5.4:test-arm64-arm64-xl-xsm:guest-start:fail:regression
    linux-5.4:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    linux-5.4:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-credit2:guest-start:fail:regression
    linux-5.4:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-amd64:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemut-debianhvm-amd64:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qcow2:debian-di-install:fail:regression
    linux-5.4:test-amd64-amd64-libvirt-vhd:debian-di-install:fail:regression
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:windows-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
    linux-5.4:test-arm64-arm64-xl-thunderx:guest-start:fail:regression
    linux-5.4:test-amd64-i386-xl-raw:debian-di-install:fail:regression
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:windows-install:fail:regression
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-cubietruck:guest-start:fail:regression
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
    linux-5.4:test-armhf-armhf-libvirt:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-qemuu-freebsd11-amd64:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    linux-5.4:test-armhf-armhf-xl:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    linux-5.4:test-armhf-armhf-xl-arndale:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-seattle:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-credit1:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-seattle:host-install(5):broken:heisenbug
    linux-5.4:test-arm64-arm64-xl-credit1:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-xl-rtds:guest-start:fail:allowable
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
X-Osstest-Versions-This:
    linux=0fbca6ce4174724f28be5268c5d210f51ed96e31
X-Osstest-Versions-That:
    linux=a829146c3fdcf6d0b76d9c54556a223820f1f73b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 03 Feb 2021 23:56:31 +0000

flight 158981 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158981/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-credit1     <job status>                 broken
 test-arm64-arm64-xl-seattle     <job status>                 broken
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 158387
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 158387
 test-amd64-amd64-xl-multivcpu 14 guest-start             fail REGR. vs. 158387
 test-amd64-amd64-xl-pvhv2-amd 14 guest-start             fail REGR. vs. 158387
 test-amd64-coresched-amd64-xl 14 guest-start             fail REGR. vs. 158387
 test-amd64-coresched-i386-xl 14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-qemuu-freebsd12-amd64 13 guest-start    fail REGR. vs. 158387
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-freebsd10-i386 13 guest-start            fail REGR. vs. 158387
 test-arm64-arm64-xl          14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-pair         25 guest-start/debian       fail REGR. vs. 158387
 test-amd64-i386-qemut-rhel6hvm-amd 12 redhat-install     fail REGR. vs. 158387
 test-amd64-i386-qemuu-rhel6hvm-amd 12 redhat-install     fail REGR. vs. 158387
 test-amd64-i386-freebsd10-amd64 13 guest-start           fail REGR. vs. 158387
 test-amd64-amd64-xl-pvshim   14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-xl-pvhv2-intel 14 guest-start           fail REGR. vs. 158387
 test-amd64-amd64-xl-xsm      14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-libvirt      14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-xl-shadow    14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-xl-credit1  14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-pair        25 guest-start/debian       fail REGR. vs. 158387
 test-amd64-amd64-xl          14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-xl           14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-xl-shadow   14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-xl-xsm       14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-xl-credit2  14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-libvirt-xsm  14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-libvirt-pair 25 guest-start/debian       fail REGR. vs. 158387
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 158387
 test-amd64-i386-qemut-rhel6hvm-intel 12 redhat-install   fail REGR. vs. 158387
 test-amd64-amd64-qemuu-nested-amd 12 debian-hvm-install  fail REGR. vs. 158387
 test-amd64-i386-qemuu-rhel6hvm-intel 12 redhat-install   fail REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install  fail REGR. vs. 158387
 test-amd64-i386-xl-qemut-win7-amd64 12 windows-install   fail REGR. vs. 158387
 test-amd64-amd64-amd64-pvgrub 12 debian-di-install       fail REGR. vs. 158387
 test-amd64-amd64-pygrub      12 debian-di-install        fail REGR. vs. 158387
 test-amd64-amd64-qemuu-nested-intel 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemut-win7-amd64 12 windows-install  fail REGR. vs. 158387
 test-amd64-amd64-i386-pvgrub 12 debian-di-install        fail REGR. vs. 158387
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-i386-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 158387
 test-arm64-arm64-xl-xsm      14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 158387
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-credit2  14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemut-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-i386-xl-qemut-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qcow2    12 debian-di-install        fail REGR. vs. 158387
 test-amd64-amd64-libvirt-vhd 12 debian-di-install        fail REGR. vs. 158387
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-ws16-amd64 12 windows-install  fail REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 158387
 test-amd64-amd64-xl-qemut-ws16-amd64 12 windows-install  fail REGR. vs. 158387
 test-amd64-i386-xl-qemuu-win7-amd64 12 windows-install   fail REGR. vs. 158387
 test-arm64-arm64-xl-thunderx 14 guest-start              fail REGR. vs. 158387
 test-amd64-i386-xl-raw       12 debian-di-install        fail REGR. vs. 158387
 test-amd64-i386-xl-qemut-ws16-amd64 12 windows-install   fail REGR. vs. 158387
 test-armhf-armhf-xl-multivcpu 14 guest-start             fail REGR. vs. 158387
 test-armhf-armhf-xl-credit2  14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-credit1  14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-cubietruck 14 guest-start            fail REGR. vs. 158387
 test-amd64-i386-xl-qemuu-ws16-amd64 12 windows-install   fail REGR. vs. 158387
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-qemuu-freebsd11-amd64 13 guest-start    fail REGR. vs. 158387
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 158387
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 158387
 test-armhf-armhf-xl          14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 158387
 test-armhf-armhf-xl-arndale  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-seattle  14 guest-start    fail in 158962 REGR. vs. 158387
 test-arm64-arm64-xl-credit1  14 guest-start    fail in 158962 REGR. vs. 158387

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-seattle   5 host-install(5)          broken pass in 158962
 test-arm64-arm64-xl-credit1   5 host-install(5)          broken pass in 158962

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-rtds     14 guest-start              fail REGR. vs. 158387

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass

version targeted for testing:
 linux                0fbca6ce4174724f28be5268c5d210f51ed96e31
baseline version:
 linux                a829146c3fdcf6d0b76d9c54556a223820f1f73b

Last test of basis   158387  2021-01-12 19:40:06 Z   22 days
Failing since        158473  2021-01-17 13:42:20 Z   17 days   29 attempts
Testing same since   158818  2021-01-30 13:48:12 Z    4 days    8 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adrian Hunter <adrian.hunter@intel.com>
  Akilesh Kailash <akailash@google.com>
  Al Viro <viro@zeniv.linux.org.uk>
  Alan Maguire <alan.maguire@oracle.com>
  Alan Stern <stern@rowland.harvard.edu>
  Aleksander Jan Bajkowski <olek2@wp.pl>
  Alex Deucher <alexander.deucher@amd.com>
  Alex Leibovich <alexl@marvell.com>
  Alexander Lobakin <alobakin@pm.me>
  Alexander Shishkin <alexander.shishkin@linux.intel.com>
  Alexandru Ardelean <alexandru.ardelean@analog.com>
  Alexey Minnekhanov <alexeymin@postmarketos.org>
  Anders Roxell <anders.roxell@linaro.org>
  Andreas Kemnade <andreas@kemnade.info>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andrey Zhizhikin <andrey.zhizhikin@leica-geosystems.com>
  Andrii Nakryiko <andrii@kernel.org>
  Andy Lutomirski <luto@kernel.org>
  Anthony Iliopoulos <ailiop@suse.com>
  Ard Biesheuvel <ardb@kernel.org>
  Ariel Marcovitch <ariel.marcovitch@gmail.com>
  Ariel Marcovitch <arielmarcovitch@gmail.com>
  Arnaldo Carvalho de Melo <acme@redhat.com>
  Arnd Bergmann <arnd@arndb.de>
  Aya Levin <ayal@nvidia.com>
  Ayush Sawal <ayush.sawal@chelsio.com>
  Baptiste Lepers <baptiste.lepers@gmail.com>
  Bartosz Golaszewski <bgolaszewski@baylibre.com>
  Baruch Siach <baruch@tkos.co.il>
  Ben Skeggs <bskeggs@redhat.com>
  Billy Tsai <billy_tsai@aspeedtech.com>
  Borislav Petkov <bp@suse.de>
  Can Guo <cang@codeaurora.org>
  Catalin Marinas <catalin.marinas@arm.com>
  Cezary Rojewski <cezary.rojewski@intel.com>
  Chris Wilson <chris@chris-wilson.co.uk>
  Christoph Hellwig <hch@lst.de>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Chuck Lever <chuck.lever@oracle.com>
  Chunyan Zhang <chunyan.zhang@unisoc.com>
  Colin Ian King <colin.king@canonical.com>
  Cong Wang <cong.wang@bytedance.com>
  Craig Tatlor <ctatlor97@gmail.com>
  Damien Le Moal <damien.lemoal@wdc.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Daniel Vetter <daniel.vetter@ffwll.ch>
  Daniel Vetter <daniel.vetter@intel.com>
  Dave Wysochanski <dwysocha@redhat.com>
  David Howells <dhowells@redhat.com>
  David Rientjes <rientjes@google.com>
  David S. Miller <davem@davemloft.net>
  David Sterba <dsterba@suse.com>
  David Woodhouse <dwmw@amazon.co.uk>
  David Wu <david.wu@rock-chips.com>
  Dennis Li <Dennis.Li@amd.com>
  Dexuan Cui <decui@microsoft.com>
  Dinghao Liu <dinghao.liu@zju.edu.cn>
  Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
  Enke Chen <enchen@paloaltonetworks.com>
  Eric Biggers <ebiggers@google.com>
  Eric Dumazet <edumazet@google.com>
  Eugene Korenevsky <ekorenevsky@astralinux.ru>
  Ewan D. Milne <emilne@redhat.com>
  Fabian Vogt <fvogt@suse.com>
  Felipe Balbi <balbi@kernel.org>
  Felix Fietkau <nbd@nbd.name>
  Fenghua Yu <fenghua.yu@intel.com>
  Filipe Laíns <lains@archlinux.org>
  Filipe Manana <fdmanana@suse.com>
  Finn Thain <fthain@telegraphics.com.au>
  Florian Fainelli <f.fainelli@gmail.com>
  Florian Westphal <fw@strlen.de>
  Gaurav Kohli <gkohli@codeaurora.org>
  Geert Uytterhoeven <geert+renesas@glider.be>
  Gopal Tiwari <gtiwari@redhat.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Guido Günther <agx@sigxcpu.org>
  Guillaume Nault <gnault@redhat.com>
  Gustavo Pimentel <gustavo.pimentel@synopsys.com>
  Hamish Martin <hamish.martin@alliedtelesis.co.nz>
  Hangbin Liu <liuhangbin@gmail.com>
  Hannes Reinecke <hare@suse.de>
  Hans de Goede <hdegoede@redhat.com>
  Hao Wang <pkuwangh@gmail.com>
  Heikki Krogerus <heikki.krogerus@linux.intel.com>
  Hoang Le <hoang.h.le@dektech.com.au>
  Huazhong Tan <tanhuazhong@huawei.com>
  Ido Schimmel <idosch@nvidia.com>
  Igor Russkikh <irusskikh@marvell.com>
  Ingo Molnar <mingo@kernel.org>
  Ion Agorria <ion@agorria.com>
  Israel Rukshin <israelr@nvidia.com>
  J. Bruce Fields <bfields@redhat.com>
  j.nixdorf@avm.de <j.nixdorf@avm.de>
  Jakub Kicinski <kuba@kernel.org>
  Jamie Iles <jamie@jamieiles.com>
  Jan Kara <jack@suse.cz>
  Jani Nikula <jani.nikula@intel.com>
  Jann Horn <jannh@google.com>
  Jason A. Donenfeld <Jason@zx2c4.com>
  Jason Gerecke <jason.gerecke@wacom.com>
  Jason Gerecke <killertofu@gmail.com>
  Jason Gunthorpe <jgg@nvidia.com>
  JC Kuo <jckuo@nvidia.com>
  Jean Delvare <jdelvare@suse.de>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Axboe <axboe@kernel.dk>
  Jerome Brunet <jbrunet@baylibre.com>
  Jesper Dangaard Brouer <brouer@redhat.com>
  Jethro Beekman <jethro@fortanix.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jiri Kosina <jkosina@suse.cz>
  Jiri Olsa <jolsa@redhat.com>
  Jiri Slaby <jslaby@suse.cz>
  Joel Stanley <joel@jms.id.au>
  Johannes Berg <johannes.berg@intel.com>
  Johannes Nixdorf <j.nixdorf@avm.de>
  John Millikin <john@john-millikin.com>
  Johnathan Smithinovic <johnathan.smithinovic@gmx.at>
  Jon Hunter <jonathanh@nvidia.com>
  Jon Maloy <jmaloy@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Joonsoo Kim <iamjoonsoo.kim@lge.com>
  Josef Bacik <josef@toxicpanda.com>
  Jouni K. Seppänen <jks@iki.fi>
  Jozsef Kadlecsik <kadlec@netfilter.org>
  Juerg Haefliger <juergh@canonical.com>
  Juergen Gross <jgross@suse.com>
  Julian Wiedmann <jwi@linux.ibm.com>
  Kai-Heng Feng <kai.heng.feng@canonical.com>
  Kan Liang <kan.liang@linux.intel.com>
  Kees Cook <keescook@chromium.org>
  Krzysztof Mazur <krzysiek@podlesie.net>
  Krzysztof Piotr Olędzki <ole@ans.pl>
  Lars-Peter Clausen <lars@metafoo.de>
  Lecopzer Chen <lecopzer.chen@mediatek.com>
  Lecopzer Chen <lecopzer@gmail.com>
  Leon Romanovsky <leonro@nvidia.com>
  Leon Schuermann <leon@is.currently.online>
  Linhua Xu <linhua.xu@unisoc.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Longfang Liu <liulongfang@huawei.com>
  Lorenzo Bianconi <lorenzo@kernel.org>
  Lu Baolu <baolu.lu@linux.intel.com>
  Luis Lozano <llozano@google.com>
  Lukas Wunner <lukas@wunner.de>
  Manish Chopra <manishc@marvell.com>
  Manoj Gupta <manojgupta@google.com>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Marc Zyngier <maz@kernel.org>
  Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
  Marcin Wojtas <mw@semihalf.com>
  Mark Bloch <mbloch@nvidia.com>
  Mark Brown <broonie@kernel.org>
  Mark Zhang <markzhang@nvidia.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Martin KaFai Lau <kafai@fb.com>
  Martin Wilck <mwilck@suse.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Masami Hiramatsu <mhiramat@kernel.org>
  Mathias Kresin <dev@kresin.me>
  Mathias Nyman <mathias.nyman@linux.intel.com>
  Matteo Croce <mcroce@microsoft.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Miaohe Lin <linmiaohe@huawei.com>
  Michael Chan <michael.chan@broadcom.com>
  Michael Ellerman <mpe@ellerman.id.au>
  Michael Hennerich <michael.hennerich@analog.com>
  Michael S. Tsirkin <mst@redhat.com>
  Mike Snitzer <snitzer@redhat.com>
  Mikko Perttunen <mperttunen@nvidia.com>
  Mikulas Patocka <mpatocka@redhat.com>
  Ming Lei <ming.lei@redhat.com>
  Mircea Cirjaliu <mcirjaliu@bitdefender.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
  Neal Cardwell <ncardwell@google.com>
  Necip Fazil Yildiran <fazilyildiran@gmail.com>
  Nick Desaulniers <ndesaulniers@google.com>
  Nicolai Stange <nstange@suse.de>
  Nicolas Dichtel <nicolas.dichtel@6wind.com>
  Nilesh Javali <njavali@marvell.com>
  Oded Gabbay <ogabbay@kernel.org>
  Olaf Hering <olaf@aepfle.de>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Pali Rohár <pali@kernel.org>
  Palmer Dabbelt <palmerdabbelt@google.com>
  Pan Bian <bianpan2016@163.com>
  Parav Pandit <parav@nvidia.com>
  Patrik Jakobsson <patrik.r.jakobsson@gmail.com>
  Paul Cercueil <paul@crapouillou.net>
  Paulo Alcantara (SUSE) <pc@cjr.nz>
  Paulo Alcantara <pc@cjr.nz>
  Peter Collingbourne <pcc@google.com>
  Peter Geis <pgwipeout@gmail.com>
  Peter Robinson <pbrobinson@gmail.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Petr Machata <me@pmachata.org>
  Petr Machata <petrm@nvidia.com>
  Phil Oester <kernel@linuxace.com>
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Ping Cheng <ping.cheng@wacom.com>
  Ping Cheng <pinglinux@gmail.com>
  Po-Hsu Lin <po-hsu.lin@canonical.com>
  Qinglang Miao <miaoqinglang@huawei.com>
  Qingqing Zhuo <qingqing.zhuo@amd.com>
  Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Rafael Kitover <rkitover@gmail.com>
  Randy Dunlap <rdunlap@infradead.org>
  Rasmus Villemoes <rasmus.villemoes@prevas.dk>
  Reinette Chatre <reinette.chatre@intel.com>
  Rich Felker <dalias@libc.org>
  Rob Clark <robdclark@chromium.org>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Rohit Maheshwari <rohitm@chelsio.com>
  Roman Guskov <rguskov@dh-electronics.com>
  Ronnie Sahlberg <lsahlber@redhat.com>
  Ross Zwisler <zwisler@google.com>
  Ryan Chen <ryan_chen@aspeedtech.com>
  Saeed Mahameed <saeedm@nvidia.com>
  Sagar Shrikant Kadam <sagar.kadam@sifive.com>
  Sagi Grimberg <sagi@grimberg.me>
  Sameer Pujar <spujar@nvidia.com>
  Samuel Holland <samuel@sholland.org>
  Sasha Levin <sashal@kernel.org>
  Sean Tranchetti <stranche@codeaurora.org>
  Seth Miller <miller.seth@gmail.com>
  Shawn Guo <shawn.guo@linaro.org>
  Shravya Kumbham <shravya.kumbham@xilinx.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Stanislav Fomichev <sdf@google.com>
  Stefan Chulski <stefanc@marvell.com>
  Steffen Klassert <steffen.klassert@secunet.com>
  Stephan Gerhold <stephan@gerhold.net>
  Stephen Boyd <sboyd@kernel.org>
  Steve French <stfrench@microsoft.com>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Su Yue <l@damenly.su>
  Sudip Mukherjee <sudipm.mukherjee@gmail.com>
  Takashi Iwai <tiwai@suse.de>
  Tariq Toukan <tariqt@nvidia.com>
  Theodore Ts'o <tytso@mit.edu>
  Thierry Reding <treding@nvidia.com>
  Thinh Nguyen <Thinh.Nguyen@synopsys.com>
  Thomas Bogendoerfer <tsbogend@alpha.franken.de>
  Thomas Gleixner <tglx@linutronix.de>
  Thomas Hebb <tommyhebb@gmail.com>
  Tobias Waldekranz <tobias@waldekranz.com>
  Toke Høiland-Jørgensen <toke@toke.dk>
  Tom Rix <trix@redhat.com>
  Tony Lindgren <tony@atomide.com>
  Trond Myklebust <trond.myklebust@hammerspace.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
  Vadim Pasternak <vadimp@nvidia.com>
  Valdis Kletnieks <valdis.kletnieks@vt.edu>
  Valdis Klētnieks <valdis.kletnieks@vt.edu>
  Vasily Averin <vvs@virtuozzo.com>
  Victor Zhao <Victor.Zhao@amd.com>
  Vinay Kumar Yadav <vinay.yadav@chelsio.com>
  Vincent Mailhol <mailhol.vincent@wanadoo.fr>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vineet Gupta <vgupta@synopsys.com>
  Vinod Koul <vkoul@kernel.org>
  Viresh Kumar <viresh.kumar@linaro.org>
  Vladimir Oltean <vladimir.oltean@nxp.com>
  Vlastimil Babka <vbabka@suse.cz>
  Wang Hai <wanghai38@huawei.com>
  Wang Hui <john.wanghui@huawei.com>
  Wayne Lin <Wayne.Lin@amd.com>
  Wei Liu <wei.liu@kernel.org>
  Will Deacon <will@kernel.org>
  Willem de Bruijn <willemb@google.com>
  Wolfram Sang <wsa+renesas@sang-engineering.com>
  Wolfram Sang <wsa@kernel.org>
  Xiaolei Wang <xiaolei.wang@windriver.com>
  yangerkun <yangerkun@huawei.com>
  Yazen Ghannam <Yazen.Ghannam@amd.com>
  Yonglong Liu <liuyonglong@huawei.com>
  Youling Tang <tangyouling@loongson.cn>
  YueHaibing <yuehaibing@huawei.com>
  Yufeng Mo <moyufeng@huawei.com>
  zhengbin <zhengbin13@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  broken  
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  broken  
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-arm64-arm64-xl-credit1 broken
broken-job test-arm64-arm64-xl-seattle broken
broken-step test-arm64-arm64-xl-seattle host-install(5)
broken-step test-arm64-arm64-xl-credit1 host-install(5)

Not pushing.

(No revision log; it would be 8166 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 00:13:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 00:13:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81129.149321 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7SH7-0003RQ-4p; Thu, 04 Feb 2021 00:13:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81129.149321; Thu, 04 Feb 2021 00:13:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7SH7-0003RI-1o; Thu, 04 Feb 2021 00:13:33 +0000
Received: by outflank-mailman (input) for mailman id 81129;
 Thu, 04 Feb 2021 00:13:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=G7BZ=HG=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1l7SH5-0003RD-0v
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 00:13:31 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8ad4dbaa-94fd-4a2e-a9c3-d4e3b092dc10;
 Thu, 04 Feb 2021 00:13:29 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id A7A7664F4C;
 Thu,  4 Feb 2021 00:13:28 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8ad4dbaa-94fd-4a2e-a9c3-d4e3b092dc10
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1612397609;
	bh=/H9cSDc2XoQsKl4+NaDVz1blmhcCUBcWQhZbh+ELGZQ=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=V4l9h2RZcFGtswC8sqrlNicxgV8aeZd4rE56Y4nlpRyZ9Oy8dx0Bexa+VZZzngqKu
	 FjmNpHCKJopB53e/R1qQPq4Fi16nNm9vxZsH0TMtlDHkMNHk+6Rojdc/VfaM3zSOsA
	 7IEA8nCwLl/xCygfKjY0C2811W6SY+skRsEXK6LucNnil+0hm1bStLZxvjtFl/q/K9
	 3GLuIF4031VrUsKJyULzmN12Kn3LpjO5TDdaXG7p4pbJwwtIZ5RO8EeYADZr9T25HP
	 MzGxQ4m4yeNOgtKp6CYVH1Prfb2Vj38vqV1JmbKpqvojO7G03RrPOShd7q1EbMgVrC
	 bTIIUaOXQoMdQ==
Date: Wed, 3 Feb 2021 16:13:28 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien.grall.oss@gmail.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Elliott Mitchell <ehem+xen@m5p.com>, 
    xen-devel <xen-devel@lists.xenproject.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: domain_build: Ignore device nodes with invalid
 addresses
In-Reply-To: <CAJ=z9a1LsqOMFXV5GLYEkF7=akMx7fT_vpgVtT6xP6MPfmP9vQ@mail.gmail.com>
Message-ID: <alpine.DEB.2.21.2102031519540.29047@sstabellini-ThinkPad-T480s>
References: <YBmQQ3Tzu++AadKx@mattapan.m5p.com> <a422c04c-f908-6fb6-f2de-fea7b18a6e7d@xen.org> <b6d342f8-c833-db88-9808-cdc946999300@xen.org> <alpine.DEB.2.21.2102021412480.29047@sstabellini-ThinkPad-T480s> <06d6b9ec-0db9-d6da-e30b-df9f9381157d@xen.org>
 <alpine.DEB.2.21.2102031315350.29047@sstabellini-ThinkPad-T480s> <CAJ=z9a1LsqOMFXV5GLYEkF7=akMx7fT_vpgVtT6xP6MPfmP9vQ@mail.gmail.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 3 Feb 2021, Julien Grall wrote:
> On Wed, 3 Feb 2021 at 22:18, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > > > But aside from PCIe, let's say that we know of a few nodes for which
> > > > "reg" needs a special treatment. I am not sure it makes sense to proceed
> > > > with parsing those nodes without knowing how to deal with that.
> > >
> > > I believe that most of the time the "special" treatment would be to ignore the
> > > property "reg" as it will not be a CPU memory address.
> > >
> > > > So maybe
> > > > we should add those nodes to skip_matches until we know what to do with
> > > > them. At that point, I would imagine we would introduce a special
> > > > handle_device function that knows what to do. In the case of PCIe,
> > > > something like "handle_device_pcie".
> > > Could you outline how "handle_device_pcie()" would differ from handle_node()?
> > >
> > > In fact, the problem is not the PCIe node directly. Instead, it is the second
> > > level of nodes below it (i.e. usb@...).
> > >
> > > The current implementation of dt_number_of_address() only looks at the bus type
> > > of the parent. As the parent has no bus type but does have "ranges", it thinks
> > > this is something we can translate to a CPU address.
> > >
> > > However, this is below a PCI bus so the meaning of "reg" is completely
> > > different. In this case, we only need to ignore "reg".
> >
> > I see what you are saying and I agree: if we had to introduce a special
> > case for PCI, then  dt_number_of_address() seems to be a good place.  In
> > fact, we already have special PCI handling, see our
> > __dt_translate_address function and xen/common/device_tree.c:dt_busses.
> >
> > Which brings the question: why is this actually failing?
> 
> I already hinted at the reason in my previous e-mail :). Let me expand
> a bit more.
> 
> >
> > pcie0 {
> >      ranges = <0x02000000 0x0 0xc0000000 0x6 0x00000000 0x0 0x40000000>;
> >
> > Which means that PCI addresses 0xc0000000-0x100000000 become CPU addresses 0x600000000-0x640000000.
> >
> > The offending DT is:
> >
> > &pcie0 {
> >          pci@1,0 {
> >                  #address-cells = <3>;
> >                  #size-cells = <2>;
> >                  ranges;
> >
> >                  reg = <0 0 0 0 0>;
> >
> >                  usb@1,0 {
> >                          reg = <0x10000 0 0 0 0>;
> >                          resets = <&reset RASPBERRYPI_FIRMWARE_RESET_ID_USB>;
> >                  };
> >          };
> > };
> >
> >
> > reg = <0x10000 0 0 0 0> means that usb@1,0 is PCI device 01:00.0.
> > However, the rest of the regs cells are left as zero. It shouldn't be an
> > issue because usb@1,0 is a child of pci@1,0 but pci@1,0 is not a bus.
> 
> The property "ranges" is used to define a mapping or translation
> between the address space of the "bus" (here pci@1,0) and the address
> space of the bus node's parent (&pcie0).
> IOW, it means "reg" in usb@1,0 is an address on the PCI bus (i.e. BDF).
> 
> The problem is dt_number_of_address() will only look at the "bus" type
> of the parent using dt_match_bus(). This will return the default bus
> (see dt_bus_default_match()), because there is a "ranges" property in
> the parent node (i.e. pci@1,0). Therefore...
> 
> > So
> > in theory dt_number_of_address() should already return 0 for it.
> 
> ... dt_number_of_address() will return 1 even though the address is not a
> CPU address. So when Xen tries to translate it, it will fail.
> 
> >
> > Maybe reg = <0 0 0 0 0> is the problem. In that case, we could simply
> > add a check to skip 0 size ranges. Just a hack to explain what I mean:
> 
> The parent of pci@1,0 is a PCI bridge (see its "device_type" property),
> so the CPU addresses are found not via "reg" but via "assigned-addresses".
> 
> In this situation, "reg" will have a different meaning and therefore
> there is no promise that the size will be 0.

I copy/pasted the following:

       pci@1,0 {
               #address-cells = <3>;
               #size-cells = <2>;
               ranges;

               reg = <0 0 0 0 0>;

               usb@1,0 {
                       reg = <0x10000 0 0 0 0>;
                       resets = <&reset
                       RASPBERRYPI_FIRMWARE_RESET_ID_USB>;
               };
       };

under pcie0 in my DTS to see what happens (the node is not there in the
device tree for the rpi-5.9.y kernel). It results in the expected error:

(XEN) Unable to retrieve address 0 for /scb/pcie@7d500000/pci@1,0/usb@1,0
(XEN) Device tree generation failed (-22).

I could verify that pci@1,0 is seen as a "default" bus due to the "ranges"
property, thus dt_number_of_address() returns 1.


I can see that reg = <0 0 0 0 0> is not a problem: it is ignored because
the parent is a PCI bus, and "assigned-addresses" is the property that is
read instead.


But from a device tree perspective I am actually confused by the
presence of the "ranges" property under pci@1,0. Is that correct? It is
stating that the addresses of child devices will be translated to the
address space of the parent (pcie0) using the parent's translation rules.
I mean -- it looks like Xen is right in trying to translate reg =
<0x10000 0 0 0 0> using ranges = <0x02000000 0x0 0xc0000000 0x6
0x00000000 0x0 0x40000000>.

Or maybe, since pcie0 is a PCI bus, all the child addresses, even those
of grandchildren, are expected to be specified using "assigned-addresses"?


Looking at other examples [1][2], maybe the mistake is that pci@1,0 is
missing device_type = "pci"?  Of course, if I add that, the error
disappears.

[1] Documentation/devicetree/bindings/pci/mvebu-pci.txt
[2] Documentation/devicetree/bindings/pci/nvidia,tegra20-pcie.txt
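
Concretely, the DT-side fix would look like the fragment below (the same
node as above, with only the device_type property added; everything else
unchanged):

```
       pci@1,0 {
               device_type = "pci";  /* let dt_match_bus() see a PCI bus */
               #address-cells = <3>;
               #size-cells = <2>;
               ranges;

               reg = <0 0 0 0 0>;

               usb@1,0 {
                       reg = <0x10000 0 0 0 0>;
                       resets = <&reset
                       RASPBERRYPI_FIRMWARE_RESET_ID_USB>;
               };
       };
```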

For the sake of making Xen more resilient to possible DTSes, maybe we
should try to extend the dt_bus_pci_match check? See for instance the
change below, but we might be able to come up with better ideas.


diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index 18825e333e..24d998f725 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -565,12 +565,23 @@ static unsigned int dt_bus_default_get_flags(const __be32 *addr)
 
 static bool_t dt_bus_pci_match(const struct dt_device_node *np)
 {
+    bool ret;
+
     /*
      * "pciex" is PCI Express "vci" is for the /chaos bridge on 1st-gen PCI
      * powermacs "ht" is hypertransport
      */
-    return !strcmp(np->type, "pci") || !strcmp(np->type, "pciex") ||
+    ret = !strcmp(np->type, "pci") || !strcmp(np->type, "pciex") ||
         !strcmp(np->type, "vci") || !strcmp(np->type, "ht");
+
+    if ( ret )
+        return ret;
+
+    /* A node named "pci" whose parent is itself a PCI bus is a PCI bus. */
+    if ( !strcmp(np->name, "pci") && dt_get_parent(np) )
+        ret = dt_bus_pci_match(dt_get_parent(np));
+
+    return ret;
 }
 
 static void dt_bus_pci_count_cells(const struct dt_device_node *np,


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 05:00:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 05:00:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81146.149366 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7Wki-0002xm-3J; Thu, 04 Feb 2021 05:00:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81146.149366; Thu, 04 Feb 2021 05:00:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7Wkh-0002xd-Rs; Thu, 04 Feb 2021 05:00:23 +0000
Received: by outflank-mailman (input) for mailman id 81146;
 Thu, 04 Feb 2021 05:00:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7Wkg-0002xV-Rn; Thu, 04 Feb 2021 05:00:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7Wkg-0002lO-MX; Thu, 04 Feb 2021 05:00:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7Wkg-0003kI-FZ; Thu, 04 Feb 2021 05:00:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l7Wkg-0002Vr-Ep; Thu, 04 Feb 2021 05:00:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=X/dUadK6RHt+TqEIpiaNvn6nUIEwIuWv+phPi7bBVDY=; b=XMDjqJbqsf7OsC7RFMTsq1F9cH
	3nSQ0bcX35y8sRmqWutYnBewG+fBRX1Pqq8EFIOcr89ppvkcG+IFb3G4aGtdrZUaWLjFhHZxt7rsw
	mrUhT8OzSaDSY/7ELWXFjmuVRfWsVpRdqoyVrnbt/SNLKwqu/x64YkJ9tUADvNBhXVfo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158987-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 158987: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    linux-linus:build-arm64-xsm:<job status>:broken:regression
    linux-linus:build-arm64-xsm:host-install(4):broken:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:debian-install:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:guest-start:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:guest-start:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:guest-start:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-armhf-armhf-libvirt:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:guest-start:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:host-ping-check-xen:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
    linux-linus:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=3aaf0a27ffc29b19a62314edd684b9bc6346f9a8
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 04 Feb 2021 05:00:22 +0000

flight 158987 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158987/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-xsm                 <job status>                 broken
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle  12 debian-install           fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1  14 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl          14 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-arndale  14 guest-start              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck 14 guest-start            fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot       fail in 158971 REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot       fail in 158971 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-credit2  10 host-ping-check-xen        fail pass in 158971

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     14 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle 11 leak-check/basis(11) fail in 158971 blocked in 152332
 test-arm64-arm64-xl-credit2 11 leak-check/basis(11) fail in 158971 blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                3aaf0a27ffc29b19a62314edd684b9bc6346f9a8
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  187 days
Failing since        152366  2020-08-01 20:49:34 Z  186 days  333 attempts
Testing same since   158971  2021-02-03 01:39:37 Z    1 days    2 attempts

------------------------------------------------------------
4515 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-arm64-xsm broken
broken-step build-arm64-xsm host-install(4)

Not pushing.

(No revision log; it would be 1022596 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 05:32:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 05:32:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81153.149387 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7XFx-0006D9-MO; Thu, 04 Feb 2021 05:32:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81153.149387; Thu, 04 Feb 2021 05:32:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7XFx-0006D2-JC; Thu, 04 Feb 2021 05:32:41 +0000
Received: by outflank-mailman (input) for mailman id 81153;
 Thu, 04 Feb 2021 05:32:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=hI3K=HG=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1l7XFv-0006Cx-QP
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 05:32:39 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5e58f294-6078-4158-966e-72bc065a6331;
 Thu, 04 Feb 2021 05:32:34 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id DE6EBAD37;
 Thu,  4 Feb 2021 05:32:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5e58f294-6078-4158-966e-72bc065a6331
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612416754; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=7bOPQvy7qTInyB9wYQ9IL2fnW0fKtEjyxYEe1vYqV0c=;
	b=Hi8wPm5pIzWzXmj5NK+rF+oJbT/b2+PtaRhTiXfHfgTJfcncJB2ngC2gblQQ+UlJDKjA6U
	CHZAYXOeQmqs2CBkbHEiGWKGL51DEt+JrhuhjLB0DwK6KguPPHTWUQD26pv1uDqNCSdw40
	X6zsvPzYwLsYdBde0wCHKwQNNwb7FRk=
Subject: Re: [PATCH] xen/netback: avoid race in
 xenvif_rx_ring_slots_available()
To: Jakub Kicinski <kuba@kernel.org>
Cc: xen-devel@lists.xenproject.org, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org, Wei Liu <wei.liu@kernel.org>,
 Paul Durrant <paul@xen.org>, "David S. Miller" <davem@davemloft.net>,
 Igor Druzhinin <igor.druzhinin@citrix.com>, stable@vger.kernel.org
References: <20210202070938.7863-1-jgross@suse.com>
 <20210203154800.4c6959d6@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <f6fa1533-0646-e8b1-b7f8-51ad70691cae@suse.com>
Date: Thu, 4 Feb 2021 06:32:32 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <20210203154800.4c6959d6@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="mHapG4vhTCP36nbTdrhUC7cqDD240Fsgi"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--mHapG4vhTCP36nbTdrhUC7cqDD240Fsgi
Content-Type: multipart/mixed; boundary="3tMIMW2EI4iG1ylfu3D1bWpPCgMh1TUEY";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jakub Kicinski <kuba@kernel.org>
Cc: xen-devel@lists.xenproject.org, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org, Wei Liu <wei.liu@kernel.org>,
 Paul Durrant <paul@xen.org>, "David S. Miller" <davem@davemloft.net>,
 Igor Druzhinin <igor.druzhinin@citrix.com>, stable@vger.kernel.org
Message-ID: <f6fa1533-0646-e8b1-b7f8-51ad70691cae@suse.com>
Subject: Re: [PATCH] xen/netback: avoid race in
 xenvif_rx_ring_slots_available()
References: <20210202070938.7863-1-jgross@suse.com>
 <20210203154800.4c6959d6@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
In-Reply-To: <20210203154800.4c6959d6@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>

--3tMIMW2EI4iG1ylfu3D1bWpPCgMh1TUEY
Content-Type: multipart/mixed;
 boundary="------------0650DBEFF4F58FCE9BE6C4D9"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------0650DBEFF4F58FCE9BE6C4D9
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 04.02.21 00:48, Jakub Kicinski wrote:
> On Tue,  2 Feb 2021 08:09:38 +0100 Juergen Gross wrote:
>> Since commit 23025393dbeb3b8b3 ("xen/netback: use lateeoi irq binding")
>> xenvif_rx_ring_slots_available() is no longer called only from the rx
>> queue kernel thread, so it needs to access the rx queue with the
>> associated queue held.
>>
>> Reported-by: Igor Druzhinin <igor.druzhinin@citrix.com>
>> Fixes: 23025393dbeb3b8b3 ("xen/netback: use lateeoi irq binding")
>> Cc: stable@vger.kernel.org
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>
> Should we route this change via networking trees? I see the bug did not
> go through networking :)
>

I'm fine with either networking or the Xen tree. It should be included
in 5.11, though. So if you are willing to take it, please do so.


Juergen

--------------0650DBEFF4F58FCE9BE6C4D9--

--3tMIMW2EI4iG1ylfu3D1bWpPCgMh1TUEY--

--mHapG4vhTCP36nbTdrhUC7cqDD240Fsgi--


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 06:44:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 06:44:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81159.149405 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7YNE-0005CG-2H; Thu, 04 Feb 2021 06:44:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81159.149405; Thu, 04 Feb 2021 06:44:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7YND-0005C9-VZ; Thu, 04 Feb 2021 06:44:15 +0000
Received: by outflank-mailman (input) for mailman id 81159;
 Thu, 04 Feb 2021 06:44:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CKvr=HG=unikie.com=jukka.kaartinen@srs-us1.protection.inumbo.net>)
 id 1l7YNB-0005C4-Vi
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 06:44:14 +0000
Received: from mail-lf1-x12e.google.com (unknown [2a00:1450:4864:20::12e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id abbf19b1-c419-4c44-b29a-780646808fd9;
 Thu, 04 Feb 2021 06:44:12 +0000 (UTC)
Received: by mail-lf1-x12e.google.com with SMTP id f1so2946912lfu.3
 for <xen-devel@lists.xenproject.org>; Wed, 03 Feb 2021 22:44:12 -0800 (PST)
Received: from [192.168.1.76] (91-153-193-91.elisa-laajakaista.fi.
 [91.153.193.91])
 by smtp.gmail.com with ESMTPSA id d9sm483062lfm.293.2021.02.03.22.44.10
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 03 Feb 2021 22:44:10 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: abbf19b1-c419-4c44-b29a-780646808fd9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=unikie-com.20150623.gappssmtp.com; s=20150623;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=S7HJaE6uEyGqiWmBXa1meiKhQVc3HhcnUTxG1aVlbZw=;
        b=x4XujDwvV0gNcy+FlrWZFE1G3yCH4Iij3G16v6QsTwv3lOsDw9raSKDAUFXuyE4lcD
         CkIjSA4tLnt0pjpNu5xmsNRK8VEA4aNmhYx++TlIciwoMj+lyBNfUABReEPsLUvrlHGE
         g+lzFTq4Mog0tINVSjkVIV1Qe4Xrk4E4O1lThE+AFGPc3o6kyNHJajBpqlA3OSn7V4/a
         ZoKVotuc/pF3ERvDQN2LDI0F20edwZGhL8aqIeg2YrsQVRzn3JR8FLunYsWbalEPw6rg
         C/qC/G9uqOTzYyGD/pTR75RsOWbuB/NyGyWu/GL1TRPST4OfjtpdsPACzbJx0H4s6MS0
         4mlQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=S7HJaE6uEyGqiWmBXa1meiKhQVc3HhcnUTxG1aVlbZw=;
        b=nqYhti7JfEOV74ZuFJImLgpXo9d76zAQ9PEfY1A3HswQ4lG9HW3Itb1xO1tP9RK3p6
         1t7+Pjyr43g1VZ5g27RbHn6IIl2+NlFMBUcI1Mqn166lOLXMMrKvr1zvqzhokXrZoc3I
         J+8YklDxwfSU50MMboOPuR/Mng7swOOK8Ts+VW8z9O4462V6EHj7A6xdJlijNH4pLd0r
         SQw4mHEJLnmvHVE8s7c7ppVcOBk9HJqxCRg7BMRKwHu13HNii4ugPC5uiIUbN4xSLnIA
         vLkqoBF2JajZjwQmWPN4+WKxL9ZC9mppySJNuntNf7d+sVfxqNHtoaZDK2q3zY9hqE+W
         sp0w==
X-Gm-Message-State: AOAM5316BeHiRC9N1uncrcWN++6SOYBwuf2ezWVTMjVY+v1QiVNea9yp
	osqzRn5CReXMOoIYrfuBfqnw7Q==
X-Google-Smtp-Source: ABdhPJycO0rdV1B99xiY/VZXF+G8x/t91o3408y6SriflFOOfDsH5VLjxLa1wHMqTVbavABHU0c4HQ==
X-Received: by 2002:a19:992:: with SMTP id 140mr3907187lfj.158.1612421051018;
        Wed, 03 Feb 2021 22:44:11 -0800 (PST)
Subject: Re: Question about xen and Rasp 4B
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Roman Shaposhnik <roman@zededa.com>, Julien Grall <julien@xen.org>,
 Xen-devel <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>
References: <CAFnJQOouOUox_kBB66sfSuriUiUSvmhSjPxucJjgrB9DdLOwkg@mail.gmail.com>
 <alpine.DEB.2.21.2101221449480.14612@sstabellini-ThinkPad-T480s>
 <CAFnJQOqoqj6mWwR61ZsZj1JxRrdisFtH_87YXCeW619GM+L21Q@mail.gmail.com>
 <alpine.DEB.2.21.2101251646470.20638@sstabellini-ThinkPad-T480s>
 <CAFnJQOpuehAWde5Ta4ud9CGufwZ-K+=60epzSdKc_DnS75O2iA@mail.gmail.com>
 <alpine.DEB.2.21.2101261149210.2568@sstabellini-ThinkPad-T480s>
 <CAFnJQOpgRM-3_aZsnv36w+aQV=gMcBA18ZEw_-man7zmYb4O4Q@mail.gmail.com>
 <5a33e663-4a6d-6247-769a-8f14db4810f2@xen.org>
 <b9247831-335a-f791-1664-abed6b400a42@unikie.com>
 <CAMmSBy-54qtu_oVVT=KB8GeKP0SW0uK+4wQ_LooHE0y_MZKJQg@mail.gmail.com>
 <3ec2b0cb-3685-384e-94df-28eaf8b57c42@unikie.com>
 <alpine.DEB.2.21.2102021552380.29047@sstabellini-ThinkPad-T480s>
 <3c98d8d0-ca4e-b177-1e2b-5f3eb454722d@unikie.com>
 <alpine.DEB.2.21.2102031249090.29047@sstabellini-ThinkPad-T480s>
From: Jukka Kaartinen <jukka.kaartinen@unikie.com>
Message-ID: <96a3d996-b7bd-fe5f-39ce-3fdd7d4e3604@unikie.com>
Date: Thu, 4 Feb 2021 08:44:09 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2102031249090.29047@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit



On 3.2.2021 22.55, Stefano Stabellini wrote:
> On Wed, 3 Feb 2021, Jukka Kaartinen wrote:
>> On 3.2.2021 2.18, Stefano Stabellini wrote:
>>> On Tue, 2 Feb 2021, Jukka Kaartinen wrote:
>>>>>> Good catch.
>>>>>> GPU works now and I can start X! Thanks! I was also able to create
>>>>>> domU
>>>>>> that runs Raspian OS.
>>>>>
>>>>> This is very interesting that you were able to achieve that - congrats!
>>>>>
>>>>> Now, sorry to be a bit dense -- but since this thread went into all
>>>>> sorts of interesting
>>>>> directions all at once -- I just have a very particular question: what
>>>>> is
>>>>> exact
>>>>> combination of versions of Xen, Linux kernel and a set of patches that
>>>>> went
>>>>> on top that allowed you to do that? I'd love to obviously see it
>>>>> productized in Xen
>>>>> upstream, but for now -- I'd love to make available to Project EVE/Xen
>>>>> community
>>>>> since there seems to be a few folks interested in EVE/Xen combo being
>>>>> able
>>>>> to
>>>>> do that.
>>>>
>>>> I have tried Xen Release 4.14.0, 4.14.1 and master (from week 4, 2021).
>>>>
>>>> Kernel rpi-5.9.y and rpi-5.10.y branches from
>>>> https://github.com/raspberrypi/linux
>>>>
>>>> and
>>>>
>>>> U-boot (master).
>>>>
>>>> For the GPU to work it was enough to disable swiotlb from the kernel(s) as
>>>> suggested in this thread.
>>>
>>> How are you configuring and installing the kernel?
>>>
>>> make bcm2711_defconfig
>>> make Image.gz
>>> make modules_install
>>>
>>> ?
>>>
>>> The device tree is the one from the rpi-5.9.y build? How are you loading
>>> the kernel and device tree with uboot? Do you have any interesting
>>> changes to config.txt?
>>>
>>> I am asking because I cannot get to the point of reproducing what you
>>> are seeing: I can boot my rpi-5.9.y kernel on recent Xen but I cannot
>>> get any graphics output on my screen. (The serial works.) I am using the
>>> default Ubuntu Desktop rpi-install target as rootfs and uboot master.
>>>
>>
>> This is what I do:
>>
>> make bcm2711_defconfig
>> cat "xen_additions" >> .config
>> make Image  modules dtbs
>>
>> make INSTALL_MOD_PATH=rootfs modules_install
>> depmod -a
>>
>> cp arch/arm64/boot/dts/broadcom/bcm2711-rpi-4-b.dtb boot/
>> cp arch/arm64/boot/dts/overlays/*.dtbo boot/overlays/
> 
> Thanks for the detailed instructions. This helps a lot. I saw below in
> boot2.source that you are using ${fdt_addr} as DTB source (instead of
> loading one), which means you are using the DTB as provided by U-Boot at
> runtime, instead of loading your own file.
> 
> With these two copies, I take it you meant to update the first partition on
> the SD card, the one where config.txt lives, right? So that Xen is
> getting the DTB and overlays from the rpi-5.9.y kernel tree but passed
> down by the RPi loader and U-Boot?
> 
> I think the DTB must be the issue as I wasn't applying any overlays
> before. I ran a test to use the DTB and overlay from U-Boot but maybe I
> haven't updated them properly because I still don't see any output.
I'm using ${fdt_addr} because it already has the changes that the 
firmware makes according to your config.txt; for example, the overlay is 
applied. I tried to load and apply the overlay in U-Boot, but there were 
strange errors, so I decided to go the easy way.

If the overlay (vc4-fkms-v3d) is not applied, the GPU (v3d) driver is 
not probed.

I always see output from U-Boot on the serial port. If not, then 
something has been wrong with the device tree; in my case it was because 
U-Boot was not able to read from the SSD.

> 
> 
>> config.txt:
>>
>> [pi4]
>> max_framebuffers=2
>> enable_uart=1
>> arm_freq=1500
>> force_turbo=1
>>
>> [all]
>> arm_64bit=1
>> kernel=u-boot.bin
>>
>> start_file=start4.elf
>> fixup_file=fixup4.dat
>>
>> # Enable the audio output, I2C and SPI interfaces on the GPIO header
>> dtparam=audio=on
>> dtparam=i2c_arm=on
>> dtparam=spi=on
>>
>> # Enable the FKMS ("Fake" KMS) graphics overlay, enable the camera firmware
>> # and allocate 128Mb to the GPU memory
>> dtoverlay=vc4-fkms-v3d,cma-64
>> gpu_mem=128
>>
>> # Comment out the following line if the edges of the desktop appear outside
>> # the edges of your display
>> disable_overscan=1
>>
>>
>> boot.source:
>> setenv serverip 10.42.0.1
>> setenv ipaddr 10.42.0.231
>> tftpb 0xC00000 boot2.scr
>> source 0xC00000
>>
>> boot2.source:
>> tftpb 0xE00000 xen
>> tftpb 0x1000000 Image
>> setenv lin_size $filesize
>>
>> fdt addr ${fdt_addr}
>> fdt resize 1024
>>
>> fdt set /chosen xen,xen-bootargs "console=dtuart dtuart=serial0 sync_console
>> dom0_mem=1024M dom0_max_vcpus=1 bootscrub=0 vwfi=native sched=credit2"
>>
>> fdt mknod /chosen dom0
>>
>> # These will break the default framebuffer@3e2fe000 that
>> # is the same chosen -node.
>> #fdt set /chosen/dom0 \#address-cells <0x2>
>> #fdt set /chosen/dom0 \#size-cells <0x2>
>>
>> fdt set /chosen/dom0 compatible "xen,linux-zimage" "xen,multiboot-module"
>> fdt set /chosen/dom0 reg <0x1000000 0x${lin_size}>
>>
>> fdt set /chosen xen,dom0-bootargs "dwc_otg.lpm_enable=0 console=hvc0
>> earlycon=xen earlyprintk=xen root=/dev/sda4 elevator=deadline rootwait fixrtc
>> quiet splash"
>>
>> setenv fdt_high 0xffffffffffffffff
>>
>> fdt print /chosen
>>
>> #xen
>> booti 0xE00000 - ${fdt_addr}
>>
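(A side note for anyone adapting boot2.source: after "tftpb 0x1000000 Image", U-Boot stores the transferred byte count, already in hex, in ${filesize}, and the script copies it into lin_size for the dom0 "reg" property. The shell sketch below mimics that computation; the file contents are a stand-in, not a real kernel image:)

```shell
# Mimic U-Boot's ${filesize}: byte count of the loaded image, in hex.
printf 'dummy-kernel-data' > Image
lin_size=$(printf '%x' "$(wc -c < Image)")
echo "fdt set /chosen/dom0 reg <0x1000000 0x${lin_size}>"
```

And a reminder in case anyone copies the scripts verbatim: plain-text boot.source/boot2.source files have to be wrapped as U-Boot script images first, e.g. `mkimage -A arm64 -T script -C none -d boot2.source boot2.scr`, before U-Boot can `source` them.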


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 06:46:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 06:46:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81160.149416 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7YP5-0005J0-Ee; Thu, 04 Feb 2021 06:46:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81160.149416; Thu, 04 Feb 2021 06:46:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7YP5-0005It-BN; Thu, 04 Feb 2021 06:46:11 +0000
Received: by outflank-mailman (input) for mailman id 81160;
 Thu, 04 Feb 2021 06:46:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CKvr=HG=unikie.com=jukka.kaartinen@srs-us1.protection.inumbo.net>)
 id 1l7YP3-0005Im-8x
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 06:46:09 +0000
Received: from mail-lf1-x133.google.com (unknown [2a00:1450:4864:20::133])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b0e130ba-8be3-4237-b0d8-18064abee8cd;
 Thu, 04 Feb 2021 06:46:08 +0000 (UTC)
Received: by mail-lf1-x133.google.com with SMTP id a12so2987288lfb.1
 for <xen-devel@lists.xenproject.org>; Wed, 03 Feb 2021 22:46:08 -0800 (PST)
Received: from [192.168.1.76] (91-153-193-91.elisa-laajakaista.fi.
 [91.153.193.91])
 by smtp.gmail.com with ESMTPSA id a30sm531644ljq.96.2021.02.03.22.46.06
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 03 Feb 2021 22:46:06 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b0e130ba-8be3-4237-b0d8-18064abee8cd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=unikie-com.20150623.gappssmtp.com; s=20150623;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=V+d1VBMMtSg8eoCFcrTHC37S2rL+yPLwXHMCT1l+IUs=;
        b=s4NJQhLMLKe5berQPXdQY6rBH0udY5oIxq47AlLicWTh68nNa6nD+/Dgh0BEofEqz3
         2/hpjV89mxj30vt7W9dnibJ29lv0JQsc7qskQnl3BcvJj/1z3STBS5EBOMqamIh9f3Uk
         Zmk/i0q6UKKxYLnara8SDO6F1WdeBhwzYWz4jHbJW3sdfCgXHkROQZ10e+BeCspzeBV4
         SUlxEtk6uYTl0cyYCHa+6lQH99Pp3wugpwjIMUweZyfhNYTqBv65oOqmBaTHAy3VgC7b
         6Jl1YH4XwwgSfN1C7uTLB5iStD+tFaW3LluIGVNUrw26wku0ZpxstigKyIZSRn3VV6ga
         jJtg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=V+d1VBMMtSg8eoCFcrTHC37S2rL+yPLwXHMCT1l+IUs=;
        b=jkTc1RVJV/BnSJwHT85e/MDmFt6a6TR4VHKytjVCrcrTrekqeldRNUzADyn9FdwkM1
         UmZQNgPlWrSdvQF2oUq2lmByOFv+LzZAES18taIMfIQN8OAuArX+zEAnRKElyiJStA42
         DB6JyaCpgn0iL/DYvKWS1MrQxD6wlVdgx876Jk0I3hED2910FivoNOlc6OL1hE7dsvCx
         LUYFMeubBX+IV9LVRExtMoOjyX4Pt1wyyZzn+jOdhhK3j1bnIDSO113dLVe8/aced43V
         1NOWSBdAMLsf0iJ/6EfHNe9YPOlOFcI+7Xv0gwlMLvQfS8ww8lOgcfe+J6iW7IXQMk4B
         Tdcg==
X-Gm-Message-State: AOAM531mOorzHNnie7aiPvyerKbt8w0lmXcgpMq/wfeFcz5j5Eq+43bw
	bU/3zp4lXVZ8w8vPUgO+QF/q4g==
X-Google-Smtp-Source: ABdhPJyTbHkj2YupoWcyp/TU3DNaYCAE11euVEFW6evIc9dYm2uUlxa05LG0/B2oVMwIwTmFHuFhfQ==
X-Received: by 2002:ac2:4569:: with SMTP id k9mr3637799lfm.461.1612421167261;
        Wed, 03 Feb 2021 22:46:07 -0800 (PST)
Subject: Re: Question about xen and Rasp 4B
To: Elliott Mitchell <ehem+xen@m5p.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: Roman Shaposhnik <roman@zededa.com>, Julien Grall <julien@xen.org>,
 Xen-devel <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>
References: <CAFnJQOpuehAWde5Ta4ud9CGufwZ-K+=60epzSdKc_DnS75O2iA@mail.gmail.com>
 <alpine.DEB.2.21.2101261149210.2568@sstabellini-ThinkPad-T480s>
 <CAFnJQOpgRM-3_aZsnv36w+aQV=gMcBA18ZEw_-man7zmYb4O4Q@mail.gmail.com>
 <5a33e663-4a6d-6247-769a-8f14db4810f2@xen.org>
 <b9247831-335a-f791-1664-abed6b400a42@unikie.com>
 <CAMmSBy-54qtu_oVVT=KB8GeKP0SW0uK+4wQ_LooHE0y_MZKJQg@mail.gmail.com>
 <3ec2b0cb-3685-384e-94df-28eaf8b57c42@unikie.com>
 <alpine.DEB.2.21.2102021552380.29047@sstabellini-ThinkPad-T480s>
 <3c98d8d0-ca4e-b177-1e2b-5f3eb454722d@unikie.com>
 <alpine.DEB.2.21.2102031249090.29047@sstabellini-ThinkPad-T480s>
 <YBsfzZ6fI40bXo7/@mattapan.m5p.com>
From: Jukka Kaartinen <jukka.kaartinen@unikie.com>
Message-ID: <e4a25631-61aa-bf31-a50e-c87d69a0888d@unikie.com>
Date: Thu, 4 Feb 2021 08:46:06 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <YBsfzZ6fI40bXo7/@mattapan.m5p.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit



On 4.2.2021 0.12, Elliott Mitchell wrote:
> On Wed, Feb 03, 2021 at 12:55:40PM -0800, Stefano Stabellini wrote:
>> On Wed, 3 Feb 2021, Jukka Kaartinen wrote:
>>> On 3.2.2021 2.18, Stefano Stabellini wrote:
>>>> How are you configuring and installing the kernel?
>>>>
>>>> make bcm2711_defconfig
>>>> make Image.gz
>>>> make modules_install
>>>>
>>>> ?
>>>>
>>>> The device tree is the one from the rpi-5.9.y build? How are you loading
>>>> the kernel and device tree with uboot? Do you have any interesting
>>>> changes to config.txt?
>>>>
>>>> I am asking because I cannot get to the point of reproducing what you
>>>> are seeing: I can boot my rpi-5.9.y kernel on recent Xen but I cannot
>>>> get any graphics output on my screen. (The serial works.) I am using the
>>>> default Ubuntu Desktop rpi-install target as rootfs and uboot master.
>>>>
>>>
>>> This is what I do:
>>>
>>> make bcm2711_defconfig
>>> cat "xen_additions" >> .config
>>> make Image  modules dtbs
>>>
>>> make INSTALL_MOD_PATH=rootfs modules_install
>>> depmod -a
>>>
>>> cp arch/arm64/boot/dts/broadcom/bcm2711-rpi-4-b.dtb boot/
>>> cp arch/arm64/boot/dts/overlays/*.dtbo boot/overlays/
>>
>> Thanks for the detailed instructions. This helps a lot. I saw below in
>> boot2.source that you are using ${fdt_addr} as DTB source (instead of
>> loading one), which means you are using the DTB as provided by U-Boot at
>> runtime, instead of loading your own file.
>>
>> With these two copies, I take you meant to update the first partition on
>> the SD card, the one where config.txt lives, right? So that Xen is
>> getting the DTB and overlays from the rpi-5.9.y kernel tree but passed
>> down by the RPi loader and U-Boot?
>>
>> I think the DTB must be the issue as I wasn't applying any overlays
>> before. I ran a test to use the DTB and overlay from U-Boot but maybe I
>> haven't updated them properly because I still don't see any output.
> 
> Seeing no graphics output from U-Boot is okay.  If the device-tree files
> get sufficiently updated you can end up with no output from U-Boot, but
> will get output once the Linux kernel's driver is operational (I've seen
> this occur).
> 
> The most important part is having a HDMI display plugged in during the
> early boot stages.  Unless the bootloader sees the display the output
> won't get initialized and the Linux driver doesn't handle that.
> 
> 
>>> dtoverlay=vc4-fkms-v3d,cma-64
> 
> This is odd.  My understanding is this is appropriate for RP3, but not
> RP4.  For RP4 you can have "dtoverlay=disable-vc4" and still get graphics
> output (hmm, I'm starting to think I need to double-check this...).
Without the overlay, the GPU driver (v3d) was not probed. And you need to 
use the fake KMS (fkms) variant.
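For reference, the config.txt lines discussed across this thread, collected in one place (a sketch only; the exact CMA size and other values depend on your setup):

```
# Boot-firmware config.txt fragment (illustrative; values taken from this thread)
dtoverlay=vc4-fkms-v3d,cma-64
gpu_mem=128
disable_overscan=1
```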


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 06:58:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 06:58:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81163.149428 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7Yas-0006RD-GW; Thu, 04 Feb 2021 06:58:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81163.149428; Thu, 04 Feb 2021 06:58:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7Yas-0006R6-DL; Thu, 04 Feb 2021 06:58:22 +0000
Received: by outflank-mailman (input) for mailman id 81163;
 Thu, 04 Feb 2021 06:58:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CKvr=HG=unikie.com=jukka.kaartinen@srs-us1.protection.inumbo.net>)
 id 1l7Yar-0006Qv-AH
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 06:58:21 +0000
Received: from mail-lf1-x12c.google.com (unknown [2a00:1450:4864:20::12c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ce5bbfcb-b232-40d2-aca6-6330c848ba0a;
 Thu, 04 Feb 2021 06:58:19 +0000 (UTC)
Received: by mail-lf1-x12c.google.com with SMTP id p21so2916881lfu.11
 for <xen-devel@lists.xenproject.org>; Wed, 03 Feb 2021 22:58:19 -0800 (PST)
Received: from [192.168.1.76] (91-153-193-91.elisa-laajakaista.fi.
 [91.153.193.91])
 by smtp.gmail.com with ESMTPSA id c123sm486652lfd.95.2021.02.03.22.58.15
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 03 Feb 2021 22:58:16 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ce5bbfcb-b232-40d2-aca6-6330c848ba0a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=unikie-com.20150623.gappssmtp.com; s=20150623;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=o2J4PvJuU5zTwoRlljd91RCboBAigDUhGuZu9n0nTWQ=;
        b=szCHrWWN2DVvWNV+8WDAQk4Ik2w5pkTzM2vL0WFaPcPSnMe/GY4AV3jS+Km3MvTsZ8
         Dk/OiyA4P2cLeyv9OMGIGaJyFqEZtpaHR7SpZmA5Jl6aTm1/fc6noOZsfbrp341P6+sA
         mIg4nNhEYXtf7Kk5ZTioPQf7f57/PNh/HsjMOtbjebWFPAmYeWW3EBADiUXmTLuLooRk
         x8s4OnSQCKvGVICNBajyQLXijYjwNqG03xPyh6RJ/3yOm2XRBAMFa/TPpEi06YwBqwxO
         HO3A9j0q7geapmIRGX4yQoOy5PoNnUxNHprdNWobUqX+TWUUgJfqqoValGmfQSZ8BS5e
         AUeA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=o2J4PvJuU5zTwoRlljd91RCboBAigDUhGuZu9n0nTWQ=;
        b=asnYbI1tNtnRQOxAVAZ0PEbIxeD5KA1BNA7s4y7gIfFGbvuX8wm6zntZcqFNzIFxKo
         3lCUTpPbZt57hnboXD2+LZrBi5dFciIg7Vs6M9vm/U9YOQrrINHJSi0/0maxiI7MUqLV
         CQ9VA2wR44and+trCj8UPro0fJ/b22dtU1AH5350Z70HA75w3NRgTQuXBdDof9u230TB
         YTQLtqMTvl9wXeDqd7u5FEB8dFmu8n1HUW3oeO4AJON4ZNIDXMJzv80O8yrF2+4gUeJF
         zTXd8lTK/K0LM7pDGhlhKy2zOqk7ZDu7SrerLLNLR7hc1aSNPmX+IknAtz4DtG0EvAul
         Bl5Q==
X-Gm-Message-State: AOAM533tTIMdtjXSvGTecrXEi3VoRuZN4ann+vyvgU7Pc9wTxHksDXBl
	IskqJDs3a5Ir+sYaZaN4IT/+vQ==
X-Google-Smtp-Source: ABdhPJyQWfNeG4Q0a5X70VUTRT+Z+CppYArMzJgKifbyDXk70HmzwQmSjeldnmKt2uZbMIgIB9+htQ==
X-Received: by 2002:a19:7f4d:: with SMTP id a74mr3899374lfd.618.1612421898141;
        Wed, 03 Feb 2021 22:58:18 -0800 (PST)
Subject: Re: Question about xen and Rasp 4B
To: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Roman Shaposhnik <roman@zededa.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>
References: <CAFnJQOouOUox_kBB66sfSuriUiUSvmhSjPxucJjgrB9DdLOwkg@mail.gmail.com>
 <alpine.DEB.2.21.2101221449480.14612@sstabellini-ThinkPad-T480s>
 <CAFnJQOqoqj6mWwR61ZsZj1JxRrdisFtH_87YXCeW619GM+L21Q@mail.gmail.com>
 <alpine.DEB.2.21.2101251646470.20638@sstabellini-ThinkPad-T480s>
 <CAFnJQOpuehAWde5Ta4ud9CGufwZ-K+=60epzSdKc_DnS75O2iA@mail.gmail.com>
 <alpine.DEB.2.21.2101261149210.2568@sstabellini-ThinkPad-T480s>
 <CAFnJQOpgRM-3_aZsnv36w+aQV=gMcBA18ZEw_-man7zmYb4O4Q@mail.gmail.com>
 <5a33e663-4a6d-6247-769a-8f14db4810f2@xen.org>
 <b9247831-335a-f791-1664-abed6b400a42@unikie.com>
 <c44d45ed-f03e-e901-4a46-0ce57504703f@xen.org>
 <alpine.DEB.2.21.2102011055080.29047@sstabellini-ThinkPad-T480s>
From: Jukka Kaartinen <jukka.kaartinen@unikie.com>
Message-ID: <6a0dab88-aede-f048-fb86-b2a786ac3674@unikie.com>
Date: Thu, 4 Feb 2021 08:58:15 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2102011055080.29047@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit



On 1.2.2021 21.25, Stefano Stabellini wrote:
> On Sat, 30 Jan 2021, Julien Grall wrote:
>>>> On 27/01/2021 11:47, Jukka Kaartinen wrote:
>>>>>
>>>>>
>>>>> On Tue, Jan 26, 2021 at 10:22 PM Stefano Stabellini
>>>>> <sstabellini@kernel.org <mailto:sstabellini@kernel.org>> wrote:
>>>>>
>>>>>    On Tue, 26 Jan 2021, Jukka Kaartinen wrote:
>>>>>    > On Tue, Jan 26, 2021 at 2:54 AM Stefano Stabellini
>>>>>    <sstabellini@kernel.org <mailto:sstabellini@kernel.org>> wrote:
>>>>>    >     On Sat, 23 Jan 2021, Jukka Kaartinen wrote:
>>>>>    >     > Thanks for the response!
>>>>>    >     >
>>>>>    >     > On Sat, Jan 23, 2021 at 2:27 AM Stefano Stabellini
>>>>>    <sstabellini@kernel.org <mailto:sstabellini@kernel.org>> wrote:
>>>>>    >     >     + xen-devel, Roman,
>>>>>    >     >
>>>>>    >     >
>>>>>    >     >     On Fri, 22 Jan 2021, Jukka Kaartinen wrote:
>>>>>    >     >     > Hi Stefano,
>>>>>    >     >     > I'm Jukka Kaartinen, a SW developer working on enabling
>>>>>    >     >     > hypervisors on mobile platforms. One of the HW platforms
>>>>>    >     >     > that we use in development is the Raspberry Pi 4B. I
>>>>>    >     >     > wonder if you could help me a bit :).
>>>>>    >     >     >
>>>>>    >     >     > I'm trying to enable the GPU with Xen + Raspberry Pi for
>>>>>    >     >     > dom0.
>>>>>    >     >     > https://www.raspberrypi.org/forums/viewtopic.php?f=63&t=232323#p1797605
>>>>>    >     >     >
>>>>>    >     >     > I got so far that the GPU drivers are loaded (v3d & vc4)
>>>>>    >     >     > without errors. But now Xen returns an error when X is
>>>>>    >     >     > starting:
>>>>>    >     >     > (XEN) traps.c:1986:d0v1 HSR=0x93880045 pc=0x00007f97b14e70 gva=0x7f7f817000 gpa=0x0000401315d000
>>>>>    >     >     > I tried to debug what causes this and it looks like
>>>>>    >     >     > find_mmio_handler cannot find the handler.
>>>>>    >     >     > (See more here:
>>>>>    >     >     > https://www.raspberrypi.org/forums/viewtopic.php?f=63&t=232323&start=25#p1801691 )
>>>>>    >     >     >
>>>>>    >     >     > Any ideas why the handler is not found?
>>>>>    >     >
>>>>>    >     >
>>>>>    >     >     Hi Jukka,
>>>>>    >     >
>>>>>    >     >     I am glad to hear that you are interested in Xen on
>>>>>    >     >     Raspberry Pi :-)  I haven't tried the GPU yet, I have been
>>>>>    >     >     using the serial only.
>>>>>    >     >     Roman, did you ever get the GPU working?
>>>>>    >     >
>>>>>    >     >
>>>>>    >     >     The error is a data abort: Linux is trying to access an
>>>>>    >     >     address which is not mapped to dom0. The address seems to
>>>>>    >     >     be 0x401315d000. It is a pretty high address; I looked in
>>>>>    >     >     the device tree but couldn't spot it.
>>>>>    >     >
>>>>>    >     >     From the HSR (the syndrome register) it looks like it is a
>>>>>    >     >     translation fault at EL1 on stage 1, as if the Linux
>>>>>    >     >     address mapping was wrong. Anyone have any ideas how this
>>>>>    >     >     could happen? Maybe a reserved-memory misconfiguration?
>>>>>    >     >
>>>>>    >     > I had issues with loading the driver in the first place.
>>>>>    >     > Apparently swiotlb is used; maybe it can cause this. I also
>>>>>    >     > tried to enable CMA.
>>>>>    >     > config.txt:
>>>>>    >     > dtoverlay=vc4-fkms-v3d,cma=320M@0x0-0x40000000
>>>>>    >     > gpu_mem=128
>>>>>    >
>>>>>    >     Also looking at your other reply and the implementation of
>>>>>    >     vc4_bo_create, it looks like this is a CMA problem.
>>>>>    >
>>>>>    >     It would be good to run a test with the swiotlb-xen disabled:
>>>>>    >
>>>>>    >     diff --git a/arch/arm/xen/mm.c b/arch/arm/xen/mm.c
>>>>>    >     index 467fa225c3d0..2bdd12785d14 100644
>>>>>    >     --- a/arch/arm/xen/mm.c
>>>>>    >     +++ b/arch/arm/xen/mm.c
>>>>>    >     @@ -138,8 +138,7 @@ void xen_destroy_contiguous_region(phys_addr_t pstart, unsigned int order)
>>>>>    >      static int __init xen_mm_init(void)
>>>>>    >      {
>>>>>    >              struct gnttab_cache_flush cflush;
>>>>>    >     -        if (!xen_initial_domain())
>>>>>    >     -                return 0;
>>>>>    >     +        return 0;
>>>>>    >              xen_swiotlb_init(1, false);
>>>>>    >
>>>>>    >              cflush.op = 0;
>>>>>    >
>>>>>    > With this change the kernel is not booting up. (BTW, I'm using a USB
>>>>>    > SSD for my OS.)
>>>>>    > [    0.071081] bcm2835-dma fe007000.dma: Unable to set DMA mask
>>>>>    > [    0.076277] bcm2835-dma fe007b00.dma: Unable to set DMA mask
>>>>>    > (XEN) physdev.c:16:d0v0 PHYSDEVOP cmd=25: not implemented
>>>>>    > (XEN) physdev.c:16:d0v0 PHYSDEVOP cmd=15: not implemented
>>>>>    > [    0.592695] pci 0000:00:00.0: Failed to add - passthrough or
>>>>>    > MSI/MSI-X might fail!
>>>>>    > (XEN) physdev.c:16:d0v0 PHYSDEVOP cmd=15: not implemented
>>>>>    > [    0.606819] pci 0000:01:00.0: Failed to add - passthrough or
>>>>>    > MSI/MSI-X might fail!
>>>>>    > [    1.212820] usb 1-1: device descriptor read/64, error 18
>>>>>    > [    1.452815] usb 1-1: device descriptor read/64, error 18
>>>>>    > [    1.820813] usb 1-1: device descriptor read/64, error 18
>>>>>    > [    2.060815] usb 1-1: device descriptor read/64, error 18
>>>>>    > [    2.845548] usb 1-1: device descriptor read/8, error -61
>>>>>    > [    2.977603] usb 1-1: device descriptor read/8, error -61
>>>>>    > [    3.237530] usb 1-1: device descriptor read/8, error -61
>>>>>    > [    3.369585] usb 1-1: device descriptor read/8, error -61
>>>>>    > [    3.480765] usb usb1-port1: unable to enumerate USB device
>>>>>    >
>>>>>    > Traces stop here. I could try with a memory card. Maybe it makes a
>>>>>    > difference.
>>>>>
>>>>>    This is very surprising. Disabling swiotlb-xen should make things
>>>>>    better, not worse. The only reason I can think of why it could make
>>>>>    things worse is if Linux runs out of low memory. Julien's patch
>>>>>    437b0aa06a014ce174e24c0d3530b3e9ab19b18b for Xen should have addressed
>>>>>    that issue though. Julien, any ideas?
>>>>
>>>> I think, Stefano's small patch is not enough to disable the swiotlb as we
>>>> will still override the DMA ops. You also likely want:
>>>>
>>>> diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
>>>> index 8a8949174b1c..aa43e249ecdd 100644
>>>> --- a/arch/arm/mm/dma-mapping.c
>>>> +++ b/arch/arm/mm/dma-mapping.c
>>>> @@ -2279,7 +2279,7 @@ void arch_setup_dma_ops(struct device *dev, u64
>>>> dma_base, u64 size,
>>>>         set_dma_ops(dev, dma_ops);
>>>>
>>>>  #ifdef CONFIG_XEN
>>>> -       if (xen_initial_domain())
>>>> +       if (0 || xen_initial_domain())
>>>>                 dev->dma_ops = &xen_swiotlb_dma_ops;
>>>>  #endif
>>>>         dev->archdata.dma_ops_setup = true;
>>>>
>>>> Otherwise, you would still use the swiotlb DMA ops that would not be
>>>> functional as we disabled the swiotlb.
>>>>
>>>> This would explain the following error because it will check whether the
>>>> mask is valid using the callback dma_supported():
>>>>
>>>> [    0.071081] bcm2835-dma fe007000.dma: Unable to set DMA mask
>>>>
>>> Good catch.
>>> GPU works now and I can start X! Thanks! I was also able to create domU that
>>> runs Raspian OS.
>>
>> Glad to hear it works! IIRC, the swiotlb may become necessary when running
>> guests if the guest memory ends up being used for a DMA transaction.
> 
> It is necessary if you are using PV network or PV disk: memory shared by
> another domU could end up being used in a DMA transaction. For that, you
> need swiotlb-xen.
> 
>   
>>> Now that swiotlb is disabled what does it mean?
>>
>> I can see two reasons:
>>    1) You have limited memory below the 30-bit mark, so swiotlb and CMA may
>> fight for the low memory.
>>    2) We found a few conversion bugs in the swiotlb on RPi4 last year (IIRC the
>> DMA and physical address may be different). I looked at the Linux branch you
>> are using and the fixes seem to all be there. So there might be another bug.
>>
>> I am not sure how to figure out where the problem is. Stefano, do you have a
>> suggestion where to start?
> 
> Both 1) and 2) are possible. It is also possible that another driver,
> probably something related to CMA or DRM, has some special dma_ops
> handling that doesn't work well together with swiotlb-xen.
> 
> Given that the original error seemed to be related to vc4_bo_create,
> which calls dma_alloc_wc, I would add a couple of printks to
> xen_swiotlb_alloc_coherent to help us figure it out:
> 
> 
> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> index 2b385c1b4a99..cac8b09af603 100644
> --- a/drivers/xen/swiotlb-xen.c
> +++ b/drivers/xen/swiotlb-xen.c
> @@ -295,6 +295,7 @@ xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
>   	/* Convert the size to actually allocated. */
>   	size = 1UL << (order + XEN_PAGE_SHIFT);
>   
> +	printk("DEBUG %s %d size=%lu flags=%lx attr=%lx\n",__func__,__LINE__,size,flags,attrs);
>   	/* On ARM this function returns an ioremap'ped virtual address for
>   	 * which virt_to_phys doesn't return the corresponding physical
>   	 * address. In fact on ARM virt_to_phys only works for kernel direct
> @@ -315,16 +316,20 @@ xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
>   	phys = dma_to_phys(hwdev, *dma_handle);
>   	dev_addr = xen_phys_to_dma(hwdev, phys);
>   	if (((dev_addr + size - 1 <= dma_mask)) &&
> -	    !range_straddles_page_boundary(phys, size))
> +	    !range_straddles_page_boundary(phys, size)) {
>   		*dma_handle = dev_addr;
> -	else {
> +		printk("DEBUG %s %d phys=%lx dma=%lx\n",__func__,__LINE__,phys,dev_addr);
> +	} else {
>   		if (xen_create_contiguous_region(phys, order,
>   						 fls64(dma_mask), dma_handle) != 0) {
> +			printk("DEBUG %s %d\n",__func__,__LINE__);
>   			xen_free_coherent_pages(hwdev, size, ret, (dma_addr_t)phys, attrs);
>   			return NULL;
>   		}
>   		*dma_handle = phys_to_dma(hwdev, *dma_handle);
>   		SetPageXenRemapped(virt_to_page(ret));
> +		printk("DEBUG %s %d dma_mask=%d page_boundary=%d phys=%lx dma=%lx\n",__func__,__LINE__,
> +			((dev_addr + size - 1 <= dma_mask)),range_straddles_page_boundary(phys, size),phys,*dma_handle);
>   	}
>   	memset(ret, 0, size);
>   	return ret;
> 
> 
> 
> 
>>> And also can I pass the GPU to domU? Raspberry Pi 4 is limited HW and
>>> doesn't have IOMMU. I'm trying to create similar OS like QubesOS where GPU,
>>> Network, keyboard/mouse, ... are isolated to their own VMs.
>>
>> Without an IOMMU or any other HW mechanism (e.g. MPU), it would not be safe
>> to assign a DMA-capable device to a non-trusted VM.
>>
>> If you trust the VM where you assigned a device, then a possible approach
>> would be to have the VM direct mapped (e.g. guest physical address == host
>> physical address). Although, I can foresee some issues if you have multiple
>> VMs requiring memory below 30 bits (there seems to be a limited amount).
>>
>> If you don't trust the VM where you assigned a device, then your best option
>> will be to expose a PV interface for the device and have your backend
>> sanitize the requests and issue them on behalf of the guest.
> 
> FYI you could do that with the existing PVFB drivers that only support 2D
> graphics.
> 

We really need direct HW access, so PVFB is not really an option. And at 
this point we can trust the VMs.


Am I missing something here? Is this the way to give a domU 
access to the memory?

# from dom0: cat /proc/iomem
# fe104000-fe104027 : fe104000.rng rng@7e104000

# To hide the above rng from dom0 I added these to the device tree, and the 
# above line disappeared from /proc/iomem.
boot2.scr:
fdt set /soc/rng@7e104000 xen,passthrough <0x1>
fdt set /soc/rng@7e104000 status disabled


domu.cfg:
iomem = [ 'fe104,1' ]

The domU starts, but I cannot see that address in the domU iomem range.
Also the device tree in the domU is quite empty.

Do I need something like:
device_tree = "rng.dtb"

like here:
https://lists.xenproject.org/archives/html/xen-devel/2018-01/msg02618.html


I tried to add
dtdev = [ "/soc/rng@7e104000" ]
but this gives the error:
"libxl: error: libxl_create.c:1107:libxl__domain_config_setdefault: 
passthrough not supported on this platform"

Will this be fixed if I generate the rng.dtb?


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 07:30:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 07:30:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81169.149447 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7Z5P-0001OM-SU; Thu, 04 Feb 2021 07:29:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81169.149447; Thu, 04 Feb 2021 07:29:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7Z5P-0001OF-Oc; Thu, 04 Feb 2021 07:29:55 +0000
Received: by outflank-mailman (input) for mailman id 81169;
 Thu, 04 Feb 2021 07:29:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VauU=HG=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1l7Z5O-0001OA-RC
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 07:29:54 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 37150ad6-5a02-4b42-ac84-c856080971dd;
 Thu, 04 Feb 2021 07:29:52 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id DD16068AFE; Thu,  4 Feb 2021 08:29:47 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 37150ad6-5a02-4b42-ac84-c856080971dd
Date: Thu, 4 Feb 2021 08:29:47 +0100
From: Christoph Hellwig <hch@lst.de>
To: Dongli Zhang <dongli.zhang@oracle.com>
Cc: dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	iommu@lists.linux-foundation.org, linux-mips@vger.kernel.org,
	linux-mmc@vger.kernel.org, linux-pci@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, nouveau@lists.freedesktop.org,
	x86@kernel.org, xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org, adrian.hunter@intel.com,
	akpm@linux-foundation.org, benh@kernel.crashing.org,
	bskeggs@redhat.com, bhelgaas@google.com, bp@alien8.de,
	boris.ostrovsky@oracle.com, hch@lst.de, chris@chris-wilson.co.uk,
	daniel@ffwll.ch, airlied@linux.ie, hpa@zytor.com, mingo@kernel.org,
	mingo@redhat.com, jani.nikula@linux.intel.com,
	joonas.lahtinen@linux.intel.com, jgross@suse.com,
	konrad.wilk@oracle.com, m.szyprowski@samsung.com,
	matthew.auld@intel.com, mpe@ellerman.id.au, rppt@kernel.org,
	paulus@samba.org, peterz@infradead.org, robin.murphy@arm.com,
	rodrigo.vivi@intel.com, sstabellini@kernel.org,
	bauerman@linux.ibm.com, tsbogend@alpha.franken.de,
	tglx@linutronix.de, ulf.hansson@linaro.org, joe.jin@oracle.com,
	thomas.lendacky@amd.com, Claire Chang <tientzu@chromium.org>
Subject: Re: [PATCH RFC v1 2/6] swiotlb: convert variables to arrays
Message-ID: <20210204072947.GA29812@lst.de>
References: <20210203233709.19819-1-dongli.zhang@oracle.com> <20210203233709.19819-3-dongli.zhang@oracle.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210203233709.19819-3-dongli.zhang@oracle.com>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Wed, Feb 03, 2021 at 03:37:05PM -0800, Dongli Zhang wrote:
> This patch converts several swiotlb related variables to arrays, in
> order to maintain stat/status for different swiotlb buffers. Here are
> variables involved:
> 
> - io_tlb_start and io_tlb_end
> - io_tlb_nslabs and io_tlb_used
> - io_tlb_list
> - io_tlb_index
> - max_segment
> - io_tlb_orig_addr
> - no_iotlb_memory
> 
> There is no functional change and this is to prepare to enable 64-bit
> swiotlb.

Claire Chang (on Cc) already posted a patch like this a month ago,
which looks much better because it actually uses a struct instead
of all the random variables. 


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 07:49:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 07:49:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81171.149458 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7ZNr-0003Gj-EL; Thu, 04 Feb 2021 07:48:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81171.149458; Thu, 04 Feb 2021 07:48:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7ZNr-0003Gc-BU; Thu, 04 Feb 2021 07:48:59 +0000
Received: by outflank-mailman (input) for mailman id 81171;
 Thu, 04 Feb 2021 07:48:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7ZNq-0003GU-F5; Thu, 04 Feb 2021 07:48:58 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7ZNq-0005ZW-5b; Thu, 04 Feb 2021 07:48:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7ZNp-0003nM-Tj; Thu, 04 Feb 2021 07:48:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l7ZNp-0001Wv-TG; Thu, 04 Feb 2021 07:48:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=tatG+T2RTWNDp7hIOu1y7jJXAx04+RbRdDof0VGm6qI=; b=MGERH/Qnkiwq0mYiDSYqYDxwyr
	7v+NRQfXYeaoUJtg3gAEDDtLDwILAvN1SxyfEqD0GJA6s7WlcyFAMc0+pLYKBmbGIzOcaOEaK3xJL
	AZJrTLyea7aIXOinzRft6BGS42YbJ9Biv4+y+5oGaNqTVyrOm5j3Cg1azYnXP4/pJVaI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158989-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 158989: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    qemu-mainline:test-arm64-arm64-xl-seattle:<job status>:broken:regression
    qemu-mainline:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    qemu-mainline:test-arm64-arm64-xl-seattle:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=99ae0cd90d3e41b424582cf74bcf32498ca81bb9
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 04 Feb 2021 07:48:57 +0000

flight 158989 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158989/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-seattle     <job status>                 broken
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152631
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 13 guest-start              fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-seattle   5 host-install(5)       broken blocked in 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                99ae0cd90d3e41b424582cf74bcf32498ca81bb9
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  167 days
Failing since        152659  2020-08-21 14:07:39 Z  166 days  337 attempts
Testing same since   158989  2021-02-03 18:44:17 Z    0 days    1 attempts

------------------------------------------------------------
373 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  broken  
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-arm64-arm64-xl-seattle broken
broken-step test-arm64-arm64-xl-seattle host-install(5)

Not pushing.

(No revision log; it would be 99469 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 08:40:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 08:40:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81176.149474 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7aBj-0001Np-Ea; Thu, 04 Feb 2021 08:40:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81176.149474; Thu, 04 Feb 2021 08:40:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7aBj-0001Ni-B5; Thu, 04 Feb 2021 08:40:31 +0000
Received: by outflank-mailman (input) for mailman id 81176;
 Thu, 04 Feb 2021 08:40:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VauU=HG=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1l7aBh-0001Nb-Sq
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 08:40:29 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ecce8bd5-4ca9-4aa1-90f8-edddb2f725e6;
 Thu, 04 Feb 2021 08:40:28 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 77E4067373; Thu,  4 Feb 2021 09:40:23 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ecce8bd5-4ca9-4aa1-90f8-edddb2f725e6
Date: Thu, 4 Feb 2021 09:40:23 +0100
From: Christoph Hellwig <hch@lst.de>
To: Dongli Zhang <dongli.zhang@oracle.com>
Cc: dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	iommu@lists.linux-foundation.org, linux-mips@vger.kernel.org,
	linux-mmc@vger.kernel.org, linux-pci@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, nouveau@lists.freedesktop.org,
	x86@kernel.org, xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org, adrian.hunter@intel.com,
	akpm@linux-foundation.org, benh@kernel.crashing.org,
	bskeggs@redhat.com, bhelgaas@google.com, bp@alien8.de,
	boris.ostrovsky@oracle.com, hch@lst.de, chris@chris-wilson.co.uk,
	daniel@ffwll.ch, airlied@linux.ie, hpa@zytor.com, mingo@kernel.org,
	mingo@redhat.com, jani.nikula@linux.intel.com,
	joonas.lahtinen@linux.intel.com, jgross@suse.com,
	konrad.wilk@oracle.com, m.szyprowski@samsung.com,
	matthew.auld@intel.com, mpe@ellerman.id.au, rppt@kernel.org,
	paulus@samba.org, peterz@infradead.org, robin.murphy@arm.com,
	rodrigo.vivi@intel.com, sstabellini@kernel.org,
	bauerman@linux.ibm.com, tsbogend@alpha.franken.de,
	tglx@linutronix.de, ulf.hansson@linaro.org, joe.jin@oracle.com,
	thomas.lendacky@amd.com
Subject: Re: [PATCH RFC v1 5/6] xen-swiotlb: convert variables to arrays
Message-ID: <20210204084023.GA32328@lst.de>
References: <20210203233709.19819-1-dongli.zhang@oracle.com> <20210203233709.19819-6-dongli.zhang@oracle.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210203233709.19819-6-dongli.zhang@oracle.com>
User-Agent: Mutt/1.5.17 (2007-11-01)

So one thing that has been on my mind for a while:  I'd really like
to kill the separate dma ops in Xen swiotlb.  If we compare xen-swiotlb
to swiotlb the main difference seems to be:

 - additional reasons to bounce I/O vs. the plain DMA capability check
 - the possibility to do a hypercall on arm/arm64
 - an extra translation layer before doing the phys_to_dma and vice
   versa
 - a special memory allocator

I wonder if, between a few jump labels or other no-overhead enablement
options and possibly better use of the dma_range_map, we could kill
off most of swiotlb-xen instead of maintaining all this code duplication?
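The "no overhead enablement" idea can be sketched in a self-contained way. In the kernel this would be a jump label (static_branch_unlikely()) patched to a NOP when Xen is inactive; here a plain bool stands in for it, and the function and parameter names are hypothetical:

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for a kernel jump label: when false, the Xen branch
 * would be patched out entirely and cost nothing per mapping. */
static bool xen_swiotlb_active;

/* Hypothetical bounce decision: the generic path gains one extra,
 * normally-disabled predicate instead of a whole duplicated set of
 * dma_map_ops for Xen. */
static bool needs_bounce(unsigned long long phys,
                         unsigned long long dma_limit)
{
        if (phys > dma_limit)        /* the plain DMA capability check */
                return true;
        if (xen_swiotlb_active)      /* extra Xen-only reason to bounce,
                                      * e.g. non-contiguous machine pages */
                return true;
        return false;
}
```

Under this shape, the arm/arm64 hypercall and the pfn/mfn translation layer would hide behind the same disabled-by-default branch, which is what would let most of swiotlb-xen's duplicated mapping code go away.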


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 08:45:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 08:45:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81180.149493 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7aGI-0001aa-3t; Thu, 04 Feb 2021 08:45:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81180.149493; Thu, 04 Feb 2021 08:45:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7aGI-0001aT-03; Thu, 04 Feb 2021 08:45:14 +0000
Received: by outflank-mailman (input) for mailman id 81180;
 Thu, 04 Feb 2021 08:45:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SI1E=HG=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1l7aGG-0001aO-Lh
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 08:45:12 +0000
Received: from mail-wr1-x431.google.com (unknown [2a00:1450:4864:20::431])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5ba4fb6e-db36-4ec8-b43a-72aad967e217;
 Thu, 04 Feb 2021 08:45:11 +0000 (UTC)
Received: by mail-wr1-x431.google.com with SMTP id c12so2458370wrc.7
 for <xen-devel@lists.xenproject.org>; Thu, 04 Feb 2021 00:45:11 -0800 (PST)
Received: from CBGR90WXYV0 (host86-190-149-163.range86-190.btcentralplus.com.
 [86.190.149.163])
 by smtp.gmail.com with ESMTPSA id q9sm6050638wme.18.2021.02.04.00.45.09
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 04 Feb 2021 00:45:10 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5ba4fb6e-db36-4ec8-b43a-72aad967e217
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:thread-index
         :content-language;
        bh=mg7W0VLw0FK4HNewxQamfD32hSX90m7wUkSk0nDAHMY=;
        b=IuGP2Uj2Kw+lpz2wE7Ay44A6PGGuhRjkQlQ1eA/TZFGLycY60Gz8+EZQzEsZ/whyvl
         rnw0HUr2RoE5bYhGKddYKOSNV8STqa+parCcrasVmaGl/jLERAvyRMBLXX767rBp0oay
         /NiYxiyuJwmUddcQflYhSjDrfkEtiMAs5CUBnp5R/SY2iVYzO+dER17CeZUxINEkdj9s
         QCUUOabRb0ZCKaMF+Xxpqvxcs6jbc6M59MYOlkMJ6gBVJF3Bx7WBgQyFpjs5OuIkt2s0
         fJ+z0kvnTo1BZbK2DgZ0gFvYTReqUk3+K0F9k/NfTTPEE226CxcOYqMDVwVO75zPHUcj
         QtlA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :thread-index:content-language;
        bh=mg7W0VLw0FK4HNewxQamfD32hSX90m7wUkSk0nDAHMY=;
        b=DA/CXQ7UKbcrweECCMOs1jMYqOT/4GaR0Ye6+9JNIMCl3qoXYAcNL08Iis4zZeIbka
         vfm+EUD6shnLdCoiuZkgXKVPuLEOQfpXqMwCJBoj38CiW0FL2vFq+QMABt/I5fpAsjZ5
         ffyjoggzhWgJiL3JiqkUWZlOqtLairHnSZpkATxN5mOv5t51MhNZU6ucgecQBDn6YXGG
         01clMVjlHcI/ymtPp1PDxUPyFjOgTxlW8Dij0d3rBr7ggUzk88bq1PCNeGBjYabSjmaA
         aQydf2arrgV/Q71EqZ3zyaalMP9Ts/0yCHLVy5O5XxWWD7a957qqMw00uke6unHbLgHE
         bN7g==
X-Gm-Message-State: AOAM5333ESPOQijWD1ef5QVK/2CfD4e0PM+hioMVtU7xBWgLoAPby+wD
	5swg7P+oZYpXeTO7I+V02jE=
X-Google-Smtp-Source: ABdhPJzHeeEWrCe/8QCs9u/g9R21SuE0grv7LBCKKxSSdLPWkozsnpZRxQ7oLr+VvNmZDDsv7xkQqw==
X-Received: by 2002:a5d:6686:: with SMTP id l6mr7976648wru.236.1612428311037;
        Thu, 04 Feb 2021 00:45:11 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>,
	<xen-devel@lists.xenproject.org>
Cc: "'George Dunlap'" <george.dunlap@citrix.com>
References: <0e7265fe-8d89-facb-790d-9232c742c3fa@suse.com> <3365a9a1-92c0-8917-1632-b88f1c055392@suse.com>
In-Reply-To: <3365a9a1-92c0-8917-1632-b88f1c055392@suse.com>
Subject: RE: [PATCH v2 1/2] IOREQ: fix waiting for broadcast completion
Date: Thu, 4 Feb 2021 08:45:09 -0000
Message-ID: <03f401d6fad2$0bbb9fd0$2332df70$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQHgg/zPp1o/mkuzupLKGuAOuFa3bwFnX6eiqilg1nA=
Content-Language: en-gb

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 02 February 2021 15:14
> To: xen-devel@lists.xenproject.org
> Cc: Paul Durrant <paul@xen.org>; George Dunlap <george.dunlap@citrix.com>
> Subject: [PATCH v2 1/2] IOREQ: fix waiting for broadcast completion
> 
> Checking just a single server is not enough - all of them must have
> signaled that they're done processing the request.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Paul Durrant <paul@xen.org>

> ---
> v2: New.
> 
> --- a/xen/common/ioreq.c
> +++ b/xen/common/ioreq.c
> @@ -213,9 +213,9 @@ bool vcpu_ioreq_handle_completion(struct
>          return false;
>      }
> 
> -    sv = get_pending_vcpu(v, &s);
> -    if ( sv && !wait_for_io(sv, get_ioreq(s, v)) )
> -        return false;
> +    while ( (sv = get_pending_vcpu(v, &s)) != NULL )
> +        if ( !wait_for_io(sv, get_ioreq(s, v)) )
> +            return false;
> 
>      vio->req.state = ioreq_needs_completion(&vio->req) ?
>          STATE_IORESP_READY : STATE_IOREQ_NONE;

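The semantics of the fix above can be modeled with a small self-contained sketch (names are stand-ins, and the wait is simplified to always succeed): with a broadcast ioreq, several servers may be pending at once, so completion must loop until no pending server remains rather than checking only the first one found, as the old code did.

```c
#include <assert.h>

#define NR_SERVERS 3

static int pending[NR_SERVERS];

/* Stand-in for get_pending_vcpu(): return the id of any server that
 * has not yet signaled completion, or -1 if none remain. */
static int get_pending_server(void)
{
        int i;

        for (i = 0; i < NR_SERVERS; i++)
                if (pending[i])
                        return i;
        return -1;
}

/* Stand-in for wait_for_io(): simplified to always succeed. */
static int wait_for_io(int id)
{
        pending[id] = 0;
        return 1;
}

/* Mirrors the patched loop: keep re-querying until every server has
 * completed, instead of waiting on just the first pending one. */
static int handle_completion(void)
{
        int id;

        while ((id = get_pending_server()) >= 0)
                if (!wait_for_io(id))
                        return 0;
        return 1;
}
```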



From xen-devel-bounces@lists.xenproject.org Thu Feb 04 08:46:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 08:46:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81182.149507 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7aHk-0001i6-Fo; Thu, 04 Feb 2021 08:46:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81182.149507; Thu, 04 Feb 2021 08:46:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7aHk-0001hz-Cq; Thu, 04 Feb 2021 08:46:44 +0000
Received: by outflank-mailman (input) for mailman id 81182;
 Thu, 04 Feb 2021 08:46:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gM/o=HG=dingwall.me.uk=james@srs-us1.protection.inumbo.net>)
 id 1l7aHj-0001hu-Dq
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 08:46:43 +0000
Date: Thu, 4 Feb 2021 08:46:36 +0000
From: James Dingwall <james@dingwall.me.uk>
To: Jan Beulich <jbeulich@suse.com>
Cc: James Dingwall <james-xen@dingwall.me.uk>,
	xen-devel@lists.xenproject.org
Subject: Re: VIRIDIAN CRASH: 3b c0000096 75b12c5 9e7f1580 0
Message-ID: <20210204084636.GA3781256@dingwall.me.uk>
References: <20210201152655.GA3922797@dingwall.me.uk>
 <d30b5ee3-1fd9-a64b-1d9a-f79b6b333169@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <d30b5ee3-1fd9-a64b-1d9a-f79b6b333169@suse.com>

Hi Jan,

Thank you for your reply.

On Wed, Feb 03, 2021 at 03:55:07PM +0100, Jan Beulich wrote:
> On 01.02.2021 16:26, James Dingwall wrote:
> > I am building the xen 4.11 branch at 
> > 310ab79875cb705cc2c7daddff412b5a4899f8c9 which includes commit 
> > 3b5de119f0399cbe745502cb6ebd5e6633cc139c "x86/msr: fix handling of 
> > MSR_IA32_PERF_{STATUS/CTL}".  I think this should address this error 
> > recorded in xen's dmesg:
> > 
> > (XEN) d11v0 VIRIDIAN CRASH: 3b c0000096 75b12c5 9e7f1580 0
> 
> It seems to me that you imply some information here which might
> better be spelled out. As it stands I do not see the immediate
> connection between the cited commit and the crash. C0000096 is
> STATUS_PRIVILEGED_INSTRUCTION, which to me ought to be impossible
> for code running in ring 0. Of course I may simply not know enough
> about modern Windows' internals to understand the connection.

Searching for "VIRIDIAN CRASH: 3b" led me to this thread and then to the commit based on the commit log message.

https://patchwork.kernel.org/project/xen-devel/patch/20201007102032.98565-1-roger.pau@citrix.com/

I have naively assumed that the RCX register indicated MSR_IA32_PERF_CTL based on:

#define MSR_IA32_PERF_CTL             0x00000199

I've added this patch:

diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index 99c848ff41..7a764907d5 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -232,12 +232,16 @@ int guest_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
          */
     case MSR_IA32_PERF_STATUS:
     case MSR_IA32_PERF_CTL:
-        if ( !(cp->x86_vendor & (X86_VENDOR_INTEL | X86_VENDOR_CENTAUR)) )
+        if ( !(cp->x86_vendor & (X86_VENDOR_INTEL | X86_VENDOR_CENTAUR)) ) {
+            printk(KERN_DEBUG "JKD: MSR %#x FAULT1: %#x & %#x\n", msr, cp->x86_vendor, (X86_VENDOR_INTEL | X86_VENDOR_CENTAUR));
+
             goto gp_fault;
+        }
 
         *val = 0;
         if ( likely(!is_cpufreq_controller(d)) || rdmsr_safe(msr, *val) == 0 )
             break;
+        printk(KERN_DEBUG "JKD: MSR FAULT2\n");
         goto gp_fault;
 
         /*

and now in the hypervisor log when the domain crashes:

(XEN) JKD: MSR 0x199 FAULT1: 0 & 0x2
(XEN) d11v0 VIRIDIAN CRASH: 3b c0000096 1146d2c5 6346d580 0
(XEN) avc:  denied  { reset } for domid=11 scontext=system_u:system_r:domU_t tcontext=system_u:system_r:domU_t_self tclass=event

I'm not sure what is expected in cp->x86_vendor but this is running on an Intel CPU so I would have thought 0x1 based on

#define X86_VENDOR_INTEL (1 << 0)

I have also booted with flask=disabled to eliminate the reported avc denial as the cause.

> 
> > I have removed `viridian = [..]` from the xen config but still get this 
> > reliably when launching PassMark Performance Test and it is collecting 
> > CPU information.
> > 
> > This is recorded in the domain qemu-dm log:
> > 
> > 21244@1612191983.279616:xen_platform_log xen platform: XEN|BUGCHECK: ====>
> > 21244@1612191983.279819:xen_platform_log xen platform: XEN|BUGCHECK: SYSTEM_SERVICE_EXCEPTION: 00000000C0000096 FFFFF800A43C72C5 FFFFD0014343D580 0000000000000000
> > 21244@1612191983.279959:xen_platform_log xen platform: XEN|BUGCHECK: EXCEPTION (FFFFF800A43C72C5):
> > 21244@1612191983.280075:xen_platform_log xen platform: XEN|BUGCHECK: - Code = C148320F
> > 21244@1612191983.280205:xen_platform_log xen platform: XEN|BUGCHECK: - Flags = 0B4820E2
> > 21244@1612191983.280346:xen_platform_log xen platform: XEN|BUGCHECK: - Address = 0000A824948D4800
> > 21244@1612191983.280504:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[0] = 8B00000769850F07
> > 21244@1612191983.280633:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[1] = 46B70F4024448906
> > 21244@1612191983.280754:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[2] = 0F44442444896604
> > 21244@1612191983.280876:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[3] = E983C88B410646B6
> > 21244@1612191983.281012:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[4] = 0D7401E9831E7401
> > 21244@1612191983.281172:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[5] = 54B70F217502F983
> > 21244@1612191983.281304:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[6] = 54B70F15EBED4024
> > 21244@1612191983.281426:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[7] = EBC0B70FED664024
> > 21244@1612191983.281547:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[8] = 0FEC402454B70F09
> > 21244@1612191983.281668:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[9] = 448B42244489C0B6
> > 21244@1612191983.281809:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[10] = 2444B70F06894024
> > 21244@1612191983.281932:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[11] = 4688440446896644
> > 21244@1612191983.282052:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[12] = 0000073846C74906
> > 21244@1612191983.282185:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[13] = F8830000070AE900
> > 21244@1612191983.282340:xen_platform_log xen platform: XEN|BUGCHECK: - Parameter[14] = 8B000006F9850F07
> > 21244@1612191983.282480:xen_platform_log xen platform: XEN|BUGCHECK: EXCEPTION (0000A824848948C2):
> > 21244@1612191983.282617:xen_platform_log xen platform: XEN|BUGCHECK: CONTEXT (FFFFD0014343D580):
> > 21244@1612191983.282717:xen_platform_log xen platform: XEN|BUGCHECK: - GS = 002B
> > 21244@1612191983.282816:xen_platform_log xen platform: XEN|BUGCHECK: - FS = 0053
> > 21244@1612191983.282914:xen_platform_log xen platform: XEN|BUGCHECK: - ES = 002B
> > 21244@1612191983.283011:xen_platform_log xen platform: XEN|BUGCHECK: - DS = 002B
> > 21244@1612191983.283127:xen_platform_log xen platform: XEN|BUGCHECK: - SS = 0018
> > 21244@1612191983.283226:xen_platform_log xen platform: XEN|BUGCHECK: - CS = 0010
> > 21244@1612191983.283332:xen_platform_log xen platform: XEN|BUGCHECK: - EFLAGS = 00000202
> > 21244@1612191983.283444:xen_platform_log xen platform: XEN|BUGCHECK: - RDI = 00000000F64D5C20
> > 21244@1612191983.283555:xen_platform_log xen platform: XEN|BUGCHECK: - RSI = 00000000F6367280
> > 21244@1612191983.283666:xen_platform_log xen platform: XEN|BUGCHECK: - RBX = 000000008011E060
> > 21244@1612191983.283810:xen_platform_log xen platform: XEN|BUGCHECK: - RDX = 00000000F64D5C20
> > 21244@1612191983.283972:xen_platform_log xen platform: XEN|BUGCHECK: - RCX = 0000000000000199
> > 21244@1612191983.284350:xen_platform_log xen platform: XEN|BUGCHECK: - RAX = 0000000000000004
> > 21244@1612191983.284523:xen_platform_log xen platform: XEN|BUGCHECK: - RBP = 000000004343E891
> > 21244@1612191983.284658:xen_platform_log xen platform: XEN|BUGCHECK: - RIP = 00000000A43C72C5
> > 21244@1612191983.284842:xen_platform_log xen platform: XEN|BUGCHECK: - RSP = 000000004343DFA0
> > 21244@1612191983.284959:xen_platform_log xen platform: XEN|BUGCHECK: - R8 = 0000000000000008
> > 21244@1612191983.285073:xen_platform_log xen platform: XEN|BUGCHECK: - R9 = 000000000000000E
> > 21244@1612191983.285188:xen_platform_log xen platform: XEN|BUGCHECK: - R10 = 0000000000000002
> > 21244@1612191983.285304:xen_platform_log xen platform: XEN|BUGCHECK: - R11 = 000000004343E808
> > 21244@1612191983.285420:xen_platform_log xen platform: XEN|BUGCHECK: - R12 = 0000000000000000
> > 21244@1612191983.285564:xen_platform_log xen platform: XEN|BUGCHECK: - R13 = 00000000F7964E50
> > 21244@1612191983.285680:xen_platform_log xen platform: XEN|BUGCHECK: - R14 = 00000000F64D5C20
> > 21244@1612191983.285796:xen_platform_log xen platform: XEN|BUGCHECK: - R15 = 00000000F7964E50
> 
> I'm also confused by this - the pointer given for CONTEXT suggests this
> is a 64-bit kernel, yet none of the registers - including RIP and RSP -
> have non-zero upper 32 bits. Or is qemu truncating these values?
> 
> > 21244@1612191983.285888:xen_platform_log xen platform: XEN|BUGCHECK: STACK:
> > 21244@1612191983.286105:xen_platform_log xen platform: XEN|BUGCHECK: 000000004343E810: (0000000000000000 000000004343E891 0000000000000002 00000000F75F08A0) ntoskrnl.exe + 0000000000485507
> > 21244@1612191983.286340:xen_platform_log xen platform: XEN|BUGCHECK: 000000004343E8E0: (00000000F75F0805 000000004343EB80 00000000F6A62CC0 00000000F75F08A0) ntoskrnl.exe + 0000000000486468
> > 21244@1612191983.286547:xen_platform_log xen platform: XEN|BUGCHECK: 000000004343EA20: (0000000000000000 0000000000000000 0000000000000000 0000000000000000) ntoskrnl.exe + 0000000000458CAE
> > 21244@1612191983.286755:xen_platform_log xen platform: XEN|BUGCHECK: 000000004343EA90: (0000000000000000 0000000000000000 000000007DBED000 000000007DA00028) ntoskrnl.exe + 00000000001501A3
> > 21244@1612191983.286976:xen_platform_log xen platform: XEN|BUGCHECK: 0000000009ABE388: (00000000587D5673 0000000058F40000 0000000006002D2B 0000000000000000) 00007FFB5B3207CA
> > 21244@1612191983.287171:xen_platform_log xen platform: XEN|BUGCHECK: 0000000009ABE390: (0000000058F40000 0000000006002D2B 0000000000000000 00000000160C86D8) 00007FFB587D5673
> > 21244@1612191983.287390:xen_platform_log xen platform: XEN|BUGCHECK: 0000000009ABE398: (0000000006002D2B 0000000000000000 00000000160C86D8 0000000009ABE3E0) 00007FFB58F40000
> > 21244@1612191983.287584:xen_platform_log xen platform: XEN|BUGCHECK: 0000000009ABE3A0: (0000000000000000 00000000160C86D8 0000000009ABE3E0 000000008011E060) 00007FFB06002D2B
> > 21244@1612191983.287777:xen_platform_log xen platform: XEN|BUGCHECK: 0000000009ABE3A8: (00000000160C86D8 0000000009ABE3E0 000000008011E060 0000000009ABE4A0) 0000000000000000
> > 21244@1612191983.287898:xen_platform_log xen platform: XEN|BUGCHECK: <====
> > 
> > The Windows guest is running winpv drivers 8.2.1.
> > 
> > I'm not quite sure what else to examine or change at this point so any 
> > guidance would be welcome.
> 
> The hypervisor log (at maximum log levels) accompanying this might
> help some. And of course, if possible, trying on a newer Xen (ideally
> current master).

We have a separate upgrade to 4.14.1 in progress and I will test on that too.


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 08:52:30 2021
Subject: Re: VIRIDIAN CRASH: 3b c0000096 75b12c5 9e7f1580 0
To: James Dingwall <james@dingwall.me.uk>
Cc: James Dingwall <james-xen@dingwall.me.uk>, xen-devel@lists.xenproject.org
References: <20210201152655.GA3922797@dingwall.me.uk>
 <d30b5ee3-1fd9-a64b-1d9a-f79b6b333169@suse.com>
 <20210204084636.GA3781256@dingwall.me.uk>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <fc022dd1-a5cc-62d3-49ae-bd24f22fe83a@suse.com>
Date: Thu, 4 Feb 2021 09:52:12 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <20210204084636.GA3781256@dingwall.me.uk>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 04.02.2021 09:46, James Dingwall wrote:
> On Wed, Feb 03, 2021 at 03:55:07PM +0100, Jan Beulich wrote:
>> On 01.02.2021 16:26, James Dingwall wrote:
>>> I am building the xen 4.11 branch at 
>>> 310ab79875cb705cc2c7daddff412b5a4899f8c9 which includes commit 
>>> 3b5de119f0399cbe745502cb6ebd5e6633cc139c "x86/msr: fix handling of 
>>> MSR_IA32_PERF_{STATUS/CTL}".  I think this should address this error 
>>> recorded in xen's dmesg:
>>>
>>> (XEN) d11v0 VIRIDIAN CRASH: 3b c0000096 75b12c5 9e7f1580 0
>>
>> It seems to me that you imply some information here which might
>> better be spelled out. As it stands I do not see the immediate
>> connection between the cited commit and the crash. C0000096 is
>> STATUS_PRIVILEGED_INSTRUCTION, which to me ought to be impossible
>> for code running in ring 0. Of course I may simply not know enough
>> about modern Windows' internals to understand the connection.
> 
> Searching for "VIRIDIAN CRASH: 3b" led me to this thread and then to the commit based on the commit log message.
> 
> https://patchwork.kernel.org/project/xen-devel/patch/20201007102032.98565-1-roger.pau@citrix.com/
> 
> I have naively assumed that the RCX register indicated MSR_IA32_PERF_CTL based on:
> 
> #define MSR_IA32_PERF_CTL             0x00000199
> 
> I've added this patch:
> 
> diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
> index 99c848ff41..7a764907d5 100644
> --- a/xen/arch/x86/msr.c
> +++ b/xen/arch/x86/msr.c
> @@ -232,12 +232,16 @@ int guest_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
>           */
>      case MSR_IA32_PERF_STATUS:
>      case MSR_IA32_PERF_CTL:
> -        if ( !(cp->x86_vendor & (X86_VENDOR_INTEL | X86_VENDOR_CENTAUR)) )
> +        if ( !(cp->x86_vendor & (X86_VENDOR_INTEL | X86_VENDOR_CENTAUR)) ) {
> +            printk(KERN_DEBUG "JKD: MSR %#x FAULT1: %#x & %#x\n", msr, cp->x86_vendor, (X86_VENDOR_INTEL | X86_VENDOR_CENTAUR));
> +
>              goto gp_fault;
> +        }
>  
>          *val = 0;
>          if ( likely(!is_cpufreq_controller(d)) || rdmsr_safe(msr, *val) == 0 )
>              break;
> +        printk(KERN_DEBUG "JKD: MSR FAULT2\n");
>          goto gp_fault;
>  
>          /*
> 
> and now in the hypervisor log when the domain crashes:
> 
> (XEN) JKD: MSR 0x199 FAULT1: 0 & 0x2
> (XEN) d11v0 VIRIDIAN CRASH: 3b c0000096 1146d2c5 6346d580 0
> (XEN) avc:  denied  { reset } for domid=11 scontext=system_u:system_r:domU_t tcontext=system_u:system_r:domU_t_self tclass=event
> 
> I'm not sure what is expected in cp->x86_vendor but this is running on an Intel CPU so I would have thought 0x1 based on
> 
> #define X86_VENDOR_INTEL (1 << 0)

This is the problem - a bad backport. Therefore ...

>> The hypervisor log (at maximum log levels) accompanying this might
>> help some. And of course, if possible, trying on a newer Xen (ideally
>> current master).
> 
> We have a separate upgrade to 4.14.1 in progress and I will test on that too.

I'm sure you'll find this to work there. I'll make a patch for the affected
older tree(s).

Jan


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 09:26:16 2021
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>,
	<xen-devel@lists.xenproject.org>
Cc: "'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	"'Wei Liu'" <wl@xen.org>,
	=?utf-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>,
	"'Julien Grall'" <julien@xen.org>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>,
	"'George Dunlap'" <george.dunlap@citrix.com>
References: <0e7265fe-8d89-facb-790d-9232c742c3fa@suse.com> <e2682f84-b3bb-b9fb-edd8-863b9036de95@suse.com>
In-Reply-To: <e2682f84-b3bb-b9fb-edd8-863b9036de95@suse.com>
Subject: RE: [PATCH v2 2/2] IOREQ: refine when to send mapcache invalidation request
Date: Thu, 4 Feb 2021 09:26:04 -0000
Message-ID: <03fb01d6fad7$c39087b0$4ab19710$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Language: en-gb



> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 02 February 2021 15:15
> To: xen-devel@lists.xenproject.org
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>; Wei Liu <wl@xen.org>; Roger Pau Monné
> <roger.pau@citrix.com>; Paul Durrant <paul@xen.org>; Julien Grall <julien@xen.org>; Stefano Stabellini
> <sstabellini@kernel.org>; George Dunlap <george.dunlap@citrix.com>
> Subject: [PATCH v2 2/2] IOREQ: refine when to send mapcache invalidation request
>
> XENMEM_decrease_reservation isn't the only means by which pages can get
> removed from a guest, yet all removals ought to be signaled to qemu. Put
> setting of the flag into the central p2m_remove_page() underlying all
> respective hypercalls as well as a few similar places, mainly in PoD
> code.
>
> Additionally there's no point sending the request for the local domain
> when the domain acted upon is a different one. The latter domain's ioreq
> server mapcaches need invalidating. We assume that domain to be paused
> at the point the operation takes place, so sending the request in this
> case happens from the hvm_do_resume() path, which as one of its first
> steps calls handle_hvm_io_completion().
>
> Even without the remote operation aspect a single domain-wide flag
> doesn't do: Guests may e.g. decrease-reservation on multiple vCPU-s in
> parallel. Each of them needs to issue an invalidation request in due
> course, in particular because exiting to guest context should not happen
> before the request was actually seen by (all) the emulator(s).
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v2: Preemption related adjustment split off. Make flag per-vCPU. More
>     places to set the flag. Also handle acting on a remote domain.
>     Re-base.

I'm wondering if a per-vcpu flag is overkill actually. We just need to
make sure that we don't miss sending an invalidation where multiple
vcpus are in play. The mapcache in the emulator is global so issuing an
invalidate for every vcpu is going to cause an unnecessary storm of
ioreqs, isn't it? Could we get away with the per-domain atomic counter?

  Paul



From xen-devel-bounces@lists.xenproject.org Thu Feb 04 09:36:14 2021
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 James Dingwall <james-xen@dingwall.me.uk>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH for-4.12 and older] x86/msr: fix handling of
 MSR_IA32_PERF_{STATUS/CTL} (again)
Message-ID: <1f4a8233-7f7b-dbd9-26f5-69e3eb659fa2@suse.com>
Date: Thu, 4 Feb 2021 10:36:06 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

X86_VENDOR_* aren't bit masks in the older trees.

Reported-by: James Dingwall <james@dingwall.me.uk>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -226,7 +226,8 @@ int guest_rdmsr(const struct vcpu *v, ui
          */
     case MSR_IA32_PERF_STATUS:
     case MSR_IA32_PERF_CTL:
-        if ( !(cp->x86_vendor & (X86_VENDOR_INTEL | X86_VENDOR_CENTAUR)) )
+        if ( cp->x86_vendor != X86_VENDOR_INTEL &&
+             cp->x86_vendor != X86_VENDOR_CENTAUR )
             goto gp_fault;
 
         *val = 0;


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 09:38:58 2021
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Roger Pau Monne <roger.pau@citrix.com>, Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>, Juergen Gross <jgross@suse.com>
Subject: [PATCH for-4.15] autoconf: check endian.h include path
Date: Thu,  4 Feb 2021 10:38:33 +0100
Message-ID: <20210204093833.91190-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.29.2
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: PAZP264CA0001.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:102:21::6) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 5c2d7a57-23ee-4c44-13a7-08d8c8f0aa34
X-MS-TrafficTypeDiagnostic: DM6PR03MB3739:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB37398CB95A0CD2397D0F9AD78FB39@DM6PR03MB3739.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 5c2d7a57-23ee-4c44-13a7-08d8c8f0aa34
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Feb 2021 09:38:44.8530
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: C9CX/GT8dsPLqQFx/Wm5iGP2lhtsS0SwXVHc3bQhOduOP2r8oSbCmqq3GmFwchZknexT9cNIepP1aRX0fkVGsQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB3739
X-OriginatorOrg: citrix.com

Introduce an autoconf macro to check for the include path of headers
that can live in different locations across OSes.

Use this macro to find the correct path for the endian.h header, and
modify the users of endian.h to consume the result of that check.

Suggested-by: Ian Jackson <iwj@xenproject.org>
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Please re-run autogen after applying.

The biggest risk for this would be some kind of configure or build
failure, and we should be able to catch it in either osstest or the
gitlab build tests.
---
 m4/header.m4                                      | 13 +++++++++++++
 tools/configure.ac                                |  3 +++
 tools/libs/guest/xg_dom_decompress_unsafe_lzo1x.c |  2 +-
 tools/libs/guest/xg_dom_decompress_unsafe_xz.c    |  2 +-
 tools/libs/guest/xg_dom_decompress_unsafe_zstd.c  |  2 +-
 tools/xenstore/include/xenstore_state.h           |  6 +-----
 6 files changed, 20 insertions(+), 8 deletions(-)
 create mode 100644 m4/header.m4

diff --git a/m4/header.m4 b/m4/header.m4
new file mode 100644
index 0000000000..81d1d65194
--- /dev/null
+++ b/m4/header.m4
@@ -0,0 +1,13 @@
+AC_DEFUN([AX_FIND_HEADER], [
+ax_found=0
+m4_foreach_w([header], $2, [
+    AS_IF([test "$ax_found" = "0"], [
+        AC_CHECK_HEADER(header, [
+            AC_DEFINE($1, [<header>], [Header path for $1])
+            ax_found=1])
+    ])
+])
+AS_IF([test "$ax_found" = "0"], [
+    AC_MSG_ERROR([No header found from list $2])
+])
+])
diff --git a/tools/configure.ac b/tools/configure.ac
index 5b328700e0..3a3e7b4b2b 100644
--- a/tools/configure.ac
+++ b/tools/configure.ac
@@ -74,6 +74,7 @@ m4_include([../m4/ax_compare_version.m4])
 m4_include([../m4/paths.m4])
 m4_include([../m4/systemd.m4])
 m4_include([../m4/golang.m4])
+m4_include([../m4/header.m4])
 
 AX_XEN_EXPAND_CONFIG()
 
@@ -517,4 +518,6 @@ AC_ARG_ENABLE([pvshim],
 ])
 AC_SUBST(pvshim)
 
+AX_FIND_HEADER([INCLUDE_ENDIAN_H], [endian.h sys/endian.h])
+
 AC_OUTPUT()
diff --git a/tools/libs/guest/xg_dom_decompress_unsafe_lzo1x.c b/tools/libs/guest/xg_dom_decompress_unsafe_lzo1x.c
index a4f8ebd42d..e58c1b95ed 100644
--- a/tools/libs/guest/xg_dom_decompress_unsafe_lzo1x.c
+++ b/tools/libs/guest/xg_dom_decompress_unsafe_lzo1x.c
@@ -1,7 +1,7 @@
 #include <stdio.h>
 #include <stdlib.h>
 #include <inttypes.h>
-#include <endian.h>
+#include INCLUDE_ENDIAN_H
 #include <stdint.h>
 
 #include "xg_private.h"
diff --git a/tools/libs/guest/xg_dom_decompress_unsafe_xz.c b/tools/libs/guest/xg_dom_decompress_unsafe_xz.c
index ff6824b38d..fc48198741 100644
--- a/tools/libs/guest/xg_dom_decompress_unsafe_xz.c
+++ b/tools/libs/guest/xg_dom_decompress_unsafe_xz.c
@@ -1,5 +1,5 @@
 #include <stdio.h>
-#include <endian.h>
+#include INCLUDE_ENDIAN_H
 #include <stdlib.h>
 #include <stddef.h>
 #include <stdint.h>
diff --git a/tools/libs/guest/xg_dom_decompress_unsafe_zstd.c b/tools/libs/guest/xg_dom_decompress_unsafe_zstd.c
index 52558d2ffc..01eafaaaa6 100644
--- a/tools/libs/guest/xg_dom_decompress_unsafe_zstd.c
+++ b/tools/libs/guest/xg_dom_decompress_unsafe_zstd.c
@@ -1,5 +1,5 @@
 #include <stdio.h>
-#include <endian.h>
+#include INCLUDE_ENDIAN_H
 #include <stdlib.h>
 #include <stddef.h>
 #include <stdint.h>
diff --git a/tools/xenstore/include/xenstore_state.h b/tools/xenstore/include/xenstore_state.h
index f7e4da2b2c..ae0d053c8f 100644
--- a/tools/xenstore/include/xenstore_state.h
+++ b/tools/xenstore/include/xenstore_state.h
@@ -21,11 +21,7 @@
 #ifndef XENSTORE_STATE_H
 #define XENSTORE_STATE_H
 
-#if defined(__FreeBSD__) || defined(__NetBSD__)
-#include <sys/endian.h>
-#else
-#include <endian.h>
-#endif
+#include INCLUDE_ENDIAN_H
 #include <sys/types.h>
 
 #ifndef htobe32
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Thu Feb 04 09:41:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 09:41:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81195.149568 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7b8X-0008Hk-Mv; Thu, 04 Feb 2021 09:41:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81195.149568; Thu, 04 Feb 2021 09:41:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7b8X-0008Hd-Jx; Thu, 04 Feb 2021 09:41:17 +0000
Received: by outflank-mailman (input) for mailman id 81195;
 Thu, 04 Feb 2021 09:41:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YP8j=HG=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1l7b8W-0008HY-RP
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 09:41:16 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8618e7d8-6bf2-4d6c-b67f-293db70a7c2d;
 Thu, 04 Feb 2021 09:41:15 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8618e7d8-6bf2-4d6c-b67f-293db70a7c2d
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612431675;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=FMDVXBrap7ApEzdp7tbHIqaD4Q8/49II7EbgvHeWv44=;
  b=CJABbjvErJPFqg4lIlrBy9eg78C4rMSvK2n0hdTJIncEPh5DsiRqouQ9
   DCNXqc1GWt/8co4Gd+Cw76jTsyEtxancgfIVPt13fcONZg6E5oLYNGqG1
   Af1EKG48BJY+3J56iVzIQuTaBDDeIydqVSwRWpINsMhflBPOLipqMwf+l
   8=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 36487253
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,400,1602561600"; 
   d="scan'208";a="36487253"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fT9o4CtekVL7SBBMSsuQzK7xFMOkJ0EdkDVplzf2HBz78/eoQS3RW+S67mPG1ixufeBHJMRYWI99HqhTcb4GN3z7vdZViju/EPtVCsG2gqALCypDPisPpGjC4MS1kjEgaQ+u4Xwysv3iwHtp/XACG6tlm4cyKomJFiVz+j2WYjjcp+QggaoYhcntbq2GVoh8HRRgocXioKQK9xMbTkrqtfBmBKub4ywky2njDS/avBZLsl8U/Acwa7CoHeDTSgwQzef/LgcvfR8TzRioYNQieEVZIfO7zbVxBMozo9lubFxf2+tZZc7bDm2Stf95nersspr0jCweuvn1ISOUbiOCPA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=UNYHDhfPOHqIe4jfqbK9HaUjY8ZU7s7RN0jYylJkosI=;
 b=KrwfMUMJq6HegnkeHLk389Xf9NSmYQLCPa5jA6WVXXqCwp2T3OKNeqlBMUmxq4krvAEJsHrbtZlCtQusL1dRjDsacoWVZn4RY07DbOzYRbMFmdFPbFyvyCjtgxKA3h9QfpOLJDGVMl9anq6FpJ1eOcB5Tkpeug7TdSM1u5me5bFTmBiOwZmOnuc0Le9A1Hc6fBqHwnVgiGJPb6zpFiMybyTmEQdSR6eD3YkBaKEO0O7rrqUqktgR/SOQD+6nwjwqVEg2FmrxpEfpvoDkfV42GPCaXcnJWNZZZ8/PXqeahZOGh5PavwVYexZslLUO2+P2WF62PjdGJsXCanN9Q9pI6Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=UNYHDhfPOHqIe4jfqbK9HaUjY8ZU7s7RN0jYylJkosI=;
 b=Ae7oJ+2Uy93P1yayFT0RUCHm5YC6FtjYE5grSEV/Rw3NFa2VKclqf5ocjE67ea4nPueIo5z1tLby7YyzDDeEtyO01GYez86LQnKfcx3O13Zr7dcgX+WP9+mceA3RTtgPm1Ir+/dLtzTbmU19A8IzTb23lTwgj1m6fRZ8n+eiFlQ=
Date: Thu, 4 Feb 2021 10:40:57 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, James Dingwall
	<james-xen@dingwall.me.uk>
Subject: Re: [PATCH for-4.12 and older] x86/msr: fix handling of
 MSR_IA32_PERF_{STATUS/CTL} (again)
Message-ID: <YBvBKYi5wweq2kms@Air-de-Roger>
References: <1f4a8233-7f7b-dbd9-26f5-69e3eb659fa2@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <1f4a8233-7f7b-dbd9-26f5-69e3eb659fa2@suse.com>
X-ClientProxiedBy: PR0P264CA0230.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:100:1e::26) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: f4bf03f9-b9c6-48c1-23e3-08d8c8f0fc8f
X-MS-TrafficTypeDiagnostic: DM6PR03MB4972:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB4972E01A04A97200CCD8ACD08FB39@DM6PR03MB4972.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:242;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: f4bf03f9-b9c6-48c1-23e3-08d8c8f0fc8f
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Feb 2021 09:41:02.9005
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 1eFLtiE+FsWFOHUZSCIHLsEuRA2fzP6WzK11NHFc4bHbRxxcSrgC6PO07hCNKhg3PwGanhnIagtnMUWmR+0TRA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4972
X-OriginatorOrg: citrix.com

On Thu, Feb 04, 2021 at 10:36:06AM +0100, Jan Beulich wrote:
> X86_VENDOR_* aren't bit masks in the older trees.
> 
> Reported-by: James Dingwall <james@dingwall.me.uk>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Should this have a set of Fixes: tags for the commit hashes on <= 4.12?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 09:47:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 09:47:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81199.149580 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7bE6-0008Tb-JS; Thu, 04 Feb 2021 09:47:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81199.149580; Thu, 04 Feb 2021 09:47:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7bE6-0008TU-EY; Thu, 04 Feb 2021 09:47:02 +0000
Received: by outflank-mailman (input) for mailman id 81199;
 Thu, 04 Feb 2021 09:47:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+S2h=HG=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l7bE4-0008TP-Vt
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 09:47:01 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 29729db5-bde4-4c78-a718-f8a211562610;
 Thu, 04 Feb 2021 09:46:59 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B9F8EAD19;
 Thu,  4 Feb 2021 09:46:58 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 29729db5-bde4-4c78-a718-f8a211562610
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612432018; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=pXBAmXDyWvASwTdZ/4WeYaO3EVbd/jk45fhEDCSunYQ=;
	b=YqvSWO/MUCuHQZAdSuYe17qtjvXASMK+mvv1DHp7aVyI+2WM8AtC0G203RqvQVO/P1nAVU
	lHUiUfD4RORiAIP0z5fA2c+RG7aeN+RIZQ4q8sp8UGjOemRLQO6yhHaIySk4s3wrpcowax
	Yr44OMPAC8H6JBX1f/wyrxgxJLXLVlM=
Subject: Re: [PATCH for-4.15] autoconf: check endian.h include path
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
References: <20210204093833.91190-1-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <26522f21-4714-c29d-5ca4-baf012c51ac8@suse.com>
Date: Thu, 4 Feb 2021 10:46:58 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <20210204093833.91190-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 04.02.2021 10:38, Roger Pau Monne wrote:
> --- a/tools/configure.ac
> +++ b/tools/configure.ac
> @@ -74,6 +74,7 @@ m4_include([../m4/ax_compare_version.m4])
>  m4_include([../m4/paths.m4])
>  m4_include([../m4/systemd.m4])
>  m4_include([../m4/golang.m4])
> +m4_include([../m4/header.m4])
>  
>  AX_XEN_EXPAND_CONFIG()
>  
> @@ -517,4 +518,6 @@ AC_ARG_ENABLE([pvshim],
>  ])
>  AC_SUBST(pvshim)
>  
> +AX_FIND_HEADER([INCLUDE_ENDIAN_H], [endian.h sys/endian.h])

Instead of a new macro, can't you use AC_CHECK_HEADERS()?

I'm also not certain about the order of checks - what if both
exist?

Jan


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 09:48:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 09:48:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81200.149591 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7bFS-000092-SX; Thu, 04 Feb 2021 09:48:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81200.149591; Thu, 04 Feb 2021 09:48:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7bFS-00008v-PN; Thu, 04 Feb 2021 09:48:26 +0000
Received: by outflank-mailman (input) for mailman id 81200;
 Thu, 04 Feb 2021 09:48:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+S2h=HG=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l7bFR-00008o-3b
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 09:48:25 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 66991e48-0ac3-484a-9034-4df28cc256ba;
 Thu, 04 Feb 2021 09:48:24 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7790AAC97;
 Thu,  4 Feb 2021 09:48:23 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 66991e48-0ac3-484a-9034-4df28cc256ba
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612432103; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=UcMCk3sVSljoCpU2gCX1mtvxfFSEPoYvk1E214YlzkE=;
	b=tl326tfVmre0sIuja3r389JG6DaSSw74XV9ednB4n3cDMCQUTrGhScnzVr+JZp0gvWN3iQ
	bakFE2gR5PTIYIeLF9gYxgIRlsuiLEtqqJw/5kP2Qwez4KFsr4fMViqn4qXcvPhqj7eayu
	DxUpZeihhqimYTKT/Le7LQewevc92co=
Subject: Re: [PATCH for-4.12 and older] x86/msr: fix handling of
 MSR_IA32_PERF_{STATUS/CTL} (again)
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 James Dingwall <james-xen@dingwall.me.uk>
References: <1f4a8233-7f7b-dbd9-26f5-69e3eb659fa2@suse.com>
 <YBvBKYi5wweq2kms@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <06169cfd-6fde-3d12-08c2-96d502a0064c@suse.com>
Date: Thu, 4 Feb 2021 10:48:23 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <YBvBKYi5wweq2kms@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 04.02.2021 10:40, Roger Pau Monné wrote:
> On Thu, Feb 04, 2021 at 10:36:06AM +0100, Jan Beulich wrote:
>> X86_VENDOR_* aren't bit masks in the older trees.
>>
>> Reported-by: James Dingwall <james@dingwall.me.uk>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.

> Should this have a set of Fixes tag for the commit hashes on <= 4.12?

I'd prefer Fixes: to only reference non-backports. The tag is
mainly meant to allow noticing what needs backporting, after all.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 09:59:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 09:59:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81203.149604 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7bQ4-0001U8-T6; Thu, 04 Feb 2021 09:59:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81203.149604; Thu, 04 Feb 2021 09:59:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7bQ4-0001U1-Pz; Thu, 04 Feb 2021 09:59:24 +0000
Received: by outflank-mailman (input) for mailman id 81203;
 Thu, 04 Feb 2021 09:59:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YP8j=HG=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1l7bQ4-0001Tw-2e
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 09:59:24 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dd9adf15-8baa-4332-bb45-9958e234a8c9;
 Thu, 04 Feb 2021 09:59:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dd9adf15-8baa-4332-bb45-9958e234a8c9
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612432762;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=W9Sgws0EDVlIWBKMhffh921IrIhIJ8uXbKzG8O0T568=;
  b=bcAOeM+3jeiNaUXuWwC3HVPU/KMwXGRyhK5Ybc3DrPa15BGLB20fxWiC
   /tuR2l4/HyPE49I0ryYt2G+P9FV8/TYMSoPYHMFtyJ3uhTgwUemZgZJd+
   iwDWEyfSG7Beosza8cPOWZiXZBUSMFuhO9kQOjhWozNBJrHX4mchcDblx
   Q=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 37877702
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,400,1602561600"; 
   d="scan'208";a="37877702"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=JbAeWZhFt8a5ECwoQk72MGtGSdXxcEXPTz5Qv9nkAxJaHaEXa4fzTfLd9a8vQwgajvKRcGTAMpMbeZqGIUOKL8YJY6hg472YC1Eur/uFyqKGaLfLz6+2nYb8c/HXV6fbe32Ip8PDEB0/4KH8uYa/aCd9uEId1bBXptuprX14G8gwHHneUULRWFCXCGvkomPtH3qmzlbcOVwyNZDFkzhAxrRlI3tndxV2GOShL7BZNIpJuarez3huJVav3UnzIBIBi5cHkN9AKFjLbQJNv0ZMWkxcgffx4EI0wNV/NwRo9Z5/Mf9dvp7yqfpUJZLCX1bD8sQSoyJk4Ebtdr8GQqAE2g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=16gpnVe2bqK+U1OuPxS4+M70hUJQBvf7g9iJRjSl3oI=;
 b=n0NcN8nxrvAB8kC97a9C3UnUd8JzgIX0dcumVu1yiHuNNASFvi6PSWUUhXk2inahMxgJMUKBCA9cMwwGlgYv4ob/x14c3P33TA5/U8YORYCVrjVYAIjcdzOwQFyJbeu9InTq1GI55s3cPmC0b1pwLc6az5Hpbc286/5G5HgqX80TPsye+tc6A3IdioVPWHxXKN0aqc8Q0dummmjnI9GybJ3OffsPZFiMkRF5sEy59pEXmrK+vlZuhwnmXrdj2d01HB1+D4D0i6P4FtV1/UUHnJMH7NpyGQ9Yxcl43wYBD2Zda8ec75pmxvk+3Y0JDsDnaW2m+sjU36HGWlbvXc85IA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=16gpnVe2bqK+U1OuPxS4+M70hUJQBvf7g9iJRjSl3oI=;
 b=BiRhFhIGDxifQU5jeoc5TwigTn2LsuvFOs8LJ+YuAhY06eIF960YIWiZyNU87cm15314KawM5WVJHorn1s9c/M2HGB/rJwawNh1s/FHBHbVTT0q1UAjT8hPP1+cYvuqiD7696/567TViB2geBYp/P72aXiSMeLSFfvPMK/jvr6w=
Date: Thu, 4 Feb 2021 10:59:09 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>, Juergen Gross
	<jgross@suse.com>, <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH for-4.15] autoconf: check endian.h include path
Message-ID: <YBvFbbnje+Dt7CfD@Air-de-Roger>
References: <20210204093833.91190-1-roger.pau@citrix.com>
 <26522f21-4714-c29d-5ca4-baf012c51ac8@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <26522f21-4714-c29d-5ca4-baf012c51ac8@suse.com>
X-ClientProxiedBy: PR0P264CA0217.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:100:1e::13) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 957a5d6f-4647-4568-5cd6-08d8c8f3877d
X-MS-TrafficTypeDiagnostic: DM6PR03MB5339:
X-Microsoft-Antispam-PRVS: <DM6PR03MB53390E980B6880F50E65F3888FB39@DM6PR03MB5339.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7691;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: VC/grYPAvLbhgvItErIZCuU4Pi4lzSO5kuSx2ZrKTajbcacXF/+tlulAVLEffxgBoDHpN9abQ01eK9EpAbllGB99YDMIYPc2fulUZ9X1gRzcScRwszzqorrnrMo3IRD2U2b2ecEmNhBfKj8JTcf2WrMwyeEkhc7CMxDXJC5TllbTHqp2sPY8wyx2R/GVV1bto4i6FNzjZ+u3S6Ynk7YqYAHSnWASYDPJWTlB1JSzEwFvTLAJIqBuKjhHQ5q+7vqpkLtHv9N/4PtpOjrTwCUIQKIQXTcmmWWr+b461GdOw8eEBVLK2Zb9tWOp3Ry93CHqp6rJ9UofRttxajvUTmXvkhj27KRNSSNrmdrgmZw00es36dscHLwqNYB5jTecdUSpw2rq+ICgZzwE+4JCcjWv1dd/F5zGDzwL+JAmgjgMs/uPPt9eGX6UIyusyUfPUwukNCtKR1whmBaFsWBTpSYwPqJpAljmWGhtQYJtgiVlc6xsgKCq++7S/gG18/IxgnbBxi6B4j74cjq8GxHD2mY9GQaLFoZwSienUByYKIXrlKpTPq+kGlmlKRkcIsrIXD4a
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(376002)(366004)(346002)(396003)(39860400002)(136003)(6916009)(186003)(16526019)(8936002)(33716001)(66476007)(85182001)(53546011)(6666004)(2906002)(26005)(8676002)(54906003)(478600001)(66556008)(316002)(86362001)(6496006)(6486002)(83380400001)(66946007)(956004)(4326008)(9686003)(5660300002)(67856001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?V3YycVRqQXh6MjlsUVZYZDA4dGlycE1DR0pVUURHWk1CSW9WQjR3ZHN4UENx?=
 =?utf-8?B?WkR6R0NwVGJVOG5iTUM5VmxqQ3FCMDlGUmZSTWQwYlVTYzFsNjlPVjFrb3gy?=
 =?utf-8?B?MXdJNTlaQkx1ODNUTkVUeDltd3RicGppbmd4V1V0NHJneE93NFVIcWxRWmxq?=
 =?utf-8?B?WHM2VGJRN09YNmV2bDJsdlppZDF4cVlnYnJ6K3gxTFpIQURheGNNUzI5d3g4?=
 =?utf-8?B?VEh1MWlpbjZNek0yYmpjcUpRSHZEZWMxTndHR0FDdUcyRWwxUDVyUzlYQ2pR?=
 =?utf-8?B?WDE2UEYrWWZCQ0pyRTROc003L2d5S08rQ005ZkxFK3E0ZWNvVjJXeEZMWGNh?=
 =?utf-8?B?clp0L2cyUXVlOUpRR2JUUytla0R5R21aUWFRUDFxOEpCUFNCL2ZxQ3lCaTBL?=
 =?utf-8?B?dlRQR2VHRWJ1OGN5aTFCYUdraUNCRmswTTZKYmRiTUYrMi9DQXVucGNOckZp?=
 =?utf-8?B?YkZKUENUMGVTUGhvSERwbmhXOVhwQXY1OHJnMFZpM3huMU5vUU1iSnhwZ0xS?=
 =?utf-8?B?V2hneUZ0VGFHWHk4cFl5dG03OXFFcHAwMFdTeFFkVWpyQnc1SzlMUzhOWEFh?=
 =?utf-8?B?bTVCYjQzMnZZYVRlVzdhbDMxaFVwTlA1amFmNllLNS9PUytXSzJLZ3lVWis3?=
 =?utf-8?B?ODVhdlVqTlVYNkdrZFdmcHNBbExJcTlDWXRDVmVBY081NVphYW1VZDZKTU53?=
 =?utf-8?B?aTNPVTVNYzNhTW1qMDlPK3E1VlJON1hrM216SkxnenllZUZnZ2RqRVQ1dkh4?=
 =?utf-8?B?aVdWc1MzakFCZnY4U0tDbGRpYzJyZEoyM0MrcG9Yck5JZjBPU0JlV3lUYTc1?=
 =?utf-8?B?RzJHc3VOM3hvS3FGNXoyMUhGeW9uQzdYZGt1VC9IdlNndU1TTis2dW4xLzhV?=
 =?utf-8?B?SVp1aDVtYVE5ODk1cXJTTlV4RktVb1JpTjl0WnBpUHJtd2gycm1VdEZSVlZP?=
 =?utf-8?B?dUo2SnF5Z05lY0E0alNQOExTSDY2RUoyYzluNWRCVTc0aUNUZHdiWStEMEg0?=
 =?utf-8?B?NTM5YnZyYmFPSWlBNVdWUnQ0RUxQRVl1WFQvYzg1RWhUMHJ1RDVvTGlQYTVk?=
 =?utf-8?B?aWcySXhQWEV3cDJycVp0RUxsTklWSHptdE9VZlVmL0tLU21Lbm8rU0dWTGpN?=
 =?utf-8?B?VjE5TkhjRUkraDFCOEd3S3pJS1YxdGVzQUVDTCtEaGJzZk94YkthaE16V0Rp?=
 =?utf-8?B?NUJObWsxY2xYdmRJemkxSFpUYTZWbWpZaDNJV1hsZEdKc2wyelk0QjI0K1VN?=
 =?utf-8?B?RkZ1dGdaWitzbzFHVHZ2UWpVMEdRS0Z1bmlKcEFvclNKYTZ6Z09EdzUzbTNs?=
 =?utf-8?B?R2VuMVRLQnQra0xla0NQOFB6OEIrM0hrRUg2TitKV3JkRkRCTENvRDg4WUdZ?=
 =?utf-8?B?MWgzWkZGZFo1YVRsbGkyclpSNFF1Z2EycDU1UUt4MU16TjJ2Y1NGNnVVaTRr?=
 =?utf-8?B?RWZiRlE4MjlvSHBTN0cvcHpDL0tkM01CZEtkYjduM21KbVNWV244SHJzZkxP?=
 =?utf-8?B?ZlcwVWhhMGJIQVhiQXdYQnBnQ0g2VXlsdHBJbXZOM3djaXA3bnpjMUk1K0Yr?=
 =?utf-8?B?S3JVNjVqM2NyMktZVXRIa09QVWNuZytlanRXSDlQcmZERjVML3NHaktoNHBV?=
 =?utf-8?B?WXUxS0Y1bmJ5SU9xSkR4OTk2RnhhdzNaN1UxZ0FrNHdwUDk5M2FHeldDT2ZO?=
 =?utf-8?B?M2VIa3JmNHp6ZnN4Q3QzNjBFSUg0N3RJbVBNT05VQnJkTHcwMXNFZ3RuUnFx?=
 =?utf-8?Q?gipcKv7WsIdOV20mIcp7djHgota0HdTtUF6FTN2?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 957a5d6f-4647-4568-5cd6-08d8c8f3877d
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Feb 2021 09:59:15.0773
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Xx6x0VvqDMfio56DWj7eyi9oAFe+0lErjH4uSXX2nnY3vhzcvaeiJhJUTrYlqwk2vuNzB58EOfwtTd8MVWBB3Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB5339
X-OriginatorOrg: citrix.com

On Thu, Feb 04, 2021 at 10:46:58AM +0100, Jan Beulich wrote:
> On 04.02.2021 10:38, Roger Pau Monne wrote:
> > --- a/tools/configure.ac
> > +++ b/tools/configure.ac
> > @@ -74,6 +74,7 @@ m4_include([../m4/ax_compare_version.m4])
> >  m4_include([../m4/paths.m4])
> >  m4_include([../m4/systemd.m4])
> >  m4_include([../m4/golang.m4])
> > +m4_include([../m4/header.m4])
> >  
> >  AX_XEN_EXPAND_CONFIG()
> >  
> > @@ -517,4 +518,6 @@ AC_ARG_ENABLE([pvshim],
> >  ])
> >  AC_SUBST(pvshim)
> >  
> > +AX_FIND_HEADER([INCLUDE_ENDIAN_H], [endian.h sys/endian.h])
> 
> Instead of a new macro, can't you use AC_CHECK_HEADERS()?

AC_CHECK_HEADERS doesn't do what we want here: it produces a
HAVE_<header>_H define for each header on the list that's present,
and according to the documentation the action-if-found isn't passed
the path of the found header.

Here I want the variable to be set to the include path of the first
header on the list that's present on the system.

> I'm also not certain about the order of checks - what if both
> exist?

With my macro the first one will be picked.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 10:05:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 10:05:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81205.149616 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7bVn-0002RX-Ja; Thu, 04 Feb 2021 10:05:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81205.149616; Thu, 04 Feb 2021 10:05:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7bVn-0002RQ-G4; Thu, 04 Feb 2021 10:05:19 +0000
Received: by outflank-mailman (input) for mailman id 81205;
 Thu, 04 Feb 2021 10:05:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7bVm-0002RI-6Y; Thu, 04 Feb 2021 10:05:18 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7bVl-0008Uo-UT; Thu, 04 Feb 2021 10:05:17 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7bVl-0003oD-Jb; Thu, 04 Feb 2021 10:05:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l7bVl-00078k-J7; Thu, 04 Feb 2021 10:05:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=YbmSeTl7B1iVBDmdpCgf1IssVhi7+APiBtcRrvo+cEQ=; b=S0pnD+JPZWwM9p9hofrWI8uXZH
	SetO00NcWf0dnayLCE+enOM7qTFTU70pRejONbJbiP2jOqKb3+9xyKBupCXJyh7R1wXpkfzkGwBGI
	Ke2ngnDh3RrxBHCL+PNkXG7JYHDuWbvVld5IMMxoPn3/2WnqARG1SpFz0yVD25gGY8vo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [linux-5.4 bisection] complete test-amd64-amd64-libvirt
Message-Id: <E1l7bVl-00078k-J7@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 04 Feb 2021 10:05:17 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-amd64-libvirt
testid guest-start

Tree: libvirt git://xenbits.xen.org/libvirt.git
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
  Bug introduced:  a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Bug not present: acc402fa5bf502d471d50e3d495379f093a7f9e4
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/159010/


  commit a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Author: David Woodhouse <dwmw@amazon.co.uk>
  Date:   Wed Jan 13 13:26:02 2021 +0000
  
      xen: Fix event channel callback via INTX/GSI
      
      [ Upstream commit 3499ba8198cad47b731792e5e56b9ec2a78a83a2 ]
      
      For a while, event channel notification via the PCI platform device
      has been broken, because we attempt to communicate with xenstore before
      we even have notifications working, with the xs_reset_watches() call
      in xs_init().
      
      We tend to get away with this on Xen versions below 4.0 because we avoid
      calling xs_reset_watches() anyway, because xenstore might not cope with
      reading a non-existent key. And newer Xen *does* have the vector
      callback support, so we rarely fall back to INTX/GSI delivery.
      
      To fix it, clean up a bit of the mess of xs_init() and xenbus_probe()
      startup. Call xs_init() directly from xenbus_init() only in the !XS_HVM
      case, deferring it to be called from xenbus_probe() in the XS_HVM case
      instead.
      
      Then fix up the invocation of xenbus_probe() to happen either from its
      device_initcall if the callback is available early enough, or when the
      callback is finally set up. This means that the hack of calling
      xenbus_probe() from a workqueue after the first interrupt, or directly
      from the PCI platform device setup, is no longer needed.
      
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Link: https://lore.kernel.org/r/20210113132606.422794-2-dwmw2@infradead.org
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/linux-5.4/test-amd64-amd64-libvirt.guest-start.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/linux-5.4/test-amd64-amd64-libvirt.guest-start --summary-out=tmp/159010.bisection-summary --basis-template=158387 --blessings=real,real-bisect,real-retry linux-5.4 test-amd64-amd64-libvirt guest-start
Searching for failure / basis pass:
 158962 fail [host=albana1] / 158681 [host=chardonnay0] 158624 [host=godello1] 158616 [host=albana0] 158609 [host=rimava1] 158603 [host=huxelrebe1] 158593 [host=pinot1] 158583 [host=fiano1] 158563 [host=godello1] 158552 [host=fiano0] 158544 ok.
Failure / basis pass flights: 158962 / 158544
(tree with no url: minios)
Tree: libvirt git://xenbits.xen.org/libvirt.git
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3b468095cd3dfcd1aa4ae63bc623f534bc2390d2 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
Basis pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd d26b3110041a9fddc6c6e36398f53f7eab8cff82 c530a75c1e6a472b0eb9558310b518f0dfcd8860 6e5586863148773c15399ead249711143a74d2d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 3487f4cf8bf5cef47a4c3918c13a502afc9891f6
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/libvirt.git#2c846fa6bcc11929c9fb857a22430fb9945654ad-2c846fa6bcc11929c9fb857a22430fb9945654ad https://gitlab.com/keycodemap/keycodemapdb.git#27acf0ef828bf719b2053ba398b195829413dbdd-27acf0ef828bf719b2053ba398b195829413dbdd git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git#d26b3110041a9fddc6c6e36398f53f7eab8cff82-0fbca6ce4174724f28be5268c5d210f51ed96e31 git://xenbits.xen.org/osstest/linux-firmware.git#\
 c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#6e5586863148773c15399ead249711143a74d2d0-3b468095cd3dfcd1aa4ae63bc623f534bc2390d2 git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://xenbits.xen.org/qemu-xen.git#7ea428895af2840d85c524f0bd11a38aac308308-7ea428895af2840d85c524f0bd11a38aac308308 git://xenbits.xen.org/osstest/seabios.git#ef88eeaf0\
 52c8a7d28c5f85e790c5e45bcffa45e-ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e git://xenbits.xen.org/xen.git#3487f4cf8bf5cef47a4c3918c13a502afc9891f6-9dc687f155a57216b83b17f9cde55dd43e06b0cd
Loaded 15001 nodes in revision graph
Searching for test results:
 158533 [host=elbling0]
 158544 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd d26b3110041a9fddc6c6e36398f53f7eab8cff82 c530a75c1e6a472b0eb9558310b518f0dfcd8860 6e5586863148773c15399ead249711143a74d2d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 3487f4cf8bf5cef47a4c3918c13a502afc9891f6
 158552 [host=fiano0]
 158563 [host=godello1]
 158583 [host=fiano1]
 158593 [host=pinot1]
 158603 [host=huxelrebe1]
 158609 [host=rimava1]
 158616 [host=albana0]
 158624 [host=godello1]
 158681 [host=chardonnay0]
 158707 fail irrelevant
 158716 fail irrelevant
 158748 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 131f8d8a889a5ca66a835eea82bba043ac91a7cf c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e f8708b0ed6d549d1d29b8b5cc287f1f2b642bc63
 158765 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 131f8d8a889a5ca66a835eea82bba043ac91a7cf c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 6677b5a3577c16501fbc51a3341446905bd21c38
 158796 fail irrelevant
 158818 fail irrelevant
 158841 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158863 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158881 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158929 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea56ebf67dd55483105aa9f9996a48213e78337e 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158962 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3b468095cd3dfcd1aa4ae63bc623f534bc2390d2 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158984 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd d26b3110041a9fddc6c6e36398f53f7eab8cff82 c530a75c1e6a472b0eb9558310b518f0dfcd8860 6e5586863148773c15399ead249711143a74d2d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 3487f4cf8bf5cef47a4c3918c13a502afc9891f6
 158988 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3b468095cd3dfcd1aa4ae63bc623f534bc2390d2 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158991 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 6d57b582fb35d321ea42fe6a75f7251451a55569 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3b769c5110384fb33bcfeddced80f721ec7838cc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 452ddbe3592b141b05a7e0676f09c8ae07f98fdd
 158992 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 5fa6987258a757a9fae70ff28188dff07f01bf50 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 158994 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 60066d5181be6448def7e97b9ad0fc2741f6c1bb c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 158995 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 6b59bd9eea08dea84df61bfa847579f14213684c c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 158999 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 5a1d7bb7d333849eb7d3ab5ebfbf9805b2cd46c9 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159001 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c074680653e27f19eb584522df06758607277f77 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159002 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159003 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159006 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159007 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159008 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159010 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
Searching for interesting versions
 Result found: flight 158544 (pass), for basis pass
 For basis failure, parent search stopping at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3, results HASH(0x558ead726168) HASH(0x558eac618dc0) HASH(0x558ead720128) For basis fai\
 lure, parent search stopping at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c074680653e27f19eb584522df06758607277f77 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3, results HASH(0x558ead729098) For basis failure, parent search stopping at 2c846fa6bcc11929c9fb857a\
 22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 5a1d7bb7d333849eb7d3ab5ebfbf9805b2cd46c9 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3, results HASH(0x558ead6cdb40) For basis failure, parent search stopping at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbd\
 d 6b59bd9eea08dea84df61bfa847579f14213684c c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3, results HASH(0x558ead729698) For basis failure, parent search stopping at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 60066d5181be6448def7e97b9ad0fc2741f6c1bb c530a75c1e6a4\
 72b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3, results HASH(0x558eac619180) For basis failure, parent search stopping at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 6d57b582fb35d321ea42fe6a75f7251451a55569 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3b769c5110384fb33bcfeddced80\
 f721ec7838cc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 452ddbe3592b141b05a7e0676f09c8ae07f98fdd, results HASH(0x558eac5064a0) For basis failure, parent search stopping at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd d26b3110041a9fddc6c6e36398f53f7eab8cff82 c530a75c1e6a472b0eb9558310b518f0dfcd8860 6e5586863148773c15399ead249711143a74d2d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7e\
 a428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 3487f4cf8bf5cef47a4c3918c13a502afc9891f6, results HASH(0x558ead6e7718) HASH(0x558eab893bd8) Result found: flight 158748 (fail), for basis failure (at ancestor ~6126)
 Repro found: flight 158984 (pass), for basis pass
 Repro found: flight 158988 (fail), for basis failure
 0 revisions at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
No revisions left to test, checking graph state.
 Result found: flight 159002 (pass), for last pass
 Result found: flight 159003 (fail), for first failure
 Repro found: flight 159006 (pass), for last pass
 Repro found: flight 159007 (fail), for first failure
 Repro found: flight 159008 (pass), for last pass
 Repro found: flight 159010 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
  Bug introduced:  a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Bug not present: acc402fa5bf502d471d50e3d495379f093a7f9e4
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/159010/


  commit a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Author: David Woodhouse <dwmw@amazon.co.uk>
  Date:   Wed Jan 13 13:26:02 2021 +0000
  
      xen: Fix event channel callback via INTX/GSI
      
      [ Upstream commit 3499ba8198cad47b731792e5e56b9ec2a78a83a2 ]
      
      For a while, event channel notification via the PCI platform device
      has been broken, because we attempt to communicate with xenstore before
      we even have notifications working, with the xs_reset_watches() call
      in xs_init().
      
      We tend to get away with this on Xen versions below 4.0 because we avoid
      calling xs_reset_watches() anyway, because xenstore might not cope with
      reading a non-existent key. And newer Xen *does* have the vector
      callback support, so we rarely fall back to INTX/GSI delivery.
      
      To fix it, clean up a bit of the mess of xs_init() and xenbus_probe()
      startup. Call xs_init() directly from xenbus_init() only in the !XS_HVM
      case, deferring it to be called from xenbus_probe() in the XS_HVM case
      instead.
      
      Then fix up the invocation of xenbus_probe() to happen either from its
      device_initcall if the callback is available early enough, or when the
      callback is finally set up. This means that the hack of calling
      xenbus_probe() from a workqueue after the first interrupt, or directly
      from the PCI platform device setup, is no longer needed.
      
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Link: https://lore.kernel.org/r/20210113132606.422794-2-dwmw2@infradead.org
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>

pnmtopng: 180 colors found
Revision graph left in /home/logs/results/bisect/linux-5.4/test-amd64-amd64-libvirt.guest-start.{dot,ps,png,html,svg}.
----------------------------------------
159010: tolerable FAIL

flight 159010 linux-5.4 real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/159010/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-amd64-libvirt     14 guest-start             fail baseline untested


jobs:
 build-amd64-libvirt                                          pass    
 test-amd64-amd64-libvirt                                     fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Thu Feb 04 10:13:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 10:13:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81211.149631 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7be3-0003Xp-N5; Thu, 04 Feb 2021 10:13:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81211.149631; Thu, 04 Feb 2021 10:13:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7be3-0003Xi-Jx; Thu, 04 Feb 2021 10:13:51 +0000
Received: by outflank-mailman (input) for mailman id 81211;
 Thu, 04 Feb 2021 10:13:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+S2h=HG=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l7be1-0003Xd-RA
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 10:13:49 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3f7bf3a5-02dc-4b90-9fcf-d2fd1f68877a;
 Thu, 04 Feb 2021 10:13:44 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A8634AC97;
 Thu,  4 Feb 2021 10:13:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3f7bf3a5-02dc-4b90-9fcf-d2fd1f68877a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612433623; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=yoT15T+kyMhoEcCFEuSJLqP2f+DuofcetFo1XfUFPp8=;
	b=sorgXHJ1bNPjP6TLGyNUP8PAD4Yt9TuaTxT0Sdklj5KqPPAOn1dpvbdwYi/50AaNlmuLtk
	W2Y0oOSc70oVHGu/hQRf2YkIQ9NZXE3TJiu59hn2STj7wkrpbpc6dZiDXK/c7OIIlV/gcM
	Ho7KaJSKMflyPg0LiLfeUdShSXkp6iI=
Subject: Re: [PATCH for-4.15] autoconf: check endian.h include path
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
References: <20210204093833.91190-1-roger.pau@citrix.com>
 <26522f21-4714-c29d-5ca4-baf012c51ac8@suse.com>
 <YBvFbbnje+Dt7CfD@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <0e3576d3-4565-9898-e954-4a888b21d92f@suse.com>
Date: Thu, 4 Feb 2021 11:13:43 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <YBvFbbnje+Dt7CfD@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 04.02.2021 10:59, Roger Pau Monné wrote:
> On Thu, Feb 04, 2021 at 10:46:58AM +0100, Jan Beulich wrote:
>> On 04.02.2021 10:38, Roger Pau Monne wrote:
>>> --- a/tools/configure.ac
>>> +++ b/tools/configure.ac
>>> @@ -74,6 +74,7 @@ m4_include([../m4/ax_compare_version.m4])
>>>  m4_include([../m4/paths.m4])
>>>  m4_include([../m4/systemd.m4])
>>>  m4_include([../m4/golang.m4])
>>> +m4_include([../m4/header.m4])
>>>  
>>>  AX_XEN_EXPAND_CONFIG()
>>>  
>>> @@ -517,4 +518,6 @@ AC_ARG_ENABLE([pvshim],
>>>  ])
>>>  AC_SUBST(pvshim)
>>>  
>>> +AX_FIND_HEADER([INCLUDE_ENDIAN_H], [endian.h sys/endian.h])
>>
>> Instead of a new macro, can't you use AC_CHECK_HEADERS()?
> 
> AC_CHECK_HEADERS doesn't do what we want here: it will instead produce
> a HAVE_header-file define for each header on the list that's present,
> and the action-if-found doesn't get passed the path of the found
> header according to the documentation.
> 
> Here I want the variable to be set to the include path of the first
> header on the list that's present on the system.

I was thinking of

#if defined(HAVE_SYS_ENDIAN_H)
# include <sys/endian.h>
#elif defined(HAVE_ENDIAN_H)
# include <endian.h>
#else
# error ...
#endif

>> I'm also not certain about the order of checks - what if both
>> exist?
> 
> With my macro the first one will be picked.

And which one is to be the first one? IOW how likely is it that
on a system having both the first one is what we're after vs
the second one?

Jan


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 10:22:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 10:22:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81216.149646 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7bm8-0004dh-J2; Thu, 04 Feb 2021 10:22:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81216.149646; Thu, 04 Feb 2021 10:22:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7bm8-0004da-G0; Thu, 04 Feb 2021 10:22:12 +0000
Received: by outflank-mailman (input) for mailman id 81216;
 Thu, 04 Feb 2021 10:22:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YP8j=HG=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1l7bm7-0004dV-WA
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 10:22:12 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3eafe4fc-beab-4058-8301-97fddea7ed52;
 Thu, 04 Feb 2021 10:22:09 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3eafe4fc-beab-4058-8301-97fddea7ed52
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612434128;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=CoxzTOPSwW5tiPaFhlZouS/8+gSivV1kqyVEnhlZ7Yo=;
  b=CG67oiE78/W66Zr4v4dHnSdXAvxCGb2bkpvLrCxUNaWwc/7UH4mH0Sld
   qN4Wftc4nAPLzg29bmG2m/uw1KkIE2/RAiYCJvW6W20yTPOTW5Za9JYKz
   qyX8c+93oEvUy4wKk2TlkACG0ZXXWdyeo5cxWyM85bqe+9mF705GFIvkz
   I=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 36572300
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,400,1602561600"; 
   d="scan'208";a="36572300"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Muj8S/1Saj6fSzV+H13s6+yNp4UhpOdkcwlwmbKfXd1gx1z/OGUwhPPS642T88nCZi6qWE93vo3Q2erbbu/G7JqSiZ4DiOG6V6QoQkUpXy5+DgdFVWF3MruwhxYO5rJ/6ElVax9W45UQS2EaFjOubKdoOchWGNag0hYgRy1BeEwfhJfrQN1aF3OYa4oiGmFQF5T+3rCz6/wTDWLqyd+JPN98NxCMBPJhnn8MkI2W+TjJx+x8uUiFPYPYZ1PKJYpRmIqIEXMPj109d6wuOcwCnbK0am5Qau5kFSdutSNlrpkn8I8fuAP+7b6xfyqEP1AuJU+psv3QqpZlWQhe+joK8Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=QFSUr6ePCOde4sXwl/2PbMxM4K0uNjg4kBcKWGbFAQ4=;
 b=T+RrZuCTiFYWY4IHkuoPH6ZIm8Z0YlcM7AdIKDlAEid3MEEPKa8wyD+g4vC173Zk9BdXBabKSGgbR8+xz2tGmxe9X6mtvfH3PokUsrkg7Fj0WdnVeDH6jQkCTtONRG6nREwHR1lWfvh8UeHNJVYFIK3Wxy6COdtW4In2iYWoTq4rLVPOrOgUBokKwQqaQ/o0hnqXCZUm++BezgM0zqhjy34srUXdl40VqWbsoscxOfHsbJH6emtb3XM/PDACReSeaiFuGaVER/InfI0W+RopNJtDgpIOiBjZYGO1EOCygiCSFE7tsnEC/SVlqYCToj5G91PK3vZc8Qe7Be5sB4H+sg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=QFSUr6ePCOde4sXwl/2PbMxM4K0uNjg4kBcKWGbFAQ4=;
 b=jbzB6HKioSmX97Qly/FaCCEWGM6hJ3qNs5j2P3p+3KsCzVXoJTDpo+/aaHjv4s/ztvOdDZmHYpuVMVd+z3ZTjfs2kJE8TqPiXyCMSeBUQKVmmd0wWGKjOpAnrw8aXV4emALNZMxD9crI39i8cOyFDH1XPlGuKlAPmpvZaKImLfA=
Date: Thu, 4 Feb 2021 11:21:52 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>, Juergen Gross
	<jgross@suse.com>, <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH for-4.15] autoconf: check endian.h include path
Message-ID: <YBvKwNiIopKKZx/F@Air-de-Roger>
References: <20210204093833.91190-1-roger.pau@citrix.com>
 <26522f21-4714-c29d-5ca4-baf012c51ac8@suse.com>
 <YBvFbbnje+Dt7CfD@Air-de-Roger>
 <0e3576d3-4565-9898-e954-4a888b21d92f@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <0e3576d3-4565-9898-e954-4a888b21d92f@suse.com>
X-ClientProxiedBy: MR2P264CA0031.FRAP264.PROD.OUTLOOK.COM (2603:10a6:500::19)
 To DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 7d6ca994-d31a-4bf4-3a8a-08d8c8f6b3c3
X-MS-TrafficTypeDiagnostic: DM5PR03MB3370:
X-Microsoft-Antispam-PRVS: <DM5PR03MB3370DC8CC19E8CA5F6DE81B08FB39@DM5PR03MB3370.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 7d6ca994-d31a-4bf4-3a8a-08d8c8f6b3c3
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Feb 2021 10:21:57.8531
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: oReqK/cCETU06j9JqQQN9mLEWqQQ1UyqC2rwtVxFuxAy/APpuZZT/BhTCtxnXPtsfzWEmPWEz45qCBicYvaXzw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB3370
X-OriginatorOrg: citrix.com

On Thu, Feb 04, 2021 at 11:13:43AM +0100, Jan Beulich wrote:
> On 04.02.2021 10:59, Roger Pau Monné wrote:
> > On Thu, Feb 04, 2021 at 10:46:58AM +0100, Jan Beulich wrote:
> >> On 04.02.2021 10:38, Roger Pau Monne wrote:
> >>> --- a/tools/configure.ac
> >>> +++ b/tools/configure.ac
> >>> @@ -74,6 +74,7 @@ m4_include([../m4/ax_compare_version.m4])
> >>>  m4_include([../m4/paths.m4])
> >>>  m4_include([../m4/systemd.m4])
> >>>  m4_include([../m4/golang.m4])
> >>> +m4_include([../m4/header.m4])
> >>>  
> >>>  AX_XEN_EXPAND_CONFIG()
> >>>  
> >>> @@ -517,4 +518,6 @@ AC_ARG_ENABLE([pvshim],
> >>>  ])
> >>>  AC_SUBST(pvshim)
> >>>  
> >>> +AX_FIND_HEADER([INCLUDE_ENDIAN_H], [endian.h sys/endian.h])
> >>
> >> Instead of a new macro, can't you use AC_CHECK_HEADERS()?
> > 
> > AC_CHECK_HEADERS doesn't do what we want here: it will instead produce
> > a HAVE_header-file define for each header on the list that's present,
> > and the action-if-found doesn't get passed the path of the found
> > header according to the documentation.
> > 
> > Here I want the variable to be set to the include path of the first
> > header on the list that's present on the system.
> 
> I was thinking of
> 
> #if defined(HAVE_SYS_ENDIAN_H)
> # include <sys/endian.h>
> #elif defined(HAVE_ENDIAN_H)
> # include <endian.h>
> #else
> # error ...
> #endif

I think having to replicate this logic in all places that include
endian.h is cumbersome.

> >> I'm also not certain about the order of checks - what if both
> >> exist?
> > 
> > With my macro the first one will be picked.
> 
> And which one is to be the first one? IOW how likely is it that
> on a system having both the first one is what we're after vs
> the second one?

Not sure, but the same will happen with your proposal above: in your
chunk sys/endian.h will be picked over endian.h. If we think that's
the right precedence I can adjust AX_FIND_HEADER to be:

AX_FIND_HEADER([INCLUDE_ENDIAN_H], [sys/endian.h endian.h])

Which will achieve the same as your proposed snippet.

I can also add a comment to the macro that the first match will be the
one that gets set.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 10:27:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 10:27:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81218.149658 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7br6-0004nz-6l; Thu, 04 Feb 2021 10:27:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81218.149658; Thu, 04 Feb 2021 10:27:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7br6-0004ns-2z; Thu, 04 Feb 2021 10:27:20 +0000
Received: by outflank-mailman (input) for mailman id 81218;
 Thu, 04 Feb 2021 10:27:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+S2h=HG=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l7br5-0004nn-96
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 10:27:19 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 619e66bb-6f5e-4106-8376-87df1e02c419;
 Thu, 04 Feb 2021 10:27:18 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 958CEB004;
 Thu,  4 Feb 2021 10:27:17 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 619e66bb-6f5e-4106-8376-87df1e02c419
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612434437; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=2n8gHNPwG5zcCg9PIdihShOKQKHea/CUUAp8rD6wmKQ=;
	b=iRqdKLNui4AxGVDNMvqZSzVag2Dr3fiPMY715VsECtH8O/FFk6ACztnDpFb7d4FMnmgohT
	Uuac7LeN1EVi5W5RVGYMGzlD79mLojYdyH19tL5M5s8eOINB388+jbN6idExUwOVxwJlXZ
	CW3tyMEbRLfmGi2swtlhoYFV+GcXP/U=
Subject: Re: [PATCH for-4.15] x86/efi: enable MS ABI attribute on clang
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Ian Jackson <iwj@xenproject.org>, xen-devel@lists.xenproject.org
References: <20210203175805.86465-1-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a9d5f126-0b7c-6a8b-7ce9-736716e6e950@suse.com>
Date: Thu, 4 Feb 2021 11:27:17 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <20210203175805.86465-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 03.02.2021 18:58, Roger Pau Monne wrote:
> --- a/xen/include/asm-x86/x86_64/efibind.h
> +++ b/xen/include/asm-x86/x86_64/efibind.h
> @@ -172,7 +172,7 @@ typedef uint64_t   UINTN;
>  #ifndef EFIAPI                  // Forces EFI calling conventions reguardless of compiler options
>      #ifdef _MSC_EXTENSIONS
>          #define EFIAPI __cdecl  // Force C calling convention for Microsoft C compiler
> -    #elif __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 4)
> +    #elif __clang__ || __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 4)
>          #define EFIAPI __attribute__((__ms_abi__))  // Force Microsoft ABI
>      #else
>          #define EFIAPI          // Substitute expresion to force C calling convention
> 

So the problem is that some capable Clang versions report too low
a __GNUC_MINOR__ (I'm observing 2 alongside __GNUC__ being 4 on
Clang 5). The way the description and change are written rather
made me infer that __GNUC__ is not available at all, which I then
thought can't be the case because iirc we use it elsewhere.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 10:32:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 10:32:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81219.149670 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7bwP-0005qd-S7; Thu, 04 Feb 2021 10:32:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81219.149670; Thu, 04 Feb 2021 10:32:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7bwP-0005qW-Na; Thu, 04 Feb 2021 10:32:49 +0000
Received: by outflank-mailman (input) for mailman id 81219;
 Thu, 04 Feb 2021 10:32:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+S2h=HG=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l7bwO-0005qR-5L
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 10:32:48 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d9d6f668-b172-4497-8aec-48daf2cac748;
 Thu, 04 Feb 2021 10:32:42 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7F529AC97;
 Thu,  4 Feb 2021 10:32:41 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d9d6f668-b172-4497-8aec-48daf2cac748
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612434761; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=xnF5trDr1uTkUTu8gx7sd1TkJnEHMHIZ76LpR10QX3o=;
	b=TrLtbT0ngKyEANYqfa1pmaXovxTA0aw0yCtfApfH+3UIuOzO4KL0VLoAIJX0Q2o3+ysDh5
	r3YmChoWIJWTvkl8TwaoKYZZfxMEN3oHMh12YLeoop9Yhg0CIi5ZglOxHQ7m9iVHBfx133
	VjsqWTo+dnWphcqL/dWXEsUeX9ZJTpY=
Subject: Re: [PATCH for-4.15] autoconf: check endian.h include path
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
References: <20210204093833.91190-1-roger.pau@citrix.com>
 <26522f21-4714-c29d-5ca4-baf012c51ac8@suse.com>
 <YBvFbbnje+Dt7CfD@Air-de-Roger>
 <0e3576d3-4565-9898-e954-4a888b21d92f@suse.com>
 <YBvKwNiIopKKZx/F@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <10e334fe-eb02-e771-8404-cbcda9534383@suse.com>
Date: Thu, 4 Feb 2021 11:32:41 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <YBvKwNiIopKKZx/F@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 04.02.2021 11:21, Roger Pau Monné wrote:
> On Thu, Feb 04, 2021 at 11:13:43AM +0100, Jan Beulich wrote:
>> On 04.02.2021 10:59, Roger Pau Monné wrote:
>>> On Thu, Feb 04, 2021 at 10:46:58AM +0100, Jan Beulich wrote:
>>>> On 04.02.2021 10:38, Roger Pau Monne wrote:
>>>>> --- a/tools/configure.ac
>>>>> +++ b/tools/configure.ac
>>>>> @@ -74,6 +74,7 @@ m4_include([../m4/ax_compare_version.m4])
>>>>>  m4_include([../m4/paths.m4])
>>>>>  m4_include([../m4/systemd.m4])
>>>>>  m4_include([../m4/golang.m4])
>>>>> +m4_include([../m4/header.m4])
>>>>>  
>>>>>  AX_XEN_EXPAND_CONFIG()
>>>>>  
>>>>> @@ -517,4 +518,6 @@ AC_ARG_ENABLE([pvshim],
>>>>>  ])
>>>>>  AC_SUBST(pvshim)
>>>>>  
>>>>> +AX_FIND_HEADER([INCLUDE_ENDIAN_H], [endian.h sys/endian.h])
>>>>
>>>> Instead of a new macro, can't you use AC_CHECK_HEADERS()?
>>>
>>> AC_CHECK_HEADERS doesn't do what we want here: it will instead produce
>>> a HAVE_header-file define for each header on the list that's present,
>>> and the action-if-found doesn't get passed the path of the found
>>> header according to the documentation.
>>>
>>> Here I want the variable to be set to the include path of the first
>>> header on the list that's present on the system.
>>
>> I was thinking of
>>
>> #if defined(HAVE_SYS_ENDIAN_H)
>> # include <sys/endian.h>
>> #elif defined(HAVE_ENDIAN_H)
>> # include <endian.h>
>> #else
>> # error ...
>> #endif
> 
> I think having to replicate this logic in all places that include
> endian.h is cumbersome.

Right - I would further encapsulate this in a local header.

>>>> I'm also not certain about the order of checks - what if both
>>>> exist?
>>>
>>> With my macro the first one will be picked.
>>
>> And which one is to be the first one? IOW how likely is it that
>> on a system having both the first one is what we're after vs
>> the second one?
> 
> Not sure, but the same will happen with your proposal above: in your
> chunk sys/endian.h will be picked over endian.h.

Oh, sure - the two points are entirely orthogonal. And I'm
also not certain at all whether checking sys/ first is
better, equal, or worse. I simply don't know what the
conventions are. As a result I wonder whether we shouldn't
check that the header provides what we need.

Jan

> If we think that's
> the right precedence I can adjust AX_FIND_HEADER to be:
> 
> AX_FIND_HEADER([INCLUDE_ENDIAN_H], [sys/endian.h endian.h])
> 
> Which will achieve the same as your proposed snippet.
> 
> I can also add a comment to the macro that the first match will be the
> one that gets set.
> 
> Thanks, Roger.
> 



From xen-devel-bounces@lists.xenproject.org Thu Feb 04 10:46:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 10:46:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81228.149682 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7c9Y-00073A-QV; Thu, 04 Feb 2021 10:46:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81228.149682; Thu, 04 Feb 2021 10:46:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7c9Y-000733-NJ; Thu, 04 Feb 2021 10:46:24 +0000
Received: by outflank-mailman (input) for mailman id 81228;
 Thu, 04 Feb 2021 10:46:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Jl0U=HG=citrix.com=christian.lindig@srs-us1.protection.inumbo.net>)
 id 1l7c9X-00072y-33
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 10:46:23 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2d1bc1e8-05ed-4362-be27-3a92bc487383;
 Thu, 04 Feb 2021 10:46:21 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2d1bc1e8-05ed-4362-be27-3a92bc487383
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612435581;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=/NewM2VVKP3hcHJgOQAKw5LUOopwqm1DLlrgJC/ccOo=;
  b=ZsKVXV6TAzw5V87+R4RL/4DRVUfbxvms9Ed6L1gGmY1xHzUQhoGbAdrK
   zS38DZQpAqVbORb6dU1GRdQWLb8bft62yQzHXBA0VcQXeFf727r15aij3
   8BDgV9WfkC0NBSjVr/TVAg+AGBkH4RtnsI5cBak7ypUdxl4CYkBHKE3bd
   Q=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: dlS/zBTk2qDV3uXF8ruq52Zjz9m4M5iKWb87UosRj1yEYGpQNas4lexstT0MCHZej+64d8Eyg1
 r3cB1r/EWngLANEwCAm1CX713N81v9soPjTfJz/2ADUVWI84y+bLw8OchBMaagnLPuvX2L2/lh
 NaGUvTsf3A4jCaWyc4FF+/l9hceLOidEFcX37gMi+R+h2G42cqPwHr07nbZOVXjLCAPldA6Pl9
 vUnELYTVAhFQ+5G31N0mOst5OQlte1td7c7b8W7X64rvwfXfwdgKQ6Rgr5IuGxmgQnZfK1kTCV
 ia0=
X-SBRS: 5.2
X-MesageID: 36919768
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,400,1602561600"; 
   d="scan'208";a="36919768"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Zj4hZSAMQ4NrmGa5JLX9Mdh0BN6fW7H7eeR0GUllot1eSAsi0lYF4oICrhkV1RemRE89kOKi/Vv8mtthXzfva2ysPB4zruxexQuGNUVG4kSVwfNm1MzJ+YMIZhh/IfrpeIdu2uOVv6jPoIOYTmUUgmN7FTU8dfCfOePpi5p301wsGZBQwQh1RnnnYW0pjkPBGJrIXW0qsDhEPg3049QhQJ4nhR4T3ESi9AELkVcZblxq8/6DguVIpmPAUHVsE8EzAk/sXMUhXpfo6TwIZpsRjEVJ+xjGfkFbMLaxagXGMybiBJ53gJS8eEj34swptan0WVelDf7xWVsM/9aZXDaGbg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=BfDk3QzAsOePUuwSbNMBLSPc+que2ymKyfD5YuUyz5g=;
 b=TYZBGXuEzwZDhteZvb2tdpNqFQqeBvkayEntAXsqertkfW+SQQA2ASz7PGrC9S2XQqp+jWmmspgg6hWrKKYEdQqjJ129Lchq++jPrz4Yqoqug/dQvdkSheIl6loCxMnE9SwShDk6MZVlycvvl2gUP7+H0z60binCpoOMtZp2qSN+IYq5lYav0YSz3E/9NddY5EXiVZy4iz6zbtwCx5ba5GbOMWtjhExOKbFkhqg/0GWpvEFkxaQjTM7MvJ5sIxYIyNGxaKhZWsAb0iPrNwrTyXlzhpmW1GzFq9ZC3zFmGPPmCh7MCoY92iOZ0HMUNUbeabqxMipLtFpqUIUjnotUZw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=BfDk3QzAsOePUuwSbNMBLSPc+que2ymKyfD5YuUyz5g=;
 b=wD3CNRmQQgcmBvJUN6IFGZzJd7P3FYrcBDMkNeMIsvjhiVLQLghVYLtewgklfGVUjjrex2rwe/mt9ckKIZC+Y4e7nd39MSNN/kZoaSrKESQdl63CmKVOL6uAt3/lx8+GKcnyulF3Tee8Dcl8ROWbr6/y3YsCHMa30F11mF15NOA=
From: Christian Lindig <christian.lindig@citrix.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>, Xen-devel
	<xen-devel@lists.xenproject.org>, Edwin Torok <edvin.torok@citrix.com>
CC: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH for-4.15 0/3] tools/oxenstored bugfixes
Thread-Topic: [PATCH for-4.15 0/3] tools/oxenstored bugfixes
Thread-Index: AQHW+lMOjXsd9xiVCEyL7uf9ydPmOqpH0O5D
Date: Thu, 4 Feb 2021 10:46:15 +0000
Message-ID: <MWHPR03MB2445EC99795584AFB0F6D7D3F6B39@MWHPR03MB2445.namprd03.prod.outlook.com>
References: <20210203173549.21159-1-andrew.cooper3@citrix.com>
In-Reply-To: <20210203173549.21159-1-andrew.cooper3@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 308e6dc2-9886-4ca8-785c-08d8c8fa18fd
x-ms-traffictypediagnostic: MWHPR03MB3184:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: <MWHPR03MB3184D3288CFAA85C415C6BEEF6B39@MWHPR03MB3184.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:9508;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: YwPXZrPPi+efmA3I12qtDxsoHqIJ3tXHetBw9+HshFRhEVqCYqC/fwYi1f6rRyo56GC1Ychz7v/ya2zioKXKXkNVrcU+m1JEdEeA/AxovcssJXXrnLuK1YMYzK5iw3WXMA5pmyYfh0NGfAk4PrL0RYR4QphsoaOi3L9/obEaut+tmcG+qSJRSkS3D6ZG6emZYriaISJVv5HHelapn2LQTKalnHxdk++ZFnol2pavGu07reQGTb5A78wGYOhzY53oqNsJuCRkmLwEjDilSpnQ1esFZJhGQTKPcKKLT+FmcYS5is8F9iy0qxZsAqRpPMoRueGf25Bwowo/5kz08dYQ0tAIODOrzZTN86IeXiVK6KMdlpb8jq+uRzgzhHysefiVjucJiXjuDaHk1djJ9sUZOIjsLIr+t/UTXeY5XhzqQ4OYX7N+R7afPx8M5oLSbhzXrZtGTghkHE1fLljaLn0SvfWqw76LNRiEDyfaAymj6OENeAwTUr/wLVX+Gj5j4e4o5qIQX+23AACtP2kqYkoJ9Q==
x-forefront-antispam-report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:MWHPR03MB2445.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39860400002)(346002)(376002)(136003)(396003)(366004)(4326008)(478600001)(66446008)(8676002)(76116006)(7696005)(86362001)(66556008)(186003)(53546011)(6506007)(64756008)(8936002)(2906002)(66476007)(26005)(66946007)(71200400001)(5660300002)(33656002)(110136005)(316002)(54906003)(6636002)(83380400001)(66574015)(91956017)(55016002)(44832011)(52536014)(9686003);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: =?iso-8859-1?Q?OMr+gGPfU3tsSzQFLiyGjdC75nkr7ccXoTKaWZgxQBzgPZtLaBEwSCWNcM?=
 =?iso-8859-1?Q?rBBYSDzPO7X50SGYrMgM3+Xhl4GeSI4uk3T4vRXO3xGNyp+8kfneT2Ntwq?=
 =?iso-8859-1?Q?Y7G110y+sjjyvXNZeV3yzgxufZy8zcAk4lt08I/n23VaFdML4KP2aTdycG?=
 =?iso-8859-1?Q?xL92YJYJpXz6Vnlq69jzpG6YyjBAGEVxZE9SY+g64bYrT6jfnMgygHdw2R?=
 =?iso-8859-1?Q?B96N9VGFvkHS8ViP3elcuVQouUo4eGxQ2bYxwf9OWKXuUiTAh0CrFo9koJ?=
 =?iso-8859-1?Q?cTxctU9waVHxv9RhkwmZHsp0JZQCLzjWBi6xuZBb5sjGFPMJ8oZZfJYH+B?=
 =?iso-8859-1?Q?RUwLr9a/bnHG3rn9yPIoM9MCNCedFnMRt+kxSgqB/l+MeFHpUS2te647pn?=
 =?iso-8859-1?Q?8lRgLbk5msuBLCe8pIISuJewpHcsz+St7JAWu7chU/ZZNTjaFKtpMVcZOQ?=
 =?iso-8859-1?Q?r1fnRaLQY120gFmeQQL+EFvRpLhhdKTEipYFZB3Eryh/YMyc+HWlj8SsYm?=
 =?iso-8859-1?Q?OZYVc+wgogzly/5Sd3ydh8rupsODgVUdHmAylP/r9w+SbOer4XHb9Dd4fb?=
 =?iso-8859-1?Q?hI09QBVXN5jR+vnEJFuH0Ixg+IZvv619uQd+N5Mjc1KkoKXr1EjpbsWrZe?=
 =?iso-8859-1?Q?Nt/YrYL01pAu+4K+Nr2C13qhUGra5lhv3uvZS9rhMTkdLXTEueq2Zn9wCj?=
 =?iso-8859-1?Q?UiiYlWngNZk2cQPlTO/q0tnP9cEo2tV6UBYw6eVruxH2FedzDJNv2S6dLD?=
 =?iso-8859-1?Q?MaHFLWsNGiNW1CZFUBbVAautBwQZx0KfPpKFwy9VAI+HUbemQ9Qo+Y0dQr?=
 =?iso-8859-1?Q?3cJiFQQcIDByxgO1cVZLT8eE5lBHi3jfERI1CgkzmUfBOn7HUUR1OyFEDs?=
 =?iso-8859-1?Q?tTcYyZX7rj/JkCJwlqdEWX3jNeKGtIDjN4LNKnD4DIZ20C6GyI9cFSWuFq?=
 =?iso-8859-1?Q?Bi8jPhYtoZuE3mLlBSv7B6uR3hs3pj+hNHChYQ4whnT+BUM/JI/XS9IuCc?=
 =?iso-8859-1?Q?CP+hsnh3DM/poPcCDQNRajfV1BHTk9CEYH24vu9O9QYjTEr55hN1bF86zr?=
 =?iso-8859-1?Q?oNFeoIHrbxcGS17BTkJ0oyXNq0Am37dQW32mrl5ArLBP?=
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: MWHPR03MB2445.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 308e6dc2-9886-4ca8-785c-08d8c8fa18fd
X-MS-Exchange-CrossTenant-originalarrivaltime: 04 Feb 2021 10:46:15.8564
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: G/eNm6WFNoreA+Y0J3a+u15CqKvNSsDEK+Ne7yD3SwLt7GGS+0he/fHEUhXnR+2k9anvRNl/qwRifCAJAeu2aO/GDnYaPMrITHqeDuBNB/4=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MWHPR03MB3184
X-OriginatorOrg: citrix.com

Acked-by: Christian Lindig <christian.lindig@citrix.com>

________________________________________
From: Andrew Cooper <andrew.cooper3@citrix.com>
Sent: 03 February 2021 17:35
To: Xen-devel
Cc: Andrew Cooper; Christian Lindig; Ian Jackson; Wei Liu
Subject: [PATCH for-4.15 0/3] tools/oxenstored bugfixes

All of these have been posted before, but were tangled in other content which
is not appropriate for 4.15 any more.  As a consequence, I didn't get around
to committing them before the code freeze.

They were all found with unit testing, specifically fuzzing the
serialising/deserialising logic introduced for restartability, asserting that
the tree before and after was identical.

The unit testing/fuzzing content isn't suitable for 4.15, but these bugfixes
want backporting to all releases, and should therefore be considered for 4.15
at this point.

Edwin Török (3):
  tools/oxenstored: Fix quota calculation for mkdir EEXIST
  tools/oxenstored: Reject invalid watch paths early
  tools/oxenstored: mkdir conflicts were sometimes missed

 tools/ocaml/xenstored/connection.ml  | 5 ++---
 tools/ocaml/xenstored/connections.ml | 4 +++-
 tools/ocaml/xenstored/store.ml       | 1 +
 tools/ocaml/xenstored/transaction.ml | 2 +-
 4 files changed, 7 insertions(+), 5 deletions(-)

--
2.11.0



From xen-devel-bounces@lists.xenproject.org Thu Feb 04 10:48:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 10:48:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81230.149694 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7cBP-0007Ar-Bl; Thu, 04 Feb 2021 10:48:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81230.149694; Thu, 04 Feb 2021 10:48:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7cBP-0007Ak-89; Thu, 04 Feb 2021 10:48:19 +0000
Received: by outflank-mailman (input) for mailman id 81230;
 Thu, 04 Feb 2021 10:48:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YP8j=HG=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1l7cBN-0007Af-Ak
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 10:48:17 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4ae49649-cb8f-45a9-9cf5-67330b142d6f;
 Thu, 04 Feb 2021 10:48:16 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4ae49649-cb8f-45a9-9cf5-67330b142d6f
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612435695;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=YZ5jaHuvxJMWRhh8dTRMp41b5389EhRwtULzrdCF8cI=;
  b=WWkYKj+yro5rfZrGZ+iTEbh+Y0ioTRHR3dGpX9Jhp52DbJHy945Rm0a3
   /MO5ySU5HYiZjxt+a7Aos9eu3Ul1oZ5BYQTlJdc9IsgTERQO/zKqNZ6IS
   6aApmbCNv3RM4LD4RDj+6E/2qdVldzSIXudcLA4wJCdTlWaki0n5Cvkch
   8=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: z6PTq0dhPH8pBPi1dLnC5vRBrvTNK4wgnYjP8WFO9zjNmqMAYwZmACVYBxsakNAv+dEGxidaWF
 5UTqbsnEuTCmorerPcGD9V6GXxzfIZ1QHueu1sx0cChnwJrhCgbe9WJ2eOHvZFAKHcKWNtvQWx
 /pEbHGOl45vFeXInREzHTArFyb44+o0JfFQyeS/HdU2mVaQSHd3JA3dnt4JwWtJaI8S5nqv4m7
 cJvADIpPIP3+HDDDhOWg21zo7v6SlMLWpWKNEY3azcnC4Zta1nS24wU8t9CnKDm5ytlIxcF/H2
 /i4=
X-SBRS: 5.2
X-MesageID: 37880706
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,400,1602561600"; 
   d="scan'208";a="37880706"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=STjRF+LbkoUCpnv8Fki8bBlgAE/XUW5DRgubGXaZe2V6s8Cwh88XeHJF74PCWFOU6U7Y9rpOyw6lNyJOUbNjWrwDyl4LEpS4fMXvnhpOnfbVSxGJjGQBUJIopF6uKYldqdEveHZHsuZHqjgqboIIxf6DWjN3qUwmL9z2S8pNQFfraC11aNHlqa+M+oXFPIrT2EsxOlefnWkjSDrFG1cxeVES0A6j2bcN20PAy8Pzdj9VToIpoa7CqF1aOuI4+YOs4BhmRe7npiujJtcDm6l9XH+ZHwuN/Or07iFNLA0MWOiTzawUneaP/3l9YyfGM2U/BYtIDpy0t7DHkjuLX1gRBQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2DCLPw0EZVdmn6tpWD9PAmPRAJ5PBP0XB9YN0Xc3Qwc=;
 b=NwumBXOWBqIhQcerh3lPwQCbMxeHJXZnrTKRacc1C85h8XC/3yLFu79SHffPqy3MMBxkwd5Lq6CWO4Q+aWlbvNNZBqeW4gHh0jE+sYxKFlbhxtnH9AhEI5RDBJEA0HXQb/XKauzEocuqVxj68Hxk8nsWt2U6j04GScFxY3C8yRlCFSPBv9hPAejf3oMAYYu4EgTF/1DYhzDzPhRvczzL066tgUd5xqmqlWOrEqt5qLShA/XbvzakOUlYRu/wffviwfKgv1wTRNv0hq8kJ0SqOOdZAGZORaMc6QZiWcs0lxEIXThxwvNLihigRuT2pKsWDVyL9XjxGWFdMa54b07zZQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2DCLPw0EZVdmn6tpWD9PAmPRAJ5PBP0XB9YN0Xc3Qwc=;
 b=f7R91ftlH/reuFR5arpuoPFdLllxQaBQPyAT9GeSyWt6Ew8RaCeHtn2YEv4Oz+9ei9oPCZhw7yrVRMNUCbHENptyuQ0MOg9pXN875xug5HDJglP2vMUmhDO/Jn+C/ANxr9+lfmioMGwN/mIVPvl2F6qrP6F19uzppqVwsnS9GSE=
Date: Thu, 4 Feb 2021 11:48:03 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>, Juergen Gross
	<jgross@suse.com>, <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH for-4.15] autoconf: check endian.h include path
Message-ID: <YBvQ4/mFiiVJNvaA@Air-de-Roger>
References: <20210204093833.91190-1-roger.pau@citrix.com>
 <26522f21-4714-c29d-5ca4-baf012c51ac8@suse.com>
 <YBvFbbnje+Dt7CfD@Air-de-Roger>
 <0e3576d3-4565-9898-e954-4a888b21d92f@suse.com>
 <YBvKwNiIopKKZx/F@Air-de-Roger>
 <10e334fe-eb02-e771-8404-cbcda9534383@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <10e334fe-eb02-e771-8404-cbcda9534383@suse.com>
X-ClientProxiedBy: AM6P192CA0077.EURP192.PROD.OUTLOOK.COM
 (2603:10a6:209:8d::18) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 0a18a994-ab21-4ac8-d95f-08d8c8fa5c9a
X-MS-TrafficTypeDiagnostic: DM6PR03MB4057:
X-Microsoft-Antispam-PRVS: <DM6PR03MB4057C306A2E5CC3BA5EA7C2F8FB39@DM6PR03MB4057.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: SJ4lJWIa9VJtbX4K7sfj2L7VsQdPVavCd4qmx3qWslXttbUjLtpyrFcEk6AFfCgn90BhL726hO+nP96pqRWq2lPUuEadYJFAf4fzua3TEQauRVCIbi1iYwZwWRRs4YfeR7STTBFwV8RcP5V5QUleOCP1H647+KThVdzh6WOXfkaxRMZX3cCklubK7Rd+R9DfBT6mdlc9J9TFcCIHDGl5WN2AY5Pyrwe4qVRsqyRiARUBhsRNZjA5PV5BKqDFDsECoJI3v6VABJzZyap6acQfD2YgI36PEqzH05V55wgvA3jTtEwoi5ANlwn5IqxFlq54FU+NK3w2M97kb9eV7PnQYLU5Kd40NU0yVmqYKhGyogwsIgeYGI0rTPpzLysqBr621EkyYLQS5Tre+6GOHxO5Nb8vnboCaF7gGpJ/BvuUuRq7dWcqjgUtQUdG0ohxFTUavrkGfSpe/K2TMqOnn3wZwni7sLayGCouVAIM9QnVin2wNIATk3IOQrRllsTIr6utj/cHS/+mHQ/XeSVdpD6N4Q==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(366004)(39860400002)(396003)(346002)(136003)(376002)(16526019)(186003)(26005)(66946007)(6496006)(66556008)(5660300002)(54906003)(316002)(53546011)(33716001)(8676002)(2906002)(66476007)(4326008)(6916009)(6486002)(83380400001)(956004)(8936002)(85182001)(478600001)(6666004)(86362001)(9686003);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?aU1TQXdEeWtiNTNVblVlbW5VRFBQWXBwWi9DVkxBbkJyaVlIVkJhTk9SRG1R?=
 =?utf-8?B?SXp5Zy9XR2lSSkZCSjloMWt2Um9HR1ZGWTJzNVhSbVovSmVHSDF1Rm9yODEw?=
 =?utf-8?B?YkQwMDV5TmdQa0pDWEdIV2lTc01neS96clkvTzVHd0xjS0Zya0tqYVVaZGln?=
 =?utf-8?B?cHh6bXJFNm1VYy9lL0FjeTZybU93Y2JXSG1wZENpSnl2czhYRFNDUU4yb3cy?=
 =?utf-8?B?Y2Y5ZDFCbVNiQlBraGUyUm1QNW1PYVlCWDI2SnZaZjFHNmFRQlRrdCtlcXVD?=
 =?utf-8?B?YTF1clRtMFltdksxU0RNS2NybUNTMTZWZk53Njd5UU1MOXZ0a2o0dGYrTHZT?=
 =?utf-8?B?VS9UVC80dDhjSmo5TVNYRkV2MGdZMHJ2blRmaElZVmFjSnRyc2xvUldnaGha?=
 =?utf-8?B?L044NnhUbjRSNUVyall4QnNjelc4cVNGbWVRTVdEL2RVOVBrTnMyZnFRTE51?=
 =?utf-8?B?dnZkOWdNeXlxUSt2TzJGTnFuRlJFcDY5eVJEQUxXZGdnMFRGaEhIeVFodUlt?=
 =?utf-8?B?Z1FYT3lpMmZPbSswaVozcXVPeXBVMVFFN1o3YUU1NWRhdW4xWGEzY3VQZXhn?=
 =?utf-8?B?bVB6RThGYWV4dVV1YzVNSloxL2tkSW5jK0lCUStLeVh0Um5ncUdESHVycURn?=
 =?utf-8?B?WWVsTEpTRS8wTWhVK2lHZ3RvRHUzVmp2MlQ3TnpHVFRyWnpERm5oajRIOFFY?=
 =?utf-8?B?VmdUWDhQTVJtR3RpejFXWUVTajlUSlhxT3ZVY05YUVBZbVVDaUlGUUdhZmh1?=
 =?utf-8?B?WTUxUTFkblV5aTVudlluenM2bmFFcVZHZ1BHbXVCYVI3dG9NUWhIdWtRU0Ex?=
 =?utf-8?B?YVc5emR6SGV2cFpjU2hZS0lCZVZpU1ZkM1hTNXNxVHlxUmVlOUljOFEwTVM2?=
 =?utf-8?B?RWxHejRWQzV0dFBJNmQxd1NnQ2JWVjRHcThocXBoZCttcVRhd0ErUEVYbUE0?=
 =?utf-8?B?NVlCS05UU1kvZW11QUJwK05UanVhVnFYTFBGYVlxQW5MbVpUNXFRaXhsY29r?=
 =?utf-8?B?L0o5ZnhMTUt3TG9CVlJYZ1M3eEl1cUFtblNoV2dKZ09rNG9VelI4UEp5UTJG?=
 =?utf-8?B?VWQ3TjFGdFFVK1hzQjBrZVMvMEQ3ZEdqM2I5OE9BcnFrRjVHNFlTaTJCRVpO?=
 =?utf-8?B?c2RFczVYcElPUlovcW4wWEdWeDJiS0M0Z2hTejZQQytBSUF6QjBuaFVJSGdG?=
 =?utf-8?B?azh5UUFVa2lXOWJtVFUrU2ZCQ3dpLzY4cEQ1ZVUrUnRwblNYU0ZuMzZLTHZZ?=
 =?utf-8?B?NnEvb3VwY0J5U1FiSGZtdW94ZWIxSWp3TnkydUd4U2ZlQVJ4UE5wK015Wkdv?=
 =?utf-8?B?dWJaZENUMkFKYTB2cDdreVZGanpjclRBeXE3dmJNK0dLdUFmeHM2bmkzRUFZ?=
 =?utf-8?B?eDJWVWEyUGtMTzd1dTFZZ0hnK1RXUEZUU3I5cHQ0N2VXMnlCYWFFVzhYbGEw?=
 =?utf-8?B?dVhlZmFuUzM1WnRSejdjMTREcElURzByb1Bhdk9LdTc3MEowb2JVRm8xV3V2?=
 =?utf-8?B?S0dsYThuRkROQWFkWFl0ZnZwT0lEeGRPaXNHdnMvZC9xcEpxb0s1bTRTZDh1?=
 =?utf-8?B?bWExbVVLbCtxTklLSnQwYXVRZDYzbWlEWjh5MEp3cUpsMTVpOUtQdG5jKzZT?=
 =?utf-8?B?bmE0bG0xVHpWZEhMR2xYY0dDUjBjcTFLblZHSmUyVXVkNVhLTEVqTnJMOHNj?=
 =?utf-8?B?NUpCWHNHK2k5MUNrd0t3MTU5M0oxNkdRRVFzVzZnMmRXRUVKZ1F1NnBPZWVz?=
 =?utf-8?B?ZEZwaW9Mbm9TSnpkR2hYVXJVejQrSEVORzEyVDdQYkxDcnpubmt2OXNxWmRG?=
 =?utf-8?B?ZVpUUklPcmxTUk9YWnAzUT09?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 0a18a994-ab21-4ac8-d95f-08d8c8fa5c9a
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Feb 2021 10:48:09.5937
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Ss9b5K5AljyK98h0Uhf4C4/YvrjdupWw6M4D+LLrWHsShQVOzAKGMx/nNMBK1RvFJhqAFX46qTIkJmQE4OTUVA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4057
X-OriginatorOrg: citrix.com

On Thu, Feb 04, 2021 at 11:32:41AM +0100, Jan Beulich wrote:
> On 04.02.2021 11:21, Roger Pau Monné wrote:
> > On Thu, Feb 04, 2021 at 11:13:43AM +0100, Jan Beulich wrote:
> >> On 04.02.2021 10:59, Roger Pau Monné wrote:
> >>> On Thu, Feb 04, 2021 at 10:46:58AM +0100, Jan Beulich wrote:
> >>>> On 04.02.2021 10:38, Roger Pau Monne wrote:
> >>>>> --- a/tools/configure.ac
> >>>>> +++ b/tools/configure.ac
> >>>>> @@ -74,6 +74,7 @@ m4_include([../m4/ax_compare_version.m4])
> >>>>>  m4_include([../m4/paths.m4])
> >>>>>  m4_include([../m4/systemd.m4])
> >>>>>  m4_include([../m4/golang.m4])
> >>>>> +m4_include([../m4/header.m4])
> >>>>>  
> >>>>>  AX_XEN_EXPAND_CONFIG()
> >>>>>  
> >>>>> @@ -517,4 +518,6 @@ AC_ARG_ENABLE([pvshim],
> >>>>>  ])
> >>>>>  AC_SUBST(pvshim)
> >>>>>  
> >>>>> +AX_FIND_HEADER([INCLUDE_ENDIAN_H], [endian.h sys/endian.h])
> >>>>
> >>>> Instead of a new macro, can't you use AC_CHECK_HEADERS()?
> >>>
> >>> AC_CHECK_HEADERS doesn't do what we want here: it will instead produce
> >>> a HAVE_header-file define for each header on the list that's present,
> >>> and the action-if-found doesn't get passed the path of the found
> >>> header according to the documentation.
> >>>
> >>> Here I want the variable to be set to the include path of the first
> >>> header on the list that's present on the system.
> >>
> >> I was thinking of
> >>
> >> #if defined(HAVE_SYS_ENDIAN_H)
> >> # include <sys/endian.h>
> >> #elif defined(HAVE_ENDIAN_H)
> >> # include <endian.h>
> >> #else
> >> # error ...
> >> #endif
> > 
> > I think having to replicate this logic in all places that include
> > endian.h is cumbersome.
> 
> Right - I would further encapsulate this in a local header.

IMO encapsulating in configure achieves the same purpose.

> >>>> I'm also not certain about the order of checks - what if both
> >>>> exist?
> >>>
> >>> With my macro the first one will be picked.
> >>
> >> And which one is to be the first one? IOW how likely is it that
> >> on a system having both the first one is what we're after vs
> >> the second one?
> > 
> > Not sure, but the same will happen with your proposal above: in your
> > chunk sys/endian.h will be picked over endian.h.
> 
> Oh, sure - the two points are entirely orthogonal. And I'm
> also not certain at all whether checking sys/ first is
> better, equal, or worse. I simply don't know what the
> conventions are.

I'm not sure either. For the specific case of endian.h I would
expect only one to be present, and I think we should first check at the
top level (i.e. endian.h) before checking subfolders (i.e. sys/), as
the top level should have precedence.

I really don't have a strong opinion either way, so if there's an
argument to do it the other way around that would also be fine.

> As a result I wonder whether we shouldn't
> check that the header provides what we need.

Right, that would be a step forward I think. I'm not opposed to it,
but I also don't have plans to implement it myself. Just checking the
path seems fine for the purpose here.

It could be expanded to also use AC_CHECK_DECLS to check for specific
declarations once the header path is found.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 10:52:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 10:52:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81233.149706 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7cF8-0008DF-RR; Thu, 04 Feb 2021 10:52:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81233.149706; Thu, 04 Feb 2021 10:52:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7cF8-0008D8-OL; Thu, 04 Feb 2021 10:52:10 +0000
Received: by outflank-mailman (input) for mailman id 81233;
 Thu, 04 Feb 2021 10:52:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Jl0U=HG=citrix.com=christian.lindig@srs-us1.protection.inumbo.net>)
 id 1l7cF7-0008D3-3v
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 10:52:09 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fde135cd-2190-417e-9a7a-fe50e5c478fe;
 Thu, 04 Feb 2021 10:52:07 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fde135cd-2190-417e-9a7a-fe50e5c478fe
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612435927;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=+RrCOgS5bqqy1znkZWAwJEgj8uOIvRFpn+StFHTw/tc=;
  b=WBSZo9t4KEVufASz77kTre2fW810n8BXPC6Oix5Cfa1VTxsxgOupHwGC
   TwVEVutvDFmg81AjpDR8Ej3VzaI04/xgI037deyTIpIMmNa3rSFBnOqVs
   o53E6vNJ6qrVQX0SqXuZDKvVmzyu5bdRhUYrQLDzlg2bkpLZglGCD0a7x
   E=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: DqMqhx+JEI+J+j6VoIfDYeLBiX1QlMtbV6GCcvQ0geFcX17WaltYtY7MW55p3wWLRqhNhYtq3Q
 QjSMZuvCHLUIs9pkqUvKwEZ/tw+0TRVGU/KTzxvBRCrKd/qJl4S4cOUDtAmUS+Qx7sLPK/PdQH
 eVu2AiLYWg4WtYjaKf15YLGfitRa8/1O+T7Auk8X5KUoHsO1JS49B3xI1e3e+BY2qYM1odfL/j
 6OK37eQBx0zn9hp41/P4eHpNRDcmqKuU+GekUv3uvOeiVlOQfLCRwYsri0MXqpvy6zQ2vyzUDz
 Nbs=
X-SBRS: 5.2
X-MesageID: 36491183
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,400,1602561600"; 
   d="scan'208";a="36491183"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=XAaP0d26FshZ6wzmD7AeCYT2EkYo/6vekY2A5Tygp6KmYG4Y96HwoySAD6EbkewafF/LJwqs9H8dVhlb3IFNmKy+3CPSAXQOQO7090WrPz+dzGQrrrNyirHk9T4grWIydNTd/cTE7VGFi4zXgN3lpklDqYjHhfQ1G9vLEU50mDLsvkFwJXHdnc9Hf17xEEbTvJGIGuQVNtBadSbWaOUQd+7Nrdsb1vftVP74mybjXlyYcMWsTaN+VugMYAtlLf8fI7zuhJ28sLPLjJa11BAy78kIVq4h7B5FzPzBxdDfTQJu3fcHU4NdMZXS8Fl5n80xUcuKBMLBsf7vfygrfW6J2g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=76niKCVuE399JwJvasuKex51F5wmIyh13P0i0k6gpwg=;
 b=P/uZPCCrwHXATOjkQYbeZwfb2kLyHVYp8n7akJQUd9d7v8DuHYuQ9Mv1hmh9Q3gyAsMBcsrbnCJBl4DCw/r4E9V2RmFkQR+TenO/em51f5D2fbu1q/57H7WXddyPk5BZmL4Q6cP0YpZNeyOdZ/5PMg9IoGV5xI+tXodWgX/m5FbNc8F9PUUjlIgyRfN1EmSLPMfCFOZS8NFWWQfRjmW6DaVsd+uVWjenBkblnOlXQeKvjF1EakQr16b43bLj6RSGph9eryQaSL2N0r6szlRtUi7ifzcTWwwKqjSHWoZQz6Gs63Tgk1tfQ6lPaAY5yrlA7KlwT6N5yLeM9LMVbDUrfw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=76niKCVuE399JwJvasuKex51F5wmIyh13P0i0k6gpwg=;
 b=vlibNGRPaZC+DUeCQVKyTax8NL5uruJIZY2DjvQ50e/6u7K3xMZYrSkxnZBRnBWiqxbmPEeP3Gn5j5ehxrsPrW06Ausc+DMoDUqlICV5KaDMlbVmFS1Dtt5bdVr2wolsMmxuqOE83kytS6FwQSQa68xllYRgVnGPRceDvdEIZik=
From: Christian Lindig <christian.lindig@citrix.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>, Xen-devel
	<xen-devel@lists.xenproject.org>
CC: Edwin Torok <edvin.torok@citrix.com>, Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: Re: [PATCH 2/3] tools/oxenstored: Reject invalid watch paths early
Thread-Topic: [PATCH 2/3] tools/oxenstored: Reject invalid watch paths early
Thread-Index: AQHW+lMQLgaeDrvHYEyTG7gcxvIkFqpH0ZCg
Date: Thu, 4 Feb 2021 10:51:50 +0000
Message-ID: <MWHPR03MB2445436997A3F5D6644AB575F6B39@MWHPR03MB2445.namprd03.prod.outlook.com>
References: <20210203173549.21159-1-andrew.cooper3@citrix.com>,<20210203173549.21159-3-andrew.cooper3@citrix.com>
In-Reply-To: <20210203173549.21159-3-andrew.cooper3@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 2493a915-9920-4e1f-e7f1-08d8c8fae0bb
x-ms-traffictypediagnostic: MWHPR03MB3278:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: <MWHPR03MB3278066F9C7464534FD75108F6B39@MWHPR03MB3278.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:9508;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: ULoFI9ZklkB07pVdnO337RFnmZfFLvx7gXcsnuSMpY/z+26f0YHLqbW0TTiVy+68Di0tA3w5V1LvhM9BxW9b8DWRYcGvg7EWLbLYvHFUmpA46eC49Zr9Rsb8niBf2Ktjd5yfZosi2CzhBEUUPFJ7KLC+GJ9sixsgyQwwam8oJS8qPQmcZfYWOpJecrncP7Q85K8bTRmx/wwfyIy3dBUczycjtRX3S25E7B4T2SFenbablU3Dx6hJUnRsDRwpapssdqvovEKfkZuGHFrc1VBoV35y3klB4htfh8l0PcWFEJBe0MndUBrT75r+XcjDnVe47hdrbUQXnpQbej66Q0CB7n5qG0Z1o995uuJ37RPNTn5lroNZkZooqFdD+k2hWOqyowGfw/1wuZqL7n37W57E98Vc8CrxCwLmnwND2CWGdV9SajcZi7lA4uiFHuncp1xikPMRoFxGxWsUSHTEKnTNfCkOfSgGU8RBVJs0zdSPnG2CyAer4igkzEizzFA6nlxUFOTVwFoT/N9gbRakQ3MFdg==
x-forefront-antispam-report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:MWHPR03MB2445.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(366004)(136003)(376002)(39860400002)(346002)(396003)(4326008)(2906002)(9686003)(71200400001)(52536014)(44832011)(33656002)(54906003)(5660300002)(110136005)(26005)(6506007)(53546011)(66946007)(7696005)(66476007)(8936002)(66556008)(66446008)(64756008)(91956017)(76116006)(83380400001)(8676002)(478600001)(86362001)(55016002)(66574015)(186003)(316002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: =?iso-8859-1?Q?NE8nlAOB0NGbXly2v3hmFt1SmrawSfeywSQerDXxR6hltiAJiWGvnZDvp2?=
 =?iso-8859-1?Q?Sx3klsUP5Ys637e1+eVDq+yYm56E4CbIfvQ1VZlNJ9GiUdgRmxtGfShkQy?=
 =?iso-8859-1?Q?xjUzUftsZThbBmsoqMrPVbBLnB5D3UJtRgyyPR0RvdRkAz5nAlWCaGC/yA?=
 =?iso-8859-1?Q?R2cAmBhiUWd/aaG3H6Xtht87X9Vu7xIgpzguEGkvabIj1cv6bFI6Dp4fGM?=
 =?iso-8859-1?Q?b2Hai+jby6+5B3ob+RJrjSSnwHLyv9z5u7Zv2qTOdmIo/SiCd46kudOKV6?=
 =?iso-8859-1?Q?AsmaW9uLg97ahdF7wkOz6BZ3LDJbu4MMoeny+HU6HY/LTvKyhF+9HRpdXB?=
 =?iso-8859-1?Q?2sHevQfP9byWeA/TwAOvGZYatSIDXS4amON2pEkFxVuHwsdsESzcvtGcAB?=
 =?iso-8859-1?Q?GSitK/uQsikKkZo2NItJrw3WuW2Ndpu3CG2uzcbKYspMle5KdyK0LJ37ZX?=
 =?iso-8859-1?Q?PZ92IvB9BQHhscLx2ZO6xlqyhdJHt9SebY3zrFqPWDDliROxDuCarcuMYR?=
 =?iso-8859-1?Q?2Jc3BFZLWh2hq+W6isvD2ExnCHr6/3MVj2qoz5IY78u2daucFvFz/Kvu4I?=
 =?iso-8859-1?Q?egriDDPTUp19c2NA6PPzOyk7JOqLTKT/e/ltrjVpyLFCDJ+R0NrgJ62Z+H?=
 =?iso-8859-1?Q?Ke97AtGtSjqKcKJ3x73ryxXfJfcksHQpXPg/8KW7BsY8CTPRZAX1EjssIg?=
 =?iso-8859-1?Q?mz1ZEcznfeMPds+4MLI9+1Qri8K+7d7NabN4metdkaGD0pl6YT5wBkuCHL?=
 =?iso-8859-1?Q?ZZF3syxHLS6FOhtEAiqlM/YnzHwk4qNJsM+TDry8Xr3PDk0q/+vwrCcD5q?=
 =?iso-8859-1?Q?ntMKUqU5sYDswuPfhuy4hCiE+OmlkzE73dRf301QI9VEN/Hdbl1RIeedD6?=
 =?iso-8859-1?Q?AVqeaBM3oS3gbKz4g7ApTOWdG4OzFTJgY8Bpy1L0EqvJf5QCarq1WPPJry?=
 =?iso-8859-1?Q?3qegc3tmrPh/Q/cehycJd1ehr3WbEoOOWy9zOmWv/uoT/d+037U0uHA6pP?=
 =?iso-8859-1?Q?xJbNFEynf5kBe0w4rgohcjCVLeHtPAW/x/0wDoXU0Hy/DZFzPNAaE8DwuM?=
 =?iso-8859-1?Q?dt1N8eCMU273EJjg4+E7IARr3yVnB6z9bRTAcYfy016D?=
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: MWHPR03MB2445.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2493a915-9920-4e1f-e7f1-08d8c8fae0bb
X-MS-Exchange-CrossTenant-originalarrivaltime: 04 Feb 2021 10:51:50.9226
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: yKOJmz+pwZSBy+2l/Wj4gyaRhnNQuoSmJI957ti0+2diCS052mq20k/OussMcqdnRiz1Q+nVMbbpY3tjBz8TaGjm0eakVU4/wIwI6KW9+64=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MWHPR03MB3278
X-OriginatorOrg: citrix.com

Acked-by: Christian Lindig <christian.lindig@citrix.com>

Great work. Fuzzing is often thought of as best suited to finding bugs in
languages like C, where memory is explicitly managed, but here it reveals
logical bugs.

________________________________________
From: Andrew Cooper <andrew.cooper3@citrix.com>
Sent: 03 February 2021 17:35
To: Xen-devel
Cc: Edwin Torok; Christian Lindig; Ian Jackson; Wei Liu
Subject: [PATCH 2/3] tools/oxenstored: Reject invalid watch paths early

From: Edwin Török <edvin.torok@citrix.com>

Watches on invalid paths were accepted, but they would never trigger.  The
client also got no notification that its watch is bad and would never
trigger.

Found again by the structured fuzzer, due to an error on live update reload:
the invalid watch paths would get rejected during live update, and the list of
watches would be different pre/post live update.

The testcase is a watch on `//`, which is an invalid path.
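For illustration only (this is not oxenstored's actual validation code, and the function name is hypothetical), a minimal sketch of why `//` is an invalid path: splitting on `/` yields an empty component, which a canonical path must not contain.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical check: a xenstore-style path must be absolute, must not end
 * in '/', and must not contain an empty component such as in "//" or
 * "/a//b".  The root "/" alone is special-cased as valid. */
static bool path_is_valid(const char *path)
{
    size_t len = strlen(path);

    if (len == 0 || path[0] != '/')
        return false;               /* must be absolute */
    if (len == 1)
        return true;                /* "/" alone is the root */
    if (path[len - 1] == '/')
        return false;               /* no trailing separator */
    for (size_t i = 1; i < len; i++)
        if (path[i] == '/' && path[i - 1] == '/')
            return false;           /* empty component, e.g. "//" */
    return true;
}
```

Under this sketch, `//` fails both the trailing-separator and empty-component rules, matching the fuzzer's testcase.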

Signed-off-by: Edwin Török <edvin.torok@citrix.com>
---
CC: Christian Lindig <christian.lindig@citrix.com>
CC: Ian Jackson <iwj@xenproject.org>
CC: Wei Liu <wl@xen.org>
---
 tools/ocaml/xenstored/connection.ml  | 5 ++---
 tools/ocaml/xenstored/connections.ml | 4 +++-
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/tools/ocaml/xenstored/connection.ml b/tools/ocaml/xenstored/connection.ml
index d09a0fa405..65f99ea6f2 100644
--- a/tools/ocaml/xenstored/connection.ml
+++ b/tools/ocaml/xenstored/connection.ml
@@ -158,18 +158,17 @@ let get_children_watches con path =
 let is_dom0 con =
        Perms.Connection.is_dom0 (get_perm con)

-let add_watch con path token =
+let add_watch con (path, apath) token =
        if !Quota.activate && !Define.maxwatch > 0 &&
           not (is_dom0 con) && con.nb_watches > !Define.maxwatch then
                raise Quota.Limit_reached;
-       let apath = get_watch_path con path in
        let l = get_watches con apath in
        if List.exists (fun w -> w.token = token) l then
                raise Define.Already_exist;
        let watch = watch_create ~con ~token ~path in
        Hashtbl.replace con.watches apath (watch :: l);
        con.nb_watches <- con.nb_watches + 1;
-       apath, watch
+       watch

 let del_watch con path token =
        let apath = get_watch_path con path in
diff --git a/tools/ocaml/xenstored/connections.ml b/tools/ocaml/xenstored/connections.ml
index 8a66eeec3a..3c7429fe7f 100644
--- a/tools/ocaml/xenstored/connections.ml
+++ b/tools/ocaml/xenstored/connections.ml
@@ -114,8 +114,10 @@ let key_of_path path =
        "" :: Store.Path.to_string_list path

 let add_watch cons con path token =
-       let apath, watch = Connection.add_watch con path token in
+       let apath = Connection.get_watch_path con path in
+       (* fail on invalid paths early by calling key_of_str before adding watch *)
        let key = key_of_str apath in
+       let watch = Connection.add_watch con (path, apath) token in
        let watches =
                if Trie.mem cons.watches key
                then Trie.find cons.watches key
2.11.0



From xen-devel-bounces@lists.xenproject.org Thu Feb 04 10:52:15 2021
Date: Thu, 4 Feb 2021 11:51:49 +0100
From: Roger Pau Monné <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, "Ian
 Jackson" <iwj@xenproject.org>, <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH for-4.15] x86/efi: enable MS ABI attribute on clang
Message-ID: <YBvRxYyWPMYBvGNr@Air-de-Roger>
References: <20210203175805.86465-1-roger.pau@citrix.com>
 <a9d5f126-0b7c-6a8b-7ce9-736716e6e950@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <a9d5f126-0b7c-6a8b-7ce9-736716e6e950@suse.com>

On Thu, Feb 04, 2021 at 11:27:17AM +0100, Jan Beulich wrote:
> On 03.02.2021 18:58, Roger Pau Monne wrote:
> > --- a/xen/include/asm-x86/x86_64/efibind.h
> > +++ b/xen/include/asm-x86/x86_64/efibind.h
> > @@ -172,7 +172,7 @@ typedef uint64_t   UINTN;
> >  #ifndef EFIAPI                  // Forces EFI calling conventions reguardless of compiler options
> >      #ifdef _MSC_EXTENSIONS
> >          #define EFIAPI __cdecl  // Force C calling convention for Microsoft C compiler
> > -    #elif __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 4)
> > +    #elif __clang__ || __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 4)
> >          #define EFIAPI __attribute__((__ms_abi__))  // Force Microsoft ABI
> >      #else
> >          #define EFIAPI          // Substitute expresion to force C calling convention
> > 
> 
> So the problem is that some capable Clang versions set too low
> a __GNUC_MINOR__ (I'm observing 2 alongside __GNUC__ being 4
> on Clang5). The way the description and change are written
> made me rather imply __GNUC__ to not be available at all,
> which I then thought can't be the case because iirc we use it
> elsewhere.

Yes, I also see 4.2 on Clang 11.

Do you want me to expand the description by adding:

"Add a specific Clang check because the GCC version reported by Clang
is below the required 4.4 to use the __ms_abi__ attribute."

Thanks, Roger.
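The version gate under discussion reduces to a simple predicate. A sketch (the function name is hypothetical; the Clang-reported values 4.2 come from the thread above) showing why Clang fails the plain GCC >= 4.4 check and needs its own `__clang__` test:

```c
#include <assert.h>
#include <stdbool.h>

/* Mirrors the efibind.h preprocessor condition as a runtime predicate:
 * a compiler qualifies for __ms_abi__ if it is Clang, or if it reports
 * a GNU C version of at least 4.4. */
static bool can_use_ms_abi(bool is_clang, int gnuc_major, int gnuc_minor)
{
    return is_clang || gnuc_major > 4 ||
           (gnuc_major == 4 && gnuc_minor >= 4);
}
```

Without the explicit `is_clang` term, a Clang that advertises `__GNUC__ == 4` and `__GNUC_MINOR__ == 2` would be misclassified as too old for `__ms_abi__`.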


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 10:52:28 2021
From: Christian Lindig <christian.lindig@citrix.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>, Xen-devel
	<xen-devel@lists.xenproject.org>
CC: Edwin Torok <edvin.torok@citrix.com>, Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: Re: [PATCH 3/3] tools/oxenstored: mkdir conflicts were sometimes
 missed
Thread-Topic: [PATCH 3/3] tools/oxenstored: mkdir conflicts were sometimes
 missed
Thread-Index: AQHW+lMOM4p+9lOZ8UOKE5A9Zry5AKpH0usa
Date: Thu, 4 Feb 2021 10:52:13 +0000
Message-ID: <MWHPR03MB244589C19C6055D7635A34D3F6B39@MWHPR03MB2445.namprd03.prod.outlook.com>
References: <20210203173549.21159-1-andrew.cooper3@citrix.com>,<20210203173549.21159-4-andrew.cooper3@citrix.com>
In-Reply-To: <20210203173549.21159-4-andrew.cooper3@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB

Acked-by: Christian Lindig <christian.lindig@citrix.com>

________________________________________
From: Andrew Cooper <andrew.cooper3@citrix.com>
Sent: 03 February 2021 17:35
To: Xen-devel
Cc: Edwin Torok; Christian Lindig; Ian Jackson; Wei Liu
Subject: [PATCH 3/3] tools/oxenstored: mkdir conflicts were sometimes missed

From: Edwin Török <edvin.torok@citrix.com>

Due to how set_write_lowpath was used here, it didn't detect create/delete
conflicts.  When we create an entry, we must mark its parent as modified
(this is what creating a new node via write does).

Otherwise two transactions, one creating and another deleting a node, can
both succeed depending on timing.  Alternatively, one transaction can read
an entry, conclude it doesn't exist, do other work based on that
information, and successfully commit even though another transaction
creates the node via mkdir in the meantime.
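The fix can be illustrated with a hedged sketch (hypothetical helper names, not oxenstored's implementation): a transaction that creates a node records the node's *parent* as written, so a concurrent transaction that read the parent, e.g. to conclude the child did not exist, is flagged as conflicting at commit time.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Return the parent of an absolute path into buf: "/a/b" -> "/a",
 * "/a" -> "/".  Assumes a well-formed absolute path. */
static void parent_path(const char *path, char *buf, size_t bufsz)
{
    const char *last = strrchr(path, '/');
    size_t n = (last == path) ? 1 : (size_t)(last - path);

    if (n >= bufsz)
        n = bufsz - 1;
    memcpy(buf, path, n);
    buf[n] = '\0';
}

/* A transaction that read read_path conflicts with a concurrent mkdir of
 * mkdir_path if the mkdir modified what it read -- which, after the fix,
 * is the parent of the newly created node. */
static bool conflicts(const char *read_path, const char *mkdir_path)
{
    char parent[256];

    parent_path(mkdir_path, parent, sizeof(parent));
    return strcmp(read_path, parent) == 0;
}
```

With the old behaviour (marking only the new node itself as written), a reader of `/a` would not conflict with a concurrent `mkdir /a/b`, which is exactly the missed-conflict scenario described above.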

Signed-off-by: Edwin Török <edvin.torok@citrix.com>
---
CC: Christian Lindig <christian.lindig@citrix.com>
CC: Ian Jackson <iwj@xenproject.org>
CC: Wei Liu <wl@xen.org>
---
 tools/ocaml/xenstored/transaction.ml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/ocaml/xenstored/transaction.ml b/tools/ocaml/xenstored/transaction.ml
index 25bc8c3b4a..17b1bdf2ea 100644
--- a/tools/ocaml/xenstored/transaction.ml
+++ b/tools/ocaml/xenstored/transaction.ml
@@ -165,7 +165,7 @@ let write t perm path value =

 let mkdir ?(with_watch=true) t perm path =
        Store.mkdir t.store perm path;
-       set_write_lowpath t path;
+       set_write_lowpath t (Store.Path.get_parent path);
        if with_watch then
                add_wop t Xenbus.Xb.Op.Mkdir path

--
2.11.0



From xen-devel-bounces@lists.xenproject.org Thu Feb 04 11:01:24 2021
Subject: Re: [PATCH for-4.15] x86/efi: enable MS ABI attribute on clang
To: Roger Pau Monné <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Ian Jackson <iwj@xenproject.org>, xen-devel@lists.xenproject.org
References: <20210203175805.86465-1-roger.pau@citrix.com>
 <a9d5f126-0b7c-6a8b-7ce9-736716e6e950@suse.com>
 <YBvRxYyWPMYBvGNr@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <617bbef5-5117-0b61-95a4-93a035f623f2@suse.com>
Date: Thu, 4 Feb 2021 12:01:08 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <YBvRxYyWPMYBvGNr@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 04.02.2021 11:51, Roger Pau Monné wrote:
> On Thu, Feb 04, 2021 at 11:27:17AM +0100, Jan Beulich wrote:
>> On 03.02.2021 18:58, Roger Pau Monne wrote:
>>> --- a/xen/include/asm-x86/x86_64/efibind.h
>>> +++ b/xen/include/asm-x86/x86_64/efibind.h
>>> @@ -172,7 +172,7 @@ typedef uint64_t   UINTN;
>>>  #ifndef EFIAPI                  // Forces EFI calling conventions reguardless of compiler options
>>>      #ifdef _MSC_EXTENSIONS
>>>          #define EFIAPI __cdecl  // Force C calling convention for Microsoft C compiler
>>> -    #elif __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 4)
>>> +    #elif __clang__ || __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 4)
>>>          #define EFIAPI __attribute__((__ms_abi__))  // Force Microsoft ABI
>>>      #else
>>>          #define EFIAPI          // Substitute expresion to force C calling convention
>>>
>>
>> So the problem is that some capable Clang versions set too low
>> a __GNUC_MINOR__ (I'm observing 2 alongside __GNUC__ being 4
>> on Clang5). The way the description and change are written
>> made me rather imply __GNUC__ to not be available at all,
>> which I then thought can't be the case because iirc we use it
>> elsewhere.
> 
> Yes, I also see 4.2 on Clang 11.
> 
> Do you want me to expand the description by adding:
> 
> "Add a specific Clang check because the GCC version reported by Clang
> is below the required 4.4 to use the __ms_abi__ attribute."

I guess adding this is easy enough when done while committing.
Thanks for the addition.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 11:09:35 2021
Subject: Re: [PATCH] drivers: net: xen-netfront: Simplify the calculation of
 variables
To: Jiapeng Chong <jiapeng.chong@linux.alibaba.com>,
 boris.ostrovsky@oracle.com
Cc: sstabellini@kernel.org, davem@davemloft.net, kuba@kernel.org,
 ast@kernel.org, daniel@iogearbox.net, hawk@kernel.org,
 john.fastabend@gmail.com, andrii@kernel.org, kafai@fb.com,
 songliubraving@fb.com, yhs@fb.com, kpsingh@kernel.org,
 xen-devel@lists.xenproject.org, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org, bpf@vger.kernel.org
References: <1612261069-13315-1-git-send-email-jiapeng.chong@linux.alibaba.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <9598a7b4-a0f1-3f64-b239-50612cc5caae@suse.com>
Date: Thu, 4 Feb 2021 12:09:26 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <1612261069-13315-1-git-send-email-jiapeng.chong@linux.alibaba.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="P6FL2Y36uCwjeFBydgkFMsiTtKLR3rpRp"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--P6FL2Y36uCwjeFBydgkFMsiTtKLR3rpRp
Content-Type: multipart/mixed; boundary="BeuoqNxIjP0OuLOIDiBTLrrgbZeZwwxG8";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jiapeng Chong <jiapeng.chong@linux.alibaba.com>,
 boris.ostrovsky@oracle.com
Cc: sstabellini@kernel.org, davem@davemloft.net, kuba@kernel.org,
 ast@kernel.org, daniel@iogearbox.net, hawk@kernel.org,
 john.fastabend@gmail.com, andrii@kernel.org, kafai@fb.com,
 songliubraving@fb.com, yhs@fb.com, kpsingh@kernel.org,
 xen-devel@lists.xenproject.org, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org, bpf@vger.kernel.org
Message-ID: <9598a7b4-a0f1-3f64-b239-50612cc5caae@suse.com>
Subject: Re: [PATCH] drivers: net: xen-netfront: Simplify the calculation of
 variables
References: <1612261069-13315-1-git-send-email-jiapeng.chong@linux.alibaba.com>
In-Reply-To: <1612261069-13315-1-git-send-email-jiapeng.chong@linux.alibaba.com>

--BeuoqNxIjP0OuLOIDiBTLrrgbZeZwwxG8
Content-Type: multipart/mixed;
 boundary="------------5BC026031DA004FDEC7F39E2"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------5BC026031DA004FDEC7F39E2
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 02.02.21 11:17, Jiapeng Chong wrote:
> Fix the following coccicheck warning:
>
> ./drivers/net/xen-netfront.c:1816:52-54: WARNING !A || A && B is
> equivalent to !A || B.
>
> Reported-by: Abaci Robot <abaci@linux.alibaba.com>
> Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen

--------------5BC026031DA004FDEC7F39E2
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------5BC026031DA004FDEC7F39E2--

--BeuoqNxIjP0OuLOIDiBTLrrgbZeZwwxG8--

--P6FL2Y36uCwjeFBydgkFMsiTtKLR3rpRp
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmAb1eYFAwAAAAAACgkQsN6d1ii/Ey+G
swf7BiPLhOCdblIHtTfq8JmOqYP0/R82Hcc37LjlclTdT23Zl/WL2nbvw0cA+hQF5/6tJUytXifV
B93gA0maGsZFbhNH8fo/vTZ2nQjpplhHdRaaLux+Rzmrof/is7H1eu/m4F/D2SOG7bYiS51hdHA/
Q7ncG5p8rsjyPPLXKTksS+tNnMWMQ0k6a3DPIDzmJHt2cmxs9z5bIEDIc6l4F+AWhviYnJx+HXaW
E5f576FZhnYCdMV5oIR9E+SfaVEU7Ps+qOAXZNLq3kxzZd2BVKYSxEahmV1+VPcmTaIZMIvgNLHS
84/zTsja53kecocbbhSMnZEgAJ47Hqv4XqemBdm68g==
=5Yma
-----END PGP SIGNATURE-----

--P6FL2Y36uCwjeFBydgkFMsiTtKLR3rpRp--


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 11:10:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 11:10:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81248.149766 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7cWf-0002KK-ME; Thu, 04 Feb 2021 11:10:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81248.149766; Thu, 04 Feb 2021 11:10:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7cWf-0002KD-Iq; Thu, 04 Feb 2021 11:10:17 +0000
Received: by outflank-mailman (input) for mailman id 81248;
 Thu, 04 Feb 2021 11:10:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=hI3K=HG=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1l7cWe-0002K4-40
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 11:10:16 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cb49948b-3f56-4378-9fcc-8f504e61fcc3;
 Thu, 04 Feb 2021 11:10:15 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 75BC4B112;
 Thu,  4 Feb 2021 11:10:14 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cb49948b-3f56-4378-9fcc-8f504e61fcc3
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612437014; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=CGeSW0mPFOp4YPPV7gJG97JzJYmCP9z187rfe/wDaTw=;
	b=f7PhI5iHwvPX+79zoE8lrkDLww+wsaWCQvjBsoEISra90vslB8VbUvwwT0pQ9fsi3crBFv
	u59iTBuEw6FVKaeIgbG/ZOuSWqkEqAVzXhEwjCFmaF1adnj/QcC6hEhBIti1h9f2Arg2Um
	hhQP60OjXpy8Dn5Pjs93w0wtvTnOlds=
Subject: Re: [PATCH v2] xen: Add RING_COPY_RESPONSE()
To: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>, xen-devel@lists.xenproject.org
References: <20210128155030.1831483-1-marmarek@invisiblethingslab.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <b37ac2f6-312e-51b7-1606-8e487430d749@suse.com>
Date: Thu, 4 Feb 2021 12:10:13 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <20210128155030.1831483-1-marmarek@invisiblethingslab.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="ma5KRMgur0NhOuKDnO4gsTwuqO3vdg4JR"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--ma5KRMgur0NhOuKDnO4gsTwuqO3vdg4JR
Content-Type: multipart/mixed; boundary="uxF7IjwPVqXRIicvSAzNPX1KGSAUpVQRi";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>, xen-devel@lists.xenproject.org
Message-ID: <b37ac2f6-312e-51b7-1606-8e487430d749@suse.com>
Subject: Re: [PATCH v2] xen: Add RING_COPY_RESPONSE()
References: <20210128155030.1831483-1-marmarek@invisiblethingslab.com>
In-Reply-To: <20210128155030.1831483-1-marmarek@invisiblethingslab.com>

--uxF7IjwPVqXRIicvSAzNPX1KGSAUpVQRi
Content-Type: multipart/mixed;
 boundary="------------5C2C65F6DCA84D83F36FACA6"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------5C2C65F6DCA84D83F36FACA6
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 28.01.21 16:50, Marek Marczykowski-Górecki wrote:
> Using RING_GET_RESPONSE() on a shared ring is easy to use incorrectly
> (i.e., by not considering that the other end may alter the data in the
> shared ring while it is being inspected). Safe usage of a response
> generally requires taking a local copy.
>
> Provide a RING_COPY_RESPONSE() macro to use instead of
> RING_GET_RESPONSE() and an open-coded memcpy().  This takes care of
> ensuring that the copy is done correctly regardless of any possible
> compiler optimizations.
>
> Use a volatile source to prevent the compiler from reordering or
> omitting the copy.
>
> This generalizes the similar RING_COPY_REQUEST() macro added in 3f20b8def0.
>
> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen

--------------5C2C65F6DCA84D83F36FACA6--

--uxF7IjwPVqXRIicvSAzNPX1KGSAUpVQRi--

--ma5KRMgur0NhOuKDnO4gsTwuqO3vdg4JR
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmAb1hUFAwAAAAAACgkQsN6d1ii/Ey9A
ZQf+IM0nfKKmh9F2qcMYygssAmYidus+tlNlTQ6fQ7mnXZHqM1+F2gIfYXi1ASsP4eWIEPFUszQW
xG1FHL+5MXvSMoiAhdFoS8PZXG4U0W7kJrBL2A2CpCmYtPcq6OCZgOWUCe4ZBPpK1ydXGnlRCnxI
47eK7U6NL95xoC7+bK2v+LxzjBsE/bk90f005qAOjaiSIvwoso/mgnphZGggGQ+KFqMloxVdqY3v
55Qp56HIAkpk44sNCIQ5ufQwZ7Jmx6d+bd4qRNakI/4V1ymFywzVB7b5qv7OfCtr03XtHnCPhJ0u
z1QcuPV29KZhFtHMWYCgq4sumHwMH+LabEffkeCmRQ==
=O9C0
-----END PGP SIGNATURE-----

--ma5KRMgur0NhOuKDnO4gsTwuqO3vdg4JR--


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 11:11:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 11:11:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81250.149778 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7cXS-0002Rl-73; Thu, 04 Feb 2021 11:11:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81250.149778; Thu, 04 Feb 2021 11:11:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7cXS-0002Re-3V; Thu, 04 Feb 2021 11:11:06 +0000
Received: by outflank-mailman (input) for mailman id 81250;
 Thu, 04 Feb 2021 11:11:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+S2h=HG=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l7cXQ-0002RY-MY
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 11:11:04 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2b64a12b-aa0b-4d43-9f12-4db73e11581a;
 Thu, 04 Feb 2021 11:11:03 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id DFC04B0D2;
 Thu,  4 Feb 2021 11:11:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2b64a12b-aa0b-4d43-9f12-4db73e11581a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612437063; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=siaYoyjY1DkXijIWGSAnrTQRHCrDJe7zky9prae3INo=;
	b=C8XNsZcf9ruSCQiIsYxmnpgEFwlEgrPw/QAcYb7M1rP92kTIzrg4eDAwOV4aBtLXDiaHwX
	vjoqLOSmq8g6GayIi4Abkrh6DkTMk9/I+zbv7Ml/EEUc6BRqWcdK1ykcpfozfuXJMfXSQ6
	LzXEjsrpUZ1XxgxMdkUmNRmraRmABfI=
Subject: Re: [PATCH v9 02/11] xen/domain: Add vmtrace_size domain creation
 parameter
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 Tamas K Lengyel <tamas@tklengyel.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20210201232703.29275-1-andrew.cooper3@citrix.com>
 <20210201232703.29275-3-andrew.cooper3@citrix.com>
 <7a27c313-2c7c-8394-3749-e2f4d671fdab@suse.com>
 <c6762201-eceb-39c1-fa2a-4bf2af5e0758@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <981ef00b-9f6a-937a-003a-bb6a394076ca@suse.com>
Date: Thu, 4 Feb 2021 12:11:02 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <c6762201-eceb-39c1-fa2a-4bf2af5e0758@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 03.02.2021 17:04, Andrew Cooper wrote:
> On 02/02/2021 09:04, Jan Beulich wrote:
>> On 02.02.2021 00:26, Andrew Cooper wrote:
>>> --- a/xen/common/domain.c
>>> +++ b/xen/common/domain.c
>>> @@ -132,6 +132,56 @@ static void vcpu_info_reset(struct vcpu *v)
>>>      v->vcpu_info_mfn = INVALID_MFN;
>>>  }
>>>  
>>> +static void vmtrace_free_buffer(struct vcpu *v)
>>> +{
>>> +    const struct domain *d = v->domain;
>>> +    struct page_info *pg = v->vmtrace.pg;
>>> +    unsigned int i;
>>> +
>>> +    if ( !pg )
>>> +        return;
>>> +
>>> +    v->vmtrace.pg = NULL;
>>> +
>>> +    for ( i = 0; i < (d->vmtrace_size >> PAGE_SHIFT); i++ )
>>> +    {
>>> +        put_page_alloc_ref(&pg[i]);
>>> +        put_page_and_type(&pg[i]);
>>> +    }
>>> +}
>>> +
>>> +static int vmtrace_alloc_buffer(struct vcpu *v)
>>> +{
>>> +    struct domain *d = v->domain;
>>> +    struct page_info *pg;
>>> +    unsigned int i;
>>> +
>>> +    if ( !d->vmtrace_size )
>>> +        return 0;
>>> +
>>> +    pg = alloc_domheap_pages(d, get_order_from_bytes(d->vmtrace_size),
>>> +                             MEMF_no_refcount);
>>> +    if ( !pg )
>>> +        return -ENOMEM;
>>> +
>>> +    for ( i = 0; i < (d->vmtrace_size >> PAGE_SHIFT); i++ )
>>> +        if ( unlikely(!get_page_and_type(&pg[i], d, PGT_writable_page)) )
>>> +            goto refcnt_err;
>>> +
>>> +    /*
>>> +     * We must only let vmtrace_free_buffer() take any action in the success
>>> +     * case when we've taken all the refs it intends to drop.
>>> +     */
>>> +    v->vmtrace.pg = pg;
>>> +    return 0;
>>> +
>>> + refcnt_err:
>>> +    while ( i-- )
>>> +        put_page_and_type(&pg[i]);
>>> +
>>> +    return -ENODATA;
>> Would you mind at least logging how many pages may be leaked
>> here? I also don't understand why you don't call
>> put_page_alloc_ref() in the loop - that's fine to do prior to
>> the put_page_and_type(), and will at least limit the leak.
>> The buffer size here typically isn't insignificant, and it
>> may be helpful to not unnecessarily defer the freeing to
>> relinquish_resources() (assuming we will make that one also
>> traverse the list of "extra" pages, but I understand that's
>> not going to happen for 4.15 anymore anyway).
>>
>> Additionally, while I understand you're not in favor of the
>> comments we have on all three similar code paths, I think
>> replicating their comments here would help easily spotting
>> (by grep-ing e.g. for "fishy") all instances that will need
>> adjusting once we have figured a better overall solution.
> 
> How is:
> 
>     for ( i = 0; i < (d->vmtrace_size >> PAGE_SHIFT); i++ )
>         if ( unlikely(!get_page_and_type(&pg[i], d, PGT_writable_page)) )
>             /*
>              * The domain can't possibly know about this page yet, so
>              * failure here is a clear indication of something fishy
>              * going on.
>              */
>             goto refcnt_err;
>
>     /*
>      * We must only let vmtrace_free_buffer() take any action in the success
>      * case when we've taken all the refs it intends to drop.
>      */
>     v->vmtrace.pg = pg;
>     return 0;
>
>  refcnt_err:
>     /*
>      * We can theoretically reach this point if someone has taken 2^43 refs
>      * on the frames in the time the above loop takes to execute, or someone
>      * has made a blind decrease reservation hypercall and managed to pick
>      * the right mfn.  Free the memory we safely can, and leak the rest.
>      */
>     while ( i-- )
>     {
>         put_page_alloc_ref(&pg[i]);
>         put_page_and_type(&pg[i]);
>     }
>
>     return -ENODATA;
> 
> this?

Much better, thanks. Remains the question of logging the
suspected leak of perhaps many pages. But either way
Acked-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 11:11:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 11:11:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81251.149789 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7cXX-0002UY-Ew; Thu, 04 Feb 2021 11:11:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81251.149789; Thu, 04 Feb 2021 11:11:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7cXX-0002UR-Bp; Thu, 04 Feb 2021 11:11:11 +0000
Received: by outflank-mailman (input) for mailman id 81251;
 Thu, 04 Feb 2021 11:11:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=hI3K=HG=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1l7cXV-0002RY-LD
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 11:11:09 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d8571779-7f42-4bad-b744-62a47d90246b;
 Thu, 04 Feb 2021 11:11:04 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C0C5EAF1B;
 Thu,  4 Feb 2021 11:11:03 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d8571779-7f42-4bad-b744-62a47d90246b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612437063; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=r5Me21cTG+Fqe1+62Cd+Eia2+vJNkMmmDJuwQNWYcsg=;
	b=vRGuTfuYgBS37txXWukbPBMrfA6tnZGxcwb5XKtNgiy04rUY3b0AuYxKPow/Q0pcY6g+rz
	fLO4fYvBchoh7EoSwxdlg57hR2gdZbVabmE1lv3kiioogLHlw0L/FpgwFEY6iCXErPOq1o
	JUakPQuNtrwlX1LMOy9gibOXDTwe9WU=
Subject: Re: [PATCH] xenstored: close socket connections on error
To: Manuel Bouyer <bouyer@netbsd.org>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210203165421.1550-1-bouyer@netbsd.org>
 <20210203165421.1550-2-bouyer@netbsd.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <55faec4f-71e3-71c3-e251-74238bb74c11@suse.com>
Date: Thu, 4 Feb 2021 12:11:02 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <20210203165421.1550-2-bouyer@netbsd.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="nTf0SR5KoI2dlbMKgWrYInMqxTx3mP32R"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--nTf0SR5KoI2dlbMKgWrYInMqxTx3mP32R
Content-Type: multipart/mixed; boundary="CH2bRWiX1G9obJXplzzgtlRRNtOqsqUiO";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Manuel Bouyer <bouyer@netbsd.org>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <55faec4f-71e3-71c3-e251-74238bb74c11@suse.com>
Subject: Re: [PATCH] xenstored: close socket connections on error
References: <20210203165421.1550-1-bouyer@netbsd.org>
 <20210203165421.1550-2-bouyer@netbsd.org>
In-Reply-To: <20210203165421.1550-2-bouyer@netbsd.org>

--CH2bRWiX1G9obJXplzzgtlRRNtOqsqUiO
Content-Type: multipart/mixed;
 boundary="------------F51FF27DB63EDB4B66C05C07"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------F51FF27DB63EDB4B66C05C07
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 03.02.21 17:54, Manuel Bouyer wrote:
> On error, don't keep socket connections in ignored state but close them.
> When the remote end of a socket is closed, xenstored will flag it as an
> error and switch the connection to ignored. But on some OSes (e.g.
> NetBSD), poll(2) will return only POLLIN in this case, so sockets in
> ignored state will stay open forever in xenstored (and it will loop with
> CPU 100% busy).
>
> Signed-off-by: Manuel Bouyer <bouyer@netbsd.org>
> Fixes: d2fa370d3ef9cbe22d7256c608671cdcdf6e0083

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen

--------------F51FF27DB63EDB4B66C05C07--

--CH2bRWiX1G9obJXplzzgtlRRNtOqsqUiO--

--nTf0SR5KoI2dlbMKgWrYInMqxTx3mP32R--


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 11:16:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 11:16:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81256.149802 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7ccU-0002ki-4N; Thu, 04 Feb 2021 11:16:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81256.149802; Thu, 04 Feb 2021 11:16:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7ccU-0002kb-0r; Thu, 04 Feb 2021 11:16:18 +0000
Received: by outflank-mailman (input) for mailman id 81256;
 Thu, 04 Feb 2021 11:16:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Y0/J=HG=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1l7ccS-0002kW-Q5
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 11:16:16 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 612448c8-c444-4a79-b132-e92b621507c2;
 Thu, 04 Feb 2021 11:16:15 +0000 (UTC)
Received: from rochebonne.antioche.eu.org (rochebonne
 [IPv6:2001:41d0:fe9d:1100:221:70ff:fe0c:9885])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTP id 114BGDZx025424;
 Thu, 4 Feb 2021 12:16:14 +0100 (MET)
Received: by rochebonne.antioche.eu.org (Postfix, from userid 1210)
 id E25CB281D; Thu,  4 Feb 2021 12:16:13 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 612448c8-c444-4a79-b132-e92b621507c2
Date: Thu, 4 Feb 2021 12:16:13 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: =?iso-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
        Wei Liu <wl@xen.org>
Subject: Re: [PATCH] xenstored: close socket connections on error
Message-ID: <20210204111613.GA2316@antioche.eu.org>
References: <20210203165421.1550-1-bouyer@netbsd.org>
 <20210203165421.1550-2-bouyer@netbsd.org>
 <55faec4f-71e3-71c3-e251-74238bb74c11@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <55faec4f-71e3-71c3-e251-74238bb74c11@suse.com>
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [IPv6:2001:41d0:fe9d:1101:0:0:0:1]); Thu, 04 Feb 2021 12:16:14 +0100 (MET)

On Thu, Feb 04, 2021 at 12:11:02PM +0100, Jürgen Groß wrote:
> On 03.02.21 17:54, Manuel Bouyer wrote:
> > On error, don't keep socket connections in the ignored state but close them.
> > When the remote end of a socket is closed, xenstored will flag it as an
> > error and switch the connection to ignored. But on some OSes (e.g.
> > NetBSD), poll(2) will return only POLLIN in this case, so sockets in ignored
> > state will stay open forever in xenstored (and it will loop with CPU 100%
> > busy).
> > 
> > Signed-off-by: Manuel Bouyer <bouyer@netbsd.org>
> > Fixes: d2fa370d3ef9cbe22d7256c608671cdcdf6e0083
> 
> Reviewed-by: Juergen Gross <jgross@suse.com>

Thanks.
I still don't know if I'm supposed to send a new version of the patch with
these tags, even if the patch itself doesn't change, or if the committer
will handle them?

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 11:18:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 11:18:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81257.149813 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7ceC-0002rj-Hl; Thu, 04 Feb 2021 11:18:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81257.149813; Thu, 04 Feb 2021 11:18:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7ceC-0002rc-E6; Thu, 04 Feb 2021 11:18:04 +0000
Received: by outflank-mailman (input) for mailman id 81257;
 Thu, 04 Feb 2021 11:18:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=hI3K=HG=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1l7ceA-0002rW-Qr
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 11:18:02 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5d655b99-1a1d-40af-bdf4-e1e24b990719;
 Thu, 04 Feb 2021 11:18:01 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D1F0FAC97;
 Thu,  4 Feb 2021 11:18:00 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5d655b99-1a1d-40af-bdf4-e1e24b990719
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612437481; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=BHEqPO6ZYTSxG2igtJXOMEyvxnwNqAF1STZ2Y7mc3SA=;
	b=Na59ncoruAnvY9EMPmaL3WESjGaLDLRitwosHQAHGVv0Yu2wfabdNFnF7xGkCf8yxsPLMz
	XwSpZU5FUVA/eFkMvpJjeTuAuQGyoLYtsxJVFfb5ktTCsvbLE4gcX/kA47/HgfWAYcbP7c
	cchmSje/3oW0Z3ivXTwG7yjbE1auoto=
Subject: Re: [PATCH] xenstored: close socket connections on error
To: Manuel Bouyer <bouyer@antioche.eu.org>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>
References: <20210203165421.1550-1-bouyer@netbsd.org>
 <20210203165421.1550-2-bouyer@netbsd.org>
 <55faec4f-71e3-71c3-e251-74238bb74c11@suse.com>
 <20210204111613.GA2316@antioche.eu.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <cda25726-9942-c0ca-f60f-65681003a4fc@suse.com>
Date: Thu, 4 Feb 2021 12:18:00 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <20210204111613.GA2316@antioche.eu.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="Q5g6R2O3CUHJDyU7n1zujOA8FN9FefEfl"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--Q5g6R2O3CUHJDyU7n1zujOA8FN9FefEfl
Content-Type: multipart/mixed; boundary="ZVSgoY4FypoSX0G17qR17XHVoaAO8ydjw";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Manuel Bouyer <bouyer@antioche.eu.org>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>
Message-ID: <cda25726-9942-c0ca-f60f-65681003a4fc@suse.com>
Subject: Re: [PATCH] xenstored: close socket connections on error
References: <20210203165421.1550-1-bouyer@netbsd.org>
 <20210203165421.1550-2-bouyer@netbsd.org>
 <55faec4f-71e3-71c3-e251-74238bb74c11@suse.com>
 <20210204111613.GA2316@antioche.eu.org>
In-Reply-To: <20210204111613.GA2316@antioche.eu.org>

--ZVSgoY4FypoSX0G17qR17XHVoaAO8ydjw
Content-Type: multipart/mixed;
 boundary="------------E8E91AA4FA36ECA41F5927F3"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------E8E91AA4FA36ECA41F5927F3
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 04.02.21 12:16, Manuel Bouyer wrote:
> On Thu, Feb 04, 2021 at 12:11:02PM +0100, Jürgen Groß wrote:
>> On 03.02.21 17:54, Manuel Bouyer wrote:
>>> On error, don't keep socket connections in the ignored state but close them.
>>> When the remote end of a socket is closed, xenstored will flag it as an
>>> error and switch the connection to ignored. But on some OSes (e.g.
>>> NetBSD), poll(2) will return only POLLIN in this case, so sockets in ignored
>>> state will stay open forever in xenstored (and it will loop with CPU 100%
>>> busy).
>>>
>>> Signed-off-by: Manuel Bouyer <bouyer@netbsd.org>
>>> Fixes: d2fa370d3ef9cbe22d7256c608671cdcdf6e0083
>>
>> Reviewed-by: Juergen Gross <jgross@suse.com>
>
> Thanks.
> I still don't know if I'm supposed to send a new version of the patch with
> these tags, even if the patch itself doesn't change, or if the committer
> will handle them?
>
Will be done by the committer.


Juergen

--------------E8E91AA4FA36ECA41F5927F3--

--ZVSgoY4FypoSX0G17qR17XHVoaAO8ydjw--

--Q5g6R2O3CUHJDyU7n1zujOA8FN9FefEfl--


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 11:24:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 11:24:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81260.149826 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7ckh-0003xY-9G; Thu, 04 Feb 2021 11:24:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81260.149826; Thu, 04 Feb 2021 11:24:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7ckh-0003xR-5x; Thu, 04 Feb 2021 11:24:47 +0000
Received: by outflank-mailman (input) for mailman id 81260;
 Thu, 04 Feb 2021 11:24:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+S2h=HG=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l7ckf-0003xM-F4
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 11:24:45 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ad777e99-c630-459c-8273-31e6551257ae;
 Thu, 04 Feb 2021 11:24:44 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8D602B009;
 Thu,  4 Feb 2021 11:24:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ad777e99-c630-459c-8273-31e6551257ae
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612437883; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=5CKmHHC903Z27f4+jg9qjgTCw2STN3HBMJgqjvhfs2k=;
	b=tRq391+hjPd7WWOddETC512UnThqsgQDqfJVu9tYQ7L0O2zWTSoG0PmgvQDVI2qNODO7Pw
	SCddyEnp1Qd+uouUO8tIR+AfV8vowhimhpCYhLkFqM5pl8j8UGPbbebvsqdE6vj1bsrfNn
	6N5QUXcCTHxPxCPGpeEwccnF39PpGU8=
Subject: Re: [PATCH v2 2/2] IOREQ: refine when to send mapcache invalidation
 request
To: paul@xen.org, xen-devel@lists.xenproject.org
Cc: 'Andrew Cooper' <andrew.cooper3@citrix.com>, 'Wei Liu' <wl@xen.org>,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>,
 'Julien Grall' <julien@xen.org>,
 'Stefano Stabellini' <sstabellini@kernel.org>,
 'George Dunlap' <george.dunlap@citrix.com>
References: <0e7265fe-8d89-facb-790d-9232c742c3fa@suse.com>
 <e2682f84-b3bb-b9fb-edd8-863b9036de95@suse.com>
 <03fb01d6fad7$c39087b0$4ab19710$@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ad73c330-4cbd-0ee4-fee7-2453dab00eef@suse.com>
Date: Thu, 4 Feb 2021 12:24:43 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <03fb01d6fad7$c39087b0$4ab19710$@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 04.02.2021 10:26, Paul Durrant wrote:
> 
> 
>> -----Original Message-----
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: 02 February 2021 15:15
>> To: xen-devel@lists.xenproject.org
>> Cc: Andrew Cooper <andrew.cooper3@citrix.com>; Wei Liu <wl@xen.org>; Roger Pau Monné
>> <roger.pau@citrix.com>; Paul Durrant <paul@xen.org>; Julien Grall <julien@xen.org>; Stefano Stabellini
>> <sstabellini@kernel.org>; George Dunlap <george.dunlap@citrix.com>
>> Subject: [PATCH v2 2/2] IOREQ: refine when to send mapcache invalidation request
>>
>> XENMEM_decrease_reservation isn't the only means by which pages can get
>> removed from a guest, yet all removals ought to be signaled to qemu. Put
>> setting of the flag into the central p2m_remove_page() underlying all
>> respective hypercalls as well as a few similar places, mainly in PoD
>> code.
>>
>> Additionally there's no point sending the request for the local domain
>> when the domain acted upon is a different one. The latter domain's ioreq
>> server mapcaches need invalidating. We assume that domain to be paused
>> at the point the operation takes place, so sending the request in this
>> case happens from the hvm_do_resume() path, which as one of its first
>> steps calls handle_hvm_io_completion().
>>
>> Even without the remote operation aspect a single domain-wide flag
>> doesn't do: Guests may e.g. decrease-reservation on multiple vCPU-s in
>> parallel. Each of them needs to issue an invalidation request in due
>> course, in particular because exiting to guest context should not happen
>> before the request was actually seen by (all) the emulator(s).
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> v2: Preemption related adjustment split off. Make flag per-vCPU. More
>>     places to set the flag. Also handle acting on a remote domain.
>>     Re-base.
> 
> I'm wondering if a per-vcpu flag is overkill actually. We just need
> to make sure that we don't miss sending an invalidation where
> multiple vcpus are in play. The mapcache in the emulator is global
> so issuing an invalidate for every vcpu is going to cause an
> unnecessary storm of ioreqs, isn't it?

The only time a truly unnecessary storm would occur is when for
an already running guest mapcache invalidation gets triggered
by a remote domain. This should be a pretty rare event, so I
think the storm in this case ought to be tolerable.

> Could we get away with the per-domain atomic counter?

Possible, but quite involved afaict: The potential storm above
is the price to pay for the simplicity of the model. It is
important to note that while we don't need all of the vCPU-s
to send these ioreqs, we need all of them to wait for the
request(s) to be acked. And this waiting is what we get "for
free" using the approach here, whereas we'd need to introduce
new logic for this with an atomic counter (afaict at least).

Jan


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 11:34:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 11:34:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81263.149838 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7ctz-00054f-9m; Thu, 04 Feb 2021 11:34:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81263.149838; Thu, 04 Feb 2021 11:34:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7ctz-00054Y-6d; Thu, 04 Feb 2021 11:34:23 +0000
Received: by outflank-mailman (input) for mailman id 81263;
 Thu, 04 Feb 2021 11:34:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gM/o=HG=dingwall.me.uk=james@srs-us1.protection.inumbo.net>)
 id 1l7ctx-00054D-AC
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 11:34:21 +0000
Received: from know-smtprelay-omc-8.server.virginmedia.net (unknown
 [80.0.253.72]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 672b21cb-7e7b-4d2d-bb82-3284626e35a3;
 Thu, 04 Feb 2021 11:34:19 +0000 (UTC)
Received: from mail0.xen.dingwall.me.uk ([82.38.249.212]) by cmsmtp with ESMTPA
 id 7ctslcLipCZ837ctslSVv4; Thu, 04 Feb 2021 11:34:17 +0000
Received: from localhost (localhost [IPv6:::1])
 by mail0.xen.dingwall.me.uk (Postfix) with ESMTP id 62266308D68;
 Thu,  4 Feb 2021 11:34:15 +0000 (GMT)
Received: from mail0.xen.dingwall.me.uk ([IPv6:::1])
 by localhost (mail0.xen.dingwall.me.uk [IPv6:::1]) (amavisd-new, port 10024)
 with ESMTP id y8lExBB9cpw0; Thu,  4 Feb 2021 11:34:15 +0000 (GMT)
Received: from ghoul.dingwall.me.uk (ghoul.dingwall.me.uk [192.168.1.200])
 by dingwall.me.uk (Postfix) with ESMTP id 1E61E308D65;
 Thu,  4 Feb 2021 11:34:15 +0000 (GMT)
Received: by ghoul.dingwall.me.uk (Postfix, from userid 1000)
 id 074DE5AA; Thu,  4 Feb 2021 11:34:15 +0000 (GMT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 672b21cb-7e7b-4d2d-bb82-3284626e35a3
X-Originating-IP: [82.38.249.212]
X-Authenticated-User: james.dingwall@blueyonder.co.uk
X-Spam: 0
X-Authority: v=2.3 cv=HNHt6Llv c=1 sm=1 tr=0 a=gXUefieqlD6GaZBkXOTlrw==:117
 a=gXUefieqlD6GaZBkXOTlrw==:17 a=xqWC_Br6kY4A:10 a=kj9zAlcOel0A:10
 a=qa6Q16uM49sA:10 a=5IRWAbXhAAAA:8 a=iox4zFpeAAAA:8 a=OcvHiVVtRz9wdHm9D6gA:9
 a=CjuIK1q_8ugA:10 a=xo7gz2vLY8DhO4BdlxfM:22 a=WzC6qhA0u3u7Ye7llzcV:22
 a=pHzHmUro8NiASowvMSCR:22 a=n87TN5wuljxrRezIQYnT:22
X-Virus-Scanned: Debian amavisd-new at dingwall.me.uk
Date: Thu, 4 Feb 2021 11:34:15 +0000
From: James Dingwall <james-xen@dingwall.me.uk>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
	James Dingwall <james-xen@dingwall.me.uk>
Subject: Re: [PATCH for-4.12 and older] x86/msr: fix handling of
 MSR_IA32_PERF_{STATUS/CTL} (again)
Message-ID: <20210204113415.GA1293808@dingwall.me.uk>
References: <1f4a8233-7f7b-dbd9-26f5-69e3eb659fa2@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <1f4a8233-7f7b-dbd9-26f5-69e3eb659fa2@suse.com>
X-CMAE-Envelope: MS4wfDUH5sCO75autlK1GoiV1JoOySY9hMjLWzgvPEPwVCssC8THxMce8kEXKMIBxm60f6B3SyOuNwgIKuwLP0rX0aOCEDei8ojO64CQgi+XjLv5O2h/9hWl
 9uivZ6nriPp9Hsn7FdN/r9SiOYp73VZ/uxwU/Wb4YliijKvnxNdeovEXilA+AtJCLPvwj6HJbQFn59xXl111G+elDkM56srEKLR/b1DLtVRUFpulFLyppC9f
 odR1Jbku5j6wzIpoUqgT7D8+tjwCw2XfpnPSpgJArSv7td3Lw52hFVqhhsO5NyGoFMKRc6hhSzLyeZlGMZB8Ug==

Hi Jan,

On Thu, Feb 04, 2021 at 10:36:06AM +0100, Jan Beulich wrote:
> X86_VENDOR_* aren't bit masks in the older trees.
> 
> Reported-by: James Dingwall <james@dingwall.me.uk>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/xen/arch/x86/msr.c
> +++ b/xen/arch/x86/msr.c
> @@ -226,7 +226,8 @@ int guest_rdmsr(const struct vcpu *v, ui
>           */
>      case MSR_IA32_PERF_STATUS:
>      case MSR_IA32_PERF_CTL:
> -        if ( !(cp->x86_vendor & (X86_VENDOR_INTEL | X86_VENDOR_CENTAUR)) )
> +        if ( cp->x86_vendor != X86_VENDOR_INTEL &&
> +             cp->x86_vendor != X86_VENDOR_CENTAUR )
>              goto gp_fault;
>  
>          *val = 0;

Thanks for this patch; I've applied it and the Windows guest no longer crashes.

Regards,
James


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 11:35:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 11:35:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81265.149850 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7cuc-0005A5-Jk; Thu, 04 Feb 2021 11:35:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81265.149850; Thu, 04 Feb 2021 11:35:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7cuc-00059y-Gc; Thu, 04 Feb 2021 11:35:02 +0000
Received: by outflank-mailman (input) for mailman id 81265;
 Thu, 04 Feb 2021 11:35:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7cub-00059r-72
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 11:35:01 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7cub-0001Z1-5E
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 11:35:01 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7cub-0005Hc-3j
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 11:35:01 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l7cuX-0007pQ-VN; Thu, 04 Feb 2021 11:34:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=FPDIdnUNMiMHHy6o6gUtOrv9YXX7IS/ZYDrAN33+SgI=; b=yDiTvQ1DYA3SjMHmBH9fMVBh6d
	gSCFDUHdQu2UZ39gttrgaIKZpx6UVKPOMqW2eOmdGi4nF42o7adBiQT42e1KkAANI0FXxVI67bWFU
	n2jlYu9OcGo8LO8m963ul2TBPo7TefVzJ9bkQc1dTHSXZc3OO8HAPXb96d2W36UCfk8g=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 8bit
Message-ID: <24603.56289.723251.263900@mariner.uk.xensource.com>
Date: Thu, 4 Feb 2021 11:34:57 +0000
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: <xen-devel@lists.xenproject.org>,
    Wei Liu <wl@xen.org>,
    Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH for-4.15] autoconf: check endian.h include path
In-Reply-To: <20210204093833.91190-1-roger.pau@citrix.com>
References: <20210204093833.91190-1-roger.pau@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Roger Pau Monne writes ("[PATCH for-4.15] autoconf: check endian.h include path"):
> Introduce an autoconf macro to check for the include path of certain
> headers that can be different between OSes.
> 
> Use such macro to find the correct path for the endian.h header, and
> modify the users of endian.h to use the output of such check.
> 
> Suggested-by: Ian Jackson <iwj@xenproject.org>
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: Ian Jackson <iwj@xenproject.org>

> Please re-run autogen after applying.
> 
> The biggest risk for this would be some kind of configure or build
> failure, and we should be able to catch it in either osstest or the
> gitlab build tests.

Thanks.  I agree.

Release-Acked-by: Ian Jackson <iwj@xenproject.org>


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 11:37:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 11:37:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81266.149862 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7cwn-0005Id-1O; Thu, 04 Feb 2021 11:37:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81266.149862; Thu, 04 Feb 2021 11:37:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7cwm-0005IW-UT; Thu, 04 Feb 2021 11:37:16 +0000
Received: by outflank-mailman (input) for mailman id 81266;
 Thu, 04 Feb 2021 11:37:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7cwl-0005IR-FZ
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 11:37:15 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7cwl-0001dD-Em
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 11:37:15 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7cwl-0005QN-E8
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 11:37:15 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l7cwd-0007q4-O0; Thu, 04 Feb 2021 11:37:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=KtZuaYcnWedO8ua6vfd0PO2TnW6WxivSXjOsz5iVxjc=; b=NPUlz/1ppP+Cq9PXipw9EzBAOT
	Ph/gyeIk1JnSRphZxYQTbFxUOu2H7OpTeRFj99AW6gPojfMUbJGuaZuH1yyJpya6iVx0MBUbAM24s
	umKcwnxyUt4Ijno+WBSLZQtuPEgr6PzMlfuLFsTesO8dD4nU7iUZ/BVlqhmsMiokqYdI=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 8bit
Message-ID: <24603.56419.517594.411675@mariner.uk.xensource.com>
Date: Thu, 4 Feb 2021 11:37:07 +0000
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>,
    Wei Liu <wl@xen.org>,
    Juergen Gross <jgross@suse.com>,
    <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH for-4.15] autoconf: check endian.h include path
In-Reply-To: <YBvQ4/mFiiVJNvaA@Air-de-Roger>
References: <20210204093833.91190-1-roger.pau@citrix.com>
	<26522f21-4714-c29d-5ca4-baf012c51ac8@suse.com>
	<YBvFbbnje+Dt7CfD@Air-de-Roger>
	<0e3576d3-4565-9898-e954-4a888b21d92f@suse.com>
	<YBvKwNiIopKKZx/F@Air-de-Roger>
	<10e334fe-eb02-e771-8404-cbcda9534383@suse.com>
	<YBvQ4/mFiiVJNvaA@Air-de-Roger>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Roger Pau Monné writes ("Re: [PATCH for-4.15] autoconf: check endian.h include path"):
> On Thu, Feb 04, 2021 at 11:32:41AM +0100, Jan Beulich wrote:
> > On 04.02.2021 11:21, Roger Pau Monné wrote:
> > > I think having to replicate this logic in all places that include
> > > endian.h is cumbersome.
> > 
> > Right - I would further encapsulate this in a local header.
> 
> IMO encapsulating in configure achieves the same purpose.

I like the way Roger has done it.

> > >> And which one is to be the first one? IOW how likely is it that
> > >> on a system having both the first one is what we're after vs
> > >> the second one?
> > > 
> > > Not sure, but the same will happen with your proposal above: in your
> > > chunk sys/endian.h will be picked over endian.h.
> > 
> > Oh, sure - the two points are entirely orthogonal. And I'm
> > also not certain at all whether checking sys/ first is
> > better, equal, or worse. I simply don't know what the
> > conventions are.
> 
> I'm not sure either. For the specific case of endian.h I would
> expect only one to be present, and I think we should first check for
> top level (ie: endian.h) before checking for subfolders (ie: sys/), as
> top level should have precedence.
> 
> I really don't have a strong opinion either way, so if there's an
> argument to do it the other way around that would also be fine.

I don't think it matters much here, but in general I would say that
checking the more general location first is a good idea.  Checking the
more specific location might in some cases find us a file that's
actually an implementation detail.

Ian.


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 11:39:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 11:39:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81269.149874 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7cyW-0005eK-DF; Thu, 04 Feb 2021 11:39:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81269.149874; Thu, 04 Feb 2021 11:39:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7cyW-0005eD-A5; Thu, 04 Feb 2021 11:39:04 +0000
Received: by outflank-mailman (input) for mailman id 81269;
 Thu, 04 Feb 2021 11:39:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7cyU-0005dL-O4
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 11:39:02 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7cyU-0001eV-NH
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 11:39:02 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7cyU-0005Tg-Md
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 11:39:02 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l7cyR-0007qR-Eb; Thu, 04 Feb 2021 11:38:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=Bt5R+eZiexMfpDbKqjOZJnSlNakCvj1WeCRJGb7Yh4w=; b=2uzbtOCn7Q7mF7hQ9NVBZG6YB3
	ypB5Un8BgLBNMzNnoJacYwrc/hERhdcaUsWDiLy/wYG/ZOJ6P5c2whGZ/o7OsoWnWMkBBTPEigB85
	x5W0FOmAm3IJJiD3YpxcllLuwI74rhTcKQnJX/IMRvSfPWf/lCjuuZNovFBG+gPuj0f8=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 8bit
Message-ID: <24603.56531.218802.368677@mariner.uk.xensource.com>
Date: Thu, 4 Feb 2021 11:38:59 +0000
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: <xen-devel@lists.xenproject.org>,
    Jan Beulich <jbeulich@suse.com>,
    Andrew Cooper <andrew.cooper3@citrix.com>,
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH for-4.15] x86/efi: enable MS ABI attribute on clang
In-Reply-To: <20210203175805.86465-1-roger.pau@citrix.com>
References: <20210203175805.86465-1-roger.pau@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Roger Pau Monne writes ("[PATCH for-4.15] x86/efi: enable MS ABI attribute on clang"):
> Or else the EFI service calls will use the wrong calling convention.
> 
> The __ms_abi__ attribute is available on all supported versions of
> clang.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
> Cc: Ian Jackson <iwj@xenproject.org>
> 
> Without this a Xen built with clang won't be able to correctly use the
> EFI services, leading to weird messages from the firmware and crashes.
> The impact of this fix for GCC users is exactly 0, and will fix the
> build on clang.

Reviewed-by: Ian Jackson <iwj@xenproject.org>

> The biggest fallout from this could be using the attribute on a
> compiler that doesn't support it, which would translate into a build
> failure, but the gitlab tests have shown no issues.

Release-Acked-by: Ian Jackson <iwj@xenproject.org>

Thanks for the thorough attention to the release question in your
mails.  You're making my work very easy :-).

Ian.


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 11:42:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 11:42:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81273.149889 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7d1u-0006UI-Tu; Thu, 04 Feb 2021 11:42:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81273.149889; Thu, 04 Feb 2021 11:42:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7d1u-0006UB-QT; Thu, 04 Feb 2021 11:42:34 +0000
Received: by outflank-mailman (input) for mailman id 81273;
 Thu, 04 Feb 2021 11:42:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7d1t-0006U6-N5
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 11:42:33 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7d1t-0001jG-JN
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 11:42:33 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7d1t-0005e2-HD
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 11:42:33 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l7d1o-0007rA-RG; Thu, 04 Feb 2021 11:42:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=wTb0x+haIpQbBS2CGlCcsDtnKAQ/Shm7LIjVkD0CXjg=; b=RLA6PceSWq2HLt4WUBCV6s1Gg2
	BtOpzlc/4oI7S0idg7sRph2qefETFzOV9KelENHboqGhiZMk9CSkmNeivZBjYYDV9e2Gw+id1IGiy
	AOKopJo5gqtc9wK07x2oWzeFakM0PpKWgo+/+6VcaHROQ5b3cqNaYzMtHpPEK4xl105Y=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 8bit
Message-ID: <24603.56740.581223.845809@mariner.uk.xensource.com>
Date: Thu, 4 Feb 2021 11:42:28 +0000
To: Manuel Bouyer <bouyer@antioche.eu.org>
Cc: =?iso-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>,
    xen-devel@lists.xenproject.org,
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH] xenstored: close socket connections on error
In-Reply-To: <20210204111613.GA2316@antioche.eu.org>
References: <20210203165421.1550-1-bouyer@netbsd.org>
	<20210203165421.1550-2-bouyer@netbsd.org>
	<55faec4f-71e3-71c3-e251-74238bb74c11@suse.com>
	<20210204111613.GA2316@antioche.eu.org>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Manuel Bouyer writes ("Re: [PATCH] xenstored: close socket connections on error"):
> On Thu, Feb 04, 2021 at 12:11:02PM +0100, Jürgen Groß wrote:
> > On 03.02.21 17:54, Manuel Bouyer wrote:
> > > On error, don't keep socket connection in ignored state but close them.
> > > When the remote end of a socket is closed, xenstored will flag it as an
> > > error and switch the connection to ignored. But on some OSes (e.g.
> > > NetBSD), poll(2) will return only POLLIN in this case, so sockets in ignored
> > > state will stay open forever in xenstored (and it will loop with CPU 100%
> > > busy).
> > > 
> > > Signed-off-by: Manuel Bouyer <bouyer@netbsd.org>
> > > Fixes: d2fa370d3ef9cbe22d7256c608671cdcdf6e0083
> > 
> > Reviewed-by: Juergen Gross <jgross@suse.com>
> 
> thanks.
> I still don't know if I'm supposed to send a new version of the patch with
> these tags, even if the patch itself doesn't change, or if the committer
> will handle them?

The committer will handle them.

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 11:48:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 11:48:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81275.149901 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7d7P-0006g0-On; Thu, 04 Feb 2021 11:48:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81275.149901; Thu, 04 Feb 2021 11:48:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7d7P-0006fr-Fh; Thu, 04 Feb 2021 11:48:15 +0000
Received: by outflank-mailman (input) for mailman id 81275;
 Thu, 04 Feb 2021 11:48:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7d7O-0006fm-Hs
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 11:48:14 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7d7O-0001pM-DL
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 11:48:14 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7d7O-00063e-BY
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 11:48:14 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l7d7L-0007sH-3j; Thu, 04 Feb 2021 11:48:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=YvDS7ySZBJgHDZ0Hnqw5KV5m1Cx6XZwdcT2Km+rURow=; b=M3AXYtmL+w9U//xMOisx/JwrcE
	dygWQH8qDj2jcgFSVfq7CFDGWULbwJ7ATmJQ7kn5QfymqDcdyTZ5fJ/iwUWzv5/p6vy7ZU4YbU19D
	UzXLPMvGM1i8UlRLfKOiTDkMcVbxwk5p44PTHCt3saHRYnQrZsoOjihp/MOqmHgIqgB0=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24603.57082.898446.82977@mariner.uk.xensource.com>
Date: Thu, 4 Feb 2021 11:48:10 +0000
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
    Wei Liu <wl@xen.org>,
    Jan Beulich <JBeulich@suse.com>,
    Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
    "Stefano  Stabellini" <sstabellini@kernel.org>,
    Julien Grall <julien@xen.org>,
    "Volodymyr Babchuk" <Volodymyr_Babchuk@epam.com>,
    Oleksandr <olekstysh@gmail.com>
Subject: Re: [PATCH for-4.15] tools/tests: Introduce a test for acquire_resource
In-Reply-To: <20210202190937.30206-1-andrew.cooper3@citrix.com>
References: <20210202190937.30206-1-andrew.cooper3@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Andrew Cooper writes ("[PATCH for-4.15] tools/tests: Introduce a test for acquire_resource"):
> For now, simply try to map 40 frames of grant table.  This catches most of the
> basic errors with resource sizes found and fixed through the 4.15 dev window.

FTAOD

Release-Acked-by: Ian Jackson <iwj@xenproject.org>


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 11:49:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 11:49:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81277.149916 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7d8j-00071r-1i; Thu, 04 Feb 2021 11:49:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81277.149916; Thu, 04 Feb 2021 11:49:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7d8i-00071k-TH; Thu, 04 Feb 2021 11:49:36 +0000
Received: by outflank-mailman (input) for mailman id 81277;
 Thu, 04 Feb 2021 11:49:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7zif=HG=arm.com=robin.murphy@srs-us1.protection.inumbo.net>)
 id 1l7d8h-00071b-IT
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 11:49:35 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 12df5d41-f7c8-4742-9477-8bad7ed76f18;
 Thu, 04 Feb 2021 11:49:32 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id E69ABD6E;
 Thu,  4 Feb 2021 03:49:31 -0800 (PST)
Received: from [10.57.49.26] (unknown [10.57.49.26])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 6C76A3F73B;
 Thu,  4 Feb 2021 03:49:25 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 12df5d41-f7c8-4742-9477-8bad7ed76f18
Subject: Re: [PATCH RFC v1 2/6] swiotlb: convert variables to arrays
To: Christoph Hellwig <hch@lst.de>, Dongli Zhang <dongli.zhang@oracle.com>
Cc: dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
 iommu@lists.linux-foundation.org, linux-mips@vger.kernel.org,
 linux-mmc@vger.kernel.org, linux-pci@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, nouveau@lists.freedesktop.org,
 x86@kernel.org, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org, adrian.hunter@intel.com,
 akpm@linux-foundation.org, benh@kernel.crashing.org, bskeggs@redhat.com,
 bhelgaas@google.com, bp@alien8.de, boris.ostrovsky@oracle.com,
 chris@chris-wilson.co.uk, daniel@ffwll.ch, airlied@linux.ie, hpa@zytor.com,
 mingo@kernel.org, mingo@redhat.com, jani.nikula@linux.intel.com,
 joonas.lahtinen@linux.intel.com, jgross@suse.com, konrad.wilk@oracle.com,
 m.szyprowski@samsung.com, matthew.auld@intel.com, mpe@ellerman.id.au,
 rppt@kernel.org, paulus@samba.org, peterz@infradead.org,
 rodrigo.vivi@intel.com, sstabellini@kernel.org, bauerman@linux.ibm.com,
 tsbogend@alpha.franken.de, tglx@linutronix.de, ulf.hansson@linaro.org,
 joe.jin@oracle.com, thomas.lendacky@amd.com,
 Claire Chang <tientzu@chromium.org>
References: <20210203233709.19819-1-dongli.zhang@oracle.com>
 <20210203233709.19819-3-dongli.zhang@oracle.com>
 <20210204072947.GA29812@lst.de>
From: Robin Murphy <robin.murphy@arm.com>
Message-ID: <b46ddefe-d91a-fa6a-0e0d-cf1edc343c2e@arm.com>
Date: Thu, 4 Feb 2021 11:49:23 +0000
User-Agent: Mozilla/5.0 (Windows NT 10.0; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <20210204072947.GA29812@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

On 2021-02-04 07:29, Christoph Hellwig wrote:
> On Wed, Feb 03, 2021 at 03:37:05PM -0800, Dongli Zhang wrote:
>> This patch converts several swiotlb related variables to arrays, in
>> order to maintain stat/status for different swiotlb buffers. Here are
>> variables involved:
>>
>> - io_tlb_start and io_tlb_end
>> - io_tlb_nslabs and io_tlb_used
>> - io_tlb_list
>> - io_tlb_index
>> - max_segment
>> - io_tlb_orig_addr
>> - no_iotlb_memory
>>
>> There is no functional change and this is to prepare to enable 64-bit
>> swiotlb.
> 
> Claire Chang (on Cc) already posted a patch like this a month ago,
> which looks much better because it actually uses a struct instead
> of all the random variables.

Indeed, I skimmed the cover letter and immediately thought that this 
whole thing is just the restricted DMA pool concept[1] again, only from 
a slightly different angle.

Robin.

[1] 
https://lore.kernel.org/linux-iommu/20210106034124.30560-1-tientzu@chromium.org/


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 12:12:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 12:12:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81281.149927 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7dUk-0001VL-6Q; Thu, 04 Feb 2021 12:12:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81281.149927; Thu, 04 Feb 2021 12:12:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7dUk-0001VE-3W; Thu, 04 Feb 2021 12:12:22 +0000
Received: by outflank-mailman (input) for mailman id 81281;
 Thu, 04 Feb 2021 12:12:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7dUi-0001V9-H9
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 12:12:20 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7dUi-0002FQ-D5
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 12:12:20 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7dUi-0007lC-BR
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 12:12:20 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l7dUf-0007yU-56; Thu, 04 Feb 2021 12:12:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:CC:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=f+rG5AhiOwAyIRub0twvjSm4J2tj7lnP7cbhIvwtpmI=; b=Fi7A4PwySoAVQEgmY3QW+lr/52
	fjiuTOtnbh/me2rX22eeaMIfxAYg+8Ryoou95D3AMtasSH3lfJ/Tj2nbzsEjOjvc89UsvyMWyfWEi
	N0yyyth9mNwZNsI7G6oyMLCZ9PIzdZqjF5Xkfky28QCuVRbrr9qeWJNpc2o0JRrfXPs0=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24603.58528.901884.980466@mariner.uk.xensource.com>
Date: Thu, 4 Feb 2021 12:12:16 +0000
To: committers@xenproject.org,
    xen-devel@lists.xenproject.org
CC: Andrew Cooper <andrew.cooper3@citrix.com>,
    Dario Faggioli <dfaggioli@suse.com>,
    Jan Beulich <jbeulich@suse.com>,
    Julien Grall <julien@xen.org>,
    community.manager@xenproject.org
Subject: [ANNOUNCE] Xen 4.15 - call for notification/status of significant bugs
In-Reply-To: <24600.8030.769396.165224@mariner.uk.xensource.com>
References: <24600.8030.769396.165224@mariner.uk.xensource.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Although there are a few things outstanding, we are now firmly into
the bugfixing phase of the Xen 4.15 release.

I searched my email (and my memory) and found four open blockers which
I have listed below, and one closed blocker.

I feel there are probably more issues out there, so please let me
know, in response to this mail, of any other significant bugs you are
aware of.

Ian.


OPEN ISSUES
-----------

A. HPET/PIT issue on newer Intel systems

Information from
  Andrew Cooper <andrew.cooper3@citrix.com>

| This has had literally tens of reports across the devel and users
| mailing lists, and prevents Xen from booting at all on the past two
| generations of Intel laptop.  I've finally got a repro and posted a
| fix to the list, but still in progress.

I think Andrew is still on the case here.


B. "scheduler broken" bugs.

Information from
  Andrew Cooper <andrew.cooper3@citrix.com>
  Dario Faggioli <dfaggioli@suse.com>

Quoting Andrew Cooper
| We've had 4 or 5 reports of Xen not working, and very little
| investigation on what's going on.  Suspicion is that there might be
| two bugs, one with smt=0 on recent AMD hardware, and one more
| general "some workloads cause negative credit" and might or might
| not be specific to credit2 (debugging feedback differs - also might
| be 3 underlying issues).

I reviewed a thread about this and it is not clear to me where things
currently stand.


C. Fallout from MSR handling behavioral change.

Information from
  Jan Beulich <jbeulich@suse.com>

I am lacking an extended description of this.  What are the bug(s),
and what is the situation?


D. Use-after-free in the IOMMU code

Information from
  Julien Grall <julien@xen.org>
References
 [PATCH for-4.15 0/4] xen/iommu: Collection of bug fixes for IOMMU teadorwn
 <20201222154338.9459-1-julien@xen.org>

Quoting the 0/:
| This series is a collection of bug fixes for the IOMMU teardown code.
| All of them are candidate for 4.15 as they can either leak memory or
| lead to host crash/host corruption.

AFAICT these patches are not yet in-tree.



CLOSED ISSUES
-------------

E. zstd support

Information from
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  git

Needed to unbreak Fedora.  Needs support for both dom0 and domU.

AFAICT this seems to be in-tree as of 8169f82049ef
"libxenguest: support zstd compressed kernels"


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 12:21:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 12:21:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81283.149940 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7ddA-0002dL-3F; Thu, 04 Feb 2021 12:21:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81283.149940; Thu, 04 Feb 2021 12:21:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7dd9-0002dE-Vp; Thu, 04 Feb 2021 12:21:03 +0000
Received: by outflank-mailman (input) for mailman id 81283;
 Thu, 04 Feb 2021 12:21:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KAxh=HG=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l7dd9-0002d9-3u
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 12:21:03 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 42044905-13c1-4a8a-8087-827065b29f9a;
 Thu, 04 Feb 2021 12:21:00 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 42044905-13c1-4a8a-8087-827065b29f9a
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612441260;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=GM9koKS90hfnWlXMk6WKDMi4j3RYWLdMRRr07AKz3TI=;
  b=h28Vpzahe2Cq8TLGdz21fMjvP5iDeI/Bnos3tw0U5axmDmqSl3b227h9
   e3181U7alhStW5Qfdg/zg/GNSlsBpjJKHtlZRdzk0+lmpgM7C9MUxPxAc
   u9sJop5Ff/JxdUEgVXS2iLvqyqHYUp878anZe2O9gE8+D0cAww233iAp/
   g=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 36743834
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,400,1602561600"; 
   d="scan'208";a="36743834"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fH5WIVn0WSzB40kAN6ohqu/cRsmMYoP3udyZStkguKg=;
 b=dScJqgR45II8NkEtD8cCIxYD4GOXaOQOm/0ZSCEbkL303NYTzi7RycyFklhipbD2kStenAnaP/CDB7v81XPuzL/UP/JDUSfusdldgLdshe41UzK6aNrsL+9u+Pw7OaPeEi6ZuoSsJ8BO4TiITxjvZETD1jkKOFD1MX8PYOO+GlQ=
Subject: Re: [ANNOUNCE] Xen 4.15 - call for notification/status of significant
 bugs
To: Ian Jackson <iwj@xenproject.org>, <committers@xenproject.org>,
	<xen-devel@lists.xenproject.org>
CC: Dario Faggioli <dfaggioli@suse.com>, Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>, <community.manager@xenproject.org>
References: <24600.8030.769396.165224@mariner.uk.xensource.com>
 <24603.58528.901884.980466@mariner.uk.xensource.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <da8b6f0b-185e-91f4-d245-22d8af50c194@citrix.com>
Date: Thu, 4 Feb 2021 12:20:48 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
In-Reply-To: <24603.58528.901884.980466@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0430.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:18b::21) To BYAPR03MB4728.namprd03.prod.outlook.com
 (2603:10b6:a03:13a::24)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 27098629-3422-4f20-02ae-08d8c90751e4
X-MS-TrafficTypeDiagnostic: BY5PR03MB5127:
X-Microsoft-Antispam-PRVS: <BY5PR03MB51278B4551F9AD76FC83C7F8BAB39@BY5PR03MB5127.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:3383;
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-CrossTenant-Network-Message-Id: 27098629-3422-4f20-02ae-08d8c90751e4
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB4728.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Feb 2021 12:20:55.0375
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Zwqls9bqC0nsprnQGpv+bSTtddqp8RGRtH6oLfHAl9zunvZrkOzmW43Ta9um36mMCip65fWgjeWOM8bM3rKYnORhGwnYlCFBrurodPb+ym4=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR03MB5127
X-OriginatorOrg: citrix.com

On 04/02/2021 12:12, Ian Jackson wrote:
> OPEN ISSUES
> -----------
>
> A. HPET/PIT issue on newer Intel systems
>
> Information from
>   Andrew Cooper <andrew.cooper3@citrix.com>
>
> | This has had literally tens of reports across the devel and users
> | mailing lists, and prevents Xen from booting at all on the past two
> | generations of Intel laptop.  I've finally got a repro and posted a
> | fix to the list, but still in progress.
>
> I think Andrew is still on the case here.

Fixed.  c/s e1de4c196a from a week ago.

> C. Fallout from MSR handling behavioral change.
>
> Information from
>   Jan Beulich <jbeulich@suse.com>
>
> I am lacking an extended description of this.  What are the bug(s),
> and what is the situation?

Still WIP and on my TODO list.  In addition to Jan's report, there is a
separate report from Boris against Solaris.  Also I need to revert a
patch of mine from early in the release and do the same thing differently.

Bugs are "VMs which boot on earlier releases don't boot on 4.15 at the
moment".

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 12:30:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 12:30:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81285.149951 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7dlu-0003cY-VA; Thu, 04 Feb 2021 12:30:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81285.149951; Thu, 04 Feb 2021 12:30:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7dlu-0003cR-S0; Thu, 04 Feb 2021 12:30:06 +0000
Received: by outflank-mailman (input) for mailman id 81285;
 Thu, 04 Feb 2021 12:30:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=hI3K=HG=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1l7dlu-0003YW-5G
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 12:30:06 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0e4c040a-e7c6-427e-9522-def6863de254;
 Thu, 04 Feb 2021 12:30:03 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A5506B144;
 Thu,  4 Feb 2021 12:30:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0e4c040a-e7c6-427e-9522-def6863de254
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612441802; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=NOVmNb1m8YT2SEVUR+FyvhLJaZ5NGULer97M0d5Mhi0=;
	b=YQoiMsFs6mUyYJd/8yCFQZ421Uf5eqYHa5RG9mU5J2OTGneT92qhUnpMAZypElAk0btb/1
	HUQ+cDCVbyxd50+s4wOu667I7ALwrR8yCQ/MwT9CJEQPQjRcX02CaezDOv8r+TaAnn+xub
	OLn/IPWs4H/BuLsDMtoD1rHJ04wG5Z8=
Subject: Re: xen/evtchn: Interrupt for port 34, but apparently not enabled;
 per-user 00000000a86a4c1b on 5.10
To: Julien Grall <julien@xen.org>, aams@amazon.de
Cc: linux-kernel@vger.kernel.org,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 foersleo@amazon.de
References: <ce881240-284f-8470-10f1-5cce353ee903@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <ceccea0c-16fd-c31d-1b12-7712b060ad68@suse.com>
Date: Thu, 4 Feb 2021 13:30:01 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <ce881240-284f-8470-10f1-5cce353ee903@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="JwWz0TkRBLH0ZtpoSfSFqk9Imo8vGIpHf"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--JwWz0TkRBLH0ZtpoSfSFqk9Imo8vGIpHf
Content-Type: multipart/mixed; boundary="odGKUPnVeOGEQ6SXWq4wlAU7bVB3VghM0";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Julien Grall <julien@xen.org>, aams@amazon.de
Cc: linux-kernel@vger.kernel.org,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 foersleo@amazon.de
Message-ID: <ceccea0c-16fd-c31d-1b12-7712b060ad68@suse.com>
Subject: Re: xen/evtchn: Interrupt for port 34, but apparently not enabled;
 per-user 00000000a86a4c1b on 5.10
References: <ce881240-284f-8470-10f1-5cce353ee903@xen.org>
In-Reply-To: <ce881240-284f-8470-10f1-5cce353ee903@xen.org>

--odGKUPnVeOGEQ6SXWq4wlAU7bVB3VghM0
Content-Type: multipart/mixed;
 boundary="------------56A0E22C2609E77E4FB7CD27"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------56A0E22C2609E77E4FB7CD27
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 14.12.20 22:25, Julien Grall wrote:
> Hi Juergen,
> 
> When testing Linux 5.10 dom0, I could reliably hit the following
> warning when using the event 2L ABI:
> 
> [  589.591737] Interrupt for port 34, but apparently not enabled; per-user 00000000a86a4c1b
> [  589.593259] WARNING: CPU: 0 PID: 1111 at /home/ANT.AMAZON.COM/jgrall/works/oss/linux/drivers/xen/evtchn.c:170 evtchn_interrupt+0xeb/0x100
> [  589.595514] Modules linked in:
> [  589.596145] CPU: 0 PID: 1111 Comm: qemu-system-i38 Tainted: G        W         5.10.0+ #180
> [  589.597708] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.12.0-59-gc9ba5276e321-prebuilt.qemu.org 04/01/2014
> [  589.599782] RIP: e030:evtchn_interrupt+0xeb/0x100
> [  589.600698] Code: 48 8d bb d8 01 00 00 ba 01 00 00 00 be 1d 00 00 00 e8 d9 10 ca ff eb b2 8b 75 20 48 89 da 48 c7 c7 a8 31 3d 82 e8 65 29 a0 ff <0f> 0b e9 42 ff ff ff 0f 1f 40 00 66 2e 0f 1f 84 00 00 00 00 00 0f
> [  589.604087] RSP: e02b:ffffc90040003e70 EFLAGS: 00010086
> [  589.605102] RAX: 0000000000000000 RBX: ffff888102091800 RCX: 0000000000000027
> [  589.606445] RDX: 0000000000000000 RSI: ffff88817fe19150 RDI: ffff88817fe19158
> [  589.607790] RBP: ffff88810f5ab980 R08: 0000000000000001 R09: 0000000000328980
> [  589.609134] R10: 0000000000000000 R11: ffffc90040003c70 R12: ffff888107fd3c00
> [  589.610484] R13: ffffc90040003ed4 R14: 0000000000000000 R15: ffff88810f5ffd80
> [  589.611828] FS:  00007f960c4b8ac0(0000) GS:ffff88817fe00000(0000) knlGS:0000000000000000
> [  589.613348] CS:  10000e030 DS: 0000 ES: 0000 CR0: 0000000080050033
> [  589.614525] CR2: 00007f17ee72e000 CR3: 000000010f5b6000 CR4: 0000000000050660
> [  589.615874] Call Trace:
> [  589.616402]  <IRQ>
> [  589.616855]  __handle_irq_event_percpu+0x4e/0x2c0
> [  589.617784]  handle_irq_event_percpu+0x30/0x80
> [  589.618660]  handle_irq_event+0x3a/0x60
> [  589.619428]  handle_edge_irq+0x9b/0x1f0
> [  589.620209]  generic_handle_irq+0x4f/0x60
> [  589.621008]  evtchn_2l_handle_events+0x160/0x280
> [  589.621913]  __xen_evtchn_do_upcall+0x66/0xb0
> [  589.622767]  __xen_pv_evtchn_do_upcall+0x11/0x20
> [  589.623665]  asm_call_irq_on_stack+0x12/0x20
> [  589.624511]  </IRQ>
> [  589.624978]  xen_pv_evtchn_do_upcall+0x77/0xf0
> [  589.625848]  exc_xen_hypervisor_callback+0x8/0x10
> 
> This can be reproduced when creating/destroying guests in a loop,
> although I have struggled to reproduce it on a vanilla Xen.
> 
> After several hours of debugging, I think I have found the root cause.
> 
> While we only expect the unmask to happen when the event channel is
> EOIed, there is an unmask happening as part of handle_edge_irq()
> because the interrupt was seen as pending by another vCPU
> (IRQS_PENDING is set).
> 
> It turns out that the event channel is set for multiple vCPUs in
> cpu_evtchn_mask. This is happening because the affinity is not
> cleared when freeing an event channel.
> 
> The implementation of evtchn_2l_handle_events() will look for all the
> active interrupts for the current vCPU and later on clear the pending
> bit (via the ack() callback). IOW, I believe, this is not an atomic
> operation.
> 
> Even if Xen will notify the event to a single vCPU, evtchn_pending_sel
> may still be set on the other vCPU (thanks to a different event
> channel). Therefore, there is a chance that two vCPUs will try to
> handle the same interrupt.
> 
> The IRQ handler handle_edge_irq() is able to deal with that and will
> mask/unmask the interrupt. This will mess up the lateeoi logic
> (although I managed to reproduce it once without XSA-332).
> 
> My initial idea to fix the problem was to switch the affinity from
> CPU X to CPU0 when the event channel is freed.
> 
> However, I am not sure this is enough because I haven't found
> anything yet preventing a race between evtchn_2l_handle_events() and
> evtchn_2l_bind_vcpu().
> 
> So maybe we want to introduce refcounting (if there is nothing
> provided by the IRQ framework) and only unmask when the counter
> drops to 0.
> 
> Any opinions?

With the two attached patches testing on my side survived more than 2
hours of constant guest reboots and destroy/create loops. Without the
patches the WARN()s came up after less than one minute.

Can you please give it a try?


Juergen


--------------56A0E22C2609E77E4FB7CD27
Content-Type: text/x-patch; charset=UTF-8;
 name="0001-xen-events-reset-affinity-of-2-level-event-initially.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename*0="0001-xen-events-reset-affinity-of-2-level-event-initially.pa";
 filename*1="tch"

From 908940c92fb916146a7ce24bc41a18125967c54a Mon Sep 17 00:00:00 2001
From: Juergen Gross <jgross@suse.com>
Date: Wed, 3 Feb 2021 16:24:42 +0100
Subject: [PATCH 1/2] xen/events: reset affinity of 2-level event initially

When creating a new event channel with 2-level events the affinity
needs to be reset initially in order to avoid using an old affinity
from earlier usage of the event channel port.

The same applies to the affinity when onlining a vcpu: all old
affinity settings for this vcpu must be reset. As percpu events get
initialized before the percpu event channel hook is called,
resetting of the affinities happens after offlining a vcpu (this is
working, as initial percpu memory is zeroed out).

Cc: stable@vger.kernel.org
Reported-by: Julien Grall <julien@xen.org>
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 drivers/xen/events/events_2l.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/drivers/xen/events/events_2l.c b/drivers/xen/events/events_2l.c
index da87f3a1e351..23217940144a 100644
--- a/drivers/xen/events/events_2l.c
+++ b/drivers/xen/events/events_2l.c
@@ -47,6 +47,16 @@ static unsigned evtchn_2l_max_channels(void)
 	return EVTCHN_2L_NR_CHANNELS;
 }
 
+static int evtchn_2l_setup(evtchn_port_t evtchn)
+{
+	unsigned int cpu;
+
+	for_each_online_cpu(cpu)
+		clear_bit(evtchn, BM(per_cpu(cpu_evtchn_mask, cpu)));
+
+	return 0;
+}
+
 static void evtchn_2l_bind_to_cpu(evtchn_port_t evtchn, unsigned int cpu,
 				  unsigned int old_cpu)
 {
@@ -355,9 +365,18 @@ static void evtchn_2l_resume(void)
 				EVTCHN_2L_NR_CHANNELS/BITS_PER_EVTCHN_WORD);
 }
 
+static int evtchn_2l_percpu_deinit(unsigned int cpu)
+{
+	memset(per_cpu(cpu_evtchn_mask, cpu), 0, sizeof(xen_ulong_t) *
+			EVTCHN_2L_NR_CHANNELS/BITS_PER_EVTCHN_WORD);
+
+	return 0;
+}
+
 static const struct evtchn_ops evtchn_ops_2l = {
 	.max_channels      = evtchn_2l_max_channels,
 	.nr_channels       = evtchn_2l_max_channels,
+	.setup             = evtchn_2l_setup,
 	.bind_to_cpu       = evtchn_2l_bind_to_cpu,
 	.clear_pending     = evtchn_2l_clear_pending,
 	.set_pending       = evtchn_2l_set_pending,
@@ -367,6 +386,7 @@ static const struct evtchn_ops evtchn_ops_2l = {
 	.unmask            = evtchn_2l_unmask,
 	.handle_events     = evtchn_2l_handle_events,
 	.resume	           = evtchn_2l_resume,
+	.percpu_deinit     = evtchn_2l_percpu_deinit,
 };
 
 void __init xen_evtchn_2l_init(void)
-- 
2.26.2
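The setup hook in the patch above is what stops a re-used port from staying latched to CPUs it was bound to in an earlier life. A toy model of that failure mode, with illustrative names only (this is not the actual drivers/xen/events code):

```c
/* Toy model of the stale-affinity bug addressed by patch 1/2: if a freed
 * event channel leaves its bit set in some CPU's cpu_evtchn_mask, a later
 * user of the same port can end up "owned" by two CPUs at once, letting
 * two vCPUs race to handle the same interrupt.  All names are
 * illustrative; this is not the actual drivers/xen/events code. */
#include <assert.h>
#include <stdbool.h>

#define NR_CPUS   2
#define NR_PORTS 64

static bool cpu_evtchn_mask[NR_CPUS][NR_PORTS];

/* Binding clears the bit only on the CPU the caller thinks was the old
 * owner; nothing clears it when the port is freed. */
static void bind_to_cpu(unsigned int port, unsigned int cpu,
			unsigned int old_cpu)
{
	cpu_evtchn_mask[old_cpu][port] = false;
	cpu_evtchn_mask[cpu][port] = true;
}

/* The fix: on (re)setup of a port, drop its bit on every CPU. */
static void setup_port(unsigned int port)
{
	for (unsigned int cpu = 0; cpu < NR_CPUS; cpu++)
		cpu_evtchn_mask[cpu][port] = false;
}

/* How many CPUs currently consider the port theirs? */
static unsigned int owners(unsigned int port)
{
	unsigned int n = 0;

	for (unsigned int cpu = 0; cpu < NR_CPUS; cpu++)
		n += cpu_evtchn_mask[cpu][port];
	return n;
}
```

Replaying the scenario from the report (bind to CPU1, free without clearing, re-bind to CPU0) yields two "owners" without the setup step and exactly one with it.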


--------------56A0E22C2609E77E4FB7CD27
Content-Type: text/x-patch; charset=UTF-8;
 name="0002-xen-events-don-t-unmask-an-event-channel-when-an-eoi.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename*0="0002-xen-events-don-t-unmask-an-event-channel-when-an-eoi.pa";
 filename*1="tch"

From 1f59b7827f734f2d86cff149ea8d297944e136d1 Mon Sep 17 00:00:00 2001
From: Juergen Gross <jgross@suse.com>
Date: Tue, 15 Dec 2020 10:37:11 +0100
Subject: [PATCH 2/2] xen/events: don't unmask an event channel when an eoi
 is pending

An event channel should be kept masked when an eoi is pending for it.
When being migrated to another cpu it might be unmasked, though.

In order to avoid this keep two different flags for each event channel
to be able to distinguish "normal" masking/unmasking from eoi related
masking/unmasking. The event channel should only be able to generate
an interrupt if both flags are cleared.

Cc: stable@vger.kernel.org
Fixes: 54c9de89895e0a36047 ("xen/events: add a new late EOI evtchn framework")
Reported-by: Julien Grall <julien@xen.org>
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 drivers/xen/events/events_base.c | 63 +++++++++++++++++++++++++++-----
 1 file changed, 53 insertions(+), 10 deletions(-)

diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index e850f79351cb..6a836d131e73 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -97,7 +97,9 @@ struct irq_info {
 	short refcnt;
 	u8 spurious_cnt;
 	u8 is_accounted;
-	enum xen_irq_type type; /* type */
+	short type;		/* type: IRQT_* */
+	bool masked;		/* Is event explicitly masked? */
+	bool eoi_pending;	/* Is EOI pending? */
 	unsigned irq;
 	evtchn_port_t evtchn;   /* event channel */
 	unsigned short cpu;     /* cpu bound */
@@ -302,6 +304,8 @@ static int xen_irq_info_common_setup(struct irq_info *info,
 	info->irq = irq;
 	info->evtchn = evtchn;
 	info->cpu = cpu;
+	info->masked = true;
+	info->eoi_pending = false;
 
 	ret = set_evtchn_to_irq(evtchn, irq);
 	if (ret < 0)
@@ -585,7 +589,10 @@ static void xen_irq_lateeoi_locked(struct irq_info *info, bool spurious)
 	}
 
 	info->eoi_time = 0;
-	unmask_evtchn(evtchn);
+	info->eoi_pending = false;
+
+	if (!info->masked)
+		unmask_evtchn(evtchn);
 }
 
 static void xen_irq_lateeoi_worker(struct work_struct *work)
@@ -830,7 +837,11 @@ static unsigned int __startup_pirq(unsigned int irq)
 		goto err;
 
 out:
-	unmask_evtchn(evtchn);
+	info->masked = false;
+
+	if (!info->eoi_pending)
+		unmask_evtchn(evtchn);
+
 	eoi_pirq(irq_get_irq_data(irq));
 
 	return 0;
@@ -857,6 +868,7 @@ static void shutdown_pirq(struct irq_data *data)
 	if (!VALID_EVTCHN(evtchn))
 		return;
 
+	info->masked = true;
 	mask_evtchn(evtchn);
 	xen_evtchn_close(evtchn);
 	xen_irq_info_cleanup(info);
@@ -1768,18 +1780,26 @@ static int set_affinity_irq(struct irq_data *data, const struct cpumask *dest,
 
 static void enable_dynirq(struct irq_data *data)
 {
-	evtchn_port_t evtchn = evtchn_from_irq(data->irq);
+	struct irq_info *info = info_for_irq(data->irq);
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
 
-	if (VALID_EVTCHN(evtchn))
-		unmask_evtchn(evtchn);
+	if (VALID_EVTCHN(evtchn)) {
+		info->masked = false;
+
+		if (!info->eoi_pending)
+			unmask_evtchn(evtchn);
+	}
 }
 
 static void disable_dynirq(struct irq_data *data)
 {
-	evtchn_port_t evtchn = evtchn_from_irq(data->irq);
+	struct irq_info *info = info_for_irq(data->irq);
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
 
-	if (VALID_EVTCHN(evtchn))
+	if (VALID_EVTCHN(evtchn)) {
+		info->masked = true;
 		mask_evtchn(evtchn);
+	}
 }
 
 static void ack_dynirq(struct irq_data *data)
@@ -1798,6 +1818,29 @@ static void mask_ack_dynirq(struct irq_data *data)
 	ack_dynirq(data);
 }
 
+static void lateeoi_ack_dynirq(struct irq_data *data)
+{
+	struct irq_info *info = info_for_irq(data->irq);
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
+
+	if (VALID_EVTCHN(evtchn)) {
+		info->eoi_pending = true;
+		mask_evtchn(evtchn);
+	}
+}
+
+static void lateeoi_mask_ack_dynirq(struct irq_data *data)
+{
+	struct irq_info *info = info_for_irq(data->irq);
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
+
+	if (VALID_EVTCHN(evtchn)) {
+		info->masked = true;
+		info->eoi_pending = true;
+		mask_evtchn(evtchn);
+	}
+}
+
 static int retrigger_dynirq(struct irq_data *data)
 {
 	evtchn_port_t evtchn = evtchn_from_irq(data->irq);
@@ -2023,8 +2066,8 @@ static struct irq_chip xen_lateeoi_chip __read_mostly = {
 	.irq_mask		= disable_dynirq,
 	.irq_unmask		= enable_dynirq,
 
-	.irq_ack		= lateeoi_ack_dynirq,
-	.irq_mask_ack		= lateeoi_mask_ack_dynirq,
+	.irq_ack		= lateeoi_ack_dynirq,
+	.irq_mask_ack		= lateeoi_mask_ack_dynirq,
 
 	.irq_set_affinity	= set_affinity_irq,
 	.irq_retrigger		= retrigger_dynirq,
-- 
2.26.2
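The invariant this patch establishes (the channel may only really be unmasked when both the explicit mask and the pending-EOI flag are clear) can be modelled in a few lines. The helper names below are illustrative, not the real kernel API:

```c
/* Sketch of the two-flag rule from patch 2/2: the hypervisor-visible mask
 * is recomputed from an explicit-mask flag and a pending-EOI flag, so an
 * unmask arriving while an EOI is pending (e.g. caused by migration to
 * another cpu) cannot prematurely re-enable the channel.  Helper names
 * are illustrative, not the real kernel API. */
#include <assert.h>
#include <stdbool.h>

struct chan {
	bool masked;      /* explicitly masked (irq_mask/irq_unmask) */
	bool eoi_pending; /* acked via lateeoi, EOI not yet delivered */
	bool hw_masked;   /* what the modelled hypervisor sees */
};

/* The channel may only generate an interrupt if both flags are clear. */
static void sync_hw(struct chan *c)
{
	c->hw_masked = c->masked || c->eoi_pending;
}

static void chan_mask(struct chan *c)   { c->masked = true;  sync_hw(c); }
static void chan_unmask(struct chan *c) { c->masked = false; sync_hw(c); }

static void lateeoi_ack(struct chan *c)  { c->eoi_pending = true;  sync_hw(c); }
static void lateeoi_done(struct chan *c) { c->eoi_pending = false; sync_hw(c); }
```

With this rule, the migration-triggered unmask that handle_edge_irq() performs leaves the channel masked until the EOI actually arrives, which is exactly what the lateeoi framework needs.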


--------------56A0E22C2609E77E4FB7CD27--

--odGKUPnVeOGEQ6SXWq4wlAU7bVB3VghM0--


--JwWz0TkRBLH0ZtpoSfSFqk9Imo8vGIpHf--


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 12:34:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 12:34:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81287.149964 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7dq3-0003v8-Hm; Thu, 04 Feb 2021 12:34:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81287.149964; Thu, 04 Feb 2021 12:34:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7dq3-0003v1-EG; Thu, 04 Feb 2021 12:34:23 +0000
Received: by outflank-mailman (input) for mailman id 81287;
 Thu, 04 Feb 2021 12:34:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7dq2-0003ud-Fa; Thu, 04 Feb 2021 12:34:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7dq2-0002bS-6d; Thu, 04 Feb 2021 12:34:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7dq1-0002MA-T5; Thu, 04 Feb 2021 12:34:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l7dq1-0001Za-Sb; Thu, 04 Feb 2021 12:34:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=mqQQI0Ltz6Jq0MDrk5R0BvHrr02gtE7nWonuaw9TyqU=; b=h/zBx8uu7wechhga+K6BqWzSdm
	zXXVSHVp7KflJdYiovcuMki+PSODSJi03j2N+0by21hkR356BhHip90ybudL06Qf3F4lwfyH9dk5m
	rc5GO5aUGDU1dyguqj/nHzlf+/d/h3RtF0c3W54UO30A+vmqcqsJX5pjh9KP2DeCFc6c=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159004-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 159004: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    libvirt:build-arm64:<job status>:broken:regression
    libvirt:build-arm64-pvops:<job status>:broken:regression
    libvirt:build-arm64-xsm:<job status>:broken:regression
    libvirt:build-arm64-pvops:host-install(4):broken:regression
    libvirt:build-arm64:host-install(4):broken:regression
    libvirt:build-arm64-xsm:host-install(4):broken:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=993351ff62f5b9c7c35c00ab83a814158bd64044
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 04 Feb 2021 12:34:21 +0000

flight 159004 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159004/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-arm64-pvops             4 host-install(4)        broken REGR. vs. 151777
 build-arm64                   4 host-install(4)        broken REGR. vs. 151777
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              993351ff62f5b9c7c35c00ab83a814158bd64044
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  209 days
Failing since        151818  2020-07-11 04:18:52 Z  208 days  203 attempts
Testing same since   159004  2021-02-04 04:18:53 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  broken  
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-arm64 broken
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-step build-arm64-pvops host-install(4)
broken-step build-arm64 host-install(4)
broken-step build-arm64-xsm host-install(4)

Not pushing.

(No revision log; it would be 39861 lines long.)
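The elided revision log can be reproduced locally from the two revisions quoted above ("version targeted for testing" and "baseline version"); the following is a sketch of the git range syntax involved, using a throwaway repository in place of a real libvirt checkout:

```shell
#!/bin/sh
# Sketch: producing a revision log between a baseline commit and the commit
# under test with git. A throwaway repo stands in for libvirt here; against
# a real checkout the same range syntax applies, e.g.
#   git log --oneline 2c846fa6bcc1..993351ff62f5
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email tester@example.com
git config user.name tester
for n in 1 2 3; do
    echo "$n" > f
    git add f
    git commit -qm "change $n"
done
baseline=$(git rev-parse HEAD~2)   # plays the role of the baseline commit
tested=$(git rev-parse HEAD)       # plays the role of the commit under test
# Commits in (baseline, tested] -- what the omitted log would have listed:
git log --oneline "$baseline..$tested" | wc -l   # prints 2
```

The `A..B` range lists exactly the commits reachable from B but not from A, which is why a 208-day-old baseline yields a log tens of thousands of lines long.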


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 12:48:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 12:48:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81292.149978 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7e3n-00054E-Ql; Thu, 04 Feb 2021 12:48:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81292.149978; Thu, 04 Feb 2021 12:48:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7e3n-000547-Nh; Thu, 04 Feb 2021 12:48:35 +0000
Received: by outflank-mailman (input) for mailman id 81292;
 Thu, 04 Feb 2021 12:48:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KAxh=HG=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l7e3m-000542-4m
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 12:48:34 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 91805399-0f13-4d06-af13-50d0689b02ba;
 Thu, 04 Feb 2021 12:48:32 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 91805399-0f13-4d06-af13-50d0689b02ba
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612442912;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=NAqvuQCHI3koPEFaCF/cVXUrqJXxNvTRFqJxhWTxAfo=;
  b=fjLwoMpVPSUqqXrIdnoJrRVbj8zMa7Aw18sR6ArEdfwuoD/S17KdsAtX
   izRcgwGCbXo2SNfTap2+XoZDxKfp7J/Vn7IeVszPzRufmqaDFPrzxGBzk
   eJOYDZ+tTZlZ0YeN0M7TlYrk0dsMvjvQ2enouht7er0YbdRG9ufumBz3X
   k=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: 8cEDVQX78ZX4IgXOmRNzEcmpx+94y28r5hcW6O/61yZcD0pfcqFyzdTWV/FU1Lp+q4gnTVZyn4
 ReEUFCMZ55kMVPA9wVQZBlZoCsiLl73jdb9nOOZnk6bCriWiyrV8mWyCbWSorH+xjv8Be0ZfmA
 u1W/C7kCuNo5yJbXplGoK/H7cTToIbPnarCbMTlHW1nnVCuxqywYKUv+eosL83CIiiPygCVhtX
 nocjRRbK9vP9djIxgRO2nv1T8pOgjG5FK6yRYFFkcNcgz4pkYKSHSJLL6Te+R5ZdUpc1j1CMYk
 pfI=
X-SBRS: 5.2
X-MesageID: 36745303
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,400,1602561600"; 
   d="scan'208";a="36745303"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=gHYMtpiKqsjFoNdGMXx1oaWIgWhvza6xVEkFx1vwDCNdgWK5ReS4YuveErIbid4sULmhiaxWJXOKwWbzJWQ8+0ccFriTBT7uTJfJAkent5EA67oUTIH7KpQR3vo6rz/k8rZCnQ2IuBYHbe+vsCnNQba9fRXfCte920pySY4AXg1wpId9Mu3h9XwQAwBKjoHOaNsX2PwYhZCXjV6F2SdiN9tOrFEcDch7nKN0Anw1XNR1WGQsyWdkL/R5KC5PqfgMRJ4fs/6/9JnRUE5FOzYcGxz0Du6efMVpuk3PlHKlBh7tl/IaREU8ufo2Oyqnm9Xpy3mmzdZQ5SXaD87e0ln8mg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=NAqvuQCHI3koPEFaCF/cVXUrqJXxNvTRFqJxhWTxAfo=;
 b=Cii57b+X2p0vzVFy7m1IauW1V/cmumI5jg44JyoSGl6an/A10XcHZfNGFbnlWV3jZluhPA1jgUWO2KEJ3FsspTj/htIu2lm0hTO1cdSAj068uImLihDxV8zOt5KvbQ8zCYaJLgyEo3YVVb5FP8e2r4++XV0AFeje8RH7izuFAf5bxC1UH9IwGNqotqb3tYfto/FxIMQUDzMhCaXb7Ef/Xxm/oeqT9iXRxcviFQXmd2ZSCeHW9srPlRtfkdr2aW2E/OFfGWTSPY1Xki5hIRfrcSIuG6VBB+s542KHX9KJD1sKOJqO9zfsKOCHJg1riIRDx7om6iv/DppnvJolYDkfoQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=NAqvuQCHI3koPEFaCF/cVXUrqJXxNvTRFqJxhWTxAfo=;
 b=tmqSogtoSIInZD8cvkk6nmoln6VUIK+FKkHXSL4estjdSMkQ5Lb1xUAMk36Z1XicZEx/qqPZm/HmKVsHH3KkAw4Gq5I2cSRy2UBSkanaycSobLEhsbgrckUEu2y84oM4aQuU5NrTkYf4BCvR6K0ZXqn0/Nw+L3YsD9XAAPB6XA4=
Subject: Re: [PATCH for-4.12 and older] x86/msr: fix handling of
 MSR_IA32_PERF_{STATUS/CTL} (again)
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, James Dingwall <james-xen@dingwall.me.uk>
References: <1f4a8233-7f7b-dbd9-26f5-69e3eb659fa2@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <4064c3f8-5c8f-584b-be01-9189b6adff1d@citrix.com>
Date: Thu, 4 Feb 2021 12:48:20 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
In-Reply-To: <1f4a8233-7f7b-dbd9-26f5-69e3eb659fa2@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0220.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:b::16) To BYAPR03MB4728.namprd03.prod.outlook.com
 (2603:10b6:a03:13a::24)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 49628e91-30d2-44ce-cc3d-08d8c90b2a5d
X-MS-TrafficTypeDiagnostic: BYAPR03MB3783:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB3783C43BFBCED96D5804FEB6BAB39@BYAPR03MB3783.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:2201;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 0Hvinnxh7PFsI6kMBkOWRTM+hXJ3mGjTItjlV8k8dXVoy2ExdPt0qWt8d73ejV2qY/tuXqczA2+RIwD8cAK2LbdqoEesNtbRzh40jR2D4hyqUSDxRMApu3zM3I88dScuBjLwRXG8qeylm0or5cEROOiTLgV3JikPz5xZTrIooOeC5TgNCXW1VkYnza3nzn40sobYYCfoOpyDG4rOyKTT6oZ9JWBzKyuNzx1TUyqPHqsuNUwt6BzIWa0UUreQ2MtzQ0l8+WXkbor0adbSCABuhqVikaT+6u/mszqWY8CXJXEomuxJ7fAHEPRFuq43uFBX8nn/AF8DI+zojRRK+AlC1oviU2opYYbUIXyKVcoYEHQ6QFD/lPG4GyQMFxepkqZZTlDhlWsA8B5AxIGZdnzMoU4HwWQOUG5pdkCgjind3y+li5g1+zyAJLVSI9JiwnubZmpTsz7OnW4hCmNSihQVNqjFNoWgds4HHwCpAp5gJeDxS8i5siUEKBEQbuZsT3OOFa2vOXhT1t1Zdmpou+zrfXp99izLYa9HyshAn5hNGiL29hsj2Xsa3bAHGbURYRQncDdCncOc/2gVyySB/aFNAsh6MEA5sH+/NCRUlWNAItE=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB4728.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(376002)(39860400002)(366004)(136003)(346002)(16576012)(6666004)(956004)(8676002)(66946007)(2616005)(53546011)(316002)(478600001)(4326008)(86362001)(2906002)(31696002)(5660300002)(8936002)(66556008)(186003)(110136005)(54906003)(26005)(6486002)(16526019)(36756003)(558084003)(66476007)(31686004)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?OHVKdDlBL0RkanZWZUxSb0g5QWFsdHorV1RXbUxkb0xYc2VBVHVJMW52VEhq?=
 =?utf-8?B?cTZXazdCdnhtUW80T2Znd1M3dUllMkNmY0hsMGxwVlFrbm50L0FYWG0zSEF5?=
 =?utf-8?B?WVg3cVdFVXVVamd4VytOUTI3MTRmT3ZCbFk2K3pucnZJdi9maXhqL0I1ZUN2?=
 =?utf-8?B?K0pFaE5YcmRvRFNsSUZ1bWU1b2JJd3IwVWoySXhWUW0rMXZ4YVMxUkM4Sk0r?=
 =?utf-8?B?dno4eFdQR29YcS8rMUlJazVhWkc1NzFLQUdOTGtHMlhsWlpxUlZPdGsyOTB0?=
 =?utf-8?B?VktkMHhrd25lUXFzeHZrcVZnS1Z4NjU4bThlUkVjWDdLR2x4aHdPQ0FzTHVF?=
 =?utf-8?B?cFd2ckNmeEdJV3dlWkRuTTdTUXB0TU10OVRsWmFOS2dzc0U0eE8vc2wwVHhm?=
 =?utf-8?B?VXlLMWgxRlo1UDdxRUhUaTJ0ZUp6UHJVM2ZOT24rMG9VWldyaFlDaUVaWU5G?=
 =?utf-8?B?NWM1bjhmbldKRHdHMEsxUWE2T01IQU54SFYrTWMwRVpVbDRwUlFpcHJWVGNQ?=
 =?utf-8?B?ZFl4RnhLM1oxamZ1bFJUdkFEOUZOR2VsdlFwdTJDNUhuZHhTY0dZMk5DWDhU?=
 =?utf-8?B?dVE5TU1BY3ZDZlo3OWx4YmRiTFZudXZaUFhzTjlQa2VPLzlnNlRJSGpibUlp?=
 =?utf-8?B?NHBwcDY5WWUzKzBFYlE4bTVDV0hJODlaSGZMNVRuOWtGck5WeklrcEgrWmJD?=
 =?utf-8?B?VzRLU1ZrNkJzeHhIVXVXSXhURWpCOGJlOHFrOXF6dEFGemR0dnh2NVp2VG5U?=
 =?utf-8?B?eWFGMXpnMWthdXNYQVlNUFh3Ry91UGk5VFV3M1BsQVRqL1EzenpSTGd2QnRp?=
 =?utf-8?B?VGpLdFMxWDErYnlZVVRNSGZvd2FLUkRDMEtRVW1obWZSRU9mVkN0WDBxR2Zp?=
 =?utf-8?B?S2lCMCtTZXpWTVdDNkp6MG12dXVLMG1ERXMvUGVxQ2YrUkdDRXU2NDBXeDNQ?=
 =?utf-8?B?ZEhMMTFieTRXUldvTlI2bEhsaFhSQjc3VjVRK3J4V3NjbWZ2TGczbGMyYUo0?=
 =?utf-8?B?YzF5QU5Cd2dhVlZuRldoTXFxZGFDYjdHMmdtVU1LaXFqN2hwcS80dm83K01p?=
 =?utf-8?B?cjQySUk5cG13V2xMenR0WUFiSUdiQkY0RjZKcEtXSVNackhXVEtxWFo5R3hw?=
 =?utf-8?B?ZDZ3WGlCVy9xRU52c3VxTEFleW5zWkJrQXQ0TGx0LzBsbEhndGlDb2wwcC9k?=
 =?utf-8?B?R2ZKMU53d0N1V1NodkxqLzVLeTY5Nm1hb0o0WWRuL2YxSUpEQkNMU2pZL25t?=
 =?utf-8?B?YnJ3UVdUa3F4UWxJeFRaMHJtRnc2MWE0bVB6SnVMUitBclNTeFVvUTQ3WGYv?=
 =?utf-8?B?L3ZEQmFwYVFrOVplU0ZYa3AwSmRmMXkrWUJIRmpqbmowMzZIRFQvTTRDaXFI?=
 =?utf-8?B?UEVqakJPNzRHb0R0RURha2VpRkU0L0s3eHIyZnduNGE1TlBtL3AxUi9OV094?=
 =?utf-8?B?RDcrQWk2N29qSndBV2RkK3FJOWF1enpzN2k5WEpBPT0=?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 49628e91-30d2-44ce-cc3d-08d8c90b2a5d
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB4728.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Feb 2021 12:48:26.7817
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: hk34J9sLAWo6Upm3QC8bfFsSPX8tQTehuJr8HJrWtzauvIA962gwdEHTVpgagNqFyO16DCeMAMARoAz8Hqlayuj6PewmjYcVDmPnl88qjKk=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB3783
X-OriginatorOrg: citrix.com

On 04/02/2021 09:36, Jan Beulich wrote:
> X86_VENDOR_* aren't bit masks in the older trees.
>
> Reported-by: James Dingwall <james@dingwall.me.uk>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
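For readers without the older trees to hand, a minimal sketch of the pitfall the patch addresses. The constants below are illustrative stand-ins, not the actual X86_VENDOR_* values in any Xen tree:

```shell
#!/bin/sh
# When vendor identifiers are bit masks, membership in a set of vendors is a
# single AND; when they are sequential enumerators, the same code pattern
# silently matches the wrong vendor.
MASK_INTEL=$((1 << 0)); MASK_AMD=$((1 << 1)); MASK_HYGON=$((1 << 2))
SEQ_INTEL=1; SEQ_AMD=2; SEQ_HYGON=3

vendor_mask=$MASK_HYGON
# Mask encoding: Hygon is correctly NOT in {Intel, AMD}.
echo $(( (vendor_mask & (MASK_INTEL | MASK_AMD)) != 0 ))   # prints 0

vendor_seq=$SEQ_HYGON
# Sequential encoding: SEQ_INTEL | SEQ_AMD == 3 == SEQ_HYGON, so the
# mask-style test wrongly reports Hygon as Intel-or-AMD; older trees
# need explicit equality checks instead.
echo $(( (vendor_seq & (SEQ_INTEL | SEQ_AMD)) != 0 ))      # prints 1
```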


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 13:21:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 13:21:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81298.150002 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7eZI-0000pj-Hv; Thu, 04 Feb 2021 13:21:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81298.150002; Thu, 04 Feb 2021 13:21:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7eZI-0000pc-F0; Thu, 04 Feb 2021 13:21:08 +0000
Received: by outflank-mailman (input) for mailman id 81298;
 Thu, 04 Feb 2021 13:21:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+S2h=HG=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l7eZH-0000pX-PS
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 13:21:07 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2fe32ee2-ee77-47e9-b75b-f250e306ad46;
 Thu, 04 Feb 2021 13:21:06 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4F4E0ACB0;
 Thu,  4 Feb 2021 13:21:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2fe32ee2-ee77-47e9-b75b-f250e306ad46
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612444865; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=dJzs662lbqQRyLLNgk7X6iCFiUXewoAvT9uUnaWPOqs=;
	b=hxc/wFGbOQAQOjYIlz1B9xY2d/FXqsAS3sesgzEmeN3/lWf9TarAtn+8jTSmwkJeS9Kk5+
	ddRWaE/oSQ8JUTqUAmIk9YlHpt1k2KfQGd7v8975qT32R/K+JuOQN4ZKBL3iJcvnYNnvei
	TwTfdqv9k8ahu+nS52xveVcRt+ImdO0=
To: Binutils <binutils@sourceware.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: ld 2.36 regression linking EFI binary from ELF input with debug info
Message-ID: <79812876-b43d-7729-da34-3b4cd1c31f24@suse.com>
Date: Thu, 4 Feb 2021 14:21:05 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Hello,

the Xen project hypervisor build system can optionally build the
hypervisor binary as an EFI application (i.e. whenever the tool
chain supports this). As early as the linker-probing step, we now
get several "relocation truncated to fit: R_X86_64_32 against
`.debug_...'" errors. I have not yet had the time to figure out
what exactly broke, and I'm sending this mail in the hope that it
may ring a bell for someone.

For reference, the probing is as simple as

$(LD) -mi386pep --subsystem=10 -o efi/check.efi efi/check.o

As was to be expected, the errors disappear with -S, but that's
an option only for the probing, not for the actual linking of
the binary.
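For reference, the general shape of such a tool-chain probe can be sketched as below (variable names assumed, not taken from the actual Xen makefiles; a stand-in command replaces the real link so the sketch runs anywhere):

```shell
#!/bin/sh
# Probe pattern: attempt the PE link once, and enable the EFI target only
# if the tool chain can perform it. The real probe would be:
#   "$LD" -mi386pep --subsystem=10 -o efi/check.efi efi/check.o
probe_link() {
    # Stand-in for the link command above; 'false' simulates a tool chain
    # that cannot produce the EFI binary.
    false
}
if probe_link 2>/dev/null; then
    XEN_BUILD_EFI=y
else
    XEN_BUILD_EFI=n     # tool chain can't produce the EFI binary
fi
echo "XEN_BUILD_EFI=$XEN_BUILD_EFI"   # prints XEN_BUILD_EFI=n with the stand-in
```

The regression reported here means the probe now fails on tool chains that previously passed, silently disabling the EFI build.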

Thanks for pointers (or better yet, a fix),
Jan


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 13:30:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 13:30:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81300.150015 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7ei1-0001wb-FZ; Thu, 04 Feb 2021 13:30:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81300.150015; Thu, 04 Feb 2021 13:30:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7ei1-0001wU-Bs; Thu, 04 Feb 2021 13:30:09 +0000
Received: by outflank-mailman (input) for mailman id 81300;
 Thu, 04 Feb 2021 13:30:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gi2U=HG=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1l7ei0-0001wP-26
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 13:30:08 +0000
Received: from mail-wm1-x32f.google.com (unknown [2a00:1450:4864:20::32f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 23cb9d6f-2c9a-4a5f-8414-eae2d14b12a3;
 Thu, 04 Feb 2021 13:30:05 +0000 (UTC)
Received: by mail-wm1-x32f.google.com with SMTP id m1so3001844wml.2
 for <xen-devel@lists.xenproject.org>; Thu, 04 Feb 2021 05:30:04 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 23cb9d6f-2c9a-4a5f-8414-eae2d14b12a3
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=mmsArc9oeZABtX7/wQSk8VFRxMuFAKbD7as/N6J0WOw=;
        b=b7Y+2RA7QPCIO6c+//EbpZoeaUcdLqV3aTvIGy7z587ktTUhZ2KJSfKmKLemU974Or
         i43RXqgMMEuIv8Mv0oEstJV3wXNUu9oQNEyNr9htUcEPJzXeyqd/hauTmbuSuSfxhytA
         6idjsfzk87lS/AeqID9JA2loMePSu7r1qhVI4hjx2cw3IW27Dz6+c73JyaIHyZLe9o04
         ki870UihxVB1ZUbpNCC42u0yDPx5MK01qMQjx2XL2o74GILfDOhtexjHiktN98jxp3sP
         fSeGFfCHEchSYv3340D8GvHFXxpMJ4qCAxC4/p4MJPV0rfpYiU4K9ncsjSVUxMeAPiPL
         c8+w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=mmsArc9oeZABtX7/wQSk8VFRxMuFAKbD7as/N6J0WOw=;
        b=SkjF1H+KHURwyvLR7DBYgXi5PJAAnyJ0YRm6knBY60aJsQESgdyLhsa5vNLut0Vsda
         mjJQe+LZaIjFduLn6NpMbDjhbTuGLgbvapGLM+A9IUyPaA8yAodJTjXBX0kIm6FfltZw
         HPDX4KgCFoT7tAFUvX7z0ABc67jh75kYR4yxCBSyx/H/K21xVfkuy74OrWfHp+yIqI/i
         DpaPTmH7avU3WtPbosZtDRpFJzl3vWpm5O5SkN+dJyivPSoGZt3rvg+00MW1yTB6YCFi
         J4VZNgd+fdaVpj7jI2RQP4ql8vLYtYaknPFSmDh3V+XAu8SYdy7tutkFpb3zy/8Fvjx2
         eVzA==
X-Gm-Message-State: AOAM531gJJkRCLyyYZLHbFWfHzYbThjo8kqnq5q78C2+Ois+HhdNpE9l
	jQuJ/hdx/OWj2f/qA+hKu8XvqfisOQG5kMIKWM8=
X-Google-Smtp-Source: ABdhPJxWgXqmG4uWbX55w8z+9aw5rZpjSiF1AMmkjYR8Hz1F+tW3/7djSThqlnmj1uvkYxKgj4RH5oyCpaMUm1m9jlk=
X-Received: by 2002:a7b:cd8e:: with SMTP id y14mr7693955wmj.61.1612445404100;
 Thu, 04 Feb 2021 05:30:04 -0800 (PST)
MIME-Version: 1.0
References: <20210202190937.30206-1-andrew.cooper3@citrix.com>
In-Reply-To: <20210202190937.30206-1-andrew.cooper3@citrix.com>
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
Date: Thu, 4 Feb 2021 15:29:52 +0200
Message-ID: <CAPD2p-nPr_OD7cMT-Ny6vyGsY4nMXuENgrqv0pfYRYtE5srnkQ@mail.gmail.com>
Subject: Re: [PATCH for-4.15] tools/tests: Introduce a test for acquire_resource
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Ian Jackson <iwj@xenproject.org>, 
	Wei Liu <wl@xen.org>, Jan Beulich <JBeulich@suse.com>, 
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Content-Type: multipart/alternative; boundary="00000000000087fd7105ba82b332"

--00000000000087fd7105ba82b332
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi Andrew.
[Sorry for the possible format issues]

On Tue, Feb 2, 2021 at 9:10 PM Andrew Cooper <andrew.cooper3@citrix.com>
wrote:

> For now, simply try to map 40 frames of grant table.  This catches most of
> the basic errors with resource sizes found and fixed through the 4.15 dev
> window.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Ian Jackson <iwj@xenproject.org>
> CC: Wei Liu <wl@xen.org>
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Julien Grall <julien@xen.org>
> CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
> CC: Oleksandr <olekstysh@gmail.com>
>
> Fails against current staging:
>
>   XENMEM_acquire_resource tests
>   Test x86 PV
>     d7: grant table
>       Fail: Map 7 - Argument list too long
>   Test x86 PVH
>     d8: grant table
>       Fail: Map 7 - Argument list too long
>
> The fix has already been posted:
>   [PATCH v9 01/11] xen/memory: Fix mapping grant tables with
> XENMEM_acquire_resource
>
> and the fixed run is:
>
>   XENMEM_acquire_resource tests
>   Test x86 PV
>     d7: grant table
>   Test x86 PVH
>     d8: grant table
>
> ARM folk: would you mind testing this?  I'm pretty sure the create
> parameters
> are suitable, but I don't have any way to test this.
>
Yes, as agreed on IRC, I will test this this evening and report back with
the results.



>
> I've got more plans for this, but insufficient time right now.
> ---
>  tools/tests/Makefile                 |   1 +
>  tools/tests/resource/.gitignore      |   1 +
>  tools/tests/resource/Makefile        |  40 ++++++++++
>  tools/tests/resource/test-resource.c | 138
> +++++++++++++++++++++++++++++++++++
>  4 files changed, 180 insertions(+)
>  create mode 100644 tools/tests/resource/.gitignore
>  create mode 100644 tools/tests/resource/Makefile
>  create mode 100644 tools/tests/resource/test-resource.c
>
> diff --git a/tools/tests/Makefile b/tools/tests/Makefile
> index fc9b715951..c45b5fbc1d 100644
> --- a/tools/tests/Makefile
> +++ b/tools/tests/Makefile
> @@ -2,6 +2,7 @@ XEN_ROOT = $(CURDIR)/../..
>  include $(XEN_ROOT)/tools/Rules.mk
>
>  SUBDIRS-y :=
> +SUBDIRS-y += resource
>  SUBDIRS-$(CONFIG_X86) += cpu-policy
>  SUBDIRS-$(CONFIG_X86) += mce-test
>  ifneq ($(clang),y)
> diff --git a/tools/tests/resource/.gitignore
> b/tools/tests/resource/.gitignore
> new file mode 100644
> index 0000000000..4872e97d4b
> --- /dev/null
> +++ b/tools/tests/resource/.gitignore
> @@ -0,0 +1 @@
> +test-resource
> diff --git a/tools/tests/resource/Makefile b/tools/tests/resource/Makefile
> new file mode 100644
> index 0000000000..8a3373e786
> --- /dev/null
> +++ b/tools/tests/resource/Makefile
> @@ -0,0 +1,40 @@
> +XEN_ROOT = $(CURDIR)/../../..
> +include $(XEN_ROOT)/tools/Rules.mk
> +
> +TARGET := test-resource
> +
> +.PHONY: all
> +all: $(TARGET)
> +
> +.PHONY: run
> +run: $(TARGET)
> +       ./$(TARGET)
> +
> +.PHONY: clean
> +clean:
> +       $(RM) -f -- *.o .*.d .*.d2 $(TARGET)
> +
> +.PHONY: distclean
> +distclean: clean
> +       $(RM) -f -- *~
> +
> +.PHONY: install
> +install: all
> +
> +.PHONY: uninstall
> +uninstall:
> +
> +CFLAGS += -Werror -D__XEN_TOOLS__
> +CFLAGS += $(CFLAGS_xeninclude)
> +CFLAGS += $(CFLAGS_libxenctrl)
> +CFLAGS += $(CFLAGS_libxenforeignmemory)
> +CFLAGS += $(APPEND_CFLAGS)
> +
> +LDFLAGS += $(LDLIBS_libxenctrl)
> +LDFLAGS += $(LDLIBS_libxenforeignmemory)
> +LDFLAGS += $(APPEND_LDFLAGS)
> +
> +test-resource: test-resource.o
> +       $(CC) $(LDFLAGS) -o $@ $<
> +
> +-include $(DEPS_INCLUDE)
> diff --git a/tools/tests/resource/test-resource.c
> b/tools/tests/resource/test-resource.c
> new file mode 100644
> index 0000000000..81a2a5cd12
> --- /dev/null
> +++ b/tools/tests/resource/test-resource.c
> @@ -0,0 +1,138 @@
> +#include <err.h>
> +#include <errno.h>
> +#include <error.h>
> +#include <stdio.h>
> +#include <string.h>
> +#include <sys/mman.h>
> +
> +#include <xenctrl.h>
> +#include <xenforeignmemory.h>
> +#include <xendevicemodel.h>
> +#include <xen-tools/libs.h>
> +
> +static unsigned int nr_failures;
> +#define fail(fmt, ...)                          \
> +({                                              \
> +    nr_failures++;                              \
> +    (void)printf(fmt, ##__VA_ARGS__);           \
> +})
> +
> +static xc_interface *xch;
> +static xenforeignmemory_handle *fh;
> +static xendevicemodel_handle *dh;
> +
> +static void test_gnttab(uint32_t domid, unsigned int nr_frames)
> +{
> +    xenforeignmemory_resource_handle *res;
> +    void *addr = NULL;
> +    size_t size;
> +    int rc;
> +
> +    printf("  d%u: grant table\n", domid);
> +
> +    rc = xenforeignmemory_resource_size(
> +        fh, domid, XENMEM_resource_grant_table,
> +        XENMEM_resource_grant_table_id_shared, &size);
> +    if ( rc )
> +        return fail("    Fail: Get size: %d - %s\n", errno,
> strerror(errno));
> +
> +    if ( (size >> XC_PAGE_SHIFT) != nr_frames )
> +        return fail("    Fail: Get size: expected %u frames, got %zu\n",
> +                    nr_frames, size >> XC_PAGE_SHIFT);
> +
> +    res = xenforeignmemory_map_resource(
> +        fh, domid, XENMEM_resource_grant_table,
> +        XENMEM_resource_grant_table_id_shared, 0, size >> XC_PAGE_SHIFT,
> +        &addr, PROT_READ | PROT_WRITE, 0);
> +    if ( !res )
> +        return fail("    Fail: Map %d - %s\n", errno, strerror(errno));
> +
> +    rc = xenforeignmemory_unmap_resource(fh, res);
> +    if ( rc )
> +        return fail("    Fail: Unmap %d - %s\n", errno, strerror(errno));
> +}
> +
> +static void test_domain_configurations(void)
> +{
> +    static struct test {
> +        const char *name;
> +        struct xen_domctl_createdomain create;
> +    } tests[] = {
> +#if defined(__x86_64__) || defined(__i386__)
> +        {
> +            .name = "x86 PV",
> +            .create = {
> +                .max_vcpus = 2,
> +                .max_grant_frames = 40,
> +            },
> +        },
> +        {
> +            .name = "x86 PVH",
> +            .create = {
> +                .flags = XEN_DOMCTL_CDF_hvm,
> +                .max_vcpus = 2,
> +                .max_grant_frames = 40,
> +                .arch = {
> +                    .emulation_flags = XEN_X86_EMU_LAPIC,
> +                },
> +            },
> +        },
> +#elif defined(__aarch64__) || defined(__arm__)
> +        {
> +            .name = "ARM",
> +            .create = {
> +                .flags = XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap,
> +                .max_vcpus = 2,
> +                .max_grant_frames = 40,
> +            },
> +        },
> +#endif
> +    };
> +
> +    for ( unsigned int i = 0; i < ARRAY_SIZE(tests); ++i )
> +    {
> +        struct test *t = &tests[i];
> +        uint32_t domid = 0;
> +        int rc;
> +
> +        printf("Test %s\n", t->name);
> +
> +        rc = xc_domain_create(xch, &domid, &t->create);
> +        if ( rc )
> +        {
> +            if ( errno == EINVAL || errno == EOPNOTSUPP )
> +                printf("  Skip: %d - %s\n", errno, strerror(errno));
> +            else
> +                fail("  Domain create failure: %d - %s\n",
> +                     errno, strerror(errno));
> +            continue;
> +        }
> +
> +        test_gnttab(domid, t->create.max_grant_frames);
> +
> +        rc = xc_domain_destroy(xch, domid);
> +        if ( rc )
> +            fail("  Failed to destroy domain: %d - %s\n",
> +                 errno, strerror(errno));
> +    }
> +}
> +
> +int main(int argc, char **argv)
> +{
> +    printf("XENMEM_acquire_resource tests\n");
> +
> +    xch = xc_interface_open(NULL, NULL, 0);
> +    fh = xenforeignmemory_open(NULL, 0);
> +    dh = xendevicemodel_open(NULL, 0);
> +
> +    if ( !xch )
> +        err(1, "xc_interface_open");
> +    if ( !fh )
> +        err(1, "xenforeignmemory_open");
> +    if ( !dh )
> +        err(1, "xendevicemodel_open");
> +
> +    test_domain_configurations();
> +
> +    return !!nr_failures;
> +}
> --
> 2.11.0
>
>

-- 
Regards,

Oleksandr Tyshchenko


--00000000000087fd7105ba82b332--


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 13:34:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 13:34:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81303.150027 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7em2-00027P-4t; Thu, 04 Feb 2021 13:34:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81303.150027; Thu, 04 Feb 2021 13:34:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7em2-00027I-1f; Thu, 04 Feb 2021 13:34:18 +0000
Received: by outflank-mailman (input) for mailman id 81303;
 Thu, 04 Feb 2021 13:34:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NYWw=HG=nefkom.net=whitebox@srs-us1.protection.inumbo.net>)
 id 1l7em0-00027D-EJ
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 13:34:16 +0000
Received: from mail-out.m-online.net (unknown [212.18.0.10])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 914f9f3f-8109-49e1-9de9-5f08a272a037;
 Thu, 04 Feb 2021 13:34:14 +0000 (UTC)
Received: from frontend01.mail.m-online.net (unknown [192.168.8.182])
 by mail-out.m-online.net (Postfix) with ESMTP id 4DWffG1C5Kz1s31m;
 Thu,  4 Feb 2021 14:34:13 +0100 (CET)
Received: from localhost (dynscan1.mnet-online.de [192.168.6.70])
 by mail.m-online.net (Postfix) with ESMTP id 4DWffF5Ypmz1t6pZ;
 Thu,  4 Feb 2021 14:34:13 +0100 (CET)
Received: from mail.mnet-online.de ([192.168.8.182])
 by localhost (dynscan1.mail.m-online.net [192.168.6.70]) (amavisd-new,
 port 10024)
 with ESMTP id wR0RSaN2C-6g; Thu,  4 Feb 2021 14:34:13 +0100 (CET)
Received: from igel.home (ppp-46-244-168-92.dynamic.mnet-online.de
 [46.244.168.92])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.mnet-online.de (Postfix) with ESMTPSA;
 Thu,  4 Feb 2021 14:34:13 +0100 (CET)
Received: by igel.home (Postfix, from userid 1000)
 id 89D152C374E; Thu,  4 Feb 2021 14:34:12 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 914f9f3f-8109-49e1-9de9-5f08a272a037
X-Virus-Scanned: amavisd-new at mnet-online.de
X-Auth-Info: 5ltc2JxgnLIR97X4BLuo0FY2Dr0RCBzjAsxE4ktmpk8Thl+RVnwMr3lZT84qtEOB
From: Andreas Schwab <schwab@linux-m68k.org>
To: Jan Beulich via Binutils <binutils@sourceware.org>
Cc: Jan Beulich <jbeulich@suse.com>,  "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
Subject: Re: ld 2.36 regression linking EFI binary from ELF input with debug
 info
References: <79812876-b43d-7729-da34-3b4cd1c31f24@suse.com>
X-Yow: Sometime in 1993 NANCY SINATRA will lead a BLOODLESS COUP on GUAM!!
Date: Thu, 04 Feb 2021 14:34:12 +0100
In-Reply-To: <79812876-b43d-7729-da34-3b4cd1c31f24@suse.com> (Jan Beulich via
	Binutils's message of "Thu, 4 Feb 2021 14:21:05 +0100")
Message-ID: <875z38vtwr.fsf@igel.home>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/27.1.91 (gnu/linux)
MIME-Version: 1.0
Content-Type: text/plain

On Feb 04 2021, Jan Beulich via Binutils wrote:

> For reference, the probing is as simple as
>
> $(LD) -mi386pep --subsystem=10 -o efi/check.efi efi/check.o
>
> As was to be expected, the errors disappear with -S, but that's
> an option only for the probing, not for the actual linking of
> the binary.
>
> Thanks for pointers (or better yet, a fix),

Does it work to link to ELF and use objcopy to convert to PE?
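An untested sketch of what I mean, assuming a binutils build whose BFD was
configured with PEP support so that the efi-app-x86_64 output target exists
(the file names here are made up for illustration):

```shell
# Link the ELF image as usual; R_X86_64_32 relocations in the
# .debug_* sections are not a problem for an ELF output.
ld -o check.elf efi/check.o

# Then convert the ELF image into a PE/COFF EFI application.
objcopy -O efi-app-x86_64 check.elf check.efi
```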

Andreas.

-- 
Andreas Schwab, schwab@linux-m68k.org
GPG Key fingerprint = 7578 EB47 D4E5 4D69 2510  2552 DF73 E780 A9DA AEC1
"And now for something completely different."


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 13:38:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 13:38:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81305.150039 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7ept-0002Fm-Lh; Thu, 04 Feb 2021 13:38:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81305.150039; Thu, 04 Feb 2021 13:38:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7ept-0002Ff-I9; Thu, 04 Feb 2021 13:38:17 +0000
Received: by outflank-mailman (input) for mailman id 81305;
 Thu, 04 Feb 2021 13:38:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+S2h=HG=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l7eps-0002Fa-7A
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 13:38:16 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0bbdb616-48cc-45ca-a194-06893fe0048a;
 Thu, 04 Feb 2021 13:38:14 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id AB264ACB0;
 Thu,  4 Feb 2021 13:38:13 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0bbdb616-48cc-45ca-a194-06893fe0048a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612445893; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=XGYTqHc3ows1m4ndaNUxi/WSzJmQDlGNgfX75xhENOU=;
	b=EJZ1VcGa41LPjN+RZSH51kAHgmeOgFCnKWJg56sB0ZpbWgZD6EaDN5vjZuDkp9a+VAodvH
	ZHR0Lnr7Ws5o09kdTXz4gqfhTRPUwpj1cwF2LTeEd2bZumyQB8Tmf+b6zx3uNI8VArG1j9
	KYSVAV8lkIcjCov78uzz1guCVgw8l70=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Ian Jackson <iwj@xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86/EFI: work around GNU ld 2.36 issue
Message-ID: <e6d59277-35b2-e7df-0e68-a794c8855ac0@suse.com>
Date: Thu, 4 Feb 2021 14:38:13 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Our linker capability check fails with the recent binutils release's ld:

.../check.o:(.debug_aranges+0x6): relocation truncated to fit: R_X86_64_32 against `.debug_info'
.../check.o:(.debug_info+0x6): relocation truncated to fit: R_X86_64_32 against `.debug_abbrev'
.../check.o:(.debug_info+0xc): relocation truncated to fit: R_X86_64_32 against `.debug_str'+76
.../check.o:(.debug_info+0x11): relocation truncated to fit: R_X86_64_32 against `.debug_str'+d
.../check.o:(.debug_info+0x15): relocation truncated to fit: R_X86_64_32 against `.debug_str'+2b
.../check.o:(.debug_info+0x29): relocation truncated to fit: R_X86_64_32 against `.debug_line'
.../check.o:(.debug_info+0x30): relocation truncated to fit: R_X86_64_32 against `.debug_str'+19
.../check.o:(.debug_info+0x37): relocation truncated to fit: R_X86_64_32 against `.debug_str'+71
.../check.o:(.debug_info+0x3e): relocation truncated to fit: R_X86_64_32 against `.debug_str'
.../check.o:(.debug_info+0x45): relocation truncated to fit: R_X86_64_32 against `.debug_str'+5e
.../check.o:(.debug_info+0x4c): additional relocation overflows omitted from the output

Tell the linker to strip debug info as a workaround. Oddly enough, debug
info has been getting stripped when linking the actual xen.efi, although I
have not been able to tell why.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -123,7 +123,7 @@ ifneq ($(efi-y),)
 # Check if the compiler supports the MS ABI.
 export XEN_BUILD_EFI := $(shell $(CC) $(XEN_CFLAGS) -c efi/check.c -o efi/check.o 2>/dev/null && echo y)
 # Check if the linker supports PE.
-XEN_BUILD_PE := $(if $(XEN_BUILD_EFI),$(shell $(LD) -mi386pep --subsystem=10 -o efi/check.efi efi/check.o 2>/dev/null && echo y))
+XEN_BUILD_PE := $(if $(XEN_BUILD_EFI),$(shell $(LD) -mi386pep --subsystem=10 -S -o efi/check.efi efi/check.o 2>/dev/null && echo y))
 CFLAGS-$(XEN_BUILD_EFI) += -DXEN_BUILD_EFI
 endif
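
For reference, the adjusted probe can also be run by hand, roughly as below
(a sketch only: it assumes a GNU ld built with PE support and the efi/check.o
produced by the preceding MS-ABI compiler check; -S is short for
--strip-debug):

```sh
# Manual form of the capability probe above. -S / --strip-debug discards
# the .debug_* sections whose R_X86_64_32 relocations overflow under
# ld 2.36, so that only PE output support is actually being probed.
ld -mi386pep --subsystem=10 -S -o efi/check.efi efi/check.o \
    && echo y    # "y" matches what the Makefile check exports
```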
 


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 13:55:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 13:55:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81307.150051 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7f6t-0004Lp-5B; Thu, 04 Feb 2021 13:55:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81307.150051; Thu, 04 Feb 2021 13:55:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7f6t-0004Li-1s; Thu, 04 Feb 2021 13:55:51 +0000
Received: by outflank-mailman (input) for mailman id 81307;
 Thu, 04 Feb 2021 13:55:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7f6r-0004La-OC; Thu, 04 Feb 2021 13:55:49 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7f6r-0003v2-H7; Thu, 04 Feb 2021 13:55:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7f6r-0006TI-8r; Thu, 04 Feb 2021 13:55:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l7f6r-0007gs-8M; Thu, 04 Feb 2021 13:55:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ybsmRHtONhZKcLfcstmzyg9JI/FzRQJGGRcABjDjshA=; b=fXtYdApEjDb2SI4xEIAV6JkDpj
	SbrPDRkQV13WGoKC7hC6iuNP1i26wNuCH4kWCgWH3i/hF9vmLn3Vf75KZlhYAdstsoqb2mLOy6RJs
	SLh7pMhtafT6r+/fOs1aJjnOTY9MyC5WItoCHv98y7moILAy6mEWonvVfwM/ir1Cx+x8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158996-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 158996: trouble: broken/fail/pass
X-Osstest-Failures:
    xen-unstable:test-arm64-arm64-xl-credit1:<job status>:broken:regression
    xen-unstable:test-arm64-arm64-xl-xsm:<job status>:broken:regression
    xen-unstable:test-armhf-armhf-xl-rtds:guest-stop:fail:allowable
    xen-unstable:test-arm64-arm64-xl-xsm:host-install(5):broken:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:host-install(5):broken:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:guest-start:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=d203dbd69f1a02577dd6fe571d72beb980c548a6
X-Osstest-Versions-That:
    xen=5e7aa904405fa2f268c3af213516bae271de3265
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 04 Feb 2021 13:55:49 +0000

flight 158996 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158996/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-credit1     <job status>                 broken
 test-arm64-arm64-xl-xsm         <job status>                 broken

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     17 guest-stop               fail REGR. vs. 158977

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm       5 host-install(5)             broken like 158977
 test-arm64-arm64-xl-credit1   5 host-install(5)             broken like 158977
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 158977
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 158977
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 158977
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 158977
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 158977
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 158977
 test-armhf-armhf-libvirt-raw 13 guest-start                  fail  like 158977
 test-armhf-armhf-xl-vhd      13 guest-start                  fail  like 158977
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 158977
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 158977
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 158977
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 158977
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  d203dbd69f1a02577dd6fe571d72beb980c548a6
baseline version:
 xen                  5e7aa904405fa2f268c3af213516bae271de3265

Last test of basis   158977  2021-02-03 08:08:44 Z    1 days
Testing same since   158996  2021-02-04 00:07:40 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      broken  
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  broken  
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-arm64-arm64-xl-credit1 broken
broken-job test-arm64-arm64-xl-xsm broken
broken-step test-arm64-arm64-xl-xsm host-install(5)
broken-step test-arm64-arm64-xl-credit1 host-install(5)

Not pushing.

------------------------------------------------------------
commit d203dbd69f1a02577dd6fe571d72beb980c548a6
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Feb 3 15:43:35 2021 +0000

    libs/foreignmem: Fix/simplify errno handling for map_resource
    
    Simplify the FreeBSD and Linux logic, left in this state by the previous
    change.  No functional change.
    
    Duplicate the FreeBSD logic for NetBSD, to maintain the uniform ABI for
    callers, whereby EOPNOTSUPP covers all missing Xen/kernel support.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Ian Jackson <iwj@xenproject.org>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit 9351e1468d67ee79fbc6411b087c67ac34813891
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Feb 3 15:41:55 2021 +0000

    libs/foreignmem: Drop useless and/or misleading logging
    
    These log lines are all in response to single system calls, and do not provide
    any information which the immediate caller can't determine themselves.  It is
    however rude to put junk like this onto stderr, especially as system call
    failures are not even error conditions in certain circumstances.
    
    The FreeBSD logging contains stale function names, and the Solaris
    logging shouldn't have passed code review to start with.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Ian Jackson <iwj@xenproject.org>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 13:58:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 13:58:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81310.150066 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7f99-0004TX-Li; Thu, 04 Feb 2021 13:58:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81310.150066; Thu, 04 Feb 2021 13:58:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7f99-0004TQ-IF; Thu, 04 Feb 2021 13:58:11 +0000
Received: by outflank-mailman (input) for mailman id 81310;
 Thu, 04 Feb 2021 13:58:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+S2h=HG=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l7f98-0004TK-A8
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 13:58:10 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f3852150-ff7b-4985-ac0a-9ec11e0ba88f;
 Thu, 04 Feb 2021 13:58:09 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BF45CADDC;
 Thu,  4 Feb 2021 13:58:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f3852150-ff7b-4985-ac0a-9ec11e0ba88f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612447088; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=EkoNk62up44VMUOIH/tbPS/j9zdEfQ8JduNlDUmA1E8=;
	b=iP0Dec40wfdGXjZe6cGjgIGs1XVk2x1l/YiDYsspZu3nn7iUZo95bauUufWuhIFfJBJP7v
	5r3Ff6BY+mSm9funF6WzpFq9ygrqk7/Tnj2Irttc6l+hG0z3f3s8uKP65lDIAslrccgD/6
	SAiwTmWCAMgXeX1tM2y2PNgaNZ5GzgY=
Subject: Re: ld 2.36 regression linking EFI binary from ELF input with debug
 info
To: Andreas Schwab <schwab@linux-m68k.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Binutils <binutils@sourceware.org>
References: <79812876-b43d-7729-da34-3b4cd1c31f24@suse.com>
 <875z38vtwr.fsf@igel.home>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <64ded848-a5e1-6b99-23cd-8f451e2a8a7a@suse.com>
Date: Thu, 4 Feb 2021 14:58:08 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <875z38vtwr.fsf@igel.home>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 04.02.2021 14:34, Andreas Schwab wrote:
> On Feb 04 2021, Jan Beulich via Binutils wrote:
> 
>> For reference, the probing is as simple as
>>
>> $(LD) -mi386pep --subsystem=10 -o efi/check.efi efi/check.o
>>
>> As was to be expected, the errors disappear with -S, but that's
>> an option only for the probing, not for the actual linking of
>> the binary.
>>
>> Thanks for pointers (or better yet, a fix),
> 
> Does it work to link to ELF and use objcopy to convert to PE?

It looks like it works, although I can't easily tell whether the output
would actually be usable in any way. I also don't think this would be an
option for our build process.
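
The suggested route would look roughly like this (a sketch only: the section
list and target name follow gnu-efi's conversion recipe, not anything Xen's
build currently does, and whether the result boots under EFI is exactly the
open question):

```sh
# Link to ELF as usual (debug info stays intact here), then convert
# the result to a PE/COFF EFI application with objcopy.
ld -o check.elf check.o
objcopy -j .text -j .data -j .dynamic -j .rela -j .reloc \
        --target=efi-app-x86_64 check.elf check.efi
```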

Jan


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 14:10:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 14:10:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81316.150077 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7fL4-0006dZ-TH; Thu, 04 Feb 2021 14:10:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81316.150077; Thu, 04 Feb 2021 14:10:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7fL4-0006dS-QI; Thu, 04 Feb 2021 14:10:30 +0000
Received: by outflank-mailman (input) for mailman id 81316;
 Thu, 04 Feb 2021 14:10:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KAxh=HG=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l7fL2-0006dN-J9
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 14:10:28 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 26f30117-d8bd-4aba-9e37-e0ee7b81b215;
 Thu, 04 Feb 2021 14:10:26 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 26f30117-d8bd-4aba-9e37-e0ee7b81b215
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612447826;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=mXg3pdca5qjkelDHIP/8BM1Qa2LMYqk2wsfT4UaRLxg=;
  b=L2mk8dY095RBGYZIDxJSeYlmrIRhbuTK758AJbqLLP4Dih+kH0OXEmN1
   JWglInCUZ5GWLrsMFwjyX00gL/xFChTwZ7C2WAC/Yrl7CCc/JNkJ8hgXI
   7+2fHp+VXB1eB1beFaYcZ31X2B5AZjLJdGKFF97bRw6I9MKqW7b2FpZYw
   A=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: v/wqT7AszrOmRlqj7vbRBa/RCTVc9Ro5Sn3N0ggZUsivQ6ovggm7oGJkNAY2s+lPdx97nqmfJ2
 NDFZzX99/5q1A8upXzqgKXQu1ec/WoGjMVCtUMeZXh4sSqOBSJGe+UzqsWw0dTUClxm9C9I3MM
 maLX4B2/eFEztGTDx3GJYnqIhffTt4ObHL6gxKtKdthi70ZYZsjcdi5Uh2raE75jvBU46AYJEe
 /It8OMEBNB6hV8cY/EYoqa4Cudt6iXSuCytFySioyVSn1NyqjoAsa6fvpzaVobtZa4nU2Lk4yM
 2S0=
X-SBRS: 5.2
X-MesageID: 37896267
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,401,1602561600"; 
   d="scan'208";a="37896267"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=CjDRUH4A+yDLFCwYGTNnkVZctqW5VGPMNlkuAkSYU7/qNfYClzY7bzpXyECRSorTRJO+DTJH46aHreP3WMr3sLqzuDZWQNlWqT+8S8AtDxzQhxkhxHjF2e2Iel1d+NsmJUU0GSoQAK6M/bl5LYdrs+3sgblrpl3UFDwBCkf21ONb1R5BAi4jB6Z01LCfPjLfoy+OSKjWAKr3e/Vc8feoIP20J85uT+p1i80o9PskM8CBHFAW/+m1VCevFqJL8nacKIGkeO7I2mh4wM82vDZul9ct99tuwmNoCDdRGtNfM5vN9VvB8Qxg/gVOKnZkp0RNe9wb4DH7zlrkbDmEwH5hCg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=oxpNE2g+n8xSmWCluve2WuH5dPuFNvsuDYvcWEDrpfo=;
 b=InsJcDbieWeeFjdY4/inttwdpqUzmNAt1rJ6Mc47x6AKzRkdDeWMOwVzOc3s/FLS6hEoRrgCf2vMnCLH2w+IqT/VQdxd/nZex86ubBnVoxfys95DNd+GcvDHlH1qZQooyHB4gGVwVttQz/aOpa6ZzcPY1SDCTiTpWz/i3HzUvLd7VR79yZ+TAu0tbs2iPorOWrWI9vWVJvjNVWlvlhGDWPt3OgxwQO/tcav2/45zMMRft4Sy/dxfMfSd5825DH7sKQ1bgJcmL+YBMJIcg9syMMmrYfuwwJRrqGNEojuvGBItNlh6HNoXy+28Y0IOa8zIszvihzoVm8xYrUsltnrxKw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=oxpNE2g+n8xSmWCluve2WuH5dPuFNvsuDYvcWEDrpfo=;
 b=Bq44WTtNnRUbsyrBm9yuNR9ERVwlInS1EpIBJx7asbjAYDPRPnkp7WXzJaoXcRdJfDFISO52F/uc31K5eqPxeKen5pfwzeRV7FvtBnY8H3Qdwjh8D7bJ3P46bfxFRO40c6l7LAuYa15/COsSLbhyLJNt4tDUHR8Y4W93b6JodAw=
Subject: Re: [PATCH] x86/HVM: support emulated UMIP
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, Kevin Tian <kevin.tian@intel.com>, Jun Nakajima
	<jun.nakajima@intel.com>, Tamas K Lengyel <tamas@tklengyel.com>
References: <5a8b1c37-5f53-746f-ba87-778d4d980d99@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <c717bd30-27b2-625d-576e-eb41a7192c55@citrix.com>
Date: Thu, 4 Feb 2021 14:10:16 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
In-Reply-To: <5a8b1c37-5f53-746f-ba87-778d4d980d99@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0245.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1a7::16) To BYAPR03MB4728.namprd03.prod.outlook.com
 (2603:10b6:a03:13a::24)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 4e40d794-fbff-49bb-6077-08d8c9169c9c
X-MS-TrafficTypeDiagnostic: BY5PR03MB5185:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BY5PR03MB51857BF4BBA6953978286ADABAB39@BY5PR03MB5185.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: NBL5/ncZCsiHFASFnmbA5Lrfj4DiE6PF2bG36lauo84j31MY7U+Csm1nKKUy18X5nWQBD1xpPTq0ImbOLtzU+cLuP2rK6T997BLDHX+x1n3jU6YzxpBffl2NHA/jreRxdQ5RScnT7qEiIYANzbeCeJ2t8c0kixW1+3tcpisLbYH73A5R3cgC6mCnwIeOjWHTdaQwyGwpPjw+/h5fPYzh02dKSwviHiNY1rUq6S28tXCkyR6qROcDwhTzmgjE6IwP5WF0sgy+fazqfrSGEPHOuNbzsbjl3DJPSAEezZQuhYP4GE2pIFQH+MqcsHspXj8A7ncnmE+0WL6Bi/h3xNuPQGCX+rGR15ta6AXKMXLoDpSaHcGMneONlZK2P46pgOEt1HCt4rCrG8VUYkk4eZMt9xuOfNajCf/P1jYUs1nRkLvLBdQ/LlYszwloKO/FS94eiGPkpsJhlH3c8iMbA/EUDZxvuDbAXUw242Ycf+zh0TFwzrUKVIwoJZ7lfHYqTCrfYB3Ykdby+uoUnzcNm5fv4T14GLjOxZSPsFOnOtaWCLUR8ug9FIrIQ3CXqXn4VzY01DYNRH9BOvxlCH/N5qvrAWTJHmbEfOL8d/2lNFZ+VR0=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB4728.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(346002)(39860400002)(366004)(396003)(136003)(186003)(6486002)(36756003)(2906002)(26005)(6666004)(16526019)(316002)(16576012)(53546011)(31686004)(83380400001)(86362001)(5660300002)(54906003)(478600001)(8936002)(8676002)(956004)(4326008)(66476007)(66946007)(31696002)(2616005)(66556008)(110136005)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?THpoWE12aDJaYkRGWmhTZVJqYnVEVXo5anlRT3B6NTBQMzVJaDdOMmZhc2c0?=
 =?utf-8?B?YmdKOUFXRkM3N0RaNTJLWWtkRTVneVpnT2JxU0RCdmg3OUFNUzFuY0h3OEVl?=
 =?utf-8?B?YzBwMFR2cytjLzhxRkd2bDY5SHdscEowQ3A4TS9aOU1CcXhQY1BwK1h5MXBW?=
 =?utf-8?B?OVNCYlNEc0ppK0l5ZlJYUmxMczkxSmd4cWxrK0xhOHJiM1AzZFN3c1NxbVpi?=
 =?utf-8?B?RHk4TnAxSHVWQ1krMUc3WHpHOHBqS1hiQnNTQytNTkNvM0RVSkZjY1BuakFm?=
 =?utf-8?B?RTlpeVBVOGVhVm00T1dGNHZWYzRNK3FZV0szVHFBMG82bjBYSEMyQjdTRHhw?=
 =?utf-8?B?N3ozS3NmSTV4K3M2MnBidENDM3BqcVRCWGJYd1BUSFlxZGErRWZ1QmVYV2Jz?=
 =?utf-8?B?akJGSS9LM1ZNdis0NCtlYjYwdEVTa2NnR3NVMzRvL1FaMTdZZ0QwdktiRzY2?=
 =?utf-8?B?M3MvNXk0bW90bUg1Qmx4NW53b0VzYmpTT3RxU0Y4MUNBS003TTdZTWM0VjFt?=
 =?utf-8?B?YVVMbks5K1ZuaVN1V3lWcnZEMjUzczZFeHA3bTJOS1RpcUt2bTQ0cUNST29E?=
 =?utf-8?B?UFZlRVRrOFVZNHhiZFFqenBWM0xmU0xnaEU5cXpoU3A2M0RpV1huak9WRUlz?=
 =?utf-8?B?VjZpNC9NRm5YQURmTHk5OVdlSWJQODVha3BaUFU4eTc4S0JiYUZtWCtrdXNT?=
 =?utf-8?B?R3BxWWt3K0d3SGc5UzVWWkNTMHZ3UGhVbTY4bnh6WHJZSFE4a0FmS3JwaEdO?=
 =?utf-8?B?ZmQ3c2R1RXM2RUdoRmdodFFVVWErcEhWV0lEM0tOUUhOODhad2pFUDhRQXhR?=
 =?utf-8?B?dWRBS0IwcVhFVW1qTW9hU09CRmFTUEZXNWNwcGZHWTN3ZHRLSG1hUHZMZTZt?=
 =?utf-8?B?dW5qV2cxU0VCcWY2aS95cWF0UmhWK2d0bUNNQXBKUy8weVFPSjFnempGL0NH?=
 =?utf-8?B?UnQ5d0xObmtvaFBnOTZTVXV2OHRVUDBzYlgyMHNCU3VKRGlkdEl0SFVpc1Ar?=
 =?utf-8?B?S09hTXp1emZIWjR6VGZXLzl2SjgwQlNBbmJ6WlNQWUVENENHbVJtS3grOEto?=
 =?utf-8?B?M0ZWUFVreWd1Y0trbmx6Z203TXNyNUhVdnk3MnovZnFTQUhlTjBES0hHK21U?=
 =?utf-8?B?RStUaVZZU2tpSW14MFZNVllVWjluazkzblpXUzhRUmMyd0NDWkthNjZYNEpr?=
 =?utf-8?B?dldvMjdvRzdKZVFPTlZYVno3K2FManVuMW9xWEw4Wm4zYUp6WEJUbUlSMlRj?=
 =?utf-8?B?RkhPcFBzSWc3UXl1eUFGZGxVTWp0c1ZESVpqdjFnVTFEMW1xWFVwNkcxS0hP?=
 =?utf-8?B?V1NhYjl5RmRmbU5XRlBHMkZ3Q3lQVE9zbnFaSXF3aUNRSk1Nd2pQeUR3Zys5?=
 =?utf-8?B?QVltK3hOL0VSMS80dkc4b0NpYkxjbmR0d3Jld3J4L0tWSFBna29XOG1Da2ZU?=
 =?utf-8?B?WENLcjhrL1NDV0lxOU1ZMkdVelBEVXgybWVaNDZRPT0=?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 4e40d794-fbff-49bb-6077-08d8c9169c9c
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB4728.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Feb 2021 14:10:22.8411
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 7jDkSgn3ySZLhzvGPeiryqjGATC1+8aQPU3x2iEHsIJ2yUx+tBDlnBZ/5l24t1EsT7BM9gg7QQbHE4E3u111cQ+cnRUcpnYSHv7DfSwE9IE=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR03MB5185
X-OriginatorOrg: citrix.com

On 29/01/2021 11:45, Jan Beulich wrote:
> There are three noteworthy drawbacks:
> 1) The intercepts we need to enable here are CPL-independent, i.e. we
>    now have to emulate certain instructions for ring 0.
> 2) On VMX there's no intercept for SMSW, so the emulation isn't really
>    complete there.
> 3) The CR4 write intercept on SVM is lower priority than all exception
>    checks, so we need to intercept #GP.
> Therefore this emulation doesn't get offered to guests by default.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

I wonder if it would be helpful for this to be 3 patches, simply because
of the differing complexity of the VT-x and SVM pieces.

> --- a/xen/arch/x86/cpuid.c
> +++ b/xen/arch/x86/cpuid.c
> @@ -453,6 +453,13 @@ static void __init calculate_hvm_max_pol
>      __set_bit(X86_FEATURE_X2APIC, hvm_featureset);
>  
>      /*
> +     * Xen can often provide UMIP emulation to HVM guests even if the host
> +     * doesn't have such functionality.
> +     */
> +    if ( hvm_funcs.set_descriptor_access_exiting )

No need for this check.  Exiting is available on all generations and
vendors.

Also, the header file probably wants a ! annotation for UMIP to signify
that we are doing something special with it.
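For context, the annotations live in xen/include/public/arch-x86/cpufeatureset.h; a sketch of what the suggested change might look like (the exposure letter and bit position here are from memory, so treat them as assumptions to check against the file's legend):

```c
/* Sketch only -- verify letters and bit position against cpufeatureset.h.
 * '!' marks the feature as having special handling. */
XEN_CPUFEATURE(UMIP,          6*32+ 2) /*!S User Mode Instruction Prevention */
```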

> +        __set_bit(X86_FEATURE_UMIP, hvm_featureset);
> +
> +    /*
>       * On AMD, PV guests are entirely unable to use SYSENTER as Xen runs in
>       * long mode (and init_amd() has cleared it out of host capabilities), but
>       * HVM guests are able if running in protected mode.
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -991,7 +991,8 @@ unsigned long hvm_cr4_guest_valid_bits(c
>                                  X86_CR4_PCE                    |
>              (p->basic.fxsr    ? X86_CR4_OSFXSR            : 0) |
>              (p->basic.sse     ? X86_CR4_OSXMMEXCPT        : 0) |
> -            (p->feat.umip     ? X86_CR4_UMIP              : 0) |
> +            ((p == &host_cpuid_policy ? &hvm_max_cpuid_policy : p)->feat.umip
> +                              ? X86_CR4_UMIP              : 0) |

This hunk wants dropping.  p can't alias host_cpuid_policy any more.

(and for future changes which do look like this, a local bool please,
per the comment.)

>              (vmxe             ? X86_CR4_VMXE              : 0) |
>              (p->feat.fsgsbase ? X86_CR4_FSGSBASE          : 0) |
>              (p->basic.pcid    ? X86_CR4_PCIDE             : 0) |
> @@ -3731,6 +3732,13 @@ int hvm_descriptor_access_intercept(uint
>      struct vcpu *curr = current;
>      struct domain *currd = curr->domain;
>  
> +    if ( (is_write || curr->arch.hvm.guest_cr[4] & X86_CR4_UMIP) &&

Brackets for & expression?

> +         hvm_get_cpl(curr) )
> +    {
> +        hvm_inject_hw_exception(TRAP_gp_fault, 0);
> +        return X86EMUL_OKAY;
> +    }

I believe this is a logical change for monitor - previously, non-ring0
events would go all the way to userspace.

I don't expect this to be an issue - monitoring agents really shouldn't
be interested in userspace actions the guest kernel is trying to turn
into #GP.

CC'ing Tamas for his opinion.

> +
>      if ( currd->arch.monitor.descriptor_access_enabled )
>      {
>          ASSERT(curr->arch.vm_event);
> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -547,6 +547,28 @@ void svm_update_guest_cr(struct vcpu *v,
>              value &= ~(X86_CR4_SMEP | X86_CR4_SMAP);
>          }
>  
> +        if ( v->domain->arch.cpuid->feat.umip && !cpu_has_umip )

Throughout the series, examples like this should have the !cpu_has_umip
clause first.  It is static per host, rather than variable per VM, and
will improve the branch prediction.

Where the logic is equivalent, it is best to have the clauses in
stability order, as this will prevent a modern CPU from even evaluating
the CPUID policy.

> +        {
> +            u32 general1_intercepts = vmcb_get_general1_intercepts(vmcb);
> +
> +            if ( v->arch.hvm.guest_cr[4] & X86_CR4_UMIP )
> +            {
> +                value &= ~X86_CR4_UMIP;
> +                ASSERT(vmcb_get_cr_intercepts(vmcb) & CR_INTERCEPT_CR0_READ);

It occurs to me that adding CR0 read exiting adds a lot of complexity
for very little gain.

From a practical standpoint, UMIP exists to block SIDT/SGDT, which are
the two instructions that give an attacker useful information (the
linear addresses of the IDT/GDT respectively).  SLDT/STR only confer a
16-bit index within the GDT (fixed per OS), and SMSW is as good as a
constant these days.

Given that Intel cannot intercept SMSW at all and we've already accepted
that as a limitation vs architectural UMIP, I don't think extra
complexity on AMD is worth the gain.

> @@ -2728,6 +2767,14 @@ void svm_vmexit_handler(struct cpu_user_
>          svm_fpu_dirty_intercept();
>          break;
>  
> +    case VMEXIT_EXCEPTION_GP:
> +        HVMTRACE_1D(TRAP, TRAP_gp_fault);
> +        /* We only care about ring 0 faults with error code zero. */
> +        if ( vmcb->exitinfo1 || vmcb_get_cpl(vmcb) ||
> +             !hvm_emulate_one_insn(is_cr4_write, "CR4 write") )
> +            hvm_inject_hw_exception(TRAP_gp_fault, vmcb->exitinfo1);

I should post one of my pending SVM cleanup patches, which further
deconstructs exitinfo into more usefully named fields.

The comment should include *why* we only care about this state.  It
needs to mention emulated UMIP, and the priority order of #GP and VMExit.

> --- a/xen/arch/x86/hvm/vmx/vmcs.c
> +++ b/xen/arch/x86/hvm/vmx/vmcs.c
> @@ -1537,6 +1552,7 @@ static void vmx_update_guest_cr(struct v
>                                               (X86_CR4_PSE | X86_CR4_SMEP |
>                                                X86_CR4_SMAP)
>                                               : 0;
> +            v->arch.hvm.vmx.cr4_host_mask |= cpu_has_umip ? 0 : X86_CR4_UMIP;

if ( !cpu_has_umip )
    v->arch.hvm.vmx.cr4_host_mask |= X86_CR4_UMIP;

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 14:21:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 14:21:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81326.150096 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7fV4-0007lw-Us; Thu, 04 Feb 2021 14:20:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81326.150096; Thu, 04 Feb 2021 14:20:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7fV4-0007lp-Qo; Thu, 04 Feb 2021 14:20:50 +0000
Received: by outflank-mailman (input) for mailman id 81326;
 Thu, 04 Feb 2021 14:20:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YSLV=HG=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1l7fV3-0007lk-Ur
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 14:20:49 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 255224e8-0ee2-489e-a243-542510d95f50;
 Thu, 04 Feb 2021 14:20:48 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 55B15AD24;
 Thu,  4 Feb 2021 14:20:47 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 255224e8-0ee2-489e-a243-542510d95f50
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612448447; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=4BGRJVYHX8vv0mMZ5K24DmOUL6xZvEVs3qVEdlWQrvg=;
	b=ghrDj22KIIcwxBLh0qp49GdDweL7JAGXMw80gkh3wrxvDkAmoHkX+g6RLd9HoyVm1rV1EA
	ZdmlDXRKzVhEqgd1R+B9ElRA5L/l13RP/pgkWM5crSznrT5oZaM1hGDCtTYSdMVp/voZyR
	F9Nm9ORxrpU5YdgJB/vIOZNKZN4Dcs0=
Message-ID: <6d0d7181bad79259aff28351621d2ac1eeaca113.camel@suse.com>
Subject: Re: [ANNOUNCE] Xen 4.15 - call for notification/status of
 significant bugs
From: Dario Faggioli <dfaggioli@suse.com>
To: Ian Jackson <iwj@xenproject.org>, committers@xenproject.org, 
	xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
 <jbeulich@suse.com>,  Julien Grall <julien@xen.org>,
 community.manager@xenproject.org
Date: Thu, 04 Feb 2021 15:20:46 +0100
In-Reply-To: <24603.58528.901884.980466@mariner.uk.xensource.com>
References: <24600.8030.769396.165224@mariner.uk.xensource.com>
	 <24603.58528.901884.980466@mariner.uk.xensource.com>
Content-Type: multipart/signed; micalg="pgp-sha256";
	protocol="application/pgp-signature"; boundary="=-4I6JqNFolG6UkCq4I6c3"
User-Agent: Evolution 3.38.3 (by Flathub.org) 
MIME-Version: 1.0


--=-4I6JqNFolG6UkCq4I6c3
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, 2021-02-04 at 12:12 +0000, Ian Jackson wrote:
> B. "scheduler broken" bugs.
>
> Information from
>   Andrew Cooper <andrew.cooper3@citrix.com>
>   Dario Faggioli <dfaggioli@suse.com>
>
> Quoting Andrew Cooper
> > We've had 4 or 5 reports of Xen not working, and very little
> > investigation on what's going on.  Suspicion is that there might be
> > two bugs, one with smt=0 on recent AMD hardware, and one more
> > general "some workloads cause negative credit" and might or might
> > not be specific to credit2 (debugging feedback differs - also might
> > be 3 underlying issues).
>
> I reviewed a thread about this and it is not clear to me where we are
> with this.
>
Ok, let me try to summarize the current status.

- BUG: credit=sched2 machine hang when using DRAKVUF

  https://lists.xen.org/archives/html/xen-devel/2020-05/msg01985.html
  https://lists.xenproject.org/archives/html/xen-devel/2020-10/msg01561.html
  https://bugzilla.opensuse.org/show_bug.cgi?id=1179246

  99% sure that it's a Credit2 scheduler issue.
  I'm actively working on it.
  "Seems a tricky one; I'm still in the analysis phase"

  Manifests only with a certain combination of hardware and workload.
  I'm not reproducing, but there are multiple reports of it (see
  above). I'm investigating and trying to come up at least with
  debug patches that one of the reporters should be able and willing to
  test.

- Null scheduler and vwfi native problem

  https://lists.xenproject.org/archives/html/xen-devel/2021-01/msg01634.html

  RCU issues, but manifests due to scheduler behavior (especially
  NULL scheduler, especially on ARM).
  I'm actively working on it.

  Patches that should solve the issue for ARM posted already. They
  will need to be slightly adjusted to cover x86 as well. Waiting a
  couple days more for a confirmation from the reporter that the
  patches do help, at least on ARM.

- Xen crash after S3 suspend - Xen 4.13

  https://lists.xen.org/archives/html/xen-devel/2020-03/msg01251.html
  https://lists.xen.org/archives/html/xen-devel/2021-01/msg02620.html

  S3 suspend issue, but root cause seems to be in the scheduler.

  Marek is, as usual, providing good info and feedback. It comes
  third in my list (below the two above, basically), but I will look
  into it.

- Ryzen 4000 (Mobile) Softlocks/Micro-stutters

  https://lists.xenproject.org/archives/html/xen-devel/2020-10/msg00966.html

  Seems it could be scheduling, but the amount of info is limited.

  What we know is that with `dom0_max_vcpus=1 dom0_vcpus_pin`, all
  schedulers seem to work fine. Without those params, Credit2 is the
  "least bad", although not satisfactory. Other schedulers don't even
  boot.
  Fact is, it is reported to occur on QubesOS, which has its own
  downstream patches, plus there are no logs.
  There's a feeling that this (together with others) hints at SMT off
  having issues on AMD (Ryzen?), but again, it's not crystal clear to
  me whether this is the issue (or an issue at all) and, if so, in
  what subsystem the problem lies.
  I can try to have a look, mostly to understand whether or
  not it is really the case that some AMDs have issues with SMT=off.
  But that will probably be after I'm done with the other issues
  I've mentioned before (above) this one.

- Recent upgrade of 4.13 -> 4.14 issue

  https://lists.xenproject.org/archives/html/xen-devel/2020-10/msg01800.html

  To my judgment, it's not at all clear whether or not this is a
  scheduler issue. And at least with the amount of info that we have
  so far, I'd lean toward "no, it's not". I'm happy to help with it
  anyway, of course, but it comes after the others.

So, Ian, was this at all helpful?

If not, help me understand how I can help you. :-P

Thanks and Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)

--=-4I6JqNFolG6UkCq4I6c3
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAmAcAr4ACgkQFkJ4iaW4
c+6jWA//WW4Uie6tFaNAUR5lw2pyOqDO3tvzHUNu45vUvJSJxFdingOQ/HCzKtx2
zSuaU0+uwesIZemTxaERYA2ovdLLHHz6vZETvpXgG3y1l1JTM66Bth0xr6UfdpSF
YeBU2TF+utvvBEhDjzX1AeI7KSZbgXk+W8Xqn7QuF4tGGIngf+j/WV6GcKGhQsTF
h+5wCNWp49xtBHU1tYSnZkDU3b1z3BN7Ka7MxqYw8TO+du+jW8Ijits52p/C4l2O
H8th7TwhDxnabAaPHwKBzPb7G07hkiLkVE47796PyejAkb8JMzDzSNQnwAHGGJyK
FRo8TF8PnNTHw5tTSY8FNjyCzDqipasG/De3BKSgeoDGby7ce6IyUsC913dc26/w
HBM2c70nyvZ6+frJ4gCMxsj5qXTKAFB/v2Y445E00QSo26Nt52UYG0aedlahwEeK
yfwphxAMZIe2B2H4GsqgiWh9ctBmifbWGI7DM45/XZNe6YL3kOVmRljI/VoFsUZU
885bGRVcETm/Rl1zaTbsYXt0M9CPW9XtcLtB1cOVgmTEOYdsmPGBUTrkHVy1Xp3e
R0zm3YHOdNkmFRwkwwEQ/Zpa2Djj9/8xUifyu2A5Iq0zb3YHyHIoqxDb9dF10W3+
MzRelBGIza3VU0daRrJ/W/Hp1HIgeIyRYJSmnVH+tCOwVY7jZfk=
=n7XM
-----END PGP SIGNATURE-----

--=-4I6JqNFolG6UkCq4I6c3--



From xen-devel-bounces@lists.xenproject.org Thu Feb 04 14:31:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 14:31:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81331.150120 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7fel-0000Vr-Vn; Thu, 04 Feb 2021 14:30:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81331.150120; Thu, 04 Feb 2021 14:30:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7fel-0000Vk-SN; Thu, 04 Feb 2021 14:30:51 +0000
Received: by outflank-mailman (input) for mailman id 81331;
 Thu, 04 Feb 2021 14:30:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+S2h=HG=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l7fel-0000Ve-1F
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 14:30:51 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e915f188-bac6-4672-b7a6-22f6da1f7903;
 Thu, 04 Feb 2021 14:30:49 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5B874ACB0;
 Thu,  4 Feb 2021 14:30:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e915f188-bac6-4672-b7a6-22f6da1f7903
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612449048; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=uOxBn3hSn6RTj0rJ+e3/g1iq2jJqW/voWgV4nrmWcwE=;
	b=RLdvGoUOka7zwIbdyDhSkvPO0UwiMg9jAiOGE8qZHsZKhIxVsuWvfEdRcJQc+lXazJZoVl
	vlWiWJgv3CUSE2WvhRCmrat4H5lHWkmpfHoUiMtfwwrzqFkz9FDU579SVQd94FW3ZzR7Hv
	roZjZFVT9mB8RASewEZj4zQULxvJM6o=
Subject: Re: [ANNOUNCE] Xen 4.15 - call for notification/status of significant
 bugs
To: Ian Jackson <iwj@xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>, Julien Grall <julien@xen.org>,
 community.manager@xenproject.org, committers@xenproject.org,
 xen-devel@lists.xenproject.org
References: <24600.8030.769396.165224@mariner.uk.xensource.com>
 <24603.58528.901884.980466@mariner.uk.xensource.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <86412e13-ed57-8625-18be-38dd7022669e@suse.com>
Date: Thu, 4 Feb 2021 15:30:48 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <24603.58528.901884.980466@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 04.02.2021 13:12, Ian Jackson wrote:
> OPEN ISSUES
> -----------
> 
> A. HPET/PIT issue on newer Intel systems
> [...]
> 
> B. "scheduler broken" bugs.
> 
> Information from
>   Andrew Cooper <andrew.cooper3@citrix.com>
>   Dario Faggioli <dfaggioli@suse.com>
> 
> Quoting Andrew Cooper
> | We've had 4 or 5 reports of Xen not working, and very little
> | investigation on what's going on.  Suspicion is that there might be
> | two bugs, one with smt=0 on recent AMD hardware, and one more
> | general "some workloads cause negative credit" and might or might
> | not be specific to credit2 (debugging feedback differs - also might
> | be 3 underlying issues).
> 
> I reviewed a thread about this and it is not clear to me where we are
> with this.

I'm not sure Marek's "Xen crash after S3 suspend - Xen 4.13 and newer"
falls in either of the two buckets.

> C. Fallout from MSR handling behavioral change.
> 
> Information from
>   Jan Beulich <jbeulich@suse.com>
> 
> I am lacking an extended description of this.  What are the bug(s),
> and what is the situation?
> 
> 
> D. Use-after-free in the IOMMU code
> 
> Information from
>   Julien Grall <julien@xen.org>
> References
>  [PATCH for-4.15 0/4] xen/iommu: Collection of bug fixes for IOMMU teardown
>  <20201222154338.9459-1-julien@xen.org>
> 
> Quoting the 0/:
> | This series is a collection of bug fixes for the IOMMU teardown code.
> | All of them are candidate for 4.15 as they can either leak memory or
> | lead to host crash/host corruption.
> 
> AFAICT these patches are not yet in-tree.

(since you're continuing with E. further down)

F. The almost-XSA "x86/PV: avoid speculation abuse through guest
accessors" - the first 4 patches are needed to address the actual
issue. The next 3 patches are needed to get the tree into
consistent state again, identifier-wise. The remaining patches
can probably wait.

> CLOSED ISSUES
> =============
> 
> E. zstd support
> 
> Information from
>   Andrew Cooper <andrew.cooper3@citrix.com>
>   Jan Beulich <jbeulich@suse.com>
>   git
> 
> Needed to unbreak Fedora.  Needs support for both dom0 and domU.
> 
> AFAICT this seems to be in-tree as of 8169f82049ef
> "libxenguest: support zstd compressed kernels"

Indeed.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 15:01:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 15:01:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81336.150140 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7g8V-0003iF-Hw; Thu, 04 Feb 2021 15:01:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81336.150140; Thu, 04 Feb 2021 15:01:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7g8V-0003i8-EY; Thu, 04 Feb 2021 15:01:35 +0000
Received: by outflank-mailman (input) for mailman id 81336;
 Thu, 04 Feb 2021 15:01:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=N2gu=HG=gmail.com=tamas.k.lengyel@srs-us1.protection.inumbo.net>)
 id 1l7g8T-0003i3-Sv
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 15:01:33 +0000
Received: from mail-wr1-x42c.google.com (unknown [2a00:1450:4864:20::42c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a591b159-ad8e-4d62-a29b-bc547d9eb4fe;
 Thu, 04 Feb 2021 15:01:32 +0000 (UTC)
Received: by mail-wr1-x42c.google.com with SMTP id c12so3852725wrc.7
 for <xen-devel@lists.xenproject.org>; Thu, 04 Feb 2021 07:01:32 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a591b159-ad8e-4d62-a29b-bc547d9eb4fe
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=AuJjeB+t2k15miUTBh9UizpGnimRvXMSVTLdkay4a60=;
        b=s3B1Tb/Nf8+6/wDMsJ7nwyGD77FhrMVuO2QA8n5elReOICNzWmfzzvm88Xd0ZrjcK3
         nHwFlqoBf1p0TWy162nQhIiwawiX1J5mhhTvrLqa6BGG8VsCNn1saBmqCteECiUs1ZBt
         5B4sHyO3NC1TZQeNscPZRIgiGEO2TVHDqYOmvEIMphuBJ92uFtqWo5UeSqwK2bQHNOX+
         Jto21JyNdRhOC9JCBMDitGnzGsbzVyXYXhSxcF5ZmJ6L1g8g/CD2FQ/COGeNzMTxvVJ7
         zXmqfKMiZETPjhqmmLyQ2kHSnSBDKD1GSaBjaH5SXL2hx2CJuMzAdpunj8z1xTfpOX0W
         yB8g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=AuJjeB+t2k15miUTBh9UizpGnimRvXMSVTLdkay4a60=;
        b=ZyHaRcywwO6mpLpfHXgFrDpgmarNdkycfw9CVhe0WhYU+XtELdLtNY10PBTswCNixL
         FtKIZ6xpqe2fhkDZ8lHFXe+coN3J+gZqOGefSgmao1nyB2R7hbG9YCU9AXZk+bYgM5Mu
         zrzHzT9M2WIa+jDxUxAHq/CN5U0GOAj9FHTKZ/0VuXA0+M+zI3f3bVgWLzvbeWrv9nGw
         LwWH1ODQcSQ1q8Lh1J1KJfcHbuxbdv19N60fYkU87s6JEE5YytBZkyr6479G2WOu+oTs
         bNB/FwrVbUQmNfGpnQTkhm1pIsU9fGwogwMegK+FWDkPRPZdOE5LQoB/8P3wfXpciVnV
         91uA==
X-Gm-Message-State: AOAM531ekjxpkJqZUWmAxaH2lTUHqTDTOj8dOKCINw4O2Redqg0SBtET
	ERsuYxyW2DCnN7+Ccqeq9v7ZR6UbKCojKciEUcE=
X-Google-Smtp-Source: ABdhPJz6Ho7t3Ui566OsDdM7Zf5qUEUw7IuJCL7P1sR+2xMIb09PgQovsZplHp82Qzoy76bjrOmFs0ZEiEXdwoENh7g=
X-Received: by 2002:adf:f687:: with SMTP id v7mr9766870wrp.182.1612450891650;
 Thu, 04 Feb 2021 07:01:31 -0800 (PST)
MIME-Version: 1.0
References: <24600.8030.769396.165224@mariner.uk.xensource.com>
 <24603.58528.901884.980466@mariner.uk.xensource.com> <6d0d7181bad79259aff28351621d2ac1eeaca113.camel@suse.com>
In-Reply-To: <6d0d7181bad79259aff28351621d2ac1eeaca113.camel@suse.com>
From: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Date: Thu, 4 Feb 2021 10:00:55 -0500
Message-ID: <CABfawhkT5JBsT2-reSLB-bNFhP1em5U3vBs+z_FM6_Kcd7TSiQ@mail.gmail.com>
Subject: Re: [ANNOUNCE] Xen 4.15 - call for notification/status of significant bugs
To: Dario Faggioli <dfaggioli@suse.com>
Cc: Ian Jackson <iwj@xenproject.org>, Committers <committers@xenproject.org>, 
	Xen-devel <xen-devel@lists.xenproject.org>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, community.manager@xenproject.org
Content-Type: text/plain; charset="UTF-8"

On Thu, Feb 4, 2021 at 9:21 AM Dario Faggioli <dfaggioli@suse.com> wrote:
>
> On Thu, 2021-02-04 at 12:12 +0000, Ian Jackson wrote:
> > B. "scheduler broken" bugs.
> >
> > Information from
> >   Andrew Cooper <andrew.cooper3@citrix.com>
> >   Dario Faggioli <dfaggioli@suse.com>
> >
> > Quoting Andrew Cooper
> > > We've had 4 or 5 reports of Xen not working, and very little
> > > investigation on what's going on.  Suspicion is that there might be
> > > two bugs, one with smt=0 on recent AMD hardware, and one more
> > > general "some workloads cause negative credit" and might or might
> > > not be specific to credit2 (debugging feedback differs - also might
> > > be 3 underlying issue).
> >
> > I reviewed a thread about this and it is not clear to me where we are
> > with this.
> >
> Ok, let me try to summarize the current status.
>
> - BUG: credit=sched2 machine hang when using DRAKVUF
>
>   https://lists.xen.org/archives/html/xen-devel/2020-05/msg01985.html
>   https://lists.xenproject.org/archives/html/xen-devel/2020-10/msg01561.html
>   https://bugzilla.opensuse.org/show_bug.cgi?id=1179246
>
>   99% sure that it's a Credit2 scheduler issue.
>   I'm actively working on it.
>   "Seems a tricky one; I'm still in the analysis phase"
>
>   Manifests only with a certain combination of hardware and workload.
>   I'm not reproducing it, but there are multiple reports of it (see
>   above). I'm investigating and trying to come up at least with
>   debug patches that one of the reporters should be able and willing
>   to test.
>
> - Null scheduler and vwfi native problem
>
>   https://lists.xenproject.org/archives/html/xen-devel/2021-01/msg01634.html
>
>   RCU issues, but manifests due to scheduler behavior (especially
>   NULL scheduler, especially on ARM).
>   I'm actively working on it.
>
>   Patches that should solve the issue for ARM posted already. They
>   will need to be slightly adjusted to cover x86 as well. Waiting a
>   couple days more for a confirmation from the reporter that the
>   patches do help, at least on ARM.
>

I've run into the null scheduler causing CPU lockups on x86 as well,
requiring a physical machine reboot. It seems to be triggered by domain
destruction when destroying VM forks, and happens only intermittently.

Tamas


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 15:12:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 15:12:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81339.150152 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7gJN-0004qJ-PN; Thu, 04 Feb 2021 15:12:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81339.150152; Thu, 04 Feb 2021 15:12:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7gJN-0004qC-Lo; Thu, 04 Feb 2021 15:12:49 +0000
Received: by outflank-mailman (input) for mailman id 81339;
 Thu, 04 Feb 2021 15:12:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7gJN-0004q7-0N
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 15:12:49 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7gJM-0005I0-UF
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 15:12:48 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7gJM-0007kk-SU
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 15:12:48 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l7gJI-0008QM-8X; Thu, 04 Feb 2021 15:12:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=/lCByavWdZXCvJxK0iDBpo4VmnYuCe3xKi1daN0BWvw=; b=rJAusodWIp705ZK8JS31TqfF5t
	B72ubCAwsWTmdBQa3UOGjCFOAQfJKiz/5PIdC/7H9QMbUJtSf3DLg5aD3UpxfnOvDSNtpqXcfPR45
	rNZP69vu00eZmOz3f3b+2ypZ3kK6hQwYiRTrBlUGw3lxfXviIGNABBiEJW+H6OexSjvQ=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: quoted-printable
Message-ID: <24604.3819.903469.786536@mariner.uk.xensource.com>
Date: Thu, 4 Feb 2021 15:12:43 +0000
To: Dario Faggioli <dfaggioli@suse.com>
Cc: committers@xenproject.org,
    xen-devel@lists.xenproject.org,
    Andrew Cooper <andrew.cooper3@citrix.com>,
    Jan Beulich  <jbeulich@suse.com>,
    Julien Grall <julien@xen.org>,
    community.manager@xenproject.org
Subject: Re: [ANNOUNCE] Xen 4.15 - call for notification/status of
 significant bugs
In-Reply-To: <6d0d7181bad79259aff28351621d2ac1eeaca113.camel@suse.com>
References: <24600.8030.769396.165224@mariner.uk.xensource.com>
	<24603.58528.901884.980466@mariner.uk.xensource.com>
	<6d0d7181bad79259aff28351621d2ac1eeaca113.camel@suse.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Dario Faggioli writes ("Re: [ANNOUNCE] Xen 4.15 - call for notification/status of significant bugs"):
> On Thu, 2021-02-04 at 12:12 +0000, Ian Jackson wrote:
> > I reviewed a thread about this and it is not clear to me where we are
> > with this.
> 
> Ok, let me try to summarize the current status.

Thanks.

> - BUG: credit=sched2 machine hang when using DRAKVUF
> 
>   https://lists.xen.org/archives/html/xen-devel/2020-05/msg01985.html
>   https://lists.xenproject.org/archives/html/xen-devel/2020-10/msg01561.html
>   https://bugzilla.opensuse.org/show_bug.cgi?id=1179246
> 
>   99% sure that it's a Credit2 scheduler issue.
>   I'm actively working on it.
>   "Seems a tricky one; I'm still in the analysis phase"
> 
>   Manifests only with a certain combination of hardware and workload.
>   I'm not reproducing it, but there are multiple reports of it (see
>   above). I'm investigating and trying to come up at least with
>   debug patches that one of the reporters should be able and willing
>   to test.

I think this is a clear blocker for 4.15.  I will call it "F".

> - Null scheduler and vwfi native problem
> 
>   https://lists.xenproject.org/archives/html/xen-devel/2021-01/msg01634.html
> 
>   RCU issues, but manifests due to scheduler behavior (especially
>   NULL scheduler, especially on ARM).
>   I'm actively working on it.
> 
>   Patches that should solve the issue for ARM posted already. They
>   will need to be slightly adjusted to cover x86 as well. Waiting a
>   couple days more for a confirmation from the reporter that the
>   patches do help, at least on ARM.

I'm not sure whether this is a blocker but it looks like it is going
to be fixed so I will keep it on my list.  I will call it "G".


> - Xen crash after S3 suspend - Xen 4.13
> 
>   https://lists.xen.org/archives/html/xen-devel/2020-03/msg01251.html
>   https://lists.xen.org/archives/html/xen-devel/2021-01/msg02620.html
> 
>   S3 suspend issue, but root cause seems to be in the scheduler.
> 
>   Marek is, as usual, providing good info and feedback. It comes
>   third in my list (below the two above, basically), but I will look
>   into it.

This is not a blocker so I won't track it explicitly but I would
very much welcome a fix if it is simple or comes quickly.


> - Ryzen 4000 (Mobile) Softlocks/Micro-stutters
> 
>   https://lists.xenproject.org/archives/html/xen-devel/2020-10/msg00966.html
> 
>   Seems it could be scheduling, but the amount of info is limited.
> 
>   What we know is that with `dom0_max_vcpus=1 dom0_vcpus_pin`, all
>   schedulers seem to work fine. Without those params, Credit2 is the
>   "least bad", although not satisfactory. Other schedulers don't even
>   boot.
>   Fact is, it is reported to occur on QubesOS, which has its own
>   downstream patches, plus there are no logs.
>   There's a feeling that this (together with others) hints at SMT off
>   having issues on AMD (Ryzen?), but again, it's not crystal clear to
>   me whether this is the issue (or an issue at all) and, if yes, in
>   what subsystem the problem lies.
>   I can try to have a look, mostly to try to understand whether or
>   not it is really the case that some AMDs have issues with SMT=off.
>   But that will probably be after I'm done with the other issues
>   I've mentioned before (above) this one.
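[Editor's note: as a concrete illustration of the workaround mentioned above. This snippet is not from the report; the option names are real Xen command-line parameters, while the file location follows the Debian/Ubuntu GRUB convention (QubesOS configures this differently, through its own tooling):]

```shell
# /etc/default/grub: give dom0 a single vCPU and pin it at boot.
GRUB_CMDLINE_XEN_DEFAULT="dom0_max_vcpus=1 dom0_vcpus_pin"
# Then regenerate the boot config and reboot for it to take effect:
#   update-grub && reboot
```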

I'm not sure whether you are saying (a) our current code is not
usable on this hardware because of this issue, or on the other hand
(b) that you think the issue is specific to downstream patches?

Do you think I should consider this a blocker for 4.15?


> - Recent upgrade of 4.13 -> 4.14 issue
> 
>   https://lists.xenproject.org/archives/html/xen-devel/2020-10/msg01800.html
> 
>   To my judgment, it's not at all clear whether or not this is a
>   scheduler issue. And at least with the amount of info that we have
>   so far, I'd lean toward "no, it's not". I'm happy to help with it
>   anyway, of course, but it comes after the others.

Again, I think this is not a regression so not a blocker for 4.15.


> So, Ian, was this at all helpful?

Yes, very much so, thank you.

Ian.


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 15:15:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 15:15:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81340.150164 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7gLw-0004zW-7x; Thu, 04 Feb 2021 15:15:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81340.150164; Thu, 04 Feb 2021 15:15:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7gLw-0004zP-4V; Thu, 04 Feb 2021 15:15:28 +0000
Received: by outflank-mailman (input) for mailman id 81340;
 Thu, 04 Feb 2021 15:15:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7gLu-0004zK-E1
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 15:15:26 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7gLu-0005Kb-DB
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 15:15:26 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7gLu-0007x9-CQ
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 15:15:26 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l7gLp-0008RJ-JB; Thu, 04 Feb 2021 15:15:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=NlPeA61U6fKbMrSYtmqwXVA+DoGYH1Ieez3LKo8X+VY=; b=xNPt9LE4pMG0vPM4QgEQK7UT67
	f5Ozj9zkKsjQVmSir8rWi5lsecr60WFJeczM7SHl+DGF25c7zdK3MU7cUu9raIBhHyCAQ4XKIamYI
	WXHMiwa1/mUqP6/S+F/6E/NCKuAaW5SRjHQSmQT0plM3PpE/ZYO1YJIZmgaDRH9P/x6g=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 8bit
Message-ID: <24604.3977.348615.728094@mariner.uk.xensource.com>
Date: Thu, 4 Feb 2021 15:15:21 +0000
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: <committers@xenproject.org>,
    <xen-devel@lists.xenproject.org>,
    Dario Faggioli <dfaggioli@suse.com>,
    Jan Beulich <jbeulich@suse.com>,
    Julien Grall <julien@xen.org>,
    <community.manager@xenproject.org>
Subject: Re: [ANNOUNCE] Xen 4.15 - call for notification/status of significant
 bugs
In-Reply-To: <da8b6f0b-185e-91f4-d245-22d8af50c194@citrix.com>
References: <24600.8030.769396.165224@mariner.uk.xensource.com>
	<24603.58528.901884.980466@mariner.uk.xensource.com>
	<da8b6f0b-185e-91f4-d245-22d8af50c194@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Andrew Cooper writes ("Re: [ANNOUNCE] Xen 4.15 - call for notification/status of significant bugs"):
> On 04/02/2021 12:12, Ian Jackson wrote:
> > OPEN ISSUES
> > -----------
> >
> > A. HPET/PIT issue on newer Intel systems
...
> > I think Andrew is still on the case here.
> 
> Fixed.  c/s e1de4c196a from a week ago.
> 
> > C. Fallout from MSR handling behavioral change.
...
> Still WIP and on my TODO list.  In addition to Jan's report, there is a
> separate report from Boris against Solaris.  Also I need to revert a
> patch of mine from early in the release and do the same thing differently.
> 
> Bugs are "VMs which boot on earlier releases don't boot on 4.15 at the
> moment".

Thanks for this information, which I have noted.

Ian.


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 15:18:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 15:18:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81343.150175 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7gOd-00057U-Lj; Thu, 04 Feb 2021 15:18:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81343.150175; Thu, 04 Feb 2021 15:18:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7gOd-00057N-Im; Thu, 04 Feb 2021 15:18:15 +0000
Received: by outflank-mailman (input) for mailman id 81343;
 Thu, 04 Feb 2021 15:18:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7gOc-00057I-Ge
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 15:18:14 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7gOc-0005OZ-Cr
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 15:18:14 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7gOc-0008De-Bi
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 15:18:14 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l7gOR-0008SE-Lf; Thu, 04 Feb 2021 15:18:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=IWL7miHS4vJFqkYtjwVAEOJTaAM4P+Rgpz5a4pEd5G0=; b=imMJbscAlOeD+Q9OsOxjGLvQ8d
	NKhDp0E45GDxWfA33sTFPc4My9wlZRIBvPHt/Koc2tlAsQNO7TIN3VWOaUyJxpNlsV65RLPeCWQU2
	+All5snyIpkj89awJl/zneGPGlJ9NnBZRfCl3OZbZUO4RfzQoBQeXP5dnZV6FL9+aOX0=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24604.4139.379635.971242@mariner.uk.xensource.com>
Date: Thu, 4 Feb 2021 15:18:03 +0000
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
    Dario Faggioli <dfaggioli@suse.com>,
    Julien Grall <julien@xen.org>,
    community.manager@xenproject.org,
    committers@xenproject.org,
    xen-devel@lists.xenproject.org
Subject: Re: [ANNOUNCE] Xen 4.15 - call for notification/status of significant
 bugs
In-Reply-To: <86412e13-ed57-8625-18be-38dd7022669e@suse.com>
References: <24600.8030.769396.165224@mariner.uk.xensource.com>
	<24603.58528.901884.980466@mariner.uk.xensource.com>
	<86412e13-ed57-8625-18be-38dd7022669e@suse.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Jan Beulich writes ("Re: [ANNOUNCE] Xen 4.15 - call for notification/status of significant bugs"):
> On 04.02.2021 13:12, Ian Jackson wrote:
> > OPEN ISSUES
> > -----------
...
> > I reviewed a thread about this and it is not clear to me where we are
> > with this.
> 
> I'm not sure Marek's "Xen crash after S3 suspend - Xen 4.13 and newer"
> falls in either of the two buckets.

I think this is not a regression, though?  See my reply to Dario.
Unless it is worse in 4.15 than in earlier releases I'm not inclined
to see it as a blocker.

> (since you're continuing with E. further down)
> 
> F. The almost-XSA "x86/PV: avoid speculation abuse through guest
> accessors" - the first 4 patches are needed to address the actual
> issue. The next 3 patches are needed to get the tree into
> consistent state again, identifier-wise. The remaining patches
> can probably wait.

Thanks.  I have made a note of this.

I have to allocate the letters or it'll be chaos :-).  I'm calling
this I.

Ian.


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 15:23:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 15:23:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81346.150190 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7gTd-0006CF-A9; Thu, 04 Feb 2021 15:23:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81346.150190; Thu, 04 Feb 2021 15:23:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7gTd-0006C8-7J; Thu, 04 Feb 2021 15:23:25 +0000
Received: by outflank-mailman (input) for mailman id 81346;
 Thu, 04 Feb 2021 15:23:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7gTb-0006Bg-CG; Thu, 04 Feb 2021 15:23:23 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7gTb-0005Ts-5q; Thu, 04 Feb 2021 15:23:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7gTa-0002Iw-RG; Thu, 04 Feb 2021 15:23:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l7gTa-0004DH-Qp; Thu, 04 Feb 2021 15:23:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=r90WV3zc0J6Jw+SNBftQSkFhnT7QXUWB5E59YPJ/s+c=; b=JrAAZWN92sMS++KTgx0k2E4LxH
	Gsw7+nUeW2ymxlQIdWHJdyVP6f10QmZAr0890MfIdJZ8FXYMe/OIXrAzBeOchqIuBmeJVnUZJWDRC
	+vJEBFUrDMaK/iJDHiHY/5u6o0pYidBICcR/9ge74tCkAmvlxUzNayTVFC/JBNYtI8zE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159000-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 159000: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=f6ec1dd34fb6b9757b5ead465ee2ea20c182b0ac
X-Osstest-Versions-That:
    ovmf=e806bb29cfde1b242bb37e72e77364dd812830e0
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 04 Feb 2021 15:23:22 +0000

flight 159000 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159000/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 f6ec1dd34fb6b9757b5ead465ee2ea20c182b0ac
baseline version:
 ovmf                 e806bb29cfde1b242bb37e72e77364dd812830e0

Last test of basis   158985  2021-02-03 13:10:40 Z    1 days
Testing same since   159000  2021-02-04 01:40:54 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Guomin Jiang <guomin.jiang@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   e806bb29cf..f6ec1dd34f  f6ec1dd34fb6b9757b5ead465ee2ea20c182b0ac -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 15:39:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 15:39:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81351.150209 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7gip-0007d1-Nq; Thu, 04 Feb 2021 15:39:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81351.150209; Thu, 04 Feb 2021 15:39:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7gip-0007cu-Jv; Thu, 04 Feb 2021 15:39:07 +0000
Received: by outflank-mailman (input) for mailman id 81351;
 Thu, 04 Feb 2021 15:39:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gi2U=HG=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1l7gio-0007cp-3c
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 15:39:06 +0000
Received: from mail-wr1-x42a.google.com (unknown [2a00:1450:4864:20::42a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 70e6e63b-5451-4866-9868-0bb571b55a35;
 Thu, 04 Feb 2021 15:39:02 +0000 (UTC)
Received: by mail-wr1-x42a.google.com with SMTP id q7so3974661wre.13
 for <xen-devel@lists.xenproject.org>; Thu, 04 Feb 2021 07:39:01 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id f14sm6645819wmj.30.2021.02.04.07.39.00
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 04 Feb 2021 07:39:00 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 70e6e63b-5451-4866-9868-0bb571b55a35
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:from:to:cc:references:message-id:date:user-agent
         :mime-version:in-reply-to:content-language;
        bh=V//P/BTJwE4NM4HkK8XBmiRM5OYYsUkZOzMEeyQomXs=;
        b=Ln5qJ3MJ5Vrh56VQksMixEDY7RD2HgK8CavR3gxo2pncXMQ1E614BQyxk15KTY0lid
         L31W9hJFbMn72n4ivj3L6i4qKAJ+HMvSEqbIaCPFOW0jo9cWk3rnIFtLym6oGItU0mVV
         1yz8oj+bMPLBi70NKS/KijdZdXg2rQiZ28Esp0a7ffmnqVZEk7TK52Skud52YUJdNYbd
         wKtNo/Uu4wk2VYS6btv9v5WQePOle+UCmPaORJwUveYG4/d36DhQn5li/PNM/qB6gnU5
         vwdsYFJYidcot/4KhCVw8Fn7hXUpENpOU8AyDF302BSZurAvBeXeHJtVcLtFN287IhQ/
         gPXg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:from:to:cc:references:message-id:date
         :user-agent:mime-version:in-reply-to:content-language;
        bh=V//P/BTJwE4NM4HkK8XBmiRM5OYYsUkZOzMEeyQomXs=;
        b=SOAkAsWXOYy/FCTSa1BW1taZ1DDlmJtcMU6jwnIoYhxa23V6Y2l/1fQcLW297Q6sx0
         zc70tUu6W+e2i834IoeEyHHkJqmgOxvQsMowiElivrna4YwZEmJ80O4iQMYdaZJio/c2
         qrVFlxz87BM6AbHfscTRF1SXGVXVhhU7kBD3S87K/KY2lLSU2VCBkpowQIWUxffVqfpl
         2qR0ht9ocuoFwRmsBC2KSlNsuUg/VsPi0tqTui2RH0lUR4rgZo6e+omjaxcrQvGkzxcf
         jgSDBgahWMeX6O2fxdM6MZIsyFIy+QxivCrSGPz49N9Bc0HAdUuUoVEwae/V9Dva/Aoa
         grkg==
X-Gm-Message-State: AOAM5304PhPqe1DLD5X/MnfFM3fXZyCXA0lb+oERjzmCsSxu6fVn8Uws
	NzzC1SYo7RrFLbC0YMHiiZI=
X-Google-Smtp-Source: ABdhPJyI1/aGInkU5Bqyx0cGgOYsFTsTtX8fX8YBWy589EbqtizcDFePWob9RgoCCsNT+xd7dwfeSw==
X-Received: by 2002:a5d:4d4c:: with SMTP id a12mr10225834wru.154.1612453141158;
        Thu, 04 Feb 2021 07:39:01 -0800 (PST)
Subject: Re: [PATCH for-4.15] tools/tests: Introduce a test for
 acquire_resource
From: Oleksandr <olekstysh@gmail.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20210202190937.30206-1-andrew.cooper3@citrix.com>
 <CAPD2p-nPr_OD7cMT-Ny6vyGsY4nMXuENgrqv0pfYRYtE5srnkQ@mail.gmail.com>
Message-ID: <6586dd8a-8596-0226-f3d3-02cd9e361a52@gmail.com>
Date: Thu, 4 Feb 2021 17:38:54 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <CAPD2p-nPr_OD7cMT-Ny6vyGsY4nMXuENgrqv0pfYRYtE5srnkQ@mail.gmail.com>
Content-Type: multipart/alternative;
 boundary="------------A29EFBA36C0D85FB73FD573F"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------A29EFBA36C0D85FB73FD573F
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit


On 04.02.21 15:29, Oleksandr Tyshchenko wrote:

Hi Andrew

>
> Hi Andrew.
> [Sorry for the possible format issues]
>
> On Tue, Feb 2, 2021 at 9:10 PM Andrew Cooper
> <andrew.cooper3@citrix.com> wrote:
>
>     For now, simply try to map 40 frames of grant table.  This catches
>     most of the basic errors with resource sizes found and fixed
>     through the 4.15 dev window.
>
>     Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>     ---
>     CC: Ian Jackson <iwj@xenproject.org>
>     CC: Wei Liu <wl@xen.org>
>     CC: Jan Beulich <JBeulich@suse.com>
>     CC: Roger Pau Monné <roger.pau@citrix.com>
>     CC: Wei Liu <wl@xen.org>
>     CC: Stefano Stabellini <sstabellini@kernel.org>
>     CC: Julien Grall <julien@xen.org>
>     CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
>     CC: Oleksandr <olekstysh@gmail.com>
>
>     Fails against current staging:
>
>       XENMEM_acquire_resource tests
>       Test x86 PV
>         d7: grant table
>           Fail: Map 7 - Argument list too long
>       Test x86 PVH
>         d8: grant table
>           Fail: Map 7 - Argument list too long
>
>     The fix has already been posted:
>       [PATCH v9 01/11] xen/memory: Fix mapping grant tables with
>       XENMEM_acquire_resource
>
>     and the fixed run is:
>
>       XENMEM_acquire_resource tests
>       Test x86 PV
>         d7: grant table
>       Test x86 PVH
>         d8: grant table
>
>     ARM folk: would you mind testing this?  I'm pretty sure the create
>     parameters are suitable, but I don't have any way to test this.
>
> Yes, as agreed on IRC, I will test this later today and report the
> results.


OK, well, I decided to test right away because I'm going to be busy
in the evening.

My tree is based on:
9dc687f x86/debug: fix page-overflow bug in dbg_rw_guest_mem

I noticed an error while building this test in my Yocto environment
on Arm:


/media/b/build/build/tmp/work/x86_64-xt-linux/dom0-image-thin-initramfs/1.0-r0/repo/build/tmp/work/aarch64-poky-linux/xen/4.14.0+gitAUTOINC+e00e0f38c3-r0/recipe-sysroot-native/usr/bin/aarch64-poky-linux/../../libexec/aarch64-poky-linux/gcc/aarch64-poky-linux/8.2.0/ld: 
test-resource.o: undefined reference to symbol 
'xendevicemodel_open@@VERS_1.0'
/media/b/build/build/tmp/work/x86_64-xt-linux/dom0-image-thin-initramfs/1.0-r0/repo/build/tmp/work/aarch64-poky-linux/xen/4.14.0+gitAUTOINC+e00e0f38c3-r0/recipe-sysroot-native/usr/bin/aarch64-poky-linux/../../libexec/aarch64-poky-linux/gcc/aarch64-poky-linux/8.2.0/ld: 
/media/b/build/build/tmp/work/x86_64-xt-linux/dom0-image-thin-initramfs/1.0-r0/repo/build/tmp/work/aarch64-poky-linux/xen/4.14.0+gitAUTOINC+e00e0f38c3-r0/git/tools/tests/resource/../../../tools/libs/devicemodel/libxendevicemodel.so.1: 
error adding symbols: DSO missing from command line
collect2: error: ld returned 1 exit status
Makefile:38: recipe for target 'test-resource' failed


I didn't investigate whether it is related or not; I just added the 
following:

diff --git a/tools/tests/resource/Makefile b/tools/tests/resource/Makefile
index 8a3373e..03b19ef 100644
--- a/tools/tests/resource/Makefile
+++ b/tools/tests/resource/Makefile
@@ -32,6 +32,7 @@ CFLAGS += $(APPEND_CFLAGS)

 LDFLAGS += $(LDLIBS_libxenctrl)
 LDFLAGS += $(LDLIBS_libxenforeignmemory)
+LDFLAGS += $(LDLIBS_libxendevicemodel)
 LDFLAGS += $(APPEND_LDFLAGS)

 test-resource: test-resource.o


I got the following result both without and with "[PATCH v9 01/11] 
xen/memory: Fix mapping grant tables with XENMEM_acquire_resource":

root@generic-armv8-xt-dom0:~# test-resource
XENMEM_acquire_resource tests
Test ARM
  d3: grant table
xenforeignmemory: error: ioctl failed: Invalid argument
    Fail: Get size: 22 - Invalid argument


>
>
>     I've got more plans for this, but insufficient time right now.
>     ---
>      tools/tests/Makefile                 |   1 +
>      tools/tests/resource/.gitignore      |   1 +
>      tools/tests/resource/Makefile        |  40 ++++++++++
>      tools/tests/resource/test-resource.c | 138 +++++++++++++++++++++++++++++++++++
>      4 files changed, 180 insertions(+)
>      create mode 100644 tools/tests/resource/.gitignore
>      create mode 100644 tools/tests/resource/Makefile
>      create mode 100644 tools/tests/resource/test-resource.c
>
>     diff --git a/tools/tests/Makefile b/tools/tests/Makefile
>     index fc9b715951..c45b5fbc1d 100644
>     --- a/tools/tests/Makefile
>     +++ b/tools/tests/Makefile
>     @@ -2,6 +2,7 @@ XEN_ROOT = $(CURDIR)/../..
>      include $(XEN_ROOT)/tools/Rules.mk
>
>      SUBDIRS-y :=
>     +SUBDIRS-y += resource
>      SUBDIRS-$(CONFIG_X86) += cpu-policy
>      SUBDIRS-$(CONFIG_X86) += mce-test
>      ifneq ($(clang),y)
>     diff --git a/tools/tests/resource/.gitignore
>     b/tools/tests/resource/.gitignore
>     new file mode 100644
>     index 0000000000..4872e97d4b
>     --- /dev/null
>     +++ b/tools/tests/resource/.gitignore
>     @@ -0,0 +1 @@
>     +test-resource
>     diff --git a/tools/tests/resource/Makefile
>     b/tools/tests/resource/Makefile
>     new file mode 100644
>     index 0000000000..8a3373e786
>     --- /dev/null
>     +++ b/tools/tests/resource/Makefile
>     @@ -0,0 +1,40 @@
>     +XEN_ROOT = $(CURDIR)/../../..
>     +include $(XEN_ROOT)/tools/Rules.mk
>     +
>     +TARGET := test-resource
>     +
>     +.PHONY: all
>     +all: $(TARGET)
>     +
>     +.PHONY: run
>     +run: $(TARGET)
>     +	./$(TARGET)
>     +
>     +.PHONY: clean
>     +clean:
>     +	$(RM) -f -- *.o .*.d .*.d2 $(TARGET)
>     +
>     +.PHONY: distclean
>     +distclean: clean
>     +	$(RM) -f -- *~
>     +
>     +.PHONY: install
>     +install: all
>     +
>     +.PHONY: uninstall
>     +uninstall:
>     +
>     +CFLAGS += -Werror -D__XEN_TOOLS__
>     +CFLAGS += $(CFLAGS_xeninclude)
>     +CFLAGS += $(CFLAGS_libxenctrl)
>     +CFLAGS += $(CFLAGS_libxenforeignmemory)
>     +CFLAGS += $(APPEND_CFLAGS)
>     +
>     +LDFLAGS += $(LDLIBS_libxenctrl)
>     +LDFLAGS += $(LDLIBS_libxenforeignmemory)
>     +LDFLAGS += $(APPEND_LDFLAGS)
>     +
>     +test-resource: test-resource.o
>     +	$(CC) $(LDFLAGS) -o $@ $<
>     +
>     +-include $(DEPS_INCLUDE)
>     diff --git a/tools/tests/resource/test-resource.c
>     b/tools/tests/resource/test-resource.c
>     new file mode 100644
>     index 0000000000..81a2a5cd12
>     --- /dev/null
>     +++ b/tools/tests/resource/test-resource.c
>     @@ -0,0 +1,138 @@
>     +#include <err.h>
>     +#include <errno.h>
>     +#include <error.h>
>     +#include <stdio.h>
>     +#include <string.h>
>     +#include <sys/mman.h>
>     +
>     +#include <xenctrl.h>
>     +#include <xenforeignmemory.h>
>     +#include <xendevicemodel.h>
>     +#include <xen-tools/libs.h>
>     +
>     +static unsigned int nr_failures;
>     +#define fail(fmt, ...)                          \
>     +({                                              \
>     +    nr_failures++;                              \
>     +    (void)printf(fmt, ##__VA_ARGS__);           \
>     +})
>     +
>     +static xc_interface *xch;
>     +static xenforeignmemory_handle *fh;
>     +static xendevicemodel_handle *dh;
>     +
>     +static void test_gnttab(uint32_t domid, unsigned int nr_frames)
>     +{
>     +    xenforeignmemory_resource_handle *res;
>     +    void *addr = NULL;
>     +    size_t size;
>     +    int rc;
>     +
>     +    printf("  d%u: grant table\n", domid);
>     +
>     +    rc = xenforeignmemory_resource_size(
>     +        fh, domid, XENMEM_resource_grant_table,
>     +        XENMEM_resource_grant_table_id_shared, &size);
>     +    if ( rc )
>     +        return fail("    Fail: Get size: %d - %s\n", errno, strerror(errno));
>     +
>     +    if ( (size >> XC_PAGE_SHIFT) != nr_frames )
>     +        return fail("    Fail: Get size: expected %u frames, got %zu\n",
>     +                    nr_frames, size >> XC_PAGE_SHIFT);
>     +
>     +    res = xenforeignmemory_map_resource(
>     +        fh, domid, XENMEM_resource_grant_table,
>     +        XENMEM_resource_grant_table_id_shared, 0, size >> XC_PAGE_SHIFT,
>     +        &addr, PROT_READ | PROT_WRITE, 0);
>     +    if ( !res )
>     +        return fail("    Fail: Map %d - %s\n", errno, strerror(errno));
>     +
>     +    rc = xenforeignmemory_unmap_resource(fh, res);
>     +    if ( rc )
>     +        return fail("    Fail: Unmap %d - %s\n", errno, strerror(errno));
>     +}
>     +
>     +static void test_domain_configurations(void)
>     +{
>     +    static struct test {
>     +        const char *name;
>     +        struct xen_domctl_createdomain create;
>     +    } tests[] = {
>     +#if defined(__x86_64__) || defined(__i386__)
>     +        {
>     +            .name = "x86 PV",
>     +            .create = {
>     +                .max_vcpus = 2,
>     +                .max_grant_frames = 40,
>     +            },
>     +        },
>     +        {
>     +            .name = "x86 PVH",
>     +            .create = {
>     +                .flags = XEN_DOMCTL_CDF_hvm,
>     +                .max_vcpus = 2,
>     +                .max_grant_frames = 40,
>     +                .arch = {
>     +                    .emulation_flags = XEN_X86_EMU_LAPIC,
>     +                },
>     +            },
>     +        },
>     +#elif defined(__aarch64__) || defined(__arm__)
>     +        {
>     +            .name = "ARM",
>     +            .create = {
>     +                .flags = XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap,
>     +                .max_vcpus = 2,
>     +                .max_grant_frames = 40,
>     +            },
>     +        },
>     +#endif
>     +    };
>     +
>     +    for ( unsigned int i = 0; i < ARRAY_SIZE(tests); ++i )
>     +    {
>     +        struct test *t = &tests[i];
>     +        uint32_t domid = 0;
>     +        int rc;
>     +
>     +        printf("Test %s\n", t->name);
>     +
>     +        rc = xc_domain_create(xch, &domid, &t->create);
>     +        if ( rc )
>     +        {
>     +            if ( errno == EINVAL || errno == EOPNOTSUPP )
>     +                printf("  Skip: %d - %s\n", errno, strerror(errno));
>     +            else
>     +                fail("  Domain create failure: %d - %s\n",
>     +                     errno, strerror(errno));
>     +            continue;
>     +        }
>     +
>     +        test_gnttab(domid, t->create.max_grant_frames);
>     +
>     +        rc = xc_domain_destroy(xch, domid);
>     +        if ( rc )
>     +            fail("  Failed to destroy domain: %d - %s\n",
>     +                 errno, strerror(errno));
>     +    }
>     +}
>     +
>     +int main(int argc, char **argv)
>     +{
>     +    printf("XENMEM_acquire_resource tests\n");
>     +
>     +    xch = xc_interface_open(NULL, NULL, 0);
>     +    fh = xenforeignmemory_open(NULL, 0);
>     +    dh = xendevicemodel_open(NULL, 0);
>     +
>     +    if ( !xch )
>     +        err(1, "xc_interface_open");
>     +    if ( !fh )
>     +        err(1, "xenforeignmemory_open");
>     +    if ( !dh )
>     +        err(1, "xendevicemodel_open");
>     +
>     +    test_domain_configurations();
>     +
>     +    return !!nr_failures;
>     +}
>     -- 
>     2.11.0
>
>
>
> -- 
> Regards,
>
> Oleksandr Tyshchenko

-- 
Regards,

Oleksandr Tyshchenko



--------------A29EFBA36C0D85FB73FD573F--


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 15:44:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 15:44:47 +0000
Subject: Re: [PATCH for-4.15] tools/tests: Introduce a test for
 acquire_resource
To: Oleksandr <olekstysh@gmail.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
References: <20210202190937.30206-1-andrew.cooper3@citrix.com>
 <CAPD2p-nPr_OD7cMT-Ny6vyGsY4nMXuENgrqv0pfYRYtE5srnkQ@mail.gmail.com>
 <6586dd8a-8596-0226-f3d3-02cd9e361a52@gmail.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <79a41e5b-0717-f31b-3cec-60b716777603@citrix.com>
Date: Thu, 4 Feb 2021 15:44:24 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
In-Reply-To: <6586dd8a-8596-0226-f3d3-02cd9e361a52@gmail.com>
Content-Type: multipart/alternative;
 boundary="------------1727A7134F178C16BAEA78C6"
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0418.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:18b::9) To BYAPR03MB4728.namprd03.prod.outlook.com
 (2603:10b6:a03:13a::24)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 4ac715cf-47a1-43a9-9679-08d8c923c327
X-MS-TrafficTypeDiagnostic: BYAPR03MB4535:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB4535ECD3BE1F7A5DF0DE92BFBAB39@BYAPR03MB4535.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:185;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 2G/NjEvKvE/x7k8I/Vzk4frs4Kf0s1bWLYjQrEhGdLtneh+jvnrutX+xieXGEf6uL80RFBWa/QOiRb2iT42UnrIBJuiiWqBMPD/AmtaZfOJAfh4R3EF0K3H6mdqGdJFiac6ABHmIzrbNk6r1ua+FuuIqjjnVvWtJYl8KTWA70EYohF5UQHM685xdeOeRX3CPkhrI8qnYCSlzESWJsp1vYISTR5TJaS05VGTz9fmdlcw+IKGzsj/YS8yAaSkcpXwT6lTzvSy+3T6EKXtm6mdYE96MyRBODj1seIEzEzojWbgdp5kAgIZa0daSU3CcqFZjG+8+/fCmO90lArBgToR0Uz9yjlCm0+E2I8zeLFE0Mgp9Lc11cIPeMwwC19kkcLDuojm9/eZ2F6ykfmHBu7KiM2jDoCsLjrLMXTnUfQo9l5th0EN9c7UetFd033Pq02rIcdW3sY3kxqcGiF3jCr5MoSyXTpmDk0/pJXs+BA6nQkVdH5f0RPyfQZU9QL1rTrSpk404H4CS3+6mm8cfRERFoFw/jgIk45JbrVTQlyXC8rGwbt4fnk8i7FmdJMDuc4F3xOwvV1zzy7+g580VyEWUn82aa6xJzCJ8I9Uwma9z6zI=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB4728.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(366004)(396003)(376002)(346002)(136003)(39850400004)(36756003)(26005)(6916009)(66946007)(86362001)(6666004)(66556008)(31686004)(31696002)(186003)(2616005)(8676002)(6486002)(956004)(478600001)(2906002)(16576012)(53546011)(5660300002)(8936002)(316002)(16526019)(4326008)(54906003)(66476007)(33964004)(83380400001)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?eS9penVqT29xMkJqMHJUSDZOUkZBYi81MmNFajBkSXFlN0Y2OVZwdzZiMFFp?=
 =?utf-8?B?aXhpRGExc3pvMGpOTnpOYlA3MVh6cFFjdXVuS1UxZE9WUHJ4WmV1elVLK1RB?=
 =?utf-8?B?NkRFTGlBK3VaSWQ1NDZkek5SMlEyWTVBelhhSXJTVjJDeXdkQkdLS2Rmcmpm?=
 =?utf-8?B?M2YyNnIzZW8wUkEyS2w2cWp6U1lEMnJaUzJhenhoRDBObWNiUjJwZ1JVTisr?=
 =?utf-8?B?RUpEalp2Y0t0S21jRE1uNjBacG1COFpJd3hoYXlsd2FCWXRiWUJvL1hMdWJ1?=
 =?utf-8?B?OXJFdE9hUE43WTBqQWNHNE9nY1pUcjM0dE55anNTNjVVNEJ2SldYcFRHN0lY?=
 =?utf-8?B?RmU0dXQ0Ty9nN084bVM2NnNFeHd6WnRiRWxYdHhraWJES1k2S0dMVXRHVjA4?=
 =?utf-8?B?ZG41SUt5N01aWGdsQ2RRbnNWcHpuMG5DMGdYV1Y5T3J4bkw1SXBjS1VYdEll?=
 =?utf-8?B?QzhmQ3ZyTytpKzV5czVTRWQ5NUtIM1BRdEJ2UEJZd01RQmlZSHRndVlqdFpZ?=
 =?utf-8?B?Ujk5R0NxSE44bU1sZmU2TEs5R1NleFBpMDFZRldrM01HcWk5V0xvWWk0OGtE?=
 =?utf-8?B?VHlxbTNOY1BSTkJ4SVFtR2pxeWd3VHhYZ3FwVCtmRVpRc3lvaVdWRjVHMTFZ?=
 =?utf-8?B?RGJFd3U1cWtrb1YrbGZvS2xPVFlkdm5lckZzeTFlS3Z3WXJGYy9QMVlCU2Qz?=
 =?utf-8?B?bkRteGgyb2NKRXhQUFE4dWtTaDhqMXRleG9XRU81RWU5Vm8yVmJsWittMDJT?=
 =?utf-8?B?cU8wdy9GK0hDWHVOd3UrQ1M1K1RMaExTZ1h3Vmc5MDFOSlBUVjUzL3NRZ0Qz?=
 =?utf-8?B?dmJHZHJFV2hFbkM5MUxaK051WlRNVGVkN3FsNFpBelRTc28xQ3JrWms3eVVJ?=
 =?utf-8?B?S0JlQU83WDE4R0poWklBMXlSOW5mcXFRNWZSVHVGZjFrdllqTmJhVGxEL1ZP?=
 =?utf-8?B?N0g0LzM4eDhvQ3pOWGV6N2NGZVIyblZSYkxuYytHUFNLZU5OSjhEK0ZVa3R1?=
 =?utf-8?B?akVTYmh2R1RDRE9PbzZVMWxnT2QydG5QK1M2RmgxbG4xb1lXOWpZZ1hVSWpu?=
 =?utf-8?B?WW9wbDJ2WWRCbTYyeXNhd0lsWGV5dmVITXE3b1VpSnJ0UEtPQkdHazFlZmZS?=
 =?utf-8?B?VFZCQVpvYzNNVitEZ24raGh3S2MvUnQ4U0hhNmlCMzlzZ2FUR2ZxUmtZZ3hV?=
 =?utf-8?B?eUJFd3pMZlJCdkMxbWhCOG54b3N1L2k5ZnZOM0xHeTVzYllkN1FRWit5Y0Uy?=
 =?utf-8?B?UzdZM1BaVHQzejgzVHlsM2ZNUTdOM2IzbFFtOEVFNEVpSlVQa3ZjVWM1a0pp?=
 =?utf-8?B?UVdiZlI0K29YZVFkbSt1RjB0b3pqWDVVR2h1RlVEZndScEQvdDNWWVhpakdy?=
 =?utf-8?B?bjVQNWR6ZXh0dWtGQUxDL0VyR2pDMW11NW9BR1dMWmdqaEFyM0RXbmJCOWJM?=
 =?utf-8?Q?i7L3AjRh?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 4ac715cf-47a1-43a9-9679-08d8c923c327
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB4728.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Feb 2021 15:44:31.1123
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: iZ6hB6MMDo80P2vLQ5wRp3BS9dl4Qnp4COOS0Dvn5YB5kC7F4c3u/qAHyl/7gCLYcchSZCJr1hZOoYmDmoWHaEwHjrlE4B59Ixmt0EORN1g=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4535
X-OriginatorOrg: citrix.com

--------------1727A7134F178C16BAEA78C6
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit

On 04/02/2021 15:38, Oleksandr wrote:
>>
>> Hi Andrew.
>> [Sorry for the possible format issues]
>>
>> On Tue, Feb 2, 2021 at 9:10 PM Andrew Cooper
>> <andrew.cooper3@citrix.com <mailto:andrew.cooper3@citrix.com>> wrote:
>>
>>     For now, simply try to map 40 frames of grant table.  This
>>     catches most of the
>>     basic errors with resource sizes found and fixed through the 4.15
>>     dev window.
>>
>>     Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com
>>     <mailto:andrew.cooper3@citrix.com>>
>>     ---
>>     CC: Ian Jackson <iwj@xenproject.org <mailto:iwj@xenproject.org>>
>>     CC: Wei Liu <wl@xen.org <mailto:wl@xen.org>>
>>     CC: Jan Beulich <JBeulich@suse.com <mailto:JBeulich@suse.com>>
>>     CC: Roger Pau Monné <roger.pau@citrix.com
>>     <mailto:roger.pau@citrix.com>>
>>     CC: Wei Liu <wl@xen.org <mailto:wl@xen.org>>
>>     CC: Stefano Stabellini <sstabellini@kernel.org
>>     <mailto:sstabellini@kernel.org>>
>>     CC: Julien Grall <julien@xen.org <mailto:julien@xen.org>>
>>     CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com
>>     <mailto:Volodymyr_Babchuk@epam.com>>
>>     CC: Oleksandr <olekstysh@gmail.com <mailto:olekstysh@gmail.com>>
>>
>>     Fails against current staging:
>>
>>       XENMEM_acquire_resource tests
>>       Test x86 PV
>>         d7: grant table
>>           Fail: Map 7 - Argument list too long
>>       Test x86 PVH
>>         d8: grant table
>>           Fail: Map 7 - Argument list too long
>>
>>     The fix has already been posted:
>>       [PATCH v9 01/11] xen/memory: Fix mapping grant tables with
>>     XENMEM_acquire_resource
>>
>>     and the fixed run is:
>>
>>       XENMEM_acquire_resource tests
>>       Test x86 PV
>>         d7: grant table
>>       Test x86 PVH
>>         d8: grant table
>>
>>     ARM folk: would you mind testing this?  I'm pretty sure the
>>     create parameters
>>     are suitable, but I don't have any way to test this.
>>
>> Yes, as agreed on IRC, I will test this later today and report the
>> results.
>
>
> OK, well, I decided to test right away because I am going to be busy in
> the evening.
>
> My tree is based on:
> 9dc687f x86/debug: fix page-overflow bug in dbg_rw_guest_mem
>
> I noticed an error while building this test in my Yocto environment
> on Arm:
>
>
> /media/b/build/build/tmp/work/x86_64-xt-linux/dom0-image-thin-initramfs/1.0-r0/repo/build/tmp/work/aarch64-poky-linux/xen/4.14.0+gitAUTOINC+e00e0f38c3-r0/recipe-sysroot-native/usr/bin/aarch64-poky-linux/../../libexec/aarch64-poky-linux/gcc/aarch64-poky-linux/8.2.0/ld:
> test-resource.o: undefined reference to symbol
> 'xendevicemodel_open@@VERS_1.0'
> /media/b/build/build/tmp/work/x86_64-xt-linux/dom0-image-thin-initramfs/1.0-r0/repo/build/tmp/work/aarch64-poky-linux/xen/4.14.0+gitAUTOINC+e00e0f38c3-r0/recipe-sysroot-native/usr/bin/aarch64-poky-linux/../../libexec/aarch64-poky-linux/gcc/aarch64-poky-linux/8.2.0/ld:
> /media/b/build/build/tmp/work/x86_64-xt-linux/dom0-image-thin-initramfs/1.0-r0/repo/build/tmp/work/aarch64-poky-linux/xen/4.14.0+gitAUTOINC+e00e0f38c3-r0/git/tools/tests/resource/../../../tools/libs/devicemodel/libxendevicemodel.so.1:
> error adding symbols: DSO missing from command line
> collect2: error: ld returned 1 exit status
> Makefile:38: recipe for target 'test-resource' failed
>
>
> I didn't investigate whether it is related or not; I just added the
> following:
>
> diff --git a/tools/tests/resource/Makefile b/tools/tests/resource/Makefile
> index 8a3373e..03b19ef 100644
> --- a/tools/tests/resource/Makefile
> +++ b/tools/tests/resource/Makefile
> @@ -32,6 +32,7 @@ CFLAGS += $(APPEND_CFLAGS)
>  
>  LDFLAGS += $(LDLIBS_libxenctrl)
>  LDFLAGS += $(LDLIBS_libxenforeignmemory)
> +LDFLAGS += $(LDLIBS_libxendevicemodel)
>  LDFLAGS += $(APPEND_LDFLAGS)
>  
>  test-resource: test-resource.o
>

Urgh yes - I didn't fully strip out the libxendevicemodel uses.  I'll
fix that, rather than having this test link against a library which it
doesn't use (yet).

>
> I got the following result both without and with "[PATCH v9 01/11]
> xen/memory: Fix mapping grant tables with XENMEM_acquire_resource"
>
> root@generic-armv8-xt-dom0:~# test-resource
> XENMEM_acquire_resource tests
> Test ARM
>   d3: grant table
> xenforeignmemory: error: ioctl failed: Invalid argument
>     Fail: Get size: 22 - Invalid argument
>

Ah yes - you also need a bugfix in the dom0 kernel.  "xen/privcmd: allow
fetching resource sizes" which is in mainline, and also backported to
the LTS trees.

However, this did get past the bit I wasn't sure about for ARM, which is
good.

~Andrew

--------------1727A7134F178C16BAEA78C6--


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 15:53:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 15:53:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81356.150233 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7gx6-00019R-Ck; Thu, 04 Feb 2021 15:53:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81356.150233; Thu, 04 Feb 2021 15:53:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7gx6-00019K-8i; Thu, 04 Feb 2021 15:53:52 +0000
Received: by outflank-mailman (input) for mailman id 81356;
 Thu, 04 Feb 2021 15:53:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+S2h=HG=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l7gx4-00019F-Fi
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 15:53:50 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7381bd7e-3475-40b5-8258-0f62589acff6;
 Thu, 04 Feb 2021 15:53:47 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 08107ACD4;
 Thu,  4 Feb 2021 15:53:47 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7381bd7e-3475-40b5-8258-0f62589acff6
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612454027; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=qI1xFSgH5+iP2jK3wwU0qrR2eCDVoLxcBN2kgpHJLWc=;
	b=I3XeZpfYSfyLwE26ePiqz+1rS3vA/PsPPC0LAAqgeWfcsz2OluvR56WfOkA7nO46c4YyC+
	QflEISt+tyD7CHWnqbOTw9xdnaNazg5+ONP5B3ZG1EPd1ytMZAaOSwkB5WIPT4cY93NJZF
	E13oWFhfhoLcrx5C31+pKiZbui1z82Q=
Subject: Re: [PATCH for-4.12 and older] x86/msr: fix handling of
 MSR_IA32_PERF_{STATUS/CTL} (again)
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 James Dingwall <james-xen@dingwall.me.uk>
References: <1f4a8233-7f7b-dbd9-26f5-69e3eb659fa2@suse.com>
Message-ID: <a04862e5-b534-5f38-e072-be63b3fb2152@suse.com>
Date: Thu, 4 Feb 2021 16:53:47 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <1f4a8233-7f7b-dbd9-26f5-69e3eb659fa2@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 04.02.2021 10:36, Jan Beulich wrote:
> X86_VENDOR_* aren't bit masks in the older trees.
> 
> Reported-by: James Dingwall <james@dingwall.me.uk>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/xen/arch/x86/msr.c
> +++ b/xen/arch/x86/msr.c
> @@ -226,7 +226,8 @@ int guest_rdmsr(const struct vcpu *v, ui
>           */
>      case MSR_IA32_PERF_STATUS:
>      case MSR_IA32_PERF_CTL:
> -        if ( !(cp->x86_vendor & (X86_VENDOR_INTEL | X86_VENDOR_CENTAUR)) )
> +        if ( cp->x86_vendor != X86_VENDOR_INTEL &&
> +             cp->x86_vendor != X86_VENDOR_CENTAUR )
>              goto gp_fault;
>  
>          *val = 0;

Darn - this was only half of it. There's a similar construct
in guest_wrmsr() which also wants replacing.

Jan



From xen-devel-bounces@lists.xenproject.org Thu Feb 04 15:59:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 15:59:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81358.150245 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7h2F-0001Y4-0t; Thu, 04 Feb 2021 15:59:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81358.150245; Thu, 04 Feb 2021 15:59:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7h2E-0001Xx-Ti; Thu, 04 Feb 2021 15:59:10 +0000
Received: by outflank-mailman (input) for mailman id 81358;
 Thu, 04 Feb 2021 15:59:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KAxh=HG=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l7h2D-0001Xs-14
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 15:59:09 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cda7f7fc-3bff-4e31-8a30-1b96c2b82cd7;
 Thu, 04 Feb 2021 15:59:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cda7f7fc-3bff-4e31-8a30-1b96c2b82cd7
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612454347;
  h=from:to:cc:subject:date:message-id:mime-version;
  bh=CwKJFclLx3+gmhV9TPR9itejegLsKMZkkdIpvkqQIZI=;
  b=BO8zQgH8Z/96+J6cCxiFPswgzHJ7XEpXJQV5Fiz+lThDrSDusQlFzqKf
   5jZlyf3HOfIrVkXBizg/kr0b378YKh1YYcazaXALzzX1iuKHx5zznxdoy
   icZK2esCJITvWDyrfshdL8K0VXiPxDOmlEoSwZz7jQRt9gxkQ3sdtQimV
   M=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: YIlOc9EbGCS8tIed7Mj/RvcjpP37tzVxzvotJEyUykYWXPme0aWaIvubFVKPhDByL8kAYoQ5By
 NnxQ3dBYgEVJGWz1EvdKiYZ1EVTpM/WguIJyoIVfHMZxHdqKd2q+S/QYLRs+kjck/zAabTKqOj
 qmqJDGIkD5kCI6hmk3sLyf/XtgMlbhy+r0BQ2kSSDnNIzngq/wJf12+xFpik4IX1Ie6Tosy5DO
 fp7mrp+/RF3hF4eqpJOVTi+DWqgHJdOJTwZI0KTPvD8fMlqIZxTClcpOZwU3jnTT6QRrDAv/uY
 +wI=
X-SBRS: 4.0
X-MesageID: 36517128
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,401,1602561600"; 
   d="scan'208";a="36517128"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Julien Grall
	<julien.grall@arm.com>, Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Chen <Wei.Chen@arm.com>
Subject: [PATCH for-4.15] libs/devicemodel: Fix ABI breakage from xendevicemodel_set_irq_level()
Date: Thu, 4 Feb 2021 15:58:50 +0000
Message-ID: <20210204155850.23649-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain

It is not permitted to edit the VERS clause for a version in a release of Xen.

Revert xendevicemodel_set_irq_level()'s inclusion in .so.1.2 and bump the
library minor version to .so.1.4 instead.

Fixes: 5d752df85f ("xen/dm: Introduce xendevicemodel_set_irq_level DM op")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Ian Jackson <iwj@xenproject.org>
CC: Wei Liu <wl@xen.org>
CC: Julien Grall <julien.grall@arm.com>
CC: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Wei Chen <Wei.Chen@arm.com>

Critical to include in 4.15, as this is an ABI breakage.  Reverting the broken
change doesn't look to be a practical option.
---
 tools/libs/devicemodel/Makefile              | 2 +-
 tools/libs/devicemodel/libxendevicemodel.map | 6 +++++-
 2 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/tools/libs/devicemodel/Makefile b/tools/libs/devicemodel/Makefile
index 500de7adc5..3e50ff6d90 100644
--- a/tools/libs/devicemodel/Makefile
+++ b/tools/libs/devicemodel/Makefile
@@ -2,7 +2,7 @@ XEN_ROOT = $(CURDIR)/../../..
 include $(XEN_ROOT)/tools/Rules.mk
 
 MAJOR    = 1
-MINOR    = 3
+MINOR    = 4
 
 SRCS-y                 += core.c
 SRCS-$(CONFIG_Linux)   += common.c
diff --git a/tools/libs/devicemodel/libxendevicemodel.map b/tools/libs/devicemodel/libxendevicemodel.map
index a0c30125de..733549327b 100644
--- a/tools/libs/devicemodel/libxendevicemodel.map
+++ b/tools/libs/devicemodel/libxendevicemodel.map
@@ -32,10 +32,14 @@ VERS_1.2 {
 	global:
 		xendevicemodel_relocate_memory;
 		xendevicemodel_pin_memory_cacheattr;
-		xendevicemodel_set_irq_level;
 } VERS_1.1;
 
 VERS_1.3 {
 	global:
 		xendevicemodel_modified_memory_bulk;
 } VERS_1.2;
+
+VERS_1.4 {
+	global:
+		xendevicemodel_set_irq_level;
+} VERS_1.3;
-- 
2.11.0
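
[Editorial note: the reason a released VERS node must not be edited is how ELF
symbol versioning works: a consumer binary records the (symbol, version) pair
it linked against, e.g. xendevicemodel_set_irq_level@@VERS_1.2, and the
dynamic linker requires exactly that pair at run time. A binary built against
a development library whose VERS_1.2 had been retrofitted with the symbol will
then fail to load against any released .so.1.2. A sketch of the version-script
convention the patch restores (node names as in the patch above):]

```
/* Each VERS_* node is frozen once it ships in a release; new symbols
 * go into a NEW node chained onto the previous one. */
VERS_1.3 {
	global:
		xendevicemodel_modified_memory_bulk;
} VERS_1.2;

VERS_1.4 {
	global:
		xendevicemodel_set_irq_level;   /* moved out of VERS_1.2 */
} VERS_1.3;
```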



From xen-devel-bounces@lists.xenproject.org Thu Feb 04 16:16:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 16:16:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81360.150257 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7hIT-0003yI-GJ; Thu, 04 Feb 2021 16:15:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81360.150257; Thu, 04 Feb 2021 16:15:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7hIT-0003yB-CT; Thu, 04 Feb 2021 16:15:57 +0000
Received: by outflank-mailman (input) for mailman id 81360;
 Thu, 04 Feb 2021 16:15:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7hIS-0003y2-BD; Thu, 04 Feb 2021 16:15:56 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7hIS-0006rK-2m; Thu, 04 Feb 2021 16:15:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7hIR-0004Mi-Ro; Thu, 04 Feb 2021 16:15:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l7hIR-0003Bg-RJ; Thu, 04 Feb 2021 16:15:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=i1PQkXOM4QRAjsOWQRMblW2J93LX6M5jqsKtdZFhfVo=; b=H+v+xKLVB3DwrkvrMi7K3j729Q
	cO2mNG1+K/Y1pA1J5xS54jcMCtO0C+N//3hW+fG2/BhXt7IYHdffUYeprIGgM30+oyHJvZfcHhCcn
	q5zRmjdKDcEP3zjx2idPMELouUi6usLAqYi5vCjkAUsofsv9kYZ1fpZhyd+S2PXqLRkU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159014-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 159014: trouble: blocked/broken/pass
X-Osstest-Failures:
    xen-unstable-smoke:build-arm64-xsm:<job status>:broken:regression
    xen-unstable-smoke:build-arm64-xsm:host-install(4):broken:regression
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=92f5ffa58d188c9f9a9f1bcdccb6d6348d9df612
X-Osstest-Versions-That:
    xen=d203dbd69f1a02577dd6fe571d72beb980c548a6
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 04 Feb 2021 16:15:55 +0000

flight 159014 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159014/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-xsm                 <job status>                 broken
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 158993

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  92f5ffa58d188c9f9a9f1bcdccb6d6348d9df612
baseline version:
 xen                  d203dbd69f1a02577dd6fe571d72beb980c548a6

Last test of basis   158993  2021-02-03 21:00:26 Z    0 days
Testing same since   159014  2021-02-04 14:00:28 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              broken  
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-arm64-xsm broken
broken-step build-arm64-xsm host-install(4)

Not pushing.

------------------------------------------------------------
commit 92f5ffa58d188c9f9a9f1bcdccb6d6348d9df612
Author: Roger Pau Monné <roger.pau@citrix.com>
Date:   Thu Feb 4 14:02:32 2021 +0100

    x86/efi: enable MS ABI attribute on clang
    
    Or else the EFI service calls will use the wrong calling convention.
    
    The __ms_abi__ attribute is available on all supported versions of
    clang. Add a specific Clang check because the GCC version reported by
    Clang is below the required 4.4 to use the __ms_abi__ attribute.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Ian Jackson <iwj@xenproject.org>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>
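As a hedged illustration of the calling-convention issue this commit describes (the typedef and function names below are hypothetical, not Xen's actual EFI declarations): EFI runtime services use the Microsoft x64 ABI, so calls made through their function pointers must carry the __ms_abi__ attribute on compilers that default to the System V ABI.

```c
/* Hedged sketch, not Xen's code: names here are hypothetical.  Shows
 * why EFI calls need the MS x64 calling-convention attribute on
 * x86-64 compilers that default to the SysV ABI. */
#if defined(__x86_64__) && (defined(__clang__) || defined(__GNUC__))
#define EFIAPI __attribute__((__ms_abi__))
#else
#define EFIAPI /* other targets: no annotation needed in this sketch */
#endif

/* Without EFIAPI on the pointer type, the caller would pass arguments
 * per the SysV ABI while the EFI callee expects the MS ABI. */
typedef unsigned long (EFIAPI *efi_get_time_t)(void *time, void *caps);

static unsigned long EFIAPI fake_get_time(void *time, void *caps)
{
    (void)time;
    (void)caps;
    return 0; /* stand-in for EFI_SUCCESS */
}

static unsigned long call_through_efi_ptr(efi_get_time_t fn)
{
    /* The attribute on the pointer type makes this call use the
     * correct convention. */
    return fn((void *)0, (void *)0);
}
```

The same `typedef ... (EFIAPI *name)(...)` pattern is what EFI headers such as gnu-efi's use.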

commit da20c93108226cb2eb2ed1a13225337f2318a642
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Feb 4 14:01:21 2021 +0100

    IOREQ: fix waiting for broadcast completion
    
    Checking just a single server is not enough - all of them must have
    signaled that they're done processing the request.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Paul Durrant <paul@xen.org>
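The logic of this fix can be sketched with hypothetical types (Xen's real ioreq-server structures differ): a broadcast completes only once every server has finished processing, not merely the first one checked.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hedged sketch with hypothetical types, not Xen's real ioreq server
 * structures.  A broadcast is complete only when *all* servers have
 * signalled that they are done, which is what the fix checks for. */
struct ioreq_server_sketch {
    bool pending; /* server still processing the broadcast request */
};

static bool broadcast_done(const struct ioreq_server_sketch *srv, size_t nr)
{
    size_t i;

    for ( i = 0; i < nr; i++ )
        if ( srv[i].pending ) /* any one busy server blocks completion */
            return false;

    return true;
}
```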

commit 7b93d92a35dc7c0a6e5f1f79b3c887aa3e66ddc0
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Feb 4 13:59:56 2021 +0100

    x86/string: correct memmove()'s forwarding to memcpy()
    
    With memcpy() expanding to the compiler builtin, we may not hand it
    overlapping source and destination. We strictly mean to forward to our
    own implementation (a few lines up in the same source file).
    
    Fixes: 78825e1c60fa ("x86/string: Clean up x86/string.h")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
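A standalone illustration (not Xen's code) of the bug class this commit fixes: memcpy() has undefined behaviour for overlapping source and destination, so an in-place overlapping copy must go through a real memmove() implementation.

```c
#include <string.h>

/* Shift the first n bytes of s right by one position, overlapping in
 * place.  Only memmove() is defined for this; a memmove() that
 * silently forwarded to a builtin memcpy() could corrupt the data. */
static void shift_right_one(char *s, size_t n)
{
    memmove(s + 1, s, n);
}
```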
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 16:16:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 16:16:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81362.150272 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7hJ2-00044c-06; Thu, 04 Feb 2021 16:16:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81362.150272; Thu, 04 Feb 2021 16:16:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7hJ1-00044V-TG; Thu, 04 Feb 2021 16:16:31 +0000
Received: by outflank-mailman (input) for mailman id 81362;
 Thu, 04 Feb 2021 16:16:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KAxh=HG=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l7hJ0-00044G-25
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 16:16:30 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b0bdd263-eea2-4200-8b80-d9a5e63c1869;
 Thu, 04 Feb 2021 16:01:28 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b0bdd263-eea2-4200-8b80-d9a5e63c1869
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612454488;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=3CpNVcu44AkPPgf8hXnwiZ6OvnbnOEUo26180MBXtRk=;
  b=enVcOJqxSaxgvXQykGIXPD9U3qC/T3JwC0hlrUWr1huHk/5IpPRb4DMR
   bZ41Os2TH8llZGZhumSPFsFRBKv1XFSwqLVAnvnJDo+vStu9OswRaWmQx
   E8wpEYsegt3qX2BBOK59XqAIKM3ucUrZ5pIG+9l17RxMDjxJPYspM6eGx
   c=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 36601349
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,401,1602561600"; 
   d="scan'208";a="36601349"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Dp+UOsUQkg+ImPqtrwc+TnwhIC40wXpMGxa8b/CGcNNuqvgtHbnuGp0iqAvv8U7V0Ggl4PvPNuYbGchub5OuXSmZ4A5QTNoTg7sNYo81dzo+NQ9+abLQHjCsFoAsUOESqw4ByvtxaYPmxcq3woet1Zs05ndtR6nUIS8VnKSC5ExVePjA2dmcYaHU1LUPDN+7KOipzrTO7E9w1tNqs391wB4FYPaVSIXxj9WLUpbdoKeO8uGWdK2GM4xmuUYNUXJTh/PL+lZHA+h/089vHAUoTlTX36TVKrgwYDXK/Osf+5BzFAEAfSd2RJZuy0AH3szw90CuBQeQBOt9iAG9LI5tVQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=qyeC+rgF7z7nZMewkrDjJA8ro9Auxcty6b9jmd3lXiA=;
 b=WXSANyZtlGe5PyjLydx56khymVoavHk7CI0k/mmkNjmj1DAakmPjBjA7Kvv/8GTJ2surNH0U3FBstVOp7f0mNUoevB6Z7sq9vyIX7tG1KbFFDsXtF/jSScyz/VyhqjLoPqWOQcosqkcNxv54E5xV1jkgDDzzoG5oR58MLaqPx+R2mLTCSB7SR3nAGGRiRnQ73wjfzcDapXe87YrHlVnRubBHEQh2AZR1HyuDqXTJ8TwAyNTyc5QXNgvMNRRi0ugp5LoKxU+YNjipqRtrEsN7qEeV6WPzlOUPJla/MwrxV+blP0jiOpdnZNFdrqGgY3zpP57RHnNnGtG39tPEmZ9U4w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=qyeC+rgF7z7nZMewkrDjJA8ro9Auxcty6b9jmd3lXiA=;
 b=DZkBUbDKafzH706t1J9+wKyx5/9zNToDkNVuYwFV0FhqIWpz4f0zGWoLej1+y0dxCPcA3Kpeo8wkNz0qbCm8iXVat1yPJ7geyhS8cJSq1bQOrzDhKzXTol65JZPHWtelveYhIBBGw5cZ356q9lgo0QAq5jrFiFt3M4427Mv9yTU=
Subject: Re: [PATCH for-4.12 and older] x86/msr: fix handling of
 MSR_IA32_PERF_{STATUS/CTL} (again)
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, James Dingwall <james-xen@dingwall.me.uk>
References: <1f4a8233-7f7b-dbd9-26f5-69e3eb659fa2@suse.com>
 <a04862e5-b534-5f38-e072-be63b3fb2152@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <a55aa54c-cea6-8b33-0fd0-9534ffdaaf3e@citrix.com>
Date: Thu, 4 Feb 2021 16:01:08 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
In-Reply-To: <a04862e5-b534-5f38-e072-be63b3fb2152@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0267.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:194::20) To BYAPR03MB4728.namprd03.prod.outlook.com
 (2603:10b6:a03:13a::24)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 3fdbe9ba-bf6d-45e5-116d-08d8c9261a95
X-MS-TrafficTypeDiagnostic: BY5PR03MB5185:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BY5PR03MB5185AD219F96F745149ACBB2BAB39@BY5PR03MB5185.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:5797;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 3fdbe9ba-bf6d-45e5-116d-08d8c9261a95
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB4728.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Feb 2021 16:01:17.4316
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: mMVCUJL/ngMKkVJxl/Q2oUTwJdWo+UZwspE1J9nkdvcFncAee1PL6ZTaIo3Kd9VrBLOgHGmKeKFFOhHdM1DbieenGdVqCUTCOBykf+OuzIA=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR03MB5185
X-OriginatorOrg: citrix.com

On 04/02/2021 15:53, Jan Beulich wrote:
> On 04.02.2021 10:36, Jan Beulich wrote:
>> X86_VENDOR_* aren't bit masks in the older trees.
>>
>> Reported-by: James Dingwall <james@dingwall.me.uk>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>
>> --- a/xen/arch/x86/msr.c
>> +++ b/xen/arch/x86/msr.c
>> @@ -226,7 +226,8 @@ int guest_rdmsr(const struct vcpu *v, ui
>>           */
>>      case MSR_IA32_PERF_STATUS:
>>      case MSR_IA32_PERF_CTL:
>> -        if ( !(cp->x86_vendor & (X86_VENDOR_INTEL | X86_VENDOR_CENTAUR)) )
>> +        if ( cp->x86_vendor != X86_VENDOR_INTEL &&
>> +             cp->x86_vendor != X86_VENDOR_CENTAUR )
>>              goto gp_fault;
>>  
>>          *val = 0;
> Darn - this was only half of it. There's a similar construct
> in guest_wrmsr() which also wants replacing.

I really should have renamed the constants when I changed their layout...

My R-by stands in light of that change.

~Andrew
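The failure mode discussed above can be sketched with hypothetical vendor values (the real X86_VENDOR_* constants live in Xen's headers): when the constants are sequential IDs rather than single bits, a bitwise-OR membership test accepts the wrong vendors.

```c
/* Hypothetical sequential vendor IDs, mirroring the older trees where
 * X86_VENDOR_* are not bit masks.  Values chosen to expose the bug. */
enum { VENDOR_INTEL = 1, VENDOR_AMD = 2, VENDOR_CENTAUR = 3 };

/* Broken variant: (1 | 3) == 3, and 2 & 3 != 0, so an AMD guest
 * slips through a check meant to accept only Intel and Centaur. */
static int perf_msr_ok_broken(int vendor)
{
    return vendor & (VENDOR_INTEL | VENDOR_CENTAUR);
}

/* Fixed variant, matching the patch: explicit equality comparisons. */
static int perf_msr_ok_fixed(int vendor)
{
    return vendor == VENDOR_INTEL || vendor == VENDOR_CENTAUR;
}
```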


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 16:29:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 16:29:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81368.150284 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7hVs-0005U8-8Z; Thu, 04 Feb 2021 16:29:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81368.150284; Thu, 04 Feb 2021 16:29:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7hVs-0005U1-4N; Thu, 04 Feb 2021 16:29:48 +0000
Received: by outflank-mailman (input) for mailman id 81368;
 Thu, 04 Feb 2021 16:29:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1l7hVq-0005Tv-0k
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 16:29:46 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l7hVn-0007Bf-6p; Thu, 04 Feb 2021 16:29:43 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l7hVm-0006Oc-Ru; Thu, 04 Feb 2021 16:29:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=ijO9rP/V6JlFPhd7aySuFgmF/jjyZb2ilQfvIGxL4j8=; b=ICJ06rhrpoNLSxjDTjhRLLu72G
	FSns4KuoQyH8cQ08yE/WdKEYGlqwzU3B9jkWkelpHlB6+6XHrL8A5VB2ClfV5rdxmP65AWOs3qPJJ
	A56m/Cz9ZMYzt1V5NgK5cYpnh9zAzaZVDqKhgPPow/0G5pjWeXzwC2zDipZLL1OAe5XY=;
Subject: Re: [PATCH] xen/arm: domain_build: Ignore device nodes with invalid
 addresses
To: Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien.grall.oss@gmail.com>
Cc: Elliott Mitchell <ehem+xen@m5p.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <YBmQQ3Tzu++AadKx@mattapan.m5p.com>
 <a422c04c-f908-6fb6-f2de-fea7b18a6e7d@xen.org>
 <b6d342f8-c833-db88-9808-cdc946999300@xen.org>
 <alpine.DEB.2.21.2102021412480.29047@sstabellini-ThinkPad-T480s>
 <06d6b9ec-0db9-d6da-e30b-df9f9381157d@xen.org>
 <alpine.DEB.2.21.2102031315350.29047@sstabellini-ThinkPad-T480s>
 <CAJ=z9a1LsqOMFXV5GLYEkF7=akMx7fT_vpgVtT6xP6MPfmP9vQ@mail.gmail.com>
 <alpine.DEB.2.21.2102031519540.29047@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <9b97789b-5560-0186-642a-0501789830e5@xen.org>
Date: Thu, 4 Feb 2021 16:29:41 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2102031519540.29047@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 04/02/2021 00:13, Stefano Stabellini wrote:
> On Wed, 3 Feb 2021, Julien Grall wrote:
>> On Wed, 3 Feb 2021 at 22:18, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>>>> But aside from PCIe, let's say that we know of a few nodes for which
>>>>> "reg" needs a special treatment. I am not sure it makes sense to proceed
>>>>> with parsing those nodes without knowing how to deal with that.
>>>>
>>>> I believe that most of the time the "special" treatment would be to ignore the
>> property "regs", as it will not be a CPU memory address.
>>>>
>>>>> So maybe
>>>>> we should add those nodes to skip_matches until we know what to do with
>>>>> them. At that point, I would imagine we would introduce a special
>>>>> handle_device function that knows what to do. In the case of PCIe,
>>>>> something like "handle_device_pcie".
>>>> Could you outline how "handle_device_pcie()" will differ from handle_node()?
>>>>
>>>> In fact, the problem is not the PCIe node directly. Instead, it is the second
>>>> level of nodes below it (i.e. usb@...).
>>>>
>>>> The current implementation of dt_number_of_address() only looks at the bus type
>>>> of the parent. As the parent has no bus type but does have "ranges", it thinks
>>>> this is something we can translate to a CPU address.
>>>>
>>>> However, this is below a PCI bus so the meaning of "reg" is completely
>>>> different. In this case, we only need to ignore "reg".
>>>
>>> I see what you are saying and I agree: if we had to introduce a special
>>> case for PCI, then  dt_number_of_address() seems to be a good place.  In
>>> fact, we already have special PCI handling, see our
>>> __dt_translate_address function and xen/common/device_tree.c:dt_busses.
>>>
>>> Which brings the question: why is this actually failing?
>>
>> I already hinted at the reason in my previous e-mail :). Let me expand
>> a bit more.
>>
>>>
>>> pcie0 {
>>>       ranges = <0x02000000 0x0 0xc0000000 0x6 0x00000000 0x0 0x40000000>;
>>>
>>> Which means that PCI addresses 0xc0000000-0x100000000 become 0x600000000-0x640000000.
>>>
>>> The offending DT is:
>>>
>>> &pcie0 {
>>>           pci@1,0 {
>>>                   #address-cells = <3>;
>>>                   #size-cells = <2>;
>>>                   ranges;
>>>
>>>                   reg = <0 0 0 0 0>;
>>>
>>>                   usb@1,0 {
>>>                           reg = <0x10000 0 0 0 0>;
>>>                           resets = <&reset RASPBERRYPI_FIRMWARE_RESET_ID_USB>;
>>>                   };
>>>           };
>>> };
>>>
>>>
>>> reg = <0x10000 0 0 0 0> means that usb@1,0 is PCI device 01:00.0.
>>> However, the rest of the regs cells are left as zero. It shouldn't be an
>>> issue because usb@1,0 is a child of pci@1,0 but pci@1,0 is not a bus.
>>
>> The property "ranges" is used to define a mapping or translation
>> between the address space of the "bus" (here pci@1,0) and the address
>> space of the bus node's parent (&pcie0).
>> IOW, it means "reg" in usb@1,0 is an address on the PCI bus (i.e. BDF).
>>
>> The problem is dt_number_of_address() will only look at the "bus" type
>> of the parent using dt_match_bus(). This will return the default bus
>> (see dt_bus_default_match()), because there is a "ranges" property in
>> the parent node (i.e. pci@1,0). Therefore...
>>
>>> So
>>> in theory dt_number_of_address() should already return 0 for it.
>>
>> ... dt_number_of_address() will return 1 even if the address is not a
>> CPU address. So when Xen tries to translate it, it will fail.
>>
>>>
>>> Maybe reg = <0 0 0 0 0> is the problem. In that case, we could simply
>>> add a check to skip 0 size ranges. Just a hack to explain what I mean:
>>
>> The parent of pci@1,0 is a PCI bridge (see the property type), so the
>> CPU addresses are found not via "regs" but "assigned-addresses".
>>
>> In this situation, "regs" will have a different meaning and therefore
>> there is no promise that the size will be 0.
> 
> I copy/pasted the following:
> 
>         pci@1,0 {
>                 #address-cells = <3>;
>                 #size-cells = <2>;
>                 ranges;
> 
>                 reg = <0 0 0 0 0>;
> 
>                 usb@1,0 {
>                         reg = <0x10000 0 0 0 0>;
>                         resets = <&reset
>                         RASPBERRYPI_FIRMWARE_RESET_ID_USB>;
>                 };
>         };
> 
> under pcie0 in my DTS to see what happens (the node is not there in the
> device tree for the rpi-5.9.y kernel.) It results in the expected error:
> 
> (XEN) Unable to retrieve address 0 for /scb/pcie@7d500000/pci@1,0/usb@1,0
> (XEN) Device tree generation failed (-22).
> 
> I could verify that pci@1,0 is seen as the "default" bus due to the
> "ranges" property, thus dt_number_of_address() returns 1.
> 
> 
> I can see that reg = <0 0 0 0 0> is not a problem because it is ignored
> given that the parent is a PCI bus. assigned-addresses is the one that
> is read.
> 
> 
> But from a device tree perspective I am actually confused by the
> presence of the "ranges" property under pci@1,0. Is that correct? It is
> stating that addresses of children devices will be translated to the
> address space of the parent (pcie0) using the parent translation rules.
> I mean -- it looks like Xen is right in trying to translate reg =
> <0x10000 0 0 0 0> using ranges = <0x02000000 0x0 0xc0000000 0x6
> 0x00000000 0x0 0x40000000>.
> 
> Or maybe since pcie0 is a PCI bus all the children addresses, even
> grand-children, are expected to be specified using "assigned-addresses"?
> 
> 
> Looking at other examples [1][2] maybe the mistake is that pci@1,0 is
> missing device_type = "pci"?  Of course, if I add that, the error
> disappears.

I am afraid I don't know the answer. I think it would be best to ask 
the Linux DT folks about it.

> 
> [1] Documentation/devicetree/bindings/pci/mvebu-pci.txt
> [2] Documentation/devicetree/bindings/pci/nvidia,tegra20-pcie.txt
> 
> For the sake of making Xen more resilient to possible DTSes, maybe we
> should try to extend the dt_bus_pci_match check? See for instance the
> change below, but we might be able to come up with better ideas.
> 
> 
> diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
> index 18825e333e..24d998f725 100644
> --- a/xen/common/device_tree.c
> +++ b/xen/common/device_tree.c
> @@ -565,12 +565,21 @@ static unsigned int dt_bus_default_get_flags(const __be32 *addr)
>   
>   static bool_t dt_bus_pci_match(const struct dt_device_node *np)
>   {
> +    bool ret = false;
> +
>       /*
>        * "pciex" is PCI Express "vci" is for the /chaos bridge on 1st-gen PCI
>        * powermacs "ht" is hypertransport
>        */
> -    return !strcmp(np->type, "pci") || !strcmp(np->type, "pciex") ||
> +    ret = !strcmp(np->type, "pci") || !strcmp(np->type, "pciex") ||
>           !strcmp(np->type, "vci") || !strcmp(np->type, "ht");
> +
> +    if ( ret ) return ret;
> +
> +    if ( !strcmp(np->name, "pci") )
> +        ret = dt_bus_pci_match(dt_get_parent(np));

It is probably safe to assume that a PCI device (not a hostbridge) will 
start with "pci". However, I don't much like the idea, because the name 
is not meant to be stable.

AFAICT, we can only rely on "compatible" and "type".
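A hedged sketch (not the actual dt_bus_pci_match()) of matching a PCI bus by its device_type string, the signal the thread concludes is stable, rather than by node name:

```c
#include <string.h>

/* Sketch of type-based PCI bus matching.  "pciex" is PCI Express;
 * "vci" and "ht" mirror the comment in the existing
 * xen/common/device_tree.c matcher quoted earlier in this thread. */
static int is_pci_bus_type(const char *type)
{
    return type != NULL &&
           ( !strcmp(type, "pci") || !strcmp(type, "pciex") ||
             !strcmp(type, "vci") || !strcmp(type, "ht") );
}
```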

Cheers,

-- 
Julien Grall
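The "ranges" translation debated in this thread can be sketched numerically. Simplified to flat 64-bit values; real device-tree code handles multi-cell fields and the PCI flags cell.

```c
#include <stdint.h>

/* Hedged numeric sketch of "ranges" translation: pcie0's
 * ranges = <0x02000000 0x0 0xc0000000 0x6 0x00000000 0x0 0x40000000>
 * maps child (bus) addresses in a 0x40000000-byte window starting at
 * 0xc0000000 onto parent (CPU) addresses starting at 0x600000000. */
struct range_sketch {
    uint64_t child_base;  /* bus address where the window starts */
    uint64_t parent_base; /* CPU address it translates to */
    uint64_t size;        /* window length */
};

/* Returns the translated address, or 0 when the address falls
 * outside the window (0 is never a valid result in this sketch). */
static uint64_t translate_sketch(const struct range_sketch *r,
                                 uint64_t child)
{
    if ( child >= r->child_base && child - r->child_base < r->size )
        return r->parent_base + (child - r->child_base);

    return 0;
}
```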


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 16:33:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 16:33:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81370.150296 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7hZC-0006IM-On; Thu, 04 Feb 2021 16:33:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81370.150296; Thu, 04 Feb 2021 16:33:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7hZC-0006IF-Kz; Thu, 04 Feb 2021 16:33:14 +0000
Received: by outflank-mailman (input) for mailman id 81370;
 Thu, 04 Feb 2021 16:33:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gi2U=HG=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1l7hZB-0006IA-6s
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 16:33:13 +0000
Received: from mail-wm1-x334.google.com (unknown [2a00:1450:4864:20::334])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id feb93a3f-4be1-42b9-a9c7-eb8897fae5a0;
 Thu, 04 Feb 2021 16:33:10 +0000 (UTC)
Received: by mail-wm1-x334.google.com with SMTP id j11so3679912wmi.3
 for <xen-devel@lists.xenproject.org>; Thu, 04 Feb 2021 08:33:10 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id n15sm8551843wrx.2.2021.02.04.08.33.08
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 04 Feb 2021 08:33:08 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: feb93a3f-4be1-42b9-a9c7-eb8897fae5a0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language;
        bh=IaYqqD0brpgLP4Ux8g9ZMnx7iqcXn2aIzlPEt1C0O/E=;
        b=ejVngN8lTTPlO075/fISpieC/0a4qTf4RZLqmKxkuZa/IeVmJZwmO9NyEQ4IL1pMo2
         ZmtpcKSxaSCxavzGAR94jWp5alrBmxcn5aqGyIfyn2bn70bxLXp9J6Q8DD/RaVty5ozS
         HnWuw69ojC+5Qy76TCTxO14cNjf8utMhLgBYZp0IeCvJAO0I5PbxQvDzX9vjs1Yr0/Wg
         vQUdB0P4Ur3+zJA+TDNgTHcGVg91H0YUSpRCNZyvxOL4OxVVlH7bcEu4qEfl8Xl/HcWK
         gaIdSEpUsEm/jtLP8l7RScIfDx1VpVMNd302kPcFmrmeb6Ny2P8PdjVs1L5IMD77cynm
         QaqA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language;
        bh=IaYqqD0brpgLP4Ux8g9ZMnx7iqcXn2aIzlPEt1C0O/E=;
        b=PLbVW4uhQFYRsDkIOwg9x7L76s6eDjT2eedLLvmR3ferr1DI4WJS3dwyp3A35OmzKI
         A1Iy/M8KXKMz9teAJLXRT6F3YlvxeFnXMORvdaqyXMUWirp4L3fv+sV786q7iVcjUddI
         QGTJfTt6WAsvoLk2nDQlTHmbuh1/U+eyUibhCJVqHwb3fe29472JfiEdNdY+9XabuQ5M
         jQB5Vz8Cem/gNeISkql9vEFwy+R9IjICWqSGxbBpoz3Tp0wAIyPgqkdT/TPSK+DbQ4MK
         fQUM3OevDGs+F1qtHCUTDx1hFZkRU8HH+prhBl2vOGUfbtEh2q6hfsxxtHCYwcJaWled
         aDNw==
X-Gm-Message-State: AOAM531f63SEUSYfnRnsSgDs+1+M1hUnMJu7jalv9iWe6hTQs0SpIEGR
	QB8+U3dC80VHobCa1+/DDYo=
X-Google-Smtp-Source: ABdhPJypB+d860YRntiqKja7EFschxMvzk/CbBWJji6zyTRZcC4VB+OU1gz7agSSGpeh46afhw9Q7A==
X-Received: by 2002:a1c:c308:: with SMTP id t8mr17589wmf.7.1612456389851;
        Thu, 04 Feb 2021 08:33:09 -0800 (PST)
Subject: Re: [PATCH for-4.15] tools/tests: Introduce a test for
 acquire_resource
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20210202190937.30206-1-andrew.cooper3@citrix.com>
 <CAPD2p-nPr_OD7cMT-Ny6vyGsY4nMXuENgrqv0pfYRYtE5srnkQ@mail.gmail.com>
 <6586dd8a-8596-0226-f3d3-02cd9e361a52@gmail.com>
 <79a41e5b-0717-f31b-3cec-60b716777603@citrix.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <3c103959-880d-ce25-6993-65b93d248bcf@gmail.com>
Date: Thu, 4 Feb 2021 18:33:02 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <79a41e5b-0717-f31b-3cec-60b716777603@citrix.com>
Content-Type: multipart/alternative;
 boundary="------------A0AB5F0E1BCAA8CBCA232859"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------A0AB5F0E1BCAA8CBCA232859
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit


Hi Andrew


On 04.02.21 17:44, Andrew Cooper wrote:
> On 04/02/2021 15:38, Oleksandr wrote:
>>>
>>> Hi Andrew.
>>> [Sorry for the possible format issues]
>>>
>>> On Tue, Feb 2, 2021 at 9:10 PM Andrew Cooper 
>>> <andrew.cooper3@citrix.com <mailto:andrew.cooper3@citrix.com>> wrote:
>>>
>>>     For now, simply try to map 40 frames of grant table.  This
>>>     catches most of the
>>>     basic errors with resource sizes found and fixed through the
>>>     4.15 dev window.
>>>
>>>     Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com
>>>     <mailto:andrew.cooper3@citrix.com>>
>>>     ---
>>>     CC: Ian Jackson <iwj@xenproject.org <mailto:iwj@xenproject.org>>
>>>     CC: Wei Liu <wl@xen.org <mailto:wl@xen.org>>
>>>     CC: Jan Beulich <JBeulich@suse.com <mailto:JBeulich@suse.com>>
>>>     CC: Roger Pau Monné <roger.pau@citrix.com
>>>     <mailto:roger.pau@citrix.com>>
>>>     CC: Wei Liu <wl@xen.org <mailto:wl@xen.org>>
>>>     CC: Stefano Stabellini <sstabellini@kernel.org
>>>     <mailto:sstabellini@kernel.org>>
>>>     CC: Julien Grall <julien@xen.org <mailto:julien@xen.org>>
>>>     CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com
>>>     <mailto:Volodymyr_Babchuk@epam.com>>
>>>     CC: Oleksandr <olekstysh@gmail.com <mailto:olekstysh@gmail.com>>
>>>
>>>     Fails against current staging:
>>>
>>>       XENMEM_acquire_resource tests
>>>       Test x86 PV
>>>         d7: grant table
>>>           Fail: Map 7 - Argument list too long
>>>       Test x86 PVH
>>>         d8: grant table
>>>           Fail: Map 7 - Argument list too long
>>>
>>>     The fix has already been posted:
>>>       [PATCH v9 01/11] xen/memory: Fix mapping grant tables with
>>>     XENMEM_acquire_resource
>>>
>>>     and the fixed run is:
>>>
>>>       XENMEM_acquire_resource tests
>>>       Test x86 PV
>>>         d7: grant table
>>>       Test x86 PVH
>>>         d8: grant table
>>>
>>>     ARM folk: would you mind testing this?  I'm pretty sure the
>>>     create parameters
>>>     are suitable, but I don't have any way to test this.
>>>
>>> Yes, as agreed on IRC, I will test it this evening and
>>> report the results)
>>
>>
>> OK, well, I decided to test right away because I'm going to be busy
>> in the evening)
>>
>> My tree is based on:
>> 9dc687f x86/debug: fix page-overflow bug in dbg_rw_guest_mem
>>
>> I noticed an error while building this test in my Yocto environment
>> on Arm:
>>
>>
>> /media/b/build/build/tmp/work/x86_64-xt-linux/dom0-image-thin-initramfs/1.0-r0/repo/build/tmp/work/aarch64-poky-linux/xen/4.14.0+gitAUTOINC+e00e0f38c3-r0/recipe-sysroot-native/usr/bin/aarch64-poky-linux/../../libexec/aarch64-poky-linux/gcc/aarch64-poky-linux/8.2.0/ld: 
>> test-resource.o: undefined reference to symbol 
>> 'xendevicemodel_open@@VERS_1.0'
>> /media/b/build/build/tmp/work/x86_64-xt-linux/dom0-image-thin-initramfs/1.0-r0/repo/build/tmp/work/aarch64-poky-linux/xen/4.14.0+gitAUTOINC+e00e0f38c3-r0/recipe-sysroot-native/usr/bin/aarch64-poky-linux/../../libexec/aarch64-poky-linux/gcc/aarch64-poky-linux/8.2.0/ld: 
>> /media/b/build/build/tmp/work/x86_64-xt-linux/dom0-image-thin-initramfs/1.0-r0/repo/build/tmp/work/aarch64-poky-linux/xen/4.14.0+gitAUTOINC+e00e0f38c3-r0/git/tools/tests/resource/../../../tools/libs/devicemodel/libxendevicemodel.so.1: 
>> error adding symbols: DSO missing from command line
>> collect2: error: ld returned 1 exit status
>> Makefile:38: recipe for target 'test-resource' failed
>>
>>
>> I didn't investigate whether it is related or not; I just added the
>> following:
>>
>> diff --git a/tools/tests/resource/Makefile 
>> b/tools/tests/resource/Makefile
>> index 8a3373e..03b19ef 100644
>> --- a/tools/tests/resource/Makefile
>> +++ b/tools/tests/resource/Makefile
>> @@ -32,6 +32,7 @@ CFLAGS += $(APPEND_CFLAGS)
>>
>>  LDFLAGS += $(LDLIBS_libxenctrl)
>>  LDFLAGS += $(LDLIBS_libxenforeignmemory)
>> +LDFLAGS += $(LDLIBS_libxendevicemodel)
>>  LDFLAGS += $(APPEND_LDFLAGS)
>>
>>  test-resource: test-resource.o
>>
>
> Urgh yes - I didn't fully strip out the libxendevicemodel uses. I'll 
> fix that, rather than having this test link against a library which it 
> doesn't use (yet).
>
>>
>> I got the following result without and with "[PATCH v9 01/11] 
>> xen/memory: Fix mapping grant tables with XENMEM_acquire_resource"
>>
>> root@generic-armv8-xt-dom0:~# test-resource
>> XENMEM_acquire_resource tests
>> Test ARM
>>   d3: grant table
>> xenforeignmemory: error: ioctl failed: Invalid argument
>>     Fail: Get size: 22 - Invalid argument
>>
>
> Ah yes - you also need a bugfix in the dom0 kernel.  "xen/privcmd:
> allow fetching resource sizes" which is in mainline, and also 
> backported to the LTS trees.

Well, my dom0 Linux is old)

uname -a
Linux generic-armv8-xt-dom0 4.14.75-ltsi-yocto-tiny #1 SMP PREEMPT Thu 
Nov 5 10:52:32 UTC 2020 aarch64 GNU/Linux
so I use a ported "xen/privcmd: add IOCTL_PRIVCMD_MMAP_RESOURCE".
I didn't find "xen/privcmd: allow fetching resource sizes" for my Linux
version, so I backported it myself.

So, with "[PATCH v9 01/11] xen/memory: Fix mapping grant tables with 
XENMEM_acquire_resource"

root@generic-armv8-xt-dom0:~# test-resource
XENMEM_acquire_resource tests
Test ARM
   d7: grant table
(XEN) grant_table.c:1854:d0v1 Expanding d7 grant table from 1 to 32 frames
(XEN) grant_table.c:1854:d0v1 Expanding d7 grant table from 32 to 40 frames

[I didn't test without your patch]


Hope that helps.
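For reference, the "DSO missing from command line" failure above is generic ld behaviour rather than anything Xen-specific: since binutils 2.22 the linker no longer resolves an object's undefined symbols from libraries that are merely dependencies of other shared libraries, so every DSO whose symbols are used directly has to be named on the link line (which is what the $(LDLIBS_libxendevicemodel) addition does). A minimal reproduction, with made-up liba/libb/app names:

```shell
# liba provides a_func(); libb links against liba.
cat > liba.c <<'EOF'
int a_func(void) { return 42; }
EOF
cat > libb.c <<'EOF'
int a_func(void);
int b_func(void) { return a_func(); }
EOF
cat > app.c <<'EOF'
int a_func(void);
int b_func(void);
int main(void) { return (a_func() + b_func() == 84) ? 0 : 1; }
EOF
gcc -shared -fPIC -o liba.so liba.c
gcc -shared -fPIC -o libb.so libb.c liba.so -Wl,-rpath,'$ORIGIN'
gcc -c app.c
# app.o uses a_func directly, but liba.so is only a DT_NEEDED of libb.so,
# so this fails with "error adding symbols: DSO missing from command line":
#   gcc app.o -o app libb.so
# Naming liba.so explicitly on the link line fixes it:
gcc app.o -o app libb.so liba.so -Wl,-rpath,'$ORIGIN'
./app    # exits 0
```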


>
> However, this did get past the bit I wasn't sure about for ARM, which 
> is good.
>
> ~Andrew

-- 
Regards,

Oleksandr Tyshchenko


--------------A0AB5F0E1BCAA8CBCA232859--


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 16:40:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 16:40:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81373.150307 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7hfk-0006ju-Kb; Thu, 04 Feb 2021 16:40:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81373.150307; Thu, 04 Feb 2021 16:40:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7hfk-0006jn-HD; Thu, 04 Feb 2021 16:40:00 +0000
Received: by outflank-mailman (input) for mailman id 81373;
 Thu, 04 Feb 2021 16:39:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KAxh=HG=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l7hfj-0006ji-GX
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 16:39:59 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 42fec056-5bb3-4ef6-a269-76028654a015;
 Thu, 04 Feb 2021 16:39:58 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 42fec056-5bb3-4ef6-a269-76028654a015
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612456798;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=b+ohvtkQ6WIL6lyHcHQX1ywWa3j8CpFE9vJ1L1NZCuQ=;
  b=a/5LA5rPo7DUw/S+7W7N4zACsFcouinmEtkOmWHURN4rnOFszEagwGLy
   o5jYacL9bxW4TXdyMSAX6OKAFkcE5m3yQvvemrETNCztilNSEEh52kdp2
   NUME7xyZR57/Iu0TrT/1Y/8ycD5NPWZTjFzmOvp510A/t/HXK2C4e0IUi
   A=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 36572465
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.79,401,1602561600"; 
   d="scan'208";a="36572465"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=QJDFa6x3WM1wxU7B5kLJcXUopys0t1Cnj8PKOSi7RJJu6aiRVc4hZQqu1TwQay7sYmzy52+E4PiP4KkoFllEIfgKdsSuTVMY+90vkCwlZBJl5xH13r4MItWlgC3Af1Y2ypNvjesxWAG7p2mEo9ESpi0XM40enMgnu9df4PTRzjm3k/dTRnXQtD44gZCiVDqkG1zjNaOVi6gJQH1JLJesDh1SLdUeaCunmmE+qDaoCZBsf734W/Vyp/APlEXWMjBMKnMRXzyCeXxM81Ic3EpDGHatRNI/qhK6RueS1p7aN0wKg+F9RPg/NwweL2/oYpPdQzoFmQkuN4vOCZZN9vqR1A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=b+ohvtkQ6WIL6lyHcHQX1ywWa3j8CpFE9vJ1L1NZCuQ=;
 b=OiaZb0AMwmwh2cxESzl+v4ux1XAzVhMpcxoFoItoK3idDwTpRDFuRZNIO4moyV1UBKfOlnerCACzE4oHH2wJ6OYBxfbzIN5VsYNP9aINXTpSxl9RjFTEPB0z/8oQMlBA24/9md1UN45nlmFA+PEphLJceQWK1JaYK/4fnOFWbZkf++BBGarlqG9EJ3cPGdROvxYJyp9KJvepq8ShaUWXElOIvENY0MMiM1k8QEGnlcUjl0DdM1/Ma/DUsl99FOVKntL/b+r7vn2dqDv/4PFJBPD5eOBoJjlwXZ9vakix4FwkjhU1l+odvVsnK48nG3xgTssk5DvNKAMP+9RYtNX89w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=b+ohvtkQ6WIL6lyHcHQX1ywWa3j8CpFE9vJ1L1NZCuQ=;
 b=g620PBcpZui6CoJ67lsBjBqmWkYmPrSYy7N8nvo/H8I8rPym3/Mo7jq7yVxzID//YUN8HpcFogVUpouO0451delIfTET24SI38DYSfqf5W1xoA0c6vAO/vrSnHmKDojljxvl4Dr7f6m53GjWbbm5ZwETwvdBAS9EUTHOqjHgHiU=
Subject: Re: [PATCH for-4.15] tools/tests: Introduce a test for
 acquire_resource
To: Oleksandr <olekstysh@gmail.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
References: <20210202190937.30206-1-andrew.cooper3@citrix.com>
 <CAPD2p-nPr_OD7cMT-Ny6vyGsY4nMXuENgrqv0pfYRYtE5srnkQ@mail.gmail.com>
 <6586dd8a-8596-0226-f3d3-02cd9e361a52@gmail.com>
 <79a41e5b-0717-f31b-3cec-60b716777603@citrix.com>
 <3c103959-880d-ce25-6993-65b93d248bcf@gmail.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <8b1a781a-598c-ae4a-a102-e3ebbb710f6b@citrix.com>
Date: Thu, 4 Feb 2021 16:39:45 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
In-Reply-To: <3c103959-880d-ce25-6993-65b93d248bcf@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0371.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a3::23) To BYAPR03MB4728.namprd03.prod.outlook.com
 (2603:10b6:a03:13a::24)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 60c61544-3c88-4901-73c7-08d8c92b7f1f
X-MS-TrafficTypeDiagnostic: BY5PR03MB5236:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BY5PR03MB52363BDEE67AE8BDAF54B384BAB39@BY5PR03MB5236.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:63;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 60c61544-3c88-4901-73c7-08d8c92b7f1f
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB4728.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Feb 2021 16:39:52.8528
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: kOy12RFGKT7a2yYVb9zs46VqWd63w6HeVTu8N4xP2HcLpH3iLC5YB5Ak4VpwKXDi58lyUYPG1cH384D2xp/BPh8uyG7EWf8m+7xyqj/w/qk=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR03MB5236
X-OriginatorOrg: citrix.com

On 04/02/2021 16:33, Oleksandr wrote:
> On 04.02.21 17:44, Andrew Cooper wrote:
>> On 04/02/2021 15:38, Oleksandr wrote:
>>>
>>>
>>> I got the following result without and with "[PATCH v9 01/11]
>>> xen/memory: Fix mapping grant tables with XENMEM_acquire_resource"
>>>
>>> root@generic-armv8-xt-dom0:~# test-resource
>>> XENMEM_acquire_resource tests
>>> Test ARM
>>>   d3: grant table
>>> xenforeignmemory: error: ioctl failed: Invalid argument
>>>     Fail: Get size: 22 - Invalid argument
>>>
>>
>> Ah yes - you also need a bugfix in the dom0 kernel.  "xen/privcmd:
>> allow fetching resource sizes" which is in mainline, and also
>> backported to the LTS trees.
>
> Well, my dom0 Linux is old)
>
> uname -a
> Linux generic-armv8-xt-dom0 4.14.75-ltsi-yocto-tiny #1 SMP PREEMPT Thu
> Nov 5 10:52:32 UTC 2020 aarch64 GNU/Linux
> so I use a ported "xen/privcmd: add IOCTL_PRIVCMD_MMAP_RESOURCE".
> I didn't find "xen/privcmd: allow fetching resource sizes" for my
> Linux version, so I backported it myself.
>
> So, with "[PATCH v9 01/11] xen/memory: Fix mapping grant tables with
> XENMEM_acquire_resource"
>
> root@generic-armv8-xt-dom0:~# test-resource
> XENMEM_acquire_resource tests
> Test ARM
>   d7: grant table
> (XEN) grant_table.c:1854:d0v1 Expanding d7 grant table from 1 to 32 frames
> (XEN) grant_table.c:1854:d0v1 Expanding d7 grant table from 32 to 40
> frames
>
> [I didn't test without your patch]
>
> Hope that helps.
>

Yup - fantastic, thank you.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 16:50:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 16:50:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81375.150320 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7hpy-00007s-Jq; Thu, 04 Feb 2021 16:50:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81375.150320; Thu, 04 Feb 2021 16:50:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7hpy-00007l-Gg; Thu, 04 Feb 2021 16:50:34 +0000
Received: by outflank-mailman (input) for mailman id 81375;
 Thu, 04 Feb 2021 16:50:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gi2U=HG=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1l7hpx-00007g-2w
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 16:50:33 +0000
Received: from mail-wr1-x42f.google.com (unknown [2a00:1450:4864:20::42f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5c3b4f79-4f3c-4491-b05f-96ae66483007;
 Thu, 04 Feb 2021 16:50:31 +0000 (UTC)
Received: by mail-wr1-x42f.google.com with SMTP id c12so4305559wrc.7
 for <xen-devel@lists.xenproject.org>; Thu, 04 Feb 2021 08:50:31 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id b13sm8591583wrt.31.2021.02.04.08.50.29
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 04 Feb 2021 08:50:30 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5c3b4f79-4f3c-4491-b05f-96ae66483007
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=NPYYoEJ2y4To9yU1YphsNj2ELmwlaAi1v4jUFPLF1ng=;
        b=UU+iv8XpGC88TJk/bTu3Tk1dGxDAgqcO6tIPxtQ6WZPpTLTbH9LvHHZ9EkLuvDaQnh
         UDWrcRx59hySLm8OgsrDEuLE26gE8fWV2SH57C+QBnYQH0+/bvA/VGqWg8i1lQyYlB5u
         pXHrs26jt5Pf7bFIo6tESivBBSllTJVMnPJzIhZwDM2ynlCJiJlgMrF168UBhl23cJV+
         /HFfYDL7eLpwcpFk2dkn8rGWYfqpx1n0d5TOFJ5s04bGJlDjY3zygZSdWZit/ZE/0kcq
         GmUxTSU0/Gc6nCwkSSj7CAtSTSE6B01xwbLCMDXKMe69e5nRu40lFaw5XB4Ir5ZPYXnb
         xjzQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=NPYYoEJ2y4To9yU1YphsNj2ELmwlaAi1v4jUFPLF1ng=;
        b=sa0nJKsYvR/6KI4vT0wPCo4apCl0VDozzxOZUk3wC+fxEUkEmoiC2Km/53he87mGwX
         OUSP5oor61L+OlJd9VM/v0wS0HREg1/ggjbJEOg1JuDLoNCZ8WWO9E/C92Zi+Q0Glkb3
         wI5w8zmEmIAJDArm72qJ4AXMcSuhelwVS4qPoyDO0wER9PPoQAuVepqvdMrPwgVGVG8I
         XVvR7raVvfpYBs1maJ8p5NkHX4hzsOfRwcsiquuAw+8r8y2z2THxn7uRad32kJFOsjQ7
         rc3BSkzeEFMMbxOFrNoamdIqmbeu3RV4zkH5K9/W3RYUImEM11hnoqEqzA9gaGdyBMQU
         7znw==
X-Gm-Message-State: AOAM533HZp+OP9TUSSHJr/GRL3CA7a/7yN+ffTl7ETejYAPRHXGACgHM
	LSq72+opbG04McPDVcubPN0=
X-Google-Smtp-Source: ABdhPJymnvfLVdgw+iV/qERwrnMeWyVEy9prShAi/NjRsF8PECCJUdeHBzjScVAAh/vYskRKduijIQ==
X-Received: by 2002:adf:f009:: with SMTP id j9mr232372wro.35.1612457430698;
        Thu, 04 Feb 2021 08:50:30 -0800 (PST)
Subject: Re: [PATCH for-4.15] libs/devicemodel: Fix ABI breakage from
 xendevicemodel_set_irq_level()
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Julien Grall <julien.grall@arm.com>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Chen <Wei.Chen@arm.com>
References: <20210204155850.23649-1-andrew.cooper3@citrix.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <eee99c85-4581-87d0-a48d-06619624c8b5@gmail.com>
Date: Thu, 4 Feb 2021 18:50:24 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20210204155850.23649-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 04.02.21 17:58, Andrew Cooper wrote:

Hi Andrew

> It is not permitted to edit the VERS clause for a version in a release of Xen.
>
> Revert xendevicemodel_set_irq_level()'s inclusion in .so.1.2 and bump the
> library minor version to .so.1.4 instead.
>
> Fixes: 5d752df85f ("xen/dm: Introduce xendevicemodel_set_irq_level DM op")
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Ian Jackson <iwj@xenproject.org>
> CC: Wei Liu <wl@xen.org>
> CC: Julien Grall <julien.grall@arm.com>
> CC: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Wei Chen <Wei.Chen@arm.com>
>
> Critical to include in 4.15, as this is an ABI breakage.
I am sorry for the breakage; I admit I didn't know that
"It is not permitted to edit the VERS clause for a version in a release
of Xen."


>    Reverting the broken
> change doesn't look to be a practical option.
> ---
>   tools/libs/devicemodel/Makefile              | 2 +-
>   tools/libs/devicemodel/libxendevicemodel.map | 6 +++++-
>   2 files changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/tools/libs/devicemodel/Makefile b/tools/libs/devicemodel/Makefile
> index 500de7adc5..3e50ff6d90 100644
> --- a/tools/libs/devicemodel/Makefile
> +++ b/tools/libs/devicemodel/Makefile
> @@ -2,7 +2,7 @@ XEN_ROOT = $(CURDIR)/../../..
>   include $(XEN_ROOT)/tools/Rules.mk
>   
>   MAJOR    = 1
> -MINOR    = 3
> +MINOR    = 4
>   
>   SRCS-y                 += core.c
>   SRCS-$(CONFIG_Linux)   += common.c
> diff --git a/tools/libs/devicemodel/libxendevicemodel.map b/tools/libs/devicemodel/libxendevicemodel.map
> index a0c30125de..733549327b 100644
> --- a/tools/libs/devicemodel/libxendevicemodel.map
> +++ b/tools/libs/devicemodel/libxendevicemodel.map
> @@ -32,10 +32,14 @@ VERS_1.2 {
>   	global:
>   		xendevicemodel_relocate_memory;
>   		xendevicemodel_pin_memory_cacheattr;
> -		xendevicemodel_set_irq_level;
>   } VERS_1.1;
>   
>   VERS_1.3 {
>   	global:
>   		xendevicemodel_modified_memory_bulk;
>   } VERS_1.2;
> +
> +VERS_1.4 {
> +	global:
> +		xendevicemodel_set_irq_level;
> +} VERS_1.3;

-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Thu Feb 04 17:01:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 17:01:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81381.150348 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7i06-0001N9-RC; Thu, 04 Feb 2021 17:01:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81381.150348; Thu, 04 Feb 2021 17:01:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7i06-0001N2-Mj; Thu, 04 Feb 2021 17:01:02 +0000
Received: by outflank-mailman (input) for mailman id 81381;
 Thu, 04 Feb 2021 17:01:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7i05-0001Mx-GI
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 17:01:01 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7i05-0007oY-Dy
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 17:01:01 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l7i05-0004JB-C6
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 17:01:01 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l7i02-0000FB-6x; Thu, 04 Feb 2021 17:00:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24604.10313.939225.448487@mariner.uk.xensource.com>
Date: Thu, 4 Feb 2021 17:00:57 +0000
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
    Wei Liu <wl@xen.org>,
    Julien Grall <julien.grall@arm.com>,
    Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
    Stefano Stabellini <sstabellini@kernel.org>,
    Wei Chen <Wei.Chen@arm.com>
Subject: Re: [PATCH for-4.15] libs/devicemodel: Fix ABI breakage from xendevicemodel_set_irq_level()
In-Reply-To: <20210204155850.23649-1-andrew.cooper3@citrix.com>
References: <20210204155850.23649-1-andrew.cooper3@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Andrew Cooper writes ("[PATCH for-4.15] libs/devicemodel: Fix ABI breakage from xendevicemodel_set_irq_level()"):
> It is not permitted to edit the VERS clause for a version in a release of Xen.
> 
> Revert xendevicemodel_set_irq_level()'s inclusion in .so.1.2 and bump the
> library minor version to .so.1.4 instead.
> 
> Fixes: 5d752df85f ("xen/dm: Introduce xendevicemodel_set_irq_level DM op")
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Ian Jackson <iwj@xenproject.org>
Release-Acked-by: Ian Jackson <iwj@xenproject.org>

Sorry for not spotting this earlier.


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 17:23:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 17:23:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81385.150366 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7iM3-0003YG-Mn; Thu, 04 Feb 2021 17:23:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81385.150366; Thu, 04 Feb 2021 17:23:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7iM3-0003Y9-JW; Thu, 04 Feb 2021 17:23:43 +0000
Received: by outflank-mailman (input) for mailman id 81385;
 Thu, 04 Feb 2021 17:23:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KAxh=HG=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l7iM1-0003Y4-Vq
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 17:23:42 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 393191c6-8dae-46ca-8394-98991b1fe66d;
 Thu, 04 Feb 2021 17:23:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 393191c6-8dae-46ca-8394-98991b1fe66d
Subject: Re: [PATCH for-4.15] libs/devicemodel: Fix ABI breakage from
 xendevicemodel_set_irq_level()
To: Oleksandr <olekstysh@gmail.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Julien Grall
	<julien.grall@arm.com>, Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Chen <Wei.Chen@arm.com>
References: <20210204155850.23649-1-andrew.cooper3@citrix.com>
 <eee99c85-4581-87d0-a48d-06619624c8b5@gmail.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <fe174e27-9f86-f476-6192-f2e62c9a7e3f@citrix.com>
Date: Thu, 4 Feb 2021 17:23:27 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
In-Reply-To: <eee99c85-4581-87d0-a48d-06619624c8b5@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
MIME-Version: 1.0

On 04/02/2021 16:50, Oleksandr wrote:
>
> On 04.02.21 17:58, Andrew Cooper wrote:
>
> Hi Andrew
>
>> It is not permitted to edit the VERS clause for a version in a
>> release of Xen.
>>
>> Revert xendevicemodel_set_irq_level()'s inclusion in .so.1.2 and bump the
>> library minor version to .so.1.4 instead.
>>
>> Fixes: 5d752df85f ("xen/dm: Introduce xendevicemodel_set_irq_level DM
>> op")
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> ---
>> CC: Ian Jackson <iwj@xenproject.org>
>> CC: Wei Liu <wl@xen.org>
>> CC: Julien Grall <julien.grall@arm.com>
>> CC: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>> CC: Stefano Stabellini <sstabellini@kernel.org>
>> CC: Wei Chen <Wei.Chen@arm.com>
>>
>> Critical to include in 4.15, as this is an ABI breakage.
> I am sorry for the breakage; I admit I didn't know that
> "It is not permitted to edit the VERS clause for a version in a
> release of Xen."

To be honest, it's not Xen specific. It's any shared object with a stable
API/ABI.

It is explicitly fine to bump the minor version to add new things, but
you must never change the ABI of one which has been released.
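To illustrate that rule with the version script itself: released nodes stay
frozen forever, and each batch of new symbols gets a fresh node chained to the
previous one. A sketch of the resulting libxendevicemodel.map tail (this just
restates what the patch above does):

```
/* Shipped in .so.1.3: frozen, its symbol list must never change again. */
VERS_1.3 {
        global:
                xendevicemodel_modified_memory_bulk;
} VERS_1.2;

/* New symbols only ever go in a new node, with a matching minor bump. */
VERS_1.4 {
        global:
                xendevicemodel_set_irq_level;
} VERS_1.3;
```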

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 17:57:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 17:57:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81393.150381 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7is6-0006o6-AX; Thu, 04 Feb 2021 17:56:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81393.150381; Thu, 04 Feb 2021 17:56:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7is6-0006nz-7S; Thu, 04 Feb 2021 17:56:50 +0000
Received: by outflank-mailman (input) for mailman id 81393;
 Thu, 04 Feb 2021 17:56:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=G7BZ=HG=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1l7is5-0006nu-BL
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 17:56:49 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cec07597-dbc1-450e-8040-1f4b45438823;
 Thu, 04 Feb 2021 17:56:48 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 1FDCB64E16;
 Thu,  4 Feb 2021 17:56:47 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cec07597-dbc1-450e-8040-1f4b45438823
Date: Thu, 4 Feb 2021 09:56:46 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: robh@kernel.org
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Julien Grall <julien.grall.oss@gmail.com>, 
    Elliott Mitchell <ehem+xen@m5p.com>, 
    xen-devel <xen-devel@lists.xenproject.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, julien@xen.org
Subject: Question on PCIe Device Tree bindings,  Was: [PATCH] xen/arm:
 domain_build: Ignore device nodes with invalid addresses
In-Reply-To: <9b97789b-5560-0186-642a-0501789830e5@xen.org>
Message-ID: <alpine.DEB.2.21.2102040944520.29047@sstabellini-ThinkPad-T480s>
References: <YBmQQ3Tzu++AadKx@mattapan.m5p.com> <a422c04c-f908-6fb6-f2de-fea7b18a6e7d@xen.org> <b6d342f8-c833-db88-9808-cdc946999300@xen.org> <alpine.DEB.2.21.2102021412480.29047@sstabellini-ThinkPad-T480s> <06d6b9ec-0db9-d6da-e30b-df9f9381157d@xen.org>
 <alpine.DEB.2.21.2102031315350.29047@sstabellini-ThinkPad-T480s> <CAJ=z9a1LsqOMFXV5GLYEkF7=akMx7fT_vpgVtT6xP6MPfmP9vQ@mail.gmail.com> <alpine.DEB.2.21.2102031519540.29047@sstabellini-ThinkPad-T480s> <9b97789b-5560-0186-642a-0501789830e5@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

Hi Rob,

We have a question on the PCIe device tree bindings. In summary, we have
come across the Raspberry Pi 4 PCIe description below:


pcie0: pcie@7d500000 {
   compatible = "brcm,bcm2711-pcie";
   reg = <0x0 0x7d500000  0x0 0x9310>;
   device_type = "pci";
   #address-cells = <3>;
   #interrupt-cells = <1>;
   #size-cells = <2>;
   interrupts = <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>,
                <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>;
   interrupt-names = "pcie", "msi";
   interrupt-map-mask = <0x0 0x0 0x0 0x7>;
   interrupt-map = <0 0 0 1 &gicv2 GIC_SPI 143
                                                     IRQ_TYPE_LEVEL_HIGH>;
   msi-controller;
   msi-parent = <&pcie0>;

   ranges = <0x02000000 0x0 0xc0000000 0x6 0x00000000
             0x0 0x40000000>;
   /*
    * The wrapper around the PCIe block has a bug
    * preventing it from accessing beyond the first 3GB of
    * memory.
    */
   dma-ranges = <0x02000000 0x0 0x00000000 0x0 0x00000000
                 0x0 0xc0000000>;
   brcm,enable-ssc;

   pci@1,0 {
           #address-cells = <3>;
           #size-cells = <2>;
           ranges;

           reg = <0 0 0 0 0>;

           usb@1,0 {
                   reg = <0x10000 0 0 0 0>;
                   resets = <&reset RASPBERRYPI_FIRMWARE_RESET_ID_USB>;
           };
   };
};


Xen fails to parse it: it tries to remap reg = <0x10000 0 0 0 0> as if
it were a CPU address, which of course fails.

Reading the device tree description in detail, I cannot tell whether
Xen has a bug: the "ranges" property under pci@1,0 means that pci@1,0
is treated like a default bus (not a PCI bus), so the children's "reg"
values are translated using the "ranges" property of the parent
(pcie@7d500000).

Is it possible that the device tree is missing device_type = "pci"
under pci@1,0? Or is it just implied because pci@1,0 is a child of
pcie@7d500000?

I'd like to make Xen able to parse this device tree without errors, but
I am not sure what the best way to fix it is.
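If device_type is indeed what the binding expects here, the DT-side fix would
presumably be a one-line addition (an untested sketch; everything except the
device_type line is as in the shipped DT):

```dts
pci@1,0 {
        device_type = "pci";    /* hypothesised missing property */
        #address-cells = <3>;
        #size-cells = <2>;
        ranges;

        reg = <0 0 0 0 0>;
};
```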

Thanks for any help you can provide!

Cheers,

Stefano



On Thu, 4 Feb 2021, Julien Grall wrote:
> On 04/02/2021 00:13, Stefano Stabellini wrote:
> > On Wed, 3 Feb 2021, Julien Grall wrote:
> > > On Wed, 3 Feb 2021 at 22:18, Stefano Stabellini <sstabellini@kernel.org>
> > > wrote:
> > > > > > But aside from PCIe, let's say that we know of a few nodes for which
> > > > > > "reg" needs a special treatment. I am not sure it makes sense to
> > > > > > proceed
> > > > > > with parsing those nodes without knowing how to deal with that.
> > > > > 
> > > > > I believe that most of the time the "special" treatment would be to
> > > > > ignore the property "reg", as it will not be a CPU memory address.
> > > > > 
> > > > > > So maybe
> > > > > > we should add those nodes to skip_matches until we know what to do
> > > > > > with
> > > > > > them. At that point, I would imagine we would introduce a special
> > > > > > handle_device function that knows what to do. In the case of PCIe,
> > > > > > something like "handle_device_pcie".
> > > > > Could you outline how "handle_device_pcie()" will differ with
> > > > > handle_node()?
> > > > > 
> > > > > In fact, the problem is not the PCIe node directly. Instead, it is the
> > > > > second
> > > > > level of nodes below it (i.e usb@...).
> > > > > 
> > > > > The current implementation of dt_number_of_address() only looks at
> > > > > the bus type of the parent. As the parent has no bus type but does
> > > > > have "ranges", it thinks this is something we can translate to a
> > > > > CPU address.
> > > > > 
> > > > > However, this is below a PCI bus so the meaning of "reg" is completely
> > > > > different. In this case, we only need to ignore "reg".
> > > > 
> > > > I see what you are saying and I agree: if we had to introduce a special
> > > > case for PCI, then  dt_number_of_address() seems to be a good place.  In
> > > > fact, we already have special PCI handling, see our
> > > > __dt_translate_address function and xen/common/device_tree.c:dt_busses.
> > > > 
> > > > Which brings the question: why is this actually failing?
> > > 
> > > I already hinted at the reason in my previous e-mail :). Let me expand
> > > a bit more.
> > > 
> > > > 
> > > > pcie0 {
> > > >       ranges = <0x02000000 0x0 0xc0000000 0x6 0x00000000 0x0
> > > > 0x40000000>;
> > > > 
> > > > Which means that PCI addresses 0xc0000000-0x100000000 become
> > > > 0x600000000-0x700000000.
> > > > 
> > > > The offending DT is:
> > > > 
> > > > &pcie0 {
> > > >           pci@1,0 {
> > > >                   #address-cells = <3>;
> > > >                   #size-cells = <2>;
> > > >                   ranges;
> > > > 
> > > >                   reg = <0 0 0 0 0>;
> > > > 
> > > >                   usb@1,0 {
> > > >                           reg = <0x10000 0 0 0 0>;
> > > >                           resets = <&reset
> > > > RASPBERRYPI_FIRMWARE_RESET_ID_USB>;
> > > >                   };
> > > >           };
> > > > };
> > > > 
> > > > 
> > > > reg = <0x10000 0 0 0 0> means that usb@1,0 is PCI device 01:00.0.
> > > > However, the rest of the regs cells are left as zero. It shouldn't be an
> > > > issue because usb@1,0 is a child of pci@1,0 but pci@1,0 is not a bus.
> > > 
> > > The property "ranges" is used to define a mapping or translation
> > > between the address space of the "bus" (here pci@1,0) and the address
> > > space of the bus node's parent (&pcie0).
> > > IOW, it means "reg" in usb@1,0 is an address on the PCI bus (i.e. BDF).
> > > 
> > > The problem is that dt_number_of_address() will only look at the "bus"
> > > type of the parent, using dt_match_bus(). This will return the default
> > > bus (see dt_bus_default_match()), because there is a "ranges" property
> > > in the parent node (i.e. pci@1,0). Therefore...
> > > 
> > > > So
> > > > in theory dt_number_of_address() should already return 0 for it.
> > > 
> > > ... dt_number_of_address() will return 1 even if the address is not a
> > > CPU address. So when Xen will try to translate it, it will fail.
> > > 
> > > > 
> > > > Maybe reg = <0 0 0 0 0> is the problem. In that case, we could simply
> > > > add a check to skip 0 size ranges. Just a hack to explain what I mean:
> > > 
> > > The parent of pci@1,0 is a PCI bridge (see its device_type property),
> > > so the CPU addresses are found not via "reg" but via
> > > "assigned-addresses".
> > > 
> > > In this situation, "reg" will have a different meaning, and therefore
> > > there is no guarantee that the size will be 0.
> > 
> > I copy/pasted the following:
> > 
> >         pci@1,0 {
> >                 #address-cells = <3>;
> >                 #size-cells = <2>;
> >                 ranges;
> > 
> >                 reg = <0 0 0 0 0>;
> > 
> >                 usb@1,0 {
> >                         reg = <0x10000 0 0 0 0>;
> >                         resets = <&reset
> >                         RASPBERRYPI_FIRMWARE_RESET_ID_USB>;
> >                 };
> >         };
> > 
> > under pcie0 in my DTS to see what happens (the node is not there in the
> > device tree for the rpi-5.9.y kernel.) It results in the expected error:
> > 
> > (XEN) Unable to retrieve address 0 for /scb/pcie@7d500000/pci@1,0/usb@1,0
> > (XEN) Device tree generation failed (-22).
> > 
> > I could verify that pci@1,0 is seen as a "default" bus due to the
> > "ranges" property, thus dt_number_of_address() returns 1.
> > 
> > 
> > I can see that reg = <0 0 0 0 0> is not a problem because it is ignored
> > given that the parent is a PCI bus. assigned-addresses is the one that
> > is read.
> > 
> > 
> > But from a device tree perspective I am actually confused by the
> > presence of the "ranges" property under pci@1,0. Is that correct? It is
> > stating that addresses of child devices will be translated to the
> > address space of the parent (pcie0) using the parent's translation rules.
> > I mean -- it looks like Xen is right in trying to translate reg =
> > <0x10000 0 0 0 0> using ranges = <0x02000000 0x0 0xc0000000 0x6
> > 0x00000000 0x0 0x40000000>.
> > 
> > Or maybe since pcie0 is a PCI bus all the children addresses, even
> > grand-children, are expected to be specified using "assigned-addresses"?
> > 
> > 
> > Looking at other examples [1][2], maybe the mistake is that pci@1,0 is
> > missing device_type = "pci"? Of course, if I add that, the error
> > disappears.
> 
> I am afraid, I don't know the answer. I think it would be best to ask the
> Linux DT folks about it.
> 
> > 
> > [1] Documentation/devicetree/bindings/pci/mvebu-pci.txt
> > [2] Documentation/devicetree/bindings/pci/nvidia,tegra20-pcie.txt
> > 
> > For the sake of making Xen more resilient to possible DTSes, maybe we
> > should try to extend the dt_bus_pci_match check? See for instance the
> > change below, but we might be able to come up with better ideas.
> > 
> > 
> > diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
> > index 18825e333e..24d998f725 100644
> > --- a/xen/common/device_tree.c
> > +++ b/xen/common/device_tree.c
> > @@ -565,12 +565,21 @@ static unsigned int dt_bus_default_get_flags(const
> > __be32 *addr)
> >     static bool_t dt_bus_pci_match(const struct dt_device_node *np)
> >   {
> > +    bool ret = false;
> > +
> >       /*
> >        * "pciex" is PCI Express "vci" is for the /chaos bridge on 1st-gen
> > PCI
> >        * powermacs "ht" is hypertransport
> >        */
> > -    return !strcmp(np->type, "pci") || !strcmp(np->type, "pciex") ||
> > +    ret = !strcmp(np->type, "pci") || !strcmp(np->type, "pciex") ||
> >           !strcmp(np->type, "vci") || !strcmp(np->type, "ht");
> > +
> > +    if ( ret ) return ret;
> > +
> > +    if ( !strcmp(np->name, "pci") )
> > +        ret = dt_bus_pci_match(dt_get_parent(np));
> 
> It is probably safe to assume that a PCI device's (not hostbridge's) node
> name will start with "pci". However, I don't much like the idea, because
> the name is not meant to be stable.
> 
> AFAICT, we can only rely on "compatible" and "type".
> 
> Cheers,
> 
> -- 
> Julien Grall
> 


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 18:22:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 18:22:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81399.150406 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7jGi-0001da-By; Thu, 04 Feb 2021 18:22:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81399.150406; Thu, 04 Feb 2021 18:22:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7jGi-0001dT-8x; Thu, 04 Feb 2021 18:22:16 +0000
Received: by outflank-mailman (input) for mailman id 81399;
 Thu, 04 Feb 2021 18:22:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YSLV=HG=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1l7jGg-0001dO-Fz
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 18:22:14 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3d617343-9321-4ca4-bf83-8d19b4f65e38;
 Thu, 04 Feb 2021 18:22:13 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3C3A8AE95;
 Thu,  4 Feb 2021 18:22:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3d617343-9321-4ca4-bf83-8d19b4f65e38
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Message-ID: <c3f4d5f15a6be36388e10bbf1e0e38b247f190aa.camel@suse.com>
Subject: Re: [ANNOUNCE] Xen 4.15 - call for notification/status of
 significant bugs
From: Dario Faggioli <dfaggioli@suse.com>
To: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Cc: Ian Jackson <iwj@xenproject.org>, Committers
 <committers@xenproject.org>,  Xen-devel <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>,  community.manager@xenproject.org
Date: Thu, 04 Feb 2021 19:22:10 +0100
In-Reply-To: <CABfawhkT5JBsT2-reSLB-bNFhP1em5U3vBs+z_FM6_Kcd7TSiQ@mail.gmail.com>
References: <24600.8030.769396.165224@mariner.uk.xensource.com>
	 <24603.58528.901884.980466@mariner.uk.xensource.com>
	 <6d0d7181bad79259aff28351621d2ac1eeaca113.camel@suse.com>
	 <CABfawhkT5JBsT2-reSLB-bNFhP1em5U3vBs+z_FM6_Kcd7TSiQ@mail.gmail.com>
Content-Type: multipart/signed; micalg="pgp-sha256";
	protocol="application/pgp-signature"; boundary="=-lW61QBr3CT5ho3U8Z+Tw"
User-Agent: Evolution 3.38.3 (by Flathub.org) 
MIME-Version: 1.0


--=-lW61QBr3CT5ho3U8Z+Tw
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, 2021-02-04 at 10:00 -0500, Tamas K Lengyel wrote:
> On Thu, Feb 4, 2021 at 9:21 AM Dario Faggioli <dfaggioli@suse.com>
> wrote:
> >
> > On Thu, 2021-02-04 at 12:12 +0000, Ian Jackson wrote:
> > > B. "scheduler broken" bugs.
> >
> > - Null scheduler and vwfi native problem
> >
> > https://lists.xenproject.org/archives/html/xen-devel/2021-01/msg01634.html
> >
> >   RCU issues, but they manifest due to scheduler behavior (especially
> >   the NULL scheduler, especially on ARM).
> >   I'm actively working on it.
> >
> >   Patches that should solve the issue for ARM have been posted already.
> >   They will need to be slightly adjusted to cover x86 as well. Waiting a
> >   couple of days more for a confirmation from the reporter that the
> >   patches do help, at least on ARM.
> >
>
> I've run into the null scheduler causing CPU lockups on x86 as well.
> It required a physical machine reboot. It seems to be triggered by
> domain destruction when destroying fork VMs, and happens only
> intermittently.
>
Yes, we know that it's generic and not ARM-only. It's just that on ARM
it is easier (or, I should say, deterministic) to trigger it.

Thanks for reporting, though.

I'll add you in Cc when I send the updated version of the patches that
cover x86 as well, in case you want to test. :-)

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)

--=-lW61QBr3CT5ho3U8Z+Tw

--=-lW61QBr3CT5ho3U8Z+Tw--



From xen-devel-bounces@lists.xenproject.org Thu Feb 04 18:31:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 18:31:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81403.150426 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7jPl-0002nw-CK; Thu, 04 Feb 2021 18:31:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81403.150426; Thu, 04 Feb 2021 18:31:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7jPl-0002np-8v; Thu, 04 Feb 2021 18:31:37 +0000
Received: by outflank-mailman (input) for mailman id 81403;
 Thu, 04 Feb 2021 18:31:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=x+ZK=HG=kernel.org=robh@srs-us1.protection.inumbo.net>)
 id 1l7jPk-0002nk-05
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 18:31:36 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 79e56288-507c-4de7-9684-54f967ad26a9;
 Thu, 04 Feb 2021 18:31:34 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id B002464F58
 for <xen-devel@lists.xenproject.org>; Thu,  4 Feb 2021 18:31:33 +0000 (UTC)
Received: by mail-ed1-f43.google.com with SMTP id z22so5400125edb.9
 for <xen-devel@lists.xenproject.org>; Thu, 04 Feb 2021 10:31:33 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 79e56288-507c-4de7-9684-54f967ad26a9
X-Received: by 2002:a50:ee10:: with SMTP id g16mr329305eds.62.1612463492094;
 Thu, 04 Feb 2021 10:31:32 -0800 (PST)
MIME-Version: 1.0
References: <YBmQQ3Tzu++AadKx@mattapan.m5p.com> <a422c04c-f908-6fb6-f2de-fea7b18a6e7d@xen.org>
 <b6d342f8-c833-db88-9808-cdc946999300@xen.org> <alpine.DEB.2.21.2102021412480.29047@sstabellini-ThinkPad-T480s>
 <06d6b9ec-0db9-d6da-e30b-df9f9381157d@xen.org> <alpine.DEB.2.21.2102031315350.29047@sstabellini-ThinkPad-T480s>
 <CAJ=z9a1LsqOMFXV5GLYEkF7=akMx7fT_vpgVtT6xP6MPfmP9vQ@mail.gmail.com>
 <alpine.DEB.2.21.2102031519540.29047@sstabellini-ThinkPad-T480s>
 <9b97789b-5560-0186-642a-0501789830e5@xen.org> <alpine.DEB.2.21.2102040944520.29047@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2102040944520.29047@sstabellini-ThinkPad-T480s>
From: Rob Herring <robh@kernel.org>
Date: Thu, 4 Feb 2021 12:31:20 -0600
X-Gmail-Original-Message-ID: <CAL_JsqJuvZPheRkacaopHtbATj8uRua=wj_XU5ib41sSpVO-ug@mail.gmail.com>
Message-ID: <CAL_JsqJuvZPheRkacaopHtbATj8uRua=wj_XU5ib41sSpVO-ug@mail.gmail.com>
Subject: Re: Question on PCIe Device Tree bindings, Was: [PATCH] xen/arm:
 domain_build: Ignore device nodes with invalid addresses
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall.oss@gmail.com>, Elliott Mitchell <ehem+xen@m5p.com>, 
	xen-devel <xen-devel@lists.xenproject.org>, 
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, julien@xen.org
Content-Type: text/plain; charset="UTF-8"

On Thu, Feb 4, 2021 at 11:56 AM Stefano Stabellini
<sstabellini@kernel.org> wrote:
>
> Hi Rob,
>
> We have a question on the PCIe device tree bindings. In summary, we have
> come across the Raspberry Pi 4 PCIe description below:
>
>
> pcie0: pcie@7d500000 {
>    compatible = "brcm,bcm2711-pcie";
>    reg = <0x0 0x7d500000  0x0 0x9310>;
>    device_type = "pci";
>    #address-cells = <3>;
>    #interrupt-cells = <1>;
>    #size-cells = <2>;
>    interrupts = <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>,
>                 <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>;
>    interrupt-names = "pcie", "msi";
>    interrupt-map-mask = <0x0 0x0 0x0 0x7>;
>    interrupt-map = <0 0 0 1 &gicv2 GIC_SPI 143
>                     IRQ_TYPE_LEVEL_HIGH>;
>    msi-controller;
>    msi-parent = <&pcie0>;
>
>    ranges = <0x02000000 0x0 0xc0000000 0x6 0x00000000
>              0x0 0x40000000>;
>    /*
>     * The wrapper around the PCIe block has a bug
>     * preventing it from accessing beyond the first 3GB of
>     * memory.
>     */
>    dma-ranges = <0x02000000 0x0 0x00000000 0x0 0x00000000
>                  0x0 0xc0000000>;
>    brcm,enable-ssc;
>
>    pci@1,0 {
>            #address-cells = <3>;
>            #size-cells = <2>;
>            ranges;
>
>            reg = <0 0 0 0 0>;
>
>            usb@1,0 {
>                    reg = <0x10000 0 0 0 0>;
>                    resets = <&reset RASPBERRYPI_FIRMWARE_RESET_ID_USB>;
>            };
>    };
> };
>
>
> Xen fails to parse it with an error because it tries to remap reg =
> <0x10000 0 0 0 0> as if it were a CPU address, and of course that fails.
>
> Reading the device tree description in detail, I cannot tell if Xen has
> a bug: the ranges property under pci@1,0 means that pci@1,0 is treated
> like a default bus (not a PCI bus), hence, the children regs are
> translated using the ranges property of the parent (pcie@7d500000).
>
> Is it possible that the device tree is missing device_type =
> "pci" under pci@1,0? Or is it just implied because pci@1,0 is a child of
> pcie@7d500000?

Indeed, it should have device_type. Linux (only recently, due to
another missing device_type case) will also look at the node name, but
only for 'pcie'.

We should be able to create a schema (or extend pci-bus.yaml) to catch
this case.

Rob

> I'd like to make Xen able to parse this device tree without errors, but I
> am not sure what the best way to fix it is.
>
> Thanks for any help you can provide!
>
> Cheers,
>
> Stefano
>
>
>
> On Thu, 4 Feb 2021, Julien Grall wrote:
> > On 04/02/2021 00:13, Stefano Stabellini wrote:
> > > On Wed, 3 Feb 2021, Julien Grall wrote:
> > > > On Wed, 3 Feb 2021 at 22:18, Stefano Stabellini <sstabellini@kernel.org>
> > > > wrote:
> > > > > > > But aside from PCIe, let's say that we know of a few nodes for which
> > > > > > > "reg" needs a special treatment. I am not sure it makes sense to
> > > > > > > proceed
> > > > > > > with parsing those nodes without knowing how to deal with that.
> > > > > >
> > > > > > I believe that most of the time the "special" treatment would be to
> > > > > > ignore the property "reg", as it will not be a CPU memory address.
> > > > > >
> > > > > > > So maybe
> > > > > > > we should add those nodes to skip_matches until we know what to do
> > > > > > > with
> > > > > > > them. At that point, I would imagine we would introduce a special
> > > > > > > handle_device function that knows what to do. In the case of PCIe,
> > > > > > > something like "handle_device_pcie".
> > > > > > Could you outline how "handle_device_pcie()" will differ with
> > > > > > handle_node()?
> > > > > >
> > > > > > In fact, the problem is not the PCIe node directly. Instead, it is the
> > > > > > second
> > > > > > level of nodes below it (i.e usb@...).
> > > > > >
> > > > > > The current implementation of dt_number_of_address() only looks at
> > > > > > the bus type of the parent. As the parent has no bus type but does
> > > > > > have "ranges", it thinks this is something we can translate to a
> > > > > > CPU address.
> > > > > >
> > > > > > However, this is below a PCI bus so the meaning of "reg" is completely
> > > > > > different. In this case, we only need to ignore "reg".
> > > > >
> > > > > I see what you are saying and I agree: if we had to introduce a special
> > > > > case for PCI, then  dt_number_of_address() seems to be a good place.  In
> > > > > fact, we already have special PCI handling, see our
> > > > > __dt_translate_address function and xen/common/device_tree.c:dt_busses.
> > > > >
> > > > > Which raises the question: why is this actually failing?
> > > >
> > > > I already hinted at the reason in my previous e-mail :). Let me expand
> > > > a bit more.
> > > >
> > > > >
> > > > > pcie0 {
> > > > >       ranges = <0x02000000 0x0 0xc0000000 0x6 0x00000000 0x0
> > > > > 0x40000000>;
> > > > >
> > > > > Which means that PCI addresses 0xc0000000-0x100000000 become CPU
> > > > > addresses 0x600000000-0x640000000 (a 1GB window).
> > > > >
> > > > > The offending DT is:
> > > > >
> > > > > &pcie0 {
> > > > >           pci@1,0 {
> > > > >                   #address-cells = <3>;
> > > > >                   #size-cells = <2>;
> > > > >                   ranges;
> > > > >
> > > > >                   reg = <0 0 0 0 0>;
> > > > >
> > > > >                   usb@1,0 {
> > > > >                           reg = <0x10000 0 0 0 0>;
> > > > >                           resets = <&reset
> > > > > RASPBERRYPI_FIRMWARE_RESET_ID_USB>;
> > > > >                   };
> > > > >           };
> > > > > };
> > > > >
> > > > >
> > > > > reg = <0x10000 0 0 0 0> means that usb@1,0 is PCI device 01:00.0.
> > > > > However, the rest of the reg cells are left as zero. It shouldn't be an
> > > > > issue because usb@1,0 is a child of pci@1,0 but pci@1,0 is not a bus.
> > > >
> > > > The property "ranges" is used to define a mapping or translation
> > > > between the address space of the "bus" (here pci@1,0) and the address
> > > > space of the bus node's parent (&pcie0).
> > > > IOW, it means "reg" in usb@1,0 is an address on the PCI bus (i.e. BDF).
> > > >
> > > > The problem is dt_number_of_address() will only look at the "bus" type
> > > > of the parent using dt_match_bus(). This will return the default bus
> > > > (see dt_bus_default_match()), because there is a "ranges" property in
> > > > the parent node (i.e. pci@1,0). Therefore...
> > > >
> > > > > So
> > > > > in theory dt_number_of_address() should already return 0 for it.
> > > >
> > > > ... dt_number_of_address() will return 1 even if the address is not a
> > > > CPU address. So when Xen tries to translate it, it will fail.
> > > >
> > > > >
> > > > > Maybe reg = <0 0 0 0 0> is the problem. In that case, we could simply
> > > > > add a check to skip 0 size ranges. Just a hack to explain what I mean:
> > > >
> > > > The parent of pci@1,0 is a PCI bridge (see its device_type property), so
> > > > the CPU addresses are found not via "reg" but via "assigned-addresses".
> > > >
> > > > In this situation, "reg" will have a different meaning and therefore
> > > > there is no guarantee that the size will be 0.
> > >
> > > I copy/pasted the following:
> > >
> > >         pci@1,0 {
> > >                 #address-cells = <3>;
> > >                 #size-cells = <2>;
> > >                 ranges;
> > >
> > >                 reg = <0 0 0 0 0>;
> > >
> > >                 usb@1,0 {
> > >                         reg = <0x10000 0 0 0 0>;
> > >                         resets = <&reset
> > >                         RASPBERRYPI_FIRMWARE_RESET_ID_USB>;
> > >                 };
> > >         };
> > >
> > > under pcie0 in my DTS to see what happens (the node is not there in the
> > > device tree for the rpi-5.9.y kernel.) It results in the expected error:
> > >
> > > (XEN) Unable to retrieve address 0 for /scb/pcie@7d500000/pci@1,0/usb@1,0
> > > (XEN) Device tree generation failed (-22).
> > >
> > > I could verify that pci@1,0 is seen as a "default" bus due to the "ranges"
> > > property, thus dt_number_of_address() returns 1.
> > >
> > >
> > > I can see that reg = <0 0 0 0 0> is not a problem because it is ignored
> > > given that the parent is a PCI bus. assigned-addresses is the one that
> > > is read.
> > >
> > >
> > > But from a device tree perspective I am actually confused by the
> > > presence of the "ranges" property under pci@1,0. Is that correct? It is
> > > stating that addresses of child devices will be translated to the
> > > address space of the parent (pcie0) using the parent translation rules.
> > > I mean -- it looks like Xen is right in trying to translate reg =
> > > <0x10000 0 0 0 0> using ranges = <0x02000000 0x0 0xc0000000 0x6
> > > 0x00000000 0x0 0x40000000>.
> > >
> > > Or maybe since pcie0 is a PCI bus, all the children's addresses, even
> > > grand-children's, are expected to be specified using "assigned-addresses"?
> > >
> > >
> > > Looking at other examples [1][2] maybe the mistake is that pci@1,0 is
> > > missing device_type = "pci"?  Of course, if I add that, the error
> > > disappears.
> >
> > I am afraid I don't know the answer. I think it would be best to ask the
> > Linux DT folks about it.
> >
> > >
> > > [1] Documentation/devicetree/bindings/pci/mvebu-pci.txt
> > > [2] Documentation/devicetree/bindings/pci/nvidia,tegra20-pcie.txt
> > >
> > > For the sake of making Xen more resilient to possible DTSes, maybe we
> > > should try to extend the dt_bus_pci_match check? See for instance the
> > > change below, but we might be able to come up with better ideas.
> > >
> > >
> > > diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
> > > index 18825e333e..24d998f725 100644
> > > --- a/xen/common/device_tree.c
> > > +++ b/xen/common/device_tree.c
> > > @@ -565,12 +565,21 @@ static unsigned int dt_bus_default_get_flags(const __be32 *addr)
> > >  static bool_t dt_bus_pci_match(const struct dt_device_node *np)
> > >  {
> > > +    bool ret = false;
> > > +
> > >      /*
> > >       * "pciex" is PCI Express "vci" is for the /chaos bridge on 1st-gen PCI
> > >       * powermacs "ht" is hypertransport
> > >       */
> > > -    return !strcmp(np->type, "pci") || !strcmp(np->type, "pciex") ||
> > > +    ret = !strcmp(np->type, "pci") || !strcmp(np->type, "pciex") ||
> > >          !strcmp(np->type, "vci") || !strcmp(np->type, "ht");
> > > +
> > > +    if ( ret ) return ret;
> > > +
> > > +    if ( !strcmp(np->name, "pci") )
> > > +        ret = dt_bus_pci_match(dt_get_parent(np));
> > > +
> > > +    return ret;
> >
> > It is probably safe to assume that the node name of a PCI device (not a
> > hostbridge) will start with "pci". However, I don't much like the idea
> > because the node name is not meant to be stable.
> >
> > AFAICT, we can only rely on "compatible" and "type".
> >
> > Cheers,
> >
> > --
> > Julien Grall
> >


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 19:37:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 19:37:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81411.150444 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7kQz-0000iU-GZ; Thu, 04 Feb 2021 19:36:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81411.150444; Thu, 04 Feb 2021 19:36:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7kQz-0000iN-DF; Thu, 04 Feb 2021 19:36:57 +0000
Received: by outflank-mailman (input) for mailman id 81411;
 Thu, 04 Feb 2021 19:36:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7XzF=HG=oracle.com=konrad.wilk@srs-us1.protection.inumbo.net>)
 id 1l7kQx-0000iI-Ob
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 19:36:56 +0000
Received: from aserp2130.oracle.com (unknown [141.146.126.79])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d7b87dab-ad88-475c-aab9-4b5a51526481;
 Thu, 04 Feb 2021 19:36:53 +0000 (UTC)
Received: from pps.filterd (aserp2130.oracle.com [127.0.0.1])
 by aserp2130.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 114JTdwm044841;
 Thu, 4 Feb 2021 19:33:50 GMT
Received: from userp3020.oracle.com (userp3020.oracle.com [156.151.31.79])
 by aserp2130.oracle.com with ESMTP id 36cvyb71sb-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 04 Feb 2021 19:33:50 +0000
Received: from pps.filterd (userp3020.oracle.com [127.0.0.1])
 by userp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 114JUXk4106088;
 Thu, 4 Feb 2021 19:31:49 GMT
Received: from nam10-dm6-obe.outbound.protection.outlook.com
 (mail-dm6nam10lp2103.outbound.protection.outlook.com [104.47.58.103])
 by userp3020.oracle.com with ESMTP id 36dh7vm7bb-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 04 Feb 2021 19:31:49 +0000
Received: from BYAPR10MB2999.namprd10.prod.outlook.com (2603:10b6:a03:85::27)
 by BYAPR10MB3207.namprd10.prod.outlook.com (2603:10b6:a03:152::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3805.19; Thu, 4 Feb
 2021 19:31:44 +0000
Received: from BYAPR10MB2999.namprd10.prod.outlook.com
 ([fe80::e180:1ba2:d87:456]) by BYAPR10MB2999.namprd10.prod.outlook.com
 ([fe80::e180:1ba2:d87:456%4]) with mapi id 15.20.3825.024; Thu, 4 Feb 2021
 19:31:44 +0000
Received: from fedora (209.6.208.110) by
 BL0PR02CA0129.namprd02.prod.outlook.com (2603:10b6:208:35::34) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3825.20 via Frontend Transport; Thu, 4 Feb 2021 19:31:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d7b87dab-ad88-475c-aab9-4b5a51526481
Date: Thu, 4 Feb 2021 14:31:36 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Robin Murphy <robin.murphy@arm.com>
Cc: Christoph Hellwig <hch@lst.de>, Dongli Zhang <dongli.zhang@oracle.com>,
        dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
        iommu@lists.linux-foundation.org, linux-mips@vger.kernel.org,
        linux-mmc@vger.kernel.org, linux-pci@vger.kernel.org,
        linuxppc-dev@lists.ozlabs.org, nouveau@lists.freedesktop.org,
        x86@kernel.org, xen-devel@lists.xenproject.org,
        linux-kernel@vger.kernel.org, adrian.hunter@intel.com,
        akpm@linux-foundation.org, benh@kernel.crashing.org,
        bskeggs@redhat.com, bhelgaas@google.com, bp@alien8.de,
        boris.ostrovsky@oracle.com, chris@chris-wilson.co.uk, daniel@ffwll.ch,
        airlied@linux.ie, hpa@zytor.com, mingo@kernel.org, mingo@redhat.com,
        jani.nikula@linux.intel.com, joonas.lahtinen@linux.intel.com,
        jgross@suse.com, m.szyprowski@samsung.com, matthew.auld@intel.com,
        mpe@ellerman.id.au, rppt@kernel.org, paulus@samba.org,
        peterz@infradead.org, rodrigo.vivi@intel.com, sstabellini@kernel.org,
        bauerman@linux.ibm.com, tsbogend@alpha.franken.de, tglx@linutronix.de,
        ulf.hansson@linaro.org, joe.jin@oracle.com, thomas.lendacky@amd.com,
        Claire Chang <tientzu@chromium.org>
Subject: Re: [PATCH RFC v1 2/6] swiotlb: convert variables to arrays
Message-ID: <20210204193136.GA333094@fedora>
References: <20210203233709.19819-1-dongli.zhang@oracle.com>
 <20210203233709.19819-3-dongli.zhang@oracle.com>
 <20210204072947.GA29812@lst.de>
 <b46ddefe-d91a-fa6a-0e0d-cf1edc343c2e@arm.com>
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <b46ddefe-d91a-fa6a-0e0d-cf1edc343c2e@arm.com>
MIME-Version: 1.0
 bulkscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2102040118
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9885 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 clxscore=1011 impostorscore=0
 mlxscore=0 spamscore=0 bulkscore=0 priorityscore=1501 adultscore=0
 lowpriorityscore=0 malwarescore=0 phishscore=0 mlxlogscore=999
 suspectscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2102040118

On Thu, Feb 04, 2021 at 11:49:23AM +0000, Robin Murphy wrote:
> On 2021-02-04 07:29, Christoph Hellwig wrote:
> > On Wed, Feb 03, 2021 at 03:37:05PM -0800, Dongli Zhang wrote:
> > > This patch converts several swiotlb related variables to arrays, in
> > > order to maintain stat/status for different swiotlb buffers. Here are
> > > variables involved:
> > > 
> > > - io_tlb_start and io_tlb_end
> > > - io_tlb_nslabs and io_tlb_used
> > > - io_tlb_list
> > > - io_tlb_index
> > > - max_segment
> > > - io_tlb_orig_addr
> > > - no_iotlb_memory
> > > 
> > > There is no functional change and this is to prepare to enable 64-bit
> > > swiotlb.
> > 
> > Claire Chang (on Cc) already posted a patch like this a month ago,
> > which looks much better because it actually uses a struct instead
> > of all the random variables.
> 
> Indeed, I skimmed the cover letter and immediately thought that this whole
> thing is just the restricted DMA pool concept[1] again, only from a slightly
> different angle.


Kind of. Let me lay out how some of these pieces fit together right now:

+-----------------------+      +----------------------+
|                       |      |                      |
|                       |      |                      |
|   a)Xen-SWIOTLB       |      | b)SWIOTLB (for !Xen) |
|                       |      |                      |
+-----------XX----------+      +-------X--------------+
              XXXX             XXXXXXXXX
                 XXXX     XX XXX
                    X   XX
                    XXXX
         +----------XX-----------+
         |                       |
         |                       |
         |   c) SWIOTLB generic  |
         |                       |
         +-----------------------+

Dongli's patches modify the SWIOTLB generic c), and Xen-SWIOTLB a)
parts.

Also see the IOMMU_INIT logic, which lays this out in a bit more depth
(for example how to enable SWIOTLB on AMD boxes, or on IBM boxes with the
Calgary IOMMU, etc. - see iommu_table.h).

Furthermore, it lays the groundwork to allocate AMD SEV SWIOTLB buffers
later, after boot (so that you can stitch different pools together).
All the bits are kept inside the SWIOTLB code, and it also changes
Xen-SWIOTLB to do something similar.
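To make the contrast concrete, here is a minimal sketch of gathering the
io_tlb_* globals quoted above into one structure so several pools can
coexist. The struct and field names are purely illustrative, not what
Claire's actual series uses:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical sketch only: gather the formerly-global io_tlb_*
 * state into one structure so several pools (default, SEV, ...)
 * can coexist. Field names are illustrative. */
typedef unsigned long long phys_addr_t;

struct io_tlb_mem {
    phys_addr_t start;       /* was io_tlb_start */
    phys_addr_t end;         /* was io_tlb_end */
    unsigned long nslabs;    /* was io_tlb_nslabs */
    unsigned long used;      /* was io_tlb_used */
    unsigned int *list;      /* was io_tlb_list */
    unsigned int index;      /* was io_tlb_index */
    phys_addr_t *orig_addr;  /* was io_tlb_orig_addr */
    bool no_memory;          /* was no_iotlb_memory */
};

/* Helper: number of bytes the pool spans. */
static unsigned long long io_tlb_mem_span(const struct io_tlb_mem *m)
{
    return m->end - m->start;
}
```

With this, each caller passes a `struct io_tlb_mem *` instead of touching
globals, which is what makes multiple buffers possible.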

The mempool approach did it similarly, by taking the internal parts of
SWIOTLB (aka the various io_tlb* variables), exposing them, and having
other code use them:

+-----------------------+      +----------------------+
|                       |      |                      |
|                       |      |                      |
| a)Xen-SWIOTLB         |      | b)SWIOTLB (for !Xen) |
|                       |      |                      |
+-----------XX----------+      +-------X--------------+
              XXXX             XXXXXXXXX
                 XXXX     XX XXX
                    X   XX
                    XXXX
         +----------XX-----------+         +------------------+
         |                       |         | Device tree      |
         |                       +<--------+ enabling SWIOTLB |
         |c) SWIOTLB generic     |         |                  |
         |                       |         | mempool          |
         +-----------------------+         +------------------+

What I was suggesting to Claire was to follow the Xen model, that is,
do something like this:

+-----------------------+      +----------------------+   +--------------------+
|                       |      |                      |   |                    |
|                       |      |                      |   |                    |
| a)Xen-SWIOTLB         |      | b)SWIOTLB (for !Xen) |   | e) DT-SWIOTLB      |
|                       |      |                      |   |                    |
+-----------XX----------+      +-------X--------------+   +----XX-X------------+
              XXXX             XXXXXXXXX        XXX X X XX X XX
                 XXXX     XX XXX        XXXXXXXX
                    X   XX XXXXXXXXXXXXX
                    XXXXXXXX
         +----------XXX----------+
         |                       |
         |                       |
         |c) SWIOTLB generic     |
         |                       |
         +-----------------------+


so using the SWIOTLB generic parts, and then bolting the device-tree
logic on top, along with the mempool logic.



But Christoph has an interesting suggestion, which is to squash all the
existing code (a, b, c) together and pepper it with various jump tables.


So:


-----------------------------+
| SWIOTLB:                   |
|                            |
|  a) SWIOTLB (for non-Xen)  |
|  b) Xen-SWIOTLB            |
|  c) DT-SWIOTLB             |
|                            |
|                            |
-----------------------------+


with all the various bits (M2P/P2M for Xen, mempool for ARM,
and normal allocation for BM) in one big file.
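A rough sketch of what those jump tables might look like. This is purely
illustrative: the type and function names are made up, and the real Xen
backend would install its M2P/P2M-based callbacks instead of the identity
mapping shown here:

```c
#include <assert.h>

/* Illustrative only: each SWIOTLB flavour (native, Xen, DT pool)
 * supplies its own address-translation callbacks, and the generic
 * code calls through the table instead of branching on #ifdefs. */
typedef unsigned long long phys_addr_t;
typedef unsigned long long dma_addr_t;

struct swiotlb_ops {
    dma_addr_t (*phys_to_dma)(phys_addr_t pa);
    phys_addr_t (*dma_to_phys)(dma_addr_t da);
};

/* Native (bare-metal) case: the mapping is the identity. */
static dma_addr_t native_phys_to_dma(phys_addr_t pa) { return pa; }
static phys_addr_t native_dma_to_phys(dma_addr_t da) { return da; }

static const struct swiotlb_ops native_ops = {
    .phys_to_dma = native_phys_to_dma,
    .dma_to_phys = native_dma_to_phys,
};

/* Generic code would then do: addr = ops->phys_to_dma(slot_phys); */
```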



From xen-devel-bounces@lists.xenproject.org Thu Feb 04 19:40:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 19:40:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81413.150456 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7kU7-0001ly-0F; Thu, 04 Feb 2021 19:40:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81413.150456; Thu, 04 Feb 2021 19:40:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7kU6-0001lr-T8; Thu, 04 Feb 2021 19:40:10 +0000
Received: by outflank-mailman (input) for mailman id 81413;
 Thu, 04 Feb 2021 19:40:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6i/M=HG=kernel.org=patchwork-bot+netdevbpf@srs-us1.protection.inumbo.net>)
 id 1l7kU5-0001ll-Fq
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 19:40:09 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8050dec6-fb29-40a4-8fc7-046702afd302;
 Thu, 04 Feb 2021 19:40:07 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPS id 8B67C64E07;
 Thu,  4 Feb 2021 19:40:06 +0000 (UTC)
Received: from pdx-korg-docbuild-2.ci.codeaurora.org (localhost.localdomain
 [127.0.0.1])
 by pdx-korg-docbuild-2.ci.codeaurora.org (Postfix) with ESMTP id 7A17F609ED;
 Thu,  4 Feb 2021 19:40:06 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8050dec6-fb29-40a4-8fc7-046702afd302
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1612467606;
	bh=QRdzaOhqV1YA0jawlRZXll9qFnZhTEghj2eey9E/ssk=;
	h=Subject:From:Date:References:In-Reply-To:To:Cc:From;
	b=gpOzdn9Fnh51kjuquSC40onsv4vEM6C5hpDHsL88TBHYO0XU5BdwHp+qpEqvSiki1
	 sgwCAoVdSJOvZTVJ5nbU8LMkLiiyA5scpZi8O3oLoNBTSG+tjsy6fxLsBj8FiVzoYh
	 AnqjVwodV+AOXKxhvuSD/3ivFmlf1f3ZAIoDb+xMx8kqokvqy/Y3+mhVbE9o8I4nO4
	 1cu5dA7wA/PPeNs5a4Jw3Ljy/eizQJhwoU+VdK05Ol7gzFmQHZXQubmuUfNNbc/l4+
	 1v1e/aWkA4pj5lwHfTgzEaE8yc3UnQirPxJLJsyuDy+lHKNp8zayetSWBgWVyGFiZh
	 lV/RuPdtcCaLQ==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: Re: [PATCH] drivers: net: xen-netfront: Simplify the calculation of
 variables
From: patchwork-bot+netdevbpf@kernel.org
Message-Id: 
 <161246760649.23921.17330459930759347792.git-patchwork-notify@kernel.org>
Date: Thu, 04 Feb 2021 19:40:06 +0000
References: <1612261069-13315-1-git-send-email-jiapeng.chong@linux.alibaba.com>
In-Reply-To: <1612261069-13315-1-git-send-email-jiapeng.chong@linux.alibaba.com>
To: Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
Cc: boris.ostrovsky@oracle.com, jgross@suse.com, sstabellini@kernel.org,
 davem@davemloft.net, kuba@kernel.org, ast@kernel.org, daniel@iogearbox.net,
 hawk@kernel.org, john.fastabend@gmail.com, andrii@kernel.org, kafai@fb.com,
 songliubraving@fb.com, yhs@fb.com, kpsingh@kernel.org,
 xen-devel@lists.xenproject.org, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org, bpf@vger.kernel.org

Hello:

This patch was applied to netdev/net-next.git (refs/heads/master):

On Tue,  2 Feb 2021 18:17:49 +0800 you wrote:
> Fix the following coccicheck warnings:
> 
> ./drivers/net/xen-netfront.c:1816:52-54: WARNING !A || A && B is
> equivalent to !A || B.
> 
> Reported-by: Abaci Robot <abaci@linux.alibaba.com>
> Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
> 
> [...]

Here is the summary with links:
  - drivers: net: xen-netfront: Simplify the calculation of variables
    https://git.kernel.org/netdev/net-next/c/e93fac3b5161
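For reference, the identity coccicheck is flagging is that
!A || (A && B) is equivalent to !A || B. A tiny illustrative check
(not the actual xen-netfront code):

```c
#include <assert.h>
#include <stdbool.h>

/* !A || (A && B) is equivalent to !A || B: when A is false the
 * left disjunct wins in both forms; when A is true both reduce
 * to B. Illustrative only, not the xen-netfront source. */
static bool before_simplify(bool a, bool b) { return !a || (a && b); }
static bool after_simplify(bool a, bool b)  { return !a || b; }
```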

You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html




From xen-devel-bounces@lists.xenproject.org Thu Feb 04 19:55:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 19:55:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81416.150467 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7kj1-0002xz-CT; Thu, 04 Feb 2021 19:55:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81416.150467; Thu, 04 Feb 2021 19:55:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7kj1-0002xs-9L; Thu, 04 Feb 2021 19:55:35 +0000
Received: by outflank-mailman (input) for mailman id 81416;
 Thu, 04 Feb 2021 19:55:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7kiz-0002xk-NY; Thu, 04 Feb 2021 19:55:33 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7kiz-0002MD-Go; Thu, 04 Feb 2021 19:55:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7kiz-0005WJ-4R; Thu, 04 Feb 2021 19:55:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l7kiz-0001fi-3v; Thu, 04 Feb 2021 19:55:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=PKhHaRytmQOrQ774Z4TIK16EXMs+in05Bhw9warfnrc=; b=shrocSJA/+WcdXbpFRWoNM2wTZ
	c+sy5ehVq1ysUs4cNmNgXqvxe+nkYYCfIQHfcNs9paM9h4BkV8qjbInGPwj3FkNv6BUO5NPDvwxEn
	6dc5lzM1wQdLhdFtIrlich5mOqxp8ELYQqb77j8qldcAua5z94vTgNipfIjcj/pWHzho=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159020-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 159020: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=45dee7d92b493bb531e7e77a6f9c0180ab152f87
X-Osstest-Versions-That:
    xen=d203dbd69f1a02577dd6fe571d72beb980c548a6
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 04 Feb 2021 19:55:33 +0000

flight 159020 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159020/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  45dee7d92b493bb531e7e77a6f9c0180ab152f87
baseline version:
 xen                  d203dbd69f1a02577dd6fe571d72beb980c548a6

Last test of basis   158993  2021-02-03 21:00:26 Z    0 days
Failing since        159014  2021-02-04 14:00:28 Z    0 days    2 attempts
Testing same since   159020  2021-02-04 17:00:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Edwin Török <edvin.torok@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   d203dbd69f..45dee7d92b  45dee7d92b493bb531e7e77a6f9c0180ab152f87 -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 20:36:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 20:36:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81421.150487 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7lMC-0007Fo-Gu; Thu, 04 Feb 2021 20:36:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81421.150487; Thu, 04 Feb 2021 20:36:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7lMC-0007Fh-Dn; Thu, 04 Feb 2021 20:36:04 +0000
Received: by outflank-mailman (input) for mailman id 81421;
 Thu, 04 Feb 2021 20:36:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=G7BZ=HG=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1l7lMB-0007Fc-Dk
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 20:36:03 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5366e0b6-dd8e-4b3c-b5c0-17fc4de85173;
 Thu, 04 Feb 2021 20:36:02 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 27B8264DDD;
 Thu,  4 Feb 2021 20:36:01 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5366e0b6-dd8e-4b3c-b5c0-17fc4de85173
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1612470961;
	bh=GUAwnF7UD5G2w0R8DRwo2FaXGSz1TcKwEcFLyIDaf50=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=hN3dqEpbl2FHA7KncWxrcZVkXJeNfs+4V8+neDriSkVSFm7qnN7IAl3/Y3uV7ADXR
	 uExvGc4q+Z8q5G4wjle69pGadL9GhpkNBvQ9w91dMDD2Vvd8I+U8sf0TvoMf40fnc4
	 YZUo8pd+DS8t/AM7aHSZjNKTkdvNwfFvXqfMeRYUGIfpFuTdHg6EGPwmjipQAYG6/w
	 gZB0PgFqWk/CiMU2Po5eqBUvMPaca2JSQmvUNFvtaiY+zW3vSwaxYULGjIXenNyxpa
	 xfTo7bea33aRHfdGSLwSOOPxIgddFf1R2ML1qKo9lnoBrwMvV573rO5JotfNUPtXU7
	 e1K/R6t1jPvIQ==
Date: Thu, 4 Feb 2021 12:36:00 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Rob Herring <robh@kernel.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Julien Grall <julien.grall.oss@gmail.com>, 
    Elliott Mitchell <ehem+xen@m5p.com>, 
    xen-devel <xen-devel@lists.xenproject.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, julien@xen.org
Subject: Re: Question on PCIe Device Tree bindings, Was: [PATCH] xen/arm:
 domain_build: Ignore device nodes with invalid addresses
In-Reply-To: <CAL_JsqJuvZPheRkacaopHtbATj8uRua=wj_XU5ib41sSpVO-ug@mail.gmail.com>
Message-ID: <alpine.DEB.2.21.2102041228560.29047@sstabellini-ThinkPad-T480s>
References: <YBmQQ3Tzu++AadKx@mattapan.m5p.com> <a422c04c-f908-6fb6-f2de-fea7b18a6e7d@xen.org> <b6d342f8-c833-db88-9808-cdc946999300@xen.org> <alpine.DEB.2.21.2102021412480.29047@sstabellini-ThinkPad-T480s> <06d6b9ec-0db9-d6da-e30b-df9f9381157d@xen.org>
 <alpine.DEB.2.21.2102031315350.29047@sstabellini-ThinkPad-T480s> <CAJ=z9a1LsqOMFXV5GLYEkF7=akMx7fT_vpgVtT6xP6MPfmP9vQ@mail.gmail.com> <alpine.DEB.2.21.2102031519540.29047@sstabellini-ThinkPad-T480s> <9b97789b-5560-0186-642a-0501789830e5@xen.org>
 <alpine.DEB.2.21.2102040944520.29047@sstabellini-ThinkPad-T480s> <CAL_JsqJuvZPheRkacaopHtbATj8uRua=wj_XU5ib41sSpVO-ug@mail.gmail.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 4 Feb 2021, Rob Herring wrote:
> On Thu, Feb 4, 2021 at 11:56 AM Stefano Stabellini
> <sstabellini@kernel.org> wrote:
> >
> > Hi Rob,
> >
> > We have a question on the PCIe device tree bindings. In summary, we have
> > come across the Raspberry Pi 4 PCIe description below:
> >
> >
> > pcie0: pcie@7d500000 {
> >    compatible = "brcm,bcm2711-pcie";
> >    reg = <0x0 0x7d500000  0x0 0x9310>;
> >    device_type = "pci";
> >    #address-cells = <3>;
> >    #interrupt-cells = <1>;
> >    #size-cells = <2>;
> >    interrupts = <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>,
> >                 <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>;
> >    interrupt-names = "pcie", "msi";
> >    interrupt-map-mask = <0x0 0x0 0x0 0x7>;
> >    interrupt-map = <0 0 0 1 &gicv2 GIC_SPI 143
> >                                                      IRQ_TYPE_LEVEL_HIGH>;
> >    msi-controller;
> >    msi-parent = <&pcie0>;
> >
> >    ranges = <0x02000000 0x0 0xc0000000 0x6 0x00000000
> >              0x0 0x40000000>;
> >    /*
> >     * The wrapper around the PCIe block has a bug
> >     * preventing it from accessing beyond the first 3GB of
> >     * memory.
> >     */
> >    dma-ranges = <0x02000000 0x0 0x00000000 0x0 0x00000000
> >                  0x0 0xc0000000>;
> >    brcm,enable-ssc;
> >
> >    pci@1,0 {
> >            #address-cells = <3>;
> >            #size-cells = <2>;
> >            ranges;
> >
> >            reg = <0 0 0 0 0>;
> >
> >            usb@1,0 {
> >                    reg = <0x10000 0 0 0 0>;
> >                    resets = <&reset RASPBERRYPI_FIRMWARE_RESET_ID_USB>;
> >            };
> >    };
> > };
> >
> >
> > Xen fails to parse it with an error because it tries to remap reg =
> > <0x10000 0 0 0 0> as if it was a CPU address and of course it fails.
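(Editorial aside: in the standard PCI device-tree binding, the first cell
of "reg" packs the bus/device/function, which is why 0x10000 identifies
PCI device 01:00.0 rather than a CPU address. A quick sketch of that
decoding:)

```c
#include <assert.h>

/* Decode the phys.hi cell of a PCI "reg" property (IEEE 1275 PCI
 * bus binding): bits 23:16 = bus, 15:11 = device, 10:8 = function. */
static unsigned pci_bus(unsigned phys_hi) { return (phys_hi >> 16) & 0xff; }
static unsigned pci_dev(unsigned phys_hi) { return (phys_hi >> 11) & 0x1f; }
static unsigned pci_fn(unsigned phys_hi)  { return (phys_hi >> 8)  & 0x7; }
```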
> >
> > Reading the device tree description in details, I cannot tell if Xen has
> > a bug: the ranges property under pci@1,0 means that pci@1,0 is treated
> > like a default bus (not a PCI bus), hence, the children regs are
> > translated using the ranges property of the parent (pcie@7d500000).
> >
> > Is it possible that the device tree is missing device_type =
> > "pci" under pci@1,0? Or is it just implied because pci@1,0 is a child of
> > pcie@7d500000?
> 
> Indeed, it should have device_type. Linux (only recently due to
> another missing device_type case) will also look at node name, but
> only 'pcie'.
>
> We should be able to create (or extend pci-bus.yaml) a schema to catch
> this case.

Ah, that is what I needed to know, thank you!  Does Linux consider a
node named "pcie" as if it had device_type = "pci"?

In Xen, to also cover the RPi4 case, maybe I could add a check for the
node name being "pci" or "pcie", and if it matches, Xen could assume
device_type = "pci".


> > I'd like to make Xen able to parse this device tree without errors but I
> > am not sure what is the best way to fix it.
> >
> > Thanks for any help you can provide!
> >
> > Cheers,
> >
> > Stefano
> >
> >
> >
> > On Thu, 4 Feb 2021, Julien Grall wrote:
> > > On 04/02/2021 00:13, Stefano Stabellini wrote:
> > > > On Wed, 3 Feb 2021, Julien Grall wrote:
> > > > > On Wed, 3 Feb 2021 at 22:18, Stefano Stabellini <sstabellini@kernel.org>
> > > > > wrote:
> > > > > > > > But aside from PCIe, let's say that we know of a few nodes for which
> > > > > > > > "reg" needs a special treatment. I am not sure it makes sense to
> > > > > > > > proceed
> > > > > > > > with parsing those nodes without knowing how to deal with that.
> > > > > > >
> > > > > > > I believe that most of the time the "special" treatment would be to
> > > > > > > ignore the
> > > > > > > property "regs" as it will not be a CPU memory address.
> > > > > > >
> > > > > > > > So maybe
> > > > > > > > we should add those nodes to skip_matches until we know what to do
> > > > > > > > with
> > > > > > > > them. At that point, I would imagine we would introduce a special
> > > > > > > > handle_device function that knows what to do. In the case of PCIe,
> > > > > > > > something like "handle_device_pcie".
> > > > > > > Could you outline how "handle_device_pcie()" will differ with
> > > > > > > handle_node()?
> > > > > > >
> > > > > > > In fact, the problem is not the PCIe node directly. Instead, it is the
> > > > > > > second
> > > > > > > level of nodes below it (i.e usb@...).
> > > > > > >
> > > > > > > The current implementation of dt_number_of_address() only looks at the
> > > > > > > bus type
> > > > > > > of the parent. As the parent has no bus type and "ranges" then it
> > > > > > > thinks this
> > > > > > > is something we can translate to a CPU address.
> > > > > > >
> > > > > > > However, this is below a PCI bus so the meaning of "reg" is completely
> > > > > > > different. In this case, we only need to ignore "reg".
> > > > > >
> > > > > > I see what you are saying and I agree: if we had to introduce a special
> > > > > > case for PCI, then  dt_number_of_address() seems to be a good place.  In
> > > > > > fact, we already have special PCI handling, see our
> > > > > > __dt_translate_address function and xen/common/device_tree.c:dt_busses.
> > > > > >
> > > > > > Which brings the question: why is this actually failing?
> > > > >
> > > > > I already hinted at the reason in my previous e-mail :). Let me expand
> > > > > a bit more.
> > > > >
> > > > > >
> > > > > > pcie0 {
> > > > > >       ranges = <0x02000000 0x0 0xc0000000 0x6 0x00000000 0x0
> > > > > > 0x40000000>;
> > > > > >
> > > > > > Which means that PCI addresses 0xc0000000-0x100000000 become
> > > > > > 0x600000000-0x700000000.
> > > > > >
> > > > > > The offending DT is:
> > > > > >
> > > > > > &pcie0 {
> > > > > >           pci@1,0 {
> > > > > >                   #address-cells = <3>;
> > > > > >                   #size-cells = <2>;
> > > > > >                   ranges;
> > > > > >
> > > > > >                   reg = <0 0 0 0 0>;
> > > > > >
> > > > > >                   usb@1,0 {
> > > > > >                           reg = <0x10000 0 0 0 0>;
> > > > > >                           resets = <&reset
> > > > > > RASPBERRYPI_FIRMWARE_RESET_ID_USB>;
> > > > > >                   };
> > > > > >           };
> > > > > > };
> > > > > >
> > > > > >
> > > > > > reg = <0x10000 0 0 0 0> means that usb@1,0 is PCI device 01:00.0.
> > > > > > However, the rest of the regs cells are left as zero. It shouldn't be an
> > > > > > issue because usb@1,0 is a child of pci@1,0 but pci@1,0 is not a bus.
> > > > >
> > > > > The property "ranges" is used to define a mapping or translation
> > > > > between the address space of the "bus" (here pci@1,0) and the address
> > > > > space of the bus node's parent (&pcie0).
> > > > > IOW, it means "reg" in usb@1,0 is an address on the PCI bus (i.e. BDF).
> > > > >
> > > > > The problem is dt_number_of_address() will only look at the "bus" type
> > > > > of the parent using dt_match_bus(). This will return the default bus
> > > > > (see dt_bus_default_match()), because this is a property "ranges" in
> > > > > the parent node (i.e. pci@1,0). Therefore...
> > > > >
> > > > > > So
> > > > > > in theory dt_number_of_address() should already return 0 for it.
> > > > >
> > > > > ... dt_number_of_address() will return 1 even if the address is not a
> > > > > CPU address. So when Xen will try to translate it, it will fail.
> > > > >
> > > > > >
> > > > > > Maybe reg = <0 0 0 0 0> is the problem. In that case, we could simply
> > > > > > add a check to skip 0 size ranges. Just a hack to explain what I mean:
> > > > >
> > > > > The parent of pci@1,0 is a PCI bridge (see the property type), so the
> > > > > CPU addresses are found not via "regs" but "assigned-addresses".
> > > > >
> > > > > In this situation, "regs" will have a different meaning and therefore
> > > > > there is no promise that the size will be 0.
> > > >
> > > > I copy/pasted the following:
> > > >
> > > >         pci@1,0 {
> > > >                 #address-cells = <3>;
> > > >                 #size-cells = <2>;
> > > >                 ranges;
> > > >
> > > >                 reg = <0 0 0 0 0>;
> > > >
> > > >                 usb@1,0 {
> > > >                         reg = <0x10000 0 0 0 0>;
> > > >                         resets = <&reset
> > > >                         RASPBERRYPI_FIRMWARE_RESET_ID_USB>;
> > > >                 };
> > > >         };
> > > >
> > > > under pcie0 in my DTS to see what happens (the node is not there in the
> > > > device tree for the rpi-5.9.y kernel.) It results in the expected error:
> > > >
> > > > (XEN) Unable to retrieve address 0 for /scb/pcie@7d500000/pci@1,0/usb@1,0
> > > > (XEN) Device tree generation failed (-22).
> > > >
> > > > I could verify that pci@1,0 is seen as "default" bus due to the range
> > > > property, thus dt_number_of_address() returns 1.
> > > >
> > > >
> > > > I can see that reg = <0 0 0 0 0> is not a problem because it is ignored
> > > > given that the parent is a PCI bus. assigned-addresses is the one that
> > > > is read.
> > > >
> > > >
> > > > But from a device tree perspective I am actually confused by the
> > > > presence of the "ranges" property under pci@1,0. Is that correct? It is
> > > > stating that addresses of children devices will be translated to the
> > > > address space of the parent (pcie0) using the parent translation rules.
> > > > I mean -- it looks like Xen is right in trying to translate reg =
> > > > <0x10000 0 0 0 0> using ranges = <0x02000000 0x0 0xc0000000 0x6
> > > > 0x00000000 0x0 0x40000000>.
> > > >
> > > > Or maybe, since pcie0 is a PCI bus, all the child addresses, even the
> > > > grand-children's, are expected to be specified using "assigned-addresses"?
> > > >
> > > >
> > > > Looking at other examples [1][2], maybe the mistake is that pci@1,0 is
> > > > missing device_type = "pci"?  Of course, if I add that, the error
> > > > disappears.
> > >
> > > I am afraid I don't know the answer. I think it would be best to ask the
> > > Linux DT folks about it.
> > >
> > > >
> > > > [1] Documentation/devicetree/bindings/pci/mvebu-pci.txt
> > > > [2] Documentation/devicetree/bindings/pci/nvidia,tegra20-pcie.txt
> > > >
> > > > For the sake of making Xen more resilient to possible DTSes, maybe we
> > > > should try to extend the dt_bus_pci_match check? See for instance the
> > > > change below, but we might be able to come up with better ideas.
> > > >
> > > >
> > > > diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
> > > > index 18825e333e..24d998f725 100644
> > > > --- a/xen/common/device_tree.c
> > > > +++ b/xen/common/device_tree.c
> > > > @@ -565,12 +565,21 @@ static unsigned int dt_bus_default_get_flags(const __be32 *addr)
> > > >  static bool_t dt_bus_pci_match(const struct dt_device_node *np)
> > > >  {
> > > > +    bool ret = false;
> > > > +
> > > >      /*
> > > >       * "pciex" is PCI Express "vci" is for the /chaos bridge on 1st-gen PCI
> > > >       * powermacs "ht" is hypertransport
> > > >       */
> > > > -    return !strcmp(np->type, "pci") || !strcmp(np->type, "pciex") ||
> > > > +    ret = !strcmp(np->type, "pci") || !strcmp(np->type, "pciex") ||
> > > >           !strcmp(np->type, "vci") || !strcmp(np->type, "ht");
> > > > +
> > > > +    if ( ret ) return ret;
> > > > +
> > > > +    if ( !strcmp(np->name, "pci") )
> > > > +        ret = dt_bus_pci_match(dt_get_parent(np));
> > >
> > > It is probably safe to assume that a PCI device (not a hostbridge) will
> > > have a name starting with "pci". That said, I don't much like the idea,
> > > because the name is not meant to be stable.
> > >
> > > AFAICT, we can only rely on "compatible" and "type".
> > >
> > > Cheers,
> > >
> > > --
> > > Julien Grall
> > >
> 


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 20:55:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 20:55:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81424.150499 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7leg-0000yi-69; Thu, 04 Feb 2021 20:55:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81424.150499; Thu, 04 Feb 2021 20:55:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7leg-0000yb-1K; Thu, 04 Feb 2021 20:55:10 +0000
Received: by outflank-mailman (input) for mailman id 81424;
 Thu, 04 Feb 2021 20:55:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7lee-0000yT-GJ; Thu, 04 Feb 2021 20:55:08 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7lee-0003QK-Ao; Thu, 04 Feb 2021 20:55:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7lee-0007pK-2P; Thu, 04 Feb 2021 20:55:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l7lee-0002rw-1v; Thu, 04 Feb 2021 20:55:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=nP8rGZFBLUBX7zxTdF/D6a2HxS+oBSSYMlN5nAaM7Sw=; b=2lIPcZfWINZmAlQbqLIBQ7QHJR
	3WPxu75LX7EJhv7oyhxL5fja9+XYDsKAbwF6LdIR9Z5fctAvewUks5CUJe5AknqVr4h/ijt0h/Qfr
	HRZ7yXLRATY/PmKr6oJcEqvPzZIzUrKIRfBp1v/IV9lqONBMtr7wYCW7UZEG0FZMjr/k=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158997-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 158997: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    linux-5.4:test-arm64-arm64-xl-credit1:<job status>:broken:regression
    linux-5.4:test-arm64-arm64-xl-seattle:<job status>:broken:regression
    linux-5.4:test-arm64-arm64-xl-seattle:host-install(5):broken:regression
    linux-5.4:test-arm64-arm64-xl-credit1:host-install(5):broken:regression
    linux-5.4:test-arm64-arm64-xl:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-xsm:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-credit2:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-thunderx:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-libvirt:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-cubietruck:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-armhf-armhf-xl:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-arndale:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-libvirt-raw:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=e89428970c23011a2679121c56e9f54f654c6602
X-Osstest-Versions-That:
    linux=a829146c3fdcf6d0b76d9c54556a223820f1f73b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 04 Feb 2021 20:55:08 +0000

flight 158997 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158997/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-credit1     <job status>                 broken
 test-arm64-arm64-xl-seattle     <job status>                 broken
 test-arm64-arm64-xl-seattle   5 host-install(5)        broken REGR. vs. 158387
 test-arm64-arm64-xl-credit1   5 host-install(5)        broken REGR. vs. 158387
 test-arm64-arm64-xl          14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-xsm      14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-credit2  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-thunderx 14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-multivcpu 14 guest-start             fail REGR. vs. 158387
 test-armhf-armhf-xl-credit2  14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-credit1  14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-cubietruck 14 guest-start            fail REGR. vs. 158387
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 158387
 test-armhf-armhf-xl          14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-arndale  14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-libvirt-raw 13 guest-start              fail REGR. vs. 158387

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     14 guest-start              fail REGR. vs. 158387

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 158387
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 158387
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 158387
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 158387
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 158387
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 158387
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 158387
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 158387
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 158387
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                e89428970c23011a2679121c56e9f54f654c6602
baseline version:
 linux                a829146c3fdcf6d0b76d9c54556a223820f1f73b

Last test of basis   158387  2021-01-12 19:40:06 Z   23 days
Failing since        158473  2021-01-17 13:42:20 Z   18 days   30 attempts
Testing same since   158997  2021-02-04 00:10:36 Z    0 days    1 attempts

------------------------------------------------------------
353 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  broken  
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  broken  
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-arm64-arm64-xl-credit1 broken
broken-job test-arm64-arm64-xl-seattle broken
broken-step test-arm64-arm64-xl-seattle host-install(5)
broken-step test-arm64-arm64-xl-credit1 host-install(5)

Not pushing.

(No revision log; it would be 10442 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 21:08:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 21:08:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81433.150533 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7lrh-0002FO-Mh; Thu, 04 Feb 2021 21:08:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81433.150533; Thu, 04 Feb 2021 21:08:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7lrh-0002FH-HK; Thu, 04 Feb 2021 21:08:37 +0000
Received: by outflank-mailman (input) for mailman id 81433;
 Thu, 04 Feb 2021 21:08:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=x+ZK=HG=kernel.org=robh@srs-us1.protection.inumbo.net>)
 id 1l7lrg-0002FC-8o
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 21:08:36 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 58a2dd8f-8f74-4880-8c43-e9d609e1956e;
 Thu, 04 Feb 2021 21:08:35 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 0FE0064FA7
 for <xen-devel@lists.xenproject.org>; Thu,  4 Feb 2021 21:08:34 +0000 (UTC)
Received: by mail-ed1-f45.google.com with SMTP id s11so6061987edd.5
 for <xen-devel@lists.xenproject.org>; Thu, 04 Feb 2021 13:08:33 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 58a2dd8f-8f74-4880-8c43-e9d609e1956e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1612472914;
	bh=Rvdz6s2CQJMjmqrnEEDi4CeczclMMbom+GqfWo2JyxM=;
	h=References:In-Reply-To:From:Date:Subject:To:Cc:From;
	b=GIY2TLwhYOcb6foSyJDPVi87CfWi5pOA79eX1tG1Ak3kQAHdPG8ZFaZCEr+YSCm7n
	 zyUQQYUDxNveBrKBFyFsQ+aJPeAft2jPPAKgrIHqL4sfFdDrRAsv83DulZQPtsO4el
	 oUIR/g/TLQ40Jk2mGQufI8/Jk6h/jUUqeILlVw9GyxUnZHxJeB5QK0b9zRl/5jssln
	 muxRTcMKKUqJyvNZqRGFpHCSqEcHzYlEnTCo4qMWa0c6WJwE7AUBzLRbZlqH5Vz9On
	 6zwyfvV/8QYGKoCgEGKeFTGPqzelT3Tu/251tJR3Nrbs8rrkuCTiv5vGGLjTzs0q4G
	 8U0PyPhdrVZSA==
X-Gm-Message-State: AOAM532DumNdJalwmWlbPhZ0wvcDc+/L7wC4VV+IKOGtum2sorkNVZtH
	FwD/bdRLlAa8/H+AABxVPNm9QSYE39KSXqFT2Q==
X-Google-Smtp-Source: ABdhPJxVh1MljQT9BbkRlXGDfOHsEqEJCt96fK8W4gA8VjnjqA99XuPL10SBvWu8mUkCcBCsfqrH56DYYOgi72WZbEw=
X-Received: by 2002:aa7:c895:: with SMTP id p21mr465192eds.165.1612472912572;
 Thu, 04 Feb 2021 13:08:32 -0800 (PST)
MIME-Version: 1.0
References: <YBmQQ3Tzu++AadKx@mattapan.m5p.com> <a422c04c-f908-6fb6-f2de-fea7b18a6e7d@xen.org>
 <b6d342f8-c833-db88-9808-cdc946999300@xen.org> <alpine.DEB.2.21.2102021412480.29047@sstabellini-ThinkPad-T480s>
 <06d6b9ec-0db9-d6da-e30b-df9f9381157d@xen.org> <alpine.DEB.2.21.2102031315350.29047@sstabellini-ThinkPad-T480s>
 <CAJ=z9a1LsqOMFXV5GLYEkF7=akMx7fT_vpgVtT6xP6MPfmP9vQ@mail.gmail.com>
 <alpine.DEB.2.21.2102031519540.29047@sstabellini-ThinkPad-T480s>
 <9b97789b-5560-0186-642a-0501789830e5@xen.org> <alpine.DEB.2.21.2102040944520.29047@sstabellini-ThinkPad-T480s>
 <CAL_JsqJuvZPheRkacaopHtbATj8uRua=wj_XU5ib41sSpVO-ug@mail.gmail.com> <alpine.DEB.2.21.2102041228560.29047@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2102041228560.29047@sstabellini-ThinkPad-T480s>
From: Rob Herring <robh@kernel.org>
Date: Thu, 4 Feb 2021 15:08:21 -0600
X-Gmail-Original-Message-ID: <CAL_JsqKTz8J3txk9W5ekqmfON_g_TdLYsLi0YXYU3rmiyubL2A@mail.gmail.com>
Message-ID: <CAL_JsqKTz8J3txk9W5ekqmfON_g_TdLYsLi0YXYU3rmiyubL2A@mail.gmail.com>
Subject: Re: Question on PCIe Device Tree bindings, Was: [PATCH] xen/arm:
 domain_build: Ignore device nodes with invalid addresses
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall.oss@gmail.com>, Elliott Mitchell <ehem+xen@m5p.com>, 
	xen-devel <xen-devel@lists.xenproject.org>, 
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, julien@xen.org
Content-Type: text/plain; charset="UTF-8"

On Thu, Feb 4, 2021 at 2:36 PM Stefano Stabellini
<sstabellini@kernel.org> wrote:
>
> On Thu, 4 Feb 2021, Rob Herring wrote:
> > On Thu, Feb 4, 2021 at 11:56 AM Stefano Stabellini
> > <sstabellini@kernel.org> wrote:
> > >
> > > Hi Rob,
> > >
> > > We have a question on the PCIe device tree bindings. In summary, we have
> > > come across the Raspberry Pi 4 PCIe description below:
> > >
> > >
> > > pcie0: pcie@7d500000 {
> > >    compatible = "brcm,bcm2711-pcie";
> > >    reg = <0x0 0x7d500000  0x0 0x9310>;
> > >    device_type = "pci";
> > >    #address-cells = <3>;
> > >    #interrupt-cells = <1>;
> > >    #size-cells = <2>;
> > >    interrupts = <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>,
> > >                 <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>;
> > >    interrupt-names = "pcie", "msi";
> > >    interrupt-map-mask = <0x0 0x0 0x0 0x7>;
> > >    interrupt-map = <0 0 0 1 &gicv2 GIC_SPI 143 IRQ_TYPE_LEVEL_HIGH>;
> > >    msi-controller;
> > >    msi-parent = <&pcie0>;
> > >
> > >    ranges = <0x02000000 0x0 0xc0000000 0x6 0x00000000
> > >              0x0 0x40000000>;
> > >    /*
> > >     * The wrapper around the PCIe block has a bug
> > >     * preventing it from accessing beyond the first 3GB of
> > >     * memory.
> > >     */
> > >    dma-ranges = <0x02000000 0x0 0x00000000 0x0 0x00000000
> > >                  0x0 0xc0000000>;
> > >    brcm,enable-ssc;
> > >
> > >    pci@1,0 {
> > >            #address-cells = <3>;
> > >            #size-cells = <2>;
> > >            ranges;
> > >
> > >            reg = <0 0 0 0 0>;
> > >
> > >            usb@1,0 {
> > >                    reg = <0x10000 0 0 0 0>;
> > >                    resets = <&reset RASPBERRYPI_FIRMWARE_RESET_ID_USB>;
> > >            };
> > >    };
> > > };
> > >
> > >
> > > Xen fails to parse it with an error, because it tries to remap reg =
> > > <0x10000 0 0 0 0> as if it were a CPU address and, of course, it fails.
> > >
> > > Reading the device tree description in detail, I cannot tell whether Xen
> > > has a bug: the ranges property under pci@1,0 means that pci@1,0 is
> > > treated like a default bus (not a PCI bus), hence the children's reg
> > > properties are translated using the ranges property of the parent
> > > (pcie@7d500000).
> > >
> > > Is it possible that the device tree is missing device_type =
> > > "pci" under pci@1,0? Or is it just implied because pci@1,0 is a child of
> > > pcie@7d500000?
> >
> > Indeed, it should have device_type. Linux (only recently due to
> > another missing device_type case) will also look at node name, but
> > only 'pcie'.
> >
> > We should be able to create a schema (or extend pci-bus.yaml) to catch
> > this case.
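
Concretely, if the missing property is indeed the issue, the fix for the
bridge node from the snippet above would be to add device_type (a sketch,
assuming no other changes are needed):

```dts
pci@1,0 {
        device_type = "pci";    /* added: marks the bridge as a PCI bus */
        #address-cells = <3>;
        #size-cells = <2>;
        ranges;

        reg = <0 0 0 0 0>;

        usb@1,0 {
                reg = <0x10000 0 0 0 0>;
                resets = <&reset RASPBERRYPI_FIRMWARE_RESET_ID_USB>;
        };
};
```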
>
> Ah, that is what I needed to know, thank you!  Does Linux consider a
> node named "pcie" as if it had device_type = "pci"?

Yes, it was added for Rockchip RK3399 to avoid a DT update and regression.

> In Xen, also to cover the RPi4 case, maybe I could add a check for the
> node name being "pci" or "pcie", and if so Xen could assume device_type =
> "pci".

I assume this never worked for RPi4 (and Linux will have the same
issue), so can't we just update the DT in this case?

Rob


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 21:23:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 21:23:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81437.150552 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7m6L-0004NK-3w; Thu, 04 Feb 2021 21:23:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81437.150552; Thu, 04 Feb 2021 21:23:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7m6L-0004ND-0c; Thu, 04 Feb 2021 21:23:45 +0000
Received: by outflank-mailman (input) for mailman id 81437;
 Thu, 04 Feb 2021 21:23:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KAxh=HG=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l7m6J-0004N8-Po
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 21:23:44 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 226ca2ba-08de-4096-afad-1d2d230f74de;
 Thu, 04 Feb 2021 21:23:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 226ca2ba-08de-4096-afad-1d2d230f74de
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612473822;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=E6QeFw8xwAcE5+K8wHwb/uaOF+w+rMLsifm2BpSDz7s=;
  b=He2Q2Q52G3IaoWjj+qX9gYuFJyr1P3Wl8vJKuo9QQWFrv6FhV4vRIKd7
   7ZFmkc4fH1yo/Yg4GpCh6ZCkqOXm9ixNcUzMX5roFRkLQV7xiaHzsNuEV
   g28Q1xlxOyM1GfOcoi+BnDxNYOOgCHYfknpnNPgEbFzfPScLJh2EHPbI/
   E=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: jaGDq4Xg0cAu1SqeIXRqTQOSGy8qqt/Ctuke8NqtZ+scDLh/DTlUmqiuPb2TEILbakeW34sxT7
 hCTFB59qeeMn+sRpZ1nTryBmM2TooYX0QgZQgQvCzjvakgcC+7QZTHwrY/hR1xauWaeFYu8C0a
 ycbG38jKK93Wo+I0Mryb6XYPZOo5RyzwVMKvpuGIjKOnGqOTv0ptImhWQvfa5U0pHsqE3kWDO9
 +tShPugQaNxkcIBz9hIEMORGqC9CKAZ+B6jNcOHJyeIx+RmlDFNJa1haF+OSXXsTeNzGrIcv73
 t40=
X-SBRS: 5.2
X-MesageID: 36547878
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,153,1610427600"; 
   d="scan'208";a="36547878"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=oQiH8ImkkrAOPHslBF0GX9P1w5RbTiM1ZFAMYh52HIQ1rFe1QUQRLYWHOWx9HgmSTH6pn4+S38N/IIAzh/w69dDbDct+sbWnLSuW06Nkt1RBJ3CAi8weoiRR7cMmb6MxeDyLXWGEh+Mv6QKlC6eO+cUYdL4KfgpYK1r2DP4fveS9BQB61PYk3Bvc6BV+mbH7ye9CXDMoyzMSVzw0/R5lj1+sBJ9zE80o+C+dGtZtzatzf95cAdq1jkRITSvCLVTlHPBKd7bRhbEjA8sZgAwJ3oJc67i7VOSze3EO1DNqMw1h3bSCeuo9uYUwMOOaoHvi13XNDJNelwfchjgMPqDHFw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=VMv3LYQK/0CHBXpc5n3kLk7zi6yi7umBpDakIZDZya8=;
 b=OvPCLXVNFHGctWmoECrE5tEp/asAGKuzwfu7p+M9b1cIOjkxQaVEhk8Wtb5J2eXk2EfUVe1l0Bj9YplyLHCEIhrObzMDHRVmlYpknSHNyTzsMlh2/JveBDL5qiq70bQWo15npjgdb+LOHex8jAwFk7iNAnePt78m2lRVP2DyWAoY1YUx9mB/yCkVImMrNhvQOlWspCjw1Ov9QLPhW63PjhMBl/yh/ah82u5m37agk7DsGm/nwjQn2J9Khdv0k2SBu035pA8mx4PaEGLdXNBzBm7mq2z1JLuRZFDkT4pe41mtTa/0fZ6kjmKhymgCYDvqcFYyVb1Q1qby+Y+8OVypSQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=VMv3LYQK/0CHBXpc5n3kLk7zi6yi7umBpDakIZDZya8=;
 b=IXJET+s3zxVxMZziMFhnfTl7Haz3CctbRNbny85PSvPtVo18wuP+YFVVWZF3JcK9A0nb4ncTcMG7up5ydNJQxg5bYPwQhe+0aJ/5u2e+4JFXGFTZNK46xlFQK2kgxHlGHyT3zat7e6E2QxcBEHGJ4PawHMMfrceW01o2iyIAVLs=
Subject: Re: [PATCH v9 01/11] xen/memory: Fix mapping grant tables with
 XENMEM_acquire_resource
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Jan Beulich <JBeulich@suse.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Julien Grall
	<julien@xen.org>, Paul Durrant <paul@xen.org>,
	=?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>, Hubert
 Jasudowicz <hubert.jasudowicz@cert.pl>, Tamas K Lengyel <tamas@tklengyel.com>
References: <20210201232703.29275-1-andrew.cooper3@citrix.com>
 <20210201232703.29275-2-andrew.cooper3@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <86f9845f-f0a5-93c7-0703-c3a51d50febc@citrix.com>
Date: Thu, 4 Feb 2021 21:23:31 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
In-Reply-To: <20210201232703.29275-2-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0423.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:18b::14) To BYAPR03MB4728.namprd03.prod.outlook.com
 (2603:10b6:a03:13a::24)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 038c8a9e-76da-49ff-1b39-08d8c9532311
X-MS-TrafficTypeDiagnostic: SJ0PR03MB5646:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <SJ0PR03MB5646A8F6E7873EA1DF51DB16BAB39@SJ0PR03MB5646.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: Uep/unc3UYteG5qYV6jM/xXGxsR04GucYayIJ45Lbj4002HgQ5um9MSe/pSKbMgJ1V8Ey5bVw2JEZIff+vLKCz42aa9FW9Da5+9AuxlhxGSHiRUhFo5gEqXzCgexaWD94CdwU2uVp4nzUEBU2bbf4l4XgCVmf45W3Aac5l2wxllB0oG5J+ax4swQTR4awmvYhx5eLFq3bvqpVbz6yCDpO2yB6DWSMxIFDb3+Q4pOEQwfAollp5yBntPSL0zFGMaPDGiCj67Mli0WfPE+VCDrr0IbwQ/I/vMBSQIaOQFYtTKuL50HaFrXCB84DnbTZXXX5MiATRzMBsRp/XY8z6AOOpl4mrgStTFloBjQpNHSkjbL7EdIZlaGV+iObUIQkEnYQmvEHrWcgPq5QmrWTBQCqEX7Z0mWdgxr58uBdVDYt6rdW7/ac6ijvM3DqaLW7wfu0bzhc8pj83S+jAjeww91Xq19pWKUR+Xu+WVxe7qgdg5AfxKsWnPe0iNholxipQzJhoezx7yXfITJlZbzmJOtzXFihVBdZx+e/Cg0Xuprk8fm7FS0LedZug0DJ03dOBW4HKAPHVVDVpM1E1RpuF5WmcX9P2TbuRA9ba1GXduH67Y=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB4728.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39860400002)(396003)(346002)(376002)(136003)(366004)(6666004)(86362001)(956004)(66476007)(31686004)(16576012)(8676002)(4326008)(66946007)(478600001)(316002)(2616005)(6916009)(53546011)(7416002)(8936002)(83380400001)(2906002)(66574015)(5660300002)(31696002)(66556008)(6486002)(16526019)(26005)(186003)(36756003)(54906003)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?MGYrRklEWWpZdUdmM01VWlFNa0MrMnJjOURWdVE5ZmJwTnltNlhmbGN1Wjhs?=
 =?utf-8?B?bDFCM0s1MHVMSTR3aWJZd2lmbjVIcUxydVlwNkFxbktENHdVTXJNQzNHdjZG?=
 =?utf-8?B?UFFmTGVNRzcyeGQyNFVFaDNkZ2ZXWkxwWHZ1T1dNbFVRaDZ0Q3c1cExJTjYy?=
 =?utf-8?B?THo3Ty9NY3JQVzdGWUdJUWNjMGxOTndlaW94Q2tSMWlqODAyelRaZXRXeG9u?=
 =?utf-8?B?R1lMelNRcmFaQjhrT2VucVM5bFRRbWo0UHgxZFZqazFHeDhOUTVXQWwxYmZl?=
 =?utf-8?B?VWMyUHEwaS9RQ1NUSWJkODh2Q2t2a0NZNm8zaFpCcCtqaE1ET0FTVkVBbG9l?=
 =?utf-8?B?SDBzcnNMZDU5VVhYRXExYk9kbGUvK1JQTEhrT1kyblY3OEpWZEJla0svUmNH?=
 =?utf-8?B?dExQZkhsMmJvcWlzeDZpeW12YWdjTWQzV05SbDJYaXFUcVRSZ010VUF3dVNq?=
 =?utf-8?B?Y2dVY2gwMkZyWTBOSEVYL215Nkw2bkdtN3h6SlZnejdhdTZQQjRkanRUR0pu?=
 =?utf-8?B?bm8yN1A1R0Z3K1VHbk1YNVpQWTdYcTBlM0xNOGRXdlhHSFZScVkvdUwrdW1l?=
 =?utf-8?B?K2JwUFdjZ0JCZ2ZXa1E0a2ZsSVIxZS8xZnBVaFBSN1haREFCc0Z5OUJtbWxs?=
 =?utf-8?B?bWg1V0p4cEUwQk16d3NaazhuQTk0blVrNHg3NmRYRUxNYW5GZnRVNkp2TUlF?=
 =?utf-8?B?QUtjRUU3ZUp3VkIzZDhoZEF5UHVxbXloWHNyMVJObW1MVkZnL0ZWMDBKSlZu?=
 =?utf-8?B?RU1KTWxCbVdIY0FnOGxBbW1aWlRPeUVuS2hpYzM3Mzd3RWFBc215WXloMlVG?=
 =?utf-8?B?cHdhVEFOazdEdU00RjZiUlBwWVF6a1BXaitoL0xUMkxqcjBFYjdpVlYvUCsx?=
 =?utf-8?B?a1hqR3BiVXkxT29ONXZ4YzZVWUtUd2gyZGpibmoxYlFsa3p4dndEbDdvdTNU?=
 =?utf-8?B?SUIwZWlwNDVlQmhSdEY5R3kwZFp3WERpeFFTUm9uMDdsR2l1dEYyWi95WlBS?=
 =?utf-8?B?Vk5mdGZHTCsxT21TcmdUS2JkYUhIQXFJNVMrL0JOcVJZamlJMStYOGxCYlN1?=
 =?utf-8?B?NzdKUnhwdFBHY1NRVXNpOWVldUlqbWtOSC9yb3ZVcDRWT3dNYnhSU1RVM0R2?=
 =?utf-8?B?SzI1QjhySFF6NzNOcGhVOUd1c0dHVUhMeURuM2JXUE0rUlIrR1JQSWZHZytv?=
 =?utf-8?B?SHdZcEl4bmRxc0NjWDdrMzIwZ2kwMHY2emhxUk9Tb1d5TXd6V2JuT2MzSDdC?=
 =?utf-8?B?TzVsTHpBeUNnd2ZjZVg1WDZCcE5ScDhoWlRNdFI2QXlNNi8wUS9yb1FkRDBY?=
 =?utf-8?B?MExadnk2S0MzaXF3Q1J6SE4rWE9kNHAzZGx2aFdwUG1YUlkzd2lSV1dZVjRF?=
 =?utf-8?B?UkE4VG9GT2NDRXFzbzRVVmRHTW5GSUplbTJWZk1MUzFqZk8zUnN0d1IwTHlW?=
 =?utf-8?Q?28FFhLoP?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 038c8a9e-76da-49ff-1b39-08d8c9532311
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB4728.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Feb 2021 21:23:38.2406
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: mLeQND0luqiMwRMMJQPCwoY+rBQbxp9xMp5VSiKb6lL1A3twmmiNDL817o/vbz3PowV96Fla6LNxL+vpOSHtB9Tvt+t7unG+hMWbUnBqv4c=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5646
X-OriginatorOrg: citrix.com

On 01/02/2021 23:26, Andrew Cooper wrote:
> A guest's default number of grant frames is 64, and XENMEM_acquire_resource
> will reject an attempt to map more than 32 frames.  This limit is caused by
> the size of mfn_list[] on the stack.
>
> Fix mapping of arbitrary size requests by looping over batches of 32 in
> acquire_resource(), and using hypercall continuations when necessary.
>
> To start with, break _acquire_resource() out of acquire_resource() to cope
> with type-specific dispatching, and update the return semantics to indicate
> the number of mfns returned.  Update gnttab_acquire_resource() and x86's
> arch_acquire_resource() to match these new semantics.
>
> Have do_memory_op() pass start_extent into acquire_resource() so it can pick
> up where it left off after a continuation, and loop over batches of 32 until
> all the work is done, or a continuation needs to occur.
>
> compat_memory_op() is a bit more complicated, because it also has to marshal
> frame_list in the XLAT buffer.  Have it account for continuation information
> itself and hide details from the upper layer, so it can marshal the buffer in
> chunks if necessary.
>
> With these fixes in place, it is now possible to map the whole grant table for
> a guest.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
> CC: George Dunlap <George.Dunlap@eu.citrix.com>
> CC: Ian Jackson <iwj@xenproject.org>
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Wei Liu <wl@xen.org>
> CC: Julien Grall <julien@xen.org>
> CC: Paul Durrant <paul@xen.org>
> CC: Michał Leszczyński <michal.leszczynski@cert.pl>
> CC: Hubert Jasudowicz <hubert.jasudowicz@cert.pl>
> CC: Tamas K Lengyel <tamas@tklengyel.com>
>
> v9:
>  * Crash domain rather than returning late with -ERANGE/-EFAULT.
>
> v8:
>  * nat => cmp change in the start_extent check.
>  * Rebase over 'frame' and ARM/IOREQ series.
>
> v3:
>  * Spelling fixes
> ---
>  xen/common/compat/memory.c | 114 +++++++++++++++++++++++++++++++++--------
>  xen/common/grant_table.c   |   3 ++
>  xen/common/memory.c        | 124 +++++++++++++++++++++++++++++++++------------
>  3 files changed, 187 insertions(+), 54 deletions(-)

Attempt at release-ack paperwork.

This is a bugfix for an issue which doesn't manifest with the in-tree
default callers, but does manifest when using the
xenforeignmemory_map_resource() interface in the expected manner.

The hypercall is made up of a metadata structure and an array of frames.
The bug is that Xen only tolerates a maximum of 32 frames, and the
bugfix is to accept an arbitrary number of frames.


What can go wrong (other than the theoretical base case of everything,
seeing as we're talking about C in system context)?

The bugfix is basically "do { chunk_of_32(); } while ( !done );", so
we're adding an extra loop into the hypervisor.  We could fail to
terminate the loop (a possible livelock in the hypervisor), or we could
incorrectly marshal the buffer (the guest kernel might receive junk
instead of the mapping it expected).

The majority of the complexity actually comes from the fact that there
are two nested loops: one in the compat layer doing 32=>64 (and back)
marshalling, and one in the main layer, looping over chunks of 32
frames.  Therefore, the same risks apply at both layers.

I am certain the code is not bug-free.  The compat layer here is
practically impossible to follow, and has (self-inflicted) patterns
where we have to crash the guest rather than raise a clean failure, due
to an inability to unwind the fact that the upper layer decided to issue
a continuation.

There is also one bit where I literally had to give up, and put this
logic in:
> +            /*
> +             * Well... Somethings gone wrong with the two levels of chunking.
> +             * My condolences to whomever next has to debug this mess.
> +             */
> +            ASSERT_UNREACHABLE();
> +            domain_crash(current->domain);
> +            split = 0;
>              break;

Mitigations to these risks are thus:

* Explicit use of failsafe coding patterns, which break out of the loops
and pass -EINVAL back to the caller, or crash the domain when we can't
figure out how to pass an error back safely.

* This codepath gets used multiple times on every single VM boot, so
will get ample testing from the in-tree caller point of view, as soon as
OSSTest starts running.

* The IPT series (which discovered this mess to start with) shows that,
in addition to the in-tree paths working, the >32 frame mappings appear
to work correctly.

* An in-tree unit test exercising this codepath in a way which
demonstrates this bug.  Further work is planned for this test.

* Some incredibly invasive Xen+XTF testing to prove the correctness of
the marshalling.  Not suitable for committing, but available for
inspection/query.  In particular, this covers aspects of the logic which
won't get any practical testing elsewhere.


Overall, if there are bugs, they're very likely to be spotted by OSSTest
in short order.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 21:33:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 21:33:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81442.150572 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7mFb-0005XP-9Y; Thu, 04 Feb 2021 21:33:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81442.150572; Thu, 04 Feb 2021 21:33:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7mFb-0005XI-6T; Thu, 04 Feb 2021 21:33:19 +0000
Received: by outflank-mailman (input) for mailman id 81442;
 Thu, 04 Feb 2021 21:33:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=G7BZ=HG=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1l7mFZ-0005X7-Ng
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 21:33:17 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c7dc51cf-e00c-421d-a16b-5c5c0e6a71a2;
 Thu, 04 Feb 2021 21:33:17 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id E680564FA7;
 Thu,  4 Feb 2021 21:33:15 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c7dc51cf-e00c-421d-a16b-5c5c0e6a71a2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1612474396;
	bh=jmj08TIm0grS0knSKsUh67sTTwFFaEtBO6MV6j+TXiw=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=GSmpTvqSt0YKRHqQXmEd7W4UXXaQyXBj+ANb0f8TXqPzho/F+RsF9whg6790wKkqn
	 uVbeUoTA5DlB9zt9HP6fVGWiqpi5u3ezpkVqxJHyJBTQyjnd/FWBKiRXwbYc2/T++s
	 QlOWYN+ZHmDQoEFZ6jkfaCpOynbgOI7avHIGLi+WRH5xLB2m3l2QY67RW1wtdit3Rh
	 3HJXYMMre2DWva4RBA+5aJ5noLZu8lBK9ZOTT/pPWz3YM0n4jpm3UH00Jzz7lM2O4u
	 YYWL8IL7BRU8k0THZGLsDdqEk5r+xZJBNsTDUE94eyGtBco+90KASWRIdPvfili60Q
	 PLBG7XJJ8rbpw==
Date: Thu, 4 Feb 2021 13:33:15 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Rob Herring <robh@kernel.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Julien Grall <julien.grall.oss@gmail.com>, 
    Elliott Mitchell <ehem+xen@m5p.com>, 
    xen-devel <xen-devel@lists.xenproject.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, julien@xen.org
Subject: Re: Question on PCIe Device Tree bindings, Was: [PATCH] xen/arm:
 domain_build: Ignore device nodes with invalid addresses
In-Reply-To: <CAL_JsqKTz8J3txk9W5ekqmfON_g_TdLYsLi0YXYU3rmiyubL2A@mail.gmail.com>
Message-ID: <alpine.DEB.2.21.2102041309430.29047@sstabellini-ThinkPad-T480s>
References: <YBmQQ3Tzu++AadKx@mattapan.m5p.com> <a422c04c-f908-6fb6-f2de-fea7b18a6e7d@xen.org> <b6d342f8-c833-db88-9808-cdc946999300@xen.org> <alpine.DEB.2.21.2102021412480.29047@sstabellini-ThinkPad-T480s> <06d6b9ec-0db9-d6da-e30b-df9f9381157d@xen.org>
 <alpine.DEB.2.21.2102031315350.29047@sstabellini-ThinkPad-T480s> <CAJ=z9a1LsqOMFXV5GLYEkF7=akMx7fT_vpgVtT6xP6MPfmP9vQ@mail.gmail.com> <alpine.DEB.2.21.2102031519540.29047@sstabellini-ThinkPad-T480s> <9b97789b-5560-0186-642a-0501789830e5@xen.org>
 <alpine.DEB.2.21.2102040944520.29047@sstabellini-ThinkPad-T480s> <CAL_JsqJuvZPheRkacaopHtbATj8uRua=wj_XU5ib41sSpVO-ug@mail.gmail.com> <alpine.DEB.2.21.2102041228560.29047@sstabellini-ThinkPad-T480s>
 <CAL_JsqKTz8J3txk9W5ekqmfON_g_TdLYsLi0YXYU3rmiyubL2A@mail.gmail.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 4 Feb 2021, Rob Herring wrote:
> On Thu, Feb 4, 2021 at 2:36 PM Stefano Stabellini
> <sstabellini@kernel.org> wrote:
> >
> > On Thu, 4 Feb 2021, Rob Herring wrote:
> > > On Thu, Feb 4, 2021 at 11:56 AM Stefano Stabellini
> > > <sstabellini@kernel.org> wrote:
> > > >
> > > > Hi Rob,
> > > >
> > > > We have a question on the PCIe device tree bindings. In summary, we have
> > > > come across the Raspberry Pi 4 PCIe description below:
> > > >
> > > >
> > > > pcie0: pcie@7d500000 {
> > > >    compatible = "brcm,bcm2711-pcie";
> > > >    reg = <0x0 0x7d500000  0x0 0x9310>;
> > > >    device_type = "pci";
> > > >    #address-cells = <3>;
> > > >    #interrupt-cells = <1>;
> > > >    #size-cells = <2>;
> > > >    interrupts = <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>,
> > > >                 <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>;
> > > >    interrupt-names = "pcie", "msi";
> > > >    interrupt-map-mask = <0x0 0x0 0x0 0x7>;
> > > >    interrupt-map = <0 0 0 1 &gicv2 GIC_SPI 143
> > > >                                                      IRQ_TYPE_LEVEL_HIGH>;
> > > >    msi-controller;
> > > >    msi-parent = <&pcie0>;
> > > >
> > > >    ranges = <0x02000000 0x0 0xc0000000 0x6 0x00000000
> > > >              0x0 0x40000000>;
> > > >    /*
> > > >     * The wrapper around the PCIe block has a bug
> > > >     * preventing it from accessing beyond the first 3GB of
> > > >     * memory.
> > > >     */
> > > >    dma-ranges = <0x02000000 0x0 0x00000000 0x0 0x00000000
> > > >                  0x0 0xc0000000>;
> > > >    brcm,enable-ssc;
> > > >
> > > >    pci@1,0 {
> > > >            #address-cells = <3>;
> > > >            #size-cells = <2>;
> > > >            ranges;
> > > >
> > > >            reg = <0 0 0 0 0>;
> > > >
> > > >            usb@1,0 {
> > > >                    reg = <0x10000 0 0 0 0>;
> > > >                    resets = <&reset RASPBERRYPI_FIRMWARE_RESET_ID_USB>;
> > > >            };
> > > >    };
> > > > };
> > > >
> > > >
> > > > Xen fails to parse it with an error because it tries to remap reg =
> > > > <0x10000 0 0 0 0> as if it was a CPU address and of course it fails.
> > > >
> > > > Reading the device tree description in details, I cannot tell if Xen has
> > > > a bug: the ranges property under pci@1,0 means that pci@1,0 is treated
> > > > like a default bus (not a PCI bus), hence, the children regs are
> > > > translated using the ranges property of the parent (pcie@7d500000).
> > > >
> > > > Is it possible that the device tree is missing device_type =
> > > > "pci" under pci@1,0? Or is it just implied because pci@1,0 is a child of
> > > > pcie@7d500000?
> > >
> > > Indeed, it should have device_type. Linux (only recently due to
> > > another missing device_type case) will also look at node name, but
> > > only 'pcie'.
> > >
> > > We should be able to create (or extend pci-bus.yaml) a schema to catch
> > > this case.
> >
> > Ah, that is what I needed to know, thank you!  Is Linux considering a
> > node named "pcie" as if it has device_type = "pci"?
> 
> Yes, it was added for Rockchip RK3399 to avoid a DT update and regression.
> 
> > In Xen, also to cover the RPi4 case, maybe I could add a check for the
> > node name to be "pci" or "pcie" and if so Xen could assume device_type =
> > "pci".
> 
> I assume this never worked for RPi4 (and Linux will have the same
> issue), so can't we just update the DT in this case?

I am not sure where the DT is coming from, probably from the RPi4 kernel
trees or firmware. I think it would be good if somebody got in touch to
tell them they have an issue.

Elliott, where was that device tree coming from originally?


From a Xen perspective, for the sake of minimizing user pain (given
that it might take a while to update those DTs) and to introduce as few
ties as possible with kernel versions, it might be best to add the
"pci" name workaround, perhaps with a /* HACK */ comment on top.


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 21:52:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 21:52:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81450.150596 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7mYO-0007mY-36; Thu, 04 Feb 2021 21:52:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81450.150596; Thu, 04 Feb 2021 21:52:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7mYN-0007mR-Vy; Thu, 04 Feb 2021 21:52:43 +0000
Received: by outflank-mailman (input) for mailman id 81450;
 Thu, 04 Feb 2021 21:52:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=x+ZK=HG=kernel.org=robh@srs-us1.protection.inumbo.net>)
 id 1l7mYM-0007mM-2M
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 21:52:42 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 725cb6a7-0189-4698-8235-552ec516869f;
 Thu, 04 Feb 2021 21:52:41 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 1575C64FA0
 for <xen-devel@lists.xenproject.org>; Thu,  4 Feb 2021 21:52:40 +0000 (UTC)
Received: by mail-ej1-f50.google.com with SMTP id i8so8163926ejc.7
 for <xen-devel@lists.xenproject.org>; Thu, 04 Feb 2021 13:52:40 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 725cb6a7-0189-4698-8235-552ec516869f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1612475560;
	bh=CA6RZ5cZsWmFr6tnDtX1gfiPU9iDcd+i935vjkJp2io=;
	h=References:In-Reply-To:From:Date:Subject:To:Cc:From;
	b=vAU78h1ADWrcT9HDMa2RDBNIBCVnJefNWsj+7Fdx4uzjSIe9BfUwsF9hXOUbEUcLB
	 YpijZ+LHNEIpk4tnm+Tq25TGnYbxkbi6/POc8kn+mGckTxs2bv18BEZnef5oaAwiL2
	 2AnXkyDw5vYHgUZNUg39rIEL6U0cS3IPhFTDatUXbyIKa1fVEvETpqIQqrKwTN+Nla
	 UCSjguKDNeaXjdOXPfoQH3BnZm+GtG/+L/PGlbnvkcd6sW8NJ3L/TgKbui0QTCL8fr
	 ktBl2DU8sn6Z4zuYo7Vhy9pcDJrLpDApvyYeQZ4w/PC3KaNelTXtW7H32KfUcAPZWn
	 qBpBeo74wZsWQ==
X-Gm-Message-State: AOAM533/NlAidBWYgGgbCo2cJo0ysZEsW9fHEEujuMk9HVM70AAHgAnd
	8E89w6HzbsFSUhCdpw/tObTA8U49uGr00CNMjQ==
X-Google-Smtp-Source: ABdhPJyJOl5DBjCX9pBRmiwHlyIHxZYHXS14HqHDUjfknEaVuT+HKQjxh0VQ/dx+TbjCRvLzC3BcUXWc7pGCLbulytg=
X-Received: by 2002:a17:906:c9d8:: with SMTP id hk24mr1138761ejb.468.1612475557780;
 Thu, 04 Feb 2021 13:52:37 -0800 (PST)
MIME-Version: 1.0
References: <YBmQQ3Tzu++AadKx@mattapan.m5p.com> <a422c04c-f908-6fb6-f2de-fea7b18a6e7d@xen.org>
 <b6d342f8-c833-db88-9808-cdc946999300@xen.org> <alpine.DEB.2.21.2102021412480.29047@sstabellini-ThinkPad-T480s>
 <06d6b9ec-0db9-d6da-e30b-df9f9381157d@xen.org> <alpine.DEB.2.21.2102031315350.29047@sstabellini-ThinkPad-T480s>
 <CAJ=z9a1LsqOMFXV5GLYEkF7=akMx7fT_vpgVtT6xP6MPfmP9vQ@mail.gmail.com>
 <alpine.DEB.2.21.2102031519540.29047@sstabellini-ThinkPad-T480s>
 <9b97789b-5560-0186-642a-0501789830e5@xen.org> <alpine.DEB.2.21.2102040944520.29047@sstabellini-ThinkPad-T480s>
 <CAL_JsqJuvZPheRkacaopHtbATj8uRua=wj_XU5ib41sSpVO-ug@mail.gmail.com>
 <alpine.DEB.2.21.2102041228560.29047@sstabellini-ThinkPad-T480s>
 <CAL_JsqKTz8J3txk9W5ekqmfON_g_TdLYsLi0YXYU3rmiyubL2A@mail.gmail.com> <alpine.DEB.2.21.2102041309430.29047@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2102041309430.29047@sstabellini-ThinkPad-T480s>
From: Rob Herring <robh@kernel.org>
Date: Thu, 4 Feb 2021 15:52:26 -0600
X-Gmail-Original-Message-ID: <CAL_Jsq+cedzG5NBfLRub=msZEK6umBrk-O7FYB=Dk34=k9fuCA@mail.gmail.com>
Message-ID: <CAL_Jsq+cedzG5NBfLRub=msZEK6umBrk-O7FYB=Dk34=k9fuCA@mail.gmail.com>
Subject: Re: Question on PCIe Device Tree bindings, Was: [PATCH] xen/arm:
 domain_build: Ignore device nodes with invalid addresses
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall.oss@gmail.com>, Elliott Mitchell <ehem+xen@m5p.com>, 
	xen-devel <xen-devel@lists.xenproject.org>, 
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, julien@xen.org
Content-Type: text/plain; charset="UTF-8"

On Thu, Feb 4, 2021 at 3:33 PM Stefano Stabellini
<sstabellini@kernel.org> wrote:
>
> On Thu, 4 Feb 2021, Rob Herring wrote:
> > On Thu, Feb 4, 2021 at 2:36 PM Stefano Stabellini
> > <sstabellini@kernel.org> wrote:
> > >
> > > On Thu, 4 Feb 2021, Rob Herring wrote:
> > > > On Thu, Feb 4, 2021 at 11:56 AM Stefano Stabellini
> > > > <sstabellini@kernel.org> wrote:
> > > > >
> > > > > Hi Rob,
> > > > >
> > > > > We have a question on the PCIe device tree bindings. In summary, we have
> > > > > come across the Raspberry Pi 4 PCIe description below:
> > > > >
> > > > >
> > > > > pcie0: pcie@7d500000 {
> > > > >    compatible = "brcm,bcm2711-pcie";
> > > > >    reg = <0x0 0x7d500000  0x0 0x9310>;
> > > > >    device_type = "pci";
> > > > >    #address-cells = <3>;
> > > > >    #interrupt-cells = <1>;
> > > > >    #size-cells = <2>;
> > > > >    interrupts = <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>,
> > > > >                 <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>;
> > > > >    interrupt-names = "pcie", "msi";
> > > > >    interrupt-map-mask = <0x0 0x0 0x0 0x7>;
> > > > >    interrupt-map = <0 0 0 1 &gicv2 GIC_SPI 143
> > > > >                                                      IRQ_TYPE_LEVEL_HIGH>;
> > > > >    msi-controller;
> > > > >    msi-parent = <&pcie0>;
> > > > >
> > > > >    ranges = <0x02000000 0x0 0xc0000000 0x6 0x00000000
> > > > >              0x0 0x40000000>;
> > > > >    /*
> > > > >     * The wrapper around the PCIe block has a bug
> > > > >     * preventing it from accessing beyond the first 3GB of
> > > > >     * memory.
> > > > >     */
> > > > >    dma-ranges = <0x02000000 0x0 0x00000000 0x0 0x00000000
> > > > >                  0x0 0xc0000000>;
> > > > >    brcm,enable-ssc;
> > > > >
> > > > >    pci@1,0 {
> > > > >            #address-cells = <3>;
> > > > >            #size-cells = <2>;
> > > > >            ranges;
> > > > >
> > > > >            reg = <0 0 0 0 0>;
> > > > >
> > > > >            usb@1,0 {
> > > > >                    reg = <0x10000 0 0 0 0>;
> > > > >                    resets = <&reset RASPBERRYPI_FIRMWARE_RESET_ID_USB>;
> > > > >            };
> > > > >    };
> > > > > };
> > > > >
> > > > >
> > > > > Xen fails to parse it with an error because it tries to remap reg =
> > > > > <0x10000 0 0 0 0> as if it was a CPU address and of course it fails.
> > > > >
> > > > > Reading the device tree description in details, I cannot tell if Xen has
> > > > > a bug: the ranges property under pci@1,0 means that pci@1,0 is treated
> > > > > like a default bus (not a PCI bus), hence, the children regs are
> > > > > translated using the ranges property of the parent (pcie@7d500000).
> > > > >
> > > > > Is it possible that the device tree is missing device_type =
> > > > > "pci" under pci@1,0? Or is it just implied because pci@1,0 is a child of
> > > > > pcie@7d500000?
> > > >
> > > > Indeed, it should have device_type. Linux (only recently due to
> > > > another missing device_type case) will also look at node name, but
> > > > only 'pcie'.
> > > >
> > > > We should be able to create (or extend pci-bus.yaml) a schema to catch
> > > > this case.
> > >
> > > Ah, that is what I needed to know, thank you!  Is Linux considering a
> > > node named "pcie" as if it has device_type = "pci"?
> >
> > Yes, it was added for Rockchip RK3399 to avoid a DT update and regression.
> >
> > > In Xen, also to cover the RPi4 case, maybe I could add a check for the
> > > node name to be "pci" or "pcie" and if so Xen could assume device_type =
> > > "pci".
> >
> > I assume this never worked for RPi4 (and Linux will have the same
> > issue), so can't we just update the DT in this case?
>
> I am not sure where the DT is coming from, probably from the RPi4 kernel
> trees or firmware. I think it would be good if somebody got in touch to
> tell them they have an issue.

So you just take whatever downstream RPi invents? Good luck.

> Elliot, where was that device tree coming from originally?
>
>
> From a Xen perspective, for the sake of minimizing user pains (given
> that it might take a while to update those DTs) and to introduce as few
> ties as possible with kernel versions, it might be best to add the
> "pci" name workaround maybe with a /* HACK */ comment on top.

There is some possibility of that causing a regression on another
platform. That's why we limited Linux as much as possible and also
print a warning. But we have to worry about 20 year old PowerMacs and
such.

Rob


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 22:03:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 22:03:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81453.150607 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7mi8-0000Vt-2J; Thu, 04 Feb 2021 22:02:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81453.150607; Thu, 04 Feb 2021 22:02:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7mi7-0000Vm-Vj; Thu, 04 Feb 2021 22:02:47 +0000
Received: by outflank-mailman (input) for mailman id 81453;
 Thu, 04 Feb 2021 22:02:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=G7BZ=HG=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1l7mi7-0000Vh-J7
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 22:02:47 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fa9156d9-ddf9-4dd5-8528-ba17f9e695c0;
 Thu, 04 Feb 2021 22:02:46 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id A58AB64E4A;
 Thu,  4 Feb 2021 22:02:45 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fa9156d9-ddf9-4dd5-8528-ba17f9e695c0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1612476166;
	bh=cWpf/sl7YbyXMwRUWefx2vK72Rmt9Pl/ONyCk2nEOfw=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=NLGe0BQfXWA9t2m3IDHzOu/yh3P/E4+OQdrH28O3rMIoQCMMN3UGZ1LxvLwl0zJXA
	 K3GbN4iIv/x5C93bNuL2JHzSGq10UyPgFCTe3Ime4VbkRJXobNWwpY2TGkOHdkAVeZ
	 GDUvG9T4nvGkbu4D9W0VH1p5CkXu8SMzhYYT5gEEkxHtMTzP1Pmy96RhNGUtvbAPzb
	 pzrYvNyoGaleJ/8KU5UfdkBK9HesrLDml9w4jaPme8y45rYzDe83TSQLz9oxUOtja2
	 PKZVTCAnVgsJImOoKe6f8FAksC78XcfvMQG9tLJ6UzWuhW3oCkL0F4anyfHh6v3ksn
	 Q9uHhpfXyZRKA==
Date: Thu, 4 Feb 2021 14:02:44 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Rob Herring <robh@kernel.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Julien Grall <julien.grall.oss@gmail.com>, 
    Elliott Mitchell <ehem+xen@m5p.com>, 
    xen-devel <xen-devel@lists.xenproject.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, julien@xen.org
Subject: Re: Question on PCIe Device Tree bindings, Was: [PATCH] xen/arm:
 domain_build: Ignore device nodes with invalid addresses
In-Reply-To: <CAL_Jsq+cedzG5NBfLRub=msZEK6umBrk-O7FYB=Dk34=k9fuCA@mail.gmail.com>
Message-ID: <alpine.DEB.2.21.2102041359340.29047@sstabellini-ThinkPad-T480s>
References: <YBmQQ3Tzu++AadKx@mattapan.m5p.com> <a422c04c-f908-6fb6-f2de-fea7b18a6e7d@xen.org> <b6d342f8-c833-db88-9808-cdc946999300@xen.org> <alpine.DEB.2.21.2102021412480.29047@sstabellini-ThinkPad-T480s> <06d6b9ec-0db9-d6da-e30b-df9f9381157d@xen.org>
 <alpine.DEB.2.21.2102031315350.29047@sstabellini-ThinkPad-T480s> <CAJ=z9a1LsqOMFXV5GLYEkF7=akMx7fT_vpgVtT6xP6MPfmP9vQ@mail.gmail.com> <alpine.DEB.2.21.2102031519540.29047@sstabellini-ThinkPad-T480s> <9b97789b-5560-0186-642a-0501789830e5@xen.org>
 <alpine.DEB.2.21.2102040944520.29047@sstabellini-ThinkPad-T480s> <CAL_JsqJuvZPheRkacaopHtbATj8uRua=wj_XU5ib41sSpVO-ug@mail.gmail.com> <alpine.DEB.2.21.2102041228560.29047@sstabellini-ThinkPad-T480s> <CAL_JsqKTz8J3txk9W5ekqmfON_g_TdLYsLi0YXYU3rmiyubL2A@mail.gmail.com>
 <alpine.DEB.2.21.2102041309430.29047@sstabellini-ThinkPad-T480s> <CAL_Jsq+cedzG5NBfLRub=msZEK6umBrk-O7FYB=Dk34=k9fuCA@mail.gmail.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 4 Feb 2021, Rob Herring wrote:
> On Thu, Feb 4, 2021 at 3:33 PM Stefano Stabellini
> <sstabellini@kernel.org> wrote:
> >
> > On Thu, 4 Feb 2021, Rob Herring wrote:
> > > On Thu, Feb 4, 2021 at 2:36 PM Stefano Stabellini
> > > <sstabellini@kernel.org> wrote:
> > > >
> > > > On Thu, 4 Feb 2021, Rob Herring wrote:
> > > > > On Thu, Feb 4, 2021 at 11:56 AM Stefano Stabellini
> > > > > <sstabellini@kernel.org> wrote:
> > > > > >
> > > > > > Hi Rob,
> > > > > >
> > > > > > We have a question on the PCIe device tree bindings. In summary, we have
> > > > > > come across the Raspberry Pi 4 PCIe description below:
> > > > > >
> > > > > >
> > > > > > pcie0: pcie@7d500000 {
> > > > > >    compatible = "brcm,bcm2711-pcie";
> > > > > >    reg = <0x0 0x7d500000  0x0 0x9310>;
> > > > > >    device_type = "pci";
> > > > > >    #address-cells = <3>;
> > > > > >    #interrupt-cells = <1>;
> > > > > >    #size-cells = <2>;
> > > > > >    interrupts = <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>,
> > > > > >                 <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>;
> > > > > >    interrupt-names = "pcie", "msi";
> > > > > >    interrupt-map-mask = <0x0 0x0 0x0 0x7>;
> > > > > >    interrupt-map = <0 0 0 1 &gicv2 GIC_SPI 143
> > > > > >                                                      IRQ_TYPE_LEVEL_HIGH>;
> > > > > >    msi-controller;
> > > > > >    msi-parent = <&pcie0>;
> > > > > >
> > > > > >    ranges = <0x02000000 0x0 0xc0000000 0x6 0x00000000
> > > > > >              0x0 0x40000000>;
> > > > > >    /*
> > > > > >     * The wrapper around the PCIe block has a bug
> > > > > >     * preventing it from accessing beyond the first 3GB of
> > > > > >     * memory.
> > > > > >     */
> > > > > >    dma-ranges = <0x02000000 0x0 0x00000000 0x0 0x00000000
> > > > > >                  0x0 0xc0000000>;
> > > > > >    brcm,enable-ssc;
> > > > > >
> > > > > >    pci@1,0 {
> > > > > >            #address-cells = <3>;
> > > > > >            #size-cells = <2>;
> > > > > >            ranges;
> > > > > >
> > > > > >            reg = <0 0 0 0 0>;
> > > > > >
> > > > > >            usb@1,0 {
> > > > > >                    reg = <0x10000 0 0 0 0>;
> > > > > >                    resets = <&reset RASPBERRYPI_FIRMWARE_RESET_ID_USB>;
> > > > > >            };
> > > > > >    };
> > > > > > };
> > > > > >
> > > > > >
> > > > > > Xen fails to parse it with an error because it tries to remap reg =
> > > > > > <0x10000 0 0 0 0> as if it were a CPU address and of course it fails.
> > > > > >
> > > > > > Reading the device tree description in detail, I cannot tell if Xen
> > > > > > has a bug: the "ranges" property under pci@1,0 means that pci@1,0 is
> > > > > > treated like a default bus (not a PCI bus), hence the children's
> > > > > > "reg" properties are translated using the "ranges" property of the
> > > > > > parent (pcie@7d500000).
> > > > > >
> > > > > > Is it possible that the device tree is missing device_type =
> > > > > > "pci" under pci@1,0? Or is it just implied because pci@1,0 is a child of
> > > > > > pcie@7d500000?
> > > > >
> > > > > Indeed, it should have device_type. Linux (only recently, due to
> > > > > another missing device_type case) will also fall back to the node
> > > > > name, but only for 'pcie'.
> > > > >
> > > > > We should be able to create (or extend pci-bus.yaml) a schema to catch
> > > > > this case.
> > > >
> > > > Ah, that is what I needed to know, thank you!  Is Linux considering a
> > > > node named "pcie" as if it has device_type = "pci"?
> > >
> > > Yes, it was added for Rockchip RK3399 to avoid a DT update and regression.
> > >
> > > > In Xen, also to cover the RPi4 case, maybe I could add a check for the
> > > > node name to be "pci" or "pcie" and if so Xen could assume device_type =
> > > > "pci".
> > >
> > > I assume this never worked for RPi4 (and Linux will have the same
> > > issue), so can't we just update the DT in this case?
> >
> > I am not sure where the DT is coming from, probably from the RPi4 kernel
> > trees or firmware. I think it would be good if somebody got in touch to
> > tell them they have an issue.
> 
> So you just take whatever downstream RPi invents? Good luck.

Ehm, yes, I know what you are talking about :-D

We don't promise to support downstream kernels and device trees but
fortunately they work most of the time.


> > Elliott, where was that device tree coming from originally?
> >
> >
> > From a Xen perspective, for the sake of minimizing user pain (given
> > that it might take a while to update those DTs) and to introduce as few
> > ties as possible with kernel versions, it might be best to add the
> > "pci" name workaround, maybe with a /* HACK */ comment on top.
> 
> There is some possibility of that causing a regression on another
> platform. That's why we limited Linux as much as possible and also
> print a warning. But we have to worry about 20 year old PowerMacs and
> such.

I see, that makes sense. Printing a warning is very sensible.


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 22:39:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 22:39:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81459.150630 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7nHq-00042H-1x; Thu, 04 Feb 2021 22:39:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81459.150630; Thu, 04 Feb 2021 22:39:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7nHp-00042A-Ua; Thu, 04 Feb 2021 22:39:41 +0000
Received: by outflank-mailman (input) for mailman id 81459;
 Thu, 04 Feb 2021 22:39:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=G7BZ=HG=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1l7nHo-000425-Ey
 for xen-devel@lists.xenproject.org; Thu, 04 Feb 2021 22:39:40 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 95ca87d6-903a-42e3-8fcd-569dff8e28b9;
 Thu, 04 Feb 2021 22:39:39 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 0142F64F95;
 Thu,  4 Feb 2021 22:39:37 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 95ca87d6-903a-42e3-8fcd-569dff8e28b9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1612478378;
	bh=ahBmcFZW2v+ZiH8PV3Rzl5cfNUzyXvIM1jnlEfzTGUU=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=b6IoH/NN504hJFQ34wzjq7ttT+rp2p1giuBbKyfhI2wDC62IauefNqahF87VrKaan
	 QlR75cpejtgXRsFb7ZESKxNfQzqpfwTr+qXRrtQ/ZkaFsitp5NIq2llg1mX6CklcX1
	 lVLNAGijFiHOAHnz3wqA581HYDEngw/Nr+eCMfc7oMdEhIHSqjzy7nc3BG3O8E9SZV
	 NbQjd6AKU/WXnFy8jmhsimrdKCZr/Qww/pMDPzWHpvd/Z7yA2PFtJZyWPGU6bb5h/A
	 vKCMdqq6Laxcv6CnJVF2vI/5+0uMf9I2CMN5hrovSmJT0rwgXgJghdQOhsNUk9OpW5
	 GXMd4JyMhkZrg==
Date: Thu, 4 Feb 2021 14:39:37 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Julien Grall <julien.grall.oss@gmail.com>, 
    Elliott Mitchell <ehem+xen@m5p.com>, 
    xen-devel <xen-devel@lists.xenproject.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: domain_build: Ignore device nodes with invalid
 addresses
In-Reply-To: <9b97789b-5560-0186-642a-0501789830e5@xen.org>
Message-ID: <alpine.DEB.2.21.2102041435300.29047@sstabellini-ThinkPad-T480s>
References: <YBmQQ3Tzu++AadKx@mattapan.m5p.com> <a422c04c-f908-6fb6-f2de-fea7b18a6e7d@xen.org> <b6d342f8-c833-db88-9808-cdc946999300@xen.org> <alpine.DEB.2.21.2102021412480.29047@sstabellini-ThinkPad-T480s> <06d6b9ec-0db9-d6da-e30b-df9f9381157d@xen.org>
 <alpine.DEB.2.21.2102031315350.29047@sstabellini-ThinkPad-T480s> <CAJ=z9a1LsqOMFXV5GLYEkF7=akMx7fT_vpgVtT6xP6MPfmP9vQ@mail.gmail.com> <alpine.DEB.2.21.2102031519540.29047@sstabellini-ThinkPad-T480s> <9b97789b-5560-0186-642a-0501789830e5@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 4 Feb 2021, Julien Grall wrote:
> On 04/02/2021 00:13, Stefano Stabellini wrote:
> > On Wed, 3 Feb 2021, Julien Grall wrote:
> > > On Wed, 3 Feb 2021 at 22:18, Stefano Stabellini <sstabellini@kernel.org>
> > > wrote:
> > > > > > But aside from PCIe, let's say that we know of a few nodes for
> > > > > > which "reg" needs a special treatment. I am not sure it makes
> > > > > > sense to proceed with parsing those nodes without knowing how to
> > > > > > deal with that.
> > > > > 
> > > > > I believe that most of the time the "special" treatment would be to
> > > > > ignore the property "reg", as it will not be a CPU memory address.
> > > > > 
> > > > > > So maybe we should add those nodes to skip_matches until we know
> > > > > > what to do with them. At that point, I would imagine we would
> > > > > > introduce a special handle_device function that knows what to do.
> > > > > > In the case of PCIe, something like "handle_device_pcie".
> > > > > Could you outline how "handle_device_pcie()" would differ from
> > > > > handle_node()?
> > > > > 
> > > > > In fact, the problem is not the PCIe node directly. Instead, it is
> > > > > the second level of nodes below it (i.e. usb@...).
> > > > > 
> > > > > The current implementation of dt_number_of_address() only looks
> > > > > at the bus type of the parent. As the parent has no bus type but
> > > > > has "ranges", it thinks this is something we can translate to a
> > > > > CPU address.
> > > > > 
> > > > > However, this is below a PCI bus so the meaning of "reg" is completely
> > > > > different. In this case, we only need to ignore "reg".
> > > > 
> > > > I see what you are saying and I agree: if we had to introduce a special
> > > > case for PCI, then dt_number_of_address() seems to be a good place.  In
> > > > fact, we already have special PCI handling, see our
> > > > __dt_translate_address function and xen/common/device_tree.c:dt_busses.
> > > > 
> > > > Which brings the question: why is this actually failing?
> > > 
> > > I already hinted at the reason in my previous e-mail :). Let me expand
> > > a bit more.
> > > 
> > > > 
> > > > pcie0 {
> > > >       ranges = <0x02000000 0x0 0xc0000000 0x6 0x00000000 0x0
> > > > 0x40000000>;
> > > > 
> > > > Which means that PCI addresses 0xc0000000-0x100000000 become
> > > > 0x600000000-0x640000000.
> > > > 
> > > > The offending DT is:
> > > > 
> > > > &pcie0 {
> > > >           pci@1,0 {
> > > >                   #address-cells = <3>;
> > > >                   #size-cells = <2>;
> > > >                   ranges;
> > > > 
> > > >                   reg = <0 0 0 0 0>;
> > > > 
> > > >                   usb@1,0 {
> > > >                           reg = <0x10000 0 0 0 0>;
> > > >                           resets = <&reset
> > > > RASPBERRYPI_FIRMWARE_RESET_ID_USB>;
> > > >                   };
> > > >           };
> > > > };
> > > > 
> > > > 
> > > > reg = <0x10000 0 0 0 0> means that usb@1,0 is PCI device 01:00.0.
> > > > However, the rest of the reg cells are left as zero. It shouldn't be an
> > > > issue because usb@1,0 is a child of pci@1,0 but pci@1,0 is not a bus.
> > > 
> > > The property "ranges" is used to define a mapping or translation
> > > between the address space of the "bus" (here pci@1,0) and the address
> > > space of the bus node's parent (&pcie0).
> > > IOW, it means "reg" in usb@1,0 is an address on the PCI bus (i.e. BDF).
> > > 
> > > The problem is dt_number_of_address() will only look at the "bus" type
> > > of the parent using dt_match_bus(). This will return the default bus
> > > (see dt_bus_default_match()), because there is a "ranges" property in
> > > the parent node (i.e. pci@1,0). Therefore...
> > > 
> > > > So
> > > > in theory dt_number_of_address() should already return 0 for it.
> > > 
> > > ... dt_number_of_address() will return 1 even if the address is not a
> > > CPU address. So when Xen tries to translate it, it will fail.
> > > 
> > > > 
> > > > Maybe reg = <0 0 0 0 0> is the problem. In that case, we could simply
> > > > add a check to skip 0 size ranges. Just a hack to explain what I mean:
> > > 
> > > The parent of pci@1,0 is a PCI bridge (see its device_type
> > > property), so the CPU addresses are found not via "reg" but via
> > > "assigned-addresses".
> > > 
> > > In this situation, "reg" will have a different meaning and therefore
> > > there is no guarantee that the size will be 0.
> > 
> > I copy/pasted the following:
> > 
> >         pci@1,0 {
> >                 #address-cells = <3>;
> >                 #size-cells = <2>;
> >                 ranges;
> > 
> >                 reg = <0 0 0 0 0>;
> > 
> >                 usb@1,0 {
> >                         reg = <0x10000 0 0 0 0>;
> >                         resets = <&reset
> >                         RASPBERRYPI_FIRMWARE_RESET_ID_USB>;
> >                 };
> >         };
> > 
> > under pcie0 in my DTS to see what happens (the node is not there in the
> > device tree for the rpi-5.9.y kernel.) It results in the expected error:
> > 
> > (XEN) Unable to retrieve address 0 for /scb/pcie@7d500000/pci@1,0/usb@1,0
> > (XEN) Device tree generation failed (-22).
> > 
> > I could verify that pci@1,0 is seen as a "default" bus due to the
> > "ranges" property, thus dt_number_of_address() returns 1.
> > 
> > 
> > I can see that reg = <0 0 0 0 0> is not a problem because it is ignored
> > given that the parent is a PCI bus. assigned-addresses is the one that
> > is read.
> > 
> > 
> > But from a device tree perspective I am actually confused by the
> > presence of the "ranges" property under pci@1,0. Is that correct? It is
> > stating that addresses of child devices will be translated to the
> > address space of the parent (pcie0) using the parent translation rules.
> > I mean -- it looks like Xen is right in trying to translate reg =
> > <0x10000 0 0 0 0> using ranges = <0x02000000 0x0 0xc0000000 0x6
> > 0x00000000 0x0 0x40000000>.
> > 
> > Or maybe since pcie0 is a PCI bus all the children addresses, even
> > grand-children, are expected to be specified using "assigned-addresses"?
> > 
> > 
> > Looking at other examples [1][2] maybe the mistake is that pci@1,0 is
> > missing device_type = "pci"?  Of course, if I add that, the error
> > disappears.
> 
> I am afraid, I don't know the answer. I think it would be best to ask the
> Linux DT folks about it.
> 
> > 
> > [1] Documentation/devicetree/bindings/pci/mvebu-pci.txt
> > [2] Documentation/devicetree/bindings/pci/nvidia,tegra20-pcie.txt
> > 
> > For the sake of making Xen more resilient to possible DTSes, maybe we
> > should try to extend the dt_bus_pci_match check? See for instance the
> > change below, but we might be able to come up with better ideas.
> > 
> > 
> > diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
> > index 18825e333e..24d998f725 100644
> > --- a/xen/common/device_tree.c
> > +++ b/xen/common/device_tree.c
> > @@ -565,12 +565,21 @@ static unsigned int dt_bus_default_get_flags(const __be32 *addr)
> >  static bool_t dt_bus_pci_match(const struct dt_device_node *np)
> >  {
> > +    bool_t ret = false;
> > +
> >      /*
> >       * "pciex" is PCI Express "vci" is for the /chaos bridge on 1st-gen PCI
> >       * powermacs "ht" is hypertransport
> >       */
> > -    return !strcmp(np->type, "pci") || !strcmp(np->type, "pciex") ||
> > -        !strcmp(np->type, "vci") || !strcmp(np->type, "ht");
> > +    ret = !strcmp(np->type, "pci") || !strcmp(np->type, "pciex") ||
> > +        !strcmp(np->type, "vci") || !strcmp(np->type, "ht");
> > +
> > +    if ( ret ) return ret;
> > +
> > +    if ( !strcmp(np->name, "pci") )
> > +        ret = dt_bus_pci_match(dt_get_parent(np));
> > +
> > +    return ret;
> >  }
> 
> It is probably safe to assume that a PCI device (not hostbridge) will start
> with "pci". Although, I don't much like the idea because the name is not meant
> to be stable.
> 
> AFAICT, we can only rely on "compatible" and "type".

After the discussion with Rob, it is clear that we have to add a check
on the node name for "pcie" in dt_bus_pci_match. However, that wouldn't
solve the problem reported by Elliott, because in this case the node
name is "pci", not "pcie".

I suggest that we also add a check for "pci" in dt_bus_pci_match,
although that means our check will be slightly different from the
equivalent Linux check. The "pci" check should come with an in-code
comment explaining the situation and the reasons behind it.

What do you think?


From xen-devel-bounces@lists.xenproject.org Thu Feb 04 23:23:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 Feb 2021 23:23:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81462.150642 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7nxu-0000a1-Ks; Thu, 04 Feb 2021 23:23:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81462.150642; Thu, 04 Feb 2021 23:23:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7nxu-0000Zu-Hm; Thu, 04 Feb 2021 23:23:10 +0000
Received: by outflank-mailman (input) for mailman id 81462;
 Thu, 04 Feb 2021 23:23:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7nxt-0000Zm-CD; Thu, 04 Feb 2021 23:23:09 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7nxt-0005ql-4k; Thu, 04 Feb 2021 23:23:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7nxs-0006Xq-Rp; Thu, 04 Feb 2021 23:23:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l7nxs-0003ud-RJ; Thu, 04 Feb 2021 23:23:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=SwqSbv31+BQFwy/e5fjsdGFTQ9g4lyptkQL41yw1Tig=; b=pwhtt+isuehWW7rSMCDb/C01Pd
	LNtdHcYZ7rK69hiEWKtNlj0ZJG4gHVzI2BmPjTxz/vdaAOLq8H7SW4eLoCBJuU1od3usagbYn3Mzx
	rXaBlWv3fjKgOEAHIEZSij2O8zjx14c6redI8tCTqWblDMqNZESE+h/SIOjFoXszP4r0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159005-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 159005: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    linux-linus:build-arm64:<job status>:broken:regression
    linux-linus:build-arm64-pvops:<job status>:broken:regression
    linux-linus:build-arm64-xsm:<job status>:broken:regression
    linux-linus:build-arm64-xsm:host-install(4):broken:regression
    linux-linus:build-arm64:host-install(4):broken:regression
    linux-linus:build-arm64-pvops:host-install(4):broken:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:guest-start:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:guest-start:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-armhf-armhf-libvirt:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:guest-start:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
    linux-linus:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:build-arm64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=61556703b610a104de324e4f061dc6cf7b218b46
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 04 Feb 2021 23:23:08 +0000

flight 159005 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159005/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 152332
 build-arm64                   4 host-install(4)        broken REGR. vs. 152332
 build-arm64-pvops             4 host-install(4)        broken REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-armhf-armhf-xl-arndale  14 guest-start              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck 14 guest-start            fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl          14 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1  14 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw 13 guest-start              fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     14 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                61556703b610a104de324e4f061dc6cf7b218b46
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  188 days
Failing since        152366  2020-08-01 20:49:34 Z  187 days  334 attempts
Testing same since   159005  2021-02-04 05:04:01 Z    0 days    1 attempts

------------------------------------------------------------
4527 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  broken  
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-arm64 broken
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-step build-arm64-xsm host-install(4)
broken-step build-arm64 host-install(4)
broken-step build-arm64-pvops host-install(4)

Not pushing.

(No revision log; it would be 1024264 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Feb 05 00:58:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 00:58:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81474.150663 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7pRt-0002CV-BF; Fri, 05 Feb 2021 00:58:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81474.150663; Fri, 05 Feb 2021 00:58:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7pRt-0002CO-88; Fri, 05 Feb 2021 00:58:13 +0000
Received: by outflank-mailman (input) for mailman id 81474;
 Fri, 05 Feb 2021 00:58:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7pRr-0002CG-2m; Fri, 05 Feb 2021 00:58:11 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7pRq-0007xl-Sg; Fri, 05 Feb 2021 00:58:10 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7pRq-000340-Kj; Fri, 05 Feb 2021 00:58:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l7pRq-0008Ne-KI; Fri, 05 Feb 2021 00:58:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=30c2KGn0gaTVDIUdLOfBhFrx91AtL/dNWApIZ1V3T0M=; b=kvMpyzmkxELCO0VQSzqcFL6p6h
	Cz5X9kR+loFgvIvpSWUWEhSOyrtEh9PnGLEVIbhN5GsfWDsynY1/SSjamsznBhfIQRbU5dyWB5zyB
	s8Ev5dlRbu8EumPOan5fcNWMNfw6h3LQOjUVaevfgkL9BOQhhFdlquLG+8mkEWY8Lf20=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159025-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 159025: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=def12125357ed2efd6d581d9033afcc9d66daa8a
X-Osstest-Versions-That:
    xen=45dee7d92b493bb531e7e77a6f9c0180ab152f87
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 05 Feb 2021 00:58:10 +0000

flight 159025 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159025/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  def12125357ed2efd6d581d9033afcc9d66daa8a
baseline version:
 xen                  45dee7d92b493bb531e7e77a6f9c0180ab152f87

Last test of basis   159020  2021-02-04 17:00:27 Z    0 days
Testing same since   159025  2021-02-04 22:00:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Manuel Bouyer <bouyer@netbsd.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   45dee7d92b..def1212535  def12125357ed2efd6d581d9033afcc9d66daa8a -> smoke


From xen-devel-bounces@lists.xenproject.org Fri Feb 05 01:23:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 01:23:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81483.150678 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7ppz-0007Kq-Ej; Fri, 05 Feb 2021 01:23:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81483.150678; Fri, 05 Feb 2021 01:23:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7ppz-0007Kj-BZ; Fri, 05 Feb 2021 01:23:07 +0000
Received: by outflank-mailman (input) for mailman id 81483;
 Fri, 05 Feb 2021 01:23:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kT56=HH=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1l7ppy-0007Ke-3b
 for xen-devel@lists.xenproject.org; Fri, 05 Feb 2021 01:23:06 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e0c88f42-86d6-4283-ad45-ba418021db27;
 Fri, 05 Feb 2021 01:23:05 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 3BF9964FB0;
 Fri,  5 Feb 2021 01:23:04 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e0c88f42-86d6-4283-ad45-ba418021db27
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1612488184;
	bh=PItTdRKiVSD2caMBRqMvkQCLWOB3928cKd3ymKxTe3c=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=UQZrpyddxze7Ih3JMuySrWgzpEvVuCBiSj9JXdxgH6sgsepgMx4EDrSbE24Cs67nk
	 jx+nzlVjb8US2E/nRVfMTgXqZB7TJ/UpGLB8HKg2KDmYxcP9zRM2FkvbNLLl9rkwxe
	 SKC6k0jTECi5CDS7WYIegHKH8HlL8Dt8sW5fXlA+zGG38SmogrIwgyzoo9J87Bw9oC
	 Vgi26YRr4EVZsiAYticoLBdrb8ZYQIz93pHZrnKunaQKMjf8NwnRoo9HQu84SemHsv
	 bGLkmHQmlp76i5cqqR3B53AFzYpuLmfuExSj2n/adQDqUUdWwHFZzGzUFj+8luzldw
	 yDsP6RFSsksbA==
Date: Thu, 4 Feb 2021 17:23:03 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Jukka Kaartinen <jukka.kaartinen@unikie.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Xen-devel <xen-devel@lists.xenproject.org>, 
    Roman Shaposhnik <roman@zededa.com>, 
    Bertrand Marquis <Bertrand.Marquis@arm.com>
Subject: Re: Question about xen and Rasp 4B
In-Reply-To: <6a0dab88-aede-f048-fb86-b2a786ac3674@unikie.com>
Message-ID: <alpine.DEB.2.21.2102041711510.29047@sstabellini-ThinkPad-T480s>
References: <CAFnJQOouOUox_kBB66sfSuriUiUSvmhSjPxucJjgrB9DdLOwkg@mail.gmail.com> <alpine.DEB.2.21.2101221449480.14612@sstabellini-ThinkPad-T480s> <CAFnJQOqoqj6mWwR61ZsZj1JxRrdisFtH_87YXCeW619GM+L21Q@mail.gmail.com> <alpine.DEB.2.21.2101251646470.20638@sstabellini-ThinkPad-T480s>
 <CAFnJQOpuehAWde5Ta4ud9CGufwZ-K+=60epzSdKc_DnS75O2iA@mail.gmail.com> <alpine.DEB.2.21.2101261149210.2568@sstabellini-ThinkPad-T480s> <CAFnJQOpgRM-3_aZsnv36w+aQV=gMcBA18ZEw_-man7zmYb4O4Q@mail.gmail.com> <5a33e663-4a6d-6247-769a-8f14db4810f2@xen.org>
 <b9247831-335a-f791-1664-abed6b400a42@unikie.com> <c44d45ed-f03e-e901-4a46-0ce57504703f@xen.org> <alpine.DEB.2.21.2102011055080.29047@sstabellini-ThinkPad-T480s> <6a0dab88-aede-f048-fb86-b2a786ac3674@unikie.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 4 Feb 2021, Jukka Kaartinen wrote:
> We really need direct HW access, so PVFB is not really an option. And at this
> point, we can trust the VMs.
> 
> 
> Any idea what I am missing here? Is this the way to give the domU access
> to the memory?
> 
> # from dom0: cat /proc/iomem
> # fe104000-fe104027 : fe104000.rng rng@7e104000
> 
> # to hide the above rng from dom0 I added these to the device tree, and the
> # above line disappeared from /proc/iomem.
> boot2.scr:
> fdt set /soc/rng@7e104000 xen,passthrough <0x1>
> fdt set /soc/rng@7e104000 status disabled

Leave status set to "okay" (i.e. enabled); just set xen,passthrough to 0x1.
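In other words, a boot-script fragment along the lines of the sketch below (the node path is taken from the quoted commands above; whether it lives in boot2.scr is up to your setup):

```
# U-Boot script sketch: mark the node for passthrough, keep it enabled
fdt set /soc/rng@7e104000 xen,passthrough <0x1>
# note: no "fdt set ... status disabled" line -- status stays "okay"
```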

 
> domu.cfg:
> iomem = [ 'fe104,1' ]
> 
> The domU starts, but I cannot see that address in the domU iomem range.
> Also, the device tree in the domU is quite empty.
> 
> Do I need something like:
> device_tree = "rng.dtb"
> 
> like here:
> https://lists.xenproject.org/archives/html/xen-devel/2018-01/msg02618.html
> 
> 
> I tried to add
> dtdev = [ "/soc/rng@7e104000" ]

Yes, you need both dtdev and device_tree; see
https://xenbits.xenproject.org/docs/unstable/misc/arm/passthrough.txt

iomem is to remap memory regions
irqs is to remap interrupts
device_tree is to populate the DomU device tree
dtdev is to set up the IOMMU, linking to the original device in the host DT
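Putting those four options together, a domU config for the rng node discussed in this thread might look like the sketch below (the device_tree path and rng.dtb filename are illustrative assumptions; see the passthrough.txt document linked above for the authoritative syntax):

```
# domU config sketch: non-PCI device passthrough on Arm
iomem = [ "fe104,1" ]              # map one page of MMIO (base fe104000)
dtdev = [ "/soc/rng@7e104000" ]    # host DT path, used to configure the IOMMU
device_tree = "/root/rng.dtb"      # partial device tree exposing the node to the domU
# irqs = [ ... ]                   # only needed if the device raises interrupts
```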


> but this gives me the error:
> "libxl: error: libxl_create.c:1107:libxl__domain_config_setdefault:
> passthrough not supported on this platform"
> 
> Will this happen if I generate the rng.dtb?

This usually happens because there is no IOMMU on the board, or the
IOMMU is disabled. Indeed, the Raspberry Pi 4 doesn't seem to have one.

Now this is going to make things more difficult: without the IOMMU there
can be no protection. In addition, we have a problem with address
translations: when the domU programs a device to do DMA, it uses its
"fake" pseudo-physical addresses. When the device starts the DMA
transaction with the "fake" address, the IOMMU translates it back to a
real physical address.

For this reason, there is no way to assign a device to a domU without an
IOMMU on upstream Xen. However, I sent out a patch series a while back
that allows creating domUs with memory mapped 1:1 (pseudo-physical ==
physical). With those patches applied, there is no issue with address
translations anymore, and you can assign a device to a domU even without
an IOMMU. However, keep in mind that there is going to be no protection.
The series only works for dom0less domUs for now, but it shouldn't be
hard to make it work for any other domUs.

You can find the series here in the Xilinx Xen branch:
https://github.com/Xilinx/xen/tree/xilinx/release-2020.2

and specifically the relevant patches start here:
https://github.com/Xilinx/xen/commit/b8953d357aa095d1027156cf386ad37bd8a34da5

up until:
https://github.com/Xilinx/xen/commit/a7b332c420da40aa3192a8b77c65bcdb1935b5ab


From xen-devel-bounces@lists.xenproject.org Fri Feb 05 01:56:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 01:56:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81486.150690 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7qML-00028a-9r; Fri, 05 Feb 2021 01:56:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81486.150690; Fri, 05 Feb 2021 01:56:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7qML-00028R-23; Fri, 05 Feb 2021 01:56:33 +0000
Received: by outflank-mailman (input) for mailman id 81486;
 Fri, 05 Feb 2021 01:56:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=srdo=HH=kernel.org=kuba@srs-us1.protection.inumbo.net>)
 id 1l7qMK-000286-2Q
 for xen-devel@lists.xenproject.org; Fri, 05 Feb 2021 01:56:32 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8af0d0b7-7fb0-4765-af12-991bfb7797ef;
 Fri, 05 Feb 2021 01:56:31 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id D19C264DC4;
 Fri,  5 Feb 2021 01:56:29 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8af0d0b7-7fb0-4765-af12-991bfb7797ef
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1612490190;
	bh=ZlbTbixiZ/VEvhe2cB1BDy9leTlPGZZcyvIJqx6f2UI=;
	h=Date:From:To:Cc:Subject:In-Reply-To:References:From;
	b=tu6SJbXUsHBFLZamGifs83f3CijLyOjFBee5L2c4iFeuoBlcqfKvKIhvZc3L22Z+R
	 tj4OtzBNvXCd1gkay1lHs/nd57rPGEKXSgmcitvQJXpqPVESKx7TaT9aWCvHxfyh2C
	 OyTPEq5RzCUwfuqgsZjB9f7ng3IutikF/4Ce+tym0NfiWfuPmmdETry1Hco4C8u2kN
	 1Z+wjDtqiMrg0kz/fDOSsnZ/MxzHEO6WHMBtOinAwVe5DoAXBo6xc6PbgQAIbqFkGv
	 VhYGfuPZusMTknZUlZiZpuAFqgJZPFbNt3gXaqN1e+SUFquG0GafJv6BIFuRLsnot2
	 BkL2UbiwyKmNQ==
Date: Thu, 4 Feb 2021 17:56:28 -0800
From: Jakub Kicinski <kuba@kernel.org>
To: Jürgen Groß <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org, Wei Liu <wei.liu@kernel.org>, Paul Durrant
 <paul@xen.org>, "David S. Miller" <davem@davemloft.net>, Igor Druzhinin
 <igor.druzhinin@citrix.com>, stable@vger.kernel.org
Subject: Re: [PATCH] xen/netback: avoid race in
 xenvif_rx_ring_slots_available()
Message-ID: <20210204175628.7904d1da@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
In-Reply-To: <f6fa1533-0646-e8b1-b7f8-51ad70691cae@suse.com>
References: <20210202070938.7863-1-jgross@suse.com>
	<20210203154800.4c6959d6@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
	<f6fa1533-0646-e8b1-b7f8-51ad70691cae@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On Thu, 4 Feb 2021 06:32:32 +0100 Jürgen Groß wrote:
> On 04.02.21 00:48, Jakub Kicinski wrote:
> > On Tue,  2 Feb 2021 08:09:38 +0100 Juergen Gross wrote:
> >> Since commit 23025393dbeb3b8b3 ("xen/netback: use lateeoi irq binding")
> >> xenvif_rx_ring_slots_available() is no longer called only from the rx
> >> queue kernel thread, so it needs to access the rx queue with the
> >> associated queue held.
> >>
> >> Reported-by: Igor Druzhinin <igor.druzhinin@citrix.com>
> >> Fixes: 23025393dbeb3b8b3 ("xen/netback: use lateeoi irq binding")
> >> Cc: stable@vger.kernel.org
> >> Signed-off-by: Juergen Gross <jgross@suse.com>
> >
> > Should we route this change via networking trees? I see the bug did not
> > go through networking :)
>
> I'm fine with either networking or the Xen tree. It should be included
> in 5.11, though. So if you are willing to take it, please do so.

All right, applied to net, it'll most likely hit Linus's tree on Tue.

Thanks!


From xen-devel-bounces@lists.xenproject.org Fri Feb 05 03:32:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 03:32:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81497.150708 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7rrG-0004TD-5H; Fri, 05 Feb 2021 03:32:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81497.150708; Fri, 05 Feb 2021 03:32:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7rrG-0004T6-2A; Fri, 05 Feb 2021 03:32:34 +0000
Received: by outflank-mailman (input) for mailman id 81497;
 Fri, 05 Feb 2021 03:32:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HLiv=HH=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1l7rrF-0004Sl-6U
 for xen-devel@lists.xenproject.org; Fri, 05 Feb 2021 03:32:33 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 61187142-8eaa-4a28-8043-e8503f06f38b;
 Fri, 05 Feb 2021 03:32:30 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 1153WDlV037679
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Thu, 4 Feb 2021 22:32:19 -0500 (EST) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 1153WD1H037678;
 Thu, 4 Feb 2021 19:32:13 -0800 (PST) (envelope-from ehem)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 61187142-8eaa-4a28-8043-e8503f06f38b
Date: Thu, 4 Feb 2021 19:32:13 -0800
From: Elliott Mitchell <ehem+undef@m5p.com>
To: Rob Herring <robh@kernel.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
        Julien Grall <julien.grall.oss@gmail.com>,
        Elliott Mitchell <ehem+xen@m5p.com>,
        xen-devel <xen-devel@lists.xenproject.org>,
        Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, julien@xen.org
Subject: Re: Question on PCIe Device Tree bindings, Was: [PATCH] xen/arm:
 domain_build: Ignore device nodes with invalid addresses
Message-ID: <YBy8PbYeLEHjcELY@mattapan.m5p.com>
References: <alpine.DEB.2.21.2102031315350.29047@sstabellini-ThinkPad-T480s>
 <CAJ=z9a1LsqOMFXV5GLYEkF7=akMx7fT_vpgVtT6xP6MPfmP9vQ@mail.gmail.com>
 <alpine.DEB.2.21.2102031519540.29047@sstabellini-ThinkPad-T480s>
 <9b97789b-5560-0186-642a-0501789830e5@xen.org>
 <alpine.DEB.2.21.2102040944520.29047@sstabellini-ThinkPad-T480s>
 <CAL_JsqJuvZPheRkacaopHtbATj8uRua=wj_XU5ib41sSpVO-ug@mail.gmail.com>
 <alpine.DEB.2.21.2102041228560.29047@sstabellini-ThinkPad-T480s>
 <CAL_JsqKTz8J3txk9W5ekqmfON_g_TdLYsLi0YXYU3rmiyubL2A@mail.gmail.com>
 <alpine.DEB.2.21.2102041309430.29047@sstabellini-ThinkPad-T480s>
 <CAL_Jsq+cedzG5NBfLRub=msZEK6umBrk-O7FYB=Dk34=k9fuCA@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CAL_Jsq+cedzG5NBfLRub=msZEK6umBrk-O7FYB=Dk34=k9fuCA@mail.gmail.com>
X-Spam-Status: No, score=0.4 required=10.0 tests=KHOP_HELO_FCRDNS autolearn=no
	autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

On Thu, Feb 04, 2021 at 03:52:26PM -0600, Rob Herring wrote:
> On Thu, Feb 4, 2021 at 3:33 PM Stefano Stabellini
> <sstabellini@kernel.org> wrote:
> >
> > On Thu, 4 Feb 2021, Rob Herring wrote:
> > > On Thu, Feb 4, 2021 at 2:36 PM Stefano Stabellini
> > > <sstabellini@kernel.org> wrote:
> > > >
> > > > On Thu, 4 Feb 2021, Rob Herring wrote:
> > > > > On Thu, Feb 4, 2021 at 11:56 AM Stefano Stabellini
> > > > > <sstabellini@kernel.org> wrote:
> > > > > >
> > > > > > Hi Rob,
> > > > > >
> > > > > > We have a question on the PCIe device tree bindings. In summary, we have
> > > > > > come across the Raspberry Pi 4 PCIe description below:
> > > > > >
> > > > > >
> > > > > > pcie0: pcie@7d500000 {
> > > > > >    compatible = "brcm,bcm2711-pcie";
> > > > > >    reg = <0x0 0x7d500000  0x0 0x9310>;
> > > > > >    device_type = "pci";
> > > > > >    #address-cells = <3>;
> > > > > >    #interrupt-cells = <1>;
> > > > > >    #size-cells = <2>;
> > > > > >    interrupts = <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>,
> > > > > >                 <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>;
> > > > > >    interrupt-names = "pcie", "msi";
> > > > > >    interrupt-map-mask = <0x0 0x0 0x0 0x7>;
> > > > > >    interrupt-map = <0 0 0 1 &gicv2 GIC_SPI 143 IRQ_TYPE_LEVEL_HIGH>;
> > > > > >    msi-controller;
> > > > > >    msi-parent = <&pcie0>;
> > > > > >
> > > > > >    ranges = <0x02000000 0x0 0xc0000000 0x6 0x00000000
> > > > > >              0x0 0x40000000>;
> > > > > >    /*
> > > > > >     * The wrapper around the PCIe block has a bug
> > > > > >     * preventing it from accessing beyond the first 3GB of
> > > > > >     * memory.
> > > > > >     */
> > > > > >    dma-ranges = <0x02000000 0x0 0x00000000 0x0 0x00000000
> > > > > >                  0x0 0xc0000000>;
> > > > > >    brcm,enable-ssc;
> > > > > >
> > > > > >    pci@1,0 {
> > > > > >            #address-cells = <3>;
> > > > > >            #size-cells = <2>;
> > > > > >            ranges;
> > > > > >
> > > > > >            reg = <0 0 0 0 0>;
> > > > > >
> > > > > >            usb@1,0 {
> > > > > >                    reg = <0x10000 0 0 0 0>;
> > > > > >                    resets = <&reset RASPBERRYPI_FIRMWARE_RESET_ID_USB>;
> > > > > >            };
> > > > > >    };
> > > > > > };
> > > > > >
> > > > > >
> > > > > > Xen fails to parse it with an error because it tries to remap reg =
> > > > > > <0x10000 0 0 0 0> as if it was a CPU address and of course it fails.
> > > > > >
> > > > > > Reading the device tree description in details, I cannot tell if Xen has
> > > > > > a bug: the ranges property under pci@1,0 means that pci@1,0 is treated
> > > > > > like a default bus (not a PCI bus), hence, the children regs are
> > > > > > translated using the ranges property of the parent (pcie@7d500000).
> > > > > >
> > > > > > Is it possible that the device tree is missing device_type =
> > > > > > "pci" under pci@1,0? Or is it just implied because pci@1,0 is a child of
> > > > > > pcie@7d500000?
> > > > >
> > > > > Indeed, it should have device_type. Linux (only recently due to
> > > > > another missing device_type case) will also look at node name, but
> > > > > only 'pcie'.
> > > > >
> > > > > We should be able to create (or extend pci-bus.yaml) a schema to catch
> > > > > this case.
> > > >
> > > > Ah, that is what I needed to know, thank you!  Is Linux considering a
> > > > node named "pcie" as if it has device_type = "pci"?
> > >
> > > Yes, it was added for Rockchip RK3399 to avoid a DT update and regression.
> > >
> > > > In Xen, also to cover the RPi4 case, maybe I could add a check for the
> > > > node name to be "pci" or "pcie" and if so Xen could assume device_type =
> > > > "pci".
> > >
> > > I assume this never worked for RPi4 (and Linux will have the same
> > > issue), so can't we just update the DT in this case?
> >
> > I am not sure where the DT is coming from, probably from the RPi4 kernel
> > trees or firmware. I think it would be good if somebody got in touch to
> > tell them they have an issue.
> 
> So you just take whatever downstream RPi invents? Good luck.
> 
> > Elliott, where was that device tree coming from originally?

Please excuse my very weak device-tree fu...

I'm unsure which section is the problem, but `git blame` points at
commit d5c8dc0d4c880fbde5293cc186b1ab23466254c4.

This commit is present in the Linux master branch and the linux-5.10.y
branch.

You were saying?
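For reference, the fix discussed above can be sketched as a one-property
addition to the quoted RPi4 device tree (an illustrative fragment, not the
actual upstream patch): giving pci@1,0 a device_type = "pci" marks it as a
PCI bus, so the child usb@1,0 reg is interpreted as a PCI address rather
than translated through the parent's ranges.

```dts
pci@1,0 {
	/* The property Xen (and, until recently, Linux) requires to
	 * recognize this node as a PCI bus; missing in the original DT. */
	device_type = "pci";
	#address-cells = <3>;
	#size-cells = <2>;
	ranges;

	reg = <0 0 0 0 0>;

	usb@1,0 {
		reg = <0x10000 0 0 0 0>;
		resets = <&reset RASPBERRYPI_FIRMWARE_RESET_ID_USB>;
	};
};
```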


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Fri Feb 05 04:30:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 04:30:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81501.150720 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7slW-0002Vf-G0; Fri, 05 Feb 2021 04:30:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81501.150720; Fri, 05 Feb 2021 04:30:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7slW-0002VY-Bc; Fri, 05 Feb 2021 04:30:42 +0000
Received: by outflank-mailman (input) for mailman id 81501;
 Fri, 05 Feb 2021 04:30:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7slV-0002VQ-Dx; Fri, 05 Feb 2021 04:30:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7slV-0005LZ-7M; Fri, 05 Feb 2021 04:30:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7slU-0006Fe-VN; Fri, 05 Feb 2021 04:30:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l7slU-0003MN-Ur; Fri, 05 Feb 2021 04:30:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=hL4u8zOaXQEXFLbzb6I/sHthd0tW0JvP0ccui43Nfzc=; b=P0DiBGdRK4/ogVjKxXuGRd3Q6Q
	cuUGdYfSLRQveF/56GYJbZ9rXWe36UjS+qyhFlNuY8XvNNGPGfQEqXVF9xazjPQIqGwpRt+oiOkiU
	KmNtbWZCczH7kkAMH9e/WIoyRG3JyioRb18MO6/lQB/UOnZ7y5npDUMtlZ9L+i9J4QsA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159029-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 159029: trouble: broken/pass
X-Osstest-Failures:
    xen-unstable-smoke:test-armhf-armhf-xl:<job status>:broken:regression
    xen-unstable-smoke:test-armhf-armhf-xl:capture-logs(22):broken:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7
X-Osstest-Versions-That:
    xen=def12125357ed2efd6d581d9033afcc9d66daa8a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 05 Feb 2021 04:30:40 +0000

flight 159029 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159029/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl             <job status>                 broken
 test-armhf-armhf-xl          22 capture-logs(22)       broken REGR. vs. 159025

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7
baseline version:
 xen                  def12125357ed2efd6d581d9033afcc9d66daa8a

Last test of basis   159025  2021-02-04 22:00:27 Z    0 days
Testing same since   159029  2021-02-05 01:01:31 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          broken  
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-xl broken
broken-step test-armhf-armhf-xl capture-logs(22)

Not pushing.

------------------------------------------------------------
commit ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jul 23 17:26:16 2020 +0100

    tools/tests: Introduce a test for acquire_resource
    
    For now, simply try to map 40 frames of grant table.  This catches most of the
    basic errors with resource sizes found and fixed through the 4.15 dev window.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Tested-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri Feb 05 04:44:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 04:44:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81505.150735 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7syJ-0003fM-Lc; Fri, 05 Feb 2021 04:43:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81505.150735; Fri, 05 Feb 2021 04:43:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7syJ-0003fF-Hw; Fri, 05 Feb 2021 04:43:55 +0000
Received: by outflank-mailman (input) for mailman id 81505;
 Fri, 05 Feb 2021 04:43:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7syH-0003f7-OD; Fri, 05 Feb 2021 04:43:53 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7syH-0005Xh-Go; Fri, 05 Feb 2021 04:43:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7syH-0006yr-9G; Fri, 05 Feb 2021 04:43:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l7syH-0008Qz-8q; Fri, 05 Feb 2021 04:43:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=JDFizDZ1fjWdi8OljBwi9uMgkkrnoUS4TZYm4kRZlD8=; b=oCSaY1c8Z10nHj4GgH7icJxnJb
	sJvWRfFj/TV72VGZFJnOLqTuuGSHmPgFGhIyRNpiFJtPVBZQQ7GTNKm3arqkVlY9IjOriFPAbgNjY
	yQF/gQCJm9wyfgIf9efxM7KU5c3d3MnD4BcTFkAdEgbtk357+u34eFCHd44ZhHLj6kko=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159009-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 159009: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    qemu-mainline:build-arm64:<job status>:broken:regression
    qemu-mainline:build-arm64-pvops:<job status>:broken:regression
    qemu-mainline:build-arm64-xsm:<job status>:broken:regression
    qemu-mainline:build-arm64-pvops:host-install(4):broken:regression
    qemu-mainline:build-arm64:host-install(4):broken:regression
    qemu-mainline:build-arm64-xsm:host-install(4):broken:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=1ed9228f63ea4bcc0ae240365305ee264e9189ce
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 05 Feb 2021 04:43:53 +0000

flight 159009 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159009/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-arm64-pvops             4 host-install(4)        broken REGR. vs. 152631
 build-arm64                   4 host-install(4)        broken REGR. vs. 152631
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd11-amd64 13 guest-start    fail REGR. vs. 152631
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 13 guest-start              fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                1ed9228f63ea4bcc0ae240365305ee264e9189ce
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  168 days
Failing since        152659  2020-08-21 14:07:39 Z  167 days  338 attempts
Testing same since   159009  2021-02-04 07:51:49 Z    0 days    1 attempts

------------------------------------------------------------
373 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  broken  
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-arm64 broken
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-step build-arm64-pvops host-install(4)
broken-step build-arm64 host-install(4)
broken-step build-arm64-xsm host-install(4)

Not pushing.

(No revision log; it would be 100178 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Feb 05 07:15:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 07:15:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81525.150762 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7vKO-0002me-Nq; Fri, 05 Feb 2021 07:14:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81525.150762; Fri, 05 Feb 2021 07:14:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7vKO-0002mX-KN; Fri, 05 Feb 2021 07:14:52 +0000
Received: by outflank-mailman (input) for mailman id 81525;
 Fri, 05 Feb 2021 07:14:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7vKN-0002mP-D8; Fri, 05 Feb 2021 07:14:51 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7vKN-0008N3-6o; Fri, 05 Feb 2021 07:14:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7vKM-0006VN-V0; Fri, 05 Feb 2021 07:14:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l7vKM-00076c-UV; Fri, 05 Feb 2021 07:14:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=zBEinKxQd1WIDVBlCLRoxKdC4BM1c+D53EGPnpspyBM=; b=Ue05t8qga0OEVgsrgcRcvedyXD
	Oq+N5uQB838jeXrDwhdbxNY9Per3ZU/7PlKgXdzgNCMxUySupcJ8dIyNP5mWQ2CsDJDNHwDJBAfsU
	NsbyDNAhjC3e7MtYK/GPjFWJvNhVChlUIjENmPtt730zPPP2dGlLq3krM7SKJbRqNFxA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159033-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 159033: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7
X-Osstest-Versions-That:
    xen=def12125357ed2efd6d581d9033afcc9d66daa8a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 05 Feb 2021 07:14:50 +0000

flight 159033 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159033/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7
baseline version:
 xen                  def12125357ed2efd6d581d9033afcc9d66daa8a

Last test of basis   159025  2021-02-04 22:00:27 Z    0 days
Testing same since   159029  2021-02-05 01:01:31 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   def1212535..ff522e2e91  ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7 -> smoke


From xen-devel-bounces@lists.xenproject.org Fri Feb 05 08:11:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 08:11:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81540.150785 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7wDS-00018y-Ub; Fri, 05 Feb 2021 08:11:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81540.150785; Fri, 05 Feb 2021 08:11:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7wDS-00018r-Pm; Fri, 05 Feb 2021 08:11:46 +0000
Received: by outflank-mailman (input) for mailman id 81540;
 Fri, 05 Feb 2021 08:11:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IalI=HH=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l7wDR-00018m-KP
 for xen-devel@lists.xenproject.org; Fri, 05 Feb 2021 08:11:45 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5a097a26-456f-4d01-91e1-bad4b6a0dc08;
 Fri, 05 Feb 2021 08:11:43 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 00D8EB19D;
 Fri,  5 Feb 2021 08:11:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5a097a26-456f-4d01-91e1-bad4b6a0dc08
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612512703; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=iHGoqUvcfHbzOyfzU/GOWcvUFDcZafbrxHVRWqXJxJs=;
	b=gFa1Nk1QP7Lp0wQu696KWTc1+38/RJxULG7d89sB89dF+5/poeC+5TmdheiU2qcq4wyb3D
	fzFI0I5uSGFgpn8n4N/V7ufeRbDAnUfUkPiCrgFxSJVWN4S1/T3jBRa/tRMbItE0VauTEH
	nWgupxk/BxOqrjU4tx1KhjuwMGu9RTc=
Subject: Re: [PATCH] x86/EFI: work around GNU ld 2.36 issue
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Ian Jackson <iwj@xenproject.org>
References: <e6d59277-35b2-e7df-0e68-a794c8855ac0@suse.com>
Message-ID: <8450b84d-93f2-7568-362e-af27954e5157@suse.com>
Date: Fri, 5 Feb 2021 09:11:43 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <e6d59277-35b2-e7df-0e68-a794c8855ac0@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 04.02.2021 14:38, Jan Beulich wrote:
> Our linker capability check fails with the recent binutils release's ld:
> 
> .../check.o:(.debug_aranges+0x6): relocation truncated to fit: R_X86_64_32 against `.debug_info'
> .../check.o:(.debug_info+0x6): relocation truncated to fit: R_X86_64_32 against `.debug_abbrev'
> .../check.o:(.debug_info+0xc): relocation truncated to fit: R_X86_64_32 against `.debug_str'+76
> .../check.o:(.debug_info+0x11): relocation truncated to fit: R_X86_64_32 against `.debug_str'+d
> .../check.o:(.debug_info+0x15): relocation truncated to fit: R_X86_64_32 against `.debug_str'+2b
> .../check.o:(.debug_info+0x29): relocation truncated to fit: R_X86_64_32 against `.debug_line'
> .../check.o:(.debug_info+0x30): relocation truncated to fit: R_X86_64_32 against `.debug_str'+19
> .../check.o:(.debug_info+0x37): relocation truncated to fit: R_X86_64_32 against `.debug_str'+71
> .../check.o:(.debug_info+0x3e): relocation truncated to fit: R_X86_64_32 against `.debug_str'
> .../check.o:(.debug_info+0x45): relocation truncated to fit: R_X86_64_32 against `.debug_str'+5e
> .../check.o:(.debug_info+0x4c): additional relocation overflows omitted from the output
> 
> Tell the linker to strip debug info as a workaround. Oddly enough debug
> info has been getting stripped when linking the actual xen.efi, without
> me being able to tell why this would be.

I've changed this to

"Tell the linker to strip debug info as a workaround. Debug info has been
 getting stripped already anyway when linking the actual xen.efi."

as I noticed that yesterday I had searched only for -S, while we have

EFI_LDFLAGS += --image-base=$(1) --stack=0,0 --heap=0,0 --strip-debug

Jan


From xen-devel-bounces@lists.xenproject.org Fri Feb 05 08:44:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 08:44:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81546.150801 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7wii-0004NO-BA; Fri, 05 Feb 2021 08:44:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81546.150801; Fri, 05 Feb 2021 08:44:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7wii-0004NH-7Y; Fri, 05 Feb 2021 08:44:04 +0000
Received: by outflank-mailman (input) for mailman id 81546;
 Fri, 05 Feb 2021 08:44:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7wih-0004Mw-Ev; Fri, 05 Feb 2021 08:44:03 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7wih-0001ue-AP; Fri, 05 Feb 2021 08:44:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7wig-0002u2-Qv; Fri, 05 Feb 2021 08:44:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l7wig-0007lx-QA; Fri, 05 Feb 2021 08:44:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=5OdxbA2ppEVJ2nGccfGXGumg2lW/aMzn4PaEQqkomSM=; b=FgRip4+PtfKbvA6d/WzMToyrEE
	XgJJjb1W6KUO0c3rZkakmWU39IgLUFjLGwT+l/v81z8JamsgrbxXpIvcjVNzBkBYToH8ZdrnfRMrZ
	R/kHKjJeCFfdY1H0lyQadJCOBg1A7wOs4kfgx1AyBBSMvynxuODY6S1ZBWugqhfwFqeQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159013-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 159013: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:guest-start:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=d203dbd69f1a02577dd6fe571d72beb980c548a6
X-Osstest-Versions-That:
    xen=5e7aa904405fa2f268c3af213516bae271de3265
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 05 Feb 2021 08:44:02 +0000

flight 159013 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159013/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 158977
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 158977
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 158977
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 158977
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 158977
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 158977
 test-armhf-armhf-libvirt-raw 13 guest-start                  fail  like 158977
 test-armhf-armhf-xl-vhd      13 guest-start                  fail  like 158977
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 158977
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 158977
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 158977
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 158977
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  d203dbd69f1a02577dd6fe571d72beb980c548a6
baseline version:
 xen                  5e7aa904405fa2f268c3af213516bae271de3265

Last test of basis   158977  2021-02-03 08:08:44 Z    2 days
Testing same since   158996  2021-02-04 00:07:40 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   5e7aa90440..d203dbd69f  d203dbd69f1a02577dd6fe571d72beb980c548a6 -> master


From xen-devel-bounces@lists.xenproject.org Fri Feb 05 10:10:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 10:10:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81570.150826 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7y4H-0005Lw-Mk; Fri, 05 Feb 2021 10:10:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81570.150826; Fri, 05 Feb 2021 10:10:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7y4H-0005Lp-JX; Fri, 05 Feb 2021 10:10:25 +0000
Received: by outflank-mailman (input) for mailman id 81570;
 Fri, 05 Feb 2021 10:10:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IalI=HH=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l7y4F-0005Lk-RZ
 for xen-devel@lists.xenproject.org; Fri, 05 Feb 2021 10:10:23 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c5e06275-3b04-4533-871c-f751f6c419d5;
 Fri, 05 Feb 2021 10:10:21 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D412AB113;
 Fri,  5 Feb 2021 10:10:20 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c5e06275-3b04-4533-871c-f751f6c419d5
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612519821; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=6Pxr9R0Ucn7udxBrgJVKqihdfuhAH52HDrtFL4xAcg0=;
	b=VS60mCXqj0LkAay6s4Uf3izCyZmcYq6vzramkHSB4Vu88lRiJ1WBcBsF84FFB9kVPazXck
	eKmSLPRuqQsAm/YInn1hPH2J0FV6exBfwRMMRnrXhIT0gkF9zGL+GiiVx3V/hEzmSilrPy
	TaLB5kAf731W2bWa1nFHTy8jyaOCfkE=
Subject: Ping: [PATCH] x86emul: de-duplicate scatters to the same linear
 address
From: Jan Beulich <jbeulich@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>
References: <6064996d-943f-1be3-9bfd-e872149da2a1@suse.com>
Message-ID: <a9739f4a-cfc6-673c-f2a7-a21723eda199@suse.com>
Date: Fri, 5 Feb 2021 11:10:21 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <6064996d-943f-1be3-9bfd-e872149da2a1@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 10.11.2020 14:26, Jan Beulich wrote:
> The SDM specifically allows for earlier writes to fully overlapping
> ranges to be dropped. If a guest did so, hvmemul_phys_mmio_access()
> would crash it if varying data was written to the same address. Detect
> overlaps early, as doing so in hvmemul_{linear,phys}_mmio_access() would
> be quite a bit more difficult.
> 
> Note that due to cache slot use being linear address based, there's no
> similar issue with multiple writes to the same physical address (mapped
> through different linear addresses).
> 
> Since this requires an adjustment to the EVEX Disp8 scaling test,
> correct a comment there at the same time.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> TBD: The SDM isn't entirely unambiguous about the faulting behavior in
>      this case: If a fault would need delivering on the earlier slot
>      despite the write getting squashed, we'd have to call ops->write()
>      with size set to zero for the earlier write(s). However,
>      hvm/emulate.c's handling of zero-byte accesses extends only to the
>      virtual-to-linear address conversions (and raising of involved
>      faults), so in order to also observe #PF changes to that logic
>      would then also be needed. Can we live with a possible misbehavior
>      here?
> 
> --- a/tools/tests/x86_emulator/evex-disp8.c
> +++ b/tools/tests/x86_emulator/evex-disp8.c
> @@ -647,8 +647,8 @@ static const uint16_t imm0f[16] = {
>  static struct x86_emulate_ops emulops;
>  
>  /*
> - * Access tracking (by granular) is used on the first 64 bytes of address
> - * space. Instructions get encode with a raw Disp8 value of 1, which then
> + * Access tracking (byte granular) is used on the first bytes of address
> + * space. Instructions get encoded with a raw Disp8 value of 1, which then
>   * gets scaled accordingly. Hence accesses below the address <scaling factor>
>   * as well as at or above 2 * <scaling factor> are indications of bugs. To
>   * aid diagnosis / debugging, track all accesses below 3 * <scaling factor>.
> @@ -804,6 +804,31 @@ static void test_one(const struct test *
>  
>      asm volatile ( "kxnorw %k1, %k1, %k1" );
>      asm volatile ( "vxorps %xmm4, %xmm4, %xmm4" );
> +    if ( sg && (test->opc | 3) == 0xa3 )
> +    {
> +        /*
> +         * Non-prefetch scatters need handling specially, due to the
> +         * overlapped write elimination done by the emulator. The order of
> +         * indexes chosen is somewhat arbitrary, within the constraints
> +         * imposed by the various different uses.
> +         */
> +        static const struct {
> +            int32_t _[16];
> +        } off32 = {{ 1, 0, 2, 3, 7, 6, 5, 4, 8, 10, 12, 14, 9, 11, 13, 15 }};
> +
> +        if ( test->opc & 1 )
> +        {
> +            asm volatile ( "vpmovsxdq %0, %%zmm4" :: "m" (off32) );
> +            vsz >>= !evex.w;
> +        }
> +        else
> +            asm volatile ( "vmovdqu32 %0, %%zmm4" :: "m" (off32) );
> +
> +        /* Scale by element size. */
> +        instr[6] |= (evex.w | 2) << 6;
> +
> +        sg = false;
> +    }
>  
>      ctxt->regs->eip = (unsigned long)&instr[0];
>      ctxt->regs->edx = 0;
> --- a/xen/arch/x86/x86_emulate/x86_emulate.c
> +++ b/xen/arch/x86/x86_emulate/x86_emulate.c
> @@ -10032,25 +10032,47 @@ x86_emulate(
>  
>          for ( i = 0; op_mask; ++i )
>          {
> -            long idx = b & 1 ? index.qw[i] : index.dw[i];
> +            long idx = (b & 1 ? index.qw[i]
> +                              : index.dw[i]) * (1 << state->sib_scale);
> +            unsigned long offs = truncate_ea(ea.mem.off + idx);
> +            unsigned int j;
>  
>              if ( !(op_mask & (1 << i)) )
>                  continue;
>  
> -            rc = ops->write(ea.mem.seg,
> -                            truncate_ea(ea.mem.off +
> -                                        idx * (1 << state->sib_scale)),
> -                            (void *)mmvalp + i * op_bytes, op_bytes, ctxt);
> -            if ( rc != X86EMUL_OKAY )
> +            /*
> +             * hvmemul_linear_mmio_access() will find a cache slot based on
> +             * linear address. hvmemul_phys_mmio_access() will crash the
> +             * domain if observing varying data getting written to the same
> +             * address within a cache slot. Utilize that squashing earlier
> +             * writes to fully overlapping addresses is permitted by the spec.
> +             */
> +            for ( j = i + 1; j < n; ++j )
>              {
> -                /* See comment in gather emulation. */
> -                if ( rc != X86EMUL_EXCEPTION && done )
> -                    rc = X86EMUL_RETRY;
> -                break;
> +                long idx2 = (b & 1 ? index.qw[j]
> +                                   : index.dw[j]) * (1 << state->sib_scale);
> +
> +                if ( (op_mask & (1 << j)) &&
> +                     truncate_ea(ea.mem.off + idx2) == offs )
> +                    break;
> +            }
> +
> +            if ( j >= n )
> +            {
> +                rc = ops->write(ea.mem.seg, offs,
> +                                (void *)mmvalp + i * op_bytes, op_bytes, ctxt);
> +                if ( rc != X86EMUL_OKAY )
> +                {
> +                    /* See comment in gather emulation. */
> +                    if ( rc != X86EMUL_EXCEPTION && done )
> +                        rc = X86EMUL_RETRY;
> +                    break;
> +                }
> +
> +                done = true;
>              }
>  
>              op_mask &= ~(1 << i);
> -            done = true;
>  
>  #ifdef __XEN__
>              if ( op_mask && local_events_need_delivery() )
> 



From xen-devel-bounces@lists.xenproject.org Fri Feb 05 10:14:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 10:14:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81572.150838 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7y7q-0005WR-CT; Fri, 05 Feb 2021 10:14:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81572.150838; Fri, 05 Feb 2021 10:14:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7y7q-0005WK-97; Fri, 05 Feb 2021 10:14:06 +0000
Received: by outflank-mailman (input) for mailman id 81572;
 Fri, 05 Feb 2021 10:14:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IalI=HH=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l7y7p-0005WE-GC
 for xen-devel@lists.xenproject.org; Fri, 05 Feb 2021 10:14:05 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6a71c240-d41f-4c0f-8132-69e3fe14b935;
 Fri, 05 Feb 2021 10:14:04 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 902D0AD2B;
 Fri,  5 Feb 2021 10:14:03 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6a71c240-d41f-4c0f-8132-69e3fe14b935
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612520043; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=uGatrluN+l13sHAAB/WmEAoA7zhX5k2q+5XeoUQvkWc=;
	b=X3N8t2oZvb00UsOlBg1cqP+RCYSoh0WSNpDVHdnipEZTIfKkGGYNmJ+eokO5VgSs3T/sbq
	6d7byI6V4hhVGpmXosNs3Rh3gvyeTsQ/t0FpAF3cw1YfB+f2NqxVX4AP6j+e0g+GN2XhAK
	8let0xOBxfa8yRB6zEXybG5665Wv17o=
From: Jan Beulich <jbeulich@suse.com>
Subject: =?UTF-8?Q?Ping=c2=b2=3a_=5bPATCH=5d_x86/PV=3a_conditionally_avoid_r?=
 =?UTF-8?Q?aising_=23GP_for_early_guest_MSR_accesses?=
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <7e69db81-cee7-3c7b-be64-4f5ff50fbe9c@suse.com>
 <cf814663-0319-6a30-f3a2-dc43432eedb1@citrix.com>
 <cf24a63e-afe9-be6a-3ab9-cc65e19a7a0f@suse.com>
Message-ID: <aad25a24-b598-4c35-05f0-80f39152c11e@suse.com>
Date: Fri, 5 Feb 2021 11:14:03 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <cf24a63e-afe9-be6a-3ab9-cc65e19a7a0f@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

(simply re-sending what was sent over 2 months ago)

On 04.11.2020 11:50, Jan Beulich wrote:
> On 03.11.2020 18:31, Andrew Cooper wrote:
>> On 03/11/2020 17:06, Jan Beulich wrote:
>>> Prior to 4.15 Linux, when running in PV mode, did not install a #GP
>>> handler early enough to cover for example the rdmsrl_safe() of
>>> MSR_K8_TSEG_ADDR in bsp_init_amd() (not to speak of the unguarded read
>>> of MSR_K7_HWCR later in the same function). The respective change
>>> (42b3a4cb5609 "x86/xen: Support early interrupts in xen pv guests") was
>>> backported to 4.14, but no further - presumably since it wasn't really
>>> easy because of other dependencies.
>>>
>>> Therefore, to prevent our change in the handling of guest MSR accesses
>>> to render PV Linux 4.13 and older unusable on at least AMD systems, make
>>> the raising of #GP on these paths conditional upon the guest having
>>> installed a handler. Producing zero for reads and discarding writes
>>> isn't necessarily correct and may trip code trying to detect presence of
>>> MSRs early, but since such detection logic won't work without a #GP
>>> handler anyway, this ought to be a fair workaround.
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>
>> I appreciate that we probably have to do something, but I don't think
>> this is a wise move.
> 
> I wouldn't call it wise either, but I'm afraid something along
> these lines is necessary.
> 
>> Linux is fundamentally buggy.  It is deliberately looking for a
>> potential #GP fault given its use of rdmsrl_safe().  The reason this bug
>> stayed hidden for so long was as a consequence of Xen's inappropriate
>> MSR handling for guests, and the reasons for changing Xen's behaviour
>> still stand.
> 
> I agree.
> 
>> This change, in particular, does not apply to any explicitly handled
>> MSRs, and therefore is not a comprehensive fix.
> 
> But it's intentional that this deals with the situation in a
> generic way, not on a per-MSR basis. If we did as you suggest
> further down, we'd have to audit all Linux versions up to 4.14
> for similar issues with other MSRs. I don't think this would
> be a practical thing to do, and I also don't think that leaving
> things as they are until we have concrete reports of problems
> is a viable option either.
> 
> Adding explicit handling for the two offending MSRs (and any
> possible further ones we discover) would imo only be to avoid
> issuing the respective log messages.
> 
>>  Nor is it robust to
>> someone adding code to explicitly handle the impacted MSRs at a later
>> date (which we are likely to need to do for HWCR), and which would
>> reintroduce this failure to boot.
> 
> I'm afraid I don't understand. Looking at the two functions
> the patch alters, only X86EMUL_OKAY is used in return statements
> other than the final one. If this model is to be followed by
> future additions (which I think it ought to be; perhaps we
> should add comments to this effect), the code introduced here
> will take care of the situation nevertheless.
> 
>> We should have the impacted MSRs handled explicitly, with a note stating
>> this was a bug in Linux 4.14 and older.  We already have workarounds for
>> similar bugs in Windows, and it also gives us a timeline to eventually
>> removing support for obsolete workarounds, rather than having a "now and
>> in the future, we'll explicitly tolerate broken PV behaviour for one bug
>> back in ancient linux".
> 
> Comparing with Windows isn't very helpful; the patch here is
> specifically about PV, and would help other OSes as well in
> case they would have missed setting up exceptions early in
> just the PV-on-Xen case. For the HVM case I'd indeed rather
> see us go the route we've gone for Windows, if need be.

As can be seen from this reply, we're not in agreement on what to
do here. But we need to do something. I'm not sure how to get
discussions like this one unstuck ...

Besides this suggestion of yours I also continue to have
trouble seeing what good it will do to record an exception to
inject into a guest when we know it didn't install a handler
yet.

Jan

> There's one adjustment to the logic here that I've been
> considering, but I'm still undecided due to the downsides:
> Without exposing the value, we could make the decision to zap
> the #GP dependent upon us being able to read the MSR.
> 
> The other possible adjustment would be to avoid issuing two
> log messages for the same operation (affecting debug builds
> only). But the code structure (which isn't really consistent
> about when the pre-existing message would get issued)
> doesn't directly lend itself to such an adjustment without
> altering the behavior for some of the MSRs explicitly
> handled.
> 
> As a tangent, while discussing this situation, please let's
> not forget about this code in Linux:
> 
> static u64 xen_read_msr(unsigned int msr)
> {
> 	/*
> 	 * This will silently swallow a #GP from RDMSR.  It may be worth
> 	 * changing that.
> 	 */
> 	int err;
> 
> 	return xen_read_msr_safe(msr, &err);
> }
> 
> static void xen_write_msr(unsigned int msr, unsigned low, unsigned high)
> {
> 	/*
> 	 * This will silently swallow a #GP from WRMSR.  It may be worth
> 	 * changing that.
> 	 */
> 	xen_write_msr_safe(msr, low, high);
> }
> 
> Imo this "silent swallowing" has always been the wrong thing
> to do, and hence ought to be dropped. Of course right now it
> saves the kernel from dying on the HWCR read.
> 
> Jan
> 



From xen-devel-bounces@lists.xenproject.org Fri Feb 05 10:33:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 10:33:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81580.150853 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7yQQ-0007dY-1v; Fri, 05 Feb 2021 10:33:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81580.150853; Fri, 05 Feb 2021 10:33:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7yQP-0007dR-V7; Fri, 05 Feb 2021 10:33:17 +0000
Received: by outflank-mailman (input) for mailman id 81580;
 Fri, 05 Feb 2021 10:33:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jdnZ=HH=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l7yQP-0007dM-0N
 for xen-devel@lists.xenproject.org; Fri, 05 Feb 2021 10:33:17 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4313d5ea-b0e1-402b-8c21-fd777791c127;
 Fri, 05 Feb 2021 10:33:14 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4313d5ea-b0e1-402b-8c21-fd777791c127
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612521194;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=Yk+9SVcFZXJKfIzm15IKOvJ062xTXMZ7E/og3oxBvQo=;
  b=f/l8v8Me72TwM+2jAu5aQ9DAWQkPOV16aV0CwDs0S5lCB9rjAltV888U
   mvDqIo84goT8xyIWrIdco5X4Yj9Sz/2A2/9YR+5FJ2mwvTzL5mVwFdAHR
   +PxqY1IPpMVBdkLzynhUQ94eUvItYZXUR1Bzz0QBgJ6NLdSSi99tPZaVJ
   M=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: ttwBhKGBOe4P8Evt0nYvbhIHXX9QqvJIQf7DQ1vOqR8cSoQ33Vj3qkJFoQIztPOFPNJHQCcU6+
 PTaWFY8V2EsEPInI+TLsY+uH9qZW3RBps8QCf4AjcHITIw0KDZeCbxWcMzBLz80/erIM+goTqU
 HXPzT1ibwZeFbAejPIh4gnF2PP202S67MbszXQqAVzvRjkwuCqppUI9ZXZgrZnig91bhHvky4G
 2l82H4WSMmB9IUGbID2Z24mQFGIouSzXUWinPN/BowCwaiwFFocj3BLaHFpYWaWLvejllcYRuU
 0Vs=
X-SBRS: 5.2
X-MesageID: 36833515
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,154,1610427600"; 
   d="scan'208";a="36833515"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cSDF0CvYPnhN0EybmxeT+CmH1EL27x7EZaFDFlVfUPZoQhNh2WnFQsHuwtXGtIQQ8L6m/qqA4h8sNKlhxXIDqhk6F94qOdRuWukXzwJ3qOs4ZJNQBw1gDlxTLxEtMPEVCc1saM1Sv7FoRupjXAsDrjHLbxnEADwrO5zaBsRHa87yobQ7baOehla6MYG4mUbPkJhxMbRGg7/8JYAF62KRQq4XPaAWlOfdZhSoNpXqwIUp1/fSNuxjmTw2ZLvRiZhRjqiZJf/fgWrerwybNXDQd2qqAbkpSU6IZPxrAZK+z/cNEYccdL8dP5dGUapMyYr3KP+VjjbpceLIDOqJ3ta/xw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5t1y/vW/EGERf7/B9L7HaSU5tqPV3o8mDIH/V5sGE/U=;
 b=P0FBsMOuduL+RwdNsWv+rA7weKhzEkQeGakjik6zeJbzMzEQvX2QUQufrlD6n2V0lC883hTKbImxl9xb/ec6S7+X2LiIwyUavfP0skmSc1nzoRvibU3V+BW5Lj4O5TyTNT2ocPJQeQLrtBprSzV4MRlIWR9PQZzB/H2lYnQ74EDNnrhOF1tlNlcQid66SXaiEAqjR4VwlifkrNHow60hscxoBQwufhgPC7brSg2S5RzgQ+lm7Y/irY3hIurUsQcYajTLM5bED0aEBA2YZJrx+qeDY28npczub9FGM5718Jwyu1g6xDoIom8TfoxAn4hx3j73jBIAcBWCO+mIkkQ93A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5t1y/vW/EGERf7/B9L7HaSU5tqPV3o8mDIH/V5sGE/U=;
 b=vcPdSJflPFdNZag6D1sauLwsQFxgHXW+XEhUXN3FQcW55gCFjnpjjlxNGejni8DzznnO0+OTqXO/MsPnUSzPvUZx9dG0mCrpWj5cWlFLmosDB+gcfL6sUfaqNG2D62DDkO20sRcK4svHwGWOw43kgyVLJoVvvOc2ChFLiRMdUMM=
Subject: Re: [PATCH] x86/EFI: work around GNU ld 2.36 issue
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, Ian Jackson <iwj@xenproject.org>
References: <e6d59277-35b2-e7df-0e68-a794c8855ac0@suse.com>
 <8450b84d-93f2-7568-362e-af27954e5157@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <881b97a1-a4e3-f213-f81b-ac07ca46ed27@citrix.com>
Date: Fri, 5 Feb 2021 10:33:01 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
In-Reply-To: <8450b84d-93f2-7568-362e-af27954e5157@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0315.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a4::15) To BYAPR03MB4728.namprd03.prod.outlook.com
 (2603:10b6:a03:13a::24)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: acca0212-23f5-47bc-563c-08d8c9c16d53
X-MS-TrafficTypeDiagnostic: BYAPR03MB4726:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB4726666B40928DDC1F7ABD79BAB29@BYAPR03MB4726.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: Tc8FHiTMl1fAM/DG1RHMDlA2QlewlpkaWfzhoPgNLUJWiclGG4PymoIIPtW+Eb6X050sFB/Rmn3M1ANvma3Knn51wDhiOk2NBe1MbdaEXBb+tSeGtqyjHlnws6zukTiKhmV1iqzkxGrzZerc/xS2iLvI/kiAPUson50fdtCO8j2v00rDtr0fdiJ0QlBG6bwGpXUfpJ+uRtCG3Iv9bFJoM+OC7+Bqtvc5VqSDuL77yxdSVpf1DfrO0XKUXP0FIWsMvatLe9PeSvX2b7KbEJbxyKJ+aDXc0Ea+CSAs6AkoKSS3Wn/9XGQAP6r98dyRR54jvGtQ/8a1ESmyJ9jDwp48pQIjJi7x6FMDlmw4dJRGO0nZXzvRbBbSMZKrXg7FB3boS5j9o0JZ5gXEehBEEu/pAVywn9etl18PNe3oOR4pE6dqzoIB/qItpuhguyNgR/KZcRy5c2zFDou1+cUg77XP2oYI2yPQ37UwlalqkDxbln2Dg7lLIOvgW23sZcmrraWnt1Xawpxvv2ZBZx0OGqGp8y4D2kgard5iXDjEdQ6pS1KAvSL8yXj4ZWvpBOSVJk2QvlHh19p8kVvcWgL2JG5C5wRshmez+4xyz/ZqSKHYXs4=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB4728.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(376002)(396003)(346002)(366004)(39850400004)(66556008)(66476007)(83380400001)(66946007)(5660300002)(478600001)(16526019)(4326008)(186003)(2906002)(31686004)(86362001)(8936002)(36756003)(26005)(16576012)(956004)(54906003)(8676002)(316002)(2616005)(53546011)(6486002)(31696002)(6666004)(110136005)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?cGRuTWZzb0ZxZ3g1UkpQYjNVczRkN2Rvb3dCQzFWNzArUks0UlhhVUJFejQy?=
 =?utf-8?B?UzlodGVLMXE2aGhmMlhRK1M5QjZsYjU2SmtQN0psUlE2SEFMUlA2VTBDNEli?=
 =?utf-8?B?UHFPM0NaWk5Ra0QvNy81cll6MisvM3lOaEVSZmNGd0hrWjdnQzFxSHJHY0hr?=
 =?utf-8?B?QlhRMzFoVmQzMHJVeHVDUjJSR1dvaXQydDhTd0JxOE9vVmVkZmdPNzU1UUN3?=
 =?utf-8?B?S1NLR3lvazlqSVV0cHU5MXBVSUt3czVyTTdDOGxTV1Rac2RkbG9jZTkxMjl4?=
 =?utf-8?B?Siszb0VFSGRkb3F4STB4OXFwMkNxZDJETm16VXM1OVJQZ3E4RXFJUkE1d2py?=
 =?utf-8?B?L2xEclRkV0p1dUJjSzNpWUY4Tkd0OGZrZ2RacDhETS9scXJ1aTEvQ0liWnR6?=
 =?utf-8?B?cTd1eG1XK1JIcnhUeGJzYUhXSytZUUhMWHhQQnlOaDJyOElVcUVUVktDRUtr?=
 =?utf-8?B?N0x3OFJHTWFBVTVPSkxUbkhaM2xzbW5zeSsxZFNiSVl2TjVFQUFMYzRGeXNl?=
 =?utf-8?B?M1F1aWNTbU1YYTJpMWRCSXc5V0Y5dTRZTDhMVHlleWhDTGlzQk9qdEhZSzFs?=
 =?utf-8?B?VnlkN2U2YTNjdE5Sb2hrMGlPN2Q4WGYyRUtURnVnYnR3V01iN2pkbXBPMkl2?=
 =?utf-8?B?WW9Ha2xJSk1mWkZ6N0VDdTdmd2I0MlFSeUlmdXNZN1l6NjVUaGxrSTNxK01H?=
 =?utf-8?B?MzQxVnh2OHh2MXo2TVFiSk9HT1UzOW1JNUdhdG5McFBrOEZwYXFYTkM1Nk5X?=
 =?utf-8?B?MXgrTjZpNForekdwZ2x4VVNYN3I4NEhHK1NrMno2djBhMVM1V3JYblZ6QUVH?=
 =?utf-8?B?TkR6U2QzM0g5ZVNxazZ1SW1YcS9WQlNqRDVtMk9jVFMxZ1JJVlpYMlFSVVdq?=
 =?utf-8?B?ZUhsWTMya1RVVzB6cHllUnEwekhWMjZDMEtGaDRDRU9zWUswZWo0V1llbHZG?=
 =?utf-8?B?cDRlUFo4V1VOR0pMLzE2N0hPRTVSb0VGLzFvelRXS3VmeGszOXI3N1FGMG84?=
 =?utf-8?B?U0k0Z0pMa0pHVE1HYnIvWGlkMGQ2MTJzZDl5YjlkMDJQQVphQXh4dnRzb0Qw?=
 =?utf-8?B?SkVkdDQzWnkxUEM2ZGRHdWU1ZXA4TG04cW15RUw2eFpSTUVXZ0tMdEsweUN5?=
 =?utf-8?B?L0JnaVFnT0F6Rm1FdGpYTjNSaWw3MWFENW1NVjNzV0xPSHQwbUNlT3lXUzdx?=
 =?utf-8?B?UlJGa0Z3elJzMlNrVE9DeXZQTzllVlZYVGFOdTArNlRSU29rNjVHMzBVcDVB?=
 =?utf-8?B?ZGdLSDUwVDJxSU4vdHlqbmdhQ25Rdy92cmQxeGYzbGlxQWY1a2FhRytoWXA5?=
 =?utf-8?B?ZjZ3cEFyZjJjTjhDc28rSkhxTEYxcnArU3laVmpxb1VGQUNINExOZjhzd0Zv?=
 =?utf-8?B?WWFUa05ENVdUTnFOdFhVR1BFbzlvODJEYzkvd2U0VXQ5NjRGUDFNOTdMOTJT?=
 =?utf-8?B?cDk5RTY4MDFxTG9VRzl5OWY0UUhISmJUNStkempBYVU4T0ZkaFl4a3ltSEJJ?=
 =?utf-8?B?dkZoVVhtZGVmWGNPVE13aU1mNHpZQ2ZCUk9rWEdGN081Z09pbmtmNkQrcS9x?=
 =?utf-8?B?V29jaXJIblQrK3JZV1VyNUcrN1BHM2hsV3Z1cXFiRTE5NGR4dE81NTdxS0Ey?=
 =?utf-8?B?ZVZYdGJ4VGNsV2JYQ0txU2hpUldNKzNtelEvWVRIc2libUFxMExPYU50K0NR?=
 =?utf-8?B?MitrZTNXV256SnJZLzl1cDZqcGE0NzEwTUkvcktsVU5uZDQ3cyt2MWVCc1VU?=
 =?utf-8?Q?02QIPDOSMcdbvRUPfFXOq7XCjM3+uo975XqHl0D?=
X-MS-Exchange-CrossTenant-Network-Message-Id: acca0212-23f5-47bc-563c-08d8c9c16d53
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB4728.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Feb 2021 10:33:07.4503
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: t0MWxunAkLKhfyvBg7BxUVAuRhLLRWALF2X9omH26uR9zm1Gtw/rR52FRDoRg5LYdGoiKy444ZuPednxRx/fA5VYHL7K6unK5Y1KUjJ2fqQ=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4726
X-OriginatorOrg: citrix.com

On 05/02/2021 08:11, Jan Beulich wrote:
> On 04.02.2021 14:38, Jan Beulich wrote:
>> Our linker capability check fails with the recent binutils release's ld:
>>
>> .../check.o:(.debug_aranges+0x6): relocation truncated to fit: R_X86_64_32 against `.debug_info'
>> .../check.o:(.debug_info+0x6): relocation truncated to fit: R_X86_64_32 against `.debug_abbrev'
>> .../check.o:(.debug_info+0xc): relocation truncated to fit: R_X86_64_32 against `.debug_str'+76
>> .../check.o:(.debug_info+0x11): relocation truncated to fit: R_X86_64_32 against `.debug_str'+d
>> .../check.o:(.debug_info+0x15): relocation truncated to fit: R_X86_64_32 against `.debug_str'+2b
>> .../check.o:(.debug_info+0x29): relocation truncated to fit: R_X86_64_32 against `.debug_line'
>> .../check.o:(.debug_info+0x30): relocation truncated to fit: R_X86_64_32 against `.debug_str'+19
>> .../check.o:(.debug_info+0x37): relocation truncated to fit: R_X86_64_32 against `.debug_str'+71
>> .../check.o:(.debug_info+0x3e): relocation truncated to fit: R_X86_64_32 against `.debug_str'
>> .../check.o:(.debug_info+0x45): relocation truncated to fit: R_X86_64_32 against `.debug_str'+5e
>> .../check.o:(.debug_info+0x4c): additional relocation overflows omitted from the output
>>
>> Tell the linker to strip debug info as a workaround. Oddly enough debug
>> info has been getting stripped when linking the actual xen.efi, without
>> me being able to tell why this would be.
> I've changed this to
>
> "Tell the linker to strip debug info as a workaround. Debug info has been
>  getting stripped already anyway when linking the actual xen.efi."
>
> as I noticed I did look for -S only yesterday, while we have
>
> EFI_LDFLAGS += --image-base=$(1) --stack=0,0 --heap=0,0 --strip-debug

So, in terms of the bugfix, Acked-by: Andrew Cooper
<andrew.cooper3@citrix.com>

However, we ought to be keeping the debug symbols for xen-syms.efi (or
equivalent), seeing as there is logic included here which isn't in the
regular xen-syms.

~Andrew
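[A sketch of what the workaround amounts to; this is not the actual Makefile
rule, and the flags shown (emulation and subsystem values in particular) are
illustrative assumptions rather than taken from the tree:]

```shell
# Strip debug info when linking the throwaway linker-capability-check
# binary, matching what EFI_LDFLAGS already does for the real xen.efi
# link, so GNU ld 2.36 doesn't fail on R_X86_64_32 relocations against
# the .debug_* sections of check.o.
ld -mi386pep --subsystem=10 --strip-debug -o check.efi check.o
```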


From xen-devel-bounces@lists.xenproject.org Fri Feb 05 10:42:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 10:42:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81587.150868 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7yYq-0000Kg-V1; Fri, 05 Feb 2021 10:42:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81587.150868; Fri, 05 Feb 2021 10:42:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7yYq-0000KZ-RT; Fri, 05 Feb 2021 10:42:00 +0000
Received: by outflank-mailman (input) for mailman id 81587;
 Fri, 05 Feb 2021 10:41:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jdnZ=HH=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l7yYp-0000KU-TD
 for xen-devel@lists.xenproject.org; Fri, 05 Feb 2021 10:41:59 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8c10c0c0-5f1a-41ec-b246-222c4485329b;
 Fri, 05 Feb 2021 10:41:58 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8c10c0c0-5f1a-41ec-b246-222c4485329b
Subject: Re: [PATCH] x86emul: de-duplicate scatters to the same linear address
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, Roger Pau Monné <roger.pau@citrix.com>
References: <6064996d-943f-1be3-9bfd-e872149da2a1@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <fcf7e123-3cdd-fd4d-6c58-36facb26a68e@citrix.com>
Date: Fri, 5 Feb 2021 10:41:33 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
In-Reply-To: <6064996d-943f-1be3-9bfd-e872149da2a1@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0053.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:60::17) To BYAPR03MB4728.namprd03.prod.outlook.com
 (2603:10b6:a03:13a::24)
MIME-Version: 1.0

On 10/11/2020 13:26, Jan Beulich wrote:
> The SDM specifically allows for earlier writes to fully overlapping
> ranges to be dropped. If a guest did so, hvmemul_phys_mmio_access()
> would crash it if varying data was written to the same address. Detect
> overlaps early, as doing so in hvmemul_{linear,phys}_mmio_access() would
> be quite a bit more difficult.

Are you saying that there is currently a bug if a guest does encode such
an instruction, and we emulate it?

>
> Note that due to cache slot use being linear address based, there's no
> similar issue with multiple writes to the same physical address (mapped
> through different linear addresses).
>
> Since this requires an adjustment to the EVEX Disp8 scaling test,
> correct a comment there at the same time.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> TBD: The SDM isn't entirely unambiguous about the faulting behavior in
>      this case: If a fault would need delivering on the earlier slot
>      despite the write getting squashed, we'd have to call ops->write()
>      with size set to zero for the earlier write(s). However,
>      hvm/emulate.c's handling of zero-byte accesses extends only to the
>      virtual-to-linear address conversions (and raising of involved
>      faults), so in order to also observe #PF changes to that logic
>      would then also be needed. Can we live with a possible misbehavior
>      here?

Do you have a chapter/section reference?

~Andrew
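[Editor's illustration, not part of the original mail: the de-duplication described in the commit message — the SDM allows earlier scatter element writes to a fully overlapped range to be dropped — can be sketched roughly as below. Python purely for illustration; the names and the (addr, size, data) layout are invented, this is not the x86emul code.]

```python
def dedup_scatter(writes):
    """Drop earlier writes fully overlapped by a later write.

    writes: list of (linear_addr, size, data) tuples, in program order.
    Returns the writes to actually perform; for each fully-overlapping
    range only the last write survives, so varying data can no longer
    reach hvmemul_phys_mmio_access() for the same linear address.
    """
    keep = []
    for i, (addr, size, data) in enumerate(writes):
        # Keep this element only if no later element fully covers it.
        covered = any(
            a2 <= addr and addr + size <= a2 + s2
            for (a2, s2, _) in writes[i + 1:]
        )
        if not covered:
            keep.append((addr, size, data))
    return keep


# Two 4-byte elements hitting the same linear address: only the later
# one is performed.
print(dedup_scatter([(0x1000, 4, b"AAAA"), (0x2000, 4, b"BBBB"),
                     (0x1000, 4, b"CCCC")]))
```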


From xen-devel-bounces@lists.xenproject.org Fri Feb 05 10:56:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 10:56:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81590.150879 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7ymi-0001WG-C3; Fri, 05 Feb 2021 10:56:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81590.150879; Fri, 05 Feb 2021 10:56:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7ymi-0001W9-8m; Fri, 05 Feb 2021 10:56:20 +0000
Received: by outflank-mailman (input) for mailman id 81590;
 Fri, 05 Feb 2021 10:56:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=v9N4=HH=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1l7ymg-0001W4-FE
 for xen-devel@lists.xenproject.org; Fri, 05 Feb 2021 10:56:18 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 49da6690-2fe2-4ab0-a6ae-cfc0c7214356;
 Fri, 05 Feb 2021 10:56:17 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 75FC0AD29;
 Fri,  5 Feb 2021 10:56:16 +0000 (UTC)
X-Inumbo-ID: 49da6690-2fe2-4ab0-a6ae-cfc0c7214356
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: Ping²: [PATCH] x86/PV: conditionally avoid raising #GP for early guest MSR accesses
To: Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, Roger Pau Monné <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <7e69db81-cee7-3c7b-be64-4f5ff50fbe9c@suse.com>
 <cf814663-0319-6a30-f3a2-dc43432eedb1@citrix.com>
 <cf24a63e-afe9-be6a-3ab9-cc65e19a7a0f@suse.com>
 <aad25a24-b598-4c35-05f0-80f39152c11e@suse.com>
From: Jürgen Groß <jgross@suse.com>
Message-ID: <d4be9aea-0c14-dac6-5fb6-431f7899f075@suse.com>
Date: Fri, 5 Feb 2021 11:56:15 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <aad25a24-b598-4c35-05f0-80f39152c11e@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="MY5rErkQlgpO6WqO6NCTFEF6atPO4AO6p"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--MY5rErkQlgpO6WqO6NCTFEF6atPO4AO6p
Content-Type: multipart/mixed; boundary="05gFgNqyiCiOA0LbuBztRQsPeJfa7m44z";
 protected-headers="v1"
From: Jürgen Groß <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, Roger Pau Monné <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Message-ID: <d4be9aea-0c14-dac6-5fb6-431f7899f075@suse.com>
Subject: Re: Ping²: [PATCH] x86/PV: conditionally avoid raising #GP for early guest MSR accesses
References: <7e69db81-cee7-3c7b-be64-4f5ff50fbe9c@suse.com>
 <cf814663-0319-6a30-f3a2-dc43432eedb1@citrix.com>
 <cf24a63e-afe9-be6a-3ab9-cc65e19a7a0f@suse.com>
 <aad25a24-b598-4c35-05f0-80f39152c11e@suse.com>
In-Reply-To: <aad25a24-b598-4c35-05f0-80f39152c11e@suse.com>

--05gFgNqyiCiOA0LbuBztRQsPeJfa7m44z
Content-Type: multipart/mixed;
 boundary="------------4F1780995A3F4A68DDE27EB8"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------4F1780995A3F4A68DDE27EB8
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 05.02.21 11:14, Jan Beulich wrote:
> (simply re-sending what was sent over 2 months ago)
>
> On 04.11.2020 11:50, Jan Beulich wrote:
>> On 03.11.2020 18:31, Andrew Cooper wrote:
>>> On 03/11/2020 17:06, Jan Beulich wrote:
>>>> Prior to 4.15 Linux, when running in PV mode, did not install a #GP
>>>> handler early enough to cover for example the rdmsrl_safe() of
>>>> MSR_K8_TSEG_ADDR in bsp_init_amd() (not to speak of the unguarded read
>>>> of MSR_K7_HWCR later in the same function). The respective change
>>>> (42b3a4cb5609 "x86/xen: Support early interrupts in xen pv guests") was
>>>> backported to 4.14, but no further - presumably since it wasn't really
>>>> easy because of other dependencies.
>>>>
>>>> Therefore, to prevent our change in the handling of guest MSR accesses
>>>> to render PV Linux 4.13 and older unusable on at least AMD systems, make
>>>> the raising of #GP on these paths conditional upon the guest having
>>>> installed a handler. Producing zero for reads and discarding writes
>>>> isn't necessarily correct and may trip code trying to detect presence of
>>>> MSRs early, but since such detection logic won't work without a #GP
>>>> handler anyway, this ought to be a fair workaround.
>>>>
>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>
>>> I appreciate that we probably have to do something, but I don't think
>>> this is a wise move.
>>
>> I wouldn't call it wise either, but I'm afraid something along
>> these lines is necessary.
>>
>>> Linux is fundamentally buggy. It is deliberately looking for a
>>> potential #GP fault given its use of rdmsrl_safe(). The reason this bug
>>> stayed hidden for so long was as a consequence of Xen's inappropriate
>>> MSR handling for guests, and the reasons for changing Xen's behaviour
>>> still stand.
>>
>> I agree.
>>
>>> This change, in particular, does not apply to any explicitly handled
>>> MSRs, and therefore is not a comprehensive fix.
>>
>> But it's intentional that this deals with the situation in a
>> generic way, not on a per-MSR basis. If we did as you suggest
>> further down, we'd have to audit all Linux versions up to 4.14
>> for similar issues with other MSRs. I don't think this would
>> be a practical thing to do, and I also don't think that leaving
>> things as they are until we have concrete reports of problems
>> is a viable option either.
>>
>> Adding explicit handling for the two offending MSRs (and any
>> possible further ones we discover) would imo only be to avoid
>> issuing the respective log messages.
>>
>>> Nor is it robust to
>>> someone adding code to explicitly handle the impacted MSRs at a later
>>> date (which we are likely to need to do for HWCR), and which would
>>> reintroduce this failure to boot.
>>
>> I'm afraid I don't understand. Looking at the two functions
>> the patch alters, only X86EMUL_OKAY is used in return statements
>> other than the final one. If this model is to be followed by
>> future additions (which I think it ought to be; perhaps we
>> should add comments to this effect), the code introduced here
>> will take care of the situation nevertheless.
>>
>>> We should have the impacted MSRs handled explicitly, with a note stating
>>> this was a bug in Linux 4.14 and older. We already have workarounds for
>>> similar bugs in Windows, and it also gives us a timeline to eventually
>>> removing support for obsolete workarounds, rather than having a "now and
>>> in the future, we'll explicitly tolerate broken PV behaviour for one bug
>>> back in ancient linux".
>>
>> Comparing with Windows isn't very helpful; the patch here is
>> specifically about PV, and would help other OSes as well in
>> case they would have missed setting up exceptions early in
>> just the PV-on-Xen case. For the HVM case I'd indeed rather
>> see us go the route we've gone for Windows, if need be.
>
> As can be seen from this reply, we're not in agreement what to
> do here. But we need to do something. I'm not sure how to get
> unstuck discussions like this one ...
>
> Besides this suggestion of yours I also continue to have
> trouble seeing what good it will do to record an exception to
> inject into a guest when we know it didn't install a handler
> yet.

As we need to consider backports of processor bug mitigations
in old guests, too, I think we need to have a "catch-all"
fallback.

Not being able to run an old updated guest until we add handling
for a new MSR isn't a viable option IMO.


Juergen
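[Editor's illustration, not part of the original mail: the catch-all fallback under discussion — raise #GP for unhandled MSR accesses only once the guest has installed a #GP handler, otherwise tolerate them — can be sketched roughly as below. A Python sketch under invented names, not Xen's actual MSR code.]

```python
GP_FAULT = "inject #GP"
OKAY = "okay"

def rdmsr_fallback(msr, guest_has_gp_handler, handled_msrs):
    """Catch-all for MSR reads not explicitly emulated.

    Before the guest installs a #GP handler (early boot), reading an
    unknown MSR yields zero instead of crashing it; once a handler is
    installed, the stricter policy applies and #GP is raised.
    """
    if msr in handled_msrs:
        return OKAY, handled_msrs[msr]
    if not guest_has_gp_handler:
        return OKAY, 0          # tolerate: read-as-zero, write-discard
    return GP_FAULT, None       # strict: let the guest's handler decide

# Early boot, no handler yet: unknown MSR reads as zero.
print(rdmsr_fallback(0xC0010112, False, {}))
# Same access after the handler is installed: now faults.
print(rdmsr_fallback(0xC0010112, True, {}))
```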


--------------4F1780995A3F4A68DDE27EB8--

--05gFgNqyiCiOA0LbuBztRQsPeJfa7m44z--


--MY5rErkQlgpO6WqO6NCTFEF6atPO4AO6p--


From xen-devel-bounces@lists.xenproject.org Fri Feb 05 11:10:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 11:10:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81604.150895 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7z0G-0003ex-Ld; Fri, 05 Feb 2021 11:10:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81604.150895; Fri, 05 Feb 2021 11:10:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7z0G-0003eq-IX; Fri, 05 Feb 2021 11:10:20 +0000
Received: by outflank-mailman (input) for mailman id 81604;
 Fri, 05 Feb 2021 11:10:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7z0G-0003ei-6l; Fri, 05 Feb 2021 11:10:20 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7z0F-0004QN-Uw; Fri, 05 Feb 2021 11:10:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7z0F-00020H-JN; Fri, 05 Feb 2021 11:10:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l7z0F-00025s-Iu; Fri, 05 Feb 2021 11:10:19 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159019-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 159019: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=1b6c3a94eca7f12f6a3b65a3e8619d2e2e7c1eb6
X-Osstest-Versions-That:
    ovmf=f6ec1dd34fb6b9757b5ead465ee2ea20c182b0ac
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 05 Feb 2021 11:10:19 +0000

flight 159019 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159019/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 1b6c3a94eca7f12f6a3b65a3e8619d2e2e7c1eb6
baseline version:
 ovmf                 f6ec1dd34fb6b9757b5ead465ee2ea20c182b0ac

Last test of basis   159000  2021-02-04 01:40:54 Z    1 days
Testing same since   159019  2021-02-04 16:41:43 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   f6ec1dd34f..1b6c3a94ec  1b6c3a94eca7f12f6a3b65a3e8619d2e2e7c1eb6 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Feb 05 11:13:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 11:13:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81607.150910 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7z3E-0003mx-5H; Fri, 05 Feb 2021 11:13:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81607.150910; Fri, 05 Feb 2021 11:13:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7z3E-0003mq-21; Fri, 05 Feb 2021 11:13:24 +0000
Received: by outflank-mailman (input) for mailman id 81607;
 Fri, 05 Feb 2021 11:13:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IalI=HH=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l7z3D-0003mk-8r
 for xen-devel@lists.xenproject.org; Fri, 05 Feb 2021 11:13:23 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a8891eb4-8e4b-4047-9566-c7aca688ad49;
 Fri, 05 Feb 2021 11:13:22 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0D97CAD29;
 Fri,  5 Feb 2021 11:13:21 +0000 (UTC)
X-Inumbo-ID: a8891eb4-8e4b-4047-9566-c7aca688ad49
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH] x86/EFI: work around GNU ld 2.36 issue
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <e6d59277-35b2-e7df-0e68-a794c8855ac0@suse.com>
 <8450b84d-93f2-7568-362e-af27954e5157@suse.com>
 <881b97a1-a4e3-f213-f81b-ac07ca46ed27@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <5a02763e-715f-a76d-e926-87a2a4c38449@suse.com>
Date: Fri, 5 Feb 2021 12:13:21 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <881b97a1-a4e3-f213-f81b-ac07ca46ed27@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 05.02.2021 11:33, Andrew Cooper wrote:
> On 05/02/2021 08:11, Jan Beulich wrote:
>> On 04.02.2021 14:38, Jan Beulich wrote:
>>> Our linker capability check fails with the recent binutils release's ld:
>>>
>>> .../check.o:(.debug_aranges+0x6): relocation truncated to fit: R_X86_64_32 against `.debug_info'
>>> .../check.o:(.debug_info+0x6): relocation truncated to fit: R_X86_64_32 against `.debug_abbrev'
>>> .../check.o:(.debug_info+0xc): relocation truncated to fit: R_X86_64_32 against `.debug_str'+76
>>> .../check.o:(.debug_info+0x11): relocation truncated to fit: R_X86_64_32 against `.debug_str'+d
>>> .../check.o:(.debug_info+0x15): relocation truncated to fit: R_X86_64_32 against `.debug_str'+2b
>>> .../check.o:(.debug_info+0x29): relocation truncated to fit: R_X86_64_32 against `.debug_line'
>>> .../check.o:(.debug_info+0x30): relocation truncated to fit: R_X86_64_32 against `.debug_str'+19
>>> .../check.o:(.debug_info+0x37): relocation truncated to fit: R_X86_64_32 against `.debug_str'+71
>>> .../check.o:(.debug_info+0x3e): relocation truncated to fit: R_X86_64_32 against `.debug_str'
>>> .../check.o:(.debug_info+0x45): relocation truncated to fit: R_X86_64_32 against `.debug_str'+5e
>>> .../check.o:(.debug_info+0x4c): additional relocation overflows omitted from the output
>>>
>>> Tell the linker to strip debug info as a workaround. Oddly enough debug
>>> info has been getting stripped when linking the actual xen.efi, without
>>> me being able to tell why this would be.
>> I've changed this to
>>
>> "Tell the linker to strip debug info as a workaround. Debug info has been
>>  getting stripped already anyway when linking the actual xen.efi."
>>
>> as I noticed I did look for -S only yesterday, while we have
>>
>> EFI_LDFLAGS += --image-base=$(1) --stack=0,0 --heap=0,0 --strip-debug
> 
> So, in terms of the bugfix, Acked-by: Andrew Cooper
> <andrew.cooper3@citrix.com>

Thanks.

> However, we ought to be keeping the debug symbols for xen-syms.efi (or
> equiv) seeing as there is logic included here which isn't in the regular
> xen-syms.

Well, perhaps. Besides the 2.36 binutils regression needing fixing
(or us limiting the stripping to the case where that's the linker
version in use), there are a few more points relevant here:

- Checking with a random older binutils (2.32) I observe the linker
  working fine, but our mkreloc utility choking on the (admittedly
  suspicious, at least at the first glance) output. This may be
  possible to deal with, but still.

- It would need checking whether the resulting binary works at all.
  All the .debug_* sections come first. Of course there are surely
  again ways to overcome this (though it smells like a binutils
  bug).

- While in ELF binaries the particular .debug_* sections are
  conventionally assumed to hold Dwarf debug info, no such
  assumption is true for PE executables. In particular I observe
  objdump (2.32 as well as 2.36) to merely dump the COFF symbol
  table when handed -g. Are you aware of consumers of the
  information, if we indeed kept it?

Jan


From xen-devel-bounces@lists.xenproject.org Fri Feb 05 11:28:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 11:28:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81614.150921 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7zHb-0004zV-EU; Fri, 05 Feb 2021 11:28:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81614.150921; Fri, 05 Feb 2021 11:28:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7zHb-0004zO-Bg; Fri, 05 Feb 2021 11:28:15 +0000
Received: by outflank-mailman (input) for mailman id 81614;
 Fri, 05 Feb 2021 11:28:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IalI=HH=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l7zHZ-0004zJ-NK
 for xen-devel@lists.xenproject.org; Fri, 05 Feb 2021 11:28:13 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0cd6350f-d5ab-4997-8763-0b4f07241808;
 Fri, 05 Feb 2021 11:28:11 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id ECB3FAD2B;
 Fri,  5 Feb 2021 11:28:10 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0cd6350f-d5ab-4997-8763-0b4f07241808
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612524491; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=rD0s5mWydfTCukar/1w+Kwlu2u2dtYsDsbCwAqEN/jY=;
	b=hy8Rs2ll/aSJZmwFDd3In3bafZpxMTKQQOwDJ2nyVuzZ4vnWNzeIkapatVa7ykDSDomquJ
	YLc7QhWh+r6o4/tRJ1n9YIUtGsN0GhmAXx326e+Xy0BH30D1+SWi6nq8WRmOdjAAZpji9Q
	LABegX2dbEAYpOpOySj2McOS8bo/eVo=
Subject: Re: [PATCH] x86emul: de-duplicate scatters to the same linear address
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <6064996d-943f-1be3-9bfd-e872149da2a1@suse.com>
 <fcf7e123-3cdd-fd4d-6c58-36facb26a68e@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <2e559806-5bc0-0f61-8e23-95e0dba34c41@suse.com>
Date: Fri, 5 Feb 2021 12:28:11 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <fcf7e123-3cdd-fd4d-6c58-36facb26a68e@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 05.02.2021 11:41, Andrew Cooper wrote:
> On 10/11/2020 13:26, Jan Beulich wrote:
>> The SDM specifically allows for earlier writes to fully overlapping
>> ranges to be dropped. If a guest did so, hvmemul_phys_mmio_access()
>> would crash it if varying data was written to the same address. Detect
>> overlaps early, as doing so in hvmemul_{linear,phys}_mmio_access() would
>> be quite a bit more difficult.
> 
> Are you saying that there is currently a bug if a guest does encode such
> an instruction, and we emulate it?

That is my take on it, yes.

>> Note that due to cache slot use being linear address based, there's no
>> similar issue with multiple writes to the same physical address (mapped
>> through different linear addresses).
>>
>> Since this requires an adjustment to the EVEX Disp8 scaling test,
>> correct a comment there at the same time.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> TBD: The SDM isn't entirely unambiguous about the faulting behavior in
>>      this case: If a fault would need delivering on the earlier slot
>>      despite the write getting squashed, we'd have to call ops->write()
>>      with size set to zero for the earlier write(s). However,
>>      hvm/emulate.c's handling of zero-byte accesses extends only to the
>>      virtual-to-linear address conversions (and raising of involved
>>      faults), so in order to also observe #PF changes to that logic
>>      would then also be needed. Can we live with a possible misbehavior
>>      here?
> 
> Do you have a chapter/section reference?

The instruction pages. They say in particular

"If two or more destination indices completely overlap, the “earlier”
 write(s) may be skipped."

and

"Faults are delivered in a right-to-left manner. That is, if a fault
 is triggered by an element and delivered ..."

To me this may or may not mean the skipping of indices includes the
skipping of faults (which a later element then would raise anyway).
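As an aside, the "earlier writes may be skipped" rule is easy to model: a later element whose destination fully overlaps an earlier one makes the earlier write droppable. A minimal sketch (hypothetical helper, not the actual x86emul code, and ignoring masking and partial overlaps):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical helper: for each scatter element, record whether its
 * write may be dropped because a later element targets the same linear
 * address (a full overlap, which the SDM permits to be skipped). */
static void dedup_scatter(const uint64_t *addr, size_t n, bool *skip)
{
    for ( size_t i = 0; i < n; ++i )
    {
        skip[i] = false;
        for ( size_t j = i + 1; j < n; ++j )
            if ( addr[j] == addr[i] )
            {
                skip[i] = true; /* later, fully overlapping write wins */
                break;
            }
    }
}
```

Whether the skipped element must still contribute its fault is exactly the open question above; the sketch only covers the data side.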

Jan


From xen-devel-bounces@lists.xenproject.org Fri Feb 05 11:32:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 11:32:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81623.150933 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7zLd-00063A-1Y; Fri, 05 Feb 2021 11:32:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81623.150933; Fri, 05 Feb 2021 11:32:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7zLc-000633-Ug; Fri, 05 Feb 2021 11:32:24 +0000
Received: by outflank-mailman (input) for mailman id 81623;
 Fri, 05 Feb 2021 11:32:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jdnZ=HH=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l7zLb-00062y-Jj
 for xen-devel@lists.xenproject.org; Fri, 05 Feb 2021 11:32:23 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d5f1d411-1cf3-48e1-a377-c6bf18ade084;
 Fri, 05 Feb 2021 11:32:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d5f1d411-1cf3-48e1-a377-c6bf18ade084
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612524742;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=kmQQmcdcYR1qlbvU5mwSsGaJFWNh/dzbjzITAV278Vw=;
  b=PfDNu8fQGKr4vX5gEZMVNzWBT8M/I0Q68GcqVgBrexWeE/xrZyKH7DHl
   wOxLBWpjVrJ+7Za2dUYffJhnm/2FgwPtNnCF/Blu6NsOCm4XYq79wq0RV
   QDi1E3P590MU8XeClhAth8Cjppr08T43fhfLpUtl6rqqNDu7nbmVrQVdI
   8=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: 3sL55lhBnClkfU06ag6FaD7Iw2BBIHEewqjbgZtYxobP+SmNWPi8N6NrNVn78IacAodf/CVtuR
 mTg472yMK3u3xDJBJB5eCaYH+6rOChrh1KJ4wYSIlWGQdL0Fp3NtqKhGtXPtVicAA7nWKvEsSp
 hRrqkXRt4kVvKQGE1xr56NCvU2n3o8LjzDr+EKsrY5QsGitadDkec1OEjh1M9lcmgDoAlcgSGK
 7D/ovTz2Ackoik2NrL1YzPNFKdyiUYqqi3EWznV0OiC49rS8F8XVAZ42HRzJuthGFg90bE1C19
 qDc=
X-SBRS: 5.2
X-MesageID: 36665032
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,154,1610427600"; 
   d="scan'208";a="36665032"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=m9csYst7JAHauj+9taIQvKTKDRM3so++G34LjSoK7Ki22OUCVq8aLBwmhhhL4DKRVcXEhyRtP1O+Xr9JwackwweE4gH9Ht6OI259v63OWMPYFhBfAXrf/8VEcYzHOGKmaZAbSSHpl1tVvwRA7u+XK7RaLwcRkQQehTqg2CALbbf2BTHGb6Wjf96KbnEzTQNdbUoskFT74J/uTH00IqIy1KL0mlP/Gn3C5YpitAzldgaUyIoVEvohA/tGUiXD8+FzWTk4zYkjITsS7rrCJSLMkcTpAt6tgGzeeD3cwF0wBGqMKtKMA1kRaKpFME4ZMz61Zun3/9FyFyTOB967Wv2sMw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=kmQQmcdcYR1qlbvU5mwSsGaJFWNh/dzbjzITAV278Vw=;
 b=V+juD+e65ilVzdKKhoNxV0I6v6hif2aNtMviyDnM3y87qYg1osAKx+9p5WEbI+FSWUl3eLPIw3ip4NjnrTLDaEfLCvtBcCzwglnqXDzFYOzox76NyPmXyiU9OdoAoDNH07LIzaLeXYe0prwv9rEXFXZiL1U8fjvmC1K4cj8q6hNiW9IvmQNVRXSEm6RnCOoxOrNDAdlxktW4AiA6wEhjRMirkYnkjvE/fBuFv1Msl/2CT7DXaQfikJ+4rzwsLMYxIAqwXU1Iol/T5lmZdHBNYMxAxkJPd+wuiso9EYBIDpu97669Aj015gFgdhwQE7JTGF61pdGMWyQ5+8jdXGu4EQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=kmQQmcdcYR1qlbvU5mwSsGaJFWNh/dzbjzITAV278Vw=;
 b=EoQ7FNvGhszKdtX1KgM/7MVldtiJNU1JbIU7qSkNEy0xEGDWVFWCXdO7uPCanQUWYycEXTD5xYCYq9hGOs5YlE+A9M3US0LMj3JFRSkgZpTs60aS3l9s7sZ6+US1nv91hMnbrwcnYs5VPlCBcXpgWlWzb8twFFqrnIlWBTf0uWE=
Subject: =?UTF-8?Q?Re=3a_Ping=c2=b2=3a_=5bPATCH=5d_x86/PV=3a_conditionally_a?=
 =?UTF-8?Q?void_raising_=23GP_for_early_guest_MSR_accesses?=
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>, Jan Beulich
	<jbeulich@suse.com>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
References: <7e69db81-cee7-3c7b-be64-4f5ff50fbe9c@suse.com>
 <cf814663-0319-6a30-f3a2-dc43432eedb1@citrix.com>
 <cf24a63e-afe9-be6a-3ab9-cc65e19a7a0f@suse.com>
 <aad25a24-b598-4c35-05f0-80f39152c11e@suse.com>
 <d4be9aea-0c14-dac6-5fb6-431f7899f075@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <7fdfac30-0c7f-f07d-fc7c-f7bb87b71a28@citrix.com>
Date: Fri, 5 Feb 2021 11:32:07 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
In-Reply-To: <d4be9aea-0c14-dac6-5fb6-431f7899f075@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0492.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1ab::11) To BYAPR03MB4728.namprd03.prod.outlook.com
 (2603:10b6:a03:13a::24)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 82b65dd0-eb5e-48f5-3f47-08d8c9c9af29
X-MS-TrafficTypeDiagnostic: BYAPR03MB4582:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB45826E7B90ECED62DDD1B6E5BAB29@BYAPR03MB4582.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:6108;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: dm4/mkfzawNgpY+TQF86gj0h2PNlqmm69MR9LHIjWGvK/pb8Hp39L8Z98JDsqj3CSFAxPP76jjfpvckcrUu2aOK61KeqT+d8sAiQpqIQWFzwcMgpE7o2kRxElVEOPyFtANjNq7sG4ny/IQ65jD0BtYv37pxWT6zOtcdZv5CrriF5NSozZaQ8JsbD3ThGKgqNJ+IoVxkNz/rxCNYmQeVwCzpR4aPbuJAg5TMMqcxsN40Xx8BT51ZeuTt/RNWVG72ebbBI3Qit68JEwqCrqLRVecb2+R3xaoVksa41wtJv1ErHoQDHhLA4i0JodUIqfu1qqLq0o97F2E19Rw/eF6Uxjf6I8P5oEbSTQiDGCNgtL8Qbm6dIew57PXTkB44ZfYc0aQ8EAMgzxFaBGXabdzhE0xfaqpAtUCxwfnbvlT1SrFDw77e4/pHquDjhUCduq155mG/qA7nEkevD4AIXGp51imlcTxwDzofSQ/A3DpzEWa+frYky7dRe7PN9m+6UhycwR1JKB7L4wmrWGXaAgXCBfmNczmdHT8fVm2SSwUOkPKBmT/VwnJKHVxSDc6iMdfN/shdNYDPFx5IzyTgv9wlSQY/YQRb3nXad29id2j9oDDU=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB4728.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(366004)(346002)(376002)(39850400004)(396003)(136003)(66574015)(66476007)(186003)(66556008)(110136005)(66946007)(54906003)(26005)(36756003)(53546011)(316002)(6486002)(956004)(83380400001)(2906002)(31696002)(2616005)(6666004)(8936002)(5660300002)(31686004)(86362001)(16576012)(16526019)(478600001)(4326008)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?Y2Y4NnA2d2JYRVNrbUxLUGxwQnhNWGtPa3k3bEFVL01vY2xEdUVldkRCcnRL?=
 =?utf-8?B?aHM3UTJBcWEzYWtuQkRFYXBOdFNQYUZDZlhKUWxvTlBya1NWK1k0RUwzZ1ds?=
 =?utf-8?B?UzZQcERyVE5BNVFNN1FNVW5KZGVaWkN6ZE1lRm1BQmZoR3FmM2pYNjZJN1d2?=
 =?utf-8?B?ZGNkSEdGM1UxbkUyUUNaYU81c29DSXpBcXZJSjBlZGp1OEJpYnQ1TVJJNjZI?=
 =?utf-8?B?REhnUG42MUhoUUd3WCtlby9hNDJybnRyV0U2dDZtZHdvS3ZONGhNbVBaTHAv?=
 =?utf-8?B?Ti9HckUyK1JtUHNreGpCdnJMRnJVelhWckpEZGVpQnEvV1NGcVJxTi8raUZP?=
 =?utf-8?B?TGdBaUJKRVk4R1JYeWxFSFYrSmd4cU1Db3VkbUwzNCtaRmNobkh4WFdtVExu?=
 =?utf-8?B?d3AyeFk3M3Rpd2thSVcvOE1NdXh2OVpzaEJMZ3RmTFRsL1pyK3F6c3ZhaDd5?=
 =?utf-8?B?TFJYZTE1RS9tT3pLWDFIdjMyRjV0a3Y1M0hsSUE3Wm5QUGUySUpNTmMzN1VF?=
 =?utf-8?B?S29ZN2lWMFRxQ3FJSEJDcTJrUHltbkdtdmdlZ2gwYzJML3cvY2srSFlzODFR?=
 =?utf-8?B?TkFFUE1xQW9RNFRxR0h1TGxNRXNxeFk4MmwwaVpydGp1dG92TVF0TFZTeWd4?=
 =?utf-8?B?UW81bXVTY1BIcWhqYS9SUVR1SUd1djFVREFvR3JiOThiUkovOWM5SjF3Y01q?=
 =?utf-8?B?MDdudm1oTTRhMnJCaEc2Vzd1OUQ2a09oQTl5aU5maHJyeERXT2gzY2RZNUJp?=
 =?utf-8?B?TmZvVVgrOGZCY2NpUDhycmJTMDFiTXIxcFlyYUgyQ25EbFJkUG50NkZRRitm?=
 =?utf-8?B?SFZHNXVuQ0c2VGJRNXNwTU5hQy9FN2Fxdy9JZ0lhRlFOdzBwMm8zTk0rdFpY?=
 =?utf-8?B?RS9mQnhMSUcyMzVkNG1CMGZtbkM1dEQyVjJ2LzdFTkpBSHM3RExLUlY2VjFE?=
 =?utf-8?B?b0NBc1lWeElNRjZxbVg1SUtScVR0R1M4QlJPK2dqdXBpdW1VWmEvTmZkdEdU?=
 =?utf-8?B?aW5NSEREMEorN2FLdGxNNm5zbGR2WGovVjZEaTZTMmlWQnBNY09Yb1U4aVJQ?=
 =?utf-8?B?UVFXRkdvdW1qVktRZno5dnE0ZnpDUkpydSt5RWFscWFoUE1VbzErTDZHZTRj?=
 =?utf-8?B?RkNCQTBCMnJ0QlI4Z3hiVE1vT2xvQVFyWVZKcTRLNVROMTk2V1NJeFNpUVNW?=
 =?utf-8?B?OXU3aXlqY21DeWNZckFCeGMxM2Y1d0VDMi9IaHo5SUh0TzU1Tmo3Q0hCS2pn?=
 =?utf-8?B?MEZZUk9JMWgwRzRTbjhvMkEzWU9MbDBOeUFPMUZzV3ZjWURYNThndjBZZkRk?=
 =?utf-8?B?blIzQnpXZG8xVHJYVFFQMWRoM0cyUmhxZXgrc3Z4MmxPVkxCRUFyTnJmMmg4?=
 =?utf-8?B?R0lZYkhoR3YyUHlXTm1iYUlMV2ZxbG1MK2d3WTVVbk12c3p3ODdPZTU5Ylkr?=
 =?utf-8?B?YTVMYmRYRGlQb3I5UFdIeFFnSjNreHc1dmlvTkhyUGtoMlRnajZwakk4U3Fr?=
 =?utf-8?B?bUtjTzJHMnpXb0c1SnlNWGpzUmI5UlZxRzZuaGx1Q3ZobXc2LzhabVBIMWRF?=
 =?utf-8?B?OE96dlcrR3VNMXdvUEhqaUJYc3ZaQmo0YmVOSDJCRnJwdmhaZlV1TmgvbEZ5?=
 =?utf-8?B?MWFyMmhkdGlWRGVYZkE4bHB1ZHJUY3BmYUE1TWkzM05LeFRMdHN1anNsQkdD?=
 =?utf-8?B?Wms4WVp1OHUwL2VMMTdnOTJWb0hTeHIwSHJyYUdLc2syNm8ySEZxZGlRUW44?=
 =?utf-8?Q?7wvWuG2hwP0aXwPKBtMm8mSaTdEIVdKNRq0ypsL?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 82b65dd0-eb5e-48f5-3f47-08d8c9c9af29
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB4728.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Feb 2021 11:32:13.9769
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: VoatxHEG3aNeI9rl7c5L5r/VkR1gPMYQWsPchy28AIwLlpYnfFCy4+7DwSx2XqQuzNxTyOlhUi0bKkYKBL9sFEJB0AaVYgmkGgxw6az1RAY=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4582
X-OriginatorOrg: citrix.com

On 05/02/2021 10:56, Jürgen Groß wrote:
> On 05.02.21 11:14, Jan Beulich wrote:
>> (simply re-sending what was sent over 2 months ago)
>>
>> On 04.11.2020 11:50, Jan Beulich wrote:
>>> On 03.11.2020 18:31, Andrew Cooper wrote:
>>>> On 03/11/2020 17:06, Jan Beulich wrote:
>>>>> Prior to 4.15 Linux, when running in PV mode, did not install a #GP
>>>>> handler early enough to cover for example the rdmsrl_safe() of
>>>>> MSR_K8_TSEG_ADDR in bsp_init_amd() (not to speak of the unguarded
>>>>> read
>>>>> of MSR_K7_HWCR later in the same function). The respective change
>>>>> (42b3a4cb5609 "x86/xen: Support early interrupts in xen pv
>>>>> guests") was
>>>>> backported to 4.14, but no further - presumably since it wasn't
>>>>> really
>>>>> easy because of other dependencies.
>>>>>
>>>>> Therefore, to prevent our change in the handling of guest MSR
>>>>> accesses
>>>>> to render PV Linux 4.13 and older unusable on at least AMD
>>>>> systems, make
>>>>> the raising of #GP on these paths conditional upon the guest having
>>>>> installed a handler. Producing zero for reads and discarding writes
>>>>> isn't necessarily correct and may trip code trying to detect
>>>>> presence of
>>>>> MSRs early, but since such detection logic won't work without a #GP
>>>>> handler anyway, this ought to be a fair workaround.
>>>>>
>>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>>
>>>> I appreciate that we probably have to do something, but I don't think
>>>> this is a wise move.
>>>
>>> I wouldn't call it wise either, but I'm afraid something along
>>> these lines is necessary.
>>>
>>>> Linux is fundamentally buggy.  It is deliberately looking for a
>>>> potential #GP fault given its use of rdmsrl_safe().  The reason
>>>> this bug stayed hidden for so long was as a consequence of Xen's
>>>> inappropriate MSR handling for guests, and the reasons for changing
>>>> Xen's behaviour still stand.
>>>
>>> I agree.
>>>
>>>> This change, in particular, does not apply to any explicitly handled
>>>> MSRs, and therefore is not a comprehensive fix.
>>>
>>> But it's intentional that this deals with the situation in a
>>> generic way, not on a per-MSR basis. If we did as you suggest
>>> further down, we'd have to audit all Linux versions up to 4.14
>>> for similar issues with other MSRs. I don't think this would
>>> be a practical thing to do, and I also don't think that leaving
>>> things as they are until we have concrete reports of problems
>>> is a viable option either.
>>>
>>> Adding explicit handling for the two offending MSRs (and any
>>> possible further ones we discover) would imo only be to avoid
>>> issuing the respective log messages.
>>>
>>>> Nor is it robust to someone adding code to explicitly handle the
>>>> impacted MSRs at a later date (which we are likely to need to do
>>>> for HWCR), and which would reintroduce this failure to boot.
>>>
>>> I'm afraid I don't understand. Looking at the two functions
>>> the patch alters, only X86EMUL_OKAY is used in return statements
>>> other than the final one. If this model is to be followed by
>>> future additions (which I think it ought to be; perhaps we
>>> should add comments to this effect), the code introduced here
>>> will take care of the situation nevertheless.
>>>
>>>> We should have the impacted MSRs handled explicitly, with a note
>>>> stating this was a bug in Linux 4.14 and older.  We already have
>>>> workarounds for similar bugs in Windows, and it also gives us a
>>>> timeline for eventually removing support for obsolete workarounds,
>>>> rather than having a "now and in the future, we'll explicitly
>>>> tolerate broken PV behaviour for one bug back in ancient Linux".
>>>
>>> Comparing with Windows isn't very helpful; the patch here is
>>> specifically about PV, and would help other OSes as well in
>>> case they would have missed setting up exceptions early in
>>> just the PV-on-Xen case. For the HVM case I'd indeed rather
>>> see us go the route we've gone for Windows, if need be.
>>
>> As can be seen from this reply, we're not in agreement on what to
>> do here. But we need to do something. I'm not sure how to get
>> discussions like this one unstuck ...
>>
>> Besides this suggestion of yours I also continue to have
>> trouble seeing what good it will do to record an exception to
>> inject into a guest when we know it didn't install a handler
>> yet.
>
> As we need to consider backports of processor bug mitigations
> in old guests, too, I think we need to have a "catch-all"
> fallback.
>
> Not being able to run an old updated guest until we add handling
> for a new MSR isn't a viable option IMO.

You're trading off against issuing XSAs for all the unknown quantities
of sensitive data in MSRs on all past and future platforms.  This has
unbounded scope.

Xen's previous behaviour was astoundingly stupid, and yes - we're
playing more than a decade's worth of catchup in one release cycle.

I'll absolutely take "care/tweaks need to happen crossing the Xen
4.14=>4.15 boundary" over whack-a-mole for MSRs in the form of security
advisories.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Feb 05 11:37:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 11:37:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81626.150946 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7zQi-0006F3-Pc; Fri, 05 Feb 2021 11:37:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81626.150946; Fri, 05 Feb 2021 11:37:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7zQi-0006Ew-M4; Fri, 05 Feb 2021 11:37:40 +0000
Received: by outflank-mailman (input) for mailman id 81626;
 Fri, 05 Feb 2021 11:37:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7zQi-0006Eo-B2; Fri, 05 Feb 2021 11:37:40 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7zQi-0004sV-2E; Fri, 05 Feb 2021 11:37:40 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7zQh-0002ti-PH; Fri, 05 Feb 2021 11:37:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l7zQh-0007MO-Oa; Fri, 05 Feb 2021 11:37:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=3ZfTENvZ6EfiQ0Jp62W6DWn/HyHiypH+qsl1Qt4ElEc=; b=cpMy+qMDTB9BS+B+tEqOinhxxZ
	QqtgWksZHFweWkVGtO4p5qygsFR1mOwwNQZV08b7mCameNsGyZPlYQMljY/775C2Eig2yTezw2wrv
	XDTYheqR+vPkLfWfklFaBwpYZ6NEJSrB0IZ/hwYPFJDme94deiCGNGHOlcuiFI4GdSug=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [linux-5.4 bisection] complete test-amd64-i386-qemut-rhel6hvm-amd
Message-Id: <E1l7zQh-0007MO-Oa@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 05 Feb 2021 11:37:39 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-i386-qemut-rhel6hvm-amd
testid redhat-install

Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
  Bug introduced:  a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Bug not present: acc402fa5bf502d471d50e3d495379f093a7f9e4
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/159038/


  commit a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Author: David Woodhouse <dwmw@amazon.co.uk>
  Date:   Wed Jan 13 13:26:02 2021 +0000
  
      xen: Fix event channel callback via INTX/GSI
      
      [ Upstream commit 3499ba8198cad47b731792e5e56b9ec2a78a83a2 ]
      
      For a while, event channel notification via the PCI platform device
      has been broken, because we attempt to communicate with xenstore before
      we even have notifications working, with the xs_reset_watches() call
      in xs_init().
      
      We tend to get away with this on Xen versions below 4.0 because we avoid
      calling xs_reset_watches() anyway, because xenstore might not cope with
      reading a non-existent key. And newer Xen *does* have the vector
      callback support, so we rarely fall back to INTX/GSI delivery.
      
      To fix it, clean up a bit of the mess of xs_init() and xenbus_probe()
      startup. Call xs_init() directly from xenbus_init() only in the !XS_HVM
      case, deferring it to be called from xenbus_probe() in the XS_HVM case
      instead.
      
      Then fix up the invocation of xenbus_probe() to happen either from its
      device_initcall if the callback is available early enough, or when the
      callback is finally set up. This means that the hack of calling
      xenbus_probe() from a workqueue after the first interrupt, or directly
      from the PCI platform device setup, is no longer needed.
      
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Link: https://lore.kernel.org/r/20210113132606.422794-2-dwmw2@infradead.org
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/linux-5.4/test-amd64-i386-qemut-rhel6hvm-amd.redhat-install.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/linux-5.4/test-amd64-i386-qemut-rhel6hvm-amd.redhat-install --summary-out=tmp/159038.bisection-summary --basis-template=158387 --blessings=real,real-bisect,real-retry linux-5.4 test-amd64-i386-qemut-rhel6hvm-amd redhat-install
Searching for failure / basis pass:
 158962 fail [host=rimava1] / 158681 [host=pinot1] 158624 [host=pinot0] 158616 [host=pinot1] 158609 ok.
Failure / basis pass flights: 158962 / 158609
(tree with no url: minios)
Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3b468095cd3dfcd1aa4ae63bc623f534bc2390d2 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
Basis pass 09f983f0c7fc0db79a5f6c883ec3510d424c369c c530a75c1e6a472b0eb9558310b518f0dfcd8860 3b769c5110384fb33bcfeddced80f721ec7838cc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 452ddbe3592b141b05a7e0676f09c8ae07f98fdd
Generating revisions with ./adhoc-revtuple-generator  git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git#09f983f0c7fc0db79a5f6c883ec3510d424c369c-0fbca6ce4174724f28be5268c5d210f51ed96e31 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#3b769c5110384fb33bcfeddced80f721ec7838cc-3b468095cd3dfcd1aa4ae63bc623f534bc2390d2 git://xenbits.xen.org/qemu-xen-traditional\
 .git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://xenbits.xen.org/qemu-xen.git#7ea428895af2840d85c524f0bd11a38aac308308-7ea428895af2840d85c524f0bd11a38aac308308 git://xenbits.xen.org/osstest/seabios.git#ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e-ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e git://xenbits.xen.org/xen.git#452ddbe3592b141b05a7e0676f09c8ae07f98fdd-9dc687f155a57216b83b17f9cde55dd43e06b0cd
From git://cache:9419/git://xenbits.xen.org/osstest/ovmf
   f6ec1dd34f..1b6c3a94ec  xen-tested-master -> origin/xen-tested-master
Loaded 15001 nodes in revision graph
Searching for test results:
 158593 pass irrelevant
 158603 [host=pinot0]
 158609 pass 09f983f0c7fc0db79a5f6c883ec3510d424c369c c530a75c1e6a472b0eb9558310b518f0dfcd8860 3b769c5110384fb33bcfeddced80f721ec7838cc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 452ddbe3592b141b05a7e0676f09c8ae07f98fdd
 158616 [host=pinot1]
 158624 [host=pinot0]
 158681 [host=pinot1]
 158707 fail irrelevant
 158716 fail irrelevant
 158748 fail 131f8d8a889a5ca66a835eea82bba043ac91a7cf c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e f8708b0ed6d549d1d29b8b5cc287f1f2b642bc63
 158765 fail 131f8d8a889a5ca66a835eea82bba043ac91a7cf c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 6677b5a3577c16501fbc51a3341446905bd21c38
 158796 fail irrelevant
 158818 fail irrelevant
 158841 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158863 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158881 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158929 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea56ebf67dd55483105aa9f9996a48213e78337e 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158962 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3b468095cd3dfcd1aa4ae63bc623f534bc2390d2 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 159011 pass 09f983f0c7fc0db79a5f6c883ec3510d424c369c c530a75c1e6a472b0eb9558310b518f0dfcd8860 3b769c5110384fb33bcfeddced80f721ec7838cc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 452ddbe3592b141b05a7e0676f09c8ae07f98fdd
 159012 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3b468095cd3dfcd1aa4ae63bc623f534bc2390d2 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 159015 fail 38f35023fd301abeb01cfd81e73caa2e4e7ec0b1 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159018 pass d8a487e673abf46c69c901bb25da54e9bc7ba45e c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159021 pass 5a1d7bb7d333849eb7d3ab5ebfbf9805b2cd46c9 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159022 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159024 pass 9cec63a3aacbcaee8d09aecac2ca2f8820efcc70 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159026 pass 8ab3478335ad8fc08f14ec73251b084fe02b3ebb c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159028 pass acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159030 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159034 pass acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159035 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159037 pass acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159038 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
Searching for interesting versions
 Result found: flight 158609 (pass), for basis pass
 For basis failure, parent search stopping at acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3, results HASH(0x56373e6eb558) HASH(0x56373f69d9e8) HASH(0x56373f6a0798) For basis failure, parent search stopping at 8ab3478335ad8fc08f14ec73251b084fe02b3ebb c530a75c1\
 e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3, results HASH(0x56373f68c5d8) For basis failure, parent search stopping at 9cec63a3aacbcaee8d09aecac2ca2f8820efcc70 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0\
 bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3, results HASH(0x56373f68de60) For basis failure, parent search stopping at 5a1d7bb7d333849eb7d3ab5ebfbf9805b2cd46c9 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3, results HASH(0x56373f688f28) For basis\
  failure, parent search stopping at d8a487e673abf46c69c901bb25da54e9bc7ba45e c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3, results HASH(0x56373f64cc88) For basis failure, parent search stopping at 09f983f0c7fc0db79a5f6c883ec3510d424c369c c530a75c1e6a472b0eb9558310b518f0dfcd8860 3b769c5110384fb33bcf\
 eddced80f721ec7838cc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 452ddbe3592b141b05a7e0676f09c8ae07f98fdd, results HASH(0x56373f64ef90) HASH(0x56373f694620) Result found: flight 158748 (fail), for basis failure (at ancestor ~6011)
 Repro found: flight 159011 (pass), for basis pass
 Repro found: flight 159012 (fail), for basis failure
 0 revisions at acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
No revisions left to test, checking graph state.
 Result found: flight 159028 (pass), for last pass
 Result found: flight 159030 (fail), for first failure
 Repro found: flight 159034 (pass), for last pass
 Repro found: flight 159035 (fail), for first failure
 Repro found: flight 159037 (pass), for last pass
 Repro found: flight 159038 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
  Bug introduced:  a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Bug not present: acc402fa5bf502d471d50e3d495379f093a7f9e4
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/159038/


  commit a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Author: David Woodhouse <dwmw@amazon.co.uk>
  Date:   Wed Jan 13 13:26:02 2021 +0000
  
      xen: Fix event channel callback via INTX/GSI
      
      [ Upstream commit 3499ba8198cad47b731792e5e56b9ec2a78a83a2 ]
      
      For a while, event channel notification via the PCI platform device
      has been broken, because we attempt to communicate with xenstore before
      we even have notifications working, with the xs_reset_watches() call
      in xs_init().
      
      We tend to get away with this on Xen versions below 4.0 because we avoid
      calling xs_reset_watches() anyway, because xenstore might not cope with
      reading a non-existent key. And newer Xen *does* have the vector
      callback support, so we rarely fall back to INTX/GSI delivery.
      
      To fix it, clean up a bit of the mess of xs_init() and xenbus_probe()
      startup. Call xs_init() directly from xenbus_init() only in the !XS_HVM
      case, deferring it to be called from xenbus_probe() in the XS_HVM case
      instead.
      
      Then fix up the invocation of xenbus_probe() to happen either from its
      device_initcall if the callback is available early enough, or when the
      callback is finally set up. This means that the hack of calling
      xenbus_probe() from a workqueue after the first interrupt, or directly
      from the PCI platform device setup, is no longer needed.
      
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Link: https://lore.kernel.org/r/20210113132606.422794-2-dwmw2@infradead.org
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>

pnmtopng: 252 colors found
Revision graph left in /home/logs/results/bisect/linux-5.4/test-amd64-i386-qemut-rhel6hvm-amd.redhat-install.{dot,ps,png,html,svg}.
----------------------------------------
159038: tolerable ALL FAIL

flight 159038 linux-5.4 real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/159038/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-amd 12 redhat-install    fail baseline untested


jobs:
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Fri Feb 05 11:53:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 11:53:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81648.150964 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7zgH-0008NI-8u; Fri, 05 Feb 2021 11:53:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81648.150964; Fri, 05 Feb 2021 11:53:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7zgH-0008NB-4P; Fri, 05 Feb 2021 11:53:45 +0000
Received: by outflank-mailman (input) for mailman id 81648;
 Fri, 05 Feb 2021 11:53:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cB+w=HH=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1l7zgF-0008N6-Qz
 for xen-devel@lists.xenproject.org; Fri, 05 Feb 2021 11:53:44 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8a859675-05a5-4dfd-b7d3-a25b7ca76046;
 Fri, 05 Feb 2021 11:53:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8a859675-05a5-4dfd-b7d3-a25b7ca76046
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612526022;
  h=from:to:cc:subject:date:message-id:
   content-transfer-encoding:mime-version;
  bh=70OolYbpJ0TB3JCeqVXbodRWpdm1hyzWa9tahDBVDJw=;
  b=X2wLtKGgkzkUQgTc/3309ZJwrAtBfqxHzAjLOAStILaT1zkGVeBe2hew
   +QpzX5g1qUP4K8m4Bb5tm46RrwJuVFOCj5BWXq9TspW9ecjmnP+m/sEgD
   fqcWW3p4Up7s6r2HqVcWbeoQYjPWJV4d3L3WOLXm/gP2dNt741OeCewhN
   k=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: 8uIUkoKrRC9V5hN7rQYmNtukNB3Vlx6J4kGgobe7TZBlkwqRzJuBQ2I4W9/G2YCnoNLOYu6+5Y
 vKbeiT5oFU8Y5EqJPjTESij5CLlKOo85ZoLOTvw1isY2GTc9QM8YoF8TZUy8RmmHqsmNychLUq
 xx4sMJZ/x7FoTzSOpKE0S24Sm4sbFeWVMkfHdA0jGKK7lfoK0+vWulBONKADeYwJ6HZUouUoi/
 XoULeBuPEOeGPUU+hvGnYWkWrvTysIelsiISBmWIhyPNVuSDyqbLbVtu9J+c1Wdh1rgg36D4SO
 ZAI=
X-SBRS: 5.2
X-MesageID: 36587093
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,154,1610427600"; 
   d="scan'208";a="36587093"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cCxMluIOBOxS9dcifZg2bkMFGByVWnhPT78YY9v6XO00iRWJYGIFqnw4F40maGf02cisJMW2mFemjM0qy4Amyas/ANQB/1Pwk5V0d+AdF+o1Ud/su7fFWzXP+/Q8gxeDYZ/FLYluchItxv9pAaVX8FMi7AZAdYxG6RdkfjnBCePK58WZCZJ6ToFPhEIvZc3UJ1MCQYZo75MjT99YPGQPCorrgVzgFJIKEh6ighUlZK2DyFxRK+5GeBwbZbwVhqYmGefHDpAc+3TrnBF+LugjPHVUZI+2inCvllZZLW0+V2jm9yoJZC5IlMBSlnxL467iCGofbMWZbkTlAF7+GE8ebg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=msA6GqCa+9D1/ZVUvtfAvMVUNgubP3GI27ive4wtEts=;
 b=UcZbD50GLYFGc5vCkvgOSUyw6SCYzzFILjIpOPiGmSD/UV8W96AdsDJc0z5bepfjlVncxOMfM9zE7go0gYs2MkD6SDdpPWUGvJ4LZEIptqu+2TldqaQ+pexAXBBD9LAqjlCtuEwKMmeJT9OrdrrL0FU64cPNtGfuSGFvsoqpFqNqhePkyIkJrANx2n/cUhLP4kkFaarAGJllIeNh4FU4zkWzusiDZPNJK5TT4eUxftTgXwFyorZ8H7cQ4wn4ov5ryKU/oaVDFsvDR4qpEZXcgtONh6IMoYKJ0djrkp3CTwBf7zTzZdDA/vEYpNfogFlprUY84NbsriaheaLcw4Dz8w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=msA6GqCa+9D1/ZVUvtfAvMVUNgubP3GI27ive4wtEts=;
 b=gGliNeMFqxq3VsY862vazrXMmAs8TjJoCb43vLX+9TBW6E7xWzM24Qm9QOQT+tcKn+4EnpDcm6DLihR0Koena5+WFp6dWtuOBhDS+3ZOIoUpK+EsilmtXcsYZh3FDXNhFzfLaiTsE+ay2KhgMTfP7+B5JQYHVbUgsEXtnZs9kxY=
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Roger Pau Monne <roger.pau@citrix.com>, Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH for-4.15] tools/configure: add bison as mandatory
Date: Fri,  5 Feb 2021 12:53:27 +0100
Message-ID: <20210205115327.4086-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.29.2
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: MR2P264CA0046.FRAP264.PROD.OUTLOOK.COM (2603:10a6:500::34)
 To DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 7c7fd90a-6a1a-4f6c-e54c-08d8c9ccac0f
X-MS-TrafficTypeDiagnostic: DM6PR03MB4844:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB4844FDB0F163187E7360FD7C8FB29@DM6PR03MB4844.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 8idbAkSt118zAAyrbbdXrM1t3o9OwTXKmobxY+WKGPUG1b4YoT7eGmv/Kos0QX5Ij+FYVScqVbJ/yE5/kjxQynbPYgRoFWwdj03bozq22EB9V0nI2he9pAqb8sei3MAs6aT1zWtwUhc/FV5//Ho6qnDClhJLQqQtucjjJqP7WX4FtPoHlYZy2PWq6y0Q9HrPo+5l3LN9/MdK5U7OfnjHD8sttzqg1ZAXC9E8MTe6uxc2bHIVbieYOZNK5eiCQX16XNcRe+fbgs7pbYVw897xxC/Jx23y6+EtsMW9Twx2EtSdSRA4e3dEnLUvEnTjWvUZgU0Yr2dYZPpTnwvxoN+KxcSBIiwERnFyglS74139spS3vBNRi2XDo4kojIQXUontGc9i3VRh8GoPDYRTDgyoKyqCbiV9Gyi+hfurmjmrZ5uxle6n+xYJDnihXFsxMoDGSnVc2QxhVwAD30XpPbxjZtbIkVQZB3cnWQOAYuVv5Iqv4ksEWIUg5p6LqYmEgOEsWr7Mv0VHvwcOc4qgAKOeGA==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39860400002)(136003)(346002)(376002)(396003)(366004)(1076003)(83380400001)(2906002)(26005)(478600001)(4326008)(54906003)(36756003)(6496006)(6486002)(316002)(956004)(186003)(6916009)(8936002)(8676002)(6666004)(66946007)(16526019)(2616005)(66476007)(86362001)(5660300002)(66556008);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?Rk8wZ2J3K1RzdThYUEYvYVhISG5BM1hLSHV2YmpxazBuODVSejd0eDlVcDN6?=
 =?utf-8?B?WVZlL1IxVWVzSDZLYytDcFpEa2JNenlWeTQxeGorYmpWZmoreWdrSjVLdjUv?=
 =?utf-8?B?VmpSR2tiZkUyWnhNd2xJNEljOXRwVlN6bTJNeC93WWFyVWYwVlVZcU1GK3du?=
 =?utf-8?B?RWU5OTl3L1VoSFFvUEc0YWRZRXJQNnlzdm1Da3dKN2pNZkFmemJhTEF0d1F0?=
 =?utf-8?B?K0s3Qk5QeGdHcXloWis3Z1czZDJIL2VxWXRrZmRqZXBQbjhZVEtaRkNNTm5u?=
 =?utf-8?B?b0M0bW9nUy9aTzJHbGcrZEJkQWxnYkt5WlZPVGZpcXlKRUs3a3dSNS8zUkhR?=
 =?utf-8?B?TWhxVExFTXlLeThSWTA5UkxkMkpjaTZGa1czVkNFbEI1eXFrenVYSkFCUUhm?=
 =?utf-8?B?S2tQdHp4UldYdXVKYjcwZEpPUjAzMTdVTHUyWnZWbmpYR1FCNVhvRlYrSTdZ?=
 =?utf-8?B?ZFNJMUFuQ1FNdUJHZC9peE1jVzRWTEFWRVN2V0d3cEVjNzZtb3c0NDFqd0pl?=
 =?utf-8?B?UEJDUHlRWW5GUXpGYTN2U2ZBTGMzeUZmZDhncDgyb0tPU2J6OW8reTJzRnhm?=
 =?utf-8?B?RFhlYk5DOE1JWUwwWXYyOEg0ME9IYzhxTHdxWHlqbDF6dkp4SWdOVkluUWhD?=
 =?utf-8?B?cktuVVlSL0N3UzdBV0VMUzhiSzRFc2JFQyt0OFZGWnUzbnZ6R29rUC9yTWQ5?=
 =?utf-8?B?NHdxNFQxdmlhOS9QMjZFOHRoKy81OGtsdE1QMld6dVZwWUZlbHZnUU5rQkFx?=
 =?utf-8?B?bmo5cEQxQXE0YzdPOU1jais2V1RkTCtLajRCdS9IOXlrVXJaNmJJNElUU0hP?=
 =?utf-8?B?aEZpYVhUNnJqaUR4Z1NWSDBwclB2MlZ1V3BVZEcxRFVDNzdLUnE1K1JVM05K?=
 =?utf-8?B?RzhwOGlIVkM1eGpUV09Lakdadkk3UVpWNFhxL3JXNTdxVlRyVStacUo0M2Z3?=
 =?utf-8?B?NkpjZk12bmRYQllrNEV5YzdIOENQcGNpdlRaTnFDK3N4Zlc3TEUyVlpHdmpG?=
 =?utf-8?B?Z0tqYnpUcHlRNTRKMmVNTFJpcjlWdERyOThXejVrZDhMQzd2Qk9odkh4bGFO?=
 =?utf-8?B?b3NKVHlaNEFUR1o2TENyTE9Xc1lmaG9jSlF1UE1EaGZRY1hhRmZncDJweEo1?=
 =?utf-8?B?MHJrUjdVNWpjcmJ3WnRFdWswcnF1L25hTHJuQWlKc0Y4R3QwTHRaZVRLb1ln?=
 =?utf-8?B?aVJXanVIY3AxVXdDWUVYc2F5N1RlcTFjaDNxVFBCNURmNHhrQWJpSzdOTTBu?=
 =?utf-8?B?VUt4UktGMnRlRnlJVVFBcHB2VkFLUmJiMVVyMjFNKzg2Z1BiMnVSblVkRWMy?=
 =?utf-8?B?c2c1dTRKZjVmU2JlT0pqM1FTZjBEVThINkQ4cWJ5Rmc3SUN3Mk13MVk4cllj?=
 =?utf-8?B?RlplSllKNHpyTlFCa1VRM3d2Vkh5RUlqQnZsdHc4ODRiSkJBYlZuanplVlYr?=
 =?utf-8?B?VEx6QTVrR3hzSVlodkh4azE1cHJINForWDFaOE44a2JRV1ZyeDFNTjB4eXFL?=
 =?utf-8?B?RHNUL2Y5Mk03MWgzNmVPUUVkd3FWZkd0dHJrMU9PekczNUFWaUphWFNQSTZy?=
 =?utf-8?B?SGNhQU9UT2NKTmxBT1Evb0ZzdVJIRzlvOWJzeS85bVVVZnVZbFh5M200VGl4?=
 =?utf-8?B?Vzg3b3RVZkdsVVJySEM3MEpCZG9FMW5WSTRYYzJmdExjRnlaSDY5L1dBWVBw?=
 =?utf-8?B?a1V0dXN2ZW5iN2g2aDNmOWFMYjVBK1JJQlAzUG1PRk9DUERjb1VoN2pxdytM?=
 =?utf-8?Q?k7iWj2Wqqls5UR7bQtFWjEAlDoUeRLUk42k4D0O?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 7c7fd90a-6a1a-4f6c-e54c-08d8c9ccac0f
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Feb 2021 11:53:37.3086
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: bfQxKTxd5FOLW40m0UZ4DvFNAkYtTAIV+NcrjkGtmaspssvItqMjziMIzN7EpSHQYeCmmZ6YgXQWkC8hE0ZzaA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4844
X-OriginatorOrg: citrix.com

Bison is now mandatory when the pvshim build is enabled in order to
generate the Kconfig.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Please re-run autogen.sh after applying.

Fallout from this patch could result in a broken configure script being
generated, or in bison not being detected correctly, but such issues
should be caught quite quickly by the automated testing.
---
 tools/configure.ac | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/tools/configure.ac b/tools/configure.ac
index 5b328700e0..f4e3fccdb0 100644
--- a/tools/configure.ac
+++ b/tools/configure.ac
@@ -308,7 +308,6 @@ AC_ARG_VAR([AWK], [Path to awk tool])
 AC_PROG_CC
 AC_PROG_MAKE_SET
 AC_PROG_INSTALL
-AC_PATH_PROG([BISON], [bison])
 AC_PATH_PROG([FLEX], [flex])
 AX_PATH_PROG_OR_FAIL([PERL], [perl])
 AX_PATH_PROG_OR_FAIL([AWK], [awk])
@@ -516,5 +515,10 @@ AC_ARG_ENABLE([pvshim],
     esac
 ])
 AC_SUBST(pvshim)
+AS_IF([test "x$pvshim" = "xy"], [
+    AX_PATH_PROG_OR_FAIL([BISON], [bison])
+], [
+    AC_PATH_PROG([BISON], [bison])
+])
 
 AC_OUTPUT()
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Fri Feb 05 12:00:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 12:00:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81651.150975 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7zmJ-0000MW-2L; Fri, 05 Feb 2021 11:59:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81651.150975; Fri, 05 Feb 2021 11:59:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7zmI-0000MP-Vc; Fri, 05 Feb 2021 11:59:58 +0000
Received: by outflank-mailman (input) for mailman id 81651;
 Fri, 05 Feb 2021 11:59:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=v9N4=HH=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1l7zmH-0000MK-Pr
 for xen-devel@lists.xenproject.org; Fri, 05 Feb 2021 11:59:57 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 54b79f53-e88a-484c-a2d1-d9285d649574;
 Fri, 05 Feb 2021 11:59:56 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B2D3EAD4E;
 Fri,  5 Feb 2021 11:59:55 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 54b79f53-e88a-484c-a2d1-d9285d649574
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612526395; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=cHaQ1mgKnkADJHg5lWEVnDaP95vX1UErggdff0BFbjM=;
	b=P5FWX4a3vpbBwZliohutbraFAI/5k8c2dcMxzLFks5lL2VDPDblNYotNswwhv0IfPnbCMb
	Uvka14HPlkhP+aXo1oRnW46nTmzU4hwD02DUND/Dd+r0enynBGreSjjIM6ZItn0Gbp1Mld
	+l5smDBZhhbaJXzH0dsrEgtKeQuRZek=
Subject: =?UTF-8?Q?Re=3a_Ping=c2=b2=3a_=5bPATCH=5d_x86/PV=3a_conditionally_a?=
 =?UTF-8?Q?void_raising_=23GP_for_early_guest_MSR_accesses?=
To: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <7e69db81-cee7-3c7b-be64-4f5ff50fbe9c@suse.com>
 <cf814663-0319-6a30-f3a2-dc43432eedb1@citrix.com>
 <cf24a63e-afe9-be6a-3ab9-cc65e19a7a0f@suse.com>
 <aad25a24-b598-4c35-05f0-80f39152c11e@suse.com>
 <d4be9aea-0c14-dac6-5fb6-431f7899f075@suse.com>
 <7fdfac30-0c7f-f07d-fc7c-f7bb87b71a28@citrix.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <ce11e82c-a80b-60fd-73a3-f9212f975767@suse.com>
Date: Fri, 5 Feb 2021 12:59:55 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <7fdfac30-0c7f-f07d-fc7c-f7bb87b71a28@citrix.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="7ukG6CC3P3LCc6ClqkY6hbZsGInvdqvC5"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--7ukG6CC3P3LCc6ClqkY6hbZsGInvdqvC5
Content-Type: multipart/mixed; boundary="yVfWrfIlRbVKu29Wujrq1zmFX9vSIR4D7";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Message-ID: <ce11e82c-a80b-60fd-73a3-f9212f975767@suse.com>
Subject: =?UTF-8?Q?Re=3a_Ping=c2=b2=3a_=5bPATCH=5d_x86/PV=3a_conditionally_a?=
 =?UTF-8?Q?void_raising_=23GP_for_early_guest_MSR_accesses?=
References: <7e69db81-cee7-3c7b-be64-4f5ff50fbe9c@suse.com>
 <cf814663-0319-6a30-f3a2-dc43432eedb1@citrix.com>
 <cf24a63e-afe9-be6a-3ab9-cc65e19a7a0f@suse.com>
 <aad25a24-b598-4c35-05f0-80f39152c11e@suse.com>
 <d4be9aea-0c14-dac6-5fb6-431f7899f075@suse.com>
 <7fdfac30-0c7f-f07d-fc7c-f7bb87b71a28@citrix.com>
In-Reply-To: <7fdfac30-0c7f-f07d-fc7c-f7bb87b71a28@citrix.com>

--yVfWrfIlRbVKu29Wujrq1zmFX9vSIR4D7
Content-Type: multipart/mixed;
 boundary="------------3E654E1D7EF6826373AC9B2B"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------3E654E1D7EF6826373AC9B2B
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 05.02.21 12:32, Andrew Cooper wrote:
> On 05/02/2021 10:56, J=C3=BCrgen Gro=C3=9F wrote:
>> On 05.02.21 11:14, Jan Beulich wrote:
>>> (simply re-sending what was sent over 2 months ago)
>>>
>>> On 04.11.2020 11:50, Jan Beulich wrote:
>>>> On 03.11.2020 18:31, Andrew Cooper wrote:
>>>>> On 03/11/2020 17:06, Jan Beulich wrote:
>>>>>> Prior to 4.15 Linux, when running in PV mode, did not install a #G=
P
>>>>>> handler early enough to cover for example the rdmsrl_safe() of
>>>>>> MSR_K8_TSEG_ADDR in bsp_init_amd() (not to speak of the unguarded
>>>>>> read
>>>>>> of MSR_K7_HWCR later in the same function). The respective change
>>>>>> (42b3a4cb5609 "x86/xen: Support early interrupts in xen pv
>>>>>> guests") was
>>>>>> backported to 4.14, but no further - presumably since it wasn't
>>>>>> really
>>>>>> easy because of other dependencies.
>>>>>>
>>>>>> Therefore, to prevent our change in the handling of guest MSR
>>>>>> accesses
>>>>>> to render PV Linux 4.13 and older unusable on at least AMD
>>>>>> systems, make
>>>>>> the raising of #GP on these paths conditional upon the guest having
>>>>>> installed a handler. Producing zero for reads and discarding writes
>>>>>> isn't necessarily correct and may trip code trying to detect presence
>>>>>> of MSRs early, but since such detection logic won't work without a #GP
>>>>>> handler anyway, this ought to be a fair workaround.
>>>>>>
>>>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>>>
>>>>> I appreciate that we probably have to do something, but I don't think
>>>>> this is a wise move.
>>>>
>>>> I wouldn't call it wise either, but I'm afraid something along
>>>> these lines is necessary.
>>>>
>>>>> Linux is fundamentally buggy.  It is deliberately looking for a
>>>>> potential #GP fault given its use of rdmsrl_safe().  The reason
>>>>> this bug stayed hidden for so long was a consequence of Xen's
>>>>> inappropriate MSR handling for guests, and the reasons for changing
>>>>> Xen's behaviour still stand.
>>>>
>>>> I agree.
>>>>
>>>>> This change, in particular, does not apply to any explicitly handled
>>>>> MSRs, and therefore is not a comprehensive fix.
>>>>
>>>> But it's intentional that this deals with the situation in a
>>>> generic way, not on a per-MSR basis. If we did as you suggest
>>>> further down, we'd have to audit all Linux versions up to 4.14
>>>> for similar issues with other MSRs. I don't think this would
>>>> be a practical thing to do, and I also don't think that leaving
>>>> things as they are until we have concrete reports of problems
>>>> is a viable option either.
>>>>
>>>> Adding explicit handling for the two offending MSRs (and any
>>>> possible further ones we discover) would imo only be to avoid
>>>> issuing the respective log messages.
>>>>
>>>>>    Nor is it robust to
>>>>> someone adding code to explicitly handle the impacted MSRs at a later
>>>>> date (which we are likely to need to do for HWCR), and which would
>>>>> reintroduce this failure to boot.
>>>>
>>>> I'm afraid I don't understand. Looking at the two functions
>>>> the patch alters, only X86EMUL_OKAY is used in return statements
>>>> other than the final one. If this model is to be followed by
>>>> future additions (which I think it ought to be; perhaps we
>>>> should add comments to this effect), the code introduced here
>>>> will take care of the situation nevertheless.
>>>>
>>>>> We should have the impacted MSRs handled explicitly, with a note
>>>>> stating this was a bug in Linux 4.14 and older.  We already have
>>>>> workarounds for similar bugs in Windows, and it also gives us a
>>>>> timeline to eventually remove support for obsolete workarounds,
>>>>> rather than having a "now and in the future, we'll explicitly
>>>>> tolerate broken PV behaviour for one bug back in ancient Linux".
>>>>
>>>> Comparing with Windows isn't very helpful; the patch here is
>>>> specifically about PV, and would help other OSes as well in
>>>> case they would have missed setting up exceptions early in
>>>> just the PV-on-Xen case. For the HVM case I'd indeed rather
>>>> see us go the route we've gone for Windows, if need be.
>>>
>>> As can be seen from this reply, we're not in agreement what to
>>> do here. But we need to do something. I'm not sure how to get
>>> unstuck discussions like this one ...
>>>
>>> Besides this suggestion of yours I also continue to have
>>> trouble seeing what good it will do to record an exception to
>>> inject into a guest when we know it didn't install a handler
>>> yet.
>>
>> As we need to consider backports of processor bug mitigations
>> in old guests, too, I think we need to have a "catch-all"
>> fallback.
>>
>> Not being able to run an old updated guest until we add handling
>> for a new MSR isn't a viable option IMO.
> 
> You're trading off against issuing XSAs for all the unknown quantities
> of sensitive data in MSRs on all past and future platforms.  This has
> unbounded scope.
> 
> Xen's previous behaviour was astoundingly stupid, and yes - we're
> playing more than a decade's worth of catchup in one release cycle.
> 
> I'll absolutely take "care/tweaks need to happen crossing the Xen
> 4.14=>4.15 boundary" over whack-a-mole for MSRs in the form of security
> advisories.

What about making this behavior controllable via a boot parameter?

Selecting that could result in a boot warning issued via warning_add().


Juergen

--------------3E654E1D7EF6826373AC9B2B--

--yVfWrfIlRbVKu29Wujrq1zmFX9vSIR4D7--

--7ukG6CC3P3LCc6ClqkY6hbZsGInvdqvC5

--7ukG6CC3P3LCc6ClqkY6hbZsGInvdqvC5--


From xen-devel-bounces@lists.xenproject.org Fri Feb 05 12:05:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 12:05:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81655.150988 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7zry-0001Gg-Us; Fri, 05 Feb 2021 12:05:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81655.150988; Fri, 05 Feb 2021 12:05:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7zry-0001GZ-Rp; Fri, 05 Feb 2021 12:05:50 +0000
Received: by outflank-mailman (input) for mailman id 81655;
 Fri, 05 Feb 2021 12:05:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cB+w=HH=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1l7zry-0001GU-Df
 for xen-devel@lists.xenproject.org; Fri, 05 Feb 2021 12:05:50 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ff0fc39f-8052-4b15-8a01-ec03e5a67ca9;
 Fri, 05 Feb 2021 12:05:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ff0fc39f-8052-4b15-8a01-ec03e5a67ca9
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612526748;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=wI9erkbu0/wX24Oxec5EiES1+4Cd6UYRzPRO/ELxRQk=;
  b=XP25Zh9/VSmb75vgNP6bUSrzmLOxriiPtOll0o2uHxpkqaaoHAVIcOl3
   sqpstEWk+7ff8fXtSsMqs9CuFdyhWYWpsYEnbIl0HJFZg0wPmxffKXGiA
   sdw9+apQ81aEbvEeBDtR/dCTi4BHnQO3FFCnYm5TVEIMMWwd7aMiT5w3W
   s=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 36634052
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,154,1610427600"; 
   d="scan'208";a="36634052"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=bf1c2yQ95iN019C3bGsNxiX5b5WD5gmJcp4s1QWoF2o5Bg08CET+iyLtUPfF7WgMYjErDnjblSuc8lbol/9bX4QWEet/7r9hbx9+Nb+WQKEnC96eOWcdE3WBpJCiviLDIKdjecfO7Ck19s/82HMZdiQ8NRmcgu4RbZpDFlZ7Opg9wjMd4APoNSuCHYjBaxd2SNO+VR8eaKpFp9tqoPGlFSEpXFtGQ9Fls9jl3b6zEQPZYOr9UYjSO6DfaDQxyUlkdvbgheYjV5JOWJ8glRLL7+jKw7G+2/Ddk1BCCe2Pq7kDZPxSJkOJC6zvc9mAD6HV2kmQ7Y2cu+UMU35HsyyTsw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fbkXz2G1gY1mWnqqlGfO24JSvUwQzoGuQbX7AZGJnFs=;
 b=P3pS0wzQVt5UMr8z/QGxHSb3TUj/qZjbwdSwsWrYyO5Zf6R/rcihc9ExwrMVgJfIvXeF+5rK7qugZe7l0b+AKZCDvctxKcUWxKHGQYQATNWTWH2NrKYlKW5cAQ3JdZ5PhDjRb0XUDpKIWlPIZhJsrImd+fjPCaImKjulIRNusuZKulUvA2qxN24b/M9tr+P+L/MpGqO7YcIhK0Sr8rX1WB+DHDcD6ZnYj4e6d2Kt5V+Mw5gMa4NNII0zVRnf6aCYA1eyA6WDHQQn3deUwJoWW/YuQ0F+0uDDvW4o/ehd5TRazoGohLCT9bgSvYXNbIquzONz4yAt3bMtidVcs57HWA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fbkXz2G1gY1mWnqqlGfO24JSvUwQzoGuQbX7AZGJnFs=;
 b=g/1bmK4svlUuAGAcRhS8Av02770YxmSX6kTWs+vQfJReOZfwNIK88jrgWxczlcSYFlfwgsWQaFrQrXa6PPt5m7fbzmtyqN32QhlI3nBQf4+Q0+9qFLWmZU5K0DShSwUu3SOaD8+kssV2PbXrjIeEqJrBPnBjcgl8NzHxEoK6P2I=
Date: Fri, 5 Feb 2021 13:05:38 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>, Jan Beulich
	<jbeulich@suse.com>, Wei Liu <wl@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: Re: =?utf-8?B?UGluZ8KyOiBbUEFUQ0hdIHg4Ni9Q?=
 =?utf-8?Q?V=3A_conditionall?= =?utf-8?Q?y?= avoid raising #GP for early
 guest MSR accesses
Message-ID: <YB00kpC8Iz54TbxA@Air-de-Roger>
References: <7e69db81-cee7-3c7b-be64-4f5ff50fbe9c@suse.com>
 <cf814663-0319-6a30-f3a2-dc43432eedb1@citrix.com>
 <cf24a63e-afe9-be6a-3ab9-cc65e19a7a0f@suse.com>
 <aad25a24-b598-4c35-05f0-80f39152c11e@suse.com>
 <d4be9aea-0c14-dac6-5fb6-431f7899f075@suse.com>
 <7fdfac30-0c7f-f07d-fc7c-f7bb87b71a28@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <7fdfac30-0c7f-f07d-fc7c-f7bb87b71a28@citrix.com>
X-ClientProxiedBy: PR3P189CA0031.EURP189.PROD.OUTLOOK.COM
 (2603:10a6:102:53::6) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: c6427c3a-61cf-4726-60d6-08d8c9ce5c6c
X-MS-TrafficTypeDiagnostic: DS7PR03MB5591:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DS7PR03MB5591D2A44F218B283366ECC78FB29@DS7PR03MB5591.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:2803;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: c6427c3a-61cf-4726-60d6-08d8c9ce5c6c
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Feb 2021 12:05:42.5401
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: /14aRT8UB1KgMPVenQWwM7nMQH2IHCUcXH3a4SH1tJFGzGj55oSqSfwXkgV2npVLdRp9tmWNAuGnOUm8TuDqEQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR03MB5591
X-OriginatorOrg: citrix.com

On Fri, Feb 05, 2021 at 11:32:07AM +0000, Andrew Cooper wrote:
> On 05/02/2021 10:56, Jürgen Groß wrote:
> > On 05.02.21 11:14, Jan Beulich wrote:
> >> (simply re-sending what was sent over 2 months ago)
> >>
> >> On 04.11.2020 11:50, Jan Beulich wrote:
> >>> On 03.11.2020 18:31, Andrew Cooper wrote:
> >>>> On 03/11/2020 17:06, Jan Beulich wrote:
> >>>>> Prior to 4.15, Linux, when running in PV mode, did not install a #GP
> >>>>> handler early enough to cover for example the rdmsrl_safe() of
> >>>>> MSR_K8_TSEG_ADDR in bsp_init_amd() (not to speak of the unguarded
> >>>>> read
> >>>>> of MSR_K7_HWCR later in the same function). The respective change
> >>>>> (42b3a4cb5609 "x86/xen: Support early interrupts in xen pv
> >>>>> guests") was
> >>>>> backported to 4.14, but no further - presumably since it wasn't
> >>>>> really
> >>>>> easy because of other dependencies.
> >>>>>
> >>>>> Therefore, to prevent our change in the handling of guest MSR
> >>>>> accesses
> >>>>> to render PV Linux 4.13 and older unusable on at least AMD
> >>>>> systems, make
> >>>>> the raising of #GP on these paths conditional upon the guest having
> >>>>> installed a handler. Producing zero for reads and discarding writes
> >>>>> isn't necessarily correct and may trip code trying to detect
> >>>>> presence of
> >>>>> MSRs early, but since such detection logic won't work without a #GP
> >>>>> handler anyway, this ought to be a fair workaround.
> >>>>>
> >>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> >>>>
> >>>> I appreciate that we probably have to do something, but I don't think
> >>>> this is a wise move.
> >>>
> >>> I wouldn't call it wise either, but I'm afraid something along
> >>> these lines is necessary.
> >>>
> >>>> Linux is fundamentally buggy.  It is deliberately looking for a
> >>>> potential #GP fault given its use of rdmsrl_safe().  The reason
> >>>> this bug
> >>>> stayed hidden for so long was a consequence of Xen's inappropriate
> >>>> MSR handling for guests, and the reasons for changing Xen's behaviour
> >>>> still stand.
> >>>
> >>> I agree.
> >>>
> >>>> This change, in particular, does not apply to any explicitly handled
> >>>> MSRs, and therefore is not a comprehensive fix.
> >>>
> >>> But it's intentional that this deals with the situation in a
> >>> generic way, not on a per-MSR basis. If we did as you suggest
> >>> further down, we'd have to audit all Linux versions up to 4.14
> >>> for similar issues with other MSRs. I don't think this would
> >>> be a practical thing to do, and I also don't think that leaving
> >>> things as they are until we have concrete reports of problems
> >>> is a viable option either.
> >>>
> >>> Adding explicit handling for the two offending MSRs (and any
> >>> possible further ones we discover) would imo only be to avoid
> >>> issuing the respective log messages.
> >>>
> >>>>    Nor is it robust to
> >>>> someone adding code to explicitly handle the impacted MSRs at a
> >>>> later
> >>>> date (which we are likely to need to do for HWCR), and which would
> >>>> reintroduce this failure to boot.
> >>>
> >>> I'm afraid I don't understand. Looking at the two functions
> >>> the patch alters, only X86EMUL_OKAY is used in return statements
> >>> other than the final one. If this model is to be followed by
> >>> future additions (which I think it ought to be; perhaps we
> >>> should add comments to this effect), the code introduced here
> >>> will take care of the situation nevertheless.
> >>>
> >>>> We should have the impacted MSRs handled explicitly, with a note
> >>>> stating
> >>>> this was a bug in Linux 4.14 and older.  We already have workarounds
> >>>> for
> >>>> similar bugs in Windows, and it also gives us a timeline to eventually
> >>>> remove support for obsolete workarounds, rather than having a
> >>>> "now and
> >>>> in the future, we'll explicitly tolerate broken PV behaviour for
> >>>> one bug
> >>>> back in ancient Linux".
> >>>
> >>> Comparing with Windows isn't very helpful; the patch here is
> >>> specifically about PV, and would help other OSes as well in
> >>> case they would have missed setting up exceptions early in
> >>> just the PV-on-Xen case. For the HVM case I'd indeed rather
> >>> see us go the route we've gone for Windows, if need be.
> >>
> >> As can be seen from this reply, we're not in agreement what to
> >> do here. But we need to do something. I'm not sure how to get
> >> unstuck discussions like this one ...
> >>
> >> Besides this suggestion of yours I also continue to have
> >> trouble seeing what good it will do to record an exception to
> >> inject into a guest when we know it didn't install a handler
> >> yet.
> >
> > As we need to consider backports of processor bug mitigations
> > in old guests, too, I think we need to have a "catch-all"
> > fallback.
> >
> > Not being able to run an old updated guest until we add handling
> > for a new MSR isn't a viable option IMO.
> 
> You're trading off against issuing XSAs for all the unknown quantities
> of sensitive data in MSRs on all past and future platforms.  This has
> unbounded scope.
> 
> Xen's previous behaviour was astoundingly stupid, and yes - we're
> playing more than a decade's worth of catchup in one release cycle.
> 
> I'll absolutely take "care/tweaks need to happen crossing the Xen
> 4.14=>4.15 boundary" over whack-a-mole for MSRs in the form of security
> advisories.

I think I'm likely missing part of the point here - Jan's patch would
just return 0 for reads, so there's no leak of unhandled MSR contents?
Hence I'm not seeing the XSA aspect of this.

Roger.


From xen-devel-bounces@lists.xenproject.org Fri Feb 05 12:11:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 12:11:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81661.151000 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7zxA-0002K9-Kz; Fri, 05 Feb 2021 12:11:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81661.151000; Fri, 05 Feb 2021 12:11:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l7zxA-0002K2-GH; Fri, 05 Feb 2021 12:11:12 +0000
Received: by outflank-mailman (input) for mailman id 81661;
 Fri, 05 Feb 2021 12:11:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7zx9-0002Ju-Ji; Fri, 05 Feb 2021 12:11:11 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7zx9-0005Td-9d; Fri, 05 Feb 2021 12:11:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l7zx8-0004YS-Sv; Fri, 05 Feb 2021 12:11:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l7zx8-0006BB-Q1; Fri, 05 Feb 2021 12:11:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=2RxeJObTJFbEk2j+xgFU8jXKx9G5TYtbD1PIkHLUIhI=; b=IKLHLq+neYGF5nDGzoyHE16i2S
	gmVcj7HbYCR1A5XaZFlPdAfG3wvwukgXvHr1/Xm+4e3SLydDFe7JlxJrSDkcTi8rv3ANefSNYiWZ4
	f03LDGNY+e4cNzwHnh1xwVuIBu+DJ3z7RbOhyP36L2ETHmktSLRPvBvxcwdaJ8zLQEHU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159016-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.11-testing test] 159016: regressions - FAIL
X-Osstest-Failures:
    xen-4.11-testing:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    xen-4.11-testing:test-armhf-armhf-libvirt-raw:guest-start:fail:regression
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f9090d990e201a5ca045976b8ddaab9fa6ee69dd
X-Osstest-Versions-That:
    xen=310ab79875cb705cc2c7daddff412b5a4899f8c9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 05 Feb 2021 12:11:10 +0000

flight 159016 xen-4.11-testing real [real]
flight 159039 xen-4.11-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/159016/
http://logs.test-lab.xenproject.org/osstest/logs/159039/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 157566
 test-armhf-armhf-libvirt-raw 13 guest-start              fail REGR. vs. 157566

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 157566
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 157566
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157566
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157566
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157566
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157566
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157566
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157566
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157566
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157566
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157566
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157566
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f9090d990e201a5ca045976b8ddaab9fa6ee69dd
baseline version:
 xen                  310ab79875cb705cc2c7daddff412b5a4899f8c9

Last test of basis   157566  2020-12-15 14:05:54 Z   51 days
Testing same since   159016  2021-02-04 15:05:58 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit f9090d990e201a5ca045976b8ddaab9fa6ee69dd
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Feb 4 15:41:12 2021 +0100

    x86/msr: fix handling of MSR_IA32_PERF_{STATUS/CTL} (again)
    
    X86_VENDOR_* aren't bit masks in the older trees.
    
    Reported-by: James Dingwall <james@dingwall.me.uk>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri Feb 05 12:20:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 12:20:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81669.151015 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l805a-0002lu-NT; Fri, 05 Feb 2021 12:19:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81669.151015; Fri, 05 Feb 2021 12:19:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l805a-0002ln-KN; Fri, 05 Feb 2021 12:19:54 +0000
Received: by outflank-mailman (input) for mailman id 81669;
 Fri, 05 Feb 2021 12:19:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cB+w=HH=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1l805Z-0002li-CG
 for xen-devel@lists.xenproject.org; Fri, 05 Feb 2021 12:19:53 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 84997aa5-a3f6-4916-a0d3-82379f7e9a80;
 Fri, 05 Feb 2021 12:19:52 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 84997aa5-a3f6-4916-a0d3-82379f7e9a80
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612527592;
  h=from:to:cc:subject:date:message-id:
   content-transfer-encoding:mime-version;
  bh=vD7gtRp67wc93VIMFNu6NWMTgUtulnuGQomOx0lsWvs=;
  b=Evz1kYQ6WyTiCDSbCgqwTj9y0javz3m/7W555GX7t5TmhMtVapgpziOu
   pjl1Ikk+y5zvAIDGzUeN4l4oBQeEAgMFfIEHRZtoZv8xxnkPBmzMemY7v
   IMANUg8ypYofbfUsZmaC7yDNpNraFWtBPjsBRVJqrTr9SrGZ9SebZBfER
   4=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: Wz5BbAfAl7dQ7HaTfG8L+snBoa4T1d3gCxdYDCzPoFOnuw3gy3UNNfNOKuDWDDpxmuIVFq/N/X
 LhflLMA/eI3eYerk8jWk0Y+EgRimtsLiIGTZxs4JwOPKo/oaYsRlsOwX7WPopJ+8mfoMdtLEYZ
 2J5saNOlJJVTb3TZ5l105054uCcTTjR/P56lDN075QdQE/0ySjir82VCvSjEhAowJXFBUxaeLp
 aENImUmVuBYT/4QvB3p8MGFzJy+GD/tqx0ZoP5WCaGJs6PJvf+KWh1+Sa1crniipCaOWqxE96N
 OoE=
X-SBRS: 5.2
X-MesageID: 36667251
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,154,1610427600"; 
   d="scan'208";a="36667251"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=TnFOM6v5W7e6bokJLMEBuneGnW1mhOqSs8vTDmawHNFEMK4tWUzqbTJcqZvyFQu3tuPlt0qkFnNhKyzs/CKhmtBSYh4P8Tm4u8UnnqhaH5zqTS8GqFYP37lnrzrk9XZ+ShHht+MPmMTMZwje4ZFPbt0IDglP6tt8xk+y2mtAu/spmmmtjDlC0dh1j0G6iM5LtMP0P91M4suJFlfPjIsVQcQsOM16tlttP08kW77NcEZ8fsKIiuCLBabEKKmC5Aon7arJAa5i3uCgnwkdpwTwDXwLZwpZ3ZNd3IJzYUZNknyhAARM0hEm2RrAaY9RHB9Ft/DJtAbYrvrSQZ/WdfzAMg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=UV5n6nH1iEwsFQQCZRAUmnKqYBOrhumq+1ERLJqmQgY=;
 b=FNTMfKGAwT1iHlr3ESpyr9Bi7ik/L09kdBx5mZlh6Cn77F3ThmaPqnvWTU1cPlZCPKQOi+kTXm9Tq0lLQCbhYdHKevvhhEXsS1YRchwowVwxLy/8Es2UajmvLXxt71HAVZmizxDkdChTFB6t6mgvLdzuXBfQ3eqdhjC74vkYVIx3qcuLTRIbNEO1Miwkx1FDOY75fgXNt5+Z3ebNFHyvnqp6UavgenKAASh5N0ThEmrdssDZ81WxPLNEvAuLRady92qbfWz8Unc7z9eHfSqpuS5L/HePSHfrnBTYs+lPC3fLon415U8UPNZ0xxD5cTMw+70EFhm3fgLVyYTffgqcBA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=UV5n6nH1iEwsFQQCZRAUmnKqYBOrhumq+1ERLJqmQgY=;
 b=GNHz1UpOD8wAcOSYqxmYdOJtfADV0lXxtdg7anyPRdBd+7u4O3FzixjjwyfNubXDheEK2SSFcMowLKcR76LeeGI9szv6bhPmFCrGE8/TMSOlLl0xrxTosboA5saNRIEqTGRSdoIkt3AkQixOS0ZCDILA/t2Yfqbvb8hozpTbsms=
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Roger Pau Monne <roger.pau@citrix.com>, Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH for-4.15] tools/tests: fix resource test build on FreeBSD
Date: Fri,  5 Feb 2021 13:19:38 +0100
Message-ID: <20210205121938.4636-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.29.2
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: AM6PR01CA0063.eurprd01.prod.exchangelabs.com
 (2603:10a6:20b:e0::40) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: d7c3b659-bfe5-416e-7ba8-08d8c9d053c1
X-MS-TrafficTypeDiagnostic: DM6PR03MB4684:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB4684411AAEA6AF8FB7E70A2A8FB29@DM6PR03MB4684.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:4941;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: LVgu7sq9XJ6doZGsYSE4tuurk8Ktq+XjDxArq2wjOzNYRdn1cWrY7Fimgr4LPt1brp9z8Pjuu15fDffXCXPiiklvyyaHySMpgi80nt4H1V7inIUCaGWsc9lOwsdyf4PqMb0B4K0XUrLr6LMNpRM26wJa7EL7TnI6ZXTEMD7dS3+si0MasnpfCUUkLWsBRlSEmrpFeL+RCwxN3FehcrYi6CDYXDYJI/9EDqlNDoRbsgIeQrFFPDSKxEk7r5cKCdCLNBFTPKOr0aX5sCUurl6ke9binkJ4JAAWGtNcgiMHnxsqO7E2KhD8VewSrD2Gv5Mh/6aNzkwAfDXw4h6M0d0kMaaQlA2HdIMeXb/+yiFL/AmTz/K8kGrWXWAoVuqO0vva/s28bNMx21S43jtbwjTBXj6AEYMHdS0PUO/tqvthIOF2Wth/C1i/zt6QixIdgZBkzwSDUqBsjrwsICGTJknQx0VvWWFOnJO1dQfZqA8SGRdnNl/N1lYcAAcnnwQ2LhJhkJiBPufAQbKYCPaBT822Hg==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(39860400002)(376002)(366004)(136003)(346002)(6666004)(54906003)(4744005)(478600001)(1076003)(316002)(66556008)(26005)(956004)(6496006)(5660300002)(66476007)(86362001)(6916009)(2906002)(8936002)(83380400001)(186003)(6486002)(2616005)(66946007)(8676002)(4326008)(16526019)(36756003);DIR:OUT;SFP:1101;
X-MS-Exchange-CrossTenant-Network-Message-Id: d7c3b659-bfe5-416e-7ba8-08d8c9d053c1
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Feb 2021 12:19:46.9550
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 1qqxKVD6KaceaV9aJ2uDMIHqJAC/C//PaTdt84O//rf9NyysKIcA0wTwH7iydMAVAJOkELdMiLyGnc43Us7HJw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4684
X-OriginatorOrg: citrix.com

error.h is not a standard header, and none of the functions declared
there are actually used by the code. Dropping the include fixes the
build on FreeBSD, which doesn't provide error.h.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Build tested on both Linux and FreeBSD. The risk would be breaking the
build, but it's already broken on FreeBSD, so there's not much to lose.
Build breakages on Linux will be spotted fast by either osstest or the
gitlab build.
---
 tools/tests/resource/test-resource.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/tools/tests/resource/test-resource.c b/tools/tests/resource/test-resource.c
index a409a82f44..1caaa60e62 100644
--- a/tools/tests/resource/test-resource.c
+++ b/tools/tests/resource/test-resource.c
@@ -1,6 +1,5 @@
 #include <err.h>
 #include <errno.h>
-#include <error.h>
 #include <stdio.h>
 #include <string.h>
 #include <sys/mman.h>
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Fri Feb 05 12:50:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 12:50:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81691.151030 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l80ZT-0006br-8L; Fri, 05 Feb 2021 12:50:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81691.151030; Fri, 05 Feb 2021 12:50:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l80ZT-0006bk-5B; Fri, 05 Feb 2021 12:50:47 +0000
Received: by outflank-mailman (input) for mailman id 81691;
 Fri, 05 Feb 2021 12:50:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jdnZ=HH=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l80ZS-0006bf-2D
 for xen-devel@lists.xenproject.org; Fri, 05 Feb 2021 12:50:46 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2cdd1cd4-9da5-4a52-8f7b-3799ab250f72;
 Fri, 05 Feb 2021 12:50:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2cdd1cd4-9da5-4a52-8f7b-3799ab250f72
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612529443;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=nzCzGFFzNmvLz6Ikgfi4AHuLd9L2881IN3rxWVPOMHw=;
  b=TsrTgellj215RuDh5VUscn1pomI5ebxyCvZOjSKOs4K+natmcqF2M7L2
   wZXoAImQeE4j/zXY2FwQ9SkUXXEo0oMOpcp+O5sZJ1xmQZ1celQoRUxDo
   +NbnCrbh2GGaMSMOWz6x0nFdIQQ+7hascZiUJPySrwQn7aNCUeSeg+j80
   g=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: frZBMeSvHkGny/4cocqOwsggbJPhCuCtzL4kwGLB06mfRKZxRdVPhR+JLnb/PAeJcuA6374x8n
 g56hkFuO8758M1NOmNfqkndMJ1+U6+6qAgWWJA9nWQyQJTgOUBSNy7CfVrus5oJ/IhTCSUPgsH
 eGn2+TrbljdzCb9W8Ljn1y5T0Lbcbccs1rd7/my9cjAboiZ1I0LHXeCd6ygy+LD1adAtzRgBSG
 tyfrkCyVSQDI6ZWAqsLuvy2PzIAfGPHAxNy6CkFQyQ1mSrS8ng4sB8dtq1/4XQ7eP+V1/NuxrZ
 sSk=
X-SBRS: 5.2
X-MesageID: 36668927
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,154,1610427600"; 
   d="scan'208";a="36668927"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=JePFRaeokWPCP8CNVuYbmGBOMjmg4riTG22LPVyYakh0yPSxExYqxo2Bv3pBjcLYqB4N8rtL2GeXDCDNPcsuoYEKDSk3JPVeR4AEEq3HM4eUWnGgZQg/+9Nn+CIRU8QeQediARHbRkNE0CnX/roRAp3r7XdPBlEwClYAHMz4ih5gi7vmHge8wyJhOWAVj497y0A3Y1255jH7+zXYCM+2+PF7OhOBONHI6QDvu1xju91DDlFR3XceZlGFgDIfM5KjtL9Z6NRnzr26lZ6sLL0goyBOt/S0p44sCR996Hxz8t+OLqJmnYeKqeVmzfvQd7BwSTFRsqv1DtZlFQdWGwDX2A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nzCzGFFzNmvLz6Ikgfi4AHuLd9L2881IN3rxWVPOMHw=;
 b=SLl3pw6J2Ge6iSBzn45kFxMi6KnQ+VrMOkBB99Z8D3E7wkRbkTRQke4cgYDG4HhXjIDGXEZJxrkUXD8AvQqXJNRc4mR//kAX2H3oumyBFvZ/MniRb+b2XvK90Yx3U974Lvq0+b66zVZ2wSUxJPl5BRbTUbGaYLIhi4amnPXRVe//84WvyaLjBdk5NO9A3NSbyECI2SBH5EonagO7J14SINbyCnGLmv7E1sa5vcpHgnoVZg8bVY85Uz/4y+2O4g4KZcf6Yri41Ns+U4pnnh/q8Y9dQ+uu51B5s4gefbZ8m3sj1gsmEVNnKDwbCVy2uVjVGMOxZRgEDfffGjcFBCyxhw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nzCzGFFzNmvLz6Ikgfi4AHuLd9L2881IN3rxWVPOMHw=;
 b=qmdXIpAKPwrewiPOi+t6jltNsnK7wnBCszr8ZBgUiSoMDBXjYPv/tP5J8EGEH2DQASnzHSSsxxwSImeYgnlZgs77eEZx86jNyAXu1AOeK9XR4iN5rjYvSJsiXkNeUyLvtSFXfRhuNfWm1GOLH3KhxsVdG1ofGa8juNfahT05fBM=
Subject: Re: [PATCH for-4.15] tools/tests: fix resource test build on FreeBSD
To: Roger Pau Monne <roger.pau@citrix.com>, <xen-devel@lists.xenproject.org>
CC: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210205121938.4636-1-roger.pau@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <89d70b24-5b47-6e7f-6a89-8d6a4cb2e59a@citrix.com>
Date: Fri, 5 Feb 2021 12:50:31 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
In-Reply-To: <20210205121938.4636-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0110.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:192::7) To BYAPR03MB4728.namprd03.prod.outlook.com
 (2603:10b6:a03:13a::24)
MIME-Version: 1.0

On 05/02/2021 12:19, Roger Pau Monné wrote:
> error.h is not a standard header, and none of the functions declared
> there are actually used by the code. This fixes the build on FreeBSD,
> which doesn't have error.h.
>
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Urgh, sorry. Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Fri Feb 05 13:00:56 2021
Subject: Re: Ping²: [PATCH] x86/PV: conditionally avoid raising #GP for
 early guest MSR accesses
To: Jürgen Groß <jgross@suse.com>
Cc: Wei Liu <wl@xen.org>, Roger Pau Monné
 <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
References: <7e69db81-cee7-3c7b-be64-4f5ff50fbe9c@suse.com>
 <cf814663-0319-6a30-f3a2-dc43432eedb1@citrix.com>
 <cf24a63e-afe9-be6a-3ab9-cc65e19a7a0f@suse.com>
 <aad25a24-b598-4c35-05f0-80f39152c11e@suse.com>
 <d4be9aea-0c14-dac6-5fb6-431f7899f075@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d6cf3dbc-9f82-8e1a-e088-f7ae036d16e3@suse.com>
Date: Fri, 5 Feb 2021 14:00:48 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <d4be9aea-0c14-dac6-5fb6-431f7899f075@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 05.02.2021 11:56, Jürgen Groß wrote:
> As we need to consider backports of processor bug mitigations
> in old guests, too, I think we need to have a "catch-all"
> fallback.
> 
> Not being able to run an old updated guest until we add handling
> for a new MSR isn't a viable option IMO.

I'm not sure I follow you here: such backports should still make
use of the respective CPUID bits, and hence shouldn't contain
"blind" MSR accesses. And if something really does need to
probe an MSR, then I'd expect such a backport to make sure the
probing actually works, via a (presumably prerequisite) backport.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Feb 05 13:04:47 2021
Subject: Re: Ping²: [PATCH] x86/PV: conditionally avoid raising #GP for
 early guest MSR accesses
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, Roger Pau Monné
 <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Jürgen Groß <jgross@suse.com>
References: <7e69db81-cee7-3c7b-be64-4f5ff50fbe9c@suse.com>
 <cf814663-0319-6a30-f3a2-dc43432eedb1@citrix.com>
 <cf24a63e-afe9-be6a-3ab9-cc65e19a7a0f@suse.com>
 <aad25a24-b598-4c35-05f0-80f39152c11e@suse.com>
 <d4be9aea-0c14-dac6-5fb6-431f7899f075@suse.com>
 <7fdfac30-0c7f-f07d-fc7c-f7bb87b71a28@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <153ebca8-cffe-53da-420b-274ac0f535d9@suse.com>
Date: Fri, 5 Feb 2021 14:04:43 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <7fdfac30-0c7f-f07d-fc7c-f7bb87b71a28@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 05.02.2021 12:32, Andrew Cooper wrote:
> On 05/02/2021 10:56, Jürgen Groß wrote:
>> On 05.02.21 11:14, Jan Beulich wrote:
>>> On 04.11.2020 11:50, Jan Beulich wrote:
>>>> On 03.11.2020 18:31, Andrew Cooper wrote:
>>>>> We should have the impacted MSRs handled explicitly, with a note
>>>>> stating this was a bug in Linux 4.14 and older. We already have
>>>>> workarounds for similar bugs in Windows, and it also gives us a
>>>>> timeline for eventually removing support for obsolete workarounds,
>>>>> rather than having a "now and in the future, we'll explicitly
>>>>> tolerate broken PV behaviour for one bug back in ancient Linux".
>>>>
>>>> Comparing with Windows isn't very helpful; the patch here is
>>>> specifically about PV, and would help other OSes as well in
>>>> case they would have missed setting up exceptions early in
>>>> just the PV-on-Xen case. For the HVM case I'd indeed rather
>>>> see us go the route we've gone for Windows, if need be.
>>>
>>> As can be seen from this reply, we're not in agreement what to
>>> do here. But we need to do something. I'm not sure how to get
>>> unstuck discussions like this one ...
>>>
>>> Besides this suggestion of yours I also continue to have
>>> trouble seeing what good it will do to record an exception to
>>> inject into a guest when we know it didn't install a handler
>>> yet.
>>
>> As we need to consider backports of processor bug mitigations
>> in old guests, too, I think we need to have a "catch-all"
>> fallback.
>>
>> Not being able to run an old updated guest until we add handling
>> for a new MSR isn't a viable option IMO.
> 
> You're trading off against issuing XSAs for all the unknown quantities
> of sensitive data in MSRs on all past and future platforms. This has
> unbounded scope.
> 
> Xen's previous behaviour was astoundingly stupid, and yes - we're
> playing more than a decade's worth of catch-up in one release cycle.
> 
> I'll absolutely take "care/tweaks need to happen crossing the Xen
> 4.14=>4.15 boundary" over whack-a-mole for MSRs in the form of security
> advisories.

All of this reads as if someone were asking to re-introduce the
blind leaking of MSRs. I'm pretty sure, though, that you realize
this isn't the case, so I guess I'm confused ...

Note that while I have just disagreed with Jürgen's specific
case, I still think the conclusion is right: where possible
_without_ reintroducing the bad original behavior, we should
try not to put ourselves in the position of needing to update
Xen once guests want to access new MSRs (or of simply being
unaware of present guests having rarely used code paths
leading to such MSR accesses).

Jan


From xen-devel-bounces@lists.xenproject.org Fri Feb 05 13:19:02 2021
To: Jan Beulich <jbeulich@suse.com>
Cc: Wei Liu <wl@xen.org>, Roger Pau Monné
 <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
References: <7e69db81-cee7-3c7b-be64-4f5ff50fbe9c@suse.com>
 <cf814663-0319-6a30-f3a2-dc43432eedb1@citrix.com>
 <cf24a63e-afe9-be6a-3ab9-cc65e19a7a0f@suse.com>
 <aad25a24-b598-4c35-05f0-80f39152c11e@suse.com>
 <d4be9aea-0c14-dac6-5fb6-431f7899f075@suse.com>
 <d6cf3dbc-9f82-8e1a-e088-f7ae036d16e3@suse.com>
From: Jürgen Groß <jgross@suse.com>
Subject: Re: Ping²: [PATCH] x86/PV: conditionally avoid raising #GP for
 early guest MSR accesses
Message-ID: <c1346b04-0582-8e23-e44f-aa93a79b34bd@suse.com>
Date: Fri, 5 Feb 2021 14:18:53 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <d6cf3dbc-9f82-8e1a-e088-f7ae036d16e3@suse.com>
Content-Type: text/plain; charset=utf-8

On 05.02.21 14:00, Jan Beulich wrote:
> On 05.02.2021 11:56, Jürgen Groß wrote:
>> As we need to consider backports of processor bug mitigations
>> in old guests, too, I think we need to have a "catch-all"
>> fallback.
>>
>> Not being able to run an old updated guest until we add handling
>> for a new MSR isn't a viable option IMO.
> 
> I'm not sure I follow you here: Such backports should still make
> use of the respective CPUID bits, and hence shouldn't contain
> "blind" MSR accesses. And if there's really something needing to
> probe an MSR, then I'd expect such a backport to make sure the
> probing actually works in a prereq (presumably) backport.

We know that Linux partially relies on blind MSR accesses being
tolerated on bare metal. With this, and with older Xen versions
having worked by _not_ faulting when the guest accesses illegal
MSRs, it is not that unlikely that some guests will suddenly fail
to boot.

Especially when backporting processor bug mitigations, I can
envision that not all distros will thoroughly test that the kernel
still runs as a PV guest on the newest Xen.


Juergen



From xen-devel-bounces@lists.xenproject.org Fri Feb 05 13:34:54 2021
Subject: Re: [PATCH for-4.15] tools/configure: add bison as mandatory
To: Roger Pau Monne <roger.pau@citrix.com>, <xen-devel@lists.xenproject.org>
CC: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210205115327.4086-1-roger.pau@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <bd9d86ef-485d-fc93-f402-0a97acd9d2dd@citrix.com>
Date: Fri, 5 Feb 2021 13:34:20 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
In-Reply-To: <20210205115327.4086-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
MIME-Version: 1.0
 =?utf-8?B?WndZTHBtc1lxUVo3UmhxOEFqYTZGdlhoN1JwYUZDWjI4TWxyVEEwKzZQYWlS?=
 =?utf-8?B?bERrV0NzU2x3aFdtVXZaQlVpWE81L1N1ZnVjMXNUdXZ5MUY2RDlYZFB0SHhs?=
 =?utf-8?B?MVgzcUJqZ2M1YjVDM2hUM1FzMDVFMVlRdHF5cGtlbnh2bjdkb1dpQjl2SUVq?=
 =?utf-8?B?SS93QnFPNGRyQXgwcGU4bFJndGxabGVUZUxCVWdjMkJHTVEwdWZzSmN2OTVV?=
 =?utf-8?B?Tkp6d3dOL0pFd3o0bWw1OWJSS0tmYVlnb3J2SnN2RGpTL3BUeUpnWWZKanZS?=
 =?utf-8?B?cVBiblU0T1g3cDVPOE5WVTRwa2hGVU9tT0lWZGpxWTFVa3VucVdzUm5oKzQ0?=
 =?utf-8?B?OHI5bmtYeGwyT2tIWVZ1cGxYMTRmZzJPOXNOc043dzZha3JUYlFyV2E2eUpV?=
 =?utf-8?B?eWlnU25QUWJZVy9DWlhJaGpaUCtSRWozOWFkWDI0dXlzNDRhSE9wTXJPY2Fp?=
 =?utf-8?B?UHA0c0tvajNZR1dXNFVvUmZxRjJLcFdmWGZSTkZCMmw3TGc1NGRzNW9NRyt3?=
 =?utf-8?Q?JelZouB1hMOqR+m+XMd0x2lddJOAKjvFh7WLirA?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 348496f2-8791-4a10-f7f8-08d8c9dac168
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB4728.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Feb 2021 13:34:25.9749
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: zvMZFNrU5A2u8nn5qEDkhqUyXXo7P8pusjBjfuDm6DPNFy0jJNgTF3Tg0MhnL+bFP6gamqbXk+lhElpAuwNYhYUPzPbZqDJJPdTmyEGG/I4=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4663
X-OriginatorOrg: citrix.com

On 05/02/2021 11:53, Roger Pau Monne wrote:
> Bison is now mandatory when the pvshim build is enabled in order to
> generate the Kconfig.
>
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
> Please re-run autogen.sh after applying.
>
> Fallout from this patch can lead to a broken configure script being
> generated or bison not being detected correctly, but those will be
> caught quite quickly by the automated testing.

I think this can be simpler.  Both flex and bison are mandatory for libxlutil.

i.e. they should both simply be converted to _OR_FAIL variants in place.

~Andrew

> ---
>  tools/configure.ac | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/tools/configure.ac b/tools/configure.ac
> index 5b328700e0..f4e3fccdb0 100644
> --- a/tools/configure.ac
> +++ b/tools/configure.ac
> @@ -308,7 +308,6 @@ AC_ARG_VAR([AWK], [Path to awk tool])
>  AC_PROG_CC
>  AC_PROG_MAKE_SET
>  AC_PROG_INSTALL
> -AC_PATH_PROG([BISON], [bison])
>  AC_PATH_PROG([FLEX], [flex])
>  AX_PATH_PROG_OR_FAIL([PERL], [perl])
>  AX_PATH_PROG_OR_FAIL([AWK], [awk])
> @@ -516,5 +515,10 @@ AC_ARG_ENABLE([pvshim],
>      esac
>  ])
>  AC_SUBST(pvshim)
> +AS_IF([test "x$pvshim" = "xy"], [
> +    AX_PATH_PROG_OR_FAIL([BISON], [bison])
> +], [
> +    AC_PATH_PROG([BISON], [bison])
> +])
>  
>  AC_OUTPUT()



From xen-devel-bounces@lists.xenproject.org Fri Feb 05 13:40:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 13:40:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81721.151098 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l81LE-0003qt-7L; Fri, 05 Feb 2021 13:40:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81721.151098; Fri, 05 Feb 2021 13:40:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l81LE-0003qm-3u; Fri, 05 Feb 2021 13:40:08 +0000
Received: by outflank-mailman (input) for mailman id 81721;
 Fri, 05 Feb 2021 13:40:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cB+w=HH=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1l81LD-0003qh-9Y
 for xen-devel@lists.xenproject.org; Fri, 05 Feb 2021 13:40:07 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 917b7da3-a5e7-40c4-86c7-5816c380d6db;
 Fri, 05 Feb 2021 13:40:05 +0000 (UTC)
Date: Fri, 5 Feb 2021 14:39:53 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: <xen-devel@lists.xenproject.org>, Ian Jackson <iwj@xenproject.org>, Wei
 Liu <wl@xen.org>
Subject: Re: [PATCH for-4.15] tools/configure: add bison as mandatory
Message-ID: <YB1Kqez4mjzog2YM@Air-de-Roger>
References: <20210205115327.4086-1-roger.pau@citrix.com>
 <bd9d86ef-485d-fc93-f402-0a97acd9d2dd@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <bd9d86ef-485d-fc93-f402-0a97acd9d2dd@citrix.com>

On Fri, Feb 05, 2021 at 01:34:20PM +0000, Andrew Cooper wrote:
> On 05/02/2021 11:53, Roger Pau Monne wrote:
> > Bison is now mandatory when the pvshim build is enabled in order to
> > generate the Kconfig.
> >
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> > ---
> > Please re-run autogen.sh after applying.
> >
> > Fallout from this patch can lead to a broken configure script being
> > generated or bison not being detected correctly, but those will be
> > caught quite quickly by the automated testing.
> 
> I think this can be simpler.  Both flex and bison are mandatory for libxlutil.

No, we ship the output .c and .h files so that the user only needs to
have bison/flex if they want to modify the .l or .y files, AFAICT?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Fri Feb 05 13:45:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 13:45:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81724.151109 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l81QU-00042O-Rc; Fri, 05 Feb 2021 13:45:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81724.151109; Fri, 05 Feb 2021 13:45:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l81QU-00042H-Oc; Fri, 05 Feb 2021 13:45:34 +0000
Received: by outflank-mailman (input) for mailman id 81724;
 Fri, 05 Feb 2021 13:45:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IalI=HH=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l81QT-00042C-Cn
 for xen-devel@lists.xenproject.org; Fri, 05 Feb 2021 13:45:33 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 59741112-2f91-4ed3-8bf1-fbd139107c0a;
 Fri, 05 Feb 2021 13:45:31 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A30F6ACAC;
 Fri,  5 Feb 2021 13:45:30 +0000 (UTC)
Subject: Re: [PATCH] x86/HVM: support emulated UMIP
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Kevin Tian <kevin.tian@intel.com>,
 Jun Nakajima <jun.nakajima@intel.com>, Tamas K Lengyel
 <tamas@tklengyel.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <5a8b1c37-5f53-746f-ba87-778d4d980d99@suse.com>
 <c717bd30-27b2-625d-576e-eb41a7192c55@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b544830f-4104-264e-77da-ebe6cd811fe1@suse.com>
Date: Fri, 5 Feb 2021 14:45:30 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <c717bd30-27b2-625d-576e-eb41a7192c55@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 04.02.2021 15:10, Andrew Cooper wrote:
> On 29/01/2021 11:45, Jan Beulich wrote:
>> There are three noteworthy drawbacks:
>> 1) The intercepts we need to enable here are CPL-independent, i.e. we
>>    now have to emulate certain instructions for ring 0.
>> 2) On VMX there's no intercept for SMSW, so the emulation isn't really
>>    complete there.
>> 3) The CR4 write intercept on SVM is lower priority than all exception
>>    checks, so we need to intercept #GP.
>> Therefore this emulation doesn't get offered to guests by default.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> I wonder if it would be helpful for this to be 3 patches, simply because
> of the differing complexity of the VT-x and SVM pieces.

If so, then three or even four. One each for SVM/VMX, and
a final one for the enabling in vendor-independent code.
For the possible 4th one, see below in the
hvm_descriptor_access_intercept()-related part.

>> --- a/xen/arch/x86/cpuid.c
>> +++ b/xen/arch/x86/cpuid.c
>> @@ -453,6 +453,13 @@ static void __init calculate_hvm_max_pol
>>      __set_bit(X86_FEATURE_X2APIC, hvm_featureset);
>>  
>>      /*
>> +     * Xen can often provide UMIP emulation to HVM guests even if the host
>> +     * doesn't have such functionality.
>> +     */
>> +    if ( hvm_funcs.set_descriptor_access_exiting )
> 
> No need for this check.  Exiting is available on all generations and
> vendors.

VMX code treats this as optional, and I think it validly
does so at least in case we're running virtualized ourselves
and the lower hypervisor doesn't emulate this.

> Also, the header file probably wants a ! annotation for UMIP to signify
> that we are doing something special with it.

I can do so, sure. I'm never really sure how wide the scope
of "special" is here.

>> @@ -3731,6 +3732,13 @@ int hvm_descriptor_access_intercept(uint
>>      struct vcpu *curr = current;
>>      struct domain *currd = curr->domain;
>>  
>> +    if ( (is_write || curr->arch.hvm.guest_cr[4] & X86_CR4_UMIP) &&
> 
> Brackets for & expression?

Oops.

>> +         hvm_get_cpl(curr) )
>> +    {
>> +        hvm_inject_hw_exception(TRAP_gp_fault, 0);
>> +        return X86EMUL_OKAY;
>> +    }
> 
> I believe this is a logical change for monitor - previously, non-ring0
> events would go all the way to userspace.
> 
> I don't expect this to be an issue - monitoring agents really shouldn't
> be interested in userspace actions the guest kernel is trying to turn
> into #GP.

Isn't the present behavior flawed, in that UMIP (if supported
by hardware and enabled by the guest) doesn't get honored,
and the access _instead_ gets forwarded to the monitor?
Looking at public/vm_event.h I can't seem to spot
any means by which the monitor could say "I want an exception
here" in response. IOW - shouldn't this hunk be split out as
a prereq bug fix (i.e. the aforementioned 4th patch)?
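[Editorial note: the CPL/UMIP condition being debated above can be modeled
standalone. should_inject_gp() is an illustrative name, not Xen code, and
the bracketing follows the style requested in the review:]

```c
#include <stdbool.h>
#include <stdint.h>

#define X86_CR4_UMIP (1u << 11)

/*
 * Illustrative model of the proposed check in
 * hvm_descriptor_access_intercept(): a descriptor-table access raises
 * #GP(0) when it is a write, or when the guest has CR4.UMIP set,
 * unless it executes at CPL 0.
 */
static bool should_inject_gp(bool is_write, uint32_t guest_cr4,
                             unsigned int cpl)
{
    return (is_write || (guest_cr4 & X86_CR4_UMIP)) && cpl != 0;
}
```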

>> --- a/xen/arch/x86/hvm/svm/svm.c
>> +++ b/xen/arch/x86/hvm/svm/svm.c
>> @@ -547,6 +547,28 @@ void svm_update_guest_cr(struct vcpu *v,
>>              value &= ~(X86_CR4_SMEP | X86_CR4_SMAP);
>>          }
>>  
>> +        if ( v->domain->arch.cpuid->feat.umip && !cpu_has_umip )
> 
> Throughout the series, examples like this should have the !cpu_has_umip
> clause first.  It is static per host, rather than variable per VM, and
> will improve the branch prediction.
> 
> Where the logic is equivalent, it is best to have the clauses in
> stability order, as this will prevent a modern CPU from even evaluating
> the CPUID policy.
> 
>> +        {
>> +            u32 general1_intercepts = vmcb_get_general1_intercepts(vmcb);
>> +
>> +            if ( v->arch.hvm.guest_cr[4] & X86_CR4_UMIP )
>> +            {
>> +                value &= ~X86_CR4_UMIP;
>> +                ASSERT(vmcb_get_cr_intercepts(vmcb) & CR_INTERCEPT_CR0_READ);
> 
> It occurs to me that adding CR0 read exiting adds a lot of complexity
> for very little gain.
> 
> From a practical standpoint, UMIP exists to block SIDT/SGDT which are
> the two instructions which confer an attacker with useful information
> (linear addresses of the IDT/GDT respectively).Â  SLDT/STR only confer a
> 16 bit index within the GDT (fixed per OS), and SMSW is as good as a
> constant these days.
> 
> Given that Intel cannot intercept SMSW at all and we've already accepted
> that as a limitation vs architectural UMIP, I don't think extra
> complexity on AMD is worth the gain.

Hmm, I didn't want to make this emulation any less complete
than is necessary because of external restrictions. As an
intermediate solution, how about hiding this behind a
default-off command line option, e.g. "svm=full-umip"?
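[Editorial note: the clause-ordering point made earlier in this reply can be
illustrated with a minimal sketch; the flag names are stand-ins, not Xen's
actual predicates:]

```c
#include <stdbool.h>

/*
 * Models the suggested clause order: the host-constant cpu_has_umip
 * test comes first, so on hosts with native UMIP the per-VM CPUID
 * policy lookup is short-circuited away entirely.
 */
static bool needs_umip_emulation(bool cpu_has_umip, bool vm_wants_umip)
{
    return !cpu_has_umip && vm_wants_umip;
}
```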

>> --- a/xen/arch/x86/hvm/vmx/vmcs.c
>> +++ b/xen/arch/x86/hvm/vmx/vmcs.c
>> @@ -1537,6 +1552,7 @@ static void vmx_update_guest_cr(struct v
>>                                               (X86_CR4_PSE | X86_CR4_SMEP |
>>                                                X86_CR4_SMAP)
>>                                               : 0;
>> +            v->arch.hvm.vmx.cr4_host_mask |= cpu_has_umip ? 0 : X86_CR4_UMIP;
> 
> if ( !cpu_has_umip )
>     v->arch.hvm.vmx.cr4_host_mask |= X86_CR4_UMIP;

This wouldn't be in line with the immediately preceding code
(visible in context), and subsequent code uses if(), aiui,
because the conditions are (textually) more complex there. So
if I were to make the change, I'd at least like to understand
why adjacent code is fine doing otherwise, even more so since
iirc it was often you who introduced such constructs in favour
of if()-s.
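[Editorial note: the two spellings are behaviourally identical; a
standalone check, with plain variables standing in for the Xen fields:]

```c
#include <stdint.h>

#define X86_CR4_UMIP (1u << 11)

/* The ternary form used in the patch context. */
static uint32_t mask_ternary(uint32_t base, int cpu_has_umip)
{
    return base | (cpu_has_umip ? 0 : X86_CR4_UMIP);
}

/* The if() form suggested in review; computes the same mask. */
static uint32_t mask_if(uint32_t base, int cpu_has_umip)
{
    uint32_t mask = base;

    if ( !cpu_has_umip )
        mask |= X86_CR4_UMIP;

    return mask;
}
```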

Jan


From xen-devel-bounces@lists.xenproject.org Fri Feb 05 13:50:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 13:50:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81730.151122 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l81VR-00057X-F5; Fri, 05 Feb 2021 13:50:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81730.151122; Fri, 05 Feb 2021 13:50:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l81VR-00057Q-Bk; Fri, 05 Feb 2021 13:50:41 +0000
Received: by outflank-mailman (input) for mailman id 81730;
 Fri, 05 Feb 2021 13:50:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jdnZ=HH=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l81VP-000573-DR
 for xen-devel@lists.xenproject.org; Fri, 05 Feb 2021 13:50:39 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 63611457-242b-4474-aace-9b61d91e55d8;
 Fri, 05 Feb 2021 13:50:38 +0000 (UTC)
Subject: Re: [PATCH for-4.15] tools/configure: add bison as mandatory
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
CC: <xen-devel@lists.xenproject.org>, Ian Jackson <iwj@xenproject.org>, Wei
 Liu <wl@xen.org>
References: <20210205115327.4086-1-roger.pau@citrix.com>
 <bd9d86ef-485d-fc93-f402-0a97acd9d2dd@citrix.com>
 <YB1Kqez4mjzog2YM@Air-de-Roger>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <c1185550-8e73-5985-de70-a68a0b1e31ab@citrix.com>
Date: Fri, 5 Feb 2021 13:50:27 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
In-Reply-To: <YB1Kqez4mjzog2YM@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0161.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:188::22) To BYAPR03MB4728.namprd03.prod.outlook.com
 (2603:10b6:a03:13a::24)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 3aaae21c-ffb5-4f40-3389-08d8c9dd023b
X-MS-TrafficTypeDiagnostic: BYAPR03MB4037:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB4037A36E12CEFC7EAE1F75BFBAB29@BYAPR03MB4037.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 3aaae21c-ffb5-4f40-3389-08d8c9dd023b
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB4728.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Feb 2021 13:50:33.6823
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: vSQjWVrS/h1i88YoKf76dA+aI1O6SoAjswTprKmVenvQBfC+amLskbYk07+IkpLR1eW6TVMxeo9rw+mZuA28RiZpBUO/uetMAQFcf7ejEhg=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4037
X-OriginatorOrg: citrix.com

On 05/02/2021 13:39, Roger Pau Monné wrote:
> On Fri, Feb 05, 2021 at 01:34:20PM +0000, Andrew Cooper wrote:
>> On 05/02/2021 11:53, Roger Pau Monne wrote:
>>> Bison is now mandatory when the pvshim build is enabled in order to
>>> generate the Kconfig.
>>>
>>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>>> ---
>>> Please re-run autogen.sh after applying.
>>>
>>> Fallout from this patch can lead to a broken configure script being
>>> generated or bison not being detected correctly, but those will be
>>> caught quite quickly by the automated testing.
>> I think this can be simpler.  Both flex and bison are mandatory for libxlutil.
> No, we ship the output .c and .h files so that the user only needs to
> have bison/flex if they want to modify the .l or .y files AFAICT?

I know that theory, but it is broken in practice because of `git
checkout` timestamps.

Also, the Makefile explicitly enforces the checks, so they are mandatory
despite the attempt to ship the preprocessed form.

~Andrew
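The timestamp failure mode described above can be reproduced in a couple
of lines. A sketch (file names hypothetical, modelled on libxlutil's
generated parser): after a checkout, the .y source can easily carry an
mtime at or after that of the shipped .c output, and any
timestamp-driven build rule then insists on re-running bison:

```shell
# Sketch of the timestamp hazard (file names hypothetical; GNU touch -d).
# A fresh checkout sets all mtimes to checkout time, so the .y source can
# end up "newer" than the pre-generated .c shipped alongside it.
dir=$(mktemp -d)
touch -d '2021-02-05 12:00' "$dir/libxlu_cfg_y.c"   # shipped, pre-generated
touch -d '2021-02-05 12:01' "$dir/libxlu_cfg_y.y"   # source now looks newer
if [ "$dir/libxlu_cfg_y.y" -nt "$dir/libxlu_cfg_y.c" ]; then
    echo "bison would be re-run"
fi
```

With the source newer than the output, make-style `-nt` comparisons
trigger regeneration even though nothing was edited.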


From xen-devel-bounces@lists.xenproject.org Fri Feb 05 13:57:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 13:57:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81733.151133 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l81bj-0005Kq-91; Fri, 05 Feb 2021 13:57:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81733.151133; Fri, 05 Feb 2021 13:57:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l81bj-0005Kj-5z; Fri, 05 Feb 2021 13:57:11 +0000
Received: by outflank-mailman (input) for mailman id 81733;
 Fri, 05 Feb 2021 13:57:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Iq/M=HH=kernel.org=robh@srs-us1.protection.inumbo.net>)
 id 1l81bi-0005Ke-Ns
 for xen-devel@lists.xenproject.org; Fri, 05 Feb 2021 13:57:10 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 31e03a70-067d-41de-bb6e-078d1638811f;
 Fri, 05 Feb 2021 13:57:10 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id DB8E664FC4
 for <xen-devel@lists.xenproject.org>; Fri,  5 Feb 2021 13:57:08 +0000 (UTC)
Received: by mail-ej1-f45.google.com with SMTP id w1so12010459ejf.11
 for <xen-devel@lists.xenproject.org>; Fri, 05 Feb 2021 05:57:08 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 31e03a70-067d-41de-bb6e-078d1638811f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1612533429;
	bh=9Mw4lLzsK6vgoXWqjy7jhrlf3iQ42jlD9NTI8s6g9ME=;
	h=References:In-Reply-To:From:Date:Subject:To:Cc:From;
	b=oPxHGTxejB5ZyMKKS3FS6WldkVjKb43m6862FndaCS+qmBHi56r7mDjG0ZUVn7Xzh
	 l7Y5Lg8yKdOptnYePju90v0eJKyBgnr1C5+fOrvQEIPy7k6NxLz8lMcoolTe2n8W3g
	 OTZiYOOxWvVIAbQZk5y6KZyHTvaZgtNix7MJGXroAeE1q2cYgBzSdY1VSXrxCv+bgd
	 dRF2nCIqGjC3KLFtigeeWWI/2nKolYOO9E+WX6YUyenbSynS4UUbFDCHrAHYF3b7S3
	 YZXXUjDxt5ZFnH5sCctcBEErVnqXy1a4svI86qfFFX6tGZL6YzNNKMv63r7YZskn0c
	 Ztb7UJ3tdrWLg==
X-Gm-Message-State: AOAM530tE3IJ3c6GFDew3cmJgY6T4OYCRSRJpL0rHkc3sD369jsJNuc6
	qEUZKLLE3jIawsHrx4nvdcVYFOKiD+Fh9+qfXg==
X-Google-Smtp-Source: ABdhPJxtN3UjgPOA/NhlcqOiuRYTaIUm9iO77IYbz+Md6xUxQtWW+nAVvuic5OJzV1ZRbOjwDH/UpEvtMQUAbnhbBb4=
X-Received: by 2002:a17:906:4301:: with SMTP id j1mr4102199ejm.108.1612533427315;
 Fri, 05 Feb 2021 05:57:07 -0800 (PST)
MIME-Version: 1.0
References: <alpine.DEB.2.21.2102031315350.29047@sstabellini-ThinkPad-T480s>
 <CAJ=z9a1LsqOMFXV5GLYEkF7=akMx7fT_vpgVtT6xP6MPfmP9vQ@mail.gmail.com>
 <alpine.DEB.2.21.2102031519540.29047@sstabellini-ThinkPad-T480s>
 <9b97789b-5560-0186-642a-0501789830e5@xen.org> <alpine.DEB.2.21.2102040944520.29047@sstabellini-ThinkPad-T480s>
 <CAL_JsqJuvZPheRkacaopHtbATj8uRua=wj_XU5ib41sSpVO-ug@mail.gmail.com>
 <alpine.DEB.2.21.2102041228560.29047@sstabellini-ThinkPad-T480s>
 <CAL_JsqKTz8J3txk9W5ekqmfON_g_TdLYsLi0YXYU3rmiyubL2A@mail.gmail.com>
 <alpine.DEB.2.21.2102041309430.29047@sstabellini-ThinkPad-T480s>
 <CAL_Jsq+cedzG5NBfLRub=msZEK6umBrk-O7FYB=Dk34=k9fuCA@mail.gmail.com> <YBy8PbYeLEHjcELY@mattapan.m5p.com>
In-Reply-To: <YBy8PbYeLEHjcELY@mattapan.m5p.com>
From: Rob Herring <robh@kernel.org>
Date: Fri, 5 Feb 2021 07:56:55 -0600
X-Gmail-Original-Message-ID: <CAL_JsqKaB+=B-9x0tBoeRkJHR0oD+j_erD36X2fBGsczKthnRQ@mail.gmail.com>
Message-ID: <CAL_JsqKaB+=B-9x0tBoeRkJHR0oD+j_erD36X2fBGsczKthnRQ@mail.gmail.com>
Subject: Re: Question on PCIe Device Tree bindings, Was: [PATCH] xen/arm:
 domain_build: Ignore device nodes with invalid addresses
To: Elliott Mitchell <ehem+undef@m5p.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien.grall.oss@gmail.com>, 
	Elliott Mitchell <ehem+xen@m5p.com>, xen-devel <xen-devel@lists.xenproject.org>, 
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, julien@xen.org
Content-Type: text/plain; charset="UTF-8"

On Thu, Feb 4, 2021 at 9:51 PM Elliott Mitchell <ehem+undef@m5p.com> wrote:
>
> On Thu, Feb 04, 2021 at 03:52:26PM -0600, Rob Herring wrote:
> > On Thu, Feb 4, 2021 at 3:33 PM Stefano Stabellini
> > <sstabellini@kernel.org> wrote:
> > >
> > > On Thu, 4 Feb 2021, Rob Herring wrote:
> > > > On Thu, Feb 4, 2021 at 2:36 PM Stefano Stabellini
> > > > <sstabellini@kernel.org> wrote:
> > > > >
> > > > > On Thu, 4 Feb 2021, Rob Herring wrote:
> > > > > > On Thu, Feb 4, 2021 at 11:56 AM Stefano Stabellini
> > > > > > <sstabellini@kernel.org> wrote:
> > > > > > >
> > > > > > > Hi Rob,
> > > > > > >
> > > > > > > We have a question on the PCIe device tree bindings. In summary, we have
> > > > > > > come across the Raspberry Pi 4 PCIe description below:
> > > > > > >
> > > > > > >
> > > > > > > pcie0: pcie@7d500000 {
> > > > > > >    compatible = "brcm,bcm2711-pcie";
> > > > > > >    reg = <0x0 0x7d500000  0x0 0x9310>;
> > > > > > >    device_type = "pci";
> > > > > > >    #address-cells = <3>;
> > > > > > >    #interrupt-cells = <1>;
> > > > > > >    #size-cells = <2>;
> > > > > > >    interrupts = <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>,
> > > > > > >                 <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>;
> > > > > > >    interrupt-names = "pcie", "msi";
> > > > > > >    interrupt-map-mask = <0x0 0x0 0x0 0x7>;
> > > > > > >    interrupt-map = <0 0 0 1 &gicv2 GIC_SPI 143
> > > > > > >                                                      IRQ_TYPE_LEVEL_HIGH>;
> > > > > > >    msi-controller;
> > > > > > >    msi-parent = <&pcie0>;
> > > > > > >
> > > > > > >    ranges = <0x02000000 0x0 0xc0000000 0x6 0x00000000
> > > > > > >              0x0 0x40000000>;
> > > > > > >    /*
> > > > > > >     * The wrapper around the PCIe block has a bug
> > > > > > >     * preventing it from accessing beyond the first 3GB of
> > > > > > >     * memory.
> > > > > > >     */
> > > > > > >    dma-ranges = <0x02000000 0x0 0x00000000 0x0 0x00000000
> > > > > > >                  0x0 0xc0000000>;
> > > > > > >    brcm,enable-ssc;
> > > > > > >
> > > > > > >    pci@1,0 {
> > > > > > >            #address-cells = <3>;
> > > > > > >            #size-cells = <2>;
> > > > > > >            ranges;
> > > > > > >
> > > > > > >            reg = <0 0 0 0 0>;
> > > > > > >
> > > > > > >            usb@1,0 {
> > > > > > >                    reg = <0x10000 0 0 0 0>;
> > > > > > >                    resets = <&reset RASPBERRYPI_FIRMWARE_RESET_ID_USB>;
> > > > > > >            };
> > > > > > >    };
> > > > > > > };
> > > > > > >
> > > > > > >
> > > > > > > Xen fails to parse it with an error because it tries to remap reg =
> > > > > > > <0x10000 0 0 0 0> as if it was a CPU address and of course it fails.
> > > > > > >
> > > > > > > Reading the device tree description in details, I cannot tell if Xen has
> > > > > > > a bug: the ranges property under pci@1,0 means that pci@1,0 is treated
> > > > > > > like a default bus (not a PCI bus); hence, the children's regs are
> > > > > > > translated using the ranges property of the parent (pcie@7d500000).
> > > > > > >
> > > > > > > Is it possible that the device tree is missing device_type =
> > > > > > > "pci" under pci@1,0? Or is it just implied because pci@1,0 is a child of
> > > > > > > pcie@7d500000?
> > > > > >
> > > > > > Indeed, it should have device_type. Linux (only recently due to
> > > > > > another missing device_type case) will also look at node name, but
> > > > > > only 'pcie'.
> > > > > >
> > > > > > We should be able to create (or extend pci-bus.yaml) a schema to catch
> > > > > > this case.
> > > > >
> > > > > Ah, that is what I needed to know, thank you!  Is Linux considering a
> > > > > node named "pcie" as if it has device_type = "pci"?
> > > >
> > > > Yes, it was added for Rockchip RK3399 to avoid a DT update and regression.
> > > >
> > > > > In Xen, also to cover the RPi4 case, maybe I could add a check for the
> > > > > node name to be "pci" or "pcie" and if so Xen could assume device_type =
> > > > > "pci".
> > > >
> > > > I assume this never worked for RPi4 (and Linux will have the same
> > > > issue), so can't we just update the DT in this case?
> > >
> > > I am not sure where the DT is coming from, probably from the RPi4 kernel
> > > trees or firmware. I think it would be good if somebody got in touch to
> > > tell them they have an issue.
> >
> > So you just take whatever downstream RPi invents? Good luck.
> >
> > > Elliott, where was that device tree coming from originally?
>
> Please excuse my very weak device-tree fu...
>
> I'm unsure which section is the problem, but looking at `git blame`, what
> shows up is commit d5c8dc0d4c880fbde5293cc186b1ab23466254c4.
>
> This commit is present in the Linux master branch and the linux-5.10.y
> branch.
>
> You were saying?

That commit looks perfectly fine. The problem is the PCI bridge node
shown above, which doesn't exist upstream.

Rob
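For readers following along, this is a sketch of what the fixed bridge
node would look like. It mirrors the snippet quoted earlier in the
thread with the one missing property added; it is not the actual
upstream DT:

```dts
pci@1,0 {
        device_type = "pci";    /* the missing property Xen relies on */
        #address-cells = <3>;
        #size-cells = <2>;
        ranges;

        reg = <0 0 0 0 0>;

        usb@1,0 {
                reg = <0x10000 0 0 0 0>;
                resets = <&reset RASPBERRYPI_FIRMWARE_RESET_ID_USB>;
        };
};
```

With `device_type = "pci"` present, the child `reg` values are
interpreted as PCI addresses rather than translated as CPU addresses.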


From xen-devel-bounces@lists.xenproject.org Fri Feb 05 14:02:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 14:02:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81737.151145 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l81gq-0006Ts-Sh; Fri, 05 Feb 2021 14:02:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81737.151145; Fri, 05 Feb 2021 14:02:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l81gq-0006Tl-Pg; Fri, 05 Feb 2021 14:02:28 +0000
Received: by outflank-mailman (input) for mailman id 81737;
 Fri, 05 Feb 2021 14:02:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IalI=HH=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l81gp-0006Tb-8S
 for xen-devel@lists.xenproject.org; Fri, 05 Feb 2021 14:02:27 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4fdb8e56-6a1f-4a22-8888-130bf227abe3;
 Fri, 05 Feb 2021 14:02:26 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 91B35ACAC;
 Fri,  5 Feb 2021 14:02:25 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4fdb8e56-6a1f-4a22-8888-130bf227abe3
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612533745; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=cOerHE3BM3EUcL+/lKl+NMf0+GAK72MtEO1QWCoUE/Q=;
	b=sDlmpV5zw5K4pxOE2DyuPoh9cwpPlWEHxOM54D4wrqDTlr/V+whLIOpSddHQgCJQTANrbj
	fqJQ7qjXZcYz3G+eKwrCj9TiGJkY03M2+cKMJZDT/F1PCMljdsW3CScnktG/afRly8GIud
	jDaGeMcVpoP7nQ4847JJumUQ7BAQWh8=
Subject: Re: [PATCH] x86/HVM: support emulated UMIP
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Kevin Tian <kevin.tian@intel.com>,
 Jun Nakajima <jun.nakajima@intel.com>, Tamas K Lengyel
 <tamas@tklengyel.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <5a8b1c37-5f53-746f-ba87-778d4d980d99@suse.com>
 <c717bd30-27b2-625d-576e-eb41a7192c55@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <78ed8806-20e3-cb67-829f-916504e5654a@suse.com>
Date: Fri, 5 Feb 2021 15:02:25 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <c717bd30-27b2-625d-576e-eb41a7192c55@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 04.02.2021 15:10, Andrew Cooper wrote:
> On 29/01/2021 11:45, Jan Beulich wrote:
>> +        {
>> +            u32 general1_intercepts = vmcb_get_general1_intercepts(vmcb);
>> +
>> +            if ( v->arch.hvm.guest_cr[4] & X86_CR4_UMIP )
>> +            {
>> +                value &= ~X86_CR4_UMIP;
>> +                ASSERT(vmcb_get_cr_intercepts(vmcb) & CR_INTERCEPT_CR0_READ);
> 
> It occurs to me that adding CR0 read exiting adds a lot of complexity
> for very little gain.

Actually, upon second look - why do you say "adding CR0 read
exiting"? This is only an assertion. No other changes are
being made to CR0 interception (apart from the few lines in
the handling of the respective #VMEXIT).

Jan


From xen-devel-bounces@lists.xenproject.org Fri Feb 05 14:04:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 14:04:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81739.151161 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l81it-0006dx-AN; Fri, 05 Feb 2021 14:04:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81739.151161; Fri, 05 Feb 2021 14:04:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l81it-0006dq-7J; Fri, 05 Feb 2021 14:04:35 +0000
Received: by outflank-mailman (input) for mailman id 81739;
 Fri, 05 Feb 2021 14:04:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cB+w=HH=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1l81is-0006dj-As
 for xen-devel@lists.xenproject.org; Fri, 05 Feb 2021 14:04:34 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4eb88141-41dd-4a7d-946c-6c00d9952943;
 Fri, 05 Feb 2021 14:04:31 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4eb88141-41dd-4a7d-946c-6c00d9952943
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612533870;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=dqVcbFhSDdlx1o5H2aEP/C1x+XTwSM6TeB9MH0+EejI=;
  b=f21zlfGGd0uoQAmbLdOBF24+Th0hfqZbqLnEgw+PK7nOJUzwFWUlbWRq
   1afSeREhSxaJMZkZALnYwF0hTh636y4cnv1LiKo+CV8HrL5Qpt89oO3up
   e6YQMDAeyhX1PsrkARydzdmT7IZjdwjh5QPkQ4tOzHq1AG74Z0paVRotg
   I=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: KiT6cgX95AHYOCPGXK5UboeOPbRusiVRzkOh05PzGOkHr+L/EBlprXuOtYUXuAuMKkhJ2Ica+j
 dJX41CtdSLY8QxZMCzDApN+uYeLPmnxKRotAPCvGP7SxgLs6brRIOYIw21qBJAkKuFSNLj6Ar0
 XLm+QNmlHPBdEFmXRzpNBi9R2T6Mi6lUVJk61twOfr7TYb6fLLm8Xty/d1gE5gOsSFt0VRj023
 wWfCHIcHgnTj/hym2Kqz7QjCVv+DWCRLew+jjg239fFGdLoE6BudAMPIVu19mby0toBX2mAzgn
 UXY=
X-SBRS: 5.2
X-MesageID: 36845671
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,155,1610427600"; 
   d="scan'208";a="36845671"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=BBWUDjG2WdUDJPea045EtRZBUniMdJwbc7s9aJyzY+dvhJgVUe6Ivm8PYlCEYAfGOamAigh0JJuVFb4zRn9yC2SbxOQOMgZ3y0jMx+fAg5ISv+fv1N54Z8TgVV9dbxM6dKjPu3h7T95PcnCvQ0OiHzb0UD+wenO/IJPeZnoUY7g/pTO7NiaM0Mo+fWDjyXmfn3VcuxqP+1dTchahjNGb6Vf17F3C7k2mzPJ40tNREhyL2Zdlbm6RFsO4RL0EjsKPuRK942gTsmyOmcfIiaHx3gk5Bb2QaWREZkhEjRwW/i+qEx5qQuZDuDl0xWhZ8kP7Q+PnkZ0XCABFZkNpJzuOKg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=brzd3fHckUGI+bOjKdCAJYyz8fn7/y7WDdrmge4fyEU=;
 b=KsMLW9mLuMseB/qfEupTykPGaqHS3Hd3FI69Uk7C3ICJgc/BixARkYsDi5q5FaeLtcMX2Uw5dSIdnAQx1ZGcZGgcpvSD+HAWOgTuByB50gfXTL6PsSN1gOQy/PQXzdyUdEl52RCgO1YYx1iGbX9IGJJTK4kEnpauB1n0w3bc3DRZzFuEzsfAjvtV4cqEiVGis8uVftsrWR4kSUDp04lAD1Jl4IotyD0+aZoq+ZmBrBNjZ4XYVsLOiDQQTYyWQVpJT0NaDzLCQsk0dqY5HrEJXpuAzeip0RqNCJuTjRgr5R8C4eqwK0plHFFJYkwk1a67P1YnjPbu52wDRKY0xi3FZQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=brzd3fHckUGI+bOjKdCAJYyz8fn7/y7WDdrmge4fyEU=;
 b=mzeHlfxbFgxy1HOHHuoWTLC8Q26/90822ld8SbgOwjFog9ww/hTbe/ydS8Snj5WblU2ylggQliOdEXmZhgRiC6S6BWI94M3UCDlmps/Oi+t3Jral81WYiLDfh+M1oFHyNMnr3kELav7lERK5aQr2rP4h0X5NwwoDl3eC5DBJqJw=
Date: Fri, 5 Feb 2021 15:04:20 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: <xen-devel@lists.xenproject.org>, Ian Jackson <iwj@xenproject.org>, Wei
 Liu <wl@xen.org>
Subject: Re: [PATCH for-4.15] tools/configure: add bison as mandatory
Message-ID: <YB1QZBejBVDwTthT@Air-de-Roger>
References: <20210205115327.4086-1-roger.pau@citrix.com>
 <bd9d86ef-485d-fc93-f402-0a97acd9d2dd@citrix.com>
 <YB1Kqez4mjzog2YM@Air-de-Roger>
 <c1185550-8e73-5985-de70-a68a0b1e31ab@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <c1185550-8e73-5985-de70-a68a0b1e31ab@citrix.com>
X-ClientProxiedBy: MR2P264CA0131.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:30::23) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 804a667c-4ba4-4906-e69e-08d8c9def254
X-MS-TrafficTypeDiagnostic: DS7PR03MB5590:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DS7PR03MB559087848751B7D411D5479B8FB29@DS7PR03MB5590.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 804a667c-4ba4-4906-e69e-08d8c9def254
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Feb 2021 14:04:26.0907
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: GtzpqEKbPUv1+JTGjXo8Gw/3hhM1Aiv5iHyn/qObjFiO7IQboKN0Qw5NXtfSH0qWXKqCSwKfVSUg3phr/t4nrQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR03MB5590
X-OriginatorOrg: citrix.com

On Fri, Feb 05, 2021 at 01:50:27PM +0000, Andrew Cooper wrote:
> On 05/02/2021 13:39, Roger Pau Monné wrote:
> > On Fri, Feb 05, 2021 at 01:34:20PM +0000, Andrew Cooper wrote:
> >> On 05/02/2021 11:53, Roger Pau Monne wrote:
> >>> Bison is now mandatory when the pvshim build is enabled in order to
> >>> generate the Kconfig.
> >>>
> >>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> >>> ---
> >>> Please re-run autogen.sh after applying.
> >>>
> >>> Fallout from this patch can lead to a broken configure script being
> >>> generated or bison not being detected correctly, but those will be
> >>> caught quite quickly by the automated testing.
> >> I think this can be simpler.  Both flex and bison are mandatory for libxlutil.
> > No, we ship the output .c and .h files so that the user only needs to
> > have bison/flex if they want to modify the .l or .y files AFAICT?
> 
> I know that theory, but it is broken in practice because of `git
> checkout` timestamps.
> 
> Also, the Makefile explicitly enforces the checks, so they are mandatory
> despite the attempt to ship the preprocessed form.

I seem to be able to run `make -C tools/libs/util/` just fine without
having bison installed. If we do require bison/flex then we certainly
need to remove the output *.c/*.h files from tools/libs/util/.

I'm not especially thrilled either way, but I think the proposed patch
is safer given the point of the release cycle we're at.

Thanks, Roger.
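What the proposed configure check boils down to is a PATH presence
test. A rough shell equivalent (the `require_tool` helper is made up for
illustration; the real patch does this through autoconf and autogen.sh):

```shell
# require_tool NAME: hypothetical stand-in for the configure-time check.
# Fails, configure-style, when NAME is not on $PATH.
require_tool() {
    if ! command -v "$1" >/dev/null 2>&1; then
        echo "configure: error: cannot find $1, please install it" >&2
        return 1
    fi
}

require_tool sh && echo "sh present"
```

Making bison mandatory this way fails the build up front rather than
midway through, at the cost of requiring the tool even when the shipped
pre-generated files would have sufficed.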


From xen-devel-bounces@lists.xenproject.org Fri Feb 05 14:05:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 14:05:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81741.151173 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l81jp-0006k1-Ko; Fri, 05 Feb 2021 14:05:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81741.151173; Fri, 05 Feb 2021 14:05:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l81jp-0006ju-Hi; Fri, 05 Feb 2021 14:05:33 +0000
Received: by outflank-mailman (input) for mailman id 81741;
 Fri, 05 Feb 2021 14:05:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IalI=HH=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l81jo-0006jn-Ey
 for xen-devel@lists.xenproject.org; Fri, 05 Feb 2021 14:05:32 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e96ab7da-acd3-4e42-9ae8-1d46eacb6f62;
 Fri, 05 Feb 2021 14:05:29 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0FD7CAC9B;
 Fri,  5 Feb 2021 14:05:29 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e96ab7da-acd3-4e42-9ae8-1d46eacb6f62
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612533929; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=D74wxNoQbMywGrV8PBRGASKiV8IlPYkJharhsx+Zq18=;
	b=sQW11e0foe3CCXXSZVV1mGMhpgEDXtF1Hky/c89tbuGemIcbKA7+mCbhfSyY+lJ4RbjXF5
	B1S9oYp6UxwSBPG6IlgukGezWB5FGFYxvefdpy0zPomrbxTlyTUZuTPhA60ecVu+ic98Yp
	YRgy7HmdqTxJXegK9antp3abDgB6zDE=
Subject: Re: [PATCH for-4.15] tools/configure: add bison as mandatory
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20210205115327.4086-1-roger.pau@citrix.com>
 <bd9d86ef-485d-fc93-f402-0a97acd9d2dd@citrix.com>
 <YB1Kqez4mjzog2YM@Air-de-Roger>
 <c1185550-8e73-5985-de70-a68a0b1e31ab@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <df5e7984-90a4-ae13-c751-0723885c46fe@suse.com>
Date: Fri, 5 Feb 2021 15:05:29 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <c1185550-8e73-5985-de70-a68a0b1e31ab@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 05.02.2021 14:50, Andrew Cooper wrote:
> On 05/02/2021 13:39, Roger Pau Monné wrote:
>> On Fri, Feb 05, 2021 at 01:34:20PM +0000, Andrew Cooper wrote:
>>> On 05/02/2021 11:53, Roger Pau Monne wrote:
>>>> Bison is now mandatory when the pvshim build is enabled in order to
>>>> generate the Kconfig.
>>>>
>>>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>>>> ---
>>>> Please re-run autogen.sh after applying.
>>>>
>>>> Fallout from this patch can lead to a broken configure script being
>>>> generated or to bison not being detected correctly, but those will be
>>>> caught quite quickly by the automated testing.
>>> I think this can be simpler.  Both flex and bison are mandatory for libxlutil.
>> No, we ship the output .c and .h files so that the user only needs to
>> have bison/flex if it wants to modify the .l or .y files AFAICT?
> 
> I know that theory, but it is broken in practice because of `git
> checkout` timestamps.
> 
> Also, the Makefile explicitly enforces the checks, so they are mandatory
> despite an attempt to ship the preprocessed form.

I don't see the Makefile enforcing anything. Upon seeing "XYZ is
needed to rebuild some libxl parsers and scanners, please install
it and rerun configure" you then have the choice of doing so or,
if you know you didn't fiddle with the sources, playing with the
time stamps.
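"Playing with the time stamps" here amounts to making the shipped, pre-generated parser files newer than their .l/.y sources again after a `git checkout`, so make's prerequisite check does not try to re-run bison/flex. A hedged sketch of the idea, using illustrative file names in a throwaway directory (the real files live under tools/libxl/):

```shell
# Sketch (hypothetical paths): git checkout does not preserve relative
# timestamps, so a shipped generated file can end up older than its
# grammar source. Re-touching the generated files restores the ordering
# make expects, so no bison/flex run is triggered.
dir=$(mktemp -d)
touch "$dir/libxlu_cfg_y.y"                         # grammar source
sleep 1
touch "$dir/libxlu_cfg_y.c" "$dir/libxlu_cfg_y.h"   # generated outputs, now newer
# make-style timestamp comparison: is the output newer than its source?
if [ "$dir/libxlu_cfg_y.c" -nt "$dir/libxlu_cfg_y.y" ]; then
    status=fresh
else
    status=stale
fi
echo "$status"
rm -rf "$dir"
```

After the touch, make considers the generated files up to date and the configure-time bison/flex check is the only remaining gate.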

Jan


From xen-devel-bounces@lists.xenproject.org Fri Feb 05 15:04:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 15:04:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81768.151187 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l82eD-0004RG-46; Fri, 05 Feb 2021 15:03:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81768.151187; Fri, 05 Feb 2021 15:03:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l82eD-0004R9-0e; Fri, 05 Feb 2021 15:03:49 +0000
Received: by outflank-mailman (input) for mailman id 81768;
 Fri, 05 Feb 2021 15:03:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=su6d=HH=redhat.com=mst@srs-us1.protection.inumbo.net>)
 id 1l82eB-0004R4-Bh
 for xen-devel@lists.xenproject.org; Fri, 05 Feb 2021 15:03:47 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 13bb7c49-3fa4-4db7-a68f-67fc66518788;
 Fri, 05 Feb 2021 15:03:45 +0000 (UTC)
Received: from mail-ej1-f72.google.com (mail-ej1-f72.google.com
 [209.85.218.72]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-165-rMDRYlE6M6ineWQIZ6CPWw-1; Fri, 05 Feb 2021 10:03:40 -0500
Received: by mail-ej1-f72.google.com with SMTP id k3so6985835ejr.16
 for <xen-devel@lists.xenproject.org>; Fri, 05 Feb 2021 07:03:40 -0800 (PST)
Received: from redhat.com (bzq-79-180-2-31.red.bezeqint.net. [79.180.2.31])
 by smtp.gmail.com with ESMTPSA id d5sm4104521edu.12.2021.02.05.07.03.35
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 05 Feb 2021 07:03:37 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 13bb7c49-3fa4-4db7-a68f-67fc66518788
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1612537424;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=zwMMO/GhT5P4qg6F1lADPTp5N9LNgrC5HBBNpAooZ60=;
	b=JMm3/v6hHQmW9FqmN1XFAIVzCKyoLAt/YOZKbvuW2LAvubDuiZj+hVF7m0i+EGFhKyp7PA
	UWLH/z1/N49N+R1ZDkR3Jl1ilUEaF0SVr4VthyXIaVI+aYcT6/VHqgawiSr/O6nJD4DEBK
	Q9XJJwZC1xKgQdYDS2p0VlVDh2ex15Q=
X-MC-Unique: rMDRYlE6M6ineWQIZ6CPWw-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to;
        bh=zwMMO/GhT5P4qg6F1lADPTp5N9LNgrC5HBBNpAooZ60=;
        b=eAnTf+msp/DZijAjpP+UIhMC8YrsyESxsyBDDQPPkOGlzAljmMefaHitsbcVdGzz5i
         pNau5Zl5OOpKjRifSuEI0x88joGIEgx4k6owasfxtlR+aov+86VICY/PO7aafIxXU1GV
         AofpHWw0lP4d+6lUIWfSUM4nYviQ3vOGDsym5XrhZAYnvCTAukXq0Jsjp9XXmBDaViUu
         qZtCMk7q0Ua4qh48b1y69/LO3mUVj1lp4mGts2CA/chTffooFZRvBflx3ldBKCxClDfa
         QUcV312jNVRtgwHun3e8aU6tHHAqfw06HVDVUtytsYqQ72FOyXGwf/BnYc+1O3a/jeEA
         8esg==
X-Gm-Message-State: AOAM5325bOKOKEHewiY2cP0pMv3fU2iZ8RzoBykDaD2QV5a0/hnJLF3u
	RfHyGKKou0GBw/JJGoAzJKmKte3NQUl872vKiEx1kZQdMgBr1kAwnK374MlvMki7leSwH4MXZmU
	aVHyak1F++beY+u3iMe+hmWUP6OY=
X-Received: by 2002:a17:907:98c3:: with SMTP id kd3mr4449968ejc.482.1612537419224;
        Fri, 05 Feb 2021 07:03:39 -0800 (PST)
X-Google-Smtp-Source: ABdhPJyszPIeKkwa+10z8LK7e/ZhS/mQJGnrOXUtz/rjTwb1tdREvjR9aq3yiXVn0vxPl25KHrANHw==
X-Received: by 2002:a17:907:98c3:: with SMTP id kd3mr4449818ejc.482.1612537417767;
        Fri, 05 Feb 2021 07:03:37 -0800 (PST)
Date: Fri, 5 Feb 2021 10:03:34 -0500
From: "Michael S. Tsirkin" <mst@redhat.com>
To: qemu-devel@nongnu.org
Cc: Peter Maydell <peter.maydell@linaro.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	"Dr . David Alan Gilbert" <dgilbert@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	David Edmondson <david.edmondson@oracle.com>,
	Laszlo Ersek <lersek@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
Subject: [PULL v2 02/16] pci: add romsize property
Message-ID: <20210205150135.94643-3-mst@redhat.com>
References: <20210205150135.94643-1-mst@redhat.com>
MIME-Version: 1.0
In-Reply-To: <20210205150135.94643-1-mst@redhat.com>
X-Mailer: git-send-email 2.27.0.106.g8ac3dc51b1
X-Mutt-Fcc: =sent
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=mst@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

From: Paolo Bonzini <pbonzini@redhat.com>

This property can be useful for distros to set up known-good ROM sizes for
migration purposes.  The VM will fail to start if the ROM is too large,
and migration compatibility will not be broken if the ROM is too small.

Note that even though romsize is a uint32_t, it has to be between 1
(because empty ROM files are not accepted, and romsize must be greater
than the file) and 2^31 (because values above are not powers of two and
are rejected).

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-Id: <20201218182736.1634344-1-pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210203131828.156467-3-pbonzini@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: David Edmondson <david.edmondson@oracle.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
---
 include/hw/pci/pci.h     |  1 +
 hw/pci/pci.c             | 19 +++++++++++++++++--
 hw/xen/xen_pt_load_rom.c | 14 ++++++++++++--
 3 files changed, 30 insertions(+), 4 deletions(-)

diff --git a/include/hw/pci/pci.h b/include/hw/pci/pci.h
index 66db08462f..1bc231480f 100644
--- a/include/hw/pci/pci.h
+++ b/include/hw/pci/pci.h
@@ -344,6 +344,7 @@ struct PCIDevice {
 
     /* Location of option rom */
     char *romfile;
+    uint32_t romsize;
     bool has_rom;
     MemoryRegion rom;
     uint32_t rom_bar;
diff --git a/hw/pci/pci.c b/hw/pci/pci.c
index 58560c044d..a9ebef8a35 100644
--- a/hw/pci/pci.c
+++ b/hw/pci/pci.c
@@ -69,6 +69,7 @@ static void pcibus_reset(BusState *qbus);
 static Property pci_props[] = {
     DEFINE_PROP_PCI_DEVFN("addr", PCIDevice, devfn, -1),
     DEFINE_PROP_STRING("romfile", PCIDevice, romfile),
+    DEFINE_PROP_UINT32("romsize", PCIDevice, romsize, -1),
     DEFINE_PROP_UINT32("rombar",  PCIDevice, rom_bar, 1),
     DEFINE_PROP_BIT("multifunction", PCIDevice, cap_present,
                     QEMU_PCI_CAP_MULTIFUNCTION_BITNR, false),
@@ -2084,6 +2085,11 @@ static void pci_qdev_realize(DeviceState *qdev, Error **errp)
     bool is_default_rom;
     uint16_t class_id;
 
+    if (pci_dev->romsize != -1 && !is_power_of_2(pci_dev->romsize)) {
+        error_setg(errp, "ROM size %u is not a power of two", pci_dev->romsize);
+        return;
+    }
+
     /* initialize cap_present for pci_is_express() and pci_config_size(),
      * Note that hybrid PCIs are not set automatically and need to manage
      * QEMU_PCI_CAP_EXPRESS manually */
@@ -2349,7 +2355,16 @@ static void pci_add_option_rom(PCIDevice *pdev, bool is_default_rom,
         g_free(path);
         return;
     }
-    size = pow2ceil(size);
+    if (pdev->romsize != -1) {
+        if (size > pdev->romsize) {
+            error_setg(errp, "romfile \"%s\" (%u bytes) is too large for ROM size %u",
+                       pdev->romfile, (uint32_t)size, pdev->romsize);
+            g_free(path);
+            return;
+        }
+    } else {
+        pdev->romsize = pow2ceil(size);
+    }
 
     vmsd = qdev_get_vmsd(DEVICE(pdev));
 
@@ -2359,7 +2374,7 @@ static void pci_add_option_rom(PCIDevice *pdev, bool is_default_rom,
         snprintf(name, sizeof(name), "%s.rom", object_get_typename(OBJECT(pdev)));
     }
     pdev->has_rom = true;
-    memory_region_init_rom(&pdev->rom, OBJECT(pdev), name, size, &error_fatal);
+    memory_region_init_rom(&pdev->rom, OBJECT(pdev), name, pdev->romsize, &error_fatal);
     ptr = memory_region_get_ram_ptr(&pdev->rom);
     if (load_image_size(path, ptr, size) < 0) {
         error_setg(errp, "failed to load romfile \"%s\"", pdev->romfile);
diff --git a/hw/xen/xen_pt_load_rom.c b/hw/xen/xen_pt_load_rom.c
index a50a80837e..03422a8a71 100644
--- a/hw/xen/xen_pt_load_rom.c
+++ b/hw/xen/xen_pt_load_rom.c
@@ -53,10 +53,20 @@ void *pci_assign_dev_load_option_rom(PCIDevice *dev,
     }
     fseek(fp, 0, SEEK_SET);
 
+    if (dev->romsize != -1) {
+        if (st.st_size > dev->romsize) {
+            error_report("ROM BAR \"%s\" (%ld bytes) is too large for ROM size %u",
+                         rom_file, (long) st.st_size, dev->romsize);
+            goto close_rom;
+        }
+    } else {
+        dev->romsize = st.st_size;
+    }
+
     snprintf(name, sizeof(name), "%s.rom", object_get_typename(owner));
-    memory_region_init_ram(&dev->rom, owner, name, st.st_size, &error_abort);
+    memory_region_init_ram(&dev->rom, owner, name, dev->romsize, &error_abort);
     ptr = memory_region_get_ram_ptr(&dev->rom);
-    memset(ptr, 0xff, st.st_size);
+    memset(ptr, 0xff, dev->romsize);
 
     if (!fread(ptr, 1, st.st_size, fp)) {
         error_report("pci-assign: Cannot read from host %s", rom_file);
-- 
MST



From xen-devel-bounces@lists.xenproject.org Fri Feb 05 15:26:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 15:26:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81789.151200 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8303-0006cZ-20; Fri, 05 Feb 2021 15:26:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81789.151200; Fri, 05 Feb 2021 15:26:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8302-0006cS-Uv; Fri, 05 Feb 2021 15:26:22 +0000
Received: by outflank-mailman (input) for mailman id 81789;
 Fri, 05 Feb 2021 15:26:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8301-0006cK-K1; Fri, 05 Feb 2021 15:26:21 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8301-0000NN-FC; Fri, 05 Feb 2021 15:26:21 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8301-0008DH-7n; Fri, 05 Feb 2021 15:26:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l8301-0001Se-7I; Fri, 05 Feb 2021 15:26:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=NxO9kvXqUAAhatn6lTkijNDUikKI7EZeWjdF3kreQgw=; b=denZ9tHwM7cjfTKfpx7QW8mepd
	naTEbmU4BAq/PGvtuD60ZkkftsQ0wFQDEQrFlUD3WR7rfzXaoniPTqt+1FrY8B+fHXnbPtpZezCJw
	iByGJuL6jvEMxGx1xCsFxGMjiVKz00SJeMBJKfv2puFvQy/BRID+A0k7IhdXFPRGHJEk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159044-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 159044: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=d7acc47c8201611fda98ce5bd465626478ca4759
X-Osstest-Versions-That:
    xen=ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 05 Feb 2021 15:26:21 +0000

flight 159044 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159044/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  d7acc47c8201611fda98ce5bd465626478ca4759
baseline version:
 xen                  ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7

Last test of basis   159033  2021-02-05 05:01:28 Z    0 days
Testing same since   159044  2021-02-05 13:00:29 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   ff522e2e91..d7acc47c82  d7acc47c8201611fda98ce5bd465626478ca4759 -> smoke


From xen-devel-bounces@lists.xenproject.org Fri Feb 05 15:33:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 15:33:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81805.151215 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l836u-0007hM-O6; Fri, 05 Feb 2021 15:33:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81805.151215; Fri, 05 Feb 2021 15:33:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l836u-0007hF-Kc; Fri, 05 Feb 2021 15:33:28 +0000
Received: by outflank-mailman (input) for mailman id 81805;
 Fri, 05 Feb 2021 15:33:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IalI=HH=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l836t-0007hA-0D
 for xen-devel@lists.xenproject.org; Fri, 05 Feb 2021 15:33:27 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9eaa72c3-a5f0-4921-b8ff-14bfe80b849b;
 Fri, 05 Feb 2021 15:33:25 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5B947B126;
 Fri,  5 Feb 2021 15:33:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9eaa72c3-a5f0-4921-b8ff-14bfe80b849b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612539204; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=7WlWgN3tZhK/CGvAjLlD5+NGjUp7Ddd59Xa7JrvrAnM=;
	b=Q5Gaf1gn9crn2hf9kliRTUiuYXn41kNgkjvlWyZ7F8kIkzDfEupY1MX4gOfzgfjvQoEqLP
	+RyxH8w3knpfot6sYv1Ii8zFHzFgF/hPkXLb1cdy9DdazOtWEVegyGFuE9fKTQfvSjovzN
	kghdj7P+rtZiwI2HbkvN33pYDhR/t7o=
Subject: Re: [ANNOUNCE] Xen 4.15 - call for notification/status of significant
 bugs
To: Ian Jackson <iwj@xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>, Julien Grall <julien@xen.org>,
 community.manager@xenproject.org, committers@xenproject.org,
 xen-devel@lists.xenproject.org
References: <24600.8030.769396.165224@mariner.uk.xensource.com>
 <24603.58528.901884.980466@mariner.uk.xensource.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <361da489-25a1-4a36-b917-9a092900b2e5@suse.com>
Date: Fri, 5 Feb 2021 16:33:24 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <24603.58528.901884.980466@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 04.02.2021 13:12, Ian Jackson wrote:
> Although there are a few things outstanding, we are now firmly into
> the bugfixing phase of the Xen 4.15 release.
> 
> I searched my email (and my memory) and found four open blockers which
> I have listed below, and one closed blocker.
> 
> I feel there are probably more issues out there, so please let me
> know, in response to this mail, of any other significant bugs you are
> aware of.
> 
> Ian.
> 
> 
> OPEN ISSUES
> -----------

Roger has just pointed out to me that I should probably ask for
"x86/time: calibration rendezvous adjustments" to also be tracked
here. Though just to clarify - the bad behavior has been there
for a (long) while, so this isn't a recent regression.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Feb 05 15:36:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 15:36:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81807.151227 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l839V-0007qq-6g; Fri, 05 Feb 2021 15:36:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81807.151227; Fri, 05 Feb 2021 15:36:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l839V-0007qj-2g; Fri, 05 Feb 2021 15:36:09 +0000
Received: by outflank-mailman (input) for mailman id 81807;
 Fri, 05 Feb 2021 15:36:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l839T-0007qe-Lx
 for xen-devel@lists.xenproject.org; Fri, 05 Feb 2021 15:36:07 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l839T-0000Xh-Jn
 for xen-devel@lists.xenproject.org; Fri, 05 Feb 2021 15:36:07 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l839T-0002Qy-Gj
 for xen-devel@lists.xenproject.org; Fri, 05 Feb 2021 15:36:07 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l839Q-0002yP-3v; Fri, 05 Feb 2021 15:36:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=T6yOPSS0BQ1BHig02gmxg61Rp3kdqQeTr9C2cHW8R8Q=; b=U4j8YNWpVBPD4aI+0D8jF/m8Iv
	mp5G2pZ7FwulNggxblqepUWT+dExfIbq9K77l6lDtfjaICBn7qu6E9MUTw98WpDZ/WF3HGdi4paUF
	XXqqNiWxHvGgvBtl/tLeJCa8SP8JFJ7i8LUnsXTDp+6jfB/FEJi10Ra1nzHro8QuDu78=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24605.26083.900493.802201@mariner.uk.xensource.com>
Date: Fri, 5 Feb 2021 15:36:03 +0000
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
    Jan Beulich <JBeulich@suse.com>,
    Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
    Wei Liu <wl@xen.org>,
    Anthony PERARD <anthony.perard@citrix.com>,
    Jun  Nakajima <jun.nakajima@intel.com>,
    Kevin Tian <kevin.tian@intel.com>,
    =?iso-8859-2?Q?Micha=B3_Leszczy=F1ski?= <michal.leszczynski@cert.pl>,
    Tamas  K Lengyel <tamas@tklengyel.com>,
    George Dunlap <George.Dunlap@eu.citrix.com>,
    Stefano Stabellini <sstabellini@kernel.org>,
    Julien Grall <julien@xen.org>,
    Paul Durrant <paul@xen.org>,
    Hubert  Jasudowicz <hubert.jasudowicz@cert.pl>
Subject: Re: [PATCH v9 00/11] acquire_resource size and external IPT
 monitoring [and 1 more messages]
In-Reply-To: <e2cecfe6-2cd0-0d92-8f17-0283bc1f8503@citrix.com>,
	<86f9845f-f0a5-93c7-0703-c3a51d50febc@citrix.com>
References: <20210201232703.29275-1-andrew.cooper3@citrix.com>
	<20210201232703.29275-2-andrew.cooper3@citrix.com>
	<86f9845f-f0a5-93c7-0703-c3a51d50febc@citrix.com>
	<24601.17287.280124.602809@mariner.uk.xensource.com>
	<e2cecfe6-2cd0-0d92-8f17-0283bc1f8503@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Thanks, Andy, for those writeups.  I still have substantial misgivings
and don't feel confident.  However, I don't think continuing attempts
to understand and/or mitigate this risk will be helpful.  I need to
make a decision now.

I think there are significant downsides to either choice here.  At
this stage of the freeze I am going to err on the side of saying
"yes", so, for the whole series:

Release-Acked-by: Ian Jackson <iwj@xenproject.org>

Provided this is committed by the end of Monday at the very latest.  I
would appreciate it if it could be committed today.

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Fri Feb 05 15:38:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 15:38:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81810.151239 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l83C8-0007z7-LE; Fri, 05 Feb 2021 15:38:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81810.151239; Fri, 05 Feb 2021 15:38:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l83C8-0007z0-HT; Fri, 05 Feb 2021 15:38:52 +0000
Received: by outflank-mailman (input) for mailman id 81810;
 Fri, 05 Feb 2021 15:38:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l83C7-0007yu-CW
 for xen-devel@lists.xenproject.org; Fri, 05 Feb 2021 15:38:51 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l83C7-0000a6-9Z
 for xen-devel@lists.xenproject.org; Fri, 05 Feb 2021 15:38:51 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l83C7-0002fl-8q
 for xen-devel@lists.xenproject.org; Fri, 05 Feb 2021 15:38:51 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l83C4-0002zq-3e; Fri, 05 Feb 2021 15:38:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=16mzkL9plM92z0MNgTt2S6zESa+fJFqwk/urzUjR48U=; b=UM2GkWwkcYdSEtRLtDnSgYmTOw
	qet/O8ToqkWIwblC7B5hyHmVOBXxIporn6aaJdbSPmAHi9U6hkvv8AR2BGGzmxU/8ROWJQhiDqTJY
	muhrzJiHnGLlPMZM8FbiiBqz+iAbv1zLqO6bSER2/8jak5h6PGDsHsnCt4bi9UVrysuo=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 8bit
Message-ID: <24605.26247.828218.254710@mariner.uk.xensource.com>
Date: Fri, 5 Feb 2021 15:38:47 +0000
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: <xen-devel@lists.xenproject.org>,
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH for-4.15] tools/tests: fix resource test build on FreeBSD
In-Reply-To: <20210205121938.4636-1-roger.pau@citrix.com>
References: <20210205121938.4636-1-roger.pau@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Roger Pau Monne writes ("[PATCH for-4.15] tools/tests: fix resource test build on FreeBSD"):
> error.h is not a standard header, and none of the functions declared
> there are actually used by the code. This fixes the build on FreeBSD,
> which doesn't have error.h.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
> Build tested on both Linux and FreeBSD. The risk would be breaking the
> build, but it's already broken on FreeBSD so there's not much to lose.
> Build breakages on Linux will be spotted fast by either osstest or the
> gitlab build.

Release-Acked-by: Ian Jackson <iwj@xenproject.org>
Acked-by: Ian Jackson <iwj@xenproject.org>

Thanks.  I think this was probably a typo.

Ian.


From xen-devel-bounces@lists.xenproject.org Fri Feb 05 15:41:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 15:41:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81818.151251 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l83Eq-0000a4-2a; Fri, 05 Feb 2021 15:41:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81818.151251; Fri, 05 Feb 2021 15:41:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l83Ep-0000Zx-Vc; Fri, 05 Feb 2021 15:41:39 +0000
Received: by outflank-mailman (input) for mailman id 81818;
 Fri, 05 Feb 2021 15:41:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l83Eo-0000Zs-L6
 for xen-devel@lists.xenproject.org; Fri, 05 Feb 2021 15:41:38 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l83Eo-0000dy-HS
 for xen-devel@lists.xenproject.org; Fri, 05 Feb 2021 15:41:38 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l83Eo-0002s4-G5
 for xen-devel@lists.xenproject.org; Fri, 05 Feb 2021 15:41:38 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l83El-00030N-4V; Fri, 05 Feb 2021 15:41:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=i5r1kT9l6nzTnQb8KcgXr4DsHfbpXhigJZttROLTS9Y=; b=GUzsyOf/XTSpG2XbWB1kULcCqn
	1ezV8TsU1ISBCkbVL/Dh7i6XbCn3ehR4qq5Kam2diw1UpmEIkJqNKBwzXQ7b4UM3WD/KZpTILxXYE
	HonL3obWGZ0cZ21UQXPjyQdBS5FwXqXMombUromy9LGfiF9SdPFJEaPIXp4IlhNN0ing=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 8bit
Message-ID: <24605.26414.914939.725856@mariner.uk.xensource.com>
Date: Fri, 5 Feb 2021 15:41:34 +0000
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: <xen-devel@lists.xenproject.org>,
    Wei Liu <wl@xen.org>
Subject: [PATCH for-4.15] tools/configure: add bison as mandatory
In-Reply-To: <20210205115327.4086-1-roger.pau@citrix.com>
References: <20210205115327.4086-1-roger.pau@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Roger Pau Monne writes ("[PATCH for-4.15] tools/configure: add bison as mandatory"):
> Bison is now mandatory when the pvshim build is enabled in order to
> generate the Kconfig.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
> Please re-run autogen.sh after applying.
> 
> Fallout from this patch can lead to a broken configure script being
> generated or bison not being detected correctly, but those will be
> caught quite quickly by the automated testing.

Release-Acked-by: Ian Jackson <iwj@xenproject.org>
Reviewed-by: Ian Jackson <iwj@xenproject.org>

I've read the rest of the thread and I prefer Roger's version of this
patch.

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Fri Feb 05 15:43:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 15:43:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81819.151263 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l83GR-0000hJ-EX; Fri, 05 Feb 2021 15:43:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81819.151263; Fri, 05 Feb 2021 15:43:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l83GR-0000hC-Aw; Fri, 05 Feb 2021 15:43:19 +0000
Received: by outflank-mailman (input) for mailman id 81819;
 Fri, 05 Feb 2021 15:43:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cB+w=HH=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1l83GP-0000h6-Ii
 for xen-devel@lists.xenproject.org; Fri, 05 Feb 2021 15:43:17 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f46ad903-751d-4ba6-9fde-780eb22fd7c2;
 Fri, 05 Feb 2021 15:43:16 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f46ad903-751d-4ba6-9fde-780eb22fd7c2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612539796;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=mH9yQkQE1fqjRRX8L2lFXDS47yVPfUw4NppaJ8wSLnQ=;
  b=fT4qYq96F4PbOKvGa4bvKOVJpCJCrte30STrnJ2YPwdnIki+aEbg3xiz
   RzdTumJDmaNbDbILKCbLZtU6GHDnNoP/nEx4cDH3+3JLmqJbo5QD+F6Sj
   /SDJ3GB7+Szp0MdrCEodddI0DajZMFnYdVPjOA3dnR7fZJOYrrb8pZ4c0
   U=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: ex0bbkdtnMwWjLh7Ai9ZYX1H7X+eS1N3ddpcfTuagk1h9Jn+TF4RHi5vfO6ueSlXjX1pMoDH0i
 9EJ6XEeeIog5yHFFC/mwgHciaiPCZctV68EQguM5+WhYUd217bZJj3gdtw2XyerGcPq4r4WyB1
 dEecPA+ZnaJie3BHrs14wtDXNpoYAF2lVt/e70mMI+OdzFLkDK0J5j4htg/jpdW8zB+a3xL5TT
 rlEh4Nk7XzaV2KzRvhxyA57xAeYMqWuLqX3zVhbzewRuhSgpvDbOv2bDnCDeiVUm7aYhgOhslo
 kLg=
X-SBRS: 5.2
X-MesageID: 36854674
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,155,1610427600"; 
   d="scan'208";a="36854674"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=T7HTzf18NmHFSRsfHxaZEztXkvhe/Zg0O2LoSPTOLltrGhyvK6FxgwJOSWQySld7Y1IlQGqm32fRuMuFIVUgyc7WrePJ2dTcslXQhB/zeWHTMb+0LxGN1Fr1Iu87OFLhW62cIgPKD4r/7rFsmhOr6XvwHx4sHEOBYJe3YPMWN+DcWjAgRmcJvA+NqBV3RGkgxfRalEQ5cLiBViTj+Y8WmJGLlH+klPL4omO69rAhdyu2WyDlluurZqe/NAkxCn/lKH5Tm0XlXJkr4ukVSMbXZ6M0EYj8ZImUMJr5sPnHazkdCxsOcI07DymR9if45ijQJP70wKYfCghvQWZ1KRcN8A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=mH9yQkQE1fqjRRX8L2lFXDS47yVPfUw4NppaJ8wSLnQ=;
 b=ZFFWOZYOR96KtORLXVsmjIyN3wjWvt4RTfYGLOK67fpwB/6ZWXe13/TBuGC81t+XBEy4zG4dyjompOFARioaXpG/RxawzPMqE1kc08k3OITGzpGF/RzfSMXK+wNBfNWKxGtD6uak+8nSQX5Cb3nGVKvNvRJWQv93n0mYdkQAj9kOMEDYddbJ6hRg9OQYMwy+t7ekUhdF7TAuGhH47mpbhh2qhiVTa2pRa3cl2ioD1t/PWI5Q0COI/Ivqw1asD+FfBFuS5UHbLIFNcIgVWkiuNYlkClALivAWJ0XPY/JZDocLZ93IBP46wNxAPFKAONxveOoHJz5Z+pXiAyh4MYzZ/Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=mH9yQkQE1fqjRRX8L2lFXDS47yVPfUw4NppaJ8wSLnQ=;
 b=myjycmqTlsvzqFkwGCm/gW0MlP3As98v5Gdw3yIAnGj5NHATXHViNS0htc+E94frHYksIONFFikDUu8Jd4ohphFjd5DqeATQhP+k8Hi4sBIuv4I5IUUHwmIh6XAep8mW70dif62M6PSNWfss6U9Gtv6t4PbZiAGQKOsj70IpcYw=
Date: Fri, 5 Feb 2021 16:43:00 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Tim Deegan
	<tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
Subject: Re: [PATCH 02/17] x86: split __{get,put}_user() into "guest" and
 "unsafe" variants
Message-ID: <YB1nhBeMRVGyO8Fk@Air-de-Roger>
References: <4f1975a9-bdd9-f556-9db5-eb6c428f258f@suse.com>
 <13d1d621-21db-0e59-6603-2b22b6a9d180@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <13d1d621-21db-0e59-6603-2b22b6a9d180@suse.com>
X-ClientProxiedBy: MR2P264CA0015.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:1::27) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: d6bae161-3f8a-4d94-0368-08d8c9ecbc00
X-MS-TrafficTypeDiagnostic: DM6PR03MB4219:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB4219FEAC90C6DC79ACD32AD38FB29@DM6PR03MB4219.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: eXb0pViF+tpdpAmm/j7/KXeld+8BAdL19iVIPa12peeQLrDTvhRFwTkixdCxdgn+1pjiGUHD7GSvG7dyIvXebxbeVubpFMgpQ/Eq5l1Gv+gy4hWdk24lXbYBC8bKm1bS+qt2CqNAe9U8/K6LdpdS3KT1CV+SAxdC+TtZf2if1f+TueJCDtOSabQMGkF8oTbWslpjTLTdIvRoNxrhw/AYWdMqE0qUO3mLbbz8m4NJn9P5gzCThEfCaRpOUiiYmw2HUUt6b9XpKTEDIc6fyQtt0S8VhVfsnKR2vjz5KmzrWB+eS70fjXOaW8uz/0qc0ExaaKYmKnXE8XlZ+uL9WjM8jbs5O/ijdc2parmWiadW3HRNX7XS/XzJht9WYm6uOV1OwOBgD7IP0vmwiKlmLOGrujNTAeiAmkzC4/dacwXZSbTzfoFPTgwast4jqfN2XjQ4k4UzZNWt/EwzM2lK01rUQuq8LotBLLAtG3ABgmMpftfRAhWYDecCmVxZngbm9oCYGF7R7nbje93bqTQfitl9lQ==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(396003)(346002)(39860400002)(366004)(136003)(376002)(478600001)(86362001)(4326008)(186003)(66556008)(8676002)(4744005)(6486002)(8936002)(66476007)(2906002)(6496006)(26005)(16526019)(66946007)(85182001)(6916009)(54906003)(33716001)(9686003)(107886003)(6666004)(316002)(956004)(5660300002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?UzFPVXhHQmUvS3lMVytmSXZzbTRPTFd4NUtlNVhWbnZhbUtvOEliZHVMdkY2?=
 =?utf-8?B?OURiSk1SOFNIUUF1eHp1MjVqcmRoL1pKV1hJZXBSeWxuZkpSdFBzbUc3YXNK?=
 =?utf-8?B?YmFpSUFvK3BoNllpVHM1WDN2RDNyYm9rcmh0ZGplYmxjKzFlY3hUN0YvMm1r?=
 =?utf-8?B?bzZaYnZmS2RGRFZydzVQYUhlMytXcnBHZjREbGQ2TVlRN0RFcTlWQmtLTnRX?=
 =?utf-8?B?MUFlaWpUNWpLQWZkTzBvcG1HeVNDbVVoY3NvSHpyNmw4Z3JpWFBWbjYvUzBo?=
 =?utf-8?B?ZU0vaDluajFxVnpIbkNmRFAxOHVoV1QxRStkMVZJdyt1VFBOak9wR1ZGNzZj?=
 =?utf-8?B?UVV1VHE1S2RBZHBWWUVuZ0xqcnQ1dHZTNkdUSGlsYjNZM1RFK1ovQ3BoWktt?=
 =?utf-8?B?T1RFcU5JMWhESzlNRGRtYzMrT3BlMnBoUHVFM1BTS1Y0L3hsT216dllwZVJu?=
 =?utf-8?B?THNVS0JDS1BBVGlRZjVRMkJFZnJ6blN3Zm5saHNZNnVvRmtpTG95eVR5SXZS?=
 =?utf-8?B?RStFMU5VOG9LVDNHaGNyaGpNL1Q0eC9xN2hnSHYxNVU4M1BCb2s2eDBiQmpV?=
 =?utf-8?B?Z1YzeUlLS2FZU1VZVlZLOTBCeG95bkNJc1FlM2JEM3FZR0FoM0lMRkF5ekhz?=
 =?utf-8?B?RWZ5ekNCdFhKUEZQZUVML1RkN2dxTW5qK2V4SndwOVUwLytyVXFDbWtZalRK?=
 =?utf-8?B?MTIrMW83bStwQUxhK1RRZjloa2ZOa2JDY0hicS9zTmFDeDZLUXJ4OEJTRytj?=
 =?utf-8?B?MS8zeGg3b3ZyUjUwZHRCWml0dFN1S29YQ1FPVHRyZkpnRGJpaXB5WVFPTTN5?=
 =?utf-8?B?cDhwOXR0b0FKUDdua3FXem5pOXVQeEJ6Qm9RblBRSW0zY0UzRkVwNjFySGRo?=
 =?utf-8?B?Y0FkZmY5VkJ0U2NaSW5JeXhWVFFQaDJrcjFKdTY1WUVGcDZmNFUxa0R6RWds?=
 =?utf-8?B?V0gra0cyZWlsOG5iYVNCeEkyTTF5RlR5MzlYRFNpZEVJdmZkdmg2eEdYMFZz?=
 =?utf-8?B?QkF6bzZ4cnJ5K0JFL09PSlhXS1ZPSFA0S0NCM0tveEZpbEF5QjNKQnhic2FZ?=
 =?utf-8?B?YWo3TDR2UnozUXVnZ2ZYSlZia0VhcDlTYkprNCsvaFZ3QnNVT2NvaUdQSHda?=
 =?utf-8?B?Yy9Hb0lmejJkOVk2Q2xNVHFYMVQxMlViVTlvTnU3N09vZzkzK25rbC9URzh6?=
 =?utf-8?B?K0lTQjlDVGV6Lzc0VlltU00zTEx3TnVqNmJEcUN3NmcwQ090bGRWMEdqYVo4?=
 =?utf-8?B?aGRNRE1WRk1RRDJ0emJqeVZOM05MaUtNV2NkemMwNUppYi9Jc1NCSW9ORGxh?=
 =?utf-8?B?YlI4K3N3S0dXaHNaOUh4cXFudXJPbkl1L05sMmtPMU9xM1preVl0d1F4Mkdv?=
 =?utf-8?B?UDV4aEZxMHhaTGoyVTFCSWVkMDNwSGF0em1jQ29wdnhtRlBHYTlmcUZKRTRl?=
 =?utf-8?B?U2hjNE5paWhqQk1VMDQ5N0dHR3JuTGNhVHFGS3J5dkZpTGlZc0tTbWVZQVZX?=
 =?utf-8?B?K01Nd0FpK3NLSzlBMDRGQVVDUUFpcWVVdW1MaVpxYU1hZDNvenFLc3JNQWNC?=
 =?utf-8?B?WWtwNjNDNXlic010bFZFN2FKdWFqT0VBa0hDOHVTMnBzWktzejAzUzZVOFVN?=
 =?utf-8?B?Q1d4RUtTaFgvTGxWK2NUWEFxNmRoMGxKZXRGdFF3VHB1eTUxWXBwTEwyQitC?=
 =?utf-8?B?TXo5MjE2MnFRMHZyeTU2SnlrK3A1WGFtZDVrWCtGTDdpMS9uMStXMnRnME56?=
 =?utf-8?Q?1pafqDLEkuK0eW+hIm6gXGzHrT44rh/s6QHUckR?=
X-MS-Exchange-CrossTenant-Network-Message-Id: d6bae161-3f8a-4d94-0368-08d8c9ecbc00
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Feb 2021 15:43:07.7861
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: mXLl0U6OpnQi7SmowwFMmgLheA0Jro8ZeOYOJ+AQtIIzzbER2OuKsaIEH9QPqQYBIR3LQhDT0x6SFi566R+6vw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4219
X-OriginatorOrg: citrix.com

On Thu, Jan 14, 2021 at 04:04:11PM +0100, Jan Beulich wrote:
> The "guest" variants are intended to work with (potentially) fully guest
> controlled addresses, while the "unsafe" variants are not.

Just to clarify, both work against user addresses, but guest variants
need to be more careful because the guest provided address can also be
modified?

I'm trying to understand the difference between "fully guest
controlled" and "guest controlled".

Maybe an example would help clarify.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Fri Feb 05 16:01:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 16:01:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81826.151275 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l83Xl-0003Jo-Vb; Fri, 05 Feb 2021 16:01:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81826.151275; Fri, 05 Feb 2021 16:01:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l83Xl-0003Jh-Rj; Fri, 05 Feb 2021 16:01:13 +0000
Received: by outflank-mailman (input) for mailman id 81826;
 Fri, 05 Feb 2021 16:01:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cB+w=HH=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1l83Xk-0003Ja-0V
 for xen-devel@lists.xenproject.org; Fri, 05 Feb 2021 16:01:12 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0dd9cad8-ba63-45ce-a3d4-83261a840557;
 Fri, 05 Feb 2021 16:01:10 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0dd9cad8-ba63-45ce-a3d4-83261a840557
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612540869;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=tAQoe8/Q8jNI3ywYeerUjvLYjZb7VgWEY06WpUr04lU=;
  b=IX3tpOdqSzDP6kQLfONwQsrTnySQqdEec/IKNvyps2y5rRPzoGwnOoyB
   qR7b5HHv8D4r6PTNwTY53y3B1zWLLKgWYI/2E7SHmhU/tK41l3gl1MEj8
   d8y39Xfl85dLeZwf+oDkZwnwqH/fPO14eKI3sI1+sIZv5c0KgnxRS9Kqo
   s=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: s7NeREJ15/eqlvHZBLIfZ8UwVKgHY4HQpaJ92zj5qz8063s7UosZxZ2t9ciD35FpbFyDlcnA7r
 0u4IrFaCIYdPYDpe3lk08ILLhZsauE8/RVDuHwunGrDkSe90eQbzTnMgnB2rowlKSIvW+WL8bE
 EF+GWYfAj9Zhtjtv+jO0lPt4pWwTemi2k/D/Ma8REE10syXWy4Wd9zAM1THBDT3kODxJ3K/ery
 2l+MSd1W+3vcfU2FDpOl0siLwHr8jnKmmLFzyxyM2kjBB8ERRSG5K5rh890fz9iMqsSXTl1c+r
 WYs=
X-SBRS: 5.2
X-MesageID: 37028792
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,155,1610427600"; 
   d="scan'208";a="37028792"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=KQzgG0hey/9oMcUA7AaP+AdN3yNgpUyw/RMSYOkLaAoQelb3GjudQZbM4YjOxsolmB47ehquLn3WUG59rE6qBCbbyu9HDAEutQ6yDw5zTdYSu6JGJrdgcStpufux4vb9akk66RCzUMczSYSx1iCRhz4k/dwRIWxaBQWPRjGl2KrR3ud3YgkVR7TH9wy8IhLl4bodgXgj3pfB2VQT1b4LdF8TePqRb/m7T/1EYhHkQxnzxZTicrtQN7S1zTVAb2s/a11EXoT1v6eQ+s/268oz6M0gjUl/SowXu+NK2t8fMTgyZOeGqrp2cyoO52RewjQDH/c6DCp8mb22wL1sB6TpVQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nT4NKIPVR/ucz9aEFjx9b8PXYVT84ee6W2JYqmSQNKA=;
 b=FKp4Ij/Xo8DnR5Rzk5XZb+l9l1mm/dtGvOtk2STnFa3atJNovmUWLoZ1D9dQX+Dbxyx55F/IPxIuagBromXLE82AVRrRMl/ocZwnvbOtk5Y5WP0FaqXm3nQyyNdveRcv3Mw6ZoSQP7Bwk4z6zu3vQtH0j4RwXC6hiGJpV0mMJPb85rlGH4REaV9iRQ8/40bHON4qI8mIzj5RdxKB5AVeryRjsiUJX8bvDVEhSgMpNmLLYT8yBa4hNTuTY13WiXeew/hVrpUij+ihicntVvsvHy433PfGS5ej2aB6/gf85bGR7sa/WmiI7Zk0VG/iTuJp35UuOFgCQN5Wdf61P3Suxw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nT4NKIPVR/ucz9aEFjx9b8PXYVT84ee6W2JYqmSQNKA=;
 b=FWID/K6xiBm+8mX0ac9936r+0ftnGMUXCFs8Ul97G9TPgDa7/0mX4msfLiD8qSDMbmemmkqrLl4rh7rGP0/LvLPC6+30B+0o191EgPzj75mEPgrug6dCkvjOI2iDONZOzTWqykBMmIrYufO0iEiNKEMASHHMOc8BNFWmjZskMgU=
Date: Fri, 5 Feb 2021 17:00:55 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Claudemir Todo Bom
	<claudemir@todobom.com>
Subject: Re: [PATCH v2 1/3] x86/time: change initiation of the calibration
 timer
Message-ID: <YB1rt1LNXuhpQNZC@Air-de-Roger>
References: <b8d1d4f8-8675-1393-8524-d060ffb1551a@suse.com>
 <ca624e2e-5a2c-e2a6-6e26-54bc3ac7cc19@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <ca624e2e-5a2c-e2a6-6e26-54bc3ac7cc19@suse.com>
X-ClientProxiedBy: MR2P264CA0039.FRAP264.PROD.OUTLOOK.COM (2603:10a6:500::27)
 To DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: d40efb7b-088d-480f-4a7f-08d8c9ef3b83
X-MS-TrafficTypeDiagnostic: DM5PR03MB2636:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM5PR03MB263689D20E50B271641A01158FB29@DM5PR03MB2636.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7219;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: vw5kRnu/PEWgFPcDlSZdgLQHnKc8/dLCSjGbM01wr6V9EvAZwFVG4R5y88JNIGAQUrgtjlmEw3eRMHbuZXTYGDBFcTumekbAHCwMg7ybcsNQyUnug4gnIR0eETEC/2HxWiLbCN8IjzP9BK2wAAMsl9Ri3FUmBc+TOdeOL1FMvi7MiD7xccr3fQly/7+90zCgz5rtzsOU1hnjDIKahW+t0jNdlUg2IpUeYN8HAPTGZUvzZ8qdigiax7dXAkPdfpsoSVv8++cDr4JfDssguIy8A+DydEfEXzZPhi7sT7oyy9O2XVJAqsAG61CWEZ2JR0ewu1xklvT5Bl3nrVBYc3VE5CYMvYjj91F9kW6cq4mooYG2d8e+lyJA75sCid5hbACKEo++uq1YuuLkELTtI5s8U1N5LfR6OK3gGbrGb11PsiL5btEDcpIXVHY+CNX3QTz+/f4yNh5u+6Nmmk+QykXvIRiA07L42THtDuhhgQq2ULVdCAijfkYSbPfiSqmAG7wYwhAngYd+p2Dq2yqCIfCA4g==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(396003)(346002)(39860400002)(376002)(366004)(136003)(86362001)(6916009)(66556008)(6496006)(9686003)(2906002)(5660300002)(4326008)(33716001)(54906003)(8676002)(85182001)(6666004)(956004)(83380400001)(478600001)(4744005)(66946007)(316002)(186003)(66476007)(26005)(8936002)(6486002)(16526019);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?RXpjM0Zud1VMeVgyQUJob29WbGhoR0l5dGdkclFUeTNGU0VFclllY1RUUUxh?=
 =?utf-8?B?OTU1SGN5aUEvbFc1SkxkNElwV3MvcmJ4RndJaHJuYTYrQVliTnM2OGEvWTg4?=
 =?utf-8?B?TFBmclZ1QUVsclB0aUNkQndwcHNGMXNTdWZDVWxxM1F2OFFsaUkyeFNDTzFS?=
 =?utf-8?B?ZmVZbUxKSGZQcnJuNkhTZ0lQU0hYZkw4TGVMbktCMm1OYmVvZnNmVG9WamE3?=
 =?utf-8?B?N2I4R0RTVmt1ZXczUnBra2VOcHZJaGJLV01Jc20rTnZoem8zTUpGeFdOREJt?=
 =?utf-8?B?SHdpdENBWllITXd5aWxGMUdmcVl5N2RKRWkvN0k2ZjRXMFd2T1AwaERnWjNy?=
 =?utf-8?B?M0JuRUIyb1loRnNMNlQ4ZzVBeldVek9uaUc1NTFBclN5aFprdDZVS1RRTlRV?=
 =?utf-8?B?OUJYZEZlajB4RFdmRW9Nc0tXdkNTK3dranBZdm5kSHFXQkpoMG9XNGpDQXV3?=
 =?utf-8?B?L3piZEtFazVHQ1doVUdrME9KcElpWEZFT3E3UmxFaWp4OEFQMUhOY0tjdlJo?=
 =?utf-8?B?ZjhHQm14cEgxMjRqbFZIdlppWkc5T05EN0Q4V1FiOXlIQmRWaTh4VENHTllq?=
 =?utf-8?B?cGxOUDFmc0xhMjh2K1Y0eWJYT29wbURvZFg5V0FaN3U5T0dQcTc4MVZ1MFlU?=
 =?utf-8?B?eC91N3NveUdWbFFKaElkbUZXbTdyY1B5b25jT3B5VDY0MGJqVldZbjRoRzNr?=
 =?utf-8?B?Y2lOOVZVcFpENVFYTFNzcmVCeDhVRUlMRG1RTm5OaWU5UTUwa2FFcHhNRHlO?=
 =?utf-8?B?cmM3RUhES2xSL0JnYkRTaGZPWTVFeVphY00wNzg0clEwdkJ5SFZQbjJuQlhu?=
 =?utf-8?B?Z3cyVHFUbVVjd1l1RzFncUtBVEpwN010OUFWbDhrVTFTRFdCdlY1cmM3NVFE?=
 =?utf-8?B?QXhvVXJmQnB2Z0QvcVVlQy94bFByOUFldnZENVNlVmNka2hqdEZ2blJJUHlP?=
 =?utf-8?B?ZW5kUGRWOGdlQ0x6VW1uVi9QTjZhWDlsaElSNXlJM3B0c3BaaE1OUzlTS3Bm?=
 =?utf-8?B?d1p6aUxMNlZzR0x3V2xMUGNNKzRLS2I4VjZ5U1VkZ1hWM2dTR0h3bEo1YkI3?=
 =?utf-8?B?SkpPdHgvUFVwQTVibkNJaEozaUlEUHNTc3hvUlFDUDhLL0xGdUVnK1h4akli?=
 =?utf-8?B?NjNFMjM0aDdUYWwzSTMxRWdSSFZsbzgxd3ZGd01nNEU3S2JZQ3F0Z2tTanF1?=
 =?utf-8?B?V1lYb1NMTGxwYU9rZk8wakNTcHkzM3V2Qml2UytBamp1aWdJV1dLZ095eG8y?=
 =?utf-8?B?YzhGRDZNR1gwaXpKM2tmOG5jdEUwUi9VNDdsTjZoZnRNWkJobUc4aUdEN2Mx?=
 =?utf-8?B?Y0lsbjF0S3hXakdLNWZOMURFeTI2cXhIWmRuc0FuYlZUQU9HR2JVa0hXbFN5?=
 =?utf-8?B?WDgzZDNiNjBTTi9PZDhkc1pBTElSNnZ4Uk1zWnBLcG9NdFdXOVBMVmlKQ2dS?=
 =?utf-8?B?TEdkQ29xVTNoM0NyZStaazVQeFFJM0ZXWUVmUFBxSlNnQ2RUd0JYaGhYMGZJ?=
 =?utf-8?B?K3ZQTG95QWExSkNVT2M2VHVwV3ZTSDNiK1BtQTFHWGpvS082aXNFTkQwSUtv?=
 =?utf-8?B?WVpNUFlPNTFEeDVOQ09OUzBVVjJEbUhjMVIraEtqRTlFTnRUeW11Wkl5ZjBN?=
 =?utf-8?B?TllScGhxRXFwZ2dhbTNaOTUzZFV1NThDTGJDUUNlSTdiRnFINXBiL2tJa2xW?=
 =?utf-8?B?R2IzZ05SdU1kNTJEeWFCdVU2ZTI5UGo0aFRZQVpVWDNmNXMrNTZOZFN5MkdF?=
 =?utf-8?Q?c0RmUrIz/Wt+U3iXpjueWZnGdoqgpX+aK5PlO2n?=
X-MS-Exchange-CrossTenant-Network-Message-Id: d40efb7b-088d-480f-4a7f-08d8c9ef3b83
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Feb 2021 16:01:00.6946
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 9t/4lZ7mx8sYuLJuvqF4SynOjO74D5YuJs4RS2QuDEKee3x1Bzah1OZdlHRPPsBUBkCJ56c1ij8qy69JSWblOw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB2636
X-OriginatorOrg: citrix.com

On Mon, Feb 01, 2021 at 01:42:35PM +0100, Jan Beulich wrote:
> Setting the timer a second (EPOCH) into the future at a random point
> during boot (prior to bringing up APs and prior to launching Dom0) does
> not yield predictable results: The timer may expire while we're still
> bringing up APs (too early) or when Dom0 already boots (too late).
> Instead invoke the timer handler function explicitly at a predictable
> point in time, once we've established the rendezvous function to use
> (and hence also once all APs are online). This will, through the raising
> and handling of TIMER_SOFTIRQ, then also have the effect of arming the
> timer.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Fri Feb 05 16:13:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 16:13:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81842.151305 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l83jZ-0004fu-K0; Fri, 05 Feb 2021 16:13:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81842.151305; Fri, 05 Feb 2021 16:13:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l83jZ-0004fn-H8; Fri, 05 Feb 2021 16:13:25 +0000
Received: by outflank-mailman (input) for mailman id 81842;
 Fri, 05 Feb 2021 16:13:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IalI=HH=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l83jY-0004fi-7f
 for xen-devel@lists.xenproject.org; Fri, 05 Feb 2021 16:13:24 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e263937d-e489-46ef-a3a4-d41fa0f964e1;
 Fri, 05 Feb 2021 16:13:23 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A21B7AC43;
 Fri,  5 Feb 2021 16:13:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e263937d-e489-46ef-a3a4-d41fa0f964e1
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612541602; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=b3kirXJU48dypmJzCOnO80H3HzLlx3BNfZfWBsqH/yU=;
	b=fZUWwOvCT/fDNtuDWTnP1guXCOci10CkdX7b/qwhurFlNALpFAH3VFiQZkbUY3XV1V52Ey
	ok3dA8l6RhswfTxuwU2Ty9KpTdmqrB8ZnvwRUADoary/pw/TpMdURpxl431CHBN+z0YF+O
	w8pB5OPjmKpYa7fnjJeedD6IToAG8b0=
Subject: Re: [PATCH 02/17] x86: split __{get,put}_user() into "guest" and
 "unsafe" variants
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
References: <4f1975a9-bdd9-f556-9db5-eb6c428f258f@suse.com>
 <13d1d621-21db-0e59-6603-2b22b6a9d180@suse.com>
 <YB1nhBeMRVGyO8Fk@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d23dc40c-3b37-ade2-f987-4f79b06901b1@suse.com>
Date: Fri, 5 Feb 2021 17:13:22 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <YB1nhBeMRVGyO8Fk@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 05.02.2021 16:43, Roger Pau Monné wrote:
> On Thu, Jan 14, 2021 at 04:04:11PM +0100, Jan Beulich wrote:
>> The "guest" variants are intended to work with (potentially) fully guest
>> controlled addresses, while the "unsafe" variants are not.
> 
> Just to clarify, both work against user addresses, but guest variants
> need to be more careful because the guest provided address can also be
> modified?
> 
> I'm trying to understand the difference between "fully guest
> controlled" and "guest controlled".

Not exactly, no. "unsafe" means access to anything which may
fault, guest controlled or not. do_invalid_op()'s reading of
the insn stream is a good example - the faulting insn there
isn't guest controlled at all, but we still want to be careful
when trying to read these bytes, as we don't want to fully
trust %rip there.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Feb 05 16:15:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 16:15:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81844.151317 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l83le-0004op-1C; Fri, 05 Feb 2021 16:15:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81844.151317; Fri, 05 Feb 2021 16:15:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l83ld-0004oi-Ti; Fri, 05 Feb 2021 16:15:33 +0000
Received: by outflank-mailman (input) for mailman id 81844;
 Fri, 05 Feb 2021 16:15:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cB+w=HH=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1l83lc-0004od-LN
 for xen-devel@lists.xenproject.org; Fri, 05 Feb 2021 16:15:32 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3ed90c9f-312e-442a-94b0-bbcbcdde2320;
 Fri, 05 Feb 2021 16:15:31 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3ed90c9f-312e-442a-94b0-bbcbcdde2320
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612541731;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=tx9o70c+H5WdmuAIAHgElwvn2qLRlQMIoDmnhpOZZIk=;
  b=Gz43EEmje54u1s41OcQCcP1uVEKwC5s0IfTE71ViMhd2T2RK2tQmOR8f
   Om/um3OucxFW0+E1eJgUEXOBRNjQCw1tEvgIGYT7TUGW+vjhZzMQNM/MA
   u/kW7x3eGssh8p63k6HLXC2YQ49hWgAwoWCxS7EnIqib3hOoL7McCr+XF
   o=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: B/+ETSqGYnLEvGRQFuEZAJZEK+jQyFjBPqMqgKp1BRpRzm3LSYp30bSaJxbS4jNUjjOlmVBcOF
 SChEckX/5Bgrn4W1mc98OD5J4Y6FlWueDxgaRrUrKS9KZ9QgCBw1hRxzvrYYQwnCENuMiHPxNs
 t50LIs4Db+Eo33BmfxMiYmurfzP/iUJPITwO9vEJnLfJ8h8r6KW9Z6SYtQVAy+68AJpa4sBaj5
 UTDBiYoIJj8MvuuIVMmFzsf9GWE89K+xOYkwB9vZbiT1Qq+PSZOhxHrSCO2A2nlb9bhl3dVwZh
 tsg=
X-SBRS: 5.2
X-MesageID: 37030285
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,155,1610427600"; 
   d="scan'208";a="37030285"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=kQKvUlpS0i9XPQDalnQpHnkXai7nBxLBqNJ0bLHtfgkxLLtCnViHgsXNFijUT4GH2hwQ1MAk44fEJyImALAFRKzeOnTznTSW3nPOVVU2YhWKUrBJ3P5e/mct5fx67MzXLUqTSGAC+PVwq6CDe4oNEkYOgDmUiA8TCDmpMbbJ7YGQW3xoT3HdDyhdGeKGmsoB1uItacDJ2Rlhx9JFd+A6cQZfS0/rTOIBakxtFM3Ct7LJHS9ujR5zubEJHxabolq2GMn3AyBUN035PKwNCHuPQL8ESLpF+gPGx3BMPLcktVbQ7CHt6E/1jwnBCVTMlW2rGnnbNc0qfO1rFJXIuZrdZw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=oc3FDCN3o1ZwwMC7DuyIq2R1mf6gvpVM0ajBjiuIH9M=;
 b=EXYVjtvu3lmM09662+RqpjSa5Eoy57VhqLA7EJgOVCbWwlQ/Fpt+PWihcWREDHYRcJmowrGmXqcB8kggx/fvk7/YA1IOckJna+jQxd6epOBX8MB3d5BaWDDmUCfhF7N+LOPqZx/fwZUXICmrnkqNEIdG05ToGGMHshCmZe2UlB2T3y6Uj7Wul0PANBKwi+3dcdb7TloHccM0ZmB5iC3/tebMk5y0yWL7dMgZSoj/FhAPsGe4cDpcJ2/vFdSz8baJPL6Tc7OMbSg2+tXjlrxAhJw7qUfklsjSIoK4NjXVIx5oxP2tcIUdXon+/OeTM1Zb1mXpVRI1a2PLRq3z3dOFIg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=oc3FDCN3o1ZwwMC7DuyIq2R1mf6gvpVM0ajBjiuIH9M=;
 b=OFBXdM3xrMo64c0q9hWSVSDcjarpykHdLxH2C6Btw8T1DDwifgUM6KQm84kjukI0AqSQgI3uGLKooeTcPc6QxAUxUMrHMtkkvSFsMd++zihL8kgK+04SRlj6RNHCkgFrZd0CXcuJKlDftPiuPhPsqKVmbQ9zn+i965X8HgisTps=
Date: Fri, 5 Feb 2021 17:15:20 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Claudemir Todo Bom
	<claudemir@todobom.com>
Subject: Re: [PATCH v2 2/3] x86/time: adjust time recording
 time_calibration_tsc_rendezvous()
Message-ID: <YB1vGGl59oNZb5m5@Air-de-Roger>
References: <b8d1d4f8-8675-1393-8524-d060ffb1551a@suse.com>
 <26b71f94-d1c7-d906-5b2a-4e7994d6f7c0@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <26b71f94-d1c7-d906-5b2a-4e7994d6f7c0@suse.com>
X-ClientProxiedBy: MR2P264CA0169.FRAP264.PROD.OUTLOOK.COM (2603:10a6:501::8)
 To DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 03baa663-6c3f-45b1-78bb-08d8c9f13f51
X-MS-TrafficTypeDiagnostic: DM6PR03MB4603:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB4603BBA50F34163DB9AD4B378FB29@DM6PR03MB4603.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 9tI7s8mESWJX2ljtYKe57UjEPRllK5KD9Y/MRQ3kuk6GKa3SSbR02gOQFrZFCos188paxkW7Fa5ccf/lNwzDhF+VlLg1YqS+1zefdB8iDoOh49XhJNEQIA/Htcjl8BBdq7oJZtfGK9rS8/XFqpi2sSjacUR77sDAqEBDgHd3CXd5hcauP/ePmDDYLFvWPBdUuo8bMncjX8/3u8CvoihfXtyXQP6F3t8kUrZengDufuGPSpBQ45Ur2Q8NIuj/4gWFX09uGcPyx/v7pazuuisUlFWlW8wCXvXm090DFfqBQn5WqSOjeFDmZAqVjeL5GHK3nxHskbmnioIW1iWTbpBoSvM7J6qhoLjLclHVpMzk0VKzPlWJNsjSN+uJ4bgxUNAagw7+Y3PmNoyGJvtQT07NWfpBCGxxPhaNXWyERZSfiSv2LCVe3Bxuo60vEMpG1gkv0D4t3S10XxtFatnbWplmfgHUtSFcYbNJVEQBF1CMkqMQJgAT+9YmLExHedCToeB0iDVTPiJXmMLTHZTHVDs5yA==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(346002)(376002)(136003)(39860400002)(366004)(396003)(316002)(8676002)(66476007)(26005)(2906002)(5660300002)(86362001)(6486002)(66946007)(8936002)(66556008)(6666004)(54906003)(186003)(83380400001)(4326008)(6496006)(6916009)(478600001)(33716001)(85182001)(956004)(9686003)(16526019);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?bXVZaTVaVVpWU2RKSGZYTG13UkRiN1ZBdm1NVnBkZHZwZ21WTTFzWXNMMWN1?=
 =?utf-8?B?ajNibG9zRy80cHRjL09FZTM0SWkwZElRemYzQk5xdVU3L3J6ZzlvNi9tbVVF?=
 =?utf-8?B?ODlLRE5xUEpuZHN4WXV1c0JUekxkc3BhNzNKUXZWQWs0OXBPQktGZlBoQm1M?=
 =?utf-8?B?T01EK3RPbFBjaGk1Y3NZRGtDT2tkOG1XN0dGamVibnRHVVlBWVhxbU9jT3Fq?=
 =?utf-8?B?Tno2b1ovOU5iTzJvNkhpaW12T3lvQXU5NVpUM1haaU1TYWowUDlwYWJvNStI?=
 =?utf-8?B?blVZYjlhTkIxOGxTZys2MUc3MWpjZXVkdTdkNVp0dW5vdGhZc0U0TFFxalN5?=
 =?utf-8?B?WUNXbCt0NWhFbkJLU1ZCY0hDRVZkWVNPRWxLNDZJOHlwY0ZTUnRpM2s4c1Fh?=
 =?utf-8?B?Zy9WK3ZjOWp0REo1ekxZSUhQU1NsWmZLcHBwN3ExbHRTV3kxQlNhdmtBY2Uz?=
 =?utf-8?B?dy9nREo0eWQ3R1BpY2RpMVMzRHZ2ODJqcEdtaWd3OWRLeUxwZytscHh4RHpI?=
 =?utf-8?B?ejF3aVdQWVFZcnRjQUVCS3lHcGtMVWxrWjdEVVN2dzRlcXZ3cTdHSGY2TUNu?=
 =?utf-8?B?RkhMZTdOOW8vS1M5SUoyNmkyb0RINHl0N01CUWxDUERCZFNyK0Y2cndzOGhr?=
 =?utf-8?B?M0lTYnJqVHE5QjZoVFRTVWkwZlJPZ0x2RlJ4cEtHellKaDM5Q0UzTFhtdUgy?=
 =?utf-8?B?bjAvVEtkN3BxUW95SGF6YUZDUXFBRHlidGtiWmRZZFB0b0x0dEpuLzd3dVMr?=
 =?utf-8?B?ZmhINVhvMlZlOHh5bmx0MjRyZkZvT21RNCsyOGxPQXFKdnp2MnRmMStrVlV1?=
 =?utf-8?B?QWY1dnlZcXVqbHF5YW04QzRldVdDKzFaM3pwQWxDQWpqc0l3U0hLY0hzdlpB?=
 =?utf-8?B?dmU0SlArNmU3QkhSYStDUloxM01rUXMzaHgvSkpCd0dtdlczOUFISzJUeVhq?=
 =?utf-8?B?bWhrRnFZNG9DM2FWclJxeHZlL09nWWJ1TmhXdkYyNXFTazVDUnVubWJMazc2?=
 =?utf-8?B?Q0UwR1pyNVdMNkV3RkloemJEVGhVYzkwdXF6a1JZcENJd2o5NlE5Q1JXRUNJ?=
 =?utf-8?B?QnduVjlrazh0THFGNG5JbVd3Q0FLMnNFWlVURi9ZTmp1S1NpZXp1cFUzVnZJ?=
 =?utf-8?B?elV0bjM5aGR3UlJZbFU2eCt6amtLOHRjM3lMRG4zSjRoSnd0eGxWcWkvc2Rk?=
 =?utf-8?B?cjdCMmtKZnJGYVNNUUtTMVdTcWJIMjdLUTNabVdMZXQ0bHZybkZ5RUNOSUQy?=
 =?utf-8?B?bWNFeFptWmJwbHFxc3licWtwMXhlakN4aUNVYjY3RWFTaVNuQWFGNUEwVzRt?=
 =?utf-8?B?SE1tWkFzMlZqMVRFNGQ2aWZWK0pvSHhFSWdyWUFSZzBlSTdta3pxOFZjSTB6?=
 =?utf-8?B?ZzZUTlp5Mjl4RXUvL2F6YzFSc1ZKcS8zZS9zZUxHZU02T2lIWS9uMWUya0lW?=
 =?utf-8?B?dzRUd2NMdDQ2aWtpdEZ0UXMyZ2ZpUmZKQWoyYXBkMWkyb3l0RzlhWnZvT2lN?=
 =?utf-8?B?TzhqbllmMUxzZGlNRWJyZUJMWXA3czg5dE9TbDN0Risrck85NUFuV2ZtYjVS?=
 =?utf-8?B?WmlwSFdmMG5lQ25DNjBNL1NiZlovaWlFTGdYUkd2NC84clVtQ1QxVXYrbktn?=
 =?utf-8?B?MXp1anVqcFA2WDVPSUFjYVpVNmVpRVc1b0RIa1hzaC9vYzJFVFdrWGsrL2xm?=
 =?utf-8?B?dUNYTzZ6TU4wWURKUkNlSkZvMU1zRU9OK2xPcnFEUzZWSUI4azY3Q2N1KzVK?=
 =?utf-8?Q?0wha60AOaEjMr0vUMeILrYvxjLR4aCDnMJQ6QJ4?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 03baa663-6c3f-45b1-78bb-08d8c9f13f51
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Feb 2021 16:15:26.2278
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: XQNSnG9dX1oo2QjUpdDpQkCN3VYqQpSzqUnoCB8bUvtjkprjIqqcPtwDh2/1NNxNh0te7smR0oOE0y29UuHnPw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4603
X-OriginatorOrg: citrix.com

On Mon, Feb 01, 2021 at 01:43:04PM +0100, Jan Beulich wrote:
> The (stime,tsc) tuple is the basis for extrapolation by get_s_time().
> Therefore the two better get taken as close to one another as possible.
> This means two things: First, reading platform time is too early when
> done on the first iteration. The closest we can get is on the last
> iteration, immediately before telling other CPUs to write their TSCs
> (and then also writing CPU0's). While at the first glance it may seem
> not overly relevant when exactly platform time is read (when assuming
> that only stime is ever relevant anywhere, and hence the association
> with the precise TSC values is of lower interest), both CPU frequency
> changes and the effects of SMT make it unpredictable (between individual
> rendezvous instances) how long the loop iterations will take. This will
> in turn lead to a higher error than necessary in how close to linear
> stime movement we can get.
> 
> Second, re-reading the TSC for local recording is increasing the overall
> error as well, when we already know a more precise value - the one just
> written.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

I've been thinking this all seems doomed when Xen itself runs in a
virtualized environment, and should likely be disabled there. There's
no point in trying to sync the TSC over multiple vCPUs, as the
scheduling delay between them will likely skew any calculations.
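The second half of the quoted description can be shown with a toy model (not Xen code, and the "cycle" costs are invented): after writing a TSC value, re-reading the counter necessarily yields a later value, so recording the re-read instead of the value just written adds avoidable error.

```c
#include <stdint.h>

/* Monotonic counter standing in for the hardware TSC. */
static uint64_t fake_tsc = 1000;

/* Each read costs a few "cycles" - an arbitrary illustrative number. */
static uint64_t rdtsc_stub(void)
{
    return fake_tsc += 7;
}

/* WRMSR-style write: afterwards we know the counter's value exactly. */
static uint64_t write_tsc_stub(uint64_t val)
{
    fake_tsc = val;
    return val;
}

/* Old scheme: write the TSC, then re-read it for the record. */
static uint64_t record_by_rereading(uint64_t val)
{
    write_tsc_stub(val);
    return rdtsc_stub();
}

/* Patched scheme: record exactly the value that was written. */
static uint64_t record_written_value(uint64_t val)
{
    return write_tsc_stub(val);
}
```

In this model `record_by_rereading()` always returns a value strictly larger than what was written, while `record_written_value()` is exact.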

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Fri Feb 05 16:19:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 16:19:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81848.151332 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l83p2-00055G-IZ; Fri, 05 Feb 2021 16:19:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81848.151332; Fri, 05 Feb 2021 16:19:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l83p2-000559-FV; Fri, 05 Feb 2021 16:19:04 +0000
Received: by outflank-mailman (input) for mailman id 81848;
 Fri, 05 Feb 2021 16:19:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cB+w=HH=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1l83p1-00054I-5k
 for xen-devel@lists.xenproject.org; Fri, 05 Feb 2021 16:19:03 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ba5ecc28-768a-4d7e-88b4-26c38e1f36d7;
 Fri, 05 Feb 2021 16:19:01 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ba5ecc28-768a-4d7e-88b4-26c38e1f36d7
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612541941;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=UqYSAjHvVVlXP0efegEpgt7oYTzFjqMGfGwZRX/sxr4=;
  b=Xc56JOcPwfn/eswtR21yOVP1hl8SjCDaj4iWb7PHB+ol6xeI/G5C4U0O
   7nX9rt8c3Ex/7IHr8aaSuX89ixyjCoY7C5KoPJBgZodNyhcdbw/JTdPD5
   52YZTkM1CRJrvRfbgrSu5ZEYvKGkTuG93txsuMz7ROP3AVkR6gZnmlyIP
   4=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: NqSno6kXg1aMleDRpNHKPuxfL8+dHJV2A0YXlmbPUylHwJLN5ke/ZvnBhimVm9Z5GVvWQmpjEt
 TpWlYPNK85aVuxME6oNdlnfjO0vdvVZYlERa/HNj59cWh/jDsR7dawuTT19HCtDqza/UgAyIQI
 4tsf7S1g+nSRT2hJp2e3Sa4EMsMdFP1AVq/7iOTCe317tZmdzEnvFXu5RlKWjxL7R/0jgEfnO5
 wsyxNHj0On7ETbqww6M7EUyaoBrt5/DcE+m4QFICCZwKw8ZuJhfMVqBYf8qwop6iYX9gpOlJDg
 8tE=
X-SBRS: 5.2
X-MesageID: 37030604
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,155,1610427600"; 
   d="scan'208";a="37030604"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=VFfexuSUFGqhlqKKm493rOYXGepCX9alzz2kCwV/AuQ9536jE3zEFWsj7znI3qUVzgLsrOXk71RKK7dO+kbO0msFiWwn80pNMiy5ZE44GpJ3V6HO1dmBvDIgy8478q6ay4emEDriDyCbkuYfyuONKcNilOITM4rq72ZiREi+jMnsLRZhU/x/izvhPbBajaYiCfJwEvODcxWH6+mcUZwpNrXY+XtdzO2F8xTFMcPBBCr2gzm/BuIbwhx7aPDvGZoeoYOZC7xDXfZHZoTasi/qnVNIs9RSxSCkbDGz06TB3hGDCd3pVpqgSdnLudeWDGmorFwcTkyqZHKzJpNOdOEVeA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0bAEeOhfQmJAQk6Ge3ef0Q9NgE3gNZDzR1tRMu6dHu4=;
 b=A6YZ2mWwWFdvJpTZtQEZHexoV0L/PCsMEyaeVd+BTT8Dl1lF37JntSBgTRXMW92W+GFFPfEa0hiMpwpszkln/XTJN7cT/6ZsjDXdd/a07/Coke0Z7xPXVln5VvixI8+ZDz5BfYzaXpmZycE5LSDtXSn7yorQBHK7/hV8N9+QKuq2AYWWlcXRUy+5s4YejWbRFTdP5Eph3+Rd0XvC1HEOkyTFmZGk8Jq3JoL6BAKYJoATXciPOviNumwUezy5pj4zikOOum0C8zweEiMTbrQbq/UZvEfE/81f7F0t0rRNOOc0nfCkZoBEGm3OwfekzThFTZJbKG4qOD0drq/B2xsgcA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0bAEeOhfQmJAQk6Ge3ef0Q9NgE3gNZDzR1tRMu6dHu4=;
 b=Dt+GSbu67ifPdrMlyaTfJhDcZ2AwApcA2gE5hKEEun1OwikALlplv6ggnKbVK1HkAptVCzjCuz9HvCu86lJKerZZC49rj6DCOKfLSldFhbP06/JleZHFXsxKauk5wsium7hmrwRTrJJNujfv+U9x+O2Z3Nun4kqZunrb+PdMrgc=
Date: Fri, 5 Feb 2021 17:18:51 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Tim Deegan
	<tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
Subject: Re: [PATCH 02/17] x86: split __{get,put}_user() into "guest" and
 "unsafe" variants
Message-ID: <YB1v60CuOdhxFwNy@Air-de-Roger>
References: <4f1975a9-bdd9-f556-9db5-eb6c428f258f@suse.com>
 <13d1d621-21db-0e59-6603-2b22b6a9d180@suse.com>
 <YB1nhBeMRVGyO8Fk@Air-de-Roger>
 <d23dc40c-3b37-ade2-f987-4f79b06901b1@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <d23dc40c-3b37-ade2-f987-4f79b06901b1@suse.com>
X-ClientProxiedBy: PR0P264CA0139.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:100:1a::31) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 1f5aa6c1-d177-4827-9bc3-08d8c9f1bcb8
X-MS-TrafficTypeDiagnostic: DM6PR03MB4603:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB4603B88E5768F98E1F1ED8B48FB29@DM6PR03MB4603.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: WuJYWs9kEN5rpU2OKb1/XksmtXmNE1x0QpsYGrnHs4PFLmBFJawzBOi6xu6r5JmZwZNOr/IGmx6oiM71SF2yJ9ILvk2O7OQ8S99pKRHt+xJXF1tmhYXrWnDgb2sdvJmpSFLz4OXyoG9NpxIqg+C9bqWoukwhRMRnbLg/XHocjXWdammQXLO2KELwHewlkHjr7iEn2eEecNdhD8sBgLFrdq/WJniUzbcpKTAWQEAEnjXutv8beJOXLo/mJd4PnADVWVsBmrQjXOIyB8BJi5kXB3wijJENdWm24WKE7Gs2AJdThYZCCWho+/zip7CXpeVmUvj13HLFO83pUm6gq79Uesgi6WfHB+Ip/bJ8NGBJSgWOwjb3fC6zo9nvwppop1spIfci0qgd4EAuo2+YNcaTDeDvhWGmelnLVUK7kJXOILbDvBky9bpxm+nCIDv8tK28coa7ErV5FKbB/JTB2ZrTcWlfXULf5rciG7l55whMmpOhlRRMwaqtFqjOSV2W/f928OrkKL6dFWY3yScirYZziQ==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(346002)(376002)(136003)(39860400002)(366004)(396003)(316002)(8676002)(66476007)(26005)(107886003)(2906002)(5660300002)(86362001)(6486002)(66946007)(8936002)(66556008)(6666004)(54906003)(186003)(83380400001)(4326008)(6496006)(6916009)(53546011)(478600001)(33716001)(85182001)(956004)(9686003)(16526019);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?dExSSlQ3L0FpZDFGZDEybW1YQkxWZ0kwYWx1R05iSkI2djJ5aFF3RkxQQTkw?=
 =?utf-8?B?aFFOT0gvM0t1VTlBVXFCNW9WYzBUU0RxQUViU3lCbGRVSys3WHNjcERJdm82?=
 =?utf-8?B?ZzVMa2djY2xoUnBJOTVvRXVvb012WEp0RDV2bEdKZTk4YlNYdm1iUFNTc282?=
 =?utf-8?B?SW9jNU9FNXpqZ3BkZ3VHa3hiZnBQRS9vZzJTcTFDUVNNbzNlN2Q2dWl2Szc3?=
 =?utf-8?B?QU9adnJJazRxcXlhNEpIKzg4QnpRclZtYUhYK3ljTTJWYXZPMHFiQVhrRnRt?=
 =?utf-8?B?QUY5VGlPU1pGQlhBN1Q0RHQ0VTdWUkhaNWJQTkRBbVhlUmFNV2tIU1k4M3o0?=
 =?utf-8?B?MGVHb2paTlp2YzY3b25HUTVHOTZoQWo0ZkJKRE5reVBobmRGTG9PZ1pXamd3?=
 =?utf-8?B?c2xMbktscEFsRnJha09URThRN2pvVXFhM3N5SVZwNFNQZ0hYbW9UdzB1OTBa?=
 =?utf-8?B?aEdieXNJd1gzM0tmL0xRWWZCeUxRejNGQklMWUl4UUVmZ2txZ1NPTkhaaFMv?=
 =?utf-8?B?MmxmZ3lEZGxtVGRyaURTUXR3UkhsZUdLK2JORmFJazF4YlR0MDhZSUJwQ3lt?=
 =?utf-8?B?T3J3clZBRFZ6V0JDMFp4b3lwNEtieWUxdHhMd0lQZ1JPUlY1NGx4YXZOUnlD?=
 =?utf-8?B?MGFRbWVDUFU1SmZSVGkzUm1icTYwMDI2LytKOWtsYUZwbGR1eDdOdWZyaWNq?=
 =?utf-8?B?cklpT2EyYzE0Y0l3T3l1N3FQeko3UTBpTVlMRFVNTlowNW5oU0pQRFcvTWFr?=
 =?utf-8?B?Y3M1OHdzMlZEd2RkZG81NkErdFpja25uR2NGSEZsYkxmYUJ1cHpPTmhxWFBG?=
 =?utf-8?B?RStZV2RWclBNS0I3RnBxK1MydFZvVGwxRW4vcDVZeFEyYkIvSzY0YTFGQndB?=
 =?utf-8?B?UmljVTk4OWJrb2dzT1hSQTByMXVrRkFwUFlCZi8zVGVLL2RPNUliTUxtQlVn?=
 =?utf-8?B?aUl6ZUVhUEtwODZvRklqOXcralcycGlsOEtZVEZ4UEhLcjdySmo1Wjcxa3hO?=
 =?utf-8?B?OExWUzQ0RE9ad2E1OUo5RHA0Rm9wdGY4NTFqNmVqaXMxN2dINi83bUg4UkRy?=
 =?utf-8?B?WThxV0trV1VzQVlId1h5WEM4bDVwbzVzN0tkZmtJSXMyLy9rRFVyVFMxZ2F6?=
 =?utf-8?B?QXYzZWd5MEx0L1k2SVFwRTIvVlRCeERycS90TzY0Q09LL2lkdm9JUzEvR1U3?=
 =?utf-8?B?L0hJZjdwMUZLdGY4TFlXbTF1QXN3Z2xGTnR1RURBM2F5YVI3aXk4dEl5YjZx?=
 =?utf-8?B?QW1SSlFUanJGelZja2IyRi9tYXZzWmowdzduaVpLWHZJNlpYSHpXM1plMUJV?=
 =?utf-8?B?eGRXWnVvR3Y3dnU1R3JSMmh4K3NVcldSQ2dhWlJidGRUWmlFNVJNQ0RROWpZ?=
 =?utf-8?B?TkFlSHVvYk9ZR0JQcWozTmlmVEJETnVCYlZFanJWT1lQdC9Sam45NGloWWh3?=
 =?utf-8?B?SmVYZE8rSVRuRCtFVFpCSitoSE9jTWpITkVUL3NFQVVwKzFHWFZrYXBYTTNW?=
 =?utf-8?B?djhWS3EwNkhBTTdNc0N5Qm13Y2YxQXFHSlJHRW1lV1h0MkVuM0d4aHo4RHVY?=
 =?utf-8?B?dFJkQ1JMVnZhRzBCeVRhTWtYb0l0VUwrMGtwejVMQVFaL09ybnlEVWs2OWsw?=
 =?utf-8?B?L2kwZzVQNmJHNlU4RkNmYzhPbHVsMHY3S2Vya0N6bytHNTZKcy9WRVJqRXBE?=
 =?utf-8?B?cHVrQi9oeThob2RoZXdNMGVIdjlTc25vQjJQZXM1NGtLcWFaRnExcVdEUlkv?=
 =?utf-8?Q?qY7Bn/KiD10bN5utQ4Uh1o7dvhLdz2tMn98u0Kd?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 1f5aa6c1-d177-4827-9bc3-08d8c9f1bcb8
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Feb 2021 16:18:56.5651
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: pXCnKu+NoZLzgJh2niRI28REGytsiaIGksCkC4Lrl398eZX3lhMqO5352f2Sz5MU2W6TOsKMS9AR4U8QivLcFg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4603
X-OriginatorOrg: citrix.com

On Fri, Feb 05, 2021 at 05:13:22PM +0100, Jan Beulich wrote:
> On 05.02.2021 16:43, Roger Pau Monné wrote:
> > On Thu, Jan 14, 2021 at 04:04:11PM +0100, Jan Beulich wrote:
> >> The "guest" variants are intended to work with (potentially) fully guest
> >> controlled addresses, while the "unsafe" variants are not.
> > 
> > Just to clarify, both work against user addresses, but guest variants
> > need to be more careful because the guest provided address can also be
> > modified?
> > 
> > I'm trying to understand the difference between "fully guest
> > controlled" and "guest controlled".
> 
> Not exactly, no. "unsafe" means access to anything which may
> fault, guest controlled or not. do_invalid_op()'s reading of
> the insn stream is a good example - the faulting insn there
> isn't guest controlled at all, but we still want to be careful
> when trying to read these bytes, as we don't want to fully
> trust %rip there.

Would it make sense to treat everything as 'guest' accesses for the
sake of not having this difference?

I think having two accessors is likely to cause confusion, and could
possibly lead to the wrong one being used in unexpected contexts. Does
it add too big a performance penalty to always use the most
restrictive one?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Fri Feb 05 16:26:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 16:26:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81853.151347 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l83wK-00064R-DG; Fri, 05 Feb 2021 16:26:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81853.151347; Fri, 05 Feb 2021 16:26:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l83wK-00064K-A1; Fri, 05 Feb 2021 16:26:36 +0000
Received: by outflank-mailman (input) for mailman id 81853;
 Fri, 05 Feb 2021 16:26:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IalI=HH=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l83wJ-00064F-F4
 for xen-devel@lists.xenproject.org; Fri, 05 Feb 2021 16:26:35 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 71759d0d-1dc5-4c48-8466-cc4dda91890c;
 Fri, 05 Feb 2021 16:26:34 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BD4E9AD29;
 Fri,  5 Feb 2021 16:26:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 71759d0d-1dc5-4c48-8466-cc4dda91890c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612542393; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=b/aHtEjb3Xmz73loVJMDhFauQ6/tDF9t8/KHwkleKAU=;
	b=m6dkOGSPdyqX/Qlc4PvBStTZkCmi9kHYqpWjwZaah2IqcKm0fiZo8uOSPU9FKRAZ6jhSZA
	asYZCNzp23BYrKfV4ZuLczB48PVniVCdxjq38aULXsc19XbJ1/+rncnzpdW0JBSbuooTfv
	t/jtzCTnkq9f8w2NSsXvennWRsfXeuI=
Subject: Re: [PATCH 02/17] x86: split __{get,put}_user() into "guest" and
 "unsafe" variants
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
References: <4f1975a9-bdd9-f556-9db5-eb6c428f258f@suse.com>
 <13d1d621-21db-0e59-6603-2b22b6a9d180@suse.com>
 <YB1nhBeMRVGyO8Fk@Air-de-Roger>
 <d23dc40c-3b37-ade2-f987-4f79b06901b1@suse.com>
 <YB1v60CuOdhxFwNy@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <199d2681-9704-8804-d3c3-d8ad24fca137@suse.com>
Date: Fri, 5 Feb 2021 17:26:33 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <YB1v60CuOdhxFwNy@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 05.02.2021 17:18, Roger Pau Monné wrote:
> On Fri, Feb 05, 2021 at 05:13:22PM +0100, Jan Beulich wrote:
>> On 05.02.2021 16:43, Roger Pau Monné wrote:
>>> On Thu, Jan 14, 2021 at 04:04:11PM +0100, Jan Beulich wrote:
>>>> The "guest" variants are intended to work with (potentially) fully guest
>>>> controlled addresses, while the "unsafe" variants are not.
>>>
>>> Just to clarify, both work against user addresses, but guest variants
>>> need to be more careful because the guest provided address can also be
>>> modified?
>>>
>>> I'm trying to understand the difference between "fully guest
>>> controlled" and "guest controlled".
>>
>> Not exactly, no. "unsafe" means access to anything which may
>> fault, guest controlled or not. do_invalid_op()'s reading of
>> the insn stream is a good example - the faulting insn there
>> isn't guest controlled at all, but we still want to be careful
>> when trying to read these bytes, as we don't want to fully
>> trust %rip there.
> 
> Would it make sense to treat everything as 'guest' accesses for the
> sake of not having this difference?

That's what we've been doing until now. It is the purpose of
this change to allow the two to behave differently.

> I think having two accessors is likely to cause confusion and could
> possibly lead to the wrong one being used in unexpected contexts. Does
> it add too big a performance penalty to always use the most
> restrictive one?

The problem is that the most restrictive one is going to be too
restrictive - we wouldn't be able to access Xen space anymore, e.g.
from the place pointed at as an example above. This is because for
guest accesses (but not for "unsafe" ones) we're going to divert them
into non-canonical space (and hence make speculation impossible, as
such an access would fault) if they would touch Xen space.
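The diversion described here can be sketched as follows; the constants and the helper name are illustrative only, not Xen's real code. The idea is that a guest-supplied address which would land in Xen space is rewritten to a non-canonical one, so that even a speculatively issued access faults instead of reaching hypervisor memory:

```c
#include <stdint.h>

/* Illustrative constants: start of the hypervisor area, and a
 * non-canonical x86-64 address (bit 63 set, bits 62:48 clear). */
#define HYPERVISOR_VIRT_START 0xffff800000000000ULL
#define NONCANONICAL_ADDR     0x8000000000000000ULL

/* Hypothetical helper: pass guest-space addresses through unchanged,
 * but divert anything that would touch Xen space to a non-canonical
 * address, so any (even speculative) dereference faults. */
static uint64_t guest_access_divert(uint64_t va)
{
    return va >= HYPERVISOR_VIRT_START ? NONCANONICAL_ADDR : va;
}
```

An "unsafe" accessor cannot use such a transformation, since its legitimate targets (like instruction bytes at %rip within Xen) live exactly in the range being diverted.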

Jan


From xen-devel-bounces@lists.xenproject.org Fri Feb 05 18:25:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 18:25:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81866.151370 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l85mv-0001bC-P6; Fri, 05 Feb 2021 18:25:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81866.151370; Fri, 05 Feb 2021 18:25:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l85mv-0001b5-M9; Fri, 05 Feb 2021 18:25:01 +0000
Received: by outflank-mailman (input) for mailman id 81866;
 Fri, 05 Feb 2021 18:25:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l85mu-0001ax-Px; Fri, 05 Feb 2021 18:25:00 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l85mu-0003qf-It; Fri, 05 Feb 2021 18:25:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l85mu-0007EZ-90; Fri, 05 Feb 2021 18:25:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l85mu-0000tF-8W; Fri, 05 Feb 2021 18:25:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=QWSQR74E0DRwBT6f9a/alEQHSvk4SgSL+DfcPp5qS7g=; b=rgvdJEaMb+VGSqErxt0EI5MV19
	BN2NMHSdVCcuA7lMRGqq6ACwjo2YPQAsCl75ivLm9mFb05rqA46/TmVL1nzGlYYUNTI9WEB3QzeGy
	HYTnfjmU7rv8RuYc8oJf83AN2vly+yUGE/EgebsYS/YbCFA3LTKm0XfCfP2RbBGo98Jk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159017-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.12-testing test] 159017: regressions - FAIL
X-Osstest-Failures:
    xen-4.12-testing:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:guest-start:fail:regression
    xen-4.12-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:guest-localmigrate/x10:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f1f322610718c40680ac09e66f6c82e69c78ba3a
X-Osstest-Versions-That:
    xen=cce7cbd986c122a86582ff3775b6b559d877407c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 05 Feb 2021 18:25:00 +0000

flight 159017 xen-4.12-testing real [real]
flight 159048 xen-4.12-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/159017/
http://logs.test-lab.xenproject.org/osstest/logs/159048/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 158556
 test-armhf-armhf-libvirt-raw 13 guest-start              fail REGR. vs. 158556

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 158556
 test-amd64-amd64-xl-qcow2    19 guest-localmigrate/x10       fail  like 158556
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 158556
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 158556
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 158556
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 158556
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 158556
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 158556
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 158556
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 158556
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 158556
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f1f322610718c40680ac09e66f6c82e69c78ba3a
baseline version:
 xen                  cce7cbd986c122a86582ff3775b6b559d877407c

Last test of basis   158556  2021-01-21 15:37:26 Z   15 days
Testing same since   159017  2021-02-04 15:06:13 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit f1f322610718c40680ac09e66f6c82e69c78ba3a
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Feb 4 15:39:45 2021 +0100

    x86/msr: fix handling of MSR_IA32_PERF_{STATUS/CTL} (again)
    
    X86_VENDOR_* aren't bit masks in the older trees.
    
    Reported-by: James Dingwall <james@dingwall.me.uk>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri Feb 05 18:32:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 18:32:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81878.151389 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l85uK-0002hu-Ob; Fri, 05 Feb 2021 18:32:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81878.151389; Fri, 05 Feb 2021 18:32:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l85uK-0002hn-Lb; Fri, 05 Feb 2021 18:32:40 +0000
Received: by outflank-mailman (input) for mailman id 81878;
 Fri, 05 Feb 2021 18:32:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l85uJ-0002hf-NK; Fri, 05 Feb 2021 18:32:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l85uJ-0003zE-Ft; Fri, 05 Feb 2021 18:32:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l85uJ-0007hO-8F; Fri, 05 Feb 2021 18:32:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l85uJ-0005zC-7h; Fri, 05 Feb 2021 18:32:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=qy3I6f7pAdqpnWxs94jnuQHZ4RB65akHSGFeuEwwOeY=; b=5WDlsTk/vDZDy537hKYvSavmWy
	WAQN23DLR/R76QRcNoTtDKZb7tJd7rCQNJt2oLq8HArymNgH5CauT2zf4Kn9SI7pY6l92lue/WSpL
	4WHllvF+jsVs+f+JmpnZGEREn0n8YijpyfavRecevHKFG+77By+wL5ggmOMMTHD5TmDM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159046-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 159046: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f4318db940c39cc656128fcf72df3e79d2e55bc1
X-Osstest-Versions-That:
    xen=d7acc47c8201611fda98ce5bd465626478ca4759
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 05 Feb 2021 18:32:39 +0000

flight 159046 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159046/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f4318db940c39cc656128fcf72df3e79d2e55bc1
baseline version:
 xen                  d7acc47c8201611fda98ce5bd465626478ca4759

Last test of basis   159044  2021-02-05 13:00:29 Z    0 days
Testing same since   159046  2021-02-05 16:01:29 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   d7acc47c82..f4318db940  f4318db940c39cc656128fcf72df3e79d2e55bc1 -> smoke


From xen-devel-bounces@lists.xenproject.org Fri Feb 05 18:45:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 18:45:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81883.151403 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l866w-0003sq-Th; Fri, 05 Feb 2021 18:45:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81883.151403; Fri, 05 Feb 2021 18:45:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l866w-0003sj-Qg; Fri, 05 Feb 2021 18:45:42 +0000
Received: by outflank-mailman (input) for mailman id 81883;
 Fri, 05 Feb 2021 18:45:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l866v-0003se-8z
 for xen-devel@lists.xenproject.org; Fri, 05 Feb 2021 18:45:41 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l866v-0004BF-6U
 for xen-devel@lists.xenproject.org; Fri, 05 Feb 2021 18:45:41 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l866v-000412-3p
 for xen-devel@lists.xenproject.org; Fri, 05 Feb 2021 18:45:41 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l866r-0003Ns-Ti; Fri, 05 Feb 2021 18:45:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=ttXgd32hlMCpaFX3vSRQwg+LXtqXxCIbeUaW0PkKeN8=; b=Qrk1JGYPgVugRU2ptQ86Xp7mBu
	RQ4SMtanG+SvWhc0GbHo7YFO7e3N2NWc0Z7gp8wpXJmdnqus7NfwhM0QoS6Ek7LHObAbHTjCa8Wmh
	K1AaDHOCdzry0PJk4U694j6TGV3bqrJV+snH2BoFf3ZJdkyLFMumbP6aEaUR6eMcEmck=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24605.37457.630412.791202@mariner.uk.xensource.com>
Date: Fri, 5 Feb 2021 18:45:37 +0000
To: Igor Druzhinin <igor.druzhinin@citrix.com>
Cc: <xen-devel@lists.xenproject.org>,
    <wl@xen.org>,
    <anthony.perard@citrix.com>,
    <tamas.k.lengyel@gmail.com>
Subject: Re: [PATCH v2 1/2] tools/libxl: pass libxl__domain_build_state to libxl__arch_domain_create
In-Reply-To: <1612382824-20232-1-git-send-email-igor.druzhinin@citrix.com>
References: <1612382824-20232-1-git-send-email-igor.druzhinin@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Igor Druzhinin writes ("[PATCH v2 1/2] tools/libxl: pass libxl__domain_build_state to libxl__arch_domain_create"):
> No functional change.
> 
> Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>

Reviewed-by: Ian Jackson <iwj@xenproject.org>
Release-Acked-by: Ian Jackson <iwj@xenproject.org>


From xen-devel-bounces@lists.xenproject.org Fri Feb 05 18:46:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 18:46:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81884.151416 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l867t-0003zC-A3; Fri, 05 Feb 2021 18:46:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81884.151416; Fri, 05 Feb 2021 18:46:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l867t-0003z5-5B; Fri, 05 Feb 2021 18:46:41 +0000
Received: by outflank-mailman (input) for mailman id 81884;
 Fri, 05 Feb 2021 18:46:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l867s-0003yz-55
 for xen-devel@lists.xenproject.org; Fri, 05 Feb 2021 18:46:40 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l867s-0004DW-3S
 for xen-devel@lists.xenproject.org; Fri, 05 Feb 2021 18:46:40 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l867s-00044T-2b
 for xen-devel@lists.xenproject.org; Fri, 05 Feb 2021 18:46:40 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l867o-0003ON-UO; Fri, 05 Feb 2021 18:46:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=fIqVCLKUuQmL4EbaGNZnTj94fj/XiYx4QW7u+aQe0v0=; b=qrJrBtfYR7Zw9WcXYOTlUgp6gX
	Yd5mYv2Yv7CLtb+3CTkpUKnTyOVRqN3YBeSqluMoXZjIR/ZfIQBh27ng94qYnH45yG6cwMBAX0Eub
	5k87GZGWMAvDOXiApZ41WjA1yreQe4MRI1BXRzMhv2kHvME6oBSz68nMVtK+ceVj+H7c=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24605.37516.737667.412159@mariner.uk.xensource.com>
Date: Fri, 5 Feb 2021 18:46:36 +0000
To: Igor Druzhinin <igor.druzhinin@citrix.com>
Cc: <xen-devel@lists.xenproject.org>,
    <iwj@xenproject.org>,
    <wl@xen.org>,
    <anthony.perard@citrix.com>,
    <tamas.k.lengyel@gmail.com>
Subject: Re: [PATCH v2 2/2] tools/libxl: only set viridian flags on new domains
In-Reply-To: <1612382824-20232-2-git-send-email-igor.druzhinin@citrix.com>
References: <1612382824-20232-1-git-send-email-igor.druzhinin@citrix.com>
	<1612382824-20232-2-git-send-email-igor.druzhinin@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Igor Druzhinin writes ("[PATCH v2 2/2] tools/libxl: only set viridian flags on new domains"):
> Domains migrating or restoring should already have the viridian HVM param
> key in the migration stream, and setting it twice results in Xen
> returning -EEXIST on the second attempt later (during migration stream
> parsing) in case the values don't match. That causes the migration/restore
> operation to fail at the destination side.
> 
> That issue has now resurfaced with the latest commits (983524671 and
> 7e5cffcd1e) extending the default viridian feature set, which makes the
> values from previous migration streams differ from those set at domain
> construction.
> 
> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>

Thanks for splitting this up.

Release-Acked-by: Ian Jackson <iwj@xenproject.org>

Andy, I think from irc that this meets with your approval but can I
have a formal R-b ?  If so please put my tools maintainer ack on it
too.

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Fri Feb 05 18:59:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 18:59:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81893.151434 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l86KN-0005PC-GU; Fri, 05 Feb 2021 18:59:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81893.151434; Fri, 05 Feb 2021 18:59:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l86KN-0005P5-C2; Fri, 05 Feb 2021 18:59:35 +0000
Received: by outflank-mailman (input) for mailman id 81893;
 Fri, 05 Feb 2021 18:59:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jdnZ=HH=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l86KM-0005P0-Dp
 for xen-devel@lists.xenproject.org; Fri, 05 Feb 2021 18:59:34 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d9cc5e97-6171-4570-8f38-ad1fb7751700;
 Fri, 05 Feb 2021 18:59:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d9cc5e97-6171-4570-8f38-ad1fb7751700
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612551573;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=PRQQ6KLTb47sJDxnltJs4SuWH9tSjTDlWyRnGyBKr/A=;
  b=EoKYoSfJGmucWLKCAOCtbK/Ge5DsUD9SCKou55Z6ch0Y0w+r9yL1IcGm
   8B42j/jsZnqgyCQ3RK8s2UTagJap9FUjSBy17MTyManqfkhaRtBzJ2XOf
   NkCKvv/02EXBWxy1jCLP4vsI7+2QY+40NxKngUXZZpokIc98g5y4vsydn
   w=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: zi4v/2zyY0YU2RkStvzV6+kGgUegRmgfkQljhiK2VrM3FvgKwaaN3+vRCVHvFf8/v7l/u8mqdy
 tgTSLgCQLXc+6nQ3WENKiAq1fN2oWh+ZVYBmtnX8JWR902BiMW7c7WNjwEn6Jg2Y9iEkmcWovb
 zedVgsGo/EYkYIYhEwGy1LQsi1mbahkhgNS0+IqLTrOTdx2olvLkJluEWR8U5a1k9HhzN8Jyti
 Qs+iKuOIIfWO+70ciryOuiOY3UEwB7S3JvfU9QwgTnE5wVBZWvXsQetOteSKN4cypv/1eWdpF/
 gaU=
X-SBRS: 5.2
X-MesageID: 37041521
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,155,1610427600"; 
   d="scan'208";a="37041521"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OJeMFi/M5MnROJCcs6IqMrTaCtSw0L9bTxwFPplONvJE9RRNkyIDNAX/LHe4JpdvUqlzYykT1LSQaQZs6Cp03Bs4kkdB3QyziLUa7qZq4rFc2JWPfPwEU0ARAO5YM/PEy5kMNeRysdiFDxArfXh2B0SRQAGLT7ca2dCmJFJZUgTpt1ufWFlkR6Iv0YT9DoYVtLg5ux/Z7D1lPsaP8daO00i5zAneKqyxpuiXigBiifZyFSJ2Pa+QcFVstrm4kpSllR0xJvxSo8Ud62PM9RB1V9njMEcYINdGF/f6PhMzAjupXW8+Zgl6rYoWWJ4TqDpoNLgl7aW2K5YP/JM06KAh1w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Ho9WuhTbVpwK/VfA1t6b1i2htzxQfFJD+14VX5n/ZJY=;
 b=LQF6pGJCRrL/QbU5BuGQrgqiU/FS/xMuh8KoCsCAAurZbzeYUW14C9M40dVGxes6sp6YUaG8Ye9Ik2JtH2PGNFfhiHyYTlP9E85crH1nJ0XhnXc51I6j65ilOdN92L73jJrM/lzZBkPWvzXbVO3vmDLWo1lqtn7VgtjPh/5Tb3IAWmMN3bKjqx5Kx99vSrOJkYnKUbViar1DsSEKKodqdwsSTuW6GOo6XpHjgvam1T3YPxkbKuRW/Y+PUvLmdclbDBWtHNLVPZHJBAq2kFClZcjN8OCePZJQzkjvCEx6qgxDiJOUaFVU43ZI/w9OM3bZrY+41bg3t+JLgdVtUM5Wuw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Ho9WuhTbVpwK/VfA1t6b1i2htzxQfFJD+14VX5n/ZJY=;
 b=Ni4xZryKVMii4EO+XQdh1cjuVJzxKf19/QbdTJ2ldWpsEnHG4NVBGFl+5pWiUWI+TMgixax8V0amUbu+2xV2bVCg0frsMppwgJsN7TVoqOHvv8I+B/3lAfb4zdN6y24DVYwbC/Y0i6TB23nn7Ey3EjcpFhkU/XC+0Lq6ndL+k1s=
Subject: Re: [PATCH v2 2/2] tools/libxl: only set viridian flags on new
 domains
To: Ian Jackson <iwj@xenproject.org>, Igor Druzhinin
	<igor.druzhinin@citrix.com>
CC: <xen-devel@lists.xenproject.org>, <wl@xen.org>,
	<anthony.perard@citrix.com>, <tamas.k.lengyel@gmail.com>
References: <1612382824-20232-1-git-send-email-igor.druzhinin@citrix.com>
 <1612382824-20232-2-git-send-email-igor.druzhinin@citrix.com>
 <24605.37516.737667.412159@mariner.uk.xensource.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <ecdd0986-bf35-4bc8-71a4-3fc23e42d163@citrix.com>
Date: Fri, 5 Feb 2021 18:59:22 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
In-Reply-To: <24605.37516.737667.412159@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0133.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:193::12) To BYAPR03MB4728.namprd03.prod.outlook.com
 (2603:10b6:a03:13a::24)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 4c225ee5-74ad-448d-bda1-08d8ca082a01
X-MS-TrafficTypeDiagnostic: BYAPR03MB3544:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB354491B26EC4C8F43FE6EBCDBAB29@BYAPR03MB3544.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: f/YHhq+ByFHpr7qdVrYl5OjqS6ck+Ng09sSoowCFYDQXHFw/y+aIdAlcA1icQt6mx4PiIT0wf7bbJRcpQ4jfW/TBFJRGiHcSoz+EafMubcE2sZaMVmwWzHewZk+UQ46slFJyuppePggUXMHOwbRfZXhxmZCPe/U6zlGacnAEda16MIn8R/ZKzSP33uY8ca1Snz19iAy57SfDPejzUny0XEfg3rooUro2oc2Lc7ujnF9ezfgAbfIgVPv/bUFBIrKPFDBiTcoTYCEqULQkyrKiZdrsNG4oY5mlSqQh4vz1rpUksKfott+FOmO6yOzB6FxXQhYt+8WpC8rYyBVhkBCUDCUlsELrs/q0I3UCjH3WjCmqpsxvtMw8QDojTQPI9B9na+4evvMmmeSdXQ4V6IksZpQ83Eda1AK/l0g296tgo6w/0b3v9+meaQ/c/W79IZxiAmgMmb0+7YCW5z93bLOYKjGPFKqycSYpg7GGBYrb+MrNSIsAWT1mY5bO2T+lRcsjUySRoOruYtupshA2IrGdcB9GhkVloicP083IUjLo0bHaV1BevWDn3byjqBlumuDB8yJViF8+OwWM8wtB5rI5zjUMpxFy5YmLI6waOiANhqI=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB4728.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(39860400002)(346002)(136003)(396003)(366004)(8936002)(83380400001)(316002)(2616005)(8676002)(956004)(53546011)(16526019)(26005)(66946007)(6666004)(110136005)(186003)(66476007)(4326008)(16576012)(6636002)(5660300002)(36756003)(31686004)(478600001)(31696002)(86362001)(6486002)(2906002)(66556008)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?WDQ1akVsK3lwTlV4NjZpWkkwaHg5NW9DUWUwR0JrWXBCU21IaWxPNFBpRm9F?=
 =?utf-8?B?TTdYY001eE9hVkM0SWU0VEl2L0ZvNmVSTjB3Z01DRTI1eUNMRlQ4Y2srV0JU?=
 =?utf-8?B?ME9QYUlxQmtHQVNWMTJLTkt3azZTV3BTTTQ1WkVxN01keXlqMm00YWkxUmt2?=
 =?utf-8?B?bTd1Z1QzanRoUXhKOEFLMFFsL1cyRlpUazJTSVNaMkQ4VXNZeXhoa3dqUzZ4?=
 =?utf-8?B?TThCbEx5a3c2aGdqQjE4eHVTSmJVV2VVMWNPZStnNkVXQkh4Q2RoVlRGYXAz?=
 =?utf-8?B?cHh4MjlPZ294UWtQVitnMVlEdFg1NE9PaXN2OGE4MlBPcHdxaXA3a21zbXJI?=
 =?utf-8?B?anhYVHJEeWhEYk1lbDhiVFdOanYxclp0RWQwVHRUVUFMOSs2clJTbWI2WVE0?=
 =?utf-8?B?M09EdjdxbzZHU0xJMi84TWhudUN3bkFibVFrcEpxU2J1cWUyK1hpOWxLTmhE?=
 =?utf-8?B?c3dRajFJbUlkOGZHK0E2YlNHbUpXSW5SbkxKSVZ3NFVNallqUFdCYjV6cHpS?=
 =?utf-8?B?M3gzL2FyWm9PK080c3ZPZVpFMG5teHNSS0tOKzZqL3pwWXdBK25uRnQvQnNU?=
 =?utf-8?B?UUFZSkVIQ1JiazloRVBDUlQ0Mi9UVjErWk1sTFc2NU0wYmMrQ0Qrb08zZ3cw?=
 =?utf-8?B?b0NaMkRNK2NiRmlvdmhaekdUSlg1UTBIWTk3WnZSYVlFQTBXWFpCWW9xZXpk?=
 =?utf-8?B?a2hjbElmYUM3VWl5anVCcHAxYUFrSm9Ka2kvOVRWS3hObm8vcnZZN1JvbklX?=
 =?utf-8?B?cW5BRkRvTFBvdTRMK3dDd2tVU29pQmtqNmx6eW5WWmVKMnR1b0hzVzl6bmVu?=
 =?utf-8?B?OWh0TGMra2lCaGUzSTQwZFRKWXVLSk4vN2tJejFDeEJQSTdEbXZ0Mis5YVRV?=
 =?utf-8?B?TEhtaXQ0enNaa0phN1RKelBwODVtZE5KazFTN2NCMnVSdWxIQUZEVzU1OThY?=
 =?utf-8?B?VzJhbHNFNkpXZ1hESzRvYUVTV3FLbTJ4bzVvVXdjWVl2RytlOEdOT3V6YlQr?=
 =?utf-8?B?YjZHTmRIdWhPRE8xekhpQW5ZVUJxY3czRzZCVmhRUEVHQU52ZlJUZ2JsdmFO?=
 =?utf-8?B?dnRvWEsyb3ZVZUJ4OGR2VTBlTlk4VVI1UUhpeVdLS21COUZQa3d6Z3RvYzNZ?=
 =?utf-8?B?ekZEWE1DajA2a3BKR1R0aldkUGNlb1pSVFJDdUFZUDdtdkJNWk55M1NYeTk4?=
 =?utf-8?B?L0h0RGZpMjZscDFIeFVSS3lNZ0pCb0ZTbWdlblFGNTZVa3NCK29PdERTWUpa?=
 =?utf-8?B?d2hTSHVVMnVYanZxM0FBYmptRk9EK0o2eW5Ba3VZSFpWS3c0VkZKY1RFZ09r?=
 =?utf-8?B?Nm1maTcvaDlFYUJlWklNVFVUMWg1SWZ0aEg1c2xhRzh5dUZ6ME9jNVdQVWl6?=
 =?utf-8?B?K0Y1MkszZWg5cDErQk83Z0tPQ3haY3cvMUhZTWd4Ulg2TlBQelNDWEdzOE95?=
 =?utf-8?B?U083UUhad1VxQ0VzT2dmNC9kbUNMeG13K2NNUnBCb0RpYmFlQ3lHdTMwTGdx?=
 =?utf-8?B?U09sM25vaXAxMGYrTHljVW5jUFlUeWtGbi9JQzF3RjdFcnJESWoyOTkxMkN4?=
 =?utf-8?B?L2I5WTNHOGZZQmJVT3ppbXBrWTNBSURqSjV6dVY2Uno3azVZTWZGTm51bldX?=
 =?utf-8?B?OEowc3AzNVliRnV6M0xCRE4zMElIUnl5VnlBVm5GaXdLd3JseTlBSXNCTEND?=
 =?utf-8?B?Q3dHUlV1UXhRWmlFZ0xaTVg1SWhpRXp0Sm9kUGhVM0ZCbm5ubUgyWGViajJM?=
 =?utf-8?Q?Y8kMKQ9ZnxGXlj8VN8eqd7dAX+Pr+l2PPlkKq0c?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 4c225ee5-74ad-448d-bda1-08d8ca082a01
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB4728.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Feb 2021 18:59:28.8889
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: y5v8JyPzZCKpDH9lRpYHloFPKyWLw6BDHsBUlUW0IzMYu2RfMQAD1vPnH9OoK1aw+Sdw0cpW1RC7Om8GPLc9AHz/8+qSSsS4FgNn4orkfcA=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB3544
X-OriginatorOrg: citrix.com

On 05/02/2021 18:46, Ian Jackson wrote:
> Igor Druzhinin writes ("[PATCH v2 2/2] tools/libxl: only set viridian flags on new domains"):
>> Domains that are migrating or restoring already have the viridian HVM
>> param key in the migration stream, and setting it twice results in Xen
>> returning -EEXIST on the second attempt (during migration stream
>> parsing) if the values don't match. That causes the migration/restore
>> operation to fail on the destination side.
>>
>> That issue has now resurfaced with the latest commits (983524671 and
>> 7e5cffcd1e) extending the default viridian feature set, which makes the
>> values from previous migration streams differ from those set at domain
>> construction.
>>
>> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
> Thanks for splitting this up.
>
> Release-Acked-by: Ian Jackson <iwj@xenproject.org>
>
> Andy, I think from IRC that this meets with your approval, but can I
> have a formal R-b?  If so, please put my tools maintainer ack on it
> too.

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Fri Feb 05 19:12:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 19:12:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81905.151482 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l86Ws-0007Sb-4F; Fri, 05 Feb 2021 19:12:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81905.151482; Fri, 05 Feb 2021 19:12:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l86Ws-0007SU-17; Fri, 05 Feb 2021 19:12:30 +0000
Received: by outflank-mailman (input) for mailman id 81905;
 Fri, 05 Feb 2021 19:12:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EJWk=HH=apertussolutions.com=dpsmith@srs-us1.protection.inumbo.net>)
 id 1l86Wq-0007Ra-5N
 for xen-devel@lists.xen.org; Fri, 05 Feb 2021 19:12:28 +0000
Received: from sender4-of-o51.zoho.com (unknown [136.143.188.51])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9436b2c5-dc7c-46a5-adc2-56ca87cebc9c;
 Fri, 05 Feb 2021 19:12:26 +0000 (UTC)
Received: from [10.10.1.24] (c-73-129-147-140.hsd1.md.comcast.net
 [73.129.147.140]) by mx.zohomail.com
 with SMTPS id 1612552337163336.71227547137937;
 Fri, 5 Feb 2021 11:12:17 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9436b2c5-dc7c-46a5-adc2-56ca87cebc9c
ARC-Seal: i=1; a=rsa-sha256; t=1612552339; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=BbvK+mv5YHPYOxZLycAb7WDtLVq1TQjqOtUuvE4O+u/0JD/pNSAdQ/k4VC1T31twNMzRBtlGk5+qzsjAN0UbVDAFthhsNR8EDWHFWsk/w6N7Un3HTJqnV7S7DjTIpeW6nUfcmA/xAYuJOb9gicIaty6Z4KWXjNZAY7glbMFPFB8=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1612552339; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:MIME-Version:Message-ID:Subject:To; 
	bh=PIKvVobs4kXtJycHP8jJuBx2XuoxReoi4j0/1WaJWSQ=; 
	b=L90WRJSEygdluiEmSRSPqgVOprEzdb0qEjYDtLt5N6q1Bfw7zVbJN60PtBf1zhSdbBcevF60AxDJmcDIqMREthZfphO+jRQn5MctGb62FPvSmUIxIR+c2FkSue6kOVCoM7Dr7T8+tcxsMrOfesoyOUsfInOB/hWaHVVEtGoNcLQ=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com> header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1612552339;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=From:To:Cc:Message-ID:Subject:Date:MIME-Version:Content-Type:Content-Transfer-Encoding;
	bh=PIKvVobs4kXtJycHP8jJuBx2XuoxReoi4j0/1WaJWSQ=;
	b=Vpc7YdTQ0vY40/s8u79RhbX3fCWB2GKmtKwletRAj4bhhdUKanVXQ7qXNQ9l3Gkk
	1S3/dHLjHAxXUz2eWNhOmcLa46RWoiXczsidIF5zzH/zSJ44DrTVKZHq0pJgLrgxPYO
	AGxRMmxVpH51hmdFmkglo7GGwJ7HnvYi93cCJ+Vs=
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
To: Xen-devel <xen-devel@lists.xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
 <JBeulich@suse.com>, bertrand.marquis@arm.com, roger.pau@citrix.com,
 julien@xen.org, Stefano Stabellini <sstabellini@kernel.org>,
 Christopher Clark <christopher.w.clark@gmail.com>,
 Rich Persaud <persaur@gmail.com>, adam.schwalm@starlab.io
Message-ID: <d0b1a7d1-2260-567b-fd8d-04e32a3504f2@apertussolutions.com>
Subject: DomB Working Group
Date: Fri, 5 Feb 2021 14:12:15 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-ZohoMailClient: External

Greetings,

Per the community call on Feb. 4, I would like to get the working group
started that will be reviewing the major design decisions for the DomB
implementation. A summary of the discussion around the two primary
decisions we are seeking to resolve is as follows:


Topic: DomB: Adoption of Device Tree as the format for the Launch
Control Module
* Consensus approval from x86 and Arm maintainers and members of the Xen
community on the call to proceed with Device Tree as the format for the
DomB LCM (described in the previous mailing list posts).

- A working group will follow up on migrating the device-tree handling
code within the hypervisor, previously imported from outside the
project, from the Arm hypervisor code into common code.

Topic: DomB: Surfacing configuration data to guests: ACPI tables, Device
Tree
* The recommendation was that both will be needed, and that it is OK to
proceed with implementing just one, provided the design plans for the
second to be added later. The first is likely to be ACPI; to be
determined as development progress is made.


To continue the discussion from there, I would like to propose a call on
Thursday, February 11th at 1700 UTC (0900 PST / 1200 EST / 1800 CET). I have
provided call details below for those who are able to attend. The agenda
is available on CryptPad. If you are not able to attend, please reach
out directly. Thanks and hope to see everyone on the call!


Agenda
======
https://cryptpad.fr/pad/#/2/pad/edit/iVEku8zImQg320a3D4IBAKQh/


Meeting Invite
==============
Daniel Smith's Meeting

Please join my meeting from your computer, tablet or smartphone.
https://www.gotomeet.me/apertussolutions

You can also dial in using your phone.
United States (Toll Free): 1 877 568 4106

Access Code: 691-818-141

More phone numbers:
Austria (Toll Free): 0 800 202148
Belarus (Toll Free): 8 820 0011 0400
Belgium (Toll Free): 0 800 78884
Bulgaria (Toll Free): 00800 120 4417
Canada (Toll Free): 1 888 455 1389
China (Toll Free): 4000 762962
Czech Republic (Toll Free): 800 500448
Denmark (Toll Free): 8025 3126
Finland (Toll Free): 0 800 917656
France (Toll Free): 0 805 541 047
Germany (Toll Free): 0 800 184 4222
Greece (Toll Free): 00 800 4414 3838
Hungary (Toll Free): (06) 80 986 255
Iceland (Toll Free): 800 7204
India (Toll Free): 18002669254
Ireland (Toll Free): 1 800 901 610
Italy (Toll Free): 800 793887
Netherlands (Toll Free): 0 800 020 0182
Norway (Toll Free): 800 69 046
Poland (Toll Free): 00 800 1124759
Portugal (Toll Free): 800 819 575
Romania (Toll Free): 0 800 400 819
Slovakia (Toll Free): 0 800 105 748
Spain (Toll Free): 800 900 582
Sweden (Toll Free): 0 200 330 905
Switzerland (Toll Free): 0 800 002 348
Ukraine (Toll Free): 0 800 60 9135
United Kingdom (Toll Free): 0 800 169 0432


New to GoToMeeting? Get the app now and be ready when your first meeting
starts: https://global.gotomeeting.com/install/691818141


V/r,

Daniel P. Smith

Apertus Solutions, LLC





From xen-devel-bounces@lists.xenproject.org Fri Feb 05 19:40:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 19:40:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81918.151501 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l86xj-0002CI-IW; Fri, 05 Feb 2021 19:40:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81918.151501; Fri, 05 Feb 2021 19:40:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l86xj-0002CB-FM; Fri, 05 Feb 2021 19:40:15 +0000
Received: by outflank-mailman (input) for mailman id 81918;
 Fri, 05 Feb 2021 19:40:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kT56=HH=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1l86xi-0002C6-2m
 for xen-devel@lists.xen.org; Fri, 05 Feb 2021 19:40:14 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2c40160d-5cd6-4a46-9a83-cfe66fb38413;
 Fri, 05 Feb 2021 19:40:13 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id E971464FB7;
 Fri,  5 Feb 2021 19:40:11 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2c40160d-5cd6-4a46-9a83-cfe66fb38413
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1612554012;
	bh=S00fFEx8CJmuybUec97Mzjk57cdf8FqtKK0aWEm5DF4=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=qpXKjpqMwO+2mr6OZtUMq6uNB8Xfj/O7AQZh+OTEL0oA/JyRYlknerLjTiaSmek73
	 0zHogtxETtK7ZyGS2YNrvnTxOmzGOejLqlYViVrhGwuBAesvKvtETXLamYm13BZTw4
	 QWcuo+0NGIcg9auVtwYIhsuC3/2aNrZ/G/mYCeY5beJVxaSSald4pN9GB4QOgzNAcZ
	 XvW03AtoAtdI5t6ceL15NXUbUCREwrIFBM1j8/MNOlHKDdi4irHsGyI6AOq/UcczE7
	 KOH+CBZvB1uoEszsl/J/mmMEYEF6n10Ag5oPZtMwYIk0xdLt92iIRIMxYtETECcPCQ
	 MKEgp+Q0FN6UQ==
Date: Fri, 5 Feb 2021 11:40:10 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: "Daniel P. Smith" <dpsmith@apertussolutions.com>
cc: Xen-devel <xen-devel@lists.xen.org>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich <JBeulich@suse.com>, 
    bertrand.marquis@arm.com, roger.pau@citrix.com, julien@xen.org, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Christopher Clark <christopher.w.clark@gmail.com>, 
    Rich Persaud <persaur@gmail.com>, adam.schwalm@starlab.io
Subject: Re: DomB Working Group
In-Reply-To: <d0b1a7d1-2260-567b-fd8d-04e32a3504f2@apertussolutions.com>
Message-ID: <alpine.DEB.2.21.2102051139460.29047@sstabellini-ThinkPad-T480s>
References: <d0b1a7d1-2260-567b-fd8d-04e32a3504f2@apertussolutions.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

Hi Daniel,

The time works for me. I am looking forward to it.

Cheers,

Stefano


On Fri, 5 Feb 2021, Daniel P. Smith wrote:

> Greetings,
> 
> Per the community call on Feb. 4, I would like to get the working group
> started that will be reviewing the major design decisions for the DomB
> implementation. A summary of the discussion around the two primary
> decisions we are seeking to resolve is as follows:
> 
> 
> Topic: DomB: Adoption of Device Tree as the format for the Launch
> Control Module
> * Consensus approval from x86 and Arm maintainers and members of the Xen
> community on the call to proceed with Device Tree as the format for the
> DomB LCM (described in the previous mailing list posts).
> 
> - A working group will follow up on migrating the device-tree handling
> code within the hypervisor, previously imported from outside the
> project, from the Arm hypervisor code into common code.
> 
> Topic: DomB: Surfacing configuration data to guests: ACPI tables, Device
> Tree
> * The recommendation was that both will be needed, and that it is OK to
> proceed with implementing just one, provided the design plans for the
> second to be added later. The first is likely to be ACPI; to be
> determined as development progress is made.
> 
> 
> To continue the discussion from there, I would like to propose a call on
> Thursday, February 11th at 1700 UTC (0900 PST / 1200 EST / 1800 CET). I have
> provided call details below for those who are able to attend. The agenda
> is available on CryptPad. If you are not able to attend, please reach
> out directly. Thanks and hope to see everyone on the call!
> 
> 
> Agenda
> ======
> https://cryptpad.fr/pad/#/2/pad/edit/iVEku8zImQg320a3D4IBAKQh/
> 
> 
> Meeting Invite
> ==============
> Daniel Smith's Meeting
> 
> Please join my meeting from your computer, tablet or smartphone.
> https://www.gotomeet.me/apertussolutions
> 
> You can also dial in using your phone.
> United States (Toll Free): 1 877 568 4106
> 
> Access Code: 691-818-141
> 
> More phone numbers:
> Austria (Toll Free): 0 800 202148
> Belarus (Toll Free): 8 820 0011 0400
> Belgium (Toll Free): 0 800 78884
> Bulgaria (Toll Free): 00800 120 4417
> Canada (Toll Free): 1 888 455 1389
> China (Toll Free): 4000 762962
> Czech Republic (Toll Free): 800 500448
> Denmark (Toll Free): 8025 3126
> Finland (Toll Free): 0 800 917656
> France (Toll Free): 0 805 541 047
> Germany (Toll Free): 0 800 184 4222
> Greece (Toll Free): 00 800 4414 3838
> Hungary (Toll Free): (06) 80 986 255
> Iceland (Toll Free): 800 7204
> India (Toll Free): 18002669254
> Ireland (Toll Free): 1 800 901 610
> Italy (Toll Free): 800 793887
> Netherlands (Toll Free): 0 800 020 0182
> Norway (Toll Free): 800 69 046
> Poland (Toll Free): 00 800 1124759
> Portugal (Toll Free): 800 819 575
> Romania (Toll Free): 0 800 400 819
> Slovakia (Toll Free): 0 800 105 748
> Spain (Toll Free): 800 900 582
> Sweden (Toll Free): 0 200 330 905
> Switzerland (Toll Free): 0 800 002 348
> Ukraine (Toll Free): 0 800 60 9135
> United Kingdom (Toll Free): 0 800 169 0432
> 
> 
> New to GoToMeeting? Get the app now and be ready when your first meeting
> starts: https://global.gotomeeting.com/install/691818141
> 
> 
> V/r,
> 
> Daniel P. Smith
> 
> Apertus Solutions, LLC
> 
> 
> 


From xen-devel-bounces@lists.xenproject.org Fri Feb 05 20:39:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 20:39:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81933.151519 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l87t3-0007g5-6g; Fri, 05 Feb 2021 20:39:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81933.151519; Fri, 05 Feb 2021 20:39:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l87t3-0007fy-2o; Fri, 05 Feb 2021 20:39:29 +0000
Received: by outflank-mailman (input) for mailman id 81933;
 Fri, 05 Feb 2021 20:39:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ARPF=HH=amazon.de=prvs=663df6f7c=nmanthey@srs-us1.protection.inumbo.net>)
 id 1l87t2-0007ft-2r
 for xen-devel@lists.xenproject.org; Fri, 05 Feb 2021 20:39:28 +0000
Received: from smtp-fw-9103.amazon.com (unknown [207.171.188.200])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 99b05317-1d8e-43d3-a5e4-ba2e0c5463b2;
 Fri, 05 Feb 2021 20:39:26 +0000 (UTC)
Received: from sea3-co-svc-lb6-vlan3.sea.amazon.com (HELO
 email-inbound-relay-1d-74cf8b49.us-east-1.amazon.com) ([10.47.22.38])
 by smtp-border-fw-out-9103.sea19.amazon.com with ESMTP;
 05 Feb 2021 20:39:18 +0000
Received: from EX13D02EUC004.ant.amazon.com
 (iad12-ws-svc-p26-lb9-vlan2.iad.amazon.com [10.40.163.34])
 by email-inbound-relay-1d-74cf8b49.us-east-1.amazon.com (Postfix) with ESMTPS
 id F12C6C0843; Fri,  5 Feb 2021 20:39:16 +0000 (UTC)
Received: from EX13MTAUWA001.ant.amazon.com (10.43.160.58) by
 EX13D02EUC004.ant.amazon.com (10.43.164.117) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Fri, 5 Feb 2021 20:39:15 +0000
Received: from u6fc700a6f3c650.ant.amazon.com (10.1.212.32) by
 mail-relay.amazon.com (10.43.160.118) with Microsoft SMTP Server id
 15.0.1497.2 via Frontend Transport; Fri, 5 Feb 2021 20:39:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 99b05317-1d8e-43d3-a5e4-ba2e0c5463b2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.de; i=@amazon.de; q=dns/txt; s=amazon201209;
  t=1612557566; x=1644093566;
  h=from:to:cc:subject:date:message-id:mime-version;
  bh=VRWb4eKyRdPK6Oy3AAebSqsCZ19CyjABZhutg30XesA=;
  b=PPOZ9XW8gD51Do9wlU/nlQWF9m0pmlrv5v/+3LJLq3jBSoGbyLqIwZw2
   zboyB19yYSZNDkpIRcjMSlpfxUjzPy4D7w81A3MNNpvAkMVxXqVuNL1Xp
   MMMJRMarBJU1rnEDtMpP4AheHFA1d+ObXxtpvlw7vzJMFuaipIqpxL2Dl
   c=;
X-IronPort-AV: E=Sophos;i="5.81,156,1610409600"; 
   d="scan'208";a="915977907"
From: Norbert Manthey <nmanthey@amazon.de>
To: <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Norbert Manthey
	<nmanthey@amazon.de>, Ian Jackson <iwj@xenproject.org>
Subject: [PATCH HVM v2 1/1] hvm: refactor set param
Date: Fri, 5 Feb 2021 21:39:05 +0100
Message-ID: <20210205203905.8824-1-nmanthey@amazon.de>
X-Mailer: git-send-email 2.17.1
MIME-Version: 1.0
Content-Type: text/plain
Precedence: Bulk

To prevent leaking HVM params via L1TF and similar issues on a
hyperthread pair, let's load values of domains as late as possible.

Furthermore, speculation barriers are re-arranged to make sure we do not
allow guests running on co-located vCPUs to leak HVM parameter values of
other domains.

This is part of the speculative hardening effort.

Signed-off-by: Norbert Manthey <nmanthey@amazon.de>
Reported-by: Hongyan Xia <hongyxia@amazon.co.uk>

---
v2: Add another speculative blocker, which protects the return code check
    of the function hvm_allow_set_param.


 xen/arch/x86/hvm/hvm.c | 19 +++++++++++++++----
 1 file changed, 15 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4060,7 +4060,7 @@ static int hvm_allow_set_param(struct domain *d,
                                uint32_t index,
                                uint64_t new_value)
 {
-    uint64_t value = d->arch.hvm.params[index];
+    uint64_t value;
     int rc;
 
     rc = xsm_hvm_param(XSM_TARGET, d, HVMOP_set_param);
@@ -4108,6 +4108,13 @@ static int hvm_allow_set_param(struct domain *d,
     if ( rc )
         return rc;
 
+    if ( index >= HVM_NR_PARAMS )
+        return -EINVAL;
+
+    /* Make sure we evaluate permissions before loading data of domains. */
+    block_speculation();
+
+    value = d->arch.hvm.params[index];
     switch ( index )
     {
     /* The following parameters should only be changed once. */
@@ -4141,6 +4148,9 @@ static int hvm_set_param(struct domain *d, uint32_t index, uint64_t value)
     if ( rc )
         return rc;
 
+    /* Make sure we evaluate permissions before loading data of domains. */
+    block_speculation();
+
     switch ( index )
     {
     case HVM_PARAM_CALLBACK_IRQ:
@@ -4388,6 +4398,10 @@ int hvm_get_param(struct domain *d, uint32_t index, uint64_t *value)
     if ( rc )
         return rc;
 
+    /* Make sure the index bound check in hvm_get_param is respected, as well as
+       the above domain permissions. */
+    block_speculation();
+
     switch ( index )
     {
     case HVM_PARAM_ACPI_S_STATE:
@@ -4428,9 +4442,6 @@ static int hvmop_get_param(
     if ( a.index >= HVM_NR_PARAMS )
         return -EINVAL;
 
-    /* Make sure the above bound check is not bypassed during speculation. */
-    block_speculation();
-
     d = rcu_lock_domain_by_any_id(a.domid);
     if ( d == NULL )
         return -ESRCH;
-- 
2.17.1
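
The re-arrangement the patch describes, check first, fence, only then
touch domain data, can be illustrated with a small stand-alone sketch.
Everything here is illustrative: block_speculation() is just a GCC/Clang
compiler barrier standing in for Xen's serialising (lfence-based) one,
and the array size and error value are made up.

```c
#include <assert.h>
#include <stdint.h>

#define NR_PARAMS  32
#define EINVAL_TOY 22

static uint64_t params[NR_PARAMS];

/* Stand-in for Xen's block_speculation(). The real implementation emits
 * a serialising instruction so that younger loads cannot execute
 * speculatively past this point; here a compiler barrier is enough to
 * illustrate the ordering. */
static inline void block_speculation(void)
{
    __asm__ __volatile__ ("" ::: "memory");
}

/* The hardened pattern from the patch: permission and bound checks
 * happen before the barrier, and the load of (potentially secret)
 * domain data happens strictly after it. */
static int get_param(uint32_t index, uint64_t *value)
{
    if (index >= NR_PARAMS)
        return -EINVAL_TOY;

    /* Make sure the bound check above cannot be bypassed speculatively. */
    block_speculation();

    *value = params[index];
    return 0;
}
```

The key point is purely about ordering: moving the load of
d->arch.hvm.params[index] below the barrier means that even under
speculative execution the access is never performed with an unchecked
index or before the permission check has been resolved.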




Amazon Development Center Germany GmbH
Krausenstr. 38
10117 Berlin
Geschaeftsfuehrung: Christian Schlaeger, Jonathan Weiss
Eingetragen am Amtsgericht Charlottenburg unter HRB 149173 B
Sitz: Berlin
Ust-ID: DE 289 237 879





From xen-devel-bounces@lists.xenproject.org Fri Feb 05 22:16:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 22:16:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81942.151538 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l89OB-0000di-Il; Fri, 05 Feb 2021 22:15:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81942.151538; Fri, 05 Feb 2021 22:15:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l89OB-0000db-FQ; Fri, 05 Feb 2021 22:15:43 +0000
Received: by outflank-mailman (input) for mailman id 81942;
 Fri, 05 Feb 2021 22:15:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l89OA-0000dT-T5; Fri, 05 Feb 2021 22:15:42 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l89OA-0007ic-LR; Fri, 05 Feb 2021 22:15:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l89OA-0001CE-DO; Fri, 05 Feb 2021 22:15:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l89OA-0000NB-Ct; Fri, 05 Feb 2021 22:15:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=udrT9gShPaB8ew+JtIZYMqKcc/S/I1XirBo2lDwWmck=; b=H2/pScUyAaT+mZTvUPrT2OgaLS
	OGSD5rdjX6VKTeCNznjjI3yu0xYsI4ZVYHGCjHyPbLJAl8mIpKP3Dfnf1zcwvZSqlb5lBaiRAftLv
	uGlImMit0TuUL77gbV3f+oKIbi98z6bH03oiCiWnUH5aw04sWS+Xaw/PSPm+YQM/D8W0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159031-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 159031: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=879edc697bfde5371628a83fb56446bb88a008c4
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 05 Feb 2021 22:15:42 +0000

flight 159031 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159031/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              879edc697bfde5371628a83fb56446bb88a008c4
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  210 days
Failing since        151818  2020-07-11 04:18:52 Z  209 days  204 attempts
Testing same since   159031  2021-02-05 04:19:48 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 39967 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Feb 05 22:40:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 22:40:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81951.151552 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l89m3-0003hX-Fy; Fri, 05 Feb 2021 22:40:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81951.151552; Fri, 05 Feb 2021 22:40:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l89m3-0003hQ-Cx; Fri, 05 Feb 2021 22:40:23 +0000
Received: by outflank-mailman (input) for mailman id 81951;
 Fri, 05 Feb 2021 22:40:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l89m2-0003hI-GY; Fri, 05 Feb 2021 22:40:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l89m2-00085v-AG; Fri, 05 Feb 2021 22:40:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l89m2-00022O-1j; Fri, 05 Feb 2021 22:40:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l89m1-00034c-V1; Fri, 05 Feb 2021 22:40:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=qqt1VNoIjOACK6N8CArAbIMn8sBNjbkS1WX9QPzK3A4=; b=TTg7to21x394u/ZgOL433/trwQ
	8kkIfRYXG5wWDcsqoJujLxMOgQXwmneP/8cEBeU/UH+m4StuBW1BmpocZT8CiQWL3KzBAX9ex0i4S
	GuWvy3FpauGi/XPzIYzPR1dCz+XO9mGe7U8LONd49Y7uYnIMr7k4/65r+UlzkoOh+yPY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159054-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 159054: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=ca82d3fecc93745ee17850a609ac7772bd7c8bf7
X-Osstest-Versions-That:
    xen=f4318db940c39cc656128fcf72df3e79d2e55bc1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 05 Feb 2021 22:40:21 +0000

flight 159054 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159054/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  ca82d3fecc93745ee17850a609ac7772bd7c8bf7
baseline version:
 xen                  f4318db940c39cc656128fcf72df3e79d2e55bc1

Last test of basis   159046  2021-02-05 16:01:29 Z    0 days
Testing same since   159054  2021-02-05 20:01:29 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Michał Leszczyński <michal.leszczynski@cert.pl>
  Tamas K Lengyel <tamas.lengyel@intel.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   f4318db940..ca82d3fecc  ca82d3fecc93745ee17850a609ac7772bd7c8bf7 -> smoke


From xen-devel-bounces@lists.xenproject.org Fri Feb 05 22:46:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 Feb 2021 22:46:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81955.151567 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l89sE-0003uP-6T; Fri, 05 Feb 2021 22:46:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81955.151567; Fri, 05 Feb 2021 22:46:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l89sE-0003uI-3S; Fri, 05 Feb 2021 22:46:46 +0000
Received: by outflank-mailman (input) for mailman id 81955;
 Fri, 05 Feb 2021 22:46:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l89sC-0003uA-Jc; Fri, 05 Feb 2021 22:46:44 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l89sC-0008Dl-Cb; Fri, 05 Feb 2021 22:46:44 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l89sC-0002Nu-1J; Fri, 05 Feb 2021 22:46:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l89sC-0005kH-0o; Fri, 05 Feb 2021 22:46:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=CaPYlNbGVc5fhc/441tI/yZI5Er1sqAsoE3ifvqyyzI=; b=1B+GUqo/D9yHjV5DGYS1wUhmac
	xAJNkKWxZayoj+oa9wCS323SNdToRIBYQBL1/oJo/H41KjKdN6J/kTWNux9eQOsXUpJ+drROGbhNT
	qA/romVu3caa1iVtnVZ0K3RGZOuajEWbzV0IxHLjZmFYVPDk8KvyJU7apVytAo1evMw4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159023-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 159023: regressions - FAIL
X-Osstest-Failures:
    linux-5.4:test-arm64-arm64-xl:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-seattle:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-xsm:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-credit1:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-credit2:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-thunderx:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-libvirt:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-cubietruck:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-arndale:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-libvirt-raw:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=e89428970c23011a2679121c56e9f54f654c6602
X-Osstest-Versions-That:
    linux=a829146c3fdcf6d0b76d9c54556a223820f1f73b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 05 Feb 2021 22:46:44 +0000

flight 159023 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159023/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl          14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-seattle  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-xsm      14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-credit1  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-credit2  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-thunderx 14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-multivcpu 14 guest-start             fail REGR. vs. 158387
 test-armhf-armhf-xl-credit2  14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-credit1  14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-cubietruck 14 guest-start            fail REGR. vs. 158387
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl          14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-arndale  14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-libvirt-raw 13 guest-start              fail REGR. vs. 158387

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     14 guest-start              fail REGR. vs. 158387

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 158387
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 158387
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 158387
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 158387
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 158387
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 158387
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 158387
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 158387
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 158387
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                e89428970c23011a2679121c56e9f54f654c6602
baseline version:
 linux                a829146c3fdcf6d0b76d9c54556a223820f1f73b

Last test of basis   158387  2021-01-12 19:40:06 Z   24 days
Failing since        158473  2021-01-17 13:42:20 Z   19 days   31 attempts
Testing same since   158997  2021-02-04 00:10:36 Z    1 days    2 attempts

------------------------------------------------------------
353 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 10442 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Feb 06 00:38:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 06 Feb 2021 00:38:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.81999.151592 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8BcS-0007O2-To; Sat, 06 Feb 2021 00:38:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 81999.151592; Sat, 06 Feb 2021 00:38:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8BcS-0007Nv-P6; Sat, 06 Feb 2021 00:38:36 +0000
Received: by outflank-mailman (input) for mailman id 81999;
 Sat, 06 Feb 2021 00:38:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=g8Bj=HI=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1l8BcQ-0007Nq-Qn
 for xen-devel@lists.xenproject.org; Sat, 06 Feb 2021 00:38:34 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5f739bcf-dcd0-4450-ba04-385f4720d55c;
 Sat, 06 Feb 2021 00:38:34 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 1D82B64FEA;
 Sat,  6 Feb 2021 00:38:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5f739bcf-dcd0-4450-ba04-385f4720d55c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1612571913;
	bh=/K2eQTpPEn3hhRchfomiYUpVmHyLfVWO94ver0PRSKA=;
	h=Date:From:To:cc:Subject:From;
	b=gPxsO5UT8dIV5AMJKdeJsxtW8wortS8LIw1g/ewQcozPyQfTTqrK/phXWFVK0Jl7M
	 onyn710PQ0CArG5U8VF67vs5LAmZ0PxQxsouBuPOZfxdI2RbasiH/bb/Y7PvjGr7ZV
	 dyFL+XPCahGlmy7wPQ9E+D6pmAVX5rPf53UKmJE2GQvJSyyCPfDtYdGfocxfg8CCys
	 /SP1UfY2dVdh/ImB2BnmszyTilUPfOaWkz1oOcoI2xGR3dIVnFgaBjUx812CPfzBA7
	 c7E26HPQ0WhS5i/AqeIUeE9WBy0Ss/ZnN6t7F6IuhzsqegCv2Q0Tne7MfSdSTY995w
	 SGCYtJPQT9a5Q==
Date: Fri, 5 Feb 2021 16:38:32 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: julien@xen.org
cc: lucmiccio@gmail.com, sstabellini@kernel.org, 
    xen-devel@lists.xenproject.org, Bertrand.Marquis@arm.com, 
    Volodymyr_Babchuk@epam.com
Subject: [PATCH] xen/arm: fix gnttab_need_iommu_mapping
Message-ID: <alpine.DEB.2.21.2102051604320.29047@sstabellini-ThinkPad-T480s>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

Commit 91d4eca7add broke gnttab_need_iommu_mapping on ARM.
The offending chunk is:

 #define gnttab_need_iommu_mapping(d)                    \
-    (is_domain_direct_mapped(d) && need_iommu(d))
+    (is_domain_direct_mapped(d) && need_iommu_pt_sync(d))

On ARM, gnttab_need_iommu_mapping needs to be true for dom0 when it is
directly mapped, as the old check ensured; the new check, however, is
always false.

In fact, need_iommu_pt_sync is defined as dom_iommu(d)->need_sync and
need_sync is set as:

    if ( !is_hardware_domain(d) || iommu_hwdom_strict )
        hd->need_sync = !iommu_use_hap_pt(d);

iommu_hwdom_strict is actually supposed to be ignored on ARM, see the
definition in docs/misc/xen-command-line.pandoc:

    This option is hardwired to true for x86 PVH dom0's (as RAM belonging to
    other domains in the system don't live in a compatible address space), and
    is ignored for ARM.

But aside from that, the issue is that iommu_use_hap_pt(d) is true,
hence hd->need_sync is false, and gnttab_need_iommu_mapping(d) is false
too.

As a consequence, when using a PV network device from a domU on a system
where the IOMMU is enabled for dom0, I get:

(XEN) smmu: /smmu@fd800000: Unhandled context fault: fsr=0x402, iova=0x8424cb148, fsynr=0xb0001, cb=0
[   68.290307] macb ff0e0000.ethernet eth0: DMA bus error: HRESP not OK

The fix is to go back to the old implementation of
gnttab_need_iommu_mapping.  However, we don't even need to specify &&
need_iommu(d), since we don't actually need to check whether the IOMMU
is enabled (iommu_map does that for us at the beginning of the function).

This fix is preferable to changing the implementation of need_sync or
iommu_use_hap_pt because "need_sync" is not really the reason why we
want gnttab_need_iommu_mapping to return true.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
Backport: 4.12+ 

---

It is incredible that this was missed for so long, but it takes a full
PV drivers test using DMA from a non-coherent device to trigger it, e.g.
a wget from a domU to an HTTP server on a different machine; ping or
connections to dom0 won't trigger the bug.

It is interesting that, given that the IOMMU is on for dom0, Linux
could have just avoided using swiotlb-xen and everything would have just
worked. It is worth considering introducing a feature flag (e.g.
XENFEAT_ARM_dom0_iommu) to let dom0 know that the IOMMU is on and that
swiotlb-xen is not necessary. Then Linux could avoid initializing
swiotlb-xen and just rely on the IOMMU for translation.

diff --git a/xen/include/asm-arm/grant_table.h b/xen/include/asm-arm/grant_table.h
index 6f585b1538..2a154d1851 100644
--- a/xen/include/asm-arm/grant_table.h
+++ b/xen/include/asm-arm/grant_table.h
@@ -88,8 +88,7 @@ int replace_grant_host_mapping(unsigned long gpaddr, mfn_t mfn,
 #define gnttab_status_gfn(d, t, i)                                       \
     (((i) >= nr_status_frames(t)) ? INVALID_GFN : (t)->arch.status_gfn[i])
 
-#define gnttab_need_iommu_mapping(d)                    \
-    (is_domain_direct_mapped(d) && need_iommu_pt_sync(d))
+#define gnttab_need_iommu_mapping(d)  (is_domain_direct_mapped(d))
 
 #endif /* __ASM_GRANT_TABLE_H__ */
 /*


From xen-devel-bounces@lists.xenproject.org Sat Feb 06 01:35:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 06 Feb 2021 01:35:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82007.151616 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8CVh-00073q-DF; Sat, 06 Feb 2021 01:35:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82007.151616; Sat, 06 Feb 2021 01:35:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8CVh-00073i-73; Sat, 06 Feb 2021 01:35:41 +0000
Received: by outflank-mailman (input) for mailman id 82007;
 Sat, 06 Feb 2021 01:35:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8CVg-00073a-Bt; Sat, 06 Feb 2021 01:35:40 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8CVg-0004eK-57; Sat, 06 Feb 2021 01:35:40 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8CVf-00019A-Lq; Sat, 06 Feb 2021 01:35:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l8CVf-0004NG-IZ; Sat, 06 Feb 2021 01:35:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=5hvq7ioUwg7mKXMIXUOX7m2mQkvnaZvMunIySe+GUSY=; b=kEcnuZty23uVc5pi36D2KDPutF
	EVc2VctK9GyBMOU9uNzBTdJ2eenJwZ7qwWt5O9CE1o1U6kMSN/cAPVmfwS20xr4UiU8fLmIvk902s
	H9hqeDDZFRq07X4YhRm+tOFyzxTVwCd95ntb/yT7gI8cNw2z3LtMQAXOBK8I3Dnw5RuI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159027-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 159027: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:guest-start:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:guest-start:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-armhf-armhf-libvirt:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:debian-fixup:fail:regression
    linux-linus:test-armhf-armhf-xl:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:guest-start:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=5c279c4cf206e03995e04fd3404fa95ffd243a97
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 06 Feb 2021 01:35:39 +0000

flight 159027 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159027/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-armhf-armhf-xl-arndale  14 guest-start              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck 14 guest-start            fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu 13 debian-fixup            fail REGR. vs. 152332
 test-armhf-armhf-xl          14 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1  14 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw 13 guest-start              fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     14 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                5c279c4cf206e03995e04fd3404fa95ffd243a97
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  189 days
Failing since        152366  2020-08-01 20:49:34 Z  188 days  335 attempts
Testing same since   159027  2021-02-04 23:40:25 Z    1 days    1 attempts

------------------------------------------------------------
4529 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1024590 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Feb 06 08:21:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 06 Feb 2021 08:21:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82076.151680 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8Iq6-0007cK-CA; Sat, 06 Feb 2021 08:21:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82076.151680; Sat, 06 Feb 2021 08:21:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8Iq6-0007cD-90; Sat, 06 Feb 2021 08:21:10 +0000
Received: by outflank-mailman (input) for mailman id 82076;
 Sat, 06 Feb 2021 08:21:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8Iq5-0007c5-6H; Sat, 06 Feb 2021 08:21:09 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8Iq5-0004Eb-0Y; Sat, 06 Feb 2021 08:21:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8Iq4-0005L5-OA; Sat, 06 Feb 2021 08:21:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l8Iq4-0006dw-Mu; Sat, 06 Feb 2021 08:21:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=gReh7YXfF8eRV2ycHIeSkmDo4JCNgilceXRQ/e9UBd4=; b=EHS97wXoMkVfRPC1eZltWAZTte
	45ey6nlUj35NhbeQ9KuA44ihL7uxDU1gkEgVIWoCodVQG8DMOfSYaEcRMho3AIWZonE2rN1PjlhuC
	oSe0QAKLZtOSHXnx2lYd+wM01R9A78AkASpWBubK7FlzV7aJKf1hpQbNhNn1KdNDhock=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159040-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 159040: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=0d96664df322d50e0ac54130e129c0bf4f2b72df
X-Osstest-Versions-That:
    ovmf=1b6c3a94eca7f12f6a3b65a3e8619d2e2e7c1eb6
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 06 Feb 2021 08:21:08 +0000

flight 159040 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159040/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 0d96664df322d50e0ac54130e129c0bf4f2b72df
baseline version:
 ovmf                 1b6c3a94eca7f12f6a3b65a3e8619d2e2e7c1eb6

Last test of basis   159019  2021-02-04 16:41:43 Z    1 days
Testing same since   159040  2021-02-05 11:11:01 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Fan Wang <fan.wang@intel.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Siyuan Fu <siyuan.fu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   1b6c3a94ec..0d96664df3  0d96664df322d50e0ac54130e129c0bf4f2b72df -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat Feb 06 08:43:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 06 Feb 2021 08:43:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82085.151701 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8JBf-0001Ks-Bs; Sat, 06 Feb 2021 08:43:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82085.151701; Sat, 06 Feb 2021 08:43:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8JBf-0001Kl-8q; Sat, 06 Feb 2021 08:43:27 +0000
Received: by outflank-mailman (input) for mailman id 82085;
 Sat, 06 Feb 2021 08:43:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8JBe-0001Kb-7N; Sat, 06 Feb 2021 08:43:26 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8JBd-0004cl-UG; Sat, 06 Feb 2021 08:43:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8JBd-0006Cx-Lb; Sat, 06 Feb 2021 08:43:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l8JBd-0003ow-L4; Sat, 06 Feb 2021 08:43:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=DTYPuxDyg5jajcwbWJwM/2oHr+lTAx6jfP/mNUMnC/s=; b=52sAN1Uy0U9o2QfJ1YAby00WuH
	P/Ynzo6XzTgX1XUKtgCzvupz7BVCOHTRmftMmPkqUsQAPtib/FdEN0WYlp/eXv4lE0sWe8fV+eJRo
	gaDRj02PaNBI9U3KMNw4Cpq0y1SsyYpt7jCJcEN8Y0HJYXbh4vTP9C3tZUYUx6cRqJsE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159032-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 159032: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:xen-boot:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=1ba089f2255bfdb071be3ce6ac6c3069e8012179
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 06 Feb 2021 08:43:25 +0000

flight 159032 qemu-mainline real [real]
flight 159068 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/159032/
http://logs.test-lab.xenproject.org/osstest/logs/159068/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 13 guest-start              fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      13 guest-start    fail in 159068 REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-vhd       8 xen-boot            fail pass in 159068-retest
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 159068-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                1ba089f2255bfdb071be3ce6ac6c3069e8012179
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  169 days
Failing since        152659  2020-08-21 14:07:39 Z  168 days  339 attempts
Testing same since   159032  2021-02-05 04:45:57 Z    1 days    1 attempts

------------------------------------------------------------
373 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 100638 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Feb 06 08:47:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 06 Feb 2021 08:47:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82089.151716 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8JFS-0001W6-VN; Sat, 06 Feb 2021 08:47:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82089.151716; Sat, 06 Feb 2021 08:47:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8JFS-0001Vz-Ru; Sat, 06 Feb 2021 08:47:22 +0000
Received: by outflank-mailman (input) for mailman id 82089;
 Sat, 06 Feb 2021 08:47:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1l8JFR-0001Vu-Cm
 for xen-devel@lists.xenproject.org; Sat, 06 Feb 2021 08:47:21 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l8JFR-0004fo-33; Sat, 06 Feb 2021 08:47:21 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l8JFQ-0000Lx-Qn; Sat, 06 Feb 2021 08:47:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=aCHl8+f/ltmkEPWqrrxaCuzIovfzdfgo+iOfS+H6FX8=; b=XAly+ESxphs9pxEN4sBDCEYAPX
	nPqlY+uopvgyMMdkpri+QX5Zv2b4zVUsii8pG+1qGIW+vhrdNxD6gkcqRVKnSjT2bUzhE2RHln69p
	G0UDn8/kiAg6JJmqzJyFDsolcXmSRSBxNShy5fKurF3ndB/pFtIIWbtXD+YBzZ7qPwGU=;
Subject: Re: [PATCH] xen/arm: domain_build: Ignore device nodes with invalid
 addresses
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall.oss@gmail.com>,
 Elliott Mitchell <ehem+xen@m5p.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <YBmQQ3Tzu++AadKx@mattapan.m5p.com>
 <a422c04c-f908-6fb6-f2de-fea7b18a6e7d@xen.org>
 <b6d342f8-c833-db88-9808-cdc946999300@xen.org>
 <alpine.DEB.2.21.2102021412480.29047@sstabellini-ThinkPad-T480s>
 <06d6b9ec-0db9-d6da-e30b-df9f9381157d@xen.org>
 <alpine.DEB.2.21.2102031315350.29047@sstabellini-ThinkPad-T480s>
 <CAJ=z9a1LsqOMFXV5GLYEkF7=akMx7fT_vpgVtT6xP6MPfmP9vQ@mail.gmail.com>
 <alpine.DEB.2.21.2102031519540.29047@sstabellini-ThinkPad-T480s>
 <9b97789b-5560-0186-642a-0501789830e5@xen.org>
 <alpine.DEB.2.21.2102041435300.29047@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <f87cd0dc-fb01-bf4e-b8f9-66fceb1884f3@xen.org>
Date: Sat, 6 Feb 2021 08:47:18 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2102041435300.29047@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 04/02/2021 22:39, Stefano Stabellini wrote:
> After the discussion with Rob, it is clear that we have to add a check
> on the node name for "pcie" in dt_bus_pci_match. However, that wouldn't
> solve the problem reported by Elliott, because in this case the node name
> is "pci" not "pcie".

I'd like to point out that in the Linux case, the problem was in the 
hostbridge and not the PCI device.

> 
> I suggest that we add a check for "pci" too in dt_bus_pci_match,
> although that means that our check will be slightly different from the
> equivalent Linux check. The "pci" check should come with an in-code
> comment to explain the situation and the reasons for it to be.

I'd like to follow the same approach as a Linux commit:

commit d1ac0002dd297069bb8448c2764c9c31c4668441
Author: Marc Zyngier <maz@kernel.org>
Date:   Wed Aug 19 10:42:55 2020 +0100

     of: address: Work around missing device_type property in pcie nodes

This means a warning should also be added.
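A minimal userspace sketch of the check being discussed (this is not the actual Xen dt_bus_pci_match(), whose real signature takes a device tree node; the helper name and the plain-string parameters are illustrative only):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch: nodes named "pci" or "pcie" are accepted even
 * without a proper device_type property, but with a warning so broken
 * device trees get fixed, following the approach of Linux commit
 * d1ac0002dd29. */
static bool node_matches_pci_bus(const char *node_name,
                                 const char *device_type)
{
    /* The well-formed case: device_type = "pci" is present. */
    if (device_type && !strcmp(device_type, "pci"))
        return true;

    /* Workaround: fall back to the node name, but warn. */
    if (!strcmp(node_name, "pci") || !strcmp(node_name, "pcie")) {
        fprintf(stderr,
                "WARNING: node '%s' lacks 'device_type = \"pci\"'\n",
                node_name);
        return true;
    }

    return false;
}
```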

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat Feb 06 09:09:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 06 Feb 2021 09:09:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82099.151738 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8Jb9-0003tx-Ux; Sat, 06 Feb 2021 09:09:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82099.151738; Sat, 06 Feb 2021 09:09:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8Jb9-0003tq-R5; Sat, 06 Feb 2021 09:09:47 +0000
Received: by outflank-mailman (input) for mailman id 82099;
 Sat, 06 Feb 2021 09:09:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1l8Jb9-0003tl-DC
 for xen-devel@lists.xenproject.org; Sat, 06 Feb 2021 09:09:47 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l8Jb7-000549-SD; Sat, 06 Feb 2021 09:09:45 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l8Jb7-0001s9-Km; Sat, 06 Feb 2021 09:09:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=oqP4xG7bM1foQTlDIS9+rLLoDK89o1h2rVe0nC9o5P8=; b=eJ0Xem+TaWtSQt42aNNSBlhhs6
	x2KqTrKpvZ7X200XQw27FAGyt4Zp9PeAKFpxTh2RcvcXVzhsTkGpvlGtowIJhgffsx941qEiqixPg
	5rrI1esqawmkEZwPRKSzVZHMCeJMUUoq/g1ETK0Mf1QLDJCPV0OLJ6rkUqR7bJ4hgFbc=;
Subject: Re: [PATCH] x86/build: correctly record dependencies of asm-offsets.s
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Ian Jackson <iwj@xenproject.org>
References: <b3b57f6b-3ed9-18f6-2a87-6af3304c6645@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <c2b857fa-a606-1795-3aaf-a69572c43951@xen.org>
Date: Sat, 6 Feb 2021 09:09:43 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <b3b57f6b-3ed9-18f6-2a87-6af3304c6645@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 01/02/2021 14:56, Jan Beulich wrote:
> Going through an intermediate *.new file requires telling the compiler
> what the real target is, so that the inclusion of the resulting .*.d
> file will actually be useful.
> 
> Fixes: 7d2d7a43d014 ("x86/build: limit rebuilding of asm-offsets.h")
> Reported-by: Julien Grall <julien@xen.org>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> Already on the original patch I did suggest that perhaps Arm would want
> to follow suit. So again - perhaps the rules should be unified by moving
> to common code?

Sorry, I missed the original patch. The recent changes look beneficial 
to Arm as well.

> 
> --- a/xen/arch/x86/Makefile
> +++ b/xen/arch/x86/Makefile
> @@ -241,7 +241,7 @@ efi/buildid.o efi/relocs-dummy.o: $(BASE
>   efi/buildid.o efi/relocs-dummy.o: ;
>   
>   asm-offsets.s: $(TARGET_SUBARCH)/asm-offsets.c $(BASEDIR)/include/asm-x86/asm-macros.h

On Arm, only asm-offsets.c is a prerequisite. May I ask why you need 
the second one on x86?

> -	$(CC) $(filter-out -Wa$(comma)% -flto,$(c_flags)) -S -g0 -o $@.new $<
> +	$(CC) $(filter-out -Wa$(comma)% -flto,$(c_flags)) -S -g0 -o $@.new -MQ $@ $<
>   	$(call move-if-changed,$@.new,$@)
>   
>   asm-macros.i: CFLAGS-y += -D__ASSEMBLY__ -P
> 

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat Feb 06 10:50:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 06 Feb 2021 10:50:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82123.151804 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8LA7-0005ci-LM; Sat, 06 Feb 2021 10:49:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82123.151804; Sat, 06 Feb 2021 10:49:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8LA7-0005cb-Hw; Sat, 06 Feb 2021 10:49:59 +0000
Received: by outflank-mailman (input) for mailman id 82123;
 Sat, 06 Feb 2021 10:49:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gVlU=HI=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1l8LA6-0005Wo-9C
 for xen-devel@lists.xenproject.org; Sat, 06 Feb 2021 10:49:58 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 819c7b88-760f-48f6-b9e9-d19095e75ee2;
 Sat, 06 Feb 2021 10:49:47 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id DD5BDAD29;
 Sat,  6 Feb 2021 10:49:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 819c7b88-760f-48f6-b9e9-d19095e75ee2
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612608587; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=dE46dgUQjvo/vJLSEaUtAdX4bEkarOQCB+4cTFCnP1M=;
	b=QrCfkhZ9kH6dd3p0kbDY5etXQLR7YA6fF7S4sBCvIYn3HsEhD8zDi8/nrJAkjr+5OiuNsO
	qR+sdnPHE3sVhc6pMZV3asmTaXx7t2OGH0pSGgLCV/LVP9RbwfgXXcjXrnMW/fO54ZX1ek
	UhKlmhgCima7ddr5rYrpiYWaVy5Ib8M=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org,
	linux-block@vger.kernel.org,
	netdev@vger.kernel.org,
	linux-scsi@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	stable@vger.kernel.org,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Jens Axboe <axboe@kernel.dk>,
	Wei Liu <wei.liu@kernel.org>,
	Paul Durrant <paul@xen.org>,
	"David S. Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>
Subject: [PATCH 0/7] xen/events: bug fixes and some diagnostic aids
Date: Sat,  6 Feb 2021 11:49:25 +0100
Message-Id: <20210206104932.29064-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The first three patches are fixes for XSA-332. They avoid WARN splats
and fix a performance issue with interdomain events.

Patches 4 and 5 extend event handling in order to add per pv-device
statistics to sysfs and a per-backend-device control of the spurious
event delay.

Patches 6 and 7 are minor fixes I had lying around.

Juergen Gross (7):
  xen/events: reset affinity of 2-level event initially
  xen/events: don't unmask an event channel when an eoi is pending
  xen/events: fix lateeoi irq acknowledgement
  xen/events: link interdomain events to associated xenbus device
  xen/events: add per-xenbus device event statistics and settings
  xen/evtch: use smp barriers for user event ring
  xen/evtchn: read producer index only once

 drivers/block/xen-blkback/xenbus.c  |   2 +-
 drivers/net/xen-netback/interface.c |  16 ++--
 drivers/xen/events/events_2l.c      |  20 +++++
 drivers/xen/events/events_base.c    | 133 ++++++++++++++++++++++------
 drivers/xen/evtchn.c                |   6 +-
 drivers/xen/pvcalls-back.c          |   4 +-
 drivers/xen/xen-pciback/xenbus.c    |   2 +-
 drivers/xen/xen-scsiback.c          |   2 +-
 drivers/xen/xenbus/xenbus_probe.c   |  66 ++++++++++++++
 include/xen/events.h                |   7 +-
 include/xen/xenbus.h                |   7 ++
 11 files changed, 217 insertions(+), 48 deletions(-)

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Sat Feb 06 10:50:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 06 Feb 2021 10:50:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82124.151809 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8LA8-0005dP-16; Sat, 06 Feb 2021 10:50:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82124.151809; Sat, 06 Feb 2021 10:49:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8LA7-0005dA-RH; Sat, 06 Feb 2021 10:49:59 +0000
Received: by outflank-mailman (input) for mailman id 82124;
 Sat, 06 Feb 2021 10:49:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gVlU=HI=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1l8LA6-0005VY-Ih
 for xen-devel@lists.xenproject.org; Sat, 06 Feb 2021 10:49:58 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e83dcfe2-0d53-42b6-8df6-ef237726f0a4;
 Sat, 06 Feb 2021 10:49:49 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id DB079AEB9;
 Sat,  6 Feb 2021 10:49:47 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e83dcfe2-0d53-42b6-8df6-ef237726f0a4
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612608588; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=9dwSISahcOfN3NMnONYxvA+Xvjxj9asEhl9tMU+G0iU=;
	b=Mo4F52hs9OAJxH6oqDjd2ntDp9R6F7ZJdn+zPE4y5bLUhZ9kXSzHtT1/3GVi2DosWy+1Tx
	pBjsTb/LSNITlohk/amoDfpckf1C/ScW8KmaztBneh8tyVPIahPVYqgwlYXL4PiRsYG7Am
	RzQCI08jXlLS/z74o6/g8tAKTJ04Nz8=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [PATCH 6/7] xen/evtch: use smp barriers for user event ring
Date: Sat,  6 Feb 2021 11:49:31 +0100
Message-Id: <20210206104932.29064-7-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210206104932.29064-1-jgross@suse.com>
References: <20210206104932.29064-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The ring buffer for user events is used in the local system only, so
smp barriers are fine for ensuring consistency.
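A userspace analogue of the ring handling in drivers/xen/evtchn.c (a sketch, not the kernel code): smp_wmb()/smp_rmb() are modeled with C11 fences, which order accesses between CPUs of the same machine; that is sufficient here because only the local system touches this ring.

```c
#include <assert.h>
#include <stdatomic.h>

#define RING_SIZE 4
static unsigned ring[RING_SIZE];
static unsigned ring_prod, ring_cons;

/* SMP-only barriers, modeled as C11 fences for this sketch. */
#define smp_wmb() atomic_thread_fence(memory_order_release)
#define smp_rmb() atomic_thread_fence(memory_order_acquire)

static int ring_put(unsigned port)
{
    if (ring_prod - ring_cons >= RING_SIZE)
        return 0;                      /* ring full */
    ring[ring_prod % RING_SIZE] = port;
    smp_wmb();                         /* entry visible before index bump */
    ring_prod++;
    return 1;
}

static int ring_get(unsigned *port)
{
    if (ring_cons == ring_prod)
        return 0;                      /* ring empty */
    smp_rmb();                         /* see the entry before copying it */
    *port = ring[ring_cons % RING_SIZE];
    ring_cons++;
    return 1;
}
```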

Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 drivers/xen/evtchn.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/xen/evtchn.c b/drivers/xen/evtchn.c
index a7a85719a8c8..421382c73d88 100644
--- a/drivers/xen/evtchn.c
+++ b/drivers/xen/evtchn.c
@@ -173,7 +173,7 @@ static irqreturn_t evtchn_interrupt(int irq, void *data)
 
 	if ((u->ring_prod - u->ring_cons) < u->ring_size) {
 		*evtchn_ring_entry(u, u->ring_prod) = evtchn->port;
-		wmb(); /* Ensure ring contents visible */
+		smp_wmb(); /* Ensure ring contents visible */
 		if (u->ring_cons == u->ring_prod++) {
 			wake_up_interruptible(&u->evtchn_wait);
 			kill_fasync(&u->evtchn_async_queue,
@@ -245,7 +245,7 @@ static ssize_t evtchn_read(struct file *file, char __user *buf,
 	}
 
 	rc = -EFAULT;
-	rmb(); /* Ensure that we see the port before we copy it. */
+	smp_rmb(); /* Ensure that we see the port before we copy it. */
 	if (copy_to_user(buf, evtchn_ring_entry(u, c), bytes1) ||
 	    ((bytes2 != 0) &&
 	     copy_to_user(&buf[bytes1], &u->ring[0], bytes2)))
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Sat Feb 06 10:50:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 06 Feb 2021 10:50:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82122.151787 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8LA3-0005YM-ED; Sat, 06 Feb 2021 10:49:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82122.151787; Sat, 06 Feb 2021 10:49:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8LA3-0005Y3-7O; Sat, 06 Feb 2021 10:49:55 +0000
Received: by outflank-mailman (input) for mailman id 82122;
 Sat, 06 Feb 2021 10:49:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gVlU=HI=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1l8LA1-0005VY-IM
 for xen-devel@lists.xenproject.org; Sat, 06 Feb 2021 10:49:53 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 22f2ff8f-1608-4480-8ae5-8185a34135e6;
 Sat, 06 Feb 2021 10:49:47 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1DFC9AD78;
 Sat,  6 Feb 2021 10:49:47 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 22f2ff8f-1608-4480-8ae5-8185a34135e6
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612608587; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=VlVL2Y2ASTFVpwcU4SmdSz79vlZWGd1T5oGBYzvUtKQ=;
	b=CaAOHRpv7RYB3U4+TM6JjEfbGxTHtF6cA58Si23ab7AwqoG6YglxUsLmvFuf0c94PPFVIm
	SpAadYtJiVApk9QAnqoPFKPJPHt+6euf4qwQg6uIr8CsiyoWiW+qlFD/zWnyEkhfhkHS8b
	Gs1DtsywWPRDSXVKB3DIktuDeZYkblE=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	stable@vger.kernel.org
Subject: [PATCH 3/7] xen/events: fix lateeoi irq acknowledgment
Date: Sat,  6 Feb 2021 11:49:28 +0100
Message-Id: <20210206104932.29064-4-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210206104932.29064-1-jgross@suse.com>
References: <20210206104932.29064-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When an irq has been accepted as a result of receiving an event, the
related event should be cleared. The lateeoi model is missing that,
resulting in a continuous stream of events being signalled.

Fixes: 54c9de89895e0a ("xen/events: add a new late EOI evtchn framework")
Cc: stable@vger.kernel.org
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 drivers/xen/events/events_base.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index 6a836d131e73..7b26ef817f8b 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -1826,6 +1826,7 @@ static void lateeoi_ack_dynirq(struct irq_data *data)
 	if (VALID_EVTCHN(evtchn)) {
 		info->eoi_pending = true;
 		mask_evtchn(evtchn);
+		clear_evtchn(evtchn);
 	}
 }
 
@@ -1838,6 +1839,7 @@ static void lateeoi_mask_ack_dynirq(struct irq_data *data)
 		info->masked = true;
 		info->eoi_pending = true;
 		mask_evtchn(evtchn);
+		clear_evtchn(evtchn);
 	}
 }
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Sat Feb 06 10:50:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 06 Feb 2021 10:50:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82121.151779 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8LA3-0005Xh-1Q; Sat, 06 Feb 2021 10:49:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82121.151779; Sat, 06 Feb 2021 10:49:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8LA2-0005Xa-Ui; Sat, 06 Feb 2021 10:49:54 +0000
Received: by outflank-mailman (input) for mailman id 82121;
 Sat, 06 Feb 2021 10:49:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gVlU=HI=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1l8LA1-0005Wo-Gc
 for xen-devel@lists.xenproject.org; Sat, 06 Feb 2021 10:49:53 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bdf918e8-a7cd-4417-a934-24d4bee992af;
 Sat, 06 Feb 2021 10:49:47 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E5AFAAD2B;
 Sat,  6 Feb 2021 10:49:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bdf918e8-a7cd-4417-a934-24d4bee992af
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612608587; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=GH77gMFeluI4ISQQCL5mL3HOsRIyaZbyOEdSH8AsWt0=;
	b=IFFMq9YDbxtkEPiO6C/5LtLAtDHYvxM651iS1+qH4tvdDWCxIzsRg6ikvZ/7+U0tEsFI1x
	ri0x4lc7ZzQrLo2ZMEyqZPK8rluRKsUjTIfUFmT+v41H7Vdw1SFIxyZnTE+MZ2UQIsn8hn
	Yc3SHb9TRQIBgPoj/vnPwBgJcmpAZU4=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	stable@vger.kernel.org,
	Julien Grall <julien@xen.org>
Subject: [PATCH 2/7] xen/events: don't unmask an event channel when an eoi is pending
Date: Sat,  6 Feb 2021 11:49:27 +0100
Message-Id: <20210206104932.29064-3-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210206104932.29064-1-jgross@suse.com>
References: <20210206104932.29064-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

An event channel should be kept masked while an eoi is pending for it.
When the channel is migrated to another cpu, however, it might be
unmasked.

In order to avoid this, keep two different flags for each event channel
to be able to distinguish "normal" masking/unmasking from eoi related
masking/unmasking. The event channel should only be able to generate
an interrupt if both flags are cleared.
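The two-flag rule can be sketched in isolation (the field names mirror the patch's irq_info additions; the struct and helper here are illustrative, not kernel code):

```c
#include <assert.h>
#include <stdbool.h>

struct irq_info_flags {
    bool masked;        /* explicitly masked via the irq chip */
    bool eoi_pending;   /* lateeoi ack seen, EOI not yet delivered */
};

/* The channel may only be unmasked when neither a "normal" mask nor a
 * pending EOI holds it masked. */
static bool can_unmask(const struct irq_info_flags *info)
{
    return !info->masked && !info->eoi_pending;
}
```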

Cc: stable@vger.kernel.org
Fixes: 54c9de89895e0a36047 ("xen/events: add a new late EOI evtchn framework")
Reported-by: Julien Grall <julien@xen.org>
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 drivers/xen/events/events_base.c | 63 +++++++++++++++++++++++++++-----
 1 file changed, 53 insertions(+), 10 deletions(-)

diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index e850f79351cb..6a836d131e73 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -97,7 +97,9 @@ struct irq_info {
 	short refcnt;
 	u8 spurious_cnt;
 	u8 is_accounted;
-	enum xen_irq_type type; /* type */
+	short type;		/* type: IRQT_* */
+	bool masked;		/* Is event explicitly masked? */
+	bool eoi_pending;	/* Is EOI pending? */
 	unsigned irq;
 	evtchn_port_t evtchn;   /* event channel */
 	unsigned short cpu;     /* cpu bound */
@@ -302,6 +304,8 @@ static int xen_irq_info_common_setup(struct irq_info *info,
 	info->irq = irq;
 	info->evtchn = evtchn;
 	info->cpu = cpu;
+	info->masked = true;
+	info->eoi_pending = false;
 
 	ret = set_evtchn_to_irq(evtchn, irq);
 	if (ret < 0)
@@ -585,7 +589,10 @@ static void xen_irq_lateeoi_locked(struct irq_info *info, bool spurious)
 	}
 
 	info->eoi_time = 0;
-	unmask_evtchn(evtchn);
+	info->eoi_pending = false;
+
+	if (!info->masked)
+		unmask_evtchn(evtchn);
 }
 
 static void xen_irq_lateeoi_worker(struct work_struct *work)
@@ -830,7 +837,11 @@ static unsigned int __startup_pirq(unsigned int irq)
 		goto err;
 
 out:
-	unmask_evtchn(evtchn);
+	info->masked = false;
+
+	if (!info->eoi_pending)
+		unmask_evtchn(evtchn);
+
 	eoi_pirq(irq_get_irq_data(irq));
 
 	return 0;
@@ -857,6 +868,7 @@ static void shutdown_pirq(struct irq_data *data)
 	if (!VALID_EVTCHN(evtchn))
 		return;
 
+	info->masked = true;
 	mask_evtchn(evtchn);
 	xen_evtchn_close(evtchn);
 	xen_irq_info_cleanup(info);
@@ -1768,18 +1780,26 @@ static int set_affinity_irq(struct irq_data *data, const struct cpumask *dest,
 
 static void enable_dynirq(struct irq_data *data)
 {
-	evtchn_port_t evtchn = evtchn_from_irq(data->irq);
+	struct irq_info *info = info_for_irq(data->irq);
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
 
-	if (VALID_EVTCHN(evtchn))
-		unmask_evtchn(evtchn);
+	if (VALID_EVTCHN(evtchn)) {
+		info->masked = false;
+
+		if (!info->eoi_pending)
+			unmask_evtchn(evtchn);
+	}
 }
 
 static void disable_dynirq(struct irq_data *data)
 {
-	evtchn_port_t evtchn = evtchn_from_irq(data->irq);
+	struct irq_info *info = info_for_irq(data->irq);
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
 
-	if (VALID_EVTCHN(evtchn))
+	if (VALID_EVTCHN(evtchn)) {
+		info->masked = true;
 		mask_evtchn(evtchn);
+	}
 }
 
 static void ack_dynirq(struct irq_data *data)
@@ -1798,6 +1818,29 @@ static void mask_ack_dynirq(struct irq_data *data)
 	ack_dynirq(data);
 }
 
+static void lateeoi_ack_dynirq(struct irq_data *data)
+{
+	struct irq_info *info = info_for_irq(data->irq);
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
+
+	if (VALID_EVTCHN(evtchn)) {
+		info->eoi_pending = true;
+		mask_evtchn(evtchn);
+	}
+}
+
+static void lateeoi_mask_ack_dynirq(struct irq_data *data)
+{
+	struct irq_info *info = info_for_irq(data->irq);
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
+
+	if (VALID_EVTCHN(evtchn)) {
+		info->masked = true;
+		info->eoi_pending = true;
+		mask_evtchn(evtchn);
+	}
+}
+
 static int retrigger_dynirq(struct irq_data *data)
 {
 	evtchn_port_t evtchn = evtchn_from_irq(data->irq);
@@ -2023,8 +2066,8 @@ static struct irq_chip xen_lateeoi_chip __read_mostly = {
 	.irq_mask		= disable_dynirq,
 	.irq_unmask		= enable_dynirq,
 
-	.irq_ack		= mask_ack_dynirq,
-	.irq_mask_ack		= mask_ack_dynirq,
+	.irq_ack		= lateeoi_ack_dynirq,
+	.irq_mask_ack		= lateeoi_mask_ack_dynirq,
 
 	.irq_set_affinity	= set_affinity_irq,
 	.irq_retrigger		= retrigger_dynirq,
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Sat Feb 06 10:50:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 06 Feb 2021 10:50:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82120.151768 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8L9x-0005Vk-Mv; Sat, 06 Feb 2021 10:49:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82120.151768; Sat, 06 Feb 2021 10:49:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8L9x-0005Vd-JO; Sat, 06 Feb 2021 10:49:49 +0000
Received: by outflank-mailman (input) for mailman id 82120;
 Sat, 06 Feb 2021 10:49:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gVlU=HI=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1l8L9w-0005VY-Of
 for xen-devel@lists.xenproject.org; Sat, 06 Feb 2021 10:49:48 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2769dbec-4bb6-4c28-870b-700ba0aab103;
 Sat, 06 Feb 2021 10:49:47 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B8C33AC43;
 Sat,  6 Feb 2021 10:49:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2769dbec-4bb6-4c28-870b-700ba0aab103
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612608586; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=TI/KsWCKGxrmPABhxI8VTW9N3/J68xoTHMjTIJFsX+o=;
	b=tDYiLQHQ2Y3VHKT6bGCTthGsTkWsSP79d3wVp5yBPPuyeVHOi0AtUJVqbLwz1XXqsFNptC
	QG67jJ0rLSBrGdI/inYBfrWSl8RL6fq4NHsMgaa+tnF5n4VCoJqieLOjwsjXVUNIWTcS/w
	ZljrX2l+lSX/iIl9WKpsEVz8MKCRhs0=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	stable@vger.kernel.org,
	Julien Grall <julien@xen.org>
Subject: [PATCH 1/7] xen/events: reset affinity of 2-level event initially
Date: Sat,  6 Feb 2021 11:49:26 +0100
Message-Id: <20210206104932.29064-2-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210206104932.29064-1-jgross@suse.com>
References: <20210206104932.29064-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When creating a new event channel with 2-level events the affinity
needs to be reset initially in order to avoid using an old affinity
from earlier usage of the event channel port.

The same applies to the affinity when onlining a vcpu: all old
affinity settings for this vcpu must be reset. As percpu events get
initialized before the percpu event channel hook is called,
resetting of the affinities happens after offlining a vcpu (this is
working, as initial percpu memory is zeroed out).

Cc: stable@vger.kernel.org
Reported-by: Julien Grall <julien@xen.org>
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 drivers/xen/events/events_2l.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/drivers/xen/events/events_2l.c b/drivers/xen/events/events_2l.c
index da87f3a1e351..23217940144a 100644
--- a/drivers/xen/events/events_2l.c
+++ b/drivers/xen/events/events_2l.c
@@ -47,6 +47,16 @@ static unsigned evtchn_2l_max_channels(void)
 	return EVTCHN_2L_NR_CHANNELS;
 }
 
+static int evtchn_2l_setup(evtchn_port_t evtchn)
+{
+	unsigned int cpu;
+
+	for_each_online_cpu(cpu)
+		clear_bit(evtchn, BM(per_cpu(cpu_evtchn_mask, cpu)));
+
+	return 0;
+}
+
 static void evtchn_2l_bind_to_cpu(evtchn_port_t evtchn, unsigned int cpu,
 				  unsigned int old_cpu)
 {
@@ -355,9 +365,18 @@ static void evtchn_2l_resume(void)
 				EVTCHN_2L_NR_CHANNELS/BITS_PER_EVTCHN_WORD);
 }
 
+static int evtchn_2l_percpu_deinit(unsigned int cpu)
+{
+	memset(per_cpu(cpu_evtchn_mask, cpu), 0, sizeof(xen_ulong_t) *
+			EVTCHN_2L_NR_CHANNELS/BITS_PER_EVTCHN_WORD);
+
+	return 0;
+}
+
 static const struct evtchn_ops evtchn_ops_2l = {
 	.max_channels      = evtchn_2l_max_channels,
 	.nr_channels       = evtchn_2l_max_channels,
+	.setup             = evtchn_2l_setup,
 	.bind_to_cpu       = evtchn_2l_bind_to_cpu,
 	.clear_pending     = evtchn_2l_clear_pending,
 	.set_pending       = evtchn_2l_set_pending,
@@ -367,6 +386,7 @@ static const struct evtchn_ops evtchn_ops_2l = {
 	.unmask            = evtchn_2l_unmask,
 	.handle_events     = evtchn_2l_handle_events,
 	.resume	           = evtchn_2l_resume,
+	.percpu_deinit     = evtchn_2l_percpu_deinit,
 };
 
 void __init xen_evtchn_2l_init(void)
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Sat Feb 06 10:50:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 06 Feb 2021 10:50:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82125.151828 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8LAD-00069n-ER; Sat, 06 Feb 2021 10:50:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82125.151828; Sat, 06 Feb 2021 10:50:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8LAD-00069d-8R; Sat, 06 Feb 2021 10:50:05 +0000
Received: by outflank-mailman (input) for mailman id 82125;
 Sat, 06 Feb 2021 10:50:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gVlU=HI=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1l8LAB-0005Wo-9S
 for xen-devel@lists.xenproject.org; Sat, 06 Feb 2021 10:50:03 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4de7dbd8-6cea-4538-ba5f-6835bcb1d264;
 Sat, 06 Feb 2021 10:49:48 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 86EEFADE0;
 Sat,  6 Feb 2021 10:49:47 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4de7dbd8-6cea-4538-ba5f-6835bcb1d264
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612608587; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=1VJK6QNP8PH/qhgzBa2alWmwYAG3vAg+9p0SW4Mi99s=;
	b=m0Tq2m2yHkw5PZ5bXnK45m5pqwBKgVDh4u3AQOZ5VZClNlixytTRypTI7tI3snxhVmw/6b
	9wUuuvFfc+rvcMgm/ff1eFYdRXRvh3T6Tiwzcb3n9NcARR35WnTegdB8+Bk1UV6axxbabD
	bLMchCjhctq9zEGCHIGMfXz3R+TCQgQ=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	netdev@vger.kernel.org,
	linux-scsi@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Jens Axboe <axboe@kernel.dk>,
	Wei Liu <wei.liu@kernel.org>,
	Paul Durrant <paul@xen.org>,
	"David S. Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH 4/7] xen/events: link interdomain events to associated xenbus device
Date: Sat,  6 Feb 2021 11:49:29 +0100
Message-Id: <20210206104932.29064-5-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210206104932.29064-1-jgross@suse.com>
References: <20210206104932.29064-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In order to support per-device event channel settings (e.g. lateeoi
spurious event thresholds), add a xenbus device pointer to struct
irq_info and modify the related event channel binding interfaces to
take a pointer to the xenbus device as a parameter instead of the
domain id of the other side.

While at it, remove the stale prototype of bind_evtchn_to_irq_lateeoi().
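
The interface change amounts to deriving the remote domain id inside
the binding helper instead of passing it in, while keeping the device
pointer for later per-device accounting. A stripped-down sketch (the
structs are reduced to just the fields used here, and the hypercall is
elided):

```c
#include <stddef.h>

/* Reduced stand-ins for the kernel structures involved. */
struct xenbus_device {
	int otherend_id;	/* domid of the other side */
};

struct irq_info {
	struct xenbus_device *interdomain; /* new: owning xenbus device */
};

/* New-style binding: the caller hands over the device, the helper
 * derives the remote domain id itself (dev->otherend_id) and records
 * the device in the irq_info. */
static int bind_interdomain_sketch(struct irq_info *info,
				   struct xenbus_device *dev)
{
	int remote_dom = dev->otherend_id;

	info->interdomain = dev;
	return remote_dom;	/* stands in for the hypercall argument */
}
```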

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 drivers/block/xen-blkback/xenbus.c  |  2 +-
 drivers/net/xen-netback/interface.c | 16 +++++------
 drivers/xen/events/events_base.c    | 41 +++++++++++++++++------------
 drivers/xen/pvcalls-back.c          |  4 +--
 drivers/xen/xen-pciback/xenbus.c    |  2 +-
 drivers/xen/xen-scsiback.c          |  2 +-
 include/xen/events.h                |  7 ++---
 7 files changed, 41 insertions(+), 33 deletions(-)

diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
index 9860d4842f36..c2aaf690352c 100644
--- a/drivers/block/xen-blkback/xenbus.c
+++ b/drivers/block/xen-blkback/xenbus.c
@@ -245,7 +245,7 @@ static int xen_blkif_map(struct xen_blkif_ring *ring, grant_ref_t *gref,
 	if (req_prod - rsp_prod > size)
 		goto fail;
 
-	err = bind_interdomain_evtchn_to_irqhandler_lateeoi(blkif->domid,
+	err = bind_interdomain_evtchn_to_irqhandler_lateeoi(blkif->be->dev,
 			evtchn, xen_blkif_be_int, 0, "blkif-backend", ring);
 	if (err < 0)
 		goto fail;
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index acb786d8b1d8..494b4330a4ea 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -628,13 +628,13 @@ int xenvif_connect_ctrl(struct xenvif *vif, grant_ref_t ring_ref,
 			unsigned int evtchn)
 {
 	struct net_device *dev = vif->dev;
+	struct xenbus_device *xendev = xenvif_to_xenbus_device(vif);
 	void *addr;
 	struct xen_netif_ctrl_sring *shared;
 	RING_IDX rsp_prod, req_prod;
 	int err;
 
-	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(vif),
-				     &ring_ref, 1, &addr);
+	err = xenbus_map_ring_valloc(xendev, &ring_ref, 1, &addr);
 	if (err)
 		goto err;
 
@@ -648,7 +648,7 @@ int xenvif_connect_ctrl(struct xenvif *vif, grant_ref_t ring_ref,
 	if (req_prod - rsp_prod > RING_SIZE(&vif->ctrl))
 		goto err_unmap;
 
-	err = bind_interdomain_evtchn_to_irq_lateeoi(vif->domid, evtchn);
+	err = bind_interdomain_evtchn_to_irq_lateeoi(xendev, evtchn);
 	if (err < 0)
 		goto err_unmap;
 
@@ -671,8 +671,7 @@ int xenvif_connect_ctrl(struct xenvif *vif, grant_ref_t ring_ref,
 	vif->ctrl_irq = 0;
 
 err_unmap:
-	xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(vif),
-				vif->ctrl.sring);
+	xenbus_unmap_ring_vfree(xendev, vif->ctrl.sring);
 	vif->ctrl.sring = NULL;
 
 err:
@@ -717,6 +716,7 @@ int xenvif_connect_data(struct xenvif_queue *queue,
 			unsigned int tx_evtchn,
 			unsigned int rx_evtchn)
 {
+	struct xenbus_device *dev = xenvif_to_xenbus_device(queue->vif);
 	struct task_struct *task;
 	int err;
 
@@ -753,7 +753,7 @@ int xenvif_connect_data(struct xenvif_queue *queue,
 	if (tx_evtchn == rx_evtchn) {
 		/* feature-split-event-channels == 0 */
 		err = bind_interdomain_evtchn_to_irqhandler_lateeoi(
-			queue->vif->domid, tx_evtchn, xenvif_interrupt, 0,
+			dev, tx_evtchn, xenvif_interrupt, 0,
 			queue->name, queue);
 		if (err < 0)
 			goto err;
@@ -764,7 +764,7 @@ int xenvif_connect_data(struct xenvif_queue *queue,
 		snprintf(queue->tx_irq_name, sizeof(queue->tx_irq_name),
 			 "%s-tx", queue->name);
 		err = bind_interdomain_evtchn_to_irqhandler_lateeoi(
-			queue->vif->domid, tx_evtchn, xenvif_tx_interrupt, 0,
+			dev, tx_evtchn, xenvif_tx_interrupt, 0,
 			queue->tx_irq_name, queue);
 		if (err < 0)
 			goto err;
@@ -774,7 +774,7 @@ int xenvif_connect_data(struct xenvif_queue *queue,
 		snprintf(queue->rx_irq_name, sizeof(queue->rx_irq_name),
 			 "%s-rx", queue->name);
 		err = bind_interdomain_evtchn_to_irqhandler_lateeoi(
-			queue->vif->domid, rx_evtchn, xenvif_rx_interrupt, 0,
+			dev, rx_evtchn, xenvif_rx_interrupt, 0,
 			queue->rx_irq_name, queue);
 		if (err < 0)
 			goto err;
diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index 7b26ef817f8b..8c620c11e32a 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -63,6 +63,7 @@
 #include <xen/interface/physdev.h>
 #include <xen/interface/sched.h>
 #include <xen/interface/vcpu.h>
+#include <xen/xenbus.h>
 #include <asm/hw_irq.h>
 
 #include "events_internal.h"
@@ -117,6 +118,7 @@ struct irq_info {
 			unsigned char flags;
 			uint16_t domid;
 		} pirq;
+		struct xenbus_device *interdomain;
 	} u;
 };
 
@@ -317,11 +319,16 @@ static int xen_irq_info_common_setup(struct irq_info *info,
 }
 
 static int xen_irq_info_evtchn_setup(unsigned irq,
-				     evtchn_port_t evtchn)
+				     evtchn_port_t evtchn,
+				     struct xenbus_device *dev)
 {
 	struct irq_info *info = info_for_irq(irq);
+	int ret;
 
-	return xen_irq_info_common_setup(info, irq, IRQT_EVTCHN, evtchn, 0);
+	ret = xen_irq_info_common_setup(info, irq, IRQT_EVTCHN, evtchn, 0);
+	info->u.interdomain = dev;
+
+	return ret;
 }
 
 static int xen_irq_info_ipi_setup(unsigned cpu,
@@ -1128,7 +1135,8 @@ int xen_pirq_from_irq(unsigned irq)
 }
 EXPORT_SYMBOL_GPL(xen_pirq_from_irq);
 
-static int bind_evtchn_to_irq_chip(evtchn_port_t evtchn, struct irq_chip *chip)
+static int bind_evtchn_to_irq_chip(evtchn_port_t evtchn, struct irq_chip *chip,
+				   struct xenbus_device *dev)
 {
 	int irq;
 	int ret;
@@ -1148,7 +1156,7 @@ static int bind_evtchn_to_irq_chip(evtchn_port_t evtchn, struct irq_chip *chip)
 		irq_set_chip_and_handler_name(irq, chip,
 					      handle_edge_irq, "event");
 
-		ret = xen_irq_info_evtchn_setup(irq, evtchn);
+		ret = xen_irq_info_evtchn_setup(irq, evtchn, dev);
 		if (ret < 0) {
 			__unbind_from_irq(irq);
 			irq = ret;
@@ -1175,7 +1183,7 @@ static int bind_evtchn_to_irq_chip(evtchn_port_t evtchn, struct irq_chip *chip)
 
 int bind_evtchn_to_irq(evtchn_port_t evtchn)
 {
-	return bind_evtchn_to_irq_chip(evtchn, &xen_dynamic_chip);
+	return bind_evtchn_to_irq_chip(evtchn, &xen_dynamic_chip, NULL);
 }
 EXPORT_SYMBOL_GPL(bind_evtchn_to_irq);
 
@@ -1224,27 +1232,27 @@ static int bind_ipi_to_irq(unsigned int ipi, unsigned int cpu)
 	return irq;
 }
 
-static int bind_interdomain_evtchn_to_irq_chip(unsigned int remote_domain,
+static int bind_interdomain_evtchn_to_irq_chip(struct xenbus_device *dev,
 					       evtchn_port_t remote_port,
 					       struct irq_chip *chip)
 {
 	struct evtchn_bind_interdomain bind_interdomain;
 	int err;
 
-	bind_interdomain.remote_dom  = remote_domain;
+	bind_interdomain.remote_dom  = dev->otherend_id;
 	bind_interdomain.remote_port = remote_port;
 
 	err = HYPERVISOR_event_channel_op(EVTCHNOP_bind_interdomain,
 					  &bind_interdomain);
 
 	return err ? : bind_evtchn_to_irq_chip(bind_interdomain.local_port,
-					       chip);
+					       chip, dev);
 }
 
-int bind_interdomain_evtchn_to_irq_lateeoi(unsigned int remote_domain,
+int bind_interdomain_evtchn_to_irq_lateeoi(struct xenbus_device *dev,
 					   evtchn_port_t remote_port)
 {
-	return bind_interdomain_evtchn_to_irq_chip(remote_domain, remote_port,
+	return bind_interdomain_evtchn_to_irq_chip(dev, remote_port,
 						   &xen_lateeoi_chip);
 }
 EXPORT_SYMBOL_GPL(bind_interdomain_evtchn_to_irq_lateeoi);
@@ -1357,7 +1365,7 @@ static int bind_evtchn_to_irqhandler_chip(evtchn_port_t evtchn,
 {
 	int irq, retval;
 
-	irq = bind_evtchn_to_irq_chip(evtchn, chip);
+	irq = bind_evtchn_to_irq_chip(evtchn, chip, NULL);
 	if (irq < 0)
 		return irq;
 	retval = request_irq(irq, handler, irqflags, devname, dev_id);
@@ -1392,14 +1400,13 @@ int bind_evtchn_to_irqhandler_lateeoi(evtchn_port_t evtchn,
 EXPORT_SYMBOL_GPL(bind_evtchn_to_irqhandler_lateeoi);
 
 static int bind_interdomain_evtchn_to_irqhandler_chip(
-		unsigned int remote_domain, evtchn_port_t remote_port,
+		struct xenbus_device *dev, evtchn_port_t remote_port,
 		irq_handler_t handler, unsigned long irqflags,
 		const char *devname, void *dev_id, struct irq_chip *chip)
 {
 	int irq, retval;
 
-	irq = bind_interdomain_evtchn_to_irq_chip(remote_domain, remote_port,
-						  chip);
+	irq = bind_interdomain_evtchn_to_irq_chip(dev, remote_port, chip);
 	if (irq < 0)
 		return irq;
 
@@ -1412,14 +1419,14 @@ static int bind_interdomain_evtchn_to_irqhandler_chip(
 	return irq;
 }
 
-int bind_interdomain_evtchn_to_irqhandler_lateeoi(unsigned int remote_domain,
+int bind_interdomain_evtchn_to_irqhandler_lateeoi(struct xenbus_device *dev,
 						  evtchn_port_t remote_port,
 						  irq_handler_t handler,
 						  unsigned long irqflags,
 						  const char *devname,
 						  void *dev_id)
 {
-	return bind_interdomain_evtchn_to_irqhandler_chip(remote_domain,
+	return bind_interdomain_evtchn_to_irqhandler_chip(dev,
 				remote_port, handler, irqflags, devname,
 				dev_id, &xen_lateeoi_chip);
 }
@@ -1691,7 +1698,7 @@ void rebind_evtchn_irq(evtchn_port_t evtchn, int irq)
 	   so there should be a proper type */
 	BUG_ON(info->type == IRQT_UNBOUND);
 
-	(void)xen_irq_info_evtchn_setup(irq, evtchn);
+	(void)xen_irq_info_evtchn_setup(irq, evtchn, NULL);
 
 	mutex_unlock(&irq_mapping_update_lock);
 
diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
index a7d293fa8d14..b47fd8435061 100644
--- a/drivers/xen/pvcalls-back.c
+++ b/drivers/xen/pvcalls-back.c
@@ -348,7 +348,7 @@ static struct sock_mapping *pvcalls_new_active_socket(
 	map->bytes = page;
 
 	ret = bind_interdomain_evtchn_to_irqhandler_lateeoi(
-			fedata->dev->otherend_id, evtchn,
+			fedata->dev, evtchn,
 			pvcalls_back_conn_event, 0, "pvcalls-backend", map);
 	if (ret < 0)
 		goto out;
@@ -948,7 +948,7 @@ static int backend_connect(struct xenbus_device *dev)
 		goto error;
 	}
 
-	err = bind_interdomain_evtchn_to_irq_lateeoi(dev->otherend_id, evtchn);
+	err = bind_interdomain_evtchn_to_irq_lateeoi(dev, evtchn);
 	if (err < 0)
 		goto error;
 	fedata->irq = err;
diff --git a/drivers/xen/xen-pciback/xenbus.c b/drivers/xen/xen-pciback/xenbus.c
index e7c692cfb2cf..5188f02e75fb 100644
--- a/drivers/xen/xen-pciback/xenbus.c
+++ b/drivers/xen/xen-pciback/xenbus.c
@@ -124,7 +124,7 @@ static int xen_pcibk_do_attach(struct xen_pcibk_device *pdev, int gnt_ref,
 	pdev->sh_info = vaddr;
 
 	err = bind_interdomain_evtchn_to_irqhandler_lateeoi(
-		pdev->xdev->otherend_id, remote_evtchn, xen_pcibk_handle_event,
+		pdev->xdev, remote_evtchn, xen_pcibk_handle_event,
 		0, DRV_NAME, pdev);
 	if (err < 0) {
 		xenbus_dev_fatal(pdev->xdev, err,
diff --git a/drivers/xen/xen-scsiback.c b/drivers/xen/xen-scsiback.c
index 862162dca33c..8b59897b2df9 100644
--- a/drivers/xen/xen-scsiback.c
+++ b/drivers/xen/xen-scsiback.c
@@ -799,7 +799,7 @@ static int scsiback_init_sring(struct vscsibk_info *info, grant_ref_t ring_ref,
 	sring = (struct vscsiif_sring *)area;
 	BACK_RING_INIT(&info->ring, sring, PAGE_SIZE);
 
-	err = bind_interdomain_evtchn_to_irq_lateeoi(info->domid, evtchn);
+	err = bind_interdomain_evtchn_to_irq_lateeoi(info->dev, evtchn);
 	if (err < 0)
 		goto unmap_page;
 
diff --git a/include/xen/events.h b/include/xen/events.h
index 8ec418e30c7f..c204262d9fc2 100644
--- a/include/xen/events.h
+++ b/include/xen/events.h
@@ -12,10 +12,11 @@
 #include <asm/xen/hypercall.h>
 #include <asm/xen/events.h>
 
+struct xenbus_device;
+
 unsigned xen_evtchn_nr_channels(void);
 
 int bind_evtchn_to_irq(evtchn_port_t evtchn);
-int bind_evtchn_to_irq_lateeoi(evtchn_port_t evtchn);
 int bind_evtchn_to_irqhandler(evtchn_port_t evtchn,
 			      irq_handler_t handler,
 			      unsigned long irqflags, const char *devname,
@@ -35,9 +36,9 @@ int bind_ipi_to_irqhandler(enum ipi_vector ipi,
 			   unsigned long irqflags,
 			   const char *devname,
 			   void *dev_id);
-int bind_interdomain_evtchn_to_irq_lateeoi(unsigned int remote_domain,
+int bind_interdomain_evtchn_to_irq_lateeoi(struct xenbus_device *dev,
 					   evtchn_port_t remote_port);
-int bind_interdomain_evtchn_to_irqhandler_lateeoi(unsigned int remote_domain,
+int bind_interdomain_evtchn_to_irqhandler_lateeoi(struct xenbus_device *dev,
 						  evtchn_port_t remote_port,
 						  irq_handler_t handler,
 						  unsigned long irqflags,
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Sat Feb 06 10:50:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 06 Feb 2021 10:50:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82126.151834 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8LAD-0006BJ-UE; Sat, 06 Feb 2021 10:50:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82126.151834; Sat, 06 Feb 2021 10:50:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8LAD-0006Al-JZ; Sat, 06 Feb 2021 10:50:05 +0000
Received: by outflank-mailman (input) for mailman id 82126;
 Sat, 06 Feb 2021 10:50:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gVlU=HI=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1l8LAB-0005VY-Ir
 for xen-devel@lists.xenproject.org; Sat, 06 Feb 2021 10:50:03 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1794a89d-3d36-4a2e-a112-1c1b7b15abf5;
 Sat, 06 Feb 2021 10:49:49 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 14272AEC4;
 Sat,  6 Feb 2021 10:49:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1794a89d-3d36-4a2e-a112-1c1b7b15abf5
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612608588; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=+KP59jAs7saISZqMq/sKJ7ioUVrcML6xC9JkBIsqGnM=;
	b=TsFTkQEp8+BbnvC9FjNahPV9/OMDu5YaNXS+RFx2/56hsUmQ/m0XUeQohZt+9dc2pfi1Mn
	ZU+VYw6cMqbxpx1pFdD2FvUGZB/ZCLAbWKkk91aj4rxBdlcpSAkz8uFWYX1MaivEDbUOI9
	oCisMByJFL/PfB9JBHXl3fq+xzZFIRs=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH 7/7] xen/evtchn: read producer index only once
Date: Sat,  6 Feb 2021 11:49:32 +0100
Message-Id: <20210206104932.29064-8-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210206104932.29064-1-jgross@suse.com>
References: <20210206104932.29064-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In evtchn_read(), use READ_ONCE() for reading the producer index in
order to prevent the compiler from generating multiple accesses.
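
For sizes up to a machine word, the kernel's READ_ONCE() essentially
boils down to a volatile access. A minimal stand-alone approximation
of the pattern being fixed (names are stand-ins for the evtchn ring
fields):

```c
/* Simplified READ_ONCE(): the volatile qualifier forces the compiler
 * to emit exactly one load instead of possibly re-reading the memory. */
#define READ_ONCE(x) (*(const volatile __typeof__(x) *)&(x))

static unsigned int ring_prod;	/* stand-in for u->ring_prod */

/* Mirrors the evtchn_read() pattern: snapshot the producer index once,
 * then compare it against the consumer index. */
static int ring_has_data(unsigned int ring_cons)
{
	unsigned int p = READ_ONCE(ring_prod);

	return ring_cons != p;
}
```

Without READ_ONCE(), a concurrently updated ring_prod could be loaded
twice and yield two different values within one check.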

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 drivers/xen/evtchn.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/xen/evtchn.c b/drivers/xen/evtchn.c
index 421382c73d88..f6b199b597bf 100644
--- a/drivers/xen/evtchn.c
+++ b/drivers/xen/evtchn.c
@@ -211,7 +211,7 @@ static ssize_t evtchn_read(struct file *file, char __user *buf,
 			goto unlock_out;
 
 		c = u->ring_cons;
-		p = u->ring_prod;
+		p = READ_ONCE(u->ring_prod);
 		if (c != p)
 			break;
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Sat Feb 06 10:50:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 06 Feb 2021 10:50:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82127.151851 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8LAH-0006WC-Jg; Sat, 06 Feb 2021 10:50:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82127.151851; Sat, 06 Feb 2021 10:50:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8LAH-0006Vv-Ep; Sat, 06 Feb 2021 10:50:09 +0000
Received: by outflank-mailman (input) for mailman id 82127;
 Sat, 06 Feb 2021 10:50:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gVlU=HI=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1l8LAG-0005VY-J0
 for xen-devel@lists.xenproject.org; Sat, 06 Feb 2021 10:50:08 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 379e30f7-dd5d-4dd4-ae1b-01c2cce109c6;
 Sat, 06 Feb 2021 10:49:49 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A8F2EAE55;
 Sat,  6 Feb 2021 10:49:47 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 379e30f7-dd5d-4dd4-ae1b-01c2cce109c6
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612608587; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=+rhXTWkgbyXvV9gClvwDbgn20u/kYNFxY4GB9OVP82g=;
	b=RGpZWqhB1MjUDVT7qV5Vbs65zV+xJItgY6MdTYnW6OU8SQSOPBnnf+yTZvjl+m/ff6/cm7
	/sTtzQxlR3CrNaSip17pGUzElz0mgWAMvehJRNUMQ6YaeEXd2FgGbPTd9diut4ODNqVso9
	3rFOpioQlUVEVE6RoWIRlOalao/MEgE=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH 5/7] xen/events: add per-xenbus device event statistics and settings
Date: Sat,  6 Feb 2021 11:49:30 +0100
Message-Id: <20210206104932.29064-6-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210206104932.29064-1-jgross@suse.com>
References: <20210206104932.29064-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add sysfs nodes for each xenbus device showing event statistics (number
of events and spurious events, number of associated event channels),
and for setting a spurious event threshold for cases where a frontend
is sending too many events without being deliberately rogue.
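
The reworked back-off in xen_irq_lateeoi_locked() only starts delaying
EOIs once the spurious count exceeds the per-device threshold. The
delay computation can be checked in isolation (the HZ value here is
illustrative; the kernel's depends on CONFIG_HZ):

```c
#define HZ 250	/* illustrative */

/* Mirror the patched delay logic: no delay until spurious_cnt exceeds
 * the threshold, then exponential back-off capped at one second (HZ). */
static unsigned int eoi_delay(unsigned int spurious_cnt,
			      unsigned int threshold)
{
	unsigned int delay;

	if (spurious_cnt <= threshold)
		return 0;

	delay = 1u << (spurious_cnt - 1 - threshold);
	return delay > HZ ? HZ : delay;
}
```

With the default threshold of 1 this reproduces the previous behaviour
(first delay after the second spurious event); a larger threshold lets
a device tolerate more spurious events before throttling starts.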

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 drivers/xen/events/events_base.c  | 27 ++++++++++++-
 drivers/xen/xenbus/xenbus_probe.c | 66 +++++++++++++++++++++++++++++++
 include/xen/xenbus.h              |  7 ++++
 3 files changed, 98 insertions(+), 2 deletions(-)

diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index 8c620c11e32a..d0c57c5664c0 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -327,6 +327,8 @@ static int xen_irq_info_evtchn_setup(unsigned irq,
 
 	ret = xen_irq_info_common_setup(info, irq, IRQT_EVTCHN, evtchn, 0);
 	info->u.interdomain = dev;
+	if (dev)
+		atomic_inc(&dev->event_channels);
 
 	return ret;
 }
@@ -572,18 +574,28 @@ static void xen_irq_lateeoi_locked(struct irq_info *info, bool spurious)
 		return;
 
 	if (spurious) {
+		struct xenbus_device *dev = info->u.interdomain;
+		unsigned int threshold = 1;
+
+		if (dev && dev->spurious_threshold)
+			threshold = dev->spurious_threshold;
+
 		if ((1 << info->spurious_cnt) < (HZ << 2)) {
 			if (info->spurious_cnt != 0xFF)
 				info->spurious_cnt++;
 		}
-		if (info->spurious_cnt > 1) {
-			delay = 1 << (info->spurious_cnt - 2);
+		if (info->spurious_cnt > threshold) {
+			delay = 1 << (info->spurious_cnt - 1 - threshold);
 			if (delay > HZ)
 				delay = HZ;
 			if (!info->eoi_time)
 				info->eoi_cpu = smp_processor_id();
 			info->eoi_time = get_jiffies_64() + delay;
+			if (dev)
+				atomic_add(delay, &dev->jiffies_eoi_delayed);
 		}
+		if (dev)
+			atomic_inc(&dev->spurious_events);
 	} else {
 		info->spurious_cnt = 0;
 	}
@@ -920,6 +932,7 @@ static void __unbind_from_irq(unsigned int irq)
 
 	if (VALID_EVTCHN(evtchn)) {
 		unsigned int cpu = cpu_from_irq(irq);
+		struct xenbus_device *dev;
 
 		xen_evtchn_close(evtchn);
 
@@ -930,6 +943,11 @@ static void __unbind_from_irq(unsigned int irq)
 		case IRQT_IPI:
 			per_cpu(ipi_to_irq, cpu)[ipi_from_irq(irq)] = -1;
 			break;
+		case IRQT_EVTCHN:
+			dev = info->u.interdomain;
+			if (dev)
+				atomic_dec(&dev->event_channels);
+			break;
 		default:
 			break;
 		}
@@ -1593,6 +1611,7 @@ void handle_irq_for_port(evtchn_port_t port, struct evtchn_loop_ctrl *ctrl)
 {
 	int irq;
 	struct irq_info *info;
+	struct xenbus_device *dev;
 
 	irq = get_evtchn_to_irq(port);
 	if (irq == -1)
@@ -1622,6 +1641,10 @@ void handle_irq_for_port(evtchn_port_t port, struct evtchn_loop_ctrl *ctrl)
 
 	info = info_for_irq(irq);
 
+	dev = (info->type == IRQT_EVTCHN) ? info->u.interdomain : NULL;
+	if (dev)
+		atomic_inc(&dev->events);
+
 	if (ctrl->defer_eoi) {
 		info->eoi_cpu = smp_processor_id();
 		info->irq_epoch = __this_cpu_read(irq_epoch);
diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c
index 18ffd0551b54..9494ecad3c92 100644
--- a/drivers/xen/xenbus/xenbus_probe.c
+++ b/drivers/xen/xenbus/xenbus_probe.c
@@ -206,6 +206,65 @@ void xenbus_otherend_changed(struct xenbus_watch *watch,
 }
 EXPORT_SYMBOL_GPL(xenbus_otherend_changed);
 
+#define XENBUS_SHOW_STAT(name)						\
+static ssize_t show_##name(struct device *_dev,				\
+			   struct device_attribute *attr,		\
+			   char *buf)					\
+{									\
+	struct xenbus_device *dev = to_xenbus_device(_dev);		\
+									\
+	return sprintf(buf, "%d\n", atomic_read(&dev->name));		\
+}									\
+static DEVICE_ATTR(name, 0444, show_##name, NULL)
+
+XENBUS_SHOW_STAT(event_channels);
+XENBUS_SHOW_STAT(events);
+XENBUS_SHOW_STAT(spurious_events);
+XENBUS_SHOW_STAT(jiffies_eoi_delayed);
+
+static ssize_t show_spurious_threshold(struct device *_dev,
+				       struct device_attribute *attr,
+				       char *buf)
+{
+	struct xenbus_device *dev = to_xenbus_device(_dev);
+
+	return sprintf(buf, "%d\n", dev->spurious_threshold);
+}
+
+static ssize_t set_spurious_threshold(struct device *_dev,
+				      struct device_attribute *attr,
+				      const char *buf, size_t count)
+{
+	struct xenbus_device *dev = to_xenbus_device(_dev);
+	unsigned int val;
+	ssize_t ret;
+
+	ret = kstrtouint(buf, 0, &val);
+	if (ret)
+		return ret;
+
+	dev->spurious_threshold = val;
+
+	return count;
+}
+
+static DEVICE_ATTR(spurious_threshold, 0644, show_spurious_threshold,
+		   set_spurious_threshold);
+
+static struct attribute *xenbus_attrs[] = {
+	&dev_attr_event_channels.attr,
+	&dev_attr_events.attr,
+	&dev_attr_spurious_events.attr,
+	&dev_attr_jiffies_eoi_delayed.attr,
+	&dev_attr_spurious_threshold.attr,
+	NULL
+};
+
+static const struct attribute_group xenbus_group = {
+	.name = "xenbus",
+	.attrs = xenbus_attrs,
+};
+
 int xenbus_dev_probe(struct device *_dev)
 {
 	struct xenbus_device *dev = to_xenbus_device(_dev);
@@ -253,6 +312,11 @@ int xenbus_dev_probe(struct device *_dev)
 		return err;
 	}
 
+	dev->spurious_threshold = 1;
+	if (sysfs_create_group(&dev->dev.kobj, &xenbus_group))
+		dev_warn(&dev->dev, "sysfs_create_group on %s failed.\n",
+			 dev->nodename);
+
 	return 0;
 fail_put:
 	module_put(drv->driver.owner);
@@ -269,6 +333,8 @@ int xenbus_dev_remove(struct device *_dev)
 
 	DPRINTK("%s", dev->nodename);
 
+	sysfs_remove_group(&dev->dev.kobj, &xenbus_group);
+
 	free_otherend_watch(dev);
 
 	if (drv->remove) {
diff --git a/include/xen/xenbus.h b/include/xen/xenbus.h
index 2c43b0ef1e4d..13ee375a1f05 100644
--- a/include/xen/xenbus.h
+++ b/include/xen/xenbus.h
@@ -88,6 +88,13 @@ struct xenbus_device {
 	struct completion down;
 	struct work_struct work;
 	struct semaphore reclaim_sem;
+
+	/* Event channel based statistics and settings. */
+	atomic_t event_channels;
+	atomic_t events;
+	atomic_t spurious_events;
+	atomic_t jiffies_eoi_delayed;
+	unsigned int spurious_threshold;
 };
 
 static inline struct xenbus_device *to_xenbus_device(struct device *dev)
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Sat Feb 06 11:09:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 06 Feb 2021 11:09:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82138.151863 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8LSk-0008Ml-Au; Sat, 06 Feb 2021 11:09:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82138.151863; Sat, 06 Feb 2021 11:09:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8LSk-0008Me-7i; Sat, 06 Feb 2021 11:09:14 +0000
Received: by outflank-mailman (input) for mailman id 82138;
 Sat, 06 Feb 2021 11:09:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1l8LSi-0008MZ-6B
 for xen-devel@lists.xenproject.org; Sat, 06 Feb 2021 11:09:12 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l8LSg-000721-Mx; Sat, 06 Feb 2021 11:09:10 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l8LSg-0000z4-B6; Sat, 06 Feb 2021 11:09:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=e5i6NxZrM2fGvPLAGwDz2aKPTGuJQtRwfz0sok4jvOY=; b=xX9U2IdXnrSNvwEpXe7TmLKOES
	5SmMXQE405Mui+JCD+T8SbFGJ9fzhiQXw7tMJAZ7EWGWqcNr5H6C2k4/WaKjel0dmcLfKhJ7Q5lyT
	GRXfpjTnAKJ14QNqN+zTE5HDkkB01kTYf9d7fVivNWmgqlHS3U6QOjM73URMq5QK4dVE=;
Subject: Re: [PATCH] xen/arm: fix gnttab_need_iommu_mapping
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: lucmiccio@gmail.com, xen-devel@lists.xenproject.org,
 Bertrand.Marquis@arm.com, Volodymyr_Babchuk@epam.com
References: <alpine.DEB.2.21.2102051604320.29047@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <247f517e-a283-12c8-2ccb-3915cda4ac2e@xen.org>
Date: Sat, 6 Feb 2021 11:09:08 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2102051604320.29047@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 06/02/2021 00:38, Stefano Stabellini wrote:
> Commit 91d4eca7add broke gnttab_need_iommu_mapping on ARM.

Doh :/.

> The offending chunk is:
>
>   #define gnttab_need_iommu_mapping(d)                    \
> -    (is_domain_direct_mapped(d) && need_iommu(d))
> +    (is_domain_direct_mapped(d) && need_iommu_pt_sync(d))
> 
> On ARM we need gnttab_need_iommu_mapping to be true for dom0 when it is
> directly mapped, like the old check did,

This is not entirely correct: we only need gnttab_need_iommu_mapping() 
to return true when the domain is direct mapped **and** the IOMMU is 
enabled for the domain.

> but the new check is always
> false.
>
> In fact, need_iommu_pt_sync is defined as dom_iommu(d)->need_sync and
> need_sync is set as:
> 
>      if ( !is_hardware_domain(d) || iommu_hwdom_strict )
>          hd->need_sync = !iommu_use_hap_pt(d);
>
> iommu_hwdom_strict is actually supposed to be ignored on ARM, see the
> definition in docs/misc/xen-command-line.pandoc:
> 
>      This option is hardwired to true for x86 PVH dom0's (as RAM belonging to
>      other domains in the system don't live in a compatible address space), and
>      is ignored for ARM.
> 
> But aside from that, the issue is that iommu_use_hap_pt(d) is true,
> hence, hd->need_sync is false, and gnttab_need_iommu_mapping(d) is false
> too.

need_sync means that you have a separate IOMMU page-table which needs 
to be updated for every change.

iommu_use_hap_pt means the page-table used by the IOMMU is the P2M itself.

On Arm, we always share the P2M with the IOMMU.

> 
> As a consequence, when using PV network from a domU on a system where
> IOMMU is on from Dom0, I get:
> 
> (XEN) smmu: /smmu@fd800000: Unhandled context fault: fsr=0x402, iova=0x8424cb148, fsynr=0xb0001, cb=0
> [   68.290307] macb ff0e0000.ethernet eth0: DMA bus error: HRESP not OK
> 
> The fix is to go back to the old implementation of
> gnttab_need_iommu_mapping.  However, we don't even need to specify &&
> need_iommu(d) since we don't actually need to check for the IOMMU to be
> enabled (iommu_map does it for us at the beginning of the function.)

gnttab_need_iommu_mapping() doesn't only gate the 
iommu_legacy_{,un}map() calls; it also decides whether we need to hold 
both the local and remote grant-table write locks for the duration of 
the operation (see double_gt_lock()).

I'd like to avoid the requirement to hold the double_gt_lock() if the 
domain is not going to use the IOMMU.

> 
> This fix is preferable to changing the implementation of need_sync or
> iommu_use_hap_pt because "need_sync" is not really the reason why we
> want gnttab_need_iommu_mapping to return true.

In 4.13, we introduced is_iommu_enabled() (see commit c45f59292367 
"domain: introduce XEN_DOMCTL_CDF_iommu flag") that should do the job 
for this patch.

For 4.12, we could use iommu_enabled, as in general dom0 will use an 
IOMMU if Xen enables it globally. Note that 4.12 has been in 
security-only support since last October (see [1]), so it would be up 
to downstreams to patch their own trees.

> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> Backport: 4.12+

I would suggest using a Fixes: tag if you know the exact commit. This 
would make it easier for downstream users if they backported the 
offending patch.

> 
> ---
> 
> It is incredible that it was missed for this long, but it takes a full
> PV drivers test using DMA from a non-coherent device to trigger it, e.g.
> wget from a domU to an HTTP server on a different machine, ping or
> connections to dom0 won't trigger the bug.

Great finding!

> 
> It is interesting that given that IOMMU is on for dom0, Linux could
> have just avoided using swiotlb-xen and everything would have just
> worked. It is worth considering introducing a feature flag (e.g.
> XENFEAT_ARM_dom0_iommu) to let dom0 know that the IOMMU is on and
> swiotlb-xen is not necessary.
> Then Linux can avoid initializing
> swiotlb-xen and just rely on the IOMMU for translation.

The presence of an IOMMU on the system doesn't necessarily indicate 
that all the devices will be protected by it. We can only turn off 
swiotlb-xen when we know that **all** the devices are protected.

Therefore a simple feature flag is not going to do the job. Instead, we 
need to tell Linux which devices have been protected by an IOMMU. This 
is something I attempted to do a few years ago (see [2]).

In addition to that, we also need to know whether Linux is capable of 
disabling swiotlb-xen. This would allow us to turn off all the 
mitigations we introduced in Xen for direct-mapped domains. One 
possibility would be to introduce an ELF note for Arm (see [3]).

> 
> diff --git a/xen/include/asm-arm/grant_table.h b/xen/include/asm-arm/grant_table.h
> index 6f585b1538..2a154d1851 100644
> --- a/xen/include/asm-arm/grant_table.h
> +++ b/xen/include/asm-arm/grant_table.h
> @@ -88,8 +88,7 @@ int replace_grant_host_mapping(unsigned long gpaddr, mfn_t mfn,
>   #define gnttab_status_gfn(d, t, i)                                       \
>       (((i) >= nr_status_frames(t)) ? INVALID_GFN : (t)->arch.status_gfn[i])
>   
> -#define gnttab_need_iommu_mapping(d)                    \
> -    (is_domain_direct_mapped(d) && need_iommu_pt_sync(d))
> +#define gnttab_need_iommu_mapping(d)  (is_domain_direct_mapped(d))
>   
>   #endif /* __ASM_GRANT_TABLE_H__ */
>   /*
> 

Cheers,

[1] https://xenbits.xen.org/docs/unstable/support-matrix.html
[2] 
https://lists.infradead.org/pipermail/linux-arm-kernel/2014-February/234523.html
[3] 
https://patchwork.kernel.org/project/linux-arm-kernel/patch/5342AF59.3030405@linaro.org/


-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat Feb 06 11:09:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 06 Feb 2021 11:09:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82139.151876 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8LT3-0008Qt-Kj; Sat, 06 Feb 2021 11:09:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82139.151876; Sat, 06 Feb 2021 11:09:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8LT3-0008Qm-GX; Sat, 06 Feb 2021 11:09:33 +0000
Received: by outflank-mailman (input) for mailman id 82139;
 Sat, 06 Feb 2021 11:09:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8LT2-0008QU-5l; Sat, 06 Feb 2021 11:09:32 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8LT2-000728-1U; Sat, 06 Feb 2021 11:09:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8LT1-0003Zm-Np; Sat, 06 Feb 2021 11:09:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l8LT1-00059H-NH; Sat, 06 Feb 2021 11:09:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=XU4e+tT1Nwe84ki074vuNwcSKcYrsAbDJgaSQ5Tf7uw=; b=1O3TxvqCYso1SICweXFhb7GZpg
	MJP7/ju/MMiDjZmkB34xmeVopYev1djy08/JL9oUJ5+bupAszbllvxmYHbMVPO8YK4sAShas0QqrV
	LkinQTbbe+ho/6onjIkgxgknNlMJLyqd5AcuVx1AAUlhLQYYlfpAHvJVI91i/acc2hIk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159036-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 159036: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:guest-start:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7
X-Osstest-Versions-That:
    xen=d203dbd69f1a02577dd6fe571d72beb980c548a6
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 06 Feb 2021 11:09:31 +0000

flight 159036 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159036/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     14 guest-start              fail REGR. vs. 159013

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 159013
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 159013
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 159013
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 159013
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 159013
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 159013
 test-armhf-armhf-libvirt-raw 13 guest-start                  fail  like 159013
 test-armhf-armhf-xl-vhd      13 guest-start                  fail  like 159013
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 159013
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 159013
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 159013
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 159013
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7
baseline version:
 xen                  d203dbd69f1a02577dd6fe571d72beb980c548a6

Last test of basis   159013  2021-02-04 13:58:42 Z    1 days
Testing same since   159036  2021-02-05 08:46:54 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Edwin Török <edvin.torok@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Manuel Bouyer <bouyer@netbsd.org>
  Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   d203dbd69f..ff522e2e91  ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7 -> master


From xen-devel-bounces@lists.xenproject.org Sat Feb 06 11:20:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 06 Feb 2021 11:20:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82156.151935 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8LdV-00023W-IN; Sat, 06 Feb 2021 11:20:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82156.151935; Sat, 06 Feb 2021 11:20:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8LdV-00023P-FC; Sat, 06 Feb 2021 11:20:21 +0000
Received: by outflank-mailman (input) for mailman id 82156;
 Sat, 06 Feb 2021 11:20:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1l8LdU-00023K-Gs
 for xen-devel@lists.xenproject.org; Sat, 06 Feb 2021 11:20:20 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l8LdS-0007F2-Ew; Sat, 06 Feb 2021 11:20:18 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l8LdS-0001vv-5l; Sat, 06 Feb 2021 11:20:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=cQMVVJSJD0Yb9XwMSYFH2sar2ohLPP0jLc971S/sMZo=; b=ffsul8eGaVzjXDWs7EhAD5EEVQ
	uJn1mS8YMhPvBFddMAHUIXJCNzkhcLZqkJH+YtdQ17z4s3/BDSWyZlFmJ/sb7QnX1sPMn0cAoqMMf
	tH7LPW55otKCCBIBm8Yhdue3YtzZoFlozVtvzCTF9e+OrAvDEMqKM5+fiT3jmHR5uq3s=;
Subject: Re: [PATCH 1/7] xen/events: reset affinity of 2-level event initially
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, stable@vger.kernel.org
References: <20210206104932.29064-1-jgross@suse.com>
 <20210206104932.29064-2-jgross@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <f89567cf-f954-0d97-087e-5e31bfa6d49d@xen.org>
Date: Sat, 6 Feb 2021 11:20:16 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <20210206104932.29064-2-jgross@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 06/02/2021 10:49, Juergen Gross wrote:
> When creating a new event channel with 2-level events the affinity
> needs to be reset initially in order to avoid using an old affinity
> from earlier usage of the event channel port.
> 
> The same applies to the affinity when onlining a vcpu: all old
> affinity settings for this vcpu must be reset. As percpu events get
> initialized before the percpu event channel hook is called,
> resetting of the affinities happens after offlining a vcpu (this is
> working, as initial percpu memory is zeroed out).
> 
> Cc: stable@vger.kernel.org
> Reported-by: Julien Grall <julien@xen.org>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>   drivers/xen/events/events_2l.c | 20 ++++++++++++++++++++
>   1 file changed, 20 insertions(+)
> 
> diff --git a/drivers/xen/events/events_2l.c b/drivers/xen/events/events_2l.c
> index da87f3a1e351..23217940144a 100644
> --- a/drivers/xen/events/events_2l.c
> +++ b/drivers/xen/events/events_2l.c
> @@ -47,6 +47,16 @@ static unsigned evtchn_2l_max_channels(void)
>   	return EVTCHN_2L_NR_CHANNELS;
>   }
>   
> +static int evtchn_2l_setup(evtchn_port_t evtchn)
> +{
> +	unsigned int cpu;
> +
> +	for_each_online_cpu(cpu)
> +		clear_bit(evtchn, BM(per_cpu(cpu_evtchn_mask, cpu)));

The bit corresponding to the event channel can only be set on a single 
CPU. Could we avoid the loop and instead clear the bit while closing the 
port?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat Feb 06 12:09:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 06 Feb 2021 12:09:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82172.151947 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8MOy-0006PH-Iw; Sat, 06 Feb 2021 12:09:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82172.151947; Sat, 06 Feb 2021 12:09:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8MOy-0006PA-Ei; Sat, 06 Feb 2021 12:09:24 +0000
Received: by outflank-mailman (input) for mailman id 82172;
 Sat, 06 Feb 2021 12:09:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gVlU=HI=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1l8MOw-0006P5-K5
 for xen-devel@lists.xenproject.org; Sat, 06 Feb 2021 12:09:22 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4433bc7b-6c5a-4b6e-a15a-37a75edc00ed;
 Sat, 06 Feb 2021 12:09:21 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C3E32ACD4;
 Sat,  6 Feb 2021 12:09:20 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4433bc7b-6c5a-4b6e-a15a-37a75edc00ed
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612613360; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=bLVpkjvwwsqTJMAhg+ODWS/f0rJ20lZm7EB2rlXndJQ=;
	b=nETloEJqeRb1B8noD3G6MK5p7CmyF2qhrJEJwcLTMHDgWqP4kHYsmvb53bXbtY2eXKkVXu
	p3rQeHDirXputjWpw4GiAr6r3qxh31F/2v4tj+3DGeiGTPh0TGsujNdz5+P9gInc2QKU4m
	HFPXblDYLtFGC1AdcftZegmDOKKOUcs=
Subject: Re: [PATCH 1/7] xen/events: reset affinity of 2-level event initially
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, stable@vger.kernel.org
References: <20210206104932.29064-1-jgross@suse.com>
 <20210206104932.29064-2-jgross@suse.com>
 <f89567cf-f954-0d97-087e-5e31bfa6d49d@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <d2017caa-0ea8-ae9d-d9f6-45be3da20688@suse.com>
Date: Sat, 6 Feb 2021 13:09:19 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <f89567cf-f954-0d97-087e-5e31bfa6d49d@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="k44Pv081BDAGDRg5GIN3FYf23bN8FtnTt"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--k44Pv081BDAGDRg5GIN3FYf23bN8FtnTt
Content-Type: multipart/mixed; boundary="sWZapDRceH0piXqAOWoHExChDqfbHzOt6";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, stable@vger.kernel.org
Message-ID: <d2017caa-0ea8-ae9d-d9f6-45be3da20688@suse.com>
Subject: Re: [PATCH 1/7] xen/events: reset affinity of 2-level event initially
References: <20210206104932.29064-1-jgross@suse.com>
 <20210206104932.29064-2-jgross@suse.com>
 <f89567cf-f954-0d97-087e-5e31bfa6d49d@xen.org>
In-Reply-To: <f89567cf-f954-0d97-087e-5e31bfa6d49d@xen.org>

--sWZapDRceH0piXqAOWoHExChDqfbHzOt6
Content-Type: multipart/mixed;
 boundary="------------9361F2EC9288E604E62BA699"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------9361F2EC9288E604E62BA699
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 06.02.21 12:20, Julien Grall wrote:
> Hi Juergen,
>
> On 06/02/2021 10:49, Juergen Gross wrote:
>> When creating a new event channel with 2-level events the affinity
>> needs to be reset initially in order to avoid using an old affinity
>> from earlier usage of the event channel port.
>>
>> The same applies to the affinity when onlining a vcpu: all old
>> affinity settings for this vcpu must be reset. As percpu events get
>> initialized before the percpu event channel hook is called,
>> resetting of the affinities happens after offlining a vcpu (this is
>> working, as initial percpu memory is zeroed out).
>>
>> Cc: stable@vger.kernel.org
>> Reported-by: Julien Grall <julien@xen.org>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>>   drivers/xen/events/events_2l.c | 20 ++++++++++++++++++++
>>   1 file changed, 20 insertions(+)
>>
>> diff --git a/drivers/xen/events/events_2l.c
>> b/drivers/xen/events/events_2l.c
>> index da87f3a1e351..23217940144a 100644
>> --- a/drivers/xen/events/events_2l.c
>> +++ b/drivers/xen/events/events_2l.c
>> @@ -47,6 +47,16 @@ static unsigned evtchn_2l_max_channels(void)
>>       return EVTCHN_2L_NR_CHANNELS;
>>   }
>> +static int evtchn_2l_setup(evtchn_port_t evtchn)
>> +{
>> +    unsigned int cpu;
>> +
>> +    for_each_online_cpu(cpu)
>> +        clear_bit(evtchn, BM(per_cpu(cpu_evtchn_mask, cpu)));
>
> The bit corresponding to the event channel can only be set on a single
> CPU. Could we avoid the loop and instead clear the bit while closing the
> port?

This would need another callback.


Juergen


--------------9361F2EC9288E604E62BA699--

--sWZapDRceH0piXqAOWoHExChDqfbHzOt6--

--k44Pv081BDAGDRg5GIN3FYf23bN8FtnTt--


From xen-devel-bounces@lists.xenproject.org Sat Feb 06 12:19:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 06 Feb 2021 12:19:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82178.151959 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8MYi-0007X9-H2; Sat, 06 Feb 2021 12:19:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82178.151959; Sat, 06 Feb 2021 12:19:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8MYi-0007X2-Dp; Sat, 06 Feb 2021 12:19:28 +0000
Received: by outflank-mailman (input) for mailman id 82178;
 Sat, 06 Feb 2021 12:19:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1l8MYh-0007Wx-En
 for xen-devel@lists.xenproject.org; Sat, 06 Feb 2021 12:19:27 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l8MYf-0008BZ-0C; Sat, 06 Feb 2021 12:19:25 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l8MYe-0005pl-Ml; Sat, 06 Feb 2021 12:19:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=e3DoAeL+lOAUxA+lEznGlmmjujWcwFYkHH+iAB3YFd8=; b=4rbcp6mGnnPqoWsQ2dSADRlhJb
	msnMqOlsGoUcFiM4mmlsLdcW1GDhMQWw3xTBjBDyrmdIykTolAfFqm0lCx6QRYuMm6byzvsQJiYqV
	wT+UBehO42aD2R0/DbrTJ8fOjw5qAU5VJKeFbkcCX4w3szma0Ec/mtfb7bXPXkNa+YKU=;
Subject: Re: [PATCH 1/7] xen/events: reset affinity of 2-level event initially
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, stable@vger.kernel.org
References: <20210206104932.29064-1-jgross@suse.com>
 <20210206104932.29064-2-jgross@suse.com>
 <f89567cf-f954-0d97-087e-5e31bfa6d49d@xen.org>
 <d2017caa-0ea8-ae9d-d9f6-45be3da20688@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <d856a721-4f5f-8f8f-bddd-810213daac9c@xen.org>
Date: Sat, 6 Feb 2021 12:19:22 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <d2017caa-0ea8-ae9d-d9f6-45be3da20688@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit



On 06/02/2021 12:09, Jürgen Groß wrote:
> On 06.02.21 12:20, Julien Grall wrote:
>> Hi Juergen,
>>
>> On 06/02/2021 10:49, Juergen Gross wrote:
>>> When creating a new event channel with 2-level events the affinity
>>> needs to be reset initially in order to avoid using an old affinity
>>> from earlier usage of the event channel port.
>>>
>>> The same applies to the affinity when onlining a vcpu: all old
>>> affinity settings for this vcpu must be reset. As percpu events get
>>> initialized before the percpu event channel hook is called,
>>> resetting of the affinities happens after offlining a vcpu (this is
>>> working, as initial percpu memory is zeroed out).
>>>
>>> Cc: stable@vger.kernel.org
>>> Reported-by: Julien Grall <julien@xen.org>
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>> ---
>>>   drivers/xen/events/events_2l.c | 20 ++++++++++++++++++++
>>>   1 file changed, 20 insertions(+)
>>>
>>> diff --git a/drivers/xen/events/events_2l.c 
>>> b/drivers/xen/events/events_2l.c
>>> index da87f3a1e351..23217940144a 100644
>>> --- a/drivers/xen/events/events_2l.c
>>> +++ b/drivers/xen/events/events_2l.c
>>> @@ -47,6 +47,16 @@ static unsigned evtchn_2l_max_channels(void)
>>>       return EVTCHN_2L_NR_CHANNELS;
>>>   }
>>> +static int evtchn_2l_setup(evtchn_port_t evtchn)
>>> +{
>>> +    unsigned int cpu;
>>> +
>>> +    for_each_online_cpu(cpu)
>>> +        clear_bit(evtchn, BM(per_cpu(cpu_evtchn_mask, cpu)));
>>
>> The bit corresponding to the event channel can only be set on a single 
>> CPU. Could we avoid the loop and instead clear the bit while closing 
>> the port?
> 
> This would need another callback.

Right, this seems to be better than walking over all the CPUs every time 
just for cleaning one bit.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat Feb 06 16:59:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 06 Feb 2021 16:59:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82234.151989 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8QvB-0001TU-F9; Sat, 06 Feb 2021 16:58:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82234.151989; Sat, 06 Feb 2021 16:58:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8QvB-0001TN-BT; Sat, 06 Feb 2021 16:58:57 +0000
Received: by outflank-mailman (input) for mailman id 82234;
 Sat, 06 Feb 2021 16:58:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8Qv9-0001Kl-AA; Sat, 06 Feb 2021 16:58:55 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8Qv8-0005Bq-S1; Sat, 06 Feb 2021 16:58:54 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8Qv8-0002p5-K4; Sat, 06 Feb 2021 16:58:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l8Qv8-0007Us-JZ; Sat, 06 Feb 2021 16:58:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=o01XIaDB9zD1oChJR/ynkwebyVYWGsQ8gnlZ+YTPa+M=; b=KHEypl0rXuwyPCQ795kvBM357S
	9ATJHYuHACaGP8rv1QEcWQM3tjHtQ8910g+/vLuWqEiSg05saSBbtMc7SqSfuhIgz3R0twH5H9aQj
	QVUuyerualvRsxFyDVKhZhu9ocMtSsafcMOqA1lermkoJcCE+yv73xL/n4lCIGfanDVI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159042-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.11-testing test] 159042: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    xen-4.11-testing:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemut-win7-amd64:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemut-ws16-amd64:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemuu-ovmf-amd64:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemuu-win7-amd64:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemuu-ws16-amd64:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-xsm:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-freebsd10-amd64:<job status>:broken:regression
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:<job status>:broken:regression
    xen-4.11-testing:test-amd64-amd64-xl-qemut-ws16-amd64:<job status>:broken:regression
    xen-4.11-testing:test-amd64-amd64-xl-qemut-win7-amd64:<job status>:broken:regression
    xen-4.11-testing:test-amd64-amd64-xl-pvhv2-amd:<job status>:broken:regression
    xen-4.11-testing:test-amd64-amd64-amd64-pvgrub:<job status>:broken:regression
    xen-4.11-testing:test-amd64-amd64-libvirt-vhd:<job status>:broken:regression
    xen-4.11-testing:test-amd64-amd64-libvirt-xsm:<job status>:broken:regression
    xen-4.11-testing:test-amd64-amd64-pygrub:<job status>:broken:regression
    xen-4.11-testing:test-amd64-amd64-qemuu-freebsd11-amd64:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-shadow:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-raw:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-freebsd10-i386:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-libvirt:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-libvirt-xsm:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-qemut-rhel6hvm-amd:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-qemut-rhel6hvm-intel:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-qemuu-rhel6hvm-amd:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-qemuu-rhel6hvm-intel:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-pvshim:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemut-debianhvm-amd64:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-libvirt-xsm:host-install(5):broken:regression
    xen-4.11-testing:test-amd64-i386-qemuu-rhel6hvm-intel:host-install(5):broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemuu-ws16-amd64:host-install(5):broken:regression
    xen-4.11-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:host-install(5):broken:regression
    xen-4.11-testing:test-amd64-amd64-xl-qemut-win7-amd64:host-install(5):broken:regression
    xen-4.11-testing:test-amd64-i386-freebsd10-i386:host-install(5):broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemut-win7-amd64:host-install(5):broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64:host-install(5):broken:regression
    xen-4.11-testing:test-amd64-amd64-pygrub:host-install(5):broken:regression
    xen-4.11-testing:test-amd64-i386-xl-raw:host-install(5):broken:regression
    xen-4.11-testing:test-amd64-i386-qemut-rhel6hvm-amd:host-install(5):broken:regression
    xen-4.11-testing:test-amd64-i386-qemuu-rhel6hvm-amd:host-install(5):broken:regression
    xen-4.11-testing:test-amd64-amd64-libvirt-vhd:host-install(5):broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:host-install(5):broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemut-debianhvm-amd64:host-install(5):broken:regression
    xen-4.11-testing:test-amd64-amd64-libvirt-xsm:host-install(5):broken:regression
    xen-4.11-testing:test-amd64-i386-freebsd10-amd64:host-install(5):broken:regression
    xen-4.11-testing:test-amd64-i386-xl-xsm:host-install(5):broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:host-install(5):broken:regression
    xen-4.11-testing:test-amd64-i386-qemut-rhel6hvm-intel:host-install(5):broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:host-install(5):broken:regression
    xen-4.11-testing:test-amd64-i386-xl-pvshim:host-install(5):broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemuu-win7-amd64:host-install(5):broken:regression
    xen-4.11-testing:test-amd64-i386-libvirt:host-install(5):broken:regression
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:host-install(5):broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:host-install(5):broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemut-ws16-amd64:host-install(5):broken:regression
    xen-4.11-testing:test-amd64-amd64-amd64-pvgrub:host-install(5):broken:regression
    xen-4.11-testing:test-amd64-amd64-xl-qemut-ws16-amd64:host-install(5):broken:regression
    xen-4.11-testing:test-amd64-amd64-qemuu-freebsd11-amd64:host-install(5):broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:host-install(5):broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemuu-ovmf-amd64:host-install(5):broken:regression
    xen-4.11-testing:test-amd64-i386-xl:host-install(5):broken:regression
    xen-4.11-testing:test-amd64-i386-xl-shadow:host-install(5):broken:regression
    xen-4.11-testing:test-amd64-amd64-xl-pvhv2-amd:host-install(5):broken:regression
    xen-4.11-testing:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    xen-4.11-testing:test-armhf-armhf-libvirt-raw:guest-start:fail:regression
    xen-4.11-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=1c7d984645f9ade9b47e862b5880734ad498fea8
X-Osstest-Versions-That:
    xen=310ab79875cb705cc2c7daddff412b5a4899f8c9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 06 Feb 2021 16:58:54 +0000

flight 159042 xen-4.11-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159042/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm    <job status>    broken
 test-amd64-i386-xl-qemut-win7-amd64    <job status>                 broken
 test-amd64-i386-xl-qemut-ws16-amd64    <job status>                 broken
 test-amd64-i386-xl-qemuu-debianhvm-amd64    <job status>                broken
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow    <job status>         broken
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm    <job status>             broken
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict    <job status>    broken
 test-amd64-i386-xl-qemuu-ovmf-amd64    <job status>                 broken
 test-amd64-i386-xl-qemuu-win7-amd64    <job status>                 broken
 test-amd64-i386-xl-qemuu-ws16-amd64    <job status>                 broken
 test-amd64-i386-xl-xsm          <job status>                 broken
 test-amd64-i386-freebsd10-amd64    <job status>                 broken
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict    <job status>   broken
 test-amd64-amd64-xl-qemut-ws16-amd64    <job status>                 broken
 test-amd64-amd64-xl-qemut-win7-amd64    <job status>                 broken
 test-amd64-amd64-xl-pvhv2-amd    <job status>                 broken
 test-amd64-amd64-amd64-pvgrub    <job status>                 broken
 test-amd64-amd64-libvirt-vhd    <job status>                 broken
 test-amd64-amd64-libvirt-xsm    <job status>                 broken
 test-amd64-amd64-pygrub         <job status>                 broken
 test-amd64-amd64-qemuu-freebsd11-amd64    <job status>                 broken
 test-amd64-i386-xl-shadow       <job status>                 broken
 test-amd64-i386-xl-raw          <job status>                 broken
 test-amd64-i386-freebsd10-i386    <job status>                 broken
 test-amd64-i386-libvirt         <job status>                 broken
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm    <job status>       broken
 test-amd64-i386-libvirt-xsm     <job status>                 broken
 test-amd64-i386-qemut-rhel6hvm-amd    <job status>                 broken
 test-amd64-i386-qemut-rhel6hvm-intel    <job status>                 broken
 test-amd64-i386-qemuu-rhel6hvm-amd    <job status>                 broken
 test-amd64-i386-qemuu-rhel6hvm-intel    <job status>                 broken
 test-amd64-i386-xl              <job status>                 broken
 test-amd64-i386-xl-pvshim       <job status>                 broken
 test-amd64-i386-xl-qemut-debianhvm-amd64    <job status>                broken
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm    <job status>             broken
 test-amd64-i386-libvirt-xsm   5 host-install(5)        broken REGR. vs. 157566
 test-amd64-i386-qemuu-rhel6hvm-intel 5 host-install(5) broken REGR. vs. 157566
 test-amd64-i386-xl-qemuu-ws16-amd64  5 host-install(5) broken REGR. vs. 157566
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 5 host-install(5) broken REGR. vs. 157566
 test-amd64-amd64-xl-qemut-win7-amd64 5 host-install(5) broken REGR. vs. 157566
 test-amd64-i386-freebsd10-i386  5 host-install(5)      broken REGR. vs. 157566
 test-amd64-i386-xl-qemut-win7-amd64  5 host-install(5) broken REGR. vs. 157566
 test-amd64-i386-xl-qemuu-debianhvm-amd64 5 host-install(5) broken REGR. vs. 157566
 test-amd64-amd64-pygrub       5 host-install(5)        broken REGR. vs. 157566
 test-amd64-i386-xl-raw        5 host-install(5)        broken REGR. vs. 157566
 test-amd64-i386-qemut-rhel6hvm-amd  5 host-install(5)  broken REGR. vs. 157566
 test-amd64-i386-qemuu-rhel6hvm-amd  5 host-install(5)  broken REGR. vs. 157566
 test-amd64-amd64-libvirt-vhd  5 host-install(5)        broken REGR. vs. 157566
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 5 host-install(5) broken REGR. vs. 157566
 test-amd64-i386-xl-qemut-debianhvm-amd64 5 host-install(5) broken REGR. vs. 157566
 test-amd64-amd64-libvirt-xsm  5 host-install(5)        broken REGR. vs. 157566
 test-amd64-i386-freebsd10-amd64  5 host-install(5)     broken REGR. vs. 157566
 test-amd64-i386-xl-xsm        5 host-install(5)        broken REGR. vs. 157566
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 5 host-install(5) broken REGR. vs. 157566
 test-amd64-i386-qemut-rhel6hvm-intel 5 host-install(5) broken REGR. vs. 157566
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 5 host-install(5) broken REGR. vs. 157566
 test-amd64-i386-xl-pvshim     5 host-install(5)        broken REGR. vs. 157566
 test-amd64-i386-xl-qemuu-win7-amd64  5 host-install(5) broken REGR. vs. 157566
 test-amd64-i386-libvirt       5 host-install(5)        broken REGR. vs. 157566
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 5 host-install(5) broken REGR. vs. 157566
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 5 host-install(5) broken REGR. vs. 157566
 test-amd64-i386-xl-qemut-ws16-amd64  5 host-install(5) broken REGR. vs. 157566
 test-amd64-amd64-amd64-pvgrub  5 host-install(5)       broken REGR. vs. 157566
 test-amd64-amd64-xl-qemut-ws16-amd64 5 host-install(5) broken REGR. vs. 157566
 test-amd64-amd64-qemuu-freebsd11-amd64 5 host-install(5) broken REGR. vs. 157566
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 5 host-install(5) broken REGR. vs. 157566
 test-amd64-i386-xl-qemuu-ovmf-amd64  5 host-install(5) broken REGR. vs. 157566
 test-amd64-i386-xl            5 host-install(5)        broken REGR. vs. 157566
 test-amd64-i386-xl-shadow     5 host-install(5)        broken REGR. vs. 157566
 test-amd64-amd64-xl-pvhv2-amd  5 host-install(5)       broken REGR. vs. 157566
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 157566
 test-armhf-armhf-libvirt-raw 13 guest-start              fail REGR. vs. 157566

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157566
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157566
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157566
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157566
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  1c7d984645f9ade9b47e862b5880734ad498fea8
baseline version:
 xen                  310ab79875cb705cc2c7daddff412b5a4899f8c9

Last test of basis   157566  2020-12-15 14:05:54 Z   53 days
Failing since        159016  2021-02-04 15:05:58 Z    2 days    2 attempts
Testing same since   159042  2021-02-05 12:13:30 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           broken  
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            broken  
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         broken  
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  broken  
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  broken  
 test-amd64-amd64-libvirt-xsm                                 broken  
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  broken  
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       broken  
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                broken  
 test-amd64-i386-qemut-rhel6hvm-amd                           broken  
 test-amd64-i386-qemuu-rhel6hvm-amd                           broken  
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     broken  
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     broken  
 test-amd64-i386-freebsd10-amd64                              broken  
 test-amd64-amd64-qemuu-freebsd11-amd64                       broken  
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          broken  
 test-amd64-amd64-xl-qemut-win7-amd64                         broken  
 test-amd64-i386-xl-qemut-win7-amd64                          broken  
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          broken  
 test-amd64-amd64-xl-qemut-ws16-amd64                         broken  
 test-amd64-i386-xl-qemut-ws16-amd64                          broken  
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          broken  
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        broken  
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         broken  
 test-amd64-i386-freebsd10-i386                               broken  
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         broken  
 test-amd64-i386-qemuu-rhel6hvm-intel                         broken  
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      broken  
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                broken  
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    broken  
 test-amd64-amd64-pygrub                                      broken  
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       broken  
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              broken  
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    broken  
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 broken  
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm broken
broken-job test-amd64-i386-xl-qemut-win7-amd64 broken
broken-job test-amd64-i386-xl-qemut-ws16-amd64 broken
broken-job test-amd64-i386-xl-qemuu-debianhvm-amd64 broken
broken-job test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow broken
broken-job test-amd64-i386-xl-qemuu-debianhvm-i386-xsm broken
broken-job test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict broken
broken-job test-amd64-i386-xl-qemuu-ovmf-amd64 broken
broken-job test-amd64-i386-xl-qemuu-win7-amd64 broken
broken-job test-amd64-i386-xl-qemuu-ws16-amd64 broken
broken-job test-amd64-i386-xl-xsm broken
broken-job test-amd64-i386-freebsd10-amd64 broken
broken-job test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict broken
broken-job test-amd64-amd64-xl-qemut-ws16-amd64 broken
broken-job test-amd64-amd64-xl-qemut-win7-amd64 broken
broken-job test-amd64-amd64-xl-pvhv2-amd broken
broken-job test-amd64-amd64-amd64-pvgrub broken
broken-job test-amd64-amd64-libvirt-vhd broken
broken-job test-amd64-amd64-libvirt-xsm broken
broken-job test-amd64-amd64-pygrub broken
broken-job test-amd64-amd64-qemuu-freebsd11-amd64 broken
broken-job test-amd64-i386-xl-shadow broken
broken-job test-amd64-i386-xl-raw broken
broken-job test-amd64-i386-freebsd10-i386 broken
broken-job test-amd64-i386-libvirt broken
broken-job test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm broken
broken-job test-amd64-i386-libvirt-xsm broken
broken-job test-amd64-i386-qemut-rhel6hvm-amd broken
broken-job test-amd64-i386-qemut-rhel6hvm-intel broken
broken-job test-amd64-i386-qemuu-rhel6hvm-amd broken
broken-job test-amd64-i386-qemuu-rhel6hvm-intel broken
broken-job test-amd64-i386-xl broken
broken-job test-amd64-i386-xl-pvshim broken
broken-job test-amd64-i386-xl-qemut-debianhvm-amd64 broken
broken-job test-amd64-i386-xl-qemut-debianhvm-i386-xsm broken
broken-step test-amd64-i386-libvirt-xsm host-install(5)
broken-step test-amd64-i386-qemuu-rhel6hvm-intel host-install(5)
broken-step test-amd64-i386-xl-qemuu-ws16-amd64 host-install(5)
broken-step test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm host-install(5)
broken-step test-amd64-amd64-xl-qemut-win7-amd64 host-install(5)
broken-step test-amd64-i386-freebsd10-i386 host-install(5)
broken-step test-amd64-i386-xl-qemut-win7-amd64 host-install(5)
broken-step test-amd64-i386-xl-qemuu-debianhvm-amd64 host-install(5)
broken-step test-amd64-amd64-pygrub host-install(5)
broken-step test-amd64-i386-xl-raw host-install(5)
broken-step test-amd64-i386-qemut-rhel6hvm-amd host-install(5)
broken-step test-amd64-i386-qemuu-rhel6hvm-amd host-install(5)
broken-step test-amd64-amd64-libvirt-vhd host-install(5)
broken-step test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict host-install(5)
broken-step test-amd64-i386-xl-qemut-debianhvm-amd64 host-install(5)
broken-step test-amd64-amd64-libvirt-xsm host-install(5)
broken-step test-amd64-i386-freebsd10-amd64 host-install(5)
broken-step test-amd64-i386-xl-xsm host-install(5)
broken-step test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm host-install(5)
broken-step test-amd64-i386-qemut-rhel6hvm-intel host-install(5)
broken-step test-amd64-i386-xl-qemut-debianhvm-i386-xsm host-install(5)
broken-step test-amd64-i386-xl-pvshim host-install(5)
broken-step test-amd64-i386-xl-qemuu-win7-amd64 host-install(5)
broken-step test-amd64-i386-libvirt host-install(5)
broken-step test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict host-install(5)
broken-step test-amd64-i386-xl-qemuu-debianhvm-i386-xsm host-install(5)
broken-step test-amd64-i386-xl-qemut-ws16-amd64 host-install(5)
broken-step test-amd64-amd64-amd64-pvgrub host-install(5)
broken-step test-amd64-amd64-xl-qemut-ws16-amd64 host-install(5)
broken-step test-amd64-amd64-qemuu-freebsd11-amd64 host-install(5)
broken-step test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow host-install(5)
broken-step test-amd64-i386-xl-qemuu-ovmf-amd64 host-install(5)
broken-step test-amd64-i386-xl host-install(5)
broken-step test-amd64-i386-xl-shadow host-install(5)
broken-step test-amd64-amd64-xl-pvhv2-amd host-install(5)

Not pushing.

------------------------------------------------------------
commit 1c7d984645f9ade9b47e862b5880734ad498fea8
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Feb 5 08:54:03 2021 +0100

    x86/msr: fix handling of MSR_IA32_PERF_{STATUS/CTL} (again, part 2)
    
    X86_VENDOR_* aren't bit masks in the older trees.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit f9090d990e201a5ca045976b8ddaab9fa6ee69dd
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Feb 4 15:41:12 2021 +0100

    x86/msr: fix handling of MSR_IA32_PERF_{STATUS/CTL} (again)
    
    X86_VENDOR_* aren't bit masks in the older trees.
    
    Reported-by: James Dingwall <james@dingwall.me.uk>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sat Feb 06 18:46:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 06 Feb 2021 18:46:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82258.152010 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8SbQ-0003SS-1E; Sat, 06 Feb 2021 18:46:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82258.152010; Sat, 06 Feb 2021 18:46:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8SbP-0003SL-UR; Sat, 06 Feb 2021 18:46:39 +0000
Received: by outflank-mailman (input) for mailman id 82258;
 Sat, 06 Feb 2021 18:46:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1l8SbP-0003SG-II
 for xen-devel@lists.xenproject.org; Sat, 06 Feb 2021 18:46:39 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l8SbJ-0006wh-UU; Sat, 06 Feb 2021 18:46:33 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l8SbJ-0001KW-JY; Sat, 06 Feb 2021 18:46:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=t6RLyrgrl+6icHjlUcT9RbYfFIeBzwHbjcRTiWQRm98=; b=43EaaP4+j2FIHADQ+iNmhrZtp0
	GLLA9A9bhzBLCUTEmZpoYo6oq6x56VWfSamlcEe/O4+HBPeu8pCDytu4q9Bzkgv7ilF9il4FX1NOD
	K7jLFGXa7skQOkAsb39MmrKdRzJI8/tOs0ZLz1rkmDtI72Y9XJ/WSlFNsht6iZoWXYVs=;
Subject: Re: [PATCH 0/7] xen/events: bug fixes and some diagnostic aids
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
 netdev@vger.kernel.org, linux-scsi@vger.kernel.org
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, stable@vger.kernel.org,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Jens Axboe <axboe@kernel.dk>, Wei Liu <wei.liu@kernel.org>,
 Paul Durrant <paul@xen.org>, "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>
References: <20210206104932.29064-1-jgross@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <bd63694e-ac0c-7954-ec00-edad05f8da1c@xen.org>
Date: Sat, 6 Feb 2021 18:46:30 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <20210206104932.29064-1-jgross@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 06/02/2021 10:49, Juergen Gross wrote:
> The first three patches are fixes for XSA-332. They avoid WARN splats
> and a performance issue with interdomain events.

Thanks for helping to figure out the problem. Unfortunately, I still 
reliably see the WARN splat with the latest Linux master (1e0d27fce010) 
plus your first 3 patches.

I am using Xen 4.11 (1c7d984645f9) and dom0 is forced to use the 2L 
events ABI.

After some debugging, I think I have an idea what went wrong. The 
problem happens when the event is initially bound from vCPU0 to a 
different vCPU.

From the comment in xen_rebind_evtchn_to_cpu(), we are masking the 
event to prevent it from being delivered on an unexpected vCPU. 
However, I believe the following can happen:

vCPU0				| vCPU1
				|
				| Call xen_rebind_evtchn_to_cpu()
receive event X			|
				| mask event X
				| bind to vCPU1
<vCPU descheduled>		| unmask event X
				|
				| receive event X
				|
				| handle_edge_irq(X)
handle_edge_irq(X)		|  -> handle_irq_event()
				|   -> set IRQD_IN_PROGRESS
  -> set IRQS_PENDING		|
				|   -> evtchn_interrupt()
				|   -> clear IRQD_IN_PROGRESS
				|  -> IRQS_PENDING is set
				|  -> handle_irq_event()
				|   -> evtchn_interrupt()
				|     -> WARN()
				|

All the lateeoi handlers expect ONESHOT semantics, and 
evtchn_interrupt() doesn't tolerate any deviation.

I think the problem was introduced by 7f874a0447a9 ("xen/events: fix 
lateeoi irq acknowledgment"): previously the interrupt was disabled, 
so we wouldn't do another iteration in handle_edge_irq().

Aside from the handlers, I think it may also impact the deferred-EOI 
mitigation if a third vCPU joins the party (say vCPU A migrates the 
event from vCPU B to vCPU C). In that case info->{eoi_cpu, irq_epoch, 
eoi_time} could possibly get mangled?

For a fix, we may want to consider holding evtchn_rwlock for writing. 
Although I am not 100% sure this is going to prevent everything.

Does my write-up make sense to you?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat Feb 06 20:03:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 06 Feb 2021 20:03:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82268.152028 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8Tnt-0002fa-MP; Sat, 06 Feb 2021 20:03:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82268.152028; Sat, 06 Feb 2021 20:03:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8Tnt-0002fT-JS; Sat, 06 Feb 2021 20:03:37 +0000
Received: by outflank-mailman (input) for mailman id 82268;
 Sat, 06 Feb 2021 20:03:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SMBC=HI=zededa.com=roman@srs-us1.protection.inumbo.net>)
 id 1l8Tns-0002fO-6u
 for xen-devel@lists.xenproject.org; Sat, 06 Feb 2021 20:03:36 +0000
Received: from mail-qk1-x72b.google.com (unknown [2607:f8b0:4864:20::72b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 367b384a-c30a-454b-8e79-9fa8e1ce46dc;
 Sat, 06 Feb 2021 20:03:35 +0000 (UTC)
Received: by mail-qk1-x72b.google.com with SMTP id u20so10538909qku.7
 for <xen-devel@lists.xenproject.org>; Sat, 06 Feb 2021 12:03:34 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 367b384a-c30a-454b-8e79-9fa8e1ce46dc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=zededa.com; s=google;
        h=mime-version:from:date:message-id:subject:to:cc;
        bh=lTAa5xIKptH7yVh7rmoF3zRc9qlRhOfeqFHTB4JCXjc=;
        b=J88jMf6tXnY7c7DkCNPiQhhVJ0Trdm/lDEbTvmo4B1RIC4nE7+E6relvm9ou1YKgUu
         +h8LMav2VhywXmTd7f03/NuItdY0rAnazi9+QV1HP0Q6VlqGppMSl2NfFPADWBwHQUDc
         fHiIdOXJzuP80urERBhCsdFj8m/pm+RYR0dSf8LEXGe4M2JVSWI12R7EB73OmuhOkaRU
         eBeGE+pdw0Dh8h0D5OUydN+ZtgLPD55L3NIfsdecBUq7Cm7JLOzu2IJUM3oP9UaDIcLh
         pXbEnDiRmBilf1GssJwpgGPf62LkFtXLVeWU4FlHFoGcktQFXKk4lKJs2idPLIG/Y5p7
         Untw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:from:date:message-id:subject:to:cc;
        bh=lTAa5xIKptH7yVh7rmoF3zRc9qlRhOfeqFHTB4JCXjc=;
        b=TLdHKW+cTuZo6YDm1pk+YETbabW80bmuz6dNDNHILKNCGeVUvl7k9P56RaAHqOkHF7
         Zp0WvC8TR/liQtAapYcM/p5arAs4bRg5qroxY5bdKRRj4mKkBIGkeKK938VExTej36A9
         6u305lDh9U4Grk/4TnbHvQCC+1LZKKoq/oXCMAi+eLD+GsuGLWXvtuKXvK+JopfJpkEk
         4chgyH5FsfH/B7ogtnU8Pefx5E03B7eGU5S+oWxXcMMJ5yJpXy4DO49VtjYG0WtuOzph
         QeU0Wf4oX9ffP6v1Fht0dMoYdZ1xN4ohlIyX+aZWx+XU5lRVSTy9w/0lTDNyHOpeqh2s
         7F2w==
X-Gm-Message-State: AOAM533CFKYzftEgij/AEOw47vKgc1eiG0QspR9O7uooma0KPG+szi39
	DxL/xBAVt2x9jxh9S46PUzLcV0jgNDrD1RVAzE6GQlM3AK7l+A==
X-Google-Smtp-Source: ABdhPJwoDdkst/4e0ogd8xJP8lbu2ldiv6QfpwVSuKHWjb6XnVvSlFuqj+BlRwH+rL+dYypEg7yv8+P1BD8/nxfO120=
X-Received: by 2002:a05:620a:152b:: with SMTP id n11mr9908742qkk.267.1612641814097;
 Sat, 06 Feb 2021 12:03:34 -0800 (PST)
MIME-Version: 1.0
From: Roman Shaposhnik <roman@zededa.com>
Date: Sat, 6 Feb 2021 12:03:23 -0800
Message-ID: <CAMmSBy-_UOK6DTrwGNOw8Y59Muv8H8wxmsc4-BXcv3N_u5USBA@mail.gmail.com>
Subject: Linux DomU freezes and dies under heavy memory shuffling
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Roman Shaposhnik <roman@zededa.com>
Content-Type: text/plain; charset="UTF-8"

Hi!

All of a sudden (but only after a few days of running normally), on a stock
Ubuntu 18.04 (Bionic with 4.15.0 kernel) DomU I'm seeing Microsoft's .NET
runtime go into a heavy GC cycle and then freeze and die as shown
below. This is under stock Xen 4.14.0 on a pretty unremarkable
x86_64 box made by Supermicro.

I would really appreciate any thoughts on the subject, or at least
directions in which I should go to investigate this. At this point this
part of Xen is a bit of a mystery to me, but I'm very much willing
to learn ;-)

From my completely uneducated guess, it feels like some kind of an issue
between the DomU shuffling memory much more than normal and Xen somehow
getting unhappy about that:

[376900.874560] watchdog: BUG: soft lockup - CPU#0 stuck for 23s! [dotnet:3518]
[376900.874764] Kernel panic - not syncing: softlockup: hung tasks
[376900.874793] CPU: 0 PID: 3518 Comm: dotnet Tainted: G L 4.15.0-112-generic #113-Ubuntu
[376900.874824] Hardware name: Xen HVM domU, BIOS 4.14.0 12/15/2020
[376900.874847] Call Trace:
[376900.874860] <IRQ>
[376900.874874] dump_stack+0x6d/0x8e
[376900.874892] panic+0xe4/0x254
[376900.874911] watchdog_timer_fn+0x21e/0x230
[376900.874928] ? watchdog+0x30/0x30
[376900.874947] __hrtimer_run_queues+0xdf/0x230
[376900.874970] hrtimer_interrupt+0xa0/0x1d0
[376900.874989] xen_timer_interrupt+0x20/0x30
[376900.875008] __handle_irq_event_percpu+0x44/0x1a0
[376900.875031] handle_irq_event_percpu+0x32/0x80
[376900.875053] handle_percpu_irq+0x3d/0x60
[376900.875071] generic_handle_irq+0x28/0x40
[376900.875090] __evtchn_fifo_handle_events+0x172/0x190
[376900.875112] evtchn_fifo_handle_events+0x10/0x20
[376900.875133] __xen_evtchn_do_upcall+0x49/0x80
[376900.875156] xen_evtchn_do_upcall+0x2b/0x50
[376900.875177] xen_hvm_callback_vector+0x90/0xa0
[376900.875197] </IRQ>
[376900.875211] RIP: 0010:smp_call_function_single+0xdc/0x100
[376900.875230] RSP: 0018:ffffaaa3c1807c20 EFLAGS: 00000202 ORIG_RAX: ffffffffffffff0c
[376900.875261] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
[376900.875288] RDX: 0000000000000001 RSI: 0000000000000003 RDI: 0000000000000003
[376900.875314] RBP: ffffaaa3c1807c70 R08: fffffffffffffffc R09: 0000000000000002
[376900.875341] R10: 0000000000000040 R11: 0000000000000000 R12: ffff8e0ab2c1de70
[376900.875368] R13: 0000000000000000 R14: ffffffff95a7ecd0 R15: ffffaaa3c1807d08
[376900.875396] ? flush_tlb_func_common.constprop.10+0x230/0x230
[376900.875424] ? flush_tlb_func_common.constprop.10+0x230/0x230
[376900.875449] ? unmap_page_range+0xbbc/0xd00
[376900.875470] smp_call_function_many+0x1cc/0x250
[376900.875491] ? smp_call_function_many+0x1cc/0x250
[376900.875513] native_flush_tlb_others+0x3c/0xf0
[376900.875534] flush_tlb_mm_range+0xae/0x110
[376900.875552] tlb_flush_mmu_tlbonly+0x5f/0xc0
[376900.875574] arch_tlb_finish_mmu+0x3f/0x80
[376900.875592] tlb_finish_mmu+0x23/0x30
[376900.875610] unmap_region+0xf7/0x130
[376900.875629] do_munmap+0x276/0x450
[376900.875647] vm_munmap+0x69/0xb0
[376900.875664] SyS_munmap+0x22/0x30
[376900.875682] do_syscall_64+0x73/0x130
[376900.875701] entry_SYSCALL_64_after_hwframe+0x41/0xa6
[376900.875721] RIP: 0033:0x7f05ad52dd59
[376900.875737] RSP: 002b:00007f05a8037150 EFLAGS: 00000246 ORIG_RAX: 000000000000000b
[376900.875765] RAX: ffffffffffffffda RBX: 000056517e2a08c0 RCX: 00007f05ad52dd59
[376900.875791] RDX: 0000000000000000 RSI: 0000000000006a00 RDI: 00007f05aad8f000
[376900.875818] RBP: 0000000000006a00 R08: 0000000000020b18 R09: 0000000000000000
[376900.875844] R10: 0000000000020ad0 R11: 0000000000000246 R12: 0000000000000001
[376900.875870] R13: 0000000000000000 R14: 000056517eb02300 R15: 00007f05aad8f000

Thanks,
Roman.


From xen-devel-bounces@lists.xenproject.org Sun Feb 07 02:41:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 07 Feb 2021 02:41:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82294.152070 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8a0Y-0000wo-4r; Sun, 07 Feb 2021 02:41:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82294.152070; Sun, 07 Feb 2021 02:41:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8a0X-0000wf-RE; Sun, 07 Feb 2021 02:41:05 +0000
Received: by outflank-mailman (input) for mailman id 82294;
 Sun, 07 Feb 2021 02:41:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8a0V-0000wH-KW; Sun, 07 Feb 2021 02:41:03 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8a0V-0000MI-7n; Sun, 07 Feb 2021 02:41:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8a0U-000742-Sb; Sun, 07 Feb 2021 02:41:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l8a0U-0000lF-S6; Sun, 07 Feb 2021 02:41:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=FMGPEN/Gj+W5tZJ0TjKwQrr0VZTummbpJQ/UiDsDodQ=; b=BNH83XXSoTFxDDrZCXmy0l0sku
	b2Nad1yTYvGBvALzHCUU72lIIAuDS9m0bBMN5eQ4BdIeelUh0C0568PRIlQ1DolHaX78T1GItLaod
	4NsRsIhNfMoWANDqVthkjNKpDa6IwgASP4k3+OruLKN9yTMxfigzGgd1zKUvIR+9b1T4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159052-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.12-testing test] 159052: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ovmf-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:<job status>:broken:regression
    xen-4.12-testing:test-xtf-amd64-amd64-1:<job status>:broken:regression
    xen-4.12-testing:test-xtf-amd64-amd64-2:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-freebsd10-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-xsm:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-shadow:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-rtds:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ovmf-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemut-debianhvm-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-pvshim:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-pvhv2-intel:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-pvhv2-amd:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-multivcpu:<job status>:broken:regression
    xen-4.12-testing:build-i386-xsm:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-amd64-pvgrub:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-i386-pvgrub:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-credit2:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-libvirt:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-libvirt-pair:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-credit1:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-libvirt-vhd:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-libvirt-xsm:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-livepatch:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-intel:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-migrupgrade:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-pair:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-pygrub:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-qemuu-freebsd11-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-qemuu-freebsd12-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-xl-shadow:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-xl-raw:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-freebsd10-i386:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-libvirt:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-libvirt-pair:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-livepatch:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-migrupgrade:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-pair:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-qemut-rhel6hvm-amd:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-qemut-rhel6hvm-intel:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-qemuu-rhel6hvm-amd:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-qemuu-rhel6hvm-intel:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-xl:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-xl-pvshim:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-xl-qemut-debianhvm-amd64:<job status>:broken:regression
    xen-4.12-testing:test-xtf-amd64-amd64-3:<job status>:broken:regression
    xen-4.12-testing:test-xtf-amd64-amd64-4:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-xsm:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-pvshim:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-amd64-amd64-pvgrub:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-amd64-qemuu-freebsd11-amd64:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-i386-xl-pvshim:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-amd64-pygrub:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-amd64-libvirt-xsm:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-shadow:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:host-install(5):broken:regression
    xen-4.12-testing:test-xtf-amd64-amd64-3:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-pvhv2-amd:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-i386-qemuu-rhel6hvm-amd:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-i386-livepatch:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ovmf-amd64:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-i386-libvirt:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemut-debianhvm-amd64:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-amd64-libvirt:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-i386-qemut-rhel6hvm-amd:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-pvhv2-intel:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-credit1:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-multivcpu:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ovmf-amd64:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-amd64-livepatch:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-i386-xl-raw:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-amd64-xl:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-i386-qemuu-rhel6hvm-intel:host-install(5):broken:regression
    xen-4.12-testing:test-xtf-amd64-amd64-4:host-install(5):broken:regression
    xen-4.12-testing:test-xtf-amd64-amd64-1:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-intel:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-credit2:host-install(5):broken:regression
    xen-4.12-testing:test-xtf-amd64-amd64-2:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-i386-xl-shadow:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-i386-freebsd10-i386:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-i386-qemut-rhel6hvm-intel:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-amd64-i386-pvgrub:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-i386-freebsd10-amd64:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-amd64-libvirt-vhd:host-install(5):broken:regression
    xen-4.12-testing:build-i386-xsm:host-install(4):broken:regression
    xen-4.12-testing:test-amd64-i386-xl-qemut-debianhvm-amd64:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-i386-migrupgrade:host-install/src_host(6):broken:regression
    xen-4.12-testing:test-amd64-amd64-migrupgrade:host-install/src_host(6):broken:regression
    xen-4.12-testing:test-amd64-amd64-migrupgrade:host-install/dst_host(7):broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-amd64-pair:host-install/src_host(6):broken:regression
    xen-4.12-testing:test-amd64-amd64-pair:host-install/dst_host(7):broken:regression
    xen-4.12-testing:test-amd64-i386-libvirt-pair:host-install/src_host(6):broken:regression
    xen-4.12-testing:test-amd64-i386-libvirt-pair:host-install/dst_host(7):broken:regression
    xen-4.12-testing:test-amd64-amd64-libvirt-pair:host-install/src_host(6):broken:regression
    xen-4.12-testing:test-amd64-amd64-libvirt-pair:host-install/dst_host(7):broken:regression
    xen-4.12-testing:test-amd64-i386-pair:host-install/src_host(6):broken:regression
    xen-4.12-testing:test-amd64-i386-migrupgrade:host-install/dst_host(7):broken:regression
    xen-4.12-testing:test-amd64-i386-pair:host-install/dst_host(7):broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-i386-xl:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-amd64-qemuu-freebsd12-amd64:host-install(5):broken:regression
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:guest-start:fail:regression
    xen-4.12-testing:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    xen-4.12-testing:test-armhf-armhf-xl-arndale:debian-fixup:fail:regression
    xen-4.12-testing:test-amd64-amd64-xl-rtds:host-install(5):broken:allowable
    xen-4.12-testing:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8d26cdd3b66ab86d560dacd763d76ff3da95723e
X-Osstest-Versions-That:
    xen=cce7cbd986c122a86582ff3775b6b559d877407c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 07 Feb 2021 02:41:02 +0000

flight 159052 xen-4.12-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159052/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-win7-amd64    <job status>                 broken
 test-amd64-i386-xl-qemut-ws16-amd64    <job status>                 broken
 test-amd64-i386-xl-qemuu-debianhvm-amd64    <job status>                broken
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow    <job status>         broken
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict    <job status>    broken
 test-amd64-i386-xl-qemuu-ovmf-amd64    <job status>                 broken
 test-amd64-i386-xl-qemuu-win7-amd64    <job status>                 broken
 test-amd64-i386-xl-qemuu-ws16-amd64    <job status>                 broken
 test-xtf-amd64-amd64-1          <job status>                 broken
 test-xtf-amd64-amd64-2          <job status>                 broken
 test-amd64-i386-freebsd10-amd64    <job status>                 broken
 test-amd64-amd64-xl-xsm         <job status>                 broken
 test-amd64-amd64-xl-shadow      <job status>                 broken
 test-amd64-amd64-xl-rtds        <job status>                 broken
 test-amd64-amd64-xl-qemuu-ws16-amd64    <job status>                 broken
 test-amd64-amd64-xl-qemuu-win7-amd64    <job status>                 broken
 test-amd64-amd64-xl-qemuu-ovmf-amd64    <job status>                 broken
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict    <job status>   broken
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm    <job status>            broken
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow    <job status>        broken
 test-amd64-amd64-xl-qemuu-debianhvm-amd64    <job status>               broken
 test-amd64-amd64-xl-qemut-ws16-amd64    <job status>                 broken
 test-amd64-amd64-xl-qemut-win7-amd64    <job status>                 broken
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm    <job status>   broken
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm    <job status>            broken
 test-amd64-amd64-xl-qemut-debianhvm-amd64    <job status>               broken
 test-amd64-amd64-xl-qcow2       <job status>                 broken
 test-amd64-amd64-xl-pvshim      <job status>                 broken
 test-amd64-amd64-xl-pvhv2-intel    <job status>                 broken
 test-amd64-amd64-xl-pvhv2-amd    <job status>                 broken
 test-amd64-amd64-xl-multivcpu    <job status>                 broken
 build-i386-xsm                  <job status>                 broken
 test-amd64-amd64-amd64-pvgrub    <job status>                 broken
 test-amd64-amd64-i386-pvgrub    <job status>                 broken
 test-amd64-amd64-xl-credit2     <job status>                 broken
 test-amd64-amd64-libvirt        <job status>                 broken
 test-amd64-amd64-libvirt-pair    <job status>                 broken
 test-amd64-amd64-xl-credit1     <job status>                 broken
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm    <job status>      broken
 test-amd64-amd64-libvirt-vhd    <job status>                 broken
 test-amd64-amd64-xl             <job status>                 broken
 test-amd64-amd64-libvirt-xsm    <job status>                 broken
 test-amd64-amd64-livepatch      <job status>                 broken
 test-amd64-amd64-qemuu-nested-intel    <job status>                 broken
 test-amd64-amd64-migrupgrade    <job status>                 broken
 test-amd64-amd64-pair           <job status>                 broken
 test-amd64-amd64-qemuu-nested-amd    <job status>                 broken
 test-amd64-amd64-pygrub         <job status>                 broken
 test-amd64-amd64-qemuu-freebsd11-amd64    <job status>                 broken
 test-amd64-amd64-qemuu-freebsd12-amd64    <job status>                 broken
 test-amd64-i386-xl-shadow       <job status>                 broken
 test-amd64-i386-xl-raw          <job status>                 broken
 test-amd64-i386-freebsd10-i386    <job status>                 broken
 test-amd64-i386-libvirt         <job status>                 broken
 test-amd64-i386-libvirt-pair    <job status>                 broken
 test-amd64-i386-livepatch       <job status>                 broken
 test-amd64-i386-migrupgrade     <job status>                 broken
 test-amd64-i386-pair            <job status>                 broken
 test-amd64-i386-qemut-rhel6hvm-amd    <job status>                 broken
 test-amd64-i386-qemut-rhel6hvm-intel    <job status>                 broken
 test-amd64-i386-qemuu-rhel6hvm-amd    <job status>                 broken
 test-amd64-i386-qemuu-rhel6hvm-intel    <job status>                 broken
 test-amd64-i386-xl              <job status>                 broken
 test-amd64-i386-xl-pvshim       <job status>                 broken
 test-amd64-i386-xl-qemut-debianhvm-amd64    <job status>                broken
 test-xtf-amd64-amd64-3          <job status>                 broken
 test-xtf-amd64-amd64-4          <job status>                 broken
 test-amd64-amd64-xl-xsm       5 host-install(5)        broken REGR. vs. 158556
 test-amd64-amd64-xl-pvshim    5 host-install(5)        broken REGR. vs. 158556
 test-amd64-i386-xl-qemut-ws16-amd64  5 host-install(5) broken REGR. vs. 158556
 test-amd64-amd64-amd64-pvgrub  5 host-install(5)       broken REGR. vs. 158556
 test-amd64-amd64-qemuu-freebsd11-amd64 5 host-install(5) broken REGR. vs. 158556
 test-amd64-i386-xl-pvshim     5 host-install(5)        broken REGR. vs. 158556
 test-amd64-amd64-pygrub       5 host-install(5)        broken REGR. vs. 158556
 test-amd64-amd64-xl-qemut-ws16-amd64 5 host-install(5) broken REGR. vs. 158556
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 5 host-install(5) broken REGR. vs. 158556
 test-amd64-i386-xl-qemuu-debianhvm-amd64 5 host-install(5) broken REGR. vs. 158556
 test-amd64-amd64-libvirt-xsm  5 host-install(5)        broken REGR. vs. 158556
 test-amd64-amd64-xl-shadow    5 host-install(5)        broken REGR. vs. 158556
 test-amd64-amd64-xl-qemut-win7-amd64 5 host-install(5) broken REGR. vs. 158556
 test-xtf-amd64-amd64-3        5 host-install(5)        broken REGR. vs. 158556
 test-amd64-i386-xl-qemut-win7-amd64  5 host-install(5) broken REGR. vs. 158556
 test-amd64-amd64-xl-pvhv2-amd  5 host-install(5)       broken REGR. vs. 158556
 test-amd64-i386-qemuu-rhel6hvm-amd  5 host-install(5)  broken REGR. vs. 158556
 test-amd64-i386-livepatch     5 host-install(5)        broken REGR. vs. 158556
 test-amd64-i386-xl-qemuu-ovmf-amd64  5 host-install(5) broken REGR. vs. 158556
 test-amd64-i386-libvirt       5 host-install(5)        broken REGR. vs. 158556
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 5 host-install(5) broken REGR. vs. 158556
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 5 host-install(5) broken REGR. vs. 158556
 test-amd64-amd64-xl-qemut-debianhvm-amd64 5 host-install(5) broken REGR. vs. 158556
 test-amd64-amd64-xl-qcow2     5 host-install(5)        broken REGR. vs. 158556
 test-amd64-amd64-libvirt      5 host-install(5)        broken REGR. vs. 158556
 test-amd64-i386-qemut-rhel6hvm-amd  5 host-install(5)  broken REGR. vs. 158556
 test-amd64-amd64-xl-pvhv2-intel  5 host-install(5)     broken REGR. vs. 158556
 test-amd64-amd64-xl-credit1   5 host-install(5)        broken REGR. vs. 158556
 test-amd64-amd64-xl-multivcpu  5 host-install(5)       broken REGR. vs. 158556
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 5 host-install(5) broken REGR. vs. 158556
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 5 host-install(5) broken REGR. vs. 158556
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 5 host-install(5) broken REGR. vs. 158556
 test-amd64-amd64-xl-qemuu-ovmf-amd64 5 host-install(5) broken REGR. vs. 158556
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 5 host-install(5) broken REGR. vs. 158556
 test-amd64-amd64-livepatch    5 host-install(5)        broken REGR. vs. 158556
 test-amd64-i386-xl-raw        5 host-install(5)        broken REGR. vs. 158556
 test-amd64-amd64-xl           5 host-install(5)        broken REGR. vs. 158556
 test-amd64-i386-qemuu-rhel6hvm-intel 5 host-install(5) broken REGR. vs. 158556
 test-xtf-amd64-amd64-4        5 host-install(5)        broken REGR. vs. 158556
 test-xtf-amd64-amd64-1        5 host-install(5)        broken REGR. vs. 158556
 test-amd64-amd64-qemuu-nested-intel  5 host-install(5) broken REGR. vs. 158556
 test-amd64-amd64-xl-credit2   5 host-install(5)        broken REGR. vs. 158556
 test-xtf-amd64-amd64-2        5 host-install(5)        broken REGR. vs. 158556
 test-amd64-amd64-xl-qemuu-ws16-amd64 5 host-install(5) broken REGR. vs. 158556
 test-amd64-i386-xl-qemuu-win7-amd64  5 host-install(5) broken REGR. vs. 158556
 test-amd64-i386-xl-shadow     5 host-install(5)        broken REGR. vs. 158556
 test-amd64-i386-freebsd10-i386  5 host-install(5)      broken REGR. vs. 158556
 test-amd64-i386-xl-qemuu-ws16-amd64  5 host-install(5) broken REGR. vs. 158556
 test-amd64-i386-qemut-rhel6hvm-intel 5 host-install(5) broken REGR. vs. 158556
 test-amd64-amd64-i386-pvgrub  5 host-install(5)        broken REGR. vs. 158556
 test-amd64-i386-freebsd10-amd64  5 host-install(5)     broken REGR. vs. 158556
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 5 host-install(5) broken REGR. vs. 158556
 test-amd64-amd64-libvirt-vhd  5 host-install(5)        broken REGR. vs. 158556
 build-i386-xsm                4 host-install(4)        broken REGR. vs. 158556
 test-amd64-i386-xl-qemut-debianhvm-amd64 5 host-install(5) broken REGR. vs. 158556
 test-amd64-i386-migrupgrade 6 host-install/src_host(6) broken REGR. vs. 158556
 test-amd64-amd64-migrupgrade 6 host-install/src_host(6) broken REGR. vs. 158556
 test-amd64-amd64-migrupgrade 7 host-install/dst_host(7) broken REGR. vs. 158556
 test-amd64-amd64-xl-qemuu-win7-amd64 5 host-install(5) broken REGR. vs. 158556
 test-amd64-amd64-pair       6 host-install/src_host(6) broken REGR. vs. 158556
 test-amd64-amd64-pair       7 host-install/dst_host(7) broken REGR. vs. 158556
 test-amd64-i386-libvirt-pair 6 host-install/src_host(6) broken REGR. vs. 158556
 test-amd64-i386-libvirt-pair 7 host-install/dst_host(7) broken REGR. vs. 158556
 test-amd64-amd64-libvirt-pair 6 host-install/src_host(6) broken REGR. vs. 158556
 test-amd64-amd64-libvirt-pair 7 host-install/dst_host(7) broken REGR. vs. 158556
 test-amd64-i386-pair        6 host-install/src_host(6) broken REGR. vs. 158556
 test-amd64-i386-migrupgrade 7 host-install/dst_host(7) broken REGR. vs. 158556
 test-amd64-i386-pair        7 host-install/dst_host(7) broken REGR. vs. 158556
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 5 host-install(5) broken REGR. vs. 158556
 test-amd64-i386-xl            5 host-install(5)        broken REGR. vs. 158556
 test-amd64-amd64-qemuu-nested-amd  5 host-install(5)   broken REGR. vs. 158556
 test-amd64-amd64-qemuu-freebsd12-amd64 5 host-install(5) broken REGR. vs. 158556
 test-armhf-armhf-libvirt-raw 13 guest-start              fail REGR. vs. 158556
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 158556
 test-armhf-armhf-xl-arndale  13 debian-fixup             fail REGR. vs. 158556

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      5 host-install(5)        broken REGR. vs. 158556

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 158556
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  8d26cdd3b66ab86d560dacd763d76ff3da95723e
baseline version:
 xen                  cce7cbd986c122a86582ff3775b6b559d877407c

Last test of basis   158556  2021-01-21 15:37:26 Z   16 days
Failing since        159017  2021-02-04 15:06:13 Z    2 days    2 attempts
Testing same since   159052  2021-02-05 18:27:22 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               broken  
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       broken  
 test-xtf-amd64-amd64-2                                       broken  
 test-xtf-amd64-amd64-3                                       broken  
 test-xtf-amd64-amd64-4                                       broken  
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          broken  
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           broken  
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           broken  
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        broken  
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 broken  
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 broken  
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 broken  
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      broken  
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            broken  
 test-amd64-amd64-xl-pvhv2-amd                                broken  
 test-amd64-i386-qemut-rhel6hvm-amd                           broken  
 test-amd64-i386-qemuu-rhel6hvm-amd                           broken  
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    broken  
 test-amd64-i386-xl-qemut-debianhvm-amd64                     broken  
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    broken  
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     broken  
 test-amd64-i386-freebsd10-amd64                              broken  
 test-amd64-amd64-qemuu-freebsd11-amd64                       broken  
 test-amd64-amd64-qemuu-freebsd12-amd64                       broken  
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         broken  
 test-amd64-i386-xl-qemuu-ovmf-amd64                          broken  
 test-amd64-amd64-xl-qemut-win7-amd64                         broken  
 test-amd64-i386-xl-qemut-win7-amd64                          broken  
 test-amd64-amd64-xl-qemuu-win7-amd64                         broken  
 test-amd64-i386-xl-qemuu-win7-amd64                          broken  
 test-amd64-amd64-xl-qemut-ws16-amd64                         broken  
 test-amd64-i386-xl-qemut-ws16-amd64                          broken  
 test-amd64-amd64-xl-qemuu-ws16-amd64                         broken  
 test-amd64-i386-xl-qemuu-ws16-amd64                          broken  
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-xl-credit1                                  broken  
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  broken  
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        broken  
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         broken  
 test-amd64-i386-freebsd10-i386                               broken  
 test-amd64-amd64-qemuu-nested-intel                          broken  
 test-amd64-amd64-xl-pvhv2-intel                              broken  
 test-amd64-i386-qemut-rhel6hvm-intel                         broken  
 test-amd64-i386-qemuu-rhel6hvm-intel                         broken  
 test-amd64-amd64-libvirt                                     broken  
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      broken  
 test-amd64-amd64-livepatch                                   broken  
 test-amd64-i386-livepatch                                    broken  
 test-amd64-amd64-migrupgrade                                 broken  
 test-amd64-i386-migrupgrade                                  broken  
 test-amd64-amd64-xl-multivcpu                                broken  
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        broken  
 test-amd64-i386-pair                                         broken  
 test-amd64-amd64-libvirt-pair                                broken  
 test-amd64-i386-libvirt-pair                                 broken  
 test-amd64-amd64-amd64-pvgrub                                broken  
 test-amd64-amd64-i386-pvgrub                                 broken  
 test-amd64-amd64-xl-pvshim                                   broken  
 test-amd64-i386-xl-pvshim                                    broken  
 test-amd64-amd64-pygrub                                      broken  
 test-amd64-amd64-xl-qcow2                                    broken  
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       broken  
 test-amd64-amd64-xl-rtds                                     broken  
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             broken  
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              broken  
 test-amd64-amd64-xl-shadow                                   broken  
 test-amd64-i386-xl-shadow                                    broken  
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 broken  
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-i386-xl-qemut-win7-amd64 broken
broken-job test-amd64-i386-xl-qemut-ws16-amd64 broken
broken-job test-amd64-i386-xl-qemuu-debianhvm-amd64 broken
broken-job test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow broken
broken-job test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict broken
broken-job test-amd64-i386-xl-qemuu-ovmf-amd64 broken
broken-job test-amd64-i386-xl-qemuu-win7-amd64 broken
broken-job test-amd64-i386-xl-qemuu-ws16-amd64 broken
broken-job test-xtf-amd64-amd64-1 broken
broken-job test-xtf-amd64-amd64-2 broken
broken-job test-amd64-i386-freebsd10-amd64 broken
broken-job test-amd64-amd64-xl-xsm broken
broken-job test-amd64-amd64-xl-shadow broken
broken-job test-amd64-amd64-xl-rtds broken
broken-job test-amd64-amd64-xl-qemuu-ws16-amd64 broken
broken-job test-amd64-amd64-xl-qemuu-win7-amd64 broken
broken-job test-amd64-amd64-xl-qemuu-ovmf-amd64 broken
broken-job test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict broken
broken-job test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm broken
broken-job test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow broken
broken-job test-amd64-amd64-xl-qemuu-debianhvm-amd64 broken
broken-job test-amd64-amd64-xl-qemut-ws16-amd64 broken
broken-job test-amd64-amd64-xl-qemut-win7-amd64 broken
broken-job test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm broken
broken-job test-amd64-amd64-xl-qemut-debianhvm-i386-xsm broken
broken-job test-amd64-amd64-xl-qemut-debianhvm-amd64 broken
broken-job test-amd64-amd64-xl-qcow2 broken
broken-job test-amd64-amd64-xl-pvshim broken
broken-job test-amd64-amd64-xl-pvhv2-intel broken
broken-job test-amd64-amd64-xl-pvhv2-amd broken
broken-job test-amd64-amd64-xl-multivcpu broken
broken-job build-i386-xsm broken
broken-job test-amd64-amd64-amd64-pvgrub broken
broken-job test-amd64-amd64-i386-pvgrub broken
broken-job test-amd64-amd64-xl-credit2 broken
broken-job test-amd64-amd64-libvirt broken
broken-job test-amd64-amd64-libvirt-pair broken
broken-job test-amd64-amd64-xl-credit1 broken
broken-job test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm broken
broken-job test-amd64-amd64-libvirt-vhd broken
broken-job test-amd64-amd64-xl broken
broken-job test-amd64-amd64-libvirt-xsm broken
broken-job test-amd64-amd64-livepatch broken
broken-job test-amd64-amd64-qemuu-nested-intel broken
broken-job test-amd64-amd64-migrupgrade broken
broken-job test-amd64-amd64-pair broken
broken-job test-amd64-amd64-qemuu-nested-amd broken
broken-job test-amd64-amd64-pygrub broken
broken-job test-amd64-amd64-qemuu-freebsd11-amd64 broken
broken-job test-amd64-amd64-qemuu-freebsd12-amd64 broken
broken-job test-amd64-i386-xl-shadow broken
broken-job test-amd64-i386-xl-raw broken
broken-job test-amd64-i386-freebsd10-i386 broken
broken-job test-amd64-i386-libvirt broken
broken-job test-amd64-i386-libvirt-pair broken
broken-job test-amd64-i386-livepatch broken
broken-job test-amd64-i386-migrupgrade broken
broken-job test-amd64-i386-pair broken
broken-job test-amd64-i386-qemut-rhel6hvm-amd broken
broken-job test-amd64-i386-qemut-rhel6hvm-intel broken
broken-job test-amd64-i386-qemuu-rhel6hvm-amd broken
broken-job test-amd64-i386-qemuu-rhel6hvm-intel broken
broken-job test-amd64-i386-xl broken
broken-job test-amd64-i386-xl-pvshim broken
broken-job test-amd64-i386-xl-qemut-debianhvm-amd64 broken
broken-job test-xtf-amd64-amd64-3 broken
broken-job test-xtf-amd64-amd64-4 broken
broken-step test-amd64-amd64-xl-xsm host-install(5)
broken-step test-amd64-amd64-xl-pvshim host-install(5)
broken-step test-amd64-i386-xl-qemut-ws16-amd64 host-install(5)
broken-step test-amd64-amd64-amd64-pvgrub host-install(5)
broken-step test-amd64-amd64-qemuu-freebsd11-amd64 host-install(5)
broken-step test-amd64-i386-xl-pvshim host-install(5)
broken-step test-amd64-amd64-pygrub host-install(5)
broken-step test-amd64-amd64-xl-qemut-ws16-amd64 host-install(5)
broken-step test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm host-install(5)
broken-step test-amd64-i386-xl-qemuu-debianhvm-amd64 host-install(5)
broken-step test-amd64-amd64-libvirt-xsm host-install(5)
broken-step test-amd64-amd64-xl-shadow host-install(5)
broken-step test-amd64-amd64-xl-qemut-win7-amd64 host-install(5)
broken-step test-xtf-amd64-amd64-3 host-install(5)
broken-step test-amd64-i386-xl-qemut-win7-amd64 host-install(5)
broken-step test-amd64-amd64-xl-pvhv2-amd host-install(5)
broken-step test-amd64-i386-qemuu-rhel6hvm-amd host-install(5)
broken-step test-amd64-i386-livepatch host-install(5)
broken-step test-amd64-i386-xl-qemuu-ovmf-amd64 host-install(5)
broken-step test-amd64-i386-libvirt host-install(5)
broken-step test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow host-install(5)
broken-step test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict host-install(5)
broken-step test-amd64-amd64-xl-qemut-debianhvm-amd64 host-install(5)
broken-step test-amd64-amd64-xl-qcow2 host-install(5)
broken-step test-amd64-amd64-libvirt host-install(5)
broken-step test-amd64-i386-qemut-rhel6hvm-amd host-install(5)
broken-step test-amd64-amd64-xl-pvhv2-intel host-install(5)
broken-step test-amd64-amd64-xl-credit1 host-install(5)
broken-step test-amd64-amd64-xl-multivcpu host-install(5)
broken-step test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict host-install(5)
broken-step test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm host-install(5)
broken-step test-amd64-amd64-xl-qemuu-debianhvm-amd64 host-install(5)
broken-step test-amd64-amd64-xl-qemuu-ovmf-amd64 host-install(5)
broken-step test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow host-install(5)
broken-step test-amd64-amd64-livepatch host-install(5)
broken-step test-amd64-i386-xl-raw host-install(5)
broken-step test-amd64-amd64-xl host-install(5)
broken-step test-amd64-i386-qemuu-rhel6hvm-intel host-install(5)
broken-step test-xtf-amd64-amd64-4 host-install(5)
broken-step test-xtf-amd64-amd64-1 host-install(5)
broken-step test-amd64-amd64-qemuu-nested-intel host-install(5)
broken-step test-amd64-amd64-xl-credit2 host-install(5)
broken-step test-xtf-amd64-amd64-2 host-install(5)
broken-step test-amd64-amd64-xl-qemuu-ws16-amd64 host-install(5)
broken-step test-amd64-i386-xl-qemuu-win7-amd64 host-install(5)
broken-step test-amd64-i386-xl-shadow host-install(5)
broken-step test-amd64-i386-freebsd10-i386 host-install(5)
broken-step test-amd64-i386-xl-qemuu-ws16-amd64 host-install(5)
broken-step test-amd64-i386-qemut-rhel6hvm-intel host-install(5)
broken-step test-amd64-amd64-i386-pvgrub host-install(5)
broken-step test-amd64-i386-freebsd10-amd64 host-install(5)
broken-step test-amd64-amd64-xl-qemut-debianhvm-i386-xsm host-install(5)
broken-step test-amd64-amd64-libvirt-vhd host-install(5)
broken-step test-amd64-amd64-xl-rtds host-install(5)
broken-step build-i386-xsm host-install(4)
broken-step test-amd64-i386-xl-qemut-debianhvm-amd64 host-install(5)
broken-step test-amd64-i386-migrupgrade host-install/src_host(6)
broken-step test-amd64-amd64-migrupgrade host-install/src_host(6)
broken-step test-amd64-amd64-migrupgrade host-install/dst_host(7)
broken-step test-amd64-amd64-xl-qemuu-win7-amd64 host-install(5)
broken-step test-amd64-amd64-pair host-install/src_host(6)
broken-step test-amd64-amd64-pair host-install/dst_host(7)
broken-step test-amd64-i386-libvirt-pair host-install/src_host(6)
broken-step test-amd64-i386-libvirt-pair host-install/dst_host(7)
broken-step test-amd64-amd64-libvirt-pair host-install/src_host(6)
broken-step test-amd64-amd64-libvirt-pair host-install/dst_host(7)
broken-step test-amd64-i386-pair host-install/src_host(6)
broken-step test-amd64-i386-migrupgrade host-install/dst_host(7)
broken-step test-amd64-i386-pair host-install/dst_host(7)
broken-step test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm host-install(5)
broken-step test-amd64-i386-xl host-install(5)
broken-step test-amd64-amd64-qemuu-nested-amd host-install(5)
broken-step test-amd64-amd64-qemuu-freebsd12-amd64 host-install(5)

Not pushing.

------------------------------------------------------------
commit 8d26cdd3b66ab86d560dacd763d76ff3da95723e
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Feb 5 08:52:54 2021 +0100

    x86/msr: fix handling of MSR_IA32_PERF_{STATUS/CTL} (again, part 2)
    
    X86_VENDOR_* aren't bit masks in the older trees.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit f1f322610718c40680ac09e66f6c82e69c78ba3a
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Feb 4 15:39:45 2021 +0100

    x86/msr: fix handling of MSR_IA32_PERF_{STATUS/CTL} (again)
    
    X86_VENDOR_* aren't bit masks in the older trees.
    
    Reported-by: James Dingwall <james@dingwall.me.uk>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sun Feb 07 09:18:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 07 Feb 2021 09:18:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82336.152109 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8gCl-0006vC-LO; Sun, 07 Feb 2021 09:18:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82336.152109; Sun, 07 Feb 2021 09:18:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8gCl-0006v5-Hi; Sun, 07 Feb 2021 09:18:07 +0000
Received: by outflank-mailman (input) for mailman id 82336;
 Sun, 07 Feb 2021 09:18:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8gCk-0006ux-4d; Sun, 07 Feb 2021 09:18:06 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8gCj-0001Gs-QP; Sun, 07 Feb 2021 09:18:05 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8gCj-0007Im-DF; Sun, 07 Feb 2021 09:18:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l8gCj-00085G-Ck; Sun, 07 Feb 2021 09:18:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ZFrY0NjBb0ZmG00De7+YVTEmdJB2z4TsMrhiqIzwddg=; b=t1E6FY2t+y/FmfhBXMwWhmojOt
	7O4XB2Vwvn2Z9v1U0Elubamn7nre0pUrO9jcNoFJIcyhTKtL9zw2RvUkJyVjDtIQEkcSxh/hZj4bh
	9vhAq1P1Mc12OpWAzDi175ozypHLfzi/YjTwzVH5BbAbPFWEBMuB5H1xPGX1mCQUVg2I=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159057-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 159057: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    linux-5.4:test-amd64-coresched-amd64-xl:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-xl-xsm:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-xl-shadow:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-xl-rtds:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-ovmf-amd64:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-amd64:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:<job status>:broken:regression
    linux-5.4:build-i386:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-amd64:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-xl-qcow2:<job status>:broken:regression
    linux-5.4:build-i386-xsm:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-amd64-pvgrub:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-xl-pvshim:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-dom0pvh-xl-amd:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-xl-pvhv2-intel:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-dom0pvh-xl-intel:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-i386-pvgrub:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-xl-pvhv2-amd:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-libvirt:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-libvirt-pair:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-xl-multivcpu:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-libvirt-vhd:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-xl-credit2:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-libvirt-xsm:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-pair:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-xl-credit1:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-pygrub:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-qemuu-freebsd11-amd64:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-xl:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-qemuu-freebsd12-amd64:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-qemuu-nested-intel:<job status>:broken:regression
    linux-5.4:build-i386:host-install(4):broken:regression
    linux-5.4:build-i386-xsm:host-install(4):broken:regression
    linux-5.4:test-arm64-arm64-xl:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-seattle:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-xsm:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-credit1:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-credit2:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-thunderx:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-libvirt:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-cubietruck:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-arndale:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-libvirt-raw:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-xl-xsm:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-xl-qcow2:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-qemuu-freebsd12-amd64:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-amd64:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-xl-pvhv2-amd:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-qemuu-nested-intel:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-xl-pvhv2-intel:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-xl-pvshim:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-coresched-amd64-xl:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-xl-multivcpu:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-i386-pvgrub:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-dom0pvh-xl-amd:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-libvirt:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-libvirt-pair:host-install/src_host(6):broken:heisenbug
    linux-5.4:test-amd64-amd64-pair:host-install/src_host(6):broken:heisenbug
    linux-5.4:test-amd64-amd64-pair:host-install/dst_host(7):broken:heisenbug
    linux-5.4:test-amd64-amd64-libvirt-pair:host-install/dst_host(7):broken:heisenbug
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-xl:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-xl-credit2:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-xl-qemuu-ovmf-amd64:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-xl-credit1:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-xl-shadow:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-amd64-pvgrub:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-pygrub:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-libvirt-xsm:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-libvirt-vhd:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-amd64:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-qemuu-freebsd11-amd64:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-dom0pvh-xl-intel:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-xl-rtds:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-examine:host-install:broken:heisenbug
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    linux-5.4:build-i386-libvirt:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=e89428970c23011a2679121c56e9f54f654c6602
X-Osstest-Versions-That:
    linux=a829146c3fdcf6d0b76d9c54556a223820f1f73b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 07 Feb 2021 09:18:05 +0000

flight 159057 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159057/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-coresched-amd64-xl    <job status>                 broken
 test-amd64-amd64-xl-xsm         <job status>                 broken
 test-amd64-amd64-xl-shadow      <job status>                 broken
 test-amd64-amd64-xl-rtds        <job status>                 broken
 test-amd64-amd64-xl-qemuu-ws16-amd64    <job status>                 broken
 test-amd64-amd64-xl-qemuu-win7-amd64    <job status>                 broken
 test-amd64-amd64-xl-qemuu-ovmf-amd64    <job status>                 broken
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict    <job status>   broken
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm    <job status>            broken
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow    <job status>        broken
 test-amd64-amd64-xl-qemuu-debianhvm-amd64    <job status>               broken
 test-amd64-amd64-xl-qemut-ws16-amd64    <job status>                 broken
 test-amd64-amd64-xl-qemut-win7-amd64    <job status>                 broken
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm    <job status>   broken
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm    <job status>            broken
 build-i386                      <job status>                 broken
 test-amd64-amd64-xl-qemut-debianhvm-amd64    <job status>               broken
 test-amd64-amd64-xl-qcow2       <job status>                 broken
 build-i386-xsm                  <job status>                 broken
 test-amd64-amd64-amd64-pvgrub    <job status>                 broken
 test-amd64-amd64-xl-pvshim      <job status>                 broken
 test-amd64-amd64-dom0pvh-xl-amd    <job status>                 broken
 test-amd64-amd64-xl-pvhv2-intel    <job status>                 broken
 test-amd64-amd64-dom0pvh-xl-intel    <job status>                 broken
 test-amd64-amd64-i386-pvgrub    <job status>                 broken
 test-amd64-amd64-xl-pvhv2-amd    <job status>                 broken
 test-amd64-amd64-libvirt        <job status>                 broken
 test-amd64-amd64-libvirt-pair    <job status>                 broken
 test-amd64-amd64-xl-multivcpu    <job status>                 broken
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm    <job status>      broken
 test-amd64-amd64-libvirt-vhd    <job status>                 broken
 test-amd64-amd64-xl-credit2     <job status>                 broken
 test-amd64-amd64-libvirt-xsm    <job status>                 broken
 test-amd64-amd64-pair           <job status>                 broken
 test-amd64-amd64-xl-credit1     <job status>                 broken
 test-amd64-amd64-pygrub         <job status>                 broken
 test-amd64-amd64-qemuu-freebsd11-amd64    <job status>                 broken
 test-amd64-amd64-xl             <job status>                 broken
 test-amd64-amd64-qemuu-freebsd12-amd64    <job status>                 broken
 test-amd64-amd64-qemuu-nested-amd    <job status>                 broken
 test-amd64-amd64-qemuu-nested-intel    <job status>                 broken
 build-i386                    4 host-install(4)        broken REGR. vs. 158387
 build-i386-xsm                4 host-install(4)        broken REGR. vs. 158387
 test-arm64-arm64-xl          14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-seattle  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-xsm      14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-credit1  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-credit2  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-thunderx 14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-multivcpu 14 guest-start             fail REGR. vs. 158387
 test-armhf-armhf-xl-credit2  14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-credit1  14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-cubietruck 14 guest-start            fail REGR. vs. 158387
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl          14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-arndale  14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-libvirt-raw 13 guest-start              fail REGR. vs. 158387

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 5 host-install(5) broken pass in 159023
 test-amd64-amd64-xl-xsm       5 host-install(5)          broken pass in 159023
 test-amd64-amd64-xl-qcow2     5 host-install(5)          broken pass in 159023
 test-amd64-amd64-qemuu-freebsd12-amd64 5 host-install(5) broken pass in 159023
 test-amd64-amd64-xl-qemut-debianhvm-amd64 5 host-install(5) broken pass in 159023
 test-amd64-amd64-xl-pvhv2-amd  5 host-install(5)         broken pass in 159023
 test-amd64-amd64-xl-qemuu-ws16-amd64  5 host-install(5)  broken pass in 159023
 test-amd64-amd64-qemuu-nested-intel  5 host-install(5)   broken pass in 159023
 test-amd64-amd64-xl-qemuu-win7-amd64  5 host-install(5)  broken pass in 159023
 test-amd64-amd64-xl-pvhv2-intel  5 host-install(5)       broken pass in 159023
 test-amd64-amd64-xl-pvshim    5 host-install(5)          broken pass in 159023
 test-amd64-coresched-amd64-xl  5 host-install(5)         broken pass in 159023
 test-amd64-amd64-xl-multivcpu  5 host-install(5)         broken pass in 159023
 test-amd64-amd64-i386-pvgrub  5 host-install(5)          broken pass in 159023
 test-amd64-amd64-dom0pvh-xl-amd  5 host-install(5)       broken pass in 159023
 test-amd64-amd64-libvirt      5 host-install(5)          broken pass in 159023
 test-amd64-amd64-libvirt-pair 6 host-install/src_host(6) broken pass in 159023
 test-amd64-amd64-pair         6 host-install/src_host(6) broken pass in 159023
 test-amd64-amd64-pair         7 host-install/dst_host(7) broken pass in 159023
 test-amd64-amd64-libvirt-pair 7 host-install/dst_host(7) broken pass in 159023
 test-amd64-amd64-qemuu-nested-amd  5 host-install(5)     broken pass in 159023
 test-amd64-amd64-xl           5 host-install(5)          broken pass in 159023
 test-amd64-amd64-xl-credit2   5 host-install(5)          broken pass in 159023
 test-amd64-amd64-xl-qemuu-ovmf-amd64  5 host-install(5)  broken pass in 159023
 test-amd64-amd64-xl-credit1   5 host-install(5)          broken pass in 159023
 test-amd64-amd64-xl-shadow    5 host-install(5)          broken pass in 159023
 test-amd64-amd64-xl-qemut-ws16-amd64  5 host-install(5)  broken pass in 159023
 test-amd64-amd64-amd64-pvgrub  5 host-install(5)         broken pass in 159023
 test-amd64-amd64-pygrub       5 host-install(5)          broken pass in 159023
 test-amd64-amd64-libvirt-xsm  5 host-install(5)          broken pass in 159023
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 5 host-install(5) broken pass in 159023
 test-amd64-amd64-libvirt-vhd  5 host-install(5)          broken pass in 159023
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 5 host-install(5) broken pass in 159023
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 5 host-install(5) broken pass in 159023
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 5 host-install(5) broken pass in 159023
 test-amd64-amd64-qemuu-freebsd11-amd64 5 host-install(5) broken pass in 159023
 test-amd64-amd64-xl-qemut-win7-amd64  5 host-install(5)  broken pass in 159023
 test-amd64-amd64-dom0pvh-xl-intel  5 host-install(5)     broken pass in 159023
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 5 host-install(5) broken pass in 159023
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 5 host-install(5) broken pass in 159023
 test-amd64-amd64-xl-rtds      5 host-install(5)          broken pass in 159023
 test-amd64-amd64-examine      5 host-install             broken pass in 159023

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     14 guest-start              fail REGR. vs. 158387

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop  fail in 159023 like 158387
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop   fail in 159023 like 158387
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop  fail in 159023 like 158387
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail in 159023 like 158387
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop  fail in 159023 like 158387
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop  fail in 159023 like 158387
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop   fail in 159023 like 158387
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop   fail in 159023 like 158387
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop   fail in 159023 like 158387
 test-amd64-i386-xl-pvshim    14 guest-start          fail in 159023 never pass
 test-amd64-i386-libvirt     15 migrate-support-check fail in 159023 never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check fail in 159023 never pass
 test-amd64-i386-libvirt-xsm 15 migrate-support-check fail in 159023 never pass
 test-amd64-amd64-libvirt    15 migrate-support-check fail in 159023 never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail in 159023 never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail in 159023 never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check fail in 159023 never pass

version targeted for testing:
 linux                e89428970c23011a2679121c56e9f54f654c6602
baseline version:
 linux                a829146c3fdcf6d0b76d9c54556a223820f1f73b

Last test of basis   158387  2021-01-12 19:40:06 Z   25 days
Failing since        158473  2021-01-17 13:42:20 Z   20 days   32 attempts
Testing same since   158997  2021-02-04 00:10:36 Z    3 days    3 attempts

------------------------------------------------------------
353 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               broken  
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   broken  
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          broken  
 test-amd64-coresched-amd64-xl                                broken  
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           broken  
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        broken  
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 broken  
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 broken  
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 broken  
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      broken  
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            broken  
 test-amd64-amd64-xl-pvhv2-amd                                broken  
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              broken  
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    broken  
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    broken  
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       broken  
 test-amd64-amd64-qemuu-freebsd12-amd64                       broken  
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         broken  
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         broken  
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         broken  
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         broken  
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         broken  
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-xl-credit1                                  broken  
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  broken  
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        broken  
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          broken  
 test-amd64-amd64-xl-pvhv2-intel                              broken  
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            broken  
 test-amd64-amd64-libvirt                                     broken  
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                broken  
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        broken  
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                broken  
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                broken  
 test-amd64-amd64-i386-pvgrub                                 broken  
 test-amd64-amd64-xl-pvshim                                   broken  
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      broken  
 test-amd64-amd64-xl-qcow2                                    broken  
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     broken  
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             broken  
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   broken  
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 broken  
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-coresched-amd64-xl broken
broken-job test-amd64-amd64-xl-xsm broken
broken-job test-amd64-amd64-xl-shadow broken
broken-job test-amd64-amd64-xl-rtds broken
broken-job test-amd64-amd64-xl-qemuu-ws16-amd64 broken
broken-job test-amd64-amd64-xl-qemuu-win7-amd64 broken
broken-job test-amd64-amd64-xl-qemuu-ovmf-amd64 broken
broken-job test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict broken
broken-job test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm broken
broken-job test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow broken
broken-job test-amd64-amd64-xl-qemuu-debianhvm-amd64 broken
broken-job test-amd64-amd64-xl-qemut-ws16-amd64 broken
broken-job test-amd64-amd64-xl-qemut-win7-amd64 broken
broken-job test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm broken
broken-job test-amd64-amd64-xl-qemut-debianhvm-i386-xsm broken
broken-job build-i386 broken
broken-job test-amd64-amd64-xl-qemut-debianhvm-amd64 broken
broken-job test-amd64-amd64-xl-qcow2 broken
broken-job build-i386-xsm broken
broken-job test-amd64-amd64-amd64-pvgrub broken
broken-job test-amd64-amd64-xl-pvshim broken
broken-job test-amd64-amd64-dom0pvh-xl-amd broken
broken-job test-amd64-amd64-xl-pvhv2-intel broken
broken-job test-amd64-amd64-dom0pvh-xl-intel broken
broken-job test-amd64-amd64-i386-pvgrub broken
broken-job test-amd64-amd64-xl-pvhv2-amd broken
broken-job test-amd64-amd64-libvirt broken
broken-job test-amd64-amd64-libvirt-pair broken
broken-job test-amd64-amd64-xl-multivcpu broken
broken-job test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm broken
broken-job test-amd64-amd64-libvirt-vhd broken
broken-job test-amd64-amd64-xl-credit2 broken
broken-job test-amd64-amd64-libvirt-xsm broken
broken-job test-amd64-amd64-pair broken
broken-job test-amd64-amd64-xl-credit1 broken
broken-job test-amd64-amd64-pygrub broken
broken-job test-amd64-amd64-qemuu-freebsd11-amd64 broken
broken-job test-amd64-amd64-xl broken
broken-job test-amd64-amd64-qemuu-freebsd12-amd64 broken
broken-job test-amd64-amd64-qemuu-nested-amd broken
broken-job test-amd64-amd64-qemuu-nested-intel broken
broken-step test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm host-install(5)
broken-step test-amd64-amd64-xl-xsm host-install(5)
broken-step test-amd64-amd64-xl-qcow2 host-install(5)
broken-step test-amd64-amd64-qemuu-freebsd12-amd64 host-install(5)
broken-step test-amd64-amd64-xl-qemut-debianhvm-amd64 host-install(5)
broken-step test-amd64-amd64-xl-pvhv2-amd host-install(5)
broken-step test-amd64-amd64-xl-qemuu-ws16-amd64 host-install(5)
broken-step test-amd64-amd64-qemuu-nested-intel host-install(5)
broken-step test-amd64-amd64-xl-qemuu-win7-amd64 host-install(5)
broken-step test-amd64-amd64-xl-pvhv2-intel host-install(5)
broken-step test-amd64-amd64-xl-pvshim host-install(5)
broken-step test-amd64-coresched-amd64-xl host-install(5)
broken-step build-i386 host-install(4)
broken-step test-amd64-amd64-xl-multivcpu host-install(5)
broken-step test-amd64-amd64-i386-pvgrub host-install(5)
broken-step test-amd64-amd64-dom0pvh-xl-amd host-install(5)
broken-step test-amd64-amd64-libvirt host-install(5)
broken-step test-amd64-amd64-libvirt-pair host-install/src_host(6)
broken-step test-amd64-amd64-pair host-install/src_host(6)
broken-step test-amd64-amd64-pair host-install/dst_host(7)
broken-step test-amd64-amd64-libvirt-pair host-install/dst_host(7)
broken-step test-amd64-amd64-qemuu-nested-amd host-install(5)
broken-step test-amd64-amd64-xl host-install(5)
broken-step test-amd64-amd64-xl-credit2 host-install(5)
broken-step test-amd64-amd64-xl-qemuu-ovmf-amd64 host-install(5)
broken-step test-amd64-amd64-xl-credit1 host-install(5)
broken-step test-amd64-amd64-xl-shadow host-install(5)
broken-step build-i386-xsm host-install(4)
broken-step test-amd64-amd64-xl-qemut-ws16-amd64 host-install(5)
broken-step test-amd64-amd64-amd64-pvgrub host-install(5)
broken-step test-amd64-amd64-pygrub host-install(5)
broken-step test-amd64-amd64-libvirt-xsm host-install(5)
broken-step test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm host-install(5)
broken-step test-amd64-amd64-libvirt-vhd host-install(5)
broken-step test-amd64-amd64-xl-qemuu-debianhvm-amd64 host-install(5)
broken-step test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow host-install(5)
broken-step test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict host-install(5)
broken-step test-amd64-amd64-qemuu-freebsd11-amd64 host-install(5)
broken-step test-amd64-amd64-xl-qemut-win7-amd64 host-install(5)
broken-step test-amd64-amd64-dom0pvh-xl-intel host-install(5)
broken-step test-amd64-amd64-xl-qemut-debianhvm-i386-xsm host-install(5)
broken-step test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm host-install(5)
broken-step test-amd64-amd64-xl-rtds host-install(5)
broken-step test-amd64-amd64-examine host-install

Not pushing.

(No revision log; it would be 10442 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Feb 07 09:22:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 07 Feb 2021 09:22:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82339.152124 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8gHC-0007uX-F3; Sun, 07 Feb 2021 09:22:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82339.152124; Sun, 07 Feb 2021 09:22:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8gHC-0007uQ-BJ; Sun, 07 Feb 2021 09:22:42 +0000
Received: by outflank-mailman (input) for mailman id 82339;
 Sun, 07 Feb 2021 09:22:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8gHB-0007uI-H5; Sun, 07 Feb 2021 09:22:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8gHB-0001LL-Cs; Sun, 07 Feb 2021 09:22:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8gHB-0007PM-5Y; Sun, 07 Feb 2021 09:22:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l8gHB-0002kv-55; Sun, 07 Feb 2021 09:22:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=BDwD5KMkdJbEedYVk/xFFhvs3WN1jWqyJMW96URZuPs=; b=Kfs1ScdC9RK1bFWJawn8vPpzMq
	MyrCoyzeDB4JwQWjtvQ7wmidcQ3hpPoUebh79huhFHk9yW7fHTBPohMeKPZILkS3cn5xMoHyDt5JR
	b0J1bu64F67Qj9tJ+KBFxuUuwaXG6XJrt+TF//01vNxkMhpagA1NemtCIslM44HWAkvg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159064-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 159064: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    libvirt:build-i386-libvirt:<job status>:broken:regression
    libvirt:build-i386-xsm:<job status>:broken:regression
    libvirt:build-i386-libvirt:host-install(4):broken:regression
    libvirt:build-i386-xsm:host-install(4):broken:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=3068294e77c1ad0e08799fc76fac0ef0cee5c1bc
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 07 Feb 2021 09:22:41 +0000

flight 159064 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159064/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-libvirt              <job status>                 broken
 build-i386-xsm                  <job status>                 broken
 build-i386-libvirt            4 host-install(4)        broken REGR. vs. 151777
 build-i386-xsm                4 host-install(4)        broken REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              3068294e77c1ad0e08799fc76fac0ef0cee5c1bc
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  212 days
Failing since        151818  2020-07-11 04:18:52 Z  211 days  205 attempts
Testing same since   159064  2021-02-06 04:19:49 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               broken  
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           broken  
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-i386-libvirt broken
broken-job build-i386-xsm broken
broken-step build-i386-libvirt host-install(4)
broken-step build-i386-xsm host-install(4)

Not pushing.

(No revision log; it would be 40492 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Feb 07 11:34:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 07 Feb 2021 11:34:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82356.152145 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8iKV-0003Hb-Bf; Sun, 07 Feb 2021 11:34:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82356.152145; Sun, 07 Feb 2021 11:34:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8iKV-0003HU-6u; Sun, 07 Feb 2021 11:34:15 +0000
Received: by outflank-mailman (input) for mailman id 82356;
 Sun, 07 Feb 2021 11:34:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8iKT-0003HM-Qm; Sun, 07 Feb 2021 11:34:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8iKT-0003TH-LT; Sun, 07 Feb 2021 11:34:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8iKT-0001qq-Dk; Sun, 07 Feb 2021 11:34:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l8iKT-0008Kq-DD; Sun, 07 Feb 2021 11:34:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=PdVf3L0uSJOq/JKpGeio4Sr2ruwCkZwtrzmKX6i6d5g=; b=WvTnqC9QQuONXE67QRLlnKnwvk
	/OavTrLUOlbNu+oJolHj+qVqP7P1ZiuaDhpDDL3H2emcohhxZ30N61zocab4U1XTEzdK1WZ7/S451
	vn+x2Pmx3lKyI1HkAhatzFHms9+LNJYy7peHe3pVSIANctlMI1MP5WDEKvVIRBOXYeFY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159095-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 159095: trouble: broken
X-Osstest-Failures:
    xen-unstable-coverity:coverity-amd64:<job status>:broken:regression
    xen-unstable-coverity:coverity-amd64:host-install(4):broken:regression
X-Osstest-Versions-This:
    xen=ca82d3fecc93745ee17850a609ac7772bd7c8bf7
X-Osstest-Versions-That:
    xen=5e7aa904405fa2f268c3af213516bae271de3265
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 07 Feb 2021 11:34:13 +0000

flight 159095 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159095/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 coverity-amd64                  <job status>                 broken
 coverity-amd64                4 host-install(4)        broken REGR. vs. 158979

version targeted for testing:
 xen                  ca82d3fecc93745ee17850a609ac7772bd7c8bf7
baseline version:
 xen                  5e7aa904405fa2f268c3af213516bae271de3265

Last test of basis   158979  2021-02-03 09:18:36 Z    4 days
Testing same since   159095  2021-02-07 09:19:37 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Edwin Török <edvin.torok@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Manuel Bouyer <bouyer@netbsd.org>
  Michał Leszczyński <michal.leszczynski@cert.pl>
  Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Tamas K Lengyel <tamas.lengyel@intel.com>

jobs:
 coverity-amd64                                               broken  


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 453 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Feb 07 11:49:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 07 Feb 2021 11:49:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82362.152166 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8iYf-0004MF-M5; Sun, 07 Feb 2021 11:48:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82362.152166; Sun, 07 Feb 2021 11:48:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8iYf-0004M8-Is; Sun, 07 Feb 2021 11:48:53 +0000
Received: by outflank-mailman (input) for mailman id 82362;
 Sun, 07 Feb 2021 11:48:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8iYe-0004M0-Ci; Sun, 07 Feb 2021 11:48:52 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8iYe-0003iB-0t; Sun, 07 Feb 2021 11:48:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8iYd-00029N-NQ; Sun, 07 Feb 2021 11:48:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l8iYd-0002Up-Mu; Sun, 07 Feb 2021 11:48:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=4CKTN6J+MaU3uyivWgWvkmsarlrse1lRQvcyOH1TQ8I=; b=BipdlLUrlBVRCS8hXBJFS+ZUDU
	BElypRLDiM73MC+Z6L/Aeba+zjrkD0b1ol9jv0pVht1Hm9EQf4ZnL8LSgom6CA3laOB1e31dyjgMp
	17r4TLRdt0d6Jm8FPZ8sQohPvAuOtQBwkR+PwN/cxtC+pw7T4lHjbfhRJ2C93NIPpRmM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159061-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 159061: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-libvirt:<job status>:broken:regression
    linux-linus:test-armhf-armhf-xl-credit2:<job status>:broken:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:<job status>:broken:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:<job status>:broken:regression
    linux-linus:test-amd64-coresched-amd64-xl:<job status>:broken:regression
    linux-linus:test-amd64-amd64-xl-xsm:<job status>:broken:regression
    linux-linus:test-amd64-amd64-xl-shadow:<job status>:broken:regression
    linux-linus:test-amd64-amd64-xl-rtds:<job status>:broken:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:<job status>:broken:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:<job status>:broken:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:<job status>:broken:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:<job status>:broken:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:<job status>:broken:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:<job status>:broken:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:<job status>:broken:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:<job status>:broken:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:<job status>:broken:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:<job status>:broken:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:<job status>:broken:regression
    linux-linus:build-i386:<job status>:broken:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:<job status>:broken:regression
    linux-linus:build-i386-pvops:<job status>:broken:regression
    linux-linus:test-amd64-amd64-xl-qcow2:<job status>:broken:regression
    linux-linus:build-i386-xsm:<job status>:broken:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:<job status>:broken:regression
    linux-linus:test-amd64-amd64-xl-pvshim:<job status>:broken:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:<job status>:broken:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:<job status>:broken:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:<job status>:broken:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:<job status>:broken:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:<job status>:broken:regression
    linux-linus:test-amd64-amd64-libvirt:<job status>:broken:regression
    linux-linus:test-amd64-amd64-libvirt-pair:<job status>:broken:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:<job status>:broken:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:<job status>:broken:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:<job status>:broken:regression
    linux-linus:test-amd64-amd64-xl-credit2:<job status>:broken:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:<job status>:broken:regression
    linux-linus:test-amd64-amd64-pair:<job status>:broken:regression
    linux-linus:test-amd64-amd64-xl-credit1:<job status>:broken:regression
    linux-linus:test-amd64-amd64-pygrub:<job status>:broken:regression
    linux-linus:test-amd64-amd64-qemuu-freebsd11-amd64:<job status>:broken:regression
    linux-linus:test-amd64-amd64-xl:<job status>:broken:regression
    linux-linus:test-amd64-amd64-qemuu-freebsd12-amd64:<job status>:broken:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:<job status>:broken:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:<job status>:broken:regression
    linux-linus:build-i386:host-install(4):broken:regression
    linux-linus:build-i386-pvops:host-install(4):broken:regression
    linux-linus:build-i386-xsm:host-install(4):broken:regression
    linux-linus:test-amd64-amd64-examine:host-install:broken:regression
    linux-linus:test-arm64-arm64-xl:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:guest-start:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    linux-linus:build-i386-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-xl-multivcpu:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-xl-qcow2:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:host-install(5):broken:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-amd64-pvgrub:host-install(5):broken:nonblocking
    linux-linus:test-amd64-coresched-amd64-xl:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-xl-credit2:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-libvirt:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-pair:host-install/src_host(6):broken:nonblocking
    linux-linus:test-amd64-amd64-pair:host-install/dst_host(7):broken:nonblocking
    linux-linus:test-amd64-amd64-xl-rtds:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-intel:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-i386-pvgrub:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-qemuu-freebsd12-amd64:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-xl-pvshim:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-qemuu-freebsd11-amd64:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-pygrub:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-libvirt-pair:host-install/src_host(6):broken:nonblocking
    linux-linus:test-amd64-amd64-xl-credit1:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-xl-shadow:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-xl:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-libvirt-pair:host-install/dst_host(7):broken:nonblocking
    linux-linus:test-amd64-amd64-xl-xsm:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:host-install(5):broken:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:host-install(5):broken:nonblocking
    linux-linus:test-armhf-armhf-libvirt:host-install(5):broken:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:host-install(5):broken:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:leak-check/basis(11):fail:nonblocking
X-Osstest-Versions-This:
    linux=1e0d27fce010b0a4a9e595506b6ede75934c31be
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 07 Feb 2021 11:48:51 +0000

flight 159061 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159061/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-libvirt        <job status>                 broken
 test-armhf-armhf-xl-credit2     <job status>                 broken
 test-armhf-armhf-xl-cubietruck    <job status>                 broken
 test-armhf-armhf-xl-multivcpu    <job status>                 broken
 test-amd64-coresched-amd64-xl    <job status>                 broken
 test-amd64-amd64-xl-xsm         <job status>                 broken
 test-amd64-amd64-xl-shadow      <job status>                 broken
 test-amd64-amd64-xl-rtds        <job status>                 broken
 test-amd64-amd64-xl-qemuu-ws16-amd64    <job status>                 broken
 test-amd64-amd64-xl-qemuu-win7-amd64    <job status>                 broken
 test-amd64-amd64-xl-qemuu-ovmf-amd64    <job status>                 broken
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict    <job status>   broken
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm    <job status>            broken
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow    <job status>        broken
 test-amd64-amd64-xl-qemuu-debianhvm-amd64    <job status>               broken
 test-amd64-amd64-xl-qemut-ws16-amd64    <job status>                 broken
 test-amd64-amd64-xl-qemut-win7-amd64    <job status>                 broken
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm    <job status>   broken
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm    <job status>            broken
 build-i386                      <job status>                 broken
 test-amd64-amd64-xl-qemut-debianhvm-amd64    <job status>               broken
 build-i386-pvops                <job status>                 broken
 test-amd64-amd64-xl-qcow2       <job status>                 broken
 build-i386-xsm                  <job status>                 broken
 test-amd64-amd64-amd64-pvgrub    <job status>                 broken
 test-amd64-amd64-xl-pvshim      <job status>                 broken
 test-amd64-amd64-dom0pvh-xl-amd    <job status>                 broken
 test-amd64-amd64-xl-pvhv2-intel    <job status>                 broken
 test-amd64-amd64-dom0pvh-xl-intel    <job status>                 broken
 test-amd64-amd64-i386-pvgrub    <job status>                 broken
 test-amd64-amd64-xl-pvhv2-amd    <job status>                 broken
 test-amd64-amd64-libvirt        <job status>                 broken
 test-amd64-amd64-libvirt-pair    <job status>                 broken
 test-amd64-amd64-xl-multivcpu    <job status>                 broken
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm    <job status>      broken
 test-amd64-amd64-libvirt-vhd    <job status>                 broken
 test-amd64-amd64-xl-credit2     <job status>                 broken
 test-amd64-amd64-libvirt-xsm    <job status>                 broken
 test-amd64-amd64-pair           <job status>                 broken
 test-amd64-amd64-xl-credit1     <job status>                 broken
 test-amd64-amd64-pygrub         <job status>                 broken
 test-amd64-amd64-qemuu-freebsd11-amd64    <job status>                 broken
 test-amd64-amd64-xl             <job status>                 broken
 test-amd64-amd64-qemuu-freebsd12-amd64    <job status>                 broken
 test-amd64-amd64-qemuu-nested-amd    <job status>                 broken
 test-amd64-amd64-qemuu-nested-intel    <job status>                 broken
 build-i386                    4 host-install(4)        broken REGR. vs. 152332
 build-i386-pvops              4 host-install(4)        broken REGR. vs. 152332
 build-i386-xsm                4 host-install(4)        broken REGR. vs. 152332
 test-amd64-amd64-examine      5 host-install           broken REGR. vs. 152332
 test-arm64-arm64-xl          10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 14 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-arndale  14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      10 host-ping-check-xen      fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl          14 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1  14 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     14 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-qemuu-nested-amd  5 host-install(5)  broken blocked in 152332
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 5 host-install(5) broken blocked in 152332
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 5 host-install(5) broken blocked in 152332
 test-amd64-amd64-xl-qemuu-ovmf-amd64 5 host-install(5) broken blocked in 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 5 host-install(5) broken blocked in 152332
 test-amd64-amd64-xl-multivcpu  5 host-install(5)      broken blocked in 152332
 test-amd64-amd64-xl-qcow2     5 host-install(5)       broken blocked in 152332
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 5 host-install(5) broken blocked in 152332
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 5 host-install(5) broken blocked in 152332
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 5 host-install(5) broken blocked in 152332
 test-armhf-armhf-xl-multivcpu  5 host-install(5)      broken blocked in 152332
 test-amd64-amd64-dom0pvh-xl-intel  5 host-install(5)  broken blocked in 152332
 test-amd64-amd64-amd64-pvgrub  5 host-install(5)      broken blocked in 152332
 test-amd64-coresched-amd64-xl  5 host-install(5)      broken blocked in 152332
 test-amd64-amd64-xl-pvhv2-amd  5 host-install(5)      broken blocked in 152332
 test-amd64-amd64-xl-credit2   5 host-install(5)       broken blocked in 152332
 test-amd64-amd64-dom0pvh-xl-amd  5 host-install(5)    broken blocked in 152332
 test-amd64-amd64-libvirt      5 host-install(5)       broken blocked in 152332
 test-amd64-amd64-pair      6 host-install/src_host(6) broken blocked in 152332
 test-amd64-amd64-pair      7 host-install/dst_host(7) broken blocked in 152332
 test-amd64-amd64-xl-rtds      5 host-install(5)       broken blocked in 152332
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 5 host-install(5) broken blocked in 152332
 test-amd64-amd64-qemuu-nested-intel 5 host-install(5) broken blocked in 152332
 test-amd64-amd64-i386-pvgrub  5 host-install(5)       broken blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 5 host-install(5) broken blocked in 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 5 host-install(5) broken blocked in 152332
 test-amd64-amd64-qemuu-freebsd12-amd64 5 host-install(5) broken blocked in 152332
 test-amd64-amd64-xl-pvshim    5 host-install(5)       broken blocked in 152332
 test-amd64-amd64-qemuu-freebsd11-amd64 5 host-install(5) broken blocked in 152332
 test-amd64-amd64-pygrub       5 host-install(5)       broken blocked in 152332
 test-amd64-amd64-xl-qemut-debianhvm-amd64 5 host-install(5) broken blocked in 152332
 test-amd64-amd64-libvirt-xsm  5 host-install(5)       broken blocked in 152332
 test-amd64-amd64-xl-pvhv2-intel  5 host-install(5)    broken blocked in 152332
 test-amd64-amd64-libvirt-pair 6 host-install/src_host(6) broken blocked in 152332
 test-amd64-amd64-xl-credit1   5 host-install(5)       broken blocked in 152332
 test-amd64-amd64-xl-shadow    5 host-install(5)       broken blocked in 152332
 test-amd64-amd64-xl           5 host-install(5)       broken blocked in 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 5 host-install(5) broken blocked in 152332
 test-amd64-amd64-libvirt-vhd  5 host-install(5)       broken blocked in 152332
 test-amd64-amd64-libvirt-pair 7 host-install/dst_host(7) broken blocked in 152332
 test-amd64-amd64-xl-xsm       5 host-install(5)       broken blocked in 152332
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 5 host-install(5) broken blocked in 152332
 test-armhf-armhf-xl-cubietruck  5 host-install(5)     broken blocked in 152332
 test-armhf-armhf-libvirt      5 host-install(5)       broken blocked in 152332
 test-armhf-armhf-xl-credit2   5 host-install(5)       broken blocked in 152332
 test-arm64-arm64-xl-credit1  11 leak-check/basis(11)    fail blocked in 152332

version targeted for testing:
 linux                1e0d27fce010b0a4a9e595506b6ede75934c31be
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  190 days
Failing since        152366  2020-08-01 20:49:34 Z  189 days  336 attempts
Testing same since   159061  2021-02-06 01:39:25 Z    1 days    1 attempts

------------------------------------------------------------
4541 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               broken  
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   broken  
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          broken  
 test-amd64-coresched-amd64-xl                                broken  
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           broken  
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        broken  
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 broken  
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 broken  
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 broken  
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      broken  
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            broken  
 test-amd64-amd64-xl-pvhv2-amd                                broken  
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              broken  
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    broken  
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    broken  
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       broken  
 test-amd64-amd64-qemuu-freebsd12-amd64                       broken  
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         broken  
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         broken  
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         broken  
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         broken  
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         broken  
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-xl-credit1                                  broken  
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  broken  
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  broken  
 test-armhf-armhf-xl-cubietruck                               broken  
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        broken  
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          broken  
 test-amd64-amd64-xl-pvhv2-intel                              broken  
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            broken  
 test-amd64-amd64-libvirt                                     broken  
 test-armhf-armhf-libvirt                                     broken  
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                broken  
 test-armhf-armhf-xl-multivcpu                                broken  
 test-amd64-amd64-pair                                        broken  
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                broken  
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                broken  
 test-amd64-amd64-i386-pvgrub                                 broken  
 test-amd64-amd64-xl-pvshim                                   broken  
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      broken  
 test-amd64-amd64-xl-qcow2                                    broken  
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     broken  
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             broken  
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   broken  
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 broken  
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-libvirt broken
broken-job test-armhf-armhf-xl-credit2 broken
broken-job test-armhf-armhf-xl-cubietruck broken
broken-job test-armhf-armhf-xl-multivcpu broken
broken-job test-amd64-coresched-amd64-xl broken
broken-job test-amd64-amd64-xl-xsm broken
broken-job test-amd64-amd64-xl-shadow broken
broken-job test-amd64-amd64-xl-rtds broken
broken-job test-amd64-amd64-xl-qemuu-ws16-amd64 broken
broken-job test-amd64-amd64-xl-qemuu-win7-amd64 broken
broken-job test-amd64-amd64-xl-qemuu-ovmf-amd64 broken
broken-job test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict broken
broken-job test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm broken
broken-job test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow broken
broken-job test-amd64-amd64-xl-qemuu-debianhvm-amd64 broken
broken-job test-amd64-amd64-xl-qemut-ws16-amd64 broken
broken-job test-amd64-amd64-xl-qemut-win7-amd64 broken
broken-job test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm broken
broken-job test-amd64-amd64-xl-qemut-debianhvm-i386-xsm broken
broken-job build-i386 broken
broken-job test-amd64-amd64-xl-qemut-debianhvm-amd64 broken
broken-job build-i386-pvops broken
broken-job test-amd64-amd64-xl-qcow2 broken
broken-job build-i386-xsm broken
broken-job test-amd64-amd64-amd64-pvgrub broken
broken-job test-amd64-amd64-xl-pvshim broken
broken-job test-amd64-amd64-dom0pvh-xl-amd broken
broken-job test-amd64-amd64-xl-pvhv2-intel broken
broken-job test-amd64-amd64-dom0pvh-xl-intel broken
broken-job test-amd64-amd64-i386-pvgrub broken
broken-job test-amd64-amd64-xl-pvhv2-amd broken
broken-job test-amd64-amd64-libvirt broken
broken-job test-amd64-amd64-libvirt-pair broken
broken-job test-amd64-amd64-xl-multivcpu broken
broken-job test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm broken
broken-job test-amd64-amd64-libvirt-vhd broken
broken-job test-amd64-amd64-xl-credit2 broken
broken-job test-amd64-amd64-libvirt-xsm broken
broken-job test-amd64-amd64-pair broken
broken-job test-amd64-amd64-xl-credit1 broken
broken-job test-amd64-amd64-pygrub broken
broken-job test-amd64-amd64-qemuu-freebsd11-amd64 broken
broken-job test-amd64-amd64-xl broken
broken-job test-amd64-amd64-qemuu-freebsd12-amd64 broken
broken-job test-amd64-amd64-qemuu-nested-amd broken
broken-job test-amd64-amd64-qemuu-nested-intel broken
broken-step test-amd64-amd64-qemuu-nested-amd host-install(5)
broken-step test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm host-install(5)
broken-step test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm host-install(5)
broken-step test-amd64-amd64-xl-qemuu-ovmf-amd64 host-install(5)
broken-step test-amd64-amd64-xl-qemuu-win7-amd64 host-install(5)
broken-step test-amd64-amd64-xl-multivcpu host-install(5)
broken-step test-amd64-amd64-xl-qcow2 host-install(5)
broken-step test-amd64-amd64-xl-qemuu-debianhvm-amd64 host-install(5)
broken-step test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow host-install(5)
broken-step test-amd64-amd64-xl-qemut-debianhvm-i386-xsm host-install(5)
broken-step test-armhf-armhf-xl-multivcpu host-install(5)
broken-step test-amd64-amd64-dom0pvh-xl-intel host-install(5)
broken-step test-amd64-amd64-amd64-pvgrub host-install(5)
broken-step test-amd64-coresched-amd64-xl host-install(5)
broken-step test-amd64-amd64-xl-pvhv2-amd host-install(5)
broken-step test-amd64-amd64-xl-credit2 host-install(5)
broken-step test-amd64-amd64-dom0pvh-xl-amd host-install(5)
broken-step test-amd64-amd64-libvirt host-install(5)
broken-step test-amd64-amd64-pair host-install/src_host(6)
broken-step test-amd64-amd64-pair host-install/dst_host(7)
broken-step test-amd64-amd64-xl-rtds host-install(5)
broken-step test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict host-install(5)
broken-step test-amd64-amd64-qemuu-nested-intel host-install(5)
broken-step test-amd64-amd64-i386-pvgrub host-install(5)
broken-step test-amd64-amd64-xl-qemut-win7-amd64 host-install(5)
broken-step test-amd64-amd64-xl-qemut-ws16-amd64 host-install(5)
broken-step test-amd64-amd64-qemuu-freebsd12-amd64 host-install(5)
broken-step test-amd64-amd64-xl-pvshim host-install(5)
broken-step test-amd64-amd64-qemuu-freebsd11-amd64 host-install(5)
broken-step test-amd64-amd64-pygrub host-install(5)
broken-step test-amd64-amd64-xl-qemut-debianhvm-amd64 host-install(5)
broken-step build-i386 host-install(4)
broken-step test-amd64-amd64-libvirt-xsm host-install(5)
broken-step build-i386-pvops host-install(4)
broken-step build-i386-xsm host-install(4)
broken-step test-amd64-amd64-xl-pvhv2-intel host-install(5)
broken-step test-amd64-amd64-libvirt-pair host-install/src_host(6)
broken-step test-amd64-amd64-xl-credit1 host-install(5)
broken-step test-amd64-amd64-xl-shadow host-install(5)
broken-step test-amd64-amd64-xl host-install(5)
broken-step test-amd64-amd64-xl-qemuu-ws16-amd64 host-install(5)
broken-step test-amd64-amd64-libvirt-vhd host-install(5)
broken-step test-amd64-amd64-libvirt-pair host-install/dst_host(7)
broken-step test-amd64-amd64-xl-xsm host-install(5)
broken-step test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm host-install(5)
broken-step test-amd64-amd64-examine host-install
broken-step test-armhf-armhf-xl-cubietruck host-install(5)
broken-step test-armhf-armhf-libvirt host-install(5)
broken-step test-armhf-armhf-xl-credit2 host-install(5)

Not pushing.

(No revision log; it would be 1026545 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Feb 07 12:58:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 07 Feb 2021 12:58:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82388.152181 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8jdz-0002Wl-RJ; Sun, 07 Feb 2021 12:58:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82388.152181; Sun, 07 Feb 2021 12:58:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8jdz-0002We-OC; Sun, 07 Feb 2021 12:58:27 +0000
Received: by outflank-mailman (input) for mailman id 82388;
 Sun, 07 Feb 2021 12:58:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7TP6=HJ=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1l8jdy-0002WZ-JY
 for xen-devel@lists.xenproject.org; Sun, 07 Feb 2021 12:58:26 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 487c08bd-bc4e-46ab-8a0c-662dcd3d46c4;
 Sun, 07 Feb 2021 12:58:23 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B65BFAD3E;
 Sun,  7 Feb 2021 12:58:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 487c08bd-bc4e-46ab-8a0c-662dcd3d46c4
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612702703; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=R4cfw25YjQoJcPP/iQMBZfe/aKgR1aMn6caYYNJq2oA=;
	b=NVUgb1rUvbGL7B8EVkCTRilE+7aJXjXcGnDQicYl5bl0Tjm57ZfU+dPxnJkVUB5wFTJze/
	T737CA8q//lZm2NPNe8HQRz++grQzU4akhrhcv9Szk8TeLE/1eK6UdVIiN6goOOz/lYcrg
	VOH+VrW52LA+831qJjxNScRTPTjO5R4=
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
 netdev@vger.kernel.org, linux-scsi@vger.kernel.org
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, stable@vger.kernel.org,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Jens Axboe <axboe@kernel.dk>, Wei Liu <wei.liu@kernel.org>,
 Paul Durrant <paul@xen.org>, "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>
References: <20210206104932.29064-1-jgross@suse.com>
 <bd63694e-ac0c-7954-ec00-edad05f8da1c@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [PATCH 0/7] xen/events: bug fixes and some diagnostic aids
Message-ID: <eeb62129-d9fc-2155-0e0f-aff1fbb33fbc@suse.com>
Date: Sun, 7 Feb 2021 13:58:20 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <bd63694e-ac0c-7954-ec00-edad05f8da1c@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="jbyJhFnNSEXLNH4hAQuvMm4TNC9cM8aAU"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--jbyJhFnNSEXLNH4hAQuvMm4TNC9cM8aAU
Content-Type: multipart/mixed; boundary="WsIzMPvAKA2367z5Rpwd77YJ5fV8xFlI0";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
 netdev@vger.kernel.org, linux-scsi@vger.kernel.org
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, stable@vger.kernel.org,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Jens Axboe <axboe@kernel.dk>, Wei Liu <wei.liu@kernel.org>,
 Paul Durrant <paul@xen.org>, "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>
Message-ID: <eeb62129-d9fc-2155-0e0f-aff1fbb33fbc@suse.com>
Subject: Re: [PATCH 0/7] xen/events: bug fixes and some diagnostic aids
References: <20210206104932.29064-1-jgross@suse.com>
 <bd63694e-ac0c-7954-ec00-edad05f8da1c@xen.org>
In-Reply-To: <bd63694e-ac0c-7954-ec00-edad05f8da1c@xen.org>

--WsIzMPvAKA2367z5Rpwd77YJ5fV8xFlI0
Content-Type: multipart/mixed;
 boundary="------------2346595A18135BE727B08B51"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------2346595A18135BE727B08B51
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 06.02.21 19:46, Julien Grall wrote:
> Hi Juergen,
> 
> On 06/02/2021 10:49, Juergen Gross wrote:
>> The first three patches are fixes for XSA-332. They avoid WARN splats
>> and a performance issue with interdomain events.
> 
> Thanks for helping to figure out the problem. Unfortunately, I still
> reliably see the WARN splat with the latest Linux master (1e0d27fce010)
> + your first 3 patches.
> 
> I am using Xen 4.11 (1c7d984645f9) and dom0 is forced to use the 2L
> events ABI.
> 
> After some debugging, I think I have an idea what went wrong. The
> problem happens when the event is initially bound from vCPU0 to a
> different vCPU.
> 
>  From the comment in xen_rebind_evtchn_to_cpu(), we are masking the
> event to prevent it being delivered on an unexpected vCPU. However, I
> believe the following can happen:
> 
> vCPU0                | vCPU1
>                      |
>                      | Call xen_rebind_evtchn_to_cpu()
> receive event X      |
>                      | mask event X
>                      | bind to vCPU1
> <vCPU descheduled>   | unmask event X
>                      |
>                      | receive event X
>                      |
>                      | handle_edge_irq(X)
> handle_edge_irq(X)   |  -> handle_irq_event()
>                      |   -> set IRQD_IN_PROGRESS
>  -> set IRQS_PENDING |
>                      |   -> evtchn_interrupt()
>                      |   -> clear IRQD_IN_PROGRESS
>  =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=
=C2=A0=C2=A0=C2=A0 |=C2=A0 -> IRQS_PENDING is set
>  =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=
=C2=A0=C2=A0=C2=A0 |=C2=A0 -> handle_irq_event()
>  =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=
=C2=A0=C2=A0=C2=A0 |=C2=A0=C2=A0 -> evtchn_interrupt()
>  =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=
=C2=A0=C2=A0=C2=A0 |=C2=A0=C2=A0=C2=A0=C2=A0 -> WARN()
>  =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=
=C2=A0=C2=A0=C2=A0 |
>=20
> All the lateeoi handlers expect a ONESHOT semantic and=20
> evtchn_interrupt() is doesn't tolerate any deviation.
>=20
> I think the problem was introduced by 7f874a0447a9 ("xen/events: fix=20
> lateeoi irq acknowledgment") because the interrupt was disabled=20
> previously. Therefore we wouldn't do another iteration in=20
> handle_edge_irq().

I think you picked the wrong commit to blame, as this is just
the last of the three patches you were testing.

> Aside from the handlers, I think it may impact the deferred EOI
> mitigation, because in theory a third vCPU could join the party (say
> vCPU A migrates the event from vCPU B to vCPU C). So info->{eoi_cpu,
> irq_epoch, eoi_time} could possibly get mangled?
>
> For a fix, we may want to consider holding evtchn_rwlock with write
> permission. Although, I am not 100% sure this is going to prevent
> everything.

It will make things worse, as it would violate the locking hierarchy
(xen_rebind_evtchn_to_cpu() is called with the IRQ-desc lock held).

At first glance I think we'll need a third masking state ("temporarily
masked") in the second patch in order to avoid a race with lateeoi.

In order to avoid the race you outlined above we need an "event is being
handled" indicator checked via test_and_set() semantics in
handle_irq_for_port() and reset only when calling clear_evtchn().
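A minimal userspace sketch of that idea (made-up names, plain C11 atomics instead of the kernel primitives, not the actual implementation): a per-port "in handling" flag is claimed with test_and_set semantics on entry to the port handler and released only by the clear_evtchn() analogue, so a second concurrent delivery of the same event is dropped instead of re-running the handler:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* "event is being handled" indicator; one per port in real code */
static atomic_flag evtchn_in_handling = ATOMIC_FLAG_INIT;

static int events_delivered;

/* clear_evtchn() analogue: handling is complete, allow the next event */
static void clear_evtchn_sketch(void)
{
    atomic_flag_clear(&evtchn_in_handling);
}

/* handle_irq_for_port() analogue: only the first caller wins */
static bool handle_irq_for_port_sketch(void)
{
    if (atomic_flag_test_and_set(&evtchn_in_handling))
        return false;       /* another vCPU is already handling it */
    events_delivered++;     /* run the actual handler exactly once */
    clear_evtchn_sketch();
    return true;
}
```

In the scenario above, vCPU1's second handle_irq_event() iteration would then find the indicator still set by vCPU0 and back off instead of calling evtchn_interrupt() a second time.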

> Does my write-up make sense to you?

Yes. What about my reply? ;-)


Juergen

--------------2346595A18135BE727B08B51
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------2346595A18135BE727B08B51--

--WsIzMPvAKA2367z5Rpwd77YJ5fV8xFlI0--

--jbyJhFnNSEXLNH4hAQuvMm4TNC9cM8aAU
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmAf4+wFAwAAAAAACgkQsN6d1ii/Ey/K
wAf/TDFYrTVsLzXeGxzeWtu1WaVoa4Ox7JPKEZSm8YJEojQtmVLPl+srPzcEDNlv3AAuecZNf2JB
5itbzEBHszyYZFVlr/uWRtkGM6d6wiiDngAo+sQHw7HUM3izmn+CxiwEXVjIitUvDOzsLm6ORRDy
sqJ71jM50C9fP7f+mCAlK3CJ3SnIULeMnuisSyu/QNwpYNXf8KJa9WhB5UAwBYN1rielG0LFQTeH
7gSZqmDj+5ahj9ZFXKAcwjHRjoz8eOqKHA1SYoSL2Hl2HjnuBCjK/zzFUzAaDzXUxAZHtkArREZZ
D2ttbvrl1JzUc4Nhf5iaKCJbCSeu+ET0u+AVKHIwFQ==
=TWlv
-----END PGP SIGNATURE-----

--jbyJhFnNSEXLNH4hAQuvMm4TNC9cM8aAU--


From xen-devel-bounces@lists.xenproject.org Sun Feb 07 15:56:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 07 Feb 2021 15:56:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82424.152205 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8mPz-0002KY-Uj; Sun, 07 Feb 2021 15:56:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82424.152205; Sun, 07 Feb 2021 15:56:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8mPz-0002KR-Rd; Sun, 07 Feb 2021 15:56:11 +0000
Received: by outflank-mailman (input) for mailman id 82424;
 Sun, 07 Feb 2021 15:56:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Evt2=HJ=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1l8mPz-0002KM-5n
 for xen-devel@lists.xenproject.org; Sun, 07 Feb 2021 15:56:11 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3e0603dc-e1a6-405a-afa1-61ad820dc84d;
 Sun, 07 Feb 2021 15:56:08 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 3732F68B02; Sun,  7 Feb 2021 16:56:02 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3e0603dc-e1a6-405a-afa1-61ad820dc84d
Date: Sun, 7 Feb 2021 16:56:01 +0100
From: Christoph Hellwig <hch@lst.de>
To: Dongli Zhang <dongli.zhang@oracle.com>
Cc: dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	iommu@lists.linux-foundation.org, linux-mips@vger.kernel.org,
	linux-mmc@vger.kernel.org, linux-pci@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, nouveau@lists.freedesktop.org,
	x86@kernel.org, xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org, adrian.hunter@intel.com,
	akpm@linux-foundation.org, benh@kernel.crashing.org,
	bskeggs@redhat.com, bhelgaas@google.com, bp@alien8.de,
	boris.ostrovsky@oracle.com, hch@lst.de, chris@chris-wilson.co.uk,
	daniel@ffwll.ch, airlied@linux.ie, hpa@zytor.com, mingo@kernel.org,
	mingo@redhat.com, jani.nikula@linux.intel.com,
	joonas.lahtinen@linux.intel.com, jgross@suse.com,
	konrad.wilk@oracle.com, m.szyprowski@samsung.com,
	matthew.auld@intel.com, mpe@ellerman.id.au, rppt@kernel.org,
	paulus@samba.org, peterz@infradead.org, robin.murphy@arm.com,
	rodrigo.vivi@intel.com, sstabellini@kernel.org,
	bauerman@linux.ibm.com, tsbogend@alpha.franken.de,
	tglx@linutronix.de, ulf.hansson@linaro.org, joe.jin@oracle.com,
	thomas.lendacky@amd.com
Subject: Re: [PATCH RFC v1 5/6] xen-swiotlb: convert variables to arrays
Message-ID: <20210207155601.GA25111@lst.de>
References: <20210203233709.19819-1-dongli.zhang@oracle.com> <20210203233709.19819-6-dongli.zhang@oracle.com> <20210204084023.GA32328@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210204084023.GA32328@lst.de>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Thu, Feb 04, 2021 at 09:40:23AM +0100, Christoph Hellwig wrote:
> So one thing that has been on my mind for a while:  I'd really like
> to kill the separate dma ops in Xen swiotlb.  If we compare xen-swiotlb
> to swiotlb, the main differences seem to be:
> 
>  - additional reasons to bounce I/O beyond plain DMA capability
>  - the possibility to do a hypercall on arm/arm64
>  - an extra translation layer before doing the phys_to_dma and vice
>    versa
>  - a special memory allocator
> 
> I wonder if, between a few jump labels or other no-overhead enablement
> options and possibly better use of the dma_range_map, we could kill
> off most of swiotlb-xen instead of maintaining all this code duplication?

So I looked at this a bit more.

For x86 with XENFEAT_auto_translated_physmap (how common is that?)
pfn_to_gfn is a nop, so plain phys_to_dma/dma_to_phys do work as-is.

xen_arch_need_swiotlb always returns true for x86, and
range_straddles_page_boundary should never be true for the
XENFEAT_auto_translated_physmap case.

So as far as I can tell, the mapping fast path for the
XENFEAT_auto_translated_physmap case can be trivially reused from swiotlb.

That leaves us with the next more complicated case, x86 or fully cache
coherent arm{,64} without XENFEAT_auto_translated_physmap.  In that case
we need to patch in a phys_to_dma/dma_to_phys that performs the MFN
lookup, which could be done using alternatives or jump labels.
I think if that is done right we should also be able to let that cover
the foreign pages in is_xen_swiotlb_buffer/is_swiotlb_buffer, but
in the worst case that would need another alternative / jump label.

For non-coherent arm{,64} we'd also need to use alternatives or jump
labels for the cache maintenance ops, but that isn't a hard problem
either.
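As a rough userspace model of the idea (hypothetical names, not kernel code): a boolean stands in for the jump label / static key that would be patched once at boot, selecting whether the MFN lookup is inserted into the phys_to_dma path:

```c
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_MASK  ((UINT64_C(1) << PAGE_SHIFT) - 1)

/* Stand-in for a static key: set once at boot depending on whether
 * this domain needs the PFN -> MFN translation layer. */
static bool xen_pv_dma_translation;

/* Placeholder translation (the real lookup walks the P2M). */
static uint64_t pfn_to_mfn_sketch(uint64_t pfn)
{
    return pfn ^ 0x100000;  /* arbitrary stand-in mapping */
}

static uint64_t phys_to_dma_sketch(uint64_t paddr)
{
    uint64_t pfn = paddr >> PAGE_SHIFT;

    if (xen_pv_dma_translation)  /* static_branch_unlikely() in real code */
        pfn = pfn_to_mfn_sketch(pfn);
    return (pfn << PAGE_SHIFT) | (paddr & PAGE_MASK);
}
```

With the key disabled the function compiles down to the plain identity path, which is the "no overhead" property jump labels would give the common swiotlb code.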




From xen-devel-bounces@lists.xenproject.org Sun Feb 07 16:09:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 07 Feb 2021 16:09:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82428.152216 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8md9-00042j-5m; Sun, 07 Feb 2021 16:09:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82428.152216; Sun, 07 Feb 2021 16:09:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8md9-00042c-2f; Sun, 07 Feb 2021 16:09:47 +0000
Received: by outflank-mailman (input) for mailman id 82428;
 Sun, 07 Feb 2021 16:09:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QIBx=HJ=casper.srs.infradead.org=batv+661ee30cee4f8a507613+6377+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1l8md7-00042X-RU
 for xen-devel@lists.xenproject.org; Sun, 07 Feb 2021 16:09:46 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c7d135a5-d412-43ec-be84-3a518bb3d43e;
 Sun, 07 Feb 2021 16:09:42 +0000 (UTC)
Received: from [2001:4bb8:184:7d04:4590:5583:6cb7:77c7] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.94 #2 (Red Hat Linux))
 id 1l8mcx-004tm1-ER; Sun, 07 Feb 2021 16:09:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c7d135a5-d412-43ec-be84-3a518bb3d43e
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:Content-Type:Content-ID:
	Content-Description:In-Reply-To:References;
	bh=JfVmnK29gWEUJ5mUlngz+CH9M4JMM7XULZkTDufyTkA=; b=ZpbuHAq6KrjfrtW0zuLMXzIh3N
	4cIjBdabhl28Sk7wP/iiwRa2v76qNjRIpgRlC8HWkcgB0CG7d3l24sOFA7L6QMRSl7R+SIgU1GniZ
	SD3B3vfnVEQM1TqvpPdTCYbQM4LUpBE1GSr7bAxUAL5pxA6tM9ejZ89S1aE6/rBJ0gJIiDKUj5ZpG
	hgUsZi32qvg/MGkxiHRJOpQLU2tzbK6xO7Ui2oeEHS8EwDgQUGCO/FWqzI3m30RRBswiQKnsIkqSY
	msDQiTUDyK/2HG5ocr0uGX6DsGHD9/yvzpvRx7dHp46JjY6TbE3yL+8YqgmatGvKVz5WNC02frig9
	oh/LyeNg==;
From: Christoph Hellwig <hch@lst.de>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>,
	Dongli Zhang <dongli.zhang@oracle.com>,
	Claire Chang <tientzu@chromium.org>,
	xen-devel@lists.xenproject.org,
	linuxppc-dev@lists.ozlabs.org,
	iommu@lists.linux-foundation.org
Subject: swiotlb cleanups
Date: Sun,  7 Feb 2021 17:09:26 +0100
Message-Id: <20210207160934.2955931-1-hch@lst.de>
X-Mailer: git-send-email 2.29.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Hi Konrad,

this series contains a bunch of swiotlb cleanups, mostly to reduce the
amount of internals exposed to code outside of swiotlb.c, which should
help prepare for supporting multiple different bounce buffer pools.


From xen-devel-bounces@lists.xenproject.org Sun Feb 07 16:09:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 07 Feb 2021 16:09:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82429.152229 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8mdD-00044O-Er; Sun, 07 Feb 2021 16:09:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82429.152229; Sun, 07 Feb 2021 16:09:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8mdD-00044F-AV; Sun, 07 Feb 2021 16:09:51 +0000
Received: by outflank-mailman (input) for mailman id 82429;
 Sun, 07 Feb 2021 16:09:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QIBx=HJ=casper.srs.infradead.org=batv+661ee30cee4f8a507613+6377+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1l8mdC-00042X-Hs
 for xen-devel@lists.xenproject.org; Sun, 07 Feb 2021 16:09:50 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d2f4a4d4-4f43-46c1-9e8e-10f239eba289;
 Sun, 07 Feb 2021 16:09:43 +0000 (UTC)
Received: from [2001:4bb8:184:7d04:4590:5583:6cb7:77c7] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.94 #2 (Red Hat Linux))
 id 1l8mcz-004tm4-Bg; Sun, 07 Feb 2021 16:09:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d2f4a4d4-4f43-46c1-9e8e-10f239eba289
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=ud3Ea8LmZI8vUpOxMZKE5jSGVPMWnEsLwfRS0dNFbE0=; b=CqX4vXqbmh6QXJAg3YewZtxxcH
	a2xzhphD8VLBCm3tOSIRvmJqwHwT1+K8P/oc1ldXlA56muOJ0R30QDUGLXMU1HJMX2h3xNCEKCkPP
	7qqxpgtnv2BkqiGDNANYGx4IELardjX+PUkA0qNFFBY1MW/3JChiJHBxfp/YWzQ0E5F3Mdp2/OVh/
	9g6CysD+IM/6jxu/nwmnHPlUGiXtrFDZqSz6+yq/nbOb+l23UwU03aaZmuiZR4BRT9NtcFg9QyoSB
	THL9WiHFfyXyVjQSg87GaF+1IBS1WU5j2owUfxHwogo5pylPoK7SW5yRVtUK+9TksokO0o2L+sabB
	bODKCAsQ==;
From: Christoph Hellwig <hch@lst.de>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>,
	Dongli Zhang <dongli.zhang@oracle.com>,
	Claire Chang <tientzu@chromium.org>,
	xen-devel@lists.xenproject.org,
	linuxppc-dev@lists.ozlabs.org,
	iommu@lists.linux-foundation.org
Subject: [PATCH 1/8] powerpc/svm: stop using io_tlb_start
Date: Sun,  7 Feb 2021 17:09:27 +0100
Message-Id: <20210207160934.2955931-2-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20210207160934.2955931-1-hch@lst.de>
References: <20210207160934.2955931-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Use the local variable that is passed to swiotlb_init_with_tbl for
freeing the memory in the failure case to isolate the code a little
better from swiotlb internals.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 arch/powerpc/platforms/pseries/svm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/platforms/pseries/svm.c b/arch/powerpc/platforms/pseries/svm.c
index 7b739cc7a8a93e..b9968ac7cc0789 100644
--- a/arch/powerpc/platforms/pseries/svm.c
+++ b/arch/powerpc/platforms/pseries/svm.c
@@ -56,7 +56,7 @@ void __init svm_swiotlb_init(void)
 		return;
 
 	if (io_tlb_start)
-		memblock_free_early(io_tlb_start,
+		memblock_free_early(__pa(vstart),
 				    PAGE_ALIGN(io_tlb_nslabs << IO_TLB_SHIFT));
 	panic("SVM: Cannot allocate SWIOTLB buffer");
 }
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Sun Feb 07 16:09:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 07 Feb 2021 16:09:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82430.152241 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8mdI-00047e-N1; Sun, 07 Feb 2021 16:09:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82430.152241; Sun, 07 Feb 2021 16:09:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8mdI-00047X-Jb; Sun, 07 Feb 2021 16:09:56 +0000
Received: by outflank-mailman (input) for mailman id 82430;
 Sun, 07 Feb 2021 16:09:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QIBx=HJ=casper.srs.infradead.org=batv+661ee30cee4f8a507613+6377+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1l8mdH-00042X-IA
 for xen-devel@lists.xenproject.org; Sun, 07 Feb 2021 16:09:55 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 83c2965e-1cdd-4643-a833-b32e7e910eb2;
 Sun, 07 Feb 2021 16:09:47 +0000 (UTC)
Received: from [2001:4bb8:184:7d04:4590:5583:6cb7:77c7] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.94 #2 (Red Hat Linux))
 id 1l8md1-004tmA-Lk; Sun, 07 Feb 2021 16:09:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 83c2965e-1cdd-4643-a833-b32e7e910eb2
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=jFI2mi7dyRl0KQwCReup9amZMDdC7KVLA5/jiqTUJgc=; b=PzcY4sAIeTkV30mNeSR7f4zhev
	1l86naI/SQcUf0zOFqJq4KJw5/8N1oMJjS8yNc/Z8ehT98TCXfbqC0yQIJkU5+vzsX2EJcA7lqXnD
	inxtjszyQ9sK7lZOffjVlwnAIPOGvXzlMpup2gQRm/SVMpF6REFiaCeA3V36wmiPskUOOwbKs2f2/
	Bh9N0vS73XhuuqKUlQtGemY4mIiM6twaFuU0N84w1IvGRIOriBPhiyre+OW/45YumEnkat66MuvVP
	iZjwGsedAM5O6qFZcTy0pBe4gMYLXotX3BJJucUjweYhEXiYYk6TIHHM3R2KLkGXZDkHwdJsP3F08
	DtFINz5Q==;
From: Christoph Hellwig <hch@lst.de>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>,
	Dongli Zhang <dongli.zhang@oracle.com>,
	Claire Chang <tientzu@chromium.org>,
	xen-devel@lists.xenproject.org,
	linuxppc-dev@lists.ozlabs.org,
	iommu@lists.linux-foundation.org
Subject: [PATCH 2/8] xen-swiotlb: use is_swiotlb_buffer in is_xen_swiotlb_buffer
Date: Sun,  7 Feb 2021 17:09:28 +0100
Message-Id: <20210207160934.2955931-3-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20210207160934.2955931-1-hch@lst.de>
References: <20210207160934.2955931-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Use the is_swiotlb_buffer helper to check if a physical address is
a swiotlb buffer.  This works because xen-swiotlb does use the
same buffer as the main swiotlb code, and xen_io_tlb_{start,end}
are just the addresses for it that went through phys_to_virt.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/xen/swiotlb-xen.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 2b385c1b4a99cb..a4026822a889f7 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -111,10 +111,8 @@ static int is_xen_swiotlb_buffer(struct device *dev, dma_addr_t dma_addr)
 	 * have the same virtual address as another address
 	 * in our domain. Therefore _only_ check address within our domain.
 	 */
-	if (pfn_valid(PFN_DOWN(paddr))) {
-		return paddr >= virt_to_phys(xen_io_tlb_start) &&
-		       paddr < virt_to_phys(xen_io_tlb_end);
-	}
+	if (pfn_valid(PFN_DOWN(paddr)))
+		return is_swiotlb_buffer(paddr);
 	return 0;
 }
 
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Sun Feb 07 16:10:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 07 Feb 2021 16:10:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82431.152253 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8mdO-0004Dp-25; Sun, 07 Feb 2021 16:10:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82431.152253; Sun, 07 Feb 2021 16:10:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8mdN-0004DL-Sd; Sun, 07 Feb 2021 16:10:01 +0000
Received: by outflank-mailman (input) for mailman id 82431;
 Sun, 07 Feb 2021 16:10:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QIBx=HJ=casper.srs.infradead.org=batv+661ee30cee4f8a507613+6377+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1l8mdM-00042X-IQ
 for xen-devel@lists.xenproject.org; Sun, 07 Feb 2021 16:10:00 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a86c215f-204d-47a0-a1f2-4377bd28f039;
 Sun, 07 Feb 2021 16:09:48 +0000 (UTC)
Received: from [2001:4bb8:184:7d04:4590:5583:6cb7:77c7] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.94 #2 (Red Hat Linux))
 id 1l8md3-004tmH-BE; Sun, 07 Feb 2021 16:09:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a86c215f-204d-47a0-a1f2-4377bd28f039
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=T8t1FwG6dE6zlEC3wLjW3JLNNjj/ykTR2lwFCEWQrxw=; b=YqZ4QnN1AVP9n/XFE9fsSrvWCF
	8EEyVnum1O4pgtOIHD11S+iAWfm6laLkacn4Z2eWxuAyluOFTvlaedJznz/qrn3jE5wUHhthpubHU
	Apn4BjiI+OFuhTOzZH+VzHVviNaDF4pAfarvhdBIVt8KWXZ43vISJu96/HCVVesdgCAxz3eawu2lg
	2xjUd2GoHUF+9GKve/GSvZ20KB2IuFHtrylOK3+Wn3al61Xht1ZQhvrv95mynOS4veIx/MiL2So2G
	SmG5re6h//fchVk1f1XgUNgw6TBeVkKwuHQVxG2BIwR4ROuxSMOFuG56ulMVKHCEGHViqPxAFdMAg
	CFXHUMMQ==;
From: Christoph Hellwig <hch@lst.de>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>,
	Dongli Zhang <dongli.zhang@oracle.com>,
	Claire Chang <tientzu@chromium.org>,
	xen-devel@lists.xenproject.org,
	linuxppc-dev@lists.ozlabs.org,
	iommu@lists.linux-foundation.org
Subject: [PATCH 3/8] xen-swiotlb: use io_tlb_end in xen_swiotlb_dma_supported
Date: Sun,  7 Feb 2021 17:09:29 +0100
Message-Id: <20210207160934.2955931-4-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20210207160934.2955931-1-hch@lst.de>
References: <20210207160934.2955931-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Use the existing variable that holds the physical address for
xen_io_tlb_end to simplify xen_swiotlb_dma_supported a bit, and remove
the otherwise unused xen_io_tlb_end variable and the xen_virt_to_bus
helper.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/xen/swiotlb-xen.c | 10 ++--------
 1 file changed, 2 insertions(+), 8 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index a4026822a889f7..4298f74a083985 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -46,7 +46,7 @@
  * API.
  */
 
-static char *xen_io_tlb_start, *xen_io_tlb_end;
+static char *xen_io_tlb_start;
 static unsigned long xen_io_tlb_nslabs;
 /*
  * Quick lookup value of the bus address of the IOTLB.
@@ -82,11 +82,6 @@ static inline phys_addr_t xen_dma_to_phys(struct device *dev,
 	return xen_bus_to_phys(dev, dma_to_phys(dev, dma_addr));
 }
 
-static inline dma_addr_t xen_virt_to_bus(struct device *dev, void *address)
-{
-	return xen_phys_to_dma(dev, virt_to_phys(address));
-}
-
 static inline int range_straddles_page_boundary(phys_addr_t p, size_t size)
 {
 	unsigned long next_bfn, xen_pfn = XEN_PFN_DOWN(p);
@@ -250,7 +245,6 @@ int __ref xen_swiotlb_init(int verbose, bool early)
 		rc = swiotlb_late_init_with_tbl(xen_io_tlb_start, xen_io_tlb_nslabs);
 
 end:
-	xen_io_tlb_end = xen_io_tlb_start + bytes;
 	if (!rc)
 		swiotlb_set_max_segment(PAGE_SIZE);
 
@@ -558,7 +552,7 @@ xen_swiotlb_sync_sg_for_device(struct device *dev, struct scatterlist *sgl,
 static int
 xen_swiotlb_dma_supported(struct device *hwdev, u64 mask)
 {
-	return xen_virt_to_bus(hwdev, xen_io_tlb_end - 1) <= mask;
+	return xen_phys_to_dma(hwdev, io_tlb_end - 1) <= mask;
 }
 
 const struct dma_map_ops xen_swiotlb_dma_ops = {
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Sun Feb 07 16:10:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 07 Feb 2021 16:10:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82432.152265 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8mdS-0004nP-GC; Sun, 07 Feb 2021 16:10:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82432.152265; Sun, 07 Feb 2021 16:10:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8mdS-0004mi-CW; Sun, 07 Feb 2021 16:10:06 +0000
Received: by outflank-mailman (input) for mailman id 82432;
 Sun, 07 Feb 2021 16:10:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QIBx=HJ=casper.srs.infradead.org=batv+661ee30cee4f8a507613+6377+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1l8mdR-00042X-Ib
 for xen-devel@lists.xenproject.org; Sun, 07 Feb 2021 16:10:05 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b6bd119b-0684-4d90-873e-2567b3034fd2;
 Sun, 07 Feb 2021 16:09:49 +0000 (UTC)
Received: from [2001:4bb8:184:7d04:4590:5583:6cb7:77c7] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.94 #2 (Red Hat Linux))
 id 1l8md5-004tmP-JA; Sun, 07 Feb 2021 16:09:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b6bd119b-0684-4d90-873e-2567b3034fd2
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=tHBexmHGDU8sjj38AZ8OoEmgptCGTXSoPlkiz2alw+s=; b=PBdwbGIJlKrg+agS8u5um5z3+E
	lRbcNlTzG3ntexPbto0mi2062ImrpQVKR3XGI4/cKR3mfaGRiJgDHdqMFG0mP1KLhbe2zWELau6xh
	YOi37Sez5qoaovImrcBXFeSYI28UxhVa7x6E+3aB/UaioGgM4KIS8iunU4KBsKMISwsyXxkncv/EY
	ltEqLzdq2KfRLnJTlX6hBx+PgpoKnaxT0DPfw5Cg0VXBcqnxvcAYaken/jlwA5MX8p28MsZpw1aey
	uebkJGJud2BW7OZYmg4y//+8KPP9MHqx0t9vTK7eomCKEuKDWTNHRowqZmVS/2c7Yhr8Mk8ky1CZU
	OASFkWow==;
From: Christoph Hellwig <hch@lst.de>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>,
	Dongli Zhang <dongli.zhang@oracle.com>,
	Claire Chang <tientzu@chromium.org>,
	xen-devel@lists.xenproject.org,
	linuxppc-dev@lists.ozlabs.org,
	iommu@lists.linux-foundation.org
Subject: [PATCH 4/8] xen-swiotlb: remove xen_set_nslabs
Date: Sun,  7 Feb 2021 17:09:30 +0100
Message-Id: <20210207160934.2955931-5-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20210207160934.2955931-1-hch@lst.de>
References: <20210207160934.2955931-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

The xen_set_nslabs function is a little weird: it has just one caller,
which passes a global variable as the argument; that global is then
overridden in the function and a derivative of it returned.  Just add a
cpp symbol for the default size using a readable constant and open code
the remaining three lines in the caller.
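The arithmetic behind the new DEFAULT_NSLABS constant can be checked
directly.  A sketch, using the slab-size constants as defined in the
kernel headers at the time of this series (IO_TLB_SHIFT = 11, i.e. 2 KB
slabs, and IO_TLB_SEGSIZE = 128):

```python
# Constants as defined in the swiotlb headers at the time of this series.
SZ_64M = 64 * 1024 * 1024
IO_TLB_SHIFT = 11      # each swiotlb slab is 2 KB
IO_TLB_SEGSIZE = 128   # slabs per segment

def align(x: int, a: int) -> int:
    """Equivalent of the kernel's ALIGN(x, a) for power-of-two a."""
    return (x + a - 1) & ~(a - 1)

# DEFAULT_NSLABS = ALIGN(SZ_64M >> IO_TLB_SHIFT, IO_TLB_SEGSIZE)
DEFAULT_NSLABS = align(SZ_64M >> IO_TLB_SHIFT, IO_TLB_SEGSIZE)
print(DEFAULT_NSLABS)                  # 32768 slabs
print(DEFAULT_NSLABS << IO_TLB_SHIFT)  # 67108864 bytes, i.e. 64 MB
```

So the default stays exactly 64 MB: 32768 slabs of 2 KB, already a
multiple of the 128-slab segment size.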

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/xen/swiotlb-xen.c | 19 +++++++------------
 1 file changed, 7 insertions(+), 12 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 4298f74a083985..57f8d5fadc1fcd 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -138,16 +138,6 @@ xen_swiotlb_fixup(void *buf, size_t size, unsigned long nslabs)
 	} while (i < nslabs);
 	return 0;
 }
-static unsigned long xen_set_nslabs(unsigned long nr_tbl)
-{
-	if (!nr_tbl) {
-		xen_io_tlb_nslabs = (64 * 1024 * 1024 >> IO_TLB_SHIFT);
-		xen_io_tlb_nslabs = ALIGN(xen_io_tlb_nslabs, IO_TLB_SEGSIZE);
-	} else
-		xen_io_tlb_nslabs = nr_tbl;
-
-	return xen_io_tlb_nslabs << IO_TLB_SHIFT;
-}
 
 enum xen_swiotlb_err {
 	XEN_SWIOTLB_UNKNOWN = 0,
@@ -170,6 +160,9 @@ static const char *xen_swiotlb_error(enum xen_swiotlb_err err)
 	}
 	return "";
 }
+
+#define DEFAULT_NSLABS		ALIGN(SZ_64M >> IO_TLB_SHIFT, IO_TLB_SEGSIZE)
+
 int __ref xen_swiotlb_init(int verbose, bool early)
 {
 	unsigned long bytes, order;
@@ -179,8 +172,10 @@ int __ref xen_swiotlb_init(int verbose, bool early)
 
 	xen_io_tlb_nslabs = swiotlb_nr_tbl();
 retry:
-	bytes = xen_set_nslabs(xen_io_tlb_nslabs);
-	order = get_order(xen_io_tlb_nslabs << IO_TLB_SHIFT);
+	if (!xen_io_tlb_nslabs)
+		xen_io_tlb_nslabs = DEFAULT_NSLABS;
+	bytes = xen_io_tlb_nslabs << IO_TLB_SHIFT;
+	order = get_order(bytes);
 
 	/*
 	 * IO TLB memory already allocated. Just use it.
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Sun Feb 07 16:10:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 07 Feb 2021 16:10:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82433.152277 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8mdX-00052H-SE; Sun, 07 Feb 2021 16:10:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82433.152277; Sun, 07 Feb 2021 16:10:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8mdX-000527-O0; Sun, 07 Feb 2021 16:10:11 +0000
Received: by outflank-mailman (input) for mailman id 82433;
 Sun, 07 Feb 2021 16:10:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QIBx=HJ=casper.srs.infradead.org=batv+661ee30cee4f8a507613+6377+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1l8mdW-00042X-Io
 for xen-devel@lists.xenproject.org; Sun, 07 Feb 2021 16:10:10 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b3f3b700-ab19-4285-b9a3-fed5910f7041;
 Sun, 07 Feb 2021 16:09:51 +0000 (UTC)
Received: from [2001:4bb8:184:7d04:4590:5583:6cb7:77c7] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.94 #2 (Red Hat Linux))
 id 1l8md7-004tmh-DD; Sun, 07 Feb 2021 16:09:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b3f3b700-ab19-4285-b9a3-fed5910f7041
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=rEuv14TWvGuv4AUXj7YefRFQ9Xyg6sjJXdKXUy0YdBM=; b=UVaqF+zxpqx611WOIaQ9KZyzB2
	re821/ddYXmPPjC9ugy3GaeRlBLWplBRXsujMtZd1vW0IfK3iPfhxgkhWlj2pOCPKDk/Z64FWg9qG
	dHIZsb/VOpLlw9FxnFS3s24XnncV7ayWGvtT/dbQBX6YTxwlS6vcHhsA7xy7YtsaYbgBAhJZqy23B
	/IzKWcYn+2++207f++acZ9XrPm1qBYQ85sJ/oBgoVFBCDx4vKxYxRdkp0Tg1TUUSit0t2UDqHUBk9
	Ow1JcHjpXfjK8ybhbOQEqngr1mR8y92BfM/Wbg7c+DwzEOanYt56k0DlKSTDYvqhkRe3aLgg08Pvo
	tf8I8czw==;
From: Christoph Hellwig <hch@lst.de>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>,
	Dongli Zhang <dongli.zhang@oracle.com>,
	Claire Chang <tientzu@chromium.org>,
	xen-devel@lists.xenproject.org,
	linuxppc-dev@lists.ozlabs.org,
	iommu@lists.linux-foundation.org
Subject: [PATCH 5/8] xen-swiotlb: remove xen_io_tlb_start and xen_io_tlb_nslabs
Date: Sun,  7 Feb 2021 17:09:31 +0100
Message-Id: <20210207160934.2955931-6-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20210207160934.2955931-1-hch@lst.de>
References: <20210207160934.2955931-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

The xen_io_tlb_start and xen_io_tlb_nslabs variables are now only used in
xen_swiotlb_init, so replace them with local variables.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/xen/swiotlb-xen.c | 57 +++++++++++++++++----------------------
 1 file changed, 25 insertions(+), 32 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 57f8d5fadc1fcd..6e0f2c5ecd1a2f 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -40,14 +40,7 @@
 
 #include <trace/events/swiotlb.h>
 #define MAX_DMA_BITS 32
-/*
- * Used to do a quick range check in swiotlb_tbl_unmap_single and
- * swiotlb_tbl_sync_single_*, to see if the memory was in fact allocated by this
- * API.
- */
 
-static char *xen_io_tlb_start;
-static unsigned long xen_io_tlb_nslabs;
 /*
  * Quick lookup value of the bus address of the IOTLB.
  */
@@ -169,75 +162,75 @@ int __ref xen_swiotlb_init(int verbose, bool early)
 	int rc = -ENOMEM;
 	enum xen_swiotlb_err m_ret = XEN_SWIOTLB_UNKNOWN;
 	unsigned int repeat = 3;
+	char *start;
+	unsigned long nslabs;
 
-	xen_io_tlb_nslabs = swiotlb_nr_tbl();
+	nslabs = swiotlb_nr_tbl();
 retry:
-	if (!xen_io_tlb_nslabs)
-		xen_io_tlb_nslabs = DEFAULT_NSLABS;
-	bytes = xen_io_tlb_nslabs << IO_TLB_SHIFT;
+	if (!nslabs)
+		nslabs = DEFAULT_NSLABS;
+	bytes = nslabs << IO_TLB_SHIFT;
 	order = get_order(bytes);
 
 	/*
 	 * IO TLB memory already allocated. Just use it.
 	 */
-	if (io_tlb_start != 0) {
-		xen_io_tlb_start = phys_to_virt(io_tlb_start);
+	if (io_tlb_start != 0)
 		goto end;
-	}
 
 	/*
 	 * Get IO TLB memory from any location.
 	 */
 	if (early) {
-		xen_io_tlb_start = memblock_alloc(PAGE_ALIGN(bytes),
+		start = memblock_alloc(PAGE_ALIGN(bytes),
 						  PAGE_SIZE);
-		if (!xen_io_tlb_start)
+		if (!start)
 			panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
 			      __func__, PAGE_ALIGN(bytes), PAGE_SIZE);
 	} else {
 #define SLABS_PER_PAGE (1 << (PAGE_SHIFT - IO_TLB_SHIFT))
 #define IO_TLB_MIN_SLABS ((1<<20) >> IO_TLB_SHIFT)
 		while ((SLABS_PER_PAGE << order) > IO_TLB_MIN_SLABS) {
-			xen_io_tlb_start = (void *)xen_get_swiotlb_free_pages(order);
-			if (xen_io_tlb_start)
+			start = (void *)xen_get_swiotlb_free_pages(order);
+			if (start)
 				break;
 			order--;
 		}
 		if (order != get_order(bytes)) {
 			pr_warn("Warning: only able to allocate %ld MB for software IO TLB\n",
 				(PAGE_SIZE << order) >> 20);
-			xen_io_tlb_nslabs = SLABS_PER_PAGE << order;
-			bytes = xen_io_tlb_nslabs << IO_TLB_SHIFT;
+			nslabs = SLABS_PER_PAGE << order;
+			bytes = nslabs << IO_TLB_SHIFT;
 		}
 	}
-	if (!xen_io_tlb_start) {
+	if (!start) {
 		m_ret = XEN_SWIOTLB_ENOMEM;
 		goto error;
 	}
 	/*
 	 * And replace that memory with pages under 4GB.
 	 */
-	rc = xen_swiotlb_fixup(xen_io_tlb_start,
+	rc = xen_swiotlb_fixup(start,
 			       bytes,
-			       xen_io_tlb_nslabs);
+			       nslabs);
 	if (rc) {
 		if (early)
-			memblock_free(__pa(xen_io_tlb_start),
+			memblock_free(__pa(start),
 				      PAGE_ALIGN(bytes));
 		else {
-			free_pages((unsigned long)xen_io_tlb_start, order);
-			xen_io_tlb_start = NULL;
+			free_pages((unsigned long)start, order);
+			start = NULL;
 		}
 		m_ret = XEN_SWIOTLB_EFIXUP;
 		goto error;
 	}
 	if (early) {
-		if (swiotlb_init_with_tbl(xen_io_tlb_start, xen_io_tlb_nslabs,
+		if (swiotlb_init_with_tbl(start, nslabs,
 			 verbose))
 			panic("Cannot allocate SWIOTLB buffer");
 		rc = 0;
 	} else
-		rc = swiotlb_late_init_with_tbl(xen_io_tlb_start, xen_io_tlb_nslabs);
+		rc = swiotlb_late_init_with_tbl(start, nslabs);
 
 end:
 	if (!rc)
@@ -246,17 +239,17 @@ int __ref xen_swiotlb_init(int verbose, bool early)
 	return rc;
 error:
 	if (repeat--) {
-		xen_io_tlb_nslabs = max(1024UL, /* Min is 2MB */
-					(xen_io_tlb_nslabs >> 1));
+		nslabs = max(1024UL, /* Min is 2MB */
+					(nslabs >> 1));
 		pr_info("Lowering to %luMB\n",
-			(xen_io_tlb_nslabs << IO_TLB_SHIFT) >> 20);
+			(nslabs << IO_TLB_SHIFT) >> 20);
 		goto retry;
 	}
 	pr_err("%s (rc:%d)\n", xen_swiotlb_error(m_ret), rc);
 	if (early)
 		panic("%s (rc:%d)", xen_swiotlb_error(m_ret), rc);
 	else
-		free_pages((unsigned long)xen_io_tlb_start, order);
+		free_pages((unsigned long)start, order);
 	return rc;
 }
 
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Sun Feb 07 16:10:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 07 Feb 2021 16:10:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82434.152289 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8mdd-00057y-5r; Sun, 07 Feb 2021 16:10:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82434.152289; Sun, 07 Feb 2021 16:10:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8mdd-00057q-1w; Sun, 07 Feb 2021 16:10:17 +0000
Received: by outflank-mailman (input) for mailman id 82434;
 Sun, 07 Feb 2021 16:10:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QIBx=HJ=casper.srs.infradead.org=batv+661ee30cee4f8a507613+6377+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1l8mdb-00042X-Iy
 for xen-devel@lists.xenproject.org; Sun, 07 Feb 2021 16:10:15 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f3c0691c-47f9-4d9f-939d-c11edf2a73f0;
 Sun, 07 Feb 2021 16:09:52 +0000 (UTC)
Received: from [2001:4bb8:184:7d04:4590:5583:6cb7:77c7] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.94 #2 (Red Hat Linux))
 id 1l8md9-004tnJ-Fd; Sun, 07 Feb 2021 16:09:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f3c0691c-47f9-4d9f-939d-c11edf2a73f0
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=9exQlU2bM+z89vz4InpfIN+OgEoSYW+vKDhpQvUzWEc=; b=fmNr0UgbkUNuM/6xYoYIl1Rrvu
	NjXGGNO7rb6ADI6vT40u/zONLvuSngV72/jHsqgPSgPdAN+wEUDyiIdoRTVpEZmLqJSlRXchWUKGA
	eluKXm1mFdJJn++1DfZvR2YO+X0JUshTE3gzwf6gjtlHNPTzbVGePGmIGFxgBXOxNKJfI1KkyC2Pb
	nMwmcXMNzuG2JOFRxdZzeeKNS4HQwVd/0Q/gujMYFOHIZXrOk3w8PuAmWcqmAE1DUqnlU6d7QuM6s
	o+Kwv+YAPHNTcLEXNvfSe7Cnz+d04XyeVlR7knzJwEz+D3x/VVO20+JVl3mfPkAXYtMUG2hKTNqpL
	Zvtma1Pw==;
From: Christoph Hellwig <hch@lst.de>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>,
	Dongli Zhang <dongli.zhang@oracle.com>,
	Claire Chang <tientzu@chromium.org>,
	xen-devel@lists.xenproject.org,
	linuxppc-dev@lists.ozlabs.org,
	iommu@lists.linux-foundation.org
Subject: [PATCH 6/8] swiotlb: lift the double initialization protection from xen-swiotlb
Date: Sun,  7 Feb 2021 17:09:32 +0100
Message-Id: <20210207160934.2955931-7-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20210207160934.2955931-1-hch@lst.de>
References: <20210207160934.2955931-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Lift the double initialization protection from xen-swiotlb to the core
code to avoid exposing too many swiotlb internals.  Also upgrade the
check to a warning as it should not happen.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/xen/swiotlb-xen.c | 7 -------
 kernel/dma/swiotlb.c      | 8 ++++++++
 2 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 6e0f2c5ecd1a2f..e6c8556e879ee6 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -172,12 +172,6 @@ int __ref xen_swiotlb_init(int verbose, bool early)
 	bytes = nslabs << IO_TLB_SHIFT;
 	order = get_order(bytes);
 
-	/*
-	 * IO TLB memory already allocated. Just use it.
-	 */
-	if (io_tlb_start != 0)
-		goto end;
-
 	/*
 	 * Get IO TLB memory from any location.
 	 */
@@ -232,7 +226,6 @@ int __ref xen_swiotlb_init(int verbose, bool early)
 	} else
 		rc = swiotlb_late_init_with_tbl(start, nslabs);
 
-end:
 	if (!rc)
 		swiotlb_set_max_segment(PAGE_SIZE);
 
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index adcc3c87e10078..a3737961ae5769 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -224,6 +224,10 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
 	unsigned long i, bytes;
 	size_t alloc_size;
 
+	/* protect against double initialization */
+	if (WARN_ON_ONCE(io_tlb_start))
+		return -ENOMEM;
+
 	bytes = nslabs << IO_TLB_SHIFT;
 
 	io_tlb_nslabs = nslabs;
@@ -355,6 +359,10 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 {
 	unsigned long i, bytes;
 
+	/* protect against double initialization */
+	if (WARN_ON_ONCE(io_tlb_start))
+		return -ENOMEM;
+
 	bytes = nslabs << IO_TLB_SHIFT;
 
 	io_tlb_nslabs = nslabs;
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Sun Feb 07 16:10:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 07 Feb 2021 16:10:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82435.152301 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8mdh-0005Cp-Gp; Sun, 07 Feb 2021 16:10:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82435.152301; Sun, 07 Feb 2021 16:10:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8mdh-0005Cg-Cl; Sun, 07 Feb 2021 16:10:21 +0000
Received: by outflank-mailman (input) for mailman id 82435;
 Sun, 07 Feb 2021 16:10:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QIBx=HJ=casper.srs.infradead.org=batv+661ee30cee4f8a507613+6377+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1l8mdg-00042X-J4
 for xen-devel@lists.xenproject.org; Sun, 07 Feb 2021 16:10:20 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d2647eb6-c94a-4370-8790-ccad487df9e6;
 Sun, 07 Feb 2021 16:09:55 +0000 (UTC)
Received: from [2001:4bb8:184:7d04:4590:5583:6cb7:77c7] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.94 #2 (Red Hat Linux))
 id 1l8mdB-004tnV-CW; Sun, 07 Feb 2021 16:09:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d2647eb6-c94a-4370-8790-ccad487df9e6
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=dDMIGfzIheGO5qFy1hXSQ2rWGLVI0aCG/glNZ/fxGYA=; b=guK83sRp0sI706y5mxJR7c8Pyy
	I0k15MXABcCNb5NEG69YACE/elWwduL8+TB2gYY4qqDRJT7Ow5ehxVFKS98rlqb7410fT5kiZVPVs
	XWmf3PL6WA1YDRPc0JnAY7nt2dm5+tymJmEKsHBLCs/niiW+rjs6JlCMuIgAjalYzE7hv9szyDn8v
	4UAan3aXO6d3UJ2YEi879p1eGENwljDVNpNPZcGlN/Z2PgtdbU5lU1kiBj5+CJ4x+JUzIT56pToKx
	uSTNK1JWZPJb9mVrp+d+ESHhLpdpPP63KMEfrMlqOTUJA7NT2GDEmdcJimjaR7z938DAJ3KSjqGDN
	ChpXeE8Q==;
From: Christoph Hellwig <hch@lst.de>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>,
	Dongli Zhang <dongli.zhang@oracle.com>,
	Claire Chang <tientzu@chromium.org>,
	xen-devel@lists.xenproject.org,
	linuxppc-dev@lists.ozlabs.org,
	iommu@lists.linux-foundation.org
Subject: [PATCH 7/8] xen-swiotlb: split xen_swiotlb_init
Date: Sun,  7 Feb 2021 17:09:33 +0100
Message-Id: <20210207160934.2955931-8-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20210207160934.2955931-1-hch@lst.de>
References: <20210207160934.2955931-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Split xen_swiotlb_init into a normal and an early case.  That makes both
much simpler and more readable, and also allows marking the early
code as __init and x86-only.
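Both variants keep the same retry policy when xen_swiotlb_fixup fails:
halve the slab count, but never below 1024 slabs (2 MB), and give up
after three attempts.  A sketch of the sizes that fallback walks through
(IO_TLB_SHIFT = 11 as in the kernel at this point; the helper name is
purely illustrative):

```python
IO_TLB_SHIFT = 11  # 2 KB slabs, as in the kernel at this point

def retry_sizes(nslabs: int, repeats: int = 3) -> list:
    """Successively halved buffer sizes (in bytes) tried on failure,
    mirroring: nslabs = max(1024UL, nslabs >> 1) in the error path."""
    sizes = []
    while repeats:
        nslabs = max(1024, nslabs >> 1)
        sizes.append(nslabs << IO_TLB_SHIFT)
        repeats -= 1
    return sizes

# Starting from the 64 MB default (32768 slabs):
print([s >> 20 for s in retry_sizes(32768)])  # [32, 16, 8] MB
```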

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 arch/arm/xen/mm.c              |   2 +-
 arch/x86/xen/pci-swiotlb-xen.c |   4 +-
 drivers/xen/swiotlb-xen.c      | 124 +++++++++++++++++++--------------
 include/xen/swiotlb-xen.h      |   3 +-
 4 files changed, 75 insertions(+), 58 deletions(-)

diff --git a/arch/arm/xen/mm.c b/arch/arm/xen/mm.c
index 467fa225c3d0ed..aae950cd053fea 100644
--- a/arch/arm/xen/mm.c
+++ b/arch/arm/xen/mm.c
@@ -140,7 +140,7 @@ static int __init xen_mm_init(void)
 	struct gnttab_cache_flush cflush;
 	if (!xen_initial_domain())
 		return 0;
-	xen_swiotlb_init(1, false);
+	xen_swiotlb_init();
 
 	cflush.op = 0;
 	cflush.a.dev_bus_addr = 0;
diff --git a/arch/x86/xen/pci-swiotlb-xen.c b/arch/x86/xen/pci-swiotlb-xen.c
index 19ae3e4fe4e98e..54f9aa7e845739 100644
--- a/arch/x86/xen/pci-swiotlb-xen.c
+++ b/arch/x86/xen/pci-swiotlb-xen.c
@@ -59,7 +59,7 @@ int __init pci_xen_swiotlb_detect(void)
 void __init pci_xen_swiotlb_init(void)
 {
 	if (xen_swiotlb) {
-		xen_swiotlb_init(1, true /* early */);
+		xen_swiotlb_init_early();
 		dma_ops = &xen_swiotlb_dma_ops;
 
 #ifdef CONFIG_PCI
@@ -76,7 +76,7 @@ int pci_xen_swiotlb_init_late(void)
 	if (xen_swiotlb)
 		return 0;
 
-	rc = xen_swiotlb_init(1, false /* late */);
+	rc = xen_swiotlb_init();
 	if (rc)
 		return rc;
 
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index e6c8556e879ee6..b2d9e77059bf5a 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -156,96 +156,112 @@ static const char *xen_swiotlb_error(enum xen_swiotlb_err err)
 
 #define DEFAULT_NSLABS		ALIGN(SZ_64M >> IO_TLB_SHIFT, IO_TLB_SEGSIZE)
 
-int __ref xen_swiotlb_init(int verbose, bool early)
+int __ref xen_swiotlb_init(void)
 {
-	unsigned long bytes, order;
-	int rc = -ENOMEM;
 	enum xen_swiotlb_err m_ret = XEN_SWIOTLB_UNKNOWN;
+	unsigned long nslabs, bytes, order;
 	unsigned int repeat = 3;
+	int rc = -ENOMEM;
 	char *start;
-	unsigned long nslabs;
 
 	nslabs = swiotlb_nr_tbl();
-retry:
 	if (!nslabs)
 		nslabs = DEFAULT_NSLABS;
+retry:
+	m_ret = XEN_SWIOTLB_ENOMEM;
 	bytes = nslabs << IO_TLB_SHIFT;
 	order = get_order(bytes);
 
 	/*
 	 * Get IO TLB memory from any location.
 	 */
-	if (early) {
-		start = memblock_alloc(PAGE_ALIGN(bytes),
-						  PAGE_SIZE);
-		if (!start)
-			panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
-			      __func__, PAGE_ALIGN(bytes), PAGE_SIZE);
-	} else {
 #define SLABS_PER_PAGE (1 << (PAGE_SHIFT - IO_TLB_SHIFT))
 #define IO_TLB_MIN_SLABS ((1<<20) >> IO_TLB_SHIFT)
-		while ((SLABS_PER_PAGE << order) > IO_TLB_MIN_SLABS) {
-			start = (void *)xen_get_swiotlb_free_pages(order);
-			if (start)
-				break;
-			order--;
-		}
-		if (order != get_order(bytes)) {
-			pr_warn("Warning: only able to allocate %ld MB for software IO TLB\n",
-				(PAGE_SIZE << order) >> 20);
-			nslabs = SLABS_PER_PAGE << order;
-			bytes = nslabs << IO_TLB_SHIFT;
-		}
+	while ((SLABS_PER_PAGE << order) > IO_TLB_MIN_SLABS) {
+		start = (void *)xen_get_swiotlb_free_pages(order);
+		if (start)
+			break;
+		order--;
 	}
-	if (!start) {
-		m_ret = XEN_SWIOTLB_ENOMEM;
+	if (!start)
 		goto error;
+	if (order != get_order(bytes)) {
+		pr_warn("Warning: only able to allocate %ld MB for software IO TLB\n",
+			(PAGE_SIZE << order) >> 20);
+		nslabs = SLABS_PER_PAGE << order;
+		bytes = nslabs << IO_TLB_SHIFT;
 	}
+
 	/*
 	 * And replace that memory with pages under 4GB.
 	 */
-	rc = xen_swiotlb_fixup(start,
-			       bytes,
-			       nslabs);
+	rc = xen_swiotlb_fixup(start, bytes, nslabs);
 	if (rc) {
-		if (early)
-			memblock_free(__pa(start),
-				      PAGE_ALIGN(bytes));
-		else {
-			free_pages((unsigned long)start, order);
-			start = NULL;
-		}
+		free_pages((unsigned long)start, order);
 		m_ret = XEN_SWIOTLB_EFIXUP;
 		goto error;
 	}
-	if (early) {
-		if (swiotlb_init_with_tbl(start, nslabs,
-			 verbose))
-			panic("Cannot allocate SWIOTLB buffer");
-		rc = 0;
-	} else
-		rc = swiotlb_late_init_with_tbl(start, nslabs);
-
-	if (!rc)
-		swiotlb_set_max_segment(PAGE_SIZE);
-
-	return rc;
+	rc = swiotlb_late_init_with_tbl(start, nslabs);
+	if (rc)
+		return rc;
+	swiotlb_set_max_segment(PAGE_SIZE);
+	return 0;
 error:
 	if (repeat--) {
-		nslabs = max(1024UL, /* Min is 2MB */
-					(nslabs >> 1));
+		/* Min is 2MB */
+		nslabs = max(1024UL, (nslabs >> 1));
 		pr_info("Lowering to %luMB\n",
 			(nslabs << IO_TLB_SHIFT) >> 20);
 		goto retry;
 	}
 	pr_err("%s (rc:%d)\n", xen_swiotlb_error(m_ret), rc);
-	if (early)
-		panic("%s (rc:%d)", xen_swiotlb_error(m_ret), rc);
-	else
-		free_pages((unsigned long)start, order);
+	free_pages((unsigned long)start, order);
 	return rc;
 }
 
+#ifdef CONFIG_X86
+void __init xen_swiotlb_init_early(void)
+{
+	unsigned long nslabs, bytes;
+	unsigned int repeat = 3;
+	char *start;
+	int rc;
+
+	nslabs = swiotlb_nr_tbl();
+	if (!nslabs)
+		nslabs = DEFAULT_NSLABS;
+retry:
+	/*
+	 * Get IO TLB memory from any location.
+	 */
+	bytes = nslabs << IO_TLB_SHIFT;
+	start = memblock_alloc(PAGE_ALIGN(bytes), PAGE_SIZE);
+	if (!start)
+		panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
+		      __func__, PAGE_ALIGN(bytes), PAGE_SIZE);
+
+	/*
+	 * And replace that memory with pages under 4GB.
+	 */
+	rc = xen_swiotlb_fixup(start, bytes, nslabs);
+	if (rc) {
+		memblock_free(__pa(start), PAGE_ALIGN(bytes));
+		if (repeat--) {
+			/* Min is 2MB */
+			nslabs = max(1024UL, (nslabs >> 1));
+			pr_info("Lowering to %luMB\n",
+				(nslabs << IO_TLB_SHIFT) >> 20);
+			goto retry;
+		}
+		panic("%s (rc:%d)", xen_swiotlb_error(XEN_SWIOTLB_EFIXUP), rc);
+	}
+
+	if (swiotlb_init_with_tbl(start, nslabs, false))
+		panic("Cannot allocate SWIOTLB buffer");
+	swiotlb_set_max_segment(PAGE_SIZE);
+}
+#endif /* CONFIG_X86 */
+
 static void *
 xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
 			   dma_addr_t *dma_handle, gfp_t flags,
diff --git a/include/xen/swiotlb-xen.h b/include/xen/swiotlb-xen.h
index d5eaf9d682b804..6206b1ec99168a 100644
--- a/include/xen/swiotlb-xen.h
+++ b/include/xen/swiotlb-xen.h
@@ -9,7 +9,8 @@ void xen_dma_sync_for_cpu(struct device *dev, dma_addr_t handle,
 void xen_dma_sync_for_device(struct device *dev, dma_addr_t handle,
 			     size_t size, enum dma_data_direction dir);
 
-extern int xen_swiotlb_init(int verbose, bool early);
+int xen_swiotlb_init(void);
+void __init xen_swiotlb_init_early(void);
 extern const struct dma_map_ops xen_swiotlb_dma_ops;
 
 #endif /* __LINUX_SWIOTLB_XEN_H */
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Sun Feb 07 16:10:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 07 Feb 2021 16:10:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82436.152313 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8mdm-0005Ig-RL; Sun, 07 Feb 2021 16:10:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82436.152313; Sun, 07 Feb 2021 16:10:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8mdm-0005IW-ND; Sun, 07 Feb 2021 16:10:26 +0000
Received: by outflank-mailman (input) for mailman id 82436;
 Sun, 07 Feb 2021 16:10:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QIBx=HJ=casper.srs.infradead.org=batv+661ee30cee4f8a507613+6377+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1l8mdl-00042X-J4
 for xen-devel@lists.xenproject.org; Sun, 07 Feb 2021 16:10:25 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e1dabfde-05d8-42e2-9fcf-dc970d56a089;
 Sun, 07 Feb 2021 16:09:58 +0000 (UTC)
Received: from [2001:4bb8:184:7d04:4590:5583:6cb7:77c7] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.94 #2 (Red Hat Linux))
 id 1l8mdD-004top-Qm; Sun, 07 Feb 2021 16:09:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e1dabfde-05d8-42e2-9fcf-dc970d56a089
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=II87lVlVi39N4h2hDH/RczH9V+xrUdtPsM3lWkdXNMw=; b=f8tMOfWQPN+aI2qzvJbaH14hPP
	zUj0n8mdP6winTuA7Ld5imaXHoukB6enEo6lpPJY+gbSl/jdcrgT44RWZQn/fvkyEITRM+o9or5eu
	42eQVAHxivLfAwJv+PiXGneSRGScJivDXMw3UpV/vAFaBWvOxesQIxEHKndOLafA+pwX+pN3A8fvZ
	kRU7PMlUyNuF99RbDyJsfWPFAbPrlpVxKVJHEDHbNnO4liTGTjj/Xg8BXtv5tSYchEQJT2//eY8SE
	PUZCpnQwbq8lfxC2xYn6xNbU/fxFrjtoq/ILQmud6M79thxiPQrjFHkq1boJ0lxkXWSaDjtq13XNN
	9tvDIyrQ==;
From: Christoph Hellwig <hch@lst.de>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>,
	Dongli Zhang <dongli.zhang@oracle.com>,
	Claire Chang <tientzu@chromium.org>,
	xen-devel@lists.xenproject.org,
	linuxppc-dev@lists.ozlabs.org,
	iommu@lists.linux-foundation.org
Subject: [PATCH 8/8] xen-swiotlb: remove the unused size argument from xen_swiotlb_fixup
Date: Sun,  7 Feb 2021 17:09:34 +0100
Message-Id: <20210207160934.2955931-9-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20210207160934.2955931-1-hch@lst.de>
References: <20210207160934.2955931-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/xen/swiotlb-xen.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index b2d9e77059bf5a..621a20c1143597 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -104,8 +104,7 @@ static int is_xen_swiotlb_buffer(struct device *dev, dma_addr_t dma_addr)
 	return 0;
 }
 
-static int
-xen_swiotlb_fixup(void *buf, size_t size, unsigned long nslabs)
+static int xen_swiotlb_fixup(void *buf, unsigned long nslabs)
 {
 	int i, rc;
 	int dma_bits;
@@ -195,7 +194,7 @@ int __ref xen_swiotlb_init(void)
 	/*
 	 * And replace that memory with pages under 4GB.
 	 */
-	rc = xen_swiotlb_fixup(start, bytes, nslabs);
+	rc = xen_swiotlb_fixup(start, nslabs);
 	if (rc) {
 		free_pages((unsigned long)start, order);
 		m_ret = XEN_SWIOTLB_EFIXUP;
@@ -243,7 +242,7 @@ void __init xen_swiotlb_init_early(void)
 	/*
 	 * And replace that memory with pages under 4GB.
 	 */
-	rc = xen_swiotlb_fixup(start, bytes, nslabs);
+	rc = xen_swiotlb_fixup(start, nslabs);
 	if (rc) {
 		memblock_free(__pa(start), PAGE_ALIGN(bytes));
 		if (repeat--) {
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Sun Feb 07 19:05:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 07 Feb 2021 19:05:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82508.152337 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8pMM-0004Xo-RH; Sun, 07 Feb 2021 19:04:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82508.152337; Sun, 07 Feb 2021 19:04:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8pMM-0004Xh-Nz; Sun, 07 Feb 2021 19:04:38 +0000
Received: by outflank-mailman (input) for mailman id 82508;
 Sun, 07 Feb 2021 19:04:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8pML-0004XZ-A6; Sun, 07 Feb 2021 19:04:37 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8pML-0002wa-0b; Sun, 07 Feb 2021 19:04:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8pMK-0003CF-Lh; Sun, 07 Feb 2021 19:04:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l8pMK-0005li-LD; Sun, 07 Feb 2021 19:04:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=uNgry5Z8KtoWdSDQsUk52wY/TIImjT0iCXznFhWjt00=; b=Cs/hdFHaKCjkmjE2X9T7sZqdYF
	YX97yKxVguekKbq3OlCnxKQHzxh/mdO7uaaBaL6MAjNWTgHfH871n+v2W3FUvZ+tPbKWczqozOp/H
	4YDhybiP4/HyUIcHO2iAk1qE9cG2bQWQOCnmkCsmzfvUlq+5FUbDc8JPKmmOzkOjsP48=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159081-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.11-testing test] 159081: trouble: blocked/broken
X-Osstest-Failures:
    xen-4.11-testing:build-amd64:<job status>:broken:regression
    xen-4.11-testing:build-amd64-prev:<job status>:broken:regression
    xen-4.11-testing:build-amd64-pvops:<job status>:broken:regression
    xen-4.11-testing:build-amd64-xsm:<job status>:broken:regression
    xen-4.11-testing:build-amd64-xtf:<job status>:broken:regression
    xen-4.11-testing:build-arm64:<job status>:broken:regression
    xen-4.11-testing:build-arm64-pvops:<job status>:broken:regression
    xen-4.11-testing:build-arm64-xsm:<job status>:broken:regression
    xen-4.11-testing:build-armhf:<job status>:broken:regression
    xen-4.11-testing:build-armhf-pvops:<job status>:broken:regression
    xen-4.11-testing:build-i386:<job status>:broken:regression
    xen-4.11-testing:build-i386-prev:<job status>:broken:regression
    xen-4.11-testing:build-i386-pvops:<job status>:broken:regression
    xen-4.11-testing:build-i386-xsm:<job status>:broken:regression
    xen-4.11-testing:build-i386:host-install(4):broken:regression
    xen-4.11-testing:build-i386-xsm:host-install(4):broken:regression
    xen-4.11-testing:build-i386-prev:host-install(4):broken:regression
    xen-4.11-testing:build-i386-pvops:host-install(4):broken:regression
    xen-4.11-testing:build-arm64:host-install(4):broken:regression
    xen-4.11-testing:build-arm64-pvops:host-install(4):broken:regression
    xen-4.11-testing:build-arm64-xsm:host-install(4):broken:regression
    xen-4.11-testing:build-amd64-xsm:host-install(4):broken:regression
    xen-4.11-testing:build-amd64-pvops:host-install(4):broken:regression
    xen-4.11-testing:build-armhf-pvops:host-install(4):broken:regression
    xen-4.11-testing:build-amd64-xtf:host-install(4):broken:regression
    xen-4.11-testing:build-amd64:host-install(4):broken:regression
    xen-4.11-testing:build-amd64-prev:host-install(4):broken:regression
    xen-4.11-testing:build-armhf:host-install(4):broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-xtf-amd64-amd64-1:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-xtf-amd64-amd64-2:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-4.11-testing:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-4.11-testing:build-arm64-libvirt:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    xen-4.11-testing:build-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    xen-4.11-testing:build-i386-libvirt:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-livepatch:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-migrupgrade:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-livepatch:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-migrupgrade:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-xtf-amd64-amd64-3:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-xtf-amd64-amd64-4:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-xtf-amd64-amd64-5:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=1c7d984645f9ade9b47e862b5880734ad498fea8
X-Osstest-Versions-That:
    xen=310ab79875cb705cc2c7daddff412b5a4899f8c9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 07 Feb 2021 19:04:36 +0000

flight 159081 xen-4.11-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159081/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-amd64-prev                <job status>                 broken
 build-amd64-pvops               <job status>                 broken
 build-amd64-xsm                 <job status>                 broken
 build-amd64-xtf                 <job status>                 broken
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-armhf                     <job status>                 broken
 build-armhf-pvops               <job status>                 broken
 build-i386                      <job status>                 broken
 build-i386-prev                 <job status>                 broken
 build-i386-pvops                <job status>                 broken
 build-i386-xsm                  <job status>                 broken
 build-i386                    4 host-install(4)        broken REGR. vs. 157566
 build-i386-xsm                4 host-install(4)        broken REGR. vs. 157566
 build-i386-prev               4 host-install(4)        broken REGR. vs. 157566
 build-i386-pvops              4 host-install(4)        broken REGR. vs. 157566
 build-arm64                   4 host-install(4)        broken REGR. vs. 157566
 build-arm64-pvops             4 host-install(4)        broken REGR. vs. 157566
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 157566
 build-amd64-xsm               4 host-install(4)        broken REGR. vs. 157566
 build-amd64-pvops             4 host-install(4)        broken REGR. vs. 157566
 build-armhf-pvops             4 host-install(4)        broken REGR. vs. 157566
 build-amd64-xtf               4 host-install(4)        broken REGR. vs. 157566
 build-amd64                   4 host-install(4)        broken REGR. vs. 157566
 build-amd64-prev              4 host-install(4)        broken REGR. vs. 157566
 build-armhf                   4 host-install(4)        broken REGR. vs. 157566

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-1        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-2        1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-livepatch    1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-migrupgrade  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-livepatch     1 build-check(1)               blocked  n/a
 test-amd64-i386-migrupgrade   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-xtf-amd64-amd64-3        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-4        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-5        1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  1c7d984645f9ade9b47e862b5880734ad498fea8
baseline version:
 xen                  310ab79875cb705cc2c7daddff412b5a4899f8c9

Last test of basis   157566  2020-12-15 14:05:54 Z   54 days
Failing since        159016  2021-02-04 15:05:58 Z    3 days    3 attempts
Testing same since   159042  2021-02-05 12:13:30 Z    2 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              broken  
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               broken  
 build-amd64-xtf                                              broken  
 build-amd64                                                  broken  
 build-arm64                                                  broken  
 build-armhf                                                  broken  
 build-i386                                                   broken  
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-prev                                             broken  
 build-i386-prev                                              broken  
 build-amd64-pvops                                            broken  
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-xtf-amd64-amd64-1                                       blocked 
 test-xtf-amd64-amd64-2                                       blocked 
 test-xtf-amd64-amd64-3                                       blocked 
 test-xtf-amd64-amd64-4                                       blocked 
 test-xtf-amd64-amd64-5                                       blocked 
 test-amd64-amd64-xl                                          blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-livepatch                                   blocked 
 test-amd64-i386-livepatch                                    blocked 
 test-amd64-amd64-migrupgrade                                 blocked 
 test-amd64-i386-migrupgrade                                  blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-amd64-prev broken
broken-job build-amd64-pvops broken
broken-job build-amd64-xsm broken
broken-job build-amd64-xtf broken
broken-job build-arm64 broken
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-job build-armhf broken
broken-job build-armhf-pvops broken
broken-job build-i386 broken
broken-job build-i386-prev broken
broken-job build-i386-pvops broken
broken-job build-i386-xsm broken
broken-step build-i386 host-install(4)
broken-step build-i386-xsm host-install(4)
broken-step build-i386-prev host-install(4)
broken-step build-i386-pvops host-install(4)
broken-step build-arm64 host-install(4)
broken-step build-arm64-pvops host-install(4)
broken-step build-arm64-xsm host-install(4)
broken-step build-amd64-xsm host-install(4)
broken-step build-amd64-pvops host-install(4)
broken-step build-armhf-pvops host-install(4)
broken-step build-amd64-xtf host-install(4)
broken-step build-amd64 host-install(4)
broken-step build-amd64-prev host-install(4)
broken-step build-armhf host-install(4)

Not pushing.

------------------------------------------------------------
commit 1c7d984645f9ade9b47e862b5880734ad498fea8
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Feb 5 08:54:03 2021 +0100

    x86/msr: fix handling of MSR_IA32_PERF_{STATUS/CTL} (again, part 2)
    
    X86_VENDOR_* aren't bit masks in the older trees.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit f9090d990e201a5ca045976b8ddaab9fa6ee69dd
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Feb 4 15:41:12 2021 +0100

    x86/msr: fix handling of MSR_IA32_PERF_{STATUS/CTL} (again)
    
    X86_VENDOR_* aren't bit masks in the older trees.
    
    Reported-by: James Dingwall <james@dingwall.me.uk>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sun Feb 07 19:49:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 07 Feb 2021 19:49:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82516.152357 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8q3c-0008Ra-Js; Sun, 07 Feb 2021 19:49:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82516.152357; Sun, 07 Feb 2021 19:49:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8q3c-0008RT-Gl; Sun, 07 Feb 2021 19:49:20 +0000
Received: by outflank-mailman (input) for mailman id 82516;
 Sun, 07 Feb 2021 19:49:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8q3b-0008RL-KS; Sun, 07 Feb 2021 19:49:19 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8q3b-0003da-CD; Sun, 07 Feb 2021 19:49:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8q3b-0004AX-4j; Sun, 07 Feb 2021 19:49:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l8q3b-0004E3-4G; Sun, 07 Feb 2021 19:49:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=+1qLE8+Uu8s/9waPwnDDNIjii8Xt3rxGZ+/eG5j+rQY=; b=5R9OdB+wJu8aR+1Tvww9sO7T5/
	kQcnE2Y3VN8UVC3KLl9+ZInms+I4PY+8No+eS48XfndQvLZvKQMbvEF7H2q5aY5MOvzTCO/SejecZ
	Q5DWIaMF+hSaIxQ4EeUlDsfSqq8LKAy6pp+W83jl55hYZgiHPS7mNKsoPfqrhIF3goMM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159072-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 159072: trouble: blocked/broken/pass
X-Osstest-Failures:
    qemu-mainline:test-arm64-arm64-xl-seattle:<job status>:broken:regression
    qemu-mainline:test-arm64-arm64-xl-thunderx:<job status>:broken:regression
    qemu-mainline:test-armhf-armhf-libvirt:<job status>:broken:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:<job status>:broken:regression
    qemu-mainline:test-armhf-armhf-xl:<job status>:broken:regression
    qemu-mainline:test-armhf-armhf-xl-arndale:<job status>:broken:regression
    qemu-mainline:test-armhf-armhf-xl-credit1:<job status>:broken:regression
    qemu-mainline:test-armhf-armhf-xl-multivcpu:<job status>:broken:regression
    qemu-mainline:test-armhf-armhf-xl-rtds:<job status>:broken:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:<job status>:broken:regression
    qemu-mainline:test-amd64-coresched-amd64-xl:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-xl-xsm:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-xl-shadow:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-xl-rtds:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:<job status>:broken:regression
    qemu-mainline:build-i386:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:<job status>:broken:regression
    qemu-mainline:build-i386-pvops:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:<job status>:broken:regression
    qemu-mainline:build-i386-xsm:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-xl-pvshim:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-i386-pvgrub:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-libvirt:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-libvirt-pair:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-libvirt-xsm:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-xl-multivcpu:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-pair:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-pygrub:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-xl-credit2:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-xl-credit1:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-xl:<job status>:broken:regression
    qemu-mainline:test-armhf-armhf-xl-credit2:<job status>:broken:regression
    qemu-mainline:test-armhf-armhf-xl-cubietruck:<job status>:broken:regression
    qemu-mainline:test-arm64-arm64-xl:<job status>:broken:regression
    qemu-mainline:build-i386-pvops:host-install(4):broken:regression
    qemu-mainline:build-i386-xsm:host-install(4):broken:regression
    qemu-mainline:build-i386:host-install(4):broken:regression
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:host-install(5):broken:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:host-install(5):broken:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:host-install(5):broken:nonblocking
    qemu-mainline:test-armhf-armhf-xl:host-install(5):broken:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:host-install(5):broken:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:host-install(5):broken:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:host-install(5):broken:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:host-install(5):broken:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-pair:host-install/src_host(6):broken:nonblocking
    qemu-mainline:test-amd64-amd64-pair:host-install/dst_host(7):broken:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:host-install/src_host(6):broken:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:host-install/dst_host(7):broken:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-xl:host-install(5):broken:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:host-install(5):broken:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:host-install(5):broken:nonblocking
    qemu-mainline:test-arm64-arm64-xl:host-install(5):broken:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:host-install(5):broken:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:host-install(5):broken:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=d0dddab40e472ba62b5f43f11cc7dba085dabe71
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 07 Feb 2021 19:49:19 +0000

flight 159072 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159072/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-seattle     <job status>                 broken
 test-arm64-arm64-xl-thunderx    <job status>                 broken
 test-armhf-armhf-libvirt        <job status>                 broken
 test-armhf-armhf-libvirt-raw    <job status>                 broken
 test-armhf-armhf-xl             <job status>                 broken
 test-armhf-armhf-xl-arndale     <job status>                 broken
 test-armhf-armhf-xl-credit1     <job status>                 broken
 test-armhf-armhf-xl-multivcpu    <job status>                 broken
 test-armhf-armhf-xl-rtds        <job status>                 broken
 test-armhf-armhf-xl-vhd         <job status>                 broken
 test-amd64-coresched-amd64-xl    <job status>                 broken
 test-amd64-amd64-xl-xsm         <job status>                 broken
 test-amd64-amd64-xl-shadow      <job status>                 broken
 test-amd64-amd64-xl-rtds        <job status>                 broken
 test-amd64-amd64-xl-qemuu-ws16-amd64    <job status>                 broken
 test-amd64-amd64-xl-qemuu-win7-amd64    <job status>                 broken
 test-amd64-amd64-xl-qemuu-ovmf-amd64    <job status>                 broken
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict    <job status>   broken
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm    <job status>            broken
 build-i386                      <job status>                 broken
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow    <job status>        broken
 build-i386-pvops                <job status>                 broken
 test-amd64-amd64-xl-qemuu-debianhvm-amd64    <job status>               broken
 build-i386-xsm                  <job status>                 broken
 test-amd64-amd64-amd64-pvgrub    <job status>                 broken
 test-amd64-amd64-xl-qcow2       <job status>                 broken
 test-amd64-amd64-dom0pvh-xl-amd    <job status>                 broken
 test-amd64-amd64-xl-pvshim      <job status>                 broken
 test-amd64-amd64-dom0pvh-xl-intel    <job status>                 broken
 test-amd64-amd64-i386-pvgrub    <job status>                 broken
 test-amd64-amd64-libvirt        <job status>                 broken
 test-amd64-amd64-xl-pvhv2-intel    <job status>                 broken
 test-amd64-amd64-libvirt-pair    <job status>                 broken
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm    <job status>      broken
 test-amd64-amd64-xl-pvhv2-amd    <job status>                 broken
 test-amd64-amd64-libvirt-vhd    <job status>                 broken
 test-amd64-amd64-libvirt-xsm    <job status>                 broken
 test-amd64-amd64-xl-multivcpu    <job status>                 broken
 test-amd64-amd64-pair           <job status>                 broken
 test-amd64-amd64-pygrub         <job status>                 broken
 test-amd64-amd64-xl-credit2     <job status>                 broken
 test-amd64-amd64-qemuu-freebsd11-amd64    <job status>                 broken
 test-amd64-amd64-qemuu-freebsd12-amd64    <job status>                 broken
 test-amd64-amd64-xl-credit1     <job status>                 broken
 test-amd64-amd64-qemuu-nested-amd    <job status>                 broken
 test-amd64-amd64-qemuu-nested-intel    <job status>                 broken
 test-amd64-amd64-xl             <job status>                 broken
 test-armhf-armhf-xl-credit2     <job status>                 broken
 test-armhf-armhf-xl-cubietruck    <job status>                 broken
 test-arm64-arm64-xl             <job status>                 broken
 build-i386-pvops              4 host-install(4)        broken REGR. vs. 152631
 build-i386-xsm                4 host-install(4)        broken REGR. vs. 152631
 build-i386                    4 host-install(4)        broken REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64 5 host-install(5) broken blocked in 152631
 test-amd64-amd64-xl-multivcpu  5 host-install(5)      broken blocked in 152631
 test-amd64-amd64-i386-pvgrub  5 host-install(5)       broken blocked in 152631
 test-amd64-amd64-qemuu-nested-amd  5 host-install(5)  broken blocked in 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 5 host-install(5) broken blocked in 152631
 test-amd64-amd64-xl-pvhv2-amd  5 host-install(5)      broken blocked in 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 5 host-install(5) broken blocked in 152631
 test-armhf-armhf-xl-credit1   5 host-install(5)       broken blocked in 152631
 test-armhf-armhf-libvirt      5 host-install(5)       broken blocked in 152631
 test-armhf-armhf-xl           5 host-install(5)       broken blocked in 152631
 test-armhf-armhf-xl-multivcpu  5 host-install(5)      broken blocked in 152631
 test-armhf-armhf-xl-rtds      5 host-install(5)       broken blocked in 152631
 test-armhf-armhf-xl-vhd       5 host-install(5)       broken blocked in 152631
 test-armhf-armhf-xl-arndale   5 host-install(5)       broken blocked in 152631
 test-armhf-armhf-xl-credit2   5 host-install(5)       broken blocked in 152631
 test-amd64-amd64-libvirt-xsm  5 host-install(5)       broken blocked in 152631
 test-amd64-amd64-xl-shadow    5 host-install(5)       broken blocked in 152631
 test-amd64-amd64-xl-pvshim    5 host-install(5)       broken blocked in 152631
 test-amd64-coresched-amd64-xl  5 host-install(5)      broken blocked in 152631
 test-amd64-amd64-dom0pvh-xl-amd  5 host-install(5)    broken blocked in 152631
 test-amd64-amd64-libvirt-vhd  5 host-install(5)       broken blocked in 152631
 test-amd64-amd64-pair      6 host-install/src_host(6) broken blocked in 152631
 test-amd64-amd64-pair      7 host-install/dst_host(7) broken blocked in 152631
 test-amd64-amd64-xl-xsm       5 host-install(5)       broken blocked in 152631
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 5 host-install(5) broken blocked in 152631
 test-amd64-amd64-libvirt-pair 6 host-install/src_host(6) broken blocked in 152631
 test-amd64-amd64-dom0pvh-xl-intel  5 host-install(5)  broken blocked in 152631
 test-amd64-amd64-libvirt-pair 7 host-install/dst_host(7) broken blocked in 152631
 test-amd64-amd64-xl-qcow2     5 host-install(5)       broken blocked in 152631
 test-amd64-amd64-xl-credit2   5 host-install(5)       broken blocked in 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 5 host-install(5) broken blocked in 152631
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 5 host-install(5) broken blocked in 152631
 test-amd64-amd64-xl-credit1   5 host-install(5)       broken blocked in 152631
 test-amd64-amd64-amd64-pvgrub  5 host-install(5)      broken blocked in 152631
 test-amd64-amd64-qemuu-freebsd11-amd64 5 host-install(5) broken blocked in 152631
 test-amd64-amd64-xl-pvhv2-intel  5 host-install(5)    broken blocked in 152631
 test-amd64-amd64-pygrub       5 host-install(5)       broken blocked in 152631
 test-amd64-amd64-xl-rtds      5 host-install(5)       broken blocked in 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 5 host-install(5) broken blocked in 152631
 test-amd64-amd64-xl           5 host-install(5)       broken blocked in 152631
 test-arm64-arm64-xl-seattle   5 host-install(5)       broken blocked in 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 5 host-install(5) broken blocked in 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 5 host-install(5) broken blocked in 152631
 test-amd64-amd64-libvirt      5 host-install(5)       broken blocked in 152631
 test-amd64-amd64-qemuu-nested-intel 5 host-install(5) broken blocked in 152631
 test-arm64-arm64-xl-thunderx  5 host-install(5)       broken blocked in 152631
 test-arm64-arm64-xl           5 host-install(5)       broken blocked in 152631
 test-armhf-armhf-libvirt-raw  5 host-install(5)       broken blocked in 152631
 test-armhf-armhf-xl-cubietruck  5 host-install(5)     broken blocked in 152631
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                d0dddab40e472ba62b5f43f11cc7dba085dabe71
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  171 days
Failing since        152659  2020-08-21 14:07:39 Z  170 days  340 attempts
Testing same since   159072  2021-02-06 08:45:24 Z    1 days    1 attempts

------------------------------------------------------------
374 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               broken  
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   broken  
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          broken  
 test-amd64-coresched-amd64-xl                                broken  
 test-arm64-arm64-xl                                          broken  
 test-armhf-armhf-xl                                          broken  
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           broken  
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 broken  
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 broken  
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      broken  
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            broken  
 test-amd64-amd64-xl-pvhv2-amd                                broken  
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              broken  
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    broken  
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       broken  
 test-amd64-amd64-qemuu-freebsd12-amd64                       broken  
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         broken  
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         broken  
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         broken  
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  broken  
 test-amd64-amd64-xl-credit1                                  broken  
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  broken  
 test-amd64-amd64-xl-credit2                                  broken  
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  broken  
 test-armhf-armhf-xl-cubietruck                               broken  
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        broken  
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          broken  
 test-amd64-amd64-xl-pvhv2-intel                              broken  
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            broken  
 test-amd64-amd64-libvirt                                     broken  
 test-armhf-armhf-libvirt                                     broken  
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                broken  
 test-armhf-armhf-xl-multivcpu                                broken  
 test-amd64-amd64-pair                                        broken  
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                broken  
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                broken  
 test-amd64-amd64-i386-pvgrub                                 broken  
 test-amd64-amd64-xl-pvshim                                   broken  
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      broken  
 test-amd64-amd64-xl-qcow2                                    broken  
 test-armhf-armhf-libvirt-raw                                 broken  
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     broken  
 test-armhf-armhf-xl-rtds                                     broken  
 test-arm64-arm64-xl-seattle                                  broken  
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             broken  
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   broken  
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 broken  
 test-amd64-amd64-libvirt-vhd                                 broken  
 test-armhf-armhf-xl-vhd                                      broken  


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-arm64-arm64-xl-seattle broken
broken-job test-arm64-arm64-xl-thunderx broken
broken-job test-armhf-armhf-libvirt broken
broken-job test-armhf-armhf-libvirt-raw broken
broken-job test-armhf-armhf-xl broken
broken-job test-armhf-armhf-xl-arndale broken
broken-job test-armhf-armhf-xl-credit1 broken
broken-job test-armhf-armhf-xl-multivcpu broken
broken-job test-armhf-armhf-xl-rtds broken
broken-job test-armhf-armhf-xl-vhd broken
broken-job test-amd64-coresched-amd64-xl broken
broken-job test-amd64-amd64-xl-xsm broken
broken-job test-amd64-amd64-xl-shadow broken
broken-job test-amd64-amd64-xl-rtds broken
broken-job test-amd64-amd64-xl-qemuu-ws16-amd64 broken
broken-job test-amd64-amd64-xl-qemuu-win7-amd64 broken
broken-job test-amd64-amd64-xl-qemuu-ovmf-amd64 broken
broken-job test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict broken
broken-job test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm broken
broken-job build-i386 broken
broken-job test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow broken
broken-job build-i386-pvops broken
broken-job test-amd64-amd64-xl-qemuu-debianhvm-amd64 broken
broken-job build-i386-xsm broken
broken-job test-amd64-amd64-amd64-pvgrub broken
broken-job test-amd64-amd64-xl-qcow2 broken
broken-job test-amd64-amd64-dom0pvh-xl-amd broken
broken-job test-amd64-amd64-xl-pvshim broken
broken-job test-amd64-amd64-dom0pvh-xl-intel broken
broken-job test-amd64-amd64-i386-pvgrub broken
broken-job test-amd64-amd64-libvirt broken
broken-job test-amd64-amd64-xl-pvhv2-intel broken
broken-job test-amd64-amd64-libvirt-pair broken
broken-job test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm broken
broken-job test-amd64-amd64-xl-pvhv2-amd broken
broken-job test-amd64-amd64-libvirt-vhd broken
broken-job test-amd64-amd64-libvirt-xsm broken
broken-job test-amd64-amd64-xl-multivcpu broken
broken-job test-amd64-amd64-pair broken
broken-job test-amd64-amd64-pygrub broken
broken-job test-amd64-amd64-xl-credit2 broken
broken-job test-amd64-amd64-qemuu-freebsd11-amd64 broken
broken-job test-amd64-amd64-qemuu-freebsd12-amd64 broken
broken-job test-amd64-amd64-xl-credit1 broken
broken-job test-amd64-amd64-qemuu-nested-amd broken
broken-job test-amd64-amd64-qemuu-nested-intel broken
broken-job test-amd64-amd64-xl broken
broken-job test-armhf-armhf-xl-credit2 broken
broken-job test-armhf-armhf-xl-cubietruck broken
broken-job test-arm64-arm64-xl broken
broken-step test-amd64-amd64-xl-qemuu-ws16-amd64 host-install(5)
broken-step test-amd64-amd64-xl-multivcpu host-install(5)
broken-step test-amd64-amd64-i386-pvgrub host-install(5)
broken-step test-amd64-amd64-qemuu-nested-amd host-install(5)
broken-step test-amd64-amd64-qemuu-freebsd12-amd64 host-install(5)
broken-step test-amd64-amd64-xl-pvhv2-amd host-install(5)
broken-step test-amd64-amd64-xl-qemuu-ovmf-amd64 host-install(5)
broken-step test-armhf-armhf-xl-credit1 host-install(5)
broken-step test-armhf-armhf-libvirt host-install(5)
broken-step test-armhf-armhf-xl host-install(5)
broken-step test-armhf-armhf-xl-multivcpu host-install(5)
broken-step test-armhf-armhf-xl-rtds host-install(5)
broken-step test-armhf-armhf-xl-vhd host-install(5)
broken-step test-armhf-armhf-xl-arndale host-install(5)
broken-step test-armhf-armhf-xl-credit2 host-install(5)
broken-step test-amd64-amd64-libvirt-xsm host-install(5)
broken-step test-amd64-amd64-xl-shadow host-install(5)
broken-step test-amd64-amd64-xl-pvshim host-install(5)
broken-step test-amd64-coresched-amd64-xl host-install(5)
broken-step test-amd64-amd64-dom0pvh-xl-amd host-install(5)
broken-step test-amd64-amd64-libvirt-vhd host-install(5)
broken-step test-amd64-amd64-pair host-install/src_host(6)
broken-step test-amd64-amd64-pair host-install/dst_host(7)
broken-step test-amd64-amd64-xl-xsm host-install(5)
broken-step test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm host-install(5)
broken-step test-amd64-amd64-libvirt-pair host-install/src_host(6)
broken-step test-amd64-amd64-dom0pvh-xl-intel host-install(5)
broken-step test-amd64-amd64-libvirt-pair host-install/dst_host(7)
broken-step test-amd64-amd64-xl-qcow2 host-install(5)
broken-step test-amd64-amd64-xl-credit2 host-install(5)
broken-step test-amd64-amd64-xl-qemuu-win7-amd64 host-install(5)
broken-step test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict host-install(5)
broken-step test-amd64-amd64-xl-credit1 host-install(5)
broken-step test-amd64-amd64-amd64-pvgrub host-install(5)
broken-step test-amd64-amd64-qemuu-freebsd11-amd64 host-install(5)
broken-step test-amd64-amd64-xl-pvhv2-intel host-install(5)
broken-step test-amd64-amd64-pygrub host-install(5)
broken-step build-i386-pvops host-install(4)
broken-step test-amd64-amd64-xl-rtds host-install(5)
broken-step test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow host-install(5)
broken-step test-amd64-amd64-xl host-install(5)
broken-step build-i386-xsm host-install(4)
broken-step build-i386 host-install(4)
broken-step test-arm64-arm64-xl-seattle host-install(5)
broken-step test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm host-install(5)
broken-step test-amd64-amd64-xl-qemuu-debianhvm-amd64 host-install(5)
broken-step test-amd64-amd64-libvirt host-install(5)
broken-step test-amd64-amd64-qemuu-nested-intel host-install(5)
broken-step test-arm64-arm64-xl-thunderx host-install(5)
broken-step test-arm64-arm64-xl host-install(5)
broken-step test-armhf-armhf-libvirt-raw host-install(5)
broken-step test-armhf-armhf-xl-cubietruck host-install(5)

Not pushing.

(No revision log; it would be 104630 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Feb 07 21:03:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 07 Feb 2021 21:03:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82534.152379 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8rDU-0007K4-H8; Sun, 07 Feb 2021 21:03:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82534.152379; Sun, 07 Feb 2021 21:03:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8rDU-0007Jx-Dt; Sun, 07 Feb 2021 21:03:36 +0000
Received: by outflank-mailman (input) for mailman id 82534;
 Sun, 07 Feb 2021 21:03:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8rDT-0007Jp-4F; Sun, 07 Feb 2021 21:03:35 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8rDS-0004w6-R7; Sun, 07 Feb 2021 21:03:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8rDS-0005m4-Iq; Sun, 07 Feb 2021 21:03:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l8rDS-0003SR-IJ; Sun, 07 Feb 2021 21:03:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Ipc2yCa9hv4u9dUiOY7NMCHdhab8kLFQ0xDUgV+guHE=; b=GLGNtLitkVgtpnIa35WK7ER0Y5
	T8g+IQWQNFf7NvZ8b6mJN3Ch7TtRjBRj9xlbrBRCOCg7ajGgAPqZ+MQOMSwYWw3ywP4b5Qcv/H856
	57zDsw5ULOaCzCqTKUoJ0T0wTo6iWmIoLdXy06xe2tdVF9OxHAGIslW18ILmTVNGl728=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159077-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 159077: trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-rtds:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-ovmf-amd64:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:<job status>:broken:regression
    xen-unstable:build-amd64-libvirt:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-amd64:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-qcow2:<job status>:broken:regression
    xen-unstable:build-amd64-xsm:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-pvshim:<job status>:broken:regression
    xen-unstable:build-arm64:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-pvhv2-intel:<job status>:broken:regression
    xen-unstable:build-arm64-pvops:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-pvhv2-amd:<job status>:broken:regression
    xen-unstable:build-arm64-xsm:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-multivcpu:<job status>:broken:regression
    xen-unstable:build-armhf-pvops:<job status>:broken:regression
    xen-unstable:build-i386:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-credit2:<job status>:broken:regression
    xen-unstable:build-i386-prev:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-credit1:<job status>:broken:regression
    xen-unstable:build-i386-pvops:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl:<job status>:broken:regression
    xen-unstable:build-i386-xsm:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-amd64-pvgrub:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-dom0pvh-xl-amd:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-qemuu-nested-intel:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-dom0pvh-xl-intel:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-i386-pvgrub:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-qemuu-freebsd12-amd64:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-qemuu-freebsd11-amd64:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-pygrub:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-livepatch:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-migrupgrade:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-pair:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-shadow:<job status>:broken:regression
    xen-unstable:test-amd64-coresched-amd64-xl:<job status>:broken:regression
    xen-unstable:test-xtf-amd64-amd64-1:<job status>:broken:regression
    xen-unstable:test-xtf-amd64-amd64-2:<job status>:broken:regression
    xen-unstable:test-xtf-amd64-amd64-3:<job status>:broken:regression
    xen-unstable:test-xtf-amd64-amd64-4:<job status>:broken:regression
    xen-unstable:test-xtf-amd64-amd64-5:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-qcow2:host-install(5):broken:regression
    xen-unstable:test-amd64-amd64-xl-pvhv2-amd:host-install(5):broken:regression
    xen-unstable:test-amd64-coresched-amd64-xl:host-install(5):broken:regression
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:host-install(5):broken:regression
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:host-install(5):broken:regression
    xen-unstable:test-amd64-amd64-xl-pvshim:host-install(5):broken:regression
    xen-unstable:test-amd64-amd64-i386-pvgrub:host-install(5):broken:regression
    xen-unstable:test-amd64-amd64-xl-pvhv2-intel:host-install(5):broken:regression
    xen-unstable:test-amd64-amd64-qemuu-nested-intel:host-install(5):broken:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:host-install(5):broken:regression
    xen-unstable:test-amd64-amd64-dom0pvh-xl-amd:host-install(5):broken:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:host-install(5):broken:regression
    xen-unstable:test-amd64-amd64-dom0pvh-xl-intel:host-install(5):broken:regression
    xen-unstable:test-amd64-amd64-xl:host-install(5):broken:regression
    xen-unstable:test-amd64-amd64-qemuu-freebsd12-amd64:host-install(5):broken:regression
    xen-unstable:test-amd64-amd64-migrupgrade:host-install/src_host(6):broken:regression
    xen-unstable:test-amd64-amd64-pair:host-install/src_host(6):broken:regression
    xen-unstable:test-amd64-amd64-migrupgrade:host-install/dst_host(7):broken:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64:host-install(5):broken:regression
    xen-unstable:test-amd64-amd64-pair:host-install/dst_host(7):broken:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-ovmf-amd64:host-install(5):broken:regression
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:host-install(5):broken:regression
    xen-unstable:test-xtf-amd64-amd64-4:host-install(5):broken:regression
    xen-unstable:test-amd64-amd64-xl-multivcpu:host-install(5):broken:regression
    xen-unstable:test-amd64-amd64-xl-shadow:host-install(5):broken:regression
    xen-unstable:test-amd64-amd64-xl-credit2:host-install(5):broken:regression
    xen-unstable:test-amd64-amd64-amd64-pvgrub:host-install(5):broken:regression
    xen-unstable:test-amd64-amd64-xl-credit1:host-install(5):broken:regression
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-amd64:host-install(5):broken:regression
    xen-unstable:test-amd64-amd64-pygrub:host-install(5):broken:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:host-install(5):broken:regression
    xen-unstable:test-amd64-amd64-qemuu-freebsd11-amd64:host-install(5):broken:regression
    xen-unstable:test-amd64-amd64-livepatch:host-install(5):broken:regression
    xen-unstable:test-xtf-amd64-amd64-2:host-install(5):broken:regression
    xen-unstable:build-i386:host-install(4):broken:regression
    xen-unstable:build-i386-xsm:host-install(4):broken:regression
    xen-unstable:build-i386-prev:host-install(4):broken:regression
    xen-unstable:test-xtf-amd64-amd64-3:host-install(5):broken:regression
    xen-unstable:build-i386-pvops:host-install(4):broken:regression
    xen-unstable:test-xtf-amd64-amd64-5:host-install(5):broken:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:host-install(5):broken:regression
    xen-unstable:test-xtf-amd64-amd64-1:host-install(5):broken:regression
    xen-unstable:test-amd64-amd64-examine:host-install:broken:regression
    xen-unstable:build-arm64-pvops:host-install(4):broken:regression
    xen-unstable:build-arm64:host-install(4):broken:regression
    xen-unstable:build-arm64-xsm:host-install(4):broken:regression
    xen-unstable:build-amd64-xsm:host-install(4):broken:regression
    xen-unstable:build-armhf-pvops:host-install(4):broken:regression
    xen-unstable:build-amd64-libvirt:host-install(4):broken:regression
    xen-unstable:test-amd64-amd64-xl-rtds:host-install(5):broken:allowable
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:build-arm64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:build-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=ca82d3fecc93745ee17850a609ac7772bd7c8bf7
X-Osstest-Versions-That:
    xen=ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 07 Feb 2021 21:03:34 +0000

flight 159077 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159077/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-rtds        <job status>                 broken
 test-amd64-amd64-xl-qemuu-ws16-amd64    <job status>                 broken
 test-amd64-amd64-xl-qemuu-win7-amd64    <job status>                 broken
 test-amd64-amd64-xl-qemuu-ovmf-amd64    <job status>                 broken
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict    <job status>   broken
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow    <job status>        broken
 test-amd64-amd64-xl-qemuu-debianhvm-amd64    <job status>               broken
 test-amd64-amd64-xl-qemut-ws16-amd64    <job status>                 broken
 test-amd64-amd64-xl-qemut-win7-amd64    <job status>                 broken
 build-amd64-libvirt             <job status>                 broken
 test-amd64-amd64-xl-qemut-debianhvm-amd64    <job status>               broken
 test-amd64-amd64-xl-qcow2       <job status>                 broken
 build-amd64-xsm                 <job status>                 broken
 test-amd64-amd64-xl-pvshim      <job status>                 broken
 build-arm64                     <job status>                 broken
 test-amd64-amd64-xl-pvhv2-intel    <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 test-amd64-amd64-xl-pvhv2-amd    <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 test-amd64-amd64-xl-multivcpu    <job status>                 broken
 build-armhf-pvops               <job status>                 broken
 build-i386                      <job status>                 broken
 test-amd64-amd64-xl-credit2     <job status>                 broken
 build-i386-prev                 <job status>                 broken
 test-amd64-amd64-xl-credit1     <job status>                 broken
 build-i386-pvops                <job status>                 broken
 test-amd64-amd64-xl             <job status>                 broken
 build-i386-xsm                  <job status>                 broken
 test-amd64-amd64-amd64-pvgrub    <job status>                 broken
 test-amd64-amd64-dom0pvh-xl-amd    <job status>                 broken
 test-amd64-amd64-qemuu-nested-intel    <job status>                 broken
 test-amd64-amd64-dom0pvh-xl-intel    <job status>                 broken
 test-amd64-amd64-qemuu-nested-amd    <job status>                 broken
 test-amd64-amd64-i386-pvgrub    <job status>                 broken
 test-amd64-amd64-qemuu-freebsd12-amd64    <job status>                 broken
 test-amd64-amd64-qemuu-freebsd11-amd64    <job status>                 broken
 test-amd64-amd64-pygrub         <job status>                 broken
 test-amd64-amd64-livepatch      <job status>                 broken
 test-amd64-amd64-migrupgrade    <job status>                 broken
 test-amd64-amd64-pair           <job status>                 broken
 test-amd64-amd64-xl-shadow      <job status>                 broken
 test-amd64-coresched-amd64-xl    <job status>                 broken
 test-xtf-amd64-amd64-1          <job status>                 broken
 test-xtf-amd64-amd64-2          <job status>                 broken
 test-xtf-amd64-amd64-3          <job status>                 broken
 test-xtf-amd64-amd64-4          <job status>                 broken
 test-xtf-amd64-amd64-5          <job status>                 broken
 test-amd64-amd64-xl-qcow2     5 host-install(5)        broken REGR. vs. 159036
 test-amd64-amd64-xl-pvhv2-amd  5 host-install(5)       broken REGR. vs. 159036
 test-amd64-coresched-amd64-xl  5 host-install(5)       broken REGR. vs. 159036
 test-amd64-amd64-xl-qemut-win7-amd64 5 host-install(5) broken REGR. vs. 159036
 test-amd64-amd64-qemuu-nested-amd  5 host-install(5)   broken REGR. vs. 159036
 test-amd64-amd64-xl-pvshim    5 host-install(5)        broken REGR. vs. 159036
 test-amd64-amd64-i386-pvgrub  5 host-install(5)        broken REGR. vs. 159036
 test-amd64-amd64-xl-pvhv2-intel  5 host-install(5)     broken REGR. vs. 159036
 test-amd64-amd64-qemuu-nested-intel  5 host-install(5) broken REGR. vs. 159036
 test-amd64-amd64-xl-qemuu-ws16-amd64 5 host-install(5) broken REGR. vs. 159036
 test-amd64-amd64-dom0pvh-xl-amd  5 host-install(5)     broken REGR. vs. 159036
 test-amd64-amd64-xl-qemuu-win7-amd64 5 host-install(5) broken REGR. vs. 159036
 test-amd64-amd64-dom0pvh-xl-intel  5 host-install(5)   broken REGR. vs. 159036
 test-amd64-amd64-xl           5 host-install(5)        broken REGR. vs. 159036
 test-amd64-amd64-qemuu-freebsd12-amd64 5 host-install(5) broken REGR. vs. 159036
 test-amd64-amd64-migrupgrade 6 host-install/src_host(6) broken REGR. vs. 159036
 test-amd64-amd64-pair       6 host-install/src_host(6) broken REGR. vs. 159036
 test-amd64-amd64-migrupgrade 7 host-install/dst_host(7) broken REGR. vs. 159036
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 5 host-install(5) broken REGR. vs. 159036
 test-amd64-amd64-pair       7 host-install/dst_host(7) broken REGR. vs. 159036
 test-amd64-amd64-xl-qemuu-ovmf-amd64 5 host-install(5) broken REGR. vs. 159036
 test-amd64-amd64-xl-qemut-ws16-amd64 5 host-install(5) broken REGR. vs. 159036
 test-xtf-amd64-amd64-4        5 host-install(5)        broken REGR. vs. 159036
 test-amd64-amd64-xl-multivcpu  5 host-install(5)       broken REGR. vs. 159036
 test-amd64-amd64-xl-shadow    5 host-install(5)        broken REGR. vs. 159036
 test-amd64-amd64-xl-credit2   5 host-install(5)        broken REGR. vs. 159036
 test-amd64-amd64-amd64-pvgrub  5 host-install(5)       broken REGR. vs. 159036
 test-amd64-amd64-xl-credit1   5 host-install(5)        broken REGR. vs. 159036
 test-amd64-amd64-xl-qemut-debianhvm-amd64 5 host-install(5) broken REGR. vs. 159036
 test-amd64-amd64-pygrub       5 host-install(5)        broken REGR. vs. 159036
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 5 host-install(5) broken REGR. vs. 159036
 test-amd64-amd64-qemuu-freebsd11-amd64 5 host-install(5) broken REGR. vs. 159036
 test-amd64-amd64-livepatch    5 host-install(5)        broken REGR. vs. 159036
 test-xtf-amd64-amd64-2        5 host-install(5)        broken REGR. vs. 159036
 build-i386                    4 host-install(4)        broken REGR. vs. 159036
 build-i386-xsm                4 host-install(4)        broken REGR. vs. 159036
 build-i386-prev               4 host-install(4)        broken REGR. vs. 159036
 test-xtf-amd64-amd64-3        5 host-install(5)        broken REGR. vs. 159036
 build-i386-pvops              4 host-install(4)        broken REGR. vs. 159036
 test-xtf-amd64-amd64-5        5 host-install(5)        broken REGR. vs. 159036
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 5 host-install(5) broken REGR. vs. 159036
 test-xtf-amd64-amd64-1        5 host-install(5)        broken REGR. vs. 159036
 test-amd64-amd64-examine      5 host-install           broken REGR. vs. 159036
 build-arm64-pvops             4 host-install(4)        broken REGR. vs. 159036
 build-arm64                   4 host-install(4)        broken REGR. vs. 159036
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 159036
 build-amd64-xsm               4 host-install(4)        broken REGR. vs. 159036
 build-armhf-pvops             4 host-install(4)        broken REGR. vs. 159036
 build-amd64-libvirt           4 host-install(4)        broken REGR. vs. 159036

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      5 host-install(5)        broken REGR. vs. 159036

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-livepatch     1 build-check(1)               blocked  n/a
 test-amd64-i386-migrupgrade   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  ca82d3fecc93745ee17850a609ac7772bd7c8bf7
baseline version:
 xen                  ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7

Last test of basis   159036  2021-02-05 08:46:54 Z    2 days
Testing same since   159077  2021-02-06 11:11:30 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Michał Leszczyński <michal.leszczynski@cert.pl>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Tamas K Lengyel <tamas.lengyel@intel.com>

jobs:
 build-amd64-xsm                                              broken  
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               broken  
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  broken  
 build-armhf                                                  pass    
 build-i386                                                   broken  
 build-amd64-libvirt                                          broken  
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-prev                                             pass    
 build-i386-prev                                              broken  
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-xtf-amd64-amd64-1                                       broken  
 test-xtf-amd64-amd64-2                                       broken  
 test-xtf-amd64-amd64-3                                       broken  
 test-xtf-amd64-amd64-4                                       broken  
 test-xtf-amd64-amd64-5                                       broken  
 test-amd64-amd64-xl                                          broken  
 test-amd64-coresched-amd64-xl                                broken  
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            broken  
 test-amd64-amd64-xl-pvhv2-amd                                broken  
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              broken  
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    broken  
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    broken  
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       broken  
 test-amd64-amd64-qemuu-freebsd12-amd64                       broken  
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         broken  
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         broken  
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         broken  
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         broken  
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         broken  
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  broken  
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  broken  
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        broken  
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     blocked 
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          broken  
 test-amd64-amd64-xl-pvhv2-intel                              broken  
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            broken  
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-livepatch                                   broken  
 test-amd64-i386-livepatch                                    blocked 
 test-amd64-amd64-migrupgrade                                 broken  
 test-amd64-i386-migrupgrade                                  blocked 
 test-amd64-amd64-xl-multivcpu                                broken  
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        broken  
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                broken  
 test-amd64-amd64-i386-pvgrub                                 broken  
 test-amd64-amd64-xl-pvshim                                   broken  
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      broken  
 test-amd64-amd64-xl-qcow2                                    broken  
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     broken  
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             broken  
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   broken  
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-amd64-xl-rtds broken
broken-job test-amd64-amd64-xl-qemuu-ws16-amd64 broken
broken-job test-amd64-amd64-xl-qemuu-win7-amd64 broken
broken-job test-amd64-amd64-xl-qemuu-ovmf-amd64 broken
broken-job test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict broken
broken-job test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow broken
broken-job test-amd64-amd64-xl-qemuu-debianhvm-amd64 broken
broken-job test-amd64-amd64-xl-qemut-ws16-amd64 broken
broken-job test-amd64-amd64-xl-qemut-win7-amd64 broken
broken-job build-amd64-libvirt broken
broken-job test-amd64-amd64-xl-qemut-debianhvm-amd64 broken
broken-job test-amd64-amd64-xl-qcow2 broken
broken-job build-amd64-xsm broken
broken-job test-amd64-amd64-xl-pvshim broken
broken-job build-arm64 broken
broken-job test-amd64-amd64-xl-pvhv2-intel broken
broken-job build-arm64-pvops broken
broken-job test-amd64-amd64-xl-pvhv2-amd broken
broken-job build-arm64-xsm broken
broken-job test-amd64-amd64-xl-multivcpu broken
broken-job build-armhf-pvops broken
broken-job build-i386 broken
broken-job test-amd64-amd64-xl-credit2 broken
broken-job build-i386-prev broken
broken-job test-amd64-amd64-xl-credit1 broken
broken-job build-i386-pvops broken
broken-job test-amd64-amd64-xl broken
broken-job build-i386-xsm broken
broken-job test-amd64-amd64-amd64-pvgrub broken
broken-job test-amd64-amd64-dom0pvh-xl-amd broken
broken-job test-amd64-amd64-qemuu-nested-intel broken
broken-job test-amd64-amd64-dom0pvh-xl-intel broken
broken-job test-amd64-amd64-qemuu-nested-amd broken
broken-job test-amd64-amd64-i386-pvgrub broken
broken-job test-amd64-amd64-qemuu-freebsd12-amd64 broken
broken-job test-amd64-amd64-qemuu-freebsd11-amd64 broken
broken-job test-amd64-amd64-pygrub broken
broken-job test-amd64-amd64-livepatch broken
broken-job test-amd64-amd64-migrupgrade broken
broken-job test-amd64-amd64-pair broken
broken-job test-amd64-amd64-xl-shadow broken
broken-job test-amd64-coresched-amd64-xl broken
broken-job test-xtf-amd64-amd64-1 broken
broken-job test-xtf-amd64-amd64-2 broken
broken-job test-xtf-amd64-amd64-3 broken
broken-job test-xtf-amd64-amd64-4 broken
broken-job test-xtf-amd64-amd64-5 broken
broken-step test-amd64-amd64-xl-qcow2 host-install(5)
broken-step test-amd64-amd64-xl-pvhv2-amd host-install(5)
broken-step test-amd64-coresched-amd64-xl host-install(5)
broken-step test-amd64-amd64-xl-rtds host-install(5)
broken-step test-amd64-amd64-xl-qemut-win7-amd64 host-install(5)
broken-step test-amd64-amd64-qemuu-nested-amd host-install(5)
broken-step test-amd64-amd64-xl-pvshim host-install(5)
broken-step test-amd64-amd64-i386-pvgrub host-install(5)
broken-step test-amd64-amd64-xl-pvhv2-intel host-install(5)
broken-step test-amd64-amd64-qemuu-nested-intel host-install(5)
broken-step test-amd64-amd64-xl-qemuu-ws16-amd64 host-install(5)
broken-step test-amd64-amd64-dom0pvh-xl-amd host-install(5)
broken-step test-amd64-amd64-xl-qemuu-win7-amd64 host-install(5)
broken-step test-amd64-amd64-dom0pvh-xl-intel host-install(5)
broken-step test-amd64-amd64-xl host-install(5)
broken-step test-amd64-amd64-qemuu-freebsd12-amd64 host-install(5)
broken-step test-amd64-amd64-migrupgrade host-install/src_host(6)
broken-step test-amd64-amd64-pair host-install/src_host(6)
broken-step test-amd64-amd64-migrupgrade host-install/dst_host(7)
broken-step test-amd64-amd64-xl-qemuu-debianhvm-amd64 host-install(5)
broken-step test-amd64-amd64-pair host-install/dst_host(7)
broken-step test-amd64-amd64-xl-qemuu-ovmf-amd64 host-install(5)
broken-step test-amd64-amd64-xl-qemut-ws16-amd64 host-install(5)
broken-step test-xtf-amd64-amd64-4 host-install(5)
broken-step test-amd64-amd64-xl-multivcpu host-install(5)
broken-step test-amd64-amd64-xl-shadow host-install(5)
broken-step test-amd64-amd64-xl-credit2 host-install(5)
broken-step test-amd64-amd64-amd64-pvgrub host-install(5)
broken-step test-amd64-amd64-xl-credit1 host-install(5)
broken-step test-amd64-amd64-xl-qemut-debianhvm-amd64 host-install(5)
broken-step test-amd64-amd64-pygrub host-install(5)
broken-step test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict host-install(5)
broken-step test-amd64-amd64-qemuu-freebsd11-amd64 host-install(5)
broken-step test-amd64-amd64-livepatch host-install(5)
broken-step test-xtf-amd64-amd64-2 host-install(5)
broken-step build-i386 host-install(4)
broken-step build-i386-xsm host-install(4)
broken-step build-i386-prev host-install(4)
broken-step test-xtf-amd64-amd64-3 host-install(5)
broken-step build-i386-pvops host-install(4)
broken-step test-xtf-amd64-amd64-5 host-install(5)
broken-step test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow host-install(5)
broken-step test-xtf-amd64-amd64-1 host-install(5)
broken-step test-amd64-amd64-examine host-install
broken-step build-arm64-pvops host-install(4)
broken-step build-arm64 host-install(4)
broken-step build-arm64-xsm host-install(4)
broken-step build-amd64-xsm host-install(4)
broken-step build-armhf-pvops host-install(4)
broken-step build-amd64-libvirt host-install(4)

Not pushing.

------------------------------------------------------------
commit ca82d3fecc93745ee17850a609ac7772bd7c8bf7
Author: Tamas K Lengyel <tamas.lengyel@intel.com>
Date:   Sat Jan 30 08:36:37 2021 -0500

    x86/vm_event: add response flag to reset vmtrace buffer
    
    Allow resetting the vmtrace buffer in response to a vm_event. This can be used
    to optimize a use-case where detecting a looped vmtrace buffer is important.
    
    Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit c5866ab93167a73a8d4d85b844edf4aa364a1aaa
Author: Tamas K Lengyel <tamas.lengyel@intel.com>
Date:   Mon Jan 18 12:46:37 2021 -0500

    x86/vm_event: Carry the vmtrace buffer position in vm_event
    
    Add a vmtrace_pos field to the x86 regs in vm_event, initialized to ~0
    if vmtrace is not in use.
    
    Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit 9744611991a042e9aea348c5721b80cc2101c7e5
Author: Tamas K Lengyel <tamas.lengyel@intel.com>
Date:   Fri Sep 11 20:14:00 2020 +0200

    xen/vmtrace: support for VM forks
    
    Implement the vmtrace_reset_pt() function, and properly set IPT
    state for VM forks.
    
    Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit 88dd8389dd2c9442729e9d96a4febaf38cd822e3
Author: Michał Leszczyński <michal.leszczynski@cert.pl>
Date:   Tue Jun 16 15:35:07 2020 +0200

    tools/misc: Add xen-vmtrace tool
    
    Add a demonstration tool that uses xc_vmtrace_* calls to manage
    external IPT monitoring for a DomU.
    
    Signed-off-by: Michał Leszczyński <michal.leszczynski@cert.pl>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit 53aaa792fdebcf131983d45ee8e3d09bd0740c71
Author: Michał Leszczyński <michal.leszczynski@cert.pl>
Date:   Tue Jun 16 15:33:25 2020 +0200

    tools/libxc: Add xc_vmtrace_* functions
    
    Add functions in libxc that use the new XEN_DOMCTL_vmtrace interface.
    
    Signed-off-by: Michał Leszczyński <michal.leszczynski@cert.pl>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit 1cee4bd97c88633c4a39f56f6722be0727c9ea8f
Author: Michał Leszczyński <michal.leszczynski@cert.pl>
Date:   Sun Jun 28 23:48:09 2020 +0200

    xen/domctl: Add XEN_DOMCTL_vmtrace_op
    
    Implement an interface to configure and control tracing operations.  Reuse the
    existing SETDEBUGGING flask vector rather than inventing a new one.
    
    Userspace using this interface is going to need platform specific knowledge
    anyway to interpret the contents of the trace buffer.  While some operations
    (e.g. enable/disable) can reasonably be generic, others cannot.  Provide an
    explicitly platform-specific pair of get/set operations to reduce API churn as
    new options get added/enabled.
    
    For the VMX specific Processor Trace implementation, tolerate reading and
    modifying a safe subset of bits in CTL, STATUS and OUTPUT_MASK.  This permits
    userspace to control the content which gets logged, but prevents modification
    of details such as the position/size of the output buffer.
    
    Signed-off-by: Michał Leszczyński <michal.leszczynski@cert.pl>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit 71cb03f03ce309e8cc1dacd18aa383ccea6af231
Author: Michał Leszczyński <michal.leszczynski@cert.pl>
Date:   Tue Jun 16 15:20:18 2020 +0200

    x86/vmx: Add Intel Processor Trace support
    
    Add CPUID/MSR enumeration details for Processor Trace.  For now, we will only
    support its use inside VMX operation.  Fill in the vmtrace_available boolean
    to activate the newly introduced common infrastructure for allocating trace
    buffers.
    
    For now, Processor Trace is going to be operated in Single Output mode behind
    the guest's back.  Add the MSRs to struct vcpu_msrs, and set up the buffer
    limit in vmx_init_ipt() as it is fixed for the lifetime of the domain.
    
    Context switch most of the MSRs in and out of vCPU context, but the main
    control register needs to reside in the MSR load/save lists.  Explicitly pull
    the msrs pointer out into a local variable, because the optimiser cannot keep
    it live across the memory clobbers in the MSR accesses.
    
    Signed-off-by: Michał Leszczyński <michal.leszczynski@cert.pl>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit b72eab263592a3d76aa826675e5d62606d83cecd
Author: Michał Leszczyński <michal.leszczynski@cert.pl>
Date:   Mon Jun 29 00:05:51 2020 +0200

    xen/memory: Add a vmtrace_buf resource type
    
    Allow mapping the processor trace buffer using acquire_resource().
    
    Signed-off-by: Michał Leszczyński <michal.leszczynski@cert.pl>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit 45ba9a7d7688a6a08200e37a8caa2bc99bb4d267
Author: Michał Leszczyński <michal.leszczynski@cert.pl>
Date:   Fri Jun 19 00:31:24 2020 +0200

    tools/[lib]xl: Add vmtrace_buf_size parameter
    
    Allow specifying the size of the per-vCPU trace buffer at domain
    creation.  It is zero by default (meaning: not enabled).
    
    Signed-off-by: Michał Leszczyński <michal.leszczynski@cert.pl>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>
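
For illustration, enabling tracing is then a one-line addition to a guest's xl
configuration.  A minimal sketch (the option shipped as vmtrace_buf_kb in the
released xl.cfg(5); the exact name and units at the time of this series may
differ, so check your release's documentation):

```
# Allocate a 64 KiB Processor Trace buffer per vCPU.
# 0 (the default) leaves tracing disabled.
vmtrace_buf_kb = 64
```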

commit 217dd79ee29286b85074d22cc75ee064206fb2af
Author: Michał Leszczyński <michal.leszczynski@cert.pl>
Date:   Fri Jul 3 01:16:10 2020 +0200

    xen/domain: Add vmtrace_size domain creation parameter
    
    To use vmtrace, buffers of a suitable size need allocating, and different
    tasks will want different sizes.
    
    Add a domain creation parameter, and audit it appropriately in the
    {arch_,}sanitise_domain_config() functions.
    
    For now, the x86 specific auditing is tuned to Processor Trace running in
    Single Output mode, which requires a single contiguous range of memory.
    
    The size is given an arbitrary limit of 64M which is expected to be enough for
    anticipated usecases, but not large enough to get into long-running-hypercall
    problems.
    
    Signed-off-by: Michał Leszczyński <michal.leszczynski@cert.pl>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>
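
The audit described above can be sketched as a standalone predicate.  This is
a sketch, not Xen's code: the 64M cap comes from the commit message, while the
power-of-two and minimum-page-size requirements are assumptions matching
Single Output mode; the real checks live in {arch_,}sanitise_domain_config().

```c
#include <stdbool.h>
#include <stdint.h>

#define KB(x) ((uint64_t)(x) << 10)
#define MB(x) ((uint64_t)(x) << 20)

/* A vmtrace buffer is either disabled (size 0) or a single contiguous
 * region: assumed power-of-two, at least one 4K page, at most the
 * arbitrary 64M cap mentioned in the commit message. */
static bool vmtrace_size_ok(uint64_t size)
{
    if ( size == 0 )
        return true;                /* tracing not enabled */
    if ( size & (size - 1) )
        return false;               /* Single Output mode wants a power of two */
    return size >= KB(4) && size <= MB(64);
}
```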

commit 34cc2e5f8dba6906da82fe8d76e839f9ab20f153
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jul 27 17:24:11 2020 +0100

    xen/memory: Fix mapping grant tables with XENMEM_acquire_resource
    
    A guest's default number of grant frames is 64, and XENMEM_acquire_resource
    will reject an attempt to map more than 32 frames.  This limit is caused by
    the size of mfn_list[] on the stack.
    
    Fix mapping of arbitrary size requests by looping over batches of 32 in
    acquire_resource(), and using hypercall continuations when necessary.
    
    To start with, break _acquire_resource() out of acquire_resource() to cope
    with type-specific dispatching, and update the return semantics to indicate
    the number of mfns returned.  Update gnttab_acquire_resource() and x86's
    arch_acquire_resource() to match these new semantics.
    
    Have do_memory_op() pass start_extent into acquire_resource() so it can pick
    up where it left off after a continuation, and loop over batches of 32 until
    all the work is done, or a continuation needs to occur.
    
    compat_memory_op() is a bit more complicated, because it also has to marshal
    frame_list in the XLAT buffer.  Have it account for continuation information
    itself and hide details from the upper layer, so it can marshal the buffer in
    chunks if necessary.
    
    With these fixes in place, it is now possible to map the whole grant table for
    a guest.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>
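
The batch-of-32 loop with a resumable start index can be sketched in
isolation.  All names here are illustrative, not Xen's: acquire_batch() stands
in for the type-specific backends, and the commented-out early return stands
in for a hypercall continuation; the real logic spans do_memory_op() and
acquire_resource().

```c
#include <stddef.h>

#define BATCH 32  /* mirrors the on-stack mfn_list[] limit described above */

/* Hypothetical per-type backend: "acquires" one batch of frames starting
 * at start_frame and returns how many it handled (here always nr). */
static size_t acquire_batch(size_t start_frame, size_t nr, size_t *out)
{
    for ( size_t i = 0; i < nr; i++ )
        out[i] = 1000 + start_frame + i;  /* stand-in for real MFNs */
    return nr;
}

/* Loop over batches of BATCH, resuming from start_extent.  Returns the
 * number of frames processed; a real implementation would create a
 * hypercall continuation instead of looping to completion when preempted. */
static size_t acquire_resource(size_t start_extent, size_t total, size_t *dst)
{
    size_t done = start_extent;

    while ( done < total )
    {
        size_t mfns[BATCH];
        size_t nr = (total - done < BATCH) ? total - done : BATCH;
        size_t got = acquire_batch(done, nr, mfns);

        for ( size_t i = 0; i < got; i++ )
            dst[done + i] = mfns[i];
        done += got;
        /* if ( preempt_needed() ) return done;  -- continuation point */
    }
    return done;
}
```

With this shape, mapping all 64 default grant frames takes two batches
instead of failing at the 32-frame limit.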

commit f4318db940c39cc656128fcf72df3e79d2e55bc1
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Feb 5 14:09:42 2021 +0100

    x86/EFI: work around GNU ld 2.36 issue
    
    Our linker capability check fails with the recent binutils release's ld:
    
    .../check.o:(.debug_aranges+0x6): relocation truncated to fit: R_X86_64_32 against `.debug_info'
    .../check.o:(.debug_info+0x6): relocation truncated to fit: R_X86_64_32 against `.debug_abbrev'
    .../check.o:(.debug_info+0xc): relocation truncated to fit: R_X86_64_32 against `.debug_str'+76
    .../check.o:(.debug_info+0x11): relocation truncated to fit: R_X86_64_32 against `.debug_str'+d
    .../check.o:(.debug_info+0x15): relocation truncated to fit: R_X86_64_32 against `.debug_str'+2b
    .../check.o:(.debug_info+0x29): relocation truncated to fit: R_X86_64_32 against `.debug_line'
    .../check.o:(.debug_info+0x30): relocation truncated to fit: R_X86_64_32 against `.debug_str'+19
    .../check.o:(.debug_info+0x37): relocation truncated to fit: R_X86_64_32 against `.debug_str'+71
    .../check.o:(.debug_info+0x3e): relocation truncated to fit: R_X86_64_32 against `.debug_str'
    .../check.o:(.debug_info+0x45): relocation truncated to fit: R_X86_64_32 against `.debug_str'+5e
    .../check.o:(.debug_info+0x4c): additional relocation overflows omitted from the output
    
    Tell the linker to strip debug info as a workaround. Debug info was
    already being stripped anyway when linking the actual xen.efi.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
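The overflow errors above appear with GNU ld 2.36 and newer. A build system that wanted to gate such a workaround on the linker version could parse the `ld --version` banner as sketched below; note this gating is purely illustrative, as the actual Xen fix described above strips debug info unconditionally.

```python
import re


def ld_version(banner):
    """Extract (major, minor) from an `ld --version` banner line, e.g.
    'GNU ld (GNU Binutils) 2.36'.  Returns None if no version is found."""
    m = re.search(r'(\d+)\.(\d+)', banner)
    if not m:
        return None
    return int(m.group(1)), int(m.group(2))


def needs_debug_strip(banner):
    """Hypothetical gate: GNU ld >= 2.36 reports the R_X86_64_32
    relocation overflows shown above when debug sections reach the
    linker capability check, so strip debug info there.  (The real fix
    applies the workaround regardless of version.)"""
    ver = ld_version(banner)
    return ver is not None and ver >= (2, 36)
```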

commit d7acc47c8201611fda98ce5bd465626478ca4759
Author: Roger Pau Monne <roger.pau@citrix.com>
Date:   Fri Feb 5 13:19:38 2021 +0100

    tools/tests: fix resource test build on FreeBSD
    
    error.h is not a standard header, and none of the functions declared
    there are actually used by the code. This fixes the build on FreeBSD,
    which doesn't have error.h.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sun Feb 07 21:21:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 07 Feb 2021 21:21:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82540.152393 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8rUQ-0000o1-9I; Sun, 07 Feb 2021 21:21:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82540.152393; Sun, 07 Feb 2021 21:21:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8rUQ-0000nu-6P; Sun, 07 Feb 2021 21:21:06 +0000
Received: by outflank-mailman (input) for mailman id 82540;
 Sun, 07 Feb 2021 21:21:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8rUP-0000nm-Cx; Sun, 07 Feb 2021 21:21:05 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8rUP-0005DM-4C; Sun, 07 Feb 2021 21:21:05 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8rUO-0006AB-Sa; Sun, 07 Feb 2021 21:21:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l8rUO-0005cr-S4; Sun, 07 Feb 2021 21:21:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=LfZv5OzFuhJDCEQsHrExoR31m/8S4U3FiKbVvnFM+gA=; b=KJXpK4gKZuOBG9JmS+M5F3d8Aq
	mbaTVfyZ8RJYebY1YXpnE45946t43+Iuu9aQDm0j9/MvPL9jfzx9ZnWxaQcppvnuthr6OK5zIO+vF
	1wwhRa3Yv9FdU6oBTwcLBhM23E8Ki/pq1HwifRJl3Iq1Tm+qTKqGdBKBtu3Tojp0Z/y4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159088-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 159088: trouble: blocked/broken
X-Osstest-Failures:
    ovmf:build-amd64:<job status>:broken:regression
    ovmf:build-amd64-pvops:<job status>:broken:regression
    ovmf:build-amd64-xsm:<job status>:broken:regression
    ovmf:build-i386:<job status>:broken:regression
    ovmf:build-i386-pvops:<job status>:broken:regression
    ovmf:build-i386-xsm:<job status>:broken:regression
    ovmf:build-i386:host-install(4):broken:regression
    ovmf:build-i386-xsm:host-install(4):broken:regression
    ovmf:build-i386-pvops:host-install(4):broken:regression
    ovmf:build-amd64:host-install(4):broken:regression
    ovmf:build-amd64-xsm:host-install(4):broken:regression
    ovmf:build-amd64-pvops:host-install(4):broken:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=43a113385e370530eb52cf2e55b3019d8d4f6558
X-Osstest-Versions-That:
    ovmf=0d96664df322d50e0ac54130e129c0bf4f2b72df
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 07 Feb 2021 21:21:04 +0000

flight 159088 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159088/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-amd64-pvops               <job status>                 broken
 build-amd64-xsm                 <job status>                 broken
 build-i386                      <job status>                 broken
 build-i386-pvops                <job status>                 broken
 build-i386-xsm                  <job status>                 broken
 build-i386                    4 host-install(4)        broken REGR. vs. 159040
 build-i386-xsm                4 host-install(4)        broken REGR. vs. 159040
 build-i386-pvops              4 host-install(4)        broken REGR. vs. 159040
 build-amd64                   4 host-install(4)        broken REGR. vs. 159040
 build-amd64-xsm               4 host-install(4)        broken REGR. vs. 159040
 build-amd64-pvops             4 host-install(4)        broken REGR. vs. 159040

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 43a113385e370530eb52cf2e55b3019d8d4f6558
baseline version:
 ovmf                 0d96664df322d50e0ac54130e129c0bf4f2b72df

Last test of basis   159040  2021-02-05 11:11:01 Z    2 days
Testing same since   159088  2021-02-07 01:54:46 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bob Feng <bob.c.feng@intel.com>
  Liming Gao <gaoliming@byosoft.com.cn>

jobs:
 build-amd64-xsm                                              broken  
 build-i386-xsm                                               broken  
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-amd64-pvops broken
broken-job build-amd64-xsm broken
broken-job build-i386 broken
broken-job build-i386-pvops broken
broken-job build-i386-xsm broken
broken-step build-i386 host-install(4)
broken-step build-i386-xsm host-install(4)
broken-step build-i386-pvops host-install(4)
broken-step build-amd64 host-install(4)
broken-step build-amd64-xsm host-install(4)
broken-step build-amd64-pvops host-install(4)

Not pushing.

------------------------------------------------------------
commit 43a113385e370530eb52cf2e55b3019d8d4f6558
Author: Bob Feng <bob.c.feng@intel.com>
Date:   Mon Feb 1 18:28:58 2021 +0800

    BaseTools: fix the split output files root dir
    
    If the output file path is a relative path, the split
    tool creates the output file under the input file's directory,
    but the expected behavior in this case is for the output file
    to be relative to the current directory. This patch fixes
    that bug.
    
    If neither the output file path nor an output prefix is
    specified, the output file should be placed under the input
    file's directory.
    
    Signed-off-by: Bob Feng <bob.c.feng@intel.com>
    Cc: Liming Gao <gaoliming@byosoft.com.cn>
    Cc: Yuwei Chen <yuwei.chen@intel.com>
    Acked-by: Liming Gao <gaoliming@byosoft.com.cn>
    Reviewed-by: Yuwei Chen <yuwei.chen@intel.com>
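    The intended path behavior described in the message can be modelled as
    follows. `resolve_output_path` is a hypothetical helper written for
    illustration, not the actual BaseTools Split code.

    ```python
    import os


    def resolve_output_path(input_file, output_file=None, prefix=None):
        """Model of the intended behavior: an explicitly given relative
        output path resolves against the current directory; if neither an
        output path nor a prefix is given, the output lands next to the
        input file.  Hypothetical helper, not the real Split tool."""
        if output_file:
            # Relative paths resolve against the CWD, not the input's dir.
            return os.path.abspath(output_file)
        if prefix:
            return os.path.abspath(
                os.path.join(prefix, os.path.basename(input_file)))
        # Neither given: place the output under the input file's directory.
        return os.path.join(os.path.dirname(os.path.abspath(input_file)),
                            os.path.basename(input_file) + ".out")
    ```

    So `resolve_output_path("/tmp/in.bin", "out.bin")` yields a path under
    the current directory, while omitting both arguments yields a path
    under /tmp, matching the two cases the commit message distinguishes.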


From xen-devel-bounces@lists.xenproject.org Sun Feb 07 21:46:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 07 Feb 2021 21:46:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82544.152409 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8rtF-0002lh-Es; Sun, 07 Feb 2021 21:46:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82544.152409; Sun, 07 Feb 2021 21:46:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8rtF-0002la-Bh; Sun, 07 Feb 2021 21:46:45 +0000
Received: by outflank-mailman (input) for mailman id 82544;
 Sun, 07 Feb 2021 21:46:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8rtD-0002lS-QM; Sun, 07 Feb 2021 21:46:43 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8rtD-0005c1-Ih; Sun, 07 Feb 2021 21:46:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8rtD-0006jA-9Q; Sun, 07 Feb 2021 21:46:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l8rtD-0003MV-8z; Sun, 07 Feb 2021 21:46:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=pu68ab2jsQcx0VvFRw5IPbn2wTPhKOBP5xwDEBtbXJ0=; b=yemqeJCu1QujpYK/6nBUyWunMo
	wsgV025hWC1j59PEwTJbBd2bKwY0d93clUNgU38CZIZ+Itx4sygEEpxv1l9zcIYVng7hrlhrU2OEO
	D1XUYcxj7Wb/XofnGF8hcIbzW0wVCoajZOhxqNgTopUJyMvYYWRhNQ+q+s7wy17nF2Jw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159090-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.12-testing test] 159090: trouble: blocked/broken
X-Osstest-Failures:
    xen-4.12-testing:build-amd64:<job status>:broken:regression
    xen-4.12-testing:build-amd64-prev:<job status>:broken:regression
    xen-4.12-testing:build-amd64-pvops:<job status>:broken:regression
    xen-4.12-testing:build-amd64-xsm:<job status>:broken:regression
    xen-4.12-testing:build-amd64-xtf:<job status>:broken:regression
    xen-4.12-testing:build-arm64:<job status>:broken:regression
    xen-4.12-testing:build-arm64-pvops:<job status>:broken:regression
    xen-4.12-testing:build-arm64-xsm:<job status>:broken:regression
    xen-4.12-testing:build-armhf:<job status>:broken:regression
    xen-4.12-testing:build-armhf-pvops:<job status>:broken:regression
    xen-4.12-testing:build-i386:<job status>:broken:regression
    xen-4.12-testing:build-i386-prev:<job status>:broken:regression
    xen-4.12-testing:build-i386-pvops:<job status>:broken:regression
    xen-4.12-testing:build-i386-xsm:<job status>:broken:regression
    xen-4.12-testing:build-i386-pvops:host-install(4):broken:regression
    xen-4.12-testing:build-i386-prev:host-install(4):broken:regression
    xen-4.12-testing:build-i386-xsm:host-install(4):broken:regression
    xen-4.12-testing:build-i386:host-install(4):broken:regression
    xen-4.12-testing:build-arm64:host-install(4):broken:regression
    xen-4.12-testing:build-arm64-pvops:host-install(4):broken:regression
    xen-4.12-testing:build-arm64-xsm:host-install(4):broken:regression
    xen-4.12-testing:build-amd64-pvops:host-install(4):broken:regression
    xen-4.12-testing:build-amd64:host-install(4):broken:regression
    xen-4.12-testing:build-amd64-xtf:host-install(4):broken:regression
    xen-4.12-testing:build-amd64-xsm:host-install(4):broken:regression
    xen-4.12-testing:build-armhf-pvops:host-install(4):broken:regression
    xen-4.12-testing:build-amd64-prev:host-install(4):broken:regression
    xen-4.12-testing:build-armhf:host-install(4):broken:regression
    xen-4.12-testing:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-xtf-amd64-amd64-1:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-xtf-amd64-amd64-2:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:build-arm64-libvirt:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    xen-4.12-testing:build-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    xen-4.12-testing:build-i386-libvirt:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-livepatch:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-migrupgrade:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-livepatch:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-migrupgrade:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-xtf-amd64-amd64-3:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-xtf-amd64-amd64-4:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-xtf-amd64-amd64-5:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=8d26cdd3b66ab86d560dacd763d76ff3da95723e
X-Osstest-Versions-That:
    xen=cce7cbd986c122a86582ff3775b6b559d877407c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 07 Feb 2021 21:46:43 +0000

flight 159090 xen-4.12-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159090/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-amd64-prev                <job status>                 broken
 build-amd64-pvops               <job status>                 broken
 build-amd64-xsm                 <job status>                 broken
 build-amd64-xtf                 <job status>                 broken
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-armhf                     <job status>                 broken
 build-armhf-pvops               <job status>                 broken
 build-i386                      <job status>                 broken
 build-i386-prev                 <job status>                 broken
 build-i386-pvops                <job status>                 broken
 build-i386-xsm                  <job status>                 broken
 build-i386-pvops              4 host-install(4)        broken REGR. vs. 158556
 build-i386-prev               4 host-install(4)        broken REGR. vs. 158556
 build-i386-xsm                4 host-install(4)        broken REGR. vs. 158556
 build-i386                    4 host-install(4)        broken REGR. vs. 158556
 build-arm64                   4 host-install(4)        broken REGR. vs. 158556
 build-arm64-pvops             4 host-install(4)        broken REGR. vs. 158556
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 158556
 build-amd64-pvops             4 host-install(4)        broken REGR. vs. 158556
 build-amd64                   4 host-install(4)        broken REGR. vs. 158556
 build-amd64-xtf               4 host-install(4)        broken REGR. vs. 158556
 build-amd64-xsm               4 host-install(4)        broken REGR. vs. 158556
 build-armhf-pvops             4 host-install(4)        broken REGR. vs. 158556
 build-amd64-prev              4 host-install(4)        broken REGR. vs. 158556
 build-armhf                   4 host-install(4)        broken REGR. vs. 158556

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-1        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-2        1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-livepatch    1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-migrupgrade  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-livepatch     1 build-check(1)               blocked  n/a
 test-amd64-i386-migrupgrade   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-xtf-amd64-amd64-3        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-4        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-5        1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  8d26cdd3b66ab86d560dacd763d76ff3da95723e
baseline version:
 xen                  cce7cbd986c122a86582ff3775b6b559d877407c

Last test of basis   158556  2021-01-21 15:37:26 Z   17 days
Failing since        159017  2021-02-04 15:06:13 Z    3 days    3 attempts
Testing same since   159052  2021-02-05 18:27:22 Z    2 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              broken  
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               broken  
 build-amd64-xtf                                              broken  
 build-amd64                                                  broken  
 build-arm64                                                  broken  
 build-armhf                                                  broken  
 build-i386                                                   broken  
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-prev                                             broken  
 build-i386-prev                                              broken  
 build-amd64-pvops                                            broken  
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-xtf-amd64-amd64-1                                       blocked 
 test-xtf-amd64-amd64-2                                       blocked 
 test-xtf-amd64-amd64-3                                       blocked 
 test-xtf-amd64-amd64-4                                       blocked 
 test-xtf-amd64-amd64-5                                       blocked 
 test-amd64-amd64-xl                                          blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-livepatch                                   blocked 
 test-amd64-i386-livepatch                                    blocked 
 test-amd64-amd64-migrupgrade                                 blocked 
 test-amd64-i386-migrupgrade                                  blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-amd64-prev broken
broken-job build-amd64-pvops broken
broken-job build-amd64-xsm broken
broken-job build-amd64-xtf broken
broken-job build-arm64 broken
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-job build-armhf broken
broken-job build-armhf-pvops broken
broken-job build-i386 broken
broken-job build-i386-prev broken
broken-job build-i386-pvops broken
broken-job build-i386-xsm broken
broken-step build-i386-pvops host-install(4)
broken-step build-i386-prev host-install(4)
broken-step build-i386-xsm host-install(4)
broken-step build-i386 host-install(4)
broken-step build-arm64 host-install(4)
broken-step build-arm64-pvops host-install(4)
broken-step build-arm64-xsm host-install(4)
broken-step build-amd64-pvops host-install(4)
broken-step build-amd64 host-install(4)
broken-step build-amd64-xtf host-install(4)
broken-step build-amd64-xsm host-install(4)
broken-step build-armhf-pvops host-install(4)
broken-step build-amd64-prev host-install(4)
broken-step build-armhf host-install(4)

Not pushing.

------------------------------------------------------------
commit 8d26cdd3b66ab86d560dacd763d76ff3da95723e
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Feb 5 08:52:54 2021 +0100

    x86/msr: fix handling of MSR_IA32_PERF_{STATUS/CTL} (again, part 2)
    
    X86_VENDOR_* aren't bit masks in the older trees.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit f1f322610718c40680ac09e66f6c82e69c78ba3a
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Feb 4 15:39:45 2021 +0100

    x86/msr: fix handling of MSR_IA32_PERF_{STATUS/CTL} (again)
    
    X86_VENDOR_* aren't bit masks in the older trees.
    
    Reported-by: James Dingwall <james@dingwall.me.uk>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sun Feb 07 22:51:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 07 Feb 2021 22:51:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82558.152429 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8stq-0000il-Dv; Sun, 07 Feb 2021 22:51:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82558.152429; Sun, 07 Feb 2021 22:51:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8stq-0000ie-AI; Sun, 07 Feb 2021 22:51:26 +0000
Received: by outflank-mailman (input) for mailman id 82558;
 Sun, 07 Feb 2021 22:51:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8sto-0000iW-MA; Sun, 07 Feb 2021 22:51:24 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8sto-0006dI-Dq; Sun, 07 Feb 2021 22:51:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8sto-00088T-2S; Sun, 07 Feb 2021 22:51:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l8sto-0001Lk-1u; Sun, 07 Feb 2021 22:51:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=n5MGlt0AFJ3vF04MYClCEN6FDyY4vPQSKS8/JKS/vf0=; b=RpSNkaBFfHq7vOOBgxcmZdNS5P
	8EGLpgjW1nyVbVdAXb3NktZhBUW8uM6S7i8880C4jHQGL2+YqU4Qe8NWt8GFi+O1u1hx7EeL+l3WC
	oYQR4OIMNrVWxMKp9scSTs+PfJSzaEqcCdmo4Ft2E5g1FbwXQkidIVxgvmzY3qJ+2e28=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159096-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 159096: trouble: blocked/broken
X-Osstest-Failures:
    linux-5.4:build-amd64:<job status>:broken:regression
    linux-5.4:build-amd64-pvops:<job status>:broken:regression
    linux-5.4:build-amd64-xsm:<job status>:broken:regression
    linux-5.4:build-arm64:<job status>:broken:regression
    linux-5.4:build-arm64-pvops:<job status>:broken:regression
    linux-5.4:build-arm64-xsm:<job status>:broken:regression
    linux-5.4:build-armhf:<job status>:broken:regression
    linux-5.4:build-armhf-pvops:<job status>:broken:regression
    linux-5.4:build-i386:<job status>:broken:regression
    linux-5.4:build-i386-pvops:<job status>:broken:regression
    linux-5.4:build-i386-xsm:<job status>:broken:regression
    linux-5.4:build-i386-pvops:host-install(4):broken:regression
    linux-5.4:build-i386-xsm:host-install(4):broken:regression
    linux-5.4:build-i386:host-install(4):broken:regression
    linux-5.4:build-arm64-xsm:host-install(4):broken:regression
    linux-5.4:build-arm64-pvops:host-install(4):broken:regression
    linux-5.4:build-arm64:host-install(4):broken:regression
    linux-5.4:build-amd64:host-install(4):broken:regression
    linux-5.4:build-amd64-pvops:host-install(4):broken:regression
    linux-5.4:build-armhf-pvops:host-install(4):broken:regression
    linux-5.4:build-amd64-xsm:host-install(4):broken:regression
    linux-5.4:build-armhf:host-install(4):broken:regression
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    linux-5.4:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-5.4:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    linux-5.4:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    linux-5.4:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    linux-5.4:build-amd64-libvirt:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    linux-5.4:build-arm64-libvirt:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-5.4:build-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-5.4:build-i386-libvirt:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-examine:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    linux=e89428970c23011a2679121c56e9f54f654c6602
X-Osstest-Versions-That:
    linux=a829146c3fdcf6d0b76d9c54556a223820f1f73b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 07 Feb 2021 22:51:24 +0000

flight 159096 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159096/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-amd64-pvops               <job status>                 broken
 build-amd64-xsm                 <job status>                 broken
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-armhf                     <job status>                 broken
 build-armhf-pvops               <job status>                 broken
 build-i386                      <job status>                 broken
 build-i386-pvops                <job status>                 broken
 build-i386-xsm                  <job status>                 broken
 build-i386-pvops              4 host-install(4)        broken REGR. vs. 158387
 build-i386-xsm                4 host-install(4)        broken REGR. vs. 158387
 build-i386                    4 host-install(4)        broken REGR. vs. 158387
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 158387
 build-arm64-pvops             4 host-install(4)        broken REGR. vs. 158387
 build-arm64                   4 host-install(4)        broken REGR. vs. 158387
 build-amd64                   4 host-install(4)        broken REGR. vs. 158387
 build-amd64-pvops             4 host-install(4)        broken REGR. vs. 158387
 build-armhf-pvops             4 host-install(4)        broken REGR. vs. 158387
 build-amd64-xsm               4 host-install(4)        broken REGR. vs. 158387
 build-armhf                   4 host-install(4)        broken REGR. vs. 158387

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine      1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a

version targeted for testing:
 linux                e89428970c23011a2679121c56e9f54f654c6602
baseline version:
 linux                a829146c3fdcf6d0b76d9c54556a223820f1f73b

Last test of basis   158387  2021-01-12 19:40:06 Z   26 days
Failing since        158473  2021-01-17 13:42:20 Z   21 days   33 attempts
Testing same since   158997  2021-02-04 00:10:36 Z    3 days    4 attempts

------------------------------------------------------------
353 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              broken  
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               broken  
 build-amd64                                                  broken  
 build-arm64                                                  broken  
 build-armhf                                                  broken  
 build-i386                                                   broken  
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            broken  
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     blocked 
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     blocked 
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-amd64-pvops broken
broken-job build-amd64-xsm broken
broken-job build-arm64 broken
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-job build-armhf broken
broken-job build-armhf-pvops broken
broken-job build-i386 broken
broken-job build-i386-pvops broken
broken-job build-i386-xsm broken
broken-step build-i386-pvops host-install(4)
broken-step build-i386-xsm host-install(4)
broken-step build-i386 host-install(4)
broken-step build-arm64-xsm host-install(4)
broken-step build-arm64-pvops host-install(4)
broken-step build-arm64 host-install(4)
broken-step build-amd64 host-install(4)
broken-step build-amd64-pvops host-install(4)
broken-step build-armhf-pvops host-install(4)
broken-step build-amd64-xsm host-install(4)
broken-step build-armhf host-install(4)

Not pushing.

(No revision log; it would be 10442 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 00:20:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 00:20:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82590.152450 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8uHq-00014Y-5d; Mon, 08 Feb 2021 00:20:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82590.152450; Mon, 08 Feb 2021 00:20:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8uHq-00014R-2R; Mon, 08 Feb 2021 00:20:18 +0000
Received: by outflank-mailman (input) for mailman id 82590;
 Mon, 08 Feb 2021 00:20:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8uHn-00014I-Sz; Mon, 08 Feb 2021 00:20:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8uHn-0000FK-NO; Mon, 08 Feb 2021 00:20:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8uHn-0001cs-DY; Mon, 08 Feb 2021 00:20:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l8uHn-0004N8-D4; Mon, 08 Feb 2021 00:20:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=LAvaYGpvIZ1UqX56OJyFaEu5DgaBy16mULPRMEcfAWg=; b=NfY7nmqM5pq8Y6JO48oF9b5Lqy
	itaMBjZYWVmAR/Lf/ZKeET6mQwzJdZ65N6IR01dTFxCP5OOl83WBD3tTPq63qTev7nDwSi00Zcmw8
	ydFxH5/2RbNkV805ui6MGDv/S2yZT4GG9/ljRkZ6USxvig2Mfi1Oq3eEioo6NYfinK7s=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159097-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 159097: trouble: blocked/broken
X-Osstest-Failures:
    libvirt:build-amd64:<job status>:broken:regression
    libvirt:build-amd64-pvops:<job status>:broken:regression
    libvirt:build-amd64-xsm:<job status>:broken:regression
    libvirt:build-arm64:<job status>:broken:regression
    libvirt:build-arm64-pvops:<job status>:broken:regression
    libvirt:build-arm64-xsm:<job status>:broken:regression
    libvirt:build-armhf:<job status>:broken:regression
    libvirt:build-armhf-pvops:<job status>:broken:regression
    libvirt:build-i386:<job status>:broken:regression
    libvirt:build-i386-pvops:<job status>:broken:regression
    libvirt:build-i386-xsm:<job status>:broken:regression
    libvirt:build-i386-xsm:host-install(4):broken:regression
    libvirt:build-i386-pvops:host-install(4):broken:regression
    libvirt:build-i386:host-install(4):broken:regression
    libvirt:build-arm64-pvops:host-install(4):broken:regression
    libvirt:build-arm64:host-install(4):broken:regression
    libvirt:build-arm64-xsm:host-install(4):broken:regression
    libvirt:build-armhf-pvops:host-install(4):broken:regression
    libvirt:build-amd64-xsm:host-install(4):broken:regression
    libvirt:build-amd64:host-install(4):broken:regression
    libvirt:build-amd64-pvops:host-install(4):broken:regression
    libvirt:build-armhf:host-install(4):broken:regression
    libvirt:build-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:build-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:build-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:build-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=a58edc602ebfef9323d405f846cb0076bdfc8044
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 08 Feb 2021 00:20:15 +0000

flight 159097 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159097/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-amd64-pvops               <job status>                 broken
 build-amd64-xsm                 <job status>                 broken
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-armhf                     <job status>                 broken
 build-armhf-pvops               <job status>                 broken
 build-i386                      <job status>                 broken
 build-i386-pvops                <job status>                 broken
 build-i386-xsm                  <job status>                 broken
 build-i386-xsm                4 host-install(4)        broken REGR. vs. 151777
 build-i386-pvops              4 host-install(4)        broken REGR. vs. 151777
 build-i386                    4 host-install(4)        broken REGR. vs. 151777
 build-arm64-pvops             4 host-install(4)        broken REGR. vs. 151777
 build-arm64                   4 host-install(4)        broken REGR. vs. 151777
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 151777
 build-armhf-pvops             4 host-install(4)        broken REGR. vs. 151777
 build-amd64-xsm               4 host-install(4)        broken REGR. vs. 151777
 build-amd64                   4 host-install(4)        broken REGR. vs. 151777
 build-amd64-pvops             4 host-install(4)        broken REGR. vs. 151777
 build-armhf                   4 host-install(4)        broken REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              a58edc602ebfef9323d405f846cb0076bdfc8044
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  212 days
Failing since        151818  2020-07-11 04:18:52 Z  211 days  206 attempts
Testing same since   159097  2021-02-07 09:24:19 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              broken  
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               broken  
 build-amd64                                                  broken  
 build-arm64                                                  broken  
 build-armhf                                                  broken  
 build-i386                                                   broken  
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            broken  
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-amd64-pvops broken
broken-job build-amd64-xsm broken
broken-job build-arm64 broken
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-job build-armhf broken
broken-job build-armhf-pvops broken
broken-job build-i386 broken
broken-job build-i386-pvops broken
broken-job build-i386-xsm broken
broken-step build-i386-xsm host-install(4)
broken-step build-i386-pvops host-install(4)
broken-step build-i386 host-install(4)
broken-step build-arm64-pvops host-install(4)
broken-step build-arm64 host-install(4)
broken-step build-arm64-xsm host-install(4)
broken-step build-armhf-pvops host-install(4)
broken-step build-amd64-xsm host-install(4)
broken-step build-amd64 host-install(4)
broken-step build-amd64-pvops host-install(4)
broken-step build-armhf host-install(4)

Not pushing.

(No revision log; it would be 40506 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 01:08:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 01:08:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82594.152466 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8v2p-0006np-Ev; Mon, 08 Feb 2021 01:08:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82594.152466; Mon, 08 Feb 2021 01:08:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8v2p-0006nh-8a; Mon, 08 Feb 2021 01:08:51 +0000
Received: by outflank-mailman (input) for mailman id 82594;
 Mon, 08 Feb 2021 01:08:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8v2n-0006nZ-Mb; Mon, 08 Feb 2021 01:08:49 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8v2n-0002bp-Ae; Mon, 08 Feb 2021 01:08:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8v2n-0002fI-12; Mon, 08 Feb 2021 01:08:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l8v2n-0004HZ-0W; Mon, 08 Feb 2021 01:08:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=V069jb1b30EmasCggJdlOFym61kglQVo3wM1A/iFXec=; b=Tg5HqpcVgJ/UiySr/SddbVb9RG
	uWcb/N4dIs1M8f3LD25TcVNS3MWStpj87dJOWNyh+AyQa/LdQb1Yad8n/QcVPIny6XHtu9CIAH6Ne
	5HqyDBDMNSmkgW8uO2XWUJI4Usg7/CTUzSjVfiUFb4bObQbC7oS0K8RhmUu1mSONCMnA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159100-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 159100: trouble: blocked/broken
X-Osstest-Failures:
    linux-linus:build-amd64:<job status>:broken:regression
    linux-linus:build-amd64-pvops:<job status>:broken:regression
    linux-linus:build-amd64-xsm:<job status>:broken:regression
    linux-linus:build-arm64:<job status>:broken:regression
    linux-linus:build-arm64-pvops:<job status>:broken:regression
    linux-linus:build-arm64-xsm:<job status>:broken:regression
    linux-linus:build-armhf:<job status>:broken:regression
    linux-linus:build-armhf-pvops:<job status>:broken:regression
    linux-linus:build-i386:<job status>:broken:regression
    linux-linus:build-i386-pvops:<job status>:broken:regression
    linux-linus:build-i386-xsm:<job status>:broken:regression
    linux-linus:build-i386-pvops:host-install(4):broken:regression
    linux-linus:build-arm64:host-install(4):broken:regression
    linux-linus:build-i386:host-install(4):broken:regression
    linux-linus:build-i386-xsm:host-install(4):broken:regression
    linux-linus:build-arm64-xsm:host-install(4):broken:regression
    linux-linus:build-arm64-pvops:host-install(4):broken:regression
    linux-linus:build-amd64-xsm:host-install(4):broken:regression
    linux-linus:build-amd64:host-install(4):broken:regression
    linux-linus:build-amd64-pvops:host-install(4):broken:regression
    linux-linus:build-armhf-pvops:host-install(4):broken:regression
    linux-linus:build-armhf:host-install(4):broken:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    linux-linus:build-amd64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:build-arm64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:build-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:build-i386-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    linux=825b5991a46ef28a05a4646c8fe1ae5cef7c7828
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 08 Feb 2021 01:08:49 +0000

flight 159100 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159100/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-amd64-pvops               <job status>                 broken
 build-amd64-xsm                 <job status>                 broken
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-armhf                     <job status>                 broken
 build-armhf-pvops               <job status>                 broken
 build-i386                      <job status>                 broken
 build-i386-pvops                <job status>                 broken
 build-i386-xsm                  <job status>                 broken
 build-i386-pvops              4 host-install(4)        broken REGR. vs. 152332
 build-arm64                   4 host-install(4)        broken REGR. vs. 152332
 build-i386                    4 host-install(4)        broken REGR. vs. 152332
 build-i386-xsm                4 host-install(4)        broken REGR. vs. 152332
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 152332
 build-arm64-pvops             4 host-install(4)        broken REGR. vs. 152332
 build-amd64-xsm               4 host-install(4)        broken REGR. vs. 152332
 build-amd64                   4 host-install(4)        broken REGR. vs. 152332
 build-amd64-pvops             4 host-install(4)        broken REGR. vs. 152332
 build-armhf-pvops             4 host-install(4)        broken REGR. vs. 152332
 build-armhf                   4 host-install(4)        broken REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine      1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a

version targeted for testing:
 linux                825b5991a46ef28a05a4646c8fe1ae5cef7c7828
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  191 days
Failing since        152366  2020-08-01 20:49:34 Z  190 days  337 attempts
Testing same since   159100  2021-02-07 11:50:10 Z    0 days    1 attempts

------------------------------------------------------------
4557 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              broken  
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               broken  
 build-amd64                                                  broken  
 build-arm64                                                  broken  
 build-armhf                                                  broken  
 build-i386                                                   broken  
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            broken  
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     blocked 
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     blocked 
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-amd64-pvops broken
broken-job build-amd64-xsm broken
broken-job build-arm64 broken
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-job build-armhf broken
broken-job build-armhf-pvops broken
broken-job build-i386 broken
broken-job build-i386-pvops broken
broken-job build-i386-xsm broken
broken-step build-i386-pvops host-install(4)
broken-step build-arm64 host-install(4)
broken-step build-i386 host-install(4)
broken-step build-i386-xsm host-install(4)
broken-step build-arm64-xsm host-install(4)
broken-step build-arm64-pvops host-install(4)
broken-step build-amd64-xsm host-install(4)
broken-step build-amd64 host-install(4)
broken-step build-amd64-pvops host-install(4)
broken-step build-armhf-pvops host-install(4)
broken-step build-armhf host-install(4)

Not pushing.

(No revision log; it would be 1027957 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 04:49:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 04:49:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82604.152499 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8yUH-0002E8-6F; Mon, 08 Feb 2021 04:49:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82604.152499; Mon, 08 Feb 2021 04:49:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8yUH-0002E0-04; Mon, 08 Feb 2021 04:49:25 +0000
Received: by outflank-mailman (input) for mailman id 82604;
 Mon, 08 Feb 2021 04:49:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8yUF-0002Ds-KH; Mon, 08 Feb 2021 04:49:23 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8yUF-0007Qg-Ck; Mon, 08 Feb 2021 04:49:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8yUF-0007U8-4C; Mon, 08 Feb 2021 04:49:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l8yUF-0001Oe-3j; Mon, 08 Feb 2021 04:49:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=DOJgnTBDcnnMsf8SzPtmRfzjHy9KPJ2YFiJAMvgckCA=; b=anBTH97nXDButiurfLbquz5jEX
	RlECLkW6AxbTIFYazwuF+1gk1lkQZ83p3VE8LsBpO3+dPyoxiUvQCRznA0ExOAq3lvMJa5fRJTc3N
	/cCNspD0y4L/Tqh2GT3TVdN8rRxx+BcDDiNnFvtGCYU1se0dLWMGctO38Sbxxabf08OY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159105-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.11-testing test] 159105: trouble: blocked/broken
X-Osstest-Failures:
    xen-4.11-testing:build-amd64:<job status>:broken:regression
    xen-4.11-testing:build-amd64-prev:<job status>:broken:regression
    xen-4.11-testing:build-amd64-pvops:<job status>:broken:regression
    xen-4.11-testing:build-amd64-xsm:<job status>:broken:regression
    xen-4.11-testing:build-amd64-xtf:<job status>:broken:regression
    xen-4.11-testing:build-arm64:<job status>:broken:regression
    xen-4.11-testing:build-arm64-pvops:<job status>:broken:regression
    xen-4.11-testing:build-arm64-xsm:<job status>:broken:regression
    xen-4.11-testing:build-armhf:<job status>:broken:regression
    xen-4.11-testing:build-armhf-pvops:<job status>:broken:regression
    xen-4.11-testing:build-i386:<job status>:broken:regression
    xen-4.11-testing:build-i386-prev:<job status>:broken:regression
    xen-4.11-testing:build-i386-pvops:<job status>:broken:regression
    xen-4.11-testing:build-i386-xsm:<job status>:broken:regression
    xen-4.11-testing:build-i386:host-install(4):broken:regression
    xen-4.11-testing:build-i386-xsm:host-install(4):broken:regression
    xen-4.11-testing:build-i386-prev:host-install(4):broken:regression
    xen-4.11-testing:build-i386-pvops:host-install(4):broken:regression
    xen-4.11-testing:build-arm64:host-install(4):broken:regression
    xen-4.11-testing:build-arm64-pvops:host-install(4):broken:regression
    xen-4.11-testing:build-arm64-xsm:host-install(4):broken:regression
    xen-4.11-testing:build-amd64-pvops:host-install(4):broken:regression
    xen-4.11-testing:build-armhf-pvops:host-install(4):broken:regression
    xen-4.11-testing:build-amd64-xsm:host-install(4):broken:regression
    xen-4.11-testing:build-amd64-xtf:host-install(4):broken:regression
    xen-4.11-testing:build-amd64:host-install(4):broken:regression
    xen-4.11-testing:build-amd64-prev:host-install(4):broken:regression
    xen-4.11-testing:build-armhf:host-install(4):broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-xtf-amd64-amd64-1:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-xtf-amd64-amd64-2:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-4.11-testing:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-4.11-testing:build-arm64-libvirt:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    xen-4.11-testing:build-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    xen-4.11-testing:build-i386-libvirt:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-livepatch:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-migrupgrade:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-livepatch:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-migrupgrade:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-xtf-amd64-amd64-3:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-xtf-amd64-amd64-4:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-xtf-amd64-amd64-5:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=1c7d984645f9ade9b47e862b5880734ad498fea8
X-Osstest-Versions-That:
    xen=310ab79875cb705cc2c7daddff412b5a4899f8c9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 08 Feb 2021 04:49:23 +0000

flight 159105 xen-4.11-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159105/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-amd64-prev                <job status>                 broken
 build-amd64-pvops               <job status>                 broken
 build-amd64-xsm                 <job status>                 broken
 build-amd64-xtf                 <job status>                 broken
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-armhf                     <job status>                 broken
 build-armhf-pvops               <job status>                 broken
 build-i386                      <job status>                 broken
 build-i386-prev                 <job status>                 broken
 build-i386-pvops                <job status>                 broken
 build-i386-xsm                  <job status>                 broken
 build-i386                    4 host-install(4)        broken REGR. vs. 157566
 build-i386-xsm                4 host-install(4)        broken REGR. vs. 157566
 build-i386-prev               4 host-install(4)        broken REGR. vs. 157566
 build-i386-pvops              4 host-install(4)        broken REGR. vs. 157566
 build-arm64                   4 host-install(4)        broken REGR. vs. 157566
 build-arm64-pvops             4 host-install(4)        broken REGR. vs. 157566
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 157566
 build-amd64-pvops             4 host-install(4)        broken REGR. vs. 157566
 build-armhf-pvops             4 host-install(4)        broken REGR. vs. 157566
 build-amd64-xsm               4 host-install(4)        broken REGR. vs. 157566
 build-amd64-xtf               4 host-install(4)        broken REGR. vs. 157566
 build-amd64                   4 host-install(4)        broken REGR. vs. 157566
 build-amd64-prev              4 host-install(4)        broken REGR. vs. 157566
 build-armhf                   4 host-install(4)        broken REGR. vs. 157566

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-1        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-2        1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-livepatch    1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-migrupgrade  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-livepatch     1 build-check(1)               blocked  n/a
 test-amd64-i386-migrupgrade   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-xtf-amd64-amd64-3        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-4        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-5        1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  1c7d984645f9ade9b47e862b5880734ad498fea8
baseline version:
 xen                  310ab79875cb705cc2c7daddff412b5a4899f8c9

Last test of basis   157566  2020-12-15 14:05:54 Z   54 days
Failing since        159016  2021-02-04 15:05:58 Z    3 days    4 attempts
Testing same since   159042  2021-02-05 12:13:30 Z    2 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              broken  
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               broken  
 build-amd64-xtf                                              broken  
 build-amd64                                                  broken  
 build-arm64                                                  broken  
 build-armhf                                                  broken  
 build-i386                                                   broken  
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-prev                                             broken  
 build-i386-prev                                              broken  
 build-amd64-pvops                                            broken  
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-xtf-amd64-amd64-1                                       blocked 
 test-xtf-amd64-amd64-2                                       blocked 
 test-xtf-amd64-amd64-3                                       blocked 
 test-xtf-amd64-amd64-4                                       blocked 
 test-xtf-amd64-amd64-5                                       blocked 
 test-amd64-amd64-xl                                          blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-livepatch                                   blocked 
 test-amd64-i386-livepatch                                    blocked 
 test-amd64-amd64-migrupgrade                                 blocked 
 test-amd64-i386-migrupgrade                                  blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-amd64-prev broken
broken-job build-amd64-pvops broken
broken-job build-amd64-xsm broken
broken-job build-amd64-xtf broken
broken-job build-arm64 broken
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-job build-armhf broken
broken-job build-armhf-pvops broken
broken-job build-i386 broken
broken-job build-i386-prev broken
broken-job build-i386-pvops broken
broken-job build-i386-xsm broken
broken-step build-i386 host-install(4)
broken-step build-i386-xsm host-install(4)
broken-step build-i386-prev host-install(4)
broken-step build-i386-pvops host-install(4)
broken-step build-arm64 host-install(4)
broken-step build-arm64-pvops host-install(4)
broken-step build-arm64-xsm host-install(4)
broken-step build-amd64-pvops host-install(4)
broken-step build-armhf-pvops host-install(4)
broken-step build-amd64-xsm host-install(4)
broken-step build-amd64-xtf host-install(4)
broken-step build-amd64 host-install(4)
broken-step build-amd64-prev host-install(4)
broken-step build-armhf host-install(4)

Not pushing.

------------------------------------------------------------
commit 1c7d984645f9ade9b47e862b5880734ad498fea8
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Feb 5 08:54:03 2021 +0100

    x86/msr: fix handling of MSR_IA32_PERF_{STATUS/CTL} (again, part 2)
    
    X86_VENDOR_* aren't bit masks in the older trees.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit f9090d990e201a5ca045976b8ddaab9fa6ee69dd
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Feb 4 15:41:12 2021 +0100

    x86/msr: fix handling of MSR_IA32_PERF_{STATUS/CTL} (again)
    
    X86_VENDOR_* aren't bit masks in the older trees.
    
    Reported-by: James Dingwall <james@dingwall.me.uk>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 06:19:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 06:19:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82614.152520 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8zt1-0002Np-0p; Mon, 08 Feb 2021 06:19:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82614.152520; Mon, 08 Feb 2021 06:19:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l8zt0-0002Ni-SI; Mon, 08 Feb 2021 06:19:02 +0000
Received: by outflank-mailman (input) for mailman id 82614;
 Mon, 08 Feb 2021 06:19:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8zsz-0002Mp-9Z; Mon, 08 Feb 2021 06:19:01 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8zsy-0000pu-UX; Mon, 08 Feb 2021 06:19:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l8zsy-0000yY-Kn; Mon, 08 Feb 2021 06:19:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l8zsy-0005C9-KJ; Mon, 08 Feb 2021 06:19:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=dlybTk4Gm8R+pUaYymUZCtciEQ2+O/9HmlhOWyX+6fk=; b=dBSmsInBhb1hXS1owiYaiFIbJ2
	h8poegskvG5fIoa64x+itDwcww6s+odCYW/dzjMVDNkna2aiwkmDQJz8uKB1t7RV2fKBq9Wcvd2zk
	LJhXuDBDWqqyI5hB940r8WEEEv2PndvKc0vErDnDHwcr7CFPYH/4YrgI2PNc8kZ9YnEw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159107-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 159107: trouble: blocked/broken
X-Osstest-Failures:
    qemu-mainline:build-amd64:<job status>:broken:regression
    qemu-mainline:build-amd64-pvops:<job status>:broken:regression
    qemu-mainline:build-amd64-xsm:<job status>:broken:regression
    qemu-mainline:build-arm64:<job status>:broken:regression
    qemu-mainline:build-arm64-pvops:<job status>:broken:regression
    qemu-mainline:build-arm64-xsm:<job status>:broken:regression
    qemu-mainline:build-armhf:<job status>:broken:regression
    qemu-mainline:build-armhf-pvops:<job status>:broken:regression
    qemu-mainline:build-i386:<job status>:broken:regression
    qemu-mainline:build-i386-pvops:<job status>:broken:regression
    qemu-mainline:build-i386-xsm:<job status>:broken:regression
    qemu-mainline:build-amd64-pvops:host-install(4):broken:regression
    qemu-mainline:build-armhf-pvops:host-install(4):broken:regression
    qemu-mainline:build-amd64:host-install(4):broken:regression
    qemu-mainline:build-amd64-xsm:host-install(4):broken:regression
    qemu-mainline:build-i386-pvops:host-install(4):broken:regression
    qemu-mainline:build-i386-xsm:host-install(4):broken:regression
    qemu-mainline:build-arm64-pvops:host-install(4):broken:regression
    qemu-mainline:build-i386:host-install(4):broken:regression
    qemu-mainline:build-arm64-xsm:host-install(4):broken:regression
    qemu-mainline:build-arm64:host-install(4):broken:regression
    qemu-mainline:build-armhf:host-install(4):broken:regression
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=5b19cb63d9dfda41b412373b8c9fe14641bcab60
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 08 Feb 2021 06:19:00 +0000

flight 159107 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159107/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-amd64-pvops               <job status>                 broken
 build-amd64-xsm                 <job status>                 broken
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-armhf                     <job status>                 broken
 build-armhf-pvops               <job status>                 broken
 build-i386                      <job status>                 broken
 build-i386-pvops                <job status>                 broken
 build-i386-xsm                  <job status>                 broken
 build-amd64-pvops             4 host-install(4)        broken REGR. vs. 152631
 build-armhf-pvops             4 host-install(4)        broken REGR. vs. 152631
 build-amd64                   4 host-install(4)        broken REGR. vs. 152631
 build-amd64-xsm               4 host-install(4)        broken REGR. vs. 152631
 build-i386-pvops              4 host-install(4)        broken REGR. vs. 152631
 build-i386-xsm                4 host-install(4)        broken REGR. vs. 152631
 build-arm64-pvops             4 host-install(4)        broken REGR. vs. 152631
 build-i386                    4 host-install(4)        broken REGR. vs. 152631
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 152631
 build-arm64                   4 host-install(4)        broken REGR. vs. 152631
 build-armhf                   4 host-install(4)        broken REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 qemuu                5b19cb63d9dfda41b412373b8c9fe14641bcab60
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  171 days
Failing since        152659  2020-08-21 14:07:39 Z  170 days  341 attempts
Testing same since   159107  2021-02-07 19:51:11 Z    0 days    1 attempts

------------------------------------------------------------
375 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              broken  
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               broken  
 build-amd64                                                  broken  
 build-arm64                                                  broken  
 build-armhf                                                  broken  
 build-i386                                                   broken  
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            broken  
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-amd64-pvops broken
broken-job build-amd64-xsm broken
broken-job build-arm64 broken
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-job build-armhf broken
broken-job build-armhf-pvops broken
broken-job build-i386 broken
broken-job build-i386-pvops broken
broken-job build-i386-xsm broken
broken-step build-amd64-pvops host-install(4)
broken-step build-armhf-pvops host-install(4)
broken-step build-amd64 host-install(4)
broken-step build-amd64-xsm host-install(4)
broken-step build-i386-pvops host-install(4)
broken-step build-i386-xsm host-install(4)
broken-step build-arm64-pvops host-install(4)
broken-step build-i386 host-install(4)
broken-step build-arm64-xsm host-install(4)
broken-step build-arm64 host-install(4)
broken-step build-armhf host-install(4)

Not pushing.

(No revision log; it would be 105392 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 06:39:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 06:39:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82618.152535 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l90D0-0004N2-LZ; Mon, 08 Feb 2021 06:39:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82618.152535; Mon, 08 Feb 2021 06:39:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l90D0-0004Mv-IJ; Mon, 08 Feb 2021 06:39:42 +0000
Received: by outflank-mailman (input) for mailman id 82618;
 Mon, 08 Feb 2021 06:39:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l90Cz-0004Mn-Ax; Mon, 08 Feb 2021 06:39:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l90Cz-00019w-34; Mon, 08 Feb 2021 06:39:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l90Cy-0001Rn-N5; Mon, 08 Feb 2021 06:39:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l90Cy-0001nw-Mb; Mon, 08 Feb 2021 06:39:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=mxmjn4AXhg3fi9QYtAIOod/Rb85wL4qhFqq3DuNvczo=; b=qf9bVxe54zu6pEBKSACrnjNrA4
	kCzwZmhZRJwHbVZ0Q/NkuEaA1DV5KRDUWrJXdreoEF2LkrtLy1b7xXW0LYK2l7/GJwkL5k9Y1ST0T
	sE12vLjNbBivy6K5qcnS+dYLK6zBzYiVsmTjVMqwpSRcVKm4PH79dS37nP/Nb3JKKM38=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159109-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 159109: trouble: blocked/broken
X-Osstest-Failures:
    xen-unstable:build-amd64:<job status>:broken:regression
    xen-unstable:build-amd64-prev:<job status>:broken:regression
    xen-unstable:build-amd64-pvops:<job status>:broken:regression
    xen-unstable:build-amd64-xsm:<job status>:broken:regression
    xen-unstable:build-amd64-xtf:<job status>:broken:regression
    xen-unstable:build-arm64:<job status>:broken:regression
    xen-unstable:build-arm64-pvops:<job status>:broken:regression
    xen-unstable:build-arm64-xsm:<job status>:broken:regression
    xen-unstable:build-armhf:<job status>:broken:regression
    xen-unstable:build-armhf-pvops:<job status>:broken:regression
    xen-unstable:build-i386:<job status>:broken:regression
    xen-unstable:build-i386-prev:<job status>:broken:regression
    xen-unstable:build-i386-pvops:<job status>:broken:regression
    xen-unstable:build-i386-xsm:<job status>:broken:regression
    xen-unstable:build-amd64-xtf:host-install(4):broken:regression
    xen-unstable:build-amd64:host-install(4):broken:regression
    xen-unstable:build-amd64-pvops:host-install(4):broken:regression
    xen-unstable:build-amd64-prev:host-install(4):broken:regression
    xen-unstable:build-i386:host-install(4):broken:regression
    xen-unstable:build-i386-xsm:host-install(4):broken:regression
    xen-unstable:build-i386-prev:host-install(4):broken:regression
    xen-unstable:build-i386-pvops:host-install(4):broken:regression
    xen-unstable:build-arm64-pvops:host-install(4):broken:regression
    xen-unstable:build-arm64:host-install(4):broken:regression
    xen-unstable:build-arm64-xsm:host-install(4):broken:regression
    xen-unstable:build-amd64-xsm:host-install(4):broken:regression
    xen-unstable:build-armhf-pvops:host-install(4):broken:regression
    xen-unstable:build-armhf:host-install(4):broken:regression
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:build-arm64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    xen-unstable:build-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:build-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-1:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-2:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-3:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-4:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-5:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=ca82d3fecc93745ee17850a609ac7772bd7c8bf7
X-Osstest-Versions-That:
    xen=ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 08 Feb 2021 06:39:40 +0000

flight 159109 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159109/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-amd64-prev                <job status>                 broken
 build-amd64-pvops               <job status>                 broken
 build-amd64-xsm                 <job status>                 broken
 build-amd64-xtf                 <job status>                 broken
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-armhf                     <job status>                 broken
 build-armhf-pvops               <job status>                 broken
 build-i386                      <job status>                 broken
 build-i386-prev                 <job status>                 broken
 build-i386-pvops                <job status>                 broken
 build-i386-xsm                  <job status>                 broken
 build-amd64-xtf               4 host-install(4)        broken REGR. vs. 159036
 build-amd64                   4 host-install(4)        broken REGR. vs. 159036
 build-amd64-pvops             4 host-install(4)        broken REGR. vs. 159036
 build-amd64-prev              4 host-install(4)        broken REGR. vs. 159036
 build-i386                    4 host-install(4)        broken REGR. vs. 159036
 build-i386-xsm                4 host-install(4)        broken REGR. vs. 159036
 build-i386-prev               4 host-install(4)        broken REGR. vs. 159036
 build-i386-pvops              4 host-install(4)        broken REGR. vs. 159036
 build-arm64-pvops             4 host-install(4)        broken REGR. vs. 159036
 build-arm64                   4 host-install(4)        broken REGR. vs. 159036
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 159036
 build-amd64-xsm               4 host-install(4)        broken REGR. vs. 159036
 build-armhf-pvops             4 host-install(4)        broken REGR. vs. 159036
 build-armhf                   4 host-install(4)        broken REGR. vs. 159036

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine      1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-livepatch    1 build-check(1)               blocked  n/a
 test-amd64-amd64-migrupgrade  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-livepatch     1 build-check(1)               blocked  n/a
 test-amd64-i386-migrupgrade   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-1        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-2        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-3        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-4        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-5        1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  ca82d3fecc93745ee17850a609ac7772bd7c8bf7
baseline version:
 xen                  ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7

Last test of basis   159036  2021-02-05 08:46:54 Z    2 days
Testing same since   159077  2021-02-06 11:11:30 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Michał Leszczyński <michal.leszczynski@cert.pl>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Tamas K Lengyel <tamas.lengyel@intel.com>

jobs:
 build-amd64-xsm                                              broken  
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               broken  
 build-amd64-xtf                                              broken  
 build-amd64                                                  broken  
 build-arm64                                                  broken  
 build-armhf                                                  broken  
 build-i386                                                   broken  
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-prev                                             broken  
 build-i386-prev                                              broken  
 build-amd64-pvops                                            broken  
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-xtf-amd64-amd64-1                                       blocked 
 test-xtf-amd64-amd64-2                                       blocked 
 test-xtf-amd64-amd64-3                                       blocked 
 test-xtf-amd64-amd64-4                                       blocked 
 test-xtf-amd64-amd64-5                                       blocked 
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     blocked 
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     blocked 
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-livepatch                                   blocked 
 test-amd64-i386-livepatch                                    blocked 
 test-amd64-amd64-migrupgrade                                 blocked 
 test-amd64-i386-migrupgrade                                  blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-amd64-prev broken
broken-job build-amd64-pvops broken
broken-job build-amd64-xsm broken
broken-job build-amd64-xtf broken
broken-job build-arm64 broken
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-job build-armhf broken
broken-job build-armhf-pvops broken
broken-job build-i386 broken
broken-job build-i386-prev broken
broken-job build-i386-pvops broken
broken-job build-i386-xsm broken
broken-step build-amd64-xtf host-install(4)
broken-step build-amd64 host-install(4)
broken-step build-amd64-pvops host-install(4)
broken-step build-amd64-prev host-install(4)
broken-step build-i386 host-install(4)
broken-step build-i386-xsm host-install(4)
broken-step build-i386-prev host-install(4)
broken-step build-i386-pvops host-install(4)
broken-step build-arm64-pvops host-install(4)
broken-step build-arm64 host-install(4)
broken-step build-arm64-xsm host-install(4)
broken-step build-amd64-xsm host-install(4)
broken-step build-armhf-pvops host-install(4)
broken-step build-armhf host-install(4)

Not pushing.

------------------------------------------------------------
commit ca82d3fecc93745ee17850a609ac7772bd7c8bf7
Author: Tamas K Lengyel <tamas.lengyel@intel.com>
Date:   Sat Jan 30 08:36:37 2021 -0500

    x86/vm_event: add response flag to reset vmtrace buffer
    
    Allow resetting the vmtrace buffer in response to a vm_event. This can be used
    to optimize a use-case where detecting a looped vmtrace buffer is important.
    
    Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit c5866ab93167a73a8d4d85b844edf4aa364a1aaa
Author: Tamas K Lengyel <tamas.lengyel@intel.com>
Date:   Mon Jan 18 12:46:37 2021 -0500

    x86/vm_event: Carry the vmtrace buffer position in vm_event
    
    Add a vmtrace_pos field to the x86 regs in vm_event, initialized to
    ~0 if vmtrace is not in use.
    
    Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit 9744611991a042e9aea348c5721b80cc2101c7e5
Author: Tamas K Lengyel <tamas.lengyel@intel.com>
Date:   Fri Sep 11 20:14:00 2020 +0200

    xen/vmtrace: support for VM forks
    
    Implement the vmtrace_reset_pt() function and properly set the IPT
    state for VM forks.
    
    Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit 88dd8389dd2c9442729e9d96a4febaf38cd822e3
Author: Michał Leszczyński <michal.leszczynski@cert.pl>
Date:   Tue Jun 16 15:35:07 2020 +0200

    tools/misc: Add xen-vmtrace tool
    
    Add a demonstration tool that uses the xc_vmtrace_* calls to manage
    external IPT monitoring for a DomU.
    
    Signed-off-by: Michał Leszczyński <michal.leszczynski@cert.pl>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit 53aaa792fdebcf131983d45ee8e3d09bd0740c71
Author: Michał Leszczyński <michal.leszczynski@cert.pl>
Date:   Tue Jun 16 15:33:25 2020 +0200

    tools/libxc: Add xc_vmtrace_* functions
    
    Add functions in libxc that use the new XEN_DOMCTL_vmtrace interface.
    
    Signed-off-by: Michał Leszczyński <michal.leszczynski@cert.pl>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>
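
    [Editor's illustration] A minimal sketch of how the new libxc surface
    could be driven from dom0.  The xc_vmtrace_enable(),
    xc_vmtrace_output_position() and xc_vmtrace_disable() signatures are
    assumed from this series; building and running requires a Xen dom0
    with libxenctrl installed.

    ```c
    /* Hedged sketch: enable IPT tracing on vCPU 0 of a domain, read the
     * current trace write position, then disable tracing again.
     * Assumes the xc_vmtrace_* wrappers added by this commit; only
     * usable on a Xen dom0 (the calls issue XEN_DOMCTL_vmtrace_op). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <xenctrl.h>

    int main(int argc, char **argv)
    {
        xc_interface *xch = xc_interface_open(NULL, NULL, 0);
        uint32_t domid = argc > 1 ? atoi(argv[1]) : 1;
        uint64_t pos = 0;

        if ( !xch )
        {
            perror("xc_interface_open");
            return 1;
        }

        if ( xc_vmtrace_enable(xch, domid, /* vcpu */ 0) )
            perror("xc_vmtrace_enable");
        else if ( xc_vmtrace_output_position(xch, domid, 0, &pos) == 0 )
            printf("vcpu0 trace position: %#lx\n", (unsigned long)pos);

        xc_vmtrace_disable(xch, domid, 0);
        xc_interface_close(xch);
        return 0;
    }
    ```

    The demonstration tool in the xen-vmtrace commit above follows
    essentially this enable/poll/disable pattern.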

commit 1cee4bd97c88633c4a39f56f6722be0727c9ea8f
Author: Michał Leszczyński <michal.leszczynski@cert.pl>
Date:   Sun Jun 28 23:48:09 2020 +0200

    xen/domctl: Add XEN_DOMCTL_vmtrace_op
    
    Implement an interface to configure and control tracing operations.  Reuse the
    existing SETDEBUGGING flask vector rather than inventing a new one.
    
    Userspace using this interface is going to need platform-specific
    knowledge anyway to interpret the contents of the trace buffer.  While
    some operations (e.g. enable/disable) can reasonably be generic, others
    cannot.  Provide an explicitly platform-specific pair of get/set
    operations to reduce API churn as new options get added/enabled.
    
    For the VMX specific Processor Trace implementation, tolerate reading and
    modifying a safe subset of bits in CTL, STATUS and OUTPUT_MASK.  This permits
    userspace to control the content which gets logged, but prevents modification
    of details such as the position/size of the output buffer.
    
    Signed-off-by: Michał Leszczyński <michal.leszczynski@cert.pl>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit 71cb03f03ce309e8cc1dacd18aa383ccea6af231
Author: Michał Leszczyński <michal.leszczynski@cert.pl>
Date:   Tue Jun 16 15:20:18 2020 +0200

    x86/vmx: Add Intel Processor Trace support
    
    Add CPUID/MSR enumeration details for Processor Trace.  For now, we will only
    support its use inside VMX operation.  Fill in the vmtrace_available boolean
    to activate the newly introduced common infrastructure for allocating trace
    buffers.
    
    For now, Processor Trace is going to be operated in Single Output mode behind
    the guest's back.  Add the MSRs to struct vcpu_msrs, and set up the buffer
    limit in vmx_init_ipt() as it is fixed for the lifetime of the domain.
    
    Context switch most of the MSRs in and out of vCPU context, but the main
    control register needs to reside in the MSR load/save lists.  Explicitly pull
    the msrs pointer out into a local variable, because the optimiser cannot keep
    it live across the memory clobbers in the MSR accesses.
    
    Signed-off-by: Michał Leszczyński <michal.leszczynski@cert.pl>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit b72eab263592a3d76aa826675e5d62606d83cecd
Author: Michał Leszczyński <michal.leszczynski@cert.pl>
Date:   Mon Jun 29 00:05:51 2020 +0200

    xen/memory: Add a vmtrace_buf resource type
    
    Allow mapping the processor trace buffer using acquire_resource().
    
    Signed-off-by: Michał Leszczyński <michal.leszczynski@cert.pl>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit 45ba9a7d7688a6a08200e37a8caa2bc99bb4d267
Author: Michał Leszczyński <michal.leszczynski@cert.pl>
Date:   Fri Jun 19 00:31:24 2020 +0200

    tools/[lib]xl: Add vmtrace_buf_size parameter
    
    Allow specifying the size of the per-vCPU trace buffer upon
    domain creation. It is zero by default (meaning: not enabled).
    
    Signed-off-by: Michał Leszczyński <michal.leszczynski@cert.pl>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit 217dd79ee29286b85074d22cc75ee064206fb2af
Author: Michał Leszczyński <michal.leszczynski@cert.pl>
Date:   Fri Jul 3 01:16:10 2020 +0200

    xen/domain: Add vmtrace_size domain creation parameter
    
    To use vmtrace, buffers of a suitable size need allocating, and different
    tasks will want different sizes.
    
    Add a domain creation parameter, and audit it appropriately in the
    {arch_,}sanitise_domain_config() functions.
    
    For now, the x86 specific auditing is tuned to Processor Trace running in
    Single Output mode, which requires a single contiguous range of memory.
    
    The size is given an arbitrary limit of 64M, which is expected to be enough
    for anticipated use cases, but not large enough to get into long-running-hypercall
    problems.
    
    Signed-off-by: Michał Leszczyński <michal.leszczynski@cert.pl>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>
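
The kind of auditing described above can be sketched as a small predicate. This is an illustration of the shape of the check, not Xen's exact code: the assumption that a nonzero size must be a power of two between 4k and the 64M cap is inferred from the "single contiguous range" requirement mentioned in the commit message:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define KB(x) ((uint64_t)(x) << 10)
#define MB(x) ((uint64_t)(x) << 20)

/* Illustrative audit of a vmtrace buffer size: zero means "tracing not
 * enabled"; otherwise require a power-of-two size between 4k and the
 * arbitrary 64M cap, so a single contiguous output range suffices. */
static bool vmtrace_size_ok(uint64_t size)
{
    if (size == 0)
        return true;                 /* feature disabled */
    if (size < KB(4) || size > MB(64))
        return false;                /* outside the sane range */
    return (size & (size - 1)) == 0; /* single contiguous power-of-two range */
}
```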

commit 34cc2e5f8dba6906da82fe8d76e839f9ab20f153
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jul 27 17:24:11 2020 +0100

    xen/memory: Fix mapping grant tables with XENMEM_acquire_resource
    
    A guest's default number of grant frames is 64, and XENMEM_acquire_resource
    will reject an attempt to map more than 32 frames.  This limit is caused by
    the size of mfn_list[] on the stack.
    
    Fix mapping of arbitrary size requests by looping over batches of 32 in
    acquire_resource(), and using hypercall continuations when necessary.
    
    To start with, break _acquire_resource() out of acquire_resource() to cope
    with type-specific dispatching, and update the return semantics to indicate
    the number of mfns returned.  Update gnttab_acquire_resource() and x86's
    arch_acquire_resource() to match these new semantics.
    
    Have do_memory_op() pass start_extent into acquire_resource() so it can pick
    up where it left off after a continuation, and loop over batches of 32 until
    all the work is done, or a continuation needs to occur.
    
    compat_memory_op() is a bit more complicated, because it also has to marshal
    frame_list in the XLAT buffer.  Have it account for continuation information
    itself and hide details from the upper layer, so it can marshal the buffer in
    chunks if necessary.
    
    With these fixes in place, it is now possible to map the whole grant table for
    a guest.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>
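
The batching-with-continuation pattern this commit introduces can be sketched generically. The names and the "budget" parameter below are made up for the sketch (the real code dispatches to per-type helpers and uses Xen's hypercall continuation machinery); the point is that progress is reported back so a re-invocation resumes where the last one stopped:

```c
#include <assert.h>

#define BATCH 32 /* matches the on-stack mfn_list[] size discussed above */

/* Process frames [start, nr_frames) in chunks of at most BATCH.
 * 'budget' stands in for "work allowed before a continuation": when a
 * chunk no longer fits, we stop and return how far we got, so the
 * caller can re-invoke with that value as the new start_extent. */
static unsigned int process_frames(unsigned int start, unsigned int nr_frames,
                                   unsigned int budget)
{
    unsigned int done = start;

    while (done < nr_frames) {
        unsigned int todo = nr_frames - done;

        if (todo > BATCH)
            todo = BATCH;
        if (todo > budget)
            break;      /* preemption point: caller continues from 'done' */

        /* ... acquire/map 'todo' frames via a BATCH-sized buffer ... */
        done += todo;
        budget -= todo;
    }

    return done; /* the start_extent for the continuation */
}
```

With this shape, a 64-frame grant table is handled as two batches of 32, and an interrupted request picks up cleanly at the reported offset.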

commit f4318db940c39cc656128fcf72df3e79d2e55bc1
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Feb 5 14:09:42 2021 +0100

    x86/EFI: work around GNU ld 2.36 issue
    
    Our linker capability check fails with the recent binutils release's ld:
    
    .../check.o:(.debug_aranges+0x6): relocation truncated to fit: R_X86_64_32 against `.debug_info'
    .../check.o:(.debug_info+0x6): relocation truncated to fit: R_X86_64_32 against `.debug_abbrev'
    .../check.o:(.debug_info+0xc): relocation truncated to fit: R_X86_64_32 against `.debug_str'+76
    .../check.o:(.debug_info+0x11): relocation truncated to fit: R_X86_64_32 against `.debug_str'+d
    .../check.o:(.debug_info+0x15): relocation truncated to fit: R_X86_64_32 against `.debug_str'+2b
    .../check.o:(.debug_info+0x29): relocation truncated to fit: R_X86_64_32 against `.debug_line'
    .../check.o:(.debug_info+0x30): relocation truncated to fit: R_X86_64_32 against `.debug_str'+19
    .../check.o:(.debug_info+0x37): relocation truncated to fit: R_X86_64_32 against `.debug_str'+71
    .../check.o:(.debug_info+0x3e): relocation truncated to fit: R_X86_64_32 against `.debug_str'
    .../check.o:(.debug_info+0x45): relocation truncated to fit: R_X86_64_32 against `.debug_str'+5e
    .../check.o:(.debug_info+0x4c): additional relocation overflows omitted from the output
    
    Tell the linker to strip debug info as a workaround. Debug info has been
    getting stripped already anyway when linking the actual xen.efi.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit d7acc47c8201611fda98ce5bd465626478ca4759
Author: Roger Pau Monne <roger.pau@citrix.com>
Date:   Fri Feb 5 13:19:38 2021 +0100

    tools/tests: fix resource test build on FreeBSD
    
    error.h is not a standard header, and none of the functions declared
    there are actually used by the code. This fixes the build on FreeBSD,
    which doesn't have error.h.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 08:11:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 08:11:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82635.152556 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l91d8-0005Le-Sb; Mon, 08 Feb 2021 08:10:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82635.152556; Mon, 08 Feb 2021 08:10:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l91d8-0005LX-PT; Mon, 08 Feb 2021 08:10:46 +0000
Received: by outflank-mailman (input) for mailman id 82635;
 Mon, 08 Feb 2021 08:10:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ZIyB=HK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l91d6-0005LS-VE
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 08:10:44 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 697bc72d-9360-4730-9d6e-4fb6418fe8d3;
 Mon, 08 Feb 2021 08:10:43 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id CFD21AC43;
 Mon,  8 Feb 2021 08:10:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 697bc72d-9360-4730-9d6e-4fb6418fe8d3
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612771842; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=9syUDH9NP6Li6L1TbJwrwj5Fu3nAKSVDz0SCErkMfXM=;
	b=kFnkqjS7EX6v4T6E4TmrDcQhWjVgvIBeVMOGt8aBl8gxI5p7JW3W41HYItPKCP3N+pRj+D
	xF/QQGDmd+3WFx6ygxhlGi9nyNFFOMQXoUc6ojipTgO3gKhMvS2R5IsVpYaY5Tm2yQWfZm
	g9uTiBKQPzJbWlycrRgP+GQchsHTy1k=
Subject: Re: [PATCH] x86/build: correctly record dependencies of asm-offsets.s
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Ian Jackson <iwj@xenproject.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <b3b57f6b-3ed9-18f6-2a87-6af3304c6645@suse.com>
 <c2b857fa-a606-1795-3aaf-a69572c43951@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <304c2f74-40d5-5927-2401-f5d451ff6788@suse.com>
Date: Mon, 8 Feb 2021 09:10:41 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <c2b857fa-a606-1795-3aaf-a69572c43951@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 06.02.2021 10:09, Julien Grall wrote:
> On 01/02/2021 14:56, Jan Beulich wrote:
>> Going through an intermediate *.new file requires telling the compiler
>> what the real target is, so that the inclusion of the resulting .*.d
>> file will actually be useful.
>>
>> Fixes: 7d2d7a43d014 ("x86/build: limit rebuilding of asm-offsets.h")
>> Reported-by: Julien Grall <julien@xen.org>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> Already on the original patch I did suggest that perhaps Arm would want
>> to follow suit. So again - perhaps the rules should be unified by moving
>> to common code?
> 
> Sorry, I missed the original patch. The recent changes look beneficial
> to Arm as well.

Okay, I can make a patch then (for 4.16) to make this common
as far as possible.

>> --- a/xen/arch/x86/Makefile
>> +++ b/xen/arch/x86/Makefile
>> @@ -241,7 +241,7 @@ efi/buildid.o efi/relocs-dummy.o: $(BASE
>>   efi/buildid.o efi/relocs-dummy.o: ;
>>   
>>   asm-offsets.s: $(TARGET_SUBARCH)/asm-offsets.c $(BASEDIR)/include/asm-x86/asm-macros.h
> 
> On Arm, only asm-offsets.c is a pre-requisite. May I ask why you need 
> the second on x86?

Because this header gets used by the file and hence needs
generating up front? But that's nothing Arm needs to worry
about - I intend to allow extra per-arch dependencies, no
matter that common things are to become common.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 08:35:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 08:35:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82637.152568 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9210-0007IK-Og; Mon, 08 Feb 2021 08:35:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82637.152568; Mon, 08 Feb 2021 08:35:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9210-0007ID-Lj; Mon, 08 Feb 2021 08:35:26 +0000
Received: by outflank-mailman (input) for mailman id 82637;
 Mon, 08 Feb 2021 08:35:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l920z-0007Hp-AV; Mon, 08 Feb 2021 08:35:25 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l920z-0003X9-59; Mon, 08 Feb 2021 08:35:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l920y-00040J-Qr; Mon, 08 Feb 2021 08:35:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l920y-0006eG-QP; Mon, 08 Feb 2021 08:35:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=0HlGQY86TvNHeDXlFy88XLGGSHeAyTXi/JuUvefe2MM=; b=Vz3VZKWivl3rC/Qh3qCkkdYuLm
	/fZfnG5HQ78fw6qD58x75gq60wTkWXcP7jkEgDMK97Qy1d8/UN5WCRqGWRWd24MolDFWZMHwWw2Kw
	Z9Wa5Z2JLvmVEe5VJVUuudMLVFg5lKW01WoPOJTm7BHgyvtS8h6B9boBwSEcD2uyI9Q8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159110-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 159110: trouble: blocked/broken
X-Osstest-Failures:
    ovmf:build-amd64:<job status>:broken:regression
    ovmf:build-amd64-pvops:<job status>:broken:regression
    ovmf:build-amd64-xsm:<job status>:broken:regression
    ovmf:build-i386:<job status>:broken:regression
    ovmf:build-i386-pvops:<job status>:broken:regression
    ovmf:build-i386-xsm:<job status>:broken:regression
    ovmf:build-i386:host-install(4):broken:regression
    ovmf:build-i386-xsm:host-install(4):broken:regression
    ovmf:build-i386-pvops:host-install(4):broken:regression
    ovmf:build-amd64:host-install(4):broken:regression
    ovmf:build-amd64-xsm:host-install(4):broken:regression
    ovmf:build-amd64-pvops:host-install(4):broken:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=43a113385e370530eb52cf2e55b3019d8d4f6558
X-Osstest-Versions-That:
    ovmf=0d96664df322d50e0ac54130e129c0bf4f2b72df
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 08 Feb 2021 08:35:24 +0000

flight 159110 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159110/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-amd64-pvops               <job status>                 broken
 build-amd64-xsm                 <job status>                 broken
 build-i386                      <job status>                 broken
 build-i386-pvops                <job status>                 broken
 build-i386-xsm                  <job status>                 broken
 build-i386                    4 host-install(4)        broken REGR. vs. 159040
 build-i386-xsm                4 host-install(4)        broken REGR. vs. 159040
 build-i386-pvops              4 host-install(4)        broken REGR. vs. 159040
 build-amd64                   4 host-install(4)        broken REGR. vs. 159040
 build-amd64-xsm               4 host-install(4)        broken REGR. vs. 159040
 build-amd64-pvops             4 host-install(4)        broken REGR. vs. 159040

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 43a113385e370530eb52cf2e55b3019d8d4f6558
baseline version:
 ovmf                 0d96664df322d50e0ac54130e129c0bf4f2b72df

Last test of basis   159040  2021-02-05 11:11:01 Z    2 days
Testing same since   159088  2021-02-07 01:54:46 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bob Feng <bob.c.feng@intel.com>
  Liming Gao <gaoliming@byosoft.com.cn>

jobs:
 build-amd64-xsm                                              broken  
 build-i386-xsm                                               broken  
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-amd64-pvops broken
broken-job build-amd64-xsm broken
broken-job build-i386 broken
broken-job build-i386-pvops broken
broken-job build-i386-xsm broken
broken-step build-i386 host-install(4)
broken-step build-i386-xsm host-install(4)
broken-step build-i386-pvops host-install(4)
broken-step build-amd64 host-install(4)
broken-step build-amd64-xsm host-install(4)
broken-step build-amd64-pvops host-install(4)

Not pushing.

------------------------------------------------------------
commit 43a113385e370530eb52cf2e55b3019d8d4f6558
Author: Bob Feng <bob.c.feng@intel.com>
Date:   Mon Feb 1 18:28:58 2021 +0800

    BaseTools: fix the split output files root dir
    
    If the output file path is a relative path, the split
    tool will create the output file under the input file path.
    But the expected behavior in this case is that the output file
    should be relative to the current directory. This patch
    fixes that bug.
    
    If neither the output file path nor the output prefix is specified,
    the output file should be under the input file path.
    
    Signed-off-by: Bob Feng <bob.c.feng@intel.com>
    Cc: Liming Gao <gaoliming@byosoft.com.cn>
    Cc: Yuwei Chen <yuwei.chen@intel.com>
    Acked-by: Liming Gao <gaoliming@byosoft.com.cn>
    Reviewed-by: Yuwei Chen <yuwei.chen@intel.com>


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 09:11:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 09:11:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82653.152588 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l92Zt-0002WQ-J5; Mon, 08 Feb 2021 09:11:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82653.152588; Mon, 08 Feb 2021 09:11:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l92Zt-0002WJ-G4; Mon, 08 Feb 2021 09:11:29 +0000
Received: by outflank-mailman (input) for mailman id 82653;
 Mon, 08 Feb 2021 09:11:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1l92Zs-0002WE-Gw
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 09:11:28 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l92Zm-00046z-Fi; Mon, 08 Feb 2021 09:11:22 +0000
Received: from [54.239.6.177] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l92Zm-0003Oc-7O; Mon, 08 Feb 2021 09:11:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=Z2FuntU0FCSPGAWD+ofKK+yUmsoCyrZClvV7APxBvaU=; b=S3YFzpNjNeKQK9a3thbvJwcCVT
	/0VqloskC35+uEOgRFBZD+9JV3WrwxjsyPgNFqKOnfjWAtoIi0DKRJlxFQvbTwbupeSSukJX8x1dj
	n1sYe+iGxzjWMHsDVsqLDKXVJUELdnB3CcU8BTrhm0NTQvuJ5S5mzzk9OVF2XbxfQZ1w=;
Subject: Re: [PATCH 0/7] xen/events: bug fixes and some diagnostic aids
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 linux-block@vger.kernel.org, netdev@vger.kernel.org,
 linux-scsi@vger.kernel.org
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, stable@vger.kernel.org,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Jens Axboe <axboe@kernel.dk>, Wei Liu <wei.liu@kernel.org>,
 Paul Durrant <paul@xen.org>, "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>
References: <20210206104932.29064-1-jgross@suse.com>
 <bd63694e-ac0c-7954-ec00-edad05f8da1c@xen.org>
 <eeb62129-d9fc-2155-0e0f-aff1fbb33fbc@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <fcf3181b-3efc-55f5-687c-324937b543e6@xen.org>
Date: Mon, 8 Feb 2021 09:11:18 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <eeb62129-d9fc-2155-0e0f-aff1fbb33fbc@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Juergen,

On 07/02/2021 12:58, Jürgen Groß wrote:
> On 06.02.21 19:46, Julien Grall wrote:
>> Hi Juergen,
>>
>> On 06/02/2021 10:49, Juergen Gross wrote:
>>> The first three patches are fixes for XSA-332. They avoid WARN splats
>>> and a performance issue with interdomain events.
>>
>> Thanks for helping to figure out the problem. Unfortunately, I still
>> reliably see the WARN splat with the latest Linux master
>> (1e0d27fce010) + your first 3 patches.
>>
>> I am using Xen 4.11 (1c7d984645f9) and dom0 is forced to use the 2L 
>> events ABI.
>>
>> After some debugging, I think I have an idea what went wrong. The
>> problem happens when the event is initially bound from vCPU0 to a 
>> different vCPU.
>>
>> From the comment in xen_rebind_evtchn_to_cpu(), we are masking the
>> event to prevent it being delivered on an unexpected vCPU. However, I 
>> believe the following can happen:
>>
>> vCPU0                 | vCPU1
>>                       |
>>                       | Call xen_rebind_evtchn_to_cpu()
>> receive event X       |
>>                       | mask event X
>>                       | bind to vCPU1
>> <vCPU descheduled>    | unmask event X
>>                       |
>>                       | receive event X
>>                       |
>>                       | handle_edge_irq(X)
>> handle_edge_irq(X)    |  -> handle_irq_event()
>>                       |    -> set IRQD_IN_PROGRESS
>>  -> set IRQS_PENDING  |
>>                       |    -> evtchn_interrupt()
>>                       |    -> clear IRQD_IN_PROGRESS
>>                       |  -> IRQS_PENDING is set
>>                       |  -> handle_irq_event()
>>                       |    -> evtchn_interrupt()
>>                       |      -> WARN()
>>                       |
>>
>> All the lateeoi handlers expect ONESHOT semantics, and
>> evtchn_interrupt() doesn't tolerate any deviation.
>>
>> I think the problem was introduced by 7f874a0447a9 ("xen/events: fix 
>> lateeoi irq acknowledgment") because the interrupt was disabled 
>> previously. Therefore we wouldn't do another iteration in 
>> handle_edge_irq().
> 
> I think you picked the wrong commit for blaming, as this is just
> the last patch of the three patches you were testing.

I actually found the right commit for blaming but I copied the 
information from the wrong shell :/. The bug was introduced by:

c44b849cee8c ("xen/events: switch user event channels to lateeoi model")

> 
>> Aside from the handlers, I think it may impact the deferred EOI
>> mitigation, because in theory a 3rd vCPU could join the party (let's
>> say vCPU A migrates the event from vCPU B to vCPU C). So
>> info->{eoi_cpu, irq_epoch, eoi_time} could possibly get mangled?
>>
>> For a fix, we may want to consider holding evtchn_rwlock with
>> write permission. Although, I am not 100% sure this is going to
>> prevent everything.
> 
> It will make things worse, as it would violate the locking hierarchy
> (xen_rebind_evtchn_to_cpu() is called with the IRQ-desc lock held).

Ah, right.

> 
> On a first glance I think we'll need a 3rd masking state ("temporarily
> masked") in the second patch in order to avoid a race with lateeoi.
> 
> In order to avoid the race you outlined above we need an "event is being
> handled" indicator checked via test_and_set() semantics in
> handle_irq_for_port() and reset only when calling clear_evtchn().

It feels like we are trying to work around the IRQ flow we are using 
(i.e. handle_edge_irq()).

This reminds me of the thread we had before discovering XSA-332 (see [1]). 
Back then, it was suggested to switch back to handle_fasteoi_irq().

Cheers,

[1] https://lore.kernel.org/xen-devel/alpine.DEB.2.21.2004271552430.29217@sstabellini-ThinkPad-T480s/

-- 
Julien Grall
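
The "event is being handled" indicator with test_and_set() semantics suggested in the thread above could look roughly like the following. This is a hedged sketch, not the eventual Linux patch: the structure and function names are invented, and in the real driver the flag would be taken in handle_irq_for_port() and released from clear_evtchn():

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Per-event-channel state; only the guard flag is shown here. */
struct evtchn_info {
    atomic_flag in_handling;
};

/* Claim the event with test_and_set() semantics.  Returns true if we
 * now own it and may run the handler; a second CPU racing in (as in
 * the rebind scenario outlined above) gets false and backs off instead
 * of re-entering the ONESHOT handler and hitting the WARN(). */
static bool try_start_handling(struct evtchn_info *info)
{
    return !atomic_flag_test_and_set(&info->in_handling);
}

/* Release the guard; in the sketch this stands in for clear_evtchn(). */
static void finish_handling(struct evtchn_info *info)
{
    atomic_flag_clear(&info->in_handling);
}
```

The flag is per event channel rather than per IRQ descriptor, which is what would make it effective even when two vCPUs run handle_edge_irq() for the same port concurrently.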


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 09:13:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 09:13:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82654.152601 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l92bw-0002dI-VJ; Mon, 08 Feb 2021 09:13:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82654.152601; Mon, 08 Feb 2021 09:13:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l92bw-0002dB-SL; Mon, 08 Feb 2021 09:13:36 +0000
Received: by outflank-mailman (input) for mailman id 82654;
 Mon, 08 Feb 2021 09:13:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1l92bv-0002d6-Rj
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 09:13:35 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l92bu-00048b-Rg; Mon, 08 Feb 2021 09:13:34 +0000
Received: from [54.239.6.177] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l92bu-0003Tv-Lp; Mon, 08 Feb 2021 09:13:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:References:Cc:To:From:Subject;
	bh=CGT/3ifhVWzqMpXvTZNtAp3Nk8XNfG5uu7e/M4r77rw=; b=zfasqy4h3Y3Sf3cO72RS4uHYKu
	U8z20BhChP07UfrgI7ljUqSmEf4AVCO4rhTa324+xq3ANyIbIfGy7ss8QYnFXqZ3VKZDd3CYrnt+U
	1FRBEqyQvasgSZ0d+ctzc2EryQPrvvf444Xk4h+2e4XybqZFJXwm/wmwihN0uHY+U6pY=;
Subject: Re: [PATCH] xen/arm: fix gnttab_need_iommu_mapping
From: Julien Grall <julien@xen.org>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: lucmiccio@gmail.com, xen-devel@lists.xenproject.org,
 Bertrand.Marquis@arm.com, Volodymyr_Babchuk@epam.com
References: <alpine.DEB.2.21.2102051604320.29047@sstabellini-ThinkPad-T480s>
 <247f517e-a283-12c8-2ccb-3915cda4ac2e@xen.org>
Message-ID: <915a0214-cccf-6764-d20b-c45d29f1ce26@xen.org>
Date: Mon, 8 Feb 2021 09:13:33 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <247f517e-a283-12c8-2ccb-3915cda4ac2e@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi,

On 06/02/2021 11:09, Julien Grall wrote:
> Hi Stefano,
> 
> On 06/02/2021 00:38, Stefano Stabellini wrote:
>> Commit 91d4eca7add broke gnttab_need_iommu_mapping on ARM.
> 
> Doh :/.
> 
>> The offending chunk is:
>>
>>   #define gnttab_need_iommu_mapping(d)                    \
>> -    (is_domain_direct_mapped(d) && need_iommu(d))
>> +    (is_domain_direct_mapped(d) && need_iommu_pt_sync(d))
>>
>> On ARM we need gnttab_need_iommu_mapping to be true for dom0 when it is
>> directly mapped, like the old check did,
> 
> This is not entirely correct, we only need gnttab_need_iommu_mapping() 
> to return true when the domain is direct mapped **and** the IOMMU is 
> enabled for the domain.
> 
>> but the new check is always
>> false.
>>
>> In fact, need_iommu_pt_sync is defined as dom_iommu(d)->need_sync and
>> need_sync is set as:
>>
>>      if ( !is_hardware_domain(d) || iommu_hwdom_strict )
>>          hd->need_sync = !iommu_use_hap_pt(d);
>>
>> iommu_hwdom_strict is actually supposed to be ignored on ARM, see the
>> definition in docs/misc/xen-command-line.pandoc:
>>
>>     This option is hardwired to true for x86 PVH dom0's (as RAM
>>     belonging to other domains in the system don't live in a compatible
>>     address space), and is ignored for ARM.
>>
>> But aside from that, the issue is that iommu_use_hap_pt(d) is true,
>> hence, hd->need_sync is false, and gnttab_need_iommu_mapping(d) is false
>> too.
> 
> need_sync means that you have a separate IOMMU page-table and they need 
> to be updated for every change.
> 
> hap_pt means the page-table used by the IOMMU is the P2M.
> 
> For Arm, we always share the P2M with the IOMMU.
> 
>>
>> As a consequence, when using PV network from a domU on a system where
>> IOMMU is on from Dom0, I get:
>>
>> (XEN) smmu: /smmu@fd800000: Unhandled context fault: fsr=0x402, 
>> iova=0x8424cb148, fsynr=0xb0001, cb=0
>> [   68.290307] macb ff0e0000.ethernet eth0: DMA bus error: HRESP not OK
>>
>> The fix is to go back to the old implementation of
>> gnttab_need_iommu_mapping.  However, we don't even need to specify &&
>> need_iommu(d) since we don't actually need to check for the IOMMU to be
>> enabled (iommu_map does it for us at the beginning of the function.)
> 
> gnttab_need_iommu_mapping() doesn't only gate the
> iommu_legacy_{,un}map() call but also decides whether we need to hold
> both the local and remote grant-table write lock for the duration of the
> operation (see double_gt_lock()).
> 
> I'd like to avoid the requirement to hold the double_gt_lock() if the
> domain is going to use the IOMMU.

It looks like I didn't convey the right message here. What I meant is
that we should avoid holding both grant-table locks if dom0 is not going
to use the IOMMU.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 09:39:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 09:39:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82659.152613 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l930B-0001Bc-ST; Mon, 08 Feb 2021 09:38:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82659.152613; Mon, 08 Feb 2021 09:38:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l930B-0001BT-Lc; Mon, 08 Feb 2021 09:38:39 +0000
Received: by outflank-mailman (input) for mailman id 82659;
 Mon, 08 Feb 2021 09:38:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ZIyB=HK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l930A-0001BN-1K
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 09:38:38 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 79c6eac7-0d17-4fae-b7f6-f9abd47d8d78;
 Mon, 08 Feb 2021 09:38:35 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2EDB9AD62;
 Mon,  8 Feb 2021 09:38:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 79c6eac7-0d17-4fae-b7f6-f9abd47d8d78
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612777114; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Ey4KeQ4wI8Ow8TBhYfMVHQwpaYnFFEiIsiUIit12DQc=;
	b=X3B/SXktu9OMTy3gCQxKfATO0YpQH5xheJaVU3VFlUCboSnM0ZJ3XMUnFDT4Nb0M3RXbQt
	/XfS7+BQnhF2x43SmgC72bRa4C5wB08RgIEcwa/NPXW0LO5bxZTYw4zykSZJ6+jMUiEnYr
	rCEC9iNZdEEoaRqQbhVaGX7lqbQvV44=
Subject: Re: [PATCH 6/7] xen/evtch: use smp barriers for user event ring
To: Juergen Gross <jgross@suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, linux-kernel@vger.kernel.org,
 xen-devel@lists.xenproject.org
References: <20210206104932.29064-1-jgross@suse.com>
 <20210206104932.29064-7-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <5cef0d0e-8f03-7cd2-4246-268a67a87dc5@suse.com>
Date: Mon, 8 Feb 2021 10:38:33 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210206104932.29064-7-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 06.02.2021 11:49, Juergen Gross wrote:
> The ring buffer for user events is used in the local system only, so
> smp barriers are fine for ensuring consistency.
> 
> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

Albeit I think "local system" is at least ambiguous (physical
machine? VM?). How about something like "is local to the given
kernel instance"?

Jan


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 09:39:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 09:39:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82660.152625 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l930g-0001ID-1u; Mon, 08 Feb 2021 09:39:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82660.152625; Mon, 08 Feb 2021 09:39:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l930f-0001I6-Ue; Mon, 08 Feb 2021 09:39:09 +0000
Received: by outflank-mailman (input) for mailman id 82660;
 Mon, 08 Feb 2021 09:39:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=atCq=HK=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1l930e-0001Hy-W9
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 09:39:09 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2ce1e02c-5ccd-4ee5-a133-d4280913eebc;
 Mon, 08 Feb 2021 09:39:06 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2ce1e02c-5ccd-4ee5-a133-d4280913eebc
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612777146;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=nSM6ADTd5b+CQTsgH7WSGLswgVPo6dJOBg75zfbU2+0=;
  b=WqDgb03i95NltYe6ArCBjl1LQPc9vlUqqlqcFkQihh2TssnNyg/7NkAM
   kSPM75tB/8dJtOoDiZvdgb+qmlQmP9Ta6O0PyL3hHDPgiGvwvmNRDrlrn
   ya9p0zYtzQbyQOjz+WxqkRZ/d46MAY9d0cjY9cVEA1aq6YDTb+itbc5rj
   M=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: XbaO5SN1pumLcMYO50PiMWGTIURoI0NGLwT/oun//Sg83ZtDBePyXefsxon9Mtt37cnSuuvEFD
 HwntF1g5d5NFp96XhqpbmZkm41O1ZiLY2jltV5dE70SBbb+KIZUMsHmaUK8k0A0zE9dKvrWB+O
 aZ6Au8TTNjAnVMygtU6xgzQpfYH5RN0GPyeeUCT3V8unio73mAz6+1Ne3KadhdUnpHnL/03DD2
 nMH4V3TeTRipDNAP/p1IUSIryT6NT88GdZtLHzlkkeI1v3FDdMDefNA/Q+msex1jRRBkyyQGrO
 9xA=
X-SBRS: 5.2
X-MesageID: 36701346
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,161,1610427600"; 
   d="scan'208";a="36701346"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=MQmf9JddjhhK0aswZCGs4kp0V3vVTw3mk5R0tNPFvkeWl81baYZ79CSJ06QdGpRreaORq0Ujf6N+0vQRUxtnwGlcdds5UuNjlGbLaBntRhuNUdHo5p0CLtMP/Eol7PtERBnq6YO3UDSIz4XksC2g4AqGel6PwZTPJgVgGSq3X9RCHiFEQGEmP4KOip08v5axBcD91A8yoYIul+sGH5NmoAuENSRmn7utwXA0HWoisBZFPK9J/Ubl/M8++qnvhV8ffcRcbgzs9PotncgFexBHAzmQsM3lz/Npc/mYg8V29Axsm4LQEPbOnsUrnVbzXjdaIxQ2Ab2s0vv35wGPP6r4uA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=jxQUzOQovPq+4OVo/Jsj2TfPDsijD3JtXeQggZxcGwc=;
 b=VK3W/YP0WiDdE+Ob3xQ0au2uxtZWW3bSK8g6fPl9c+1nXzqbXiYnWPM1MXxYEAP+fFMCQDdP6LLt8YjPrmp7LBAxOayOur4R79O/zo2ftzfT8u8GAENyLO+k/qibwcN/ZoVev2P78Gt/jCypAfgINzz0QsL01ud4AIR2n0vTsEzTdGKs8BPEbQxX2owpnP8quTY/kjPVN3zvY036UopszlWEkugxfinYxIHmlPdzkuD03rcGvHe8mfYc13C7dL6UrDpl9fEJEPvvqbP4oKzuE4sDavEuG8XVEEgz6GVdBgtFNcRxjWP6+NagTwy4bVQ+ORaYiQTNMvwDj4J+2/85Gg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=jxQUzOQovPq+4OVo/Jsj2TfPDsijD3JtXeQggZxcGwc=;
 b=vLWn71cTdNP/jWzl+1ImipIcPB5Yb9eDkT6tFb5QM9qyWU1oM3M0Gfcamkj6z91jOyj2myXKWO0OFDblKgVoYPoewFGl6Zkxn+PvYkDfQ5nseFVPbqM4Oxd9Nu0FRuagvOPRVehC3j79glLrzsv+N4DsPUC7I81ghbdiuK97Ksw=
Date: Mon, 8 Feb 2021 10:38:58 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Claudemir Todo Bom
	<claudemir@todobom.com>
Subject: Re: [PATCH v2 3/3] x86/time: don't move TSC backwards in
 time_calibration_tsc_rendezvous()
Message-ID: <YCEGshHDEH9bJU7y@Air-de-Roger>
References: <b8d1d4f8-8675-1393-8524-d060ffb1551a@suse.com>
 <80d05abb-4d53-3229-8326-21d79e32dfe4@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <80d05abb-4d53-3229-8326-21d79e32dfe4@suse.com>
X-ClientProxiedBy: MR2P264CA0150.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:501:1::13) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 9afb7208-f6ad-404c-0b28-08d8cc155f21
X-MS-TrafficTypeDiagnostic: DM6PR03MB5324:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB53245470C79D61175367B3C08F8F9@DM6PR03MB5324.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: auFZXBLtBwOZTBghv4qFEHRp0qNWHo7I8QFox3ek88otYnlM5T/dknpzjiMAFyQ0yYTF/qm5HSsWB7Aw2jjtSkJwk0Iu1UHsP3a1V61K0Xss7UR45US9nGj17ZoQPc0p+f9NY1G+jXG0arODZ8Wi9g3tV8alUQTvStL4Vo1U2i1LQqQ2hbfCrO1uB+jMIlfG7lDZqfg2SlDXD9OJ+uqFIJPqPLEEYn8yBBx8qJcJY8uPqk897loNCmcK5ch2KJcyeee15qqflWlG2OCGa7LRCFElf8OJHkCzQTrhtSr1SSpp6ClBYb5j1MyGNGAo9AVVP93PNfg/nj6nIEbj+2MQZ8m3dx3v+YXKIlRGEWLV+JN00NrSyt1pU64FfyJYKgplms2Ex/VIYTWBg7jrThaNVA1ixNIhLoEUqHGZPPgHg1agclharGku4NR6mDR2K3hPfXvaYfFiunPnAPSyGUHCN291kdipgROFjU3da515IUVwEKeosRa0NFe58xuqeUv3WRRx2RSeV8P+MPJ9mCBWNA==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(136003)(39860400002)(396003)(376002)(346002)(366004)(4326008)(6486002)(8676002)(8936002)(956004)(16526019)(9686003)(33716001)(86362001)(26005)(186003)(83380400001)(316002)(6916009)(54906003)(66476007)(6496006)(5660300002)(66946007)(2906002)(478600001)(6666004)(66556008)(85182001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?T2g0cjB2dERTdmVYUXJNMDNKcG5Uc3JwWUJueXgzaTkyVThRbURlOE5veFE2?=
 =?utf-8?B?UU1HbVFyVE5KYTRMeEd0azc5a3Zkek52MC9kK2FEOHJZNkdReTJZOWtXY0lo?=
 =?utf-8?B?dUZYVUNOT0VnNjFkcW5QemREc0FOMllYSTVoeVdVQnl6NkVoUW05NGpycW5E?=
 =?utf-8?B?NlpPTVBZa0M4VXU2RERKMUNJbGQwUFZLSjBUOHFyVVJtOVlwR2RIdnVpTGow?=
 =?utf-8?B?NUFBK1I3QXB0ckhiQXZXYkpjVnpCa1pGdlB5NHdkZkcwV25vbXV2VGRPUGZq?=
 =?utf-8?B?b1I2Z1MvQVErNVoyV2hkU1Q1OXlRL1BNMzdER3JZaG9FNXg0REVjRWREdGlQ?=
 =?utf-8?B?ZVJZZG5FMGFuT3FiV0hCdjJTU29VSlpKV1h2clZ4TlgrdEFDb0t4R3k3NTlR?=
 =?utf-8?B?bkVtZUM1Ti9GTVFFWUZZck5WZ1lzMVRrUk5SWW81c2Z5UC9DbFo3OTN3bDFi?=
 =?utf-8?B?WG1JamhaUXpNdGxIVEltVzZWVzM5RFg3Q0l5T21vNEdJZXh3TGM3ZDBxa0Ew?=
 =?utf-8?B?VXo0S1RpUkVnNisybFVFazJ0dkV1Tk1uMk9JRGtJSGYySndzaEFqdzV1VlU4?=
 =?utf-8?B?UWdJVG4rTm0rM1JLSE1BckcvckFsZzdKSDJPRVZKQklXVitLTUpaTnhlQ3cv?=
 =?utf-8?B?bkZVWm9udm54UlpzSmpqUTNMQTBWUnNBTExHUWtOaHhyK2FXcktaZVVRZ2k4?=
 =?utf-8?B?YlFROUduRlByUElaWmZITU51V2lIdWRSeDg0ekdzRWRwbHVULzAyd0YyeCt5?=
 =?utf-8?B?bmhtYUlqZUw2Zkxnc3JWY2x2Z045Q0swZW9PcHA0bng1aDBZRXBhamU1azVu?=
 =?utf-8?B?S21kb3htM2pGTnBmenlWem5nVzdEQnpjaWZWQmRvN1FUclV5OFJmTWpELytF?=
 =?utf-8?B?QThSUVMxcytReFk2aWhjZjNsVnF1bmhZazBvUjczUWZ2QkxqZ3gxMWh0YVBO?=
 =?utf-8?B?YW8zNkRVQ2k1NFZCTm12bGdwTXZEQnlRY0pydGQ2UlJ2L0dtNjFXeUZQVmhw?=
 =?utf-8?B?ZUlOZ1FPalIwSFRMSSs1WktFbTE4d1NNOU9CZ244ZEtsWkZ6bjhHalRoU0Vk?=
 =?utf-8?B?bmpSUWxsUkZwK1d1RFJpQXJ2dHRqNVo5T2RwaWtFaXk3MThRbUIvcEFuSXQ4?=
 =?utf-8?B?dEtRalFQUTc1RjFmNnovc2lIdTV3VjhXdmNXYWdHK1BDN1lqdThFU3ZzOTlU?=
 =?utf-8?B?S29ZMHptQ1VFRk8rd0Q3aVRhWHYyYVdFZlZ5T1hwSlNXN2Nzci9wR3gxbTRC?=
 =?utf-8?B?aUxvd1ljZmtUR01ZUC80LzZwSVlJSjg5OG41Qy9KSHJWeXhYSENibSs5NVZp?=
 =?utf-8?B?S0ovSVI3ZUJYbWJ2cUZ5YkR3LzRNait4ZEt3ZVNJcTcyeGhtanRmbjB0MXM5?=
 =?utf-8?B?d0M3cGxlSlZhMG82VVhzZmVFbWZtbmtrZVJCeGtuYWkxQm5sY205VkdXRlpi?=
 =?utf-8?B?eDJJRDFBNm5iUllEOGdMbWtWNUpvb240djAvd3EyWktROGhocThmU3pKMDdL?=
 =?utf-8?B?c1R5V2pzcWxVUTJ5OTZUMEpRTldxSUFpb2ZjN2dZaVcxbXZrQUxMTjNvNERI?=
 =?utf-8?B?dGVsdm1CdTlLYy9xWmduSUxPcEtJS3lmakExU3pUWkg1dHlvaWJTTCtwcU9V?=
 =?utf-8?B?TEdwK3pMSWhOekhzbFlRZjEzdWQzeC9WTkN5Zk1WYTlLUEhyWlJxazBOM3Bj?=
 =?utf-8?B?dHc2ZmRXaEtYdCtUampMRElLYThpN2RBdnJSNFVNazNpU2dSb3IvZnh3U1NW?=
 =?utf-8?Q?CVWciejCcKIBu4g9Juh+c9eYZzvGenIlxwwrExX?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 9afb7208-f6ad-404c-0b28-08d8cc155f21
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 08 Feb 2021 09:39:03.7227
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: TCAQcFOHMA5XzM94PEGbUFSuwxjuU0fvCkiKDg2zS/oJdasgi9VT6O7y7Y43BE7gdPQaaSgNpn1WnAiqNRf6xQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB5324
X-OriginatorOrg: citrix.com

On Mon, Feb 01, 2021 at 01:43:28PM +0100, Jan Beulich wrote:
> While doing this for small amounts may be okay, the unconditional use
> of CPU0's value here has been found to be a problem when the boot time
> TSC of the BSP was behind that of all APs by more than a second. In
> particular because of get_s_time_fixed() producing insane output when
> the calculated delta is negative, we can't allow this to happen.
> 
> On the first iteration have all other CPUs sort out the highest TSC
> value any one of them has read. On the second iteration, if that
> maximum is higher than CPU0's, update its recorded value from that
> taken in the first iteration. Use the resulting value on the last
> iteration to write everyone's TSCs.
> 
> To account for the possible discontinuity, have
> time_calibration_rendezvous_tail() record the newly written value, but
> extrapolate local stime using the value read.
> 
> Reported-by: Claudemir Todo Bom <claudemir@todobom.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v2: Don't update r->master_stime by calculation. Re-base over new
>     earlier patch. Make time_calibration_rendezvous_tail() take two TSC
>     values.
> ---
> Since CPU0 reads its TSC last on the first iteration, if TSCs were
> perfectly sync-ed there shouldn't ever be a need to update. However,
> even on the TSC-reliable system I first tested this on (using
> "tsc=skewed" to get this rendezvous function into use in the first
> place) updates by up to several thousand clocks did happen. I wonder
> whether this points at some problem with the approach that I'm not (yet)
> seeing.

I'm confused by this: so on a system that had reliable TSCs, where you
forced the reliable flag off, you then saw big differences when doing
the rendezvous?

That would seem to indicate that such a system doesn't really have
reliable TSCs?

> Considering the sufficiently modern CPU it's using, I suspect the
> reporter's system wouldn't even need to turn off TSC_RELIABLE, if only
> there wasn't the boot time skew. Hence another approach might be to fix
> this boot time skew. Of course to recognize whether the TSCs then still
> aren't in sync we'd need to run tsc_check_reliability() sufficiently
> long after that adjustment. Which is besides the need to have this
> "fixing" be precise enough for the TSCs to not look skewed anymore
> afterwards.

Maybe it would make sense to do a TSC counter sync after APs are up
and then disable the rendezvous if the next calibration rendezvous
shows no skew?

I also wonder, we test for skew just after the APs have been booted,
and decide at that point whether we need a calibration rendezvous.

Maybe we could do a TSC sync just after APs are up (to hopefully bring
them in sync), and then do the tsc_check_reliability just before Xen
ends booting (ie: before handing control to dom0?)

What we do right now (ie: do the tsc_check_reliability so early) is
also likely to miss small skews that will only show up after APs have
been running for a while?

> As per the comment ahead of it, the original purpose of the function was
> to deal with TSCs halted in deep C states. While this probably explains
> why only forward moves were ever expected, I don't see how this could
> have been reliable in case CPU0 was deep-sleeping for a sufficiently
> long time. My only guess here is a hidden assumption of CPU0 never being
> idle for long enough.
> 
> --- a/xen/arch/x86/time.c
> +++ b/xen/arch/x86/time.c
> @@ -1658,17 +1658,17 @@ struct calibration_rendezvous {
>      cpumask_t cpu_calibration_map;
>      atomic_t semaphore;
>      s_time_t master_stime;
> -    u64 master_tsc_stamp;
> +    uint64_t master_tsc_stamp, max_tsc_stamp;
>  };
>  
>  static void
>  time_calibration_rendezvous_tail(const struct calibration_rendezvous *r,
> -                                 uint64_t tsc)
> +                                 uint64_t old_tsc, uint64_t new_tsc)
>  {
>      struct cpu_time_stamp *c = &this_cpu(cpu_calibration);
>  
> -    c->local_tsc    = tsc;
> -    c->local_stime  = get_s_time_fixed(c->local_tsc);
> +    c->local_tsc    = new_tsc;
> +    c->local_stime  = get_s_time_fixed(old_tsc ?: new_tsc);
>      c->master_stime = r->master_stime;
>  
>      raise_softirq(TIME_CALIBRATE_SOFTIRQ);
> @@ -1683,6 +1683,7 @@ static void time_calibration_tsc_rendezv
>      int i;
>      struct calibration_rendezvous *r = _r;
>      unsigned int total_cpus = cpumask_weight(&r->cpu_calibration_map);
> +    uint64_t tsc = 0;
>  
>      /* Loop to get rid of cache effects on TSC skew. */
>      for ( i = 4; i >= 0; i-- )
> @@ -1692,8 +1693,15 @@ static void time_calibration_tsc_rendezv
>              while ( atomic_read(&r->semaphore) != (total_cpus - 1) )
>                  cpu_relax();
>  
> -            if ( r->master_tsc_stamp == 0 )
> -                r->master_tsc_stamp = rdtsc_ordered();
> +            if ( tsc == 0 )
> +                r->master_tsc_stamp = tsc = rdtsc_ordered();
> +            else if ( r->master_tsc_stamp < r->max_tsc_stamp )
> +                /*
> +                 * We want to avoid moving the TSC backwards for any CPU.
> +                 * Use the largest value observed anywhere on the first
> +                 * iteration.
> +                 */
> +                r->master_tsc_stamp = r->max_tsc_stamp;
>              else if ( i == 0 )
>                  r->master_stime = read_platform_stime(NULL);
>  
> @@ -1712,6 +1720,16 @@ static void time_calibration_tsc_rendezv
>              while ( atomic_read(&r->semaphore) < total_cpus )
>                  cpu_relax();
>  
> +            if ( tsc == 0 )
> +            {
> +                uint64_t cur;
> +
> +                tsc = rdtsc_ordered();
> +                while ( tsc > (cur = r->max_tsc_stamp) )
> +                    if ( cmpxchg(&r->max_tsc_stamp, cur, tsc) == cur )
> +                        break;

I think you could avoid reading cur explicitly on each loop iteration
and instead do:

cur = ACCESS_ONCE(r->max_tsc_stamp);
while ( tsc > cur )
    cur = cmpxchg(&r->max_tsc_stamp, cur, tsc);

> +            }
> +
>              if ( i == 0 )
>                  write_tsc(r->master_tsc_stamp);
>  
> @@ -1719,9 +1737,12 @@ static void time_calibration_tsc_rendezv
>              while ( atomic_read(&r->semaphore) > total_cpus )
>                  cpu_relax();
>          }
> +
> +        /* Just in case a read above ended up reading zero. */
> +        tsc += !tsc;

Won't that be worthy of an ASSERT_UNREACHABLE? I'm not sure I see how
tsc could be 0 on a healthy system after the loop above.


Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 09:41:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 09:41:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82665.152637 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l932X-0002D4-Hf; Mon, 08 Feb 2021 09:41:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82665.152637; Mon, 08 Feb 2021 09:41:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l932X-0002Cx-EN; Mon, 08 Feb 2021 09:41:05 +0000
Received: by outflank-mailman (input) for mailman id 82665;
 Mon, 08 Feb 2021 09:41:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=q0oM=HK=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1l932W-0002Cr-6A
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 09:41:04 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dbd00fbf-5a29-43c3-87f5-d4da740304de;
 Mon, 08 Feb 2021 09:41:02 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EAB7EAE53;
 Mon,  8 Feb 2021 09:41:01 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dbd00fbf-5a29-43c3-87f5-d4da740304de
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612777262; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=fF169tx56y6mupjIvrPQt9dqN5xV/kHLsueKWM8xvUg=;
	b=MoSRipFUt6ufW8TtS8lzbiX5/FVZJzmez4YJCZRHeMBFobJx4wUaMpOEY0Ksoxlj08E0EB
	BMga+pAmqKxvJzr+h92HO0x1eYAOFojiRnJ+LvHFYMgWcOQM2yxISYYR1UJ41qM4vIXkKZ
	9dFMw0ot34fjhu/eiJLf+M+wr0MVcMQ=
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
 netdev@vger.kernel.org, linux-scsi@vger.kernel.org
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, stable@vger.kernel.org,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Jens Axboe <axboe@kernel.dk>, Wei Liu <wei.liu@kernel.org>,
 Paul Durrant <paul@xen.org>, "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>
References: <20210206104932.29064-1-jgross@suse.com>
 <bd63694e-ac0c-7954-ec00-edad05f8da1c@xen.org>
 <eeb62129-d9fc-2155-0e0f-aff1fbb33fbc@suse.com>
 <fcf3181b-3efc-55f5-687c-324937b543e6@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [PATCH 0/7] xen/events: bug fixes and some diagnostic aids
Message-ID: <7aaeeb3d-1e1b-6166-84e9-481153811b62@suse.com>
Date: Mon, 8 Feb 2021 10:41:00 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <fcf3181b-3efc-55f5-687c-324937b543e6@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="GMBB9RCSmiMsxNr0VEDv5TlmxZ6GktJjs"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--GMBB9RCSmiMsxNr0VEDv5TlmxZ6GktJjs
Content-Type: multipart/mixed; boundary="k4gyNUY4aI5Lw9phW38uxpSLf7g9cDz0D";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
 netdev@vger.kernel.org, linux-scsi@vger.kernel.org
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, stable@vger.kernel.org,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Jens Axboe <axboe@kernel.dk>, Wei Liu <wei.liu@kernel.org>,
 Paul Durrant <paul@xen.org>, "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>
Message-ID: <7aaeeb3d-1e1b-6166-84e9-481153811b62@suse.com>
Subject: Re: [PATCH 0/7] xen/events: bug fixes and some diagnostic aids
References: <20210206104932.29064-1-jgross@suse.com>
 <bd63694e-ac0c-7954-ec00-edad05f8da1c@xen.org>
 <eeb62129-d9fc-2155-0e0f-aff1fbb33fbc@suse.com>
 <fcf3181b-3efc-55f5-687c-324937b543e6@xen.org>
In-Reply-To: <fcf3181b-3efc-55f5-687c-324937b543e6@xen.org>

--k4gyNUY4aI5Lw9phW38uxpSLf7g9cDz0D
Content-Type: multipart/mixed;
 boundary="------------8545B5C2DC00D006FC21850E"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------8545B5C2DC00D006FC21850E
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 08.02.21 10:11, Julien Grall wrote:
> Hi Juergen,
>=20
> On 07/02/2021 12:58, J=C3=BCrgen Gro=C3=9F wrote:
>> On 06.02.21 19:46, Julien Grall wrote:
>>> Hi Juergen,
>>>
>>> On 06/02/2021 10:49, Juergen Gross wrote:
>>>> The first three patches are fixes for XSA-332. The avoid WARN splats=

>>>> and a performance issue with interdomain events.
>>>
>>> Thanks for helping to figure out the problem. Unfortunately, I still =

>>> see reliably the WARN splat with the latest Linux master=20
>>> (1e0d27fce010) + your first 3 patches.
>>>
>>> I am using Xen 4.11 (1c7d984645f9) and dom0 is forced to use the 2L
>>> events ABI.
>>>
>>> After some debugging, I think I have an idea of what went wrong. The
>>> problem happens when the event is initially bound from vCPU0 to a
>>> different vCPU.
>>>
>>> From the comment in xen_rebind_evtchn_to_cpu(), we are masking the
>>> event to prevent it being delivered on an unexpected vCPU. However, I
>>> believe the following can happen:
>>>
>>> vCPU0                  | vCPU1
>>>                        |
>>>                        | Call xen_rebind_evtchn_to_cpu()
>>> receive event X        |
>>>                        | mask event X
>>>                        | bind to vCPU1
>>> <vCPU descheduled>     | unmask event X
>>>                        |
>>>                        | receive event X
>>>                        |
>>>                        | handle_edge_irq(X)
>>> handle_edge_irq(X)     |  -> handle_irq_event()
>>>                        |    -> set IRQD_IN_PROGRESS
>>>   -> set IRQS_PENDING  |
>>>                        |    -> evtchn_interrupt()
>>>                        |    -> clear IRQD_IN_PROGRESS
>>>                        |  -> IRQS_PENDING is set
>>>                        |  -> handle_irq_event()
>>>                        |    -> evtchn_interrupt()
>>>                        |      -> WARN()
>>>                        |
>>>
>>> All the lateeoi handlers expect ONESHOT semantics, and
>>> evtchn_interrupt() doesn't tolerate any deviation.
>>>
>>> I think the problem was introduced by 7f874a0447a9 ("xen/events: fix
>>> lateeoi irq acknowledgment") because the interrupt was disabled
>>> previously, so we wouldn't do another iteration in
>>> handle_edge_irq().
>>
>> I think you picked the wrong commit for blaming, as this is just
>> the last patch of the three patches you were testing.
>
> I actually found the right commit for blaming, but I copied the
> information from the wrong shell :/. The bug was introduced by:
>
> c44b849cee8c ("xen/events: switch user event channels to lateeoi model")
>
>>
>>> Aside from the handlers, I think it may impact the deferred-EOI
>>> mitigation, because in theory a 3rd vCPU could join the party (say
>>> vCPU A migrates the event from vCPU B to vCPU C). So info->{eoi_cpu,
>>> irq_epoch, eoi_time} could possibly get mangled?
>>>
>>> For a fix, we may want to consider holding evtchn_rwlock with
>>> write permission. Although, I am not 100% sure this is going to
>>> prevent everything.
>>
>> It will make things worse, as it would violate the locking hierarchy
>> (xen_rebind_evtchn_to_cpu() is called with the IRQ-desc lock held).
>
> Ah, right.
>
>>
>> On first glance I think we'll need a 3rd masking state ("temporarily
>> masked") in the second patch in order to avoid a race with lateeoi.
>>
>> In order to avoid the race you outlined above, we need an "event is
>> being handled" indicator checked via test_and_set() semantics in
>> handle_irq_for_port() and reset only when calling clear_evtchn().
>
> It feels like we are trying to work around the IRQ flow we are using
> (i.e. handle_edge_irq()).

I'm not really sure this is the main problem here. According to your
analysis the main problem occurs when handling the event, not when
handling the IRQ: the event is being received on two vCPUs.

Our problem isn't due to the IRQ still being pending, but due to it
being raised again, which would happen for a one-shot IRQ the same way.

But maybe I'm misunderstanding your idea.


Juergen


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 09:41:50 2021
Subject: Re: [PATCH 6/7] xen/evtch: use smp barriers for user event ring
To: Jan Beulich <jbeulich@suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, linux-kernel@vger.kernel.org,
 xen-devel@lists.xenproject.org
From: Jürgen Groß <jgross@suse.com>
Message-ID: <bbe51b26-6087-efef-b8a6-1922600e67ab@suse.com>
In-Reply-To: <5cef0d0e-8f03-7cd2-4246-268a67a87dc5@suse.com>
Date: Mon, 8 Feb 2021 10:41:45 +0100

On 08.02.21 10:38, Jan Beulich wrote:
> On 06.02.2021 11:49, Juergen Gross wrote:
>> The ring buffer for user events is used in the local system only, so
>> smp barriers are fine for ensuring consistency.
>>
>> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>
> Albeit I think "local system" is at least ambiguous (physical
> machine? VM?). How about something like "is local to the given
> kernel instance"?

Yes.


Juergen



From xen-devel-bounces@lists.xenproject.org Mon Feb 08 09:44:16 2021
Subject: Re: [PATCH 6/7] xen/evtch: use smp barriers for user event ring
To: Juergen Gross <jgross@suse.com>, <xen-devel@lists.xenproject.org>,
 <linux-kernel@vger.kernel.org>
CC: Boris Ostrovsky <boris.ostrovsky@oracle.com>, Stefano Stabellini
 <sstabellini@kernel.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <2d354cad-3413-a416-0bc1-01d03e1f41cd@citrix.com>
Date: Mon, 8 Feb 2021 09:44:00 +0000
In-Reply-To: <20210206104932.29064-7-jgross@suse.com>

On 06/02/2021 10:49, Juergen Gross wrote:
> The ring buffer for user events is used in the local system only, so
> smp barriers are fine for ensuring consistency.
>
> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Juergen Gross <jgross@suse.com>

These need to be virt_* to not break in UP builds (on non-x86).

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 09:48:53 2021
Subject: Re: [PATCH 7/7] xen/evtchn: read producer index only once
To: Juergen Gross <jgross@suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, linux-kernel@vger.kernel.org,
 xen-devel@lists.xenproject.org
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <72334160-cffe-2d8a-23b7-2ea9ab1d803a@suse.com>
Date: Mon, 8 Feb 2021 10:48:48 +0100
In-Reply-To: <20210206104932.29064-8-jgross@suse.com>

On 06.02.2021 11:49, Juergen Gross wrote:
> In evtchn_read() use READ_ONCE() for reading the producer index in
> order to avoid the compiler generating multiple accesses.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>  drivers/xen/evtchn.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/xen/evtchn.c b/drivers/xen/evtchn.c
> index 421382c73d88..f6b199b597bf 100644
> --- a/drivers/xen/evtchn.c
> +++ b/drivers/xen/evtchn.c
> @@ -211,7 +211,7 @@ static ssize_t evtchn_read(struct file *file, char __user *buf,
>  			goto unlock_out;
>  
>  		c = u->ring_cons;
> -		p = u->ring_prod;
> +		p = READ_ONCE(u->ring_prod);
>  		if (c != p)
>  			break;

Why only here and not also in

		rc = wait_event_interruptible(u->evtchn_wait,
					      u->ring_cons != u->ring_prod);

or in evtchn_poll()? I understand it's not needed when
ring_prod_lock is held, but afaics that's not the case in the
two named places. Plus isn't the same then true for ring_cons
and ring_cons_mutex, i.e. aren't the two named places plus
evtchn_interrupt() also in need of READ_ONCE() for ring_cons?

Jan


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 09:50:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 09:50:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82681.152684 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l93BP-0003Yn-9h; Mon, 08 Feb 2021 09:50:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82681.152684; Mon, 08 Feb 2021 09:50:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l93BP-0003Yg-6u; Mon, 08 Feb 2021 09:50:15 +0000
Received: by outflank-mailman (input) for mailman id 82681;
 Mon, 08 Feb 2021 09:50:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ZIyB=HK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l93BO-0003Ya-0o
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 09:50:14 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id acac2de7-8b8f-4e6b-941b-06af315ee6b9;
 Mon, 08 Feb 2021 09:50:12 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B27ACADCD;
 Mon,  8 Feb 2021 09:50:11 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: acac2de7-8b8f-4e6b-941b-06af315ee6b9
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612777811; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=MoTBA6avz+A5L1BdaknRfGBP0w5pvSSOGiusH5vYDLg=;
	b=V8EHf/3iHEYlg8oCEopDrfPwpSnRJZHbotdl/3VZvjcRnKLAiYBfUuLlEn57q7d+GwKCf2
	p0sDGsDk2r/IdKR2od8UkH1+zySf2vCmmNW4gxaTekyswxXzjTANm6Fk2cqQFu+X2zbB6a
	tgZtprSKGTYZPMv0XiX2jib5gUKbkYE=
Subject: Re: [PATCH 6/7] xen/evtch: use smp barriers for user event ring
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Juergen Gross
 <jgross@suse.com>, linux-kernel@vger.kernel.org,
 xen-devel@lists.xenproject.org
References: <20210206104932.29064-1-jgross@suse.com>
 <20210206104932.29064-7-jgross@suse.com>
 <2d354cad-3413-a416-0bc1-01d03e1f41cd@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <dfa40af0-2706-2540-c59a-6593c1c0553a@suse.com>
Date: Mon, 8 Feb 2021 10:50:11 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <2d354cad-3413-a416-0bc1-01d03e1f41cd@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 08.02.2021 10:44, Andrew Cooper wrote:
> On 06/02/2021 10:49, Juergen Gross wrote:
>> The ring buffer for user events is used in the local system only, so
>> smp barriers are fine for ensuring consistency.
>>
>> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
> 
> These need to be virt_* to not break in UP builds (on non-x86).

Initially I thought so, too, but isn't the sole vCPU of such a
VM getting re-scheduled to a different pCPU in the hypervisor
an implied barrier anyway?

Jan


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 09:54:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 09:54:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82684.152697 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l93FP-0003jN-SR; Mon, 08 Feb 2021 09:54:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82684.152697; Mon, 08 Feb 2021 09:54:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l93FP-0003jG-Nc; Mon, 08 Feb 2021 09:54:23 +0000
Received: by outflank-mailman (input) for mailman id 82684;
 Mon, 08 Feb 2021 09:54:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1l93FO-0003jB-Aw
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 09:54:22 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l93FJ-0000fi-7C; Mon, 08 Feb 2021 09:54:17 +0000
Received: from [54.239.6.177] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l93FI-00019w-RE; Mon, 08 Feb 2021 09:54:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=DAnR7MzHh3CuEpYPfOXRcoYHW4xu0lHgJg9s/j96+CA=; b=NQGvflJoJaq+BVdv8rdh+MkiSv
	yol9jTaW9vwwwSHp5cMZreqBIOdubT37g1wVGnfYlYuHjdIVeTKUrOIYNxXb/NkBsEkGnK0YWRmVR
	Oy5X/RCcavvyZ/EDQTSzY9UNWcretq8xUfp9jvHU7Azj5q1sWMdWYfbpuZTB3zCUWJ6w=;
Subject: Re: [PATCH 0/7] xen/events: bug fixes and some diagnostic aids
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 linux-block@vger.kernel.org, netdev@vger.kernel.org,
 linux-scsi@vger.kernel.org
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, stable@vger.kernel.org,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Jens Axboe <axboe@kernel.dk>, Wei Liu <wei.liu@kernel.org>,
 Paul Durrant <paul@xen.org>, "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>
References: <20210206104932.29064-1-jgross@suse.com>
 <bd63694e-ac0c-7954-ec00-edad05f8da1c@xen.org>
 <eeb62129-d9fc-2155-0e0f-aff1fbb33fbc@suse.com>
 <fcf3181b-3efc-55f5-687c-324937b543e6@xen.org>
 <7aaeeb3d-1e1b-6166-84e9-481153811b62@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <6f547bb5-777a-6fc2-eba2-cccb4adfca87@xen.org>
Date: Mon, 8 Feb 2021 09:54:13 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <7aaeeb3d-1e1b-6166-84e9-481153811b62@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit



On 08/02/2021 09:41, Jürgen Groß wrote:
> On 08.02.21 10:11, Julien Grall wrote:
>> Hi Juergen,
>>
>>> On 07/02/2021 12:58, Jürgen Groß wrote:
>>> On 06.02.21 19:46, Julien Grall wrote:
>>>> Hi Juergen,
>>>>
>>>> On 06/02/2021 10:49, Juergen Gross wrote:
>>>>> The first three patches are fixes for XSA-332. They avoid WARN splats
>>>>> and a performance issue with interdomain events.
>>>>
>>>> Thanks for helping to figure out the problem. Unfortunately, I still 
>>>> reliably see the WARN splat with the latest Linux master 
>>>> (1e0d27fce010) + your first 3 patches.
>>>>
>>>> I am using Xen 4.11 (1c7d984645f9) and dom0 is forced to use the 2L 
>>>> events ABI.
>>>>
>>>> After some debugging, I think I have an idea what went wrong. The 
>>>> problem happens when the event is initially bound from vCPU0 to a 
>>>> different vCPU.
>>>>
>>>> From the comment in xen_rebind_evtchn_to_cpu(), we are masking the 
>>>> event to prevent it being delivered on an unexpected vCPU. However, 
>>>> I believe the following can happen:
>>>>
>>>> vCPU0                  | vCPU1
>>>>                        |
>>>>                        | Call xen_rebind_evtchn_to_cpu()
>>>> receive event X        |
>>>>                        | mask event X
>>>>                        | bind to vCPU1
>>>> <vCPU descheduled>     | unmask event X
>>>>                        |
>>>>                        | receive event X
>>>>                        |
>>>>                        | handle_edge_irq(X)
>>>> handle_edge_irq(X)     |  -> handle_irq_event()
>>>>                        |   -> set IRQD_IN_PROGRESS
>>>>  -> set IRQS_PENDING   |
>>>>                        |   -> evtchn_interrupt()
>>>>                        |   -> clear IRQD_IN_PROGRESS
>>>>                        |  -> IRQS_PENDING is set
>>>>                        |  -> handle_irq_event()
>>>>                        |   -> evtchn_interrupt()
>>>>                        |    -> WARN()
>>>>                        |
>>>>
>>>> All the lateeoi handlers expect a ONESHOT semantic and 
>>>> evtchn_interrupt() doesn't tolerate any deviation.
>>>>
>>>> I think the problem was introduced by 7f874a0447a9 ("xen/events: fix 
>>>> lateeoi irq acknowledgment") because the interrupt was disabled 
>>>> previously. Therefore we wouldn't do another iteration in 
>>>> handle_edge_irq().
>>>
>>> I think you picked the wrong commit for blaming, as this is just
>>> the last patch of the three patches you were testing.
>>
>> I actually found the right commit for blaming but I copied the 
>> information from the wrong shell :/. The bug was introduced by:
>>
>> c44b849cee8c ("xen/events: switch user event channels to lateeoi model")
>>
>>>
>>>> Aside from the handlers, I think it may impact the defer EOI 
>>>> mitigation because, in theory, a 3rd vCPU could join the party 
>>>> (say vCPU A migrates the event from vCPU B to vCPU C). So 
>>>> info->{eoi_cpu, irq_epoch, eoi_time} could possibly get mangled?
>>>>
>>>> For a fix, we may want to consider holding evtchn_rwlock for 
>>>> writing. Although I am not 100% sure this is going to prevent 
>>>> everything.
>>>
>>> It will make things worse, as it would violate the locking hierarchy
>>> (xen_rebind_evtchn_to_cpu() is called with the IRQ-desc lock held).
>>
>> Ah, right.
>>
>>>
>>> On a first glance I think we'll need a 3rd masking state ("temporarily
>>> masked") in the second patch in order to avoid a race with lateeoi.
>>>
>>> In order to avoid the race you outlined above we need an "event is being
>>> handled" indicator checked via test_and_set() semantics in
>>> handle_irq_for_port() and reset only when calling clear_evtchn().
>>
>> It feels like we are trying to work around the IRQ flow we are using 
>> (i.e. handle_edge_irq()).
> 
> I'm not really sure this is the main problem here. According to your
> analysis the main problem is occurring when handling the event, not when
> handling the IRQ: the event is being received on two vcpus.

I don't think we can easily divide the two because we rely on the IRQ 
framework to handle the lifecycle of the event. So...

> 
> Our problem isn't due to the IRQ still being pending, but due to it
> being raised again, which should happen for a one-shot IRQ the same way.

... I don't really see how the difference matters here. The idea is to 
re-use what's already existing rather than trying to re-invent the wheel 
with an extra lock (or whatever we can come up with).

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 10:06:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 10:06:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82686.152709 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l93Qv-0004oT-Tn; Mon, 08 Feb 2021 10:06:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82686.152709; Mon, 08 Feb 2021 10:06:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l93Qv-0004oM-Qa; Mon, 08 Feb 2021 10:06:17 +0000
Received: by outflank-mailman (input) for mailman id 82686;
 Mon, 08 Feb 2021 10:06:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ZIyB=HK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l93Qu-0004oH-Jp
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 10:06:16 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 74f1ce93-b028-499c-85c6-ef5a5312b4cc;
 Mon, 08 Feb 2021 10:06:14 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 72167ADCD;
 Mon,  8 Feb 2021 10:06:13 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 74f1ce93-b028-499c-85c6-ef5a5312b4cc
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612778773; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=m57HtWSnT9lNNo+P6sNLTZYsYuwZK8OejPr+c5tKA0k=;
	b=YMzOBN7u8wu6DufA67/rqYavO42OtrEcVU0jrj2aGc+eEo80i9SGA23JLWUCw6STUqfGY7
	UFg6BMAOIQmKCyCdNRE2hQsOGniljK3XArUoLVHL7yuIcuWB4h1/4OsHuaY/sCZVUZ3j2E
	iGgzdr++IJ8PC0eJtQLBc1eu1UldXZU=
Subject: Re: [PATCH 2/7] xen/events: don't unmask an event channel when an eoi
 is pending
To: Juergen Gross <jgross@suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, stable@vger.kernel.org,
 Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org
References: <20210206104932.29064-1-jgross@suse.com>
 <20210206104932.29064-3-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <7aa76e26-b6f4-fae2-861b-bcc0039ce497@suse.com>
Date: Mon, 8 Feb 2021 11:06:12 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210206104932.29064-3-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 06.02.2021 11:49, Juergen Gross wrote:
> @@ -1798,6 +1818,29 @@ static void mask_ack_dynirq(struct irq_data *data)
>  	ack_dynirq(data);
>  }
>  
> +static void lateeoi_ack_dynirq(struct irq_data *data)
> +{
> +	struct irq_info *info = info_for_irq(data->irq);
> +	evtchn_port_t evtchn = info ? info->evtchn : 0;
> +
> +	if (VALID_EVTCHN(evtchn)) {
> +		info->eoi_pending = true;
> +		mask_evtchn(evtchn);
> +	}
> +}
> +
> +static void lateeoi_mask_ack_dynirq(struct irq_data *data)
> +{
> +	struct irq_info *info = info_for_irq(data->irq);
> +	evtchn_port_t evtchn = info ? info->evtchn : 0;
> +
> +	if (VALID_EVTCHN(evtchn)) {
> +		info->masked = true;
> +		info->eoi_pending = true;
> +		mask_evtchn(evtchn);
> +	}
> +}
> +
>  static int retrigger_dynirq(struct irq_data *data)
>  {
>  	evtchn_port_t evtchn = evtchn_from_irq(data->irq);
> @@ -2023,8 +2066,8 @@ static struct irq_chip xen_lateeoi_chip __read_mostly = {
>  	.irq_mask		= disable_dynirq,
>  	.irq_unmask		= enable_dynirq,
>  
> -	.irq_ack		= mask_ack_dynirq,
> -	.irq_mask_ack		= mask_ack_dynirq,
> +	.irq_ack		= lateeoi_ack_dynirq,
> +	.irq_mask_ack		= lateeoi_mask_ack_dynirq,
>  
>  	.irq_set_affinity	= set_affinity_irq,
>  	.irq_retrigger		= retrigger_dynirq,
> 

Unlike the prior handler the two new ones don't call ack_dynirq()
anymore, and the description doesn't give a hint towards this
difference. As a consequence, clear_evtchn() also doesn't get
called anymore - patch 3 adds the calls, but claims an older
commit to have been at fault. _If_ ack_dynirq() indeed isn't to
be called here, shouldn't the clear_evtchn() calls get added
right here?

Jan


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 10:15:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 10:15:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82692.152721 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l93Zs-0005nu-RV; Mon, 08 Feb 2021 10:15:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82692.152721; Mon, 08 Feb 2021 10:15:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l93Zs-0005nn-NM; Mon, 08 Feb 2021 10:15:32 +0000
Received: by outflank-mailman (input) for mailman id 82692;
 Mon, 08 Feb 2021 10:15:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Nr9D=HK=citrix.com=ross.lagerwall@srs-us1.protection.inumbo.net>)
 id 1l93Zr-0005ni-HK
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 10:15:31 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c8a0974f-f3b2-47a9-82d8-e22dbd83e620;
 Mon, 08 Feb 2021 10:15:29 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c8a0974f-f3b2-47a9-82d8-e22dbd83e620
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612779329;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=qqXwtjOco8fjJeHnQYr5Tm8x0nM5NCab4a40inMqm1A=;
  b=iCG3yLbN0Y6u7/yBdsk+5uTiQWu1hmej61kc/SKEnAoGGcaJTAci4oHW
   KPDVLTqS5iVx1UcTOdZSEzgBb6zZFP4n2k+4RG3OgFYcdmBnn8KcKL+QR
   Wlw7vSSyzVCiZI9InspBaUh8XINmi6glfd2UUwkTygKzAwdDqyfpYLhgz
   w=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: rjLiNJ3vCQ9cpicm4ICWdmqbU0Vr/mR7YGxVzAVJ52hyZBx42ePUyqcKVcg6h7iZihXAwtKnvk
 /u2ugcsHkCr2tjlPPtTnxOG0gr7hIogThvz1V16ht6iY6ej2VQBERCF/JonZUOAMxcv1kmMigu
 t848MnWL4C9XA83gbWwfXTwIC+UvubXumL2q3pImtIFTIMF3nv5XVgU+Y3eWieoOYu+wqvg7aw
 5/aeHo7oBKNfyDvBwrXzr6XxnepobkJpK0X8OPTp9b8GRTuQ0FaGUL8wvw83yykXckovqWbTs8
 NP0=
X-SBRS: 5.1
X-MesageID: 36787092
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,161,1610427600"; 
   d="scan'208";a="36787092"
Subject: Re: [PATCH 2/7] xen/events: don't unmask an event channel when an eoi
 is pending
To: Juergen Gross <jgross@suse.com>, <xen-devel@lists.xenproject.org>,
	<linux-kernel@vger.kernel.org>
CC: Boris Ostrovsky <boris.ostrovsky@oracle.com>, Stefano Stabellini
	<sstabellini@kernel.org>, <stable@vger.kernel.org>, Julien Grall
	<julien@xen.org>
References: <20210206104932.29064-1-jgross@suse.com>
 <20210206104932.29064-3-jgross@suse.com>
From: Ross Lagerwall <ross.lagerwall@citrix.com>
Message-ID: <bdf20a87-5b4d-529e-7028-cea3eeef769b@citrix.com>
Date: Mon, 8 Feb 2021 10:15:31 +0000
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.2
MIME-Version: 1.0
In-Reply-To: <20210206104932.29064-3-jgross@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 2021-02-06 10:49, Juergen Gross wrote:
> An event channel should be kept masked when an eoi is pending for it.
> When being migrated to another cpu it might be unmasked, though.
> 
> In order to avoid this keep two different flags for each event channel
> to be able to distinguish "normal" masking/unmasking from eoi related
> masking/unmasking. The event channel should only be able to generate
> an interrupt if both flags are cleared.
> 
> Cc: stable@vger.kernel.org
> Fixes: 54c9de89895e0a36047 ("xen/events: add a new late EOI evtchn framework")
> Reported-by: Julien Grall <julien@xen.org>
> Signed-off-by: Juergen Gross <jgross@suse.com>
...
> +static void lateeoi_ack_dynirq(struct irq_data *data)
> +{
> +	struct irq_info *info = info_for_irq(data->irq);
> +	evtchn_port_t evtchn = info ? info->evtchn : 0;
> +
> +	if (VALID_EVTCHN(evtchn)) {
> +		info->eoi_pending = true;
> +		mask_evtchn(evtchn);
> +	}
> +}

Doesn't this (and the one below) need a call to clear_evtchn() to
actually ack the pending event? Otherwise I can't see what clears
the pending bit.

I tested out this patch but processes using the userspace evtchn device did
not work very well without the clear_evtchn() call.

Ross

> +
> +static void lateeoi_mask_ack_dynirq(struct irq_data *data)
> +{
> +	struct irq_info *info = info_for_irq(data->irq);
> +	evtchn_port_t evtchn = info ? info->evtchn : 0;
> +
> +	if (VALID_EVTCHN(evtchn)) {
> +		info->masked = true;
> +		info->eoi_pending = true;
> +		mask_evtchn(evtchn);
> +	}
> +}
> +


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 10:21:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 10:21:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82696.152733 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l93fG-0006kV-Ct; Mon, 08 Feb 2021 10:21:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82696.152733; Mon, 08 Feb 2021 10:21:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l93fG-0006kO-8c; Mon, 08 Feb 2021 10:21:06 +0000
Received: by outflank-mailman (input) for mailman id 82696;
 Mon, 08 Feb 2021 10:21:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=q0oM=HK=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1l93fF-0006kJ-Ey
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 10:21:05 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 469becb6-1409-43c0-8d92-9c3af7cd4608;
 Mon, 08 Feb 2021 10:21:04 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8D087AE74;
 Mon,  8 Feb 2021 10:21:03 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 469becb6-1409-43c0-8d92-9c3af7cd4608
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612779663; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=mfkAxxAokOUF/5Yrj0Gcend/eK2vJW+zxm1Juw1PG30=;
	b=GeD4RdYExGDvC/5H56o8lCjnLdv3RWi36iUlebvwdQ2k9AE3LVOM/JxaMg1+3c5WZINwpB
	YSASq02Rrd1+5IrGYsSzzox1msonlsQRKdkzzY7H6kFme5Jg5bxnwwN4GxaxOkQelRmItS
	KEdOxweuaOjwxMHasssCIdh0X8MJwU4=
Subject: Re: [PATCH 2/7] xen/events: don't unmask an event channel when an eoi
 is pending
To: Jan Beulich <jbeulich@suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, stable@vger.kernel.org,
 Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org
References: <20210206104932.29064-1-jgross@suse.com>
 <20210206104932.29064-3-jgross@suse.com>
 <7aa76e26-b6f4-fae2-861b-bcc0039ce497@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <db19fbd4-f613-b8b5-3d46-eaa674417e4f@suse.com>
Date: Mon, 8 Feb 2021 11:21:02 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <7aa76e26-b6f4-fae2-861b-bcc0039ce497@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="i8j5vHaygQB9c5oKqQXikXYebgrf8KGnM"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--i8j5vHaygQB9c5oKqQXikXYebgrf8KGnM
Content-Type: multipart/mixed; boundary="mQVnubCfralaKH811EULeoqUjdynnObGW";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, stable@vger.kernel.org,
 Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org
Message-ID: <db19fbd4-f613-b8b5-3d46-eaa674417e4f@suse.com>
Subject: Re: [PATCH 2/7] xen/events: don't unmask an event channel when an eoi
 is pending
References: <20210206104932.29064-1-jgross@suse.com>
 <20210206104932.29064-3-jgross@suse.com>
 <7aa76e26-b6f4-fae2-861b-bcc0039ce497@suse.com>
In-Reply-To: <7aa76e26-b6f4-fae2-861b-bcc0039ce497@suse.com>

--mQVnubCfralaKH811EULeoqUjdynnObGW
Content-Type: multipart/mixed;
 boundary="------------CDB33DBB3839EBD104452B3B"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------CDB33DBB3839EBD104452B3B
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 08.02.21 11:06, Jan Beulich wrote:
> On 06.02.2021 11:49, Juergen Gross wrote:
>> @@ -1798,6 +1818,29 @@ static void mask_ack_dynirq(struct irq_data *data)
>>   	ack_dynirq(data);
>>   }
>>   
>> +static void lateeoi_ack_dynirq(struct irq_data *data)
>> +{
>> +	struct irq_info *info = info_for_irq(data->irq);
>> +	evtchn_port_t evtchn = info ? info->evtchn : 0;
>> +
>> +	if (VALID_EVTCHN(evtchn)) {
>> +		info->eoi_pending = true;
>> +		mask_evtchn(evtchn);
>> +	}
>> +}
>> +
>> +static void lateeoi_mask_ack_dynirq(struct irq_data *data)
>> +{
>> +	struct irq_info *info = info_for_irq(data->irq);
>> +	evtchn_port_t evtchn = info ? info->evtchn : 0;
>> +
>> +	if (VALID_EVTCHN(evtchn)) {
>> +		info->masked = true;
>> +		info->eoi_pending = true;
>> +		mask_evtchn(evtchn);
>> +	}
>> +}
>> +
>>   static int retrigger_dynirq(struct irq_data *data)
>>   {
>>   	evtchn_port_t evtchn = evtchn_from_irq(data->irq);
>> @@ -2023,8 +2066,8 @@ static struct irq_chip xen_lateeoi_chip __read_mostly = {
>>   	.irq_mask		= disable_dynirq,
>>   	.irq_unmask		= enable_dynirq,
>>   
>> -	.irq_ack		= mask_ack_dynirq,
>> -	.irq_mask_ack		= mask_ack_dynirq,
>> +	.irq_ack		= lateeoi_ack_dynirq,
>> +	.irq_mask_ack		= lateeoi_mask_ack_dynirq,
>>   
>>   	.irq_set_affinity	= set_affinity_irq,
>>   	.irq_retrigger		= retrigger_dynirq,
>>
>
> Unlike the prior handler the two new ones don't call ack_dynirq()
> anymore, and the description doesn't give a hint towards this
> difference. As a consequence, clear_evtchn() also doesn't get
> called anymore - patch 3 adds the calls, but claims an older
> commit to have been at fault. _If_ ack_dynirq() indeed isn't to
> be called here, shouldn't the clear_evtchn() calls get added
> right here?

There was clearly too much time between writing this patch and looking
at its performance impact. :-(

Somehow I managed to overlook that I just introduced the bug here. This
OTOH explains why there are not tons of complaints with the current
implementation. :-)

Will merge patch 3 into this one.


Juergen


--------------CDB33DBB3839EBD104452B3B--

--mQVnubCfralaKH811EULeoqUjdynnObGW--


--i8j5vHaygQB9c5oKqQXikXYebgrf8KGnM--


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 10:22:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 10:22:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82697.152745 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l93h3-0006sc-Sq; Mon, 08 Feb 2021 10:22:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82697.152745; Mon, 08 Feb 2021 10:22:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l93h3-0006sV-Ox; Mon, 08 Feb 2021 10:22:57 +0000
Received: by outflank-mailman (input) for mailman id 82697;
 Mon, 08 Feb 2021 10:22:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=q0oM=HK=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1l93h2-0006sQ-2B
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 10:22:56 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 051ba6f2-065d-4349-b088-ec8421c0aabc;
 Mon, 08 Feb 2021 10:22:54 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id CF675AC6E;
 Mon,  8 Feb 2021 10:22:53 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 051ba6f2-065d-4349-b088-ec8421c0aabc
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612779774; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=60lHO1XdO2ZdH7NjubMLgmwlciqf0fRY8xZgW51WUbU=;
	b=cCI1lUOYefv5w41g8IqFaarOQrGRW26S9XVaZ29zU2W7zMtAY1e/8OPD1HsVNd0SF1KGZP
	7uBpXw8t/QrqYEkkhBLbir6WydP7kc60yyp0laO62qF5Sl11MUWmCs2GJWT84gTGeCuSpY
	vhJ0nhQHlfe1bXiekrN6PlHfZg+Rr7I=
Subject: Re: [PATCH 0/7] xen/events: bug fixes and some diagnostic aids
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
 netdev@vger.kernel.org, linux-scsi@vger.kernel.org
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, stable@vger.kernel.org,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Jens Axboe <axboe@kernel.dk>, Wei Liu <wei.liu@kernel.org>,
 Paul Durrant <paul@xen.org>, "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>
References: <20210206104932.29064-1-jgross@suse.com>
 <bd63694e-ac0c-7954-ec00-edad05f8da1c@xen.org>
 <eeb62129-d9fc-2155-0e0f-aff1fbb33fbc@suse.com>
 <fcf3181b-3efc-55f5-687c-324937b543e6@xen.org>
 <7aaeeb3d-1e1b-6166-84e9-481153811b62@suse.com>
 <6f547bb5-777a-6fc2-eba2-cccb4adfca87@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <0d623c98-a714-1639-cc53-f58ba3f08212@suse.com>
Date: Mon, 8 Feb 2021 11:22:52 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <6f547bb5-777a-6fc2-eba2-cccb4adfca87@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="arpr7hQ62FH49b0XyzuGqHY5LdYWp7uWP"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--arpr7hQ62FH49b0XyzuGqHY5LdYWp7uWP
Content-Type: multipart/mixed; boundary="8PGsFlUGasLxMz28nEmjrpbMjwk7PLL4D";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
 netdev@vger.kernel.org, linux-scsi@vger.kernel.org
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, stable@vger.kernel.org,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Jens Axboe <axboe@kernel.dk>, Wei Liu <wei.liu@kernel.org>,
 Paul Durrant <paul@xen.org>, "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>
Message-ID: <0d623c98-a714-1639-cc53-f58ba3f08212@suse.com>
Subject: Re: [PATCH 0/7] xen/events: bug fixes and some diagnostic aids
References: <20210206104932.29064-1-jgross@suse.com>
 <bd63694e-ac0c-7954-ec00-edad05f8da1c@xen.org>
 <eeb62129-d9fc-2155-0e0f-aff1fbb33fbc@suse.com>
 <fcf3181b-3efc-55f5-687c-324937b543e6@xen.org>
 <7aaeeb3d-1e1b-6166-84e9-481153811b62@suse.com>
 <6f547bb5-777a-6fc2-eba2-cccb4adfca87@xen.org>
In-Reply-To: <6f547bb5-777a-6fc2-eba2-cccb4adfca87@xen.org>

--8PGsFlUGasLxMz28nEmjrpbMjwk7PLL4D
Content-Type: multipart/mixed;
 boundary="------------4E6A363ECAD742BE1796000E"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------4E6A363ECAD742BE1796000E
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 08.02.21 10:54, Julien Grall wrote:
> 
> 
> On 08/02/2021 09:41, Jürgen Groß wrote:
>> On 08.02.21 10:11, Julien Grall wrote:
>>> Hi Juergen,
>>>
>>> On 07/02/2021 12:58, Jürgen Groß wrote:
>>>> On 06.02.21 19:46, Julien Grall wrote:
>>>>> Hi Juergen,
>>>>>
>>>>> On 06/02/2021 10:49, Juergen Gross wrote:
>>>>>> The first three patches are fixes for XSA-332. They avoid WARN
>>>>>> splats and a performance issue with interdomain events.
>>>>>
>>>>> Thanks for helping to figure out the problem. Unfortunately, I
>>>>> still reliably see the WARN splat with the latest Linux master
>>>>> (1e0d27fce010) + your first 3 patches.
>>>>>
>>>>> I am using Xen 4.11 (1c7d984645f9) and dom0 is forced to use the
>>>>> 2L events ABI.
>>>>>
>>>>> After some debugging, I think I have an idea of what went wrong.
>>>>> The problem happens when the event is initially bound from vCPU0
>>>>> to a different vCPU.
>>>>>
>>>>> From the comment in xen_rebind_evtchn_to_cpu(), we are masking
>>>>> the event to prevent it being delivered on an unexpected vCPU.
>>>>> However, I believe the following can happen:
>>>>>
>>>>> vCPU0                 | vCPU1
>>>>>                       |
>>>>>                       | Call xen_rebind_evtchn_to_cpu()
>>>>> receive event X       |
>>>>>                       | mask event X
>>>>>                       | bind to vCPU1
>>>>> <vCPU descheduled>    | unmask event X
>>>>>                       |
>>>>>                       | receive event X
>>>>>                       |
>>>>>                       | handle_edge_irq(X)
>>>>> handle_edge_irq(X)    |  -> handle_irq_event()
>>>>>                       |   -> set IRQD_IN_PROGRESS
>>>>>   -> set IRQS_PENDING |
>>>>>                       |   -> evtchn_interrupt()
>>>>>                       |   -> clear IRQD_IN_PROGRESS
>>>>>                       |  -> IRQS_PENDING is set
>>>>>                       |  -> handle_irq_event()
>>>>>                       |   -> evtchn_interrupt()
>>>>>                       |     -> WARN()
>>>>>                       |
>>>>>
>>>>> All the lateeoi handlers expect ONESHOT semantics, and
>>>>> evtchn_interrupt() doesn't tolerate any deviation.
>>>>>
>>>>> I think the problem was introduced by 7f874a0447a9 ("xen/events:
>>>>> fix lateeoi irq acknowledgment") because the interrupt was
>>>>> disabled previously. Therefore we wouldn't do another iteration
>>>>> in handle_edge_irq().
>>>>
>>>> I think you picked the wrong commit for blaming, as this is just
>>>> the last patch of the three patches you were testing.
>>>
>>> I actually found the right commit for blaming, but I copied the
>>> information from the wrong shell :/. The bug was introduced by:
>>>
>>> c44b849cee8c ("xen/events: switch user event channels to lateeoi
>>> model")
>>>
>>>>
>>>>> Aside from the handlers, I think it may impact the deferred-EOI
>>>>> mitigation, because in theory a 3rd vCPU could join the party
>>>>> (say vCPU A migrates the event from vCPU B to vCPU C). So
>>>>> info->{eoi_cpu, irq_epoch, eoi_time} could possibly get mangled?
>>>>>
>>>>> For a fix, we may want to consider holding evtchn_rwlock with
>>>>> write permission. Although, I am not 100% sure this is going to
>>>>> prevent everything.
>>>>
>>>> It will make things worse, as it would violate the locking
>>>> hierarchy (xen_rebind_evtchn_to_cpu() is called with the IRQ-desc
>>>> lock held).
>>>
>>> Ah, right.
>>>
>>>>
>>>> On first glance I think we'll need a 3rd masking state
>>>> ("temporarily masked") in the second patch in order to avoid a
>>>> race with lateeoi.
>>>>
>>>> In order to avoid the race you outlined above, we need an "event
>>>> is being handled" indicator checked via test_and_set() semantics
>>>> in handle_irq_for_port(), reset only when calling clear_evtchn().
>>>
>>> It feels like we are trying to work around the IRQ flow we are
>>> using (i.e. handle_edge_irq()).
>>
>> I'm not really sure this is the main problem here. According to your
>> analysis the main problem occurs when handling the event, not when
>> handling the IRQ: the event is being received on two vCPUs.
> 
> I don't think we can easily divide the two, because we rely on the
> IRQ framework to handle the lifecycle of the event. So...
> 
>>
>> Our problem isn't due to the IRQ still being pending, but due to it
>> being raised again, which would happen the same way for a one-shot
>> IRQ.
> 
> ... I don't really see how the difference matters here. The idea is
> to re-use what already exists rather than trying to re-invent the
> wheel with an extra lock (or whatever we can come up with).

The difference is that the race occurs _before_ any IRQ is
involved. So I don't see how modifying the IRQ handling would help.


Juergen


--------------4E6A363ECAD742BE1796000E--

--8PGsFlUGasLxMz28nEmjrpbMjwk7PLL4D--


--arpr7hQ62FH49b0XyzuGqHY5LdYWp7uWP--


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 10:23:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 10:23:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82698.152757 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l93hM-0006xU-4X; Mon, 08 Feb 2021 10:23:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82698.152757; Mon, 08 Feb 2021 10:23:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l93hM-0006xN-10; Mon, 08 Feb 2021 10:23:16 +0000
Received: by outflank-mailman (input) for mailman id 82698;
 Mon, 08 Feb 2021 10:23:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0k8o=HK=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l93hK-0006x6-2Q
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 10:23:14 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2a51f082-4255-4d3b-9ff5-b73af9977752;
 Mon, 08 Feb 2021 10:23:13 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2a51f082-4255-4d3b-9ff5-b73af9977752
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612779793;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=4jEKH8PX/ONb1EVKOuyR6l6EL4AdOivu1ZjE6u1YryU=;
  b=Ba3V6F1EKMmm/GF60/6TPEXB/jmnXyA3S9prYmuIr9ASnbyWFaVUYBL1
   6gZh2DBB912ipDyscAUqHSRn67uMxp6/t1N69t7lxe6FlJpQw5+xiB6sF
   iWZQht2xopgUb+BTysejnxf1In5pGB4XPuKBdP/mORxnVBDUi6EX4woVm
   w=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: vb8cVxuP5RB4sCqF/4CwEu/J3PK3WDMh9MFw62YblUyjsjYlMcoBh5r2h1M4Zh/ufLscsfqUXJ
 lV60z1m4FjpgKmUGSaqttXn9WJMO+9HKRdh4saQi9EauYIWQaITZFP2GTqjW/iw3GFdRpyVwa/
 +o+WxNLj19tqntFfZnh4Eo/V7sdHMvcpQNGwq3cFHmgtjBHUPkTDD85OwN6TKXxLPbvahOqhzy
 +MjGE/3u3dkGjWGIRfyZ/9D6BLB/V5RXerC1XJIKodKi/08h0Sr5HdLuPSdNFSFX196CFTZ4r1
 4jI=
X-SBRS: 5.2
X-MesageID: 36755069
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,161,1610427600"; 
   d="scan'208";a="36755069"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=a+qaZxtdKsFlFU5uTAjyUnLbFZM69CjskEiNGjCn36GZcpNbu+8pdnhRz1yNoibVT4cQTTKpimTMS/l6DPwTyjKqcusIz+Y38U2p7A1k4scVLOYEbjzwJbXeecNMt4uE0iH+q6xfRqpYiyA8gGHcKip9IyAlrVQuiIXw437iJnNRdUoxIxi56LsG47+EKC2s1SMHQRNKzyOX4BX7iJyVvlS9HffpBKa5Hlk3qHsDPQxFSu8R53xsAz1+XtN6RMAPIuJtctXlqGOlpLTWIlfWD2+2W9OvArcQ4bJe6ooCNHfNmK/36nyydEz80HUDWXWSWxuJ1Q1DIGceWpI8UQIsIw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=4jEKH8PX/ONb1EVKOuyR6l6EL4AdOivu1ZjE6u1YryU=;
 b=Tf7UGgbXZXd/G52ttsWPKl4fwvUG1s8lO4aWfq14WLnd5hZZfgFPjwRe9qdFhzS6EjkudPmeSx79jWRL7mpKhXmJLGxUXhQ/XYj7rPqFR/DH1pJFpl4THjzQwfhLj9TZzBTlhVBD4u18JlvLjlua9xY5mxA9llcrI2JNOFTdPsmhBqKjEQAaE+jx3olUAxVMwBO6gYNq+xWlOBp/48j6ZtM5WwtKjX9/gn9qSAm8oTkAExHFohD20nMyZLTFxI4h9DnpQxybBJ1b8X5RFh73sWpFKw65iEa4qD/qqqgg3ao1YblJuZP943umdeFbcTNRkIlL+s/p5Apbg6w8hhJVew==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=4jEKH8PX/ONb1EVKOuyR6l6EL4AdOivu1ZjE6u1YryU=;
 b=S2OtcvTiSGn6vZWinWZAhBolgHOrafmijbxh7RaLQXFiROYzwo7HBopYYgxsTBtxTV2gOkDcEPlAe72w5vkPDewkO/puQdQ2S/O4JzAsUq+sezVsA7eZTI+3lyJ+Q2VGZTkHuVvJf6UP5EWGKuVo7pvjtXr6MtK5KqkjEtiwEik=
Subject: Re: [PATCH 6/7] xen/evtch: use smp barriers for user event ring
To: Jan Beulich <jbeulich@suse.com>
CC: Boris Ostrovsky <boris.ostrovsky@oracle.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Juergen Gross <jgross@suse.com>,
	<linux-kernel@vger.kernel.org>, <xen-devel@lists.xenproject.org>
References: <20210206104932.29064-1-jgross@suse.com>
 <20210206104932.29064-7-jgross@suse.com>
 <2d354cad-3413-a416-0bc1-01d03e1f41cd@citrix.com>
 <dfa40af0-2706-2540-c59a-6593c1c0553a@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <e68f76a5-27ce-a6d8-88a3-1e8fa1c30659@citrix.com>
Date: Mon, 8 Feb 2021 10:23:02 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
In-Reply-To: <dfa40af0-2706-2540-c59a-6593c1c0553a@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0317.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:197::16) To BYAPR03MB4728.namprd03.prod.outlook.com
 (2603:10b6:a03:13a::24)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: d8d14eb3-f60b-43c7-d20b-08d8cc1b8797
X-MS-TrafficTypeDiagnostic: SJ0PR03MB5518:
X-Microsoft-Antispam-PRVS: <SJ0PR03MB5518953EA3674ED5DBFE4D7FBA8F9@SJ0PR03MB5518.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:6108;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: d8d14eb3-f60b-43c7-d20b-08d8cc1b8797
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB4728.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 08 Feb 2021 10:23:08.5241
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: jnVbszVxZc7qz/ORwnAZE91BQ61RIHMIdcBz+frg+U6Zd3YOs6JrfOk5t9rzqB71Wt44ubMzCS67F+W55dMOySFEJZVOqaLXP4nRq5YfBxY=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5518
X-OriginatorOrg: citrix.com

On 08/02/2021 09:50, Jan Beulich wrote:
> On 08.02.2021 10:44, Andrew Cooper wrote:
>> On 06/02/2021 10:49, Juergen Gross wrote:
>>> The ring buffer for user events is used in the local system only, so
>>> smp barriers are fine for ensuring consistency.
>>>
>>> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> These need to be virt_* to not break in UP builds (on non-x86).
> Initially I thought so, too, but isn't the sole vCPU of such a
> VM getting re-scheduled to a different pCPU in the hypervisor
> an implied barrier anyway?

Yes, but that isn't relevant to why UP builds break.

smp_*() degrade to compiler barriers in UP builds, and while that's
mostly fine for x86 read/write, it's not fine for ARM barriers.

virt_*() exist specifically to be smp_*() which don't degrade to broken
in UP builds.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 10:25:41 2021
Subject: Re: [PATCH 6/7] xen/evtch: use smp barriers for user event ring
To: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, linux-kernel@vger.kernel.org,
 xen-devel@lists.xenproject.org
References: <20210206104932.29064-1-jgross@suse.com>
 <20210206104932.29064-7-jgross@suse.com>
 <2d354cad-3413-a416-0bc1-01d03e1f41cd@citrix.com>
 <dfa40af0-2706-2540-c59a-6593c1c0553a@suse.com>
 <e68f76a5-27ce-a6d8-88a3-1e8fa1c30659@citrix.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <60ca5c18-bbf5-5d3d-1af6-f4692077c44e@suse.com>
Date: Mon, 8 Feb 2021 11:25:36 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <e68f76a5-27ce-a6d8-88a3-1e8fa1c30659@citrix.com>

On 08.02.21 11:23, Andrew Cooper wrote:
> On 08/02/2021 09:50, Jan Beulich wrote:
>> On 08.02.2021 10:44, Andrew Cooper wrote:
>>> On 06/02/2021 10:49, Juergen Gross wrote:
>>>> The ring buffer for user events is used in the local system only, so
>>>> smp barriers are fine for ensuring consistency.
>>>>
>>>> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>> These need to be virt_* to not break in UP builds (on non-x86).
>> Initially I thought so, too, but isn't the sole vCPU of such a
>> VM getting re-scheduled to a different pCPU in the hypervisor
>> an implied barrier anyway?
>
> Yes, but that isn't relevant to why UP builds break.
>
> smp_*() degrade to compiler barriers in UP builds, and while that's
> mostly fine for x86 read/write, it's not fine for ARM barriers.
>
> virt_*() exist specifically to be smp_*() which don't degrade to broken
> in UP builds.

But the barrier is really only necessary to serialize accesses within
the guest against each other. There is no party outside the guest involved.

In case you are right this would mean that UP guests are all broken on
Arm.


Juergen



From xen-devel-bounces@lists.xenproject.org Mon Feb 08 10:31:51 2021
Subject: Re: [PATCH 6/7] xen/evtch: use smp barriers for user event ring
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>, Jan Beulich
	<jbeulich@suse.com>
CC: Boris Ostrovsky <boris.ostrovsky@oracle.com>, Stefano Stabellini
	<sstabellini@kernel.org>, <linux-kernel@vger.kernel.org>,
	<xen-devel@lists.xenproject.org>
References: <20210206104932.29064-1-jgross@suse.com>
 <20210206104932.29064-7-jgross@suse.com>
 <2d354cad-3413-a416-0bc1-01d03e1f41cd@citrix.com>
 <dfa40af0-2706-2540-c59a-6593c1c0553a@suse.com>
 <e68f76a5-27ce-a6d8-88a3-1e8fa1c30659@citrix.com>
 <60ca5c18-bbf5-5d3d-1af6-f4692077c44e@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <bac45a67-087b-636f-3a74-db266f970cda@citrix.com>
Date: Mon, 8 Feb 2021 10:31:33 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
In-Reply-To: <60ca5c18-bbf5-5d3d-1af6-f4692077c44e@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB

On 08/02/2021 10:25, Jürgen Groß wrote:
> On 08.02.21 11:23, Andrew Cooper wrote:
>> On 08/02/2021 09:50, Jan Beulich wrote:
>>> On 08.02.2021 10:44, Andrew Cooper wrote:
>>>> On 06/02/2021 10:49, Juergen Gross wrote:
>>>>> The ring buffer for user events is used in the local system only, so
>>>>> smp barriers are fine for ensuring consistency.
>>>>>
>>>>> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>> These need to be virt_* to not break in UP builds (on non-x86).
>>> Initially I thought so, too, but isn't the sole vCPU of such a
>>> VM getting re-scheduled to a different pCPU in the hypervisor
>>> an implied barrier anyway?
>>
>> Yes, but that isn't relevant to why UP builds break.
>>
>> smp_*() degrade to compiler barriers in UP builds, and while that's
>> mostly fine for x86 read/write, it's not fine for ARM barriers.
>>
>> virt_*() exist specifically to be smp_*() which don't degrade to broken
>> in UP builds.
>
> But the barrier is really only necessary to serialize accesses within
> the guest against each other. There is no guest outside party involved.
>
> In case you are right this would mean that UP guests are all broken on
> Arm.

Oh - right.  This is a ring between the interrupt handler and a task.
Not a ring between the guest and something else.

In which case smp_*() are correct.  Sorry for the noise.
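A minimal sketch of that interrupt-handler/task pairing (hypothetical
trimmed-down types and stand-in barrier macros; the real code is in
drivers/xen/evtchn.c):

```c
/* Stand-ins for the kernel barrier macros, for illustration only. */
#define smp_wmb()  __sync_synchronize()
#define smp_rmb()  __sync_synchronize()

#define RING_SIZE 8u
#define RING_MASK (RING_SIZE - 1)

struct user_ring {                /* hypothetical, trimmed-down */
    unsigned int ring[RING_SIZE];
    unsigned int ring_cons;       /* advanced by the reading task */
    unsigned int ring_prod;       /* advanced by the interrupt handler */
};

/* Producer (interrupt handler): store the event, then publish the
 * index, so a consumer that sees the new index also sees the data. */
static void ring_produce(struct user_ring *u, unsigned int port)
{
    u->ring[u->ring_prod & RING_MASK] = port;
    smp_wmb();
    u->ring_prod++;
}

/* Consumer (task): read the index first, then the data behind it. */
static int ring_consume(struct user_ring *u, unsigned int *port)
{
    if (u->ring_cons == u->ring_prod)
        return 0;                 /* empty */
    smp_rmb();
    *port = u->ring[u->ring_cons & RING_MASK];
    u->ring_cons++;
    return 1;
}
```

Both sides run inside the same guest, so ordering between its (v)CPUs,
which is what smp_*() provides, is exactly the guarantee needed.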

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 10:36:22 2021
Subject: Re: [PATCH 6/7] xen/evtch: use smp barriers for user event ring
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Juergen Gross
 <jgross@suse.com>, linux-kernel@vger.kernel.org,
 xen-devel@lists.xenproject.org
References: <20210206104932.29064-1-jgross@suse.com>
 <20210206104932.29064-7-jgross@suse.com>
 <2d354cad-3413-a416-0bc1-01d03e1f41cd@citrix.com>
 <dfa40af0-2706-2540-c59a-6593c1c0553a@suse.com>
 <e68f76a5-27ce-a6d8-88a3-1e8fa1c30659@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <def49076-f943-9418-86b0-2aafbd0bf7de@suse.com>
Date: Mon, 8 Feb 2021 11:36:17 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <e68f76a5-27ce-a6d8-88a3-1e8fa1c30659@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 08.02.2021 11:23, Andrew Cooper wrote:
> On 08/02/2021 09:50, Jan Beulich wrote:
>> On 08.02.2021 10:44, Andrew Cooper wrote:
>>> On 06/02/2021 10:49, Juergen Gross wrote:
>>>> The ring buffer for user events is used in the local system only, so
>>>> smp barriers are fine for ensuring consistency.
>>>>
>>>> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>> These need to be virt_* to not break in UP builds (on non-x86).
>> Initially I thought so, too, but isn't the sole vCPU of such a
>> VM getting re-scheduled to a different pCPU in the hypervisor
>> an implied barrier anyway?
> 
> Yes, but that isn't relevant to why UP builds break.
> 
> smp_*() degrade to compiler barriers in UP builds, and while that's
> mostly fine for x86 read/write, it's not fine for ARM barriers.

Hmm, I may not know enough of Arm's memory model - are you saying
Arm CPUs aren't even self-coherent, i.e. later operations (e.g.
the consuming of ring contents) won't observe earlier ones (the
updating of ring contents) when only a single physical CPU is
involved in all of this? (I did mention the hypervisor level
context switch simply because that's the only way multiple CPUs
can get involved.)

Jan


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 10:41:12 2021
To: Jan Beulich <jbeulich@suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, linux-kernel@vger.kernel.org,
 xen-devel@lists.xenproject.org
References: <20210206104932.29064-1-jgross@suse.com>
 <20210206104932.29064-8-jgross@suse.com>
 <72334160-cffe-2d8a-23b7-2ea9ab1d803a@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [PATCH 7/7] xen/evtchn: read producer index only once
Message-ID: <626f500a-494a-0141-7bf3-94fb86b47ed4@suse.com>
Date: Mon, 8 Feb 2021 11:41:08 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <72334160-cffe-2d8a-23b7-2ea9ab1d803a@suse.com>


On 08.02.21 10:48, Jan Beulich wrote:
> On 06.02.2021 11:49, Juergen Gross wrote:
>> In evtchn_read() use READ_ONCE() for reading the producer index in
>> order to avoid the compiler generating multiple accesses.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>>   drivers/xen/evtchn.c | 2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/drivers/xen/evtchn.c b/drivers/xen/evtchn.c
>> index 421382c73d88..f6b199b597bf 100644
>> --- a/drivers/xen/evtchn.c
>> +++ b/drivers/xen/evtchn.c
>> @@ -211,7 +211,7 @@ static ssize_t evtchn_read(struct file *file, char=
 __user *buf,
>>   			goto unlock_out;
>>  =20
>>   		c =3D u->ring_cons;
>> -		p =3D u->ring_prod;
>> +		p =3D READ_ONCE(u->ring_prod);
>>   		if (c !=3D p)
>>   			break;
>=20
> Why only here and not also in
>=20
> 		rc =3D wait_event_interruptible(u->evtchn_wait,
> 					      u->ring_cons !=3D u->ring_prod);
>=20
> or in evtchn_poll()? I understand it's not needed when
> ring_prod_lock is held, but that's not the case in the two
> afaics named places. Plus isn't the same then true for
> ring_cons and ring_cons_mutex, i.e. aren't the two named
> places plus evtchn_interrupt() also in need of READ_ONCE()
> for ring_cons?

The problem solved here is the further processing that uses "p"
multiple times: "p" must not be silently replaced with u->ring_prod by
the compiler. So I probably should reword the commit message to say:

... in order to not allow the compiler to refetch p.


Juergen

--------------1F70BFF80D4B87DC7A855CC4--

--vm53xC4dDhJC8M4kx3W85gteeQw4eJHxu--

--IupQnLRuWdSKH5sgS2OBr8Sl0mDFNikL9--


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 10:41:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 10:41:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82716.152811 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l93yU-0000qv-Hh; Mon, 08 Feb 2021 10:40:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82716.152811; Mon, 08 Feb 2021 10:40:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l93yU-0000qo-EZ; Mon, 08 Feb 2021 10:40:58 +0000
Received: by outflank-mailman (input) for mailman id 82716;
 Mon, 08 Feb 2021 10:40:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1l93yT-0000qj-3S
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 10:40:57 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l93yJ-0001Vr-NE; Mon, 08 Feb 2021 10:40:47 +0000
Received: from [54.239.6.177] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l93yJ-0004LV-Ej; Mon, 08 Feb 2021 10:40:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=IQPlMuHphzCxemau7EPG8KdIBVritDvW1xuyCMc3fhU=; b=sC+AR9iTEqeFWbDuKWB5Y35WRU
	0RbDCYD8HqDZ5+YavvWLxIKZjjBgShWRRynceb425BC8VjwpIVUA9HMJAGLCISgOYcuyi1zxNzhqU
	swR71F4tJ9r/e/ArJUxgAMKjDK+qn++/QfBSK2/QATA/oMqTo/0taQ58egBGhScfddCo=;
Subject: Re: [PATCH 0/7] xen/events: bug fixes and some diagnostic aids
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 linux-block@vger.kernel.org, netdev@vger.kernel.org,
 linux-scsi@vger.kernel.org
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, stable@vger.kernel.org,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Jens Axboe <axboe@kernel.dk>, Wei Liu <wei.liu@kernel.org>,
 Paul Durrant <paul@xen.org>, "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>
References: <20210206104932.29064-1-jgross@suse.com>
 <bd63694e-ac0c-7954-ec00-edad05f8da1c@xen.org>
 <eeb62129-d9fc-2155-0e0f-aff1fbb33fbc@suse.com>
 <fcf3181b-3efc-55f5-687c-324937b543e6@xen.org>
 <7aaeeb3d-1e1b-6166-84e9-481153811b62@suse.com>
 <6f547bb5-777a-6fc2-eba2-cccb4adfca87@xen.org>
 <0d623c98-a714-1639-cc53-f58ba3f08212@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <28399fd1-9fe8-f31a-6ee8-e78de567155b@xen.org>
Date: Mon, 8 Feb 2021 10:40:44 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <0d623c98-a714-1639-cc53-f58ba3f08212@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Juergen,

On 08/02/2021 10:22, Jürgen Groß wrote:
> On 08.02.21 10:54, Julien Grall wrote:
>> ... I don't really see how the difference matters here. The idea is to 
>> re-use what's already existing rather than trying to re-invent the 
>> wheel with an extra lock (or whatever we can come up with).
> 
> The difference is that the race is occurring _before_ any IRQ is
> involved. So I don't see how modification of IRQ handling would help.

Roughly, our current IRQ handling flow (handle_edge_irq()) looks like:

if ( irq in progress )
{
   set IRQS_PENDING
   return;
}

do
{
   clear IRQS_PENDING
   handle_irq()
} while (IRQS_PENDING is set)

An IRQ handling flow like handle_fasteoi_irq() looks like:

if ( irq in progress )
   return;

handle_irq()

The latter flow would catch "spurious" interrupts and ignore them, so it 
would nicely handle the race when changing the event affinity.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 10:45:20 2021
Subject: Re: [PATCH 6/7] xen/evtch: use smp barriers for user event ring
To: Jan Beulich <jbeulich@suse.com>
CC: Boris Ostrovsky <boris.ostrovsky@oracle.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Juergen Gross <jgross@suse.com>,
	<linux-kernel@vger.kernel.org>, <xen-devel@lists.xenproject.org>
References: <20210206104932.29064-1-jgross@suse.com>
 <20210206104932.29064-7-jgross@suse.com>
 <2d354cad-3413-a416-0bc1-01d03e1f41cd@citrix.com>
 <dfa40af0-2706-2540-c59a-6593c1c0553a@suse.com>
 <e68f76a5-27ce-a6d8-88a3-1e8fa1c30659@citrix.com>
 <def49076-f943-9418-86b0-2aafbd0bf7de@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <f2a71421-7169-f3c8-9799-66dd8e07459d@citrix.com>
Date: Mon, 8 Feb 2021 10:45:07 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
In-Reply-To: <def49076-f943-9418-86b0-2aafbd0bf7de@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
MIME-Version: 1.0

On 08/02/2021 10:36, Jan Beulich wrote:
> On 08.02.2021 11:23, Andrew Cooper wrote:
>> On 08/02/2021 09:50, Jan Beulich wrote:
>>> On 08.02.2021 10:44, Andrew Cooper wrote:
>>>> On 06/02/2021 10:49, Juergen Gross wrote:
>>>>> The ring buffer for user events is used in the local system only, so
>>>>> smp barriers are fine for ensuring consistency.
>>>>>
>>>>> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>> These need to be virt_* to not break in UP builds (on non-x86).
>>> Initially I thought so, too, but isn't the sole vCPU of such a
>>> VM getting re-scheduled to a different pCPU in the hypervisor
>>> an implied barrier anyway?
>> Yes, but that isn't relevant to why UP builds break.
>>
>> smp_*() degrade to compiler barriers in UP builds, and while that's
>> mostly fine for x86 read/write, it's not fine for ARM barriers.
> Hmm, I may not know enough of Arm's memory model - are you saying
> Arm CPUs aren't even self-coherent, i.e. later operations (e.g.
> the consuming of ring contents) won't observe earlier ones (the
> updating of ring contents) when only a single physical CPU is
> involved in all of this? (I did mention the hypervisor level
> context switch simply because that's the only way multiple CPUs
> can get involved.)

In this case, no - see my later reply. I'd mistaken the two ends of
this ring. As they're both inside the same guest, it's fine.

For cases such as the xenstore/console ring, the semantics required
really are SMP, even if the guest is built UP. These cases really will
break when smp_rmb() etc. degrade to just a compiler barrier on
architectures with weaker semantics than x86.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 10:46:00 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159113-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 159113: trouble: blocked/broken/preparing/queued
X-Osstest-Failures:
    linux-5.4:build-amd64-xsm:<job status>:broken:regression
    linux-5.4:build-arm64:<job status>:broken:regression
    linux-5.4:build-arm64-pvops:<job status>:broken:regression
    linux-5.4:build-arm64-xsm:<job status>:broken:regression
    linux-5.4:build-armhf:<job status>:broken:regression
    linux-5.4:build-i386:<job status>:broken:regression
    linux-5.4:build-i386-pvops:<job status>:broken:regression
    linux-5.4:build-i386-xsm:<job status>:broken:regression
    linux-5.4:build-i386-pvops:host-install(4):broken:regression
    linux-5.4:build-i386-xsm:host-install(4):broken:regression
    linux-5.4:build-i386:host-install(4):broken:regression
    linux-5.4:build-arm64-xsm:host-install(4):broken:regression
    linux-5.4:build-arm64-pvops:host-install(4):broken:regression
    linux-5.4:build-arm64:host-install(4):broken:regression
    linux-5.4:build-amd64-xsm:host-install(4):broken:regression
    linux-5.4:build-armhf:host-install(4):broken:regression
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:<none executed>:queued:regression
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:<none executed>:queued:regression
    linux-5.4:test-amd64-i386-xl-qemuu-debianhvm-amd64:<none executed>:queued:regression
    linux-5.4:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:<none executed>:queued:regression
    linux-5.4:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:<none executed>:queued:regression
    linux-5.4:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:<none executed>:queued:regression
    linux-5.4:test-amd64-i386-xl-qemuu-ovmf-amd64:<none executed>:queued:regression
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:<none executed>:queued:regression
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:<none executed>:queued:regression
    linux-5.4:test-amd64-i386-xl-raw:<none executed>:queued:regression
    linux-5.4:test-armhf-armhf-examine:<none executed>:queued:regression
    linux-5.4:test-armhf-armhf-libvirt:<none executed>:queued:regression
    linux-5.4:test-armhf-armhf-libvirt-raw:<none executed>:queued:regression
    linux-5.4:test-armhf-armhf-xl:<none executed>:queued:regression
    linux-5.4:test-armhf-armhf-xl-arndale:<none executed>:queued:regression
    linux-5.4:test-armhf-armhf-xl-credit1:<none executed>:queued:regression
    linux-5.4:test-armhf-armhf-xl-credit2:<none executed>:queued:regression
    linux-5.4:test-armhf-armhf-xl-cubietruck:<none executed>:queued:regression
    linux-5.4:test-armhf-armhf-xl-multivcpu:<none executed>:queued:regression
    linux-5.4:test-armhf-armhf-xl-rtds:<none executed>:queued:regression
    linux-5.4:test-armhf-armhf-xl-vhd:<none executed>:queued:regression
    linux-5.4:test-amd64-i386-examine:<none executed>:queued:regression
    linux-5.4:test-amd64-coresched-i386-xl:<none executed>:queued:regression
    linux-5.4:test-amd64-coresched-amd64-xl:<none executed>:queued:regression
    linux-5.4:test-amd64-amd64-xl-xsm:<none executed>:queued:regression
    linux-5.4:test-amd64-amd64-xl-shadow:<none executed>:queued:regression
    linux-5.4:test-amd64-amd64-xl-rtds:<none executed>:queued:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:<none executed>:queued:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:<none executed>:queued:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-ovmf-amd64:<none executed>:queued:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:<none executed>:queued:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:<none executed>:queued:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:<none executed>:queued:regression
    linux-5.4:build-amd64-libvirt:<none executed>:queued:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-amd64:<none executed>:queued:regression
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:<none executed>:queued:regression
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:<none executed>:queued:regression
    linux-5.4:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:<none executed>:queued:regression
    linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:<none executed>:queued:regression
    linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-amd64:<none executed>:queued:regression
    linux-5.4:test-amd64-amd64-xl-qcow2:<none executed>:queued:regression
    linux-5.4:test-amd64-amd64-amd64-pvgrub:<none executed>:queued:regression
    linux-5.4:test-amd64-amd64-xl-pvshim:<none executed>:queued:regression
    linux-5.4:test-amd64-amd64-dom0pvh-xl-amd:<none executed>:queued:regression
    linux-5.4:test-amd64-amd64-xl-pvhv2-intel:<none executed>:queued:regression
    linux-5.4:test-amd64-amd64-dom0pvh-xl-intel:<none executed>:queued:regression
    linux-5.4:test-amd64-amd64-examine:<none executed>:queued:regression
    linux-5.4:test-amd64-amd64-i386-pvgrub:<none executed>:queued:regression
    linux-5.4:test-amd64-amd64-xl-pvhv2-amd:<none executed>:queued:regression
    linux-5.4:test-amd64-amd64-libvirt:<none executed>:queued:regression
    linux-5.4:test-amd64-amd64-libvirt-pair:<none executed>:queued:regression
    linux-5.4:test-amd64-amd64-xl-multivcpu:<none executed>:queued:regression
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:<none executed>:queued:regression
    linux-5.4:test-amd64-amd64-libvirt-vhd:<none executed>:queued:regression
    linux-5.4:test-amd64-amd64-xl-credit2:<none executed>:queued:regression
    linux-5.4:test-amd64-amd64-libvirt-xsm:<none executed>:queued:regression
    linux-5.4:test-amd64-amd64-pair:<none executed>:queued:regression
    linux-5.4:test-amd64-amd64-xl-credit1:<none executed>:queued:regression
    linux-5.4:test-amd64-amd64-pygrub:<none executed>:queued:regression
    linux-5.4:test-amd64-amd64-qemuu-freebsd11-amd64:<none executed>:queued:regression
    linux-5.4:test-amd64-amd64-xl:<none executed>:queued:regression
    linux-5.4:test-amd64-amd64-qemuu-freebsd12-amd64:<none executed>:queued:regression
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:<none executed>:queued:regression
    linux-5.4:test-amd64-amd64-qemuu-nested-intel:<none executed>:queued:regression
    linux-5.4:test-amd64-i386-xl-shadow:<none executed>:queued:regression
    linux-5.4:test-amd64-i386-xl-xsm:<none executed>:queued:regression
    linux-5.4:test-amd64-i386-freebsd10-amd64:<none executed>:queued:regression
    linux-5.4:test-amd64-i386-freebsd10-i386:<none executed>:queued:regression
    linux-5.4:test-amd64-i386-libvirt:<none executed>:queued:regression
    linux-5.4:test-amd64-i386-libvirt-pair:<none executed>:queued:regression
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:<none executed>:queued:regression
    linux-5.4:test-amd64-i386-libvirt-xsm:<none executed>:queued:regression
    linux-5.4:test-amd64-i386-pair:<none executed>:queued:regression
    linux-5.4:test-amd64-i386-qemut-rhel6hvm-amd:<none executed>:queued:regression
    linux-5.4:test-amd64-i386-qemut-rhel6hvm-intel:<none executed>:queued:regression
    linux-5.4:test-amd64-i386-qemuu-rhel6hvm-amd:<none executed>:queued:regression
    linux-5.4:test-amd64-i386-qemuu-rhel6hvm-intel:<none executed>:queued:regression
    linux-5.4:test-amd64-i386-xl:<none executed>:queued:regression
    linux-5.4:test-amd64-i386-xl-pvshim:<none executed>:queued:regression
    linux-5.4:test-amd64-i386-xl-qemut-debianhvm-amd64:<none executed>:queued:regression
    linux-5.4:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:<none executed>:queued:regression
    linux-5.4:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:<none executed>:queued:regression
    linux-5.4:build-amd64-xsm:STARTING:running:regression
    linux-5.4:build-amd64:hosts-allocate:running:regression
    linux-5.4:build-amd64-pvops:hosts-allocate:running:regression
    linux-5.4:build-armhf-pvops:hosts-allocate:running:regression
    linux-5.4:build-amd64-xsm:syslog-server:running:regression
    linux-5.4:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-5.4:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    linux-5.4:build-arm64-libvirt:build-check(1):blocked:nonblocking
    linux-5.4:build-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-5.4:build-i386-libvirt:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    linux=d4716ee8751bf8dabf5872ba008124a0979a5f94
X-Osstest-Versions-That:
    linux=a829146c3fdcf6d0b76d9c54556a223820f1f73b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 08 Feb 2021 10:45:58 +0000

flight 159113 linux-5.4 running [real]
http://logs.test-lab.xenproject.org/osstest/logs/159113/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm                 <job status>                 broken
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-armhf                     <job status>                 broken
 build-i386                      <job status>                 broken
 build-i386-pvops                <job status>                 broken
 build-i386-xsm                  <job status>                 broken
 build-i386-pvops              4 host-install(4)        broken REGR. vs. 158387
 build-i386-xsm                4 host-install(4)        broken REGR. vs. 158387
 build-i386                    4 host-install(4)        broken REGR. vs. 158387
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 158387
 build-arm64-pvops             4 host-install(4)        broken REGR. vs. 158387
 build-arm64                   4 host-install(4)        broken REGR. vs. 158387
 build-amd64-xsm               4 host-install(4)        broken REGR. vs. 158387
 build-armhf                   4 host-install(4)        broken REGR. vs. 158387
 test-amd64-i386-xl-qemut-win7-amd64    <none executed>              queued
 test-amd64-i386-xl-qemut-ws16-amd64    <none executed>              queued
 test-amd64-i386-xl-qemuu-debianhvm-amd64    <none executed>             queued
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow    <none executed>      queued
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm    <none executed>          queued
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict    <none executed> queued
 test-amd64-i386-xl-qemuu-ovmf-amd64    <none executed>              queued
 test-amd64-i386-xl-qemuu-win7-amd64    <none executed>              queued
 test-amd64-i386-xl-qemuu-ws16-amd64    <none executed>              queued
 test-amd64-i386-xl-raw          <none executed>              queued
 test-armhf-armhf-examine        <none executed>              queued
 test-armhf-armhf-libvirt        <none executed>              queued
 test-armhf-armhf-libvirt-raw    <none executed>              queued
 test-armhf-armhf-xl             <none executed>              queued
 test-armhf-armhf-xl-arndale     <none executed>              queued
 test-armhf-armhf-xl-credit1     <none executed>              queued
 test-armhf-armhf-xl-credit2     <none executed>              queued
 test-armhf-armhf-xl-cubietruck    <none executed>              queued
 test-armhf-armhf-xl-multivcpu    <none executed>              queued
 test-armhf-armhf-xl-rtds        <none executed>              queued
 test-armhf-armhf-xl-vhd         <none executed>              queued
 test-amd64-i386-examine         <none executed>              queued
 test-amd64-coresched-i386-xl    <none executed>              queued
 test-amd64-coresched-amd64-xl    <none executed>              queued
 test-amd64-amd64-xl-xsm         <none executed>              queued
 test-amd64-amd64-xl-shadow      <none executed>              queued
 test-amd64-amd64-xl-rtds        <none executed>              queued
 test-amd64-amd64-xl-qemuu-ws16-amd64    <none executed>              queued
 test-amd64-amd64-xl-qemuu-win7-amd64    <none executed>              queued
 test-amd64-amd64-xl-qemuu-ovmf-amd64    <none executed>              queued
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict   <none executed> queued
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm    <none executed>         queued
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow    <none executed>     queued
 build-amd64-libvirt             <none executed>              queued
 test-amd64-amd64-xl-qemuu-debianhvm-amd64    <none executed>            queued
 test-amd64-amd64-xl-qemut-ws16-amd64    <none executed>              queued
 test-amd64-amd64-xl-qemut-win7-amd64    <none executed>              queued
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm   <none executed> queued
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm    <none executed>         queued
 test-amd64-amd64-xl-qemut-debianhvm-amd64    <none executed>            queued
 test-amd64-amd64-xl-qcow2       <none executed>              queued
 test-amd64-amd64-amd64-pvgrub    <none executed>              queued
 test-amd64-amd64-xl-pvshim      <none executed>              queued
 test-amd64-amd64-dom0pvh-xl-amd    <none executed>              queued
 test-amd64-amd64-xl-pvhv2-intel    <none executed>              queued
 test-amd64-amd64-dom0pvh-xl-intel    <none executed>              queued
 test-amd64-amd64-examine        <none executed>              queued
 test-amd64-amd64-i386-pvgrub    <none executed>              queued
 test-amd64-amd64-xl-pvhv2-amd    <none executed>              queued
 test-amd64-amd64-libvirt        <none executed>              queued
 test-amd64-amd64-libvirt-pair    <none executed>              queued
 test-amd64-amd64-xl-multivcpu    <none executed>              queued
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm    <none executed>   queued
 test-amd64-amd64-libvirt-vhd    <none executed>              queued
 test-amd64-amd64-xl-credit2     <none executed>              queued
 test-amd64-amd64-libvirt-xsm    <none executed>              queued
 test-amd64-amd64-pair           <none executed>              queued
 test-amd64-amd64-xl-credit1     <none executed>              queued
 test-amd64-amd64-pygrub         <none executed>              queued
 test-amd64-amd64-qemuu-freebsd11-amd64    <none executed>              queued
 test-amd64-amd64-xl             <none executed>              queued
 test-amd64-amd64-qemuu-freebsd12-amd64    <none executed>              queued
 test-amd64-amd64-qemuu-nested-amd    <none executed>              queued
 test-amd64-amd64-qemuu-nested-intel    <none executed>              queued
 test-amd64-i386-xl-shadow       <none executed>              queued
 test-amd64-i386-xl-xsm          <none executed>              queued
 test-amd64-i386-freebsd10-amd64    <none executed>              queued
 test-amd64-i386-freebsd10-i386    <none executed>              queued
 test-amd64-i386-libvirt         <none executed>              queued
 test-amd64-i386-libvirt-pair    <none executed>              queued
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm    <none executed>    queued
 test-amd64-i386-libvirt-xsm     <none executed>              queued
 test-amd64-i386-pair            <none executed>              queued
 test-amd64-i386-qemut-rhel6hvm-amd    <none executed>              queued
 test-amd64-i386-qemut-rhel6hvm-intel    <none executed>              queued
 test-amd64-i386-qemuu-rhel6hvm-amd    <none executed>              queued
 test-amd64-i386-qemuu-rhel6hvm-intel    <none executed>              queued
 test-amd64-i386-xl              <none executed>              queued
 test-amd64-i386-xl-pvshim       <none executed>              queued
 test-amd64-i386-xl-qemut-debianhvm-amd64    <none executed>             queued
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm    <none executed>          queued
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm    <none executed> queued
 build-amd64-xsm               5 STARTING                     running
 build-amd64                   2 hosts-allocate               running
 build-amd64-pvops             2 hosts-allocate               running
 build-armhf-pvops             2 hosts-allocate               running
 build-amd64-xsm               3 syslog-server                running

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a

version targeted for testing:
 linux                d4716ee8751bf8dabf5872ba008124a0979a5f94
baseline version:
 linux                a829146c3fdcf6d0b76d9c54556a223820f1f73b

Last test of basis   158387  2021-01-12 19:40:06 Z   26 days
Failing since        158473  2021-01-17 13:42:20 Z   21 days   33 attempts
Testing same since                          (not found)         0 attempts

------------------------------------------------------------
377 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              broken  
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               broken  
 build-amd64                                                  preparing
 build-arm64                                                  broken  
 build-armhf                                                  broken  
 build-i386                                                   broken  
 build-amd64-libvirt                                          queued  
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            preparing
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            preparing
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          queued  
 test-amd64-coresched-amd64-xl                                queued  
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          queued  
 test-amd64-i386-xl                                           queued  
 test-amd64-coresched-i386-xl                                 queued  
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           queued  
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            queued  
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        queued  
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         queued  
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 queued  
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  queued  
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 queued  
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  queued  
 test-amd64-amd64-libvirt-xsm                                 queued  
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  queued  
 test-amd64-amd64-xl-xsm                                      queued  
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       queued  
 test-amd64-amd64-qemuu-nested-amd                            queued  
 test-amd64-amd64-xl-pvhv2-amd                                queued  
 test-amd64-i386-qemut-rhel6hvm-amd                           queued  
 test-amd64-i386-qemuu-rhel6hvm-amd                           queued  
 test-amd64-amd64-dom0pvh-xl-amd                              queued  
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    queued  
 test-amd64-i386-xl-qemut-debianhvm-amd64                     queued  
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    queued  
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     queued  
 test-amd64-i386-freebsd10-amd64                              queued  
 test-amd64-amd64-qemuu-freebsd11-amd64                       queued  
 test-amd64-amd64-qemuu-freebsd12-amd64                       queued  
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         queued  
 test-amd64-i386-xl-qemuu-ovmf-amd64                          queued  
 test-amd64-amd64-xl-qemut-win7-amd64                         queued  
 test-amd64-i386-xl-qemut-win7-amd64                          queued  
 test-amd64-amd64-xl-qemuu-win7-amd64                         queued  
 test-amd64-i386-xl-qemuu-win7-amd64                          queued  
 test-amd64-amd64-xl-qemut-ws16-amd64                         queued  
 test-amd64-i386-xl-qemut-ws16-amd64                          queued  
 test-amd64-amd64-xl-qemuu-ws16-amd64                         queued  
 test-amd64-i386-xl-qemuu-ws16-amd64                          queued  
 test-armhf-armhf-xl-arndale                                  queued  
 test-amd64-amd64-xl-credit1                                  queued  
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  queued  
 test-amd64-amd64-xl-credit2                                  queued  
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  queued  
 test-armhf-armhf-xl-cubietruck                               queued  
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        queued  
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         queued  
 test-amd64-amd64-examine                                     queued  
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     queued  
 test-amd64-i386-examine                                      queued  
 test-amd64-i386-freebsd10-i386                               queued  
 test-amd64-amd64-qemuu-nested-intel                          queued  
 test-amd64-amd64-xl-pvhv2-intel                              queued  
 test-amd64-i386-qemut-rhel6hvm-intel                         queued  
 test-amd64-i386-qemuu-rhel6hvm-intel                         queued  
 test-amd64-amd64-dom0pvh-xl-intel                            queued  
 test-amd64-amd64-libvirt                                     queued  
 test-armhf-armhf-libvirt                                     queued  
 test-amd64-i386-libvirt                                      queued  
 test-amd64-amd64-xl-multivcpu                                queued  
 test-armhf-armhf-xl-multivcpu                                queued  
 test-amd64-amd64-pair                                        queued  
 test-amd64-i386-pair                                         queued  
 test-amd64-amd64-libvirt-pair                                queued  
 test-amd64-i386-libvirt-pair                                 queued  
 test-amd64-amd64-amd64-pvgrub                                queued  
 test-amd64-amd64-i386-pvgrub                                 queued  
 test-amd64-amd64-xl-pvshim                                   queued  
 test-amd64-i386-xl-pvshim                                    queued  
 test-amd64-amd64-pygrub                                      queued  
 test-amd64-amd64-xl-qcow2                                    queued  
 test-armhf-armhf-libvirt-raw                                 queued  
 test-amd64-i386-xl-raw                                       queued  
 test-amd64-amd64-xl-rtds                                     queued  
 test-armhf-armhf-xl-rtds                                     queued  
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             queued  
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              queued  
 test-amd64-amd64-xl-shadow                                   queued  
 test-amd64-i386-xl-shadow                                    queued  
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 queued  
 test-armhf-armhf-xl-vhd                                      queued  


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-i386-xl-qemut-win7-amd64 queued
broken-job test-amd64-i386-xl-qemut-ws16-amd64 queued
broken-job test-amd64-i386-xl-qemuu-debianhvm-amd64 queued
broken-job test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow queued
broken-job test-amd64-i386-xl-qemuu-debianhvm-i386-xsm queued
broken-job test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict queued
broken-job test-amd64-i386-xl-qemuu-ovmf-amd64 queued
broken-job test-amd64-i386-xl-qemuu-win7-amd64 queued
broken-job test-amd64-i386-xl-qemuu-ws16-amd64 queued
broken-job test-amd64-i386-xl-raw queued
broken-job test-armhf-armhf-examine queued
broken-job test-armhf-armhf-libvirt queued
broken-job test-armhf-armhf-libvirt-raw queued
broken-job test-armhf-armhf-xl queued
broken-job test-armhf-armhf-xl-arndale queued
broken-job test-armhf-armhf-xl-credit1 queued
broken-job test-armhf-armhf-xl-credit2 queued
broken-job test-armhf-armhf-xl-cubietruck queued
broken-job test-armhf-armhf-xl-multivcpu queued
broken-job test-armhf-armhf-xl-rtds queued
broken-job test-armhf-armhf-xl-vhd queued
broken-job test-amd64-i386-examine queued
broken-job test-amd64-coresched-i386-xl queued
broken-job test-amd64-coresched-amd64-xl queued
broken-job test-amd64-amd64-xl-xsm queued
broken-job test-amd64-amd64-xl-shadow queued
broken-job test-amd64-amd64-xl-rtds queued
broken-job test-amd64-amd64-xl-qemuu-ws16-amd64 queued
broken-job test-amd64-amd64-xl-qemuu-win7-amd64 queued
broken-job test-amd64-amd64-xl-qemuu-ovmf-amd64 queued
broken-job test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict queued
broken-job test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm queued
broken-job test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow queued
broken-job build-amd64-libvirt queued
broken-job test-amd64-amd64-xl-qemuu-debianhvm-amd64 queued
broken-job build-amd64-xsm broken
broken-job test-amd64-amd64-xl-qemut-ws16-amd64 queued
broken-job build-arm64 broken
broken-job test-amd64-amd64-xl-qemut-win7-amd64 queued
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-job test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm queued
broken-job build-armhf broken
broken-job test-amd64-amd64-xl-qemut-debianhvm-i386-xsm queued
broken-job build-i386 broken
broken-job test-amd64-amd64-xl-qemut-debianhvm-amd64 queued
broken-job build-i386-pvops broken
broken-job test-amd64-amd64-xl-qcow2 queued
broken-job build-i386-xsm broken
broken-job test-amd64-amd64-amd64-pvgrub queued
broken-job test-amd64-amd64-xl-pvshim queued
broken-job test-amd64-amd64-dom0pvh-xl-amd queued
broken-job test-amd64-amd64-xl-pvhv2-intel queued
broken-job test-amd64-amd64-dom0pvh-xl-intel queued
broken-job test-amd64-amd64-examine queued
broken-job test-amd64-amd64-i386-pvgrub queued
broken-job test-amd64-amd64-xl-pvhv2-amd queued
broken-job test-amd64-amd64-libvirt queued
broken-job test-amd64-amd64-libvirt-pair queued
broken-job test-amd64-amd64-xl-multivcpu queued
broken-job test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm queued
broken-job test-amd64-amd64-libvirt-vhd queued
broken-job test-amd64-amd64-xl-credit2 queued
broken-job test-amd64-amd64-libvirt-xsm queued
broken-job test-amd64-amd64-pair queued
broken-job test-amd64-amd64-xl-credit1 queued
broken-job test-amd64-amd64-pygrub queued
broken-job test-amd64-amd64-qemuu-freebsd11-amd64 queued
broken-job test-amd64-amd64-xl queued
broken-job test-amd64-amd64-qemuu-freebsd12-amd64 queued
broken-job test-amd64-amd64-qemuu-nested-amd queued
broken-job test-amd64-amd64-qemuu-nested-intel queued
broken-job test-amd64-i386-xl-shadow queued
broken-job test-amd64-i386-xl-xsm queued
broken-job test-amd64-i386-freebsd10-amd64 queued
broken-job test-amd64-i386-freebsd10-i386 queued
broken-job test-amd64-i386-libvirt queued
broken-job test-amd64-i386-libvirt-pair queued
broken-job test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm queued
broken-job test-amd64-i386-libvirt-xsm queued
broken-job test-amd64-i386-pair queued
broken-job test-amd64-i386-qemut-rhel6hvm-amd queued
broken-job test-amd64-i386-qemut-rhel6hvm-intel queued
broken-job test-amd64-i386-qemuu-rhel6hvm-amd queued
broken-job test-amd64-i386-qemuu-rhel6hvm-intel queued
broken-job test-amd64-i386-xl queued
broken-job test-amd64-i386-xl-pvshim queued
broken-job test-amd64-i386-xl-qemut-debianhvm-amd64 queued
broken-job test-amd64-i386-xl-qemut-debianhvm-i386-xsm queued
broken-job test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm queued
broken-step build-i386-pvops host-install(4)
broken-step build-i386-xsm host-install(4)
broken-step build-i386 host-install(4)
broken-step build-arm64-xsm host-install(4)
broken-step build-arm64-pvops host-install(4)
broken-step build-arm64 host-install(4)
broken-step build-amd64-xsm host-install(4)
broken-step build-armhf host-install(4)

Not pushing.

(No revision log; it would be 11411 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 10:46:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 10:46:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82725.152865 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l943X-0001Ld-4o; Mon, 08 Feb 2021 10:46:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82725.152865; Mon, 08 Feb 2021 10:46:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l943X-0001LW-1B; Mon, 08 Feb 2021 10:46:11 +0000
Received: by outflank-mailman (input) for mailman id 82725;
 Mon, 08 Feb 2021 10:46:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l943V-0001Kw-Md; Mon, 08 Feb 2021 10:46:09 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l943V-0001cF-Hg; Mon, 08 Feb 2021 10:46:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l943V-0006qt-51; Mon, 08 Feb 2021 10:46:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l943V-00084i-4V; Mon, 08 Feb 2021 10:46:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=pPM+drSg55V9+9/AKD6NgLYiBJ2HOoZZhQxJ7nUlRiA=; b=hRjlAkTi27hC4RBXaGHHZ8le+t
	9o7zjkrzVtug2c66SWgFed7IMJtFAIrUUcoEuXYjOvSINEQrGZPFI2YtbkXqr2EzkWFAj0HbjZtya
	/29RiRkggx8n2m1pI6nnYvfGQWBBH0es/SzE6igscx52fgCMNr51xKsYkCHLsfsdWltQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159118-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 159118: trouble: blocked/broken/preparing/queued/running
X-Osstest-Failures:
    libvirt:build-amd64:<job status>:broken:regression
    libvirt:build-amd64-pvops:<job status>:broken:regression
    libvirt:build-armhf:<job status>:broken:regression
    libvirt:build-i386-xsm:host-install(4):broken:regression
    libvirt:build-amd64:host-install(4):broken:regression
    libvirt:build-amd64-pvops:host-install(4):broken:regression
    libvirt:build-armhf:host-install(4):broken:regression
    libvirt:build-arm64-libvirt:<none executed>:queued:regression
    libvirt:build-i386-libvirt:<none executed>:queued:regression
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:<none executed>:queued:regression
    libvirt:test-amd64-amd64-libvirt-xsm:<none executed>:queued:regression
    libvirt:test-amd64-i386-libvirt:<none executed>:queued:regression
    libvirt:test-amd64-i386-libvirt-pair:<none executed>:queued:regression
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:<none executed>:queued:regression
    libvirt:test-amd64-i386-libvirt-xsm:<none executed>:queued:regression
    libvirt:test-arm64-arm64-libvirt:<none executed>:queued:regression
    libvirt:test-arm64-arm64-libvirt-qcow2:<none executed>:queued:regression
    libvirt:test-arm64-arm64-libvirt-xsm:<none executed>:queued:regression
    libvirt:test-armhf-armhf-libvirt:<none executed>:queued:regression
    libvirt:test-armhf-armhf-libvirt-raw:<none executed>:queued:regression
    libvirt:build-amd64-xsm:hosts-allocate:running:regression
    libvirt:build-arm64-pvops:hosts-allocate:running:regression
    libvirt:build-armhf-pvops:hosts-allocate:running:regression
    libvirt:build-i386:hosts-allocate:running:regression
    libvirt:build-i386-pvops:hosts-allocate:running:regression
    libvirt:build-arm64:hosts-allocate:running:regression
    libvirt:build-i386-xsm:syslog-server:running:regression
    libvirt:build-arm64-xsm:host-install(4):running:regression
    libvirt:build-arm64-xsm:syslog-server:running:regression
    libvirt:build-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:build-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=a58edc602ebfef9323d405f846cb0076bdfc8044
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 08 Feb 2021 10:46:09 +0000

flight 159118 libvirt running [real]
http://logs.test-lab.xenproject.org/osstest/logs/159118/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-amd64-pvops               <job status>                 broken
 build-armhf                     <job status>                 broken
 build-i386-xsm                4 host-install(4)        broken REGR. vs. 151777
 build-amd64                   4 host-install(4)        broken REGR. vs. 151777
 build-amd64-pvops             4 host-install(4)        broken REGR. vs. 151777
 build-armhf                   4 host-install(4)        broken REGR. vs. 151777
 build-arm64-libvirt             <none executed>              queued
 build-i386-libvirt              <none executed>              queued
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm    <none executed>   queued
 test-amd64-amd64-libvirt-xsm    <none executed>              queued
 test-amd64-i386-libvirt         <none executed>              queued
 test-amd64-i386-libvirt-pair    <none executed>              queued
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm    <none executed>    queued
 test-amd64-i386-libvirt-xsm     <none executed>              queued
 test-arm64-arm64-libvirt        <none executed>              queued
 test-arm64-arm64-libvirt-qcow2    <none executed>              queued
 test-arm64-arm64-libvirt-xsm    <none executed>              queued
 test-armhf-armhf-libvirt        <none executed>              queued
 test-armhf-armhf-libvirt-raw    <none executed>              queued
 build-amd64-xsm               2 hosts-allocate               running
 build-arm64-pvops             2 hosts-allocate               running
 build-armhf-pvops             2 hosts-allocate               running
 build-i386                    2 hosts-allocate               running
 build-i386-pvops              2 hosts-allocate               running
 build-arm64                   2 hosts-allocate               running
 build-i386-xsm                3 syslog-server                running
 build-arm64-xsm               4 host-install(4)              running
 build-arm64-xsm               3 syslog-server                running

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              a58edc602ebfef9323d405f846cb0076bdfc8044
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  213 days
Failing since        151818  2020-07-11 04:18:52 Z  212 days  206 attempts
Testing same since   159097  2021-02-07 09:24:19 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              preparing
 build-arm64-xsm                                              running 
 build-i386-xsm                                               running 
 build-amd64                                                  broken  
 build-arm64                                                  preparing
 build-armhf                                                  broken  
 build-i386                                                   preparing
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          queued  
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           queued  
 build-amd64-pvops                                            broken  
 build-arm64-pvops                                            preparing
 build-armhf-pvops                                            preparing
 build-i386-pvops                                             preparing
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           queued  
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            queued  
 test-amd64-amd64-libvirt-xsm                                 queued  
 test-arm64-arm64-libvirt-xsm                                 queued  
 test-amd64-i386-libvirt-xsm                                  queued  
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     queued  
 test-armhf-armhf-libvirt                                     queued  
 test-amd64-i386-libvirt                                      queued  
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 queued  
 test-arm64-arm64-libvirt-qcow2                               queued  
 test-armhf-armhf-libvirt-raw                                 queued  
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-amd64-pvops broken
broken-job build-arm64-libvirt queued
broken-job build-armhf broken
broken-job build-i386-libvirt queued
broken-job test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm queued
broken-job test-amd64-amd64-libvirt-xsm queued
broken-job test-amd64-i386-libvirt queued
broken-job test-amd64-i386-libvirt-pair queued
broken-job test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm queued
broken-job test-amd64-i386-libvirt-xsm queued
broken-job test-arm64-arm64-libvirt queued
broken-job test-arm64-arm64-libvirt-qcow2 queued
broken-job test-arm64-arm64-libvirt-xsm queued
broken-job test-armhf-armhf-libvirt queued
broken-job test-armhf-armhf-libvirt-raw queued
broken-step build-i386-xsm host-install(4)
broken-step build-amd64 host-install(4)
broken-step build-amd64-pvops host-install(4)
broken-step build-armhf host-install(4)

Not pushing.

(No revision log; it would be 40506 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 10:46:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 10:46:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82727.152880 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l943d-0001Q3-F4; Mon, 08 Feb 2021 10:46:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82727.152880; Mon, 08 Feb 2021 10:46:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l943d-0001Pu-Bh; Mon, 08 Feb 2021 10:46:17 +0000
Received: by outflank-mailman (input) for mailman id 82727;
 Mon, 08 Feb 2021 10:46:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l943c-0001PS-Fe; Mon, 08 Feb 2021 10:46:16 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l943c-0001cJ-8S; Mon, 08 Feb 2021 10:46:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l943c-0006qy-0k; Mon, 08 Feb 2021 10:46:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l943c-00089A-0I; Mon, 08 Feb 2021 10:46:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=cjmWajFYmQeyvy6Cv0QkUwFTj+IIKODllv6TX3nRXPs=; b=3EQmC+nOv/jlAyez1v8TIACnWI
	rfzqmSNEj0X/KEMqJIa048OUPdHJz4SAxCOJSlclRPwZ5twsm7Oj8VSZ3BxpK4J3NShirV4494lbB
	zhU2T96l4w3i4O5R+OlT53YzjEQ5B6RAkz2+yTSRqoS83WLqgq7p6V3Ecs4bnYmXpmXo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159120-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.11-testing test] 159120: trouble: blocked/broken/preparing/queued/running
X-Osstest-Failures:
    xen-4.11-testing:build-amd64-prev:<job status>:broken:regression
    xen-4.11-testing:build-amd64-pvops:<job status>:broken:regression
    xen-4.11-testing:build-armhf:<job status>:broken:regression
    xen-4.11-testing:build-armhf-pvops:<job status>:broken:regression
    xen-4.11-testing:build-amd64-pvops:host-install(4):broken:regression
    xen-4.11-testing:build-armhf-pvops:host-install(4):broken:regression
    xen-4.11-testing:build-amd64-prev:host-install(4):broken:regression
    xen-4.11-testing:build-armhf:host-install(4):broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-xl-qemut-win7-amd64:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-xl-qemut-ws16-amd64:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-xl-qemuu-ovmf-amd64:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-xl-qemuu-win7-amd64:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-xl-qemuu-ws16-amd64:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-xl-xsm:<none executed>:queued:regression
    xen-4.11-testing:test-arm64-arm64-libvirt-xsm:<none executed>:queued:regression
    xen-4.11-testing:test-arm64-arm64-xl:<none executed>:queued:regression
    xen-4.11-testing:test-arm64-arm64-xl-credit1:<none executed>:queued:regression
    xen-4.11-testing:test-arm64-arm64-xl-credit2:<none executed>:queued:regression
    xen-4.11-testing:test-arm64-arm64-xl-seattle:<none executed>:queued:regression
    xen-4.11-testing:test-arm64-arm64-xl-thunderx:<none executed>:queued:regression
    xen-4.11-testing:test-arm64-arm64-xl-xsm:<none executed>:queued:regression
    xen-4.11-testing:test-xtf-amd64-amd64-1:<none executed>:queued:regression
    xen-4.11-testing:test-xtf-amd64-amd64-2:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-freebsd10-amd64:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-amd64-xl-xsm:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-amd64-xl-shadow:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-amd64-xl-rtds:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-win7-amd64:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-ovmf-amd64:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-amd64-xl-qemut-ws16-amd64:<none executed>:queued:regression
    xen-4.11-testing:build-amd64-libvirt:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-amd64-xl-qemut-win7-amd64:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:<none executed>:queued:regression
    xen-4.11-testing:build-arm64-libvirt:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-amd64-xl-qemut-debianhvm-amd64:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-amd64-xl-qcow2:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-amd64-xl-pvshim:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-amd64-xl-pvhv2-intel:<none executed>:queued:regression
    xen-4.11-testing:build-i386-libvirt:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-amd64-xl-pvhv2-amd:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-amd64-xl-multivcpu:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-amd64-amd64-pvgrub:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-amd64-i386-pvgrub:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-amd64-xl-credit2:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-amd64-libvirt:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-amd64-libvirt-pair:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-amd64-xl-credit1:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-amd64-libvirt-vhd:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-amd64-xl:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-amd64-libvirt-xsm:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-amd64-livepatch:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-amd64-qemuu-nested-intel:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-amd64-migrupgrade:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-amd64-pair:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-amd64-qemuu-nested-amd:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-amd64-pygrub:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-amd64-qemuu-freebsd11-amd64:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-amd64-qemuu-freebsd12-amd64:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-xl-raw:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-xl-shadow:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-freebsd10-i386:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-libvirt:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-libvirt-pair:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-libvirt-xsm:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-livepatch:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-migrupgrade:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-pair:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-qemut-rhel6hvm-amd:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-qemut-rhel6hvm-intel:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-qemuu-rhel6hvm-amd:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-qemuu-rhel6hvm-intel:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-xl:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-xl-pvshim:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-xl-qemut-debianhvm-amd64:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:<none executed>:queued:regression
    xen-4.11-testing:test-xtf-amd64-amd64-3:<none executed>:queued:regression
    xen-4.11-testing:test-xtf-amd64-amd64-4:<none executed>:queued:regression
    xen-4.11-testing:test-xtf-amd64-amd64-5:<none executed>:queued:regression
    xen-4.11-testing:build-amd64:hosts-allocate:running:regression
    xen-4.11-testing:build-i386-prev:hosts-allocate:running:regression
    xen-4.11-testing:build-amd64-xtf:hosts-allocate:running:regression
    xen-4.11-testing:build-arm64:hosts-allocate:running:regression
    xen-4.11-testing:build-arm64-xsm:hosts-allocate:running:regression
    xen-4.11-testing:build-i386:hosts-allocate:running:regression
    xen-4.11-testing:build-arm64-pvops:hosts-allocate:running:regression
    xen-4.11-testing:build-i386-pvops:hosts-allocate:running:regression
    xen-4.11-testing:build-i386-xsm:hosts-allocate:running:regression
    xen-4.11-testing:build-amd64-xsm:host-install(4):running:regression
    xen-4.11-testing:build-amd64-xsm:syslog-server:running:regression
    xen-4.11-testing:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    xen-4.11-testing:build-armhf-libvirt:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=1c7d984645f9ade9b47e862b5880734ad498fea8
X-Osstest-Versions-That:
    xen=310ab79875cb705cc2c7daddff412b5a4899f8c9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 08 Feb 2021 10:46:16 +0000

flight 159120 xen-4.11-testing running [real]
http://logs.test-lab.xenproject.org/osstest/logs/159120/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-prev                <job status>                 broken
 build-amd64-pvops               <job status>                 broken
 build-armhf                     <job status>                 broken
 build-armhf-pvops               <job status>                 broken
 build-amd64-pvops             4 host-install(4)        broken REGR. vs. 157566
 build-armhf-pvops             4 host-install(4)        broken REGR. vs. 157566
 build-amd64-prev              4 host-install(4)        broken REGR. vs. 157566
 build-armhf                   4 host-install(4)        broken REGR. vs. 157566
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm    <none executed> queued
 test-amd64-i386-xl-qemut-win7-amd64    <none executed>              queued
 test-amd64-i386-xl-qemut-ws16-amd64    <none executed>              queued
 test-amd64-i386-xl-qemuu-debianhvm-amd64    <none executed>             queued
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow    <none executed>      queued
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm    <none executed>          queued
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict    <none executed> queued
 test-amd64-i386-xl-qemuu-ovmf-amd64    <none executed>              queued
 test-amd64-i386-xl-qemuu-win7-amd64    <none executed>              queued
 test-amd64-i386-xl-qemuu-ws16-amd64    <none executed>              queued
 test-amd64-i386-xl-xsm          <none executed>              queued
 test-arm64-arm64-libvirt-xsm    <none executed>              queued
 test-arm64-arm64-xl             <none executed>              queued
 test-arm64-arm64-xl-credit1     <none executed>              queued
 test-arm64-arm64-xl-credit2     <none executed>              queued
 test-arm64-arm64-xl-seattle     <none executed>              queued
 test-arm64-arm64-xl-thunderx    <none executed>              queued
 test-arm64-arm64-xl-xsm         <none executed>              queued
 test-xtf-amd64-amd64-1          <none executed>              queued
 test-xtf-amd64-amd64-2          <none executed>              queued
 test-amd64-i386-freebsd10-amd64    <none executed>              queued
 test-amd64-amd64-xl-xsm         <none executed>              queued
 test-amd64-amd64-xl-shadow      <none executed>              queued
 test-amd64-amd64-xl-rtds        <none executed>              queued
 test-amd64-amd64-xl-qemuu-ws16-amd64    <none executed>              queued
 test-amd64-amd64-xl-qemuu-win7-amd64    <none executed>              queued
 test-amd64-amd64-xl-qemuu-ovmf-amd64    <none executed>              queued
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict   <none executed> queued
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm    <none executed>         queued
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow    <none executed>     queued
 test-amd64-amd64-xl-qemuu-debianhvm-amd64    <none executed>            queued
 test-amd64-amd64-xl-qemut-ws16-amd64    <none executed>              queued
 build-amd64-libvirt             <none executed>              queued
 test-amd64-amd64-xl-qemut-win7-amd64    <none executed>              queued
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm   <none executed> queued
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm    <none executed>         queued
 build-arm64-libvirt             <none executed>              queued
 test-amd64-amd64-xl-qemut-debianhvm-amd64    <none executed>            queued
 test-amd64-amd64-xl-qcow2       <none executed>              queued
 test-amd64-amd64-xl-pvshim      <none executed>              queued
 test-amd64-amd64-xl-pvhv2-intel    <none executed>              queued
 build-i386-libvirt              <none executed>              queued
 test-amd64-amd64-xl-pvhv2-amd    <none executed>              queued
 test-amd64-amd64-xl-multivcpu    <none executed>              queued
 test-amd64-amd64-amd64-pvgrub    <none executed>              queued
 test-amd64-amd64-i386-pvgrub    <none executed>              queued
 test-amd64-amd64-xl-credit2     <none executed>              queued
 test-amd64-amd64-libvirt        <none executed>              queued
 test-amd64-amd64-libvirt-pair    <none executed>              queued
 test-amd64-amd64-xl-credit1     <none executed>              queued
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm    <none executed>   queued
 test-amd64-amd64-libvirt-vhd    <none executed>              queued
 test-amd64-amd64-xl             <none executed>              queued
 test-amd64-amd64-libvirt-xsm    <none executed>              queued
 test-amd64-amd64-livepatch      <none executed>              queued
 test-amd64-amd64-qemuu-nested-intel    <none executed>              queued
 test-amd64-amd64-migrupgrade    <none executed>              queued
 test-amd64-amd64-pair           <none executed>              queued
 test-amd64-amd64-qemuu-nested-amd    <none executed>              queued
 test-amd64-amd64-pygrub         <none executed>              queued
 test-amd64-amd64-qemuu-freebsd11-amd64    <none executed>              queued
 test-amd64-amd64-qemuu-freebsd12-amd64    <none executed>              queued
 test-amd64-i386-xl-raw          <none executed>              queued
 test-amd64-i386-xl-shadow       <none executed>              queued
 test-amd64-i386-freebsd10-i386    <none executed>              queued
 test-amd64-i386-libvirt         <none executed>              queued
 test-amd64-i386-libvirt-pair    <none executed>              queued
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm    <none executed>    queued
 test-amd64-i386-libvirt-xsm     <none executed>              queued
 test-amd64-i386-livepatch       <none executed>              queued
 test-amd64-i386-migrupgrade     <none executed>              queued
 test-amd64-i386-pair            <none executed>              queued
 test-amd64-i386-qemut-rhel6hvm-amd    <none executed>              queued
 test-amd64-i386-qemut-rhel6hvm-intel    <none executed>              queued
 test-amd64-i386-qemuu-rhel6hvm-amd    <none executed>              queued
 test-amd64-i386-qemuu-rhel6hvm-intel    <none executed>              queued
 test-amd64-i386-xl              <none executed>              queued
 test-amd64-i386-xl-pvshim       <none executed>              queued
 test-amd64-i386-xl-qemut-debianhvm-amd64    <none executed>             queued
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm    <none executed>          queued
 test-xtf-amd64-amd64-3          <none executed>              queued
 test-xtf-amd64-amd64-4          <none executed>              queued
 test-xtf-amd64-amd64-5          <none executed>              queued
 build-amd64                   2 hosts-allocate               running
 build-i386-prev               2 hosts-allocate               running
 build-amd64-xtf               2 hosts-allocate               running
 build-arm64                   2 hosts-allocate               running
 build-arm64-xsm               2 hosts-allocate               running
 build-i386                    2 hosts-allocate               running
 build-arm64-pvops             2 hosts-allocate               running
 build-i386-pvops              2 hosts-allocate               running
 build-i386-xsm                2 hosts-allocate               running
 build-amd64-xsm               4 host-install(4)              running
 build-amd64-xsm               3 syslog-server                running

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  1c7d984645f9ade9b47e862b5880734ad498fea8
baseline version:
 xen                  310ab79875cb705cc2c7daddff412b5a4899f8c9

Last test of basis   157566  2020-12-15 14:05:54 Z   54 days
Failing since        159016  2021-02-04 15:05:58 Z    3 days    4 attempts
Testing same since   159042  2021-02-05 12:13:30 Z    2 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              running 
 build-arm64-xsm                                              preparing
 build-i386-xsm                                               preparing
 build-amd64-xtf                                              preparing
 build-amd64                                                  preparing
 build-arm64                                                  preparing
 build-armhf                                                  broken  
 build-i386                                                   preparing
 build-amd64-libvirt                                          queued  
 build-arm64-libvirt                                          queued  
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           queued  
 build-amd64-prev                                             broken  
 build-i386-prev                                              preparing
 build-amd64-pvops                                            broken  
 build-arm64-pvops                                            preparing
 build-armhf-pvops                                            broken  
 build-i386-pvops                                             preparing
 test-xtf-amd64-amd64-1                                       queued  
 test-xtf-amd64-amd64-2                                       queued  
 test-xtf-amd64-amd64-3                                       queued  
 test-xtf-amd64-amd64-4                                       queued  
 test-xtf-amd64-amd64-5                                       queued  
 test-amd64-amd64-xl                                          queued  
 test-arm64-arm64-xl                                          queued  
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           queued  
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           queued  
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            queued  
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        queued  
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         queued  
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 queued  
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  queued  
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 queued  
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  queued  
 test-amd64-amd64-libvirt-xsm                                 queued  
 test-arm64-arm64-libvirt-xsm                                 queued  
 test-amd64-i386-libvirt-xsm                                  queued  
 test-amd64-amd64-xl-xsm                                      queued  
 test-arm64-arm64-xl-xsm                                      queued  
 test-amd64-i386-xl-xsm                                       queued  
 test-amd64-amd64-qemuu-nested-amd                            queued  
 test-amd64-amd64-xl-pvhv2-amd                                queued  
 test-amd64-i386-qemut-rhel6hvm-amd                           queued  
 test-amd64-i386-qemuu-rhel6hvm-amd                           queued  
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    queued  
 test-amd64-i386-xl-qemut-debianhvm-amd64                     queued  
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    queued  
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     queued  
 test-amd64-i386-freebsd10-amd64                              queued  
 test-amd64-amd64-qemuu-freebsd11-amd64                       queued  
 test-amd64-amd64-qemuu-freebsd12-amd64                       queued  
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         queued  
 test-amd64-i386-xl-qemuu-ovmf-amd64                          queued  
 test-amd64-amd64-xl-qemut-win7-amd64                         queued  
 test-amd64-i386-xl-qemut-win7-amd64                          queued  
 test-amd64-amd64-xl-qemuu-win7-amd64                         queued  
 test-amd64-i386-xl-qemuu-win7-amd64                          queued  
 test-amd64-amd64-xl-qemut-ws16-amd64                         queued  
 test-amd64-i386-xl-qemut-ws16-amd64                          queued  
 test-amd64-amd64-xl-qemuu-ws16-amd64                         queued  
 test-amd64-i386-xl-qemuu-ws16-amd64                          queued  
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  queued  
 test-arm64-arm64-xl-credit1                                  queued  
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  queued  
 test-arm64-arm64-xl-credit2                                  queued  
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        queued  
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         queued  
 test-amd64-i386-freebsd10-i386                               queued  
 test-amd64-amd64-qemuu-nested-intel                          queued  
 test-amd64-amd64-xl-pvhv2-intel                              queued  
 test-amd64-i386-qemut-rhel6hvm-intel                         queued  
 test-amd64-i386-qemuu-rhel6hvm-intel                         queued  
 test-amd64-amd64-libvirt                                     queued  
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      queued  
 test-amd64-amd64-livepatch                                   queued  
 test-amd64-i386-livepatch                                    queued  
 test-amd64-amd64-migrupgrade                                 queued  
 test-amd64-i386-migrupgrade                                  queued  
 test-amd64-amd64-xl-multivcpu                                queued  
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        queued  
 test-amd64-i386-pair                                         queued  
 test-amd64-amd64-libvirt-pair                                queued  
 test-amd64-i386-libvirt-pair                                 queued  
 test-amd64-amd64-amd64-pvgrub                                queued  
 test-amd64-amd64-i386-pvgrub                                 queued  
 test-amd64-amd64-xl-pvshim                                   queued  
 test-amd64-i386-xl-pvshim                                    queued  
 test-amd64-amd64-pygrub                                      queued  
 test-amd64-amd64-xl-qcow2                                    queued  
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       queued  
 test-amd64-amd64-xl-rtds                                     queued  
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  queued  
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             queued  
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              queued  
 test-amd64-amd64-xl-shadow                                   queued  
 test-amd64-i386-xl-shadow                                    queued  
 test-arm64-arm64-xl-thunderx                                 queued  
 test-amd64-amd64-libvirt-vhd                                 queued  
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm queued
broken-job test-amd64-i386-xl-qemut-win7-amd64 queued
broken-job test-amd64-i386-xl-qemut-ws16-amd64 queued
broken-job test-amd64-i386-xl-qemuu-debianhvm-amd64 queued
broken-job test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow queued
broken-job test-amd64-i386-xl-qemuu-debianhvm-i386-xsm queued
broken-job test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict queued
broken-job test-amd64-i386-xl-qemuu-ovmf-amd64 queued
broken-job test-amd64-i386-xl-qemuu-win7-amd64 queued
broken-job test-amd64-i386-xl-qemuu-ws16-amd64 queued
broken-job test-amd64-i386-xl-xsm queued
broken-job test-arm64-arm64-libvirt-xsm queued
broken-job test-arm64-arm64-xl queued
broken-job test-arm64-arm64-xl-credit1 queued
broken-job test-arm64-arm64-xl-credit2 queued
broken-job test-arm64-arm64-xl-seattle queued
broken-job test-arm64-arm64-xl-thunderx queued
broken-job test-arm64-arm64-xl-xsm queued
broken-job test-xtf-amd64-amd64-1 queued
broken-job test-xtf-amd64-amd64-2 queued
broken-job test-amd64-i386-freebsd10-amd64 queued
broken-job test-amd64-amd64-xl-xsm queued
broken-job test-amd64-amd64-xl-shadow queued
broken-job test-amd64-amd64-xl-rtds queued
broken-job test-amd64-amd64-xl-qemuu-ws16-amd64 queued
broken-job test-amd64-amd64-xl-qemuu-win7-amd64 queued
broken-job test-amd64-amd64-xl-qemuu-ovmf-amd64 queued
broken-job test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict queued
broken-job test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm queued
broken-job test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow queued
broken-job test-amd64-amd64-xl-qemuu-debianhvm-amd64 queued
broken-job test-amd64-amd64-xl-qemut-ws16-amd64 queued
broken-job build-amd64-libvirt queued
broken-job test-amd64-amd64-xl-qemut-win7-amd64 queued
broken-job build-amd64-prev broken
broken-job build-amd64-pvops broken
broken-job test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm queued
broken-job test-amd64-amd64-xl-qemut-debianhvm-i386-xsm queued
broken-job build-arm64-libvirt queued
broken-job test-amd64-amd64-xl-qemut-debianhvm-amd64 queued
broken-job test-amd64-amd64-xl-qcow2 queued
broken-job build-armhf broken
broken-job test-amd64-amd64-xl-pvshim queued
broken-job build-armhf-pvops broken
broken-job test-amd64-amd64-xl-pvhv2-intel queued
broken-job build-i386-libvirt queued
broken-job test-amd64-amd64-xl-pvhv2-amd queued
broken-job test-amd64-amd64-xl-multivcpu queued
broken-job test-amd64-amd64-amd64-pvgrub queued
broken-job test-amd64-amd64-i386-pvgrub queued
broken-job test-amd64-amd64-xl-credit2 queued
broken-job test-amd64-amd64-libvirt queued
broken-job test-amd64-amd64-libvirt-pair queued
broken-job test-amd64-amd64-xl-credit1 queued
broken-job test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm queued
broken-job test-amd64-amd64-libvirt-vhd queued
broken-job test-amd64-amd64-xl queued
broken-job test-amd64-amd64-libvirt-xsm queued
broken-job test-amd64-amd64-livepatch queued
broken-job test-amd64-amd64-qemuu-nested-intel queued
broken-job test-amd64-amd64-migrupgrade queued
broken-job test-amd64-amd64-pair queued
broken-job test-amd64-amd64-qemuu-nested-amd queued
broken-job test-amd64-amd64-pygrub queued
broken-job test-amd64-amd64-qemuu-freebsd11-amd64 queued
broken-job test-amd64-amd64-qemuu-freebsd12-amd64 queued
broken-job test-amd64-i386-xl-raw queued
broken-job test-amd64-i386-xl-shadow queued
broken-job test-amd64-i386-freebsd10-i386 queued
broken-job test-amd64-i386-libvirt queued
broken-job test-amd64-i386-libvirt-pair queued
broken-job test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm queued
broken-job test-amd64-i386-libvirt-xsm queued
broken-job test-amd64-i386-livepatch queued
broken-job test-amd64-i386-migrupgrade queued
broken-job test-amd64-i386-pair queued
broken-job test-amd64-i386-qemut-rhel6hvm-amd queued
broken-job test-amd64-i386-qemut-rhel6hvm-intel queued
broken-job test-amd64-i386-qemuu-rhel6hvm-amd queued
broken-job test-amd64-i386-qemuu-rhel6hvm-intel queued
broken-job test-amd64-i386-xl queued
broken-job test-amd64-i386-xl-pvshim queued
broken-job test-amd64-i386-xl-qemut-debianhvm-amd64 queued
broken-job test-amd64-i386-xl-qemut-debianhvm-i386-xsm queued
broken-job test-xtf-amd64-amd64-3 queued
broken-job test-xtf-amd64-amd64-4 queued
broken-job test-xtf-amd64-amd64-5 queued
broken-step build-amd64-pvops host-install(4)
broken-step build-armhf-pvops host-install(4)
broken-step build-amd64-prev host-install(4)
broken-step build-armhf host-install(4)

Not pushing.

------------------------------------------------------------
commit 1c7d984645f9ade9b47e862b5880734ad498fea8
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Feb 5 08:54:03 2021 +0100

    x86/msr: fix handling of MSR_IA32_PERF_{STATUS/CTL} (again, part 2)
    
    X86_VENDOR_* aren't bit masks in the older trees.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit f9090d990e201a5ca045976b8ddaab9fa6ee69dd
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Feb 4 15:41:12 2021 +0100

    x86/msr: fix handling of MSR_IA32_PERF_{STATUS/CTL} (again)
    
    X86_VENDOR_* aren't bit masks in the older trees.
    
    Reported-by: James Dingwall <james@dingwall.me.uk>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 10:46:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 10:46:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82729.152895 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l943g-0001UC-Tl; Mon, 08 Feb 2021 10:46:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82729.152895; Mon, 08 Feb 2021 10:46:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l943g-0001U4-QT; Mon, 08 Feb 2021 10:46:20 +0000
Received: by outflank-mailman (input) for mailman id 82729;
 Mon, 08 Feb 2021 10:46:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l943f-0001TK-Td; Mon, 08 Feb 2021 10:46:19 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l943f-0001cN-Qf; Mon, 08 Feb 2021 10:46:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l943f-0006r5-L9; Mon, 08 Feb 2021 10:46:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l943f-0008Bc-Kf; Mon, 08 Feb 2021 10:46:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=bHdRiCqh3F5fwIxvJ6FeWwXaYxiskBZ2GwmTU11GoxU=; b=d0Frqdn275Y7Xz2YHLBCoIyak8
	UBJLyXCgjrOW6MyAW5xER3xIylU1CXtUM7A30R1dlxT6d4Mcy2/f5foVCRlGaIjzq5wDntbOTKuJX
	MDblTAGxhHviJ1dIzngmBFkN/MT/r56vo+bcLZzmBTBrlsJc6om37k1jYDFv2S/U9TA8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159125-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 159125: trouble: preparing/queued/running
X-Osstest-Failures:
    ovmf:build-i386:host-install(4):broken:regression
    ovmf:build-amd64-libvirt:<none executed>:queued:regression
    ovmf:build-i386-libvirt:<none executed>:queued:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:<none executed>:queued:regression
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:<none executed>:queued:regression
    ovmf:build-i386-xsm:hosts-allocate:running:regression
    ovmf:build-amd64-xsm:hosts-allocate:running:regression
    ovmf:build-i386-pvops:hosts-allocate:running:regression
    ovmf:build-amd64:hosts-allocate:running:regression
    ovmf:build-amd64-pvops:hosts-allocate:running:regression
    ovmf:build-i386:syslog-server:running:regression
X-Osstest-Versions-This:
    ovmf=43a113385e370530eb52cf2e55b3019d8d4f6558
X-Osstest-Versions-That:
    ovmf=0d96664df322d50e0ac54130e129c0bf4f2b72df
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 08 Feb 2021 10:46:19 +0000

flight 159125 ovmf running [real]
http://logs.test-lab.xenproject.org/osstest/logs/159125/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    4 host-install(4)        broken REGR. vs. 159040
 build-amd64-libvirt             <none executed>              queued
 build-i386-libvirt              <none executed>              queued
 test-amd64-amd64-xl-qemuu-ovmf-amd64    <none executed>              queued
 test-amd64-i386-xl-qemuu-ovmf-amd64    <none executed>              queued
 build-i386-xsm                2 hosts-allocate               running
 build-amd64-xsm               2 hosts-allocate               running
 build-i386-pvops              2 hosts-allocate               running
 build-amd64                   2 hosts-allocate               running
 build-amd64-pvops             2 hosts-allocate               running
 build-i386                    3 syslog-server                running

version targeted for testing:
 ovmf                 43a113385e370530eb52cf2e55b3019d8d4f6558
baseline version:
 ovmf                 0d96664df322d50e0ac54130e129c0bf4f2b72df

Last test of basis   159040  2021-02-05 11:11:01 Z    2 days
Testing same since   159088  2021-02-07 01:54:46 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bob Feng <bob.c.feng@intel.com>
  Liming Gao <gaoliming@byosoft.com.cn>

jobs:
 build-amd64-xsm                                              preparing
 build-i386-xsm                                               preparing
 build-amd64                                                  preparing
 build-i386                                                   running 
 build-amd64-libvirt                                          queued  
 build-i386-libvirt                                           queued  
 build-amd64-pvops                                            preparing
 build-i386-pvops                                             preparing
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         queued  
 test-amd64-i386-xl-qemuu-ovmf-amd64                          queued  


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64-libvirt queued
broken-job build-i386-libvirt queued
broken-job test-amd64-amd64-xl-qemuu-ovmf-amd64 queued
broken-job test-amd64-i386-xl-qemuu-ovmf-amd64 queued
broken-step build-i386 host-install(4)

Not pushing.

------------------------------------------------------------
commit 43a113385e370530eb52cf2e55b3019d8d4f6558
Author: Bob Feng <bob.c.feng@intel.com>
Date:   Mon Feb 1 18:28:58 2021 +0800

    BaseTools: fix the split output files root dir
    
    If the output file path is a relative path, the split
    tool will create the output file under the input file path.
    But the expected behavior for this case is the output file
    should be relative to the current directory. This patch will
    fix this bug.
    
    If the output file path is not specified and output prefix is not
    specified, the output file should be under the input file path
    
    Signed-off-by: Bob Feng <bob.c.feng@intel.com>
    Cc: Liming Gao <gaoliming@byosoft.com.cn>
    Cc: Yuwei Chen <yuwei.chen@intel.com>
    Acked-by: Liming Gao <gaoliming@byosoft.com.cn>
    Reviewed-by: Yuwei Chen <yuwei.chen@intel.com>


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 10:46:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 10:46:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82731.152910 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l943m-0001Zm-99; Mon, 08 Feb 2021 10:46:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82731.152910; Mon, 08 Feb 2021 10:46:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l943m-0001Zd-5L; Mon, 08 Feb 2021 10:46:26 +0000
Received: by outflank-mailman (input) for mailman id 82731;
 Mon, 08 Feb 2021 10:46:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l943k-0001Yp-S2; Mon, 08 Feb 2021 10:46:24 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l943k-0001cU-ND; Mon, 08 Feb 2021 10:46:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l943k-0006rG-ET; Mon, 08 Feb 2021 10:46:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l943k-0008Fd-Dz; Mon, 08 Feb 2021 10:46:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=O0MSg8VMAaaQlpyANYh8BH1uBB3jKrGdPaAc6/eAdw0=; b=CSlUyJ2oIBegYP2J3l99fLEJ73
	zq1piHsZEFWu6vFdZIwqdbsc14mgRwwCDAY4/3k8SpP+yQLXuHsxrwRQa58RW2OzR+imoX7HzPBWX
	wVBQkhWCK8JJgrDB++GWbXsYgKgG1ZSkFcfzq6ukK+oYNZP4hdP36TuHoM/nPAoJQB70=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159122-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 159122: trouble: blocked/broken/preparing/queued/running
X-Osstest-Failures:
    qemu-mainline:build-amd64-pvops:<job status>:broken:regression
    qemu-mainline:build-armhf:<job status>:broken:regression
    qemu-mainline:build-i386:<job status>:broken:regression
    qemu-mainline:build-amd64-pvops:host-install(4):broken:regression
    qemu-mainline:build-i386:host-install(4):broken:regression
    qemu-mainline:build-armhf:host-install(4):broken:regression
    qemu-mainline:test-arm64-arm64-xl-credit1:<none executed>:queued:regression
    qemu-mainline:test-arm64-arm64-xl-credit2:<none executed>:queued:regression
    qemu-mainline:test-arm64-arm64-xl-seattle:<none executed>:queued:regression
    qemu-mainline:test-arm64-arm64-xl-thunderx:<none executed>:queued:regression
    qemu-mainline:test-arm64-arm64-xl-xsm:<none executed>:queued:regression
    qemu-mainline:test-armhf-armhf-libvirt:<none executed>:queued:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:<none executed>:queued:regression
    qemu-mainline:test-armhf-armhf-xl:<none executed>:queued:regression
    qemu-mainline:test-armhf-armhf-xl-arndale:<none executed>:queued:regression
    qemu-mainline:test-armhf-armhf-xl-credit1:<none executed>:queued:regression
    qemu-mainline:test-armhf-armhf-xl-multivcpu:<none executed>:queued:regression
    qemu-mainline:test-armhf-armhf-xl-rtds:<none executed>:queued:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:<none executed>:queued:regression
    qemu-mainline:test-amd64-i386-pair:<none executed>:queued:regression
    qemu-mainline:test-amd64-i386-libvirt-xsm:<none executed>:queued:regression
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:<none executed>:queued:regression
    qemu-mainline:test-amd64-i386-libvirt-pair:<none executed>:queued:regression
    qemu-mainline:test-amd64-i386-libvirt:<none executed>:queued:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:<none executed>:queued:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:<none executed>:queued:regression
    qemu-mainline:test-amd64-coresched-i386-xl:<none executed>:queued:regression
    qemu-mainline:test-amd64-coresched-amd64-xl:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-xl-xsm:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-xl-shadow:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-xl-rtds:<none executed>:queued:regression
    qemu-mainline:build-amd64-libvirt:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:<none executed>:queued:regression
    qemu-mainline:build-arm64-libvirt:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:<none executed>:queued:regression
    qemu-mainline:build-armhf-libvirt:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-xl-pvshim:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-i386-pvgrub:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-libvirt:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-libvirt-pair:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-libvirt-xsm:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-xl-multivcpu:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-pair:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-pygrub:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-xl-credit2:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-xl-credit1:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:<none executed>:queued:regression
    qemu-mainline:test-amd64-amd64-xl:<none executed>:queued:regression
    qemu-mainline:test-armhf-armhf-xl-credit2:<none executed>:queued:regression
    qemu-mainline:test-armhf-armhf-xl-cubietruck:<none executed>:queued:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:<none executed>:queued:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:<none executed>:queued:regression
    qemu-mainline:test-amd64-i386-xl:<none executed>:queued:regression
    qemu-mainline:test-amd64-i386-xl-pvshim:<none executed>:queued:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:<none executed>:queued:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:<none executed>:queued:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:<none executed>:queued:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:<none executed>:queued:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:<none executed>:queued:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:<none executed>:queued:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:<none executed>:queued:regression
    qemu-mainline:test-amd64-i386-xl-raw:<none executed>:queued:regression
    qemu-mainline:test-amd64-i386-xl-shadow:<none executed>:queued:regression
    qemu-mainline:test-amd64-i386-xl-xsm:<none executed>:queued:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:<none executed>:queued:regression
    qemu-mainline:test-arm64-arm64-xl:<none executed>:queued:regression
    qemu-mainline:build-amd64:hosts-allocate:running:regression
    qemu-mainline:build-arm64:hosts-allocate:running:regression
    qemu-mainline:build-arm64-pvops:hosts-allocate:running:regression
    qemu-mainline:build-arm64-xsm:hosts-allocate:running:regression
    qemu-mainline:build-i386-pvops:hosts-allocate:running:regression
    qemu-mainline:build-i386-xsm:hosts-allocate:running:regression
    qemu-mainline:build-armhf-pvops:host-install(4):running:regression
    qemu-mainline:build-amd64-xsm:host-install(4):running:regression
    qemu-mainline:build-armhf-pvops:syslog-server:running:regression
    qemu-mainline:build-amd64-xsm:syslog-server:running:regression
    qemu-mainline:build-armhf:syslog-server:running:regression
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=5b19cb63d9dfda41b412373b8c9fe14641bcab60
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 08 Feb 2021 10:46:24 +0000

flight 159122 qemu-mainline running [real]
http://logs.test-lab.xenproject.org/osstest/logs/159122/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops               <job status>                 broken
 build-armhf                     <job status>                 broken
 build-i386                      <job status>                 broken
 build-amd64-pvops             4 host-install(4)        broken REGR. vs. 152631
 build-i386                    4 host-install(4)        broken REGR. vs. 152631
 build-armhf                   4 host-install(4)        broken REGR. vs. 152631
 test-arm64-arm64-xl-credit1     <none executed>              queued
 test-arm64-arm64-xl-credit2     <none executed>              queued
 test-arm64-arm64-xl-seattle     <none executed>              queued
 test-arm64-arm64-xl-thunderx    <none executed>              queued
 test-arm64-arm64-xl-xsm         <none executed>              queued
 test-armhf-armhf-libvirt        <none executed>              queued
 test-armhf-armhf-libvirt-raw    <none executed>              queued
 test-armhf-armhf-xl             <none executed>              queued
 test-armhf-armhf-xl-arndale     <none executed>              queued
 test-armhf-armhf-xl-credit1     <none executed>              queued
 test-armhf-armhf-xl-multivcpu    <none executed>              queued
 test-armhf-armhf-xl-rtds        <none executed>              queued
 test-armhf-armhf-xl-vhd         <none executed>              queued
 test-amd64-i386-pair            <none executed>              queued
 test-amd64-i386-libvirt-xsm     <none executed>              queued
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm    <none executed>    queued
 test-amd64-i386-libvirt-pair    <none executed>              queued
 test-amd64-i386-libvirt         <none executed>              queued
 test-amd64-i386-freebsd10-i386    <none executed>              queued
 test-amd64-i386-freebsd10-amd64    <none executed>              queued
 test-amd64-coresched-i386-xl    <none executed>              queued
 test-amd64-coresched-amd64-xl    <none executed>              queued
 test-amd64-amd64-xl-xsm         <none executed>              queued
 test-amd64-amd64-xl-shadow      <none executed>              queued
 test-amd64-amd64-xl-rtds        <none executed>              queued
 build-amd64-libvirt             <none executed>              queued
 test-amd64-amd64-xl-qemuu-ws16-amd64    <none executed>              queued
 test-amd64-amd64-xl-qemuu-win7-amd64    <none executed>              queued
 build-arm64-libvirt             <none executed>              queued
 test-amd64-amd64-xl-qemuu-ovmf-amd64    <none executed>              queued
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict   <none executed> queued
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm    <none executed>         queued
 build-armhf-libvirt             <none executed>              queued
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow    <none executed>     queued
 test-amd64-amd64-xl-qemuu-debianhvm-amd64    <none executed>            queued
 test-amd64-amd64-amd64-pvgrub    <none executed>              queued
 test-amd64-amd64-xl-qcow2       <none executed>              queued
 test-amd64-amd64-dom0pvh-xl-amd    <none executed>              queued
 test-amd64-amd64-xl-pvshim      <none executed>              queued
 test-amd64-amd64-dom0pvh-xl-intel    <none executed>              queued
 test-amd64-amd64-i386-pvgrub    <none executed>              queued
 test-amd64-amd64-libvirt        <none executed>              queued
 test-amd64-amd64-xl-pvhv2-intel    <none executed>              queued
 test-amd64-amd64-libvirt-pair    <none executed>              queued
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm    <none executed>   queued
 test-amd64-amd64-xl-pvhv2-amd    <none executed>              queued
 test-amd64-amd64-libvirt-vhd    <none executed>              queued
 test-amd64-amd64-libvirt-xsm    <none executed>              queued
 test-amd64-amd64-xl-multivcpu    <none executed>              queued
 test-amd64-amd64-pair           <none executed>              queued
 test-amd64-amd64-pygrub         <none executed>              queued
 test-amd64-amd64-xl-credit2     <none executed>              queued
 test-amd64-amd64-qemuu-freebsd11-amd64    <none executed>              queued
 test-amd64-amd64-qemuu-freebsd12-amd64    <none executed>              queued
 test-amd64-amd64-xl-credit1     <none executed>              queued
 test-amd64-amd64-qemuu-nested-amd    <none executed>              queued
 test-amd64-amd64-qemuu-nested-intel    <none executed>              queued
 test-amd64-amd64-xl             <none executed>              queued
 test-armhf-armhf-xl-credit2     <none executed>              queued
 test-armhf-armhf-xl-cubietruck    <none executed>              queued
 test-amd64-i386-qemuu-rhel6hvm-amd    <none executed>              queued
 test-amd64-i386-qemuu-rhel6hvm-intel    <none executed>              queued
 test-amd64-i386-xl              <none executed>              queued
 test-amd64-i386-xl-pvshim       <none executed>              queued
 test-amd64-i386-xl-qemuu-debianhvm-amd64    <none executed>             queued
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow    <none executed>      queued
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm    <none executed>          queued
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict    <none executed> queued
 test-amd64-i386-xl-qemuu-ovmf-amd64    <none executed>              queued
 test-amd64-i386-xl-qemuu-win7-amd64    <none executed>              queued
 test-amd64-i386-xl-qemuu-ws16-amd64    <none executed>              queued
 test-amd64-i386-xl-raw          <none executed>              queued
 test-amd64-i386-xl-shadow       <none executed>              queued
 test-amd64-i386-xl-xsm          <none executed>              queued
 test-arm64-arm64-libvirt-xsm    <none executed>              queued
 test-arm64-arm64-xl             <none executed>              queued
 build-amd64                   2 hosts-allocate               running
 build-arm64                   2 hosts-allocate               running
 build-arm64-pvops             2 hosts-allocate               running
 build-arm64-xsm               2 hosts-allocate               running
 build-i386-pvops              2 hosts-allocate               running
 build-i386-xsm                2 hosts-allocate               running
 build-armhf-pvops             4 host-install(4)              running
 build-amd64-xsm               4 host-install(4)              running
 build-armhf-pvops             3 syslog-server                running
 build-amd64-xsm               3 syslog-server                running
 build-armhf                   3 syslog-server                running

Tests which did not succeed, but are not blocking:
 build-i386-libvirt            1 build-check(1)               blocked  n/a

version targeted for testing:
 qemuu                5b19cb63d9dfda41b412373b8c9fe14641bcab60
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  172 days
Failing since        152659  2020-08-21 14:07:39 Z  170 days  341 attempts
Testing same since   159107  2021-02-07 19:51:11 Z    0 days    1 attempts

------------------------------------------------------------
375 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              running 
 build-arm64-xsm                                              preparing
 build-i386-xsm                                               preparing
 build-amd64                                                  preparing
 build-arm64                                                  preparing
 build-armhf                                                  broken  
 build-i386                                                   broken  
 build-amd64-libvirt                                          queued  
 build-arm64-libvirt                                          queued  
 build-armhf-libvirt                                          queued  
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            broken  
 build-arm64-pvops                                            preparing
 build-armhf-pvops                                            running 
 build-i386-pvops                                             preparing
 test-amd64-amd64-xl                                          queued  
 test-amd64-coresched-amd64-xl                                queued  
 test-arm64-arm64-xl                                          queued  
 test-armhf-armhf-xl                                          queued  
 test-amd64-i386-xl                                           queued  
 test-amd64-coresched-i386-xl                                 queued  
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           queued  
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            queued  
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 queued  
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  queued  
 test-amd64-amd64-libvirt-xsm                                 queued  
 test-arm64-arm64-libvirt-xsm                                 queued  
 test-amd64-i386-libvirt-xsm                                  queued  
 test-amd64-amd64-xl-xsm                                      queued  
 test-arm64-arm64-xl-xsm                                      queued  
 test-amd64-i386-xl-xsm                                       queued  
 test-amd64-amd64-qemuu-nested-amd                            queued  
 test-amd64-amd64-xl-pvhv2-amd                                queued  
 test-amd64-i386-qemuu-rhel6hvm-amd                           queued  
 test-amd64-amd64-dom0pvh-xl-amd                              queued  
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    queued  
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     queued  
 test-amd64-i386-freebsd10-amd64                              queued  
 test-amd64-amd64-qemuu-freebsd11-amd64                       queued  
 test-amd64-amd64-qemuu-freebsd12-amd64                       queued  
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         queued  
 test-amd64-i386-xl-qemuu-ovmf-amd64                          queued  
 test-amd64-amd64-xl-qemuu-win7-amd64                         queued  
 test-amd64-i386-xl-qemuu-win7-amd64                          queued  
 test-amd64-amd64-xl-qemuu-ws16-amd64                         queued  
 test-amd64-i386-xl-qemuu-ws16-amd64                          queued  
 test-armhf-armhf-xl-arndale                                  queued  
 test-amd64-amd64-xl-credit1                                  queued  
 test-arm64-arm64-xl-credit1                                  queued  
 test-armhf-armhf-xl-credit1                                  queued  
 test-amd64-amd64-xl-credit2                                  queued  
 test-arm64-arm64-xl-credit2                                  queued  
 test-armhf-armhf-xl-credit2                                  queued  
 test-armhf-armhf-xl-cubietruck                               queued  
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        queued  
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         queued  
 test-amd64-i386-freebsd10-i386                               queued  
 test-amd64-amd64-qemuu-nested-intel                          queued  
 test-amd64-amd64-xl-pvhv2-intel                              queued  
 test-amd64-i386-qemuu-rhel6hvm-intel                         queued  
 test-amd64-amd64-dom0pvh-xl-intel                            queued  
 test-amd64-amd64-libvirt                                     queued  
 test-armhf-armhf-libvirt                                     queued  
 test-amd64-i386-libvirt                                      queued  
 test-amd64-amd64-xl-multivcpu                                queued  
 test-armhf-armhf-xl-multivcpu                                queued  
 test-amd64-amd64-pair                                        queued  
 test-amd64-i386-pair                                         queued  
 test-amd64-amd64-libvirt-pair                                queued  
 test-amd64-i386-libvirt-pair                                 queued  
 test-amd64-amd64-amd64-pvgrub                                queued  
 test-amd64-amd64-i386-pvgrub                                 queued  
 test-amd64-amd64-xl-pvshim                                   queued  
 test-amd64-i386-xl-pvshim                                    queued  
 test-amd64-amd64-pygrub                                      queued  
 test-amd64-amd64-xl-qcow2                                    queued  
 test-armhf-armhf-libvirt-raw                                 queued  
 test-amd64-i386-xl-raw                                       queued  
 test-amd64-amd64-xl-rtds                                     queued  
 test-armhf-armhf-xl-rtds                                     queued  
 test-arm64-arm64-xl-seattle                                  queued  
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             queued  
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              queued  
 test-amd64-amd64-xl-shadow                                   queued  
 test-amd64-i386-xl-shadow                                    queued  
 test-arm64-arm64-xl-thunderx                                 queued  
 test-amd64-amd64-libvirt-vhd                                 queued  
 test-armhf-armhf-xl-vhd                                      queued  


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-arm64-arm64-xl-credit1 queued
broken-job test-arm64-arm64-xl-credit2 queued
broken-job test-arm64-arm64-xl-seattle queued
broken-job test-arm64-arm64-xl-thunderx queued
broken-job test-arm64-arm64-xl-xsm queued
broken-job test-armhf-armhf-libvirt queued
broken-job test-armhf-armhf-libvirt-raw queued
broken-job test-armhf-armhf-xl queued
broken-job test-armhf-armhf-xl-arndale queued
broken-job test-armhf-armhf-xl-credit1 queued
broken-job test-armhf-armhf-xl-multivcpu queued
broken-job test-armhf-armhf-xl-rtds queued
broken-job test-armhf-armhf-xl-vhd queued
broken-job test-amd64-i386-pair queued
broken-job test-amd64-i386-libvirt-xsm queued
broken-job test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm queued
broken-job test-amd64-i386-libvirt-pair queued
broken-job test-amd64-i386-libvirt queued
broken-job test-amd64-i386-freebsd10-i386 queued
broken-job test-amd64-i386-freebsd10-amd64 queued
broken-job test-amd64-coresched-i386-xl queued
broken-job test-amd64-coresched-amd64-xl queued
broken-job test-amd64-amd64-xl-xsm queued
broken-job test-amd64-amd64-xl-shadow queued
broken-job test-amd64-amd64-xl-rtds queued
broken-job build-amd64-libvirt queued
broken-job test-amd64-amd64-xl-qemuu-ws16-amd64 queued
broken-job build-amd64-pvops broken
broken-job test-amd64-amd64-xl-qemuu-win7-amd64 queued
broken-job build-arm64-libvirt queued
broken-job test-amd64-amd64-xl-qemuu-ovmf-amd64 queued
broken-job test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict queued
broken-job build-armhf broken
broken-job test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm queued
broken-job build-armhf-libvirt queued
broken-job build-i386 broken
broken-job test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow queued
broken-job test-amd64-amd64-xl-qemuu-debianhvm-amd64 queued
broken-job test-amd64-amd64-amd64-pvgrub queued
broken-job test-amd64-amd64-xl-qcow2 queued
broken-job test-amd64-amd64-dom0pvh-xl-amd queued
broken-job test-amd64-amd64-xl-pvshim queued
broken-job test-amd64-amd64-dom0pvh-xl-intel queued
broken-job test-amd64-amd64-i386-pvgrub queued
broken-job test-amd64-amd64-libvirt queued
broken-job test-amd64-amd64-xl-pvhv2-intel queued
broken-job test-amd64-amd64-libvirt-pair queued
broken-job test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm queued
broken-job test-amd64-amd64-xl-pvhv2-amd queued
broken-job test-amd64-amd64-libvirt-vhd queued
broken-job test-amd64-amd64-libvirt-xsm queued
broken-job test-amd64-amd64-xl-multivcpu queued
broken-job test-amd64-amd64-pair queued
broken-job test-amd64-amd64-pygrub queued
broken-job test-amd64-amd64-xl-credit2 queued
broken-job test-amd64-amd64-qemuu-freebsd11-amd64 queued
broken-job test-amd64-amd64-qemuu-freebsd12-amd64 queued
broken-job test-amd64-amd64-xl-credit1 queued
broken-job test-amd64-amd64-qemuu-nested-amd queued
broken-job test-amd64-amd64-qemuu-nested-intel queued
broken-job test-amd64-amd64-xl queued
broken-job test-armhf-armhf-xl-credit2 queued
broken-job test-armhf-armhf-xl-cubietruck queued
broken-job test-amd64-i386-qemuu-rhel6hvm-amd queued
broken-job test-amd64-i386-qemuu-rhel6hvm-intel queued
broken-job test-amd64-i386-xl queued
broken-job test-amd64-i386-xl-pvshim queued
broken-job test-amd64-i386-xl-qemuu-debianhvm-amd64 queued
broken-job test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow queued
broken-job test-amd64-i386-xl-qemuu-debianhvm-i386-xsm queued
broken-job test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict queued
broken-job test-amd64-i386-xl-qemuu-ovmf-amd64 queued
broken-job test-amd64-i386-xl-qemuu-win7-amd64 queued
broken-job test-amd64-i386-xl-qemuu-ws16-amd64 queued
broken-job test-amd64-i386-xl-raw queued
broken-job test-amd64-i386-xl-shadow queued
broken-job test-amd64-i386-xl-xsm queued
broken-job test-arm64-arm64-libvirt-xsm queued
broken-job test-arm64-arm64-xl queued
broken-step build-amd64-pvops host-install(4)
broken-step build-i386 host-install(4)
broken-step build-armhf host-install(4)

Not pushing.

(No revision log; it would be 105392 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 10:46:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 10:46:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82733.152925 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l943r-0001gU-Rp; Mon, 08 Feb 2021 10:46:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82733.152925; Mon, 08 Feb 2021 10:46:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l943r-0001gM-O4; Mon, 08 Feb 2021 10:46:31 +0000
Received: by outflank-mailman (input) for mailman id 82733;
 Mon, 08 Feb 2021 10:46:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l943r-0001fj-0l; Mon, 08 Feb 2021 10:46:31 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l943q-0001ce-RA; Mon, 08 Feb 2021 10:46:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l943q-0006rN-Je; Mon, 08 Feb 2021 10:46:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l943q-0008KZ-J8; Mon, 08 Feb 2021 10:46:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=d1yddNztOfIKtA7MRXWIn2EFZwS2x5ts+jHsn/EBsRs=; b=QewbHZyHi+ivtKta6ZGAx8T8v/
	y8Iolx6Q9MU0TvQ9ztZVayGEWaoba7DfPTdMo6xa66gaFcvxLDa5zUDq4EJBPZIc7fFUD51OUnnb5
	EWnKD0VMgbFFTRMuR3q6B9VAlgAtHSGC90Im+wzv6LgPi7WT6DaXd6IEJdpfOQG13dv8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159116-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 159116: trouble: blocked/broken/preparing/queued
X-Osstest-Failures:
    linux-linus:build-amd64-xsm:<job status>:broken:regression
    linux-linus:build-arm64:<job status>:broken:regression
    linux-linus:build-arm64-pvops:<job status>:broken:regression
    linux-linus:build-arm64-xsm:<job status>:broken:regression
    linux-linus:build-armhf:<job status>:broken:regression
    linux-linus:build-armhf-pvops:<job status>:broken:regression
    linux-linus:build-i386:<job status>:broken:regression
    linux-linus:build-i386-pvops:<job status>:broken:regression
    linux-linus:build-i386-xsm:<job status>:broken:regression
    linux-linus:build-i386-pvops:host-install(4):broken:regression
    linux-linus:build-arm64:host-install(4):broken:regression
    linux-linus:build-i386:host-install(4):broken:regression
    linux-linus:build-i386-xsm:host-install(4):broken:regression
    linux-linus:build-arm64-xsm:host-install(4):broken:regression
    linux-linus:build-arm64-pvops:host-install(4):broken:regression
    linux-linus:build-amd64-xsm:host-install(4):broken:regression
    linux-linus:build-armhf-pvops:host-install(4):broken:regression
    linux-linus:build-armhf:host-install(4):broken:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:<none executed>:queued:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:<none executed>:queued:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:<none executed>:queued:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:<none executed>:queued:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:<none executed>:queued:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:<none executed>:queued:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:<none executed>:queued:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:<none executed>:queued:regression
    linux-linus:test-amd64-i386-xl-raw:<none executed>:queued:regression
    linux-linus:test-arm64-arm64-examine:<none executed>:queued:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:<none executed>:queued:regression
    linux-linus:test-arm64-arm64-xl:<none executed>:queued:regression
    linux-linus:test-arm64-arm64-xl-credit1:<none executed>:queued:regression
    linux-linus:test-arm64-arm64-xl-credit2:<none executed>:queued:regression
    linux-linus:test-arm64-arm64-xl-seattle:<none executed>:queued:regression
    linux-linus:test-arm64-arm64-xl-thunderx:<none executed>:queued:regression
    linux-linus:test-arm64-arm64-xl-xsm:<none executed>:queued:regression
    linux-linus:test-amd64-i386-examine:<none executed>:queued:regression
    linux-linus:test-amd64-coresched-i386-xl:<none executed>:queued:regression
    linux-linus:test-amd64-coresched-amd64-xl:<none executed>:queued:regression
    linux-linus:test-amd64-amd64-xl-xsm:<none executed>:queued:regression
    linux-linus:test-amd64-amd64-xl-shadow:<none executed>:queued:regression
    linux-linus:test-amd64-amd64-xl-rtds:<none executed>:queued:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:<none executed>:queued:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:<none executed>:queued:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:<none executed>:queued:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:<none executed>:queued:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:<none executed>:queued:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:<none executed>:queued:regression
    linux-linus:build-amd64-libvirt:<none executed>:queued:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:<none executed>:queued:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:<none executed>:queued:regression
    linux-linus:build-arm64-libvirt:<none executed>:queued:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:<none executed>:queued:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:<none executed>:queued:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:<none executed>:queued:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:<none executed>:queued:regression
    linux-linus:build-i386-libvirt:<none executed>:queued:regression
    linux-linus:test-amd64-amd64-xl-qcow2:<none executed>:queued:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:<none executed>:queued:regression
    linux-linus:test-amd64-amd64-xl-pvshim:<none executed>:queued:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:<none executed>:queued:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:<none executed>:queued:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:<none executed>:queued:regression
    linux-linus:test-amd64-amd64-examine:<none executed>:queued:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:<none executed>:queued:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:<none executed>:queued:regression
    linux-linus:test-amd64-amd64-libvirt:<none executed>:queued:regression
    linux-linus:test-amd64-amd64-libvirt-pair:<none executed>:queued:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:<none executed>:queued:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:<none executed>:queued:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:<none executed>:queued:regression
    linux-linus:test-amd64-amd64-xl-credit2:<none executed>:queued:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:<none executed>:queued:regression
    linux-linus:test-amd64-amd64-pair:<none executed>:queued:regression
    linux-linus:test-amd64-amd64-xl-credit1:<none executed>:queued:regression
    linux-linus:test-amd64-amd64-pygrub:<none executed>:queued:regression
    linux-linus:test-amd64-amd64-qemuu-freebsd11-amd64:<none executed>:queued:regression
    linux-linus:test-amd64-amd64-xl:<none executed>:queued:regression
    linux-linus:test-amd64-amd64-qemuu-freebsd12-amd64:<none executed>:queued:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:<none executed>:queued:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:<none executed>:queued:regression
    linux-linus:test-amd64-i386-xl-shadow:<none executed>:queued:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:<none executed>:queued:regression
    linux-linus:test-amd64-i386-freebsd10-i386:<none executed>:queued:regression
    linux-linus:test-amd64-i386-libvirt:<none executed>:queued:regression
    linux-linus:test-amd64-i386-libvirt-pair:<none executed>:queued:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:<none executed>:queued:regression
    linux-linus:test-amd64-i386-libvirt-xsm:<none executed>:queued:regression
    linux-linus:test-amd64-i386-pair:<none executed>:queued:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:<none executed>:queued:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:<none executed>:queued:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:<none executed>:queued:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:<none executed>:queued:regression
    linux-linus:test-amd64-i386-xl:<none executed>:queued:regression
    linux-linus:test-amd64-i386-xl-pvshim:<none executed>:queued:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:<none executed>:queued:regression
    linux-linus:build-amd64:hosts-allocate:running:regression
    linux-linus:build-amd64-pvops:hosts-allocate:running:regression
    linux-linus:build-arm64:syslog-server:running:regression
    linux-linus:build-i386:syslog-server:running:regression
    linux-linus:build-i386:capture-logs:running:regression
    linux-linus:build-arm64-xsm:syslog-server:running:regression
    linux-linus:build-arm64-xsm:capture-logs:running:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:build-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    linux=92bf22614b21a2706f4993b278017e437f7785b3
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 08 Feb 2021 10:46:30 +0000

flight 159116 linux-linus running [real]
http://logs.test-lab.xenproject.org/osstest/logs/159116/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm                 <job status>                 broken
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-armhf                     <job status>                 broken
 build-armhf-pvops               <job status>                 broken
 build-i386                      <job status>                 broken
 build-i386-pvops                <job status>                 broken
 build-i386-xsm                  <job status>                 broken
 build-i386-pvops              4 host-install(4)        broken REGR. vs. 152332
 build-arm64                   4 host-install(4)        broken REGR. vs. 152332
 build-i386                    4 host-install(4)        broken REGR. vs. 152332
 build-i386-xsm                4 host-install(4)        broken REGR. vs. 152332
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 152332
 build-arm64-pvops             4 host-install(4)        broken REGR. vs. 152332
 build-amd64-xsm               4 host-install(4)        broken REGR. vs. 152332
 build-armhf-pvops             4 host-install(4)        broken REGR. vs. 152332
 build-armhf                   4 host-install(4)        broken REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64    <none executed>              queued
 test-amd64-i386-xl-qemut-ws16-amd64    <none executed>              queued
 test-amd64-i386-xl-qemuu-debianhvm-amd64    <none executed>             queued
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow    <none executed>      queued
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict    <none executed> queued
 test-amd64-i386-xl-qemuu-ovmf-amd64    <none executed>              queued
 test-amd64-i386-xl-qemuu-win7-amd64    <none executed>              queued
 test-amd64-i386-xl-qemuu-ws16-amd64    <none executed>              queued
 test-amd64-i386-xl-raw          <none executed>              queued
 test-arm64-arm64-examine        <none executed>              queued
 test-arm64-arm64-libvirt-xsm    <none executed>              queued
 test-arm64-arm64-xl             <none executed>              queued
 test-arm64-arm64-xl-credit1     <none executed>              queued
 test-arm64-arm64-xl-credit2     <none executed>              queued
 test-arm64-arm64-xl-seattle     <none executed>              queued
 test-arm64-arm64-xl-thunderx    <none executed>              queued
 test-arm64-arm64-xl-xsm         <none executed>              queued
 test-amd64-i386-examine         <none executed>              queued
 test-amd64-coresched-i386-xl    <none executed>              queued
 test-amd64-coresched-amd64-xl    <none executed>              queued
 test-amd64-amd64-xl-xsm         <none executed>              queued
 test-amd64-amd64-xl-shadow      <none executed>              queued
 test-amd64-amd64-xl-rtds        <none executed>              queued
 test-amd64-amd64-xl-qemuu-ws16-amd64    <none executed>              queued
 test-amd64-amd64-xl-qemuu-win7-amd64    <none executed>              queued
 test-amd64-amd64-xl-qemuu-ovmf-amd64    <none executed>              queued
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict   <none executed> queued
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm    <none executed>         queued
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow    <none executed>     queued
 build-amd64-libvirt             <none executed>              queued
 test-amd64-amd64-xl-qemuu-debianhvm-amd64    <none executed>            queued
 test-amd64-amd64-xl-qemut-ws16-amd64    <none executed>              queued
 build-arm64-libvirt             <none executed>              queued
 test-amd64-amd64-xl-qemut-win7-amd64    <none executed>              queued
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm   <none executed> queued
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm    <none executed>         queued
 test-amd64-amd64-xl-qemut-debianhvm-amd64    <none executed>            queued
 build-i386-libvirt              <none executed>              queued
 test-amd64-amd64-xl-qcow2       <none executed>              queued
 test-amd64-amd64-amd64-pvgrub    <none executed>              queued
 test-amd64-amd64-xl-pvshim      <none executed>              queued
 test-amd64-amd64-dom0pvh-xl-amd    <none executed>              queued
 test-amd64-amd64-xl-pvhv2-intel    <none executed>              queued
 test-amd64-amd64-dom0pvh-xl-intel    <none executed>              queued
 test-amd64-amd64-examine        <none executed>              queued
 test-amd64-amd64-i386-pvgrub    <none executed>              queued
 test-amd64-amd64-xl-pvhv2-amd    <none executed>              queued
 test-amd64-amd64-libvirt        <none executed>              queued
 test-amd64-amd64-libvirt-pair    <none executed>              queued
 test-amd64-amd64-xl-multivcpu    <none executed>              queued
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm    <none executed>   queued
 test-amd64-amd64-libvirt-vhd    <none executed>              queued
 test-amd64-amd64-xl-credit2     <none executed>              queued
 test-amd64-amd64-libvirt-xsm    <none executed>              queued
 test-amd64-amd64-pair           <none executed>              queued
 test-amd64-amd64-xl-credit1     <none executed>              queued
 test-amd64-amd64-pygrub         <none executed>              queued
 test-amd64-amd64-qemuu-freebsd11-amd64    <none executed>              queued
 test-amd64-amd64-xl             <none executed>              queued
 test-amd64-amd64-qemuu-freebsd12-amd64    <none executed>              queued
 test-amd64-amd64-qemuu-nested-amd    <none executed>              queued
 test-amd64-amd64-qemuu-nested-intel    <none executed>              queued
 test-amd64-i386-xl-shadow       <none executed>              queued
 test-amd64-i386-freebsd10-amd64    <none executed>              queued
 test-amd64-i386-freebsd10-i386    <none executed>              queued
 test-amd64-i386-libvirt         <none executed>              queued
 test-amd64-i386-libvirt-pair    <none executed>              queued
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm    <none executed>    queued
 test-amd64-i386-libvirt-xsm     <none executed>              queued
 test-amd64-i386-pair            <none executed>              queued
 test-amd64-i386-qemut-rhel6hvm-amd    <none executed>              queued
 test-amd64-i386-qemut-rhel6hvm-intel    <none executed>              queued
 test-amd64-i386-qemuu-rhel6hvm-amd    <none executed>              queued
 test-amd64-i386-qemuu-rhel6hvm-intel    <none executed>              queued
 test-amd64-i386-xl              <none executed>              queued
 test-amd64-i386-xl-pvshim       <none executed>              queued
 test-amd64-i386-xl-qemut-debianhvm-amd64    <none executed>             queued
 build-amd64                   2 hosts-allocate               running
 build-amd64-pvops             2 hosts-allocate               running
 build-arm64                   3 syslog-server                running
 build-i386                    3 syslog-server                running
 build-i386                    5 capture-logs                 running
 build-arm64-xsm               3 syslog-server                running
 build-arm64-xsm               5 capture-logs                 running

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a

version targeted for testing:
 linux                92bf22614b21a2706f4993b278017e437f7785b3
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  191 days
Failing since        152366  2020-08-01 20:49:34 Z  190 days  337 attempts
Testing same since                          (not found)         0 attempts

------------------------------------------------------------
4560 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              broken  
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               broken  
 build-amd64                                                  preparing
 build-arm64                                                  broken  
 build-armhf                                                  broken  
 build-i386                                                   broken  
 build-amd64-libvirt                                          queued  
 build-arm64-libvirt                                          queued  
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           queued  
 build-amd64-pvops                                            preparing
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          queued  
 test-amd64-coresched-amd64-xl                                queued  
 test-arm64-arm64-xl                                          queued  
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           queued  
 test-amd64-coresched-i386-xl                                 queued  
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           queued  
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            queued  
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        queued  
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 queued  
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 queued  
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 queued  
 test-arm64-arm64-libvirt-xsm                                 queued  
 test-amd64-i386-libvirt-xsm                                  queued  
 test-amd64-amd64-xl-xsm                                      queued  
 test-arm64-arm64-xl-xsm                                      queued  
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            queued  
 test-amd64-amd64-xl-pvhv2-amd                                queued  
 test-amd64-i386-qemut-rhel6hvm-amd                           queued  
 test-amd64-i386-qemuu-rhel6hvm-amd                           queued  
 test-amd64-amd64-dom0pvh-xl-amd                              queued  
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    queued  
 test-amd64-i386-xl-qemut-debianhvm-amd64                     queued  
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    queued  
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     queued  
 test-amd64-i386-freebsd10-amd64                              queued  
 test-amd64-amd64-qemuu-freebsd11-amd64                       queued  
 test-amd64-amd64-qemuu-freebsd12-amd64                       queued  
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         queued  
 test-amd64-i386-xl-qemuu-ovmf-amd64                          queued  
 test-amd64-amd64-xl-qemut-win7-amd64                         queued  
 test-amd64-i386-xl-qemut-win7-amd64                          queued  
 test-amd64-amd64-xl-qemuu-win7-amd64                         queued  
 test-amd64-i386-xl-qemuu-win7-amd64                          queued  
 test-amd64-amd64-xl-qemut-ws16-amd64                         queued  
 test-amd64-i386-xl-qemut-ws16-amd64                          queued  
 test-amd64-amd64-xl-qemuu-ws16-amd64                         queued  
 test-amd64-i386-xl-qemuu-ws16-amd64                          queued  
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  queued  
 test-arm64-arm64-xl-credit1                                  queued  
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  queued  
 test-arm64-arm64-xl-credit2                                  queued  
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        queued  
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         queued  
 test-amd64-amd64-examine                                     queued  
 test-arm64-arm64-examine                                     queued  
 test-armhf-armhf-examine                                     blocked 
 test-amd64-i386-examine                                      queued  
 test-amd64-i386-freebsd10-i386                               queued  
 test-amd64-amd64-qemuu-nested-intel                          queued  
 test-amd64-amd64-xl-pvhv2-intel                              queued  
 test-amd64-i386-qemut-rhel6hvm-intel                         queued  
 test-amd64-i386-qemuu-rhel6hvm-intel                         queued  
 test-amd64-amd64-dom0pvh-xl-intel                            queued  
 test-amd64-amd64-libvirt                                     queued  
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      queued  
 test-amd64-amd64-xl-multivcpu                                queued  
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        queued  
 test-amd64-i386-pair                                         queued  
 test-amd64-amd64-libvirt-pair                                queued  
 test-amd64-i386-libvirt-pair                                 queued  
 test-amd64-amd64-amd64-pvgrub                                queued  
 test-amd64-amd64-i386-pvgrub                                 queued  
 test-amd64-amd64-xl-pvshim                                   queued  
 test-amd64-i386-xl-pvshim                                    queued  
 test-amd64-amd64-pygrub                                      queued  
 test-amd64-amd64-xl-qcow2                                    queued  
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       queued  
 test-amd64-amd64-xl-rtds                                     queued  
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  queued  
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             queued  
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              queued  
 test-amd64-amd64-xl-shadow                                   queued  
 test-amd64-i386-xl-shadow                                    queued  
 test-arm64-arm64-xl-thunderx                                 queued  
 test-amd64-amd64-libvirt-vhd                                 queued  
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-i386-xl-qemut-win7-amd64 queued
broken-job test-amd64-i386-xl-qemut-ws16-amd64 queued
broken-job test-amd64-i386-xl-qemuu-debianhvm-amd64 queued
broken-job test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow queued
broken-job test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict queued
broken-job test-amd64-i386-xl-qemuu-ovmf-amd64 queued
broken-job test-amd64-i386-xl-qemuu-win7-amd64 queued
broken-job test-amd64-i386-xl-qemuu-ws16-amd64 queued
broken-job test-amd64-i386-xl-raw queued
broken-job test-arm64-arm64-examine queued
broken-job test-arm64-arm64-libvirt-xsm queued
broken-job test-arm64-arm64-xl queued
broken-job test-arm64-arm64-xl-credit1 queued
broken-job test-arm64-arm64-xl-credit2 queued
broken-job test-arm64-arm64-xl-seattle queued
broken-job test-arm64-arm64-xl-thunderx queued
broken-job test-arm64-arm64-xl-xsm queued
broken-job test-amd64-i386-examine queued
broken-job test-amd64-coresched-i386-xl queued
broken-job test-amd64-coresched-amd64-xl queued
broken-job test-amd64-amd64-xl-xsm queued
broken-job test-amd64-amd64-xl-shadow queued
broken-job test-amd64-amd64-xl-rtds queued
broken-job test-amd64-amd64-xl-qemuu-ws16-amd64 queued
broken-job test-amd64-amd64-xl-qemuu-win7-amd64 queued
broken-job test-amd64-amd64-xl-qemuu-ovmf-amd64 queued
broken-job test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict queued
broken-job test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm queued
broken-job test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow queued
broken-job build-amd64-libvirt queued
broken-job test-amd64-amd64-xl-qemuu-debianhvm-amd64 queued
broken-job build-amd64-xsm broken
broken-job test-amd64-amd64-xl-qemut-ws16-amd64 queued
broken-job build-arm64 broken
broken-job build-arm64-libvirt queued
broken-job test-amd64-amd64-xl-qemut-win7-amd64 queued
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-job test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm queued
broken-job build-armhf broken
broken-job test-amd64-amd64-xl-qemut-debianhvm-i386-xsm queued
broken-job build-armhf-pvops broken
broken-job build-i386 broken
broken-job test-amd64-amd64-xl-qemut-debianhvm-amd64 queued
broken-job build-i386-libvirt queued
broken-job build-i386-pvops broken
broken-job test-amd64-amd64-xl-qcow2 queued
broken-job build-i386-xsm broken
broken-job test-amd64-amd64-amd64-pvgrub queued
broken-job test-amd64-amd64-xl-pvshim queued
broken-job test-amd64-amd64-dom0pvh-xl-amd queued
broken-job test-amd64-amd64-xl-pvhv2-intel queued
broken-job test-amd64-amd64-dom0pvh-xl-intel queued
broken-job test-amd64-amd64-examine queued
broken-job test-amd64-amd64-i386-pvgrub queued
broken-job test-amd64-amd64-xl-pvhv2-amd queued
broken-job test-amd64-amd64-libvirt queued
broken-job test-amd64-amd64-libvirt-pair queued
broken-job test-amd64-amd64-xl-multivcpu queued
broken-job test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm queued
broken-job test-amd64-amd64-libvirt-vhd queued
broken-job test-amd64-amd64-xl-credit2 queued
broken-job test-amd64-amd64-libvirt-xsm queued
broken-job test-amd64-amd64-pair queued
broken-job test-amd64-amd64-xl-credit1 queued
broken-job test-amd64-amd64-pygrub queued
broken-job test-amd64-amd64-qemuu-freebsd11-amd64 queued
broken-job test-amd64-amd64-xl queued
broken-job test-amd64-amd64-qemuu-freebsd12-amd64 queued
broken-job test-amd64-amd64-qemuu-nested-amd queued
broken-job test-amd64-amd64-qemuu-nested-intel queued
broken-job test-amd64-i386-xl-shadow queued
broken-job test-amd64-i386-freebsd10-amd64 queued
broken-job test-amd64-i386-freebsd10-i386 queued
broken-job test-amd64-i386-libvirt queued
broken-job test-amd64-i386-libvirt-pair queued
broken-job test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm queued
broken-job test-amd64-i386-libvirt-xsm queued
broken-job test-amd64-i386-pair queued
broken-job test-amd64-i386-qemut-rhel6hvm-amd queued
broken-job test-amd64-i386-qemut-rhel6hvm-intel queued
broken-job test-amd64-i386-qemuu-rhel6hvm-amd queued
broken-job test-amd64-i386-qemuu-rhel6hvm-intel queued
broken-job test-amd64-i386-xl queued
broken-job test-amd64-i386-xl-pvshim queued
broken-job test-amd64-i386-xl-qemut-debianhvm-amd64 queued
broken-step build-i386-pvops host-install(4)
broken-step build-arm64 host-install(4)
broken-step build-i386 host-install(4)
broken-step build-i386-xsm host-install(4)
broken-step build-arm64-xsm host-install(4)
broken-step build-arm64-pvops host-install(4)
broken-step build-amd64-xsm host-install(4)
broken-step build-armhf-pvops host-install(4)
broken-step build-armhf host-install(4)

Not pushing.

(No revision log; it would be 1029128 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 10:46:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 10:46:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82736.152940 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9440-0001pe-It; Mon, 08 Feb 2021 10:46:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82736.152940; Mon, 08 Feb 2021 10:46:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9440-0001pV-FA; Mon, 08 Feb 2021 10:46:40 +0000
Received: by outflank-mailman (input) for mailman id 82736;
 Mon, 08 Feb 2021 10:46:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l943y-0001o8-Fm; Mon, 08 Feb 2021 10:46:38 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l943y-0001cm-Al; Mon, 08 Feb 2021 10:46:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l943y-0006ra-1J; Mon, 08 Feb 2021 10:46:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l943y-00007b-0r; Mon, 08 Feb 2021 10:46:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=CYNyUjvslyMzMzQT1dhpKjVdVSPHiZvDjHNqjaTY37Q=; b=BXxYEL1u5GTY3t1PhM+OmMc38Q
	bgFijvf6Zimff+oe+DsFwF09CQdS175Ju8r0ganf9vj+2tKCQBgXhzM9RHNzlA14yYVbWitmswiKF
	i2SKKfTHQ7S4ifDfPmGHp0FwasWt+jbbSdjES0mgCVsDu1szZ4hXjEt4lpI/toIKFrqs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159111-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.12-testing test] 159111: trouble: blocked/broken/preparing/queued/running
X-Osstest-Failures:
    xen-4.12-testing:build-amd64-prev:<job status>:broken:regression
    xen-4.12-testing:build-amd64-xsm:<job status>:broken:regression
    xen-4.12-testing:build-arm64:<job status>:broken:regression
    xen-4.12-testing:build-arm64-pvops:<job status>:broken:regression
    xen-4.12-testing:build-arm64-xsm:<job status>:broken:regression
    xen-4.12-testing:build-armhf:<job status>:broken:regression
    xen-4.12-testing:build-armhf-pvops:<job status>:broken:regression
    xen-4.12-testing:build-i386:<job status>:broken:regression
    xen-4.12-testing:build-i386-prev:<job status>:broken:regression
    xen-4.12-testing:build-i386-pvops:<job status>:broken:regression
    xen-4.12-testing:build-i386-xsm:<job status>:broken:regression
    xen-4.12-testing:build-i386-pvops:host-install(4):broken:regression
    xen-4.12-testing:build-i386-prev:host-install(4):broken:regression
    xen-4.12-testing:build-i386-xsm:host-install(4):broken:regression
    xen-4.12-testing:build-i386:host-install(4):broken:regression
    xen-4.12-testing:build-arm64:host-install(4):broken:regression
    xen-4.12-testing:build-arm64-pvops:host-install(4):broken:regression
    xen-4.12-testing:build-arm64-xsm:host-install(4):broken:regression
    xen-4.12-testing:build-amd64-xsm:host-install(4):broken:regression
    xen-4.12-testing:build-armhf-pvops:host-install(4):broken:regression
    xen-4.12-testing:build-amd64-prev:host-install(4):broken:regression
    xen-4.12-testing:build-armhf:host-install(4):broken:regression
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ovmf-amd64:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:<none executed>:queued:regression
    xen-4.12-testing:test-xtf-amd64-amd64-1:<none executed>:queued:regression
    xen-4.12-testing:test-xtf-amd64-amd64-2:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-freebsd10-amd64:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-amd64-xl-xsm:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-amd64-xl-shadow:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-amd64-xl-rtds:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ovmf-amd64:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:<none executed>:queued:regression
    xen-4.12-testing:build-amd64-libvirt:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemut-debianhvm-amd64:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-amd64-xl-pvshim:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-amd64-xl-pvhv2-intel:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-amd64-xl-pvhv2-amd:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-amd64-xl-multivcpu:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-amd64-amd64-pvgrub:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-amd64-i386-pvgrub:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-amd64-xl-credit2:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-amd64-libvirt:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-amd64-libvirt-pair:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-amd64-xl-credit1:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-amd64-libvirt-vhd:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-amd64-xl:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-amd64-libvirt-xsm:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-amd64-livepatch:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-intel:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-amd64-migrupgrade:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-amd64-pair:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-amd64-pygrub:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-amd64-qemuu-freebsd11-amd64:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-amd64-qemuu-freebsd12-amd64:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-xl-shadow:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-xl-raw:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-freebsd10-i386:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-libvirt:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-libvirt-pair:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-livepatch:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-migrupgrade:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-pair:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-qemut-rhel6hvm-amd:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-qemut-rhel6hvm-intel:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-qemuu-rhel6hvm-amd:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-qemuu-rhel6hvm-intel:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-xl:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-xl-pvshim:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-xl-qemut-debianhvm-amd64:<none executed>:queued:regression
    xen-4.12-testing:test-xtf-amd64-amd64-3:<none executed>:queued:regression
    xen-4.12-testing:test-xtf-amd64-amd64-4:<none executed>:queued:regression
    xen-4.12-testing:test-xtf-amd64-amd64-5:<none executed>:queued:regression
    xen-4.12-testing:build-amd64:hosts-allocate:running:regression
    xen-4.12-testing:build-amd64-pvops:hosts-allocate:running:regression
    xen-4.12-testing:build-amd64-xtf:host-install(4):running:regression
    xen-4.12-testing:build-amd64-xtf:syslog-server:running:regression
    xen-4.12-testing:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    xen-4.12-testing:build-arm64-libvirt:build-check(1):blocked:nonblocking
    xen-4.12-testing:build-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-4.12-testing:build-i386-libvirt:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=8d26cdd3b66ab86d560dacd763d76ff3da95723e
X-Osstest-Versions-That:
    xen=cce7cbd986c122a86582ff3775b6b559d877407c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 08 Feb 2021 10:46:38 +0000

flight 159111 xen-4.12-testing running [real]
http://logs.test-lab.xenproject.org/osstest/logs/159111/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-prev                <job status>                 broken
 build-amd64-xsm                 <job status>                 broken
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-armhf                     <job status>                 broken
 build-armhf-pvops               <job status>                 broken
 build-i386                      <job status>                 broken
 build-i386-prev                 <job status>                 broken
 build-i386-pvops                <job status>                 broken
 build-i386-xsm                  <job status>                 broken
 build-i386-pvops              4 host-install(4)        broken REGR. vs. 158556
 build-i386-prev               4 host-install(4)        broken REGR. vs. 158556
 build-i386-xsm                4 host-install(4)        broken REGR. vs. 158556
 build-i386                    4 host-install(4)        broken REGR. vs. 158556
 build-arm64                   4 host-install(4)        broken REGR. vs. 158556
 build-arm64-pvops             4 host-install(4)        broken REGR. vs. 158556
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 158556
 build-amd64-xsm               4 host-install(4)        broken REGR. vs. 158556
 build-armhf-pvops             4 host-install(4)        broken REGR. vs. 158556
 build-amd64-prev              4 host-install(4)        broken REGR. vs. 158556
 build-armhf                   4 host-install(4)        broken REGR. vs. 158556
 test-amd64-i386-xl-qemut-win7-amd64    <none executed>              queued
 test-amd64-i386-xl-qemut-ws16-amd64    <none executed>              queued
 test-amd64-i386-xl-qemuu-debianhvm-amd64    <none executed>             queued
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow    <none executed>      queued
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict    <none executed> queued
 test-amd64-i386-xl-qemuu-ovmf-amd64    <none executed>              queued
 test-amd64-i386-xl-qemuu-win7-amd64    <none executed>              queued
 test-amd64-i386-xl-qemuu-ws16-amd64    <none executed>              queued
 test-xtf-amd64-amd64-1          <none executed>              queued
 test-xtf-amd64-amd64-2          <none executed>              queued
 test-amd64-i386-freebsd10-amd64    <none executed>              queued
 test-amd64-amd64-xl-xsm         <none executed>              queued
 test-amd64-amd64-xl-shadow      <none executed>              queued
 test-amd64-amd64-xl-rtds        <none executed>              queued
 test-amd64-amd64-xl-qemuu-ws16-amd64    <none executed>              queued
 test-amd64-amd64-xl-qemuu-win7-amd64    <none executed>              queued
 test-amd64-amd64-xl-qemuu-ovmf-amd64    <none executed>              queued
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict   <none executed> queued
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm    <none executed>         queued
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow    <none executed>     queued
 test-amd64-amd64-xl-qemuu-debianhvm-amd64    <none executed>            queued
 test-amd64-amd64-xl-qemut-ws16-amd64    <none executed>              queued
 build-amd64-libvirt             <none executed>              queued
 test-amd64-amd64-xl-qemut-win7-amd64    <none executed>              queued
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm   <none executed> queued
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm    <none executed>         queued
 test-amd64-amd64-xl-qemut-debianhvm-amd64    <none executed>            queued
 test-amd64-amd64-xl-qcow2       <none executed>              queued
 test-amd64-amd64-xl-pvshim      <none executed>              queued
 test-amd64-amd64-xl-pvhv2-intel    <none executed>              queued
 test-amd64-amd64-xl-pvhv2-amd    <none executed>              queued
 test-amd64-amd64-xl-multivcpu    <none executed>              queued
 test-amd64-amd64-amd64-pvgrub    <none executed>              queued
 test-amd64-amd64-i386-pvgrub    <none executed>              queued
 test-amd64-amd64-xl-credit2     <none executed>              queued
 test-amd64-amd64-libvirt        <none executed>              queued
 test-amd64-amd64-libvirt-pair    <none executed>              queued
 test-amd64-amd64-xl-credit1     <none executed>              queued
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm    <none executed>   queued
 test-amd64-amd64-libvirt-vhd    <none executed>              queued
 test-amd64-amd64-xl             <none executed>              queued
 test-amd64-amd64-libvirt-xsm    <none executed>              queued
 test-amd64-amd64-livepatch      <none executed>              queued
 test-amd64-amd64-qemuu-nested-intel    <none executed>              queued
 test-amd64-amd64-migrupgrade    <none executed>              queued
 test-amd64-amd64-pair           <none executed>              queued
 test-amd64-amd64-qemuu-nested-amd    <none executed>              queued
 test-amd64-amd64-pygrub         <none executed>              queued
 test-amd64-amd64-qemuu-freebsd11-amd64    <none executed>              queued
 test-amd64-amd64-qemuu-freebsd12-amd64    <none executed>              queued
 test-amd64-i386-xl-shadow       <none executed>              queued
 test-amd64-i386-xl-raw          <none executed>              queued
 test-amd64-i386-freebsd10-i386    <none executed>              queued
 test-amd64-i386-libvirt         <none executed>              queued
 test-amd64-i386-libvirt-pair    <none executed>              queued
 test-amd64-i386-livepatch       <none executed>              queued
 test-amd64-i386-migrupgrade     <none executed>              queued
 test-amd64-i386-pair            <none executed>              queued
 test-amd64-i386-qemut-rhel6hvm-amd    <none executed>              queued
 test-amd64-i386-qemut-rhel6hvm-intel    <none executed>              queued
 test-amd64-i386-qemuu-rhel6hvm-amd    <none executed>              queued
 test-amd64-i386-qemuu-rhel6hvm-intel    <none executed>              queued
 test-amd64-i386-xl              <none executed>              queued
 test-amd64-i386-xl-pvshim       <none executed>              queued
 test-amd64-i386-xl-qemut-debianhvm-amd64    <none executed>             queued
 test-xtf-amd64-amd64-3          <none executed>              queued
 test-xtf-amd64-amd64-4          <none executed>              queued
 test-xtf-amd64-amd64-5          <none executed>              queued
 build-amd64                   2 hosts-allocate               running
 build-amd64-pvops             2 hosts-allocate               running
 build-amd64-xtf               4 host-install(4)              running
 build-amd64-xtf               3 syslog-server                running

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a

version targeted for testing:
 xen                  8d26cdd3b66ab86d560dacd763d76ff3da95723e
baseline version:
 xen                  cce7cbd986c122a86582ff3775b6b559d877407c

Last test of basis   158556  2021-01-21 15:37:26 Z   17 days
Failing since        159017  2021-02-04 15:06:13 Z    3 days    3 attempts
Testing same since   159052  2021-02-05 18:27:22 Z    2 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              broken  
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               broken  
 build-amd64-xtf                                              running 
 build-amd64                                                  preparing
 build-arm64                                                  broken  
 build-armhf                                                  broken  
 build-i386                                                   broken  
 build-amd64-libvirt                                          queued  
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-prev                                             broken  
 build-i386-prev                                              broken  
 build-amd64-pvops                                            preparing
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-xtf-amd64-amd64-1                                       queued  
 test-xtf-amd64-amd64-2                                       queued  
 test-xtf-amd64-amd64-3                                       queued  
 test-xtf-amd64-amd64-4                                       queued  
 test-xtf-amd64-amd64-5                                       queued  
 test-amd64-amd64-xl                                          queued  
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           queued  
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           queued  
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        queued  
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 queued  
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 queued  
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 queued  
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      queued  
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            queued  
 test-amd64-amd64-xl-pvhv2-amd                                queued  
 test-amd64-i386-qemut-rhel6hvm-amd                           queued  
 test-amd64-i386-qemuu-rhel6hvm-amd                           queued  
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    queued  
 test-amd64-i386-xl-qemut-debianhvm-amd64                     queued  
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    queued  
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     queued  
 test-amd64-i386-freebsd10-amd64                              queued  
 test-amd64-amd64-qemuu-freebsd11-amd64                       queued  
 test-amd64-amd64-qemuu-freebsd12-amd64                       queued  
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         queued  
 test-amd64-i386-xl-qemuu-ovmf-amd64                          queued  
 test-amd64-amd64-xl-qemut-win7-amd64                         queued  
 test-amd64-i386-xl-qemut-win7-amd64                          queued  
 test-amd64-amd64-xl-qemuu-win7-amd64                         queued  
 test-amd64-i386-xl-qemuu-win7-amd64                          queued  
 test-amd64-amd64-xl-qemut-ws16-amd64                         queued  
 test-amd64-i386-xl-qemut-ws16-amd64                          queued  
 test-amd64-amd64-xl-qemuu-ws16-amd64                         queued  
 test-amd64-i386-xl-qemuu-ws16-amd64                          queued  
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  queued  
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  queued  
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        queued  
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         queued  
 test-amd64-i386-freebsd10-i386                               queued  
 test-amd64-amd64-qemuu-nested-intel                          queued  
 test-amd64-amd64-xl-pvhv2-intel                              queued  
 test-amd64-i386-qemut-rhel6hvm-intel                         queued  
 test-amd64-i386-qemuu-rhel6hvm-intel                         queued  
 test-amd64-amd64-libvirt                                     queued  
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      queued  
 test-amd64-amd64-livepatch                                   queued  
 test-amd64-i386-livepatch                                    queued  
 test-amd64-amd64-migrupgrade                                 queued  
 test-amd64-i386-migrupgrade                                  queued  
 test-amd64-amd64-xl-multivcpu                                queued  
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        queued  
 test-amd64-i386-pair                                         queued  
 test-amd64-amd64-libvirt-pair                                queued  
 test-amd64-i386-libvirt-pair                                 queued  
 test-amd64-amd64-amd64-pvgrub                                queued  
 test-amd64-amd64-i386-pvgrub                                 queued  
 test-amd64-amd64-xl-pvshim                                   queued  
 test-amd64-i386-xl-pvshim                                    queued  
 test-amd64-amd64-pygrub                                      queued  
 test-amd64-amd64-xl-qcow2                                    queued  
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       queued  
 test-amd64-amd64-xl-rtds                                     queued  
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             queued  
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              queued  
 test-amd64-amd64-xl-shadow                                   queued  
 test-amd64-i386-xl-shadow                                    queued  
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 queued  
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-i386-xl-qemut-win7-amd64 queued
broken-job test-amd64-i386-xl-qemut-ws16-amd64 queued
broken-job test-amd64-i386-xl-qemuu-debianhvm-amd64 queued
broken-job test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow queued
broken-job test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict queued
broken-job test-amd64-i386-xl-qemuu-ovmf-amd64 queued
broken-job test-amd64-i386-xl-qemuu-win7-amd64 queued
broken-job test-amd64-i386-xl-qemuu-ws16-amd64 queued
broken-job test-xtf-amd64-amd64-1 queued
broken-job test-xtf-amd64-amd64-2 queued
broken-job test-amd64-i386-freebsd10-amd64 queued
broken-job test-amd64-amd64-xl-xsm queued
broken-job test-amd64-amd64-xl-shadow queued
broken-job test-amd64-amd64-xl-rtds queued
broken-job test-amd64-amd64-xl-qemuu-ws16-amd64 queued
broken-job test-amd64-amd64-xl-qemuu-win7-amd64 queued
broken-job test-amd64-amd64-xl-qemuu-ovmf-amd64 queued
broken-job test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict queued
broken-job test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm queued
broken-job test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow queued
broken-job test-amd64-amd64-xl-qemuu-debianhvm-amd64 queued
broken-job test-amd64-amd64-xl-qemut-ws16-amd64 queued
broken-job build-amd64-libvirt queued
broken-job test-amd64-amd64-xl-qemut-win7-amd64 queued
broken-job build-amd64-prev broken
broken-job test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm queued
broken-job build-amd64-xsm broken
broken-job test-amd64-amd64-xl-qemut-debianhvm-i386-xsm queued
broken-job build-arm64 broken
broken-job test-amd64-amd64-xl-qemut-debianhvm-amd64 queued
broken-job build-arm64-pvops broken
broken-job test-amd64-amd64-xl-qcow2 queued
broken-job build-arm64-xsm broken
broken-job build-armhf broken
broken-job test-amd64-amd64-xl-pvshim queued
broken-job build-armhf-pvops broken
broken-job build-i386 broken
broken-job test-amd64-amd64-xl-pvhv2-intel queued
broken-job build-i386-prev broken
broken-job test-amd64-amd64-xl-pvhv2-amd queued
broken-job build-i386-pvops broken
broken-job test-amd64-amd64-xl-multivcpu queued
broken-job build-i386-xsm broken
broken-job test-amd64-amd64-amd64-pvgrub queued
broken-job test-amd64-amd64-i386-pvgrub queued
broken-job test-amd64-amd64-xl-credit2 queued
broken-job test-amd64-amd64-libvirt queued
broken-job test-amd64-amd64-libvirt-pair queued
broken-job test-amd64-amd64-xl-credit1 queued
broken-job test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm queued
broken-job test-amd64-amd64-libvirt-vhd queued
broken-job test-amd64-amd64-xl queued
broken-job test-amd64-amd64-libvirt-xsm queued
broken-job test-amd64-amd64-livepatch queued
broken-job test-amd64-amd64-qemuu-nested-intel queued
broken-job test-amd64-amd64-migrupgrade queued
broken-job test-amd64-amd64-pair queued
broken-job test-amd64-amd64-qemuu-nested-amd queued
broken-job test-amd64-amd64-pygrub queued
broken-job test-amd64-amd64-qemuu-freebsd11-amd64 queued
broken-job test-amd64-amd64-qemuu-freebsd12-amd64 queued
broken-job test-amd64-i386-xl-shadow queued
broken-job test-amd64-i386-xl-raw queued
broken-job test-amd64-i386-freebsd10-i386 queued
broken-job test-amd64-i386-libvirt queued
broken-job test-amd64-i386-libvirt-pair queued
broken-job test-amd64-i386-livepatch queued
broken-job test-amd64-i386-migrupgrade queued
broken-job test-amd64-i386-pair queued
broken-job test-amd64-i386-qemut-rhel6hvm-amd queued
broken-job test-amd64-i386-qemut-rhel6hvm-intel queued
broken-job test-amd64-i386-qemuu-rhel6hvm-amd queued
broken-job test-amd64-i386-qemuu-rhel6hvm-intel queued
broken-job test-amd64-i386-xl queued
broken-job test-amd64-i386-xl-pvshim queued
broken-job test-amd64-i386-xl-qemut-debianhvm-amd64 queued
broken-job test-xtf-amd64-amd64-3 queued
broken-job test-xtf-amd64-amd64-4 queued
broken-job test-xtf-amd64-amd64-5 queued
broken-step build-i386-pvops host-install(4)
broken-step build-i386-prev host-install(4)
broken-step build-i386-xsm host-install(4)
broken-step build-i386 host-install(4)
broken-step build-arm64 host-install(4)
broken-step build-arm64-pvops host-install(4)
broken-step build-arm64-xsm host-install(4)
broken-step build-amd64-xsm host-install(4)
broken-step build-armhf-pvops host-install(4)
broken-step build-amd64-prev host-install(4)
broken-step build-armhf host-install(4)

Not pushing.

------------------------------------------------------------
commit 8d26cdd3b66ab86d560dacd763d76ff3da95723e
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Feb 5 08:52:54 2021 +0100

    x86/msr: fix handling of MSR_IA32_PERF_{STATUS/CTL} (again, part 2)
    
    X86_VENDOR_* aren't bit masks in the older trees.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit f1f322610718c40680ac09e66f6c82e69c78ba3a
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Feb 4 15:39:45 2021 +0100

    x86/msr: fix handling of MSR_IA32_PERF_{STATUS/CTL} (again)
    
    X86_VENDOR_* aren't bit masks in the older trees.
    
    Reported-by: James Dingwall <james@dingwall.me.uk>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 10:46:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 10:46:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82738.152955 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9445-0001vJ-9s; Mon, 08 Feb 2021 10:46:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82738.152955; Mon, 08 Feb 2021 10:46:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9445-0001vA-6K; Mon, 08 Feb 2021 10:46:45 +0000
Received: by outflank-mailman (input) for mailman id 82738;
 Mon, 08 Feb 2021 10:46:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9444-0001uL-0Z; Mon, 08 Feb 2021 10:46:44 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9443-0001cv-N0; Mon, 08 Feb 2021 10:46:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9443-0006rd-7F; Mon, 08 Feb 2021 10:46:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l9443-0000Jo-6l; Mon, 08 Feb 2021 10:46:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=1+lwrV9Ad675bkxzIpKf+pQeP2rJvOP4PvmHmD/8KYg=; b=XPyvDtVtDVR0PW/1G3t64btkgt
	s5AUtivR3Sb1Ngsz8/j4uTogrpqd9mUivONG+OCZJTk3hmdCp1lOTm+YQM9BoLNYr0Y+zwjhhXWsm
	fEmcpZa+nS4mp2+V2w3epqkCoNCmrbjrepiWy5cK88dZ83aLbHZ9SvEZ6P0Pn28fZnbg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159123-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 159123: trouble: blocked/broken/preparing/queued/running
X-Osstest-Failures:
    xen-unstable:build-amd64-prev:<job status>:broken:regression
    xen-unstable:build-arm64:<job status>:broken:regression
    xen-unstable:build-i386:<job status>:broken:regression
    xen-unstable:build-i386-xsm:<job status>:broken:regression
    xen-unstable:build-amd64-prev:host-install(4):broken:regression
    xen-unstable:build-i386:host-install(4):broken:regression
    xen-unstable:build-i386-xsm:host-install(4):broken:regression
    xen-unstable:build-arm64:host-install(4):broken:regression
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-xl:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-xl-pvshim:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-xl-raw:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-xl-shadow:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-xl-xsm:<none executed>:queued:regression
    xen-unstable:test-arm64-arm64-examine:<none executed>:queued:regression
    xen-unstable:test-arm64-arm64-libvirt-xsm:<none executed>:queued:regression
    xen-unstable:test-arm64-arm64-xl:<none executed>:queued:regression
    xen-unstable:test-arm64-arm64-xl-credit1:<none executed>:queued:regression
    xen-unstable:test-arm64-arm64-xl-credit2:<none executed>:queued:regression
    xen-unstable:test-arm64-arm64-xl-seattle:<none executed>:queued:regression
    xen-unstable:test-arm64-arm64-xl-thunderx:<none executed>:queued:regression
    xen-unstable:test-arm64-arm64-xl-xsm:<none executed>:queued:regression
    xen-unstable:test-armhf-armhf-examine:<none executed>:queued:regression
    xen-unstable:test-armhf-armhf-libvirt:<none executed>:queued:regression
    xen-unstable:test-armhf-armhf-libvirt-raw:<none executed>:queued:regression
    xen-unstable:test-armhf-armhf-xl:<none executed>:queued:regression
    xen-unstable:test-armhf-armhf-xl-arndale:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-xl-rtds:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-ovmf-amd64:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:<none executed>:queued:regression
    xen-unstable:build-amd64-libvirt:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-amd64:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-xl-qcow2:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-xl-pvshim:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-xl-pvhv2-intel:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-xl-pvhv2-amd:<none executed>:queued:regression
    xen-unstable:build-armhf-libvirt:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-xl-multivcpu:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-xl-credit2:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-xl-credit1:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-xl:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-amd64-pvgrub:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-dom0pvh-xl-amd:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-qemuu-nested-intel:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-dom0pvh-xl-intel:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-examine:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-i386-pvgrub:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-libvirt:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-qemuu-freebsd12-amd64:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-libvirt-pair:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-qemuu-freebsd11-amd64:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-libvirt-vhd:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-libvirt-xsm:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-pygrub:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-livepatch:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-migrupgrade:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-pair:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-xl-shadow:<none executed>:queued:regression
    xen-unstable:test-amd64-amd64-xl-xsm:<none executed>:queued:regression
    xen-unstable:test-amd64-coresched-amd64-xl:<none executed>:queued:regression
    xen-unstable:test-amd64-coresched-i386-xl:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-examine:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-freebsd10-amd64:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-freebsd10-i386:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-libvirt:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-libvirt-pair:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-libvirt-xsm:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-livepatch:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-migrupgrade:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-pair:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-intel:<none executed>:queued:regression
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:<none executed>:queued:regression
    xen-unstable:test-armhf-armhf-xl-credit1:<none executed>:queued:regression
    xen-unstable:test-armhf-armhf-xl-credit2:<none executed>:queued:regression
    xen-unstable:test-armhf-armhf-xl-cubietruck:<none executed>:queued:regression
    xen-unstable:test-armhf-armhf-xl-multivcpu:<none executed>:queued:regression
    xen-unstable:test-armhf-armhf-xl-rtds:<none executed>:queued:regression
    xen-unstable:test-armhf-armhf-xl-vhd:<none executed>:queued:regression
    xen-unstable:test-xtf-amd64-amd64-1:<none executed>:queued:regression
    xen-unstable:test-xtf-amd64-amd64-2:<none executed>:queued:regression
    xen-unstable:test-xtf-amd64-amd64-3:<none executed>:queued:regression
    xen-unstable:test-xtf-amd64-amd64-4:<none executed>:queued:regression
    xen-unstable:test-xtf-amd64-amd64-5:<none executed>:queued:regression
    xen-unstable:build-amd64:hosts-allocate:running:regression
    xen-unstable:build-amd64-pvops:hosts-allocate:running:regression
    xen-unstable:build-amd64-xsm:hosts-allocate:running:regression
    xen-unstable:build-amd64-xtf:hosts-allocate:running:regression
    xen-unstable:build-i386-prev:hosts-allocate:running:regression
    xen-unstable:build-arm64-xsm:hosts-allocate:running:regression
    xen-unstable:build-arm64-pvops:hosts-allocate:running:regression
    xen-unstable:build-armhf-pvops:hosts-allocate:running:regression
    xen-unstable:build-i386-pvops:host-install(4):running:regression
    xen-unstable:build-i386-pvops:syslog-server:running:regression
    xen-unstable:build-armhf:host-install(4):running:regression
    xen-unstable:build-armhf:syslog-server:running:regression
    xen-unstable:build-arm64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:build-i386-libvirt:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=ca82d3fecc93745ee17850a609ac7772bd7c8bf7
X-Osstest-Versions-That:
    xen=ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 08 Feb 2021 10:46:43 +0000

flight 159123 xen-unstable running [real]
http://logs.test-lab.xenproject.org/osstest/logs/159123/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-prev                <job status>                 broken
 build-arm64                     <job status>                 broken
 build-i386                      <job status>                 broken
 build-i386-xsm                  <job status>                 broken
 build-amd64-prev              4 host-install(4)        broken REGR. vs. 159036
 build-i386                    4 host-install(4)        broken REGR. vs. 159036
 build-i386-xsm                4 host-install(4)        broken REGR. vs. 159036
 build-arm64                   4 host-install(4)        broken REGR. vs. 159036
 test-amd64-i386-qemuu-rhel6hvm-intel    <none executed>              queued
 test-amd64-i386-xl              <none executed>              queued
 test-amd64-i386-xl-pvshim       <none executed>              queued
 test-amd64-i386-xl-qemut-debianhvm-amd64    <none executed>             queued
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm    <none executed>          queued
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm    <none executed> queued
 test-amd64-i386-xl-qemut-win7-amd64    <none executed>              queued
 test-amd64-i386-xl-qemut-ws16-amd64    <none executed>              queued
 test-amd64-i386-xl-qemuu-debianhvm-amd64    <none executed>             queued
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict    <none executed> queued
 test-amd64-i386-xl-qemuu-ovmf-amd64    <none executed>              queued
 test-amd64-i386-xl-qemuu-win7-amd64    <none executed>              queued
 test-amd64-i386-xl-qemuu-ws16-amd64    <none executed>              queued
 test-amd64-i386-xl-raw          <none executed>              queued
 test-amd64-i386-xl-shadow       <none executed>              queued
 test-amd64-i386-xl-xsm          <none executed>              queued
 test-arm64-arm64-examine        <none executed>              queued
 test-arm64-arm64-libvirt-xsm    <none executed>              queued
 test-arm64-arm64-xl             <none executed>              queued
 test-arm64-arm64-xl-credit1     <none executed>              queued
 test-arm64-arm64-xl-credit2     <none executed>              queued
 test-arm64-arm64-xl-seattle     <none executed>              queued
 test-arm64-arm64-xl-thunderx    <none executed>              queued
 test-arm64-arm64-xl-xsm         <none executed>              queued
 test-armhf-armhf-examine        <none executed>              queued
 test-armhf-armhf-libvirt        <none executed>              queued
 test-armhf-armhf-libvirt-raw    <none executed>              queued
 test-armhf-armhf-xl             <none executed>              queued
 test-armhf-armhf-xl-arndale     <none executed>              queued
 test-amd64-amd64-xl-rtds        <none executed>              queued
 test-amd64-amd64-xl-qemuu-ws16-amd64    <none executed>              queued
 test-amd64-amd64-xl-qemuu-win7-amd64    <none executed>              queued
 test-amd64-amd64-xl-qemuu-ovmf-amd64    <none executed>              queued
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict   <none executed> queued
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm    <none executed>         queued
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow    <none executed>     queued
 test-amd64-amd64-xl-qemuu-debianhvm-amd64    <none executed>            queued
 test-amd64-amd64-xl-qemut-ws16-amd64    <none executed>              queued
 test-amd64-amd64-xl-qemut-win7-amd64    <none executed>              queued
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm   <none executed> queued
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm    <none executed>         queued
 build-amd64-libvirt             <none executed>              queued
 test-amd64-amd64-xl-qemut-debianhvm-amd64    <none executed>            queued
 test-amd64-amd64-xl-qcow2       <none executed>              queued
 test-amd64-amd64-xl-pvshim      <none executed>              queued
 test-amd64-amd64-xl-pvhv2-intel    <none executed>              queued
 test-amd64-amd64-xl-pvhv2-amd    <none executed>              queued
 build-armhf-libvirt             <none executed>              queued
 test-amd64-amd64-xl-multivcpu    <none executed>              queued
 test-amd64-amd64-xl-credit2     <none executed>              queued
 test-amd64-amd64-xl-credit1     <none executed>              queued
 test-amd64-amd64-xl             <none executed>              queued
 test-amd64-amd64-amd64-pvgrub    <none executed>              queued
 test-amd64-amd64-dom0pvh-xl-amd    <none executed>              queued
 test-amd64-amd64-qemuu-nested-intel    <none executed>              queued
 test-amd64-amd64-dom0pvh-xl-intel    <none executed>              queued
 test-amd64-amd64-examine        <none executed>              queued
 test-amd64-amd64-qemuu-nested-amd    <none executed>              queued
 test-amd64-amd64-i386-pvgrub    <none executed>              queued
 test-amd64-amd64-libvirt        <none executed>              queued
 test-amd64-amd64-qemuu-freebsd12-amd64    <none executed>              queued
 test-amd64-amd64-libvirt-pair    <none executed>              queued
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm    <none executed>   queued
 test-amd64-amd64-qemuu-freebsd11-amd64    <none executed>              queued
 test-amd64-amd64-libvirt-vhd    <none executed>              queued
 test-amd64-amd64-libvirt-xsm    <none executed>              queued
 test-amd64-amd64-pygrub         <none executed>              queued
 test-amd64-amd64-livepatch      <none executed>              queued
 test-amd64-amd64-migrupgrade    <none executed>              queued
 test-amd64-amd64-pair           <none executed>              queued
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow    <none executed>      queued
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm    <none executed>          queued
 test-amd64-amd64-xl-shadow      <none executed>              queued
 test-amd64-amd64-xl-xsm         <none executed>              queued
 test-amd64-coresched-amd64-xl    <none executed>              queued
 test-amd64-coresched-i386-xl    <none executed>              queued
 test-amd64-i386-examine         <none executed>              queued
 test-amd64-i386-freebsd10-amd64    <none executed>              queued
 test-amd64-i386-freebsd10-i386    <none executed>              queued
 test-amd64-i386-libvirt         <none executed>              queued
 test-amd64-i386-libvirt-pair    <none executed>              queued
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm    <none executed>    queued
 test-amd64-i386-libvirt-xsm     <none executed>              queued
 test-amd64-i386-livepatch       <none executed>              queued
 test-amd64-i386-migrupgrade     <none executed>              queued
 test-amd64-i386-pair            <none executed>              queued
 test-amd64-i386-qemut-rhel6hvm-amd    <none executed>              queued
 test-amd64-i386-qemut-rhel6hvm-intel    <none executed>              queued
 test-amd64-i386-qemuu-rhel6hvm-amd    <none executed>              queued
 test-armhf-armhf-xl-credit1     <none executed>              queued
 test-armhf-armhf-xl-credit2     <none executed>              queued
 test-armhf-armhf-xl-cubietruck    <none executed>              queued
 test-armhf-armhf-xl-multivcpu    <none executed>              queued
 test-armhf-armhf-xl-rtds        <none executed>              queued
 test-armhf-armhf-xl-vhd         <none executed>              queued
 test-xtf-amd64-amd64-1          <none executed>              queued
 test-xtf-amd64-amd64-2          <none executed>              queued
 test-xtf-amd64-amd64-3          <none executed>              queued
 test-xtf-amd64-amd64-4          <none executed>              queued
 test-xtf-amd64-amd64-5          <none executed>              queued
 build-amd64                   2 hosts-allocate               running
 build-amd64-pvops             2 hosts-allocate               running
 build-amd64-xsm               2 hosts-allocate               running
 build-amd64-xtf               2 hosts-allocate               running
 build-i386-prev               2 hosts-allocate               running
 build-arm64-xsm               2 hosts-allocate               running
 build-arm64-pvops             2 hosts-allocate               running
 build-armhf-pvops             2 hosts-allocate               running
 build-i386-pvops              4 host-install(4)              running
 build-i386-pvops              3 syslog-server                running
 build-armhf                   4 host-install(4)              running
 build-armhf                   3 syslog-server                running

Tests which did not succeed, but are not blocking:
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  ca82d3fecc93745ee17850a609ac7772bd7c8bf7
baseline version:
 xen                  ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7

Last test of basis   159036  2021-02-05 08:46:54 Z    3 days
Testing same since   159077  2021-02-06 11:11:30 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Michał Leszczyński <michal.leszczynski@cert.pl>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Tamas K Lengyel <tamas.lengyel@intel.com>

jobs:
 build-amd64-xsm                                              preparing
 build-arm64-xsm                                              preparing
 build-i386-xsm                                               broken  
 build-amd64-xtf                                              preparing
 build-amd64                                                  preparing
 build-arm64                                                  broken  
 build-armhf                                                  running 
 build-i386                                                   broken  
 build-amd64-libvirt                                          queued  
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          queued  
 build-i386-libvirt                                           blocked 
 build-amd64-prev                                             broken  
 build-i386-prev                                              preparing
 build-amd64-pvops                                            preparing
 build-arm64-pvops                                            preparing
 build-armhf-pvops                                            preparing
 build-i386-pvops                                             running 
 test-xtf-amd64-amd64-1                                       queued  
 test-xtf-amd64-amd64-2                                       queued  
 test-xtf-amd64-amd64-3                                       queued  
 test-xtf-amd64-amd64-4                                       queued  
 test-xtf-amd64-amd64-5                                       queued  
 test-amd64-amd64-xl                                          queued  
 test-amd64-coresched-amd64-xl                                queued  
 test-arm64-arm64-xl                                          queued  
 test-armhf-armhf-xl                                          queued  
 test-amd64-i386-xl                                           queued  
 test-amd64-coresched-i386-xl                                 queued  
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           queued  
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            queued  
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        queued  
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         queued  
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 queued  
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  queued  
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 queued  
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  queued  
 test-amd64-amd64-libvirt-xsm                                 queued  
 test-arm64-arm64-libvirt-xsm                                 queued  
 test-amd64-i386-libvirt-xsm                                  queued  
 test-amd64-amd64-xl-xsm                                      queued  
 test-arm64-arm64-xl-xsm                                      queued  
 test-amd64-i386-xl-xsm                                       queued  
 test-amd64-amd64-qemuu-nested-amd                            queued  
 test-amd64-amd64-xl-pvhv2-amd                                queued  
 test-amd64-i386-qemut-rhel6hvm-amd                           queued  
 test-amd64-i386-qemuu-rhel6hvm-amd                           queued  
 test-amd64-amd64-dom0pvh-xl-amd                              queued  
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    queued  
 test-amd64-i386-xl-qemut-debianhvm-amd64                     queued  
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    queued  
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     queued  
 test-amd64-i386-freebsd10-amd64                              queued  
 test-amd64-amd64-qemuu-freebsd11-amd64                       queued  
 test-amd64-amd64-qemuu-freebsd12-amd64                       queued  
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         queued  
 test-amd64-i386-xl-qemuu-ovmf-amd64                          queued  
 test-amd64-amd64-xl-qemut-win7-amd64                         queued  
 test-amd64-i386-xl-qemut-win7-amd64                          queued  
 test-amd64-amd64-xl-qemuu-win7-amd64                         queued  
 test-amd64-i386-xl-qemuu-win7-amd64                          queued  
 test-amd64-amd64-xl-qemut-ws16-amd64                         queued  
 test-amd64-i386-xl-qemut-ws16-amd64                          queued  
 test-amd64-amd64-xl-qemuu-ws16-amd64                         queued  
 test-amd64-i386-xl-qemuu-ws16-amd64                          queued  
 test-armhf-armhf-xl-arndale                                  queued  
 test-amd64-amd64-xl-credit1                                  queued  
 test-arm64-arm64-xl-credit1                                  queued  
 test-armhf-armhf-xl-credit1                                  queued  
 test-amd64-amd64-xl-credit2                                  queued  
 test-arm64-arm64-xl-credit2                                  queued  
 test-armhf-armhf-xl-credit2                                  queued  
 test-armhf-armhf-xl-cubietruck                               queued  
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        queued  
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         queued  
 test-amd64-amd64-examine                                     queued  
 test-arm64-arm64-examine                                     queued  
 test-armhf-armhf-examine                                     queued  
 test-amd64-i386-examine                                      queued  
 test-amd64-i386-freebsd10-i386                               queued  
 test-amd64-amd64-qemuu-nested-intel                          queued  
 test-amd64-amd64-xl-pvhv2-intel                              queued  
 test-amd64-i386-qemut-rhel6hvm-intel                         queued  
 test-amd64-i386-qemuu-rhel6hvm-intel                         queued  
 test-amd64-amd64-dom0pvh-xl-intel                            queued  
 test-amd64-amd64-libvirt                                     queued  
 test-armhf-armhf-libvirt                                     queued  
 test-amd64-i386-libvirt                                      queued  
 test-amd64-amd64-livepatch                                   queued  
 test-amd64-i386-livepatch                                    queued  
 test-amd64-amd64-migrupgrade                                 queued  
 test-amd64-i386-migrupgrade                                  queued  
 test-amd64-amd64-xl-multivcpu                                queued  
 test-armhf-armhf-xl-multivcpu                                queued  
 test-amd64-amd64-pair                                        queued  
 test-amd64-i386-pair                                         queued  
 test-amd64-amd64-libvirt-pair                                queued  
 test-amd64-i386-libvirt-pair                                 queued  
 test-amd64-amd64-amd64-pvgrub                                queued  
 test-amd64-amd64-i386-pvgrub                                 queued  
 test-amd64-amd64-xl-pvshim                                   queued  
 test-amd64-i386-xl-pvshim                                    queued  
 test-amd64-amd64-pygrub                                      queued  
 test-amd64-amd64-xl-qcow2                                    queued  
 test-armhf-armhf-libvirt-raw                                 queued  
 test-amd64-i386-xl-raw                                       queued  
 test-amd64-amd64-xl-rtds                                     queued  
 test-armhf-armhf-xl-rtds                                     queued  
 test-arm64-arm64-xl-seattle                                  queued  
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             queued  
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              queued  
 test-amd64-amd64-xl-shadow                                   queued  
 test-amd64-i386-xl-shadow                                    queued  
 test-arm64-arm64-xl-thunderx                                 queued  
 test-amd64-amd64-libvirt-vhd                                 queued  
 test-armhf-armhf-xl-vhd                                      queued  


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-i386-qemuu-rhel6hvm-intel queued
broken-job test-amd64-i386-xl queued
broken-job test-amd64-i386-xl-pvshim queued
broken-job test-amd64-i386-xl-qemut-debianhvm-amd64 queued
broken-job test-amd64-i386-xl-qemut-debianhvm-i386-xsm queued
broken-job test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm queued
broken-job test-amd64-i386-xl-qemut-win7-amd64 queued
broken-job test-amd64-i386-xl-qemut-ws16-amd64 queued
broken-job test-amd64-i386-xl-qemuu-debianhvm-amd64 queued
broken-job test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict queued
broken-job test-amd64-i386-xl-qemuu-ovmf-amd64 queued
broken-job test-amd64-i386-xl-qemuu-win7-amd64 queued
broken-job test-amd64-i386-xl-qemuu-ws16-amd64 queued
broken-job test-amd64-i386-xl-raw queued
broken-job test-amd64-i386-xl-shadow queued
broken-job test-amd64-i386-xl-xsm queued
broken-job test-arm64-arm64-examine queued
broken-job test-arm64-arm64-libvirt-xsm queued
broken-job test-arm64-arm64-xl queued
broken-job test-arm64-arm64-xl-credit1 queued
broken-job test-arm64-arm64-xl-credit2 queued
broken-job test-arm64-arm64-xl-seattle queued
broken-job test-arm64-arm64-xl-thunderx queued
broken-job test-arm64-arm64-xl-xsm queued
broken-job test-armhf-armhf-examine queued
broken-job test-armhf-armhf-libvirt queued
broken-job test-armhf-armhf-libvirt-raw queued
broken-job test-armhf-armhf-xl queued
broken-job test-armhf-armhf-xl-arndale queued
broken-job test-amd64-amd64-xl-rtds queued
broken-job test-amd64-amd64-xl-qemuu-ws16-amd64 queued
broken-job test-amd64-amd64-xl-qemuu-win7-amd64 queued
broken-job test-amd64-amd64-xl-qemuu-ovmf-amd64 queued
broken-job test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict queued
broken-job test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm queued
broken-job test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow queued
broken-job test-amd64-amd64-xl-qemuu-debianhvm-amd64 queued
broken-job test-amd64-amd64-xl-qemut-ws16-amd64 queued
broken-job test-amd64-amd64-xl-qemut-win7-amd64 queued
broken-job test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm queued
broken-job test-amd64-amd64-xl-qemut-debianhvm-i386-xsm queued
broken-job build-amd64-libvirt queued
broken-job test-amd64-amd64-xl-qemut-debianhvm-amd64 queued
broken-job build-amd64-prev broken
broken-job test-amd64-amd64-xl-qcow2 queued
broken-job test-amd64-amd64-xl-pvshim queued
broken-job build-arm64 broken
broken-job test-amd64-amd64-xl-pvhv2-intel queued
broken-job test-amd64-amd64-xl-pvhv2-amd queued
broken-job build-armhf-libvirt queued
broken-job test-amd64-amd64-xl-multivcpu queued
broken-job build-i386 broken
broken-job test-amd64-amd64-xl-credit2 queued
broken-job test-amd64-amd64-xl-credit1 queued
broken-job test-amd64-amd64-xl queued
broken-job build-i386-xsm broken
broken-job test-amd64-amd64-amd64-pvgrub queued
broken-job test-amd64-amd64-dom0pvh-xl-amd queued
broken-job test-amd64-amd64-qemuu-nested-intel queued
broken-job test-amd64-amd64-dom0pvh-xl-intel queued
broken-job test-amd64-amd64-examine queued
broken-job test-amd64-amd64-qemuu-nested-amd queued
broken-job test-amd64-amd64-i386-pvgrub queued
broken-job test-amd64-amd64-libvirt queued
broken-job test-amd64-amd64-qemuu-freebsd12-amd64 queued
broken-job test-amd64-amd64-libvirt-pair queued
broken-job test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm queued
broken-job test-amd64-amd64-qemuu-freebsd11-amd64 queued
broken-job test-amd64-amd64-libvirt-vhd queued
broken-job test-amd64-amd64-libvirt-xsm queued
broken-job test-amd64-amd64-pygrub queued
broken-job test-amd64-amd64-livepatch queued
broken-job test-amd64-amd64-migrupgrade queued
broken-job test-amd64-amd64-pair queued
broken-job test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow queued
broken-job test-amd64-i386-xl-qemuu-debianhvm-i386-xsm queued
broken-job test-amd64-amd64-xl-shadow queued
broken-job test-amd64-amd64-xl-xsm queued
broken-job test-amd64-coresched-amd64-xl queued
broken-job test-amd64-coresched-i386-xl queued
broken-job test-amd64-i386-examine queued
broken-job test-amd64-i386-freebsd10-amd64 queued
broken-job test-amd64-i386-freebsd10-i386 queued
broken-job test-amd64-i386-libvirt queued
broken-job test-amd64-i386-libvirt-pair queued
broken-job test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm queued
broken-job test-amd64-i386-libvirt-xsm queued
broken-job test-amd64-i386-livepatch queued
broken-job test-amd64-i386-migrupgrade queued
broken-job test-amd64-i386-pair queued
broken-job test-amd64-i386-qemut-rhel6hvm-amd queued
broken-job test-amd64-i386-qemut-rhel6hvm-intel queued
broken-job test-amd64-i386-qemuu-rhel6hvm-amd queued
broken-job test-armhf-armhf-xl-credit1 queued
broken-job test-armhf-armhf-xl-credit2 queued
broken-job test-armhf-armhf-xl-cubietruck queued
broken-job test-armhf-armhf-xl-multivcpu queued
broken-job test-armhf-armhf-xl-rtds queued
broken-job test-armhf-armhf-xl-vhd queued
broken-job test-xtf-amd64-amd64-1 queued
broken-job test-xtf-amd64-amd64-2 queued
broken-job test-xtf-amd64-amd64-3 queued
broken-job test-xtf-amd64-amd64-4 queued
broken-job test-xtf-amd64-amd64-5 queued
broken-step build-amd64-prev host-install(4)
broken-step build-i386 host-install(4)
broken-step build-i386-xsm host-install(4)
broken-step build-arm64 host-install(4)

Not pushing.

------------------------------------------------------------
commit ca82d3fecc93745ee17850a609ac7772bd7c8bf7
Author: Tamas K Lengyel <tamas.lengyel@intel.com>
Date:   Sat Jan 30 08:36:37 2021 -0500

    x86/vm_event: add response flag to reset vmtrace buffer
    
    Allow resetting the vmtrace buffer in response to a vm_event. This can be used
    to optimize a use-case where detecting a looped vmtrace buffer is important.
    
    Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit c5866ab93167a73a8d4d85b844edf4aa364a1aaa
Author: Tamas K Lengyel <tamas.lengyel@intel.com>
Date:   Mon Jan 18 12:46:37 2021 -0500

    x86/vm_event: Carry the vmtrace buffer position in vm_event
    
    Add vmtrace_pos field to x86 regs in vm_event. Initialized to ~0 if
    vmtrace is not in use.
    
    Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit 9744611991a042e9aea348c5721b80cc2101c7e5
Author: Tamas K Lengyel <tamas.lengyel@intel.com>
Date:   Fri Sep 11 20:14:00 2020 +0200

    xen/vmtrace: support for VM forks
    
    Implement vmtrace_reset_pt function. Properly set IPT
    state for VM forks.
    
    Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit 88dd8389dd2c9442729e9d96a4febaf38cd822e3
Author: Michał Leszczyński <michal.leszczynski@cert.pl>
Date:   Tue Jun 16 15:35:07 2020 +0200

    tools/misc: Add xen-vmtrace tool
    
    Add a demonstration tool that uses xc_vmtrace_* calls in order
    to manage external IPT monitoring for DomU.
    
    Signed-off-by: Michał Leszczyński <michal.leszczynski@cert.pl>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit 53aaa792fdebcf131983d45ee8e3d09bd0740c71
Author: Michał Leszczyński <michal.leszczynski@cert.pl>
Date:   Tue Jun 16 15:33:25 2020 +0200

    tools/libxc: Add xc_vmtrace_* functions
    
    Add functions in libxc that use the new XEN_DOMCTL_vmtrace interface.
    
    Signed-off-by: Michał Leszczyński <michal.leszczynski@cert.pl>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit 1cee4bd97c88633c4a39f56f6722be0727c9ea8f
Author: Michał Leszczyński <michal.leszczynski@cert.pl>
Date:   Sun Jun 28 23:48:09 2020 +0200

    xen/domctl: Add XEN_DOMCTL_vmtrace_op
    
    Implement an interface to configure and control tracing operations.  Reuse the
    existing SETDEBUGGING flask vector rather than inventing a new one.
    
    Userspace using this interface is going to need platform specific knowledge
    anyway to interpret the contents of the trace buffer.  While some operations
    (e.g. enable/disable) can reasonably be generic, others cannot.  Provide an
    explicitly platform-specific pair of get/set operations to reduce API churn as
    new options get added/enabled.
    
    For the VMX specific Processor Trace implementation, tolerate reading and
    modifying a safe subset of bits in CTL, STATUS and OUTPUT_MASK.  This permits
    userspace to control the content which gets logged, but prevents modification
    of details such as the position/size of the output buffer.
    
    Signed-off-by: Michał Leszczyński <michal.leszczynski@cert.pl>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>
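
The "safe subset of bits" scheme described above can be sketched stand-alone
(illustrative only: apply_user_ctl() is hypothetical, and CTL_USER_BITS is a
made-up whitelist, not the real RTIT_CTL mask):

```c
#include <stdint.h>

/* Hypothetical whitelist: only these bits may be changed by userspace.
 * Everything outside the mask (e.g. output buffer base/size details)
 * is preserved from the current value. */
#define CTL_USER_BITS 0x00000000000d70ffULL

/* Merge a userspace-requested control value with the current one,
 * letting only whitelisted bits through. */
static uint64_t apply_user_ctl(uint64_t current, uint64_t requested)
{
    return (current & ~CTL_USER_BITS) | (requested & CTL_USER_BITS);
}
```

This keeps the get/set interface stable as new tolerated bits get added: only
the whitelist changes, not the hypercall shape.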

commit 71cb03f03ce309e8cc1dacd18aa383ccea6af231
Author: Michał Leszczyński <michal.leszczynski@cert.pl>
Date:   Tue Jun 16 15:20:18 2020 +0200

    x86/vmx: Add Intel Processor Trace support
    
    Add CPUID/MSR enumeration details for Processor Trace.  For now, we will only
    support its use inside VMX operation.  Fill in the vmtrace_available boolean
    to activate the newly introduced common infrastructure for allocating trace
    buffers.
    
    For now, Processor Trace is going to be operated in Single Output mode behind
    the guest's back.  Add the MSRs to struct vcpu_msrs, and set up the buffer
    limit in vmx_init_ipt() as it is fixed for the lifetime of the domain.
    
    Context switch most of the MSRs in and out of vCPU context, but the main
    control register needs to reside in the MSR load/save lists.  Explicitly pull
    the msrs pointer out into a local variable, because the optimiser cannot keep
    it live across the memory clobbers in the MSR accesses.
    
    Signed-off-by: Michał Leszczyński <michal.leszczynski@cert.pl>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit b72eab263592a3d76aa826675e5d62606d83cecd
Author: Michał Leszczyński <michal.leszczynski@cert.pl>
Date:   Mon Jun 29 00:05:51 2020 +0200

    xen/memory: Add a vmtrace_buf resource type
    
    Allow mapping the processor trace buffer using acquire_resource().
    
    Signed-off-by: Michał Leszczyński <michal.leszczynski@cert.pl>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit 45ba9a7d7688a6a08200e37a8caa2bc99bb4d267
Author: Michał Leszczyński <michal.leszczynski@cert.pl>
Date:   Fri Jun 19 00:31:24 2020 +0200

    tools/[lib]xl: Add vmtrace_buf_size parameter
    
    Allow specifying the size of the per-vCPU trace buffer upon
    domain creation. It is zero by default (meaning: not enabled).
    
    Signed-off-by: Michał Leszczyński <michal.leszczynski@cert.pl>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit 217dd79ee29286b85074d22cc75ee064206fb2af
Author: Michał Leszczyński <michal.leszczynski@cert.pl>
Date:   Fri Jul 3 01:16:10 2020 +0200

    xen/domain: Add vmtrace_size domain creation parameter
    
    To use vmtrace, buffers of a suitable size need allocating, and different
    tasks will want different sizes.
    
    Add a domain creation parameter, and audit it appropriately in the
    {arch_,}sanitise_domain_config() functions.
    
    For now, the x86 specific auditing is tuned to Processor Trace running in
    Single Output mode, which requires a single contiguous range of memory.
    
    The size is given an arbitrary limit of 64M, which is expected to be enough for
    anticipated usecases, but not large enough to get into long-running-hypercall
    problems.
    
    Signed-off-by: Michał Leszczyński <michal.leszczynski@cert.pl>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>
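
The kind of audit described above can be sketched stand-alone (illustrative
only: vmtrace_size_ok() is hypothetical, not Xen's actual code, and the
power-of-two requirement is an assumption drawn from the Single Output mode
constraint rather than something this commit message states explicitly):

```c
#include <stdbool.h>
#include <stdint.h>

/* Sanity-check a requested vmtrace buffer size in bytes: either absent
 * (zero, tracing disabled) or a single contiguous region no larger
 * than the arbitrary 64M cap. */
static bool vmtrace_size_ok(uint64_t bytes)
{
    if ( bytes == 0 )
        return true;                      /* tracing disabled */
    if ( bytes > (64ULL << 20) )
        return false;                     /* above the 64M cap */
    return (bytes & (bytes - 1)) == 0;    /* one power-of-2 region */
}
```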

commit 34cc2e5f8dba6906da82fe8d76e839f9ab20f153
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jul 27 17:24:11 2020 +0100

    xen/memory: Fix mapping grant tables with XENMEM_acquire_resource
    
    A guest's default number of grant frames is 64, and XENMEM_acquire_resource
    will reject an attempt to map more than 32 frames.  This limit is caused by
    the size of mfn_list[] on the stack.
    
    Fix mapping of arbitrary size requests by looping over batches of 32 in
    acquire_resource(), and using hypercall continuations when necessary.
    
    To start with, break _acquire_resource() out of acquire_resource() to cope
    with type-specific dispatching, and update the return semantics to indicate
    the number of mfns returned.  Update gnttab_acquire_resource() and x86's
    arch_acquire_resource() to match these new semantics.
    
    Have do_memory_op() pass start_extent into acquire_resource() so it can pick
    up where it left off after a continuation, and loop over batches of 32 until
    all the work is done, or a continuation needs to occur.
    
    compat_memory_op() is a bit more complicated, because it also has to marshal
    frame_list in the XLAT buffer.  Have it account for continuation information
    itself and hide details from the upper layer, so it can marshal the buffer in
    chunks if necessary.
    
    With these fixes in place, it is now possible to map the whole grant table for
    a guest.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>
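
The batch-of-32 looping scheme described above can be sketched as follows (an
illustrative reduction, not Xen's actual acquire_resource() code;
process_frames() and its parameters are hypothetical):

```c
#define BATCH 32  /* mirrors the 32-entry mfn_list[] limit mentioned above */

/* Process `total` frames starting at `start_extent`, in batches of at
 * most BATCH, stopping once `budget` frames have been handled in this
 * invocation (standing in for "a continuation needs to occur").
 * Reports via *next_extent where a continuation would resume, and
 * returns the number of frames processed this time. */
static unsigned int process_frames(unsigned int start_extent,
                                   unsigned int total,
                                   unsigned int budget,
                                   unsigned int *next_extent)
{
    unsigned int done = 0;

    while ( start_extent + done < total && done < budget )
    {
        unsigned int batch = total - (start_extent + done);

        if ( batch > BATCH )
            batch = BATCH;
        if ( done + batch > budget )
            batch = budget - done;

        /* ... map `batch` frames here ... */
        done += batch;
    }

    *next_extent = start_extent + done;  /* continuation resume point */
    return done;
}
```

Called again with the reported next_extent, the loop picks up where it left
off, which is the essence of the hypercall continuation handling.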

commit f4318db940c39cc656128fcf72df3e79d2e55bc1
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Feb 5 14:09:42 2021 +0100

    x86/EFI: work around GNU ld 2.36 issue
    
    Our linker capability check fails with the recent binutils release's ld:
    
    .../check.o:(.debug_aranges+0x6): relocation truncated to fit: R_X86_64_32 against `.debug_info'
    .../check.o:(.debug_info+0x6): relocation truncated to fit: R_X86_64_32 against `.debug_abbrev'
    .../check.o:(.debug_info+0xc): relocation truncated to fit: R_X86_64_32 against `.debug_str'+76
    .../check.o:(.debug_info+0x11): relocation truncated to fit: R_X86_64_32 against `.debug_str'+d
    .../check.o:(.debug_info+0x15): relocation truncated to fit: R_X86_64_32 against `.debug_str'+2b
    .../check.o:(.debug_info+0x29): relocation truncated to fit: R_X86_64_32 against `.debug_line'
    .../check.o:(.debug_info+0x30): relocation truncated to fit: R_X86_64_32 against `.debug_str'+19
    .../check.o:(.debug_info+0x37): relocation truncated to fit: R_X86_64_32 against `.debug_str'+71
    .../check.o:(.debug_info+0x3e): relocation truncated to fit: R_X86_64_32 against `.debug_str'
    .../check.o:(.debug_info+0x45): relocation truncated to fit: R_X86_64_32 against `.debug_str'+5e
    .../check.o:(.debug_info+0x4c): additional relocation overflows omitted from the output
    
    Tell the linker to strip debug info as a workaround. Debug info has been
    getting stripped already anyway when linking the actual xen.efi.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit d7acc47c8201611fda98ce5bd465626478ca4759
Author: Roger Pau Monne <roger.pau@citrix.com>
Date:   Fri Feb 5 13:19:38 2021 +0100

    tools/tests: fix resource test build on FreeBSD
    
    error.h is not a standard header, and none of the functions declared
    there are actually used by the code. This fixes the build on FreeBSD,
    which doesn't have error.h.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 10:51:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 10:51:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82762.152970 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l948c-0003C8-Ak; Mon, 08 Feb 2021 10:51:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82762.152970; Mon, 08 Feb 2021 10:51:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l948c-0003C1-7I; Mon, 08 Feb 2021 10:51:26 +0000
Received: by outflank-mailman (input) for mailman id 82762;
 Mon, 08 Feb 2021 10:51:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ZIyB=HK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l948b-0003Bw-Bj
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 10:51:25 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0b9dc084-37a9-4b2a-b62b-de959e808245;
 Mon, 08 Feb 2021 10:51:24 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7AA6AAD57;
 Mon,  8 Feb 2021 10:51:23 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0b9dc084-37a9-4b2a-b62b-de959e808245
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612781483; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=XKk2PET/f9qNuvf7nanooRSg83l7JBZ1VosjaVLv5Ps=;
	b=e9UtpokbF6fi0pxElC4p1PwXwmOVndw2KQt7dhC6iEVrMSf9LpjM1icoierv4AJKNlcWWB
	m/mrsmy3iycBc2AW0di72WG7yB5xOLvH9OlRuud6i5L+6nayIOxAoGOFKnDHpXbMvP6rDS
	/IXghosceH/GCA5VWljWQmL16eiLCKM=
Subject: Re: [PATCH 7/7] xen/evtchn: read producer index only once
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, linux-kernel@vger.kernel.org,
 xen-devel@lists.xenproject.org
References: <20210206104932.29064-1-jgross@suse.com>
 <20210206104932.29064-8-jgross@suse.com>
 <72334160-cffe-2d8a-23b7-2ea9ab1d803a@suse.com>
 <626f500a-494a-0141-7bf3-94fb86b47ed4@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e88526ac-6972-fe08-c58f-ea872cbdcc14@suse.com>
Date: Mon, 8 Feb 2021 11:51:22 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <626f500a-494a-0141-7bf3-94fb86b47ed4@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 08.02.2021 11:41, Jürgen Groß wrote:
> On 08.02.21 10:48, Jan Beulich wrote:
>> On 06.02.2021 11:49, Juergen Gross wrote:
>>> In evtchn_read() use READ_ONCE() for reading the producer index in
>>> order to avoid the compiler generating multiple accesses.
>>>
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>> ---
>>>   drivers/xen/evtchn.c | 2 +-
>>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/xen/evtchn.c b/drivers/xen/evtchn.c
>>> index 421382c73d88..f6b199b597bf 100644
>>> --- a/drivers/xen/evtchn.c
>>> +++ b/drivers/xen/evtchn.c
>>> @@ -211,7 +211,7 @@ static ssize_t evtchn_read(struct file *file, char __user *buf,
>>>   			goto unlock_out;
>>>   
>>>   		c = u->ring_cons;
>>> -		p = u->ring_prod;
>>> +		p = READ_ONCE(u->ring_prod);
>>>   		if (c != p)
>>>   			break;
>>
>> Why only here and not also in
>>
>> 		rc = wait_event_interruptible(u->evtchn_wait,
>> 					      u->ring_cons != u->ring_prod);
>>
>> or in evtchn_poll()? I understand it's not needed when
>> ring_prod_lock is held, but that's not the case in the two
>> afaics named places. Plus isn't the same then true for
>> ring_cons and ring_cons_mutex, i.e. aren't the two named
>> places plus evtchn_interrupt() also in need of READ_ONCE()
>> for ring_cons?
> 
> The problem solved here is the further processing using "p" multiple
> times. p must not be silently replaced with u->ring_prod by the
> compiler, so I probably should reword the commit message to say:
> 
> ... in order to not allow the compiler to refetch p.

I still wouldn't understand the change (and the lack of
further changes) then: The first further use of p is
outside the loop, alongside one of c. IOW why would c
then not need treating the same as p?

I also still don't see the difference between latching a
value into a local variable vs a "freestanding" access -
neither are guaranteed to result in exactly one memory
access afaict.

And of course there's also our beloved topic of access
tearing here: READ_ONCE() also excludes that (at least as
per its intentions aiui).

Jan
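
The effect under discussion can be reduced to a stand-alone sketch:
read_once_u32() below is a hypothetical single-size reduction of the kernel's
READ_ONCE(); the real macro in <linux/compiler.h> handles all sizes and the
tearing concerns mentioned above.

```c
#include <stdint.h>

/* Casting through a pointer-to-volatile forces the compiler to emit
 * exactly one load, so later uses of the latched local cannot be
 * silently replaced by a refetch of the shared location. */
#define read_once_u32(x) (*(const volatile uint32_t *)&(x))

struct ring {
    uint32_t ring_cons;
    uint32_t ring_prod;   /* updated concurrently by the producer */
};

/* Latch the producer index exactly once, so every subsequent use of
 * `p` observes the same value. */
static uint32_t pending(const struct ring *r)
{
    uint32_t c = r->ring_cons;
    uint32_t p = read_once_u32(r->ring_prod);

    return p - c;
}
```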


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 10:56:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 10:56:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82764.152981 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l94D8-0003OI-TJ; Mon, 08 Feb 2021 10:56:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82764.152981; Mon, 08 Feb 2021 10:56:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l94D8-0003OB-QJ; Mon, 08 Feb 2021 10:56:06 +0000
Received: by outflank-mailman (input) for mailman id 82764;
 Mon, 08 Feb 2021 10:56:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ZIyB=HK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l94D7-0003O5-Pt
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 10:56:05 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c759e767-a26c-49a2-9332-575394cc8971;
 Mon, 08 Feb 2021 10:56:03 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id DF59FAD62;
 Mon,  8 Feb 2021 10:56:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c759e767-a26c-49a2-9332-575394cc8971
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612781763; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=GDtqT/NkvhN0liuoHeVYmWQQ/qglAed0OLmt4k3vztk=;
	b=cW0ogfxkd/UKFPFOgseBcxImbbi+BEa2vzwQpips5bFo6q8aYGXVXq+PvZUXxpj4jtGzLK
	YHmR6RPl3mvQAdYD9BZv4Tba1cqp5Lnu1nXUcqmbZOqkHVYsMsSQ0TcAM91i9h34kTUQZ3
	HCfENYwEKDGasa2084vdf+EjunuVN1c=
Subject: Re: [PATCH v2 2/3] x86/time: adjust time recording
 time_calibration_tsc_rendezvous()
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Claudemir Todo Bom <claudemir@todobom.com>
References: <b8d1d4f8-8675-1393-8524-d060ffb1551a@suse.com>
 <26b71f94-d1c7-d906-5b2a-4e7994d6f7c0@suse.com>
 <YB1vGGl59oNZb5m5@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <861c931e-7922-0b5b-58a9-63e46ba24af0@suse.com>
Date: Mon, 8 Feb 2021 11:56:01 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <YB1vGGl59oNZb5m5@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 05.02.2021 17:15, Roger Pau Monné wrote:
> On Mon, Feb 01, 2021 at 01:43:04PM +0100, Jan Beulich wrote:
>> The (stime,tsc) tuple is the basis for extrapolation by get_s_time().
>> Therefore the two better get taken as close to one another as possible.
>> This means two things: First, reading platform time is too early when
>> done on the first iteration. The closest we can get is on the last
>> iteration, immediately before telling other CPUs to write their TSCs
>> (and then also writing CPU0's). While at first glance it may seem
>> not overly relevant when exactly platform time is read (when assuming
>> that only stime is ever relevant anywhere, and hence the association
>> with the precise TSC values is of lower interest), both CPU frequency
>> changes and the effects of SMT make it unpredictable (between individual
>> rendezvous instances) how long the loop iterations will take. This will
>> in turn lead to a higher error than necessary in how close to linear
>> stime movement we can get.
>>
>> Second, re-reading the TSC for local recording is increasing the overall
>> error as well, when we already know a more precise value - the one just
>> written.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Acked-by: Roger Pau MonnÃ© <roger.pau@citrix.com>

Thanks.

> I've been thinking this all seems doomed when Xen runs in a virtualized
> environment, and should likely be disabled. There's no point in trying
> to sync the TSC over multiple vCPUs as the scheduling delay between
> them will likely skew any calculations.

We may want to consider forcing the equivalent of
"clocksource=tsc" in that case. Otoh a well behaved hypervisor
underneath shouldn't lead to us finding a need to clear
TSC_RELIABLE, at which point this logic wouldn't get engaged
in the first place.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 10:59:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 10:59:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82766.152994 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l94GS-0003ZN-Cu; Mon, 08 Feb 2021 10:59:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82766.152994; Mon, 08 Feb 2021 10:59:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l94GS-0003ZG-9y; Mon, 08 Feb 2021 10:59:32 +0000
Received: by outflank-mailman (input) for mailman id 82766;
 Mon, 08 Feb 2021 10:59:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=q0oM=HK=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1l94GR-0003Z8-2O
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 10:59:31 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id db998329-aca1-441d-8bb7-7ecf5d4b3235;
 Mon, 08 Feb 2021 10:59:28 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D3377AE74;
 Mon,  8 Feb 2021 10:59:27 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: db998329-aca1-441d-8bb7-7ecf5d4b3235
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612781968; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=Y1k1Irf/aD43gtYCsWD5VZ8YG64iTvviIv/OIuLN4mo=;
	b=r8zjScQIHDKIuFx1t0KulyJX0azA91ELxLtXUTH0g20l/6Bz95Jj0Hu/TLY/UUQFZoAJ5C
	/tePjvH2o+wOS0ig5FAqfj+oDk12BNhw0oJXcellZ62rb+A9HEQ4RJKD3ynqWj3ZFFfMnG
	8NjApdjvOsQ/Au88Ib9MFa5koRyNOYE=
To: Jan Beulich <jbeulich@suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, linux-kernel@vger.kernel.org,
 xen-devel@lists.xenproject.org
References: <20210206104932.29064-1-jgross@suse.com>
 <20210206104932.29064-8-jgross@suse.com>
 <72334160-cffe-2d8a-23b7-2ea9ab1d803a@suse.com>
 <626f500a-494a-0141-7bf3-94fb86b47ed4@suse.com>
 <e88526ac-6972-fe08-c58f-ea872cbdcc14@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [PATCH 7/7] xen/evtchn: read producer index only once
Message-ID: <d0ca217c-ecc9-55f7-abb1-30a687a46b31@suse.com>
Date: Mon, 8 Feb 2021 11:59:27 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <e88526ac-6972-fe08-c58f-ea872cbdcc14@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="g10FMe3I19WjDHFRhrA8hHzwXJTqmeBhS"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--g10FMe3I19WjDHFRhrA8hHzwXJTqmeBhS
Content-Type: multipart/mixed; boundary="WbA8Zux20nsOlI7TWqLh2MCi1HGxyzgG9";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, linux-kernel@vger.kernel.org,
 xen-devel@lists.xenproject.org
Message-ID: <d0ca217c-ecc9-55f7-abb1-30a687a46b31@suse.com>
Subject: Re: [PATCH 7/7] xen/evtchn: read producer index only once
References: <20210206104932.29064-1-jgross@suse.com>
 <20210206104932.29064-8-jgross@suse.com>
 <72334160-cffe-2d8a-23b7-2ea9ab1d803a@suse.com>
 <626f500a-494a-0141-7bf3-94fb86b47ed4@suse.com>
 <e88526ac-6972-fe08-c58f-ea872cbdcc14@suse.com>
In-Reply-To: <e88526ac-6972-fe08-c58f-ea872cbdcc14@suse.com>

--WbA8Zux20nsOlI7TWqLh2MCi1HGxyzgG9
Content-Type: multipart/mixed;
 boundary="------------1B0C26C8A87E3798CD3649FF"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------1B0C26C8A87E3798CD3649FF
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 08.02.21 11:51, Jan Beulich wrote:
> On 08.02.2021 11:41, Jürgen Groß wrote:
>> On 08.02.21 10:48, Jan Beulich wrote:
>>> On 06.02.2021 11:49, Juergen Gross wrote:
>>>> In evtchn_read() use READ_ONCE() for reading the producer index in
>>>> order to avoid the compiler generating multiple accesses.
>>>>
>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>> ---
>>>>    drivers/xen/evtchn.c | 2 +-
>>>>    1 file changed, 1 insertion(+), 1 deletion(-)
>>>>
>>>> diff --git a/drivers/xen/evtchn.c b/drivers/xen/evtchn.c
>>>> index 421382c73d88..f6b199b597bf 100644
>>>> --- a/drivers/xen/evtchn.c
>>>> +++ b/drivers/xen/evtchn.c
>>>> @@ -211,7 +211,7 @@ static ssize_t evtchn_read(struct file *file, char __user *buf,
>>>>    			goto unlock_out;
>>>>
>>>>    		c = u->ring_cons;
>>>> -		p = u->ring_prod;
>>>> +		p = READ_ONCE(u->ring_prod);
>>>>    		if (c != p)
>>>>    			break;
>>>
>>> Why only here and not also in
>>>
>>> 		rc = wait_event_interruptible(u->evtchn_wait,
>>> 					      u->ring_cons != u->ring_prod);
>>>
>>> or in evtchn_poll()? I understand it's not needed when
>>> ring_prod_lock is held, but that's not the case in the two
>>> afaics named places. Plus isn't the same then true for
>>> ring_cons and ring_cons_mutex, i.e. aren't the two named
>>> places plus evtchn_interrupt() also in need of READ_ONCE()
>>> for ring_cons?
>>
>> The problem solved here is the further processing using "p" multiple
>> times. p must not be silently replaced with u->ring_prod by the
>> compiler, so I probably should reword the commit message to say:
>>
>> ... in order to not allow the compiler to refetch p.
>
> I still wouldn't understand the change (and the lack of
> further changes) then: The first further use of p is
> outside the loop, alongside one of c. IOW why would c
> then not need treating the same as p?

Its value wouldn't change, as ring_cons is being modified only at
the bottom of this function, and nowhere else (apart from the reset
case, but this can't run concurrently due to ring_cons_mutex).

> I also still don't see the difference between latching a
> value into a local variable vs a "freestanding" access -
> neither are guaranteed to result in exactly one memory
> access afaict.

READ_ONCE() is using a pointer to volatile, so any refetching by
the compiler would be a bug.

> And of course there's also our beloved topic of access
> tearing here: READ_ONCE() also excludes that (at least as
> per its intentions aiui).

Yes, but I don't see an urgent need to fix that, as there would
be thousands of accesses in the kernel needing a fix. A compiler
tearing a naturally aligned access into multiple memory accesses
would be rejected as buggy from the kernel community IMO.


Juergen

--------------1B0C26C8A87E3798CD3649FF--

--WbA8Zux20nsOlI7TWqLh2MCi1HGxyzgG9--

--g10FMe3I19WjDHFRhrA8hHzwXJTqmeBhS--


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 11:05:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 11:05:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82776.153006 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l94Mb-0004ZV-40; Mon, 08 Feb 2021 11:05:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82776.153006; Mon, 08 Feb 2021 11:05:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l94Ma-0004ZO-W7; Mon, 08 Feb 2021 11:05:52 +0000
Received: by outflank-mailman (input) for mailman id 82776;
 Mon, 08 Feb 2021 11:05:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=atCq=HK=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1l94MZ-0004ZJ-8j
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 11:05:51 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b29e3542-7869-4427-8a03-0d00da53cdb1;
 Mon, 08 Feb 2021 11:05:49 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b29e3542-7869-4427-8a03-0d00da53cdb1
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612782349;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=VxIMKn8BEn8PkuVawNQPMLCTsYD0TLzmAval7+CvIU8=;
  b=L2VW+06jQTDPsR62dS6IU8rsG3upNnpTqs2ciLF9P6bK7OjrIFwUw0mQ
   5kugm6KRLbTsZM2uAib4VpGQv4+RX5VPtlqdpdF2VV6qhQ8uMFzF4uSA+
   EsJp6ywi9BLgcmFSkhfS+P9JyHK6eJZiHUm0uJjoCm/XHLiZ0M/XAaQWQ
   w=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: +nmeD25TcU2eDIDgaesTGb1/7y81d03nUpgr/1MyDGqCPZPg+0iniLeS5mDuc3/8rF1DWNHgYw
 SuPjM/qWS05bIBng04GYsnAMaPipYZ8QW7zhm+PJhCs8rR7v8X0aApd+Sm9kH5ZPypBe3frlTA
 klSfzLxHUHLf1s9OxO+eu3cJccSYYllbqHH/R/64v32E37mx6FxtbZuDIglMMMczhqeQnEKBbO
 ctZj/eya6BRsYkcZHt0Gd7COVqsWMPdpeqC0MErl4MfIqRyoaoTjBx4B9uVQmGr8UZlSfOJRZA
 kw8=
X-SBRS: 5.2
X-MesageID: 36958995
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,161,1610427600"; 
   d="scan'208";a="36958995"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=m0c2Ee6PAPeZolxy6zBoEl3gzn+X91p9HhJkaCVKqPZFDrmL4cPYFeocm/yqbfGkEk4TCPD6jkqWMb3WA+vHRJL947PsvqnbOC2mTHibR/0R9WBfagKU52RcsogAsZuGc2d6TkbSY39Urz7U2z78oCfO/WdvHsI5i3V4nDPjv4prst8fwFEoXnyvJpqn7ZJkny7aQAJg0yc+HLdd9ppdrN3Mq7X+zIyYD8P+YYinDcOqKJlsqb8FaVKoIxzcnnmK31ExC2uEcbGUmr3UiTISNmBJrXkZKSMNaXK7Z2E/OAmIUzefAC9yYky69gAT2vgthmFhtpgtw76g3hqSqedCIg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=HQSxXtlHwovMKOHuGQ7hP9hhPtSNIV81Egw7e+KD0M4=;
 b=jnnlYuc8sWzQ4NchX+gOGsO3ToEL0Ib+sFCePwt4MAzvToe2k4gKDRhqU5PUI678qUtRUdceTLVx6qOzYGKHGt37bgugQd4puVsqOEtJAV7OalzKBGQrEA9lFZgCzVjaix5jbvUJJQkJpzVysEJ0c3EqBK+op4HNWcUtp9K6XUVFPuy49BAmZIpwcNGrPYSpZM2TLp4FpRzrTrZcQjHHOpIv8EGs1E/ZnXj9IUCSKXNKHzYVCRwSnjKXNLuZC5ICv/A92onxokoL1L3K5kbE9sEbjNNUhJcjYIe1nH00m5PNiG40YGKMheTGpLtWNJZrRAddPoCx7s1gzsC/dqJ3yA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=HQSxXtlHwovMKOHuGQ7hP9hhPtSNIV81Egw7e+KD0M4=;
 b=DgVIVF7vFEQIYDD0/56Pqh2Sh0vk4jEFSJFfN/zo2O1CYHW5EI+3eLcm8MXs8Gyc1oPBpG0vUeHEsrA6m7BcS6VOkrFTxbfHlZ2e3w3LJSb0ZSpY75g7rPYnDOW4T0HeLA2XqU0YzAWLMIZciZyLX/XFJpwroIX264Z722+8OEA=
Date: Mon, 8 Feb 2021 12:05:16 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Claudemir Todo Bom
	<claudemir@todobom.com>
Subject: Re: [PATCH v2 2/3] x86/time: adjust time recording
 time_calibration_tsc_rendezvous()
Message-ID: <YCEa7JxPCAzyWqfe@Air-de-Roger>
References: <b8d1d4f8-8675-1393-8524-d060ffb1551a@suse.com>
 <26b71f94-d1c7-d906-5b2a-4e7994d6f7c0@suse.com>
 <YB1vGGl59oNZb5m5@Air-de-Roger>
 <861c931e-7922-0b5b-58a9-63e46ba24af0@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <861c931e-7922-0b5b-58a9-63e46ba24af0@suse.com>
X-ClientProxiedBy: AM5PR0701CA0013.eurprd07.prod.outlook.com
 (2603:10a6:203:51::23) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 0950ec92-6a85-4581-8b1f-08d8cc216e74
X-MS-TrafficTypeDiagnostic: DM5PR03MB2491:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM5PR03MB2491CF083933739A28A424CA8F8F9@DM5PR03MB2491.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: Dt2Wp84kHLAfnRPofBED7H4WgGxUsuECyvYdmuhwvhlkLAmT1y0Cao6YYKTkIt4QoskNDdV/6U2kXLnFpQOEdSUFYP8OgnfanIEv2REKLRPmQA/z+6TJ+HUMCuF9vy18bvdRz3mZX5v+r+jVVP8euSXZnPROiZt3HYLpYklijnszRxKDEnzC60/W6wXiDyTyuvb0N7ninspQXxxp1/IJmG08x4zsUrFDjkzgC2FkQVnF6q/GuzMnmGYdux3ZjD5DoiVKH6Sz6tI/fHB3rA4MXSLraa9TdikPoW4vI8cp55sWsXKQ05/U83tik06yaqkI9cD/iey+XOBe/jJXAchjZE2mxnkfMbvMWzMbQYD0wAdbwivRHHG3Iv4jGEKIYDumDL+2ySPZD8UnnMniSVhi9eTUrupC7lN8klcT9HOTDLoJGfD9CNiMVy7w08LtGg6rKq541Sdgu/wy7nxPHANgWb0CnQgbuOqGc+w34BfOzAKeFrLQhcl5HzgPEmOeZ+ms/O5W0NSoHCC1jXG0thBegw==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(396003)(136003)(376002)(39860400002)(346002)(366004)(8936002)(66556008)(66946007)(66476007)(6666004)(33716001)(8676002)(26005)(16526019)(85182001)(5660300002)(86362001)(2906002)(53546011)(83380400001)(316002)(54906003)(478600001)(6496006)(4326008)(956004)(6486002)(9686003)(6916009)(186003);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?MHdhSHQ4bXZnakdvL0lKL1RRdlc2QkJ0TnpiK0tiQzFnWXdPWng5WXNrNjZY?=
 =?utf-8?B?ekNqaXhnaW9pbTlidDhELzV0bzRWMWptZ1RjZ3V1NFkvTTA3UE80ejRFK0ZT?=
 =?utf-8?B?Vi92M0pZSThZZk9GeWVYa1gveVhsTTNZckZGbFVxY0xmaTBjTmpOeFFldWto?=
 =?utf-8?B?WVlsWjh3ek9MN1VUYkVSQjh2SERwcnNBa2hITWNKZzA3NmtyMS8raWFyMzNs?=
 =?utf-8?B?ZGRnbzhoNVAvU1N1VERPM1RJeWJSWEM4YjI4VG0wdFJ4Q2RzdlRONG9YbWJ6?=
 =?utf-8?B?TUR0bGx4TjNGU0tkNTUrQWJseVR0ZVlPeWFYSWxyZzJmU284SDRuSloxaXEx?=
 =?utf-8?B?K0hMOVFHT2NWcHk1ZC8xQytqQ1pJR1RxOTdqb2tQYUJ4Z3F6ZHdLT0xVNkVU?=
 =?utf-8?B?Z3dGdkEvNW8wN0Y0THRzNjhUNUhzdC9FUEhqRkZka0NBT1Z1c0hYdlRkenpL?=
 =?utf-8?B?M2hTY1BtdkNjY3Z5R1pjWTNtMmNSUlJpcnFTK1k1c0dvZGY0dEd4RUZQUG8r?=
 =?utf-8?B?RU5TTmNYTGNPazZFTlJ4MHFGU0J5Nlc1N0tCWDJZcmhvMGIrdDZLemlLMG9y?=
 =?utf-8?B?c0wvZnF6aXpEdHdyYWYxOC8rb2cxWXkxL0FoamIxOUtCOWUyaHppTlFHSGps?=
 =?utf-8?B?ZXJUQU9JdGFYNVgycEhFbzM1QmhFZHJNMG95TGhyYzBONlpTclBsTzQ2aWg3?=
 =?utf-8?B?d2J0L1FXQ3NMVjY3ZUd4TkExakFSNnpWZDFBSExHV0tRbHV6eEY1TkJnZmxX?=
 =?utf-8?B?SmNmZE5DVXN4eGtxOXlJT2RMT0psdGFpUGtOb1VpUUhrbTliVXpQYTkyRWQ3?=
 =?utf-8?B?ckw2cmM5WTdYYmZlQmhXRWhSZS80YkE1NFdMUUF3UmorWlNTMGlMWm5xMFFQ?=
 =?utf-8?B?VURRQ0oyR3VIbmNwaFkyNWxHdjVUVGoxZmdpeEt1SXZBOHB3MTR3UFkvQW1w?=
 =?utf-8?B?TmxxQVgvVGRqS2cvcm9FL2ZUOVNaRG5IVE5YUFl2TTRqbE00cEtveFl2OVdK?=
 =?utf-8?B?MTFzTzR4UmJqN04rQ2N3YzFXVWVyZWZLY3FIdkM4TGgwZ3A4MHRFNUx1Z09q?=
 =?utf-8?B?bUVBdmxkQmhLQmpURjFOOG9qQUtyZFBZdmc0Q3FPVkQxMnNMaEduWktBRGVq?=
 =?utf-8?B?TkdxQjNJVTUzdkJseEpRemRSRmxWQ01pckFPbk5JNExYcXpnMkQvWGRyTWlI?=
 =?utf-8?B?a0piUWhpaEtXMEQ2Q0dZZ2Nha1ByR0JpL0lVZzF2SFl0MGtEQTBDaDFRYUYr?=
 =?utf-8?B?N3ZmMHRHYTdWY01xajJQT3lNWEJjdzQyZFgzTnJUQjlxeXhac3oyRVJHUzEv?=
 =?utf-8?B?d0pjaGtsbkYwMTFYM1hSeEdLUTlGU1hCM1JWWG5USXIrY3Nza29lTTZXQkNn?=
 =?utf-8?B?bGJySFBnL3NJNHBhR3JRV1R5QmtrUEh6Wk5yZStlSFdJWVI0WVliY2lKd200?=
 =?utf-8?B?cGh3My8veUZ2NVJ1d2FUejAyR3ZkMjd1aGpEVVZGVzg2UHh4ekkvTnRQdkhp?=
 =?utf-8?B?YTlMLzZNS1FSV3ZNRzZlemdicktOdGM1QVZ3ZC9Ja0E2RW4yTkh1clBWV1VR?=
 =?utf-8?B?MzZpWDg4WHBRVi9GWER2bmZ2aXJpTVhTcWlmajBpQ1hUUEdzcVc1dUhIMjFE?=
 =?utf-8?B?b1VSSEZFSG5nUlpFcVVXMWg1YldvanNYWUVBYkNFeE10WDhYcFBCWlBoQlZ0?=
 =?utf-8?B?U01tZ09tbG5IYXRnemR1aXVNQU4zczdHMWpJNExmM0dpSWxpRUI3WDVXMVJj?=
 =?utf-8?Q?bKcN/FhG61W/LlPp/vtip4z0NpL2uPS7t5QW9Ui?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 0950ec92-6a85-4581-8b1f-08d8cc216e74
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 08 Feb 2021 11:05:23.4323
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 7J+F1NIuOKcU3MqAW9y7olVrBq1N0DASil+4Qr0nn+jtbO+7W6FQQ+yNeNu1BBnxI/4n/koH8/nsD1ulPOUdnA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB2491
X-OriginatorOrg: citrix.com

On Mon, Feb 08, 2021 at 11:56:01AM +0100, Jan Beulich wrote:
> On 05.02.2021 17:15, Roger Pau Monné wrote:
> > On Mon, Feb 01, 2021 at 01:43:04PM +0100, Jan Beulich wrote:
> >> The (stime,tsc) tuple is the basis for extrapolation by get_s_time().
> >> Therefore the two better get taken as close to one another as possible.
> >> This means two things: First, reading platform time is too early when
> >> done on the first iteration. The closest we can get is on the last
> >> iteration, immediately before telling other CPUs to write their TSCs
> >> (and then also writing CPU0's). While at the first glance it may seem
> >> not overly relevant when exactly platform time is read (when assuming
> >> that only stime is ever relevant anywhere, and hence the association
> >> with the precise TSC values is of lower interest), both CPU frequency
> >> changes and the effects of SMT make it unpredictable (between individual
> >> rendezvous instances) how long the loop iterations will take. This will
>> in turn lead to a higher error than necessary in how close to linear
> >> stime movement we can get.
> >>
> >> Second, re-reading the TSC for local recording is increasing the overall
> >> error as well, when we already know a more precise value - the one just
> >> written.
> >>
> >> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> > 
> > Acked-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Thanks.
> 
> > I've been thinking this all seems doomed when Xen runs in a virtualized
> > environment, and should likely be disabled. There's no point in trying
> > to sync the TSC over multiple vCPUs as the scheduling delay between
> > them will likely skew any calculations.
> 
> We may want to consider to force the equivalent of
> "clocksource=tsc" in that case. Otoh a well behaved hypervisor
> underneath shouldn't lead to us finding a need to clear
> TSC_RELIABLE, at which point this logic wouldn't get engaged
> in the first place.

I got the impression that on a loaded system, guests with a non-trivial
number of vCPUs might struggle to have them all scheduled closely
enough for the rendezvous not to report a big skew, and thus end up
disabling TSC_RELIABLE?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 11:22:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 11:22:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82790.153018 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l94cf-0006RN-Mp; Mon, 08 Feb 2021 11:22:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82790.153018; Mon, 08 Feb 2021 11:22:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l94cf-0006RG-J6; Mon, 08 Feb 2021 11:22:29 +0000
Received: by outflank-mailman (input) for mailman id 82790;
 Mon, 08 Feb 2021 11:22:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ZIyB=HK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l94ce-0006RB-GK
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 11:22:28 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c9604af4-abe1-4b18-9b29-cf1a4040d86d;
 Mon, 08 Feb 2021 11:22:26 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1504EB0C6;
 Mon,  8 Feb 2021 11:22:26 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c9604af4-abe1-4b18-9b29-cf1a4040d86d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612783346; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=1qHMnyRAoM3fpQV+JMy2aLPq/a6GbDNY/AmiuDukVn0=;
	b=NTH+f+ZYkI9TZkDeUo4kqj3Gi0Pe5pWNBKovGlXOuaijRph/LeEwUdZv4n6xpg0NWABYwE
	LevwdaMDwwxbl3jgCnB6s48yqz2+8a545CVMvTfRWvmGeEhEcwWnRWocRMBgMsorqsMJvt
	gpJhE9nbvSRfsDnGrYSoioxWqQ7v9Bw=
Subject: Re: [PATCH v2 3/3] x86/time: don't move TSC backwards in
 time_calibration_tsc_rendezvous()
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Claudemir Todo Bom <claudemir@todobom.com>
References: <b8d1d4f8-8675-1393-8524-d060ffb1551a@suse.com>
 <80d05abb-4d53-3229-8326-21d79e32dfe4@suse.com>
 <YCEGshHDEH9bJU7y@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ae8d8e02-9d2e-a26e-9321-cae0640a0dee@suse.com>
Date: Mon, 8 Feb 2021 12:22:25 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <YCEGshHDEH9bJU7y@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 08.02.2021 10:38, Roger Pau Monné wrote:
> On Mon, Feb 01, 2021 at 01:43:28PM +0100, Jan Beulich wrote:
>> ---
>> Since CPU0 reads its TSC last on the first iteration, if TSCs were
>> perfectly sync-ed there shouldn't ever be a need to update. However,
>> even on the TSC-reliable system I first tested this on (using
>> "tsc=skewed" to get this rendezvous function into use in the first
>> place) updates by up to several thousand clocks did happen. I wonder
>> whether this points at some problem with the approach that I'm not (yet)
>> seeing.
> 
> I'm confused by this, so on a system that had reliable TSCs, which
> you forced to remove the reliable flag, and then you saw big
> differences when doing the rendezvous?
> 
> That would seem to indicate that such system doesn't really have
> reliable TSCs?

I don't think so, no. This can easily be a timing effect from the
heavy cache line bouncing involved here.

What I'm worried here seeing these updates is that I might still
be moving TSCs backwards in ways observable to the rest of the
system (i.e. beyond the inherent property of the approach), and
this then getting corrected by a subsequent rendezvous. But as
said - I can't see what this could result from, and hence I'm
inclined to assume these are merely effects I've not found a
good explanation for so far.

>> Considering the sufficiently modern CPU it's using, I suspect the
>> reporter's system wouldn't even need to turn off TSC_RELIABLE, if only
>> there wasn't the boot time skew. Hence another approach might be to fix
>> this boot time skew. Of course to recognize whether the TSCs then still
>> aren't in sync we'd need to run tsc_check_reliability() sufficiently
>> long after that adjustment. Which is besides the need to have this
>> "fixing" be precise enough for the TSCs to not look skewed anymore
>> afterwards.
> 
> Maybe it would make sense to do a TSC counter sync after APs are up
> and then disable the rendezvous if the next calibration rendezvous
> shows no skew?

Yes, that's what I was hinting at with the above. For the next
rendezvous to not observe any skew, our adjustment would need to
be far more precise than it is today, though.

> I also wonder, we test for skew just after the APs have been booted,
> and decide at that point whether we need a calibration rendezvous.
> 
> Maybe we could do a TSC sync just after APs are up (to hopefully bring
> them in sync), and then do the tsc_check_reliability just before Xen
> ends booting (ie: before handing control to dom0?)
> 
> What we do right now (ie: do the tsc_check_reliability so early) is
> also likely to miss small skews that will only show up after APs have
> been running for a while?

The APs' TSCs will have been running for about as long as the
BSP's, as INIT does not affect them (and in fact they ought to
be running for _exactly_ as long, or else tsc_check_reliability()
would end up turning off TSC_RELIABLE). So I expect skews to be
large enough at this point to be recognizable.

>> @@ -1712,6 +1720,16 @@ static void time_calibration_tsc_rendezv
>>              while ( atomic_read(&r->semaphore) < total_cpus )
>>                  cpu_relax();
>>  
>> +            if ( tsc == 0 )
>> +            {
>> +                uint64_t cur;
>> +
>> +                tsc = rdtsc_ordered();
>> +                while ( tsc > (cur = r->max_tsc_stamp) )
>> +                    if ( cmpxchg(&r->max_tsc_stamp, cur, tsc) == cur )
>> +                        break;
> 
> I think you could avoid reading cur explicitly for each loop and
> instead do?
> 
> cur = ACCESS_ONCE(r->max_tsc_stamp)
> while ( tsc > cur )
>     cur = cmpxchg(&r->max_tsc_stamp, cur, tsc);

Ah yes. I tried something similar, but not quite the same,
and it looked wrong, so I gave up re-arranging.

>> @@ -1719,9 +1737,12 @@ static void time_calibration_tsc_rendezv
>>              while ( atomic_read(&r->semaphore) > total_cpus )
>>                  cpu_relax();
>>          }
>> +
>> +        /* Just in case a read above ended up reading zero. */
>> +        tsc += !tsc;
> 
> Won't that be worthy of an ASSERT_UNREACHABLE? I'm not sure I see how
> tsc could be 0 on a healthy system after the loop above.

It's not forbidden for the firmware to set the TSCs to some
huge negative value. Considering the effect TSC_ADJUST has on
the actual value read by RDTSC, I think I did actually observe
a system coming up this way, because of (not very helpful)
TSC_ADJUST setting by firmware. So no, no ASSERT_UNREACHABLE()
here.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 11:40:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 11:40:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82802.153030 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l94u7-0008Je-77; Mon, 08 Feb 2021 11:40:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82802.153030; Mon, 08 Feb 2021 11:40:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l94u7-0008JX-3A; Mon, 08 Feb 2021 11:40:31 +0000
Received: by outflank-mailman (input) for mailman id 82802;
 Mon, 08 Feb 2021 11:40:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1l94u5-0008JS-I8
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 11:40:29 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l94u2-0002aq-Ul; Mon, 08 Feb 2021 11:40:26 +0000
Received: from [54.239.6.177] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l94u2-0000zS-Lb; Mon, 08 Feb 2021 11:40:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=zGrVRyh4Bb806FG6nBatBhxLklpw43eAz07n9qXefLM=; b=xB+0y4mG9nGyv+K0rMSe7assdJ
	Blahn0VgEg8Z2h3s3tSzkEDkVSLPk8YVcpox5SrzLQSMtLCND17pUY4746K1Rf/aCyLYeX05a6aBV
	LBMzMH6iJ7qhPg76qhe4KgSgnpBFn1GsWh29UzJj56x7WqovrDEFuHAPIWCJKjCGwgRQ=;
Subject: Re: [PATCH 7/7] xen/evtchn: read producer index only once
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20210206104932.29064-1-jgross@suse.com>
 <20210206104932.29064-8-jgross@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <8032d8a9-b28f-95e1-a5a8-e955ada4dc0a@xen.org>
Date: Mon, 8 Feb 2021 11:40:24 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210206104932.29064-8-jgross@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 06/02/2021 10:49, Juergen Gross wrote:
> In evtchn_read() use READ_ONCE() for reading the producer index in
> order to avoid the compiler generating multiple accesses.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>   drivers/xen/evtchn.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/xen/evtchn.c b/drivers/xen/evtchn.c
> index 421382c73d88..f6b199b597bf 100644
> --- a/drivers/xen/evtchn.c
> +++ b/drivers/xen/evtchn.c
> @@ -211,7 +211,7 @@ static ssize_t evtchn_read(struct file *file, char __user *buf,
>   			goto unlock_out;
>   
>   		c = u->ring_cons;
> -		p = u->ring_prod;
> +		p = READ_ONCE(u->ring_prod);
For consistency, don't you also need the write side in 
evtchn_interrupt() to use WRITE_ONCE()?

>   		if (c != p)
>   			break;
>   
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 11:49:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 11:49:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82804.153042 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l952E-00005J-1A; Mon, 08 Feb 2021 11:48:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82804.153042; Mon, 08 Feb 2021 11:48:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l952D-00005C-SZ; Mon, 08 Feb 2021 11:48:53 +0000
Received: by outflank-mailman (input) for mailman id 82804;
 Mon, 08 Feb 2021 11:48:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=q0oM=HK=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1l952C-000057-4H
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 11:48:52 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1ccc4fda-535a-4da8-8548-b04b35def1c1;
 Mon, 08 Feb 2021 11:48:50 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E9F48AD4E;
 Mon,  8 Feb 2021 11:48:49 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1ccc4fda-535a-4da8-8548-b04b35def1c1
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612784930; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=rNXEQttSljSOXQ5vUQs4GTPYC1r1hAnr6viwolFR9tc=;
	b=r5dHf4kFkxHmbA04PtcbaZu0atntZZsdLD35oGLActq/Hd9uIGmlaQlRW9pfEuXFvBrHX5
	pA45TfkeavAOujIRJ0MsmqAbaLd8GrbmDqnk4HxAft3rabux3DqyLXtnjfJuim2oOcFINC
	wag2fddgjXcY/urh6k3eta9Qg6GdWD0=
Subject: Re: [PATCH 7/7] xen/evtchn: read producer index only once
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20210206104932.29064-1-jgross@suse.com>
 <20210206104932.29064-8-jgross@suse.com>
 <8032d8a9-b28f-95e1-a5a8-e955ada4dc0a@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <969f1492-764b-3345-eb65-64aca554ce9e@suse.com>
Date: Mon, 8 Feb 2021 12:48:49 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <8032d8a9-b28f-95e1-a5a8-e955ada4dc0a@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="MibTRswBhznEAVzFPSZwv0FK78kpDjPYj"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--MibTRswBhznEAVzFPSZwv0FK78kpDjPYj
Content-Type: multipart/mixed; boundary="EKPtOYUKTfIooA4V1ZpTVYV16JtWuK84z";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Message-ID: <969f1492-764b-3345-eb65-64aca554ce9e@suse.com>
Subject: Re: [PATCH 7/7] xen/evtchn: read producer index only once
References: <20210206104932.29064-1-jgross@suse.com>
 <20210206104932.29064-8-jgross@suse.com>
 <8032d8a9-b28f-95e1-a5a8-e955ada4dc0a@xen.org>
In-Reply-To: <8032d8a9-b28f-95e1-a5a8-e955ada4dc0a@xen.org>

--EKPtOYUKTfIooA4V1ZpTVYV16JtWuK84z
Content-Type: multipart/mixed;
 boundary="------------182073CD3ED7DAC9C47D0BBB"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------182073CD3ED7DAC9C47D0BBB
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 08.02.21 12:40, Julien Grall wrote:
> 
> 
> On 06/02/2021 10:49, Juergen Gross wrote:
>> In evtchn_read() use READ_ONCE() for reading the producer index in
>> order to avoid the compiler generating multiple accesses.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>>   drivers/xen/evtchn.c | 2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/drivers/xen/evtchn.c b/drivers/xen/evtchn.c
>> index 421382c73d88..f6b199b597bf 100644
>> --- a/drivers/xen/evtchn.c
>> +++ b/drivers/xen/evtchn.c
>> @@ -211,7 +211,7 @@ static ssize_t evtchn_read(struct file *file, char __user *buf,
>>   			goto unlock_out;
>>   
>>   		c = u->ring_cons;
>> -		p = u->ring_prod;
>> +		p = READ_ONCE(u->ring_prod);
> For consistency, don't you also need the write side in
> evtchn_interrupt() to use WRITE_ONCE()?

Only if I expected the compiler to need multiple memory accesses for
doing the update (see my reply to Jan's comment on this patch).


Juergen

--------------182073CD3ED7DAC9C47D0BBB--

--EKPtOYUKTfIooA4V1ZpTVYV16JtWuK84z--


--MibTRswBhznEAVzFPSZwv0FK78kpDjPYj--


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 11:50:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 11:50:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82809.153054 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l953Z-0000zv-At; Mon, 08 Feb 2021 11:50:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82809.153054; Mon, 08 Feb 2021 11:50:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l953Z-0000zo-7p; Mon, 08 Feb 2021 11:50:17 +0000
Received: by outflank-mailman (input) for mailman id 82809;
 Mon, 08 Feb 2021 11:50:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ZIyB=HK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l953Y-0000zi-10
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 11:50:16 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8a59b54c-0cb5-49f5-8aec-2d17403fdff6;
 Mon, 08 Feb 2021 11:50:10 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 093A8AD3E;
 Mon,  8 Feb 2021 11:50:10 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8a59b54c-0cb5-49f5-8aec-2d17403fdff6
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612785010; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=khQPCjFLkaUBbFM0VZHYHJDPpvAzDhX5L0EjibS3IuE=;
	b=ZiuWX8Xl97t9TdV0zeuAalqt/N3WciFiF+uG1D4jL3M1AF8v6v2EJYO7GL8OeKkoZP4oqB
	keX9e/52QFb0cay5K9G8JLSWC35eQusxqVXrU2kaz6FWfpYWWtWOvUhz2GydcM/1n7RCAh
	h+1BglsuFbnY2V96i0B6HqwntBMtSEI=
Subject: Re: [PATCH v2 2/3] x86/time: adjust time recording
 time_calibration_tsc_rendezvous()
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Claudemir Todo Bom <claudemir@todobom.com>
References: <b8d1d4f8-8675-1393-8524-d060ffb1551a@suse.com>
 <26b71f94-d1c7-d906-5b2a-4e7994d6f7c0@suse.com>
 <YB1vGGl59oNZb5m5@Air-de-Roger>
 <861c931e-7922-0b5b-58a9-63e46ba24af0@suse.com>
 <YCEa7JxPCAzyWqfe@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <9d37329c-50fd-81ae-6987-40c81b78e031@suse.com>
Date: Mon, 8 Feb 2021 12:50:09 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <YCEa7JxPCAzyWqfe@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 08.02.2021 12:05, Roger Pau Monné wrote:
> On Mon, Feb 08, 2021 at 11:56:01AM +0100, Jan Beulich wrote:
>> On 05.02.2021 17:15, Roger Pau Monné wrote:
>>> I've been thinking this all seems doomed when Xen runs in a virtualized
>>> environment, and should likely be disabled. There's no point on trying
>>> to sync the TSC over multiple vCPUs as the scheduling delay between
>>> them will likely skew any calculations.
>>
>> We may want to consider to force the equivalent of
>> "clocksource=tsc" in that case. Otoh a well behaved hypervisor
>> underneath shouldn't lead to us finding a need to clear
>> TSC_RELIABLE, at which point this logic wouldn't get engaged
>> in the first place.
> 
> I got the impression that on a loaded system, guests with a non-trivial
> number of vCPUs might struggle to get them all scheduled close enough
> for the rendezvous not to report a big skew, and thus disable
> TSC_RELIABLE?

No, check_tsc_warp() / tsc_check_reliability() don't have a
problem there. Every CPU reads the shared "most advanced"
stamp before reading its local one. So it doesn't matter how
large the gaps are here. In fact the possible bad effect is
the other way around here - if the scheduling effects are
too heavy, we may mistakenly consider TSCs reliable when
they aren't.

A problem of the kind you describe exists in the actual
rendezvous function. And actually any problem of this kind
can, on a smaller scale, already be observed with SMT,
because the individual hyperthreads of a core can't
possibly all run at the same time.

It occurs to me only now that we can improve accuracy
some (in particular on big systems) by making sure
struct calibration_rendezvous's master_tsc_stamp is not
sharing its cache line with semaphore and master_stime. The
latter get written by (at least) the BSP, while
master_tsc_stamp is stable after the 2nd loop iteration.
Hence on the 3rd and 4th iterations we could even prefetch
it to reduce the delay on the last one.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 11:51:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 11:51:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82810.153066 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l954I-00016I-Kx; Mon, 08 Feb 2021 11:51:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82810.153066; Mon, 08 Feb 2021 11:51:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l954I-00016A-HR; Mon, 08 Feb 2021 11:51:02 +0000
Received: by outflank-mailman (input) for mailman id 82810;
 Mon, 08 Feb 2021 11:51:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1l954H-000162-3j
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 11:51:01 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l954G-0002lN-Lq; Mon, 08 Feb 2021 11:51:00 +0000
Received: from [54.239.6.177] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l954G-0001gh-EF; Mon, 08 Feb 2021 11:51:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=Yb8hwLCySjNi2+iqUZJlgDYQEPN0SidS4NbLYbSWASA=; b=0EP9GAFGUUaHqcqmkHxhIkNS/2
	q+12rHMTFSmo7vkP8d46/QLtWVzWHba1fqhtQm1Y2z2JOKDtsVDeBpzFAfxoAa3D6PPfX4bAD4ndV
	7rpEDV3zOiQ3K+NGx7pWQziyno6q4hapeUafIHQy//JxAUSFNREMC7hY5efEE6w/wgmo=;
Subject: Re: [PATCH 7/7] xen/evtchn: read producer index only once
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 Jan Beulich <jbeulich@suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, linux-kernel@vger.kernel.org,
 xen-devel@lists.xenproject.org
References: <20210206104932.29064-1-jgross@suse.com>
 <20210206104932.29064-8-jgross@suse.com>
 <72334160-cffe-2d8a-23b7-2ea9ab1d803a@suse.com>
 <626f500a-494a-0141-7bf3-94fb86b47ed4@suse.com>
 <e88526ac-6972-fe08-c58f-ea872cbdcc14@suse.com>
 <d0ca217c-ecc9-55f7-abb1-30a687a46b31@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <42e15cc4-56d1-b34b-d97e-d579e771788a@xen.org>
Date: Mon, 8 Feb 2021 11:50:58 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <d0ca217c-ecc9-55f7-abb1-30a687a46b31@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit



On 08/02/2021 10:59, Jürgen Groß wrote:
> On 08.02.21 11:51, Jan Beulich wrote:
> Yes, but I don't see an urgent need to fix that, as there would
> be thousands of accesses in the kernel needing a fix. A compiler
> tearing a naturally aligned access into multiple memory accesses
> would be rejected as buggy from the kernel community IMO.

I would not be so sure. From lwn [1]:

"In the Linux kernel, tearing of plain C-language loads has been 
observed even given properly aligned and machine-word-sized loads.)"

And for store tearing:

"Note that this tearing can happen even on properly aligned and 
machine-word-sized accesses, and in this particular case, even for 
volatile stores. Some might argue that this behavior constitutes a bug 
in the compiler, but either way it illustrates the perceived value of 
store tearing from a compiler-writer viewpoint. [...] But for properly 
aligned machine-sized stores, WRITE_ONCE() will prevent store tearing."

Cheers,

[1] https://lwn.net/Articles/793253/#Load%20Tearing

> 
> 
> Juergen

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 11:54:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 11:54:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82814.153078 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l957r-0001HQ-4S; Mon, 08 Feb 2021 11:54:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82814.153078; Mon, 08 Feb 2021 11:54:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l957r-0001HJ-1J; Mon, 08 Feb 2021 11:54:43 +0000
Received: by outflank-mailman (input) for mailman id 82814;
 Mon, 08 Feb 2021 11:54:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ZIyB=HK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l957q-0001HE-5k
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 11:54:42 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 07405e04-481f-42f0-87cf-d175b5c9ff81;
 Mon, 08 Feb 2021 11:54:41 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 400A8AEC2;
 Mon,  8 Feb 2021 11:54:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 07405e04-481f-42f0-87cf-d175b5c9ff81
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612785280; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Lzj0M0ODVg7irUOYnOdjiNxBrgsVBdOtnhI9h2vRceY=;
	b=GqVgYoTCsAEXTwhIkrKe4y4fnUSBbqbfH23KdRhYyjhWoTUQxU2YjCKiSMxHC0q7DR7pAj
	BjGXzvTTXLSBHZOJetheHO6MsUo02OPb8NcEhBwEO5KrdYtca0SdCU0+ZoRAlBl5pJ6eUr
	VBFfB0utZwpC7HNrBYRpT1Y4P/IPkW8=
Subject: Re: [PATCH 7/7] xen/evtchn: read producer index only once
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, linux-kernel@vger.kernel.org,
 xen-devel@lists.xenproject.org
References: <20210206104932.29064-1-jgross@suse.com>
 <20210206104932.29064-8-jgross@suse.com>
 <72334160-cffe-2d8a-23b7-2ea9ab1d803a@suse.com>
 <626f500a-494a-0141-7bf3-94fb86b47ed4@suse.com>
 <e88526ac-6972-fe08-c58f-ea872cbdcc14@suse.com>
 <d0ca217c-ecc9-55f7-abb1-30a687a46b31@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a30db278-087b-554c-d5bf-1317e14e8508@suse.com>
Date: Mon, 8 Feb 2021 12:54:39 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <d0ca217c-ecc9-55f7-abb1-30a687a46b31@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 08.02.2021 11:59, Jürgen Groß wrote:
> On 08.02.21 11:51, Jan Beulich wrote:
>> On 08.02.2021 11:41, Jürgen Groß wrote:
>>> On 08.02.21 10:48, Jan Beulich wrote:
>>>> On 06.02.2021 11:49, Juergen Gross wrote:
>>>>> In evtchn_read() use READ_ONCE() for reading the producer index in
>>>>> order to avoid the compiler generating multiple accesses.
>>>>>
>>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>>> ---
>>>>>    drivers/xen/evtchn.c | 2 +-
>>>>>    1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/drivers/xen/evtchn.c b/drivers/xen/evtchn.c
>>>>> index 421382c73d88..f6b199b597bf 100644
>>>>> --- a/drivers/xen/evtchn.c
>>>>> +++ b/drivers/xen/evtchn.c
>>>>> @@ -211,7 +211,7 @@ static ssize_t evtchn_read(struct file *file, char __user *buf,
>>>>>    			goto unlock_out;
>>>>>    
>>>>>    		c = u->ring_cons;
>>>>> -		p = u->ring_prod;
>>>>> +		p = READ_ONCE(u->ring_prod);
>>>>>    		if (c != p)
>>>>>    			break;
>>>>
>>>> Why only here and not also in
>>>>
>>>> 		rc = wait_event_interruptible(u->evtchn_wait,
>>>> 					      u->ring_cons != u->ring_prod);
>>>>
>>>> or in evtchn_poll()? I understand it's not needed when
>>>> ring_prod_lock is held, but that's not the case in the two
>>>> afaics named places. Plus isn't the same then true for
>>>> ring_cons and ring_cons_mutex, i.e. aren't the two named
>>>> places plus evtchn_interrupt() also in need of READ_ONCE()
>>>> for ring_cons?
>>>
>>> The problem solved here is the further processing using "p" multiple
>>> times. p must not be silently replaced with u->ring_prod by the
>>> compiler, so I probably should reword the commit message to say:
>>>
>>> ... in order to not allow the compiler to refetch p.
>>
>> I still wouldn't understand the change (and the lack of
>> further changes) then: The first further use of p is
>> outside the loop, alongside one of c. IOW why would c
>> then not need treating the same as p?
> 
> Its value wouldn't change, as ring_cons is being modified only at
> the bottom of this function, and nowhere else (apart from the reset
> case, but this can't run concurrently due to ring_cons_mutex).
> 
>> I also still don't see the difference between latching a
>> value into a local variable vs a "freestanding" access -
>> neither are guaranteed to result in exactly one memory
>> access afaict.
> 
> READ_ONCE() is using a pointer to volatile, so any refetching by
> the compiler would be a bug.

Of course, but this wasn't my point. I was contrasting

		c = u->ring_cons;
		p = u->ring_prod;

which you change with

		rc = wait_event_interruptible(u->evtchn_wait,
					      u->ring_cons != u->ring_prod);

which you leave alone.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 12:04:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 12:04:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82818.153089 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l95Gq-0002KZ-2T; Mon, 08 Feb 2021 12:04:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82818.153089; Mon, 08 Feb 2021 12:04:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l95Gp-0002KS-Vo; Mon, 08 Feb 2021 12:03:59 +0000
Received: by outflank-mailman (input) for mailman id 82818;
 Mon, 08 Feb 2021 12:03:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1l95Go-0002KK-Ce
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 12:03:58 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l95Gm-000318-1S; Mon, 08 Feb 2021 12:03:56 +0000
Received: from [54.239.6.177] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l95Gl-0002j7-O1; Mon, 08 Feb 2021 12:03:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=MD6Bb5sP80quPeH2zs3kTwyyq4XqIGzqFtYmOHg1498=; b=C2TcdzJzRhXqdo5vQnTzknz9uv
	AqlQyCIi6Czc2ySaYeTorxbgl4KLAS9pghcA6ZKMr0Y/zZBz84DqLdo8Z6rYnV/2y/6s/jO68fsYM
	RH9qzcmV9HxFJOoxtBRMMohhVDpVPokj2R4iCDMqA6SHPal+1MK/3Snr7DtWNXwLEhzA=;
Subject: Re: [PATCH 7/7] xen/evtchn: read producer index only once
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20210206104932.29064-1-jgross@suse.com>
 <20210206104932.29064-8-jgross@suse.com>
 <8032d8a9-b28f-95e1-a5a8-e955ada4dc0a@xen.org>
 <969f1492-764b-3345-eb65-64aca554ce9e@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <a3f47092-0525-4594-0421-48e83cee5045@xen.org>
Date: Mon, 8 Feb 2021 12:03:52 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <969f1492-764b-3345-eb65-64aca554ce9e@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Juergen,

On 08/02/2021 11:48, Jürgen Groß wrote:
> On 08.02.21 12:40, Julien Grall wrote:
>>
>>
>> On 06/02/2021 10:49, Juergen Gross wrote:
>>> In evtchn_read() use READ_ONCE() for reading the producer index in
>>> order to avoid the compiler generating multiple accesses.
>>>
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>> ---
>>>   drivers/xen/evtchn.c | 2 +-
>>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/xen/evtchn.c b/drivers/xen/evtchn.c
>>> index 421382c73d88..f6b199b597bf 100644
>>> --- a/drivers/xen/evtchn.c
>>> +++ b/drivers/xen/evtchn.c
>>> @@ -211,7 +211,7 @@ static ssize_t evtchn_read(struct file *file, 
>>> char __user *buf,
>>> 			goto unlock_out;
>>> 		c = u->ring_cons;
>>> -		p = u->ring_prod;
>>> +		p = READ_ONCE(u->ring_prod);
>> For consistency, don't you also need the write side in 
>> evtchn_interrupt() to use WRITE_ONCE()?
> 
> Only in case I'd consider the compiler needing multiple memory
> accesses for doing the update (see my reply to Jan's comment on this
> patch).

Right, I have just answered there :). AFAICT, without using 
WRITE_ONCE()/READ_ONCE() there is no guarantee that load/store tearing 
will not happen.

We can continue the conversation there.

Cheers,

> 
> Juergen

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 12:14:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 12:14:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82820.153102 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l95Qn-0003Nt-2X; Mon, 08 Feb 2021 12:14:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82820.153102; Mon, 08 Feb 2021 12:14:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l95Qm-0003Nm-VV; Mon, 08 Feb 2021 12:14:16 +0000
Received: by outflank-mailman (input) for mailman id 82820;
 Mon, 08 Feb 2021 12:14:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=q0oM=HK=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1l95Qm-0003Nh-9a
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 12:14:16 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3354ca5f-64bd-4ecf-a952-2ed456754d9f;
 Mon, 08 Feb 2021 12:14:15 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4C4C1AEC2;
 Mon,  8 Feb 2021 12:14:14 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3354ca5f-64bd-4ecf-a952-2ed456754d9f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612786454; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=YkL31YCnyHYbb7t23TaR8QRiNbERTptF7wXzJLL1Vxo=;
	b=f6+8KpM2onabGcpcXQyUEuf2Tz0jJbgqS9S25f4GYpXaFlrQJeFMiR0nSmfFIILRXAFi/D
	CHRNDrFYiA/HyMMwMls90LLIPO0p6n9ZzVr1Xl+qs/Wp8cpEY1kF6KD7/GClbtpZoCDncK
	Z3KzAvtPmxBw//0U7enbXBF6dX3j/fo=
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
 netdev@vger.kernel.org, linux-scsi@vger.kernel.org
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, stable@vger.kernel.org,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Jens Axboe <axboe@kernel.dk>, Wei Liu <wei.liu@kernel.org>,
 Paul Durrant <paul@xen.org>, "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>
References: <20210206104932.29064-1-jgross@suse.com>
 <bd63694e-ac0c-7954-ec00-edad05f8da1c@xen.org>
 <eeb62129-d9fc-2155-0e0f-aff1fbb33fbc@suse.com>
 <fcf3181b-3efc-55f5-687c-324937b543e6@xen.org>
 <7aaeeb3d-1e1b-6166-84e9-481153811b62@suse.com>
 <6f547bb5-777a-6fc2-eba2-cccb4adfca87@xen.org>
 <0d623c98-a714-1639-cc53-f58ba3f08212@suse.com>
 <28399fd1-9fe8-f31a-6ee8-e78de567155b@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [PATCH 0/7] xen/events: bug fixes and some diagnostic aids
Message-ID: <1831964f-185e-31bb-2446-778f2c18d71b@suse.com>
Date: Mon, 8 Feb 2021 13:14:13 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <28399fd1-9fe8-f31a-6ee8-e78de567155b@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="hIAoLX3MJuQazBJLmWBlWtXZWpRDvRYID"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--hIAoLX3MJuQazBJLmWBlWtXZWpRDvRYID
Content-Type: multipart/mixed; boundary="qiVuDEdwEs52OcBCUZRJfP80UdZbnlZdc";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
 netdev@vger.kernel.org, linux-scsi@vger.kernel.org
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, stable@vger.kernel.org,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Jens Axboe <axboe@kernel.dk>, Wei Liu <wei.liu@kernel.org>,
 Paul Durrant <paul@xen.org>, "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>
Message-ID: <1831964f-185e-31bb-2446-778f2c18d71b@suse.com>
Subject: Re: [PATCH 0/7] xen/events: bug fixes and some diagnostic aids
References: <20210206104932.29064-1-jgross@suse.com>
 <bd63694e-ac0c-7954-ec00-edad05f8da1c@xen.org>
 <eeb62129-d9fc-2155-0e0f-aff1fbb33fbc@suse.com>
 <fcf3181b-3efc-55f5-687c-324937b543e6@xen.org>
 <7aaeeb3d-1e1b-6166-84e9-481153811b62@suse.com>
 <6f547bb5-777a-6fc2-eba2-cccb4adfca87@xen.org>
 <0d623c98-a714-1639-cc53-f58ba3f08212@suse.com>
 <28399fd1-9fe8-f31a-6ee8-e78de567155b@xen.org>
In-Reply-To: <28399fd1-9fe8-f31a-6ee8-e78de567155b@xen.org>

--qiVuDEdwEs52OcBCUZRJfP80UdZbnlZdc
Content-Type: multipart/mixed;
 boundary="------------910DD0FAB41FE5DC74A08D00"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------910DD0FAB41FE5DC74A08D00
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 08.02.21 11:40, Julien Grall wrote:
> Hi Juergen,
> 
> On 08/02/2021 10:22, Jürgen Groß wrote:
>> On 08.02.21 10:54, Julien Grall wrote:
>>> ... I don't really see how the difference matters here. The idea is to
>>> re-use what's already existing rather than trying to re-invent the
>>> wheel with an extra lock (or whatever we can come up with).
>>
>> The difference is that the race is occurring _before_ any IRQ is
>> involved. So I don't see how modification of IRQ handling would help.
> 
> Roughly our current IRQ handling flow (handle_eoi_irq()) looks like:
> 
> if ( irq in progress )
> {
>    set IRQS_PENDING
>    return;
> }
> 
> do
> {
>    clear IRQS_PENDING
>    handle_irq()
> } while (IRQS_PENDING is set)
> 
> An IRQ handling flow like handle_fasteoi_irq() looks like:
> 
> if ( irq in progress )
>    return;
> 
> handle_irq()
> 
> The latter flow would catch "spurious" interrupts and ignore them, so it
> would nicely handle the race when changing the event affinity.

Sure? Isn't "irq in progress" being reset way before our "lateeoi" is
issued, thus having the same problem again? And I think we want to keep
the lateeoi behavior in order to be able to control event storms.


Juergen

--------------910DD0FAB41FE5DC74A08D00
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------910DD0FAB41FE5DC74A08D00--

--qiVuDEdwEs52OcBCUZRJfP80UdZbnlZdc--

--hIAoLX3MJuQazBJLmWBlWtXZWpRDvRYID
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmAhKxUFAwAAAAAACgkQsN6d1ii/Ey/X
GwgAhB+De5IgMGnLjs2xXuEsjs1CoaFMv6yM53+0stHNws/f9YhqD6Kd0pD3uEC4cy4Fthz6T3c8
J/lq5sS+jmbxUG1UGG8TJjmDK63oSwAqIBU+aefIRsOjLeMGuWxSy+wpvkAllz9iTrJmPg40i+u2
LXDclNuKGEnAAyIUEhfbITMYSsV6K7UTZKiiRC2K42nPyxn9e2KNBtCcGgttYvkop35e3ejDYBoc
QpffC1v/HZOkDBXWqSdffwRlQeQAVhBLwYkXk6J8zynxTqmT5GQJUkhfn4MZQqdGlItysPwuIgQl
kY567Mvsh2aL2O4yNPzEcNpzfKS8wqoZbjAOU+XsuA==
=VlL/
-----END PGP SIGNATURE-----

--hIAoLX3MJuQazBJLmWBlWtXZWpRDvRYID--


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 12:15:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 12:15:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82821.153114 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l95S5-0003Ua-DV; Mon, 08 Feb 2021 12:15:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82821.153114; Mon, 08 Feb 2021 12:15:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l95S5-0003UT-AZ; Mon, 08 Feb 2021 12:15:37 +0000
Received: by outflank-mailman (input) for mailman id 82821;
 Mon, 08 Feb 2021 12:15:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=q0oM=HK=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1l95S4-0003UM-BK
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 12:15:36 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 148bb942-4f65-4731-b262-b0ddd1a5d595;
 Mon, 08 Feb 2021 12:15:31 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8C9C8AD2E;
 Mon,  8 Feb 2021 12:15:30 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 148bb942-4f65-4731-b262-b0ddd1a5d595
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612786530; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=RKXtbcsoovvaCUMeOTZHe5m8aYHItg1m+gJCpla7VS8=;
	b=X5fxVB+wboUGoTTgV9YtK1DA66r0Wjs0HMGrY68aayv1ZUq5YQT4nJulsXvEVL4gL+alGf
	9bKmTzNpqrpYzS8PWAQvryIIC4kMHz/YznN7n00ypcV6bGmkpwwqyzRkdOW0to55i9Wj3W
	xHMG7Z8yC7E/TG5EfZPOWe0BL0e5vUQ=
Subject: Re: [PATCH 7/7] xen/evtchn: read producer index only once
To: Jan Beulich <jbeulich@suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, linux-kernel@vger.kernel.org,
 xen-devel@lists.xenproject.org
References: <20210206104932.29064-1-jgross@suse.com>
 <20210206104932.29064-8-jgross@suse.com>
 <72334160-cffe-2d8a-23b7-2ea9ab1d803a@suse.com>
 <626f500a-494a-0141-7bf3-94fb86b47ed4@suse.com>
 <e88526ac-6972-fe08-c58f-ea872cbdcc14@suse.com>
 <d0ca217c-ecc9-55f7-abb1-30a687a46b31@suse.com>
 <a30db278-087b-554c-d5bf-1317e14e8508@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <9d725a1b-ec8e-c078-5ec6-9c4899d4c7aa@suse.com>
Date: Mon, 8 Feb 2021 13:15:29 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <a30db278-087b-554c-d5bf-1317e14e8508@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="YuKOlcDS7YODQvUINCuHNtInK5LAZedtz"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--YuKOlcDS7YODQvUINCuHNtInK5LAZedtz
Content-Type: multipart/mixed; boundary="wyQF6qxm371oj8AGDLzN5uM2V34m93hPf";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, linux-kernel@vger.kernel.org,
 xen-devel@lists.xenproject.org
Message-ID: <9d725a1b-ec8e-c078-5ec6-9c4899d4c7aa@suse.com>
Subject: Re: [PATCH 7/7] xen/evtchn: read producer index only once
References: <20210206104932.29064-1-jgross@suse.com>
 <20210206104932.29064-8-jgross@suse.com>
 <72334160-cffe-2d8a-23b7-2ea9ab1d803a@suse.com>
 <626f500a-494a-0141-7bf3-94fb86b47ed4@suse.com>
 <e88526ac-6972-fe08-c58f-ea872cbdcc14@suse.com>
 <d0ca217c-ecc9-55f7-abb1-30a687a46b31@suse.com>
 <a30db278-087b-554c-d5bf-1317e14e8508@suse.com>
In-Reply-To: <a30db278-087b-554c-d5bf-1317e14e8508@suse.com>

--wyQF6qxm371oj8AGDLzN5uM2V34m93hPf
Content-Type: multipart/mixed;
 boundary="------------E4AB4358A9365B6EDCA7EB62"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------E4AB4358A9365B6EDCA7EB62
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 08.02.21 12:54, Jan Beulich wrote:
> On 08.02.2021 11:59, Jürgen Groß wrote:
>> On 08.02.21 11:51, Jan Beulich wrote:
>>> On 08.02.2021 11:41, Jürgen Groß wrote:
>>>> On 08.02.21 10:48, Jan Beulich wrote:
>>>>> On 06.02.2021 11:49, Juergen Gross wrote:
>>>>>> In evtchn_read() use READ_ONCE() for reading the producer index in
>>>>>> order to avoid the compiler generating multiple accesses.
>>>>>>
>>>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>>>> ---
>>>>>>     drivers/xen/evtchn.c | 2 +-
>>>>>>     1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>>
>>>>>> diff --git a/drivers/xen/evtchn.c b/drivers/xen/evtchn.c
>>>>>> index 421382c73d88..f6b199b597bf 100644
>>>>>> --- a/drivers/xen/evtchn.c
>>>>>> +++ b/drivers/xen/evtchn.c
>>>>>> @@ -211,7 +211,7 @@ static ssize_t evtchn_read(struct file *file, char __user *buf,
>>>>>>     			goto unlock_out;
>>>>>>     
>>>>>>     		c = u->ring_cons;
>>>>>> -		p = u->ring_prod;
>>>>>> +		p = READ_ONCE(u->ring_prod);
>>>>>>     		if (c != p)
>>>>>>     			break;
>>>>>
>>>>> Why only here and not also in
>>>>>
>>>>> 		rc = wait_event_interruptible(u->evtchn_wait,
>>>>> 					      u->ring_cons != u->ring_prod);
>>>>>
>>>>> or in evtchn_poll()? I understand it's not needed when
>>>>> ring_prod_lock is held, but that's not the case in the two
>>>>> afaics named places. Plus isn't the same then true for
>>>>> ring_cons and ring_cons_mutex, i.e. aren't the two named
>>>>> places plus evtchn_interrupt() also in need of READ_ONCE()
>>>>> for ring_cons?
>>>>
>>>> The problem solved here is the further processing using "p" multiple
>>>> times. p must not be silently replaced with u->ring_prod by the
>>>> compiler, so I probably should reword the commit message to say:
>>>>
>>>> ... in order to not allow the compiler to refetch p.
>>>
>>> I still wouldn't understand the change (and the lack of
>>> further changes) then: The first further use of p is
>>> outside the loop, alongside one of c. IOW why would c
>>> then not need treating the same as p?
>>
>> Its value wouldn't change, as ring_cons is being modified only at
>> the bottom of this function, and nowhere else (apart from the reset
>> case, but this can't run concurrently due to ring_cons_mutex).
>>
>>> I also still don't see the difference between latching a
>>> value into a local variable vs a "freestanding" access -
>>> neither are guaranteed to result in exactly one memory
>>> access afaict.
>>
>> READ_ONCE() is using a pointer to volatile, so any refetching by
>> the compiler would be a bug.
> 
> Of course, but this wasn't my point. I was contrasting
> 
> 		c = u->ring_cons;
> 		p = u->ring_prod;
> 
> which you change with
> 
> 		rc = wait_event_interruptible(u->evtchn_wait,
> 					      u->ring_cons != u->ring_prod);
> 
> which you leave alone.

Can you point out which problem might arise from that?


Juergen


--------------E4AB4358A9365B6EDCA7EB62--

--wyQF6qxm371oj8AGDLzN5uM2V34m93hPf--

--YuKOlcDS7YODQvUINCuHNtInK5LAZedtz
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmAhK2EFAwAAAAAACgkQsN6d1ii/Ey8H
9wf+MPuO3ggydQhCAfAfX21hvC2kwrm+cuFYLd6oNmNNUUxwMFRpU2he8VbAF/pzFZNm/ikIfmQe
Yr5Oeaa2WYDZq0NjGB3khtK11dreh0Ec5F6Z14uMxnTuezK+q7jJPvr7q09NXFPYlJKS458DmWM7
sIoCYNvRpVJdC//ZX/ybvaCMq3+wC6ySJgtFcbxUxQgB5t5oqnvTnVAOZzs0TaT4UKm7JlpDsH+j
WcpWyPx8Km5TIqHgM+SvfEPMOXMmPeB4zj1/YPh4koj/YDTA64DWirN6dcf6W2Eq2MbBXr7+u0fO
4OpZ2B/ld6d1WG2/FhGqIcixUlm1pJo3i6W+ZWioKA==
=Jxbh
-----END PGP SIGNATURE-----

--YuKOlcDS7YODQvUINCuHNtInK5LAZedtz--


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 12:16:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 12:16:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82823.153129 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l95TG-0003bZ-Ph; Mon, 08 Feb 2021 12:16:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82823.153129; Mon, 08 Feb 2021 12:16:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l95TG-0003bS-M9; Mon, 08 Feb 2021 12:16:50 +0000
Received: by outflank-mailman (input) for mailman id 82823;
 Mon, 08 Feb 2021 12:16:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1l95TF-0003bN-Kj
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 12:16:49 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l95TA-0003Cs-Sv; Mon, 08 Feb 2021 12:16:44 +0000
Received: from [54.239.6.177] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l95TA-0003jL-Jb; Mon, 08 Feb 2021 12:16:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=OWbFE+G8Cgp7E//J4RR16Muw6fFI+IxOAFf2yLhvGHY=; b=c/2hHsUek5LDKbZ7f1d8uOM8h9
	J9C8x2Xg6FFKpzkImmwvnegEVQAHrRoPcd60hIPfJJ6XtY89bli+ybWC3OIn/KgVn9VlH6vrBwdtD
	agkDaEThisGyeLlJigni8/HHWKalNoN+MByk8NuDvXexAEfIEJkmiykdUKLdsV3ZX7Eo=;
Subject: Re: [PATCH 0/7] xen/events: bug fixes and some diagnostic aids
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 linux-block@vger.kernel.org, netdev@vger.kernel.org,
 linux-scsi@vger.kernel.org
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, stable@vger.kernel.org,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Jens Axboe <axboe@kernel.dk>, Wei Liu <wei.liu@kernel.org>,
 Paul Durrant <paul@xen.org>, "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>
References: <20210206104932.29064-1-jgross@suse.com>
 <bd63694e-ac0c-7954-ec00-edad05f8da1c@xen.org>
 <eeb62129-d9fc-2155-0e0f-aff1fbb33fbc@suse.com>
 <fcf3181b-3efc-55f5-687c-324937b543e6@xen.org>
 <7aaeeb3d-1e1b-6166-84e9-481153811b62@suse.com>
 <6f547bb5-777a-6fc2-eba2-cccb4adfca87@xen.org>
 <0d623c98-a714-1639-cc53-f58ba3f08212@suse.com>
 <28399fd1-9fe8-f31a-6ee8-e78de567155b@xen.org>
 <1831964f-185e-31bb-2446-778f2c18d71b@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <e8c46e36-cf9e-fb30-21b5-fa662834a01a@xen.org>
Date: Mon, 8 Feb 2021 12:16:41 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <1831964f-185e-31bb-2446-778f2c18d71b@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit



On 08/02/2021 12:14, Jürgen Groß wrote:
> On 08.02.21 11:40, Julien Grall wrote:
>> Hi Juergen,
>>
>> On 08/02/2021 10:22, Jürgen Groß wrote:
>>> On 08.02.21 10:54, Julien Grall wrote:
>>>> ... I don't really see how the difference matter here. The idea is 
>>>> to re-use what's already existing rather than trying to re-invent 
>>>> the wheel with an extra lock (or whatever we can come up).
>>>
>>> The difference is that the race is occurring _before_ any IRQ is
>>> involved. So I don't see how modification of IRQ handling would help.
>>
>> Roughly our current IRQ handling flow (handle_eoi_irq()) looks like:
>>
>> if ( irq in progress )
>> {
>>    set IRQS_PENDING
>>    return;
>> }
>>
>> do
>> {
>>    clear IRQS_PENDING
>>    handle_irq()
>> } while (IRQS_PENDING is set)
>>
>> IRQ handling flow like handle_fasteoi_irq() looks like:
>>
>> if ( irq in progress )
>>    return;
>>
>> handle_irq()
>>
>> The latter flow would catch "spurious" interrupt and ignore them. So 
>> it would handle nicely the race when changing the event affinity.
> 
> Sure? Isn't "irq in progress" being reset way before our "lateeoi" is
> issued, thus having the same problem again? 

Sorry I can't parse this.

> And I think we want to keep
> the lateeoi behavior in order to be able to control event storms.

I didn't (yet) suggest to remove lateeoi. I only suggest to use a 
different workflow to handle the race with vCPU affinity.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 12:23:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 12:23:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82830.153143 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l95Zl-0004dq-Lo; Mon, 08 Feb 2021 12:23:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82830.153143; Mon, 08 Feb 2021 12:23:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l95Zl-0004dj-It; Mon, 08 Feb 2021 12:23:33 +0000
Received: by outflank-mailman (input) for mailman id 82830;
 Mon, 08 Feb 2021 12:23:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ZIyB=HK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l95Zj-0004de-W0
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 12:23:32 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f06e92cf-5749-4bb3-aad2-c62a719224c2;
 Mon, 08 Feb 2021 12:23:26 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E7690B0D1;
 Mon,  8 Feb 2021 12:23:25 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f06e92cf-5749-4bb3-aad2-c62a719224c2
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612787006; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=sAbphsKklSMr84x/HNHun/Cn+tfX3a8rPojN7H8+PsE=;
	b=R/yfEUNZkNfprRqDy4QV5LM9vEuMUPZfykspz1LaSTCgKYi56fpfwWc8UbtajswoWLFDTd
	A2t8imjBE3gNj90/nGueDuYq+YxMVAXlP5Yj9H/EVu1OdNOFzGkaKXBwdN5YqVf3Z+Z/0O
	3lugsKSYA0HJlL09fa4E22cCcOSeJdQ=
Subject: Re: [PATCH 7/7] xen/evtchn: read producer index only once
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, linux-kernel@vger.kernel.org,
 xen-devel@lists.xenproject.org
References: <20210206104932.29064-1-jgross@suse.com>
 <20210206104932.29064-8-jgross@suse.com>
 <72334160-cffe-2d8a-23b7-2ea9ab1d803a@suse.com>
 <626f500a-494a-0141-7bf3-94fb86b47ed4@suse.com>
 <e88526ac-6972-fe08-c58f-ea872cbdcc14@suse.com>
 <d0ca217c-ecc9-55f7-abb1-30a687a46b31@suse.com>
 <a30db278-087b-554c-d5bf-1317e14e8508@suse.com>
 <9d725a1b-ec8e-c078-5ec6-9c4899d4c7aa@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <58ec68aa-6d86-62dd-f28a-a4e5754b0fdf@suse.com>
Date: Mon, 8 Feb 2021 13:23:25 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <9d725a1b-ec8e-c078-5ec6-9c4899d4c7aa@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 08.02.2021 13:15, Jürgen Groß wrote:
> On 08.02.21 12:54, Jan Beulich wrote:
>> On 08.02.2021 11:59, Jürgen Groß wrote:
>>> On 08.02.21 11:51, Jan Beulich wrote:
>>>> On 08.02.2021 11:41, Jürgen Groß wrote:
>>>>> On 08.02.21 10:48, Jan Beulich wrote:
>>>>>> On 06.02.2021 11:49, Juergen Gross wrote:
>>>>>>> In evtchn_read() use READ_ONCE() for reading the producer index in
>>>>>>> order to avoid the compiler generating multiple accesses.
>>>>>>>
>>>>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>>>>> ---
>>>>>>>     drivers/xen/evtchn.c | 2 +-
>>>>>>>     1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>>>
>>>>>>> diff --git a/drivers/xen/evtchn.c b/drivers/xen/evtchn.c
>>>>>>> index 421382c73d88..f6b199b597bf 100644
>>>>>>> --- a/drivers/xen/evtchn.c
>>>>>>> +++ b/drivers/xen/evtchn.c
>>>>>>> @@ -211,7 +211,7 @@ static ssize_t evtchn_read(struct file *file, char __user *buf,
>>>>>>>     			goto unlock_out;
>>>>>>>     
>>>>>>>     		c = u->ring_cons;
>>>>>>> -		p = u->ring_prod;
>>>>>>> +		p = READ_ONCE(u->ring_prod);
>>>>>>>     		if (c != p)
>>>>>>>     			break;
>>>>>>
>>>>>> Why only here and not also in
>>>>>>
>>>>>> 		rc = wait_event_interruptible(u->evtchn_wait,
>>>>>> 					      u->ring_cons != u->ring_prod);
>>>>>>
>>>>>> or in evtchn_poll()? I understand it's not needed when
>>>>>> ring_prod_lock is held, but that's not the case in the two
>>>>>> afaics named places. Plus isn't the same then true for
>>>>>> ring_cons and ring_cons_mutex, i.e. aren't the two named
>>>>>> places plus evtchn_interrupt() also in need of READ_ONCE()
>>>>>> for ring_cons?
>>>>>
>>>>> The problem solved here is the further processing using "p" multiple
>>>>> times. p must not be silently replaced with u->ring_prod by the
>>>>> compiler, so I probably should reword the commit message to say:
>>>>>
>>>>> ... in order to not allow the compiler to refetch p.
>>>>
>>>> I still wouldn't understand the change (and the lack of
>>>> further changes) then: The first further use of p is
>>>> outside the loop, alongside one of c. IOW why would c
>>>> then not need treating the same as p?
>>>
>>> Its value wouldn't change, as ring_cons is being modified only at
>>> the bottom of this function, and nowhere else (apart from the reset
>>> case, but this can't run concurrently due to ring_cons_mutex).
>>>
>>>> I also still don't see the difference between latching a
>>>> value into a local variable vs a "freestanding" access -
>>>> neither are guaranteed to result in exactly one memory
>>>> access afaict.
>>>
>>> READ_ONCE() is using a pointer to volatile, so any refetching by
>>> the compiler would be a bug.
>>
>> Of course, but this wasn't my point. I was contrasting
>>
>> 		c = u->ring_cons;
>> 		p = u->ring_prod;
>>
>> which you change with
>>
>> 		rc = wait_event_interruptible(u->evtchn_wait,
>> 					      u->ring_cons != u->ring_prod);
>>
>> which you leave alone.
> 
> Can you point out which problem might arise from that?

Not any particular active one. Yet enhancing some accesses
but not others seems to me like a recipe for new problems
down the road.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 12:26:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 12:26:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82831.153155 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l95cb-0004lz-45; Mon, 08 Feb 2021 12:26:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82831.153155; Mon, 08 Feb 2021 12:26:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l95cb-0004ls-1E; Mon, 08 Feb 2021 12:26:29 +0000
Received: by outflank-mailman (input) for mailman id 82831;
 Mon, 08 Feb 2021 12:26:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=q0oM=HK=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1l95cZ-0004lm-Ue
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 12:26:27 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b329a489-706e-42d4-bad0-bd2eabfe93e9;
 Mon, 08 Feb 2021 12:26:27 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 27B9EAC6E;
 Mon,  8 Feb 2021 12:26:26 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b329a489-706e-42d4-bad0-bd2eabfe93e9
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612787186; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=SQ+7QYr8QMBro3PDGDYzBHa7DUmRvP2svzPWxpQQUX8=;
	b=NBhu0ghoq1b25VBtos7db92mRomDBXsIrBe0B5cuQkfVxbzmSbH8c0Adp0qyH5plfEwTVD
	B8qmh+yoIwyNywegV+FeuzQyaKnGUg/MYgMeGPExVhouvatMqskg+CE+OIKv+hzKTjcCRH
	s9r7MPv36HivI+wTC/ZU8cDPhGvYT98=
Subject: Re: [PATCH 7/7] xen/evtchn: read producer index only once
To: Jan Beulich <jbeulich@suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, linux-kernel@vger.kernel.org,
 xen-devel@lists.xenproject.org
References: <20210206104932.29064-1-jgross@suse.com>
 <20210206104932.29064-8-jgross@suse.com>
 <72334160-cffe-2d8a-23b7-2ea9ab1d803a@suse.com>
 <626f500a-494a-0141-7bf3-94fb86b47ed4@suse.com>
 <e88526ac-6972-fe08-c58f-ea872cbdcc14@suse.com>
 <d0ca217c-ecc9-55f7-abb1-30a687a46b31@suse.com>
 <a30db278-087b-554c-d5bf-1317e14e8508@suse.com>
 <9d725a1b-ec8e-c078-5ec6-9c4899d4c7aa@suse.com>
 <58ec68aa-6d86-62dd-f28a-a4e5754b0fdf@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <3b415b97-da42-c680-90e2-8b984934b846@suse.com>
Date: Mon, 8 Feb 2021 13:26:25 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <58ec68aa-6d86-62dd-f28a-a4e5754b0fdf@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="WTBwUd9oY26fDvPcG0MkYtgFJNLMgtSRj"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--WTBwUd9oY26fDvPcG0MkYtgFJNLMgtSRj
Content-Type: multipart/mixed; boundary="MDIa9lEOeplKStXf1FoGsrJ4sUihE6Ekg";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, linux-kernel@vger.kernel.org,
 xen-devel@lists.xenproject.org
Message-ID: <3b415b97-da42-c680-90e2-8b984934b846@suse.com>
Subject: Re: [PATCH 7/7] xen/evtchn: read producer index only once
References: <20210206104932.29064-1-jgross@suse.com>
 <20210206104932.29064-8-jgross@suse.com>
 <72334160-cffe-2d8a-23b7-2ea9ab1d803a@suse.com>
 <626f500a-494a-0141-7bf3-94fb86b47ed4@suse.com>
 <e88526ac-6972-fe08-c58f-ea872cbdcc14@suse.com>
 <d0ca217c-ecc9-55f7-abb1-30a687a46b31@suse.com>
 <a30db278-087b-554c-d5bf-1317e14e8508@suse.com>
 <9d725a1b-ec8e-c078-5ec6-9c4899d4c7aa@suse.com>
 <58ec68aa-6d86-62dd-f28a-a4e5754b0fdf@suse.com>
In-Reply-To: <58ec68aa-6d86-62dd-f28a-a4e5754b0fdf@suse.com>

--MDIa9lEOeplKStXf1FoGsrJ4sUihE6Ekg
Content-Type: multipart/mixed;
 boundary="------------6878FD23673DD896F00CC86D"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------6878FD23673DD896F00CC86D
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 08.02.21 13:23, Jan Beulich wrote:
> On 08.02.2021 13:15, Jürgen Groß wrote:
>> On 08.02.21 12:54, Jan Beulich wrote:
>>> On 08.02.2021 11:59, Jürgen Groß wrote:
>>>> On 08.02.21 11:51, Jan Beulich wrote:
>>>>> On 08.02.2021 11:41, Jürgen Groß wrote:
>>>>>> On 08.02.21 10:48, Jan Beulich wrote:
>>>>>>> On 06.02.2021 11:49, Juergen Gross wrote:
>>>>>>>> In evtchn_read() use READ_ONCE() for reading the producer index in
>>>>>>>> order to avoid the compiler generating multiple accesses.
>>>>>>>>
>>>>>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>>>>>> ---
>>>>>>>>      drivers/xen/evtchn.c | 2 +-
>>>>>>>>      1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>>>>
>>>>>>>> diff --git a/drivers/xen/evtchn.c b/drivers/xen/evtchn.c
>>>>>>>> index 421382c73d88..f6b199b597bf 100644
>>>>>>>> --- a/drivers/xen/evtchn.c
>>>>>>>> +++ b/drivers/xen/evtchn.c
>>>>>>>> @@ -211,7 +211,7 @@ static ssize_t evtchn_read(struct file *file, char __user *buf,
>>>>>>>>      			goto unlock_out;
>>>>>>>>      
>>>>>>>>      		c = u->ring_cons;
>>>>>>>> -		p = u->ring_prod;
>>>>>>>> +		p = READ_ONCE(u->ring_prod);
>>>>>>>>      		if (c != p)
>>>>>>>>      			break;
>>>>>>>
>>>>>>> Why only here and not also in
>>>>>>>
>>>>>>> 		rc = wait_event_interruptible(u->evtchn_wait,
>>>>>>> 					      u->ring_cons != u->ring_prod);
>>>>>>>
>>>>>>> or in evtchn_poll()? I understand it's not needed when
>>>>>>> ring_prod_lock is held, but that's not the case in the two
>>>>>>> afaics named places. Plus isn't the same then true for
>>>>>>> ring_cons and ring_cons_mutex, i.e. aren't the two named
>>>>>>> places plus evtchn_interrupt() also in need of READ_ONCE()
>>>>>>> for ring_cons?
>>>>>>
>>>>>> The problem solved here is the further processing using "p" multiple
>>>>>> times. p must not be silently replaced with u->ring_prod by the
>>>>>> compiler, so I probably should reword the commit message to say:
>>>>>>
>>>>>> ... in order to not allow the compiler to refetch p.
>>>>>
>>>>> I still wouldn't understand the change (and the lack of
>>>>> further changes) then: The first further use of p is
>>>>> outside the loop, alongside one of c. IOW why would c
>>>>> then not need treating the same as p?
>>>>
>>>> Its value wouldn't change, as ring_cons is being modified only at
>>>> the bottom of this function, and nowhere else (apart from the reset
>>>> case, but this can't run concurrently due to ring_cons_mutex).
>>>>
>>>>> I also still don't see the difference between latching a
>>>>> value into a local variable vs a "freestanding" access -
>>>>> neither are guaranteed to result in exactly one memory
>>>>> access afaict.
>>>>
>>>> READ_ONCE() is using a pointer to volatile, so any refetching by
>>>> the compiler would be a bug.
>>>
>>> Of course, but this wasn't my point. I was contrasting
>>>
>>> 		c = u->ring_cons;
>>> 		p = u->ring_prod;
>>>
>>> which you change with
>>>
>>> 		rc = wait_event_interruptible(u->evtchn_wait,
>>> 					      u->ring_cons != u->ring_prod);
>>>
>>> which you leave alone.
>>
>> Can you point out which problem might arise from that?
> 
> Not any particular active one. Yet enhancing some accesses
> but not others seems to me like a recipe for new problems
> down the road.

I already reasoned that the usage of READ_ONCE() is due to storing the
value in a local variable which needs to be kept constant during the
following processing (no refetches by the compiler). This reasoning
very clearly doesn't apply to the other accesses.


Juergen

--------------6878FD23673DD896F00CC86D
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------6878FD23673DD896F00CC86D--

--MDIa9lEOeplKStXf1FoGsrJ4sUihE6Ekg--

--WTBwUd9oY26fDvPcG0MkYtgFJNLMgtSRj
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmAhLfEFAwAAAAAACgkQsN6d1ii/Ey8R
Rgf8C9FHzSBje9COydLv/CIZmaPgaS9mAru5hs+MoSQ3Z8L7dTglPbeMgxJIn9eLZiXge9wBozBE
y+hoLcPoh8HXm1SWcUaRbORPDVmlLtOntSkWDrrBXGZMToAXG5MUgMOwXM53yvCR07F0iKBFDatl
kqWUJ4UoHNoOdmpdbXDAiCPjDNILfm2TORxq71KlkMUz0oi12+P0IrVMqv+K4U1uSxQJDFkBBSJg
XTBUSNgE2kvXCnbEkYAhgu+a6gHhCLrWsokIwDxU4JglHwsU11+buxLiy9FKzh8MMtMbzFd4ZyLC
Aty/MyF8GJXuM6yLRG2EtpYEiXOm2lZpaLdP6S7gGA==
=inSB
-----END PGP SIGNATURE-----

--WTBwUd9oY26fDvPcG0MkYtgFJNLMgtSRj--


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 12:31:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 12:31:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82836.153167 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l95hg-0005jN-Oj; Mon, 08 Feb 2021 12:31:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82836.153167; Mon, 08 Feb 2021 12:31:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l95hg-0005jG-Lm; Mon, 08 Feb 2021 12:31:44 +0000
Received: by outflank-mailman (input) for mailman id 82836;
 Mon, 08 Feb 2021 12:31:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=q0oM=HK=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1l95hf-0005jB-E2
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 12:31:43 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bbcb1621-aef6-471c-bbc0-b9f548f690a4;
 Mon, 08 Feb 2021 12:31:38 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 54E13B0CC;
 Mon,  8 Feb 2021 12:31:37 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bbcb1621-aef6-471c-bbc0-b9f548f690a4
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612787497; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=IWEu9KuKzRSdTdHxRxZvVuwEzOx1f/VeQ3IFyCOr++U=;
	b=GQKDajpLJzYZO4pSYsJM+ATEYZWQcmjsx1htbQ1dlH1riPWjdSYjdAYdH3MWfsocoPPh9x
	HpGElfceU/9byhIVwFrXc+zBff9tj+EvVI/kwvBJJ4wRiTKGUX9iEJSBuIqfIxDesrwyjz
	LftZdS5c7L7GeuhXd0hA3h6H38S2GzE=
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
 netdev@vger.kernel.org, linux-scsi@vger.kernel.org
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, stable@vger.kernel.org,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Jens Axboe <axboe@kernel.dk>, Wei Liu <wei.liu@kernel.org>,
 Paul Durrant <paul@xen.org>, "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>
References: <20210206104932.29064-1-jgross@suse.com>
 <bd63694e-ac0c-7954-ec00-edad05f8da1c@xen.org>
 <eeb62129-d9fc-2155-0e0f-aff1fbb33fbc@suse.com>
 <fcf3181b-3efc-55f5-687c-324937b543e6@xen.org>
 <7aaeeb3d-1e1b-6166-84e9-481153811b62@suse.com>
 <6f547bb5-777a-6fc2-eba2-cccb4adfca87@xen.org>
 <0d623c98-a714-1639-cc53-f58ba3f08212@suse.com>
 <28399fd1-9fe8-f31a-6ee8-e78de567155b@xen.org>
 <1831964f-185e-31bb-2446-778f2c18d71b@suse.com>
 <e8c46e36-cf9e-fb30-21b5-fa662834a01a@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [PATCH 0/7] xen/events: bug fixes and some diagnostic aids
Message-ID: <199b76fd-630b-a0c6-926b-3e662103ec42@suse.com>
Date: Mon, 8 Feb 2021 13:31:36 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <e8c46e36-cf9e-fb30-21b5-fa662834a01a@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="JgNIcJOaVP5Gw4rk7riNifkQ1mxTeO1Ew"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--JgNIcJOaVP5Gw4rk7riNifkQ1mxTeO1Ew
Content-Type: multipart/mixed; boundary="sCmJlej2lRR5GzR7qwJDdoFQWBk5LVHvX";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
 netdev@vger.kernel.org, linux-scsi@vger.kernel.org
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, stable@vger.kernel.org,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Jens Axboe <axboe@kernel.dk>, Wei Liu <wei.liu@kernel.org>,
 Paul Durrant <paul@xen.org>, "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>
Message-ID: <199b76fd-630b-a0c6-926b-3e662103ec42@suse.com>
Subject: Re: [PATCH 0/7] xen/events: bug fixes and some diagnostic aids
References: <20210206104932.29064-1-jgross@suse.com>
 <bd63694e-ac0c-7954-ec00-edad05f8da1c@xen.org>
 <eeb62129-d9fc-2155-0e0f-aff1fbb33fbc@suse.com>
 <fcf3181b-3efc-55f5-687c-324937b543e6@xen.org>
 <7aaeeb3d-1e1b-6166-84e9-481153811b62@suse.com>
 <6f547bb5-777a-6fc2-eba2-cccb4adfca87@xen.org>
 <0d623c98-a714-1639-cc53-f58ba3f08212@suse.com>
 <28399fd1-9fe8-f31a-6ee8-e78de567155b@xen.org>
 <1831964f-185e-31bb-2446-778f2c18d71b@suse.com>
 <e8c46e36-cf9e-fb30-21b5-fa662834a01a@xen.org>
In-Reply-To: <e8c46e36-cf9e-fb30-21b5-fa662834a01a@xen.org>

--sCmJlej2lRR5GzR7qwJDdoFQWBk5LVHvX
Content-Type: multipart/mixed;
 boundary="------------2B450E9D523701FD7CCF2F18"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------2B450E9D523701FD7CCF2F18
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 08.02.21 13:16, Julien Grall wrote:
> 
> 
> On 08/02/2021 12:14, Jürgen Groß wrote:
>> On 08.02.21 11:40, Julien Grall wrote:
>>> Hi Juergen,
>>>
>>> On 08/02/2021 10:22, Jürgen Groß wrote:
>>>> On 08.02.21 10:54, Julien Grall wrote:
>>>>> ... I don't really see how the difference matter here. The idea is
>>>>> to re-use what's already existing rather than trying to re-invent
>>>>> the wheel with an extra lock (or whatever we can come up).
>>>>
>>>> The difference is that the race is occurring _before_ any IRQ is
>>>> involved. So I don't see how modification of IRQ handling would help.
>>>
>>> Roughly our current IRQ handling flow (handle_eoi_irq()) looks like:
>>>
>>> if ( irq in progress )
>>> {
>>>    set IRQS_PENDING
>>>    return;
>>> }
>>>
>>> do
>>> {
>>>    clear IRQS_PENDING
>>>    handle_irq()
>>> } while (IRQS_PENDING is set)
>>>
>>> IRQ handling flow like handle_fasteoi_irq() looks like:
>>>
>>> if ( irq in progress )
>>>    return;
>>>
>>> handle_irq()
>>>
>>> The latter flow would catch "spurious" interrupt and ignore them. So
>>> it would handle nicely the race when changing the event affinity.
>>
>> Sure? Isn't "irq in progress" being reset way before our "lateeoi" is
>> issued, thus having the same problem again?
> 
> Sorry I can't parse this.

handle_fasteoi_irq() will do nothing "if ( irq in progress )". When is
this condition being reset again in order to be able to process another
IRQ? I believe this will be the case before our "lateeoi" handling is
becoming active (more precise: when our IRQ handler is returning to
handle_fasteoi_irq()), resulting in the possibility of the same race we
are experiencing now.


Juergen

--------------2B450E9D523701FD7CCF2F18
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------2B450E9D523701FD7CCF2F18--

--sCmJlej2lRR5GzR7qwJDdoFQWBk5LVHvX--

--JgNIcJOaVP5Gw4rk7riNifkQ1mxTeO1Ew
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmAhLygFAwAAAAAACgkQsN6d1ii/Ey9e
Nwf9FLV8FM82fXo33jJcXnUYTrhDEBODgfNVp6BIWVPs0z2jnkfnoxy7wQkEupYsbRkEU18fcRQj
aymzBqq57r/iDuI3vGOHHZV0CCIz9sn91SnCUC3hDCV+HR3u5jK2bvJRnXp2YxBILxdDrWac6vUw
oEsdCbLMtCDf8aKIcSnTYNTcDTQuJqXTmtZJttua/M8LvNshYjJg5mMpKt2BWjSaE1GCzAug08dx
I/SvrIHAPDfAU5/2ZrH5SqQYzkuBu61B5/y/RbP8yLoVvg9XxFYkfEDB9xmSz8KWYeMBFJ0LtY85
ocnORaemCMS72+2eRittkqWU9WXJ/j9RgccWVn8x5A==
=VUK/
-----END PGP SIGNATURE-----

--JgNIcJOaVP5Gw4rk7riNifkQ1mxTeO1Ew--


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 13:09:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 13:09:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82846.153188 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l96IM-0000Pq-HA; Mon, 08 Feb 2021 13:09:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82846.153188; Mon, 08 Feb 2021 13:09:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l96IM-0000Pi-EK; Mon, 08 Feb 2021 13:09:38 +0000
Received: by outflank-mailman (input) for mailman id 82846;
 Mon, 08 Feb 2021 13:09:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1l96IK-0000Pd-Ua
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 13:09:36 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l96IE-00045Q-UI; Mon, 08 Feb 2021 13:09:30 +0000
Received: from [54.239.6.177] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l96IE-0008Vv-Ks; Mon, 08 Feb 2021 13:09:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=uol57jO/dZDNnz0gvnYYX8iUatRwa2AiCurGg9ekwqw=; b=rqta3YaZQM7WojriayJqyS/qS9
	XZoq87teH/mB8cvOiipmyn0hUgSsBKV/1Xs4jZlbsex3ZFVPfRWm4cxUvEUP19mCORwonG3uA9HXw
	ckZ41cJjgLTBpOSC38lsRDbSToCHYjq8I3zqXaYe9hb9yMslD0ltn4QuzE6KNsU2RiZc=;
Subject: Re: [PATCH 0/7] xen/events: bug fixes and some diagnostic aids
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 linux-block@vger.kernel.org, netdev@vger.kernel.org,
 linux-scsi@vger.kernel.org
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, stable@vger.kernel.org,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Jens Axboe <axboe@kernel.dk>, Wei Liu <wei.liu@kernel.org>,
 Paul Durrant <paul@xen.org>, "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>
References: <20210206104932.29064-1-jgross@suse.com>
 <bd63694e-ac0c-7954-ec00-edad05f8da1c@xen.org>
 <eeb62129-d9fc-2155-0e0f-aff1fbb33fbc@suse.com>
 <fcf3181b-3efc-55f5-687c-324937b543e6@xen.org>
 <7aaeeb3d-1e1b-6166-84e9-481153811b62@suse.com>
 <6f547bb5-777a-6fc2-eba2-cccb4adfca87@xen.org>
 <0d623c98-a714-1639-cc53-f58ba3f08212@suse.com>
 <28399fd1-9fe8-f31a-6ee8-e78de567155b@xen.org>
 <1831964f-185e-31bb-2446-778f2c18d71b@suse.com>
 <e8c46e36-cf9e-fb30-21b5-fa662834a01a@xen.org>
 <199b76fd-630b-a0c6-926b-3e662103ec42@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <063eff75-56a5-1af7-f684-a2ed4b13c9a7@xen.org>
Date: Mon, 8 Feb 2021 13:09:27 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <199b76fd-630b-a0c6-926b-3e662103ec42@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Juergen,

On 08/02/2021 12:31, Jürgen Groß wrote:
> On 08.02.21 13:16, Julien Grall wrote:
>>
>>
>> On 08/02/2021 12:14, Jürgen Groß wrote:
>>> On 08.02.21 11:40, Julien Grall wrote:
>>>> Hi Juergen,
>>>>
>>>> On 08/02/2021 10:22, Jürgen Groß wrote:
>>>>> On 08.02.21 10:54, Julien Grall wrote:
>>>>>> ... I don't really see how the difference matter here. The idea is 
>>>>>> to re-use what's already existing rather than trying to re-invent 
>>>>>> the wheel with an extra lock (or whatever we can come up).
>>>>>
>>>>> The difference is that the race is occurring _before_ any IRQ is
>>>>> involved. So I don't see how modification of IRQ handling would help.
>>>>
>>>> Roughly our current IRQ handling flow (handle_eoi_irq()) looks like:
>>>>
>>>> if ( irq in progress )
>>>> {
>>>>    set IRQS_PENDING
>>>>    return;
>>>> }
>>>>
>>>> do
>>>> {
>>>>    clear IRQS_PENDING
>>>>    handle_irq()
>>>> } while (IRQS_PENDING is set)
>>>>
>>>> IRQ handling flow like handle_fasteoi_irq() looks like:
>>>>
>>>> if ( irq in progress )
>>>>    return;
>>>>
>>>> handle_irq()
>>>>
>>>> The latter flow would catch "spurious" interrupt and ignore them. So 
>>>> it would handle nicely the race when changing the event affinity.
>>>
>>> Sure? Isn't "irq in progress" being reset way before our "lateeoi" is
>>> issued, thus having the same problem again? 
>>
>> Sorry I can't parse this.
> 
> handle_fasteoi_irq() will do nothing "if ( irq in progress )". When is
> this condition being reset again in order to be able to process another
> IRQ?
It is reset after the handler has been called. See handle_irq_event().

> I believe this will be the case before our "lateeoi" handling is
> becoming active (more precise: when our IRQ handler is returning to
> handle_fasteoi_irq()), resulting in the possibility of the same race we
> are experiencing now.

I am a bit confused what you mean by "lateeoi" handling is becoming 
active. Can you clarify?

Note that there are other IRQ flows existing. We should have a look at 
them before trying to fix things ourselves.

Although, the other issue I can see so far is handle_irq_for_port() will 
update info->{eoi_cpu, irq_epoch, eoi_time} without any locking. But it 
is not clear this is what you mean by "becoming active".

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 13:19:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 13:19:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82861.153218 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l96S4-0001ec-0y; Mon, 08 Feb 2021 13:19:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82861.153218; Mon, 08 Feb 2021 13:19:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l96S3-0001eV-To; Mon, 08 Feb 2021 13:19:39 +0000
Received: by outflank-mailman (input) for mailman id 82861;
 Mon, 08 Feb 2021 13:19:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=atCq=HK=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1l96S1-0001eQ-TJ
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 13:19:38 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 50bd079c-3295-4a88-9812-3a292d555968;
 Mon, 08 Feb 2021 13:19:36 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 50bd079c-3295-4a88-9812-3a292d555968
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612790376;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=GAdXufS+KLlvQJv/WZCnx33EJabByXdMoyrmOQ598Aw=;
  b=PsbG6hihINq0yE6WpKniLy9axqXe/5btKRW1QLULOjAwwtwNUiArr3D7
   hbiE/iTyXUOI4hwEfDdv+oXen3dI9S743f587tdfGI9kxe53ZF+RLOAeN
   7pYe+7kh/eXH/8mxoma0sQYQcAXI+2Q1q/hxru3RM/yJfYUgn+N9F0Eg9
   k=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 36765489
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,162,1610427600"; 
   d="scan'208";a="36765489"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=DVz6PznaE3RTymjN6t2GZYjw//5j6yTM3bIFfdjcP9/fBIF8atc+0r+HiLOH84hEB0ik3py7mJ4dQMLhrgSuR6J4fiCbMJftAbEuTJJGNB2ye0MVZviiqThlDgu54mFh0t1Mqa9WT/hszXMSWpBcu2s5AasIHnJ7Z9CfwuxTiDlwGdLDTpMCSBAWiBk2WdEuUnjj1ZMsXX4am16vqBMiKeQbaLp+Q7VrdfaH0nBVfSS0Bv4H2AZpsjb9aL1mCZzG6ZUJTwsy6UJkO7ohqA4yoidjO2emQFFIEi/mKIG6ZgOGv3Owl5FfnYIhvm81yjlrT2RgVEy9H4bPFD/4wHp3mg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=tV/26+qgpw3p+QQ995YyObvTlk8eaguLbR203PlOHrw=;
 b=ZABxf0UxZrsYIHYUlezTheiuMYAfHJfNjoZDk80KLNrNUtYfJHtua5iJQt5DU5RcKv9Yd+e5psDrNgd3OGLimFMFPTORL5Lt97PPvRMjN4JZ9vqk5K2f+eXiP/hjeAhNg17zV8V1PVuzGUR7/Z/Nk9ymrvfI81IFBWqVnzn2arhQK8lAB/ICKbJqe4hbXZO7V0W7C+/CZS4j9UbQOXcMwfOM2qTZ5YHanQ9LIFGENS6OJBsgjWLtSmGMatpn7Cgor1NorO2z1qHZpWSU5CLUdsoO9AyWL1eN6qRKU0eXVsatTrpIARtjrgjHql/WM5WMSZNpCV5Yo/j4OCWaDxL/iQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=tV/26+qgpw3p+QQ995YyObvTlk8eaguLbR203PlOHrw=;
 b=X+mHgXdHnb7TCSlNB9e0xJU6PjiwsZhpmuD3JZl2h0/hpO2qhaGI5RpXvZSd23YmQ788ODWXb+gM25uXoDqdFJziTJ3qNd8b9MztdzKLKF0TnDheLxln4olXA3eOy1y7rLLpjgIunYR2csAYqzRuqudeLcSFV3vn+CePPSQ0mkY=
Date: Mon, 8 Feb 2021 14:19:26 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Claudemir Todo Bom
	<claudemir@todobom.com>
Subject: Re: [PATCH v2 3/3] x86/time: don't move TSC backwards in
 time_calibration_tsc_rendezvous()
Message-ID: <YCE6XlA14A2Qsq8H@Air-de-Roger>
References: <b8d1d4f8-8675-1393-8524-d060ffb1551a@suse.com>
 <80d05abb-4d53-3229-8326-21d79e32dfe4@suse.com>
 <YCEGshHDEH9bJU7y@Air-de-Roger>
 <ae8d8e02-9d2e-a26e-9321-cae0640a0dee@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <ae8d8e02-9d2e-a26e-9321-cae0640a0dee@suse.com>
X-ClientProxiedBy: PR1PR01CA0010.eurprd01.prod.exchangelabs.com
 (2603:10a6:102::23) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 0f34633b-f2c7-4357-6a60-08d8cc342c5c
X-MS-TrafficTypeDiagnostic: DM6PR03MB4300:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB4300CB6A4BF920365371E3E68F8F9@DM6PR03MB4300.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 0f34633b-f2c7-4357-6a60-08d8cc342c5c
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 08 Feb 2021 13:19:32.8908
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: WrLjVJOSkztHDdQb24jD7ioTMV9RjCORNkN5/n2fhz6kwfRhJuPnlE4hJpi8OhAWk80vxOY+VV1xM1uBBy69eg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4300
X-OriginatorOrg: citrix.com

On Mon, Feb 08, 2021 at 12:22:25PM +0100, Jan Beulich wrote:
> On 08.02.2021 10:38, Roger Pau Monné wrote:
> > On Mon, Feb 01, 2021 at 01:43:28PM +0100, Jan Beulich wrote:
> >> ---
> >> Since CPU0 reads its TSC last on the first iteration, if TSCs were
> >> perfectly sync-ed there shouldn't ever be a need to update. However,
> >> even on the TSC-reliable system I first tested this on (using
> >> "tsc=skewed" to get this rendezvous function into use in the first
> >> place) updates by up to several thousand clocks did happen. I wonder
> >> whether this points at some problem with the approach that I'm not (yet)
> >> seeing.
> > 
> > I'm confused by this, so on a system that had reliable TSCs, which
> > you forced to remove the reliable flag, and then you saw big
> > differences when doing the rendezvous?
> > 
> > That would seem to indicate that such system doesn't really have
> > reliable TSCs?
> 
> I don't think so, no. This can easily be a timing effect from the
> heavy cache line bouncing involved here.
> 
> What I'm worried about here, seeing these updates, is that I might
> still be moving TSCs backwards in ways observable to the rest of the
> system (i.e. beyond the inherent property of the approach), and this
> then getting corrected by a subsequent rendezvous. But as said - I
> can't see what this could result from, and hence I'm inclined to
> assume these are merely effects I've not found a good explanation
> for so far.

I'm slightly worried by this, maybe because I'm misunderstanding part
of the TSC sync stuff.

So you forced a system that Xen would otherwise consider to have a
reliable TSC (one that doesn't need a calibration rendezvous) into
doing the calibration rendezvous, and the skew seen is quite big. I
would expect such skew to be minimal, since we would otherwise consider
the system to not need calibration at all.

This makes me wonder whether the system does indeed need such
calibration (which I don't think it does), or whether the calibration
that we actually try to do is quite bogus?

> >> @@ -1719,9 +1737,12 @@ static void time_calibration_tsc_rendezv
> >>              while ( atomic_read(&r->semaphore) > total_cpus )
> >>                  cpu_relax();
> >>          }
> >> +
> >> +        /* Just in case a read above ended up reading zero. */
> >> +        tsc += !tsc;
> > 
> > Won't that be worthy of an ASSERT_UNREACHABLE? I'm not sure I see how
> > tsc could be 0 on a healthy system after the loop above.
> 
> It's not forbidden for the firmware to set the TSCs to some
> huge negative value. Considering the effect TSC_ADJUST has on
> the actual value read by RDTSC, I think I did actually observe
> a system coming up this way, because of (not very helpful)
> TSC_ADJUST setting by firmware. So no, no ASSERT_UNREACHABLE()
> here.

But then the code here will loop 5 times, and it's not possible for
all 5 of those loops to read a TSC value of 0? I could see it reading
0 on one of the iterations, but not on all of them.
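As an aside, the effect of the `tsc += !tsc` idiom from the hunk quoted
above can be sketched in isolation (a toy illustration only; the helper
name `fixup_zero_tsc()` is made up here and is not Xen code - the point
is presumably that a value of 0 is reserved by the rendezvous to mean
"not yet written", so a genuine read of 0 must be nudged away from it):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Toy illustration (not Xen code): "tsc += !tsc" adds 1 exactly when
 * tsc == 0 and leaves every other value - including huge wrapped
 * "negative" values that firmware may have set via TSC_ADJUST -
 * unchanged.
 */
static uint64_t fixup_zero_tsc(uint64_t tsc)
{
    tsc += !tsc;   /* 0 -> 1, everything else untouched */
    return tsc;
}
```

This is a branch-free equivalent of `if ( tsc == 0 ) tsc = 1;`.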

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 13:24:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 13:24:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82864.153230 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l96Wt-0002V9-IX; Mon, 08 Feb 2021 13:24:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82864.153230; Mon, 08 Feb 2021 13:24:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l96Wt-0002V2-Fc; Mon, 08 Feb 2021 13:24:39 +0000
Received: by outflank-mailman (input) for mailman id 82864;
 Mon, 08 Feb 2021 13:24:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l96Ws-0002Uu-Lh; Mon, 08 Feb 2021 13:24:38 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l96Ws-0004KJ-Dv; Mon, 08 Feb 2021 13:24:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l96Ws-0003UP-6c; Mon, 08 Feb 2021 13:24:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l96Ws-0000KK-67; Mon, 08 Feb 2021 13:24:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=uuIqbcshItmYOSTgBEQ0jiL30QutDp0qB/bUjXlOGyI=; b=FaYDD+8QdM9KyygwZyQlWrcs2Y
	aC0hsXYIGdW4CPPUIU4gHfDXBBSK2JK0z9jNPQiq1LgzAMA4LF/OGouUK69VzWS7l1U0qTV0UZs09
	OyHxTf9J5kE5qxUb5FgtvghU6N3LZU9WEqj4U57MiWDUllDloIDMrT1jBEe8oz6LbUTY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159136-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 159136: tolerable trouble: pass/starved - PUSHED
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:hosts-allocate:starved:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    ovmf=43a113385e370530eb52cf2e55b3019d8d4f6558
X-Osstest-Versions-That:
    ovmf=0d96664df322d50e0ac54130e129c0bf4f2b72df
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 08 Feb 2021 13:24:38 +0000

flight 159136 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159136/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-ovmf-amd64  3 hosts-allocate              starved n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  3 hosts-allocate             starved n/a

version targeted for testing:
 ovmf                 43a113385e370530eb52cf2e55b3019d8d4f6558
baseline version:
 ovmf                 0d96664df322d50e0ac54130e129c0bf4f2b72df

Last test of basis   159040  2021-02-05 11:11:01 Z    3 days
Testing same since   159088  2021-02-07 01:54:46 Z    1 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bob Feng <bob.c.feng@intel.com>
  Liming Gao <gaoliming@byosoft.com.cn>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         starved 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          starved 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   0d96664df3..43a113385e  43a113385e370530eb52cf2e55b3019d8d4f6558 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 13:58:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 13:58:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82876.153245 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l973R-0005LD-7I; Mon, 08 Feb 2021 13:58:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82876.153245; Mon, 08 Feb 2021 13:58:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l973R-0005L6-2o; Mon, 08 Feb 2021 13:58:17 +0000
Received: by outflank-mailman (input) for mailman id 82876;
 Mon, 08 Feb 2021 13:58:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=q0oM=HK=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1l973Q-0005L1-Cg
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 13:58:16 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5099f50e-7d08-4cc7-9fdd-068de6b12c56;
 Mon, 08 Feb 2021 13:58:15 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0A9FEACF4;
 Mon,  8 Feb 2021 13:58:14 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5099f50e-7d08-4cc7-9fdd-068de6b12c56
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612792694; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=tcBhSfP94VTv7FXTnn4DMhRjG+4pRpXkNW2nr2ZTzIA=;
	b=mOmky49dpItLnisu1ssDflSKufMzx7HOIXR0mmEqtiD6+ovsjz9zEQWL9Di0nUCfcSIwKe
	KMpQpWzNVeaPBLHAEX3MAXqTf0aDST9hFkhv2mFs+s9RCm+Kg590Q/tW7llRsHv1B89fJT
	W0Oj70T1zfvL+y8yU8rguhesfOa8qGk=
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
 netdev@vger.kernel.org, linux-scsi@vger.kernel.org
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, stable@vger.kernel.org,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Jens Axboe <axboe@kernel.dk>, Wei Liu <wei.liu@kernel.org>,
 Paul Durrant <paul@xen.org>, "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>
References: <20210206104932.29064-1-jgross@suse.com>
 <bd63694e-ac0c-7954-ec00-edad05f8da1c@xen.org>
 <eeb62129-d9fc-2155-0e0f-aff1fbb33fbc@suse.com>
 <fcf3181b-3efc-55f5-687c-324937b543e6@xen.org>
 <7aaeeb3d-1e1b-6166-84e9-481153811b62@suse.com>
 <6f547bb5-777a-6fc2-eba2-cccb4adfca87@xen.org>
 <0d623c98-a714-1639-cc53-f58ba3f08212@suse.com>
 <28399fd1-9fe8-f31a-6ee8-e78de567155b@xen.org>
 <1831964f-185e-31bb-2446-778f2c18d71b@suse.com>
 <e8c46e36-cf9e-fb30-21b5-fa662834a01a@xen.org>
 <199b76fd-630b-a0c6-926b-3e662103ec42@suse.com>
 <063eff75-56a5-1af7-f684-a2ed4b13c9a7@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [PATCH 0/7] xen/events: bug fixes and some diagnostic aids
Message-ID: <4279cab9-9b36-e83d-bd7a-ff7cd2832054@suse.com>
Date: Mon, 8 Feb 2021 14:58:11 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <063eff75-56a5-1af7-f684-a2ed4b13c9a7@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="aeHBsCzabkbZQTGoGyDZ3tNfjnz3rrSmx"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--aeHBsCzabkbZQTGoGyDZ3tNfjnz3rrSmx
Content-Type: multipart/mixed; boundary="ykmiwzPoeTGee2Ilphs7XRwDJ8uG5zBLk";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
 netdev@vger.kernel.org, linux-scsi@vger.kernel.org
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, stable@vger.kernel.org,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Jens Axboe <axboe@kernel.dk>, Wei Liu <wei.liu@kernel.org>,
 Paul Durrant <paul@xen.org>, "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>
Message-ID: <4279cab9-9b36-e83d-bd7a-ff7cd2832054@suse.com>
Subject: Re: [PATCH 0/7] xen/events: bug fixes and some diagnostic aids
References: <20210206104932.29064-1-jgross@suse.com>
 <bd63694e-ac0c-7954-ec00-edad05f8da1c@xen.org>
 <eeb62129-d9fc-2155-0e0f-aff1fbb33fbc@suse.com>
 <fcf3181b-3efc-55f5-687c-324937b543e6@xen.org>
 <7aaeeb3d-1e1b-6166-84e9-481153811b62@suse.com>
 <6f547bb5-777a-6fc2-eba2-cccb4adfca87@xen.org>
 <0d623c98-a714-1639-cc53-f58ba3f08212@suse.com>
 <28399fd1-9fe8-f31a-6ee8-e78de567155b@xen.org>
 <1831964f-185e-31bb-2446-778f2c18d71b@suse.com>
 <e8c46e36-cf9e-fb30-21b5-fa662834a01a@xen.org>
 <199b76fd-630b-a0c6-926b-3e662103ec42@suse.com>
 <063eff75-56a5-1af7-f684-a2ed4b13c9a7@xen.org>
In-Reply-To: <063eff75-56a5-1af7-f684-a2ed4b13c9a7@xen.org>

--ykmiwzPoeTGee2Ilphs7XRwDJ8uG5zBLk
Content-Type: multipart/mixed;
 boundary="------------41703FFA3099386D19AA195D"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------41703FFA3099386D19AA195D
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 08.02.21 14:09, Julien Grall wrote:
> Hi Juergen,
>
> On 08/02/2021 12:31, Jürgen Groß wrote:
>> On 08.02.21 13:16, Julien Grall wrote:
>>>
>>>
>>> On 08/02/2021 12:14, Jürgen Groß wrote:
>>>> On 08.02.21 11:40, Julien Grall wrote:
>>>>> Hi Juergen,
>>>>>
>>>>> On 08/02/2021 10:22, Jürgen Groß wrote:
>>>>>> On 08.02.21 10:54, Julien Grall wrote:
>>>>>>> ... I don't really see how the difference matters here. The idea
>>>>>>> is to re-use what's already existing rather than trying to
>>>>>>> re-invent the wheel with an extra lock (or whatever we can come up).
>>>>>>
>>>>>> The difference is that the race is occurring _before_ any IRQ is
>>>>>> involved. So I don't see how modification of IRQ handling would help.
>>>>>
>>>>> Roughly our current IRQ handling flow (handle_edge_irq()) looks like:
>>>>>
>>>>> if ( irq in progress )
>>>>> {
>>>>>     set IRQS_PENDING
>>>>>     return;
>>>>> }
>>>>>
>>>>> do
>>>>> {
>>>>>     clear IRQS_PENDING
>>>>>     handle_irq()
>>>>> } while (IRQS_PENDING is set)
>>>>>
>>>>> An IRQ handling flow like handle_fasteoi_irq() looks like:
>>>>>
>>>>> if ( irq in progress )
>>>>>     return;
>>>>>
>>>>> handle_irq()
>>>>>
>>>>> The latter flow would catch "spurious" interrupts and ignore them.
>>>>> So it would handle nicely the race when changing the event affinity.
>>>>
>>>> Sure? Isn't "irq in progress" being reset way before our "lateeoi" is
>>>> issued, thus having the same problem again?
>>>
>>> Sorry, I can't parse this.
>>
>> handle_fasteoi_irq() will do nothing "if ( irq in progress )". When is
>> this condition being reset again in order to be able to process another
>> IRQ?
> It is reset after the handler has been called. See handle_irq_event().

Right. And for us this is too early, as we want the next IRQ to be
handled only after we have called xen_irq_lateeoi().

>
>> I believe this will be the case before our "lateeoi" handling is
>> becoming active (more precisely: when our IRQ handler is returning to
>> handle_fasteoi_irq()), resulting in the possibility of the same race
>> we are experiencing now.
>
> I am a bit confused what you mean by "lateeoi" handling is becoming
> active. Can you clarify?

See above: the next call of the handler should be allowed only after
xen_irq_lateeoi() for the IRQ has been called.

If the handler is called earlier, we have the race resulting in the
WARN() splats.

> Note that there are other IRQ flows existing. We should have a look at
> them before trying to fix things ourselves.

Fine with me, but it either needs to fit all use cases (interdomain,
IPI, real interrupts) or we need to have a per-type IRQ flow.

I think we should fix the issue locally first, then we can start to
plan a thorough rework. It's not as if the changes needed with the
current flow would be so huge, and I'd really like to have a solution
sooner rather than later. Changing the IRQ flow might have other side
effects which need to be excluded by thorough testing.

> Although, the other issue I can see so far is that handle_irq_for_port()
> will update info->{eoi_cpu, irq_epoch, eoi_time} without any locking.
> But it is not clear this is what you mean by "becoming active".

As long as a single event can't be handled on multiple cpus at the same
time, there is no locking needed.
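For readers following along, the two flows quoted earlier in the thread
can be modeled in a self-contained, single-threaded sketch (the names
`edge_flow()`/`fasteoi_flow()` and the state bits are simplifications
invented here, standing in for the kernel's IRQD_IRQ_INPROGRESS and
IRQS_PENDING handling, not the kernel's actual code):

```c
#include <assert.h>

enum { IN_PROGRESS = 1, PENDING = 2 };

static int handled;                    /* counts handle_irq() invocations */
static void handle_irq(void) { handled++; }

/* handle_edge_irq()-style: an event arriving while one is in progress
 * is latched in PENDING and replayed by the loop, so it is not lost. */
static void edge_flow(unsigned int *state)
{
    if ( *state & IN_PROGRESS )
    {
        *state |= PENDING;             /* latch for later replay */
        return;
    }
    *state |= IN_PROGRESS;
    do {
        *state &= ~PENDING;
        handle_irq();
    } while ( *state & PENDING );
    *state &= ~IN_PROGRESS;
}

/* handle_fasteoi_irq()-style: an event arriving while one is in
 * progress is simply dropped. */
static void fasteoi_flow(unsigned int *state)
{
    if ( *state & IN_PROGRESS )
        return;                        /* silently ignored */
    *state |= IN_PROGRESS;
    handle_irq();
    *state &= ~IN_PROGRESS;
}
```

The contention in the thread is visible in this model: the fasteoi-style
gate drops concurrent events, and it is released as soon as the handler
returns, whereas the lateeoi scheme wants the next event held off until
xen_irq_lateeoi() has been called.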


Juergen


--------------41703FFA3099386D19AA195D--

--ykmiwzPoeTGee2Ilphs7XRwDJ8uG5zBLk--


--aeHBsCzabkbZQTGoGyDZ3tNfjnz3rrSmx--


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 14:00:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 14:00:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82881.153257 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9755-0005Xq-I6; Mon, 08 Feb 2021 13:59:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82881.153257; Mon, 08 Feb 2021 13:59:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9755-0005Xj-EV; Mon, 08 Feb 2021 13:59:59 +0000
Received: by outflank-mailman (input) for mailman id 82881;
 Mon, 08 Feb 2021 13:59:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ZIyB=HK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l9754-0005Xd-8I
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 13:59:58 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 19fbceb2-5a12-4518-99b3-44d21bc3e3aa;
 Mon, 08 Feb 2021 13:59:56 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E0D3EAE3C;
 Mon,  8 Feb 2021 13:59:55 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 19fbceb2-5a12-4518-99b3-44d21bc3e3aa
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612792796; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=6UaGZetiNDTKaxGviBuwA0NF4AUjf8OxPPuyiL4EVj8=;
	b=AnSfa2sMGCwaATsDOwWFAJKCcmbMr+YWwrs9qd5rYCDMijIB29eclZDThwvpmwczH3J5GW
	6W4/KHDj5eAeWa1G7xloMGYd+EcDFfq4KAp3+OsSYYjnOLsAvkHtuOOyrKWzVEf2Bv8Pwm
	W4kCcEDV70uKTDgfThGSfzl4sq3V6K8=
Subject: Re: [PATCH v2 3/3] x86/time: don't move TSC backwards in
 time_calibration_tsc_rendezvous()
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Claudemir Todo Bom <claudemir@todobom.com>
References: <b8d1d4f8-8675-1393-8524-d060ffb1551a@suse.com>
 <80d05abb-4d53-3229-8326-21d79e32dfe4@suse.com>
 <YCEGshHDEH9bJU7y@Air-de-Roger>
 <ae8d8e02-9d2e-a26e-9321-cae0640a0dee@suse.com>
 <YCE6XlA14A2Qsq8H@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <391c4c6f-6213-ac7a-8871-a0c138f6b29f@suse.com>
Date: Mon, 8 Feb 2021 14:59:55 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <YCE6XlA14A2Qsq8H@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 08.02.2021 14:19, Roger Pau Monné wrote:
> On Mon, Feb 08, 2021 at 12:22:25PM +0100, Jan Beulich wrote:
>> On 08.02.2021 10:38, Roger Pau Monné wrote:
>>> On Mon, Feb 01, 2021 at 01:43:28PM +0100, Jan Beulich wrote:
>>>> ---
>>>> Since CPU0 reads its TSC last on the first iteration, if TSCs were
>>>> perfectly sync-ed there shouldn't ever be a need to update. However,
>>>> even on the TSC-reliable system I first tested this on (using
>>>> "tsc=skewed" to get this rendezvous function into use in the first
>>>> place) updates by up to several thousand clocks did happen. I wonder
>>>> whether this points at some problem with the approach that I'm not (yet)
>>>> seeing.
>>>
>>> I'm confused by this, so on a system that had reliable TSCs, which
>>> you forced to remove the reliable flag, and then you saw big
>>> differences when doing the rendezvous?
>>>
>>> That would seem to indicate that such system doesn't really have
>>> reliable TSCs?
>>
>> I don't think so, no. This can easily be a timing effect from the
>> heavy cache line bouncing involved here.
>>
>> What I'm worried here seeing these updates is that I might still
>> be moving TSCs backwards in ways observable to the rest of the
>> system (i.e. beyond the inherent property of the approach), and
>> this then getting corrected by a subsequent rendezvous. But as
>> said - I can't see what this could result from, and hence I'm
>> inclined to assume these are merely effects I've not found a
>> good explanation for so far.
> 
> I'm slightly worried by this, maybe because I'm misunderstanding part
> of the TSC sync stuff.
> 
> So you forced a system that Xen would otherwise consider to have a
> reliable TSC (one that doesn't need a calibration rendezvous) into
> doing the calibration rendezvous, and then the skew seen is quite big.
> I would expect such skew to be minimal? As we would otherwise consider
> the system to not need calibration at all.
> 
> This makes me wonder if the system does indeed need such calibration
> (which I don't think so), or if the calibration that we actually try
> to do is quite bogus?

I wouldn't call it bogus, but it's not very precise. Hence me
saying that if we wanted to make the problematic system seen as
TSC_RELIABLE (and hence be able to switch from "tsc" to "std"
rendezvous), we'd first need to improve accuracy here quite a bit.
(I suspect sufficient accuracy can only be achieved by making use
of TSC_ADJUST, but that's not available on the reporter's hardware,
so of no immediate interest here.)

>>>> @@ -1719,9 +1737,12 @@ static void time_calibration_tsc_rendezv
>>>>              while ( atomic_read(&r->semaphore) > total_cpus )
>>>>                  cpu_relax();
>>>>          }
>>>> +
>>>> +        /* Just in case a read above ended up reading zero. */
>>>> +        tsc += !tsc;
>>>
>>> Won't that be worthy of an ASSERT_UNREACHABLE? I'm not sure I see how
>>> tsc could be 0 on a healthy system after the loop above.
>>
>> It's not forbidden for the firmware to set the TSCs to some
>> huge negative value. Considering the effect TSC_ADJUST has on
>> the actual value read by RDTSC, I think I did actually observe
>> a system coming up this way, because of (not very helpful)
>> TSC_ADJUST setting by firmware. So no, no ASSERT_UNREACHABLE()
>> here.
> 
> But then the code here will loop 5 times, and it's not possible for
> those 5 loops to read a TSC value of 0? I could see it reading 0 on
> one of the iterations but not in all of them.

Sure, we can read zero at most once here. Yet the "if ( tsc == 0 )"
conditionals get executed on every iteration, while they must yield
"true" only on the first (from the variable's initializer); we
absolutely need to go the "else if()" path on CPU0 on the 2nd
iteration, and we also need to skip that part on later iterations
on the other CPUs (for CPU0 to then take the 2nd "else if()" path
from no later than the 3rd iteration onwards; the body of this of
course will only be executed on the last iteration).

The arrangement of all of this could be changed of course, but I'd
prefer to retain the property of there not being any dependency on
the exact number of iterations the loop header specifies, as long as
it's no less than 2 (before this series) / 3 (after this series).
I.e. I wouldn't want to identify e.g. the 1st iteration by "i == 4".

Jan


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 14:21:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 14:21:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82886.153268 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l97PY-0008Fz-CE; Mon, 08 Feb 2021 14:21:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82886.153268; Mon, 08 Feb 2021 14:21:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l97PY-0008Fs-96; Mon, 08 Feb 2021 14:21:08 +0000
Received: by outflank-mailman (input) for mailman id 82886;
 Mon, 08 Feb 2021 14:21:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1l97PX-0008Fn-UX
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 14:21:07 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l97PQ-0005JM-59; Mon, 08 Feb 2021 14:21:00 +0000
Received: from [54.239.6.177] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l97PP-0006Dn-Nb; Mon, 08 Feb 2021 14:20:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=5wS9jApuQ6snC1lAYDvURgj8Yzs3/FPfCQanUYhUoH4=; b=bvHCq83LkPUCYauekJ4gj+HboE
	m00kpkrKFq0YE1awNW4Ug94dTN8ctow0FVCufnLzgze73/s/t6WcNUpu98KmYn+gT09Wlk867t2Dc
	60yfQsaFVMWifcGwW0LQmicJ1coBc3N7Pa3jEkGQuU9wsAN95aNedj4fqlpV+lMZaWoQ=;
Subject: Re: [PATCH 0/7] xen/events: bug fixes and some diagnostic aids
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 linux-block@vger.kernel.org, netdev@vger.kernel.org,
 linux-scsi@vger.kernel.org
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, stable@vger.kernel.org,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Jens Axboe <axboe@kernel.dk>, Wei Liu <wei.liu@kernel.org>,
 Paul Durrant <paul@xen.org>, "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>
References: <20210206104932.29064-1-jgross@suse.com>
 <bd63694e-ac0c-7954-ec00-edad05f8da1c@xen.org>
 <eeb62129-d9fc-2155-0e0f-aff1fbb33fbc@suse.com>
 <fcf3181b-3efc-55f5-687c-324937b543e6@xen.org>
 <7aaeeb3d-1e1b-6166-84e9-481153811b62@suse.com>
 <6f547bb5-777a-6fc2-eba2-cccb4adfca87@xen.org>
 <0d623c98-a714-1639-cc53-f58ba3f08212@suse.com>
 <28399fd1-9fe8-f31a-6ee8-e78de567155b@xen.org>
 <1831964f-185e-31bb-2446-778f2c18d71b@suse.com>
 <e8c46e36-cf9e-fb30-21b5-fa662834a01a@xen.org>
 <199b76fd-630b-a0c6-926b-3e662103ec42@suse.com>
 <063eff75-56a5-1af7-f684-a2ed4b13c9a7@xen.org>
 <4279cab9-9b36-e83d-bd7a-ff7cd2832054@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <279b741b-09dc-c6af-bf9d-df57922fa465@xen.org>
Date: Mon, 8 Feb 2021 14:20:56 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <4279cab9-9b36-e83d-bd7a-ff7cd2832054@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Juergen,

On 08/02/2021 13:58, Jürgen Groß wrote:
> On 08.02.21 14:09, Julien Grall wrote:
>> Hi Juergen,
>>
>> On 08/02/2021 12:31, Jürgen Groß wrote:
>>> On 08.02.21 13:16, Julien Grall wrote:
>>>>
>>>>
>>>> On 08/02/2021 12:14, Jürgen Groß wrote:
>>>>> On 08.02.21 11:40, Julien Grall wrote:
>>>>>> Hi Juergen,
>>>>>>
>>>>>> On 08/02/2021 10:22, Jürgen Groß wrote:
>>>>>>> On 08.02.21 10:54, Julien Grall wrote:
>>>>>>>> ... I don't really see how the difference matters here. The idea 
>>>>>>>> is to re-use what's already existing rather than trying to 
>>>>>>>> re-invent the wheel with an extra lock (or whatever we can come 
>>>>>>>> up with).
>>>>>>>
>>>>>>> The difference is that the race is occurring _before_ any IRQ is
>>>>>>> involved. So I don't see how modification of IRQ handling would 
>>>>>>> help.
>>>>>>
>>>>>> Roughly our current IRQ handling flow (handle_eoi_irq()) looks like:
>>>>>>
>>>>>> if ( irq in progress )
>>>>>> {
>>>>>>     set IRQS_PENDING
>>>>>>     return;
>>>>>> }
>>>>>>
>>>>>> do
>>>>>> {
>>>>>>     clear IRQS_PENDING
>>>>>>     handle_irq()
>>>>>> } while (IRQS_PENDING is set)
>>>>>>
>>>>>> IRQ handling flow like handle_fasteoi_irq() looks like:
>>>>>>
>>>>>> if ( irq in progress )
>>>>>>     return;
>>>>>>
>>>>>> handle_irq()
>>>>>>
>>>>>> The latter flow would catch "spurious" interrupts and ignore them. 
>>>>>> So it would nicely handle the race when changing the event affinity.
>>>>>
>>>>> Sure? Isn't "irq in progress" being reset way before our "lateeoi" is
>>>>> issued, thus having the same problem again? 
>>>>
>>>> Sorry I can't parse this.
>>>
>>> handle_fasteoi_irq() will do nothing "if ( irq in progress )". When is
>>> this condition being reset again in order to be able to process another
>>> IRQ?
>> It is reset after the handler has been called. See handle_irq_event().
> 
> Right. And for us this is too early, as we want the next IRQ being
> handled only after we have called xen_irq_lateeoi().

It is not really the next IRQ here. It is more a spurious IRQ because we 
don't clear & mask the event right away. Instead, it is done later in 
the handling.

> 
>>
>>> I believe this will be the case before our "lateeoi" handling is
>>> becoming active (more precise: when our IRQ handler is returning to
>>> handle_fasteoi_irq()), resulting in the possibility of the same race we
>>> are experiencing now.
>>
>> I am a bit confused what you mean by "lateeoi" handling is becoming 
>> active. Can you clarify?
> 
> See above: the next call of the handler should be allowed only after
> xen_irq_lateeoi() for the IRQ has been called.
> 
> If the handler is being called earlier we have the race resulting
> in the WARN() splats.

It is difficult to reason about a race from words alone. Can you provide 
a scenario (similar to the one I originally provided) with two vCPUs and 
show how this can happen?

> 
>> Note that are are other IRQ flows existing. We should have a look at 
>> them before trying to fix thing ourself.
> 
> Fine with me, but it either needs to fit all use cases (interdomain,
> IPI, real interrupts) or we need to have a per-type IRQ flow.

AFAICT, we already use different flows based on the use case. Before 
2011, we used the fasteoi one, but this was changed by the 
following commit:


commit 7e186bdd0098b34c69fb8067c67340ae610ea499
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date:   Fri May 6 12:27:50 2011 +0100

     xen: do not clear and mask evtchns in __xen_evtchn_do_upcall

     Change the irq handler of evtchns and pirqs that don't need EOI (pirqs
     that correspond to physical edge interrupts) to handle_edge_irq.

     Use handle_fasteoi_irq for pirqs that need eoi (they generally
     correspond to level triggered irqs), no risk in loosing interrupts
     because we have to EOI the irq anyway.

     This change has the following benefits:

     - it uses the very same handlers that Linux would use on native for the
     same irqs (handle_edge_irq for edge irqs and msis, and
     handle_fasteoi_irq for everything else);

     - it uses these handlers in the same way native code would use them: it
     let Linux mask\unmask and ack the irq when Linux want to mask\unmask
     and ack the irq;

     - it fixes a problem occurring when a driver calls disable_irq() in its
     handler: the old code was unconditionally unmasking the evtchn even if
     the irq is disabled when irq_eoi was called.

     See Documentation/DocBook/genericirq.tmpl for more informations.

     Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
     [v1: Fixed space/tab issues]
     Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>


> 
> I think we should fix the issue locally first, then we can start to do
> a thorough rework planning. Its not as if the needed changes with the
> current flow would be so huge, and I'd really like to have a solution
> rather sooner than later. Changing the IRQ flow might have other side
> effects which need to be excluded by thorough testing.
I agree that we need a solution ASAP. But I am a bit worried about:
   1) adding another lock in the event handling path;
   2) adding more complexity to the event handling (it is already fairly 
difficult to reason about the locking/races).

Let's see what the local fix looks like.

>> Although, the other issue I can see so far is handle_irq_for_port() 
>> will update info->{eoi_cpu, irq_epoch, eoi_time} without any locking. 
>> But it is not clear this is what you mean by "becoming active".
> 
> As long as a single event can't be handled on multiple cpus at the same
> time, there is no locking needed.

Well, it can happen in the current code (see my original scenario). If 
your idea fixes it, then fine.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 14:21:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 14:21:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82887.153281 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l97QG-0008Kp-KZ; Mon, 08 Feb 2021 14:21:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82887.153281; Mon, 08 Feb 2021 14:21:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l97QG-0008Ki-HT; Mon, 08 Feb 2021 14:21:52 +0000
Received: by outflank-mailman (input) for mailman id 82887;
 Mon, 08 Feb 2021 14:21:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ZIyB=HK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l97QF-0008KZ-2L
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 14:21:51 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6b015e4c-4d40-4fbb-9b25-9866d05e91bd;
 Mon, 08 Feb 2021 14:21:48 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E4039ACBA;
 Mon,  8 Feb 2021 14:21:47 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6b015e4c-4d40-4fbb-9b25-9866d05e91bd
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612794108; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=T/tXhAtjyVPDUEpJlzuULewsT31oSoNMA2AySuTm43Q=;
	b=g5Lk1qraSxYF63Nmxkk1TIzRAk+ejM3Z1XIJqkm+08MwNv+LkJohHTwTM0nAvilZK7u50y
	a2srTc0/soeUyjO61jeURD6zb1zMpdDWCW/TwVmuVhk1jVQlOp0qoyA8x98hTl4nG0gr9v
	oYMC7i5UekIj3vTGYyPskNUQciQT32w=
Subject: Re: [PATCH HVM v2 1/1] hvm: refactor set param
To: Norbert Manthey <nmanthey@amazon.de>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Ian Jackson <iwj@xenproject.org>,
 xen-devel@lists.xenproject.org
References: <20210205203905.8824-1-nmanthey@amazon.de>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <edf1cd78-2192-2679-9a34-804c5d7b75c5@suse.com>
Date: Mon, 8 Feb 2021 15:21:47 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210205203905.8824-1-nmanthey@amazon.de>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 05.02.2021 21:39, Norbert Manthey wrote:
> To prevent leaking HVM params via L1TF and similar issues on a
> hyperthread pair, let's load values of domains as late as possible.
> 
> Furthermore, speculative barriers are re-arranged to make sure we do not
> allow guests running on co-located VCPUs to leak hvm parameter values of
> other domains.
> 
> This is part of the speculative hardening effort.
> 
> Signed-off-by: Norbert Manthey <nmanthey@amazon.de>
> Reported-by: Hongyan Xia <hongyxia@amazon.co.uk>

Did you lose Ian's release-ack, or did you drop it for a specific
reason?

> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -4060,7 +4060,7 @@ static int hvm_allow_set_param(struct domain *d,
>                                 uint32_t index,
>                                 uint64_t new_value)
>  {
> -    uint64_t value = d->arch.hvm.params[index];
> +    uint64_t value;
>      int rc;
>  
>      rc = xsm_hvm_param(XSM_TARGET, d, HVMOP_set_param);
> @@ -4108,6 +4108,13 @@ static int hvm_allow_set_param(struct domain *d,
>      if ( rc )
>          return rc;
>  
> +    if ( index >= HVM_NR_PARAMS )
> +        return -EINVAL;
> +
> +    /* Make sure we evaluate permissions before loading data of domains. */
> +    block_speculation();
> +
> +    value = d->arch.hvm.params[index];
>      switch ( index )
>      {
>      /* The following parameters should only be changed once. */

I don't see the need for the heavier block_speculation() here;
afaict array_access_nospec() should do fine. The switch() in
context above as well as the switch() further down in the
function don't have any speculation susceptible code.

Furthermore the first switch() doesn't use "value" at all, so
you could move the access even further down. This may have the
downside of adding latency, so may not be worth it, but in
this case at least the description should say half a word,
especially seeing it say "as late as possible" right now.

> @@ -4141,6 +4148,9 @@ static int hvm_set_param(struct domain *d, uint32_t index, uint64_t value)
>      if ( rc )
>          return rc;
>  
> +    /* Make sure we evaluate permissions before loading data of domains. */
> +    block_speculation();
> +
>      switch ( index )
>      {
>      case HVM_PARAM_CALLBACK_IRQ:

Like you do for the "get" path I think this similarly renders
pointless the use in hvmop_set_param() (and - see below - the
same consideration wrt is_hvm_domain() applies).

> @@ -4388,6 +4398,10 @@ int hvm_get_param(struct domain *d, uint32_t index, uint64_t *value)
>      if ( rc )
>          return rc;
>  
> +    /* Make sure the index bound check in hvm_get_param is respected, as well as
> +       the above domain permissions. */
> +    block_speculation();

Nit: Please fix comment style here.

> @@ -4428,9 +4442,6 @@ static int hvmop_get_param(
>      if ( a.index >= HVM_NR_PARAMS )
>          return -EINVAL;
>  
> -    /* Make sure the above bound check is not bypassed during speculation. */
> -    block_speculation();
> -
>      d = rcu_lock_domain_by_any_id(a.domid);
>      if ( d == NULL )
>          return -ESRCH;

This one really was pointless anyway, as is_hvm_domain() (used
down from here) already contains a suitable barrier.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 14:26:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 14:26:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82890.153293 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l97UR-00005P-6z; Mon, 08 Feb 2021 14:26:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82890.153293; Mon, 08 Feb 2021 14:26:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l97UR-00005I-3Z; Mon, 08 Feb 2021 14:26:11 +0000
Received: by outflank-mailman (input) for mailman id 82890;
 Mon, 08 Feb 2021 14:26:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l97UP-00005A-Ur; Mon, 08 Feb 2021 14:26:09 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l97UP-0005P5-Ph; Mon, 08 Feb 2021 14:26:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l97UP-00073g-G1; Mon, 08 Feb 2021 14:26:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l97UP-0003MM-FV; Mon, 08 Feb 2021 14:26:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=gMFaA2qTIzuri5m0S8I7iNOLYgQuVGMS950if3AZF2w=; b=WvmOLcTrap9xXUYzrlPhikDNkc
	hjwZHZIilBKN1Hnj1uuv50FNc3iXqPYN/CPsz4IIaUXpDedp3MuuMxJgzkPYVzWWjakA7B8AqXqi0
	m6Ub+BjU4cggGrR8rskK8YuoZQ4H9X9awj8Jb20hdCPA/GQnJZNRQDZwlZ6IOEDMIrUI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159138-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 159138: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=25ee478662d378ec2b24f517fb6d8d4829b885ff
X-Osstest-Versions-That:
    xen=ca82d3fecc93745ee17850a609ac7772bd7c8bf7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 08 Feb 2021 14:26:09 +0000

flight 159138 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159138/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-xsm               6 xen-build                fail REGR. vs. 159054
 build-armhf                   6 xen-build                fail REGR. vs. 159054

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  25ee478662d378ec2b24f517fb6d8d4829b885ff
baseline version:
 xen                  ca82d3fecc93745ee17850a609ac7772bd7c8bf7

Last test of basis   159054  2021-02-05 20:01:29 Z    2 days
Testing same since   159138  2021-02-08 13:00:32 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Igor Druzhinin <igor.druzhinin@citrix.com>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  pass    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 25ee478662d378ec2b24f517fb6d8d4829b885ff
Author: Igor Druzhinin <igor.druzhinin@citrix.com>
Date:   Wed Feb 3 20:07:04 2021 +0000

    tools/libxl: only set viridian flags on new domains
    
    Domains migrating or restoring should have viridian HVM param key in
    the migration stream already and setting that twice results in Xen
    returing -EEXIST on the second attempt later (during migration stream parsing)
    in case the values don't match. That causes migration/restore operation
    to fail at destination side.
    
    That issue is now resurfaced by the latest commits (983524671 and 7e5cffcd1e)
    extending default viridian feature set making the values from the previous
    migration streams and those set at domain construction different.
    
    Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit 804fe751375b1f40eb3142121bf2b70fa2a83972
Author: Igor Druzhinin <igor.druzhinin@citrix.com>
Date:   Wed Feb 3 20:07:03 2021 +0000

    tools/libxl: pass libxl__domain_build_state to libxl__arch_domain_create
    
    No functional change.
    
    Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
    Reviewed-by: Ian Jackson <iwj@xenproject.org>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 14:35:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 14:35:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82897.153311 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l97dM-00017Z-4z; Mon, 08 Feb 2021 14:35:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82897.153311; Mon, 08 Feb 2021 14:35:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l97dM-00017S-1K; Mon, 08 Feb 2021 14:35:24 +0000
Received: by outflank-mailman (input) for mailman id 82897;
 Mon, 08 Feb 2021 14:35:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1l97dK-00017N-IM
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 14:35:22 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l97dF-0005YS-Lb; Mon, 08 Feb 2021 14:35:17 +0000
Received: from [54.239.6.177] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l97dF-0007Co-Df; Mon, 08 Feb 2021 14:35:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:References:Cc:To:From:Subject;
	bh=aivOqaGOcSxpvU6conwQTcZ0h2NZKR4vLEU5lBoBLnM=; b=TpQqFHUGQ309Z3JMnbU0rIfpjj
	ASrKoC0dMPhuUjLk3ViAyPeqvknYbyMrlu/ppcR+upMwaNIxtxB5uhfH28LHUOellFyF5th0KdgHs
	jKTAv2UPTPD50GHhHJfMLTxTwKrTdbDlgJtHRIu8230PgEIb7ppiHtcCI8fRvPRQBvDo=;
Subject: Re: [PATCH 0/7] xen/events: bug fixes and some diagnostic aids
From: Julien Grall <julien@xen.org>
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 linux-block@vger.kernel.org, netdev@vger.kernel.org,
 linux-scsi@vger.kernel.org
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, stable@vger.kernel.org,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Jens Axboe <axboe@kernel.dk>, Wei Liu <wei.liu@kernel.org>,
 Paul Durrant <paul@xen.org>, "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>
References: <20210206104932.29064-1-jgross@suse.com>
 <bd63694e-ac0c-7954-ec00-edad05f8da1c@xen.org>
 <eeb62129-d9fc-2155-0e0f-aff1fbb33fbc@suse.com>
 <fcf3181b-3efc-55f5-687c-324937b543e6@xen.org>
 <7aaeeb3d-1e1b-6166-84e9-481153811b62@suse.com>
 <6f547bb5-777a-6fc2-eba2-cccb4adfca87@xen.org>
 <0d623c98-a714-1639-cc53-f58ba3f08212@suse.com>
 <28399fd1-9fe8-f31a-6ee8-e78de567155b@xen.org>
 <1831964f-185e-31bb-2446-778f2c18d71b@suse.com>
 <e8c46e36-cf9e-fb30-21b5-fa662834a01a@xen.org>
 <199b76fd-630b-a0c6-926b-3e662103ec42@suse.com>
 <063eff75-56a5-1af7-f684-a2ed4b13c9a7@xen.org>
 <4279cab9-9b36-e83d-bd7a-ff7cd2832054@suse.com>
 <279b741b-09dc-c6af-bf9d-df57922fa465@xen.org>
Message-ID: <9f07dae5-050c-da2c-edc1-e1587dbae9c4@xen.org>
Date: Mon, 8 Feb 2021 14:35:14 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <279b741b-09dc-c6af-bf9d-df57922fa465@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 08/02/2021 14:20, Julien Grall wrote:
>>>> I believe this will be the case before our "lateeoi" handling is
>>>> becoming active (more precise: when our IRQ handler is returning to
>>>> handle_fasteoi_irq()), resulting in the possibility of the same race we
>>>> are experiencing now.
>>>
>>> I am a bit confused what you mean by "lateeoi" handling is becoming 
>>> active. Can you clarify?
>>
>> See above: the next call of the handler should be allowed only after
>> xen_irq_lateeoi() for the IRQ has been called.
>>
>> If the handler is being called earlier we have the race resulting
>> in the WARN() splats.
> 
> I feel it is dislike to understand race with just words. Can you provide

Sorry I meant difficult rather than dislike.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 14:51:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 14:51:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82902.153327 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l97sW-0002xx-Ev; Mon, 08 Feb 2021 14:51:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82902.153327; Mon, 08 Feb 2021 14:51:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l97sW-0002xq-B4; Mon, 08 Feb 2021 14:51:04 +0000
Received: by outflank-mailman (input) for mailman id 82902;
 Mon, 08 Feb 2021 14:51:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=q0oM=HK=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1l97sV-0002xl-2N
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 14:51:03 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9c948877-cab5-4fd9-9e2e-f7119c2f377f;
 Mon, 08 Feb 2021 14:51:01 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C495DAF49;
 Mon,  8 Feb 2021 14:51:00 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9c948877-cab5-4fd9-9e2e-f7119c2f377f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612795861; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=6vdod3vMI6fHDIRT527ymU4BKIman1O1s9W6e0uStMQ=;
	b=RpsnuNywiDLF3uAe9PVzXzoTBanGsXv++rbklf5MYeimeDIfvHezSQv6Je0LYBhNzRPQzP
	IycZLQNFCSqkhItpBARDuuHJhTZyvbD0MoyAAHY074xAZwICkXeM1QdeKIjgj8g1Z8wD+i
	d+zteJL+b7jJjjGBHFK/Dd1MM4/AQaY=
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
 netdev@vger.kernel.org, linux-scsi@vger.kernel.org
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, stable@vger.kernel.org,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Jens Axboe <axboe@kernel.dk>, Wei Liu <wei.liu@kernel.org>,
 Paul Durrant <paul@xen.org>, "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>
References: <20210206104932.29064-1-jgross@suse.com>
 <bd63694e-ac0c-7954-ec00-edad05f8da1c@xen.org>
 <eeb62129-d9fc-2155-0e0f-aff1fbb33fbc@suse.com>
 <fcf3181b-3efc-55f5-687c-324937b543e6@xen.org>
 <7aaeeb3d-1e1b-6166-84e9-481153811b62@suse.com>
 <6f547bb5-777a-6fc2-eba2-cccb4adfca87@xen.org>
 <0d623c98-a714-1639-cc53-f58ba3f08212@suse.com>
 <28399fd1-9fe8-f31a-6ee8-e78de567155b@xen.org>
 <1831964f-185e-31bb-2446-778f2c18d71b@suse.com>
 <e8c46e36-cf9e-fb30-21b5-fa662834a01a@xen.org>
 <199b76fd-630b-a0c6-926b-3e662103ec42@suse.com>
 <063eff75-56a5-1af7-f684-a2ed4b13c9a7@xen.org>
 <4279cab9-9b36-e83d-bd7a-ff7cd2832054@suse.com>
 <279b741b-09dc-c6af-bf9d-df57922fa465@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [PATCH 0/7] xen/events: bug fixes and some diagnostic aids
Message-ID: <bf0a3f8f-9604-1ff3-6b82-3cb117ce3839@suse.com>
Date: Mon, 8 Feb 2021 15:50:59 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <279b741b-09dc-c6af-bf9d-df57922fa465@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="6rnEPAxP8BNu02rsbHJBGo3yGwdcmguUG"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--6rnEPAxP8BNu02rsbHJBGo3yGwdcmguUG
Content-Type: multipart/mixed; boundary="SSKASkvmPJETXlABrEUvDJ6n2PGoqrA1p";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
 netdev@vger.kernel.org, linux-scsi@vger.kernel.org
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, stable@vger.kernel.org,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Jens Axboe <axboe@kernel.dk>, Wei Liu <wei.liu@kernel.org>,
 Paul Durrant <paul@xen.org>, "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>
Message-ID: <bf0a3f8f-9604-1ff3-6b82-3cb117ce3839@suse.com>
Subject: Re: [PATCH 0/7] xen/events: bug fixes and some diagnostic aids
References: <20210206104932.29064-1-jgross@suse.com>
 <bd63694e-ac0c-7954-ec00-edad05f8da1c@xen.org>
 <eeb62129-d9fc-2155-0e0f-aff1fbb33fbc@suse.com>
 <fcf3181b-3efc-55f5-687c-324937b543e6@xen.org>
 <7aaeeb3d-1e1b-6166-84e9-481153811b62@suse.com>
 <6f547bb5-777a-6fc2-eba2-cccb4adfca87@xen.org>
 <0d623c98-a714-1639-cc53-f58ba3f08212@suse.com>
 <28399fd1-9fe8-f31a-6ee8-e78de567155b@xen.org>
 <1831964f-185e-31bb-2446-778f2c18d71b@suse.com>
 <e8c46e36-cf9e-fb30-21b5-fa662834a01a@xen.org>
 <199b76fd-630b-a0c6-926b-3e662103ec42@suse.com>
 <063eff75-56a5-1af7-f684-a2ed4b13c9a7@xen.org>
 <4279cab9-9b36-e83d-bd7a-ff7cd2832054@suse.com>
 <279b741b-09dc-c6af-bf9d-df57922fa465@xen.org>
In-Reply-To: <279b741b-09dc-c6af-bf9d-df57922fa465@xen.org>

--SSKASkvmPJETXlABrEUvDJ6n2PGoqrA1p
Content-Type: multipart/mixed;
 boundary="------------4B6D9B234859DF83E078C716"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------4B6D9B234859DF83E078C716
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 08.02.21 15:20, Julien Grall wrote:
> Hi Juergen,
>
> On 08/02/2021 13:58, Jürgen Groß wrote:
>> On 08.02.21 14:09, Julien Grall wrote:
>>> Hi Juergen,
>>>
>>> On 08/02/2021 12:31, Jürgen Groß wrote:
>>>> On 08.02.21 13:16, Julien Grall wrote:
>>>>>
>>>>>
>>>>> On 08/02/2021 12:14, Jürgen Groß wrote:
>>>>>> On 08.02.21 11:40, Julien Grall wrote:
>>>>>>> Hi Juergen,
>>>>>>>
>>>>>>> On 08/02/2021 10:22, Jürgen Groß wrote:
>>>>>>>> On 08.02.21 10:54, Julien Grall wrote:
>>>>>>>>> ... I don't really see how the difference matters here. The idea
>>>>>>>>> is to re-use what's already existing rather than trying to
>>>>>>>>> re-invent the wheel with an extra lock (or whatever we can come
>>>>>>>>> up with).
>>>>>>>>
>>>>>>>> The difference is that the race is occurring _before_ any IRQ is
>>>>>>>> involved. So I don't see how modification of IRQ handling would
>>>>>>>> help.
>>>>>>>
>>>>>>> Roughly our current IRQ handling flow (handle_edge_irq()) looks like:
>>>>>>>
>>>>>>> if ( irq in progress )
>>>>>>> {
>>>>>>>    set IRQS_PENDING
>>>>>>>    return;
>>>>>>> }
>>>>>>>
>>>>>>> do
>>>>>>> {
>>>>>>>    clear IRQS_PENDING
>>>>>>>    handle_irq()
>>>>>>> } while (IRQS_PENDING is set)
>>>>>>>
>>>>>>> An IRQ handling flow like handle_fasteoi_irq() looks like:
>>>>>>>
>>>>>>> if ( irq in progress )
>>>>>>>    return;
>>>>>>>
>>>>>>> handle_irq()
>>>>>>>
>>>>>>> The latter flow would catch "spurious" interrupts and ignore them.
>>>>>>> So it would handle nicely the race when changing the event affinity.
>>>>>>
>>>>>> Sure? Isn't "irq in progress" being reset way before our "lateeoi" is
>>>>>> issued, thus having the same problem again?
>>>>>
>>>>> Sorry I can't parse this.
>>>>
>>>> handle_fasteoi_irq() will do nothing "if ( irq in progress )". When is
>>>> this condition being reset again in order to be able to process another
>>>> IRQ?
>>> It is reset after the handler has been called. See handle_irq_event().
>>
>> Right. And for us this is too early, as we want the next IRQ being
>> handled only after we have called xen_irq_lateeoi().
>
> It is not really the next IRQ here. It is more a spurious IRQ because we
> don't clear & mask the event right away. Instead, it is done later in
> the handling.
>
>>
>>>
>>>> I believe this will be the case before our "lateeoi" handling is
>>>> becoming active (more precisely: when our IRQ handler is returning to
>>>> handle_fasteoi_irq()), resulting in the possibility of the same race we
>>>> are experiencing now.
>>>
>>> I am a bit confused what you mean by "lateeoi" handling becoming
>>> active. Can you clarify?
>>
>> See above: the next call of the handler should be allowed only after
>> xen_irq_lateeoi() for the IRQ has been called.
>>
>> If the handler is being called earlier we have the race resulting
>> in the WARN() splats.
>
> I find it difficult to understand the race with just words. Can you
> provide a scenario (similar to the one I originally provided) with two
> vCPUs and show how this can happen?

vCPU0                | vCPU1
                      |
                      | Call xen_rebind_evtchn_to_cpu()
receive event X      |
                      | mask event X
                      | bind to vCPU1
<vCPU descheduled>   | unmask event X
                      |
                      | receive event X
                      |
                      | handle_fasteoi_irq(X)
                      |  -> handle_irq_event()
                      |   -> set IRQD_IN_PROGRESS
                      |   -> evtchn_interrupt()
                      |      -> evtchn->enabled = false
                      |   -> clear IRQD_IN_PROGRESS
handle_fasteoi_irq(X)|
-> evtchn_interrupt()|
    -> WARN()         |
                      | xen_irq_lateeoi(X)

>
>>
>>> Note that there are other IRQ flows existing. We should have a look at
>>> them before trying to fix things ourselves.
>>
>> Fine with me, but it either needs to fit all use cases (interdomain,
>> IPI, real interrupts) or we need to have a per-type IRQ flow.
>
> AFAICT, we already use different flows based on the use cases. Before
> 2011, we used to use the fasteoi one but this was changed by the
> following commit:

Yes, I know that.

>>
>> I think we should fix the issue locally first, then we can start
>> planning a thorough rework. It's not as if the needed changes with the
>> current flow would be so huge, and I'd really like to have a solution
>> rather sooner than later. Changing the IRQ flow might have other side
>> effects which need to be excluded by thorough testing.
> I agree that we need a solution ASAP. But I am a bit worried to:
>    1) Add another lock in that event handling path.

Regarding complexity: it is very simple (just around masking/unmasking
of the event channel). Contention is very unlikely.

>    2) Add more complexity in the event handling (it is already fairly
> difficult to reason about the locking/race)
>
> Let's see what the local fix looks like.

Yes.

>
>>> Although, the other issue I can see so far is handle_irq_for_port()
>>> will update info->{eoi_cpu, irq_epoch, eoi_time} without any locking.
>>> But it is not clear this is what you mean by "becoming active".
>>
>> As long as a single event can't be handled on multiple cpus at the same
>> time, there is no locking needed.
>
> Well, it can happen in the current code (see my original scenario). If
> your idea fixes it then fine.

I hope so.


Juergen

--------------4B6D9B234859DF83E078C716
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------4B6D9B234859DF83E078C716--

--SSKASkvmPJETXlABrEUvDJ6n2PGoqrA1p--

--6rnEPAxP8BNu02rsbHJBGo3yGwdcmguUG
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmAhT9MFAwAAAAAACgkQsN6d1ii/Ey+W
Wwf+JlPyY/SaCYj3H8LEW9wJeTTUpV3cnrjPSh3LQR7Ri13AKhqPffX803AWoyYpG8bsas7xh1iM
GOh7sC10H1L/JhwFOvstESbte68FEwTymzJRVdMTdmEYu160VbvJeHmzxkXucQ6agMw2H+vWj34p
6JGJQWRVvPfuzt8jZOywQApOWXsjO5saOf7c0a5m7uJdGd0r5X9eVLkoLus1MUFIedHCzZDSH5RL
LGlk8tou1gjaqeIMOitxNTxH9vmRBOoKMPDa0vq8Zp5gLYYyPkxxNTPo0q0kH5CPpwqkNx9IRuoH
XAsZBoerrUU1a0FlB+sh+wMvbKSbzUZLtFqQny/UfA==
=NV/W
-----END PGP SIGNATURE-----

--6rnEPAxP8BNu02rsbHJBGo3yGwdcmguUG--


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 14:58:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 14:58:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82904.153339 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l97zM-0003Ba-AS; Mon, 08 Feb 2021 14:58:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82904.153339; Mon, 08 Feb 2021 14:58:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l97zM-0003BT-7B; Mon, 08 Feb 2021 14:58:08 +0000
Received: by outflank-mailman (input) for mailman id 82904;
 Mon, 08 Feb 2021 14:58:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=pnXs=HK=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1l97zL-0003BO-IQ
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 14:58:07 +0000
Received: from mo6-p01-ob.smtp.rzone.de (unknown [2a01:238:400:200::b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id eff95827-0690-4ffb-897b-1f6f6cc2f306;
 Mon, 08 Feb 2021 14:58:06 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.17.1 DYNA|AUTH)
 with ESMTPSA id 005e38x18Evr4hN
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Mon, 8 Feb 2021 15:57:53 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eff95827-0690-4ffb-897b-1f6f6cc2f306
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1612796285;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
	From:Subject:Sender;
	bh=mfSlXIZTGVpc5iACREmWyB5fQ/z7qNNmgCd71Ynykfc=;
	b=a6HfzJlL5x4BDp6OBWAfYhi1Phj1eJgFWq+pu4Ip0tcdtWV+DnCymeJR8B2U97X0OA
	5JN+oKaWpfwqjUy4FLH2JDNeOjO1OFRBVn6HWjG0A+CjhVq/dse9wzlOQ7w5WvoR5yPp
	i3mI7k9vlFpPCPw/kHRtbbLVLPLH8mU8/D+M0uzSpLi3w0NFRgy9LngWe5bZBR1PCjK2
	eOtjWm8XAEmAelgF/R29r/1h3hoEWGzPqIs0146luq3qOuw310gwSY+KwXZKcfW9eSaM
	8xRmEDQA/huukzZMLWqztjmo9naqDQ+6gnGJAungaEVRIXHbMZBouHVtUfhXxyIdKl3y
	ioWg==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDXdoX8l8pYAcz5OTW+v4/A=="
X-RZG-CLASS-ID: mo00
Date: Mon, 8 Feb 2021 15:57:46 +0100
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Ian
 Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>, Anthony PERARD
 <anthony.perard@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH v2] tools: create libxensaverestore
Message-ID: <20210208155746.2021dde7.olaf@aepfle.de>
In-Reply-To: <20210111164646.13543-1-olaf@aepfle.de>
References: <20210111164646.13543-1-olaf@aepfle.de>
X-Mailer: Claws Mail 2020.08.19 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
 boundary="Sig_/JocnwV.HTemQKXHGukqnAwJ"; protocol="application/pgp-signature"

--Sig_/JocnwV.HTemQKXHGukqnAwJ
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

Am Mon, 11 Jan 2021 17:46:46 +0100
schrieb Olaf Hering <olaf@aepfle.de>:

> Move all save/restore related code from libxenguest.so into a separate
> library libxensaverestore.so.

Was this considered, in general, or for 4.15?


Olaf

--Sig_/JocnwV.HTemQKXHGukqnAwJ
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmAhUWoACgkQ86SN7mm1
DoAUGxAAopAD1IRXfDFbLe4TRa+R5pf+fhJqcj6jbKDwb+phc/d2yqJjhbdlCqFz
SvH72oLBMW90du1ePXNJYagc+6JpMgx1yofFFTJ8Af7INKaXcuZTXUXTOt1F+jnV
u/gdwO00khBP6CYnnCKzrnTnHYgZPQIq08KE5XicoAknEo0o7erOhl2xK3DFCiY9
EAC8GT2GIQxeBX7+rT9u6Db29roWYKYPpX2ie92ophx6VpSg9ajfoIyyOPvMAIJ8
jAJUcpLKbONJYyolPSBzT9qKyCOcWuOS2FlzYd383BXKSHrjqsl3LLT4OTdcuYmf
GsfT3b1k4kTRV+5UdGxDuzA+s1OS2BKR7agt1OhCmOpvGKWozaBZUKiPbmEeG8uf
7LiFZvOvlMvxXdRN67HsmBfOfnEfHuWiNdTjvZHn4UIHDl7mTZjknxvak5n4UUPr
zgUhHzxRPF2u3gd61SfQpxuaLKeLR/3CPqJNJgGdtMdQK5iJrInTmdm14mKgeUKe
KulCjUhxuQobjan4mqvqVdEhiAz6J+jLxzqyR/dqUSuWoQrRIb0jiBL/GIcHtMhm
VtCNk4h+mo7NlwaxXWWmwogr4K4A5zfqR938Um+7r55cqH/spWS3mbpAfUe5I1cn
AozgziL8BDBoQH2DP5jcBs7Xgc8R+AcxCTQT4Q7tch3TEWLGDF0=
=0eNw
-----END PGP SIGNATURE-----

--Sig_/JocnwV.HTemQKXHGukqnAwJ--


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 15:14:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 15:14:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82908.153351 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l98Ei-00052n-Hb; Mon, 08 Feb 2021 15:14:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82908.153351; Mon, 08 Feb 2021 15:14:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l98Ei-00052g-EK; Mon, 08 Feb 2021 15:14:00 +0000
Received: by outflank-mailman (input) for mailman id 82908;
 Mon, 08 Feb 2021 15:13:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l98Eh-00052b-Dh
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 15:13:59 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l98Eh-0006Cp-Ak
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 15:13:59 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l98Eh-00020N-9g
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 15:13:59 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l98Ee-0002iL-1S; Mon, 08 Feb 2021 15:13:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=itUgtq693czQSzNb9hg0AFWfyVDvYYJygnMmXkX/elY=; b=fPudKIoL2UDM+wcERKkdcsCjF/
	XGj+7R8cKzqVX2+q2MeNuY8/VdaW1Ke+BU1Q3bi/czctivaaVfQEqER+ZHOcBab/tRyspdywcQDXB
	m5xLmD1SvgJue5Rmh5lEakkMu2HKsELLnCAs2zaQQuDnCfO57IDxi1ftsU2kq9ioWHCc=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24609.21811.725828.923726@mariner.uk.xensource.com>
Date: Mon, 8 Feb 2021 15:13:55 +0000
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org,
    Andrew Cooper <andrew.cooper3@citrix.com>,
    George Dunlap <george.dunlap@citrix.com>,
    Jan Beulich <jbeulich@suse.com>,
    Julien Grall <julien@xen.org>,
    Stefano Stabellini <sstabellini@kernel.org>,
    Wei Liu <wl@xen.org>,
    Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v2] tools: create libxensaverestore
In-Reply-To: <20210111164646.13543-1-olaf@aepfle.de>
References: <20210111164646.13543-1-olaf@aepfle.de>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Olaf Hering writes ("[PATCH v2] tools: create libxensaverestore"):
> Move all save/restore related code from libxenguest.so into a separate
> library libxensaverestore.so. The only consumer is libxl-save-helper.
> There is no need to have the moved code mapped all the time in binaries
> where libxenguest.so is used.

I assume this is not targeted for 4.15.  I am focusing my effort on
4.15 right now and won't review this until 4.16 is released.  Please
feel free to repost then.

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 15:18:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 15:18:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82910.153363 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l98J6-0005Cp-45; Mon, 08 Feb 2021 15:18:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82910.153363; Mon, 08 Feb 2021 15:18:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l98J6-0005Ci-0x; Mon, 08 Feb 2021 15:18:32 +0000
Received: by outflank-mailman (input) for mailman id 82910;
 Mon, 08 Feb 2021 15:18:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=pnXs=HK=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1l98J4-0005Cd-Kb
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 15:18:30 +0000
Received: from mo6-p01-ob.smtp.rzone.de (unknown [2a01:238:400:200::9])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4397047e-5e32-4f62-be5e-d1424a7d7b1e;
 Mon, 08 Feb 2021 15:18:29 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.17.1 DYNA|AUTH)
 with ESMTPSA id 005e38x18FIH4nV
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Mon, 8 Feb 2021 16:18:17 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4397047e-5e32-4f62-be5e-d1424a7d7b1e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1612797508;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
	From:Subject:Sender;
	bh=XLsYjmi+CC1V1KCbL3bzzXD1bKOuGUMTSG1LNokrvdI=;
	b=LTwN52FpZwb3tke/Zo2fi8ErxKxnwNYBqZuWu5opP6aJMTnQeyFPFT6AB85YfeBbbJ
	zPLEw73Bh9aS3Xfg1A0bJMGV1JWBWOemkqlF13sF3PtLcvbxNBUL1m6aJN3dwaqMQG9+
	9wp2t56bYS1OJQUZ916pObFaMrnvrRibnB9ww4yCO79R/bHt391OEs8rrcM5+xd+H8as
	Lcjl6+rpBf4MW0Tu+ElimYgYalgp4J8cA3phhWqeAq7OUz4NfTg4WDHALOEO7Z1HL/Nt
	qziNQ/d6GhcgoPG6t5MhqsgDIlDrNacT0biM2/UryQsITKQZp27H1o4ciWEQJfaTWNqy
	drBQ==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDXdoX8l8pYAcz5OTW+v4/A=="
X-RZG-CLASS-ID: mo00
Date: Mon, 8 Feb 2021 16:18:11 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Ian Jackson <iwj@xenproject.org>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Anthony PERARD
 <anthony.perard@citrix.com>
Subject: Re: [PATCH v2] tools: create libxensaverestore
Message-ID: <20210208161720.144d1cbe.olaf@aepfle.de>
In-Reply-To: <24609.21811.725828.923726@mariner.uk.xensource.com>
References: <20210111164646.13543-1-olaf@aepfle.de>
	<24609.21811.725828.923726@mariner.uk.xensource.com>
X-Mailer: Claws Mail 2020.08.19 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
 boundary="Sig_/+ud3k=T7MfNH3fho=C1z9R9"; protocol="application/pgp-signature"

--Sig_/+ud3k=T7MfNH3fho=C1z9R9
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

Am Mon, 8 Feb 2021 15:13:55 +0000
schrieb Ian Jackson <iwj@xenproject.org>:

> I assume this is not targeted for 4.15.

It is; it was sent before the deadline, I think.


Olaf

--Sig_/+ud3k=T7MfNH3fho=C1z9R9
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmAhVjMACgkQ86SN7mm1
DoBjiBAAjz6JaYR+zMEk8eMBtcm5mvZ8g7UWfnyxo9KoZJhqChvemBSkTPZJhDda
O1bHXmN1tr98ttUlt1iPcorOvB1TlymKGt0sZj6bkbZsReQd9+xeFNZ4wijYSgYm
qJrmchPbMff62FLsPrO7/GG4BBInx3LbGIVyWyP0FAtSjnVCg54GXRGOINHh5FTU
Me56SyYHZ3Rt+vCxDyvNtHpKYmds6L2rdqRGaK4u0CbhUS2WrMHvma7UKJgWvQMA
TLzF9l290Dd/R3w5TqdJ7ZeBI+S+DjaVNNUpVvqubdqO5rlcXFpxdYqgvdTUml1z
KJjWSlae4NKZdFnz5GZWfHBv9v8w3lwKfeSr7cn0PWe6+rNO7udNgtFVAD+/Sn5H
KOscWHJeseM+Anx/dUnxIiSpNMZt384n4WlAEJAAoxHgHIhr8AnV9XBGM12RKQBz
cB/2DFXPacjGONUVe9LKy+3xeMMWAyLvrIsM0eZEiGzm5+jmhg+NZ9zRyHtllTfp
4fESaEucqqC8i3AR7sWwA87mQhMH7d3X3Mbqvo4IhaxR4sG3YhL17FaiaePPc36g
4UZVKLgD3EIUXh1GgWqvCV+Pqi9kUJHnt7nMXSinOdnVd4oV2DLUoWO9BqMJOVRg
wucl6y2twCdaJQnmAWxIDZbefWQO/pl4uPmHAG5zFRd03Z47RWE=
=H78H
-----END PGP SIGNATURE-----

--Sig_/+ud3k=T7MfNH3fho=C1z9R9--


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 15:36:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 15:36:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82920.153375 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l98ae-00073m-MJ; Mon, 08 Feb 2021 15:36:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82920.153375; Mon, 08 Feb 2021 15:36:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l98ae-00073f-Ix; Mon, 08 Feb 2021 15:36:40 +0000
Received: by outflank-mailman (input) for mailman id 82920;
 Mon, 08 Feb 2021 15:36:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l98ac-00073a-Od
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 15:36:38 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l98ac-0006Yi-Kz
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 15:36:38 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l98ac-0003mS-Ja
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 15:36:38 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l98aX-0002lg-TP; Mon, 08 Feb 2021 15:36:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=IeKenqTVIrnUaxr0XGlDtL9CPSeaPfsXjev4fLBnT98=; b=Q2N+xm+U52BzBbqq/0RArE+3ID
	YpNBmG72VWHzHvBf896egEFBun2Js+T5eP65WAEJ8SsoY7mkamUBkWOlujocUJJotDub7644f6wHv
	jBHQBElbF+Z+SHZZkhixizMvduhpyWhUu5NtusshASHO/UU0jyeuJcXIGeUpt8CeUZUA=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24609.23169.639047.297808@mariner.uk.xensource.com>
Date: Mon, 8 Feb 2021 15:36:33 +0000
To: Olaf Hering <olaf@aepfle.de>
Cc: George Dunlap <george.dunlap@citrix.com>,
    xen-devel@lists.xenproject.org,
    Andrew Cooper  <andrew.cooper3@citrix.com>,
    Wei Liu <wl@xen.org>,
    Anthony PERARD  <anthony.perard@citrix.com>,
    Jan Beulich <jbeulich@suse.com>,
    Julien Grall <julien@xen.org>,
    Stefano  Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH v2] tools: create libxensaverestore
In-Reply-To: <20210208155746.2021dde7.olaf@aepfle.de>
References: <20210111164646.13543-1-olaf@aepfle.de>
	<20210208155746.2021dde7.olaf@aepfle.de>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Olaf Hering writes ("Re: [PATCH v2] tools: create libxensaverestore"):
> Am Mon, 11 Jan 2021 17:46:46 +0100
> schrieb Olaf Hering <olaf@aepfle.de>:
> 
> > Move all save/restore related code from libxenguest.so into a separate library libxensaverestore.so.
> 
> Was this considered, in general, or for 4.15?

I'm afraid we have had a problem with tools review bandwidth, for
which I must apologise on behalf of the project.  This also falls
within one of my own areas of responsibility, but we have had a
shortage of effort in this area for a while now.

Ian.


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 15:41:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 15:41:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82922.153387 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l98fe-0007zo-9m; Mon, 08 Feb 2021 15:41:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82922.153387; Mon, 08 Feb 2021 15:41:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l98fe-0007zh-5y; Mon, 08 Feb 2021 15:41:50 +0000
Received: by outflank-mailman (input) for mailman id 82922;
 Mon, 08 Feb 2021 15:41:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l98fc-0007zc-6t
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 15:41:48 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l98fc-0006en-5W
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 15:41:48 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l98fc-0004Bk-2y
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 15:41:48 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l98fY-0002n3-Ld; Mon, 08 Feb 2021 15:41:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=WtvNeMrjPRzqBOw1NgxBVExmQBEb1f2sUfKr7JqjWio=; b=gOq4nNbxWgxW/BB6Eknp+ZR39n
	HKDGuMgQN/aEucfVBCEUgOoZZYqiS7ryzoxBq6qHRIg0s4th19Et4nhPHGMgtbsaNLBvqEaVD0s+3
	NtsWdwrrieUlZR/mR6MZrQypNwmKR+Kk8KPsqnobvjgpI8AUKiRJZARWlZQgES9urP7A=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24609.23480.412380.988668@mariner.uk.xensource.com>
Date: Mon, 8 Feb 2021 15:41:44 +0000
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org,
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH v20210111 03/39] docs: remove stale create example from xl.1
In-Reply-To: <20210111174224.24714-4-olaf@aepfle.de>
References: <20210111174224.24714-1-olaf@aepfle.de>
	<20210111174224.24714-4-olaf@aepfle.de>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Olaf Hering writes ("[PATCH v20210111 03/39] docs: remove stale create example from xl.1"):
> Maybe xm create had a feature to create a domU based on a configuration
> file. xl create requires the '-f' option to refer to a file.
> There is no code to look into XEN_CONFIG_DIR, so remove the example.

This series contains a number of patches which seem to have slipped
through the net before the freeze, despite some even having reviews
:-/.

I'm going to go through them and look at the status of each and hope
to commit and/or review and/or release-ack as many of them as
possible.

Ian.


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 15:43:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 15:43:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82925.153405 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l98gn-00088p-KH; Mon, 08 Feb 2021 15:43:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82925.153405; Mon, 08 Feb 2021 15:43:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l98gn-00088i-HF; Mon, 08 Feb 2021 15:43:01 +0000
Received: by outflank-mailman (input) for mailman id 82925;
 Mon, 08 Feb 2021 15:43:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l98gm-00088c-Ja
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 15:43:00 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l98gm-0006gk-In
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 15:43:00 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l98gm-0004HK-I4
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 15:43:00 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l98gj-0002nM-DG; Mon, 08 Feb 2021 15:42:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=AOgDrJ4yZDGO1gSnfIGl4OLht4SgwhC2pSgrvoIietM=; b=OdewOEWAjTZQ6CvPFcHKK99Q76
	3A5ZPJN7EW3NWeaLcyqsqhbUdvETONqekHa+eQxDnRHSwO2jWBHpPdopxXQMedpqBLQTJb3WTuGeB
	c6c0FWj+nVlVFAMBKiNo5JJ3gAtZA4FoscWTZOftuLYMPOJ+suLXoc4Y8rgXpqhhWXCU=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24609.23552.778672.924224@mariner.uk.xensource.com>
Date: Mon, 8 Feb 2021 15:42:56 +0000
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org,
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH v20210111 03/39] docs: remove stale create example from xl.1
In-Reply-To: <20210111174224.24714-4-olaf@aepfle.de>
References: <20210111174224.24714-1-olaf@aepfle.de>
	<20210111174224.24714-4-olaf@aepfle.de>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Olaf Hering writes ("[PATCH v20210111 03/39] docs: remove stale create example from xl.1"):
> Maybe xm create had a feature to create a domU based on a configuration
> file. xl create requires the '-f' option to refer to a file.
> There is no code to look into XEN_CONFIG_DIR, so remove the example.

Reviewed-by: Ian Jackson <iwj@xenproject.org>

This is docs so

Release-Acked-by: Ian Jackson <iwj@xenproject.org>

I'm queueing it now.

Ian.


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 15:45:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 15:45:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82926.153416 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l98iy-0008H5-1S; Mon, 08 Feb 2021 15:45:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82926.153416; Mon, 08 Feb 2021 15:45:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l98ix-0008Gy-Uh; Mon, 08 Feb 2021 15:45:15 +0000
Received: by outflank-mailman (input) for mailman id 82926;
 Mon, 08 Feb 2021 15:45:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l98iw-0008Gt-Dk
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 15:45:14 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l98iw-0006ix-CY
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 15:45:14 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l98iw-0004aR-AS
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 15:45:14 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l98it-0002oQ-0b; Mon, 08 Feb 2021 15:45:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=imAIVGAbMRtohXVh0DCBXS75lbPtg3FidXOUa4gct/4=; b=TFs444O0CcpFXUDxTl3vPcJA8m
	UwuuAK7/jgMoYV2EXdP4JqTLURB9FZW6D75vpUxV/sFpDHebkKMlqH7mQYEfqPDcBzZ9KSPtLIpcK
	FB+KokFSVFOP+lHdDh919qtPMuENF3prOTOFrUU/7UQgCJX5CrniiAgKEMdMBDf3LNH4=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24609.23686.678193.814785@mariner.uk.xensource.com>
Date: Mon, 8 Feb 2021 15:45:10 +0000
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org,
    Anthony PERARD <anthony.perard@citrix.com>,
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH v20210111 04/39] docs: substitute XEN_CONFIG_DIR in xl.conf.5
In-Reply-To: <20210111174224.24714-5-olaf@aepfle.de>
References: <20210111174224.24714-1-olaf@aepfle.de>
	<20210111174224.24714-5-olaf@aepfle.de>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Olaf Hering writes ("[PATCH v20210111 04/39] docs: substitute XEN_CONFIG_DIR in xl.conf.5"):
> xl(1) opens xl.conf in XEN_CONFIG_DIR.
> Substitute this variable also in the man page.
> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>
> Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
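
The substitution the quoted patch relies on can be sketched in plain
shell (the @XEN_CONFIG_DIR@ placeholder syntax and the sed pipeline
here are illustrative only; the real docs build may use a different
mechanism):

```shell
# Expand a build-time placeholder in a man page fragment, so the
# documented path matches the path xl actually uses at run time.
XEN_CONFIG_DIR=/etc/xen
echo 'xl reads B<@XEN_CONFIG_DIR@/xl.conf> on startup.' \
  | sed "s|@XEN_CONFIG_DIR@|$XEN_CONFIG_DIR|g"
```

With the default sysconfdir this prints the xl.conf path under
/etc/xen, which is the point of substituting the variable in the man
page rather than hardcoding it.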

Release-Acked-by: Ian Jackson <iwj@xenproject.org>

This is docs.  The worst risk is a slight chance of breaking the build.

Ian.


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 15:48:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 15:48:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82932.153435 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l98m5-0008Ru-I7; Mon, 08 Feb 2021 15:48:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82932.153435; Mon, 08 Feb 2021 15:48:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l98m5-0008Rn-E1; Mon, 08 Feb 2021 15:48:29 +0000
Received: by outflank-mailman (input) for mailman id 82932;
 Mon, 08 Feb 2021 15:48:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l98m4-0008Ri-Hy
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 15:48:28 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l98m4-0006mu-GS
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 15:48:28 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l98m4-0004qT-Dx
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 15:48:28 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l98m1-0002pw-3W; Mon, 08 Feb 2021 15:48:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=q9msgAeqseQIu4nuQh6AjKQUlAj36K5h2z1Obf9CJbc=; b=zvd9tMxEknUF8I95kIF5e23zRO
	iHrR568OmWAn71O7h6G0eHlK9fTbBFg7y/NHk0eIOjk4g8bBs5uHy/2II15FGwiyDH9FIp8DRFkEH
	ECFGItMKNsmfdNFkgsp3fnr3CZfIl/gkWjIP9plCS7gqGmE8zmEfpVVuZAR4ZnImNAD0=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24609.23880.634537.769933@mariner.uk.xensource.com>
Date: Mon, 8 Feb 2021 15:48:24 +0000
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org,
    Wei Liu <wl@xen.org>,
    Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v20210111 02/39] xl: use proper name for bash_completion file
In-Reply-To: <20210111174224.24714-3-olaf@aepfle.de>
References: <20210111174224.24714-1-olaf@aepfle.de>
	<20210111174224.24714-3-olaf@aepfle.de>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Olaf Hering writes ("[PATCH v20210111 02/39] xl: use proper name for bash_completion file"):
> Files in the bash-completion dirs should be named like the commands,
> without suffix. Without this change 'xl' will not be recognized as a
> command with completion support if BASH_COMPLETION_DIR is set to
> /usr/share/bash-completion/completions.
> 
> Fixes commit 9136a919b19929ecb242ef327053d55d824397df
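
To make the naming rule concrete, here is a sketch of the lookup that
bash-completion's on-demand loader performs (the path and the loader
behaviour are simplified; treat them as illustrative):

```shell
# bash-completion resolves a command's completion file by name alone:
# it looks for "$BASH_COMPLETION_DIR/$cmd" with no suffix appended,
# so a file installed as "xl.sh" would never be picked up for "xl".
BASH_COMPLETION_DIR=/usr/share/bash-completion/completions
cmd=xl
echo "$BASH_COMPLETION_DIR/$cmd"
```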

This was committed already.

Ian.


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 16:03:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 16:03:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82940.153446 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l990g-0002P6-TD; Mon, 08 Feb 2021 16:03:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82940.153446; Mon, 08 Feb 2021 16:03:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l990g-0002Oz-Q2; Mon, 08 Feb 2021 16:03:34 +0000
Received: by outflank-mailman (input) for mailman id 82940;
 Mon, 08 Feb 2021 16:03:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l990f-0002Ou-Cw
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 16:03:33 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l990f-0007ZJ-A0
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 16:03:33 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l990f-0006Hj-9A
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 16:03:33 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l990c-0002s1-2l; Mon, 08 Feb 2021 16:03:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=ULoVTlqYjIo5XPRNUVxoY3zeGP2Wlvk1S89eRa24Ado=; b=djMfHjqlfuMEn9okbZfhmvxWca
	tkcf4Ay1KJ8w1eclFe5+OeQV+yLTRE4mFNwSQMif3am51rCW/3rzqQNPJ/nRPriXNNBaYzrrQyjnS
	nkLY7hQRcBjaCENqcJz1+lOibo4CMkOKN9fAoLvGkK0xe7v02s429fXwb7oTz9uD/pfg=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24609.24785.791060.128298@mariner.uk.xensource.com>
Date: Mon, 8 Feb 2021 16:03:29 +0000
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org,
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH v20210111 05/39] tools: add with-xen-scriptdir configure option
In-Reply-To: <20210111174224.24714-6-olaf@aepfle.de>
References: <20210111174224.24714-1-olaf@aepfle.de>
	<20210111174224.24714-6-olaf@aepfle.de>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Olaf Hering writes ("[PATCH v20210111 05/39] tools: add with-xen-scriptdir configure option"):
> In the near future all fresh installations will have an empty /etc.
> The content of this directory will not be controlled by the package
> manager anymore. One of the reasons for this move is to make snapshots
> more robust.

As I wrote in September 2019, I don't agree with it if it's intended
as a claim about all systems that Xen will run on.

However, this seems to be an accidental retention in the commit
message, since the content of the commit is now what I asked for,
which is to make this directory configurable.

So I approve of the contents of this patch in principle.

> As a first step into this direction, add a knob to configure to allow
> storing the hotplug scripts to libexec because they are not exactly
> configuration. The current default is unchanged, which is
> /etc/xen/scripts.

I suggest this commit message as a compromise:

  Some distros plan that fresh installations will have an empty /etc,
  whose content will no longer be controlled by the package
  manager.

  To make this possible, add a knob to configure to allow storing the
  hotplug scripts in libexec instead of /etc/xen/scripts.

Olaf, would that be OK with you?


As for detailed review:

> diff --git a/m4/paths.m4 b/m4/paths.m4
> index 89d3bb8312..0cec2bb190 100644
> --- a/m4/paths.m4
> +++ b/m4/paths.m4
...
> +AC_ARG_WITH([xen-scriptdir],
> +    AS_HELP_STRING([--with-xen-scriptdir=DIR],
> +    [Path to directory for dom0 hotplug scripts. [SYSCONFDIR/xen/scripts]]),
> +    [xen_scriptdir_path=$withval],
> +    [xen_scriptdir_path=$sysconfdir/xen/scripts])
...
> -XEN_SCRIPT_DIR=$XEN_CONFIG_DIR/scripts
> +XEN_SCRIPT_DIR=$xen_scriptdir_path
>  AC_SUBST(XEN_SCRIPT_DIR)
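
In plain shell, the quoted hunk amounts to the following fallback
(variable names as in the patch; the generated configure script is of
course more involved):

```shell
# Default selection for the hotplug script directory: honour an
# explicit --with-xen-scriptdir=DIR, otherwise fall back to the
# sysconfdir-based default from the patch.
sysconfdir=/etc
withval=""        # configure would set this from --with-xen-scriptdir=DIR

if [ -n "$withval" ]; then
    xen_scriptdir_path=$withval
else
    xen_scriptdir_path=$sysconfdir/xen/scripts
fi
XEN_SCRIPT_DIR=$xen_scriptdir_path
echo "$XEN_SCRIPT_DIR"
```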

It is not clear to me why the default is changed from
"$XEN_CONFIG_DIR/scripts" to "$sysconfdir/xen/scripts", and there isn't
any explanation for this in the commit message.  I think this may make
no difference, but an explanation is called for.


As for the release ack question: there is a risk that this patch would
break the build by causing the scripts to end up in the wrong place.
This risk seems fairly low, and osstest would catch it.

However, I find it difficult to analyse the consequence of the changed
way the default is calculated.

So, a conditional release ack:

Release-Acked-by: Ian Jackson <iwj@xenproject.org>

subject to changing this patch so that the actual implemented default
is still $XEN_CONFIG_DIR/scripts.

Ian.


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 16:04:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 16:04:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82941.153459 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l991i-0002Vd-Cq; Mon, 08 Feb 2021 16:04:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82941.153459; Mon, 08 Feb 2021 16:04:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l991i-0002VW-8Q; Mon, 08 Feb 2021 16:04:38 +0000
Received: by outflank-mailman (input) for mailman id 82941;
 Mon, 08 Feb 2021 16:04:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l991h-0002VQ-Bw
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 16:04:37 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l991h-0007aA-9Y
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 16:04:37 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l991h-0006NL-8j
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 16:04:37 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l991e-0002sI-3u; Mon, 08 Feb 2021 16:04:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=6WJbPb41gFwOKUOGB3oaG9gDtRnmuWnQbnrdy0KBGRA=; b=d8IO6ueEjR5yAoyshhvxuHOJZO
	u5VDLYIp0r2oToKPDgOohE36STdAjMDJOe5GC2ka9wwYRWa2QOhanfaYRsTP6TsO9LFLUc77c85wF
	qFd9683m1umhllXVp0obDP9jzRFELiLyUoO79uxvyrFM7ErUUfm6pC6A0oyyX/aJqHzo=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24609.24849.166342.342346@mariner.uk.xensource.com>
Date: Mon, 8 Feb 2021 16:04:33 +0000
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org,
    Andrew Cooper <andrew.cooper3@citrix.com>,
    George Dunlap <george.dunlap@citrix.com>,
    Ian Jackson <iwj@xenproject.org>,
    Jan Beulich <jbeulich@suse.com>,
    Julien Grall <julien@xen.org>,
    Stefano Stabellini <sstabellini@kernel.org>,
    Wei Liu <wl@xen.org>,
    Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v20210111 06/39] Use XEN_SCRIPT_DIR to refer to /etc/xen/scripts
In-Reply-To: <20210111174224.24714-7-olaf@aepfle.de>
References: <20210111174224.24714-1-olaf@aepfle.de>
	<20210111174224.24714-7-olaf@aepfle.de>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Olaf Hering writes ("[PATCH v20210111 06/39] Use XEN_SCRIPT_DIR to refer to /etc/xen/scripts"):
> Replace all hardcoded paths to use XEN_SCRIPT_DIR to expand the actual location.
> 
> Update .gitignore.

Reviewed-by: Ian Jackson <iwj@xenproject.org>
Release-Acked-by: Ian Jackson <iwj@xenproject.org>

Although I would have preferred these to be separate patches.

I see no reason not to commit this now so I will queue it.

Ian.


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 16:23:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 16:23:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82956.153475 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l99JU-0004RR-1H; Mon, 08 Feb 2021 16:23:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82956.153475; Mon, 08 Feb 2021 16:23:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l99JT-0004RK-U9; Mon, 08 Feb 2021 16:22:59 +0000
Received: by outflank-mailman (input) for mailman id 82956;
 Mon, 08 Feb 2021 16:22:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l99JS-0004RF-Fs
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 16:22:58 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l99JS-0007sw-BX
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 16:22:58 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l99JS-00081I-9Q
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 16:22:58 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l99JO-0002wy-Ua; Mon, 08 Feb 2021 16:22:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=nbpjJ9Oiix6NlDk6bi3vd6dG2ssISm2wUkze8i4tfmA=; b=gAV66YWTIpEGomF6esyKHVoBPl
	+o9tChSxgl9/CP1Y2rpNjr4aAFcaf/yLBptiSeNbSfl0mb2xWoMS3hyE6yYwbTFdOI9iYw7mEpIk8
	8aJPg5pSbnAs4RGSy3lcWjjk06d1dCGcE6R4BIFly5Ow2ZpXZnHYTJ3sgFxJfUkzogZ0=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24609.25950.629059.164010@mariner.uk.xensource.com>
Date: Mon, 8 Feb 2021 16:22:54 +0000
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org,
    Wei Liu <wl@xen.org>,
    Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v20210111 07/39] xl: optionally print timestamps during xl migrate
In-Reply-To: <20210111174224.24714-8-olaf@aepfle.de>
References: <20210111174224.24714-1-olaf@aepfle.de>
	<20210111174224.24714-8-olaf@aepfle.de>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Olaf Hering writes ("[PATCH v20210111 07/39] xl: optionally print timestamps during xl migrate"):
> During 'xl -v.. migrate domU host' a large amount of debug is generated.
> It is difficult to map each line to the sending and receiving side.
> Also the time spent for migration is not reported.
> 
> With 'xl migrate -T domU host' both sides will print timestamps and
> also the pid of the invoked xl process to make it more obvious which
> side produced a given log line.
> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

This is from October and ought to have been reviewed much sooner.
Sorry.

With my maintainer hat on: this is a useful development but I am not
sure about the choice of -T, and the choice to make this a
migrate-specific option.  Most unix things that have an option to
print timestamps use `-t`.  But we have already stolen `-t` as a
global option for "print CR as well as LF".  Hrmf.

Under the circumstances -T for migrate seems a plausible and
conservative choice.  I think maybe we should add a global -T
later.

Reviewed-by: Ian Jackson <iwj@xenproject.org>



Now I have to decide what to do about it for 4.15.

The downside risks I see from reading the patch are:

* The CLI API choice is being made in a hurry.

* We might break something.  This actually seems quite unlikely since
  I have reviewed the code changes in detail.


I wonder if I can get a quick second opinion from someone on the API
question.  Using up a single letter option is something I don't want
to just rush into.

Ian.


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 16:23:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 16:23:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82957.153487 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l99KM-0004Wd-BO; Mon, 08 Feb 2021 16:23:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82957.153487; Mon, 08 Feb 2021 16:23:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l99KM-0004WW-8H; Mon, 08 Feb 2021 16:23:54 +0000
Received: by outflank-mailman (input) for mailman id 82957;
 Mon, 08 Feb 2021 16:23:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l99KL-0004WR-4I
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 16:23:53 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l99KL-0007tV-1r
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 16:23:53 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l99KL-00085Z-0l
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 16:23:53 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l99KH-0002xM-Rt; Mon, 08 Feb 2021 16:23:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:To:Date:
	Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=o5y3wWhVRABUyOS+Dm/p7JAKXVxr6lPjjMeS6KuB+CU=; b=FPUpu3sYloEVOhIo6jjsubfOWe
	lDQNra28afY1y/aOHDfPk6/AHYfJxCTWfSfXYugQ7BY3kV7l8f8HWmpfxhQKxQOTpLPXmlYvTQUWo
	diTFuv+jKplHO3zp0EkFGUpBg1EzI8XqzIR7xjxpAPGC7UeMlW3i1xYTGL9H5jt5qCYg=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24609.26005.625180.775877@mariner.uk.xensource.com>
Date: Mon, 8 Feb 2021 16:23:49 +0000
To: Olaf Hering <olaf@aepfle.de>,
    xen-devel@lists.xenproject.org,
    Andrew Cooper <andrew.cooper3@citrix.com>,
    George Dunlap <george.dunlap@citrix.com>,
    Jan Beulich <jbeulich@suse.com>,
    Julien Grall <julien@xen.org>,
    Stefano Stabellini <sstabellini@kernel.org>,
    Wei Liu <wl@xen.org>,
    Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v20210111 06/39] Use XEN_SCRIPT_DIR to refer to /etc/xen/scripts
In-Reply-To: <24609.24849.166342.342346@mariner.uk.xensource.com>
References: <20210111174224.24714-1-olaf@aepfle.de>
	<20210111174224.24714-7-olaf@aepfle.de>
	<24609.24849.166342.342346@mariner.uk.xensource.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Ian Jackson writes ("Re: [PATCH v20210111 06/39] Use XEN_SCRIPT_DIR to refer to /etc/xen/scripts"):
> Olaf Hering writes ("[PATCH v20210111 06/39] Use XEN_SCRIPT_DIR to refer to /etc/xen/scripts"):
> > Replace all hardcoded paths to use XEN_SCRIPT_DIR to expand the actual location.
> > 
> > Update .gitignore.
> 
> Reviewed-by: Ian Jackson <iwj@xenproject.org>
> Release-Acked-by: Ian Jackson <iwj@xenproject.org>
> 
> Although I would have preferred these to be separate patches.
> 
> I see no reason not to commit this now so I will queue it.

I split the patch up.

I also dropped the hunk for docs/misc/block-scripts.txt which seems to
have been left over from v1.  Thanks to Andy for spotting that.

Ian.


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 16:26:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 16:26:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82958.153499 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l99MQ-0004fW-PO; Mon, 08 Feb 2021 16:26:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82958.153499; Mon, 08 Feb 2021 16:26:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l99MQ-0004fP-L3; Mon, 08 Feb 2021 16:26:02 +0000
Received: by outflank-mailman (input) for mailman id 82958;
 Mon, 08 Feb 2021 16:26:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l99MO-0004fK-Lx
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 16:26:00 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l99MO-0007w1-LB
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 16:26:00 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l99MO-0008DC-K3
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 16:26:00 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l99MJ-0002yM-VU; Mon, 08 Feb 2021 16:25:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=BeQIAPdWUoOefYkXdb494vmAilqs/09bNczhP+UUqWs=; b=TEB12VpwpuGD8qZkfiXHzQ1Goa
	ld4Ozu1mRFrvhzGWcQP28TJgamgeDobauRc9ZOsL4z8Blt+ZY96rm4V2vwB9i7QMUumpSetx7BVYK
	Zpof7AAQk4Xy5pO4DKbaHPlcUAQRCzxYEB03Y9aQbbTGFbLQ1ze82m1mneHf4q+BoNZc=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24609.26131.733756.369535@mariner.uk.xensource.com>
Date: Mon, 8 Feb 2021 16:25:55 +0000
To: Olaf Hering <olaf@aepfle.de>,
    Andrew Cooper <andrew.cooper3@citrix.com>
Cc: xen-devel@lists.xenproject.org,
    Wei Liu <wl@xen.org>,
    Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v20210111 08/39] xl: fix description of migrate --debug
In-Reply-To: <20210111174224.24714-9-olaf@aepfle.de>
References: <20210111174224.24714-1-olaf@aepfle.de>
	<20210111174224.24714-9-olaf@aepfle.de>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Olaf Hering writes ("[PATCH v20210111 08/39] xl: fix description of migrate --debug"):
> xl migrate --debug used to track every pfn in every batch of pages.
> But these times are gone. Adjust the help text to tell what --debug
> is supposed to do today.
...
> -Display huge (!) amount of debug information during the migration process.
> +Verify transferred domU page data. All memory will be transferred one more
> +time to the destination host while the domU is paused, and compared with
> +the result of the initial transfer while the domU was still running.

Andy, as our expert on migration, can you confirm whether this is
accurate ?

Docs (including usage message) only, so

Release-Acked-by: Ian Jackson <iwj@xenproject.org>


Thanks,
Ian.
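[Editorial note: conceptually, the verification described in the quoted help text amounts to something like the sketch below. This is hypothetical; the real logic lives in libxc's save/restore code and operates on batches of guest frames. After the domain is paused, each page is transferred once more and compared with the copy received during the live phase.]

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define PAGE_SIZE 4096

/* Compare the copy of a page received while the domU was still running
 * against the copy re-sent after it was paused.  A mismatch would mean
 * the live transfer missed a later write to that page. */
static bool page_verified(const unsigned char *live_copy,
                          const unsigned char *paused_copy)
{
    return memcmp(live_copy, paused_copy, PAGE_SIZE) == 0;
}
```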


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 16:28:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 16:28:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82961.153511 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l99Ok-0004op-5N; Mon, 08 Feb 2021 16:28:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82961.153511; Mon, 08 Feb 2021 16:28:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l99Ok-0004oi-2I; Mon, 08 Feb 2021 16:28:26 +0000
Received: by outflank-mailman (input) for mailman id 82961;
 Mon, 08 Feb 2021 16:28:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l99Oj-0004od-6w
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 16:28:25 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l99Oj-0007zg-3c
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 16:28:25 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l99Oj-0008Mt-2S
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 16:28:25 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l99Of-0002zQ-QR; Mon, 08 Feb 2021 16:28:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=5K2ndahwpF8pR9I8HDBgw7p9a+OU6Usg6qzOnE7hlL4=; b=f+xxwRaQO589PXHjXZhYWudkga
	vZp2RXQMRzQxBXhYM6oGUj60wucvv8gODudVy4KqncYXvLAUFrRzad2PjiVYpZl8fkv5jovwz56iJ
	FCXqoMqogROAcO9KF3eY9mEQ6vA0pBeR3n7MxvlUnMGulDvp2K0Pb2AkGks3D5Tr7CcM=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24609.26277.574330.875381@mariner.uk.xensource.com>
Date: Mon, 8 Feb 2021 16:28:21 +0000
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org,
    Wei Liu <wl@xen.org>,
    Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v20210111 32/39] tools: remove tabs from code produced by libxl_save_msgs_gen.pl
In-Reply-To: <20210111174224.24714-33-olaf@aepfle.de>
References: <20210111174224.24714-1-olaf@aepfle.de>
	<20210111174224.24714-33-olaf@aepfle.de>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Olaf Hering writes ("[PATCH v20210111 32/39] tools: remove tabs from code produced by libxl_save_msgs_gen.pl"):
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Personally I think this is busywork but I am happy to take the patch.

Release-Acked-by: Ian Jackson <iwj@xenproject.org>
Acked-by: Ian Jackson <iwj@xenproject.org>

Release risk is clearly negligible since it's a whitespace change.



From xen-devel-bounces@lists.xenproject.org Mon Feb 08 16:34:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 16:34:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82969.153543 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l99UG-0005rS-3o; Mon, 08 Feb 2021 16:34:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82969.153543; Mon, 08 Feb 2021 16:34:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l99UF-0005rK-Vo; Mon, 08 Feb 2021 16:34:07 +0000
Received: by outflank-mailman (input) for mailman id 82969;
 Mon, 08 Feb 2021 16:34:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=atCq=HK=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1l99UE-0005rC-RZ
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 16:34:07 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4fc6ace0-0521-4e7d-a924-fa1396796465;
 Mon, 08 Feb 2021 16:34:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4fc6ace0-0521-4e7d-a924-fa1396796465
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612802045;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=A0Em8e96lo0tVUoTLXM6uyiy3YUTBFOlmfCCVcq8k00=;
  b=fVv4ireRSjdT1+hYJKR6GIUSofRyE3sKWgpOXsIi9MdpPjAqeqRIx7pw
   61gibYfygfgnO6TW4Wd4PeVdqxAFN8sQf82Y37GSmDvyTjFfO7xe+9Nsp
   lWb+lfZC69UDLfLgHlAescikV0ZfqVbk+mDCU+uxI45nCvZj3/HH2Nr8m
   I=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: te60TKIPCi4bvwd05rkVTextz6J+Gb36KbzIn78tXkB8j0nCEFZmJdcfgXqaCYIcm+3nOHv873
 IVPcXt3H4AsZmzNgBo71KvX0/N9rhcQwd6rhbsWD1k73fpV+EBy76GpiWmVMsRUqbkDQ2iKmXY
 JgBfImg0KNSkBGWUxjNHb1dxb+zVUb51PwO+W9JQOclHNrQ2lLb5aKACPEG4WXl8EhKEW8hD6M
 2GuBnofMEmUcbpy/G0ETK5mmK8jbkrVWtZ+6iOq8l+fCYtcOncHoNPKDdhr2J7Oup7+IG4lUHf
 KfA=
X-SBRS: 5.2
X-MesageID: 37161291
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,162,1610427600"; 
   d="scan'208";a="37161291"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=amSSSO763ZmBPu4zYSBLKjao6pbErrWas+OXuQyYsWG3ezqBUn4zJINov4dtRlfUZ0VWlGBdFum/kkJ2aEc8xcuKNqTkVO7pP0urGDKe7Pf6rsQq/JUA0n5qhYtf7SSwtINuFCUaNRa0U3MGYO1tWgjUh/ebdKuSUW4Cy7Bi3s5xJcIbbgMzU3nenV7Sa757HFyP0cKQDSnAUa6Amx/Tjnmz7Lu3sPul8cX7DnTZhaxXJW5fy8RnIMIjz0njw+L5fDBnZ5uPba2ohoIDSiNLUP5Z/+ulqSrNgMhB4MWBSW9gFJVCOG3LA9w8J+bR8nyfLdhkQrzGZz6RzlHgViac1A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=sLypsmW5q1e6Ayo48trqI8SIWhmV4lmtJWvBXiMikZg=;
 b=CnEZ1RnoeALfM/M0UNx/SOQ83wVQxGZ5zb+cR9IjuXfgefnQXTNBg+sYuJ5F3MMQGCYFGphhZaGJUQsUSFxP+xrHRlAVRWC6UE4uATQpB6ggUjEfktPrXVHTb0eGyuMwcmcgUG4ODvesn2/xqIzJQTNAhpOirp/Dim0CRUgX38u8Ar0vziRxd90htf3usWdx2H8oaCkMb2Bd8wSebS6UNtjLkO8cMkr2Jh9XNmDQrZooJn4elLlwLD7Z/IgWMpOg103wXv+N2aHlIF6kA6oN8OObwnwge/M90XvU5Fyu5RfQJSTHrljTI46vzTczprkGuku7ZzlMB5jOM3DKlDbOdQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=sLypsmW5q1e6Ayo48trqI8SIWhmV4lmtJWvBXiMikZg=;
 b=qtokz6wMQkAlXNtYIcajiPwo9TIst+5x4xl0zk2BN5vD/J7rc5HWKFNbo9nAERScv6Hu1YCsxV6HjaCoyCPQXmK4lWZbFR/8DGTFkEmCsjnNWex/1SGsIM7Rfa1O0uVRXBZmxaOCEyy33mdrtyhyANk81iuQ7aYzm6aaQByMdwg=
Date: Mon, 8 Feb 2021 17:33:53 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Claudemir Todo Bom
	<claudemir@todobom.com>
Subject: Re: [PATCH v2 3/3] x86/time: don't move TSC backwards in
 time_calibration_tsc_rendezvous()
Message-ID: <YCFn8Uy2B+tiQj12@Air-de-Roger>
References: <b8d1d4f8-8675-1393-8524-d060ffb1551a@suse.com>
 <80d05abb-4d53-3229-8326-21d79e32dfe4@suse.com>
 <YCEGshHDEH9bJU7y@Air-de-Roger>
 <ae8d8e02-9d2e-a26e-9321-cae0640a0dee@suse.com>
 <YCE6XlA14A2Qsq8H@Air-de-Roger>
 <391c4c6f-6213-ac7a-8871-a0c138f6b29f@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <391c4c6f-6213-ac7a-8871-a0c138f6b29f@suse.com>
X-ClientProxiedBy: PR3P189CA0075.EURP189.PROD.OUTLOOK.COM
 (2603:10a6:102:b4::20) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: f614b2ff-4a03-4011-6fd7-08d8cc4f563b
X-MS-TrafficTypeDiagnostic: DM6PR03MB5084:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB508483C95A3C1559E441E4768F8F9@DM6PR03MB5084.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: ATBtp/YwWVPMkDMbRLjQ8wsnqT+XCet5vvfMXCCj3mwHlCtXh5SsKCVXkfTW2UjCF2ZAUp0XZmyjxlXvda8U3YhflOaIK+D5AenMb/4S9cnjGkbnU079/ASfQqGzXHpbSMzv12VVMwv2E3Wev4QROP4zMpik2LD95CS5QYlFcA5DG7gP9HxlVT8/7+y4uuNMFtxJ/uVFqOHe1zj61Q6W9F4E7yeXgrMwaojX3qZ3UkBJ+Ycb5pHUE793RPh6DJx/lyLVbyU2luUWcfHeVLzldVH4YnCLRkgTNC9wltc4CooSbMAvnxrqttOjbNJYg8GxNOpuvLX46twuLPzylOQKn/1VQezimuff8diFnrfldWeUGr6IFyrKbUQPRC5NuJRd1qtyIMLUpRT8tCc1shOh+voB2mu2XPKEyXeY9/a7qU2Zl9ozQpSpNonuobWTyEfX4Yu+H1KoUkJudKFK/waA/hOEnuPHIlSJPtaNuBctnwYm/+i9R4uVtEHd9ylYeDe5swDfh72UryxC4SoNRHJUAWmUXQSHPhK9fPqdLzAkNGK/KCR9phj/pe28kMHa19zH
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(396003)(366004)(136003)(376002)(346002)(39860400002)(956004)(2906002)(16526019)(478600001)(66556008)(186003)(4326008)(86362001)(33716001)(316002)(26005)(66476007)(6916009)(54906003)(66946007)(6496006)(8676002)(53546011)(83380400001)(8936002)(6666004)(6486002)(9686003)(85182001)(5660300002)(67856001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?OTlmU3lhZVdsQjBOMVRNRHpOL0dyNzFBR2JpendoOFByeHE4bDJhU00yZFBq?=
 =?utf-8?B?SWRJck1JTmNRRFpwdk5HRGlxSldvSzRGK1dBVkZKUUM5c011bm5McG9Dc0lV?=
 =?utf-8?B?amFsVkJPaWFKd3RURDQwczBjSlRyRi9uNk9qWGZheWJDbERKRk84eFEwVWFi?=
 =?utf-8?B?UjB4eDgxMW9leFdFa0RSQ2RSYWxxSEM1Y0w1QWxDMlR4RmQ0aW5uYVBoS2lP?=
 =?utf-8?B?Q3pkRjl2dFN3cnZBREI4cGRhdE1WNHpVUXRtU1hoOWI4TktuN0FQcEZuQTM2?=
 =?utf-8?B?bkxBOHZZb3B1Y1VOaTltSlFWTnNXVllGaHlLaWtDNE5BK2hGYUxkaVJPTzZC?=
 =?utf-8?B?SHZMc085MzdoN1lBbWNEZWlJQTUxNUY3OUtFVmFQZ3NLVnFYaFB5R0h6YjM3?=
 =?utf-8?B?YU9HajRtSTZyb3ZZRXFDZXA0VjkxaDRHUERBQWFaNExMMnlYM3ordTFaTDJR?=
 =?utf-8?B?ZTNuNjdCOWJnMCsxUGhOUFhUZzB5RVExeUFtNzRXWTJsTXdlNjM0WVhsWlUw?=
 =?utf-8?B?eHpmZlhld0NQUDQ3UjE5bU9IRmNHZTM5U3pZZHpldk5FMG5TTEIvM3V1OXp6?=
 =?utf-8?B?TUtTZm9WMS9oYkw2NE9CRllDaTVPMVpZakptTCs5elExMkh0VVlUU2tabjlQ?=
 =?utf-8?B?NUdEOEtwclRmM3ArQThRbGVtZll5L0Z6NlhmS2kyWjkycjB1VVlwcVUzd2xN?=
 =?utf-8?B?anVnQkN0UmhKR3grMGJUWFVsUVZlQTlHYzB6WDQ2U1RzUnZBZjExSmJvNHVT?=
 =?utf-8?B?MjlaMkRFMHJmWDNBZURMQ2tjNmJrUnU1Q0tBbXZsTFNydjdzMDlMVkFsOG50?=
 =?utf-8?B?SzJTam1FOU96djgxR2RGdkVmOU4wRlNkUU1XUUREV1VxQjgzdm1hTWR5RTlw?=
 =?utf-8?B?UEdNM3VYOHR3aUJGcWhMQlY4Ukx6bzAzRDVFVjhhUEh5WGs3L1poS3V4cW9K?=
 =?utf-8?B?Y0Vxd1hlN0oxY0k1YjEvZHNIWjREUFNaL1hTbkt6bkJiY2NOQkY0a2I3cWJl?=
 =?utf-8?B?eWtGeFBXSy9CNHVlemVWM085SXlpZStadkdLYzlyOTJPbVdHREdidFhoMjhs?=
 =?utf-8?B?TUJPcVlvVFh6R0NTY3hCZUZrNHFTdlVJQ3o4N3NxSmwweWplblRkazJGS0Fr?=
 =?utf-8?B?NkFlamJyOExkWXBZVHZENnJmN0twcmpaQ0dHL3VrTXlDSjJXQ3NYTXVGS3do?=
 =?utf-8?B?R2p2Y0EwdzRFOGF1Z3dhazhyV0tWT05pVlM1Vm0rOVFycVdMT1pRZ2VxRG13?=
 =?utf-8?B?bE9JcjEwQjFJdjFKbnIxNEpwL3FGYUNBbTNNVERyQmVnOE1mNWpqR3h6ZXdT?=
 =?utf-8?B?MThCSmFNMDBpR0JFL29TMU51dnVzcTFuKzQ0RHd3T3h1UDVXdlA3QnFJY2tw?=
 =?utf-8?B?R1hRcVNuT1hpcWhuWGpmQmp1bjZHYlo5R1ZkWTdtdVVQNDZSL2tUei9IekRD?=
 =?utf-8?B?emlXYVlDZ2hWSnFyRWE3Ni9nR2ZuN1dZR3NGdkNNWEhERG5JUGZ5Q1lNMHpl?=
 =?utf-8?B?enY0eW9xUm1CWjdvQnB5NUMwNGQzUkc3cXFVaGhVc01wdFhFenlwWWZpdFdl?=
 =?utf-8?B?K2JNRW1HdjZFWWNYbStYRDJDWGl6SWUyNGp4bDk4MjllOExmT0NWREdvUjVz?=
 =?utf-8?B?VFhtK2d3WXN6RElWMFhCbTVPRGZ0cE82bFA3dS9teFdvdFVuUUpzYkdZaWZa?=
 =?utf-8?B?VGlrY21GZjAxSm5nYU1CSEJxNlV4dDZnOHJRN0RxNFNKNkh5ZVR4a3BUQ0VC?=
 =?utf-8?Q?YrcTGo4AcAdn4+UKecwrlVfB3qJSC7AMK9wvYtu?=
X-MS-Exchange-CrossTenant-Network-Message-Id: f614b2ff-4a03-4011-6fd7-08d8cc4f563b
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 08 Feb 2021 16:33:59.6244
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: f8q98QUpiok1p6l252ElplFY4dXvPSIL7x5iDHEzpm3VjOzrJdkXfbL9eyPgh+tOcaAP5VKwXTYpRhA9oskwTg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB5084
X-OriginatorOrg: citrix.com

On Mon, Feb 08, 2021 at 02:59:55PM +0100, Jan Beulich wrote:
> On 08.02.2021 14:19, Roger Pau Monné wrote:
> > On Mon, Feb 08, 2021 at 12:22:25PM +0100, Jan Beulich wrote:
> >> On 08.02.2021 10:38, Roger Pau Monné wrote:
> >>> On Mon, Feb 01, 2021 at 01:43:28PM +0100, Jan Beulich wrote:
> >>>> ---
> >>>> Since CPU0 reads its TSC last on the first iteration, if TSCs were
> >>>> perfectly sync-ed there shouldn't ever be a need to update. However,
> >>>> even on the TSC-reliable system I first tested this on (using
> >>>> "tsc=skewed" to get this rendezvous function into use in the first
> >>>> place) updates by up to several thousand clocks did happen. I wonder
> >>>> whether this points at some problem with the approach that I'm not (yet)
> >>>> seeing.
> >>>
> >>> I'm confused by this, so on a system that had reliable TSCs, which
> >>> you forced to remove the reliable flag, and then you saw big
> >>> differences when doing the rendezvous?
> >>>
> >>> That would seem to indicate that such system doesn't really have
> >>> reliable TSCs?
> >>
> >> I don't think so, no. This can easily be a timing effect from the
> >> heavy cache line bouncing involved here.
> >>
> >> What I'm worried here seeing these updates is that I might still
> >> be moving TSCs backwards in ways observable to the rest of the
> >> system (i.e. beyond the inherent property of the approach), and
> >> this then getting corrected by a subsequent rendezvous. But as
> >> said - I can't see what this could result from, and hence I'm
> >> inclined to assume these are merely effects I've not found a
> >> good explanation for so far.
> > 
> > I'm slightly worried by this, maybe because I'm misunderstanding part
> > of the TSC sync stuff.
> > 
> > So you forced a system that Xen would otherwise consider to have a
> > reliable TSC (one that doesn't need a calibration rendezvous) into
> > doing the calibration rendezvous, and then the skew seen is quite big.
> > I would expect such skew to be minimal? As we would otherwise consider
> > the system to not need calibration at all.
> > 
> > This makes me wonder if the system does indeed need such calibration
> > (which I don't think so), or if the calibration that we actually try
> > to do is quite bogus?
> 
> I wouldn't call it bogus, but it's not very precise. Hence me
> saying that if we wanted to make the problematic system seen as
> TSC_RELIABLE (and hence be able to switch from "tsc" to "std"
> rendezvous), we'd first need to improve accuracy here quite a bit.
> (I suspect sufficient accuracy can only be achieved by making use
> of TSC_ADJUST, but that's not available on the reporter's hardware,
> so of no immediate interest here.)

Right, TSC_ADJUST does indeed seem to be a better way to adjust TSCs,
and to notice if firmware has skewed them.

> 
> >>>> @@ -1719,9 +1737,12 @@ static void time_calibration_tsc_rendezv
> >>>>              while ( atomic_read(&r->semaphore) > total_cpus )
> >>>>                  cpu_relax();
> >>>>          }
> >>>> +
> >>>> +        /* Just in case a read above ended up reading zero. */
> >>>> +        tsc += !tsc;
> >>>
> >>> Won't that be worthy of an ASSERT_UNREACHABLE? I'm not sure I see how
> >>> tsc could be 0 on a healthy system after the loop above.
> >>
> >> It's not forbidden for the firmware to set the TSCs to some
> >> huge negative value. Considering the effect TSC_ADJUST has on
> >> the actual value read by RDTSC, I think I did actually observe
> >> a system coming up this way, because of (not very helpful)
> >> TSC_ADJUST setting by firmware. So no, no ASSERT_UNREACHABLE()
> >> here.
> > 
> > But then the code here will loop 5 times, and it's not possible for
> > those 5 loops to read a TSC value of 0? I could see it reading 0 on
> > one of the iterations but not in all of them.
> 
> Sure, we can read zero at most once here. Yet the "if ( tsc == 0 )"
> conditionals get executed on every iteration, while they must yield
> "true" only on the first (from the variable's initializer); we
> absolutely need to go the "else if()" path on CPU0 on the 2nd
> iteration, and we also need to skip that part on later iterations
> on the other CPUs (for CPU0 to then take the 2nd "else if()" path
> from no later than the 3rd iteration onwards; the body of this of
> course will only be executed on the last iteration).

Oh, I see. Then I think I have no further comments. If you agree to
adjust the cmpxchg please add my R-b.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 16:39:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 16:39:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82972.153554 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l99ZB-00063z-Pn; Mon, 08 Feb 2021 16:39:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82972.153554; Mon, 08 Feb 2021 16:39:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l99ZB-00063s-ML; Mon, 08 Feb 2021 16:39:13 +0000
Received: by outflank-mailman (input) for mailman id 82972;
 Mon, 08 Feb 2021 16:39:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0k8o=HK=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l99ZA-00063n-7G
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 16:39:12 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 44961016-2e51-4c6c-a965-6f420ccb7e8f;
 Mon, 08 Feb 2021 16:39:11 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 44961016-2e51-4c6c-a965-6f420ccb7e8f
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 36734626
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,162,1610427600"; 
   d="scan'208";a="36734626"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
Subject: Re: [PATCH v20210111 08/39] xl: fix description of migrate --debug
To: Ian Jackson <iwj@xenproject.org>, Olaf Hering <olaf@aepfle.de>
CC: <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>, Anthony PERARD
	<anthony.perard@citrix.com>
References: <20210111174224.24714-1-olaf@aepfle.de>
 <20210111174224.24714-9-olaf@aepfle.de>
 <24609.26131.733756.369535@mariner.uk.xensource.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <6debd4a3-2d12-2b9f-c857-11dc2346d750@citrix.com>
Date: Mon, 8 Feb 2021 16:39:01 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
In-Reply-To: <24609.26131.733756.369535@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0447.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:e::27) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 62209071-085f-49f4-ea43-08d8cc500dd3
X-MS-TrafficTypeDiagnostic: SJ0PR03MB5567:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <SJ0PR03MB5567A6B9E9D65DCA1F9642F0BA8F9@SJ0PR03MB5567.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 62209071-085f-49f4-ea43-08d8cc500dd3
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 08 Feb 2021 16:39:07.6275
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Tb9AS9IwTzZu8lVRpNeXVz7zWijI9mJkaabL/YGZBWukewMcbQYBAB+RqyhP2zAEk5nnBfxQ/71ww7tiSqmGM4AsodSwb4cLGNAgn/JMCEo=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5567
X-OriginatorOrg: citrix.com

On 08/02/2021 16:25, Ian Jackson wrote:
> Olaf Hering writes ("[PATCH v20210111 08/39] xl: fix description of migrate --debug"):
>> xl migrate --debug used to track every pfn in every batch of pages.
>> But those days are gone. Adjust the help text to describe what
>> --debug is supposed to do today.
> ...
>> -Display huge (!) amount of debug information during the migration process.
>> +Verify transferred domU page data. All memory will be transferred one more
>> +time to the destination host while the domU is paused, and compared with
>> +the result of the initial transfer while the domU was still running.
> Andy, as our expert on migration, can you confirm whether this is
> accurate ?
>
> Docs (including usage message) so
>
> Release-Acked-by: Ian Jackson <iwj@xenproject.org>

Yeah - I totally changed what --debug did in migration v2, and this is
stale.  The legacy behaviour was unhelpful on any non-trivial guest.

It is possibly worth noting that you typically do see changed data when
using --debug, because of how the front/back pairs work.  This was a bit
of a curveball during development.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 16:39:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 16:39:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82976.153567 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l99Zp-0006EO-36; Mon, 08 Feb 2021 16:39:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82976.153567; Mon, 08 Feb 2021 16:39:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l99Zo-0006EH-W2; Mon, 08 Feb 2021 16:39:52 +0000
Received: by outflank-mailman (input) for mailman id 82976;
 Mon, 08 Feb 2021 16:39:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=atCq=HK=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1l99Zo-0006E6-6z
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 16:39:52 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ddb1c5a2-ccf8-4349-a44d-c1b5ed0d5ed0;
 Mon, 08 Feb 2021 16:39:51 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ddb1c5a2-ccf8-4349-a44d-c1b5ed0d5ed0
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 36818554
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,162,1610427600"; 
   d="scan'208";a="36818554"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
Date: Mon, 8 Feb 2021 17:39:35 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Claudemir Todo Bom
	<claudemir@todobom.com>
Subject: Re: [PATCH v2 2/3] x86/time: adjust time recording
 time_calibration_tsc_rendezvous()
Message-ID: <YCFpR7NVYGblUKGS@Air-de-Roger>
References: <b8d1d4f8-8675-1393-8524-d060ffb1551a@suse.com>
 <26b71f94-d1c7-d906-5b2a-4e7994d6f7c0@suse.com>
 <YB1vGGl59oNZb5m5@Air-de-Roger>
 <861c931e-7922-0b5b-58a9-63e46ba24af0@suse.com>
 <YCEa7JxPCAzyWqfe@Air-de-Roger>
 <9d37329c-50fd-81ae-6987-40c81b78e031@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <9d37329c-50fd-81ae-6987-40c81b78e031@suse.com>
X-ClientProxiedBy: MR2P264CA0097.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:33::13) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 1ab799ab-458c-43de-6f78-08d8cc5021fb
X-MS-TrafficTypeDiagnostic: DM5PR03MB3369:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM5PR03MB336983D38EDB19339743B00E8F8F9@DM5PR03MB3369.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 1ab799ab-458c-43de-6f78-08d8cc5021fb
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 08 Feb 2021 16:39:41.4582
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 8XoXB5n2pPP3Rh6petamUV+c2WvpI16nn2CyaxdPCrOdHEZSyBUBvIZG/4LH6TcO7uA+GeRE4Sj1qMtJSCYU6g==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB3369
X-OriginatorOrg: citrix.com

On Mon, Feb 08, 2021 at 12:50:09PM +0100, Jan Beulich wrote:
> On 08.02.2021 12:05, Roger Pau Monné wrote:
> > On Mon, Feb 08, 2021 at 11:56:01AM +0100, Jan Beulich wrote:
> >> On 05.02.2021 17:15, Roger Pau Monné wrote:
> >>> I've been thinking this all seems doomed when Xen runs in a virtualized
> >>> environment, and should likely be disabled. There's no point in trying
> >>> to sync the TSC over multiple vCPUs as the scheduling delay between
> >>> them will likely skew any calculations.
> >>
> >> We may want to consider to force the equivalent of
> >> "clocksource=tsc" in that case. Otoh a well behaved hypervisor
> >> underneath shouldn't lead to us finding a need to clear
> >> TSC_RELIABLE, at which point this logic wouldn't get engaged
> >> in the first place.
> > 
> > I got the impression that on a loaded system guests with a non-trivial
> > number of vCPUs might have trouble scheduling them all
> > close enough for the rendezvous to not report a big skew, and thus
> > disable TSC_RELIABLE?
> 
> No, check_tsc_warp() / tsc_check_reliability() don't have a
> problem there. Every CPU reads the shared "most advanced"
> stamp before reading its local one. So it doesn't matter how
> large the gaps are here. In fact the possible bad effect is
> the other way around here - if the scheduling effects are
> too heavy, we may mistakenly consider TSCs reliable when
> they aren't.
> 
> A problem of the kind you describe exists in the actual
> rendezvous function. And actually any problem of this kind
> > can, on a smaller scale, already be observed with SMT,
> because the individual hyperthreads of a core can't
> possibly all run at the same time.

Indeed, I got confused between tsc_check_reliability and the actual
rendezvous function. So the adjustments done by the rendezvous are
likely pointless when running virtualized IMO, due to the inability
to schedule all the vCPUs at once to execute the rendezvous.

> As occurs to me only now, I think we can improve accuracy
> some (in particular on big systems) by making sure
> struct calibration_rendezvous's master_tsc_stamp is not
> sharing its cache line with semaphore and master_stime. The
> latter get written by (at least) the BSP, while
> master_tsc_stamp is stable after the 2nd loop iteration.
> Hence on the 3rd and 4th iterations we could even prefetch
> it to reduce the delay on the last one.

Seems like a possibility indeed.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 17:23:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 17:23:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82992.153585 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9AFw-0002JN-6g; Mon, 08 Feb 2021 17:23:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82992.153585; Mon, 08 Feb 2021 17:23:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9AFw-0002JG-2v; Mon, 08 Feb 2021 17:23:24 +0000
Received: by outflank-mailman (input) for mailman id 82992;
 Mon, 08 Feb 2021 17:23:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=pnXs=HK=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1l9AFv-0002JB-J2
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 17:23:23 +0000
Received: from mo6-p00-ob.smtp.rzone.de (unknown [2a01:238:400:100::c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3b8e438d-457f-4613-a8df-d374274a324f;
 Mon, 08 Feb 2021 17:23:22 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.17.1 DYNA|AUTH)
 with ESMTPSA id 005e38x18HNI5Qc
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Mon, 8 Feb 2021 18:23:18 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3b8e438d-457f-4613-a8df-d374274a324f
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDXdoX8l8pYAcz5OTW+v4/A=="
X-RZG-CLASS-ID: mo00
Date: Mon, 8 Feb 2021 18:23:11 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Ian Jackson <iwj@xenproject.org>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v20210111 05/39] tools: add with-xen-scriptdir configure
 option
Message-ID: <20210208182311.53dac1a3.olaf@aepfle.de>
In-Reply-To: <24609.24785.791060.128298@mariner.uk.xensource.com>
References: <20210111174224.24714-1-olaf@aepfle.de>
	<20210111174224.24714-6-olaf@aepfle.de>
	<24609.24785.791060.128298@mariner.uk.xensource.com>
X-Mailer: Claws Mail 2020.08.19 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
 boundary="Sig_/KRediBe/B+uUOAS09ebVvfq"; protocol="application/pgp-signature"

--Sig_/KRediBe/B+uUOAS09ebVvfq
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

On Mon, 8 Feb 2021 16:03:29 +0000,
Ian Jackson <iwj@xenproject.org> wrote:

> I suggest this commit message as a compromise:
> 
>   Some distros plan that fresh installations will have an empty /etc,
>   whose content will not be controlled by the package manager
>   anymore.
> 
>   To make this possible, add a knob to configure to allow storing the
>   hotplug scripts in libexec instead of /etc/xen/scripts.
> 
> Olaf, would that be OK with you ?

Yes, this is fine. Thanks.


> As for detailed review:
> 
> > diff --git a/m4/paths.m4 b/m4/paths.m4
> > index 89d3bb8312..0cec2bb190 100644
> > --- a/m4/paths.m4
> > +++ b/m4/paths.m4
> ...
> > +AC_ARG_WITH([xen-scriptdir],
> > +    AS_HELP_STRING([--with-xen-scriptdir=DIR],
> > +    [Path to directory for dom0 hotplug scripts. [SYSCONFDIR/xen/scripts]]),
> > +    [xen_scriptdir_path=$withval],
> > +    [xen_scriptdir_path=$sysconfdir/xen/scripts])
> ...
> > -XEN_SCRIPT_DIR=$XEN_CONFIG_DIR/scripts
> > +XEN_SCRIPT_DIR=$xen_scriptdir_path
> >  AC_SUBST(XEN_SCRIPT_DIR)
> 
> It is not clear to me why the default is changed from
> "$XEN_CONFIG_DIR/scripts" to "$sysconfdir/xen/scripts" and there isn't
> any explanation for this in the commit message.  I think this may make
> no difference but an explanation is called for.

The reason is the ordering of assignments in the file:
AC_ARG_WITH comes early in the file, XEN_CONFIG_DIR= is assigned much
later.

It seems the assignments for CONFIG_DIR and XEN_CONFIG_DIR can be moved
up, because $sysconfdir is expected to be set already. As a result the
new AC_ARG_WITH= can continue to use "$XEN_CONFIG_DIR/scripts" for the
default case. I assume the current ordering is to have a separate
AC_ARG_WITH and AC_SUBST section.
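A possible shape for that reordering in m4/paths.m4 (a sketch only; the
exact surrounding context is assumed from this thread, not taken from
the actual file):

```m4
dnl Hypothetical reordering sketch: define the config dirs before any
dnl AC_ARG_WITH that wants to use them as a default.
CONFIG_DIR=$sysconfdir
XEN_CONFIG_DIR=$CONFIG_DIR/xen

AC_ARG_WITH([xen-scriptdir],
    AS_HELP_STRING([--with-xen-scriptdir=DIR],
    [Path to directory for dom0 hotplug scripts. [XEN_CONFIG_DIR/scripts]]),
    [xen_scriptdir_path=$withval],
    [xen_scriptdir_path=$XEN_CONFIG_DIR/scripts])

XEN_SCRIPT_DIR=$xen_scriptdir_path
AC_SUBST(XEN_SCRIPT_DIR)
```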

Olaf

--Sig_/KRediBe/B+uUOAS09ebVvfq
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmAhc38ACgkQ86SN7mm1
DoDXZw//TixZZSjh6sLj7ASOkbUTN3HQ0Bft6GNfUaRv47tL+XbeST+tNPbTF65h
Oz4Pi1M/QdkLtGony0s84B6AQDQSzYys1AH0Ew083Hcn5jQTyktTSgCQEAUxqc7E
2Rw0NmMURSYcpsDMmhxSwS0vLOkG2zcryM9DvsSuE62zfoorE09GJY+MGX4O2CbL
CszVovz0RS4J8SwrhVejCpL4Ol1FJVHQlt6wP0RAbt3CqKnqcWlXsfpyJds6jbrD
tu053fNtQQ0/FJx17Mn3qf63e0DWKlXKPP4txfx8x0JCX7OI4MNen6JmyfMp/Evy
0y+CxfZ/ctyzwYLAURWpBfYgcvWFtV2Pudzcu7hOSnoLwLNT05suRFQbAnOuQII+
8N15nudQzgPMia4WOd9H9NhC5tz6FsjJNK0DNBl5+KMAuLX0x5OL/WyaqqDPSjGS
r7IWH4HVJHgKZwt8j6sI1owR94njRF7cjQ3qxwhRrMexHKMnuSdsccc1ne9EFANb
qkfhes5dKhYk9rR+0PbBo7ZbYNeYHQsNH+VbzXB+8/dAZ9tbQ8RUYnjJy98AE+1K
YBecK4K6XfvwB5a3NexZ6J7XowkLzJyZZtO/7RTKYM4j+Pq40tvpwoYzOry0Lt8V
nfUr63QiblrMsr/3XxDHZiPDb5bQT/dD7+qO1bcT1mflnSIwbj0=
=5Ypn
-----END PGP SIGNATURE-----

--Sig_/KRediBe/B+uUOAS09ebVvfq--


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 17:30:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 17:30:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82996.153597 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9ANB-0003Ge-W6; Mon, 08 Feb 2021 17:30:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82996.153597; Mon, 08 Feb 2021 17:30:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9ANB-0003GX-St; Mon, 08 Feb 2021 17:30:53 +0000
Received: by outflank-mailman (input) for mailman id 82996;
 Mon, 08 Feb 2021 17:30:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=pnXs=HK=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1l9ANA-0003GS-DB
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 17:30:52 +0000
Received: from mo6-p01-ob.smtp.rzone.de (unknown [2a01:238:400:200::c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f832fe2e-54fd-4dfe-b6f7-21d2790017f1;
 Mon, 08 Feb 2021 17:30:51 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.17.1 DYNA|AUTH)
 with ESMTPSA id 005e38x18HUj5SM
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Mon, 8 Feb 2021 18:30:45 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f832fe2e-54fd-4dfe-b6f7-21d2790017f1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1612805450;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
	From:Subject:Sender;
	bh=BiiWbli1BkYRtu3wqvFm3m7Y+oPANItkF1Kk3tR1ghM=;
	b=Ys3EP6DuZhwF3g99Gy3q892kZd1u9qZ2b9FTnUNztxew+B+2tk9T8bob4qfXNKGn8J
	6C6hHkiZcJ7QqwqembznAVsr/oCY8VqlSd3UC/iYVnuGEBhSHLNNhI64bryPJNyq49SR
	Neho/XaoFn/21tT0KXPSojSftmypSILf0kiGvce7LKCZrIFyTp9GaXDVZG+xF2ILzW4C
	Ag7nXhvuzhMb0sQUnTKDHUWTprc6XlSE0tGfa4eAMH4N+LV97TcRezFDbf2XeViNsCWL
	j2MFV6KftIGTeOiPP0KCW0Hu4gwDon+9cCeGO31KLcoMmU+jUz251EJY9Zi+I5IeWncJ
	YTvA==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDXdoX8l8pYAcz5OTW+v4/A=="
X-RZG-CLASS-ID: mo00
Date: Mon, 8 Feb 2021 18:30:36 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Ian Jackson <iwj@xenproject.org>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>, Anthony PERARD
 <anthony.perard@citrix.com>
Subject: Re: [PATCH v20210111 07/39] xl: optionally print timestamps during
 xl migrate
Message-ID: <20210208183036.3f95bb0f.olaf@aepfle.de>
In-Reply-To: <24609.25950.629059.164010@mariner.uk.xensource.com>
References: <20210111174224.24714-1-olaf@aepfle.de>
	<20210111174224.24714-8-olaf@aepfle.de>
	<24609.25950.629059.164010@mariner.uk.xensource.com>
X-Mailer: Claws Mail 2020.08.19 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
 boundary="Sig_/6wpClQSmYSn0yNXf.PL.WKm"; protocol="application/pgp-signature"

--Sig_/6wpClQSmYSn0yNXf.PL.WKm
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

On Mon, 8 Feb 2021 16:22:54 +0000,
Ian Jackson <iwj@xenproject.org> wrote:

> I wonder if I can get a quick second opinion from someone on the API
> question.  Using up a single letter option is something I don't want
> to just rush into.

Maybe put "-T" or "-t" into xl.c:main() instead, so that every command
prints timestamp+pid?

For everything besides "migrate" it will only be useful along with a
couple of "-v" to enable global debug output.
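A minimal sketch of what such a global flag could look like (all names
here, opt_timestamps and xl_msg included, are hypothetical
illustrations, not the real xl code):

```c
#include <stdio.h>
#include <time.h>
#include <unistd.h>

/* Hypothetical global option, set from a "-t" in xl's main getopt loop. */
static int opt_timestamps;

/* Format "YYYY-MM-DD HH:MM:SS [pid]" into buf and return it. */
static const char *log_prefix(char *buf, size_t len)
{
    time_t now = time(NULL);
    struct tm tm;
    char stamp[32];

    localtime_r(&now, &tm);
    strftime(stamp, sizeof(stamp), "%Y-%m-%d %H:%M:%S", &tm);
    snprintf(buf, len, "%s [%ld]", stamp, (long)getpid());
    return buf;
}

/* If all progress messages funnel through one helper like this, a single
 * global flag is enough to prefix every command's output. */
static void xl_msg(const char *text)
{
    char buf[64];

    if (opt_timestamps)
        fprintf(stderr, "%s ", log_prefix(buf, sizeof(buf)));
    fprintf(stderr, "%s\n", text);
}
```

The point of putting the flag in main() rather than in cmd_migrate() is
that every subcommand then shares it for free.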


Olaf

--Sig_/6wpClQSmYSn0yNXf.PL.WKm
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmAhdTwACgkQ86SN7mm1
DoDczxAAhKSTE9tPYEqw+A6W1rhZrL8wpmLV9xoiT0mXLQ8mOuv1YiWBQVaMTMBV
G1zceTi7rJT6D6ODc4tsmgWP8ATtDITwO//pB+dA3+KeG2EJhADaawyjCsF8OzfK
IsT1eoTsETwAmX+YTHeuCDa7PMSlG5xa/OQ/jxg6XsF3HiFWCKoOMXfePFvtTodj
Vbklky8QO3NIzV/VQl0s1K+SrB7yTHA4BGLNImd4DBWiwd02j6sCOvQL0FbedYg7
lAA+rYMv2mJkL4bwC6MjiJ091kP7SFZxdzoQZqWnH4PE3YLMzI2vutsfU68g7udb
KJ8ZFyJCB9WoABIBrRG+VOyVDKTZBRj5tXNznhrBJD43UgT9vSwEtJ3YqDoEU2zU
LMXMc5cARjxnwdHKaHYkfeD7tYTdCO5DWqZ9TOTRv+Z9SnaNuXEJSOmwIZqgT7h2
xxjyic29srpoNpGWQ+m51/kC1SEOtmMwFlxe8t3nQD2SUQ66D0hQQ3DTHOp5Qi7Y
vsTeCYDajmcBa+wOFhMrFDrjSbGYdxKkNodULnWrsdMZKe+JSwutZ7mjXs4nbnzr
zFdcikxRgfkygm7EDyOBP0lFrhA/6HsN7jwooF6e5YtkmZh/2i8oN3XBE99NQdMP
LaQ+Zw/vIzYl1dZZ149PLTClxk0FTg5Xws1V3/Hv2v12r1jU2z8=
=9jwz
-----END PGP SIGNATURE-----

--Sig_/6wpClQSmYSn0yNXf.PL.WKm--


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 17:46:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 17:46:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82998.153609 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9AcW-0004LV-FZ; Mon, 08 Feb 2021 17:46:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82998.153609; Mon, 08 Feb 2021 17:46:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9AcW-0004LO-Bt; Mon, 08 Feb 2021 17:46:44 +0000
Received: by outflank-mailman (input) for mailman id 82998;
 Mon, 08 Feb 2021 17:46:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l9AcU-0004LJ-Gn
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 17:46:42 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l9AcU-0000uh-CP
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 17:46:42 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l9AcU-0000Tm-AJ
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 17:46:42 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l9AcP-00038n-Dr; Mon, 08 Feb 2021 17:46:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=HqbDYQzq5ylB3pQh2NMNQTh6lR+3mFRbwfdeXQvQ5Tw=; b=mWJetaJ44hqn2vsjYUr1Eu6sFV
	SuZLzMMRa4oKrTS/8DxnJTJg47U8RD6hmrBIa44UYEMVlRA6U+pIJMGJTF7a2AWoSB/9hGr2jEHo+
	tW2mD4Z28+AHST2FOOGXkZTIyOltqB2R2CmBpTPu3ARHfX3WcRLq5J1AXC0fu38h5x9c=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24609.30973.166445.944436@mariner.uk.xensource.com>
Date: Mon, 8 Feb 2021 17:46:37 +0000
To: Andrew Cooper <andrew.cooper3@citrix.com>,
    Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org,
    Wei Liu <wl@xen.org>,
    Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v20210111 15/39] tools/guest: save: move batch_pfns
In-Reply-To: <20210111174224.24714-16-olaf@aepfle.de>
References: <20210111174224.24714-1-olaf@aepfle.de>
	<20210111174224.24714-16-olaf@aepfle.de>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Olaf Hering writes ("[PATCH v20210111 15/39] tools/guest: save: move batch_pfns"):
> The batch_pfns array is already allocated in advance.
> Move it into the preallocated area.

I think these patches really need a review from someone who understands
the migration code.  Ideally, Andy, who unfortunately has been very
busy.

I doubt this is going to make it for 4.15 :-( but maybe Andy or
someone else has an opinion.

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 17:47:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 17:47:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.82999.153621 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Acz-0004QY-Ov; Mon, 08 Feb 2021 17:47:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 82999.153621; Mon, 08 Feb 2021 17:47:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Acz-0004QR-LN; Mon, 08 Feb 2021 17:47:13 +0000
Received: by outflank-mailman (input) for mailman id 82999;
 Mon, 08 Feb 2021 17:47:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l9Acy-0004QH-AQ
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 17:47:12 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l9Acy-0000x0-9d
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 17:47:12 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l9Acy-0000Vw-8M
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 17:47:12 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l9Acu-00039K-P0; Mon, 08 Feb 2021 17:47:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=5yoZFqMzJOYFYJ3eOacoTnR1JQ0/derntfD0JSsze0M=; b=Lawqt87bpd3UPpNvEell8eL92N
	ZIX1B9b49lrOSG76M5oAMGcqg8KsZyxw2ucUlj9FF6CrssTx7D75OoX29dU1bQTpYhJVrbRRrpdw/
	eWgUEAFgBNrgqXdgPjgU3DrNlSTen4TkK66y7WG6dkbJWa6TFk/9wNdI5BKPzlKv6By8=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24609.31004.503545.742097@mariner.uk.xensource.com>
Date: Mon, 8 Feb 2021 17:47:08 +0000
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org,
    Wei Liu <wl@xen.org>,
    Anthony PERARD  <anthony.perard@citrix.com>
Subject: Re: [PATCH v20210111 07/39] xl: optionally print timestamps during
 xl migrate
In-Reply-To: <20210208183036.3f95bb0f.olaf@aepfle.de>
References: <20210111174224.24714-1-olaf@aepfle.de>
	<20210111174224.24714-8-olaf@aepfle.de>
	<24609.25950.629059.164010@mariner.uk.xensource.com>
	<20210208183036.3f95bb0f.olaf@aepfle.de>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Olaf Hering writes ("Re: [PATCH v20210111 07/39] xl: optionally print timestamps during xl migrate"):
> On Mon, 8 Feb 2021 16:22:54 +0000,
> Ian Jackson <iwj@xenproject.org> wrote:
> 
> > I wonder if I can get a quick second opinion from someone on the API
> > question.  Using up a single letter option is something I don't want
> > to just rush into.
> 
> Maybe put "-T" or "-t" into xl.c:main() instead, so that every command prints timestamp+pid?

Yes; following some informal IRC discussions, I think that would be
best.

> For everything besides "migrate" it will only be useful along with a couple of "-v" to enable global debug output.

Right.

Ian.


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 17:48:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 17:48:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83000.153632 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9AeM-0004a4-3W; Mon, 08 Feb 2021 17:48:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83000.153632; Mon, 08 Feb 2021 17:48:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9AeM-0004Zx-0g; Mon, 08 Feb 2021 17:48:38 +0000
Received: by outflank-mailman (input) for mailman id 83000;
 Mon, 08 Feb 2021 17:48:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l9AeK-0004Zr-Q7
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 17:48:36 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l9AeK-0000y9-OT
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 17:48:36 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l9AeK-0000bz-Mr
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 17:48:36 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l9AeH-00039j-H7; Mon, 08 Feb 2021 17:48:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=gi2DaL39nSETXfcihFpkI2eRsVfIa+o38yQUx/utY6g=; b=UIHX6qBzDXFnkLhmKQE5TnmmYJ
	ReIPN+yGno4OByLYgG4GvvJjvXdmKHvyptvWefPR53RGGmW4rcegczmuLgdP6a0eOF+v2CuuObuc5
	mmv+bo+3vxSMsb/iSM81JTRQ4HwTDAZoV1Z0xI02KYb0KiF9CJz1C/7HSoMbBgfMpH6c=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24609.31087.818313.780529@mariner.uk.xensource.com>
Date: Mon, 8 Feb 2021 17:48:31 +0000
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org,
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH v20210111 05/39] tools: add with-xen-scriptdir configure
 option
In-Reply-To: <20210208182311.53dac1a3.olaf@aepfle.de>
References: <20210111174224.24714-1-olaf@aepfle.de>
	<20210111174224.24714-6-olaf@aepfle.de>
	<24609.24785.791060.128298@mariner.uk.xensource.com>
	<20210208182311.53dac1a3.olaf@aepfle.de>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Olaf Hering writes ("Re: [PATCH v20210111 05/39] tools: add with-xen-scriptdir configure option"):
> The reason is the ordering of assignments in the file:
> AC_ARG_WITH comes early in the file, XEN_CONFIG_DIR= is assigned much later.

Ah.

> It seems the assignments for CONFIG_DIR and XEN_CONFIG_DIR can be moved up, because $sysconfdir is expected to be set already. As a result the new AC_ARG_WITH= can continue to use "$XEN_CONFIG_DIR/scripts" for the default case.

That would be best I think.

> I assume the current ordering is to have a separate AC_ARG_WITH and AC_SUBST section.

I could be wrong, but I don't think the location of AC_SUBST matters.

Ian.


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 17:53:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 17:53:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83004.153645 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9AjH-0005Xn-Oq; Mon, 08 Feb 2021 17:53:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83004.153645; Mon, 08 Feb 2021 17:53:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9AjH-0005Xg-Kz; Mon, 08 Feb 2021 17:53:43 +0000
Received: by outflank-mailman (input) for mailman id 83004;
 Mon, 08 Feb 2021 17:53:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=W+r8=HK=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1l9AjG-0005Xb-Nb
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 17:53:42 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8f7d2b89-e32c-4c6f-b193-0e863ca4ba3f;
 Mon, 08 Feb 2021 17:53:41 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 91F9D64E7D;
 Mon,  8 Feb 2021 17:53:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8f7d2b89-e32c-4c6f-b193-0e863ca4ba3f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1612806820;
	bh=AvmbdsMEtZySlIGWuQEr56QGNFODJMsBTDXkKDN9g3g=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=qI7eO2WgW06Sfk+FaqW8mrOdAIxo5J7IlNV0BRz5b2vJ4hOGyOMOn7N/QVC9Kx9T3
	 9BoU5BrrmkKMXmitU/xiwv38zsE8I6prnNAp1xK5SI+ykrv+E/kK5SNFZgAxVOHgtQ
	 T7jmEldiYxCF0xCqHUvrqd5mWzqNhDAODSmH7BjJzohkuvBFE7GNBWIE5zI5uvHHzE
	 4OO8hfKCdBIEqNsMaqcIZ/CzcYwV2FXFJSVjUNYmGsm5fW+tnl0O94N21qcukK04pM
	 cWWdq+pljFxU9fc/V1PJTeaqubMGqog4I2ahT1lqNRDNEWua6sCsKoXFHMQuYV1/MG
	 +9tBD8mpL9IgQ==
Date: Mon, 8 Feb 2021 09:53:39 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, lucmiccio@gmail.com, 
    xen-devel@lists.xenproject.org, Bertrand.Marquis@arm.com, 
    Volodymyr_Babchuk@epam.com
Subject: Re: [PATCH] xen/arm: fix gnttab_need_iommu_mapping
In-Reply-To: <247f517e-a283-12c8-2ccb-3915cda4ac2e@xen.org>
Message-ID: <alpine.DEB.2.21.2102080947390.8948@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2102051604320.29047@sstabellini-ThinkPad-T480s> <247f517e-a283-12c8-2ccb-3915cda4ac2e@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Sat, 6 Feb 2021, Julien Grall wrote:
> Hi Stefano,
> 
> On 06/02/2021 00:38, Stefano Stabellini wrote:
> > Commit 91d4eca7add broke gnttab_need_iommu_mapping on ARM.
> 
> Doh :/.
> 
> > The offending chunk is:
> >   #define gnttab_need_iommu_mapping(d)                    \
> > -    (is_domain_direct_mapped(d) && need_iommu(d))
> > +    (is_domain_direct_mapped(d) && need_iommu_pt_sync(d))
> > 
> > On ARM we need gnttab_need_iommu_mapping to be true for dom0 when it is
> > directly mapped, like the old check did,
> 
> This is not entirely correct, we only need gnttab_need_iommu_mapping() to
> return true when the domain is direct mapped **and** the IOMMU is enabled for
> the domain.
> 
> > but the new check is always
> > false.
> > In fact, need_iommu_pt_sync is defined as dom_iommu(d)->need_sync and
> > need_sync is set as:
> > 
> >      if ( !is_hardware_domain(d) || iommu_hwdom_strict )
> >          hd->need_sync = !iommu_use_hap_pt(d);
> > iommu_hwdom_strict is actually supposed to be ignored on ARM, see the
> > definition in docs/misc/xen-command-line.pandoc:
> > 
> >      This option is hardwired to true for x86 PVH dom0's (as RAM
> >      belonging to other domains in the system don't live in a
> >      compatible address space), and is ignored for ARM.
> > 
> > But aside from that, the issue is that iommu_use_hap_pt(d) is true,
> > hence, hd->need_sync is false, and gnttab_need_iommu_mapping(d) is false
> > too.
> 
> need_sync means that you have a separate IOMMU page-table and they need to be
> updated for every change.
> 
> hap_pt means the page-table used by the IOMMU is the P2M.
> 
> For Arm, we always share the P2M with the IOMMU.
> 
> > 
> > As a consequence, when using PV network from a domU on a system where
> > the IOMMU is on for Dom0, I get:
> > 
> > (XEN) smmu: /smmu@fd800000: Unhandled context fault: fsr=0x402,
> > iova=0x8424cb148, fsynr=0xb0001, cb=0
> > [   68.290307] macb ff0e0000.ethernet eth0: DMA bus error: HRESP not OK
> > 
> > The fix is to go back to the old implementation of
> > gnttab_need_iommu_mapping.  However, we don't even need to specify &&
> > need_iommu(d) since we don't actually need to check for the IOMMU to be
> > enabled (iommu_map does it for us at the beginning of the function).
> 
> gnttab_need_iommu_mapping() doesn't only gate the iommu_legacy_{,un}map() call
> but also decides whether we need to hold both the local and remote grant-table
> write locks for the duration of the operation (see double_gt_lock()).
> 
> I'd like to avoid the requirement to hold the double_gt_lock() if the
> domain is not going to use the IOMMU.
> 
> > 
> > This fix is preferable to changing the implementation of need_sync or
> > iommu_use_hap_pt because "need_sync" is not really the reason why we
> > want gnttab_need_iommu_mapping to return true.
> 
> In 4.13, we introduced is_iommu_enabled() (see commit c45f59292367 "domain:
> introduce XEN_DOMCTL_CDF_iommu flag") that should do the job for this patch.
> 
> For 4.12, we could use iommu_enabled, as in general dom0 will use an IOMMU if
> Xen enables it globally. Note that 4.12 has been security supported only since
> last October (see [1]). So it would be up to downstream to patch their tree.

I'll make some commit message improvements based on your reply and also
add "is_iommu_enabled(d)" to the check for this patch, with the
understanding that in 4.12 it would have to be different.

Speaking of 4.12, this bug is so severe that I would consider asking for
a backport even if technically the tree is only open for security fixes.
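In stand-alone form, the combined predicate being discussed looks like
this (the two helpers below are stubs for illustration only; in the
hypervisor they are the real is_domain_direct_mapped() and
is_iommu_enabled() predicates):

```c
#include <stdbool.h>

/* Stub predicates standing in for the real Xen ones, so the combined
 * check can be exercised outside the hypervisor tree. Here domain 0 is
 * modelled as direct mapped with an enabled IOMMU; other domains are
 * neither. */
static bool is_domain_direct_mapped(int d) { return d == 0; }
static bool is_iommu_enabled(int d)        { return d == 0; }

/* Shape of the fix as discussed: map grants in the IOMMU only for a
 * direct-mapped domain whose IOMMU is actually enabled, without relying
 * on need_iommu_pt_sync(), which is always false on Arm. */
#define gnttab_need_iommu_mapping(d) \
    (is_domain_direct_mapped(d) && is_iommu_enabled(d))
```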



> > Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> > Backport: 4.12+
> 
> I would suggest using a Fixes: tag if you know the exact commit. This would
> make it easier for downstream users if they backported the offending patch.

I'll add a Fixes tag.


> > ---
> > 
> > It is incredible that it was missed for this long, but it takes a full
> > PV drivers test using DMA from a non-coherent device to trigger it, e.g.
> > wget from a domU to an HTTP server on a different machine; ping or
> > connections to dom0 won't trigger the bug.
> 
> Great finding!
> 
> > 
> > It is interesting that, given that the IOMMU is on for dom0, Linux could
> > have simply avoided using swiotlb-xen and everything would have just
> > worked. It is worth considering introducing a feature flag (e.g.
> > XENFEAT_ARM_dom0_iommu) to let dom0 know that the IOMMU is on and
> > swiotlb-xen is not necessary.
> > Then Linux can avoid initializing
> > swiotlb-xen and just rely on the IOMMU for translation.
> 
> The presence of an IOMMU on the system doesn't necessarily indicate that all
> the devices will be protected by an IOMMU. We can only turn off swiotlb-xen
> when we know that **all** the devices are protected.
> 
> Therefore a simple feature flag is not going to do the job. Instead, we need
> to tell Linux which devices have been protected by an IOMMU. This is something
> I attempted to do a few years ago (see [2]).
> 
> In addition to that, we also need to know whether Linux is capable of
> disabling swiotlb-xen. This would allow us to turn off all the mitigations we
> introduced in Xen for direct-mapped domains. One possibility would be to
> introduce an ELF note, like for Arm (see [3]).

Thanks for your feedback, I'll mull over it a bit more and then start a
separate email thread on this topic.

 
> > diff --git a/xen/include/asm-arm/grant_table.h
> > b/xen/include/asm-arm/grant_table.h
> > index 6f585b1538..2a154d1851 100644
> > --- a/xen/include/asm-arm/grant_table.h
> > +++ b/xen/include/asm-arm/grant_table.h
> > @@ -88,8 +88,7 @@ int replace_grant_host_mapping(unsigned long gpaddr, mfn_t
> > mfn,
> >   #define gnttab_status_gfn(d, t, i)                                       \
> >       (((i) >= nr_status_frames(t)) ? INVALID_GFN : (t)->arch.status_gfn[i])
> >   -#define gnttab_need_iommu_mapping(d)                    \
> > -    (is_domain_direct_mapped(d) && need_iommu_pt_sync(d))
> > +#define gnttab_need_iommu_mapping(d)  (is_domain_direct_mapped(d))
> >     #endif /* __ASM_GRANT_TABLE_H__ */
> >   /*
> > 
> 
> Cheers,
> 
> [1] https://xenbits.xen.org/docs/unstable/support-matrix.html
> [2]
> https://lists.infradead.org/pipermail/linux-arm-kernel/2014-February/234523.html
> [3]
> https://patchwork.kernel.org/project/linux-arm-kernel/patch/5342AF59.3030405@linaro.org/
> 
> 
> -- 
> Julien Grall
> 


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 17:54:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 17:54:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83005.153656 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Ak8-0005df-1p; Mon, 08 Feb 2021 17:54:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83005.153656; Mon, 08 Feb 2021 17:54:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Ak7-0005dY-VB; Mon, 08 Feb 2021 17:54:35 +0000
Received: by outflank-mailman (input) for mailman id 83005;
 Mon, 08 Feb 2021 17:54:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=pnXs=HK=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1l9Ak6-0005dR-OY
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 17:54:34 +0000
Received: from mo6-p01-ob.smtp.rzone.de (unknown [2a01:238:400:200::c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1b8f5b53-62bb-442d-bbd2-92689106d9ba;
 Mon, 08 Feb 2021 17:54:33 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.17.1 DYNA|AUTH)
 with ESMTPSA id 005e38x18HsU5Wx
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Mon, 8 Feb 2021 18:54:30 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1b8f5b53-62bb-442d-bbd2-92689106d9ba
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1612806873;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
	From:Subject:Sender;
	bh=zSF1VrDFjjcc7/koq7ymPiGui/6k9P9w6E9VMq44wfM=;
	b=YJl8DQ3EGFMrfwwU6QSHPdW/HDbk56UXxbt/UO7fY3oZ4L230zmFUc+y43CUxnitQ+
	ptAMeR4ocYKpnToAyAh2aOiKazkA5IiImIGPUrtk+Ziinhys78kychkazN0y2zLt2iAK
	gGze4hqVYNb6XZTt7I/MLY4FIPMDB+GTkOgCNAmXgOk0Dh6FvvmxVpFveWr4yhFr6SpG
	/vatai07y7J29zNxbB6VFN4Cp+JqRziGi4eNsq13UT5Y7wHavuj7DjytZhxLCseg4TOr
	5HiLPmFt+8ENDmCGw+9lu/bi1MjtnmTR/SqIm7xtfeHIf7SbwqRarHDa9xtl5gin3lgQ
	wLnQ==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDXdoX8l8pYAcz5OTW+v4/A=="
X-RZG-CLASS-ID: mo00
Date: Mon, 8 Feb 2021 18:54:13 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Ian Jackson <iwj@xenproject.org>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v20210111 05/39] tools: add with-xen-scriptdir configure
 option
Message-ID: <20210208185413.51acc99b.olaf@aepfle.de>
In-Reply-To: <24609.31087.818313.780529@mariner.uk.xensource.com>
References: <20210111174224.24714-1-olaf@aepfle.de>
	<20210111174224.24714-6-olaf@aepfle.de>
	<24609.24785.791060.128298@mariner.uk.xensource.com>
	<20210208182311.53dac1a3.olaf@aepfle.de>
	<24609.31087.818313.780529@mariner.uk.xensource.com>
X-Mailer: Claws Mail 2020.08.19 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
 boundary="Sig_/mt4jlHxIO8tJEBwOoQxy5oi"; protocol="application/pgp-signature"

--Sig_/mt4jlHxIO8tJEBwOoQxy5oi
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

Am Mon, 8 Feb 2021 17:48:31 +0000
schrieb Ian Jackson <iwj@xenproject.org>:

> > It seems the assignments for CONFIG_DIR and XEN_CONFIG_DIR can be moved
> > up, because $sysconfdir is expected to be set already. As a result the
> > new AC_ARG_WITH= can continue to use "$XEN_CONFIG_DIR/scripts" for the
> > default case.
> 
> That would be best I think.

I will split this into individual changes and send a separate series.


Olaf
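
For context, the change under discussion can be sketched roughly as an autoconf fragment (a sketch only, not the actual patch; the option name and the `$XEN_CONFIG_DIR/scripts` default follow the thread, everything else is an assumption):

```m4
dnl Sketch only: assumes the series adds a --with-xen-scriptdir option.
dnl CONFIG_DIR / XEN_CONFIG_DIR are assigned first, so the AC_ARG_WITH
dnl default below can safely reference them.
CONFIG_DIR=$sysconfdir
XEN_CONFIG_DIR=$CONFIG_DIR/xen

AC_ARG_WITH([xen-scriptdir],
    AS_HELP_STRING([--with-xen-scriptdir=DIR],
        [Path to directory for hotplug scripts [XEN_CONFIG_DIR/scripts]]),
    [xen_scriptdir_path=$withval],
    [xen_scriptdir_path=$XEN_CONFIG_DIR/scripts])
XEN_SCRIPT_DIR=$xen_scriptdir_path
AC_SUBST(XEN_SCRIPT_DIR)
```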

--Sig_/mt4jlHxIO8tJEBwOoQxy5oi
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmAhesUACgkQ86SN7mm1
DoB8ihAAlh8D0Q0aePy2ionL6S/Rmggi7ATrEdlkw4FCrz4BtjVQlR26S3twLknN
KqanqAl9dvdZI3gIelJ9ZYT0DWxtvIJo9AnBnrI6hgHEN+sqeiZipGL7EClwv4Kd
xdl27R/RUgBNxjfTXwG84uV5o/AuoMGJaCyY/S7/Hqq30qLel56dXrWOMonJpJuO
sh0dnBnfYjPtHilzIG2uNrzAfi+BoX9Qu1oN66hoc9s4Kd7SAemj/gLJ+k0OtfS1
0eM2JQExa4BmlKbpUrVx7HQmc0uIsT2tBJHivTlV5bYk/GCEUMx+qSQzLTXEkG+6
wz9/Uhcqz0pgydVkoGCEq9HpDz7BbrtfRKB9LU0EWA5zy8KynE3WFNVHBSPZUGTz
CVU8GjwphO+4Nvv5qEPrTxECA+n7EC7TC/eDPDjkA21g8zneZHpKQ/jyeAKeB2dH
xjC3ld1JoYpoKUQrBWbe/LZS1o6zeF/lcxaYCDVLO8XhZrGLPXs9E+2WaUwQG0ez
/rlDMXiwMEgsc7eQlcdieSLYRi7WidL6xrfIIYr3ixuS9E0/rzHxnnQuVgYW7GWW
LGjgvpuZlagotjcvwG32vrmF5b/qyw9dNpAvQrqbp6W7MsIV2BWf7gMWgbVDXz36
gigw9qrGLq+7AFwN79FcqyMVaUtn+62zFtj+ehRVfskLcN46Gsc=
=HJJA
-----END PGP SIGNATURE-----

--Sig_/mt4jlHxIO8tJEBwOoQxy5oi--


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 17:55:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 17:55:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83006.153669 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Aky-0005kI-Bl; Mon, 08 Feb 2021 17:55:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83006.153669; Mon, 08 Feb 2021 17:55:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Aky-0005kB-8U; Mon, 08 Feb 2021 17:55:28 +0000
Received: by outflank-mailman (input) for mailman id 83006;
 Mon, 08 Feb 2021 17:55:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l9Akw-0005k6-HI
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 17:55:26 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l9Akw-00014n-Ek
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 17:55:26 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l9Akw-000128-DQ
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 17:55:26 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l9Akt-0003BF-5C; Mon, 08 Feb 2021 17:55:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=R4GG7JJ+EpnkksH0FqdML0mwaQM/AIZTKvFwe1uszLo=; b=BId9BYvKEegyE5vMK6Prc0rO4v
	mpz3GLook9HXFXofbMATQczjBtFxvgUt7RclOjKtxPe3XDBPT10y78oQJzg5Z5YPmPsZFCzRZgDme
	aTstFzl6VMoNa9+uIf7BZUDH09paQtkGsoZQnaUwL340Q3iZ9aWolJlcftNZoQkP67lE=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24609.31498.932428.931134@mariner.uk.xensource.com>
Date: Mon, 8 Feb 2021 17:55:22 +0000
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org,
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH v20210111 05/39] tools: add with-xen-scriptdir configure
 option
In-Reply-To: <20210208185413.51acc99b.olaf@aepfle.de>
References: <20210111174224.24714-1-olaf@aepfle.de>
	<20210111174224.24714-6-olaf@aepfle.de>
	<24609.24785.791060.128298@mariner.uk.xensource.com>
	<20210208182311.53dac1a3.olaf@aepfle.de>
	<24609.31087.818313.780529@mariner.uk.xensource.com>
	<20210208185413.51acc99b.olaf@aepfle.de>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Olaf Hering writes ("Re: [PATCH v20210111 05/39] tools: add with-xen-scriptdir configure option"):
> Am Mon, 8 Feb 2021 17:48:31 +0000
> schrieb Ian Jackson <iwj@xenproject.org>:
> > > It seems the assignments for CONFIG_DIR and XEN_CONFIG_DIR can be moved up, because $sysconfdir is expected to be set already. As a result the new AC_ARG_WITH= can continue to use "$XEN_CONFIG_DIR/scripts" for the default case.  
> > 
> > That would be best I think.
> 
> I will split this into individual changes and send a separate series.

OK, thanks, that will help with de-risking this from a release PoV.

Ian.


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 18:07:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 18:07:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83013.153680 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9AwJ-0006sP-9X; Mon, 08 Feb 2021 18:07:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83013.153680; Mon, 08 Feb 2021 18:07:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9AwJ-0006sI-6e; Mon, 08 Feb 2021 18:07:11 +0000
Received: by outflank-mailman (input) for mailman id 83013;
 Mon, 08 Feb 2021 18:07:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oudU=HK=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1l9AwH-0006sD-OQ
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 18:07:09 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com (unknown
 [40.107.20.41]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8b587996-0912-465d-8841-847419767d62;
 Mon, 08 Feb 2021 18:07:07 +0000 (UTC)
Received: from AM6P195CA0087.EURP195.PROD.OUTLOOK.COM (2603:10a6:209:86::28)
 by AM0PR08MB5523.eurprd08.prod.outlook.com (2603:10a6:208:17f::17) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3825.17; Mon, 8 Feb
 2021 18:07:04 +0000
Received: from VE1EUR03FT003.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:86:cafe::cf) by AM6P195CA0087.outlook.office365.com
 (2603:10a6:209:86::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3825.20 via Frontend
 Transport; Mon, 8 Feb 2021 18:07:04 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT003.mail.protection.outlook.com (10.152.18.108) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3784.11 via Frontend Transport; Mon, 8 Feb 2021 18:07:03 +0000
Received: ("Tessian outbound f362b81824dc:v71");
 Mon, 08 Feb 2021 18:07:03 +0000
Received: from 342238e23383.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 B266FBDC-0358-44F3-BD63-B6DD50E41F05.1; 
 Mon, 08 Feb 2021 18:06:41 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 342238e23383.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 08 Feb 2021 18:06:41 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com (2603:10a6:10:49::10)
 by DBBPR08MB6202.eurprd08.prod.outlook.com (2603:10a6:10:209::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3825.30; Mon, 8 Feb
 2021 18:06:39 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::f5c1:9694:9263:d90d]) by DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::f5c1:9694:9263:d90d%2]) with mapi id 15.20.3825.030; Mon, 8 Feb 2021
 18:06:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8b587996-0912-465d-8841-847419767d62
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=i0/3jX6KzFfFsjKfa2HZ9RmN8D8cy6Ubu3tZljfLVVQ=;
 b=6KwB7MDrKLFHtCt5fHA84CzhoBFciwp5XqG9wZNbiaLnR5asv5+SfS7+/nBVol+zXm00no1ayDg/zJLZGq6e36iRlaVtm2npOzSqQIPX8A4MMEWUwqdYirQ+ZTvfZeYp5isvKc8PQ8/PiPth81O6/YCZK3G6WrJhyx2cj7YgS2A=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: a0d826f09d5bafcd
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cPEN9LvjwGTXYO2lW2SagdjD2aHSAsOJRGc0WhapZ47Q/XtLcuySzWMWvbBgbI0d0dfIEt6XhCwF12l8xY6iaAgwoQ2xohc2gLxn+STZfhuvIEdipUZ6O16UJCI2wZlECWdE4yWYu4PnfxQbXY9mzCFBaq1lq1QDCvKKDtog6p7o7GKdWFIgkfY4Rav9FnXj7cKbxvILAb5JgbAJ7p6R/fKoUz1XPYF23iiTDWkUzwF2G91E9Jb6yBjOk0442z97WuChuOK9EItZ0xhBBsX7VvvAg6mODzWT6ZqIAUXKn+McyXWI9IjZ2qWaN4XVXfU2vvYaEThFpKjm0nO5U47rXg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=i0/3jX6KzFfFsjKfa2HZ9RmN8D8cy6Ubu3tZljfLVVQ=;
 b=Xi2AmpwMdwTBwnrmQRz1yAAbJa7r84XaN4/lecqy8rg2aYzZXtV8e5UmzpiBXpiI/+nnPAJVv5ijdhjTi9WS86eMUWYo8/A4kN4szAzIzCTvJvWicLBLk5KeIZvAaeqsBAwlGtqcyO7jhqtLdz7HLyBSVlGZo5ck7eH2rNJpp2ho0R+HkReJCK0duzA+86YrIv29v6/xi6WmIVX5noyu8r0FCM0xr3JtSV7UshCCmBIdmwRFG4w81Zyn/EbrjsQIXh/4NEqxogfV4LzWS9FBof4Hz1dMh5kuz5MRD+mLXHQcfrIVEca73gHxMJTt5SfZx3cLmR0nfIpDPcZRamOtzA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=i0/3jX6KzFfFsjKfa2HZ9RmN8D8cy6Ubu3tZljfLVVQ=;
 b=6KwB7MDrKLFHtCt5fHA84CzhoBFciwp5XqG9wZNbiaLnR5asv5+SfS7+/nBVol+zXm00no1ayDg/zJLZGq6e36iRlaVtm2npOzSqQIPX8A4MMEWUwqdYirQ+ZTvfZeYp5isvKc8PQ8/PiPth81O6/YCZK3G6WrJhyx2cj7YgS2A=
From: Rahul Singh <Rahul.Singh@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>, "lucmiccio@gmail.com"
	<lucmiccio@gmail.com>, xen-devel <xen-devel@lists.xenproject.org>, Bertrand
 Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: fix gnttab_need_iommu_mapping
Thread-Topic: [PATCH] xen/arm: fix gnttab_need_iommu_mapping
Thread-Index: AQHW/CCVIBhvrRIcEE2wSNDNMu6y5KpOkgeA
Date: Mon, 8 Feb 2021 18:06:39 +0000
Message-ID: <C36DCA9F-1212-4385-AE66-7D41C586A313@arm.com>
References: <alpine.DEB.2.21.2102051604320.29047@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2102051604320.29047@sstabellini-ThinkPad-T480s>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [80.1.41.211]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: ef40cdd8-359b-4000-4598-08d8cc5c56be
x-ms-traffictypediagnostic: DBBPR08MB6202:|AM0PR08MB5523:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AM0PR08MB5523CA6EAC538973308F1148FC8F9@AM0PR08MB5523.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
Content-Type: text/plain; charset="us-ascii"
Content-ID: <37ECC0657301E34FAD5F0A93C2751D92@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB6202
Original-Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT003.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	58596d6c-5e64-4614-3748-08d8cc5c486c
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 08 Feb 2021 18:07:03.5983
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: ef40cdd8-359b-4000-4598-08d8cc5c56be
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT003.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB5523

Hello Stefano,

> On 6 Feb 2021, at 12:38 am, Stefano Stabellini <sstabellini@kernel.org> wrote:
> 
> Commit 91d4eca7add broke gnttab_need_iommu_mapping on ARM.
> The offending chunk is:
> 
> #define gnttab_need_iommu_mapping(d)                    \
> -    (is_domain_direct_mapped(d) && need_iommu(d))
> +    (is_domain_direct_mapped(d) && need_iommu_pt_sync(d))
> 
> On ARM we need gnttab_need_iommu_mapping to be true for dom0 when it is
> directly mapped, like the old check did, but the new check is always
> false.
> 
> In fact, need_iommu_pt_sync is defined as dom_iommu(d)->need_sync and
> need_sync is set as:
> 
>    if ( !is_hardware_domain(d) || iommu_hwdom_strict )
>        hd->need_sync = !iommu_use_hap_pt(d);
> 
> iommu_hwdom_strict is actually supposed to be ignored on ARM, see the
> definition in docs/misc/xen-command-line.pandoc:
> 
>    This option is hardwired to true for x86 PVH dom0's (as RAM belonging to
>    other domains in the system don't live in a compatible address space), and
>    is ignored for ARM.
> 
> But aside from that, the issue is that iommu_use_hap_pt(d) is true,
> hence, hd->need_sync is false, and gnttab_need_iommu_mapping(d) is false
> too.
> 
> As a consequence, when using PV network from a domU on a system where
> IOMMU is on from Dom0, I get:
> 
> (XEN) smmu: /smmu@fd800000: Unhandled context fault: fsr=0x402, iova=0x8424cb148, fsynr=0xb0001, cb=0
> [   68.290307] macb ff0e0000.ethernet eth0: DMA bus error: HRESP not OK

I also observed the IOMMU fault when a domU guest is created and the grant
table is used while the IOMMU is enabled. I fixed the error in a different
way, but I am not sure whether you are observing the same error. I submitted
a patch to the PCI passthrough integration branch. Please have a look and
see if it makes sense.

https://gitlab.com/xen-project/fusa/xen-integration/-/commit/43a1a6ec91c4e3e28fa54dcbecdc8a2917836765

Regards,
Rahul
> 
> The fix is to go back to the old implementation of
> gnttab_need_iommu_mapping.  However, we don't even need to specify &&
> need_iommu(d) since we don't actually need to check for the IOMMU to be
> enabled (iommu_map does it for us at the beginning of the function.)
> 
> This fix is preferable to changing the implementation of need_sync or
> iommu_use_hap_pt because "need_sync" is not really the reason why we
> want gnttab_need_iommu_mapping to return true.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> Backport: 4.12+
> 
> ---
> 
> It is incredible that it was missed for this long, but it takes a full
> PV drivers test using DMA from a non-coherent device to trigger it, e.g.
> wget from a domU to an HTTP server on a different machine; ping or
> connections to dom0 won't trigger the bug.
> 
> It is interesting that given that IOMMU is on for dom0, Linux could
> have just avoided using swiotlb-xen and everything would have just
> worked. It is worth considering introducing a feature flag (e.g.
> XENFEAT_ARM_dom0_iommu) to let dom0 know that the IOMMU is on and
> swiotlb-xen is not necessary. Then Linux can avoid initializing
> swiotlb-xen and just rely on the IOMMU for translation.
> 
> diff --git a/xen/include/asm-arm/grant_table.h b/xen/include/asm-arm/grant_table.h
> index 6f585b1538..2a154d1851 100644
> --- a/xen/include/asm-arm/grant_table.h
> +++ b/xen/include/asm-arm/grant_table.h
> @@ -88,8 +88,7 @@ int replace_grant_host_mapping(unsigned long gpaddr, mfn_t mfn,
> #define gnttab_status_gfn(d, t, i)                                       \
>     (((i) >= nr_status_frames(t)) ? INVALID_GFN : (t)->arch.status_gfn[i])
> 
> -#define gnttab_need_iommu_mapping(d)                    \
> -    (is_domain_direct_mapped(d) && need_iommu_pt_sync(d))
> +#define gnttab_need_iommu_mapping(d)  (is_domain_direct_mapped(d))
> 
> #endif /* __ASM_GRANT_TABLE_H__ */
> /*
> 



From xen-devel-bounces@lists.xenproject.org Mon Feb 08 18:08:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 18:08:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83014.153692 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9AxM-0006zB-Pl; Mon, 08 Feb 2021 18:08:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83014.153692; Mon, 08 Feb 2021 18:08:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9AxM-0006z4-MG; Mon, 08 Feb 2021 18:08:16 +0000
Received: by outflank-mailman (input) for mailman id 83014;
 Mon, 08 Feb 2021 18:08:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9AxL-0006yv-HT; Mon, 08 Feb 2021 18:08:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9AxL-0001OS-C5; Mon, 08 Feb 2021 18:08:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9AxL-0002CQ-52; Mon, 08 Feb 2021 18:08:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l9AxL-0002DF-4Z; Mon, 08 Feb 2021 18:08:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=yfPwXCOT1uUuzNW/xljQ6njEj14WCMsoZPQphjDUEeE=; b=VJJFNmKjQtD95Bc5Kj2cxdkmwq
	k9e6enJ3qDlPfM/TtOHMlaqwm9OohHoZWTy/f6chs0x96eoqRYpqEt6vLDc2M/EQsXpCUx5AOS9A0
	NEWMPc7lFKLcJwNCJRswTsQxswhGnallRo8NKJnDpU9ooY7gShDoz1h/lQpTUaDQ+Wwo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159140-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 159140: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=c973c38c2722b44396e94b64655a909330c631e7
X-Osstest-Versions-That:
    xen=ca82d3fecc93745ee17850a609ac7772bd7c8bf7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 08 Feb 2021 18:08:15 +0000

flight 159140 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159140/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  c973c38c2722b44396e94b64655a909330c631e7
baseline version:
 xen                  ca82d3fecc93745ee17850a609ac7772bd7c8bf7

Last test of basis   159054  2021-02-05 20:01:29 Z    2 days
Failing since        159138  2021-02-08 13:00:32 Z    0 days    2 attempts
Testing same since   159140  2021-02-08 15:01:34 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   ca82d3fecc..c973c38c27  c973c38c2722b44396e94b64655a909330c631e7 -> smoke


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 18:11:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 18:11:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83019.153708 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9B0r-0007va-A2; Mon, 08 Feb 2021 18:11:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83019.153708; Mon, 08 Feb 2021 18:11:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9B0r-0007vT-6z; Mon, 08 Feb 2021 18:11:53 +0000
Received: by outflank-mailman (input) for mailman id 83019;
 Mon, 08 Feb 2021 18:11:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1l9B0q-0007vO-2C
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 18:11:52 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l9B0p-0001SN-Nl; Mon, 08 Feb 2021 18:11:51 +0000
Received: from [54.239.6.177] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l9B0p-0001zY-GN; Mon, 08 Feb 2021 18:11:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=16J+I5cfm/rW7vorBzMATV6uGmHYzVKAlCaT4tFrYUE=; b=FrnUf+gsjPkzNjMtuu52kS84SF
	7PLg/Ee2/sRlqmWUXo6JgqR/+Cd4oAwnbMjF5mQxTzohDkkY7ElDIVF9P//Z9IrBXZ/LYDzmg2m+q
	wyGzLYNniQhr8twzORDWrUN8v0ovjoQq74tSJxSegl9aIBQXArdQZqAXix9p7u01TBMc=;
Subject: Re: [PATCH] xen/arm: fix gnttab_need_iommu_mapping
To: Rahul Singh <Rahul.Singh@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: "lucmiccio@gmail.com" <lucmiccio@gmail.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <alpine.DEB.2.21.2102051604320.29047@sstabellini-ThinkPad-T480s>
 <C36DCA9F-1212-4385-AE66-7D41C586A313@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <7e963696-a21f-4c79-5f35-a342982bee86@xen.org>
Date: Mon, 8 Feb 2021 18:11:49 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <C36DCA9F-1212-4385-AE66-7D41C586A313@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 08/02/2021 18:06, Rahul Singh wrote:
>> On 6 Feb 2021, at 12:38 am, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>
>> Commit 91d4eca7add broke gnttab_need_iommu_mapping on ARM.
>> The offending chunk is:
>>
>> #define gnttab_need_iommu_mapping(d)                    \
>> -    (is_domain_direct_mapped(d) && need_iommu(d))
>> +    (is_domain_direct_mapped(d) && need_iommu_pt_sync(d))
>>
>> On ARM we need gnttab_need_iommu_mapping to be true for dom0 when it is
>> directly mapped, like the old check did, but the new check is always
>> false.
>>
>> In fact, need_iommu_pt_sync is defined as dom_iommu(d)->need_sync and
>> need_sync is set as:
>>
>>     if ( !is_hardware_domain(d) || iommu_hwdom_strict )
>>         hd->need_sync = !iommu_use_hap_pt(d);
>>
>> iommu_hwdom_strict is actually supposed to be ignored on ARM, see the
>> definition in docs/misc/xen-command-line.pandoc:
>>
>>     This option is hardwired to true for x86 PVH dom0's (as RAM belonging to
>>     other domains in the system don't live in a compatible address space), and
>>     is ignored for ARM.
>>
>> But aside from that, the issue is that iommu_use_hap_pt(d) is true,
>> hence, hd->need_sync is false, and gnttab_need_iommu_mapping(d) is false
>> too.
>>
>> As a consequence, when using PV network from a domU on a system where
>> IOMMU is on from Dom0, I get:
>>
>> (XEN) smmu: /smmu@fd800000: Unhandled context fault: fsr=0x402, iova=0x8424cb148, fsynr=0xb0001, cb=0
>> [   68.290307] macb ff0e0000.ethernet eth0: DMA bus error: HRESP not OK
> 
> I also observed the IOMMU fault when a domU guest is created and the grant table is used while the IOMMU is enabled. I fixed the error in a different way, but I am not sure if you are observing the same error. I submitted the patch to the pci-passthrough integration branch. Please have a look and see if it makes sense.

I believe this is the same error as Stefano has observed. However, your 
patch will unfortunately not work if you have a system with a mix of 
protected and non-protected DMA-capable devices.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 18:19:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 18:19:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83022.153720 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9B83-00088C-4z; Mon, 08 Feb 2021 18:19:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83022.153720; Mon, 08 Feb 2021 18:19:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9B83-000885-0m; Mon, 08 Feb 2021 18:19:19 +0000
Received: by outflank-mailman (input) for mailman id 83022;
 Mon, 08 Feb 2021 18:19:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oudU=HK=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1l9B81-000880-Ke
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 18:19:17 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:7d00::604])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f4bf4009-d991-48c1-b68a-043b25c0aca8;
 Mon, 08 Feb 2021 18:19:15 +0000 (UTC)
Received: from AM6PR04CA0051.eurprd04.prod.outlook.com (2603:10a6:20b:f0::28)
 by AM8PR08MB6323.eurprd08.prod.outlook.com (2603:10a6:20b:354::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3825.20; Mon, 8 Feb
 2021 18:19:13 +0000
Received: from VE1EUR03FT016.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:f0:cafe::ec) by AM6PR04CA0051.outlook.office365.com
 (2603:10a6:20b:f0::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3825.20 via Frontend
 Transport; Mon, 8 Feb 2021 18:19:13 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT016.mail.protection.outlook.com (10.152.18.115) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3784.11 via Frontend Transport; Mon, 8 Feb 2021 18:19:12 +0000
Received: ("Tessian outbound 587c3d093005:v71");
 Mon, 08 Feb 2021 18:19:12 +0000
Received: from 8a16fee07ad1.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 4DE35D99-9F4A-4043-9DAF-E1415F98B7A3.1; 
 Mon, 08 Feb 2021 18:19:07 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 8a16fee07ad1.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 08 Feb 2021 18:19:07 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com (2603:10a6:10:49::10)
 by DB6PR0802MB2151.eurprd08.prod.outlook.com (2603:10a6:4:84::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3825.19; Mon, 8 Feb
 2021 18:19:04 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::f5c1:9694:9263:d90d]) by DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::f5c1:9694:9263:d90d%2]) with mapi id 15.20.3825.030; Mon, 8 Feb 2021
 18:19:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f4bf4009-d991-48c1-b68a-043b25c0aca8
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=UkXB6+yAzqlmMcIig7zwuWmEpB7pxu6engCf19nVC9Y=;
 b=50ysvvDuqLlYJDWHzE46oA9cwSRVTWebmKt6kQjrCSuixX5wItLIOnO//leRr9YN4K00LKHJJGYFMh6p7hHYVqRTeYii7cg01ci2Uw7z5NmaGa4bKRSJ0S/rDcPU3hbHmMxU/GLsxY2ESng/wi6ePdji4SEKBnpY5eb96fAnNH0=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: cbec2b7e418bc4b3
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Rahul Singh <Rahul.Singh@arm.com>
To: Julien Grall <julien@xen.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, "lucmiccio@gmail.com"
	<lucmiccio@gmail.com>, xen-devel <xen-devel@lists.xenproject.org>, Bertrand
 Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: fix gnttab_need_iommu_mapping
Thread-Topic: [PATCH] xen/arm: fix gnttab_need_iommu_mapping
Thread-Index: AQHW/CCVIBhvrRIcEE2wSNDNMu6y5KpOkgeAgAABdICAAAIFgA==
Date: Mon, 8 Feb 2021 18:19:04 +0000
Message-ID: <3EEEACBE-2028-4DE9-A3BD-053FF82CFC75@arm.com>
References: <alpine.DEB.2.21.2102051604320.29047@sstabellini-ThinkPad-T480s>
 <C36DCA9F-1212-4385-AE66-7D41C586A313@arm.com>
 <7e963696-a21f-4c79-5f35-a342982bee86@xen.org>
In-Reply-To: <7e963696-a21f-4c79-5f35-a342982bee86@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [80.1.41.211]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 74da05d7-245f-40de-de3a-08d8cc5e0973
x-ms-traffictypediagnostic: DB6PR0802MB2151:|AM8PR08MB6323:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AM8PR08MB63239C92B57A5010F00F8ECFFC8F9@AM8PR08MB6323.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
Content-Type: text/plain; charset="us-ascii"
Content-ID: <87FAC6D2CAA15345911254B34322B9D9@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0802MB2151
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT016.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	e64ac90a-c1db-4aba-41b0-08d8cc5e0492
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 08 Feb 2021 18:19:12.8746
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 74da05d7-245f-40de-de3a-08d8cc5e0973
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT016.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR08MB6323

Hello Julien,

> On 8 Feb 2021, at 6:11 pm, Julien Grall <julien@xen.org> wrote:
>
>
>
> On 08/02/2021 18:06, Rahul Singh wrote:
>>> On 6 Feb 2021, at 12:38 am, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>>
>>> Commit 91d4eca7add broke gnttab_need_iommu_mapping on ARM.
>>> The offending chunk is:
>>>
>>> #define gnttab_need_iommu_mapping(d)                    \
>>> -    (is_domain_direct_mapped(d) && need_iommu(d))
>>> +    (is_domain_direct_mapped(d) && need_iommu_pt_sync(d))
>>>
>>> On ARM we need gnttab_need_iommu_mapping to be true for dom0 when it is
>>> directly mapped, like the old check did, but the new check is always
>>> false.
>>>
>>> In fact, need_iommu_pt_sync is defined as dom_iommu(d)->need_sync and
>>> need_sync is set as:
>>>
>>>    if ( !is_hardware_domain(d) || iommu_hwdom_strict )
>>>        hd->need_sync = !iommu_use_hap_pt(d);
>>>
>>> iommu_hwdom_strict is actually supposed to be ignored on ARM, see the
>>> definition in docs/misc/xen-command-line.pandoc:
>>>
>>>    This option is hardwired to true for x86 PVH dom0's (as RAM belonging to
>>>    other domains in the system don't live in a compatible address space), and
>>>    is ignored for ARM.
>>>
>>> But aside from that, the issue is that iommu_use_hap_pt(d) is true,
>>> hence, hd->need_sync is false, and gnttab_need_iommu_mapping(d) is false
>>> too.
>>>
>>> As a consequence, when using PV network from a domU on a system where
>>> IOMMU is on from Dom0, I get:
>>>
>>> (XEN) smmu: /smmu@fd800000: Unhandled context fault: fsr=0x402, iova=0x8424cb148, fsynr=0xb0001, cb=0
>>> [   68.290307] macb ff0e0000.ethernet eth0: DMA bus error: HRESP not OK
>> I also observed the IOMMU fault when a domU guest is created and the grant table is used while the IOMMU is enabled. I fixed the error in a different way, but I am not sure if you are observing the same error. I submitted the patch to the pci-passthrough integration branch. Please have a look and see if it makes sense.
>
> I believe this is the same error as Stefano has observed. However, your patch will unfortunately not work if you have a system with a mix of protected and non-protected DMA-capable devices.

Yes, you are right. That's what I thought when I fixed the error, but then I thought in a different direction: if the IOMMU is enabled system-wide, every device should be protected by the IOMMU. My understanding might be wrong.

Regards,
Rahul

> Cheers,
>
> --
> Julien Grall



From xen-devel-bounces@lists.xenproject.org Mon Feb 08 18:49:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 18:49:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83033.153735 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Bb9-0002aq-Gx; Mon, 08 Feb 2021 18:49:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83033.153735; Mon, 08 Feb 2021 18:49:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Bb9-0002aj-E0; Mon, 08 Feb 2021 18:49:23 +0000
Received: by outflank-mailman (input) for mailman id 83033;
 Mon, 08 Feb 2021 18:49:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1l9Bb7-0002ae-QT
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 18:49:21 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l9Bb7-00023V-EX; Mon, 08 Feb 2021 18:49:21 +0000
Received: from [54.239.6.177] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l9Bb7-0003uw-78; Mon, 08 Feb 2021 18:49:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=uoF8bUpJutlkFjnTMxiVf+Z7Q1WnD2lsQYRfAefm6VM=; b=KoxPTe+VS95MxpSvL31HtYeA9n
	4NzeCjdzwVYST6K7zFeE4uzgrrcqp8DSVDP0dGMhvvLPQ1cM2SINwU7LYY+TozxltwDVWN55Tqma9
	PbB7tIB4O7BFk8UzfE0rOuwyjROkJcptXTCYLnFgL86tkE3FuTa+UgexdFlBGgqx0YwQ=;
Subject: Re: [PATCH] xen/arm: fix gnttab_need_iommu_mapping
To: Rahul Singh <Rahul.Singh@arm.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 "lucmiccio@gmail.com" <lucmiccio@gmail.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <alpine.DEB.2.21.2102051604320.29047@sstabellini-ThinkPad-T480s>
 <C36DCA9F-1212-4385-AE66-7D41C586A313@arm.com>
 <7e963696-a21f-4c79-5f35-a342982bee86@xen.org>
 <3EEEACBE-2028-4DE9-A3BD-053FF82CFC75@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <bc7c3b13-88da-691f-8094-75502f06e882@xen.org>
Date: Mon, 8 Feb 2021 18:49:19 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <3EEEACBE-2028-4DE9-A3BD-053FF82CFC75@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 08/02/2021 18:19, Rahul Singh wrote:
> Hello Julien,

Hi Rahul,

> 
>> On 8 Feb 2021, at 6:11 pm, Julien Grall <julien@xen.org> wrote:
>>
>>
>>
>> On 08/02/2021 18:06, Rahul Singh wrote:
>>>> On 6 Feb 2021, at 12:38 am, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>>>
>>>> Commit 91d4eca7add broke gnttab_need_iommu_mapping on ARM.
>>>> The offending chunk is:
>>>>
>>>> #define gnttab_need_iommu_mapping(d)                    \
>>>> -    (is_domain_direct_mapped(d) && need_iommu(d))
>>>> +    (is_domain_direct_mapped(d) && need_iommu_pt_sync(d))
>>>>
>>>> On ARM we need gnttab_need_iommu_mapping to be true for dom0 when it is
>>>> directly mapped, like the old check did, but the new check is always
>>>> false.
>>>>
>>>> In fact, need_iommu_pt_sync is defined as dom_iommu(d)->need_sync and
>>>> need_sync is set as:
>>>>
>>>>     if ( !is_hardware_domain(d) || iommu_hwdom_strict )
>>>>         hd->need_sync = !iommu_use_hap_pt(d);
>>>>
>>>> iommu_hwdom_strict is actually supposed to be ignored on ARM, see the
>>>> definition in docs/misc/xen-command-line.pandoc:
>>>>
>>>>     This option is hardwired to true for x86 PVH dom0's (as RAM belonging to
>>>>     other domains in the system don't live in a compatible address space), and
>>>>     is ignored for ARM.
>>>>
>>>> But aside from that, the issue is that iommu_use_hap_pt(d) is true,
>>>> hence, hd->need_sync is false, and gnttab_need_iommu_mapping(d) is false
>>>> too.
>>>>
>>>> As a consequence, when using PV network from a domU on a system where
>>>> IOMMU is on from Dom0, I get:
>>>>
>>>> (XEN) smmu: /smmu@fd800000: Unhandled context fault: fsr=0x402, iova=0x8424cb148, fsynr=0xb0001, cb=0
>>>> [   68.290307] macb ff0e0000.ethernet eth0: DMA bus error: HRESP not OK
>>> I also observed the IOMMU fault when a domU guest is created and the grant table is used while the IOMMU is enabled. I fixed the error in a different way, but I am not sure if you are observing the same error. I submitted the patch to the pci-passthrough integration branch. Please have a look and see if it makes sense.
>>
>> I believe this is the same error as Stefano has observed. However, your patch will unfortunately not work if you have a system with a mix of protected and non-protected DMA-capable devices.
> 
> Yes, you are right. That's what I thought when I fixed the error, but then I thought in a different direction: if the IOMMU is enabled system-wide, every device should be protected by the IOMMU.
I am not aware of any rule preventing a mix of protected and unprotected 
DMA-capable devices.

However, even if they are all protected by an IOMMU, some of the IOMMUs 
may have been disabled by the firmware tables for various reasons (e.g. 
performance, buggy SMMU...). For instance, this is the case on Juno 
where 2 out of 3 SMMUs are disabled in the Linux upstream DT.

As we don't know which device will use the grant for DMA, we always need 
to return the machine physical address.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 18:49:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 18:49:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83034.153747 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9BbM-0002eS-Ut; Mon, 08 Feb 2021 18:49:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83034.153747; Mon, 08 Feb 2021 18:49:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9BbM-0002eJ-QE; Mon, 08 Feb 2021 18:49:36 +0000
Received: by outflank-mailman (input) for mailman id 83034;
 Mon, 08 Feb 2021 18:49:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=W+r8=HK=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1l9BbM-0002dz-52
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 18:49:36 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c44e7bbf-4292-451f-813e-4dd408d60eb7;
 Mon, 08 Feb 2021 18:49:35 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id E6DA464E73;
 Mon,  8 Feb 2021 18:49:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c44e7bbf-4292-451f-813e-4dd408d60eb7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1612810174;
	bh=x3jT2WCNesc6xdYA2+YmBtliItPWwumDdnPHxTeTnlk=;
	h=From:To:Cc:Subject:Date:From;
	b=eh9dMUAYisypWCgBOwzHM2vf/y9QoxFtA5lIEib9mKE+OuwvWvHDp0T5yEL3mSuHb
	 +Kc+Iuemxh6KO3fWkKg60XwLJh9n/i5wDJXe9NLFvUzwewI08zozSFXg0gQumN5Flb
	 wXBJ6jLMb7ZgsP0vwFYnH2M1yjCgXKLGJWJ+dhDKFYoto6vSFOTmBh4N6UxGYpuoRk
	 fzMDGKOTq/dgeisZr/Bj1DQcKVthDInE+6Y9LCElUgCMnPOMWSDO9D6KJxlkfPWNJb
	 x0ZwmlmpnlS6Anh4nbhFw/DlEPHtlw3OXyqvrGGBytsNt2utR7cmLhzflnQOC1nK0h
	 gye6MzoNk4Srw==
From: Stefano Stabellini <sstabellini@kernel.org>
To: julien@xen.org
Cc: lucmiccio@gmail.com,
	sstabellini@kernel.org,
	xen-devel@lists.xenproject.org,
	Bertrand.Marquis@arm.com,
	Volodymyr_Babchuk@epam.com,
	Rahul.Singh@arm.com,
	Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: [PATCH v2] xen/arm: fix gnttab_need_iommu_mapping
Date: Mon,  8 Feb 2021 10:49:32 -0800
Message-Id: <20210208184932.23468-1-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1

Commit 91d4eca7add broke gnttab_need_iommu_mapping on ARM.
The offending chunk is:

 #define gnttab_need_iommu_mapping(d)                    \
-    (is_domain_direct_mapped(d) && need_iommu(d))
+    (is_domain_direct_mapped(d) && need_iommu_pt_sync(d))

On ARM we need gnttab_need_iommu_mapping to be true for dom0 when it is
directly mapped and IOMMU is enabled for the domain, like the old check
did, but the new check is always false.

In fact, need_iommu_pt_sync is defined as dom_iommu(d)->need_sync and
need_sync is set as:

    if ( !is_hardware_domain(d) || iommu_hwdom_strict )
        hd->need_sync = !iommu_use_hap_pt(d);

iommu_use_hap_pt(d) means that the page-table used by the IOMMU is the
P2M. It is true on ARM. need_sync means that you have a separate IOMMU
page-table and it needs to be updated for every change. need_sync is set
to false on ARM. Hence, gnttab_need_iommu_mapping(d) is false too,
which is wrong.

As a consequence, when using PV network from a domU on a system where
IOMMU is on from Dom0, I get:

(XEN) smmu: /smmu@fd800000: Unhandled context fault: fsr=0x402, iova=0x8424cb148, fsynr=0xb0001, cb=0
[   68.290307] macb ff0e0000.ethernet eth0: DMA bus error: HRESP not OK

The fix is to go back to something along the lines of the old
implementation of gnttab_need_iommu_mapping.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
Fixes: 91d4eca7add
Backport: 4.12+

---

Given the severity of the bug, I would like to request this patch to be
backported to 4.12 too, even if 4.12 is security-fixes only since Oct
2020.

For the 4.12 backport, we can use iommu_enabled() instead of
is_iommu_enabled() in the implementation of gnttab_need_iommu_mapping.

Changes in v2:
- improve commit message
- add is_iommu_enabled(d) to the check
---
 xen/include/asm-arm/grant_table.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/include/asm-arm/grant_table.h b/xen/include/asm-arm/grant_table.h
index 6f585b1538..0ce77f9a1c 100644
--- a/xen/include/asm-arm/grant_table.h
+++ b/xen/include/asm-arm/grant_table.h
@@ -89,7 +89,7 @@ int replace_grant_host_mapping(unsigned long gpaddr, mfn_t mfn,
     (((i) >= nr_status_frames(t)) ? INVALID_GFN : (t)->arch.status_gfn[i])
 
 #define gnttab_need_iommu_mapping(d)                    \
-    (is_domain_direct_mapped(d) && need_iommu_pt_sync(d))
+    (is_domain_direct_mapped(d) && is_iommu_enabled(d))
 
 #endif /* __ASM_GRANT_TABLE_H__ */
 /*
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Feb 08 18:54:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 18:54:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83037.153759 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Bfw-0003Z1-G3; Mon, 08 Feb 2021 18:54:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83037.153759; Mon, 08 Feb 2021 18:54:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Bfw-0003Yu-Cp; Mon, 08 Feb 2021 18:54:20 +0000
Received: by outflank-mailman (input) for mailman id 83037;
 Mon, 08 Feb 2021 18:54:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=pnXs=HK=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1l9Bfv-0003Yp-GL
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 18:54:19 +0000
Received: from mo6-p00-ob.smtp.rzone.de (unknown [2a01:238:400:100::c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 539bb791-a89c-489f-be91-2ab650428b4e;
 Mon, 08 Feb 2021 18:54:18 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.17.1 DYNA|AUTH)
 with ESMTPSA id 005e38x18IsB5fl
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Mon, 8 Feb 2021 19:54:11 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 539bb791-a89c-489f-be91-2ab650428b4e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1612810457;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
	From:Subject:Sender;
	bh=RxmIgK2V27/s8zTeUW93U4Nul9Kl0sVucrmJEH1JL28=;
	b=Jgc5Ap7BfUWzw76steZioMq7lVkhkzApurSXqLXOax6dxuh6IlqxJXiiWcNLIA8nmd
	sm7eZqgfQtryoIFfQksdM3UUBODw/WyaSisIJ0PI5Y5T3lS0eaB7wQT/GGUvNgKGUih2
	HoMD6QtHFro8sAhBkmR9pL7JgUAie5ZQKoS1nm/TcCwA9Vtte/uw15LJ/FexNN8ekYYi
	9as9gqhSGl7uIDzhlaZtrx9sDBhRHzbFyO2ZreArCDszYTY1Y49S4EhPy8lHtfFz8Nb1
	qKW4gbCT6m5IuSdz1cmB1U986763b4XZWCtDGbxL0WpwdRSeDiNbm3tm/KHhMR3xZrOL
	Jy+Q==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDXdoX8l8pYAcz5OTW+v4/A=="
X-RZG-CLASS-ID: mo00
Date: Mon, 8 Feb 2021 19:53:57 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Ian Jackson <iwj@xenproject.org>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>, Anthony PERARD
 <anthony.perard@citrix.com>
Subject: Re: [PATCH v20210111 07/39] xl: optionally print timestamps during
 xl migrate
Message-ID: <20210208195357.0d4b21c0.olaf@aepfle.de>
In-Reply-To: <24609.25950.629059.164010@mariner.uk.xensource.com>
References: <20210111174224.24714-1-olaf@aepfle.de>
	<20210111174224.24714-8-olaf@aepfle.de>
	<24609.25950.629059.164010@mariner.uk.xensource.com>
X-Mailer: Claws Mail 2020.08.19 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
 boundary="Sig_/JG.jPVPCNEr8FFhEdl=QKgz"; protocol="application/pgp-signature"

--Sig_/JG.jPVPCNEr8FFhEdl=QKgz
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

On Mon, 8 Feb 2021 16:22:54 +0000,
Ian Jackson <iwj@xenproject.org> wrote:

> With my maintainer hat on: this is a useful development but I am not
> sure about the choice of -T, and the choice to make this a
> migrate-specific option.  Most unix things that have an option to
> print timestamps use `-t`.  But we have already stolen `-t` as a
> global option for "print CR as well as LF".  Hrmf.

Was 'xl -t' intentionally left out of xl.c:help()?
It is only mentioned in the xl man page.

If yes, my change will also omit it.
If no, I will add it along with the global -T option.


Olaf

--Sig_/JG.jPVPCNEr8FFhEdl=QKgz
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmAhiMUACgkQ86SN7mm1
DoA/wg//QY40tppMn5FtjWP0cfgFS0bHuDadl7qtpMW3LM9LlFejCVRRBWyRucxU
tJ2mtvGOro55xvg3TOZxUva0acJGKnHnhU0rs38rARgFmqJyKJxIRH+JSe/bFWYE
+2Zn2F8B1lEon5MsNga6Pg/HHhp+d0BuOmhzJJAlTQSlMl/tiBueKg1U+RnGh7EF
XYjlipYrjotuBOCo/PhCUbLw3Xn651T1Adn2Q8n2161974VqXbV1Er5fKQbiOkDE
J97k0FgVL3rkC10LOppMWU9HAGbTaxRpcfQHs6yF37vxdNUlDKqz8anDjDQaRpNg
VonhwgSH9J63A84/lgzBxSEG7eNetBO8ahqyARcHriMXP95JPRGPOobkE7dCf3AP
zRpQ6h89rOesSD+dFVGKDdQjLN38qtUaVK6VRoxOXEPi45wAAv5ozvz97zQcuFX3
AcMIR+o1U/FWfmg5Jf63Cy7otwJ1ftGvG8JUkYZrODr4pHclS45cjUtBUpTKqUZb
1mF6iGbSYPe9wp6wYIyGWgQstvqhRpPfT4Q48HYKe5wI+Y+7/Sze+FovaIdxgUzl
fsAjilAOhU5T9HFQ1YPzZT3o5R27fIBmaxtx9+vDtst5dPz1txVF8nu2nBZeGGe4
Cy+OWdhsa6WpYBmfgneueMncyIj+gM2B12Wb1fhOJl9FlP4/XyA=
=Vlcx
-----END PGP SIGNATURE-----

--Sig_/JG.jPVPCNEr8FFhEdl=QKgz--


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 19:49:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 19:49:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83044.153771 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9CXM-0008HJ-LA; Mon, 08 Feb 2021 19:49:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83044.153771; Mon, 08 Feb 2021 19:49:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9CXM-0008HC-G3; Mon, 08 Feb 2021 19:49:32 +0000
Received: by outflank-mailman (input) for mailman id 83044;
 Mon, 08 Feb 2021 19:49:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/EF1=HK=amazon.de=prvs=666814ed0=nmanthey@srs-us1.protection.inumbo.net>)
 id 1l9CXK-0008H7-PI
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 19:49:30 +0000
Received: from smtp-fw-6001.amazon.com (unknown [52.95.48.154])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 15c1ae38-4462-4725-86f6-c6f822125455;
 Mon, 08 Feb 2021 19:49:29 +0000 (UTC)
Received: from iad12-co-svc-p1-lb1-vlan2.amazon.com (HELO
 email-inbound-relay-2a-538b0bfb.us-west-2.amazon.com) ([10.43.8.2])
 by smtp-border-fw-out-6001.iad6.amazon.com with ESMTP;
 08 Feb 2021 19:49:23 +0000
Received: from EX13D02EUB001.ant.amazon.com
 (pdx1-ws-svc-p6-lb9-vlan3.pdx.amazon.com [10.236.137.198])
 by email-inbound-relay-2a-538b0bfb.us-west-2.amazon.com (Postfix) with ESMTPS
 id B9E62A1CF1; Mon,  8 Feb 2021 19:47:31 +0000 (UTC)
Received: from u6fc700a6f3c650.ant.amazon.com (10.43.161.244) by
 EX13D02EUB001.ant.amazon.com (10.43.166.150) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Mon, 8 Feb 2021 19:47:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 15c1ae38-4462-4725-86f6-c6f822125455
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.de; i=@amazon.de; q=dns/txt; s=amazon201209;
  t=1612813770; x=1644349770;
  h=to:cc:references:from:message-id:date:mime-version:
   in-reply-to:content-transfer-encoding:subject;
  bh=jFTCvyRrIRBWVv+qLk094m7iBYBzlJUP1rVc2LoEbvg=;
  b=JcCONl0uy7w4Up0kxyJl9UcWRCPay/Co+v3ZmP7xrW3zRw+dMDoXH+PO
   VZFXrE1DPRLKs8mUwFmdXWpvefXDNca0e3sPpsilXULFXW3gcmYWqKtcL
   +1glbgG2FbVBaHRJqFgDQfa16AOuw1YbOvesYzctfQokU5a89RrRQDZ79
   A=;
X-IronPort-AV: E=Sophos;i="5.81,163,1610409600"; 
   d="scan'208";a="84732560"
Subject: Re: [PATCH HVM v2 1/1] hvm: refactor set param
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Ian Jackson <iwj@xenproject.org>,
	<xen-devel@lists.xenproject.org>
References: <20210205203905.8824-1-nmanthey@amazon.de>
 <edf1cd78-2192-2679-9a34-804c5d7b75c5@suse.com>
From: Norbert Manthey <nmanthey@amazon.de>
Autocrypt: addr=nmanthey@amazon.de; prefer-encrypt=mutual; keydata=
 xsFNBFoJQc0BEADM8Z7hB7AnW6ErbSMsYkKh4HLAPfoM+wt7Fd7axHurcOgFJEBOY2gz0isR
 /EDiGxYyTgxt5PZHJIfra0OqXRbWuLltbjhJACbu35eaAo8UM4/awgtYx3O1UCbIlvHGsYDg
 kXjF8bBrVjPu0+g55XizX6ot/YPAgmWTdH8qXoLYVZVWJilKlTqpYEVvarSn/BVgCbIsQIps
 K93sOTN9eJKDSqHvbkgKl9XG3WsZ703431egIpIZpfN0zZtzumdZONb7LiodcFHJ717vvd89
 3Hv2bYv8QLSfYsZcSnyU0NVzbPhb1WtaduwXwNmnX1qHJuExzr8EnRT1pyhVSqouxt+xkKbV
 QD9r+cWLChumg3g9bDLzyrOTlEfAUNxIqbzSt03CRR43dWgfgGiLDcrqC2b1QR886WDpz4ok
 xX3fdLaqN492s/3c59qCGNG30ebAj8AbV+v551rsfEba+IWTvvoQnbstc6vKJCc2uG8rom5o
 eHG/bP1Ug2ht6m/0uWRyFq9C27fpU9+FDhb0ZsT4UwOCbthe35/wBZUg72zDpT/h5lm64G6C
 0TRqYRgYcltlP705BJafsymmAXOZ1nTCuXnYAB9G9LzZcKKq5q0rP0kp7KRDbniirCUfp7jK
 VpPCOUEc3tS1RdCCSeWNuVgzLnJdR8W2h9StuEbb7hW4aFhwRQARAQABzSROb3JiZXJ0IE1h
 bnRoZXkgPG5tYW50aGV5QGFtYXpvbi5kZT7CwX0EEwEIACcFAloJQc0CGyMFCQlmAYAFCwkI
 BwIGFQgJCgsCBBYCAwECHgECF4AACgkQZ+8yS8zN62ajmQ/6AlChoY5UlnUaH/jgcabyAfUC
 XayHgCcpL1SoMKvc2rCA8PF0fza3Ep2Sw0idLqC/LyAYbI6gMYavSZsLcsvY6KYAZKeaEriG
 7R6cSdrbmRcKpPjwvv4iY6G0DBTeaqfNjGe1ECY8u522LprDQVquysJIf3YaEyxoK/cLSb0c
 kjzpqI1P9Vh+8BQb5H9gWpakbhFIwbRGHdAF1roT7tezmEshFS0IURJ2ZFEI+ZgWgtl1MBwN
 sBt65im7x5gDo25h8A5xC9gLXTc4j3tk+3huaZjUJ9mCbtI12djVtspjNvDyUPQ5Mxw2Jwar
 C3/ZC+Nkb+VlymmErpnEUZNltcq8gsdYND4TlNbZ2JhD0ibiYFQPkyuCVUiVtimXfh6po9Yt
 OkE0DIgEngxMYfTTx01Zf6iwrbi49eHd/eQQw3zG5nn+yZsEG8UcP1SCrUma8p93KiKOedoL
 n43kTg4RscdZMjj4v6JkISBcGTR4uotMYP4M0zwjklnFXPmrZ6/E5huzUpH9B7ZIe/SUu8Ur
 xww/4dN6rfqbNzMxmya8VGlEQZgUMWcck+cPrRLB09ZOk4zq9i/yaHDEZA1HNOfQ9UCevXV5
 7seXSX7PCY6WDAdsT3+FuaoQ7UoWN3rdpb+064QKZ0FsHeGzUd7MZtlgU4EKrh25mTSNZYRs
 nTz2zT/J33fOwU0EWglBzQEQAKioD1gSELj3Y47NE11oPkzWWdxKZdVr8B8VMu6nVAAGFRSf
 Dms4ZmwGY27skMmOH2srnZyTfm9FaTKr8RI+71Fh9nfB9PMmwzA7OIY9nD73/HqPywzTTleG
 MlALmnuY6xFRSDmqmvxDHgWyzB4TgPWt8+hW3+TJKCx2RgLAdSuULZla4lia+NlS8WNRUDGK
 sFJCCB3BW5I/cocfpBEUqLbbmnPuD9UKpEnFcYWD9YaDNcBTjSc7iDsvtpdrBXg5VETOz/TQ
 /CmVs9h/5zug8O4bXxHEEJpCAxs4cGKxowBqx/XJfkwdWeo/LdaeR+LRbXvq4A32HSkyj9sV
 vygwt2OFEk493JGik8qtAA/oPvuqVPJGacxmZ7zKR12c0mnKCHiexFJzFbC7MSiUhhe8nNiM
 p6Sl6EZmsTUXhV2bd2M12Bqcss3TTJ1AcW04T4HYHVCSxwl0dVfcf3TIaH0BSPiwFxz0FjMk
 10umoRvUhYYoYpPFCz8dujXBlfB8q2tnHltEfoi/EIptt1BMNzTYkHKArj8Fwjf6K+nQ3a8p
 1cWfkYpA5bRqbhbplzpa0u1Ex0hZk6pka0qcVgqmH31O2OcSsqeKfUfHkzj3Q6dmuwm1je/f
 HWH9N1gDPEp1RB5bIxPnOG1Z4SNl9oVQJhc4qoJiqbvkciivYcH7u2CBkboFABEBAAHCwWUE
 GAEIAA8FAloJQc0CGwwFCQlmAYAACgkQZ+8yS8zN62YU9Q//WTnN28aBX1EhDidVho80Ql2b
 tV1cDRh/vWTcM4qoM8vzW4+F/Ive6wDVAJ7zwAv8F8WPzy+acxtHLkyYk14M6VZ1eSy0kV0+
 RZQdQ+nPtlb1MoDKw2N5zhvs8A+WD8xjDIA9i21hQ/BNILUBINuYKyR19448/41szmYIEhuJ
 R2fHoLzNdXNKWQnN3/NPTuvpjcrkXKJm2k32qfiys9KBcZX8/GpuMCc9hMuymzOr+YlXo4z4
 1xarEJoPOQOXnrmxN4Y30/qmf70KHLZ0GQccIm/o/XSOvNGluaYv0ZVJXHoCnYvTbi0eYvz5
 OfOcndqLOfboq9kVHC6Yye1DLNGjIVoShJGSsphxOx2ryGjHwhzqDrLiRkV82gh6dUHKxBWd
 DXfirT8a4Gz/tY9PMxan67aSxQ5ONpXe7g7FrfrAMe91XRTf50G3rHb8+AqZfxZJFrBn+06i
 p1cthq7rJSlYCqna2FedTUT+tK1hU9O0aK4ZYYcRzuTRxjd4gKAWDzJ1F/MQ12ftrfCAvs7U
 sVbXv2TndGIleMnheYv1pIrXEm0+sdz5v91l2/TmvkyyWT8s2ksuZis9luh+OubeLxHq090C
 hfavI9WxhitfYVsfo2kr3EotGG1MnW+cOkCIX68w+3ZS4nixZyJ/TBa7RcTDNr+gjbiGMtd9
 pEddsOqYwOs=
Message-ID: <ba146cd6-fd5a-78d8-40bc-59885d265f5f@amazon.de>
Date: Mon, 8 Feb 2021 20:47:23 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <edf1cd78-2192-2679-9a34-804c5d7b75c5@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Language: en-US
X-Originating-IP: [10.43.161.244]
X-ClientProxiedBy: EX13D37UWC004.ant.amazon.com (10.43.162.212) To
 EX13D02EUB001.ant.amazon.com (10.43.166.150)
Precedence: Bulk
Content-Transfer-Encoding: 8bit

On 2/8/21 3:21 PM, Jan Beulich wrote:
> On 05.02.2021 21:39, Norbert Manthey wrote:
>> To prevent leaking HVM params via L1TF and similar issues on a
>> hyperthread pair, let's load values of domains as late as possible.
>>
>> Furthermore, speculative barriers are re-arranged to make sure we do not
>> allow guests running on co-located VCPUs to leak hvm parameter values of
>> other domains.
>>
>> This is part of the speculative hardening effort.
>>
>> Signed-off-by: Norbert Manthey <nmanthey@amazon.de>
>> Reported-by: Hongyan Xia <hongyxia@amazon.co.uk>
> Did you lose Ian's release-ack, or did you drop it for a specific
> reason?
That happened by accident, similarly to not chaining this v2 to the
former v1. I'll add it to the next revision.
>
>> --- a/xen/arch/x86/hvm/hvm.c
>> +++ b/xen/arch/x86/hvm/hvm.c
>> @@ -4060,7 +4060,7 @@ static int hvm_allow_set_param(struct domain *d,
>>                                 uint32_t index,
>>                                 uint64_t new_value)
>>  {
>> -    uint64_t value = d->arch.hvm.params[index];
>> +    uint64_t value;
>>      int rc;
>>
>>      rc = xsm_hvm_param(XSM_TARGET, d, HVMOP_set_param);
>> @@ -4108,6 +4108,13 @@ static int hvm_allow_set_param(struct domain *d,
>>      if ( rc )
>>          return rc;
>>
>> +    if ( index >= HVM_NR_PARAMS )
>> +        return -EINVAL;
>> +
>> +    /* Make sure we evaluate permissions before loading data of domains. */
>> +    block_speculation();
>> +
>> +    value = d->arch.hvm.params[index];
>>      switch ( index )
>>      {
>>      /* The following parameters should only be changed once. */
> I don't see the need for the heavier block_speculation() here;
> afaict array_access_nospec() should do fine. The switch() in
> context above as well as the switch() further down in the
> function don't have any speculation susceptible code.
The reason to block speculation instead of just using the hardened index
access is to not allow to speculatively load data from another domain.
>
> Furthermore the first switch() doesn't use "value" at all, so
> you could move the access even further down. This may have the
> downside of adding latency, so may not be worth it, but in
> this case at least the description should say half a word,
> especially seeing it say "as late as possible" right now.
Agreed, I can move this further down, or adapt the wording. The initial
intention was to move it only below the first possible speculative
blocker. Hence, let me adapt the wording.
>
>> @@ -4141,6 +4148,9 @@ static int hvm_set_param(struct domain *d, uint32_t index, uint64_t value)
>>      if ( rc )
>>          return rc;
>>
>> +    /* Make sure we evaluate permissions before loading data of domains. */
>> +    block_speculation();
>> +
>>      switch ( index )
>>      {
>>      case HVM_PARAM_CALLBACK_IRQ:
> Like you do for the "get" path I think this similarly renders
> pointless the use in hvmop_set_param() (and - see below - the
> same consideration wrt is_hvm_domain() applies).
Can you please be more specific why this is pointless? I understand that
the is_hvm_domain check comes with a barrier that can be used to not add
another barrier. However, I did not find such a barrier here, which
comes between the 'if (rc)' just above, and the potential next access
based on the value of 'index'. At least the access behind the switch
statement cannot be optimized and replaced with a constant value easily.
>
>> @@ -4388,6 +4398,10 @@ int hvm_get_param(struct domain *d, uint32_t index, uint64_t *value)
>>      if ( rc )
>>          return rc;
>>
>> +    /* Make sure the index bound check in hvm_get_param is respected, as well as
>> +       the above domain permissions. */
>> +    block_speculation();
> Nit: Please fix comment style here.
Will do.
>
>> @@ -4428,9 +4442,6 @@ static int hvmop_get_param(
>>      if ( a.index >= HVM_NR_PARAMS )
>>          return -EINVAL;
>>
>> -    /* Make sure the above bound check is not bypassed during speculation. */
>> -    block_speculation();
>> -
>>      d = rcu_lock_domain_by_any_id(a.domid);
>>      if ( d == NULL )
>>          return -ESRCH;
> This one really was pointless anyway, as is_hvm_domain() (used
> down from here) already contains a suitable barrier.

Yes, agreed.

Best,
Norbert

>
> Jan




Amazon Development Center Germany GmbH
Krausenstr. 38
10117 Berlin
Geschaeftsfuehrung: Christian Schlaeger, Jonathan Weiss
Eingetragen am Amtsgericht Charlottenburg unter HRB 149173 B
Sitz: Berlin
Ust-ID: DE 289 237 879



From xen-devel-bounces@lists.xenproject.org Mon Feb 08 20:01:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 20:01:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83049.153783 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Cik-0001j6-MC; Mon, 08 Feb 2021 20:01:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83049.153783; Mon, 08 Feb 2021 20:01:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Cik-0001iz-J5; Mon, 08 Feb 2021 20:01:18 +0000
Received: by outflank-mailman (input) for mailman id 83049;
 Mon, 08 Feb 2021 20:01:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/EF1=HK=amazon.de=prvs=666814ed0=nmanthey@srs-us1.protection.inumbo.net>)
 id 1l9Cii-0001iu-Uf
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 20:01:16 +0000
Received: from smtp-fw-33001.amazon.com (unknown [207.171.190.10])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 628e4e96-ccc8-440d-8332-a90fae3f0d76;
 Mon, 08 Feb 2021 20:01:14 +0000 (UTC)
Received: from sea32-co-svc-lb4-vlan3.sea.corp.amazon.com (HELO
 email-inbound-relay-2c-c6afef2e.us-west-2.amazon.com) ([10.47.23.38])
 by smtp-border-fw-out-33001.sea14.amazon.com with ESMTP;
 08 Feb 2021 20:01:08 +0000
Received: from EX13D02EUB001.ant.amazon.com
 (pdx1-ws-svc-p6-lb9-vlan2.pdx.amazon.com [10.236.137.194])
 by email-inbound-relay-2c-c6afef2e.us-west-2.amazon.com (Postfix) with ESMTPS
 id 0DD36A2311; Mon,  8 Feb 2021 20:01:07 +0000 (UTC)
Received: from EX13MTAUEA002.ant.amazon.com (10.43.61.77) by
 EX13D02EUB001.ant.amazon.com (10.43.166.150) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Mon, 8 Feb 2021 20:01:04 +0000
Received: from u6fc700a6f3c650.ant.amazon.com (10.95.82.139) by
 mail-relay.amazon.com (10.43.61.169) with Microsoft SMTP Server id
 15.0.1497.2 via Frontend Transport; Mon, 8 Feb 2021 20:01:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 628e4e96-ccc8-440d-8332-a90fae3f0d76
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.de; i=@amazon.de; q=dns/txt; s=amazon201209;
  t=1612814475; x=1644350475;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version;
  bh=/rpGf2bOgCDHM0uHQqYN9yEMExmcpf9437BH+44nals=;
  b=HS4y8o9gBjcByBgm4Ko2183u0trgW133LUSI1WPaWOOqN0jkumoi+g1+
   aW2gordvk9ElnXLdKbEfvaujbqgsbkykMXp8EKj2dI+aHTcaT7ZyN8hdH
   qBxJaAhiq0LU5jqP8C8lh0dRAXyFObtXUcKcFp5N5jGJ63q7Nwcb7Lfik
   M=;
X-IronPort-AV: E=Sophos;i="5.81,163,1610409600"; 
   d="scan'208";a="116838210"
From: Norbert Manthey <nmanthey@amazon.de>
To: <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Norbert Manthey
	<nmanthey@amazon.de>, Ian Jackson <iwj@xenproject.org>
Subject: [PATCH HVM v3 1/1] hvm: refactor set param
Date: Mon, 8 Feb 2021 21:00:49 +0100
Message-ID: <20210208200049.28571-1-nmanthey@amazon.de>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <ba146cd6-fd5a-78d8-40bc-59885d265f5f@amazon.de>
References: <ba146cd6-fd5a-78d8-40bc-59885d265f5f@amazon.de>
MIME-Version: 1.0
Content-Type: text/plain
Precedence: Bulk

To prevent leaking HVM params via L1TF and similar issues on a
hyperthread pair, let's load values of domains after performing all
relevant checks, and blocking speculative execution.

Furthermore, speculative barriers are re-arranged to make sure we do not
allow guests running on co-located VCPUs to leak hvm parameter values of
other domains.

This is part of the speculative hardening effort.

Signed-off-by: Norbert Manthey <nmanthey@amazon.de>
Reported-by: Hongyan Xia <hongyxia@amazon.co.uk>
Release-Acked-by: Ian Jackson <iwj@xenproject.org>

---
v3: * rephrased commit message to better explain code relocation
    * added release-acked


 xen/arch/x86/hvm/hvm.c | 19 +++++++++++++++----
 1 file changed, 15 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4060,7 +4060,7 @@ static int hvm_allow_set_param(struct domain *d,
                                uint32_t index,
                                uint64_t new_value)
 {
-    uint64_t value = d->arch.hvm.params[index];
+    uint64_t value;
     int rc;
 
     rc = xsm_hvm_param(XSM_TARGET, d, HVMOP_set_param);
@@ -4108,6 +4108,13 @@ static int hvm_allow_set_param(struct domain *d,
     if ( rc )
         return rc;
 
+    if ( index >= HVM_NR_PARAMS )
+        return -EINVAL;
+
+    /* Make sure we evaluate permissions before loading data of domains. */
+    block_speculation();
+
+    value = d->arch.hvm.params[index];
     switch ( index )
     {
     /* The following parameters should only be changed once. */
@@ -4141,6 +4148,9 @@ static int hvm_set_param(struct domain *d, uint32_t index, uint64_t value)
     if ( rc )
         return rc;
 
+    /* Make sure we evaluate permissions before loading data of domains. */
+    block_speculation();
+
     switch ( index )
     {
     case HVM_PARAM_CALLBACK_IRQ:
@@ -4388,6 +4398,10 @@ int hvm_get_param(struct domain *d, uint32_t index, uint64_t *value)
     if ( rc )
         return rc;
 
+    /* Make sure the index bound check in hvm_get_param is respected, as well as
+       the above domain permissions. */
+    block_speculation();
+
     switch ( index )
     {
     case HVM_PARAM_ACPI_S_STATE:
@@ -4428,9 +4442,6 @@ static int hvmop_get_param(
     if ( a.index >= HVM_NR_PARAMS )
         return -EINVAL;
 
-    /* Make sure the above bound check is not bypassed during speculation. */
-    block_speculation();
-
     d = rcu_lock_domain_by_any_id(a.domid);
     if ( d == NULL )
         return -ESRCH;
-- 
2.17.1




Amazon Development Center Germany GmbH
Krausenstr. 38
10117 Berlin
Geschaeftsfuehrung: Christian Schlaeger, Jonathan Weiss
Eingetragen am Amtsgericht Charlottenburg unter HRB 149173 B
Sitz: Berlin
Ust-ID: DE 289 237 879





From xen-devel-bounces@lists.xenproject.org Mon Feb 08 20:02:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 20:02:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83050.153795 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Cjm-0001pL-0g; Mon, 08 Feb 2021 20:02:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83050.153795; Mon, 08 Feb 2021 20:02:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Cjl-0001pE-TA; Mon, 08 Feb 2021 20:02:21 +0000
Received: by outflank-mailman (input) for mailman id 83050;
 Mon, 08 Feb 2021 20:02:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1l9Cjk-0001p7-0r
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 20:02:20 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l9Cje-0003Kz-Fs; Mon, 08 Feb 2021 20:02:14 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l9Cje-0008KB-7M; Mon, 08 Feb 2021 20:02:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=iWN4KTGXqqiuFOYIvQUkcnfQ2/Rs4mlupUFiDuC1wKc=; b=zU3gSypzmrxrJQc7F4FS3omuko
	J8InxXtpGFZzBWSbhVxF7YYoRYX/C+N6RW87/3v/+UhRgDO1lThqQaoptA4csCGdPElFbhS8V5vI3
	TGtQdGJ0n9E2BegQGqeg0OcxI4H8JywvqW9P6cGJ0/Sw78cTSvqvfFfGX7ElqZOQGAxc=;
Subject: Re: [PATCH v2] xen/arm: fix gnttab_need_iommu_mapping
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: lucmiccio@gmail.com, xen-devel@lists.xenproject.org,
 Bertrand.Marquis@arm.com, Volodymyr_Babchuk@epam.com, Rahul.Singh@arm.com,
 Stefano Stabellini <stefano.stabellini@xilinx.com>,
 Ian Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>
References: <20210208184932.23468-1-sstabellini@kernel.org>
From: Julien Grall <julien@xen.org>
Message-ID: <173ed75a-94cf-26a5-9271-a687bf201578@xen.org>
Date: Mon, 8 Feb 2021 20:02:11 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210208184932.23468-1-sstabellini@kernel.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

(+ Jan and Ian for RM/stable decision)

Hi Stefano,

On 08/02/2021 18:49, Stefano Stabellini wrote:
> Commit 91d4eca7add broke gnttab_need_iommu_mapping on ARM.
> The offending chunk is:
> 
>   #define gnttab_need_iommu_mapping(d)                    \
> -    (is_domain_direct_mapped(d) && need_iommu(d))
> +    (is_domain_direct_mapped(d) && need_iommu_pt_sync(d))
> 
> On ARM we need gnttab_need_iommu_mapping to be true for dom0 when it is
> directly mapped and IOMMU is enabled for the domain, like the old check
> did, but the new check is always false.
> 
> In fact, need_iommu_pt_sync is defined as dom_iommu(d)->need_sync and
> need_sync is set as:
> 
>      if ( !is_hardware_domain(d) || iommu_hwdom_strict )
>          hd->need_sync = !iommu_use_hap_pt(d);
> 
> iommu_use_hap_pt(d) means that the page-table used by the IOMMU is the
> P2M. It is true on ARM. need_sync means that you have a separate IOMMU
> page-table and it needs to be updated for every change. need_sync is set
> to false on ARM. Hence, gnttab_need_iommu_mapping(d) is false too,
> which is wrong.
> 
> As a consequence, when using PV network from a domU on a system where
> IOMMU is on from Dom0, I get:
> 
> (XEN) smmu: /smmu@fd800000: Unhandled context fault: fsr=0x402, iova=0x8424cb148, fsynr=0xb0001, cb=0
> [   68.290307] macb ff0e0000.ethernet eth0: DMA bus error: HRESP not OK
> 
> The fix is to go back to something along the lines of the old
> implementation of gnttab_need_iommu_mapping.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> Fixes: 91d4eca7add

The format for the Fixes tag is:

Fixes: sha ("<commit title>")

For staging fix:

Reviewed-by: Julien Grall <jgrall@amazon.com>

@Ian, I think this wants to go in 4.15. Without it, Xen may receive an 
IOMMU fault for DMA transactions using granted pages.

> Backport: 4.12+
> 
> ---
> 
> Given the severity of the bug, I would like to request this patch to be
> backported to 4.12 too, even if 4.12 is security-fixes only since Oct
> 2020.

I would agree that the bug is bad, but it is not clear to me why it 
would warrant an exception for backporting. Can you outline the worst 
that can happen?

Correct me if I am wrong: if one can hit this error, then it should be 
pretty reproducible. Therefore, anyone wanting to use 4.12 in production 
should have seen the error on their setup by now (4.12 has been out 
for nearly two years). If not, then they are most likely not affected.

Any new users of Xen should use the latest stable rather than starting 
with an old version.

Other than the seriousness of the bug, I think there is also a fairness 
concern.

So far our rules say that only security backports are allowed. 
If we start granting exceptions, then we need a way to prevent abuse. 
To take an extreme example, why couldn't one ask for a backport to 4.2?

That said, I vaguely remember this topic was brought up a few times on 
security@. So maybe it is time to have a new discussion about the stable trees.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 20:25:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 20:25:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83054.153807 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9D5d-0003lL-VC; Mon, 08 Feb 2021 20:24:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83054.153807; Mon, 08 Feb 2021 20:24:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9D5d-0003lE-Ru; Mon, 08 Feb 2021 20:24:57 +0000
Received: by outflank-mailman (input) for mailman id 83054;
 Mon, 08 Feb 2021 20:24:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=W+r8=HK=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1l9D5c-0003l9-Nw
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 20:24:56 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0bca1306-eb07-4065-b1a4-7a79b4b7b9cd;
 Mon, 08 Feb 2021 20:24:56 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 872F364E6C;
 Mon,  8 Feb 2021 20:24:54 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0bca1306-eb07-4065-b1a4-7a79b4b7b9cd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1612815895;
	bh=nIyxAUmCJtJ4kHMxYu4XNhL+98Zh3bRqkQ1cjGlMq7Y=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=hx49aaQqTMgWxrFXM+gxPRxG5O3LuYEyqHb3DR9j4T5NZo+ChQk7TuxgdxOsvRlVA
	 FMURjgRCWbDt4xOHROU998tsMJXB/aNuZbc3FtROgu1Se3plTIhekzJy6LgtTewCNs
	 LElWysUjaZaDe0XJkVaTaZp+vxLUiaLIuhD1mtZTdpZCHKYihoJDWp0AhcQZtqHu+b
	 nAzuG2QcJv7smdYz2NvZOwExuJtrNxeOS9vhVbz4WHp1j6tQC06VEiz7jUiEy22Qzr
	 WeqTK/FmyU+ihrbPPIeLlAYuffgi7Tdo35rla3m8PFxs/AU9E92/wXvhvdHE+ivDPt
	 tK0wNMQQStfBw==
Date: Mon, 8 Feb 2021 12:24:53 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, lucmiccio@gmail.com, 
    xen-devel@lists.xenproject.org, Bertrand.Marquis@arm.com, 
    Volodymyr_Babchuk@epam.com, Rahul.Singh@arm.com, 
    Stefano Stabellini <stefano.stabellini@xilinx.com>, 
    Ian Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v2] xen/arm: fix gnttab_need_iommu_mapping
In-Reply-To: <173ed75a-94cf-26a5-9271-a687bf201578@xen.org>
Message-ID: <alpine.DEB.2.21.2102081214010.8948@sstabellini-ThinkPad-T480s>
References: <20210208184932.23468-1-sstabellini@kernel.org> <173ed75a-94cf-26a5-9271-a687bf201578@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 8 Feb 2021, Julien Grall wrote:
> (+ Jan and Ian for RM/stable decision)
> 
> Hi Jan,
> 
> On 08/02/2021 18:49, Stefano Stabellini wrote:
> > Commit 91d4eca7add broke gnttab_need_iommu_mapping on ARM.
> > The offending chunk is:
> > 
> >   #define gnttab_need_iommu_mapping(d)                    \
> > -    (is_domain_direct_mapped(d) && need_iommu(d))
> > +    (is_domain_direct_mapped(d) && need_iommu_pt_sync(d))
> > 
> > On ARM we need gnttab_need_iommu_mapping to be true for dom0 when it is
> > directly mapped and IOMMU is enabled for the domain, like the old check
> > did, but the new check is always false.
> > 
> > In fact, need_iommu_pt_sync is defined as dom_iommu(d)->need_sync and
> > need_sync is set as:
> > 
> >      if ( !is_hardware_domain(d) || iommu_hwdom_strict )
> >          hd->need_sync = !iommu_use_hap_pt(d);
> > 
> > iommu_use_hap_pt(d) means that the page-table used by the IOMMU is the
> > P2M. It is true on ARM. need_sync means that you have a separate IOMMU
> > page-table and it needs to be updated for every change. need_sync is set
> > to false on ARM. Hence, gnttab_need_iommu_mapping(d) is false too,
> > which is wrong.
> > 
> > As a consequence, when using a PV network device from a domU on a system
> > where the IOMMU is enabled for Dom0, I get:
> > 
> > (XEN) smmu: /smmu@fd800000: Unhandled context fault: fsr=0x402,
> > iova=0x8424cb148, fsynr=0xb0001, cb=0
> > [   68.290307] macb ff0e0000.ethernet eth0: DMA bus error: HRESP not OK
> > 
> > The fix is to go back to something along the lines of the old
> > implementation of gnttab_need_iommu_mapping.
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> > Fixes: 91d4eca7add
> 
> The format for the Fixes tag is:
> 
> Fixes: sha ("<commit title>")

Oops. Can be fixed on commit.


> For staging fix:
> 
> Reviewed-by: Julien Grall <jgrall@amazon.com>

Thank you!


> @Ian, I think this wants to go in 4.15. Without it, Xen may receive an IOMMU
> fault for DMA transactions using granted pages.
> 
> > Backport: 4.12+
> > 
> > ---
> > 
> > Given the severity of the bug, I would like to request this patch to be
> > backported to 4.12 too, even if 4.12 is security-fixes only since Oct
> > 2020.
> 
> I would agree that the bug is bad, but it is not clear to me why it would
> warrant an exception for backporting. Can you outline the worst that can
> happen?
> 
> Correct me if I am wrong: if one can hit this error, then it should be pretty
> reproducible. Therefore, anyone wanting to use 4.12 in production should have
> seen the error on their setup by now (4.12 has been out for nearly two years).
> If not, then they are most likely not affected.
> 
> Any new users of Xen should use the latest stable rather than starting with an
> old version.

Yes, the bug reproduces reliably but it takes more than a smoke test to
find it. That's why it wasn't found by OSSTest or by our internal
CI loop at Xilinx.

Users can be very slow to upgrade, so I am worried that 4.12 might still
be picked up by somebody, especially given that it is still security
supported for a while.


> Other than the seriousness of the bug, I think there is also a fairness
> concern.
> 
> So far our rules say that only security backports are allowed. If we start
> granting exceptions, then we need a way to prevent abuse. To take an extreme
> example, why couldn't one ask for a backport to 4.2?
> 
> That said, I vaguely remember this topic was brought up a few times on
> security@. So maybe it is time to have a new discussion about the stable trees.

I wouldn't consider a backport for a tree that is closed even for
security backports. So in your example, I'd say no to a backport to 4.2
or 4.10.

I think there is a valid question for trees that are still open to
security fixes but not general backports.

For these cases, I would just follow a simple rule of thumb:
- is the submitter willing to provide the backport?
- is the backport low-risk?
- is the underlying bug important?

If the answer to all is "yes" then I'd go with it.


In this case, given that the fix is a one-liner, and obviously correct,
I think it is worth considering.


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 21:24:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 21:24:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83067.153818 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9E0g-0000jx-9Y; Mon, 08 Feb 2021 21:23:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83067.153818; Mon, 08 Feb 2021 21:23:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9E0g-0000jq-6j; Mon, 08 Feb 2021 21:23:54 +0000
Received: by outflank-mailman (input) for mailman id 83067;
 Mon, 08 Feb 2021 21:23:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=C6zY=HK=gmail.com=julien.grall.oss@srs-us1.protection.inumbo.net>)
 id 1l9E0e-0000jl-IH
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 21:23:52 +0000
Received: from mail-ej1-x635.google.com (unknown [2a00:1450:4864:20::635])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ab5f131a-5490-4627-9b4b-42ebac14542f;
 Mon, 08 Feb 2021 21:23:51 +0000 (UTC)
Received: by mail-ej1-x635.google.com with SMTP id y9so27680413ejp.10
 for <xen-devel@lists.xenproject.org>; Mon, 08 Feb 2021 13:23:51 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ab5f131a-5490-4627-9b4b-42ebac14542f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=v1ZuOx2gp/cNkIxL2LILsvUXrDaY2isqlU5IhoGVUD0=;
        b=TqBaOO6xNX/iudZQU71vPeUqahRHOLvkgAqRFrkhgr/00t2z9zdIdgZWRJUqAXfBw2
         yhdI9YbDHfTE0vzKKvyYfewLkY0KlMvXRnPIydcLE5DwMbEEz+n3MNzogzPOmGIP/Svh
         Rp8F3V+qc5/tM/T8JVLbNe5JsyrZC/Jid6te3LjEB5MZqWIZjQJoc/qAEYf7Pxc0HoyT
         2PyIG0fO0WpBl1ESJDVLMdsHcJIs+dg8RSC7bNTiI2P6TNHsNj9vxl4p0YsFyqjNr19z
         4SSdF5tWL3wj1kTIlcTSp6txTMcmys/8njznPA0uRJGSFxkazLw72JCXO3jDZILsOwEa
         ictA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=v1ZuOx2gp/cNkIxL2LILsvUXrDaY2isqlU5IhoGVUD0=;
        b=nSCkHic9xFDZAOV+CQK5ETfX9FjhKd3invym9047ols/krPXFZfjkytpcGAuDjJPe1
         abCM50+oJDUuscYagV5zBNu/YQfQjaXuCBxUIhXn+y0iVCNLPJoHWl3VgTJMmo9+gxyC
         H7KJ1Q/qDVyNZUWoeWkUwygTNrYA/IoYb2yro5o0ZV/uRjA4LjIXNKWbCP9Ex+/JEA4n
         5nVFaHBR9kT69eolblA0dB+yEST6vEE8bseTWkTFKY/NHsfvzj2ESq5snrjBLTHXFjBh
         FSA100ObNMJ2DW+ZMhCtqAOrQNKCc5eUCJ9tXFbhvbkeaiR9ONZzITxyh6rbiM81hBvL
         vy4w==
X-Gm-Message-State: AOAM530Je+QLUsJJj/wdTb9Cpy0klEZ6WryfI/Wcd8/kGd/UivXOfGjT
	+grJXlUmlFIfB0DCspABeZTBarrmbWaGRC4gLdo=
X-Google-Smtp-Source: ABdhPJy2p7f9LGP0+iO6NkxAcrM49vDk/utYgPwwfUhir9l5pQWZD6PQi87Q38XJcvwD9vw57sB69bTo9P8JKJiHTuc=
X-Received: by 2002:a17:906:b082:: with SMTP id x2mr18365035ejy.100.1612819430538;
 Mon, 08 Feb 2021 13:23:50 -0800 (PST)
MIME-Version: 1.0
References: <20210208184932.23468-1-sstabellini@kernel.org>
 <173ed75a-94cf-26a5-9271-a687bf201578@xen.org> <alpine.DEB.2.21.2102081214010.8948@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2102081214010.8948@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien.grall.oss@gmail.com>
Date: Mon, 8 Feb 2021 21:23:39 +0000
Message-ID: <CAJ=z9a3uhiFKE6gepaPvWZxqRErCyLiv2CTDSx3Sihef7CaMtQ@mail.gmail.com>
Subject: Re: [PATCH v2] xen/arm: fix gnttab_need_iommu_mapping
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: lucmiccio@gmail.com, xen-devel <xen-devel@lists.xenproject.org>, 
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
	Rahul Singh <Rahul.Singh@arm.com>, Stefano Stabellini <stefano.stabellini@xilinx.com>, 
	Ian Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>
Content-Type: text/plain; charset="UTF-8"

On Mon, 8 Feb 2021 at 20:24, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > @Ian, I think this wants to go in 4.15. Without it, Xen may receive an IOMMU
> > fault for DMA transactions using granted pages.
> >
> > > Backport: 4.12+
> > >
> > > ---
> > >
> > > Given the severity of the bug, I would like to request this patch to be
> > > backported to 4.12 too, even if 4.12 is security-fixes only since Oct
> > > 2020.
> >
> > I would agree that the bug is bad, but it is not clear to me why it would
> > warrant an exception for backporting. Can you outline the worst that can
> > happen?
> >
> > Correct me if I am wrong: if one can hit this error, then it should be pretty
> > reproducible. Therefore, anyone wanting to use 4.12 in production should have
> > seen the error on their setup by now (4.12 has been out for nearly two years).
> > If not, then they are most likely not affected.
> >
> > Any new users of Xen should use the latest stable rather than starting with an
> > old version.
>
> Yes, the bug reproduces reliably but it takes more than a smoke test to
> find it. That's why it wasn't found by OSSTest or by our internal
> CI loop at Xilinx.

Ok. So a user should be able to catch it during testing, is that correct?

>
> Users can be very slow to upgrade, so I am worried that 4.12 might still
> be picked up by somebody, especially given that it is still security
> supported for a while.

Don't tell me about upgrading Xen... ;) But I am a bit confused: are
you worried about existing users or new users?

>
> > Other than the seriousness of the bug, I think there is also a fairness
> > concern.
> >
> > So far our rules say that only security backports are allowed. If we start
> > granting exceptions, then we need a way to prevent abuse. To take an extreme
> > example, why couldn't one ask for a backport to 4.2?
> >
> > That said, I vaguely remember this topic was brought up a few times on
> > security@. So maybe it is time to have a new discussion about the stable trees.
>
> I wouldn't consider a backport for a tree that is closed even for
> security backports. So in your example, I'd say no to a backport to 4.2
> or 4.10.
>
> I think there is a valid question for trees that are still open to
> security fixes but not general backports.
>
> For these cases, I would just follow a simple rule of thumb:

Aren't those rules already used for stable trees?

> - is the submitter willing to provide the backport?
> - is the backport low-risk?
> - is the underlying bug important?

You wrote multiple times that this is serious, but it is still not
clear what the worst that can happen is...

>
> If the answer to all is "yes" then I'd go with it.
>
>
> In this case, given that the fix is a one-liner, and obviously correct,

I have seen one-liners that introduced XSAs in the past ;).

Cheers,


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 22:08:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 22:08:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83082.153840 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Ei9-0004bC-M3; Mon, 08 Feb 2021 22:08:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83082.153840; Mon, 08 Feb 2021 22:08:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Ei9-0004b5-Hl; Mon, 08 Feb 2021 22:08:49 +0000
Received: by outflank-mailman (input) for mailman id 83082;
 Mon, 08 Feb 2021 22:08:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9Ei8-0004ax-Cz; Mon, 08 Feb 2021 22:08:48 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9Ei8-0005Nq-5f; Mon, 08 Feb 2021 22:08:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9Ei7-0006z1-SK; Mon, 08 Feb 2021 22:08:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l9Ei7-0002KG-Rp; Mon, 08 Feb 2021 22:08:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=vi2/916ZZ4DBxmaWW6n0yZL8ro8pJUv3k7IhaQWy9ik=; b=pGj76MXxkB+kJvbg7NBK7qiWXW
	PRM/EgUGGinsuanwlVa3NPaa/p2ntsBI/espxUPcbScsJ7cGGU2THHzIBK/2RHoQZgPm66geG4vZa
	jBV0O1rPQX2f06BnIWD2sWRx4hMdp9wKFv0EF+GG+rLZB4Wn24922AXjKnAL1T0vafeY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159146-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 159146: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f18309eb06efd1db5a2ab9903a1c246b6299951a
X-Osstest-Versions-That:
    xen=c973c38c2722b44396e94b64655a909330c631e7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 08 Feb 2021 22:08:47 +0000

flight 159146 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159146/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f18309eb06efd1db5a2ab9903a1c246b6299951a
baseline version:
 xen                  c973c38c2722b44396e94b64655a909330c631e7

Last test of basis   159140  2021-02-08 15:01:34 Z    0 days
Testing same since   159146  2021-02-08 19:00:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <iwj@xenproject.org>
  Olaf Hering <olaf@aepfle.de>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   c973c38c27..f18309eb06  f18309eb06efd1db5a2ab9903a1c246b6299951a -> smoke


From xen-devel-bounces@lists.xenproject.org Mon Feb 08 23:27:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 23:27:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83090.153855 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9FvZ-0003I0-KM; Mon, 08 Feb 2021 23:26:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83090.153855; Mon, 08 Feb 2021 23:26:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9FvZ-0003Ht-FK; Mon, 08 Feb 2021 23:26:45 +0000
Received: by outflank-mailman (input) for mailman id 83090;
 Mon, 08 Feb 2021 23:26:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DVmt=HK=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1l9FvY-0003Ho-JG
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 23:26:44 +0000
Received: from userp2120.oracle.com (unknown [156.151.31.85])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9d84a909-0d30-483f-a59d-cb80fa2b9549;
 Mon, 08 Feb 2021 23:26:43 +0000 (UTC)
Received: from pps.filterd (userp2120.oracle.com [127.0.0.1])
 by userp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 118NNhpG148767;
 Mon, 8 Feb 2021 23:26:38 GMT
Received: from aserp3020.oracle.com (aserp3020.oracle.com [141.146.126.70])
 by userp2120.oracle.com with ESMTP id 36hkrmwpj8-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Mon, 08 Feb 2021 23:26:38 +0000
Received: from pps.filterd (aserp3020.oracle.com [127.0.0.1])
 by aserp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 118NPKD1066341;
 Mon, 8 Feb 2021 23:26:37 GMT
Received: from nam12-mw2-obe.outbound.protection.outlook.com
 (mail-mw2nam12lp2049.outbound.protection.outlook.com [104.47.66.49])
 by aserp3020.oracle.com with ESMTP id 36j510ex88-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Mon, 08 Feb 2021 23:26:37 +0000
Received: from BYAPR10MB3288.namprd10.prod.outlook.com (2603:10b6:a03:156::21)
 by SJ0PR10MB4414.namprd10.prod.outlook.com (2603:10b6:a03:2d0::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3825.30; Mon, 8 Feb
 2021 23:26:35 +0000
Received: from BYAPR10MB3288.namprd10.prod.outlook.com
 ([fe80::f489:4e25:63e0:c721]) by BYAPR10MB3288.namprd10.prod.outlook.com
 ([fe80::f489:4e25:63e0:c721%7]) with mapi id 15.20.3825.030; Mon, 8 Feb 2021
 23:26:35 +0000
Received: from [10.74.101.99] (138.3.200.35) by
 BYAPR08CA0022.namprd08.prod.outlook.com (2603:10b6:a03:100::35) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3825.17 via Frontend
 Transport; Mon, 8 Feb 2021 23:26:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9d84a909-0d30-483f-a59d-cb80fa2b9549
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : in-reply-to : content-type :
 content-transfer-encoding : mime-version; s=corp-2020-01-29;
 bh=w/G1lZRC+vgRBxfYDmrHCcTCTpOXCNl3eDr95g5hVP8=;
 b=XXAL7Yhywc5uSHCkuuhnsmks/LDMpHgvB1nmynox4lXYP9nY0yDomK9n3Scn4WnjjTa3
 YYFZxDp9qoPcRsHLk0KDQ5Ya8vzdsBtkSuVDWCHnsQRuvdjfaHZ0LyU0uM33NHdM+X63
 yEAE/3bS3vPtO+b0nxmsYtRmUewlQpK3ilGLshhik4JozNSR5kvHql9Cd9MrlBqpAvL4
 nEtvyltC5X1q8+FwNPq/mIxQbEPJvvfXiFyWDT+pG5PSnAscsmzC0AiuVb8+JlWG9pX/
 ipCSliVd5wVjlhsP1yLuGQ4xM5gm6WUANyVoOyu182etYqmk/tYMJoOrrxJ4Q+/XwEn5 Lw== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=O91ZEO3+jBJjxEmlv34UhNQnsSYxzzgiNn8PYN/On443kRm/mBw4uXyAoFgVs424kCPwWgouVy8r8FXxzHkRyNw40q/WKA4yWDsDgn7t7kPJpfPhnaAc/UnohhooS1C1AtmMI7IE5pc2LkQjygFxj0e8BiwmohfiO4RqaNTwqGOQdjb3foU43SSvBruxo+9VFTmAE3vEhsFlBehxZ4rNar/BlFXVYJ7vI2lBgn6T0UN3c9mOvJo6bRHFFubs9MA6N/4tEQvGKbErLSs4cJnUYRPN2O9SV3AJ1p596NrIZAugBp+SkVegr+mUUlW7NNLGFcyKfcb4g6TyDC/9cXsj8w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=w/G1lZRC+vgRBxfYDmrHCcTCTpOXCNl3eDr95g5hVP8=;
 b=idFAWbekpVrX7pTLLdmtnfhFquaICohwnf00L5T8LktAhgOh7zgTDeY3OOMQ+vUhNNj/J95vrBQHkuhsfGHAutzJOJEHgx24PckLemoB6JQFE0vOe46pcXuuJx3qH8h2zaABqVr0kgVayJN3fpfTymHnxH9uh5s/KSfFPu0VmOlL2XzyFo11mjKQAexuVwejFwFfOobAjfGht1sTsudk0sfSpW9N3CLtd2V69+iWHE0e+/wj86plhb76/v/iOwaiWKUxgFTxXZVfF0CrBFHEITU5sP5rzBEADoL7dJv/h/dmKUXX4c7ak/teL1dFB2zxrEgmXbD4Mai+Bs2JV08xbQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=w/G1lZRC+vgRBxfYDmrHCcTCTpOXCNl3eDr95g5hVP8=;
 b=isNbs91VY6kzo9J8SWRzi/sU3+JITTgEYQASB6GAvwmiD1AVVe9iFd3yJrCPDEUBdLLcz9Hkka9+JgOmVG4KiPVcJk31rew5XGbmEqN7NJPDrr1DoeP7SfvbpC6WwSTNgFqfLKUVUqPydN97Jk1m45VvVz5n9Eeb5wnNbzzEudA=
Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=oracle.com;
Subject: Re: [PATCH 4/7] xen/events: link interdomain events to associated
 xenbus device
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
        linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
        netdev@vger.kernel.org, linux-scsi@vger.kernel.org
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
        =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
        Jens Axboe <axboe@kernel.dk>, Wei Liu <wei.liu@kernel.org>,
        Paul Durrant <paul@xen.org>, "David S. Miller" <davem@davemloft.net>,
        Jakub Kicinski <kuba@kernel.org>,
        Stefano Stabellini <sstabellini@kernel.org>
References: <20210206104932.29064-1-jgross@suse.com>
 <20210206104932.29064-5-jgross@suse.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <b7576788-c734-1fd7-69fa-2a576541880e@oracle.com>
Date: Mon, 8 Feb 2021 18:26:30 -0500
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.0
In-Reply-To: <20210206104932.29064-5-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Originating-IP: [138.3.200.35]
X-ClientProxiedBy: BYAPR08CA0022.namprd08.prod.outlook.com
 (2603:10b6:a03:100::35) To BYAPR10MB3288.namprd10.prod.outlook.com
 (2603:10b6:a03:156::21)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: d72ed724-1cc4-4ba9-92d4-08d8cc88f9b6
X-MS-TrafficTypeDiagnostic: SJ0PR10MB4414:
X-MS-Exchange-Transport-Forked: True
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d72ed724-1cc4-4ba9-92d4-08d8cc88f9b6
X-MS-Exchange-CrossTenant-AuthSource: BYAPR10MB3288.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 08 Feb 2021 23:26:35.1504
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: XmGBa14TU8D+lZFERDqc744yX4ULKG/78vx4caTi4bbH6bien1aadzusp9BRFCp8Aagrdi2dq8woWCLPMLZf/hvbytaeGgqyFNdls7PjJYU=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR10MB4414
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9889 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 malwarescore=0 bulkscore=0 adultscore=0
 mlxlogscore=999 phishscore=0 spamscore=0 suspectscore=0 mlxscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2102080130


On 2/6/21 5:49 AM, Juergen Gross wrote:
> In order to support the possibility of per-device event channel
> settings (e.g. lateeoi spurious event thresholds) add a xenbus device
> pointer to struct irq_info() and modify the related event channel
> binding interfaces to take the pointer to the xenbus device as a
> parameter instead of the domain id of the other side.
>
> While at it remove the stale prototype of bind_evtchn_to_irq_lateeoi().
>
> Signed-off-by: Juergen Gross <jgross@suse.com>


Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>



From xen-devel-bounces@lists.xenproject.org Mon Feb 08 23:35:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 23:35:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83095.153870 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9G46-0004GF-Kt; Mon, 08 Feb 2021 23:35:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83095.153870; Mon, 08 Feb 2021 23:35:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9G46-0004G8-G9; Mon, 08 Feb 2021 23:35:34 +0000
Received: by outflank-mailman (input) for mailman id 83095;
 Mon, 08 Feb 2021 23:35:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DVmt=HK=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1l9G45-0004G3-6B
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 23:35:33 +0000
Received: from userp2120.oracle.com (unknown [156.151.31.85])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 69b9dbea-8abb-433c-8e26-7c4c714ceb8b;
 Mon, 08 Feb 2021 23:35:32 +0000 (UTC)
Received: from pps.filterd (userp2120.oracle.com [127.0.0.1])
 by userp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 118NOUER149491;
 Mon, 8 Feb 2021 23:35:29 GMT
Received: from userp3020.oracle.com (userp3020.oracle.com [156.151.31.79])
 by userp2120.oracle.com with ESMTP id 36hkrmwpyn-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Mon, 08 Feb 2021 23:35:29 +0000
Received: from pps.filterd (userp3020.oracle.com [127.0.0.1])
 by userp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 118NQZd1156628;
 Mon, 8 Feb 2021 23:35:29 GMT
Received: from nam02-sn1-obe.outbound.protection.outlook.com
 (mail-sn1nam02lp2059.outbound.protection.outlook.com [104.47.36.59])
 by userp3020.oracle.com with ESMTP id 36j4vqhu68-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Mon, 08 Feb 2021 23:35:29 +0000
Received: from BYAPR10MB3288.namprd10.prod.outlook.com (2603:10b6:a03:156::21)
 by BY5PR10MB3937.namprd10.prod.outlook.com (2603:10b6:a03:1fe::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3825.29; Mon, 8 Feb
 2021 23:35:27 +0000
Received: from BYAPR10MB3288.namprd10.prod.outlook.com
 ([fe80::f489:4e25:63e0:c721]) by BYAPR10MB3288.namprd10.prod.outlook.com
 ([fe80::f489:4e25:63e0:c721%7]) with mapi id 15.20.3825.030; Mon, 8 Feb 2021
 23:35:27 +0000
Received: from [10.74.101.99] (138.3.200.35) by
 BYAPR05CA0001.namprd05.prod.outlook.com (2603:10b6:a03:c0::14) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3846.10 via Frontend Transport; Mon, 8 Feb 2021 23:35:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 69b9dbea-8abb-433c-8e26-7c4c714ceb8b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : in-reply-to : content-type :
 content-transfer-encoding : mime-version; s=corp-2020-01-29;
 bh=POYpBYt3+KxDmyUbp+MzcOudNXbZt7Zq8mS8+HAxE78=;
 b=Tdyjq4ewbUlJ0Qi2cIzw2xyiWRqdOq/oIQXmnfaCtF+p44SEEozxQ1aDWsdso2kOdSvV
 Xltw5msaruw9PWCG/+XpzvnFlPuTYKtw6OfDPsKWx1IuGCIP1pmyOsSLxZtpQpgvC8n8
 bDo5FfAv5yTRQ79cUchxBw3ABLLuNa+MRCh2jjREYCZgK7oYZTVkMZbEQik/LnKFMeWW
 ZBJF18RIFLJYsY1hnZacFO1XhlsGPNxZslEFVdFdruPraaOoZRxomCqW7ZESq/j/24t6
 ijrwipd2/X7yXVafO3L82cDUJFKimB1Y3qKmucRJa6XfvEhaUt8Qv1k5AfCRB2VrzB2R Yg== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=oWaylq31GoAWTade8UFzPHUgagvx5bwWSt54sNe3H8G3D5US+N3w9n27clkdNvyexPSl6EMty0MJmTKjpRfjAj2YYV/+dmO9W6aI5I0kVZGS8fsIwCPBnvziQKJrFI4pTsW5N2alLQNujtGY0s3ZcxvcNC/8drEH9GR69yr5LTadp/YcqojlrC+0EsP38ayVVsm4oVyUyjg5SFaRxwnV+WaNLrq2so+zVuNywud963qBvvq0dbNCtarZIk+BhCzsoBxYTR+SK0ntHvSuQKQrT07KbVOQz0wtSrstbS4P8X+Z9ocwOLSWK+NFkJrM3N5Z0+9FZx9D7WHPQyQRBQ8INg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=POYpBYt3+KxDmyUbp+MzcOudNXbZt7Zq8mS8+HAxE78=;
 b=efVuE/EOBradGKK28j85Gz0iEcVR0gm0ijkDvnaTAJtdWxNNlb/3kGCLkLd5tn8OTriyO9ISwRDJ7baT3oQLqzk1Mb+4FdeY53HiFCaCa1JQ7HOsdFdttwBahzfy/Aj50eYADJEk5vUpNH5XipeEMm6NzJjXuHG2bGDkhT7OoGhGAHZV5J7CnwwY1tc9S/BkIhLMEywcm8OptWRlW+GZmuCOdJq9OBDfaXWxx8276LF0XriHfPMC4p7bHid5eIma883i1IH9C9JMzwAnugffd+byyzJUHS44J+b1dnNzwaOKBwRJ5lCLU1F2rYkfmmHeX3/c3TBg+qIsQz0KBlfXWQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=POYpBYt3+KxDmyUbp+MzcOudNXbZt7Zq8mS8+HAxE78=;
 b=kAxMb+fKO9P4iXR2GPe4rDzmfCR4j4IG47gE1snivWfutyE3lynJQOA3EVQK8mgiNiVhfq7AL4tzI1ZfeIL2r/MCvZSS7B3YrSm6rVkxIh3FlvBgEFQY8YkIqXA2JtQuLN3cSEVclM+cUKDy0DRcq/lBW0vKSnGWnoVmHe/cOm0=
Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=oracle.com;
Subject: Re: [PATCH 5/7] xen/events: add per-xenbus device event statistics
 and settings
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
        linux-kernel@vger.kernel.org
Cc: Stefano Stabellini <sstabellini@kernel.org>
References: <20210206104932.29064-1-jgross@suse.com>
 <20210206104932.29064-6-jgross@suse.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <d218749d-92c7-f995-a282-2490fb13a458@oracle.com>
Date: Mon, 8 Feb 2021 18:35:24 -0500
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.0
In-Reply-To: <20210206104932.29064-6-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Originating-IP: [138.3.200.35]
X-ClientProxiedBy: BYAPR05CA0001.namprd05.prod.outlook.com
 (2603:10b6:a03:c0::14) To BYAPR10MB3288.namprd10.prod.outlook.com
 (2603:10b6:a03:156::21)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 6f78cb41-f1ae-452f-772d-08d8cc8a36fa
X-MS-TrafficTypeDiagnostic: BY5PR10MB3937:
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6f78cb41-f1ae-452f-772d-08d8cc8a36fa
X-MS-Exchange-CrossTenant-AuthSource: BYAPR10MB3288.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 08 Feb 2021 23:35:27.4871
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: rVdI3luxQnN5IFlbz7g9GQexEkVL+N4Z/v8g3IxRnc+tj1rEBcM//VjFq/z4V/1U6CVwzD3ciNN7BOBT0gBqRr20gFEIwk1n4xWOzDRFsjQ=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR10MB3937
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9889 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 bulkscore=0 adultscore=0
 mlxlogscore=999 mlxscore=0 suspectscore=0 malwarescore=0 phishscore=0
 spamscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2102080130


On 2/6/21 5:49 AM, Juergen Gross wrote:
> Add sysfs nodes for each xenbus device showing event statistics (number
> of events and spurious events, number of associated event channels)
> and for setting a spurious event threshold in case a frontend is
> sending too many events without being rogue on purpose.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>  drivers/xen/events/events_base.c  | 27 ++++++++++++-
>  drivers/xen/xenbus/xenbus_probe.c | 66 +++++++++++++++++++++++++++++++
>  include/xen/xenbus.h              |  7 ++++
>  3 files changed, 98 insertions(+), 2 deletions(-)


This needs Documentation/ABI update.


-boris
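For reference, sysfs attribute entries in the kernel's Documentation/ABI tree follow a fixed What/Date/Contact/Description template. The entry below is an illustrative placeholder only; the path, date, and description are not the actual ones added for this series:

```
What:		/sys/bus/xen-backend/devices/.../event_channels   (illustrative)
Date:		February 2021
Contact:	xen-devel@lists.xenproject.org
Description:
		Number of Xen event channels associated with this
		xenbus device (illustrative description).
```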



From xen-devel-bounces@lists.xenproject.org Mon Feb 08 23:56:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 Feb 2021 23:56:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83098.153884 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9GO8-00069L-5v; Mon, 08 Feb 2021 23:56:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83098.153884; Mon, 08 Feb 2021 23:56:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9GO8-00069E-2u; Mon, 08 Feb 2021 23:56:16 +0000
Received: by outflank-mailman (input) for mailman id 83098;
 Mon, 08 Feb 2021 23:56:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=W+r8=HK=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1l9GO6-000699-S4
 for xen-devel@lists.xenproject.org; Mon, 08 Feb 2021 23:56:14 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d9827e62-59f5-416e-9adc-0090c8a60223;
 Mon, 08 Feb 2021 23:56:14 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 0CF4D64E76;
 Mon,  8 Feb 2021 23:56:13 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d9827e62-59f5-416e-9adc-0090c8a60223
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1612828573;
	bh=JC2fgA3vCo0/t26eZhsquPHOlPBaFoo5utCIes70ss4=;
	h=Date:From:To:cc:Subject:From;
	b=OU0bgciatfTnoFcp9WJ+rdfVyTkXWGIJLBPZk7h1UmcGFO/kGJbvT3/Xaenk9rIJ1
	 e7jxOALI9PopYsF+6J6breULtNiaOnXcH7fl9GDdkqRw1MKNv6uIl3yZossGHcYU4P
	 wmIZhtUNcsuqBlUjBYxkFLVvGhKJey67CYDL6p6FsrJKKo727c9lhSZEqQkt1tXqyp
	 d75L9lRUHOG6itLPg6Ma1GtzU3VJKUjkacspFeU1nVBLDwpP68p4M4WQMMFcURlvBb
	 rpWmbGrxlJdZRJwyE+FJVOhWlmX+IVBPnkVEmJdKX8I/xBFAqS89M6fjQQunadd6TM
	 uNufNg/FGsdjA==
Date: Mon, 8 Feb 2021 15:56:12 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: xen-devel@lists.xenproject.org
cc: sstabellini@kernel.org, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, ehem+xen@m5p.com
Subject: [PATCH] xen: workaround missing device_type property in pci/pcie
 nodes
Message-ID: <alpine.DEB.2.21.2102081544230.8948@sstabellini-ThinkPad-T480s>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

PCI buses differ from default buses in a few important ways, so it is
important to detect them properly. Normally, a PCI bus node is expected
to have the following property:

    device_type = "pci"

In reality, this is not always the case. To handle PCI bus nodes that
lack the device_type property, also consider the node name: if the node
name is "pcie" or "pci", treat the bus as a PCI bus.

This commit is based on the Linux kernel commit
d1ac0002dd29 "of: address: Work around missing device_type property in
pcie nodes".

This fixes Xen boot on RPi4.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>

diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index 18825e333e..f1a96a3b90 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -563,14 +563,28 @@ static unsigned int dt_bus_default_get_flags(const __be32 *addr)
  * PCI bus specific translator
  */
 
+static bool_t dt_node_is_pci(const struct dt_device_node *np)
+{
+    bool is_pci = !strcmp(np->name, "pcie") || !strcmp(np->name, "pci");
+
+    if (is_pci)
+        printk(XENLOG_WARNING "%s: Missing device_type\n", np->full_name);
+
+    return is_pci;
+}
+
 static bool_t dt_bus_pci_match(const struct dt_device_node *np)
 {
     /*
      * "pciex" is PCI Express "vci" is for the /chaos bridge on 1st-gen PCI
      * powermacs "ht" is hypertransport
+     *
+     * If none of the device_type values match and the node name is
+     * "pcie" or "pci", accept the device as PCI (with a warning).
      */
     return !strcmp(np->type, "pci") || !strcmp(np->type, "pciex") ||
-        !strcmp(np->type, "vci") || !strcmp(np->type, "ht");
+        !strcmp(np->type, "vci") || !strcmp(np->type, "ht") ||
+        dt_node_is_pci(np);
 }
 
 static void dt_bus_pci_count_cells(const struct dt_device_node *np,


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 00:07:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 00:07:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83100.153897 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9GZB-0007kE-KA; Tue, 09 Feb 2021 00:07:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83100.153897; Tue, 09 Feb 2021 00:07:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9GZB-0007k7-GV; Tue, 09 Feb 2021 00:07:41 +0000
Received: by outflank-mailman (input) for mailman id 83100;
 Tue, 09 Feb 2021 00:07:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9GZ9-0007jy-Ro; Tue, 09 Feb 2021 00:07:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9GZ9-0007wF-HD; Tue, 09 Feb 2021 00:07:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9GZ9-00049U-7V; Tue, 09 Feb 2021 00:07:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l9GZ9-0003JI-6y; Tue, 09 Feb 2021 00:07:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=tEDNonrQ71OAobkJrhGQuayphqhSfJg4XzK0JRaVzL0=; b=OSBK8KPS4cAJWT29fMOnKK5R7w
	jBazfSTjJSEtZs1hn1TQCTIEmXJ/EbiZ+jGC71K1f9fY3xDe8iS2mGGwbLBGCbwUTr+JVxHUu7GOL
	ybu4u/aYX5GBhvHclK9JlNUq1MoafSVMsvPFkMTlWQt/bmxPfnm0LhbfueowtNFKoqrA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159133-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.11-testing test] 159133: regressions - FAIL
X-Osstest-Failures:
    xen-4.11-testing:test-amd64-i386-xl-qemuu-win7-amd64:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemuu-ws16-amd64:<job status>:broken:regression
    xen-4.11-testing:test-amd64-amd64-qemuu-freebsd11-amd64:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemut-ws16-amd64:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-qemuu-rhel6hvm-amd:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64:<job status>:broken:regression
    xen-4.11-testing:test-amd64-amd64-xl-pvhv2-amd:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-libvirt-xsm:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-libvirt:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-qemut-rhel6hvm-amd:<job status>:broken:regression
    xen-4.11-testing:test-amd64-amd64-libvirt-xsm:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-qemut-rhel6hvm-intel:<job status>:broken:regression
    xen-4.11-testing:test-amd64-amd64-xl-qemut-ws16-amd64:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-qemuu-rhel6hvm-intel:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-xsm:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemut-debianhvm-amd64:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-pvshim:<job status>:broken:regression
    xen-4.11-testing:test-amd64-amd64-pygrub:<job status>:broken:regression
    xen-4.11-testing:test-amd64-amd64-amd64-pvgrub:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-shadow:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-freebsd10-amd64:<job status>:broken:regression
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemuu-ovmf-amd64:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemut-win7-amd64:<job status>:broken:regression
    xen-4.11-testing:test-amd64-amd64-xl-qemut-win7-amd64:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-raw:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-freebsd10-i386:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:<job status>:broken:regression
    xen-4.11-testing:test-amd64-amd64-libvirt-vhd:<job status>:broken:regression
    xen-4.11-testing:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    xen-4.11-testing:test-armhf-armhf-libvirt-raw:guest-start:fail:regression
    xen-4.11-testing:test-amd64-i386-libvirt-xsm:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-qemuu-rhel6hvm-intel:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl-qemuu-ws16-amd64:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-amd64-xl-qemut-win7-amd64:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-freebsd10-i386:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-amd64-pygrub:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl-qemut-win7-amd64:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl-raw:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-qemuu-rhel6hvm-amd:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-qemut-rhel6hvm-amd:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-amd64-libvirt-vhd:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl-qemut-debianhvm-amd64:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-amd64-libvirt-xsm:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-freebsd10-amd64:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl-xsm:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-qemut-rhel6hvm-intel:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl-pvshim:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl-qemuu-win7-amd64:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-libvirt:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl-qemut-ws16-amd64:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-amd64-amd64-pvgrub:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-amd64-xl-qemut-ws16-amd64:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-amd64-qemuu-freebsd11-amd64:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl-qemuu-ovmf-amd64:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl-shadow:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-amd64-xl-pvhv2-amd:host-install(5):broken:heisenbug
    xen-4.11-testing:test-armhf-armhf-xl-multivcpu:xen-boot:fail:heisenbug
    xen-4.11-testing:test-arm64-arm64-xl-thunderx:xen-boot:fail:heisenbug
    xen-4.11-testing:test-armhf-armhf-xl-vhd:debian-di-install:fail:heisenbug
    xen-4.11-testing:test-armhf-armhf-libvirt-raw:debian-di-install:fail:heisenbug
    xen-4.11-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=1c7d984645f9ade9b47e862b5880734ad498fea8
X-Osstest-Versions-That:
    xen=310ab79875cb705cc2c7daddff412b5a4899f8c9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 09 Feb 2021 00:07:39 +0000

flight 159133 xen-4.11-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159133/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-win7-amd64    <job status>           broken in 159042
 test-amd64-i386-xl-qemuu-ws16-amd64    <job status>           broken in 159042
 test-amd64-amd64-qemuu-freebsd11-amd64    <job status>        broken in 159042
 test-amd64-i386-xl-qemut-ws16-amd64    <job status>           broken in 159042
 test-amd64-i386-qemuu-rhel6hvm-amd    <job status>            broken in 159042
 test-amd64-i386-xl              <job status>                 broken  in 159042
 test-amd64-i386-xl-qemuu-debianhvm-amd64    <job status>      broken in 159042
 test-amd64-amd64-xl-pvhv2-amd    <job status>                 broken in 159042
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm    <job status>   broken in 159042
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict <job status> broken in 159042
 test-amd64-i386-libvirt-xsm     <job status>                 broken  in 159042
 test-amd64-i386-libvirt         <job status>                 broken  in 159042
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm <job status> broken in 159042
 test-amd64-i386-qemut-rhel6hvm-amd    <job status>            broken in 159042
 test-amd64-amd64-libvirt-xsm    <job status>                 broken  in 159042
 test-amd64-i386-qemut-rhel6hvm-intel    <job status>          broken in 159042
 test-amd64-amd64-xl-qemut-ws16-amd64    <job status>          broken in 159042
 test-amd64-i386-qemuu-rhel6hvm-intel    <job status>          broken in 159042
 test-amd64-i386-xl-xsm          <job status>                 broken  in 159042
 test-amd64-i386-xl-qemut-debianhvm-amd64    <job status>      broken in 159042
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  <job status> broken in 159042
 test-amd64-i386-xl-pvshim       <job status>                 broken  in 159042
 test-amd64-amd64-pygrub         <job status>                 broken  in 159042
 test-amd64-amd64-amd64-pvgrub    <job status>                 broken in 159042
 test-amd64-i386-xl-shadow       <job status>                 broken  in 159042
 test-amd64-i386-freebsd10-amd64    <job status>               broken in 159042
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict <job status> broken in 159042
 test-amd64-i386-xl-qemuu-ovmf-amd64    <job status>           broken in 159042
 test-amd64-i386-xl-qemut-win7-amd64    <job status>           broken in 159042
 test-amd64-amd64-xl-qemut-win7-amd64    <job status>          broken in 159042
 test-amd64-i386-xl-raw          <job status>                 broken  in 159042
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm <job status> broken in 159042
 test-amd64-i386-freebsd10-i386    <job status>                broken in 159042
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm    <job status>   broken in 159042
 test-amd64-amd64-libvirt-vhd    <job status>                 broken  in 159042
 test-armhf-armhf-xl-vhd      13 guest-start    fail in 159042 REGR. vs. 157566
 test-armhf-armhf-libvirt-raw 13 guest-start    fail in 159042 REGR. vs. 157566

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt-xsm  5 host-install(5) broken in 159042 pass in 159133
 test-amd64-i386-qemuu-rhel6hvm-intel 5 host-install(5) broken in 159042 pass in 159133
 test-amd64-i386-xl-qemuu-ws16-amd64 5 host-install(5) broken in 159042 pass in 159133
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 5 host-install(5) broken in 159042 pass in 159133
 test-amd64-amd64-xl-qemut-win7-amd64 5 host-install(5) broken in 159042 pass in 159133
 test-amd64-i386-freebsd10-i386 5 host-install(5) broken in 159042 pass in 159133
 test-amd64-i386-xl-qemuu-debianhvm-amd64 5 host-install(5) broken in 159042 pass in 159133
 test-amd64-amd64-pygrub      5 host-install(5) broken in 159042 pass in 159133
 test-amd64-i386-xl-qemut-win7-amd64 5 host-install(5) broken in 159042 pass in 159133
 test-amd64-i386-xl-raw       5 host-install(5) broken in 159042 pass in 159133
 test-amd64-i386-qemuu-rhel6hvm-amd 5 host-install(5) broken in 159042 pass in 159133
 test-amd64-i386-qemut-rhel6hvm-amd 5 host-install(5) broken in 159042 pass in 159133
 test-amd64-amd64-libvirt-vhd 5 host-install(5) broken in 159042 pass in 159133
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 5 host-install(5) broken in 159042 pass in 159133
 test-amd64-i386-xl-qemut-debianhvm-amd64 5 host-install(5) broken in 159042 pass in 159133
 test-amd64-amd64-libvirt-xsm 5 host-install(5) broken in 159042 pass in 159133
 test-amd64-i386-freebsd10-amd64 5 host-install(5) broken in 159042 pass in 159133
 test-amd64-i386-xl-xsm       5 host-install(5) broken in 159042 pass in 159133
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 5 host-install(5) broken in 159042 pass in 159133
 test-amd64-i386-qemut-rhel6hvm-intel 5 host-install(5) broken in 159042 pass in 159133
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 5 host-install(5) broken in 159042 pass in 159133
 test-amd64-i386-xl-pvshim    5 host-install(5) broken in 159042 pass in 159133
 test-amd64-i386-xl-qemuu-win7-amd64 5 host-install(5) broken in 159042 pass in 159133
 test-amd64-i386-libvirt      5 host-install(5) broken in 159042 pass in 159133
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 5 host-install(5) broken in 159042 pass in 159133
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 5 host-install(5) broken in 159042 pass in 159133
 test-amd64-i386-xl-qemut-ws16-amd64 5 host-install(5) broken in 159042 pass in 159133
 test-amd64-amd64-amd64-pvgrub 5 host-install(5) broken in 159042 pass in 159133
 test-amd64-amd64-xl-qemut-ws16-amd64 5 host-install(5) broken in 159042 pass in 159133
 test-amd64-amd64-qemuu-freebsd11-amd64 5 host-install(5) broken in 159042 pass in 159133
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 5 host-install(5) broken in 159042 pass in 159133
 test-amd64-i386-xl-qemuu-ovmf-amd64 5 host-install(5) broken in 159042 pass in 159133
 test-amd64-i386-xl           5 host-install(5) broken in 159042 pass in 159133
 test-amd64-i386-xl-shadow    5 host-install(5) broken in 159042 pass in 159133
 test-amd64-amd64-xl-pvhv2-amd 5 host-install(5) broken in 159042 pass in 159133
 test-armhf-armhf-xl-multivcpu  8 xen-boot                  fail pass in 159042
 test-arm64-arm64-xl-thunderx  8 xen-boot                   fail pass in 159042
 test-armhf-armhf-xl-vhd      12 debian-di-install          fail pass in 159042
 test-armhf-armhf-libvirt-raw 12 debian-di-install          fail pass in 159042

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx 15 migrate-support-check fail in 159042 never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check fail in 159042 never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check fail in 159042 never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check fail in 159042 never pass
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 157566
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 157566
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157566
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157566
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157566
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157566
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157566
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157566
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157566
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157566
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157566
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157566
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  1c7d984645f9ade9b47e862b5880734ad498fea8
baseline version:
 xen                  310ab79875cb705cc2c7daddff412b5a4899f8c9

Last test of basis   157566  2020-12-15 14:05:54 Z   55 days
Failing since        159016  2021-02-04 15:05:58 Z    4 days    5 attempts
Testing same since   159042  2021-02-05 12:13:30 Z    3 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-i386-xl-qemuu-win7-amd64 broken
broken-job test-amd64-i386-xl-qemuu-ws16-amd64 broken
broken-job test-amd64-amd64-qemuu-freebsd11-amd64 broken
broken-job test-amd64-i386-xl-qemut-ws16-amd64 broken
broken-job test-amd64-i386-qemuu-rhel6hvm-amd broken
broken-job test-amd64-i386-xl broken
broken-job test-amd64-i386-xl-qemuu-debianhvm-amd64 broken
broken-job test-amd64-amd64-xl-pvhv2-amd broken
broken-job test-amd64-i386-xl-qemut-debianhvm-i386-xsm broken
broken-job test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict broken
broken-job test-amd64-i386-libvirt-xsm broken
broken-job test-amd64-i386-libvirt broken
broken-job test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm broken
broken-job test-amd64-i386-qemut-rhel6hvm-amd broken
broken-job test-amd64-amd64-libvirt-xsm broken
broken-job test-amd64-i386-qemut-rhel6hvm-intel broken
broken-job test-amd64-amd64-xl-qemut-ws16-amd64 broken
broken-job test-amd64-i386-qemuu-rhel6hvm-intel broken
broken-job test-amd64-i386-xl-xsm broken
broken-job test-amd64-i386-xl-qemut-debianhvm-amd64 broken
broken-job test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow broken
broken-job test-amd64-i386-xl-pvshim broken
broken-job test-amd64-amd64-pygrub broken
broken-job test-amd64-amd64-amd64-pvgrub broken
broken-job test-amd64-i386-xl-shadow broken
broken-job test-amd64-i386-freebsd10-amd64 broken
broken-job test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict broken
broken-job test-amd64-i386-xl-qemuu-ovmf-amd64 broken
broken-job test-amd64-i386-xl-qemut-win7-amd64 broken
broken-job test-amd64-amd64-xl-qemut-win7-amd64 broken
broken-job test-amd64-i386-xl-raw broken
broken-job test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm broken
broken-job test-amd64-i386-freebsd10-i386 broken
broken-job test-amd64-i386-xl-qemuu-debianhvm-i386-xsm broken
broken-job test-amd64-amd64-libvirt-vhd broken

Not pushing.

------------------------------------------------------------
commit 1c7d984645f9ade9b47e862b5880734ad498fea8
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Feb 5 08:54:03 2021 +0100

    x86/msr: fix handling of MSR_IA32_PERF_{STATUS/CTL} (again, part 2)
    
    X86_VENDOR_* aren't bit masks in the older trees.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit f9090d990e201a5ca045976b8ddaab9fa6ee69dd
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Feb 4 15:41:12 2021 +0100

    x86/msr: fix handling of MSR_IA32_PERF_{STATUS/CTL} (again)
    
    X86_VENDOR_* aren't bit masks in the older trees.
    
    Reported-by: James Dingwall <james@dingwall.me.uk>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 01:57:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 01:57:23 +0000
Date: Mon, 8 Feb 2021 17:57:01 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien.grall.oss@gmail.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, lucmiccio@gmail.com, 
    xen-devel <xen-devel@lists.xenproject.org>, 
    Bertrand Marquis <Bertrand.Marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Rahul Singh <Rahul.Singh@arm.com>, 
    Stefano Stabellini <stefano.stabellini@xilinx.com>, 
    Ian Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v2] xen/arm: fix gnttab_need_iommu_mapping
In-Reply-To: <CAJ=z9a3uhiFKE6gepaPvWZxqRErCyLiv2CTDSx3Sihef7CaMtQ@mail.gmail.com>
Message-ID: <alpine.DEB.2.21.2102081556480.8948@sstabellini-ThinkPad-T480s>
References: <20210208184932.23468-1-sstabellini@kernel.org> <173ed75a-94cf-26a5-9271-a687bf201578@xen.org> <alpine.DEB.2.21.2102081214010.8948@sstabellini-ThinkPad-T480s> <CAJ=z9a3uhiFKE6gepaPvWZxqRErCyLiv2CTDSx3Sihef7CaMtQ@mail.gmail.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 8 Feb 2021, Julien Grall wrote:
> On Mon, 8 Feb 2021 at 20:24, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > > @Ian, I think this wants to go in 4.15. Without it, Xen may receive an IOMMU
> > > fault for DMA transactions using granted pages.
> > >
> > > > Backport: 4.12+
> > > >
> > > > ---
> > > >
> > > > Given the severity of the bug, I would like to request this patch to be
> > > > backported to 4.12 too, even if 4.12 is security-fixes only since Oct
> > > > 2020.
> > >
> > > I would agree that the bug is bad, but it is not clear to me why it would
> > > warrant an exception for backporting. Can you outline what's the worst
> > > that can happen?
> > >
> > > Correct me if I am wrong: if one can hit this error, then it should be pretty
> > > reliable to reproduce. Therefore, anyone wanting to use 4.12 in production should
> > > have seen the error on their setup by now (4.12 has been out for nearly two
> > > years). If not, then they are most likely not affected.
> > >
> > > Any new users of Xen should use the latest stable rather than starting with an
> > > old version.
> >
> > Yes, the bug reproduces reliably, but it takes more than a smoke test to
> > find it. That's why it wasn't found by OSSTest or by our internal
> > CI loop at Xilinx.
> 
> Ok. So a user should be able to catch it during testing, is that correct?

Yes, probably. The failure is that PV drivers do not work (they trigger
the IOMMU fault), specifically PV network and block, maybe others too.

I think it is unlikely but possible that a hardware update would also
trigger the bug. For instance, a change of network card might trigger
it, if the previous card's driver was always bouncing requests through
bounce buffers while the new driver uses the provided memory pages
directly. I don't know how realistic this scenario is.


> > Users can be very slow at upgrading, so I am worried that 4.12 might still
> > be picked by somebody, especially given that it is still security
> > supported for a while.
> 
> Don't tell me about upgrading Xen... ;) But I am a bit confused, are
> you worried about existing users or new users?

I am mostly worried about people that start using 4.12.

If a user was already on 4.12 and not seeing any errors, they are
unlikely to see this error. It would only happen if:
- they didn't use PV drivers before, and they want to start using PV
  drivers now
- they are upgrading hardware (not sure how likely to happen, see above)


> > > Other than the seriousness of the bug, I think there is also a fairness
> > > concern.
> > >
> > > So far our rules say that only security backports are allowed. If we start
> > > granting exceptions, then we need a way to prevent abuse. To take an
> > > extreme example, why couldn't one ask for a backport to 4.2?
> > >
> > > That said, I vaguely remember this topic was brought up a few times on
> > > security@. So maybe it is time to have a new discussion about stable trees.
> >
> > I wouldn't consider a backport for a tree that is closed even for
> > security backports. So in your example, I'd say no to a backport to 4.2
> > or 4.10.
> >
> > I think there is a valid question for trees that are still open to
> > security fixes but not general backports.
> >
> > For these cases, I would just follow a simple rule of thumb:
> 
> Aren't those rules already used for stable trees?

No, I don't think so. Backports are done by Jan and me, not by the
submitter (in this case I am the submitter, but that is a coincidence :-).
If a commit fixes a genuine bug and the backport doesn't cause issues,
then it is typically done. Here the bar should certainly be higher, both
in terms of low risk and the importance of the bug being fixed.



> > - is the submitter willing to provide the backport?
> > - is the backport low-risk?
> > - is the underlying bug important?
> 
> You wrote multiple times that this is serious, but it is still not
> clear what's the worst that can happen...

PV drivers don't work: each data transfer involving granted pages causes
an IOMMU fault.


> > If the answer to all is "yes" then I'd go with it.
> >
> >
> > In this case, given that the fix is a one-liner, and obviously correct,
> 
> I have seen one-liners that introduced XSAs in the past ;).
 
Sure, but this is a revert to the pre-4.12 implementation.


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 06:21:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 06:21:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83126.153951 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9MPC-00027o-CX; Tue, 09 Feb 2021 06:21:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83126.153951; Tue, 09 Feb 2021 06:21:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9MPC-00027h-9U; Tue, 09 Feb 2021 06:21:46 +0000
Received: by outflank-mailman (input) for mailman id 83126;
 Tue, 09 Feb 2021 06:21:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CyDg=HL=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1l9MPB-00027c-Lk
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 06:21:45 +0000
Received: from mail-pj1-x102c.google.com (unknown [2607:f8b0:4864:20::102c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 91bc743f-1ed8-406a-bc96-e3b732d44b4b;
 Tue, 09 Feb 2021 06:21:44 +0000 (UTC)
Received: by mail-pj1-x102c.google.com with SMTP id q72so983995pjq.2
 for <xen-devel@lists.xenproject.org>; Mon, 08 Feb 2021 22:21:44 -0800 (PST)
Received: from localhost ([2401:fa00:1:10:a106:46e1:a999:81df])
 by smtp.gmail.com with UTF8SMTPSA id s23sm21047537pgj.29.2021.02.08.22.21.37
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 08 Feb 2021 22:21:43 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 91bc743f-1ed8-406a-bc96-e3b732d44b4b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=d/9o6gw27S6UN8MkYq1blVrCI9BLsI4+H00tlbC8EBE=;
        b=favimtx+6weGe8M555lmQOBhCoy54oReElTcJzpJXeVeBgHQUImtrzhR4xL2PNQ4hj
         B61sBGHNGjtqai17pbRiKkfpKFlyf3N7SfEZNhj9p694hdjwUDNvbaxplQIoX3Y2HB75
         oM9e2a4FzCjXRLbXyImmgB1sz2HO9c4H1yk+0=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=d/9o6gw27S6UN8MkYq1blVrCI9BLsI4+H00tlbC8EBE=;
        b=oOPYjOMuUbsGh+ZZcLTtdBcAqogbsFAtjMv7wYHLlO41PxODMaN6oDDWrf1fJcOZ24
         G6FbiXF8pHL/XupLhsmMsH5II2KhOktWMAr2kfXq34jMKqxn5mRgac422hiG9pah+UCg
         7dffYDO2xtBMAkyp7TrFdsyEWBJCc5kRsy00vNqFq8StkDUIgTFI407RBfxydV0wJIR3
         Cu6Ii42vF88QwPw1BrBxnOmQglMw+ppBdbCNoO7eQOLeEl/Llv3ZDXmIL9a4tkDaBBti
         DvKQBo0pRX/dVOLSrBWJsM3sxr9jA2kn9dswlpzrGTNzxiKxZgrRiaSxxVZgcFEfORIK
         n7pA==
X-Gm-Message-State: AOAM532lS2c7h9zcIyJFZdo933askUuFTf0Ir6M/wliHljaPXpzs75NK
	fwo4Fwf+P+chzLlydLBYFAIVYA==
X-Google-Smtp-Source: ABdhPJz0oyYOZadpQA/7+OnNYsgqbz6PN7ozvQBTUr4giRuwP1fH3QYLnHESLb3E3I02eo5S/4aG/w==
X-Received: by 2002:a17:90a:4fe4:: with SMTP id q91mr2479504pjh.165.1612851703796;
        Mon, 08 Feb 2021 22:21:43 -0800 (PST)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	Claire Chang <tientzu@chromium.org>
Subject: [PATCH v4 00/14] Restricted DMA
Date: Tue,  9 Feb 2021 14:21:17 +0800
Message-Id: <20210209062131.2300005-1-tientzu@chromium.org>
X-Mailer: git-send-email 2.30.0.478.g8a0d178c01-goog
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This series implements mitigations for lack of DMA access control on
systems without an IOMMU, which could result in the DMA accessing the
system memory at unexpected times and/or unexpected addresses, possibly
leading to data leakage or corruption.

For example, we plan to use the PCI-e bus for Wi-Fi, and that PCI-e bus is
not behind an IOMMU. As PCI-e, by design, gives the device full access to
system memory, a vulnerability in the Wi-Fi firmware could easily escalate
to a full system exploit (remote Wi-Fi exploits: [1a] and [1b] show a full
exploit chain; see also [2] and [3]).

To mitigate the security concerns, we introduce restricted DMA. Restricted
DMA utilizes the existing swiotlb to bounce streaming DMA in and out of a
specially allocated region and does memory allocation from the same region.
The feature on its own provides a basic level of protection against the DMA
overwriting buffer contents at unexpected times. However, to protect
against general data leakage and system memory corruption, the system needs
to provide a way to restrict the DMA to a predefined memory region (this is
usually done at the firmware level, e.g. the MPU in ATF on some Arm
platforms [4]).

[1a] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_4.html
[1b] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_11.html
[2] https://blade.tencent.com/en/advisories/qualpwn/
[3] https://www.bleepingcomputer.com/news/security/vulnerabilities-found-in-highly-popular-firmware-for-wifi-chips/
[4] https://github.com/ARM-software/arm-trusted-firmware/blob/master/plat/mediatek/mt8183/drivers/emi_mpu/emi_mpu.c#L132
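
The bounce-buffering scheme described above can be sketched in plain C:
streaming DMA payloads are copied through a slot in a fixed pool, so the
device only ever sees addresses inside that pool. This is an illustrative
model only; `pool_map` and `pool_unmap` are invented names for this sketch,
not the kernel's swiotlb API.

```c
#include <stddef.h>
#include <string.h>

/* Illustrative stand-in for the reserved "restricted" region: the
 * device is only allowed to touch this buffer. */
#define POOL_SIZE 4096
static unsigned char pool[POOL_SIZE];
static size_t pool_used;

/* "Map" a CPU buffer for DMA: bounce the payload into the pool and
 * return the in-pool offset the device would be handed. */
static long pool_map(const void *cpu_buf, size_t len)
{
    long off;

    if (pool_used + len > POOL_SIZE)
        return -1;              /* pool exhausted */
    memcpy(pool + pool_used, cpu_buf, len);
    off = (long)pool_used;
    pool_used += len;
    return off;
}

/* "Unmap" after a device-to-CPU transfer: bounce the data back out. */
static void pool_unmap(void *cpu_buf, long off, size_t len)
{
    memcpy(cpu_buf, pool + off, len);
}
```

A real implementation also hands out device-visible DMA addresses and
manages slot lifetimes; the sketch keeps only the copy-in/copy-out essence.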

Claire Chang (14):
  swiotlb: Remove external access to io_tlb_start
  swiotlb: Move is_swiotlb_buffer() to swiotlb.c
  swiotlb: Add struct swiotlb
  swiotlb: Refactor swiotlb_late_init_with_tbl
  swiotlb: Add DMA_RESTRICTED_POOL
  swiotlb: Add restricted DMA pool
  swiotlb: Update swiotlb API to gain a struct device argument
  swiotlb: Use restricted DMA pool if available
  swiotlb: Refactor swiotlb_tbl_{map,unmap}_single
  dma-direct: Add a new wrapper __dma_direct_free_pages()
  swiotlb: Add is_dev_swiotlb_force()
  swiotlb: Add restricted DMA alloc/free support.
  dt-bindings: of: Add restricted DMA pool
  of: Add plumbing for restricted DMA pool

 .../reserved-memory/reserved-memory.txt       |  24 +
 arch/powerpc/platforms/pseries/svm.c          |   4 +-
 drivers/iommu/dma-iommu.c                     |  12 +-
 drivers/of/address.c                          |  25 +
 drivers/of/device.c                           |   3 +
 drivers/of/of_private.h                       |   5 +
 drivers/xen/swiotlb-xen.c                     |   4 +-
 include/linux/device.h                        |   4 +
 include/linux/swiotlb.h                       |  32 +-
 kernel/dma/Kconfig                            |  14 +
 kernel/dma/direct.c                           |  51 +-
 kernel/dma/direct.h                           |   8 +-
 kernel/dma/swiotlb.c                          | 636 ++++++++++++------
 13 files changed, 582 insertions(+), 240 deletions(-)

-- 

v4:
  - Fix spinlock bad magic
  - Use rmem->name for debugfs entry
  - Address the comments in v3

v3:
  Using only one reserved memory region for both streaming DMA and memory
  allocation.
  https://lore.kernel.org/patchwork/cover/1360992/

v2:
  Building on top of swiotlb.
  https://lore.kernel.org/patchwork/cover/1280705/

v1:
  Using dma_map_ops.
  https://lore.kernel.org/patchwork/cover/1271660/

2.30.0.478.g8a0d178c01-goog



From xen-devel-bounces@lists.xenproject.org Tue Feb 09 06:22:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 06:22:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83128.153975 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9MPQ-0002DA-Tv; Tue, 09 Feb 2021 06:22:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83128.153975; Tue, 09 Feb 2021 06:22:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9MPQ-0002D2-Ql; Tue, 09 Feb 2021 06:22:00 +0000
Received: by outflank-mailman (input) for mailman id 83128;
 Tue, 09 Feb 2021 06:21:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CyDg=HL=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1l9MPP-0002CP-93
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 06:21:59 +0000
Received: from mail-pl1-x62e.google.com (unknown [2607:f8b0:4864:20::62e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8a10e0d0-ec6a-498b-990d-14c632b15aae;
 Tue, 09 Feb 2021 06:21:58 +0000 (UTC)
Received: by mail-pl1-x62e.google.com with SMTP id y10so9168900plk.7
 for <xen-devel@lists.xenproject.org>; Mon, 08 Feb 2021 22:21:58 -0800 (PST)
Received: from localhost ([2401:fa00:1:10:a106:46e1:a999:81df])
 by smtp.gmail.com with UTF8SMTPSA id a24sm22136125pff.18.2021.02.08.22.21.52
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 08 Feb 2021 22:21:57 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8a10e0d0-ec6a-498b-990d-14c632b15aae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=ypYSGMF0E/cBAHmG+5MgTYKeSo3o0eLQF2YTe8sYrHU=;
        b=nuyunksJVzuVGUJwTZPlPVCVdAk7PqLR5/eJIWK+czvsKm+Rv5adeEB/7QzXpfJ6Qb
         eTQmwh//uII6ybrnF53ZsBr2I2SswetGJe9SJ79VWxXwIXylAfF5A0x6o0MD+aeanu4S
         AHLyX/KnrhYHKwiWZolFO//7UXaKKEQSkWti8=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=ypYSGMF0E/cBAHmG+5MgTYKeSo3o0eLQF2YTe8sYrHU=;
        b=XL3vaxdA7H1QeHirmFBt1FjX1pSjMloNotxMPxzIq9A+C3osCdaQ1AGdijRp34cvA0
         6N1fYeu+0/WNfQNPiOTd6OgYq235Fcr8q17OEyygSEyvAVe2N0fDHZ8t+G/w+/7PHmUn
         aRhpuNS7TCuKP2+67suogEpxFk0VZPDh/EnUZF8YEAl89Fgo5+7eHGNuuL6/WEXPV77X
         0u+enS5I4hlQd20104pmq9dj58dy4tjs1uIfpGxzxGWa+rMwph1vZnW3ZeVG+C4zgEV8
         O9hDFUvgeAXk50dGqXiT9NQfma1eTlC02rRZMIzfeF3WpCZ+ACU1z8FIWlD3ny90VZ3K
         qKyw==
X-Gm-Message-State: AOAM5330TSK7fAZirgG8TjA+e5hbcwc6M+QAdxNsreoMT48GFYeHrxo3
	gtfGGAgYi+rr4b5Qzrre3PuLSw==
X-Google-Smtp-Source: ABdhPJywOf1kdt/G4UCY0rOH88voPYoRmAlcet1VV4EMdbEytPdXO083Yd5o1179zNV2rDMmBilckQ==
X-Received: by 2002:a17:90a:df87:: with SMTP id p7mr663033pjv.99.1612851717915;
        Mon, 08 Feb 2021 22:21:57 -0800 (PST)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	Claire Chang <tientzu@chromium.org>
Subject: [PATCH v4 02/14] swiotlb: Move is_swiotlb_buffer() to swiotlb.c
Date: Tue,  9 Feb 2021 14:21:19 +0800
Message-Id: <20210209062131.2300005-3-tientzu@chromium.org>
X-Mailer: git-send-email 2.30.0.478.g8a0d178c01-goog
In-Reply-To: <20210209062131.2300005-1-tientzu@chromium.org>
References: <20210209062131.2300005-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Move is_swiotlb_buffer() to swiotlb.c and make io_tlb_{start,end}
static, so we can entirely hide struct swiotlb inside of swiotlb.c in
the following patches.
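The predicate being moved is a simple half-open range check against the
pool bounds. A standalone sketch, with `uint64_t` standing in for the
kernel's `phys_addr_t` and made-up pool bounds:

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t phys_addr_t;   /* stand-in for the kernel type */

/* Made-up bounds for illustration. */
static phys_addr_t io_tlb_start = 0x1000, io_tlb_end = 0x2000;

/* Half-open interval: the start address is inside the pool, the end
 * address is the first byte past it. */
static bool is_swiotlb_buffer(phys_addr_t paddr)
{
    return paddr >= io_tlb_start && paddr < io_tlb_end;
}
```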

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 include/linux/swiotlb.h | 7 +------
 kernel/dma/swiotlb.c    | 7 ++++++-
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 83200f3b042a..041611bf3c2a 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -70,13 +70,8 @@ dma_addr_t swiotlb_map(struct device *dev, phys_addr_t phys,
 
 #ifdef CONFIG_SWIOTLB
 extern enum swiotlb_force swiotlb_force;
-extern phys_addr_t io_tlb_start, io_tlb_end;
-
-static inline bool is_swiotlb_buffer(phys_addr_t paddr)
-{
-	return paddr >= io_tlb_start && paddr < io_tlb_end;
-}
 
+bool is_swiotlb_buffer(phys_addr_t paddr);
 void __init swiotlb_exit(void);
 unsigned int swiotlb_max_segment(void);
 size_t swiotlb_max_mapping_size(struct device *dev);
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index e180211f6ad9..678490d39e55 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -69,7 +69,7 @@ enum swiotlb_force swiotlb_force;
  * swiotlb_tbl_sync_single_*, to see if the memory was in fact allocated by this
  * API.
  */
-phys_addr_t io_tlb_start, io_tlb_end;
+static phys_addr_t io_tlb_start, io_tlb_end;
 
 /*
  * The number of IO TLB blocks (in groups of 64) between io_tlb_start and
@@ -719,6 +719,11 @@ bool is_swiotlb_active(void)
 	return io_tlb_end != 0;
 }
 
+bool is_swiotlb_buffer(phys_addr_t paddr)
+{
+	return paddr >= io_tlb_start && paddr < io_tlb_end;
+}
+
 phys_addr_t get_swiotlb_start(void)
 {
 	return io_tlb_start;
-- 
2.30.0.478.g8a0d178c01-goog



From xen-devel-bounces@lists.xenproject.org Tue Feb 09 06:22:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 06:22:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83127.153962 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9MPJ-00029b-Kd; Tue, 09 Feb 2021 06:21:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83127.153962; Tue, 09 Feb 2021 06:21:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9MPJ-00029S-HL; Tue, 09 Feb 2021 06:21:53 +0000
Received: by outflank-mailman (input) for mailman id 83127;
 Tue, 09 Feb 2021 06:21:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CyDg=HL=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1l9MPI-000298-9o
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 06:21:52 +0000
Received: from mail-pl1-x633.google.com (unknown [2607:f8b0:4864:20::633])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 88784016-37e5-4c45-b58f-4d3d30475329;
 Tue, 09 Feb 2021 06:21:51 +0000 (UTC)
Received: by mail-pl1-x633.google.com with SMTP id a16so9161705plh.8
 for <xen-devel@lists.xenproject.org>; Mon, 08 Feb 2021 22:21:51 -0800 (PST)
Received: from localhost ([2401:fa00:1:10:a106:46e1:a999:81df])
 by smtp.gmail.com with UTF8SMTPSA id y26sm21067426pgk.42.2021.02.08.22.21.45
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 08 Feb 2021 22:21:50 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 88784016-37e5-4c45-b58f-4d3d30475329
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=VQjRLnhkPorvAVKaOPejw74RXuDyQjfs10AJ4S/GkGY=;
        b=iIl5SP7dar66QYJx01a8eKBzk/OkK/x97Udeqkp8z6Zecwu6gJbtr4+R7AJhbJUsET
         mW/NYKe+92F4cTf+MWb8Kjs3DEc/fRUB2Tuk/IK9lw1i5dbH+o71qP4rUdYAStwKxzrf
         aFKGX0hir3NozqAR+M+IQwDL2zs+YG9OyvX5A=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=VQjRLnhkPorvAVKaOPejw74RXuDyQjfs10AJ4S/GkGY=;
        b=OqWBdvNlNwl42JVqtcBAeIGbXKhe829U+cvVF7V9AS0dBjbhFOHGJGFT7NsajsV1gC
         ryf4QvN4PRNO9Iq6IlB0QgyrePoJpKiU8YXAggHqyXYx54rC0kP0DkQgMa5abiPoDZih
         2yp0pp/1GErKlhbp/1CcWkXk3xYp+8PMQo4OVV1CKubyndWhPjRFv28sQ7G51MoZw8mv
         V6qfAPtNunbt9Dlq+JY5ntlsakmzgS99K6ZDcuibSUrkdNqqtqnwkXnDL4ayS8TSuzYE
         vEc2E6Fy96tNHzrVTJ0ZWG7DDxuYelhNAp0O6d3tULndJXyPQoxnBNm+DzkOw4mqqs8N
         XVgA==
X-Gm-Message-State: AOAM533OuaCKZIPNhB/Kd+lljpGP3goRwE2S/a2OEDVbBgqS5EvjPDWX
	vT6RS3Vg/KStdtlCwy/n140hHA==
X-Google-Smtp-Source: ABdhPJzMs/GVP3LhI2pHnVT9gB2t4EDZQ5t7O4c7u18G3hsJLKUa+IhCtg1SpLtI9Cft7lrwlj4fPw==
X-Received: by 2002:a17:90a:ad09:: with SMTP id r9mr2555446pjq.51.1612851710886;
        Mon, 08 Feb 2021 22:21:50 -0800 (PST)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	Claire Chang <tientzu@chromium.org>
Subject: [PATCH v4 01/14] swiotlb: Remove external access to io_tlb_start
Date: Tue,  9 Feb 2021 14:21:18 +0800
Message-Id: <20210209062131.2300005-2-tientzu@chromium.org>
X-Mailer: git-send-email 2.30.0.478.g8a0d178c01-goog
In-Reply-To: <20210209062131.2300005-1-tientzu@chromium.org>
References: <20210209062131.2300005-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a new function, get_swiotlb_start(), and remove external access to
io_tlb_start, so we can entirely hide struct swiotlb inside of swiotlb.c
in the following patches.
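The change is plain encapsulation: the variable becomes file-local and
outside code goes through an accessor, so later patches can restructure
the storage without touching callers. A minimal sketch, with a stand-in
type and an arbitrary value:

```c
#include <stdint.h>

typedef uint64_t phys_addr_t;   /* stand-in for the kernel type */

/* File-local now: external code can no longer reference it directly. */
static phys_addr_t io_tlb_start = 0x1000;

/* The only way out-of-file callers learn the pool start. */
phys_addr_t get_swiotlb_start(void)
{
    return io_tlb_start;
}
```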

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 arch/powerpc/platforms/pseries/svm.c | 4 ++--
 drivers/xen/swiotlb-xen.c            | 4 ++--
 include/linux/swiotlb.h              | 1 +
 kernel/dma/swiotlb.c                 | 5 +++++
 4 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/platforms/pseries/svm.c b/arch/powerpc/platforms/pseries/svm.c
index 7b739cc7a8a9..c10c51d49f3d 100644
--- a/arch/powerpc/platforms/pseries/svm.c
+++ b/arch/powerpc/platforms/pseries/svm.c
@@ -55,8 +55,8 @@ void __init svm_swiotlb_init(void)
 	if (vstart && !swiotlb_init_with_tbl(vstart, io_tlb_nslabs, false))
 		return;
 
-	if (io_tlb_start)
-		memblock_free_early(io_tlb_start,
+	if (vstart)
+		memblock_free_early(vstart,
 				    PAGE_ALIGN(io_tlb_nslabs << IO_TLB_SHIFT));
 	panic("SVM: Cannot allocate SWIOTLB buffer");
 }
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 2b385c1b4a99..91f8c68d1a9b 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -192,8 +192,8 @@ int __ref xen_swiotlb_init(int verbose, bool early)
 	/*
 	 * IO TLB memory already allocated. Just use it.
 	 */
-	if (io_tlb_start != 0) {
-		xen_io_tlb_start = phys_to_virt(io_tlb_start);
+	if (is_swiotlb_active()) {
+		xen_io_tlb_start = phys_to_virt(get_swiotlb_start());
 		goto end;
 	}
 
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index d9c9fc9ca5d2..83200f3b042a 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -81,6 +81,7 @@ void __init swiotlb_exit(void);
 unsigned int swiotlb_max_segment(void);
 size_t swiotlb_max_mapping_size(struct device *dev);
 bool is_swiotlb_active(void);
+phys_addr_t get_swiotlb_start(void);
 void __init swiotlb_adjust_size(unsigned long new_size);
 #else
 #define swiotlb_force SWIOTLB_NO_FORCE
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 7c42df6e6100..e180211f6ad9 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -719,6 +719,11 @@ bool is_swiotlb_active(void)
 	return io_tlb_end != 0;
 }
 
+phys_addr_t get_swiotlb_start(void)
+{
+	return io_tlb_start;
+}
+
 #ifdef CONFIG_DEBUG_FS
 
 static int __init swiotlb_create_debugfs(void)
--

This can be dropped once Christoph's swiotlb cleanups land.
https://lore.kernel.org/linux-iommu/20210207160934.2955931-1-hch@lst.de/T/#m7124f29b6076d462101fcff6433295157621da09 

2.30.0.478.g8a0d178c01-goog



From xen-devel-bounces@lists.xenproject.org Tue Feb 09 06:22:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 06:22:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83129.153987 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9MPa-0002JB-8H; Tue, 09 Feb 2021 06:22:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83129.153987; Tue, 09 Feb 2021 06:22:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9MPa-0002J4-3l; Tue, 09 Feb 2021 06:22:10 +0000
Received: by outflank-mailman (input) for mailman id 83129;
 Tue, 09 Feb 2021 06:22:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CyDg=HL=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1l9MPY-0002IN-Km
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 06:22:08 +0000
Received: from mail-pl1-x631.google.com (unknown [2607:f8b0:4864:20::631])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8ed5e34b-d72a-47cf-80b7-dd254de9f3a4;
 Tue, 09 Feb 2021 06:22:06 +0000 (UTC)
Received: by mail-pl1-x631.google.com with SMTP id e9so9176012plh.3
 for <xen-devel@lists.xenproject.org>; Mon, 08 Feb 2021 22:22:06 -0800 (PST)
Received: from localhost ([2401:fa00:1:10:a106:46e1:a999:81df])
 by smtp.gmail.com with UTF8SMTPSA id b25sm13766245pfp.26.2021.02.08.22.21.59
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 08 Feb 2021 22:22:04 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8ed5e34b-d72a-47cf-80b7-dd254de9f3a4
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=XD+CQBaV23BDlNnO+PpvLMFfqVc2yOMxmH+MUGKPs6w=;
        b=D8UmSO/ERP7kOKUSbGuHANSz9VFDE05biNMmXqRTUbET3+rkQ5LLNdZr4BHcWQ7G7g
         mYSYnczi/RbEbwT/nkTNlbF/EFD+y/rGOnS3U4Dd+yM4HqpkpAWccFAu6G2OdcYUfka0
         rFGRq0QsZwzAp4bjrdVueBqqe1LrrbcHvK6uA=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=XD+CQBaV23BDlNnO+PpvLMFfqVc2yOMxmH+MUGKPs6w=;
        b=DpCtysdIyXWx+sOpfGHlbJ/JQEAAgniYtO2kU+hzf8VKO6tMt3MgCc9Jsh4lW/op0D
         rx5aHhkPbhha9PnV0WduqWWEY64bVX08FPyl1KBKnvdkx2TBB7aQrc6s48z99J8ByzF1
         8GBPk2zB9cR1TUqQD8Udw/NJMv9QPXcteCojG723nlOalUA2qJfEh2QYvj+DKgi5XzPJ
         CRta4PRSliU+aHKE1HnRgpGqGKPcvJ3cnsmndT+NRfAbt4W7qDoYTcRNeNGGsU66rmIf
         5JB2pElCsDUe+m4AhPCgIoaeEsJOO3bie0RoYKtWPtoat8nxXw/gv7wFOXYemDNMOgMH
         IJKw==
X-Gm-Message-State: AOAM530gvLEgia0MRUsHaqxaCZZYi7sZNVWl2ognjiHrG9Rg8OjcLfU8
	6hPsW0loQgsh12VwH/4Zv3ncZw==
X-Google-Smtp-Source: ABdhPJwiCoGV2Q0lgiH5vbozu5JoZAEHZGIKhYvfLKOhwI17DJRAwojDGzNQdyAdFGBmg/ai3bJumQ==
X-Received: by 2002:a17:90a:3d42:: with SMTP id o2mr2498572pjf.173.1612851725714;
        Mon, 08 Feb 2021 22:22:05 -0800 (PST)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	Claire Chang <tientzu@chromium.org>
Subject: [PATCH v4 03/14] swiotlb: Add struct swiotlb
Date: Tue,  9 Feb 2021 14:21:20 +0800
Message-Id: <20210209062131.2300005-4-tientzu@chromium.org>
X-Mailer: git-send-email 2.30.0.478.g8a0d178c01-goog
In-Reply-To: <20210209062131.2300005-1-tientzu@chromium.org>
References: <20210209062131.2300005-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a new struct, swiotlb, as the IO TLB memory pool descriptor and
move the relevant global variables into it.
This will be useful later for supporting a restricted DMA pool.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 kernel/dma/swiotlb.c | 327 +++++++++++++++++++++++--------------------
 1 file changed, 172 insertions(+), 155 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 678490d39e55..28b7bfe7a2a8 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -61,33 +61,43 @@
  * allocate a contiguous 1MB, we're probably in trouble anyway.
  */
 #define IO_TLB_MIN_SLABS ((1<<20) >> IO_TLB_SHIFT)
+#define INVALID_PHYS_ADDR (~(phys_addr_t)0)
 
 enum swiotlb_force swiotlb_force;
 
 /*
- * Used to do a quick range check in swiotlb_tbl_unmap_single and
- * swiotlb_tbl_sync_single_*, to see if the memory was in fact allocated by this
- * API.
- */
-static phys_addr_t io_tlb_start, io_tlb_end;
-
-/*
- * The number of IO TLB blocks (in groups of 64) between io_tlb_start and
- * io_tlb_end.  This is command line adjustable via setup_io_tlb_npages.
- */
-static unsigned long io_tlb_nslabs;
-
-/*
- * The number of used IO TLB block
- */
-static unsigned long io_tlb_used;
-
-/*
- * This is a free list describing the number of free entries available from
- * each index
+ * struct swiotlb - Software IO TLB Memory Pool Descriptor
+ *
+ * @start:      The start address of the swiotlb memory pool. Used to do a quick
+ *              range check to see if the memory was in fact allocated by this
+ *              API.
+ * @end:        The end address of the swiotlb memory pool. Used to do a quick
+ *              range check to see if the memory was in fact allocated by this
+ *              API.
+ * @nslabs:     The number of IO TLB blocks (in groups of 64) between @start and
+ *              @end. This is command line adjustable via setup_io_tlb_npages.
+ * @used:       The number of used IO TLB blocks.
+ * @list:       The free list describing the number of free entries available
+ *              from each index.
+ * @index:      The index to start searching in the next round.
+ * @orig_addr:  The original address corresponding to a mapped entry for the
+ *              sync operations.
+ * @lock:       The lock to protect the above data structures in the map and
+ *              unmap calls.
+ * @debugfs:    The dentry of this pool's debugfs directory.
  */
-static unsigned int *io_tlb_list;
-static unsigned int io_tlb_index;
+struct swiotlb {
+	phys_addr_t start;
+	phys_addr_t end;
+	unsigned long nslabs;
+	unsigned long used;
+	unsigned int *list;
+	unsigned int index;
+	phys_addr_t *orig_addr;
+	spinlock_t lock;
+	struct dentry *debugfs;
+};
+static struct swiotlb default_swiotlb;
 
 /*
  * Max segment that we can provide which (if pages are contingous) will
@@ -95,27 +105,17 @@ static unsigned int io_tlb_index;
  */
 static unsigned int max_segment;
 
-/*
- * We need to save away the original address corresponding to a mapped entry
- * for the sync operations.
- */
-#define INVALID_PHYS_ADDR (~(phys_addr_t)0)
-static phys_addr_t *io_tlb_orig_addr;
-
-/*
- * Protect the above data structures in the map and unmap calls
- */
-static DEFINE_SPINLOCK(io_tlb_lock);
-
 static int late_alloc;
 
 static int __init
 setup_io_tlb_npages(char *str)
 {
+	struct swiotlb *swiotlb = &default_swiotlb;
+
 	if (isdigit(*str)) {
-		io_tlb_nslabs = simple_strtoul(str, &str, 0);
+		swiotlb->nslabs = simple_strtoul(str, &str, 0);
 		/* avoid tail segment of size < IO_TLB_SEGSIZE */
-		io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
+		swiotlb->nslabs = ALIGN(swiotlb->nslabs, IO_TLB_SEGSIZE);
 	}
 	if (*str == ',')
 		++str;
@@ -123,7 +123,7 @@ setup_io_tlb_npages(char *str)
 		swiotlb_force = SWIOTLB_FORCE;
 	} else if (!strcmp(str, "noforce")) {
 		swiotlb_force = SWIOTLB_NO_FORCE;
-		io_tlb_nslabs = 1;
+		swiotlb->nslabs = 1;
 	}
 
 	return 0;
@@ -134,7 +134,7 @@ static bool no_iotlb_memory;
 
 unsigned long swiotlb_nr_tbl(void)
 {
-	return unlikely(no_iotlb_memory) ? 0 : io_tlb_nslabs;
+	return unlikely(no_iotlb_memory) ? 0 : default_swiotlb.nslabs;
 }
 EXPORT_SYMBOL_GPL(swiotlb_nr_tbl);
 
@@ -156,13 +156,14 @@ unsigned long swiotlb_size_or_default(void)
 {
 	unsigned long size;
 
-	size = io_tlb_nslabs << IO_TLB_SHIFT;
+	size = default_swiotlb.nslabs << IO_TLB_SHIFT;
 
 	return size ? size : (IO_TLB_DEFAULT_SIZE);
 }
 
 void __init swiotlb_adjust_size(unsigned long new_size)
 {
+	struct swiotlb *swiotlb = &default_swiotlb;
 	unsigned long size;
 
 	/*
@@ -170,10 +171,10 @@ void __init swiotlb_adjust_size(unsigned long new_size)
 	 * architectures such as those supporting memory encryption to
 	 * adjust/expand SWIOTLB size for their use.
 	 */
-	if (!io_tlb_nslabs) {
+	if (!swiotlb->nslabs) {
 		size = ALIGN(new_size, 1 << IO_TLB_SHIFT);
-		io_tlb_nslabs = size >> IO_TLB_SHIFT;
-		io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
+		swiotlb->nslabs = size >> IO_TLB_SHIFT;
+		swiotlb->nslabs = ALIGN(swiotlb->nslabs, IO_TLB_SEGSIZE);
 
 		pr_info("SWIOTLB bounce buffer size adjusted to %luMB", size >> 20);
 	}
@@ -181,14 +182,15 @@ void __init swiotlb_adjust_size(unsigned long new_size)
 
 void swiotlb_print_info(void)
 {
-	unsigned long bytes = io_tlb_nslabs << IO_TLB_SHIFT;
+	struct swiotlb *swiotlb = &default_swiotlb;
+	unsigned long bytes = swiotlb->nslabs << IO_TLB_SHIFT;
 
 	if (no_iotlb_memory) {
 		pr_warn("No low mem\n");
 		return;
 	}
 
-	pr_info("mapped [mem %pa-%pa] (%luMB)\n", &io_tlb_start, &io_tlb_end,
+	pr_info("mapped [mem %pa-%pa] (%luMB)\n", &swiotlb->start, &swiotlb->end,
 	       bytes >> 20);
 }
 
@@ -200,57 +202,61 @@ void swiotlb_print_info(void)
  */
 void __init swiotlb_update_mem_attributes(void)
 {
+	struct swiotlb *swiotlb = &default_swiotlb;
 	void *vaddr;
 	unsigned long bytes;
 
 	if (no_iotlb_memory || late_alloc)
 		return;
 
-	vaddr = phys_to_virt(io_tlb_start);
-	bytes = PAGE_ALIGN(io_tlb_nslabs << IO_TLB_SHIFT);
+	vaddr = phys_to_virt(swiotlb->start);
+	bytes = PAGE_ALIGN(swiotlb->nslabs << IO_TLB_SHIFT);
 	set_memory_decrypted((unsigned long)vaddr, bytes >> PAGE_SHIFT);
 	memset(vaddr, 0, bytes);
 }
 
 int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
 {
+	struct swiotlb *swiotlb = &default_swiotlb;
 	unsigned long i, bytes;
 	size_t alloc_size;
 
 	bytes = nslabs << IO_TLB_SHIFT;
 
-	io_tlb_nslabs = nslabs;
-	io_tlb_start = __pa(tlb);
-	io_tlb_end = io_tlb_start + bytes;
+	swiotlb->nslabs = nslabs;
+	swiotlb->start = __pa(tlb);
+	swiotlb->end = swiotlb->start + bytes;
 
 	/*
 	 * Allocate and initialize the free list array.  This array is used
 	 * to find contiguous free memory regions of size up to IO_TLB_SEGSIZE
-	 * between io_tlb_start and io_tlb_end.
+	 * between swiotlb->start and swiotlb->end.
 	 */
-	alloc_size = PAGE_ALIGN(io_tlb_nslabs * sizeof(int));
-	io_tlb_list = memblock_alloc(alloc_size, PAGE_SIZE);
-	if (!io_tlb_list)
+	alloc_size = PAGE_ALIGN(swiotlb->nslabs * sizeof(int));
+	swiotlb->list = memblock_alloc(alloc_size, PAGE_SIZE);
+	if (!swiotlb->list)
 		panic("%s: Failed to allocate %zu bytes align=0x%lx\n",
 		      __func__, alloc_size, PAGE_SIZE);
 
-	alloc_size = PAGE_ALIGN(io_tlb_nslabs * sizeof(phys_addr_t));
-	io_tlb_orig_addr = memblock_alloc(alloc_size, PAGE_SIZE);
-	if (!io_tlb_orig_addr)
+	alloc_size = PAGE_ALIGN(swiotlb->nslabs * sizeof(phys_addr_t));
+	swiotlb->orig_addr = memblock_alloc(alloc_size, PAGE_SIZE);
+	if (!swiotlb->orig_addr)
 		panic("%s: Failed to allocate %zu bytes align=0x%lx\n",
 		      __func__, alloc_size, PAGE_SIZE);
 
-	for (i = 0; i < io_tlb_nslabs; i++) {
-		io_tlb_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
-		io_tlb_orig_addr[i] = INVALID_PHYS_ADDR;
+	for (i = 0; i < swiotlb->nslabs; i++) {
+		swiotlb->list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
+		swiotlb->orig_addr[i] = INVALID_PHYS_ADDR;
 	}
-	io_tlb_index = 0;
+	swiotlb->index = 0;
 	no_iotlb_memory = false;
 
 	if (verbose)
 		swiotlb_print_info();
 
-	swiotlb_set_max_segment(io_tlb_nslabs << IO_TLB_SHIFT);
+	swiotlb_set_max_segment(swiotlb->nslabs << IO_TLB_SHIFT);
+	spin_lock_init(&swiotlb->lock);
+
 	return 0;
 }
 
@@ -261,26 +267,27 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
 void  __init
 swiotlb_init(int verbose)
 {
+	struct swiotlb *swiotlb = &default_swiotlb;
 	size_t default_size = IO_TLB_DEFAULT_SIZE;
 	unsigned char *vstart;
 	unsigned long bytes;
 
-	if (!io_tlb_nslabs) {
-		io_tlb_nslabs = (default_size >> IO_TLB_SHIFT);
-		io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
+	if (!swiotlb->nslabs) {
+		swiotlb->nslabs = (default_size >> IO_TLB_SHIFT);
+		swiotlb->nslabs = ALIGN(swiotlb->nslabs, IO_TLB_SEGSIZE);
 	}
 
-	bytes = io_tlb_nslabs << IO_TLB_SHIFT;
+	bytes = swiotlb->nslabs << IO_TLB_SHIFT;
 
 	/* Get IO TLB memory from the low pages */
 	vstart = memblock_alloc_low(PAGE_ALIGN(bytes), PAGE_SIZE);
-	if (vstart && !swiotlb_init_with_tbl(vstart, io_tlb_nslabs, verbose))
+	if (vstart && !swiotlb_init_with_tbl(vstart, swiotlb->nslabs, verbose))
 		return;
 
-	if (io_tlb_start) {
-		memblock_free_early(io_tlb_start,
-				    PAGE_ALIGN(io_tlb_nslabs << IO_TLB_SHIFT));
-		io_tlb_start = 0;
+	if (swiotlb->start) {
+		memblock_free_early(swiotlb->start,
+				    PAGE_ALIGN(swiotlb->nslabs << IO_TLB_SHIFT));
+		swiotlb->start = 0;
 	}
 	pr_warn("Cannot allocate buffer");
 	no_iotlb_memory = true;
@@ -294,22 +301,23 @@ swiotlb_init(int verbose)
 int
 swiotlb_late_init_with_default_size(size_t default_size)
 {
-	unsigned long bytes, req_nslabs = io_tlb_nslabs;
+	struct swiotlb *swiotlb = &default_swiotlb;
+	unsigned long bytes, req_nslabs = swiotlb->nslabs;
 	unsigned char *vstart = NULL;
 	unsigned int order;
 	int rc = 0;
 
-	if (!io_tlb_nslabs) {
-		io_tlb_nslabs = (default_size >> IO_TLB_SHIFT);
-		io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
+	if (!swiotlb->nslabs) {
+		swiotlb->nslabs = (default_size >> IO_TLB_SHIFT);
+		swiotlb->nslabs = ALIGN(swiotlb->nslabs, IO_TLB_SEGSIZE);
 	}
 
 	/*
 	 * Get IO TLB memory from the low pages
 	 */
-	order = get_order(io_tlb_nslabs << IO_TLB_SHIFT);
-	io_tlb_nslabs = SLABS_PER_PAGE << order;
-	bytes = io_tlb_nslabs << IO_TLB_SHIFT;
+	order = get_order(swiotlb->nslabs << IO_TLB_SHIFT);
+	swiotlb->nslabs = SLABS_PER_PAGE << order;
+	bytes = swiotlb->nslabs << IO_TLB_SHIFT;
 
 	while ((SLABS_PER_PAGE << order) > IO_TLB_MIN_SLABS) {
 		vstart = (void *)__get_free_pages(GFP_DMA | __GFP_NOWARN,
@@ -320,15 +328,15 @@ swiotlb_late_init_with_default_size(size_t default_size)
 	}
 
 	if (!vstart) {
-		io_tlb_nslabs = req_nslabs;
+		swiotlb->nslabs = req_nslabs;
 		return -ENOMEM;
 	}
 	if (order != get_order(bytes)) {
 		pr_warn("only able to allocate %ld MB\n",
 			(PAGE_SIZE << order) >> 20);
-		io_tlb_nslabs = SLABS_PER_PAGE << order;
+		swiotlb->nslabs = SLABS_PER_PAGE << order;
 	}
-	rc = swiotlb_late_init_with_tbl(vstart, io_tlb_nslabs);
+	rc = swiotlb_late_init_with_tbl(vstart, swiotlb->nslabs);
 	if (rc)
 		free_pages((unsigned long)vstart, order);
 
@@ -337,22 +345,25 @@ swiotlb_late_init_with_default_size(size_t default_size)
 
 static void swiotlb_cleanup(void)
 {
-	io_tlb_end = 0;
-	io_tlb_start = 0;
-	io_tlb_nslabs = 0;
+	struct swiotlb *swiotlb = &default_swiotlb;
+
+	swiotlb->end = 0;
+	swiotlb->start = 0;
+	swiotlb->nslabs = 0;
 	max_segment = 0;
 }
 
 int
 swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 {
+	struct swiotlb *swiotlb = &default_swiotlb;
 	unsigned long i, bytes;
 
 	bytes = nslabs << IO_TLB_SHIFT;
 
-	io_tlb_nslabs = nslabs;
-	io_tlb_start = virt_to_phys(tlb);
-	io_tlb_end = io_tlb_start + bytes;
+	swiotlb->nslabs = nslabs;
+	swiotlb->start = virt_to_phys(tlb);
+	swiotlb->end = swiotlb->start + bytes;
 
 	set_memory_decrypted((unsigned long)tlb, bytes >> PAGE_SHIFT);
 	memset(tlb, 0, bytes);
@@ -360,39 +371,40 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 	/*
 	 * Allocate and initialize the free list array.  This array is used
 	 * to find contiguous free memory regions of size up to IO_TLB_SEGSIZE
-	 * between io_tlb_start and io_tlb_end.
+	 * between swiotlb->start and swiotlb->end.
 	 */
-	io_tlb_list = (unsigned int *)__get_free_pages(GFP_KERNEL,
-	                              get_order(io_tlb_nslabs * sizeof(int)));
-	if (!io_tlb_list)
+	swiotlb->list = (unsigned int *)__get_free_pages(GFP_KERNEL,
+	                              get_order(swiotlb->nslabs * sizeof(int)));
+	if (!swiotlb->list)
 		goto cleanup3;
 
-	io_tlb_orig_addr = (phys_addr_t *)
+	swiotlb->orig_addr = (phys_addr_t *)
 		__get_free_pages(GFP_KERNEL,
-				 get_order(io_tlb_nslabs *
+				 get_order(swiotlb->nslabs *
 					   sizeof(phys_addr_t)));
-	if (!io_tlb_orig_addr)
+	if (!swiotlb->orig_addr)
 		goto cleanup4;
 
-	for (i = 0; i < io_tlb_nslabs; i++) {
-		io_tlb_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
-		io_tlb_orig_addr[i] = INVALID_PHYS_ADDR;
+	for (i = 0; i < swiotlb->nslabs; i++) {
+		swiotlb->list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
+		swiotlb->orig_addr[i] = INVALID_PHYS_ADDR;
 	}
-	io_tlb_index = 0;
+	swiotlb->index = 0;
 	no_iotlb_memory = false;
 
 	swiotlb_print_info();
 
 	late_alloc = 1;
 
-	swiotlb_set_max_segment(io_tlb_nslabs << IO_TLB_SHIFT);
+	swiotlb_set_max_segment(swiotlb->nslabs << IO_TLB_SHIFT);
+	spin_lock_init(&swiotlb->lock);
 
 	return 0;
 
 cleanup4:
-	free_pages((unsigned long)io_tlb_list, get_order(io_tlb_nslabs *
-	                                                 sizeof(int)));
-	io_tlb_list = NULL;
+	free_pages((unsigned long)swiotlb->list,
+		   get_order(swiotlb->nslabs * sizeof(int)));
+	swiotlb->list = NULL;
 cleanup3:
 	swiotlb_cleanup();
 	return -ENOMEM;
@@ -400,23 +412,25 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 
 void __init swiotlb_exit(void)
 {
-	if (!io_tlb_orig_addr)
+	struct swiotlb *swiotlb = &default_swiotlb;
+
+	if (!swiotlb->orig_addr)
 		return;
 
 	if (late_alloc) {
-		free_pages((unsigned long)io_tlb_orig_addr,
-			   get_order(io_tlb_nslabs * sizeof(phys_addr_t)));
-		free_pages((unsigned long)io_tlb_list, get_order(io_tlb_nslabs *
-								 sizeof(int)));
-		free_pages((unsigned long)phys_to_virt(io_tlb_start),
-			   get_order(io_tlb_nslabs << IO_TLB_SHIFT));
+		free_pages((unsigned long)swiotlb->orig_addr,
+			   get_order(swiotlb->nslabs * sizeof(phys_addr_t)));
+		free_pages((unsigned long)swiotlb->list,
+			   get_order(swiotlb->nslabs * sizeof(int)));
+		free_pages((unsigned long)phys_to_virt(swiotlb->start),
+			   get_order(swiotlb->nslabs << IO_TLB_SHIFT));
 	} else {
-		memblock_free_late(__pa(io_tlb_orig_addr),
-				   PAGE_ALIGN(io_tlb_nslabs * sizeof(phys_addr_t)));
-		memblock_free_late(__pa(io_tlb_list),
-				   PAGE_ALIGN(io_tlb_nslabs * sizeof(int)));
-		memblock_free_late(io_tlb_start,
-				   PAGE_ALIGN(io_tlb_nslabs << IO_TLB_SHIFT));
+		memblock_free_late(__pa(swiotlb->orig_addr),
+				   PAGE_ALIGN(swiotlb->nslabs * sizeof(phys_addr_t)));
+		memblock_free_late(__pa(swiotlb->list),
+				   PAGE_ALIGN(swiotlb->nslabs * sizeof(int)));
+		memblock_free_late(swiotlb->start,
+				   PAGE_ALIGN(swiotlb->nslabs << IO_TLB_SHIFT));
 	}
 	swiotlb_cleanup();
 }
@@ -465,7 +479,8 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
 		size_t mapping_size, size_t alloc_size,
 		enum dma_data_direction dir, unsigned long attrs)
 {
-	dma_addr_t tbl_dma_addr = phys_to_dma_unencrypted(hwdev, io_tlb_start);
+	struct swiotlb *swiotlb = &default_swiotlb;
+	dma_addr_t tbl_dma_addr = phys_to_dma_unencrypted(hwdev, swiotlb->start);
 	unsigned long flags;
 	phys_addr_t tlb_addr;
 	unsigned int nslots, stride, index, wrap;
@@ -516,13 +531,13 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
 	 * Find suitable number of IO TLB entries size that will fit this
 	 * request and allocate a buffer from that IO TLB pool.
 	 */
-	spin_lock_irqsave(&io_tlb_lock, flags);
+	spin_lock_irqsave(&swiotlb->lock, flags);
 
-	if (unlikely(nslots > io_tlb_nslabs - io_tlb_used))
+	if (unlikely(nslots > swiotlb->nslabs - swiotlb->used))
 		goto not_found;
 
-	index = ALIGN(io_tlb_index, stride);
-	if (index >= io_tlb_nslabs)
+	index = ALIGN(swiotlb->index, stride);
+	if (index >= swiotlb->nslabs)
 		index = 0;
 	wrap = index;
 
@@ -530,7 +545,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
 		while (iommu_is_span_boundary(index, nslots, offset_slots,
 					      max_slots)) {
 			index += stride;
-			if (index >= io_tlb_nslabs)
+			if (index >= swiotlb->nslabs)
 				index = 0;
 			if (index == wrap)
 				goto not_found;
@@ -541,40 +556,40 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
 		 * contiguous buffers, we allocate the buffers from that slot
 		 * and mark the entries as '0' indicating unavailable.
 		 */
-		if (io_tlb_list[index] >= nslots) {
+		if (swiotlb->list[index] >= nslots) {
 			int count = 0;
 
 			for (i = index; i < (int) (index + nslots); i++)
-				io_tlb_list[i] = 0;
-			for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE - 1) && io_tlb_list[i]; i--)
-				io_tlb_list[i] = ++count;
-			tlb_addr = io_tlb_start + (index << IO_TLB_SHIFT);
+				swiotlb->list[i] = 0;
+			for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE - 1) && swiotlb->list[i]; i--)
+				swiotlb->list[i] = ++count;
+			tlb_addr = swiotlb->start + (index << IO_TLB_SHIFT);
 
 			/*
 			 * Update the indices to avoid searching in the next
 			 * round.
 			 */
-			io_tlb_index = ((index + nslots) < io_tlb_nslabs
-					? (index + nslots) : 0);
+			swiotlb->index = ((index + nslots) < swiotlb->nslabs
+				      ? (index + nslots) : 0);
 
 			goto found;
 		}
 		index += stride;
-		if (index >= io_tlb_nslabs)
+		if (index >= swiotlb->nslabs)
 			index = 0;
 	} while (index != wrap);
 
 not_found:
-	tmp_io_tlb_used = io_tlb_used;
+	tmp_io_tlb_used = swiotlb->used;
 
-	spin_unlock_irqrestore(&io_tlb_lock, flags);
+	spin_unlock_irqrestore(&swiotlb->lock, flags);
 	if (!(attrs & DMA_ATTR_NO_WARN) && printk_ratelimit())
 		dev_warn(hwdev, "swiotlb buffer is full (sz: %zd bytes), total %lu (slots), used %lu (slots)\n",
-			 alloc_size, io_tlb_nslabs, tmp_io_tlb_used);
+			 alloc_size, swiotlb->nslabs, tmp_io_tlb_used);
 	return (phys_addr_t)DMA_MAPPING_ERROR;
 found:
-	io_tlb_used += nslots;
-	spin_unlock_irqrestore(&io_tlb_lock, flags);
+	swiotlb->used += nslots;
+	spin_unlock_irqrestore(&swiotlb->lock, flags);
 
 	/*
 	 * Save away the mapping from the original address to the DMA address.
@@ -582,7 +597,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
 	 * needed.
 	 */
 	for (i = 0; i < nslots; i++)
-		io_tlb_orig_addr[index+i] = orig_addr + (i << IO_TLB_SHIFT);
+		swiotlb->orig_addr[index+i] = orig_addr + (i << IO_TLB_SHIFT);
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
 	    (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
 		swiotlb_bounce(orig_addr, tlb_addr, mapping_size, DMA_TO_DEVICE);
@@ -597,10 +612,11 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 			      size_t mapping_size, size_t alloc_size,
 			      enum dma_data_direction dir, unsigned long attrs)
 {
+	struct swiotlb *swiotlb = &default_swiotlb;
 	unsigned long flags;
 	int i, count, nslots = ALIGN(alloc_size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
-	int index = (tlb_addr - io_tlb_start) >> IO_TLB_SHIFT;
-	phys_addr_t orig_addr = io_tlb_orig_addr[index];
+	int index = (tlb_addr - swiotlb->start) >> IO_TLB_SHIFT;
+	phys_addr_t orig_addr = swiotlb->orig_addr[index];
 
 	/*
 	 * First, sync the memory before unmapping the entry
@@ -616,36 +632,37 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 	 * While returning the entries to the free list, we merge the entries
 	 * with slots below and above the pool being returned.
 	 */
-	spin_lock_irqsave(&io_tlb_lock, flags);
+	spin_lock_irqsave(&swiotlb->lock, flags);
 	{
 		count = ((index + nslots) < ALIGN(index + 1, IO_TLB_SEGSIZE) ?
-			 io_tlb_list[index + nslots] : 0);
+			 swiotlb->list[index + nslots] : 0);
 		/*
 		 * Step 1: return the slots to the free list, merging the
 		 * slots with superceeding slots
 		 */
 		for (i = index + nslots - 1; i >= index; i--) {
-			io_tlb_list[i] = ++count;
-			io_tlb_orig_addr[i] = INVALID_PHYS_ADDR;
+			swiotlb->list[i] = ++count;
+			swiotlb->orig_addr[i] = INVALID_PHYS_ADDR;
 		}
 		/*
 		 * Step 2: merge the returned slots with the preceding slots,
 		 * if available (non zero)
 		 */
-		for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE -1) && io_tlb_list[i]; i--)
-			io_tlb_list[i] = ++count;
+		for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE -1) && swiotlb->list[i]; i--)
+			swiotlb->list[i] = ++count;
 
-		io_tlb_used -= nslots;
+		swiotlb->used -= nslots;
 	}
-	spin_unlock_irqrestore(&io_tlb_lock, flags);
+	spin_unlock_irqrestore(&swiotlb->lock, flags);
 }
 
 void swiotlb_tbl_sync_single(struct device *hwdev, phys_addr_t tlb_addr,
 			     size_t size, enum dma_data_direction dir,
 			     enum dma_sync_target target)
 {
-	int index = (tlb_addr - io_tlb_start) >> IO_TLB_SHIFT;
-	phys_addr_t orig_addr = io_tlb_orig_addr[index];
+	struct swiotlb *swiotlb = &default_swiotlb;
+	int index = (tlb_addr - swiotlb->start) >> IO_TLB_SHIFT;
+	phys_addr_t orig_addr = swiotlb->orig_addr[index];
 
 	if (orig_addr == INVALID_PHYS_ADDR)
 		return;
@@ -713,31 +730,31 @@ size_t swiotlb_max_mapping_size(struct device *dev)
 bool is_swiotlb_active(void)
 {
 	/*
-	 * When SWIOTLB is initialized, even if io_tlb_start points to physical
-	 * address zero, io_tlb_end surely doesn't.
+	 * When SWIOTLB is initialized, even if swiotlb->start points to
+	 * physical address zero, swiotlb->end surely doesn't.
 	 */
-	return io_tlb_end != 0;
+	return default_swiotlb.end != 0;
 }
 
 bool is_swiotlb_buffer(phys_addr_t paddr)
 {
-	return paddr >= io_tlb_start && paddr < io_tlb_end;
+	return paddr >= default_swiotlb.start && paddr < default_swiotlb.end;
 }
 
 phys_addr_t get_swiotlb_start(void)
 {
-	return io_tlb_start;
+	return default_swiotlb.start;
 }
 
 #ifdef CONFIG_DEBUG_FS
 
 static int __init swiotlb_create_debugfs(void)
 {
-	struct dentry *root;
+	struct swiotlb *swiotlb = &default_swiotlb;
 
-	root = debugfs_create_dir("swiotlb", NULL);
-	debugfs_create_ulong("io_tlb_nslabs", 0400, root, &io_tlb_nslabs);
-	debugfs_create_ulong("io_tlb_used", 0400, root, &io_tlb_used);
+	swiotlb->debugfs = debugfs_create_dir("swiotlb", NULL);
+	debugfs_create_ulong("io_tlb_nslabs", 0400, swiotlb->debugfs, &swiotlb->nslabs);
+	debugfs_create_ulong("io_tlb_used", 0400, swiotlb->debugfs, &swiotlb->used);
 	return 0;
 }
 
-- 
2.30.0.478.g8a0d178c01-goog



From xen-devel-bounces@lists.xenproject.org Tue Feb 09 06:22:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 06:22:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83130.153999 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9MPf-0002ND-Nn; Tue, 09 Feb 2021 06:22:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83130.153999; Tue, 09 Feb 2021 06:22:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9MPf-0002N6-Ip; Tue, 09 Feb 2021 06:22:15 +0000
Received: by outflank-mailman (input) for mailman id 83130;
 Tue, 09 Feb 2021 06:22:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CyDg=HL=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1l9MPe-0002Ma-69
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 06:22:14 +0000
Received: from mail-pg1-x532.google.com (unknown [2607:f8b0:4864:20::532])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 14809f83-4e15-4ff8-a496-6d1b68be30c5;
 Tue, 09 Feb 2021 06:22:13 +0000 (UTC)
Received: by mail-pg1-x532.google.com with SMTP id t11so8139642pgu.8
 for <xen-devel@lists.xenproject.org>; Mon, 08 Feb 2021 22:22:13 -0800 (PST)
Received: from localhost ([2401:fa00:1:10:a106:46e1:a999:81df])
 by smtp.gmail.com with UTF8SMTPSA id w1sm14605147pfg.116.2021.02.08.22.22.07
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 08 Feb 2021 22:22:11 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 14809f83-4e15-4ff8-a496-6d1b68be30c5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=WvKUPmchuhcE1zfso8bHoA3z2cJUHyqor/xynDrdSzc=;
        b=WmMoRPfwVrKkMTfzfSeApWYOsANXGiOgA3320pKWhEerXc+d04kio6kBo6Samikmhh
         dKY6FAm9apY748uamOD6+T+Q7zkEWfp30SA3fCJNtSNB7CjdS5M+/v04eEEHZ9JPhbQU
         oGnPa5k409EctotY2FmiW01VL+4N5+/MQmqlc=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=WvKUPmchuhcE1zfso8bHoA3z2cJUHyqor/xynDrdSzc=;
        b=tshSeZAKFdFdEUET3cbZoVInDpSplFjL/vGjUl5M8bKYz4WEFDJY+J7vXJqizEX21k
         tBZJoFiFqGOwT7x2zrd0KOH1+EqV8DgUPALKvgPQnad50lx1nRMeD5lQwTz/KMcXmpwn
         MJDKEVBYESzRSJ2gyLLd5qZTs9g34ItNXNBphsTRo443oWO/LaQz+hFUz97jt3xeamjH
         SA8SMN+NvpgKIa6X8u38oaOhlt0+ORj5d1a37R0R2x+Q74kChfmDklFkUCV9bGalZPfM
         Y8AURoSxDXCv5sBjzOdJqc48zmqzlZ97tMxuDS0+oI71jhjA0acwmjnawhLz8Hrkhpeu
         UOng==
X-Gm-Message-State: AOAM532F/dWhuuTyF9svzTqcB/3VDjzehOYMrL+yqSkjB65xCxGIR3Se
	+uHS4lGqBzKoPTp7VoKjMwRVcw==
X-Google-Smtp-Source: ABdhPJwPilLMeHO4izUWlTNwHOL/6AulGQdpqWdqLrBPBm35gr6jqlo9rCgrJT+lklmP7VdqmUgqQg==
X-Received: by 2002:a63:80c8:: with SMTP id j191mr1570888pgd.58.1612851732543;
        Mon, 08 Feb 2021 22:22:12 -0800 (PST)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	Claire Chang <tientzu@chromium.org>
Subject: [PATCH v4 04/14] swiotlb: Refactor swiotlb_late_init_with_tbl
Date: Tue,  9 Feb 2021 14:21:21 +0800
Message-Id: <20210209062131.2300005-5-tientzu@chromium.org>
X-Mailer: git-send-email 2.30.0.478.g8a0d178c01-goog
In-Reply-To: <20210209062131.2300005-1-tientzu@chromium.org>
References: <20210209062131.2300005-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Refactor swiotlb_late_init_with_tbl to make the code reusable for
restricted DMA pool initialization.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 kernel/dma/swiotlb.c | 65 ++++++++++++++++++++++++++++----------------
 1 file changed, 42 insertions(+), 23 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 28b7bfe7a2a8..dc37951c6924 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -353,20 +353,21 @@ static void swiotlb_cleanup(void)
 	max_segment = 0;
 }
 
-int
-swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
+static int swiotlb_init_tlb_pool(struct swiotlb *swiotlb, phys_addr_t start,
+				size_t size)
 {
-	struct swiotlb *swiotlb = &default_swiotlb;
-	unsigned long i, bytes;
+	unsigned long i;
+	void *vaddr = phys_to_virt(start);
 
-	bytes = nslabs << IO_TLB_SHIFT;
+	size = ALIGN(size, 1 << IO_TLB_SHIFT);
+	swiotlb->nslabs = size >> IO_TLB_SHIFT;
+	swiotlb->nslabs = ALIGN(swiotlb->nslabs, IO_TLB_SEGSIZE);
 
-	swiotlb->nslabs = nslabs;
-	swiotlb->start = virt_to_phys(tlb);
-	swiotlb->end = swiotlb->start + bytes;
+	swiotlb->start = start;
+	swiotlb->end = swiotlb->start + size;
 
-	set_memory_decrypted((unsigned long)tlb, bytes >> PAGE_SHIFT);
-	memset(tlb, 0, bytes);
+	set_memory_decrypted((unsigned long)vaddr, size >> PAGE_SHIFT);
+	memset(vaddr, 0, size);
 
 	/*
 	 * Allocate and initialize the free list array.  This array is used
@@ -390,13 +391,7 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 		swiotlb->orig_addr[i] = INVALID_PHYS_ADDR;
 	}
 	swiotlb->index = 0;
-	no_iotlb_memory = false;
-
-	swiotlb_print_info();
 
-	late_alloc = 1;
-
-	swiotlb_set_max_segment(swiotlb->nslabs << IO_TLB_SHIFT);
 	spin_lock_init(&swiotlb->lock);
 
 	return 0;
@@ -410,6 +405,27 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 	return -ENOMEM;
 }
 
+int swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
+{
+	struct swiotlb *swiotlb = &default_swiotlb;
+	unsigned long bytes = nslabs << IO_TLB_SHIFT;
+	int ret;
+
+	ret = swiotlb_init_tlb_pool(swiotlb, virt_to_phys(tlb), bytes);
+	if (ret)
+		return ret;
+
+	no_iotlb_memory = false;
+
+	swiotlb_print_info();
+
+	late_alloc = 1;
+
+	swiotlb_set_max_segment(bytes);
+
+	return 0;
+}
+
 void __init swiotlb_exit(void)
 {
 	struct swiotlb *swiotlb = &default_swiotlb;
@@ -747,17 +763,20 @@ phys_addr_t get_swiotlb_start(void)
 }
 
 #ifdef CONFIG_DEBUG_FS
-
-static int __init swiotlb_create_debugfs(void)
+static void swiotlb_create_debugfs(struct swiotlb *swiotlb, const char *name,
+				   struct dentry *node)
 {
-	struct swiotlb *swiotlb = &default_swiotlb;
-
-	swiotlb->debugfs = debugfs_create_dir("swiotlb", NULL);
+	swiotlb->debugfs = debugfs_create_dir(name, node);
 	debugfs_create_ulong("io_tlb_nslabs", 0400, swiotlb->debugfs, &swiotlb->nslabs);
 	debugfs_create_ulong("io_tlb_used", 0400, swiotlb->debugfs, &swiotlb->used);
-	return 0;
 }
 
-late_initcall(swiotlb_create_debugfs);
+static int __init swiotlb_create_default_debugfs(void)
+{
+	swiotlb_create_debugfs(&default_swiotlb, "swiotlb", NULL);
+
+	return 0;
+}
 
+late_initcall(swiotlb_create_default_debugfs);
 #endif
-- 
2.30.0.478.g8a0d178c01-goog



From xen-devel-bounces@lists.xenproject.org Tue Feb 09 06:22:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 06:22:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83131.154011 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9MPm-0002SP-16; Tue, 09 Feb 2021 06:22:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83131.154011; Tue, 09 Feb 2021 06:22:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9MPl-0002SF-Tt; Tue, 09 Feb 2021 06:22:21 +0000
Received: by outflank-mailman (input) for mailman id 83131;
 Tue, 09 Feb 2021 06:22:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CyDg=HL=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1l9MPk-0002Rf-Mf
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 06:22:20 +0000
Received: from mail-pl1-x62d.google.com (unknown [2607:f8b0:4864:20::62d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9e444da7-3acf-4b43-96a3-de85db326b31;
 Tue, 09 Feb 2021 06:22:20 +0000 (UTC)
Received: by mail-pl1-x62d.google.com with SMTP id a16so9162295plh.8
 for <xen-devel@lists.xenproject.org>; Mon, 08 Feb 2021 22:22:20 -0800 (PST)
Received: from localhost ([2401:fa00:1:10:a106:46e1:a999:81df])
 by smtp.gmail.com with UTF8SMTPSA id g17sm21205826pfq.135.2021.02.08.22.22.13
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 08 Feb 2021 22:22:18 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9e444da7-3acf-4b43-96a3-de85db326b31
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=9SQyVkSYwcHNPQBFHoCQtFFFjCENqy21v8WbruxTyIQ=;
        b=S8b7bks4s14Aq0h3NWBRKYyM5rNroqOh0mjkOQqRPPLHlWF1b/82tP2JW6Ak+XahaS
         vuqFZkcLranydMu7pllFHxd5nuNgxoGZIBPnWrP05Of2B7XaG8dtqQLOaoHbIZQ1OzXK
         3qWvUM55PHjPtT5jCfFBuHsTloYPqv037Cmok=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=9SQyVkSYwcHNPQBFHoCQtFFFjCENqy21v8WbruxTyIQ=;
        b=V8EaYOQ/SRVEKXMNpWzKzgYZEgVLcFIY7Cm/UDnvGB8e66jrwAqhSJGnbiu4eN1oIv
         Xr2yD1xo4b3Bj1llQzUDg+ax5z+IeRik95sy8IDm50hBoxGQlIuvtXYkap16MoBAbYX0
         Dtxogc7aHqsvXTW2Z1vUCoQoB/nWMcHRzkGRxrQGGO3yzal1Vi8HWxCpglZ8nlVgiF3X
         hfYRAtm7CvAvpukJjRmCsFlNL7aitQTEwEwViSdYWrUxCM/Gm45oOy6PJACgV+INhSZM
         s0DJG0ur9zsuVhqj2VmLiObACfu2m6Iw+fdGBlnzJ63f0P80KbgMjMZlayBRSdHx3iAB
         FU+A==
X-Gm-Message-State: AOAM531dR4/fLnwzUs6tIHn0/T7dl6LR+UejouSNVLaM6ZCfn2/YHwfy
	HRrJFfPq4FD0Rpup1eZg2eVQAg==
X-Google-Smtp-Source: ABdhPJxcpi2bQzo5Ag59ySIm3BqsShiYpJbthB3TKnbwDYfpDCilPQMTJU6vCjKSHchn3Ia3PqrJPA==
X-Received: by 2002:a17:90a:654a:: with SMTP id f10mr2534268pjs.202.1612851739435;
        Mon, 08 Feb 2021 22:22:19 -0800 (PST)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	Claire Chang <tientzu@chromium.org>
Subject: [PATCH v4 05/14] swiotlb: Add DMA_RESTRICTED_POOL
Date: Tue,  9 Feb 2021 14:21:22 +0800
Message-Id: <20210209062131.2300005-6-tientzu@chromium.org>
X-Mailer: git-send-email 2.30.0.478.g8a0d178c01-goog
In-Reply-To: <20210209062131.2300005-1-tientzu@chromium.org>
References: <20210209062131.2300005-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a new kconfig symbol, DMA_RESTRICTED_POOL, for the restricted DMA pool.
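
For reference, a config fragment enabling the new option might look like the
following (hypothetical fragment; the dependencies shown simply mirror the
Kconfig entry in the diff below):

```
# OF and OF_RESERVED_MEM are hard dependencies of DMA_RESTRICTED_POOL;
# SWIOTLB is pulled in automatically via "select".
CONFIG_OF=y
CONFIG_OF_RESERVED_MEM=y
CONFIG_DMA_RESTRICTED_POOL=y
```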

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 kernel/dma/Kconfig | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/kernel/dma/Kconfig b/kernel/dma/Kconfig
index 479fc145acfc..97ff9f8dd3c8 100644
--- a/kernel/dma/Kconfig
+++ b/kernel/dma/Kconfig
@@ -83,6 +83,20 @@ config SWIOTLB
 	bool
 	select NEED_DMA_MAP_STATE
 
+config DMA_RESTRICTED_POOL
+	bool "DMA Restricted Pool"
+	depends on OF && OF_RESERVED_MEM
+	select SWIOTLB
+	help
+	  This enables support for restricted DMA pools which provide a level of
+	  DMA memory protection on systems with limited hardware protection
+	  capabilities, such as those lacking an IOMMU.
+
+	  For more information see
+	  <Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt>
+	  and <kernel/dma/swiotlb.c>.
+	  If unsure, say "n".
+
 #
 # Should be selected if we can mmap non-coherent mappings to userspace.
 # The only thing that is really required is a way to set an uncached bit
-- 
2.30.0.478.g8a0d178c01-goog



From xen-devel-bounces@lists.xenproject.org Tue Feb 09 06:22:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 06:22:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83132.154023 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9MPu-0002ZF-9V; Tue, 09 Feb 2021 06:22:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83132.154023; Tue, 09 Feb 2021 06:22:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9MPu-0002Z6-6H; Tue, 09 Feb 2021 06:22:30 +0000
Received: by outflank-mailman (input) for mailman id 83132;
 Tue, 09 Feb 2021 06:22:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CyDg=HL=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1l9MPs-0002Xy-F2
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 06:22:28 +0000
Received: from mail-pj1-x1031.google.com (unknown [2607:f8b0:4864:20::1031])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 08b5f9aa-1429-40ab-93ff-b5fa52d14e3d;
 Tue, 09 Feb 2021 06:22:27 +0000 (UTC)
Received: by mail-pj1-x1031.google.com with SMTP id gb24so976520pjb.4
 for <xen-devel@lists.xenproject.org>; Mon, 08 Feb 2021 22:22:27 -0800 (PST)
Received: from localhost ([2401:fa00:1:10:a106:46e1:a999:81df])
 by smtp.gmail.com with UTF8SMTPSA id np7sm1080411pjb.10.2021.02.08.22.22.20
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 08 Feb 2021 22:22:25 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 08b5f9aa-1429-40ab-93ff-b5fa52d14e3d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=CxLFhhx42ZtNAWWPxxTC2YTMXHtyxeDPJClSNQiYoYc=;
        b=FOjc62yoUhk/LYUclkinZZ4zDvKpeGo1x74CjLhqDxql67icfdZqyBr48BDMmzkMqR
         tA+flicJ3qCGmrksh4BzQ5Am4hl3t7sFnye0tmSkf0JhPfYSoJrQn4ouvbgIPDxqcZT9
         22NHFR6mKzgKsRHbHqBZhVhBnyuYHLbdeoeHw=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=CxLFhhx42ZtNAWWPxxTC2YTMXHtyxeDPJClSNQiYoYc=;
        b=QmhH21nsdCyP3anOIGhloWuW3HcMHlzz6pUCJ8ElxmyskhU20XjltIddFoivValrAn
         o68WFMa3cHaPf9XW+VlSZb3MP7JUP+XVy5DB6PgPDzKjtxXH2MVEDpwI6HjuFvdu5ip6
         knxPnONCYq8l6nseSHT5kCGjIIcUx5oWOWGYJJo6NZZujMs4XnNi0JPsfzQuBf/75JWQ
         vrQPJf/fzOwyZowhwCNu/EPUlcupu0iNd6BkrKfVRYhWAXHkIT3FTOz/+pP5eAv6huw5
         QXYIQsHMD11VMsDwB5QbvSPlb9+Sr4V6CVFEp9J5spgROotxWnAK042uqQav6jlZPBkI
         OOrg==
X-Gm-Message-State: AOAM532b1R9/QSf3LxfQW98JtlWG4xbYW2jJhP6JsMCvzO0KKwHh5cEp
	bb3P36sg3aBUB0QoVnadLo3/mA==
X-Google-Smtp-Source: ABdhPJwDHlCULV4tTgyY9CKwnf/NbNTRufsz7Pi36U/uwqUNeuQrkhr5EaPoNzogQlT61PzYunqO5A==
X-Received: by 2002:a17:90a:3188:: with SMTP id j8mr2559404pjb.53.1612851746343;
        Mon, 08 Feb 2021 22:22:26 -0800 (PST)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	Claire Chang <tientzu@chromium.org>
Subject: [PATCH v4 06/14] swiotlb: Add restricted DMA pool
Date: Tue,  9 Feb 2021 14:21:23 +0800
Message-Id: <20210209062131.2300005-7-tientzu@chromium.org>
X-Mailer: git-send-email 2.30.0.478.g8a0d178c01-goog
In-Reply-To: <20210209062131.2300005-1-tientzu@chromium.org>
References: <20210209062131.2300005-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the initialization function to create restricted DMA pools from
matching reserved-memory nodes.
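
For illustration, a reserved-memory node that this initialization would match
(via the "restricted-dma-pool" compatible registered below) could look roughly
as follows. The label, unit address and size are made up; note that per
rmem_swiotlb_setup() the node must not carry the "no-map", "reusable",
"linux,cma-default" or "linux,dma-default" properties:

```dts
/* Hypothetical example: addresses and sizes are illustrative only. */
reserved-memory {
	#address-cells = <2>;
	#size-cells = <2>;
	ranges;

	restricted_dma_pool: restricted-dma-pool@50000000 {
		compatible = "restricted-dma-pool";
		reg = <0x0 0x50000000 0x0 0x400000>;	/* 4 MiB pool */
	};
};
```

A device would then attach to the pool through the usual reserved-memory
consumer path (typically a memory-region phandle resolved via
of_reserved_mem_device_init()), which invokes rmem_swiotlb_device_init() for
that device.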

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 include/linux/device.h |  4 ++
 kernel/dma/swiotlb.c   | 94 +++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 97 insertions(+), 1 deletion(-)

diff --git a/include/linux/device.h b/include/linux/device.h
index 7619a84f8ce4..08d440627b93 100644
--- a/include/linux/device.h
+++ b/include/linux/device.h
@@ -415,6 +415,7 @@ struct dev_links_info {
  * @dma_pools:	Dma pools (if dma'ble device).
  * @dma_mem:	Internal for coherent mem override.
  * @cma_area:	Contiguous memory area for dma allocations
+ * @dev_swiotlb: Internal for swiotlb override.
  * @archdata:	For arch-specific additions.
  * @of_node:	Associated device tree node.
  * @fwnode:	Associated device node supplied by platform firmware.
@@ -517,6 +518,9 @@ struct device {
 #ifdef CONFIG_DMA_CMA
 	struct cma *cma_area;		/* contiguous memory area for dma
 					   allocations */
+#endif
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+	struct swiotlb *dev_swiotlb;
 #endif
 	/* arch specific additions */
 	struct dev_archdata	archdata;
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index dc37951c6924..3a17451c5981 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -39,6 +39,13 @@
 #ifdef CONFIG_DEBUG_FS
 #include <linux/debugfs.h>
 #endif
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+#include <linux/io.h>
+#include <linux/of.h>
+#include <linux/of_fdt.h>
+#include <linux/of_reserved_mem.h>
+#include <linux/slab.h>
+#endif
 
 #include <asm/io.h>
 #include <asm/dma.h>
@@ -75,7 +82,8 @@ enum swiotlb_force swiotlb_force;
  *              range check to see if the memory was in fact allocated by this
  *              API.
  * @nslabs:     The number of IO TLB blocks (in groups of 64) between @start and
- *              @end. This is command line adjustable via setup_io_tlb_npages.
+ *              @end. For default swiotlb, this is command line adjustable via
+ *              setup_io_tlb_npages.
  * @used:       The number of used IO TLB block.
  * @list:       The free list describing the number of free entries available
  *              from each index.
@@ -780,3 +788,87 @@ static int __init swiotlb_create_default_debugfs(void)
 
 late_initcall(swiotlb_create_default_debugfs);
 #endif
+
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
+				    struct device *dev)
+{
+	struct swiotlb *swiotlb = rmem->priv;
+	int ret;
+
+	if (dev->dev_swiotlb)
+		return -EBUSY;
+
+	/* Since multiple devices can share the same pool, the private data
+	 * (the swiotlb struct) is initialized by the first device attached
+	 * to it.
+	 */
+	if (!swiotlb) {
+		swiotlb = kzalloc(sizeof(*swiotlb), GFP_KERNEL);
+		if (!swiotlb)
+			return -ENOMEM;
+#ifdef CONFIG_ARM
+		unsigned long pfn = PHYS_PFN(rmem->base);
+
+		if (PageHighMem(pfn_to_page(pfn))) {
+			ret = -EINVAL;
+			goto cleanup;
+		}
+#endif /* CONFIG_ARM */
+
+		ret = swiotlb_init_tlb_pool(swiotlb, rmem->base, rmem->size);
+		if (ret)
+			goto cleanup;
+
+		rmem->priv = swiotlb;
+	}
+
+#ifdef CONFIG_DEBUG_FS
+	swiotlb_create_debugfs(swiotlb, rmem->name, default_swiotlb.debugfs);
+#endif /* CONFIG_DEBUG_FS */
+
+	dev->dev_swiotlb = swiotlb;
+
+	return 0;
+
+cleanup:
+	kfree(swiotlb);
+
+	return ret;
+}
+
+static void rmem_swiotlb_device_release(struct reserved_mem *rmem,
+					struct device *dev)
+{
+	if (!dev)
+		return;
+
+#ifdef CONFIG_DEBUG_FS
+	debugfs_remove_recursive(dev->dev_swiotlb->debugfs);
+#endif /* CONFIG_DEBUG_FS */
+	dev->dev_swiotlb = NULL;
+}
+
+static const struct reserved_mem_ops rmem_swiotlb_ops = {
+	.device_init = rmem_swiotlb_device_init,
+	.device_release = rmem_swiotlb_device_release,
+};
+
+static int __init rmem_swiotlb_setup(struct reserved_mem *rmem)
+{
+	unsigned long node = rmem->fdt_node;
+
+	if (of_get_flat_dt_prop(node, "reusable", NULL) ||
+	    of_get_flat_dt_prop(node, "linux,cma-default", NULL) ||
+	    of_get_flat_dt_prop(node, "linux,dma-default", NULL) ||
+	    of_get_flat_dt_prop(node, "no-map", NULL))
+		return -EINVAL;
+
+	rmem->ops = &rmem_swiotlb_ops;
+	pr_info("Reserved memory: created device swiotlb memory pool at %pa, size %ld MiB\n",
+		&rmem->base, (unsigned long)rmem->size / SZ_1M);
+	return 0;
+}
+
+RESERVEDMEM_OF_DECLARE(dma, "restricted-dma-pool", rmem_swiotlb_setup);
+#endif /* CONFIG_DMA_RESTRICTED_POOL */
-- 
2.30.0.478.g8a0d178c01-goog



From xen-devel-bounces@lists.xenproject.org Tue Feb 09 06:22:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 06:22:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83133.154035 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9MPz-0002eR-L6; Tue, 09 Feb 2021 06:22:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83133.154035; Tue, 09 Feb 2021 06:22:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9MPz-0002eH-Gx; Tue, 09 Feb 2021 06:22:35 +0000
Received: by outflank-mailman (input) for mailman id 83133;
 Tue, 09 Feb 2021 06:22:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CyDg=HL=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1l9MPz-0002dt-2O
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 06:22:35 +0000
Received: from mail-pj1-x1033.google.com (unknown [2607:f8b0:4864:20::1033])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4a28c162-c85c-4bea-adae-b9d1292bbf68;
 Tue, 09 Feb 2021 06:22:33 +0000 (UTC)
Received: by mail-pj1-x1033.google.com with SMTP id d2so1037168pjs.4
 for <xen-devel@lists.xenproject.org>; Mon, 08 Feb 2021 22:22:33 -0800 (PST)
Received: from localhost ([2401:fa00:1:10:a106:46e1:a999:81df])
 by smtp.gmail.com with UTF8SMTPSA id y3sm15909957pfr.125.2021.02.08.22.22.27
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 08 Feb 2021 22:22:32 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4a28c162-c85c-4bea-adae-b9d1292bbf68
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=bfnwvtu6c4qFg17z3zRw+ngra3tQPiKaBNAeGu3kuys=;
        b=DN/xJ9zyoyOl7tww0sHqTDrXTmBoYAaQW/aAjyZF/ZQsZmPalMIFISuv8D920br2VA
         xieDVxUIBy4YAKQYRTVV+UeMH/2q7WluBpEAMNlCbAfhXUEH7XYvH0L98upfR63ITAPt
         zmL+fN06Y3KeFCHEmdSEQ7l08fSY4b6qEVcGI=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=bfnwvtu6c4qFg17z3zRw+ngra3tQPiKaBNAeGu3kuys=;
        b=OceEldU694BIqBQJNFB+xxydzMOUVYG7MUe8NqHE9MSXqiuOo3eC0jmfU+0AOrgh0i
         VMh+ODw/I4VEAx4YXHPvepJVESPPdff3WoQEdNLo+EULMh2noZPgBvF2xhr6MS/2rRUs
         0IHrShYnncNd+2H4WPmd1n4o+JPknBELC3eOibwGWSFgK9ipK5Jd7NfTYw4THxkbnUW5
         pCGk8A/1UJjVrnCcM/HxC5W4dhFZQIZME0Wn2mUBA5cl2X9q24wH5OzRxwrGAIMezwGE
         qT3wlIhj3JlnotvzSvp1IJu3rsmhn85AQJf7ywXYZv0UwnkQxuf4PzSMiSHwkYhCU9xn
         8QBw==
X-Gm-Message-State: AOAM530GwCZiybOtIbUsnAYxaRvEbrHfS3af7F05grqBCuS82rdTmMEw
	lxf2M5t0JSD7KNJWGGk0r1YmiA==
X-Google-Smtp-Source: ABdhPJzsoA6NwKm+S4KZVPeR4zk76WKbIUMqYXifW3lINXvbVDJBj2JyDT2vUppUwxRx5/36CLZtuw==
X-Received: by 2002:a17:902:b40b:b029:df:cf31:2849 with SMTP id x11-20020a170902b40bb02900dfcf312849mr19548774plr.33.1612851753276;
        Mon, 08 Feb 2021 22:22:33 -0800 (PST)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	Claire Chang <tientzu@chromium.org>
Subject: [PATCH v4 07/14] swiotlb: Update swiotlb API to gain a struct device argument
Date: Tue,  9 Feb 2021 14:21:24 +0800
Message-Id: <20210209062131.2300005-8-tientzu@chromium.org>
X-Mailer: git-send-email 2.30.0.478.g8a0d178c01-goog
In-Reply-To: <20210209062131.2300005-1-tientzu@chromium.org>
References: <20210209062131.2300005-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Introduce the get_swiotlb() getter and update is_swiotlb_active(),
is_swiotlb_buffer() and get_swiotlb_start(), along with all their
callers, to take a struct device argument.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 drivers/iommu/dma-iommu.c | 12 ++++++------
 drivers/xen/swiotlb-xen.c |  4 ++--
 include/linux/swiotlb.h   | 10 +++++-----
 kernel/dma/direct.c       |  8 ++++----
 kernel/dma/direct.h       |  6 +++---
 kernel/dma/swiotlb.c      | 23 +++++++++++++++++------
 6 files changed, 37 insertions(+), 26 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index f659395e7959..abdbe14472cc 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -503,7 +503,7 @@ static void __iommu_dma_unmap_swiotlb(struct device *dev, dma_addr_t dma_addr,
 
 	__iommu_dma_unmap(dev, dma_addr, size);
 
-	if (unlikely(is_swiotlb_buffer(phys)))
+	if (unlikely(is_swiotlb_buffer(dev, phys)))
 		swiotlb_tbl_unmap_single(dev, phys, size,
 				iova_align(iovad, size), dir, attrs);
 }
@@ -580,7 +580,7 @@ static dma_addr_t __iommu_dma_map_swiotlb(struct device *dev, phys_addr_t phys,
 	}
 
 	iova = __iommu_dma_map(dev, phys, aligned_size, prot, dma_mask);
-	if ((iova == DMA_MAPPING_ERROR) && is_swiotlb_buffer(phys))
+	if ((iova == DMA_MAPPING_ERROR) && is_swiotlb_buffer(dev, phys))
 		swiotlb_tbl_unmap_single(dev, phys, org_size,
 				aligned_size, dir, attrs);
 
@@ -753,7 +753,7 @@ static void iommu_dma_sync_single_for_cpu(struct device *dev,
 	if (!dev_is_dma_coherent(dev))
 		arch_sync_dma_for_cpu(phys, size, dir);
 
-	if (is_swiotlb_buffer(phys))
+	if (is_swiotlb_buffer(dev, phys))
 		swiotlb_tbl_sync_single(dev, phys, size, dir, SYNC_FOR_CPU);
 }
 
@@ -766,7 +766,7 @@ static void iommu_dma_sync_single_for_device(struct device *dev,
 		return;
 
 	phys = iommu_iova_to_phys(iommu_get_dma_domain(dev), dma_handle);
-	if (is_swiotlb_buffer(phys))
+	if (is_swiotlb_buffer(dev, phys))
 		swiotlb_tbl_sync_single(dev, phys, size, dir, SYNC_FOR_DEVICE);
 
 	if (!dev_is_dma_coherent(dev))
@@ -787,7 +787,7 @@ static void iommu_dma_sync_sg_for_cpu(struct device *dev,
 		if (!dev_is_dma_coherent(dev))
 			arch_sync_dma_for_cpu(sg_phys(sg), sg->length, dir);
 
-		if (is_swiotlb_buffer(sg_phys(sg)))
+		if (is_swiotlb_buffer(dev, sg_phys(sg)))
 			swiotlb_tbl_sync_single(dev, sg_phys(sg), sg->length,
 						dir, SYNC_FOR_CPU);
 	}
@@ -804,7 +804,7 @@ static void iommu_dma_sync_sg_for_device(struct device *dev,
 		return;
 
 	for_each_sg(sgl, sg, nelems, i) {
-		if (is_swiotlb_buffer(sg_phys(sg)))
+		if (is_swiotlb_buffer(dev, sg_phys(sg)))
 			swiotlb_tbl_sync_single(dev, sg_phys(sg), sg->length,
 						dir, SYNC_FOR_DEVICE);
 
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 91f8c68d1a9b..f424d46756b1 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -192,8 +192,8 @@ int __ref xen_swiotlb_init(int verbose, bool early)
 	/*
 	 * IO TLB memory already allocated. Just use it.
 	 */
-	if (is_swiotlb_active()) {
-		xen_io_tlb_start = phys_to_virt(get_swiotlb_start());
+	if (is_swiotlb_active(NULL)) {
+		xen_io_tlb_start = phys_to_virt(get_swiotlb_start(NULL));
 		goto end;
 	}
 
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 041611bf3c2a..f13a52a97382 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -71,16 +71,16 @@ dma_addr_t swiotlb_map(struct device *dev, phys_addr_t phys,
 #ifdef CONFIG_SWIOTLB
 extern enum swiotlb_force swiotlb_force;
 
-bool is_swiotlb_buffer(phys_addr_t paddr);
+bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr);
 void __init swiotlb_exit(void);
 unsigned int swiotlb_max_segment(void);
 size_t swiotlb_max_mapping_size(struct device *dev);
-bool is_swiotlb_active(void);
-phys_addr_t get_swiotlb_start(void);
+bool is_swiotlb_active(struct device *dev);
+phys_addr_t get_swiotlb_start(struct device *dev);
 void __init swiotlb_adjust_size(unsigned long new_size);
 #else
 #define swiotlb_force SWIOTLB_NO_FORCE
-static inline bool is_swiotlb_buffer(phys_addr_t paddr)
+static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
 	return false;
 }
@@ -96,7 +96,7 @@ static inline size_t swiotlb_max_mapping_size(struct device *dev)
 	return SIZE_MAX;
 }
 
-static inline bool is_swiotlb_active(void)
+static inline bool is_swiotlb_active(struct device *dev)
 {
 	return false;
 }
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 002268262c9a..30ccbc08e229 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -343,7 +343,7 @@ void dma_direct_sync_sg_for_device(struct device *dev,
 	for_each_sg(sgl, sg, nents, i) {
 		phys_addr_t paddr = dma_to_phys(dev, sg_dma_address(sg));
 
-		if (unlikely(is_swiotlb_buffer(paddr)))
+		if (unlikely(is_swiotlb_buffer(dev, paddr)))
 			swiotlb_tbl_sync_single(dev, paddr, sg->length,
 					dir, SYNC_FOR_DEVICE);
 
@@ -369,7 +369,7 @@ void dma_direct_sync_sg_for_cpu(struct device *dev,
 		if (!dev_is_dma_coherent(dev))
 			arch_sync_dma_for_cpu(paddr, sg->length, dir);
 
-		if (unlikely(is_swiotlb_buffer(paddr)))
+		if (unlikely(is_swiotlb_buffer(dev, paddr)))
 			swiotlb_tbl_sync_single(dev, paddr, sg->length, dir,
 					SYNC_FOR_CPU);
 
@@ -495,7 +495,7 @@ int dma_direct_supported(struct device *dev, u64 mask)
 size_t dma_direct_max_mapping_size(struct device *dev)
 {
 	/* If SWIOTLB is active, use its maximum mapping size */
-	if (is_swiotlb_active() &&
+	if (is_swiotlb_active(dev) &&
 	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE))
 		return swiotlb_max_mapping_size(dev);
 	return SIZE_MAX;
@@ -504,7 +504,7 @@ size_t dma_direct_max_mapping_size(struct device *dev)
 bool dma_direct_need_sync(struct device *dev, dma_addr_t dma_addr)
 {
 	return !dev_is_dma_coherent(dev) ||
-		is_swiotlb_buffer(dma_to_phys(dev, dma_addr));
+		is_swiotlb_buffer(dev, dma_to_phys(dev, dma_addr));
 }
 
 /**
diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index b98615578737..7b83b1595989 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -56,7 +56,7 @@ static inline void dma_direct_sync_single_for_device(struct device *dev,
 {
 	phys_addr_t paddr = dma_to_phys(dev, addr);
 
-	if (unlikely(is_swiotlb_buffer(paddr)))
+	if (unlikely(is_swiotlb_buffer(dev, paddr)))
 		swiotlb_tbl_sync_single(dev, paddr, size, dir, SYNC_FOR_DEVICE);
 
 	if (!dev_is_dma_coherent(dev))
@@ -73,7 +73,7 @@ static inline void dma_direct_sync_single_for_cpu(struct device *dev,
 		arch_sync_dma_for_cpu_all();
 	}
 
-	if (unlikely(is_swiotlb_buffer(paddr)))
+	if (unlikely(is_swiotlb_buffer(dev, paddr)))
 		swiotlb_tbl_sync_single(dev, paddr, size, dir, SYNC_FOR_CPU);
 
 	if (dir == DMA_FROM_DEVICE)
@@ -113,7 +113,7 @@ static inline void dma_direct_unmap_page(struct device *dev, dma_addr_t addr,
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
 		dma_direct_sync_single_for_cpu(dev, addr, size, dir);
 
-	if (unlikely(is_swiotlb_buffer(phys)))
+	if (unlikely(is_swiotlb_buffer(dev, phys)))
 		swiotlb_tbl_unmap_single(dev, phys, size, size, dir, attrs);
 }
 #endif /* _KERNEL_DMA_DIRECT_H */
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 3a17451c5981..e22e7ae75f1c 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -107,6 +107,11 @@ struct swiotlb {
 };
 static struct swiotlb default_swiotlb;
 
+static inline struct swiotlb *get_swiotlb(struct device *dev)
+{
+	return &default_swiotlb;
+}
+
 /*
  * Max segment that we can provide which (if pages are contingous) will
  * not be bounced (unless SWIOTLB_FORCE is set).
@@ -751,23 +756,29 @@ size_t swiotlb_max_mapping_size(struct device *dev)
 	return ((size_t)1 << IO_TLB_SHIFT) * IO_TLB_SEGSIZE;
 }
 
-bool is_swiotlb_active(void)
+bool is_swiotlb_active(struct device *dev)
 {
+	struct swiotlb *swiotlb = get_swiotlb(dev);
+
 	/*
 	 * When SWIOTLB is initialized, even if swiotlb->start points to
 	 * physical address zero, swiotlb->end surely doesn't.
 	 */
-	return default_swiotlb.end != 0;
+	return swiotlb->end != 0;
 }
 
-bool is_swiotlb_buffer(phys_addr_t paddr)
+bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
-	return paddr >= default_swiotlb.start && paddr < default_swiotlb.end;
+	struct swiotlb *swiotlb = get_swiotlb(dev);
+
+	return paddr >= swiotlb->start && paddr < swiotlb->end;
 }
 
-phys_addr_t get_swiotlb_start(void)
+phys_addr_t get_swiotlb_start(struct device *dev)
 {
-	return default_swiotlb.start;
+	struct swiotlb *swiotlb = get_swiotlb(dev);
+
+	return swiotlb->start;
 }
 
 #ifdef CONFIG_DEBUG_FS
-- 
2.30.0.478.g8a0d178c01-goog



From xen-devel-bounces@lists.xenproject.org Tue Feb 09 06:22:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 06:22:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83134.154047 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9MQ7-0002l0-4u; Tue, 09 Feb 2021 06:22:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83134.154047; Tue, 09 Feb 2021 06:22:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9MQ7-0002kt-1Y; Tue, 09 Feb 2021 06:22:43 +0000
Received: by outflank-mailman (input) for mailman id 83134;
 Tue, 09 Feb 2021 06:22:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CyDg=HL=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1l9MQ5-0002k8-SD
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 06:22:41 +0000
Received: from mail-pj1-x1031.google.com (unknown [2607:f8b0:4864:20::1031])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 51ad2447-8834-4279-af1e-9195b9f37977;
 Tue, 09 Feb 2021 06:22:40 +0000 (UTC)
Received: by mail-pj1-x1031.google.com with SMTP id e9so1049176pjj.0
 for <xen-devel@lists.xenproject.org>; Mon, 08 Feb 2021 22:22:40 -0800 (PST)
Received: from localhost ([2401:fa00:1:10:a106:46e1:a999:81df])
 by smtp.gmail.com with UTF8SMTPSA id x14sm20837364pfj.15.2021.02.08.22.22.34
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 08 Feb 2021 22:22:39 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 51ad2447-8834-4279-af1e-9195b9f37977
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=KUcKCdqxUtmE3c3/sCLCFX58+aEPLItGYu5wwEcivTc=;
        b=NhEm9T2AwADa9VsvefRZO3hJf81eglORlqEc/M/a8rR0kuSIY5UrlWGbaksqIVirGr
         J6QDJ3CyNkeplzaOrje+vpzgt129yWbwHTzlb+beGZFWZQ8LQ0tH754TJy6h0q3jS8jo
         cGxvFhbRKlBHTIBko/408oiJovV3JTKwJW8QU=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=KUcKCdqxUtmE3c3/sCLCFX58+aEPLItGYu5wwEcivTc=;
        b=JZP9wRoKUlJsHxqxeD3m24pjlJBwGjk+Yf3iApErGwFgvXHxqt0KfDuVBbtyxsWDPh
         5S6IvZ2bzu04XotKmisgKVuo2TQp3NGzj3IE4E886vTwKTHPDxnqe2OYFfv9R2wJM67Z
         fWo99QX3OBmVQM5xEfoHNgVhCEe7a0bVucYmiDr2qnvCCclKBdUZPfoZrXAQnaSd/gSz
         vx44tig7Qjc3IzsWTHviJChV2oqisCoPvNtatwO4X3CDdXqZiRycy/GxiFfG2E2ld2rY
         cuwj52LVisqyUN/2Cch+hq7/x9QvaBiQwGbq66x4xm2P9IHFrsYpTvNEpyQ0WwACXgGs
         HxTw==
X-Gm-Message-State: AOAM53259GAGPHUE4DSc5E0T8C0ZTuUPRbojYMohM1DYOAKbxBehZH4n
	WkyVqM+N0baJ2trQTfp2WvF7Zg==
X-Google-Smtp-Source: ABdhPJxTkbyFhURADasjsjB42g8YIEdg4z+Ug1Qla2hS5HdHoxmM0uHRHvdpONYXgEmtPzgr5O1D4Q==
X-Received: by 2002:a17:90a:49c4:: with SMTP id l4mr2647464pjm.33.1612851760260;
        Mon, 08 Feb 2021 22:22:40 -0800 (PST)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	Claire Chang <tientzu@chromium.org>
Subject: [PATCH v4 08/14] swiotlb: Use restricted DMA pool if available
Date: Tue,  9 Feb 2021 14:21:25 +0800
Message-Id: <20210209062131.2300005-9-tientzu@chromium.org>
X-Mailer: git-send-email 2.30.0.478.g8a0d178c01-goog
In-Reply-To: <20210209062131.2300005-1-tientzu@chromium.org>
References: <20210209062131.2300005-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Regardless of the global swiotlb setting, the restricted DMA pool is
preferred if one is available for the device.

A restricted DMA pool provides a basic level of protection against DMA
overwriting buffer contents at unexpected times. However, to protect
against general data leakage and system memory corruption, the system
still needs to provide a way to lock down the memory access, e.g., an
MPU.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 include/linux/swiotlb.h | 13 +++++++++++++
 kernel/dma/direct.h     |  2 +-
 kernel/dma/swiotlb.c    | 20 +++++++++++++++++---
 3 files changed, 31 insertions(+), 4 deletions(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index f13a52a97382..76f86c684524 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -71,6 +71,15 @@ dma_addr_t swiotlb_map(struct device *dev, phys_addr_t phys,
 #ifdef CONFIG_SWIOTLB
 extern enum swiotlb_force swiotlb_force;
 
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+bool is_swiotlb_force(struct device *dev);
+#else
+static inline bool is_swiotlb_force(struct device *dev)
+{
+	return unlikely(swiotlb_force == SWIOTLB_FORCE);
+}
+#endif /* CONFIG_DMA_RESTRICTED_POOL */
+
 bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr);
 void __init swiotlb_exit(void);
 unsigned int swiotlb_max_segment(void);
@@ -80,6 +89,10 @@ phys_addr_t get_swiotlb_start(struct device *dev);
 void __init swiotlb_adjust_size(unsigned long new_size);
 #else
 #define swiotlb_force SWIOTLB_NO_FORCE
+static inline bool is_swiotlb_force(struct device *dev)
+{
+	return false;
+}
 static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
 	return false;
diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index 7b83b1595989..b011db1b625d 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -87,7 +87,7 @@ static inline dma_addr_t dma_direct_map_page(struct device *dev,
 	phys_addr_t phys = page_to_phys(page) + offset;
 	dma_addr_t dma_addr = phys_to_dma(dev, phys);
 
-	if (unlikely(swiotlb_force == SWIOTLB_FORCE))
+	if (is_swiotlb_force(dev))
 		return swiotlb_map(dev, phys, size, dir, attrs);
 
 	if (unlikely(!dma_capable(dev, dma_addr, size, true))) {
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index e22e7ae75f1c..6fdebde8fb1f 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -40,6 +40,7 @@
 #include <linux/debugfs.h>
 #endif
 #ifdef CONFIG_DMA_RESTRICTED_POOL
+#include <linux/device.h>
 #include <linux/io.h>
 #include <linux/of.h>
 #include <linux/of_fdt.h>
@@ -109,6 +110,10 @@ static struct swiotlb default_swiotlb;
 
 static inline struct swiotlb *get_swiotlb(struct device *dev)
 {
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+	if (dev && dev->dev_swiotlb)
+		return dev->dev_swiotlb;
+#endif
 	return &default_swiotlb;
 }
 
@@ -508,7 +513,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
 		size_t mapping_size, size_t alloc_size,
 		enum dma_data_direction dir, unsigned long attrs)
 {
-	struct swiotlb *swiotlb = &default_swiotlb;
+	struct swiotlb *swiotlb = get_swiotlb(hwdev);
 	dma_addr_t tbl_dma_addr = phys_to_dma_unencrypted(hwdev, swiotlb->start);
 	unsigned long flags;
 	phys_addr_t tlb_addr;
@@ -519,7 +524,11 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
 	unsigned long max_slots;
 	unsigned long tmp_io_tlb_used;
 
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+	if (no_iotlb_memory && !hwdev->dev_swiotlb)
+#else
 	if (no_iotlb_memory)
+#endif
 		panic("Can not allocate SWIOTLB buffer earlier and can't now provide you with the DMA bounce buffer");
 
 	if (mem_encrypt_active())
@@ -641,7 +650,7 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 			      size_t mapping_size, size_t alloc_size,
 			      enum dma_data_direction dir, unsigned long attrs)
 {
-	struct swiotlb *swiotlb = &default_swiotlb;
+	struct swiotlb *swiotlb = get_swiotlb(hwdev);
 	unsigned long flags;
 	int i, count, nslots = ALIGN(alloc_size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
 	int index = (tlb_addr - swiotlb->start) >> IO_TLB_SHIFT;
@@ -689,7 +698,7 @@ void swiotlb_tbl_sync_single(struct device *hwdev, phys_addr_t tlb_addr,
 			     size_t size, enum dma_data_direction dir,
 			     enum dma_sync_target target)
 {
-	struct swiotlb *swiotlb = &default_swiotlb;
+	struct swiotlb *swiotlb = get_swiotlb(hwdev);
 	int index = (tlb_addr - swiotlb->start) >> IO_TLB_SHIFT;
 	phys_addr_t orig_addr = swiotlb->orig_addr[index];
 
@@ -801,6 +810,11 @@ late_initcall(swiotlb_create_default_debugfs);
 #endif
 
 #ifdef CONFIG_DMA_RESTRICTED_POOL
+bool is_swiotlb_force(struct device *dev)
+{
+	return unlikely(swiotlb_force == SWIOTLB_FORCE) || dev->dev_swiotlb;
+}
+
 static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
 				    struct device *dev)
 {
-- 
2.30.0.478.g8a0d178c01-goog



From xen-devel-bounces@lists.xenproject.org Tue Feb 09 06:22:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 06:22:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83135.154059 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9MQF-0002ra-FV; Tue, 09 Feb 2021 06:22:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83135.154059; Tue, 09 Feb 2021 06:22:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9MQF-0002rR-BS; Tue, 09 Feb 2021 06:22:51 +0000
Received: by outflank-mailman (input) for mailman id 83135;
 Tue, 09 Feb 2021 06:22:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CyDg=HL=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1l9MQE-0002qa-0K
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 06:22:50 +0000
Received: from mail-pg1-x530.google.com (unknown [2607:f8b0:4864:20::530])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ef116655-434b-4336-9e46-e03df93aed3e;
 Tue, 09 Feb 2021 06:22:49 +0000 (UTC)
Received: by mail-pg1-x530.google.com with SMTP id m2so5090413pgq.5
 for <xen-devel@lists.xenproject.org>; Mon, 08 Feb 2021 22:22:48 -0800 (PST)
Received: from localhost ([2401:fa00:1:10:a106:46e1:a999:81df])
 by smtp.gmail.com with UTF8SMTPSA id n1sm6296866pgn.94.2021.02.08.22.22.42
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 08 Feb 2021 22:22:47 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ef116655-434b-4336-9e46-e03df93aed3e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=4zqci56JWIAAF0wW2XoOW1nlucKE1HdU8HbBcVGEoj4=;
        b=iQ8wBJSEVZzeAe0ffh2h2GM8F5kehjcPxtVFeT4gPFJg8QooaH8d0bbR3DBFmEjh7/
         +t81Z22ZQ3jfYQJEtw6EEDXPAwtTfgWAGJvdqJ64cWu+Z/90HeG76Xdlz9LMQYcjebg6
         t1DpKPM5Kv3tzEnaDX3egDRfxshz/CzrFwO+0=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=4zqci56JWIAAF0wW2XoOW1nlucKE1HdU8HbBcVGEoj4=;
        b=FzgTjVUokJEVezdrNkB7Gxghzd+1xSvmub6pb2AB1dsRfNIqi6w/3/M/pHb/CHL3Zf
         yEZ5OyLBhnQL3tgdLvhSVzpXOypJXqndRi2IPaRvnyFoRnspPiLZW2c6//5S4FOF+gtZ
         JXlEA0zsYdL1+DMzTImcSFBUuqQDR3smpfJwf0ggV6WsCyqihQzmW6xQbUPBCZZahxaP
         mQO+TO1hkWwOhbPsOPlfTjWQfPassNdSvkzvKqePh8RWiVY+CvsGwNny5EsWSejngpSH
         USRrMPCAQIVj6LmkptFW1FN1t3gfn0nITMmLAkvQTE3f8E0fObxkqjD/qX65pe1UXOUt
         5hDQ==
X-Gm-Message-State: AOAM531lTsQoFNVrg5zjMxkOKPrm6dxsPUUYcoRiHTWXGOEfF4mFzL52
	mufkAAYwnvZJkYUyBHXE67fsxA==
X-Google-Smtp-Source: ABdhPJy1CzOC+GQys63fdcsVT9unv4NpcqE4/TesqmaSSFAML/xnfw6XY2m3UHOllfAZ0J4Itqp2AA==
X-Received: by 2002:aa7:80cc:0:b029:1da:689d:2762 with SMTP id a12-20020aa780cc0000b02901da689d2762mr14012849pfn.3.1612851768274;
        Mon, 08 Feb 2021 22:22:48 -0800 (PST)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	Claire Chang <tientzu@chromium.org>
Subject: [PATCH v4 09/14] swiotlb: Refactor swiotlb_tbl_{map,unmap}_single
Date: Tue,  9 Feb 2021 14:21:26 +0800
Message-Id: <20210209062131.2300005-10-tientzu@chromium.org>
X-Mailer: git-send-email 2.30.0.478.g8a0d178c01-goog
In-Reply-To: <20210209062131.2300005-1-tientzu@chromium.org>
References: <20210209062131.2300005-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Refactor swiotlb_tbl_{map,unmap}_single to make the code reusable for
dev_swiotlb_{alloc,free}.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 kernel/dma/swiotlb.c | 116 ++++++++++++++++++++++++++-----------------
 1 file changed, 71 insertions(+), 45 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 6fdebde8fb1f..f64cbe6e84cc 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -509,14 +509,12 @@ static void swiotlb_bounce(phys_addr_t orig_addr, phys_addr_t tlb_addr,
 	}
 }
 
-phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
-		size_t mapping_size, size_t alloc_size,
-		enum dma_data_direction dir, unsigned long attrs)
+static int swiotlb_tbl_find_free_region(struct device *hwdev,
+					dma_addr_t tbl_dma_addr,
+					size_t alloc_size, unsigned long attrs)
 {
 	struct swiotlb *swiotlb = get_swiotlb(hwdev);
-	dma_addr_t tbl_dma_addr = phys_to_dma_unencrypted(hwdev, swiotlb->start);
 	unsigned long flags;
-	phys_addr_t tlb_addr;
 	unsigned int nslots, stride, index, wrap;
 	int i;
 	unsigned long mask;
@@ -531,15 +529,6 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
 #endif
 		panic("Can not allocate SWIOTLB buffer earlier and can't now provide you with the DMA bounce buffer");
 
-	if (mem_encrypt_active())
-		pr_warn_once("Memory encryption is active and system is using DMA bounce buffers\n");
-
-	if (mapping_size > alloc_size) {
-		dev_warn_once(hwdev, "Invalid sizes (mapping: %zd bytes, alloc: %zd bytes)",
-			      mapping_size, alloc_size);
-		return (phys_addr_t)DMA_MAPPING_ERROR;
-	}
-
 	mask = dma_get_seg_boundary(hwdev);
 
 	tbl_dma_addr &= mask;
@@ -601,7 +590,6 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
 				swiotlb->list[i] = 0;
 			for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE - 1) && swiotlb->list[i]; i--)
 				swiotlb->list[i] = ++count;
-			tlb_addr = swiotlb->start + (index << IO_TLB_SHIFT);
 
 			/*
 			 * Update the indices to avoid searching in the next
@@ -624,45 +612,20 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
 	if (!(attrs & DMA_ATTR_NO_WARN) && printk_ratelimit())
 		dev_warn(hwdev, "swiotlb buffer is full (sz: %zd bytes), total %lu (slots), used %lu (slots)\n",
 			 alloc_size, swiotlb->nslabs, tmp_io_tlb_used);
-	return (phys_addr_t)DMA_MAPPING_ERROR;
+	return -ENOMEM;
+
 found:
 	swiotlb->used += nslots;
 	spin_unlock_irqrestore(&swiotlb->lock, flags);
 
-	/*
-	 * Save away the mapping from the original address to the DMA address.
-	 * This is needed when we sync the memory.  Then we sync the buffer if
-	 * needed.
-	 */
-	for (i = 0; i < nslots; i++)
-		swiotlb->orig_addr[index+i] = orig_addr + (i << IO_TLB_SHIFT);
-	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
-	    (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
-		swiotlb_bounce(orig_addr, tlb_addr, mapping_size, DMA_TO_DEVICE);
-
-	return tlb_addr;
+	return index;
 }
 
-/*
- * tlb_addr is the physical address of the bounce buffer to unmap.
- */
-void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
-			      size_t mapping_size, size_t alloc_size,
-			      enum dma_data_direction dir, unsigned long attrs)
+static void swiotlb_tbl_release_region(struct device *hwdev, int index, size_t size)
 {
 	struct swiotlb *swiotlb = get_swiotlb(hwdev);
 	unsigned long flags;
-	int i, count, nslots = ALIGN(alloc_size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
-	int index = (tlb_addr - swiotlb->start) >> IO_TLB_SHIFT;
-	phys_addr_t orig_addr = swiotlb->orig_addr[index];
-
-	/*
-	 * First, sync the memory before unmapping the entry
-	 */
-	if (orig_addr != INVALID_PHYS_ADDR &&
-	    !(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
-	    ((dir == DMA_FROM_DEVICE) || (dir == DMA_BIDIRECTIONAL)))
-		swiotlb_bounce(orig_addr, tlb_addr, mapping_size, DMA_FROM_DEVICE);
+	int i, count, nslots = ALIGN(size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
 
 	/*
 	 * Return the buffer to the free list by setting the corresponding
@@ -694,6 +657,69 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 	spin_unlock_irqrestore(&swiotlb->lock, flags);
 }
 
+phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
+				   size_t mapping_size, size_t alloc_size,
+				   enum dma_data_direction dir,
+				   unsigned long attrs)
+{
+	struct swiotlb *swiotlb = get_swiotlb(hwdev);
+	dma_addr_t tbl_dma_addr = phys_to_dma_unencrypted(hwdev, swiotlb->start);
+	phys_addr_t tlb_addr;
+	unsigned int nslots;
+	int i, index;
+
+	if (mem_encrypt_active())
+		pr_warn_once("Memory encryption is active and system is using DMA bounce buffers\n");
+
+	if (mapping_size > alloc_size) {
+		dev_warn_once(hwdev, "Invalid sizes (mapping: %zd bytes, alloc: %zd bytes)",
+			      mapping_size, alloc_size);
+		return (phys_addr_t)DMA_MAPPING_ERROR;
+	}
+
+	index = swiotlb_tbl_find_free_region(hwdev, tbl_dma_addr, alloc_size, attrs);
+	if (index < 0)
+		return (phys_addr_t)DMA_MAPPING_ERROR;
+
+	tlb_addr = swiotlb->start + (index << IO_TLB_SHIFT);
+
+	/*
+	 * Save away the mapping from the original address to the DMA address.
+	 * This is needed when we sync the memory.  Then we sync the buffer if
+	 * needed.
+	 */
+	nslots = ALIGN(alloc_size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
+	for (i = 0; i < nslots; i++)
+		swiotlb->orig_addr[index + i] = orig_addr + (i << IO_TLB_SHIFT);
+	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
+	    (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
+		swiotlb_bounce(orig_addr, tlb_addr, mapping_size, DMA_TO_DEVICE);
+
+	return tlb_addr;
+}
+
+/*
+ * tlb_addr is the physical address of the bounce buffer to unmap.
+ */
+void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
+			      size_t mapping_size, size_t alloc_size,
+			      enum dma_data_direction dir, unsigned long attrs)
+{
+	struct swiotlb *swiotlb = get_swiotlb(hwdev);
+	int index = (tlb_addr - swiotlb->start) >> IO_TLB_SHIFT;
+	phys_addr_t orig_addr = swiotlb->orig_addr[index];
+
+	/*
+	 * First, sync the memory before unmapping the entry
+	 */
+	if (orig_addr != INVALID_PHYS_ADDR &&
+	    !(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
+	    ((dir == DMA_FROM_DEVICE) || (dir == DMA_BIDIRECTIONAL)))
+		swiotlb_bounce(orig_addr, tlb_addr, mapping_size, DMA_FROM_DEVICE);
+
+	swiotlb_tbl_release_region(hwdev, index, alloc_size);
+}
+
 void swiotlb_tbl_sync_single(struct device *hwdev, phys_addr_t tlb_addr,
 			     size_t size, enum dma_data_direction dir,
 			     enum dma_sync_target target)
-- 
2.30.0.478.g8a0d178c01-goog



From xen-devel-bounces@lists.xenproject.org Tue Feb 09 06:22:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 06:22:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83136.154071 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9MQM-0002xT-OQ; Tue, 09 Feb 2021 06:22:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83136.154071; Tue, 09 Feb 2021 06:22:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9MQM-0002xJ-Ke; Tue, 09 Feb 2021 06:22:58 +0000
Received: by outflank-mailman (input) for mailman id 83136;
 Tue, 09 Feb 2021 06:22:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CyDg=HL=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1l9MQL-0002wn-QK
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 06:22:57 +0000
Received: from mail-pg1-x52b.google.com (unknown [2607:f8b0:4864:20::52b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5a9ce06d-f690-4a5e-a404-5afefa9bfb0b;
 Tue, 09 Feb 2021 06:22:57 +0000 (UTC)
Received: by mail-pg1-x52b.google.com with SMTP id o38so301027pgm.9
 for <xen-devel@lists.xenproject.org>; Mon, 08 Feb 2021 22:22:57 -0800 (PST)
Received: from localhost ([2401:fa00:1:10:a106:46e1:a999:81df])
 by smtp.gmail.com with UTF8SMTPSA id x20sm10253509pfn.14.2021.02.08.22.22.49
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 08 Feb 2021 22:22:55 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5a9ce06d-f690-4a5e-a404-5afefa9bfb0b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=497zjM77pvF3Z9oAZHKRrRRQASb3zykCWrEq06oWuA4=;
        b=J3JXGtOrm5PoEyvBsVk+eONszRpKlS2e/fHFYsrzg7r41jB4Qhydpz4iHXCx+nR2XB
         bgZwF53kTc5vzTPgHCJnyyfoRJvrgBHi7UionQnej7Yu1P183TVntz02XzomTxMCHg3G
         mcXNNavE9b460T2UV+kLxlE9ubP6dRssCo1RI=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=497zjM77pvF3Z9oAZHKRrRRQASb3zykCWrEq06oWuA4=;
        b=aulAd3gJRz1Us+eG+bD2DObDE8TgpPn3j+WFcCwhANg7Nl3CBdiTazg8ae+TeoeuaG
         5b+QYYg6vqkTtX6zLqmuCFGaQ0eg3eN8rUvgIFPd86FoKicFCk7ztw39yxlZOqXRh9H2
         DCRqWv2xcrwgUtOrcK1yD0NK38UZwDfWLhWKq4nElji8bp39lndaPdWDt8/TdtIj34C5
         Ht90kCJZsQ22Vvkheoj5WMis5T1+PJHgc5w/eSoxuU/6WzJLWBlST55ncTqhv7zFxw0e
         4NxDKIswurwDVkZDhdcqBaxEUUOdvFDK8zb28TrTmfsqNBTFKiZeoUXQNKe/ldHYbUts
         mVrw==
X-Gm-Message-State: AOAM531HtiBrqMYhxGHEzF6vpbpgh3e5Y2rYIb4EH0yEKF7seRNSqf76
	2MVllqA0y0zCv4i5+G2ypP4fEA==
X-Google-Smtp-Source: ABdhPJxEw/06walO5wKH5QrYrrWevtYirMnCFq5PBC/eIi86fDszp4wESrmyrR0n6ZgmsKGnPFNlrg==
X-Received: by 2002:a63:4a1a:: with SMTP id x26mr21490368pga.260.1612851776396;
        Mon, 08 Feb 2021 22:22:56 -0800 (PST)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	Claire Chang <tientzu@chromium.org>
Subject: [PATCH v4 10/14] dma-direct: Add a new wrapper __dma_direct_free_pages()
Date: Tue,  9 Feb 2021 14:21:27 +0800
Message-Id: <20210209062131.2300005-11-tientzu@chromium.org>
X-Mailer: git-send-email 2.30.0.478.g8a0d178c01-goog
In-Reply-To: <20210209062131.2300005-1-tientzu@chromium.org>
References: <20210209062131.2300005-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a new wrapper __dma_direct_free_pages() that will be useful later
for dev_swiotlb_free().

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 kernel/dma/direct.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 30ccbc08e229..a76a1a2f24da 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -75,6 +75,11 @@ static bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size)
 		min_not_zero(dev->coherent_dma_mask, dev->bus_dma_limit);
 }
 
+static void __dma_direct_free_pages(struct device *dev, struct page *page, size_t size)
+{
+	dma_free_contiguous(dev, page, size);
+}
+
 static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
 		gfp_t gfp)
 {
@@ -237,7 +242,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 			return NULL;
 	}
 out_free_pages:
-	dma_free_contiguous(dev, page, size);
+	__dma_direct_free_pages(dev, page, size);
 	return NULL;
 }
 
@@ -273,7 +278,7 @@ void dma_direct_free(struct device *dev, size_t size,
 	else if (IS_ENABLED(CONFIG_ARCH_HAS_DMA_CLEAR_UNCACHED))
 		arch_dma_clear_uncached(cpu_addr, size);
 
-	dma_free_contiguous(dev, dma_direct_to_page(dev, dma_addr), size);
+	__dma_direct_free_pages(dev, dma_direct_to_page(dev, dma_addr), size);
 }
 
 struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
@@ -310,7 +315,7 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
 	*dma_handle = phys_to_dma_direct(dev, page_to_phys(page));
 	return page;
 out_free_pages:
-	dma_free_contiguous(dev, page, size);
+	__dma_direct_free_pages(dev, page, size);
 	return NULL;
 }
 
@@ -329,7 +334,7 @@ void dma_direct_free_pages(struct device *dev, size_t size,
 	if (force_dma_unencrypted(dev))
 		set_memory_encrypted((unsigned long)vaddr, 1 << page_order);
 
-	dma_free_contiguous(dev, page, size);
+	__dma_direct_free_pages(dev, page, size);
 }
 
 #if defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_DEVICE) || \
-- 
2.30.0.478.g8a0d178c01-goog



From xen-devel-bounces@lists.xenproject.org Tue Feb 09 06:23:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 06:23:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83137.154083 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9MQU-00033x-4k; Tue, 09 Feb 2021 06:23:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83137.154083; Tue, 09 Feb 2021 06:23:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9MQT-00033m-WC; Tue, 09 Feb 2021 06:23:06 +0000
Received: by outflank-mailman (input) for mailman id 83137;
 Tue, 09 Feb 2021 06:23:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CyDg=HL=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1l9MQS-000330-R5
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 06:23:04 +0000
Received: from mail-pg1-x52f.google.com (unknown [2607:f8b0:4864:20::52f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 527fcaab-956a-41ff-bffd-62b421d9a286;
 Tue, 09 Feb 2021 06:23:04 +0000 (UTC)
Received: by mail-pg1-x52f.google.com with SMTP id c132so11827524pga.3
 for <xen-devel@lists.xenproject.org>; Mon, 08 Feb 2021 22:23:04 -0800 (PST)
Received: from localhost ([2401:fa00:1:10:a106:46e1:a999:81df])
 by smtp.gmail.com with UTF8SMTPSA id gx22sm1155253pjb.49.2021.02.08.22.22.57
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 08 Feb 2021 22:23:02 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 527fcaab-956a-41ff-bffd-62b421d9a286
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=6zQVHgfobPuQEWulcSd+ytGSNGCX6uxs4mC7Gt2kf+c=;
        b=ert+XkxphAdsHWZRYOV0Cfn/2U+Hq+bg6asziCHHt/7u8Vvc0xyaeiuaRHcJi9Q1Vz
         h0k512lAFo9/rmCIttSQjdSndXS7KAKpm476xzbqDMAPDG5a7mdNYtdhfGaun9yWGWmH
         a9iyECq63v2VApjt+BbXpHPW6ZsiyzSxuIc1E=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=6zQVHgfobPuQEWulcSd+ytGSNGCX6uxs4mC7Gt2kf+c=;
        b=OuCqFn9BlcvKsMQFvouVam6hB+0HcbOtV1qrm01SVeuO0VVBuUFRHHQV5J2sNRWq2c
         9ylqRfYp21liRkn7xRWPpYx6DYqknXNHfwY8aZBS2UJ2EJUjByW7LHuWplxks0zB8SWo
         nJnHOIvwRsvka59b0OFlDw28H8cafacGjElfdjUs4D3udFePB8lOCj13HDoJDJEipN+H
         isoFgfXkRxbgZgdDsYrRDB+L+XPvdC+6hVIXjxCJocHr2/TKJNKFAw19rTYo5nh0KqoW
         f5IA2fBjo/clzpuaopSc29Ts0/d7vFKEuS1ZP6x+IQzmPKekMR4FixzG7Qzx9i6AG2gC
         uZFQ==
X-Gm-Message-State: AOAM533L1w9+Pq0QxYjlBTRI/kxUtM2+a1vg0fNORLjuG1FJ8u6ztrRb
	x11M/0hoOd2BdP/lCI/XtSgwqA==
X-Google-Smtp-Source: ABdhPJzmV9YXPB6hCsXZ7URzXPDB7/qCe304q7KRCHJSQjtn7qxPBetED2jKI5U7yWvekiJpr6hFzw==
X-Received: by 2002:a62:4e10:0:b029:1c9:9015:dc5b with SMTP id c16-20020a624e100000b02901c99015dc5bmr21533363pfb.30.1612851783406;
        Mon, 08 Feb 2021 22:23:03 -0800 (PST)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	Claire Chang <tientzu@chromium.org>
Subject: [PATCH v4 11/14] swiotlb: Add is_dev_swiotlb_force()
Date: Tue,  9 Feb 2021 14:21:28 +0800
Message-Id: <20210209062131.2300005-12-tientzu@chromium.org>
X-Mailer: git-send-email 2.30.0.478.g8a0d178c01-goog
In-Reply-To: <20210209062131.2300005-1-tientzu@chromium.org>
References: <20210209062131.2300005-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add is_dev_swiotlb_force(), which returns true if the device has a
restricted DMA pool (i.e. dev->dev_swiotlb is set).
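The distinction between the two predicates can be sketched in plain C. This is a hypothetical userspace mock, not the kernel code: `struct device` and the `swiotlb_force` global are stand-ins, and the point is only that is_swiotlb_force() also honours the global force setting while is_dev_swiotlb_force() looks solely at the per-device pool pointer.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Mock of the global swiotlb=force setting (assumption, not kernel code). */
enum swiotlb_force_mode { SWIOTLB_NORMAL, SWIOTLB_FORCE };
static enum swiotlb_force_mode swiotlb_force = SWIOTLB_NORMAL;

/* Mock device: dev_swiotlb is non-NULL when a restricted pool is attached. */
struct device {
	void *dev_swiotlb;
};

/* True if either the global force flag or a per-device pool demands swiotlb. */
static bool is_swiotlb_force(const struct device *dev)
{
	return swiotlb_force == SWIOTLB_FORCE || dev->dev_swiotlb != NULL;
}

/* True only when the device itself carries a restricted DMA pool. */
static bool is_dev_swiotlb_force(const struct device *dev)
{
	return dev->dev_swiotlb != NULL;
}
```

A device without a pool still reports is_swiotlb_force() once the global flag is set, but never is_dev_swiotlb_force(), which is why the allocation paths in the later patches key off the per-device check.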

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 include/linux/swiotlb.h | 9 +++++++++
 kernel/dma/swiotlb.c    | 5 +++++
 2 files changed, 14 insertions(+)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 76f86c684524..b9f2a250c8da 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -73,11 +73,16 @@ extern enum swiotlb_force swiotlb_force;
 
 #ifdef CONFIG_DMA_RESTRICTED_POOL
 bool is_swiotlb_force(struct device *dev);
+bool is_dev_swiotlb_force(struct device *dev);
 #else
 static inline bool is_swiotlb_force(struct device *dev)
 {
 	return unlikely(swiotlb_force == SWIOTLB_FORCE);
 }
+static inline bool is_dev_swiotlb_force(struct device *dev)
+{
+	return false;
+}
 #endif /* CONFIG_DMA_RESTRICTED_POOL */
 
 bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr);
@@ -93,6 +98,10 @@ static inline bool is_swiotlb_force(struct device *dev)
 {
 	return false;
 }
+static inline bool is_dev_swiotlb_force(struct device *dev)
+{
+	return false;
+}
 static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
 	return false;
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index f64cbe6e84cc..fd9c1bd183ac 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -841,6 +841,11 @@ bool is_swiotlb_force(struct device *dev)
 	return unlikely(swiotlb_force == SWIOTLB_FORCE) || dev->dev_swiotlb;
 }
 
+bool is_dev_swiotlb_force(struct device *dev)
+{
+	return dev->dev_swiotlb;
+}
+
 static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
 				    struct device *dev)
 {
-- 
2.30.0.478.g8a0d178c01-goog



From xen-devel-bounces@lists.xenproject.org Tue Feb 09 06:23:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 06:23:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83138.154095 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9MQb-0003BG-Lh; Tue, 09 Feb 2021 06:23:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83138.154095; Tue, 09 Feb 2021 06:23:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9MQb-0003B8-Fe; Tue, 09 Feb 2021 06:23:13 +0000
Received: by outflank-mailman (input) for mailman id 83138;
 Tue, 09 Feb 2021 06:23:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CyDg=HL=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1l9MQZ-0003A2-RH
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 06:23:11 +0000
Received: from mail-pl1-x630.google.com (unknown [2607:f8b0:4864:20::630])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1684ab7b-4ba4-4df0-8dab-bc81045a7a5d;
 Tue, 09 Feb 2021 06:23:10 +0000 (UTC)
Received: by mail-pl1-x630.google.com with SMTP id u11so9151602plg.13
 for <xen-devel@lists.xenproject.org>; Mon, 08 Feb 2021 22:23:10 -0800 (PST)
Received: from localhost ([2401:fa00:1:10:a106:46e1:a999:81df])
 by smtp.gmail.com with UTF8SMTPSA id p12sm1179850pju.35.2021.02.08.22.23.04
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 08 Feb 2021 22:23:09 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1684ab7b-4ba4-4df0-8dab-bc81045a7a5d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=vHso+1dmoTPyqg/ZF0QCJG3XeyxPCOLU98naLUieJmU=;
        b=QwHj1zYjx8/U4T+NrsaexKnUNeJz6V3FI8pfT++P1AguN4L2+bwgyhZpJV9bSAwRQa
         twEx8bTmtbIBs2lxyKwlCbga0ygDkQ0nQZXX5ABNhH9W7/F1oNHJLy6ve5KSFZapowUH
         93XaB33tmsf953SSQYHK0jmBpqLV34Qb65Vhc=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=vHso+1dmoTPyqg/ZF0QCJG3XeyxPCOLU98naLUieJmU=;
        b=s+BuOTYxAARBjquq9nUa+i9tsObzOFzBHMmzmjzQZAnpRZFBIDMIgs7f94IsxDk1uu
         aKmGZPzZh6TVsef7Fhmocb28CM/j88XD0EnN6S767uNl0PO94dKSj2cv6IuWVGo5O2dK
         mLuTLf+d2uczd1nQZscH5GRhuHAOvg+obZiGv7Lw+X/36JNyyzCJdGz+T/mbbycmtMa1
         yB4BnwU1k84V6bhVqKG28yzYFySYsHgKztFdPvqhdy8yn27GWO9Fvs9rWIc2/Y0zU2FX
         sTViTiS1HibX3hRWlkBd/DYrAX1r6haGGHPfg8ly7OVdAA48eX6OqJh+iACoNe2qVEkD
         LW8w==
X-Gm-Message-State: AOAM532YrlcmVaF2YvHvLltpd6nwgD7rjdCDiQGlZKKWM7BEZRNEs6T1
	2qFjHtkvUBT9xqnPWh55id4EvA==
X-Google-Smtp-Source: ABdhPJxWN5Jg96v3iIqbJoN9yhUIoNv8pB4+CsEk+MHaFgKY0zVKfDlN6kq7qCIcEpJ1CyA9UerivQ==
X-Received: by 2002:a17:90a:c82:: with SMTP id v2mr2529855pja.171.1612851790238;
        Mon, 08 Feb 2021 22:23:10 -0800 (PST)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	Claire Chang <tientzu@chromium.org>
Subject: [PATCH v4 12/14] swiotlb: Add restricted DMA alloc/free support.
Date: Tue,  9 Feb 2021 14:21:29 +0800
Message-Id: <20210209062131.2300005-13-tientzu@chromium.org>
X-Mailer: git-send-email 2.30.0.478.g8a0d178c01-goog
In-Reply-To: <20210209062131.2300005-1-tientzu@chromium.org>
References: <20210209062131.2300005-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the functions dev_swiotlb_{alloc,free}() to support memory
allocation from the restricted DMA pool.
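The allocator and the free path below both rely on the same slot/address arithmetic: a pool is carved into fixed-size slots of `1 << IO_TLB_SHIFT` bytes, an allocation returns a slot index, and freeing maps a physical address back to its index. A minimal sketch of that round trip, assuming the kernel's IO_TLB_SHIFT of 11 (2 KiB slots):

```c
#include <assert.h>
#include <stdint.h>

/* Slot granularity; 11 matches the kernel's IO_TLB_SHIFT (assumption). */
#define IO_TLB_SHIFT 11

typedef uint64_t phys_addr_t;

/* Physical address of slot `index` in a pool starting at `start`. */
static phys_addr_t slot_to_addr(phys_addr_t start, unsigned int index)
{
	return start + ((phys_addr_t)index << IO_TLB_SHIFT);
}

/* Inverse mapping: which slot does `tlb_addr` fall into? */
static unsigned int addr_to_slot(phys_addr_t start, phys_addr_t tlb_addr)
{
	return (unsigned int)((tlb_addr - start) >> IO_TLB_SHIFT);
}
```

dev_swiotlb_alloc() below corresponds to slot_to_addr() (after finding a free region), and dev_swiotlb_free() corresponds to addr_to_slot() before releasing the region.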

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 include/linux/swiotlb.h |  2 ++
 kernel/dma/direct.c     | 30 ++++++++++++++++++++++--------
 kernel/dma/swiotlb.c    | 34 ++++++++++++++++++++++++++++++++++
 3 files changed, 58 insertions(+), 8 deletions(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index b9f2a250c8da..2cd39e102915 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -74,6 +74,8 @@ extern enum swiotlb_force swiotlb_force;
 #ifdef CONFIG_DMA_RESTRICTED_POOL
 bool is_swiotlb_force(struct device *dev);
 bool is_dev_swiotlb_force(struct device *dev);
+struct page *dev_swiotlb_alloc(struct device *dev, size_t size, gfp_t gfp);
+bool dev_swiotlb_free(struct device *dev, struct page *page, size_t size);
 #else
 static inline bool is_swiotlb_force(struct device *dev)
 {
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index a76a1a2f24da..f9a9321f7559 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -12,6 +12,7 @@
 #include <linux/pfn.h>
 #include <linux/vmalloc.h>
 #include <linux/set_memory.h>
+#include <linux/swiotlb.h>
 #include <linux/slab.h>
 #include "direct.h"
 
@@ -77,6 +78,10 @@ static bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size)
 
 static void __dma_direct_free_pages(struct device *dev, struct page *page, size_t size)
 {
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+	if (dev_swiotlb_free(dev, page, size))
+		return;
+#endif
 	dma_free_contiguous(dev, page, size);
 }
 
@@ -89,6 +94,12 @@ static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
 
 	WARN_ON_ONCE(!PAGE_ALIGNED(size));
 
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+	page = dev_swiotlb_alloc(dev, size, gfp);
+	if (page)
+		return page;
+#endif
+
 	gfp |= dma_direct_optimal_gfp_mask(dev, dev->coherent_dma_mask,
 					   &phys_limit);
 	page = dma_alloc_contiguous(dev, size, gfp);
@@ -147,7 +158,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 		gfp |= __GFP_NOWARN;
 
 	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
-	    !force_dma_unencrypted(dev)) {
+	    !force_dma_unencrypted(dev) && !is_dev_swiotlb_force(dev)) {
 		page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO);
 		if (!page)
 			return NULL;
@@ -160,8 +171,8 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 	}
 
 	if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
-	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
-	    !dev_is_dma_coherent(dev))
+	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev) &&
+	    !is_dev_swiotlb_force(dev))
 		return arch_dma_alloc(dev, size, dma_handle, gfp, attrs);
 
 	/*
@@ -171,7 +182,9 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 	if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
 	    !gfpflags_allow_blocking(gfp) &&
 	    (force_dma_unencrypted(dev) ||
-	     (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev))))
+	     (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
+	      !dev_is_dma_coherent(dev))) &&
+	    !is_dev_swiotlb_force(dev))
 		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
 
 	/* we always manually zero the memory once we are done */
@@ -252,15 +265,15 @@ void dma_direct_free(struct device *dev, size_t size,
 	unsigned int page_order = get_order(size);
 
 	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
-	    !force_dma_unencrypted(dev)) {
+	    !force_dma_unencrypted(dev) && !is_dev_swiotlb_force(dev)) {
 		/* cpu_addr is a struct page cookie, not a kernel address */
 		dma_free_contiguous(dev, cpu_addr, size);
 		return;
 	}
 
 	if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
-	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
-	    !dev_is_dma_coherent(dev)) {
+	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev) &&
+	    !is_dev_swiotlb_force(dev)) {
 		arch_dma_free(dev, size, cpu_addr, dma_addr, attrs);
 		return;
 	}
@@ -288,7 +301,8 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
 	void *ret;
 
 	if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
-	    force_dma_unencrypted(dev) && !gfpflags_allow_blocking(gfp))
+	    force_dma_unencrypted(dev) && !gfpflags_allow_blocking(gfp) &&
+	    !is_dev_swiotlb_force(dev))
 		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
 
 	page = __dma_direct_alloc_pages(dev, size, gfp);
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index fd9c1bd183ac..8b77fd64199e 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -836,6 +836,40 @@ late_initcall(swiotlb_create_default_debugfs);
 #endif
 
 #ifdef CONFIG_DMA_RESTRICTED_POOL
+struct page *dev_swiotlb_alloc(struct device *dev, size_t size, gfp_t gfp)
+{
+	struct swiotlb *swiotlb;
+	phys_addr_t tlb_addr;
+	int index;
+
+	/* dev_swiotlb_alloc() can only be used in a context that permits sleeping. */
+	if (!dev->dev_swiotlb || !gfpflags_allow_blocking(gfp))
+		return NULL;
+
+	swiotlb = dev->dev_swiotlb;
+	index = swiotlb_tbl_find_free_region(dev, swiotlb->start, size, 0);
+	if (index < 0)
+		return NULL;
+
+	tlb_addr = swiotlb->start + (index << IO_TLB_SHIFT);
+
+	return pfn_to_page(PFN_DOWN(tlb_addr));
+}
+
+bool dev_swiotlb_free(struct device *dev, struct page *page, size_t size)
+{
+	unsigned int index;
+	phys_addr_t tlb_addr = page_to_phys(page);
+
+	if (!is_swiotlb_buffer(dev, tlb_addr))
+		return false;
+
+	index = (tlb_addr - dev->dev_swiotlb->start) >> IO_TLB_SHIFT;
+	swiotlb_tbl_release_region(dev, index, size);
+
+	return true;
+}
+
 bool is_swiotlb_force(struct device *dev)
 {
 	return unlikely(swiotlb_force == SWIOTLB_FORCE) || dev->dev_swiotlb;
-- 
2.30.0.478.g8a0d178c01-goog



From xen-devel-bounces@lists.xenproject.org Tue Feb 09 06:23:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 06:23:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83139.154107 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9MQh-0003HM-V8; Tue, 09 Feb 2021 06:23:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83139.154107; Tue, 09 Feb 2021 06:23:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9MQh-0003HF-Qj; Tue, 09 Feb 2021 06:23:19 +0000
Received: by outflank-mailman (input) for mailman id 83139;
 Tue, 09 Feb 2021 06:23:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CyDg=HL=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1l9MQg-0003Ge-UH
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 06:23:18 +0000
Received: from mail-pf1-x434.google.com (unknown [2607:f8b0:4864:20::434])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d39c7deb-f1b9-4f4a-86ee-faa0d136912d;
 Tue, 09 Feb 2021 06:23:18 +0000 (UTC)
Received: by mail-pf1-x434.google.com with SMTP id q131so11244161pfq.10
 for <xen-devel@lists.xenproject.org>; Mon, 08 Feb 2021 22:23:18 -0800 (PST)
Received: from localhost ([2401:fa00:1:10:a106:46e1:a999:81df])
 by smtp.gmail.com with UTF8SMTPSA id a8sm1160332pjs.40.2021.02.08.22.23.12
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 08 Feb 2021 22:23:16 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d39c7deb-f1b9-4f4a-86ee-faa0d136912d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=eRi8Yzf5Y8qG5991xxXPsSC5LTyacs0FIIEghf5qlqE=;
        b=I1pOI9d2uMAOj25XXD6wAuAIs02PiqHn3vYbrTGQfRLUalLAaaWHj4AP66p1gZRw6c
         B3Fs+yS4m4TscPVDCU+jECnoKxyq5OcojOQ3UlsKgJX/2bKZw0g3cUyt8Lb9tyEzB+I7
         d4RC5Ezyz3o4oj5hhy8N+DKnKQ48mOYCkG1WM=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=eRi8Yzf5Y8qG5991xxXPsSC5LTyacs0FIIEghf5qlqE=;
        b=YGeVkwiyi/hMIWbhrBxfc/HSto4FqmnAkeJzXlNIajKTfcQUJcsI6Te5eN2zP4vfb+
         cEB6U4QCYfjqjUMLDNBppzdC6FAlnpFjlNPmF6Rov12s9N61trtvqOzKoO+RfrsMM/ZC
         2h1RHsbHUBnII/YX7brG7tgtowuXpr2lu/X7eBIuD1kZ+2byTIYWdS5/wUEbzEdK4oCv
         7A8qOA2m3kXzIV2tWvlu+V63Gt1rWqGfTvrqc5X5hOlwFYs2AzrBK6tGY8JHY/KNQFrW
         H4DFNOV9KoonQWg6YSecCw2x7B7JgXfYTzSJ4eZ/sF2wYRKM7eMvOLtTfYMcb9mDV3aF
         /gNA==
X-Gm-Message-State: AOAM533VDtqziwmTUMOH+kczEh6R8xBFjnRyeUjWV/3bf643q0Uh0uEl
	bdSk+B+Ws7eh+aQ2MRbFFOb6SQ==
X-Google-Smtp-Source: ABdhPJxkt3Ed7T7cJDKzmZe4wszjCxFAf1leiOQT2ICOasIg3V/FKQTBEDaG8y6oYK6K58aSdM216A==
X-Received: by 2002:a05:6a00:a8d:b029:1ba:71d1:fe3c with SMTP id b13-20020a056a000a8db02901ba71d1fe3cmr21262398pfl.51.1612851797444;
        Mon, 08 Feb 2021 22:23:17 -0800 (PST)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	Claire Chang <tientzu@chromium.org>
Subject: [PATCH v4 13/14] dt-bindings: of: Add restricted DMA pool
Date: Tue,  9 Feb 2021 14:21:30 +0800
Message-Id: <20210209062131.2300005-14-tientzu@chromium.org>
X-Mailer: git-send-email 2.30.0.478.g8a0d178c01-goog
In-Reply-To: <20210209062131.2300005-1-tientzu@chromium.org>
References: <20210209062131.2300005-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Introduce a new compatible string, restricted-dma-pool, for restricted
DMA. The address and length of the restricted DMA memory region can be
specified with a restricted-dma-pool node under the reserved-memory node.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 .../reserved-memory/reserved-memory.txt       | 24 +++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
index e8d3096d922c..fc9a12c2f679 100644
--- a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
+++ b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
@@ -51,6 +51,20 @@ compatible (optional) - standard definition
           used as a shared pool of DMA buffers for a set of devices. It can
           be used by an operating system to instantiate the necessary pool
           management subsystem if necessary.
+        - restricted-dma-pool: This indicates a region of memory meant to be
+          used as a pool of restricted DMA buffers for a set of devices. The
+          memory region would be the only region accessible to those devices.
+          When using this, the no-map and reusable properties must not be set,
+          so the operating system can create a virtual mapping that will be used
+          for synchronization. The main purpose for restricted DMA is to
+          mitigate the lack of DMA access control on systems without an IOMMU,
+          which could result in the DMA accessing the system memory at
+          unexpected times and/or unexpected addresses, possibly leading to data
+          leakage or corruption. The feature on its own provides a basic level
+          of protection against the DMA overwriting buffer contents at
+          unexpected times. However, to protect against general data leakage and
          system memory corruption, the system needs to provide a way to lock
          down the memory access, e.g., an MPU.
         - vendor specific string in the form <vendor>,[<device>-]<usage>
 no-map (optional) - empty property
     - Indicates the operating system must not create a virtual mapping
@@ -120,6 +134,11 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
 			compatible = "acme,multimedia-memory";
 			reg = <0x77000000 0x4000000>;
 		};
+
+		restricted_dma_mem_reserved: restricted_dma_mem_reserved {
+			compatible = "restricted-dma-pool";
+			reg = <0x50000000 0x400000>;
+		};
 	};
 
 	/* ... */
@@ -138,4 +157,9 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
 		memory-region = <&multimedia_reserved>;
 		/* ... */
 	};
+
+	pcie_device: pcie_device@0,0 {
+		memory-region = <&restricted_dma_mem_reserved>;
+		/* ... */
+	};
 };
-- 
2.30.0.478.g8a0d178c01-goog



From xen-devel-bounces@lists.xenproject.org Tue Feb 09 06:23:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 06:23:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83140.154118 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9MQp-0003NU-8P; Tue, 09 Feb 2021 06:23:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83140.154118; Tue, 09 Feb 2021 06:23:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9MQp-0003NK-5F; Tue, 09 Feb 2021 06:23:27 +0000
Received: by outflank-mailman (input) for mailman id 83140;
 Tue, 09 Feb 2021 06:23:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CyDg=HL=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1l9MQo-0003Ma-5F
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 06:23:26 +0000
Received: from mail-pg1-x534.google.com (unknown [2607:f8b0:4864:20::534])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6423188d-c4c4-448e-a666-2f0ae8d8a268;
 Tue, 09 Feb 2021 06:23:25 +0000 (UTC)
Received: by mail-pg1-x534.google.com with SMTP id j5so3022315pgb.11
 for <xen-devel@lists.xenproject.org>; Mon, 08 Feb 2021 22:23:25 -0800 (PST)
Received: from localhost ([2401:fa00:1:10:a106:46e1:a999:81df])
 by smtp.gmail.com with UTF8SMTPSA id c18sm11072205pfo.171.2021.02.08.22.23.19
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 08 Feb 2021 22:23:24 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6423188d-c4c4-448e-a666-2f0ae8d8a268
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=fCpdetXZbkmTlNt6IWgIQo6qPUhUH6Y1glnHH1TYQ8A=;
        b=m27gPzIn4y36BwdfABPhJctzVcQ+lZFochfhUvq6yh+3RZRwkrsGVp6GPOttVPTYhe
         piJwyWKCXZarFl+5Mgad+/eyxNzq8NXJiZDrHYIevzx3qMO6Yl37ZUAmkcqRBRBt6DSZ
         YohFfsGtKU7yDyK0RzwEYOoKhUHsk0fZc4SIk=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=fCpdetXZbkmTlNt6IWgIQo6qPUhUH6Y1glnHH1TYQ8A=;
        b=XJmiwbJemI5ObNW1pZpZVfppvfdXwUPyN1lZcXINH312KhNBpOpKRPFkEEFZnwL7y/
         UOTcVp7We6GF/zIJRtY0WKsPaqSWyjqsN7Xaz82nv1W720mddpOVtsLplTNYBcR1d0K3
         fzSIx/gFI97oIjvmHZTtbzD7j9LzQ+IpsaFBdo+vku8CfvMANIt2Fe4MwvArNn7ok7ro
         OVZtmSpZUo8mBr9vCewIAvTs8LoD03ikQSWjL7UFaRVrgWpT0ENUmP0qSD53QnVyjdkc
         Y00Vwp3Jrnkaf14oqF6Ki+qURLehFumz75JOoCNuW2VL1HCK8MC/k60S5gycYSboYvRU
         Achg==
X-Gm-Message-State: AOAM5313LXFAcQCzx7csrZqUHq+bqZpDWkEHIIbRaZVEbTN+7JAhUZwC
	JC4RJBo7mnj4OjEhHn8oJo7eUQ==
X-Google-Smtp-Source: ABdhPJzbc6B180HFMEwkww4vcx+xgwf9jKl8iifukr0wO8EhuGUoiPsVkBYDMJ6LpbG7UB75Krtn/A==
X-Received: by 2002:a62:484:0:b029:1b7:878b:c170 with SMTP id 126-20020a6204840000b02901b7878bc170mr21302515pfe.28.1612851804694;
        Mon, 08 Feb 2021 22:23:24 -0800 (PST)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	Claire Chang <tientzu@chromium.org>
Subject: [PATCH v4 14/14] of: Add plumbing for restricted DMA pool
Date: Tue,  9 Feb 2021 14:21:31 +0800
Message-Id: <20210209062131.2300005-15-tientzu@chromium.org>
X-Mailer: git-send-email 2.30.0.478.g8a0d178c01-goog
In-Reply-To: <20210209062131.2300005-1-tientzu@chromium.org>
References: <20210209062131.2300005-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

If a device is not behind an IOMMU, look up its device node and set up
the restricted DMA pool when a "restricted-dma-pool" reserved-memory
region is present.
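
For reference, the device-tree side of this lookup might look like the
following hypothetical fragment (node names, unit addresses, and sizes
are made up for illustration, not taken from this series): a
reserved-memory node with the restricted-dma-pool compatible, referenced
from a device's memory-region property.

```dts
/* Hypothetical fragment: node names, addresses, and sizes are illustrative. */
reserved-memory {
	#address-cells = <2>;
	#size-cells = <2>;
	ranges;

	restricted_dma_pool: restricted-dma@50000000 {
		compatible = "restricted-dma-pool";
		reg = <0x0 0x50000000 0x0 0x400000>; /* 4 MiB pool */
	};
};

/* A non-IOMMU device picks up the pool via its memory-region phandle. */
example-dev@10000000 {
	memory-region = <&restricted_dma_pool>;
};
```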

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 drivers/of/address.c    | 25 +++++++++++++++++++++++++
 drivers/of/device.c     |  3 +++
 drivers/of/of_private.h |  5 +++++
 3 files changed, 33 insertions(+)

diff --git a/drivers/of/address.c b/drivers/of/address.c
index 73ddf2540f3f..b6093c9b135d 100644
--- a/drivers/of/address.c
+++ b/drivers/of/address.c
@@ -8,6 +8,7 @@
 #include <linux/logic_pio.h>
 #include <linux/module.h>
 #include <linux/of_address.h>
+#include <linux/of_reserved_mem.h>
 #include <linux/pci.h>
 #include <linux/pci_regs.h>
 #include <linux/sizes.h>
@@ -1094,3 +1095,27 @@ bool of_dma_is_coherent(struct device_node *np)
 	return false;
 }
 EXPORT_SYMBOL_GPL(of_dma_is_coherent);
+
+int of_dma_set_restricted_buffer(struct device *dev)
+{
+	struct device_node *node;
+	int count, i;
+
+	if (!dev->of_node)
+		return 0;
+
+	count = of_property_count_elems_of_size(dev->of_node, "memory-region",
+						sizeof(phandle));
+	for (i = 0; i < count; i++) {
+		node = of_parse_phandle(dev->of_node, "memory-region", i);
+		/* There might be multiple memory regions, but only one
+		 * restricted-dma-pool region is allowed.
+		 */
+		if (of_device_is_compatible(node, "restricted-dma-pool") &&
+		    of_device_is_available(node))
+			return of_reserved_mem_device_init_by_idx(
+				dev, dev->of_node, i);
+	}
+
+	return 0;
+}
diff --git a/drivers/of/device.c b/drivers/of/device.c
index 1122daa8e273..38c631f1fafa 100644
--- a/drivers/of/device.c
+++ b/drivers/of/device.c
@@ -186,6 +186,9 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
 
 	arch_setup_dma_ops(dev, dma_start, size, iommu, coherent);
 
+	if (!iommu)
+		return of_dma_set_restricted_buffer(dev);
+
 	return 0;
 }
 EXPORT_SYMBOL_GPL(of_dma_configure_id);
diff --git a/drivers/of/of_private.h b/drivers/of/of_private.h
index d9e6a324de0a..28a2dfa197ba 100644
--- a/drivers/of/of_private.h
+++ b/drivers/of/of_private.h
@@ -161,12 +161,17 @@ struct bus_dma_region;
 #if defined(CONFIG_OF_ADDRESS) && defined(CONFIG_HAS_DMA)
 int of_dma_get_range(struct device_node *np,
 		const struct bus_dma_region **map);
+int of_dma_set_restricted_buffer(struct device *dev);
 #else
 static inline int of_dma_get_range(struct device_node *np,
 		const struct bus_dma_region **map)
 {
 	return -ENODEV;
 }
+static inline int of_dma_set_restricted_buffer(struct device *dev)
+{
+	return -ENODEV;
+}
 #endif
 
 #endif /* _LINUX_OF_PRIVATE_H */
-- 
2.30.0.478.g8a0d178c01-goog



From xen-devel-bounces@lists.xenproject.org Tue Feb 09 06:33:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 06:33:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83156.154131 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9MaH-0004hZ-7o; Tue, 09 Feb 2021 06:33:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83156.154131; Tue, 09 Feb 2021 06:33:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9MaH-0004hS-4E; Tue, 09 Feb 2021 06:33:13 +0000
Received: by outflank-mailman (input) for mailman id 83156;
 Tue, 09 Feb 2021 06:33:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CyDg=HL=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1l9MaF-0004hN-SJ
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 06:33:12 +0000
Received: from mail-pg1-x535.google.com (unknown [2607:f8b0:4864:20::535])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 07582933-33a1-4e8b-9a25-436d3d8a854f;
 Tue, 09 Feb 2021 06:33:11 +0000 (UTC)
Received: by mail-pg1-x535.google.com with SMTP id c132so11841643pga.3
 for <xen-devel@lists.xenproject.org>; Mon, 08 Feb 2021 22:33:11 -0800 (PST)
Received: from mail-pf1-f171.google.com (mail-pf1-f171.google.com.
 [209.85.210.171])
 by smtp.gmail.com with ESMTPSA id k12sm14634527pfh.123.2021.02.08.22.33.09
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 08 Feb 2021 22:33:09 -0800 (PST)
Received: by mail-pf1-f171.google.com with SMTP id u143so5046624pfc.7
 for <xen-devel@lists.xenproject.org>; Mon, 08 Feb 2021 22:33:09 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 07582933-33a1-4e8b-9a25-436d3d8a854f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=1yfWtyKyTZvhHLdRd0Os+HgXY0IXloCRl6/1X6y49yY=;
        b=I8sAs6STSD3viziWc9zjdGpRV2Ij0huuBcH4DTqpEpqza+Sa89+jcmi0lEr6QyU3Yc
         L7rP/rH0/RfRBI83gH4R47SJOVgSrDffjCf/VWsaIa4tSZ3K01/puYHwXeA+73O+NMbj
         Mayn3mIGAEUCdMMblgZt8JrLoIUQfDCTzzdMQ=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=1yfWtyKyTZvhHLdRd0Os+HgXY0IXloCRl6/1X6y49yY=;
        b=gXLW+Orgd9NH2iFJqloxkyxt73G6XTlJ9wrDmgcYWPZVkbiFlwRC/J9Rywo8hljxJc
         LB2VFKfamKH/wN6AJNKMC9xAhkbW6oo1NFQA7l7Nn47cd5AUYdGcfXgMjashobP2XG1s
         LImyyiZ9NAKPUMP4hyeB4NFKglhn5C8f16jBy1G1kUt0wz23BRtXeefj0Bf3HEMOfjfT
         CGg1zVZmP5Egaeu8wGZQMCcunUHbwtf5SgynoaDHmYrjxIfHxRr0TTQnPTNyVlDIpV2I
         /3vj4rLrcjf7OrXxeb9/famDtz9fMmt/EXGbyA0QtiDtib8KE246s4N5kFQSi1/BCUtM
         huuw==
X-Gm-Message-State: AOAM533pXXIMHNJFJe3Ukq4hQsAENmKKMWaPQTWhrRZqDQ1zMvZsrW++
	msUDYk3ebb2QPQNcwQju2yQE1qTzPhbHbQ==
X-Google-Smtp-Source: ABdhPJz4Ry/aueayWTd0I72d6dGp9pcluMTKmzOFwAZ0XUI3pp2qCPnqf11yVOZHGGgaFArbu0Tt7Q==
X-Received: by 2002:a63:d304:: with SMTP id b4mr20839159pgg.299.1612852390107;
        Mon, 08 Feb 2021 22:33:10 -0800 (PST)
X-Received: by 2002:a6b:144c:: with SMTP id 73mr18274991iou.69.1612852048986;
 Mon, 08 Feb 2021 22:27:28 -0800 (PST)
MIME-Version: 1.0
References: <20210106034124.30560-1-tientzu@chromium.org> <d7043239-12cf-3636-4726-2e3b90917dc6@gmail.com>
 <CALiNf28sU1VtGB7LeTXExkMwQiCeg8N5arqyEjw0CPZP72R4dg@mail.gmail.com>
 <78871151-947d-b085-db03-0d0bd0b55632@gmail.com> <CALiNf29_PmLJTVLksSHp3NFAaL52idqehSMOtatJ=jaM2Muq1g@mail.gmail.com>
 <23a09b9a-70fc-a7a8-f3ea-b0bfa60507f0@gmail.com> <CAAFQd5DX=AdaYSYQbxgnrYYojkM5q7EE_Qs-BYPOiNjcQWbN1A@mail.gmail.com>
 <c7f7941d-b8bd-f0f3-4e40-b899a77188bf@gmail.com> <CAAFQd5AGm4U8hD4jHmw10S7MRS1-ZUSq7eGgoUifMMyfZgP2NA@mail.gmail.com>
 <7fe99ad2-79a7-9c8b-65ce-ce8353e9d9bf@gmail.com>
In-Reply-To: <7fe99ad2-79a7-9c8b-65ce-ce8353e9d9bf@gmail.com>
From: Claire Chang <tientzu@chromium.org>
Date: Tue, 9 Feb 2021 14:27:18 +0800
X-Gmail-Original-Message-ID: <CALiNf2_rRufFoxNN=i0_LkVvw31tXetKasm3SrzYy7O8o-sfgg@mail.gmail.com>
Message-ID: <CALiNf2_rRufFoxNN=i0_LkVvw31tXetKasm3SrzYy7O8o-sfgg@mail.gmail.com>
Subject: Re: [RFC PATCH v3 0/6] Restricted DMA
To: Florian Fainelli <f.fainelli@gmail.com>
Cc: Tomasz Figa <tfiga@chromium.org>, Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au, 
	benh@kernel.crashing.org, paulus@samba.org, 
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>, Joerg Roedel <joro@8bytes.org>, 
	Will Deacon <will@kernel.org>, Frank Rowand <frowand.list@gmail.com>, 
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, boris.ostrovsky@oracle.com, jgross@suse.com, 
	sstabellini@kernel.org, Christoph Hellwig <hch@lst.de>, 
	Marek Szyprowski <m.szyprowski@samsung.com>, Robin Murphy <robin.murphy@arm.com>, grant.likely@arm.com, 
	xypron.glpk@gmx.de, Thierry Reding <treding@nvidia.com>, mingo@kernel.org, 
	bauerman@linux.ibm.com, peterz@infradead.org, 
	Greg KH <gregkh@linuxfoundation.org>, Saravana Kannan <saravanak@google.com>, 
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>, heikki.krogerus@linux.intel.com, 
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>, Randy Dunlap <rdunlap@infradead.org>, 
	Dan Williams <dan.j.williams@intel.com>, Bartosz Golaszewski <bgolaszewski@baylibre.com>, 
	linux-devicetree <devicetree@vger.kernel.org>, lkml <linux-kernel@vger.kernel.org>, 
	linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, 
	Nicolas Boichat <drinkcat@chromium.org>, Jim Quinlan <james.quinlan@broadcom.com>
Content-Type: text/plain; charset="UTF-8"

v4 here: https://lore.kernel.org/patchwork/cover/1378113/


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 06:57:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 06:57:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83159.154143 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Mxm-0006fd-BH; Tue, 09 Feb 2021 06:57:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83159.154143; Tue, 09 Feb 2021 06:57:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Mxm-0006fW-7S; Tue, 09 Feb 2021 06:57:30 +0000
Received: by outflank-mailman (input) for mailman id 83159;
 Tue, 09 Feb 2021 06:57:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9Mxl-0006fO-BG; Tue, 09 Feb 2021 06:57:29 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9Mxl-0000EN-3B; Tue, 09 Feb 2021 06:57:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9Mxk-0000qk-Qt; Tue, 09 Feb 2021 06:57:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l9Mxk-0001Or-QQ; Tue, 09 Feb 2021 06:57:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=FNsEI28/PsOHOnH9vl+2lNQcaez8oGgMgQ95dED2xEo=; b=CpQs0K93yAncto3wngo0inOc2h
	Pe+PyeUgtZWYCVexH2q6XWPRQYoeuwz+E4w5wEUkLZkk5fGvfQ9iIt2jmYsxjFlFlLKGubU5Nbu7v
	AiXXaL5YAN405OXuuGZA+So2OCCOI0oQKJWpk85YZvnEPvWdbAOjLTUvdFCmaDhAB1Lw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159134-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 159134: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    xen-unstable:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=ca82d3fecc93745ee17850a609ac7772bd7c8bf7
X-Osstest-Versions-That:
    xen=ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 09 Feb 2021 06:57:28 +0000

flight 159134 xen-unstable real [real]
flight 159154 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/159134/
http://logs.test-lab.xenproject.org/osstest/logs/159154/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 159036
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 159036

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 159036
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 159036
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 159036
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 159036
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 159036
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 159036
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 159036
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 159036
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 159036
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 159036
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  ca82d3fecc93745ee17850a609ac7772bd7c8bf7
baseline version:
 xen                  ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7

Last test of basis   159036  2021-02-05 08:46:54 Z    3 days
Testing same since   159077  2021-02-06 11:11:30 Z    2 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Michał Leszczyński <michal.leszczynski@cert.pl>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Tamas K Lengyel <tamas.lengyel@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit ca82d3fecc93745ee17850a609ac7772bd7c8bf7
Author: Tamas K Lengyel <tamas.lengyel@intel.com>
Date:   Sat Jan 30 08:36:37 2021 -0500

    x86/vm_event: add response flag to reset vmtrace buffer
    
    Allow resetting the vmtrace buffer in response to a vm_event. This can be used
    to optimize a use-case where detecting a looped vmtrace buffer is important.
    
    Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit c5866ab93167a73a8d4d85b844edf4aa364a1aaa
Author: Tamas K Lengyel <tamas.lengyel@intel.com>
Date:   Mon Jan 18 12:46:37 2021 -0500

    x86/vm_event: Carry the vmtrace buffer position in vm_event
    
    Add vmtrace_pos field to x86 regs in vm_event. Initialized to ~0 if
    vmtrace is not in use.
    
    Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit 9744611991a042e9aea348c5721b80cc2101c7e5
Author: Tamas K Lengyel <tamas.lengyel@intel.com>
Date:   Fri Sep 11 20:14:00 2020 +0200

    xen/vmtrace: support for VM forks
    
    Implement vmtrace_reset_pt function. Properly set IPT
    state for VM forks.
    
    Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit 88dd8389dd2c9442729e9d96a4febaf38cd822e3
Author: Michał Leszczyński <michal.leszczynski@cert.pl>
Date:   Tue Jun 16 15:35:07 2020 +0200

    tools/misc: Add xen-vmtrace tool
    
    Add a demonstration tool that uses xc_vmtrace_* calls to manage
    external IPT monitoring for a DomU.
    
    Signed-off-by: Michał Leszczyński <michal.leszczynski@cert.pl>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit 53aaa792fdebcf131983d45ee8e3d09bd0740c71
Author: Michał Leszczyński <michal.leszczynski@cert.pl>
Date:   Tue Jun 16 15:33:25 2020 +0200

    tools/libxc: Add xc_vmtrace_* functions
    
    Add functions in libxc that use the new XEN_DOMCTL_vmtrace interface.
    
    Signed-off-by: Michał Leszczyński <michal.leszczynski@cert.pl>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit 1cee4bd97c88633c4a39f56f6722be0727c9ea8f
Author: Michał Leszczyński <michal.leszczynski@cert.pl>
Date:   Sun Jun 28 23:48:09 2020 +0200

    xen/domctl: Add XEN_DOMCTL_vmtrace_op
    
    Implement an interface to configure and control tracing operations.  Reuse the
    existing SETDEBUGGING flask vector rather than inventing a new one.
    
    Userspace using this interface is going to need platform specific knowledge
    anyway to interpret the contents of the trace buffer.  While some operations
    (e.g. enable/disable) can reasonably be generic, others cannot.  Provide an
    explicitly platform-specific pair of get/set operations to reduce API churn as
    new options get added/enabled.
    
    For the VMX specific Processor Trace implementation, tolerate reading and
    modifying a safe subset of bits in CTL, STATUS and OUTPUT_MASK.  This permits
    userspace to control the content which gets logged, but prevents modification
    of details such as the position/size of the output buffer.
    
    Signed-off-by: Michał Leszczyński <michal.leszczynski@cert.pl>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>
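    The "safe subset of bits" policy described above can be sketched as follows.
    This is an illustration only, not the actual Xen code: the mask value and
    the helper name are hypothetical; the real whitelists for the VMX CTL,
    STATUS and OUTPUT_MASK registers live in the patch itself.

    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical whitelist of userspace-writable bits in a platform
     * control register (NOT the real VMX RTIT_CTL mask). */
    #define CTL_SAFE_MASK 0x00000000000fff00ULL

    /* Merge a userspace-requested value into the current register value,
     * letting through only the whitelisted bits. */
    static uint64_t apply_safe_subset(uint64_t current, uint64_t requested)
    {
        return (current & ~CTL_SAFE_MASK) | (requested & CTL_SAFE_MASK);
    }

    int main(void)
    {
        uint64_t cur = 0xff00000000000001ULL;  /* hypervisor-owned bits set */
        uint64_t req = ~0ULL;                  /* userspace asks for everything */

        /* Protected bits survive; only the safe subset is taken from req. */
        printf("%#llx\n", (unsigned long long)apply_safe_subset(cur, req));
        return 0;
    }
    ```

    The same masking step serves both the get and set operations: userspace can
    steer what gets logged, but cannot move or resize the output buffer.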

commit 71cb03f03ce309e8cc1dacd18aa383ccea6af231
Author: Michał Leszczyński <michal.leszczynski@cert.pl>
Date:   Tue Jun 16 15:20:18 2020 +0200

    x86/vmx: Add Intel Processor Trace support
    
    Add CPUID/MSR enumeration details for Processor Trace.  For now, we will only
    support its use inside VMX operation.  Fill in the vmtrace_available boolean
    to activate the newly introduced common infrastructure for allocating trace
    buffers.
    
    For now, Processor Trace is going to be operated in Single Output mode behind
    the guest's back.  Add the MSRs to struct vcpu_msrs, and set up the buffer
    limit in vmx_init_ipt() as it is fixed for the lifetime of the domain.
    
    Context switch most of the MSRs in and out of vCPU context, but the main
    control register needs to reside in the MSR load/save lists.  Explicitly pull
    the msrs pointer out into a local variable, because the optimiser cannot keep
    it live across the memory clobbers in the MSR accesses.
    
    Signed-off-by: Michał Leszczyński <michal.leszczynski@cert.pl>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit b72eab263592a3d76aa826675e5d62606d83cecd
Author: Michał Leszczyński <michal.leszczynski@cert.pl>
Date:   Mon Jun 29 00:05:51 2020 +0200

    xen/memory: Add a vmtrace_buf resource type
    
    Allow mapping the processor trace buffer using acquire_resource().
    
    Signed-off-by: Michał Leszczyński <michal.leszczynski@cert.pl>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit 45ba9a7d7688a6a08200e37a8caa2bc99bb4d267
Author: Michał Leszczyński <michal.leszczynski@cert.pl>
Date:   Fri Jun 19 00:31:24 2020 +0200

    tools/[lib]xl: Add vmtrace_buf_size parameter
    
    Allow specifying the size of the per-vCPU trace buffer at domain
    creation.  This is zero by default (meaning: not enabled).
    
    Signed-off-by: Michał Leszczyński <michal.leszczynski@cert.pl>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit 217dd79ee29286b85074d22cc75ee064206fb2af
Author: Michał Leszczyński <michal.leszczynski@cert.pl>
Date:   Fri Jul 3 01:16:10 2020 +0200

    xen/domain: Add vmtrace_size domain creation parameter
    
    To use vmtrace, buffers of a suitable size need allocating, and different
    tasks will want different sizes.
    
    Add a domain creation parameter, and audit it appropriately in the
    {arch_,}sanitise_domain_config() functions.
    
    For now, the x86 specific auditing is tuned to Processor Trace running in
    Single Output mode, which requires a single contiguous range of memory.
    
    The size is given an arbitrary limit of 64M, which is expected to be enough
    for anticipated use cases, but not so large as to get into
    long-running-hypercall problems.
    
    Signed-off-by: Michał Leszczyński <michal.leszczynski@cert.pl>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit 34cc2e5f8dba6906da82fe8d76e839f9ab20f153
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jul 27 17:24:11 2020 +0100

    xen/memory: Fix mapping grant tables with XENMEM_acquire_resource
    
    A guest's default number of grant frames is 64, and XENMEM_acquire_resource
    will reject an attempt to map more than 32 frames.  This limit is caused by
    the size of mfn_list[] on the stack.
    
    Fix mapping of arbitrary size requests by looping over batches of 32 in
    acquire_resource(), and using hypercall continuations when necessary.
    
    To start with, break _acquire_resource() out of acquire_resource() to cope
    with type-specific dispatching, and update the return semantics to indicate
    the number of mfns returned.  Update gnttab_acquire_resource() and x86's
    arch_acquire_resource() to match these new semantics.
    
    Have do_memory_op() pass start_extent into acquire_resource() so it can pick
    up where it left off after a continuation, and loop over batches of 32 until
    all the work is done, or a continuation needs to occur.
    
    compat_memory_op() is a bit more complicated, because it also has to marshal
    frame_list in the XLAT buffer.  Have it account for continuation information
    itself and hide details from the upper layer, so it can marshal the buffer in
    chunks if necessary.
    
    With these fixes in place, it is now possible to map the whole grant table for
    a guest.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>
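    The batching scheme this commit describes — chunks of 32 frames, with a
    start_extent carried across hypercall continuations — can be sketched in
    miniature.  This is an illustrative stand-in, not the actual hypervisor
    code; the per-call batch budget that forces a "continuation" here is an
    arbitrary assumption for the demonstration.

    ```c
    #include <stdint.h>
    #include <stdio.h>

    #define BATCH 32

    /* Hypothetical per-call budget: pretend the hypervisor preempts after
     * two batches and asks the caller to continue. */
    #define BATCHES_PER_CALL 2

    /* Returns the new start_extent; equals nr_frames once all work is done. */
    static unsigned int acquire_resource(unsigned int start_extent,
                                         unsigned int nr_frames,
                                         unsigned int *mapped)
    {
        unsigned int done = start_extent, batches = 0;

        while (done < nr_frames && batches < BATCHES_PER_CALL) {
            unsigned int n = nr_frames - done;
            if (n > BATCH)
                n = BATCH;
            *mapped += n;     /* stand-in for mapping one batch of MFNs */
            done += n;
            batches++;
        }
        return done;          /* continuation point for the next call */
    }

    int main(void)
    {
        unsigned int nr_frames = 69, start = 0, mapped = 0, calls = 0;

        while (start < nr_frames) {   /* caller resumes until complete */
            start = acquire_resource(start, nr_frames, &mapped);
            calls++;
        }
        printf("mapped=%u calls=%u\n", mapped, calls);
        return 0;
    }
    ```

    With 69 frames and the assumed two-batch budget, the first call maps 64
    frames and returns early; the second call resumes at frame 64 and finishes,
    which is how a 64-frame grant table now gets mapped in full.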

commit f4318db940c39cc656128fcf72df3e79d2e55bc1
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Feb 5 14:09:42 2021 +0100

    x86/EFI: work around GNU ld 2.36 issue
    
    Our linker capability check fails with the recent binutils release's ld:
    
    .../check.o:(.debug_aranges+0x6): relocation truncated to fit: R_X86_64_32 against `.debug_info'
    .../check.o:(.debug_info+0x6): relocation truncated to fit: R_X86_64_32 against `.debug_abbrev'
    .../check.o:(.debug_info+0xc): relocation truncated to fit: R_X86_64_32 against `.debug_str'+76
    .../check.o:(.debug_info+0x11): relocation truncated to fit: R_X86_64_32 against `.debug_str'+d
    .../check.o:(.debug_info+0x15): relocation truncated to fit: R_X86_64_32 against `.debug_str'+2b
    .../check.o:(.debug_info+0x29): relocation truncated to fit: R_X86_64_32 against `.debug_line'
    .../check.o:(.debug_info+0x30): relocation truncated to fit: R_X86_64_32 against `.debug_str'+19
    .../check.o:(.debug_info+0x37): relocation truncated to fit: R_X86_64_32 against `.debug_str'+71
    .../check.o:(.debug_info+0x3e): relocation truncated to fit: R_X86_64_32 against `.debug_str'
    .../check.o:(.debug_info+0x45): relocation truncated to fit: R_X86_64_32 against `.debug_str'+5e
    .../check.o:(.debug_info+0x4c): additional relocation overflows omitted from the output
    
    Tell the linker to strip debug info as a workaround. Debug info has been
    getting stripped already anyway when linking the actual xen.efi.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit d7acc47c8201611fda98ce5bd465626478ca4759
Author: Roger Pau Monne <roger.pau@citrix.com>
Date:   Fri Feb 5 13:19:38 2021 +0100

    tools/tests: fix resource test build on FreeBSD
    
    error.h is not a standard header, and none of the functions declared
    there are actually used by the code. This fixes the build on FreeBSD,
    which doesn't have error.h.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 08:41:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 08:41:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83169.154167 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9OZv-0000ca-9f; Tue, 09 Feb 2021 08:40:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83169.154167; Tue, 09 Feb 2021 08:40:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9OZv-0000cT-6f; Tue, 09 Feb 2021 08:40:59 +0000
Received: by outflank-mailman (input) for mailman id 83169;
 Tue, 09 Feb 2021 08:40:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TLR6=HL=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1l9OZt-0000cO-ED
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 08:40:57 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e785b8ec-0630-489f-8e4b-3fe257e332de;
 Tue, 09 Feb 2021 08:40:55 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 711FD68AFE; Tue,  9 Feb 2021 09:40:51 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e785b8ec-0630-489f-8e4b-3fe257e332de
Date: Tue, 9 Feb 2021 09:40:50 +0100
From: Christoph Hellwig <hch@lst.de>
To: Claire Chang <tientzu@chromium.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com, xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
	bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>
Subject: Re: [PATCH v4 01/14] swiotlb: Remove external access to
 io_tlb_start
Message-ID: <20210209084050.GA32212@lst.de>
References: <20210209062131.2300005-1-tientzu@chromium.org> <20210209062131.2300005-2-tientzu@chromium.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210209062131.2300005-2-tientzu@chromium.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Tue, Feb 09, 2021 at 02:21:18PM +0800, Claire Chang wrote:
> This can be dropped if Christoph's swiotlb cleanups are landed.
> https://lore.kernel.org/linux-iommu/20210207160934.2955931-1-hch@lst.de/T/#m7124f29b6076d462101fcff6433295157621da09 

FYI, I've also started looking into additional cleanups based on your
struct in this branch, but I'd like to wait for all the previous
changes to settle first:

http://git.infradead.org/users/hch/misc.git/shortlog/refs/heads/swiotlb-struct


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 08:44:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 08:44:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83170.154178 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Odf-0000mD-PY; Tue, 09 Feb 2021 08:44:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83170.154178; Tue, 09 Feb 2021 08:44:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Odf-0000m6-Me; Tue, 09 Feb 2021 08:44:51 +0000
Received: by outflank-mailman (input) for mailman id 83170;
 Tue, 09 Feb 2021 08:44:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1l9Odd-0000lz-Rr
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 08:44:49 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l9OdY-0002VG-GH; Tue, 09 Feb 2021 08:44:44 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l9OdY-0007mi-8L; Tue, 09 Feb 2021 08:44:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=JHNad0cRbgJiiXXvQqnBjdntDM++m4DLcJwaUrDqeoA=; b=daLdFN31GiQsQjcwAk/NpDECqe
	IvYsyjY4A70wepeE7/Qp3GjC1u8B8BYCNSw3nvhe4jkEuGtYpAQ9IFmEXq0BjyEA3kuN7xHAyBENI
	0Is4EK2M3x+YJ6ZK7LKK82KiV+jF+lcZ8eOmsGQ34S9XcQ09Cp+M4R2ToqqBJ3DCiDpg=;
Subject: Re: [PATCH v2] xen/arm: fix gnttab_need_iommu_mapping
To: Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien.grall.oss@gmail.com>
Cc: lucmiccio@gmail.com, xen-devel <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Rahul Singh <Rahul.Singh@arm.com>,
 Stefano Stabellini <stefano.stabellini@xilinx.com>,
 Ian Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>
References: <20210208184932.23468-1-sstabellini@kernel.org>
 <173ed75a-94cf-26a5-9271-a687bf201578@xen.org>
 <alpine.DEB.2.21.2102081214010.8948@sstabellini-ThinkPad-T480s>
 <CAJ=z9a3uhiFKE6gepaPvWZxqRErCyLiv2CTDSx3Sihef7CaMtQ@mail.gmail.com>
 <alpine.DEB.2.21.2102081556480.8948@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <c1fa0c41-07f5-eef3-254d-c5285a672f97@xen.org>
Date: Tue, 9 Feb 2021 08:44:41 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2102081556480.8948@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 09/02/2021 01:57, Stefano Stabellini wrote:
> On Mon, 8 Feb 2021, Julien Grall wrote:
>> On Mon, 8 Feb 2021 at 20:24, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>>> @Ian, I think this wants to go in 4.15. Without it, Xen may receive an IOMMU
>>>> fault for DMA transaction using granted page.
>>>>
>>>>> Backport: 4.12+
>>>>>
>>>>> ---
>>>>>
>>>>> Given the severity of the bug, I would like to request this patch to be
>>>>> backported to 4.12 too, even if 4.12 is security-fixes only since Oct
>>>>> 2020.
>>>>
>>>> I would agree that the bug is bad, but it is not clear to me why it would
>>>> warrant an exception for backporting. Can you outline the worst that can
>>>> happen?
>>>>
>>>> Correct me if I am wrong: if one can hit this error, then it should be pretty
>>>> reliable. Therefore, anyone wanting to use 4.12 in production should have seen
>>>> the error on their setup by now (4.12 has been out for nearly two years).
>>>> If not, then they are most likely not affected.
>>>>
>>>> Any new users of Xen should use the latest stable rather than starting with an
>>>> old version.
>>>
>>> Yes, the bug reproduces reliably but it takes more than a smoke test to
>>> find it. That's why it wasn't found by OSSTest and also our internal
>>> CI-loop at Xilinx.
>>
>> Ok. So a user should be able to catch it during testing, is that correct?
> 
> Yes, probably. The failure is that PV drivers do not work (they trigger
> the IOMMU fault), specifically PV network and block, maybe others too.
> 
> I think it is unlikely but possible that an hardware update would also
> trigger the bug. For instance, a change of the network card might
> trigger the bug, if the previous network card driver was always bouncing
> requests on bounce buffers, while the new drivers uses the provided
> memory pages directly. I don't know how realistic this scenario is.
> 
> 
>>> Users can be very slow at upgrading, so I am worried that 4.12 might still
>>> be picked by somebody, especially given that it is still security
>>> supported for a while.
>>
>> Don't tell me about upgrading Xen... ;) But I am a bit confused, are
>> you worried about existing users or new users?
> 
> I am mostly worried about people that start using 4.12.

I think it would be a big mistake for anyone to start using 4.12 now. I
can already cite a few bugs (including in the SMMU driver) that haven't
been backported to 4.12. This is only going to get worse as the branch
is no longer maintained beyond security fixes.

It is also not clear why someone would decide to use 4.12 when 4.13/4.14
are still supported and will also come with an extra year and a half of
security support.

> 
> If a user was already on 4.12 and not seeing any errors, they are
> unlikely to see this error. It would only happen if:
> - they didn't use PV drivers before, and they want to start using PV
>    drivers now
> - they are upgrading hardware (not sure how likely to happen, see above)

Right, if you decide to switch devices or upgrade hardware, then you may
also face other issues, either in Xen or in Linux.

Once a tree is out of support, we make no promise that it will work on
new setups (including dom0 software). We only promise that it will
continue to work on existing setups and that we will address security
issues.

>>> - is the submitter willing to provide the backport?
>>> - is the backport low-risk?
>>> - is the underlying bug important?
>>
>> You wrote multiple times that this is serious, but it is still not
>> clear what's the worst that can happen...
> 
> PV drivers don't work: each data transfer involving granted pages causes
> an IOMMU fault.

Based on all the information you provided, this is not a fix I would
recommend backporting to 4.12, because it only impacts new or upgraded
systems (software or hardware).

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 09:40:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 09:40:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83177.154196 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9PVB-0006Qs-Ua; Tue, 09 Feb 2021 09:40:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83177.154196; Tue, 09 Feb 2021 09:40:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9PVB-0006Ql-RY; Tue, 09 Feb 2021 09:40:09 +0000
Received: by outflank-mailman (input) for mailman id 83177;
 Tue, 09 Feb 2021 09:40:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=BG3/=HL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l9PVA-0006QQ-Lr
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 09:40:08 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cb84ca81-c303-42dd-b071-6664f336b92e;
 Tue, 09 Feb 2021 09:40:06 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E8EE1B020;
 Tue,  9 Feb 2021 09:40:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cb84ca81-c303-42dd-b071-6664f336b92e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612863606; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=zWEvnI/v8UsciVmot0jECcmv6rEGbRmKLcX6GcymyPY=;
	b=Lu9LgmCOOKYjtdnKpkO/AsyuXsAuyrok8ZmM1L7ZsNGsb8BJ4zhFGjFXh4aetQ91WLDrX6
	5EVpG0aYm5dMTRGV076iREOgDW/u8JIvhQh53X/bJMkF+Y/gqwvH9HlbWpwj6wOi7DdhH0
	LTdfgTuJtTFLjahVKgXwdNR3PC/5Q2w=
Subject: Re: [PATCH HVM v2 1/1] hvm: refactor set param
To: Norbert Manthey <nmanthey@amazon.de>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Ian Jackson <iwj@xenproject.org>,
 xen-devel@lists.xenproject.org
References: <20210205203905.8824-1-nmanthey@amazon.de>
 <edf1cd78-2192-2679-9a34-804c5d7b75c5@suse.com>
 <ba146cd6-fd5a-78d8-40bc-59885d265f5f@amazon.de>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b8529792-3d99-2e0d-7ebe-31c2c406145f@suse.com>
Date: Tue, 9 Feb 2021 10:40:05 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <ba146cd6-fd5a-78d8-40bc-59885d265f5f@amazon.de>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 08.02.2021 20:47, Norbert Manthey wrote:
> On 2/8/21 3:21 PM, Jan Beulich wrote:
>> On 05.02.2021 21:39, Norbert Manthey wrote:
>>> @@ -4108,6 +4108,13 @@ static int hvm_allow_set_param(struct domain *d,
>>>      if ( rc )
>>>          return rc;
>>>
>>> +    if ( index >= HVM_NR_PARAMS )
>>> +        return -EINVAL;
>>> +
>>> +    /* Make sure we evaluate permissions before loading data of domains. */
>>> +    block_speculation();
>>> +
>>> +    value = d->arch.hvm.params[index];
>>>      switch ( index )
>>>      {
>>>      /* The following parameters should only be changed once. */
>> I don't see the need for the heavier block_speculation() here;
>> afaict array_access_nospec() should do fine. The switch() in
>> context above as well as the switch() further down in the
>> function don't have any speculation susceptible code.
> The reason to block speculation instead of just using the hardened index
> access is to disallow speculatively loading data from another domain.

Okay, looks like I got misled by the added bounds check. Why
do you add that, when the sole caller already has one? It'll
suffice since you move the array access past the barrier,
won't it?

>>> @@ -4141,6 +4148,9 @@ static int hvm_set_param(struct domain *d, uint32_t index, uint64_t value)
>>>      if ( rc )
>>>          return rc;
>>>
>>> +    /* Make sure we evaluate permissions before loading data of domains. */
>>> +    block_speculation();
>>> +
>>>      switch ( index )
>>>      {
>>>      case HVM_PARAM_CALLBACK_IRQ:
>> Like you do for the "get" path I think this similarly renders
>> pointless the use in hvmop_set_param() (and - see below - the
>> same consideration wrt is_hvm_domain() applies).
> Can you please be more specific why this is pointless? I understand that
> the is_hvm_domain check comes with a barrier that can be used to not add
> another barrier. However, I did not find such a barrier here, which
> comes between the 'if (rc)' just above, and the potential next access
> based on the value of 'index'. At least the access behind the switch
> statement cannot be optimized and replaced with a constant value easily.

I'm suspecting a misunderstanding (the more that further down
you did agree to what I've said for hvmop_get_param()): I'm
not saying your addition is pointless. Instead I'm saying that
your addition should be accompanied by removal of the barrier
from hvmop_set_param(), paralleling what you do to
hvmop_get_param(). And additionally I'm saying that just like
in hvmop_get_param() the barrier there was already previously
redundant with that inside is_hvm_domain().

Jan


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 10:03:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 10:03:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83179.154208 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Pqw-0008Rp-QX; Tue, 09 Feb 2021 10:02:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83179.154208; Tue, 09 Feb 2021 10:02:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Pqw-0008Ri-NY; Tue, 09 Feb 2021 10:02:38 +0000
Received: by outflank-mailman (input) for mailman id 83179;
 Tue, 09 Feb 2021 10:02:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=BG3/=HL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l9Pqv-0008Rd-9O
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 10:02:37 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id eaa20949-1c72-4972-b76f-c501aa5503c8;
 Tue, 09 Feb 2021 10:02:35 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 81872AC43;
 Tue,  9 Feb 2021 10:02:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eaa20949-1c72-4972-b76f-c501aa5503c8
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612864954; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=FyK5MsDNNp1NN+1loQFIX8n6XjC4zx7Pzae0rBgi9Uc=;
	b=BI9NwY0uI45yVJlZVjomIYZYCTYuT5hG+HmMjPC+Tpr1QUeb3Lphf08EiS3XgSCp/Yow2z
	br6htmfxUnEMXwJsJhX4dFujW0vqM18AQANtrpKnLponPIZwh90KYaBm+QtAgFfImQHbGs
	Luc8A0yfeYNz6EPDBUknH+dg1N9z5y0=
Subject: Re: [PATCH v2] xen/arm: fix gnttab_need_iommu_mapping
To: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
Cc: lucmiccio@gmail.com, xen-devel@lists.xenproject.org,
 Bertrand.Marquis@arm.com, Volodymyr_Babchuk@epam.com, Rahul.Singh@arm.com,
 Stefano Stabellini <stefano.stabellini@xilinx.com>,
 Ian Jackson <iwj@xenproject.org>
References: <20210208184932.23468-1-sstabellini@kernel.org>
 <173ed75a-94cf-26a5-9271-a687bf201578@xen.org>
 <alpine.DEB.2.21.2102081214010.8948@sstabellini-ThinkPad-T480s>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4df687cb-d3bc-ccb8-4e7c-a6429c37574e@suse.com>
Date: Tue, 9 Feb 2021 11:02:34 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2102081214010.8948@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 08.02.2021 21:24, Stefano Stabellini wrote:
> On Mon, 8 Feb 2021, Julien Grall wrote:
>> On 08/02/2021 18:49, Stefano Stabellini wrote:
>>> Given the severity of the bug, I would like to request this patch to be
>>> backported to 4.12 too, even if 4.12 is security-fixes only since Oct
>>> 2020.
>>
>> I would agree that the bug is bad, but it is not clear to me why it would
>> warrant an exception for backporting. Can you outline the worst that can
>> happen?
>>
>> Correct me if I am wrong: if one can hit this error, then it should be
>> pretty reliable. Therefore, anyone wanting to use 4.12 in production should
>> have seen the error on their setup by now (4.12 has been out for nearly two
>> years). If not, then they are most likely not affected.
>>
>> Any new users of Xen should start with the latest stable release rather
>> than an old version.
> 
> Yes, the bug reproduces reliably, but it takes more than a smoke test to
> find it. That's why it wasn't found by OSSTest or by our internal CI loop
> at Xilinx.
> 
> Users can be very slow at upgrading, so I am worried that 4.12 might still
> be picked by somebody, especially given that it is still security
> supported for a while.
> 
> 
>> Other than the seriousness of the bug, I think there is also a fairness
>> concern.
>>
>> So far our rules say that only security backports are allowed. If we start
>> granting exceptions, then we need a way to prevent abuse. To take an
>> extreme example, why couldn't one ask for a backport to 4.2?
>>
>> That said, I vaguely remember this topic was brought up a few times on
>> security@. So maybe it is time to have a new discussion about the stable
>> trees.
> 
> I wouldn't consider a backport for a tree that is closed even for
> security backports. So in your example, I'd say no to a backport to 4.2
> or 4.10.
> 
> I think there is a valid question for trees that are still open to
> security fixes but not general backports.
> 
> For these cases, I would just follow a simple rule of thumb:
> - is the submitter willing to provide the backport?
> - is the backport low-risk?
> - is the underlying bug important?
> 
> If the answer to all is "yes" then I'd go with it.

Personally I disagree, for the very simple reason that the question
then becomes "Where do we draw the line?" The only non-security
backports that I consider acceptable are low-risk changes to allow
building with newer tool chains. I know other backports have occurred
in the past, and I did voice my disagreement when that happened.

But this is a community decision, so my opinion counts as just a
single vote.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 10:06:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 10:06:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83181.154221 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Pum-0000AZ-CU; Tue, 09 Feb 2021 10:06:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83181.154221; Tue, 09 Feb 2021 10:06:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Pum-0000AR-8B; Tue, 09 Feb 2021 10:06:36 +0000
Received: by outflank-mailman (input) for mailman id 83181;
 Tue, 09 Feb 2021 10:06:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=BG3/=HL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l9Puk-0000AL-Kt
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 10:06:34 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 35d98260-657a-4fd7-9db2-2e3128c60896;
 Tue, 09 Feb 2021 10:06:34 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 251B4AE3B;
 Tue,  9 Feb 2021 10:06:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 35d98260-657a-4fd7-9db2-2e3128c60896
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612865193; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ZAdp+dgFr/1Fy+3hDJaJOcEodz3X7Lg/NImt0uIIEJw=;
	b=lldFJSVcoH11qByznRii7BSJBTm09rlpAMdZZUzOKpGoNpv2ZlhW6C4+uU6WLm4xo7lKPa
	1xgS3eqMS0x8HQV/wMgHs6GCu6OAhqYMGifDthS1QdiaggawfrG5k4s4zzXcDF+CcF1UX2
	U96G0RnN1Rql3PYOnqSWwG2TgHBDXFQ=
Subject: Re: [PATCH HVM v3 1/1] hvm: refactor set param
To: Norbert Manthey <nmanthey@amazon.de>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Ian Jackson <iwj@xenproject.org>,
 xen-devel@lists.xenproject.org
References: <ba146cd6-fd5a-78d8-40bc-59885d265f5f@amazon.de>
 <20210208200049.28571-1-nmanthey@amazon.de>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <67bb7ba1-bacf-7884-f513-b47b7fe5b694@suse.com>
Date: Tue, 9 Feb 2021 11:06:33 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210208200049.28571-1-nmanthey@amazon.de>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 08.02.2021 21:00, Norbert Manthey wrote:
> To prevent leaking HVM params via L1TF and similar issues on a
> hyperthread pair, let's load values of domains after performing all
> relevant checks, and blocking speculative execution.

I'd like to suggest "..., let's load values of domains only
after ...".

But there are other points open from v2; I'd like to further
suggest that you allow discussion on a prior version to first
settle, before sending a new one. Unless of course discussion
appears to have stalled.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 10:25:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 10:25:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83185.154233 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9QDC-00022t-Vd; Tue, 09 Feb 2021 10:25:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83185.154233; Tue, 09 Feb 2021 10:25:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9QDC-00022m-RZ; Tue, 09 Feb 2021 10:25:38 +0000
Received: by outflank-mailman (input) for mailman id 83185;
 Tue, 09 Feb 2021 10:25:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kDMm=HL=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1l9QDA-00022h-Qr
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 10:25:37 +0000
Received: from EUR02-VE1-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:fe06::617])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2b8d5cb1-0ecb-43e5-8892-2ec872e26aa7;
 Tue, 09 Feb 2021 10:25:34 +0000 (UTC)
Received: from AM3PR04CA0144.eurprd04.prod.outlook.com (2603:10a6:207::28) by
 AM5PR0802MB2420.eurprd08.prod.outlook.com (2603:10a6:203:9e::15) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3825.30; Tue, 9 Feb
 2021 10:25:32 +0000
Received: from VE1EUR03FT018.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:207:0:cafe::19) by AM3PR04CA0144.outlook.office365.com
 (2603:10a6:207::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3846.25 via Frontend
 Transport; Tue, 9 Feb 2021 10:25:31 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT018.mail.protection.outlook.com (10.152.18.135) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3784.11 via Frontend Transport; Tue, 9 Feb 2021 10:25:31 +0000
Received: ("Tessian outbound 4d8113405d55:v71");
 Tue, 09 Feb 2021 10:25:31 +0000
Received: from 7662cbe77fa4.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 C91F0C8F-5AE5-4C19-9420-27A919CAF9FE.1; 
 Tue, 09 Feb 2021 10:25:25 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 7662cbe77fa4.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 09 Feb 2021 10:25:25 +0000
Received: from VE1PR08MB5696.eurprd08.prod.outlook.com (2603:10a6:800:1ae::15)
 by VI1PR08MB5422.eurprd08.prod.outlook.com (2603:10a6:803:12e::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3846.25; Tue, 9 Feb
 2021 10:25:24 +0000
Received: from VE1PR08MB5696.eurprd08.prod.outlook.com
 ([fe80::c8bf:1301:3373:94a6]) by VE1PR08MB5696.eurprd08.prod.outlook.com
 ([fe80::c8bf:1301:3373:94a6%5]) with mapi id 15.20.3825.030; Tue, 9 Feb 2021
 10:25:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2b8d5cb1-0ecb-43e5-8892-2ec872e26aa7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3zefSISgaWRNh6z2H7QFf3Laz3IEn6VRp+GE51AU420=;
 b=BwUZASTQ5k2vxxYeS+zqTpJZdxHJmtNXZsWNdgKuB8pDPUWvFUsAoPuPVJ099ji1WOWnPlh1pxRv8JTw/Tm7tRAsED3LqYY+0KjTHUXcaM9cEdR5cieL1ZNHoO2+bqAMYTLLWhruTaMZXU4rR6AQJXyV0rr7HzO1Z3HVxS7Bmd0=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 99f6bfaa43d52bf2
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=TyKAtDA5Ltc/W6PP8tDrYE8ywTAtc2FaphIXjVtvAmBWiwjYXFRtL9jgUtl+fa2YQd88pFP3WAVP2QUF4yqZUOURggAYN8Rxv0wAZR0kzgKuVXzBb7xBk92ts3ppVk/cKOzMj7oWyzYJ4qGvg++RuCiuX7Wn3iH28uNSxTtszAH1pBpbUBljqgc9bXmvpaMo+YdAk+pSqGvTsjgLLVK8SAdNlL4ioYAA1aa0/1k94nkuVt6zbnjchXalVh0W36TB8zZttMhiMzERIIZLf9vvC+MXLymbQO20HTp5BkF0FZVSG1kkNcBaR/ZSLt56fwC0HgnufIPwKQ/tBh8gYJrkOQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3zefSISgaWRNh6z2H7QFf3Laz3IEn6VRp+GE51AU420=;
 b=b/3/ab8QrgFK3gQFEVl+ZTGUPA3Anbs6qJ2ZMtE3vXbrbBeTy0TnIPPxg1euaLAHnZNDmFNDcJvwvhAy3bP0Wc2KxiOxSOgaHx166qQsm4/Lph0iwv557bBNgvdDfmomqrSh53B/6eS7Zwv9PM4ADwfqNv3UkutSgiqHqm1PukIMBW1PuQDKnIEgjhtvG0wjfhMb//n0GH+BjjhWO4IN2lB4HTdDLEjrnvkcnrGfmp9sTKl1P6Z4glL5U16ZsIHv56AIY8/NgilDj3IZCEp6crpiIvK/Sg1eJ7NeZBribseydkQ297phOkCffaAFnejGDLXn0Vnijd/PaX1Wx+Zlhw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Julien
 Grall <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	"ehem+xen@m5p.com" <ehem+xen@m5p.com>
Subject: Re: [PATCH] xen: workaround missing device_type property in pci/pcie
 nodes
Thread-Topic: [PATCH] xen: workaround missing device_type property in pci/pcie
 nodes
Thread-Index: AQHW/nYULOKwv72r4kWh+gedUrMffqpPntOA
Date: Tue, 9 Feb 2021 10:25:23 +0000
Message-ID: <22372A39-83F4-41AB-8FCC-B3A9C8551604@arm.com>
References: <alpine.DEB.2.21.2102081544230.8948@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2102081544230.8948@sstabellini-ThinkPad-T480s>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [86.26.33.241]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 6d8790b3-3c75-403c-3f4b-08d8cce50757
x-ms-traffictypediagnostic: VI1PR08MB5422:|AM5PR0802MB2420:
X-Microsoft-Antispam-PRVS:
	<AM5PR0802MB2420D38C4D29B269A4B40E559D8E9@AM5PR0802MB2420.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:3276;OLM:3276;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <F09C2EC468212A419652F4B41F6A10BE@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB5422
Original-Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT018.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	07f1a390-3123-4474-f9c6-08d8cce502c0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Feb 2021 10:25:31.3371
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 6d8790b3-3c75-403c-3f4b-08d8cce50757
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT018.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM5PR0802MB2420

Hi Stefano,

> On 8 Feb 2021, at 23:56, Stefano Stabellini <sstabellini@kernel.org> wrote:
> 
> PCI buses differ from default buses in a few important ways, so it is
> important to detect them properly. Normally, PCI buses are expected to
> have the following property:
> 
>    device_type = "pci"
> 
> In reality, it is not always the case. To handle PCI bus nodes that
> don't have the device_type property, also consider the node name: if the
> node name is "pcie" or "pci" then consider the bus as a PCI bus.
> 
> This commit is based on the Linux kernel commit
> d1ac0002dd29 "of: address: Work around missing device_type property in
> pcie nodes".
> 
> This fixes Xen boot on RPi4.

What we are really handling here is a device-tree bug that could easily be
fixed by the user. We should at least mention in the commit message that
this is a workaround for the RPi4's buggy device tree.

> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> 
> diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
> index 18825e333e..f1a96a3b90 100644
> --- a/xen/common/device_tree.c
> +++ b/xen/common/device_tree.c
> @@ -563,14 +563,28 @@ static unsigned int dt_bus_default_get_flags(const __be32 *addr)
>  * PCI bus specific translator
>  */
> 
> +static bool_t dt_node_is_pci(const struct dt_device_node *np)
> +{
> +    bool is_pci = !strcmp(np->name, "pcie") || !strcmp(np->name, "pci");

The Linux commit is a bit more restrictive and only does that for "pcie".
Any reason why you also want this workaround for "pci"?

Cheers
Bertrand

> +
> +    if (is_pci)
> +        printk(XENLOG_WARNING "%s: Missing device_type\n", np->full_name);
> +
> +    return is_pci;
> +}
> +
> static bool_t dt_bus_pci_match(const struct dt_device_node *np)
> {
>     /*
>      * "pciex" is PCI Express "vci" is for the /chaos bridge on 1st-gen PCI
>      * powermacs "ht" is hypertransport
> +     *
> +     * If none of the device_type match, and that the node name is
> +     * "pcie" or "pci", accept the device as PCI (with a warning).
>      */
>     return !strcmp(np->type, "pci") || !strcmp(np->type, "pciex") ||
> -        !strcmp(np->type, "vci") || !strcmp(np->type, "ht");
> +        !strcmp(np->type, "vci") || !strcmp(np->type, "ht") ||
> +        dt_node_is_pci(np);
> }
> 
> static void dt_bus_pci_count_cells(const struct dt_device_node *np,
> 


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 10:29:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 10:29:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83187.154245 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9QGw-0002K7-Jg; Tue, 09 Feb 2021 10:29:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83187.154245; Tue, 09 Feb 2021 10:29:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9QGw-0002K0-FY; Tue, 09 Feb 2021 10:29:30 +0000
Received: by outflank-mailman (input) for mailman id 83187;
 Tue, 09 Feb 2021 10:29:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=slWv=HL=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1l9QGv-0002Jv-7P
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 10:29:29 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 341fe205-0331-41ad-a974-c322f9c0d5d5;
 Tue, 09 Feb 2021 10:29:25 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id DD08EB109;
 Tue,  9 Feb 2021 10:29:23 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 341fe205-0331-41ad-a974-c322f9c0d5d5
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
From: Thomas Zimmermann <tzimmermann@suse.de>
To: airlied@linux.ie,
	daniel@ffwll.ch,
	maarten.lankhorst@linux.intel.com,
	mripard@kernel.org
Cc: dri-devel@lists.freedesktop.org,
	linux-aspeed@lists.ozlabs.org,
	linux-arm-kernel@lists.infradead.org,
	linux-mips@vger.kernel.org,
	linux-mediatek@lists.infradead.org,
	linux-amlogic@lists.infradead.org,
	linux-arm-msm@vger.kernel.org,
	freedreno@lists.freedesktop.org,
	linux-renesas-soc@vger.kernel.org,
	linux-rockchip@lists.infradead.org,
	linux-stm32@st-md-mailman.stormreply.com,
	linux-tegra@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>
Subject: [PATCH] drm/gem: Move drm_gem_fb_prepare_fb() to GEM atomic helpers
Date: Tue,  9 Feb 2021 11:29:13 +0100
Message-Id: <20210209102913.6372-1-tzimmermann@suse.de>
X-Mailer: git-send-email 2.30.0
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The function drm_gem_fb_prepare_fb() is a helper for atomic modesetting,
but it is currently located next to the framebuffer helpers. Move it to the
GEM atomic helpers, rename it slightly, and adapt the drivers. Do the same
for the respective simple-pipe helper.

Compile-tested with x86-64, aarch64 and arm. The patch is fairly large,
but there are no functional changes.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/aspeed/aspeed_gfx_crtc.c     |  4 +-
 drivers/gpu/drm/drm_gem_atomic_helper.c      | 69 +++++++++++++++++++-
 drivers/gpu/drm/drm_gem_framebuffer_helper.c | 63 ------------------
 drivers/gpu/drm/drm_gem_vram_helper.c        |  4 +-
 drivers/gpu/drm/imx/dcss/dcss-plane.c        |  4 +-
 drivers/gpu/drm/imx/ipuv3-plane.c            |  4 +-
 drivers/gpu/drm/ingenic/ingenic-drm-drv.c    |  3 +-
 drivers/gpu/drm/ingenic/ingenic-ipu.c        |  4 +-
 drivers/gpu/drm/mcde/mcde_display.c          |  4 +-
 drivers/gpu/drm/mediatek/mtk_drm_plane.c     |  6 +-
 drivers/gpu/drm/meson/meson_overlay.c        |  8 +--
 drivers/gpu/drm/meson/meson_plane.c          |  4 +-
 drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c    |  4 +-
 drivers/gpu/drm/msm/msm_atomic.c             |  4 +-
 drivers/gpu/drm/mxsfb/mxsfb_kms.c            |  6 +-
 drivers/gpu/drm/pl111/pl111_display.c        |  4 +-
 drivers/gpu/drm/rcar-du/rcar_du_vsp.c        |  4 +-
 drivers/gpu/drm/rockchip/rockchip_drm_vop.c  |  3 +-
 drivers/gpu/drm/stm/ltdc.c                   |  4 +-
 drivers/gpu/drm/sun4i/sun4i_layer.c          |  4 +-
 drivers/gpu/drm/sun4i/sun8i_ui_layer.c       |  4 +-
 drivers/gpu/drm/sun4i/sun8i_vi_layer.c       |  4 +-
 drivers/gpu/drm/tegra/plane.c                |  4 +-
 drivers/gpu/drm/tidss/tidss_plane.c          |  4 +-
 drivers/gpu/drm/tiny/hx8357d.c               |  4 +-
 drivers/gpu/drm/tiny/ili9225.c               |  4 +-
 drivers/gpu/drm/tiny/ili9341.c               |  4 +-
 drivers/gpu/drm/tiny/ili9486.c               |  4 +-
 drivers/gpu/drm/tiny/mi0283qt.c              |  4 +-
 drivers/gpu/drm/tiny/repaper.c               |  3 +-
 drivers/gpu/drm/tiny/st7586.c                |  4 +-
 drivers/gpu/drm/tiny/st7735r.c               |  4 +-
 drivers/gpu/drm/tve200/tve200_display.c      |  4 +-
 drivers/gpu/drm/vc4/vc4_plane.c              |  4 +-
 drivers/gpu/drm/vkms/vkms_plane.c            |  3 +-
 drivers/gpu/drm/xen/xen_drm_front_kms.c      |  3 +-
 include/drm/drm_gem_atomic_helper.h          |  8 +++
 include/drm/drm_gem_framebuffer_helper.h     |  6 +-
 include/drm/drm_modeset_helper_vtables.h     |  2 +-
 include/drm/drm_plane.h                      |  4 +-
 include/drm/drm_simple_kms_helper.h          |  2 +-
 41 files changed, 152 insertions(+), 141 deletions(-)

diff --git a/drivers/gpu/drm/aspeed/aspeed_gfx_crtc.c b/drivers/gpu/drm/aspeed/aspeed_gfx_crtc.c
index e54686c31a90..d8f214e0be82 100644
--- a/drivers/gpu/drm/aspeed/aspeed_gfx_crtc.c
+++ b/drivers/gpu/drm/aspeed/aspeed_gfx_crtc.c
@@ -9,8 +9,8 @@
 #include <drm/drm_device.h>
 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fourcc.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_panel.h>
 #include <drm/drm_simple_kms_helper.h>
 #include <drm/drm_vblank.h>
@@ -219,7 +219,7 @@ static const struct drm_simple_display_pipe_funcs aspeed_gfx_funcs = {
 	.enable		= aspeed_gfx_pipe_enable,
 	.disable	= aspeed_gfx_pipe_disable,
 	.update		= aspeed_gfx_pipe_update,
-	.prepare_fb	= drm_gem_fb_simple_display_pipe_prepare_fb,
+	.prepare_fb	= drm_gem_simple_display_pipe_prepare_fb,
 	.enable_vblank	= aspeed_gfx_enable_vblank,
 	.disable_vblank	= aspeed_gfx_disable_vblank,
 };
diff --git a/drivers/gpu/drm/drm_gem_atomic_helper.c b/drivers/gpu/drm/drm_gem_atomic_helper.c
index e27762cef360..c656b40656bf 100644
--- a/drivers/gpu/drm/drm_gem_atomic_helper.c
+++ b/drivers/gpu/drm/drm_gem_atomic_helper.c
@@ -1,6 +1,10 @@
 // SPDX-License-Identifier: GPL-2.0-or-later
 
+#include <linux/dma-resv.h>
+
 #include <drm/drm_atomic_state_helper.h>
+#include <drm/drm_atomic_uapi.h>
+#include <drm/drm_gem.h>
 #include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_simple_kms_helper.h>
@@ -12,10 +16,69 @@
  *
  * The GEM atomic helpers library implements generic atomic-commit
  * functions for drivers that use GEM objects. Currently, it provides
- * plane state and framebuffer BO mappings for planes with shadow
- * buffers.
+ * synchronization helpers, and plane state and framebuffer BO mappings
+ * for planes with shadow buffers.
+ */
+
+/*
+ * Plane Helpers
  */
 
+/**
+ * drm_gem_prepare_fb() - Prepare a GEM backed framebuffer
+ * @plane: Plane
+ * @state: Plane state the fence will be attached to
+ *
+ * This function extracts the exclusive fence from &drm_gem_object.resv and
+ * attaches it to plane state for the atomic helper to wait on. This is
+ * necessary to correctly implement implicit synchronization for any buffers
+ * shared as a struct &dma_buf. This function can be used as the
+ * &drm_plane_helper_funcs.prepare_fb callback.
+ *
+ * There is no need for &drm_plane_helper_funcs.cleanup_fb hook for simple
+ * GEM based framebuffer drivers which have their buffers always pinned in
+ * memory.
+ *
+ * See drm_atomic_set_fence_for_plane() for a discussion of implicit and
+ * explicit fencing in atomic modeset updates.
+ */
+int drm_gem_prepare_fb(struct drm_plane *plane, struct drm_plane_state *state)
+{
+	struct drm_gem_object *obj;
+	struct dma_fence *fence;
+
+	if (!state->fb)
+		return 0;
+
+	obj = drm_gem_fb_get_obj(state->fb, 0);
+	fence = dma_resv_get_excl_rcu(obj->resv);
+	drm_atomic_set_fence_for_plane(state, fence);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(drm_gem_prepare_fb);
+
+/**
+ * drm_gem_simple_display_pipe_prepare_fb - prepare_fb helper for &drm_simple_display_pipe
+ * @pipe: Simple display pipe
+ * @plane_state: Plane state
+ *
+ * This function uses drm_gem_prepare_fb() to extract the exclusive fence
+ * from &drm_gem_object.resv and attaches it to plane state for the atomic
+ * helper to wait on. This is necessary to correctly implement implicit
+ * synchronization for any buffers shared as a struct &dma_buf. Drivers can use
+ * this as their &drm_simple_display_pipe_funcs.prepare_fb callback.
+ *
+ * See drm_atomic_set_fence_for_plane() for a discussion of implicit and
+ * explicit fencing in atomic modeset updates.
+ */
+int drm_gem_simple_display_pipe_prepare_fb(struct drm_simple_display_pipe *pipe,
+					   struct drm_plane_state *plane_state)
+{
+	return drm_gem_prepare_fb(&pipe->plane, plane_state);
+}
+EXPORT_SYMBOL(drm_gem_simple_display_pipe_prepare_fb);
+
 /*
  * Shadow-buffered Planes
  */
@@ -74,7 +137,7 @@ static int drm_gem_prepare_shadow_fb(struct drm_plane *plane, struct drm_plane_s
 	if (!fb)
 		return 0;
 
-	ret = drm_gem_fb_prepare_fb(plane, plane_state);
+	ret = drm_gem_prepare_fb(plane, plane_state);
 	if (ret)
 		return ret;
 
diff --git a/drivers/gpu/drm/drm_gem_framebuffer_helper.c b/drivers/gpu/drm/drm_gem_framebuffer_helper.c
index 109d11fb4cd4..5ed2067cebb6 100644
--- a/drivers/gpu/drm/drm_gem_framebuffer_helper.c
+++ b/drivers/gpu/drm/drm_gem_framebuffer_helper.c
@@ -5,13 +5,8 @@
 * Copyright (C) 2017 Noralf Trønnes
  */
 
-#include <linux/dma-buf.h>
-#include <linux/dma-fence.h>
-#include <linux/dma-resv.h>
 #include <linux/slab.h>
 
-#include <drm/drm_atomic.h>
-#include <drm/drm_atomic_uapi.h>
 #include <drm/drm_damage_helper.h>
 #include <drm/drm_fb_helper.h>
 #include <drm/drm_fourcc.h>
@@ -19,7 +14,6 @@
 #include <drm/drm_gem.h>
 #include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_modeset_helper.h>
-#include <drm/drm_simple_kms_helper.h>
 
 #define AFBC_HEADER_SIZE		16
 #define AFBC_TH_LAYOUT_ALIGNMENT	8
@@ -432,60 +426,3 @@ int drm_gem_fb_afbc_init(struct drm_device *dev,
 	return 0;
 }
 EXPORT_SYMBOL_GPL(drm_gem_fb_afbc_init);
-
-/**
- * drm_gem_fb_prepare_fb() - Prepare a GEM backed framebuffer
- * @plane: Plane
- * @state: Plane state the fence will be attached to
- *
- * This function extracts the exclusive fence from &drm_gem_object.resv and
- * attaches it to plane state for the atomic helper to wait on. This is
- * necessary to correctly implement implicit synchronization for any buffers
- * shared as a struct &dma_buf. This function can be used as the
- * &drm_plane_helper_funcs.prepare_fb callback.
- *
- * There is no need for &drm_plane_helper_funcs.cleanup_fb hook for simple
- * gem based framebuffer drivers which have their buffers always pinned in
- * memory.
- *
- * See drm_atomic_set_fence_for_plane() for a discussion of implicit and
- * explicit fencing in atomic modeset updates.
- */
-int drm_gem_fb_prepare_fb(struct drm_plane *plane,
-			  struct drm_plane_state *state)
-{
-	struct drm_gem_object *obj;
-	struct dma_fence *fence;
-
-	if (!state->fb)
-		return 0;
-
-	obj = drm_gem_fb_get_obj(state->fb, 0);
-	fence = dma_resv_get_excl_rcu(obj->resv);
-	drm_atomic_set_fence_for_plane(state, fence);
-
-	return 0;
-}
-EXPORT_SYMBOL_GPL(drm_gem_fb_prepare_fb);
-
-/**
- * drm_gem_fb_simple_display_pipe_prepare_fb - prepare_fb helper for
- *     &drm_simple_display_pipe
- * @pipe: Simple display pipe
- * @plane_state: Plane state
- *
- * This function uses drm_gem_fb_prepare_fb() to extract the exclusive fence
- * from &drm_gem_object.resv and attaches it to plane state for the atomic
- * helper to wait on. This is necessary to correctly implement implicit
- * synchronization for any buffers shared as a struct &dma_buf. Drivers can use
- * this as their &drm_simple_display_pipe_funcs.prepare_fb callback.
- *
- * See drm_atomic_set_fence_for_plane() for a discussion of implicit and
- * explicit fencing in atomic modeset updates.
- */
-int drm_gem_fb_simple_display_pipe_prepare_fb(struct drm_simple_display_pipe *pipe,
-					      struct drm_plane_state *plane_state)
-{
-	return drm_gem_fb_prepare_fb(&pipe->plane, plane_state);
-}
-EXPORT_SYMBOL(drm_gem_fb_simple_display_pipe_prepare_fb);
diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
index 48d4b59d3145..2071ec637df8 100644
--- a/drivers/gpu/drm/drm_gem_vram_helper.c
+++ b/drivers/gpu/drm/drm_gem_vram_helper.c
@@ -8,7 +8,7 @@
 #include <drm/drm_drv.h>
 #include <drm/drm_file.h>
 #include <drm/drm_framebuffer.h>
-#include <drm/drm_gem_framebuffer_helper.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_ttm_helper.h>
 #include <drm/drm_gem_vram_helper.h>
 #include <drm/drm_managed.h>
@@ -720,7 +720,7 @@ drm_gem_vram_plane_helper_prepare_fb(struct drm_plane *plane,
 			goto err_drm_gem_vram_unpin;
 	}
 
-	ret = drm_gem_fb_prepare_fb(plane, new_state);
+	ret = drm_gem_prepare_fb(plane, new_state);
 	if (ret)
 		goto err_drm_gem_vram_unpin;
 
diff --git a/drivers/gpu/drm/imx/dcss/dcss-plane.c b/drivers/gpu/drm/imx/dcss/dcss-plane.c
index 03ba88f7f995..092e98fe0cfd 100644
--- a/drivers/gpu/drm/imx/dcss/dcss-plane.c
+++ b/drivers/gpu/drm/imx/dcss/dcss-plane.c
@@ -6,7 +6,7 @@
 #include <drm/drm_atomic.h>
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_fb_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
 
 #include "dcss-dev.h"
@@ -355,7 +355,7 @@ static void dcss_plane_atomic_disable(struct drm_plane *plane,
 }
 
 static const struct drm_plane_helper_funcs dcss_plane_helper_funcs = {
-	.prepare_fb = drm_gem_fb_prepare_fb,
+	.prepare_fb = drm_gem_prepare_fb,
 	.atomic_check = dcss_plane_atomic_check,
 	.atomic_update = dcss_plane_atomic_update,
 	.atomic_disable = dcss_plane_atomic_disable,
diff --git a/drivers/gpu/drm/imx/ipuv3-plane.c b/drivers/gpu/drm/imx/ipuv3-plane.c
index 075508051b5f..0b6d81c4fa77 100644
--- a/drivers/gpu/drm/imx/ipuv3-plane.c
+++ b/drivers/gpu/drm/imx/ipuv3-plane.c
@@ -9,8 +9,8 @@
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fourcc.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_managed.h>
 #include <drm/drm_plane_helper.h>
 
@@ -704,7 +704,7 @@ static void ipu_plane_atomic_update(struct drm_plane *plane,
 }
 
 static const struct drm_plane_helper_funcs ipu_plane_helper_funcs = {
-	.prepare_fb = drm_gem_fb_prepare_fb,
+	.prepare_fb = drm_gem_prepare_fb,
 	.atomic_check = ipu_plane_atomic_check,
 	.atomic_disable = ipu_plane_atomic_disable,
 	.atomic_update = ipu_plane_atomic_update,
diff --git a/drivers/gpu/drm/ingenic/ingenic-drm-drv.c b/drivers/gpu/drm/ingenic/ingenic-drm-drv.c
index 7bb31fbee29d..1ca02de60895 100644
--- a/drivers/gpu/drm/ingenic/ingenic-drm-drv.c
+++ b/drivers/gpu/drm/ingenic/ingenic-drm-drv.c
@@ -28,6 +28,7 @@
 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fb_helper.h>
 #include <drm/drm_fourcc.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_irq.h>
 #include <drm/drm_managed.h>
@@ -780,7 +781,7 @@ static const struct drm_plane_helper_funcs ingenic_drm_plane_helper_funcs = {
 	.atomic_update		= ingenic_drm_plane_atomic_update,
 	.atomic_check		= ingenic_drm_plane_atomic_check,
 	.atomic_disable		= ingenic_drm_plane_atomic_disable,
-	.prepare_fb		= drm_gem_fb_prepare_fb,
+	.prepare_fb		= drm_gem_prepare_fb,
 };
 
 static const struct drm_crtc_helper_funcs ingenic_drm_crtc_helper_funcs = {
diff --git a/drivers/gpu/drm/ingenic/ingenic-ipu.c b/drivers/gpu/drm/ingenic/ingenic-ipu.c
index e52777ef85fd..1b9b5de6b67c 100644
--- a/drivers/gpu/drm/ingenic/ingenic-ipu.c
+++ b/drivers/gpu/drm/ingenic/ingenic-ipu.c
@@ -23,7 +23,7 @@
 #include <drm/drm_drv.h>
 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fourcc.h>
-#include <drm/drm_gem_framebuffer_helper.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_plane.h>
 #include <drm/drm_plane_helper.h>
 #include <drm/drm_property.h>
@@ -608,7 +608,7 @@ static const struct drm_plane_helper_funcs ingenic_ipu_plane_helper_funcs = {
 	.atomic_update		= ingenic_ipu_plane_atomic_update,
 	.atomic_check		= ingenic_ipu_plane_atomic_check,
 	.atomic_disable		= ingenic_ipu_plane_atomic_disable,
-	.prepare_fb		= drm_gem_fb_prepare_fb,
+	.prepare_fb		= drm_gem_prepare_fb,
 };
 
 static int
diff --git a/drivers/gpu/drm/mcde/mcde_display.c b/drivers/gpu/drm/mcde/mcde_display.c
index 7c2e0b865441..dde16ef9650a 100644
--- a/drivers/gpu/drm/mcde/mcde_display.c
+++ b/drivers/gpu/drm/mcde/mcde_display.c
@@ -13,8 +13,8 @@
 #include <drm/drm_device.h>
 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fourcc.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_mipi_dsi.h>
 #include <drm/drm_simple_kms_helper.h>
 #include <drm/drm_bridge.h>
@@ -1481,7 +1481,7 @@ static struct drm_simple_display_pipe_funcs mcde_display_funcs = {
 	.update = mcde_display_update,
 	.enable_vblank = mcde_display_enable_vblank,
 	.disable_vblank = mcde_display_disable_vblank,
-	.prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb,
+	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
 };
 
 int mcde_display_init(struct drm_device *drm)
diff --git a/drivers/gpu/drm/mediatek/mtk_drm_plane.c b/drivers/gpu/drm/mediatek/mtk_drm_plane.c
index 92141a19681b..64f7873e9867 100644
--- a/drivers/gpu/drm/mediatek/mtk_drm_plane.c
+++ b/drivers/gpu/drm/mediatek/mtk_drm_plane.c
@@ -6,10 +6,10 @@
 
 #include <drm/drm_atomic.h>
 #include <drm/drm_atomic_helper.h>
-#include <drm/drm_fourcc.h>
 #include <drm/drm_atomic_uapi.h>
+#include <drm/drm_fourcc.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_plane_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 
 #include "mtk_drm_crtc.h"
 #include "mtk_drm_ddp_comp.h"
@@ -216,7 +216,7 @@ static void mtk_plane_atomic_update(struct drm_plane *plane,
 }
 
 static const struct drm_plane_helper_funcs mtk_plane_helper_funcs = {
-	.prepare_fb = drm_gem_fb_prepare_fb,
+	.prepare_fb = drm_gem_prepare_fb,
 	.atomic_check = mtk_plane_atomic_check,
 	.atomic_update = mtk_plane_atomic_update,
 	.atomic_disable = mtk_plane_atomic_disable,
diff --git a/drivers/gpu/drm/meson/meson_overlay.c b/drivers/gpu/drm/meson/meson_overlay.c
index 1ffbbecafa22..0ee2132a990f 100644
--- a/drivers/gpu/drm/meson/meson_overlay.c
+++ b/drivers/gpu/drm/meson/meson_overlay.c
@@ -10,11 +10,11 @@
 #include <drm/drm_atomic.h>
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_device.h>
+#include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fourcc.h>
-#include <drm/drm_plane_helper.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_fb_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
+#include <drm/drm_plane_helper.h>
 
 #include "meson_overlay.h"
 #include "meson_registers.h"
@@ -742,7 +742,7 @@ static const struct drm_plane_helper_funcs meson_overlay_helper_funcs = {
 	.atomic_check	= meson_overlay_atomic_check,
 	.atomic_disable	= meson_overlay_atomic_disable,
 	.atomic_update	= meson_overlay_atomic_update,
-	.prepare_fb	= drm_gem_fb_prepare_fb,
+	.prepare_fb	= drm_gem_prepare_fb,
 };
 
 static bool meson_overlay_format_mod_supported(struct drm_plane *plane,
diff --git a/drivers/gpu/drm/meson/meson_plane.c b/drivers/gpu/drm/meson/meson_plane.c
index 35338ed18209..24d64c9675ff 100644
--- a/drivers/gpu/drm/meson/meson_plane.c
+++ b/drivers/gpu/drm/meson/meson_plane.c
@@ -16,8 +16,8 @@
 #include <drm/drm_device.h>
 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fourcc.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_plane_helper.h>
 
 #include "meson_plane.h"
@@ -417,7 +417,7 @@ static const struct drm_plane_helper_funcs meson_plane_helper_funcs = {
 	.atomic_check	= meson_plane_atomic_check,
 	.atomic_disable	= meson_plane_atomic_disable,
 	.atomic_update	= meson_plane_atomic_update,
-	.prepare_fb	= drm_gem_fb_prepare_fb,
+	.prepare_fb	= drm_gem_prepare_fb,
 };
 
 static bool meson_plane_format_mod_supported(struct drm_plane *plane,
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
index bc0231a50132..3e9f9f3dd679 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
@@ -13,7 +13,7 @@
 #include <drm/drm_atomic_uapi.h>
 #include <drm/drm_damage_helper.h>
 #include <drm/drm_file.h>
-#include <drm/drm_gem_framebuffer_helper.h>
+#include <drm/drm_gem_atomic_helper.h>
 
 #include "msm_drv.h"
 #include "dpu_kms.h"
@@ -892,7 +892,7 @@ static int dpu_plane_prepare_fb(struct drm_plane *plane,
 	 *       we can use msm_atomic_prepare_fb() instead of doing the
 	 *       implicit fence and fb prepare by hand here.
 	 */
-	drm_gem_fb_prepare_fb(plane, new_state);
+	drm_gem_prepare_fb(plane, new_state);
 
 	if (pstate->aspace) {
 		ret = msm_framebuffer_prepare(new_state->fb,
diff --git a/drivers/gpu/drm/msm/msm_atomic.c b/drivers/gpu/drm/msm/msm_atomic.c
index 6a326761dc4a..03a113eb6571 100644
--- a/drivers/gpu/drm/msm/msm_atomic.c
+++ b/drivers/gpu/drm/msm/msm_atomic.c
@@ -5,7 +5,7 @@
  */
 
 #include <drm/drm_atomic_uapi.h>
-#include <drm/drm_gem_framebuffer_helper.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_vblank.h>
 
 #include "msm_atomic_trace.h"
@@ -22,7 +22,7 @@ int msm_atomic_prepare_fb(struct drm_plane *plane,
 	if (!new_state->fb)
 		return 0;
 
-	drm_gem_fb_prepare_fb(plane, new_state);
+	drm_gem_prepare_fb(plane, new_state);
 
 	return msm_framebuffer_prepare(new_state->fb, kms->aspace);
 }
diff --git a/drivers/gpu/drm/mxsfb/mxsfb_kms.c b/drivers/gpu/drm/mxsfb/mxsfb_kms.c
index 3e1bb0aefb87..33188dea886d 100644
--- a/drivers/gpu/drm/mxsfb/mxsfb_kms.c
+++ b/drivers/gpu/drm/mxsfb/mxsfb_kms.c
@@ -21,8 +21,8 @@
 #include <drm/drm_encoder.h>
 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fourcc.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_plane.h>
 #include <drm/drm_plane_helper.h>
 #include <drm/drm_vblank.h>
@@ -495,13 +495,13 @@ static bool mxsfb_format_mod_supported(struct drm_plane *plane,
 }
 
 static const struct drm_plane_helper_funcs mxsfb_plane_primary_helper_funcs = {
-	.prepare_fb = drm_gem_fb_prepare_fb,
+	.prepare_fb = drm_gem_prepare_fb,
 	.atomic_check = mxsfb_plane_atomic_check,
 	.atomic_update = mxsfb_plane_primary_atomic_update,
 };
 
 static const struct drm_plane_helper_funcs mxsfb_plane_overlay_helper_funcs = {
-	.prepare_fb = drm_gem_fb_prepare_fb,
+	.prepare_fb = drm_gem_prepare_fb,
 	.atomic_check = mxsfb_plane_atomic_check,
 	.atomic_update = mxsfb_plane_overlay_atomic_update,
 };
diff --git a/drivers/gpu/drm/pl111/pl111_display.c b/drivers/gpu/drm/pl111/pl111_display.c
index 69c02e7c82b7..6fd7f13f1aca 100644
--- a/drivers/gpu/drm/pl111/pl111_display.c
+++ b/drivers/gpu/drm/pl111/pl111_display.c
@@ -17,8 +17,8 @@
 
 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fourcc.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_vblank.h>
 
 #include "pl111_drm.h"
@@ -440,7 +440,7 @@ static struct drm_simple_display_pipe_funcs pl111_display_funcs = {
 	.enable = pl111_display_enable,
 	.disable = pl111_display_disable,
 	.update = pl111_display_update,
-	.prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb,
+	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
 };
 
 static int pl111_clk_div_choose_div(struct clk_hw *hw, unsigned long rate,
diff --git a/drivers/gpu/drm/rcar-du/rcar_du_vsp.c b/drivers/gpu/drm/rcar-du/rcar_du_vsp.c
index 53221d8473c1..964fdaee7c7d 100644
--- a/drivers/gpu/drm/rcar-du/rcar_du_vsp.c
+++ b/drivers/gpu/drm/rcar-du/rcar_du_vsp.c
@@ -11,8 +11,8 @@
 #include <drm/drm_crtc.h>
 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fourcc.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_managed.h>
 #include <drm/drm_plane_helper.h>
 #include <drm/drm_vblank.h>
@@ -236,7 +236,7 @@ static int rcar_du_vsp_plane_prepare_fb(struct drm_plane *plane,
 	if (ret < 0)
 		return ret;
 
-	return drm_gem_fb_prepare_fb(plane, state);
+	return drm_gem_prepare_fb(plane, state);
 }
 
 void rcar_du_vsp_unmap_fb(struct rcar_du_vsp *vsp, struct drm_framebuffer *fb,
diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
index 8d15cabdcb02..45577de18b49 100644
--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
+++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
@@ -23,6 +23,7 @@
 #include <drm/drm_crtc.h>
 #include <drm/drm_flip_work.h>
 #include <drm/drm_fourcc.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_plane_helper.h>
 #include <drm/drm_probe_helper.h>
@@ -1096,7 +1097,7 @@ static const struct drm_plane_helper_funcs plane_helper_funcs = {
 	.atomic_disable = vop_plane_atomic_disable,
 	.atomic_async_check = vop_plane_atomic_async_check,
 	.atomic_async_update = vop_plane_atomic_async_update,
-	.prepare_fb = drm_gem_fb_prepare_fb,
+	.prepare_fb = drm_gem_prepare_fb,
 };
 
 static const struct drm_plane_funcs vop_plane_funcs = {
diff --git a/drivers/gpu/drm/stm/ltdc.c b/drivers/gpu/drm/stm/ltdc.c
index 7812094f93d6..73522c6ba3eb 100644
--- a/drivers/gpu/drm/stm/ltdc.c
+++ b/drivers/gpu/drm/stm/ltdc.c
@@ -26,8 +26,8 @@
 #include <drm/drm_device.h>
 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fourcc.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_of.h>
 #include <drm/drm_plane_helper.h>
 #include <drm/drm_probe_helper.h>
@@ -911,7 +911,7 @@ static const struct drm_plane_funcs ltdc_plane_funcs = {
 };
 
 static const struct drm_plane_helper_funcs ltdc_plane_helper_funcs = {
-	.prepare_fb = drm_gem_fb_prepare_fb,
+	.prepare_fb = drm_gem_prepare_fb,
 	.atomic_check = ltdc_plane_atomic_check,
 	.atomic_update = ltdc_plane_atomic_update,
 	.atomic_disable = ltdc_plane_atomic_disable,
diff --git a/drivers/gpu/drm/sun4i/sun4i_layer.c b/drivers/gpu/drm/sun4i/sun4i_layer.c
index acfbfd4463a1..68da94b7c35d 100644
--- a/drivers/gpu/drm/sun4i/sun4i_layer.c
+++ b/drivers/gpu/drm/sun4i/sun4i_layer.c
@@ -7,7 +7,7 @@
  */
 
 #include <drm/drm_atomic_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_plane_helper.h>
 
 #include "sun4i_backend.h"
@@ -122,7 +122,7 @@ static bool sun4i_layer_format_mod_supported(struct drm_plane *plane,
 }
 
 static const struct drm_plane_helper_funcs sun4i_backend_layer_helper_funcs = {
-	.prepare_fb	= drm_gem_fb_prepare_fb,
+	.prepare_fb	= drm_gem_prepare_fb,
 	.atomic_disable	= sun4i_backend_layer_atomic_disable,
 	.atomic_update	= sun4i_backend_layer_atomic_update,
 };
diff --git a/drivers/gpu/drm/sun4i/sun8i_ui_layer.c b/drivers/gpu/drm/sun4i/sun8i_ui_layer.c
index 816ad4ce8996..95654c153279 100644
--- a/drivers/gpu/drm/sun4i/sun8i_ui_layer.c
+++ b/drivers/gpu/drm/sun4i/sun8i_ui_layer.c
@@ -14,8 +14,8 @@
 #include <drm/drm_crtc.h>
 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fourcc.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_plane_helper.h>
 #include <drm/drm_probe_helper.h>
 
@@ -299,7 +299,7 @@ static void sun8i_ui_layer_atomic_update(struct drm_plane *plane,
 }
 
 static const struct drm_plane_helper_funcs sun8i_ui_layer_helper_funcs = {
-	.prepare_fb	= drm_gem_fb_prepare_fb,
+	.prepare_fb	= drm_gem_prepare_fb,
 	.atomic_check	= sun8i_ui_layer_atomic_check,
 	.atomic_disable	= sun8i_ui_layer_atomic_disable,
 	.atomic_update	= sun8i_ui_layer_atomic_update,
diff --git a/drivers/gpu/drm/sun4i/sun8i_vi_layer.c b/drivers/gpu/drm/sun4i/sun8i_vi_layer.c
index 8cc294a9969d..4005884dbce4 100644
--- a/drivers/gpu/drm/sun4i/sun8i_vi_layer.c
+++ b/drivers/gpu/drm/sun4i/sun8i_vi_layer.c
@@ -7,8 +7,8 @@
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_crtc.h>
 #include <drm/drm_fb_cma_helper.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_plane_helper.h>
 #include <drm/drm_probe_helper.h>
 
@@ -402,7 +402,7 @@ static void sun8i_vi_layer_atomic_update(struct drm_plane *plane,
 }
 
 static const struct drm_plane_helper_funcs sun8i_vi_layer_helper_funcs = {
-	.prepare_fb	= drm_gem_fb_prepare_fb,
+	.prepare_fb	= drm_gem_prepare_fb,
 	.atomic_check	= sun8i_vi_layer_atomic_check,
 	.atomic_disable	= sun8i_vi_layer_atomic_disable,
 	.atomic_update	= sun8i_vi_layer_atomic_update,
diff --git a/drivers/gpu/drm/tegra/plane.c b/drivers/gpu/drm/tegra/plane.c
index 539d14935728..ec86a8d060aa 100644
--- a/drivers/gpu/drm/tegra/plane.c
+++ b/drivers/gpu/drm/tegra/plane.c
@@ -8,7 +8,7 @@
 #include <drm/drm_atomic.h>
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_fourcc.h>
-#include <drm/drm_gem_framebuffer_helper.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_plane_helper.h>
 
 #include "dc.h"
@@ -198,7 +198,7 @@ int tegra_plane_prepare_fb(struct drm_plane *plane,
 	if (!state->fb)
 		return 0;
 
-	drm_gem_fb_prepare_fb(plane, state);
+	drm_gem_prepare_fb(plane, state);
 
 	return tegra_dc_pin(dc, to_tegra_plane_state(state));
 }
diff --git a/drivers/gpu/drm/tidss/tidss_plane.c b/drivers/gpu/drm/tidss/tidss_plane.c
index 35067ae674ea..d39baa66e876 100644
--- a/drivers/gpu/drm/tidss/tidss_plane.c
+++ b/drivers/gpu/drm/tidss/tidss_plane.c
@@ -10,7 +10,7 @@
 #include <drm/drm_crtc_helper.h>
 #include <drm/drm_fourcc.h>
 #include <drm/drm_fb_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
+#include <drm/drm_gem_atomic_helper.h>
 
 #include "tidss_crtc.h"
 #include "tidss_dispc.h"
@@ -151,7 +151,7 @@ static void drm_plane_destroy(struct drm_plane *plane)
 }
 
 static const struct drm_plane_helper_funcs tidss_plane_helper_funcs = {
-	.prepare_fb = drm_gem_fb_prepare_fb,
+	.prepare_fb = drm_gem_prepare_fb,
 	.atomic_check = tidss_plane_atomic_check,
 	.atomic_update = tidss_plane_atomic_update,
 	.atomic_disable = tidss_plane_atomic_disable,
diff --git a/drivers/gpu/drm/tiny/hx8357d.c b/drivers/gpu/drm/tiny/hx8357d.c
index c6525cd02bc2..3e2c2868a363 100644
--- a/drivers/gpu/drm/tiny/hx8357d.c
+++ b/drivers/gpu/drm/tiny/hx8357d.c
@@ -19,8 +19,8 @@
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_drv.h>
 #include <drm/drm_fb_helper.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_managed.h>
 #include <drm/drm_mipi_dbi.h>
 #include <drm/drm_modeset_helper.h>
@@ -184,7 +184,7 @@ static const struct drm_simple_display_pipe_funcs hx8357d_pipe_funcs = {
 	.enable = yx240qv29_enable,
 	.disable = mipi_dbi_pipe_disable,
 	.update = mipi_dbi_pipe_update,
-	.prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb,
+	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
 };
 
 static const struct drm_display_mode yx350hv15_mode = {
diff --git a/drivers/gpu/drm/tiny/ili9225.c b/drivers/gpu/drm/tiny/ili9225.c
index 8e98962db5a2..6b87df19eec1 100644
--- a/drivers/gpu/drm/tiny/ili9225.c
+++ b/drivers/gpu/drm/tiny/ili9225.c
@@ -22,8 +22,8 @@
 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fb_helper.h>
 #include <drm/drm_fourcc.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_managed.h>
 #include <drm/drm_mipi_dbi.h>
 #include <drm/drm_rect.h>
@@ -328,7 +328,7 @@ static const struct drm_simple_display_pipe_funcs ili9225_pipe_funcs = {
 	.enable		= ili9225_pipe_enable,
 	.disable	= ili9225_pipe_disable,
 	.update		= ili9225_pipe_update,
-	.prepare_fb	= drm_gem_fb_simple_display_pipe_prepare_fb,
+	.prepare_fb	= drm_gem_simple_display_pipe_prepare_fb,
 };
 
 static const struct drm_display_mode ili9225_mode = {
diff --git a/drivers/gpu/drm/tiny/ili9341.c b/drivers/gpu/drm/tiny/ili9341.c
index 6ce97f0698eb..a97f3f70e4a6 100644
--- a/drivers/gpu/drm/tiny/ili9341.c
+++ b/drivers/gpu/drm/tiny/ili9341.c
@@ -18,8 +18,8 @@
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_drv.h>
 #include <drm/drm_fb_helper.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_managed.h>
 #include <drm/drm_mipi_dbi.h>
 #include <drm/drm_modeset_helper.h>
@@ -140,7 +140,7 @@ static const struct drm_simple_display_pipe_funcs ili9341_pipe_funcs = {
 	.enable = yx240qv29_enable,
 	.disable = mipi_dbi_pipe_disable,
 	.update = mipi_dbi_pipe_update,
-	.prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb,
+	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
 };
 
 static const struct drm_display_mode yx240qv29_mode = {
diff --git a/drivers/gpu/drm/tiny/ili9486.c b/drivers/gpu/drm/tiny/ili9486.c
index d7ce40eb166a..6422a7f67079 100644
--- a/drivers/gpu/drm/tiny/ili9486.c
+++ b/drivers/gpu/drm/tiny/ili9486.c
@@ -17,8 +17,8 @@
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_drv.h>
 #include <drm/drm_fb_helper.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_managed.h>
 #include <drm/drm_mipi_dbi.h>
 #include <drm/drm_modeset_helper.h>
@@ -153,7 +153,7 @@ static const struct drm_simple_display_pipe_funcs waveshare_pipe_funcs = {
 	.enable = waveshare_enable,
 	.disable = mipi_dbi_pipe_disable,
 	.update = mipi_dbi_pipe_update,
-	.prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb,
+	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
 };
 
 static const struct drm_display_mode waveshare_mode = {
diff --git a/drivers/gpu/drm/tiny/mi0283qt.c b/drivers/gpu/drm/tiny/mi0283qt.c
index ff77f983f803..dc76fe53aa72 100644
--- a/drivers/gpu/drm/tiny/mi0283qt.c
+++ b/drivers/gpu/drm/tiny/mi0283qt.c
@@ -16,8 +16,8 @@
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_drv.h>
 #include <drm/drm_fb_helper.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_managed.h>
 #include <drm/drm_mipi_dbi.h>
 #include <drm/drm_modeset_helper.h>
@@ -144,7 +144,7 @@ static const struct drm_simple_display_pipe_funcs mi0283qt_pipe_funcs = {
 	.enable = mi0283qt_enable,
 	.disable = mipi_dbi_pipe_disable,
 	.update = mipi_dbi_pipe_update,
-	.prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb,
+	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
 };
 
 static const struct drm_display_mode mi0283qt_mode = {
diff --git a/drivers/gpu/drm/tiny/repaper.c b/drivers/gpu/drm/tiny/repaper.c
index 11c602fc9897..2cee07a2e00b 100644
--- a/drivers/gpu/drm/tiny/repaper.c
+++ b/drivers/gpu/drm/tiny/repaper.c
@@ -29,6 +29,7 @@
 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fb_helper.h>
 #include <drm/drm_format_helper.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
 #include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_managed.h>
@@ -860,7 +861,7 @@ static const struct drm_simple_display_pipe_funcs repaper_pipe_funcs = {
 	.enable = repaper_pipe_enable,
 	.disable = repaper_pipe_disable,
 	.update = repaper_pipe_update,
-	.prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb,
+	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
 };
 
 static int repaper_connector_get_modes(struct drm_connector *connector)
diff --git a/drivers/gpu/drm/tiny/st7586.c b/drivers/gpu/drm/tiny/st7586.c
index ff5cf60f4bd7..7d216fe9267f 100644
--- a/drivers/gpu/drm/tiny/st7586.c
+++ b/drivers/gpu/drm/tiny/st7586.c
@@ -19,8 +19,8 @@
 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fb_helper.h>
 #include <drm/drm_format_helper.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_managed.h>
 #include <drm/drm_mipi_dbi.h>
 #include <drm/drm_rect.h>
@@ -268,7 +268,7 @@ static const struct drm_simple_display_pipe_funcs st7586_pipe_funcs = {
 	.enable		= st7586_pipe_enable,
 	.disable	= st7586_pipe_disable,
 	.update		= st7586_pipe_update,
-	.prepare_fb	= drm_gem_fb_simple_display_pipe_prepare_fb,
+	.prepare_fb	= drm_gem_simple_display_pipe_prepare_fb,
 };
 
 static const struct drm_display_mode st7586_mode = {
diff --git a/drivers/gpu/drm/tiny/st7735r.c b/drivers/gpu/drm/tiny/st7735r.c
index faaba0a033ea..df8872d62cdd 100644
--- a/drivers/gpu/drm/tiny/st7735r.c
+++ b/drivers/gpu/drm/tiny/st7735r.c
@@ -19,8 +19,8 @@
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_drv.h>
 #include <drm/drm_fb_helper.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_managed.h>
 #include <drm/drm_mipi_dbi.h>
 
@@ -136,7 +136,7 @@ static const struct drm_simple_display_pipe_funcs st7735r_pipe_funcs = {
 	.enable		= st7735r_pipe_enable,
 	.disable	= mipi_dbi_pipe_disable,
 	.update		= mipi_dbi_pipe_update,
-	.prepare_fb	= drm_gem_fb_simple_display_pipe_prepare_fb,
+	.prepare_fb	= drm_gem_simple_display_pipe_prepare_fb,
 };
 
 static const struct st7735r_cfg jd_t18003_t01_cfg = {
diff --git a/drivers/gpu/drm/tve200/tve200_display.c b/drivers/gpu/drm/tve200/tve200_display.c
index cb0e837d3dba..50e1fb71869f 100644
--- a/drivers/gpu/drm/tve200/tve200_display.c
+++ b/drivers/gpu/drm/tve200/tve200_display.c
@@ -17,8 +17,8 @@
 
 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fourcc.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_panel.h>
 #include <drm/drm_vblank.h>
 
@@ -316,7 +316,7 @@ static const struct drm_simple_display_pipe_funcs tve200_display_funcs = {
 	.enable = tve200_display_enable,
 	.disable = tve200_display_disable,
 	.update = tve200_display_update,
-	.prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb,
+	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
 	.enable_vblank = tve200_display_enable_vblank,
 	.disable_vblank = tve200_display_disable_vblank,
 };
diff --git a/drivers/gpu/drm/vc4/vc4_plane.c b/drivers/gpu/drm/vc4/vc4_plane.c
index 7322169c0682..a65e980078f3 100644
--- a/drivers/gpu/drm/vc4/vc4_plane.c
+++ b/drivers/gpu/drm/vc4/vc4_plane.c
@@ -20,7 +20,7 @@
 #include <drm/drm_atomic_uapi.h>
 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fourcc.h>
-#include <drm/drm_gem_framebuffer_helper.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_plane_helper.h>
 
 #include "uapi/drm/vc4_drm.h"
@@ -1250,7 +1250,7 @@ static int vc4_prepare_fb(struct drm_plane *plane,
 
 	bo = to_vc4_bo(&drm_fb_cma_get_gem_obj(state->fb, 0)->base);
 
-	drm_gem_fb_prepare_fb(plane, state);
+	drm_gem_prepare_fb(plane, state);
 
 	if (plane->state->fb == state->fb)
 		return 0;
diff --git a/drivers/gpu/drm/vkms/vkms_plane.c b/drivers/gpu/drm/vkms/vkms_plane.c
index 0824327cc860..e3fd8cd1f3f1 100644
--- a/drivers/gpu/drm/vkms/vkms_plane.c
+++ b/drivers/gpu/drm/vkms/vkms_plane.c
@@ -5,6 +5,7 @@
 #include <drm/drm_atomic.h>
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_fourcc.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_plane_helper.h>
 #include <drm/drm_gem_shmem_helper.h>
@@ -159,7 +160,7 @@ static int vkms_prepare_fb(struct drm_plane *plane,
 	if (ret)
 		DRM_ERROR("vmap failed: %d\n", ret);
 
-	return drm_gem_fb_prepare_fb(plane, state);
+	return drm_gem_prepare_fb(plane, state);
 }
 
 static void vkms_cleanup_fb(struct drm_plane *plane,
diff --git a/drivers/gpu/drm/xen/xen_drm_front_kms.c b/drivers/gpu/drm/xen/xen_drm_front_kms.c
index ef11b1e4de39..371202ebe900 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_kms.c
+++ b/drivers/gpu/drm/xen/xen_drm_front_kms.c
@@ -13,6 +13,7 @@
 #include <drm/drm_drv.h>
 #include <drm/drm_fourcc.h>
 #include <drm/drm_gem.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_probe_helper.h>
 #include <drm/drm_vblank.h>
@@ -301,7 +302,7 @@ static const struct drm_simple_display_pipe_funcs display_funcs = {
 	.mode_valid = display_mode_valid,
 	.enable = display_enable,
 	.disable = display_disable,
-	.prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb,
+	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
 	.check = display_check,
 	.update = display_update,
 };
diff --git a/include/drm/drm_gem_atomic_helper.h b/include/drm/drm_gem_atomic_helper.h
index 08b96ccea325..91e73d23fea8 100644
--- a/include/drm/drm_gem_atomic_helper.h
+++ b/include/drm/drm_gem_atomic_helper.h
@@ -9,6 +9,14 @@
 
 struct drm_simple_display_pipe;
 
+/*
+ * Plane Helpers
+ */
+
+int drm_gem_prepare_fb(struct drm_plane *plane, struct drm_plane_state *state);
+int drm_gem_simple_display_pipe_prepare_fb(struct drm_simple_display_pipe *pipe,
+					   struct drm_plane_state *plane_state);
+
 /*
  * Helpers for planes with shadow buffers
  */
diff --git a/include/drm/drm_gem_framebuffer_helper.h b/include/drm/drm_gem_framebuffer_helper.h
index 6b013154911d..495d174d9989 100644
--- a/include/drm/drm_gem_framebuffer_helper.h
+++ b/include/drm/drm_gem_framebuffer_helper.h
@@ -9,9 +9,6 @@ struct drm_framebuffer;
 struct drm_framebuffer_funcs;
 struct drm_gem_object;
 struct drm_mode_fb_cmd2;
-struct drm_plane;
-struct drm_plane_state;
-struct drm_simple_display_pipe;
 
 #define AFBC_VENDOR_AND_TYPE_MASK	GENMASK_ULL(63, 52)
 
@@ -44,8 +41,4 @@ int drm_gem_fb_afbc_init(struct drm_device *dev,
 			 const struct drm_mode_fb_cmd2 *mode_cmd,
 			 struct drm_afbc_framebuffer *afbc_fb);
 
-int drm_gem_fb_prepare_fb(struct drm_plane *plane,
-			  struct drm_plane_state *state);
-int drm_gem_fb_simple_display_pipe_prepare_fb(struct drm_simple_display_pipe *pipe,
-					      struct drm_plane_state *plane_state);
 #endif
diff --git a/include/drm/drm_modeset_helper_vtables.h b/include/drm/drm_modeset_helper_vtables.h
index eb706342861d..8d41d3734153 100644
--- a/include/drm/drm_modeset_helper_vtables.h
+++ b/include/drm/drm_modeset_helper_vtables.h
@@ -1179,7 +1179,7 @@ struct drm_plane_helper_funcs {
 	 * members in the plane structure.
 	 *
 	 * Drivers which always have their buffers pinned should use
-	 * drm_gem_fb_prepare_fb() for this hook.
+	 * drm_gem_prepare_fb() for this hook.
 	 *
 	 * The helpers will call @cleanup_fb with matching arguments for every
 	 * successful call to this hook.
diff --git a/include/drm/drm_plane.h b/include/drm/drm_plane.h
index 95ab14a4336a..be08b6b1fde0 100644
--- a/include/drm/drm_plane.h
+++ b/include/drm/drm_plane.h
@@ -79,8 +79,8 @@ struct drm_plane_state {
 	 * preserved.
 	 *
 	 * Drivers should store any implicit fence in this from their
-	 * &drm_plane_helper_funcs.prepare_fb callback. See drm_gem_fb_prepare_fb()
-	 * and drm_gem_fb_simple_display_pipe_prepare_fb() for suitable helpers.
+	 * &drm_plane_helper_funcs.prepare_fb callback. See drm_gem_prepare_fb()
+	 * and drm_gem_simple_display_pipe_prepare_fb() for suitable helpers.
 	 */
 	struct dma_fence *fence;
 
diff --git a/include/drm/drm_simple_kms_helper.h b/include/drm/drm_simple_kms_helper.h
index 40b34573249f..ef9944e9c5fc 100644
--- a/include/drm/drm_simple_kms_helper.h
+++ b/include/drm/drm_simple_kms_helper.h
@@ -117,7 +117,7 @@ struct drm_simple_display_pipe_funcs {
 	 * more details.
 	 *
 	 * Drivers which always have their buffers pinned should use
-	 * drm_gem_fb_simple_display_pipe_prepare_fb() for this hook.
+	 * drm_gem_simple_display_pipe_prepare_fb() for this hook.
 	 */
 	int (*prepare_fb)(struct drm_simple_display_pipe *pipe,
 			  struct drm_plane_state *plane_state);
-- 
2.30.0



From xen-devel-bounces@lists.xenproject.org Tue Feb 09 12:43:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 12:43:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83217.154280 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9SMF-0006WQ-Hw; Tue, 09 Feb 2021 12:43:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83217.154280; Tue, 09 Feb 2021 12:43:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9SMF-0006WJ-EO; Tue, 09 Feb 2021 12:43:07 +0000
Received: by outflank-mailman (input) for mailman id 83217;
 Tue, 09 Feb 2021 12:43:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NhOK=HL=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1l9SMD-0006WE-Ua
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 12:43:06 +0000
Received: from mo6-p00-ob.smtp.rzone.de (unknown [2a01:238:400:100::a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7c223e0d-f572-4785-a7b9-dc15ac0baa76;
 Tue, 09 Feb 2021 12:43:04 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.17.1 DYNA|AUTH)
 with ESMTPSA id m08c6fx19Ch01Ka
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 9 Feb 2021 13:43:00 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7c223e0d-f572-4785-a7b9-dc15ac0baa76
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1612874583;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
	From:Subject:Sender;
	bh=kt+HjqYDFzVkKWwKnvvp0Lqb/b/anGw/0WXWlcWl1Nw=;
	b=fOt6XlMQPxDnB5YCI3UStL7Gri8AEyhnnet2/vvDeF6FexLQiVAlc8FExapSxHZwK9
	y68O28oy2sG/3vm/CodLtLKIBgvRdD0TvpIALzQ4AgO9AO0T14ccfbb2zO4qDBRuwQOj
	pOBpOdj8Scpf0QUB4nI546ct8EkfW85Df9jdQ19RO64Yoc5LrSSLljxoRenFofNmk1zW
	xW/oacjEPCqmm705U2KFsEV4WTVfNpNTKlTTPUL/VB22fcdLEzIxKxwO/nq9ZQwT8CHC
	WQD9MoxvXYDmiQjWNLsFlHCONVrZms0HxvgYxZkQ3XZUNIqy+dxfvMkJEPTLdSYY9zzw
	PEuw==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDXdoX8l8pYAcz5OTW+v4/A=="
X-RZG-CLASS-ID: mo00
Date: Tue, 9 Feb 2021 13:42:53 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>, <xen-devel@lists.xenproject.org>, Wei
 Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v20210111 08/39] xl: fix description of migrate --debug
Message-ID: <20210209134253.5eb68cbe.olaf@aepfle.de>
In-Reply-To: <6debd4a3-2d12-2b9f-c857-11dc2346d750@citrix.com>
References: <20210111174224.24714-1-olaf@aepfle.de>
	<20210111174224.24714-9-olaf@aepfle.de>
	<24609.26131.733756.369535@mariner.uk.xensource.com>
	<6debd4a3-2d12-2b9f-c857-11dc2346d750@citrix.com>
X-Mailer: Claws Mail 2020.08.19 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
 boundary="Sig_/z0idfK3Yl/YitIqovWtHLcR"; protocol="application/pgp-signature"

--Sig_/z0idfK3Yl/YitIqovWtHLcR
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

Am Mon, 8 Feb 2021 16:39:01 +0000
schrieb Andrew Cooper <andrew.cooper3@citrix.com>:

> It is possibly worth noting that you typically do see changed data when
> using --debug, because of how the front/back pairs work. This was a bit
> of a curveball during development.

I just noticed "migrate --debug" is a no-op; "verify" works only for Remus
or COLO, per send_domain_memory_live():
- libxl_domain_remus_start sets checkpointed_stream to COLO/REMUS.
- libxl_domain_suspend sets checkpointed_stream to NONE.
- External callers cannot influence this internal state.
- main_migrate_receive sets it based on the command-line option.


In case we want a "verify" functionality also for migration, the
"stream_type" check could be removed to make it work everywhere.
The domU is suspended, so it should not make much difference how often its
memory is passed around in this suspended state.
But this would be a separate thing to explore.

Having a "LIBXL_SUSPEND_DEBUG/XCFLAGS_DEBUG" might be useful, but in its
current state the flags should have "STREAM_VERIFY" in their name.

So instead of changing the help string, I suggest removing "--debug"
altogether from the xl UI.


Olaf

--Sig_/z0idfK3Yl/YitIqovWtHLcR
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmAig00ACgkQ86SN7mm1
DoCgKA/+MDD1AFX0KRPpNeqwPTflao9EzaHMAMGvpH/BvC2XnnVkLuoa4LTTxSHY
8i1UTMl80zyxfNNHGcaKVm5JwHujIt8j7Cd1zdJIyrsC0GOP54CHUfhgElDNQJAz
5jDV6QnsmSKT5JwdCZd0CC2rKL63OxSrd5D/sbEthcP8fUUsascgSLQmHSJ6SYAC
vMIdjAn7mSg4nBjXbnkRY7KoWpycSN/1BCbz1wktxmakOMLlYEf4QDJlx/gFoIug
314hdR+tBR7kz4CO4cRzOOPnG+0O0EG03Ic90KD5mLyiVEPh0vqdFjMpUMiINLTo
jIeZMb5zN9q3ZMrpOPwZG0cB9Fsh+kbyxhtO3t7b93IxjdKTjaWdzINpxIxHSTMg
r8emrGTB3L2VoCgvhG/CSWnT1lyYv4wp9nFnQdHsBLtOYUpDKmt/F/5ti3cXLGVB
cAxYBvWq06KnXs2P5nmECs8zZmj4p5R4U3I8yghrlbOFJZONZFgKyEeFTGU+g4Zr
7H/x5MC09yTAEh7eyFLwkO26XYbKmt0e7FLvstFv8q206v8cGwlEpLrfhhdac8Ro
wtxFBlmHADbvhS6mFO30wLvJZTUKlqwdb3G1ea5DqUP9q+1bxBH2v4LoTjY44+01
9QW2FAFnxi32ZYqqK71vMo0vWsc2YXGqnXPLFgngqarN8uPghvA=
=KR7j
-----END PGP SIGNATURE-----

--Sig_/z0idfK3Yl/YitIqovWtHLcR--


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 12:53:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 12:53:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83219.154293 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9SWd-0007YF-Cm; Tue, 09 Feb 2021 12:53:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83219.154293; Tue, 09 Feb 2021 12:53:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9SWd-0007Y8-9f; Tue, 09 Feb 2021 12:53:51 +0000
Received: by outflank-mailman (input) for mailman id 83219;
 Tue, 09 Feb 2021 12:53:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=BG3/=HL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l9SWc-0007Y3-IE
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 12:53:50 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7051a01d-c083-4310-826b-3ebb13ffd65e;
 Tue, 09 Feb 2021 12:53:49 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 16D30AD6A;
 Tue,  9 Feb 2021 12:53:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7051a01d-c083-4310-826b-3ebb13ffd65e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612875228; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=Xz6SEvYbpCIel5pRUo0p4mEoQEJzeymHCtV40pnxtHo=;
	b=mPpH3gVnMlhRoqlew3FuZ+8Yc0SppIiRiIjukJWGMOVwWLn9+HNk38eF8di/GOj7gJd3UH
	brVdcAqtG9HfRJFU/YEJor53qrOumR6RvhR7qG+Bs16Uj2jHC69KdERG34FJUgwdn54PIQ
	qVX74kblIKXqB8/Ts2SP+WI3YbTdRJg=
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v3 0/4] x86/time: calibration rendezvous adjustments
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Claudemir Todo Bom <claudemir@todobom.com>, Ian Jackson <iwj@xenproject.org>
Message-ID: <bb7494b9-f4d1-f0c0-2fb2-5201559c1962@suse.com>
Date: Tue, 9 Feb 2021 13:53:48 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

The middle two patches are meant to address a regression reported on
the list under "Problems with APIC on versions 4.9 and later (4.8
works)". In the course of analyzing output from a debugging patch I
ran into another anomaly again, which I thought I should finally try
to address. Hence patch 1. Patch 4 is new in v3 and RFC for now.

While looking closely at the corresponding debugging patch's output I
noticed a suspicious drift between local and master stime: Measured not
very precisely, local was behind master by about 200ms in about half an
hour. Interestingly the recording of ->master_stime (and hence the not
really inexpensive invocation of read_platform_stime()) looks to be
pretty pointless when CONSTANT_TSC - I haven't been able to spot an
actual consumer. IOW the drift may not be a problem, and we might be
able to eliminate the platform timer reads. (When !CONSTANT_TSC, such
drift would get corrected anyway, by local_time_calibration().)

1: change initiation of the calibration timer
2: adjust time recording in time_calibration_tsc_rendezvous()
3: don't move TSC backwards in time_calibration_tsc_rendezvous()
4: re-arrange struct calibration_rendezvous

Jan


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 12:54:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 12:54:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83220.154305 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9SXb-0007eZ-Ne; Tue, 09 Feb 2021 12:54:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83220.154305; Tue, 09 Feb 2021 12:54:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9SXb-0007eS-K2; Tue, 09 Feb 2021 12:54:51 +0000
Received: by outflank-mailman (input) for mailman id 83220;
 Tue, 09 Feb 2021 12:54:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=BG3/=HL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l9SXZ-0007eK-TA
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 12:54:49 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id eecf05fe-8a00-45f6-9f2a-56b6d6230b69;
 Tue, 09 Feb 2021 12:54:49 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1D3C5ADCD;
 Tue,  9 Feb 2021 12:54:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eecf05fe-8a00-45f6-9f2a-56b6d6230b69
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612875288; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ssL8YomSClhyIfhNV+3oRLDdbnJ7jLRwXkJQuCwirWw=;
	b=qNDMd4ahTwY5I4S9mSOMEuccQJ8Gk5viheOJrJQalud4LoxASbLpCRcJGfUAvctw0qZd9f
	Rb2t6qBqzt/Fw7SRHEujmDIiUFSF+eXfwjwAdvL+TwtjV8i4BKwfofD5uROmXVqlNred8S
	3jNDj/VTmQ/OqN7lQ+iqRAcAktY4VHk=
Subject: [PATCH v3 1/4] x86/time: change initiation of the calibration timer
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Claudemir Todo Bom <claudemir@todobom.com>, Ian Jackson <iwj@xenproject.org>
References: <bb7494b9-f4d1-f0c0-2fb2-5201559c1962@suse.com>
Message-ID: <9068daae-0f27-a724-5062-c02e5ebb4593@suse.com>
Date: Tue, 9 Feb 2021 13:54:48 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <bb7494b9-f4d1-f0c0-2fb2-5201559c1962@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

Setting the timer a second (EPOCH) into the future at a random point
during boot (prior to bringing up APs and prior to launching Dom0) does
not yield predictable results: The timer may expire while we're still
bringing up APs (too early) or when Dom0 already boots (too late).
Instead invoke the timer handler function explicitly at a predictable
point in time, once we've established the rendezvous function to use
(and hence also once all APs are online). This will, through the raising
and handling of TIMER_SOFTIRQ, then also have the effect of arming the
timer.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Roger Pau Monné <roger.pau@citrix.com>

--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -854,9 +854,7 @@ static void resume_platform_timer(void)
 
 static void __init reset_platform_timer(void)
 {
-    /* Deactivate any timers running */
     kill_timer(&plt_overflow_timer);
-    kill_timer(&calibration_timer);
 
     /* Reset counters and stamps */
     spin_lock_irq(&platform_timer_lock);
@@ -1956,19 +1954,13 @@ static void __init reset_percpu_time(voi
     t->stamp.master_stime = t->stamp.local_stime;
 }
 
-static void __init try_platform_timer_tail(bool late)
+static void __init try_platform_timer_tail(void)
 {
     init_timer(&plt_overflow_timer, plt_overflow, NULL, 0);
     plt_overflow(NULL);
 
     platform_timer_stamp = plt_stamp64;
     stime_platform_stamp = NOW();
-
-    if ( !late )
-        init_percpu_time();
-
-    init_timer(&calibration_timer, time_calibration, NULL, 0);
-    set_timer(&calibration_timer, NOW() + EPOCH);
 }
 
 /* Late init function, after all cpus have booted */
@@ -2009,10 +2001,13 @@ static int __init verify_tsc_reliability
             time_calibration_rendezvous_fn = time_calibration_nop_rendezvous;
 
             /* Finish platform timer switch. */
-            try_platform_timer_tail(true);
+            try_platform_timer_tail();
 
             printk("Switched to Platform timer %s TSC\n",
                    freq_string(plt_src.frequency));
+
+            time_calibration(NULL);
+
             return 0;
         }
     }
@@ -2033,6 +2028,8 @@ static int __init verify_tsc_reliability
          !boot_cpu_has(X86_FEATURE_TSC_RELIABLE) )
         time_calibration_rendezvous_fn = time_calibration_tsc_rendezvous;
 
+    time_calibration(NULL);
+
     return 0;
 }
 __initcall(verify_tsc_reliability);
@@ -2048,7 +2045,11 @@ int __init init_xen_time(void)
     do_settime(get_wallclock_time(), 0, NOW());
 
     /* Finish platform timer initialization. */
-    try_platform_timer_tail(false);
+    try_platform_timer_tail();
+
+    init_percpu_time();
+
+    init_timer(&calibration_timer, time_calibration, NULL, 0);
 
     /*
      * Setup space to track per-socket TSC_ADJUST values. Don't fiddle with



From xen-devel-bounces@lists.xenproject.org Tue Feb 09 12:55:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 12:55:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83221.154316 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9SY8-0007kv-07; Tue, 09 Feb 2021 12:55:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83221.154316; Tue, 09 Feb 2021 12:55:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9SY7-0007ko-TC; Tue, 09 Feb 2021 12:55:23 +0000
Received: by outflank-mailman (input) for mailman id 83221;
 Tue, 09 Feb 2021 12:55:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=BG3/=HL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l9SY6-0007kd-39
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 12:55:22 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e30d3eeb-2174-45f1-8bad-e6cc0e4db328;
 Tue, 09 Feb 2021 12:55:21 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 36B88AF7B;
 Tue,  9 Feb 2021 12:55:20 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e30d3eeb-2174-45f1-8bad-e6cc0e4db328
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612875320; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Wj3LF6N3hDpyMxGowUzNE8QQ+tNIwy9wVm4fwaEKDOE=;
	b=vHpsQ5Aw3nml/+0jP8BUWaK7WUZklTEIBFXQy//AlIp4Tz0krwYW650znhIxTqU3UVo/6G
	vat7dW0J0gMTpYXSp4cMmVREXTQxSzOzmQ5NZGABwINBmH8cTtLi4CJMoOyOfRT2RCxi7s
	Z32QdGSq6yNxYkGGg/X45DkRyfA9uVU=
Subject: [PATCH v3 2/4] x86/time: adjust time recording in
 time_calibration_tsc_rendezvous()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Claudemir Todo Bom <claudemir@todobom.com>, Ian Jackson <iwj@xenproject.org>
References: <bb7494b9-f4d1-f0c0-2fb2-5201559c1962@suse.com>
Message-ID: <834b2d29-2589-f2b7-b496-7f1b35d35cff@suse.com>
Date: Tue, 9 Feb 2021 13:55:20 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <bb7494b9-f4d1-f0c0-2fb2-5201559c1962@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

The (stime,tsc) tuple is the basis for extrapolation by get_s_time().
Therefore the two better get taken as close to one another as possible.
This means two things: First, reading platform time is too early when
done on the first iteration. The closest we can get is on the last
iteration, immediately before telling other CPUs to write their TSCs
(and then also writing CPU0's). While at first glance it may seem
not overly relevant when exactly platform time is read (when assuming
that only stime is ever relevant anywhere, and hence the association
with the precise TSC values is of lower interest), both CPU frequency
changes and the effects of SMT make it unpredictable (between individual
rendezvous instances) how long the loop iterations will take. This will
in turn lead to a higher error than necessary in how close to linear
stime movement we can get.

Second, re-reading the TSC for local recording is increasing the overall
error as well, when we already know a more precise value - the one just
written.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Roger Pau Monné <roger.pau@citrix.com>
---
v2: New.

--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -1662,11 +1662,12 @@ struct calibration_rendezvous {
 };
 
 static void
-time_calibration_rendezvous_tail(const struct calibration_rendezvous *r)
+time_calibration_rendezvous_tail(const struct calibration_rendezvous *r,
+                                 uint64_t tsc)
 {
     struct cpu_time_stamp *c = &this_cpu(cpu_calibration);
 
-    c->local_tsc    = rdtsc_ordered();
+    c->local_tsc    = tsc;
     c->local_stime  = get_s_time_fixed(c->local_tsc);
     c->master_stime = r->master_stime;
 
@@ -1691,11 +1692,11 @@ static void time_calibration_tsc_rendezv
             while ( atomic_read(&r->semaphore) != (total_cpus - 1) )
                 cpu_relax();
 
-            if ( r->master_stime == 0 )
-            {
-                r->master_stime = read_platform_stime(NULL);
+            if ( r->master_tsc_stamp == 0 )
                 r->master_tsc_stamp = rdtsc_ordered();
-            }
+            else if ( i == 0 )
+                r->master_stime = read_platform_stime(NULL);
+
             atomic_inc(&r->semaphore);
 
             if ( i == 0 )
@@ -1720,7 +1721,7 @@ static void time_calibration_tsc_rendezv
         }
     }
 
-    time_calibration_rendezvous_tail(r);
+    time_calibration_rendezvous_tail(r, r->master_tsc_stamp);
 }
 
 /* Ordinary rendezvous function which does not modify TSC values. */
@@ -1745,7 +1746,7 @@ static void time_calibration_std_rendezv
         smp_rmb(); /* receive signal /then/ read r->master_stime */
     }
 
-    time_calibration_rendezvous_tail(r);
+    time_calibration_rendezvous_tail(r, rdtsc_ordered());
 }
 
 /*



From xen-devel-bounces@lists.xenproject.org Tue Feb 09 12:55:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 12:55:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83222.154329 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9SYb-0007rM-94; Tue, 09 Feb 2021 12:55:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83222.154329; Tue, 09 Feb 2021 12:55:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9SYb-0007rF-63; Tue, 09 Feb 2021 12:55:53 +0000
Received: by outflank-mailman (input) for mailman id 83222;
 Tue, 09 Feb 2021 12:55:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=BG3/=HL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l9SYa-0007r9-Qv
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 12:55:52 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 008cb336-344b-47ca-9730-6ac64ba4ecc5;
 Tue, 09 Feb 2021 12:55:51 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7CAB3AF6F;
 Tue,  9 Feb 2021 12:55:50 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 008cb336-344b-47ca-9730-6ac64ba4ecc5
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612875350; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=HnGdtCYHBPgvwpHsIvzUOym5DXab5WLpkJmnAis8uD4=;
	b=DzOcWzU3kh8hg69qB9McgliT0UySqj7K1zXEglgiEV+PmEuGG6yjHa3xAuUIKO3cRxDavo
	r9/exgmYGinySPkl7lVUhvSajbURIbFuomFFuO2R65kDJCpC9XRTVZq6NdQ8eVTrx+pkAN
	mswRk5n8FLNZNyNt0hYKr/+ezprrbyE=
Subject: [PATCH v3 3/4] x86/time: don't move TSC backwards in
 time_calibration_tsc_rendezvous()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Claudemir Todo Bom <claudemir@todobom.com>, Ian Jackson <iwj@xenproject.org>
References: <bb7494b9-f4d1-f0c0-2fb2-5201559c1962@suse.com>
Message-ID: <b6450b3f-7fee-4b6c-8e75-b786f49d0ff3@suse.com>
Date: Tue, 9 Feb 2021 13:55:51 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <bb7494b9-f4d1-f0c0-2fb2-5201559c1962@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

While moving TSCs backwards by small amounts may be tolerable, the
unconditional use of CPU0's value here has been found to be a problem
when the boot time TSC of the BSP was behind that of all APs by more
than a second. In particular, because get_s_time_fixed() produces
insane output when the calculated delta is negative, we can't allow
this to happen.

On the first iteration have all other CPUs sort out the highest TSC
value any one of them has read. On the second iteration, if that
maximum is higher than CPU0's, update its recorded value from that
taken in the first iteration. Use the resulting value on the last
iteration to write everyone's TSCs.

To account for the possible discontinuity, have
time_calibration_rendezvous_tail() record the newly written value, but
extrapolate local stime using the value read.

Reported-by: Claudemir Todo Bom <claudemir@todobom.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
---
v3: Simplify cmpxchg() loop.
v2: Don't update r->master_stime by calculation. Re-base over new
    earlier patch. Make time_calibration_rendezvous_tail() take two TSC
    values.
---
Since CPU0 reads its TSC last on the first iteration, if TSCs were
perfectly sync-ed there shouldn't ever be a need to update. However,
even on the TSC-reliable system I first tested this on (using
"tsc=skewed" to get this rendezvous function into use in the first
place) updates by up to several thousand clocks did happen. I wonder
whether this points at some problem with the approach that I'm not (yet)
seeing.

Considering how modern the reporter's CPU is, I suspect the system
wouldn't even need to turn off TSC_RELIABLE, if only there wasn't the
boot time skew. Hence another approach might be to fix this boot time
skew itself. Of course, to recognize whether the TSCs then still aren't
in sync, we'd need to run tsc_check_reliability() sufficiently long
after that adjustment. That is in addition to the need for this "fixing"
to be precise enough for the TSCs to no longer look skewed afterwards.

As per the comment ahead of it, the original purpose of the function was
to deal with TSCs halted in deep C states. While this probably explains
why only forward moves were ever expected, I don't see how this could
have been reliable in case CPU0 was deep-sleeping for a sufficiently
long time. My only guess here is a hidden assumption of CPU0 never being
idle for long enough. Furthermore, that comment looks to contradict
the actual use of the function: It gets installed when !RELIABLE_TSC,
while the comment would suggest !NONSTOP_TSC. I suppose the comment is
simply misleading, because RELIABLE_TSC implies NONSTOP_TSC according to
all the places where either of the two feature bits gets played with.
Plus in the !NONSTOP_TSC case we write the TSC explicitly anyway when
coming back out of a (deep; see below) C-state.

As an implication of the above, mwait_idle_cpu_init() then looks to
pointlessly clear "reliable" when "nonstop" is clear.

It further looks odd that mwait_idle() (unlike acpi_processor_idle())
calls cstate_restore_tsc() independent of what C-state was active.

--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -1658,17 +1658,17 @@ struct calibration_rendezvous {
     cpumask_t cpu_calibration_map;
     atomic_t semaphore;
     s_time_t master_stime;
-    u64 master_tsc_stamp;
+    uint64_t master_tsc_stamp, max_tsc_stamp;
 };
 
 static void
 time_calibration_rendezvous_tail(const struct calibration_rendezvous *r,
-                                 uint64_t tsc)
+                                 uint64_t old_tsc, uint64_t new_tsc)
 {
     struct cpu_time_stamp *c = &this_cpu(cpu_calibration);
 
-    c->local_tsc    = tsc;
-    c->local_stime  = get_s_time_fixed(c->local_tsc);
+    c->local_tsc    = new_tsc;
+    c->local_stime  = get_s_time_fixed(old_tsc ?: new_tsc);
     c->master_stime = r->master_stime;
 
     raise_softirq(TIME_CALIBRATE_SOFTIRQ);
@@ -1683,6 +1683,7 @@ static void time_calibration_tsc_rendezv
     int i;
     struct calibration_rendezvous *r = _r;
     unsigned int total_cpus = cpumask_weight(&r->cpu_calibration_map);
+    uint64_t tsc = 0;
 
     /* Loop to get rid of cache effects on TSC skew. */
     for ( i = 4; i >= 0; i-- )
@@ -1692,8 +1693,15 @@ static void time_calibration_tsc_rendezv
             while ( atomic_read(&r->semaphore) != (total_cpus - 1) )
                 cpu_relax();
 
-            if ( r->master_tsc_stamp == 0 )
-                r->master_tsc_stamp = rdtsc_ordered();
+            if ( tsc == 0 )
+                r->master_tsc_stamp = tsc = rdtsc_ordered();
+            else if ( r->master_tsc_stamp < r->max_tsc_stamp )
+                /*
+                 * We want to avoid moving the TSC backwards for any CPU.
+                 * Use the largest value observed anywhere on the first
+                 * iteration.
+                 */
+                r->master_tsc_stamp = r->max_tsc_stamp;
             else if ( i == 0 )
                 r->master_stime = read_platform_stime(NULL);
 
@@ -1712,6 +1720,15 @@ static void time_calibration_tsc_rendezv
             while ( atomic_read(&r->semaphore) < total_cpus )
                 cpu_relax();
 
+            if ( tsc == 0 )
+            {
+                uint64_t cur = ACCESS_ONCE(r->max_tsc_stamp);
+
+                tsc = rdtsc_ordered();
+                while ( tsc > cur )
+                    cur = cmpxchg(&r->max_tsc_stamp, cur, tsc);
+            }
+
             if ( i == 0 )
                 write_tsc(r->master_tsc_stamp);
 
@@ -1719,9 +1736,12 @@ static void time_calibration_tsc_rendezv
             while ( atomic_read(&r->semaphore) > total_cpus )
                 cpu_relax();
         }
+
+        /* Just in case a read above ended up reading zero. */
+        tsc += !tsc;
     }
 
-    time_calibration_rendezvous_tail(r, r->master_tsc_stamp);
+    time_calibration_rendezvous_tail(r, tsc, r->master_tsc_stamp);
 }
 
 /* Ordinary rendezvous function which does not modify TSC values. */
@@ -1746,7 +1766,7 @@ static void time_calibration_std_rendezv
         smp_rmb(); /* receive signal /then/ read r->master_stime */
     }
 
-    time_calibration_rendezvous_tail(r, rdtsc_ordered());
+    time_calibration_rendezvous_tail(r, 0, rdtsc_ordered());
 }
 
 /*



From xen-devel-bounces@lists.xenproject.org Tue Feb 09 12:56:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 12:56:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83223.154341 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9SZJ-00080Y-K0; Tue, 09 Feb 2021 12:56:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83223.154341; Tue, 09 Feb 2021 12:56:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9SZJ-00080R-GB; Tue, 09 Feb 2021 12:56:37 +0000
Received: by outflank-mailman (input) for mailman id 83223;
 Tue, 09 Feb 2021 12:56:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9SZI-00080E-Q4; Tue, 09 Feb 2021 12:56:36 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9SZI-0006dR-KR; Tue, 09 Feb 2021 12:56:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9SZI-0001kB-9f; Tue, 09 Feb 2021 12:56:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l9SZI-0006y2-9B; Tue, 09 Feb 2021 12:56:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=blrZzqRxOiG3EVQmzEaPCwQ1IBLTJMb4ma3DnB9G2WE=; b=D1wuCbb4jJkbWQCbmawSB00dZH
	QRZ5kR2+Fbwzy5mvo8P5RZe/jAt61vJS2G2hCbmaGKKIsBnzYfYU+vEoVdonI7pp+jy2Wl0FP3k1E
	kSIlVYmuyRpHkt4C0SGa0Yvy2J7lB4oBqNOnD8NgsRMtBTA2VxygWO9yEKRhQVP6JmxA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159129-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 159129: regressions - FAIL
X-Osstest-Failures:
    linux-5.4:test-arm64-arm64-xl:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-shadow:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-credit1:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-seattle:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-credit2:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-xsm:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-thunderx:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-libvirt:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-cubietruck:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-arndale:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    linux-5.4:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=d4716ee8751bf8dabf5872ba008124a0979a5f94
X-Osstest-Versions-That:
    linux=a829146c3fdcf6d0b76d9c54556a223820f1f73b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 09 Feb 2021 12:56:36 +0000

flight 159129 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159129/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl          14 guest-start              fail REGR. vs. 158387
 test-amd64-amd64-xl-shadow   14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-credit1  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-seattle  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-credit2  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-xsm      14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-credit2  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-thunderx 14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-credit1  14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-multivcpu 14 guest-start             fail REGR. vs. 158387
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-cubietruck 14 guest-start            fail REGR. vs. 158387
 test-armhf-armhf-xl          14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-arndale  14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 158387
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 158387

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     14 guest-start              fail REGR. vs. 158387

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 158387
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 158387
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 158387
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 158387
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 158387
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 158387
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 158387
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 158387
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 158387
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                d4716ee8751bf8dabf5872ba008124a0979a5f94
baseline version:
 linux                a829146c3fdcf6d0b76d9c54556a223820f1f73b

Last test of basis   158387  2021-01-12 19:40:06 Z   27 days
Failing since        158473  2021-01-17 13:42:20 Z   22 days   34 attempts
Testing same since   159129  2021-02-08 10:46:56 Z    1 days    1 attempts

------------------------------------------------------------
377 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   fail    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 11411 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 12:57:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 12:57:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83225.154355 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9SZo-00088G-1V; Tue, 09 Feb 2021 12:57:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83225.154355; Tue, 09 Feb 2021 12:57:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9SZn-000888-Ui; Tue, 09 Feb 2021 12:57:07 +0000
Received: by outflank-mailman (input) for mailman id 83225;
 Tue, 09 Feb 2021 12:57:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=BG3/=HL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l9SZm-00087u-5T
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 12:57:06 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 279d4154-5031-449e-842b-7b4652fc68ce;
 Tue, 09 Feb 2021 12:57:05 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 78800AF57;
 Tue,  9 Feb 2021 12:57:04 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 279d4154-5031-449e-842b-7b4652fc68ce
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612875424; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=FDe1jUzRfbbh7+YuIc5yGiZBwFURovh6CEvEftGqGYw=;
	b=S3/TWOADLqQR2nKGj5HFEPFdKZxPTyk+Xw+ZRLaMAKmUwrIaPIGXaD12RG9NWrB/qjwijr
	1A2IglhliX5W9ILvav41xKsxqTVOQK+Wv1k+GxMbTkF6ap5QpbfTum7eteNr8gYgaore8U
	rpN26Ghp/9Y896HOb3jD3i+5CPEBITQ=
Subject: [PATCH RFC v3 4/4] x86/time: re-arrange struct calibration_rendezvous
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <bb7494b9-f4d1-f0c0-2fb2-5201559c1962@suse.com>
Message-ID: <56d70757-a887-6824-18f4-93b1f244e44b@suse.com>
Date: Tue, 9 Feb 2021 13:57:05 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <bb7494b9-f4d1-f0c0-2fb2-5201559c1962@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

To reduce latency on time_calibration_tsc_rendezvous()'s last loop
iteration, separate the fields written on the last iteration far enough
from the crucial field read by all CPUs on that iteration that they end
up in distinct cache lines. Prefetch this field on an earlier iteration.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v3: New.
---
While readability would likely suffer, we may want to consider avoiding
the prefetch on at least the first two iterations (where the field still
gets / may get written to by CPU0). Could e.g. be

    switch ( i )
    {
    case 0:
        write_tsc(r->master_tsc_stamp);
        break;
    case 1:
        prefetch(&r->master_tsc_stamp);
        break;
    }

Of course it would also be nice to avoid the pretty likely branch
misprediction on the last iteration. But with the static prediction
hints having been rather short-lived in the architecture, I don't see
any good means to do so.

--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -1655,10 +1655,20 @@ static void tsc_check_reliability(void)
  * All CPUS snapshot their local TSC and extrapolation of system time.
  */
 struct calibration_rendezvous {
-    cpumask_t cpu_calibration_map;
     atomic_t semaphore;
     s_time_t master_stime;
-    uint64_t master_tsc_stamp, max_tsc_stamp;
+    cpumask_t cpu_calibration_map;
+    /*
+     * All CPUs want to read master_tsc_stamp on the last iteration.  If
+     * cpu_calibration_map isn't large enough to push the field into a cache
+     * line different from the one used by semaphore (written by all CPUs on
+     * every iteration) and master_stime (written by CPU0 on the last
+     * iteration), align the field suitably.
+     */
+    uint64_t __aligned(BITS_TO_LONGS(NR_CPUS) * sizeof(long) +
+                       sizeof(atomic_t) + sizeof(s_time_t) < SMP_CACHE_BYTES
+                       ? SMP_CACHE_BYTES : 1) master_tsc_stamp;
+    uint64_t max_tsc_stamp;
 };
 
 static void
@@ -1709,6 +1719,8 @@ static void time_calibration_tsc_rendezv
 
             if ( i == 0 )
                 write_tsc(r->master_tsc_stamp);
+            else
+                prefetch(&r->master_tsc_stamp);
 
             while ( atomic_read(&r->semaphore) != (2*total_cpus - 1) )
                 cpu_relax();
@@ -1731,6 +1743,8 @@ static void time_calibration_tsc_rendezv
 
             if ( i == 0 )
                 write_tsc(r->master_tsc_stamp);
+            else
+                prefetch(&r->master_tsc_stamp);
 
             atomic_inc(&r->semaphore);
             while ( atomic_read(&r->semaphore) > total_cpus )



From xen-devel-bounces@lists.xenproject.org Tue Feb 09 13:02:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 13:02:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83234.154368 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Seq-0000ka-Mo; Tue, 09 Feb 2021 13:02:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83234.154368; Tue, 09 Feb 2021 13:02:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Seq-0000kT-IT; Tue, 09 Feb 2021 13:02:20 +0000
Received: by outflank-mailman (input) for mailman id 83234;
 Tue, 09 Feb 2021 13:02:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=BG3/=HL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l9Seo-0000kL-Vd
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 13:02:19 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fdd56266-18b9-45e0-8ced-70d5c545f552;
 Tue, 09 Feb 2021 13:02:17 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BD7CAAD6A;
 Tue,  9 Feb 2021 13:02:16 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fdd56266-18b9-45e0-8ced-70d5c545f552
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612875736; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=pm1SXLBpvnL4ukE2e2l7lHTnGHmAvihoOGdi1YBFjwM=;
	b=VUxofy4sEx8Fhj1IzhsRGNvv+K+xEMVx4cMyo+VqbgiGnoZhbG/5NNlJYnNoQsTEEPPyjl
	kr7UnwQe4gGSqGUd0np5KeBnBtzlkEDnwj6df6shvJBOsrCV/KfpUctezTG8uiEJR7ZtUD
	bL7MJexcf8N1Zv7w4CyWaDdAZV2gS28=
Subject: Re: [PATCH v3 0/4] x86/time: calibration rendezvous adjustments
From: Jan Beulich <jbeulich@suse.com>
To: Ian Jackson <iwj@xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <bb7494b9-f4d1-f0c0-2fb2-5201559c1962@suse.com>
Message-ID: <a3be96d8-1480-8af4-601b-a55ab3819f97@suse.com>
Date: Tue, 9 Feb 2021 14:02:17 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <bb7494b9-f4d1-f0c0-2fb2-5201559c1962@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 09.02.2021 13:53, Jan Beulich wrote:
> The middle two patches are meant to address a regression reported on
> the list under "Problems with APIC on versions 4.9 and later (4.8
> works)". In the course of analyzing output from a debugging patch I
> ran into another anomaly again, which I thought I should finally try
> to address. Hence patch 1. Patch 4 is new in v3 and RFC for now.

Of course this is the kind of change I'd prefer doing early in a
release cycle. I don't think there are severe risks from patch 1, but
I'm not going to claim patches 2 and 3 are risk free. They fix booting
Xen on a system left in a rather awkward state by the firmware, and
they shouldn't affect well-behaved modern systems at all (since those
use a different rendezvous function). While we've been living with
this issue for years, I also consider this series a backporting
candidate. Hence I can see reasons both for and against inclusion in 4.15.

Jan

> 1: change initiation of the calibration timer
> 2: adjust time recording time_calibration_tsc_rendezvous()
> 3: don't move TSC backwards in time_calibration_tsc_rendezvous()
> 4: re-arrange struct calibration_rendezvous
> 
> Jan
> 



From xen-devel-bounces@lists.xenproject.org Tue Feb 09 13:08:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 13:08:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83236.154379 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9SkT-0000wp-Ae; Tue, 09 Feb 2021 13:08:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83236.154379; Tue, 09 Feb 2021 13:08:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9SkT-0000wi-7W; Tue, 09 Feb 2021 13:08:09 +0000
Received: by outflank-mailman (input) for mailman id 83236;
 Tue, 09 Feb 2021 13:08:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=t2YR=HL=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1l9SkR-0000wd-Oa
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 13:08:07 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9a9cba1f-80ea-4359-8e9c-a0cc52912175;
 Tue, 09 Feb 2021 13:08:06 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9a9cba1f-80ea-4359-8e9c-a0cc52912175
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612876086;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=vQpef0QdBfkairhatvK4xR4ldSyHqiim2peurgDoeIY=;
  b=D9iUiej96UbWzhHJqtDSMPGXgH6AGc7FzCEamBxPaYo978bNNktMQsNq
   gASx93FdjPtHxHZuGkPBk0/OqI5fF+8P7zmMPzszeWDxlsFCd+SoIKu3S
   FV3ZU563mJxH6sMFM3JnoBdInJRY0ikIbvbpdTIpWZ2drbS9EPpFf4FY+
   s=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 37051619
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,164,1610427600"; 
   d="scan'208";a="37051619"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/+Ro2D7jldwfZhO1uG8tJx5GLxNvI7eDcL3+eU7nxCU=;
 b=MVm2u2MPjDATdrw/wgPn1IydFcEZ8RMz5S/pT67FUCPwLE23nK7a7j+sWyz5tY569JTPf74PH2tx5i2Iwvjs9/FdvL+MzqLnZPOyLVyGVWC5TMSpv9/KrUeYnQSMqPObribk1qrFKnEJKODF7WUcaL3w/kdYJioKNReEuYphZUM=
Date: Tue, 9 Feb 2021 14:07:57 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Tim Deegan
	<tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
Subject: Re: [PATCH 02/17] x86: split __{get,put}_user() into "guest" and
 "unsafe" variants
Message-ID: <YCKJLbaTzD6YF/g5@Air-de-Roger>
References: <4f1975a9-bdd9-f556-9db5-eb6c428f258f@suse.com>
 <13d1d621-21db-0e59-6603-2b22b6a9d180@suse.com>
 <YB1nhBeMRVGyO8Fk@Air-de-Roger>
 <d23dc40c-3b37-ade2-f987-4f79b06901b1@suse.com>
 <YB1v60CuOdhxFwNy@Air-de-Roger>
 <199d2681-9704-8804-d3c3-d8ad24fca137@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <199d2681-9704-8804-d3c3-d8ad24fca137@suse.com>
X-ClientProxiedBy: PR0P264CA0139.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:100:1a::31) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 6177d926-3e45-4975-52a6-08d8ccfbbbc4
X-MS-TrafficTypeDiagnostic: DM5PR03MB3211:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM5PR03MB3211F6FE212F5F66A4E6ECDC8F8E9@DM5PR03MB3211.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 6177d926-3e45-4975-52a6-08d8ccfbbbc4
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Feb 2021 13:08:03.4646
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: emXKLM/LQi3Nv5NU0kVfvWVryQcp4eR0t5ptO+4Xm8B3Ss7EzWvUo+6XSQVyQ92z4dmnmCQqfA3X7iZjWuqurw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB3211
X-OriginatorOrg: citrix.com

On Fri, Feb 05, 2021 at 05:26:33PM +0100, Jan Beulich wrote:
> On 05.02.2021 17:18, Roger Pau Monné wrote:
> > On Fri, Feb 05, 2021 at 05:13:22PM +0100, Jan Beulich wrote:
> >> On 05.02.2021 16:43, Roger Pau Monné wrote:
> >>> On Thu, Jan 14, 2021 at 04:04:11PM +0100, Jan Beulich wrote:
> >>>> The "guest" variants are intended to work with (potentially) fully guest
> >>>> controlled addresses, while the "unsafe" variants are not.
> >>>
> >>> Just to clarify, both work against user addresses, but guest variants
> >>> need to be more careful because the guest provided address can also be
> >>> modified?
> >>>
> >>> I'm trying to understand the difference between "fully guest
> >>> controlled" and "guest controlled".
> >>
> >> Not exactly, no. "unsafe" means access to anything which may
> >> fault, guest controlled or not. do_invalid_op()'s reading of
> >> the insn stream is a good example - the faulting insn there
> >> isn't guest controlled at all, but we still want to be careful
> >> when trying to read these bytes, as we don't want to fully
> >> trust %rip there.

Oh, I see. It's possible that %rip points to an unmapped address
there, and we need to be careful when reading, even if the value of
%rip cannot be controlled by the guest and can legitimately point to
Xen's address space.

> > Would it make sense to treat everything as 'guest' accesses for the
> > sake of not having this difference?
> 
> That's what we've been doing until now. It is the purpose of
> this change to allow the two to behave differently.
> 
> > I think having two accessors is likely to cause confusion and could
> > possibly lead to the wrong one being used in unexpected contexts. Does
> > it add too big a performance penalty to always use the most
> > restrictive one?
> 
> The problem is the most restrictive one is going to be too
> restrictive - we wouldn't be able to access Xen space anymore
> e.g. from the place pointed at above as example. This is
> because for guest accesses (but not for "unsafe" ones) we're
> going to divert them into non-canonical space (and hence make
> speculation impossible, as such an access would fault) if it
> would touch Xen space.

Yes, I understand now. I think it would have been helpful (for me) to
have the first sentence as:

The "guest" variants are intended to work with (potentially) fully guest
controlled addresses, while the "unsafe" variants are expected to be
used to access addresses not under guest control, but which could
trigger faults anyway (like accessing the instruction stream in
do_invalid_op).

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 13:10:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 13:10:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83239.154392 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Smc-0001tE-OC; Tue, 09 Feb 2021 13:10:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83239.154392; Tue, 09 Feb 2021 13:10:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Smc-0001t7-Ku; Tue, 09 Feb 2021 13:10:22 +0000
Received: by outflank-mailman (input) for mailman id 83239;
 Tue, 09 Feb 2021 13:10:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gqd1=HL=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1l9Smb-0001t1-FX
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 13:10:21 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com (unknown
 [40.107.20.71]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0d1a1fd7-4be5-48d1-b012-6bba688f6568;
 Tue, 09 Feb 2021 13:10:18 +0000 (UTC)
Received: from DB6PR0601CA0020.eurprd06.prod.outlook.com (2603:10a6:4:7b::30)
 by AM0PR08MB3826.eurprd08.prod.outlook.com (2603:10a6:208:105::26)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3825.25; Tue, 9 Feb
 2021 13:10:16 +0000
Received: from DB5EUR03FT018.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:7b:cafe::b0) by DB6PR0601CA0020.outlook.office365.com
 (2603:10a6:4:7b::30) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3846.25 via Frontend
 Transport; Tue, 9 Feb 2021 13:10:16 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT018.mail.protection.outlook.com (10.152.20.69) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3784.11 via Frontend Transport; Tue, 9 Feb 2021 13:10:15 +0000
Received: ("Tessian outbound 4d8113405d55:v71");
 Tue, 09 Feb 2021 13:10:15 +0000
Received: from f8e84157b6f8.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 2A68FCB6-F6E9-47FA-AC53-D9EDE9756E6C.1; 
 Tue, 09 Feb 2021 13:10:10 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id f8e84157b6f8.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 09 Feb 2021 13:10:10 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com (2603:10a6:10:49::10)
 by DBBPR08MB5899.eurprd08.prod.outlook.com (2603:10a6:10:208::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3805.19; Tue, 9 Feb
 2021 13:10:08 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::f5c1:9694:9263:d90d]) by DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::f5c1:9694:9263:d90d%2]) with mapi id 15.20.3825.030; Tue, 9 Feb 2021
 13:10:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0d1a1fd7-4be5-48d1-b012-6bba688f6568
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=F+iMAGOzN1j9tIkmGuC+qwNoR956aFcvV7NuUkUY7Pg=;
 b=a89FmOBo/rtAktPg4or1YRIvFxoJctA04j/KfbMzcySmnMPF6C1CKMDxsmdQsDBI4Lw5+LYWWVPEVxWjZ0MsT6kTwn3oTD7iw077uDmtykgg5cpSximIE/pjCAyvPsx2o4KfufqgqzCxNFyXyR3mX7JEpa2DUTiVLdFkpTkXb18=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 91649c49368791df
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=F+iMAGOzN1j9tIkmGuC+qwNoR956aFcvV7NuUkUY7Pg=;
 b=a89FmOBo/rtAktPg4or1YRIvFxoJctA04j/KfbMzcySmnMPF6C1CKMDxsmdQsDBI4Lw5+LYWWVPEVxWjZ0MsT6kTwn3oTD7iw077uDmtykgg5cpSximIE/pjCAyvPsx2o4KfufqgqzCxNFyXyR3mX7JEpa2DUTiVLdFkpTkXb18=
From: Rahul Singh <Rahul.Singh@arm.com>
To: Julien Grall <julien@xen.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, "lucmiccio@gmail.com"
	<lucmiccio@gmail.com>, xen-devel <xen-devel@lists.xenproject.org>, Bertrand
 Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: fix gnttab_need_iommu_mapping
Thread-Topic: [PATCH] xen/arm: fix gnttab_need_iommu_mapping
Thread-Index: AQHW/CCVIBhvrRIcEE2wSNDNMu6y5KpOkgeAgAABdICAAAIFgIAACHWAgAEzkIA=
Date: Tue, 9 Feb 2021 13:10:07 +0000
Message-ID: <D7BC7145-3C03-456C-B255-AF2CC90A351E@arm.com>
References: <alpine.DEB.2.21.2102051604320.29047@sstabellini-ThinkPad-T480s>
 <C36DCA9F-1212-4385-AE66-7D41C586A313@arm.com>
 <7e963696-a21f-4c79-5f35-a342982bee86@xen.org>
 <3EEEACBE-2028-4DE9-A3BD-053FF82CFC75@arm.com>
 <bc7c3b13-88da-691f-8094-75502f06e882@xen.org>
In-Reply-To: <bc7c3b13-88da-691f-8094-75502f06e882@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [80.1.41.211]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: d6a0cd37-3b54-4953-8b6a-08d8ccfc0ad5
x-ms-traffictypediagnostic: DBBPR08MB5899:|AM0PR08MB3826:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AM0PR08MB3826CE735CCA5EE7E8612D65FC8E9@AM0PR08MB3826.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <31A76DA5169DE749BE5A06DFAEC97D3E@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB5899
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT018.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	f6999384-d6f3-474f-2db5-08d8ccfc063e
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	KXRJvGTQKg/D4J3fpffBKxWdX/KS+HoN/oXqWXO9N8YaJIiFIZrPwSK+++nP5ntmSySf020Fh9I1tclLmKi3/tlmJCoC0JSfkVUwxF/8d6s2wtZYH/prmFQvobi/tENcld3U1UDFM/Vq9L8tm1B4KzVqOr37b7wsZTBMiheg9MBqJ+inzGsajuU+UYSn40Qf8xneKITKLGmQNot2/A/Df4FgbtQOkcGFC8Kg0lr6PptNhMocE676ByQHmp3fG9GyS1tUVs14oOfB8X3SAkNbbi4ckoUgBa+va+Oo5fWl/4ZRHmtllB8Vs6jGlk1ZExEdI0ZpMXQd/dBjLabkOd80V2OxjgGRJJEG2wwZ4ztfujRlb3Or5Tcax7XBtiekNUTbb1/hp4Ho4MRaiejN67cE3F2TkqAiDrbozPILnG97S8tZrC8NWczK+Y32iNnbRfMSNSjGPo9dSHvdEaWI0kQJu0bhJXMitHrwqlIoZ/5xYO1iPB4plg8GFFXsP9p2FllVVuOIblFrf5vQGUM8ya7DuYhXPYF72WaMxwgks9THd1dL+VZov+c4qttn+UedUTJvSkP7aPQcbsyPpc4tVForIw6duOTxj6w2uGe1UwAgvT2W4mx2qpjISjZYB0DICwoebW2jsEwNm1H9c2LlyKD7Tn6r/1ZbeYSC8656Pmye9IA=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(376002)(396003)(39860400002)(136003)(346002)(46966006)(36840700001)(316002)(6862004)(6486002)(6512007)(70206006)(33656002)(86362001)(5660300002)(53546011)(6506007)(81166007)(82310400003)(82740400003)(70586007)(36756003)(356005)(83380400001)(478600001)(4326008)(54906003)(47076005)(107886003)(8936002)(186003)(2906002)(2616005)(26005)(336012)(8676002)(36860700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Feb 2021 13:10:15.8447
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d6a0cd37-3b54-4953-8b6a-08d8ccfc0ad5
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT018.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB3826

Hello Julien,

> On 8 Feb 2021, at 6:49 pm, Julien Grall <julien@xen.org> wrote:
> 
> 
> 
> On 08/02/2021 18:19, Rahul Singh wrote:
>> Hello Julien,
> 
> Hi Rahul,
> 
>>> On 8 Feb 2021, at 6:11 pm, Julien Grall <julien@xen.org> wrote:
>>> 
>>> 
>>> 
>>> On 08/02/2021 18:06, Rahul Singh wrote:
>>>>> On 6 Feb 2021, at 12:38 am, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>>>> 
>>>>> Commit 91d4eca7add broke gnttab_need_iommu_mapping on ARM.
>>>>> The offending chunk is:
>>>>> 
>>>>> #define gnttab_need_iommu_mapping(d)                    \
>>>>> -    (is_domain_direct_mapped(d) && need_iommu(d))
>>>>> +    (is_domain_direct_mapped(d) && need_iommu_pt_sync(d))
>>>>> 
>>>>> On ARM we need gnttab_need_iommu_mapping to be true for dom0 when it is
>>>>> directly mapped, like the old check did, but the new check is always
>>>>> false.
>>>>> 
>>>>> In fact, need_iommu_pt_sync is defined as dom_iommu(d)->need_sync and
>>>>> need_sync is set as:
>>>>> 
>>>>>    if ( !is_hardware_domain(d) || iommu_hwdom_strict )
>>>>>        hd->need_sync = !iommu_use_hap_pt(d);
>>>>> 
>>>>> iommu_hwdom_strict is actually supposed to be ignored on ARM, see the
>>>>> definition in docs/misc/xen-command-line.pandoc:
>>>>> 
>>>>>    This option is hardwired to true for x86 PVH dom0's (as RAM belonging to
>>>>>    other domains in the system don't live in a compatible address space), and
>>>>>    is ignored for ARM.
>>>>> 
>>>>> But aside from that, the issue is that iommu_use_hap_pt(d) is true,
>>>>> hence, hd->need_sync is false, and gnttab_need_iommu_mapping(d) is false
>>>>> too.
>>>>> 
>>>>> As a consequence, when using PV network from a domU on a system where
>>>>> IOMMU is on from Dom0, I get:
>>>>> 
>>>>>
IChYRU4pIHNtbXU6IC9zbW11QGZkODAwMDAwOiBVbmhhbmRsZWQgY29udGV4dCBmYXVsdDogZnNy
PTB4NDAyLCBpb3ZhPTB4ODQyNGNiMTQ4LCBmc3lucj0weGIwMDAxLCBjYj0wDQo+Pj4+PiBbICAg
NjguMjkwMzA3XSBtYWNiIGZmMGUwMDAwLmV0aGVybmV0IGV0aDA6IERNQSBidXMgZXJyb3I6IEhS
RVNQIG5vdCBPSw0KPj4+PiBJIGFsc28gb2JzZXJ2ZWQgdGhlIElPTU1VIGZhdWx0IHdoZW4gRE9N
VSBndWVzdCBpcyBjcmVhdGVkIGFuZCBncmVhdCB0YWJsZSBpcyB1c2VkIHdoZW4gSU9NTVUgaXMg
ZW5hYmxlZC4gSSBmaXhlZCB0aGUgZXJyb3IgaW4gZGlmZmVyZW50IHdheSBidXQgSSBhbSBub3Qg
c3VyZSBpZiB5b3UgYWxzbyBvYnNlcnZpbmcgdGhlIHNhbWUgZXJyb3IuIEkgc3VibWl0dGVkIHRo
ZSBwYXRjaCB0byBwY2ktcGFzc3Rocm91Z2ggaW50ZWdyYXRpb24gYnJhbmNoLiBQbGVhc2UgaGF2
ZSBhIGxvb2sgb25jZSBpZiB0aGF0IG1ha2Ugc2Vuc2UuDQo+Pj4gDQo+Pj4gSSBiZWxpdmUgdGhp
cyBpcyB0aGUgc2FtZSBlcnJvciBhcyBTdGVmYW5vIGhhcyBvYnNlcnZlZC4gSG93ZXZlciwgeW91
ciBwYXRjaCB3aWxsIHVuZm9ydHVuYXRlbHkgbm90IHdvcmsgaWYgeW91IGhhdmUgYSBzeXN0ZW0g
d2l0aCBhIG1peCBvZiBwcm90ZWN0ZWQgYW5kIG5vbi1wcm90ZWN0ZWQgRE1BLWNhcGFibGUgZGV2
aWNlcy4NCj4+IFllcyB5b3UgYXJlIHJpZ2h0IHRoYXRzIHdoYXQgSSB0aG91Z2ggd2hlbiBJIGZp
eGVkIHRoZSBlcnJvciBidXQgdGhlbiBJIHRob3VnaHQgaW4gZGlmZmVyZW50IGRpcmVjdGlvbiBp
ZiBJT01NVSBpcyBlbmFibGVkIHN5c3RlbSB3aXNlIGV2ZXJ5IGRldmljZSBzaG91bGQgYmUgcHJv
dGVjdGVkIGJ5IElPTU1VLg0KPiBJIGFtIG5vdCBhd2FyZSBvZiBhbnkgcnVsZSBwcmV2ZW50aW5n
IGEgbWl4IG9mIHByb3RlY3RlZCBhbmQgdW5wcm90ZWN0ZWQgRE1BLWNhcGFibGUgZGV2aWNlcy4N
Cj4gDQo+IEhvd2V2ZXIsIGV2ZW4gaWYgdGhleSBhcmUgYWxsIHByb3RlY3RlZCBieSBhbiBJT01N
VSwgc29tZSBvZiB0aGUgSU9NTVVzIG1heSBoYXZlIGJlZW4gZGlzYWJsZWQgYnkgdGhlIGZpcm13
YXJlIHRhYmxlcyBmb3IgdmFyaW91cyByZWFzb25zIChlLmcuIHBlcmZvcm1hbmNlLCBidWdneSBT
TU1VLi4uKS4gRm9yIGluc3RhbmNlLCB0aGlzIGlzIHRoZSBjYXNlIG9uIEp1bm8gd2hlcmUgMiBv
dXQgb2YgMyBTTU1VcyBhcmUgZGlzYWJsZWQgaW4gdGhlIExpbnV4IHVwc3RyZWFtIERULg0KPiAN
Cj4gQXMgd2UgZG9uJ3Qga25vdyB3aGljaCBkZXZpY2Ugd2lsbCB1c2UgdGhlIGdyYW50IGZvciBE
TUEsIHdlIGFsd2F5cyBuZWVkIHRvIHJldHVybiB0aGUgbWFjaGluZSBwaHlzaWNhbCBhZGRyZXNz
Lg0KDQpUaGFua3MgZm9yIHRoZSBpbmZvcm1hdGlvbiBmb3IgY2xlYXJpbmcgbXkgZG91YnRzLiAN
Cg0KTm93IEkgdW5kZXJzdGFuZCB0aGF0IHdlIG5lZWQgdG8gcmV0dXJuIHRoZSBtYWNoaW5lIHBo
eXNpY2FsIGFkZHJlc3MuIEkgZml4ZWQgdGhlIGlzc3VlIHdoZW4gdGhlcmUgaXMgbm8gSU9NTVUg
bWFwcGluZyBjYWxsIGZvciBncmFudCBwYWdlcy4gSSB0aG91Z2h0IGlmIHBhZ2UgdGFibGVzIGlz
IG5vdCBzaGFyZWQgYmV0d2VlbiBJT01NVSBhbmQgQ1BVLCBpbiB0aGF0IHNjZW5hcmlvIG9ubHkg
d2UgY2FuIGFkZCB0aGUgbWFwcGluZyBmb3IgdGhlIGdyYW50IHRhYmxlIGluIElPTU1VIHBhZ2Ug
dGFibGVzIGJ5IGNhbGxpbmcNCmlvbW11X21hcC9pb21tdV91bm1hcCBmdW5jdGlvbnMuIFRoYXTi
gJlzIHdoeSBJIGZpeGVkIHRoZSBpc3N1ZSB0byByZXR1cm4gSVBBIGFzIHRoZXJlIGlzIG5vICht
Zm4gLT4gbWZuKSBtYXBwaW5nIGluIElPTU1VIHBhZ2UgdGFibGUgZm9yIERNQS4gQWZ0ZXIgdGhp
cyBwYXRjaCB3ZSBkb27igJl0IG5lZWQgdG8gcmV0dXJuIHRoZSBJUEEgYXMgKG1mbiAtPiBtZm4p
IG1hcHBpbmcgd2lsbCBiZSBwcmVzZW50IGluIFAyTS4NCg0KUmVnYXJkcywNClJhaHVsDQoNCj4g
DQo+IENoZWVycywNCj4gDQo+IC0tIA0KPiBKdWxpZW4gR3JhbGwNCg0K


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 13:13:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 13:13:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83242.154404 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Spp-00024y-Bx; Tue, 09 Feb 2021 13:13:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83242.154404; Tue, 09 Feb 2021 13:13:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Spp-00024r-8r; Tue, 09 Feb 2021 13:13:41 +0000
Received: by outflank-mailman (input) for mailman id 83242;
 Tue, 09 Feb 2021 13:13:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gqd1=HL=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1l9Spn-00024m-OF
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 13:13:39 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:7d00::627])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3e7a8857-c6a0-4045-a02a-5ea5146a8e0a;
 Tue, 09 Feb 2021 13:13:37 +0000 (UTC)
Received: from AS8PR04CA0054.eurprd04.prod.outlook.com (2603:10a6:20b:312::29)
 by DB8PR08MB4106.eurprd08.prod.outlook.com (2603:10a6:10:b2::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3825.17; Tue, 9 Feb
 2021 13:13:33 +0000
Received: from VE1EUR03FT036.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:312:cafe::90) by AS8PR04CA0054.outlook.office365.com
 (2603:10a6:20b:312::29) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3825.20 via Frontend
 Transport; Tue, 9 Feb 2021 13:13:33 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT036.mail.protection.outlook.com (10.152.19.204) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3784.11 via Frontend Transport; Tue, 9 Feb 2021 13:13:32 +0000
Received: ("Tessian outbound af289585f0f4:v71");
 Tue, 09 Feb 2021 13:13:32 +0000
Received: from ad7550db06b1.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 BACB0556-064C-4998-9A14-CF9498813E6E.1; 
 Tue, 09 Feb 2021 13:13:26 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id ad7550db06b1.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 09 Feb 2021 13:13:26 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com (2603:10a6:10:49::10)
 by DBBPR08MB4630.eurprd08.prod.outlook.com (2603:10a6:10:d6::12) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3825.29; Tue, 9 Feb
 2021 13:13:25 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::f5c1:9694:9263:d90d]) by DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::f5c1:9694:9263:d90d%2]) with mapi id 15.20.3825.030; Tue, 9 Feb 2021
 13:13:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3e7a8857-c6a0-4045-a02a-5ea5146a8e0a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=71G+RiebnK1Xhhgl789TMXhc6QVzqtAjUbtLZKHE1zE=;
 b=Horr+Gj+PC8qEvwbGfSHOvveB21n5ZcwPrXKydgS7BRp/01p8adA2qS4qJ6/XBBrrfAxC+XlOyXlW/8YWEK340qt7h4Zwf1+QFU4qHePccsapY8coDLv2Oa7x7+GreX/qyKwwxibHEOw1NwVTE6zuDU+srpDzkWpzZWEjLgZraQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: b929fcc4acef4730
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=iWZ+kg9oIXc87wHBysnldGUwBankR2SXmcY/ctsx/LCpBtBqbAPuwFZz2voh6An5wurgWPvJYqCKJ9eqegBcgjyBzgd9jyyXrYpBQHReBTsdZmH+vLhb3W/kuHyeIpvJygZCqCR4rcMpjZeYEjDcxThdSs45wyXPg0yrzMjAJpjHany4bCPAbeRWO0evusgr0BuCo0KHMEdFmmm/lQS3vzZfyEbTI+LiG+tbAyiroOz0fY3REY9eGAXkcv4zowETVWZfZDfKUErDu/WFmofXY+/40PCc5gD2Af2W3lXVIwLuai5Qk2ZDN0aiWHBhWRKBhZpbWClgg7LV2HM94uHdaQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=71G+RiebnK1Xhhgl789TMXhc6QVzqtAjUbtLZKHE1zE=;
 b=STh0aEDlCEURgosXq1YOa75KvgfVZlNnDkfUIMjN9zqm+22ZV9lNaef6rb0TLUowGUS55D6h0DXmYGE5C24ogt6+1nXigbBewG97RyaJHHH777+gMS7Aqla7PfwZdwaCfNVhvhO76OsWNvR5L8sgwoKcXgDqrxLT1Eh2iKPesfxl9JeuuV9CbPv0ef4c3GOwTafr8m1+dq6u9JbXoNyhbh7fBXWlIGXPrrM2o4hT44cE7TUIT6OO12y7ANlRbhNu3LLnI66vMVFrqBD5EmVgwYPmZnD2+JuYaV9svf16jHorz8nrLnr5ZnwLjTCUz5Y+B13HPy6hBmk/5L0hX3ihUg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=71G+RiebnK1Xhhgl789TMXhc6QVzqtAjUbtLZKHE1zE=;
 b=Horr+Gj+PC8qEvwbGfSHOvveB21n5ZcwPrXKydgS7BRp/01p8adA2qS4qJ6/XBBrrfAxC+XlOyXlW/8YWEK340qt7h4Zwf1+QFU4qHePccsapY8coDLv2Oa7x7+GreX/qyKwwxibHEOw1NwVTE6zuDU+srpDzkWpzZWEjLgZraQ=
From: Rahul Singh <Rahul.Singh@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>, "lucmiccio@gmail.com"
	<lucmiccio@gmail.com>, xen-devel <xen-devel@lists.xenproject.org>, Bertrand
 Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Stefano Stabellini
	<stefano.stabellini@xilinx.com>
Subject: Re: [PATCH v2] xen/arm: fix gnttab_need_iommu_mapping
Thread-Topic: [PATCH v2] xen/arm: fix gnttab_need_iommu_mapping
Thread-Index: AQHW/ks46k4kUfhzpk6LTniK0S/PbqpPzhoA
Date: Tue, 9 Feb 2021 13:13:25 +0000
Message-ID: <B96B5E21-0600-4664-899D-D38A18DE7A8C@arm.com>
References: <20210208184932.23468-1-sstabellini@kernel.org>
In-Reply-To: <20210208184932.23468-1-sstabellini@kernel.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [80.1.41.211]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 49d893ab-6a76-4296-d8ab-08d8ccfc8049
x-ms-traffictypediagnostic: DBBPR08MB4630:|DB8PR08MB4106:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<DB8PR08MB410618D944C4DCE821EF6FBCFC8E9@DB8PR08MB4106.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:8882;OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 RADADNcrAGVO54aWbo5a3Tx+poYh6qFOpvHAZox80jT1MO+1Mcl5RT1njwUfijlhC3B/C219FnL++nnAWWjHXC2YrMYqVnihtJozD+nF8AolscUZ7/CVAt8JaMb3H8dplVn3VP86PLwip9YHzMey/1cih5lAKDg9pcF8ADx/WGO7ApgzdqVxNtMlBywO98hmq7Xt2ivLXnQITZ7nx/Vr78Zs8zU/5yrUGWVBMCliFfOI0MURQIWL2NgmOAIQKKmud9NNXkTCFnRPhiPkPyfXIdwt6y/13vyDiwwlX4XMhZ9Ma5/6HrP6H8kCLlcVfZFrtnxmVQl4yzpkdvHComxgKFFwIK2NrlbLdEyZuRiEpr9ZCXNdoz+6VdlUWnEGbHoIZLe7Smt9KjlUadEbau3Hxd2QMTtx2IabtfPjIQX4FhwAWxoPW6cnf+8zGcklaZ+5A17h4W/nd97zir2GOlTgTdPH0nW1yFQGxA+kWLyCtH7Wx/wzESoUPqwbY+cuu/fhKFPp0HXBm/r8okeQ77nS1bslleqSc6PJFJCoZSpYs4pnpAM2D03tF07+Y3iyOaw5PLJ/w5L1MahjytT7OW5ylW4tiMCaWbBcork2h/znDCpPLyCVgAeRSHCnO9l93zEdCYdu7K+WArpM8PjV4ez6llnQpCnDezMIcRpzhX/6Y5k=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3500.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(396003)(346002)(366004)(376002)(39860400002)(5660300002)(966005)(478600001)(2906002)(316002)(33656002)(26005)(6512007)(45080400002)(91956017)(54906003)(8936002)(186003)(4326008)(53546011)(6916009)(71200400001)(64756008)(66556008)(66476007)(8676002)(6506007)(66446008)(2616005)(76116006)(6486002)(83380400001)(86362001)(66946007)(36756003)(45980500001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 =?utf-8?B?ZGltOEdRZUtMVGVlbExJYndjalhXNSsyczVad2l0NHVlcENTVFpHdEZGa2Rh?=
 =?utf-8?B?bVRadWV5Z3l2Z3JFVHVpM1I5WVNxWlBzVUpBR21EYVFUK3VZUzkvVFd3MlBQ?=
 =?utf-8?B?L3M0M1dqTFZUaFZlNUQ5ZTJjWjM1Y0hTSWtGMDNZS2JaMDlvM2lUbzh0enhZ?=
 =?utf-8?B?Vlp5K1ltQ25TRkY4aTRyVnp2U0x4cnhKWXRuTTJkY1k2eC9MbmZPSVpvMktZ?=
 =?utf-8?B?Z204dERMZzZEc2ZZUy9Eam4ycm1iTTRBYXNteTRod0ZKYnBwODA5QVVSZGZD?=
 =?utf-8?B?ZExXMGZua2RVeDdkbGZ3dHNZUlR3bjMzemp1MHdZNmorRXpRdmJUcVRJaG9i?=
 =?utf-8?B?OUhZVlowRk1MRWtPNlNwUjFtOERkR0NuWWh1L292NmpKQ1lNd1pSMDNQVHhq?=
 =?utf-8?B?TzNJd3RBRGNsa21rajJNVmdNUW1mVzc1VDd6L0s3VnNLUU8xMXBzOVl6NzZz?=
 =?utf-8?B?dStxeWQ5WkZPeldQeUtEY2Z3SVhuRVJvWDAvQ0FnclJZamY4ZXBpaHl5dkJo?=
 =?utf-8?B?VDEwOElWamdkMUlQV05yaG8yTmNSYzE0dUtxN1h5WG52dlhUdEFrRGZ3QnlI?=
 =?utf-8?B?TFZpYUpvU1hqVVM0UUsyQnNNMWRRejRXMXJBU2UrbCtWeGtiMllTS3YzcjNV?=
 =?utf-8?B?OXlHQnVJcWovMDhXOUwzQU9ESWpGcFJsRGJKYzgrMytxRFpDMzZvMUdmMkpC?=
 =?utf-8?B?MGhnYUFaSUdRK0JTK25rK1BnVllOTHQ0VVh3QmErZkdteEpvc3kvbFcvNjFP?=
 =?utf-8?B?eEdwYnBZRGhqRFZrUmJYanl0N2tnQ1Vva0JzRXZMR3lISTJ5MnFZTCt4bDZW?=
 =?utf-8?B?a21TUmxBeDAweUN6ZCtpZGh5c0I1M1Y2ZGw5cFNXLzBoRXEyZE9DbWcyMlpD?=
 =?utf-8?B?TXlzbWtMMjRycUZCQUQ5bStzUGJxMUErNWc0UEs4REQvSy9rWFpLSDlLSmJp?=
 =?utf-8?B?REZJT2VBbUdtVTFxNngwdTlpV1c3R0NHSnVoWkgxbDNyakhHYUt1NGlnQXBt?=
 =?utf-8?B?eDlST3BkL0NJYVgyNHJncUwvMW5abUd4MXlRQitoalF3REZtbDUycW0vaWw1?=
 =?utf-8?B?UktuUGFLdmllQVRjeGRKSWI4S3dpRGlxcWtFZU1DSEJDbGw5VFVIOG5uSTc0?=
 =?utf-8?B?VXpQSnpUZVZTMjJpcllMY3BRbGVNb3dxV1RKeVVKQlJna3REYThSb0RWUGZC?=
 =?utf-8?B?UEFsaUxnOGphRTdVT3dyWE51ZW9kK1dxT0Y1TW53WFJrUkFjWXdvYkpZbXZw?=
 =?utf-8?B?eXhnQkIxbDV3WnFqN0dkR3Q2Uy9TNC94bUgvMHBGQndWY2EyWWZhRlhGOHFE?=
 =?utf-8?B?azNzaFpiU2lveVdSRlZqMUhWclVLOGFuOVAvbkhvb3dmdnlaWjNmTWxyY0hT?=
 =?utf-8?B?K3ZETnNnSXl3UG9velRTd2JJOW5LZUM3cnpzWmM5a2xZaDY0QzdjMnNDVmdB?=
 =?utf-8?B?NWhrNi8wcnpmeGx1WGxBMStLQVRPY055WHpnSjluT2VQRXE0QXdyb1dKclIr?=
 =?utf-8?B?TE5BTnY5UUdXQk9wSjI5eUhGTXBBcU45Z05BdmNhaHpKTkVzMUo5Y0FGUmEv?=
 =?utf-8?B?V2N0QUQwcnB5SkZORGFYN0RST3cxaU1mb3JEL0VVaElGMVl6U3BBWE5hbDVN?=
 =?utf-8?B?T21Pb1JmbXJ0TnJic2JqWDJVT2hnbGdQeXdhMFFFMEc1Wk4xY1laazNFbXBL?=
 =?utf-8?B?YlU1S0FOUy91SVozd0U3SmtrSURLZ3BhQWhWU25UdWxyellPelF5SUkyQWVV?=
 =?utf-8?Q?psu+1DxNSsg4TaTnLqI5UKQTaCks5Uhp5i86X/Q?=
Content-Type: text/plain; charset="utf-8"
Content-ID: <D0C44DB169897841ADB2F53C0F6330A6@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB4630
Original-Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT036.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	6c57f6a1-4354-4395-1c3a-08d8ccfc7bd3
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	OYxcQrqm8c/jpExYq9DeBcdDAESvnWXCXPFSaOtLTwBt2AVZT7HNheky6JdbH/MS4n4FizYfI6OdDfQ0/fw9VMb6q6/ZEHjOdvRtIUbZwlA17FILnLCrU8kW10oBYZIpqpPbZRG6yc+REsimnntJ54xT4bo0JFFJ/QAyVsZeLVQgDUgB/rDtPMiuvK5PaoLBiwoi00th81b8ngk/bHr/CVu8wdU5DB8BqG5MHDdp2//QjNuXQnhQpIa1m69StlnU5+q8m0ZBFU1FO+bKY8ALCPlzQDVIj80PI2Caq88a9UeOZ7lyeiWg1yjRe0JrzuvbckISh57VshxrxaUe4oQr1VFFvbUSPDDdhmBGGxwlGxgv0yB5AWuzKDCa4b1QWpApzkq5+sNGYIoEWK6DpPH4ra+4B4X1wqBOWN6bPYDI8B9QWSkcr8bBwSmY7vdzPB31oDq6mFTUVDasAW97yinCQBksbXJJFOYcpu0inl83QOl/bc6pLi2hmE4VaPP3U7n/0uwKhauzEkrTWSAAmvSxb90r2LSHsGYdo4U2E8mMKCcfHpvuzwKx2313frKAuEnBC0HMi/WPUg4pvRrHQndNA9lxlsROqo/bv3Ww8HAXJCF7hYbX5XUMfmfCyS54dyu2PMrU/iIxV8jvBIKHXDdbfweGeswXEtbzgopcl+xmVIIRRUdqoqxt9jZHZtAbhIaOkBMPkRDsjtDcyyEvs1RGs4vxQNYuFN/XiDQo1KE9SBk=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(376002)(39850400004)(396003)(346002)(136003)(36840700001)(46966006)(36860700001)(54906003)(316002)(81166007)(356005)(47076005)(336012)(2906002)(82740400003)(83380400001)(86362001)(966005)(6486002)(6862004)(2616005)(26005)(186003)(4326008)(107886003)(8936002)(478600001)(45080400002)(70206006)(53546011)(6512007)(6506007)(70586007)(8676002)(5660300002)(36756003)(82310400003)(33656002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Feb 2021 13:13:32.7404
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 49d893ab-6a76-4296-d8ab-08d8ccfc8049
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT036.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB4106

SGVsbG8gU3RlZmFubywNCg0KPiBPbiA4IEZlYiAyMDIxLCBhdCA2OjQ5IHBtLCBTdGVmYW5vIFN0
YWJlbGxpbmkgPHNzdGFiZWxsaW5pQGtlcm5lbC5vcmc+IHdyb3RlOg0KPiANCj4gQ29tbWl0IDkx
ZDRlY2E3YWRkIGJyb2tlIGdudHRhYl9uZWVkX2lvbW11X21hcHBpbmcgb24gQVJNLg0KPiBUaGUg
b2ZmZW5kaW5nIGNodW5rIGlzOg0KPiANCj4gI2RlZmluZSBnbnR0YWJfbmVlZF9pb21tdV9tYXBw
aW5nKGQpICAgICAgICAgICAgICAgICAgICBcDQo+IC0gICAgKGlzX2RvbWFpbl9kaXJlY3RfbWFw
cGVkKGQpICYmIG5lZWRfaW9tbXUoZCkpDQo+ICsgICAgKGlzX2RvbWFpbl9kaXJlY3RfbWFwcGVk
KGQpICYmIG5lZWRfaW9tbXVfcHRfc3luYyhkKSkNCj4gDQo+IE9uIEFSTSB3ZSBuZWVkIGdudHRh
Yl9uZWVkX2lvbW11X21hcHBpbmcgdG8gYmUgdHJ1ZSBmb3IgZG9tMCB3aGVuIGl0IGlzDQo+IGRp
cmVjdGx5IG1hcHBlZCBhbmQgSU9NTVUgaXMgZW5hYmxlZCBmb3IgdGhlIGRvbWFpbiwgbGlrZSB0
aGUgb2xkIGNoZWNrDQo+IGRpZCwgYnV0IHRoZSBuZXcgY2hlY2sgaXMgYWx3YXlzIGZhbHNlLg0K
PiANCj4gSW4gZmFjdCwgbmVlZF9pb21tdV9wdF9zeW5jIGlzIGRlZmluZWQgYXMgZG9tX2lvbW11
KGQpLT5uZWVkX3N5bmMgYW5kDQo+IG5lZWRfc3luYyBpcyBzZXQgYXM6DQo+IA0KPiAgICBpZiAo
ICFpc19oYXJkd2FyZV9kb21haW4oZCkgfHwgaW9tbXVfaHdkb21fc3RyaWN0ICkNCj4gICAgICAg
IGhkLT5uZWVkX3N5bmMgPSAhaW9tbXVfdXNlX2hhcF9wdChkKTsNCj4gDQo+IGlvbW11X3VzZV9o
YXBfcHQoZCkgbWVhbnMgdGhhdCB0aGUgcGFnZS10YWJsZSB1c2VkIGJ5IHRoZSBJT01NVSBpcyB0
aGUNCj4gUDJNLiBJdCBpcyB0cnVlIG9uIEFSTS4gbmVlZF9zeW5jIG1lYW5zIHRoYXQgeW91IGhh
dmUgYSBzZXBhcmF0ZSBJT01NVQ0KPiBwYWdlLXRhYmxlIGFuZCBpdCBuZWVkcyB0byBiZSB1cGRh
dGVkIGZvciBldmVyeSBjaGFuZ2UuIG5lZWRfc3luYyBpcyBzZXQNCj4gdG8gZmFsc2Ugb24gQVJN
LiBIZW5jZSwgZ250dGFiX25lZWRfaW9tbXVfbWFwcGluZyhkKSBpcyBmYWxzZSB0b28sDQo+IHdo
aWNoIGlzIHdyb25nLg0KPiANCj4gQXMgYSBjb25zZXF1ZW5jZSwgd2hlbiB1c2luZyBQViBuZXR3
b3JrIGZyb20gYSBkb21VIG9uIGEgc3lzdGVtIHdoZXJlDQo+IElPTU1VIGlzIG9uIGZyb20gRG9t
MCwgSSBnZXQ6DQo+IA0KPiAoWEVOKSBzbW11OiAvc21tdUBmZDgwMDAwMDogVW5oYW5kbGVkIGNv
bnRleHQgZmF1bHQ6IGZzcj0weDQwMiwgaW92YT0weDg0MjRjYjE0OCwgZnN5bnI9MHhiMDAwMSwg
Y2I9MA0KPiBbICAgNjguMjkwMzA3XSBtYWNiIGZmMGUwMDAwLmV0aGVybmV0IGV0aDA6IERNQSBi
dXMgZXJyb3I6IEhSRVNQIG5vdCBPSw0KPiANCj4gVGhlIGZpeCBpcyB0byBnbyBiYWNrIHRvIHNv
bWV0aGluZyBhbG9uZyB0aGUgbGluZXMgb2YgdGhlIG9sZA0KPiBpbXBsZW1lbnRhdGlvbiBvZiBn
bnR0YWJfbmVlZF9pb21tdV9tYXBwaW5nLg0KPiANCj4gU2lnbmVkLW9mZi1ieTogU3RlZmFubyBT
dGFiZWxsaW5pIDxzdGVmYW5vLnN0YWJlbGxpbmlAeGlsaW54LmNvbT4NCj4gRml4ZXM6IDkxZDRl
Y2E3YWRkDQo+IEJhY2twb3J0OiA0LjEyKw0KPiANCj4gLS0tDQo+IA0KPiBHaXZlbiB0aGUgc2V2
ZXJpdHkgb2YgdGhlIGJ1ZywgSSB3b3VsZCBsaWtlIHRvIHJlcXVlc3QgdGhpcyBwYXRjaCB0byBi
ZQ0KPiBiYWNrcG9ydGVkIHRvIDQuMTIgdG9vLCBldmVuIGlmIDQuMTIgaXMgc2VjdXJpdHktZml4
ZXMgb25seSBzaW5jZSBPY3QNCj4gMjAyMC4NCj4gDQo+IEZvciB0aGUgNC4xMiBiYWNrcG9ydCwg
d2UgY2FuIHVzZSBpb21tdV9lbmFibGVkKCkgaW5zdGVhZCBvZg0KPiBpc19pb21tdV9lbmFibGVk
KCkgaW4gdGhlIGltcGxlbWVudGF0aW9uIG9mIGdudHRhYl9uZWVkX2lvbW11X21hcHBpbmcuDQo+
IA0KPiBDaGFuZ2VzIGluIHYyOg0KPiAtIGltcHJvdmUgY29tbWl0IG1lc3NhZ2UNCj4gLSBhZGQg
aXNfaW9tbXVfZW5hYmxlZChkKSB0byB0aGUgY2hlY2sNCj4gLS0tDQo+IHhlbi9pbmNsdWRlL2Fz
bS1hcm0vZ3JhbnRfdGFibGUuaCB8IDIgKy0NCj4gMSBmaWxlIGNoYW5nZWQsIDEgaW5zZXJ0aW9u
KCspLCAxIGRlbGV0aW9uKC0pDQo+IA0KPiBkaWZmIC0tZ2l0IGEveGVuL2luY2x1ZGUvYXNtLWFy
bS9ncmFudF90YWJsZS5oIGIveGVuL2luY2x1ZGUvYXNtLWFybS9ncmFudF90YWJsZS5oDQo+IGlu
ZGV4IDZmNTg1YjE1MzguLjBjZTc3ZjlhMWMgMTAwNjQ0DQo+IC0tLSBhL3hlbi9pbmNsdWRlL2Fz
bS1hcm0vZ3JhbnRfdGFibGUuaA0KPiArKysgYi94ZW4vaW5jbHVkZS9hc20tYXJtL2dyYW50X3Rh
YmxlLmgNCj4gQEAgLTg5LDcgKzg5LDcgQEAgaW50IHJlcGxhY2VfZ3JhbnRfaG9zdF9tYXBwaW5n
KHVuc2lnbmVkIGxvbmcgZ3BhZGRyLCBtZm5fdCBtZm4sDQo+ICAgICAoKChpKSA+PSBucl9zdGF0
dXNfZnJhbWVzKHQpKSA/IElOVkFMSURfR0ZOIDogKHQpLT5hcmNoLnN0YXR1c19nZm5baV0pDQo+
IA0KPiAjZGVmaW5lIGdudHRhYl9uZWVkX2lvbW11X21hcHBpbmcoZCkgICAgICAgICAgICAgICAg
ICAgIFwNCj4gLSAgICAoaXNfZG9tYWluX2RpcmVjdF9tYXBwZWQoZCkgJiYgbmVlZF9pb21tdV9w
dF9zeW5jKGQpKQ0KPiArICAgIChpc19kb21haW5fZGlyZWN0X21hcHBlZChkKSAmJiBpc19pb21t
dV9lbmFibGVkKGQpKQ0KPiANCj4gI2VuZGlmIC8qIF9fQVNNX0dSQU5UX1RBQkxFX0hfXyAqLw0K
DQpJIHRlc3RlZCB0aGUgcGF0Y2ggYW5kIHdoaWxlIGNyZWF0aW5nIHRoZSBndWVzdCBJIG9ic2Vy
dmVkIHRoZSBiZWxvdyB3YXJuaW5nIGZyb20gTGludXggZm9yIGJsb2NrIGRldmljZS4NCmh0dHBz
Oi8vZWxpeGlyLmJvb3RsaW4uY29tL2xpbnV4L3Y0LjMvc291cmNlL2RyaXZlcnMvYmxvY2sveGVu
LWJsa2JhY2sveGVuYnVzLmMjTDI1OA0KDQpJIGRpZCBpbml0aWFsIGRlYnVnZ2luZyBhbmQgZm91
bmQgb3V0IHRoYXQgdGhlcmUgYXJlIG1hbnkgY2FsbHMgdG8gaW9tbXVfbGVnYWN5X3ssdW59bWFw
KCkgd2hpbGUgY3JlYXRpbmcgdGhlIGd1ZXN0IGJ1dCB3aGVuIGlvbW11X2xlZ2FjeV91bm1hcCgp
IGZ1bmN0aW9uIHVubWFwIHRoZSBwYWdlcyBzb21ldGhpbmcgaXMgd3JpdHRlbiAgd3JvbmcgaW4g
cGFnZSB0YWJsZXMgYmVjYXVzZSBvZiB0aGF0IHdoZW4gbmV4dCB0aW1lIHNhbWUgcGFnZSBpcyBt
YXBwZWQgdmlhIGNyZWF0ZV9ncmFudF9ob3N0X21hcHBpbmcoKSB3ZSBvYnNlcnZlZCBiZWxvdyB3
YXJuaW5nLiANCiANCg0KWyAgMTM4LjYzOTkzNF0geGVuLWJsa2JhY2s6IGJhY2tlbmQvdmJkLzAv
NTE3MTI6IHVzaW5nIDQgcXVldWVzLCBwcm90b2NvbCAxIChhcm0tYWJpKSBwZXJzaXN0ZW50IGdy
YW50cw0KKFhFTikgZ250dGFiX21hcmtfZGlydHkgbm90IGltcGxlbWVudGVkIHlldA0KWyAgMTM4
LjY1OTcwMl0geGVuLWJsa2JhY2s6IGJhY2tlbmQvdmJkLzAvNTE3MTI6IHVzaW5nIDQgcXVldWVz
LCBwcm90b2NvbCAxIChhcm0tYWJpKSBwZXJzaXN0ZW50IGdyYW50cw0KWyAgMTM4LjY2OTgyN10g
dmJkIHZiZC0wLTUxNzEyOiA5IG1hcHBpbmcgaW4gc2hhcmVkIHBhZ2UgOCBmcm9tIGRvbWFpbiAw
DQpbICAxMzguNjc2NjM2XSB2YmQgdmJkLTAtNTE3MTI6IDkgbWFwcGluZyByaW5nLXJlZiBwb3J0
IDUNClsgIDEzOC42ODIwODldIC0tLS0tLS0tLS0tLVsgY3V0IGhlcmUgXS0tLS0tLS0tLS0tLQ0K
WyAgMTM4LjY4NjYwNV0gV0FSTklORzogQ1BVOiAyIFBJRDogMzcgYXQgZHJpdmVycy9ibG9jay94
ZW4tYmxrYmFjay94ZW5idXMuYzoyOTYgeGVuX2Jsa2lmX2Rpc2Nvbm5lY3QrMHgyMGMvMHgyMzAN
ClsgIDEzOC42OTY2NjhdIE1vZHVsZXMgbGlua2VkIGluOiBicmlkZ2Ugc3RwIGxsYyBpcHY2IG5m
X2RlZnJhZ19pcHY2DQpbICAxMzguNzAyODMzXSBDUFU6IDIgUElEOiAzNyBDb21tOiB4ZW53YXRj
aCBOb3QgdGFpbnRlZCA1LjQuMC15b2N0by1zdGFuZGFyZCAjMQ0KWyAgMTM4LjcxMDAzN10gSGFy
ZHdhcmUgbmFtZTogQXJtIE5lb3ZlcnNlIE4xIFN5c3RlbSBEZXZlbG9wbWVudCBQbGF0Zm9ybSAo
RFQpDQpbICAxMzguNzE3MDY3XSBwc3RhdGU6IDgwYzAwMDA1IChOemN2IGRhaWYgK1BBTiArVUFP
KQ0KWyAgMTM4LjcyMTkyN10gcGMgOiB4ZW5fYmxraWZfZGlzY29ubmVjdCsweDIwYy8weDIzMA0K
WyAgMTM4LjcyNjcwMV0gbHIgOiB4ZW5fYmxraWZfZGlzY29ubmVjdCsweGJjLzB4MjMwDQpbICAx
MzguNzMxMzg4XSBzcCA6IGZmZmY4MDAwMTFjYjNjODANClsgIDEzOC43MzQ3NzNdIHgyOTogZmZm
ZjgwMDAxMWNiM2M4MCB4Mjg6IGZmZmYwMDAwNWI2ZGE5NDAgDQpbICAxMzguNzQwMTU2XSB4Mjc6
IDAwMDAwMDAwMDAwMDAwMDAgeDI2OiAwMDAwMDAwMDAwMDAwMDAwIA0KWyAgMTM4Ljc0NTUzNl0g
eDI1OiBmZmZmMDAwMDI5NzU1MDYwIHgyNDogMDAwMDAwMDAwMDAwMDE3MCANClsgIDEzOC43NTA5
MTldIHgyMzogZmZmZjAwMDAyOTc1NTA0MCB4MjI6IGZmZmYwMDAwNTljNzIwMDAgDQpbICAxMzgu
NzU2Mjk5XSB4MjE6IDAwMDAwMDAwMDAwMDAwMDAgeDIwOiBmZmZmMDAwMDI5NzU1MDAwIA0KWyAg
MTM4Ljc2MTY4MV0geDE5OiAwMDAwMDAwMDAwMDAwMDAxIHgxODogMDAwMDAwMDAwMDAwMDAwMCAN
ClsgIDEzOC43NjcwNjNdIHgxNzogMDAwMDAwMDAwMDAwMDAwMCB4MTY6IDAwMDAwMDAwMDAwMDAw
MDAgDQpbICAxMzguNzcyNDQ0XSB4MTU6IDAwMDAwMDAwMDAwMDAwMDAgeDE0OiAwMDAwMDAwMDAw
MDAwMDAwIA0KWyAgMTM4Ljc3NzgyNl0geDEzOiAwMDAwMDAwMDAwMDAwMDAwIHgxMjogMDAwMDAw
MDAwMDAwMDAwMCANClsgIDEzOC43ODMyMDddIHgxMTogMDAwMDAwMDAwMDAwMDAwMSB4MTA6IDAw
MDAwMDAwMDAwMDA5OTAgDQpbICAxMzguNzg4NTg5XSB4OSA6IDAwMDAwMDAwMDAwMDAwMDEgeDgg
OiAwMDAwMDAwMDAwMjEwZDAwIA0KWyAgMTM4Ljc5Mzk3MV0geDcgOiAwMDAwMDAwMDAwMDAwMDE4
IHg2IDogZmZmZjAwMDA1ZGRmNzJhMCANClsgIDEzOC43OTkzNTJdIHg1IDogZmZmZjgwMDAxMWNi
M2MyOCB4NCA6IDAwMDAwMDAwMDAwMDAwMDAgDQpbICAxMzguODA0NzM0XSB4MyA6IGZmZmYwMDAw
Mjk3NTUxMTggeDIgOiAwMDAwMDAwMDAwMDAwMDAwIA0KWyAgMTM4LjgxMDExN10geDEgOiBmZmZm
MDAwMDI5NzU1MTIwIHgwIDogMDAwMDAwMDAwMDAwMDAwMSANClsgIDEzOC44MTU0OTddIENhbGwg
dHJhY2U6DQpbICAxMzguODE4MDE1XSAgeGVuX2Jsa2lmX2Rpc2Nvbm5lY3QrMHgyMGMvMHgyMzAN
ClsgIDEzOC44MjI0NDJdICBmcm9udGVuZF9jaGFuZ2VkKzB4MWIwLzB4NTRjDQpbICAxMzguODI2
NTIzXSAgeGVuYnVzX290aGVyZW5kX2NoYW5nZWQrMHg4MC8weGIwDQpbICAxMzguODMxMDM1XSAg
ZnJvbnRlbmRfY2hhbmdlZCsweDEwLzB4MjANClsgIDEzOC44MzQ5NDFdICB4ZW53YXRjaF90aHJl
YWQrMHg4MC8weDE0NA0KWyAgMTM4LjgzODg0OV0gIGt0aHJlYWQrMHgxMTgvMHgxMjANClsgIDEz
OC44NDIxNDddICByZXRfZnJvbV9mb3JrKzB4MTAvMHgxOA0KWyAgMTM4Ljg0NTc5MV0gLS0tWyBl
bmQgdHJhY2UgZmI5ZjBhM2IzYjQ4YTU1ZiBd4oCUDQoNClJlZ2FyZHMsDQpSYWh1bA0KDQo+IC8q
DQo+IC0tIA0KPiAyLjE3LjENCj4gDQoNCg==


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 13:15:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 13:15:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83243.154416 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9SrQ-0002Cx-Nr; Tue, 09 Feb 2021 13:15:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83243.154416; Tue, 09 Feb 2021 13:15:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9SrQ-0002Cq-Kq; Tue, 09 Feb 2021 13:15:20 +0000
Received: by outflank-mailman (input) for mailman id 83243;
 Tue, 09 Feb 2021 13:15:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=BG3/=HL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l9SrP-0002Cl-Hy
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 13:15:19 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 58693e7a-4c06-4031-bdf7-c96fc95b1f18;
 Tue, 09 Feb 2021 13:15:18 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A748CAFE2;
 Tue,  9 Feb 2021 13:15:17 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 58693e7a-4c06-4031-bdf7-c96fc95b1f18
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612876517; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Nu/y5EI4TivFC/CQT7MSTKDOQzBsp2j+1XANDFk47X8=;
	b=VUO8T87PEzQgCDiAHrtiYBqe0tePaHFnbiQZpymt0Nr/OmCvMwCxYpAjRLfkxRZWuI1CIj
	O0PrsLNAgDJhbSkTEg/EQaFveZMrq+xprv2omT4IVxS47f+UmcOVt/OZzJ83sc6Em+cvDj
	biprcAQIV7pdfrGdOGDRf/B4IcX4Z5g=
Subject: Re: [PATCH 02/17] x86: split __{get,put}_user() into "guest" and
 "unsafe" variants
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
References: <4f1975a9-bdd9-f556-9db5-eb6c428f258f@suse.com>
 <13d1d621-21db-0e59-6603-2b22b6a9d180@suse.com>
 <YB1nhBeMRVGyO8Fk@Air-de-Roger>
 <d23dc40c-3b37-ade2-f987-4f79b06901b1@suse.com>
 <YB1v60CuOdhxFwNy@Air-de-Roger>
 <199d2681-9704-8804-d3c3-d8ad24fca137@suse.com>
 <YCKJLbaTzD6YF/g5@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1cf476b9-4ac1-9a12-7fdb-c898f02532f7@suse.com>
Date: Tue, 9 Feb 2021 14:15:18 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <YCKJLbaTzD6YF/g5@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 09.02.2021 14:07, Roger Pau Monné wrote:
> On Fri, Feb 05, 2021 at 05:26:33PM +0100, Jan Beulich wrote:
>> On 05.02.2021 17:18, Roger Pau Monné wrote:
>>> On Fri, Feb 05, 2021 at 05:13:22PM +0100, Jan Beulich wrote:
>>>> On 05.02.2021 16:43, Roger Pau Monné wrote:
>>>>> On Thu, Jan 14, 2021 at 04:04:11PM +0100, Jan Beulich wrote:
>>>>>> The "guest" variants are intended to work with (potentially) fully guest
>>>>>> controlled addresses, while the "unsafe" variants are not.
>>>>>
>>>>> Just to clarify, both work against user addresses, but guest variants
>>>>> need to be more careful because the guest provided address can also be
>>>>> modified?
>>>>>
>>>>> I'm trying to understand the difference between "fully guest
>>>>> controlled" and "guest controlled".
>>>>
>>>> Not exactly, no. "unsafe" means access to anything which may
>>>> fault, guest controlled or not. do_invalid_op()'s reading of
>>>> the insn stream is a good example - the faulting insn there
>>>> isn't guest controlled at all, but we still want to be careful
>>>> when trying to read these bytes, as we don't want to fully
>>>> trust %rip there.
> 
> Oh, I see. It's possible that %rip points to an unmapped address
> there, and we need to be careful when reading, even if the value of
> %rip cannot be controlled by the guest and can legitimately point to
> Xen's address space.
> 
>>> Would it make sense to treat everything as 'guest' accesses for the
>>> sake of not having this difference?
>>
>> That's what we've been doing until now. It is the purpose of
>> this change to allow the two to behave differently.
>>
>>> I think having two accessors is likely to cause confusion and could
>>> possibly lead to the wrong one being used in unexpected contexts. Does
>>> it add a too big performance penalty to always use the most
>>> restrictive one?
>>
>> The problem is the most restrictive one is going to be too
>> restrictive - we wouldn't be able to access Xen space anymore
>> e.g. from the place pointed at above as example. This is
>> because for guest accesses (but not for "unsafe" ones) we're
>> going to divert them into non-canonical space (and hence make
>> speculation impossible, as such an access would fault) if it
>> would touch Xen space.
> 
> Yes, I understand now. I think it would have been helpful (for me) to
> have the first sentence as:
> 
> The "guest" variants are intended to work with (potentially) fully guest
> controlled addresses, while the "unsafe" variants are expected to be
> used in order to access addresses not under the guest control, but
> that could trigger faults anyway (like accessing the instruction
> stream in do_invalid_op).

I can use some of this, but in particular "access addresses not
under the guest control" isn't entirely correct. The question isn't
whether there's a guest control aspect, but which part of the
address space the addresses are in. See specifically "x86/PV: use
get_unsafe() instead of copy_from_unsafe()" for two pretty good
examples. The addresses within the linear page tables are - in a
way at least - still somewhat guest controlled, but we're
deliberately accessing Xen virtual addresses there.

Jan
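The distinction Jan draws above can be sketched in C. This is a hedged illustration, not Xen's actual code: the boundary constant and the guest_access_divert() helper are hypothetical, but they show the idea of a "guest" accessor diverting Xen-space addresses into non-canonical space, so that even a speculative dereference faults, while an "unsafe" accessor dereferences the address as-is under fault recovery.

```c
#include <stdint.h>

/*
 * Hypothetical boundary of Xen's half of the x86-64 virtual address space;
 * the real layout differs, this is only for illustration.
 */
#define XEN_VIRT_SPACE_START 0xffff800000000000ULL

/*
 * Divert any address touching Xen space into non-canonical space. A guest
 * accessor would dereference the diverted address; an unsafe accessor
 * would use the original address directly (relying on fault fixup only).
 */
static uint64_t guest_access_divert(uint64_t addr)
{
    /* All-ones when addr falls into Xen space, zero otherwise; computed
     * branchlessly so the transformation itself cannot be mispredicted. */
    uint64_t in_xen = 0 - (uint64_t)(addr >= XEN_VIRT_SPACE_START);

    /* Flipping bit 62 of a canonical high-half address yields a
     * non-canonical one on x86-64, so a (speculative) dereference faults;
     * guest-half addresses pass through unchanged. */
    return addr ^ (in_xen & (1ULL << 62));
}
```

Guest-half addresses come back unchanged, while a high-half address such as 0xffff800000001000 becomes non-canonical, which is why the "unsafe" variants (which may legitimately target Xen virtual addresses) cannot share this path.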


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 13:42:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 13:42:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83250.154434 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9THM-00051Q-Uw; Tue, 09 Feb 2021 13:42:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83250.154434; Tue, 09 Feb 2021 13:42:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9THM-00051J-RM; Tue, 09 Feb 2021 13:42:08 +0000
Received: by outflank-mailman (input) for mailman id 83250;
 Tue, 09 Feb 2021 13:42:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M+9J=HL=amazon.de=prvs=667ab14d4=nmanthey@srs-us1.protection.inumbo.net>)
 id 1l9THL-00051E-Dx
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 13:42:07 +0000
Received: from smtp-fw-6002.amazon.com (unknown [52.95.49.90])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ea08302a-9a50-40ea-8eb0-23c555be7103;
 Tue, 09 Feb 2021 13:42:06 +0000 (UTC)
Received: from iad12-co-svc-p1-lb1-vlan2.amazon.com (HELO
 email-inbound-relay-1d-74cf8b49.us-east-1.amazon.com) ([10.43.8.2])
 by smtp-border-fw-out-6002.iad6.amazon.com with ESMTP;
 09 Feb 2021 13:41:59 +0000
Received: from EX13D02EUB001.ant.amazon.com
 (iad12-ws-svc-p26-lb9-vlan2.iad.amazon.com [10.40.163.34])
 by email-inbound-relay-1d-74cf8b49.us-east-1.amazon.com (Postfix) with ESMTPS
 id 9E774C0154; Tue,  9 Feb 2021 13:41:57 +0000 (UTC)
Received: from u6fc700a6f3c650.ant.amazon.com (10.43.160.207) by
 EX13D02EUB001.ant.amazon.com (10.43.166.150) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Tue, 9 Feb 2021 13:41:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ea08302a-9a50-40ea-8eb0-23c555be7103
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.de; i=@amazon.de; q=dns/txt; s=amazon201209;
  t=1612878126; x=1644414126;
  h=to:cc:references:from:message-id:date:mime-version:
   in-reply-to:content-transfer-encoding:subject;
  bh=4nK5w3MJVVynroNoPKfhgIY8pLUf9kXkXRYfO4rxfqE=;
  b=qsyFpRfDlvFa4GMiErJecFm2tsIBcwXC4GURqKJzOJfIEzbvChcsTb8e
   0uSwPwWZ7rW1B3z+tREUAvijEZC5fFTZNMzrp7dkQGWqknmU5Ac09ir6v
   9jBgdowTFSlcZwo8cLQ+vVPAxsYk992IThigkf/aPBGdHGoK8WoH4ZWqO
   c=;
X-IronPort-AV: E=Sophos;i="5.81,165,1610409600"; 
   d="scan'208";a="83572981"
Subject: Re: [PATCH HVM v2 1/1] hvm: refactor set param
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Ian Jackson <iwj@xenproject.org>,
	<xen-devel@lists.xenproject.org>
References: <20210205203905.8824-1-nmanthey@amazon.de>
 <edf1cd78-2192-2679-9a34-804c5d7b75c5@suse.com>
 <ba146cd6-fd5a-78d8-40bc-59885d265f5f@amazon.de>
 <b8529792-3d99-2e0d-7ebe-31c2c406145f@suse.com>
From: Norbert Manthey <nmanthey@amazon.de>
Autocrypt: addr=nmanthey@amazon.de; prefer-encrypt=mutual; keydata=
 xsFNBFoJQc0BEADM8Z7hB7AnW6ErbSMsYkKh4HLAPfoM+wt7Fd7axHurcOgFJEBOY2gz0isR
 /EDiGxYyTgxt5PZHJIfra0OqXRbWuLltbjhJACbu35eaAo8UM4/awgtYx3O1UCbIlvHGsYDg
 kXjF8bBrVjPu0+g55XizX6ot/YPAgmWTdH8qXoLYVZVWJilKlTqpYEVvarSn/BVgCbIsQIps
 K93sOTN9eJKDSqHvbkgKl9XG3WsZ703431egIpIZpfN0zZtzumdZONb7LiodcFHJ717vvd89
 3Hv2bYv8QLSfYsZcSnyU0NVzbPhb1WtaduwXwNmnX1qHJuExzr8EnRT1pyhVSqouxt+xkKbV
 QD9r+cWLChumg3g9bDLzyrOTlEfAUNxIqbzSt03CRR43dWgfgGiLDcrqC2b1QR886WDpz4ok
 xX3fdLaqN492s/3c59qCGNG30ebAj8AbV+v551rsfEba+IWTvvoQnbstc6vKJCc2uG8rom5o
 eHG/bP1Ug2ht6m/0uWRyFq9C27fpU9+FDhb0ZsT4UwOCbthe35/wBZUg72zDpT/h5lm64G6C
 0TRqYRgYcltlP705BJafsymmAXOZ1nTCuXnYAB9G9LzZcKKq5q0rP0kp7KRDbniirCUfp7jK
 VpPCOUEc3tS1RdCCSeWNuVgzLnJdR8W2h9StuEbb7hW4aFhwRQARAQABzSROb3JiZXJ0IE1h
 bnRoZXkgPG5tYW50aGV5QGFtYXpvbi5kZT7CwX0EEwEIACcFAloJQc0CGyMFCQlmAYAFCwkI
 BwIGFQgJCgsCBBYCAwECHgECF4AACgkQZ+8yS8zN62ajmQ/6AlChoY5UlnUaH/jgcabyAfUC
 XayHgCcpL1SoMKvc2rCA8PF0fza3Ep2Sw0idLqC/LyAYbI6gMYavSZsLcsvY6KYAZKeaEriG
 7R6cSdrbmRcKpPjwvv4iY6G0DBTeaqfNjGe1ECY8u522LprDQVquysJIf3YaEyxoK/cLSb0c
 kjzpqI1P9Vh+8BQb5H9gWpakbhFIwbRGHdAF1roT7tezmEshFS0IURJ2ZFEI+ZgWgtl1MBwN
 sBt65im7x5gDo25h8A5xC9gLXTc4j3tk+3huaZjUJ9mCbtI12djVtspjNvDyUPQ5Mxw2Jwar
 C3/ZC+Nkb+VlymmErpnEUZNltcq8gsdYND4TlNbZ2JhD0ibiYFQPkyuCVUiVtimXfh6po9Yt
 OkE0DIgEngxMYfTTx01Zf6iwrbi49eHd/eQQw3zG5nn+yZsEG8UcP1SCrUma8p93KiKOedoL
 n43kTg4RscdZMjj4v6JkISBcGTR4uotMYP4M0zwjklnFXPmrZ6/E5huzUpH9B7ZIe/SUu8Ur
 xww/4dN6rfqbNzMxmya8VGlEQZgUMWcck+cPrRLB09ZOk4zq9i/yaHDEZA1HNOfQ9UCevXV5
 7seXSX7PCY6WDAdsT3+FuaoQ7UoWN3rdpb+064QKZ0FsHeGzUd7MZtlgU4EKrh25mTSNZYRs
 nTz2zT/J33fOwU0EWglBzQEQAKioD1gSELj3Y47NE11oPkzWWdxKZdVr8B8VMu6nVAAGFRSf
 Dms4ZmwGY27skMmOH2srnZyTfm9FaTKr8RI+71Fh9nfB9PMmwzA7OIY9nD73/HqPywzTTleG
 MlALmnuY6xFRSDmqmvxDHgWyzB4TgPWt8+hW3+TJKCx2RgLAdSuULZla4lia+NlS8WNRUDGK
 sFJCCB3BW5I/cocfpBEUqLbbmnPuD9UKpEnFcYWD9YaDNcBTjSc7iDsvtpdrBXg5VETOz/TQ
 /CmVs9h/5zug8O4bXxHEEJpCAxs4cGKxowBqx/XJfkwdWeo/LdaeR+LRbXvq4A32HSkyj9sV
 vygwt2OFEk493JGik8qtAA/oPvuqVPJGacxmZ7zKR12c0mnKCHiexFJzFbC7MSiUhhe8nNiM
 p6Sl6EZmsTUXhV2bd2M12Bqcss3TTJ1AcW04T4HYHVCSxwl0dVfcf3TIaH0BSPiwFxz0FjMk
 10umoRvUhYYoYpPFCz8dujXBlfB8q2tnHltEfoi/EIptt1BMNzTYkHKArj8Fwjf6K+nQ3a8p
 1cWfkYpA5bRqbhbplzpa0u1Ex0hZk6pka0qcVgqmH31O2OcSsqeKfUfHkzj3Q6dmuwm1je/f
 HWH9N1gDPEp1RB5bIxPnOG1Z4SNl9oVQJhc4qoJiqbvkciivYcH7u2CBkboFABEBAAHCwWUE
 GAEIAA8FAloJQc0CGwwFCQlmAYAACgkQZ+8yS8zN62YU9Q//WTnN28aBX1EhDidVho80Ql2b
 tV1cDRh/vWTcM4qoM8vzW4+F/Ive6wDVAJ7zwAv8F8WPzy+acxtHLkyYk14M6VZ1eSy0kV0+
 RZQdQ+nPtlb1MoDKw2N5zhvs8A+WD8xjDIA9i21hQ/BNILUBINuYKyR19448/41szmYIEhuJ
 R2fHoLzNdXNKWQnN3/NPTuvpjcrkXKJm2k32qfiys9KBcZX8/GpuMCc9hMuymzOr+YlXo4z4
 1xarEJoPOQOXnrmxN4Y30/qmf70KHLZ0GQccIm/o/XSOvNGluaYv0ZVJXHoCnYvTbi0eYvz5
 OfOcndqLOfboq9kVHC6Yye1DLNGjIVoShJGSsphxOx2ryGjHwhzqDrLiRkV82gh6dUHKxBWd
 DXfirT8a4Gz/tY9PMxan67aSxQ5ONpXe7g7FrfrAMe91XRTf50G3rHb8+AqZfxZJFrBn+06i
 p1cthq7rJSlYCqna2FedTUT+tK1hU9O0aK4ZYYcRzuTRxjd4gKAWDzJ1F/MQ12ftrfCAvs7U
 sVbXv2TndGIleMnheYv1pIrXEm0+sdz5v91l2/TmvkyyWT8s2ksuZis9luh+OubeLxHq090C
 hfavI9WxhitfYVsfo2kr3EotGG1MnW+cOkCIX68w+3ZS4nixZyJ/TBa7RcTDNr+gjbiGMtd9
 pEddsOqYwOs=
Message-ID: <9f753ee9-73c5-aa2c-3c68-eed7e0c2608b@amazon.de>
Date: Tue, 9 Feb 2021 14:41:49 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <b8529792-3d99-2e0d-7ebe-31c2c406145f@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Language: en-US
X-Originating-IP: [10.43.160.207]
X-ClientProxiedBy: EX13D16UWC001.ant.amazon.com (10.43.162.117) To
 EX13D02EUB001.ant.amazon.com (10.43.166.150)
Precedence: Bulk
Content-Transfer-Encoding: 8bit

On 2/9/21 10:40 AM, Jan Beulich wrote:
> CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.
>
>
> On 08.02.2021 20:47, Norbert Manthey wrote:
>> On 2/8/21 3:21 PM, Jan Beulich wrote:
>>> On 05.02.2021 21:39, Norbert Manthey wrote:
>>>> @@ -4108,6 +4108,13 @@ static int hvm_allow_set_param(struct domain *d,
>>>>      if ( rc )
>>>>          return rc;
>>>>
>>>> +    if ( index >= HVM_NR_PARAMS )
>>>> +        return -EINVAL;
>>>> +
>>>> +    /* Make sure we evaluate permissions before loading data of domains. */
>>>> +    block_speculation();
>>>> +
>>>> +    value = d->arch.hvm.params[index];
>>>>      switch ( index )
>>>>      {
>>>>      /* The following parameters should only be changed once. */
>>> I don't see the need for the heavier block_speculation() here;
>>> afaict array_access_nospec() should do fine. The switch() in
>>> context above as well as the switch() further down in the
>>> function don't have any speculation susceptible code.
>> The reason to block speculation instead of just using the hardened index
>> access is to not allow to speculatively load data from another domain.
> Okay, looks like I got misled by the added bounds check. Why
> do you add that, when the sole caller already has one? It'll
> suffice since you move the array access past the barrier,
> won't it?
I can drop that bound check again. This was added to make sure other
callers would be safe as well. Thinking about this a little more, the
check could actually be moved into the hvm_allow_set_param function,
above the first index access in that function. Are there good reasons to
not move the index check into the allow function?
>
>>>> @@ -4141,6 +4148,9 @@ static int hvm_set_param(struct domain *d, uint32_t index, uint64_t value)
>>>>      if ( rc )
>>>>          return rc;
>>>>
>>>> +    /* Make sure we evaluate permissions before loading data of domains. */
>>>> +    block_speculation();
>>>> +
>>>>      switch ( index )
>>>>      {
>>>>      case HVM_PARAM_CALLBACK_IRQ:
>>> Like you do for the "get" path I think this similarly renders
>>> pointless the use in hvmop_set_param() (and - see below - the
>>> same consideration wrt is_hvm_domain() applies).
>> Can you please be more specific why this is pointless? I understand that
>> the is_hvm_domain check comes with a barrier that can be used to not add
>> another barrier. However, I did not find such a barrier here, which
>> comes between the 'if (rc)' just above, and the potential next access
>> based on the value of 'index'. At least the access behind the switch
>> statement cannot be optimized and replaced with a constant value easily.
> I'm suspecting a misunderstanding (the more that further down
> you did agree to what I've said for hvmop_get_param()): I'm
> not saying your addition is pointless. Instead I'm saying that
> your addition should be accompanied by removal of the barrier
> from hvmop_set_param(), paralleling what you do to
> hvmop_get_param(). And additionally I'm saying that just like
> in hvmop_get_param() the barrier there was already previously
> redundant with that inside is_hvm_domain().

I now understand, thank you. I agree, the already existing barrier in
the hvmop_set_param function can be dropped as well. I will update the
diff accordingly, after we concluded where to put the index check.

Best,
Norbert



Amazon Development Center Germany GmbH
Krausenstr. 38
10117 Berlin
Geschaeftsfuehrung: Christian Schlaeger, Jonathan Weiss
Eingetragen am Amtsgericht Charlottenburg unter HRB 149173 B
Sitz: Berlin
Ust-ID: DE 289 237 879
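The ordering the patch under discussion establishes - permission and bounds checks first, then a speculation barrier, only then the first load of domain data - can be sketched as follows. This is a simplified stand-in, not the actual Xen code: the domain structure and error values are hypothetical, and block_speculation() is modeled as a plain compiler barrier here (in Xen it is a serializing construct, typically an lfence on x86).

```c
#include <stdint.h>

#define HVM_NR_PARAMS 64  /* stand-in value, not Xen's definition */

/* Heavily simplified stand-in for Xen's struct domain. */
struct domain {
    uint64_t params[HVM_NR_PARAMS];
    int allowed;
};

/* Stand-in for Xen's block_speculation(): a compiler barrier keeps this
 * example portable; the real one stops speculative execution entirely. */
static inline void block_speculation(void)
{
    __asm__ __volatile__("" ::: "memory");
}

static int set_param_checked(struct domain *d, uint32_t index, uint64_t *out)
{
    if ( !d->allowed )
        return -1;                 /* permission check ("rc" in the patch) */

    if ( index >= HVM_NR_PARAMS )
        return -2;                 /* -EINVAL in the patch */

    /* Make sure we evaluate permissions before loading data of domains:
     * no code past this point runs speculatively past a failed check. */
    block_speculation();

    *out = d->params[index];       /* first load of domain data */
    return 0;
}
```

The point Norbert makes is that placing the barrier between the checks and the first params[] load prevents a mispredicted check from speculatively reading another domain's data.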



From xen-devel-bounces@lists.xenproject.org Tue Feb 09 13:45:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 13:45:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83252.154446 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9TKk-0005Ah-H3; Tue, 09 Feb 2021 13:45:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83252.154446; Tue, 09 Feb 2021 13:45:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9TKk-0005Aa-Da; Tue, 09 Feb 2021 13:45:38 +0000
Received: by outflank-mailman (input) for mailman id 83252;
 Tue, 09 Feb 2021 13:45:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=BG3/=HL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l9TKj-0005AU-Ga
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 13:45:37 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b2dc625c-f936-4623-a119-3a9169a410ea;
 Tue, 09 Feb 2021 13:45:36 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5B1DCAFE2;
 Tue,  9 Feb 2021 13:45:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b2dc625c-f936-4623-a119-3a9169a410ea
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612878335; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=PhEssM9GnhMl6C59+642kKhVJoWtc/CLp6ESwW9f5jk=;
	b=Vd7tI9dF9BI/g7Yn+/JTs8Vo4upTo0YOshsZ08OVi/lpZtIWxKJM45CozEwXwiG5QWxM4i
	SvWAO2X4O3hwDRx61WD7djZy1Re67F9d0aYu9i0wqn/wWZbJn3T+8AdP4mL4kew7licfO2
	RvrHmSSDLQW4QKUFyekIOQJ+1UrpWKg=
Subject: Re: [PATCH HVM v2 1/1] hvm: refactor set param
To: Norbert Manthey <nmanthey@amazon.de>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Ian Jackson <iwj@xenproject.org>,
 xen-devel@lists.xenproject.org
References: <20210205203905.8824-1-nmanthey@amazon.de>
 <edf1cd78-2192-2679-9a34-804c5d7b75c5@suse.com>
 <ba146cd6-fd5a-78d8-40bc-59885d265f5f@amazon.de>
 <b8529792-3d99-2e0d-7ebe-31c2c406145f@suse.com>
 <9f753ee9-73c5-aa2c-3c68-eed7e0c2608b@amazon.de>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a52cb2ac-fa85-73cd-0c53-3ee002d6b3ea@suse.com>
Date: Tue, 9 Feb 2021 14:45:35 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <9f753ee9-73c5-aa2c-3c68-eed7e0c2608b@amazon.de>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 09.02.2021 14:41, Norbert Manthey wrote:
> On 2/9/21 10:40 AM, Jan Beulich wrote:
>> On 08.02.2021 20:47, Norbert Manthey wrote:
>>> On 2/8/21 3:21 PM, Jan Beulich wrote:
>>>> On 05.02.2021 21:39, Norbert Manthey wrote:
>>>>> @@ -4108,6 +4108,13 @@ static int hvm_allow_set_param(struct domain *d,
>>>>>      if ( rc )
>>>>>          return rc;
>>>>>
>>>>> +    if ( index >= HVM_NR_PARAMS )
>>>>> +        return -EINVAL;
>>>>> +
>>>>> +    /* Make sure we evaluate permissions before loading data of domains. */
>>>>> +    block_speculation();
>>>>> +
>>>>> +    value = d->arch.hvm.params[index];
>>>>>      switch ( index )
>>>>>      {
>>>>>      /* The following parameters should only be changed once. */
>>>> I don't see the need for the heavier block_speculation() here;
>>>> afaict array_access_nospec() should do fine. The switch() in
>>>> context above as well as the switch() further down in the
>>>> function don't have any speculation susceptible code.
>>> The reason to block speculation instead of just using the hardened index
>>> access is to not allow to speculatively load data from another domain.
>> Okay, looks like I got misled by the added bounds check. Why
>> do you add that, when the sole caller already has one? It'll
>> suffice since you move the array access past the barrier,
>> won't it?
> I can drop that bound check again. This was added to make sure other
> callers would be safe as well. Thinking about this a little more, the
> check could actually be moved into the hvm_allow_set_param function,
> above the first index access in that function. Are there good reasons to
> not move the index check into the allow function?

I guess I'm confused: we're talking about dropping the check
you add to hvm_allow_set_param(), and yet you suggest "moving"
it right there?

Jan
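The lighter alternative Jan refers to in the quoted exchange, array_access_nospec(), clamps the index rather than serializing execution. Below is a hedged stand-in with the same semantics; the name index_nospec and its exact computation are illustrative, not Xen's implementation:

```c
#include <stddef.h>

/*
 * Clamp an array index with a branchless mask: in-bounds indices pass
 * through unchanged, out-of-bounds ones collapse to 0. Because there is
 * no branch, a mispredicted bounds check elsewhere cannot make the
 * subsequent array access go out of bounds even speculatively.
 */
static size_t index_nospec(size_t index, size_t size)
{
    /* all-ones when index < size, zero otherwise */
    size_t mask = 0 - (size_t)(index < size);

    return index & mask;
}
```

With such a helper, the load in the patch could be written roughly as `value = d->arch.hvm.params[index_nospec(index, HVM_NR_PARAMS)];` - constraining that one load, whereas block_speculation() stops all speculation past the checks, which is why Norbert prefers it when the loaded value itself must not be fetched from another domain speculatively.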


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 13:55:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 13:55:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83256.154458 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9TUV-0006At-Cb; Tue, 09 Feb 2021 13:55:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83256.154458; Tue, 09 Feb 2021 13:55:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9TUV-0006Am-9G; Tue, 09 Feb 2021 13:55:43 +0000
Received: by outflank-mailman (input) for mailman id 83256;
 Tue, 09 Feb 2021 13:55:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=A1nq=HL=gmail.com=wei.liu.linux@srs-us1.protection.inumbo.net>)
 id 1l9TUT-0006Ah-OU
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 13:55:41 +0000
Received: from mail-wm1-f45.google.com (unknown [209.85.128.45])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d9a7ce4a-daa4-4d66-a8d9-2e576671d8ce;
 Tue, 09 Feb 2021 13:55:41 +0000 (UTC)
Received: by mail-wm1-f45.google.com with SMTP id i9so3552968wmq.1
 for <xen-devel@lists.xenproject.org>; Tue, 09 Feb 2021 05:55:41 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id 2sm20140855wre.24.2021.02.09.05.55.39
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 09 Feb 2021 05:55:39 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d9a7ce4a-daa4-4d66-a8d9-2e576671d8ce
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=XSnpm7/HVnmzb1rRaH6rhaurZgUv42UULqOJi/Beqao=;
        b=puuXZ4/U40TJLhXZkbAPZ9WxGgQQbz1YomO0h+O/apWZOhujuI9nCWfYT2bF3PTxk5
         Z60c2iJBw8qd6SKuFErSProTJwJmni+AjwLcoEfyO4cNCqT+GkmMH0GpXkOXdC1ezRTY
         FgXKL1hR88Oa9x/c0+J3x45m5+raNcQWW5zZ7BJdmpizy0R3r9muxKqSIltC9AxMXB76
         aKQDwXoh+UgvtV2EkjogWHFeIZ6r7tNNKzrNVTvjQ9IVDTOqNj75xnepUWvs6E4fQWy0
         zb6UzwP3xngN0HiyTH6Wk64V3Fu3R+gCPHgjAoRgQmNzjvQSvBt9vXAmiXB7O0a581Vx
         nPMA==
X-Gm-Message-State: AOAM531/X/TlFs3thHCOPEU7+u4vwKfyYPDt9WOsvIOvIgC2xqrL2mtg
	2GwehSLzuMK1av50R4xjJvg=
X-Google-Smtp-Source: ABdhPJwNB2giv7+CoyjHR4gxPw378n2meWhV1w/yKa2jvI06fXrZd3Z4dr3WdpEwpM4n3qwOnwgCXQ==
X-Received: by 2002:a1c:ab57:: with SMTP id u84mr3640970wme.115.1612878940148;
        Tue, 09 Feb 2021 05:55:40 -0800 (PST)
Date: Tue, 9 Feb 2021 13:55:38 +0000
From: Wei Liu <wei.liu@kernel.org>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org, netdev@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	Jens Axboe <axboe@kernel.dk>, Wei Liu <wei.liu@kernel.org>,
	Paul Durrant <paul@xen.org>,
	"David S. Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH 4/7] xen/events: link interdomain events to associated
 xenbus device
Message-ID: <20210209135538.ysr5pzxihvwxn22p@liuwe-devbox-debian-v2>
References: <20210206104932.29064-1-jgross@suse.com>
 <20210206104932.29064-5-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210206104932.29064-5-jgross@suse.com>
User-Agent: NeoMutt/20180716

On Sat, Feb 06, 2021 at 11:49:29AM +0100, Juergen Gross wrote:
> In order to support the possibility of per-device event channel
> settings (e.g. lateeoi spurious event thresholds) add a xenbus device
> pointer to struct irq_info() and modify the related event channel
> binding interfaces to take the pointer to the xenbus device as a
> parameter instead of the domain id of the other side.
> 
> While at it remove the stale prototype of bind_evtchn_to_irq_lateeoi().
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Wei Liu <wei.liu@kernel.org>
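The interface change the commit message describes - binding interfaces taking a xenbus device pointer instead of the peer's domain id - might look roughly like this. The structures are heavily simplified stand-ins for the Linux kernel's struct xenbus_device and struct irq_info, not the real definitions:

```c
#include <stddef.h>

typedef unsigned short domid_t;

/* Simplified stand-in: the real struct xenbus_device carries much more. */
struct xenbus_device {
    domid_t otherend_id;        /* domain id of the other side */
};

struct irq_info {
    unsigned int evtchn;
    domid_t remote_domid;
    struct xenbus_device *dev;  /* new: enables per-device event channel
                                   settings, e.g. lateeoi spurious-event
                                   thresholds */
};

/*
 * Before the change, a binding interface took the remote domain id
 * directly; afterwards it takes the xenbus device and derives the
 * domid from it, keeping the device association around in irq_info.
 */
static void bind_interdomain_evtchn(struct irq_info *info,
                                    struct xenbus_device *dev,
                                    unsigned int remote_port)
{
    info->evtchn = remote_port;
    info->dev = dev;
    info->remote_domid = dev->otherend_id;
}
```

Storing the device pointer (rather than only the domid) is what later allows per-device tuning of event channel behaviour, as the commit message motivates.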


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 13:57:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 13:57:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83257.154470 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9TVr-0006Ir-NX; Tue, 09 Feb 2021 13:57:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83257.154470; Tue, 09 Feb 2021 13:57:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9TVr-0006Ik-KN; Tue, 09 Feb 2021 13:57:07 +0000
Received: by outflank-mailman (input) for mailman id 83257;
 Tue, 09 Feb 2021 13:57:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M+9J=HL=amazon.de=prvs=667ab14d4=nmanthey@srs-us1.protection.inumbo.net>)
 id 1l9TVq-0006Ie-FN
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 13:57:06 +0000
Received: from smtp-fw-2101.amazon.com (unknown [72.21.196.25])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e3b0abca-a8ce-48c2-abbf-8aaf1fd1378c;
 Tue, 09 Feb 2021 13:57:05 +0000 (UTC)
Received: from iad12-co-svc-p1-lb1-vlan3.amazon.com (HELO
 email-inbound-relay-2b-c7131dcf.us-west-2.amazon.com) ([10.43.8.6])
 by smtp-border-fw-out-2101.iad2.amazon.com with ESMTP;
 09 Feb 2021 13:56:59 +0000
Received: from EX13D02EUB001.ant.amazon.com
 (pdx1-ws-svc-p6-lb9-vlan2.pdx.amazon.com [10.236.137.194])
 by email-inbound-relay-2b-c7131dcf.us-west-2.amazon.com (Postfix) with ESMTPS
 id A479FA07E2; Tue,  9 Feb 2021 13:56:57 +0000 (UTC)
Received: from u6fc700a6f3c650.ant.amazon.com (10.43.162.124) by
 EX13D02EUB001.ant.amazon.com (10.43.166.150) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Tue, 9 Feb 2021 13:56:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e3b0abca-a8ce-48c2-abbf-8aaf1fd1378c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.de; i=@amazon.de; q=dns/txt; s=amazon201209;
  t=1612879026; x=1644415026;
  h=to:cc:references:from:message-id:date:mime-version:
   in-reply-to:content-transfer-encoding:subject;
  bh=D+ryArfULe2L2OH/7HvVZ7NS1e4ozA2vHMXkeJQjCA4=;
  b=aQ/VLF/ZYzfd6FVCD3qxOHS8cZfr+yHEm+pKjfXEIZZYHwlI2bDDOdhe
   vB9lhhtqEEEc5oAadxIYxtgkC5isW+UHuBCjb5THVcw0MC6WJHvDpwPna
   0yN3rYpQRkO7WQQDnFHxVEtUmM4Fc95Nf6EpGX7lq8TrCbJH+NUv1xUCF
   Q=;
X-IronPort-AV: E=Sophos;i="5.81,165,1610409600"; 
   d="scan'208";a="80832677"
Subject: Re: [PATCH HVM v2 1/1] hvm: refactor set param
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Ian Jackson <iwj@xenproject.org>,
	<xen-devel@lists.xenproject.org>
References: <20210205203905.8824-1-nmanthey@amazon.de>
 <edf1cd78-2192-2679-9a34-804c5d7b75c5@suse.com>
 <ba146cd6-fd5a-78d8-40bc-59885d265f5f@amazon.de>
 <b8529792-3d99-2e0d-7ebe-31c2c406145f@suse.com>
 <9f753ee9-73c5-aa2c-3c68-eed7e0c2608b@amazon.de>
 <a52cb2ac-fa85-73cd-0c53-3ee002d6b3ea@suse.com>
From: Norbert Manthey <nmanthey@amazon.de>
Autocrypt: addr=nmanthey@amazon.de; prefer-encrypt=mutual; keydata=
 xsFNBFoJQc0BEADM8Z7hB7AnW6ErbSMsYkKh4HLAPfoM+wt7Fd7axHurcOgFJEBOY2gz0isR
 /EDiGxYyTgxt5PZHJIfra0OqXRbWuLltbjhJACbu35eaAo8UM4/awgtYx3O1UCbIlvHGsYDg
 kXjF8bBrVjPu0+g55XizX6ot/YPAgmWTdH8qXoLYVZVWJilKlTqpYEVvarSn/BVgCbIsQIps
 K93sOTN9eJKDSqHvbkgKl9XG3WsZ703431egIpIZpfN0zZtzumdZONb7LiodcFHJ717vvd89
 3Hv2bYv8QLSfYsZcSnyU0NVzbPhb1WtaduwXwNmnX1qHJuExzr8EnRT1pyhVSqouxt+xkKbV
 QD9r+cWLChumg3g9bDLzyrOTlEfAUNxIqbzSt03CRR43dWgfgGiLDcrqC2b1QR886WDpz4ok
 xX3fdLaqN492s/3c59qCGNG30ebAj8AbV+v551rsfEba+IWTvvoQnbstc6vKJCc2uG8rom5o
 eHG/bP1Ug2ht6m/0uWRyFq9C27fpU9+FDhb0ZsT4UwOCbthe35/wBZUg72zDpT/h5lm64G6C
 0TRqYRgYcltlP705BJafsymmAXOZ1nTCuXnYAB9G9LzZcKKq5q0rP0kp7KRDbniirCUfp7jK
 VpPCOUEc3tS1RdCCSeWNuVgzLnJdR8W2h9StuEbb7hW4aFhwRQARAQABzSROb3JiZXJ0IE1h
 bnRoZXkgPG5tYW50aGV5QGFtYXpvbi5kZT7CwX0EEwEIACcFAloJQc0CGyMFCQlmAYAFCwkI
 BwIGFQgJCgsCBBYCAwECHgECF4AACgkQZ+8yS8zN62ajmQ/6AlChoY5UlnUaH/jgcabyAfUC
 XayHgCcpL1SoMKvc2rCA8PF0fza3Ep2Sw0idLqC/LyAYbI6gMYavSZsLcsvY6KYAZKeaEriG
 7R6cSdrbmRcKpPjwvv4iY6G0DBTeaqfNjGe1ECY8u522LprDQVquysJIf3YaEyxoK/cLSb0c
 kjzpqI1P9Vh+8BQb5H9gWpakbhFIwbRGHdAF1roT7tezmEshFS0IURJ2ZFEI+ZgWgtl1MBwN
 sBt65im7x5gDo25h8A5xC9gLXTc4j3tk+3huaZjUJ9mCbtI12djVtspjNvDyUPQ5Mxw2Jwar
 C3/ZC+Nkb+VlymmErpnEUZNltcq8gsdYND4TlNbZ2JhD0ibiYFQPkyuCVUiVtimXfh6po9Yt
 OkE0DIgEngxMYfTTx01Zf6iwrbi49eHd/eQQw3zG5nn+yZsEG8UcP1SCrUma8p93KiKOedoL
 n43kTg4RscdZMjj4v6JkISBcGTR4uotMYP4M0zwjklnFXPmrZ6/E5huzUpH9B7ZIe/SUu8Ur
 xww/4dN6rfqbNzMxmya8VGlEQZgUMWcck+cPrRLB09ZOk4zq9i/yaHDEZA1HNOfQ9UCevXV5
 7seXSX7PCY6WDAdsT3+FuaoQ7UoWN3rdpb+064QKZ0FsHeGzUd7MZtlgU4EKrh25mTSNZYRs
 nTz2zT/J33fOwU0EWglBzQEQAKioD1gSELj3Y47NE11oPkzWWdxKZdVr8B8VMu6nVAAGFRSf
 Dms4ZmwGY27skMmOH2srnZyTfm9FaTKr8RI+71Fh9nfB9PMmwzA7OIY9nD73/HqPywzTTleG
 MlALmnuY6xFRSDmqmvxDHgWyzB4TgPWt8+hW3+TJKCx2RgLAdSuULZla4lia+NlS8WNRUDGK
 sFJCCB3BW5I/cocfpBEUqLbbmnPuD9UKpEnFcYWD9YaDNcBTjSc7iDsvtpdrBXg5VETOz/TQ
 /CmVs9h/5zug8O4bXxHEEJpCAxs4cGKxowBqx/XJfkwdWeo/LdaeR+LRbXvq4A32HSkyj9sV
 vygwt2OFEk493JGik8qtAA/oPvuqVPJGacxmZ7zKR12c0mnKCHiexFJzFbC7MSiUhhe8nNiM
 p6Sl6EZmsTUXhV2bd2M12Bqcss3TTJ1AcW04T4HYHVCSxwl0dVfcf3TIaH0BSPiwFxz0FjMk
 10umoRvUhYYoYpPFCz8dujXBlfB8q2tnHltEfoi/EIptt1BMNzTYkHKArj8Fwjf6K+nQ3a8p
 1cWfkYpA5bRqbhbplzpa0u1Ex0hZk6pka0qcVgqmH31O2OcSsqeKfUfHkzj3Q6dmuwm1je/f
 HWH9N1gDPEp1RB5bIxPnOG1Z4SNl9oVQJhc4qoJiqbvkciivYcH7u2CBkboFABEBAAHCwWUE
 GAEIAA8FAloJQc0CGwwFCQlmAYAACgkQZ+8yS8zN62YU9Q//WTnN28aBX1EhDidVho80Ql2b
 tV1cDRh/vWTcM4qoM8vzW4+F/Ive6wDVAJ7zwAv8F8WPzy+acxtHLkyYk14M6VZ1eSy0kV0+
 RZQdQ+nPtlb1MoDKw2N5zhvs8A+WD8xjDIA9i21hQ/BNILUBINuYKyR19448/41szmYIEhuJ
 R2fHoLzNdXNKWQnN3/NPTuvpjcrkXKJm2k32qfiys9KBcZX8/GpuMCc9hMuymzOr+YlXo4z4
 1xarEJoPOQOXnrmxN4Y30/qmf70KHLZ0GQccIm/o/XSOvNGluaYv0ZVJXHoCnYvTbi0eYvz5
 OfOcndqLOfboq9kVHC6Yye1DLNGjIVoShJGSsphxOx2ryGjHwhzqDrLiRkV82gh6dUHKxBWd
 DXfirT8a4Gz/tY9PMxan67aSxQ5ONpXe7g7FrfrAMe91XRTf50G3rHb8+AqZfxZJFrBn+06i
 p1cthq7rJSlYCqna2FedTUT+tK1hU9O0aK4ZYYcRzuTRxjd4gKAWDzJ1F/MQ12ftrfCAvs7U
 sVbXv2TndGIleMnheYv1pIrXEm0+sdz5v91l2/TmvkyyWT8s2ksuZis9luh+OubeLxHq090C
 hfavI9WxhitfYVsfo2kr3EotGG1MnW+cOkCIX68w+3ZS4nixZyJ/TBa7RcTDNr+gjbiGMtd9
 pEddsOqYwOs=
Message-ID: <adee7548-0a60-d7d1-731f-474a585edf6c@amazon.de>
Date: Tue, 9 Feb 2021 14:56:49 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <a52cb2ac-fa85-73cd-0c53-3ee002d6b3ea@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Language: en-US
X-Originating-IP: [10.43.162.124]
X-ClientProxiedBy: EX13D02UWB001.ant.amazon.com (10.43.161.240) To
 EX13D02EUB001.ant.amazon.com (10.43.166.150)
Precedence: Bulk
Content-Transfer-Encoding: 8bit

On 2/9/21 2:45 PM, Jan Beulich wrote:
> On 09.02.2021 14:41, Norbert Manthey wrote:
>> On 2/9/21 10:40 AM, Jan Beulich wrote:
>>> On 08.02.2021 20:47, Norbert Manthey wrote:
>>>> On 2/8/21 3:21 PM, Jan Beulich wrote:
>>>>> On 05.02.2021 21:39, Norbert Manthey wrote:
>>>>>> @@ -4108,6 +4108,13 @@ static int hvm_allow_set_param(struct domain *d,
>>>>>>      if ( rc )
>>>>>>          return rc;
>>>>>>
>>>>>> +    if ( index >= HVM_NR_PARAMS )
>>>>>> +        return -EINVAL;
>>>>>> +
>>>>>> +    /* Make sure we evaluate permissions before loading data of domains. */
>>>>>> +    block_speculation();
>>>>>> +
>>>>>> +    value = d->arch.hvm.params[index];
>>>>>>      switch ( index )
>>>>>>      {
>>>>>>      /* The following parameters should only be changed once. */
>>>>> I don't see the need for the heavier block_speculation() here;
>>>>> afaict array_access_nospec() should do fine. The switch() in
>>>>> context above as well as the switch() further down in the
>>>>> function don't have any speculation susceptible code.
>>>> The reason to block speculation instead of just using the hardened index
>>>> access is to not allow to speculatively load data from another domain.
>>> Okay, looks like I got misled by the added bounds check. Why
>>> do you add that, when the sole caller already has one? It'll
>>> suffice since you move the array access past the barrier,
>>> won't it?
>> I can drop that bounds check again. This was added to make sure other
>> callers would be safe as well. Thinking about this a little more, the
>> check could actually be moved into the hvm_allow_set_param function,
>> above the first index access in that function. Are there good reasons to
>> not move the index check into the allow function?
> I guess I'm confused: We're talking about dropping the check
> you add to hvm_allow_set_param() and you suggest to "move" it
> right there?

Yes. I can either just not introduce that check in that function (which
is what you suggested). As an alternative, to have all checks in one
function, I proposed to instead move it into hvm_allow_set_param. If you
do not like this proposal, I'll follow your suggestion and will simply
not introduce the additional bounds check as part of this patch.

Best,
Norbert




Amazon Development Center Germany GmbH
Krausenstr. 38
10117 Berlin
Geschaeftsfuehrung: Christian Schlaeger, Jonathan Weiss
Eingetragen am Amtsgericht Charlottenburg unter HRB 149173 B
Sitz: Berlin
Ust-ID: DE 289 237 879



From xen-devel-bounces@lists.xenproject.org Tue Feb 09 14:04:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 14:04:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83262.154482 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9TcT-0007M3-Em; Tue, 09 Feb 2021 14:03:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83262.154482; Tue, 09 Feb 2021 14:03:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9TcT-0007Lw-Bd; Tue, 09 Feb 2021 14:03:57 +0000
Received: by outflank-mailman (input) for mailman id 83262;
 Tue, 09 Feb 2021 14:03:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l9TcS-0007Lr-3i
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 14:03:56 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l9TcS-0007ut-0H
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 14:03:56 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l9TcR-0002Un-Uk
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 14:03:55 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l9TcK-0005Yw-2r; Tue, 09 Feb 2021 14:03:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=M3aBlIg3/bnVMULC4dyTeqPoVq7ecTLMRgUB7dNf9ek=; b=ejIS6e4oQC54cFTur5nxVYrXdW
	DcfjHzxmsZPpt6096cD87s0UHuhyJPvxHTJ5CpXbT66EBrOf9HmjoukpG81hurawwl4YfMakprHnH
	X2LBAhPUEEzJtqstCdY8lPvYgMSWD0+b6NW8slOfFCb69uZ/2Hde/nGgkKTzRswqKFSM=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24610.38467.808678.941320@mariner.uk.xensource.com>
Date: Tue, 9 Feb 2021 14:03:47 +0000
To: Jan Beulich <jbeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
    Julien Grall <julien@xen.org>,
    lucmiccio@gmail.com,
    xen-devel@lists.xenproject.org,
    Bertrand.Marquis@arm.com,
    Volodymyr_Babchuk@epam.com,
    Rahul.Singh@arm.com,
    Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: Re: [PATCH v2] xen/arm: fix gnttab_need_iommu_mapping
In-Reply-To: <4df687cb-d3bc-ccb8-4e7c-a6429c37574e@suse.com>
References: <20210208184932.23468-1-sstabellini@kernel.org>
	<173ed75a-94cf-26a5-9271-a687bf201578@xen.org>
	<alpine.DEB.2.21.2102081214010.8948@sstabellini-ThinkPad-T480s>
	<4df687cb-d3bc-ccb8-4e7c-a6429c37574e@suse.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Jan Beulich writes ("Re: [PATCH v2] xen/arm: fix gnttab_need_iommu_mapping"):
> On 08.02.2021 21:24, Stefano Stabellini wrote:
...
> > For these cases, I would just follow a simple rule of thumb:
> > - is the submitter willing to provide the backport?
> > - is the backport low-risk?
> > - is the underlying bug important?
> > 
> > If the answer to all is "yes" then I'd go with it.
> 
> Personally I disagree, for the very simple reason of the question
> going to become "Where do we draw the line?" The only non-security
> backports that I consider acceptable are low-risk changes to allow
> building with newer tool chains. I know other backports have
> occurred in the past, and I did voice my disagreement with this
> having happened.

I think I take a more relaxed view than Jan, but still a much firmer
line than Stefano.  My opinion is that we should make exceptions
only for bugs of exceptional severity.

I don't think I have seen an argument that this bug is exceptionally
severe.

For me, the fact that you can only experience this bug if you upgrade
the hardware or significantly change the configuration means that
this isn't so serious a bug.

The downside of accepting this backport is not only the slippery
slope.  It is also shipping the risk of a mistake in it to people who
are using 4.12 and expect it to remain almost unchanged.

The alternative, which I think is reasonable, is to ask people who are
substantially changing their hardware and/or configuration, to also
upgrade to a newer hypervisor.  I don't see that this would be a
significant imposition, but maybe I am missing some reason why
upgrading to a newer hypervisor would add a lot of difficulty to an
upgrade which is already significant in other ways.

(I definitely agree with Jan that build problems with newer tools are
serious because they are so blocking, so backports to fix them will
generally need an exception to the "security fixes only" rule.)

> But this is a community decision, so my opinion counts as just a
> single vote.

I think on this particular patch, with the information I have so far,
I am with Jan and Julien.

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 14:22:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 14:22:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83267.154494 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Ttt-0000p7-27; Tue, 09 Feb 2021 14:21:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83267.154494; Tue, 09 Feb 2021 14:21:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Tts-0000p0-TB; Tue, 09 Feb 2021 14:21:56 +0000
Received: by outflank-mailman (input) for mailman id 83267;
 Tue, 09 Feb 2021 14:21:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=BG3/=HL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l9Tts-0000ov-C0
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 14:21:56 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d1a553e2-f37c-4660-9402-b38d3034adc0;
 Tue, 09 Feb 2021 14:21:55 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1B0AFAD6A;
 Tue,  9 Feb 2021 14:21:54 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d1a553e2-f37c-4660-9402-b38d3034adc0
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612880514; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=mIQB3lHLlmRhkVMcXJDBArQs4ArE4UWm4/bPgZ6ct9A=;
	b=Yq1vrBGj7DGnnkuAVbyaw2O4Ws/sJE7DJZq0pEBEMipj9uMSzKhHFoD1Aeo/SDBaSQF7w0
	3RS6l6spLAtasQLLNAnuZ0aQYzJ4sJSlcLLnwDhkMyNmF534UFlQrDSEIQ7tiGfGL+Q1tu
	y5MCn3EEGBpoTaRWLO/FYRceyUJ7xwU=
Subject: Re: [PATCH HVM v2 1/1] hvm: refactor set param
To: Norbert Manthey <nmanthey@amazon.de>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Ian Jackson <iwj@xenproject.org>,
 xen-devel@lists.xenproject.org
References: <20210205203905.8824-1-nmanthey@amazon.de>
 <edf1cd78-2192-2679-9a34-804c5d7b75c5@suse.com>
 <ba146cd6-fd5a-78d8-40bc-59885d265f5f@amazon.de>
 <b8529792-3d99-2e0d-7ebe-31c2c406145f@suse.com>
 <9f753ee9-73c5-aa2c-3c68-eed7e0c2608b@amazon.de>
 <a52cb2ac-fa85-73cd-0c53-3ee002d6b3ea@suse.com>
 <adee7548-0a60-d7d1-731f-474a585edf6c@amazon.de>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1a50a8c3-44f5-9ea9-7ff1-0d716bc05ebd@suse.com>
Date: Tue, 9 Feb 2021 15:21:54 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <adee7548-0a60-d7d1-731f-474a585edf6c@amazon.de>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 09.02.2021 14:56, Norbert Manthey wrote:
> On 2/9/21 2:45 PM, Jan Beulich wrote:
>> On 09.02.2021 14:41, Norbert Manthey wrote:
>>> On 2/9/21 10:40 AM, Jan Beulich wrote:
>>>> On 08.02.2021 20:47, Norbert Manthey wrote:
>>>>> On 2/8/21 3:21 PM, Jan Beulich wrote:
>>>>>> On 05.02.2021 21:39, Norbert Manthey wrote:
>>>>>>> @@ -4108,6 +4108,13 @@ static int hvm_allow_set_param(struct domain *d,
>>>>>>>      if ( rc )
>>>>>>>          return rc;
>>>>>>>
>>>>>>> +    if ( index >= HVM_NR_PARAMS )
>>>>>>> +        return -EINVAL;
>>>>>>> +
>>>>>>> +    /* Make sure we evaluate permissions before loading data of domains. */
>>>>>>> +    block_speculation();
>>>>>>> +
>>>>>>> +    value = d->arch.hvm.params[index];
>>>>>>>      switch ( index )
>>>>>>>      {
>>>>>>>      /* The following parameters should only be changed once. */
>>>>>> I don't see the need for the heavier block_speculation() here;
>>>>>> afaict array_access_nospec() should do fine. The switch() in
>>>>>> context above as well as the switch() further down in the
>>>>>> function don't have any speculation susceptible code.
>>>>> The reason to block speculation instead of just using the hardened index
>>>>> access is to not allow to speculatively load data from another domain.
>>>> Okay, looks like I got misled by the added bounds check. Why
>>>> do you add that, when the sole caller already has one? It'll
>>>> suffice since you move the array access past the barrier,
>>>> won't it?
>>> I can drop that bounds check again. This was added to make sure other
>>> callers would be safe as well. Thinking about this a little more, the
>>> check could actually be moved into the hvm_allow_set_param function,
>>> above the first index access in that function. Are there good reasons to
>>> not move the index check into the allow function?
>> I guess I'm confused: We're talking about dropping the check
>> you add to hvm_allow_set_param() and you suggest to "move" it
>> right there?
> 
> Yes. I can either just not introduce that check in that function (which
> is what you suggested). As an alternative, to have all checks in one
> function, I proposed to instead move it into hvm_allow_set_param.

Oh, I see. What I'd like to be the result of your re-arrangement is
symmetry between "get" and "set" where possible in this regard, and
asymmetry having a clear reason. Seeing that hvm_{get,set}_param()
are the non-static functions here, I think it would be quite
desirable for the bounds checking to live there. Since
hvm_allow_{get,set}_param() are specifically helpers of the two
earlier named functions, checks consistently living there would be
fine with me as well.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 14:46:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 14:46:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83269.154505 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9UHj-0002of-0a; Tue, 09 Feb 2021 14:46:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83269.154505; Tue, 09 Feb 2021 14:46:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9UHi-0002oY-TR; Tue, 09 Feb 2021 14:46:34 +0000
Received: by outflank-mailman (input) for mailman id 83269;
 Tue, 09 Feb 2021 14:46:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=t2YR=HL=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1l9UHh-0002oT-Rc
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 14:46:34 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5b0c44be-62fa-4ff4-9a73-cd4607e9519d;
 Tue, 09 Feb 2021 14:46:32 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5b0c44be-62fa-4ff4-9a73-cd4607e9519d
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612881992;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=Jm2uM7sN1mhPr4RqlZCT/1dIpuyWA++MFK5EczqTNJA=;
  b=QiC2LtmTz12nui25cvC5MfOQy2caETTk7CjO8CvT+IgEM9W6ISgIj16L
   rYeibf6iRFBh2hTSpJTdFehMXjIw+l2jwxFFhiad41kVn7z7cjqsbiKU/
   BtFLZnR06kEne7KgKAoZV7QxnNr/ukXE1AsvhpR9rMZIntA4TydoikMIc
   4=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: Z/IkgMkbGJyUBjpCsHMCB1dwmcn3cZvER17v5iFt5iBvfDBLokq9tmO1lvutTOOt33rv0cgtS1
 GuwN/ckoTHiHFt5xvh4YcM/nhEODcKWINCpwunbs4Nno7ig9MTPdi74OXnbPI7HyZbznqxaHhL
 N/j6ukVKKErIRoLnRtKKIpY8sik4AGpI8ESYgMG/lViBuhmgUga/JWeSmln6wGmgCONR9I+L5D
 JrPn+SXEW/0bQFreZyrtMuNTnwfg96x10uWRaTDb56mUt47dowhg2QbEzR0wgq7BHZy4lW3rmP
 IwU=
X-SBRS: 5.2
X-MesageID: 37238681
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,165,1610427600"; 
   d="scan'208";a="37238681"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=aFITXuqlTYYMKdF+JMvQMBazPDE8W7us8f7E3FN3ufLlxacYVMcROytOotxceQhIXcJiRSw5OTuTHaZoSyyIva39JY5LYmrwcdYU4y7dPmPgxv4tSVStcVuLC2iixEmIcVmuecyllUQlNUPHCDvkJNgNXJ6bCUxS2tERYm3jbrDCr2E3Rws9wTpnaatdKWAaOamuMciVxArjGsSvu/MK1f2+vJBqyx8OcdIJHZlXdBv3oquXpXV+ZhezMauysbG4doUM5Ew8dX1Lf17/ndI5DysZQu/BKwvPpyRFSA69Qczp3WTnDgtR1RqRprc2fxRXH4A4tSASaozcisinUJ7Whw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=N9JIbKaduRGh1ym21FJE8LohQOTAqkO2V2Kr77hsM3Q=;
 b=i0MIdeXXF3NNRlWzULcdrNHZ54QkorOtKn8PAoJDBCwraWRQmU/VLJL6tddWcQnv+FllHsCetCZg/nIFOdqV4aHYt5Lgd7CfU6b4v9hL3A16Tmo01nzWpB5dbTG/6ErhUzJRLW6UNnvFGM6UMX8XBtJFWacMoJZfy7p5IbPTB3TXDBVUoBEGxz9m8zUM1AHtqUltLBmK993B3dhk/xPjt2IeVjcXfkWd7zgtkyV5Db/T5ukzAh1qcWR9T2r1Duo5NvzhKZUesJHD2sWcLUw6UpLqo26Nl5wjWfiCF2uoSpeqaZuWNFRLNR4+KIZC+mrhfaUxZY40Oe1xr7Rv/RndHw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=N9JIbKaduRGh1ym21FJE8LohQOTAqkO2V2Kr77hsM3Q=;
 b=O4h8jw2d4qSwoRrSWght1CevYqeAylAiXcDUhePHHSejw3dpp45Tz1QGXA36HiAnMA36QsnrqzAbRyGk8sZNCQAqayJYdRFdDrt4yGqxUhCLe1wgQVjSy/3G47kRVXRs0l6cm53Yf+rACfJVCQG+3wQdWEPpLF/rgVamq19SVi4=
Date: Tue, 9 Feb 2021 15:46:22 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Tim Deegan
	<tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
Subject: Re: [PATCH 02/17] x86: split __{get,put}_user() into "guest" and
 "unsafe" variants
Message-ID: <YCKgPro1yTtSSnLQ@Air-de-Roger>
References: <4f1975a9-bdd9-f556-9db5-eb6c428f258f@suse.com>
 <13d1d621-21db-0e59-6603-2b22b6a9d180@suse.com>
 <YB1nhBeMRVGyO8Fk@Air-de-Roger>
 <d23dc40c-3b37-ade2-f987-4f79b06901b1@suse.com>
 <YB1v60CuOdhxFwNy@Air-de-Roger>
 <199d2681-9704-8804-d3c3-d8ad24fca137@suse.com>
 <YCKJLbaTzD6YF/g5@Air-de-Roger>
 <1cf476b9-4ac1-9a12-7fdb-c898f02532f7@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <1cf476b9-4ac1-9a12-7fdb-c898f02532f7@suse.com>
X-ClientProxiedBy: MRXP264CA0037.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:14::25) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: b8ed9210-97a2-4669-9c26-08d8cd097b2a
X-MS-TrafficTypeDiagnostic: DM6PR03MB5324:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB53249EECEBD4EFAB1AE6A5D88F8E9@DM6PR03MB5324.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: fSd26OrX2KJtPKnmSP2+44k0q7OPWFXyy5XMgimA1pepf9ypqPKZry3UmzUtU7C7C6zop+gg/IZfk98J1mS2YYtirefO8VxynEsrIUNyi2BRnL3nI6GYhi0wr+/98pZFaIU/4tj0bA9aecgQ84ITN7O2oGt2Xn10eZGU9F0eV5suHYOZg4LMH8wijEPToPS3ysKmx2lCwtzEBqnzhHqTw9Lspx2vJShoKOkLicWsHD4qwZPGv2s5F1czXX64UvTYhULibunfICriwHq7jslBL+GrrsCONgZXrhNbQDRG4yOdwxQ69sM1pNpuIL+yKYj8UzPH36mH9L6D4D8SN9yWBv8Vi4XIwbyaiv/PHDRW2pjszo6dA9UMs/pfLRh2mjev2TfC1QQ51H3L8C09z5gN1PJuB0hwtYhbgqtSP2ZOfckYBGCmBHjCvxgthRwwaGRTvd2IascaWpInQOtwyJ+zD0cy0HiyZRbIpbGqicHns0THLQSn3VMHzWDvOr+rB2QC2Y6WwJc94Qrq5ZHSxNR5uQ==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(376002)(366004)(136003)(346002)(396003)(39860400002)(186003)(16526019)(5660300002)(107886003)(85182001)(26005)(6666004)(2906002)(956004)(83380400001)(9686003)(316002)(8936002)(66946007)(4326008)(8676002)(478600001)(6486002)(53546011)(66556008)(66476007)(54906003)(6496006)(6916009)(33716001)(86362001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?eGpNcU56WTRVVTgxQnNTTVdKalNLWkppSVBEc2lUS0tzYi9xMTYzaUVNZlpw?=
 =?utf-8?B?bjFxYnFVTk42T2FMVHBsbUJwZjJmRXdoaWhBVzVNM1E5LzJKRVFVd1lQV2hP?=
 =?utf-8?B?YVVhR1YrRE9vcFp4U1laeTZsREJ2YkEvMERWeTJSQ1RGVTVnd3NOWEZmdXFL?=
 =?utf-8?B?eGRzUFNQN1FmS0J5RzNma00zV0JEMlFKOEZoTlp4RUQyancySlJVWTFpYStU?=
 =?utf-8?B?YXJRVndtcWNKb2hOc0pzN2lTNExKNUlUaXh5cDVDT0hXV21iWjQ3VDFFVVBa?=
 =?utf-8?B?cDVWQjY1K0loQlBycytVR1FXVjF2WFArTUVpOFFnaSs3UVdnejZyRVZZNmc3?=
 =?utf-8?B?bXlmWDc2ZnczVnlramt3NVE5T0RCYXRtbnFualEzTG01cGl1YkhvcmpMVFR1?=
 =?utf-8?B?dGtkdTE4RGRoNzZpdTk0YklHZ1B3aWJFcGhiYUFpRlJTQzR0aUNnWG80c0Zw?=
 =?utf-8?B?NW9SZ0xLNXg0bk9EUUlpekZ3akNZVXM0cmVZcWdFVHJ4ZGc4ak84NEtQU3B0?=
 =?utf-8?B?VW4rOHNFOGdFTEp2aFZxMkpmWU44L1RIWGc0R0lwVG9xdnVDU1NRT1IwSUhZ?=
 =?utf-8?B?QUtYeElZS3JQa2NmcG8rdzVhSGVEWkdUbmNiNEhmV2pCZzU4aExwaUV1NGtr?=
 =?utf-8?B?dnNYSUtNdk1LVUNTSDYrWHhZVkRva29VZjZ4VzdVWi95NmIydDFyNnpVNUJk?=
 =?utf-8?B?bytDOEtTUURnazg2VEwySTlrTnByYjNyRkc4Yk91M3RWdDN1cS9HTjF5T3N3?=
 =?utf-8?B?VFFQMlUwclpFbW1GKzUrVklybUpTdGtFRXZadlgyb01vaTF3NFpEVTZQcXky?=
 =?utf-8?B?cmk1OGZyQ0FwL2VxeTFlVDZ0cGp4U0RhTWhOT1dNcDVOYUlJUGphc1l6KzVY?=
 =?utf-8?B?dktGRXlTTW5ab3J2eXdRVlJlZHVncDc2a1QrbTd0bUh1NHBBS256N2NPaWp5?=
 =?utf-8?B?M1N0Qm5ROFNDa3FSK0RtaVRpRjJibG5FVnNlNTJ5eUVuNGJUbHhmOXhoRXRH?=
 =?utf-8?B?QVNjNmxVTHFpYVRYd2NpUDJZbTJRN0djNk8yWFhERkhkUVJRanNobG9EeW00?=
 =?utf-8?B?M254UVFlRytMWWRWeFNhNGxMQTNoWFI2T3lhYkxBOVF5TCtqUzhCZzJkQzFj?=
 =?utf-8?B?c2pKLzZCUG84cWNxSzlId3ZwRjRmUm85UDNydk0wU2s0eVFQTUYxOHBadDR1?=
 =?utf-8?B?QlE4N0N2S2c2NmFFY3IxN09PRGRiQTZWRS82UFRmVk5yTm9teHhHVjRhTUpM?=
 =?utf-8?B?QWhSYktQbnp5and0eGhFbk9KRElMcnpUWFRTRTNOZlRmZDdxWlpka21lOEl1?=
 =?utf-8?B?dkF4SkxlbDlkRU9mNTVmbEtOZGdJbVBiOVBJYkZDdElrNGZzQ0RXb3dvZXlS?=
 =?utf-8?B?UUxhUlBCSEpnL0krMXh2NFp3VXlGU2RkcGhBZlcwUDJONW02eUhtRW9ueXVq?=
 =?utf-8?B?NGxwM3NhWERETUN1VmRxeE5hZ2gzbWZTek1TVmR2M29XRHRJYW9hZFhJTGl5?=
 =?utf-8?B?dmozeC93Y3BKY2JPWXBsQm9DUmZvWE9KQ3pGY0xBdWREY1Bic1BNa1RYbUNG?=
 =?utf-8?B?M21XcmVlYmttZXhXdkg4NkJDMCs0YkdraytlT1ZDUjRMNkEvYWpDTVFIWVFD?=
 =?utf-8?B?U3ZFQU5sMHU3dHJvbExoTDIrZmJuUzRlcWxaVXl0dm5RZHkzVXJJWlY2ZFUr?=
 =?utf-8?B?Y3YwWng3dm84R2Nrc3NOT0FOb2gxc1h6Wnh6Sm9sbzdJNTZwQWRHNitMVlZS?=
 =?utf-8?Q?r2nlbBHvtO2JWMxefUwublEeXXI7PJSgt9yU5co?=
X-MS-Exchange-CrossTenant-Network-Message-Id: b8ed9210-97a2-4669-9c26-08d8cd097b2a
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Feb 2021 14:46:28.0164
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: s6z8mi0g2raVNJfvLAjLCMwH4A6zjA0jXxit78VgXgrngW7kNJf6oj5NAt3zOsRe7pNWJOC/pHq4Hfi5NSaKPA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB5324
X-OriginatorOrg: citrix.com

On Tue, Feb 09, 2021 at 02:15:18PM +0100, Jan Beulich wrote:
> On 09.02.2021 14:07, Roger Pau Monné wrote:
> > On Fri, Feb 05, 2021 at 05:26:33PM +0100, Jan Beulich wrote:
> >> On 05.02.2021 17:18, Roger Pau Monné wrote:
> >>> On Fri, Feb 05, 2021 at 05:13:22PM +0100, Jan Beulich wrote:
> >>>> On 05.02.2021 16:43, Roger Pau Monné wrote:
> >>>>> On Thu, Jan 14, 2021 at 04:04:11PM +0100, Jan Beulich wrote:
> >>>>>> The "guest" variants are intended to work with (potentially) fully guest
> >>>>>> controlled addresses, while the "unsafe" variants are not.
> >>>>>
> >>>>> Just to clarify, both work against user addresses, but guest variants
> >>>>> need to be more careful because the guest provided address can also be
> >>>>> modified?
> >>>>>
> >>>>> I'm trying to understand the difference between "fully guest
> >>>>> controlled" and "guest controlled".
> >>>>
> >>>> Not exactly, no. "unsafe" means access to anything which may
> >>>> fault, guest controlled or not. do_invalid_op()'s reading of
> >>>> the insn stream is a good example - the faulting insn there
> >>>> isn't guest controlled at all, but we still want to be careful
> >>>> when trying to read these bytes, as we don't want to fully
> >>>> trust %rip there.
> > 
> > Oh, I see. It's possible that %rip points to an unmapped address
> > there, and we need to be careful when reading, even if the value of
> > %rip cannot be controlled by the guest and can legitimately point to
> > Xen's address space.
> > 
> >>> Would it make sense to treat everything as 'guest' accesses for the
> >>> sake of not having this difference?
> >>
> >> That's what we've been doing until now. It is the purpose of
> >> this change to allow the two to behave differently.
> >>
> >>> I think having two accessors is likely to cause confusion and could
> >>> possibly lead to the wrong one being used in unexpected contexts. Does
> >>> it add a too big performance penalty to always use the most
> >>> restrictive one?
> >>
> >> The problem is the most restrictive one is going to be too
> >> restrictive - we wouldn't be able to access Xen space anymore
> >> e.g. from the place pointed at above as example. This is
> >> because for guest accesses (but not for "unsafe" ones) we're
> >> going to divert them into non-canonical space (and hence make
> >> speculation impossible, as such an access would fault) if it
> >> would touch Xen space.
> > 
> > Yes, I understand now. I think it would have been helpful (for me) to
> > have the first sentence as:
> > 
> > The "guest" variants are intended to work with (potentially) fully guest
> > controlled addresses, while the "unsafe" variants are expected to be
> > used in order to access addresses not under the guest control, but
> > that could trigger faults anyway (like accessing the instruction
> > stream in do_invalid_op).
> 
> I can use some of this, but in particular "access addresses not
> under the guest control" isn't entirely correct. The question isn't
> whether there's a guest control aspect, but which part of the
> address space the addresses are in. See specifically x86/PV: use
> get_unsafe() instead of copy_from_unsafe()" for two pretty good
> examples. The address within the linear page tables are - in a
> way at least - still somewhat guest controlled, but we're
> deliberately accessing Xen virtual addresses there.

OK, could this be somehow added to the commit message then?

Maybe it would be better to have something like:

The "guest" variants are intended to work with addresses belonging to
the guest address space, while the "unsafe" variants should be used
for addresses that fall into the Xen address space.

I think it's important to list exactly how the distinction between the
guest/unsafe accessors is made, or else it's impossible to review that
the changes done here are correct.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 14:55:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 14:55:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83272.154518 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9UQi-0003nz-4Y; Tue, 09 Feb 2021 14:55:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83272.154518; Tue, 09 Feb 2021 14:55:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9UQi-0003ns-12; Tue, 09 Feb 2021 14:55:52 +0000
Received: by outflank-mailman (input) for mailman id 83272;
 Tue, 09 Feb 2021 14:55:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=t2YR=HL=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1l9UQg-0003nn-UK
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 14:55:50 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 34175c16-f3a9-4fed-a6a7-7f78a64a2842;
 Tue, 09 Feb 2021 14:55:49 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 34175c16-f3a9-4fed-a6a7-7f78a64a2842
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612882549;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=uHKlxmGDmcHVXNMhGVVfjDokHAmUUsD2nu9sDHYd6B0=;
  b=TFCwuoO5pcavyt4nXK76pCiJTCrDijr5hMqWOY+CFLwc1fwG78KGT+k9
   kCa95IwBmoPIS86iaH8nkv9iEGZfXxakUgTomRi0W8QBUlxVEcume5/0m
   oiYtyqe1L+s7heAXhDWgi5pAmDZRF96jdrU8XFW8uWwFS6jvScR9I9ktO
   c=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: R35jWxZ3i64F3E1SwsZwRvpI9PkwesI5DS9p8we1HHFjzyxUAp5iODXIdz7twSOL/UUrF4o99/
 e57+LDGH1vdrGpl7n3FjU1nxEM/9+G4XP25vZDIJtkQwUcxb0k7l8xIXAVnOtCrQfHW7Kjq9DA
 YPrrjJHsb2AtJ5dQIsexJ33y0zvUswa0k1g01JhcEzzdy0a86A1cmViD/bH3Zqlbu5EAFjVixY
 hkxa7/vYKsI9ZhumhVW3OdlXTMtGgyPp77FjfHYHw50bmZ1peZQtDLskbmmeYOTt+d36Z5Mz9u
 +UQ=
X-SBRS: 5.2
X-MesageID: 36810456
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,165,1610427600"; 
   d="scan'208";a="36810456"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Mqa4njf+W9vW9rM+j/6PUyqLXJBUEwEesY5Ls2vnaT+f+XN/R2poYDBw2XiosLTiWUY55P2W7dC8IumREh9ITE3ZXvHpNM+WTmcQa4eb+Y8EqIkNnvcwzCzr6Lq+wedeePnMZ+ucZhGL5/X5f4z4f6iWm0jmBaAkBaeahzPZwO00/5IRe5OhP19+XE/VnoD0jjXHocLysK6SRfmCYHVNAyo9ukmuaoLW9vbDgtXCDsJUhd4YFda6eSABmrNWEmfWy+QlDfdvVxNa/wvUuUCgi9tNymbPFxCnQoxKx6A7xbmbPnrUtPUpNufGZBQOA+/dCWwJG6YP/zsP5axHFeOGFw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=sPRC+A+QhgxlTfMwM00h5x2y0xe4I6v/fBLyo6DRhrQ=;
 b=HxoObebu0f/NNWiTp9DTapgmJNJ6aPtEKOwOWNTj5ASPN1qRwD1rluYsPy6HvAIw+XgLEF2ju1rXoZiR7p/OJN17NkwCZIVqVbzF17paTiexDIZBhRu1fwAxxnGY64DP7svzHbUAhYeFH6zyEJdtt9ns5by8Al9afRjcF6gPAIPBdLpyLgz1LZq5jfQ+2RgUCoQH0fzeCviJfzoIGQz4oA3ht+3EJOC5MFMclBv9kt2LF0hCVBKr0/qFnUvNsGjcPJdsheME/jSH7XQ5wDpt4fTy3PU01uu1Z6Aa8AfcbuvPsx+Z6flS8fsk/GLEIB23LuDMMxYK3UcbzwKj2zrY0g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=sPRC+A+QhgxlTfMwM00h5x2y0xe4I6v/fBLyo6DRhrQ=;
 b=NXkSNfV5vo5MAcFidelxgks/DiqJLeNs3EPWv6TEWgdiqQk06EyZkTC38PnV002TFYNCC2DJa0qvcnSJB76lMVVfhsCC02bInSKafrqGJzhTONajQj/xfFksq6Uj1r+26SJ9pDrb8GDSmk5ZW2qmqzP0Ak5S48PetwHmI2DEUL8=
Date: Tue, 9 Feb 2021 15:55:40 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Tim Deegan
	<tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
Subject: Re: [PATCH 02/17] x86: split __{get,put}_user() into "guest" and
 "unsafe" variants
Message-ID: <YCKibN0mjROss4M4@Air-de-Roger>
References: <4f1975a9-bdd9-f556-9db5-eb6c428f258f@suse.com>
 <13d1d621-21db-0e59-6603-2b22b6a9d180@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <13d1d621-21db-0e59-6603-2b22b6a9d180@suse.com>
X-ClientProxiedBy: MR2P264CA0028.FRAP264.PROD.OUTLOOK.COM (2603:10a6:500::16)
 To DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 95239af7-2a32-45e7-3992-08d8cd0ac742
X-MS-TrafficTypeDiagnostic: DM6PR03MB4602:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB4602E295077716F208BC77868F8E9@DM6PR03MB4602.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: FXHPQfSbBOhVNjPctpj3YvObk1D1WIJN2kewkMGKG7oe8r3GHybqzZHqCDSscSg7yEc2n9wsaxT4oVou0XEej3hwDupjFASs2cHsVHTsh5BrzUnuf/F/8D7u+JOPIK9qi0d5SyG5Rh0hIV61D1cZ2qMxx0QND8yJEfAABQyiQcBK1yxVakeRL42/3ec9YipoLOxSy9t95phnJR8OCUcR+5+VcUMLBFNShfH0fpGoVCjBcNpC4C5Q47Z0dsLIO8pPkNuB/MWb+Zipc2c7THKueKSBFUI9usZ+kOUtuTk7R+ga6kyH1aM1jibN6Y3aQDUDfTikU1IJwoafZuRItyY35JMiyk3NJgzF8mlbYUpL3IK2KFJj0HxUcVaBKCPHtQoTrC7c9K0mLtx/SFX7Sl+gq9J6zeVH62fjQiPJ9+0V5OS/OYjxRRH16VZi7YWM+KQMYu9b2SK8O31AhUbUe97kL+3RxMWKoXeYnlDgNXU5JhqwJgy2cU1pnblicytRecQuSwCZ529Gz/h55dBfWL5sYg==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(366004)(396003)(376002)(136003)(39860400002)(346002)(5660300002)(6496006)(2906002)(54906003)(478600001)(33716001)(66476007)(66556008)(66946007)(85182001)(8936002)(26005)(6666004)(6916009)(86362001)(956004)(16526019)(186003)(4326008)(107886003)(9686003)(6486002)(316002)(83380400001)(8676002);DIR:OUT;SFP:1101;
X-MS-Exchange-CrossTenant-Network-Message-Id: 95239af7-2a32-45e7-3992-08d8cd0ac742
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Feb 2021 14:55:45.1568
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: wsHy6RoV2pnpIZldE+x7V6tpjDcZiUzfOOdaCYZ7ZAAuABHrdw2vmbFiodVQDGmmH7h660Qf5m2+jFqh92JawQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4602
X-OriginatorOrg: citrix.com

On Thu, Jan 14, 2021 at 04:04:11PM +0100, Jan Beulich wrote:
> The "guest" variants are intended to work with (potentially) fully guest
> controlled addresses, while the "unsafe" variants are not. (For
> descriptor table accesses the low bits of the addresses may still be
> guest controlled, but this still won't allow speculation to "escape"
> into unwanted areas.)

Descriptor table is also in guest address space, and hence would be
fine using the _guest accessors? (even if not in guest control and
thus unsuitable as a speculation vector)

> Subsequently we will want them to have different
> behavior, so as first step identify which one is which. For now, both
> groups of constructs alias one another.
> 
> Double underscore prefixes are retained only on __{get,put}_guest(), to
> allow still distinguishing them from their "checking" counterparts once
> they also get renamed (to {get,put}_guest()).
> 
> Since for them it's almost a full re-write, move what becomes
> {get,put}_unsafe_size() into the "common" uaccess.h (x86_64/*.h should
> disappear at some point anyway).
> 
> In __copy_to_user() one of the two casts in each put_guest_size()
> invocation gets dropped. They're not needed and did break symmetry with
> __copy_from_user().
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/xen/arch/x86/mm/shadow/multi.c
> +++ b/xen/arch/x86/mm/shadow/multi.c
> @@ -776,9 +776,9 @@ shadow_write_entries(void *d, void *s, i
>      /* Because we mirror access rights at all levels in the shadow, an
>       * l2 (or higher) entry with the RW bit cleared will leave us with
>       * no write access through the linear map.
> -     * We detect that by writing to the shadow with __put_user() and
> +     * We detect that by writing to the shadow with put_unsafe() and
>       * using map_domain_page() to get a writeable mapping if we need to. */
> -    if ( __put_user(*dst, dst) )
> +    if ( put_unsafe(*dst, dst) )
>      {
>          perfc_incr(shadow_linear_map_failed);
>          map = map_domain_page(mfn);
> --- a/xen/arch/x86/pv/emul-gate-op.c
> +++ b/xen/arch/x86/pv/emul-gate-op.c
> @@ -40,7 +40,7 @@ static int read_gate_descriptor(unsigned
>           ((gate_sel >> 3) + !is_pv_32bit_vcpu(v) >=
>            (gate_sel & 4 ? v->arch.pv.ldt_ents
>                          : v->arch.pv.gdt_ents)) ||
> -         __get_user(desc, pdesc) )
> +         get_unsafe(desc, pdesc) )
>          return 0;
>  
>      *sel = (desc.a >> 16) & 0x0000fffc;
> @@ -59,7 +59,7 @@ static int read_gate_descriptor(unsigned
>      {
>          if ( (*ar & 0x1f00) != 0x0c00 ||
>               /* Limit check done above already. */
> -             __get_user(desc, pdesc + 1) ||
> +             get_unsafe(desc, pdesc + 1) ||
>               (desc.b & 0x1f00) )
>              return 0;
>  
> @@ -294,7 +294,7 @@ void pv_emulate_gate_op(struct cpu_user_
>          { \
>              --stkp; \
>              esp -= 4; \
> -            rc = __put_user(item, stkp); \
> +            rc = __put_guest(item, stkp); \
>              if ( rc ) \
>              { \
>                  pv_inject_page_fault(PFEC_write_access, \
> @@ -362,7 +362,7 @@ void pv_emulate_gate_op(struct cpu_user_
>                      unsigned int parm;
>  
>                      --ustkp;
> -                    rc = __get_user(parm, ustkp);
> +                    rc = __get_guest(parm, ustkp);
>                      if ( rc )
>                      {
>                          pv_inject_page_fault(0, (unsigned long)(ustkp + 1) - rc);
> --- a/xen/arch/x86/pv/emulate.c
> +++ b/xen/arch/x86/pv/emulate.c
> @@ -34,13 +34,13 @@ int pv_emul_read_descriptor(unsigned int
>      if ( sel < 4 ||
>           /*
>            * Don't apply the GDT limit here, as the selector may be a Xen
> -          * provided one. __get_user() will fail (without taking further
> +          * provided one. get_unsafe() will fail (without taking further
>            * action) for ones falling in the gap between guest populated
>            * and Xen ones.
>            */
>           ((sel & 4) && (sel >> 3) >= v->arch.pv.ldt_ents) )
>          desc.b = desc.a = 0;
> -    else if ( __get_user(desc, gdt_ldt_desc_ptr(sel)) )
> +    else if ( get_unsafe(desc, gdt_ldt_desc_ptr(sel)) )
>          return 0;
>      if ( !insn_fetch )
>          desc.b &= ~_SEGMENT_L;
> --- a/xen/arch/x86/pv/iret.c
> +++ b/xen/arch/x86/pv/iret.c
> @@ -114,15 +114,15 @@ unsigned int compat_iret(void)
>      regs->rsp = (u32)regs->rsp;
>  
>      /* Restore EAX (clobbered by hypercall). */
> -    if ( unlikely(__get_user(regs->eax, (u32 *)regs->rsp)) )
> +    if ( unlikely(__get_guest(regs->eax, (u32 *)regs->rsp)) )
>      {
>          domain_crash(v->domain);
>          return 0;
>      }
>  
>      /* Restore CS and EIP. */
> -    if ( unlikely(__get_user(regs->eip, (u32 *)regs->rsp + 1)) ||
> -        unlikely(__get_user(regs->cs, (u32 *)regs->rsp + 2)) )
> +    if ( unlikely(__get_guest(regs->eip, (u32 *)regs->rsp + 1)) ||
> +        unlikely(__get_guest(regs->cs, (u32 *)regs->rsp + 2)) )
>      {
>          domain_crash(v->domain);
>          return 0;
> @@ -132,7 +132,7 @@ unsigned int compat_iret(void)
>       * Fix up and restore EFLAGS. We fix up in a local staging area
>       * to avoid firing the BUG_ON(IOPL) check in arch_get_info_guest.
>       */
> -    if ( unlikely(__get_user(eflags, (u32 *)regs->rsp + 3)) )
> +    if ( unlikely(__get_guest(eflags, (u32 *)regs->rsp + 3)) )
>      {
>          domain_crash(v->domain);
>          return 0;
> @@ -164,16 +164,16 @@ unsigned int compat_iret(void)
>          {
>              for (i = 1; i < 10; ++i)
>              {
> -                rc |= __get_user(x, (u32 *)regs->rsp + i);
> -                rc |= __put_user(x, (u32 *)(unsigned long)ksp + i);
> +                rc |= __get_guest(x, (u32 *)regs->rsp + i);
> +                rc |= __put_guest(x, (u32 *)(unsigned long)ksp + i);
>              }
>          }
>          else if ( ksp > regs->esp )
>          {
>              for ( i = 9; i > 0; --i )
>              {
> -                rc |= __get_user(x, (u32 *)regs->rsp + i);
> -                rc |= __put_user(x, (u32 *)(unsigned long)ksp + i);
> +                rc |= __get_guest(x, (u32 *)regs->rsp + i);
> +                rc |= __put_guest(x, (u32 *)(unsigned long)ksp + i);
>              }
>          }
>          if ( rc )
> @@ -189,7 +189,7 @@ unsigned int compat_iret(void)
>              eflags &= ~X86_EFLAGS_IF;
>          regs->eflags &= ~(X86_EFLAGS_VM|X86_EFLAGS_RF|
>                            X86_EFLAGS_NT|X86_EFLAGS_TF);
> -        if ( unlikely(__put_user(0, (u32 *)regs->rsp)) )
> +        if ( unlikely(__put_guest(0, (u32 *)regs->rsp)) )
>          {
>              domain_crash(v->domain);
>              return 0;
> @@ -205,8 +205,8 @@ unsigned int compat_iret(void)
>      else if ( ring_1(regs) )
>          regs->esp += 16;
>      /* Return to ring 2/3: restore ESP and SS. */
> -    else if ( __get_user(regs->ss, (u32 *)regs->rsp + 5) ||
> -              __get_user(regs->esp, (u32 *)regs->rsp + 4) )
> +    else if ( __get_guest(regs->ss, (u32 *)regs->rsp + 5) ||
> +              __get_guest(regs->esp, (u32 *)regs->rsp + 4) )
>      {
>          domain_crash(v->domain);
>          return 0;
> --- a/xen/arch/x86/traps.c
> +++ b/xen/arch/x86/traps.c
> @@ -274,7 +274,7 @@ static void compat_show_guest_stack(stru
>      {
>          if ( (((long)stack - 1) ^ ((long)(stack + 1) - 1)) & mask )
>              break;
> -        if ( __get_user(addr, stack) )
> +        if ( get_unsafe(addr, stack) )
>          {
>              if ( i != 0 )
>                  printk("\n    ");
> @@ -343,7 +343,7 @@ static void show_guest_stack(struct vcpu
>      {
>          if ( (((long)stack - 1) ^ ((long)(stack + 1) - 1)) & mask )
>              break;
> -        if ( __get_user(addr, stack) )
> +        if ( get_unsafe(addr, stack) )

Shouldn't accessing the guest stack use the _guest accessors?

Or has this address been verified by Xen and not in indirect control of
the guest, and thus can't be used for speculation purposes?

I feel like this should be using the _guest accessors anyway, as the
guest stack is an address in guest space?

Thanks, Roger.
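
[Editorial note: as the quoted commit message says, this patch only renames call sites; both families still alias the old behaviour. A toy model of that first step, with simplified stand-ins rather than the real exception-fixup asm macros:]

```c
#include <string.h>

/* Toy stand-in for the pre-split accessor: copy one object and
 * report 0 for success (the real macros use fixup-protected asm
 * and can report a fault). */
#define toy_get_user(x, ptr) (memcpy(&(x), (ptr), sizeof(x)), 0)

/* First step of the split: identify which call site is which;
 * for now both variants alias the old behaviour. */
#define get_unsafe(x, ptr)   toy_get_user(x, ptr)
#define __get_guest(x, ptr)  toy_get_user(x, ptr)
```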


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 14:57:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 14:57:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83273.154530 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9USD-0003vi-GN; Tue, 09 Feb 2021 14:57:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83273.154530; Tue, 09 Feb 2021 14:57:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9USD-0003vb-D4; Tue, 09 Feb 2021 14:57:25 +0000
Received: by outflank-mailman (input) for mailman id 83273;
 Tue, 09 Feb 2021 14:57:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=BG3/=HL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l9USB-0003vW-Oi
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 14:57:23 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 367524fb-4ee8-4538-8a9c-996a97ee57b9;
 Tue, 09 Feb 2021 14:57:22 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id DD8CCB1D0;
 Tue,  9 Feb 2021 14:57:21 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 367524fb-4ee8-4538-8a9c-996a97ee57b9
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612882642; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=6hER2UZhFNmNNsYtEHAf4viM09RvDVnXS8darKIfCYQ=;
	b=BIMuKyWhfT1kRkKwBe/Yz+QFs9U0ajdjMsI61FRBQqusvGOSv6IUL2YxxYSnW0FlgnggFg
	J/n186WxslJUA87GHFdz0CvXcs7T6OgFq2qTbu/rEcVv3dvgbfwSoJQi8knmfom0arv347
	gdY1oEFeeix7xgVp5m/OO6COPp2Gs8M=
Subject: Re: [PATCH 02/17] x86: split __{get,put}_user() into "guest" and
 "unsafe" variants
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
References: <4f1975a9-bdd9-f556-9db5-eb6c428f258f@suse.com>
 <13d1d621-21db-0e59-6603-2b22b6a9d180@suse.com>
 <YB1nhBeMRVGyO8Fk@Air-de-Roger>
 <d23dc40c-3b37-ade2-f987-4f79b06901b1@suse.com>
 <YB1v60CuOdhxFwNy@Air-de-Roger>
 <199d2681-9704-8804-d3c3-d8ad24fca137@suse.com>
 <YCKJLbaTzD6YF/g5@Air-de-Roger>
 <1cf476b9-4ac1-9a12-7fdb-c898f02532f7@suse.com>
 <YCKgPro1yTtSSnLQ@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e86040e7-0c16-00c8-9b01-eac9b928ddc2@suse.com>
Date: Tue, 9 Feb 2021 15:57:22 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <YCKgPro1yTtSSnLQ@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 09.02.2021 15:46, Roger Pau Monné wrote:
> On Tue, Feb 09, 2021 at 02:15:18PM +0100, Jan Beulich wrote:
>> On 09.02.2021 14:07, Roger Pau Monné wrote:
>>> On Fri, Feb 05, 2021 at 05:26:33PM +0100, Jan Beulich wrote:
>>>> On 05.02.2021 17:18, Roger Pau Monné wrote:
>>>>> On Fri, Feb 05, 2021 at 05:13:22PM +0100, Jan Beulich wrote:
>>>>>> On 05.02.2021 16:43, Roger Pau Monné wrote:
>>>>>>> On Thu, Jan 14, 2021 at 04:04:11PM +0100, Jan Beulich wrote:
>>>>>>>> The "guest" variants are intended to work with (potentially) fully guest
>>>>>>>> controlled addresses, while the "unsafe" variants are not.
>>>>>>>
>>>>>>> Just to clarify, both work against user addresses, but guest variants
>>>>>>> need to be more careful because the guest provided address can also be
>>>>>>> modified?
>>>>>>>
>>>>>>> I'm trying to understand the difference between "fully guest
>>>>>>> controlled" and "guest controlled".
>>>>>>
>>>>>> Not exactly, no. "unsafe" means access to anything which may
>>>>>> fault, guest controlled or not. do_invalid_op()'s reading of
>>>>>> the insn stream is a good example - the faulting insn there
>>>>>> isn't guest controlled at all, but we still want to be careful
>>>>>> when trying to read these bytes, as we don't want to fully
>>>>>> trust %rip there.
>>>
>>> Oh, I see. It's possible that %rip points to an unmapped address
>>> there, and we need to be careful when reading, even if the value of
>>> %rip cannot be controlled by the guest and can legitimately point to
>>> Xen's address space.
>>>
>>>>> Would it make sense to treat everything as 'guest' accesses for the
>>>>> sake of not having this difference?
>>>>
>>>> That's what we've been doing until now. It is the purpose of
>>>> this change to allow the two to behave differently.
>>>>
>>>>> I think having two accessors is likely to cause confusion and could
>>>>> possibly lead to the wrong one being used in unexpected contexts. Does
>>>>> it add a too big performance penalty to always use the most
>>>>> restrictive one?
>>>>
>>>> The problem is the most restrictive one is going to be too
>>>> restrictive - we wouldn't be able to access Xen space anymore
>>>> e.g. from the place pointed at above as example. This is
>>>> because for guest accesses (but not for "unsafe" ones) we're
>>>> going to divert them into non-canonical space (and hence make
>>>> speculation impossible, as such an access would fault) if it
>>>> would touch Xen space.
>>>
>>> Yes, I understand now. I think it would have been helpful (for me) to
>>> have the first sentence as:
>>>
>>> The "guest" variants are intended to work with (potentially) fully guest
>>> controlled addresses, while the "unsafe" variants are expected to be
>>> used in order to access addresses not under the guest control, but
>>> that could trigger faults anyway (like accessing the instruction
>>> stream in do_invalid_op).
>>
>> I can use some of this, but in particular "access addresses not
>> under the guest control" isn't entirely correct. The question isn't
>> whether there's a guest control aspect, but which part of the
>> address space the addresses are in. See specifically x86/PV: use
>> get_unsafe() instead of copy_from_unsafe()" for two pretty good
>> examples. The address within the linear page tables are - in a
>> way at least - still somewhat guest controlled, but we're
>> deliberately accessing Xen virtual addresses there.
> 
> OK, could this be somehow added to the commit message then?
> 
> Maybe it would be better to have something like:
> 
> The "guest" variants are intended to work with addresses belonging to
> the guest address space, while the "unsafe" variants should be used
> for addresses that fall into the Xen address space.
> 
> I think it's important to list exactly how the distinction between the
> guest/unsafe accessors is made, or else it's impossible to review that
> the changes done here are correct.

I did change the paragraph to read

The "guest" variants are intended to work with (potentially) fully guest
controlled addresses, while the "unsafe" variants are intended to be
used in order to access addresses not (directly) under guest control,
within Xen's part of virtual address space. (For linear page table and
descriptor table accesses the low bits of the addresses may still be
guest controlled, but this still won't allow speculation to "escape"
into unwanted areas.) Subsequently we will want them to have distinct
behavior, so as first step identify which one is which. For now, both
groups of constructs alias one another.

earlier today. Is this clear enough? Relevant parts I've also mirrored
into patch 3's description.

Jan
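
[Editorial note: the planned divergence Jan mentions (guest accesses diverted into non-canonical space, so a speculative access into Xen space faults) can be sketched roughly as below; the constant and helper name are illustrative, not Xen's actual implementation.]

```c
#include <stdint.h>

#define HV_VIRT_START 0xffff800000000000UL  /* illustrative boundary */

/*
 * Sketch: if a "guest" access would touch Xen's half of the address
 * space, rewrite the address so bit 47 disagrees with bits 48-63,
 * making it non-canonical -- any access, architectural or speculative,
 * then faults instead of reaching hypervisor data.
 */
static uint64_t divert_guest_address(uint64_t addr)
{
    if (addr >= HV_VIRT_START)
        addr = (addr & 0x00007fffffffffffUL) | (1UL << 47);
    return addr;  /* guest-space addresses pass through unchanged */
}
```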


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 15:14:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 15:14:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83280.154551 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Uiy-0005rc-4j; Tue, 09 Feb 2021 15:14:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83280.154551; Tue, 09 Feb 2021 15:14:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Uiy-0005rV-0q; Tue, 09 Feb 2021 15:14:44 +0000
Received: by outflank-mailman (input) for mailman id 83280;
 Tue, 09 Feb 2021 15:14:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=BG3/=HL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l9Uiw-0005rQ-Im
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 15:14:42 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 398a33ec-0192-4276-a988-e8691c4c73cd;
 Tue, 09 Feb 2021 15:14:41 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id CE300B21A;
 Tue,  9 Feb 2021 15:14:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 398a33ec-0192-4276-a988-e8691c4c73cd
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612883680; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=U6qYsU4In/PbhGu50pUxSSjZI/9AictEFwYLXUfuVZg=;
	b=djzeuIbxCdOI0K1rlXfuzOffmqPJR4Svmm0VvWKzHGR8z1WNB74/dLnMNB7hv5+toL7fbJ
	SEMV6dJYztymzF6xxGodJ5e8F6tP+pYNco36eI+4NRoahLF0u5eh7t4awDQdLGuB+Nfo+V
	OceDJXyzxbgWivcbm65WhlD6/OwlUcc=
Subject: Re: [PATCH 02/17] x86: split __{get,put}_user() into "guest" and
 "unsafe" variants
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
References: <4f1975a9-bdd9-f556-9db5-eb6c428f258f@suse.com>
 <13d1d621-21db-0e59-6603-2b22b6a9d180@suse.com>
 <YCKibN0mjROss4M4@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <11d58555-97d2-0489-b123-cbcf084a0094@suse.com>
Date: Tue, 9 Feb 2021 16:14:41 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <YCKibN0mjROss4M4@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 09.02.2021 15:55, Roger Pau Monné wrote:
> On Thu, Jan 14, 2021 at 04:04:11PM +0100, Jan Beulich wrote:
>> The "guest" variants are intended to work with (potentially) fully guest
>> controlled addresses, while the "unsafe" variants are not. (For
>> descriptor table accesses the low bits of the addresses may still be
>> guest controlled, but this still won't allow speculation to "escape"
>> into unwanted areas.)
> 
> Descriptor table is also in guest address space, and hence would be
> fine using the _guest accessors? (even if not in guest control and
> thus unsuitable as a speculation vector)

No, we don't access descriptor tables in guest space, I don't
think. See read_gate_descriptor() for an example. After all PV
guests don't even have the full concept of self-managed (in
their VA space) descriptor tables (GDT gets specified in terms
of frames, while LDT gets specified in terms of (VA,size)
tuples, but just for Xen to read the underlying page table
entries upon first access).

>> --- a/xen/arch/x86/traps.c
>> +++ b/xen/arch/x86/traps.c
>> @@ -274,7 +274,7 @@ static void compat_show_guest_stack(stru
>>      {
>>          if ( (((long)stack - 1) ^ ((long)(stack + 1) - 1)) & mask )
>>              break;
>> -        if ( __get_user(addr, stack) )
>> +        if ( get_unsafe(addr, stack) )
>>          {
>>              if ( i != 0 )
>>                  printk("\n    ");
>> @@ -343,7 +343,7 @@ static void show_guest_stack(struct vcpu
>>      {
>>          if ( (((long)stack - 1) ^ ((long)(stack + 1) - 1)) & mask )
>>              break;
>> -        if ( __get_user(addr, stack) )
>> +        if ( get_unsafe(addr, stack) )
> 
> Shouldn't accessing the guest stack use the _guest accessors?

Hmm, yes indeed.

> Or has this address been verified by Xen and not in indirect control of
> the guest, and thus can't be used for speculation purposes?
> 
> I feel like this should be using the _guest accessors anyway, as the
> guest stack is an address in guest space?

I think this being a debugging function only, not directly
accessible by guests, is what made me think speculation is
not an issue here and hence the "unsafe" variants are fine
to use (they're slightly cheaper after all, once the
subsequent changes are in place). But I guess I had better
switch these two around.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 15:24:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 15:24:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83282.154562 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Us9-0006rd-2n; Tue, 09 Feb 2021 15:24:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83282.154562; Tue, 09 Feb 2021 15:24:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Us8-0006rW-VD; Tue, 09 Feb 2021 15:24:12 +0000
Received: by outflank-mailman (input) for mailman id 83282;
 Tue, 09 Feb 2021 15:24:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=t2YR=HL=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1l9Us7-0006rR-Ln
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 15:24:11 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1fb090c6-1f85-413d-8a7c-95f85362d838;
 Tue, 09 Feb 2021 15:24:10 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1fb090c6-1f85-413d-8a7c-95f85362d838
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612884250;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=5b0dwn9pswPkwah+l5lRFyOp1xbIXcc/MDq+HGqrHlY=;
  b=LJGpUC755HaOhCK09wpEOT1uoX8D22K04IgDZu3/9NIHEfrWGiBS8PVF
   ttjUoWPEXkKdRsI+Ky4rFjOiZIb6s/1kDMKV/ADthfw65iCqwqHZEV1LI
   8SKywfMuwllZ8AEpbEODlXKa8c6SCmvM/h2NbaxjQ/dEzo8hE2Gdou8Bl
   0=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: vLE7AGnVPzj1iO3iSJyMVmtmxpAuofefivjs8zk34hsU9dSJCQ4wlK6ETNYHOL7LMVWRj+dgYQ
 +fpw7Mf8V2zWi1cjnCjVA69HlWbyRRENLLnMiMas1Agf1R3UqBfxkOPupn+1IX/XNAo4gw85f1
 z4ZVYYD+LqKs3qB38Mpq766y1my1EjcSawh39bd130mKy2tJosrTrG2gGdK9Wk340NWYUquQoj
 pAIxy9VYv6NtKJmwjFSTpAI8d+EGZkUclNcXVb3AJix28R4qiKRAc56L7RSVxlvFlgYWk6Ovip
 l1I=
X-SBRS: 5.2
X-MesageID: 38222665
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,165,1610427600"; 
   d="scan'208";a="38222665"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OQ+WDk5p/xbqJMuFjLqkQw7FgyZVQjhc9LwSC9cvW55VVJlaG5/rLPJqiDl9vkTaWMl9ZR5fZ4Kzu658/o6VSAW8+vxyvyscznRR5S/6fF1N6u8EZbJeqh3dIZtt9iUVx/BODl06MrkmdkH4aSZfFp+ONu63J8b9Lco0Bl78wT2w5am0Hi66+D4MkmHHop/Iwc8bHPehRW4uqhRIqY3NAssjolNZW6BGUjkssrSkRtrXclFzyg8jTxW6yg6s8cu5vNeXaUywAxbZi63CU3Bh4y+p6Bf7KMJu00GL7Mlxp7T7Xj65SquEo4zPWtrgny8+NI9Di4anojdBq+xQKjt8zQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=a8O2/S5CF5RmqGA34mAkw1tXkAfl66NNjHoTE0U6S1s=;
 b=UaXNahui+gxHzd2N3GZg1WP8g+jFa19nG4n9ypvVhTpnOkGLRCNAGlRm8XmP9I7iZm+cKzuYx4sZb5jqzqD2JCaCmbFYQKITv7CWRW6kG0G155E4Rjisa4WKpY9ynMA1Tm2Xv67qi8KCdFPhQmmmzWHdcxAyQmk+r0GprPsIQRlX5ozdNJKECVn9tdJA6pILcc6mdI2j/Oz/bVGKv5eq4ixi/Y+GOL+DnWJiF5x9wK2IU8+necYBJcMKvw5gla8CwoFXkMGXOvWAEccv1kIsGADa9DAuWS60lJT9TRNGl6m92QmbGUqXUM2NfKTToua+dkMZvJlRPAw7eUb5/Qdo6w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=a8O2/S5CF5RmqGA34mAkw1tXkAfl66NNjHoTE0U6S1s=;
 b=qD9rVzlRLzPP9NwaJ4raYAoxEaVJlYKO/P1CQzOUCJctlcFqYW3e5ehLK0WhoFbFWhwB2hOq64IpKhnsTEfiv35/q/Gtf5FqD1g2dOem3gQRZqO4bippm0GgrX1LVTabrce6OQbm5ISDyAHQhdmsL2q27Nm9ltnJ2JLVfJwQX4M=
Date: Tue, 9 Feb 2021 16:23:55 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Tim Deegan
	<tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
Subject: Re: [PATCH 02/17] x86: split __{get,put}_user() into "guest" and
 "unsafe" variants
Message-ID: <YCKpC1IgLuscSFK3@Air-de-Roger>
References: <4f1975a9-bdd9-f556-9db5-eb6c428f258f@suse.com>
 <13d1d621-21db-0e59-6603-2b22b6a9d180@suse.com>
 <YB1nhBeMRVGyO8Fk@Air-de-Roger>
 <d23dc40c-3b37-ade2-f987-4f79b06901b1@suse.com>
 <YB1v60CuOdhxFwNy@Air-de-Roger>
 <199d2681-9704-8804-d3c3-d8ad24fca137@suse.com>
 <YCKJLbaTzD6YF/g5@Air-de-Roger>
 <1cf476b9-4ac1-9a12-7fdb-c898f02532f7@suse.com>
 <YCKgPro1yTtSSnLQ@Air-de-Roger>
 <e86040e7-0c16-00c8-9b01-eac9b928ddc2@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <e86040e7-0c16-00c8-9b01-eac9b928ddc2@suse.com>
X-ClientProxiedBy: MRXP264CA0035.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:14::23) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 8188be57-c6b5-457e-a62f-08d8cd0ebe0a
X-MS-TrafficTypeDiagnostic: DM6PR03MB4137:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB41375067B26D7A9034244B118F8E9@DM6PR03MB4137.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 8188be57-c6b5-457e-a62f-08d8cd0ebe0a
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Feb 2021 15:24:07.6730
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: gWoQyCun5Oq2Iwz+tSxker6cJB8hAJyPu0BCnnQaklI5C8fP2/H044oirOxu2Rya3oNGGw1l0I7lymabYgm2pw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4137
X-OriginatorOrg: citrix.com

On Tue, Feb 09, 2021 at 03:57:22PM +0100, Jan Beulich wrote:
> I did change the paragraph to read
> 
> The "guest" variants are intended to work with (potentially) fully guest
> controlled addresses, while the "unsafe" variants are intended to be
> used in order to access addresses not (directly) under guest control,
> within Xen's part of virtual address space. (For linear page table and
> descriptor table accesses the low bits of the addresses may still be
> guest controlled, but this still won't allow speculation to "escape"
> into unwanted areas.) Subsequently we will want them to have distinct
> behavior, so as first step identify which one is which. For now, both
> groups of constructs alias one another.
> 
> earlier today. Is this clear enough? Relevant parts I've also mirrored
> into patch 3's description.

Yes, that does indeed seem clearer, thanks.

Roger.
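The parenthetical in the reworded paragraph (guest-controlled low bits that still cannot "escape") can be modeled in miniature: a guest-chosen index is masked into a fixed-size table that lives entirely on the Xen side, so even under branch misprediction the access stays inside the table. Table size and names here are hypothetical:

```c
#include <assert.h>
#include <stdint.h>

/* Toy descriptor table living entirely in "Xen" address space. */
#define NR_DESCS 16u
static uint64_t desc_table[NR_DESCS];

/* The guest picks the entry, so the low bits of the resulting address
 * are guest controlled, but masking the index into the fixed-size table
 * means speculation cannot reach outside it. That is why such accesses
 * can safely use the cheaper "unsafe" accessors. */
static uint64_t read_desc_sketch(unsigned int sel)
{
    return desc_table[sel & (NR_DESCS - 1)];
}
```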


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 15:27:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 15:27:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83284.154575 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Uve-00072g-Lt; Tue, 09 Feb 2021 15:27:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83284.154575; Tue, 09 Feb 2021 15:27:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Uve-00072Z-Il; Tue, 09 Feb 2021 15:27:50 +0000
Received: by outflank-mailman (input) for mailman id 83284;
 Tue, 09 Feb 2021 15:27:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=t2YR=HL=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1l9Uvd-00072R-23
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 15:27:49 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c742a8ba-f49b-49d5-91d2-40533d1d09f9;
 Tue, 09 Feb 2021 15:27:47 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c742a8ba-f49b-49d5-91d2-40533d1d09f9
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612884467;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=HXZLslgFJrdRiAT2A/Enh1tqqq9va1VCA0pmgs4xyoA=;
  b=dEscJflqfxTJWfL+ncDa+jK/ECu/XZKgc52zqYjLD8BnExlhECaHzA+y
   x0FLZ0j2HU358eiGQq4DbNK+Kb6l35H93kPmj+FFtXyYZwDOydtBYThtT
   qq6u56XAIuFrKo+jaJO1eVgGHvck/BrS5K4CGvViy44RTfFwxXc2YNwBr
   Q=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: AhVPYfQI43N4hv/yeooexhfuycttd28VvyHBqCFz+HA1QtXS08EPntvQJxcPOLZjBAxrhbu17L
 NHsivAOAZ53R9nffaJ+LPRttLZ1YMbQS3O5A0B6KBfTUZszGy70nSGbhodIYKuipIAVENS6xwJ
 8mDfMXfVYPI3ccy5jNiqjv+0BhaYc0Wg37/4SDSEmfHkDJRpOwFzzamyDMIQ6ZD3KM0xQm2vO9
 w84OG7vJLtOajF9h3whxYg+aZEjmAEl9FDiZvU3MycY5BJqoH/yh5vsPCRjcFpY58kTWZGqXKy
 LPU=
X-SBRS: 5.2
X-MesageID: 36867672
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,165,1610427600"; 
   d="scan'208";a="36867672"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Lyudzkh3MvF5iNBPNF+9E8EkcYgg3WoE75Ps7PuPcyPmE+s3UFuDMvmMAZ9RpTLkXLrY4Ty8TOFaCPNf2mPOMn7koFlHNDGKcuy5xdaM1xAb4JxhN/uV+C1dhiRZ5fIBxiXAh+i8Tk8Q1qOHq4hJJqXhJEzjLqH0pdeBsV/lTWkEFQwWgQMOkQ1ojmbnE4EvRlPCQ+jPI2NcDEa10tpglJ3Qw43eIhAnMss/gdLkbtbAsc4Ajkt6VjCWzz8ky92HFN54PSsolPhovva+CJshFuX3JodpRponMxGAU1vf422vnsfETTUSyWhQKq09AlNeYTIlak1jvM7liJdv41W5SQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=75TnBMT0TZ+oFnnZVxQ94PIkDAM1wT+CN9/UN1di+dk=;
 b=aEkfJFqK+oodA5vb5Lbr8kngTZE7pmdvXYFwxkZuM0lCUtzSm6xb0CdXaAGjreG0O7ezDdct/D9dRTjHunIHOs51hXSgNMTmS2fS6VFU5ylikVvjXJhPWrqL2O1XlSn2fUhmRO02B9F9XZT+/RwwCpGc2DA2jysZXZkKuT9FN+6DMRBBVJVvEHtsJKcNytteJsx1iREPzds3IDuqrYuykHvCbXa4KhanQ2lqSLsgBNPXLgBndUTm3dAaiTOJVgWMTXgIPvJXDcyuKupOrgUes5HTaSvHT/v31BrW5MGx6z8+N4FMt7QhdhykxOI05wHYuV8/KF7VAjKOo7koDcIy7A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=75TnBMT0TZ+oFnnZVxQ94PIkDAM1wT+CN9/UN1di+dk=;
 b=D9UYryspZnFRgx9QefyYoI90bIH8ccKxMkphH2gqmip9ZTIzewcveqJ2E273pT9U1foIHvWRUZJDh8xbixnHbDh23dwUpXVYkvhPIDySmFyTjUzTTD/4Gh6YqX5Y5kfebMJM9KtuPpqYen/hDVdPu9p3fFTZPryAhmS+y7T2E5w=
Date: Tue, 9 Feb 2021 16:27:36 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Tim Deegan
	<tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
Subject: Re: [PATCH 02/17] x86: split __{get,put}_user() into "guest" and
 "unsafe" variants
Message-ID: <YCKp6Cm0wtja7D1i@Air-de-Roger>
References: <4f1975a9-bdd9-f556-9db5-eb6c428f258f@suse.com>
 <13d1d621-21db-0e59-6603-2b22b6a9d180@suse.com>
 <YCKibN0mjROss4M4@Air-de-Roger>
 <11d58555-97d2-0489-b123-cbcf084a0094@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <11d58555-97d2-0489-b123-cbcf084a0094@suse.com>
X-ClientProxiedBy: AM5PR1001CA0067.EURPRD10.PROD.OUTLOOK.COM
 (2603:10a6:206:15::44) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: d33bc5ec-06c8-4301-2c06-08d8cd0f3ded
X-MS-TrafficTypeDiagnostic: DM6PR03MB5324:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB5324AE2CFBB743401741375E8F8E9@DM6PR03MB5324.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: d33bc5ec-06c8-4301-2c06-08d8cd0f3ded
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Feb 2021 15:27:42.2023
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: huNdmsUSoCUQ8f4YVAkqxitq0eJamLlRLJrOqC+41eptxRTW15+CWOkUxgASnCNz0VliVg/HtSa6PN14I2wSIA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB5324
X-OriginatorOrg: citrix.com

On Tue, Feb 09, 2021 at 04:14:41PM +0100, Jan Beulich wrote:
> On 09.02.2021 15:55, Roger Pau Monné wrote:
> > On Thu, Jan 14, 2021 at 04:04:11PM +0100, Jan Beulich wrote:
> >> The "guest" variants are intended to work with (potentially) fully guest
> >> controlled addresses, while the "unsafe" variants are not. (For
> >> descriptor table accesses the low bits of the addresses may still be
> >> guest controlled, but this still won't allow speculation to "escape"
> >> into unwanted areas.)
> > 
> > Descriptor table is also in guest address space, and hence would be
> > fine using the _guest accessors? (even if not in guest control and
> > thus unsuitable as an speculation vector)
> 
> No, we don't access descriptor tables in guest space, I don't
> think. See read_gate_descriptor() for an example. After all PV
> guests don't even have the full concept of self-managed (in
> their VA space) descriptor tables (GDT gets specified in terms
> of frames, while LDT gets specified in terms of (VA,size)
> tuples, but just for Xen to read the underlying page table
> entries upon 1st access).

I see, read_gate_descriptor uses gdt_ldt_desc_ptr which points into
the per-domain Xen virt space, so it's indeed an address in Xen
address space. I realize it doesn't make sense for the descriptor
table to be in guest address space, as it's not fully under guest
control.
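As a rough data-layout sketch of the PV model Jan describes (GDT handed over as a list of frames, LDT as a (VA,size) pair resolved through the guest page tables on first access); all field names are illustrative, not Xen's:

```c
#include <assert.h>
#include <stdint.h>

/* PV guests hand Xen their GDT as a list of machine frames... */
struct pv_gdt_sketch {
    uint64_t frames[16];     /* machine frame numbers backing the GDT */
    unsigned int nr_frames;
};

/* ...while the LDT is specified as a (virtual address, size) pair that
 * Xen resolves via the guest page tables on first access. */
struct pv_ldt_sketch {
    uint64_t base_va;        /* guest virtual address of the LDT */
    unsigned int nr_ents;    /* number of 8-byte descriptor entries */
};

/* Byte size of the LDT derived from its entry count. */
static uint64_t ldt_bytes(const struct pv_ldt_sketch *l)
{
    return (uint64_t)l->nr_ents * 8;
}
```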

> >> --- a/xen/arch/x86/traps.c
> >> +++ b/xen/arch/x86/traps.c
> >> @@ -274,7 +274,7 @@ static void compat_show_guest_stack(stru
> >>      {
> >>          if ( (((long)stack - 1) ^ ((long)(stack + 1) - 1)) & mask )
> >>              break;
> >> -        if ( __get_user(addr, stack) )
> >> +        if ( get_unsafe(addr, stack) )
> >>          {
> >>              if ( i != 0 )
> >>                  printk("\n    ");
> >> @@ -343,7 +343,7 @@ static void show_guest_stack(struct vcpu
> >>      {
> >>          if ( (((long)stack - 1) ^ ((long)(stack + 1) - 1)) & mask )
> >>              break;
> >> -        if ( __get_user(addr, stack) )
> >> +        if ( get_unsafe(addr, stack) )
> > 
> > Shouldn't accessing the guest stack use the _guest accessors?
> 
> Hmm, yes indeed.
> 
> > Or has this address been verified by Xen and is not in direct control of
> > the guest, and thus can't be used for speculation purposes?
> > 
> > I feel like this should be using the _guest accessors anyway, as the
> > guest stack is an address in guest space?
> 
> I think this being a debugging function only, not directly
> accessible by guests, is what made me think speculation is
> not an issue here, and hence that the "unsafe" variants are fine
> to use (they're slightly cheaper, after all, once the
> subsequent changes are in place). But I guess I'd better
> switch these two around.

With that:

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 15:28:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 15:28:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83285.154587 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9UwG-00078E-Ve; Tue, 09 Feb 2021 15:28:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83285.154587; Tue, 09 Feb 2021 15:28:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9UwG-000787-Rx; Tue, 09 Feb 2021 15:28:28 +0000
Received: by outflank-mailman (input) for mailman id 83285;
 Tue, 09 Feb 2021 15:28:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1l9UwF-00077q-0W
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 15:28:27 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l9UwC-0000to-Ln; Tue, 09 Feb 2021 15:28:24 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l9UwC-0007gX-9V; Tue, 09 Feb 2021 15:28:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From;
	bh=EIpfkEBAlcm//25TrsQzpn+MKemTXRwxfnqy6YF6Oj0=; b=nQc2NjtVgDfQ3i7O8ci9/9LfGY
	K7umk4ka3BTksaG+wPTU9nteLBjhGmgt0c5+Gs2sI0Q8xGC6re4FoEpT8qqWnDDk0WvvWm/0BNCBX
	pHyOOjTHNSa35kAIVNPArXe+ZX4sZSyAPdbqVuatpxC44SnLHGeKereLIT+MwgTroA4o=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: hongyxia@amazon.co.uk,
	iwj@xenproject.org,
	Julien Grall <jgrall@amazon.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	Paul Durrant <paul@xen.org>,
	Kevin Tian <kevin.tian@intel.com>
Subject: [for-4.15][PATCH v2 0/5] xen/iommu: Collection of bug fixes for IOMMU teardown
Date: Tue,  9 Feb 2021 15:28:11 +0000
Message-Id: <20210209152816.15792-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1

From: Julien Grall <jgrall@amazon.com>

Hi all,

This series is a collection of bug fixes for the IOMMU teardown code.
All of them are candidates for 4.15, as they can either leak memory or
lead to host crashes/corruption.

This is sent directly to xen-devel because all the issues were either
introduced in 4.15 or occur in the domain creation code.

Cheers,

Julien Grall (5):
  xen/x86: p2m: Don't map the special pages in the IOMMU page-tables
  xen/iommu: Check if the IOMMU was initialized before tearing down
  xen/iommu: iommu_map: Don't crash the domain if it is dying
  xen/iommu: x86: Don't leak the IOMMU page-tables
  xen/iommu: x86: Clear the root page-table before freeing the
    page-tables

 xen/drivers/passthrough/amd/pci_amd_iommu.c | 12 +++++-
 xen/drivers/passthrough/iommu.c             |  9 ++++-
 xen/drivers/passthrough/vtd/iommu.c         | 12 +++++-
 xen/drivers/passthrough/x86/iommu.c         | 42 ++++++++++++++++++++-
 xen/include/asm-x86/p2m.h                   |  4 ++
 xen/include/xen/iommu.h                     |  1 +
 6 files changed, 76 insertions(+), 4 deletions(-)

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Feb 09 15:28:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 15:28:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83286.154593 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9UwH-00078m-Bk; Tue, 09 Feb 2021 15:28:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83286.154593; Tue, 09 Feb 2021 15:28:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9UwH-00078W-35; Tue, 09 Feb 2021 15:28:29 +0000
Received: by outflank-mailman (input) for mailman id 83286;
 Tue, 09 Feb 2021 15:28:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1l9UwF-00077w-Jx
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 15:28:27 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l9UwD-0000tq-Tk; Tue, 09 Feb 2021 15:28:25 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l9UwD-0007gX-KT; Tue, 09 Feb 2021 15:28:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=DDJbcY16/y5U53JjUe4ikLZjvgyi9SSn2g/0HtB+Inw=; b=S16UaSjNKVbDDvcp6CNg1HvSo
	X7fraRQzjcyIYmP2WyNFGLKbE2zl4DOolZXI5pIj3RWPD9NlBG7W1ydCmBdFI0Pd59350wgA+cpOA
	/V3rbONk4wgXhCjfDEFh7tXjHSVIyvBSUmkzYmAp0jhr7MkTC/F3Yvl4oHre/DyCaoTK8=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: hongyxia@amazon.co.uk,
	iwj@xenproject.org,
	Julien Grall <jgrall@amazon.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [for-4.15][PATCH v2 1/5] xen/x86: p2m: Don't map the special pages in the IOMMU page-tables
Date: Tue,  9 Feb 2021 15:28:12 +0000
Message-Id: <20210209152816.15792-2-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210209152816.15792-1-julien@xen.org>
References: <20210209152816.15792-1-julien@xen.org>

From: Julien Grall <jgrall@amazon.com>

Currently, the IOMMU page-tables will be populated early in the domain
creation if the hardware is able to virtualize the local APIC. However,
the IOMMU page tables will not be freed during early failure and will
result to a leak.

An assigned device should not need to DMA into the vLAPIC page, so we
can avoid to map the page in the IOMMU page-tables.

This statement is also true for any special pages (the vLAPIC page is
one of them). So to take the opportunity to prevent the mapping for all
of them.

Note that:
    - This matches the existing behavior for PV guests
    - This doesn't change the behavior when the P2M is shared with the
    IOMMU. IOW, the special pages will still be accessible by the
    device.

Suggested-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Julien Grall <jgrall@amazon.com>

---
    Changes in v2:
        - New patch
---
 xen/include/asm-x86/p2m.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 7d63f5787e62..1802545969b3 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -919,6 +919,10 @@ static inline unsigned int p2m_get_iommu_flags(p2m_type_t p2mt, mfn_t mfn)
 {
     unsigned int flags;
 
+    /* Don't map special pages in the IOMMU page-tables. */
+    if ( mfn_valid(mfn) && is_special_page(mfn_to_page(mfn)) )
+        return 0;
+
     switch( p2mt )
     {
     case p2m_ram_rw:
-- 
2.17.1
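The guard this patch adds to p2m_get_iommu_flags() can be sketched as a minimal, self-contained C model. Note that mfn_t, mfn_valid(), is_special_page() and the IOMMUF_* flags below are toy stand-ins for the Xen internals (with an arbitrary range standing in for special pages), not the real definitions:

```c
#include <stdbool.h>

/* Toy stand-ins for Xen's mfn type and predicates. */
typedef struct { unsigned long m; } mfn_t;

#define IOMMUF_readable  (1u << 0)
#define IOMMUF_writable  (1u << 1)

static bool mfn_valid(mfn_t mfn)       { return mfn.m != ~0UL; }
static bool is_special_page(mfn_t mfn) { return mfn.m >= 0xfe000UL; /* toy rule */ }

/* Mirrors the hunk: special pages get no IOMMU mapping at all. */
static unsigned int get_iommu_flags(mfn_t mfn)
{
    /* Don't map special pages in the IOMMU page-tables. */
    if ( mfn_valid(mfn) && is_special_page(mfn) )
        return 0;

    /* Ordinary RAM would fall through to the p2m type switch. */
    return IOMMUF_readable | IOMMUF_writable;
}
```

Returning 0 before the p2m type switch is what keeps the page out of the IOMMU page-tables regardless of its p2m type.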



From xen-devel-bounces@lists.xenproject.org Tue Feb 09 15:28:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 15:28:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83287.154599 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9UwH-00079W-Jy; Tue, 09 Feb 2021 15:28:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83287.154599; Tue, 09 Feb 2021 15:28:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9UwH-00079F-BH; Tue, 09 Feb 2021 15:28:29 +0000
Received: by outflank-mailman (input) for mailman id 83287;
 Tue, 09 Feb 2021 15:28:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1l9UwG-000782-A2
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 15:28:28 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l9UwF-0000tx-0v; Tue, 09 Feb 2021 15:28:27 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l9UwE-0007gX-OF; Tue, 09 Feb 2021 15:28:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=eQ35EFoowuVBzEra1/WxradtagWIfRx9v97DQCn3iWk=; b=OuY2hFpKMrvSysnl6fveqTD+K
	UPI1TUVam8Z+5kdBgHnR+ofBqXwNJqiU5jyLcj66pWGqrmRA22nhV/8NKFJJVGsiLGhAGY2B8R5xg
	yfSWe8ystK1ZdQAx6Kg8q3GACzaaYP8Cy+4nkdfPyDc7pbUYQKGEkMY5UruU2L6727Pwc=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: hongyxia@amazon.co.uk,
	iwj@xenproject.org,
	Julien Grall <jgrall@amazon.com>,
	Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>
Subject: [for-4.15][PATCH v2 2/5] xen/iommu: Check if the IOMMU was initialized before tearing down
Date: Tue,  9 Feb 2021 15:28:13 +0000
Message-Id: <20210209152816.15792-3-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210209152816.15792-1-julien@xen.org>
References: <20210209152816.15792-1-julien@xen.org>

From: Julien Grall <jgrall@amazon.com>

is_iommu_enabled() will return true even if the IOMMU has not been
initialized (e.g. the ops are not set).

In the case of an early failure in arch_domain_init(), the function
iommu_destroy_domain() will be called even if the IOMMU is not
initialized.

This will result in dereferencing the ops, which will be NULL, and a
host crash.

Fix the issue by checking that ops has been set before accessing it.

Fixes: 71e617a6b8f6 ("use is_iommu_enabled() where appropriate...")
Signed-off-by: Julien Grall <jgrall@amazon.com>

---
    Changes in v2:
        - Move the check into iommu_teardown() so we don't rely on
        arch_iommu_domain_init() to clean up its allocation on failure.
        - Fix typo in the commit message
---
 xen/drivers/passthrough/iommu.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 2358b6eb09f4..879d238bcd31 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -221,6 +221,13 @@ static void iommu_teardown(struct domain *d)
 {
     struct domain_iommu *hd = dom_iommu(d);
 
+    /*
+     * During early domain creation failure, we may reach here with the
+     * ops not yet initialized.
+     */
+    if ( !hd->platform_ops )
+        return;
+
     iommu_vcall(hd->platform_ops, teardown, d);
 }
 
-- 
2.17.1
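The fix above amounts to an early return when the platform ops were never set. A toy model of that control flow, with simplified structures that only mimic Xen's (the counter and names are illustrative):

```c
#include <stddef.h>

/* Simplified stand-ins for Xen's structures. */
struct iommu_ops    { void (*teardown)(void); };
struct domain_iommu { const struct iommu_ops *platform_ops; };

static int teardown_calls;
static void count_teardown(void) { teardown_calls++; }
static const struct iommu_ops toy_ops = { count_teardown };

static void iommu_teardown(struct domain_iommu *hd)
{
    /*
     * During early domain creation failure, we may reach here with
     * the ops not yet initialized: teardown must then be a no-op.
     */
    if ( !hd->platform_ops )
        return;

    hd->platform_ops->teardown();
}

/* Exercise both paths; returns the number of real teardowns performed. */
static int run_scenario(void)
{
    struct domain_iommu uninit = { NULL };      /* early-failure domain */
    struct domain_iommu init   = { &toy_ops };  /* fully initialized */

    iommu_teardown(&uninit);  /* must not dereference NULL */
    iommu_teardown(&init);    /* teardown runs exactly once */
    return teardown_calls;
}
```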



From xen-devel-bounces@lists.xenproject.org Tue Feb 09 15:28:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 15:28:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83288.154623 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9UwI-0007D3-TD; Tue, 09 Feb 2021 15:28:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83288.154623; Tue, 09 Feb 2021 15:28:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9UwI-0007Cs-Oz; Tue, 09 Feb 2021 15:28:30 +0000
Received: by outflank-mailman (input) for mailman id 83288;
 Tue, 09 Feb 2021 15:28:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1l9UwG-00078G-UV
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 15:28:28 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l9UwG-0000u7-4T; Tue, 09 Feb 2021 15:28:28 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l9UwF-0007gX-S3; Tue, 09 Feb 2021 15:28:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=Abq0PPrn9pqn8Kzw0uPNv4HuJCaFMRZ3qng4/qj2vjw=; b=55TSYWVkX0xe7FrDVESrW/6UE
	wGYnGSpzg5Qa+uVM5TPcDzoPUoP6QyOgp9vgW6Bshv81L+xdL8+69NptC7B67BfIXHVGZiSNOL1RA
	EwItmCZkQc/HmjFAqTOI3cIyuryUcWT65XgCp10viS6zdWAN9m+UZrVWqi9e/M1gQUgfs=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: hongyxia@amazon.co.uk,
	iwj@xenproject.org,
	Julien Grall <jgrall@amazon.com>,
	Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>
Subject: [for-4.15][PATCH v2 3/5] xen/iommu: iommu_map: Don't crash the domain if it is dying
Date: Tue,  9 Feb 2021 15:28:14 +0000
Message-Id: <20210209152816.15792-4-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210209152816.15792-1-julien@xen.org>
References: <20210209152816.15792-1-julien@xen.org>

From: Julien Grall <jgrall@amazon.com>

It is a bit pointless to crash a domain that is already dying. This will
become more of an annoyance with a follow-up change where page-table
allocation will be forbidden while the domain is dying.

Security-wise, there is no change, as the devices would still have access
to the IOMMU page-tables, even after the domain has crashed, until Xen
starts to relinquish the resources.

For x86, we rely on dom_iommu(d)->arch.mapping.lock to ensure
d->is_dying is correctly observed (a follow-up patch will hold it in the
relinquish path).

For Arm, there is still a small race possible. But there is so far no
failure specific to a domain dying.

Signed-off-by: Julien Grall <jgrall@amazon.com>

---

This was spotted when trying to destroy IOREQ servers while the domain
is dying. The code will try to add the entry back in the P2M and
therefore update the P2M (see arch_ioreq_server_disable() ->
hvm_add_ioreq_gfn()).

It should be possible to skip the mapping in hvm_add_ioreq_gfn(); however,
I haven't tried a patch yet because checking d->is_dying can be racy (I
can't find a proper lock).

Changes in v2:
    - Patch added
---
 xen/drivers/passthrough/iommu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 879d238bcd31..75419f20f76d 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -272,7 +272,7 @@ int iommu_map(struct domain *d, dfn_t dfn, mfn_t mfn,
                             flush_flags) )
                 continue;
 
-        if ( !is_hardware_domain(d) )
+        if ( !is_hardware_domain(d) && !d->is_dying )
             domain_crash(d);
 
         break;
-- 
2.17.1
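The one-line behavioral change reduces to a predicate: on a mapping failure, the domain is crashed only if it is neither the hardware domain nor already dying. A trivial sketch, with plain bools standing in for Xen's struct domain fields:

```c
#include <stdbool.h>

/*
 * Mirrors the patched condition in iommu_map():
 * crash only live, non-hardware domains on mapping failure.
 */
static bool should_crash_on_map_failure(bool is_hardware_domain, bool is_dying)
{
    return !is_hardware_domain && !is_dying;
}
```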



From xen-devel-bounces@lists.xenproject.org Tue Feb 09 15:28:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 15:28:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83289.154635 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9UwK-0007Ft-Ap; Tue, 09 Feb 2021 15:28:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83289.154635; Tue, 09 Feb 2021 15:28:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9UwK-0007FY-3C; Tue, 09 Feb 2021 15:28:32 +0000
Received: by outflank-mailman (input) for mailman id 83289;
 Tue, 09 Feb 2021 15:28:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1l9UwI-0007BU-39
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 15:28:30 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l9UwH-0000uI-88; Tue, 09 Feb 2021 15:28:29 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l9UwG-0007gX-Vo; Tue, 09 Feb 2021 15:28:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=LJMsVj+8JqORqU40yjLdihq8PI5kGbY5vaa1vtJvoyo=; b=1HwMrxON3gHEeOkxPa3M1UOf+
	PxJapha9kW1ZcdNlaab938I9SyajXZKb4wtLsi9UMdJvbYCsxZdlgkybphOLDLdLX8XJQB6ivNgvb
	PmT/kg9fLGCMKgIrqQjeML6vRPUaY/9cT6kxiR0SO9aj9RI9tJwQUR9UlabxCjRO/tMvI=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: hongyxia@amazon.co.uk,
	iwj@xenproject.org,
	Julien Grall <jgrall@amazon.com>,
	Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>
Subject: [for-4.15][PATCH v2 4/5] xen/iommu: x86: Don't leak the IOMMU page-tables
Date: Tue,  9 Feb 2021 15:28:15 +0000
Message-Id: <20210209152816.15792-5-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210209152816.15792-1-julien@xen.org>
References: <20210209152816.15792-1-julien@xen.org>

From: Julien Grall <jgrall@amazon.com>

The new IOMMU page-table allocator will release the pages when
relinquishing the domain's resources. However, this is not sufficient
when the domain is dying, because nothing prevents page-tables from
being allocated afterwards.

iommu_alloc_pgtable() now checks whether the domain is dying before
adding the page to the list. We rely on &hd->arch.pgtables.lock
to synchronize d->is_dying.

Take the opportunity to check in arch_iommu_domain_destroy() that all
the page-tables have been freed.

Signed-off-by: Julien Grall <jgrall@amazon.com>

---

There is one more bug, which will be solved in the next patch, as I felt
each needed its own long explanation.

Changes in v2:
    - Rework the approach
    - Move the patch earlier in the series
---
 xen/drivers/passthrough/x86/iommu.c | 36 ++++++++++++++++++++++++++++-
 1 file changed, 35 insertions(+), 1 deletion(-)

diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index cea1032b3d02..82d770107a47 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -149,6 +149,13 @@ int arch_iommu_domain_init(struct domain *d)
 
 void arch_iommu_domain_destroy(struct domain *d)
 {
+    /*
+     * There should be no page-tables left allocated by the time the
+     * domain is destroyed. Note that arch_iommu_domain_destroy() is
+     * called unconditionally, so pgtables may be uninitialized.
+     */
+    ASSERT(dom_iommu(d)->platform_ops == NULL ||
+           page_list_empty(&dom_iommu(d)->arch.pgtables.list));
 }
 
 static bool __hwdom_init hwdom_iommu_map(const struct domain *d,
@@ -267,6 +274,12 @@ int iommu_free_pgtables(struct domain *d)
     struct page_info *pg;
     unsigned int done = 0;
 
+    if ( !is_iommu_enabled(d) )
+        return 0;
+
+    /* After this barrier no new page allocations can occur. */
+    spin_barrier(&hd->arch.pgtables.lock);
+
     while ( (pg = page_list_remove_head(&hd->arch.pgtables.list)) )
     {
         free_domheap_page(pg);
@@ -284,6 +297,7 @@ struct page_info *iommu_alloc_pgtable(struct domain *d)
     unsigned int memflags = 0;
     struct page_info *pg;
     void *p;
+    bool alive = false;
 
 #ifdef CONFIG_NUMA
     if ( hd->node != NUMA_NO_NODE )
@@ -303,9 +317,29 @@ struct page_info *iommu_alloc_pgtable(struct domain *d)
     unmap_domain_page(p);
 
     spin_lock(&hd->arch.pgtables.lock);
-    page_list_add(pg, &hd->arch.pgtables.list);
+    /*
+     * The IOMMU page-tables are freed when relinquishing the domain, but
+     * nothing prevents allocations from happening afterwards. There is no
+     * valid reason to continue to update the IOMMU page-tables while the
+     * domain is dying.
+     *
+     * So prevent page-table allocation when the domain is dying.
+     *
+     * We rely on &hd->arch.pgtables.lock to synchronize d->is_dying.
+     */
+    if ( likely(!d->is_dying) )
+    {
+        alive = true;
+        page_list_add(pg, &hd->arch.pgtables.list);
+    }
     spin_unlock(&hd->arch.pgtables.lock);
 
+    if ( unlikely(!alive) )
+    {
+        free_domheap_page(pg);
+        pg = NULL;
+    }
+
     return pg;
 }
 
-- 
2.17.1
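The allocation pattern this patch introduces can be sketched in self-contained C. All types are simplified stand-ins (a counter replaces the page list, and comments mark where the real code takes hd->arch.pgtables.lock around the is_dying check):

```c
#include <stdbool.h>
#include <stdlib.h>

struct pgtables { int nr_pages; /* stands in for the page list */ };

static void *alloc_pgtable(struct pgtables *pt, bool is_dying)
{
    void *pg = malloc(4096);      /* stands in for alloc_domheap_page() */
    bool alive = false;

    if ( !pg )
        return NULL;

    /* spin_lock(&pt->lock) in the real code */
    if ( !is_dying )
    {
        alive = true;
        pt->nr_pages++;           /* stands in for page_list_add() */
    }
    /* spin_unlock(&pt->lock) */

    if ( !alive )
    {
        free(pg);                 /* never entered the list: free at once */
        pg = NULL;
    }

    return pg;
}

/* Exercise both paths: a live domain gets a tracked page, a dying one is refused. */
static bool demo_alloc_pattern(void)
{
    struct pgtables pt = { 0 };
    void *live = alloc_pgtable(&pt, false);
    void *dead = alloc_pgtable(&pt, true);
    bool ok = live != NULL && dead == NULL && pt.nr_pages == 1;

    free(live);
    return ok;
}
```

Freeing outside the critical section keeps the lock hold time short; the page is only ever freed when it was never added to the list, so no other CPU can see it.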



From xen-devel-bounces@lists.xenproject.org Tue Feb 09 15:28:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 15:28:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83290.154647 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9UwL-0007Jh-Tx; Tue, 09 Feb 2021 15:28:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83290.154647; Tue, 09 Feb 2021 15:28:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9UwL-0007JU-PH; Tue, 09 Feb 2021 15:28:33 +0000
Received: by outflank-mailman (input) for mailman id 83290;
 Tue, 09 Feb 2021 15:28:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1l9UwK-0007FU-1d
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 15:28:32 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l9UwI-0000uQ-J7; Tue, 09 Feb 2021 15:28:30 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l9UwI-0007gX-AV; Tue, 09 Feb 2021 15:28:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=1kLAf1Z7iULhueFYJ6h0q4KfgHh7gIb0TxMqFFD/Zl4=; b=A+Vz9/vfVRcaVb3gpeijZuu+g
	YHkRCdz2sHg/6KjLX5VVkb8B3LEuQjVPC+v3tBgZVZ3XyyQl/igp/YvPRBsWkDHMMCPZ1DxUzuvgr
	q8KdTpr+3wQEbywObS4H0iOpglMtfdTBFwz5wVmKvS5QGJtAP89w/aTsuMUTpFGPa0mTY=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: hongyxia@amazon.co.uk,
	iwj@xenproject.org,
	Julien Grall <jgrall@amazon.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Kevin Tian <kevin.tian@intel.com>,
	Paul Durrant <paul@xen.org>
Subject: [for-4.15][PATCH v2 5/5] xen/iommu: x86: Clear the root page-table before freeing the page-tables
Date: Tue,  9 Feb 2021 15:28:16 +0000
Message-Id: <20210209152816.15792-6-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210209152816.15792-1-julien@xen.org>
References: <20210209152816.15792-1-julien@xen.org>

From: Julien Grall <jgrall@amazon.com>

The new per-domain IOMMU page-table allocator will now free the
page-tables when the domain's resources are relinquished. However, the
root page-table (i.e. hd->arch.pg_maddr) will not be cleared.

Xen may access the IOMMU page-tables afterwards, at least in the case of
PV domains:

(XEN) Xen call trace:
(XEN)    [<ffff82d04025b4b2>] R iommu.c#addr_to_dma_page_maddr+0x12e/0x1d8
(XEN)    [<ffff82d04025b695>] F iommu.c#intel_iommu_unmap_page+0x5d/0xf8
(XEN)    [<ffff82d0402695f3>] F iommu_unmap+0x9c/0x129
(XEN)    [<ffff82d0402696a6>] F iommu_legacy_unmap+0x26/0x63
(XEN)    [<ffff82d04033c5c7>] F mm.c#cleanup_page_mappings+0x139/0x144
(XEN)    [<ffff82d04033c61d>] F put_page+0x4b/0xb3
(XEN)    [<ffff82d04033c87f>] F put_page_from_l1e+0x136/0x13b
(XEN)    [<ffff82d04033cada>] F devalidate_page+0x256/0x8dc
(XEN)    [<ffff82d04033d396>] F mm.c#_put_page_type+0x236/0x47e
(XEN)    [<ffff82d04033d64d>] F mm.c#put_pt_page+0x6f/0x80
(XEN)    [<ffff82d04033d8d6>] F mm.c#put_page_from_l2e+0x8a/0xcf
(XEN)    [<ffff82d04033cc27>] F devalidate_page+0x3a3/0x8dc
(XEN)    [<ffff82d04033d396>] F mm.c#_put_page_type+0x236/0x47e
(XEN)    [<ffff82d04033d64d>] F mm.c#put_pt_page+0x6f/0x80
(XEN)    [<ffff82d04033d807>] F mm.c#put_page_from_l3e+0x8a/0xcf
(XEN)    [<ffff82d04033cdf0>] F devalidate_page+0x56c/0x8dc
(XEN)    [<ffff82d04033d396>] F mm.c#_put_page_type+0x236/0x47e
(XEN)    [<ffff82d04033d64d>] F mm.c#put_pt_page+0x6f/0x80
(XEN)    [<ffff82d04033d6c7>] F mm.c#put_page_from_l4e+0x69/0x6d
(XEN)    [<ffff82d04033cf24>] F devalidate_page+0x6a0/0x8dc
(XEN)    [<ffff82d04033d396>] F mm.c#_put_page_type+0x236/0x47e
(XEN)    [<ffff82d04033d92e>] F put_page_type_preemptible+0x13/0x15
(XEN)    [<ffff82d04032598a>] F domain.c#relinquish_memory+0x1ff/0x4e9
(XEN)    [<ffff82d0403295f2>] F domain_relinquish_resources+0x2b6/0x36a
(XEN)    [<ffff82d040205cdf>] F domain_kill+0xb8/0x141
(XEN)    [<ffff82d040236cac>] F do_domctl+0xb6f/0x18e5
(XEN)    [<ffff82d04031d098>] F pv_hypercall+0x2f0/0x55f
(XEN)    [<ffff82d04039b432>] F lstar_enter+0x112/0x120

This will result in a use-after-free and possibly a host crash or
memory corruption.

Freeing the page-tables further down in domain_relinquish_resources()
would not work, because pages may not be released until later if another
domain holds a reference on them.

Once all the PCI devices have been de-assigned, it is actually pointless
to access or modify the IOMMU page-tables. So we can simply clear the
root page-table address.

Fixes: 3eef6d07d722 ("x86/iommu: convert VT-d code to use new page table allocator")
Signed-off-by: Julien Grall <jgrall@amazon.com>

---
    Changes in v2:
        - Introduce clear_root_pgtable()
        - Move the patch later in the series
---
 xen/drivers/passthrough/amd/pci_amd_iommu.c | 12 +++++++++++-
 xen/drivers/passthrough/vtd/iommu.c         | 12 +++++++++++-
 xen/drivers/passthrough/x86/iommu.c         |  6 ++++++
 xen/include/xen/iommu.h                     |  1 +
 4 files changed, 29 insertions(+), 2 deletions(-)

diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
index 42b5a5a9bec4..81add0ba26b4 100644
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -381,9 +381,18 @@ static int amd_iommu_assign_device(struct domain *d, u8 devfn,
     return reassign_device(pdev->domain, d, devfn, pdev);
 }
 
+static void iommu_clear_root_pgtable(struct domain *d)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+
+    spin_lock(&hd->arch.mapping_lock);
+    hd->arch.amd.root_table = NULL;
+    spin_unlock(&hd->arch.mapping_lock);
+}
+
 static void amd_iommu_domain_destroy(struct domain *d)
 {
-    dom_iommu(d)->arch.amd.root_table = NULL;
+    ASSERT(!dom_iommu(d)->arch.amd.root_table);
 }
 
 static int amd_iommu_add_device(u8 devfn, struct pci_dev *pdev)
@@ -565,6 +574,7 @@ static const struct iommu_ops __initconstrel _iommu_ops = {
     .remove_device = amd_iommu_remove_device,
     .assign_device  = amd_iommu_assign_device,
     .teardown = amd_iommu_domain_destroy,
+    .clear_root_pgtable = iommu_clear_root_pgtable,
     .map_page = amd_iommu_map_page,
     .unmap_page = amd_iommu_unmap_page,
     .iotlb_flush = amd_iommu_flush_iotlb_pages,
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index d136fe36883b..e1871f6c2bc1 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1726,6 +1726,15 @@ out:
     return ret;
 }
 
+static void iommu_clear_root_pgtable(struct domain *d)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+
+    spin_lock(&hd->arch.mapping_lock);
+    hd->arch.vtd.pgd_maddr = 0;
+    spin_unlock(&hd->arch.mapping_lock);
+}
+
 static void iommu_domain_teardown(struct domain *d)
 {
     struct domain_iommu *hd = dom_iommu(d);
@@ -1740,7 +1749,7 @@ static void iommu_domain_teardown(struct domain *d)
         xfree(mrmrr);
     }
 
-    hd->arch.vtd.pgd_maddr = 0;
+    ASSERT(!hd->arch.vtd.pgd_maddr);
 }
 
 static int __must_check intel_iommu_map_page(struct domain *d, dfn_t dfn,
@@ -2719,6 +2728,7 @@ static struct iommu_ops __initdata vtd_ops = {
     .remove_device = intel_iommu_remove_device,
     .assign_device  = intel_iommu_assign_device,
     .teardown = iommu_domain_teardown,
+    .clear_root_pgtable = iommu_clear_root_pgtable,
     .map_page = intel_iommu_map_page,
     .unmap_page = intel_iommu_unmap_page,
     .lookup_page = intel_iommu_lookup_page,
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index 82d770107a47..d3cdec6ee83f 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -280,6 +280,12 @@ int iommu_free_pgtables(struct domain *d)
     /* After this barrier no new page allocations can occur. */
     spin_barrier(&hd->arch.pgtables.lock);
 
+    /*
+     * Pages will be moved to the free list in a bit. So we want to
+     * clear the root page-table to avoid any potential use-after-free.
+     */
+    hd->platform_ops->clear_root_pgtable(d);
+
     while ( (pg = page_list_remove_head(&hd->arch.pgtables.list)) )
     {
         free_domheap_page(pg);
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 863a68fe1622..d59ed7cbad43 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -272,6 +272,7 @@ struct iommu_ops {
 
     int (*adjust_irq_affinities)(void);
     void (*sync_cache)(const void *addr, unsigned int size);
+    void (*clear_root_pgtable)(struct domain *d);
 #endif /* CONFIG_X86 */
 
     int __must_check (*suspend)(void);
-- 
2.17.1
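The ordering this patch enforces — clear the root pointer (under the mapping lock in the real code) before freeing the page list — can be modelled in a few lines of C. The fields below are simplified stand-ins for hd->arch, and the counter replaces the page list:

```c
#include <stdbool.h>

struct toy_iommu {
    unsigned long pgd_maddr;   /* root page-table address */
    int nr_pages;              /* stands in for the page list */
};

static void clear_root_pgtable(struct toy_iommu *hd)
{
    /* spin_lock(&hd->mapping_lock) in the real code */
    hd->pgd_maddr = 0;
    /* spin_unlock(&hd->mapping_lock) */
}

static void free_pgtables(struct toy_iommu *hd)
{
    /* Clear the root first, so no walker can reach a freed page. */
    clear_root_pgtable(hd);

    while ( hd->nr_pages )
        hd->nr_pages--;        /* stands in for free_domheap_page() */
}

/* After teardown, both the root pointer and the page list must be empty. */
static bool demo_teardown_order(void)
{
    struct toy_iommu hd = { 0xdeadb000UL, 3 };

    free_pgtables(&hd);
    return hd.pgd_maddr == 0 && hd.nr_pages == 0;
}
```

With the root cleared under the lock, any concurrent lookup sees "no page-tables" and bails out, rather than walking a dangling address.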



From xen-devel-bounces@lists.xenproject.org Tue Feb 09 15:31:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 15:31:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83299.154659 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Uz2-0000Dn-DT; Tue, 09 Feb 2021 15:31:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83299.154659; Tue, 09 Feb 2021 15:31:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Uz2-0000Dg-A1; Tue, 09 Feb 2021 15:31:20 +0000
Received: by outflank-mailman (input) for mailman id 83299;
 Tue, 09 Feb 2021 15:31:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9Uz0-0000DY-UV; Tue, 09 Feb 2021 15:31:18 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9Uz0-0000xS-S2; Tue, 09 Feb 2021 15:31:18 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9Uz0-0000Cm-JE; Tue, 09 Feb 2021 15:31:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l9Uz0-00056e-Ik; Tue, 09 Feb 2021 15:31:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=xAW9n0WmxVHhJ2IDwjzuV0LIEbOSyeJH1SdRrEgUGlM=; b=1bRyUjYfQKrwRzKwjvVO9tNDmp
	F1nmcd+8uLB7ihQQg9f9ldAol++vqdjo/hxPp6QQ5aLi3hRFERdlH4QT5E1NfbcP6iwdVcoEWv1YR
	W450boLgyQvXLkJPLChI5f5isNJbHUthL94RZVT4NKXJEnm22Us5JhIAzuVsCKWSN2vw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159135-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 159135: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=5b19cb63d9dfda41b412373b8c9fe14641bcab60
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 09 Feb 2021 15:31:18 +0000

flight 159135 qemu-mainline real [real]
flight 159160 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/159135/
http://logs.test-lab.xenproject.org/osstest/logs/159160/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                5b19cb63d9dfda41b412373b8c9fe14641bcab60
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  173 days
Failing since        152659  2020-08-21 14:07:39 Z  172 days  342 attempts
Testing same since   159107  2021-02-07 19:51:11 Z    1 days    2 attempts

------------------------------------------------------------
375 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 105392 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 15:34:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 15:34:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83303.154674 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9V1f-0000Ml-Su; Tue, 09 Feb 2021 15:34:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83303.154674; Tue, 09 Feb 2021 15:34:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9V1f-0000Me-Pd; Tue, 09 Feb 2021 15:34:03 +0000
Received: by outflank-mailman (input) for mailman id 83303;
 Tue, 09 Feb 2021 15:34:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5GlY=HL=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l9V1e-0000MZ-K3
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 15:34:02 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b2c82df9-58d6-4634-9ac2-4c1f747c8794;
 Tue, 09 Feb 2021 15:34:01 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b2c82df9-58d6-4634-9ac2-4c1f747c8794
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612884841;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=evLa/BNS2wckTYhJm+Vfd5cvV1KAOJvy29N7hg9YC+4=;
  b=O7OuA32jU7IiQfFDAuV3NP3pHPrUHncji3MfGrvLAwGPzq01EYtx3A88
   8zC+a4Wx/Hw74rq550g/5m/EgKWpL5GiIMCZg5PNAYocy9qMYcNwt0Hgk
   gT2kG4gJZ7VbDNAJMC8mU/C2tmhnQu3+d9fyQDBUlqWQDOtVTfzxOrdXn
   4=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: uqevPY52mP0+MCL1tyha9isG/6xSm6O0XDNd90OrIcCg5evehdH6dcR2ETswAjUrFuZZRv9TqD
 N+88m1xlz+pN0Ab56WYnhEZyENAuavVXxoehszBcRnY5wvaG2o8zQXnH1yEtKtaJM5RVZsPGbG
 c5fKNjuqh8Tk7wNRubn81KbsjzjVZ4ZYxaF51+3h9esGe3vkK3djbzZExIQPgC7T4uEcOGXqkl
 aDc8hUmrW14c6SznrAE4nQyp/sckg3qJLQoRiVXE1SPP7bNCX7rfDk0ysqiQTLAFj9MKREDTg1
 mok=
X-SBRS: 5.1
X-MesageID: 38224177
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,165,1610427600"; 
   d="scan'208";a="38224177"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Ian Jackson
	<iwj@xenproject.org>
Subject: [PATCH for-4.15] x86/ucode: Fix microcode payload size for Fam19 processors
Date: Tue, 9 Feb 2021 15:33:36 +0000
Message-ID: <20210209153336.4016-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

The original limit provided wasn't accurate.  Blobs are in fact rather larger.

Fixes: fe36a173d1 ("x86/amd: Initial support for Fam19h processors")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Ian Jackson <iwj@xenproject.org>

For 4.15.  This is a very targeted fix for AMD Zen3 processors.  Without it,
microcode loading doesn't function.
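As a minimal standalone sketch of the check this patch adjusts (the real
xen/arch/x86/cpu/microcode/amd.c switches on boot_cpu_data.x86 and covers
more families; the default case below is a placeholder, not Xen's actual
fallback):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Per-family maximum microcode patch sizes, taken from the hunk below;
 * the F19h value is the one raised from 4800 by this patch. */
#define F15H_MPB_MAX_SIZE 4096
#define F16H_MPB_MAX_SIZE 3458
#define F17H_MPB_MAX_SIZE 3200
#define F19H_MPB_MAX_SIZE 5568

static bool verify_patch_size(unsigned int family, uint32_t patch_size)
{
    uint32_t max_size;

    switch ( family )
    {
    case 0x15: max_size = F15H_MPB_MAX_SIZE; break;
    case 0x16: max_size = F16H_MPB_MAX_SIZE; break;
    case 0x17: max_size = F17H_MPB_MAX_SIZE; break;
    case 0x19: max_size = F19H_MPB_MAX_SIZE; break;
    default:   max_size = 2048; break;  /* placeholder for older families */
    }

    /* A blob larger than the per-family limit is rejected outright. */
    return patch_size <= max_size;
}
```

With the old 4800-byte limit, a Zen3 (Fam19h) blob between 4801 and 5568
bytes would be rejected here and microcode loading would fail; the new
limit lets it through.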
---
 xen/arch/x86/cpu/microcode/amd.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/cpu/microcode/amd.c b/xen/arch/x86/cpu/microcode/amd.c
index 5255028af7..c4ab395799 100644
--- a/xen/arch/x86/cpu/microcode/amd.c
+++ b/xen/arch/x86/cpu/microcode/amd.c
@@ -111,7 +111,7 @@ static bool verify_patch_size(uint32_t patch_size)
 #define F15H_MPB_MAX_SIZE 4096
 #define F16H_MPB_MAX_SIZE 3458
 #define F17H_MPB_MAX_SIZE 3200
-#define F19H_MPB_MAX_SIZE 4800
+#define F19H_MPB_MAX_SIZE 5568
 
     switch ( boot_cpu_data.x86 )
     {
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Feb 09 15:45:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 15:45:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83309.154689 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9VCz-0001RH-4f; Tue, 09 Feb 2021 15:45:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83309.154689; Tue, 09 Feb 2021 15:45:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9VCz-0001RA-1S; Tue, 09 Feb 2021 15:45:45 +0000
Received: by outflank-mailman (input) for mailman id 83309;
 Tue, 09 Feb 2021 15:45:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NhOK=HL=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1l9VCx-0001R5-R2
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 15:45:43 +0000
Received: from mo6-p00-ob.smtp.rzone.de (unknown [2a01:238:400:100::9])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 66c33846-4cae-4a22-8f71-3e369ef80446;
 Tue, 09 Feb 2021 15:45:42 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.17.1 DYNA|AUTH)
 with ESMTPSA id 604447x19Fje1k5
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 9 Feb 2021 16:45:40 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 66c33846-4cae-4a22-8f71-3e369ef80446
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1612885541;
	s=strato-dkim-0002; d=aepfle.de;
	h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
	bh=gL+mi1IlTYrjtEMwDgnPgLG4B4OTkjoS2M85wDEpNlo=;
	b=TMfI2NW9UoNXoSzKFa+8ql3WnHgqaQkHwOzDHJEkjMVrGmSNfLX5sBS7f7Q/fCHJ5D
	jVKRtzA/08rvYoZu+Trcumv3y55QHMG6BRIhGOPeWz8uhvRJuV25W6wx1K3EP3Srdpqh
	tDO8JdRW0PZmvKxTTqbiPkrFgRIw/D9q+HR/tDT7mOenf4b9yl8fEQsgMe5OwuVLsmKw
	uTVqvDLM+4PxnBXOEtv7ePRZAwOfPJDfo0iCGIgs3O1tuY7X/qcV1iRZy3am/tw/j9g/
	2q8A8mKLfaWlOxCRrDu3eh7CIUGN3d7ezbAFlqZ1KJyT3KkAE+DkgVVcIgEn/dHBqjyL
	y4Bw==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzBW/OdlBZQ4AHSS3GxPjw=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>
Subject: [PATCH v20210209 0/4] tools changes
Date: Tue,  9 Feb 2021 16:45:32 +0100
Message-Id: <20210209154536.10851-1-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

As discussed in the v20210111 thread, this series splits out and redoes a few xl- and configure.ac-specific changes.

Olaf

Olaf Hering (4):
  tools: move CONFIG_DIR and XEN_CONFIG_DIR in paths.m4
  tools: add with-xen-scriptdir configure option
  xl: optionally print timestamps when running xl commands
  xl: disable --debug option for xl migrate

 docs/man/xl.1.pod.in   |  8 ++++----
 m4/paths.m4            | 23 ++++++++++++++---------
 tools/xl/xl.c          | 18 +++++++++++++-----
 tools/xl/xl.h          |  1 +
 tools/xl/xl_cmdtable.c |  1 -
 tools/xl/xl_migrate.c  | 16 +++++++---------
 6 files changed, 39 insertions(+), 28 deletions(-)



From xen-devel-bounces@lists.xenproject.org Tue Feb 09 15:45:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 15:45:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83310.154701 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9VD4-0001TL-DD; Tue, 09 Feb 2021 15:45:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83310.154701; Tue, 09 Feb 2021 15:45:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9VD4-0001T9-9j; Tue, 09 Feb 2021 15:45:50 +0000
Received: by outflank-mailman (input) for mailman id 83310;
 Tue, 09 Feb 2021 15:45:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NhOK=HL=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1l9VD2-0001R5-ND
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 15:45:48 +0000
Received: from mo6-p00-ob.smtp.rzone.de (unknown [2a01:238:400:100::b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 310bbf59-db49-48f3-94bd-4b1681ac95d7;
 Tue, 09 Feb 2021 15:45:46 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.17.1 DYNA|AUTH)
 with ESMTPSA id 604447x19Fjf1k7
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 9 Feb 2021 16:45:41 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 310bbf59-db49-48f3-94bd-4b1681ac95d7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1612885545;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
	From:Subject:Sender;
	bh=aII1TmKzcTAfe8t2sM/kmBYVk4jnNgWkXZv7CILKryQ=;
	b=GQvajl0CbHysFq2/9fJJdSsHk/hjyB3+YU30wqKdqzcC5KcQ6K1GJNvVSDTRvbUFjl
	ABn77RT+pWvpZ10/abSM/TyGiPMVXLBPtPTscubgq/Asrw9nTAQkzKJrhCmVUkNNVEQg
	+Wp8IESU45RtyiPknAQvwyyHKvYTEBEYjXVIWFuDF+K50mAwNEVn/JzH6DpJQ9J1GslE
	AvomINg+KfDIb6NtHHZJZ71ZLdjp+dFkCjUSZkMMC47nGlQge95zUjhLZVPkVJZoVXT4
	R2j51kIokX/M7GdqHf+sKAgcFa2bjqq8WDGGVBA2ui6N8LK0rwpejHtUxPavoIOF3ojq
	TZxw==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzBW/OdlBZQ4AHSS3GxPjw=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v20210209 1/4] tools: move CONFIG_DIR and XEN_CONFIG_DIR in paths.m4
Date: Tue,  9 Feb 2021 16:45:33 +0100
Message-Id: <20210209154536.10851-2-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210209154536.10851-1-olaf@aepfle.de>
References: <20210209154536.10851-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Upcoming changes need to reuse XEN_CONFIG_DIR.

In its current location the assignment happens too late. Move it up
in the file, along with CONFIG_DIR. Their only dependency is
sysconfdir, which may also be adjusted in this file.

No functional change intended.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 m4/paths.m4 | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/m4/paths.m4 b/m4/paths.m4
index 1c107b1a61..a736f57d8d 100644
--- a/m4/paths.m4
+++ b/m4/paths.m4
@@ -34,6 +34,12 @@ if test "x$sysconfdir" = 'x${prefix}/etc' ; then
     esac
 fi
 
+CONFIG_DIR=$sysconfdir
+AC_SUBST(CONFIG_DIR)
+
+XEN_CONFIG_DIR=$CONFIG_DIR/xen
+AC_SUBST(XEN_CONFIG_DIR)
+
 AC_ARG_WITH([initddir],
     AS_HELP_STRING([--with-initddir=DIR],
     [Path to directory with sysv runlevel scripts. [SYSCONFDIR/init.d]]),
@@ -128,15 +134,9 @@ AC_SUBST(XEN_LIB_DIR)
 SHAREDIR=$prefix/share
 AC_SUBST(SHAREDIR)
 
-CONFIG_DIR=$sysconfdir
-AC_SUBST(CONFIG_DIR)
-
 INITD_DIR=$initddir_path
 AC_SUBST(INITD_DIR)
 
-XEN_CONFIG_DIR=$CONFIG_DIR/xen
-AC_SUBST(XEN_CONFIG_DIR)
-
 XEN_SCRIPT_DIR=$XEN_CONFIG_DIR/scripts
 AC_SUBST(XEN_SCRIPT_DIR)
 


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 15:45:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 15:45:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83311.154713 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9VD9-0001Wi-M1; Tue, 09 Feb 2021 15:45:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83311.154713; Tue, 09 Feb 2021 15:45:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9VD9-0001WY-IE; Tue, 09 Feb 2021 15:45:55 +0000
Received: by outflank-mailman (input) for mailman id 83311;
 Tue, 09 Feb 2021 15:45:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NhOK=HL=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1l9VD7-0001R5-NH
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 15:45:53 +0000
Received: from mo6-p01-ob.smtp.rzone.de (unknown [2a01:238:400:200::c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d0b968c8-2196-422f-9340-f280c55629c7;
 Tue, 09 Feb 2021 15:45:46 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.17.1 DYNA|AUTH)
 with ESMTPSA id 604447x19Fjf1k8
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 9 Feb 2021 16:45:41 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d0b968c8-2196-422f-9340-f280c55629c7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1612885546;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
	From:Subject:Sender;
	bh=fa5j5Ui9HQveRfkyt1TfxRyt7x+hRbirF8KvXljSbrY=;
	b=ORVRf+/ObjRTJPxO3DjyVCIf5lR2dIgrebamLEWloeFSSLOffRPHa1G7yLcYoi4mCz
	dG09APmwN9/iQsJ9E8/lFNKLch8Bg8mGIgozz9YvZDag6/QFTZ/es1/tagBeRv8z3jZd
	WKkN/CRd9rY12St/WLeu3A1zyc0jkzn7QGwaSd7wKMjvBgfwmV6GhXibjSMhMpGw6X/u
	kRN37fxq8+6Whuy8i0XcxXCJfLvWcreb78aPbQpS3FLfvbonuZTQWA+lNhNwkLozIeHl
	5NxCfm4J5zss9NFEFF+EJ9jals0CTxa/BQ8Nih5C66HpYsMc7u40trL2cB1v7V7+Wk5b
	QADw==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzBW/OdlBZQ4AHSS3GxPjw=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v20210209 2/4] tools: add with-xen-scriptdir configure option
Date: Tue,  9 Feb 2021 16:45:34 +0100
Message-Id: <20210209154536.10851-3-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210209154536.10851-1-olaf@aepfle.de>
References: <20210209154536.10851-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Some distros plan that fresh installations will have an empty /etc,
whose content will no longer be controlled by the package manager.

To make this possible, add a configure knob that allows storing the
hotplug scripts under libexec instead of /etc/xen/scripts.

The current default remains unchanged, which is /etc/xen/scripts.
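As a usage sketch of the new option (the libexec path shown is an
assumption for illustration; only the option name comes from the patch):

```shell
# Hypothetical invocation: place dom0 hotplug scripts under libexec
# instead of the default SYSCONFDIR/xen/scripts. The exact libexec
# location is distro-specific.
./configure --with-xen-scriptdir=/usr/libexec/xen/scripts
```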

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 m4/paths.m4 | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/m4/paths.m4 b/m4/paths.m4
index a736f57d8d..7be314a3e2 100644
--- a/m4/paths.m4
+++ b/m4/paths.m4
@@ -76,6 +76,14 @@ AC_ARG_WITH([libexec-leaf-dir],
     [libexec_subdir=$withval],
     [libexec_subdir=$PACKAGE_TARNAME])
 
+AC_ARG_WITH([xen-scriptdir],
+    AS_HELP_STRING([--with-xen-scriptdir=DIR],
+    [Path to directory for dom0 hotplug scripts. [SYSCONFDIR/xen/scripts]]),
+    [xen_scriptdir_path=$withval],
+    [xen_scriptdir_path=$XEN_CONFIG_DIR/scripts])
+XEN_SCRIPT_DIR=$xen_scriptdir_path
+AC_SUBST(XEN_SCRIPT_DIR)
+
 AC_ARG_WITH([xen-dumpdir],
     AS_HELP_STRING([--with-xen-dumpdir=DIR],
     [Path to directory for domU crash dumps. [LOCALSTATEDIR/lib/xen/dump]]),
@@ -137,9 +145,6 @@ AC_SUBST(SHAREDIR)
 INITD_DIR=$initddir_path
 AC_SUBST(INITD_DIR)
 
-XEN_SCRIPT_DIR=$XEN_CONFIG_DIR/scripts
-AC_SUBST(XEN_SCRIPT_DIR)
-
 case "$host_os" in
 *freebsd*) XEN_LOCK_DIR=$localstatedir/lib ;;
 *netbsd*) XEN_LOCK_DIR=$rundir_path ;;


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 15:46:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 15:46:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83312.154725 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9VDE-0001aF-0d; Tue, 09 Feb 2021 15:46:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83312.154725; Tue, 09 Feb 2021 15:45:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9VDD-0001a8-S2; Tue, 09 Feb 2021 15:45:59 +0000
Received: by outflank-mailman (input) for mailman id 83312;
 Tue, 09 Feb 2021 15:45:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NhOK=HL=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1l9VDC-0001R5-NL
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 15:45:58 +0000
Received: from mo6-p01-ob.smtp.rzone.de (unknown [2a01:238:400:200::c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8129c05c-0e83-4df1-8465-e927c287ec4e;
 Tue, 09 Feb 2021 15:45:49 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.17.1 DYNA|AUTH)
 with ESMTPSA id 604447x19Fjf1k9
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 9 Feb 2021 16:45:41 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8129c05c-0e83-4df1-8465-e927c287ec4e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1612885548;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
	From:Subject:Sender;
	bh=zvrEgmv5X8uRsW5yGE7mKuAvmfklbHSa7m/edYazMSU=;
	b=Awrc1+37JQTq2YMQvkWL5ZXD2aXxW26igYavDfBmHxeDi3dirawjFEbThdAfDowjUv
	+JZUt+LNkeMZu5z8AygoiJMGgMD4TdT6mpdMZfWFm77M0M0yF8DLgsFRy1bs5eWpV3nv
	oEGXhsoeCEl7ckX8IzscNbn8l4gKKD5OJyVpjiSbq54JlrGTC+Mb2Ie0qIFtbNKoxchX
	9hYkA6OB7v8i0J/YZv4N4CSR40AHs1V4QWwWNXGg3rbNq1V2P4W9DjAwAbI7lu/UD/6m
	6wNpTlTQEv7m94UJRfnwhp8Ov8WGHxiE4/oxotXTqbDarMxmQRFabzBMlT4M/Wk/vmIl
	0TQQ==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzBW/OdlBZQ4AHSS3GxPjw=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v20210209 3/4] xl: optionally print timestamps when running xl commands
Date: Tue,  9 Feb 2021 16:45:35 +0100
Message-Id: <20210209154536.10851-4-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210209154536.10851-1-olaf@aepfle.de>
References: <20210209154536.10851-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a global option "-T" to xl to enable timestamps in the output from
libxl and libxc. This is most useful with long running commands such
as "migrate".

During 'xl -v.. migrate domU host' a large amount of debug output is
generated. It is difficult to map each line to the sending or receiving
side, and the time spent on the migration is not reported.

With 'xl -T migrate domU host' both sides will print timestamps and
also the pid of the invoked xl process to make it more obvious which
side produced a given log line.

Note: depending on the command, xl itself also produces other output
which does not go through libxentoollog. As a result such output will
not have timestamps prepended.

This change also adds the missing "-t" flag to the "xl help" output.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 docs/man/xl.1.pod.in  |  4 ++++
 tools/xl/xl.c         | 18 +++++++++++++-----
 tools/xl/xl.h         |  1 +
 tools/xl/xl_migrate.c |  3 ++-
 4 files changed, 20 insertions(+), 6 deletions(-)

diff --git a/docs/man/xl.1.pod.in b/docs/man/xl.1.pod.in
index 618c195148..e2176bd696 100644
--- a/docs/man/xl.1.pod.in
+++ b/docs/man/xl.1.pod.in
@@ -86,6 +86,10 @@ Always use carriage-return-based overwriting for displaying progress
 messages without scrolling the screen.  Without -t, this is done only
 if stderr is a tty.
 
+=item B<-T>
+
+Include timestamps and pid of the xl process in output.
+
 =back
 
 =head1 DOMAIN SUBCOMMANDS
diff --git a/tools/xl/xl.c b/tools/xl/xl.c
index 2a5ddd4390..3a89295802 100644
--- a/tools/xl/xl.c
+++ b/tools/xl/xl.c
@@ -52,6 +52,7 @@ libxl_bitmap global_pv_affinity_mask;
 enum output_format default_output_format = OUTPUT_FORMAT_JSON;
 int claim_mode = 1;
 bool progress_use_cr = 0;
+bool timestamps = 0;
 int max_grant_frames = -1;
 int max_maptrack_frames = -1;
 libxl_domid domid_policy = INVALID_DOMID;
@@ -365,8 +366,9 @@ int main(int argc, char **argv)
     int ret;
     void *config_data = 0;
     int config_len = 0;
+    unsigned int xtl_flags = 0;
 
-    while ((opt = getopt(argc, argv, "+vftN")) >= 0) {
+    while ((opt = getopt(argc, argv, "+vftTN")) >= 0) {
         switch (opt) {
         case 'v':
             if (minmsglevel > 0) minmsglevel--;
@@ -380,6 +382,9 @@ int main(int argc, char **argv)
         case 't':
             progress_use_cr = 1;
             break;
+        case 'T':
+            timestamps = 1;
+            break;
         default:
             fprintf(stderr, "unknown global option\n");
             exit(EXIT_FAILURE);
@@ -394,8 +399,11 @@ int main(int argc, char **argv)
     }
     opterr = 0;
 
-    logger = xtl_createlogger_stdiostream(stderr, minmsglevel,
-        (progress_use_cr ? XTL_STDIOSTREAM_PROGRESS_USE_CR : 0));
+    if (progress_use_cr)
+        xtl_flags |= XTL_STDIOSTREAM_PROGRESS_USE_CR;
+    if (timestamps)
+        xtl_flags |= XTL_STDIOSTREAM_SHOW_DATE | XTL_STDIOSTREAM_SHOW_PID;
+    logger = xtl_createlogger_stdiostream(stderr, minmsglevel, xtl_flags);
     if (!logger) exit(EXIT_FAILURE);
 
     xl_ctx_alloc();
@@ -457,7 +465,7 @@ void help(const char *command)
     struct cmd_spec *cmd;
 
     if (!command || !strcmp(command, "help")) {
-        printf("Usage xl [-vfN] <subcommand> [args]\n\n");
+        printf("Usage xl [-vfNtT] <subcommand> [args]\n\n");
         printf("xl full list of subcommands:\n\n");
         for (i = 0; i < cmdtable_len; i++) {
             printf(" %-19s ", cmd_table[i].cmd_name);
@@ -468,7 +476,7 @@ void help(const char *command)
     } else {
         cmd = cmdtable_lookup(command);
         if (cmd) {
-            printf("Usage: xl [-v%s%s] %s %s\n\n%s.\n\n",
+            printf("Usage: xl [-vtT%s%s] %s %s\n\n%s.\n\n",
                    cmd->modifies ? "f" : "",
                    cmd->can_dryrun ? "N" : "",
                    cmd->cmd_name,
diff --git a/tools/xl/xl.h b/tools/xl/xl.h
index 06569c6c4a..137a29077c 100644
--- a/tools/xl/xl.h
+++ b/tools/xl/xl.h
@@ -269,6 +269,7 @@ extern int run_hotplug_scripts;
 extern int dryrun_only;
 extern int claim_mode;
 extern bool progress_use_cr;
+extern bool timestamps;
 extern xentoollog_level minmsglevel;
 #define minmsglevel_default XTL_PROGRESS
 extern char *lockfile;
diff --git a/tools/xl/xl_migrate.c b/tools/xl/xl_migrate.c
index 0813beb801..b8594f44a5 100644
--- a/tools/xl/xl_migrate.c
+++ b/tools/xl/xl_migrate.c
@@ -592,9 +592,10 @@ int main_migrate(int argc, char **argv)
         } else {
             verbose_len = (minmsglevel_default - minmsglevel) + 2;
         }
-        xasprintf(&rune, "exec %s %s xl%s%.*s migrate-receive%s%s%s",
+        xasprintf(&rune, "exec %s %s xl%s%s%.*s migrate-receive%s%s%s",
                   ssh_command, host,
                   pass_tty_arg ? " -t" : "",
+                  timestamps ? " -T" : "",
                   verbose_len, verbose_buf,
                   daemonize ? "" : " -e",
                   debug ? " -d" : "",


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 15:46:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 15:46:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83313.154737 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9VDJ-0001fO-9b; Tue, 09 Feb 2021 15:46:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83313.154737; Tue, 09 Feb 2021 15:46:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9VDJ-0001fE-4y; Tue, 09 Feb 2021 15:46:05 +0000
Received: by outflank-mailman (input) for mailman id 83313;
 Tue, 09 Feb 2021 15:46:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NhOK=HL=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1l9VDH-0001R5-NO
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 15:46:03 +0000
Received: from mo6-p01-ob.smtp.rzone.de (unknown [2a01:238:400:200::c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 739a3183-5188-41ff-8023-ab4e5cfe0f35;
 Tue, 09 Feb 2021 15:45:49 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.17.1 DYNA|AUTH)
 with ESMTPSA id 604447x19Fjg1kA
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 9 Feb 2021 16:45:42 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 739a3183-5188-41ff-8023-ab4e5cfe0f35
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1612885548;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
	From:Subject:Sender;
	bh=i1NPX2zWTNw88m/1NcvmRDBKDd6HKMtx2izL2PhwkGc=;
	b=YyQOQJEab1mtpgyyEF+N4rhJCncFlCYsovqIpocTFLkHF6oZcXRc0St6/iGKr2Mkpe
	UeMGwTa+t4iKFVbU46JNBJmlA9XKgljK70d40SUbTrwKevdl/tcB+2UkerAL1AjyptLE
	QtVvSEDEKdJY0Tg5whzscUk6Md5oBadZkrIcBEHuIkrGHKhOMg7hyWrWDtP6LSnD7Y/w
	qVe/DXeHc1+j4t4+ZBtZJihbxpFdNILXNA44gkKBG9TLygUQWMbvv9rEEYqRaUsKGN7Y
	H3JDdvjfTshGyU3sAEGSdAHo7zZ3oCtjkJdS/3xuRKiMvEWe4JkfF0Cmk0ej9TYNggxU
	6cIQ==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzBW/OdlBZQ4AHSS3GxPjw=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v20210209 4/4] xl: disable --debug option for xl migrate
Date: Tue,  9 Feb 2021 16:45:36 +0100
Message-Id: <20210209154536.10851-5-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210209154536.10851-1-olaf@aepfle.de>
References: <20210209154536.10851-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

xl migrate --debug used to track every pfn in every batch of pages.

Since commit cfa955591caea5d7ec505cdcbf4442f2d6e889e1 (Xen 4.6) the
meaning of the debug flag changed from "verify transferred memory during
live migration" to "verify transferred memory in remus/colo". In any
case, xl can no longer trigger execution of the verifying code in
send_domain_memory_live.

Remove "--debug" from the "xl migrate -h" output.
Remove "--debug" from the xl man page.
Do not send "-d" as a potential option to "xl migrate-receive" anymore.
Do not pass the LIBXL_SUSPEND_DEBUG flag to libxl_domain_suspend anymore.
Continue to recognize "--debug" as a valid option for "xl migrate".
Continue to recognize "-d" as a valid option for "xl migrate-receive".

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 docs/man/xl.1.pod.in   |  4 ----
 tools/xl/xl_cmdtable.c |  1 -
 tools/xl/xl_migrate.c  | 15 ++++++---------
 3 files changed, 6 insertions(+), 14 deletions(-)

diff --git a/docs/man/xl.1.pod.in b/docs/man/xl.1.pod.in
index e2176bd696..b14196ccfe 100644
--- a/docs/man/xl.1.pod.in
+++ b/docs/man/xl.1.pod.in
@@ -479,10 +479,6 @@ domain. See the corresponding option of the I<create> subcommand.
 Send the specified <config> file instead of the file used on creation of the
 domain.
 
-=item B<--debug>
-
-Display huge (!) amount of debug information during the migration process.
-
 =item B<-p>
 
 Leave the domain on the receive side paused after migration.
diff --git a/tools/xl/xl_cmdtable.c b/tools/xl/xl_cmdtable.c
index 07f54daabe..150f4cd1d3 100644
--- a/tools/xl/xl_cmdtable.c
+++ b/tools/xl/xl_cmdtable.c
@@ -171,7 +171,6 @@ struct cmd_spec cmd_table[] = {
       "                migrate-receive [-d -e]\n"
       "-e              Do not wait in the background (on <host>) for the death\n"
       "                of the domain.\n"
-      "--debug         Print huge (!) amount of debug during the migration process.\n"
       "-p              Do not unpause domain after migrating it.\n"
       "-D              Preserve the domain id"
     },
diff --git a/tools/xl/xl_migrate.c b/tools/xl/xl_migrate.c
index b8594f44a5..e4e4f918c7 100644
--- a/tools/xl/xl_migrate.c
+++ b/tools/xl/xl_migrate.c
@@ -177,8 +177,7 @@ static void migrate_do_preamble(int send_fd, int recv_fd, pid_t child,
 }
 
 static void migrate_domain(uint32_t domid, int preserve_domid,
-                           const char *rune, int debug,
-                           const char *override_config_file)
+                           const char *rune, const char *override_config_file)
 {
     pid_t child = -1;
     int rc;
@@ -204,8 +203,6 @@ static void migrate_domain(uint32_t domid, int preserve_domid,
 
     xtl_stdiostream_adjust_flags(logger, XTL_STDIOSTREAM_HIDE_PROGRESS, 0);
 
-    if (debug)
-        flags |= LIBXL_SUSPEND_DEBUG;
     rc = libxl_domain_suspend(ctx, domid, send_fd, flags, NULL);
     if (rc) {
         fprintf(stderr, "migration sender: libxl_domain_suspend failed"
@@ -500,6 +497,7 @@ int main_migrate_receive(int argc, char **argv)
         monitor = 0;
         break;
     case 'd':
+        /* For compatibility with older variants of xl */
         debug = 1;
         break;
     case 'r':
@@ -537,7 +535,7 @@ int main_migrate(int argc, char **argv)
     const char *ssh_command = "ssh";
     char *rune = NULL;
     char *host;
-    int opt, daemonize = 1, monitor = 1, debug = 0, pause_after_migration = 0;
+    int opt, daemonize = 1, monitor = 1, pause_after_migration = 0;
     int preserve_domid = 0;
     static struct option opts[] = {
         {"debug", 0, 0, 0x100},
@@ -566,7 +564,7 @@ int main_migrate(int argc, char **argv)
         preserve_domid = 1;
         break;
     case 0x100: /* --debug */
-        debug = 1;
+        /* ignored for compatibility with older variants of xl */
         break;
     case 0x200: /* --live */
         /* ignored for compatibility with xm */
@@ -592,17 +590,16 @@ int main_migrate(int argc, char **argv)
         } else {
             verbose_len = (minmsglevel_default - minmsglevel) + 2;
         }
-        xasprintf(&rune, "exec %s %s xl%s%s%.*s migrate-receive%s%s%s",
+        xasprintf(&rune, "exec %s %s xl%s%s%.*s migrate-receive%s%s",
                   ssh_command, host,
                   pass_tty_arg ? " -t" : "",
                   timestamps ? " -T" : "",
                   verbose_len, verbose_buf,
                   daemonize ? "" : " -e",
-                  debug ? " -d" : "",
                   pause_after_migration ? " -p" : "");
     }
 
-    migrate_domain(domid, preserve_domid, rune, debug, config_filename);
+    migrate_domain(domid, preserve_domid, rune, config_filename);
     return EXIT_SUCCESS;
 }
 


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 16:06:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 16:06:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83321.154748 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9VX8-0004Km-1R; Tue, 09 Feb 2021 16:06:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83321.154748; Tue, 09 Feb 2021 16:06:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9VX7-0004Kf-Ub; Tue, 09 Feb 2021 16:06:33 +0000
Received: by outflank-mailman (input) for mailman id 83321;
 Tue, 09 Feb 2021 16:06:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=t2YR=HL=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1l9VX6-0004Ka-H3
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 16:06:32 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 279daca3-9d9a-4eda-8fa7-405d953d945c;
 Tue, 09 Feb 2021 16:06:31 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 279daca3-9d9a-4eda-8fa7-405d953d945c
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612886791;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=mfADmmA5oo9A3poCZNPBvD61kcuRgiB6lJAGQ0GhQcI=;
  b=HEl0Rq5pHBoxNuETAQO1o7brYHpv990tVFbV4+txZgPnfoorIA/gYt12
   mTEWT+woVwpo/5EoQp59VI0ueMQ/7BC2YGuV6wkLJ+I6xF9zfnr8367IM
   N9TiBtczWLrtAkBKW18WYqhL+7+3LUDRemOxky1t4rCbS/9qyW5y6ypuk
   U=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: YrnJxObSb9qn6D3kJ7/fwIxqLbO9c7ct+cyYxsCY4syHuGQy5tIktycwyqa9PF2RwNgpIdeS5S
 +JNrc/VLvFe+zBGNU0T1FGDaY9oaDUe/tUs1L1xDkerFAXkakAdSWpHdMpGINXoWfdrZWsDH7a
 jcxgLmiy5GaeWRumVyiQDr/Y4dKeLvZYaSWcaDUmejqaEnIrMIQQxB9WGJ78aO6X+jQE+YicEn
 TwOnMJm6M6fJeRB3xuOy75auY7TI+g/djLag0LqMS5TTJwGSgoP3FD2053FJ4+zqcu7OjArXRS
 BDg=
X-SBRS: None
X-MesageID: 36818908
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,165,1610427600"; 
   d="scan'208";a="36818908"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=SDC2duD4FcKi3L1gkRcCqXDUL3K3sc++P/pyUZwXuVJrzX6dtsbY9NjyDYNhpksyzHGVBRzGqpRi8/KBUIG4kkVbTAaynEeluTUVWRTM/2u6EAnqI4QVUplVgRPDosPDMdqiCISBl2YyTM1FeY+nS5MFtr9rfeDvzXEsXpa0EMk/fWMqPPE5dH7z2zlvBcY3rMB+62U6SUy1Jy4N6l8TjX8fp+rLmfq0RBWgGa9HEyvpKF1nEOLuUWa4j2i06/j8Fe0xmftdRQ9er6XKuZLjgFeWAEoSl+kwYRujuANovWcUFvHqiV0rW32bWTHuqnvBujerHiIScERX5J2etG90tw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=69J3HOQhUjTMGEkNOvJ1siyYj8g7/5aQXC6Co04pVF8=;
 b=C2m46v+u34T+nstL6F/2J1FD1fFP0tJgXYD+g+OjQWaYjCQlPFZn6Q3SWpxNZ89y6mvlWbb761qLmy7edbnzxwjlRpUYXaBT4Xas2gLu7O79wolcD04buEqV5sQSKMk1MZq9RmiIrJzDVVFxKDSsa369yyUuCTNWPRaYUtG87RtA1xpa2pIHcc0NpGZ1rD6dfQlYBk1aTZDGmtNLLCuyXMOFF08ctwuQOMNI9aFXGt5fmy3YBELHPAVbaJ2YTwjIsTef/y9aAFe4xALGul4dkorc/+z8ummXi6p9F2g+tvy+RC/sxWFkJ42AmfhN/fH1TqdzFE3Vme1CTv+IyeXB0A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=69J3HOQhUjTMGEkNOvJ1siyYj8g7/5aQXC6Co04pVF8=;
 b=sW2T9bQ+EAfMZZ9kgalT6xMtfLhfN1YNXxTNBpfEARtDGtqdJaX34fcpBhJ2MQ0ZZdTuq8CSoN76YeoZJIrhGAWj/rBLj6uHqtxE3/d6RN7AljSsVKyxCGweoZV/4fdBMxBaxeS+LbCybl25L9YTh23t9kPqwFG44WVNU3hhlhI=
Date: Tue, 9 Feb 2021 17:06:10 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Tim Deegan
	<tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
Subject: Re: [PATCH 03/17] x86: split __copy_{from,to}_user() into "guest"
 and "unsafe" variants
Message-ID: <YCKy8lwh2YVWYChc@Air-de-Roger>
References: <4f1975a9-bdd9-f556-9db5-eb6c428f258f@suse.com>
 <b8112628-a2e3-2fdc-9847-1fa684283135@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <b8112628-a2e3-2fdc-9847-1fa684283135@suse.com>
X-ClientProxiedBy: MR2P264CA0181.FRAP264.PROD.OUTLOOK.COM (2603:10a6:501::20)
 To DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: e9ae5f88-0aee-46f7-4f8d-08d8cd14a1c3
X-MS-TrafficTypeDiagnostic: DM6PR03MB4970:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB4970C2368922285F182E0E2A8F8E9@DM6PR03MB4970.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 5W1Q1wGmt63I3mp81PfWPH9Fzh25P3KyVSMyUnRwRcu/VqDQ4dbQ7XCn+r2nPuxBQMZ9zzsqY05DCexnOMZxAKOARgRHAiFvOEeKc7NP9xocr1zzFWfgByuz9A9x9R+yiCGm9Sp5wZXc49gYZvAFwp8VWdMlSv099Ojvpup/bOq569cXKjuzqTbHtGZ6UR/0rnTl+jcbmASJ5E4Nq107ItRaKV/b+rThmxpo4syHjd4mHsEvsTntdAFc9LzReud+L3DJ+v1X2TAVrSQvQ4ZxJ+laPRor2BOtdxVF5mt59WfXxSQ6bGySpuRokRENYQciHBOIE2nt72paFQxfWluaMPDpn+DudO8qCUuCtOIGb/wkdloQ7klxxkL6GH+1P9LsAh1ZYlPAzlzqfXHi2itnAai2VCPdxsMgIQ5VfM88VvBWbaSp8dOUx8HGa5yL9RCSJ0ZDq/jTs4uBlYqKBStu7c9RDJLaVnPUZVmz3I2TsLbWo59FcOTP/ZCbBqekE2RMFySyx6AVfWOW5i3w/Mv8Qg==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(136003)(366004)(346002)(396003)(376002)(39860400002)(6486002)(107886003)(186003)(86362001)(26005)(83380400001)(956004)(2906002)(4326008)(33716001)(316002)(6496006)(66556008)(9686003)(66946007)(66476007)(6666004)(5660300002)(6916009)(8676002)(8936002)(478600001)(54906003)(85182001)(16526019);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?QWpuRkJFWFVqcTNGS1c0KzZPcHl0Wm1Vb3hhcTZ0emNVaVhrdXVrZERnc3p4?=
 =?utf-8?B?emN6alRNSkFUODA2aHVSOUVaSWI3eklpWDlOZG11M2tDYnJkRkNjU1pPa3dQ?=
 =?utf-8?B?bG5VRVgxWDJ2d0pDNG9pRUZ3SnNzL0FMQ1NMR2t5NjEzeWdGaDhBZTd6aC9U?=
 =?utf-8?B?c1MrOWNnaFJibGMvdkxXZ2xSUElxV0M0QW5pd3hmTVljVk1uVnA2TXVodGcy?=
 =?utf-8?B?Z1dhRGxzaHI5dW9KbFdMczd0NmJ4T252dEdOSGZ3aGE1M01kbER5b3JLdnZa?=
 =?utf-8?B?VGhpT2QzbjJHdmF1YWVBZ21jYnhHejVVK3NwSzRDUUQ0cVpSNC9hSnhUSFRa?=
 =?utf-8?B?cUJzQTV3RU1WMVJDMVRUczY1YlU0aUdoN2J1bWxxYWwrZTRXendwU1RTTTNX?=
 =?utf-8?B?R2ZvL0JzRFBnWHV2Wnd3R0t4Q01BOGpBeEZCa3RzL1RjeXUwcGVSYjVMN0tv?=
 =?utf-8?B?eWJ0eVJRSFVjalRCVk9uWnNJMXh0YmtQRlY1VHdheFAzbVh3Q3ZLL1hjVVd6?=
 =?utf-8?B?SVlsK2V2Rmp6Y0xHWXIzalRwdXJJSU5UZVZPVjVhY0pWaXQ0MzZEcEtjbWVp?=
 =?utf-8?B?UmRQc2tBQlY4QUQvdE1wNjlDRnFBY2c1bUh4bUNZZ1dFTElLaDJ6b1ZEc01N?=
 =?utf-8?B?VGxOTEJ1UFFoNG5xQ3F0SGl2R1FvcUp0ZnduU2N2Tm1hbmRlUG90cW9LcmhY?=
 =?utf-8?B?Z0RPbTgyNklxWEh3NnV2dlJSVDJRajJ2eEs3dTJTZVBuWlJBczlqNjRNTWlt?=
 =?utf-8?B?S2pvT0lxRW9rRGIxRU5DVFZFa05LNEFsYkFSZElNS3BiL1ZhOWdQQjJ0NzVq?=
 =?utf-8?B?cERKMC9DQ3Q3QkFRTnFPYjlYdDZ6MmRsbEdremtpdi9PN0p2WkRGbk56WUJn?=
 =?utf-8?B?MW5aUUtXUS9wMFUwOHhzMWZFSUxjRTB4bndzSC82NHJxOXI2SHREL3VMV0Zw?=
 =?utf-8?B?NkR6YW82QWNHVjdRV3VoN2JCM3Q1QVMwS0lKRXhaYVB3Ykc4YzZEd1IwSEZK?=
 =?utf-8?B?bVZDUTFkRndiMFBSTy9ab0VNbERFZWxleklSNnlRQ1dVLzU5NFZ4Ym1TUWh3?=
 =?utf-8?B?em9kTTRhUUpBZUhRM0ZlcnJHNjdpcFdCSTlxbTVIQ3ZaWmd0RkNieG1MbWh2?=
 =?utf-8?B?RGFxUmp4T0RvSlNCZWZKOXZ4dmkvNTNiRm0yUm9NblRLRDIyRHBlMm5NZTFX?=
 =?utf-8?B?YUNqODJLeUlabzZGcFM2dDc4OSt1NU5aeTJMZ3l5QUFabTZKMGpnNkZDSFFU?=
 =?utf-8?B?NVh2NkVRYnZQajltQWJramV5aS9ZQXJEVDZ4YW50dTBtSTlNbFJBYUQ1b2Vh?=
 =?utf-8?B?cVoxT0FPdGhwVitxTHdSTFpUSXdyMnU3UHovV0dTdGhQUWhwYnhKdDRINUZE?=
 =?utf-8?B?Z2ZjcXVrT1ZkZ0I3VTJ6T3RWNEdYZDZlMkxIS3F1c2o5aE02RElxVCt1d3lo?=
 =?utf-8?B?SUJwSE15VEJ3bUtRWlNrdW1rbnVFdkR1SkVGNmRENHNGTzNJZ2U1NllVNmx4?=
 =?utf-8?B?WEpyUXZxb0Nsd25LQVpMNzFxckp2Sml1Nld5REVNSEZ0bzNacXdXcG9JcDZl?=
 =?utf-8?B?cmp0TFRXY3JVMjJIY2NMdURKWXlSd2JRbHJWUDJ4MmZkclNucko4Ym9seGF1?=
 =?utf-8?B?Vk5zUmJ6aHNKdndrc0I5bTYvRnIzYlNoRFo3anU3aHpyTWI0MjdEUkdJRTZY?=
 =?utf-8?B?dVdpUVJ4a0JuRGhGRlZUMWxPVHMxRjZ3QmtzM1JZajBWR3MwQzFUVmdNRDBU?=
 =?utf-8?Q?7ka6r4qafH46zN/2U3WkOJNDKJioMV6Ojm+UIcS?=
X-MS-Exchange-CrossTenant-Network-Message-Id: e9ae5f88-0aee-46f7-4f8d-08d8cd14a1c3
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Feb 2021 16:06:17.2326
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: a5grSyDKP5aGghhdCzsTBTN1/dSnuI1XCzEIzYFZo92ux/RryKC7srS8fPnB4Zyw5Rqr+hcEuJkFBjyZIWZIng==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4970
X-OriginatorOrg: citrix.com

On Thu, Jan 14, 2021 at 04:04:32PM +0100, Jan Beulich wrote:
> The "guest" variants are intended to work with (potentially) fully guest
> controlled addresses, while the "unsafe" variants are not. Subsequently
> we will want them to have different behavior, so as first step identify
> which one is which. For now, both groups of constructs alias one another.
> 
> Double underscore prefixes are retained only on
> __copy_{from,to}_guest_pv(), to allow still distinguishing them from
> their "checking" counterparts once they also get renamed (to
> copy_{from,to}_guest_pv()).
> 
> Add previously missing __user at some call sites.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> Instead of __copy_{from,to}_guest_pv(), perhaps name them just
> __copy_{from,to}_pv()?
> 
> --- a/xen/arch/x86/gdbstub.c
> +++ b/xen/arch/x86/gdbstub.c
> @@ -33,13 +33,13 @@ gdb_arch_signal_num(struct cpu_user_regs
>  unsigned int
>  gdb_arch_copy_from_user(void *dest, const void *src, unsigned len)
>  {
> -    return __copy_from_user(dest, src, len);
> +    return copy_from_unsafe(dest, src, len);
>  }
>  
>  unsigned int 
>  gdb_arch_copy_to_user(void *dest, const void *src, unsigned len)
>  {
> -    return __copy_to_user(dest, src, len);
> +    return copy_to_unsafe(dest, src, len);

I assume we need to use the unsafe variants here, because the input
addresses are fully controlled by gdb, and hence not suitable as
speculation vectors?

Also, AFAICT they could point to addresses belonging to either the Xen
or the guest address space.

> --- a/xen/include/asm-x86/uaccess.h
> +++ b/xen/include/asm-x86/uaccess.h

At some point we should also rename this to pvaccess.h maybe?

> @@ -197,21 +197,20 @@ do {
>  #define get_guest_size get_unsafe_size
>  
>  /**
> - * __copy_to_user: - Copy a block of data into user space, with less checking
> - * @to:   Destination address, in user space.
> - * @from: Source address, in kernel space.
> + * __copy_to_guest_pv: - Copy a block of data into guest space, with less
> + *                       checking

I would have preferred pv to be a prefix rather than a suffix, but we
already have the hvm accessors using that nomenclature.

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 16:26:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 16:26:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83328.154764 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9VqW-0006Bx-V0; Tue, 09 Feb 2021 16:26:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83328.154764; Tue, 09 Feb 2021 16:26:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9VqW-0006Bq-Rb; Tue, 09 Feb 2021 16:26:36 +0000
Received: by outflank-mailman (input) for mailman id 83328;
 Tue, 09 Feb 2021 16:26:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=t2YR=HL=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1l9VqW-0006Bl-0a
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 16:26:36 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a9d83035-da87-454a-af58-61638f32ec3a;
 Tue, 09 Feb 2021 16:26:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a9d83035-da87-454a-af58-61638f32ec3a
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612887994;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=4iLr10S04cZJz58I01E6lZH5vzPhXXtXcjQatdrQgNE=;
  b=gzAT8c+gvjl2NOALSp3e666vEcPXqo9tX+ue7Up5W7kcCaohohV/dkCh
   mhuG1v3BsY1jMpWXG9ua0l9r0/KwfLRCc2Jrvv1fUQSrdEH40oorSRS1z
   gzYHIHsHS0ShwBQzhjwg9GEaNFUJNlQKcw+4xoupydP4Eevr1IEPiavA6
   s=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: H6feBqfnHyvXWcbo+4MFSKnqklO2CzvVyB4A9dHWrKCFJaB1DxdxoiZbS1bY+ggPmHdwbyazz3
 ypo/s+MXBTDQHcibGjlyFM/oi8j6lpCuC4gqM30N110eLmGU/mateChM7fCksgSRssMGBm4u5J
 xtk3PqN/lWECybNSN6ESTCCpIme7m8CqEYCabK6oXsjwDTSNJjG9eJQcn+02UboZHPYzUFeb+N
 WhhE82kRb+Atvupm7uAYtKInNIuJhEoTuwsAr4rJ7lfF4GiGpoikKdgPlCm25LOv5S2N9nNi49
 Qbc=
X-SBRS: 5.2
X-MesageID: 36875618
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,165,1610427600"; 
   d="scan'208";a="36875618"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=jJE6xpilQ5kAFRI7Ebbu4Se2S2ZeOPabEwdGZr8lWcGQTGHy0nu/DkmpIpJUXo2DJpH4s69zYIn3SKExjengpcq3x/Iv33Ey4rB89nhXG/mfsBIAYXoCZKG2Dj0IfcOuYAydEg+m9p+Q8d9Y7WD4ntVT3793+SUPt3HOSqvO515udJMCMtRxf3q+W+ncp/KwE5C09KQs/2wkqQO2rxqGAwDxXXc2dDiEfvd+ZLEqry2TEcF5ssqbadKZH5FH6C8FWEo+Dem1sjmgOX8bXqg2vBiTdx1QoZHCGU8QCxW+6dBBPBdQi84k2RCBwIRYaxm7iMoImzoGoXqwuJ124x8P9Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2Oje/ThS83Ub8f0kkwgIEombPtTAPzoJJioaG/TCiJ8=;
 b=KO2FEDyw9mbr+wCgrVpv2bP5VqnHfEFfWibbBrLlzh0AFepnOoC7KRLtNCNVhE89wUerzyxz7S9+HwLwdqIgKAS9jGtWR66rbBpO9JQV9VvywmXJTt3zyIK5Qge/dgyC7VssCB/2V/eRUu19kG7Cf1f7e92LIphA7ymDUITt0QjT33EgWit3kL31mCGQ7Iglg8zL9gtyuSOw49YHPeB3OYYiqrqthQgC0Cj9eVGqe5SunO8smwS1B01/zup4/JzfJbmMUCwGxnOEjxMJgSjpFJpdLgQ8yyhFfqzaNWcFlYDMhe8x8/u6CePXT0r0niGJDjgl/vgZkmXvh4LDGwrymQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2Oje/ThS83Ub8f0kkwgIEombPtTAPzoJJioaG/TCiJ8=;
 b=ubrdy0gNb7mAXTGksgUIcjHYyfCnlKcHKsYtuifwNs+nAJb3ZgHnEJmWH7K5tCF5b/6mCVoVriS5v/IC3ZTXDQgx9F26ghoePblmFtkmCWztVOq71Re0idCwkynveCVwbmYKE6O79UHd2VB7qIaBUuCzWXUTl7uX17zOj97XoKI=
Date: Tue, 9 Feb 2021 17:26:24 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Tim Deegan
	<tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
Subject: Re: [PATCH 04/17] x86/PV: harden guest memory accesses against
 speculative abuse
Message-ID: <YCK3sH/4EVLzRfZ3@Air-de-Roger>
References: <4f1975a9-bdd9-f556-9db5-eb6c428f258f@suse.com>
 <5da0c123-3b90-97e8-e1e5-10286be38ce7@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <5da0c123-3b90-97e8-e1e5-10286be38ce7@suse.com>
X-ClientProxiedBy: MR2P264CA0117.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:33::33) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 1d004747-8fab-47d4-e57a-08d8cd17746b
X-MS-TrafficTypeDiagnostic: DM5PR03MB2841:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM5PR03MB28413C69DD085EFE03D884D18F8E9@DM5PR03MB2841.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: mxbBmS8YLBzut3To4Pyp0e0C3wVZf+S6PhVA10kzSj2lETGJprQhzi3s2w+5bYjwhKPo2fwVUBR9NZ4tIjV9s/d5F996qYZWalj1NP+DYTjC0fuD1ANZdWGcC6ZgQciTMVwfOQTj4ecNnqJLCevCIAwI8fwHmHG01W1t51uXPwnKzzPCTj/52PEgLXE+HNCiDi2mc48c5sj2qxavWJqaNhMjlnGPg+5JkIRCUeVx1eyjY6/5MWI3mdCgaXOyJIPvYprdsEDusDpIU+7yRxQvsKA8wFW4dblfkXcqteVnOEe1nQE5H/7LinsB2MKyZP5NaZf2pj36Ikt6UCuWS4XGccWeDLhi3+VJcL/J67LHsNJBOQJoUVgutPAIweJsJ+FBXNyQrG4393JzqfPyUyA0oDLv5qLWruPpZ24OvysgNY+IZ40QvBguqz715CBY5yr6Q1Stt93VUWaINACsatV79DcXvmZ8v8FDXhy5734pjwSqcVXOLdiQo9GloACHRA4qPUKy+eSoylQ3iCCak+jsuVR7VudU85QYOgZIkJDR4sJq2gNFbEipQRe7YpNnAejZ0RSsrk0Av2BBvoQsDmFRzO8KP1vwrBSGZCvM5zVCSnQ=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(376002)(136003)(366004)(39860400002)(346002)(396003)(107886003)(8936002)(86362001)(4326008)(66556008)(966005)(66476007)(66946007)(16526019)(186003)(9686003)(478600001)(6496006)(8676002)(85182001)(2906002)(33716001)(316002)(6666004)(54906003)(6486002)(6916009)(26005)(83380400001)(956004)(5660300002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?T2FoT1BMMWdFSFNpNzRYSzdlVWpFdE81WnAvSHo4b21nTkJTSVVFa25VQmhH?=
 =?utf-8?B?RzczVWtKeW56TTAyMC9lZkFFY3RRb2lwWllYYWJVVnJZTFIyTk9aL0xESkwy?=
 =?utf-8?B?WkNXekJ4TlhVWG9FU0xtbXU1UDJaMXM3a0lhaG41TklMVDlyQXNBQWJqLzh0?=
 =?utf-8?B?MXFYckJJMkRhNHZ2NTBkdEVCOEN0R1NoQW11UnZzeEV2MjhVR29vdHRPaXM2?=
 =?utf-8?B?eFBaYVkxcldIQ0NOSmVYTGlCVUF6ZVRwdkEybnp4VkR2eklzeHR1YkVOYitZ?=
 =?utf-8?B?bUx6d1ZkM1EyMllKdEVkVUZtVTRFNGlCUnZ4akVNUW9KWVZVcjVLZEdvWVJF?=
 =?utf-8?B?RjRucWcwU1U0ZTBTTC90MDUwMjE2ZFhubk5EZWE3S1RpRTZ1bXIxNFRuVXlO?=
 =?utf-8?B?RDNydmNFdi9DRFo5eEc4SmxUYUwwR25kY1R2T3RxdlhpN1VYOXVTR3JzR3ox?=
 =?utf-8?B?L2lmUzlNUmpSZms4eWJ0VWI5aE51YUpHNXZJSStob05hY1l1ODJKY2R1QzhS?=
 =?utf-8?B?WjdDTU5oZDhzMUtIZk1IQTVaRG5TVmZ3OXdicktqRTFTVzlhYmNoejBQcjl4?=
 =?utf-8?B?V05tUlAwazViZkhWam5pSjNnNlNBOUI2UCtXMHM5eC90WXZqaThaWWZDdVN6?=
 =?utf-8?B?WGFMSjRHTVY0cWtGSmpvK0Y2OFJIRHpqaUZDTjdlRXFpdVhnc0NmQWt4VDM1?=
 =?utf-8?B?SjE1bE83ZE5LakJTSTdEelpNYVJpcEpYN29LMlBCbkIwcHN6bjl3NmFyRitp?=
 =?utf-8?B?YjhoZ2pSd0hCNkdNbmJHWlhqNmtJUU05MHhJM1hSejVETnRKckNoWlk0NEtP?=
 =?utf-8?B?OEY5aXFpVGVBSi9kMks3RGk3NXNJZkFPSDRtdDZEMGRPYWVTTjhBb2wyeGFa?=
 =?utf-8?B?U0puWkg2Lzk0YzJRN2xsNUJkUVFiSDVxcEI5anVhcyt4SUp4ZG5YWjNJcEdm?=
 =?utf-8?B?a1Y1c1RzRkdrd2JLNVRwTEo1OWtNVjNFSWtKcksyT0JTVm13NzlqQW8wcmhQ?=
 =?utf-8?B?dFQwaEVhaC8wZnZZQ3FmUVlTaTNSaXlOdy93YXpZZW4zbVhXOStkT3hRdWhr?=
 =?utf-8?B?NFF1WVZwWm1jUW1nL0hBUTB3cnRXUE53NEY2SGsrdjNDUmhZOEMwaDZNSzdF?=
 =?utf-8?B?RVlRbEE5RXhBbUFCV0dhSytvK3FTL2dheVR5Tm9XdHlDVmdwT1hvOWk4S0Iz?=
 =?utf-8?B?TUVyQTdrVGRyQ3hpZjdtR3NVaEt3MkZMaTh2cW1HSFNzcitlK1JDVVczZmpX?=
 =?utf-8?B?aWJqVC9oQ3U4YjQ5eHpKMU1kS1d2bG5ML2g4QmROaGkyOXptamxUei9MSitU?=
 =?utf-8?B?VUY5dFI3L3ZCMmVCMkt1QXNkdVhmVTliVE1jTnJaRHNUWDRwVEg0RFR1TU5Z?=
 =?utf-8?B?SHNDWDNNZzIyeFRUSFhiZkVZQmJyRTgveFRxZ1djRGhtUE9sQzJNMVgxdjRB?=
 =?utf-8?B?OHFsaEQ3TlhPdkx5czRoakUxaU8rVGNub2Q4ejVNN3hIVmZzSVAvenNnSHJI?=
 =?utf-8?B?d0Y5Mm5EM2lsSUVvYU1PZkl2ZXVRY0E5V09sbjg0N2UwZGF3UzdSMUo1VTFr?=
 =?utf-8?B?RkQzNGJzNGM3MTlWKzRNZENqd1pabkRTYzlheEdoK0trRUNUcnpqaGZnOGdT?=
 =?utf-8?B?S25sTUlGQ01XRUROeTlzUDd0VnJrUytFdkRvRThKU054a3cvalVYSURIblQr?=
 =?utf-8?B?NGd6eG13N3AvVFlmbHpBZWdZSXJ0c3F6dDVsaDBDVmNlaldmTFdOWEJET29F?=
 =?utf-8?Q?j7vPFTDlAqFHgPemEUJ1FZC0Q5MjQU9h0F1/76+?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 1d004747-8fab-47d4-e57a-08d8cd17746b
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Feb 2021 16:26:29.5459
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: o2/R1NBFS7zm45Vtd3155droaoe0CSC5w0E6EXObhg90p5DnMjWt8OnROnVG40/tAM1OfqtvUyRtVLaJ6weAog==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB2841
X-OriginatorOrg: citrix.com

On Thu, Jan 14, 2021 at 04:04:57PM +0100, Jan Beulich wrote:
> Inspired by
> https://lore.kernel.org/lkml/f12e7d3cecf41b2c29734ea45a393be21d4a8058.1597848273.git.jpoimboe@redhat.com/
> and prior work in that area of x86 Linux, suppress speculation with
> guest specified pointer values by suitably masking the addresses to
> non-canonical space in case they fall into Xen's virtual address range.
> 
> Introduce a new Kconfig control.
> 
> Note that it is necessary in such code to avoid using "m" kind operands:
> If we didn't, there would be no guarantee that the register passed to
> guest_access_mask_ptr is also the (base) one used for the memory access.
> 
> As a minor unrelated change in get_unsafe_asm() the unnecessary "itype"
> parameter gets dropped and the XOR on the fixup path gets changed to be
> a 32-bit one in all cases: This way we avoid pointless REX.W or operand
> size overrides, or writes to partial registers.
> 
> Requested-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> The insn sequence chosen is certainly up for discussion; I've picked
> this one despite the RCR because alternatives I could come up with,
> like
> 
> 	mov	$(HYPERVISOR_VIRT_END), %rax
> 	mov	$~0, %rdx
> 	mov	$0x7fffffffffffffff, %rcx
> 	cmp	%rax, %rdi
> 	cmovb	%rcx, %rdx
> 	and	%rdx, %rdi
> 
> weren't necessarily better: Either, as above, they are longer and
> require a 3rd scratch register, or they also utilize the carry flag in
> some similar way.
> ---
> Judging from the comment ahead of put_unsafe_asm() we might as well not
> tell gcc at all anymore about the memory access there, now that there's
> no use of the operand anymore in the assembly code.
> 
> --- a/xen/arch/x86/usercopy.c
> +++ b/xen/arch/x86/usercopy.c
> @@ -10,12 +10,19 @@
>  #include <xen/sched.h>
>  #include <asm/uaccess.h>
>  
> -unsigned __copy_to_user_ll(void __user *to, const void *from, unsigned n)
> +#ifndef GUARD
> +# define GUARD UA_KEEP
> +#endif
> +
> +unsigned int copy_to_guest_ll(void __user *to, const void *from, unsigned int n)
>  {
>      unsigned dummy;
>  
>      stac();
>      asm volatile (
> +        GUARD(
> +        "    guest_access_mask_ptr %[to], %q[scratch1], %q[scratch2]\n"

Don't you also need to take 'n' into account here, to assert that the
access doesn't end in hypervisor address space? Or is that fine because
speculation wouldn't go that far?

I also wonder why this needs to be done in assembly; couldn't you check
the address(es) using C?

> +        )
>          "    cmp  $"STR(2*BYTES_PER_LONG-1)", %[cnt]\n"
>          "    jbe  1f\n"
>          "    mov  %k[to], %[cnt]\n"
> @@ -42,6 +49,7 @@ unsigned __copy_to_user_ll(void __user *
>          _ASM_EXTABLE(1b, 2b)
>          : [cnt] "+c" (n), [to] "+D" (to), [from] "+S" (from),
>            [aux] "=&r" (dummy)
> +          GUARD(, [scratch1] "=&r" (dummy), [scratch2] "=&r" (dummy))
>          : "[aux]" (n)
>          : "memory" );
>      clac();
> @@ -49,12 +57,15 @@ unsigned __copy_to_user_ll(void __user *
>      return n;
>  }
>  
> -unsigned __copy_from_user_ll(void *to, const void __user *from, unsigned n)
> +unsigned int copy_from_guest_ll(void *to, const void __user *from, unsigned int n)
>  {
>      unsigned dummy;
>  
>      stac();
>      asm volatile (
> +        GUARD(
> +        "    guest_access_mask_ptr %[from], %q[scratch1], %q[scratch2]\n"
> +        )
>          "    cmp  $"STR(2*BYTES_PER_LONG-1)", %[cnt]\n"
>          "    jbe  1f\n"
>          "    mov  %k[to], %[cnt]\n"
> @@ -87,6 +98,7 @@ unsigned __copy_from_user_ll(void *to, c
>          _ASM_EXTABLE(1b, 6b)
>          : [cnt] "+c" (n), [to] "+D" (to), [from] "+S" (from),
>            [aux] "=&r" (dummy)
> +          GUARD(, [scratch1] "=&r" (dummy), [scratch2] "=&r" (dummy))
>          : "[aux]" (n)
>          : "memory" );
>      clac();
> @@ -94,6 +106,8 @@ unsigned __copy_from_user_ll(void *to, c
>      return n;
>  }
>  
> +#if GUARD(1) + 0
> +
>  /**
>   * copy_to_user: - Copy a block of data into user space.
>   * @to:   Destination address, in user space.
> @@ -128,8 +142,11 @@ unsigned clear_user(void __user *to, uns
>  {
>      if ( access_ok(to, n) )
>      {
> +        long dummy;
> +
>          stac();
>          asm volatile (
> +            "    guest_access_mask_ptr %[to], %[scratch1], %[scratch2]\n"
>              "0:  rep stos"__OS"\n"
>              "    mov  %[bytes], %[cnt]\n"
>              "1:  rep stosb\n"
> @@ -140,7 +157,8 @@ unsigned clear_user(void __user *to, uns
>              ".previous\n"
>              _ASM_EXTABLE(0b,3b)
>              _ASM_EXTABLE(1b,2b)
> -            : [cnt] "=&c" (n), [to] "+D" (to)
> +            : [cnt] "=&c" (n), [to] "+D" (to), [scratch1] "=&r" (dummy),
> +              [scratch2] "=&r" (dummy)
>              : [bytes] "r" (n & (BYTES_PER_LONG - 1)),
>                [longs] "0" (n / BYTES_PER_LONG), "a" (0) );
>          clac();
> @@ -174,6 +192,16 @@ unsigned copy_from_user(void *to, const
>      return n;
>  }
>  
> +# undef GUARD
> +# define GUARD UA_DROP
> +# define copy_to_guest_ll copy_to_unsafe_ll
> +# define copy_from_guest_ll copy_from_unsafe_ll
> +# undef __user
> +# define __user
> +# include __FILE__
> +
> +#endif /* GUARD(1) */
> +
>  /*
>   * Local variables:
>   * mode: C
> --- a/xen/arch/x86/x86_64/entry.S
> +++ b/xen/arch/x86/x86_64/entry.S
> @@ -446,6 +446,8 @@ UNLIKELY_START(g, create_bounce_frame_ba
>          jmp   asm_domain_crash_synchronous  /* Does not return */
>  __UNLIKELY_END(create_bounce_frame_bad_sp)
>  
> +        guest_access_mask_ptr %rsi, %rax, %rcx
> +
>  #define STORE_GUEST_STACK(reg, n) \
>  0:      movq  %reg,(n)*8(%rsi); \
>          _ASM_EXTABLE(0b, domain_crash_page_fault_ ## n ## x8)
> --- a/xen/common/Kconfig
> +++ b/xen/common/Kconfig
> @@ -114,6 +114,24 @@ config SPECULATIVE_HARDEN_BRANCH
>  
>  	  If unsure, say Y.
>  
> +config SPECULATIVE_HARDEN_GUEST_ACCESS
> +	bool "Speculative PV Guest Memory Access Hardening"
> +	default y
> +	depends on PV
> +	help
> +	  Contemporary processors may use speculative execution as a
> +	  performance optimisation, but this can potentially be abused by an
> +	  attacker to leak data via speculative sidechannels.
> +
> +	  One source of data leakage is via speculative accesses to hypervisor
> +	  memory through guest controlled values used to access guest memory.
> +
> +	  When enabled, code paths accessing PV guest memory will have guest
> +	  controlled addresses massaged such that memory accesses through them
> +	  won't touch hypervisor address space.
> +
> +	  If unsure, say Y.
> +
>  endmenu
>  
>  config HYPFS
> --- a/xen/include/asm-x86/asm-defns.h
> +++ b/xen/include/asm-x86/asm-defns.h
> @@ -44,3 +44,16 @@
>  .macro INDIRECT_JMP arg:req
>      INDIRECT_BRANCH jmp \arg
>  .endm
> +
> +.macro guest_access_mask_ptr ptr:req, scratch1:req, scratch2:req
> +#if defined(CONFIG_SPECULATIVE_HARDEN_GUEST_ACCESS)
> +    mov $(HYPERVISOR_VIRT_END - 1), \scratch1
> +    mov $~0, \scratch2
> +    cmp \ptr, \scratch1
> +    rcr $1, \scratch2
> +    and \scratch2, \ptr
> +#elif defined(CONFIG_DEBUG) && defined(CONFIG_PV)
> +    xor $~\@, \scratch1
> +    xor $~\@, \scratch2
> +#endif
> +.endm
> --- a/xen/include/asm-x86/uaccess.h
> +++ b/xen/include/asm-x86/uaccess.h
> @@ -13,13 +13,19 @@
>  unsigned copy_to_user(void *to, const void *from, unsigned len);
>  unsigned clear_user(void *to, unsigned len);
>  unsigned copy_from_user(void *to, const void *from, unsigned len);
> +
>  /* Handles exceptions in both to and from, but doesn't do access_ok */
> -unsigned __copy_to_user_ll(void __user*to, const void *from, unsigned n);
> -unsigned __copy_from_user_ll(void *to, const void __user *from, unsigned n);
> +unsigned int copy_to_guest_ll(void __user*to, const void *from, unsigned int n);
> +unsigned int copy_from_guest_ll(void *to, const void __user *from, unsigned int n);
> +unsigned int copy_to_unsafe_ll(void *to, const void *from, unsigned int n);
> +unsigned int copy_from_unsafe_ll(void *to, const void *from, unsigned int n);
>  
>  extern long __get_user_bad(void);
>  extern void __put_user_bad(void);
>  
> +#define UA_KEEP(args...) args
> +#define UA_DROP(args...)

I assume UA means "user access"; since you have dropped other uses of
"user" and changed them to "guest", I wonder if we should name these
just A_{KEEP/DROP}.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 16:48:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 16:48:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83332.154776 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9WB8-00081v-QN; Tue, 09 Feb 2021 16:47:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83332.154776; Tue, 09 Feb 2021 16:47:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9WB8-00081o-MM; Tue, 09 Feb 2021 16:47:54 +0000
Received: by outflank-mailman (input) for mailman id 83332;
 Tue, 09 Feb 2021 16:47:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l9WB7-00081j-Kn
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 16:47:53 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l9WB7-0002ko-J0
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 16:47:53 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l9WB7-0004Ij-HQ
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 16:47:53 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l9WB4-0005t8-8b; Tue, 09 Feb 2021 16:47:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=ivCnS8WYD44pUoy+GhX/fzJsXHsE/y5TvjI4Kph2Kng=; b=ocVUPCQ/xyFwkpOdeEyZyu+aW6
	ivp3p8yPu3RA2Y6GZXvI1Jj9jjcJjJRfw8HtQEcFzGhVWmRu99VONrZlzbyPlzy1Mn73etczAgixF
	eB0uaaj5Cu6/Dbdyk4VjK6v7ZP7sBzwswCv5lmN+yC5wfDbMoh2KH0CJZ3cyvoA2PuvM=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24610.48309.568020.376765@mariner.uk.xensource.com>
Date: Tue, 9 Feb 2021 16:47:49 +0000
To: Julien Grall <julien@xen.org>
Cc: xen-devel@lists.xenproject.org,
    hongyxia@amazon.co.uk,
    Julien Grall <jgrall@amazon.com>,
    Jan Beulich <jbeulich@suse.com>,
    Andrew Cooper <andrew.cooper3@citrix.com>,
    Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
    Wei Liu <wl@xen.org>,
    Paul Durrant <paul@xen.org>,
    Kevin Tian <kevin.tian@intel.com>
Subject: Re: [for-4.15][PATCH v2 0/5] xen/iommu: Collection of bug fixes for IOMMU teardown
In-Reply-To: <20210209152816.15792-1-julien@xen.org>
References: <20210209152816.15792-1-julien@xen.org>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Julien Grall writes ("[for-4.15][PATCH v2 0/5] xen/iommu: Collection of bug fixes for IOMMU teardown"):
> From: Julien Grall <jgrall@amazon.com>
...
> This series is a collection of bug fixes for the IOMMU teardown code.
> All of them are candidate for 4.15 as they can either leak memory or
> lead to host crash/host corruption.
> 
> This is sent directly on xen-devel because all the issues were either
> introduced in 4.15 or happen in the domain creation code.

I think by current freeze rules this does not need a release-ack, but
for the avoidance of doubt:

Release-Acked-by: Ian Jackson <iwj@xenproject.org>

assuming it's committed by the end of the week.

Ian.


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 16:48:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 16:48:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83333.154788 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9WBV-00086P-4y; Tue, 09 Feb 2021 16:48:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83333.154788; Tue, 09 Feb 2021 16:48:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9WBV-00086I-0w; Tue, 09 Feb 2021 16:48:17 +0000
Received: by outflank-mailman (input) for mailman id 83333;
 Tue, 09 Feb 2021 16:48:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=BG3/=HL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l9WBS-00085v-SK
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 16:48:14 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fc212742-5759-4fb4-9e3b-780b88eb1f97;
 Tue, 09 Feb 2021 16:48:13 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D8843AC97;
 Tue,  9 Feb 2021 16:48:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fc212742-5759-4fb4-9e3b-780b88eb1f97
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612889293; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=me0NiE9awRubwK9ZwvvpuVHPIAggzHLa7Z9SrtLp+tM=;
	b=jXa+/pG1zIfgx5JbJH6Hbfi/n7JMcrU2vb8OGHNYNGzI/dXyO8ZD5Pp09KcPAHNw3xGmWF
	KQWuWiUAGBTI1vT/v9z+lWfVlDip4VWkdbPPu+4W24MYqM7Rdn+u7ZI+aXrsP+o/aKr1A0
	LO9MQKWNlaJr6j+9ONVkMwoVO/9kw0o=
Subject: Re: [PATCH for-4.15] x86/ucode: Fix microcode payload size for Fam19
 processors
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Ian Jackson <iwj@xenproject.org>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20210209153336.4016-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c09110f7-6459-e1f7-2175-09d535ad03ce@suse.com>
Date: Tue, 9 Feb 2021 17:48:13 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210209153336.4016-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 09.02.2021 16:33, Andrew Cooper wrote:
> The original limit provided wasn't accurate.  Blobs are in fact rather larger.
> 
> Fixes: fe36a173d1 ("x86/amd: Initial support for Fam19h processors")
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>

> --- a/xen/arch/x86/cpu/microcode/amd.c
> +++ b/xen/arch/x86/cpu/microcode/amd.c
> @@ -111,7 +111,7 @@ static bool verify_patch_size(uint32_t patch_size)
>  #define F15H_MPB_MAX_SIZE 4096
>  #define F16H_MPB_MAX_SIZE 3458
>  #define F17H_MPB_MAX_SIZE 3200
> -#define F19H_MPB_MAX_SIZE 4800
> +#define F19H_MPB_MAX_SIZE 5568

How certain is it that there's not going to be another increase?
And by comparison, how bad would it be if we raised this upper
limit to something at least slightly less of an "odd" number,
e.g. 0x1800, and thus provided some headroom?

Jan


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 16:48:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 16:48:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83334.154800 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9WC4-0008Cl-Fh; Tue, 09 Feb 2021 16:48:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83334.154800; Tue, 09 Feb 2021 16:48:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9WC4-0008Ce-BX; Tue, 09 Feb 2021 16:48:52 +0000
Received: by outflank-mailman (input) for mailman id 83334;
 Tue, 09 Feb 2021 16:48:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l9WC3-0008CZ-Su
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 16:48:51 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l9WC3-0002m1-SA
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 16:48:51 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l9WC3-0004Pk-R4
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 16:48:51 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l9WC0-0005to-Kq; Tue, 09 Feb 2021 16:48:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=zPW2ABlkHfszLjqeThttAOSm5mlmwRyifQb60Y/TTrY=; b=iqSGhW7QezZDTXFDJef3HQMGEj
	J/A+Ze5Y9nO5ZWVkRp1qZkUWMokvun7BZdBgwR0MmJGLs7lLI8G/x16MumB7RbEUOFcoBNr+FM+w9
	POQ6CE6qehpuly4QAXIU7QLgwXqs2oHv5W6WX+V/cSb4YoHeTwixGiq+4LFCVk+sv+8Q=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24610.48368.426558.75373@mariner.uk.xensource.com>
Date: Tue, 9 Feb 2021 16:48:48 +0000
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org,
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH v20210209 1/4] tools: move CONFIG_DIR and XEN_CONFIG_DIR in paths.m4
In-Reply-To: <20210209154536.10851-2-olaf@aepfle.de>
References: <20210209154536.10851-1-olaf@aepfle.de>
	<20210209154536.10851-2-olaf@aepfle.de>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Olaf Hering writes ("[PATCH v20210209 1/4] tools: move CONFIG_DIR and XEN_CONFIG_DIR in paths.m4"):
> Upcoming changes need to reuse XEN_CONFIG_DIR.
> 
> In its current location the assignment happens too late. Move it up
> in the file, along with CONFIG_DIR. Their only dependency is
> sysconfdir, which may also be adjusted in this file.

Reviewed-by: Ian Jackson <iwj@xenproject.org>
Release-Acked-by: Ian Jackson <iwj@xenproject.org>


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 16:49:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 16:49:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83335.154811 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9WCh-0008OM-PH; Tue, 09 Feb 2021 16:49:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83335.154811; Tue, 09 Feb 2021 16:49:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9WCh-0008OF-MC; Tue, 09 Feb 2021 16:49:31 +0000
Received: by outflank-mailman (input) for mailman id 83335;
 Tue, 09 Feb 2021 16:49:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l9WCg-0008O3-Fb
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 16:49:30 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l9WCg-0002mm-Ev
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 16:49:30 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l9WCg-0004RZ-EG
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 16:49:30 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l9WCd-0005u5-4k; Tue, 09 Feb 2021 16:49:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=yHG2qfc/IXAIs9CHJMF2Vx71A2v4y1qIu7VmreiNC4E=; b=T33QSR+JMN77FHDfATAi9eu0p7
	yip8C9fGx93KadqiPP6SNXBaetdDD9S3QRdL8ty8GszIg3SYSpuc39CKS2d/Pus+OLraCC2IRbcJC
	Yni0Rgc5g7/GRHtJpbpQ9g9NljUzBTTTpgc8L4xImJJ/i++GC8UnA9+wyyIf9o0fcVEw=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24610.48406.940250.878091@mariner.uk.xensource.com>
Date: Tue, 9 Feb 2021 16:49:26 +0000
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org,
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH v20210209 2/4] tools: add with-xen-scriptdir configure option
In-Reply-To: <20210209154536.10851-3-olaf@aepfle.de>
References: <20210209154536.10851-1-olaf@aepfle.de>
	<20210209154536.10851-3-olaf@aepfle.de>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Olaf Hering writes ("[PATCH v20210209 2/4] tools: add with-xen-scriptdir configure option"):
> Some distros plan that fresh installations will have an empty /etc,
> whose content will no longer be controlled by the package manager.
> 
> To make this possible, add a knob to configure to allow storing the
> hotplug scripts in libexec instead of /etc/xen/scripts.
> 
> The current default remains unchanged, which is /etc/xen/scripts.

Release-Acked-by: Ian Jackson <iwj@xenproject.org>
Reviewed-by: Ian Jackson <iwj@xenproject.org>


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 16:53:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 16:53:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83340.154824 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9WGF-0000qo-Dy; Tue, 09 Feb 2021 16:53:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83340.154824; Tue, 09 Feb 2021 16:53:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9WGF-0000qh-At; Tue, 09 Feb 2021 16:53:11 +0000
Received: by outflank-mailman (input) for mailman id 83340;
 Tue, 09 Feb 2021 16:53:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l9WGE-0000qc-06
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 16:53:10 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l9WGD-0002rP-Rn
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 16:53:09 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l9WGD-0007Dy-PE
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 16:53:09 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l9WGA-0005uj-IC; Tue, 09 Feb 2021 16:53:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=1FGfhXqvsB0s8U10Z+v7BjWMeRka/NXyo7WUojyJH3U=; b=EO/ZW17Xw4r09SlFKBvqYLwUwG
	miuAYjpdKQz+m5EG8nsicUPeECGMeneb85DWD/l2WgdYpvDUFfmB0JQXswMVMOX60PwTTfVL1UP7b
	yR4wvReoKBsew59APPyVLL4PSTAvC1n1oD1/fzNSrqpyEPGUPaR4UW/zt7wdHTGpb7fc=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24610.48626.319165.973767@mariner.uk.xensource.com>
Date: Tue, 9 Feb 2021 16:53:06 +0000
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org,
    Wei Liu <wl@xen.org>,
    Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v20210209 3/4] xl: optionally print timestamps when running xl commands
In-Reply-To: <20210209154536.10851-4-olaf@aepfle.de>
References: <20210209154536.10851-1-olaf@aepfle.de>
	<20210209154536.10851-4-olaf@aepfle.de>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Olaf Hering writes ("[PATCH v20210209 3/4] xl: optionally print timestamps when running xl commands"):
> Add a global option "-T" to xl to enable timestamps in the output from
> libxl and libxc. This is most useful with long running commands such
> as "migrate".
> 
> During 'xl -v.. migrate domU host' a large amount of debug output is
> generated. It is difficult to map each line to the sending or receiving
> side. Also, the time spent on migration is not reported.
> 
> With 'xl -T migrate domU host' both sides will print timestamps and
> also the pid of the invoked xl process to make it more obvious which
> side produced a given log line.
> 
> Note: depending on the command, xl itself also produces other output
> which does not go through libxentoollog. As a result such output will
> not have timestamps prepended.

Reviewed-by: Ian Jackson <iwj@xenproject.org>

This is a new feature so it needs a release risk analysis.

The changes are entirely to xl command line handling.  The worst /
most likely bug would probably be something to do with the way the
logger is created.  Such a bug would be easy to fix.  Or, this patch
could easily be reverted.

Release-Acked-by: Ian Jackson <iwj@xenproject.org>

> This change adds also the missing "-t" flag to "xl help" output.

This part of the commit message talks about -t rather than -T.  I will
fix that on commit.

I'm going to commit the first three now.

Ian.


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 17:03:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 17:03:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83344.154841 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9WQR-0001sj-Es; Tue, 09 Feb 2021 17:03:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83344.154841; Tue, 09 Feb 2021 17:03:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9WQR-0001sc-Bw; Tue, 09 Feb 2021 17:03:43 +0000
Received: by outflank-mailman (input) for mailman id 83344;
 Tue, 09 Feb 2021 17:03:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=BG3/=HL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l9WQQ-0001sX-Gr
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 17:03:42 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f011f3cf-8369-4f15-b73f-d07e73f5e6a4;
 Tue, 09 Feb 2021 17:03:41 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 99756AD2E;
 Tue,  9 Feb 2021 17:03:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f011f3cf-8369-4f15-b73f-d07e73f5e6a4
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612890220; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ytnzvGDPdmShFgyZyfaeZFXwCVssEFRt0dlXx43VwYI=;
	b=tqEBteWQtBOoDDF6nfnG6D0Sdph6+crW+TU1jSBpAypuC4H1bC+exI1IZXWIGdXTSVPozY
	5vZpBjp8mLA+SSqDLkFQCXt4139Uh8UdesAfOFS1GT7/6LPLdHBCpzpMHVq0RdFeokzwHy
	I2gXq3kUJo+QPnwJ0m0rRhMmanL4ZqM=
Subject: Re: [PATCH 03/17] x86: split __copy_{from,to}_user() into "guest" and
 "unsafe" variants
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
References: <4f1975a9-bdd9-f556-9db5-eb6c428f258f@suse.com>
 <b8112628-a2e3-2fdc-9847-1fa684283135@suse.com>
 <YCKy8lwh2YVWYChc@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <566262ba-d45e-d2fa-69ae-2e1549cd6a94@suse.com>
Date: Tue, 9 Feb 2021 18:03:40 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <YCKy8lwh2YVWYChc@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 09.02.2021 17:06, Roger Pau Monné wrote:
> On Thu, Jan 14, 2021 at 04:04:32PM +0100, Jan Beulich wrote:
>> The "guest" variants are intended to work with (potentially) fully guest
>> controlled addresses, while the "unsafe" variants are not. Subsequently
>> we will want them to have different behavior, so as first step identify
>> which one is which. For now, both groups of constructs alias one another.
>>
>> Double underscore prefixes are retained only on
>> __copy_{from,to}_guest_pv(), to allow still distinguishing them from
>> their "checking" counterparts once they also get renamed (to
>> copy_{from,to}_guest_pv()).
>>
>> Add previously missing __user at some call sites.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> Instead of __copy_{from,to}_guest_pv(), perhaps name them just
>> __copy_{from,to}_pv()?
>>
>> --- a/xen/arch/x86/gdbstub.c
>> +++ b/xen/arch/x86/gdbstub.c
>> @@ -33,13 +33,13 @@ gdb_arch_signal_num(struct cpu_user_regs
>>  unsigned int
>>  gdb_arch_copy_from_user(void *dest, const void *src, unsigned len)
>>  {
>> -    return __copy_from_user(dest, src, len);
>> +    return copy_from_unsafe(dest, src, len);
>>  }
>>  
>>  unsigned int 
>>  gdb_arch_copy_to_user(void *dest, const void *src, unsigned len)
>>  {
>> -    return __copy_to_user(dest, src, len);
>> +    return copy_to_unsafe(dest, src, len);
> 
> I assume we need to use the unsafe variants here, because the input
> addresses are fully controlled by gdb, and hence not suitable as
> speculation vectors?

Speculation doesn't matter when it comes to debugging, I
think. We were using the variants without access_ok()
checks already anyway to allow access to Xen addresses.
In fact it is my understanding ...

> Also could point to addresses belonging to both Xen or the guest
> address space AFAICT.

... that the primary goal here is to access Xen
addresses, and guest space only falls into the "may also
happen to be accessed" category.

>> --- a/xen/include/asm-x86/uaccess.h
>> +++ b/xen/include/asm-x86/uaccess.h
> 
> At some point we should also rename this to pvaccess.h maybe?

We could, but I'd rather not - this isn't about PV only.
Instead I would simply re-interpret 'u' from standing for
"user" (which didn't make much sense in Xen anyway, and
was only attributed to the Linux origin) to standing for
"unsafe" (both meanings - guest and in-Xen-but-unsafe).

>> @@ -197,21 +197,20 @@ do {
>>  #define get_guest_size get_unsafe_size
>>  
>>  /**
>> - * __copy_to_user: - Copy a block of data into user space, with less checking
>> - * @to:   Destination address, in user space.
>> - * @from: Source address, in kernel space.
>> + * __copy_to_guest_pv: - Copy a block of data into guest space, with less
>> + *                       checking
> 
> I would have preferred pv to be a prefix rather than a suffix, but we
> already have the hvm accessors using that nomenclature.

Right, I wanted to match that naming model. Later we can
think about renaming to copy_{to,from}_{hvm,pv}() or
whatever else naming scheme we like. I have to admit though
I'm not convinced the longer {hvm,pv}_copy_{from,to}_guest()
would really be better.

> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks!

Jan


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 17:06:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 17:06:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83345.154854 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9WSt-000226-Tf; Tue, 09 Feb 2021 17:06:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83345.154854; Tue, 09 Feb 2021 17:06:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9WSt-00021z-QF; Tue, 09 Feb 2021 17:06:15 +0000
Received: by outflank-mailman (input) for mailman id 83345;
 Tue, 09 Feb 2021 17:06:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NhOK=HL=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1l9WSt-00021u-0S
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 17:06:15 +0000
Received: from mo6-p00-ob.smtp.rzone.de (unknown [2a01:238:400:100::c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4d0973db-f352-4721-9b09-d7a9388f7f3f;
 Tue, 09 Feb 2021 17:06:14 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.17.1 DYNA|AUTH)
 with ESMTPSA id 604447x19H6721Y
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 9 Feb 2021 18:06:07 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4d0973db-f352-4721-9b09-d7a9388f7f3f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1612890373;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
	From:Subject:Sender;
	bh=YsCchLV6ss1aMoXsjF9HOSE0UP0I6pyzvep7ylz6Z08=;
	b=p1Y/VP7jXvu3koeb0jCcTCvg8mHKp2OWJDwOxtBlbc/lJCgbayJsduFVu4o1v4Wv3E
	MQkRl58oKVJscAT6RYIV+n74U7p/dk/r+/9ZFadF082dC496cNFzB1WmLy3eJWlqch1f
	d/XmfD0cR3LUJPy/YtBe559TbiK1heQhoJzTSrlFhEpf0VlqMxx38vo/isrp7q8yMKsj
	H+9WnxBv0NqsaTDWvbiJm+AZ3Liy7DI81nPht4xbJCyka0Un1I9TR1zF8gp3nWVWr51L
	9QIG+cA6tbawSoAQcinO4bSfsw4cBxQSwceCfzCgnMHIwbYb4VVREgfhy041O8jwguAy
	TT5Q==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDXdoX8l8pYAcz5OTW+v4/A=="
X-RZG-CLASS-ID: mo00
Date: Tue, 9 Feb 2021 18:06:00 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Ian Jackson <iwj@xenproject.org>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>, Anthony PERARD
 <anthony.perard@citrix.com>
Subject: Re: [PATCH v20210209 3/4] xl: optionally print timestamps when
 running xl commands
Message-ID: <20210209180600.67e3f167.olaf@aepfle.de>
In-Reply-To: <24610.48626.319165.973767@mariner.uk.xensource.com>
References: <20210209154536.10851-1-olaf@aepfle.de>
	<20210209154536.10851-4-olaf@aepfle.de>
	<24610.48626.319165.973767@mariner.uk.xensource.com>
X-Mailer: Claws Mail 2020.08.19 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
 boundary="Sig_/j/Ci0QjYvSsJUbeSuGhPz8d"; protocol="application/pgp-signature"

--Sig_/j/Ci0QjYvSsJUbeSuGhPz8d
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

On Tue, 9 Feb 2021 16:53:06 +0000,
Ian Jackson <iwj@xenproject.org> wrote:

> This part of the commit message talks about -t rather than -T.  I will
> fix that on commit.

It is really the lowercase t.

01f78a81ae56220dd496a61185ba5dfae30dc2fe did not adjust the output in
tools/libxl/xl_cmdimpl.c:help().


Olaf

--Sig_/j/Ci0QjYvSsJUbeSuGhPz8d
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmAiwPgACgkQ86SN7mm1
DoAILhAAgubCeEv8v391rlVJFEh4eDzB3DMlUrGnudmn8QvBNS/nr4X/zAlbfggq
jqpO0GgOyaUlQ3JTm3waOAgmUDF2wFDYtEhbIL12XEJJC1gcO3mjMP2RKP5Y84Zo
tFdl9IrwKPlDou5ydiqZ342AcnkEyT/yLDHhRVipQSyQtOPvztCrHMOFmBOflCvy
wlLu/K44JhnELiZf4VJ1tzR9fh07TZ/AU/yRU2J4OencYvXX+XU5NodvROlq29Nv
FOTnOxhyIJxERra6+x0tWFl6jSKHXq2hPgdV/KNivMCuTCyNEQRVUBjFFUEEMx9d
km038yiM2voEG6GmrI3tw7qfonomoiSp6EOnsmhF694stxYFbOUW1kjgZwGKVdo3
2XdrNWzD950JEriYpohq2vi6LOupCbaSCGPS/tQl+iTYTR0bDgiFj+wQMoZcMEwW
wV33qyOoc3LlEiIwh1B6+0eYQ1sRSLq1VVh0JE2Ck7BHddtDAj1tPvDSZ2VqQzBF
QyUF6lY+EGjGo1FC3vCGSO6feLJBXaiPiJYQts6WuXSR6iXvC6kNlBkGZDBIVH8t
pXU2iilA/lPqZImnLIEM8L+yhU+fW6LNu4PwIRtk6kFmudM5gevNaUsfrRBkMB3u
+MgB+8oUOr4oT797HZb2v4AOX+paEVi3wcyd03I8BPjNQ+WiVRo=
=+bYN
-----END PGP SIGNATURE-----

--Sig_/j/Ci0QjYvSsJUbeSuGhPz8d--


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 17:12:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 17:12:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83351.154881 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9WZ0-00032H-Or; Tue, 09 Feb 2021 17:12:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83351.154881; Tue, 09 Feb 2021 17:12:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9WZ0-00032A-Lm; Tue, 09 Feb 2021 17:12:34 +0000
Received: by outflank-mailman (input) for mailman id 83351;
 Tue, 09 Feb 2021 17:12:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l9WYz-000325-MK
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 17:12:33 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l9WYz-0003Dq-LI
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 17:12:33 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l9WYz-0001U1-KP
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 17:12:33 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l9WYu-0005ye-P4; Tue, 09 Feb 2021 17:12:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=UpX8S6XRwqpEcQHWjRqxBp4cqcoxZJbg5g6R2fWMqS4=; b=CoIrFO8Bu0JSYTrND2zbuANXwd
	wemzf/ihRaJXNZwRvJ0u9UrEfYTtSxxsViSO3c1eaEZbx44tiuGoy/evOLMVo7b5b81QyOCiVpbFG
	cVSa1XdYDCBe0uJhjjY9OvaFBhbh/L2FQyEgMwO6IzYAq+YijHMsLZWkI48urFySGmyI=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24610.49788.493621.307069@mariner.uk.xensource.com>
Date: Tue, 9 Feb 2021 17:12:28 +0000
To: Olaf Hering <olaf@aepfle.de>,
    Andrew Cooper <andrew.cooper3@citrix.com>
Cc: xen-devel@lists.xenproject.org,
    Wei Liu <wl@xen.org>,
    Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v20210209 4/4] xl: disable --debug option for xl migrate
In-Reply-To: <20210209154536.10851-5-olaf@aepfle.de>
References: <20210209154536.10851-1-olaf@aepfle.de>
	<20210209154536.10851-5-olaf@aepfle.de>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Olaf Hering writes ("[PATCH v20210209 4/4] xl: disable --debug option for xl migrate"):
> xl migrate --debug used to track every pfn in every batch of pages.
> 
> Since commit cfa955591caea5d7ec505cdcbf4442f2d6e889e1 from Xen 4.6 the
> debug flag changed meaning from "verify transferred memory during live
> migration" to "verify transferred memory in remus/colo". At least xl
> will not be able to trigger execution of the verifying code in
> send_domain_memory_live anymore.
> 
> Remove "--debug" from "xl migrate -h" output.
> Remove "--debug" from the xl man page.
> Do not send "-d" as potential option for "xl migrate-receive" anymore.
> Do not pass the flag LIBXL_SUSPEND_DEBUG anymore to libxl_domain_suspend.
> Continue to recognize "--debug" as valid option for "xl migrate".
> Continue to recognize "-d" as valid option for "xl migrate-receive".

It seems to me that something is definitely a bug here but I want to
understand from Andy what the best thing to do is.  I'm hesitant to
release-ack removing this at this stage.

Wouldn't it be better to just fix the docs like in your previously
suggested patch ?

Also, Olaf, please CC Andy on these migration-related patches.

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 17:16:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 17:16:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83354.154894 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Wcz-0003Cv-Al; Tue, 09 Feb 2021 17:16:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83354.154894; Tue, 09 Feb 2021 17:16:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Wcz-0003Co-6f; Tue, 09 Feb 2021 17:16:41 +0000
Received: by outflank-mailman (input) for mailman id 83354;
 Tue, 09 Feb 2021 17:16:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NhOK=HL=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1l9Wcx-0003Ch-9F
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 17:16:39 +0000
Received: from mo6-p01-ob.smtp.rzone.de (unknown [2a01:238:400:200::c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c87bd2ec-64ca-4c29-afe6-54547d6b4020;
 Tue, 09 Feb 2021 17:16:38 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.17.1 DYNA|AUTH)
 with ESMTPSA id 604447x19HGY23t
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 9 Feb 2021 18:16:34 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c87bd2ec-64ca-4c29-afe6-54547d6b4020
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1612890997;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
	From:Subject:Sender;
	bh=40w3WAsyj1Y6QIRn7DUmyhbPlGiTUGQ7jHaOxhb0TFs=;
	b=cP5xc5c4ZZx8Dv9+z0rjKJ/tMYA91DalzZ69Td8btnqbLU9+rh9jxTOV3x9zcc+SMP
	LIo96D6BpVHoLfbkSAtCGmaAZxdiFmwrHf6GHf+WPO8rSQzEYTg5ALr37kR8i5TYJDAB
	FvRjJJUMQvgf/vsjRy8bG+rXITge+I5AN7FwTdzuTS3sM/QkiQEteDDynVcixwJOI+ZM
	6U5H3on0/4QNfvGmkMiv9rHt18b9GIjbSUvOK390A8LC3S2ErUOpENZ6vZh9L+i82nF2
	vKBezP1TjiTOkyXImFgM1ySutxgwaY5W/LfHXgt0XkMMhd/z67BDYqzdM5W8c1VP5PFu
	3KNg==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDXdoX8l8pYAcz5OTW+v4/A=="
X-RZG-CLASS-ID: mo00
Date: Tue, 9 Feb 2021 18:16:21 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Ian Jackson <iwj@xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>, Anthony PERARD
 <anthony.perard@citrix.com>
Subject: Re: [PATCH v20210209 4/4] xl: disable --debug option for xl migrate
Message-ID: <20210209181621.2b329fd0.olaf@aepfle.de>
In-Reply-To: <24610.49788.493621.307069@mariner.uk.xensource.com>
References: <20210209154536.10851-1-olaf@aepfle.de>
	<20210209154536.10851-5-olaf@aepfle.de>
	<24610.49788.493621.307069@mariner.uk.xensource.com>
X-Mailer: Claws Mail 2020.08.19 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
 boundary="Sig_/EVRJ6jljPGl=GoN32bmL8Go"; protocol="application/pgp-signature"

--Sig_/EVRJ6jljPGl=GoN32bmL8Go
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

On Tue, 9 Feb 2021 17:12:28 +0000,
Ian Jackson <iwj@xenproject.org> wrote:

> Also, Olaf, please CC Andy on these migration-related patches.

Can this be automated via MAINTAINERS, so that scripts/get_maintainer.pl
addresses the people who feel responsible for it? Right now it ends up in
your queue due to 'tools/*'.


Olaf

--Sig_/EVRJ6jljPGl=GoN32bmL8Go
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmAiw2UACgkQ86SN7mm1
DoDWvQ/7Buwm79E/JBRMy+iFEy7KgcqTn0vROT6PrymppyTEPunDDwEOfxlzNmTP
TYg2+FP4OOskMoefBGRvsBh19nfI4VnGXsHRElEhPhe3//qIVPuxpMGTL+tBTkV6
V7BSdLhEfB6jxgEKAmDIzCfymSRxig6iTfSQ1JMWc/QQ6lewbD0WV4qrvnZZMvgE
SmNUcDzu4dFORGcWXEIhUkWqiuQVyE5l4T62mxk8mOvaQXWgUNJED0ZHs8YkDGyB
HBAf69klnDUv03tPS9VejolFiWWR7N9ULQN/clhguwnKWTxcGnF2LhznpPPl+vxP
v76finMKYC5ONZsb7O4sC5qA8LLLlnWXijhjgF85G5+UAG0E7d9qx8MgK6BMkPjF
3Zk3OX1nQreFhWm4q7ZGxFo0Gp6tYAgTkfL4ppxAkHpXdVIrdamyTOhdwVMBIh3A
OR026eDiEn/W20dlI0OdG1jrO5KWvYGtOMuIsB8n9OsyjG5fCb1Tb1QkJXfqftzG
9sONQE8WGZPoB6o9fdrenCbEK65pZbsz0CxNHG3XsJa6Zq1lOCyIFTr1IIpp5tD5
Lm9JbvI1+OwA0lup//7N9AwO7eeKDNLQANmevmc2L9rhG1LmYvM7VDo5bcBWXI56
9RyqfQEW3680kEkpvxv7r7tBQV1GKK665cPKkhWy/2/B3dzFNjQ=
=nCFc
-----END PGP SIGNATURE-----

--Sig_/EVRJ6jljPGl=GoN32bmL8Go--


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 17:17:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 17:17:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83355.154906 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9WdI-0003H3-Jj; Tue, 09 Feb 2021 17:17:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83355.154906; Tue, 09 Feb 2021 17:17:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9WdI-0003Gt-Fh; Tue, 09 Feb 2021 17:17:00 +0000
Received: by outflank-mailman (input) for mailman id 83355;
 Tue, 09 Feb 2021 17:17:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l9WdH-0003Gl-Vw
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 17:16:59 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l9WdH-0003IZ-U9
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 17:16:59 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l9WdH-0001g1-T8
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 17:16:59 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l9WdE-0005zj-L4; Tue, 09 Feb 2021 17:16:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=a9VGnPZOIdShHck4LO3O3Jk2W9JGbHYNo8DpOXP2rAA=; b=bmca7RF7TpFxylo3gl37py07Ks
	FMXQ966KkSPzK6pdcSokMCvhQM/5wIh8HACPKSNCW8O1iwCRrCVrwKkrK04yam8Puhy7/7CmphtwR
	GXfTaA7o4kaTjUb/x3icWlOttxM5n+YZbVaeOQa4Rn1ijcL6TXbDB1tWTp7NaSzuWip0=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24610.50056.424595.200181@mariner.uk.xensource.com>
Date: Tue, 9 Feb 2021 17:16:56 +0000
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org,
    Wei Liu <wl@xen.org>,
    Anthony PERARD  <anthony.perard@citrix.com>
Subject: Re: [PATCH v20210209 3/4] xl: optionally print timestamps when
 running xl commands
In-Reply-To: <20210209180600.67e3f167.olaf@aepfle.de>
References: <20210209154536.10851-1-olaf@aepfle.de>
	<20210209154536.10851-4-olaf@aepfle.de>
	<24610.48626.319165.973767@mariner.uk.xensource.com>
	<20210209180600.67e3f167.olaf@aepfle.de>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Olaf Hering writes ("Re: [PATCH v20210209 3/4] xl: optionally print timestamps when running xl commands"):
> On Tue, 9 Feb 2021 16:53:06 +0000,
> Ian Jackson <iwj@xenproject.org> wrote:
> 
> > This part of the commit message talks about -t rather than -T.  I will
> > fix that on commit.
> 
> It is really the lowercase t.
> 
> 01f78a81ae56220dd496a61185ba5dfae30dc2fe did not adjust the output in tools/libxl/xl_cmdimpl.c:help().

OIC, thanks.

Ian.


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 17:17:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 17:17:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83356.154918 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Wdu-0003Od-US; Tue, 09 Feb 2021 17:17:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83356.154918; Tue, 09 Feb 2021 17:17:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Wdu-0003OW-Qi; Tue, 09 Feb 2021 17:17:38 +0000
Received: by outflank-mailman (input) for mailman id 83356;
 Tue, 09 Feb 2021 17:17:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l9Wdu-0003ON-57
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 17:17:38 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l9Wdu-0003JV-4E
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 17:17:38 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l9Wdu-0001kj-3J
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 17:17:38 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l9Wdm-000606-8W; Tue, 09 Feb 2021 17:17:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=n/QNOK4FXGJY30IVGQZNniqQAODGoWX7ALyzPpeKOHs=; b=0aUvnpk9P5oZgMr/kLcgVLLm1m
	Dph1tepo/1Dpbz9eQupguUUJEeO1eZylBhc7z26QwSSHkmDakoY6X5xU4ZcFn72a6Fu7Y/YYj+jVk
	/H19RO06pS2pjX+wijARaL63tQWDmnSki8eammfsdD9BeDK40psqmnTiF1ZwGa9RnsT0=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24610.50089.887907.573064@mariner.uk.xensource.com>
Date: Tue, 9 Feb 2021 17:17:29 +0000
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
    Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
    Wei Liu <wl@xen.org>,
    Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH for-4.15] x86/ucode: Fix microcode payload size for Fam19
 processors
In-Reply-To: <c09110f7-6459-e1f7-2175-09d535ad03ce@suse.com>
References: <20210209153336.4016-1-andrew.cooper3@citrix.com>
	<c09110f7-6459-e1f7-2175-09d535ad03ce@suse.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Jan Beulich writes ("Re: [PATCH for-4.15] x86/ucode: Fix microcode payload size for Fam19 processors"):
> On 09.02.2021 16:33, Andrew Cooper wrote:
> > The original limit provided wasn't accurate.  Blobs are in fact rather larger.
> > 
> > Fixes: fe36a173d1 ("x86/amd: Initial support for Fam19h processors")
> > Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> 
> Acked-by: Jan Beulich <jbeulich@suse.com>
> 
> > --- a/xen/arch/x86/cpu/microcode/amd.c
> > +++ b/xen/arch/x86/cpu/microcode/amd.c
> > @@ -111,7 +111,7 @@ static bool verify_patch_size(uint32_t patch_size)
> >  #define F15H_MPB_MAX_SIZE 4096
> >  #define F16H_MPB_MAX_SIZE 3458
> >  #define F17H_MPB_MAX_SIZE 3200
> > -#define F19H_MPB_MAX_SIZE 4800
> > +#define F19H_MPB_MAX_SIZE 5568
> 
> How certain is it that there's not going to be another increase?
> And in comparison, how bad would it be if we pulled this upper
> limit to something that's at least slightly less of an "odd"
> number, e.g. 0x1800, and thus provide some headroom?

5568 really does seem an excessively magic number...

Ian.


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 17:31:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 17:31:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83360.154930 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Wqu-0005AQ-8a; Tue, 09 Feb 2021 17:31:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83360.154930; Tue, 09 Feb 2021 17:31:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Wqu-0005AJ-4X; Tue, 09 Feb 2021 17:31:04 +0000
Received: by outflank-mailman (input) for mailman id 83360;
 Tue, 09 Feb 2021 17:31:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SKhV=HL=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1l9Wqt-0005AE-6F
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 17:31:03 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 284dad74-7b50-4d07-ad37-4336d2027047;
 Tue, 09 Feb 2021 17:31:02 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 1D0AA64E9C;
 Tue,  9 Feb 2021 17:31:01 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 284dad74-7b50-4d07-ad37-4336d2027047
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1612891861;
	bh=hq6J9tfSw0+QitLzOO0NVNYl77fkpxulMT4itRqF76I=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=YjVSlOvxmNRdER0ncyndrl+iC187QTGWt/Vyj2JZI8y5KrW4sYP5n0xxVuPHzGYG1
	 5av8MFKE8ER91TnowTalpQsEcbkO3LLLNFnyxDVT0sZGPup1RY+/30Oj7pbjOrksox
	 7cWBkhNkKrhpaPkPu5kJqDu27qLZ5qKMZGeQZgWf76G9sRwgZ9IWJYjcT/uvlfeVKf
	 CZHb8OeZ3CqEOk3uH6nyWaNTxz8qMkbhD2MtTUyJ9zBtRJsBzbwBlGmxxwMcyl0QRO
	 ergIA6fahhnkI921e8VcVFvgSgw3RExaae7C2qXT/R+OCCRuI8Hjqo8c5N05fJ2jxb
	 eb/PBToLuwDuw==
Date: Tue, 9 Feb 2021 09:31:00 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Ian Jackson <iwj@xenproject.org>
cc: Jan Beulich <jbeulich@suse.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    lucmiccio@gmail.com, xen-devel@lists.xenproject.org, 
    Bertrand.Marquis@arm.com, Volodymyr_Babchuk@epam.com, Rahul.Singh@arm.com, 
    Stefano Stabellini <stefano.stabellini@xilinx.com>, 
    George.Dunlap@citrix.com
Subject: Re: [PATCH v2] xen/arm: fix gnttab_need_iommu_mapping
In-Reply-To: <24610.38467.808678.941320@mariner.uk.xensource.com>
Message-ID: <alpine.DEB.2.21.2102090914280.8948@sstabellini-ThinkPad-T480s>
References: <20210208184932.23468-1-sstabellini@kernel.org> <173ed75a-94cf-26a5-9271-a687bf201578@xen.org> <alpine.DEB.2.21.2102081214010.8948@sstabellini-ThinkPad-T480s> <4df687cb-d3bc-ccb8-4e7c-a6429c37574e@suse.com>
 <24610.38467.808678.941320@mariner.uk.xensource.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 9 Feb 2021, Ian Jackson wrote:
> Jan Beulich writes ("Re: [PATCH v2] xen/arm: fix gnttab_need_iommu_mapping"):
> > On 08.02.2021 21:24, Stefano Stabellini wrote:
> ...
> > > For these cases, I would just follow a simple rule of thumb:
> > > - is the submitter willing to provide the backport?
> > > - is the backport low-risk?
> > > - is the underlying bug important?
> > > 
> > > If the answer to all is "yes" then I'd go with it.
> > 
> > Personally I disagree, for the very simple reason of the question
> > going to become "Where do we draw the line?" The only non-security
> > backports that I consider acceptable are low-risk changes to allow
> > building with newer tool chains. I know other backports have
> > occurred in the past, and I did voice my disagreement with this
> > having happened.
> 
> I think I take a more relaxed view than Jan, but still a much more
> firm line than Stefano.  My opinion is that we should make exceptions
> for only bugs of exceptional severity.
> 
> I don't think I have seen an argument that this bug is exceptionally
> severe.
> 
> For me the fact that you can only experience this bug if you upgrade
> the hardware or significantly change the configuration, means that
> this isn't so serious a bug.

Yeah, I think that's really the core of this issue. If somebody is
already using 4.12 happily, there is really no reason for them to take
the fix. If somebody is about to use 4.12, then it is a severe issue.

The view of the group is that nobody should be switching to 4.12 now
because there are newer releases out there. I don't know if that is
true.

I didn't realize we had a policy, or even a recommendation, to always
choose the latest of the many security-supported releases. I went
through the website and SUPPORT.md but couldn't find it spelled out
anywhere. See:

https://xenproject.org/downloads/
https://xenproject.org/downloads/xen-project-archives/xen-project-4-12-series/
https://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=SUPPORT.md;h=52f25fa85af41fa3b38288ab7e172408b77dc779;hb=97b7b5567fba6918a656ad349051b5343b5dea2e

At most we have:

    Supported-Until: 2020-10-02
    Security-Support-Until: 2022-04-02

Anecdotally, if I go to https://www.kernel.org/ to download a kernel
tarball, I expect every tarball to provide all the major functionality.
I wouldn't imagine that an entire Linux subsystem (e.g. ethernet or
SATA) could fail to work just because I didn't pick the latest release.

Maybe it would make sense to clarify, at least on
https://xenproject.org/downloads/, which releases are discouraged?


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 17:34:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 17:34:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83362.154942 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Wtk-0005Kj-S3; Tue, 09 Feb 2021 17:34:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83362.154942; Tue, 09 Feb 2021 17:34:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Wtk-0005Kc-OF; Tue, 09 Feb 2021 17:34:00 +0000
Received: by outflank-mailman (input) for mailman id 83362;
 Tue, 09 Feb 2021 17:33:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9Wtj-0005KU-2C; Tue, 09 Feb 2021 17:33:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9Wti-0003Za-P0; Tue, 09 Feb 2021 17:33:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9Wti-0005u4-91; Tue, 09 Feb 2021 17:33:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l9Wti-00026g-8K; Tue, 09 Feb 2021 17:33:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=9GKkjv/6kKWmzBoW5r0Dv/W70T916aH+5HXMxFOMQpE=; b=IdwUfutly0t2edTMOzEJD9nTIy
	oSQ4EQlhJ597K2qfwyosUyzTVJqHyOwnX/B4859x6wG81R+AhxRDbH1oPywIZPmT4EXasv3nb5Qst
	GFqyTt3alYyUZqwfdcMCmlV+covZr0x4XOHtO7IMnvnSoutg5+FsIG+pYjsOwthNRVfE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159130-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 159130: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-xl:<job status>:broken:regression
    linux-linus:test-arm64-arm64-xl-seattle:<job status>:broken:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-install:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:guest-start:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:guest-start:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:guest-start:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-armhf-armhf-libvirt:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    linux-linus:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
    linux-linus:test-arm64-arm64-xl:host-install(5):broken:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=92bf22614b21a2706f4993b278017e437f7785b3
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 09 Feb 2021 17:33:58 +0000

flight 159130 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159130/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl             <job status>                 broken
 test-arm64-arm64-xl-seattle     <job status>                 broken
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  12 debian-install           fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-armhf-armhf-xl-arndale  14 guest-start              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck 14 guest-start            fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl          14 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1  14 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152332
 test-armhf-armhf-xl-rtds     14 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl           5 host-install(5)       broken blocked in 152332
 test-arm64-arm64-xl-seattle   5 host-install(5)       broken blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                92bf22614b21a2706f4993b278017e437f7785b3
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  192 days
Failing since        152366  2020-08-01 20:49:34 Z  191 days  338 attempts
Testing same since   159130  2021-02-08 10:47:38 Z    1 days    1 attempts

------------------------------------------------------------
4560 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          broken  
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  broken  
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-arm64-arm64-xl broken
broken-job test-arm64-arm64-xl-seattle broken
broken-step test-arm64-arm64-xl host-install(5)
broken-step test-arm64-arm64-xl-seattle host-install(5)

Not pushing.

(No revision log; it would be 1029128 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 17:39:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 17:39:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83366.154956 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9WzK-0005a2-Ku; Tue, 09 Feb 2021 17:39:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83366.154956; Tue, 09 Feb 2021 17:39:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9WzK-0005Zv-HT; Tue, 09 Feb 2021 17:39:46 +0000
Received: by outflank-mailman (input) for mailman id 83366;
 Tue, 09 Feb 2021 17:39:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5GlY=HL=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l9WzJ-0005Zq-9n
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 17:39:45 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cdf44238-4457-4d3e-87e3-2916d7e42f81;
 Tue, 09 Feb 2021 17:39:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cdf44238-4457-4d3e-87e3-2916d7e42f81
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612892382;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=5DVuKO9dEJDdo9BehC6hnQ9RZzyM1cGiInSHA0/R130=;
  b=bgPhZG5cBnFld//7ej5DYJzumIfXnaZNNiczNHa6RdbLmPQI5IFaa7hG
   VumS5KakORBYB2fU/A0JEqfsp5pkWwhNNiyJ7Prv45hYiF06+9akGNrbm
   2PY3ieq1ndqY98eOJVuLdjz4vkndb1FuJN52rpWJf45yrKM9yHDkWRYoc
   M=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: 8QT1x28PzYwSByDlQE4pguL8FEPFcmNiRpkTesBCQ/a1Ph/tkAr73EcwQeN3meekWSTDcP8xCT
 FR+Y+MouZnJBVGKRGuM56/9MnpTyPLUEoU80MwN/DR0WYpjd0xUVjl3S073iKcjE9nGVlQBSlw
 MMvVVpUA2sTgQk9VfTxLMc3ojaFejxjfP1MdyQ+7v9+M3EE2FMR8MkpJL1eNtcoL9brOVjnXSo
 m90zDmfxh2VcvwmLkjvopCiLLk2OuMp7RYVhQQft8Jwszk6V4rKFrApO0gkSZz0CY16ymqobiI
 JAw=
X-SBRS: 5.2
X-MesageID: 38240198
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,165,1610427600"; 
   d="scan'208";a="38240198"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=UQ4JKCN7FI9lrh/6QvgivZhHNr2YPkLkGa1JBkQXfEXT9G8H2u3H4DbZc6UUTXVGcudtHSAa1mtgS1szZ7CaiUrBkogdBm/bduWUMBUmcbxTDvDUiyMMES/LuNyZ+xjJ9dM5J9PG2ZM4nKSrMTXP/k5iTqKA/2ANf8+zdH2tCy1VVEVh+3Jd2H27eUfxwLIHoRXUYsHQONq3D4sj3cYg/trWBfIkVpdnA7zPlqXb6JBIpL3vd12qdFle/vCd9EU58fWxkllswq0hvF9+OFgtB2sLRwKX+s2UGH3LeXjbFW+ucDV7QK186Jxbz9JnZduagU/MC+NIuIxo5rK/JMr7lw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=AXn9QUImd1S3SlodRF6h6lNwYvlJo/Pz7byyPzeRIVA=;
 b=Giruf/oHuAOWLRTULtX9Dv0aECB2h5X2Oxc0xMwnT2aFGKLXbgCxwcSfpUsWxOvUkKQEg+IskiaAdiBDZBcreXuE1oM9g8Ua8hivyfvDTcFA1ee9t2+BNoEGh175d3HBbCTF0Osy6pKgVNO0syTtBVSkOrSqjFBqqFTJv1NzBRkOXQiktueKQzuiRlqwHwz13lxSPpsSd459ZKbfw6xKrOLYJF3F/3IZv6nq8pEIwemsTkA1aa2OwjFJgQz8e3AgB4eHAYjFHKDuJmAsgj9TtjIw7zReh3Sr0TAA/yU2/Tl6L/NDu3y1jtETwm9acZQqpXJo36+AqJWsvep8xDOzAA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=AXn9QUImd1S3SlodRF6h6lNwYvlJo/Pz7byyPzeRIVA=;
 b=h/5Om46nTzuQVTBqnkF31JARi32RX4zBu24dj/MQaEsi9Uw3YwcuzKJyz99e8bgXz46SkMZZ+GRxfRi9wC/WIDbINxSh3fLSlP8a1aJcNS0Wc/US+tSThh4d8FrMpQPOc1H+CKiiGcEgeVyVPb3yirCujXau/CxolfwlMT/4+xE=
Subject: Re: [PATCH for-4.15] x86/ucode: Fix microcode payload size for Fam19
 processors
To: Ian Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>
CC: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210209153336.4016-1-andrew.cooper3@citrix.com>
 <c09110f7-6459-e1f7-2175-09d535ad03ce@suse.com>
 <24610.50089.887907.573064@mariner.uk.xensource.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <6f44ae66-9956-3312-c4c8-b0f1e4b568bb@citrix.com>
Date: Tue, 9 Feb 2021 17:39:31 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
In-Reply-To: <24610.50089.887907.573064@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0159.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:9::27) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 4725b79a-ad31-4081-6158-08d8cd21ac4f
X-MS-TrafficTypeDiagnostic: BYAPR03MB4872:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB487210C9ABF22012651727EBBA8E9@BYAPR03MB4872.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: Iz9C9SqBMz/jmKYNkm0boZNAxru6okW/m8A+rD1v/XGZjCSM7obZgW3uWt7XD6OI089X7+PRkJFKDC1M9M2/rmNzMqHCeXEffob8oLJJVPLzlVXQeVwZhyc6CMz7suW8S/2hJlqExkSt0PjWqOmt8e591wK5+VhC/koeTLz+7IYHWGzgDdWsgx+0h3RbV9cCRD4IKNERCmu6ftP2/eH1aGa9B40pKTq8Frn/zqBxVG/crWbdadPgnOrefGUmsGnr/MNyTS7uP2kSOsinxVnPWT1T4+qKepx64mb/FXjgCnQAM0WnbHxMz8qXuduiAZ6mXhMj+TB9sgnawP3dtlac3wbj1LRMBDUY83+DzAtXw7H6vmCrcc4cc6mNdJZzpeTvF44dO/D0DqLS5bdvAGC4RHBnd2oiGDVcqiOX4vWKw9dqPKGeEhJmj57LaPFe0ZLbjMP1gXzL5f4STMo8IFcPskX7+TqFq4/LzA6cdkAraVUHzbF0gjV6uiNhzTkofmv2ebBGMMtmB/wCO0vliAzSH9QH/PMb/txmAERrnjAqLa1fC03vJgdfQywJbxKDLb/4CM6mvFt62/uuCcMTGIm1dh9HO+55+FuPMgx14YzrNx4=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(346002)(376002)(136003)(39860400002)(366004)(396003)(4326008)(66556008)(66476007)(83380400001)(31686004)(2906002)(6486002)(66946007)(5660300002)(36756003)(6666004)(478600001)(8936002)(956004)(2616005)(53546011)(31696002)(316002)(26005)(16576012)(86362001)(186003)(110136005)(54906003)(8676002)(16526019)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?SDFQZDNGY3BqSGhvRkxhcDRFZlpMeUZha1J4Z28yU1k5bWV4U1FhSmkwRjdq?=
 =?utf-8?B?V21LdXJxdWd3UGtjYVR2K0JPTzVkbUZ0VWYzWEpnMS96MXVFcEMyKzdLRC9j?=
 =?utf-8?B?bW0zWmljamd0aHRoVFI0dmJ0eXUzanlFTmM4a1kyeC9rajZaZ1JNY0YwaEE3?=
 =?utf-8?B?VUtocFVSejFLVjlOZ1VyNDdmQTEvMnVXZVBJTmJVMWpWR2VqbjBPVFZQK3pk?=
 =?utf-8?B?NTdTMmFOTkVyRGdmTyt6dmxpdXJRL1ZvTkd3cVo2V1hlTE5QYTFuM0JmamM1?=
 =?utf-8?B?Rk5GKyt2WXhSVmljaTlnc0Q1WkxDOHN5eUNDZTVEWUw0QzEwSUhiVi94M1Z5?=
 =?utf-8?B?UzV3dkFSSXV0QUxYMExGai9XL2cvNExWRm5rcGMxR3BaK2lkdEZJajB6MVNj?=
 =?utf-8?B?MVNMbmpQR1Y5Um1MTTFyZzhiWU5vMVFIdXZLb3ZZUjBUV1dYYzJuSnRYRm1P?=
 =?utf-8?B?djB3aUZrVTFQQjN0MG5pdVZCWHdmRnJQNEFnOW42RllNL2xRY3RuN1poU2x1?=
 =?utf-8?B?Z0RFNWtOaTNmcU1Lb0g4M2IwNDVBeEJVL20yUSt4Z2NOa2gyS3F0OUJKd3lK?=
 =?utf-8?B?emI4eC94U0NSd0EzL3dsV3pWOUlqcElFTWFBbERTZFZxdEYyMEJnNHd2M1c0?=
 =?utf-8?B?WjRhSXBYRUp4SFYrOXJBYk4zcHJLcWh3eHgwSVBNUEFicWpaU2ltRitxL1c1?=
 =?utf-8?B?Tlc3RXNPd1BmY1ZuZnVOcUwyTGFlTjc3N2pSdDExWXBxZVoyNVZEb2tNMGt0?=
 =?utf-8?B?UlNndVVIQXZwUGlIdXZQQnZmZFhEZnZESzIvcTNuTVJmSjlYOW4vNlpHZVlw?=
 =?utf-8?B?ZW10RjExalpyTFpRK2Z0MHpyZHAzbDNrZHFBcExRSkV2K2docm5DOUNTNXpH?=
 =?utf-8?B?bmNVY3NnQllDZGE1emcyYlBGSGFIdDMxcUthdGd2MmF6UVpSRXllRHJlOS9w?=
 =?utf-8?B?aytxQVFJVnlOVEZRN0JXdHFFZWpWNk9kZEI4Wm50RHFibWlsNFhjenAySGd4?=
 =?utf-8?B?aGcwVUZhOXVjd1k5T0oyYnlYa1A2akgvK1UzeEZzVTN0ZUxNVU80bzFHNm1r?=
 =?utf-8?B?L2FEenB0NlF0UUVTSUtZNml3Z09KdWxCOElSZENKaVhCYWkxb0dRT01oaTFx?=
 =?utf-8?B?MjhuVW9PQzVQU0M1aVFmU3lmS0xJblR3ZDdJN243eHVlc3E0ZVE3bFV2ajZn?=
 =?utf-8?B?KzZGQWN6eFJHR2lDS2JCZE9UaEVTNmFkSm1UNnY3NWhzbkVhdTQvK3N4L25C?=
 =?utf-8?B?d1BJM1F2NFRKTHNkMlJ5VjMrOUtLYklXWU4rMG1idmZFSXBIMHlqYjhjOEEw?=
 =?utf-8?B?OUl3dEd2WUhUZi93OXB1MVRvK1BheHBCQmZqLzBlWlMwZCtaUWRkR2JBQmU3?=
 =?utf-8?B?djZHOWNxTk4xcmFBL0tOQkpyam9tMjV1VjJoK2VmVXRzOHFKdXNFU0FJWE5k?=
 =?utf-8?B?a0VuQ1ZUV0Y4ZHlLTi82TWNyRlBUU1BPalRJYS9oVWZkbUZTb2c1Rk83UktM?=
 =?utf-8?B?MEJnZHRtRzY4Q0FBYUlSeWJUYit0Um9WRncwbXI0S1ZTdmVTbDM1a1Zpa0tj?=
 =?utf-8?B?Uk44ZGo1RUVOanMxSE50c2Uzcm5JakxWUTVKRUh3RjA1V2pVK0hQaXE2aTdt?=
 =?utf-8?B?aWFUdnE2ZmVSUE5XMFVSd0huankrcDY0dWFxeDdEVGNGcDhPNThDQ016UHcy?=
 =?utf-8?B?ZW5MZlJmS1E5UUVwYkRod3FQVkFjT3Rrc1JCWXBRWG9IZjQ3amJEMGV5cThB?=
 =?utf-8?Q?Qu203H508CiXfTqWx7tYMMxxziiiP3FzwTpyeps?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 4725b79a-ad31-4081-6158-08d8cd21ac4f
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Feb 2021 17:39:38.4141
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: jQcT7sWEk/ddrKDO/L+Bj4dy+LV5fd/HZyRhV6ELPnbs4uE4LkQG7CTTsNurygiZ6xIs4DDvh9KEnUmJX64VbFKEqoE6AOSbbwE4UjfLDMc=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4872
X-OriginatorOrg: citrix.com

On 09/02/2021 17:17, Ian Jackson wrote:
> Jan Beulich writes ("Re: [PATCH for-4.15] x86/ucode: Fix microcode payload size for Fam19 processors"):
>> On 09.02.2021 16:33, Andrew Cooper wrote:
>>> The original limit provided wasn't accurate.  Blobs are in fact rather larger.
>>>
>>> Fixes: fe36a173d1 ("x86/amd: Initial support for Fam19h processors")
>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> Acked-by: Jan Beulich <jbeulich@suse.com>
>>
>>> --- a/xen/arch/x86/cpu/microcode/amd.c
>>> +++ b/xen/arch/x86/cpu/microcode/amd.c
>>> @@ -111,7 +111,7 @@ static bool verify_patch_size(uint32_t patch_size)
>>>  #define F15H_MPB_MAX_SIZE 4096
>>>  #define F16H_MPB_MAX_SIZE 3458
>>>  #define F17H_MPB_MAX_SIZE 3200
>>> -#define F19H_MPB_MAX_SIZE 4800
>>> +#define F19H_MPB_MAX_SIZE 5568
>> How certain is it that there's not going to be another increase?
>> And in comparison, how bad would it be if we pulled this upper
>> limit to something that's at least slightly less of an "odd"
>> number, e.g. 0x1800, and thus provide some headroom?
> 5568 does seem really an excessively magic number...

It reads better in hex - 0x15c0.  It is exactly the header,
match/patch-ram, and the digital signature of a fixed algorithm.

It's far simpler than Intel's format, which contains multiple embedded
blobs, and support for minor platform variations within the same blob.

There are a lot of problems with how we do patch verification on AMD
right now, but it's all a consequence of the header not containing a
length field.

This number won't change now.  Zen3 processors are out in the world.
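
The per-family limit check being discussed can be sketched standalone. This is a hedged model, not the Xen implementation: the function shape and the family-to-limit mapping are assumptions, with the limit values taken from the constants quoted in the patch context above.

```python
# Illustrative model of an AMD microcode patch-size check.  Because the
# blob header carries no length field (as noted above), the only sanity
# check available is a per-family maximum size.
MPB_MAX_SIZE = {
    0x15: 4096,  # F15H_MPB_MAX_SIZE
    0x16: 3458,  # F16H_MPB_MAX_SIZE
    0x17: 3200,  # F17H_MPB_MAX_SIZE
    0x19: 5568,  # F19H_MPB_MAX_SIZE == 0x15c0, the corrected value
}

def verify_patch_size(family: int, patch_size: int) -> bool:
    """Reject a blob larger than the family's known maximum size."""
    limit = MPB_MAX_SIZE.get(family)
    return limit is not None and patch_size <= limit
```

Note that 0x15c0 is exactly 5568, so the "magic" decimal constant and the hex form in the reply above agree.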

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 17:47:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 17:47:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83368.154969 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9X6y-0006UH-Jd; Tue, 09 Feb 2021 17:47:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83368.154969; Tue, 09 Feb 2021 17:47:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9X6y-0006UA-GU; Tue, 09 Feb 2021 17:47:40 +0000
Received: by outflank-mailman (input) for mailman id 83368;
 Tue, 09 Feb 2021 17:47:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SKhV=HL=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1l9X6x-0006U5-93
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 17:47:39 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6e53976b-0b80-4321-a460-aaa049aeb2dd;
 Tue, 09 Feb 2021 17:47:38 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 6283F64EB6;
 Tue,  9 Feb 2021 17:47:37 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6e53976b-0b80-4321-a460-aaa049aeb2dd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1612892857;
	bh=2OnZR6eknH107cFTlwNMe+fdbVtEGcuFuTOJfubpQII=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=LHl3q/G6A6xS56pUJQfg9LP3ormdXNypdzxC/FVLxFf5mUtWUKdnElMEvl8+ukX9w
	 WmHy7YlDVYR5JwP2xH25ew7cA/hiHvVf466zARxwAFlfeh9Ocu6rUSYHr3XXSi065G
	 Fy1rEqSEhjggOuc9bQ53K3TANWAG/Uxupw0IHgF6ShBefr8Zpfe20Z76Pj2RzImDvg
	 Z7s0fN4fEiVIErz5Qjp96WiKhiEczwyxTKqhjeVdn6fkEcNAlUzEPaAfAkorhBUp/c
	 ixVAedar9CMEoLlYN1L054Q7cqeCmPBdQ+JoBIFM3Lh6LtslMtSjvwhsx0+W5YC+Vu
	 XpTKbC7JtFUNg==
Date: Tue, 9 Feb 2021 09:47:36 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    "ehem+xen@m5p.com" <ehem+xen@m5p.com>
Subject: Re: [PATCH] xen: workaround missing device_type property in pci/pcie
 nodes
In-Reply-To: <22372A39-83F4-41AB-8FCC-B3A9C8551604@arm.com>
Message-ID: <alpine.DEB.2.21.2102090944240.8948@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2102081544230.8948@sstabellini-ThinkPad-T480s> <22372A39-83F4-41AB-8FCC-B3A9C8551604@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-589529193-1612892761=:8948"
Content-ID: <alpine.DEB.2.21.2102090946510.8948@sstabellini-ThinkPad-T480s>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-589529193-1612892761=:8948
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.21.2102090946511.8948@sstabellini-ThinkPad-T480s>

On Tue, 9 Feb 2021, Bertrand Marquis wrote:
> Hi Stefano,
> 
> > On 8 Feb 2021, at 23:56, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > 
> > PCI buses differ from default buses in a few important ways, so it is
> > important to detect them properly. Normally, PCI buses are expected to
> > have the following property:
> > 
> >    device_type = "pci"
> > 
> > In reality, it is not always the case. To handle PCI bus nodes that
> > don't have the device_type property, also consider the node name: if the
> > node name is "pcie" or "pci" then consider the bus as a PCI bus.
> > 
> > This commit is based on the Linux kernel commit
> > d1ac0002dd29 "of: address: Work around missing device_type property in
> > pcie nodes".
> > 
> > This fixes Xen boot on RPi4.
> 
> We are really handling here a wrong device-tree bug that could easily be fixed
> by the user.
> We should at least mention in the commit message that this is a workaround
> to solve RPi4 buggy device tree.

Not sure if it can be "easily" fixed by the user -- it took me a few
hours to figure out what the problem was, and I know Xen and device tree
pretty well :-)

Yes it would be good to have a link to the discussion in the commit
message, using the Link tag. It could be done on commit, or I can add it
to the next version.

Link: https://lore.kernel.org/xen-devel/YBmQQ3Tzu++AadKx@mattapan.m5p.com/


> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> > 
> > diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
> > index 18825e333e..f1a96a3b90 100644
> > --- a/xen/common/device_tree.c
> > +++ b/xen/common/device_tree.c
> > @@ -563,14 +563,28 @@ static unsigned int dt_bus_default_get_flags(const __be32 *addr)
> >  * PCI bus specific translator
> >  */
> > 
> > +static bool_t dt_node_is_pci(const struct dt_device_node *np)
> > +{
> > +    bool is_pci = !strcmp(np->name, "pcie") || !strcmp(np->name, "pci");
> 
> The Linux commit is a bit more restrictive and only does that for “pcie”.
> Any reason why you also want to have this workaround done also for “pci” ?

Yes, that's because in our case the offending node is named "pci" not
"pcie" so the original Linux commit wouldn't cover it.


> > +
> > +    if (is_pci)
> > +        printk(XENLOG_WARNING "%s: Missing device_type\n", np->full_name);
> > +
> > +    return is_pci;
> > +}
> > +
> > static bool_t dt_bus_pci_match(const struct dt_device_node *np)
> > {
> >     /*
> >      * "pciex" is PCI Express "vci" is for the /chaos bridge on 1st-gen PCI
> >      * powermacs "ht" is hypertransport
> > +     *
> > +     * If none of the device_type match, and that the node name is
> > +     * "pcie" or "pci", accept the device as PCI (with a warning).
> >      */
> >     return !strcmp(np->type, "pci") || !strcmp(np->type, "pciex") ||
> > -        !strcmp(np->type, "vci") || !strcmp(np->type, "ht");
> > +        !strcmp(np->type, "vci") || !strcmp(np->type, "ht") ||
> > +        dt_node_is_pci(np);
> > }
> > 
> > static void dt_bus_pci_count_cells(const struct dt_device_node *np,
> > 
> 
> 
--8323329-589529193-1612892761=:8948--


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 17:53:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 17:53:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83370.154980 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9XC6-0007Ov-6W; Tue, 09 Feb 2021 17:52:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83370.154980; Tue, 09 Feb 2021 17:52:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9XC6-0007Oo-3Q; Tue, 09 Feb 2021 17:52:58 +0000
Received: by outflank-mailman (input) for mailman id 83370;
 Tue, 09 Feb 2021 17:52:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1l9XC4-0007Oj-2M
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 17:52:56 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l9XC2-0003sg-I3; Tue, 09 Feb 2021 17:52:54 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l9XC2-0004AY-9h; Tue, 09 Feb 2021 17:52:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=38P2yPqgo7rLVaWL+FEyQMD/wtAxWX5vZrErR3z3Osw=; b=KiA1ENq8+cX/leulPwwuIxEQ2n
	uSbN3oH+dUNnzJXz4qoB7ZgKgfnF009tw5lh81PN87bNP0Fk5m+2LW3L4SSfUeJFE5lCdRc3dFift
	OWawtRDLhGSM9DCtac7T+rsBTrX9g/xIlIXQ1jGX87Fn7CIEJZKHXmSylrIBJYsA1s+g=;
Subject: Re: [PATCH] xen: workaround missing device_type property in pci/pcie
 nodes
To: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 "ehem+xen@m5p.com" <ehem+xen@m5p.com>
References: <alpine.DEB.2.21.2102081544230.8948@sstabellini-ThinkPad-T480s>
 <22372A39-83F4-41AB-8FCC-B3A9C8551604@arm.com>
 <alpine.DEB.2.21.2102090944240.8948@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <9a97ecc3-f35f-44e4-68b8-1c801b326c40@xen.org>
Date: Tue, 9 Feb 2021 17:52:52 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2102090944240.8948@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Stefano,

On 09/02/2021 17:47, Stefano Stabellini wrote:
> On Tue, 9 Feb 2021, Bertrand Marquis wrote:
>> Hi Stefano,
>>
>>> On 8 Feb 2021, at 23:56, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>>
>>> PCI buses differ from default buses in a few important ways, so it is
>>> important to detect them properly. Normally, PCI buses are expected to
>>> have the following property:
>>>
>>>     device_type = "pci"
>>>
>>> In reality, it is not always the case. To handle PCI bus nodes that
>>> don't have the device_type property, also consider the node name: if the
>>> node name is "pcie" or "pci" then consider the bus as a PCI bus.
>>>
>>> This commit is based on the Linux kernel commit
>>> d1ac0002dd29 "of: address: Work around missing device_type property in
>>> pcie nodes".
>>>
>>> This fixes Xen boot on RPi4.
I am a bit confused by this sentence... How did you manage to boot Xen 
on RPi4 beforehand?

>>
>> We are really handling here a wrong device-tree bug that could easily be fixed
>> by the user.
>> We should at least mention in the commit message that this is a workaround
>> to solve RPi4 buggy device tree.
> 
> Not sure if it can be "easily" fixed by the user -- it took me a few
> hours to figure out what the problem was, and I know Xen and device tree
> pretty well :-)
> 
> Yes it would be good to have a link to the discussion in the commit
> message, using the Link tag. It could be done on commit, or I can add it
> to the next version.

A summary of the discussion would be useful in the commit message so a 
reader can easily make the connection between the Linux commit and the 
Xen one.

> 
> Link: https://lore.kernel.org/xen-devel/YBmQQ3Tzu++AadKx@mattapan.m5p.com/
> 
> 
>>>
>>> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
>>>
>>> diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
>>> index 18825e333e..f1a96a3b90 100644
>>> --- a/xen/common/device_tree.c
>>> +++ b/xen/common/device_tree.c
>>> @@ -563,14 +563,28 @@ static unsigned int dt_bus_default_get_flags(const __be32 *addr)
>>>   * PCI bus specific translator
>>>   */
>>>
>>> +static bool_t dt_node_is_pci(const struct dt_device_node *np)
>>> +{
>>> +    bool is_pci = !strcmp(np->name, "pcie") || !strcmp(np->name, "pci");
>>
>> The Linux commit is a bit more restrictive and only does that for “pcie”.
>> Any reason why you also want to have this workaround done also for “pci” ?
> 
> Yes, that's because in our case the offending node is named "pci" not
> "pcie" so the original Linux commit wouldn't cover it.
> 
> 
>>> +
>>> +    if (is_pci)
>>> +        printk(XENLOG_WARNING "%s: Missing device_type\n", np->full_name);
>>> +
>>> +    return is_pci;
>>> +}
>>> +
>>> static bool_t dt_bus_pci_match(const struct dt_device_node *np)
>>> {
>>>      /*
>>>       * "pciex" is PCI Express "vci" is for the /chaos bridge on 1st-gen PCI
>>>       * powermacs "ht" is hypertransport
>>> +     *
>>> +     * If none of the device_type match, and that the node name is
>>> +     * "pcie" or "pci", accept the device as PCI (with a warning).
>>>       */
>>>      return !strcmp(np->type, "pci") || !strcmp(np->type, "pciex") ||
>>> -        !strcmp(np->type, "vci") || !strcmp(np->type, "ht");
>>> +        !strcmp(np->type, "vci") || !strcmp(np->type, "ht") ||
>>> +        dt_node_is_pci(np);
>>> }
>>>
>>> static void dt_bus_pci_count_cells(const struct dt_device_node *np,
>>>
>>

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 17:59:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 17:59:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83372.154993 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9XIB-0007eM-SO; Tue, 09 Feb 2021 17:59:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83372.154993; Tue, 09 Feb 2021 17:59:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9XIB-0007eF-P6; Tue, 09 Feb 2021 17:59:15 +0000
Received: by outflank-mailman (input) for mailman id 83372;
 Tue, 09 Feb 2021 17:59:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1l9XI9-0007dX-S8
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 17:59:13 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1l9XI9-0003yp-OO
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 17:59:13 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1l9XI9-0004cv-MB
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 17:59:13 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1l9XI7-00065Y-NK; Tue, 09 Feb 2021 17:59:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	Message-Id:Date:Subject:Cc:To:From;
	bh=DeylNrZYYK3wMWxKS9RPXvprA8fskbcUy3SMB2/3gZ8=; b=YY4Hz0YJpc+VgI5f4qu8w1f9vh
	2B9/AgT9Mod3ed28iSgZMackaFGtssgGVMKlehZ8sX7NEuezlyZnEicsF6yaKmpLAXyS+csxe1nbO
	eNt/TS4wz2TKzqRYuEfTzLc/WWsKkms6edoaGMYIA4/+6eAYXxRKuHy16LmTqTGmPeCs=;
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 1/7] mg-debian-installer-update: Honour redirect for dtbs
Date: Tue,  9 Feb 2021 17:58:58 +0000
Message-Id: <20210209175904.28282-1-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When using snapshots, we can get a redirect and then we don't
recurse.  There doesn't seem to be a suitable option for wget, so do
this by hand before we call wget -m.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 mg-debian-installer-update | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/mg-debian-installer-update b/mg-debian-installer-update
index fb4fe2ab..5e890d34 100755
--- a/mg-debian-installer-update
+++ b/mg-debian-installer-update
@@ -89,7 +89,12 @@ if [ "x$dtbs" != "x" ] ; then
     # Can't seem to get curl to globs.
     rm -rf dtbs
     mkdir dtbs
-    ( cd dtbs && wget -nv -A README,\*.dtb -nd -nH -np -m  $dtbs )
+    dtbs_redir="$(curl -sSI -o /dev/null -w '%{redirect_url}' $dtbs)"
+    if [ "x$dtbs_redir" != x ]; then
+        dtbs=$dtbs_redir
+        echo "Redirected for dtbs, to $dtbs"
+    fi
+    ( cd dtbs && wget -nv -A README,\*.dtb -nd -nH -np -m  "$dtbs" )
     tar --mtime=./dtbs/README -cf dtbs.tar dtbs
     gzip -9nf dtbs.tar
 fi
-- 
2.20.1
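For readers not applying the diff: the redirect pre-resolution above can be sketched as a standalone helper.  This is a minimal sketch, not part of the patch; the function name is illustrative.  It relies on curl's `-w '%{redirect_url}'`, which prints the target of a redirect (or an empty string if there was none).

```shell
# Resolve a single HTTP redirect before handing the URL to "wget -m",
# since wget has no suitable option for this when mirroring.
resolve_redirect () {
    url=$1
    # -sSI: silent HEAD request (but still show errors); -o /dev/null
    # discards the headers; -w '%{redirect_url}' prints the Location
    # target, or nothing if the response was not a redirect.
    redir="$(curl -sSI -o /dev/null -w '%{redirect_url}' "$url")"
    if [ "x$redir" != x ]; then
        echo "Redirected, to $redir" >&2
        url=$redir
    fi
    printf '%s\n' "$url"
}
```

Usage would then be `dtbs=$(resolve_redirect "$dtbs")` immediately before the `wget -m` invocation.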



From xen-devel-bounces@lists.xenproject.org Tue Feb 09 17:59:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 17:59:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83376.155035 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9XID-0007hd-Pr; Tue, 09 Feb 2021 17:59:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83376.155035; Tue, 09 Feb 2021 17:59:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9XID-0007hG-BJ; Tue, 09 Feb 2021 17:59:17 +0000
Received: by outflank-mailman (input) for mailman id 83376;
 Tue, 09 Feb 2021 17:59:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1l9XIB-0007eA-Kf
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 17:59:15 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1l9XIB-0003z4-JF
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 17:59:15 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1l9XIB-0004e7-I5
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 17:59:15 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1l9XI9-00065Y-LO; Tue, 09 Feb 2021 17:59:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=/4mHoLOl6nDqU6KIAQm25wzyPgRrf/4mH1E4Sbq3iG4=; b=3uMlFeG9iuaWxstI/MqtDF3r5I
	ZwYaOHg+2JTXqdkuvzMo95q4LPITbPe89bOVmMhK9H20+RkPE+JyNhLOl3Dws4p8CoYK55ZeKyLTJ
	GgyW8AYdRkGXrotuiULJxYyMWTgVOCOUsBrMQ+G39mjoWo5OUQXmnmKH/+Bt4hKzfZZ0=;
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 5/7] Debian mirror: Disable timestamp verification for snapshot.d.o
Date: Tue,  9 Feb 2021 17:59:02 +0000
Message-Id: <20210209175904.28282-5-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210209175904.28282-1-iwj@xenproject.org>
References: <20210209175904.28282-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This somewhat duplicates the logic in preseed_backports_packages,
but I don't want to mess with that now.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 Osstest.pm        |  2 ++
 Osstest/Debian.pm | 16 ++++++++++++++++
 2 files changed, 18 insertions(+)

diff --git a/Osstest.pm b/Osstest.pm
index 809194f0..7776ba88 100644
--- a/Osstest.pm
+++ b/Osstest.pm
@@ -246,6 +246,8 @@ sub readglobalconfig () {
     $c{DefaultBranch} ||= 'xen-unstable';
 
     $c{DebianMirrorHost} ||= 'ftp.debian.org' if $c{DebianMirrorProxy};
+    $c{DebianMirrorAllowExpiredReleaseRegexp} //=
+      qr{^\Qhttp://snapshot.debian.org/};
 
     $c{EmailStdHeaders} ||= <<'END';
 Content-Type: text/plain; charset="UTF-8"
diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
index dee52b3d..d6e0b59d 100644
--- a/Osstest/Debian.pm
+++ b/Osstest/Debian.pm
@@ -972,6 +972,22 @@ END
     preseed_hook_command($ho, 'late_command', $sfx,
 			 debian_dhcp_rofs_fix($ho, '/target'));
 
+    my $murl = debian_mirror_url($ho);
+    if ($murl =~ m/$c{DebianMirrorAllowExpiredReleaseRegexp}/) {
+	# Inspired by
+	#  https://stackoverflow.com/questions/25039317/is-there-any-setting-in-the-preseed-file-to-ignore-the-release-valid-until-opt/51396935#51396935
+	# In some sense a workaround for the lack of a better way,
+	#  https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=771699
+	preseed_hook_installscript($ho, $sfx,
+            '/usr/lib/apt-setup/generators/', '02IgnoreValidUntil', <<'END');
+#!/bin/sh
+set -ex
+d=/target/etc/apt/apt.conf.d/
+mkdir -p $d
+echo 'Acquire::Check-Valid-Until "false";' >$d/02IgnoreValidUntil
+END
+    }
+
     my ($mhost, $mpath) = debian_mirror_host_path($ho);
 
     my $preseed = <<"END";
-- 
2.20.1
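The generator script added above boils down to dropping one apt.conf.d snippet into the target filesystem, so that apt accepts the expired Valid-Until stamps that snapshot.debian.org serves.  A minimal sketch of the same step, writing into a scratch directory instead of /target (the paths are illustrative):

```shell
# Write the apt configuration snippet the 02IgnoreValidUntil generator
# installs; a temporary directory stands in for /target here.
target=$(mktemp -d)
d=$target/etc/apt/apt.conf.d
mkdir -p "$d"
# snapshot.debian.org serves Release files whose Valid-Until has passed;
# this option tells apt to accept them anyway.
echo 'Acquire::Check-Valid-Until "false";' >"$d"/02IgnoreValidUntil
cat "$d"/02IgnoreValidUntil
# prints: Acquire::Check-Valid-Until "false";
```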



From xen-devel-bounces@lists.xenproject.org Tue Feb 09 17:59:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 17:59:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83374.155006 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9XIC-0007fV-Ia; Tue, 09 Feb 2021 17:59:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83374.155006; Tue, 09 Feb 2021 17:59:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9XIC-0007fG-C2; Tue, 09 Feb 2021 17:59:16 +0000
Received: by outflank-mailman (input) for mailman id 83374;
 Tue, 09 Feb 2021 17:59:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1l9XIA-0007dh-LS
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 17:59:14 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1l9XIA-0003yv-Km
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 17:59:14 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1l9XIA-0004dP-JD
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 17:59:14 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1l9XI8-00065Y-M7; Tue, 09 Feb 2021 17:59:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=7ptVHVv6hSaLUTOmQvRCmHGiTbV+rGIqKIYD1ZJ897I=; b=Uf6AjtirZWVDuiNlWJEiz0Cjjy
	M4h4JY1qy4daDHN8uF5lX9ckO5XK99gdLtdZqCE3E1nkF48MdTluyWXQPmPvQIRArF727dkCCU84T
	X+hEwuKcvhvwyZafykJad7ktpYWMCXK0VWEG9GW2s94Zfxq63g4y0cmjUQ1EGmrm52vk=;
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 3/7] Debian mirror selection: Provide debian_archive_url_suite_arch
Date: Tue,  9 Feb 2021 17:59:00 +0000
Message-Id: <20210209175904.28282-3-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210209175904.28282-1-iwj@xenproject.org>
References: <20210209175904.28282-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

mg-debian-installer-update is going to want this.  No functional change.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 Osstest/Debian.pm | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
index 05cc6e1f..dee52b3d 100644
--- a/Osstest/Debian.pm
+++ b/Osstest/Debian.pm
@@ -814,12 +814,8 @@ chmod 600 $subdir/etc/ssh/ssh_host_*_key ||:
 END
 }
 
-sub debian_mirror_url ($) {
-    # I think ideally this should handle, and be used for, backports too.
-    # It would need an optional suite suffix which could be "-backports"?
-    my ($ho) = @_;
-    my $suite = $ho->{Suite};
-    my $arch = $ho->{Arch};
+sub debian_mirror_url_suite_arch ($$) {
+    my ($suite, $arch) = @_;
     my $url =
       $c{"DebianMirror_${suite}_${arch}"} //
       $c{"DebianMirror_${suite}"} //
@@ -830,6 +826,13 @@ sub debian_mirror_url ($) {
     return $url;
 }
 
+sub debian_mirror_url ($) {
+    # I think ideally this should handle, and be used for, backports too.
+    # It would need an optional suite suffix which could be "-backports"?
+    my ($ho) = @_;
+    return debian_mirror_url_suite_arch($ho->{Suite}, $ho->{Arch});
+}
+
 sub debian_mirror_host_path ($) {
     my ($ho) = @_;
     my $url = debian_mirror_url($ho);
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Feb 09 17:59:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 17:59:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83377.155050 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9XIE-0007kJ-Nm; Tue, 09 Feb 2021 17:59:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83377.155050; Tue, 09 Feb 2021 17:59:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9XIE-0007jo-AA; Tue, 09 Feb 2021 17:59:18 +0000
Received: by outflank-mailman (input) for mailman id 83377;
 Tue, 09 Feb 2021 17:59:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1l9XIC-0007em-2J
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 17:59:16 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1l9XIC-0003zA-1N
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 17:59:16 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1l9XIC-0004eY-0O
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 17:59:16 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1l9XIA-00065Y-4X; Tue, 09 Feb 2021 17:59:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=zS5Qpm9EEk8VpYi2KeFWiIa9aufzej4lmRBAOxzJ8fo=; b=iXl7meE5dLhsslURerdtufop65
	bz/nnK0lO0CavJvqgI+tBlYjzuUCTg0iLIU769Cli5Zrup6eUCeWBBsmVBXZNAHdip0sUN7q/gxhT
	ayjr5TVq2F6yRhx9XTHoFwO7GZ5BfHaYIqhL6ZvYMExoWF0HFehuIhEvN2V5t7bWihW0=;
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [OSSTEST PATCH 6/7] production-config: Use a snapshot for buster armhf
Date: Tue,  9 Feb 2021 17:59:03 +0000
Message-Id: <20210209175904.28282-6-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210209175904.28282-1-iwj@xenproject.org>
References: <20210209175904.28282-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The recent Debian update broke some guest setup.  Roll back to a
snapshot from before the update.

CC: Julien Grall <julien@xen.org>
CC: Stefano Stabellini <sstabellini@kernel.org>
Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 production-config | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/production-config b/production-config
index e7009a55..126c5d3b 100644
--- a/production-config
+++ b/production-config
@@ -93,6 +93,8 @@ TftpDiVersion_jessie 2018-06-26
 TftpDiVersion_stretch 2020-09-24
 TftpDiVersion_buster 2021-02-08
 
+DebianMirror_buster_armhf http://snapshot.debian.org/archive/debian/20210201T024125Z/
+
 DebianSnapshotBackports_jessie http://snapshot.debian.org/archive/debian/20190206T211314Z/
 
 # For ISO installs
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Feb 09 17:59:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 17:59:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83373.155001 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9XIC-0007eo-AX; Tue, 09 Feb 2021 17:59:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83373.155001; Tue, 09 Feb 2021 17:59:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9XIC-0007ec-0X; Tue, 09 Feb 2021 17:59:16 +0000
Received: by outflank-mailman (input) for mailman id 83373;
 Tue, 09 Feb 2021 17:59:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1l9XIA-0007dc-92
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 17:59:14 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1l9XIA-0003ys-7v
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 17:59:14 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1l9XIA-0004d9-4H
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 17:59:14 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1l9XI8-00065Y-AQ; Tue, 09 Feb 2021 17:59:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=IJysOWmQHZYngadVXUNXmiph+YVmGyRyOsy43nVdaL8=; b=aDETW7u5hqy8GTIuhW5MBMIuKz
	hOjzqZ2/xy/zUu1LB7WahmA3a83lzYRy/55wrSHZOM1W2JHmwbrMgyy041Xz5lqmhoXsEvA5zzg6E
	jZRzPZCiBeYXZxodHsHABFUcxH1flsNQQangJ7Pji8VS+re4iVNeFyrbzKlDt3HZmluY=;
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 2/7] Debian mirror selection: Introduce DebianMirror[_<suite>[_<arch>]]
Date: Tue,  9 Feb 2021 17:58:59 +0000
Message-Id: <20210209175904.28282-2-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210209175904.28282-1-iwj@xenproject.org>
References: <20210209175904.28282-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

No functional change with existing configs.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 Osstest/Debian.pm   | 35 ++++++++++++++++++++++++++++++++---
 README              |  4 ++++
 make-distros-flight |  2 ++
 production-config   |  3 ---
 ts-debian-install   |  4 +++-
 5 files changed, 41 insertions(+), 7 deletions(-)

diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
index 01930e1f..05cc6e1f 100644
--- a/Osstest/Debian.pm
+++ b/Osstest/Debian.pm
@@ -22,6 +22,7 @@ use warnings;
 
 use POSIX;
 
+use Carp;
 use IO::File;
 use File::Copy;
 use File::Basename;
@@ -35,6 +36,8 @@ BEGIN {
     $VERSION     = 1.00;
     @ISA         = qw(Exporter);
     @EXPORT      = qw(debian_boot_setup
+		      debian_mirror_url debian_mirror_host_path
+		      debian_mirror_url_suite_arch
                       di_installer_path di_special_kernel
                       setupboot_bootloader_edited_rune
                       debian_overlays debian_overlays_fixup_cmd
@@ -811,12 +814,36 @@ chmod 600 $subdir/etc/ssh/ssh_host_*_key ||:
 END
 }
 
+sub debian_mirror_url ($) {
+    # I think ideally this should handle, and be used for, backports too.
+    # It would need an optional suite suffix which could be "-backports"?
+    my ($ho) = @_;
+    my $suite = $ho->{Suite};
+    my $arch = $ho->{Arch};
+    my $url =
+      $c{"DebianMirror_${suite}_${arch}"} //
+      $c{"DebianMirror_${suite}"} //
+      $c{"DebianMirror"};
+    if (!defined $url) {
+	$url = "http://$c{DebianMirrorHost}/$c{DebianMirrorSubpath}";
+    }
+    return $url;
+}
+
+sub debian_mirror_host_path ($) {
+    my ($ho) = @_;
+    my $url = debian_mirror_url($ho);
+    $url =~ m{^http://([^/]+)/(.*)$} or
+      confess "unsupported Debian url (needs to be http://HOST/...): $url";
+    return ($1, $2);
+}
+
 sub preseed_backports_packages ($$$$@) {
     my ($ho, $sfx, $xopts, $suite, @pkgs) = @_;
 
     if (! $xopts->{BackportsSourcesAlreadyAdded}++) {
 	my $bp_url = $c{"DebianSnapshotBackports_$suite"};
-	$bp_url ||= "http://$c{DebianMirrorHost}/$c{DebianMirrorSubpath}";
+	$bp_url ||= debian_mirror_url($ho);
 
 	my $apt_insert='';
 	my $extra_rune='';
@@ -942,6 +969,8 @@ END
     preseed_hook_command($ho, 'late_command', $sfx,
 			 debian_dhcp_rofs_fix($ho, '/target'));
 
+    my ($mhost, $mpath) = debian_mirror_host_path($ho);
+
     my $preseed = <<"END";
 d-i debian-installer/locale string en_GB
 d-i console-keymaps-at/keymap select gb
@@ -1001,9 +1030,9 @@ d-i finish-install/keep-consoles boolean true
 d-i finish-install/reboot_in_progress note
 d-i cdrom-detect/eject boolean false
 
-d-i mirror/http/hostname string $c{DebianMirrorHost}
+d-i mirror/http/hostname string $mhost
+d-i mirror/http/directory string /$mpath
 d-i mirror/http/proxy string $c{DebianMirrorProxy}
-d-i mirror/http/directory string /$c{DebianMirrorSubpath}
 d-i apt-setup/use_mirror boolean yes
 d-i apt-setup/another boolean false
 d-i apt-setup/non-free boolean false
diff --git a/README b/README
index 33c4d2cc..20d9802a 100644
--- a/README
+++ b/README
@@ -398,6 +398,10 @@ DebianMirrorProxy
    The apt proxy to specify for Debian (and derivatives),
    eg http://apt-cacher:3142/ .
 
+DebianMirror[_<suite>[_<arch>]]
+   Overrides DebianMirrorHost and DebianMirrorSubpath, optionally
+   for specific suite and arch.
+
 TestHost <hostname>
 TestHost_<ident> <hostname>
    Specifies the test box to use.  Should be a bare hostname,
diff --git a/make-distros-flight b/make-distros-flight
index 406d7d64..47094ef5 100755
--- a/make-distros-flight
+++ b/make-distros-flight
@@ -74,6 +74,8 @@ test_do_one_netboot () {
   else
     #local mirror="http://`getconfig DebianMirrorHost`/`getconfig DebianMirrorSubpath`"
     # XXX local mirror seems to serve up stale files.
+    # ^ this should use debian_mirror_url, not plain config, so it
+    #   honours suite- and arch- specific settings
     local mirror="http://ftp.debian.org/debian"
     diurl="$mirror/dists/$guest_suite/main/installer-$domU/current/images/netboot"
     gver=$guest_suite
diff --git a/production-config b/production-config
index df32e863..e7009a55 100644
--- a/production-config
+++ b/production-config
@@ -126,9 +126,6 @@ CoverityUploadUrl https://scan.coverity.com/builds?project=XenProject
 CoverityTools cov-analysis-linux64-2019.03.tar.gz
 CoverityToolsStripComponents 1
 
-# We use the IP address because Citrix can't manage reliable nameservice
-#DebianMirrorHost debian.uk.xensource.com
-#DebianMirrorHost 10.80.16.196
 DebianMirrorProxy http://cache:3143/
 
 HostProp_DhcpWatchMethod leases dhcp3 infra.t:5556
diff --git a/ts-debian-install b/ts-debian-install
index 8caa9d76..c42e8a37 100755
--- a/ts-debian-install
+++ b/ts-debian-install
@@ -83,12 +83,14 @@ END
     $cmd .= <<END if defined $useproxy;
         http_proxy=$useproxy \\
 END
+    my $mirror = debian_mirror_url($gho);
+
     $cmd .= <<END;
         xen-create-image \\
             --dhcp --mac $gho->{Ether} \\
             --memory ${ram_mb}Mb --swap ${swap_mb}Mb \\
             --dist $gsuite \\
-            --mirror http://$c{DebianMirrorHost}/$c{DebianMirrorSubpath} \\
+            --mirror $mirror \\
             --hostname $gho->{Name} \\
             --lvm $gho->{Vg} --force \\
             --kernel $kernpath \\
-- 
2.20.1
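The mirror selection this patch adds to Osstest/Debian.pm falls back from the most specific config key to the least.  A rough shell rendering of the same lookup order (the real code does this in Perl against `$c{...}`; plain shell variables and the helper name here are illustrative):

```shell
# Illustrate the DebianMirror[_<suite>[_<arch>]] lookup order:
# per-suite-per-arch, then per-suite, then global, then the
# pre-existing DebianMirrorHost/DebianMirrorSubpath pair.
mirror_url () {
    suite=$1; arch=$2
    for key in "DebianMirror_${suite}_${arch}" "DebianMirror_${suite}" DebianMirror; do
        eval "val=\${$key:-}"
        if [ -n "$val" ]; then
            printf '%s\n' "$val"
            return
        fi
    done
    printf 'http://%s/%s\n' "$DebianMirrorHost" "$DebianMirrorSubpath"
}
```

With only `DebianMirrorHost`/`DebianMirrorSubpath` set this yields the same URL as before the patch, which is why there is no functional change with existing configs.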



From xen-devel-bounces@lists.xenproject.org Tue Feb 09 17:59:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 17:59:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83375.155016 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9XIC-0007gS-VB; Tue, 09 Feb 2021 17:59:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83375.155016; Tue, 09 Feb 2021 17:59:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9XIC-0007g1-NW; Tue, 09 Feb 2021 17:59:16 +0000
Received: by outflank-mailman (input) for mailman id 83375;
 Tue, 09 Feb 2021 17:59:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1l9XIB-0007e5-6G
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 17:59:15 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1l9XIB-0003yy-5Z
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 17:59:15 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1l9XIB-0004di-4W
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 17:59:15 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1l9XI9-00065Y-8X; Tue, 09 Feb 2021 17:59:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=ZGibySREmlkdtxh1B3xdTGiAPnSGGHBkgH+xeUe0jfg=; b=ZXghKaLkqZMGrYekheqY7qwBmD
	6SbZ629tQKFK7kN0tFA5MEbVOXnOhSdxyaK5G92Bm4f8gMECaSQJGcH04JYRoB+GJLeO9ZA30OeoI
	W7MfzLxXgPaZ0Vd8bY/HX/wlSUC7GDuxkCNpn5gJm+MGfWMlBuUXMJfIaDPORooZajRw=;
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 4/7] mg-debian-installer-update: Use Debian mirror selection
Date: Tue,  9 Feb 2021 17:59:01 +0000
Message-Id: <20210209175904.28282-4-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210209175904.28282-1-iwj@xenproject.org>
References: <20210209175904.28282-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

No functional change with existing config.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 mg-debian-installer-update | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/mg-debian-installer-update b/mg-debian-installer-update
index 5e890d34..4fb4bc21 100755
--- a/mg-debian-installer-update
+++ b/mg-debian-installer-update
@@ -28,7 +28,13 @@ suite=$1
 arch=$2
 packages="$3"
 
-site=http://ftp.debian.org/debian/
+site=$(perl -we '
+    use Osstest;
+    use Osstest::Debian;
+    readglobalconfig();
+    print debian_mirror_url_suite_arch($ARGV[0], $ARGV[1]) or die $!;
+' "$suite" "$arch")
+
 sbase=$site/dists/$suite
 
 src=$sbase/main/installer-$arch/current/images/netboot/
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Feb 09 17:59:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 17:59:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83378.155060 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9XIF-0007ly-HY; Tue, 09 Feb 2021 17:59:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83378.155060; Tue, 09 Feb 2021 17:59:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9XIE-0007lb-Vb; Tue, 09 Feb 2021 17:59:18 +0000
Received: by outflank-mailman (input) for mailman id 83378;
 Tue, 09 Feb 2021 17:59:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1l9XIC-0007fs-LS
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 17:59:16 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1l9XIC-0003zG-KT
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 17:59:16 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1l9XIC-0004f5-JK
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 17:59:16 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1l9XIA-00065Y-I8; Tue, 09 Feb 2021 17:59:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=oG2vXONH0nxvtWXCeybFMwC7yWwCLiEPbKtB71WxGuQ=; b=dPgj101E687RQ7LZrLQcOT7ICt
	0l6sBtDIJHrp53fUQ3Bl7FjzR/diEc95s3gJ6XrhYVA36GrUooqoZ9NW0C1Zw2IOfdBf2gs3FzLFC
	9k1u0Qjafkc1scz8mAX5WrjhtrDYZUWUY4GlkNi0Z5lPkkaL42vi+LO7IH0S0qQc1fiw=;
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 7/7] production-config: Update d-i; use snapshot for buster armhf
Date: Tue,  9 Feb 2021 17:59:04 +0000
Message-Id: <20210209175904.28282-7-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210209175904.28282-1-iwj@xenproject.org>
References: <20210209175904.28282-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This d-i version was generated by running mg-debian-installer-update-all
with the recent changes that handle snapshots, and that use a snapshot
for buster armhf.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 production-config | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/production-config b/production-config
index 126c5d3b..f783af3c 100644
--- a/production-config
+++ b/production-config
@@ -91,7 +91,7 @@ TftpNetbootGroup osstest
 TftpDiVersion_wheezy 2016-06-08
 TftpDiVersion_jessie 2018-06-26
 TftpDiVersion_stretch 2020-09-24
-TftpDiVersion_buster 2021-02-08
+TftpDiVersion_buster 2021-02-09
 
 DebianMirror_buster_armhf http://snapshot.debian.org/archive/debian/20210201T024125Z/
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Feb 09 18:01:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 18:01:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83381.155076 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9XKI-0000nW-Sq; Tue, 09 Feb 2021 18:01:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83381.155076; Tue, 09 Feb 2021 18:01:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9XKI-0000nP-Pq; Tue, 09 Feb 2021 18:01:26 +0000
Received: by outflank-mailman (input) for mailman id 83381;
 Tue, 09 Feb 2021 18:01:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l9XKH-0000nA-Hv
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 18:01:25 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l9XKH-00048P-HG
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 18:01:25 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l9XKH-0004zb-G4
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 18:01:25 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l9XK7-00067b-Q2; Tue, 09 Feb 2021 18:01:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=2LtKfQVabQUGbBjUgqbS2v50wctm2buFD9ChFc83YQY=; b=pD6j6uxh4JmPOxUbXiaJEQhtzj
	nt8hFsQIhN5YC0OyFmSzUIu+Tq1hJiTC9syQc/z1hXLh8jJ2krnDqMkPMOhB1vid6dnErxwsmH1gw
	yKhWrl04rlLCAQc2y7xUEzWCPooGJtbAN28HwwGE7hg/6zuF0sojF5WRa8VbLcfFyuKA=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 8bit
Message-ID: <24610.52715.533963.858366@mariner.uk.xensource.com>
Date: Tue, 9 Feb 2021 18:01:15 +0000
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>,
    Jan Beulich <jbeulich@suse.com>,
    Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
    Wei Liu <wl@xen.org>,
    Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH for-4.15] x86/ucode: Fix microcode payload size for Fam19
 processors
In-Reply-To: <6f44ae66-9956-3312-c4c8-b0f1e4b568bb@citrix.com>
References: <20210209153336.4016-1-andrew.cooper3@citrix.com>
	<c09110f7-6459-e1f7-2175-09d535ad03ce@suse.com>
	<24610.50089.887907.573064@mariner.uk.xensource.com>
	<6f44ae66-9956-3312-c4c8-b0f1e4b568bb@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Andrew Cooper writes ("Re: [PATCH for-4.15] x86/ucode: Fix microcode payload size for Fam19 processors"):
> On 09/02/2021 17:17, Ian Jackson wrote:
> > Jan Beulich writes ("Re: [PATCH for-4.15] x86/ucode: Fix microcode payload size for Fam19 processors"):
...
> >> How certain is it that there's not going to be another increase?
> >> And in comparison, how bad would it be if we pulled this upper
> >> limit to something that's at least slightly less of an "odd"
> >> number, e.g. 0x1800, and thus provide some headroom?
> > 5568 really does seem an excessively magic number...
> 
> It reads better in hex - 0x15c0.  It is exactly the header,
> match/patch-ram, and the digital signature of a fixed algorithm.
> 
> It's far simpler than Intel's format, which contains multiple embedded
> blobs and support for minor platform variations within the same blob.
> 
> There are a lot of problems with how we do patch verification on AMD
> right now, but it's all a consequence of the header not containing a
> length field.
> 
> This number won't change now.  Zen3 processors are out in the world.

Err.  This Is Fine.

Release-Acked-by: Ian Jackson <iwj@xenproject.org>
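
For anyone checking the arithmetic in this thread, the numbers line up as
follows (variable names here are illustrative, not taken from the Xen
source):

```python
# The "magic" payload size in the patch under review, which, as Andrew
# notes, reads better in hex.
FAM19_UCODE_PAYLOAD = 5568
assert FAM19_UCODE_PAYLOAD == 0x15C0

# Jan's suggested rounder upper bound, and the headroom it would provide.
PROPOSED_ROUND_LIMIT = 0x1800  # 6144 bytes
headroom = PROPOSED_ROUND_LIMIT - FAM19_UCODE_PAYLOAD
print(hex(FAM19_UCODE_PAYLOAD), headroom)  # 0x15c0 576
```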


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 18:40:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 18:40:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83390.155088 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9XvU-0003hT-SR; Tue, 09 Feb 2021 18:39:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83390.155088; Tue, 09 Feb 2021 18:39:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9XvU-0003hM-PV; Tue, 09 Feb 2021 18:39:52 +0000
Received: by outflank-mailman (input) for mailman id 83390;
 Tue, 09 Feb 2021 18:39:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9XvT-0003hB-5H; Tue, 09 Feb 2021 18:39:51 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9XvS-0004jz-VO; Tue, 09 Feb 2021 18:39:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9XvS-0002xi-M1; Tue, 09 Feb 2021 18:39:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l9XvS-0004sW-LA; Tue, 09 Feb 2021 18:39:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=ufz/o4H7w4wj/fyD9JByP8340x3Ayd26dvgS59MNSPo=; b=PaP1phx/3dBxv7D3eY2LclHBVP
	sNscS/eubjOfR8SClJ+cgU+HLKTEPVvTOQne+nRlh3yevp+Udz3mrLpeQYJcTC6yJN43UTsA/bAkE
	Wc/jlsV4awbIV8uKiXspI/As1BuFcbcllC1HEH5CLZyuL3XPvc37qM59ycX8qvJ19ph4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [linux-5.4 bisection] complete test-arm64-arm64-xl-seattle
Message-Id: <E1l9XvS-0004sW-LA@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 09 Feb 2021 18:39:50 +0000

branch xen-unstable
xenbranch xen-unstable
job test-arm64-arm64-xl-seattle
testid guest-start

Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
  Bug introduced:  a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Bug not present: acc402fa5bf502d471d50e3d495379f093a7f9e4
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/159179/


  commit a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Author: David Woodhouse <dwmw@amazon.co.uk>
  Date:   Wed Jan 13 13:26:02 2021 +0000
  
      xen: Fix event channel callback via INTX/GSI
      
      [ Upstream commit 3499ba8198cad47b731792e5e56b9ec2a78a83a2 ]
      
      For a while, event channel notification via the PCI platform device
      has been broken, because we attempt to communicate with xenstore before
      we even have notifications working, with the xs_reset_watches() call
      in xs_init().
      
      We tend to get away with this on Xen versions below 4.0 because we avoid
      calling xs_reset_watches() anyway, because xenstore might not cope with
      reading a non-existent key. And newer Xen *does* have the vector
      callback support, so we rarely fall back to INTX/GSI delivery.
      
      To fix it, clean up a bit of the mess of xs_init() and xenbus_probe()
      startup. Call xs_init() directly from xenbus_init() only in the !XS_HVM
      case, deferring it to be called from xenbus_probe() in the XS_HVM case
      instead.
      
      Then fix up the invocation of xenbus_probe() to happen either from its
      device_initcall if the callback is available early enough, or when the
      callback is finally set up. This means that the hack of calling
      xenbus_probe() from a workqueue after the first interrupt, or directly
      from the PCI platform device setup, is no longer needed.
      
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Link: https://lore.kernel.org/r/20210113132606.422794-2-dwmw2@infradead.org
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/linux-5.4/test-arm64-arm64-xl-seattle.guest-start.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/linux-5.4/test-arm64-arm64-xl-seattle.guest-start --summary-out=tmp/159179.bisection-summary --basis-template=158387 --blessings=real,real-bisect,real-retry linux-5.4 test-arm64-arm64-xl-seattle guest-start
Searching for failure / basis pass:
 159129 fail [host=laxton0] / 158681 [host=laxton1] 158624 ok.
Failure / basis pass flights: 159129 / 158624
Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest d4716ee8751bf8dabf5872ba008124a0979a5f94 c530a75c1e6a472b0eb9558310b518f0dfcd8860 0d96664df322d50e0ac54130e129c0bf4f2b72df 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7
Basis pass 09f983f0c7fc0db79a5f6c883ec3510d424c369c c530a75c1e6a472b0eb9558310b518f0dfcd8860 96a9acfc527964dc5ab7298862a0cd8aa5fffc6a 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 452ddbe3592b141b05a7e0676f09c8ae07f98fdd
Generating revisions with ./adhoc-revtuple-generator  git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git#09f983f0c7fc0db79a5f6c883ec3510d424c369c-d4716ee8751bf8dabf5872ba008124a0979a5f94 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#96a9acfc527964dc5ab7298862a0cd8aa5fffc6a-0d96664df322d50e0ac54130e129c0bf4f2b72df git://xenbits.xen.org/qemu-xen.git#7ea4288\
 95af2840d85c524f0bd11a38aac308308-7ea428895af2840d85c524f0bd11a38aac308308 git://xenbits.xen.org/osstest/seabios.git#ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e-ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e git://xenbits.xen.org/xen.git#452ddbe3592b141b05a7e0676f09c8ae07f98fdd-ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7
Loaded 15001 nodes in revision graph
Searching for test results:
 158609 pass irrelevant
 158616 [host=laxton1]
 158624 pass 09f983f0c7fc0db79a5f6c883ec3510d424c369c c530a75c1e6a472b0eb9558310b518f0dfcd8860 96a9acfc527964dc5ab7298862a0cd8aa5fffc6a 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 452ddbe3592b141b05a7e0676f09c8ae07f98fdd
 158681 [host=laxton1]
 158707 fail irrelevant
 158716 fail irrelevant
 158748 fail 131f8d8a889a5ca66a835eea82bba043ac91a7cf c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e f8708b0ed6d549d1d29b8b5cc287f1f2b642bc63
 158765 fail 131f8d8a889a5ca66a835eea82bba043ac91a7cf c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 6677b5a3577c16501fbc51a3341446905bd21c38
 158796 fail irrelevant
 158818 fail irrelevant
 158841 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158863 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158881 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158929 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea56ebf67dd55483105aa9f9996a48213e78337e 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158962 fail irrelevant
 159023 fail irrelevant
 159058 pass 09f983f0c7fc0db79a5f6c883ec3510d424c369c c530a75c1e6a472b0eb9558310b518f0dfcd8860 96a9acfc527964dc5ab7298862a0cd8aa5fffc6a 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 452ddbe3592b141b05a7e0676f09c8ae07f98fdd
 159059 fail irrelevant
 159062 fail 38f35023fd301abeb01cfd81e73caa2e4e7ec0b1 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159063 pass d8a487e673abf46c69c901bb25da54e9bc7ba45e c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159065 pass 5a1d7bb7d333849eb7d3ab5ebfbf9805b2cd46c9 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159066 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159067 pass 9cec63a3aacbcaee8d09aecac2ca2f8820efcc70 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159069 pass 8ab3478335ad8fc08f14ec73251b084fe02b3ebb c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159070 pass acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159071 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159073 pass acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159074 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159075 pass acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159129 fail d4716ee8751bf8dabf5872ba008124a0979a5f94 c530a75c1e6a472b0eb9558310b518f0dfcd8860 0d96664df322d50e0ac54130e129c0bf4f2b72df 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7
 159169 pass 09f983f0c7fc0db79a5f6c883ec3510d424c369c c530a75c1e6a472b0eb9558310b518f0dfcd8860 96a9acfc527964dc5ab7298862a0cd8aa5fffc6a 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 452ddbe3592b141b05a7e0676f09c8ae07f98fdd
 159175 fail d4716ee8751bf8dabf5872ba008124a0979a5f94 c530a75c1e6a472b0eb9558310b518f0dfcd8860 0d96664df322d50e0ac54130e129c0bf4f2b72df 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7
 159179 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
Searching for interesting versions
 Result found: flight 158624 (pass), for basis pass
 Result found: flight 159129 (fail), for basis failure (at ancestor ~116)
 Repro found: flight 159169 (pass), for basis pass
 Repro found: flight 159175 (fail), for basis failure
 0 revisions at acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
No revisions left to test, checking graph state.
 Result found: flight 159070 (pass), for last pass
 Result found: flight 159071 (fail), for first failure
 Repro found: flight 159073 (pass), for last pass
 Repro found: flight 159074 (fail), for first failure
 Repro found: flight 159075 (pass), for last pass
 Repro found: flight 159179 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
  Bug introduced:  a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Bug not present: acc402fa5bf502d471d50e3d495379f093a7f9e4
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/159179/


  commit a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Author: David Woodhouse <dwmw@amazon.co.uk>
  Date:   Wed Jan 13 13:26:02 2021 +0000
  
      xen: Fix event channel callback via INTX/GSI
      
      [ Upstream commit 3499ba8198cad47b731792e5e56b9ec2a78a83a2 ]
      
      For a while, event channel notification via the PCI platform device
      has been broken, because we attempt to communicate with xenstore before
      we even have notifications working, with the xs_reset_watches() call
      in xs_init().
      
      We tend to get away with this on Xen versions below 4.0 because we avoid
      calling xs_reset_watches() anyway, because xenstore might not cope with
      reading a non-existent key. And newer Xen *does* have the vector
      callback support, so we rarely fall back to INTX/GSI delivery.
      
      To fix it, clean up a bit of the mess of xs_init() and xenbus_probe()
      startup. Call xs_init() directly from xenbus_init() only in the !XS_HVM
      case, deferring it to be called from xenbus_probe() in the XS_HVM case
      instead.
      
      Then fix up the invocation of xenbus_probe() to happen either from its
      device_initcall if the callback is available early enough, or when the
      callback is finally set up. This means that the hack of calling
      xenbus_probe() from a workqueue after the first interrupt, or directly
      from the PCI platform device setup, is no longer needed.
      
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Link: https://lore.kernel.org/r/20210113132606.422794-2-dwmw2@infradead.org
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>

pnmtopng: 184 colors found
Revision graph left in /home/logs/results/bisect/linux-5.4/test-arm64-arm64-xl-seattle.guest-start.{dot,ps,png,html,svg}.
----------------------------------------
159179: tolerable FAIL

flight 159179 linux-5.4 real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/159179/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-arm64-arm64-xl-seattle  14 guest-start             fail baseline untested


jobs:
 build-arm64                                                  pass    
 build-arm64-pvops                                            pass    
 test-arm64-arm64-xl-seattle                                  fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Tue Feb 09 19:51:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 19:51:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83401.155110 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Z2Y-0002Eb-DV; Tue, 09 Feb 2021 19:51:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83401.155110; Tue, 09 Feb 2021 19:51:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Z2Y-0002EU-9n; Tue, 09 Feb 2021 19:51:14 +0000
Received: by outflank-mailman (input) for mailman id 83401;
 Tue, 09 Feb 2021 19:51:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SKhV=HL=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1l9Z2W-0002Cz-U8
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 19:51:12 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5de479fa-2e6e-4a84-a78c-f43df68d1b21;
 Tue, 09 Feb 2021 19:51:11 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 4B68664E7C;
 Tue,  9 Feb 2021 19:51:10 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5de479fa-2e6e-4a84-a78c-f43df68d1b21
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1612900270;
	bh=zhbCJKhCALrKzWiOicpL3uWvgXeExvnSWOo4v2kZdno=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=fUI6Gd56ulMfVNV7ETqsuCE3m4mL1DBssH0ZrgTUnXlxGh2xI5mpUJIUMnnBXjxIu
	 tBo+zzbSGguK0t0PPiotNrltI/6nvfAxTUr15wivsgpP8IKsis95by+RTZ3sNSgZwV
	 thlucTKGINRxEmW09Myr9HJEHo5EgDptmWMF36s0OmMkRV4pYU0UqJPpDNsnh4igIb
	 zS5eEhAf+gVB8piV4d1KYPmZ5yiSaMoYMcuQkC+3/KpSLNFc0daMDOVGt9f4DmYs8d
	 4lCdKq7dTvODyPa29TZ/qLW4BI+vvplx+4GnimBVJ76RgyoeUb5Y5bO5HwT38c8qrI
	 FZxMTapuBW+6g==
Date: Tue, 9 Feb 2021 11:51:09 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Bertrand Marquis <Bertrand.Marquis@arm.com>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    "ehem+xen@m5p.com" <ehem+xen@m5p.com>
Subject: Re: [PATCH] xen: workaround missing device_type property in pci/pcie
 nodes
In-Reply-To: <9a97ecc3-f35f-44e4-68b8-1c801b326c40@xen.org>
Message-ID: <alpine.DEB.2.21.2102091146420.8948@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2102081544230.8948@sstabellini-ThinkPad-T480s> <22372A39-83F4-41AB-8FCC-B3A9C8551604@arm.com> <alpine.DEB.2.21.2102090944240.8948@sstabellini-ThinkPad-T480s> <9a97ecc3-f35f-44e4-68b8-1c801b326c40@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-1519644172-1612900270=:8948"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1519644172-1612900270=:8948
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Tue, 9 Feb 2021, Julien Grall wrote:
> Hi Stefano,
> 
> On 09/02/2021 17:47, Stefano Stabellini wrote:
> > On Tue, 9 Feb 2021, Bertrand Marquis wrote:
> > > Hi Stefano,
> > > 
> > > > On 8 Feb 2021, at 23:56, Stefano Stabellini <sstabellini@kernel.org>
> > > > wrote:
> > > > 
> > > > PCI buses differ from default buses in a few important ways, so it is
> > > > important to detect them properly. Normally, PCI buses are expected to
> > > > have the following property:
> > > > 
> > > >     device_type = "pci"
> > > > 
> > > > In reality, it is not always the case. To handle PCI bus nodes that
> > > > don't have the device_type property, also consider the node name: if the
> > > > node name is "pcie" or "pci" then consider the bus as a PCI bus.
> > > > 
> > > > This commit is based on the Linux kernel commit
> > > > d1ac0002dd29 "of: address: Work around missing device_type property in
> > > > pcie nodes".
> > > > 
> > > > This fixes Xen boot on RPi4.
> I am a bit confused with this sentence... How did you manage to boot Xen on
> RPi4 before hand?

The older RPi kernel that I was using didn't have the problematic pci
node in the device tree at all.
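
For context, the offending node is shaped roughly like this (a
hypothetical fragment for illustration only; the node name, unit address,
and property values are made up and the real RPi4 device tree may differ):

```
pci@7d500000 {
        /* device_type = "pci"; is missing here -- that is the bug */
        compatible = "brcm,bcm2711-pcie";
        #address-cells = <3>;
        #size-cells = <2>;
        ...
};
```

Without device_type, Xen falls back to treating the node as a default bus
unless it matches on the node name, which is what this patch adds.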


> > > What we are really handling here is a buggy device tree that could
> > > easily be fixed by the user.
> > > We should at least mention in the commit message that this is a workaround
> > > to solve RPi4 buggy device tree.
> > 
> > Not sure if it can be "easily" fixed by the user -- it took me a few
> > hours to figure out what the problem was, and I know Xen and device tree
> > pretty well :-)
> > 
> > Yes it would be good to have a link to the discussion in the commit
> > message, using the Link tag. It could be done on commit, or I can add it
> > to the next version.
> 
> A summary of the discussion would be useful in the commit message so a reader
> can easily make the connection between the Linux commit and the Xen one.

OK, good idea


> > 
> > Link: https://lore.kernel.org/xen-devel/YBmQQ3Tzu++AadKx@mattapan.m5p.com/
> > 
> > 
> > > > 
> > > > Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> > > > 
> > > > diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
> > > > index 18825e333e..f1a96a3b90 100644
> > > > --- a/xen/common/device_tree.c
> > > > +++ b/xen/common/device_tree.c
> > > > @@ -563,14 +563,28 @@ static unsigned int dt_bus_default_get_flags(const
> > > > __be32 *addr)
> > > >   * PCI bus specific translator
> > > >   */
> > > > 
> > > > +static bool_t dt_node_is_pci(const struct dt_device_node *np)
> > > > +{
> > > > +    bool is_pci = !strcmp(np->name, "pcie") || !strcmp(np->name,
> > > > "pci");
> > > 
> > > The Linux commit is a bit more restrictive and only does that for "pcie".
> > > Any reason why you want to have this workaround also done for "pci"?
> > 
> > Yes, that's because in our case the offending node is named "pci" not
> > "pcie" so the original Linux commit wouldn't cover it.
> > 
> > 
> > > > +
> > > > +    if (is_pci)
> > > > +        printk(XENLOG_WARNING "%s: Missing device_type\n",
> > > > np->full_name);
> > > > +
> > > > +    return is_pci;
> > > > +}
> > > > +
> > > > static bool_t dt_bus_pci_match(const struct dt_device_node *np)
> > > > {
> > > >      /*
> > > >       * "pciex" is PCI Express "vci" is for the /chaos bridge on 1st-gen
> > > > PCI
> > > >       * powermacs "ht" is hypertransport
> > > > +     *
> > > > +     * If none of the device_type values match, and the node name is
> > > > +     * "pcie" or "pci", accept the device as PCI (with a warning).
> > > >       */
> > > >      return !strcmp(np->type, "pci") || !strcmp(np->type, "pciex") ||
> > > > -        !strcmp(np->type, "vci") || !strcmp(np->type, "ht");
> > > > +        !strcmp(np->type, "vci") || !strcmp(np->type, "ht") ||
> > > > +        dt_node_is_pci(np);
> > > > }
> > > > 
> > > > static void dt_bus_pci_count_cells(const struct dt_device_node *np,
--8323329-1519644172-1612900270=:8948--


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 19:53:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 19:53:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83402.155121 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Z4t-0002Lr-QA; Tue, 09 Feb 2021 19:53:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83402.155121; Tue, 09 Feb 2021 19:53:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Z4t-0002Lk-N9; Tue, 09 Feb 2021 19:53:39 +0000
Received: by outflank-mailman (input) for mailman id 83402;
 Tue, 09 Feb 2021 19:53:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SKhV=HL=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1l9Z4s-0002Ld-BL
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 19:53:38 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d8651265-d30a-43c8-a770-011d65eba556;
 Tue, 09 Feb 2021 19:53:37 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 8D73964E7D;
 Tue,  9 Feb 2021 19:53:36 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d8651265-d30a-43c8-a770-011d65eba556
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1612900416;
	bh=142UJrDAH4ZHWpdBmuC4nsTiOOGx3iM5WQFQL38gVa0=;
	h=From:To:Cc:Subject:Date:From;
	b=HEz7/geGw6O9fqLT4gvMqPbAF12Dzy46pM6FOOGyMbqaPrQjvG1aBgPHf8VQcvqaF
	 1L3ny1erEqFvmAnD9LOp0Itw/mchCe/nl1d1LIbYWy4EJNW8EHBX+GFz25YbXSOqo9
	 QXojnzLYCcr796mTQQUjHF8RVwZ4iYKkCPzkXRhbFk+OfW73bMYksvD+CTMtb7fRqB
	 10pYqxyfRIOFy6BAym53HyBIA68a0k6t9V7KY/qS1wwEhcYlaY3cOm0+cMw9e4BTOU
	 ObptFsV4/K61S8oc7Gtq886eAv38H5DdPrvQt5ZRZxJWoW0MuO5hHBfKFwU/qtC809
	 oMZ5ytC8WDMYA==
From: Stefano Stabellini <sstabellini@kernel.org>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	julien@xen.org,
	Volodymyr_Babchuk@epam.com,
	ehem+xen@m5p.com,
	Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: [PATCH v2] xen: workaround missing device_type property in pci/pcie nodes
Date: Tue,  9 Feb 2021 11:53:34 -0800
Message-Id: <20210209195334.21206-1-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1

PCI buses differ from default buses in a few important ways, so it is
important to detect them properly. Normally, PCI buses are expected to
have the following property:

    device_type = "pci"

In reality, this is not always the case. To handle PCI bus nodes that
don't have the device_type property, also check the node name: if the
node name is "pcie" or "pci", treat the bus as a PCI bus.

This commit is based on the Linux kernel commit
d1ac0002dd29 "of: address: Work around missing device_type property in
pcie nodes".

This fixes Xen boot on RPi4. Some RPi4 kernels have the following node
in their device trees:

&pcie0 {
	pci@1,0 {
		#address-cells = <3>;
		#size-cells = <2>;
		ranges;

		reg = <0 0 0 0 0>;

		usb@1,0 {
				reg = <0x10000 0 0 0 0>;
				resets = <&reset RASPBERRYPI_FIRMWARE_RESET_ID_USB>;
		};
	};
};

The pci@1,0 node is a PCI bus. If we parse the node and its children as
a default bus, the reg property under usb@1,0 would be interpreted as an
address range mappable by the CPU, which it is not, so address
translation would break.

Link: https://lore.kernel.org/xen-devel/YBmQQ3Tzu++AadKx@mattapan.m5p.com/
Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---
Changes in v2:
- improve commit message

diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index 18825e333e..f1a96a3b90 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -563,14 +563,28 @@ static unsigned int dt_bus_default_get_flags(const __be32 *addr)
  * PCI bus specific translator
  */
 
+static bool_t dt_node_is_pci(const struct dt_device_node *np)
+{
+    bool is_pci = !strcmp(np->name, "pcie") || !strcmp(np->name, "pci");
+
+    if (is_pci)
+        printk(XENLOG_WARNING "%s: Missing device_type\n", np->full_name);
+
+    return is_pci;
+}
+
 static bool_t dt_bus_pci_match(const struct dt_device_node *np)
 {
     /*
      * "pciex" is PCI Express "vci" is for the /chaos bridge on 1st-gen PCI
      * powermacs "ht" is hypertransport
+     *
+     * If none of the device_type values match but the node name is
+     * "pcie" or "pci", accept the device as PCI (with a warning).
      */
     return !strcmp(np->type, "pci") || !strcmp(np->type, "pciex") ||
-        !strcmp(np->type, "vci") || !strcmp(np->type, "ht");
+        !strcmp(np->type, "vci") || !strcmp(np->type, "ht") ||
+        dt_node_is_pci(np);
 }
 
 static void dt_bus_pci_count_cells(const struct dt_device_node *np,


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 19:55:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 19:55:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83404.155134 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Z6W-0002Tm-6F; Tue, 09 Feb 2021 19:55:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83404.155134; Tue, 09 Feb 2021 19:55:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Z6W-0002Tf-2n; Tue, 09 Feb 2021 19:55:20 +0000
Received: by outflank-mailman (input) for mailman id 83404;
 Tue, 09 Feb 2021 19:55:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SKhV=HL=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1l9Z6U-0002Ta-OH
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 19:55:18 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id db253516-8057-4776-9591-7501a82a652d;
 Tue, 09 Feb 2021 19:55:18 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id D062D64E7D;
 Tue,  9 Feb 2021 19:55:16 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: db253516-8057-4776-9591-7501a82a652d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1612900517;
	bh=C8mwFtVaqcLwwJZp9LBwoctjUhYIjCRcp4gkPN3Hghw=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=WXn1XjVc7BYrjqL7wZhTkdiCcEd8HEp0XRFGFpw5Y8gNKINdU3kwwmkluBmVTTNJV
	 sZz+p38IZyl9nRO/YbuF6KkijHkgfLHM4vYAHJf4jMbwtv9Ew/3LKQ/gfp+vzW/kGl
	 VXLT1ffMp4mPv0twU4hy3Sqbn8GBi7dctYrShkNjIUnkVCiLRvBqOv5ZL9QoIkGhTT
	 Z2CSKRdSaiPovD9yPfQcCKMv71S4NKcoLTnjSK/4yT18Ly/oEDxYPlVRSWIjhV2ELY
	 CD6cuxuHyV2fgCbyDRGFK+8YrxNGX461KCfPBQ2swd0ZnIhwFwJ3h7VMkpEDcGAj7f
	 U6m9mrkGeNXOg==
Date: Tue, 9 Feb 2021 11:55:16 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Stefano Stabellini <sstabellini@kernel.org>
cc: xen-devel@lists.xenproject.org, julien@xen.org, Volodymyr_Babchuk@epam.com, 
    ehem+xen@m5p.com, Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: Re: [PATCH v2] xen: workaround missing device_type property in
 pci/pcie nodes
In-Reply-To: <20210209195334.21206-1-sstabellini@kernel.org>
Message-ID: <alpine.DEB.2.21.2102091154550.8948@sstabellini-ThinkPad-T480s>
References: <20210209195334.21206-1-sstabellini@kernel.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 9 Feb 2021, Stefano Stabellini wrote:
> PCI buses differ from default buses in a few important ways, so it is
> important to detect them properly. Normally, PCI buses are expected to
> have the following property:
> 
>     device_type = "pci"
> 
> In reality, this is not always the case. To handle PCI bus nodes that
> don't have the device_type property, also check the node name: if the
> node name is "pcie" or "pci", treat the bus as a PCI bus.
> 
> This commit is based on the Linux kernel commit
> d1ac0002dd29 "of: address: Work around missing device_type property in
> pcie nodes".
> 
> This fixes Xen boot on RPi4. Some RPi4 kernels have the following node
> in their device trees:
> 
> &pcie0 {
> 	pci@1,0 {
> 		#address-cells = <3>;
> 		#size-cells = <2>;
> 		ranges;
> 
> 		reg = <0 0 0 0 0>;
> 
> 		usb@1,0 {
> 				reg = <0x10000 0 0 0 0>;
> 				resets = <&reset RASPBERRYPI_FIRMWARE_RESET_ID_USB>;

there is one more tab than needed here, sorry


> 		};
> 	};
> };
> 
> The pci@1,0 node is a PCI bus. If we parse the node and its children as
> a default bus, the reg property under usb@1,0 would be interpreted as an
> address range mappable by the CPU, which it is not, so address
> translation would break.
> 
> Link: https://lore.kernel.org/xen-devel/YBmQQ3Tzu++AadKx@mattapan.m5p.com/
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> ---
> Changes in v2:
> - improve commit message
> 
> diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
> index 18825e333e..f1a96a3b90 100644
> --- a/xen/common/device_tree.c
> +++ b/xen/common/device_tree.c
> @@ -563,14 +563,28 @@ static unsigned int dt_bus_default_get_flags(const __be32 *addr)
>   * PCI bus specific translator
>   */
>  
> +static bool_t dt_node_is_pci(const struct dt_device_node *np)
> +{
> +    bool is_pci = !strcmp(np->name, "pcie") || !strcmp(np->name, "pci");
> +
> +    if (is_pci)
> +        printk(XENLOG_WARNING "%s: Missing device_type\n", np->full_name);
> +
> +    return is_pci;
> +}
> +
>  static bool_t dt_bus_pci_match(const struct dt_device_node *np)
>  {
>      /*
>       * "pciex" is PCI Express "vci" is for the /chaos bridge on 1st-gen PCI
>       * powermacs "ht" is hypertransport
> +     *
> +     * If none of the device_type values match but the node name is
> +     * "pcie" or "pci", accept the device as PCI (with a warning).
>       */
>      return !strcmp(np->type, "pci") || !strcmp(np->type, "pciex") ||
> -        !strcmp(np->type, "vci") || !strcmp(np->type, "ht");
> +        !strcmp(np->type, "vci") || !strcmp(np->type, "ht") ||
> +        dt_node_is_pci(np);
>  }
>  
>  static void dt_bus_pci_count_cells(const struct dt_device_node *np,
> 


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 20:03:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 20:03:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83407.155146 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9ZE3-0003Vt-WE; Tue, 09 Feb 2021 20:03:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83407.155146; Tue, 09 Feb 2021 20:03:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9ZE3-0003Vm-TA; Tue, 09 Feb 2021 20:03:07 +0000
Received: by outflank-mailman (input) for mailman id 83407;
 Tue, 09 Feb 2021 20:03:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5GlY=HL=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l9ZE2-0003Vh-Ah
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 20:03:06 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6ccd6d5c-053a-4e29-a9fa-a9bae9d54129;
 Tue, 09 Feb 2021 20:03:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6ccd6d5c-053a-4e29-a9fa-a9bae9d54129
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612900985;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=RhyvlAvej6uFasCeoeU7D0dVC/Z4zPiTox9Lv2VZcgQ=;
  b=Z6+KxwOuFh4FpFInK0u3uIYY1kymHhqGHMNLh3Mrq9bR49w4bKFodJ1A
   4HfluovDVh7psC7yWrV8Ve0SoZC/QmYwYmkbOlzqcBiGej3BRg59ZJMDv
   Y6eohSeKTysv4yP/axfVdWyGOdrDK05bVQdtlyG5eAR7ysn45OafvEDn2
   I=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: 3Sc4b73rnR4qID9eZCIvziHY7zr0yzJA7NhBMn504csyx/ynXpdaDHO5vaE5xZAGsD/uuBqHVT
 Qblt8dO3Sw/P7B7kukSxS5i2GrmkIBCkxIA3WKh1kKXSyiY7Qpy4IkuM1P5dxbfevNe4QAWXum
 KBrshqUt1lksvKpMSQcTvivRfaCuNNz9e3MG+pXm5Gs22Bh4JIopm/URSObAbEwNwUMy02R20u
 qMLzG538E4AvF1jPN8Rh0Cz+NyoBKshrJZHMuUjRUG8ZWZoLuIAttpxlPcUR46H7aGwPwUHmdI
 TAY=
X-SBRS: 5.2
X-MesageID: 38253602
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,166,1610427600"; 
   d="scan'208";a="38253602"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=jImxHux7MqiS/AL69Y75DZ3KblRv32s7yAOvXTgRNPwusKjJAPoH/nzGV6DEOSpA+/VSMN3r4ObMfd9ys7vPRBzlI7Qm0XyF3XEH4CuGlovgFbdXZ0Tprn+JslOUsT2lbqeR9dFyNzrDHyYgTa6AmfWrDOZsDA0SQJeyFXwWdXM9FCN3t2yz1vmt6tAIHyC38qkrMjF/v9ZeWI827Z/wDCWRAEJOU63r7oYHivTAgoJyYWcUd7RD2H6wtkwseGJAwtdm0LPFZhBI7z3GxEw7+Yoh6uvvqr1dCDrli5gj3rnAypcDZ9iWG1wPo4wk+HZeTURpg0hHhvPnVzXACkCufg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nOAoT+g4WqRsMZuM9rfXCyGWS86HflA8SSBIFOTmXko=;
 b=ciGzT0/rGHIlBvhIvg9JyR5qbUb9HB1jiIWCE7gXea1lh6YCP9+goEqpI/dm7ojUBbt56MfJy2nJJWYLn4A2gMPGGYEprZ64cdBuvNyPv18BPl7gfjAMSNJ+186jciTOnrupAx9FBFWW4jGvdbx1tMcPW8cJ9zpqv0HxPi2VqgTKuMu/XHO9t5SuMnEwBmGJHroUUH7GEp2RG16NOMi1Ffnm+vrO2KvgHXZ2ctlD70icGtnFbOcQsTSOyDHrdBBT9/g6FOzfskonOPAITbFlS7+aXPGEEHZJipaj20WuFf4mGqnezhBRkzPqPxRzIOD4Edk9lbDtS4hGVep2ql7YLw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nOAoT+g4WqRsMZuM9rfXCyGWS86HflA8SSBIFOTmXko=;
 b=YllCTg+2gPN9+f6TJjZqcymR9R5e+hR1GCgZCqLX9bAisUdPYsa9GTSPSlUsbaCKepXqdbxU7ByXwwR20cCsBQHtexzp48DMJiv4OBNo19jZX0gtyFrEdwU2czuOUb6bbFiWQF6Os1qQPwOb5wQqD04HARBHttROMJd8PEiTeoo=
Subject: Re: [PATCH v2] xen: workaround missing device_type property in
 pci/pcie nodes
To: Stefano Stabellini <sstabellini@kernel.org>,
	<xen-devel@lists.xenproject.org>
CC: <julien@xen.org>, <Volodymyr_Babchuk@epam.com>, <ehem+xen@m5p.com>,
	Stefano Stabellini <stefano.stabellini@xilinx.com>
References: <20210209195334.21206-1-sstabellini@kernel.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <e2a807fd-a027-fa7d-315a-f64db8c56e1c@citrix.com>
Date: Tue, 9 Feb 2021 20:02:41 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
In-Reply-To: <20210209195334.21206-1-sstabellini@kernel.org>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: LNXP123CA0021.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:d2::33) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 63be1c4d-8db0-4f42-b88d-08d8cd35ae44
X-MS-TrafficTypeDiagnostic: BY5PR03MB5030:
X-Microsoft-Antispam-PRVS: <BY5PR03MB503044254DB56984F24D4BE3BA8E9@BY5PR03MB5030.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:2399;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 63be1c4d-8db0-4f42-b88d-08d8cd35ae44
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Feb 2021 20:02:51.6993
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: B3EV7ocTEXkny2lmePYHdaxmnwUowdITc5gB1g2ptG77agkEo4xK/p6GXVg2acYEFUgZeGsb47XmI/jeR++H2dEzy+kYstz+GDWAdldbovI=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR03MB5030
X-OriginatorOrg: citrix.com

On 09/02/2021 19:53, Stefano Stabellini wrote:
> diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
> index 18825e333e..f1a96a3b90 100644
> --- a/xen/common/device_tree.c
> +++ b/xen/common/device_tree.c
> @@ -563,14 +563,28 @@ static unsigned int dt_bus_default_get_flags(const __be32 *addr)
>   * PCI bus specific translator
>   */
>  
> +static bool_t dt_node_is_pci(const struct dt_device_node *np)
> +{
> +    bool is_pci = !strcmp(np->name, "pcie") || !strcmp(np->name, "pci");
> +
> +    if (is_pci)

bool on the function, and spaces here, which I'm sure you can fix while
committing :)

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 20:20:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 20:20:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83409.155158 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9ZUP-0004ck-HJ; Tue, 09 Feb 2021 20:20:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83409.155158; Tue, 09 Feb 2021 20:20:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9ZUP-0004cd-DB; Tue, 09 Feb 2021 20:20:01 +0000
Received: by outflank-mailman (input) for mailman id 83409;
 Tue, 09 Feb 2021 20:20:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=p39W=HL=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1l9ZUO-0004cY-4V
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 20:20:00 +0000
Received: from mail-wr1-x42b.google.com (unknown [2a00:1450:4864:20::42b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0fa19d0d-d469-4b36-907c-b0f15b4a9e1a;
 Tue, 09 Feb 2021 20:19:59 +0000 (UTC)
Received: by mail-wr1-x42b.google.com with SMTP id v14so8038376wro.7
 for <xen-devel@lists.xenproject.org>; Tue, 09 Feb 2021 12:19:59 -0800 (PST)
Received: from CBGR90WXYV0 (host86-180-176-157.range86-180.btcentralplus.com.
 [86.180.176.157])
 by smtp.gmail.com with ESMTPSA id 67sm6482784wmz.46.2021.02.09.12.19.57
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 09 Feb 2021 12:19:57 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0fa19d0d-d469-4b36-907c-b0f15b4a9e1a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:thread-index
         :content-language;
        bh=boj373LsOZqDTZQxKCZOAT4rKi/jpq+W1thHmB9t9BE=;
        b=ox8keUe/nKVd6aTwAnAgLTAsIjPwKRSUALrUo2LnA12YifZuRL3b6yVpBmlInYlndw
         U1JeomgrODpGUmGl4euVRP53hdJAIPchCGIDldHxyQtdP9cLRa0xkh1fBJpDLofgk6Eo
         HoDN0Zvs7Y77vXIMzw0QMeBdTzGo9hSKUijmhZ290RV4Dpp2DnXkM9tKQWGYAla3i+uE
         JHXybAatmvzECilv2gNGo1z4FxgV2XZ985ieyn6ilT/QEGzgAXg1qNWrHn+WmTEMzWLO
         iahb52k5kqNMtghBgjlo3F9esKpQPropamS4nw16Z1mj074X3S9EA3pXtJvfCWBCZpgq
         XKVw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :thread-index:content-language;
        bh=boj373LsOZqDTZQxKCZOAT4rKi/jpq+W1thHmB9t9BE=;
        b=RUiP8ztQF9sugrrmjIpisfYNc/gavq6PZtbvej5zBC22OzXSJqZDQZfq64iTkoIHBm
         g55L7DuteLJzkwsPP9QZNjHRM5psOZORS6LNMyeNWxPQ4buqkylL+jmuUbfmVR1X3neU
         abru30Zz3IgzkV8+BH8NRzzahKH/oTBQEA3Pc/sifExMbp1NlwgrsIkYNx2YTTVM+c1N
         yGbLvha42LKHFosD98q/0pIxfr8zVW4oSYL2ZmaSLBqDTh3mjrKRyPhgmJds70lB6zLL
         ssr46MC86wssk+kLrkQcOdEHKieiCLQ11eXniYa0qkcJkqbNNYEzknN7qSLktIgCCI93
         bCFw==
X-Gm-Message-State: AOAM5336OAk1R9kAvvx6fRf+x5q3jkBzVQocI1ZP/VkcQo3jvbkUV7/u
	6xoKOPzeG4Wdibc3zvww4+w=
X-Google-Smtp-Source: ABdhPJzxE9/711iBjJ6yVZgbY2CwJPa39YR4FYGKvguDFIZHH3dGIHKFbwYIOsJqJmUGfvuLcLPwfw==
X-Received: by 2002:adf:e4c9:: with SMTP id v9mr27128156wrm.277.1612901998446;
        Tue, 09 Feb 2021 12:19:58 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Julien Grall'" <julien@xen.org>,
	<xen-devel@lists.xenproject.org>
Cc: <hongyxia@amazon.co.uk>,
	<iwj@xenproject.org>,
	"'Julien Grall'" <jgrall@amazon.com>,
	"'Jan Beulich'" <jbeulich@suse.com>,
	"'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	=?utf-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>,
	"'Wei Liu'" <wl@xen.org>
References: <20210209152816.15792-1-julien@xen.org> <20210209152816.15792-2-julien@xen.org>
In-Reply-To: <20210209152816.15792-2-julien@xen.org>
Subject: RE: [for-4.15][PATCH v2 1/5] xen/x86: p2m: Don't map the special pages in the IOMMU page-tables
Date: Tue, 9 Feb 2021 20:19:56 -0000
Message-ID: <04f301d6ff20$efa93b10$cefbb130$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQJkA3sfHUQVO5jg8t87X8qwyW0VowLFj79gqSAOLIA=
Content-Language: en-gb

> -----Original Message-----
> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of =
Julien Grall
> Sent: 09 February 2021 15:28
> To: xen-devel@lists.xenproject.org
> Cc: hongyxia@amazon.co.uk; iwj@xenproject.org; Julien Grall =
<jgrall@amazon.com>; Jan Beulich
> <jbeulich@suse.com>; Andrew Cooper <andrew.cooper3@citrix.com>; Roger =
Pau Monn=C3=A9
> <roger.pau@citrix.com>; Wei Liu <wl@xen.org>
> Subject: [for-4.15][PATCH v2 1/5] xen/x86: p2m: Don't map the special =
pages in the IOMMU page-tables
>=20
> From: Julien Grall <jgrall@amazon.com>
>=20
> Currently, the IOMMU page-tables will be populated early in the domain
> creation if the hardware is able to virtualize the local APIC. =
However,
> the IOMMU page tables will not be freed during early failure and will
> result in a leak.
>=20
> An assigned device should not need to DMA into the vLAPIC page, so we
> can avoid mapping the page in the IOMMU page-tables.
>=20
> This statement is also true for any special pages (the vLAPIC page is
> one of them). So take the opportunity to prevent the mapping for all
> of them.
>=20
> Note that:
>     - This matches the existing behavior with PV guests
>     - This doesn't change the behavior when the P2M is shared with the
>     IOMMU. IOW, the special pages will still be accessible by the
>     device.
>=20
> Suggested-by: Jan Beulich <jbeulich@suse.com>
> Signed-off-by: Julien Grall <jgrall@amazon.com>
>=20

Reviewed-by: Paul Durrant <paul@xen.org>



From xen-devel-bounces@lists.xenproject.org Tue Feb 09 20:21:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 20:21:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83410.155170 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9ZVu-0005S3-0U; Tue, 09 Feb 2021 20:21:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83410.155170; Tue, 09 Feb 2021 20:21:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9ZVt-0005Rv-Tf; Tue, 09 Feb 2021 20:21:33 +0000
Received: by outflank-mailman (input) for mailman id 83410;
 Tue, 09 Feb 2021 20:21:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SKhV=HL=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1l9ZVs-0005Rq-IF
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 20:21:32 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bd45748c-46c7-4b81-a4fb-069acfbdb8c9;
 Tue, 09 Feb 2021 20:21:31 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 2166C64E7E;
 Tue,  9 Feb 2021 20:21:30 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bd45748c-46c7-4b81-a4fb-069acfbdb8c9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1612902090;
	bh=a9TA/DE8lGcJsKiZKDbb6+lH/9UqAUlv/Zis0qEg+08=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=bIDoohiMlUoU44IvHBOSpKkH0TGMbfaouxcBZOWxzq1FCZvX+J6uzh4FpA5V1WzBt
	 VpCpodHkRPX2hXT8mb+OmfusWdS/dR+qHBFTEzs9f7GlFejOpAEDzVs+pqSN8dgr+f
	 zPG67Jp1S00sJ8wDqhZpiS+0vl2V8jgBFFxHpmaK54GtF5o3X+8Vl2EhodDos+coMe
	 MWyvERVR6Ggwj7l835YxabmZcLODbp1cZidTAYHl4kGwIh0akmr8cTHM/VrD6r5P1F
	 ClJYmR70WvQrLDL1VjOoKTyL5QpaUkmctXOmWUZus5c5Ntwr85eOWdvVsk47VDH1hi
	 cYIvw+hIG3vvA==
Date: Tue, 9 Feb 2021 12:21:29 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Andrew Cooper <andrew.cooper3@citrix.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, julien@xen.org, Volodymyr_Babchuk@epam.com, 
    ehem+xen@m5p.com, Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: Re: [PATCH v2] xen: workaround missing device_type property in
 pci/pcie nodes
In-Reply-To: <e2a807fd-a027-fa7d-315a-f64db8c56e1c@citrix.com>
Message-ID: <alpine.DEB.2.21.2102091221090.8948@sstabellini-ThinkPad-T480s>
References: <20210209195334.21206-1-sstabellini@kernel.org> <e2a807fd-a027-fa7d-315a-f64db8c56e1c@citrix.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 9 Feb 2021, Andrew Cooper wrote:
> On 09/02/2021 19:53, Stefano Stabellini wrote:
> > diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
> > index 18825e333e..f1a96a3b90 100644
> > --- a/xen/common/device_tree.c
> > +++ b/xen/common/device_tree.c
> > @@ -563,14 +563,28 @@ static unsigned int dt_bus_default_get_flags(const __be32 *addr)
> >   * PCI bus specific translator
> >   */
> >  
> > +static bool_t dt_node_is_pci(const struct dt_device_node *np)
> > +{
> > +    bool is_pci = !strcmp(np->name, "pcie") || !strcmp(np->name, "pci");
> > +
> > +    if (is_pci)
> 
> bool on the function, and spaces here, which I'm sure you can fix while
> committing :)

Oops, thanks Andrew
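For the record, applying Andrew's two comments (a plain `bool` return type instead of `bool_t`, and Xen-style spaces inside the condition's parentheses) to the hunk above would give roughly the following. This is only a sketch: `struct dt_device_node` is reduced to the one field the helper needs, and the body after the check is omitted because the quoted diff is truncated there.

```c
#include <stdbool.h>
#include <string.h>

/* Hypothetical stand-in for Xen's struct dt_device_node; only the
 * name field is needed for this sketch. */
struct dt_device_node {
    const char *name;
};

/* As in the patch above, but with bool instead of bool_t; a caller
 * testing the result would write "if ( is_pci )" per Xen style. */
static bool dt_node_is_pci(const struct dt_device_node *np)
{
    bool is_pci = !strcmp(np->name, "pcie") || !strcmp(np->name, "pci");

    return is_pci;
}
```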


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 20:22:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 20:22:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83411.155181 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9ZWS-0005Y0-99; Tue, 09 Feb 2021 20:22:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83411.155181; Tue, 09 Feb 2021 20:22:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9ZWS-0005Xt-5q; Tue, 09 Feb 2021 20:22:08 +0000
Received: by outflank-mailman (input) for mailman id 83411;
 Tue, 09 Feb 2021 20:22:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=p39W=HL=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1l9ZWQ-0005Xk-IP
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 20:22:06 +0000
Received: from mail-wr1-x42a.google.com (unknown [2a00:1450:4864:20::42a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e5450e4f-a17a-4135-8ca7-bec551057602;
 Tue, 09 Feb 2021 20:22:05 +0000 (UTC)
Received: by mail-wr1-x42a.google.com with SMTP id u14so23669462wri.3
 for <xen-devel@lists.xenproject.org>; Tue, 09 Feb 2021 12:22:05 -0800 (PST)
Received: from CBGR90WXYV0 (host86-180-176-157.range86-180.btcentralplus.com.
 [86.180.176.157])
 by smtp.gmail.com with ESMTPSA id y11sm36651129wrh.16.2021.02.09.12.22.04
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 09 Feb 2021 12:22:04 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e5450e4f-a17a-4135-8ca7-bec551057602
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:thread-index
         :content-language;
        bh=0DM4/d+upFhn6wMrNIoHq66Rpjqu7XLqTPr3rwHqnX8=;
        b=CTUnUF6XELrMelT02nwplknI30gE6VfySBgNx2Cvu0MWJdcWKYC6RgVFPkI7oJVJZ9
         odDmCd7MJoZIf44hpEfpcETJ+LjXArS/fNSfzgnx+bzDwxIq45KZimAB/Z/HGM3CpPG4
         MbuHYGbB7tsCXVtbDNZpEcmhlfgvVP2hB/jsMFEBbnVGAq/uzFR08uNZKLYKC4kwhK49
         ZKO0wOWWcBkkYWIe6n9haeqaMS4dKts8zWRwMmxe86dvQpYRDV57FETL5PwnPaC3TA2S
         WvshIK7GxJLHyFOamqFcW4UpRtm/CuZZUFDJ+vWyJtYLnOrzmBEE04RbkCOYqu91qWfM
         CZ+w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :thread-index:content-language;
        bh=0DM4/d+upFhn6wMrNIoHq66Rpjqu7XLqTPr3rwHqnX8=;
        b=bEcbDfcElpUutBNM1PWQUirfSqZbhb3wrPItHmlP0MdxkwnEteCBO4VuGz1kbnuJzp
         74OiEAV+7DljBQdTuA7MR0jXkzl/rep/ZeMy8ZfwkGcpW0IQ9ik9oPd0IjzppKFoHoQ6
         AWbzaFharlRDhM3Az4tzjSOZEVKR48VVkynzWoHYdkGF3zylSi4mGqZffEEjns3zBBDe
         iQy9qIYDTSBU8EE/vb9w1BQFa62zegVfICmApBUF4WCT//jo2s/M6MFqXjzoeiXGgcQ/
         h2ShOLux1EB3q61M91fMww1VEv/NJqViSTbEWs8VNUDVoF6u10emFeOy3luWE5nZ0bm5
         W5SQ==
X-Gm-Message-State: AOAM533Df1l6RI6SgL3OjSbVl+hO5V1Ez8nUnZiAW/GJdGMJ8YGbmKxh
	bPkv3yLTnsEj35pM80MHvXn8ZV/CXrw=
X-Google-Smtp-Source: ABdhPJxRScY63HGVSkdDj2jTf1QcEe8VguSUdVtnfCwkNOJdLWX7SzwY19auaYumAr5HcnFLdS7/Gw==
X-Received: by 2002:adf:d0d2:: with SMTP id z18mr28457666wrh.70.1612902125107;
        Tue, 09 Feb 2021 12:22:05 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Julien Grall'" <julien@xen.org>,
	<xen-devel@lists.xenproject.org>
Cc: <hongyxia@amazon.co.uk>,
	<iwj@xenproject.org>,
	"'Julien Grall'" <jgrall@amazon.com>,
	"'Jan Beulich'" <jbeulich@suse.com>
References: <20210209152816.15792-1-julien@xen.org> <20210209152816.15792-3-julien@xen.org>
In-Reply-To: <20210209152816.15792-3-julien@xen.org>
Subject: RE: [for-4.15][PATCH v2 2/5] xen/iommu: Check if the IOMMU was initialized before tearing down
Date: Tue, 9 Feb 2021 20:22:03 -0000
Message-ID: <04f401d6ff21$3b167720$b1436560$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQJkA3sfHUQVO5jg8t87X8qwyW0VowJJWhUSqSPwG1A=
Content-Language: en-gb

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: 09 February 2021 15:28
> To: xen-devel@lists.xenproject.org
> Cc: hongyxia@amazon.co.uk; iwj@xenproject.org; Julien Grall <jgrall@amazon.com>; Jan Beulich
> <jbeulich@suse.com>; Paul Durrant <paul@xen.org>
> Subject: [for-4.15][PATCH v2 2/5] xen/iommu: Check if the IOMMU was initialized before tearing down
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> is_iommu_enabled() will return true even if the IOMMU has not been
> initialized (e.g. the ops are not set).
> 
> In the case of an early failure in arch_domain_init(), the function
> iommu_destroy_domain() will be called even if the IOMMU is not
> initialized.
> 
> This will result in dereferencing the ops, which will be NULL, and a
> host crash.
> 
> Fix the issue by checking that ops has been set before accessing it.
> 
> Fixes: 71e617a6b8f6 ("use is_iommu_enabled() where appropriate...")
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Paul Durrant <paul@xen.org>
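The failure mode this patch closes can be illustrated with a self-contained sketch (all names here are hypothetical miniatures, not Xen's real definitions): an early failure in `arch_domain_init()` leaves the ops pointer NULL, so the teardown path must check it before dereferencing.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical miniatures of the structures involved; these are NOT
 * Xen's real definitions, just enough to show the failure mode. */
struct iommu_ops {
    void (*teardown)(void *ctx);
};

struct domain_iommu {
    bool enabled;                /* what is_iommu_enabled() reports */
    const struct iommu_ops *ops; /* left NULL on early init failure */
    void *ctx;
};

/* Mirrors the fix: return early if ops was never set, instead of
 * dereferencing NULL on the early-failure path. */
static void iommu_domain_teardown(struct domain_iommu *hd)
{
    if ( !hd->enabled || hd->ops == NULL )
        return;

    hd->ops->teardown(hd->ctx);
}

/* Demo callback so a caller can observe whether teardown ran. */
static int teardown_calls;
static void count_teardown(void *ctx) { (void)ctx; teardown_calls++; }
static const struct iommu_ops demo_ops = { count_teardown };
```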



From xen-devel-bounces@lists.xenproject.org Tue Feb 09 20:28:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 20:28:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83416.155197 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Zcd-0005l1-1A; Tue, 09 Feb 2021 20:28:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83416.155197; Tue, 09 Feb 2021 20:28:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Zcc-0005ku-UQ; Tue, 09 Feb 2021 20:28:30 +0000
Received: by outflank-mailman (input) for mailman id 83416;
 Tue, 09 Feb 2021 20:28:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=p39W=HL=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1l9Zcb-0005kp-K7
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 20:28:29 +0000
Received: from mail-wm1-x32e.google.com (unknown [2a00:1450:4864:20::32e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2233fe9b-025f-42cc-9aeb-d368ef3c7ce7;
 Tue, 09 Feb 2021 20:28:28 +0000 (UTC)
Received: by mail-wm1-x32e.google.com with SMTP id 190so4466398wmz.0
 for <xen-devel@lists.xenproject.org>; Tue, 09 Feb 2021 12:28:28 -0800 (PST)
Received: from CBGR90WXYV0 (host86-180-176-157.range86-180.btcentralplus.com.
 [86.180.176.157])
 by smtp.gmail.com with ESMTPSA id o18sm6116550wmp.19.2021.02.09.12.28.26
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 09 Feb 2021 12:28:27 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2233fe9b-025f-42cc-9aeb-d368ef3c7ce7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:thread-index
         :content-language;
        bh=46I3FtWtKWr70b2Tg3d0U944LEcVXMR7JWoWN107KGQ=;
        b=K6DAsgh/y9k1u823HKaI69dj520lamXxZ0Xj15uZDkndO20UiCzWh4U6ktQ7nv+2wo
         0UbvCK3tj54KMSQ3d06EtS4tQQsxk0zw4XGVm7ud5JXT+XVuCi9qROXc971Ox5JGwEB+
         O7SDwXVP4JZiDw598p1CmHxu+TXqD+9zzpZGV6SeInZPy5VDhxJh4YZaeHAdUB5mv7V6
         Imo2PeGHAtYHjfjkNw7nbnblX0/bD8M2w7VL2qrdNUEPhvxHEurOIsJP/xtsrxIAIq2z
         CtY5h8h0Q//zlWk0YoQVKr7qr41/yWmtiT0wh919sADEAp7y5lhb3mvO4Kgbgi8mYRlM
         tI+A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :thread-index:content-language;
        bh=46I3FtWtKWr70b2Tg3d0U944LEcVXMR7JWoWN107KGQ=;
        b=dw1r775CkiHeBlPFCEeQeuYQVKXYqDGW+HWfuRGq3b1Duba/606ubdizd8dNQ8ryod
         RFsvsnG08+wSHJoYADaWfZnWHXXNmJ3Bsbj/1UUF4WRnhkRjwlLG6EKt9pQN2CY3SfDt
         KUTzy+ImHurP7TpvrLuf49YRJPkOlTP9D5mcNR/vAO3jF9ccQKPimkkNVVLOxXpksO1g
         /DLowfWrxoszsLDAHyT2AkhAhIwhf4zfUILo0uCyEPBJenKkVQI+/ir8ltriuJEdjqNK
         Z+8dauTNQnrw4bexLV1jkXssVvGqhYVxy5iHAJDGSoJXw/sdx4Ai4SnJqvwiiJbB1HfX
         u45g==
X-Gm-Message-State: AOAM530zzKEFdIs305Hhk3X33rYVdFR8Txf44DVvp9Wmel1DF01PO4Cr
	Acmwa4v9DVjc+pwOdf1FIik=
X-Google-Smtp-Source: ABdhPJxXVF4yN4vgbbIychghmloXYhdMZr08LXGGKN1/kjTxDa9cpsKaitMvRjZMZBJRw2crTB+CDA==
X-Received: by 2002:a1c:3185:: with SMTP id x127mr4971652wmx.117.1612902507926;
        Tue, 09 Feb 2021 12:28:27 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Julien Grall'" <julien@xen.org>,
	<xen-devel@lists.xenproject.org>
Cc: <hongyxia@amazon.co.uk>,
	<iwj@xenproject.org>,
	"'Julien Grall'" <jgrall@amazon.com>,
	"'Jan Beulich'" <jbeulich@suse.com>
References: <20210209152816.15792-1-julien@xen.org> <20210209152816.15792-4-julien@xen.org>
In-Reply-To: <20210209152816.15792-4-julien@xen.org>
Subject: RE: [for-4.15][PATCH v2 3/5] xen/iommu: iommu_map: Don't crash the domain if it is dying
Date: Tue, 9 Feb 2021 20:28:26 -0000
Message-ID: <04f601d6ff22$1f52cf60$5df86e20$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQJkA3sfHUQVO5jg8t87X8qwyW0VowHF7mi9qSgLgIA=
Content-Language: en-gb

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: 09 February 2021 15:28
> To: xen-devel@lists.xenproject.org
> Cc: hongyxia@amazon.co.uk; iwj@xenproject.org; Julien Grall <jgrall@amazon.com>; Jan Beulich
> <jbeulich@suse.com>; Paul Durrant <paul@xen.org>
> Subject: [for-4.15][PATCH v2 3/5] xen/iommu: iommu_map: Don't crash the domain if it is dying
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> It is a bit pointless to crash a domain that is already dying. This will
> become more of an annoyance with a follow-up change where page-table
> allocation will be forbidden when the domain is dying.
> 
> Security-wise, there is no change, as the devices would still have access
> to the IOMMU page-tables even after the domain has crashed, until Xen
> starts to relinquish the resources.
> 
> For x86, we rely on dom_iommu(d)->arch.mapping.lock to ensure
> d->is_dying is correctly observed (a follow-up patch will held it in the

s/held/hold

> relinquish path).
> 
> For Arm, there is still a small race possible. But there is so far no
> failure specific to a domain dying.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> ---
> 
> This was spotted when trying to destroy IOREQ servers while the domain
> is dying. The code will try to add the entry back in the P2M and
> therefore update the P2M (see arch_ioreq_server_disable() ->
> hvm_add_ioreq_gfn()).
> 
> It should be possible to skip the mappin in hvm_add_ioreq_gfn(), however

s/mappin/mapping

> I didn't try a patch yet because checking d->is_dying can be racy (I
> can't find a proper lock).
> 
> Changes in v2:
>     - Patch added
> ---
>  xen/drivers/passthrough/iommu.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
> index 879d238bcd31..75419f20f76d 100644
> --- a/xen/drivers/passthrough/iommu.c
> +++ b/xen/drivers/passthrough/iommu.c
> @@ -272,7 +272,7 @@ int iommu_map(struct domain *d, dfn_t dfn, mfn_t mfn,
>                              flush_flags) )
>                  continue;
> 
> -        if ( !is_hardware_domain(d) )
> +        if ( !is_hardware_domain(d) && !d->is_dying )
>              domain_crash(d);

Would it make more sense to check is_dying inside domain_crash() (and turn it into a no-op in that case)?

  Paul

> 
>          break;
> --
> 2.17.1
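Paul's suggestion above would move the check into domain_crash() itself, turning it into a no-op for a dying domain so that callers need no extra condition. A minimal sketch of that shape (none of these are Xen's real definitions; the fields are hypothetical):

```c
#include <stdbool.h>

/* Hypothetical domain state, just enough for the sketch. */
struct domain {
    int domain_id;
    bool is_dying;
    bool crashed;
};

/* Sketch of the suggestion: crashing a domain that is already
 * dying adds nothing, so make domain_crash() return early. */
static void domain_crash(struct domain *d)
{
    if ( d->is_dying )
        return;

    d->crashed = true;
}
```

With this shape, the hunk in the quoted patch could keep the simpler `!is_hardware_domain(d)` condition at every call site.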




From xen-devel-bounces@lists.xenproject.org Tue Feb 09 20:33:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 20:33:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83419.155211 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9ZhC-0006gd-Kq; Tue, 09 Feb 2021 20:33:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83419.155211; Tue, 09 Feb 2021 20:33:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9ZhC-0006gW-Hv; Tue, 09 Feb 2021 20:33:14 +0000
Received: by outflank-mailman (input) for mailman id 83419;
 Tue, 09 Feb 2021 20:33:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=p39W=HL=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1l9ZhB-0006gR-Da
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 20:33:13 +0000
Received: from mail-wr1-x42a.google.com (unknown [2a00:1450:4864:20::42a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4a72d443-996c-426a-bc59-b7eaf68ecbbe;
 Tue, 09 Feb 2021 20:33:12 +0000 (UTC)
Received: by mail-wr1-x42a.google.com with SMTP id z6so23595421wrq.10
 for <xen-devel@lists.xenproject.org>; Tue, 09 Feb 2021 12:33:12 -0800 (PST)
Received: from CBGR90WXYV0 (host86-180-176-157.range86-180.btcentralplus.com.
 [86.180.176.157])
 by smtp.gmail.com with ESMTPSA id k6sm41012436wro.27.2021.02.09.12.33.11
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 09 Feb 2021 12:33:11 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4a72d443-996c-426a-bc59-b7eaf68ecbbe
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:thread-index
         :content-language;
        bh=3iyx9VVlDE6qQNYd7wWdV49vT9TLEOl7/nUPjTiL/N4=;
        b=mRJmOY4beMvTvvFz/TOXeNBJf9/AQ/t6RHOz8s5VjWEsg/0T4DiFAVic0JEYMMflXz
         mbDKAVTJ6xMA1rddrVPI4zgshuBYv+lZY9JiuaRWDoSS25gzhxTM+k/XjFTPEiTZvpKZ
         08/6AuydsuZpL54aW+E1q/FCSA2oZuaLlEJByf2Zg3JlGfNxiWq83c1Pbf4GkLOm/Ip+
         rI1L5czgKGwqgS1q0FCHLkpkbE9ZqRR6BxDN1D9FodxRzhOG9lE98ZsXsgPR7ZK+gQbl
         GbYAdpkGeaTKRnUa6pHK4U4t9kBjksewhDpMiIjA6xr4lEESLvDKJ+oiF7Jgq2e66d6I
         GLHA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :thread-index:content-language;
        bh=3iyx9VVlDE6qQNYd7wWdV49vT9TLEOl7/nUPjTiL/N4=;
        b=mRQ4cWMHUr1Ot2Gtx0Cl9JEvIwGJDFmAQyWfj5ZnQjKZ4BBNs9waKOWCroVzlYykuq
         XhaIDT18vDMWCwMyuqSvVgYZJT6ifKWUz533Z7YQvh96+y0byAB2G6zVMxc+a1texCYM
         8E0mxv/xPg4pYHw8W1EmvdHVczj8lGdYoiB6x5m35B1psKp7DUhbXpL053SzWlKHEiAu
         KKXCiZ70ukQCUxNbBGGsse7gf5x+sVXD043FE1MS8pppsn2tG7q46TRUDCYOas0UNV2H
         cEpk8tNYtE8BNhSqgvI+2RTncdm6GSl1s6siGv6o6L7dVnmV41pGbdCqrMlIOqcO/Do2
         CktA==
X-Gm-Message-State: AOAM532ZFJM+4Sgpu6s45NATT7fuCKMwYPUYkV8JZQdKcbSZTbo/ZE35
	UnqUpTXzP3srOHMzu7vN6/g=
X-Google-Smtp-Source: ABdhPJzTiSpA46gDk0XLNGFrO6od8nzFaQzNXeSPtgmJPcbmQ+QTVE1IFRNwfTBSCfWpsiKkzFaJcQ==
X-Received: by 2002:a05:6000:1379:: with SMTP id q25mr8414113wrz.89.1612902791971;
        Tue, 09 Feb 2021 12:33:11 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Julien Grall'" <julien@xen.org>,
	<xen-devel@lists.xenproject.org>
Cc: <hongyxia@amazon.co.uk>,
	<iwj@xenproject.org>,
	"'Julien Grall'" <jgrall@amazon.com>,
	"'Jan Beulich'" <jbeulich@suse.com>
References: <20210209152816.15792-1-julien@xen.org> <20210209152816.15792-5-julien@xen.org>
In-Reply-To: <20210209152816.15792-5-julien@xen.org>
Subject: RE: [for-4.15][PATCH v2 4/5] xen/iommu: x86: Don't leak the IOMMU page-tables
Date: Tue, 9 Feb 2021 20:33:10 -0000
Message-ID: <04f801d6ff22$c8acd830$5a068890$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQJkA3sfHUQVO5jg8t87X8qwyW0VowJUSDRyqSOYttA=
Content-Language: en-gb

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: 09 February 2021 15:28
> To: xen-devel@lists.xenproject.org
> Cc: hongyxia@amazon.co.uk; iwj@xenproject.org; Julien Grall <jgrall@amazon.com>; Jan Beulich
> <jbeulich@suse.com>; Paul Durrant <paul@xen.org>
> Subject: [for-4.15][PATCH v2 4/5] xen/iommu: x86: Don't leak the IOMMU page-tables
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> The new IOMMU page-table allocator will release the pages when
> relinquishing the domain resources. However, this is not sufficient when
> the domain is dying, because nothing prevents page-tables from being
> allocated.
> 
> iommu_alloc_pgtable() is now checking if the domain is dying before
> adding the page in the list. We are relying on &hd->arch.pgtables.lock
> to synchronize d->is_dying.
> 
> Take the opportunity to check in arch_iommu_domain_destroy() that all
> the page tables have been freed.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 

Reviewed-by: Paul Durrant <paul@xen.org>
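The synchronization the commit message describes (checking d->is_dying under hd->arch.pgtables.lock before queuing a newly allocated page) can be sketched standalone like this. Everything here is hypothetical, not Xen's real types or locking primitives; the point is only that the is_dying check and the list insertion happen atomically under the same lock.

```c
#include <pthread.h>
#include <stdbool.h>

/* Hypothetical stand-in for hd->arch.pgtables. */
struct pgtables {
    pthread_mutex_t lock;
    int nr_pages;   /* stand-in for the page list */
};

/* Refuses to record a new page once the domain is dying; the check
 * is made under the lock so the relinquish path, holding the same
 * lock, cannot race with a late allocation. */
static bool pgtable_alloc(struct pgtables *pt, const bool *is_dying)
{
    bool ok = false;

    pthread_mutex_lock(&pt->lock);
    if ( !*is_dying )
    {
        pt->nr_pages++;   /* page would be added to the list here */
        ok = true;
    }
    pthread_mutex_unlock(&pt->lock);

    return ok;
}
```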



From xen-devel-bounces@lists.xenproject.org Tue Feb 09 20:36:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 20:36:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83421.155223 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9ZkB-0006qW-4T; Tue, 09 Feb 2021 20:36:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83421.155223; Tue, 09 Feb 2021 20:36:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9ZkB-0006qP-1U; Tue, 09 Feb 2021 20:36:19 +0000
Received: by outflank-mailman (input) for mailman id 83421;
 Tue, 09 Feb 2021 20:36:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SKhV=HL=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1l9Zk9-0006qK-OX
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 20:36:17 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 729f3185-1068-4c49-aa7a-5bf987304a05;
 Tue, 09 Feb 2021 20:36:16 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 898A064ECE;
 Tue,  9 Feb 2021 20:36:15 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 729f3185-1068-4c49-aa7a-5bf987304a05
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1612902975;
	bh=le/nb7UckXwi6Ch3bFdrq8MqOkMxws5Y7mq7VVEuzBA=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=AqaVDcs3qMorSqlP7GbBXcC0ejGxLj3mTr6u1Za0vtZth7m42Dj26dZjX+IIbikFu
	 o4XJo6dNLuaopVszAKECHbEDUs72Mj6/jlmt38wHIMMEgswFt5hIcCmVSO3BuxhtxT
	 uqFP5fXJFazUWQ0ytwxmSU32S13kY1/Djn5ofItnDZ6fCV5ZKaejsuQdp2kJCgZkvE
	 LQ0cmIDdSTah6znHS6sT4+f5r2vg0dBoY0LZ0l9mv/SBl4wUDYJuZlFJgL/P2c/xRJ
	 VIYhNmuLnli3bd9ZKhyd4RaFAtBe3RYufjNjcL5U0xJlh7v/cW6WmsMgrUOJlBBJJh
	 Guk37MGCU0P6Q==
Date: Tue, 9 Feb 2021 12:36:14 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Rahul Singh <Rahul.Singh@arm.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    "lucmiccio@gmail.com" <lucmiccio@gmail.com>, 
    xen-devel <xen-devel@lists.xenproject.org>, 
    Bertrand Marquis <Bertrand.Marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: Re: [PATCH v2] xen/arm: fix gnttab_need_iommu_mapping
In-Reply-To: <B96B5E21-0600-4664-899D-D38A18DE7A8C@arm.com>
Message-ID: <alpine.DEB.2.21.2102091226560.8948@sstabellini-ThinkPad-T480s>
References: <20210208184932.23468-1-sstabellini@kernel.org> <B96B5E21-0600-4664-899D-D38A18DE7A8C@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-1326401187-1612902975=:8948"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1326401187-1612902975=:8948
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Tue, 9 Feb 2021, Rahul Singh wrote:
> > On 8 Feb 2021, at 6:49 pm, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > 
> > Commit 91d4eca7add broke gnttab_need_iommu_mapping on ARM.
> > The offending chunk is:
> > 
> > #define gnttab_need_iommu_mapping(d)                    \
> > -    (is_domain_direct_mapped(d) && need_iommu(d))
> > +    (is_domain_direct_mapped(d) && need_iommu_pt_sync(d))
> > 
> > On ARM we need gnttab_need_iommu_mapping to be true for dom0 when it is
> > directly mapped and IOMMU is enabled for the domain, like the old check
> > did, but the new check is always false.
> > 
> > In fact, need_iommu_pt_sync is defined as dom_iommu(d)->need_sync and
> > need_sync is set as:
> > 
> >    if ( !is_hardware_domain(d) || iommu_hwdom_strict )
> >        hd->need_sync = !iommu_use_hap_pt(d);
> > 
> > iommu_use_hap_pt(d) means that the page-table used by the IOMMU is the
> > P2M. It is true on ARM. need_sync means that you have a separate IOMMU
> > page-table and it needs to be updated for every change. need_sync is set
> > to false on ARM. Hence, gnttab_need_iommu_mapping(d) is false too,
> > which is wrong.
> > 
> > As a consequence, when using PV network from a domU on a system where
> > IOMMU is on from Dom0, I get:
> > 
> > (XEN) smmu: /smmu@fd800000: Unhandled context fault: fsr=0x402, iova=0x8424cb148, fsynr=0xb0001, cb=0
> > [   68.290307] macb ff0e0000.ethernet eth0: DMA bus error: HRESP not OK
> > 
> > The fix is to go back to something along the lines of the old
> > implementation of gnttab_need_iommu_mapping.
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> > Fixes: 91d4eca7add
> > Backport: 4.12+
> > 
> > ---
> > 
> > Given the severity of the bug, I would like to request this patch to be
> > backported to 4.12 too, even if 4.12 is security-fixes only since Oct
> > 2020.
> > 
> > For the 4.12 backport, we can use iommu_enabled() instead of
> > is_iommu_enabled() in the implementation of gnttab_need_iommu_mapping.
> > 
> > Changes in v2:
> > - improve commit message
> > - add is_iommu_enabled(d) to the check
> > ---
> > xen/include/asm-arm/grant_table.h | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/xen/include/asm-arm/grant_table.h b/xen/include/asm-arm/grant_table.h
> > index 6f585b1538..0ce77f9a1c 100644
> > --- a/xen/include/asm-arm/grant_table.h
> > +++ b/xen/include/asm-arm/grant_table.h
> > @@ -89,7 +89,7 @@ int replace_grant_host_mapping(unsigned long gpaddr, mfn_t mfn,
> >     (((i) >= nr_status_frames(t)) ? INVALID_GFN : (t)->arch.status_gfn[i])
> > 
> > #define gnttab_need_iommu_mapping(d)                    \
> > -    (is_domain_direct_mapped(d) && need_iommu_pt_sync(d))
> > +    (is_domain_direct_mapped(d) && is_iommu_enabled(d))
> > 
> > #endif /* __ASM_GRANT_TABLE_H__ */
> 
> I tested the patch and while creating the guest I observed the below warning from Linux for block device.
> https://elixir.bootlin.com/linux/v4.3/source/drivers/block/xen-blkback/xenbus.c#L258

So you are creating a guest with "xl create" in dom0 and you see the
warnings below printed by the Dom0 kernel? I imagine the domU has a
virtual "disk" of some sort.


> I did some initial debugging and found that there are many calls to iommu_legacy_{,un}map() while creating the guest, but when iommu_legacy_unmap() unmaps the pages, something is written incorrectly in the page tables; because of that, the next time the same page is mapped via create_grant_host_mapping() we observe the warning below.

By "while creating a guest", do you mean before the domU is even
unpaused? Hence, the calls are a result of dom0 operations? Or after
domU is unpaused, hence, the calls are a result of domU operations
(probably the domU simply trying to access its virtual disk)?
Please note that you can start a guest paused with xl create -p.

Looking at the logs, it is probably the latter. The following line
should be printed when the domU PV block frontend connects to the
backend in dom0:

[  138.639934] xen-blkback: backend/vbd/0/51712: using 4 queues, protocol 1 (arm-abi) persistent grants

I'll see if I can repro the issue here.

 
> [  138.639934] xen-blkback: backend/vbd/0/51712: using 4 queues, protocol 1 (arm-abi) persistent grants
> (XEN) gnttab_mark_dirty not implemented yet
> [  138.659702] xen-blkback: backend/vbd/0/51712: using 4 queues, protocol 1 (arm-abi) persistent grants
> [  138.669827] vbd vbd-0-51712: 9 mapping in shared page 8 from domain 0
> [  138.676636] vbd vbd-0-51712: 9 mapping ring-ref port 5
> [  138.682089] ------------[ cut here ]------------
> [  138.686605] WARNING: CPU: 2 PID: 37 at drivers/block/xen-blkback/xenbus.c:296 xen_blkif_disconnect+0x20c/0x230
> [  138.696668] Modules linked in: bridge stp llc ipv6 nf_defrag_ipv6
> [  138.702833] CPU: 2 PID: 37 Comm: xenwatch Not tainted 5.4.0-yocto-standard #1
> [  138.710037] Hardware name: Arm Neoverse N1 System Development Platform (DT)
> [  138.717067] pstate: 80c00005 (Nzcv daif +PAN +UAO)
> [  138.721927] pc : xen_blkif_disconnect+0x20c/0x230
> [  138.726701] lr : xen_blkif_disconnect+0xbc/0x230
> [  138.731388] sp : ffff800011cb3c80
> [  138.734773] x29: ffff800011cb3c80 x28: ffff00005b6da940 
> [  138.740156] x27: 0000000000000000 x26: 0000000000000000 
> [  138.745536] x25: ffff000029755060 x24: 0000000000000170 
> [  138.750919] x23: ffff000029755040 x22: ffff000059c72000 
> [  138.756299] x21: 0000000000000000 x20: ffff000029755000 
> [  138.761681] x19: 0000000000000001 x18: 0000000000000000 
> [  138.767063] x17: 0000000000000000 x16: 0000000000000000 
> [  138.772444] x15: 0000000000000000 x14: 0000000000000000 
> [  138.777826] x13: 0000000000000000 x12: 0000000000000000 
> [  138.783207] x11: 0000000000000001 x10: 0000000000000990 
> [  138.788589] x9 : 0000000000000001 x8 : 0000000000210d00 
> [  138.793971] x7 : 0000000000000018 x6 : ffff00005ddf72a0 
> [  138.799352] x5 : ffff800011cb3c28 x4 : 0000000000000000 
> [  138.804734] x3 : ffff000029755118 x2 : 0000000000000000 
> [  138.810117] x1 : ffff000029755120 x0 : 0000000000000001 
> [  138.815497] Call trace:
> [  138.818015]  xen_blkif_disconnect+0x20c/0x230
> [  138.822442]  frontend_changed+0x1b0/0x54c
> [  138.826523]  xenbus_otherend_changed+0x80/0xb0
> [  138.831035]  frontend_changed+0x10/0x20
> [  138.834941]  xenwatch_thread+0x80/0x144
> [  138.838849]  kthread+0x118/0x120
> [  138.842147]  ret_from_fork+0x10/0x18
> [  138.845791] ---[ end trace fb9f0a3b3b48a55f ]---

--8323329-1326401187-1612902975=:8948--
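The predicate change debated in this thread can be illustrated standalone (the flags below are hypothetical stand-ins for Xen's predicates, not the real macros): the broken version required need_iommu_pt_sync(), which is false on Arm because the IOMMU shares the P2M page-tables, while the fix only requires the IOMMU to be enabled.

```c
#include <stdbool.h>

/* Hypothetical per-domain flags standing in for Xen's predicates. */
struct domain {
    bool direct_mapped;   /* is_domain_direct_mapped(d) */
    bool iommu_enabled;   /* is_iommu_enabled(d) */
    bool need_sync;       /* need_iommu_pt_sync(d); false on Arm when
                           * the IOMMU shares the P2M page-tables */
};

/* Broken variant introduced by commit 91d4eca7add. */
static bool gnttab_need_iommu_mapping_old(const struct domain *d)
{
    return d->direct_mapped && d->need_sync;
}

/* Fixed variant from the patch in this thread. */
static bool gnttab_need_iommu_mapping_new(const struct domain *d)
{
    return d->direct_mapped && d->iommu_enabled;
}
```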


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 20:36:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 20:36:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83422.155236 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Zkk-0006vf-DI; Tue, 09 Feb 2021 20:36:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83422.155236; Tue, 09 Feb 2021 20:36:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9Zkk-0006vX-AM; Tue, 09 Feb 2021 20:36:54 +0000
Received: by outflank-mailman (input) for mailman id 83422;
 Tue, 09 Feb 2021 20:36:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=p39W=HL=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1l9Zkj-0006vR-IE
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 20:36:53 +0000
Received: from mail-wr1-x429.google.com (unknown [2a00:1450:4864:20::429])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 21741177-5897-419b-bb2c-49b86dadd9e4;
 Tue, 09 Feb 2021 20:36:51 +0000 (UTC)
Received: by mail-wr1-x429.google.com with SMTP id l12so23713625wry.2
 for <xen-devel@lists.xenproject.org>; Tue, 09 Feb 2021 12:36:51 -0800 (PST)
Received: from CBGR90WXYV0 (host86-180-176-157.range86-180.btcentralplus.com.
 [86.180.176.157])
 by smtp.gmail.com with ESMTPSA id m2sm5955539wml.34.2021.02.09.12.36.49
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 09 Feb 2021 12:36:50 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 21741177-5897-419b-bb2c-49b86dadd9e4
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:thread-index
         :content-language;
        bh=NHBCiO4S/77uJJWibt68h7ZUEYfILKGtx3HqlnXBhAM=;
        b=XEJjXR6d28ogiagL9pPCuvGp5Tr4U3kWb1QY/ohyQAlCY93tssc3XFrarbFP+p44lH
         44KJVCEtTwFD2fnR1MBA5wez54BIaI37R/nh7UG20mHKPG+S2lTsEWCHaab90L3LQFTW
         noCP/d+ShAJzlGLfP4HfmxDPSmu2oHPDRUL1uJcaalPDnU632C363jz9fjMgOn+JYOMd
         kvKAVUoB2tlqsMmfS5C5iR7rVmyH/vVsI1s9hlymlQmg4C3oUU7CweVbC+G42/bphcfQ
         yODJjmC4GvE5JQSiDrwgBRA/MG3W92wXy+0tpiHkdSnKw1QW6atUtvALSdZQxKVM4lR4
         lFvA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :thread-index:content-language;
        bh=NHBCiO4S/77uJJWibt68h7ZUEYfILKGtx3HqlnXBhAM=;
        b=m1RfOQLHTd6sul2ipvdV+yeCSjtd21w5me/Y5GEfUzRN4eym6LFv9leugMmBA4VatQ
         Py20GVA/1EBD52yIN3lw/MaraP0/neZWEjoWM5j3HBrC8amHRIJvOo7+q4j6TOQHzPw0
         ZBK2NQPONmX5J425lECIvXa6lsKQUHCOUilmsy90Mi0UNAe5esyC6mifUi+B1Ko+3VSn
         N9dqe2NttM3SXi6h3urqIjYWJ50ljVoJl3VFgTz3vp+4kZBokt2TS3e1JfynAFuOsiei
         C6D+ONB7f2RfgaNRgOmd1A9mwmaDSSiThzE56RVEii8z1h5PgQt0he8SEOsr0AC/A4Cd
         5YvA==
X-Gm-Message-State: AOAM531UBxIQJlMiknieIAxl94PP2H4dNfyRKrqLVi35uFtn8sjH5z2x
	XHJbHAcW9hRa0N4vHn3zm48=
X-Google-Smtp-Source: ABdhPJx29HclS2DQMOe9rm+PqjkaeMOQ0V0ZRaS08FWSA0UOpMeJmooPXC4BdZNSQqAPo+Gf/Y4XNA==
X-Received: by 2002:adf:edcd:: with SMTP id v13mr1523759wro.336.1612903010935;
        Tue, 09 Feb 2021 12:36:50 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Julien Grall'" <julien@xen.org>,
	<xen-devel@lists.xenproject.org>
Cc: <hongyxia@amazon.co.uk>,
	<iwj@xenproject.org>,
	"'Julien Grall'" <jgrall@amazon.com>,
	"'Jan Beulich'" <jbeulich@suse.com>,
	"'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	"'Kevin Tian'" <kevin.tian@intel.com>
References: <20210209152816.15792-1-julien@xen.org> <20210209152816.15792-6-julien@xen.org>
In-Reply-To: <20210209152816.15792-6-julien@xen.org>
Subject: RE: [for-4.15][PATCH v2 5/5] xen/iommu: x86: Clear the root page-table before freeing the page-tables
Date: Tue, 9 Feb 2021 20:36:49 -0000
Message-ID: <04f901d6ff23$4b2b5e80$e1821b80$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQJkA3sfHUQVO5jg8t87X8qwyW0VowISFH3OqSWqV1A=
Content-Language: en-gb

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: 09 February 2021 15:28
> To: xen-devel@lists.xenproject.org
> Cc: hongyxia@amazon.co.uk; iwj@xenproject.org; Julien Grall <jgrall@amazon.com>; Jan Beulich
> <jbeulich@suse.com>; Andrew Cooper <andrew.cooper3@citrix.com>; Kevin Tian <kevin.tian@intel.com>;
> Paul Durrant <paul@xen.org>
> Subject: [for-4.15][PATCH v2 5/5] xen/iommu: x86: Clear the root page-table before freeing the page-
> tables
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> The new per-domain IOMMU page-table allocator will now free the
> page-tables when the domain's resources are relinquished. However, the
> root page-table (i.e. hd->arch.pg_maddr) will not be cleared.
> 
> Xen may still access the IOMMU page-tables afterwards, at least in the
> case of a PV domain:
> 
> (XEN) Xen call trace:
> (XEN)    [<ffff82d04025b4b2>] R iommu.c#addr_to_dma_page_maddr+0x12e/0x1d8
> (XEN)    [<ffff82d04025b695>] F iommu.c#intel_iommu_unmap_page+0x5d/0xf8
> (XEN)    [<ffff82d0402695f3>] F iommu_unmap+0x9c/0x129
> (XEN)    [<ffff82d0402696a6>] F iommu_legacy_unmap+0x26/0x63
> (XEN)    [<ffff82d04033c5c7>] F mm.c#cleanup_page_mappings+0x139/0x144
> (XEN)    [<ffff82d04033c61d>] F put_page+0x4b/0xb3
> (XEN)    [<ffff82d04033c87f>] F put_page_from_l1e+0x136/0x13b
> (XEN)    [<ffff82d04033cada>] F devalidate_page+0x256/0x8dc
> (XEN)    [<ffff82d04033d396>] F mm.c#_put_page_type+0x236/0x47e
> (XEN)    [<ffff82d04033d64d>] F mm.c#put_pt_page+0x6f/0x80
> (XEN)    [<ffff82d04033d8d6>] F mm.c#put_page_from_l2e+0x8a/0xcf
> (XEN)    [<ffff82d04033cc27>] F devalidate_page+0x3a3/0x8dc
> (XEN)    [<ffff82d04033d396>] F mm.c#_put_page_type+0x236/0x47e
> (XEN)    [<ffff82d04033d64d>] F mm.c#put_pt_page+0x6f/0x80
> (XEN)    [<ffff82d04033d807>] F mm.c#put_page_from_l3e+0x8a/0xcf
> (XEN)    [<ffff82d04033cdf0>] F devalidate_page+0x56c/0x8dc
> (XEN)    [<ffff82d04033d396>] F mm.c#_put_page_type+0x236/0x47e
> (XEN)    [<ffff82d04033d64d>] F mm.c#put_pt_page+0x6f/0x80
> (XEN)    [<ffff82d04033d6c7>] F mm.c#put_page_from_l4e+0x69/0x6d
> (XEN)    [<ffff82d04033cf24>] F devalidate_page+0x6a0/0x8dc
> (XEN)    [<ffff82d04033d396>] F mm.c#_put_page_type+0x236/0x47e
> (XEN)    [<ffff82d04033d92e>] F put_page_type_preemptible+0x13/0x15
> (XEN)    [<ffff82d04032598a>] F domain.c#relinquish_memory+0x1ff/0x4e9
> (XEN)    [<ffff82d0403295f2>] F domain_relinquish_resources+0x2b6/0x36a
> (XEN)    [<ffff82d040205cdf>] F domain_kill+0xb8/0x141
> (XEN)    [<ffff82d040236cac>] F do_domctl+0xb6f/0x18e5
> (XEN)    [<ffff82d04031d098>] F pv_hypercall+0x2f0/0x55f
> (XEN)    [<ffff82d04039b432>] F lstar_enter+0x112/0x120
> 
> This will result in a use-after-free and possibly a host crash or
> memory corruption.
> 
> Freeing the page-tables further down in domain_relinquish_resources()
> would not work because pages may not be released until later if another
> domain holds a reference on them.
> 
> Once all the PCI devices have been de-assigned, it is pointless to
> access or modify the IOMMU page-tables. So we can simply clear the root
> page-table address.
> 
> Fixes: 3eef6d07d722 ("x86/iommu: convert VT-d code to use new page table allocator")
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> ---
>     Changes in v2:
>         - Introduce clear_root_pgtable()
>         - Move the patch later in the series
> ---
>  xen/drivers/passthrough/amd/pci_amd_iommu.c | 12 +++++++++++-
>  xen/drivers/passthrough/vtd/iommu.c         | 12 +++++++++++-
>  xen/drivers/passthrough/x86/iommu.c         |  6 ++++++
>  xen/include/xen/iommu.h                     |  1 +
>  4 files changed, 29 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
> index 42b5a5a9bec4..81add0ba26b4 100644
> --- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
> +++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
> @@ -381,9 +381,18 @@ static int amd_iommu_assign_device(struct domain *d, u8 devfn,
>      return reassign_device(pdev->domain, d, devfn, pdev);
>  }
> 
> +static void iommu_clear_root_pgtable(struct domain *d)
> +{
> +    struct domain_iommu *hd = dom_iommu(d);
> +
> +    spin_lock(&hd->arch.mapping_lock);
> +    hd->arch.amd.root_table = NULL;
> +    spin_unlock(&hd->arch.mapping_lock);
> +}
> +
>  static void amd_iommu_domain_destroy(struct domain *d)
>  {
> -    dom_iommu(d)->arch.amd.root_table = NULL;
> +    ASSERT(!dom_iommu(d)->arch.amd.root_table);
>  }
> 
>  static int amd_iommu_add_device(u8 devfn, struct pci_dev *pdev)
> @@ -565,6 +574,7 @@ static const struct iommu_ops __initconstrel _iommu_ops = {
>      .remove_device = amd_iommu_remove_device,
>      .assign_device  = amd_iommu_assign_device,
>      .teardown = amd_iommu_domain_destroy,
> +    .clear_root_pgtable = iommu_clear_root_pgtable,
>      .map_page = amd_iommu_map_page,
>      .unmap_page = amd_iommu_unmap_page,
>      .iotlb_flush = amd_iommu_flush_iotlb_pages,
> diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
> index d136fe36883b..e1871f6c2bc1 100644
> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -1726,6 +1726,15 @@ out:
>      return ret;
>  }
> 
> +static void iommu_clear_root_pgtable(struct domain *d)
> +{
> +    struct domain_iommu *hd = dom_iommu(d);
> +
> +    spin_lock(&hd->arch.mapping_lock);
> +    hd->arch.vtd.pgd_maddr = 0;
> +    spin_unlock(&hd->arch.mapping_lock);
> +}
> +
>  static void iommu_domain_teardown(struct domain *d)
>  {
>      struct domain_iommu *hd = dom_iommu(d);
> @@ -1740,7 +1749,7 @@ static void iommu_domain_teardown(struct domain *d)
>          xfree(mrmrr);
>      }
> 
> -    hd->arch.vtd.pgd_maddr = 0;
> +    ASSERT(!hd->arch.vtd.pgd_maddr);
>  }
> 
>  static int __must_check intel_iommu_map_page(struct domain *d, dfn_t dfn,
> @@ -2719,6 +2728,7 @@ static struct iommu_ops __initdata vtd_ops = {
>      .remove_device = intel_iommu_remove_device,
>      .assign_device  = intel_iommu_assign_device,
>      .teardown = iommu_domain_teardown,
> +    .clear_root_pgtable = iommu_clear_root_pgtable,
>      .map_page = intel_iommu_map_page,
>      .unmap_page = intel_iommu_unmap_page,
>      .lookup_page = intel_iommu_lookup_page,
> diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
> index 82d770107a47..d3cdec6ee83f 100644
> --- a/xen/drivers/passthrough/x86/iommu.c
> +++ b/xen/drivers/passthrough/x86/iommu.c
> @@ -280,6 +280,12 @@ int iommu_free_pgtables(struct domain *d)
>      /* After this barrier no new page allocations can occur. */
>      spin_barrier(&hd->arch.pgtables.lock);
> 
> +    /*
> +     * Pages will be moved to the free list in a bit. So we want to

'in a bit' sounds a little odd. I think you could just say 'below'. The code looks fine though so...

Reviewed-by: Paul Durrant <paul@xen.org>

> +     * clear the root page-table to avoid any potential use after-free.
> +     */
> +    hd->platform_ops->clear_root_pgtable(d);
> +
>      while ( (pg = page_list_remove_head(&hd->arch.pgtables.list)) )
>      {
>          free_domheap_page(pg);
> diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
> index 863a68fe1622..d59ed7cbad43 100644
> --- a/xen/include/xen/iommu.h
> +++ b/xen/include/xen/iommu.h
> @@ -272,6 +272,7 @@ struct iommu_ops {
> 
>      int (*adjust_irq_affinities)(void);
>      void (*sync_cache)(const void *addr, unsigned int size);
> +    void (*clear_root_pgtable)(struct domain *d);
>  #endif /* CONFIG_X86 */
> 
>      int __must_check (*suspend)(void);
> --
> 2.17.1




From xen-devel-bounces@lists.xenproject.org Tue Feb 09 20:53:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 20:53:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83426.155248 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9a0r-0000N4-0Y; Tue, 09 Feb 2021 20:53:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83426.155248; Tue, 09 Feb 2021 20:53:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9a0q-0000Mx-Ti; Tue, 09 Feb 2021 20:53:32 +0000
Received: by outflank-mailman (input) for mailman id 83426;
 Tue, 09 Feb 2021 20:53:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9a0o-0000Mp-Vt; Tue, 09 Feb 2021 20:53:31 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9a0o-00072b-Kx; Tue, 09 Feb 2021 20:53:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9a0o-0001DQ-CL; Tue, 09 Feb 2021 20:53:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l9a0o-0006dk-Bt; Tue, 09 Feb 2021 20:53:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=z1nqytggiG8961FgR6pBexNIoIepUM8kZTx8TbWLD7U=; b=aOrUng+sWii16mT9U0B6gMyO1Z
	trcQ1fSKdcSEqEfGDgbGhZVOSgT6SE+HDMjk6NXy9q02Hii+Kh7+hakRfxkQobilYyTdbn1mY5CEW
	c78u3I4Ic87hHmvUla1xUwefB+KJI75kFskcnKdM72DQciFBM3SqKPjZ55Riiwq4TdSg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159131-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.12-testing test] 159131: regressions - FAIL
X-Osstest-Failures:
    xen-4.12-testing:test-amd64-i386-xl-raw:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-migrupgrade:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-livepatch:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-rtds:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-libvirt-xsm:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-libvirt:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-livepatch:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-xl-pvshim:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ovmf-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-pvhv2-amd:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-qemuu-rhel6hvm-intel:<job status>:broken:regression
    xen-4.12-testing:test-xtf-amd64-amd64-4:<job status>:broken:regression
    xen-4.12-testing:test-xtf-amd64-amd64-1:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-pair:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-qemuu-freebsd12-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ovmf-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-pair:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-migrupgrade:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-credit2:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-xl-shadow:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-pvhv2-intel:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-libvirt-vhd:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-xl:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-xsm:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-i386-pvgrub:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-qemut-rhel6hvm-amd:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-libvirt:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemut-debianhvm-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-freebsd10-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-libvirt-pair:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-pygrub:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-shadow:<job status>:broken:regression
    xen-4.12-testing:test-xtf-amd64-amd64-2:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-intel:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-amd64-pvgrub:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-qemuu-freebsd11-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-multivcpu:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-libvirt-pair:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-freebsd10-i386:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-credit1:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-xl-qemut-debianhvm-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-qemut-rhel6hvm-intel:<job status>:broken:regression
    xen-4.12-testing:build-i386-xsm:<job status>:broken:regression
    xen-4.12-testing:test-xtf-amd64-amd64-3:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-pvshim:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-qemuu-rhel6hvm-amd:<job status>:broken:regression
    xen-4.12-testing:build-i386-xsm:host-install(4):broken:regression
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:guest-start:fail:regression
    xen-4.12-testing:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    xen-4.12-testing:test-amd64-amd64-xl-xsm:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-pvshim:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-amd64-pvgrub:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-qemuu-freebsd11-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-xl-pvshim:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-pygrub:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-libvirt-xsm:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-shadow:host-install(5):broken:heisenbug
    xen-4.12-testing:test-xtf-amd64-amd64-3:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-pvhv2-amd:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-qemuu-rhel6hvm-amd:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-livepatch:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-libvirt:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ovmf-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-qemut-rhel6hvm-amd:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-libvirt:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-qemut-debianhvm-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-pvhv2-intel:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-credit1:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-multivcpu:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ovmf-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-livepatch:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-xl-raw:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-qemuu-rhel6hvm-intel:host-install(5):broken:heisenbug
    xen-4.12-testing:test-xtf-amd64-amd64-4:host-install(5):broken:heisenbug
    xen-4.12-testing:test-xtf-amd64-amd64-1:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-intel:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-credit2:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-xtf-amd64-amd64-2:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-xl-shadow:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-freebsd10-i386:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-qemut-rhel6hvm-intel:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-i386-pvgrub:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-freebsd10-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-libvirt-vhd:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-rtds:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-xl-qemut-debianhvm-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-migrupgrade:host-install/src_host(6):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-migrupgrade:host-install/src_host(6):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-migrupgrade:host-install/dst_host(7):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-pair:host-install/src_host(6):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-pair:host-install/dst_host(7):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-libvirt-pair:host-install/src_host(6):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-libvirt-pair:host-install/dst_host(7):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-libvirt-pair:host-install/src_host(6):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-libvirt-pair:host-install/dst_host(7):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-pair:host-install/src_host(6):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-pair:host-install/dst_host(7):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-migrupgrade:host-install/dst_host(7):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-xl:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-qemuu-freebsd12-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-armhf-armhf-xl-arndale:debian-fixup:fail:heisenbug
    xen-4.12-testing:test-armhf-armhf-xl-vhd:debian-di-install:fail:heisenbug
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:debian-di-install:fail:heisenbug
    xen-4.12-testing:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:guest-localmigrate/x10:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8d26cdd3b66ab86d560dacd763d76ff3da95723e
X-Osstest-Versions-That:
    xen=cce7cbd986c122a86582ff3775b6b559d877407c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 09 Feb 2021 20:53:30 +0000

flight 159131 xen-4.12-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159131/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-raw          <job status>                 broken  in 159052
 test-amd64-amd64-migrupgrade    <job status>                 broken  in 159052
 test-amd64-i386-livepatch       <job status>                 broken  in 159052
 test-amd64-amd64-xl-rtds        <job status>                 broken  in 159052
 test-amd64-amd64-libvirt-xsm    <job status>                 broken  in 159052
 test-amd64-amd64-xl-qemut-win7-amd64    <job status>          broken in 159052
 test-amd64-amd64-xl-qemut-ws16-amd64    <job status>          broken in 159052
 test-amd64-i386-libvirt         <job status>                 broken  in 159052
 test-amd64-amd64-livepatch      <job status>                 broken  in 159052
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict <job status> broken in 159052
 test-amd64-amd64-xl             <job status>                 broken  in 159052
 test-amd64-i386-xl-qemuu-ws16-amd64    <job status>           broken in 159052
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm    <job status>  broken in 159052
 test-amd64-amd64-xl-qemuu-debianhvm-amd64    <job status>     broken in 159052
 test-amd64-i386-xl-pvshim       <job status>                 broken  in 159052
 test-amd64-amd64-xl-qemuu-ovmf-amd64    <job status>          broken in 159052
 test-amd64-amd64-xl-pvhv2-amd    <job status>                 broken in 159052
 test-amd64-i386-qemuu-rhel6hvm-intel    <job status>          broken in 159052
 test-xtf-amd64-amd64-4          <job status>                 broken  in 159052
 test-xtf-amd64-amd64-1          <job status>                 broken  in 159052
 test-amd64-i386-xl-qemut-ws16-amd64    <job status>           broken in 159052
 test-amd64-amd64-pair           <job status>                 broken  in 159052
 test-amd64-amd64-qemuu-freebsd12-amd64    <job status>        broken in 159052
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  <job status> broken in 159052
 test-amd64-i386-xl-qemuu-ovmf-amd64    <job status>           broken in 159052
 test-amd64-i386-pair            <job status>                 broken  in 159052
 test-amd64-i386-migrupgrade     <job status>                 broken  in 159052
 test-amd64-amd64-xl-credit2     <job status>                 broken  in 159052
 test-amd64-amd64-qemuu-nested-amd    <job status>             broken in 159052
 test-amd64-i386-xl-shadow       <job status>                 broken  in 159052
 test-amd64-amd64-xl-pvhv2-intel    <job status>               broken in 159052
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow <job status> broken in 159052
 test-amd64-amd64-libvirt-vhd    <job status>                 broken  in 159052
 test-amd64-i386-xl              <job status>                 broken  in 159052
 test-amd64-amd64-xl-xsm         <job status>                 broken  in 159052
 test-amd64-amd64-i386-pvgrub    <job status>                 broken  in 159052
 test-amd64-amd64-xl-qcow2       <job status>                 broken  in 159052
 test-amd64-i386-qemut-rhel6hvm-amd    <job status>            broken in 159052
 test-amd64-amd64-libvirt        <job status>                 broken  in 159052
 test-amd64-amd64-xl-qemut-debianhvm-amd64    <job status>     broken in 159052
 test-amd64-i386-xl-qemuu-debianhvm-amd64    <job status>      broken in 159052
 test-amd64-i386-freebsd10-amd64    <job status>               broken in 159052
 test-amd64-amd64-xl-qemuu-ws16-amd64    <job status>          broken in 159052
 test-amd64-amd64-libvirt-pair    <job status>                 broken in 159052
 test-amd64-i386-xl-qemuu-win7-amd64    <job status>           broken in 159052
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm <job status> broken in 159052
 test-amd64-amd64-pygrub         <job status>                 broken  in 159052
 test-amd64-amd64-xl-shadow      <job status>                 broken  in 159052
 test-xtf-amd64-amd64-2          <job status>                 broken  in 159052
 test-amd64-amd64-qemuu-nested-intel    <job status>           broken in 159052
 test-amd64-amd64-amd64-pvgrub    <job status>                 broken in 159052
 test-amd64-amd64-qemuu-freebsd11-amd64    <job status>        broken in 159052
 test-amd64-amd64-xl-multivcpu    <job status>                 broken in 159052
 test-amd64-i386-libvirt-pair    <job status>                 broken  in 159052
 test-amd64-i386-freebsd10-i386    <job status>                broken in 159052
 test-amd64-amd64-xl-credit1     <job status>                 broken  in 159052
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict <job status> broken in 159052
 test-amd64-amd64-xl-qemuu-win7-amd64    <job status>          broken in 159052
 test-amd64-i386-xl-qemut-debianhvm-amd64    <job status>      broken in 159052
 test-amd64-i386-xl-qemut-win7-amd64    <job status>           broken in 159052
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm <job status> broken in 159052
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm    <job status>  broken in 159052
 test-amd64-i386-qemut-rhel6hvm-intel    <job status>          broken in 159052
 build-i386-xsm                  <job status>                 broken  in 159052
 test-xtf-amd64-amd64-3          <job status>                 broken  in 159052
 test-amd64-amd64-xl-pvshim      <job status>                 broken  in 159052
 test-amd64-i386-qemuu-rhel6hvm-amd    <job status>            broken in 159052
 build-i386-xsm             4 host-install(4) broken in 159052 REGR. vs. 158556
 test-armhf-armhf-libvirt-raw 13 guest-start    fail in 159052 REGR. vs. 158556
 test-armhf-armhf-xl-vhd      13 guest-start    fail in 159052 REGR. vs. 158556

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-xsm      5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-xl-pvshim   5 host-install(5) broken in 159052 pass in 159131
 test-amd64-i386-xl-qemut-ws16-amd64 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-amd64-pvgrub 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-qemuu-freebsd11-amd64 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-i386-xl-pvshim    5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-pygrub      5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-xl-qemut-ws16-amd64 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-libvirt-xsm 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-i386-xl-qemuu-debianhvm-amd64 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-xl-qemut-win7-amd64 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-xl-shadow   5 host-install(5) broken in 159052 pass in 159131
 test-xtf-amd64-amd64-3       5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-xl-pvhv2-amd 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-i386-xl-qemut-win7-amd64 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-i386-qemuu-rhel6hvm-amd 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-i386-livepatch    5 host-install(5) broken in 159052 pass in 159131
 test-amd64-i386-libvirt      5 host-install(5) broken in 159052 pass in 159131
 test-amd64-i386-xl-qemuu-ovmf-amd64 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-xl-qcow2    5 host-install(5) broken in 159052 pass in 159131
 test-amd64-i386-qemut-rhel6hvm-amd 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-libvirt     5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-xl-qemut-debianhvm-amd64 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-xl-pvhv2-intel 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-xl-credit1  5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-xl-multivcpu 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-xl-qemuu-ovmf-amd64 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-livepatch   5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-i386-xl-raw       5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-xl          5 host-install(5) broken in 159052 pass in 159131
 test-amd64-i386-qemuu-rhel6hvm-intel 5 host-install(5) broken in 159052 pass in 159131
 test-xtf-amd64-amd64-4       5 host-install(5) broken in 159052 pass in 159131
 test-xtf-amd64-amd64-1       5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-qemuu-nested-intel 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-xl-credit2  5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-xl-qemuu-ws16-amd64 5 host-install(5) broken in 159052 pass in 159131
 test-xtf-amd64-amd64-2       5 host-install(5) broken in 159052 pass in 159131
 test-amd64-i386-xl-shadow    5 host-install(5) broken in 159052 pass in 159131
 test-amd64-i386-xl-qemuu-win7-amd64 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-i386-freebsd10-i386 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-i386-xl-qemuu-ws16-amd64 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-i386-qemut-rhel6hvm-intel 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-i386-pvgrub 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-i386-freebsd10-amd64 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-libvirt-vhd 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-xl-rtds     5 host-install(5) broken in 159052 pass in 159131
 test-amd64-i386-xl-qemut-debianhvm-amd64 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-i386-migrupgrade 6 host-install/src_host(6) broken in 159052 pass in 159131
 test-amd64-amd64-migrupgrade 6 host-install/src_host(6) broken in 159052 pass in 159131
 test-amd64-amd64-migrupgrade 7 host-install/dst_host(7) broken in 159052 pass in 159131
 test-amd64-amd64-xl-qemuu-win7-amd64 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-pair 6 host-install/src_host(6) broken in 159052 pass in 159131
 test-amd64-amd64-pair 7 host-install/dst_host(7) broken in 159052 pass in 159131
 test-amd64-i386-libvirt-pair 6 host-install/src_host(6) broken in 159052 pass in 159131
 test-amd64-i386-libvirt-pair 7 host-install/dst_host(7) broken in 159052 pass in 159131
 test-amd64-amd64-libvirt-pair 6 host-install/src_host(6) broken in 159052 pass in 159131
 test-amd64-amd64-libvirt-pair 7 host-install/dst_host(7) broken in 159052 pass in 159131
 test-amd64-i386-pair 6 host-install/src_host(6) broken in 159052 pass in 159131
 test-amd64-i386-pair 7 host-install/dst_host(7) broken in 159052 pass in 159131
 test-amd64-i386-migrupgrade 7 host-install/dst_host(7) broken in 159052 pass in 159131
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-i386-xl           5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-qemuu-nested-amd 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-qemuu-freebsd12-amd64 5 host-install(5) broken in 159052 pass in 159131
 test-armhf-armhf-xl-arndale  13 debian-fixup     fail in 159052 pass in 159131
 test-armhf-armhf-xl-vhd      12 debian-di-install          fail pass in 159052
 test-armhf-armhf-libvirt-raw 12 debian-di-install          fail pass in 159052

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-xsm   1 build-check(1)           blocked in 159052 n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked in 159052 n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked in 159052 n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 1 build-check(1) blocked in 159052 n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 1 build-check(1) blocked in 159052 n/a
 test-amd64-i386-xl-xsm        1 build-check(1)           blocked in 159052 n/a
 test-amd64-amd64-xl-qcow2    19 guest-localmigrate/x10       fail  like 158556
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 158556
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 158556
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 158556
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 158556
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 158556
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 158556
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 158556
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 158556
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 158556
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 158556
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8d26cdd3b66ab86d560dacd763d76ff3da95723e
baseline version:
 xen                  cce7cbd986c122a86582ff3775b6b559d877407c

Last test of basis   158556  2021-01-21 15:37:26 Z   19 days
Failing since        159017  2021-02-04 15:06:13 Z    5 days    4 attempts
Testing same since   159052  2021-02-05 18:27:22 Z    4 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-i386-xl-raw broken
broken-job test-amd64-amd64-migrupgrade broken
broken-job test-amd64-i386-livepatch broken
broken-job test-amd64-amd64-xl-rtds broken
broken-job test-amd64-amd64-libvirt-xsm broken
broken-job test-amd64-amd64-xl-qemut-win7-amd64 broken
broken-job test-amd64-amd64-xl-qemut-ws16-amd64 broken
broken-job test-amd64-i386-libvirt broken
broken-job test-amd64-amd64-livepatch broken
broken-job test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict broken
broken-job test-amd64-amd64-xl broken
broken-job test-amd64-i386-xl-qemuu-ws16-amd64 broken
broken-job test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm broken
broken-job test-amd64-amd64-xl-qemuu-debianhvm-amd64 broken
broken-job test-amd64-i386-xl-pvshim broken
broken-job test-amd64-amd64-xl-qemuu-ovmf-amd64 broken
broken-job test-amd64-amd64-xl-pvhv2-amd broken
broken-job test-amd64-i386-qemuu-rhel6hvm-intel broken
broken-job test-xtf-amd64-amd64-4 broken
broken-job test-xtf-amd64-amd64-1 broken
broken-job test-amd64-i386-xl-qemut-ws16-amd64 broken
broken-job test-amd64-amd64-pair broken
broken-job test-amd64-amd64-qemuu-freebsd12-amd64 broken
broken-job test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow broken
broken-job test-amd64-i386-xl-qemuu-ovmf-amd64 broken
broken-job test-amd64-i386-pair broken
broken-job test-amd64-i386-migrupgrade broken
broken-job test-amd64-amd64-xl-credit2 broken
broken-job test-amd64-amd64-qemuu-nested-amd broken
broken-job test-amd64-i386-xl-shadow broken
broken-job test-amd64-amd64-xl-pvhv2-intel broken
broken-job test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow broken
broken-job test-amd64-amd64-libvirt-vhd broken
broken-job test-amd64-i386-xl broken
broken-job test-amd64-amd64-xl-xsm broken
broken-job test-amd64-amd64-i386-pvgrub broken
broken-job test-amd64-amd64-xl-qcow2 broken
broken-job test-amd64-i386-qemut-rhel6hvm-amd broken
broken-job test-amd64-amd64-libvirt broken
broken-job test-amd64-amd64-xl-qemut-debianhvm-amd64 broken
broken-job test-amd64-i386-xl-qemuu-debianhvm-amd64 broken
broken-job test-amd64-i386-freebsd10-amd64 broken
broken-job test-amd64-amd64-xl-qemuu-ws16-amd64 broken
broken-job test-amd64-amd64-libvirt-pair broken
broken-job test-amd64-i386-xl-qemuu-win7-amd64 broken
broken-job test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm broken
broken-job test-amd64-amd64-pygrub broken
broken-job test-amd64-amd64-xl-shadow broken
broken-job test-xtf-amd64-amd64-2 broken
broken-job test-amd64-amd64-qemuu-nested-intel broken
broken-job test-amd64-amd64-amd64-pvgrub broken
broken-job test-amd64-amd64-qemuu-freebsd11-amd64 broken
broken-job test-amd64-amd64-xl-multivcpu broken
broken-job test-amd64-i386-libvirt-pair broken
broken-job test-amd64-i386-freebsd10-i386 broken
broken-job test-amd64-amd64-xl-credit1 broken
broken-job test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict broken
broken-job test-amd64-amd64-xl-qemuu-win7-amd64 broken
broken-job test-amd64-i386-xl-qemut-debianhvm-amd64 broken
broken-job test-amd64-i386-xl-qemut-win7-amd64 broken
broken-job test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm broken
broken-job test-amd64-amd64-xl-qemut-debianhvm-i386-xsm broken
broken-job test-amd64-i386-qemut-rhel6hvm-intel broken
broken-job build-i386-xsm broken
broken-job test-xtf-amd64-amd64-3 broken
broken-job test-amd64-amd64-xl-pvshim broken
broken-job test-amd64-i386-qemuu-rhel6hvm-amd broken

Not pushing.

------------------------------------------------------------
commit 8d26cdd3b66ab86d560dacd763d76ff3da95723e
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Feb 5 08:52:54 2021 +0100

    x86/msr: fix handling of MSR_IA32_PERF_{STATUS/CTL} (again, part 2)
    
    X86_VENDOR_* aren't bit masks in the older trees.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit f1f322610718c40680ac09e66f6c82e69c78ba3a
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Feb 4 15:39:45 2021 +0100

    x86/msr: fix handling of MSR_IA32_PERF_{STATUS/CTL} (again)
    
    X86_VENDOR_* aren't bit masks in the older trees.
    
    Reported-by: James Dingwall <james@dingwall.me.uk>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


         IOW/LOnZFY2ZGajU2yrIh4bRpJ0ML6Qu750NUHBXq+NlxMiuQWVZvXtR+VJ02XofK3de
         J33vOe3eZQYx9mM8bHygnTOl4nIfeOwCGT/JU5wg6HnZ7F0W6NyKO74a6kEAVuoIHFO6
         FqNg==
X-Gm-Message-State: AOAM532tSdUo+eBol8C/PHi58X0TKWCXJuRNk9508Nm9xTIbsfyepW6z
	NRITKaAu7nkAnjyh6SnIY3c=
X-Google-Smtp-Source: ABdhPJzTW1JpTYIYTggwJCty+7QItjLgOsTwoXvE0DnB37AghFoJ1KFLHmJkR3h6UrhQFxQp5jCHqg==
X-Received: by 2002:a2e:8541:: with SMTP id u1mr752874ljj.338.1612904664185;
        Tue, 09 Feb 2021 13:04:24 -0800 (PST)
Subject: Re: [PATCH V4 23/24] libxl: Introduce basic virtio-mmio support on
 Arm
To: Julien Grall <julien@xen.org>
Cc: xen-devel@lists.xenproject.org, Julien Grall <julien.grall@arm.com>,
 Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
References: <1610488352-18494-1-git-send-email-olekstysh@gmail.com>
 <1610488352-18494-24-git-send-email-olekstysh@gmail.com>
 <25b62097-9ea9-31f3-0f8f-92a7f0d01d7c@xen.org>
 <51d44085-f178-3985-9324-da6494cd9d2e@gmail.com>
 <58c9da23-ef6a-1d33-b2ec-30e3425da2f3@xen.org>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <d1e4d2bc-af88-d91a-ec43-d5055b37bc96@gmail.com>
Date: Tue, 9 Feb 2021 23:04:23 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <58c9da23-ef6a-1d33-b2ec-30e3425da2f3@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 20.01.21 18:40, Julien Grall wrote:
> Hi Oleksandr,

Hi Julien


Sorry for the late response.


>
> On 17/01/2021 22:22, Oleksandr wrote:
>> On 15.01.21 23:30, Julien Grall wrote:
>>> On 12/01/2021 21:52, Oleksandr Tyshchenko wrote:
>>>> From: Julien Grall <julien.grall@arm.com>
>>> So I am not quite sure how this new parameter can be used. Could
>>> you expand on it?
>> The original idea was to set it if we are going to assign virtio
>> device(s) to the guest.
>> To be honest, I plan to remove this extra parameter. It might not be
>> obvious from the current patch, but the next patch will show that we
>> can avoid introducing it at all.
>
> Right, so I think we want to avoid introducing the parameter. I have 
> suggested in patch #24 a different way to split code introduced by #23 
> and #24.

Got it. Will take that into account for the next version.


>
>
> [...]
>
>>>
>>>> +#define GUEST_VIRTIO_MMIO_SIZE xen_mk_ullong(0x200)
>>>
>>> AFAICT, the size of the virtio mmio region should be 0x100. So why 
>>> is it 0x200?
>>
>>
>> I didn't find a total size requirement for the mmio region in the
>> virtio specification v1.1 (the size of the control registers is indeed
>> 0x100 and the device-specific configuration registers start at offset
>> 0x100; however, their size depends on the device and the driver).
>>
>> kvmtool uses 0x200 [1], in some Linux device-trees we can see 0x200 
>> [2] (however, device-tree bindings example has 0x100 [3]), so what 
>> would be the proper value for Xen code?
>
> Hmm... I missed that fact. I would say we want to use the biggest size 
> possible so we can cover most of the devices.
>
> Although, as you pointed out, this may not cover all the devices. So
> maybe we want to allow the user to configure the size via xl.cfg for
> the ones not conforming to 0x200.
>
> This could be implemented in the future. Stefano/Ian, what do you think?

I see that Stefano has already agreed on that, so let's leave 0x200 for now.


>
>
>>> Most likely, you will want to reserve a range
>>
>> It seems so, good point. BTW, a range is needed for the mmio
>> region as well, correct?
>
> I would reserve 1MB (just for the sake of avoiding a region size in KB).
>
> For the SPIs, I would consider reserving 10-20 interrupts. Do you
> think this will cover your use cases?

Yes, I think it would be enough for now.


>
>
> Cheers,
>
-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Tue Feb 09 21:04:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 21:04:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83431.155275 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9aBl-0001UT-K3; Tue, 09 Feb 2021 21:04:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83431.155275; Tue, 09 Feb 2021 21:04:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9aBl-0001UM-GN; Tue, 09 Feb 2021 21:04:49 +0000
Received: by outflank-mailman (input) for mailman id 83431;
 Tue, 09 Feb 2021 21:04:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4M0B=HL=gmail.com=julien.grall.oss@srs-us1.protection.inumbo.net>)
 id 1l9aBj-0001UA-Ow
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 21:04:47 +0000
Received: from mail-ej1-x62d.google.com (unknown [2a00:1450:4864:20::62d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 35d90b3d-1fae-4688-bb7b-3122d7526469;
 Tue, 09 Feb 2021 21:04:46 +0000 (UTC)
Received: by mail-ej1-x62d.google.com with SMTP id w1so34101477ejf.11
 for <xen-devel@lists.xenproject.org>; Tue, 09 Feb 2021 13:04:46 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 35d90b3d-1fae-4688-bb7b-3122d7526469
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=iYNtP4w4sH9U75AdPRx/WV9Gg94sZqH3kRt5XGDTuNI=;
        b=vd98t3h5FnK5Wv3l+6Sj/RewcgHsuqbkhvNc3HsYBf+qQXpyKxF2i1uMERflaWea4X
         5Qb93JgDqVUWstYE7TXMIICi7OKk8puJsBrdyzAqgTczpnMTcQp7KROYlBkLlqUfRUha
         PWI2tf91Jxy8UJ5DI33MvCbiLA9HnvzgONdtoK2bB/dx1tNBtcobQPC9De4zIF/LJzkR
         NZzrhOlGltpByiC3VSVfISZMUMcH8SHSH7HixtuVUIwVm5O5crPvXj6zXt6vINC6FFGm
         QXWcM2yh1ULvDr6Y4peRnaCMPHQ8QmQY4Wmhs0yNBqxog8Vh36HsRs3WaqzErssOroKY
         EVFA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=iYNtP4w4sH9U75AdPRx/WV9Gg94sZqH3kRt5XGDTuNI=;
        b=SplIVXdcZr/PDsauhZ7lYgm+YQwha8bgKfzgQJmMw0zLMshLMprMjh4Nvnxd7evfU4
         AeQ4wft7gz7tPaprZmOVYpacwmCiBTwQM8givkj8Nr+wMFP5NG+8qoxi3GhyDNgCt0Ca
         39WesE4NJQIRZuqbVpi+mwnLPkLx7U+0Od5mXQpm37x+AOpxc2vhPW8353uBlsW4vutH
         XFJkNTxdB1nrfFJsFpTAe4Dq5mC5iiVTLbPjIqpRan+tFdOSdk3NJYX1Udlyf/PfxWca
         OEuWQopZB0o+xvpvlfEJzk+43FsOkkDHxnxkK7sosm0EeC6uqEpqec5kbsw1z9mxUUUF
         Db3w==
X-Gm-Message-State: AOAM532aW6lL1PavcNNNj66BdHF0hI6vv2/XSWDbUq1WwhMiqV41w7+t
	WzR4Bmhg7ZyWN4VTylofeMabYsdh446zRvCV0/8=
X-Google-Smtp-Source: ABdhPJzlxxDQxgFa01yTwTNXaPWDBUOenQ8Nk7LcWb2LU+tOMbxwO02y0ppWNkaEgDfsgOLNngfgFGo/6FKOin1mYQs=
X-Received: by 2002:a17:906:3a10:: with SMTP id z16mr9300715eje.483.1612904685924;
 Tue, 09 Feb 2021 13:04:45 -0800 (PST)
MIME-Version: 1.0
References: <20210208184932.23468-1-sstabellini@kernel.org>
 <173ed75a-94cf-26a5-9271-a687bf201578@xen.org> <alpine.DEB.2.21.2102081214010.8948@sstabellini-ThinkPad-T480s>
 <4df687cb-d3bc-ccb8-4e7c-a6429c37574e@suse.com> <24610.38467.808678.941320@mariner.uk.xensource.com>
 <alpine.DEB.2.21.2102090914280.8948@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2102090914280.8948@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien.grall.oss@gmail.com>
Date: Tue, 9 Feb 2021 21:04:34 +0000
Message-ID: <CAJ=z9a0fDYccxTDkpvmG77D-havkySuYOUK4MSYvpZw4EL9oGg@mail.gmail.com>
Subject: Re: [PATCH v2] xen/arm: fix gnttab_need_iommu_mapping
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Ian Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>, lucmiccio@gmail.com, 
	xen-devel <xen-devel@lists.xenproject.org>, 
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
	Rahul Singh <Rahul.Singh@arm.com>, Stefano Stabellini <stefano.stabellini@xilinx.com>, 
	George Dunlap <George.Dunlap@citrix.com>
Content-Type: text/plain; charset="UTF-8"

On Tue, 9 Feb 2021 at 17:31, Stefano Stabellini <sstabellini@kernel.org> wrote:
>
> On Tue, 9 Feb 2021, Ian Jackson wrote:
> > Jan Beulich writes ("Re: [PATCH v2] xen/arm: fix gnttab_need_iommu_mapping"):
> > > On 08.02.2021 21:24, Stefano Stabellini wrote:
> > ...
> > > > For these cases, I would just follow a simple rule of thumb:
> > > > - is the submitter willing to provide the backport?
> > > > - is the backport low-risk?
> > > > - is the underlying bug important?
> > > >
> > > > If the answer to all is "yes" then I'd go with it.
> > >
> > > Personally I disagree, for the very simple reason of the question
> > > going to become "Where do we draw the line?" The only non-security
> > > backports that I consider acceptable are low-risk changes to allow
> > > building with newer tool chains. I know other backports have
> > > occurred in the past, and I did voice my disagreement with this
> > > having happened.
> >
> > I think I take a more relaxed view than Jan, but still a much more
> > firm line than Stefano.  My opinion is that we should make exceptions
> > for only bugs of exceptional severity.
> >
> > I don't think I have seen an argument that this bug is exceptionally
> > severe.
> >
> > For me the fact that you can only experience this bug if you upgrade
> > the hardware or significantly change the configuration, means that
> > this isn't so serious a bug.
>
> Yeah, I think that's really the core of this issue. If somebody is
> already using 4.12 happily, there is really no reason for them to take
> the fix. If somebody is about to use 4.12, then it is a severe issue.

If somebody is about to use 4.12, then they are most likely going to
encounter a serious blocker, as there is no support for the generic SMMU
bindings. I would be surprised if there are a lot of DTs still using
the old bindings, at which point such users would want to switch to
4.15 + your patches to add support.

>
> The view of the group is that nobody should be switching to 4.12 now
> because there are newer releases out there. I don't know if that is
> true.

This is mostly based on the distinction between supported and
security-supported. When a tree is only security-supported, there is no
promise the code will run on any given system.

>
> I didn't realize we had a policy or even a recommendation of always
> choosing the latest among the many releases available with
> security-support. I went through the website and SUPPORT.md but couldn't
> find it spelled out anywhere. See:

May I ask, what sort of users would want to start a
development/product based on a soon to be abandoned version?

For any new development, I have always advised switching to the latest
Xen (or at least the latest stable release) because it will contain the
latest fixes and features, and better support because the code is still
fresh in mind...

>
> https://xenproject.org/downloads/
> https://xenproject.org/downloads/xen-project-archives/xen-project-4-12-series/
> https://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=SUPPORT.md;h=52f25fa85af41fa3b38288ab7e172408b77dc779;hb=97b7b5567fba6918a656ad349051b5343b5dea2e
>
> At most we have:
>
>     Supported-Until: 2020-10-02
>     Security-Support-Until: 2022-04-02
>
> Anecdotally, if I go to https://www.kernel.org/ to download a kernel
> tarball, I expect all tarballs to have all the major functionalities. I
> wouldn't imagine that I cannot get one entire Linux subsystem (e.g.
> ethernet or SATA) to work if I don't pick the latest.

I think this is an unrealistic expectation... You can't pick any
version of stable Linux and expect it to work on your shiny HW. There
might be missing drivers or workarounds (including in core subsystems)...

>
> Maybe it would make sense to clarify which releases are discouraged from
> being used on https://xenproject.org/downloads/ at least?

Feel free to suggest a wording that can be discussed here.

Cheers,


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 21:09:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 21:09:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83434.155287 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9aGc-0001lg-67; Tue, 09 Feb 2021 21:09:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83434.155287; Tue, 09 Feb 2021 21:09:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9aGc-0001lZ-2r; Tue, 09 Feb 2021 21:09:50 +0000
Received: by outflank-mailman (input) for mailman id 83434;
 Tue, 09 Feb 2021 21:09:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9aGa-0001lR-LI; Tue, 09 Feb 2021 21:09:48 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9aGa-0007Kg-DN; Tue, 09 Feb 2021 21:09:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9aGa-0001hR-76; Tue, 09 Feb 2021 21:09:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l9aGa-0007tG-6H; Tue, 09 Feb 2021 21:09:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=MILKInN7CaBZYimuWpV83gWinSwTYDvm2jVXfxN/rgg=; b=CitjdBHdZuewsopg6HPxc71yCq
	gcZzRT4i9xtgnWPyxxa+4dSmuxNrbwSkyNMgk20b8YWXkXdlmfeXjq5367TzqYT5/f66cjOuB2erX
	Skaa6yoR1Z7bV4SiAH4Ae9L/ZBHcwYm4Nb1VX7Ls7yfeSZRsUquB73IzjONGTbdf760A=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159143-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 159143: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=472276f59bba2b22bb882c5c6f5479754e68d467
X-Osstest-Versions-That:
    ovmf=43a113385e370530eb52cf2e55b3019d8d4f6558
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 09 Feb 2021 21:09:48 +0000

flight 159143 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159143/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 472276f59bba2b22bb882c5c6f5479754e68d467
baseline version:
 ovmf                 43a113385e370530eb52cf2e55b3019d8d4f6558

Last test of basis   159136  2021-02-08 10:53:57 Z    1 days
Testing same since   159143  2021-02-08 16:09:49 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Jordan Justen <jordan.l.justen@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   43a113385e..472276f59b  472276f59bba2b22bb882c5c6f5479754e68d467 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 21:14:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 21:14:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83438.155302 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9aLO-0002dB-PU; Tue, 09 Feb 2021 21:14:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83438.155302; Tue, 09 Feb 2021 21:14:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9aLO-0002d4-MJ; Tue, 09 Feb 2021 21:14:46 +0000
Received: by outflank-mailman (input) for mailman id 83438;
 Tue, 09 Feb 2021 21:14:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4M0B=HL=gmail.com=julien.grall.oss@srs-us1.protection.inumbo.net>)
 id 1l9aLM-0002cz-K1
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 21:14:44 +0000
Received: from mail-ej1-x629.google.com (unknown [2a00:1450:4864:20::629])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8760d633-fe97-461d-bdb0-4c6e907c504a;
 Tue, 09 Feb 2021 21:14:43 +0000 (UTC)
Received: by mail-ej1-x629.google.com with SMTP id p20so34211467ejb.6
 for <xen-devel@lists.xenproject.org>; Tue, 09 Feb 2021 13:14:43 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8760d633-fe97-461d-bdb0-4c6e907c504a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=UxYIthWvsnz3bn3mTcJ1XLDF5SIGF/4rISVDwpXXG10=;
        b=ghnuU4TVsMS5N0k0aQH9BtI+q2/o/TX6KIh0aTPA80VoduLACMh4UMAXVU2uXnrxJG
         RgppA1ZqXsE8sShAQbt/ijh1A92D6O1jwOiSu4zHccSCVmxUPXlB2p+E9CHIKUvcvg+m
         Pxg4vOZESO1T8DHGmiHqK7BSoGre9/Y7O4s5YONXC8pRbleIu+wBz2tzzq5/w/eZ2KNH
         rX/ahHgeHG9071YawIuuQmKf0SoM5Co5F/7oVul3GMc8saMLaghRhHGQyD/FN3eCwARn
         NIX8LxIEEssw7n+itYRn1Q1Qar3EIMxfQnsFaaC5HipPZNQgotWidRSqFfbw1jH2EEVZ
         TLFg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=UxYIthWvsnz3bn3mTcJ1XLDF5SIGF/4rISVDwpXXG10=;
        b=bgwBnhg8r+uPtwOPOAX3ToD2JxgBGUQRyL96vFR3KFngzZrh44s3UviPquVCsq7Dtv
         xy3ic4BYxubk6MbWyFnE9x/RO8jhEaMRneFAplbazVEy9khN0nQ1NuAfcsVZi3gQpxts
         N4LMfDp6v3jEg/gaAotCtD0i8i3Zq+PKLr+IYcX9K/60l3CSzEqDa8WVfDc6QOqcD+3a
         Zg0h+OrSCxY8DgxgUxl+Nv+/LmnkUhrrUtghD710owcg60dWi/ug3pJQUKrdeLKV3Ur3
         58t98LPFASRdhbZ8W0tYKyK3+X5TRfeIoE/ukzr7Q9Pky807nudRsFw7KsjFKLjB6gaR
         5FnQ==
X-Gm-Message-State: AOAM530bTFnkpIF9coM3TNwWbiHUKh806Edgfy/MB344DQiWjgSXev5f
	JGcm/rAJUut5luqi1aoauTAbd9TwttUuzBU0kY8=
X-Google-Smtp-Source: ABdhPJzXPHoNgJdVMGpTD39cX60XZyKAR7/6Bu7mttA7mFKalBKSQEQRIaARLdQQvUKBP/rij+YhPOWBGWr5M9JZSb8=
X-Received: by 2002:a17:906:2f07:: with SMTP id v7mr24223984eji.343.1612905282942;
 Tue, 09 Feb 2021 13:14:42 -0800 (PST)
MIME-Version: 1.0
References: <20210209152816.15792-1-julien@xen.org> <20210209152816.15792-4-julien@xen.org>
 <04f601d6ff22$1f52cf60$5df86e20$@xen.org>
In-Reply-To: <04f601d6ff22$1f52cf60$5df86e20$@xen.org>
From: Julien Grall <julien.grall.oss@gmail.com>
Date: Tue, 9 Feb 2021 21:14:32 +0000
Message-ID: <CAJ=z9a18XxQLrUanxg_E7Vups7aRee93_vFhqxu1=yq+VdXH-w@mail.gmail.com>
Subject: Re: [for-4.15][PATCH v2 3/5] xen/iommu: iommu_map: Don't crash the
 domain if it is dying
To: Paul Durrant <paul@xen.org>
Cc: xen-devel <xen-devel@lists.xenproject.org>, hongyxia@amazon.co.uk, 
	Ian Jackson <iwj@xenproject.org>, Julien Grall <jgrall@amazon.com>, Jan Beulich <jbeulich@suse.com>
Content-Type: text/plain; charset="UTF-8"

On Tue, 9 Feb 2021 at 20:28, Paul Durrant <xadimgnik@gmail.com> wrote:
>
> > -----Original Message-----
> > From: Julien Grall <julien@xen.org>
> > Sent: 09 February 2021 15:28
> > To: xen-devel@lists.xenproject.org
> > Cc: hongyxia@amazon.co.uk; iwj@xenproject.org; Julien Grall <jgrall@amazon.com>; Jan Beulich
> > <jbeulich@suse.com>; Paul Durrant <paul@xen.org>
> > Subject: [for-4.15][PATCH v2 3/5] xen/iommu: iommu_map: Don't crash the domain if it is dying
> >
> > From: Julien Grall <jgrall@amazon.com>
> >
> > It is a bit pointless to crash a domain that is already dying. This will
> > become more of an annoyance with a follow-up change where page-table
> > allocation will be forbidden when the domain is dying.
> >
> > Security-wise, there is no change, as the devices would still have access
> > to the IOMMU page-tables even after the domain has crashed, until Xen
> > starts to relinquish the resources.
> >
> > For x86, we rely on dom_iommu(d)->arch.mapping.lock to ensure
> > d->is_dying is correctly observed (a follow-up patch will held it in the
>
> s/held/hold
>
> > relinquish path).
> >
> > For Arm, there is still a small race possible. But there is so far no
> > failure specific to a domain dying.
> >
> > Signed-off-by: Julien Grall <jgrall@amazon.com>
> >
> > ---
> >
> > This was spotted when trying to destroy IOREQ servers while the domain
> > is dying. The code will try to add the entry back in the P2M and
> > therefore update the P2M (see arch_ioreq_server_disable() ->
> > hvm_add_ioreq_gfn()).
> >
> > It should be possible to skip the mappin in hvm_add_ioreq_gfn(), however
>
> s/mappin/mapping
>
> > I didn't try a patch yet because checking d->is_dying can be racy (I
> > can't find a proper lock).
> >
> > Changes in v2:
> >     - Patch added
> > ---
> >  xen/drivers/passthrough/iommu.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
> > index 879d238bcd31..75419f20f76d 100644
> > --- a/xen/drivers/passthrough/iommu.c
> > +++ b/xen/drivers/passthrough/iommu.c
> > @@ -272,7 +272,7 @@ int iommu_map(struct domain *d, dfn_t dfn, mfn_t mfn,
> >                              flush_flags) )
> >                  continue;
> >
> > -        if ( !is_hardware_domain(d) )
> > +        if ( !is_hardware_domain(d) && !d->is_dying )
> >              domain_crash(d);
>
> Would it make more sense to check is_dying inside domain_crash() (and turn it into a no-op in that case)?

Jan also suggested moving the check into domain_crash(). However, I
felt it is potentially too risky a change for 4.15 as there are quite
a few callers.

Cheers,


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 21:15:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 21:15:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83439.155313 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9aMO-0002ja-3I; Tue, 09 Feb 2021 21:15:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83439.155313; Tue, 09 Feb 2021 21:15:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9aMO-0002jT-05; Tue, 09 Feb 2021 21:15:48 +0000
Received: by outflank-mailman (input) for mailman id 83439;
 Tue, 09 Feb 2021 21:15:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SKhV=HL=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1l9aMN-0002jJ-BC
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 21:15:47 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 454c7221-1aea-4d94-83d6-9d087311ba00;
 Tue, 09 Feb 2021 21:15:46 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 25F2364E74;
 Tue,  9 Feb 2021 21:15:45 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 454c7221-1aea-4d94-83d6-9d087311ba00
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1612905345;
	bh=BPCep2afbRPxaYr2tE9qGZRa06FlAZFmQg710LXxQFU=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=aF3D8Dz/FQwGudQkatMJTAtep6+Qfba9E5jbQgTVtlFEVtfhof2zkspxwXcJ8roTt
	 Iv4xq9DTgN3FenbxsyaZYVFEwVGIXaySyCI8oThNKBatEv0SH5x4ua8gmHF0uAZdV2
	 2liVVR516JPEVRc4gyEmajSInzXXGINY2fwoVnvpjMa4bkg3TK5B1ii7shLrnqxR9e
	 f1n/QN8avKNeS8n+E5Gww+zr/aAZbgymQ0HrHBo1ISHgr84k2tmQtJXsIle6uYHj9R
	 PSib0UN4p83TOfqBd/NqOVYjv2YXfqfuq43BVRL+b+PmjO4cu7Vp78Kx0NKrz47c1p
	 4UlywrE04xK0g==
Date: Tue, 9 Feb 2021 13:15:44 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Stefano Stabellini <sstabellini@kernel.org>
cc: Rahul Singh <Rahul.Singh@arm.com>, Julien Grall <julien@xen.org>, 
    "lucmiccio@gmail.com" <lucmiccio@gmail.com>, 
    xen-devel <xen-devel@lists.xenproject.org>, 
    Bertrand Marquis <Bertrand.Marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: Re: [PATCH v2] xen/arm: fix gnttab_need_iommu_mapping
In-Reply-To: <alpine.DEB.2.21.2102091226560.8948@sstabellini-ThinkPad-T480s>
Message-ID: <alpine.DEB.2.21.2102091255020.8948@sstabellini-ThinkPad-T480s>
References: <20210208184932.23468-1-sstabellini@kernel.org> <B96B5E21-0600-4664-899D-D38A18DE7A8C@arm.com> <alpine.DEB.2.21.2102091226560.8948@sstabellini-ThinkPad-T480s>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-1516528875-1612904242=:8948"
Content-ID: <alpine.DEB.2.21.2102091309510.8948@sstabellini-ThinkPad-T480s>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1516528875-1612904242=:8948
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.21.2102091309511.8948@sstabellini-ThinkPad-T480s>

On Tue, 9 Feb 2021, Stefano Stabellini wrote:
> On Tue, 9 Feb 2021, Rahul Singh wrote:
> > > On 8 Feb 2021, at 6:49 pm, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > > 
> > > Commit 91d4eca7add broke gnttab_need_iommu_mapping on ARM.
> > > The offending chunk is:
> > > 
> > > #define gnttab_need_iommu_mapping(d)                    \
> > > -    (is_domain_direct_mapped(d) && need_iommu(d))
> > > +    (is_domain_direct_mapped(d) && need_iommu_pt_sync(d))
> > > 
> > > On ARM we need gnttab_need_iommu_mapping to be true for dom0 when it is
> > > directly mapped and IOMMU is enabled for the domain, like the old check
> > > did, but the new check is always false.
> > > 
> > > In fact, need_iommu_pt_sync is defined as dom_iommu(d)->need_sync and
> > > need_sync is set as:
> > > 
> > >    if ( !is_hardware_domain(d) || iommu_hwdom_strict )
> > >        hd->need_sync = !iommu_use_hap_pt(d);
> > > 
> > > iommu_use_hap_pt(d) means that the page-table used by the IOMMU is the
> > > P2M. It is true on ARM. need_sync means that you have a separate IOMMU
> > > page-table and it needs to be updated for every change. need_sync is set
> > > to false on ARM. Hence, gnttab_need_iommu_mapping(d) is false too,
> > > which is wrong.
> > > 
> > > As a consequence, when using PV network from a domU on a system where
> > > IOMMU is on from Dom0, I get:
> > > 
> > > (XEN) smmu: /smmu@fd800000: Unhandled context fault: fsr=0x402, iova=0x8424cb148, fsynr=0xb0001, cb=0
> > > [   68.290307] macb ff0e0000.ethernet eth0: DMA bus error: HRESP not OK
> > > 
> > > The fix is to go back to something along the lines of the old
> > > implementation of gnttab_need_iommu_mapping.
> > > 
> > > Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> > > Fixes: 91d4eca7add
> > > Backport: 4.12+
> > > 
> > > ---
> > > 
> > > Given the severity of the bug, I would like to request this patch to be
> > > backported to 4.12 too, even if 4.12 is security-fixes only since Oct
> > > 2020.
> > > 
> > > For the 4.12 backport, we can use iommu_enabled() instead of
> > > is_iommu_enabled() in the implementation of gnttab_need_iommu_mapping.
> > > 
> > > Changes in v2:
> > > - improve commit message
> > > - add is_iommu_enabled(d) to the check
> > > ---
> > > xen/include/asm-arm/grant_table.h | 2 +-
> > > 1 file changed, 1 insertion(+), 1 deletion(-)
> > > 
> > > diff --git a/xen/include/asm-arm/grant_table.h b/xen/include/asm-arm/grant_table.h
> > > index 6f585b1538..0ce77f9a1c 100644
> > > --- a/xen/include/asm-arm/grant_table.h
> > > +++ b/xen/include/asm-arm/grant_table.h
> > > @@ -89,7 +89,7 @@ int replace_grant_host_mapping(unsigned long gpaddr, mfn_t mfn,
> > >     (((i) >= nr_status_frames(t)) ? INVALID_GFN : (t)->arch.status_gfn[i])
> > > 
> > > #define gnttab_need_iommu_mapping(d)                    \
> > > -    (is_domain_direct_mapped(d) && need_iommu_pt_sync(d))
> > > +    (is_domain_direct_mapped(d) && is_iommu_enabled(d))
> > > 
> > > #endif /* __ASM_GRANT_TABLE_H__ */
> > 
> > I tested the patch and, while creating the guest, I observed the warning below from Linux for the block device:
> > https://elixir.bootlin.com/linux/v4.3/source/drivers/block/xen-blkback/xenbus.c#L258
> 
> So you are creating a guest with "xl create" in dom0 and you see the
> warnings below printed by the Dom0 kernel? I imagine the domU has a
> virtual "disk" of some sort.
> 
> 
> > I did some initial debugging and found that there are many calls to iommu_legacy_{,un}map() while creating the guest. When iommu_legacy_unmap() unmaps the pages, something is written wrongly to the page tables; because of that, the next time the same page is mapped via create_grant_host_mapping() we observe the warning below.
> 
> By "while creating a guest", do you mean before the domU is even
> unpaused? Hence, the calls are a result of dom0 operations? Or after
> domU is unpaused, hence, the calls are a result of domU operations
> (probably the domU simply trying to access its virtual disk)?
> Please note that you can start a guest paused with xl create -p.
> 
> Looking at the logs, it is probably the latter. The following line
> should be printed when the domU PV block frontend connects to the
> backend in dom0:
> 
> [  138.639934] xen-blkback: backend/vbd/0/51712: using 4 queues, protocol 1 (arm-abi) persistent grants
> 
> I'll see if I can repro the issue here.

I cannot repro the issue at my end. If there is anything interesting in
your setup that is necessary to reproduce the problem please share the
details.

Looking at the code, xen_blkif_disconnect is called by frontend_changed
on XenbusStateConnected. I don't know for sure why we need to call
xen_blkif_disconnect() before connecting, but I imagine it is to make sure
we are starting from a clean slate.

In the case of domU creation, I'd imagine the xen_blkif_disconnect() is
unnecessary. And, in fact, in my setup blkif->nr_rings is 0 so the loop
at the beginning of xen_blkif_disconnect is not even entered.

I wonder why it is not the same at your end. I don't know how it could
be that blkif->nr_rings is not 0 if you are just starting the VM for the
first time. Also, are you sure that this is related to this patch?

> > [  138.639934] xen-blkback: backend/vbd/0/51712: using 4 queues, protocol 1 (arm-abi) persistent grants
> > (XEN) gnttab_mark_dirty not implemented yet
> > [  138.659702] xen-blkback: backend/vbd/0/51712: using 4 queues, protocol 1 (arm-abi) persistent grants
> > [  138.669827] vbd vbd-0-51712: 9 mapping in shared page 8 from domain 0
> > [  138.676636] vbd vbd-0-51712: 9 mapping ring-ref port 5
> > [  138.682089] ------------[ cut here ]------------
> > [  138.686605] WARNING: CPU: 2 PID: 37 at drivers/block/xen-blkback/xenbus.c:296 xen_blkif_disconnect+0x20c/0x230
> > [  138.696668] Modules linked in: bridge stp llc ipv6 nf_defrag_ipv6
> > [  138.702833] CPU: 2 PID: 37 Comm: xenwatch Not tainted 5.4.0-yocto-standard #1
> > [  138.710037] Hardware name: Arm Neoverse N1 System Development Platform (DT)
> > [  138.717067] pstate: 80c00005 (Nzcv daif +PAN +UAO)
> > [  138.721927] pc : xen_blkif_disconnect+0x20c/0x230
> > [  138.726701] lr : xen_blkif_disconnect+0xbc/0x230
> > [  138.731388] sp : ffff800011cb3c80
> > [  138.734773] x29: ffff800011cb3c80 x28: ffff00005b6da940 
> > [  138.740156] x27: 0000000000000000 x26: 0000000000000000 
> > [  138.745536] x25: ffff000029755060 x24: 0000000000000170 
> > [  138.750919] x23: ffff000029755040 x22: ffff000059c72000 
> > [  138.756299] x21: 0000000000000000 x20: ffff000029755000 
> > [  138.761681] x19: 0000000000000001 x18: 0000000000000000 
> > [  138.767063] x17: 0000000000000000 x16: 0000000000000000 
> > [  138.772444] x15: 0000000000000000 x14: 0000000000000000 
> > [  138.777826] x13: 0000000000000000 x12: 0000000000000000 
> > [  138.783207] x11: 0000000000000001 x10: 0000000000000990 
> > [  138.788589] x9 : 0000000000000001 x8 : 0000000000210d00 
> > [  138.793971] x7 : 0000000000000018 x6 : ffff00005ddf72a0 
> > [  138.799352] x5 : ffff800011cb3c28 x4 : 0000000000000000 
> > [  138.804734] x3 : ffff000029755118 x2 : 0000000000000000 
> > [  138.810117] x1 : ffff000029755120 x0 : 0000000000000001 
> > [  138.815497] Call trace:
> > [  138.818015]  xen_blkif_disconnect+0x20c/0x230
> > [  138.822442]  frontend_changed+0x1b0/0x54c
> > [  138.826523]  xenbus_otherend_changed+0x80/0xb0
> > [  138.831035]  frontend_changed+0x10/0x20
> > [  138.834941]  xenwatch_thread+0x80/0x144
> > [  138.838849]  kthread+0x118/0x120
> > [  138.842147]  ret_from_fork+0x10/0x18
> > [  138.845791] ---[ end trace fb9f0a3b3b48a55f ]---
> 
--8323329-1516528875-1612904242=:8948--


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 21:50:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 21:50:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83442.155325 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9au9-0006Hc-R7; Tue, 09 Feb 2021 21:50:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83442.155325; Tue, 09 Feb 2021 21:50:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9au9-0006HV-OB; Tue, 09 Feb 2021 21:50:41 +0000
Received: by outflank-mailman (input) for mailman id 83442;
 Tue, 09 Feb 2021 21:50:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5GlY=HL=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l9au8-0006HQ-LN
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 21:50:40 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d1af808a-5596-47f7-b842-bae338eac303;
 Tue, 09 Feb 2021 21:50:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d1af808a-5596-47f7-b842-bae338eac303
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612907439;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=ARI5VWYbnY4OIGnzqdSyn1ER3PngI7//x6H6E5t8HQA=;
  b=bGOQPJRyfzmRIMH8ktn5/Jy8iFLLm1wCe4gW10hld/uEoK7k+9kdPzDi
   4pljENxJm8ycVqWpBZZUqlqwo45zh96HbRX02pNdYalOF65jVATLcU00l
   /v4txGHTcbEhdgla5KHE++ZIEVTGG6YrnTuJCym8Bav4y9dN0nprFxPaQ
   U=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: pgu4N++WhcU8+9Y8T3snB+4wQNwKfuevA/seMVLw5cnRDnNkr6lpI1/6QiadHzspLIXe4iU5VJ
 wzYpTFx5rxnSWlRkFhETv/Os64AQCc5y7Rh5CsAa3pMUusijVoKupWNtSg8TdYDCpdjMkj1MBP
 JRGMMLsKDeN8lkXAPVTZubKHJ7VhX0mMs5MGR0zXpk9Kbx7UkLFte3yz4ObOdzq5oQExgQGT0U
 8NLm0J3UIFRLNQEnIOUmncS1YArFONXMs2pbRhcpBF+ip+Rv4FvfDuHKYI9Z2rE7Sl2plx+6XF
 QSg=
X-SBRS: 5.1
X-MesageID: 37280526
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,166,1610427600"; 
   d="scan'208";a="37280526"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Ian Jackson
	<iwj@xenproject.org>
Subject: [PATCH for-4.15] x86/ucode/amd: Handle length sanity check failures more gracefully
Date: Tue, 9 Feb 2021 21:49:11 +0000
Message-ID: <20210209214911.18461-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Currently, a failure of verify_patch_size() causes an early abort of the
microcode blob loop, which in turn causes a second go around the main
container loop, ultimately failing the UCODE_MAGIC check.

First, check for errors after the blob loop.  An error here is unrecoverable,
so avoid going around the container loop again and printing an
unhelpful-at-best error concerning bad UCODE_MAGIC.

Second, split the verify_patch_size() check out of the microcode blob header
check.  In the case that the sanity check fails, we can still use the
known-to-be-plausible header length to continue walking the container to
potentially find other applicable microcode blobs.

Before:
  (XEN) microcode: Bad microcode data
  (XEN) microcode: Wrong microcode patch file magic
  (XEN) Parsing microcode blob error -22

After:
  (XEN) microcode: Bad microcode length 0x000015c0 for cpu 0xa000
  (XEN) microcode: Bad microcode length 0x000015c0 for cpu 0xa010
  (XEN) microcode: Bad microcode length 0x000015c0 for cpu 0xa011
  (XEN) microcode: Bad microcode length 0x000015c0 for cpu 0xa200
  (XEN) microcode: Bad microcode length 0x000015c0 for cpu 0xa210
  (XEN) microcode: Bad microcode length 0x000015c0 for cpu 0xa500
  (XEN) microcode: couldn't find any matching ucode in the provided blob!

Fixes: 4de936a38a ("x86/ucode/amd: Rework parsing logic in cpu_request_microcode()")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Ian Jackson <iwj@xenproject.org>

For 4.15.  Found when putting a test together to prove the correctness of the
"x86/ucode: Fix microcode payload size for Fam19 processors" fix.

This allows microcode loading to still function even if the length magic
numbers aren't correct for a subset of blobs within the container(s).
---
 xen/arch/x86/cpu/microcode/amd.c | 24 ++++++++++++++++++++++--
 1 file changed, 22 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/cpu/microcode/amd.c b/xen/arch/x86/cpu/microcode/amd.c
index c4ab395799..fe7b79bd0a 100644
--- a/xen/arch/x86/cpu/microcode/amd.c
+++ b/xen/arch/x86/cpu/microcode/amd.c
@@ -348,8 +348,7 @@ static struct microcode_patch *cpu_request_microcode(const void *buf, size_t siz
 
             if ( size < sizeof(*mc) ||
                  (mc = buf)->type != UCODE_UCODE_TYPE ||
-                 size - sizeof(*mc) < mc->len ||
-                 (!skip_ucode && !verify_patch_size(mc->len)) )
+                 size - sizeof(*mc) < mc->len )
             {
                 printk(XENLOG_ERR "microcode: Bad microcode data\n");
                 error = -EINVAL;
@@ -359,6 +358,19 @@ static struct microcode_patch *cpu_request_microcode(const void *buf, size_t siz
             if ( skip_ucode )
                 goto skip;
 
+            if ( !verify_patch_size(mc->len) )
+            {
+                printk(XENLOG_WARNING
+                       "microcode: Bad microcode length 0x%08x for cpu 0x%04x\n",
+                       mc->len, mc->patch->processor_rev_id);
+                /*
+                 * If the blob size sanity check fails, trust the container
+                 * length which has already been checked to be at least
+                 * plausible at this point.
+                 */
+                goto skip;
+            }
+
             /*
              * If the new ucode covers current CPU, compare ucodes and store the
              * one with higher revision.
@@ -382,6 +394,14 @@ static struct microcode_patch *cpu_request_microcode(const void *buf, size_t siz
             if ( size >= 4 && *(const uint32_t *)buf == UCODE_MAGIC )
                 break;
         }
+
+        /*
+         * Any error means we didn't get cleanly to the end of the microcode
+         * container.  There isn't an overall length field, so we've got no
+         * way of skipping to the next container in the stream.
+         */
+        if ( error )
+            break;
     }
 
     if ( saved )
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Feb 09 22:13:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 22:13:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83444.155338 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9bG4-0008Ci-Ls; Tue, 09 Feb 2021 22:13:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83444.155338; Tue, 09 Feb 2021 22:13:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9bG4-0008Cb-Ip; Tue, 09 Feb 2021 22:13:20 +0000
Received: by outflank-mailman (input) for mailman id 83444;
 Tue, 09 Feb 2021 22:13:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9bG3-0008CR-H7; Tue, 09 Feb 2021 22:13:19 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9bG3-0008Mv-B1; Tue, 09 Feb 2021 22:13:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9bG2-00054l-Vs; Tue, 09 Feb 2021 22:13:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l9bG2-0004L5-VM; Tue, 09 Feb 2021 22:13:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=R/Y/WL0H/1HEPq0FxFXxseKL5jpZIDulg/xBv26pxCQ=; b=YGYPO+1jJJUOFeOlGn8zpa92HX
	zAvSqhLkHtQQFv7A5KBo8F0CAJ+JfToNrWJMqX6iq6XfqKlaX35wtNPsNd/TuYmOtcD6nC6VcRxes
	kyidSLtE+pp4HQiRNA3zf4O+7BWyasmTbHPFu/DEII7ZgFzKiExYujwPlHb7jNWEkq5E=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159184-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 159184: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:guest-start/debian.repeat:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=687121f8a0e7c1ea1c4fa3d056637e5819342f14
X-Osstest-Versions-That:
    xen=f18309eb06efd1db5a2ab9903a1c246b6299951a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 09 Feb 2021 22:13:18 +0000

flight 159184 xen-unstable-smoke real [real]
flight 159188 xen-unstable-smoke real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/159184/
http://logs.test-lab.xenproject.org/osstest/logs/159188/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm     18 guest-start/debian.repeat fail REGR. vs. 159146

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  687121f8a0e7c1ea1c4fa3d056637e5819342f14
baseline version:
 xen                  f18309eb06efd1db5a2ab9903a1c246b6299951a

Last test of basis   159146  2021-02-08 19:00:27 Z    1 days
Testing same since   159184  2021-02-09 18:01:28 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <iwj@xenproject.org>
  Olaf Hering <olaf@aepfle.de>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 687121f8a0e7c1ea1c4fa3d056637e5819342f14
Author: Olaf Hering <olaf@aepfle.de>
Date:   Tue Feb 9 16:45:35 2021 +0100

    xl: optionally print timestamps when running xl commands
    
    Add a global option "-T" to xl to enable timestamps in the output from
    libxl and libxc. This is most useful with long running commands such
    as "migrate".
    
    During 'xl -v.. migrate domU host' a large amount of debug output is
    generated. It is difficult to map each line to the sending and
    receiving side. Also the time spent on migration is not reported.
    
    With 'xl -T migrate domU host' both sides will print timestamps and
    also the pid of the invoked xl process to make it more obvious which
    side produced a given log line.
    
    Note: depending on the command, xl itself also produces other output
    which does not go through libxentoollog. As a result such output will
    not have timestamps prepended.
    
    This change also adds the missing "-t" flag to "xl help" output.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>
    Reviewed-by: Ian Jackson <iwj@xenproject.org>

commit 7a321c3676250aac5bacb1ae8d7dd22bfe8b1448
Author: Olaf Hering <olaf@aepfle.de>
Date:   Tue Feb 9 16:45:34 2021 +0100

    tools: add with-xen-scriptdir configure option
    
    Some distros plan that fresh installations will have an empty /etc,
    whose content will no longer be controlled by the package manager.
    
    To make this possible, add a knob to configure to allow storing the
    hotplug scripts to libexec instead of /etc/xen/scripts.
    
    The current default remains unchanged, which is /etc/xen/scripts.
    
    [autoconf rerun -iwj]
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>
    Reviewed-by: Ian Jackson <iwj@xenproject.org>

commit fe9ba142c03a2046def52cfd5864f5a89172bf5c
Author: Olaf Hering <olaf@aepfle.de>
Date:   Tue Feb 9 16:45:33 2021 +0100

    tools: move CONFIG_DIR and XEN_CONFIG_DIR in paths.m4
    
    Upcoming changes need to reuse XEN_CONFIG_DIR.
    
    In its current location the assignment happens too late. Move it up
    in the file, along with CONFIG_DIR. Their only dependency is
    sysconfdir, which may also be adjusted in this file.
    
    No functional change intended.
    
    [autoconf rerun -iwj]
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Reviewed-by: Ian Jackson <iwj@xenproject.org>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit 548ba7c85c6d80a671c2abb8681c29bc85c616f3
Author: Ian Jackson <iwj@xenproject.org>
Date:   Tue Feb 9 17:05:54 2021 +0000

    tools: Regenerate autoconf
    
    This seems to have been omitted in many recent commits.  The earliest
    of which are, according to git-bisect:
      154137dfdba3  stubdom/configure      stubdom: add xenstore pvh stubdom
      cc83ee4c6c37  all configure scripts  NetBSD: Fix lock directory path
    but it seems that this is true of several later commits too.
    
    Release status: I consider this discrepancy a release critical bug.
    
    Signed-off-by: Ian Jackson <iwj@xenproject.org>
    Release-acked-by: Ian Jackson <iwj@xenproject.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Feb 09 22:21:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 Feb 2021 22:21:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83449.155353 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9bNm-0000kb-Gp; Tue, 09 Feb 2021 22:21:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83449.155353; Tue, 09 Feb 2021 22:21:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9bNm-0000kU-Co; Tue, 09 Feb 2021 22:21:18 +0000
Received: by outflank-mailman (input) for mailman id 83449;
 Tue, 09 Feb 2021 22:21:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) id 1l9bNl-0000k6-2v
 for xen-devel@lists.xenproject.org; Tue, 09 Feb 2021 22:21:17 +0000
Received: from mail-pf1-x435.google.com (unknown [2607:f8b0:4864:20::435])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9dc70a99-fe7e-4d42-bda6-f1e1e43f412c;
 Tue, 09 Feb 2021 22:21:15 +0000 (UTC)
Received: by mail-pf1-x435.google.com with SMTP id t29so12793874pfg.11
 for <xen-devel@lists.xenproject.org>; Tue, 09 Feb 2021 14:21:15 -0800 (PST)
Received: from sc2-haas01-esx0118.eng.vmware.com ([66.170.99.1])
 by smtp.gmail.com with ESMTPSA id v9sm58601pju.33.2021.02.09.14.21.12
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 09 Feb 2021 14:21:13 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9dc70a99-fe7e-4d42-bda6-f1e1e43f412c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=HjSiRe9FHCVaGCF52pKJiLElvBu049No/wtLMDAICXE=;
        b=ZPccq1cXTR6T3lt5wME8WuvLAa5mZge5Mw8wXRyQz0ElTK2dM1mAnAYXOysMgOvFTF
         M1Xw0/LEh9dW5Qkrenqg0x0Lv+vYZsYEyKugTwXm+DqKUU9BR7UzcXPZmO9XadLkecQv
         ApW9qLmbiPGG6qIUhUfOsDjkvmBBT3d1bIuWRorvesu1dHc8b0GYSrflEf1HFbYKj3Ps
         33QK4jBS3ZO5kHS9hN+Jy1KJAsWXHDkIkgvZxzA1ZM4RRCcfTprdS7m3wectpcSsxT5g
         gsp5xUecBmhB2lJcNcmkxhi49e7wSClMUrfkE5Gmy18NkLMWcoXbkUDEDX4QE6sQ9x5B
         IO6g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=HjSiRe9FHCVaGCF52pKJiLElvBu049No/wtLMDAICXE=;
        b=OQSl+fzNLuOpPgz9/JjDmEf58dw18DjOmum3B4/NoJtKVzwgpXsARqWj01NBYtDvr0
         8UarNOWyW8DdVZcA05hGQKiokrWS//Cp9XvuPerw0p1zvr2SQSUOcGQe7C+FOJTvcnoS
         tqCE/MMQWaS171gk4EGeczk5tYJTEah6Ien7F2PDaNfof/IYMma1a2yMSkFS6u1GFahG
         HBCDIg6eQVRTvvyhgIpXDOBuZnfYq7fpDFd6o49SuyKXqQ8ewrTsO+0QWZp123UTnDi+
         UGKj+rqp7l9JheFwEbLk6LSGFV/0Vpw3dS43bqpUGXeZO8UWt0X9MHjQS9AL+vkiKwlC
         MJtg==
X-Gm-Message-State: AOAM532jGkbMuqIeLUGm97DqhG0vuNt4F+YCznY3JGilUTBfzJTcjgV1
	1+ZvmdTfu2PIIV8DetofyPQ=
X-Google-Smtp-Source: ABdhPJz3LfnWDYq7JINpREZUWfE+W5NwAF3qi+h7lRxojWAtxXFLj2+kpLvcsoQwwfGFKhWJBLAhPg==
X-Received: by 2002:a63:545f:: with SMTP id e31mr136017pgm.212.1612909274776;
        Tue, 09 Feb 2021 14:21:14 -0800 (PST)
From: Nadav Amit <nadav.amit@gmail.com>
X-Google-Original-From: Nadav Amit
To: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Nadav Amit <namit@vmware.com>,
	Borislav Petkov <bp@alien8.de>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Ingo Molnar <mingo@redhat.com>,
	Josh Poimboeuf <jpoimboe@redhat.com>,
	Juergen Gross <jgross@suse.com>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Rik van Riel <riel@surriel.com>,
	Sasha Levin <sashal@kernel.org>,
	Stephen Hemminger <sthemmin@microsoft.com>,
	kvm@vger.kernel.org,
	linux-hyperv@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	x86@kernel.org,
	xen-devel@lists.xenproject.org
Subject: [PATCH v5 0/8] x86/tlb: Concurrent TLB flushes
Date: Tue,  9 Feb 2021 14:16:45 -0800
Message-Id: <20210209221653.614098-1-namit@vmware.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Nadav Amit <namit@vmware.com>

This is a respin of a rebased version of an old series which I did not
follow up on, as I was preoccupied with personal issues (sorry).

The series improves TLB shootdown by flushing the local TLB concurrently
with the remote TLBs, overlapping the IPI delivery time with the local
flush. Performance numbers can be found in the previous version [1].

The patches are essentially the same, but rebasing on the last version
required some changes. I left the reviewed-by tags - if anyone considers
it inappropriate, please let me know (and you have my apology).

[1] https://lore.kernel.org/lkml/20190823224153.15223-1-namit@vmware.com/

v4 -> v5:
* Rebase on 5.11
* Move concurrent smp logic to smp_call_function_many_cond() 
* Remove SGI-UV patch which is not needed anymore

v3 -> v4:
* Merge flush_tlb_func_local and flush_tlb_func_remote() [Peter]
* Prevent preemption in on_each_cpu(). It is not strictly needed, but it
  avoids concerns. [Peter/tglx]
* Adding acked-, review-by tags

v2 -> v3:
* Open-code the remote/local-flush decision code [Andy]
* Fix hyper-v, Xen implementations [Andrew]
* Fix redundant TLB flushes.

v1 -> v2:
* Removing the patches that Thomas took [tglx]
* Adding hyper-v, Xen compile-tested implementations [Dave]
* Removing UV [Andy]
* Adding lazy optimization, removing inline keyword [Dave]
* Restructuring patch-set

RFCv2 -> v1:
* Fix comment on flush_tlb_multi [Juergen]
* Removing async invalidation optimizations [Andy]
* Adding KVM support [Paolo]

Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Haiyang Zhang <haiyangz@microsoft.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: "K. Y. Srinivasan" <kys@microsoft.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: Sasha Levin <sashal@kernel.org>
Cc: Stephen Hemminger <sthemmin@microsoft.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: kvm@vger.kernel.org
Cc: linux-hyperv@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: virtualization@lists.linux-foundation.org
Cc: x86@kernel.org
Cc: xen-devel@lists.xenproject.org

Nadav Amit (8):
  smp: Run functions concurrently in smp_call_function_many_cond()
  x86/mm/tlb: Unify flush_tlb_func_local() and flush_tlb_func_remote()
  x86/mm/tlb: Open-code on_each_cpu_cond_mask() for tlb_is_not_lazy()
  x86/mm/tlb: Flush remote and local TLBs concurrently
  x86/mm/tlb: Privatize cpu_tlbstate
  x86/mm/tlb: Do not make is_lazy dirty for no reason
  cpumask: Mark functions as pure
  x86/mm/tlb: Remove unnecessary uses of the inline keyword

 arch/x86/hyperv/mmu.c                 |  10 +-
 arch/x86/include/asm/paravirt.h       |   6 +-
 arch/x86/include/asm/paravirt_types.h |   4 +-
 arch/x86/include/asm/tlbflush.h       |  48 +++----
 arch/x86/include/asm/trace/hyperv.h   |   2 +-
 arch/x86/kernel/alternative.c         |   2 +-
 arch/x86/kernel/kvm.c                 |  11 +-
 arch/x86/mm/init.c                    |   2 +-
 arch/x86/mm/tlb.c                     | 177 +++++++++++++++-----------
 arch/x86/xen/mmu_pv.c                 |  11 +-
 include/linux/cpumask.h               |   6 +-
 include/trace/events/xen.h            |   2 +-
 kernel/smp.c                          | 148 +++++++++++----------
 13 files changed, 242 insertions(+), 187 deletions(-)

-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Feb 09 22:21:24 2021
From: Nadav Amit <nadav.amit@gmail.com>
X-Google-Original-From: Nadav Amit
To: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Nadav Amit <namit@vmware.com>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Stephen Hemminger <sthemmin@microsoft.com>,
	Sasha Levin <sashal@kernel.org>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	x86@kernel.org,
	Juergen Gross <jgross@suse.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	linux-hyperv@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	kvm@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	Michael Kelley <mikelley@microsoft.com>
Subject: [PATCH v5 4/8] x86/mm/tlb: Flush remote and local TLBs concurrently
Date: Tue,  9 Feb 2021 14:16:49 -0800
Message-Id: <20210209221653.614098-5-namit@vmware.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210209221653.614098-1-namit@vmware.com>
References: <20210209221653.614098-1-namit@vmware.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Nadav Amit <namit@vmware.com>

To improve TLB shootdown performance, flush the remote and local TLBs
concurrently. Introduce flush_tlb_multi() that does so. Introduce
paravirtual versions of flush_tlb_multi() for KVM, Xen and Hyper-V (the
Xen and Hyper-V versions are only compile-tested).

While the updated smp infrastructure is capable of running a function on
a single local core, it is not optimized for this case. The multiple
function calls and the indirect branch introduce some overhead, and
might make local TLB flushes slower than they were before the recent
changes.

Before calling the SMP infrastructure, check if only a local TLB flush
is needed to restore the lost performance in this common case. This
requires checking mm_cpumask() one more time, but unless this mask is
updated very frequently, this should not impact performance negatively.
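The dispatch decision described above can be sketched in userspace C (a simplified model under assumptions: `mm_cpumask()` reduced to a 64-bit bitmap, and helper names that are hypothetical, not the kernel's):

```c
#include <stdint.h>
#include <stdbool.h>

/* Sketch: decide which flush path flush_tlb_mm_range() should take.
 * mask models mm_cpumask(mm) as a 64-bit bitmap; cpu is this CPU. */
enum flush_path { FLUSH_NONE, FLUSH_LOCAL_ONLY, FLUSH_MULTI };

/* True if any CPU other than 'cpu' is set in the mask
 * (the cpumask_any_but() test in the patch). */
static bool any_other_cpu(uint64_t mask, int cpu)
{
    return (mask & ~(1ULL << cpu)) != 0;
}

enum flush_path pick_flush_path(uint64_t mask, int cpu, bool mm_is_loaded)
{
    if (any_other_cpu(mask, cpu))
        return FLUSH_MULTI;        /* IPIs + local flush, concurrently */
    if (mm_is_loaded)
        return FLUSH_LOCAL_ONLY;   /* skip the SMP machinery entirely */
    return FLUSH_NONE;
}
```

The extra `mm_cpumask()` read buys a fast path that bypasses the SMP call machinery whenever the mm is only loaded on the current CPU.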

Cc: "K. Y. Srinivasan" <kys@microsoft.com>
Cc: Haiyang Zhang <haiyangz@microsoft.com>
Cc: Stephen Hemminger <sthemmin@microsoft.com>
Cc: Sasha Levin <sashal@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: x86@kernel.org
Cc: Juergen Gross <jgross@suse.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: linux-hyperv@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: virtualization@lists.linux-foundation.org
Cc: kvm@vger.kernel.org
Cc: xen-devel@lists.xenproject.org
Reviewed-by: Michael Kelley <mikelley@microsoft.com> # Hyper-v parts
Reviewed-by: Juergen Gross <jgross@suse.com> # Xen and paravirt parts
Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Nadav Amit <namit@vmware.com>
---
 arch/x86/hyperv/mmu.c                 | 10 +++---
 arch/x86/include/asm/paravirt.h       |  6 ++--
 arch/x86/include/asm/paravirt_types.h |  4 +--
 arch/x86/include/asm/tlbflush.h       |  4 +--
 arch/x86/include/asm/trace/hyperv.h   |  2 +-
 arch/x86/kernel/kvm.c                 | 11 ++++--
 arch/x86/mm/tlb.c                     | 49 +++++++++++++++++----------
 arch/x86/xen/mmu_pv.c                 | 11 +++---
 include/trace/events/xen.h            |  2 +-
 9 files changed, 58 insertions(+), 41 deletions(-)

diff --git a/arch/x86/hyperv/mmu.c b/arch/x86/hyperv/mmu.c
index 2c87350c1fb0..681dba8de4f2 100644
--- a/arch/x86/hyperv/mmu.c
+++ b/arch/x86/hyperv/mmu.c
@@ -52,8 +52,8 @@ static inline int fill_gva_list(u64 gva_list[], int offset,
 	return gva_n - offset;
 }
 
-static void hyperv_flush_tlb_others(const struct cpumask *cpus,
-				    const struct flush_tlb_info *info)
+static void hyperv_flush_tlb_multi(const struct cpumask *cpus,
+				   const struct flush_tlb_info *info)
 {
 	int cpu, vcpu, gva_n, max_gvas;
 	struct hv_tlb_flush **flush_pcpu;
@@ -61,7 +61,7 @@ static void hyperv_flush_tlb_others(const struct cpumask *cpus,
 	u64 status = U64_MAX;
 	unsigned long flags;
 
-	trace_hyperv_mmu_flush_tlb_others(cpus, info);
+	trace_hyperv_mmu_flush_tlb_multi(cpus, info);
 
 	if (!hv_hypercall_pg)
 		goto do_native;
@@ -164,7 +164,7 @@ static void hyperv_flush_tlb_others(const struct cpumask *cpus,
 	if (!(status & HV_HYPERCALL_RESULT_MASK))
 		return;
 do_native:
-	native_flush_tlb_others(cpus, info);
+	native_flush_tlb_multi(cpus, info);
 }
 
 static u64 hyperv_flush_tlb_others_ex(const struct cpumask *cpus,
@@ -239,6 +239,6 @@ void hyperv_setup_mmu_ops(void)
 		return;
 
 	pr_info("Using hypercall for remote TLB flush\n");
-	pv_ops.mmu.flush_tlb_others = hyperv_flush_tlb_others;
+	pv_ops.mmu.flush_tlb_multi = hyperv_flush_tlb_multi;
 	pv_ops.mmu.tlb_remove_table = tlb_remove_table;
 }
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index f8dce11d2bc1..515e49204c8b 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -50,7 +50,7 @@ static inline void slow_down_io(void)
 void native_flush_tlb_local(void);
 void native_flush_tlb_global(void);
 void native_flush_tlb_one_user(unsigned long addr);
-void native_flush_tlb_others(const struct cpumask *cpumask,
+void native_flush_tlb_multi(const struct cpumask *cpumask,
 			     const struct flush_tlb_info *info);
 
 static inline void __flush_tlb_local(void)
@@ -68,10 +68,10 @@ static inline void __flush_tlb_one_user(unsigned long addr)
 	PVOP_VCALL1(mmu.flush_tlb_one_user, addr);
 }
 
-static inline void __flush_tlb_others(const struct cpumask *cpumask,
+static inline void __flush_tlb_multi(const struct cpumask *cpumask,
 				      const struct flush_tlb_info *info)
 {
-	PVOP_VCALL2(mmu.flush_tlb_others, cpumask, info);
+	PVOP_VCALL2(mmu.flush_tlb_multi, cpumask, info);
 }
 
 static inline void paravirt_tlb_remove_table(struct mmu_gather *tlb, void *table)
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index b6b02b7c19cc..541fe7193526 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -201,8 +201,8 @@ struct pv_mmu_ops {
 	void (*flush_tlb_user)(void);
 	void (*flush_tlb_kernel)(void);
 	void (*flush_tlb_one_user)(unsigned long addr);
-	void (*flush_tlb_others)(const struct cpumask *cpus,
-				 const struct flush_tlb_info *info);
+	void (*flush_tlb_multi)(const struct cpumask *cpus,
+				const struct flush_tlb_info *info);
 
 	void (*tlb_remove_table)(struct mmu_gather *tlb, void *table);
 
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index a7a598af116d..3c6681def912 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -175,7 +175,7 @@ extern void initialize_tlbstate_and_flush(void);
  *  - flush_tlb_page(vma, vmaddr) flushes one page
  *  - flush_tlb_range(vma, start, end) flushes a range of pages
  *  - flush_tlb_kernel_range(start, end) flushes a range of kernel pages
- *  - flush_tlb_others(cpumask, info) flushes TLBs on other cpus
+ *  - flush_tlb_multi(cpumask, info) flushes TLBs on multiple cpus
  *
  * ..but the i386 has somewhat limited tlb flushing capabilities,
  * and page-granular flushes are available only on i486 and up.
@@ -209,7 +209,7 @@ struct flush_tlb_info {
 void flush_tlb_local(void);
 void flush_tlb_one_user(unsigned long addr);
 void flush_tlb_one_kernel(unsigned long addr);
-void flush_tlb_others(const struct cpumask *cpumask,
+void flush_tlb_multi(const struct cpumask *cpumask,
 		      const struct flush_tlb_info *info);
 
 #ifdef CONFIG_PARAVIRT
diff --git a/arch/x86/include/asm/trace/hyperv.h b/arch/x86/include/asm/trace/hyperv.h
index 4d705cb4d63b..a8e5a7a2b460 100644
--- a/arch/x86/include/asm/trace/hyperv.h
+++ b/arch/x86/include/asm/trace/hyperv.h
@@ -8,7 +8,7 @@
 
 #if IS_ENABLED(CONFIG_HYPERV)
 
-TRACE_EVENT(hyperv_mmu_flush_tlb_others,
+TRACE_EVENT(hyperv_mmu_flush_tlb_multi,
 	    TP_PROTO(const struct cpumask *cpus,
 		     const struct flush_tlb_info *info),
 	    TP_ARGS(cpus, info),
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 5e78e01ca3b4..38ea9dee2456 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -613,7 +613,7 @@ static int kvm_cpu_down_prepare(unsigned int cpu)
 }
 #endif
 
-static void kvm_flush_tlb_others(const struct cpumask *cpumask,
+static void kvm_flush_tlb_multi(const struct cpumask *cpumask,
 			const struct flush_tlb_info *info)
 {
 	u8 state;
@@ -627,6 +627,11 @@ static void kvm_flush_tlb_others(const struct cpumask *cpumask,
 	 * queue flush_on_enter for pre-empted vCPUs
 	 */
 	for_each_cpu(cpu, flushmask) {
+		/*
+		 * The local vCPU is never preempted, so there is no need to
+		 * explicitly skip it here - it will never be cleared from
+		 * flushmask.
+		 */
 		src = &per_cpu(steal_time, cpu);
 		state = READ_ONCE(src->preempted);
 		if ((state & KVM_VCPU_PREEMPTED)) {
@@ -636,7 +641,7 @@ static void kvm_flush_tlb_others(const struct cpumask *cpumask,
 		}
 	}
 
-	native_flush_tlb_others(flushmask, info);
+	native_flush_tlb_multi(flushmask, info);
 }
 
 static void __init kvm_guest_init(void)
@@ -654,7 +659,7 @@ static void __init kvm_guest_init(void)
 	}
 
 	if (pv_tlb_flush_supported()) {
-		pv_ops.mmu.flush_tlb_others = kvm_flush_tlb_others;
+		pv_ops.mmu.flush_tlb_multi = kvm_flush_tlb_multi;
 		pv_ops.mmu.tlb_remove_table = tlb_remove_table;
 		pr_info("KVM setup pv remote TLB flush\n");
 	}
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 07b6701a540a..78fcbd58716e 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -24,7 +24,7 @@
 # define __flush_tlb_local		native_flush_tlb_local
 # define __flush_tlb_global		native_flush_tlb_global
 # define __flush_tlb_one_user(addr)	native_flush_tlb_one_user(addr)
-# define __flush_tlb_others(msk, info)	native_flush_tlb_others(msk, info)
+# define __flush_tlb_multi(msk, info)	native_flush_tlb_multi(msk, info)
 #endif
 
 /*
@@ -490,7 +490,7 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 		/*
 		 * Even in lazy TLB mode, the CPU should stay set in the
 		 * mm_cpumask. The TLB shootdown code can figure out from
-		 * from cpu_tlbstate.is_lazy whether or not to send an IPI.
+		 * cpu_tlbstate.is_lazy whether or not to send an IPI.
 		 */
 		if (WARN_ON_ONCE(real_prev != &init_mm &&
 				 !cpumask_test_cpu(cpu, mm_cpumask(next))))
@@ -697,7 +697,7 @@ static void flush_tlb_func(void *info)
 		 * garbage into our TLB.  Since switching to init_mm is barely
 		 * slower than a minimal flush, just switch to init_mm.
 		 *
-		 * This should be rare, with native_flush_tlb_others skipping
+		 * This should be rare, with native_flush_tlb_multi() skipping
 		 * IPIs to lazy TLB mode CPUs.
 		 */
 		switch_mm_irqs_off(NULL, &init_mm, NULL);
@@ -795,9 +795,14 @@ static bool tlb_is_not_lazy(int cpu)
 
 static DEFINE_PER_CPU(cpumask_t, flush_tlb_mask);
 
-STATIC_NOPV void native_flush_tlb_others(const struct cpumask *cpumask,
+STATIC_NOPV void native_flush_tlb_multi(const struct cpumask *cpumask,
 					 const struct flush_tlb_info *info)
 {
+	/*
+	 * Do accounting and tracing. Note that there are (and have always been)
+	 * cases in which a remote TLB flush is traced but does not
+	 * eventually happen.
+	 */
 	count_vm_tlb_event(NR_TLB_REMOTE_FLUSH);
 	if (info->end == TLB_FLUSH_ALL)
 		trace_tlb_flush(TLB_REMOTE_SEND_IPI, TLB_FLUSH_ALL);
@@ -816,8 +821,8 @@ STATIC_NOPV void native_flush_tlb_others(const struct cpumask *cpumask,
 	 * doing a speculative memory access.
 	 */
 	if (info->freed_tables) {
-		smp_call_function_many(cpumask, flush_tlb_func,
-			       (void *)info, 1);
+		on_each_cpu_cond_mask(NULL, flush_tlb_func, (void *)info, true,
+				      cpumask);
 	} else {
 		/*
 		 * Although we could have used on_each_cpu_cond_mask(),
@@ -844,14 +849,15 @@ STATIC_NOPV void native_flush_tlb_others(const struct cpumask *cpumask,
 			if (tlb_is_not_lazy(cpu))
 				__cpumask_set_cpu(cpu, cond_cpumask);
 		}
-		smp_call_function_many(cond_cpumask, flush_tlb_func, (void *)info, 1);
+		on_each_cpu_cond_mask(NULL, flush_tlb_func, (void *)info, true,
+				      cond_cpumask);
 	}
 }
 
-void flush_tlb_others(const struct cpumask *cpumask,
+void flush_tlb_multi(const struct cpumask *cpumask,
 		      const struct flush_tlb_info *info)
 {
-	__flush_tlb_others(cpumask, info);
+	__flush_tlb_multi(cpumask, info);
 }
 
 /*
@@ -931,16 +937,20 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 	info = get_flush_tlb_info(mm, start, end, stride_shift, freed_tables,
 				  new_tlb_gen);
 
-	if (mm == this_cpu_read(cpu_tlbstate.loaded_mm)) {
+	/*
+	 * flush_tlb_multi() is not optimized for the common case in which only
+	 * a local TLB flush is needed. Optimize this use case by calling
+	 * flush_tlb_func() directly in this case.
+	 */
+	if (cpumask_any_but(mm_cpumask(mm), cpu) < nr_cpu_ids) {
+		flush_tlb_multi(mm_cpumask(mm), info);
+	} else if (mm == this_cpu_read(cpu_tlbstate.loaded_mm)) {
 		lockdep_assert_irqs_enabled();
 		local_irq_disable();
 		flush_tlb_func(info);
 		local_irq_enable();
 	}
 
-	if (cpumask_any_but(mm_cpumask(mm), cpu) < nr_cpu_ids)
-		flush_tlb_others(mm_cpumask(mm), info);
-
 	put_flush_tlb_info();
 	put_cpu();
 }
@@ -1151,17 +1161,21 @@ void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
 
 	int cpu = get_cpu();
 
 	info = get_flush_tlb_info(NULL, 0, TLB_FLUSH_ALL, 0, false, 0);
-	if (cpumask_test_cpu(cpu, &batch->cpumask)) {
+	/*
+	 * flush_tlb_multi() is not optimized for the common case in which only
+	 * a local TLB flush is needed. Optimize this use case by calling
+	 * flush_tlb_func() directly in this case.
+	 */
+	if (cpumask_any_but(&batch->cpumask, cpu) < nr_cpu_ids) {
+		flush_tlb_multi(&batch->cpumask, info);
+	} else if (cpumask_test_cpu(cpu, &batch->cpumask)) {
 		lockdep_assert_irqs_enabled();
 		local_irq_disable();
 		flush_tlb_func(info);
 		local_irq_enable();
 	}
 
-	if (cpumask_any_but(&batch->cpumask, cpu) < nr_cpu_ids)
-		flush_tlb_others(&batch->cpumask, info);
-
 	cpumask_clear(&batch->cpumask);
 
 	put_flush_tlb_info();
diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
index cf2ade864c30..09b95c0e876e 100644
--- a/arch/x86/xen/mmu_pv.c
+++ b/arch/x86/xen/mmu_pv.c
@@ -1247,8 +1247,8 @@ static void xen_flush_tlb_one_user(unsigned long addr)
 	preempt_enable();
 }
 
-static void xen_flush_tlb_others(const struct cpumask *cpus,
-				 const struct flush_tlb_info *info)
+static void xen_flush_tlb_multi(const struct cpumask *cpus,
+				const struct flush_tlb_info *info)
 {
 	struct {
 		struct mmuext_op op;
@@ -1258,7 +1258,7 @@ static void xen_flush_tlb_others(const struct cpumask *cpus,
 	const size_t mc_entry_size = sizeof(args->op) +
 		sizeof(args->mask[0]) * BITS_TO_LONGS(num_possible_cpus());
 
-	trace_xen_mmu_flush_tlb_others(cpus, info->mm, info->start, info->end);
+	trace_xen_mmu_flush_tlb_multi(cpus, info->mm, info->start, info->end);
 
 	if (cpumask_empty(cpus))
 		return;		/* nothing to do */
@@ -1267,9 +1267,8 @@ static void xen_flush_tlb_others(const struct cpumask *cpus,
 	args = mcs.args;
 	args->op.arg2.vcpumask = to_cpumask(args->mask);
 
-	/* Remove us, and any offline CPUS. */
+	/* Remove any offline CPUs */
 	cpumask_and(to_cpumask(args->mask), cpus, cpu_online_mask);
-	cpumask_clear_cpu(smp_processor_id(), to_cpumask(args->mask));
 
 	args->op.cmd = MMUEXT_TLB_FLUSH_MULTI;
 	if (info->end != TLB_FLUSH_ALL &&
@@ -2086,7 +2085,7 @@ static const struct pv_mmu_ops xen_mmu_ops __initconst = {
 	.flush_tlb_user = xen_flush_tlb,
 	.flush_tlb_kernel = xen_flush_tlb,
 	.flush_tlb_one_user = xen_flush_tlb_one_user,
-	.flush_tlb_others = xen_flush_tlb_others,
+	.flush_tlb_multi = xen_flush_tlb_multi,
 	.tlb_remove_table = tlb_remove_table,
 
 	.pgd_alloc = xen_pgd_alloc,
diff --git a/include/trace/events/xen.h b/include/trace/events/xen.h
index 3b61b587e137..44a3f565264d 100644
--- a/include/trace/events/xen.h
+++ b/include/trace/events/xen.h
@@ -346,7 +346,7 @@ TRACE_EVENT(xen_mmu_flush_tlb_one_user,
 	    TP_printk("addr %lx", __entry->addr)
 	);
 
-TRACE_EVENT(xen_mmu_flush_tlb_others,
+TRACE_EVENT(xen_mmu_flush_tlb_multi,
 	    TP_PROTO(const struct cpumask *cpus, struct mm_struct *mm,
 		     unsigned long addr, unsigned long end),
 	    TP_ARGS(cpus, mm, addr, end),
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Feb 09 23:41:02 2021
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Ian Jackson
	<iwj@xenproject.org>
Subject: [PATCH for-4.15] x86/ucode/amd: Fix OoB read in cpu_request_microcode()
Date: Tue, 9 Feb 2021 23:40:19 +0000
Message-ID: <20210209234019.3827-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

verify_patch_size() is a maximum size check, and doesn't have a minimum bound.

If the microcode container encodes a blob with a length less than 64 bytes,
the subsequent calls to microcode_fits()/compare_header() may read off the end
of the buffer.
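The validation order the fix establishes - reject a blob whose declared length is smaller than the fixed patch header before any header field is dereferenced - can be illustrated with a userspace sketch (assumed sizes and names for illustration only, not Xen's actual structures):

```c
#include <stdbool.h>
#include <stddef.h>

/* MC_HDR_SIZE stands in for sizeof(struct microcode_patch) (64 bytes on
 * AMD); it is an assumption for illustration, not Xen's definition. */
#define MC_HDR_SIZE 64

static bool blob_len_ok(size_t container_remaining, size_t blob_len,
                        size_t max_patch_size)
{
    if (blob_len > container_remaining)   /* runs past the container    */
        return false;
    if (blob_len < MC_HDR_SIZE)           /* the missing minimum bound  */
        return false;
    if (blob_len > max_patch_size)        /* verify_patch_size() analog */
        return false;
    return true;
}
```

Without the middle check, a blob claiming e.g. 24 bytes would pass the size checks, and reading `processor_rev_id` at offset 24 of its header would run off the end of the buffer.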

Fixes: 4de936a38a ("x86/ucode/amd: Rework parsing logic in cpu_request_microcode()")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Ian Jackson <iwj@xenproject.org>

In practice, processor_rev_id is the only field read, which is 2 bytes at
offset 24 into the header.  Not that this makes the bug any less bad.

For 4.15.  Only dom0 can load new microcode, hence no XSA, but the bug is bad
and the fix is simple and obvious.
---
 xen/arch/x86/cpu/microcode/amd.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/arch/x86/cpu/microcode/amd.c b/xen/arch/x86/cpu/microcode/amd.c
index c4ab395799..cf5947389f 100644
--- a/xen/arch/x86/cpu/microcode/amd.c
+++ b/xen/arch/x86/cpu/microcode/amd.c
@@ -349,6 +349,7 @@ static struct microcode_patch *cpu_request_microcode(const void *buf, size_t siz
             if ( size < sizeof(*mc) ||
                  (mc = buf)->type != UCODE_UCODE_TYPE ||
                  size - sizeof(*mc) < mc->len ||
+                 mc->len < sizeof(struct microcode_patch) ||
                  (!skip_ucode && !verify_patch_size(mc->len)) )
             {
                 printk(XENLOG_ERR "microcode: Bad microcode data\n");
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed Feb 10 01:20:24 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159152-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 159152: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=f8f7bc254f3233ee15692fa0dd951cfbe2e59bf2
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 10 Feb 2021 01:20:05 +0000

flight 159152 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159152/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              f8f7bc254f3233ee15692fa0dd951cfbe2e59bf2
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  214 days
Failing since        151818  2020-07-11 04:18:52 Z  213 days  207 attempts
Testing same since   159152  2021-02-09 04:18:53 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 40581 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 01:35:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 01:35:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83466.155416 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9ePX-0003dp-UZ; Wed, 10 Feb 2021 01:35:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83466.155416; Wed, 10 Feb 2021 01:35:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9ePX-0003di-PX; Wed, 10 Feb 2021 01:35:19 +0000
Received: by outflank-mailman (input) for mailman id 83466;
 Wed, 10 Feb 2021 01:35:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QmaB=HM=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1l9ePV-0003db-LA
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 01:35:17 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4d5438f9-8055-46e4-a32b-c0e81475d5d7;
 Wed, 10 Feb 2021 01:35:16 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 5262464E2A;
 Wed, 10 Feb 2021 01:35:15 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4d5438f9-8055-46e4-a32b-c0e81475d5d7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1612920915;
	bh=duFbnubRBWp9pCDRE9Yfg3IIB1zSpihTESixzeHBrqo=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=BWvruVB7h+PLgyKaiBTXSMFzb/5Sv0Xh5g+5tXtCpFDA/93/my6TSKZj5aVYBM3sS
	 JHyaCdD4Bh1Fm695Yp10Twjj21mCZ3AbT/UqYOVfCCSVzOmdBiDbphiPFCnnnDbr7Q
	 0e431xHqkcZC9uQ9V9ChZvFS0/AXb4FHIBgqiEP6ezIF21eabxF5qfJpZyFm03d5i5
	 b/Guzx/1z3DLwAX6LxZx/6t91g4gvD9vbdCeNS4WeJbuebfDCJkvKUC8SUNEFfeNui
	 6J3+Qlc6P3b7fguGJKQMVvc5aTCGKbr/ZpIioLmNrrSCn22fBExcb0o8LzVZnX7jlc
	 lBrd7bgUGVroQ==
Date: Tue, 9 Feb 2021 17:35:14 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien.grall.oss@gmail.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Ian Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>, 
    lucmiccio@gmail.com, xen-devel <xen-devel@lists.xenproject.org>, 
    Bertrand Marquis <Bertrand.Marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Rahul Singh <Rahul.Singh@arm.com>, 
    Stefano Stabellini <stefano.stabellini@xilinx.com>, 
    George Dunlap <George.Dunlap@citrix.com>
Subject: Re: [PATCH v2] xen/arm: fix gnttab_need_iommu_mapping
In-Reply-To: <CAJ=z9a0fDYccxTDkpvmG77D-havkySuYOUK4MSYvpZw4EL9oGg@mail.gmail.com>
Message-ID: <alpine.DEB.2.21.2102091724100.8948@sstabellini-ThinkPad-T480s>
References: <20210208184932.23468-1-sstabellini@kernel.org> <173ed75a-94cf-26a5-9271-a687bf201578@xen.org> <alpine.DEB.2.21.2102081214010.8948@sstabellini-ThinkPad-T480s> <4df687cb-d3bc-ccb8-4e7c-a6429c37574e@suse.com> <24610.38467.808678.941320@mariner.uk.xensource.com>
 <alpine.DEB.2.21.2102090914280.8948@sstabellini-ThinkPad-T480s> <CAJ=z9a0fDYccxTDkpvmG77D-havkySuYOUK4MSYvpZw4EL9oGg@mail.gmail.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 9 Feb 2021, Julien Grall wrote:
> On Tue, 9 Feb 2021 at 17:31, Stefano Stabellini <sstabellini@kernel.org> wrote:
> >
> > On Tue, 9 Feb 2021, Ian Jackson wrote:
> > > Jan Beulich writes ("Re: [PATCH v2] xen/arm: fix gnttab_need_iommu_mapping"):
> > > > On 08.02.2021 21:24, Stefano Stabellini wrote:
> > > ...
> > > > > For these cases, I would just follow a simple rule of thumb:
> > > > > - is the submitter willing to provide the backport?
> > > > > - is the backport low-risk?
> > > > > - is the underlying bug important?
> > > > >
> > > > > If the answer to all is "yes" then I'd go with it.
> > > >
> > > > Personally I disagree, for the very simple reason of the question
> > > > going to become "Where do we draw the line?" The only non-security
> > > > backports that I consider acceptable are low-risk changes to allow
> > > > building with newer tool chains. I know other backports have
> > > > occurred in the past, and I did voice my disagreement with this
> > > > having happened.
> > >
> > > I think I take a more relaxed view than Jan, but still a much more
> > > firm line than Stefano.  My opinion is that we should make exceptions
> > > for only bugs of exceptional severity.
> > >
> > > I don't think I have seen an argument that this bug is exceptionally
> > > severe.
> > >
> > > For me the fact that you can only experience this bug if you upgrade
> > > the hardware or significantly change the configuration, means that
> > > this isn't so serious a bug.
> >
> > Yeah, I think that's really the core of this issue. If somebody is
> > already using 4.12 happily, there is really no reason for them to take
> > the fix. If somebody is about to use 4.12, then it is a severe issue.
> 
> If somebody is about to use 4.12, then they are most likely going to
> encounter a serious blocker, as there is no support for generic SMMU
> bindings. I would be surprised if there are a lot of DTs still using
> the old bindings, in which case such users would want to switch to
> 4.15 + your patches to add support.
> 
> >
> > The view of the group is that nobody should be switching to 4.12 now
> > because there are newer releases out there. I don't know if that is
> > true.
> 
> This is mostly based on the distinction between supported and security
> supported. When a tree is only security supported, there is no promise
> that the code will run on any given system.
> 
> >
> > I didn't realize we had a policy or even a recommendation of always
> > choosing the latest among the many releases available with
> > security-support. I went through the website and SUPPORT.md but couldn't
> > find it spelled out anywhere. See:
> 
> May I ask, what sort of users would want to start a
> development/product based on a soon-to-be-abandoned version?
> 
> For any new development, I have always advised switching to the latest
> Xen (or at least the latest stable release), because it will contain
> the latest fixes and features, and will get better support because the
> code is still fresh in developers' minds...

I don't have an answer -- I hope nobody.


> > https://xenproject.org/downloads/
> > https://xenproject.org/downloads/xen-project-archives/xen-project-4-12-series/
> > https://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=SUPPORT.md;h=52f25fa85af41fa3b38288ab7e172408b77dc779;hb=97b7b5567fba6918a656ad349051b5343b5dea2e
> >
> > At most we have:
> >
> >     Supported-Until: 2020-10-02
> >     Security-Support-Until: 2022-04-02
> >
> > Anecdotally, if I go to https://www.kernel.org/ to download a kernel
> > tarball, I expect all tarballs to have all the major functionalities. I
> > wouldn't imagine that I cannot get one entire Linux subsystem (e.g.
> > ethernet or SATA) to work if I don't pick the latest.
> 
> I think this is an unrealistic expectation... You can't pick any
> version of stable Linux and expect it to work on your shiny HW. There
> might be missing drivers or workarounds (including in core subsystems)...

I don't mean to insist; of course I accept the group decision, and I
don't care about 4.12 that much (we at Xilinx don't use it). But this is
not about new hardware: this regression affects old hardware too. So
yes, if an old Linux version worked on my HW, I would expect any of the
slightly newer (but still old) tarballs on kernel.org to work on the
same HW. That said, I noticed that kernel.org seems to list only
supported releases on https://www.kernel.org/, while we have a mix.


> > Maybe it would make sense to clarify which releases are discouraged from
> > being used on https://xenproject.org/downloads/ at least?
> 
> Feel free to suggest a wording that can be discussed here.

Distinguishing between supported and unsupported releases would be a
start, maybe with a one-line statement recommending that people always
use supported (not just security-supported, but fully supported)
releases.


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 01:37:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 01:37:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83467.155428 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9eRd-0003n1-9s; Wed, 10 Feb 2021 01:37:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83467.155428; Wed, 10 Feb 2021 01:37:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9eRd-0003mu-6v; Wed, 10 Feb 2021 01:37:29 +0000
Received: by outflank-mailman (input) for mailman id 83467;
 Wed, 10 Feb 2021 01:37:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9eRc-0003mm-Fu; Wed, 10 Feb 2021 01:37:28 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9eRc-0005N3-7X; Wed, 10 Feb 2021 01:37:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9eRb-0007Jc-Up; Wed, 10 Feb 2021 01:37:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l9eRb-0006cJ-Sf; Wed, 10 Feb 2021 01:37:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ZsOD2ydt41QQYAV3r8GcSMqhK/seP8b+cq1y8jxtQT0=; b=meJArIjj5kDB85FC39r3xDxiea
	0zVa1e0iVFRcuw9yPNj2I8Jk9/+UDa5QwG2X2MmXXvuQ9NXonGoSO6vvy1HIRsjtvBLKNGm3vxmRm
	UPg3P+ElxW9ig2PGQN0uHTOwCPilbBslMiR5yIU9nnUXwiXhlsXpj/9AGjr8apo29V5Y=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159191-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 159191: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=687121f8a0e7c1ea1c4fa3d056637e5819342f14
X-Osstest-Versions-That:
    xen=f18309eb06efd1db5a2ab9903a1c246b6299951a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 10 Feb 2021 01:37:27 +0000

flight 159191 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159191/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  687121f8a0e7c1ea1c4fa3d056637e5819342f14
baseline version:
 xen                  f18309eb06efd1db5a2ab9903a1c246b6299951a

Last test of basis   159146  2021-02-08 19:00:27 Z    1 days
Testing same since   159184  2021-02-09 18:01:28 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <iwj@xenproject.org>
  Olaf Hering <olaf@aepfle.de>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/xen.git
   f18309eb06..687121f8a0  687121f8a0e7c1ea1c4fa3d056637e5819342f14 -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 02:00:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 02:00:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83473.155443 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9enI-0005me-7P; Wed, 10 Feb 2021 01:59:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83473.155443; Wed, 10 Feb 2021 01:59:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9enI-0005mX-4X; Wed, 10 Feb 2021 01:59:52 +0000
Received: by outflank-mailman (input) for mailman id 83473;
 Wed, 10 Feb 2021 01:59:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=r83b=HM=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1l9enG-0005mS-E5
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 01:59:50 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e47361b2-458a-4ba9-b835-6ec227bf5d8f;
 Wed, 10 Feb 2021 01:59:49 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.16.1/8.15.2) with ESMTPS id 11A1xXAD076972
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Tue, 9 Feb 2021 20:59:38 -0500 (EST) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.16.1/8.15.2/Submit) id 11A1xWSQ076971;
 Tue, 9 Feb 2021 17:59:32 -0800 (PST) (envelope-from ehem)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e47361b2-458a-4ba9-b835-6ec227bf5d8f
Date: Tue, 9 Feb 2021 17:59:32 -0800
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, julien@xen.org, Volodymyr_Babchuk@epam.com,
        Stefano Stabellini <stefano.stabellini@xilinx.com>,
        Jukka Kaartinen <jukka.kaartinen@unikie.com>
Subject: Re: [PATCH v2] xen: workaround missing device_type property in
 pci/pcie nodes
Message-ID: <YCM+BBvcMoloMXeT@mattapan.m5p.com>
References: <20210209195334.21206-1-sstabellini@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210209195334.21206-1-sstabellini@kernel.org>
X-Spam-Status: No, score=0.4 required=10.0 tests=KHOP_HELO_FCRDNS autolearn=no
	autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

On Tue, Feb 09, 2021 at 11:53:34AM -0800, Stefano Stabellini wrote:
> This fixes Xen boot on RPi4. Some RPi4 kernels have the following node
> on their device trees:

I would like to keep the reference to Linux kernel commit d1ac0002dd29
in the commit message.  The commit is distinctly informative and takes
some searching to find in the thread archive.  This though is merely
/my/ opinion.

Two builds later, one without the patch to confirm the problematic
device tree breaks boot, and one with the patch to confirm it fixes the
issue:

Tested-by: Elliott Mitchell <ehem+xen@m5p.com>


Note to Jukka Kaartinen: people who merely build from source are useful
for confirming that proposed fixes work.  The above line gets added to
commit messages to record that people have tried the change and that it
works for them.


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Wed Feb 10 02:21:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 02:21:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83475.155455 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9f89-0000Hp-2N; Wed, 10 Feb 2021 02:21:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83475.155455; Wed, 10 Feb 2021 02:21:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9f88-0000Hc-Us; Wed, 10 Feb 2021 02:21:24 +0000
Received: by outflank-mailman (input) for mailman id 83475;
 Wed, 10 Feb 2021 02:21:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sgYq=HM=intel.com=kevin.tian@srs-us1.protection.inumbo.net>)
 id 1l9f87-0000HX-KS
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 02:21:23 +0000
Received: from mga09.intel.com (unknown [134.134.136.24])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1320c047-8902-4955-a191-59492d8e9732;
 Wed, 10 Feb 2021 02:21:20 +0000 (UTC)
Received: from orsmga007.jf.intel.com ([10.7.209.58])
 by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Feb 2021 18:21:19 -0800
Received: from orsmsx603.amr.corp.intel.com ([10.22.229.16])
 by orsmga007.jf.intel.com with ESMTP; 09 Feb 2021 18:21:18 -0800
Received: from orsmsx612.amr.corp.intel.com (10.22.229.25) by
 ORSMSX603.amr.corp.intel.com (10.22.229.16) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2106.2; Tue, 9 Feb 2021 18:21:18 -0800
Received: from orsmsx610.amr.corp.intel.com (10.22.229.23) by
 ORSMSX612.amr.corp.intel.com (10.22.229.25) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2106.2; Tue, 9 Feb 2021 18:21:17 -0800
Received: from orsedg603.ED.cps.intel.com (10.7.248.4) by
 orsmsx610.amr.corp.intel.com (10.22.229.23) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2106.2
 via Frontend Transport; Tue, 9 Feb 2021 18:21:17 -0800
Received: from NAM10-DM6-obe.outbound.protection.outlook.com (104.47.58.100)
 by edgegateway.intel.com (134.134.137.100) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.1.1713.5; Tue, 9 Feb 2021 18:21:17 -0800
Received: from MWHPR11MB1886.namprd11.prod.outlook.com (2603:10b6:300:110::9)
 by MWHPR11MB1520.namprd11.prod.outlook.com (2603:10b6:301:b::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3825.30; Wed, 10 Feb
 2021 02:21:14 +0000
Received: from MWHPR11MB1886.namprd11.prod.outlook.com
 ([fe80::f1b4:bace:1e44:4a46]) by MWHPR11MB1886.namprd11.prod.outlook.com
 ([fe80::f1b4:bace:1e44:4a46%6]) with mapi id 15.20.3825.030; Wed, 10 Feb 2021
 02:21:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1320c047-8902-4955-a191-59492d8e9732
IronPort-SDR: 5C9QNkzzHmFL6e3crUnt90UwcxxwieWsrD2a4g+Prgp4H2nMLv/qPLE1Z/adcfcV6da+A2tRR4
 httz7bCp00Nw==
X-IronPort-AV: E=McAfee;i="6000,8403,9890"; a="182141502"
X-IronPort-AV: E=Sophos;i="5.81,166,1610438400"; 
   d="scan'208";a="182141502"
IronPort-SDR: 3xrGCGKGQkU1kNJdBbcmHG6jBUDDK+hHEGnrRz6twJ8pvtAcZc+LORHJi+0pRVC5By78TuxcKF
 Qm4eauq/VaeQ==
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.81,166,1610438400"; 
   d="scan'208";a="399010233"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=BbLkjcU28PmEQVTgV2qZNlQT9DIIg0pBTL6r7v5K5W/7jsmjD3LF65U84/Y18UeiG3+fXLb+TIBrZwLx0B5nKZGc8NZHBmQTn7U7BaGR72OfuBOUZtnNEJ9naNHUFaXszCM42insBJobzppqx2U2CADiYBwo2Wfiy80YN+Uo7IdUizv0GDYrwysDCSZWcBx77Pj0hhKXHo3h56sbA3wc4oVOj73aZELjEHwBBtm6xGMzrgwWpk+Mq/oEyRLj8yLDNr7Xr4lzdwi0grd6WrepadnLnGHvXQpJVcX9GQEMVVFdF2ucISh4vbpS5t1CC+4yv0GJBoCUJuYuUrtfjGQAvA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=R8QagWhyIYQyHSiPB4aG3zr+NfNXMTJU65g/jXVd69E=;
 b=C0bZGbKl6d2Z6LEEp03q0sCA2JHwRFb0hfcNzqdOCDANCCQ5eEK7xuJ/74KbfZfyCAuem6PsoPnGPaciwDGBp15Oz0MqLa/BQpWAZ8doPBtvtFaHVd9ivTAYfPFEA5Y6tw9Qpl+3nTiHXrLOU67JJb5kp+OsLv9HkLrBU2S+V5vqDGvw+sgFyrp/S1/5OHI1LcBn+o062fOEH27Ru+kxwKMamYIM7shw13rygs9apr+9Lh0HNOiQnNqR5z9CPZBXHQ69fwjEpCNEaLMbC//I6sGO2sX0HF1Fq9xE5R6wpQQ9rL6/CPKHjAVjBmk0XJKdQcq7HQB58f4/GEeIvtpncw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=intel.com; dmarc=pass action=none header.from=intel.com;
 dkim=pass header.d=intel.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=intel.onmicrosoft.com;
 s=selector2-intel-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=R8QagWhyIYQyHSiPB4aG3zr+NfNXMTJU65g/jXVd69E=;
 b=MP/Xs1mjOg+JMI1DPZGiJSpA09kCfZNzuCHAp/IBHd6RsTgUmQFd05WC4LeBicmxbnE9SZ92jUnrbFToZ+faESJoaX1dLP8rZVAztlU2Q/vKk4IdbZvpYwLb00BKREhCywQWvnBHcBbpFcrj2Yt9Gnj/TC4802RkUZWRXHqrYOo=
From: "Tian, Kevin" <kevin.tian@intel.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: "hongyxia@amazon.co.uk" <hongyxia@amazon.co.uk>, "iwj@xenproject.org"
	<iwj@xenproject.org>, Julien Grall <jgrall@amazon.com>, Jan Beulich
	<jbeulich@suse.com>, "Cooper, Andrew" <andrew.cooper3@citrix.com>, "Paul
 Durrant" <paul@xen.org>
Subject: RE: [for-4.15][PATCH v2 5/5] xen/iommu: x86: Clear the root
 page-table before freeing the page-tables
Thread-Topic: [for-4.15][PATCH v2 5/5] xen/iommu: x86: Clear the root
 page-table before freeing the page-tables
Thread-Index: AQHW/vg+0ex05nXJyEaHJ1jzoYv6S6pQp57g
Date: Wed, 10 Feb 2021 02:21:14 +0000
Message-ID: <MWHPR11MB18869B3DB550711B3F6F6CA48C8D9@MWHPR11MB1886.namprd11.prod.outlook.com>
References: <20210209152816.15792-1-julien@xen.org>
 <20210209152816.15792-6-julien@xen.org>
In-Reply-To: <20210209152816.15792-6-julien@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
dlp-version: 11.5.1.3
dlp-product: dlpe-windows
dlp-reaction: no-action
authentication-results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=intel.com;
x-originating-ip: [192.198.147.199]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 492c8a48-f4c3-451e-dead-08d8cd6a8a3b
x-ms-traffictypediagnostic: MWHPR11MB1520:
x-ld-processed: 46c98d88-e344-4ed4-8496-4ed7712e255d,ExtAddr
x-microsoft-antispam-prvs: <MWHPR11MB15209153660919233723E3678C8D9@MWHPR11MB1520.namprd11.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:8273;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: ac2BYmEKdq2iboVhjW0THdydqdJnELAB5X94YYC0pCW1HQuGQHpDgkljmsaaH1I6RTZMU1jxhQkV4ctpfDs5CwXOU1Lk3BYQPgHcG9/wlHeIkDGr7r/0BGd6IOjAPwidvEFflV2JflEQU3JLLpDPzDOh3AH6mqXySWm0BnK/uBCzX1XTgQ5VVkcMkuVw912nQOpmoyh+gVhCr8mZzd5McneAJZxzT7rYtxkSPe5R018f/N69LfX9Uj9zQzBIYg8E6dkzbU3p4tXjEsQ0PSk0k24d6vlLSq08h+ZbZ33tD3TB+9eDcaWjU55pF5CyzToq1wCHzByvRUayigqPxGz0ILOUcpRmMxtNynhPL+1FcGZ45Qe+6MjY9vALLM9NWHXd1mnDn3xvtchSDPVnMJgYlhEPkVfVHIgm9cr3Uvdbah9u6Io1WkZuCpXWmo4uLlGPd3N/dBKPtKvp55v7eWLR+pmnoIeG+W9fKbvz/RdSh3hoBCe0NLsjFqcvRT1ChD5q43RIF/lhaNx+Ek7nhlu2Jg==
x-forefront-antispam-report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:MWHPR11MB1886.namprd11.prod.outlook.com;PTR:;CAT:NONE;SFS:(366004)(376002)(396003)(39860400002)(346002)(136003)(76116006)(186003)(71200400001)(64756008)(66556008)(66446008)(7696005)(83380400001)(2906002)(478600001)(316002)(4326008)(6506007)(66476007)(8936002)(9686003)(8676002)(5660300002)(55016002)(86362001)(26005)(54906003)(66946007)(110136005)(33656002)(52536014);DIR:OUT;SFP:1102;
x-ms-exchange-antispam-messagedata: =?us-ascii?Q?r3hXZgaBhADlxHjF8hCVzpMi02qRKq3PkJQBj2YC0DwTVItMyh244Tg5hkLK?=
 =?us-ascii?Q?jXXojfoxMS1FhKsmIwlsmYxEQn8R5DT/TDSbrFtXlw4PG7uLB/HKa1U+fQPf?=
 =?us-ascii?Q?WpgBMsqXNrzhq84AhEMV8jgbluoNsekn1rS9l6+i0VIF8aOYQl4bvYnyXypj?=
 =?us-ascii?Q?T4G6Iv/kIzy2aTNnEPELTsme2l3bIdqjUbz2doluAmNL2t2CTpQltSITZKpN?=
 =?us-ascii?Q?kr4XP2VyMIGWpGy+4zFrWCBQkSdSqwVmPUxFxqCWB2aXd3BIYv5eaFr1A1Ap?=
 =?us-ascii?Q?WLJSfBEpfjYqloGokZI+DqBwtPEf6kAPTIEg1e7dCAZpqdEn8SIRKzIM0IXE?=
 =?us-ascii?Q?jINr9uq38s/NZb2rwyfNsR04PpIT14nKIl4c1nEAOeDiai2wNl+rTNQuo1SW?=
 =?us-ascii?Q?TvSNDnVs8rN6SxxZvgJs4u/syyH8r9YN7T0ghOPik9j9pRpOSbERFoqmV1l+?=
 =?us-ascii?Q?1/gFbYY4CWKt/TcUxKJUHbU8YVahOoKWOxZD773XJdUDLTAwxCN7yqA59i0f?=
 =?us-ascii?Q?0/f2CZ1eDH7FPNxzVVPfigxjZRUP2EWo0FLJJ2b12/4gmJaA5Q8clNiXrALl?=
 =?us-ascii?Q?LYM92UGoZFq1TjdQ+/ur08vwsBaKWMD5cNU534nWgIf73rwW4aH9K9HjjMP3?=
 =?us-ascii?Q?SMFgFCRhA47P1MLuVoa6PFGeLIE6hI2AQJN314DGWel6nQzuxTK7z2uRxOMP?=
 =?us-ascii?Q?O9nj2DuobuwSMaS4jBiCdX92Kh0iIGAa59psBU6r0TvqVPmHGX0HXGmoqnWd?=
 =?us-ascii?Q?sbmPpoNzrtjC9GABU9wBy+bCixMhIxCtqzVrjRMFgZTXKvCubd0itR1D1ONP?=
 =?us-ascii?Q?Zd97Zvp42swv3CQzE7M02HEVKzSqG8Gr4UNqg8MMRwySOFPf9orYcca+bqZx?=
 =?us-ascii?Q?VOOi6SvMjaNSElkWSenxj0peeutrDFWAfi194BB81u67ZK8wuofQIIeB5rth?=
 =?us-ascii?Q?lGXL8gf6jyNMvsMYYmMeKp5DkPlx7DksEJ8wSufJpSXLungnxQDUW1PnEYNc?=
 =?us-ascii?Q?uTTiAj14aL+xWZFqJPf8cEAAwEsoj/BKfBLR9ULzDmCXI4A+6x8imB6N4qhW?=
 =?us-ascii?Q?m2vMAy6WmEnOKXveOluLjRz3aLfg6ujF1hvRW4IXyMEKv5FKFtL1XHshUC9Z?=
 =?us-ascii?Q?wF4cbIOv3KhNjXgQ2izAkG0aAiqoag2upJeBYXQEsqJ16cR+t8f3RB8KjZwm?=
 =?us-ascii?Q?AFGlEBo//DZNsumvHh+4Ho1tph0Kl5+sM0tkHvWh/OndY32mgkvOy1QQduUz?=
 =?us-ascii?Q?3xSZFMXFDQQsVjLF+PMIMw944pYRlqpZT4yhsIiLa2wS4LtFn0EGoxD8oPUq?=
 =?us-ascii?Q?5Sm9L7d/z5NujV/k9DsMG47O?=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: MWHPR11MB1886.namprd11.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 492c8a48-f4c3-451e-dead-08d8cd6a8a3b
X-MS-Exchange-CrossTenant-originalarrivaltime: 10 Feb 2021 02:21:14.0962
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 46c98d88-e344-4ed4-8496-4ed7712e255d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 8dscKfYiYplBbpAgifrGfSl1WlWi+cTCzXXPeYTbO5H8uAZob/PL1DZJHemiZ+rUD3RNKTbrtQH5GJI7GAfLQQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MWHPR11MB1520
X-OriginatorOrg: intel.com

> From: Julien Grall <julien@xen.org>
> Sent: Tuesday, February 9, 2021 11:28 PM
>=20
> From: Julien Grall <jgrall@amazon.com>
>=20
> The new per-domain IOMMU page-table allocator will now free the
> page-tables when a domain's resources are relinquished. However, the root
> page-table (i.e. hd->arch.pg_maddr) will not be cleared.

It's the pointer, not the table itself.

>=20
> Xen may access the IOMMU page-tables afterwards at least in the case of
> PV domain:
>=20
> (XEN) Xen call trace:
> (XEN)    [<ffff82d04025b4b2>] R
> iommu.c#addr_to_dma_page_maddr+0x12e/0x1d8
> (XEN)    [<ffff82d04025b695>] F
> iommu.c#intel_iommu_unmap_page+0x5d/0xf8
> (XEN)    [<ffff82d0402695f3>] F iommu_unmap+0x9c/0x129
> (XEN)    [<ffff82d0402696a6>] F iommu_legacy_unmap+0x26/0x63
> (XEN)    [<ffff82d04033c5c7>] F mm.c#cleanup_page_mappings+0x139/0x144
> (XEN)    [<ffff82d04033c61d>] F put_page+0x4b/0xb3
> (XEN)    [<ffff82d04033c87f>] F put_page_from_l1e+0x136/0x13b
> (XEN)    [<ffff82d04033cada>] F devalidate_page+0x256/0x8dc
> (XEN)    [<ffff82d04033d396>] F mm.c#_put_page_type+0x236/0x47e
> (XEN)    [<ffff82d04033d64d>] F mm.c#put_pt_page+0x6f/0x80
> (XEN)    [<ffff82d04033d8d6>] F mm.c#put_page_from_l2e+0x8a/0xcf
> (XEN)    [<ffff82d04033cc27>] F devalidate_page+0x3a3/0x8dc
> (XEN)    [<ffff82d04033d396>] F mm.c#_put_page_type+0x236/0x47e
> (XEN)    [<ffff82d04033d64d>] F mm.c#put_pt_page+0x6f/0x80
> (XEN)    [<ffff82d04033d807>] F mm.c#put_page_from_l3e+0x8a/0xcf
> (XEN)    [<ffff82d04033cdf0>] F devalidate_page+0x56c/0x8dc
> (XEN)    [<ffff82d04033d396>] F mm.c#_put_page_type+0x236/0x47e
> (XEN)    [<ffff82d04033d64d>] F mm.c#put_pt_page+0x6f/0x80
> (XEN)    [<ffff82d04033d6c7>] F mm.c#put_page_from_l4e+0x69/0x6d
> (XEN)    [<ffff82d04033cf24>] F devalidate_page+0x6a0/0x8dc
> (XEN)    [<ffff82d04033d396>] F mm.c#_put_page_type+0x236/0x47e
> (XEN)    [<ffff82d04033d92e>] F put_page_type_preemptible+0x13/0x15
> (XEN)    [<ffff82d04032598a>] F domain.c#relinquish_memory+0x1ff/0x4e9
> (XEN)    [<ffff82d0403295f2>] F domain_relinquish_resources+0x2b6/0x36a
> (XEN)    [<ffff82d040205cdf>] F domain_kill+0xb8/0x141
> (XEN)    [<ffff82d040236cac>] F do_domctl+0xb6f/0x18e5
> (XEN)    [<ffff82d04031d098>] F pv_hypercall+0x2f0/0x55f
> (XEN)    [<ffff82d04039b432>] F lstar_enter+0x112/0x120
>=20
> This will result in a use-after-free and possibly a host crash or
> memory corruption.
>=20
> Freeing the page-tables further down in domain_relinquish_resources()
> would not work because pages may not be released until later if another
> domain holds a reference on them.
>=20
> Once all the PCI devices have been de-assigned, it is actually pointless
> to modify the IOMMU page-tables. So we can simply clear the root
> page-table address.

The above two paragraphs are confusing to me. I don't know what exact change
is proposed until looking over the whole patch. Isn't it clearer to just say
"We should clear the root page table pointer in iommu_free_pgtables() to
avoid use-after-free after all pgtables are moved to the free list"?

otherwise:

	Reviewed-by: Kevin Tian <kevin.tian@intel.com>

>=20
> Fixes: 3eef6d07d722 ("x86/iommu: convert VT-d code to use new page table
> allocator")
> Signed-off-by: Julien Grall <jgrall@amazon.com>
>=20
> ---
>     Changes in v2:
>         - Introduce clear_root_pgtable()
>         - Move the patch later in the series
> ---
>  xen/drivers/passthrough/amd/pci_amd_iommu.c | 12 +++++++++++-
>  xen/drivers/passthrough/vtd/iommu.c         | 12 +++++++++++-
>  xen/drivers/passthrough/x86/iommu.c         |  6 ++++++
>  xen/include/xen/iommu.h                     |  1 +
>  4 files changed, 29 insertions(+), 2 deletions(-)
>=20
> diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c
> b/xen/drivers/passthrough/amd/pci_amd_iommu.c
> index 42b5a5a9bec4..81add0ba26b4 100644
> --- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
> +++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
> @@ -381,9 +381,18 @@ static int amd_iommu_assign_device(struct domain
> *d, u8 devfn,
>      return reassign_device(pdev->domain, d, devfn, pdev);
>  }
>=20
> +static void iommu_clear_root_pgtable(struct domain *d)
> +{
> +    struct domain_iommu *hd =3D dom_iommu(d);
> +
> +    spin_lock(&hd->arch.mapping_lock);
> +    hd->arch.amd.root_table =3D NULL;
> +    spin_unlock(&hd->arch.mapping_lock);
> +}
> +
>  static void amd_iommu_domain_destroy(struct domain *d)
>  {
> -    dom_iommu(d)->arch.amd.root_table =3D NULL;
> +    ASSERT(!dom_iommu(d)->arch.amd.root_table);
>  }
>=20
>  static int amd_iommu_add_device(u8 devfn, struct pci_dev *pdev)
> @@ -565,6 +574,7 @@ static const struct iommu_ops __initconstrel
> _iommu_ops =3D {
>      .remove_device =3D amd_iommu_remove_device,
>      .assign_device  =3D amd_iommu_assign_device,
>      .teardown =3D amd_iommu_domain_destroy,
> +    .clear_root_pgtable =3D iommu_clear_root_pgtable,
>      .map_page =3D amd_iommu_map_page,
>      .unmap_page =3D amd_iommu_unmap_page,
>      .iotlb_flush =3D amd_iommu_flush_iotlb_pages,
> diff --git a/xen/drivers/passthrough/vtd/iommu.c
> b/xen/drivers/passthrough/vtd/iommu.c
> index d136fe36883b..e1871f6c2bc1 100644
> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -1726,6 +1726,15 @@ out:
>      return ret;
>  }
>=20
> +static void iommu_clear_root_pgtable(struct domain *d)
> +{
> +    struct domain_iommu *hd =3D dom_iommu(d);
> +
> +    spin_lock(&hd->arch.mapping_lock);
> +    hd->arch.vtd.pgd_maddr =3D 0;
> +    spin_unlock(&hd->arch.mapping_lock);
> +}
> +
>  static void iommu_domain_teardown(struct domain *d)
>  {
>      struct domain_iommu *hd =3D dom_iommu(d);
> @@ -1740,7 +1749,7 @@ static void iommu_domain_teardown(struct
> domain *d)
>          xfree(mrmrr);
>      }
>=20
> -    hd->arch.vtd.pgd_maddr =3D 0;
> +    ASSERT(!hd->arch.vtd.pgd_maddr);
>  }
>=20
>  static int __must_check intel_iommu_map_page(struct domain *d, dfn_t dfn,
> @@ -2719,6 +2728,7 @@ static struct iommu_ops __initdata vtd_ops =3D {
>      .remove_device =3D intel_iommu_remove_device,
>      .assign_device  =3D intel_iommu_assign_device,
>      .teardown =3D iommu_domain_teardown,
> +    .clear_root_pgtable =3D iommu_clear_root_pgtable,
>      .map_page =3D intel_iommu_map_page,
>      .unmap_page =3D intel_iommu_unmap_page,
>      .lookup_page =3D intel_iommu_lookup_page,
> diff --git a/xen/drivers/passthrough/x86/iommu.c
> b/xen/drivers/passthrough/x86/iommu.c
> index 82d770107a47..d3cdec6ee83f 100644
> --- a/xen/drivers/passthrough/x86/iommu.c
> +++ b/xen/drivers/passthrough/x86/iommu.c
> @@ -280,6 +280,12 @@ int iommu_free_pgtables(struct domain *d)
>      /* After this barrier no new page allocations can occur. */
>      spin_barrier(&hd->arch.pgtables.lock);
>=20
> +    /*
> +     * Pages will be moved to the free list in a bit. So we want to
> +     * clear the root page-table to avoid any potential use after-free.
> +     */
> +    hd->platform_ops->clear_root_pgtable(d);
> +
>      while ( (pg =3D page_list_remove_head(&hd->arch.pgtables.list)) )
>      {
>          free_domheap_page(pg);
> diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
> index 863a68fe1622..d59ed7cbad43 100644
> --- a/xen/include/xen/iommu.h
> +++ b/xen/include/xen/iommu.h
> @@ -272,6 +272,7 @@ struct iommu_ops {
>=20
>      int (*adjust_irq_affinities)(void);
>      void (*sync_cache)(const void *addr, unsigned int size);
> +    void (*clear_root_pgtable)(struct domain *d);
>  #endif /* CONFIG_X86 */
>=20
>      int __must_check (*suspend)(void);
> --
> 2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Feb 10 02:37:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 02:37:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83477.155467 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9fNJ-0001LQ-CJ; Wed, 10 Feb 2021 02:37:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83477.155467; Wed, 10 Feb 2021 02:37:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9fNJ-0001LJ-9G; Wed, 10 Feb 2021 02:37:05 +0000
Received: by outflank-mailman (input) for mailman id 83477;
 Wed, 10 Feb 2021 02:37:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9fNI-0001LB-Ao; Wed, 10 Feb 2021 02:37:04 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9fNH-0006ko-Tm; Wed, 10 Feb 2021 02:37:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9fNH-0001ZG-K0; Wed, 10 Feb 2021 02:37:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l9fNH-0007d0-JD; Wed, 10 Feb 2021 02:37:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=vnVAUhjcI8OQ/KyQAHrVepMJp0+PiLyBNijo++YZBSw=; b=hsriThyPRMU3ocXPFMUWbeUQlu
	qbn1Ql3szepNnmQw71B+ldFKTUF3vn+xcOwP6cZPlGdyVBWP3keI28y2Dva+xCPszb96uB4tqAVet
	HvKR9IguZ5oJguG5/JQuICsr9MedNOP3xTo9huPY+uY/ZdyXv92DK+PHjH3AjqC8DY3Y=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159149-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.11-testing test] 159149: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    xen-4.11-testing:test-arm64-arm64-libvirt-xsm:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-libvirt-xsm:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemuu-ws16-amd64:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemuu-ovmf-amd64:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemuu-win7-amd64:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-libvirt:<job status>:broken:regression
    xen-4.11-testing:test-amd64-amd64-xl-pvhv2-amd:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-shadow:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-qemuu-rhel6hvm-amd:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-qemut-rhel6hvm-intel:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-pvshim:<job status>:broken:regression
    xen-4.11-testing:test-amd64-amd64-xl-qemut-ws16-amd64:<job status>:broken:regression
    xen-4.11-testing:test-amd64-amd64-amd64-pvgrub:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-qemut-rhel6hvm-amd:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemut-debianhvm-amd64:<job status>:broken:regression
    xen-4.11-testing:test-amd64-amd64-libvirt-xsm:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-freebsd10-i386:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-xsm:<job status>:broken:regression
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:<job status>:broken:regression
    xen-4.11-testing:test-amd64-amd64-xl-qemut-win7-amd64:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-freebsd10-amd64:<job status>:broken:regression
    xen-4.11-testing:test-amd64-amd64-qemuu-freebsd11-amd64:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-raw:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:<job status>:broken:regression
    xen-4.11-testing:test-amd64-amd64-pygrub:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-qemuu-rhel6hvm-intel:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:<job status>:broken:regression
    xen-4.11-testing:test-amd64-amd64-libvirt-vhd:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemut-ws16-amd64:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemut-win7-amd64:<job status>:broken:regression
    xen-4.11-testing:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    xen-4.11-testing:test-armhf-armhf-libvirt-raw:guest-start:fail:regression
    xen-4.11-testing:test-amd64-i386-libvirt-xsm:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-qemuu-rhel6hvm-intel:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl-qemuu-ws16-amd64:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-amd64-xl-qemut-win7-amd64:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-freebsd10-i386:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl-raw:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-amd64-pygrub:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl-qemut-win7-amd64:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-qemuu-rhel6hvm-amd:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-qemut-rhel6hvm-amd:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-amd64-libvirt-vhd:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl-qemut-debianhvm-amd64:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-amd64-libvirt-xsm:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-freebsd10-amd64:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl-xsm:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-qemut-rhel6hvm-intel:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl-pvshim:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl-qemuu-win7-amd64:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-libvirt:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-amd64-amd64-pvgrub:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl-qemut-ws16-amd64:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-amd64-xl-qemut-ws16-amd64:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-amd64-qemuu-freebsd11-amd64:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl-qemuu-ovmf-amd64:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl-shadow:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-amd64-xl-pvhv2-amd:host-install(5):broken:heisenbug
    xen-4.11-testing:test-arm64-arm64-libvirt-xsm:host-install(5):broken:heisenbug
    xen-4.11-testing:test-armhf-armhf-xl-multivcpu:xen-boot:fail:heisenbug
    xen-4.11-testing:test-arm64-arm64-xl-thunderx:xen-boot:fail:heisenbug
    xen-4.11-testing:test-armhf-armhf-xl-vhd:debian-di-install:fail:heisenbug
    xen-4.11-testing:test-armhf-armhf-libvirt-raw:debian-di-install:fail:heisenbug
    xen-4.11-testing:test-arm64-arm64-xl-xsm:xen-boot:fail:heisenbug
    xen-4.11-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=1c7d984645f9ade9b47e862b5880734ad498fea8
X-Osstest-Versions-That:
    xen=310ab79875cb705cc2c7daddff412b5a4899f8c9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 10 Feb 2021 02:37:03 +0000

flight 159149 xen-4.11-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159149/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm    <job status>                 broken
 test-amd64-i386-libvirt-xsm     <job status>                 broken  in 159042
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm    <job status>   broken in 159042
 test-amd64-i386-xl-qemuu-ws16-amd64    <job status>           broken in 159042
 test-amd64-i386-xl-qemuu-ovmf-amd64    <job status>           broken in 159042
 test-amd64-i386-xl              <job status>                 broken  in 159042
 test-amd64-i386-xl-qemuu-win7-amd64    <job status>           broken in 159042
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm <job status> broken in 159042
 test-amd64-i386-xl-qemuu-debianhvm-amd64    <job status>      broken in 159042
 test-amd64-i386-libvirt         <job status>                 broken  in 159042
 test-amd64-amd64-xl-pvhv2-amd    <job status>                 broken in 159042
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict <job status> broken in 159042
 test-amd64-i386-xl-shadow       <job status>                 broken  in 159042
 test-amd64-i386-qemuu-rhel6hvm-amd    <job status>            broken in 159042
 test-amd64-i386-qemut-rhel6hvm-intel    <job status>          broken in 159042
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm    <job status>   broken in 159042
 test-amd64-i386-xl-pvshim       <job status>                 broken  in 159042
 test-amd64-amd64-xl-qemut-ws16-amd64    <job status>          broken in 159042
 test-amd64-amd64-amd64-pvgrub    <job status>                 broken in 159042
 test-amd64-i386-qemut-rhel6hvm-amd    <job status>            broken in 159042
 test-amd64-i386-xl-qemut-debianhvm-amd64    <job status>      broken in 159042
 test-amd64-amd64-libvirt-xsm    <job status>                 broken  in 159042
 test-amd64-i386-freebsd10-i386    <job status>                broken in 159042
 test-amd64-i386-xl-xsm          <job status>                 broken  in 159042
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict <job status> broken in 159042
 test-amd64-amd64-xl-qemut-win7-amd64    <job status>          broken in 159042
 test-amd64-i386-freebsd10-amd64    <job status>               broken in 159042
 test-amd64-amd64-qemuu-freebsd11-amd64    <job status>        broken in 159042
 test-amd64-i386-xl-raw          <job status>                 broken  in 159042
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  <job status> broken in 159042
 test-amd64-amd64-pygrub         <job status>                 broken  in 159042
 test-amd64-i386-qemuu-rhel6hvm-intel    <job status>          broken in 159042
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm <job status> broken in 159042
 test-amd64-amd64-libvirt-vhd    <job status>                 broken  in 159042
 test-amd64-i386-xl-qemut-ws16-amd64    <job status>           broken in 159042
 test-amd64-i386-xl-qemut-win7-amd64    <job status>           broken in 159042
 test-armhf-armhf-xl-vhd      13 guest-start    fail in 159042 REGR. vs. 157566
 test-armhf-armhf-libvirt-raw 13 guest-start    fail in 159042 REGR. vs. 157566

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt-xsm  5 host-install(5) broken in 159042 pass in 159149
 test-amd64-i386-qemuu-rhel6hvm-intel 5 host-install(5) broken in 159042 pass in 159149
 test-amd64-i386-xl-qemuu-ws16-amd64 5 host-install(5) broken in 159042 pass in 159149
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 5 host-install(5) broken in 159042 pass in 159149
 test-amd64-amd64-xl-qemut-win7-amd64 5 host-install(5) broken in 159042 pass in 159149
 test-amd64-i386-freebsd10-i386 5 host-install(5) broken in 159042 pass in 159149
 test-amd64-i386-xl-qemuu-debianhvm-amd64 5 host-install(5) broken in 159042 pass in 159149
 test-amd64-i386-xl-raw       5 host-install(5) broken in 159042 pass in 159149
 test-amd64-amd64-pygrub      5 host-install(5) broken in 159042 pass in 159149
 test-amd64-i386-xl-qemut-win7-amd64 5 host-install(5) broken in 159042 pass in 159149
 test-amd64-i386-qemuu-rhel6hvm-amd 5 host-install(5) broken in 159042 pass in 159149
 test-amd64-i386-qemut-rhel6hvm-amd 5 host-install(5) broken in 159042 pass in 159149
 test-amd64-amd64-libvirt-vhd 5 host-install(5) broken in 159042 pass in 159149
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 5 host-install(5) broken in 159042 pass in 159149
 test-amd64-i386-xl-qemut-debianhvm-amd64 5 host-install(5) broken in 159042 pass in 159149
 test-amd64-amd64-libvirt-xsm 5 host-install(5) broken in 159042 pass in 159149
 test-amd64-i386-freebsd10-amd64 5 host-install(5) broken in 159042 pass in 159149
 test-amd64-i386-xl-xsm       5 host-install(5) broken in 159042 pass in 159149
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 5 host-install(5) broken in 159042 pass in 159149
 test-amd64-i386-qemut-rhel6hvm-intel 5 host-install(5) broken in 159042 pass in 159149
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 5 host-install(5) broken in 159042 pass in 159149
 test-amd64-i386-xl-pvshim    5 host-install(5) broken in 159042 pass in 159149
 test-amd64-i386-xl-qemuu-win7-amd64 5 host-install(5) broken in 159042 pass in 159149
 test-amd64-i386-libvirt      5 host-install(5) broken in 159042 pass in 159149
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 5 host-install(5) broken in 159042 pass in 159149
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 5 host-install(5) broken in 159042 pass in 159149
 test-amd64-amd64-amd64-pvgrub 5 host-install(5) broken in 159042 pass in 159149
 test-amd64-i386-xl-qemut-ws16-amd64 5 host-install(5) broken in 159042 pass in 159149
 test-amd64-amd64-xl-qemut-ws16-amd64 5 host-install(5) broken in 159042 pass in 159149
 test-amd64-amd64-qemuu-freebsd11-amd64 5 host-install(5) broken in 159042 pass in 159149
 test-amd64-i386-xl-qemuu-ovmf-amd64 5 host-install(5) broken in 159042 pass in 159149
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 5 host-install(5) broken in 159042 pass in 159149
 test-amd64-i386-xl           5 host-install(5) broken in 159042 pass in 159149
 test-amd64-i386-xl-shadow    5 host-install(5) broken in 159042 pass in 159149
 test-amd64-amd64-xl-pvhv2-amd 5 host-install(5) broken in 159042 pass in 159149
 test-arm64-arm64-libvirt-xsm  5 host-install(5)          broken pass in 159133
 test-armhf-armhf-xl-multivcpu  8 xen-boot        fail in 159133 pass in 159149
 test-arm64-arm64-xl-thunderx  8 xen-boot         fail in 159133 pass in 159149
 test-armhf-armhf-xl-vhd      12 debian-di-install          fail pass in 159042
 test-armhf-armhf-libvirt-raw 12 debian-di-install          fail pass in 159042
 test-arm64-arm64-xl-xsm       8 xen-boot                   fail pass in 159133

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm     15 migrate-support-check fail in 159133 never pass
 test-arm64-arm64-xl-xsm 16 saverestore-support-check fail in 159133 never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check fail in 159133 never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check fail in 159133 never pass
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 157566
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 157566
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157566
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157566
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157566
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157566
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157566
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157566
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157566
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157566
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157566
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157566
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  1c7d984645f9ade9b47e862b5880734ad498fea8
baseline version:
 xen                  310ab79875cb705cc2c7daddff412b5a4899f8c9

Last test of basis   157566  2020-12-15 14:05:54 Z   56 days
Failing since        159016  2021-02-04 15:05:58 Z    5 days    6 attempts
Testing same since   159042  2021-02-05 12:13:30 Z    4 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 broken  
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-arm64-arm64-libvirt-xsm broken
broken-step test-arm64-arm64-libvirt-xsm host-install(5)
broken-job test-amd64-i386-libvirt-xsm broken
broken-job test-amd64-i386-xl-qemut-debianhvm-i386-xsm broken
broken-job test-amd64-i386-xl-qemuu-ws16-amd64 broken
broken-job test-amd64-i386-xl-qemuu-ovmf-amd64 broken
broken-job test-amd64-i386-xl broken
broken-job test-amd64-i386-xl-qemuu-win7-amd64 broken
broken-job test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm broken
broken-job test-amd64-i386-xl-qemuu-debianhvm-amd64 broken
broken-job test-amd64-i386-libvirt broken
broken-job test-amd64-amd64-xl-pvhv2-amd broken
broken-job test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict broken
broken-job test-amd64-i386-xl-shadow broken
broken-job test-amd64-i386-qemuu-rhel6hvm-amd broken
broken-job test-amd64-i386-qemut-rhel6hvm-intel broken
broken-job test-amd64-i386-xl-qemuu-debianhvm-i386-xsm broken
broken-job test-amd64-i386-xl-pvshim broken
broken-job test-amd64-amd64-xl-qemut-ws16-amd64 broken
broken-job test-amd64-amd64-amd64-pvgrub broken
broken-job test-amd64-i386-qemut-rhel6hvm-amd broken
broken-job test-amd64-i386-xl-qemut-debianhvm-amd64 broken
broken-job test-amd64-amd64-libvirt-xsm broken
broken-job test-amd64-i386-freebsd10-i386 broken
broken-job test-amd64-i386-xl-xsm broken
broken-job test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict broken
broken-job test-amd64-amd64-xl-qemut-win7-amd64 broken
broken-job test-amd64-i386-freebsd10-amd64 broken
broken-job test-amd64-amd64-qemuu-freebsd11-amd64 broken
broken-job test-amd64-i386-xl-raw broken
broken-job test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow broken
broken-job test-amd64-amd64-pygrub broken
broken-job test-amd64-i386-qemuu-rhel6hvm-intel broken
broken-job test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm broken
broken-job test-amd64-amd64-libvirt-vhd broken
broken-job test-amd64-i386-xl-qemut-ws16-amd64 broken
broken-job test-amd64-i386-xl-qemut-win7-amd64 broken

Not pushing.

------------------------------------------------------------
commit 1c7d984645f9ade9b47e862b5880734ad498fea8
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Feb 5 08:54:03 2021 +0100

    x86/msr: fix handling of MSR_IA32_PERF_{STATUS/CTL} (again, part 2)
    
    X86_VENDOR_* aren't bit masks in the older trees.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit f9090d990e201a5ca045976b8ddaab9fa6ee69dd
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Feb 4 15:41:12 2021 +0100

    x86/msr: fix handling of MSR_IA32_PERF_{STATUS/CTL} (again)
    
    X86_VENDOR_* aren't bit masks in the older trees.
    
    Reported-by: James Dingwall <james@dingwall.me.uk>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 06:46:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 06:46:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83482.155485 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9jG5-00071n-8R; Wed, 10 Feb 2021 06:45:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83482.155485; Wed, 10 Feb 2021 06:45:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9jG5-00071g-5B; Wed, 10 Feb 2021 06:45:53 +0000
Received: by outflank-mailman (input) for mailman id 83482;
 Wed, 10 Feb 2021 06:45:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gno4=HM=unikie.com=jukka.kaartinen@srs-us1.protection.inumbo.net>)
 id 1l9jG3-00071b-D2
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 06:45:51 +0000
Received: from mail-lj1-x236.google.com (unknown [2a00:1450:4864:20::236])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5b0ce87f-e799-4d48-beec-2afbc68f30dc;
 Wed, 10 Feb 2021 06:45:50 +0000 (UTC)
Received: by mail-lj1-x236.google.com with SMTP id e17so1390944ljl.8
 for <xen-devel@lists.xenproject.org>; Tue, 09 Feb 2021 22:45:50 -0800 (PST)
Received: from [192.168.1.7] (adsl12-kmo.oulunkaari.net. [213.255.164.81])
 by smtp.gmail.com with ESMTPSA id t27sm256401ljo.93.2021.02.09.22.45.48
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 09 Feb 2021 22:45:48 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5b0ce87f-e799-4d48-beec-2afbc68f30dc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=unikie-com.20150623.gappssmtp.com; s=20150623;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=plIu20ef33Gm0Sap/98ltm0awXIsXPGHKtC6qSVBUUc=;
        b=mT31oQf/ijljow1ax2FtFFp0EHYSQQJLioMMzbxbHuq7kAI17GcZ3pXf/g/uu0Iw3D
         wzlVdjWmtYjrXKdeJtk/XCE4xxqeFLPuPAt0eVCS2TwNfW6epAZHEqLKTZX/Jg7O/98B
         EntMYkmxeroYg/J559lkXG5W8p/PO2zFmJUQE+0glKEM0ZYqm0TROu6OR3ZjxiF/RHms
         GLNo8n3UiB884ND+qpfICN9Pewe/tO3Sbk3d51YOcuiamfu49ZFxbA3k0sIb9kmPoXFl
         KW323hAXaJavumn03lhckv9b4ZxdYvHxXj4RQSj4tG7gFVUC1FsoGy3nGSR9PKlzMSFj
         QRlg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=plIu20ef33Gm0Sap/98ltm0awXIsXPGHKtC6qSVBUUc=;
        b=NXpaMnXrDjFg64e9BFADPnhRTxfv5b/2JgXIVBLiFxjckHY/tfs+IkRdwRI9/4bnIs
         fc94B2Zc8bQbLB4anDkYrRL1r92yV+BetgxV3+eNSEOSpmQl6GM8vQ4bFWEzm3jNoJFV
         xILOP479ZGJzFc09htqVFCt11djILpvqlUJyzrpgH0I9scPQHL8XfB+MMTA30mUFPi9Y
         oGKw9mc8qtbs5AZIVKeSB/WmqcxrdNqMbd0D4fLGzEASA2oMl5DKO5NsOP1a6mfJaNWD
         8yTddkm20t4Hlb8Ens0cpXdZQ2bDI0S2BzqwIj/s6mN/qi30zDXq1s7f8WTeM0cZkqoU
         IPIg==
X-Gm-Message-State: AOAM533mbFNyd+/FsC1ldIGAJ8FzglH2MXsZz3prBJwDqU3oTFjeIvuV
	u/LNZw0EZRayLs26LlH8TnadcA==
X-Google-Smtp-Source: ABdhPJw8WhFnvylP18RwvzfuQpdmnJCu2mcda7B2DkJQSP5qOxl+4/xUIWHJiyE/2ZSEdvqLBe8swA==
X-Received: by 2002:a2e:984a:: with SMTP id e10mr996526ljj.160.1612939548876;
        Tue, 09 Feb 2021 22:45:48 -0800 (PST)
Subject: Re: [PATCH v2] xen: workaround missing device_type property in
 pci/pcie nodes
To: Elliott Mitchell <ehem+xen@m5p.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, julien@xen.org,
 Volodymyr_Babchuk@epam.com,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
References: <20210209195334.21206-1-sstabellini@kernel.org>
 <YCM+BBvcMoloMXeT@mattapan.m5p.com>
From: Jukka Kaartinen <jukka.kaartinen@unikie.com>
Message-ID: <739f5d0a-5649-b7d6-a75e-511a6ac3fc64@unikie.com>
Date: Wed, 10 Feb 2021 08:45:47 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <YCM+BBvcMoloMXeT@mattapan.m5p.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit



On 10.2.2021 3.59, Elliott Mitchell wrote:
> On Tue, Feb 09, 2021 at 11:53:34AM -0800, Stefano Stabellini wrote:
>> This fixes Xen boot on RPi4. Some RPi4 kernels have the following node
>> on their device trees:
> 
> IMO I like keeping the reference to Linux kernel d1ac0002dd29 in the
> commit message.  The commit is distinctly informative and takes some
> searching to find in the thread archive.  This though is merely /my/
> opinion.
> 
> Two builds later to ensure I've got a build which doesn't work due to the
> problematic device-tree and one with the patch to ensure it does fix the
> issue and:
> 
> Tested-by: Elliott Mitchell <ehem+xen@m5p.com>
> 
> 
> Note to Jukka Kaartinen, people who merely build from source are useful
> for confirming proposed fixes work.  The above line gets added to commit
> messages to note people have tried it and it works for them.
> 
> 

Thanks for the info!
I tested the fix and it works fine.

Tested-by: Jukka Kaartinen <jukka.kaartinen@unikie.com>


-Jukka


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 08:29:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 08:29:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83488.155497 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9ksZ-00088U-QB; Wed, 10 Feb 2021 08:29:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83488.155497; Wed, 10 Feb 2021 08:29:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9ksZ-00088N-N6; Wed, 10 Feb 2021 08:29:43 +0000
Received: by outflank-mailman (input) for mailman id 83488;
 Wed, 10 Feb 2021 08:29:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=X/zh=HM=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1l9ksY-00088I-PX
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 08:29:42 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e58bb0c2-e654-4c0b-9294-f960bf46bbca;
 Wed, 10 Feb 2021 08:29:41 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e58bb0c2-e654-4c0b-9294-f960bf46bbca
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612945781;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=UMtyzgIZuOXNpNwWmsl0meo2GD642Ak+5j1R92yhDWo=;
  b=F3c744fCpjvAGiB37cAGWhDtqVBvawFa6dg6KRgFQlkA+MS37mtXYmll
   CCFks8j1bOMB2CKLQfP83r3+hlQKPS7rmseQ+c5TYFdFQ9Lgas/gxP0Ge
   uGQJRnXy6Z2Ir5mbCd4zuMPiXJPY0NTaGLQ26QVlPqkCF3Qgjaxo+uYYV
   Q=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: WuRgUfPxBPSwhfDErkDvHd5CHI6RJLSoQoX020a0dE/7eR8FKW/4SM+/FiErhzHSH5egOfd1DT
 1MstnfEW/IGfTRX89fgctzEymqXcYVfR8IgWfdHri4uRH3tE3iJNXzhaO0LbrFN+MyGSAqXp+I
 cfeDws1ixQteLHqu9dnNQyfxfCh7pJHwLb0+ZlqJ/QV+rwRndZVRyiBw2VqDXtFwneSOGQ4v2H
 /gTa9pRqWK+NVvmOAI3g+n/wmjYdMtrIColsPdd2E06uwY24ajIM9zzlPSqqWepDRQjV1llhkq
 nDI=
X-SBRS: 5.2
X-MesageID: 37128466
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,167,1610427600"; 
   d="scan'208";a="37128466"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=O3zTYRHWSu8ypTsi/rdOU6GmRA5prKYbuwKNAzdg1HbwIsVqJb66giuwg0gM0ZwsMGSTs+a536ya+1XFA8kOsqVsfzI4iVnwnXkfWofslgEnDFgExO7XLRq3x3So+NbKKf+lqPWnZ7yHGVDiA2kO7+ADOIe+EjmrIHda6oh6WV/fJgnzQS+XCXWP8Dzdlw5WsdsVvBBe0ChFdriFzNRCvkSueQOcy4vEuo8fc77iRyFUXdsF8Jzx+zMgxNr9vO0C3iCnOO6V8jbb4+rQQFdAOvigyYNSkJF4QFn6KMPkVWVJT3MSZ6W5wc/QnFOZN1aFtINcXnvSrS4HsmGfp9cgag==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=tMGIVTMavWni1yEkj8pJgHZaJwyiQFr6zEaJOla1vTc=;
 b=Da1tbaWGI9tZg5Py1UHeM6+/wFBDXOhD4slfXMAzBmslN9WCktUDqpqg4+hTaVb7yLDCqe014+/k2ejC3lnP6zOp2xl15k4/QXaVnpGwduS7qh844AmiM9s6Bo+BEb/SfLTL1Snq3uCp8pwLUHr6qXo9twOD8jcX7p90XEwjEFG+zwjpsn4NWiiIu5Kl1x1EAqLvZs5e72s0YqqKVc320K5O1tYPph6EaYzuMS88yf3ve80BoFt43IdWOWE6f8wbzWn97CjPFLbn5kVkBUDERVT1GX0IyILINAmWvcQL3aiMp8Um3zp6atiaZr57onlCBfzQgOK1aWdGwTwj+KJL+A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=tMGIVTMavWni1yEkj8pJgHZaJwyiQFr6zEaJOla1vTc=;
 b=NjJXEnQNiD+FHOUyx0AWRwCffBcEIWeqiIfQEA7cduE9aIhzIaWA0o1AoiVVUjhCIejeNlUgCsEeIydsUwGujqIyLNCg4WxjwuFhdGanhJ9NpsRqAjXAILuMM00DXsj+Z2O+bcbRlw5oS8blBvEl6zBcINQZY8iMZPgdbGrGUVg=
Date: Wed, 10 Feb 2021 09:29:32 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Julien Grall <julien@xen.org>
CC: <xen-devel@lists.xenproject.org>, <hongyxia@amazon.co.uk>,
	<iwj@xenproject.org>, Julien Grall <jgrall@amazon.com>, Jan Beulich
	<jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu
	<wl@xen.org>
Subject: Re: [for-4.15][PATCH v2 1/5] xen/x86: p2m: Don't map the special
 pages in the IOMMU page-tables
Message-ID: <YCOZbNly7YCSNtHY@Air-de-Roger>
References: <20210209152816.15792-1-julien@xen.org>
 <20210209152816.15792-2-julien@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20210209152816.15792-2-julien@xen.org>
X-ClientProxiedBy: MRXP264CA0037.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:14::25) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 62e52f71-b6e3-40aa-00aa-08d8cd9e0132
X-MS-TrafficTypeDiagnostic: DM6PR03MB3737:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB3737382FCB52868F7C4D7F008F8D9@DM6PR03MB3737.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 62e52f71-b6e3-40aa-00aa-08d8cd9e0132
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Feb 2021 08:29:38.2929
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: cuq05WfodddlXYg30sjZfvCiIhKaQQgW4S2rqU2vZjwd3yOO3Nlm1ipFGcVlUMxuezjZloQww42HJUxZE4oEzA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB3737
X-OriginatorOrg: citrix.com

On Tue, Feb 09, 2021 at 03:28:12PM +0000, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Currently, the IOMMU page-tables will be populated early in the domain
> creation if the hardware is able to virtualize the local APIC. However,
> the IOMMU page tables will not be freed during early failure, which will
> result in a leak.
> 
> An assigned device should not need to DMA into the vLAPIC page, so we
> can avoid mapping the page in the IOMMU page-tables.
> 
> This statement is also true for any special page (the vLAPIC page is
> one of them). So take the opportunity to prevent the mapping for all
> of them.

Hm, OK. While I assume special pages are unlikely to be the target of
DMA operations, it's not easy to spot which pages are special.

> Note that:
>     - This is matching the existing behavior with PV guest

You might make HVM guests not sharing page-tables 'match' PV
behavior, but you are making behavior between HVM guests themselves
diverge.


>     - This doesn't change the behavior when the P2M is shared with the
>     IOMMU. IOW, the special pages will still be accessible by the
>     device.

I have to admit I don't like this part at all. Having diverging device
mappings depending on whether the page tables are shared or not is
bad IMO, as there might be subtle bugs affecting one of the two
modes.

I get the feeling this is just papering over an existing issue instead
of actually fixing it: IOMMU page tables need to be properly freed
during early failure.

> Suggested-by: Jan Beulich <jbeulich@suse.com>
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> ---
>     Changes in v2:
>         - New patch
> ---
>  xen/include/asm-x86/p2m.h | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
> index 7d63f5787e62..1802545969b3 100644
> --- a/xen/include/asm-x86/p2m.h
> +++ b/xen/include/asm-x86/p2m.h
> @@ -919,6 +919,10 @@ static inline unsigned int p2m_get_iommu_flags(p2m_type_t p2mt, mfn_t mfn)
>  {
>      unsigned int flags;
>  
> +    /* Don't map special pages in the IOMMU page-tables. */

I think this should explain why special pages don't require IOMMU
mappings, or even just note that special pages cannot be added to the
IOMMU page tables because they would not be freed afterwards, and that
this is a bodge for it.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 08:37:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 08:37:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83491.155512 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9kzy-0000bV-L0; Wed, 10 Feb 2021 08:37:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83491.155512; Wed, 10 Feb 2021 08:37:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9kzy-0000bO-HW; Wed, 10 Feb 2021 08:37:22 +0000
Received: by outflank-mailman (input) for mailman id 83491;
 Wed, 10 Feb 2021 08:37:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jO30=HM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l9kzx-0000bJ-O3
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 08:37:21 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 13318f3e-d536-47bb-9605-87b57fcd3774;
 Wed, 10 Feb 2021 08:37:20 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id AAC53AB71;
 Wed, 10 Feb 2021 08:37:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 13318f3e-d536-47bb-9605-87b57fcd3774
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612946239; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=AcDjXdURlVn6QUZXRLyz4HTGC4SkraH3h0dv/G8ZZVs=;
	b=ZFYBlzOi8VsYv9fCLOo1gdelGPFvD9JZDHvyJnrW0Vu04IHTE22jc9GgmyfyrjNLdz8DLd
	HXHeRNVCrUagFGeT0HM5Ke5vtzs2jusBY9yIfzoMs2c/1Yr/VHCnOp5XTQUBR2NeRjTFfh
	5eSqHtwmz5Mo0sOt11yEIlL5HGApKHM=
Subject: Re: [PATCH for-4.15] x86/ucode: Fix microcode payload size for Fam19
 processors
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>,
 Ian Jackson <iwj@xenproject.org>
References: <20210209153336.4016-1-andrew.cooper3@citrix.com>
 <c09110f7-6459-e1f7-2175-09d535ad03ce@suse.com>
 <24610.50089.887907.573064@mariner.uk.xensource.com>
 <6f44ae66-9956-3312-c4c8-b0f1e4b568bb@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a0da7d15-d0a2-62c6-0551-f62a09780e16@suse.com>
Date: Wed, 10 Feb 2021 09:37:15 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <6f44ae66-9956-3312-c4c8-b0f1e4b568bb@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 09.02.2021 18:39, Andrew Cooper wrote:
> On 09/02/2021 17:17, Ian Jackson wrote:
>> Jan Beulich writes ("Re: [PATCH for-4.15] x86/ucode: Fix microcode payload size for Fam19 processors"):
>>> On 09.02.2021 16:33, Andrew Cooper wrote:
>>>> The original limit provided wasn't accurate.  Blobs are in fact rather larger.
>>>>
>>>> Fixes: fe36a173d1 ("x86/amd: Initial support for Fam19h processors")
>>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>> Acked-by: Jan Beulich <jbeulich@suse.com>
>>>
>>>> --- a/xen/arch/x86/cpu/microcode/amd.c
>>>> +++ b/xen/arch/x86/cpu/microcode/amd.c
>>>> @@ -111,7 +111,7 @@ static bool verify_patch_size(uint32_t patch_size)
>>>>  #define F15H_MPB_MAX_SIZE 4096
>>>>  #define F16H_MPB_MAX_SIZE 3458
>>>>  #define F17H_MPB_MAX_SIZE 3200
>>>> -#define F19H_MPB_MAX_SIZE 4800
>>>> +#define F19H_MPB_MAX_SIZE 5568
>>> How certain is it that there's not going to be another increase?
>>> And in comparison, how bad would it be if we pulled this upper
>>> limit to something that's at least slightly less of an "odd"
>>> number, e.g. 0x1800, and thus provide some headroom?
>> 5568 does seem really an excessively magic number...
> 
> It reads better in hex - 0x15c0.  It is exactly the header,
> match/patch-ram, and the digital signature of a fixed algorithm.

And I realize it's less odd than Fam16's 3458 (0xd82).

> It's far simpler than Intel's format, which contains multiple embedded
> blobs, and support for minor platform variations within the same blob.
> 
> There are a lot of problems with how we do patch verification on AMD
> right now, but it's all a consequence of the header not containing a
> length field.
> 
> This number won't change now.  Zen3 processors are out in the world.

Given history on earlier families, where in some cases sizes
also vary by model, how can this fact guarantee anything?

Jan


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 08:50:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 08:50:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83495.155523 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9lCa-0002N3-QS; Wed, 10 Feb 2021 08:50:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83495.155523; Wed, 10 Feb 2021 08:50:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9lCa-0002Mw-NQ; Wed, 10 Feb 2021 08:50:24 +0000
Received: by outflank-mailman (input) for mailman id 83495;
 Wed, 10 Feb 2021 08:50:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1l9lCZ-0002Mr-Ph
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 08:50:23 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l9lCX-0005KQ-7C; Wed, 10 Feb 2021 08:50:21 +0000
Received: from gw1.octic.net ([81.187.162.82] helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l9lCW-0006aS-VO; Wed, 10 Feb 2021 08:50:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=EJC1p+lr5039VDZDISnYId5De2QGLYOJDSMvjJC1rLc=; b=khxuRFrec3yiFQjSlYOsYDX+Lt
	QxkCY4adtQiCPY21A4DynA4UqtMZOqg+HSCCPxJ94BcZqTLuZv3fOwEAdP84wXw57RvwZoRIWXuiq
	Y/9B2wkbD7BBiG/QIOyPSzkb49hM3iJeKlR8q5yp4cSLUIDGj6+eaIScBwcCd+RKjymc=;
Subject: Re: [for-4.15][PATCH v2 1/5] xen/x86: p2m: Don't map the special
 pages in the IOMMU page-tables
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, hongyxia@amazon.co.uk,
 iwj@xenproject.org, Julien Grall <jgrall@amazon.com>,
 Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Wei Liu <wl@xen.org>
References: <20210209152816.15792-1-julien@xen.org>
 <20210209152816.15792-2-julien@xen.org> <YCOZbNly7YCSNtHY@Air-de-Roger>
From: Julien Grall <julien@xen.org>
Message-ID: <0910de32-8af9-6c21-b17a-b569aa59c4a4@xen.org>
Date: Wed, 10 Feb 2021 08:50:17 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <YCOZbNly7YCSNtHY@Air-de-Roger>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Roger,

On 10/02/2021 08:29, Roger Pau Monné wrote:
> On Tue, Feb 09, 2021 at 03:28:12PM +0000, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> Currently, the IOMMU page-tables will be populated early in the domain
>> creation if the hardware is able to virtualize the local APIC. However,
>> the IOMMU page tables will not be freed during early failure, which will
>> result in a leak.
>>
>> An assigned device should not need to DMA into the vLAPIC page, so we
>> can avoid mapping the page in the IOMMU page-tables.
>>
>> This statement is also true for any special page (the vLAPIC page is
>> one of them). So take the opportunity to prevent the mapping for all
>> of them.
> 
> Hm, OK. While I assume special pages are unlikely to be the target of
> DMA operations, it's not easy to spot which pages are special.

Special pages are allocated by Xen for the grant table, vCPU info...

> 
>> Note that:
>>      - This is matching the existing behavior with PV guest
> 
> You might make HVM guests not sharing page-tables 'match' PV
> behavior, but you are making behavior between HVM guests themselves
> diverge.
> 
> 
>>      - This doesn't change the behavior when the P2M is shared with the
>>      IOMMU. IOW, the special pages will still be accessible by the
>>      device.
> 
> I have to admit I don't like this part at all. Having diverging device
> mappings depending on whether the page tables are shared or not is
> bad IMO, as there might be subtle bugs affecting one of the two
> modes.
> 
> I get the feeling this is just papering over an existing issue instead
> of actually fixing it: IOMMU page tables need to be properly freed
> during early failure.

My initial approach was to free the IOMMU page tables during early 
failure (see [1] and [2]). But Jan said the special pages should really 
not be mapped in the IOMMU ([3]) and he wasn't very happy with freeing 
the IOMMU page tables on early failure.

I don't particularly care about the approach as long as we don't leak 
IOMMU page-tables at the end.

So please try to find a common ground with Jan here.

Cheers,

[1] <20201222154338.9459-1-julien@xen.org>
[2] <20201222154338.9459-5-julien@xen.org>

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 09:02:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 09:02:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83497.155536 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9lOL-0003Py-0X; Wed, 10 Feb 2021 09:02:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83497.155536; Wed, 10 Feb 2021 09:02:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9lOK-0003Pr-Ro; Wed, 10 Feb 2021 09:02:32 +0000
Received: by outflank-mailman (input) for mailman id 83497;
 Wed, 10 Feb 2021 09:02:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=en6K=HM=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1l9lOI-0003Pm-Sc
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 09:02:30 +0000
Received: from mail-lf1-x133.google.com (unknown [2a00:1450:4864:20::133])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a7f91d6f-0f9e-4b34-9c87-6a909752286c;
 Wed, 10 Feb 2021 09:02:29 +0000 (UTC)
Received: by mail-lf1-x133.google.com with SMTP id u25so1726062lfc.2
 for <xen-devel@lists.xenproject.org>; Wed, 10 Feb 2021 01:02:29 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id p3sm211881lfu.271.2021.02.10.01.02.27
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 10 Feb 2021 01:02:28 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a7f91d6f-0f9e-4b34-9c87-6a909752286c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=7G5wVlyRsXuQzM/tjl3QMxBCBuW8EeiKEzvQNC3uzGw=;
        b=KVmGRBLWx3DqHtvFy0kgHN6SH+CngI92cF5MXuq6qA+MIqdQ8xLnDGuMUm/DMLVisB
         ERf+ZhF0owfTt/goko7oHISPBRZwavdzye4SeS4Ezv9YrsMt3Z30I2S+lZv64/4TIRz2
         uoC579sap1XR0VZy8OsoIeZ3RFC9p/aQV7zFAbeAvrErbZOd7AHNnAaWORiFXQO7g0NQ
         Sc69ULHO16+b6nAe0USVEDg6Sbc6eYETpx1CZZ/ZiWjqCDfRNMI6fcQYtHzyN760swOs
         4yvyrCsjnyx/hoZK7dZaWIM8jDPgqBpBvE00G1yGrrpNYAO9mLmaPhrPJy0sbMcac3MS
         0Lgg==
X-Received: by 2002:a05:6512:2214:: with SMTP id h20mr1158603lfu.81.1612947748361;
        Wed, 10 Feb 2021 01:02:28 -0800 (PST)
Subject: Re: [PATCH V4 24/24] [RFC] libxl: Add support for virtio-disk
 configuration
To: Julien Grall <julien@xen.org>
Cc: xen-devel@lists.xenproject.org,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <1610488352-18494-1-git-send-email-olekstysh@gmail.com>
 <1610488352-18494-25-git-send-email-olekstysh@gmail.com>
 <e1da0892-5496-b438-f52f-1e5dd8d48979@xen.org>
 <87f92e40-6462-21ba-0c56-b77c6518fef8@gmail.com>
 <dce22061-aa73-dba7-601d-fe20f989688d@xen.org>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <57272148-ff37-1e5e-1b83-b56304431bc9@gmail.com>
Date: Wed, 10 Feb 2021 11:02:22 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <dce22061-aa73-dba7-601d-fe20f989688d@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 20.01.21 19:05, Julien Grall wrote:
> Hi Oleksandr,


Hi Julien


Sorry for the late response.


>
> On 18/01/2021 08:32, Oleksandr wrote:
>>
>> On 16.01.21 00:01, Julien Grall wrote:
>>> Hi Oleksandr,
>>
>> Hi Julien
>>
>>
>>>
>>> On 12/01/2021 21:52, Oleksandr Tyshchenko wrote:
>>>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>>>
>>>> This patch adds basic support for configuring and assisting 
>>>> virtio-disk
>>>> backend (emulator), which is intended to run outside of QEMU and 
>>>> could be run
>>>> in any domain.
>>>>
>>>> Xenstore was chosen as a communication interface for the emulator 
>>>> running
>>>> in non-toolstack domain to be able to get configuration either by 
>>>> reading
>>>> Xenstore directly or by receiving command line parameters (an 
>>>> updated 'xl devd'
>>>> running in the same domain would read Xenstore beforehand and call 
>>>> backend
>>>> executable with the required arguments).
>>>>
>>>> An example of domain configuration (two disks are assigned to the 
>>>> guest,
>>>> the latter is in readonly mode):
>>>>
>>>> vdisk = [ 'backend=DomD, disks=rw:/dev/mmcblk0p3;ro:/dev/mmcblk1p3' ]
>>>>
>>>> Where per-disk Xenstore entries are:
>>>> - filename and readonly flag (configured via "vdisk" property)
>>>> - base and irq (allocated dynamically)
>>>>
>>>> Besides handling 'visible' params described in the configuration file,
>>>> the patch also allocates virtio-mmio specific ones for each device and
>>>> writes them into Xenstore. virtio-mmio params (irq and base) are
>>>> unique per guest domain; they are allocated at domain creation time
>>>> and passed through to the emulator. Each VirtIO device has at least
>>>> one pair of these params.
>>>>
>>>> TODO:
>>>> 1. An extra "virtio" property could be removed.
>>>> 2. Update documentation.
>>>>
>>>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>>> [On Arm only]
>>>> Tested-by: Wei Chen <Wei.Chen@arm.com>
>>>>
>>>> ---
>>>> Changes RFC -> V1:
>>>>     - no changes
>>>>
>>>> Changes V1 -> V2:
>>>>     - rebase according to the new location of libxl_virtio_disk.c
>>>>
>>>> Changes V2 -> V3:
>>>>     - no changes
>>>>
>>>> Changes V3 -> V4:
>>>>     - rebase according to the new argument for 
>>>> DEFINE_DEVICE_TYPE_STRUCT
>>>>
>>>> Please note, there is a real concern about VirtIO interrupts 
>>>> allocation.
>>>> [Just copy here what Stefano said in RFC thread]
>>>>
>>>> So, if we end up allocating let's say 6 virtio interrupts for a 
>>>> domain,
>>>> the chance of a clash with a physical interrupt of a passthrough 
>>>> device is real.
>>>
>>> For the first version, I think a static approach is fine because it 
>>> doesn't bind us to anything yet (there is no interface change). We 
>>> can refine it on follow-ups as we figure out how virtio is going to 
>>> be used in the field.
>>>
>>>>
>>>> I am not entirely sure how to solve it, but these are a few ideas:
>>>> - choosing virtio interrupts that are less likely to conflict 
>>>> (maybe > 1000)
>>>
>>> Well, we only support 988 interrupts :). However, we will waste some 
>>> memory in the vGIC structure (we would need to allocate memory for 
>>> all 988 interrupts) if you choose an interrupt towards the end.
>>>
>>>> - make the virtio irq (optionally) configurable so that a user could
>>>>    override the default irq and specify one that doesn't conflict
>>>
>>> This is not very ideal because it makes the use of virtio quite 
>>> unfriendly with passthrough. Note that platform device passthrough 
>>> is already unfriendly, but I am thinking PCI :).
>>>
>>>> - implementing support for virq != pirq (even the xl interface doesn't
>>>>    allow to specify the virq number for passthrough devices, see 
>>>> "irqs")
>>> I can't remember whether I had a reason to not support virq != pirq 
>>> when this was initially implemented. This is one possibility, but it 
>>> is as unfriendly as the previous option.
>>>
>>> I will add a 4th one:
>>>    - Automatically allocate the virtio IRQ. This should be possible 
>>> to do without too much trouble, as we know in advance which IRQs 
>>> will be passthrough.
>> As I understand it, the IRQs for passthrough are described in the "irq" 
>> property and stored in d_config->b_info.irqs[i], so yes, we know in 
>> advance which IRQs will be used for passthrough
>> and we will be able to choose non-clashing ones (iterating over all 
>> IRQs in a reserved range) for the virtio devices.  The question is 
>> how many IRQs should be reserved.
>
> If we are automatically selecting the interrupt for virtio devices, 
> then I don't think we need to reserve a batch. Instead, we can 
> allocate one by one as we create the virtio device in libxl.

Looks like, yes, the reserved range is not needed if we use the 4th option.


>
>
> For the static case, then a range of 10-20 might be sufficient for now.

ok


Thinking a bit more about which approach to choose...
I would tend to automatically allocate the virtio IRQ (the 4th option) 
rather than use the static approach with reserved IRQs,
in order to eliminate the chance of a clash with physical IRQs 
completely from the very beginning. On the other hand,
we could indeed use the static approach (as the simpler one) for now and 
refine it when we have more understanding of how virtio will be used.
What do you think?


>
>
> [...]
>
>>>> -        nr_spis += (GUEST_VIRTIO_MMIO_SPI - 32) + 1;
>>>> +        uint64_t virtio_base;
>>>> +        libxl_device_virtio_disk *virtio_disk;
>>>> +
>>>> +        virtio_base = GUEST_VIRTIO_MMIO_BASE;
>>>>          virtio_irq = GUEST_VIRTIO_MMIO_SPI;
>>>
>>> Looking at patch #23, you defined a single SPI and a region that can 
>>> only fit virtio device. However, here, you are going to define 
>>> multiple virtio devices.
>>>
>>> I think you want to define the following:
>>>
>>>  - GUEST_VIRTIO_MMIO_BASE: Base address of the virtio window
>>>  - GUEST_VIRTIO_MMIO_SIZE: Full length of the virtio window (may 
>>> contain multiple devices)
>>>  - GUEST_VIRTIO_SPI_FIRST: First SPI reserved for virtio
>>>  - GUEST_VIRTIO_SPI_LAST: Last SPI reserved for virtio
>>>
>>> The per-device size doesn't need to be defined in arch-arm.h. 
>>> Instead, I would only define it internally (unless we can use a 
>>> virtio.h header from Linux?).
>>
>> I think I got the idea. What are the preferences for these values?
>
> I have suggested some values in patch #23. Let me know what you think 
> there.

ok, thank you. I agree with the values.


>
>
> [...]
>
>>>> +
>>>> +        nr_spis += (virtio_irq - 32) + 1;
>>>>          virtio_enabled = true;
>>>>      }
>>>
>>> [...]
>>>
>>>> diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
>>>> index 2a3364b..054a0c9 100644
>>>> --- a/tools/xl/xl_parse.c
>>>> +++ b/tools/xl/xl_parse.c
>>>> @@ -1204,6 +1204,120 @@ out:
>>>> Â Â Â Â Â  if (rc) exit(EXIT_FAILURE);
>>>> Â  }
>>>> Â  +#define MAX_VIRTIO_DISKS 4
>>>
>>> May I ask why this is hardcoded to 4?
>>
>> I found 4 to be a reasonable value for the initial implementation.
>> It is how many disks a single device instance can handle.
>
> Right, the question is why do you need to impose a limit in xl?
>
> Looking at the code, the value is only used in:
>
> +        if (virtio_disk->num_disks > MAX_VIRTIO_DISKS) {
> +            fprintf(stderr, "vdisk: currently only %d disks are supported",
> +                    MAX_VIRTIO_DISKS);
>
> The rest of the code (at least in libxl/xl) seems to be completely 
> agnostic to the number of disks. So it feels strange to me to impose 
> what looks like an arbitrary limit in the tools.

OK, I will drop this limit here.


>
>
> Cheers,
>
-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Wed Feb 10 09:06:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 09:06:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83499.155548 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9lRz-0003af-Kj; Wed, 10 Feb 2021 09:06:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83499.155548; Wed, 10 Feb 2021 09:06:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9lRz-0003aY-Hm; Wed, 10 Feb 2021 09:06:19 +0000
Received: by outflank-mailman (input) for mailman id 83499;
 Wed, 10 Feb 2021 09:06:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oq3b=HM=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1l9lRy-0003aT-MN
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 09:06:18 +0000
Received: from mo6-p00-ob.smtp.rzone.de (unknown [2a01:238:400:100::a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 99846e07-2bca-4cbe-a49b-d309fd568cb8;
 Wed, 10 Feb 2021 09:06:17 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.17.1 DYNA|AUTH)
 with ESMTPSA id 604447x1A96D46z
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 10 Feb 2021 10:06:13 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 99846e07-2bca-4cbe-a49b-d309fd568cb8
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1612947976;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
	From:Subject:Sender;
	bh=o2zy8YJhH4SinklC41ksuuoPwiukLp/2mXQaUSvxX0s=;
	b=jg6ANq+InhImUDGHn1rCWk4xchpMnULsWfTGtpp4b0AY8Y6LpY2gxtdi6FGvsy8VUV
	fW2sLZmLlBdQ6FCWRrQnkcXqFDKa2/zKS6hsOZCVMN8XwXSrnKVnpchNWwBZVzY5DgTr
	zMgswyWz+HAEzEyyXfcpG/MpF1w5LCbqQwumOCxJnOQV62e1bPpF++MM9VdGdL3RlJ9V
	wte0jtc70Ih4KPLuF2dOauSeO42PmQ2rjjOPc0KNomq41oYumoUR3jPqEa9yyRL0zl1h
	84P76XLGQWQzHtZHm9XAxqI/5kEflL1RYcgE7FybaVF4che+swfBVxvx2MRfsehZLga8
	Q96g==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDXdoX8l8pYAcz5OTW+v4/A=="
X-RZG-CLASS-ID: mo00
Date: Wed, 10 Feb 2021 10:06:06 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Ian Jackson <iwj@xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>, Anthony PERARD
 <anthony.perard@citrix.com>
Subject: Re: [PATCH v20210209 4/4] xl: disable --debug option for xl migrate
Message-ID: <20210210100606.45de7991.olaf@aepfle.de>
In-Reply-To: <24610.49788.493621.307069@mariner.uk.xensource.com>
References: <20210209154536.10851-1-olaf@aepfle.de>
	<20210209154536.10851-5-olaf@aepfle.de>
	<24610.49788.493621.307069@mariner.uk.xensource.com>
X-Mailer: Claws Mail 2020.08.19 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
 boundary="Sig_/6cPC=H_55pXAQgsSe3Vgr/P"; protocol="application/pgp-signature"

--Sig_/6cPC=H_55pXAQgsSe3Vgr/P
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

Am Tue, 9 Feb 2021 17:12:28 +0000
schrieb Ian Jackson <iwj@xenproject.org>:

> It seems to me that something is definitely a bug here but I want to
> understand from Andy what the best thing to do is.  I'm hesitant to
> release-ack removing this at this stage.
>
> Wouldn't it be better to just fix the docs like in your previously
> suggested patch ?

To make that initial patch accurate, this additional change is required:

--- a/tools/libs/guest/xg_sr_save.c
+++ b/tools/libs/guest/xg_sr_save.c
@@ -752,7 +752,7 @@ static int send_domain_memory_live(struct xc_sr_context *ctx)
     if ( rc )
         goto out;
 
-    if ( ctx->save.debug && ctx->stream_type != XC_STREAM_PLAIN )
+    if ( ctx->save.debug )
     {
         rc = verify_frames(ctx);
         if ( rc )

Otherwise there is no way for "xl" to trigger a call into verify_frames.

I will test this patch today.


Olaf

--Sig_/6cPC=H_55pXAQgsSe3Vgr/P
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmAjof4ACgkQ86SN7mm1
DoBkZw//Rp/wSMuDlOvHeAxuQlzEseQAtIP0YDoKUqRsICFpZOU+wTdEaQMUN60n
7WwpAefo9zIHSiiqxFGyb/nANE5AN6ZNx06Brk+cxMAQpG3593s2sOXWi//6Q+O1
jxs/ya9cfwvbCWDx9D7FIGbIxrnqA6NNzhzKUktkYulh+UqPoSBYmsCw09NEm90i
dJfN0KywOKc1N44s+wojV7le3sxMRxgBEZtHCtwW4HmlxmpgR2eO3de9FfhxcqLB
84scMTqYi6K4bon+VDGV3btIgJSfRh65Js/1t8Nx9Z12E1XcI0D8cEGnBBFonkDt
xbIVCxDH8176f6kQ9MxgA3s4wk1S3QWDrHsQBG49QuVAxmpwgsjWDFI8BRfR50YS
xIG4mYV+Ka1IwQXbe7z2MskWYpYuCOarOwsd0w2TACpXC3UnSqJbggI00XC4xuv+
UOxXSuvZUKPgdu8SWq9UfPzZPWyYZGW9CegqObGB7uLBjSANX0hgpOQeeSlacYBy
cQhrYnuxi5alqDiuHfpYZ0b2WvhdfmWv4XYRYlckW0Rxr/6oGv2D1Ez1PxtZirpo
iUxDMvkeEm54dTITlQPxgGAgPHwHrZsisFZ4HIT56WZ0dxyuWFjmytAJWkm7POmV
xCs8nt21q5OnHWYe2z0Ba9EH2rGOLuQSEIlPISJCXcHHgnA0u+A=
=euQG
-----END PGP SIGNATURE-----

--Sig_/6cPC=H_55pXAQgsSe3Vgr/P--


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 09:22:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 09:22:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83503.155560 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9lhc-0005Oo-0o; Wed, 10 Feb 2021 09:22:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83503.155560; Wed, 10 Feb 2021 09:22:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9lhb-0005Oh-SW; Wed, 10 Feb 2021 09:22:27 +0000
Received: by outflank-mailman (input) for mailman id 83503;
 Wed, 10 Feb 2021 09:22:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=X/zh=HM=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1l9lha-0005Oc-NN
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 09:22:26 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d0d8016d-1c23-4613-b599-610932a88d6e;
 Wed, 10 Feb 2021 09:22:25 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d0d8016d-1c23-4613-b599-610932a88d6e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612948945;
  h=from:to:cc:subject:date:message-id:
   content-transfer-encoding:mime-version;
  bh=AgmAcQzToFYaA8/EEyPtwH+emMkgUTeWkVOA7LA2jMw=;
  b=S4/hQKxBbFs/c+6U3aeUibFEaUig++kI0NV5FWi4A04CgovvjKQ4pEg3
   RkeyHGF498M4q//S75NqpKd9svKJMEfayyW2a6cjlbMF82sSARB6xud79
   j3+FwseD9kdq0/uGixu2ok5xkVIxPIJosC+Numrb2KQypPS6J6KXlIhAF
   U=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 36877287
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,167,1610427600"; 
   d="scan'208";a="36877287"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Roger Pau Monne <roger.pau@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH] x86/irq: simplify loop in unmap_domain_pirq
Date: Wed, 10 Feb 2021 10:22:11 +0100
Message-ID: <20210210092211.53359-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.29.2
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: AS8PR04CA0244.eurprd04.prod.outlook.com
 (2603:10a6:20b:330::9) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 28c020f7-2ac0-426f-2b26-08d8cda55d19
X-MS-TrafficTypeDiagnostic: DM5PR03MB3371:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM5PR03MB337139505AA353D3D4AEC54D8F8D9@DM5PR03MB3371.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7691;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 28c020f7-2ac0-426f-2b26-08d8cda55d19
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Feb 2021 09:22:19.4416
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: U5SMuLsjF5skozyWUS+4Mb7xmuxHPk2F+UWEIABJfhqzAEpF1mgDkPJ3wVzhJqTBKpqqPwi2RxukdWc5WDYUlA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB3371
X-OriginatorOrg: citrix.com

The for loop in unmap_domain_pirq is unnecessarily complicated, with
several places where the index is incremented and different exit
conditions spread across the loop body.

Simplify it by iterating over each possible PIRQ with a canonical for
loop, removing all in-loop exit points.

No functional change intended.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/irq.c | 60 +++++++++++++---------------------------------
 1 file changed, 16 insertions(+), 44 deletions(-)

diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 032fe82167..856714b5bf 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -2305,7 +2305,6 @@ done:
 /* The pirq should have been unbound before this call. */
 int unmap_domain_pirq(struct domain *d, int pirq)
 {
-    unsigned long flags;
     struct irq_desc *desc;
     int irq, ret = 0, rc;
     unsigned int i, nr = 1;
@@ -2356,11 +2355,23 @@ int unmap_domain_pirq(struct domain *d, int pirq)
     if ( msi_desc != NULL )
         pci_disable_msi(msi_desc);
 
-    spin_lock_irqsave(&desc->lock, flags);
-
-    for ( i = 0; ; )
+    for ( i = 0; i < nr; i++, info = pirq_info(d, pirq + i) )
     {
+        unsigned long flags;
+
+        if ( !info || info->arch.irq <= 0 )
+        {
+            printk(XENLOG_G_ERR "dom%d: MSI pirq %d not mapped\n",
+                   d->domain_id, pirq + i);
+            continue;
+        }
+        irq = info->arch.irq;
+        desc = irq_to_desc(irq);
+
+        spin_lock_irqsave(&desc->lock, flags);
+
         BUG_ON(irq != domain_pirq_to_irq(d, pirq + i));
+        BUG_ON(desc->msi_desc != msi_desc + i);
 
         if ( !forced_unbind )
             clear_domain_irq_pirq(d, irq, info);
@@ -2378,45 +2389,6 @@ int unmap_domain_pirq(struct domain *d, int pirq)
             desc->msi_desc = NULL;
         }
 
-        if ( ++i == nr )
-            break;
-
-        spin_unlock_irqrestore(&desc->lock, flags);
-
-        if ( !forced_unbind )
-           cleanup_domain_irq_pirq(d, irq, info);
-
-        rc = irq_deny_access(d, irq);
-        if ( rc )
-        {
-            printk(XENLOG_G_ERR
-                   "dom%d: could not deny access to IRQ%d (pirq %d)\n",
-                   d->domain_id, irq, pirq + i);
-            ret = rc;
-        }
-
-        do {
-            info = pirq_info(d, pirq + i);
-            if ( info && (irq = info->arch.irq) > 0 )
-                break;
-            printk(XENLOG_G_ERR "dom%d: MSI pirq %d not mapped\n",
-                   d->domain_id, pirq + i);
-        } while ( ++i < nr );
-
-        if ( i == nr )
-        {
-            desc = NULL;
-            break;
-        }
-
-        desc = irq_to_desc(irq);
-        BUG_ON(desc->msi_desc != msi_desc + i);
-
-        spin_lock_irqsave(&desc->lock, flags);
-    }
-
-    if ( desc )
-    {
         spin_unlock_irqrestore(&desc->lock, flags);
 
         if ( !forced_unbind )
@@ -2427,7 +2399,7 @@ int unmap_domain_pirq(struct domain *d, int pirq)
         {
             printk(XENLOG_G_ERR
                    "dom%d: could not deny access to IRQ%d (pirq %d)\n",
-                   d->domain_id, irq, pirq + nr - 1);
+                   d->domain_id, irq, pirq + i);
             ret = rc;
         }
     }
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Wed Feb 10 09:35:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 09:35:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83505.155572 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9lte-0006OW-4f; Wed, 10 Feb 2021 09:34:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83505.155572; Wed, 10 Feb 2021 09:34:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9lte-0006OP-1S; Wed, 10 Feb 2021 09:34:54 +0000
Received: by outflank-mailman (input) for mailman id 83505;
 Wed, 10 Feb 2021 09:34:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1l9ltc-0006OH-As
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 09:34:52 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l9ltZ-00063z-RE; Wed, 10 Feb 2021 09:34:49 +0000
Received: from gw1.octic.net ([81.187.162.82] helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l9ltZ-0000tB-K1; Wed, 10 Feb 2021 09:34:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:References:Cc:To:From:Subject;
	bh=K9Yn1SJvphKf8IhuSW3R+W+2hUnJQyo7vbUspJHeoNM=; b=mwCTUYW8aUWMpYuPp1yInCe1h1
	Z0DlP2vHCoLm47UXXAmJ3tC4gebM6MfUywvzmu1pOsMGg8tAiaSDLw5dRGgHEeYejOzJ1WJkjtQGQ
	MgaWZNuQmhtssVAk1T2XCiwr0+MWsQwVoT1ySmwW1rQU9ipdp+nZdYA9FwSy8Ya790go=;
Subject: Re: [for-4.15][PATCH v2 1/5] xen/x86: p2m: Don't map the special
 pages in the IOMMU page-tables
From: Julien Grall <julien@xen.org>
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, hongyxia@amazon.co.uk,
 iwj@xenproject.org, Julien Grall <jgrall@amazon.com>,
 Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Wei Liu <wl@xen.org>
References: <20210209152816.15792-1-julien@xen.org>
 <20210209152816.15792-2-julien@xen.org> <YCOZbNly7YCSNtHY@Air-de-Roger>
 <0910de32-8af9-6c21-b17a-b569aa59c4a4@xen.org>
Message-ID: <a52aaaa5-1cca-3137-c405-4597288b1331@xen.org>
Date: Wed, 10 Feb 2021 09:34:47 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <0910de32-8af9-6c21-b17a-b569aa59c4a4@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

On 10/02/2021 08:50, Julien Grall wrote:
> Hi Roger,
> 
>> On 10/02/2021 08:29, Roger Pau Monné wrote:
>> On Tue, Feb 09, 2021 at 03:28:12PM +0000, Julien Grall wrote:
>>> From: Julien Grall <jgrall@amazon.com>
>>>
>>> Currently, the IOMMU page-tables will be populated early during domain
>>> creation if the hardware is able to virtualize the local APIC. However,
>>> the IOMMU page tables will not be freed on early failure, which will
>>> result in a leak.
>>>
>>> An assigned device should not need to DMA into the vLAPIC page, so we
>>> can avoid mapping the page in the IOMMU page-tables.
>>>
>>> This statement is also true for any special page (the vLAPIC page is
>>> one of them). So take the opportunity to prevent the mapping for all
>>> of them.
>>
>> Hm, OK, while I assume special pages are unlikely to be the target
>> of DMA operations, it's not easy to spot which pages are special.
> 
> Special pages are allocated by Xen for grant-table, vCPU info...
> 
>>
>>> Note that:
>>>     - This matches the existing behavior with PV guests
>>
>> You might make HVM guests not sharing page-tables 'match' PV
>> behavior, but you are making behavior between HVM guests themselves
>> diverge.
>>
>>
>>>     - This doesn't change the behavior when the P2M is shared with the
>>>     IOMMU. IOW, the special pages will still be accessible by the
>>>     device.
>>
>> I have to admit I don't like this part at all. Having diverging device
>> mappings depending on whether the page tables are shared or not is
>> bad IMO, as there might be subtle bugs affecting one of the two
>> modes.
>>
>> I get the feeling this is just papering over an existing issue instead
>> of actually fixing it: IOMMU page tables need to be properly freed
>> during early failure.
> 
> My initial approach was to free the IOMMU page tables on early 
> failure (see [1] and [2]). But Jan said the special pages should really 
> not be mapped in the IOMMU ([3]), and he wasn't very happy with freeing 
> the IOMMU page tables on early failure.
> 
> I don't particularly care about the approach as long as we don't leak 
> IOMMU page-tables at the end.
> 
> So please try to find a common ground with Jan here.
> 
> Cheers,
> 
> [1] <20201222154338.9459-1-julien@xen.org>
> [2] <20201222154338.9459-5-julien@xen.org>

Roger pointed out on IRC that I forgot to add a link for [3]. So here we go:

[3] <a22f7364-518f-ea5f-3b87-5c0462cfc193@suse.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 09:44:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 09:44:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83507.155584 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9m2t-0007O7-1K; Wed, 10 Feb 2021 09:44:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83507.155584; Wed, 10 Feb 2021 09:44:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9m2s-0007O0-UX; Wed, 10 Feb 2021 09:44:26 +0000
Received: by outflank-mailman (input) for mailman id 83507;
 Wed, 10 Feb 2021 09:44:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oq3b=HM=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1l9m2q-0007Nc-V8
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 09:44:25 +0000
Received: from mo6-p01-ob.smtp.rzone.de (unknown [2a01:238:400:200::c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b371855c-e2e3-426f-b947-004003b55654;
 Wed, 10 Feb 2021 09:44:23 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.17.1 DYNA|AUTH)
 with ESMTPSA id 604447x1A9iK4O0
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 10 Feb 2021 10:44:20 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b371855c-e2e3-426f-b947-004003b55654
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1612950262;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
	From:Subject:Sender;
	bh=aGi4uWZnydphFEovvX9Pq2z+hJHWZl0IERtuQfxUvmE=;
	b=iRtiPViGlYEa1WnD57aKKEgTsNpsSWQON68PTGb2nEdaPukR0ylA0JRZkvhVfV/VM6
	pFCkjpXzVwuTs4fs64mqcecw/wFaVSuluVbwgID/rebdzFBoREo2HCe8q571W3Z4iFNp
	TkTrmyJDz+RG8UwoqJfVWqiYRO549ZU9aQVv82KR9cboiudcjb5Ad414hDd1Ojs8Nu4d
	X4+ksHwTxZEZ0FF/wH3jys99fLJYNmFnb3DzSuitKN33MXcoMLyLYuTsALds4ZlLkJnz
	be72Fg6xIN8izn5dSkIm2825Ggi9iAx0IWoI/u2FlUjcOzJgmInHuDJ/aTvLx0FhlnMd
	fZ1w==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDXdoX8l8pYAcz5OTW+v4/A=="
X-RZG-CLASS-ID: mo00
Date: Wed, 10 Feb 2021 10:44:13 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Ian Jackson <iwj@xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>, Anthony PERARD
 <anthony.perard@citrix.com>
Subject: Re: [PATCH v20210209 4/4] xl: disable --debug option for xl migrate
Message-ID: <20210210104413.4f44b49c.olaf@aepfle.de>
In-Reply-To: <20210210100606.45de7991.olaf@aepfle.de>
References: <20210209154536.10851-1-olaf@aepfle.de>
	<20210209154536.10851-5-olaf@aepfle.de>
	<24610.49788.493621.307069@mariner.uk.xensource.com>
	<20210210100606.45de7991.olaf@aepfle.de>
X-Mailer: Claws Mail 2020.08.19 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
 boundary="Sig_/56ALD4faR3Dabw_C1DvHNpn"; protocol="application/pgp-signature"

--Sig_/56ALD4faR3Dabw_C1DvHNpn
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

Am Wed, 10 Feb 2021 10:06:06 +0100
schrieb Olaf Hering <olaf@aepfle.de>:

> -    if ( ctx->save.debug && ctx->stream_type != XC_STREAM_PLAIN )
> +    if ( ctx->save.debug )

This performs the verification, and it finds many errors:

2021-02-10 02:37:03 MST [2149] xc: error: verify pfn 0xfda9 failed (type 0): Internal error

As Andrew said elsewhere, something writes into the memory of the idle, but
suspended domU. Likely the netback, since it cannot know that the domU will
never come back.

No idea if verify_frames would have ways to figure out what purpose a given
pfn really has.


Olaf

--Sig_/56ALD4faR3Dabw_C1DvHNpn
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmAjqu0ACgkQ86SN7mm1
DoAqQhAAmZrW8prkkaHo/xwy+I7WQyDz/1GibFp3PCk1Ttv2MGFiwAHiWzgFBCb0
NJQRzb9PqZbNNTKUYIZLJrWh3MPL3gV2zmKH8R8KtepZdeXjyy3slqFaeL8Nb/pS
QI3vyz0bb/uJXEPKaotCfC4dNak0TQ/dRfm8/RWwFgTvRFiXxuwsEP91gK2VxNLK
ddVh8+zvJBzBWcCZUpM9wb3znHMqe/uFrOyVWGPbcLLEp9QX2fzHCYXwJkNpxcT6
tEizaAmJWzbAsQhZVx3Y2GBzjhtXOQHQHhyAoqFPJljDcV2SqxuMgLf/UWEJWeIR
iiXXcie1fSGsSL89ws+8zZEEcjCpZI9qt1mZx6pe8RqM2ngORvrP8/ko9Eeyc4sx
UsRNY5PlZXTK576AFvAtbcL5X2lfe/YPF/TgQDSGelQZj0tOuQ62sp/R6szehSc2
cmu7k9n7YZIIgASZckn7ACLccZMFMfDuK73g75ZbyoWhkLXBgO+hJK0lgMxKelHf
pSkntk6UloDyEvUdhbCR/Hxt5Rw98OcmHW5/QubR+DOpHMWjCIJLFvMqAY0aIdjx
i46VKBK50/UxEClH00gIfU46mnO5CEDM/c5SSSRwKWtmK+ZVFcLRKn2WH3RcOfVd
QC6LP7pTGpaLuO9YoFZgvZ9peTStMIX0qBTCBtOBss3fSETer/c=
=OITQ
-----END PGP SIGNATURE-----

--Sig_/56ALD4faR3Dabw_C1DvHNpn--


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 09:48:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 09:48:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83508.155595 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9m6I-0007Yw-Hp; Wed, 10 Feb 2021 09:47:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83508.155595; Wed, 10 Feb 2021 09:47:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9m6I-0007Yp-Er; Wed, 10 Feb 2021 09:47:58 +0000
Received: by outflank-mailman (input) for mailman id 83508;
 Wed, 10 Feb 2021 09:47:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9m6H-0007Yf-JD; Wed, 10 Feb 2021 09:47:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9m6H-0006Hj-AB; Wed, 10 Feb 2021 09:47:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9m6H-0004xJ-0f; Wed, 10 Feb 2021 09:47:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l9m6H-00037N-0A; Wed, 10 Feb 2021 09:47:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=3oFBYrFbNCXfWZwckUDkN6fK3wLQvA/IQj8ezqCB2X8=; b=ETHz4oGulNwIZJPcs3GgEPzHMh
	qOTAW2dvAJTWJvFMIt5Zcm0qKLN8uu3MKKB5CcmbbMaPee4YKJyuGhbUzisrzU5VOLIQr7Vm+ine8
	JlO/PRznPIm+l948xxoJoaCARQ2jV3BKAblznOsmj8rRVaOAgBIe8VRDsd+qUdNwJn9E=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159197-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 159197: all pass - PUSHED
X-Osstest-Versions-This:
    xen=687121f8a0e7c1ea1c4fa3d056637e5819342f14
X-Osstest-Versions-That:
    xen=5e7aa904405fa2f268c3af213516bae271de3265
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 10 Feb 2021 09:47:57 +0000

flight 159197 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159197/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  687121f8a0e7c1ea1c4fa3d056637e5819342f14
baseline version:
 xen                  5e7aa904405fa2f268c3af213516bae271de3265

Last test of basis   158979  2021-02-03 09:18:36 Z    7 days
Failing since        159095  2021-02-07 09:19:37 Z    3 days    2 attempts
Testing same since   159197  2021-02-10 09:18:32 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Edwin Török <edvin.torok@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Manuel Bouyer <bouyer@netbsd.org>
  Michał Leszczyński <michal.leszczynski@cert.pl>
  Olaf Hering <olaf@aepfle.de>
  Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Tamas K Lengyel <tamas.lengyel@intel.com>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   5e7aa90440..687121f8a0  687121f8a0e7c1ea1c4fa3d056637e5819342f14 -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 09:57:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 09:57:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83513.155610 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9mFj-000090-Ek; Wed, 10 Feb 2021 09:57:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83513.155610; Wed, 10 Feb 2021 09:57:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9mFj-00008t-Br; Wed, 10 Feb 2021 09:57:43 +0000
Received: by outflank-mailman (input) for mailman id 83513;
 Wed, 10 Feb 2021 09:57:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jO30=HM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l9mFi-00008o-LX
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 09:57:42 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e29b3b47-3708-4922-a3a9-94abd4aa0967;
 Wed, 10 Feb 2021 09:57:40 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 76DBEB138;
 Wed, 10 Feb 2021 09:57:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e29b3b47-3708-4922-a3a9-94abd4aa0967
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612951060; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=qquZIWRI0HVUHWPa6REcgtkd8siTn2WKZs0cNZYkmOc=;
	b=QzD3QTcmLhvQF32yzs4uP8b+tEsPU08Q2UeZj4M5qg3L68flLVd3heQ7N2bYfg7UNSVp6x
	ItkKyzjrBmo3ydfZmY2ktRCor5Xg3/o/rx3iDzQq+UZwWhXokLsCejfJJMxia7j8Va80qH
	nzqN4TelAba99nv1lsHCW6saxTmR144=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86emul: fix SYSENTER/SYSCALL switching into 64-bit mode
Message-ID: <7ce15e4b-8bf1-0cfd-ca9e-5f6eba12cac1@suse.com>
Date: Wed, 10 Feb 2021 10:57:38 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

When invoked by compat mode, mode_64bit() will be false at the start of
emulation. The logic after complete_insn, however, needs to consider the
mode switched into, in particular to avoid truncating RIP.

Inspired by / paralleling and extending Linux commit 943dea8af21b ("KVM:
x86: Update emulator context mode if SYSENTER xfers to 64-bit mode").

While there, tighten a related assertion in x86_emulate_wrapper() - we
want to be sure to not switch into an impossible mode when the code gets
built for 32-bit only (as is possible for the test harness).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
In principle we could drop SYSENTER's ctxt->lma dependency when setting
_regs.r(ip). I wasn't certain whether leaving it as is serves as kind of
documentation ...

--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -6127,6 +6127,10 @@ x86_emulate(
              (rc = ops->write_segment(x86_seg_ss, &sreg, ctxt)) )
             goto done;
 
+        if ( ctxt->lma )
+            /* In particular mode_64bit() needs to return true from here on. */
+            ctxt->addr_size = ctxt->sp_size = 64;
+
         /*
          * SYSCALL (unlike most instructions) evaluates its singlestep action
          * based on the resulting EFLAGS.TF, not the starting EFLAGS.TF.
@@ -6927,6 +6931,10 @@ x86_emulate(
                                       ctxt)) != X86EMUL_OKAY )
             goto done;
 
+        if ( ctxt->lma )
+            /* In particular mode_64bit() needs to return true from here on. */
+            ctxt->addr_size = ctxt->sp_size = 64;
+
         singlestep = _regs.eflags & X86_EFLAGS_TF;
         break;
 
@@ -12113,8 +12121,12 @@ int x86_emulate_wrapper(
     unsigned long orig_ip = ctxt->regs->r(ip);
     int rc;
 
+#ifdef __x86_64__
     if ( mode_64bit() )
         ASSERT(ctxt->lma);
+#else
+    ASSERT(!ctxt->lma && !mode_64bit());
+#endif
 
     rc = x86_emulate(ctxt, ops);
 


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 10:56:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 10:56:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83515.155622 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9nA5-0005bf-GR; Wed, 10 Feb 2021 10:55:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83515.155622; Wed, 10 Feb 2021 10:55:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9nA5-0005bY-DY; Wed, 10 Feb 2021 10:55:57 +0000
Received: by outflank-mailman (input) for mailman id 83515;
 Wed, 10 Feb 2021 10:55:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jO30=HM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l9nA4-0005bT-LY
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 10:55:56 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d3c4871c-146c-4887-be25-b11a40a2fbe6;
 Wed, 10 Feb 2021 10:55:55 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6C203AD29;
 Wed, 10 Feb 2021 10:55:54 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d3c4871c-146c-4887-be25-b11a40a2fbe6
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612954554; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=pJlrZxV5uLKWdS54+s0REJOsOKX7+hJYzquZXRubllI=;
	b=Hy+R/SqvMSf9zxjrNKxVldv9DFrpKUqkc0WP3sVXgOfYA86jtyVhd7tCTIrQQzK7VYI4DW
	qUZWxJHxIORUPNZwYp1uX5LB+a51QmmnjLnto/ZbtiA1Bb0mTT1OQTZOUTnsCbaO51PzWJ
	uq8W8YQNUxiDa1pLYXAFSZyddFIY6Y0=
Subject: Re: [PATCH for-4.15] x86/ucode/amd: Handle length sanity check
 failures more gracefully
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Ian Jackson <iwj@xenproject.org>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20210209214911.18461-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <87d6982a-00d9-3daa-ebd7-9afb8ee60126@suse.com>
Date: Wed, 10 Feb 2021 11:55:54 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210209214911.18461-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 09.02.2021 22:49, Andrew Cooper wrote:
> Currently, a failure of verify_patch_size() causes an early abort of the
> microcode blob loop, which in turn causes a second go around the main
> container loop, ultimately failing the UCODE_MAGIC check.
> 
> First, check for errors after the blob loop.  An error here is unrecoverable,
> so avoid going around the container loop again and printing an
> unhelpful-at-best error concerning bad UCODE_MAGIC.
> 
> Second, split the verify_patch_size() check out of the microcode blob header
> check.  In the case that the sanity check fails, we can still use the
> known-to-be-plausible header length to continue walking the container to
> potentially find other applicable microcode blobs.

Since the code comment you add further clarifies this, if my
understanding here is correct that you don't think we should
mistrust the entire container in such a case ...

> Before:
>   (XEN) microcode: Bad microcode data
>   (XEN) microcode: Wrong microcode patch file magic
>   (XEN) Parsing microcode blob error -22
> 
> After:
>   (XEN) microcode: Bad microcode length 0x000015c0 for cpu 0xa000
>   (XEN) microcode: Bad microcode length 0x000015c0 for cpu 0xa010
>   (XEN) microcode: Bad microcode length 0x000015c0 for cpu 0xa011
>   (XEN) microcode: Bad microcode length 0x000015c0 for cpu 0xa200
>   (XEN) microcode: Bad microcode length 0x000015c0 for cpu 0xa210
>   (XEN) microcode: Bad microcode length 0x000015c0 for cpu 0xa500
>   (XEN) microcode: couldn't find any matching ucode in the provided blob!
> 
> Fixes: 4de936a38a ("x86/ucode/amd: Rework parsing logic in cpu_request_microcode()")
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

After all we're trying to balance between detecting broken
containers and having wrong constants ourselves. Personally
I'd be more inclined to err on the safe side and avoid
further loading attempts, but I can see the alternative
perspective also being a reasonable one.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 11:00:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 11:00:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83525.155635 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9nEo-0006Zh-35; Wed, 10 Feb 2021 11:00:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83525.155635; Wed, 10 Feb 2021 11:00:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9nEn-0006Za-W3; Wed, 10 Feb 2021 11:00:49 +0000
Received: by outflank-mailman (input) for mailman id 83525;
 Wed, 10 Feb 2021 11:00:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jO30=HM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l9nEm-0006ZS-QQ
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 11:00:48 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 000ce99c-82a2-49f7-890d-80d140094453;
 Wed, 10 Feb 2021 11:00:48 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 37D0AAD6A;
 Wed, 10 Feb 2021 11:00:47 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 000ce99c-82a2-49f7-890d-80d140094453
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612954847; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=X9TCsROEfoQjUpUcb/1udpcRlDuCQCbqIGHT1zT0HiI=;
	b=bKxh1EnSSHPsF2I5f8UVdb/BM/cXEjcEraLatdQnofV91E/tJp1XejcWCprcvjwJIWHFRn
	6cx+JeDfoN7E5pCttJzkkSatLMgihoyl4F5yEv4dIN33vneidI2Y9U/W4CDExTub09j12Q
	eIETTGN/DrhcWOvHe0veAQm64x0xwkA=
Subject: Re: [PATCH for-4.15] x86/ucode/amd: Fix OoB read in
 cpu_request_microcode()
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Ian Jackson <iwj@xenproject.org>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20210209234019.3827-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <0cb0e48b-68d9-3de1-42e1-9a75ac2795d7@suse.com>
Date: Wed, 10 Feb 2021 12:00:47 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210209234019.3827-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 10.02.2021 00:40, Andrew Cooper wrote:
> verify_patch_size() is a maximum size check, and doesn't have a minimum bound.
> 
> If the microcode container encodes a blob with a length less than 64 bytes,
> the subsequent calls to microcode_fits()/compare_header() may read off the end
> of the buffer.
> 
> Fixes: 4de936a38a ("x86/ucode/amd: Rework parsing logic in cpu_request_microcode()")
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

> --- a/xen/arch/x86/cpu/microcode/amd.c
> +++ b/xen/arch/x86/cpu/microcode/amd.c
> @@ -349,6 +349,7 @@ static struct microcode_patch *cpu_request_microcode(const void *buf, size_t siz
>              if ( size < sizeof(*mc) ||
>                   (mc = buf)->type != UCODE_UCODE_TYPE ||
>                   size - sizeof(*mc) < mc->len ||
> +                 mc->len < sizeof(struct microcode_patch) ||

I was inclined to suggest to use <= here, but I guess a blob
with 1 byte of data is as bogus as one with 0 bytes of data.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 11:05:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 11:05:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83527.155647 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9nIv-0006jg-LA; Wed, 10 Feb 2021 11:05:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83527.155647; Wed, 10 Feb 2021 11:05:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9nIv-0006jZ-Hh; Wed, 10 Feb 2021 11:05:05 +0000
Received: by outflank-mailman (input) for mailman id 83527;
 Wed, 10 Feb 2021 11:05:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zXqh=HM=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1l9nIu-0006jU-2o
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 11:05:04 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f0f74fb5-7d2f-40ca-9a5a-1644295a4e6f;
 Wed, 10 Feb 2021 11:05:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f0f74fb5-7d2f-40ca-9a5a-1644295a4e6f
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612955102;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=S4VMAbTurNdKPv2oUY6KDHCWiyvUpxAAlhweQL8RLkI=;
  b=egv7geZG4cHwOJ9zUcakepSWDTpU+xnpuYKFOq0tvorDZjMs2TfY/nlZ
   1jG1lQT0FyFim/MG1nasaEE5HPpUnGbpC/R9Zhb8BIC2653UB8e3uEWuB
   5pKfgqqe4AfvMx5wCYFS+9brRLnSyxGqsO43gGFiQbBmcQyVmVY+JRHKI
   U=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: AMRtoFChYKtnOihXL2f67m3GKBjVdoebzNDwlXElaFp/sq1c+kKqV7wHUwgNG580B5H4GNHb6G
 clJ3GYSwx6mDyJEc0pebWZFU8eSDDSVYEDKJr4yqwXpkxzfFhfAIW8nAM4fHxK19keWb5oLYDm
 WaPgyPlyDSJJfPp+YO8oHr9WqRHjy8iR0bciMs59uEQXF7STisgiJkq04rRvujL/TuPCTED31m
 cAutjGgBq5+KYhWHjWUPFVrwyQ69DDQdqdKKA+trk6xDkbKUhrwe6L23l46GlyAjrrW7hLvzy1
 tkE=
X-SBRS: 5.2
X-MesageID: 38297382
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,168,1610427600"; 
   d="scan'208";a="38297382"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=IYVaTtKSBgfHIdmh1ZrIO4t6EQGfWOfVKdufECDCgMu6YBffrsIvvzFDJmmFUBmVxpFK6Q4MAPRdtufWkjm7I83s4lOET0LWxdcwu67fK2WTwwBLzCz8ClWrRJO1TWTS4QqjJ67DK/Z2ka4vDf/7ueeVleu0ITS8UF/jnHZsimx8slCVDOrHWFEPTOD7socWDDBQPREyZ8qSB91zeHjvxXc/VDA0YWhaznFMjLH4+0nmpawGSHl+dNe2Qww8R4QKY7EcWvEdGmv81usJKtsKDzftQ9sxbfHrBECVYTMmrz0TJDfraT8G2LnuYFqwQ42Aexjiw3AwBi7gR96DbcjUAA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=S4VMAbTurNdKPv2oUY6KDHCWiyvUpxAAlhweQL8RLkI=;
 b=Iko1sqIUDuokEXmEi56MFnSoOVtn0q0+ynY3WPccvqmA2EiUnAhhRBCbjos8IZfrgdvM3Iu6n65wS0p2WdzJcnL3Hp/2FVjvCs7Du3+DIxULo/Rw0jcJ8bsHvd+3lG9d16goZw3HQ0HTD4/7yfACbmsvXeXaRU8Vabd6lbsFFlYw2Tckhx0gvr/Gg+DhJGcEbFMqghiz2HVEU4LMl1Wx9o1yqypxZKnkqHw3r0QdzaET4ZhZ3L4GIfUVPwJE0uBY8pm07N6I5zyINkD6/PnD9GY7hh3M0R581OhNONb52WSkDMtUJUZxMvALd6V9T0xWbdbIs0i0likAp2UIDftV8w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=S4VMAbTurNdKPv2oUY6KDHCWiyvUpxAAlhweQL8RLkI=;
 b=I+wi8ikydXf5VAHP2XBLlA9x1GdQvgZpzjvUwsoLh/nXV+yVzeC4XfTIx01HZdEvzXkZbJ+STA+fXKxd5FPfBdCjsPNYf33TE3M3tNxroXdzhG4Y4KMduP4qJLKqoUXm6KYDrloTDnH0bVvGWWoSXtrOouLdaQB9eV7i95rZSuo=
From: George Dunlap <George.Dunlap@citrix.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: Ian Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>, "Julien
 Grall" <julien@xen.org>, "lucmiccio@gmail.com" <lucmiccio@gmail.com>, "open
 list:X86" <xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, "Volodymyr_Babchuk@epam.com"
	<Volodymyr_Babchuk@epam.com>, "Rahul.Singh@arm.com" <Rahul.Singh@arm.com>,
	Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: Re: [PATCH v2] xen/arm: fix gnttab_need_iommu_mapping
Thread-Topic: [PATCH v2] xen/arm: fix gnttab_need_iommu_mapping
Thread-Index: AQHW/wlm+vwmTie2HEebCGdpHuZ0HapROvYA
Date: Wed, 10 Feb 2021 11:04:37 +0000
Message-ID: <CA0BC7AD-056F-413D-A7C7-7C00A3FF7348@citrix.com>
References: <20210208184932.23468-1-sstabellini@kernel.org>
 <173ed75a-94cf-26a5-9271-a687bf201578@xen.org>
 <alpine.DEB.2.21.2102081214010.8948@sstabellini-ThinkPad-T480s>
 <4df687cb-d3bc-ccb8-4e7c-a6429c37574e@suse.com>
 <24610.38467.808678.941320@mariner.uk.xensource.com>
 <alpine.DEB.2.21.2102090914280.8948@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2102090914280.8948@sstabellini-ThinkPad-T480s>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3654.20.0.2.21)
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: bc98ee5d-a3f9-4131-783f-08d8cdb3a7e7
x-ms-traffictypediagnostic: BYAPR03MB4838:
x-microsoft-antispam-prvs: <BYAPR03MB4838F3477271C64817C92DF2998D9@BYAPR03MB4838.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <77D33764C521B744A21A25BFE0304DBB@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB5680.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: bc98ee5d-a3f9-4131-783f-08d8cdb3a7e7
X-MS-Exchange-CrossTenant-originalarrivaltime: 10 Feb 2021 11:04:37.1415
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: CL6xp/wzsKtB+oGmFJMFDrrBJbfqtIVTICMdQ/k0mI/P4qzcJW+BViSyHgnIeCCoyS+DLCKv6//X0dfMK5PrJN31IS/CckA64DbVFzqfxP4=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4838
X-OriginatorOrg: citrix.com

> On Feb 9, 2021, at 5:31 PM, Stefano Stabellini <sstabellini@kernel.org> wrote:
> 
> On Tue, 9 Feb 2021, Ian Jackson wrote:
>> Jan Beulich writes ("Re: [PATCH v2] xen/arm: fix gnttab_need_iommu_mapping"):
>>> On 08.02.2021 21:24, Stefano Stabellini wrote:
>> ...
>>>> For these cases, I would just follow a simple rule of thumb:
>>>> - is the submitter willing to provide the backport?
>>>> - is the backport low-risk?
>>>> - is the underlying bug important?
>>>> 
>>>> If the answer to all is "yes" then I'd go with it.
>>> 
>>> Personally I disagree, for the very simple reason of the question
>>> going to become "Where do we draw the line?" The only non-security
>>> backports that I consider acceptable are low-risk changes to allow
>>> building with newer tool chains. I know other backports have
>>> occurred in the past, and I did voice my disagreement with this
>>> having happened.
>> 
>> I think I take a more relaxed view than Jan, but still a much more
>> firm line than Stefano.  My opinion is that we should make exceptions
>> for only bugs of exceptional severity.
>> 
>> I don't think I have seen an argument that this bug is exceptionally
>> severe.
>> 
>> For me the fact that you can only experience this bug if you upgrade
>> the hardware or significantly change the configuration, means that
>> this isn't so serious a bug.
> 
> Yeah, I think that's really the core of this issue. If somebody is
> already using 4.12 happily, there is really no reason for them to take
> the fix. If somebody is about to use 4.12, then it is a severe issue.
> 
> The view of the group is that nobody should be switching to 4.12 now
> because there are newer releases out there. I don't know if that is
> true.

I don’t think we have to tell people what to do; rather, we need to tell people what *we* are going to do.  We’ve told people that we’ll provide normal bug-fixes until 2020-10-02, and security bugfixes until 2022-04-02. Anyone is welcome to use 4.12 whenever they want to; but if they do, they take the risk that there will be a “normal” bug that doesn’t get fixed by upstream.  The patch is available, anyone who wants to can apply it themselves (at their own risk, of course).

We don’t seem to have anything explicitly written down anywhere, but whenever someone has come to us using some obsolete version of Xen, we’ve recommended they use the most recent stable release.

The one reason someone might use 4.12 over a newer version is if that’s still the one packaged by their distro.  In that case, the distro should either update to a newer version, or take on the effort of supporting the old version by backporting bugfixes.  If such a distro chooses not to do so, then a user should consider switching to a different distro.

The purpose of having such a policy is to avoid needing to have this discussion about every single patch that comes up; and the number was chosen to balance effort for testing and maintainers against value to downstreams.  Certainly backporting Just This One patch won’t be a huge amount of work; but I completely sympathize with Jan’s point of view that if we start to make exceptions, people will begin to expect them.  We’ve said what we’re going to do, we should just stick with it.

Now maybe the 18-month “full-support” window is too small.  We seem to have lots of downstreams (at least SuSE and XenServer) who support much older versions; coordinating on versions to designate LTS releases to avoid duplication of effort and share the benefits of that effort with the wider community might be something worth considering.  But that’s a completely different discussion.

 -George


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 11:10:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 11:10:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83529.155659 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9nNt-0007em-Bl; Wed, 10 Feb 2021 11:10:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83529.155659; Wed, 10 Feb 2021 11:10:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9nNt-0007ef-73; Wed, 10 Feb 2021 11:10:13 +0000
Received: by outflank-mailman (input) for mailman id 83529;
 Wed, 10 Feb 2021 11:10:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jO30=HM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l9nNr-0007ea-Jr
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 11:10:11 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 98b6c9af-70ac-424f-a88a-77bfae8f2b48;
 Wed, 10 Feb 2021 11:10:10 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 73992AD29;
 Wed, 10 Feb 2021 11:10:09 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 98b6c9af-70ac-424f-a88a-77bfae8f2b48
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612955409; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=W1bwpkiqh+t+17POKpfLzTJ7Yag0BfQIMUO9uVdEKxw=;
	b=SR8bt3OOgKFtw4WF4/auYc40Ht+x0xiko6naSptwthGMXN47J8CD4f5UWN0mGaaygzdDJS
	eyi1xsNB7ruY5rwPGwP/B6Bpwk9WkwCP/cPhpVRosM4aDCR0sSYO0U01sp/JVItgRlhuNN
	w8/Ifr5KPZs6gJ6ufWTvZe2RrLBg1cQ=
Subject: Re: [for-4.15][PATCH v2 1/5] xen/x86: p2m: Don't map the special
 pages in the IOMMU page-tables
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, hongyxia@amazon.co.uk,
 iwj@xenproject.org, Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Julien Grall <julien@xen.org>
References: <20210209152816.15792-1-julien@xen.org>
 <20210209152816.15792-2-julien@xen.org> <YCOZbNly7YCSNtHY@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <5bf0a2de-3f0e-8860-7bc7-f667437aa3a7@suse.com>
Date: Wed, 10 Feb 2021 12:10:09 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <YCOZbNly7YCSNtHY@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 10.02.2021 09:29, Roger Pau Monné wrote:
> On Tue, Feb 09, 2021 at 03:28:12PM +0000, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> Currently, the IOMMU page-tables will be populated early in the domain
>> creation if the hardware is able to virtualize the local APIC. However,
>> the IOMMU page tables will not be freed during early failure and will
>> result in a leak.
>>
>> An assigned device should not need to DMA into the vLAPIC page, so we
>> can avoid mapping the page in the IOMMU page-tables.
>>
>> This statement is also true for any special pages (the vLAPIC page is
>> one of them). So take the opportunity to prevent the mapping for all
>> of them.
> 
> Hm, OK, while I assume it's likely for special pages to not be target
> of DMA operations, it's not easy to spot which pages are special.
> 
>> Note that:
>>     - This is matching the existing behavior with PV guest
> 
> You might make HVM guests not sharing page-tables 'match' PV
> behavior, but you are making behavior between HVM guests themselves
> diverge.
> 
> 
>>     - This doesn't change the behavior when the P2M is shared with the
>>     IOMMU. IOW, the special pages will still be accessible by the
>>     device.
> 
> I have to admit I don't like this part at all. Having diverging device
> mappings depending on whether the page tables are shared or not is
> bad IMO, as there might be subtle bugs affecting one of the two
> modes.

This is one way to look at things, yes. But if you take the
other perspective that special pages shouldn't be
IOMMU-mapped, then the divergence is the price to pay for
being able to share pages (and it's not Julien introducing
bad behavior here).

Additionally it may be possible to utilize the divergence to
our advantage: If one way of setting up things works and the
other doesn't, we have a reasonable clue where to look. In
fact the aspect above may, together with possible future
findings, end up being a reason to not default to or even
disallow (like for AMD) page table sharing.

> I get the feeling this is just papering over an existing issue instead
> of actually fixing it: IOMMU page tables need to be properly freed
> during early failure.

I take a different perspective: IOMMU page tables shouldn't
get created (yet) at all in the course of
XEN_DOMCTL_createdomain - this op is supposed to produce an
empty container for a VM.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 11:13:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 11:13:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83532.155671 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9nRP-0007q7-2I; Wed, 10 Feb 2021 11:13:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83532.155671; Wed, 10 Feb 2021 11:13:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9nRO-0007q0-UH; Wed, 10 Feb 2021 11:13:50 +0000
Received: by outflank-mailman (input) for mailman id 83532;
 Wed, 10 Feb 2021 11:13:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9nRN-0007ps-LW; Wed, 10 Feb 2021 11:13:49 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9nRN-0007m5-D0; Wed, 10 Feb 2021 11:13:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9nRN-0000P4-2u; Wed, 10 Feb 2021 11:13:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l9nRN-0004mP-2N; Wed, 10 Feb 2021 11:13:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=0RRfQJgw24Y8ouSDjWlnjOUxONylNpgHzcOb0RghHtI=; b=d0T3xPTYwq3uH1e2w9PgRSjBXN
	MY5TkYrjikCYgoRpbSzUiR66t1kgybrZNoTtJo36kDWYLTdfzk4lKVpHN6X/rWJBof1/bCtiBVF3x
	QKFUi4PjpgCQnHKSMzhJTZckXMmG4u3JUXX5aaYqcsI6R+V4jV17wU61dubev0z6+TvY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159181-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 159181: regressions - trouble: fail/pass/preparing/queued/running
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-xl-credit1:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:host-ping-check-xen:fail:regression
    linux-linus:test-armhf-armhf-xl:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:<none executed>:queued:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:<none executed>:queued:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:<none executed>:queued:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:<none executed>:queued:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:<none executed>:queued:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:<none executed>:queued:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:<none executed>:queued:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:<none executed>:queued:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:<none executed>:queued:regression
    linux-linus:test-amd64-i386-xl-raw:<none executed>:queued:regression
    linux-linus:test-amd64-i386-examine:<none executed>:queued:regression
    linux-linus:test-amd64-coresched-i386-xl:<none executed>:queued:regression
    linux-linus:build-i386-libvirt:<none executed>:queued:regression
    linux-linus:test-amd64-i386-xl-xsm:<none executed>:queued:regression
    linux-linus:test-amd64-i386-xl-shadow:<none executed>:queued:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:<none executed>:queued:regression
    linux-linus:test-amd64-i386-freebsd10-i386:<none executed>:queued:regression
    linux-linus:test-amd64-i386-libvirt:<none executed>:queued:regression
    linux-linus:test-amd64-i386-libvirt-pair:<none executed>:queued:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:<none executed>:queued:regression
    linux-linus:test-amd64-i386-libvirt-xsm:<none executed>:queued:regression
    linux-linus:test-amd64-i386-pair:<none executed>:queued:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:<none executed>:queued:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:<none executed>:queued:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:<none executed>:queued:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:<none executed>:queued:regression
    linux-linus:test-amd64-i386-xl:<none executed>:queued:regression
    linux-linus:test-amd64-i386-xl-pvshim:<none executed>:queued:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:<none executed>:queued:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:<none executed>:queued:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:<none executed>:queued:regression
    linux-linus:build-i386-pvops:hosts-allocate:running:regression
    linux-linus:build-i386:hosts-allocate:running:regression
    linux-linus:test-amd64-amd64-examine:hosts-allocate:running:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:hosts-allocate:running:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:hosts-allocate:running:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:hosts-allocate:running:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:hosts-allocate:running:regression
    linux-linus:test-amd64-amd64-libvirt:hosts-allocate:running:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:hosts-allocate:running:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:hosts-allocate:running:regression
    linux-linus:test-amd64-amd64-xl-xsm:hosts-allocate:running:regression
    linux-linus:test-amd64-amd64-libvirt-pair:hosts-allocate:running:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:hosts-allocate:running:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:hosts-allocate:running:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:hosts-allocate:running:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:hosts-allocate:running:regression
    linux-linus:test-amd64-amd64-qemuu-freebsd11-amd64:hosts-allocate:running:regression
    linux-linus:test-amd64-amd64-qemuu-freebsd12-amd64:hosts-allocate:running:regression
    linux-linus:test-amd64-amd64-pair:hosts-allocate:running:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:hosts-allocate:running:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:hosts-allocate:running:regression
    linux-linus:test-amd64-amd64-pygrub:hosts-allocate:running:regression
    linux-linus:test-amd64-coresched-amd64-xl:hosts-allocate:running:regression
    linux-linus:test-amd64-amd64-xl-shadow:hosts-allocate:running:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:hosts-allocate:running:regression
    linux-linus:test-amd64-amd64-xl-rtds:hosts-allocate:running:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:hosts-allocate:running:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:hosts-allocate:running:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:hosts-allocate:running:regression
    linux-linus:test-amd64-amd64-xl-credit1:hosts-allocate:running:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:hosts-allocate:running:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:hosts-allocate:running:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:hosts-allocate:running:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:hosts-allocate:running:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:hosts-allocate:running:regression
    linux-linus:test-amd64-amd64-xl-pvshim:hosts-allocate:running:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:hosts-allocate:running:regression
    linux-linus:test-amd64-amd64-xl:hosts-allocate:running:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:hosts-allocate:running:regression
    linux-linus:test-amd64-amd64-xl-credit2:hosts-allocate:running:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:hosts-allocate:running:regression
    linux-linus:test-amd64-amd64-xl-qcow2:hosts-allocate:running:regression
    linux-linus:test-armhf-armhf-libvirt-raw:host-install(5):running:regression
    linux-linus:test-armhf-armhf-libvirt:guest-start:running:regression
    linux-linus:test-armhf-armhf-libvirt-raw:syslog-server:running:regression
    linux-linus:test-armhf-armhf-libvirt:syslog-server:running:regression
    linux-linus:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
X-Osstest-Versions-This:
    linux=e0756cfc7d7cd08c98a53b6009c091a3f6a50be6
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 10 Feb 2021 11:13:49 +0000

flight 159181 linux-linus running [real]
http://logs.test-lab.xenproject.org/osstest/logs/159181/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-credit1  10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 14 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu 14 guest-start             fail REGR. vs. 152332
 test-armhf-armhf-xl-arndale  14 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck 14 guest-start            fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle  10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  10 host-ping-check-xen      fail REGR. vs. 152332
 test-armhf-armhf-xl          14 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1  14 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64    <none executed>              queued
 test-amd64-i386-xl-qemut-ws16-amd64    <none executed>              queued
 test-amd64-i386-xl-qemuu-debianhvm-amd64    <none executed>             queued
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow    <none executed>      queued
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm    <none executed>          queued
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict    <none executed> queued
 test-amd64-i386-xl-qemuu-ovmf-amd64    <none executed>              queued
 test-amd64-i386-xl-qemuu-win7-amd64    <none executed>              queued
 test-amd64-i386-xl-qemuu-ws16-amd64    <none executed>              queued
 test-amd64-i386-xl-raw          <none executed>              queued
 test-amd64-i386-examine         <none executed>              queued
 test-amd64-coresched-i386-xl    <none executed>              queued
 build-i386-libvirt              <none executed>              queued
 test-amd64-i386-xl-xsm          <none executed>              queued
 test-amd64-i386-xl-shadow       <none executed>              queued
 test-amd64-i386-freebsd10-amd64    <none executed>              queued
 test-amd64-i386-freebsd10-i386    <none executed>              queued
 test-amd64-i386-libvirt         <none executed>              queued
 test-amd64-i386-libvirt-pair    <none executed>              queued
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm    <none executed>    queued
 test-amd64-i386-libvirt-xsm     <none executed>              queued
 test-amd64-i386-pair            <none executed>              queued
 test-amd64-i386-qemut-rhel6hvm-amd    <none executed>              queued
 test-amd64-i386-qemut-rhel6hvm-intel    <none executed>              queued
 test-amd64-i386-qemuu-rhel6hvm-amd    <none executed>              queued
 test-amd64-i386-qemuu-rhel6hvm-intel    <none executed>              queued
 test-amd64-i386-xl              <none executed>              queued
 test-amd64-i386-xl-pvshim       <none executed>              queued
 test-amd64-i386-xl-qemut-debianhvm-amd64    <none executed>             queued
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm    <none executed>          queued
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm    <none executed> queued
 build-i386-pvops              2 hosts-allocate               running
 build-i386                    2 hosts-allocate               running
 test-amd64-amd64-examine      2 hosts-allocate               running
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm  3 hosts-allocate   running
 test-amd64-amd64-amd64-pvgrub  3 hosts-allocate               running
 test-amd64-amd64-i386-pvgrub  3 hosts-allocate               running
 test-amd64-amd64-dom0pvh-xl-intel  3 hosts-allocate               running
 test-amd64-amd64-libvirt      3 hosts-allocate               running
 test-amd64-amd64-dom0pvh-xl-amd  3 hosts-allocate               running
 test-amd64-amd64-libvirt-xsm  3 hosts-allocate               running
 test-amd64-amd64-xl-xsm       3 hosts-allocate               running
 test-amd64-amd64-libvirt-pair  4 hosts-allocate               running
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  3 hosts-allocate         running
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  3 hosts-allocate         running
 test-amd64-amd64-qemuu-nested-amd  3 hosts-allocate               running
 test-amd64-amd64-libvirt-vhd  3 hosts-allocate               running
 test-amd64-amd64-qemuu-freebsd11-amd64  3 hosts-allocate               running
 test-amd64-amd64-qemuu-freebsd12-amd64  3 hosts-allocate               running
 test-amd64-amd64-pair         4 hosts-allocate               running
 test-amd64-amd64-xl-qemuu-ovmf-amd64  3 hosts-allocate               running
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 3 hosts-allocate running
 test-amd64-amd64-pygrub       3 hosts-allocate               running
 test-amd64-coresched-amd64-xl  3 hosts-allocate               running
 test-amd64-amd64-xl-shadow    3 hosts-allocate               running
 test-amd64-amd64-xl-qemut-debianhvm-amd64  3 hosts-allocate            running
 test-amd64-amd64-xl-rtds      3 hosts-allocate               running
 test-amd64-amd64-xl-pvhv2-amd  3 hosts-allocate               running
 test-amd64-amd64-qemuu-nested-intel  3 hosts-allocate               running
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  3 hosts-allocate     running
 test-amd64-amd64-xl-credit1   3 hosts-allocate               running
 test-amd64-amd64-xl-qemuu-win7-amd64  3 hosts-allocate               running
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 3 hosts-allocate running
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  3 hosts-allocate            running
 test-amd64-amd64-xl-multivcpu  3 hosts-allocate               running
 test-amd64-amd64-xl-qemuu-ws16-amd64  3 hosts-allocate               running
 test-amd64-amd64-xl-pvshim    3 hosts-allocate               running
 test-amd64-amd64-xl-pvhv2-intel  3 hosts-allocate               running
 test-amd64-amd64-xl           3 hosts-allocate               running
 test-amd64-amd64-xl-qemut-win7-amd64  3 hosts-allocate               running
 test-amd64-amd64-xl-credit2   3 hosts-allocate               running
 test-amd64-amd64-xl-qemut-ws16-amd64  3 hosts-allocate               running
 test-amd64-amd64-xl-qcow2     3 hosts-allocate               running
 test-armhf-armhf-libvirt-raw  5 host-install(5)              running
 test-armhf-armhf-libvirt     14 guest-start                  running
 test-armhf-armhf-libvirt-raw  4 syslog-server                running
 test-armhf-armhf-libvirt      4 syslog-server                running

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     14 guest-start              fail REGR. vs. 152332

version targeted for testing:
 linux                e0756cfc7d7cd08c98a53b6009c091a3f6a50be6
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  193 days
Failing since        152366  2020-08-01 20:49:34 Z  192 days  338 attempts
Testing same since                          (not found)         0 attempts

------------------------------------------------------------
4560 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   preparing
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           queued  
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             preparing
 test-amd64-amd64-xl                                          preparing
 test-amd64-coresched-amd64-xl                                preparing
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           queued  
 test-amd64-coresched-i386-xl                                 queued  
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           preparing
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            queued  
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        preparing
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         queued  
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 preparing
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  queued  
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 preparing
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  queued  
 test-amd64-amd64-libvirt-xsm                                 preparing
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  queued  
 test-amd64-amd64-xl-xsm                                      preparing
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       queued  
 test-amd64-amd64-qemuu-nested-amd                            preparing
 test-amd64-amd64-xl-pvhv2-amd                                preparing
 test-amd64-i386-qemut-rhel6hvm-amd                           queued  
 test-amd64-i386-qemuu-rhel6hvm-amd                           queued  
 test-amd64-amd64-dom0pvh-xl-amd                              preparing
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    preparing
 test-amd64-i386-xl-qemut-debianhvm-amd64                     queued  
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    preparing
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     queued  
 test-amd64-i386-freebsd10-amd64                              queued  
 test-amd64-amd64-qemuu-freebsd11-amd64                       preparing
 test-amd64-amd64-qemuu-freebsd12-amd64                       preparing
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         preparing
 test-amd64-i386-xl-qemuu-ovmf-amd64                          queued  
 test-amd64-amd64-xl-qemut-win7-amd64                         preparing
 test-amd64-i386-xl-qemut-win7-amd64                          queued  
 test-amd64-amd64-xl-qemuu-win7-amd64                         preparing
 test-amd64-i386-xl-qemuu-win7-amd64                          queued  
 test-amd64-amd64-xl-qemut-ws16-amd64                         preparing
 test-amd64-i386-xl-qemut-ws16-amd64                          queued  
 test-amd64-amd64-xl-qemuu-ws16-amd64                         preparing
 test-amd64-i386-xl-qemuu-ws16-amd64                          queued  
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-xl-credit1                                  preparing
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  preparing
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        preparing
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         queued  
 test-amd64-amd64-examine                                     running 
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      queued  
 test-amd64-i386-freebsd10-i386                               queued  
 test-amd64-amd64-qemuu-nested-intel                          preparing
 test-amd64-amd64-xl-pvhv2-intel                              preparing
 test-amd64-i386-qemut-rhel6hvm-intel                         queued  
 test-amd64-i386-qemuu-rhel6hvm-intel                         queued  
 test-amd64-amd64-dom0pvh-xl-intel                            preparing
 test-amd64-amd64-libvirt                                     preparing
 test-armhf-armhf-libvirt                                     running 
 test-amd64-i386-libvirt                                      queued  
 test-amd64-amd64-xl-multivcpu                                preparing
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        preparing
 test-amd64-i386-pair                                         queued  
 test-amd64-amd64-libvirt-pair                                preparing
 test-amd64-i386-libvirt-pair                                 queued  
 test-amd64-amd64-amd64-pvgrub                                preparing
 test-amd64-amd64-i386-pvgrub                                 preparing
 test-amd64-amd64-xl-pvshim                                   preparing
 test-amd64-i386-xl-pvshim                                    queued  
 test-amd64-amd64-pygrub                                      preparing
 test-amd64-amd64-xl-qcow2                                    preparing
 test-armhf-armhf-libvirt-raw                                 running 
 test-amd64-i386-xl-raw                                       queued  
 test-amd64-amd64-xl-rtds                                     preparing
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             preparing
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              queued  
 test-amd64-amd64-xl-shadow                                   preparing
 test-amd64-i386-xl-shadow                                    queued  
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 preparing
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-i386-xl-qemut-win7-amd64 queued
broken-job test-amd64-i386-xl-qemut-ws16-amd64 queued
broken-job test-amd64-i386-xl-qemuu-debianhvm-amd64 queued
broken-job test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow queued
broken-job test-amd64-i386-xl-qemuu-debianhvm-i386-xsm queued
broken-job test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict queued
broken-job test-amd64-i386-xl-qemuu-ovmf-amd64 queued
broken-job test-amd64-i386-xl-qemuu-win7-amd64 queued
broken-job test-amd64-i386-xl-qemuu-ws16-amd64 queued
broken-job test-amd64-i386-xl-raw queued
broken-job test-amd64-i386-examine queued
broken-job test-amd64-coresched-i386-xl queued
broken-job build-i386-libvirt queued
broken-job test-amd64-i386-xl-xsm queued
broken-job test-amd64-i386-xl-shadow queued
broken-job test-amd64-i386-freebsd10-amd64 queued
broken-job test-amd64-i386-freebsd10-i386 queued
broken-job test-amd64-i386-libvirt queued
broken-job test-amd64-i386-libvirt-pair queued
broken-job test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm queued
broken-job test-amd64-i386-libvirt-xsm queued
broken-job test-amd64-i386-pair queued
broken-job test-amd64-i386-qemut-rhel6hvm-amd queued
broken-job test-amd64-i386-qemut-rhel6hvm-intel queued
broken-job test-amd64-i386-qemuu-rhel6hvm-amd queued
broken-job test-amd64-i386-qemuu-rhel6hvm-intel queued
broken-job test-amd64-i386-xl queued
broken-job test-amd64-i386-xl-pvshim queued
broken-job test-amd64-i386-xl-qemut-debianhvm-amd64 queued
broken-job test-amd64-i386-xl-qemut-debianhvm-i386-xsm queued
broken-job test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm queued

Not pushing.

(No revision log; it would be 1029181 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 11:15:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 11:15:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83535.155686 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9nT4-0007yY-E2; Wed, 10 Feb 2021 11:15:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83535.155686; Wed, 10 Feb 2021 11:15:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9nT4-0007yR-A7; Wed, 10 Feb 2021 11:15:34 +0000
Received: by outflank-mailman (input) for mailman id 83535;
 Wed, 10 Feb 2021 11:15:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9nT2-0007yI-P6; Wed, 10 Feb 2021 11:15:32 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9nT2-0007nc-Hl; Wed, 10 Feb 2021 11:15:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9nT2-0000TP-Bs; Wed, 10 Feb 2021 11:15:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l9nT2-0005e0-BL; Wed, 10 Feb 2021 11:15:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ZKz8phywdPy/YjVHiKO5S8Dj9v52FyJ91iQUwEtnPCs=; b=yhVnW0kBO8Zweain9/tI0NVT5c
	jw9VBwjWhLHX791zXKmyJz+M+jLjakDW8/oxtQu2i8G0QBI92YVsKMS3En0Cv2sHe/HmkvA/PqLri
	pxKdU+d75QWzHZRV9rSDFh5znZfqu88YmpnkqOsYKDhby7StCsSS0ul29pbKRQJb4CKU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159166-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 159166: regressions - trouble: fail/pass/preparing/running
X-Osstest-Failures:
    linux-5.4:test-arm64-arm64-xl-xsm:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-credit2:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-credit1:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-seattle:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-thunderx:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-libvirt:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-cubietruck:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-arndale:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    linux-5.4:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    linux-5.4:test-amd64-amd64-libvirt:hosts-allocate:running:regression
    linux-5.4:test-amd64-amd64-examine:hosts-allocate:running:regression
    linux-5.4:test-amd64-coresched-amd64-xl:hosts-allocate:running:regression
    linux-5.4:test-amd64-amd64-dom0pvh-xl-amd:hosts-allocate:running:regression
    linux-5.4:test-amd64-amd64-libvirt-xsm:hosts-allocate:running:regression
    linux-5.4:test-amd64-i386-xl-xsm:hosts-allocate:running:regression
    linux-5.4:test-amd64-i386-libvirt:hosts-allocate:running:regression
    linux-5.4:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:hosts-allocate:running:regression
    linux-5.4:test-amd64-i386-xl-qemuu-ovmf-amd64:hosts-allocate:running:regression
    linux-5.4:test-amd64-amd64-i386-pvgrub:hosts-allocate:running:regression
    linux-5.4:test-amd64-amd64-libvirt-pair:hosts-allocate:running:regression
    linux-5.4:test-amd64-i386-libvirt-pair:hosts-allocate:running:regression
    linux-5.4:test-amd64-i386-xl-qemuu-debianhvm-amd64:hosts-allocate:running:regression
    linux-5.4:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:hosts-allocate:running:regression
    linux-5.4:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:hosts-allocate:running:regression
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:hosts-allocate:running:regression
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:hosts-allocate:running:regression
    linux-5.4:test-amd64-i386-xl-raw:hosts-allocate:running:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:hosts-allocate:running:regression
    linux-5.4:test-amd64-amd64-amd64-pvgrub:hosts-allocate:running:regression
    linux-5.4:test-amd64-amd64-dom0pvh-xl-intel:hosts-allocate:running:regression
    linux-5.4:test-amd64-amd64-libvirt-vhd:hosts-allocate:running:regression
    linux-5.4:test-amd64-i386-xl-shadow:hosts-allocate:running:regression
    linux-5.4:test-amd64-i386-libvirt-xsm:hosts-allocate:running:regression
    linux-5.4:test-amd64-i386-qemut-rhel6hvm-amd:hosts-allocate:running:regression
    linux-5.4:test-amd64-i386-qemuu-rhel6hvm-amd:hosts-allocate:running:regression
    linux-5.4:test-amd64-i386-xl:hosts-allocate:running:regression
    linux-5.4:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:hosts-allocate:running:regression
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:hosts-allocate:running:regression
    linux-5.4:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:hosts-allocate:running:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:hosts-allocate:running:regression
    linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:hosts-allocate:running:regression
    linux-5.4:test-amd64-amd64-xl-credit1:hosts-allocate:running:regression
    linux-5.4:test-amd64-amd64-qemuu-freebsd12-amd64:hosts-allocate:running:regression
    linux-5.4:test-amd64-amd64-qemuu-nested-intel:hosts-allocate:running:regression
    linux-5.4:test-amd64-i386-freebsd10-amd64:hosts-allocate:running:regression
    linux-5.4:test-amd64-i386-freebsd10-i386:hosts-allocate:running:regression
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:hosts-allocate:running:regression
    linux-5.4:test-amd64-i386-qemut-rhel6hvm-intel:hosts-allocate:running:regression
    linux-5.4:test-amd64-i386-qemuu-rhel6hvm-intel:hosts-allocate:running:regression
    linux-5.4:test-amd64-i386-xl-qemut-debianhvm-amd64:hosts-allocate:running:regression
    linux-5.4:test-amd64-amd64-pair:hosts-allocate:running:regression
    linux-5.4:test-amd64-amd64-xl-rtds:hosts-allocate:running:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:hosts-allocate:running:regression
    linux-5.4:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:hosts-allocate:running:regression
    linux-5.4:test-amd64-amd64-xl-pvhv2-amd:hosts-allocate:running:regression
    linux-5.4:test-amd64-amd64-xl-credit2:hosts-allocate:running:regression
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:hosts-allocate:running:regression
    linux-5.4:test-amd64-i386-xl-pvshim:hosts-allocate:running:regression
    linux-5.4:test-amd64-i386-pair:hosts-allocate:running:regression
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:hosts-allocate:running:regression
    linux-5.4:test-amd64-amd64-xl-shadow:hosts-allocate:running:regression
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:hosts-allocate:running:regression
    linux-5.4:test-amd64-coresched-i386-xl:hosts-allocate:running:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-ovmf-amd64:hosts-allocate:running:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:hosts-allocate:running:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-amd64:hosts-allocate:running:regression
    linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-amd64:hosts-allocate:running:regression
    linux-5.4:test-amd64-amd64-xl-multivcpu:hosts-allocate:running:regression
    linux-5.4:test-amd64-amd64-xl:hosts-allocate:running:regression
    linux-5.4:test-amd64-amd64-xl-xsm:hosts-allocate:running:regression
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:hosts-allocate:running:regression
    linux-5.4:test-amd64-amd64-xl-pvshim:hosts-allocate:running:regression
    linux-5.4:test-amd64-amd64-xl-pvhv2-intel:hosts-allocate:running:regression
    linux-5.4:test-amd64-i386-examine:host-install:running:regression
    linux-5.4:test-amd64-i386-examine:syslog-server:running:regression
    linux-5.4:test-amd64-amd64-xl-qcow2:debian-di-install:running:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore.2:running:regression
    linux-5.4:test-amd64-amd64-xl-qcow2:syslog-server:running:regression
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:host-install(5):running:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:syslog-server:running:regression
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:syslog-server:running:regression
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
X-Osstest-Versions-This:
    linux=d4716ee8751bf8dabf5872ba008124a0979a5f94
X-Osstest-Versions-That:
    linux=a829146c3fdcf6d0b76d9c54556a223820f1f73b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 10 Feb 2021 11:15:32 +0000

flight 159166 linux-5.4 running [real]
http://logs.test-lab.xenproject.org/osstest/logs/159166/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm      14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl          14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-credit2  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-credit1  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-seattle  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-credit2  14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-credit1  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-thunderx 14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-multivcpu 14 guest-start             fail REGR. vs. 158387
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-cubietruck 14 guest-start            fail REGR. vs. 158387
 test-armhf-armhf-xl          14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-arndale  14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 158387
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 158387
 test-amd64-amd64-libvirt      3 hosts-allocate               running
 test-amd64-amd64-examine      2 hosts-allocate               running
 test-amd64-coresched-amd64-xl  3 hosts-allocate               running
 test-amd64-amd64-dom0pvh-xl-amd  3 hosts-allocate               running
 test-amd64-amd64-libvirt-xsm  3 hosts-allocate               running
 test-amd64-i386-xl-xsm        3 hosts-allocate               running
 test-amd64-i386-libvirt       3 hosts-allocate               running
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  3 hosts-allocate          running
 test-amd64-i386-xl-qemuu-ovmf-amd64  3 hosts-allocate               running
 test-amd64-amd64-i386-pvgrub  3 hosts-allocate               running
 test-amd64-amd64-libvirt-pair  4 hosts-allocate               running
 test-amd64-i386-libvirt-pair  4 hosts-allocate               running
 test-amd64-i386-xl-qemuu-debianhvm-amd64  3 hosts-allocate             running
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  3 hosts-allocate          running
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict  3 hosts-allocate running
 test-amd64-i386-xl-qemuu-win7-amd64  3 hosts-allocate               running
 test-amd64-i386-xl-qemuu-ws16-amd64  3 hosts-allocate               running
 test-amd64-i386-xl-raw        3 hosts-allocate               running
 test-amd64-amd64-xl-qemuu-ws16-amd64  3 hosts-allocate               running
 test-amd64-amd64-amd64-pvgrub  3 hosts-allocate               running
 test-amd64-amd64-dom0pvh-xl-intel  3 hosts-allocate               running
 test-amd64-amd64-libvirt-vhd  3 hosts-allocate               running
 test-amd64-i386-xl-shadow     3 hosts-allocate               running
 test-amd64-i386-libvirt-xsm   3 hosts-allocate               running
 test-amd64-i386-qemut-rhel6hvm-amd  3 hosts-allocate               running
 test-amd64-i386-qemuu-rhel6hvm-amd  3 hosts-allocate               running
 test-amd64-i386-xl            3 hosts-allocate               running
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm  3 hosts-allocate running
 test-amd64-i386-xl-qemut-win7-amd64  3 hosts-allocate               running
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  3 hosts-allocate      running
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  3 hosts-allocate     running
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  3 hosts-allocate         running
 test-amd64-amd64-xl-credit1   3 hosts-allocate               running
 test-amd64-amd64-qemuu-freebsd12-amd64  3 hosts-allocate               running
 test-amd64-amd64-qemuu-nested-intel  3 hosts-allocate               running
 test-amd64-i386-freebsd10-amd64  3 hosts-allocate               running
 test-amd64-i386-freebsd10-i386  3 hosts-allocate               running
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm  3 hosts-allocate    running
 test-amd64-i386-qemut-rhel6hvm-intel  3 hosts-allocate               running
 test-amd64-i386-qemuu-rhel6hvm-intel  3 hosts-allocate               running
 test-amd64-i386-xl-qemut-debianhvm-amd64  3 hosts-allocate             running
 test-amd64-amd64-pair         4 hosts-allocate               running
 test-amd64-amd64-xl-rtds      3 hosts-allocate               running
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 3 hosts-allocate running
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 3 hosts-allocate running
 test-amd64-amd64-xl-pvhv2-amd  3 hosts-allocate               running
 test-amd64-amd64-xl-credit2   3 hosts-allocate               running
 test-amd64-amd64-qemuu-nested-amd  3 hosts-allocate               running
 test-amd64-i386-xl-pvshim     3 hosts-allocate               running
 test-amd64-i386-pair          4 hosts-allocate               running
 test-amd64-i386-xl-qemut-ws16-amd64  3 hosts-allocate               running
 test-amd64-amd64-xl-shadow    3 hosts-allocate               running
 test-amd64-amd64-xl-qemut-win7-amd64  3 hosts-allocate               running
 test-amd64-coresched-i386-xl  3 hosts-allocate               running
 test-amd64-amd64-xl-qemuu-ovmf-amd64  3 hosts-allocate               running
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  3 hosts-allocate         running
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  3 hosts-allocate            running
 test-amd64-amd64-xl-qemut-debianhvm-amd64  3 hosts-allocate            running
 test-amd64-amd64-xl-multivcpu  3 hosts-allocate               running
 test-amd64-amd64-xl           3 hosts-allocate               running
 test-amd64-amd64-xl-xsm       3 hosts-allocate               running
 test-amd64-amd64-xl-qemut-ws16-amd64  3 hosts-allocate               running
 test-amd64-amd64-xl-pvshim    3 hosts-allocate               running
 test-amd64-amd64-xl-pvhv2-intel  3 hosts-allocate               running
 test-amd64-i386-examine       5 host-install                 running
 test-amd64-i386-examine       3 syslog-server                running
 test-amd64-amd64-xl-qcow2    12 debian-di-install            running
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-saverestore.2          running
 test-amd64-amd64-xl-qcow2     4 syslog-server                running
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm  5 host-install(5)  running
 test-amd64-amd64-xl-qemuu-win7-amd64  4 syslog-server                running
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm  4 syslog-server    running

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     14 guest-start              fail REGR. vs. 158387

version targeted for testing:
 linux                d4716ee8751bf8dabf5872ba008124a0979a5f94
baseline version:
 linux                a829146c3fdcf6d0b76d9c54556a223820f1f73b

Last test of basis   158387  2021-01-12 19:40:06 Z   28 days
Failing since        158473  2021-01-17 13:42:20 Z   23 days   34 attempts
Testing same since   159129  2021-02-08 10:46:56 Z    2 days    1 attempts

------------------------------------------------------------
377 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          preparing
 test-amd64-coresched-amd64-xl                                preparing
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           preparing
 test-amd64-coresched-i386-xl                                 preparing
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           running 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            preparing
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        preparing
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         preparing
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 preparing
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  preparing
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 preparing
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  preparing
 test-amd64-amd64-libvirt-xsm                                 preparing
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  preparing
 test-amd64-amd64-xl-xsm                                      preparing
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       preparing
 test-amd64-amd64-qemuu-nested-amd                            preparing
 test-amd64-amd64-xl-pvhv2-amd                                preparing
 test-amd64-i386-qemut-rhel6hvm-amd                           preparing
 test-amd64-i386-qemuu-rhel6hvm-amd                           preparing
 test-amd64-amd64-dom0pvh-xl-amd                              preparing
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    preparing
 test-amd64-i386-xl-qemut-debianhvm-amd64                     preparing
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    preparing
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     preparing
 test-amd64-i386-freebsd10-amd64                              preparing
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       preparing
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         preparing
 test-amd64-i386-xl-qemuu-ovmf-amd64                          preparing
 test-amd64-amd64-xl-qemut-win7-amd64                         preparing
 test-amd64-i386-xl-qemut-win7-amd64                          preparing
 test-amd64-amd64-xl-qemuu-win7-amd64                         running 
 test-amd64-i386-xl-qemuu-win7-amd64                          preparing
 test-amd64-amd64-xl-qemut-ws16-amd64                         preparing
 test-amd64-i386-xl-qemut-ws16-amd64                          preparing
 test-amd64-amd64-xl-qemuu-ws16-amd64                         preparing
 test-amd64-i386-xl-qemuu-ws16-amd64                          preparing
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-xl-credit1                                  preparing
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  preparing
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        preparing
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         preparing
 test-amd64-amd64-examine                                     running 
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      running 
 test-amd64-i386-freebsd10-i386                               preparing
 test-amd64-amd64-qemuu-nested-intel                          preparing
 test-amd64-amd64-xl-pvhv2-intel                              preparing
 test-amd64-i386-qemut-rhel6hvm-intel                         preparing
 test-amd64-i386-qemuu-rhel6hvm-intel                         preparing
 test-amd64-amd64-dom0pvh-xl-intel                            preparing
 test-amd64-amd64-libvirt                                     preparing
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      preparing
 test-amd64-amd64-xl-multivcpu                                preparing
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        preparing
 test-amd64-i386-pair                                         preparing
 test-amd64-amd64-libvirt-pair                                preparing
 test-amd64-i386-libvirt-pair                                 preparing
 test-amd64-amd64-amd64-pvgrub                                preparing
 test-amd64-amd64-i386-pvgrub                                 preparing
 test-amd64-amd64-xl-pvshim                                   preparing
 test-amd64-i386-xl-pvshim                                    preparing
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    running 
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       preparing
 test-amd64-amd64-xl-rtds                                     preparing
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             preparing
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              preparing
 test-amd64-amd64-xl-shadow                                   preparing
 test-amd64-i386-xl-shadow                                    preparing
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 preparing
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 11411 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 11:16:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 11:16:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83537.155701 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9nUK-00088I-27; Wed, 10 Feb 2021 11:16:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83537.155701; Wed, 10 Feb 2021 11:16:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9nUJ-00088B-V2; Wed, 10 Feb 2021 11:16:51 +0000
Received: by outflank-mailman (input) for mailman id 83537;
 Wed, 10 Feb 2021 11:16:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9nUI-00087w-Le; Wed, 10 Feb 2021 11:16:50 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9nUI-0007q3-Cz; Wed, 10 Feb 2021 11:16:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9nUI-0000Uc-57; Wed, 10 Feb 2021 11:16:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l9nUI-0005sP-4a; Wed, 10 Feb 2021 11:16:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=r/u1gg3xQw4rEIpEsZNUS8xRb3lPJnLFodcoe/Alx9U=; b=IGZEAoyKaMJRx4NsHJ6EkMarwx
	NF8s86U/eLKOLrUK3oXbYvxDvFwVIHaFCBhiT/1koDLj32l4BsRc26fQiEDA/fTAE56+7asmYZGYp
	tbso+HIKikSHVF1FXpVAhVcQtCM56Z9ZqZxydXmt6fzN6IsWL9ZOvN0iC+/Gff4Gbhzc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159187-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.12-testing test] 159187: regressions - trouble: fail/pass/preparing/queued
X-Osstest-Failures:
    xen-4.12-testing:test-amd64-amd64-xl-credit1:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-pair:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-migrupgrade:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-livepatch:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-qemuu-freebsd11-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-pair:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-xl:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-libvirt-pair:<job status>:broken:regression
    xen-4.12-testing:test-xtf-amd64-amd64-2:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-qemuu-rhel6hvm-intel:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-qemuu-rhel6hvm-amd:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-freebsd10-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-freebsd10-i386:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-credit2:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-intel:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-libvirt:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-libvirt-xsm:<job status>:broken:regression
    xen-4.12-testing:build-i386-xsm:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-libvirt-vhd:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-libvirt-pair:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-pygrub:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-i386-pvgrub:<job status>:broken:regression
    xen-4.12-testing:test-xtf-amd64-amd64-4:<job status>:broken:regression
    xen-4.12-testing:test-xtf-amd64-amd64-1:<job status>:broken:regression
    xen-4.12-testing:test-xtf-amd64-amd64-3:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-xl-pvshim:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-xl-qemut-debianhvm-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-qemut-rhel6hvm-intel:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-migrupgrade:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-libvirt:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemut-debianhvm-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-pvshim:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-pvhv2-intel:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-multivcpu:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-amd64-pvgrub:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-xsm:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-pvhv2-amd:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-qemuu-freebsd12-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-xl-raw:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-qemut-rhel6hvm-amd:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-rtds:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ovmf-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-livepatch:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-shadow:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ovmf-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-xl-shadow:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64:<job status>:broken:regression
    xen-4.12-testing:build-i386-xsm:host-install(4):broken:regression
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:guest-start:fail:regression
    xen-4.12-testing:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    xen-4.12-testing:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ovmf-amd64:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-xl-xsm:<none executed>:queued:regression
    xen-4.12-testing:test-armhf-armhf-libvirt:<none executed>:queued:regression
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-freebsd10-amd64:<none executed>:queued:regression
    xen-4.12-testing:build-i386-libvirt:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-xl-raw:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-xl-shadow:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-freebsd10-i386:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-libvirt:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-libvirt-pair:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-libvirt-xsm:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-livepatch:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-migrupgrade:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-pair:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-qemut-rhel6hvm-amd:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-qemut-rhel6hvm-intel:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-qemuu-rhel6hvm-amd:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-qemuu-rhel6hvm-intel:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-xl:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-xl-pvshim:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-xl-qemut-debianhvm-amd64:<none executed>:queued:regression
    xen-4.12-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:<none executed>:queued:regression
    xen-4.12-testing:build-armhf-libvirt:hosts-allocate:running:regression
    xen-4.12-testing:test-amd64-amd64-libvirt-vhd:hosts-allocate:running:regression
    xen-4.12-testing:build-i386-pvops:hosts-allocate:running:regression
    xen-4.12-testing:test-amd64-amd64-libvirt:hosts-allocate:running:regression
    xen-4.12-testing:test-amd64-amd64-libvirt-xsm:hosts-allocate:running:regression
    xen-4.12-testing:test-amd64-amd64-libvirt-pair:hosts-allocate:running:regression
    xen-4.12-testing:build-i386-prev:hosts-allocate:running:regression
    xen-4.12-testing:build-i386-xsm:hosts-allocate:running:regression
    xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:hosts-allocate:running:regression
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:hosts-allocate:running:regression
    xen-4.12-testing:test-amd64-amd64-migrupgrade:hosts-allocate:running:regression
    xen-4.12-testing:build-i386:hosts-allocate:running:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:hosts-allocate:running:regression
    xen-4.12-testing:test-xtf-amd64-amd64-4:hosts-allocate:running:regression
    xen-4.12-testing:test-xtf-amd64-amd64-3:hosts-allocate:running:regression
    xen-4.12-testing:test-amd64-amd64-xl-rtds:hosts-allocate:running:regression
    xen-4.12-testing:test-amd64-amd64-pygrub:hosts-allocate:running:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:hosts-allocate:running:regression
    xen-4.12-testing:test-amd64-amd64-i386-pvgrub:hosts-allocate:running:regression
    xen-4.12-testing:test-amd64-amd64-amd64-pvgrub:hosts-allocate:running:regression
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:hosts-allocate:running:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64:hosts-allocate:running:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:hosts-allocate:running:regression
    xen-4.12-testing:test-xtf-amd64-amd64-2:hosts-allocate:running:regression
    xen-4.12-testing:test-amd64-amd64-xl:hosts-allocate:running:regression
    xen-4.12-testing:test-xtf-amd64-amd64-5:hosts-allocate:running:regression
    xen-4.12-testing:test-amd64-amd64-xl-pvhv2-amd:hosts-allocate:running:regression
    xen-4.12-testing:test-amd64-amd64-xl-shadow:hosts-allocate:running:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ovmf-amd64:hosts-allocate:running:regression
    xen-4.12-testing:test-amd64-amd64-xl-xsm:hosts-allocate:running:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemut-debianhvm-amd64:hosts-allocate:running:regression
    xen-4.12-testing:test-amd64-amd64-xl-credit2:hosts-allocate:running:regression
    xen-4.12-testing:test-amd64-amd64-livepatch:hosts-allocate:running:regression
    xen-4.12-testing:test-xtf-amd64-amd64-1:hosts-allocate:running:regression
    xen-4.12-testing:test-amd64-amd64-xl-multivcpu:hosts-allocate:running:regression
    xen-4.12-testing:test-amd64-amd64-qemuu-freebsd11-amd64:hosts-allocate:running:regression
    xen-4.12-testing:test-amd64-amd64-xl-pvhv2-intel:hosts-allocate:running:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:hosts-allocate:running:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:hosts-allocate:running:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:hosts-allocate:running:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:hosts-allocate:running:regression
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-intel:hosts-allocate:running:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:hosts-allocate:running:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:hosts-allocate:running:regression
    xen-4.12-testing:test-amd64-amd64-qemuu-freebsd12-amd64:hosts-allocate:running:regression
    xen-4.12-testing:test-amd64-amd64-pair:hosts-allocate:running:regression
    xen-4.12-testing:test-amd64-amd64-xl-credit1:hosts-allocate:running:regression
    xen-4.12-testing:test-amd64-amd64-xl-pvshim:hosts-allocate:running:regression
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:hosts-allocate:running:regression
    xen-4.12-testing:test-amd64-amd64-xl-xsm:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-pvshim:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-amd64-pvgrub:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-qemuu-freebsd11-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-xl-pvshim:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-pygrub:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-libvirt-xsm:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-shadow:host-install(5):broken:heisenbug
    xen-4.12-testing:test-xtf-amd64-amd64-3:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-qemuu-rhel6hvm-amd:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-pvhv2-amd:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-livepatch:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-libvirt:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ovmf-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-libvirt:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-qemut-debianhvm-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-qemut-rhel6hvm-amd:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-credit1:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-pvhv2-intel:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-multivcpu:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ovmf-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-livepatch:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-xl-raw:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-qemuu-rhel6hvm-intel:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl:host-install(5):broken:heisenbug
    xen-4.12-testing:test-xtf-amd64-amd64-4:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-intel:host-install(5):broken:heisenbug
    xen-4.12-testing:test-xtf-amd64-amd64-1:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-credit2:host-install(5):broken:heisenbug
    xen-4.12-testing:test-xtf-amd64-amd64-2:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-xl-shadow:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-freebsd10-i386:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-qemut-rhel6hvm-intel:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-i386-pvgrub:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-freebsd10-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-libvirt-vhd:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-rtds:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-xl-qemut-debianhvm-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-migrupgrade:host-install/src_host(6):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-migrupgrade:host-install/src_host(6):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-migrupgrade:host-install/dst_host(7):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-pair:host-install/src_host(6):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-pair:host-install/dst_host(7):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-libvirt-pair:host-install/src_host(6):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-libvirt-pair:host-install/dst_host(7):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-libvirt-pair:host-install/src_host(6):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-libvirt-pair:host-install/dst_host(7):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-pair:host-install/src_host(6):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-pair:host-install/dst_host(7):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-migrupgrade:host-install/dst_host(7):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-xl:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-qemuu-freebsd12-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-armhf-armhf-xl-arndale:debian-fixup:fail:heisenbug
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:debian-di-install:fail:heisenbug
    xen-4.12-testing:test-armhf-armhf-xl-vhd:debian-di-install:fail:heisenbug
    xen-4.12-testing:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:guest-localmigrate/x10:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8d26cdd3b66ab86d560dacd763d76ff3da95723e
X-Osstest-Versions-That:
    xen=cce7cbd986c122a86582ff3775b6b559d877407c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 10 Feb 2021 11:16:50 +0000

flight 159187 xen-4.12-testing running [real]
http://logs.test-lab.xenproject.org/osstest/logs/159187/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-credit1     <job status>                 broken  in 159052
 test-amd64-amd64-pair           <job status>                 broken  in 159052
 test-amd64-amd64-migrupgrade    <job status>                 broken  in 159052
 test-amd64-amd64-livepatch      <job status>                 broken  in 159052
 test-amd64-amd64-qemuu-freebsd11-amd64    <job status>        broken in 159052
 test-amd64-i386-pair            <job status>                 broken  in 159052
 test-amd64-i386-xl              <job status>                 broken  in 159052
 test-amd64-amd64-libvirt-pair    <job status>                 broken in 159052
 test-xtf-amd64-amd64-2          <job status>                 broken  in 159052
 test-amd64-i386-qemuu-rhel6hvm-intel    <job status>          broken in 159052
 test-amd64-i386-qemuu-rhel6hvm-amd    <job status>            broken in 159052
 test-amd64-i386-freebsd10-amd64    <job status>               broken in 159052
 test-amd64-i386-freebsd10-i386    <job status>                broken in 159052
 test-amd64-amd64-xl-qemut-ws16-amd64    <job status>          broken in 159052
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm <job status> broken in 159052
 test-amd64-amd64-xl-credit2     <job status>                 broken  in 159052
 test-amd64-amd64-qemuu-nested-intel    <job status>           broken in 159052
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  <job status> broken in 159052
 test-amd64-amd64-xl-qemuu-ws16-amd64    <job status>          broken in 159052
 test-amd64-amd64-xl-qemuu-debianhvm-amd64    <job status>     broken in 159052
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict <job status> broken in 159052
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm <job status> broken in 159052
 test-amd64-amd64-libvirt        <job status>                 broken  in 159052
 test-amd64-amd64-xl             <job status>                 broken  in 159052
 test-amd64-amd64-qemuu-nested-amd    <job status>             broken in 159052
 test-amd64-amd64-libvirt-xsm    <job status>                 broken  in 159052
 build-i386-xsm                  <job status>                 broken  in 159052
 test-amd64-i386-xl-qemuu-ws16-amd64    <job status>           broken in 159052
 test-amd64-amd64-libvirt-vhd    <job status>                 broken  in 159052
 test-amd64-i386-xl-qemut-win7-amd64    <job status>           broken in 159052
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow <job status> broken in 159052
 test-amd64-i386-libvirt-pair    <job status>                 broken  in 159052
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm    <job status>  broken in 159052
 test-amd64-amd64-pygrub         <job status>                 broken  in 159052
 test-amd64-amd64-i386-pvgrub    <job status>                 broken  in 159052
 test-xtf-amd64-amd64-4          <job status>                 broken  in 159052
 test-xtf-amd64-amd64-1          <job status>                 broken  in 159052
 test-xtf-amd64-amd64-3          <job status>                 broken  in 159052
 test-amd64-i386-xl-pvshim       <job status>                 broken  in 159052
 test-amd64-i386-xl-qemut-debianhvm-amd64    <job status>      broken in 159052
 test-amd64-i386-qemut-rhel6hvm-intel    <job status>          broken in 159052
 test-amd64-i386-migrupgrade     <job status>                 broken  in 159052
 test-amd64-i386-libvirt         <job status>                 broken  in 159052
 test-amd64-amd64-xl-qemuu-win7-amd64    <job status>          broken in 159052
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict <job status> broken in 159052
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm    <job status>  broken in 159052
 test-amd64-amd64-xl-qemut-win7-amd64    <job status>          broken in 159052
 test-amd64-amd64-xl-qemut-debianhvm-amd64    <job status>     broken in 159052
 test-amd64-amd64-xl-qcow2       <job status>                 broken  in 159052
 test-amd64-amd64-xl-pvshim      <job status>                 broken  in 159052
 test-amd64-amd64-xl-pvhv2-intel    <job status>               broken in 159052
 test-amd64-amd64-xl-multivcpu    <job status>                 broken in 159052
 test-amd64-amd64-amd64-pvgrub    <job status>                 broken in 159052
 test-amd64-amd64-xl-xsm         <job status>                 broken  in 159052
 test-amd64-amd64-xl-pvhv2-amd    <job status>                 broken in 159052
 test-amd64-amd64-qemuu-freebsd12-amd64    <job status>        broken in 159052
 test-amd64-i386-xl-qemut-ws16-amd64    <job status>           broken in 159052
 test-amd64-i386-xl-raw          <job status>                 broken  in 159052
 test-amd64-i386-qemut-rhel6hvm-amd    <job status>            broken in 159052
 test-amd64-amd64-xl-rtds        <job status>                 broken  in 159052
 test-amd64-amd64-xl-qemuu-ovmf-amd64    <job status>          broken in 159052
 test-amd64-i386-livepatch       <job status>                 broken  in 159052
 test-amd64-amd64-xl-shadow      <job status>                 broken  in 159052
 test-amd64-i386-xl-qemuu-ovmf-amd64    <job status>           broken in 159052
 test-amd64-i386-xl-qemuu-win7-amd64    <job status>           broken in 159052
 test-amd64-i386-xl-shadow       <job status>                 broken  in 159052
 test-amd64-i386-xl-qemuu-debianhvm-amd64    <job status>      broken in 159052
 build-i386-xsm             4 host-install(4) broken in 159052 REGR. vs. 158556
 test-armhf-armhf-libvirt-raw 13 guest-start    fail in 159052 REGR. vs. 158556
 test-armhf-armhf-xl-vhd      13 guest-start    fail in 159052 REGR. vs. 158556
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm    <none executed> queued
 test-amd64-i386-xl-qemut-win7-amd64    <none executed>              queued
 test-amd64-i386-xl-qemut-ws16-amd64    <none executed>              queued
 test-amd64-i386-xl-qemuu-debianhvm-amd64    <none executed>             queued
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow    <none executed>      queued
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm    <none executed>          queued
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict    <none executed> queued
 test-amd64-i386-xl-qemuu-ovmf-amd64    <none executed>              queued
 test-amd64-i386-xl-qemuu-win7-amd64    <none executed>              queued
 test-amd64-i386-xl-qemuu-ws16-amd64    <none executed>              queued
 test-amd64-i386-xl-xsm          <none executed>              queued
 test-armhf-armhf-libvirt        <none executed>              queued
 test-armhf-armhf-libvirt-raw    <none executed>              queued
 test-amd64-i386-freebsd10-amd64    <none executed>              queued
 build-i386-libvirt              <none executed>              queued
 test-amd64-i386-xl-raw          <none executed>              queued
 test-amd64-i386-xl-shadow       <none executed>              queued
 test-amd64-i386-freebsd10-i386    <none executed>              queued
 test-amd64-i386-libvirt         <none executed>              queued
 test-amd64-i386-libvirt-pair    <none executed>              queued
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm    <none executed>    queued
 test-amd64-i386-libvirt-xsm     <none executed>              queued
 test-amd64-i386-livepatch       <none executed>              queued
 test-amd64-i386-migrupgrade     <none executed>              queued
 test-amd64-i386-pair            <none executed>              queued
 test-amd64-i386-qemut-rhel6hvm-amd    <none executed>              queued
 test-amd64-i386-qemut-rhel6hvm-intel    <none executed>              queued
 test-amd64-i386-qemuu-rhel6hvm-amd    <none executed>              queued
 test-amd64-i386-qemuu-rhel6hvm-intel    <none executed>              queued
 test-amd64-i386-xl              <none executed>              queued
 test-amd64-i386-xl-pvshim       <none executed>              queued
 test-amd64-i386-xl-qemut-debianhvm-amd64    <none executed>             queued
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm    <none executed>          queued
 build-armhf-libvirt           2 hosts-allocate               running
 test-amd64-amd64-libvirt-vhd  3 hosts-allocate               running
 build-i386-pvops              2 hosts-allocate               running
 test-amd64-amd64-libvirt      3 hosts-allocate               running
 test-amd64-amd64-libvirt-xsm  3 hosts-allocate               running
 test-amd64-amd64-libvirt-pair  4 hosts-allocate               running
 build-i386-prev               2 hosts-allocate               running
 build-i386-xsm                2 hosts-allocate               running
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm  3 hosts-allocate   running
 test-armhf-armhf-xl-cubietruck  3 hosts-allocate               running
 test-amd64-amd64-migrupgrade  4 hosts-allocate               running
 build-i386                    2 hosts-allocate               running
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  3 hosts-allocate         running
 test-xtf-amd64-amd64-4        3 hosts-allocate               running
 test-xtf-amd64-amd64-3        3 hosts-allocate               running
 test-amd64-amd64-xl-rtds      3 hosts-allocate               running
 test-amd64-amd64-pygrub       3 hosts-allocate               running
 test-amd64-amd64-xl-qemuu-ws16-amd64  3 hosts-allocate               running
 test-amd64-amd64-i386-pvgrub  3 hosts-allocate               running
 test-amd64-amd64-amd64-pvgrub  3 hosts-allocate               running
 test-amd64-amd64-qemuu-nested-amd  3 hosts-allocate               running
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  3 hosts-allocate            running
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 3 hosts-allocate running
 test-xtf-amd64-amd64-2        3 hosts-allocate               running
 test-amd64-amd64-xl           3 hosts-allocate               running
 test-xtf-amd64-amd64-5        3 hosts-allocate               running
 test-amd64-amd64-xl-pvhv2-amd  3 hosts-allocate               running
 test-amd64-amd64-xl-shadow    3 hosts-allocate               running
 test-amd64-amd64-xl-qemuu-ovmf-amd64  3 hosts-allocate               running
 test-amd64-amd64-xl-xsm       3 hosts-allocate               running
 test-amd64-amd64-xl-qemut-debianhvm-amd64  3 hosts-allocate            running
 test-amd64-amd64-xl-credit2   3 hosts-allocate               running
 test-amd64-amd64-livepatch    3 hosts-allocate               running
 test-xtf-amd64-amd64-1        3 hosts-allocate               running
 test-amd64-amd64-xl-multivcpu  3 hosts-allocate               running
 test-amd64-amd64-qemuu-freebsd11-amd64  3 hosts-allocate               running
 test-amd64-amd64-xl-pvhv2-intel  3 hosts-allocate               running
 test-amd64-amd64-xl-qemuu-win7-amd64  3 hosts-allocate               running
 test-amd64-amd64-xl-qemut-win7-amd64  3 hosts-allocate               running
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  3 hosts-allocate         running
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  3 hosts-allocate     running
 test-amd64-amd64-qemuu-nested-intel  3 hosts-allocate               running
 test-amd64-amd64-xl-qemut-ws16-amd64  3 hosts-allocate               running
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 3 hosts-allocate running
 test-amd64-amd64-qemuu-freebsd12-amd64  3 hosts-allocate               running
 test-amd64-amd64-pair         4 hosts-allocate               running
 test-amd64-amd64-xl-credit1   3 hosts-allocate               running
 test-amd64-amd64-xl-pvshim    3 hosts-allocate               running
 test-amd64-amd64-xl-qcow2     3 hosts-allocate               running

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-xsm      5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-xl-pvshim   5 host-install(5) broken in 159052 pass in 159131
 test-amd64-i386-xl-qemut-ws16-amd64 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-amd64-pvgrub 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-qemuu-freebsd11-amd64 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-i386-xl-pvshim    5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-pygrub      5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-xl-qemut-ws16-amd64 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-libvirt-xsm 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-i386-xl-qemuu-debianhvm-amd64 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-xl-qemut-win7-amd64 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-xl-shadow   5 host-install(5) broken in 159052 pass in 159131
 test-xtf-amd64-amd64-3       5 host-install(5) broken in 159052 pass in 159131
 test-amd64-i386-qemuu-rhel6hvm-amd 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-i386-xl-qemut-win7-amd64 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-xl-pvhv2-amd 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-i386-livepatch    5 host-install(5) broken in 159052 pass in 159131
 test-amd64-i386-libvirt      5 host-install(5) broken in 159052 pass in 159131
 test-amd64-i386-xl-qemuu-ovmf-amd64 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-libvirt     5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-xl-qemut-debianhvm-amd64 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-xl-qcow2    5 host-install(5) broken in 159052 pass in 159131
 test-amd64-i386-qemut-rhel6hvm-amd 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-xl-credit1  5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-xl-pvhv2-intel 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-xl-multivcpu 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-xl-qemuu-ovmf-amd64 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-livepatch   5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-i386-xl-raw       5 host-install(5) broken in 159052 pass in 159131
 test-amd64-i386-qemuu-rhel6hvm-intel 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-xl          5 host-install(5) broken in 159052 pass in 159131
 test-xtf-amd64-amd64-4       5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-qemuu-nested-intel 5 host-install(5) broken in 159052 pass in 159131
 test-xtf-amd64-amd64-1       5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-xl-credit2  5 host-install(5) broken in 159052 pass in 159131
 test-xtf-amd64-amd64-2       5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-xl-qemuu-ws16-amd64 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-i386-xl-qemuu-win7-amd64 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-i386-xl-shadow    5 host-install(5) broken in 159052 pass in 159131
 test-amd64-i386-freebsd10-i386 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-i386-xl-qemuu-ws16-amd64 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-i386-qemut-rhel6hvm-intel 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-i386-pvgrub 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-i386-freebsd10-amd64 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-libvirt-vhd 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-xl-rtds     5 host-install(5) broken in 159052 pass in 159131
 test-amd64-i386-xl-qemut-debianhvm-amd64 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-i386-migrupgrade 6 host-install/src_host(6) broken in 159052 pass in 159131
 test-amd64-amd64-migrupgrade 6 host-install/src_host(6) broken in 159052 pass in 159131
 test-amd64-amd64-migrupgrade 7 host-install/dst_host(7) broken in 159052 pass in 159131
 test-amd64-amd64-xl-qemuu-win7-amd64 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-pair 6 host-install/src_host(6) broken in 159052 pass in 159131
 test-amd64-amd64-pair 7 host-install/dst_host(7) broken in 159052 pass in 159131
 test-amd64-i386-libvirt-pair 6 host-install/src_host(6) broken in 159052 pass in 159131
 test-amd64-i386-libvirt-pair 7 host-install/dst_host(7) broken in 159052 pass in 159131
 test-amd64-amd64-libvirt-pair 6 host-install/src_host(6) broken in 159052 pass in 159131
 test-amd64-amd64-libvirt-pair 7 host-install/dst_host(7) broken in 159052 pass in 159131
 test-amd64-i386-pair 6 host-install/src_host(6) broken in 159052 pass in 159131
 test-amd64-i386-pair 7 host-install/dst_host(7) broken in 159052 pass in 159131
 test-amd64-i386-migrupgrade 7 host-install/dst_host(7) broken in 159052 pass in 159131
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-i386-xl           5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-qemuu-nested-amd 5 host-install(5) broken in 159052 pass in 159131
 test-amd64-amd64-qemuu-freebsd12-amd64 5 host-install(5) broken in 159052 pass in 159131
 test-armhf-armhf-xl-arndale  13 debian-fixup     fail in 159052 pass in 159131
 test-armhf-armhf-libvirt-raw 12 debian-di-install fail in 159131 pass in 159052
 test-armhf-armhf-xl-vhd      12 debian-di-install          fail pass in 159052

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-xsm   1 build-check(1)           blocked in 159052 n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked in 159052 n/a
 test-amd64-i386-xl-xsm        1 build-check(1)           blocked in 159052 n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked in 159052 n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 1 build-check(1) blocked in 159052 n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 1 build-check(1) blocked in 159052 n/a
 test-armhf-armhf-libvirt 16 saverestore-support-check fail in 159052 like 158556
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check fail in 159052 never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check fail in 159052 never pass
 test-armhf-armhf-libvirt    15 migrate-support-check fail in 159052 never pass
 test-amd64-amd64-xl-qcow2 19 guest-localmigrate/x10 fail in 159131 like 158556
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop   fail in 159131 like 158556
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop  fail in 159131 like 158556
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop  fail in 159131 like 158556
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop   fail in 159131 like 158556
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop  fail in 159131 like 158556
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop   fail in 159131 like 158556
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop  fail in 159131 like 158556
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop   fail in 159131 like 158556
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail in 159131 like 158556
 test-amd64-i386-xl-pvshim    14 guest-start          fail in 159131 never pass
 test-amd64-amd64-libvirt    15 migrate-support-check fail in 159131 never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check fail in 159131 never pass
 test-amd64-i386-libvirt     15 migrate-support-check fail in 159131 never pass
 test-amd64-i386-libvirt-xsm 15 migrate-support-check fail in 159131 never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail in 159131 never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail in 159131 never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check fail in 159131 never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8d26cdd3b66ab86d560dacd763d76ff3da95723e
baseline version:
 xen                  cce7cbd986c122a86582ff3775b6b559d877407c

Last test of basis   158556  2021-01-21 15:37:26 Z   19 days
Failing since        159017  2021-02-04 15:06:13 Z    5 days    4 attempts
Testing same since   159052  2021-02-05 18:27:22 Z    4 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               preparing
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   preparing
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          preparing
 build-i386-libvirt                                           queued  
 build-amd64-prev                                             pass    
 build-i386-prev                                              preparing
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             preparing
 test-xtf-amd64-amd64-1                                       preparing
 test-xtf-amd64-amd64-2                                       preparing
 test-xtf-amd64-amd64-3                                       preparing
 test-xtf-amd64-amd64-4                                       preparing
 test-xtf-amd64-amd64-5                                       preparing
 test-amd64-amd64-xl                                          preparing
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           queued  
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           preparing
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            queued  
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        preparing
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         queued  
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 preparing
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  queued  
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 preparing
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  queued  
 test-amd64-amd64-libvirt-xsm                                 preparing
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  queued  
 test-amd64-amd64-xl-xsm                                      preparing
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       queued  
 test-amd64-amd64-qemuu-nested-amd                            preparing
 test-amd64-amd64-xl-pvhv2-amd                                preparing
 test-amd64-i386-qemut-rhel6hvm-amd                           queued  
 test-amd64-i386-qemuu-rhel6hvm-amd                           queued  
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    preparing
 test-amd64-i386-xl-qemut-debianhvm-amd64                     queued  
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    preparing
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     queued  
 test-amd64-i386-freebsd10-amd64                              queued  
 test-amd64-amd64-qemuu-freebsd11-amd64                       preparing
 test-amd64-amd64-qemuu-freebsd12-amd64                       preparing
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         preparing
 test-amd64-i386-xl-qemuu-ovmf-amd64                          queued  
 test-amd64-amd64-xl-qemut-win7-amd64                         preparing
 test-amd64-i386-xl-qemut-win7-amd64                          queued  
 test-amd64-amd64-xl-qemuu-win7-amd64                         preparing
 test-amd64-i386-xl-qemuu-win7-amd64                          queued  
 test-amd64-amd64-xl-qemut-ws16-amd64                         preparing
 test-amd64-i386-xl-qemut-ws16-amd64                          queued  
 test-amd64-amd64-xl-qemuu-ws16-amd64                         preparing
 test-amd64-i386-xl-qemuu-ws16-amd64                          queued  
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  preparing
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  preparing
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               preparing
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        preparing
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         queued  
 test-amd64-i386-freebsd10-i386                               queued  
 test-amd64-amd64-qemuu-nested-intel                          preparing
 test-amd64-amd64-xl-pvhv2-intel                              preparing
 test-amd64-i386-qemut-rhel6hvm-intel                         queued  
 test-amd64-i386-qemuu-rhel6hvm-intel                         queued  
 test-amd64-amd64-libvirt                                     preparing
 test-armhf-armhf-libvirt                                     queued  
 test-amd64-i386-libvirt                                      queued  
 test-amd64-amd64-livepatch                                   preparing
 test-amd64-i386-livepatch                                    queued  
 test-amd64-amd64-migrupgrade                                 preparing
 test-amd64-i386-migrupgrade                                  queued  
 test-amd64-amd64-xl-multivcpu                                preparing
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        preparing
 test-amd64-i386-pair                                         queued  
 test-amd64-amd64-libvirt-pair                                preparing
 test-amd64-i386-libvirt-pair                                 queued  
 test-amd64-amd64-amd64-pvgrub                                preparing
 test-amd64-amd64-i386-pvgrub                                 preparing
 test-amd64-amd64-xl-pvshim                                   preparing
 test-amd64-i386-xl-pvshim                                    queued  
 test-amd64-amd64-pygrub                                      preparing
 test-amd64-amd64-xl-qcow2                                    preparing
 test-armhf-armhf-libvirt-raw                                 queued  
 test-amd64-i386-xl-raw                                       queued  
 test-amd64-amd64-xl-rtds                                     preparing
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             preparing
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              queued  
 test-amd64-amd64-xl-shadow                                   preparing
 test-amd64-i386-xl-shadow                                    queued  
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 preparing
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm queued
broken-job test-amd64-i386-xl-qemut-win7-amd64 queued
broken-job test-amd64-i386-xl-qemut-ws16-amd64 queued
broken-job test-amd64-i386-xl-qemuu-debianhvm-amd64 queued
broken-job test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow queued
broken-job test-amd64-i386-xl-qemuu-debianhvm-i386-xsm queued
broken-job test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict queued
broken-job test-amd64-i386-xl-qemuu-ovmf-amd64 queued
broken-job test-amd64-i386-xl-qemuu-win7-amd64 queued
broken-job test-amd64-i386-xl-qemuu-ws16-amd64 queued
broken-job test-amd64-i386-xl-xsm queued
broken-job test-armhf-armhf-libvirt queued
broken-job test-armhf-armhf-libvirt-raw queued
broken-job test-amd64-i386-freebsd10-amd64 queued
broken-job build-i386-libvirt queued
broken-job test-amd64-i386-xl-raw queued
broken-job test-amd64-i386-xl-shadow queued
broken-job test-amd64-i386-freebsd10-i386 queued
broken-job test-amd64-i386-libvirt queued
broken-job test-amd64-i386-libvirt-pair queued
broken-job test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm queued
broken-job test-amd64-i386-libvirt-xsm queued
broken-job test-amd64-i386-livepatch queued
broken-job test-amd64-i386-migrupgrade queued
broken-job test-amd64-i386-pair queued
broken-job test-amd64-i386-qemut-rhel6hvm-amd queued
broken-job test-amd64-i386-qemut-rhel6hvm-intel queued
broken-job test-amd64-i386-qemuu-rhel6hvm-amd queued
broken-job test-amd64-i386-qemuu-rhel6hvm-intel queued
broken-job test-amd64-i386-xl queued
broken-job test-amd64-i386-xl-pvshim queued
broken-job test-amd64-i386-xl-qemut-debianhvm-amd64 queued
broken-job test-amd64-i386-xl-qemut-debianhvm-i386-xsm queued
broken-job test-amd64-amd64-xl-credit1 broken
broken-job test-amd64-amd64-pair broken
broken-job test-amd64-amd64-migrupgrade broken
broken-job test-amd64-amd64-livepatch broken
broken-job test-amd64-amd64-qemuu-freebsd11-amd64 broken
broken-job test-amd64-i386-pair broken
broken-job test-amd64-i386-xl broken
broken-job test-amd64-amd64-libvirt-pair broken
broken-job test-xtf-amd64-amd64-2 broken
broken-job test-amd64-i386-qemuu-rhel6hvm-intel broken
broken-job test-amd64-i386-qemuu-rhel6hvm-amd broken
broken-job test-amd64-i386-freebsd10-amd64 broken
broken-job test-amd64-i386-freebsd10-i386 broken
broken-job test-amd64-amd64-xl-qemut-ws16-amd64 broken
broken-job test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm broken
broken-job test-amd64-amd64-xl-credit2 broken
broken-job test-amd64-amd64-qemuu-nested-intel broken
broken-job test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow broken
broken-job test-amd64-amd64-xl-qemuu-ws16-amd64 broken
broken-job test-amd64-amd64-xl-qemuu-debianhvm-amd64 broken
broken-job test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict broken
broken-job test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm broken
broken-job test-amd64-amd64-libvirt broken
broken-job test-amd64-amd64-xl broken
broken-job test-amd64-amd64-qemuu-nested-amd broken
broken-job test-amd64-amd64-libvirt-xsm broken
broken-job build-i386-xsm broken
broken-job test-amd64-i386-xl-qemuu-ws16-amd64 broken
broken-job test-amd64-amd64-libvirt-vhd broken
broken-job test-amd64-i386-xl-qemut-win7-amd64 broken
broken-job test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow broken
broken-job test-amd64-i386-libvirt-pair broken
broken-job test-amd64-amd64-xl-qemut-debianhvm-i386-xsm broken
broken-job test-amd64-amd64-pygrub broken
broken-job test-amd64-amd64-i386-pvgrub broken
broken-job test-xtf-amd64-amd64-4 broken
broken-job test-xtf-amd64-amd64-1 broken
broken-job test-xtf-amd64-amd64-3 broken
broken-job test-amd64-i386-xl-pvshim broken
broken-job test-amd64-i386-xl-qemut-debianhvm-amd64 broken
broken-job test-amd64-i386-qemut-rhel6hvm-intel broken
broken-job test-amd64-i386-migrupgrade broken
broken-job test-amd64-i386-libvirt broken
broken-job test-amd64-amd64-xl-qemuu-win7-amd64 broken
broken-job test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict broken
broken-job test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm broken
broken-job test-amd64-amd64-xl-qemut-win7-amd64 broken
broken-job test-amd64-amd64-xl-qemut-debianhvm-amd64 broken
broken-job test-amd64-amd64-xl-qcow2 broken
broken-job test-amd64-amd64-xl-pvshim broken
broken-job test-amd64-amd64-xl-pvhv2-intel broken
broken-job test-amd64-amd64-xl-multivcpu broken
broken-job test-amd64-amd64-amd64-pvgrub broken
broken-job test-amd64-amd64-xl-xsm broken
broken-job test-amd64-amd64-xl-pvhv2-amd broken
broken-job test-amd64-amd64-qemuu-freebsd12-amd64 broken
broken-job test-amd64-i386-xl-qemut-ws16-amd64 broken
broken-job test-amd64-i386-xl-raw broken
broken-job test-amd64-i386-qemut-rhel6hvm-amd broken
broken-job test-amd64-amd64-xl-rtds broken
broken-job test-amd64-amd64-xl-qemuu-ovmf-amd64 broken
broken-job test-amd64-i386-livepatch broken
broken-job test-amd64-amd64-xl-shadow broken
broken-job test-amd64-i386-xl-qemuu-ovmf-amd64 broken
broken-job test-amd64-i386-xl-qemuu-win7-amd64 broken
broken-job test-amd64-i386-xl-shadow broken
broken-job test-amd64-i386-xl-qemuu-debianhvm-amd64 broken

Not pushing.

------------------------------------------------------------
commit 8d26cdd3b66ab86d560dacd763d76ff3da95723e
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Feb 5 08:52:54 2021 +0100

    x86/msr: fix handling of MSR_IA32_PERF_{STATUS/CTL} (again, part 2)
    
    X86_VENDOR_* aren't bit masks in the older trees.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit f1f322610718c40680ac09e66f6c82e69c78ba3a
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Feb 4 15:39:45 2021 +0100

    x86/msr: fix handling of MSR_IA32_PERF_{STATUS/CTL} (again)
    
    X86_VENDOR_* aren't bit masks in the older trees.
    
    Reported-by: James Dingwall <james@dingwall.me.uk>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 11:20:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 11:20:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83545.155716 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9nXT-0000d6-Qg; Wed, 10 Feb 2021 11:20:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83545.155716; Wed, 10 Feb 2021 11:20:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9nXT-0000cz-N1; Wed, 10 Feb 2021 11:20:07 +0000
Received: by outflank-mailman (input) for mailman id 83545;
 Wed, 10 Feb 2021 11:20:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9nXT-0000cr-3r; Wed, 10 Feb 2021 11:20:07 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9nXS-0007si-V1; Wed, 10 Feb 2021 11:20:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9nXS-0000Zk-Nj; Wed, 10 Feb 2021 11:20:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l9nXS-0006hA-ND; Wed, 10 Feb 2021 11:20:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=71YP4xv8rXpBdzEhFKGa9CxxYvSHptc9ao32re4SfOk=; b=qGSGpZwRJsX8BAXCBnPhKCAVaX
	yi/ZllX0JKYgMJ7vtqz+zzCNK0roj5QZgBBWET1jQHKkvq7AeULBcGsISgI2CCxxQ517WP7/fHT4p
	e1mKTFaAWyZ4YkZ5OwwXs843AShutzs+T14vdy5Sl4sFGKA8doDFBeVhzhUTqIox8L5E=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159156-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 159156: regressions - trouble: broken/fail/pass/preparing/running
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-amd64:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-pvshim:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-qemuu-freebsd11-amd64:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-pygrub:<job status>:broken:regression
    xen-unstable:test-amd64-coresched-amd64-xl:<job status>:broken:regression
    xen-unstable:test-amd64-coresched-amd64-xl:host-install(5):broken:regression
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:host-install(5):broken:regression
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:host-install(5):broken:regression
    xen-unstable:test-amd64-amd64-xl-pvshim:host-install(5):broken:regression
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-amd64:host-install(5):broken:regression
    xen-unstable:test-amd64-amd64-pygrub:host-install(5):broken:regression
    xen-unstable:test-amd64-amd64-qemuu-freebsd11-amd64:host-install(5):broken:regression
    xen-unstable:test-amd64-amd64-examine:memdisk-try-append:fail:regression
    xen-unstable:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    xen-unstable:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:hosts-allocate:running:regression
    xen-unstable:test-amd64-coresched-i386-xl:hosts-allocate:running:regression
    xen-unstable:test-amd64-i386-xl:hosts-allocate:running:regression
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:hosts-allocate:running:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:hosts-allocate:running:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:hosts-allocate:running:regression
    xen-unstable:test-amd64-i386-xl-shadow:hosts-allocate:running:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:hosts-allocate:running:regression
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:host-install(5):running:regression
    xen-unstable:test-amd64-amd64-libvirt-xsm:xen-install:running:regression
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:host-ping-check-xen:running:regression
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:running:regression
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:windows-install:running:regression
    xen-unstable:test-amd64-amd64-libvirt-xsm:syslog-server:running:regression
    xen-unstable:test-amd64-amd64-i386-pvgrub:guest-localmigrate/x10:running:regression
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:syslog-server:running:regression
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:guest-start/debianhvm.repeat:running:regression
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:syslog-server:running:regression
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:syslog-server:running:regression
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:syslog-server:running:regression
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:syslog-server:running:regression
    xen-unstable:test-amd64-amd64-i386-pvgrub:syslog-server:running:regression
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f18309eb06efd1db5a2ab9903a1c246b6299951a
X-Osstest-Versions-That:
    xen=ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 10 Feb 2021 11:20:06 +0000

flight 159156 xen-unstable running [real]
http://logs.test-lab.xenproject.org/osstest/logs/159156/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-win7-amd64    <job status>                 broken
 test-amd64-amd64-xl-qemut-debianhvm-amd64    <job status>               broken
 test-amd64-amd64-xl-pvshim      <job status>                 broken
 test-amd64-amd64-qemuu-nested-amd    <job status>                 broken
 test-amd64-amd64-qemuu-freebsd11-amd64    <job status>                 broken
 test-amd64-amd64-pygrub         <job status>                 broken
 test-amd64-coresched-amd64-xl    <job status>                 broken
 test-amd64-coresched-amd64-xl  5 host-install(5)       broken REGR. vs. 159036
 test-amd64-amd64-xl-qemut-win7-amd64 5 host-install(5) broken REGR. vs. 159036
 test-amd64-amd64-qemuu-nested-amd  5 host-install(5)   broken REGR. vs. 159036
 test-amd64-amd64-xl-pvshim    5 host-install(5)        broken REGR. vs. 159036
 test-amd64-amd64-xl-qemut-debianhvm-amd64 5 host-install(5) broken REGR. vs. 159036
 test-amd64-amd64-pygrub       5 host-install(5)        broken REGR. vs. 159036
 test-amd64-amd64-qemuu-freebsd11-amd64 5 host-install(5) broken REGR. vs. 159036
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 159036
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 159036
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 159036
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  3 hosts-allocate          running
 test-amd64-coresched-i386-xl  3 hosts-allocate               running
 test-amd64-i386-xl            3 hosts-allocate               running
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  3 hosts-allocate          running
 test-amd64-i386-xl-qemuu-ovmf-amd64  3 hosts-allocate               running
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  3 hosts-allocate      running
 test-amd64-i386-xl-shadow     3 hosts-allocate               running
 test-amd64-i386-xl-qemuu-ws16-amd64  3 hosts-allocate               running
 test-amd64-i386-xl-qemut-debianhvm-amd64  5 host-install(5)            running
 test-amd64-amd64-libvirt-xsm  7 xen-install                  running
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 10 host-ping-check-xen    running
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install running
 test-amd64-i386-xl-qemut-win7-amd64 12 windows-install              running
 test-amd64-amd64-libvirt-xsm  4 syslog-server                running
 test-amd64-amd64-i386-pvgrub 19 guest-localmigrate/x10       running
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict  4 syslog-server  running
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 20 guest-start/debianhvm.repeat running
 test-amd64-i386-xl-qemut-debianhvm-amd64  4 syslog-server              running
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  4 syslog-server          running
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm  4 syslog-server  running
 test-amd64-i386-xl-qemut-win7-amd64  4 syslog-server                running
 test-amd64-amd64-i386-pvgrub  4 syslog-server                running

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 159036
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 159036
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 159036
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 159036
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 159036
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 159036
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f18309eb06efd1db5a2ab9903a1c246b6299951a
baseline version:
 xen                  ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7

Last test of basis   159036  2021-02-05 08:46:54 Z    5 days
Failing since        159077  2021-02-06 11:11:30 Z    3 days    3 attempts
Testing same since                          (not found)         0 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Michał Leszczyński <michal.leszczynski@cert.pl>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Tamas K Lengyel <tamas.lengyel@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                broken  
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           preparing
 test-amd64-coresched-i386-xl                                 preparing
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         running 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 running 
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  preparing
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  preparing
 test-amd64-amd64-libvirt-xsm                                 running 
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            broken  
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    broken  
 test-amd64-i386-xl-qemut-debianhvm-amd64                     running 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       broken  
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          preparing
 test-amd64-amd64-xl-qemut-win7-amd64                         broken  
 test-amd64-i386-xl-qemut-win7-amd64                          running 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          preparing
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         running 
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 running 
 test-amd64-amd64-xl-pvshim                                   broken  
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      broken  
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              preparing
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    preparing
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-amd64-xl-qemut-win7-amd64 broken
broken-job test-amd64-amd64-xl-qemut-debianhvm-amd64 broken
broken-job test-amd64-amd64-xl-pvshim broken
broken-job test-amd64-amd64-qemuu-nested-amd broken
broken-job test-amd64-amd64-qemuu-freebsd11-amd64 broken
broken-job test-amd64-amd64-pygrub broken
broken-job test-amd64-coresched-amd64-xl broken
broken-step test-amd64-coresched-amd64-xl host-install(5)
broken-step test-amd64-amd64-xl-qemut-win7-amd64 host-install(5)
broken-step test-amd64-amd64-qemuu-nested-amd host-install(5)
broken-step test-amd64-amd64-xl-pvshim host-install(5)
broken-step test-amd64-amd64-xl-qemut-debianhvm-amd64 host-install(5)
broken-step test-amd64-amd64-pygrub host-install(5)
broken-step test-amd64-amd64-qemuu-freebsd11-amd64 host-install(5)

Not pushing.

(No revision log; it would be 350 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 11:21:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 11:21:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83548.155731 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9nYP-0000je-5h; Wed, 10 Feb 2021 11:21:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83548.155731; Wed, 10 Feb 2021 11:21:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9nYP-0000jX-25; Wed, 10 Feb 2021 11:21:05 +0000
Received: by outflank-mailman (input) for mailman id 83548;
 Wed, 10 Feb 2021 11:21:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9nYN-0000jP-LD; Wed, 10 Feb 2021 11:21:03 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9nYN-0007tO-GV; Wed, 10 Feb 2021 11:21:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9nYN-0000am-41; Wed, 10 Feb 2021 11:21:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l9nYN-0007Rv-3Y; Wed, 10 Feb 2021 11:21:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=tL1xJrimtytnmwG7fWsZDeobTIboogkudVsG5EY5qyI=; b=EVq6/UcundpoqAfhq6/FKooUnx
	iOEaXZ7H0BoUr210fTKt8Ru/b5T4cX5SjXMPi3RB7NJFGXBOy+m3JKVpJv+J4hSl2guqY7oKBvTpF
	Wfz4k23zUW7xl40WvDI1zHMtBb/Kc6FQBmMSiXg1SEoCU2d3pwGaKvO21zXkYyxBrUOk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159195-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 159195: regressions - trouble: blocked/fail/pass/preparing/queued/starved
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:<none executed>:queued:regression
    libvirt:build-i386-libvirt:<none executed>:queued:regression
    libvirt:test-amd64-i386-libvirt:<none executed>:queued:regression
    libvirt:test-amd64-i386-libvirt-pair:<none executed>:queued:regression
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:<none executed>:queued:regression
    libvirt:test-amd64-i386-libvirt-xsm:<none executed>:queued:regression
    libvirt:test-armhf-armhf-libvirt:<none executed>:queued:regression
    libvirt:test-armhf-armhf-libvirt-raw:<none executed>:queued:regression
    libvirt:build-armhf:hosts-allocate:running:regression
    libvirt:build-i386:hosts-allocate:running:regression
    libvirt:build-i386-pvops:hosts-allocate:running:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:build-i386-xsm:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    libvirt=e3d60f761c7fc1c254e39ea8e42161698c0ee7b5
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 10 Feb 2021 11:21:03 +0000

flight 159195 libvirt running [real]
http://logs.test-lab.xenproject.org/osstest/logs/159195/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt             <none executed>              queued
 build-i386-libvirt              <none executed>              queued
 test-amd64-i386-libvirt         <none executed>              queued
 test-amd64-i386-libvirt-pair    <none executed>              queued
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm    <none executed>    queued
 test-amd64-i386-libvirt-xsm     <none executed>              queued
 test-armhf-armhf-libvirt        <none executed>              queued
 test-armhf-armhf-libvirt-raw    <none executed>              queued
 build-armhf                   2 hosts-allocate               running
 build-i386                    2 hosts-allocate               running
 build-i386-pvops              2 hosts-allocate               running

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 build-i386-xsm                2 hosts-allocate               starved  n/a

version targeted for testing:
 libvirt              e3d60f761c7fc1c254e39ea8e42161698c0ee7b5
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  215 days
Failing since        151818  2020-07-11 04:18:52 Z  214 days  207 attempts
Testing same since                          (not found)         0 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               starved 
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  preparing
 build-i386                                                   preparing
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          queued  
 build-i386-libvirt                                           queued  
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             preparing
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            queued  
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  queued  
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     queued  
 test-amd64-i386-libvirt                                      queued  
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 queued  
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 queued  
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf-libvirt queued
broken-job build-i386-libvirt queued
broken-job test-amd64-i386-libvirt queued
broken-job test-amd64-i386-libvirt-pair queued
broken-job test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm queued
broken-job test-amd64-i386-libvirt-xsm queued
broken-job test-armhf-armhf-libvirt queued
broken-job test-armhf-armhf-libvirt-raw queued

Not pushing.

(No revision log; it would be 40707 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 11:23:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 11:23:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83555.155754 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9nb4-0000yS-WC; Wed, 10 Feb 2021 11:23:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83555.155754; Wed, 10 Feb 2021 11:23:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9nb4-0000yL-Sd; Wed, 10 Feb 2021 11:23:50 +0000
Received: by outflank-mailman (input) for mailman id 83555;
 Wed, 10 Feb 2021 11:23:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9nb4-0000yD-8T; Wed, 10 Feb 2021 11:23:50 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9nb4-0007xS-3o; Wed, 10 Feb 2021 11:23:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9nb3-0000eQ-QM; Wed, 10 Feb 2021 11:23:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l9nb3-0001YW-IP; Wed, 10 Feb 2021 11:23:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=BThOQduRW3RmZsGCjKfcY2v9/RsYlclKTdoNUzMJRtg=; b=KB9TGiwxGd7uQ29MG6mg5RdRQl
	ONftqlYbnDUOfS8NHI5SMNoJgzZqtvdZBJ42KR11D3iF/LnILxxpJvmn43GFRj1sE2UVf0e0Uiuui
	HpRw5kFp0lBGQU8kyeJIBZmw+3djyKYTeFXPQTR+AuTVsbt1aoeFk8taJaLtN+tQ8WmY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159176-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 159176: regressions - trouble: fail/pass/preparing
X-Osstest-Failures:
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-xsm:hosts-allocate:running:regression
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:hosts-allocate:running:regression
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:hosts-allocate:running:regression
    qemu-mainline:test-amd64-amd64-libvirt:hosts-allocate:running:regression
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:hosts-allocate:running:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:hosts-allocate:running:regression
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:hosts-allocate:running:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:hosts-allocate:running:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:hosts-allocate:running:regression
    qemu-mainline:test-amd64-i386-libvirt:hosts-allocate:running:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:hosts-allocate:running:regression
    qemu-mainline:test-amd64-i386-xl-raw:hosts-allocate:running:regression
    qemu-mainline:test-amd64-i386-xl-shadow:hosts-allocate:running:regression
    qemu-mainline:test-amd64-i386-xl-xsm:hosts-allocate:running:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:hosts-allocate:running:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:hosts-allocate:running:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:hosts-allocate:running:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:hosts-allocate:running:regression
    qemu-mainline:test-amd64-i386-xl:hosts-allocate:running:regression
    qemu-mainline:test-amd64-i386-xl-pvshim:hosts-allocate:running:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:hosts-allocate:running:regression
    qemu-mainline:test-amd64-i386-pair:hosts-allocate:running:regression
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:hosts-allocate:running:regression
    qemu-mainline:test-amd64-amd64-libvirt-xsm:hosts-allocate:running:regression
    qemu-mainline:test-amd64-amd64-xl:hosts-allocate:running:regression
    qemu-mainline:test-amd64-i386-libvirt-pair:hosts-allocate:running:regression
    qemu-mainline:test-amd64-amd64-i386-pvgrub:hosts-allocate:running:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:hosts-allocate:running:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:hosts-allocate:running:regression
    qemu-mainline:test-amd64-amd64-xl-rtds:hosts-allocate:running:regression
    qemu-mainline:test-amd64-amd64-xl-credit1:hosts-allocate:running:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:hosts-allocate:running:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:hosts-allocate:running:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:hosts-allocate:running:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:hosts-allocate:running:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:hosts-allocate:running:regression
    qemu-mainline:test-amd64-coresched-amd64-xl:hosts-allocate:running:regression
    qemu-mainline:test-amd64-amd64-xl-pvshim:hosts-allocate:running:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:hosts-allocate:running:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:hosts-allocate:running:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:hosts-allocate:running:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:hosts-allocate:running:regression
    qemu-mainline:test-amd64-amd64-xl-xsm:hosts-allocate:running:regression
    qemu-mainline:test-amd64-coresched-i386-xl:hosts-allocate:running:regression
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:hosts-allocate:running:regression
    qemu-mainline:test-amd64-amd64-pygrub:hosts-allocate:running:regression
    qemu-mainline:test-amd64-amd64-libvirt-pair:hosts-allocate:running:regression
    qemu-mainline:test-amd64-amd64-pair:hosts-allocate:running:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:hosts-allocate:running:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:hosts-allocate:running:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:hosts-allocate:running:regression
    qemu-mainline:test-amd64-amd64-xl-shadow:hosts-allocate:running:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:hosts-allocate:running:regression
    qemu-mainline:test-amd64-amd64-xl-credit2:hosts-allocate:running:regression
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:hosts-allocate:running:regression
    qemu-mainline:test-amd64-amd64-xl-multivcpu:hosts-allocate:running:regression
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=1214d55d1c41fbab3a9973a05085b8760647e411
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 10 Feb 2021 11:23:49 +0000

flight 159176 qemu-mainline running [real]
http://logs.test-lab.xenproject.org/osstest/logs/159176/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-i386-libvirt-xsm   3 hosts-allocate               running
 test-amd64-amd64-dom0pvh-xl-amd  3 hosts-allocate               running
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm  3 hosts-allocate    running
 test-amd64-amd64-libvirt      3 hosts-allocate               running
 test-amd64-amd64-amd64-pvgrub  3 hosts-allocate               running
 test-amd64-i386-freebsd10-amd64  3 hosts-allocate               running
 test-amd64-amd64-dom0pvh-xl-intel  3 hosts-allocate               running
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  3 hosts-allocate      running
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict  3 hosts-allocate running
 test-amd64-i386-libvirt       3 hosts-allocate               running
 test-amd64-i386-xl-qemuu-win7-amd64  3 hosts-allocate               running
 test-amd64-i386-xl-raw        3 hosts-allocate               running
 test-amd64-i386-xl-shadow     3 hosts-allocate               running
 test-amd64-i386-xl-xsm        3 hosts-allocate               running
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  3 hosts-allocate          running
 test-amd64-i386-xl-qemuu-ovmf-amd64  3 hosts-allocate               running
 test-amd64-amd64-qemuu-freebsd11-amd64  3 hosts-allocate               running
 test-amd64-amd64-qemuu-freebsd12-amd64  3 hosts-allocate               running
 test-amd64-i386-xl            3 hosts-allocate               running
 test-amd64-i386-xl-pvshim     3 hosts-allocate               running
 test-amd64-i386-xl-qemuu-ws16-amd64  3 hosts-allocate               running
 test-amd64-i386-pair          4 hosts-allocate               running
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm  3 hosts-allocate   running
 test-amd64-amd64-libvirt-xsm  3 hosts-allocate               running
 test-amd64-amd64-xl           3 hosts-allocate               running
 test-amd64-i386-libvirt-pair  4 hosts-allocate               running
 test-amd64-amd64-i386-pvgrub  3 hosts-allocate               running
 test-amd64-amd64-libvirt-vhd  3 hosts-allocate               running
 test-amd64-amd64-qemuu-nested-amd  3 hosts-allocate               running
 test-amd64-amd64-xl-rtds      3 hosts-allocate               running
 test-amd64-amd64-xl-credit1   3 hosts-allocate               running
 test-amd64-i386-qemuu-rhel6hvm-amd  3 hosts-allocate               running
 test-amd64-amd64-xl-qemuu-ws16-amd64  3 hosts-allocate               running
 test-amd64-amd64-xl-qemuu-win7-amd64  3 hosts-allocate               running
 test-amd64-i386-qemuu-rhel6hvm-intel  3 hosts-allocate               running
 test-amd64-i386-freebsd10-i386  3 hosts-allocate               running
 test-amd64-coresched-amd64-xl  3 hosts-allocate               running
 test-amd64-amd64-xl-pvshim    3 hosts-allocate               running
 test-amd64-amd64-xl-qemuu-ovmf-amd64  3 hosts-allocate               running
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  3 hosts-allocate            running
 test-amd64-amd64-qemuu-nested-intel  3 hosts-allocate               running
 test-amd64-i386-xl-qemuu-debianhvm-amd64  3 hosts-allocate             running
 test-amd64-amd64-xl-xsm       3 hosts-allocate               running
 test-amd64-coresched-i386-xl  3 hosts-allocate               running
 test-amd64-amd64-xl-pvhv2-intel  3 hosts-allocate               running
 test-amd64-amd64-pygrub       3 hosts-allocate               running
 test-amd64-amd64-libvirt-pair  4 hosts-allocate               running
 test-amd64-amd64-pair         4 hosts-allocate               running
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 3 hosts-allocate running
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  3 hosts-allocate     running
 test-amd64-amd64-xl-qcow2     3 hosts-allocate               running
 test-amd64-amd64-xl-shadow    3 hosts-allocate               running
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  3 hosts-allocate         running
 test-amd64-amd64-xl-credit2   3 hosts-allocate               running
 test-amd64-amd64-xl-pvhv2-amd  3 hosts-allocate               running
 test-amd64-amd64-xl-multivcpu  3 hosts-allocate               running

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                1214d55d1c41fbab3a9973a05085b8760647e411
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  174 days
Failing since        152659  2020-08-21 14:07:39 Z  172 days  342 attempts
Testing same since                          (not found)         0 attempts

------------------------------------------------------------
394 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          preparing
 test-amd64-coresched-amd64-xl                                preparing
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           preparing
 test-amd64-coresched-i386-xl                                 preparing
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           preparing
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            preparing
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 preparing
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  preparing
 test-amd64-amd64-libvirt-xsm                                 preparing
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  preparing
 test-amd64-amd64-xl-xsm                                      preparing
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       preparing
 test-amd64-amd64-qemuu-nested-amd                            preparing
 test-amd64-amd64-xl-pvhv2-amd                                preparing
 test-amd64-i386-qemuu-rhel6hvm-amd                           preparing
 test-amd64-amd64-dom0pvh-xl-amd                              preparing
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    preparing
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     preparing
 test-amd64-i386-freebsd10-amd64                              preparing
 test-amd64-amd64-qemuu-freebsd11-amd64                       preparing
 test-amd64-amd64-qemuu-freebsd12-amd64                       preparing
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         preparing
 test-amd64-i386-xl-qemuu-ovmf-amd64                          preparing
 test-amd64-amd64-xl-qemuu-win7-amd64                         preparing
 test-amd64-i386-xl-qemuu-win7-amd64                          preparing
 test-amd64-amd64-xl-qemuu-ws16-amd64                         preparing
 test-amd64-i386-xl-qemuu-ws16-amd64                          preparing
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  preparing
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  preparing
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        preparing
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         preparing
 test-amd64-i386-freebsd10-i386                               preparing
 test-amd64-amd64-qemuu-nested-intel                          preparing
 test-amd64-amd64-xl-pvhv2-intel                              preparing
 test-amd64-i386-qemuu-rhel6hvm-intel                         preparing
 test-amd64-amd64-dom0pvh-xl-intel                            preparing
 test-amd64-amd64-libvirt                                     preparing
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      preparing
 test-amd64-amd64-xl-multivcpu                                preparing
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        preparing
 test-amd64-i386-pair                                         preparing
 test-amd64-amd64-libvirt-pair                                preparing
 test-amd64-i386-libvirt-pair                                 preparing
 test-amd64-amd64-amd64-pvgrub                                preparing
 test-amd64-amd64-i386-pvgrub                                 preparing
 test-amd64-amd64-xl-pvshim                                   preparing
 test-amd64-i386-xl-pvshim                                    preparing
 test-amd64-amd64-pygrub                                      preparing
 test-amd64-amd64-xl-qcow2                                    preparing
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       preparing
 test-amd64-amd64-xl-rtds                                     preparing
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             preparing
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              preparing
 test-amd64-amd64-xl-shadow                                   preparing
 test-amd64-i386-xl-shadow                                    preparing
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 preparing
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 109250 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 11:26:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 11:26:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83559.155769 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9ndd-00019N-Ex; Wed, 10 Feb 2021 11:26:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83559.155769; Wed, 10 Feb 2021 11:26:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9ndd-00019G-BM; Wed, 10 Feb 2021 11:26:29 +0000
Received: by outflank-mailman (input) for mailman id 83559;
 Wed, 10 Feb 2021 11:26:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jO30=HM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l9ndb-00019B-UQ
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 11:26:27 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 759cd76b-6804-4cd4-a817-164ca2a499fe;
 Wed, 10 Feb 2021 11:26:26 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0D3F3AE36;
 Wed, 10 Feb 2021 11:26:26 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 759cd76b-6804-4cd4-a817-164ca2a499fe
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612956386; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=UgWnJuf85refogm4v3ApJ4GsPgI5aul814UAUaT0pxA=;
	b=UcTmRfi3NN8y4xwnHsTN4U7+V/rcHGiOHo78/zmKVd87lQmLQ4KmfxqyTdAD2yxaynAgi5
	dnc2HWgGkUgvW2j04Op0gSc5+m9xM3BntrrT5Lt/BfNQHz+0ZnchsJyQ9Z4Q56QB42SL1/
	WRSZu2kyf+r+uYxbXxnMmB4B70C9jhw=
Subject: Re: [for-4.15][PATCH v2 1/5] xen/x86: p2m: Don't map the special
 pages in the IOMMU page-tables
To: Julien Grall <julien@xen.org>
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20210209152816.15792-1-julien@xen.org>
 <20210209152816.15792-2-julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d2485d44-180e-499c-d917-80da3486d98e@suse.com>
Date: Wed, 10 Feb 2021 12:26:26 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210209152816.15792-2-julien@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 09.02.2021 16:28, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Currently, the IOMMU page-tables will be populated early in the domain
> creation if the hardware is able to virtualize the local APIC. However,
> the IOMMU page-tables will not be freed on early failure, resulting in
> a leak.
> 
> An assigned device should not need to DMA into the vLAPIC page, so we
> can avoid mapping the page in the IOMMU page-tables.

Here and below, may I ask that you use the correct term "APIC
access page", as there are other pages involved in vLAPIC
handling (in particular the virtual APIC page, which is where
accesses that translate to the APIC access page in EPT
actually go).

> This statement is also true for any special page (the vLAPIC page is
> one of them). So take the opportunity to prevent the mapping for all
> of them.

I probably should have realized this earlier, but there is a
downside to this: A guest wanting to core dump itself may want
to dump e.g. shared info and vcpu info pages. Hence ...

> --- a/xen/include/asm-x86/p2m.h
> +++ b/xen/include/asm-x86/p2m.h
> @@ -919,6 +919,10 @@ static inline unsigned int p2m_get_iommu_flags(p2m_type_t p2mt, mfn_t mfn)
>  {
>      unsigned int flags;
>  
> +    /* Don't map special pages in the IOMMU page-tables. */
> +    if ( mfn_valid(mfn) && is_special_page(mfn_to_page(mfn)) )
> +        return 0;

... instead of is_special_page() I think you want to limit the
check here to seeing whether PGC_extra is set.
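[Editor's note: the narrowed check being suggested can be sketched as
below. This is an illustrative, self-contained model only, not Xen's
actual code: the types, the PGC_extra bit position, and the IOMMUF_*
constants are simplified stand-ins for the real definitions in the
Xen tree.]

```c
#include <stdbool.h>

/* Simplified stand-ins for Xen's real types and constants. */
#define PGC_extra        (1UL << 9)   /* hypothetical bit position */
#define IOMMUF_readable  (1U << 0)
#define IOMMUF_writable  (1U << 1)

struct page_info {
    unsigned long count_info;
};

/*
 * Model of the p2m -> IOMMU flag translation with the suggested
 * narrowing: skip only PGC_extra pages (e.g. the APIC access page)
 * rather than all "special" pages, so that shared-info / vcpu-info
 * pages can still be mapped.
 */
static unsigned int iommu_flags_for(const struct page_info *pg,
                                    bool mfn_is_valid)
{
    /* Don't map PGC_extra pages in the IOMMU page-tables. */
    if ( mfn_is_valid && (pg->count_info & PGC_extra) )
        return 0;

    /* Other valid pages get an ordinary read/write mapping. */
    return IOMMUF_readable | IOMMUF_writable;
}
```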

But as said on irc, since this crude way of setting up the APIC
access page is now firmly a problem, I intend to try to redo it.
I can't tell yet whether this will still take a PGC_extra page
of some form (nor if my plans will work out in the first place).
Irrespective of this I think we indeed want to exclude PGC_extra
pages from getting mapped. However, the APIC access page is, I
think, an outlier here - we wouldn't insert other PGC_extra
pages into the p2m, so for all other cases the above addition
would be effectively dead code.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 11:26:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 11:26:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83561.155781 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9ne2-0001Ee-OC; Wed, 10 Feb 2021 11:26:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83561.155781; Wed, 10 Feb 2021 11:26:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9ne2-0001EX-KP; Wed, 10 Feb 2021 11:26:54 +0000
Received: by outflank-mailman (input) for mailman id 83561;
 Wed, 10 Feb 2021 11:26:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9ne1-0001EK-MF; Wed, 10 Feb 2021 11:26:53 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9ne1-00081H-Ff; Wed, 10 Feb 2021 11:26:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9ne0-0000oj-Qj; Wed, 10 Feb 2021 11:26:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l9ne0-0005aj-Py; Wed, 10 Feb 2021 11:26:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=sGQFsxpNppo5SAPKeda3IkwuCuHk2UPoplCs3Auictg=; b=RQyrpV9eRWE/R9oo2OUZlIjo5Z
	W9Fn8XKxIl8jNPJBBmBMdtddPMCobwMymeiDBEkpyuab1cKp7unfLaC33qol0WJ/o3bbDXsJKFFqF
	NyaSQZ0fAC1i/Rg5Wy2WGacDH+ljkdUsn49aKmx9KCbfmSADiXlMuibCgvwI47D0+MsA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159194-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.11-testing test] 159194: trouble: pass/preparing/queued
X-Osstest-Failures:
    xen-4.11-testing:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-xl-qemut-win7-amd64:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-xl-qemut-ws16-amd64:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-xl-qemuu-ovmf-amd64:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-xl-qemuu-win7-amd64:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-xl-qemuu-ws16-amd64:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-xl-xsm:<none executed>:queued:regression
    xen-4.11-testing:test-armhf-armhf-libvirt:<none executed>:queued:regression
    xen-4.11-testing:test-armhf-armhf-libvirt-raw:<none executed>:queued:regression
    xen-4.11-testing:test-armhf-armhf-xl:<none executed>:queued:regression
    xen-4.11-testing:test-armhf-armhf-xl-arndale:<none executed>:queued:regression
    xen-4.11-testing:test-armhf-armhf-xl-credit1:<none executed>:queued:regression
    xen-4.11-testing:test-armhf-armhf-xl-credit2:<none executed>:queued:regression
    xen-4.11-testing:test-armhf-armhf-xl-cubietruck:<none executed>:queued:regression
    xen-4.11-testing:test-armhf-armhf-xl-multivcpu:<none executed>:queued:regression
    xen-4.11-testing:test-armhf-armhf-xl-rtds:<none executed>:queued:regression
    xen-4.11-testing:test-armhf-armhf-xl-vhd:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-freebsd10-amd64:<none executed>:queued:regression
    xen-4.11-testing:build-armhf-libvirt:<none executed>:queued:regression
    xen-4.11-testing:build-i386-libvirt:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-xl-raw:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-xl-shadow:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-freebsd10-i386:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-libvirt:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-libvirt-pair:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-libvirt-xsm:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-livepatch:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-migrupgrade:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-pair:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-qemut-rhel6hvm-amd:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-qemut-rhel6hvm-intel:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-qemuu-rhel6hvm-amd:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-qemuu-rhel6hvm-intel:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-xl:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-xl-pvshim:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-xl-qemut-debianhvm-amd64:<none executed>:queued:regression
    xen-4.11-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:<none executed>:queued:regression
    xen-4.11-testing:build-i386-prev:hosts-allocate:running:regression
    xen-4.11-testing:build-armhf:hosts-allocate:running:regression
    xen-4.11-testing:build-i386:hosts-allocate:running:regression
    xen-4.11-testing:test-amd64-amd64-libvirt:hosts-allocate:running:regression
    xen-4.11-testing:build-i386-pvops:hosts-allocate:running:regression
    xen-4.11-testing:build-i386-xsm:hosts-allocate:running:regression
    xen-4.11-testing:test-amd64-amd64-xl-xsm:hosts-allocate:running:regression
    xen-4.11-testing:test-amd64-amd64-i386-pvgrub:hosts-allocate:running:regression
    xen-4.11-testing:test-amd64-amd64-libvirt-vhd:hosts-allocate:running:regression
    xen-4.11-testing:test-xtf-amd64-amd64-4:hosts-allocate:running:regression
    xen-4.11-testing:test-amd64-amd64-xl-shadow:hosts-allocate:running:regression
    xen-4.11-testing:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:hosts-allocate:running:regression
    xen-4.11-testing:test-amd64-amd64-amd64-pvgrub:hosts-allocate:running:regression
    xen-4.11-testing:test-amd64-amd64-xl:hosts-allocate:running:regression
    xen-4.11-testing:test-amd64-amd64-qemuu-nested-intel:hosts-allocate:running:regression
    xen-4.11-testing:test-xtf-amd64-amd64-5:hosts-allocate:running:regression
    xen-4.11-testing:test-amd64-amd64-libvirt-pair:hosts-allocate:running:regression
    xen-4.11-testing:test-xtf-amd64-amd64-1:hosts-allocate:running:regression
    xen-4.11-testing:test-xtf-amd64-amd64-2:hosts-allocate:running:regression
    xen-4.11-testing:test-amd64-amd64-xl-rtds:hosts-allocate:running:regression
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-ovmf-amd64:hosts-allocate:running:regression
    xen-4.11-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:hosts-allocate:running:regression
    xen-4.11-testing:test-amd64-amd64-qemuu-freebsd11-amd64:hosts-allocate:running:regression
    xen-4.11-testing:test-xtf-amd64-amd64-3:hosts-allocate:running:regression
    xen-4.11-testing:test-amd64-amd64-pair:hosts-allocate:running:regression
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-win7-amd64:hosts-allocate:running:regression
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:hosts-allocate:running:regression
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64:hosts-allocate:running:regression
    xen-4.11-testing:test-amd64-amd64-xl-qemut-win7-amd64:hosts-allocate:running:regression
    xen-4.11-testing:test-amd64-amd64-xl-qemut-debianhvm-amd64:hosts-allocate:running:regression
    xen-4.11-testing:test-amd64-amd64-xl-qcow2:hosts-allocate:running:regression
    xen-4.11-testing:test-amd64-amd64-xl-credit2:hosts-allocate:running:regression
    xen-4.11-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:hosts-allocate:running:regression
    xen-4.11-testing:test-amd64-amd64-qemuu-nested-amd:hosts-allocate:running:regression
    xen-4.11-testing:test-amd64-amd64-pygrub:hosts-allocate:running:regression
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:hosts-allocate:running:regression
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:hosts-allocate:running:regression
    xen-4.11-testing:test-amd64-amd64-xl-qemut-ws16-amd64:hosts-allocate:running:regression
    xen-4.11-testing:test-amd64-amd64-xl-multivcpu:hosts-allocate:running:regression
    xen-4.11-testing:test-amd64-amd64-libvirt-xsm:hosts-allocate:running:regression
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:hosts-allocate:running:regression
    xen-4.11-testing:test-amd64-amd64-xl-pvshim:hosts-allocate:running:regression
    xen-4.11-testing:test-amd64-amd64-xl-pvhv2-amd:hosts-allocate:running:regression
    xen-4.11-testing:test-amd64-amd64-livepatch:hosts-allocate:running:regression
    xen-4.11-testing:test-amd64-amd64-qemuu-freebsd12-amd64:hosts-allocate:running:regression
    xen-4.11-testing:test-amd64-amd64-migrupgrade:hosts-allocate:running:regression
    xen-4.11-testing:test-amd64-amd64-xl-pvhv2-intel:hosts-allocate:running:regression
    xen-4.11-testing:test-amd64-amd64-xl-credit1:hosts-allocate:running:regression
    xen-4.11-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=1c7d984645f9ade9b47e862b5880734ad498fea8
X-Osstest-Versions-That:
    xen=310ab79875cb705cc2c7daddff412b5a4899f8c9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 10 Feb 2021 11:26:52 +0000

flight 159194 xen-4.11-testing running [real]
http://logs.test-lab.xenproject.org/osstest/logs/159194/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm    <none executed> queued
 test-amd64-i386-xl-qemut-win7-amd64    <none executed>              queued
 test-amd64-i386-xl-qemut-ws16-amd64    <none executed>              queued
 test-amd64-i386-xl-qemuu-debianhvm-amd64    <none executed>             queued
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow    <none executed>      queued
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm    <none executed>          queued
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict    <none executed> queued
 test-amd64-i386-xl-qemuu-ovmf-amd64    <none executed>              queued
 test-amd64-i386-xl-qemuu-win7-amd64    <none executed>              queued
 test-amd64-i386-xl-qemuu-ws16-amd64    <none executed>              queued
 test-amd64-i386-xl-xsm          <none executed>              queued
 test-armhf-armhf-libvirt        <none executed>              queued
 test-armhf-armhf-libvirt-raw    <none executed>              queued
 test-armhf-armhf-xl             <none executed>              queued
 test-armhf-armhf-xl-arndale     <none executed>              queued
 test-armhf-armhf-xl-credit1     <none executed>              queued
 test-armhf-armhf-xl-credit2     <none executed>              queued
 test-armhf-armhf-xl-cubietruck    <none executed>              queued
 test-armhf-armhf-xl-multivcpu    <none executed>              queued
 test-armhf-armhf-xl-rtds        <none executed>              queued
 test-armhf-armhf-xl-vhd         <none executed>              queued
 test-amd64-i386-freebsd10-amd64    <none executed>              queued
 build-armhf-libvirt             <none executed>              queued
 build-i386-libvirt              <none executed>              queued
 test-amd64-i386-xl-raw          <none executed>              queued
 test-amd64-i386-xl-shadow       <none executed>              queued
 test-amd64-i386-freebsd10-i386    <none executed>              queued
 test-amd64-i386-libvirt         <none executed>              queued
 test-amd64-i386-libvirt-pair    <none executed>              queued
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm    <none executed>    queued
 test-amd64-i386-libvirt-xsm     <none executed>              queued
 test-amd64-i386-livepatch       <none executed>              queued
 test-amd64-i386-migrupgrade     <none executed>              queued
 test-amd64-i386-pair            <none executed>              queued
 test-amd64-i386-qemut-rhel6hvm-amd    <none executed>              queued
 test-amd64-i386-qemut-rhel6hvm-intel    <none executed>              queued
 test-amd64-i386-qemuu-rhel6hvm-amd    <none executed>              queued
 test-amd64-i386-qemuu-rhel6hvm-intel    <none executed>              queued
 test-amd64-i386-xl              <none executed>              queued
 test-amd64-i386-xl-pvshim       <none executed>              queued
 test-amd64-i386-xl-qemut-debianhvm-amd64    <none executed>             queued
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm    <none executed>          queued
 build-i386-prev               2 hosts-allocate               running
 build-armhf                   2 hosts-allocate               running
 build-i386                    2 hosts-allocate               running
 test-amd64-amd64-libvirt      3 hosts-allocate               running
 build-i386-pvops              2 hosts-allocate               running
 build-i386-xsm                2 hosts-allocate               running
 test-amd64-amd64-xl-xsm       3 hosts-allocate               running
 test-amd64-amd64-i386-pvgrub  3 hosts-allocate               running
 test-amd64-amd64-libvirt-vhd  3 hosts-allocate               running
 test-xtf-amd64-amd64-4        3 hosts-allocate               running
 test-amd64-amd64-xl-shadow    3 hosts-allocate               running
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 3 hosts-allocate running
 test-amd64-amd64-amd64-pvgrub  3 hosts-allocate               running
 test-amd64-amd64-xl           3 hosts-allocate               running
 test-amd64-amd64-qemuu-nested-intel  3 hosts-allocate               running
 test-xtf-amd64-amd64-5        3 hosts-allocate               running
 test-amd64-amd64-libvirt-pair  4 hosts-allocate               running
 test-xtf-amd64-amd64-1        3 hosts-allocate               running
 test-xtf-amd64-amd64-2        3 hosts-allocate               running
 test-amd64-amd64-xl-rtds      3 hosts-allocate               running
 test-amd64-amd64-xl-qemuu-ovmf-amd64  3 hosts-allocate               running
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  3 hosts-allocate         running
 test-amd64-amd64-qemuu-freebsd11-amd64  3 hosts-allocate               running
 test-xtf-amd64-amd64-3        3 hosts-allocate               running
 test-amd64-amd64-pair         4 hosts-allocate               running
 test-amd64-amd64-xl-qemuu-win7-amd64  3 hosts-allocate               running
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 3 hosts-allocate running
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  3 hosts-allocate            running
 test-amd64-amd64-xl-qemut-win7-amd64  3 hosts-allocate               running
 test-amd64-amd64-xl-qemut-debianhvm-amd64  3 hosts-allocate            running
 test-amd64-amd64-xl-qcow2     3 hosts-allocate               running
 test-amd64-amd64-xl-credit2   3 hosts-allocate               running
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm  3 hosts-allocate   running
 test-amd64-amd64-qemuu-nested-amd  3 hosts-allocate               running
 test-amd64-amd64-pygrub       3 hosts-allocate               running
 test-amd64-amd64-xl-qemuu-ws16-amd64  3 hosts-allocate               running
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  3 hosts-allocate     running
 test-amd64-amd64-xl-qemut-ws16-amd64  3 hosts-allocate               running
 test-amd64-amd64-xl-multivcpu  3 hosts-allocate               running
 test-amd64-amd64-libvirt-xsm  3 hosts-allocate               running
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  3 hosts-allocate         running
 test-amd64-amd64-xl-pvshim    3 hosts-allocate               running
 test-amd64-amd64-xl-pvhv2-amd  3 hosts-allocate               running
 test-amd64-amd64-livepatch    3 hosts-allocate               running
 test-amd64-amd64-qemuu-freebsd12-amd64  3 hosts-allocate               running
 test-amd64-amd64-migrupgrade  4 hosts-allocate               running
 test-amd64-amd64-xl-pvhv2-intel  3 hosts-allocate               running
 test-amd64-amd64-xl-credit1   3 hosts-allocate               running

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  1c7d984645f9ade9b47e862b5880734ad498fea8
baseline version:
 xen                  310ab79875cb705cc2c7daddff412b5a4899f8c9

Last test of basis   157566  2020-12-15 14:05:54 Z   56 days
Failing since        159016  2021-02-04 15:05:58 Z    5 days    6 attempts
Testing same since   159042  2021-02-05 12:13:30 Z    4 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               preparing
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  preparing
 build-i386                                                   preparing
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          queued  
 build-i386-libvirt                                           queued  
 build-amd64-prev                                             pass    
 build-i386-prev                                              preparing
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             preparing
 test-xtf-amd64-amd64-1                                       preparing
 test-xtf-amd64-amd64-2                                       preparing
 test-xtf-amd64-amd64-3                                       preparing
 test-xtf-amd64-amd64-4                                       preparing
 test-xtf-amd64-amd64-5                                       preparing
 test-amd64-amd64-xl                                          preparing
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          queued  
 test-amd64-i386-xl                                           queued  
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           preparing
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            queued  
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        preparing
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         queued  
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 preparing
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  queued  
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 preparing
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  queued  
 test-amd64-amd64-libvirt-xsm                                 preparing
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  queued  
 test-amd64-amd64-xl-xsm                                      preparing
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       queued  
 test-amd64-amd64-qemuu-nested-amd                            preparing
 test-amd64-amd64-xl-pvhv2-amd                                preparing
 test-amd64-i386-qemut-rhel6hvm-amd                           queued  
 test-amd64-i386-qemuu-rhel6hvm-amd                           queued  
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    preparing
 test-amd64-i386-xl-qemut-debianhvm-amd64                     queued  
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    preparing
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     queued  
 test-amd64-i386-freebsd10-amd64                              queued  
 test-amd64-amd64-qemuu-freebsd11-amd64                       preparing
 test-amd64-amd64-qemuu-freebsd12-amd64                       preparing
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         preparing
 test-amd64-i386-xl-qemuu-ovmf-amd64                          queued  
 test-amd64-amd64-xl-qemut-win7-amd64                         preparing
 test-amd64-i386-xl-qemut-win7-amd64                          queued  
 test-amd64-amd64-xl-qemuu-win7-amd64                         preparing
 test-amd64-i386-xl-qemuu-win7-amd64                          queued  
 test-amd64-amd64-xl-qemut-ws16-amd64                         preparing
 test-amd64-i386-xl-qemut-ws16-amd64                          queued  
 test-amd64-amd64-xl-qemuu-ws16-amd64                         preparing
 test-amd64-i386-xl-qemuu-ws16-amd64                          queued  
 test-armhf-armhf-xl-arndale                                  queued  
 test-amd64-amd64-xl-credit1                                  preparing
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  queued  
 test-amd64-amd64-xl-credit2                                  preparing
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  queued  
 test-armhf-armhf-xl-cubietruck                               queued  
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        preparing
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         queued  
 test-amd64-i386-freebsd10-i386                               queued  
 test-amd64-amd64-qemuu-nested-intel                          preparing
 test-amd64-amd64-xl-pvhv2-intel                              preparing
 test-amd64-i386-qemut-rhel6hvm-intel                         queued  
 test-amd64-i386-qemuu-rhel6hvm-intel                         queued  
 test-amd64-amd64-libvirt                                     preparing
 test-armhf-armhf-libvirt                                     queued  
 test-amd64-i386-libvirt                                      queued  
 test-amd64-amd64-livepatch                                   preparing
 test-amd64-i386-livepatch                                    queued  
 test-amd64-amd64-migrupgrade                                 preparing
 test-amd64-i386-migrupgrade                                  queued  
 test-amd64-amd64-xl-multivcpu                                preparing
 test-armhf-armhf-xl-multivcpu                                queued  
 test-amd64-amd64-pair                                        preparing
 test-amd64-i386-pair                                         queued  
 test-amd64-amd64-libvirt-pair                                preparing
 test-amd64-i386-libvirt-pair                                 queued  
 test-amd64-amd64-amd64-pvgrub                                preparing
 test-amd64-amd64-i386-pvgrub                                 preparing
 test-amd64-amd64-xl-pvshim                                   preparing
 test-amd64-i386-xl-pvshim                                    queued  
 test-amd64-amd64-pygrub                                      preparing
 test-amd64-amd64-xl-qcow2                                    preparing
 test-armhf-armhf-libvirt-raw                                 queued  
 test-amd64-i386-xl-raw                                       queued  
 test-amd64-amd64-xl-rtds                                     preparing
 test-armhf-armhf-xl-rtds                                     queued  
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             preparing
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              queued  
 test-amd64-amd64-xl-shadow                                   preparing
 test-amd64-i386-xl-shadow                                    queued  
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 preparing
 test-armhf-armhf-xl-vhd                                      queued  


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm queued
broken-job test-amd64-i386-xl-qemut-win7-amd64 queued
broken-job test-amd64-i386-xl-qemut-ws16-amd64 queued
broken-job test-amd64-i386-xl-qemuu-debianhvm-amd64 queued
broken-job test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow queued
broken-job test-amd64-i386-xl-qemuu-debianhvm-i386-xsm queued
broken-job test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict queued
broken-job test-amd64-i386-xl-qemuu-ovmf-amd64 queued
broken-job test-amd64-i386-xl-qemuu-win7-amd64 queued
broken-job test-amd64-i386-xl-qemuu-ws16-amd64 queued
broken-job test-amd64-i386-xl-xsm queued
broken-job test-armhf-armhf-libvirt queued
broken-job test-armhf-armhf-libvirt-raw queued
broken-job test-armhf-armhf-xl queued
broken-job test-armhf-armhf-xl-arndale queued
broken-job test-armhf-armhf-xl-credit1 queued
broken-job test-armhf-armhf-xl-credit2 queued
broken-job test-armhf-armhf-xl-cubietruck queued
broken-job test-armhf-armhf-xl-multivcpu queued
broken-job test-armhf-armhf-xl-rtds queued
broken-job test-armhf-armhf-xl-vhd queued
broken-job test-amd64-i386-freebsd10-amd64 queued
broken-job build-armhf-libvirt queued
broken-job build-i386-libvirt queued
broken-job test-amd64-i386-xl-raw queued
broken-job test-amd64-i386-xl-shadow queued
broken-job test-amd64-i386-freebsd10-i386 queued
broken-job test-amd64-i386-libvirt queued
broken-job test-amd64-i386-libvirt-pair queued
broken-job test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm queued
broken-job test-amd64-i386-libvirt-xsm queued
broken-job test-amd64-i386-livepatch queued
broken-job test-amd64-i386-migrupgrade queued
broken-job test-amd64-i386-pair queued
broken-job test-amd64-i386-qemut-rhel6hvm-amd queued
broken-job test-amd64-i386-qemut-rhel6hvm-intel queued
broken-job test-amd64-i386-qemuu-rhel6hvm-amd queued
broken-job test-amd64-i386-qemuu-rhel6hvm-intel queued
broken-job test-amd64-i386-xl queued
broken-job test-amd64-i386-xl-pvshim queued
broken-job test-amd64-i386-xl-qemut-debianhvm-amd64 queued
broken-job test-amd64-i386-xl-qemut-debianhvm-i386-xsm queued

Not pushing.

------------------------------------------------------------
commit 1c7d984645f9ade9b47e862b5880734ad498fea8
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Feb 5 08:54:03 2021 +0100

    x86/msr: fix handling of MSR_IA32_PERF_{STATUS/CTL} (again, part 2)
    
    X86_VENDOR_* aren't bit masks in the older trees.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit f9090d990e201a5ca045976b8ddaab9fa6ee69dd
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Feb 4 15:41:12 2021 +0100

    x86/msr: fix handling of MSR_IA32_PERF_{STATUS/CTL} (again)
    
    X86_VENDOR_* aren't bit masks in the older trees.
    
    Reported-by: James Dingwall <james@dingwall.me.uk>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 11:35:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 11:35:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83567.155796 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9nlz-0002EK-Sa; Wed, 10 Feb 2021 11:35:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83567.155796; Wed, 10 Feb 2021 11:35:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9nlz-0002ED-PQ; Wed, 10 Feb 2021 11:35:07 +0000
Received: by outflank-mailman (input) for mailman id 83567;
 Wed, 10 Feb 2021 11:35:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=X/zh=HM=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1l9nly-0002E8-I8
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 11:35:06 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d3a92d67-3b54-4a1b-979c-b609e0de3677;
 Wed, 10 Feb 2021 11:35:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d3a92d67-3b54-4a1b-979c-b609e0de3677
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612956905;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=cPSsm1PTi7ikS4ZFh1cZo1XySe6RyeCug/hia3cTN3s=;
  b=KC5weaiFWVaRsjgwQFQLw/cvuM/9/j7jCOP0wTCoHbN6R0iw/Z6NtePf
   Ro1r5OpONSYqHLI4vFB3BXQEl70vsS8Sfr8OuvsN4iQRXcQS99RPtsHlu
   l9rn3dXA0FmHFurDk3HBgKVlUfQi2xX5cPZtNaPmH4IZGiu2kXWHoHUBP
   s=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 38299202
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,168,1610427600"; 
   d="scan'208";a="38299202"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=KxdPddOXRJllKSyhL1lYZHd4PqYTR0sojJD0FC5Z61hmvMyjkn+VjkT0Dds7Q1XoqD8Zs2pkFDh/BujaOic/nEcDofV1FpVze0wf7m6Neo9DB8wyLjJWazRDQiH1bo9EzGePRPZ+Zry6HM1cWH/nHkA1Eez9lnnnGOOs3BN/spgDe7Dgj46m5UIMCfAzUKTSTEoZAUbdUcePMraPF46KrKXCgLpkTp0sKSho1AVB2TRIngWfX/ElvA2r24SfcUYGvQ5RcgTLP7sxKZpqVnqUCCwbtqPOmyM/emKcEZNEfcaNvBD2byCAG1ZY8/2F8CoHIXfKU6Po8H0KotwEoPbUMA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=cBPUQwNO59+m/qq75DvFxWl+Mqnh1M7vO6RzVX0hR/Y=;
 b=HnM3L0tCJMEiwxEtI4ivUjbQKpUzow6nZFIjhh6i8qu/6s4LXxei6aePWahveZCX3+HkvCeUGNB4vl7roXN25NyYeLgbBhLUul+CjngmsbqyFeRcuSYFWU57Hyj1ZGpPSC4NweD8wiOUSVK1mzhGSAObWycKtkBvpBIhmh3ShkF1NlL2aqrVbPsUqx8q6x+HTBvQAOV7jXVDTT/nrSP8SmJbqGNguCXhRPAesEgCPIG0GS3ifPdWdbS6c0vBh3iTmjrB9aNKE1hHab6t0GpsVGbAfWNgy+vT3+INctl7vcNfUJ3vZlp8m2IjeiU2MWCiOZRpO5CcaOErRHQfeJHbUQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=cBPUQwNO59+m/qq75DvFxWl+Mqnh1M7vO6RzVX0hR/Y=;
 b=pinDsuYyAEeuZNB3rJuIyiyFCHWHNJf+d4kVPZHE2yEmfVJN+7SkMFpKRz5ImnQInUGMswMEr6qKwtbiH8YgvJHzdYc59zExaSZaUxxPk86p0vCCbCwAWdzJdOx7VS5AqmU04ZMW+6I8F1hkyOpPZKYmUahC33krm2VFzKqDqv0=
Date: Wed, 10 Feb 2021 12:34:41 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: <xen-devel@lists.xenproject.org>, <hongyxia@amazon.co.uk>,
	<iwj@xenproject.org>, Julien Grall <jgrall@amazon.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Julien Grall
	<julien@xen.org>
Subject: Re: [for-4.15][PATCH v2 1/5] xen/x86: p2m: Don't map the special
 pages in the IOMMU page-tables
Message-ID: <YCPE0byWKlf/uOFT@Air-de-Roger>
References: <20210209152816.15792-1-julien@xen.org>
 <20210209152816.15792-2-julien@xen.org> <YCOZbNly7YCSNtHY@Air-de-Roger>
 <5bf0a2de-3f0e-8860-7bc7-f667437aa3a7@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <5bf0a2de-3f0e-8860-7bc7-f667437aa3a7@suse.com>
X-ClientProxiedBy: MR2P264CA0055.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:31::19) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 9836e9c1-64af-4103-c58c-08d8cdb7df0e
X-MS-TrafficTypeDiagnostic: DM6PR03MB5067:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB50675A8E219EA9AC5289F2608F8D9@DM6PR03MB5067.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 9836e9c1-64af-4103-c58c-08d8cdb7df0e
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Feb 2021 11:34:48.0211
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 7zrYFQkmMQo9jDt9yyFWaYguGTJPgbr2RvZKMdplzQah9TdjpS2JstlXy27UJYlU6UpqrLefaW1FWwhIkKPYNg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB5067
X-OriginatorOrg: citrix.com

On Wed, Feb 10, 2021 at 12:10:09PM +0100, Jan Beulich wrote:
> On 10.02.2021 09:29, Roger Pau Monné wrote:
> > On Tue, Feb 09, 2021 at 03:28:12PM +0000, Julien Grall wrote:
> >> From: Julien Grall <jgrall@amazon.com>
> >>
> >> Currently, the IOMMU page-tables will be populated early during domain
> >> creation if the hardware is able to virtualize the local APIC. However,
> >> the IOMMU page tables will not be freed on early failure, resulting in
> >> a leak.
> >>
> >> An assigned device should not need to DMA into the vLAPIC page, so we
> >> can avoid mapping the page in the IOMMU page-tables.
> >>
> >> This statement is also true for any special page (the vLAPIC page is
> >> one of them), so take the opportunity to prevent the mapping for all
> >> of them.
> > 
> > Hm, OK, while I assume special pages are unlikely to be the target
> > of DMA operations, it's not easy to spot which pages are special.
> > 
> >> Note that:
> >>     - This is matching the existing behavior with PV guest
> > 
> > You might make HVM guests not sharing page-tables 'match' PV
> > behavior, but you are making behavior between HVM guests themselves
> > diverge.
> > 
> > 
> >>     - This doesn't change the behavior when the P2M is shared with the
> >>     IOMMU. IOW, the special pages will still be accessible by the
> >>     device.
> > 
> > I have to admit I don't like this part at all. Having diverging device
> > mappings depending on whether the page tables are shared or not is
> > bad IMO, as there might be subtle bugs affecting one of the two
> > modes.
> 
> This is one way to look at things, yes. But if you take the
> other perspective that special pages shouldn't be
> IOMMU-mapped, then the divergence is the price to pay for
> being able to share pages (and it's not Julien introducing
> bad behavior here).

Since when sharing we have no option but to make whatever is
accessible to the CPU also available to devices, it would seem
reasonable to also do it when not sharing.

> Additionally it may be possible to utilize the divergence to
> our advantage: If one way of setting up things works and the
> other doesn't, we have a reasonable clue where to look. In
> fact the aspect above may, together with possible future
> findings, end up being a reason to not default to or even
> disallow (like for AMD) page table sharing.

I think such an approach is risky: we likely don't test VT-d without
sharing that much (or at all?) now that IOMMUs support the same page
sizes as EPT, so bugs are likely to go unnoticed.

> > I get the feeling this is just papering over an existing issue instead
> > of actually fixing it: IOMMU page tables need to be properly freed
> > during early failure.
> 
> I take a different perspective: IOMMU page tables shouldn't
> get created (yet) at all in the course of
> XEN_DOMCTL_createdomain - this op is supposed to produce an
> empty container for a VM.

The same would apply to CPU page-tables then, yet those do get
created, and populating them (ie: adding the lapic access page)
doesn't leak such entries, which points at an asymmetry. Either we
set up both tables and handle freeing them properly, or we set up
neither.
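
The symmetric setup/teardown being argued for can be sketched like
this. It's only a minimal, self-contained illustration; the structure
and helpers (setup_cpu_pt, setup_iommu_pt, domain_teardown) are
hypothetical stand-ins, not Xen's actual code:

```c
#include <stdbool.h>
#include <stdlib.h>

/* Hypothetical stand-ins for the real structures and helpers. */
struct domain {
    void *cpu_pt;   /* CPU (EPT/NPT) page-table root */
    void *iommu_pt; /* IOMMU page-table root */
};

static int setup_cpu_pt(struct domain *d)
{
    d->cpu_pt = calloc(1, 4096);
    return d->cpu_pt ? 0 : -1;
}

static int setup_iommu_pt(struct domain *d, bool fail)
{
    if ( fail )
        return -1; /* simulate an early creation failure */
    d->iommu_pt = calloc(1, 4096);
    return d->iommu_pt ? 0 : -1;
}

/* Frees whatever was set up; safe on a partially built domain. */
static void domain_teardown(struct domain *d)
{
    free(d->iommu_pt); d->iommu_pt = NULL;
    free(d->cpu_pt);   d->cpu_pt = NULL;
}

/* Symmetric create: any failure unwinds everything set up so far. */
static int domain_create(struct domain *d, bool inject_failure)
{
    int rc;

    if ( (rc = setup_cpu_pt(d)) != 0 )
        goto fail;
    if ( (rc = setup_iommu_pt(d, inject_failure)) != 0 )
        goto fail;
    return 0;

 fail:
    domain_teardown(d); /* neither set of page tables may leak */
    return rc;
}
```

The point being that once any setup step can fail, the error path
has to cover every earlier step, whichever tables we decide to
create at this stage.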

Roger.


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 11:38:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 11:38:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83570.155811 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9npb-0002go-E2; Wed, 10 Feb 2021 11:38:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83570.155811; Wed, 10 Feb 2021 11:38:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9npb-0002gh-Ak; Wed, 10 Feb 2021 11:38:51 +0000
Received: by outflank-mailman (input) for mailman id 83570;
 Wed, 10 Feb 2021 11:38:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jO30=HM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l9npa-0002gc-8f
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 11:38:50 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6a207d90-c8a2-4938-a808-4c2306cce00f;
 Wed, 10 Feb 2021 11:38:49 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B28C9AC97;
 Wed, 10 Feb 2021 11:38:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6a207d90-c8a2-4938-a808-4c2306cce00f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612957128; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=jxAulCg55OWg0eLoqRjD1nST8G5UM5JmrxqpCT4vzDQ=;
	b=E+nIenkiWB+i6+JPpyCmAQDwpiA49BD33fpAaq7YU6559w/1c0oUgueP3tPcJ5EU0FLAUI
	xCvua0EBPQOraHYY0b0+YsSzLdOXpN3pRatybiD0JW7l0bRHUGlG6kMGBm3waEFjI09tJ0
	udXAAf2tt8U6i++kp4nIHc4JmWBakvM=
Subject: Re: [for-4.15][PATCH v2 1/5] xen/x86: p2m: Don't map the special
 pages in the IOMMU page-tables
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, hongyxia@amazon.co.uk,
 iwj@xenproject.org, Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Julien Grall <julien@xen.org>
References: <20210209152816.15792-1-julien@xen.org>
 <20210209152816.15792-2-julien@xen.org> <YCOZbNly7YCSNtHY@Air-de-Roger>
 <5bf0a2de-3f0e-8860-7bc7-f667437aa3a7@suse.com>
 <YCPE0byWKlf/uOFT@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <65797b03-7bd8-92e9-b6c7-e8eccde9f8ba@suse.com>
Date: Wed, 10 Feb 2021 12:38:48 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <YCPE0byWKlf/uOFT@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 10.02.2021 12:34, Roger Pau Monné wrote:
> On Wed, Feb 10, 2021 at 12:10:09PM +0100, Jan Beulich wrote:
>> On 10.02.2021 09:29, Roger Pau Monné wrote:
>>> I get the feeling this is just papering over an existing issue instead
>>> of actually fixing it: IOMMU page tables need to be properly freed
>>> during early failure.
>>
>> I take a different perspective: IOMMU page tables shouldn't
>> get created (yet) at all in the course of
>> XEN_DOMCTL_createdomain - this op is supposed to produce an
>> empty container for a VM.
> 
> The same would apply to CPU page-tables then, yet those do get
> created, and populating them (ie: adding the lapic access page)
> doesn't leak such entries, which points at an asymmetry. Either we
> set up both tables and handle freeing them properly, or we set up
> neither.

Where would CPU page tables get created from at this early stage?
There's no memory assigned to the guest yet, so there's nothing to
map afaics.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 11:40:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 11:40:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83572.155825 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9nqw-0003Yz-PI; Wed, 10 Feb 2021 11:40:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83572.155825; Wed, 10 Feb 2021 11:40:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9nqw-0003Ys-MO; Wed, 10 Feb 2021 11:40:14 +0000
Received: by outflank-mailman (input) for mailman id 83572;
 Wed, 10 Feb 2021 11:40:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1l9nqv-0003Ym-Aa
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 11:40:13 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l9nqt-0008EJ-Sg; Wed, 10 Feb 2021 11:40:11 +0000
Received: from [54.239.6.177] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l9nqt-0001Bf-Kz; Wed, 10 Feb 2021 11:40:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=ZBbwv4k8hfMiyXp6KH/O3Aq5Y/ESOsiC/dC8pW32B+M=; b=GpbtRwCgq5M5DS1m4+n3d7ANdy
	iuVNPvcxxb/74xcRwhMY+vCqmVUIeFCqpLghsTf1skWPIlQtCReXno+vXy4UrEfr5CugqqIxvNb4n
	CywKNVERB6lU44idultifMWUUz6BFdiGRZd4JavTsaXJ3jJB8qDdGXRUQqFrJa0TLToA=;
Subject: Re: [for-4.15][PATCH v2 1/5] xen/x86: p2m: Don't map the special
 pages in the IOMMU page-tables
To: Jan Beulich <jbeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, hongyxia@amazon.co.uk,
 iwj@xenproject.org, Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
References: <20210209152816.15792-1-julien@xen.org>
 <20210209152816.15792-2-julien@xen.org> <YCOZbNly7YCSNtHY@Air-de-Roger>
 <5bf0a2de-3f0e-8860-7bc7-f667437aa3a7@suse.com>
 <YCPE0byWKlf/uOFT@Air-de-Roger>
 <65797b03-7bd8-92e9-b6c7-e8eccde9f8ba@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <e1c7c616-0941-b577-5842-a51374030798@xen.org>
Date: Wed, 10 Feb 2021 11:40:09 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <65797b03-7bd8-92e9-b6c7-e8eccde9f8ba@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit



On 10/02/2021 11:38, Jan Beulich wrote:
> On 10.02.2021 12:34, Roger Pau Monné wrote:
>> On Wed, Feb 10, 2021 at 12:10:09PM +0100, Jan Beulich wrote:
>>> On 10.02.2021 09:29, Roger Pau Monné wrote:
>>>> I get the feeling this is just papering over an existing issue instead
>>>> of actually fixing it: IOMMU page tables need to be properly freed
>>>> during early failure.
>>>
>>> I take a different perspective: IOMMU page tables shouldn't
>>> get created (yet) at all in the course of
>>> XEN_DOMCTL_createdomain - this op is supposed to produce an
>>> empty container for a VM.
>>
>> The same would apply for CPU page-tables then, yet they seem to be
>> created and populating them (ie: adding the lapic access page) doesn't
>> leak such entries, which points at an asymmetry. Either we set up both
>> tables and handle freeing them properly, or we set none of them.
> 
> Where would CPU page tables get created from at this early stage?

When mapping the APIC page in the P2M. I don't think you can get away 
with removing it completely.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 11:45:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 11:45:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83575.155838 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9nwK-0003kE-EE; Wed, 10 Feb 2021 11:45:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83575.155838; Wed, 10 Feb 2021 11:45:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9nwK-0003k7-Ax; Wed, 10 Feb 2021 11:45:48 +0000
Received: by outflank-mailman (input) for mailman id 83575;
 Wed, 10 Feb 2021 11:45:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jO30=HM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l9nwI-0003jy-L0
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 11:45:46 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1850729e-a164-4b33-a415-7e53541065b6;
 Wed, 10 Feb 2021 11:45:45 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 958E4ACB7;
 Wed, 10 Feb 2021 11:45:44 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1850729e-a164-4b33-a415-7e53541065b6
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612957544; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=MALmJZYxkvItDYeYSbDrpDApSanAyKVOqrMAYZr3qm8=;
	b=TZnxQS6E/iSaMtvK4/6lK2L25855gulmlQ7TMtdmS74PPjEmf1B0602UQLp8VHFzyXxwUS
	yCUKRvpYfmsZrgnpc8CyDQcpuehZt9aM7PILSY+HZ9+eR7TAeC8d2tCYKtSP7/LbDBJOZN
	7DseA114+iJpu7nmJMBXfnm5o24tU8w=
Subject: Re: [for-4.15][PATCH v2 1/5] xen/x86: p2m: Don't map the special
 pages in the IOMMU page-tables
To: Julien Grall <julien@xen.org>
Cc: xen-devel@lists.xenproject.org, hongyxia@amazon.co.uk,
 iwj@xenproject.org, Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20210209152816.15792-1-julien@xen.org>
 <20210209152816.15792-2-julien@xen.org> <YCOZbNly7YCSNtHY@Air-de-Roger>
 <5bf0a2de-3f0e-8860-7bc7-f667437aa3a7@suse.com>
 <YCPE0byWKlf/uOFT@Air-de-Roger>
 <65797b03-7bd8-92e9-b6c7-e8eccde9f8ba@suse.com>
 <e1c7c616-0941-b577-5842-a51374030798@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <71c4150a-0b81-cdc3-b752-814f58cb5ca4@suse.com>
Date: Wed, 10 Feb 2021 12:45:44 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <e1c7c616-0941-b577-5842-a51374030798@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 10.02.2021 12:40, Julien Grall wrote:
> On 10/02/2021 11:38, Jan Beulich wrote:
>> On 10.02.2021 12:34, Roger Pau Monné wrote:
>>> On Wed, Feb 10, 2021 at 12:10:09PM +0100, Jan Beulich wrote:
>>>> On 10.02.2021 09:29, Roger Pau Monné wrote:
>>>>> I get the feeling this is just papering over an existing issue instead
>>>>> of actually fixing it: IOMMU page tables need to be properly freed
>>>>> during early failure.
>>>>
>>>> I take a different perspective: IOMMU page tables shouldn't
>>>> get created (yet) at all in the course of
>>>> XEN_DOMCTL_createdomain - this op is supposed to produce an
>>>> empty container for a VM.
>>>
>>> The same would apply for CPU page-tables then, yet they seem to be
>>> created and populating them (ie: adding the lapic access page) doesn't
>>> leak such entries, which points at an asymmetry. Either we set up both
>>> tables and handle freeing them properly, or we set none of them.
>>
>> Where would CPU page tables get created from at this early stage?
> 
> When mapping the APIC page in the P2M. I don't think you can get away 
> with removing it completely.

It doesn't need putting in the p2m this early. It would be quite
fine to defer this until e.g. the first vCPU gets created.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 11:48:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 11:48:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83576.155850 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9nzC-0003tX-St; Wed, 10 Feb 2021 11:48:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83576.155850; Wed, 10 Feb 2021 11:48:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9nzC-0003tQ-Pt; Wed, 10 Feb 2021 11:48:46 +0000
Received: by outflank-mailman (input) for mailman id 83576;
 Wed, 10 Feb 2021 11:48:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1l9nzB-0003tJ-4p
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 11:48:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l9nz9-0008Nh-2V; Wed, 10 Feb 2021 11:48:43 +0000
Received: from [54.239.6.177] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l9nz8-0001st-P0; Wed, 10 Feb 2021 11:48:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=lkR8InNLLzVeOd/06ybgP28NO53GP7CHLoDv/mWCGa8=; b=5cfP9PplEz/2R1amopc1uLT2/t
	OUBEHR+PsahKPU+IYZ3wVvRlrcmjRtX1+hPpOug49MQCUhWU1xm9nC08riBEcdeJWj5kV0iyvTYj2
	GQT2ekyhYaH0kdzEcU5aEiTt8mV4MEu+6U22on3g4f1PkSSDihXexPV9WoRfA7ujzuTM=;
Subject: Re: [for-4.15][PATCH v2 1/5] xen/x86: p2m: Don't map the special
 pages in the IOMMU page-tables
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org, hongyxia@amazon.co.uk,
 iwj@xenproject.org, Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20210209152816.15792-1-julien@xen.org>
 <20210209152816.15792-2-julien@xen.org> <YCOZbNly7YCSNtHY@Air-de-Roger>
 <5bf0a2de-3f0e-8860-7bc7-f667437aa3a7@suse.com>
 <YCPE0byWKlf/uOFT@Air-de-Roger>
 <65797b03-7bd8-92e9-b6c7-e8eccde9f8ba@suse.com>
 <e1c7c616-0941-b577-5842-a51374030798@xen.org>
 <71c4150a-0b81-cdc3-b752-814f58cb5ca4@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <df760d78-a439-db0a-4b88-813b002f0a64@xen.org>
Date: Wed, 10 Feb 2021 11:48:40 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <71c4150a-0b81-cdc3-b752-814f58cb5ca4@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit



On 10/02/2021 11:45, Jan Beulich wrote:
> On 10.02.2021 12:40, Julien Grall wrote:
>> On 10/02/2021 11:38, Jan Beulich wrote:
>>> On 10.02.2021 12:34, Roger Pau Monné wrote:
>>>> On Wed, Feb 10, 2021 at 12:10:09PM +0100, Jan Beulich wrote:
>>>>> On 10.02.2021 09:29, Roger Pau Monné wrote:
>>>>>> I get the feeling this is just papering over an existing issue instead
>>>>>> of actually fixing it: IOMMU page tables need to be properly freed
>>>>>> during early failure.
>>>>>
>>>>> I take a different perspective: IOMMU page tables shouldn't
>>>>> get created (yet) at all in the course of
>>>>> XEN_DOMCTL_createdomain - this op is supposed to produce an
>>>>> empty container for a VM.
>>>>
>>>> The same would apply for CPU page-tables then, yet they seem to be
>>>> created and populating them (ie: adding the lapic access page) doesn't
>>>> leak such entries, which points at an asymmetry. Either we set up both
>>>> tables and handle freeing them properly, or we set none of them.
>>>
>>> Where would CPU page tables get created from at this early stage?
>>
>> When mapping the APIC page in the P2M. I don't think you can get away
>> with removing it completely.
> 
> It doesn't need putting in the p2m this early. It would be quite
> fine to defer this until e.g. the first vCPU gets created.

It feels wrong to me to set up a per-domain mapping when initializing the 
first vCPU.

But, I was under the impression that there is a plan to remove 
XEN_DOMCTL_max_vcpus. So it would only buy a bit of time...

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 11:54:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 11:54:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83579.155862 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9o4Y-0004p1-Gs; Wed, 10 Feb 2021 11:54:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83579.155862; Wed, 10 Feb 2021 11:54:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9o4Y-0004ou-Dq; Wed, 10 Feb 2021 11:54:18 +0000
Received: by outflank-mailman (input) for mailman id 83579;
 Wed, 10 Feb 2021 11:54:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=X/zh=HM=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1l9o4X-0004op-6x
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 11:54:17 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 81bdab73-d7e1-414b-8911-7baa03c6b00d;
 Wed, 10 Feb 2021 11:54:16 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 81bdab73-d7e1-414b-8911-7baa03c6b00d
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612958055;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=TY1YrNqigC3nuby2bo/MLZCaBbBqG+HDRQsggwYR6+c=;
  b=e3spqWfDL9YQLJ+fYinKcYH8SAnOACrdsVoSkN5gCwIHSmXCO2G8r+7q
   /bQ6aj5l1CSQ3jcIWn4MF59M6rOMsugW8ivorM240exH4gplDlW2MQY3B
   atuJhMOHqG5IsbAAtSgzKm6Rx9B6KgUazjWr7qKzDNvKtG27xivFgcJpj
   0=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: s3bHAMP/o4I+FNHgVDTCjEJvudCyhEqKRG4GSs84jgYCmhVnS71/QGavLn7uY00fB7prwf0jHA
 0Z/mxWCz7sxoi7gKn2p+h69mG95N8k8BuEA1rrpkGlZRcFhWe5oXUD6TYLhOyb8EadX7yjbwQa
 eDwlX5ub8TWaupb6Z4WfpWOGPXHY8vQZRgavngsULhAmHzY0FDuXEYkRfhH2y5AWgcI55pPM5g
 Wd7eyTPQ40zBpayvZpc6djNXCscUicspBiZtektkMsqouFb8xwt4yeZk9cPKejIxT+qm4LRwU9
 nK4=
X-SBRS: 5.2
X-MesageID: 37318094
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,168,1610427600"; 
   d="scan'208";a="37318094"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ekKdIlzhKEvU5cagbs+zOMpbYG4cEX6WnIxaU9qfI4NGijFwO7jBP7PHzlUvxfXW/zhPXWgC8pPkfV1eIl/tu00WT/ZwZjksbPptKlts0jckSOvimqyw5st0c/tpr/JscD3w8HuUPjIoNosGRoDY1WsA6s2FEmjE5sVnc01y5eyJIocueZrFeJ1ty/SRpNsj8X+a6QtGgMTu3TSem6kN4aa7YhnBeV+n//3v16hjZj0mTkFKPwIX8AO+NPthE777OXz7DSBCILS3U2qx0a4W+761AHuQwkHdLvT17Km1dt9R7H3duvU16olFxJCXmbiDDhcbqDoemn1j9UV7WfW/6Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=GZAgjDsA+4s1BTE67/NgyCiYQcMfDswFDZhm5DmIzkI=;
 b=B+OivFkMrVwNzlxf/APPm6kQaJtp+jbgBlmYrxycjQR9ioOzsWdGpidqbA9ppO2tUYSuE8qVDEvTdZ4ETZjenwnU2P27agLDsRDCYKXV/lV3XHULLPU2d2IQDc7UbS9pW4BsB/zsXQ7OeuvzeKHxuhqOgYLd5ESDwSvjwa+OjRLGI29HdqrCIBRAK8WlLoDN64SCQBRJM185PL40WkQP4rbKF99/dVoTIB6NwdNy7rY2uhlGzTiDRvn9EQfb1MlZQGT+d3rHolQdoGXk6c9elaOa5TilFmXhT16MmRgmoDssFMJwjl3HJfIQL+GWNiKGHpZeJPeIgFcMzy/ZJckE7g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=GZAgjDsA+4s1BTE67/NgyCiYQcMfDswFDZhm5DmIzkI=;
 b=F7Tn6UabO7znoiN4vtHYqNjqB9kG6ny+oxII0yLEVhahl++uSQcekzl8O7QdllNEzJJap45pi+p8wW3MqO1beqlnT9glETDZB2oDFHG0ppogek5bmyYnxmFX0QTMynkrf8ONf4cgRYrcb70F8LJqLddq0l3syKrbCH+ZQ8zsipA=
Date: Wed, 10 Feb 2021 12:54:05 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Julien Grall <julien@xen.org>
CC: Jan Beulich <jbeulich@suse.com>, <xen-devel@lists.xenproject.org>,
	<hongyxia@amazon.co.uk>, <iwj@xenproject.org>, Julien Grall
	<jgrall@amazon.com>, Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu
	<wl@xen.org>
Subject: Re: [for-4.15][PATCH v2 1/5] xen/x86: p2m: Don't map the special
 pages in the IOMMU page-tables
Message-ID: <YCPJXe1L1SCXoL7a@Air-de-Roger>
References: <20210209152816.15792-1-julien@xen.org>
 <20210209152816.15792-2-julien@xen.org> <YCOZbNly7YCSNtHY@Air-de-Roger>
 <5bf0a2de-3f0e-8860-7bc7-f667437aa3a7@suse.com>
 <YCPE0byWKlf/uOFT@Air-de-Roger>
 <65797b03-7bd8-92e9-b6c7-e8eccde9f8ba@suse.com>
 <e1c7c616-0941-b577-5842-a51374030798@xen.org>
 <71c4150a-0b81-cdc3-b752-814f58cb5ca4@suse.com>
 <df760d78-a439-db0a-4b88-813b002f0a64@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <df760d78-a439-db0a-4b88-813b002f0a64@xen.org>
X-ClientProxiedBy: MR2P264CA0033.FRAP264.PROD.OUTLOOK.COM (2603:10a6:500::21)
 To DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 59c88579-264f-487f-6fd3-08d8cdba9435
X-MS-TrafficTypeDiagnostic: DS7PR03MB5608:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DS7PR03MB560849131655E252C2342F358F8D9@DS7PR03MB5608.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: rV9FT+Hc40KEULQeIT7PsQU1adXoNQiFrFsZOpkApOq/aBqFAXAjyiNsYYlVq5y5ZluPmtK8N1QWsYUC+Pq6QElUthfyP4GXdgENw2B0dTR3JcWb2PTcxJVCUKrA0h4x/P2ZuOB1ERc5bFrpW7wpkwWgJRfy2HGrgZnV9yFSVJ/wD3a3UA18L5x1pJE5xDzhWEjnXyX50ieECZ1AFfl6oRLM4G/qC8hsZZ/JuVzAMDyAgrIwu0CFakAFZtVnAfPslTV0pdRYLOh4zscnF5nRyuisKPOJSnMpnx9Zjp5r8KONblGTaMxsG6kYHmrWl9OgTTf5LSufRC4Qot3BNWotsp+E0hF70FDg5hTsylTdfVjtviOD65ao0DLBQCRwmof5IJiWNANA3ZU7vwd3G6a4Z3ajguuz6/mHjaKQhbSkHmdB15VTOoI61kwhF5PlBtLR1s7g2luy4HDroN1oqGkgswKsFbGoZzozHfQrB6bbqRJV6rasZK2Q+Lf8pQaJb26OnBgwJ2o4PYsr/7VCRbJwOg==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(376002)(39850400004)(136003)(366004)(396003)(346002)(8936002)(4326008)(5660300002)(66556008)(6496006)(86362001)(16526019)(9686003)(66946007)(66476007)(478600001)(8676002)(186003)(85182001)(2906002)(53546011)(33716001)(316002)(6916009)(26005)(6666004)(54906003)(6486002)(83380400001)(956004);DIR:OUT;SFP:1101;
X-MS-Exchange-CrossTenant-Network-Message-Id: 59c88579-264f-487f-6fd3-08d8cdba9435
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Feb 2021 11:54:10.9107
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: LAbj0cWaHs6TiHNtZXOXDyXaGoW1LlM9hUu/BJdcYbrKbn3bcQAh9wyJhfhqgr4u7LD/UyVe/Kuf5/TivGJJgw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR03MB5608
X-OriginatorOrg: citrix.com

On Wed, Feb 10, 2021 at 11:48:40AM +0000, Julien Grall wrote:
> 
> 
> On 10/02/2021 11:45, Jan Beulich wrote:
> > On 10.02.2021 12:40, Julien Grall wrote:
> > > On 10/02/2021 11:38, Jan Beulich wrote:
> > > > On 10.02.2021 12:34, Roger Pau Monné wrote:
> > > > > On Wed, Feb 10, 2021 at 12:10:09PM +0100, Jan Beulich wrote:
> > > > > > On 10.02.2021 09:29, Roger Pau Monné wrote:
> > > > > > > I get the feeling this is just papering over an existing issue instead
> > > > > > > of actually fixing it: IOMMU page tables need to be properly freed
> > > > > > > during early failure.
> > > > > > 
> > > > > > I take a different perspective: IOMMU page tables shouldn't
> > > > > > get created (yet) at all in the course of
> > > > > > XEN_DOMCTL_createdomain - this op is supposed to produce an
> > > > > > empty container for a VM.
> > > > > 
> > > > > The same would apply for CPU page-tables then, yet they seem to be
> > > > > created and populating them (ie: adding the lapic access page) doesn't
> > > > > leak such entries, which points at an asymmetry. Either we set up both
> > > > > tables and handle freeing them properly, or we set none of them.
> > > > 
> > > > Where would CPU page tables get created from at this early stage?
> > > 
> > > When mapping the APIC page in the P2M. I don't think you can get away
> > > with removing it completely.
> > 
> > It doesn't need putting in the p2m this early. It would be quite
> > fine to defer this until e.g. the first vCPU gets created.
> 
> It feels wrong to me to set up a per-domain mapping when initializing the
> first vCPU.
> 
> But, I was under the impression that there is a plan to remove
> XEN_DOMCTL_max_vcpus. So it would only buy a bit of time...

I was also under that impression. We could setup the lapic access page
at vlapic_init for the BSP (which is part of XEN_DOMCTL_max_vcpus
ATM).

But then I think there should be some kind of check to prevent
populating either the CPU or the IOMMU page tables at domain creation
hypercall, and so the logic to free CPU page tables on failure could
be removed.

Roger.


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 12:10:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 12:10:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83582.155874 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9oJw-0006fn-4d; Wed, 10 Feb 2021 12:10:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83582.155874; Wed, 10 Feb 2021 12:10:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9oJw-0006fg-0b; Wed, 10 Feb 2021 12:10:12 +0000
Received: by outflank-mailman (input) for mailman id 83582;
 Wed, 10 Feb 2021 12:10:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=08uA=HM=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l9oJu-0006fb-0E
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 12:10:10 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c4468b0c-9790-4a28-9777-3cd9c6daffce;
 Wed, 10 Feb 2021 12:10:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c4468b0c-9790-4a28-9777-3cd9c6daffce
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612959008;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=ZZh51H33LQxyXrcaxFY1fF6FmsB/GQQ3PE7hJpqwODI=;
  b=JNGZf7jhEb95yW/U8urTm2iKv9uPbzcN1XwYYLQKmsz4MtBvPDk++Tk5
   KouRVz0QYarbJw8GhEHwQqiLa7BtNm44Ey3N2h2kMS6C/iqCeI7v5ZGnJ
   zvh0gVf96fTrvKPWTkUOMeOQt8bPfz+m2rTVJ1QSihztBe2VvWmWqOE5k
   k=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: Tp//KLyitTQZwlNHM4U+Mtql86HlxFVvM0Qq5bXzp1YOiLnFsV2zrC/ewIfKghfc/rDxL3C7Mw
 0xg6+o8ossfN4epDHvoeznf1lxLfp/uXH1klZu6jlz55iaZSry1qEaFqxvHiXY+SZfbJ6IkRXN
 XrUZ6pMUjq2oOrnOBfrVoNO8R17qkI/g3aL8FXd5giv3mtSYzrWQkGIyQhzcNz5xXbkvgohfhj
 VAatX9Hrc/WvBlrIronvjCUu2lOlVkHeDpUCNCMhkZ2aFwiaCXD0TxE/NVU7PTDTjhDmYX0YF9
 viA=
X-SBRS: 5.2
X-MesageID: 38301100
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,168,1610427600"; 
   d="scan'208";a="38301100"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Q+QR3Jx///i99qgMlIZBsrHx4z8azwmwxkXbJ38zLTYnWtTFcfAUPqeW0Yn9lpNn0V/lfSTqXHwF/6snSbNrlj5GfF0q8jdvI4O2sODPzj368elW9ya+P+oM4QK3g+tCaaWby+WD/MXavszylEBD9zueS06K5+IMQRtbVjogDRgg/S5SZMcN+lD/POHNGhfYpNCqrkBp3QQnH02KCjlvl6NIT5VHUVKwHvZjBUctMKO8td2nfEYb8KqToYYqhSasLoi4wvvt93J01zGeDZ7eVVJm2v7o/QdFGA11YlZ2zZwe1aVGQR1KGgmQ6huxmv7TWDuJbjbXzXTm0eVAjLnvEQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=xVhcAl44TWDzt7zo21fEbPet6O5DdizLMhnWHNpIo6I=;
 b=iIFZp7CL4yYc4PTWZuq+huKJXCINb7nnzt8Gj+Gb6Sm+leoDrq8Dy4IuBgo5nKbjD7b7y/N06U60Mzzs+gnfCDrywd7mxTKrDJ+Pac7zLXPkTel7mu/QcynO3tuikNrQ/mAeB28XIDRPiQYLPlDv5lGhCeIKHcjlOhXNO7tElg6er6qXEW4vNIyqAg8rV+mbTs7+SQMZzXN+AgqaEBAEIttPqzmsPZAVWFqZ1jENPDmkfyy1a8IDhwHbUkr3TpnPrgHLyUxzXLiTM+ZsMgr7FrbOoLQCdRSwxVlkJNxayHEAeEsPaelUuS3XSOcXPnJ++Xlk1rsCuYLbC2nFtTxpug==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=xVhcAl44TWDzt7zo21fEbPet6O5DdizLMhnWHNpIo6I=;
 b=XXkCRKaMoBHAodb/530AK1DrkqAmK+qvtOmQcu0Ee1YIS5kTv3dRJEdFa1YLKr8qi5PfCR5kOOhm8ErBkdW8qFInHo/rhnVlDolFudur5JA6QA5l/563rDZUpsqqHQX0lqOLnlwbkD3XOphrG1LPNnnIi64W0L4KVHPNgwGFrME=
Subject: Re: [PATCH for-4.15] x86/ucode/amd: Handle length sanity check
 failures more gracefully
To: Jan Beulich <jbeulich@suse.com>
CC: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Ian Jackson <iwj@xenproject.org>, Xen-devel
	<xen-devel@lists.xenproject.org>
References: <20210209214911.18461-1-andrew.cooper3@citrix.com>
 <87d6982a-00d9-3daa-ebd7-9afb8ee60126@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <7a91591c-5401-2419-c636-7e456bcfe911@citrix.com>
Date: Wed, 10 Feb 2021 12:09:47 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
In-Reply-To: <87d6982a-00d9-3daa-ebd7-9afb8ee60126@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB

On 10/02/2021 10:55, Jan Beulich wrote:
> On 09.02.2021 22:49, Andrew Cooper wrote:
>> Currently, a failure of verify_patch_size() causes an early abort of the
>> microcode blob loop, which in turn causes a second go around the main
>> container loop, ultimately failing the UCODE_MAGIC check.
>>
>> First, check for errors after the blob loop.  An error here is unrecoverable,
>> so avoid going around the container loop again and printing an
>> unhelpful-at-best error concerning bad UCODE_MAGIC.
>>
>> Second, split the verify_patch_size() check out of the microcode blob header
>> check.  In the case that the sanity check fails, we can still use the
>> known-to-be-plausible header length to continue walking the container to
>> potentially find other applicable microcode blobs.
> Since the code comment you add further clarifies this, if my
> understanding here is correct that you don't think we should
> mistrust the entire container in such a case ...
>
>> Before:
>>   (XEN) microcode: Bad microcode data
>>   (XEN) microcode: Wrong microcode patch file magic
>>   (XEN) Parsing microcode blob error -22
>>
>> After:
>>   (XEN) microcode: Bad microcode length 0x000015c0 for cpu 0xa000
>>   (XEN) microcode: Bad microcode length 0x000015c0 for cpu 0xa010
>>   (XEN) microcode: Bad microcode length 0x000015c0 for cpu 0xa011
>>   (XEN) microcode: Bad microcode length 0x000015c0 for cpu 0xa200
>>   (XEN) microcode: Bad microcode length 0x000015c0 for cpu 0xa210
>>   (XEN) microcode: Bad microcode length 0x000015c0 for cpu 0xa500
>>   (XEN) microcode: couldn't find any matching ucode in the provided blob!
>>
>> Fixes: 4de936a38a ("x86/ucode/amd: Rework parsing logic in cpu_request_microcode()")
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Thanks.

>
> After all we're trying to balance between detecting broken
> containers and having wrong constants ourselves. Personally
> I'd be more inclined to err on the safe side and avoid
> further loading attempts, but I can see the alternative
> perspective also being a reasonable one.

The more I learn, the more I'm starting to mistrust the containers.

But as we don't know whether it is us or the container at fault - and
have an example where we are at fault - I don't think blocking loading
is an appropriate thing to do.  (Amongst other things, it totally kills
any ability to test this interface.)

~Andrew
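
The parsing strategy described in the patch (treat header-check failures as fatal, but skip a blob that merely fails the length sanity check, using its known-plausible header length to keep walking the container) can be sketched as a simplified, hypothetical model. `blob_hdr`, `walk_container` and the tiny `MAX_SANE_LEN` stand-in are illustrative only, not Xen's actual code:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Hypothetical sketch of the two-tier checking described above.
 * Header-level failures abort the walk (unrecoverable); a blob that
 * merely fails the length sanity check is skipped using its
 * known-plausible header length, so later blobs can still be found.
 */

struct blob_hdr {
    uint32_t type;          /* e.g. UCODE_UCODE_TYPE */
    uint32_t len;           /* payload length following the header */
};

#define BLOB_TYPE_UCODE 1
#define MAX_SANE_LEN    8   /* tiny stand-in for verify_patch_size() */

static int walk_container(const uint8_t *buf, size_t size,
                          unsigned int *accepted, unsigned int *skipped)
{
    *accepted = *skipped = 0;

    while ( size )
    {
        const struct blob_hdr *h = (const void *)buf;

        /* Header checks: failure here is unrecoverable. */
        if ( size < sizeof(*h) || h->type != BLOB_TYPE_UCODE ||
             size - sizeof(*h) < h->len )
            return -1;

        /* Sanity check split out: skip this blob, keep walking. */
        if ( h->len > MAX_SANE_LEN )
            ++*skipped;
        else
            ++*accepted;

        buf  += sizeof(*h) + h->len;
        size -= sizeof(*h) + h->len;
    }

    return 0;
}
```

Under this model, a container holding one sane blob and one oversized blob yields one accepted and one skipped entry, rather than aborting early and producing a spurious UCODE_MAGIC error on a second pass round the container loop.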


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 12:29:15 2021
Subject: Re: [PATCH] x86emul: fix SYSENTER/SYSCALL switching into 64-bit mode
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>
References: <7ce15e4b-8bf1-0cfd-ca9e-5f6eba12cac1@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <d66cce4b-6563-4857-30be-5889788ca6c8@citrix.com>
Date: Wed, 10 Feb 2021 12:28:37 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
In-Reply-To: <7ce15e4b-8bf1-0cfd-ca9e-5f6eba12cac1@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB

On 10/02/2021 09:57, Jan Beulich wrote:
> When invoked by compat mode, mode_64bit() will be false at the start of
> emulation. The logic after complete_insn, however, needs to consider the
> mode switched into, in particular to avoid truncating RIP.
>
> Inspired by / paralleling and extending Linux commit 943dea8af21b ("KVM:
> x86: Update emulator context mode if SYSENTER xfers to 64-bit mode").
>
> While there, tighten a related assertion in x86_emulate_wrapper() - we
> want to be sure to not switch into an impossible mode when the code gets
> built for 32-bit only (as is possible for the test harness).
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> In principle we could drop SYSENTER's ctxt->lma dependency when setting
> _regs.r(ip). I wasn't certain whether leaving it as is serves as kind of
> documentation ...
>
> --- a/xen/arch/x86/x86_emulate/x86_emulate.c
> +++ b/xen/arch/x86/x86_emulate/x86_emulate.c
> @@ -6127,6 +6127,10 @@ x86_emulate(
>               (rc = ops->write_segment(x86_seg_ss, &sreg, ctxt)) )
>              goto done;
>  
> +        if ( ctxt->lma )
> +            /* In particular mode_64bit() needs to return true from here on. */
> +            ctxt->addr_size = ctxt->sp_size = 64;

I think this is fine as presented, but don't we want the logical
opposite for SYSRET/SYSEXIT?

We truncate rip suitably already, but don't know what other checks may
appear in the future.

~Andrew
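
The hazard under discussion (a post-instruction RIP commit that truncates to the pre-switch address size) can be modelled with a minimal, hypothetical sketch. `emul_ctxt`, `commit_rip` and `sysenter` are illustrative names only, not the real x86_emulate interfaces:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical sketch: if the emulator's notion of the address size is
 * not updated when SYSENTER switches a compat-mode guest into 64-bit
 * mode, the post-instruction IP fixup truncates the new RIP to 32 bits.
 */

struct emul_ctxt {
    unsigned int addr_size;  /* 32 or 64 */
    uint64_t rip;
};

/* Mimics the "truncate rIP to the current address size" step that
 * runs after an instruction completes. */
static void commit_rip(struct emul_ctxt *ctxt, uint64_t new_rip)
{
    ctxt->rip = (ctxt->addr_size == 64) ? new_rip
                                        : (uint32_t)new_rip;
}

/* SYSENTER from compat mode into a 64-bit code segment: the fix is
 * to widen addr_size *before* the RIP commit. */
static void sysenter(struct emul_ctxt *ctxt, bool lma, uint64_t msr_eip)
{
    if ( lma )
        ctxt->addr_size = 64;  /* mode_64bit() must be true from here on */
    commit_rip(ctxt, msr_eip);
}
```

Without the widening step, a SYSENTER target above 4GiB would have its upper 32 bits silently discarded by the commit; the same reasoning motivates the question of whether SYSRET/SYSEXIT should narrow the sizes again.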


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 12:58:27 2021
Subject: Re: [PATCH for-4.15] x86/ucode/amd: Fix OoB read in
 cpu_request_microcode()
To: Jan Beulich <jbeulich@suse.com>
CC: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Ian Jackson <iwj@xenproject.org>, Xen-devel
	<xen-devel@lists.xenproject.org>
References: <20210209234019.3827-1-andrew.cooper3@citrix.com>
 <0cb0e48b-68d9-3de1-42e1-9a75ac2795d7@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <cb9b52fe-f43b-a45e-2a38-88d422763f5d@citrix.com>
Date: Wed, 10 Feb 2021 12:58:11 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
In-Reply-To: <0cb0e48b-68d9-3de1-42e1-9a75ac2795d7@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB

On 10/02/2021 11:00, Jan Beulich wrote:
> On 10.02.2021 00:40, Andrew Cooper wrote:
>> verify_patch_size() is a maximum size check, and doesn't have a minimum bound.
>>
>> If the microcode container encodes a blob with a length less than 64 bytes,
>> the subsequent calls to microcode_fits()/compare_header() may read off the end
>> of the buffer.
>>
>> Fixes: 4de936a38a ("x86/ucode/amd: Rework parsing logic in cpu_request_microcode()")
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Thanks.

>
>> --- a/xen/arch/x86/cpu/microcode/amd.c
>> +++ b/xen/arch/x86/cpu/microcode/amd.c
>> @@ -349,6 +349,7 @@ static struct microcode_patch *cpu_request_microcode(const void *buf, size_t siz
>>              if ( size < sizeof(*mc) ||
>>                   (mc = buf)->type != UCODE_UCODE_TYPE ||
>>                   size - sizeof(*mc) < mc->len ||
>> +                 mc->len < sizeof(struct microcode_patch) ||
> I was inclined to suggest to use <= here, but I guess a blob
> with 1 byte of data is as bogus as one with 0 bytes of data.

Yeah - it's unfortunate.  On the Intel side, we've got a fairly rigorous
set of sanity checks we can perform on the blob, and on the AMD side we
appear to have nothing.

This at least prevents our minimal header checks from causing OoB accesses.

~Andrew
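
The effect of the added `mc->len < sizeof(struct microcode_patch)` lower bound can be sketched in isolation. `blob_len_ok` and the 64-byte stand-in structure here are hypothetical, not Xen's real layout:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical sketch of the OoB condition fixed above: a maximum-only
 * size check lets a header-declared length smaller than the patch
 * structure through, so later header reads (microcode_fits(),
 * compare_header()) would walk off the end of the blob.
 */

struct microcode_patch_hdr {
    uint8_t fields[64];  /* stand-in for the header the parser reads */
};

/* Accept a blob only if the declared length can hold the header that
 * will subsequently be dereferenced, and is within the maximum. */
static bool blob_len_ok(uint32_t declared_len, uint32_t max_len)
{
    return declared_len >= sizeof(struct microcode_patch_hdr) &&  /* new lower bound */
           declared_len <= max_len;                               /* existing upper bound */
}
```

With only the upper bound in place, a blob declaring, say, 16 bytes passes the check yet cannot hold the 64-byte header, so reading it overruns the buffer; the lower bound closes that hole.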


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 13:07:02 2021
Subject: Re: [PATCH for-4.15] x86/ucode: Fix microcode payload size for Fam19
 processors
To: Jan Beulich <jbeulich@suse.com>
CC: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>, Ian Jackson
	<iwj@xenproject.org>
References: <20210209153336.4016-1-andrew.cooper3@citrix.com>
 <c09110f7-6459-e1f7-2175-09d535ad03ce@suse.com>
 <24610.50089.887907.573064@mariner.uk.xensource.com>
 <6f44ae66-9956-3312-c4c8-b0f1e4b568bb@citrix.com>
 <a0da7d15-d0a2-62c6-0551-f62a09780e16@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <0e5decfb-cfdb-0fc1-73c2-8ee14d750323@citrix.com>
Date: Wed, 10 Feb 2021 13:06:45 +0000
In-Reply-To: <a0da7d15-d0a2-62c6-0551-f62a09780e16@suse.com>

On 10/02/2021 08:37, Jan Beulich wrote:
> On 09.02.2021 18:39, Andrew Cooper wrote:
>> On 09/02/2021 17:17, Ian Jackson wrote:
>>> Jan Beulich writes ("Re: [PATCH for-4.15] x86/ucode: Fix microcode payload size for Fam19 processors"):
>>>> On 09.02.2021 16:33, Andrew Cooper wrote:
>>>>> The original limit provided wasn't accurate.  Blobs are in fact rather larger.
>>>>>
>>>>> Fixes: fe36a173d1 ("x86/amd: Initial support for Fam19h processors")
>>>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>> Acked-by: Jan Beulich <jbeulich@suse.com>
>>>>
>>>>> --- a/xen/arch/x86/cpu/microcode/amd.c
>>>>> +++ b/xen/arch/x86/cpu/microcode/amd.c
>>>>> @@ -111,7 +111,7 @@ static bool verify_patch_size(uint32_t patch_size)
>>>>>  #define F15H_MPB_MAX_SIZE 4096
>>>>>  #define F16H_MPB_MAX_SIZE 3458
>>>>>  #define F17H_MPB_MAX_SIZE 3200
>>>>> -#define F19H_MPB_MAX_SIZE 4800
>>>>> +#define F19H_MPB_MAX_SIZE 5568
>>>> How certain is it that there's not going to be another increase?
>>>> And in comparison, how bad would it be if we pulled this upper
>>>> limit to something that's at least slightly less of an "odd"
>>>> number, e.g. 0x1800, and thus provide some headroom?
>>> 5568 does seem really an excessively magic number...
>> It reads better in hex - 0x15c0.  It is exactly the header,
>> match/patch-ram, and the digital signature of a fixed algorithm.
> And I realize it's less odd than Fam16's 3458 (0xd82).

This particular value is under investigation.  Firmware packages for
Fam16 have the blob size at 0xd60.

>> It's far simpler than Intel's format which contains multiple embedded
>> blobs, and support for minor platform variations within the same blob.
>>
>> There are a lot of problems with how we do patch verification on AMD
>> right now, but it's all a consequence of the header not containing a
>> length field.
>>
>> This number won't change now.  Zen3 processors are out in the world.
> Given history on earlier families, where in some cases sizes
> also vary by model, how can this fact guarantee anything?

There is one example where it changed, and it got shorter.  Making it
larger would involve someone re-laying out the core so more silicon area
could be used for patch ram space, which is increasingly unlikely with
newer models in the same family.
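For reference, the shape of the per-family limit check the patch touches can be sketched like this (values copied from the quoted hunk; a standalone sketch with a hypothetical name, not the actual verify_patch_size()):

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch of a per-family maximum patch size check, in the style of
 * Xen's verify_patch_size().  The limits come from the patch hunk
 * quoted in this thread; they are illustrative, not authoritative. */
static bool patch_size_ok(unsigned int family, uint32_t patch_size)
{
    uint32_t max_size;

    switch ( family )
    {
    case 0x15: max_size = 4096; break;
    case 0x16: max_size = 3458; break; /* possibly 0xd60 - see thread */
    case 0x17: max_size = 3200; break;
    case 0x19: max_size = 5568; break; /* 0x15c0, the value under discussion */
    default:
        return false;                  /* unknown family: reject */
    }

    return patch_size <= max_size;
}
```

Because the header carries no length field, a table like this is the only size sanity check available on the AMD side, which is why the exact per-family constants matter.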

When 4.16 comes, I do have plans to try to make us more robust to
changes in debug builds, particularly given the lack of suitable
documentation on the matter.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 13:08:09 2021
Subject: Re: [for-4.15][PATCH v2 1/5] xen/x86: p2m: Don't map the special
 pages in the IOMMU page-tables
To: Julien Grall <julien@xen.org>
Cc: xen-devel@lists.xenproject.org, hongyxia@amazon.co.uk,
 iwj@xenproject.org, Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20210209152816.15792-1-julien@xen.org>
 <20210209152816.15792-2-julien@xen.org> <YCOZbNly7YCSNtHY@Air-de-Roger>
 <5bf0a2de-3f0e-8860-7bc7-f667437aa3a7@suse.com>
 <YCPE0byWKlf/uOFT@Air-de-Roger>
 <65797b03-7bd8-92e9-b6c7-e8eccde9f8ba@suse.com>
 <e1c7c616-0941-b577-5842-a51374030798@xen.org>
 <71c4150a-0b81-cdc3-b752-814f58cb5ca4@suse.com>
 <df760d78-a439-db0a-4b88-813b002f0a64@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <40ad4ce9-71c4-b033-fc1f-8b5ed39f90b3@suse.com>
Date: Wed, 10 Feb 2021 14:08:05 +0100
In-Reply-To: <df760d78-a439-db0a-4b88-813b002f0a64@xen.org>

On 10.02.2021 12:48, Julien Grall wrote:
> 
> 
> On 10/02/2021 11:45, Jan Beulich wrote:
>> On 10.02.2021 12:40, Julien Grall wrote:
>>> On 10/02/2021 11:38, Jan Beulich wrote:
>>>> On 10.02.2021 12:34, Roger Pau Monné wrote:
>>>>> On Wed, Feb 10, 2021 at 12:10:09PM +0100, Jan Beulich wrote:
>>>>>> On 10.02.2021 09:29, Roger Pau Monné wrote:
>>>>>>> I get the feeling this is just papering over an existing issue instead
>>>>>>> of actually fixing it: IOMMU page tables need to be properly freed
>>>>>>> during early failure.
>>>>>>
>>>>>> I take a different perspective: IOMMU page tables shouldn't
>>>>>> get created (yet) at all in the course of
>>>>>> XEN_DOMCTL_createdomain - this op is supposed to produce an
>>>>>> empty container for a VM.
>>>>>
>>>>> The same would apply for CPU page-tables then, yet they seem to be
>>>>> created and populating them (ie: adding the lapic access page) doesn't
>>>>> leak such entries, which points at an asymmetry. Either we setup both
>>>>> tables and handle freeing them properly, or we set none of them.
>>>>
>>>> Where would CPU page tables get created from at this early stage?
>>>
>>> When mapping the APIC page in the P2M. I don't think you can get away
>>> with removing it completely.
>>
>> It doesn't need putting in the p2m this early. It would be quite
>> fine to defer this until e.g. the first vCPU gets created.
> 
It feels wrong to me to set up a per-domain mapping when initializing the 
> first vCPU.

Then we could do it even later. Any time up to when the domain
would first get unpaused would do.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 13:10:20 2021
Date: Wed, 10 Feb 2021 14:10:03 +0100
From: Daniel Vetter <daniel@ffwll.ch>
To: Thomas Zimmermann <tzimmermann@suse.de>
Cc: airlied@linux.ie, daniel@ffwll.ch, maarten.lankhorst@linux.intel.com,
	mripard@kernel.org, dri-devel@lists.freedesktop.org,
	linux-aspeed@lists.ozlabs.org, linux-arm-kernel@lists.infradead.org,
	linux-mips@vger.kernel.org, linux-mediatek@lists.infradead.org,
	linux-amlogic@lists.infradead.org, linux-arm-msm@vger.kernel.org,
	freedreno@lists.freedesktop.org, linux-renesas-soc@vger.kernel.org,
	linux-rockchip@lists.infradead.org,
	linux-stm32@st-md-mailman.stormreply.com,
	linux-tegra@vger.kernel.org, xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org
Subject: Re: [PATCH] drm/gem: Move drm_gem_fb_prepare_fb() to GEM atomic
 helpers
Message-ID: <YCPbK2zFJMzwHim/@phenom.ffwll.local>
References: <20210209102913.6372-1-tzimmermann@suse.de>
In-Reply-To: <20210209102913.6372-1-tzimmermann@suse.de>

On Tue, Feb 09, 2021 at 11:29:13AM +0100, Thomas Zimmermann wrote:
> The function drm_gem_fb_prepare_fb() is a helper for atomic modesetting,
> but currently located next to framebuffer helpers. Move it to GEM atomic
> helpers, rename it slightly and adapt the drivers. Same for the
> respective simple-pipe helper.
> 
> Compile-tested with x86-64, aarch64 and arm. The patch is fairly large,
> but there are no functional changes.
> 
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>

If we bikeshed this, I think we should also throw in the _helper_ somewhere?
But really I don't have an opinion on this.
-Daniel

> ---
>  drivers/gpu/drm/aspeed/aspeed_gfx_crtc.c     |  4 +-
>  drivers/gpu/drm/drm_gem_atomic_helper.c      | 69 +++++++++++++++++++-
>  drivers/gpu/drm/drm_gem_framebuffer_helper.c | 63 ------------------
>  drivers/gpu/drm/drm_gem_vram_helper.c        |  4 +-
>  drivers/gpu/drm/imx/dcss/dcss-plane.c        |  4 +-
>  drivers/gpu/drm/imx/ipuv3-plane.c            |  4 +-
>  drivers/gpu/drm/ingenic/ingenic-drm-drv.c    |  3 +-
>  drivers/gpu/drm/ingenic/ingenic-ipu.c        |  4 +-
>  drivers/gpu/drm/mcde/mcde_display.c          |  4 +-
>  drivers/gpu/drm/mediatek/mtk_drm_plane.c     |  6 +-
>  drivers/gpu/drm/meson/meson_overlay.c        |  8 +--
>  drivers/gpu/drm/meson/meson_plane.c          |  4 +-
>  drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c    |  4 +-
>  drivers/gpu/drm/msm/msm_atomic.c             |  4 +-
>  drivers/gpu/drm/mxsfb/mxsfb_kms.c            |  6 +-
>  drivers/gpu/drm/pl111/pl111_display.c        |  4 +-
>  drivers/gpu/drm/rcar-du/rcar_du_vsp.c        |  4 +-
>  drivers/gpu/drm/rockchip/rockchip_drm_vop.c  |  3 +-
>  drivers/gpu/drm/stm/ltdc.c                   |  4 +-
>  drivers/gpu/drm/sun4i/sun4i_layer.c          |  4 +-
>  drivers/gpu/drm/sun4i/sun8i_ui_layer.c       |  4 +-
>  drivers/gpu/drm/sun4i/sun8i_vi_layer.c       |  4 +-
>  drivers/gpu/drm/tegra/plane.c                |  4 +-
>  drivers/gpu/drm/tidss/tidss_plane.c          |  4 +-
>  drivers/gpu/drm/tiny/hx8357d.c               |  4 +-
>  drivers/gpu/drm/tiny/ili9225.c               |  4 +-
>  drivers/gpu/drm/tiny/ili9341.c               |  4 +-
>  drivers/gpu/drm/tiny/ili9486.c               |  4 +-
>  drivers/gpu/drm/tiny/mi0283qt.c              |  4 +-
>  drivers/gpu/drm/tiny/repaper.c               |  3 +-
>  drivers/gpu/drm/tiny/st7586.c                |  4 +-
>  drivers/gpu/drm/tiny/st7735r.c               |  4 +-
>  drivers/gpu/drm/tve200/tve200_display.c      |  4 +-
>  drivers/gpu/drm/vc4/vc4_plane.c              |  4 +-
>  drivers/gpu/drm/vkms/vkms_plane.c            |  3 +-
>  drivers/gpu/drm/xen/xen_drm_front_kms.c      |  3 +-
>  include/drm/drm_gem_atomic_helper.h          |  8 +++
>  include/drm/drm_gem_framebuffer_helper.h     |  6 +-
>  include/drm/drm_modeset_helper_vtables.h     |  2 +-
>  include/drm/drm_plane.h                      |  4 +-
>  include/drm/drm_simple_kms_helper.h          |  2 +-
>  41 files changed, 152 insertions(+), 141 deletions(-)
> 
> diff --git a/drivers/gpu/drm/aspeed/aspeed_gfx_crtc.c b/drivers/gpu/drm/aspeed/aspeed_gfx_crtc.c
> index e54686c31a90..d8f214e0be82 100644
> --- a/drivers/gpu/drm/aspeed/aspeed_gfx_crtc.c
> +++ b/drivers/gpu/drm/aspeed/aspeed_gfx_crtc.c
> @@ -9,8 +9,8 @@
>  #include <drm/drm_device.h>
>  #include <drm/drm_fb_cma_helper.h>
>  #include <drm/drm_fourcc.h>
> +#include <drm/drm_gem_atomic_helper.h>
>  #include <drm/drm_gem_cma_helper.h>
> -#include <drm/drm_gem_framebuffer_helper.h>
>  #include <drm/drm_panel.h>
>  #include <drm/drm_simple_kms_helper.h>
>  #include <drm/drm_vblank.h>
> @@ -219,7 +219,7 @@ static const struct drm_simple_display_pipe_funcs aspeed_gfx_funcs = {
>  	.enable		= aspeed_gfx_pipe_enable,
>  	.disable	= aspeed_gfx_pipe_disable,
>  	.update		= aspeed_gfx_pipe_update,
> -	.prepare_fb	= drm_gem_fb_simple_display_pipe_prepare_fb,
> +	.prepare_fb	= drm_gem_simple_display_pipe_prepare_fb,
>  	.enable_vblank	= aspeed_gfx_enable_vblank,
>  	.disable_vblank	= aspeed_gfx_disable_vblank,
>  };
> diff --git a/drivers/gpu/drm/drm_gem_atomic_helper.c b/drivers/gpu/drm/drm_gem_atomic_helper.c
> index e27762cef360..c656b40656bf 100644
> --- a/drivers/gpu/drm/drm_gem_atomic_helper.c
> +++ b/drivers/gpu/drm/drm_gem_atomic_helper.c
> @@ -1,6 +1,10 @@
>  // SPDX-License-Identifier: GPL-2.0-or-later
>  
> +#include <linux/dma-resv.h>
> +
>  #include <drm/drm_atomic_state_helper.h>
> +#include <drm/drm_atomic_uapi.h>
> +#include <drm/drm_gem.h>
>  #include <drm/drm_gem_atomic_helper.h>
>  #include <drm/drm_gem_framebuffer_helper.h>
>  #include <drm/drm_simple_kms_helper.h>
> @@ -12,10 +16,69 @@
>   *
>   * The GEM atomic helpers library implements generic atomic-commit
>   * functions for drivers that use GEM objects. Currently, it provides
> - * plane state and framebuffer BO mappings for planes with shadow
> - * buffers.
> + * synchronization helpers, and plane state and framebuffer BO mappings
> + * for planes with shadow buffers.
> + */
> +
> +/*
> + * Plane Helpers
>   */
>  
> +/**
> + * drm_gem_prepare_fb() - Prepare a GEM backed framebuffer
> + * @plane: Plane
> + * @state: Plane state the fence will be attached to
> + *
> + * This function extracts the exclusive fence from &drm_gem_object.resv and
> + * attaches it to plane state for the atomic helper to wait on. This is
> + * necessary to correctly implement implicit synchronization for any buffers
> + * shared as a struct &dma_buf. This function can be used as the
> + * &drm_plane_helper_funcs.prepare_fb callback.
> + *
> + * There is no need for &drm_plane_helper_funcs.cleanup_fb hook for simple
> + * GEM based framebuffer drivers which have their buffers always pinned in
> + * memory.
> + *
> + * See drm_atomic_set_fence_for_plane() for a discussion of implicit and
> + * explicit fencing in atomic modeset updates.
> + */
> +int drm_gem_prepare_fb(struct drm_plane *plane, struct drm_plane_state *state)
> +{
> +	struct drm_gem_object *obj;
> +	struct dma_fence *fence;
> +
> +	if (!state->fb)
> +		return 0;
> +
> +	obj = drm_gem_fb_get_obj(state->fb, 0);
> +	fence = dma_resv_get_excl_rcu(obj->resv);
> +	drm_atomic_set_fence_for_plane(state, fence);
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL_GPL(drm_gem_prepare_fb);
> +
> +/**
> + * drm_gem_simple_display_pipe_prepare_fb - prepare_fb helper for &drm_simple_display_pipe
> + * @pipe: Simple display pipe
> + * @plane_state: Plane state
> + *
> + * This function uses drm_gem_prepare_fb() to extract the exclusive fence
> + * from &drm_gem_object.resv and attaches it to plane state for the atomic
> + * helper to wait on. This is necessary to correctly implement implicit
> + * synchronization for any buffers shared as a struct &dma_buf. Drivers can use
> + * this as their &drm_simple_display_pipe_funcs.prepare_fb callback.
> + *
> + * See drm_atomic_set_fence_for_plane() for a discussion of implicit and
> + * explicit fencing in atomic modeset updates.
> + */
> +int drm_gem_simple_display_pipe_prepare_fb(struct drm_simple_display_pipe *pipe,
> +					   struct drm_plane_state *plane_state)
> +{
> +	return drm_gem_prepare_fb(&pipe->plane, plane_state);
> +}
> +EXPORT_SYMBOL(drm_gem_simple_display_pipe_prepare_fb);
> +
>  /*
>   * Shadow-buffered Planes
>   */
> @@ -74,7 +137,7 @@ static int drm_gem_prepare_shadow_fb(struct drm_plane *plane, struct drm_plane_s
>  	if (!fb)
>  		return 0;
>  
> -	ret = drm_gem_fb_prepare_fb(plane, plane_state);
> +	ret = drm_gem_prepare_fb(plane, plane_state);
>  	if (ret)
>  		return ret;
>  
> diff --git a/drivers/gpu/drm/drm_gem_framebuffer_helper.c b/drivers/gpu/drm/drm_gem_framebuffer_helper.c
> index 109d11fb4cd4..5ed2067cebb6 100644
> --- a/drivers/gpu/drm/drm_gem_framebuffer_helper.c
> +++ b/drivers/gpu/drm/drm_gem_framebuffer_helper.c
> @@ -5,13 +5,8 @@
>   * Copyright (C) 2017 Noralf Trønnes
>   */
>  
> -#include <linux/dma-buf.h>
> -#include <linux/dma-fence.h>
> -#include <linux/dma-resv.h>
>  #include <linux/slab.h>
>  
> -#include <drm/drm_atomic.h>
> -#include <drm/drm_atomic_uapi.h>
>  #include <drm/drm_damage_helper.h>
>  #include <drm/drm_fb_helper.h>
>  #include <drm/drm_fourcc.h>
> @@ -19,7 +14,6 @@
>  #include <drm/drm_gem.h>
>  #include <drm/drm_gem_framebuffer_helper.h>
>  #include <drm/drm_modeset_helper.h>
> -#include <drm/drm_simple_kms_helper.h>
>  
>  #define AFBC_HEADER_SIZE		16
>  #define AFBC_TH_LAYOUT_ALIGNMENT	8
> @@ -432,60 +426,3 @@ int drm_gem_fb_afbc_init(struct drm_device *dev,
>  	return 0;
>  }
>  EXPORT_SYMBOL_GPL(drm_gem_fb_afbc_init);
> -
> -/**
> - * drm_gem_fb_prepare_fb() - Prepare a GEM backed framebuffer
> - * @plane: Plane
> - * @state: Plane state the fence will be attached to
> - *
> - * This function extracts the exclusive fence from &drm_gem_object.resv and
> - * attaches it to plane state for the atomic helper to wait on. This is
> - * necessary to correctly implement implicit synchronization for any buffers
> - * shared as a struct &dma_buf. This function can be used as the
> - * &drm_plane_helper_funcs.prepare_fb callback.
> - *
> - * There is no need for &drm_plane_helper_funcs.cleanup_fb hook for simple
> - * gem based framebuffer drivers which have their buffers always pinned in
> - * memory.
> - *
> - * See drm_atomic_set_fence_for_plane() for a discussion of implicit and
> - * explicit fencing in atomic modeset updates.
> - */
> -int drm_gem_fb_prepare_fb(struct drm_plane *plane,
> -			  struct drm_plane_state *state)
> -{
> -	struct drm_gem_object *obj;
> -	struct dma_fence *fence;
> -
> -	if (!state->fb)
> -		return 0;
> -
> -	obj = drm_gem_fb_get_obj(state->fb, 0);
> -	fence = dma_resv_get_excl_rcu(obj->resv);
> -	drm_atomic_set_fence_for_plane(state, fence);
> -
> -	return 0;
> -}
> -EXPORT_SYMBOL_GPL(drm_gem_fb_prepare_fb);
> -
> -/**
> - * drm_gem_fb_simple_display_pipe_prepare_fb - prepare_fb helper for
> - *     &drm_simple_display_pipe
> - * @pipe: Simple display pipe
> - * @plane_state: Plane state
> - *
> - * This function uses drm_gem_fb_prepare_fb() to extract the exclusive fence
> - * from &drm_gem_object.resv and attaches it to plane state for the atomic
> - * helper to wait on. This is necessary to correctly implement implicit
> - * synchronization for any buffers shared as a struct &dma_buf. Drivers can use
> - * this as their &drm_simple_display_pipe_funcs.prepare_fb callback.
> - *
> - * See drm_atomic_set_fence_for_plane() for a discussion of implicit and
> - * explicit fencing in atomic modeset updates.
> - */
> -int drm_gem_fb_simple_display_pipe_prepare_fb(struct drm_simple_display_pipe *pipe,
> -					      struct drm_plane_state *plane_state)
> -{
> -	return drm_gem_fb_prepare_fb(&pipe->plane, plane_state);
> -}
> -EXPORT_SYMBOL(drm_gem_fb_simple_display_pipe_prepare_fb);
> diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
> index 48d4b59d3145..2071ec637df8 100644
> --- a/drivers/gpu/drm/drm_gem_vram_helper.c
> +++ b/drivers/gpu/drm/drm_gem_vram_helper.c
> @@ -8,7 +8,7 @@
>  #include <drm/drm_drv.h>
>  #include <drm/drm_file.h>
>  #include <drm/drm_framebuffer.h>
> -#include <drm/drm_gem_framebuffer_helper.h>
> +#include <drm/drm_gem_atomic_helper.h>
>  #include <drm/drm_gem_ttm_helper.h>
>  #include <drm/drm_gem_vram_helper.h>
>  #include <drm/drm_managed.h>
> @@ -720,7 +720,7 @@ drm_gem_vram_plane_helper_prepare_fb(struct drm_plane *plane,
>  			goto err_drm_gem_vram_unpin;
>  	}
>  
> -	ret = drm_gem_fb_prepare_fb(plane, new_state);
> +	ret = drm_gem_prepare_fb(plane, new_state);
>  	if (ret)
>  		goto err_drm_gem_vram_unpin;
>  
> diff --git a/drivers/gpu/drm/imx/dcss/dcss-plane.c b/drivers/gpu/drm/imx/dcss/dcss-plane.c
> index 03ba88f7f995..092e98fe0cfd 100644
> --- a/drivers/gpu/drm/imx/dcss/dcss-plane.c
> +++ b/drivers/gpu/drm/imx/dcss/dcss-plane.c
> @@ -6,7 +6,7 @@
>  #include <drm/drm_atomic.h>
>  #include <drm/drm_atomic_helper.h>
>  #include <drm/drm_fb_cma_helper.h>
> -#include <drm/drm_gem_framebuffer_helper.h>
> +#include <drm/drm_gem_atomic_helper.h>
>  #include <drm/drm_gem_cma_helper.h>
>  
>  #include "dcss-dev.h"
> @@ -355,7 +355,7 @@ static void dcss_plane_atomic_disable(struct drm_plane *plane,
>  }
>  
>  static const struct drm_plane_helper_funcs dcss_plane_helper_funcs = {
> -	.prepare_fb = drm_gem_fb_prepare_fb,
> +	.prepare_fb = drm_gem_prepare_fb,
>  	.atomic_check = dcss_plane_atomic_check,
>  	.atomic_update = dcss_plane_atomic_update,
>  	.atomic_disable = dcss_plane_atomic_disable,
> diff --git a/drivers/gpu/drm/imx/ipuv3-plane.c b/drivers/gpu/drm/imx/ipuv3-plane.c
> index 075508051b5f..0b6d81c4fa77 100644
> --- a/drivers/gpu/drm/imx/ipuv3-plane.c
> +++ b/drivers/gpu/drm/imx/ipuv3-plane.c
> @@ -9,8 +9,8 @@
>  #include <drm/drm_atomic_helper.h>
>  #include <drm/drm_fb_cma_helper.h>
>  #include <drm/drm_fourcc.h>
> +#include <drm/drm_gem_atomic_helper.h>
>  #include <drm/drm_gem_cma_helper.h>
> -#include <drm/drm_gem_framebuffer_helper.h>
>  #include <drm/drm_managed.h>
>  #include <drm/drm_plane_helper.h>
>  
> @@ -704,7 +704,7 @@ static void ipu_plane_atomic_update(struct drm_plane *plane,
>  }
>  
>  static const struct drm_plane_helper_funcs ipu_plane_helper_funcs = {
> -	.prepare_fb = drm_gem_fb_prepare_fb,
> +	.prepare_fb = drm_gem_prepare_fb,
>  	.atomic_check = ipu_plane_atomic_check,
>  	.atomic_disable = ipu_plane_atomic_disable,
>  	.atomic_update = ipu_plane_atomic_update,
> diff --git a/drivers/gpu/drm/ingenic/ingenic-drm-drv.c b/drivers/gpu/drm/ingenic/ingenic-drm-drv.c
> index 7bb31fbee29d..1ca02de60895 100644
> --- a/drivers/gpu/drm/ingenic/ingenic-drm-drv.c
> +++ b/drivers/gpu/drm/ingenic/ingenic-drm-drv.c
> @@ -28,6 +28,7 @@
>  #include <drm/drm_fb_cma_helper.h>
>  #include <drm/drm_fb_helper.h>
>  #include <drm/drm_fourcc.h>
> +#include <drm/drm_gem_atomic_helper.h>
>  #include <drm/drm_gem_framebuffer_helper.h>
>  #include <drm/drm_irq.h>
>  #include <drm/drm_managed.h>
> @@ -780,7 +781,7 @@ static const struct drm_plane_helper_funcs ingenic_drm_plane_helper_funcs = {
>  	.atomic_update		= ingenic_drm_plane_atomic_update,
>  	.atomic_check		= ingenic_drm_plane_atomic_check,
>  	.atomic_disable		= ingenic_drm_plane_atomic_disable,
> -	.prepare_fb		= drm_gem_fb_prepare_fb,
> +	.prepare_fb		= drm_gem_prepare_fb,
>  };
>  
>  static const struct drm_crtc_helper_funcs ingenic_drm_crtc_helper_funcs = {
> diff --git a/drivers/gpu/drm/ingenic/ingenic-ipu.c b/drivers/gpu/drm/ingenic/ingenic-ipu.c
> index e52777ef85fd..1b9b5de6b67c 100644
> --- a/drivers/gpu/drm/ingenic/ingenic-ipu.c
> +++ b/drivers/gpu/drm/ingenic/ingenic-ipu.c
> @@ -23,7 +23,7 @@
>  #include <drm/drm_drv.h>
>  #include <drm/drm_fb_cma_helper.h>
>  #include <drm/drm_fourcc.h>
> -#include <drm/drm_gem_framebuffer_helper.h>
> +#include <drm/drm_gem_atomic_helper.h>
>  #include <drm/drm_plane.h>
>  #include <drm/drm_plane_helper.h>
>  #include <drm/drm_property.h>
> @@ -608,7 +608,7 @@ static const struct drm_plane_helper_funcs ingenic_ipu_plane_helper_funcs = {
>  	.atomic_update		= ingenic_ipu_plane_atomic_update,
>  	.atomic_check		= ingenic_ipu_plane_atomic_check,
>  	.atomic_disable		= ingenic_ipu_plane_atomic_disable,
> -	.prepare_fb		= drm_gem_fb_prepare_fb,
> +	.prepare_fb		= drm_gem_prepare_fb,
>  };
>  
>  static int
> diff --git a/drivers/gpu/drm/mcde/mcde_display.c b/drivers/gpu/drm/mcde/mcde_display.c
> index 7c2e0b865441..dde16ef9650a 100644
> --- a/drivers/gpu/drm/mcde/mcde_display.c
> +++ b/drivers/gpu/drm/mcde/mcde_display.c
> @@ -13,8 +13,8 @@
>  #include <drm/drm_device.h>
>  #include <drm/drm_fb_cma_helper.h>
>  #include <drm/drm_fourcc.h>
> +#include <drm/drm_gem_atomic_helper.h>
>  #include <drm/drm_gem_cma_helper.h>
> -#include <drm/drm_gem_framebuffer_helper.h>
>  #include <drm/drm_mipi_dsi.h>
>  #include <drm/drm_simple_kms_helper.h>
>  #include <drm/drm_bridge.h>
> @@ -1481,7 +1481,7 @@ static struct drm_simple_display_pipe_funcs mcde_display_funcs = {
>  	.update = mcde_display_update,
>  	.enable_vblank = mcde_display_enable_vblank,
>  	.disable_vblank = mcde_display_disable_vblank,
> -	.prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb,
> +	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
>  };
>  
>  int mcde_display_init(struct drm_device *drm)
> diff --git a/drivers/gpu/drm/mediatek/mtk_drm_plane.c b/drivers/gpu/drm/mediatek/mtk_drm_plane.c
> index 92141a19681b..64f7873e9867 100644
> --- a/drivers/gpu/drm/mediatek/mtk_drm_plane.c
> +++ b/drivers/gpu/drm/mediatek/mtk_drm_plane.c
> @@ -6,10 +6,10 @@
>  
>  #include <drm/drm_atomic.h>
>  #include <drm/drm_atomic_helper.h>
> -#include <drm/drm_fourcc.h>
>  #include <drm/drm_atomic_uapi.h>
> +#include <drm/drm_fourcc.h>
> +#include <drm/drm_gem_atomic_helper.h>
>  #include <drm/drm_plane_helper.h>
> -#include <drm/drm_gem_framebuffer_helper.h>
>  
>  #include "mtk_drm_crtc.h"
>  #include "mtk_drm_ddp_comp.h"
> @@ -216,7 +216,7 @@ static void mtk_plane_atomic_update(struct drm_plane *plane,
>  }
>  
>  static const struct drm_plane_helper_funcs mtk_plane_helper_funcs = {
> -	.prepare_fb = drm_gem_fb_prepare_fb,
> +	.prepare_fb = drm_gem_prepare_fb,
>  	.atomic_check = mtk_plane_atomic_check,
>  	.atomic_update = mtk_plane_atomic_update,
>  	.atomic_disable = mtk_plane_atomic_disable,
> diff --git a/drivers/gpu/drm/meson/meson_overlay.c b/drivers/gpu/drm/meson/meson_overlay.c
> index 1ffbbecafa22..0ee2132a990f 100644
> --- a/drivers/gpu/drm/meson/meson_overlay.c
> +++ b/drivers/gpu/drm/meson/meson_overlay.c
> @@ -10,11 +10,11 @@
>  #include <drm/drm_atomic.h>
>  #include <drm/drm_atomic_helper.h>
>  #include <drm/drm_device.h>
> +#include <drm/drm_fb_cma_helper.h>
>  #include <drm/drm_fourcc.h>
> -#include <drm/drm_plane_helper.h>
> +#include <drm/drm_gem_atomic_helper.h>
>  #include <drm/drm_gem_cma_helper.h>
> -#include <drm/drm_fb_cma_helper.h>
> -#include <drm/drm_gem_framebuffer_helper.h>
> +#include <drm/drm_plane_helper.h>
>  
>  #include "meson_overlay.h"
>  #include "meson_registers.h"
> @@ -742,7 +742,7 @@ static const struct drm_plane_helper_funcs meson_overlay_helper_funcs = {
>  	.atomic_check	= meson_overlay_atomic_check,
>  	.atomic_disable	= meson_overlay_atomic_disable,
>  	.atomic_update	= meson_overlay_atomic_update,
> -	.prepare_fb	= drm_gem_fb_prepare_fb,
> +	.prepare_fb	= drm_gem_prepare_fb,
>  };
>  
>  static bool meson_overlay_format_mod_supported(struct drm_plane *plane,
> diff --git a/drivers/gpu/drm/meson/meson_plane.c b/drivers/gpu/drm/meson/meson_plane.c
> index 35338ed18209..24d64c9675ff 100644
> --- a/drivers/gpu/drm/meson/meson_plane.c
> +++ b/drivers/gpu/drm/meson/meson_plane.c
> @@ -16,8 +16,8 @@
>  #include <drm/drm_device.h>
>  #include <drm/drm_fb_cma_helper.h>
>  #include <drm/drm_fourcc.h>
> +#include <drm/drm_gem_atomic_helper.h>
>  #include <drm/drm_gem_cma_helper.h>
> -#include <drm/drm_gem_framebuffer_helper.h>
>  #include <drm/drm_plane_helper.h>
>  
>  #include "meson_plane.h"
> @@ -417,7 +417,7 @@ static const struct drm_plane_helper_funcs meson_plane_helper_funcs = {
>  	.atomic_check	= meson_plane_atomic_check,
>  	.atomic_disable	= meson_plane_atomic_disable,
>  	.atomic_update	= meson_plane_atomic_update,
> -	.prepare_fb	= drm_gem_fb_prepare_fb,
> +	.prepare_fb	= drm_gem_prepare_fb,
>  };
>  
>  static bool meson_plane_format_mod_supported(struct drm_plane *plane,
> diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
> index bc0231a50132..3e9f9f3dd679 100644
> --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
> +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
> @@ -13,7 +13,7 @@
>  #include <drm/drm_atomic_uapi.h>
>  #include <drm/drm_damage_helper.h>
>  #include <drm/drm_file.h>
> -#include <drm/drm_gem_framebuffer_helper.h>
> +#include <drm/drm_gem_atomic_helper.h>
>  
>  #include "msm_drv.h"
>  #include "dpu_kms.h"
> @@ -892,7 +892,7 @@ static int dpu_plane_prepare_fb(struct drm_plane *plane,
>  	 *       we can use msm_atomic_prepare_fb() instead of doing the
>  	 *       implicit fence and fb prepare by hand here.
>  	 */
> -	drm_gem_fb_prepare_fb(plane, new_state);
> +	drm_gem_prepare_fb(plane, new_state);
>  
>  	if (pstate->aspace) {
>  		ret = msm_framebuffer_prepare(new_state->fb,
> diff --git a/drivers/gpu/drm/msm/msm_atomic.c b/drivers/gpu/drm/msm/msm_atomic.c
> index 6a326761dc4a..03a113eb6571 100644
> --- a/drivers/gpu/drm/msm/msm_atomic.c
> +++ b/drivers/gpu/drm/msm/msm_atomic.c
> @@ -5,7 +5,7 @@
>   */
>  
>  #include <drm/drm_atomic_uapi.h>
> -#include <drm/drm_gem_framebuffer_helper.h>
> +#include <drm/drm_gem_atomic_helper.h>
>  #include <drm/drm_vblank.h>
>  
>  #include "msm_atomic_trace.h"
> @@ -22,7 +22,7 @@ int msm_atomic_prepare_fb(struct drm_plane *plane,
>  	if (!new_state->fb)
>  		return 0;
>  
> -	drm_gem_fb_prepare_fb(plane, new_state);
> +	drm_gem_prepare_fb(plane, new_state);
>  
>  	return msm_framebuffer_prepare(new_state->fb, kms->aspace);
>  }
> diff --git a/drivers/gpu/drm/mxsfb/mxsfb_kms.c b/drivers/gpu/drm/mxsfb/mxsfb_kms.c
> index 3e1bb0aefb87..33188dea886d 100644
> --- a/drivers/gpu/drm/mxsfb/mxsfb_kms.c
> +++ b/drivers/gpu/drm/mxsfb/mxsfb_kms.c
> @@ -21,8 +21,8 @@
>  #include <drm/drm_encoder.h>
>  #include <drm/drm_fb_cma_helper.h>
>  #include <drm/drm_fourcc.h>
> +#include <drm/drm_gem_atomic_helper.h>
>  #include <drm/drm_gem_cma_helper.h>
> -#include <drm/drm_gem_framebuffer_helper.h>
>  #include <drm/drm_plane.h>
>  #include <drm/drm_plane_helper.h>
>  #include <drm/drm_vblank.h>
> @@ -495,13 +495,13 @@ static bool mxsfb_format_mod_supported(struct drm_plane *plane,
>  }
>  
>  static const struct drm_plane_helper_funcs mxsfb_plane_primary_helper_funcs = {
> -	.prepare_fb = drm_gem_fb_prepare_fb,
> +	.prepare_fb = drm_gem_prepare_fb,
>  	.atomic_check = mxsfb_plane_atomic_check,
>  	.atomic_update = mxsfb_plane_primary_atomic_update,
>  };
>  
>  static const struct drm_plane_helper_funcs mxsfb_plane_overlay_helper_funcs = {
> -	.prepare_fb = drm_gem_fb_prepare_fb,
> +	.prepare_fb = drm_gem_prepare_fb,
>  	.atomic_check = mxsfb_plane_atomic_check,
>  	.atomic_update = mxsfb_plane_overlay_atomic_update,
>  };
> diff --git a/drivers/gpu/drm/pl111/pl111_display.c b/drivers/gpu/drm/pl111/pl111_display.c
> index 69c02e7c82b7..6fd7f13f1aca 100644
> --- a/drivers/gpu/drm/pl111/pl111_display.c
> +++ b/drivers/gpu/drm/pl111/pl111_display.c
> @@ -17,8 +17,8 @@
>  
>  #include <drm/drm_fb_cma_helper.h>
>  #include <drm/drm_fourcc.h>
> +#include <drm/drm_gem_atomic_helper.h>
>  #include <drm/drm_gem_cma_helper.h>
> -#include <drm/drm_gem_framebuffer_helper.h>
>  #include <drm/drm_vblank.h>
>  
>  #include "pl111_drm.h"
> @@ -440,7 +440,7 @@ static struct drm_simple_display_pipe_funcs pl111_display_funcs = {
>  	.enable = pl111_display_enable,
>  	.disable = pl111_display_disable,
>  	.update = pl111_display_update,
> -	.prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb,
> +	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
>  };
>  
>  static int pl111_clk_div_choose_div(struct clk_hw *hw, unsigned long rate,
> diff --git a/drivers/gpu/drm/rcar-du/rcar_du_vsp.c b/drivers/gpu/drm/rcar-du/rcar_du_vsp.c
> index 53221d8473c1..964fdaee7c7d 100644
> --- a/drivers/gpu/drm/rcar-du/rcar_du_vsp.c
> +++ b/drivers/gpu/drm/rcar-du/rcar_du_vsp.c
> @@ -11,8 +11,8 @@
>  #include <drm/drm_crtc.h>
>  #include <drm/drm_fb_cma_helper.h>
>  #include <drm/drm_fourcc.h>
> +#include <drm/drm_gem_atomic_helper.h>
>  #include <drm/drm_gem_cma_helper.h>
> -#include <drm/drm_gem_framebuffer_helper.h>
>  #include <drm/drm_managed.h>
>  #include <drm/drm_plane_helper.h>
>  #include <drm/drm_vblank.h>
> @@ -236,7 +236,7 @@ static int rcar_du_vsp_plane_prepare_fb(struct drm_plane *plane,
>  	if (ret < 0)
>  		return ret;
>  
> -	return drm_gem_fb_prepare_fb(plane, state);
> +	return drm_gem_prepare_fb(plane, state);
>  }
>  
>  void rcar_du_vsp_unmap_fb(struct rcar_du_vsp *vsp, struct drm_framebuffer *fb,
> diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
> index 8d15cabdcb02..45577de18b49 100644
> --- a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
> +++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
> @@ -23,6 +23,7 @@
>  #include <drm/drm_crtc.h>
>  #include <drm/drm_flip_work.h>
>  #include <drm/drm_fourcc.h>
> +#include <drm/drm_gem_atomic_helper.h>
>  #include <drm/drm_gem_framebuffer_helper.h>
>  #include <drm/drm_plane_helper.h>
>  #include <drm/drm_probe_helper.h>
> @@ -1096,7 +1097,7 @@ static const struct drm_plane_helper_funcs plane_helper_funcs = {
>  	.atomic_disable = vop_plane_atomic_disable,
>  	.atomic_async_check = vop_plane_atomic_async_check,
>  	.atomic_async_update = vop_plane_atomic_async_update,
> -	.prepare_fb = drm_gem_fb_prepare_fb,
> +	.prepare_fb = drm_gem_prepare_fb,
>  };
>  
>  static const struct drm_plane_funcs vop_plane_funcs = {
> diff --git a/drivers/gpu/drm/stm/ltdc.c b/drivers/gpu/drm/stm/ltdc.c
> index 7812094f93d6..73522c6ba3eb 100644
> --- a/drivers/gpu/drm/stm/ltdc.c
> +++ b/drivers/gpu/drm/stm/ltdc.c
> @@ -26,8 +26,8 @@
>  #include <drm/drm_device.h>
>  #include <drm/drm_fb_cma_helper.h>
>  #include <drm/drm_fourcc.h>
> +#include <drm/drm_gem_atomic_helper.h>
>  #include <drm/drm_gem_cma_helper.h>
> -#include <drm/drm_gem_framebuffer_helper.h>
>  #include <drm/drm_of.h>
>  #include <drm/drm_plane_helper.h>
>  #include <drm/drm_probe_helper.h>
> @@ -911,7 +911,7 @@ static const struct drm_plane_funcs ltdc_plane_funcs = {
>  };
>  
>  static const struct drm_plane_helper_funcs ltdc_plane_helper_funcs = {
> -	.prepare_fb = drm_gem_fb_prepare_fb,
> +	.prepare_fb = drm_gem_prepare_fb,
>  	.atomic_check = ltdc_plane_atomic_check,
>  	.atomic_update = ltdc_plane_atomic_update,
>  	.atomic_disable = ltdc_plane_atomic_disable,
> diff --git a/drivers/gpu/drm/sun4i/sun4i_layer.c b/drivers/gpu/drm/sun4i/sun4i_layer.c
> index acfbfd4463a1..68da94b7c35d 100644
> --- a/drivers/gpu/drm/sun4i/sun4i_layer.c
> +++ b/drivers/gpu/drm/sun4i/sun4i_layer.c
> @@ -7,7 +7,7 @@
>   */
>  
>  #include <drm/drm_atomic_helper.h>
> -#include <drm/drm_gem_framebuffer_helper.h>
> +#include <drm/drm_gem_atomic_helper.h>
>  #include <drm/drm_plane_helper.h>
>  
>  #include "sun4i_backend.h"
> @@ -122,7 +122,7 @@ static bool sun4i_layer_format_mod_supported(struct drm_plane *plane,
>  }
>  
>  static const struct drm_plane_helper_funcs sun4i_backend_layer_helper_funcs = {
> -	.prepare_fb	= drm_gem_fb_prepare_fb,
> +	.prepare_fb	= drm_gem_prepare_fb,
>  	.atomic_disable	= sun4i_backend_layer_atomic_disable,
>  	.atomic_update	= sun4i_backend_layer_atomic_update,
>  };
> diff --git a/drivers/gpu/drm/sun4i/sun8i_ui_layer.c b/drivers/gpu/drm/sun4i/sun8i_ui_layer.c
> index 816ad4ce8996..95654c153279 100644
> --- a/drivers/gpu/drm/sun4i/sun8i_ui_layer.c
> +++ b/drivers/gpu/drm/sun4i/sun8i_ui_layer.c
> @@ -14,8 +14,8 @@
>  #include <drm/drm_crtc.h>
>  #include <drm/drm_fb_cma_helper.h>
>  #include <drm/drm_fourcc.h>
> +#include <drm/drm_gem_atomic_helper.h>
>  #include <drm/drm_gem_cma_helper.h>
> -#include <drm/drm_gem_framebuffer_helper.h>
>  #include <drm/drm_plane_helper.h>
>  #include <drm/drm_probe_helper.h>
>  
> @@ -299,7 +299,7 @@ static void sun8i_ui_layer_atomic_update(struct drm_plane *plane,
>  }
>  
>  static const struct drm_plane_helper_funcs sun8i_ui_layer_helper_funcs = {
> -	.prepare_fb	= drm_gem_fb_prepare_fb,
> +	.prepare_fb	= drm_gem_prepare_fb,
>  	.atomic_check	= sun8i_ui_layer_atomic_check,
>  	.atomic_disable	= sun8i_ui_layer_atomic_disable,
>  	.atomic_update	= sun8i_ui_layer_atomic_update,
> diff --git a/drivers/gpu/drm/sun4i/sun8i_vi_layer.c b/drivers/gpu/drm/sun4i/sun8i_vi_layer.c
> index 8cc294a9969d..4005884dbce4 100644
> --- a/drivers/gpu/drm/sun4i/sun8i_vi_layer.c
> +++ b/drivers/gpu/drm/sun4i/sun8i_vi_layer.c
> @@ -7,8 +7,8 @@
>  #include <drm/drm_atomic_helper.h>
>  #include <drm/drm_crtc.h>
>  #include <drm/drm_fb_cma_helper.h>
> +#include <drm/drm_gem_atomic_helper.h>
>  #include <drm/drm_gem_cma_helper.h>
> -#include <drm/drm_gem_framebuffer_helper.h>
>  #include <drm/drm_plane_helper.h>
>  #include <drm/drm_probe_helper.h>
>  
> @@ -402,7 +402,7 @@ static void sun8i_vi_layer_atomic_update(struct drm_plane *plane,
>  }
>  
>  static const struct drm_plane_helper_funcs sun8i_vi_layer_helper_funcs = {
> -	.prepare_fb	= drm_gem_fb_prepare_fb,
> +	.prepare_fb	= drm_gem_prepare_fb,
>  	.atomic_check	= sun8i_vi_layer_atomic_check,
>  	.atomic_disable	= sun8i_vi_layer_atomic_disable,
>  	.atomic_update	= sun8i_vi_layer_atomic_update,
> diff --git a/drivers/gpu/drm/tegra/plane.c b/drivers/gpu/drm/tegra/plane.c
> index 539d14935728..ec86a8d060aa 100644
> --- a/drivers/gpu/drm/tegra/plane.c
> +++ b/drivers/gpu/drm/tegra/plane.c
> @@ -8,7 +8,7 @@
>  #include <drm/drm_atomic.h>
>  #include <drm/drm_atomic_helper.h>
>  #include <drm/drm_fourcc.h>
> -#include <drm/drm_gem_framebuffer_helper.h>
> +#include <drm/drm_gem_atomic_helper.h>
>  #include <drm/drm_plane_helper.h>
>  
>  #include "dc.h"
> @@ -198,7 +198,7 @@ int tegra_plane_prepare_fb(struct drm_plane *plane,
>  	if (!state->fb)
>  		return 0;
>  
> -	drm_gem_fb_prepare_fb(plane, state);
> +	drm_gem_prepare_fb(plane, state);
>  
>  	return tegra_dc_pin(dc, to_tegra_plane_state(state));
>  }
> diff --git a/drivers/gpu/drm/tidss/tidss_plane.c b/drivers/gpu/drm/tidss/tidss_plane.c
> index 35067ae674ea..d39baa66e876 100644
> --- a/drivers/gpu/drm/tidss/tidss_plane.c
> +++ b/drivers/gpu/drm/tidss/tidss_plane.c
> @@ -10,7 +10,7 @@
>  #include <drm/drm_crtc_helper.h>
>  #include <drm/drm_fourcc.h>
>  #include <drm/drm_fb_cma_helper.h>
> -#include <drm/drm_gem_framebuffer_helper.h>
> +#include <drm/drm_gem_atomic_helper.h>
>  
>  #include "tidss_crtc.h"
>  #include "tidss_dispc.h"
> @@ -151,7 +151,7 @@ static void drm_plane_destroy(struct drm_plane *plane)
>  }
>  
>  static const struct drm_plane_helper_funcs tidss_plane_helper_funcs = {
> -	.prepare_fb = drm_gem_fb_prepare_fb,
> +	.prepare_fb = drm_gem_prepare_fb,
>  	.atomic_check = tidss_plane_atomic_check,
>  	.atomic_update = tidss_plane_atomic_update,
>  	.atomic_disable = tidss_plane_atomic_disable,
> diff --git a/drivers/gpu/drm/tiny/hx8357d.c b/drivers/gpu/drm/tiny/hx8357d.c
> index c6525cd02bc2..3e2c2868a363 100644
> --- a/drivers/gpu/drm/tiny/hx8357d.c
> +++ b/drivers/gpu/drm/tiny/hx8357d.c
> @@ -19,8 +19,8 @@
>  #include <drm/drm_atomic_helper.h>
>  #include <drm/drm_drv.h>
>  #include <drm/drm_fb_helper.h>
> +#include <drm/drm_gem_atomic_helper.h>
>  #include <drm/drm_gem_cma_helper.h>
> -#include <drm/drm_gem_framebuffer_helper.h>
>  #include <drm/drm_managed.h>
>  #include <drm/drm_mipi_dbi.h>
>  #include <drm/drm_modeset_helper.h>
> @@ -184,7 +184,7 @@ static const struct drm_simple_display_pipe_funcs hx8357d_pipe_funcs = {
>  	.enable = yx240qv29_enable,
>  	.disable = mipi_dbi_pipe_disable,
>  	.update = mipi_dbi_pipe_update,
> -	.prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb,
> +	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
>  };
>  
>  static const struct drm_display_mode yx350hv15_mode = {
> diff --git a/drivers/gpu/drm/tiny/ili9225.c b/drivers/gpu/drm/tiny/ili9225.c
> index 8e98962db5a2..6b87df19eec1 100644
> --- a/drivers/gpu/drm/tiny/ili9225.c
> +++ b/drivers/gpu/drm/tiny/ili9225.c
> @@ -22,8 +22,8 @@
>  #include <drm/drm_fb_cma_helper.h>
>  #include <drm/drm_fb_helper.h>
>  #include <drm/drm_fourcc.h>
> +#include <drm/drm_gem_atomic_helper.h>
>  #include <drm/drm_gem_cma_helper.h>
> -#include <drm/drm_gem_framebuffer_helper.h>
>  #include <drm/drm_managed.h>
>  #include <drm/drm_mipi_dbi.h>
>  #include <drm/drm_rect.h>
> @@ -328,7 +328,7 @@ static const struct drm_simple_display_pipe_funcs ili9225_pipe_funcs = {
>  	.enable		= ili9225_pipe_enable,
>  	.disable	= ili9225_pipe_disable,
>  	.update		= ili9225_pipe_update,
> -	.prepare_fb	= drm_gem_fb_simple_display_pipe_prepare_fb,
> +	.prepare_fb	= drm_gem_simple_display_pipe_prepare_fb,
>  };
>  
>  static const struct drm_display_mode ili9225_mode = {
> diff --git a/drivers/gpu/drm/tiny/ili9341.c b/drivers/gpu/drm/tiny/ili9341.c
> index 6ce97f0698eb..a97f3f70e4a6 100644
> --- a/drivers/gpu/drm/tiny/ili9341.c
> +++ b/drivers/gpu/drm/tiny/ili9341.c
> @@ -18,8 +18,8 @@
>  #include <drm/drm_atomic_helper.h>
>  #include <drm/drm_drv.h>
>  #include <drm/drm_fb_helper.h>
> +#include <drm/drm_gem_atomic_helper.h>
>  #include <drm/drm_gem_cma_helper.h>
> -#include <drm/drm_gem_framebuffer_helper.h>
>  #include <drm/drm_managed.h>
>  #include <drm/drm_mipi_dbi.h>
>  #include <drm/drm_modeset_helper.h>
> @@ -140,7 +140,7 @@ static const struct drm_simple_display_pipe_funcs ili9341_pipe_funcs = {
>  	.enable = yx240qv29_enable,
>  	.disable = mipi_dbi_pipe_disable,
>  	.update = mipi_dbi_pipe_update,
> -	.prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb,
> +	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
>  };
>  
>  static const struct drm_display_mode yx240qv29_mode = {
> diff --git a/drivers/gpu/drm/tiny/ili9486.c b/drivers/gpu/drm/tiny/ili9486.c
> index d7ce40eb166a..6422a7f67079 100644
> --- a/drivers/gpu/drm/tiny/ili9486.c
> +++ b/drivers/gpu/drm/tiny/ili9486.c
> @@ -17,8 +17,8 @@
>  #include <drm/drm_atomic_helper.h>
>  #include <drm/drm_drv.h>
>  #include <drm/drm_fb_helper.h>
> +#include <drm/drm_gem_atomic_helper.h>
>  #include <drm/drm_gem_cma_helper.h>
> -#include <drm/drm_gem_framebuffer_helper.h>
>  #include <drm/drm_managed.h>
>  #include <drm/drm_mipi_dbi.h>
>  #include <drm/drm_modeset_helper.h>
> @@ -153,7 +153,7 @@ static const struct drm_simple_display_pipe_funcs waveshare_pipe_funcs = {
>  	.enable = waveshare_enable,
>  	.disable = mipi_dbi_pipe_disable,
>  	.update = mipi_dbi_pipe_update,
> -	.prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb,
> +	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
>  };
>  
>  static const struct drm_display_mode waveshare_mode = {
> diff --git a/drivers/gpu/drm/tiny/mi0283qt.c b/drivers/gpu/drm/tiny/mi0283qt.c
> index ff77f983f803..dc76fe53aa72 100644
> --- a/drivers/gpu/drm/tiny/mi0283qt.c
> +++ b/drivers/gpu/drm/tiny/mi0283qt.c
> @@ -16,8 +16,8 @@
>  #include <drm/drm_atomic_helper.h>
>  #include <drm/drm_drv.h>
>  #include <drm/drm_fb_helper.h>
> +#include <drm/drm_gem_atomic_helper.h>
>  #include <drm/drm_gem_cma_helper.h>
> -#include <drm/drm_gem_framebuffer_helper.h>
>  #include <drm/drm_managed.h>
>  #include <drm/drm_mipi_dbi.h>
>  #include <drm/drm_modeset_helper.h>
> @@ -144,7 +144,7 @@ static const struct drm_simple_display_pipe_funcs mi0283qt_pipe_funcs = {
>  	.enable = mi0283qt_enable,
>  	.disable = mipi_dbi_pipe_disable,
>  	.update = mipi_dbi_pipe_update,
> -	.prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb,
> +	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
>  };
>  
>  static const struct drm_display_mode mi0283qt_mode = {
> diff --git a/drivers/gpu/drm/tiny/repaper.c b/drivers/gpu/drm/tiny/repaper.c
> index 11c602fc9897..2cee07a2e00b 100644
> --- a/drivers/gpu/drm/tiny/repaper.c
> +++ b/drivers/gpu/drm/tiny/repaper.c
> @@ -29,6 +29,7 @@
>  #include <drm/drm_fb_cma_helper.h>
>  #include <drm/drm_fb_helper.h>
>  #include <drm/drm_format_helper.h>
> +#include <drm/drm_gem_atomic_helper.h>
>  #include <drm/drm_gem_cma_helper.h>
>  #include <drm/drm_gem_framebuffer_helper.h>
>  #include <drm/drm_managed.h>
> @@ -860,7 +861,7 @@ static const struct drm_simple_display_pipe_funcs repaper_pipe_funcs = {
>  	.enable = repaper_pipe_enable,
>  	.disable = repaper_pipe_disable,
>  	.update = repaper_pipe_update,
> -	.prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb,
> +	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
>  };
>  
>  static int repaper_connector_get_modes(struct drm_connector *connector)
> diff --git a/drivers/gpu/drm/tiny/st7586.c b/drivers/gpu/drm/tiny/st7586.c
> index ff5cf60f4bd7..7d216fe9267f 100644
> --- a/drivers/gpu/drm/tiny/st7586.c
> +++ b/drivers/gpu/drm/tiny/st7586.c
> @@ -19,8 +19,8 @@
>  #include <drm/drm_fb_cma_helper.h>
>  #include <drm/drm_fb_helper.h>
>  #include <drm/drm_format_helper.h>
> +#include <drm/drm_gem_atomic_helper.h>
>  #include <drm/drm_gem_cma_helper.h>
> -#include <drm/drm_gem_framebuffer_helper.h>
>  #include <drm/drm_managed.h>
>  #include <drm/drm_mipi_dbi.h>
>  #include <drm/drm_rect.h>
> @@ -268,7 +268,7 @@ static const struct drm_simple_display_pipe_funcs st7586_pipe_funcs = {
>  	.enable		= st7586_pipe_enable,
>  	.disable	= st7586_pipe_disable,
>  	.update		= st7586_pipe_update,
> -	.prepare_fb	= drm_gem_fb_simple_display_pipe_prepare_fb,
> +	.prepare_fb	= drm_gem_simple_display_pipe_prepare_fb,
>  };
>  
>  static const struct drm_display_mode st7586_mode = {
> diff --git a/drivers/gpu/drm/tiny/st7735r.c b/drivers/gpu/drm/tiny/st7735r.c
> index faaba0a033ea..df8872d62cdd 100644
> --- a/drivers/gpu/drm/tiny/st7735r.c
> +++ b/drivers/gpu/drm/tiny/st7735r.c
> @@ -19,8 +19,8 @@
>  #include <drm/drm_atomic_helper.h>
>  #include <drm/drm_drv.h>
>  #include <drm/drm_fb_helper.h>
> +#include <drm/drm_gem_atomic_helper.h>
>  #include <drm/drm_gem_cma_helper.h>
> -#include <drm/drm_gem_framebuffer_helper.h>
>  #include <drm/drm_managed.h>
>  #include <drm/drm_mipi_dbi.h>
>  
> @@ -136,7 +136,7 @@ static const struct drm_simple_display_pipe_funcs st7735r_pipe_funcs = {
>  	.enable		= st7735r_pipe_enable,
>  	.disable	= mipi_dbi_pipe_disable,
>  	.update		= mipi_dbi_pipe_update,
> -	.prepare_fb	= drm_gem_fb_simple_display_pipe_prepare_fb,
> +	.prepare_fb	= drm_gem_simple_display_pipe_prepare_fb,
>  };
>  
>  static const struct st7735r_cfg jd_t18003_t01_cfg = {
> diff --git a/drivers/gpu/drm/tve200/tve200_display.c b/drivers/gpu/drm/tve200/tve200_display.c
> index cb0e837d3dba..50e1fb71869f 100644
> --- a/drivers/gpu/drm/tve200/tve200_display.c
> +++ b/drivers/gpu/drm/tve200/tve200_display.c
> @@ -17,8 +17,8 @@
>  
>  #include <drm/drm_fb_cma_helper.h>
>  #include <drm/drm_fourcc.h>
> +#include <drm/drm_gem_atomic_helper.h>
>  #include <drm/drm_gem_cma_helper.h>
> -#include <drm/drm_gem_framebuffer_helper.h>
>  #include <drm/drm_panel.h>
>  #include <drm/drm_vblank.h>
>  
> @@ -316,7 +316,7 @@ static const struct drm_simple_display_pipe_funcs tve200_display_funcs = {
>  	.enable = tve200_display_enable,
>  	.disable = tve200_display_disable,
>  	.update = tve200_display_update,
> -	.prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb,
> +	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
>  	.enable_vblank = tve200_display_enable_vblank,
>  	.disable_vblank = tve200_display_disable_vblank,
>  };
> diff --git a/drivers/gpu/drm/vc4/vc4_plane.c b/drivers/gpu/drm/vc4/vc4_plane.c
> index 7322169c0682..a65e980078f3 100644
> --- a/drivers/gpu/drm/vc4/vc4_plane.c
> +++ b/drivers/gpu/drm/vc4/vc4_plane.c
> @@ -20,7 +20,7 @@
>  #include <drm/drm_atomic_uapi.h>
>  #include <drm/drm_fb_cma_helper.h>
>  #include <drm/drm_fourcc.h>
> -#include <drm/drm_gem_framebuffer_helper.h>
> +#include <drm/drm_gem_atomic_helper.h>
>  #include <drm/drm_plane_helper.h>
>  
>  #include "uapi/drm/vc4_drm.h"
> @@ -1250,7 +1250,7 @@ static int vc4_prepare_fb(struct drm_plane *plane,
>  
>  	bo = to_vc4_bo(&drm_fb_cma_get_gem_obj(state->fb, 0)->base);
>  
> -	drm_gem_fb_prepare_fb(plane, state);
> +	drm_gem_prepare_fb(plane, state);
>  
>  	if (plane->state->fb == state->fb)
>  		return 0;
> diff --git a/drivers/gpu/drm/vkms/vkms_plane.c b/drivers/gpu/drm/vkms/vkms_plane.c
> index 0824327cc860..e3fd8cd1f3f1 100644
> --- a/drivers/gpu/drm/vkms/vkms_plane.c
> +++ b/drivers/gpu/drm/vkms/vkms_plane.c
> @@ -5,6 +5,7 @@
>  #include <drm/drm_atomic.h>
>  #include <drm/drm_atomic_helper.h>
>  #include <drm/drm_fourcc.h>
> +#include <drm/drm_gem_atomic_helper.h>
>  #include <drm/drm_gem_framebuffer_helper.h>
>  #include <drm/drm_plane_helper.h>
>  #include <drm/drm_gem_shmem_helper.h>
> @@ -159,7 +160,7 @@ static int vkms_prepare_fb(struct drm_plane *plane,
>  	if (ret)
>  		DRM_ERROR("vmap failed: %d\n", ret);
>  
> -	return drm_gem_fb_prepare_fb(plane, state);
> +	return drm_gem_prepare_fb(plane, state);
>  }
>  
>  static void vkms_cleanup_fb(struct drm_plane *plane,
> diff --git a/drivers/gpu/drm/xen/xen_drm_front_kms.c b/drivers/gpu/drm/xen/xen_drm_front_kms.c
> index ef11b1e4de39..371202ebe900 100644
> --- a/drivers/gpu/drm/xen/xen_drm_front_kms.c
> +++ b/drivers/gpu/drm/xen/xen_drm_front_kms.c
> @@ -13,6 +13,7 @@
>  #include <drm/drm_drv.h>
>  #include <drm/drm_fourcc.h>
>  #include <drm/drm_gem.h>
> +#include <drm/drm_gem_atomic_helper.h>
>  #include <drm/drm_gem_framebuffer_helper.h>
>  #include <drm/drm_probe_helper.h>
>  #include <drm/drm_vblank.h>
> @@ -301,7 +302,7 @@ static const struct drm_simple_display_pipe_funcs display_funcs = {
>  	.mode_valid = display_mode_valid,
>  	.enable = display_enable,
>  	.disable = display_disable,
> -	.prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb,
> +	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
>  	.check = display_check,
>  	.update = display_update,
>  };
> diff --git a/include/drm/drm_gem_atomic_helper.h b/include/drm/drm_gem_atomic_helper.h
> index 08b96ccea325..91e73d23fea8 100644
> --- a/include/drm/drm_gem_atomic_helper.h
> +++ b/include/drm/drm_gem_atomic_helper.h
> @@ -9,6 +9,14 @@
>  
>  struct drm_simple_display_pipe;
>  
> +/*
> + * Plane Helpers
> + */
> +
> +int drm_gem_prepare_fb(struct drm_plane *plane, struct drm_plane_state *state);
> +int drm_gem_simple_display_pipe_prepare_fb(struct drm_simple_display_pipe *pipe,
> +					   struct drm_plane_state *plane_state);
> +
>  /*
>   * Helpers for planes with shadow buffers
>   */
> diff --git a/include/drm/drm_gem_framebuffer_helper.h b/include/drm/drm_gem_framebuffer_helper.h
> index 6b013154911d..495d174d9989 100644
> --- a/include/drm/drm_gem_framebuffer_helper.h
> +++ b/include/drm/drm_gem_framebuffer_helper.h
> @@ -9,9 +9,6 @@ struct drm_framebuffer;
>  struct drm_framebuffer_funcs;
>  struct drm_gem_object;
>  struct drm_mode_fb_cmd2;
> -struct drm_plane;
> -struct drm_plane_state;
> -struct drm_simple_display_pipe;
>  
>  #define AFBC_VENDOR_AND_TYPE_MASK	GENMASK_ULL(63, 52)
> 
> @@ -44,8 +41,4 @@ int drm_gem_fb_afbc_init(struct drm_device *dev,
>  			 const struct drm_mode_fb_cmd2 *mode_cmd,
>  			 struct drm_afbc_framebuffer *afbc_fb);
>  
> -int drm_gem_fb_prepare_fb(struct drm_plane *plane,
> -			  struct drm_plane_state *state);
> -int drm_gem_fb_simple_display_pipe_prepare_fb(struct drm_simple_display_pipe *pipe,
> -					      struct drm_plane_state *plane_state);
>  #endif
> diff --git a/include/drm/drm_modeset_helper_vtables.h b/include/drm/drm_modeset_helper_vtables.h
> index eb706342861d..8d41d3734153 100644
> --- a/include/drm/drm_modeset_helper_vtables.h
> +++ b/include/drm/drm_modeset_helper_vtables.h
> @@ -1179,7 +1179,7 @@ struct drm_plane_helper_funcs {
>  	 * members in the plane structure.
>  	 *
>  	 * Drivers which always have their buffers pinned should use
> -	 * drm_gem_fb_prepare_fb() for this hook.
> +	 * drm_gem_prepare_fb() for this hook.
>  	 *
>  	 * The helpers will call @cleanup_fb with matching arguments for every
>  	 * successful call to this hook.
> diff --git a/include/drm/drm_plane.h b/include/drm/drm_plane.h
> index 95ab14a4336a..be08b6b1fde0 100644
> --- a/include/drm/drm_plane.h
> +++ b/include/drm/drm_plane.h
> @@ -79,8 +79,8 @@ struct drm_plane_state {
>  	 * preserved.
>  	 *
>  	 * Drivers should store any implicit fence in this from their
> -	 * &drm_plane_helper_funcs.prepare_fb callback. See drm_gem_fb_prepare_fb()
> -	 * and drm_gem_fb_simple_display_pipe_prepare_fb() for suitable helpers.
> +	 * &drm_plane_helper_funcs.prepare_fb callback. See drm_gem_prepare_fb()
> +	 * and drm_gem_simple_display_pipe_prepare_fb() for suitable helpers.
>  	 */
>  	struct dma_fence *fence;
>  
> diff --git a/include/drm/drm_simple_kms_helper.h b/include/drm/drm_simple_kms_helper.h
> index 40b34573249f..ef9944e9c5fc 100644
> --- a/include/drm/drm_simple_kms_helper.h
> +++ b/include/drm/drm_simple_kms_helper.h
> @@ -117,7 +117,7 @@ struct drm_simple_display_pipe_funcs {
>  	 * more details.
>  	 *
>  	 * Drivers which always have their buffers pinned should use
> -	 * drm_gem_fb_simple_display_pipe_prepare_fb() for this hook.
> +	 * drm_gem_simple_display_pipe_prepare_fb() for this hook.
>  	 */
>  	int (*prepare_fb)(struct drm_simple_display_pipe *pipe,
>  			  struct drm_plane_state *plane_state);
> -- 
> 2.30.0
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 13:12:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 13:12:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83597.155946 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9pIQ-0004FS-1v; Wed, 10 Feb 2021 13:12:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83597.155946; Wed, 10 Feb 2021 13:12:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9pIP-0004FL-UV; Wed, 10 Feb 2021 13:12:41 +0000
Received: by outflank-mailman (input) for mailman id 83597;
 Wed, 10 Feb 2021 13:12:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jO30=HM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l9pIO-0004FF-Mf
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 13:12:40 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9e4f410c-a7cb-4b50-bd62-2d3f9bd97ed6;
 Wed, 10 Feb 2021 13:12:40 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3F51BAC43;
 Wed, 10 Feb 2021 13:12:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9e4f410c-a7cb-4b50-bd62-2d3f9bd97ed6
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612962759; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=KodbBIrLLtVpPc0B70pP4gcnK9uIr1vJtWfSMiDJuP8=;
	b=vRf8U6sFq+aFMtSZhNjg0g2liM3oGkOrVIRehY5FpwVBeTGDaUFM0lpbsTH+V4HmkeSOlI
	Ih+VTe0zhmpt30ZsrSTY95Q2+hUw0Rgl2hMTteFo8IK7MLIG01eqA9GQ97p1SbRB/DcXjN
	cacrWZAyFPpHelDgTgxhwIyK0FOYWMM=
Subject: Re: [for-4.15][PATCH v2 1/5] xen/x86: p2m: Don't map the special
 pages in the IOMMU page-tables
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, hongyxia@amazon.co.uk,
 iwj@xenproject.org, Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Julien Grall <julien@xen.org>
References: <20210209152816.15792-1-julien@xen.org>
 <20210209152816.15792-2-julien@xen.org> <YCOZbNly7YCSNtHY@Air-de-Roger>
 <5bf0a2de-3f0e-8860-7bc7-f667437aa3a7@suse.com>
 <YCPE0byWKlf/uOFT@Air-de-Roger>
 <65797b03-7bd8-92e9-b6c7-e8eccde9f8ba@suse.com>
 <e1c7c616-0941-b577-5842-a51374030798@xen.org>
 <71c4150a-0b81-cdc3-b752-814f58cb5ca4@suse.com>
 <df760d78-a439-db0a-4b88-813b002f0a64@xen.org>
 <YCPJXe1L1SCXoL7a@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <bb242b17-01f3-6312-b563-f82abc5d300a@suse.com>
Date: Wed, 10 Feb 2021 14:12:38 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <YCPJXe1L1SCXoL7a@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 10.02.2021 12:54, Roger Pau Monné wrote:
> On Wed, Feb 10, 2021 at 11:48:40AM +0000, Julien Grall wrote:
>> It feels wrong to me to set up a per-domain mapping when initializing the
>> first vCPU.
>>
>> But I was under the impression that there is a plan to remove
>> XEN_DOMCTL_max_vcpus. So it would only buy a bit of time...
> 
> I was also under that impression. We could set up the lapic access page
> at vlapic_init for the BSP (which is part of XEN_DOMCTL_max_vcpus
> ATM).
> 
> But then I think there should be some kind of check to prevent
> populating either the CPU or the IOMMU page tables at the domain creation
> hypercall, and so the logic to free CPU page tables on failure could
> be removed.

I can spot paging_final_teardown() on an error path there, but I
don't suppose that's what you mean? I guess I'm not looking in
the right place (there are quite a few after all) ...

Jan


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 13:26:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 13:26:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83600.155958 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9pWA-0005Hi-9y; Wed, 10 Feb 2021 13:26:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83600.155958; Wed, 10 Feb 2021 13:26:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9pWA-0005Hb-6Q; Wed, 10 Feb 2021 13:26:54 +0000
Received: by outflank-mailman (input) for mailman id 83600;
 Wed, 10 Feb 2021 13:26:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9pW9-0005HT-EZ; Wed, 10 Feb 2021 13:26:53 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9pW9-0001bV-6j; Wed, 10 Feb 2021 13:26:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9pW8-0006bJ-Ug; Wed, 10 Feb 2021 13:26:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l9pW8-000516-UE; Wed, 10 Feb 2021 13:26:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=FI3V4Ag77LGZqxdLIzIJdkoKKlUsE9Q3TrmDM1FaFqg=; b=YHw4F8U/YVgkpdl7iuLji2Aiu2
	PzX18k2odiryNgZMdeIu9awG9vpQwumlTAuM05jXSwDsTPgePgy3gY+ky2HOPyvo30WTlfyXFEbxO
	Agpc+iKsKmkYrQ53BnkM5kM/YkaDJf9tOeAqEM33/JZO6gB1njvwH+aUVD16hbHMFafs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159206-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 159206: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=20077d035224c6f3b3eef8b111b8b842635f2c40
X-Osstest-Versions-That:
    xen=687121f8a0e7c1ea1c4fa3d056637e5819342f14
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 10 Feb 2021 13:26:52 +0000

flight 159206 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159206/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 159191
 build-arm64-xsm               6 xen-build                fail REGR. vs. 159191
 build-armhf                   6 xen-build                fail REGR. vs. 159191

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  20077d035224c6f3b3eef8b111b8b842635f2c40
baseline version:
 xen                  687121f8a0e7c1ea1c4fa3d056637e5819342f14

Last test of basis   159191  2021-02-09 23:00:29 Z    0 days
Testing same since   159206  2021-02-10 12:01:51 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 20077d035224c6f3b3eef8b111b8b842635f2c40
Author: Roger Pau Monne <roger.pau@citrix.com>
Date:   Fri Feb 5 12:53:27 2021 +0100

    tools/configure: add bison as mandatory
    
    Bison is now mandatory when the pvshim build is enabled in order to
    generate the Kconfig.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>
    Reviewed-by: Ian Jackson <iwj@xenproject.org>

commit c9b0242ad44a739e0f4c72b67fd4bcf4b042f800
Author: Roger Pau Monne <roger.pau@citrix.com>
Date:   Thu Feb 4 10:38:33 2021 +0100

    autoconf: check endian.h include path
    
    Introduce an autoconf macro to check for the include path of certain
    headers that can be different between OSes.
    
    Use such macro to find the correct path for the endian.h header, and
    modify the users of endian.h to use the output of such check.
    
    Suggested-by: Ian Jackson <iwj@xenproject.org>
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Ian Jackson <iwj@xenproject.org>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 13:54:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 13:54:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83607.155982 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9pwT-00080N-Hr; Wed, 10 Feb 2021 13:54:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83607.155982; Wed, 10 Feb 2021 13:54:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9pwT-00080G-Eq; Wed, 10 Feb 2021 13:54:05 +0000
Received: by outflank-mailman (input) for mailman id 83607;
 Wed, 10 Feb 2021 13:54:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=08uA=HM=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l9pwS-00080B-Fo
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 13:54:04 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6bc84721-3a5a-4feb-9ee3-b785dbd0bddf;
 Wed, 10 Feb 2021 13:54:03 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6bc84721-3a5a-4feb-9ee3-b785dbd0bddf
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612965243;
  h=from:to:cc:subject:date:message-id:mime-version;
  bh=RrarCaL9pi9yussd3vp/1m7VSdSMaXQNb/eS0BJptNk=;
  b=AslQ+jX0vX+fv2+3Sad/o7PfsxQNGgCxmkRcgNxepAdOue4pkOYlyonZ
   t0LdLLYsp70resbdvgMFC+pWjTlDed0oN/HnBUc6ITSQRb04j5z71AbY1
   CQyxRC4synSRni8c/AelKOSPjb/MtL3RjpYctzFAGvabvKKE1f4VCsTiT
   I=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 37147615
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,168,1610427600"; 
   d="scan'208";a="37147615"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Anthony PERARD
	<anthony.perard@citrix.com>
Subject: [PATCH for-4.15] tools/libxl: Document where the magic MAC numbers come from
Date: Wed, 10 Feb 2021 13:53:35 +0000
Message-ID: <20210210135335.29180-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain

Matches the comment in the xl-network-configuration manpage.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Ian Jackson <iwj@xenproject.org>
CC: Wei Liu <wl@xen.org>
CC: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/libs/light/libxl_nic.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/tools/libs/light/libxl_nic.c b/tools/libs/light/libxl_nic.c
index 144e9e23e1..0b45469dca 100644
--- a/tools/libs/light/libxl_nic.c
+++ b/tools/libs/light/libxl_nic.c
@@ -73,6 +73,7 @@ static int libxl__device_nic_setdefault(libxl__gc *gc, uint32_t domid,
         libxl_uuid_generate(&uuid);
         r = libxl_uuid_bytearray(&uuid);
 
+        /* Generate a random MAC address, with Xen's OUI (00:16:3e) */
         nic->mac[0] = 0x00;
         nic->mac[1] = 0x16;
         nic->mac[2] = 0x3e;
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed Feb 10 13:54:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 13:54:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83608.155994 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9pww-00085D-RA; Wed, 10 Feb 2021 13:54:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83608.155994; Wed, 10 Feb 2021 13:54:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9pww-000856-Nm; Wed, 10 Feb 2021 13:54:34 +0000
Received: by outflank-mailman (input) for mailman id 83608;
 Wed, 10 Feb 2021 13:54:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jO30=HM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l9pwv-00084y-Gq
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 13:54:33 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d961a3b7-a242-4333-8dcf-34e00102ec04;
 Wed, 10 Feb 2021 13:54:32 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 482CCACBF;
 Wed, 10 Feb 2021 13:54:31 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d961a3b7-a242-4333-8dcf-34e00102ec04
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612965271; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=7oRBAvtl+Q8sPQoGsn5VbLvJ8fGKg4gthf78AOEzXqc=;
	b=nmQYRLnydaDmCXpsHAu8TWA4ondGQKrLbg3m48BMZ9hiFT1pj1ajQPj8twoW9z5/BvaW+K
	YYetU0ke3RZl5i7g8IQXQV8HyQcNwF+x0xUVvRq/NbJQTNPJIiR0IBOIJyPEoPAagDkAEb
	V8bVSHNnTzfWltlZizJpDtq4YbuBDUQ=
Subject: Re: [PATCH] x86emul: fix SYSENTER/SYSCALL switching into 64-bit mode
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <7ce15e4b-8bf1-0cfd-ca9e-5f6eba12cac1@suse.com>
 <d66cce4b-6563-4857-30be-5889788ca6c8@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <2eed5630-3e23-3005-245e-989893fc8476@suse.com>
Date: Wed, 10 Feb 2021 14:54:30 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <d66cce4b-6563-4857-30be-5889788ca6c8@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 10.02.2021 13:28, Andrew Cooper wrote:
> On 10/02/2021 09:57, Jan Beulich wrote:
>> When invoked by compat mode, mode_64bit() will be false at the start of
>> emulation. The logic after complete_insn, however, needs to consider the
>> mode switched into, in particular to avoid truncating RIP.
>>
>> Inspired by / paralleling and extending Linux commit 943dea8af21b ("KVM:
>> x86: Update emulator context mode if SYSENTER xfers to 64-bit mode").
>>
>> While there, tighten a related assertion in x86_emulate_wrapper() - we
>> want to be sure to not switch into an impossible mode when the code gets
>> built for 32-bit only (as is possible for the test harness).
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> In principle we could drop SYSENTER's ctxt->lma dependency when setting
>> _regs.r(ip). I wasn't certain whether leaving it as is serves as kind of
>> documentation ...
>>
>> --- a/xen/arch/x86/x86_emulate/x86_emulate.c
>> +++ b/xen/arch/x86/x86_emulate/x86_emulate.c
>> @@ -6127,6 +6127,10 @@ x86_emulate(
>>               (rc = ops->write_segment(x86_seg_ss, &sreg, ctxt)) )
>>              goto done;
>>  
>> +        if ( ctxt->lma )
>> +            /* In particular mode_64bit() needs to return true from here on. */
>> +            ctxt->addr_size = ctxt->sp_size = 64;
> 
> I think this is fine as presented, but don't we want the logical
> opposite for SYSRET/SYSEXIT ?
> 
> We truncate rip suitably already,

This is why I left them alone, i.e. ...

> but don't know what other checks may appear in the future.

... I thought we would deal with this if and when such checks
appear. As considered in the post-description remark, we could
drop the conditional part of sysexit's setting of _regs.r(ip),
and _then_ we would indeed need a corresponding change there,
so that the truncation happens at complete_insn:.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 14:02:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 14:02:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83614.156018 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9q4y-0000nm-Ox; Wed, 10 Feb 2021 14:02:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83614.156018; Wed, 10 Feb 2021 14:02:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9q4y-0000nf-Lw; Wed, 10 Feb 2021 14:02:52 +0000
Received: by outflank-mailman (input) for mailman id 83614;
 Wed, 10 Feb 2021 14:02:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=08uA=HM=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l9q4x-0000na-74
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 14:02:51 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a613883b-5e9b-4a3d-88a1-5605cbd3e4a0;
 Wed, 10 Feb 2021 14:02:49 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a613883b-5e9b-4a3d-88a1-5605cbd3e4a0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612965769;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=LKxuVFzCwI30/ve4LOGavldIqq3W4dzJmAlCSiGbxiE=;
  b=LA0NF4x46FanwYi+d++6XnDnemhN02+ageRxy7vJ/DrhSBYyVLFi8zku
   5VM3BdyMhLXf8CMlrANLv8Rj3FXlNROVD5RnrL8GcAwlBjWP72tdkIdOi
   fXYH21EscsfwryNZ5VzZ8wzsaHPikdyUf5iXTmLqiEqwLGGmr0NTsl6jk
   Q=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 36893409
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,168,1610427600"; 
   d="scan'208";a="36893409"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=l4Ij01F3MSSL0Ae8LKMI8e6XWowYh/tIDGh66N2HhPR6PSpIkhMqWFiNa4hQDm2pMZqcosuI3oM5p4cwiekcBjTOE5sQikonVnhTHosTdem9GlxRf2u13+LaH+IqZHkngS26INGfueKC498S9NSBPBMrx2xZKD+gOz8EPgieTpNVSf+tSIXj6LY21i8acK8FT+SKY9TITa8z4TwkS+ux1JjpufRPzhSNSOWYjdu9D+V5kGgqfskZYd9IU7bfdtTezaieQ3kGYuO/GCjOrf6d9l2ILtgPb0Rkynu/y0K0xgDaYFbxoqa99zL2gYWDzOe+BCaxg2JP/aNT/GGpAd0VVw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=EFHZcH+jQ9Eve3do2WdbBwrkpQEDzESAD3lY3yYCHQs=;
 b=DXlO5xzweYvQQYo5ZeBfftU6SFwxRjKQhzZX/DTV/f4K2PPPOECbbwhJGSk9+S0+MHrODq6q6zXN8k1VH+62yssTyorJHfHJMHPcf2SaErazFxreeS994bj5PozZ107nl+mTgCjYEIOu0q+AM7e/2jBjQTOoP1fESiiTFKwQxNUinhyLvLmnb3x9egojv9SWLqiNnbU/H2IhpmrQfFDW0i5aEbLANwILDJ8UtucsrYai1AypJcj9drww4Qf5QVa1XUEPD3OLnrcCeRcqsFM6vDmgPqO02HtESbeyN/EbzaYuQbRkPGFIowrWMFRhAmB5BptvvT9ovO9bNSi0TxnyZg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=EFHZcH+jQ9Eve3do2WdbBwrkpQEDzESAD3lY3yYCHQs=;
 b=U/nVNsULgjVM/fJyZoPG6O7I9m80xPsp6Azt214t9ecW93srHD2cU+66WKfngSk0Qogz5e/hfbiiVlDlnspVjylqiOWLSBcZnQI6YL3nO1U7l/2IJSbazN8itEn9VA9/qopYfU0LW16yGFjWFTHW10A2+XWXvwaLXRdnBiDqIGw=
Subject: Re: [PATCH] x86emul: fix SYSENTER/SYSCALL switching into 64-bit mode
To: Jan Beulich <jbeulich@suse.com>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
References: <7ce15e4b-8bf1-0cfd-ca9e-5f6eba12cac1@suse.com>
 <d66cce4b-6563-4857-30be-5889788ca6c8@citrix.com>
 <2eed5630-3e23-3005-245e-989893fc8476@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <bf31a01e-4a32-5938-c158-38923100355d@citrix.com>
Date: Wed, 10 Feb 2021 14:02:39 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
In-Reply-To: <2eed5630-3e23-3005-245e-989893fc8476@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0112.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:192::9) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 7071a748-b69b-4362-b67c-08d8cdcc8a0c
X-MS-TrafficTypeDiagnostic: BY5PR03MB4966:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BY5PR03MB49666395C0892181EFBBD388BA8D9@BY5PR03MB4966.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 7071a748-b69b-4362-b67c-08d8cdcc8a0c
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Feb 2021 14:02:44.7315
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: LzkkckhfscUa/E61gDr1POvZDAe7YeD/3CkvM9uRc5KkaOpwdqRzDZLjkkfgk+3otSOr7Gw08IbILLoKFxKXgjvMuCz6uXsKI6YmeBbYSJg=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR03MB4966
X-OriginatorOrg: citrix.com

On 10/02/2021 13:54, Jan Beulich wrote:
> On 10.02.2021 13:28, Andrew Cooper wrote:
>> On 10/02/2021 09:57, Jan Beulich wrote:
>>> When invoked by compat mode, mode_64bit() will be false at the start of
>>> emulation. The logic after complete_insn, however, needs to consider the
>>> mode switched into, in particular to avoid truncating RIP.
>>>
>>> Inspired by / paralleling and extending Linux commit 943dea8af21b ("KVM:
>>> x86: Update emulator context mode if SYSENTER xfers to 64-bit mode").
>>>
>>> While there, tighten a related assertion in x86_emulate_wrapper() - we
>>> want to be sure to not switch into an impossible mode when the code gets
>>> built for 32-bit only (as is possible for the test harness).
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>> ---
>>> In principle we could drop SYSENTER's ctxt->lma dependency when setting
>>> _regs.r(ip). I wasn't certain whether leaving it as is serves as kind of
>>> documentation ...
>>>
>>> --- a/xen/arch/x86/x86_emulate/x86_emulate.c
>>> +++ b/xen/arch/x86/x86_emulate/x86_emulate.c
>>> @@ -6127,6 +6127,10 @@ x86_emulate(
>>>               (rc = ops->write_segment(x86_seg_ss, &sreg, ctxt)) )
>>>              goto done;
>>>  
>>> +        if ( ctxt->lma )
>>> +            /* In particular mode_64bit() needs to return true from here on. */
>>> +            ctxt->addr_size = ctxt->sp_size = 64;
>> I think this is fine as presented, but don't we want the logical
>> opposite for SYSRET/SYSEXIT ?
>>
>> We truncate rip suitably already,
> This is why I left them alone, i.e. ...
>
>> but don't know what other checks may appear in the future.
> ... I thought we would deal with this if and when such checks
> would appear.

Ok.  Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

> Just like considered in the post-description
> remark, we could drop the conditional part from sysexit's
> setting of _regs.r(ip), and _then_ we would indeed need a
> respective change there, for the truncation to happen at
> complete_insn:.

I think it would look odd changing just rip and not rsp truncation.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 14:12:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 14:12:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83616.156030 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9qEJ-0001nU-SA; Wed, 10 Feb 2021 14:12:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83616.156030; Wed, 10 Feb 2021 14:12:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9qEJ-0001nN-OE; Wed, 10 Feb 2021 14:12:31 +0000
Received: by outflank-mailman (input) for mailman id 83616;
 Wed, 10 Feb 2021 14:12:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GlAE=HM=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1l9qEI-0001nI-LP
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 14:12:30 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8020f454-efcc-415e-b105-bea3c8b30dc1;
 Wed, 10 Feb 2021 14:12:27 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 05B9EAC43;
 Wed, 10 Feb 2021 14:12:26 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8020f454-efcc-415e-b105-bea3c8b30dc1
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH] drm/gem: Move drm_gem_fb_prepare_fb() to GEM atomic
 helpers
To: Daniel Vetter <daniel@ffwll.ch>
Cc: linaro-mm-sig@lists.linaro.org, linux-aspeed@lists.ozlabs.org,
 airlied@linux.ie, linux-arm-msm@vger.kernel.org, linux-mips@vger.kernel.org,
 dri-devel@lists.freedesktop.org, linux-renesas-soc@vger.kernel.org,
 linux-rockchip@lists.infradead.org, linux-mediatek@lists.infradead.org,
 xen-devel@lists.xenproject.org, linux-tegra@vger.kernel.org,
 linux-amlogic@lists.infradead.org, freedreno@lists.freedesktop.org,
 linux-stm32@st-md-mailman.stormreply.com,
 linux-arm-kernel@lists.infradead.org, linux-media@vger.kernel.org
References: <20210209102913.6372-1-tzimmermann@suse.de>
 <YCPbK2zFJMzwHim/@phenom.ffwll.local>
From: Thomas Zimmermann <tzimmermann@suse.de>
Message-ID: <d2187a9c-6afd-5610-34e5-a8b0553a3724@suse.de>
Date: Wed, 10 Feb 2021 15:12:24 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <YCPbK2zFJMzwHim/@phenom.ffwll.local>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="brhUVpTaaIsU1Pu416E7yYfIywuZWVv5l"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--brhUVpTaaIsU1Pu416E7yYfIywuZWVv5l
Content-Type: multipart/mixed; boundary="zVTdTEY2ybqGG3p4xWPzRkxAvxOUdDuVf";
 protected-headers="v1"
From: Thomas Zimmermann <tzimmermann@suse.de>
To: Daniel Vetter <daniel@ffwll.ch>
Cc: linaro-mm-sig@lists.linaro.org, linux-aspeed@lists.ozlabs.org,
 airlied@linux.ie, linux-arm-msm@vger.kernel.org, linux-mips@vger.kernel.org,
 dri-devel@lists.freedesktop.org, linux-renesas-soc@vger.kernel.org,
 linux-rockchip@lists.infradead.org, linux-mediatek@lists.infradead.org,
 xen-devel@lists.xenproject.org, linux-tegra@vger.kernel.org,
 linux-amlogic@lists.infradead.org, freedreno@lists.freedesktop.org,
 linux-stm32@st-md-mailman.stormreply.com,
 linux-arm-kernel@lists.infradead.org, linux-media@vger.kernel.org
Message-ID: <d2187a9c-6afd-5610-34e5-a8b0553a3724@suse.de>
Subject: Re: [PATCH] drm/gem: Move drm_gem_fb_prepare_fb() to GEM atomic
 helpers
References: <20210209102913.6372-1-tzimmermann@suse.de>
 <YCPbK2zFJMzwHim/@phenom.ffwll.local>
In-Reply-To: <YCPbK2zFJMzwHim/@phenom.ffwll.local>

--zVTdTEY2ybqGG3p4xWPzRkxAvxOUdDuVf
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable

Hi

On 10.02.21 14:10, Daniel Vetter wrote:
> On Tue, Feb 09, 2021 at 11:29:13AM +0100, Thomas Zimmermann wrote:
>> The function drm_gem_fb_prepare_fb() is a helper for atomic modesetting,
>> but currently located next to framebuffer helpers. Move it to GEM atomic
>> helpers, rename it slightly and adapt the drivers. Same for the rsp
>> simple-pipe helper.
>>
>> Compile-tested with x86-64, aarch64 and arm. The patch is fairly large,
>> but there are no functional changes.
>>
>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>
> If we bikeshed this, I think we should also throw in the _helper_ somewhere?

Sure. How about drm_gem_plane_helper_prepare_fb()?

Best regards
Thomas

> But really I don't have an opinion on this.
> -Daniel
>
>> ---
>>   drivers/gpu/drm/aspeed/aspeed_gfx_crtc.c     |  4 +-
>>   drivers/gpu/drm/drm_gem_atomic_helper.c      | 69 +++++++++++++++++++-
>>   drivers/gpu/drm/drm_gem_framebuffer_helper.c | 63 ------------------
>>   drivers/gpu/drm/drm_gem_vram_helper.c        |  4 +-
>>   drivers/gpu/drm/imx/dcss/dcss-plane.c        |  4 +-
>>   drivers/gpu/drm/imx/ipuv3-plane.c            |  4 +-
>>   drivers/gpu/drm/ingenic/ingenic-drm-drv.c    |  3 +-
>>   drivers/gpu/drm/ingenic/ingenic-ipu.c        |  4 +-
>>   drivers/gpu/drm/mcde/mcde_display.c          |  4 +-
>>   drivers/gpu/drm/mediatek/mtk_drm_plane.c     |  6 +-
>>   drivers/gpu/drm/meson/meson_overlay.c        |  8 +--
>>   drivers/gpu/drm/meson/meson_plane.c          |  4 +-
>>   drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c    |  4 +-
>>   drivers/gpu/drm/msm/msm_atomic.c             |  4 +-
>>   drivers/gpu/drm/mxsfb/mxsfb_kms.c            |  6 +-
>>   drivers/gpu/drm/pl111/pl111_display.c        |  4 +-
>>   drivers/gpu/drm/rcar-du/rcar_du_vsp.c        |  4 +-
>>   drivers/gpu/drm/rockchip/rockchip_drm_vop.c  |  3 +-
>>   drivers/gpu/drm/stm/ltdc.c                   |  4 +-
>>   drivers/gpu/drm/sun4i/sun4i_layer.c          |  4 +-
>>   drivers/gpu/drm/sun4i/sun8i_ui_layer.c       |  4 +-
>>   drivers/gpu/drm/sun4i/sun8i_vi_layer.c       |  4 +-
>>   drivers/gpu/drm/tegra/plane.c                |  4 +-
>>   drivers/gpu/drm/tidss/tidss_plane.c          |  4 +-
>>   drivers/gpu/drm/tiny/hx8357d.c               |  4 +-
>>   drivers/gpu/drm/tiny/ili9225.c               |  4 +-
>>   drivers/gpu/drm/tiny/ili9341.c               |  4 +-
>>   drivers/gpu/drm/tiny/ili9486.c               |  4 +-
>>   drivers/gpu/drm/tiny/mi0283qt.c              |  4 +-
>>   drivers/gpu/drm/tiny/repaper.c               |  3 +-
>>   drivers/gpu/drm/tiny/st7586.c                |  4 +-
>>   drivers/gpu/drm/tiny/st7735r.c               |  4 +-
>>   drivers/gpu/drm/tve200/tve200_display.c      |  4 +-
>>   drivers/gpu/drm/vc4/vc4_plane.c              |  4 +-
>>   drivers/gpu/drm/vkms/vkms_plane.c            |  3 +-
>>   drivers/gpu/drm/xen/xen_drm_front_kms.c      |  3 +-
>>   include/drm/drm_gem_atomic_helper.h          |  8 +++
>>   include/drm/drm_gem_framebuffer_helper.h     |  6 +-
>>   include/drm/drm_modeset_helper_vtables.h     |  2 +-
>>   include/drm/drm_plane.h                      |  4 +-
>>   include/drm/drm_simple_kms_helper.h          |  2 +-
>>   41 files changed, 152 insertions(+), 141 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/aspeed/aspeed_gfx_crtc.c b/drivers/gpu/dr=
m/aspeed/aspeed_gfx_crtc.c
>> index e54686c31a90..d8f214e0be82 100644
>> --- a/drivers/gpu/drm/aspeed/aspeed_gfx_crtc.c
>> +++ b/drivers/gpu/drm/aspeed/aspeed_gfx_crtc.c
>> @@ -9,8 +9,8 @@
>>   #include <drm/drm_device.h>
>>   #include <drm/drm_fb_cma_helper.h>
>>   #include <drm/drm_fourcc.h>
>> +#include <drm/drm_gem_atomic_helper.h>
>>   #include <drm/drm_gem_cma_helper.h>
>> -#include <drm/drm_gem_framebuffer_helper.h>
>>   #include <drm/drm_panel.h>
>>   #include <drm/drm_simple_kms_helper.h>
>>   #include <drm/drm_vblank.h>
>> @@ -219,7 +219,7 @@ static const struct drm_simple_display_pipe_funcs aspeed_gfx_funcs = {
>>   	.enable		= aspeed_gfx_pipe_enable,
>>   	.disable	= aspeed_gfx_pipe_disable,
>>   	.update		= aspeed_gfx_pipe_update,
>> -	.prepare_fb	= drm_gem_fb_simple_display_pipe_prepare_fb,
>> +	.prepare_fb	= drm_gem_simple_display_pipe_prepare_fb,
>>   	.enable_vblank	= aspeed_gfx_enable_vblank,
>>   	.disable_vblank	= aspeed_gfx_disable_vblank,
>>   };
>> diff --git a/drivers/gpu/drm/drm_gem_atomic_helper.c b/drivers/gpu/drm=
/drm_gem_atomic_helper.c
>> index e27762cef360..c656b40656bf 100644
>> --- a/drivers/gpu/drm/drm_gem_atomic_helper.c
>> +++ b/drivers/gpu/drm/drm_gem_atomic_helper.c
>> @@ -1,6 +1,10 @@
>>   // SPDX-License-Identifier: GPL-2.0-or-later
>>  
>> +#include <linux/dma-resv.h>
>> +
>>   #include <drm/drm_atomic_state_helper.h>
>> +#include <drm/drm_atomic_uapi.h>
>> +#include <drm/drm_gem.h>
>>   #include <drm/drm_gem_atomic_helper.h>
>>   #include <drm/drm_gem_framebuffer_helper.h>
>>   #include <drm/drm_simple_kms_helper.h>
>> @@ -12,10 +16,69 @@
>>    *
>>    * The GEM atomic helpers library implements generic atomic-commit
>>    * functions for drivers that use GEM objects. Currently, it provides
>> - * plane state and framebuffer BO mappings for planes with shadow
>> - * buffers.
>> + * synchronization helpers, and plane state and framebuffer BO mappings
>> + * for planes with shadow buffers.
>> + */
>> +
>> +/*
>> + * Plane Helpers
>>    */
>>  
>> +/**
>> + * drm_gem_prepare_fb() - Prepare a GEM backed framebuffer
>> + * @plane: Plane
>> + * @state: Plane state the fence will be attached to
>> + *
>> + * This function extracts the exclusive fence from &drm_gem_object.resv and
>> + * attaches it to plane state for the atomic helper to wait on. This is
>> + * necessary to correctly implement implicit synchronization for any buffers
>> + * shared as a struct &dma_buf. This function can be used as the
>> + * &drm_plane_helper_funcs.prepare_fb callback.
>> + *
>> + * There is no need for &drm_plane_helper_funcs.cleanup_fb hook for simple
>> + * GEM based framebuffer drivers which have their buffers always pinned in
>> + * memory.
>> + *
>> + * See drm_atomic_set_fence_for_plane() for a discussion of implicit and
>> + * explicit fencing in atomic modeset updates.
>> + */
>> +int drm_gem_prepare_fb(struct drm_plane *plane, struct drm_plane_state *state)
>> +{
>> +	struct drm_gem_object *obj;
>> +	struct dma_fence *fence;
>> +
>> +	if (!state->fb)
>> +		return 0;
>> +
>> +	obj = drm_gem_fb_get_obj(state->fb, 0);
>> +	fence = dma_resv_get_excl_rcu(obj->resv);
>> +	drm_atomic_set_fence_for_plane(state, fence);
>> +
>> +	return 0;
>> +}
>> +EXPORT_SYMBOL_GPL(drm_gem_prepare_fb);
>> +
>> +/**
>> + * drm_gem_simple_display_pipe_prepare_fb - prepare_fb helper for &drm_simple_display_pipe
>> + * @pipe: Simple display pipe
>> + * @plane_state: Plane state
>> + *
>> + * This function uses drm_gem_prepare_fb() to extract the exclusive fence
>> + * from &drm_gem_object.resv and attaches it to plane state for the atomic
>> + * helper to wait on. This is necessary to correctly implement implicit
>> + * synchronization for any buffers shared as a struct &dma_buf. Drivers can use
>> + * this as their &drm_simple_display_pipe_funcs.prepare_fb callback.
>> + *
>> + * See drm_atomic_set_fence_for_plane() for a discussion of implicit and
>> + * explicit fencing in atomic modeset updates.
>> + */
>> +int drm_gem_simple_display_pipe_prepare_fb(struct drm_simple_display_pipe *pipe,
>> +					   struct drm_plane_state *plane_state)
>> +{
>> +	return drm_gem_prepare_fb(&pipe->plane, plane_state);
>> +}
>> +EXPORT_SYMBOL(drm_gem_simple_display_pipe_prepare_fb);
>> +
>>   /*
>>    * Shadow-buffered Planes
>>    */
>> @@ -74,7 +137,7 @@ static int drm_gem_prepare_shadow_fb(struct drm_plane *plane, struct drm_plane_s
>>   	if (!fb)
>>   		return 0;
>>  
>> -	ret = drm_gem_fb_prepare_fb(plane, plane_state);
>> +	ret = drm_gem_prepare_fb(plane, plane_state);
>>   	if (ret)
>>   		return ret;
>>  
>> diff --git a/drivers/gpu/drm/drm_gem_framebuffer_helper.c b/drivers/gp=
u/drm/drm_gem_framebuffer_helper.c
>> index 109d11fb4cd4..5ed2067cebb6 100644
>> --- a/drivers/gpu/drm/drm_gem_framebuffer_helper.c
>> +++ b/drivers/gpu/drm/drm_gem_framebuffer_helper.c
>> @@ -5,13 +5,8 @@
>>    * Copyright (C) 2017 Noralf Trønnes
>>    */
>>  
>> -#include <linux/dma-buf.h>
>> -#include <linux/dma-fence.h>
>> -#include <linux/dma-resv.h>
>>   #include <linux/slab.h>
>>  
>> -#include <drm/drm_atomic.h>
>> -#include <drm/drm_atomic_uapi.h>
>>   #include <drm/drm_damage_helper.h>
>>   #include <drm/drm_fb_helper.h>
>>   #include <drm/drm_fourcc.h>
>> @@ -19,7 +14,6 @@
>>   #include <drm/drm_gem.h>
>>   #include <drm/drm_gem_framebuffer_helper.h>
>>   #include <drm/drm_modeset_helper.h>
>> -#include <drm/drm_simple_kms_helper.h>
>>  
>>   #define AFBC_HEADER_SIZE		16
>>   #define AFBC_TH_LAYOUT_ALIGNMENT	8
>> @@ -432,60 +426,3 @@ int drm_gem_fb_afbc_init(struct drm_device *dev,
>>   	return 0;
>>   }
>>   EXPORT_SYMBOL_GPL(drm_gem_fb_afbc_init);
>> -
>> -/**
>> - * drm_gem_fb_prepare_fb() - Prepare a GEM backed framebuffer
>> - * @plane: Plane
>> - * @state: Plane state the fence will be attached to
>> - *
>> - * This function extracts the exclusive fence from &drm_gem_object.resv and
>> - * attaches it to plane state for the atomic helper to wait on. This is
>> - * necessary to correctly implement implicit synchronization for any buffers
>> - * shared as a struct &dma_buf. This function can be used as the
>> - * &drm_plane_helper_funcs.prepare_fb callback.
>> - *
>> - * There is no need for &drm_plane_helper_funcs.cleanup_fb hook for simple
>> - * gem based framebuffer drivers which have their buffers always pinned in
>> - * memory.
>> - *
>> - * See drm_atomic_set_fence_for_plane() for a discussion of implicit and
>> - * explicit fencing in atomic modeset updates.
>> - */
>> -int drm_gem_fb_prepare_fb(struct drm_plane *plane,
>> -			  struct drm_plane_state *state)
>> -{
>> -	struct drm_gem_object *obj;
>> -	struct dma_fence *fence;
>> -
>> -	if (!state->fb)
>> -		return 0;
>> -
>> -	obj = drm_gem_fb_get_obj(state->fb, 0);
>> -	fence = dma_resv_get_excl_rcu(obj->resv);
>> -	drm_atomic_set_fence_for_plane(state, fence);
>> -
>> -	return 0;
>> -}
>> -EXPORT_SYMBOL_GPL(drm_gem_fb_prepare_fb);
>> -
>> -/**
>> - * drm_gem_fb_simple_display_pipe_prepare_fb - prepare_fb helper for
>> - *     &drm_simple_display_pipe
>> - * @pipe: Simple display pipe
>> - * @plane_state: Plane state
>> - *
>> - * This function uses drm_gem_fb_prepare_fb() to extract the exclusive fence
>> - * from &drm_gem_object.resv and attaches it to plane state for the atomic
>> - * helper to wait on. This is necessary to correctly implement implicit
>> - * synchronization for any buffers shared as a struct &dma_buf. Drivers can use
>> - * this as their &drm_simple_display_pipe_funcs.prepare_fb callback.
>> - *
>> - * See drm_atomic_set_fence_for_plane() for a discussion of implicit and
>> - * explicit fencing in atomic modeset updates.
>> - */
>> -int drm_gem_fb_simple_display_pipe_prepare_fb(struct drm_simple_display_pipe *pipe,
>> -					      struct drm_plane_state *plane_state)
>> -{
>> -	return drm_gem_fb_prepare_fb(&pipe->plane, plane_state);
>> -}
>> -EXPORT_SYMBOL(drm_gem_fb_simple_display_pipe_prepare_fb);
>> diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/d=
rm_gem_vram_helper.c
>> index 48d4b59d3145..2071ec637df8 100644
>> --- a/drivers/gpu/drm/drm_gem_vram_helper.c
>> +++ b/drivers/gpu/drm/drm_gem_vram_helper.c
>> @@ -8,7 +8,7 @@
>>   #include <drm/drm_drv.h>
>>   #include <drm/drm_file.h>
>>   #include <drm/drm_framebuffer.h>
>> -#include <drm/drm_gem_framebuffer_helper.h>
>> +#include <drm/drm_gem_atomic_helper.h>
>>   #include <drm/drm_gem_ttm_helper.h>
>>   #include <drm/drm_gem_vram_helper.h>
>>   #include <drm/drm_managed.h>
>> @@ -720,7 +720,7 @@ drm_gem_vram_plane_helper_prepare_fb(struct drm_plane *plane,
>>   			goto err_drm_gem_vram_unpin;
>>   	}
>>  
>> -	ret = drm_gem_fb_prepare_fb(plane, new_state);
>> +	ret = drm_gem_prepare_fb(plane, new_state);
>>   	if (ret)
>>   		goto err_drm_gem_vram_unpin;
>>  
>> diff --git a/drivers/gpu/drm/imx/dcss/dcss-plane.c b/drivers/gpu/drm/i=
mx/dcss/dcss-plane.c
>> index 03ba88f7f995..092e98fe0cfd 100644
>> --- a/drivers/gpu/drm/imx/dcss/dcss-plane.c
>> +++ b/drivers/gpu/drm/imx/dcss/dcss-plane.c
>> @@ -6,7 +6,7 @@
>>   #include <drm/drm_atomic.h>
>>   #include <drm/drm_atomic_helper.h>
>>   #include <drm/drm_fb_cma_helper.h>
>> -#include <drm/drm_gem_framebuffer_helper.h>
>> +#include <drm/drm_gem_atomic_helper.h>
>>   #include <drm/drm_gem_cma_helper.h>
>>  
>>   #include "dcss-dev.h"
>> @@ -355,7 +355,7 @@ static void dcss_plane_atomic_disable(struct drm_plane *plane,
>>   }
>>  
>>   static const struct drm_plane_helper_funcs dcss_plane_helper_funcs = {
>> -	.prepare_fb = drm_gem_fb_prepare_fb,
>> +	.prepare_fb = drm_gem_prepare_fb,
>>   	.atomic_check = dcss_plane_atomic_check,
>>   	.atomic_update = dcss_plane_atomic_update,
>>   	.atomic_disable = dcss_plane_atomic_disable,
>> diff --git a/drivers/gpu/drm/imx/ipuv3-plane.c b/drivers/gpu/drm/imx/i=
puv3-plane.c
>> index 075508051b5f..0b6d81c4fa77 100644
>> --- a/drivers/gpu/drm/imx/ipuv3-plane.c
>> +++ b/drivers/gpu/drm/imx/ipuv3-plane.c
>> @@ -9,8 +9,8 @@
>>   #include <drm/drm_atomic_helper.h>
>>   #include <drm/drm_fb_cma_helper.h>
>>   #include <drm/drm_fourcc.h>
>> +#include <drm/drm_gem_atomic_helper.h>
>>   #include <drm/drm_gem_cma_helper.h>
>> -#include <drm/drm_gem_framebuffer_helper.h>
>>   #include <drm/drm_managed.h>
>>   #include <drm/drm_plane_helper.h>
>>  
>> @@ -704,7 +704,7 @@ static void ipu_plane_atomic_update(struct drm_plane *plane,
>>   }
>>  
>>   static const struct drm_plane_helper_funcs ipu_plane_helper_funcs = {
>> -	.prepare_fb = drm_gem_fb_prepare_fb,
>> +	.prepare_fb = drm_gem_prepare_fb,
>>   	.atomic_check = ipu_plane_atomic_check,
>>   	.atomic_disable = ipu_plane_atomic_disable,
>>   	.atomic_update = ipu_plane_atomic_update,
>> diff --git a/drivers/gpu/drm/ingenic/ingenic-drm-drv.c b/drivers/gpu/d=
rm/ingenic/ingenic-drm-drv.c
>> index 7bb31fbee29d..1ca02de60895 100644
>> --- a/drivers/gpu/drm/ingenic/ingenic-drm-drv.c
>> +++ b/drivers/gpu/drm/ingenic/ingenic-drm-drv.c
>> @@ -28,6 +28,7 @@
>>   #include <drm/drm_fb_cma_helper.h>
>>   #include <drm/drm_fb_helper.h>
>>   #include <drm/drm_fourcc.h>
>> +#include <drm/drm_gem_atomic_helper.h>
>>   #include <drm/drm_gem_framebuffer_helper.h>
>>   #include <drm/drm_irq.h>
>>   #include <drm/drm_managed.h>
>> @@ -780,7 +781,7 @@ static const struct drm_plane_helper_funcs ingenic_drm_plane_helper_funcs = {
>>   	.atomic_update		= ingenic_drm_plane_atomic_update,
>>   	.atomic_check		= ingenic_drm_plane_atomic_check,
>>   	.atomic_disable		= ingenic_drm_plane_atomic_disable,
>> -	.prepare_fb		= drm_gem_fb_prepare_fb,
>> +	.prepare_fb		= drm_gem_prepare_fb,
>>   };
>>  
>>   static const struct drm_crtc_helper_funcs ingenic_drm_crtc_helper_funcs = {
>> diff --git a/drivers/gpu/drm/ingenic/ingenic-ipu.c b/drivers/gpu/drm/i=
ngenic/ingenic-ipu.c
>> index e52777ef85fd..1b9b5de6b67c 100644
>> --- a/drivers/gpu/drm/ingenic/ingenic-ipu.c
>> +++ b/drivers/gpu/drm/ingenic/ingenic-ipu.c
>> @@ -23,7 +23,7 @@
>>   #include <drm/drm_drv.h>
>>   #include <drm/drm_fb_cma_helper.h>
>>   #include <drm/drm_fourcc.h>
>> -#include <drm/drm_gem_framebuffer_helper.h>
>> +#include <drm/drm_gem_atomic_helper.h>
>>   #include <drm/drm_plane.h>
>>   #include <drm/drm_plane_helper.h>
>>   #include <drm/drm_property.h>
>> @@ -608,7 +608,7 @@ static const struct drm_plane_helper_funcs ingenic_ipu_plane_helper_funcs = {
>>   	.atomic_update		= ingenic_ipu_plane_atomic_update,
>>   	.atomic_check		= ingenic_ipu_plane_atomic_check,
>>   	.atomic_disable		= ingenic_ipu_plane_atomic_disable,
>> -	.prepare_fb		= drm_gem_fb_prepare_fb,
>> +	.prepare_fb		= drm_gem_prepare_fb,
>>   };
>>  
>>   static int
>> diff --git a/drivers/gpu/drm/mcde/mcde_display.c b/drivers/gpu/drm/mcd=
e/mcde_display.c
>> index 7c2e0b865441..dde16ef9650a 100644
>> --- a/drivers/gpu/drm/mcde/mcde_display.c
>> +++ b/drivers/gpu/drm/mcde/mcde_display.c
>> @@ -13,8 +13,8 @@
>>   #include <drm/drm_device.h>
>>   #include <drm/drm_fb_cma_helper.h>
>>   #include <drm/drm_fourcc.h>
>> +#include <drm/drm_gem_atomic_helper.h>
>>   #include <drm/drm_gem_cma_helper.h>
>> -#include <drm/drm_gem_framebuffer_helper.h>
>>   #include <drm/drm_mipi_dsi.h>
>>   #include <drm/drm_simple_kms_helper.h>
>>   #include <drm/drm_bridge.h>
>> @@ -1481,7 +1481,7 @@ static struct drm_simple_display_pipe_funcs mcde_display_funcs = {
>>   	.update = mcde_display_update,
>>   	.enable_vblank = mcde_display_enable_vblank,
>>   	.disable_vblank = mcde_display_disable_vblank,
>> -	.prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb,
>> +	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
>>   };
>>  
>>   int mcde_display_init(struct drm_device *drm)
>> diff --git a/drivers/gpu/drm/mediatek/mtk_drm_plane.c b/drivers/gpu/dr=
m/mediatek/mtk_drm_plane.c
>> index 92141a19681b..64f7873e9867 100644
>> --- a/drivers/gpu/drm/mediatek/mtk_drm_plane.c
>> +++ b/drivers/gpu/drm/mediatek/mtk_drm_plane.c
>> @@ -6,10 +6,10 @@
>>  
>>   #include <drm/drm_atomic.h>
>>   #include <drm/drm_atomic_helper.h>
>> -#include <drm/drm_fourcc.h>
>>   #include <drm/drm_atomic_uapi.h>
>> +#include <drm/drm_fourcc.h>
>> +#include <drm/drm_gem_atomic_helper.h>
>>   #include <drm/drm_plane_helper.h>
>> -#include <drm/drm_gem_framebuffer_helper.h>
>>  
>>   #include "mtk_drm_crtc.h"
>>   #include "mtk_drm_ddp_comp.h"
>> @@ -216,7 +216,7 @@ static void mtk_plane_atomic_update(struct drm_plane *plane,
>>   }
>>  
>>   static const struct drm_plane_helper_funcs mtk_plane_helper_funcs = {
>> -	.prepare_fb = drm_gem_fb_prepare_fb,
>> +	.prepare_fb = drm_gem_prepare_fb,
>>   	.atomic_check = mtk_plane_atomic_check,
>>   	.atomic_update = mtk_plane_atomic_update,
>>   	.atomic_disable = mtk_plane_atomic_disable,
>> diff --git a/drivers/gpu/drm/meson/meson_overlay.c b/drivers/gpu/drm/m=
eson/meson_overlay.c
>> index 1ffbbecafa22..0ee2132a990f 100644
>> --- a/drivers/gpu/drm/meson/meson_overlay.c
>> +++ b/drivers/gpu/drm/meson/meson_overlay.c
>> @@ -10,11 +10,11 @@
>>   #include <drm/drm_atomic.h>
>>   #include <drm/drm_atomic_helper.h>
>>   #include <drm/drm_device.h>
>> +#include <drm/drm_fb_cma_helper.h>
>>   #include <drm/drm_fourcc.h>
>> -#include <drm/drm_plane_helper.h>
>> +#include <drm/drm_gem_atomic_helper.h>
>>   #include <drm/drm_gem_cma_helper.h>
>> -#include <drm/drm_fb_cma_helper.h>
>> -#include <drm/drm_gem_framebuffer_helper.h>
>> +#include <drm/drm_plane_helper.h>
>>  
>>   #include "meson_overlay.h"
>>   #include "meson_registers.h"
>> @@ -742,7 +742,7 @@ static const struct drm_plane_helper_funcs meson_overlay_helper_funcs = {
>>   	.atomic_check	= meson_overlay_atomic_check,
>>   	.atomic_disable	= meson_overlay_atomic_disable,
>>   	.atomic_update	= meson_overlay_atomic_update,
>> -	.prepare_fb	= drm_gem_fb_prepare_fb,
>> +	.prepare_fb	= drm_gem_prepare_fb,
>>   };
>>  
>>   static bool meson_overlay_format_mod_supported(struct drm_plane *plane,
>> diff --git a/drivers/gpu/drm/meson/meson_plane.c b/drivers/gpu/drm/mes=
on/meson_plane.c
>> index 35338ed18209..24d64c9675ff 100644
>> --- a/drivers/gpu/drm/meson/meson_plane.c
>> +++ b/drivers/gpu/drm/meson/meson_plane.c
>> @@ -16,8 +16,8 @@
>>   #include <drm/drm_device.h>
>>   #include <drm/drm_fb_cma_helper.h>
>>   #include <drm/drm_fourcc.h>
>> +#include <drm/drm_gem_atomic_helper.h>
>>   #include <drm/drm_gem_cma_helper.h>
>> -#include <drm/drm_gem_framebuffer_helper.h>
>>   #include <drm/drm_plane_helper.h>
>>  
>>   #include "meson_plane.h"
>> @@ -417,7 +417,7 @@ static const struct drm_plane_helper_funcs meson_plane_helper_funcs = {
>>   	.atomic_check	= meson_plane_atomic_check,
>>   	.atomic_disable	= meson_plane_atomic_disable,
>>   	.atomic_update	= meson_plane_atomic_update,
>> -	.prepare_fb	= drm_gem_fb_prepare_fb,
>> +	.prepare_fb	= drm_gem_prepare_fb,
>>   };
>>  
>>   static bool meson_plane_format_mod_supported(struct drm_plane *plane,
>> diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c b/drivers/gpu/d=
rm/msm/disp/dpu1/dpu_plane.c
>> index bc0231a50132..3e9f9f3dd679 100644
>> --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
>> +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
>> @@ -13,7 +13,7 @@
>>   #include <drm/drm_atomic_uapi.h>
>>   #include <drm/drm_damage_helper.h>
>>   #include <drm/drm_file.h>
>> -#include <drm/drm_gem_framebuffer_helper.h>
>> +#include <drm/drm_gem_atomic_helper.h>
>>  
>>   #include "msm_drv.h"
>>   #include "dpu_kms.h"
>> @@ -892,7 +892,7 @@ static int dpu_plane_prepare_fb(struct drm_plane *plane,
>>   	 *       we can use msm_atomic_prepare_fb() instead of doing the
>>   	 *       implicit fence and fb prepare by hand here.
>>   	 */
>> -	drm_gem_fb_prepare_fb(plane, new_state);
>> +	drm_gem_prepare_fb(plane, new_state);
>>  
>>   	if (pstate->aspace) {
>>   		ret = msm_framebuffer_prepare(new_state->fb,
>> diff --git a/drivers/gpu/drm/msm/msm_atomic.c b/drivers/gpu/drm/msm/ms=
m_atomic.c
>> index 6a326761dc4a..03a113eb6571 100644
>> --- a/drivers/gpu/drm/msm/msm_atomic.c
>> +++ b/drivers/gpu/drm/msm/msm_atomic.c
>> @@ -5,7 +5,7 @@
>>    */
>>  
>>   #include <drm/drm_atomic_uapi.h>
>> -#include <drm/drm_gem_framebuffer_helper.h>
>> +#include <drm/drm_gem_atomic_helper.h>
>>   #include <drm/drm_vblank.h>
>>  
>>   #include "msm_atomic_trace.h"
>> @@ -22,7 +22,7 @@ int msm_atomic_prepare_fb(struct drm_plane *plane,
>>   	if (!new_state->fb)
>>   		return 0;
>>  
>> -	drm_gem_fb_prepare_fb(plane, new_state);
>> +	drm_gem_prepare_fb(plane, new_state);
>>  
>>   	return msm_framebuffer_prepare(new_state->fb, kms->aspace);
>>   }
>> diff --git a/drivers/gpu/drm/mxsfb/mxsfb_kms.c b/drivers/gpu/drm/mxsfb=
/mxsfb_kms.c
>> index 3e1bb0aefb87..33188dea886d 100644
>> --- a/drivers/gpu/drm/mxsfb/mxsfb_kms.c
>> +++ b/drivers/gpu/drm/mxsfb/mxsfb_kms.c
>> @@ -21,8 +21,8 @@
>>   #include <drm/drm_encoder.h>
>>   #include <drm/drm_fb_cma_helper.h>
>>   #include <drm/drm_fourcc.h>
>> +#include <drm/drm_gem_atomic_helper.h>
>>   #include <drm/drm_gem_cma_helper.h>
>> -#include <drm/drm_gem_framebuffer_helper.h>
>>   #include <drm/drm_plane.h>
>>   #include <drm/drm_plane_helper.h>
>>   #include <drm/drm_vblank.h>
>> @@ -495,13 +495,13 @@ static bool mxsfb_format_mod_supported(struct dr=
m_plane *plane,
>>   }
>>  =20
>>   static const struct drm_plane_helper_funcs mxsfb_plane_primary_helpe=
r_funcs =3D {
>> -	.prepare_fb =3D drm_gem_fb_prepare_fb,
>> +	.prepare_fb =3D drm_gem_prepare_fb,
>>   	.atomic_check =3D mxsfb_plane_atomic_check,
>>   	.atomic_update =3D mxsfb_plane_primary_atomic_update,
>>   };
>>  =20
>>   static const struct drm_plane_helper_funcs mxsfb_plane_overlay_helpe=
r_funcs =3D {
>> -	.prepare_fb =3D drm_gem_fb_prepare_fb,
>> +	.prepare_fb =3D drm_gem_prepare_fb,
>>   	.atomic_check =3D mxsfb_plane_atomic_check,
>>   	.atomic_update =3D mxsfb_plane_overlay_atomic_update,
>>   };
>> diff --git a/drivers/gpu/drm/pl111/pl111_display.c b/drivers/gpu/drm/p=
l111/pl111_display.c
>> index 69c02e7c82b7..6fd7f13f1aca 100644
>> --- a/drivers/gpu/drm/pl111/pl111_display.c
>> +++ b/drivers/gpu/drm/pl111/pl111_display.c
>> @@ -17,8 +17,8 @@
>>  =20
>>   #include <drm/drm_fb_cma_helper.h>
>>   #include <drm/drm_fourcc.h>
>> +#include <drm/drm_gem_atomic_helper.h>
>>   #include <drm/drm_gem_cma_helper.h>
>> -#include <drm/drm_gem_framebuffer_helper.h>
>>   #include <drm/drm_vblank.h>
>>  =20
>>   #include "pl111_drm.h"
>> @@ -440,7 +440,7 @@ static struct drm_simple_display_pipe_funcs pl111_=
display_funcs =3D {
>>   	.enable =3D pl111_display_enable,
>>   	.disable =3D pl111_display_disable,
>>   	.update =3D pl111_display_update,
>> -	.prepare_fb =3D drm_gem_fb_simple_display_pipe_prepare_fb,
>> +	.prepare_fb =3D drm_gem_simple_display_pipe_prepare_fb,
>>   };
>>  =20
>>   static int pl111_clk_div_choose_div(struct clk_hw *hw, unsigned long=
 rate,
>> diff --git a/drivers/gpu/drm/rcar-du/rcar_du_vsp.c b/drivers/gpu/drm/rcar-du/rcar_du_vsp.c
>> index 53221d8473c1..964fdaee7c7d 100644
>> --- a/drivers/gpu/drm/rcar-du/rcar_du_vsp.c
>> +++ b/drivers/gpu/drm/rcar-du/rcar_du_vsp.c
>> @@ -11,8 +11,8 @@
>>   #include <drm/drm_crtc.h>
>>   #include <drm/drm_fb_cma_helper.h>
>>   #include <drm/drm_fourcc.h>
>> +#include <drm/drm_gem_atomic_helper.h>
>>   #include <drm/drm_gem_cma_helper.h>
>> -#include <drm/drm_gem_framebuffer_helper.h>
>>   #include <drm/drm_managed.h>
>>   #include <drm/drm_plane_helper.h>
>>   #include <drm/drm_vblank.h>
>> @@ -236,7 +236,7 @@ static int rcar_du_vsp_plane_prepare_fb(struct drm_plane *plane,
>>   	if (ret < 0)
>>   		return ret;
>>
>> -	return drm_gem_fb_prepare_fb(plane, state);
>> +	return drm_gem_prepare_fb(plane, state);
>>   }
>>
>>   void rcar_du_vsp_unmap_fb(struct rcar_du_vsp *vsp, struct drm_framebuffer *fb,
>> diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
>> index 8d15cabdcb02..45577de18b49 100644
>> --- a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
>> +++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
>> @@ -23,6 +23,7 @@
>>   #include <drm/drm_crtc.h>
>>   #include <drm/drm_flip_work.h>
>>   #include <drm/drm_fourcc.h>
>> +#include <drm/drm_gem_atomic_helper.h>
>>   #include <drm/drm_gem_framebuffer_helper.h>
>>   #include <drm/drm_plane_helper.h>
>>   #include <drm/drm_probe_helper.h>
>> @@ -1096,7 +1097,7 @@ static const struct drm_plane_helper_funcs plane_helper_funcs = {
>>   	.atomic_disable = vop_plane_atomic_disable,
>>   	.atomic_async_check = vop_plane_atomic_async_check,
>>   	.atomic_async_update = vop_plane_atomic_async_update,
>> -	.prepare_fb = drm_gem_fb_prepare_fb,
>> +	.prepare_fb = drm_gem_prepare_fb,
>>   };
>>
>>   static const struct drm_plane_funcs vop_plane_funcs = {
>> diff --git a/drivers/gpu/drm/stm/ltdc.c b/drivers/gpu/drm/stm/ltdc.c
>> index 7812094f93d6..73522c6ba3eb 100644
>> --- a/drivers/gpu/drm/stm/ltdc.c
>> +++ b/drivers/gpu/drm/stm/ltdc.c
>> @@ -26,8 +26,8 @@
>>   #include <drm/drm_device.h>
>>   #include <drm/drm_fb_cma_helper.h>
>>   #include <drm/drm_fourcc.h>
>> +#include <drm/drm_gem_atomic_helper.h>
>>   #include <drm/drm_gem_cma_helper.h>
>> -#include <drm/drm_gem_framebuffer_helper.h>
>>   #include <drm/drm_of.h>
>>   #include <drm/drm_plane_helper.h>
>>   #include <drm/drm_probe_helper.h>
>> @@ -911,7 +911,7 @@ static const struct drm_plane_funcs ltdc_plane_funcs = {
>>   };
>>
>>   static const struct drm_plane_helper_funcs ltdc_plane_helper_funcs = {
>> -	.prepare_fb = drm_gem_fb_prepare_fb,
>> +	.prepare_fb = drm_gem_prepare_fb,
>>   	.atomic_check = ltdc_plane_atomic_check,
>>   	.atomic_update = ltdc_plane_atomic_update,
>>   	.atomic_disable = ltdc_plane_atomic_disable,
>> diff --git a/drivers/gpu/drm/sun4i/sun4i_layer.c b/drivers/gpu/drm/sun4i/sun4i_layer.c
>> index acfbfd4463a1..68da94b7c35d 100644
>> --- a/drivers/gpu/drm/sun4i/sun4i_layer.c
>> +++ b/drivers/gpu/drm/sun4i/sun4i_layer.c
>> @@ -7,7 +7,7 @@
>>    */
>>
>>   #include <drm/drm_atomic_helper.h>
>> -#include <drm/drm_gem_framebuffer_helper.h>
>> +#include <drm/drm_gem_atomic_helper.h>
>>   #include <drm/drm_plane_helper.h>
>>
>>   #include "sun4i_backend.h"
>> @@ -122,7 +122,7 @@ static bool sun4i_layer_format_mod_supported(struct drm_plane *plane,
>>   }
>>
>>   static const struct drm_plane_helper_funcs sun4i_backend_layer_helper_funcs = {
>> -	.prepare_fb	= drm_gem_fb_prepare_fb,
>> +	.prepare_fb	= drm_gem_prepare_fb,
>>   	.atomic_disable	= sun4i_backend_layer_atomic_disable,
>>   	.atomic_update	= sun4i_backend_layer_atomic_update,
>>   };
>> diff --git a/drivers/gpu/drm/sun4i/sun8i_ui_layer.c b/drivers/gpu/drm/sun4i/sun8i_ui_layer.c
>> index 816ad4ce8996..95654c153279 100644
>> --- a/drivers/gpu/drm/sun4i/sun8i_ui_layer.c
>> +++ b/drivers/gpu/drm/sun4i/sun8i_ui_layer.c
>> @@ -14,8 +14,8 @@
>>   #include <drm/drm_crtc.h>
>>   #include <drm/drm_fb_cma_helper.h>
>>   #include <drm/drm_fourcc.h>
>> +#include <drm/drm_gem_atomic_helper.h>
>>   #include <drm/drm_gem_cma_helper.h>
>> -#include <drm/drm_gem_framebuffer_helper.h>
>>   #include <drm/drm_plane_helper.h>
>>   #include <drm/drm_probe_helper.h>
>>
>> @@ -299,7 +299,7 @@ static void sun8i_ui_layer_atomic_update(struct drm_plane *plane,
>>   }
>>
>>   static const struct drm_plane_helper_funcs sun8i_ui_layer_helper_funcs = {
>> -	.prepare_fb	= drm_gem_fb_prepare_fb,
>> +	.prepare_fb	= drm_gem_prepare_fb,
>>   	.atomic_check	= sun8i_ui_layer_atomic_check,
>>   	.atomic_disable	= sun8i_ui_layer_atomic_disable,
>>   	.atomic_update	= sun8i_ui_layer_atomic_update,
>> diff --git a/drivers/gpu/drm/sun4i/sun8i_vi_layer.c b/drivers/gpu/drm/sun4i/sun8i_vi_layer.c
>> index 8cc294a9969d..4005884dbce4 100644
>> --- a/drivers/gpu/drm/sun4i/sun8i_vi_layer.c
>> +++ b/drivers/gpu/drm/sun4i/sun8i_vi_layer.c
>> @@ -7,8 +7,8 @@
>>   #include <drm/drm_atomic_helper.h>
>>   #include <drm/drm_crtc.h>
>>   #include <drm/drm_fb_cma_helper.h>
>> +#include <drm/drm_gem_atomic_helper.h>
>>   #include <drm/drm_gem_cma_helper.h>
>> -#include <drm/drm_gem_framebuffer_helper.h>
>>   #include <drm/drm_plane_helper.h>
>>   #include <drm/drm_probe_helper.h>
>>
>> @@ -402,7 +402,7 @@ static void sun8i_vi_layer_atomic_update(struct drm_plane *plane,
>>   }
>>
>>   static const struct drm_plane_helper_funcs sun8i_vi_layer_helper_funcs = {
>> -	.prepare_fb	= drm_gem_fb_prepare_fb,
>> +	.prepare_fb	= drm_gem_prepare_fb,
>>   	.atomic_check	= sun8i_vi_layer_atomic_check,
>>   	.atomic_disable	= sun8i_vi_layer_atomic_disable,
>>   	.atomic_update	= sun8i_vi_layer_atomic_update,
>> diff --git a/drivers/gpu/drm/tegra/plane.c b/drivers/gpu/drm/tegra/plane.c
>> index 539d14935728..ec86a8d060aa 100644
>> --- a/drivers/gpu/drm/tegra/plane.c
>> +++ b/drivers/gpu/drm/tegra/plane.c
>> @@ -8,7 +8,7 @@
>>   #include <drm/drm_atomic.h>
>>   #include <drm/drm_atomic_helper.h>
>>   #include <drm/drm_fourcc.h>
>> -#include <drm/drm_gem_framebuffer_helper.h>
>> +#include <drm/drm_gem_atomic_helper.h>
>>   #include <drm/drm_plane_helper.h>
>>
>>   #include "dc.h"
>> @@ -198,7 +198,7 @@ int tegra_plane_prepare_fb(struct drm_plane *plane,
>>   	if (!state->fb)
>>   		return 0;
>>
>> -	drm_gem_fb_prepare_fb(plane, state);
>> +	drm_gem_prepare_fb(plane, state);
>>
>>   	return tegra_dc_pin(dc, to_tegra_plane_state(state));
>>   }
>> diff --git a/drivers/gpu/drm/tidss/tidss_plane.c b/drivers/gpu/drm/tidss/tidss_plane.c
>> index 35067ae674ea..d39baa66e876 100644
>> --- a/drivers/gpu/drm/tidss/tidss_plane.c
>> +++ b/drivers/gpu/drm/tidss/tidss_plane.c
>> @@ -10,7 +10,7 @@
>>   #include <drm/drm_crtc_helper.h>
>>   #include <drm/drm_fourcc.h>
>>   #include <drm/drm_fb_cma_helper.h>
>> -#include <drm/drm_gem_framebuffer_helper.h>
>> +#include <drm/drm_gem_atomic_helper.h>
>>
>>   #include "tidss_crtc.h"
>>   #include "tidss_dispc.h"
>> @@ -151,7 +151,7 @@ static void drm_plane_destroy(struct drm_plane *plane)
>>   }
>>
>>   static const struct drm_plane_helper_funcs tidss_plane_helper_funcs = {
>> -	.prepare_fb = drm_gem_fb_prepare_fb,
>> +	.prepare_fb = drm_gem_prepare_fb,
>>   	.atomic_check = tidss_plane_atomic_check,
>>   	.atomic_update = tidss_plane_atomic_update,
>>   	.atomic_disable = tidss_plane_atomic_disable,
>> diff --git a/drivers/gpu/drm/tiny/hx8357d.c b/drivers/gpu/drm/tiny/hx8357d.c
>> index c6525cd02bc2..3e2c2868a363 100644
>> --- a/drivers/gpu/drm/tiny/hx8357d.c
>> +++ b/drivers/gpu/drm/tiny/hx8357d.c
>> @@ -19,8 +19,8 @@
>>   #include <drm/drm_atomic_helper.h>
>>   #include <drm/drm_drv.h>
>>   #include <drm/drm_fb_helper.h>
>> +#include <drm/drm_gem_atomic_helper.h>
>>   #include <drm/drm_gem_cma_helper.h>
>> -#include <drm/drm_gem_framebuffer_helper.h>
>>   #include <drm/drm_managed.h>
>>   #include <drm/drm_mipi_dbi.h>
>>   #include <drm/drm_modeset_helper.h>
>> @@ -184,7 +184,7 @@ static const struct drm_simple_display_pipe_funcs hx8357d_pipe_funcs = {
>>   	.enable = yx240qv29_enable,
>>   	.disable = mipi_dbi_pipe_disable,
>>   	.update = mipi_dbi_pipe_update,
>> -	.prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb,
>> +	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
>>   };
>>
>>   static const struct drm_display_mode yx350hv15_mode = {
>> diff --git a/drivers/gpu/drm/tiny/ili9225.c b/drivers/gpu/drm/tiny/ili9225.c
>> index 8e98962db5a2..6b87df19eec1 100644
>> --- a/drivers/gpu/drm/tiny/ili9225.c
>> +++ b/drivers/gpu/drm/tiny/ili9225.c
>> @@ -22,8 +22,8 @@
>>   #include <drm/drm_fb_cma_helper.h>
>>   #include <drm/drm_fb_helper.h>
>>   #include <drm/drm_fourcc.h>
>> +#include <drm/drm_gem_atomic_helper.h>
>>   #include <drm/drm_gem_cma_helper.h>
>> -#include <drm/drm_gem_framebuffer_helper.h>
>>   #include <drm/drm_managed.h>
>>   #include <drm/drm_mipi_dbi.h>
>>   #include <drm/drm_rect.h>
>> @@ -328,7 +328,7 @@ static const struct drm_simple_display_pipe_funcs ili9225_pipe_funcs = {
>>   	.enable		= ili9225_pipe_enable,
>>   	.disable	= ili9225_pipe_disable,
>>   	.update		= ili9225_pipe_update,
>> -	.prepare_fb	= drm_gem_fb_simple_display_pipe_prepare_fb,
>> +	.prepare_fb	= drm_gem_simple_display_pipe_prepare_fb,
>>   };
>>
>>   static const struct drm_display_mode ili9225_mode = {
>> diff --git a/drivers/gpu/drm/tiny/ili9341.c b/drivers/gpu/drm/tiny/ili9341.c
>> index 6ce97f0698eb..a97f3f70e4a6 100644
>> --- a/drivers/gpu/drm/tiny/ili9341.c
>> +++ b/drivers/gpu/drm/tiny/ili9341.c
>> @@ -18,8 +18,8 @@
>>   #include <drm/drm_atomic_helper.h>
>>   #include <drm/drm_drv.h>
>>   #include <drm/drm_fb_helper.h>
>> +#include <drm/drm_gem_atomic_helper.h>
>>   #include <drm/drm_gem_cma_helper.h>
>> -#include <drm/drm_gem_framebuffer_helper.h>
>>   #include <drm/drm_managed.h>
>>   #include <drm/drm_mipi_dbi.h>
>>   #include <drm/drm_modeset_helper.h>
>> @@ -140,7 +140,7 @@ static const struct drm_simple_display_pipe_funcs ili9341_pipe_funcs = {
>>   	.enable = yx240qv29_enable,
>>   	.disable = mipi_dbi_pipe_disable,
>>   	.update = mipi_dbi_pipe_update,
>> -	.prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb,
>> +	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
>>   };
>>
>>   static const struct drm_display_mode yx240qv29_mode = {
>> diff --git a/drivers/gpu/drm/tiny/ili9486.c b/drivers/gpu/drm/tiny/ili9486.c
>> index d7ce40eb166a..6422a7f67079 100644
>> --- a/drivers/gpu/drm/tiny/ili9486.c
>> +++ b/drivers/gpu/drm/tiny/ili9486.c
>> @@ -17,8 +17,8 @@
>>   #include <drm/drm_atomic_helper.h>
>>   #include <drm/drm_drv.h>
>>   #include <drm/drm_fb_helper.h>
>> +#include <drm/drm_gem_atomic_helper.h>
>>   #include <drm/drm_gem_cma_helper.h>
>> -#include <drm/drm_gem_framebuffer_helper.h>
>>   #include <drm/drm_managed.h>
>>   #include <drm/drm_mipi_dbi.h>
>>   #include <drm/drm_modeset_helper.h>
>> @@ -153,7 +153,7 @@ static const struct drm_simple_display_pipe_funcs waveshare_pipe_funcs = {
>>   	.enable = waveshare_enable,
>>   	.disable = mipi_dbi_pipe_disable,
>>   	.update = mipi_dbi_pipe_update,
>> -	.prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb,
>> +	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
>>   };
>>
>>   static const struct drm_display_mode waveshare_mode = {
>> diff --git a/drivers/gpu/drm/tiny/mi0283qt.c b/drivers/gpu/drm/tiny/mi0283qt.c
>> index ff77f983f803..dc76fe53aa72 100644
>> --- a/drivers/gpu/drm/tiny/mi0283qt.c
>> +++ b/drivers/gpu/drm/tiny/mi0283qt.c
>> @@ -16,8 +16,8 @@
>>   #include <drm/drm_atomic_helper.h>
>>   #include <drm/drm_drv.h>
>>   #include <drm/drm_fb_helper.h>
>> +#include <drm/drm_gem_atomic_helper.h>
>>   #include <drm/drm_gem_cma_helper.h>
>> -#include <drm/drm_gem_framebuffer_helper.h>
>>   #include <drm/drm_managed.h>
>>   #include <drm/drm_mipi_dbi.h>
>>   #include <drm/drm_modeset_helper.h>
>> @@ -144,7 +144,7 @@ static const struct drm_simple_display_pipe_funcs mi0283qt_pipe_funcs = {
>>   	.enable = mi0283qt_enable,
>>   	.disable = mipi_dbi_pipe_disable,
>>   	.update = mipi_dbi_pipe_update,
>> -	.prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb,
>> +	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
>>   };
>>
>>   static const struct drm_display_mode mi0283qt_mode = {
>> diff --git a/drivers/gpu/drm/tiny/repaper.c b/drivers/gpu/drm/tiny/repaper.c
>> index 11c602fc9897..2cee07a2e00b 100644
>> --- a/drivers/gpu/drm/tiny/repaper.c
>> +++ b/drivers/gpu/drm/tiny/repaper.c
>> @@ -29,6 +29,7 @@
>>   #include <drm/drm_fb_cma_helper.h>
>>   #include <drm/drm_fb_helper.h>
>>   #include <drm/drm_format_helper.h>
>> +#include <drm/drm_gem_atomic_helper.h>
>>   #include <drm/drm_gem_cma_helper.h>
>>   #include <drm/drm_gem_framebuffer_helper.h>
>>   #include <drm/drm_managed.h>
>> @@ -860,7 +861,7 @@ static const struct drm_simple_display_pipe_funcs repaper_pipe_funcs = {
>>   	.enable = repaper_pipe_enable,
>>   	.disable = repaper_pipe_disable,
>>   	.update = repaper_pipe_update,
>> -	.prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb,
>> +	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
>>   };
>>
>>   static int repaper_connector_get_modes(struct drm_connector *connector)
>> diff --git a/drivers/gpu/drm/tiny/st7586.c b/drivers/gpu/drm/tiny/st7586.c
>> index ff5cf60f4bd7..7d216fe9267f 100644
>> --- a/drivers/gpu/drm/tiny/st7586.c
>> +++ b/drivers/gpu/drm/tiny/st7586.c
>> @@ -19,8 +19,8 @@
>>   #include <drm/drm_fb_cma_helper.h>
>>   #include <drm/drm_fb_helper.h>
>>   #include <drm/drm_format_helper.h>
>> +#include <drm/drm_gem_atomic_helper.h>
>>   #include <drm/drm_gem_cma_helper.h>
>> -#include <drm/drm_gem_framebuffer_helper.h>
>>   #include <drm/drm_managed.h>
>>   #include <drm/drm_mipi_dbi.h>
>>   #include <drm/drm_rect.h>
>> @@ -268,7 +268,7 @@ static const struct drm_simple_display_pipe_funcs st7586_pipe_funcs = {
>>   	.enable		= st7586_pipe_enable,
>>   	.disable	= st7586_pipe_disable,
>>   	.update		= st7586_pipe_update,
>> -	.prepare_fb	= drm_gem_fb_simple_display_pipe_prepare_fb,
>> +	.prepare_fb	= drm_gem_simple_display_pipe_prepare_fb,
>>   };
>>
>>   static const struct drm_display_mode st7586_mode = {
>> diff --git a/drivers/gpu/drm/tiny/st7735r.c b/drivers/gpu/drm/tiny/st7735r.c
>> index faaba0a033ea..df8872d62cdd 100644
>> --- a/drivers/gpu/drm/tiny/st7735r.c
>> +++ b/drivers/gpu/drm/tiny/st7735r.c
>> @@ -19,8 +19,8 @@
>>   #include <drm/drm_atomic_helper.h>
>>   #include <drm/drm_drv.h>
>>   #include <drm/drm_fb_helper.h>
>> +#include <drm/drm_gem_atomic_helper.h>
>>   #include <drm/drm_gem_cma_helper.h>
>> -#include <drm/drm_gem_framebuffer_helper.h>
>>   #include <drm/drm_managed.h>
>>   #include <drm/drm_mipi_dbi.h>
>>
>> @@ -136,7 +136,7 @@ static const struct drm_simple_display_pipe_funcs st7735r_pipe_funcs = {
>>   	.enable		= st7735r_pipe_enable,
>>   	.disable	= mipi_dbi_pipe_disable,
>>   	.update		= mipi_dbi_pipe_update,
>> -	.prepare_fb	= drm_gem_fb_simple_display_pipe_prepare_fb,
>> +	.prepare_fb	= drm_gem_simple_display_pipe_prepare_fb,
>>   };
>>
>>   static const struct st7735r_cfg jd_t18003_t01_cfg = {
>> diff --git a/drivers/gpu/drm/tve200/tve200_display.c b/drivers/gpu/drm/tve200/tve200_display.c
>> index cb0e837d3dba..50e1fb71869f 100644
>> --- a/drivers/gpu/drm/tve200/tve200_display.c
>> +++ b/drivers/gpu/drm/tve200/tve200_display.c
>> @@ -17,8 +17,8 @@
>>
>>   #include <drm/drm_fb_cma_helper.h>
>>   #include <drm/drm_fourcc.h>
>> +#include <drm/drm_gem_atomic_helper.h>
>>   #include <drm/drm_gem_cma_helper.h>
>> -#include <drm/drm_gem_framebuffer_helper.h>
>>   #include <drm/drm_panel.h>
>>   #include <drm/drm_vblank.h>
>>
>> @@ -316,7 +316,7 @@ static const struct drm_simple_display_pipe_funcs tve200_display_funcs = {
>>   	.enable = tve200_display_enable,
>>   	.disable = tve200_display_disable,
>>   	.update = tve200_display_update,
>> -	.prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb,
>> +	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
>>   	.enable_vblank = tve200_display_enable_vblank,
>>   	.disable_vblank = tve200_display_disable_vblank,
>>   };
>> diff --git a/drivers/gpu/drm/vc4/vc4_plane.c b/drivers/gpu/drm/vc4/vc4_plane.c
>> index 7322169c0682..a65e980078f3 100644
>> --- a/drivers/gpu/drm/vc4/vc4_plane.c
>> +++ b/drivers/gpu/drm/vc4/vc4_plane.c
>> @@ -20,7 +20,7 @@
>>   #include <drm/drm_atomic_uapi.h>
>>   #include <drm/drm_fb_cma_helper.h>
>>   #include <drm/drm_fourcc.h>
>> -#include <drm/drm_gem_framebuffer_helper.h>
>> +#include <drm/drm_gem_atomic_helper.h>
>>   #include <drm/drm_plane_helper.h>
>>
>>   #include "uapi/drm/vc4_drm.h"
>> @@ -1250,7 +1250,7 @@ static int vc4_prepare_fb(struct drm_plane *plane,
>>
>>   	bo = to_vc4_bo(&drm_fb_cma_get_gem_obj(state->fb, 0)->base);
>>
>> -	drm_gem_fb_prepare_fb(plane, state);
>> +	drm_gem_prepare_fb(plane, state);
>>
>>   	if (plane->state->fb == state->fb)
>>   		return 0;
>> diff --git a/drivers/gpu/drm/vkms/vkms_plane.c b/drivers/gpu/drm/vkms/vkms_plane.c
>> index 0824327cc860..e3fd8cd1f3f1 100644
>> --- a/drivers/gpu/drm/vkms/vkms_plane.c
>> +++ b/drivers/gpu/drm/vkms/vkms_plane.c
>> @@ -5,6 +5,7 @@
>>   #include <drm/drm_atomic.h>
>>   #include <drm/drm_atomic_helper.h>
>>   #include <drm/drm_fourcc.h>
>> +#include <drm/drm_gem_atomic_helper.h>
>>   #include <drm/drm_gem_framebuffer_helper.h>
>>   #include <drm/drm_plane_helper.h>
>>   #include <drm/drm_gem_shmem_helper.h>
>> @@ -159,7 +160,7 @@ static int vkms_prepare_fb(struct drm_plane *plane,
>>   	if (ret)
>>   		DRM_ERROR("vmap failed: %d\n", ret);
>>
>> -	return drm_gem_fb_prepare_fb(plane, state);
>> +	return drm_gem_prepare_fb(plane, state);
>>   }
>>
>>   static void vkms_cleanup_fb(struct drm_plane *plane,
>> diff --git a/drivers/gpu/drm/xen/xen_drm_front_kms.c b/drivers/gpu/drm/xen/xen_drm_front_kms.c
>> index ef11b1e4de39..371202ebe900 100644
>> --- a/drivers/gpu/drm/xen/xen_drm_front_kms.c
>> +++ b/drivers/gpu/drm/xen/xen_drm_front_kms.c
>> @@ -13,6 +13,7 @@
>>   #include <drm/drm_drv.h>
>>   #include <drm/drm_fourcc.h>
>>   #include <drm/drm_gem.h>
>> +#include <drm/drm_gem_atomic_helper.h>
>>   #include <drm/drm_gem_framebuffer_helper.h>
>>   #include <drm/drm_probe_helper.h>
>>   #include <drm/drm_vblank.h>
>> @@ -301,7 +302,7 @@ static const struct drm_simple_display_pipe_funcs display_funcs = {
>>   	.mode_valid = display_mode_valid,
>>   	.enable = display_enable,
>>   	.disable = display_disable,
>> -	.prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb,
>> +	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
>>   	.check = display_check,
>>   	.update = display_update,
>>   };
>> diff --git a/include/drm/drm_gem_atomic_helper.h b/include/drm/drm_gem_atomic_helper.h
>> index 08b96ccea325..91e73d23fea8 100644
>> --- a/include/drm/drm_gem_atomic_helper.h
>> +++ b/include/drm/drm_gem_atomic_helper.h
>> @@ -9,6 +9,14 @@
>>
>>   struct drm_simple_display_pipe;
>>
>> +/*
>> + * Plane Helpers
>> + */
>> +
>> +int drm_gem_prepare_fb(struct drm_plane *plane, struct drm_plane_state *state);
>> +int drm_gem_simple_display_pipe_prepare_fb(struct drm_simple_display_pipe *pipe,
>> +					   struct drm_plane_state *plane_state);
>> +
>>   /*
>>    * Helpers for planes with shadow buffers
>>    */
>> diff --git a/include/drm/drm_gem_framebuffer_helper.h b/include/drm/drm_gem_framebuffer_helper.h
>> index 6b013154911d..495d174d9989 100644
>> --- a/include/drm/drm_gem_framebuffer_helper.h
>> +++ b/include/drm/drm_gem_framebuffer_helper.h
>> @@ -9,9 +9,6 @@ struct drm_framebuffer;
>>   struct drm_framebuffer_funcs;
>>   struct drm_gem_object;
>>   struct drm_mode_fb_cmd2;
>> -struct drm_plane;
>> -struct drm_plane_state;
>> -struct drm_simple_display_pipe;
>>
>>   #define AFBC_VENDOR_AND_TYPE_MASK	GENMASK_ULL(63, 52)
>>
>> @@ -44,8 +41,4 @@ int drm_gem_fb_afbc_init(struct drm_device *dev,
>>   			 const struct drm_mode_fb_cmd2 *mode_cmd,
>>   			 struct drm_afbc_framebuffer *afbc_fb);
>>
>> -int drm_gem_fb_prepare_fb(struct drm_plane *plane,
>> -			  struct drm_plane_state *state);
>> -int drm_gem_fb_simple_display_pipe_prepare_fb(struct drm_simple_display_pipe *pipe,
>> -					      struct drm_plane_state *plane_state);
>>   #endif
>> diff --git a/include/drm/drm_modeset_helper_vtables.h b/include/drm/drm_modeset_helper_vtables.h
>> index eb706342861d..8d41d3734153 100644
>> --- a/include/drm/drm_modeset_helper_vtables.h
>> +++ b/include/drm/drm_modeset_helper_vtables.h
>> @@ -1179,7 +1179,7 @@ struct drm_plane_helper_funcs {
>>   	 * members in the plane structure.
>>   	 *
>>   	 * Drivers which always have their buffers pinned should use
>> -	 * drm_gem_fb_prepare_fb() for this hook.
>> +	 * drm_gem_prepare_fb() for this hook.
>>   	 *
>>   	 * The helpers will call @cleanup_fb with matching arguments for every
>>   	 * successful call to this hook.
>> diff --git a/include/drm/drm_plane.h b/include/drm/drm_plane.h
>> index 95ab14a4336a..be08b6b1fde0 100644
>> --- a/include/drm/drm_plane.h
>> +++ b/include/drm/drm_plane.h
>> @@ -79,8 +79,8 @@ struct drm_plane_state {
>>   	 * preserved.
>>   	 *
>>   	 * Drivers should store any implicit fence in this from their
>> -	 * &drm_plane_helper_funcs.prepare_fb callback. See drm_gem_fb_prepare_fb()
>> -	 * and drm_gem_fb_simple_display_pipe_prepare_fb() for suitable helpers.
>> +	 * &drm_plane_helper_funcs.prepare_fb callback. See drm_gem_prepare_fb()
>> +	 * and drm_gem_simple_display_pipe_prepare_fb() for suitable helpers.
>>   	 */
>>   	struct dma_fence *fence;
>>
>> diff --git a/include/drm/drm_simple_kms_helper.h b/include/drm/drm_simple_kms_helper.h
>> index 40b34573249f..ef9944e9c5fc 100644
>> --- a/include/drm/drm_simple_kms_helper.h
>> +++ b/include/drm/drm_simple_kms_helper.h
>> @@ -117,7 +117,7 @@ struct drm_simple_display_pipe_funcs {
>>   	 * more details.
>>   	 *
>>   	 * Drivers which always have their buffers pinned should use
>> -	 * drm_gem_fb_simple_display_pipe_prepare_fb() for this hook.
>> +	 * drm_gem_simple_display_pipe_prepare_fb() for this hook.
>>   	 */
>>   	int (*prepare_fb)(struct drm_simple_display_pipe *pipe,
>>   			  struct drm_plane_state *plane_state);
>> --
>> 2.30.0
>>
>

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer




From xen-devel-bounces@lists.xenproject.org Wed Feb 10 14:14:56 2021
Subject: Re: [for-4.15][PATCH v2 3/5] xen/iommu: iommu_map: Don't crash the
 domain if it is dying
To: Julien Grall <julien.grall.oss@gmail.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>, hongyxia@amazon.co.uk,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <jgrall@amazon.com>,
 Paul Durrant <paul@xen.org>
References: <20210209152816.15792-1-julien@xen.org>
 <20210209152816.15792-4-julien@xen.org>
 <04f601d6ff22$1f52cf60$5df86e20$@xen.org>
 <CAJ=z9a18XxQLrUanxg_E7Vups7aRee93_vFhqxu1=yq+VdXH-w@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <6fb54306-20e6-516f-cdcf-c7d8dd430b96@suse.com>
Date: Wed, 10 Feb 2021 15:14:46 +0100
In-Reply-To: <CAJ=z9a18XxQLrUanxg_E7Vups7aRee93_vFhqxu1=yq+VdXH-w@mail.gmail.com>

On 09.02.2021 22:14, Julien Grall wrote:
> On Tue, 9 Feb 2021 at 20:28, Paul Durrant <xadimgnik@gmail.com> wrote:
>>> From: Julien Grall <julien@xen.org>
>>> Sent: 09 February 2021 15:28
>>>
>>> It is a bit pointless to crash a domain that is already dying. This will
>>> become more of an annoyance with a follow-up change where page-table
>>> allocation will be forbidden when the domain is dying.
>>>
>>> Security-wise, there is no change, as the devices would still have access
>>> to the IOMMU page-tables even after the domain has crashed, until Xen
>>> starts to relinquish the resources.
>>>
>>> For x86, we rely on dom_iommu(d)->arch.mapping.lock to ensure
>>> d->is_dying is correctly observed (a follow-up patch will hold it in the
>>> relinquish path).

Am I to understand this to mean that at this point of the series
things aren't really correct yet in this regard? If so, wouldn't
it be better to re-order?

>>> For Arm, there is still a small race possible. But there is so far no
>>> failure specific to a domain dying.
>>>
>>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>>
>>> ---
>>>
>>> This was spotted when trying to destroy IOREQ servers while the domain
>>> is dying. The code will try to add the entry back in the P2M and
>>> therefore update the P2M (see arch_ioreq_server_disable() ->
>>> hvm_add_ioreq_gfn()).
>>>
>>> It should be possible to skip the mapping in hvm_add_ioreq_gfn(); however,
>>> I haven't tried a patch yet because checking d->is_dying can be racy (I
>>> can't find a proper lock).

I understand the concern. I find it odd though that we permit
iommu_map() to do anything at all when the domain is already
dying. So irrespective of the remark below, how about bailing
from iommu_map() earlier when the domain is dying?

>>> --- a/xen/drivers/passthrough/iommu.c
>>> +++ b/xen/drivers/passthrough/iommu.c
>>> @@ -272,7 +272,7 @@ int iommu_map(struct domain *d, dfn_t dfn, mfn_t mfn,
>>>                              flush_flags) )
>>>                  continue;
>>>
>>> -        if ( !is_hardware_domain(d) )
>>> +        if ( !is_hardware_domain(d) && !d->is_dying )
>>>              domain_crash(d);
>>
>> Would it make more sense to check is_dying inside domain_crash() (and turn it into a no-op in that case)?
> 
> Jan also suggested moving the check in domain_crash(). However, I felt
> it is potentially a too risky change for 4.15 as there are quite a few
> callers.

This is a fair point. However, in such a case I'd prefer symmetry
at least throughout this one source file (there are three more
places), unless there are strong reasons against doing so.
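[Editorial note: the guard being discussed can be sketched in isolation as
below. The struct and function names are simplified stand-ins for Xen's
actual types, not the real hypervisor code.]

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Minimal sketch of the check under discussion: only crash a domain
 * that is neither the hardware domain nor already dying. A dying
 * domain is being torn down anyway, so crashing it is pointless.
 * "struct domain" here is an illustrative stand-in, not Xen's.
 */
struct domain {
    bool is_hardware;
    bool is_dying;
    bool crashed;
};

static void domain_crash(struct domain *d)
{
    d->crashed = true;
}

static void handle_map_failure(struct domain *d)
{
    /* Skip the crash for the hardware domain and for dying domains. */
    if ( !d->is_hardware && !d->is_dying )
        domain_crash(d);
}
```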

Jan


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 14:19:05 2021
Subject: Re: [PATCH] x86emul: fix SYSENTER/SYSCALL switching into 64-bit mode
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, Roger Pau Monné <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <7ce15e4b-8bf1-0cfd-ca9e-5f6eba12cac1@suse.com>
 <d66cce4b-6563-4857-30be-5889788ca6c8@citrix.com>
 <2eed5630-3e23-3005-245e-989893fc8476@suse.com>
 <bf31a01e-4a32-5938-c158-38923100355d@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <77fda392-6f6a-c8b0-f1ea-15b917245f5e@suse.com>
Date: Wed, 10 Feb 2021 15:18:58 +0100
In-Reply-To: <bf31a01e-4a32-5938-c158-38923100355d@citrix.com>

On 10.02.2021 15:02, Andrew Cooper wrote:
> On 10/02/2021 13:54, Jan Beulich wrote:
>> On 10.02.2021 13:28, Andrew Cooper wrote:
>>> On 10/02/2021 09:57, Jan Beulich wrote:
>>>> When invoked by compat mode, mode_64bit() will be false at the start of
>>>> emulation. The logic after complete_insn, however, needs to consider the
>>>> mode switched into, in particular to avoid truncating RIP.
>>>>
>>>> Inspired by / paralleling and extending Linux commit 943dea8af21b ("KVM:
>>>> x86: Update emulator context mode if SYSENTER xfers to 64-bit mode").
>>>>
>>>> While there, tighten a related assertion in x86_emulate_wrapper() - we
>>>> want to be sure to not switch into an impossible mode when the code gets
>>>> built for 32-bit only (as is possible for the test harness).
>>>>
>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>> ---
>>>> In principle we could drop SYSENTER's ctxt->lma dependency when setting
>>>> _regs.r(ip). I wasn't certain whether leaving it as is serves as a kind of
>>>> documentation ...
>>>>
>>>> --- a/xen/arch/x86/x86_emulate/x86_emulate.c
>>>> +++ b/xen/arch/x86/x86_emulate/x86_emulate.c
>>>> @@ -6127,6 +6127,10 @@ x86_emulate(
>>>>               (rc = ops->write_segment(x86_seg_ss, &sreg, ctxt)) )
>>>>              goto done;
>>>>  
>>>> +        if ( ctxt->lma )
>>>> +            /* In particular mode_64bit() needs to return true from here on. */
>>>> +            ctxt->addr_size = ctxt->sp_size = 64;
>>> I think this is fine as presented, but don't we want the logical
>>> opposite for SYSRET/SYSEXIT ?
>>>
>>> We truncate rip suitably already,
>> This is why I left them alone, i.e. ...
>>
>>> but don't know what other checks may appear in the future.
>> ... I thought we would deal with this if and when such checks
>> would appear.
> 
> Ok. Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

Thanks.

>> As considered in the post-description
>> remark, we could drop the conditional part from sysexit's
>> setting of _regs.r(ip), and _then_ we would indeed need a
>> respective change there, for the truncation to happen at
>> complete_insn:.
> 
> I think it would look odd changing just rip and not rsp truncation.

Yes, this was another consideration of mine as well. But it
is a fact that we treat rip and rsp differently in this
regard. Perhaps generated code overall could benefit from
treating rsp more like rip, but this would need careful
looking at all the involved pieces - especially in cases
where the updated stack pointer gets further used we may
not be able to defer the truncation to complete_insn:.
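[Editorial note: the size-dependent truncation being discussed, performed at
complete_insn:, can be sketched as below. truncate_to_size() is an
illustrative stand-in, not the emulator's actual helper.]

```c
#include <assert.h>
#include <stdint.h>

/*
 * Illustrative stand-in for the emulator's register truncation: in
 * 64-bit mode (size 64) the value is kept whole, while 32- and 16-bit
 * operand sizes chop the upper bits. Not the actual x86_emulate code.
 */
static uint64_t truncate_to_size(uint64_t val, unsigned int bits)
{
    /* Guard bits == 64 explicitly: shifting by 64 would be undefined. */
    return bits < 64 ? (val & (((uint64_t)1 << bits) - 1)) : val;
}
```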

Jan


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 14:29:55 2021
From: Thomas Gleixner <tglx@linutronix.de>
To: lkml@nanos.tec.linutronix.de
Cc: Juergen Gross <jgross@suse.com>, x86@kernel.org,
 Bjorn Helgaas <helgaas@kernel.org>, xen-devel@lists.xenproject.org
Subject: [PATCH] x86/pci: Create PCI/MSI irqdomain after x86_init.pci.arch_init()
Date: Wed, 10 Feb 2021 15:29:44 +0100
Message-ID: <87sg64dmhz.fsf@nanos.tec.linutronix.de>

Invoking x86_init.irqs.create_pci_msi_domain() before
x86_init.pci.arch_init() breaks XEN PV.

The XEN_PV specific pci.arch_init() function overrides the default
create_pci_msi_domain() hook, but by then the hook has already been
invoked, which is obviously too late.

As a consequence the XEN PV PCI/MSI allocation goes through the native
path which runs out of vectors and causes malfunction.

Invoke it after x86_init.pci.arch_init().

Fixes: 6b15ffa07dc3 ("x86/irq: Initialize PCI/MSI domain at PCI init time")
Reported-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Juergen Gross <jgross@suse.com>
Cc: stable@vger.kernel.org
---
 arch/x86/pci/init.c |   15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

--- a/arch/x86/pci/init.c
+++ b/arch/x86/pci/init.c
@@ -9,16 +9,23 @@
    in the right sequence from here. */
 static __init int pci_arch_init(void)
 {
-	int type;
-
-	x86_create_pci_msi_domain();
+	int type, pcbios = 1;
 
 	type = pci_direct_probe();
 
 	if (!(pci_probe & PCI_PROBE_NOEARLY))
 		pci_mmcfg_early_init();
 
-	if (x86_init.pci.arch_init && !x86_init.pci.arch_init())
+	if (x86_init.pci.arch_init)
+		pcbios = x86_init.pci.arch_init();
+
+	/*
+	 * Must happen after x86_init.pci.arch_init(). Xen sets up the
+	 * x86_init.irqs.create_pci_msi_domain there.
+	 */
+	x86_create_pci_msi_domain();
+
+	if (!pcbios)
 		return 0;
 
 	pci_pcbios_init();
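[Editorial note: the ordering fix above can be illustrated in isolation as
below. All names are simplified stand-ins for the kernel's x86_init
structures, not the actual kernel code.]

```c
#include <assert.h>
#include <string.h>

/*
 * Sketch of the init-hook ordering fix: arch_init() may replace the
 * MSI-domain hook (as Xen PV does), so the hook must only be invoked
 * after arch_init() has run.
 */
static const char *msi_domain_used;

static void native_create_msi_domain(void) { msi_domain_used = "native"; }
static void xen_create_msi_domain(void)    { msi_domain_used = "xen"; }

struct init_ops {
    void (*create_pci_msi_domain)(void);
    int  (*arch_init)(void);
};

static struct init_ops x86_init_sketch = {
    .create_pci_msi_domain = native_create_msi_domain,
    .arch_init = 0,
};

static int xen_arch_init(void)
{
    /* Xen PV overrides the hook from its pci.arch_init(). */
    x86_init_sketch.create_pci_msi_domain = xen_create_msi_domain;
    return 0;
}

static int pci_arch_init_sketch(void)
{
    int pcbios = 1;

    if (x86_init_sketch.arch_init)
        pcbios = x86_init_sketch.arch_init();

    /* Must run after arch_init() so an override takes effect. */
    x86_init_sketch.create_pci_msi_domain();

    return pcbios;
}
```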


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 14:32:25 2021
Subject: Re: [for-4.15][PATCH v2 4/5] xen/iommu: x86: Don't leak the IOMMU
 page-tables
To: Julien Grall <julien@xen.org>
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Paul Durrant <paul@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210209152816.15792-1-julien@xen.org>
 <20210209152816.15792-5-julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <62a791cb-a880-4097-5fec-4f728751b58b@suse.com>
Date: Wed, 10 Feb 2021 15:32:20 +0100
In-Reply-To: <20210209152816.15792-5-julien@xen.org>

On 09.02.2021 16:28, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> The new IOMMU page-table allocator will release the pages when
> relinquishing the domain resources. However, this is not sufficient when
> the domain is dying because nothing prevents page-tables from being
> allocated.
> 
> iommu_alloc_pgtable() now checks whether the domain is dying before
> adding the page to the list. We rely on &hd->arch.pgtables.lock
> to synchronize d->is_dying.

As said in reply to an earlier patch, I think suppressing
(really: ignoring) new mappings would be better. You could
utilize the same lock, but you'd need to duplicate the
checking in {amd,intel}_iommu_map_page().

I'm not entirely certain about the case of unmap requests: it may be
possible to suppress/ignore them too, but this may require some further
thought.

Apart from this, just in case we settle on your current
approach, a few spelling nits:

> --- a/xen/drivers/passthrough/x86/iommu.c
> +++ b/xen/drivers/passthrough/x86/iommu.c
> @@ -149,6 +149,13 @@ int arch_iommu_domain_init(struct domain *d)
>  
>  void arch_iommu_domain_destroy(struct domain *d)
>  {
> +    /*
> +     * There should be not page-tables left allocated by the time the

... should be no ...

> +     * domain is destroyed. Note that arch_iommu_domain_destroy() is
> +     * called unconditionally, so pgtables may be unitialized.

uninitialized

> @@ -303,9 +317,29 @@ struct page_info *iommu_alloc_pgtable(struct domain *d)
>      unmap_domain_page(p);
>  
>      spin_lock(&hd->arch.pgtables.lock);
> -    page_list_add(pg, &hd->arch.pgtables.list);
> +    /*
> +     * The IOMMU page-tables are freed when relinquishing the domain, but
> +     * nothing prevent allocation to happen afterwards. There is no valid

prevents

> +     * reasons to continue to update the IOMMU page-tables while the

reason

> +     * domain is dying.
> +     *
> +     * So prevent page-table allocation when the domain is dying.
> +     *
> +     * We relying on &hd->arch.pgtables.lock to synchronize d->is_dying.

rely

Jan


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 14:39:19 2021
Subject: Re: [for-4.15][PATCH v2 5/5] xen/iommu: x86: Clear the root
 page-table before freeing the page-tables
To: Julien Grall <julien@xen.org>
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>, Paul Durrant <paul@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210209152816.15792-1-julien@xen.org>
 <20210209152816.15792-6-julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e1ee8244-2d4a-64b8-daa5-c74516adae2f@suse.com>
Date: Wed, 10 Feb 2021 15:39:12 +0100
In-Reply-To: <20210209152816.15792-6-julien@xen.org>

On 09.02.2021 16:28, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> The new per-domain IOMMU page-table allocator will now free the
> page-tables when the domain's resources are relinquished. However, the root
> page-table (i.e. hd->arch.pg_maddr) will not be cleared.
> 
> Xen may access the IOMMU page-tables afterwards at least in the case of
> PV domain:
> 
> (XEN) Xen call trace:
> (XEN)    [<ffff82d04025b4b2>] R iommu.c#addr_to_dma_page_maddr+0x12e/0x1d8
> (XEN)    [<ffff82d04025b695>] F iommu.c#intel_iommu_unmap_page+0x5d/0xf8
> (XEN)    [<ffff82d0402695f3>] F iommu_unmap+0x9c/0x129
> (XEN)    [<ffff82d0402696a6>] F iommu_legacy_unmap+0x26/0x63

So if unmap was (also) short-circuited for dying domains, this
path wouldn't be taken and all would be well even without this
change?

Jan


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 14:44:12 2021
Subject: Re: [for-4.15][PATCH v2 4/5] xen/iommu: x86: Don't leak the IOMMU
 page-tables
To: Julien Grall <julien@xen.org>
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Paul Durrant <paul@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210209152816.15792-1-julien@xen.org>
 <20210209152816.15792-5-julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <73ace66d-f4ee-06d0-7b1e-0d561620fb82@suse.com>
Date: Wed, 10 Feb 2021 15:44:07 +0100
In-Reply-To: <20210209152816.15792-5-julien@xen.org>

On 09.02.2021 16:28, Julien Grall wrote:
> @@ -303,9 +317,29 @@ struct page_info *iommu_alloc_pgtable(struct domain *d)
>      unmap_domain_page(p);
>  
>      spin_lock(&hd->arch.pgtables.lock);
> -    page_list_add(pg, &hd->arch.pgtables.list);
> +    /*
> +     * The IOMMU page-tables are freed when relinquishing the domain, but
> +     * nothing prevent allocation to happen afterwards. There is no valid
> +     * reasons to continue to update the IOMMU page-tables while the
> +     * domain is dying.
> +     *
> +     * So prevent page-table allocation when the domain is dying.
> +     *
> +     * We relying on &hd->arch.pgtables.lock to synchronize d->is_dying.
> +     */
> +    if ( likely(!d->is_dying) )
> +    {
> +        alive = true;
> +        page_list_add(pg, &hd->arch.pgtables.list);
> +    }
>      spin_unlock(&hd->arch.pgtables.lock);
>  
> +    if ( unlikely(!alive) )
> +    {
> +        free_domheap_page(pg);
> +        pg = NULL;
> +    }
> +
>      return pg;
>  }

There's a pretty clear downside to this approach compared to that
of ignoring maps (and perhaps also unmaps) for dying domains: The
caller here will (hopefully) recognize and propagate an error.

Additionally (considering the situation patch 5 fixes) ignoring
unmaps may provide quite a bit of a performance gain for domain
destruction - we don't need every individual page unmapped from
the page tables.
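[Editorial note: the allocation-side check being compared here can be
sketched in isolation as below. Types and names are simplified stand-ins
for the Xen structures; the real code takes spin_lock(&hd->arch.pgtables.lock)
and uses page lists rather than a counter.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

/*
 * Sketch of the approach in the patch: observe d->is_dying while
 * holding the page-table lock (elided here) and fail, rather than
 * leak, the allocation for a dying domain. The caller then sees and
 * must propagate the error.
 */
struct domain {
    bool is_dying;
    int pgtable_count;   /* stand-in for hd->arch.pgtables.list */
};

static void *alloc_pgtable_sketch(struct domain *d)
{
    void *pg = malloc(4096);

    if ( !pg )
        return NULL;

    /* spin_lock(&hd->arch.pgtables.lock) would be taken here. */
    if ( d->is_dying )
    {
        /* Unlock, then free: the page is never added to the list. */
        free(pg);
        return NULL;
    }
    d->pgtable_count++;
    /* spin_unlock(&hd->arch.pgtables.lock) here. */

    return pg;
}
```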

Jan


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 14:54:30 2021
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "julien@xen.org" <julien@xen.org>,
 "Volodymyr_Babchuk@epam.com" <Volodymyr_Babchuk@epam.com>,
 "ehem+xen@m5p.com" <ehem+xen@m5p.com>,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: Re: [PATCH v2] xen: workaround missing device_type property in
 pci/pcie nodes
Date: Wed, 10 Feb 2021 14:54:07 +0000
Message-ID: <8C76E543-25C1-49FD-A7B2-DB83F800AAB5@arm.com>
References: <20210209195334.21206-1-sstabellini@kernel.org>
In-Reply-To: <20210209195334.21206-1-sstabellini@kernel.org>
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [86.26.33.241]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: bafa2046-98da-46ab-1f63-08d8cdd3bcd8
x-ms-traffictypediagnostic: PR3PR08MB5785:|PR3PR08MB5689:
X-Microsoft-Antispam-PRVS:
	<PR3PR08MB56896591CDED69E5ACC919309D8D9@PR3PR08MB5689.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 GfxZJSVf2m9/ex/SH+wEFFqrmKFa+PczWQZjCzpUp/C9SugXt3+anv+lC06UNOnRI//4c8zXmp8VRUKw0Hc5dfkiMUnTxB0uP5YnuTc4aGOFL7NCYKRmih7SB+4jnEmkJUsEEJ83OvNqc1Q81I+PdhvoQY3qAec5GsjsarnDnU3rypopOv6at9n4y6FSHaDNQLW77kUyrqrQEz/RxzmkMgU2Pteui9Kc7crwHgtu0JIT/Ld8392iQkU0GvVeUD1IBpBwWIrMjTa8EGSEsw4vfkBS81+7CqfXWoxo8zhuWgAtOYZ95sigm+2Ym/ImlDIpniZrI/RivYTYUMwvoIfr396puqHkwsJjYw+pcBNJhf8SuwNrDlwSOZouZF9ysH3XihIZP/V7bBZ6cGlBIwzB34k3ScevgL75fHgRcmX2XiRcsvSzkY8hlxi4R3zyhTwYcKrXuYl76yHbr6XoGBJsLH1PwgryRSxIJavIlI5hUqv0gPD6XLk/YN+2xaY9YeKNK7isbVBFQzHnNP4+ClNlsYY860yFeQylnFMZcutq/4qCQSpVPp8HJwCz59xt7ZwurP3SrZoQtRo5cN/rqdtjzRHQJwon9PFN2arNLPg6HfUL7rKem0V4ZHg8o9+e6cJSs6EdwJJACQ4rCTwi51YNc/v7qqXYihzYvgoEYLjNkBA=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:PR3PR08MB5689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(366004)(39850400004)(346002)(136003)(376002)(186003)(6506007)(33656002)(6916009)(26005)(66556008)(36756003)(478600001)(2616005)(316002)(6486002)(55236004)(54906003)(71200400001)(53546011)(76116006)(66476007)(66946007)(91956017)(64756008)(66446008)(966005)(8676002)(6512007)(4326008)(8936002)(86362001)(2906002)(83380400001)(5660300002)(45980500001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 =?us-ascii?Q?Oqu07tk78c+XfhrIjHGSpLCPnEzmQloljSzwGvEslY/JREg4zrDU6PUAqLTQ?=
 =?us-ascii?Q?IyWelLeX7Ku8PyeTTgfbK/zOHloHP0Z2Q7sdD+SUzjpc9yHufYWSRa9gRRsY?=
 =?us-ascii?Q?0izHQhthIi6krDXOp9fOsdTay6os0DjkoVvNAGitCpN0C7DBt2vJueCtc8IG?=
 =?us-ascii?Q?dEgbOogM2tLJ7I9uJkeIFoM0RkVRdLXGVy2Rmut2kUBWRDW1+JvW/tuINo3b?=
 =?us-ascii?Q?ZccmAEXcGIKPjXYTfwH6nO+zP+3yutke1Trlb3Cccc/JpDGthi1Ds/Jt1Hch?=
 =?us-ascii?Q?9q0jOEetpORU3eOYrrmzhHM0CDuZLkNwTsI3aTIH/MTUJfj0i8OFJtFan7DA?=
 =?us-ascii?Q?rSsWdZacezyr2LzTlB6Uf2+JbLEN0Ba2NgJn8zdgWSaUgEXdj/yza9Tgnp7l?=
 =?us-ascii?Q?9qlssIAUVXV/qWlhPADdGcn8J7xqMgSbCV2K50PMn5JBEDPBCZ38HyjyCNXC?=
 =?us-ascii?Q?EhkA5SNnQdOJ7lF2+nGK7Hrgmy5Ar5FGfTq16t+mZQBgMl/gHLHk/Ti0YVNQ?=
 =?us-ascii?Q?asToGMct+oAVDjcsVpOGthLfTpknMDBBTJPyb3tHufA5wOVc2Lw7miAvUxkP?=
 =?us-ascii?Q?9EpA8NAy9VGr0VCYBaLykOOc9Wcbo9mqBXqqryBKVVlQhRVCU2UzZCsC0scz?=
 =?us-ascii?Q?83ipYfV8UBpSwacvmI1z3d5DA7SPjZakrzsYoOwJjtth+F/mqpNtw0r9Rsp4?=
 =?us-ascii?Q?dWimoN3EJxDXjx2xPPVJRtCo8Z1XBdmHCuwhUQA9V7LCeEwK5pzEGSWfYuUn?=
 =?us-ascii?Q?FAh0qym+ilQal0iN6pDEFzwgJb9RxwObHBgt25+EgLBSNSTGsUZ/i1HEqO0p?=
 =?us-ascii?Q?Wy4aKOetyWM5mPxt0v8QQo9KLMVQlJ/RAqlb4lCi3T7rL95u1gB1SzRcPZaH?=
 =?us-ascii?Q?d5lHJ/GMMmgduBk2Mt1MS+8k5tt77zRqWDHKagA7BgHETpF2uy+Yx74WDLQS?=
 =?us-ascii?Q?VfcWpDb1MgUgLALw8PCeqyj7y80/bggYe3Oa/fwfeTI+ye4QwZ3Gdj4Zvi7A?=
 =?us-ascii?Q?p7rPNL18SJ87yP4yA0Xda82S+SFgfxdfeSKVW9W7HaHlAjOIoLmwyZpp1X7l?=
 =?us-ascii?Q?DQWZqOnwUeoYydy89zCw3xlVQXStYRZfh1QXlZkx4IdJLHdii7GWaeBmFxHf?=
 =?us-ascii?Q?ZwiaiGn0/dBVN6Z38rNqsv5Tmm5E1ev9VVYAaQEvZMP3Q3Fc4J+PX2NgoYZj?=
 =?us-ascii?Q?QGikGAvrliZl4rmafdMldradvgy/2J50OnI1wFdrgyma8wxUVrS14+VUUD7C?=
 =?us-ascii?Q?OHScJOM1PVV8fRl2tpWrfwaccSYCrvQ0V02whZkUgWA6Zl9IF6nU5VLm1zqy?=
 =?us-ascii?Q?B8k4cRWpqHlk1f/qQAPmiVB7?=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <41A31294881B734E9A6B2AD922E8DAAA@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PR3PR08MB5785
Original-Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT022.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	a44bc0c7-90cf-4f47-ff59-08d8cdd3b76a
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	bb5yy3RZf14f6GZMp2QT5FDDf995YwsBdGgEYlkflFntyb46huOMGqdOQnjJMI78dizBt46eJQXfMD3GzWoMLGkBr0LbFO1GWq7vnt3Y/ozJsdgmn9Eo6z0mqkj2dadDWogv3QHbzK7iLgod9Kbs7o/3biykwZFCsWpDdrEnUZhteffVHolnTBTO28Jw1yMN6yQlbuVlmm5BhzAoGIkFwTYZ9zmvsgVLb+zUJzk5YpMjMbX1W/MHx3IgQSW9SWdIbiCRMajOfpigY3iFJsD9c3qK+CoVeBYneU6CWa5/VePZcL3tZ/s2rLX89h5tsV+Rc3/g1wsS9Txe5brXO7Qw/M3nov855aKMbKZKTDcnNcwZA2lGIyDc374BRnoNzOi4KK/8Pv7EOsr78OKi1mbX2v41+yqdnfg862oHiL/CPIpMdposYJUGXzG6rBpu1bdwls2LkcJqJdk6vaXAhnkYoIioD0AruZsEJLUYRx2ckT2paXH+a+Hr3vnlPk2zfhkavJJhOSyDW7BGGSFDEiDfuctzjghTMiVVrn8OXv5iYSFWXFH3CW1cBEMm2maU8Ny5giBfAXGg/PQRrwrqIzxi8R4DKI2pq/+DUxjo3sGG6JpzEQcQEB4ZSMy0gP4BhD57EfaMdxkChmav6N9stTRmLQGTdHcfUNdsL4FYlHc76BZgQF7JG4nudNSmMPw1lWAmfeoo/8edZ7eep+w84OZ/kkLZDIpg1q0HPmxZ5dukSt0=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(136003)(396003)(39850400004)(346002)(376002)(46966006)(36840700001)(36860700001)(6506007)(55236004)(107886003)(186003)(26005)(2906002)(82740400003)(33656002)(47076005)(36756003)(82310400003)(53546011)(81166007)(8936002)(8676002)(356005)(5660300002)(70206006)(316002)(54906003)(478600001)(336012)(6512007)(83380400001)(2616005)(6486002)(86362001)(70586007)(966005)(4326008)(6862004);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Feb 2021 14:54:16.1709
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: bafa2046-98da-46ab-1f63-08d8cdd3bcd8
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT022.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PR3PR08MB5689

Hi Stefano,

> On 9 Feb 2021, at 19:53, Stefano Stabellini <sstabellini@kernel.org> wrote:
>
> PCI buses differ from default buses in a few important ways, so it is
> important to detect them properly. Normally, PCI buses are expected to
> have the following property:
>
>    device_type = "pci"
>
> In reality, that is not always the case. To handle PCI bus nodes that
> don't have the device_type property, also consider the node name: if the
> node name is "pcie" or "pci" then consider the bus as a PCI bus.
>
> This commit is based on the Linux kernel commit
> d1ac0002dd29 "of: address: Work around missing device_type property in
> pcie nodes".
>
> This fixes Xen boot on RPi4. Some RPi4 kernels have the following node
> in their device trees:
>
> &pcie0 {
> 	pci@1,0 {
> 		#address-cells = <3>;
> 		#size-cells = <2>;
> 		ranges;
>
> 		reg = <0 0 0 0 0>;
>
> 		usb@1,0 {
> 				reg = <0x10000 0 0 0 0>;
> 				resets = <&reset RASPBERRYPI_FIRMWARE_RESET_ID_USB>;
> 		};
> 	};
> };
>
> The pci@1,0 node is a PCI bus. If we parse the node and its children as
> a default bus, the reg property under usb@1,0 would have to be
> interpreted as an address range mappable by the CPU, which is not the
> case and would break.
>
> Link: https://lore.kernel.org/xen-devel/YBmQQ3Tzu++AadKx@mattapan.m5p.com/
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
With the type, space and tab fixes already found:

Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Thanks, the commit message is clearer now :-)

Cheers,
Bertrand

> ---
> Changes in v2:
> - improve commit message
>
> diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
> index 18825e333e..f1a96a3b90 100644
> --- a/xen/common/device_tree.c
> +++ b/xen/common/device_tree.c
> @@ -563,14 +563,28 @@ static unsigned int dt_bus_default_get_flags(const __be32 *addr)
>  * PCI bus specific translator
>  */
>
> +static bool_t dt_node_is_pci(const struct dt_device_node *np)
> +{
> +    bool is_pci = !strcmp(np->name, "pcie") || !strcmp(np->name, "pci");
> +
> +    if (is_pci)
> +        printk(XENLOG_WARNING "%s: Missing device_type\n", np->full_name);
> +
> +    return is_pci;
> +}
> +
> static bool_t dt_bus_pci_match(const struct dt_device_node *np)
> {
>     /*
>      * "pciex" is PCI Express "vci" is for the /chaos bridge on 1st-gen PCI
>      * powermacs "ht" is hypertransport
> +     *
> +     * If none of the device_type values match and the node name is
> +     * "pcie" or "pci", accept the device as PCI (with a warning).
>      */
>     return !strcmp(np->type, "pci") || !strcmp(np->type, "pciex") ||
> -        !strcmp(np->type, "vci") || !strcmp(np->type, "ht");
> +        !strcmp(np->type, "vci") || !strcmp(np->type, "ht") ||
> +        dt_node_is_pci(np);
> }
>
> static void dt_bus_pci_count_cells(const struct dt_device_node *np,
>



From xen-devel-bounces@lists.xenproject.org Wed Feb 10 14:58:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 14:58:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83640.156150 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9qxA-0006Lg-Lb; Wed, 10 Feb 2021 14:58:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83640.156150; Wed, 10 Feb 2021 14:58:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9qxA-0006LZ-IA; Wed, 10 Feb 2021 14:58:52 +0000
Received: by outflank-mailman (input) for mailman id 83640;
 Wed, 10 Feb 2021 14:58:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1l9qx9-0006LU-KQ
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 14:58:51 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l9qx8-0003F2-78; Wed, 10 Feb 2021 14:58:50 +0000
Received: from [54.239.6.177] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l9qx8-000090-0Q; Wed, 10 Feb 2021 14:58:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=Cvu4bMne669hrxn1I1H+HxwZqherES5cx1cV9olAPns=; b=xw4vDG5HApz61g4h7GOfgzYvx2
	MKhlzuhrFL/zihjo6k0o27OQSKcuvxL7H+rLQcWfOsMQI8JRUUR7SISe9COmT1bbFzw6E8S4s6MlY
	ZZUyHq6DkPL3y/MZdfzFc3j7WoyxdVEH/3HSr7djOZEPpQ3LTAyrmgDH81wg3QsiH/ks=;
Subject: Re: [for-4.15][PATCH v2 3/5] xen/iommu: iommu_map: Don't crash the
 domain if it is dying
To: Jan Beulich <jbeulich@suse.com>, Julien Grall <julien.grall.oss@gmail.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>, hongyxia@amazon.co.uk,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <jgrall@amazon.com>,
 Paul Durrant <paul@xen.org>
References: <20210209152816.15792-1-julien@xen.org>
 <20210209152816.15792-4-julien@xen.org>
 <04f601d6ff22$1f52cf60$5df86e20$@xen.org>
 <CAJ=z9a18XxQLrUanxg_E7Vups7aRee93_vFhqxu1=yq+VdXH-w@mail.gmail.com>
 <6fb54306-20e6-516f-cdcf-c7d8dd430b96@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <04755ab0-94fe-f797-1cfd-cf8aa22ceba0@xen.org>
Date: Wed, 10 Feb 2021 14:58:48 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <6fb54306-20e6-516f-cdcf-c7d8dd430b96@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 10/02/2021 14:14, Jan Beulich wrote:
> On 09.02.2021 22:14, Julien Grall wrote:
>> On Tue, 9 Feb 2021 at 20:28, Paul Durrant <xadimgnik@gmail.com> wrote:
>>>> From: Julien Grall <julien@xen.org>
>>>> Sent: 09 February 2021 15:28
>>>>
>>>> It is a bit pointless to crash a domain that is already dying. This will
>>>> become more of an annoyance with a follow-up change where page-table
>>>> allocation will be forbidden when the domain is dying.
>>>>
>>>> Security-wise, there is no change as the devices would still have access
>>>> to the IOMMU page-tables even if the domain has crashed, until Xen
>>>> starts to relinquish the resources.
>>>>
>>>> For x86, we rely on dom_iommu(d)->arch.mapping.lock to ensure
>>>> d->is_dying is correctly observed (a follow-up patch will hold it in the
>>>> relinquish path).
> 
> Am I to understand this to mean that at this point of the series
> things aren't really correct yet in this regard? If so, wouldn't
> it be better to re-order?

You asked for this specific order... So are you saying you want me to use 
the original ordering?

> 
>>>> For Arm, there is still a small race possible. But there is so far no
>>>> failure specific to a domain dying.
>>>>
>>>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>>>
>>>> ---
>>>>
>>>> This was spotted when trying to destroy IOREQ servers while the domain
>>>> is dying. The code will try to add the entry back in the P2M and
>>>> therefore update the P2M (see arch_ioreq_server_disable() ->
>>>> hvm_add_ioreq_gfn()).
>>>>
>>>> It should be possible to skip the mapping in hvm_add_ioreq_gfn(), however
>>>> I didn't try a patch yet because checking d->is_dying can be racy (I
>>>> can't find a proper lock).
> 
> I understand the concern. I find it odd though that we permit
> iommu_map() to do anything at all when the domain is already
> dying. So irrespective of the remark below, how about bailing
> from iommu_map() earlier when the domain is dying?

I felt this was potentially too racy to use. But it should be fine if we 
keep the !d->is_dying check below.

> 
>>>> --- a/xen/drivers/passthrough/iommu.c
>>>> +++ b/xen/drivers/passthrough/iommu.c
>>>> @@ -272,7 +272,7 @@ int iommu_map(struct domain *d, dfn_t dfn, mfn_t mfn,
>>>>                               flush_flags) )
>>>>                   continue;
>>>>
>>>> -        if ( !is_hardware_domain(d) )
>>>> +        if ( !is_hardware_domain(d) && !d->is_dying )
>>>>               domain_crash(d);
>>>
>>> Would it make more sense to check is_dying inside domain_crash() (and
>>> turn it into a no-op in that case)?
>>
>> Jan also suggested moving the check in domain_crash(). However, I felt
>> it is potentially a too risky change for 4.15 as there are quite a few
>> callers.
> 
> This is a fair point. However, in such a case I'd prefer symmetry
> at least throughout this one source file (there are three more
> places), unless there are strong reasons against doing so.

I can have a look and see if the decision is easy to make.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 15:04:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 15:04:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83642.156162 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9r2a-0007JG-AX; Wed, 10 Feb 2021 15:04:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83642.156162; Wed, 10 Feb 2021 15:04:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9r2a-0007J9-6h; Wed, 10 Feb 2021 15:04:28 +0000
Received: by outflank-mailman (input) for mailman id 83642;
 Wed, 10 Feb 2021 15:04:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1l9r2Y-0007J4-W0
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 15:04:27 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l9r2X-0003N6-MA; Wed, 10 Feb 2021 15:04:25 +0000
Received: from [54.239.6.177] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l9r2X-0000fT-Bz; Wed, 10 Feb 2021 15:04:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=KOiB/6xxDi9jogBpACkvMAO94VY+jJlBx/xkrOZWoLA=; b=Hc1waZW5aZrAQElmFMEnXp1tRJ
	GstEa7F4EpxlanykFAU3oFMLzo2fr2MD+sojb12XgkAhM/iIlDcqlRUoVcl94JA/JOjH8ut/p1cwy
	UzH8Dpao/i1PHnUMQotnTtqcvv4bYZY8vBo2mYEygNmWCrWjjSXotPKsPdvW4QIFPGa0=;
Subject: Re: [for-4.15][PATCH v2 4/5] xen/iommu: x86: Don't leak the IOMMU
 page-tables
To: Jan Beulich <jbeulich@suse.com>
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Paul Durrant <paul@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210209152816.15792-1-julien@xen.org>
 <20210209152816.15792-5-julien@xen.org>
 <62a791cb-a880-4097-5fec-4f728751b58b@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <712042bf-bec6-dc0f-67ee-b0807887772f@xen.org>
Date: Wed, 10 Feb 2021 15:04:23 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <62a791cb-a880-4097-5fec-4f728751b58b@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 10/02/2021 14:32, Jan Beulich wrote:
> On 09.02.2021 16:28, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> The new IOMMU page-tables allocator will release the pages when
>> relinquishing the domain resources. However, this is not sufficient when
>> the domain is dying because nothing prevents page-tables from being
>> allocated.
>>
>> iommu_alloc_pgtable() now checks whether the domain is dying before
>> adding the page to the list. We rely on &hd->arch.pgtables.lock
>> to synchronize d->is_dying.
> 
> As said in reply to an earlier patch, I think suppressing
> (really: ignoring) new mappings would be better.

This is exactly what I suggested in v1 but you wrote:

"Ignoring requests there seems fragile to me. Paul - what are your
thoughts about bailing early from hvm_add_ioreq_gfn() when the
domain is dying?"

Are you now saying that the following snippet would be fine:

if ( d->is_dying )
   return 0;

> You could
> utilize the same lock, but you'd need to duplicate the
> checking in {amd,intel}_iommu_map_page().
> 
> I'm not entirely certain in the case about unmap requests:
> It may be possible to also suppress/ignore them, but this
> may require some further thought.

I think checking d->is_dying in the unmap part is quite risky because the 
PCI devices may not be quiesced and may still be assigned to the domain.

> 
> Apart from this, just in case we settle on your current
> approach, a few spelling nits:
> 
>> --- a/xen/drivers/passthrough/x86/iommu.c
>> +++ b/xen/drivers/passthrough/x86/iommu.c
>> @@ -149,6 +149,13 @@ int arch_iommu_domain_init(struct domain *d)
>>   
>>   void arch_iommu_domain_destroy(struct domain *d)
>>   {
>> +    /*
>> +     * There should be not page-tables left allocated by the time the
> 
> ... should be no ...
> 
>> +     * domain is destroyed. Note that arch_iommu_domain_destroy() is
>> +     * called unconditionally, so pgtables may be unitialized.
> 
> uninitialized
> 
>> @@ -303,9 +317,29 @@ struct page_info *iommu_alloc_pgtable(struct domain *d)
>>       unmap_domain_page(p);
>>   
>>       spin_lock(&hd->arch.pgtables.lock);
>> -    page_list_add(pg, &hd->arch.pgtables.list);
>> +    /*
>> +     * The IOMMU page-tables are freed when relinquishing the domain, but
>> +     * nothing prevent allocation to happen afterwards. There is no valid
> 
> prevents
> 
>> +     * reasons to continue to update the IOMMU page-tables while the
> 
> reason
> 
>> +     * domain is dying.
>> +     *
>> +     * So prevent page-table allocation when the domain is dying.
>> +     *
>> +     * We relying on &hd->arch.pgtables.lock to synchronize d->is_dying.
> 
> rely
> 
> Jan
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 15:06:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 15:06:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83643.156174 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9r4Q-0007Rx-Mw; Wed, 10 Feb 2021 15:06:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83643.156174; Wed, 10 Feb 2021 15:06:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9r4Q-0007Rq-Ij; Wed, 10 Feb 2021 15:06:22 +0000
Received: by outflank-mailman (input) for mailman id 83643;
 Wed, 10 Feb 2021 15:06:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=c5iL=HM=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1l9r4P-0007Rk-En
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 15:06:21 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com (unknown
 [40.107.14.59]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 04b2ac3d-dac3-4104-8648-c2ca44a8a040;
 Wed, 10 Feb 2021 15:06:18 +0000 (UTC)
Received: from MR2P264CA0022.FRAP264.PROD.OUTLOOK.COM (2603:10a6:500:1::34) by
 DBAPR08MB5717.eurprd08.prod.outlook.com (2603:10a6:10:1ae::23) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3846.25; Wed, 10 Feb 2021 15:06:10 +0000
Received: from VE1EUR03FT017.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:500:1:cafe::dd) by MR2P264CA0022.outlook.office365.com
 (2603:10a6:500:1::34) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3825.27 via Frontend
 Transport; Wed, 10 Feb 2021 15:06:10 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT017.mail.protection.outlook.com (10.152.18.90) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3846.25 via Frontend Transport; Wed, 10 Feb 2021 15:06:09 +0000
Received: ("Tessian outbound f362b81824dc:v71");
 Wed, 10 Feb 2021 15:06:09 +0000
Received: from aa200058ee94.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 AF807945-DADE-4B75-9224-98CF085B72FA.1; 
 Wed, 10 Feb 2021 15:06:03 +0000
Received: from EUR03-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id aa200058ee94.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 10 Feb 2021 15:06:03 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com (2603:10a6:10:49::10)
 by DB6PR08MB2775.eurprd08.prod.outlook.com (2603:10a6:6:17::26) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3825.30; Wed, 10 Feb
 2021 15:06:01 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::f5c1:9694:9263:d90d]) by DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::f5c1:9694:9263:d90d%2]) with mapi id 15.20.3825.030; Wed, 10 Feb 2021
 15:06:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 04b2ac3d-dac3-4104-8648-c2ca44a8a040
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nZkn8seQHPE8H+uNoSkxSEJeWy4Ehr/PYqHcAnXj9sg=;
 b=AahyYdah6hdfg/NjU5ivGZ3RYUOBnfD6NTp3GZVL+MX+9zuekMg8BxHc+8pwDPOiBrGmKRnV6T5y4zIXohxUkx+OsAsbyaSG2l05hOGM8QSFO45fJOhDqZkXSKh6mMQd2WO/G4M5EGw2E/RjkPfM39zDnnX7mlDTjDUImEVOBbM=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 2ae6ba6b8b8ef7b7
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=jPAUhUues+uYZkTAR32lwjQkhH0EpqlKt5b0gd5qYsqcT5YiVZd4MLFPwFPM8ifqFvfe5Vv2i0OW17DZqLMTD5SZNxTB2SeB5/9vnldnNppUgPjT0bzsRbGghxVd1W+R8BPjSxX1TBNy7Y6SZMAHVws5novZ2bDWxY83GWxliKpS92PFKh/BGC5Lu7jK557ysGCLRiYa72xbbz7ZHpSy+wesqZJjjPVgJ3Hs3zhiJ83Z2NImTI9i6sKPG1EbrBgNjQXuG9a0+akZCX0dbyqwRnYfzJE7UtghNcb0ZORcoOmRNwx1AOEm5G6M9h0JQqEMe60GVQtkhSIlSEHP8bRyww==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nZkn8seQHPE8H+uNoSkxSEJeWy4Ehr/PYqHcAnXj9sg=;
 b=fXEPn+vuRju4r4Azafxajzc7OydReMJKivn362NxktI296i1CA+DlbqUUMXmZgihiArWNsoLgyHzc9/LdN4iUNxp+l65uMpVe8akUNzlFbHOf1zm2fW7INNQRbibXARrwg+0Po2D4CevsGhJ7uQwdZkseD4ms04Gl39H5hqtiBoykbybT5sGPDCbFz51dS1I/buWwfw4AGT39PC7YtzY2bBTRnBCAa2gTcaUwd+Y1Vn8A7+BPiXY42DcwOo58643aR8o8wz70pI9Wrp0ywjwTQIlOcEB3eiZ9He25+MwXdoI6tXzoazrKKqzmlHm7JUcGUR3JV5S83cZE5d7AnJobw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nZkn8seQHPE8H+uNoSkxSEJeWy4Ehr/PYqHcAnXj9sg=;
 b=AahyYdah6hdfg/NjU5ivGZ3RYUOBnfD6NTp3GZVL+MX+9zuekMg8BxHc+8pwDPOiBrGmKRnV6T5y4zIXohxUkx+OsAsbyaSG2l05hOGM8QSFO45fJOhDqZkXSKh6mMQd2WO/G4M5EGw2E/RjkPfM39zDnnX7mlDTjDUImEVOBbM=
From: Rahul Singh <Rahul.Singh@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>, "lucmiccio@gmail.com"
	<lucmiccio@gmail.com>, xen-devel <xen-devel@lists.xenproject.org>, Bertrand
 Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Stefano Stabellini
	<stefano.stabellini@xilinx.com>
Subject: Re: [PATCH v2] xen/arm: fix gnttab_need_iommu_mapping
Thread-Topic: [PATCH v2] xen/arm: fix gnttab_need_iommu_mapping
Thread-Index: AQHW/ks46k4kUfhzpk6LTniK0S/PbqpPzhoAgAB7ugCAATYRAA==
Date: Wed, 10 Feb 2021 15:06:01 +0000
Message-ID: <EFFD35EA-378B-4C5C-8485-7EA5265E89E4@arm.com>
References: <20210208184932.23468-1-sstabellini@kernel.org>
 <B96B5E21-0600-4664-899D-D38A18DE7A8C@arm.com>
 <alpine.DEB.2.21.2102091226560.8948@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2102091226560.8948@sstabellini-ThinkPad-T480s>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [80.1.41.211]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 8a424a3e-67d1-4f44-3291-08d8cdd56642
x-ms-traffictypediagnostic: DB6PR08MB2775:|DBAPR08MB5717:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<DBAPR08MB5717374E586D207D67ACE213FC8D9@DBAPR08MB5717.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:3968;OLM:3968;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <9EAA45A1A832144FB1060AAADDA71B11@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR08MB2775
Original-Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT017.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	4bb76421-86a6-4626-642d-08d8cdd56158
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	YwrD3/ga9rHCV1e455aswC50SJoZvMVYNgLEDWWlS+PH+swwL7Fs2/G2KHpO5hgj3Kl0Phik/4RqXWc4eXZ+OBGLSFHQiMm2VQ5mCCPobd8GrOl1VWa45Qbk1Xt2WTQOEA3Ip9FEeZedoiG3OeYgiiNSUoPYl3xtWjaUHMacgM+oHT3GI7TlvfSkMsTY6kLCx8A+RxbQnd4OMT9iINmUl3uANd7sbA2e838QAZ/eblpSPkrt+fdKP7wbbPJ4mk6yJ9jpM0SrOlcL0VtNGcA61toOhzw0GFCr8PgInm4RIkVEdp2VGYqidkn7fw/qPDoz075icdOrrX3jU8xarJwo5zEYJtwOa0gluFG86tStEO+Emd53WIxYp0G7qdsr2L673iAGGo0rZzqs4vsTo2rDU/h71t3dn34PqA6Rk9sSpJplt6TpYpn+Gp2hcRpoxhv8orPS9bIfBWBIWW2tIghOlgO5FGKyYpIZtxJsAUu2v+a15T6bHEjS1ahe1WhoX71ln+q58ZhJ9yBVE0UxjKRyBaEN14EN+P8m7xjX7t0xvyR659Kj566BNYrKHUyhxFlrh+yUw6PGjmo6Q/KqEAhXTXfhQ3aIvM8qH4tx+5c1CH3Pm7KDDdI7Ec2J+pK5bLSXr9+DuZBFE39xdMi6M8/zSLPGt2TKdBN5uzII24vpgAKev5IdNGrcQ7QAo5FCHGGp63kXUSmNDyDo7UFbHm5PVJpL1nsa60+SM+XWMcYiwg4=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(376002)(39850400004)(346002)(396003)(136003)(36840700001)(46966006)(186003)(2616005)(6862004)(86362001)(107886003)(5660300002)(36860700001)(82310400003)(26005)(83380400001)(45080400002)(316002)(966005)(2906002)(6512007)(4326008)(36756003)(356005)(70206006)(8676002)(478600001)(6486002)(70586007)(47076005)(53546011)(33656002)(336012)(82740400003)(81166007)(8936002)(54906003)(6506007);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Feb 2021 15:06:09.8558
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 8a424a3e-67d1-4f44-3291-08d8cdd56642
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT017.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR08MB5717

Hello Stefano,

> On 9 Feb 2021, at 8:36 pm, Stefano Stabellini <sstabellini@kernel.org> wrote:
> 
> On Tue, 9 Feb 2021, Rahul Singh wrote:
>>> On 8 Feb 2021, at 6:49 pm, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>> 
>>> Commit 91d4eca7add broke gnttab_need_iommu_mapping on ARM.
>>> The offending chunk is:
>>> 
>>> #define gnttab_need_iommu_mapping(d)                    \
>>> -    (is_domain_direct_mapped(d) && need_iommu(d))
>>> +    (is_domain_direct_mapped(d) && need_iommu_pt_sync(d))
>>> 
>>> On ARM we need gnttab_need_iommu_mapping to be true for dom0 when it is
>>> directly mapped and IOMMU is enabled for the domain, like the old check
>>> did, but the new check is always false.
>>> 
>>> In fact, need_iommu_pt_sync is defined as dom_iommu(d)->need_sync and
>>> need_sync is set as:
>>> 
>>>   if ( !is_hardware_domain(d) || iommu_hwdom_strict )
>>>       hd->need_sync = !iommu_use_hap_pt(d);
>>> 
>>> iommu_use_hap_pt(d) means that the page-table used by the IOMMU is the
>>> P2M. It is true on ARM. need_sync means that you have a separate IOMMU
>>> page-table and it needs to be updated for every change. need_sync is set
>>> to false on ARM. Hence, gnttab_need_iommu_mapping(d) is false too,
>>> which is wrong.
>>> 
>>> As a consequence, when using PV network from a domU on a system where
>>> IOMMU is on from Dom0, I get:
>>> 
>>> (XEN) smmu: /smmu@fd800000: Unhandled context fault: fsr=0x402, iova=0x8424cb148, fsynr=0xb0001, cb=0
>>> [   68.290307] macb ff0e0000.ethernet eth0: DMA bus error: HRESP not OK
>>> 
>>> The fix is to go back to something along the lines of the old
>>> implementation of gnttab_need_iommu_mapping.
>>> 
>>> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
>>> Fixes: 91d4eca7add
>>> Backport: 4.12+
>>> 
>>> ---
>>> 
>>> Given the severity of the bug, I would like to request this patch to be
>>> backported to 4.12 too, even if 4.12 is security-fixes only since Oct
>>> 2020.
>>> 
>>> For the 4.12 backport, we can use iommu_enabled() instead of
>>> is_iommu_enabled() in the implementation of gnttab_need_iommu_mapping.
>>> 
>>> Changes in v2:
>>> - improve commit message
>>> - add is_iommu_enabled(d) to the check
>>> ---
>>> xen/include/asm-arm/grant_table.h | 2 +-
>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>> 
>>> diff --git a/xen/include/asm-arm/grant_table.h b/xen/include/asm-arm/grant_table.h
>>> index 6f585b1538..0ce77f9a1c 100644
>>> --- a/xen/include/asm-arm/grant_table.h
>>> +++ b/xen/include/asm-arm/grant_table.h
>>> @@ -89,7 +89,7 @@ int replace_grant_host_mapping(unsigned long gpaddr, mfn_t mfn,
>>>    (((i) >= nr_status_frames(t)) ? INVALID_GFN : (t)->arch.status_gfn[i])
>>> 
>>> #define gnttab_need_iommu_mapping(d)                    \
>>> -    (is_domain_direct_mapped(d) && need_iommu_pt_sync(d))
>>> +    (is_domain_direct_mapped(d) && is_iommu_enabled(d))
>>> 
>>> #endif /* __ASM_GRANT_TABLE_H__ */
>> 
>> I tested the patch and while creating the guest I observed the below warning from Linux for block device.
>> https://elixir.bootlin.com/linux/v4.3/source/drivers/block/xen-blkback/xenbus.c#L258
> 
> So you are creating a guest with "xl create" in dom0 and you see the
> warnings below printed by the Dom0 kernel? I imagine the domU has a
> virtual "disk" of some sort.

Yes, you are right: I am trying to create the guest with "xl create", and
before that I created a logical volume and tried to attach the logical
volume block to the domain with "xl block-attach". I observed this error
with the "xl block-attach" command.

This issue occurs after applying this patch: the patch introduces calls to
iommu_legacy_{,un}map() to map the grant pages for the IOMMU, which touches
the page-tables. I am not sure, but what I observed is that something is
written incorrectly when iommu_legacy_unmap() unmaps the pages, and because
of that the issue is observed.

You can reproduce the issue by just creating a dummy image file and trying
to attach it with "xl block-attach":

dd if=/dev/zero of=test.img bs=1024 count=0 seek=1024
xl block-attach 0 phy:test.img xvda w

The sequence of commands I follow to reproduce the issue is:

lvs vg-xen/myguest
lvcreate -y -L 4G -n myguest vg-xen
parted -s /dev/vg-xen/myguest mklabel msdos
parted -s /dev/vg-xen/myguest unit MB mkpart primary 1 4096
sync
xl block-attach 0 phy:/dev/vg-xen/myguest xvda w

libxl: error: libxl_xshelp.c:201:libxl__xs_read_mandatory: xenstore read failed: `/libxl/0/type': No such file or directory
libxl: warning: libxl_dom.c:51:libxl__domain_type: unable to get domain type for domid=0, assuming HVM
[  162.632232] xen-blkback: backend/vbd/0/51712: using 4 queues, protocol 1 (arm-abi) persistent grants
[  162.764847] vbd vbd-0-51712: 9 mapping in shared page 8 from domain 0
[  162.771740] vbd vbd-0-51712: 9 mapping ring-ref port 5
[  162.777650] ------------[ cut here ]------------
[  162.782167] WARNING: CPU: 2 PID: 37 at drivers/block/xen-blkback/xenbus.c:296 xen_blkif_disconnect+0x20c/0x230
[  162.792230] Modules linked in: bridge stp llc ipv6 nf_defrag_ipv6
[  162.798394] CPU: 2 PID: 37 Comm: xenwatch Not tainted 5.4.0-yocto-standard #1
[  162.805597] Hardware name: Arm Neoverse N1 System Development Platform (DT)
[  162.812630] pstate: 80c00005 (Nzcv daif +PAN +UAO)
[  162.817489] pc : xen_blkif_disconnect+0x20c/0x230
[  162.822262] lr : xen_blkif_disconnect+0xbc/0x230
[  162.826949] sp : ffff800011cb3c80
[...]

Regards,
Rahul

> 
> 
>> I did initial debugging and found out that there are many calls to iommu_legacy_{,un}map() while creating the guest but when iommu_legacy_unmap() function unmap the pages something is written wrong in page tables because of that when next time same page is mapped via create_grant_host_mapping() we observed below warning.
> 
> By "while creating a guest", do you mean before the domU is even
> unpaused? Hence, the calls are a result of dom0 operations? Or after
> domU is unpaused, hence, the calls are a result of domU operations
> (probably the domU simply trying to access its virtual disk)?
> Please note that you can start a guest paused with xl create -p.
> 
> Looking at the logs, it is probably the latter. The following line
> should be printed when the domU PV block frontend connects to the
> backend in dom0:
> 
> [  138.639934] xen-blkback: backend/vbd/0/51712: using 4 queues, protocol 1 (arm-abi) persistent grants
> 
> I'll see if I can repro the issue here.
> 
> 
>> [  138.639934] xen-blkback: backend/vbd/0/51712: using 4 queues, protocol 1 (arm-abi) persistent grants
>> (XEN) gnttab_mark_dirty not implemented yet
>> [  138.659702] xen-blkback: backend/vbd/0/51712: using 4 queues, protocol 1 (arm-abi) persistent grants
>> [  138.669827] vbd vbd-0-51712: 9 mapping in shared page 8 from domain 0
>> [  138.676636] vbd vbd-0-51712: 9 mapping ring-ref port 5
>> [  138.682089] ------------[ cut here ]------------
>> [  138.686605] WARNING: CPU: 2 PID: 37 at drivers/block/xen-blkback/xenbus.c:296 xen_blkif_disconnect+0x20c/0x230
>> [  138.696668] Modules linked in: bridge stp llc ipv6 nf_defrag_ipv6
>> [  138.702833] CPU: 2 PID: 37 Comm: xenwatch Not tainted 5.4.0-yocto-standard #1
>> [  138.710037] Hardware name: Arm Neoverse N1 System Development Platform (DT)
>> [  138.717067] pstate: 80c00005 (Nzcv daif +PAN +UAO)
>> [  138.721927] pc : xen_blkif_disconnect+0x20c/0x230
>> [  138.726701] lr : xen_blkif_disconnect+0xbc/0x230
>> [  138.731388] sp : ffff800011cb3c80
>> [  138.734773] x29: ffff800011cb3c80 x28: ffff00005b6da940
>> [  138.740156] x27: 0000000000000000 x26: 0000000000000000
>> [  138.745536] x25: ffff000029755060 x24: 0000000000000170
>> [  138.750919] x23: ffff000029755040 x22: ffff000059c72000
>> [  138.756299] x21: 0000000000000000 x20: ffff000029755000
>> [  138.761681] x19: 0000000000000001 x18: 0000000000000000
>> [  138.767063] x17: 0000000000000000 x16: 0000000000000000
>> [  138.772444] x15: 0000000000000000 x14: 0000000000000000
>> [  138.777826] x13: 0000000000000000 x12: 0000000000000000
>> [  138.783207] x11: 0000000000000001 x10: 0000000000000990
>> [  138.788589] x9 : 0000000000000001 x8 : 0000000000210d00
>> [  138.793971] x7 : 0000000000000018 x6 : ffff00005ddf72a0
>> [  138.799352] x5 : ffff800011cb3c28 x4 : 0000000000000000
>> [  138.804734] x3 : ffff000029755118 x2 : 0000000000000000
>> [  138.810117] x1 : ffff000029755120 x0 : 0000000000000001
>> [  138.815497] Call trace:
>> [  138.818015]  xen_blkif_disconnect+0x20c/0x230
>> [  138.822442]  frontend_changed+0x1b0/0x54c
>> [  138.826523]  xenbus_otherend_changed+0x80/0xb0
>> [  138.831035]  frontend_changed+0x10/0x20
>> [  138.834941]  xenwatch_thread+0x80/0x144
>> [  138.838849]  kthread+0x118/0x120
>> [  138.842147]  ret_from_fork+0x10/0x18
>> [  138.845791] ---[ end trace fb9f0a3b3b48a55f ]---


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 15:25:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 15:25:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83646.156186 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9rMa-0000sN-DC; Wed, 10 Feb 2021 15:25:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83646.156186; Wed, 10 Feb 2021 15:25:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9rMa-0000sG-A4; Wed, 10 Feb 2021 15:25:08 +0000
Received: by outflank-mailman (input) for mailman id 83646;
 Wed, 10 Feb 2021 15:25:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=X/zh=HM=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1l9rMY-0000sB-N6
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 15:25:06 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2231718c-182f-452e-96a8-0e5f21157712;
 Wed, 10 Feb 2021 15:25:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2231718c-182f-452e-96a8-0e5f21157712
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612970705;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=vyIgeLSJb5a9rLDbPUxwfP6jZo4f6iPsP5l4BS2OFWw=;
  b=XavaoN81BV06VEBOTs+0hBQT/a2/TIjLbCMnD4a6ktNfJZToDzbA8m7M
   OTjXBsqwE8+6BRDer0xIvXjyiQLBhUItctyk403O3Qf4FgEyEB8destPO
   zlv2yocu0tLPo5iLCOhqj/RUdW5zGHnqvwOlkNJgtvTbnfLGt9UwGEQfN
   E=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 36991317
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,168,1610427600"; 
   d="scan'208";a="36991317"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Da3/gl/EjEgfBO0l0dAFkEfVX0hD4jCFe8dGZlrFweunE4qxcc+7WTge6yIjplx9ai9rUK51L+2amm9ewgLFRDqc575SnrH1KSUej2fDhm8K+EsTV8TY/ervyD0s2YOjxMmIQCMh51WqulYlm9KMREfEZ/loW9RY1iEfANOHp6MWrf96lzY9EHrrBtxCBJ8fH1Ccy4ApddCSc2l8AS9tqsxmlLYJPbGiRspSfFyxDY7bDzE0QXWLO+LT0tyUjdGBYryerwna4puMAVN6nsNx1++0w60tXALKwvcT8B4x4UMlsNySNd8GH2RLZYdqHynvVG2ZB4aScjXaEMSTZ8VqJw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=EjsImBlz3+FL/q/Zd8u4pOch7FZ1qhKtrHNIEJ8Lqpc=;
 b=d9Fkj6wXYLXlNfaBCFVNRK3aCv8feOTQ9+fxhRyHp+y04GmSYnpCMPakXkoIfFNCyywG10EvXuXAt904j9K77djI4PA1dPM96bPuYTzDysxgMFzzFl2DAIKR0y6hLnZahYlMx+Amk/ikB3HZJr+QR/4yJcBJix73+ZSLOQrUzQ2P+52TKGYGfKWtf58bMeiJNN0PUhwTHysGt0bBDTiqecrKuK309qrLPcUD+CwP3IY1siTxWHLLVcW2T0btjD0pX1/9fnFGri5xTTj0mz9XloKHFjdQFoNYbvslqkQkgCyLK/511rgJ+rO24xAC3CbbEw+uWyOx5nctXmyEZduJ1g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=EjsImBlz3+FL/q/Zd8u4pOch7FZ1qhKtrHNIEJ8Lqpc=;
 b=qMN5SO+3TyjmVBVtXXdOskxdfaWbtHhls73tPF/C3CziM7HkSl//XlYCGbp9ADIxzkgdxxTl1Cc84tQ+gtrf3yWj/D0bBLchrdWfzviSurnpFdGNCfz7sitsXKab063/aziR51JxAULDiM7Oq4YgFutVBYy9z9KhP0sM6xgvBXY=
Date: Wed, 10 Feb 2021 16:24:55 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: <xen-devel@lists.xenproject.org>, <hongyxia@amazon.co.uk>,
	<iwj@xenproject.org>, Julien Grall <jgrall@amazon.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Julien Grall
	<julien@xen.org>
Subject: Re: [for-4.15][PATCH v2 1/5] xen/x86: p2m: Don't map the special
 pages in the IOMMU page-tables
Message-ID: <YCP6x9ApfJQuhALl@Air-de-Roger>
References: <20210209152816.15792-2-julien@xen.org>
 <YCOZbNly7YCSNtHY@Air-de-Roger>
 <5bf0a2de-3f0e-8860-7bc7-f667437aa3a7@suse.com>
 <YCPE0byWKlf/uOFT@Air-de-Roger>
 <65797b03-7bd8-92e9-b6c7-e8eccde9f8ba@suse.com>
 <e1c7c616-0941-b577-5842-a51374030798@xen.org>
 <71c4150a-0b81-cdc3-b752-814f58cb5ca4@suse.com>
 <df760d78-a439-db0a-4b88-813b002f0a64@xen.org>
 <YCPJXe1L1SCXoL7a@Air-de-Roger>
 <bb242b17-01f3-6312-b563-f82abc5d300a@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <bb242b17-01f3-6312-b563-f82abc5d300a@suse.com>
X-ClientProxiedBy: PR3P189CA0114.EURP189.PROD.OUTLOOK.COM
 (2603:10a6:102:b5::29) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 44893c94-dd53-4f55-4827-08d8cdd80848
X-MS-TrafficTypeDiagnostic: DM6PR03MB4473:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB44732AE4E8B25423F224CE728F8D9@DM6PR03MB4473.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 44893c94-dd53-4f55-4827-08d8cdd80848
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Feb 2021 15:25:01.1337
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4473
X-OriginatorOrg: citrix.com

On Wed, Feb 10, 2021 at 02:12:38PM +0100, Jan Beulich wrote:
> On 10.02.2021 12:54, Roger Pau Monné wrote:
> > On Wed, Feb 10, 2021 at 11:48:40AM +0000, Julien Grall wrote:
> >> It feels wrong to me to setup a per-domain mapping when initializing the
> >> first vCPU.
> >>
> >> But, I was under the impression that there is plan to remove
> >> XEN_DOMCTL_max_vcpus. So it would only buy just a bit of time...
> > 
> > I was also under that impression. We could setup the lapic access page
> > at vlapic_init for the BSP (which is part of XEN_DOMCTL_max_vcpus
> > ATM).
> > 
> > But then I think there should be some kind of check to prevent
> > populating either the CPU or the IOMMU page tables at domain creation
> > hypercall, and so the logic to free CPU table tables on failure could
> > be removed.
> 
> I can spot paging_final_teardown() on an error path there, but I
> don't suppose that's what you mean? I guess I'm not looking in
> the right place (there are quite a few after all) ...

Well, I assume some freeing of the EPT page tables must happen on
error paths, or else we would be leaking them the same way the IOMMU
tables are currently leaked?

Maybe I'm not understanding the issue correctly here.

Roger.


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 15:27:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 15:27:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83647.156198 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9rP7-00011P-Qr; Wed, 10 Feb 2021 15:27:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83647.156198; Wed, 10 Feb 2021 15:27:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9rP7-00011I-NV; Wed, 10 Feb 2021 15:27:45 +0000
Received: by outflank-mailman (input) for mailman id 83647;
 Wed, 10 Feb 2021 15:27:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qQOe=HM=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1l9rP6-00011D-68
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 15:27:44 +0000
Received: from galois.linutronix.de (unknown [2a0a:51c0:0:12e:550::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3ef8103c-ecf1-46eb-9464-c4d1b3f1b489;
 Wed, 10 Feb 2021 15:27:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3ef8103c-ecf1-46eb-9464-c4d1b3f1b489
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1612970862;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type;
	bh=izmVBGwgZIHobxzE3AKB3/PFOL7XlGjrBYkkrejnxqA=;
	b=Gy3/NgyO2BGhmzAQwucpYtHiUBH7oMqdaFRQqH/jYBxQKmm4VNnbkGvgO0Lib1ZzA9NjqQ
	DLC+sMliBRHGg6FMsQDj1WK8fYojTmpPQeREB2B8SHNaeQUCNar8iMMboJ4IxHLAiddx3e
	IIsZ0G1jEeB421sUk+Dx65HubQwKerjGmyasvS9dhMCmxA0GvmEENCgcxucwl4h/aftjZu
	5IqTKadxjzXGLTkt98q4MjpdNCq/VSQfswCKxKfWMt7+xEsCqBVzB2S2mRtoEpQVp0a5EE
	dACDgGzQwZrTTsFDoUwoPt0XwZFUbbikB0tf3qLpnErivThJVFywXvT7Lh5xKQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1612970862;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type;
	bh=izmVBGwgZIHobxzE3AKB3/PFOL7XlGjrBYkkrejnxqA=;
	b=KqL+aFrQeS6pNATjhzD2eKbnUMsIHoJJOtKWV6L2Nc60iRmAY+bk1Ab7Fs+wxbEsbJxpDo
	4HdZDGI4NpebaxBg==
To: LKML <linux-kernel@vger.kernel.org>
Cc: Juergen Gross <jgross@suse.com>, x86@kernel.org, Bjorn Helgaas <helgaas@kernel.org>, xen-devel@lists.xenproject.org
Subject: [PATCH] x86/pci: Create PCI/MSI irqdomain after x86_init.pci.arch_init()
Date: Wed, 10 Feb 2021 16:27:41 +0100
Message-ID: <87pn18djte.fsf@nanos.tec.linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain


Invoking x86_init.irqs.create_pci_msi_domain() before
x86_init.pci.arch_init() breaks XEN PV.

The XEN_PV specific pci.arch_init() function overrides the default
create_pci_msi_domain() which is obviously too late.

As a consequence the XEN PV PCI/MSI allocation goes through the native
path which runs out of vectors and causes malfunction.

Invoke it after x86_init.pci.arch_init().

Fixes: 6b15ffa07dc3 ("x86/irq: Initialize PCI/MSI domain at PCI init time")
Reported-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Juergen Gross <jgross@suse.com>
Cc: stable@vger.kernel.org
---
 arch/x86/pci/init.c |   15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

--- a/arch/x86/pci/init.c
+++ b/arch/x86/pci/init.c
@@ -9,16 +9,23 @@
    in the right sequence from here. */
 static __init int pci_arch_init(void)
 {
-	int type;
-
-	x86_create_pci_msi_domain();
+	int type, pcbios = 1;

	type = pci_direct_probe();

	if (!(pci_probe & PCI_PROBE_NOEARLY))
		pci_mmcfg_early_init();

-	if (x86_init.pci.arch_init && !x86_init.pci.arch_init())
+	if (x86_init.pci.arch_init)
+		pcbios = x86_init.pci.arch_init();
+
+	/*
+	 * Must happen after x86_init.pci.arch_init(). Xen sets up the
+	 * x86_init.irqs.create_pci_msi_domain there.
+	 */
+	x86_create_pci_msi_domain();
+
+	if (!pcbios)
		return 0;

	pci_pcbios_init();


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 15:40:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 15:40:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83651.156214 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9rbB-0002p5-W6; Wed, 10 Feb 2021 15:40:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83651.156214; Wed, 10 Feb 2021 15:40:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9rbB-0002oy-S5; Wed, 10 Feb 2021 15:40:13 +0000
Received: by outflank-mailman (input) for mailman id 83651;
 Wed, 10 Feb 2021 15:40:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9rbA-0002oq-3v; Wed, 10 Feb 2021 15:40:12 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9rb9-0003x3-Sd; Wed, 10 Feb 2021 15:40:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9rb9-0004OK-KX; Wed, 10 Feb 2021 15:40:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l9rb9-0000kB-Jy; Wed, 10 Feb 2021 15:40:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Igd2+9+vtvM20h12OVTLGSGv1fDbYx4x8jydmJeY+70=; b=J3hwY+xcZZaStqgIMfF9sqYIF2
	b2eC7juD9/AYQXupCoXDorcRpT9C+E+/eNmDcMHWs9/0bLna72p2FLLZ9TPpwtTfHdx0l8/k2GngQ
	SPbDsznDluR0b4/5ML14rCANkIssPsjsQ7btYPclGu2WvG8aYwaqJa1PlbFXeunxYIxM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159210-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 159210: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=90b014a6e6ecad036ec5846426afd19b305dedff
X-Osstest-Versions-That:
    xen=687121f8a0e7c1ea1c4fa3d056637e5819342f14
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 10 Feb 2021 15:40:11 +0000

flight 159210 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159210/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 159191
 build-arm64-xsm               6 xen-build                fail REGR. vs. 159191
 build-armhf                   6 xen-build                fail REGR. vs. 159191

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  90b014a6e6ecad036ec5846426afd19b305dedff
baseline version:
 xen                  687121f8a0e7c1ea1c4fa3d056637e5819342f14

Last test of basis   159191  2021-02-09 23:00:29 Z    0 days
Failing since        159206  2021-02-10 12:01:51 Z    0 days    2 attempts
Testing same since   159210  2021-02-10 14:00:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 90b014a6e6ecad036ec5846426afd19b305dedff
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Feb 9 15:28:57 2021 +0000

    x86/ucode/amd: Fix microcode payload size for Fam19 processors
    
    The original limit provided wasn't accurate.  Blobs are in fact rather larger.
    
    Fixes: fe36a173d1 ("x86/amd: Initial support for Fam19h processors")
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit 0e898ad8bc86d76485ce7a6a29ff2d3fa34d070d
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Feb 9 20:49:07 2021 +0000

    x86/ucode/amd: Handle length sanity check failures more gracefully
    
    Currently, a failure of verify_patch_size() causes an early abort of the
    microcode blob loop, which in turn causes a second go around the main
    container loop, ultimately failing the UCODE_MAGIC check.
    
    First, check for errors after the blob loop.  An error here is unrecoverable,
    so avoid going around the container loop again and printing an
    unhelpful-at-best error concerning bad UCODE_MAGIC.
    
    Second, split the verify_patch_size() check out of the microcode blob header
    check.  In the case that the sanity check fails, we can still use the
    known-to-be-plausible header length to continue walking the container to
    potentially find other applicable microcode blobs.
    
    Before:
      (XEN) microcode: Bad microcode data
      (XEN) microcode: Wrong microcode patch file magic
      (XEN) Parsing microcode blob error -22
    
    After:
      (XEN) microcode: Bad microcode length 0x000015c0 for cpu 0xa000
      (XEN) microcode: Bad microcode length 0x000015c0 for cpu 0xa010
      (XEN) microcode: Bad microcode length 0x000015c0 for cpu 0xa011
      (XEN) microcode: Bad microcode length 0x000015c0 for cpu 0xa200
      (XEN) microcode: Bad microcode length 0x000015c0 for cpu 0xa210
      (XEN) microcode: Bad microcode length 0x000015c0 for cpu 0xa500
      (XEN) microcode: couldn't find any matching ucode in the provided blob!
    
    Fixes: 4de936a38a ("x86/ucode/amd: Rework parsing logic in cpu_request_microcode()")
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 1cbc4d89c45cba3929f1c0cb4bca0b000c4f174b
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Feb 9 22:10:54 2021 +0000

    x86/ucode/amd: Fix OoB read in cpu_request_microcode()
    
    verify_patch_size() is a maximum size check, and doesn't have a minimum bound.
    
    If the microcode container encodes a blob with a length less than 64 bytes,
    the subsequent calls to microcode_fits()/compare_header() may read off the end
    of the buffer.
    
    Fixes: 4de936a38a ("x86/ucode/amd: Rework parsing logic in cpu_request_microcode()")
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 20077d035224c6f3b3eef8b111b8b842635f2c40
Author: Roger Pau Monne <roger.pau@citrix.com>
Date:   Fri Feb 5 12:53:27 2021 +0100

    tools/configure: add bison as mandatory
    
    Bison is now mandatory when the pvshim build is enabled in order to
    generate the Kconfig.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>
    Reviewed-by: Ian Jackson <iwj@xenproject.org>

commit c9b0242ad44a739e0f4c72b67fd4bcf4b042f800
Author: Roger Pau Monne <roger.pau@citrix.com>
Date:   Thu Feb 4 10:38:33 2021 +0100

    autoconf: check endian.h include path
    
    Introduce an autoconf macro to check for the include path of certain
    headers that can be different between OSes.
    
    Use such macro to find the correct path for the endian.h header, and
    modify the users of endian.h to use the output of such check.
    
    Suggested-by: Ian Jackson <iwj@xenproject.org>
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Ian Jackson <iwj@xenproject.org>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 15:53:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 15:53:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83655.156228 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9rnz-0003rf-6Q; Wed, 10 Feb 2021 15:53:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83655.156228; Wed, 10 Feb 2021 15:53:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9rnz-0003rY-3P; Wed, 10 Feb 2021 15:53:27 +0000
Received: by outflank-mailman (input) for mailman id 83655;
 Wed, 10 Feb 2021 15:53:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jO30=HM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l9rny-0003rT-Nr
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 15:53:26 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a6fb4d56-6a7d-4e5a-b8c6-5b9bb183efac;
 Wed, 10 Feb 2021 15:53:25 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E890DAB98;
 Wed, 10 Feb 2021 15:53:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a6fb4d56-6a7d-4e5a-b8c6-5b9bb183efac
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612972405; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=8uzm/DX6yvAf4dtEdrcBQOAQnj44/gh/KVu3tROWKUc=;
	b=LAmJjPDRcNDBskP2AC0ZgtotRZ0oZplBdBfscy0/1aqYUPj3TVq03TqIjD5lj9Tx11g9Ew
	Hsq81N7ECFi/aEwRlDyX36Tdpd4IiZpZzdQC9eNIJ3veLS9q/vU12egqZgBKXirfLa3fp8
	w5bsY2cZsgjk9IWqNXbZ/YhyKU166sI=
Subject: Re: [for-4.15][PATCH v2 1/5] xen/x86: p2m: Don't map the special
 pages in the IOMMU page-tables
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, hongyxia@amazon.co.uk,
 iwj@xenproject.org, Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Julien Grall <julien@xen.org>
References: <20210209152816.15792-2-julien@xen.org>
 <YCOZbNly7YCSNtHY@Air-de-Roger>
 <5bf0a2de-3f0e-8860-7bc7-f667437aa3a7@suse.com>
 <YCPE0byWKlf/uOFT@Air-de-Roger>
 <65797b03-7bd8-92e9-b6c7-e8eccde9f8ba@suse.com>
 <e1c7c616-0941-b577-5842-a51374030798@xen.org>
 <71c4150a-0b81-cdc3-b752-814f58cb5ca4@suse.com>
 <df760d78-a439-db0a-4b88-813b002f0a64@xen.org>
 <YCPJXe1L1SCXoL7a@Air-de-Roger>
 <bb242b17-01f3-6312-b563-f82abc5d300a@suse.com>
 <YCP6x9ApfJQuhALl@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <14ed7349-4af7-9882-af1f-08c1739cbf5f@suse.com>
Date: Wed, 10 Feb 2021 16:53:24 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <YCP6x9ApfJQuhALl@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 10.02.2021 16:24, Roger Pau Monné wrote:
> On Wed, Feb 10, 2021 at 02:12:38PM +0100, Jan Beulich wrote:
>> On 10.02.2021 12:54, Roger Pau Monné wrote:
>>> On Wed, Feb 10, 2021 at 11:48:40AM +0000, Julien Grall wrote:
>>>> It feels wrong to me to setup a per-domain mapping when initializing the
>>>> first vCPU.
>>>>
>>>> But, I was under the impression that there is plan to remove
>>>> XEN_DOMCTL_max_vcpus. So it would only buy just a bit of time...
>>>
>>> I was also under that impression. We could setup the lapic access page
>>> at vlapic_init for the BSP (which is part of XEN_DOMCTL_max_vcpus
>>> ATM).
>>>
>>> But then I think there should be some kind of check to prevent
>>> populating either the CPU or the IOMMU page tables at domain creation
>>> hypercall, and so the logic to free CPU page tables on failure could
>>> be removed.
>>
>> I can spot paging_final_teardown() on an error path there, but I
>> don't suppose that's what you mean? I guess I'm not looking in
>> the right place (there are quite a few after all) ...
> 
> Well, I assume some freeing of the EPT page tables must happen on
> error paths, or else we would be leaking them like IOMMU tables are
> leaked currently?

Well, you can't eliminate paging_final_teardown() from that
error path because it frees internal structures. It _also_
sets HAP's / shadow's allocation to zero, so it has the side
effect of freeing what may have been CPU page tables.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 15:56:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 15:56:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83657.156241 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9rr9-00041U-Lj; Wed, 10 Feb 2021 15:56:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83657.156241; Wed, 10 Feb 2021 15:56:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9rr9-00041N-Ih; Wed, 10 Feb 2021 15:56:43 +0000
Received: by outflank-mailman (input) for mailman id 83657;
 Wed, 10 Feb 2021 15:56:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jO30=HM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l9rr8-00041I-Q9
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 15:56:42 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7e8a3170-d20d-4420-b319-9ae35a8bd557;
 Wed, 10 Feb 2021 15:56:41 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A4F26AFED;
 Wed, 10 Feb 2021 15:56:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7e8a3170-d20d-4420-b319-9ae35a8bd557
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612972600; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=A3LmuoCGZFmMOdU5/rOompxGG8TBTh2cmu5JiDWGxtk=;
	b=dMimHsB759xBYyE9oJ35hx3m8fZ15oCHtOWqcX/fN/XE6AANyxLXqPYb4xaYbWkLt/voFd
	4vKAQluBSOy0gMo1z9J1mzDLvnx5FgO8htPN6TuTYhUPnI6GmzLw/2dxDjGmdzq3oy2S+c
	BT8rgVE9OkY7HxkX/Y/yWcfNlxgD7oY=
Subject: Re: [for-4.15][PATCH v2 3/5] xen/iommu: iommu_map: Don't crash the
 domain if it is dying
To: Julien Grall <julien@xen.org>, Julien Grall <julien.grall.oss@gmail.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>, hongyxia@amazon.co.uk,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <jgrall@amazon.com>,
 Paul Durrant <paul@xen.org>
References: <20210209152816.15792-1-julien@xen.org>
 <20210209152816.15792-4-julien@xen.org>
 <04f601d6ff22$1f52cf60$5df86e20$@xen.org>
 <CAJ=z9a18XxQLrUanxg_E7Vups7aRee93_vFhqxu1=yq+VdXH-w@mail.gmail.com>
 <6fb54306-20e6-516f-cdcf-c7d8dd430b96@suse.com>
 <04755ab0-94fe-f797-1cfd-cf8aa22ceba0@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <82fc1bcf-eff8-13bd-7679-5e8d0a17661f@suse.com>
Date: Wed, 10 Feb 2021 16:56:39 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <04755ab0-94fe-f797-1cfd-cf8aa22ceba0@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 10.02.2021 15:58, Julien Grall wrote:
> Hi Jan,
> 
> On 10/02/2021 14:14, Jan Beulich wrote:
>> On 09.02.2021 22:14, Julien Grall wrote:
>>> On Tue, 9 Feb 2021 at 20:28, Paul Durrant <xadimgnik@gmail.com> wrote:
>>>>> From: Julien Grall <julien@xen.org>
>>>>> Sent: 09 February 2021 15:28
>>>>>
>>>>> It is a bit pointless to crash a domain that is already dying. This will
>>>>> become more an annoyance with a follow-up change where page-table
>>>>> allocation will be forbidden when the domain is dying.
>>>>>
>>>>> Security wise, there is no change as the devices would still have access
>>>>> to the IOMMU page-tables even if the domain has crashed until Xen
>>>>> starts to relinquish the resources.
>>>>>
>>>>> For x86, we rely on dom_iommu(d)->arch.mapping.lock to ensure
>>>>> d->is_dying is correctly observed (a follow-up patch will hold it in the
>>>>> relinquish path).
>>
>> Am I to understand this to mean that at this point of the series
>> things aren't really correct yet in this regard? If so, wouldn't
>> it be better to re-order?
> 
> You asked for this specific order... So are you saying you want me to use 
> the original ordering?

Well, it's been a while and I don't recall the specific reason
for the request. But then at least the spin_barrier() you mean
to rely on could / should be moved here?

>>>>> For Arm, there is still a small race possible. But there is so far no
>>>>> failure specific to a domain dying.
>>>>>
>>>>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>>>>
>>>>> ---
>>>>>
>>>>> This was spotted when trying to destroy IOREQ servers while the domain
>>>>> is dying. The code will try to add the entry back in the P2M and
>>>>> therefore update the P2M (see arch_ioreq_server_disable() ->
>>>>> hvm_add_ioreq_gfn()).
>>>>>
>>>>> It should be possible to skip the mapping in hvm_add_ioreq_gfn(), however
>>>>> I didn't try a patch yet because checking d->is_dying can be racy (I
>>>>> can't find a proper lock).
>>
>> I understand the concern. I find it odd though that we permit
>> iommu_map() to do anything at all when the domain is already
>> dying. So irrespective of the remark below, how about bailing
>> from iommu_map() earlier when the domain is dying?
> 
> I felt this was potentially too racy to use. But it should be fine if we 
> keep the !d->is_dying check below.

Why? As per later comments I didn't necessarily mean iommu_map()
literally - as indicated, the per-vendor functions ought to be
suitable to place the check, right after having taken the lock.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 15:58:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 15:58:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83658.156252 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9rsw-00049V-6o; Wed, 10 Feb 2021 15:58:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83658.156252; Wed, 10 Feb 2021 15:58:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9rsw-00049O-3t; Wed, 10 Feb 2021 15:58:34 +0000
Received: by outflank-mailman (input) for mailman id 83658;
 Wed, 10 Feb 2021 15:58:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l9rsu-00049H-F6
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 15:58:32 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l9rsu-0004Gc-C0
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 15:58:32 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1l9rsu-0004kR-Av
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 15:58:32 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1l9rsr-0000KE-0C; Wed, 10 Feb 2021 15:58:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Subject:CC:To:Date:Message-ID:
	Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=anaMKxXPZNARA9p0XBY0RLmDeOf/tum7GlpdhDHS2tI=; b=UVRyUcfi/CVzv8dehVi7sEsTzv
	J1NuA709OiURIyoTPbFl7WJh1cnv/hdWALXOf8SRgywoifBnJnF6fey/xZLG8OkJ+y/08k2107/jZ
	5KfsJzH7NqEbOGW1IFv0cZwNk4aSInuN8oNxvK6/PoWCp0rAbbpJcLFwpbN+zhYMS+gk=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24612.676.586372.372903@mariner.uk.xensource.com>
Date: Wed, 10 Feb 2021 15:58:28 +0000
To: committers@xenproject.org,
CC: xen-devel@lists.xenproject.org,
    community.manager@xenproject.org
Subject: [ANNOUNCE] Xen 4.15 - hard codefreeze slip by one week
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Hello.  Unfortunately we are having difficulty with osstest due to a
combination of an ill-timed Debian update and Linux kernel regressions
which got into the upstream stable trees and thence into Debian.  I
have been working to try to resolve this situation.  That has taken
time I should have been spending on release management and caused a
delay to pushes.  There have been a few other bugs, some in recently
introduced patches.  And we still have some bugs being investigated.

So I have decided to slip by one week.  Accordingly, here is the new
freeze status and remaining schedule:

  We are in feature freeze.  No new features should be committed to
  xen.git#staging.

  You may continue to commit straightforward bugfixes, docs changes, and
  new tests, without a release-ack.  Anything involving reorganisation
  or refactoring should get a release ack.  If in doubt please ask me
  and I will grant (or withhold) permission.

* Hard codefreeze (after which all patches will need a release manager
* ack) will occur on the 19th of February.
*
* Friday 19th February   Code freeze

       Bugfixes only, all changes to be approved by the Release Manager.

* Week of 19th March **tentative**    Release
       (probably Tuesday or Wednesday)

  Any patches containing substantial refactoring are to be treated as
  new features, even if their intent is to fix bugs.

  Freeze exceptions will not be routine, but may be granted in
  exceptional cases for small changes on the basis of risk assessment.
  Large series will not get exceptions.  Contributors *must not* rely on
  getting, or expect, a freeze exception.

  New or improved tests (supposing they do not involve refactoring,
  even build system reorganisation), and documentation improvements,
  will generally be treated as bugfixes.

  The release date is provisional and will be adjusted in the light
  of apparent code quality etc.

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 16:12:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 16:12:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83661.156265 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9s65-0006UB-Dt; Wed, 10 Feb 2021 16:12:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83661.156265; Wed, 10 Feb 2021 16:12:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9s65-0006U4-Am; Wed, 10 Feb 2021 16:12:09 +0000
Received: by outflank-mailman (input) for mailman id 83661;
 Wed, 10 Feb 2021 16:12:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jO30=HM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l9s63-0006Ty-HJ
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 16:12:07 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 96ebd348-776a-4962-8926-4747fd7de88d;
 Wed, 10 Feb 2021 16:12:06 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D0098B024;
 Wed, 10 Feb 2021 16:12:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 96ebd348-776a-4962-8926-4747fd7de88d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [for-4.15][PATCH v2 4/5] xen/iommu: x86: Don't leak the IOMMU
 page-tables
To: Julien Grall <julien@xen.org>
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Paul Durrant <paul@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210209152816.15792-1-julien@xen.org>
 <20210209152816.15792-5-julien@xen.org>
 <62a791cb-a880-4097-5fec-4f728751b58b@suse.com>
 <712042bf-bec6-dc0f-67ee-b0807887772f@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <01546a53-1b1a-af35-a67e-7612e619961d@suse.com>
Date: Wed, 10 Feb 2021 17:12:05 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <712042bf-bec6-dc0f-67ee-b0807887772f@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 10.02.2021 16:04, Julien Grall wrote:
> On 10/02/2021 14:32, Jan Beulich wrote:
>> On 09.02.2021 16:28, Julien Grall wrote:
>>> From: Julien Grall <jgrall@amazon.com>
>>>
>>> The new IOMMU page-tables allocator will release the pages when
>>> relinquishing the domain resources. However, this is not sufficient
>>> when the domain is dying, because nothing prevents page-tables from
>>> being allocated.
>>>
>>> iommu_alloc_pgtable() is now checking if the domain is dying before
>>> adding the page in the list. We are relying on &hd->arch.pgtables.lock
>>> to synchronize d->is_dying.
>>
>> As said in reply to an earlier patch, I think suppressing
>> (really: ignoring) new mappings would be better.
> 
> This is exactly what I suggested in v1 but you wrote:
> 
> "Ignoring requests there seems fragile to me. Paul - what are your
> thoughts about bailing early from hvm_add_ioreq_gfn() when the
> domain is dying?"

Was this on the thread of this patch? I didn't find such a
reply of mine. I need more context here because you name
hvm_add_ioreq_gfn() above, while I refer to iommu_map()
(and downwards the call stack).

> Are you now saying that the following snippet would be fine:
> 
> if ( d->is_dying )
>    return 0;

In {amd,intel}_iommu_map_page(), after the lock was acquired
and with it suitably released, yes. And if that's what you
suggested, then I'm sorry - I don't think I can see anything
fragile there anymore.
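For illustration, a minimal self-contained sketch of the suggested early bail-out (hypothetical names throughout; the real check would sit in {amd,intel}_iommu_map_page() under hd->arch.pgtables.lock, with the full set of parameters):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for the relevant part of Xen's struct domain. */
struct domain {
    bool is_dying;
};

/*
 * Sketch of the suggested behaviour: once the domain is observed dying,
 * ignore the map request (returning success) rather than allocating new
 * page-table pages that would then be leaked.  'mapped' reports whether
 * a mapping would actually have been installed.
 */
static int iommu_map_page_sketch(const struct domain *d, bool *mapped)
{
    /* spin_lock(&hd->arch.pgtables.lock);  -- synchronises is_dying */
    if ( d->is_dying )
    {
        /* spin_unlock(&hd->arch.pgtables.lock); */
        *mapped = false;
        return 0;           /* ignore, rather than fail, the request */
    }

    /* ... install the mapping, possibly allocating page tables ... */
    *mapped = true;
    /* spin_unlock(&hd->arch.pgtables.lock); */
    return 0;
}
```

The point being that callers see success either way; only the side effect (allocating and recording page-table pages) is suppressed for a dying domain.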

>> You could
>> utilize the same lock, but you'd need to duplicate the
>> checking in {amd,intel}_iommu_map_page().
>>
>> I'm not entirely certain in the case about unmap requests:
>> It may be possible to also suppress/ignore them, but this
>> may require some further thought.
> 
> I think the unmap part is quite risky with regard to d->is_dying because
> the PCI devices may not be quiesced and may still be assigned to the domain.

Hmm, yes, good point. Of course upon first unmap with is_dying
observed set we could zap the root page table, but I don't
suppose that's something we want to do for 4.15.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 16:48:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 16:48:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83664.156280 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9sfG-000117-A1; Wed, 10 Feb 2021 16:48:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83664.156280; Wed, 10 Feb 2021 16:48:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9sfG-000110-6n; Wed, 10 Feb 2021 16:48:30 +0000
Received: by outflank-mailman (input) for mailman id 83664;
 Wed, 10 Feb 2021 16:48:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jO30=HM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l9sfF-00010v-Gh
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 16:48:29 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id abb190ba-46fa-4cd4-afea-0190bcd40ad2;
 Wed, 10 Feb 2021 16:48:27 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0F920AE2D;
 Wed, 10 Feb 2021 16:48:27 +0000 (UTC)
X-Inumbo-ID: abb190ba-46fa-4cd4-afea-0190bcd40ad2
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Kevin Tian <kevin.tian@intel.com>, Julien Grall <julien@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] VMX: use a single, global APIC access page
Message-ID: <1b6a411b-84e7-bfb1-647e-511a13df838c@suse.com>
Date: Wed, 10 Feb 2021 17:48:26 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

The address of this page is used by the CPU only to recognize when to
access the virtual APIC page instead. No accesses would ever go
to this page. It only needs to be present in the (CPU) page tables so
that address translation will produce its address as result for
respective accesses.

By making this page global, we also eliminate the need to refcount it,
or to assign it to any domain in the first place.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Hooking p2m insertion onto arch_domain_creation_finished() isn't very
nice, but I couldn't find any better hook (nor a good place where to
introduce a new one). In particular there appear to be no hvm_funcs hooks
being used on a domain-wide basis (except for init/destroy of course).
I did consider connecting this to the setting of HVM_PARAM_IDENT_PT, but
considered this no better, especially since the tool stack could be
smarter and avoid setting that param when not needed.

I did further consider not allocating any real page at all, but just
using the address of some unpopulated space (which would require
announcing this page as reserved to Dom0, so it wouldn't put any PCI
MMIO BARs there). But I thought this would be too controversial, because
of the possible risks associated with this.

Perhaps the change to p2m_get_iommu_flags() should be in a separate
patch.

--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1007,6 +1007,8 @@ int arch_domain_soft_reset(struct domain
 
 void arch_domain_creation_finished(struct domain *d)
 {
+    if ( is_hvm_domain(d) )
+        hvm_domain_creation_finished(d);
 }
 
 #define xen_vcpu_guest_context vcpu_guest_context
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -66,8 +66,7 @@ boolean_param("force-ept", opt_force_ept
 static void vmx_ctxt_switch_from(struct vcpu *v);
 static void vmx_ctxt_switch_to(struct vcpu *v);
 
-static int  vmx_alloc_vlapic_mapping(struct domain *d);
-static void vmx_free_vlapic_mapping(struct domain *d);
+static int alloc_vlapic_mapping(void);
 static void vmx_install_vlapic_mapping(struct vcpu *v);
 static void vmx_update_guest_cr(struct vcpu *v, unsigned int cr,
                                 unsigned int flags);
@@ -78,6 +77,8 @@ static int vmx_msr_read_intercept(unsign
 static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content);
 static void vmx_invlpg(struct vcpu *v, unsigned long linear);
 
+static mfn_t __read_mostly apic_access_mfn;
+
 /* Values for domain's ->arch.hvm_domain.pi_ops.flags. */
 #define PI_CSW_FROM (1u << 0)
 #define PI_CSW_TO   (1u << 1)
@@ -401,7 +402,6 @@ static int vmx_domain_initialise(struct
         .to   = vmx_ctxt_switch_to,
         .tail = vmx_do_resume,
     };
-    int rc;
 
     d->arch.ctxt_switch = &csw;
 
@@ -411,21 +411,16 @@ static int vmx_domain_initialise(struct
      */
     d->arch.hvm.vmx.exec_sp = is_hardware_domain(d) || opt_ept_exec_sp;
 
-    if ( !has_vlapic(d) )
-        return 0;
-
-    if ( (rc = vmx_alloc_vlapic_mapping(d)) != 0 )
-        return rc;
-
     return 0;
 }
 
-static void vmx_domain_relinquish_resources(struct domain *d)
+static void domain_creation_finished(struct domain *d)
 {
-    if ( !has_vlapic(d) )
-        return;
 
-    vmx_free_vlapic_mapping(d);
+    if ( !mfn_eq(apic_access_mfn, _mfn(0)) &&
+         set_mmio_p2m_entry(d, gaddr_to_gfn(APIC_DEFAULT_PHYS_BASE),
+                            apic_access_mfn, PAGE_ORDER_4K) )
+        domain_crash(d);
 }
 
 static void vmx_init_ipt(struct vcpu *v)
@@ -2407,7 +2402,7 @@ static struct hvm_function_table __initd
     .cpu_up_prepare       = vmx_cpu_up_prepare,
     .cpu_dead             = vmx_cpu_dead,
     .domain_initialise    = vmx_domain_initialise,
-    .domain_relinquish_resources = vmx_domain_relinquish_resources,
+    .domain_creation_finished = domain_creation_finished,
     .vcpu_initialise      = vmx_vcpu_initialise,
     .vcpu_destroy         = vmx_vcpu_destroy,
     .save_cpu_ctxt        = vmx_save_vmcs_ctxt,
@@ -2653,7 +2648,7 @@ const struct hvm_function_table * __init
 {
     set_in_cr4(X86_CR4_VMXE);
 
-    if ( vmx_vmcs_init() )
+    if ( vmx_vmcs_init() || alloc_vlapic_mapping() )
     {
         printk("VMX: failed to initialise.\n");
         return NULL;
@@ -3208,7 +3203,7 @@ gp_fault:
     return X86EMUL_EXCEPTION;
 }
 
-static int vmx_alloc_vlapic_mapping(struct domain *d)
+static int __init alloc_vlapic_mapping(void)
 {
     struct page_info *pg;
     mfn_t mfn;
@@ -3216,53 +3211,28 @@ static int vmx_alloc_vlapic_mapping(stru
     if ( !cpu_has_vmx_virtualize_apic_accesses )
         return 0;
 
-    pg = alloc_domheap_page(d, MEMF_no_refcount);
+    pg = alloc_domheap_page(NULL, 0);
     if ( !pg )
         return -ENOMEM;
 
-    if ( !get_page_and_type(pg, d, PGT_writable_page) )
-    {
-        /*
-         * The domain can't possibly know about this page yet, so failure
-         * here is a clear indication of something fishy going on.
-         */
-        domain_crash(d);
-        return -ENODATA;
-    }
-
     mfn = page_to_mfn(pg);
     clear_domain_page(mfn);
-    d->arch.hvm.vmx.apic_access_mfn = mfn;
+    apic_access_mfn = mfn;
 
-    return set_mmio_p2m_entry(d, gaddr_to_gfn(APIC_DEFAULT_PHYS_BASE), mfn,
-                              PAGE_ORDER_4K);
-}
-
-static void vmx_free_vlapic_mapping(struct domain *d)
-{
-    mfn_t mfn = d->arch.hvm.vmx.apic_access_mfn;
-
-    d->arch.hvm.vmx.apic_access_mfn = _mfn(0);
-    if ( !mfn_eq(mfn, _mfn(0)) )
-    {
-        struct page_info *pg = mfn_to_page(mfn);
-
-        put_page_alloc_ref(pg);
-        put_page_and_type(pg);
-    }
+    return 0;
 }
 
 static void vmx_install_vlapic_mapping(struct vcpu *v)
 {
     paddr_t virt_page_ma, apic_page_ma;
 
-    if ( mfn_eq(v->domain->arch.hvm.vmx.apic_access_mfn, _mfn(0)) )
+    if ( mfn_eq(apic_access_mfn, _mfn(0)) )
         return;
 
     ASSERT(cpu_has_vmx_virtualize_apic_accesses);
 
     virt_page_ma = page_to_maddr(vcpu_vlapic(v)->regs_page);
-    apic_page_ma = mfn_to_maddr(v->domain->arch.hvm.vmx.apic_access_mfn);
+    apic_page_ma = mfn_to_maddr(apic_access_mfn);
 
     vmx_vmcs_enter(v);
     __vmwrite(VIRTUAL_APIC_PAGE_ADDR, virt_page_ma);
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -106,6 +106,7 @@ struct hvm_function_table {
      * Initialise/destroy HVM domain/vcpu resources
      */
     int  (*domain_initialise)(struct domain *d);
+    void (*domain_creation_finished)(struct domain *d);
     void (*domain_relinquish_resources)(struct domain *d);
     void (*domain_destroy)(struct domain *d);
     int  (*vcpu_initialise)(struct vcpu *v);
@@ -390,6 +391,12 @@ static inline bool hvm_has_set_descripto
     return hvm_funcs.set_descriptor_access_exiting;
 }
 
+static inline void hvm_domain_creation_finished(struct domain *d)
+{
+    if ( hvm_funcs.domain_creation_finished )
+        alternative_vcall(hvm_funcs.domain_creation_finished, d);
+}
+
 static inline int
 hvm_guest_x86_mode(struct vcpu *v)
 {
@@ -765,6 +772,11 @@ static inline void hvm_invlpg(const stru
 {
     ASSERT_UNREACHABLE();
 }
+
+static inline void hvm_domain_creation_finished(struct domain *d)
+{
+    ASSERT_UNREACHABLE();
+}
 
 /*
  * Shadow code needs further cleanup to eliminate some HVM-only paths. For
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -58,7 +58,6 @@ struct ept_data {
 #define _VMX_DOMAIN_PML_ENABLED    0
 #define VMX_DOMAIN_PML_ENABLED     (1ul << _VMX_DOMAIN_PML_ENABLED)
 struct vmx_domain {
-    mfn_t apic_access_mfn;
     /* VMX_DOMAIN_* */
     unsigned int status;
 
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -935,6 +935,9 @@ static inline unsigned int p2m_get_iommu
         flags = IOMMUF_readable;
         if ( !rangeset_contains_singleton(mmio_ro_ranges, mfn_x(mfn)) )
             flags |= IOMMUF_writable;
+        /* VMX'es APIC access page is global and hence has no owner. */
+        if ( mfn_valid(mfn) && !page_get_owner(mfn_to_page(mfn)) )
+            flags = 0;
         break;
     default:
         flags = 0;


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 16:56:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 16:56:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83666.156292 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9smT-00020S-4B; Wed, 10 Feb 2021 16:55:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83666.156292; Wed, 10 Feb 2021 16:55:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9smT-00020L-0P; Wed, 10 Feb 2021 16:55:57 +0000
Received: by outflank-mailman (input) for mailman id 83666;
 Wed, 10 Feb 2021 16:55:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jO30=HM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l9smS-00020G-1b
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 16:55:56 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f145fcb5-7d0d-4417-b674-29d059e1a0a3;
 Wed, 10 Feb 2021 16:55:53 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 191BBB029;
 Wed, 10 Feb 2021 16:55:53 +0000 (UTC)
X-Inumbo-ID: f145fcb5-7d0d-4417-b674-29d059e1a0a3
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH 04/17] x86/PV: harden guest memory accesses against
 speculative abuse
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
References: <4f1975a9-bdd9-f556-9db5-eb6c428f258f@suse.com>
 <5da0c123-3b90-97e8-e1e5-10286be38ce7@suse.com>
 <YCK3sH/4EVLzRfZ3@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d3b62090-fdb5-068b-93ab-63f8bebc9d2e@suse.com>
Date: Wed, 10 Feb 2021 17:55:52 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <YCK3sH/4EVLzRfZ3@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 09.02.2021 17:26, Roger Pau Monné wrote:
> On Thu, Jan 14, 2021 at 04:04:57PM +0100, Jan Beulich wrote:
>> --- a/xen/arch/x86/usercopy.c
>> +++ b/xen/arch/x86/usercopy.c
>> @@ -10,12 +10,19 @@
>>  #include <xen/sched.h>
>>  #include <asm/uaccess.h>
>>  
>> -unsigned __copy_to_user_ll(void __user *to, const void *from, unsigned n)
>> +#ifndef GUARD
>> +# define GUARD UA_KEEP
>> +#endif
>> +
>> +unsigned int copy_to_guest_ll(void __user *to, const void *from, unsigned int n)
>>  {
>>      unsigned dummy;
>>  
>>      stac();
>>      asm volatile (
>> +        GUARD(
>> +        "    guest_access_mask_ptr %[to], %q[scratch1], %q[scratch2]\n"
> 
> Don't you need to also take 'n' into account here to assert that the
> address doesn't end in hypervisor address space? Or that's fine as
> speculation wouldn't go that far?

Like elsewhere this leverages that the hypervisor VA range starts
immediately after the non-canonical hole. I'm unaware of
speculation being able to cross over that hole.

> I also wonder why this needs to be done in assembly, could you check
> the address(es) using C?

For this to be efficient (in avoiding speculation) the insn
sequence had better not contain any conditional jumps. I don't
think the compiler can be told to guarantee that.
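For illustration, the branchless idea can be mimicked in C (hypothetical layout constant; Xen's guest_access_mask_ptr is an assembly macro precisely so that a cmp/rcr/and style sequence is guaranteed, whereas a compiler is free to emit a conditional branch for the comparison below):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical constant: on x86-64 the hypervisor VA range is taken to
 * begin right after the non-canonical hole. */
#define HV_VIRT_START 0xffff800000000000ULL

/*
 * Branchless clamp: guest pointers pass through unchanged; anything at
 * or above HV_VIRT_START gets bit 63 cleared, turning it into a
 * non-canonical address which faults on dereference instead of letting
 * (speculative) accesses reach hypervisor data.
 */
static uint64_t mask_guest_ptr(uint64_t ptr)
{
    uint64_t in_hv = ptr >= HV_VIRT_START;  /* 0 or 1, computed arithmetically */
    uint64_t mask  = ~0ULL >> in_hv;        /* all ones, or bit 63 clear */

    return ptr & mask;
}
```

This only sketches the concept: whether the compiler actually emits a setcc/shift rather than a jump is not under the programmer's control in C, which is the motivation for doing it in assembly.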

>> --- a/xen/include/asm-x86/uaccess.h
>> +++ b/xen/include/asm-x86/uaccess.h
>> @@ -13,13 +13,19 @@
>>  unsigned copy_to_user(void *to, const void *from, unsigned len);
>>  unsigned clear_user(void *to, unsigned len);
>>  unsigned copy_from_user(void *to, const void *from, unsigned len);
>> +
>>  /* Handles exceptions in both to and from, but doesn't do access_ok */
>> -unsigned __copy_to_user_ll(void __user*to, const void *from, unsigned n);
>> -unsigned __copy_from_user_ll(void *to, const void __user *from, unsigned n);
>> +unsigned int copy_to_guest_ll(void __user*to, const void *from, unsigned int n);
>> +unsigned int copy_from_guest_ll(void *to, const void __user *from, unsigned int n);
>> +unsigned int copy_to_unsafe_ll(void *to, const void *from, unsigned int n);
>> +unsigned int copy_from_unsafe_ll(void *to, const void *from, unsigned int n);
>>  
>>  extern long __get_user_bad(void);
>>  extern void __put_user_bad(void);
>>  
>> +#define UA_KEEP(args...) args
>> +#define UA_DROP(args...)
> 
> I assume UA means user access, and since you have dropped other uses
> of user and changed to guest instead I wonder if we should name this
> just A_{KEEP/DROP}.

Like in the name of the file I mean to see 'u' stand for "unsafe"
going forward. (A single letter name prefix would also seem more
prone to future collisions to me.)

Jan


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 17:00:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 17:00:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83668.156304 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9squ-0002yf-LU; Wed, 10 Feb 2021 17:00:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83668.156304; Wed, 10 Feb 2021 17:00:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9squ-0002yY-IC; Wed, 10 Feb 2021 17:00:32 +0000
Received: by outflank-mailman (input) for mailman id 83668;
 Wed, 10 Feb 2021 17:00:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=08uA=HM=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l9sqs-0002yT-NY
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 17:00:30 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d6bc416f-5210-4c55-82ac-0223bcaa7d92;
 Wed, 10 Feb 2021 17:00:29 +0000 (UTC)
X-Inumbo-ID: d6bc416f-5210-4c55-82ac-0223bcaa7d92
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
Subject: Re: [PATCH] VMX: use a single, global APIC access page
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Kevin Tian <kevin.tian@intel.com>, Julien Grall <julien@xen.org>, Wei Liu
	<wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <1b6a411b-84e7-bfb1-647e-511a13df838c@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <3c7a4cff-3f11-22e1-ed46-e76f62cc08f4@citrix.com>
Date: Wed, 10 Feb 2021 17:00:18 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
In-Reply-To: <1b6a411b-84e7-bfb1-647e-511a13df838c@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0413.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a0::17) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0

On 10/02/2021 16:48, Jan Beulich wrote:
> The address of this page is used by the CPU only to recognize when to
> access the virtual APIC page instead. No accesses would ever go
> to this page. It only needs to be present in the (CPU) page tables so
> that address translation will produce its address as result for
> respective accesses.
>
> By making this page global, we also eliminate the need to refcount it,
> or to assign it to any domain in the first place.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

How certain are you about this?

It's definitely not true on AMD's AVIC - writes very definitely end up
in the backing page if they miss the APIC registers.

This concern was why we didn't use a global page originally, IIRC.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 17:04:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 17:04:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83671.156316 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9suF-00039A-8v; Wed, 10 Feb 2021 17:03:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83671.156316; Wed, 10 Feb 2021 17:03:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9suF-000392-5k; Wed, 10 Feb 2021 17:03:59 +0000
Received: by outflank-mailman (input) for mailman id 83671;
 Wed, 10 Feb 2021 17:03:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jO30=HM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l9suD-00038x-AL
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 17:03:57 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 77678fe8-8068-410c-9e22-03e0a340576e;
 Wed, 10 Feb 2021 17:03:56 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A28ADB03A;
 Wed, 10 Feb 2021 17:03:55 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 77678fe8-8068-410c-9e22-03e0a340576e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612976635; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=KDV7ZGWbjJ9DTIyUn+4dStievokp69P2suXiHdXyUjk=;
	b=Sbh8eXgmg0ys/X12SnaQBbu1evAj3G1FBso4x5jPvgK5Oq0D1+QLrtgsn5/GTO+YxGZ6Kv
	kr6SBqX8/ZU7ASQyYvyeE3v2h1RaG1TmEpWzdb8vmakgNVT/BqEWvTHxEY9HNH3I1FMJoC
	1u8kKVbAER0B1F4mNHBrrzfd/G2UAHE=
Subject: Re: [PATCH] VMX: use a single, global APIC access page
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Kevin Tian <kevin.tian@intel.com>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <1b6a411b-84e7-bfb1-647e-511a13df838c@suse.com>
 <3c7a4cff-3f11-22e1-ed46-e76f62cc08f4@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e86499f9-8849-1d52-06b0-5cce224f4319@suse.com>
Date: Wed, 10 Feb 2021 18:03:54 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <3c7a4cff-3f11-22e1-ed46-e76f62cc08f4@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 10.02.2021 18:00, Andrew Cooper wrote:
> On 10/02/2021 16:48, Jan Beulich wrote:
>> The address of this page is used by the CPU only to recognize when to
>> access the virtual APIC page instead. No accesses would ever go
>> to this page. It only needs to be present in the (CPU) page tables so
>> that address translation will produce its address as result for
>> respective accesses.
>>
>> By making this page global, we also eliminate the need to refcount it,
>> or to assign it to any domain in the first place.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> How certain are you about this?

The documentation (I'm inclined to say: unexpectedly) is very
clear about this; I don't think it had been this clear back at
the time. I'm hoping for Kevin to shout if he's aware of issues
here.

I've also gone through many of our emulation code paths (I
think I caught all relevant ones), and found them all having
suitable guards in place. (This process is why it took until the
evening to have a patch ready.)

Jan


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 17:07:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 17:07:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83673.156328 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9sxJ-0003Ii-OA; Wed, 10 Feb 2021 17:07:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83673.156328; Wed, 10 Feb 2021 17:07:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9sxJ-0003Ib-Kw; Wed, 10 Feb 2021 17:07:09 +0000
Received: by outflank-mailman (input) for mailman id 83673;
 Wed, 10 Feb 2021 17:07:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1l9sxI-0003IW-Dw
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 17:07:08 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l9sxG-0005zz-7g; Wed, 10 Feb 2021 17:07:06 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l9sxF-0005yO-TC; Wed, 10 Feb 2021 17:07:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From;
	bh=27tFXUPh2xwWKtj+Y8n/9TGsuBnMFwQ1a1Ih2Y26U+I=; b=fxliF5bvuBMImMNM6eZaNooTt8
	2vwxq2gJbxz/Pc/zCihJ1UZK7T2Shlu8Whq/Pes7HFXf08uwo8Et8gwOW7oBNvFS+GiGyQ3cI6k7t
	1AaYoc+L/QvJW56s1mgNL2/TzAXAgQSIpHAdO6xSm7X+0om72KBQH4/32drUmKpHScR4=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org,
	Julien Grall <jgrall@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	dwmw@amazon.co.uk
Subject: [PATCH] arm/xen: Don't probe xenbus as part of an early initcall
Date: Wed, 10 Feb 2021 17:06:54 +0000
Message-Id: <20210210170654.5377-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1

From: Julien Grall <jgrall@amazon.com>

After commit 3499ba8198cad ("xen: Fix event channel callback via
INTX/GSI"), xenbus_probe() will be called too early on Arm, resulting
in a guest hang during boot.

Even without the hang, we would have ended up calling
xenbus_probe() twice (the second time in xenbus_probe_initcall()).

We don't need to initialize xenbus_probe() early for Arm guests,
so the call in xen_guest_init() is now removed.

After this change, there is no external caller of xenbus_probe()
left, so the function is made static. Interestingly, there were two
prototypes for it.

Fixes: 3499ba8198cad ("xen: Fix event channel callback via INTX/GSI")
Reported-by: Ian Jackson <iwj@xenproject.org>
Signed-off-by: Julien Grall <jgrall@amazon.com>

CC: dwmw@amazon.co.uk

---

The offending commit unfortunately got backported to stable trees, so
this fix will need to be backported as well.
---
 arch/arm/xen/enlighten.c          | 2 --
 drivers/xen/xenbus/xenbus.h       | 1 -
 drivers/xen/xenbus/xenbus_probe.c | 2 +-
 include/xen/xenbus.h              | 2 --
 4 files changed, 1 insertion(+), 6 deletions(-)

diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index 5a957a9a0984..8ad576ecd0f1 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -370,8 +370,6 @@ static int __init xen_guest_init(void)
 		return -ENOMEM;
 	}
 	gnttab_init();
-	if (!xen_initial_domain())
-		xenbus_probe();
 
 	/*
 	 * Making sure board specific code will not set up ops for
diff --git a/drivers/xen/xenbus/xenbus.h b/drivers/xen/xenbus/xenbus.h
index dc1537335414..2a93b7c9c159 100644
--- a/drivers/xen/xenbus/xenbus.h
+++ b/drivers/xen/xenbus/xenbus.h
@@ -115,7 +115,6 @@ int xenbus_probe_node(struct xen_bus_type *bus,
 		      const char *type,
 		      const char *nodename);
 int xenbus_probe_devices(struct xen_bus_type *bus);
-void xenbus_probe(void);
 
 void xenbus_dev_changed(const char *node, struct xen_bus_type *bus);
 
diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c
index 18ffd0551b54..8a75092bb148 100644
--- a/drivers/xen/xenbus/xenbus_probe.c
+++ b/drivers/xen/xenbus/xenbus_probe.c
@@ -683,7 +683,7 @@ void unregister_xenstore_notifier(struct notifier_block *nb)
 }
 EXPORT_SYMBOL_GPL(unregister_xenstore_notifier);
 
-void xenbus_probe(void)
+static void xenbus_probe(void)
 {
 	xenstored_ready = 1;
 
diff --git a/include/xen/xenbus.h b/include/xen/xenbus.h
index 2c43b0ef1e4d..bf3cfc7c35d0 100644
--- a/include/xen/xenbus.h
+++ b/include/xen/xenbus.h
@@ -192,8 +192,6 @@ void xs_suspend_cancel(void);
 
 struct work_struct;
 
-void xenbus_probe(void);
-
 #define XENBUS_IS_ERR_READ(str) ({			\
 	if (!IS_ERR(str) && strlen(str) == 0) {		\
 		kfree(str);				\
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Feb 10 17:16:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 17:16:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83676.156340 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9t5y-0004GU-LK; Wed, 10 Feb 2021 17:16:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83676.156340; Wed, 10 Feb 2021 17:16:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9t5y-0004GN-HN; Wed, 10 Feb 2021 17:16:06 +0000
Received: by outflank-mailman (input) for mailman id 83676;
 Wed, 10 Feb 2021 17:16:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jO30=HM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1l9t5y-0004GI-1N
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 17:16:06 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d2731210-5bbb-432c-8478-24331b56d369;
 Wed, 10 Feb 2021 17:16:04 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id CC586B151;
 Wed, 10 Feb 2021 17:16:03 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d2731210-5bbb-432c-8478-24331b56d369
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1612977363; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=agRYBhrxy++kck7dfeb0DC0QtkzXEFQPjjTaKrqB02Q=;
	b=Px1pEEuyrJqtMuMzmlk6AmnUrjzFRJ79ZYymcvdKMPOnPW3vKNLx7oj+I4pu/BPFklgs2d
	RcGXP+T+37CiLKYmxjPR/5XphU90HFMkotnSkxQI7EYLlNjWFx5Dv7TrtRKbe9h8rpjNJM
	6W3btDcS4P/McAEf8577VYME6qmI53M=
Subject: Re: [PATCH] VMX: use a single, global APIC access page
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Kevin Tian <kevin.tian@intel.com>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <1b6a411b-84e7-bfb1-647e-511a13df838c@suse.com>
 <3c7a4cff-3f11-22e1-ed46-e76f62cc08f4@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <65d2fe51-a7c5-b6f4-ed6a-430b51db6595@suse.com>
Date: Wed, 10 Feb 2021 18:16:03 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <3c7a4cff-3f11-22e1-ed46-e76f62cc08f4@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 10.02.2021 18:00, Andrew Cooper wrote:
> On 10/02/2021 16:48, Jan Beulich wrote:
>> The address of this page is used by the CPU only to recognize when to
>> access the virtual APIC page instead. No accesses would ever go
>> to this page. It only needs to be present in the (CPU) page tables so
>> that address translation will produce its address as result for
>> respective accesses.
>>
>> By making this page global, we also eliminate the need to refcount it,
>> or to assign it to any domain in the first place.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> How certain are you about this?
> 
> It's definitely not true on AMD's AVIC - writes very definitely end up
> in the backing page if they miss the APIC registers.

Doesn't this require a per-vCPU page then anyway? With a per-domain
one, things can't work correctly in the case you describe.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 17:34:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 17:34:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83678.156351 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9tO1-00067s-7G; Wed, 10 Feb 2021 17:34:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83678.156351; Wed, 10 Feb 2021 17:34:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9tO1-00067l-4E; Wed, 10 Feb 2021 17:34:45 +0000
Received: by outflank-mailman (input) for mailman id 83678;
 Wed, 10 Feb 2021 17:34:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1l9tNz-00067g-Qx
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 17:34:43 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l9tNy-0006Sc-IQ; Wed, 10 Feb 2021 17:34:42 +0000
Received: from [54.239.6.177] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l9tNy-0007fz-4r; Wed, 10 Feb 2021 17:34:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=feHBOTTFHArjZWlBYP8U++OP0bbgQdT5LQSpe+yLsic=; b=rA6vt9RTSpfZTvdz5F6Xg8J1L1
	5bzoJBIbepZ7hz7Ex4ZOZarjWfJgf37vcL7HxE1xU9WdY76+BEuVcl0L4gUipQgkxUOlO7JwIGvav
	FIbuuDM/BOvqPEIAB87wnfXPMSVAwLKcgtvhzMgTEi+vxbnCtAkG6Ymmc2A3l0GdYAZM=;
Subject: Re: [PATCH v2] xen/arm: fix gnttab_need_iommu_mapping
To: Rahul Singh <Rahul.Singh@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: "lucmiccio@gmail.com" <lucmiccio@gmail.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
References: <20210208184932.23468-1-sstabellini@kernel.org>
 <B96B5E21-0600-4664-899D-D38A18DE7A8C@arm.com>
 <alpine.DEB.2.21.2102091226560.8948@sstabellini-ThinkPad-T480s>
 <EFFD35EA-378B-4C5C-8485-7EA5265E89E4@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <4e4f7d25-6f5f-1016-b1c9-7aa902d637b8@xen.org>
Date: Wed, 10 Feb 2021 17:34:40 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <EFFD35EA-378B-4C5C-8485-7EA5265E89E4@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi,

On 10/02/2021 15:06, Rahul Singh wrote:
>> On 9 Feb 2021, at 8:36 pm, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>
>> On Tue, 9 Feb 2021, Rahul Singh wrote:
>>>> On 8 Feb 2021, at 6:49 pm, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>>>
>>>> Commit 91d4eca7add broke gnttab_need_iommu_mapping on ARM.
>>>> The offending chunk is:
>>>>
>>>> #define gnttab_need_iommu_mapping(d)                    \
>>>> -    (is_domain_direct_mapped(d) && need_iommu(d))
>>>> +    (is_domain_direct_mapped(d) && need_iommu_pt_sync(d))
>>>>
>>>> On ARM we need gnttab_need_iommu_mapping to be true for dom0 when it is
>>>> directly mapped and IOMMU is enabled for the domain, like the old check
>>>> did, but the new check is always false.
>>>>
>>>> In fact, need_iommu_pt_sync is defined as dom_iommu(d)->need_sync and
>>>> need_sync is set as:
>>>>
>>>>    if ( !is_hardware_domain(d) || iommu_hwdom_strict )
>>>>        hd->need_sync = !iommu_use_hap_pt(d);
>>>>
>>>> iommu_use_hap_pt(d) means that the page-table used by the IOMMU is the
>>>> P2M. It is true on ARM. need_sync means that you have a separate IOMMU
>>>> page-table and it needs to be updated for every change. need_sync is set
>>>> to false on ARM. Hence, gnttab_need_iommu_mapping(d) is false too,
>>>> which is wrong.
>>>>
>>>> As a consequence, when using PV network from a domU on a system where
>>>> IOMMU is on from Dom0, I get:
>>>>
>>>> (XEN) smmu: /smmu@fd800000: Unhandled context fault: fsr=0x402, iova=0x8424cb148, fsynr=0xb0001, cb=0
>>>> [   68.290307] macb ff0e0000.ethernet eth0: DMA bus error: HRESP not OK
>>>>
>>>> The fix is to go back to something along the lines of the old
>>>> implementation of gnttab_need_iommu_mapping.
>>>>
>>>> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
>>>> Fixes: 91d4eca7add
>>>> Backport: 4.12+
>>>>
>>>> ---
>>>>
>>>> Given the severity of the bug, I would like to request this patch to be
>>>> backported to 4.12 too, even if 4.12 is security-fixes only since Oct
>>>> 2020.
>>>>
>>>> For the 4.12 backport, we can use iommu_enabled() instead of
>>>> is_iommu_enabled() in the implementation of gnttab_need_iommu_mapping.
>>>>
>>>> Changes in v2:
>>>> - improve commit message
>>>> - add is_iommu_enabled(d) to the check
>>>> ---
>>>> xen/include/asm-arm/grant_table.h | 2 +-
>>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>>
>>>> diff --git a/xen/include/asm-arm/grant_table.h b/xen/include/asm-arm/grant_table.h
>>>> index 6f585b1538..0ce77f9a1c 100644
>>>> --- a/xen/include/asm-arm/grant_table.h
>>>> +++ b/xen/include/asm-arm/grant_table.h
>>>> @@ -89,7 +89,7 @@ int replace_grant_host_mapping(unsigned long gpaddr, mfn_t mfn,
>>>>     (((i) >= nr_status_frames(t)) ? INVALID_GFN : (t)->arch.status_gfn[i])
>>>>
>>>> #define gnttab_need_iommu_mapping(d)                    \
>>>> -    (is_domain_direct_mapped(d) && need_iommu_pt_sync(d))
>>>> +    (is_domain_direct_mapped(d) && is_iommu_enabled(d))
>>>>
>>>> #endif /* __ASM_GRANT_TABLE_H__ */
>>>
>>> I tested the patch, and while creating the guest I observed the below warning from Linux for the block device.
>>> https://elixir.bootlin.com/linux/v4.3/source/drivers/block/xen-blkback/xenbus.c#L258
>>
>> So you are creating a guest with "xl create" in dom0 and you see the
>> warnings below printed by the Dom0 kernel? I imagine the domU has a
>> virtual "disk" of some sort.
> 
> Yes, you are right. I am trying to create the guest with "xl create"; before that, I created the logical volume and tried to attach the logical volume
> block to the domain with "xl block-attach". I observed this error with the "xl block-attach" command.
> 
> This issue occurs after applying this patch. From what I observed, the patch introduces calls to iommu_legacy_{,un}map() to map the grant pages for
> the IOMMU, which touches the page-tables. I am not sure, but it looks like something is written incorrectly when iommu_unmap() unmaps the pages,
> and that is when the issue is observed.

Can you clarify what you mean by "written wrong"? What sort of error do 
you see in the iommu_unmap()?

> 
> You can reproduce the issue by just creating a dummy image file and trying to attach it with "xl block-attach":
> 
> dd if=/dev/zero of=test.img bs=1024 count=0 seek=1024
> xl block-attach 0 phy:test.img xvda w
> 
> Sequence of command that I follow is as follows to reproduce the issue:
> 
> lvs vg-xen/myguest
> lvcreate -y -L 4G -n myguest vg-xen
> parted -s /dev/vg-xen/myguest mklabel msdos
> parted -s /dev/vg-xen/myguest unit MB mkpart primary 1 4096
> sync
> xl block-attach 0 phy:/dev/vg-xen/myguest xvda w
> 
> libxl: error: libxl_xshelp.c:201:libxl__xs_read_mandatory: xenstore read failed: `/libxl/0/type': No such file or directory
> libxl: warning: libxl_dom.c:51:libxl__domain_type: unable to get domain type for domid=0, assuming HVM
> [  162.632232] xen-blkback: backend/vbd/0/51712: using 4 queues, protocol 1 (arm-abi) persistent grants
> [  162.764847] vbd vbd-0-51712: 9 mapping in shared page 8 from domain 0
> [  162.771740] vbd vbd-0-51712: 9 mapping ring-ref port 5
> [  162.777650] ------------[ cut here ]------------
> [  162.782167] WARNING: CPU: 2 PID: 37 at drivers/block/xen-blkback/xenbus.c:296 xen_blkif_disconnect+0x20c/0x230

Just to confirm, this splat comes from:

WARN_ON(i != (XEN_BLKIF_REQS_PER_PAGE * blkif->nr_ring_pages));

If so, what are the values for i and blkif->nr_ring_pages?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 17:36:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 17:36:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83679.156363 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9tPr-0006GP-Iy; Wed, 10 Feb 2021 17:36:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83679.156363; Wed, 10 Feb 2021 17:36:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9tPr-0006GI-G8; Wed, 10 Feb 2021 17:36:39 +0000
Received: by outflank-mailman (input) for mailman id 83679;
 Wed, 10 Feb 2021 17:36:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1l9tPq-0006G3-6R
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 17:36:38 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1l9tPq-0006Tt-5Q
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 17:36:38 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1l9tPq-0007oM-3j
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 17:36:38 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1l9tPo-0000Wv-8E; Wed, 10 Feb 2021 17:36:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	Message-Id:Date:Subject:Cc:To:From;
	bh=hTYRoXTtZNmKydjwm79mBnW5f0OPODS85lQ5MlQDIf4=; b=hcN4jAvhDIvSKyYRRJQVHy3I6T
	ra828/KRGRF8wQ3xD1V8Vc5r7Y1S25Qn5f4qpTeQ8WWqt5n2zBpfKljo8FcweCSJV8BXv8GPVBh7U
	wi+gahWRoem5PNV54Gd9rBT6xcotsDP6+egX/YzmsldknaruBidhvGoknNARj76MWOIQ=;
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [OSSTEST PATCH 1/3] production-config: Rewind buster armhf snapshot to 2021-01-24
Date: Wed, 10 Feb 2021 17:36:27 +0000
Message-Id: <20210210173629.4788-1-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

It seems that XXXX

CC: Julien Grall <julien@xen.org>
CC: Stefano Stabellini <sstabellini@kernel.org>
Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 production-config | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/production-config b/production-config
index f783af3c..d89257e5 100644
--- a/production-config
+++ b/production-config
@@ -93,7 +93,7 @@ TftpDiVersion_jessie 2018-06-26
 TftpDiVersion_stretch 2020-09-24
 TftpDiVersion_buster 2021-02-09
 
-DebianMirror_buster_armhf http://snapshot.debian.org/archive/debian/20210201T024125Z/
+DebianMirror_buster_armhf http://snapshot.debian.org/archive/debian/20210124T203726Z/
 
 DebianSnapshotBackports_jessie http://snapshot.debian.org/archive/debian/20190206T211314Z/
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Feb 10 17:36:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 17:36:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83680.156370 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9tPr-0006Gs-TY; Wed, 10 Feb 2021 17:36:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83680.156370; Wed, 10 Feb 2021 17:36:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9tPr-0006Gg-Na; Wed, 10 Feb 2021 17:36:39 +0000
Received: by outflank-mailman (input) for mailman id 83680;
 Wed, 10 Feb 2021 17:36:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1l9tPq-0006G8-AY
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 17:36:38 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1l9tPq-0006Tw-9V
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 17:36:38 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1l9tPq-0007oV-8B
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 17:36:38 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1l9tPo-0000Wv-H8; Wed, 10 Feb 2021 17:36:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=pcXf+OQ/1Tk4x0BUm1rv3F8ihe0S6K2hiU6TMoIlEeg=; b=1nW9fTfhXr9JBRjeAtAm3Dd1x3
	t8H7B3qP85HcJa6ctyCVFqS+8y9/qgpNrkUuRgGtKMuY9XG54qTP+IuvjQP5j4VEpLPF/nK7IPMQE
	QueXpeBwYigM172D8R/OgQDRtq5viA3rwCes+AZ9/xzrIs1FAlKMdfCSurmxpMsY+u6Q=;
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 2/3] production-config: Update d-i version to older Debian snapshot
Date: Wed, 10 Feb 2021 17:36:28 +0000
Message-Id: <20210210173629.4788-2-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210210173629.4788-1-iwj@xenproject.org>
References: <20210210173629.4788-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 production-config | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/production-config b/production-config
index d89257e5..81582ec0 100644
--- a/production-config
+++ b/production-config
@@ -91,7 +91,7 @@ TftpNetbootGroup osstest
 TftpDiVersion_wheezy 2016-06-08
 TftpDiVersion_jessie 2018-06-26
 TftpDiVersion_stretch 2020-09-24
-TftpDiVersion_buster 2021-02-09
+TftpDiVersion_buster 2021-02-10
 
 DebianMirror_buster_armhf http://snapshot.debian.org/archive/debian/20210124T203726Z/
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Feb 10 17:36:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 17:36:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83681.156381 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9tPs-0006Hr-Co; Wed, 10 Feb 2021 17:36:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83681.156381; Wed, 10 Feb 2021 17:36:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9tPs-0006HM-3J; Wed, 10 Feb 2021 17:36:40 +0000
Received: by outflank-mailman (input) for mailman id 83681;
 Wed, 10 Feb 2021 17:36:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1l9tPq-0006GD-Iq
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 17:36:38 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1l9tPq-0006Tz-IC
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 17:36:38 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1l9tPq-0007oe-GJ
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 17:36:38 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1l9tPo-0000Wv-Q6; Wed, 10 Feb 2021 17:36:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=juDKe/7rXRZzHIxGzUePnxmhOsyn4oXjFJ3RnY+L0k0=; b=tLauqod4iWRGCMawxk9/+EA9Rn
	Gc/7EiEFWNNvuDey7qVTn0nqwwVrhRkiDxlyfKIzNcfVmD5pNI0TmzZt5MBIeUXlqOdbyOnagdj44
	qa82U5fBHYubLweGDbVX9mEFJ7cxO362s4zZcKBMH8Ay4D3lzjG18H2zhLL+wz2lZ5iI=;
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 3/3] Disable updates for snapshot.debian.org
Date: Wed, 10 Feb 2021 17:36:29 +0000
Message-Id: <20210210173629.4788-3-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210210173629.4788-1-iwj@xenproject.org>
References: <20210210173629.4788-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Security updates are a separate apt source.

The point of using snapshot is to avoid pulling in uncontrolled
updates, so we need to disable security updates.

The non-security SUITE-updates are disabled by this too.  But
everything is on fire and I don't want another iteration while I
figure out the proper syntax for disabling only the security updates.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 Osstest/Debian.pm | 15 ++++++++++++---
 1 file changed, 12 insertions(+), 3 deletions(-)

diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
index d6e0b59d..ded7cdfc 100644
--- a/Osstest/Debian.pm
+++ b/Osstest/Debian.pm
@@ -972,12 +972,19 @@ END
     preseed_hook_command($ho, 'late_command', $sfx,
 			 debian_dhcp_rofs_fix($ho, '/target'));
 
+    my $disable_security_updates = 0;
+
     my $murl = debian_mirror_url($ho);
     if ($murl =~ m/$c{DebianMirrorAllowExpiredReleaseRegexp}/) {
 	# Inspired by
 	#  https://stackoverflow.com/questions/25039317/is-there-any-setting-in-the-preseed-file-to-ignore-the-release-valid-until-opt/51396935#51396935
 	# In some sense a workaround for the lack of a better way,
 	#  https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=771699
+	$disable_security_updates = 1;
+	# ^ this disables normal -updates too.  That's not desirable
+	#   but I don't want to mess with this now.  Hopefully it can
+	#   be improved later.
+
 	preseed_hook_installscript($ho, $sfx,
             '/usr/lib/apt-setup/generators/', '02IgnoreValidUntil', <<'END');
 #!/bin/sh
@@ -1075,12 +1082,14 @@ END
 d-i mirror/suite string $suite
 END
 
+    $disable_security_updates = 1 if $suite =~ m/jessie/;
+    # security.d.o CDN seems unreliable right now
+    # and jessie-updates is no more, so disable both for jessie
+
     $preseed .= <<'END'
 d-i apt-setup/services-select multiselect
 END
-       if $suite =~ m/jessie/;
-    # security.d.o CDN seems unreliable right now
-    # and jessie-updates is no more
+       if $disable_security_updates;
 
     if (grep { $r{$_} } runvar_glob('*_dmrestrict') and
 	$suite =~ m/stretch/) {
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Feb 10 17:50:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 17:50:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83686.156400 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9tcp-0007uJ-K8; Wed, 10 Feb 2021 17:50:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83686.156400; Wed, 10 Feb 2021 17:50:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9tcp-0007th-GH; Wed, 10 Feb 2021 17:50:03 +0000
Received: by outflank-mailman (input) for mailman id 83686;
 Wed, 10 Feb 2021 17:50:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JVSV=HM=amazon.co.uk=prvs=6681f5337=dwmw@srs-us1.protection.inumbo.net>)
 id 1l9tcn-0007eb-L1
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 17:50:01 +0000
Received: from smtp-fw-4101.amazon.com (unknown [72.21.198.25])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7740b918-6d8d-4ce3-91a9-5b1e1865663c;
 Wed, 10 Feb 2021 17:50:01 +0000 (UTC)
Received: from iad12-co-svc-p1-lb1-vlan2.amazon.com (HELO
 email-inbound-relay-2b-859fe132.us-west-2.amazon.com) ([10.43.8.2])
 by smtp-border-fw-out-4101.iad4.amazon.com with ESMTP;
 10 Feb 2021 17:49:53 +0000
Received: from EX13D37EUA001.ant.amazon.com
 (pdx1-ws-svc-p6-lb9-vlan2.pdx.amazon.com [10.236.137.194])
 by email-inbound-relay-2b-859fe132.us-west-2.amazon.com (Postfix) with ESMTPS
 id 37CA122017C; Wed, 10 Feb 2021 17:49:52 +0000 (UTC)
Received: from EX13D08UEB001.ant.amazon.com (10.43.60.245) by
 EX13D37EUA001.ant.amazon.com (10.43.165.212) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Wed, 10 Feb 2021 17:49:50 +0000
Received: from EX13D08UEB001.ant.amazon.com ([10.43.60.245]) by
 EX13D08UEB001.ant.amazon.com ([10.43.60.245]) with mapi id 15.00.1497.010;
 Wed, 10 Feb 2021 17:49:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7740b918-6d8d-4ce3-91a9-5b1e1865663c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.co.uk; i=@amazon.co.uk; q=dns/txt;
  s=amazon201209; t=1612979401; x=1644515401;
  h=from:to:cc:date:message-id:references:in-reply-to:
   content-id:mime-version:content-transfer-encoding:subject;
  bh=5RwsGI+G2DMh/LT3LUkOrjh5XVrq5GYltMUWmHdYBhA=;
  b=Tk8nJCeWnF4j6MO6wO1Zn2vLoCN9srguu68Ll5gDI1fBau1VsqtnNmly
   bjMgTwtVjb6NYpK4HitvRT36uUWxSv+Y2eduix+hpa+LppMY99PAyvCRS
   tZ6r5CoN6SGXkThjV+jFYFHhPtkuCQ4d1gKNq/ooTOZ0X/xhRTQKCA/hQ
   4=;
X-IronPort-AV: E=Sophos;i="5.81,168,1610409600"; 
   d="scan'208";a="81713523"
Subject: Re: [PATCH] arm/xen: Don't probe xenbus as part of an early initcall
Thread-Topic: [PATCH] arm/xen: Don't probe xenbus as part of an early initcall
From: "Woodhouse, David" <dwmw@amazon.co.uk>
To: "julien@xen.org" <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: "boris.ostrovsky@oracle.com" <boris.ostrovsky@oracle.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"sstabellini@kernel.org" <sstabellini@kernel.org>, "jgross@suse.com"
	<jgross@suse.com>, "stable@vger.kernel.org" <stable@vger.kernel.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>, "iwj@xenproject.org"
	<iwj@xenproject.org>, "Grall, Julien" <jgrall@amazon.co.uk>
Thread-Index: AQHW/89FvDxsOsmt+E6oHHLpfjI51qpRqqKA
Date: Wed, 10 Feb 2021 17:49:49 +0000
Message-ID: <7858866d099732baf56e395a627f610968d24a7d.camel@amazon.co.uk>
References: <20210210170654.5377-1-julien@xen.org>
In-Reply-To: <20210210170654.5377-1-julien@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
x-originating-ip: [10.43.160.207]
Content-Type: text/plain; charset="utf-8"
Content-ID: <873FCF2D6172CE4BBDD0A453F0DDE074@amazon.com>
MIME-Version: 1.0
Precedence: Bulk
Content-Transfer-Encoding: 8bit

On Wed, 2021-02-10 at 17:06 +0000, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> After Commit 3499ba8198cad ("xen: Fix event channel callback via
> INTX/GSI"), xenbus_probe() will be called too early on Arm. This will
> recent to a guest hang during boot.
> 
> If there hang wasn't there, we would have ended up to call
> xenbus_probe() twice (the second time is in xenbus_probe_initcall()).
> 
> We don't need to initialize xenbus_probe() early for Arm guest.
> Therefore, the call in xen_guest_init() is now removed.
> 
> After this change, there is no more external caller for xenbus_probe().
> So the function is turned to a static one. Interestingly there were two
> prototypes for it.
> 
> Fixes: 3499ba8198cad ("xen: Fix event channel callback via INTX/GSI")
> Reported-by: Ian Jackson <iwj@xenproject.org>
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: David Woodhouse <dwmw@amazon.co.uk>
Cc: stable@vger.kernel.org

Amazon Development Centre (London) Ltd. Registered in England and Wales with registration number 04543232 with its registered office at 1 Principal Place, Worship Street, London EC2A 2FA, United Kingdom.



From xen-devel-bounces@lists.xenproject.org Wed Feb 10 18:09:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 18:09:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83690.156418 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9tvE-00019L-8t; Wed, 10 Feb 2021 18:09:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83690.156418; Wed, 10 Feb 2021 18:09:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9tvE-00019E-5F; Wed, 10 Feb 2021 18:09:04 +0000
Received: by outflank-mailman (input) for mailman id 83690;
 Wed, 10 Feb 2021 18:09:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=c5iL=HM=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1l9tvC-000199-BB
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 18:09:02 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com (unknown
 [40.107.20.75]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c1904d8a-4de3-42e6-9bcd-c7595eeedab0;
 Wed, 10 Feb 2021 18:08:59 +0000 (UTC)
Received: from DU2PR04CA0054.eurprd04.prod.outlook.com (2603:10a6:10:234::29)
 by AM8PR08MB5649.eurprd08.prod.outlook.com (2603:10a6:20b:1dd::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3825.20; Wed, 10 Feb
 2021 18:08:57 +0000
Received: from DB5EUR03FT016.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:234::4) by DU2PR04CA0054.outlook.office365.com
 (2603:10a6:10:234::29) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3846.25 via Frontend
 Transport; Wed, 10 Feb 2021 18:08:57 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT016.mail.protection.outlook.com (10.152.20.141) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3846.25 via Frontend Transport; Wed, 10 Feb 2021 18:08:56 +0000
Received: ("Tessian outbound 587c3d093005:v71");
 Wed, 10 Feb 2021 18:08:56 +0000
Received: from be1b077e8999.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 49CEC92F-79DC-40E3-8D86-AD43C57453F1.1; 
 Wed, 10 Feb 2021 18:08:43 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id be1b077e8999.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 10 Feb 2021 18:08:43 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com (2603:10a6:10:49::10)
 by DB6PR08MB2774.eurprd08.prod.outlook.com (2603:10a6:6:23::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3825.30; Wed, 10 Feb
 2021 18:08:42 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::f5c1:9694:9263:d90d]) by DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::f5c1:9694:9263:d90d%2]) with mapi id 15.20.3825.030; Wed, 10 Feb 2021
 18:08:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c1904d8a-4de3-42e6-9bcd-c7595eeedab0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Cqf+jDfoi52ndV4YBOidannMXCAThow9Oqn/kBBPSTc=;
 b=o4gglPRU/tfxPJtYYvYHZTMxLj2keNH4dtQCLhsJUlm5JE6G0eX6saP3j4uVOaExqT3KQUTcRKpyjfGLcj7nEONJ35exBtJO9tU2bx6T32HDwa/8HQ+fgixLPfR95AjvoCEAhGUcTYD7kWvgNtwpldCy5BlpwrZkTyQiX/WBGiY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: a7b105e680586bd2
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=L6ceJI/HPQMFOLYTdAMlJIjkkIM1Hj9ZILXJLCgycu4eJA3eMU9af6f4cN+6tD7rmLnNQ8y1NQPGk5TXLCAwjTdckfJlR36Eixkfg16B7JQ0t8mD25L+mOtexYc5EbSxIq47yWTUvmj24dxmufqNiVSbki2Ww8bF+HoLvHYCCxEgG5XTYmLL0Hf9tksLOQf5hTlnoLQnLYyfPgM5B/Ag3OnFkm3KO6QZ7gZC7xw1BDA30n697wLYN+ucYZn+nCQ52uW6+PYgvPRPeNWoBQqoAhNU6qQlqpc4hfRSDeY0v6JtwLsZ14CPbnvxFu8c/AwjqiP3hJ8ln2pRUW/xSq79Xg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Cqf+jDfoi52ndV4YBOidannMXCAThow9Oqn/kBBPSTc=;
 b=iV96E4oCgXpkSXdSws4nJHiS/pSq2QypgrA5MriBLy8xMvcC2rEt3kehI0S+6hN6+rKS33W7ocXZZ5hY40yZKOQt30YuzNOxJimtV1gipfZmSKcNeKPQaXC3R+XYf+wkDYiqFKdPgL/lfX2GOK1OMkH9FHKNN0cKw8flBoIBS+UdlWgQXdLYxVhUEF0I6zX6dBoGWaEFtykMPnaNKe5ulRKKhodqYPWI90uMFqdxiEsvMoBqLsJc6/WjMKztw0uiFgIm3QP5AcPYaiuOP2VFjPVAdOH/JaUmFQvhzNZXvv4b2u4/oTi2mD3QvPYxeOKZ0P8v5GDQGlc7w53oM1JPtg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Cqf+jDfoi52ndV4YBOidannMXCAThow9Oqn/kBBPSTc=;
 b=o4gglPRU/tfxPJtYYvYHZTMxLj2keNH4dtQCLhsJUlm5JE6G0eX6saP3j4uVOaExqT3KQUTcRKpyjfGLcj7nEONJ35exBtJO9tU2bx6T32HDwa/8HQ+fgixLPfR95AjvoCEAhGUcTYD7kWvgNtwpldCy5BlpwrZkTyQiX/WBGiY=
From: Rahul Singh <Rahul.Singh@arm.com>
To: Julien Grall <julien@xen.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, "lucmiccio@gmail.com"
	<lucmiccio@gmail.com>, xen-devel <xen-devel@lists.xenproject.org>, Bertrand
 Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Stefano Stabellini
	<stefano.stabellini@xilinx.com>
Subject: Re: [PATCH v2] xen/arm: fix gnttab_need_iommu_mapping
Thread-Topic: [PATCH v2] xen/arm: fix gnttab_need_iommu_mapping
Thread-Index: AQHW/ks46k4kUfhzpk6LTniK0S/PbqpPzhoAgAB7ugCAATYRAIAAKYkAgAAJgAA=
Date: Wed, 10 Feb 2021 18:08:42 +0000
Message-ID: <ECC82E19-3504-4E0E-B3EA-D0E46DD842C6@arm.com>
References: <20210208184932.23468-1-sstabellini@kernel.org>
 <B96B5E21-0600-4664-899D-D38A18DE7A8C@arm.com>
 <alpine.DEB.2.21.2102091226560.8948@sstabellini-ThinkPad-T480s>
 <EFFD35EA-378B-4C5C-8485-7EA5265E89E4@arm.com>
 <4e4f7d25-6f5f-1016-b1c9-7aa902d637b8@xen.org>
In-Reply-To: <4e4f7d25-6f5f-1016-b1c9-7aa902d637b8@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [80.1.41.211]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: fc0e6730-902e-40c4-2fbb-08d8cdeeef07
x-ms-traffictypediagnostic: DB6PR08MB2774:|AM8PR08MB5649:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AM8PR08MB56496806EF9A89C8C58B5026FC8D9@AM8PR08MB5649.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:6108;OLM:6108;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <F7E21B0351B02048A2F7DAB142C67F7A@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR08MB2774
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT016.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	ffdbb2cf-0e0a-401b-e522-08d8cdeee657
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Feb 2021 18:08:56.8971
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: fc0e6730-902e-40c4-2fbb-08d8cdeeef07
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT016.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR08MB5649

SGVsbG8gSnVsaWVuLA0KDQo+IE9uIDEwIEZlYiAyMDIxLCBhdCA1OjM0IHBtLCBKdWxpZW4gR3Jh
bGwgPGp1bGllbkB4ZW4ub3JnPiB3cm90ZToNCj4gDQo+IEhpLA0KPiANCj4gT24gMTAvMDIvMjAy
MSAxNTowNiwgUmFodWwgU2luZ2ggd3JvdGU6DQo+Pj4gT24gOSBGZWIgMjAyMSwgYXQgODozNiBw
bSwgU3RlZmFubyBTdGFiZWxsaW5pIDxzc3RhYmVsbGluaUBrZXJuZWwub3JnPiB3cm90ZToNCj4+
PiANCj4+PiBPbiBUdWUsIDkgRmViIDIwMjEsIFJhaHVsIFNpbmdoIHdyb3RlOg0KPj4+Pj4gT24g
OCBGZWIgMjAyMSwgYXQgNjo0OSBwbSwgU3RlZmFubyBTdGFiZWxsaW5pIDxzc3RhYmVsbGluaUBr
ZXJuZWwub3JnPiB3cm90ZToNCj4+Pj4+IA0KPj4+Pj4gQ29tbWl0IDkxZDRlY2E3YWRkIGJyb2tl
IGdudHRhYl9uZWVkX2lvbW11X21hcHBpbmcgb24gQVJNLg0KPj4+Pj4gVGhlIG9mZmVuZGluZyBj
aHVuayBpczoNCj4+Pj4+IA0KPj4+Pj4gI2RlZmluZSBnbnR0YWJfbmVlZF9pb21tdV9tYXBwaW5n
(d)                    \
>>>>> -    (is_domain_direct_mapped(d) && need_iommu(d))
>>>>> +    (is_domain_direct_mapped(d) && need_iommu_pt_sync(d))
>>>>> 
>>>>> On ARM we need gnttab_need_iommu_mapping to be true for dom0 when it is
>>>>> directly mapped and IOMMU is enabled for the domain, like the old check
>>>>> did, but the new check is always false.
>>>>> 
>>>>> In fact, need_iommu_pt_sync is defined as dom_iommu(d)->need_sync and
>>>>> need_sync is set as:
>>>>> 
>>>>>    if ( !is_hardware_domain(d) || iommu_hwdom_strict )
>>>>>        hd->need_sync = !iommu_use_hap_pt(d);
>>>>> 
>>>>> iommu_use_hap_pt(d) means that the page-table used by the IOMMU is the
>>>>> P2M. It is true on ARM. need_sync means that you have a separate IOMMU
>>>>> page-table and it needs to be updated for every change. need_sync is set
>>>>> to false on ARM. Hence, gnttab_need_iommu_mapping(d) is false too,
>>>>> which is wrong.
>>>>> 
>>>>> As a consequence, when using PV network from a domU on a system where
>>>>> IOMMU is on from Dom0, I get:
>>>>> 
>>>>> (XEN) smmu: /smmu@fd800000: Unhandled context fault: fsr=0x402, iova=0x8424cb148, fsynr=0xb0001, cb=0
>>>>> [   68.290307] macb ff0e0000.ethernet eth0: DMA bus error: HRESP not OK
>>>>> 
>>>>> The fix is to go back to something along the lines of the old
>>>>> implementation of gnttab_need_iommu_mapping.
>>>>> 
>>>>> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
>>>>> Fixes: 91d4eca7add
>>>>> Backport: 4.12+
>>>>> 
>>>>> ---
>>>>> 
>>>>> Given the severity of the bug, I would like to request this patch to be
>>>>> backported to 4.12 too, even if 4.12 is security-fixes only since Oct
>>>>> 2020.
>>>>> 
>>>>> For the 4.12 backport, we can use iommu_enabled() instead of
>>>>> is_iommu_enabled() in the implementation of gnttab_need_iommu_mapping.
>>>>> 
>>>>> Changes in v2:
>>>>> - improve commit message
>>>>> - add is_iommu_enabled(d) to the check
>>>>> ---
>>>>> xen/include/asm-arm/grant_table.h | 2 +-
>>>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>>> 
>>>>> diff --git a/xen/include/asm-arm/grant_table.h b/xen/include/asm-arm/grant_table.h
>>>>> index 6f585b1538..0ce77f9a1c 100644
>>>>> --- a/xen/include/asm-arm/grant_table.h
>>>>> +++ b/xen/include/asm-arm/grant_table.h
>>>>> @@ -89,7 +89,7 @@ int replace_grant_host_mapping(unsigned long gpaddr, mfn_t mfn,
>>>>>    (((i) >= nr_status_frames(t)) ? INVALID_GFN : (t)->arch.status_gfn[i])
>>>>> 
>>>>> #define gnttab_need_iommu_mapping(d)                    \
>>>>> -    (is_domain_direct_mapped(d) && need_iommu_pt_sync(d))
>>>>> +    (is_domain_direct_mapped(d) && is_iommu_enabled(d))
>>>>> 
>>>>> #endif /* __ASM_GRANT_TABLE_H__ */
>>>> 
>>>> I tested the patch and while creating the guest I observed the below warning from Linux for block device.
>>>> https://elixir.bootlin.com/linux/v4.3/source/drivers/block/xen-blkback/xenbus.c#L258
>>> 
>>> So you are creating a guest with "xl create" in dom0 and you see the
>>> warnings below printed by the Dom0 kernel? I imagine the domU has a
>>> virtual "disk" of some sort.
>> Yes you are right I am trying to create the guest with "xl create" and before that, I created the logical volume and trying to attach the logical volume
>> block to the domain with "xl block-attach". I observed this error with the "xl block-attach" command.
>> This issue occurs after applying this patch as what I observed this patch introduce the calls to iommu_legacy_{, un}map() to map the grant pages for
>> IOMMU that touches the page-tables. I am not sure but what I observed is that something is written wrong when iomm_unmap calls unmap the pages
>> because of that issue is observed.
> 
> Can you clarify what you mean by "written wrong"? What sort of error do you see in the iommu_unmap()?


I might be wrong as per my understanding for ARM we are sharing the P2M between CPU and IOMMU always and the map_grant_ref() function is written in such a way that we have to call iommu_legacy_{, un}map() only if P2M is not shared.

As we are sharing the P2M when we call the iommu_map() function it will overwrite the existing GFN -> MFN ( For DOM0 GFN is same as MFN) entry and when we call iommu_unmap() it will unmap the  (GFN -> MFN ) entry from the page-table. Next time when map_grant_ref() tries to map the same frame it will try to get the corresponding GFN but there will no entry in P2M as iommu_unmap() already unmapped the GFN because of that this error might be observed.

Sequence of events that results the issue :

gnttab_map_grant_ref (GFN=a99fb MFN=a99fb)
arm_iommu_map_page() DOMID:0 dfn = a99fb mfn = a99fb

gnttab_map_grant_ref ( GFN=d9913 MFN=d9913)
arm_iommu_map_page() DOMID:0 dfn = d9913 mfn = d9913

gnttab_map_grant_ref ( GFN=d9846 MFN=d9846)
arm_iommu_map_page() DOMID:0 dfn = d9846 mfn = d9846

gnttab_map_grant_ref (GFN=a8474 MFN=a8474)
arm_iommu_map_page() DOMID:0 dfn = a8474 mfn = a8474

arm_iommu_unmap_page() DOMID:0 dfn = a99fb
arm_iommu_unmap_page() DOMID:0 dfn = d9913
arm_iommu_unmap_page() DOMID:0 dfn = d9846
arm_iommu_unmap_page() DOMID:0 dfn = a8474

Try to map the same frame that is unmapped earlier by iommu_unmap call()
gnttab_map_grant_ref (GFN=a99fb MFN=0xffffffff).-> Not able to find the GFN in p2m error is observed after that.

> 
>> You can reproduce the issue by just creating the dummy image file and try to attach it with "xl block-attach"
>> dd if=/dev/zero of=test.img bs=1024 count=0 seek=1024
>> xl block-attach 0 phy:test.img xvda w
>> Sequence of command that I follow is as follows to reproduce the issue:
>> lvs vg-xen/myguest
>> lvcreate -y -L 4G -n myguest vg-xen
>> parted -s /dev/vg-xen/myguest mklabel msdos
>> parted -s /dev/vg-xen/myguest unit MB mkpart primary 1 4096
>> sync
>> xl block-attach 0 phy:/dev/vg-xen/myguest xvda w
>> libxl: error: libxl_xshelp.c:201:libxl__xs_read_mandatory: xenstore read failed: `/libxl/0/type': No such file or directory
>> libxl: warning: libxl_dom.c:51:libxl__domain_type: unable to get domain type for domid=0, assuming HVM
>> [  162.632232] xen-blkback: backend/vbd/0/51712: using 4 queues, protocol 1 (arm-abi) persistent grants
>> [  162.764847] vbd vbd-0-51712: 9 mapping in shared page 8 from domain 0
>> [  162.771740] vbd vbd-0-51712: 9 mapping ring-ref port 5
>> [  162.777650] ------------[ cut here ]------------
>> [  162.782167] WARNING: CPU: 2 PID: 37 at drivers/block/xen-blkback/xenbus.c:296 xen_blkif_disconnect+0x20c/0x230
> 
> Just to confirm, this splat comes from:
> 
> WARN_ON(i != (XEN_BLKIF_REQS_PER_PAGE * blkif->nr_ring_pages));
> 
> If so, what are the values for i and blkif->nr_ring_pages?

Let me find out and will share with you.

Regards,
Rahul
> 
> Cheers,
> 
> -- 
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 19:10:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 19:10:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83700.156451 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9usq-0007Nv-3B; Wed, 10 Feb 2021 19:10:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83700.156451; Wed, 10 Feb 2021 19:10:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9usp-0007No-Vo; Wed, 10 Feb 2021 19:10:39 +0000
Received: by outflank-mailman (input) for mailman id 83700;
 Wed, 10 Feb 2021 19:10:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9uso-0007Ng-Vb; Wed, 10 Feb 2021 19:10:38 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9uso-00089g-Le; Wed, 10 Feb 2021 19:10:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9uso-0005I6-FY; Wed, 10 Feb 2021 19:10:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l9uso-0004Vu-F2; Wed, 10 Feb 2021 19:10:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=321RvEiwLITY2nnRr85veBKqS7Sgwehi5aqhG9IVfH4=; b=gDwUPr6fS+b5QPXIcF17xgbwOu
	YZSDNXhZzCyxyjnJIjxM95vZFhll3Jvzl0qdujXf0BP5fHypQUmy7rNUO0wc4TiY5siQTtVplVvHg
	ACKl7eYgbpmoY7y/+I4q2X0OKP/JW3yBf3fTS4sCFXqlNCUeSvWcjrTuId4L1bQpFKgM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [xen-unstable-smoke bisection] complete build-amd64
Message-Id: <E1l9uso-0004Vu-F2@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 10 Feb 2021 19:10:38 +0000

branch xen-unstable-smoke
xenbranch xen-unstable-smoke
job build-amd64
testid xen-build

Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  c9b0242ad44a739e0f4c72b67fd4bcf4b042f800
  Bug not present: 687121f8a0e7c1ea1c4fa3d056637e5819342f14
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/159224/


  commit c9b0242ad44a739e0f4c72b67fd4bcf4b042f800
  Author: Roger Pau Monne <roger.pau@citrix.com>
  Date:   Thu Feb 4 10:38:33 2021 +0100
  
      autoconf: check endian.h include path
      
      Introduce an autoconf macro to check for the include path of certain
      headers that can be different between OSes.
      
      Use such macro to find the correct path for the endian.h header, and
      modify the users of endian.h to use the output of such check.
      
      Suggested-by: Ian Jackson <iwj@xenproject.org>
      Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
      Reviewed-by: Ian Jackson <iwj@xenproject.org>
      Release-Acked-by: Ian Jackson <iwj@xenproject.org>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable-smoke/build-amd64.xen-build.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-unstable-smoke/build-amd64.xen-build --summary-out=tmp/159224.bisection-summary --basis-template=159191 --blessings=real,real-bisect,real-retry xen-unstable-smoke build-amd64 xen-build
Searching for failure / basis pass:
 159210 fail [host=himrod1] / 159191 [host=himrod2] 159184 ok.
Failure / basis pass flights: 159210 / 159184
(tree with no url: minios)
(tree in basispass but not in latest: ovmf)
(tree in basispass but not in latest: seabios)
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 90b014a6e6ecad036ec5846426afd19b305dedff
Basis pass 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 687121f8a0e7c1ea1c4fa3d056637e5819342f14
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://xenbits.xen.org/qemu-xen.git#7ea428895af2840d85c524f0bd11a38aac308308-7ea428895af2840d85c524f0bd11a38aac308308 git://xenbits.xen.org/xen.git#687121f8a0e7c1ea1c4fa3d056637e5819342f14-90b014a6e6ecad036ec5846426afd19b305dedff
Loaded 5001 nodes in revision graph
Searching for test results:
 159184 pass 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 687121f8a0e7c1ea1c4fa3d056637e5819342f14
 159191 [host=himrod2]
 159206 fail 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 20077d035224c6f3b3eef8b111b8b842635f2c40
 159209 pass 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 687121f8a0e7c1ea1c4fa3d056637e5819342f14
 159211 fail 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 20077d035224c6f3b3eef8b111b8b842635f2c40
 159213 fail 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 c9b0242ad44a739e0f4c72b67fd4bcf4b042f800
 159210 fail 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 90b014a6e6ecad036ec5846426afd19b305dedff
 159219 pass 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 687121f8a0e7c1ea1c4fa3d056637e5819342f14
 159221 fail 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 c9b0242ad44a739e0f4c72b67fd4bcf4b042f800
 159223 pass 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 687121f8a0e7c1ea1c4fa3d056637e5819342f14
 159224 fail 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 c9b0242ad44a739e0f4c72b67fd4bcf4b042f800
Searching for interesting versions
 Result found: flight 159184 (pass), for basis pass
 For basis failure, parent search stopping at 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 687121f8a0e7c1ea1c4fa3d056637e5819342f14, results HASH(0x556052ba3a60) HASH(0x556052bb5fe0) HASH(0x556052bbcf48) HASH(0x556052bc0ad8) Result found: flight 159206 (fail), for basis failure (at ancestor ~44)
 Repro found: flight 159209 (pass), for basis pass
 Repro found: flight 159210 (fail), for basis failure
 0 revisions at 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 687121f8a0e7c1ea1c4fa3d056637e5819342f14
No revisions left to test, checking graph state.
 Result found: flight 159184 (pass), for last pass
 Result found: flight 159213 (fail), for first failure
 Repro found: flight 159219 (pass), for last pass
 Repro found: flight 159221 (fail), for first failure
 Repro found: flight 159223 (pass), for last pass
 Repro found: flight 159224 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  c9b0242ad44a739e0f4c72b67fd4bcf4b042f800
  Bug not present: 687121f8a0e7c1ea1c4fa3d056637e5819342f14
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/159224/


  commit c9b0242ad44a739e0f4c72b67fd4bcf4b042f800
  Author: Roger Pau Monne <roger.pau@citrix.com>
  Date:   Thu Feb 4 10:38:33 2021 +0100
  
      autoconf: check endian.h include path
      
      Introduce an autoconf macro to check for the include path of certain
      headers that can be different between OSes.
      
      Use such macro to find the correct path for the endian.h header, and
      modify the users of endian.h to use the output of such check.
      
      Suggested-by: Ian Jackson <iwj@xenproject.org>
      Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
      Reviewed-by: Ian Jackson <iwj@xenproject.org>
      Release-Acked-by: Ian Jackson <iwj@xenproject.org>

Revision graph left in /home/logs/results/bisect/xen-unstable-smoke/build-amd64.xen-build.{dot,ps,png,html,svg}.
----------------------------------------
159224: tolerable ALL FAIL

flight 159224 xen-unstable-smoke real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/159224/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 build-amd64                   6 xen-build               fail baseline untested


jobs:
 build-amd64                                                  fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Wed Feb 10 19:52:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 19:52:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83705.156468 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9vXD-0002jD-CS; Wed, 10 Feb 2021 19:52:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83705.156468; Wed, 10 Feb 2021 19:52:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9vXD-0002j6-9b; Wed, 10 Feb 2021 19:52:23 +0000
Received: by outflank-mailman (input) for mailman id 83705;
 Wed, 10 Feb 2021 19:52:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1l9vXC-0002j1-1m
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 19:52:22 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l9vXA-0000OT-Hd; Wed, 10 Feb 2021 19:52:20 +0000
Received: from [54.239.6.177] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1l9vXA-0001Dw-4G; Wed, 10 Feb 2021 19:52:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=QGygwq+XlRGby/6WU6Lyuq55EHTsHrtDGEFeegO9yks=; b=m/uPE0S46QuBNDBGv0fw9FMR1Q
	gALuollxYpRI1AQKu8Y18V9W8l+BB+fJtUZZedEg0L9hQlsrvCnf2VaNolozH1cdcdZQqkHbW3x6l
	FLcQM3p2amyQw4Fq5K1Id9Mkrl6HI9ahejKydWP4SUNmPraeye881Ybn7Y7wUvM7K7lA=;
Subject: Re: [PATCH v2] xen/arm: fix gnttab_need_iommu_mapping
To: Rahul Singh <Rahul.Singh@arm.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 "lucmiccio@gmail.com" <lucmiccio@gmail.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
References: <20210208184932.23468-1-sstabellini@kernel.org>
 <B96B5E21-0600-4664-899D-D38A18DE7A8C@arm.com>
 <alpine.DEB.2.21.2102091226560.8948@sstabellini-ThinkPad-T480s>
 <EFFD35EA-378B-4C5C-8485-7EA5265E89E4@arm.com>
 <4e4f7d25-6f5f-1016-b1c9-7aa902d637b8@xen.org>
 <ECC82E19-3504-4E0E-B3EA-D0E46DD842C6@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <c573b3a0-186d-626e-6670-f8fc28285e3d@xen.org>
Date: Wed, 10 Feb 2021 19:52:18 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <ECC82E19-3504-4E0E-B3EA-D0E46DD842C6@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit



On 10/02/2021 18:08, Rahul Singh wrote:
> Hello Julien,
> 
>> On 10 Feb 2021, at 5:34 pm, Julien Grall <julien@xen.org> wrote:
>>
>> Hi,
>>
>> On 10/02/2021 15:06, Rahul Singh wrote:
>>>> On 9 Feb 2021, at 8:36 pm, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>>>
>>>> On Tue, 9 Feb 2021, Rahul Singh wrote:
>>>>>> On 8 Feb 2021, at 6:49 pm, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>>>>>
>>>>>> Commit 91d4eca7add broke gnttab_need_iommu_mapping on ARM.
>>>>>> The offending chunk is:
>>>>>>
>>>>>> #define gnttab_need_iommu_mapping(d)                    \
>>>>>> -    (is_domain_direct_mapped(d) && need_iommu(d))
>>>>>> +    (is_domain_direct_mapped(d) && need_iommu_pt_sync(d))
>>>>>>
>>>>>> On ARM we need gnttab_need_iommu_mapping to be true for dom0 when it is
>>>>>> directly mapped and IOMMU is enabled for the domain, like the old check
>>>>>> did, but the new check is always false.
>>>>>>
>>>>>> In fact, need_iommu_pt_sync is defined as dom_iommu(d)->need_sync and
>>>>>> need_sync is set as:
>>>>>>
>>>>>>    if ( !is_hardware_domain(d) || iommu_hwdom_strict )
>>>>>>        hd->need_sync = !iommu_use_hap_pt(d);
>>>>>>
>>>>>> iommu_use_hap_pt(d) means that the page-table used by the IOMMU is the
>>>>>> P2M. It is true on ARM. need_sync means that you have a separate IOMMU
>>>>>> page-table and it needs to be updated for every change. need_sync is set
>>>>>> to false on ARM. Hence, gnttab_need_iommu_mapping(d) is false too,
>>>>>> which is wrong.
>>>>>>
>>>>>> As a consequence, when using PV network from a domU on a system where
>>>>>> IOMMU is on from Dom0, I get:
>>>>>>
>>>>>> (XEN) smmu: /smmu@fd800000: Unhandled context fault: fsr=0x402, iova=0x8424cb148, fsynr=0xb0001, cb=0
>>>>>> [   68.290307] macb ff0e0000.ethernet eth0: DMA bus error: HRESP not OK
>>>>>>
>>>>>> The fix is to go back to something along the lines of the old
>>>>>> implementation of gnttab_need_iommu_mapping.
>>>>>>
>>>>>> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
>>>>>> Fixes: 91d4eca7add
>>>>>> Backport: 4.12+
>>>>>>
>>>>>> ---
>>>>>>
>>>>>> Given the severity of the bug, I would like to request this patch to be
>>>>>> backported to 4.12 too, even if 4.12 is security-fixes only since Oct
>>>>>> 2020.
>>>>>>
>>>>>> For the 4.12 backport, we can use iommu_enabled() instead of
>>>>>> is_iommu_enabled() in the implementation of gnttab_need_iommu_mapping.
>>>>>>
>>>>>> Changes in v2:
>>>>>> - improve commit message
>>>>>> - add is_iommu_enabled(d) to the check
>>>>>> ---
>>>>>> xen/include/asm-arm/grant_table.h | 2 +-
>>>>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>>
>>>>>> diff --git a/xen/include/asm-arm/grant_table.h b/xen/include/asm-arm/grant_table.h
>>>>>> index 6f585b1538..0ce77f9a1c 100644
>>>>>> --- a/xen/include/asm-arm/grant_table.h
>>>>>> +++ b/xen/include/asm-arm/grant_table.h
>>>>>> @@ -89,7 +89,7 @@ int replace_grant_host_mapping(unsigned long gpaddr, mfn_t mfn,
>>>>>>     (((i) >= nr_status_frames(t)) ? INVALID_GFN : (t)->arch.status_gfn[i])
>>>>>>
>>>>>> #define gnttab_need_iommu_mapping(d)                    \
>>>>>> -    (is_domain_direct_mapped(d) && need_iommu_pt_sync(d))
>>>>>> +    (is_domain_direct_mapped(d) && is_iommu_enabled(d))
>>>>>>
>>>>>> #endif /* __ASM_GRANT_TABLE_H__ */
>>>>>
>>>>> I tested the patch and while creating the guest I observed the below warning from Linux for block device.
>>>>> https://elixir.bootlin.com/linux/v4.3/source/drivers/block/xen-blkback/xenbus.c#L258
>>>>
>>>> So you are creating a guest with "xl create" in dom0 and you see the
>>>> warnings below printed by the Dom0 kernel? I imagine the domU has a
>>>> virtual "disk" of some sort.
>>> Yes you are right I am trying to create the guest with "xl create" and before that, I created the logical volume and trying to attach the logical volume
>>> block to the domain with "xl block-attach". I observed this error with the "xl block-attach" command.
>>> This issue occurs after applying this patch as what I observed this patch introduce the calls to iommu_legacy_{, un}map() to map the grant pages for
>>> IOMMU that touches the page-tables. I am not sure but what I observed is that something is written wrong when iomm_unmap calls unmap the pages
>>> because of that issue is observed.
>>
>> Can you clarify what you mean by "written wrong"? What sort of error do you see in the iommu_unmap()?
> 
> 
> I might be wrong as per my understanding for ARM we are sharing the P2M between CPU and IOMMU always and the map_grant_ref() function is written in such a way that we have to call iommu_legacy_{, un}map() only if P2M is not shared.

map_grant_ref() will call the IOMMU if gnttab_need_iommu_mapping() 
returns true. I don't really see where this is assuming the P2M is not 
shared.

In fact, on x86, this will always be false for HVM domains (they support 
both shared and separate page-tables).

> 
> As we are sharing the P2M when we call the iommu_map() function it will overwrite the existing GFN -> MFN ( For DOM0 GFN is same as MFN) entry and when we call iommu_unmap() it will unmap the  (GFN -> MFN ) entry from the page-table.
AFAIK, there should be nothing mapped at that GFN because the page 
belongs to the guest. At worst, we would overwrite a mapping that is the 
same.

Although, I agree that we may end up removing the entry early and 
therefore we could get an IOMMU fault (this is not your case here). But 
that's not an Arm-only problem.

> Next time when map_grant_ref() tries to map the same frame it will try to get the corresponding GFN but there will no entry in P2M as iommu_unmap() already unmapped the GFN because of that this error might be observed.
I am afraid I don't understand what you mean by "try to get the 
corresponding GFN".  Can you give a pointer to the code?

> Sequence of events that results the issue :
> 
> gnttab_map_grant_ref (GFN=a99fb MFN=a99fb)

Can you clarify whether the GFN is from the local domain (e.g. 
dom0/backend) or the remote domain (e.g. the frontend)?

> arm_iommu_map_page() DOMID:0 dfn = a99fb mfn = a99fb
> 
> gnttab_map_grant_ref ( GFN=d9913 MFN=d9913)
> arm_iommu_map_page() DOMID:0 dfn = d9913 mfn = d9913
> 
> gnttab_map_grant_ref ( GFN=d9846 MFN=d9846)
> arm_iommu_map_page() DOMID:0 dfn = d9846 mfn = d9846
> 
> gnttab_map_grant_ref (GFN=a8474 MFN=a8474)
> arm_iommu_map_page() DOMID:0 dfn = a8474 mfn = a8474
> 
> arm_iommu_unmap_page() DOMID:0 dfn = a99fb
> arm_iommu_unmap_page() DOMID:0 dfn = d9913
> arm_iommu_unmap_page() DOMID:0 dfn = d9846
> arm_iommu_unmap_page() DOMID:0 dfn = a8474
> 
> Try to map the same frame that is unmapped earlier by iommu_unmap call()
> gnttab_map_grant_ref (GFN=a99fb MFN=0xffffffff).-> Not able to find the GFN in p2m error is observed after that.

The iommu_map()/iommu_unmap() should only modify the dom0 P2M. It should 
not modify the guest P2M.

When Dom0 issues a request to map a grant, we will:
   1) Look-up the guest P2M to find the corresponding MFN
   2) Do all the sanity check
   3) Map the page in dom0's P2M at the address requested (host_addr)
   4) Issue iommu_map() to get a 1:1 mapping in the P2M

So are you saying that the guest P2M walk has failed?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 20:26:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 20:26:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83710.156487 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9w4C-0005dN-Vy; Wed, 10 Feb 2021 20:26:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83710.156487; Wed, 10 Feb 2021 20:26:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9w4C-0005dG-SP; Wed, 10 Feb 2021 20:26:28 +0000
Received: by outflank-mailman (input) for mailman id 83710;
 Wed, 10 Feb 2021 20:26:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=08uA=HM=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1l9w4B-0005dB-33
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 20:26:27 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 573f853c-e5ed-4250-aa1e-c70901ac5a64;
 Wed, 10 Feb 2021 20:26:25 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 573f853c-e5ed-4250-aa1e-c70901ac5a64
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1612988785;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=lP5KMHlh5fAjRnXf9xvgb7P9qjltwPFN2giLnMpQCbw=;
  b=OC9z0nR7V15Erp9mL1qTJQi9XalqszqXrv+D9nCF86kZXBwjDsqyLvZB
   WdX6gTvsx7kHj9z+AVY+DDfFOc/9S8NyCUnOYTBPEGNgH2Ezr2gW0B8aI
   Pzk09ZDnrDMACP2pVqyfdZCmoD/HcQUPo/g+2KhMmpPe+GWjJg3c5VVMx
   U=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: DBHDTUI4HcDm5gOJojwfNs+OMAz+pM3QPKtfn1JzQOLz1GRDq18rPAIDhf+G9psVj7oR6g2fSk
 rEYqhOgGRtQIIrD+vgxpfNl2d98/8owlIP0W5nGySjB9+lII8CcVsQuBRADQUctePrwtcWfx7e
 uhM6SpjFBaZvxfPUDkzJUjKWMgjoRucEtN4UR7ZKxBPzjbaQ6b6GNk2tX736pMO9uRsl74C+DO
 Q58u7/PzOmTkjRo6FY5KyH5rXsc5hwat46yUBTLwQIhGmRWCSxvmkujFAy6zfUCvJTtRHLDhO2
 0j0=
X-SBRS: 5.2
X-MesageID: 36989356
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,169,1610427600"; 
   d="scan'208";a="36989356"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=kq0/HGKT94QBvJ2iOKa79Q2jE/5bIjCx2mwd8LkXBCX8zGVEGIip9kz2JR3QkXgkA+3QHuMBuOSP1/MvfR3a2L1yH/TYYg+9mF6QqvED1PL4wWgG0g4grLHlP1qPCZ1YA5Pyg9251I7oJjIVSmcjLXp3HJ3WW2Wkl/jhHblmIsBLAJsyuK6oMzfFT2vqxC9JnMhvfOipUji2ZNXnC98g04dX/jaidnH6IfUPb9918ZcrDF9+2umvkfSi4KRxrh8nnAKajLRxzeKt6UvW2iB/SlHdpF+8FaWwmG1aGScp8YXm40STCirlzlP4aSgVVkxaCfsMaiu2IFwhDmIowAFDTw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=lP5KMHlh5fAjRnXf9xvgb7P9qjltwPFN2giLnMpQCbw=;
 b=DDrPS0DYHWeKQVBcJKZcKypJ2g/9xLhExpnfZHk4R18SClTxSRyUQxkDLu+5Saa/0AEfkrFGSBj3MoNChgcMeaVUfWE+CrWIlPI3aU94qjC4SFr+5IilP1Foy9LA1d+CQhjo6PBEHAZ/FIUdVJ06gTQFQ9iRZUSs2TNAiiy+gBI64Zqc7npcg18Hp30emjWlnVyygNbnZT36+9XsnxZjwqMcizf6cf9j1NWev+v3glZhhWiyADodu1crJ3Tjwr+Anm1U/79R0ci8kGnSxsLK4Pia03HSRvbxTf+jTpWQ6EADRHJWKj5LurYvydhRuBlcPa1rhBzy3Y2bJfKz+Q8ZyQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=lP5KMHlh5fAjRnXf9xvgb7P9qjltwPFN2giLnMpQCbw=;
 b=bQKw0JxpQ4mkf4cL+4A3hIE331eyP7Ug9eC7O0ey+PK8darOxT3e70wny3fTkcKpMtsm7r7Is9ok2watHsBprDzCg8glBdvyZhfEqIOttxIofqebrgNZ3PHF2kgu+XRZXRc3/5dTMQetQTnUkl6ZIL5barC/jQl4nJa/XRJeTLM=
Subject: Re: [PATCH] x86emul: fix SYSENTER/SYSCALL switching into 64-bit mode
To: Jan Beulich <jbeulich@suse.com>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
References: <7ce15e4b-8bf1-0cfd-ca9e-5f6eba12cac1@suse.com>
 <d66cce4b-6563-4857-30be-5889788ca6c8@citrix.com>
 <2eed5630-3e23-3005-245e-989893fc8476@suse.com>
 <bf31a01e-4a32-5938-c158-38923100355d@citrix.com>
 <77fda392-6f6a-c8b0-f1ea-15b917245f5e@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <f69d0f8d-3d13-1bd5-956d-29cf73c9351f@citrix.com>
Date: Wed, 10 Feb 2021 20:26:04 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
In-Reply-To: <77fda392-6f6a-c8b0-f1ea-15b917245f5e@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0309.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:197::8) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 6f20e2ee-890e-490e-dcbc-08d8ce021ac3
X-MS-TrafficTypeDiagnostic: BYAPR03MB3559:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB3559EA06AE2A5E8021CF0978BA8D9@BYAPR03MB3559.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: kDWLr41+FdoW46dqVMul+bA1uTq1RzvRKk6RG0kPjZK5Nbw8UKg2x68n4Yq7mTCaSyaITuhulazddOzcJWRffOtPa0FN/ZG5t92DqQZT5KpegjiueKZ6V3CNE8bKxgGoCGtdF3SU4c7N0QDbSGvIL6dIqdu8OcQB565lP644Cw6mjFQYFzYdFeWEneYlqGtmA3XawRL/dvNZHrn+zN8tdWNZuAuJQjKG8FqYnxEucKjmh0wDvi7Fr6v+DR4Oh5XGIPXulkHzbpeR0gR0fsMupmA7hZQ4ERt7Fz4YZuJcUXQ+olkDZuhGzpqPAnfM2SXu10UD03+QTCinNNg7ew1sHgCSKm+JLswrEbq6kPTbdieuHzn9ZVQi0tA0WmdujdFviB87BGcJY5gt2/ckYfzWBTMvTZN9cvepZruhnfD7JeoXGCD2leyWzsvASp9XISNuXsnKU/QmbajISo0/Fp65NKKPBA0yYipYgezlo5kKJih2bAthTFjXR5buzyH7p+mv7sA7KDrDteN3sZ6BqJsLeMs8KGDPDhp4/qCoU8Q2/0SW0Lhd5LbpLqsJ09GlT+QS9yRrPj1pVc96ZP7Lt7VhfsxOc2a6HUmWu5HO0j8N4fs=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(346002)(39860400002)(366004)(136003)(396003)(376002)(53546011)(316002)(16576012)(66946007)(186003)(2906002)(956004)(86362001)(5660300002)(36756003)(2616005)(26005)(6666004)(54906003)(6916009)(6486002)(8676002)(83380400001)(8936002)(4326008)(478600001)(31686004)(66476007)(31696002)(16526019)(66556008)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?RFVpV0szS0U3QW1NTDVadXZXWmxEc1VZK3J0YXVxZkNhM2hJSUdSdU41K01x?=
 =?utf-8?B?WGo3YnUybXRQb1l5bUg5Zk91akV5elJqTWlsam1FY0EvenJmREFrZFkzcy9G?=
 =?utf-8?B?NlVpYk93NTlpUmplaFl0Q3JERHE5czBkYjhuRGw1aDlOL1I5YitldUhmK0JD?=
 =?utf-8?B?RzdsYUM0NmFpcThLSGt3L3hSL01YTlozUGlNWEdXZVFZMk1Ld3I2bjVXOGZ5?=
 =?utf-8?B?aGIzUEhZNmYvODMrSTJjZlllZFZJNFNqM0NnT2xia1VtNEpWNFJsaGF2eXlv?=
 =?utf-8?B?NzlaLzRnY2hqYTFqNHFBcXlOMHNMUXFycHpTTVJXa2hjV2wzRzh4YjRsWnNB?=
 =?utf-8?B?TGc2dzVvUG82OTZubVd4cjB6ZG1UdTVOMEpuU2dIZ3F3Q09IdWFtSG5lNWxs?=
 =?utf-8?B?N0tNQmZ0aTc5eE1FUDZrcmZRVHJMdjBZQ1YrTWkyQnRDVENST2lEemJWMWpl?=
 =?utf-8?B?VzltTlRXNmtHUkhNT2E1Zi8wK3ZpN040ZFlwcXBNRnZzdGk2Y0QvdVVZUGcr?=
 =?utf-8?B?cE9sUDM5Y2sxaDFWOWFjaCtEVzYvRkQ2b1Z5ciszanE5S3NKWkQ2bFA4WmJt?=
 =?utf-8?B?VjZVMnV0VDY3NmRMbUVJdTZDRUlZSjBTMDVYUzNPYUtEamxva1lmNTcxRHhw?=
 =?utf-8?B?OWYweXBtbjVQTWp3SGZQSWxoeHpDS1ZLSmV2TzhIQXE1SG96NHFSOGx1MndZ?=
 =?utf-8?B?NXZ5VVoxUmxzSHppZHljTTN0WDdjU0l5VGhsNGJaaW1Tc0ZPdm5wd0M1d0s0?=
 =?utf-8?B?cGdhbjg5TGl5SDdJWW9weG9DaXZ4OG5FMkdJS0tKNjlUMC9VVWdvM2trcjBC?=
 =?utf-8?B?UWU2NnVVcTBGR0dCaEV3VFljYzFaNkRDLy91OWhKdGw0aDRyQjRUbXFvaHR5?=
 =?utf-8?B?bmZqZG16Zms5dk0zdmlwTjNRTFZCcjZqR0VFMkxyTnZqNVA0My9vY0NORHdj?=
 =?utf-8?B?TE5QT083WDlCUDJnbENjZDQ1YWorUm4ra0V6c1ArKzNyWXY0cS83Q1ltNy91?=
 =?utf-8?B?bzNMNHFDZkE2bGZURXZsSEU5RXRwdjNUWGsrUjJ0Z1BTVVlva2lPZGErVDNB?=
 =?utf-8?B?NkFRNit4TldoWXdDMDVNdDl0L2l3YTdNa0IvS1R5MWhUQlZNeURTOTFnTS9P?=
 =?utf-8?B?N0krOUZUSzAzZ1NPNVJOcTFkZk85Mm1TSHJEN29oYXQwZ2Fzd1Y0MGQzdGFH?=
 =?utf-8?B?M1VrZjYrd0dxZGxLZExzMGhRQ1lzbUl4SzVVS2tGYzNWT3lhMmMrMnU0dERk?=
 =?utf-8?B?MmlaWEdJOEtRVS9Mb1BJRTUxMS9Ga2hHbzVWci95Q2NFWUV5cWVubU1wUGFk?=
 =?utf-8?B?d3daRWJFdVVjR281THd0Y3VWVHU5Rm4xL0JPVSs3RnVmQmNtZGlJS2lhTE9X?=
 =?utf-8?B?c0JIZm1ZSzRKbGl5OTA3d2JORFdkcE9Dc1J0REluUUNIbWw5N0J1eGorUXdn?=
 =?utf-8?B?bkFvM1o4b2xReG9ESUorNDZCUlh5Z3g3SmNac1RoYnhObklzdjRuZTI0VURZ?=
 =?utf-8?B?a1lJSUY0dkZTMkxDbDdEZkF2T2tPQlZXSGM4b0wvbUljaHpCS2xOaTlVa3Zy?=
 =?utf-8?B?T3pZRExWY3NFb0VsVElUc3A4Uzc1NVlmTEw3V240bTZ5Y1ZnM2FJUmthaGVQ?=
 =?utf-8?B?T2xwbHZ0TUNVWEF6enZOK3JaYktiSTdxc3N0LzZtWDc1QzRVL0l4dWtNYkV2?=
 =?utf-8?B?T3BUUHVPSmVIaWxUcHZpL0lhZHJsSXNGU0d0LzNPbmFCTDNjeFF5Y09OYkRx?=
 =?utf-8?Q?o1xYs+Iqcq6TgyTVg1chgwkKV57UZOikVVsx1oO?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 6f20e2ee-890e-490e-dcbc-08d8ce021ac3
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Feb 2021 20:26:11.0293
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: /K92dgqUoaLSYHVxAnMe0/l8m02hKQlGfZsK6KVrcMwyACNYPSJJZqlGN8iM8tNHQ6V2TGLAhdGfQ+ZxM53c0q5Yx0IUubPkUdGGW2wnvvs=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB3559
X-OriginatorOrg: citrix.com

On 10/02/2021 14:18, Jan Beulich wrote:
> On 10.02.2021 15:02, Andrew Cooper wrote:
>> On 10/02/2021 13:54, Jan Beulich wrote:
>>> Just like considered in the post-description
>>> remark, we could drop the conditional part from sysexit's
>>> setting of _regs.r(ip), and _then_ we would indeed need a
>>> respective change there, for the truncation to happen at
>>> complete_insn:.
>> I think it would look odd changing just rip and not rsp truncation.
> Yes, this was another consideration of mine as well. But it
> is a fact that we treat rip and rsp differently in this
> regard. Perhaps generated code overall could benefit from
> treating rsp more like rip, but this would need careful
> looking at all the involved pieces - especially in cases
> where the updated stack pointer gets further used we may
> not be able to defer the truncation to complete_insn:.

There are other differences.  rip gets updated on every instruction,
while rsp does not.  We also have instructions with (possibly multiple)
rsp-relative memory references which need truncating individually to get
proper behaviour.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 21:13:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 21:13:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83722.156528 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9wnv-0001p8-Ob; Wed, 10 Feb 2021 21:13:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83722.156528; Wed, 10 Feb 2021 21:13:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9wnv-0001p1-LS; Wed, 10 Feb 2021 21:13:43 +0000
Received: by outflank-mailman (input) for mailman id 83722;
 Wed, 10 Feb 2021 21:13:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QmaB=HM=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1l9wnu-0001ow-3Q
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 21:13:42 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2c6d5d99-315c-4a61-8c32-96f762e94b34;
 Wed, 10 Feb 2021 21:13:41 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 4EDD964E31;
 Wed, 10 Feb 2021 21:13:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2c6d5d99-315c-4a61-8c32-96f762e94b34
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1612991620;
	bh=pidYgsL7yGr1cf0tv9hCO5cPunWAz2rH8WLkJmo7WI8=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=MiKBlczchv7ysANd6UObc7XHhD5eF4YH7tmd6ZgVWtJgGaszGOVA51PPZ3YudM4xZ
	 t5BXMaIPSsw9hMdamWfKZ2NtLIbzMAlD0moi/ZUiI5DzWgTMRrIxVV0ZsKDWkmvPz6
	 vZQMb1M4IEITaG+8MfDHAlpaFVale0u4yVvvraLtL9pGA10S/fZAV0eej7MPf2JmJ5
	 PSoz6+YWQo8dRUEZi2WBUhErRlGgQpZKLXXd39Bz9OS5XJurTHEdTheTQpPyYy9pFB
	 M7gfzzflrbbGS9z217Jp3i3qI31wEhXOLSPSaTA/vJLzkz+kGvTeTpup9O5u1EiCR/
	 Ob+Njcaf3FFYw==
Date: Wed, 10 Feb 2021 13:13:39 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Rahul Singh <Rahul.Singh@arm.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    "lucmiccio@gmail.com" <lucmiccio@gmail.com>, 
    xen-devel <xen-devel@lists.xenproject.org>, 
    Bertrand Marquis <Bertrand.Marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: Re: [PATCH v2] xen/arm: fix gnttab_need_iommu_mapping
In-Reply-To: <EFFD35EA-378B-4C5C-8485-7EA5265E89E4@arm.com>
Message-ID: <alpine.DEB.2.21.2102101309230.7114@sstabellini-ThinkPad-T480s>
References: <20210208184932.23468-1-sstabellini@kernel.org> <B96B5E21-0600-4664-899D-D38A18DE7A8C@arm.com> <alpine.DEB.2.21.2102091226560.8948@sstabellini-ThinkPad-T480s> <EFFD35EA-378B-4C5C-8485-7EA5265E89E4@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-397500670-1612991620=:7114"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-397500670-1612991620=:7114
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Wed, 10 Feb 2021, Rahul Singh wrote:
> > On 9 Feb 2021, at 8:36 pm, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > On Tue, 9 Feb 2021, Rahul Singh wrote:
> >>> On 8 Feb 2021, at 6:49 pm, Stefano Stabellini <sstabellini@kernel.org> wrote:
> >>> 
> >>> Commit 91d4eca7add broke gnttab_need_iommu_mapping on ARM.
> >>> The offending chunk is:
> >>> 
> >>> #define gnttab_need_iommu_mapping(d)                    \
> >>> -    (is_domain_direct_mapped(d) && need_iommu(d))
> >>> +    (is_domain_direct_mapped(d) && need_iommu_pt_sync(d))
> >>> 
> >>> On ARM we need gnttab_need_iommu_mapping to be true for dom0 when it is
> >>> directly mapped and IOMMU is enabled for the domain, like the old check
> >>> did, but the new check is always false.
> >>> 
> >>> In fact, need_iommu_pt_sync is defined as dom_iommu(d)->need_sync and
> >>> need_sync is set as:
> >>> 
> >>>   if ( !is_hardware_domain(d) || iommu_hwdom_strict )
> >>>       hd->need_sync = !iommu_use_hap_pt(d);
> >>> 
> >>> iommu_use_hap_pt(d) means that the page-table used by the IOMMU is the
> >>> P2M. It is true on ARM. need_sync means that you have a separate IOMMU
> >>> page-table and it needs to be updated for every change. need_sync is set
> >>> to false on ARM. Hence, gnttab_need_iommu_mapping(d) is false too,
> >>> which is wrong.
> >>> 
> >>> As a consequence, when using PV network from a domU on a system where
> >>> IOMMU is on from Dom0, I get:
> >>> 
> >>> (XEN) smmu: /smmu@fd800000: Unhandled context fault: fsr=0x402, iova=0x8424cb148, fsynr=0xb0001, cb=0
> >>> [   68.290307] macb ff0e0000.ethernet eth0: DMA bus error: HRESP not OK
> >>> 
> >>> The fix is to go back to something along the lines of the old
> >>> implementation of gnttab_need_iommu_mapping.
> >>> 
> >>> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> >>> Fixes: 91d4eca7add
> >>> Backport: 4.12+
> >>> 
> >>> ---
> >>> 
> >>> Given the severity of the bug, I would like to request this patch to be
> >>> backported to 4.12 too, even if 4.12 is security-fixes only since Oct
> >>> 2020.
> >>> 
> >>> For the 4.12 backport, we can use iommu_enabled() instead of
> >>> is_iommu_enabled() in the implementation of gnttab_need_iommu_mapping.
> >>> 
> >>> Changes in v2:
> >>> - improve commit message
> >>> - add is_iommu_enabled(d) to the check
> >>> ---
> >>> xen/include/asm-arm/grant_table.h | 2 +-
> >>> 1 file changed, 1 insertion(+), 1 deletion(-)
> >>> 
> >>> diff --git a/xen/include/asm-arm/grant_table.h b/xen/include/asm-arm/grant_table.h
> >>> index 6f585b1538..0ce77f9a1c 100644
> >>> --- a/xen/include/asm-arm/grant_table.h
> >>> +++ b/xen/include/asm-arm/grant_table.h
> >>> @@ -89,7 +89,7 @@ int replace_grant_host_mapping(unsigned long gpaddr, mfn_t mfn,
> >>>    (((i) >= nr_status_frames(t)) ? INVALID_GFN : (t)->arch.status_gfn[i])
> >>> 
> >>> #define gnttab_need_iommu_mapping(d)                    \
> >>> -    (is_domain_direct_mapped(d) && need_iommu_pt_sync(d))
> >>> +    (is_domain_direct_mapped(d) && is_iommu_enabled(d))
> >>> 
> >>> #endif /* __ASM_GRANT_TABLE_H__ */
> >> 
> >> I tested the patch, and while creating the guest I observed the warning below from Linux for the block device:
> >> https://elixir.bootlin.com/linux/v4.3/source/drivers/block/xen-blkback/xenbus.c#L258
> > 
> > So you are creating a guest with "xl create" in dom0 and you see the
> > warnings below printed by the Dom0 kernel? I imagine the domU has a
> > virtual "disk" of some sort.
> 
> Yes, you are right, I am trying to create the guest with "xl create", and before that I created the logical volume and tried to attach the logical
> volume block device to the domain with "xl block-attach". I observed this error with the "xl block-attach" command.
> 
> This issue occurs after applying this patch: from what I observed, the patch introduces the calls to iommu_legacy_{,un}map() to map the grant
> pages for the IOMMU, which touch the page-tables. I am not sure, but it looks like something is written wrongly when the iommu_unmap calls unmap
> the pages, and because of that the issue is observed.
> 
> You can reproduce the issue by just creating a dummy image file and trying to attach it with "xl block-attach":
> 
> dd if=/dev/zero of=test.img bs=1024 count=0 seek=1024
> xl block-attach 0 phy:test.img xvda w
> 
> The sequence of commands I follow to reproduce the issue is:
> 
> lvs vg-xen/myguest
> lvcreate -y -L 4G -n myguest vg-xen
> parted -s /dev/vg-xen/myguest mklabel msdos
> parted -s /dev/vg-xen/myguest unit MB mkpart primary 1 4096
> sync
> xl block-attach 0 phy:/dev/vg-xen/myguest xvda w

Ah! You are block-attaching a device in dom0 to dom0! So frontend and
backend are both in the same domain, dom0. Yeah that is not supposed to
work, and if it did before it was just by coincidence :-)

There are a number of checks in Linux that wouldn't work as intended if
the page is coming from itself. This is not an ARM-only issue either.

I tried it with dom0, like you did, and I am seeing the same warning. I
did try to do block-attach to a domU and it works fine.

I don't think this is a concern.
--8323329-397500670-1612991620=:7114--


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 21:36:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 21:36:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83728.156552 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9x9o-0003mr-OF; Wed, 10 Feb 2021 21:36:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83728.156552; Wed, 10 Feb 2021 21:36:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9x9o-0003mk-LM; Wed, 10 Feb 2021 21:36:20 +0000
Received: by outflank-mailman (input) for mailman id 83728;
 Wed, 10 Feb 2021 21:36:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QmaB=HM=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1l9x9o-0003mf-5B
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 21:36:20 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 01642790-1880-4238-bf5d-25bf859f3387;
 Wed, 10 Feb 2021 21:36:19 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id F374A64DD6;
 Wed, 10 Feb 2021 21:36:17 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 01642790-1880-4238-bf5d-25bf859f3387
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1612992978;
	bh=8/iQqCFp3uQ57lkielDRJ0nDpWhNDeLQmdENd46C96w=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=F836RUntz0IBDKngDQbq6aGEQziWb/MAEhkCht/Gs3AfcWs9QpJkpcpu53XnqB76L
	 26FFqUSFoVsm8s6oxtIMezKKdXHeSnYV5S+0VpaGOOzlckdja727xfnz9yYK9sxraD
	 XYawRZLigDxevf/m9gUmB710mGxhIT8kfHLFfSkySNkfjdxjR97Wq+mMKMGjw6MT1v
	 M6rUbgSmP1guhdUywccceiXEVQ5Gas/RR6CBDjZK3i7pwjL2CPpeWTBRwCUmbSuxm+
	 uVDXTzIs/zdj4f9sECXYyjcBVZwsgfnRJCQ03JlBNwTRX+XGCUgOugvP2l5Dm/ha1W
	 l7ilAWpbCkwXg==
Date: Wed, 10 Feb 2021 13:36:17 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: "Woodhouse, David" <dwmw@amazon.co.uk>
cc: "julien@xen.org" <julien@xen.org>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    "boris.ostrovsky@oracle.com" <boris.ostrovsky@oracle.com>, 
    "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>, 
    "sstabellini@kernel.org" <sstabellini@kernel.org>, 
    "jgross@suse.com" <jgross@suse.com>, 
    "stable@vger.kernel.org" <stable@vger.kernel.org>, 
    "linux-arm-kernel@lists.infradead.org" <linux-arm-kernel@lists.infradead.org>, 
    "iwj@xenproject.org" <iwj@xenproject.org>, 
    "Grall, Julien" <jgrall@amazon.co.uk>
Subject: Re: [PATCH] arm/xen: Don't probe xenbus as part of an early
 initcall
In-Reply-To: <7858866d099732baf56e395a627f610968d24a7d.camel@amazon.co.uk>
Message-ID: <alpine.DEB.2.21.2102101335590.7114@sstabellini-ThinkPad-T480s>
References: <20210210170654.5377-1-julien@xen.org> <7858866d099732baf56e395a627f610968d24a7d.camel@amazon.co.uk>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 10 Feb 2021, Woodhouse, David wrote:
> On Wed, 2021-02-10 at 17:06 +0000, Julien Grall wrote:
> > From: Julien Grall <jgrall@amazon.com>
> > 
> > After commit 3499ba8198cad ("xen: Fix event channel callback via
> > INTX/GSI"), xenbus_probe() will be called too early on Arm. This will
> > result in a guest hang during boot.
> > 
> > Even without the hang, we would have ended up calling
> > xenbus_probe() twice (the second time from xenbus_probe_initcall()).
> > 
> > We don't need to call xenbus_probe() early for Arm guests.
> > Therefore, the call in xen_guest_init() is now removed.
> > 
> > After this change, there is no external caller of xenbus_probe() left,
> > so the function is made static. Interestingly, there were two
> > prototypes for it.
> > 
> > Fixes: 3499ba8198cad ("xen: Fix event channel callback via INTX/GSI")
> > Reported-by: Ian Jackson <iwj@xenproject.org>
> > Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> Reviewed-by: David Woodhouse <dwmw@amazon.co.uk>
> Cc: stable@vger.kernel.org

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 21:40:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 21:40:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83730.156565 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9xE9-0004i7-De; Wed, 10 Feb 2021 21:40:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83730.156565; Wed, 10 Feb 2021 21:40:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9xE9-0004i0-A1; Wed, 10 Feb 2021 21:40:49 +0000
Received: by outflank-mailman (input) for mailman id 83730;
 Wed, 10 Feb 2021 21:40:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9xE8-0004hp-CA; Wed, 10 Feb 2021 21:40:48 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9xE8-0002Ft-5c; Wed, 10 Feb 2021 21:40:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1l9xE7-00040n-VY; Wed, 10 Feb 2021 21:40:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1l9xE7-0006Up-V2; Wed, 10 Feb 2021 21:40:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=x+sZkmWY85M3h03TUjhmPav9SzmcWj0kqwTVvDbMEwc=; b=rDBJpdScMbaE7P4QYRyoe+wNZT
	sLzug1rAJoBOUW8VS4USMFAl04IOlhsqsFYAL2tgbacCNg7m30YHuP79UnRHiHOHGqq9sT3rTgrh4
	zKV/rw83nF3ZRydn2kL35pIlLmnJjKKFUp6YXj5Lwm8YzibM9FhHHwp51DqbhuAx1uCs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159220-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 159220: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=01456785ce093d95c6a8515e6b8feeb39e1820b8
X-Osstest-Versions-That:
    xen=687121f8a0e7c1ea1c4fa3d056637e5819342f14
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 10 Feb 2021 21:40:47 +0000

flight 159220 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159220/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  01456785ce093d95c6a8515e6b8feeb39e1820b8
baseline version:
 xen                  687121f8a0e7c1ea1c4fa3d056637e5819342f14

Last test of basis   159191  2021-02-09 23:00:29 Z    0 days
Failing since        159206  2021-02-10 12:01:51 Z    0 days    3 attempts
Testing same since   159220  2021-02-10 18:00:50 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   687121f8a0..01456785ce  01456785ce093d95c6a8515e6b8feeb39e1820b8 -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Feb 10 23:46:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 Feb 2021 23:46:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83738.156591 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9zBg-0007Ev-Rx; Wed, 10 Feb 2021 23:46:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83738.156591; Wed, 10 Feb 2021 23:46:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1l9zBg-0007Eo-Os; Wed, 10 Feb 2021 23:46:24 +0000
Received: by outflank-mailman (input) for mailman id 83738;
 Wed, 10 Feb 2021 23:46:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LpDB=HM=chromium.org=keescook@srs-us1.protection.inumbo.net>)
 id 1l9zBf-0007Ej-5D
 for xen-devel@lists.xenproject.org; Wed, 10 Feb 2021 23:46:23 +0000
Received: from mail-pg1-x52d.google.com (unknown [2607:f8b0:4864:20::52d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3fff3cd9-f03e-4e37-a6ca-e13748a058c7;
 Wed, 10 Feb 2021 23:46:22 +0000 (UTC)
Received: by mail-pg1-x52d.google.com with SMTP id o38so2337969pgm.9
 for <xen-devel@lists.xenproject.org>; Wed, 10 Feb 2021 15:46:22 -0800 (PST)
Received: from www.outflux.net (smtp.outflux.net. [198.145.64.163])
 by smtp.gmail.com with ESMTPSA id o4sm3039796pjs.57.2021.02.10.15.46.20
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 10 Feb 2021 15:46:21 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3fff3cd9-f03e-4e37-a6ca-e13748a058c7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=GxK3PI7vDPZB0gE5y+t0L6ekPC7epRKuLKN0gHcGI0k=;
        b=d3G7HKaV8TmLFxjZgmM1jOP6UW61nVg63mfethWVl7OVzgYXa4v/IGP6MBs7csssdM
         FzxOodhdMQjKziKVV5tXS3iah1TBI8bkHNvTqVAvxSVSAO9MlvRZ1ecRZNnnZjC+Ts+I
         Wn3ra5BRnmBNAwHijpb1eHcIh9e9+dERbWg+8=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=GxK3PI7vDPZB0gE5y+t0L6ekPC7epRKuLKN0gHcGI0k=;
        b=UMJFfXi+PL9U7g3opkMnzzlO7ViG9LZQY6fFcBaBIZfAXGpl+7pbVObcGRhG9amhCa
         y485j6RtsKYfLN/V6IxnmmmgEMUu1ks0hK3ShFCXpC2+9yt6m4HJku7MfBOIwlzSwzfC
         n2kFeQqCDgf21sQ8Tto/9LP+zpXeZGoH4teuVG8I4cFJZ2F8t6zvZ1pPJDgWutqvNBsI
         HGQPDoR4Fn56PSE+1cuN8AlHtUIDd78LMBdWi65CoYoL2rzK9A4jUPfMXRXZFLPEWiDW
         IGai98YDX8P2rrmeqrFXhRcesHM696tfIfmSBGB3Y6l9AEa/TRYXv5Jw0//HETHejZ6F
         Lg7g==
X-Gm-Message-State: AOAM531Bmy/CRoSbqoos9z/Pj2MlDciJM+XXyt1ZqXaiWZuZs2RuiMl9
	ezm7CRCYW8IbPIHltFth9qNpiQ==
X-Google-Smtp-Source: ABdhPJwqbO8wVuYN5ck6GRwRGKKtKGHav/dWOVfnnFJrcGzMbAZZSGMs7fiQXdjdff7l8sTmz1+MAw==
X-Received: by 2002:a62:5b87:0:b029:1d8:90df:a54b with SMTP id p129-20020a625b870000b02901d890dfa54bmr5460354pfb.79.1613000781625;
        Wed, 10 Feb 2021 15:46:21 -0800 (PST)
From: Kees Cook <keescook@chromium.org>
To: linux-kernel@vger.kernel.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Kees Cook <keescook@chromium.org>,
	Joe Perches <joe@perches.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	xen-devel@lists.xenproject.org
Subject: [PATCH] xen: Replace lkml.org links with lore
Date: Wed, 10 Feb 2021 15:46:18 -0800
Message-Id: <20210210234618.2734785-1-keescook@chromium.org>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
X-Patch-Hashes: v=1; h=sha256; g=dbcf4ca7bce2f2f2a80590b480a2765db93127bd; i=YSscTzp3lVgXfFntf8Kb5E2vpf2VUCH6pRlxS8sELt4=; m=kc8NgBcNkR1FdAUFoBcecPqIFWVDlhJwDI99K+gzg58=; p=VSpsoEkhhoc0b4MXqeePgJxef7V8SVFc1j/JqX3Kem4=
X-Patch-Sig: m=pgp; i=keescook@chromium.org; s=0x0x8972F4DFDC6DC026; b=iQIzBAABCgAdFiEEpcP2jyKd1g9yPm4TiXL039xtwCYFAmAkcEkACgkQiXL039xtwCbScw//f9b +LBIxQ3WQurWK0tZaB0GyM/Kpvjk0b47840okM1XwHAppPCijhOa3epm/1lpm/4c1mYH/aJdtmXM9 sy9eS2iLx+g/NFl3GZOqY+8tcMiIn7JFANfcHkw0YZfO4/9XuikNVkIsOmAdSIEHSoP0Uj32ZQs0Y yev7conmkSmWwsoT5pA8JRArJnXRqUqGp9CnJzFe951BtAiEhwxIljBmDYzEKr8GRerUZpFeJlhuz 13EuG6ZOM0TMYXmMHUQlgSKrXAJjnf7ZolvmpyLs9EfW/6Pi1jgCUxaTwEQrk1cnte++qGi5V5Xs7 rBkpzZ0dJahji64MOHxmUjd9duXSFIJZT2In3v3S+6b8364b2tfiOgMN4M3659LQkBEocxWpr+VRW Qn3qz5ni2MVf5bqfEi7Z0y6EQN4Mmol72DqYNcH61h6ZFzEhg4l5w9p7m7s5Bo7v6qZNDfZT45HsP 8ewiaKJoagZsZDdAaF1uQsoHMhlwaPzfPrwRhhtgLdyc7FdPYuUpqwyZpRwNKwkVkHaIPRKSw23TC kFvjFQJcqF3tgjxX6pMhCRO7Qq3bNsSKNgU27m14Lj1ZZKgPN4ByDIRl79Bd1OI4cg5iVZM83bh6m aFsn9Ua0YayuhndsTYVtLaFARcW9FmMHBNJI/6NL75LYmuGsUrx2cP6cfkvIdxtI=
Content-Transfer-Encoding: 8bit

As started by commit 05a5f51ca566 ("Documentation: Replace lkml.org
links with lore"), replace lkml.org links with lore to better use a
single source that's more likely to stay available long-term.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 drivers/xen/xen-acpi-processor.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/xen/xen-acpi-processor.c b/drivers/xen/xen-acpi-processor.c
index ce8ffb595a46..df7cab870be5 100644
--- a/drivers/xen/xen-acpi-processor.c
+++ b/drivers/xen/xen-acpi-processor.c
@@ -3,7 +3,8 @@
  * Copyright 2012 by Oracle Inc
  * Author: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  *
- * This code borrows ideas from https://lkml.org/lkml/2011/11/30/249
+ * This code borrows ideas from
+ * https://lore.kernel.org/lkml/1322673664-14642-6-git-send-email-konrad.wilk@oracle.com
  * so many thanks go to Kevin Tian <kevin.tian@intel.com>
  * and Yu Ke <ke.yu@intel.com>.
  */
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Thu Feb 11 01:08:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 01:08:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83741.156607 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lA0Sv-00009a-3d; Thu, 11 Feb 2021 01:08:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83741.156607; Thu, 11 Feb 2021 01:08:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lA0Su-00009T-Vq; Thu, 11 Feb 2021 01:08:16 +0000
Received: by outflank-mailman (input) for mailman id 83741;
 Thu, 11 Feb 2021 01:08:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fzmj=HN=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lA0Su-00009O-87
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 01:08:16 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8ff5b641-6e16-48e4-b0d9-508566a6165d;
 Thu, 11 Feb 2021 01:08:14 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8ff5b641-6e16-48e4-b0d9-508566a6165d
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613005694;
  h=to:cc:from:subject:message-id:date:
   content-transfer-encoding:mime-version;
  bh=h4Rb2NlNlrWX/8HboqWgNxrgkJDnC9nh/PCJWVmgfRg=;
  b=R3Cssq1UCrO9tUf2xO9Dyo3+KDn7vMBhGXLOhwV4uQniaAXPs9ch4sgQ
   sQoN8PLVoYAfKrMJFFhqF85tQ7tgjWpWb5OXtd05MnhZF0byW7sj/caJA
   ZHQ4/LzqY6QtC5D9BZtf2XmpuvBe3IYMi79GFTAzrgPa2NeTFyEZiO0k1
   M=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: DqxFU/Mg61dDQZT9r2vtkB46ZRXThQzEWxTbygXes+Ace5Ea42gb5uRvcsXz8fmRKx0zNcvUzk
 vIFgv/CyF63ykiWcWZKwSiiiyjaUImr0TTLJElIbczoUpUlSNxzeMXTYVlF5fvl+4gw44Emfm3
 iAtvSBwq3VvkyvMyHxuEF6B0WmiqzA/CwHmmbVhtWueb4G/qsF8GyWyctodZtMKb4j23f5CWV/
 MsAfZIJbwETPa77NaqeBZ6vxeNDwug2W9bZ7ftfVsjQ8qgEQ4PXEpIcKKHNcyUYOzvYZaPfTRX
 vB0=
X-SBRS: 5.2
X-MesageID: 37204297
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,169,1610427600"; 
   d="scan'208";a="37204297"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=P/ouGc3t4Vd0ouoEiYHwKFBH2VSPxFbgZXKYYUH6gFbHTPaDuzW+ClCTLpNevYdHzzEd9V+I7QjkcRRDMtQlMxy0K/SYChJqjH1DKqhTZ82pm+D7zzZzppnnh02gUsFRAKwiAzXRq1i/lik5PlH+qH+hc8SWk8wf29QQXOgRxM8nk27t/KCfBvArAGqMJEscotbPBflpFn9cvjNv+gX3JwCAD9nggnoPK5AdvrhpBRcep4mdeZ2AE5RTVVu6WY3fS1eNaD1qWeZ09mXBhxoYi/uaWbfpcuy2TGsmM8XgMnaMfj7BsFeAhF2xPt3s38gedn/MiMXwAwoBm1upImcNHg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=h4Rb2NlNlrWX/8HboqWgNxrgkJDnC9nh/PCJWVmgfRg=;
 b=G2LugsQN7g88NwBt8snmzrweGx6xzZEyvHbRbTunKmLQRyoOeQrmB1/IKeDteVa8zDe7Xroqy/0UwmPzScx45g35meiIrfF3m+dPSqVe9KvscTE5THgBmNvul5s7b3+PNMxyaqtxriw9e8iYuGYjYtH1z1Abl07UFwGdT/OEo4xYpgs8oVNaQ4kzeoYTEQzTp39q4dS52l3S1r05LlfelGe2PcIQHxsAwbdonKql/gFxAJq/W+QS7ONqZuWsRmC95+ENS2YFzP/gTztyeFaMuUUZWRB+3N4oV5Nc/unKZEeiCbbbPmloSbQg3q2GZzgd2cLKl/FDhU/x26XUL0J/1g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=h4Rb2NlNlrWX/8HboqWgNxrgkJDnC9nh/PCJWVmgfRg=;
 b=mIn3dhj3L2O85XfWELKgLBooharmBYY6w/UQvSTOLJAo6BTdQt+mdUENPeR179hVXbgW8wkit6PgRxfACk1DD/aAVU3GyrATh5IT9vk0lToJ7Rpbm80nGSocRb5tBNBoX1Tyfw9/auzJWpY/FST4gPhKetdN0wtDodwINRz9yas=
To: xen-devel <xen-devel@lists.xenproject.org>
CC: Ian Jackson <iwj@xenproject.org>, George Dunlap
	<george.dunlap@citrix.com>, Juergen Gross <jgross@suse.com>, Jan Beulich
	<jbeulich@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Stable library ABI compatibility checking
Message-ID: <22a01569-1ea0-bb87-eda1-1450d0229cf7@citrix.com>
Date: Thu, 11 Feb 2021 01:08:02 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LNXP265CA0026.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:5c::14) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: df49b5f2-7354-46da-4988-08d8ce297e5b
X-MS-TrafficTypeDiagnostic: BYAPR03MB4119:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB41192B7160430E525BD72731BA8C9@BYAPR03MB4119.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:2657;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: =?utf-8?B?QmFMSVc4WVl1MmxrNU13bWs1QTZDM1ozcnBJUjRUUTZVNDNEaE1ScnhWYUJx?=
 =?utf-8?B?aTNwVXBZY2VKZkJWRmFLejRGZ2hTZjNGd3QxcEpMbGplMnlRQ1FhMkQwNFVk?=
 =?utf-8?B?RExDdnA0bm9zZmZyNWJMSEVRY0w5NU1XTEVqZlJSUklrcU5xcUxTb2RneEhx?=
 =?utf-8?B?UFBMdER5K1FVNGowUXhibDZjSmxVVVlhY3d3R1QzejlKK3VjdHhQT0J2U0pH?=
 =?utf-8?B?MGxPdW5yZlFkTVVpRWV3OHM3VVdWN0pjbzY4SDRualh4RXNUVnVoQllQYnZx?=
 =?utf-8?B?VEF0QVcweERsK2ZLNlQ1c1JBSWUzNUxZaGM1bTlYR1N4N1Y2NmtwdUtDaFh4?=
 =?utf-8?B?ZmpvMmxHa1VzR3JaSXZYSmtwM3hIOU5jWGhmV01KamQ3dmdaclYvTnJrNTJt?=
 =?utf-8?B?Qit1MTZHSmNVMWk0bERnQ0ExK2pTbVlhMUFTcHUxWFFMWEhIbkFpVkxWSG5v?=
 =?utf-8?B?MmI3THBlNk90L21scStSd3hjQmovV0g3WVNpV1NyR1dQaldZZ05TQ25mQWZk?=
 =?utf-8?B?eWtTL0F3citNdHIvOExubUpEenRnYW1NWjhIZG9HWGNOMnZrN0wvdjBiMEc5?=
 =?utf-8?B?NmhvMUlDbDlWTzdJdGwrbkRZNUs2S1QyaXpBMUl0eEhVTHEzWlg2SWRJcCtT?=
 =?utf-8?B?SWhEZ092UjVVM0lRZHd2NEJEVW9DY3BkTkpWeFpJWDE4RXh3bml6TU9Tc2Vt?=
 =?utf-8?B?K2VOemh3UFhzNDByWmtXQSsyZVVXalg3QW5xa2lpa2VGaVVuemVCK2tSSWp0?=
 =?utf-8?B?UEtDVWRTcTIreDEwbW5vVEJWa0xyMFg3Slc1N29xcVdvOWZEaUkveXlEZ0pW?=
 =?utf-8?B?a3ozdTQwblZLVm5lejV2ZDFTenBIRGxvN1g1VmN0RGdSeW5nbVdEem9oOUxG?=
 =?utf-8?B?ZUVTalhhR2hQS2hwcmNtY2xYZXNDMjRUdm9Ja2hyU1Q1a0NXSHM0cWhnTEdO?=
 =?utf-8?B?TThPZUhMNnA5L21pbEg1SDFjc3pZeCtuTVV3eXp0UUR6aWxyT2ZQcW5sYlRx?=
 =?utf-8?B?WkNQaDNTMGdNb1dud056Y2xORTNmWDBBZDZqOGZ6N1FkaWFVWFpEd09ENGhG?=
 =?utf-8?B?bUY1ajVvT2lXT016aWkwZVE2T0gzL2h2enFzbWZuRUo1clVFeXh6OE9mUyt0?=
 =?utf-8?B?b2hobDNuMGtReG1LbWJPQ0lUZEVHaGo2WkZBYTVRbHVpdjNiZitzVGpYU25E?=
 =?utf-8?B?VlV2bm9GUG5aY3JPOEI2a2RQemMvVU1EZGdrWWl3WEJYaU5HUDJlb3JSRElS?=
 =?utf-8?B?aTR6QU82TmdtZ0dOMmozcmdvTEMxRUVEekVjSjlmZlorbEp5OFZGb2J0Y0E3?=
 =?utf-8?B?S1gvcVR1aFFoTmcxOGVxM2I4NVNPWDVNYVN3K1N0Z1IvZGhFblh3UDlHSkUv?=
 =?utf-8?B?YUdpUUt0RmFVTXc9PQ==?=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:5;SRV:;IPV:NLI;SFV:SPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:OSPM;SFS:(4636009)(346002)(376002)(39860400002)(136003)(396003)(366004)(4326008)(6666004)(2906002)(8676002)(31686004)(54906003)(66476007)(66556008)(16526019)(2616005)(956004)(86362001)(5660300002)(26005)(478600001)(8936002)(6486002)(66946007)(6916009)(316002)(16576012)(186003)(31696002)(966005)(36756003)(43740500002)(45980500001)(579004004);DIR:OUT;SFP:1501;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?eE05blNmeUx4V3lZNmQzMzdwQUMxVndRajRRRjF4ZDh5bVkzRlQwQzJJL2t3?=
 =?utf-8?B?WnNGRVMzU09vWHVSK0t0Q2hKM1dLbXZYbTdCNXRJWFlWVHNVMS9UcUFIZnZJ?=
 =?utf-8?B?MTV0bk4xaGlWVmFoaXNQbnFXVDRmRUY0MlgwN2taRzJ1OGNmdUJvSCt3RjFr?=
 =?utf-8?B?VmtGN015Y0J0Q0tBSjBMS0o5MnhZa2RGZFNpc0hJRk1tazdaWjJHMjR5eFBD?=
 =?utf-8?B?S1hiTFhrb0Z0NG9iaEE4ZDBENWt4NysrY0o2bzZkbXNUQ0dqTFVlTGZyeGJI?=
 =?utf-8?B?Ynh1MGJ3blF5OHlVa05UQytNZDdOY3RRUXNLbzlKVEpiNlhtTk4xRzYvMm5j?=
 =?utf-8?B?a0IvWFVaaEp6NjF2blk1QTlHQlV0Y3FFdFZVWmE2UVBOUVF4MnA2QVg5WlVU?=
 =?utf-8?B?NExZbURHTkZJcW5YUWhWdkVSVG42YlA2NkJGeHZmanVlcnl0dTRlVzh5VTlC?=
 =?utf-8?B?TG1Mb3R5aDZqOC9jc2Z6VFBOUHdrU1lQNjFNQmVxd3Vqelp3OGE0MDBmVHZi?=
 =?utf-8?B?TXhjNmE1WEhkUk5hSUpsL0tNWllWek1XLzBrTEVTWDhBUDNZZVNyQU13NVJ6?=
 =?utf-8?B?QmVLMGszaTRTVGI5bm5Gdm5TekJaS25qajV2TTdwNVBOYW4xbysrNmJaMkJw?=
 =?utf-8?B?QTAvVDdadzE3Z3lHTmQ5QURBVUluNG1PMWNKNzh1Y0k0QlpCWFdmUW9uUlVp?=
 =?utf-8?B?cFFpWXcwM0lYVW03bUpiSGRGMGM0dTRFS3VTRjRYQURGL1F6QkpOKzJUSkhj?=
 =?utf-8?B?M3JwQ0JDKzdkcFowQVh3SHRzNU5HeHVWS2JnTFVoNTFQeHpEMmUyL1NPTERJ?=
 =?utf-8?B?VkZYRlBQSHExQnM5TVNZYmVmMDZHVGNqNGtoNisvTFVFblY4VnFuWXd5dXZ1?=
 =?utf-8?B?cHRCanV3OXdTSi9DZ0J6NGs5M1NRUytNZVpKb2hRWXhpdlg2bEZiVXJkZlBJ?=
 =?utf-8?B?SjRIOUEvemFWWXZmTmRUYi9PWW9iY1dnaUVHbTRmMktHUGZVSTlHemJ3WDdx?=
 =?utf-8?B?eUdJc3dlY1o1V3U2WkVLb1Fva3dSZzN5dG9pRitXK25yOE43eWZ3OVdkWWhy?=
 =?utf-8?B?a0hJQ2xoU2orOTlVL2FSM0ozYVE0SW9qZk9aZ3ZrelgvR2ozdXVUZ3ZxZDRZ?=
 =?utf-8?B?aXQ3R0grUlc5ek1rWW0zNFp1Rm5tTklXcmVwRGlTaERITWlqUEExKzRwRi9K?=
 =?utf-8?B?dk1CZTdpMmxNbjdqQ3JrTWtBVXdneTY4N2J6bG81YkdMOGloOWl3UENMZjVi?=
 =?utf-8?B?OHJMRGNibXRsMDQ3Q2t2aFBmaSszSDlTL2VPNjFMVk9ZUUNRK2lGcWZaTnk0?=
 =?utf-8?B?a3hTOS9NNGdqYVdOa2pEbDZpcmZNakVPNWtZQy9Kd0VqZWxhME5TTi9qKzcv?=
 =?utf-8?B?RHdYZURTckQvSXhCNTJJdExiTHROYWxqSTQycHFlNVg3b1JsU2pWREdnYjBz?=
 =?utf-8?B?aklEL3FTVVlqQmY4ZXhnamhIZTFlcUM2aW5IWjY4bE1SUlhHTm5tVEVYZ0hI?=
 =?utf-8?B?bThmVTEyUGJ5WlFQdnFBMjlWR3g1M2l0VWVPU2RYUlZhL2NZclAxSERENXdy?=
 =?utf-8?B?K281c1BiZ0tUS1FYL2UvekZOOHpjeTBqaHgwTHMxNHd5YW5KU0I3NXFlN0to?=
 =?utf-8?B?SkVRVFJYVzZ1Rkg4bENFVERyUWVLM1YrNUFiZnRtRUdJWUNReVBKeWd2R3I1?=
 =?utf-8?B?cXhFN2I2b3Bkd3RZWUpQYUYrL2VIOUJjOFgwWkVGbVZCa3BpUzVuOXltaUFp?=
 =?utf-8?Q?kfqOI1eXXE2sP0cUn+DbTH4j5OuUk7fPa8K4pSL?=
X-MS-Exchange-CrossTenant-Network-Message-Id: df49b5f2-7354-46da-4988-08d8ce297e5b
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Feb 2021 01:08:08.3127
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: w6GSYXdwPHIt4cw6h5b5+UhxmyJsC02cIjCqmuD7D4UH88A/CYyuEROqqSj564GKgy7aKHfNhtYinbIljfGDsCIsN9OJAY+P4X5i/Xf6ahA=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4119
X-OriginatorOrg: citrix.com

Hello,

Last things first, some examples:

http://xenbits.xen.org/people/andrewcoop/abi/libxenevtchn-1.1_to_1.2.html
http://xenbits.xen.org/people/andrewcoop/abi/libxenforeignmemory-1.3_to_1.4.html

These are ABI compatibility reports between RELEASE-4.14.0 and staging.

They're produced with abi-dumper (which takes a shared object and header
file(s) and writes out a text "dump" file, which happens to be a perl
hash) and abi-compliance-checker, which takes two dumps and produces
the HTML reports linked above.  They're both Debian tools originally,
but have found their way across the ecosystem.  They have no impact on
build time (when invoked correctly).

I'm encouraged to see that the foreignmem analysis has even spotted that
we deliberately renamed one parameter to clarify its purpose.


For the stable libraries, the RELEASE-$X.$Y.0 tag is the formal point
when accumulated changes in staging become fixed.  What we ought to be
doing is taking a "dump" of libraries at this point, and publishing
them, probably on xenbits.

Subsequent builds on all the staging branches should be performing an
ABI check against the appropriate lib version.  This will let us catch
breakages in staging (c/s e8af54084586f4) as well as breakages if we
ever need to backport changes to the libs.

For libraries wrapped by Juergen's tools/libs/ common-makefile changes,
adding ABI dumping is easy.  The only complication is needing to build
everything with "-Og -g", but this is easy to arrange, and frankly ought
to be the default for debug builds anyway (the current use of -O0 is
silly and interferes with common distro hardening settings).

What I propose is tweaking the library build to write out
lib$FOO.so.$X.$Y-$(ARCH).abi.dump files.  A pristine set of these should
be put somewhere on xenbits, and a task put on the release technicians'
checklist for future releases.

That way, subsequent builds which have these tools available can include
a call to abi-compliance-checker between the authoritative copy and the
one from the local build, which will make the report available, and exit
nonzero on breaking problems.
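As a concrete sketch of what that check could look like (the library
name, versions, and paths here are purely illustrative, and the tool
invocations are indicative rather than a finished Makefile rule):

```shell
# Proposed dump naming scheme from above: lib$FOO.so.$X.$Y-$(ARCH).abi.dump
FOO=xenevtchn; X=1; Y=2; ARCH=x86_64
DUMP="lib${FOO}.so.${X}.${Y}-${ARCH}.abi.dump"

# Indicative invocations (both need the library built with -Og -g):
#   abi-dumper "lib${FOO}.so.${X}.${Y}" -o "${DUMP}" -lver "${X}.${Y}"
#   abi-compliance-checker -l "lib${FOO}" \
#       -old "baseline/${DUMP}" -new "build/${DUMP}"
# abi-compliance-checker writes an HTML report and exits nonzero on
# breaking changes, which is what makes it usable as a build gate.

echo "${DUMP}"
```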


To make the pristine set, I need to retrofit the tooling to 4.14 and
ideally 4.13.  This is in contravention of our normal policy of not
backporting features, but I think, being optional build-time-only
tooling, it is worthy of an exception considering the gains we get
(specifically - to be able to check for ABI breakages on these branches
in OSSTest).  Backporting to 4.12 and older is far more invasive, due to
it being before the library build systems were common'd.


Anyway, thoughts?

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Feb 11 01:52:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 01:52:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83749.156625 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lA1A1-0004ff-MM; Thu, 11 Feb 2021 01:52:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83749.156625; Thu, 11 Feb 2021 01:52:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lA1A1-0004fY-Ib; Thu, 11 Feb 2021 01:52:49 +0000
Received: by outflank-mailman (input) for mailman id 83749;
 Thu, 11 Feb 2021 01:52:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sXjU=HN=xilinx.com=stefanos@srs-us1.protection.inumbo.net>)
 id 1lA19z-0004fT-T8
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 01:52:48 +0000
Received: from NAM12-MW2-obe.outbound.protection.outlook.com (unknown
 [40.107.244.76]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 86175292-91d0-4b3d-a3dd-e30c7df3e4fd;
 Thu, 11 Feb 2021 01:52:45 +0000 (UTC)
Received: from BL0PR03CA0008.namprd03.prod.outlook.com (2603:10b6:208:2d::21)
 by MWHPR02MB2512.namprd02.prod.outlook.com (2603:10b6:300:41::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3825.23; Thu, 11 Feb
 2021 01:52:44 +0000
Received: from BL2NAM02FT060.eop-nam02.prod.protection.outlook.com
 (2603:10b6:208:2d:cafe::e7) by BL0PR03CA0008.outlook.office365.com
 (2603:10b6:208:2d::21) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3846.25 via Frontend
 Transport; Thu, 11 Feb 2021 01:52:44 +0000
Received: from xsj-pvapexch01.xlnx.xilinx.com (149.199.62.198) by
 BL2NAM02FT060.mail.protection.outlook.com (10.152.76.124) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.3846.25 via Frontend Transport; Thu, 11 Feb 2021 01:52:43 +0000
Received: from xsj-pvapexch02.xlnx.xilinx.com (172.19.86.41) by
 xsj-pvapexch01.xlnx.xilinx.com (172.19.86.40) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.1913.5; Wed, 10 Feb 2021 17:52:40 -0800
Received: from smtp.xilinx.com (172.19.127.95) by
 xsj-pvapexch02.xlnx.xilinx.com (172.19.86.41) with Microsoft SMTP Server id
 15.1.1913.5 via Frontend Transport; Wed, 10 Feb 2021 17:52:40 -0800
Received: from [10.23.123.28] (port=64995 helo=localhost)
 by smtp.xilinx.com with esmtp (Exim 4.90)
 (envelope-from <stefano.stabellini@xilinx.com>)
 id 1lA19s-0002vc-Ll; Wed, 10 Feb 2021 17:52:40 -0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 86175292-91d0-4b3d-a3dd-e30c7df3e4fd
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=mVrGnvt3MPBGhv71nxxnOhxibrSTUNVDqqNAjcW1suHQbEGmJyoEAah+NjrrgP+wqDcJVDZ4UlTJV+Xx8NmYgkEg70zK+CND8vI4rTRwAXddg9NBj0URtdr15Ut6Q5SBMdCAAazNQxsrjc79RwejkKzT+/JofVFpY7Gu1fTP1bgjEPSDMJWyJ/787UiH6DNlW1H2NYgI3qDN1l4WS1TjDzES76Lan04SQCM5D2uP/6wgE8523uAz+fnD9PNlXDxMxJCH75odbYrBXhtkAu1EmQ7AExOgVMfYr2M0Cp3exMT/6KAqlI/0paWlAT8r09WeYApMWX+Cl8d4UV8MCGrB7g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hZ9eeh0GmXQiwbKPLpPRAgdfEfGOmeb7VRhcUsjxMFk=;
 b=IexpxZLSZzljxRhcRrpDdcmCdBK4Z9fs9202lnwzonpNsC9TFBKPAQZ2Bg7w70yBOBPzwm/Fc0uiyO6rKNxWQxdHEkqFEVVdcBattXK8sF6LUqpZ2C+ne2bJGl8vRN8FDbfkhMk2CsQJ3I5GRdSJMKz525KZ5RjVgBNm96BcnqRh5Qtgx3eTQG+I4YE8E665OWTDclDOaq6FGm2VIdJ7FadhNIeUIvQp8cAmcEdE1L28UlrnmKThCfKMOpfq3vTIpPdtLHMVpSVCWmLevqoeGmxEfvw0ziG74OdR5nEwBNWDFALgAkLzUryrrp2FIqn7jNIC7bJ3EtZnvn98gpJfmA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 149.199.62.198) smtp.rcpttodomain=apertussolutions.com
 smtp.mailfrom=xilinx.com; dmarc=bestguesspass action=none
 header.from=xilinx.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=xilinx.onmicrosoft.com; s=selector2-xilinx-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hZ9eeh0GmXQiwbKPLpPRAgdfEfGOmeb7VRhcUsjxMFk=;
 b=qAl5miOKHjOa/pF0gm3uzyLzM0QmkH46ojnxC7ivlQKvpK38UU/i2AlOmKCY1fspG0vSO/bDXB08b+vocIVI4dz9iALPOsq7f/mhiZAieYZcTXdBVn+yD/UbUdTT//iaUJl1e5y7eCwxKwtznsiIBX22d7dCv/Jku05wGJPFLIc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 149.199.62.198)
 smtp.mailfrom=xilinx.com; apertussolutions.com; dkim=none (message not
 signed) header.d=none;apertussolutions.com; dmarc=bestguesspass action=none
 header.from=xilinx.com;
Received-SPF: Pass (protection.outlook.com: domain of xilinx.com designates
 149.199.62.198 as permitted sender) receiver=protection.outlook.com;
 client-ip=149.199.62.198; helo=xsj-pvapexch01.xlnx.xilinx.com;
Date: Wed, 10 Feb 2021 17:52:40 -0800
From: Stefano Stabellini <stefano.stabellini@xilinx.com>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: "Daniel P. Smith" <dpsmith@apertussolutions.com>
CC: <xen-devel@lists.xenproject.org>, <christopher.w.clark@gmail.com>,
	<andrew.cooper3@citrix.com>, <stefano.stabellini@xilinx.com>,
	<jgrall@amazon.com>, <iwj@xenproject.org>, <wl@xen.org>,
	<george.dunlap@citrix.com>, <jbeulich@suse.com>, <persaur@gmail.com>,
	<adam.schwalm@starlab.io>, <tomase@xilinx.com>, <brucea@xilinx.com>
Subject: Re: [RFC PATCH v2] docs/design: boot domain device tree design
In-Reply-To: <20210203000952.31695-1-dpsmith@apertussolutions.com>
Message-ID: <alpine.DEB.2.21.2102101744580.7114@sstabellini-ThinkPad-T480s>
References: <20210203000952.31695-1-dpsmith@apertussolutions.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-135185210-1613007905=:7114"
Content-ID: <alpine.DEB.2.21.2102101745060.7114@sstabellini-ThinkPad-T480s>
X-EOPAttributedMessage: 0
X-MS-Office365-Filtering-HT: Tenant
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 4147b980-6805-460e-9638-08d8ce2fb938
X-MS-TrafficTypeDiagnostic: MWHPR02MB2512:
X-Microsoft-Antispam-PRVS:
	<MWHPR02MB25127AC0B374BB0D7150F1B6A08C9@MWHPR02MB2512.namprd02.prod.outlook.com>
X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	8c5OsGF7lMztrzyuI3+E07RU+6eNWRrzKILlZUSDe/LlTgo7uNbRbW5Y390UjI00vRlJgXSgN/U3gZ89DFkWnwiV6UvAk70yBz7D1m2S5Uu0iSTOBoXrvgfAM+1rudN+o6PcAwNX5LbiGHgr8olYltUwqsOXQ+w8qqkhjsXLRSTpKm4CAJo71p5eG6AZSz275N9zJhEYuaZLYItVdwXJ0ivi2shahvUK7lZ/z0/A1Cg/Z6TAV37dH/vPx1DxL/mmTYFzpTqPVadp46rP3BI4Uy4SKpEIGFx69HynpSlSivDeB56ts9CfEKnoavb0aeocK3j/nsDvQBLTxTGHggo/X09KuuM3r4hv9ydEOK8wHiTQGQLyokWMxPf0PDeQgfWBy6Wv2kkkSfxmFLT0dmqCel4B6tZvLxx5gkltwoM/u+VkUozDpthl+RpkLSuBKbo/J5maX+BrcRDh3M5d3HwvUuTmVLWRMRwbX2LMo9xG0pDlz5rwT9Cp8msSntRNt9iKez54s9WU4a8Eei2jkBcHhOXB1ut4M2l++y3zO+McdC/vmj/ymYHFcF0hPwUwIJQ34AajzmVxJSDm+1kMeaK8mFQkWfzeq5lrTwIRs2qkwQwatHazBU7o3aXOWTKtkqksR1OWb3XqwzQdqJwyx+04FA0RN+Z7rycU5afrJpE69wAl8ac9tL1PmCNZtn7XAdqY+7hUGAO/UgebNeFtBiYxrVa/OKreW3vAJoJIa+jJwZ8=
X-Forefront-Antispam-Report:
	CIP:149.199.62.198;CTRY:US;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:xsj-pvapexch01.xlnx.xilinx.com;PTR:unknown-62-198.xilinx.com;CAT:NONE;SFS:(7916004)(4636009)(396003)(136003)(376002)(39860400002)(346002)(36840700001)(46966006)(356005)(8936002)(70206006)(7636003)(33716001)(5660300002)(54906003)(7416002)(9786002)(36906005)(70586007)(30864003)(82740400003)(966005)(6916009)(336012)(478600001)(2906002)(83380400001)(316002)(36860700001)(82310400003)(26005)(33964004)(186003)(8676002)(426003)(4326008)(9686003)(107886003)(47076005)(44832011);DIR:OUT;SFP:1101;
X-OriginatorOrg: xilinx.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Feb 2021 01:52:43.7794
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 4147b980-6805-460e-9638-08d8ce2fb938
X-MS-Exchange-CrossTenant-Id: 657af505-d5df-48d0-8300-c31994686c5c
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=657af505-d5df-48d0-8300-c31994686c5c;Ip=[149.199.62.198];Helo=[xsj-pvapexch01.xlnx.xilinx.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BL2NAM02FT060.eop-nam02.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MWHPR02MB2512

--8323329-135185210-1613007905=:7114
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.21.2102101745061.7114@sstabellini-ThinkPad-T480s>

On Tue, 2 Feb 2021, Daniel P. Smith wrote:
> This is a Request For Comments on the adoption of Device Tree as the
> format for the Launch Control Module as described in the previously
> posted DomB RFC.
> 
> For RFC purposes, a rendered copy of this file can be found here:
> https://drive.google.com/file/d/1l3fo4FylvZCQs1V00DcwifyLjl5Jw3W8/view?usp=sharing
> 
> Details on the DomB boot domain can be found on Xen wiki:
> https://wiki.xenproject.org/wiki/DomB_mode_of_dom0less
> 
> Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>
> Signed-off-by: Christopher Clark <christopher.clark@starlab.io>
> 
> Version 2
> ---------
> 
>  - cleaned up wording
>  - updated example to reflect a real configuration
>  - add explanation of mb2 modules that would be loaded
>  - add the config node
> ---
>  docs/designs/boot-domain-device-tree.rst | 235 +++++++++++++++++++++++++++++++
>  1 file changed, 235 insertions(+)
>  create mode 100644 docs/designs/boot-domain-device-tree.rst
> 
> diff --git a/docs/designs/boot-domain-device-tree.rst b/docs/designs/boot-domain-device-tree.rst
> new file mode 100644
> index 0000000000..558d75a796
> --- /dev/null
> +++ b/docs/designs/boot-domain-device-tree.rst
> @@ -0,0 +1,235 @@
> +====================================
> +Xen Boot Domain Device Tree Bindings
> +====================================
> +
> +The Xen Boot Domain device tree adopts the dom0less device tree structure and
> +extends it to meet the requirements for the Boot Domain capability. The primary
> +difference is the introduction of the ``xen`` node that is under the ``/chosen``
> +node. The move to a dedicated node was driven by:
> +
> +1. Reduces the need to walk over nodes that are not of interest, e.g. only
> +nodes of interest should be in ``/chosen/xen``
> +
> +2. Enables the use of the ``#address-cells`` and ``#size-cells`` fields on the
> +xen node.
> +
> +3. Allows for the domain construction information to easily be sanitized by
> +simply removing the ``/chosen/xen`` node.
> +
> +Below is an example device tree definition for a xen node followed by an
> +explanation of each section and field:

This is great!

You should know that there is an effort ongoing to standardize a set of
device tree nodes for static hypervisors and heterogeneous computing
configurations. It is called system device tree, and it is also aligned
with our dom0less nodes; in fact, you could say that dom0less is the
predecessor of system device tree. It is great to see that this proposal
is also very well aligned with it. We are all going in the same
direction, excellent! :-)

https://www.youtube.com/watch?v=2o-B21unV9M
https://github.com/devicetree-org/lopper/blob/68c35bdb92d25a24a1a0b9b3eaf258c034b1f5db/device-trees/system-device-tree.dts#L784


I am going to suggest a few changes below to make it even more
aligned and more device tree compatible. It might even allow us to
reuse system device tree tools such as lopper in the long term.




> +::
> +    xen {

This needs a compatible string too. In system device tree every domain
(Xen domain or AMP domain) has "openamp,domain" as compatible. I think
it would be good to reuse it here. We could also use something else, but
either way, I'd add a compatible string.


> +        #address-cells = <1>;
> +        #size-cells = <0>;
> +
> +        // Configuration container
> +        config@0 {
> +            #address-cells = <1>;
> +            #size-cells = <0>;
> +            compatible = "xen,config";
> +
> +            // reg is required but ignored for "xen,config" node
> +            reg = <0>;

I think it might be better to avoid using regs for the domid, and
instead add a new property for it.


> +            module@1 {
> +                compatible = "multiboot,microcode", "multiboot,module";
> +                reg = <1>;
> +            };
> +
> +            module@2 {
> +                compatible = "multiboot,xsm-policy", "multiboot,module";
> +                reg = <2>;
> +            };
> +        };
> +
> +        // Boot Domain definition
> +        domain@0x7FF5 {
> +            #address-cells = <1>;
> +            #size-cells = <0>;
> +            compatible = "xen,domain";

I would add the domid here, e.g.:

domid = <1>;
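
Putting these suggestions together, the domain node might end up
looking something like this (a sketch only: "openamp,domain" and the
domid property are suggestions from this thread, not an accepted
binding):

```dts
// Boot Domain definition, with an explicit domid instead of reg
domain@0x7FF5 {
    compatible = "xen,domain", "openamp,domain";
    #address-cells = <1>;
    #size-cells = <0>;

    domid = <0x7FF5>;       // dedicated property rather than reg
    memory = <0x0 0x20000>;
    cpus = <1>;
};
```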


> +            reg = <0x7FF5>;
> +            memory = <0x0 0x20000>;
> +            cpus = <1>;
> +            module@3 {
> +                compatible = "multiboot,kernel", "multiboot,module";
> +                reg = <3>;
> +            };
> +
> +            module@4 {
> +                compatible = "multiboot,ramdisk", "multiboot,module";
> +                reg = <4>;
> +            };
> +            module@5 {
> +                compatible = "multiboot,config", "multiboot,module";
> +                reg = <5>;
> +            };
> +
> +        // Classic Dom0 definition
> +        domain@0 {
> +            #address-cells = <1>;
> +            #size-cells = <0>;
> +            compatible = "xen,domain";
> +
> +            reg = <0>;
> +
> +            // PERMISSION_NONE          (0)
> +            // PERMISSION_CONTROL       (1 << 0)
> +            // PERMISSION_HARDWARE      (1 << 1)
> +            permissions = <3>;
> +
> +            // FUNCTION_NONE            (0)
> +            // FUNCTION_BOOT            (1 << 1)
> +            // FUNCTION_CRASH           (1 << 2)
> +            // FUNCTION_CONSOLE         (1 << 3)
> +            // FUNCTION_XENSTORE        (1 << 30)
> +            // FUNCTION_LEGACY_DOM0     (1 << 31)
> +            functions = <0xFFFFFFFF>;
> +
> +            // MODE_PARAVIRTUALIZED     (1 << 0) /* PV | PVH/HVM */
> +            // MODE_ENABLE_DEVICE_MODEL (1 << 1) /* HVM | PVH */
> +            // MODE_LONG                (1 << 2) /* 64 BIT | 32 BIT */
> +            mode = <5>; /* 64 BIT, PV */
> +
> +            // UUID
> +            domain-handle = [B3 FB 98 FB 8F 9F 67 A3];

let's use uuid, see also below


> +            cpus = <1>;
> +            memory = <0x0 0x20000>;
> +            security-id = <0>;
> +
> +            module@6 {
> +                compatible = "multiboot,kernel", "multiboot,module";
> +                reg = <6>;
> +                bootargs = "console=hvc0";
> +            };
> +            module@7 {
> +                compatible = "multiboot,ramdisk", "multiboot,module";
> +                reg = <7>;
> +            };
> +        };
> +    };
> +
> +The multiboot modules that would be supplied when using the above config would
> +be, in order:
> + - (the above config, compiled)
> + - CPU microcode
> + - XSM policy
> + - kernel for boot domain
> + - ramdisk for boot domain
> + - boot domain configuration file
> + - kernel for the classic dom0 domain
> + - ramdisk for the classic dom0 domain
> +
> +The Xen node
> +------------
> +The xen node is a top-level container for the domains that will be built by the
> +hypervisor on start-up. On the ``xen`` node the ``#address-cells`` is set to one
> +and the ``#size-cells`` is set to zero. This is done to enforce that each domain
> +node must define a reg property and the hypervisor will use it to determine the
> +``domid`` for the domain.

I understand that it would be good to specify the domid, but why not
use a new device tree property for it? Reusing reg seems a bit
strange, and it causes a bit of awkwardness like reg = <0> for
xen,config.

You could just use one of the following:

domid = <0x1>;
domain-id = <0x1>;

and also:

uuid = "12345678-9abc-def0-1234-56789abcdef0";

The latter one is used in a recent secure enclave proposal:
https://lore.kernel.org/linux-devicetree/20201204121137.2966778-2-sudeep.holla@arm.com/

I think uuid would be better than domain-handle.
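Putting the two suggestions together, a domain node could then look like
this (domid and uuid are the proposed property names, not final):

domain@1 {
    compatible = "xen,domain";
    domid = <0x1>;
    uuid = "12345678-9abc-def0-1234-56789abcdef0";
};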


> +The Config node
> +---------------
> +
> +A config node is for detailing any multiboot modules that are of interest to Xen
> +itself. For example, this is where Xen would be informed of microcode or
> +XSM policy locations.

If you make the top-level xen node a domain too, then the module can be
put directly under it and you don't need a config node at all.


> For locating the multiboot modules, the #address-cells and
> +#size-cells are set according to how the multiboot modules are identified and
> +located. If the multiboot modules will be located by index within the module
> +chain, the values should be "1" and "0" respectively. If the multiboot module
> +will be located by physical memory address, then the values should be "1" and
> +"1" respectively.

This is an interesting problem. The device tree way to do this would be
to use the "ranges" property of the parent.

If the multiboot module is located by physical memory address, the
parent node is going to have a range property:

xen {
    compatible = "openamp,domain";
    ranges;
    #address-cells = <0x1>;
    #size-cells = <0x1>;

    module@1 {
        compatible = "multiboot,microcode", "multiboot,module";
        reg = <0x0 0x10000>;
    };
    
    module@2 {
        compatible = "multiboot,xsm-policy", "multiboot,module";
        reg = <0x10000 0x10000>;
    };
}


Instead, if the multiboot module is not a physical memory address, but
an index in the multiboot chain, then you'd remove ranges from the
parent:

xen {
    compatible = "openamp,domain";
    #address-cells = <0x1>;
    #size-cells = <0x0>;

    module@1 {
        compatible = "multiboot,microcode", "multiboot,module";
        reg = <0x1>;
    };
    
    module@2 {
        compatible = "multiboot,xsm-policy", "multiboot,module";
        reg = <0x2>;
    };
}


ranges is meant to express whether the children are supposed to map to
the CPU address space or not.

This is going to work fine as long as all the modules are either in
physical memory or have a multiboot index. There is also the issue that
the existing dom0less spec doesn't use ranges and we don't want to break
compatibility (interestingly, even system device tree doesn't use ranges
for domains.)

So maybe it is better to avoid the problem and just use a different way
to specify that you are referring to a multiboot index. Something like:

xen {
    compatible = "openamp,domain";
    #address-cells = <0x1>;
    #size-cells = <0x1>;

    module@1 {
        compatible = "multiboot,microcode", "multiboot,module";
        multiboot-index = <0x1>;
    };
    
    module@2 {
        compatible = "multiboot,xsm-policy", "multiboot,module";
        reg = <0x10000 0x10000>;
    };
}

This way we can mix the two for different modules and don't break
compatibility with dom0less. Any thoughts?
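To illustrate, the mixed scheme applied inside a domain node would allow
a kernel referenced by multiboot index next to a ramdisk referenced by
physical address (the addresses and indexes below are made up):

domain@1 {
    compatible = "xen,domain";
    #address-cells = <0x1>;
    #size-cells = <0x1>;

    module@1 {
        compatible = "multiboot,kernel", "multiboot,module";
        multiboot-index = <0x1>;
    };

    module@2 {
        compatible = "multiboot,ramdisk", "multiboot,module";
        reg = <0x48000000 0x10000>;
    };
};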



> +\#address-cells
> +  Identifies number of fields for address. Required.
> +
> +\#size-cells
> +  Identifies number of fields for size. Required.
> +
> +compatible
> +  Identifies the type of node. Required.
> +
> +reg
> +  Unused. Required.
> +
> +The Domain node
> +---------------
> +A domain node is for describing the construction of a domain. It is free to set
> +the #address-cells and #size-cells depending on how the multiboot modules are
> +identified and located. If the multiboot modules will be located by index within
> +the module chain, the values should be "1" and "0" respectively. If the
> +multiboot module will be located by physical memory address, then the values
> +should be "1" and "1" respectively.
> +
> +As previously mentioned a domain node must have a reg property which will be
> +used as the requested domain id for the domain with a value of "0" signifying to
> +use the next available domain id. A domain configuration is not able to request
> +a domid of "0". After that a domain node may have any of the following
> +parameters,
> +
> +\#address-cells
> +  Identifies number of fields for address. Required.
> +
> +\#size-cells
> +  Identifies number of fields for size. Required.
> +
> +compatible
> +  Identifies the type of node. Required.
> +
> +reg
> +  Identifies the domid requested to assign to the domain. Required.
> +
> +permissions
> +  This sets what Discretionary Access Control permissions
> +  a domain is assigned. Optional, default is none.
> +
> +functions
> +  This identifies what system functions a domain will fulfill.
> +  Optional, the default is none.
> +
> +.. note::  The `functions` bits that have been selected to indicate ``FUNCTION_XENSTORE`` and ``FUNCTION_LEGACY_DOM0`` are the last two bits (30, 31) such that should these features ever be fully retired, the flags may be dropped without leaving a gap in the flag set.
> +
> +mode
> +  The mode the domain will be executed under. Required.
> +
> +domain-handle
> +  A globally unique identifier for the domain. Optional,
> +  the default is NULL.
> +
> +cpus
> +  The number of vCPUs to be assigned to the domain. Optional,
> +  the default is "1".
> +
> +memory
> +  The amount of memory to assign to the domain, in KBs.
> +  Required.
> +
> +security-id
> +  The security identity to be assigned to the domain when XSM
> +  is the access control mechanism being used. Optional,
> +  the default is "domu".
> +
> +The Module node
> +---------------
> +This node describes a multiboot module loaded by the boot loader. The required
> +compatible property follows the format: multiboot,<type> where type can be
> +"module", "kernel", "ramdisk", "device-tree", "microcode", "xsm-policy" or
> +"config". The reg property is required and identifies how to locate the
> +multiboot module.
> +
> +compatible
> +  This identifies what the module is and thus what the hypervisor
> +  should use the module for during domain construction. Required.
> +
> +reg
> +  This identifies where this module is located within the multiboot
> +  module chain. Required.
> +
> +bootargs
> +  This is used to override the boot params carried with the
> +  multiboot module.
> +
> +.. note::  The bootargs property is intended for situations where the same kernel multiboot module is used for more than one domain.
> +
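As an illustration of that note, two domains could share the same kernel
multiboot module while booting with different arguments (the module index
and arguments below are made up):

domain@1 {
    module@6 {
        compatible = "multiboot,kernel", "multiboot,module";
        reg = <6>;
        bootargs = "console=hvc0 root=/dev/xvda";
    };
};

domain@2 {
    module@6 {
        compatible = "multiboot,kernel", "multiboot,module";
        reg = <6>;
        bootargs = "console=hvc0 root=/dev/xvdb";
    };
};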
--8323329-135185210-1613007905=:7114--


From xen-devel-bounces@lists.xenproject.org Thu Feb 11 04:13:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 04:13:20 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159200-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 159200: regressions - FAIL
X-Osstest-Failures:
    linux-5.4:test-arm64-arm64-xl-xsm:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-credit2:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-credit1:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-seattle:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-thunderx:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-libvirt:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-cubietruck:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-arndale:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    linux-5.4:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=5e1942063dc3633f7a127aa2b159c13507580d21
X-Osstest-Versions-That:
    linux=a829146c3fdcf6d0b76d9c54556a223820f1f73b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 11 Feb 2021 04:12:58 +0000

flight 159200 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159200/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm      14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl          14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-credit2  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-credit1  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-seattle  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-credit1  14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-credit2  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-thunderx 14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-multivcpu 14 guest-start             fail REGR. vs. 158387
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-cubietruck 14 guest-start            fail REGR. vs. 158387
 test-armhf-armhf-xl          14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-arndale  14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 158387
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 158387

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     14 guest-start              fail REGR. vs. 158387

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 158387
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 158387
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 158387
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 158387
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 158387
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 158387
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 158387
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 158387
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 158387
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                5e1942063dc3633f7a127aa2b159c13507580d21
baseline version:
 linux                a829146c3fdcf6d0b76d9c54556a223820f1f73b

Last test of basis   158387  2021-01-12 19:40:06 Z   29 days
Failing since        158473  2021-01-17 13:42:20 Z   24 days   35 attempts
Testing same since   159200  2021-02-10 11:25:12 Z    0 days    1 attempts

------------------------------------------------------------
453 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 13546 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Feb 11 04:57:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 04:57:23 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159201-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.12-testing test] 159201: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    xen-4.12-testing:build-armhf-libvirt:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-amd64-pvgrub:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-libvirt-pair:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-credit1:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-migrupgrade:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ovmf-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-qemut-rhel6hvm-amd:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-xl:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-i386-pvgrub:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-xl-raw:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-multivcpu:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-shadow:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-pair:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-xl-pvshim:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:<job status>:broken:regression
    xen-4.12-testing:test-xtf-amd64-amd64-3:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-xl-qemut-debianhvm-amd64:<job status>:broken:regression
    xen-4.12-testing:test-xtf-amd64-amd64-2:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-pair:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-libvirt-vhd:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-libvirt-pair:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-livepatch:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-migrupgrade:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-intel:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-freebsd10-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-pvhv2-intel:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-livepatch:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-qemuu-freebsd11-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-rtds:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-qemut-rhel6hvm-intel:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-pvshim:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-xsm:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-libvirt:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-credit2:<job status>:broken:regression
    xen-4.12-testing:test-xtf-amd64-amd64-4:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-xl-shadow:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-qemuu-freebsd12-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-pvhv2-amd:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:<job status>:broken:regression
    xen-4.12-testing:test-xtf-amd64-amd64-1:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ovmf-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-libvirt:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-qemuu-rhel6hvm-intel:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-freebsd10-i386:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-qemuu-rhel6hvm-amd:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-pygrub:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemut-debianhvm-amd64:<job status>:broken:regression
    xen-4.12-testing:build-i386-xsm:<job status>:broken:regression
    xen-4.12-testing:test-amd64-amd64-libvirt-xsm:<job status>:broken:regression
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:<job status>:broken:regression
    xen-4.12-testing:build-i386-xsm:host-install(4):broken:regression
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:guest-start:fail:regression
    xen-4.12-testing:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    xen-4.12-testing:build-armhf-libvirt:host-install(4):running:regression
    xen-4.12-testing:build-armhf-libvirt:syslog-server:running:regression
    xen-4.12-testing:test-amd64-amd64-xl-xsm:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-pvshim:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-amd64-pvgrub:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-qemuu-freebsd11-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-xl-pvshim:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-pygrub:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-libvirt-xsm:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-shadow:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-xtf-amd64-amd64-3:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-pvhv2-amd:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-qemuu-rhel6hvm-amd:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-livepatch:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ovmf-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-libvirt:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-qemut-rhel6hvm-amd:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-libvirt:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-qemut-debianhvm-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-credit1:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-pvhv2-intel:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-multivcpu:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ovmf-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-livepatch:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-xl-raw:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-qemuu-rhel6hvm-intel:host-install(5):broken:heisenbug
    xen-4.12-testing:test-xtf-amd64-amd64-4:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-intel:host-install(5):broken:heisenbug
    xen-4.12-testing:test-xtf-amd64-amd64-1:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-credit2:host-install(5):broken:heisenbug
    xen-4.12-testing:test-xtf-amd64-amd64-2:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-xl-shadow:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-freebsd10-i386:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-qemut-rhel6hvm-intel:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-i386-pvgrub:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-freebsd10-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-libvirt-vhd:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-rtds:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-xl-qemut-debianhvm-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-migrupgrade:host-install/src_host(6):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-migrupgrade:host-install/src_host(6):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-migrupgrade:host-install/dst_host(7):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-pair:host-install/src_host(6):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-pair:host-install/dst_host(7):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-libvirt-pair:host-install/src_host(6):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-libvirt-pair:host-install/dst_host(7):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-libvirt-pair:host-install/src_host(6):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-libvirt-pair:host-install/dst_host(7):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-pair:host-install/src_host(6):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-pair:host-install/dst_host(7):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-migrupgrade:host-install/dst_host(7):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-i386-xl:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-qemuu-freebsd12-amd64:host-install(5):broken:heisenbug
    xen-4.12-testing:test-armhf-armhf-xl-arndale:debian-fixup:fail:heisenbug
    xen-4.12-testing:test-armhf-armhf-xl-vhd:debian-di-install:fail:heisenbug
    xen-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:guest-localmigrate/x10:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8d26cdd3b66ab86d560dacd763d76ff3da95723e
X-Osstest-Versions-That:
    xen=cce7cbd986c122a86582ff3775b6b559d877407c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 11 Feb 2021 04:57:12 +0000

flight 159201 xen-4.12-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159201/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt             <job status>                 broken
 test-amd64-amd64-amd64-pvgrub    <job status>                 broken in 159052
 test-amd64-i386-libvirt-pair    <job status>                 broken  in 159052
 test-amd64-amd64-xl-qcow2       <job status>                 broken  in 159052
 test-amd64-amd64-xl-credit1     <job status>                 broken  in 159052
 test-amd64-i386-xl-qemuu-debianhvm-amd64    <job status>      broken in 159052
 test-amd64-amd64-migrupgrade    <job status>                 broken  in 159052
 test-amd64-amd64-qemuu-nested-amd    <job status>             broken in 159052
 test-amd64-i386-xl-qemuu-ovmf-amd64    <job status>           broken in 159052
 test-amd64-i386-qemut-rhel6hvm-amd    <job status>            broken in 159052
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm <job status> broken in 159052
 test-amd64-i386-xl              <job status>                 broken  in 159052
 test-amd64-amd64-xl             <job status>                 broken  in 159052
 test-amd64-amd64-i386-pvgrub    <job status>                 broken  in 159052
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm <job status> broken in 159052
 test-amd64-i386-xl-raw          <job status>                 broken  in 159052
 test-amd64-amd64-xl-multivcpu    <job status>                 broken in 159052
 test-amd64-amd64-xl-shadow      <job status>                 broken  in 159052
 test-amd64-i386-pair            <job status>                 broken  in 159052
 test-amd64-amd64-xl-qemuu-win7-amd64    <job status>          broken in 159052
 test-amd64-i386-xl-pvshim       <job status>                 broken  in 159052
 test-amd64-i386-xl-qemuu-win7-amd64    <job status>           broken in 159052
 test-xtf-amd64-amd64-3          <job status>                 broken  in 159052
 test-amd64-amd64-xl-qemut-ws16-amd64    <job status>          broken in 159052
 test-amd64-i386-xl-qemut-debianhvm-amd64    <job status>      broken in 159052
 test-xtf-amd64-amd64-2          <job status>                 broken  in 159052
 test-amd64-amd64-pair           <job status>                 broken  in 159052
 test-amd64-amd64-libvirt-vhd    <job status>                 broken  in 159052
 test-amd64-amd64-libvirt-pair    <job status>                 broken in 159052
 test-amd64-amd64-livepatch      <job status>                 broken  in 159052
 test-amd64-i386-migrupgrade     <job status>                 broken  in 159052
 test-amd64-i386-xl-qemuu-ws16-amd64    <job status>           broken in 159052
 test-amd64-amd64-qemuu-nested-intel    <job status>           broken in 159052
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict <job status> broken in 159052
 test-amd64-i386-freebsd10-amd64    <job status>               broken in 159052
 test-amd64-amd64-xl-pvhv2-intel    <job status>               broken in 159052
 test-amd64-i386-livepatch       <job status>                 broken  in 159052
 test-amd64-amd64-qemuu-freebsd11-amd64    <job status>        broken in 159052
 test-amd64-amd64-xl-rtds        <job status>                 broken  in 159052
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow <job status> broken in 159052
 test-amd64-i386-qemut-rhel6hvm-intel    <job status>          broken in 159052
 test-amd64-amd64-xl-qemut-win7-amd64    <job status>          broken in 159052
 test-amd64-i386-xl-qemut-ws16-amd64    <job status>           broken in 159052
 test-amd64-amd64-xl-pvshim      <job status>                 broken  in 159052
 test-amd64-amd64-xl-xsm         <job status>                 broken  in 159052
 test-amd64-amd64-libvirt        <job status>                 broken  in 159052
 test-amd64-amd64-xl-qemuu-debianhvm-amd64    <job status>     broken in 159052
 test-amd64-amd64-xl-credit2     <job status>                 broken  in 159052
 test-xtf-amd64-amd64-4          <job status>                 broken  in 159052
 test-amd64-i386-xl-shadow       <job status>                 broken  in 159052
 test-amd64-amd64-qemuu-freebsd12-amd64    <job status>        broken in 159052
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm    <job status>  broken in 159052
 test-amd64-amd64-xl-pvhv2-amd    <job status>                 broken in 159052
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict <job status> broken in 159052
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm    <job status>  broken in 159052
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  <job status> broken in 159052
 test-xtf-amd64-amd64-1          <job status>                 broken  in 159052
 test-amd64-amd64-xl-qemuu-ovmf-amd64    <job status>          broken in 159052
 test-amd64-amd64-xl-qemuu-ws16-amd64    <job status>          broken in 159052
 test-amd64-i386-libvirt         <job status>                 broken  in 159052
 test-amd64-i386-qemuu-rhel6hvm-intel    <job status>          broken in 159052
 test-amd64-i386-freebsd10-i386    <job status>                broken in 159052
 test-amd64-i386-qemuu-rhel6hvm-amd    <job status>            broken in 159052
 test-amd64-amd64-pygrub         <job status>                 broken  in 159052
 test-amd64-amd64-xl-qemut-debianhvm-amd64    <job status>     broken in 159052
 build-i386-xsm                  <job status>                 broken  in 159052
 test-amd64-amd64-libvirt-xsm    <job status>                 broken  in 159052
 test-amd64-i386-xl-qemut-win7-amd64    <job status>           broken in 159052
 build-i386-xsm             4 host-install(4) broken in 159052 REGR. vs. 158556
 test-armhf-armhf-libvirt-raw 13 guest-start    fail in 159052 REGR. vs. 158556
 test-armhf-armhf-xl-vhd      13 guest-start    fail in 159052 REGR. vs. 158556
 build-armhf-libvirt           4 host-install(4)              running
 build-armhf-libvirt           3 syslog-server                running

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-xsm      5 host-install(5) broken in 159052 pass in 159201
 test-amd64-amd64-xl-pvshim   5 host-install(5) broken in 159052 pass in 159201
 test-amd64-i386-xl-qemut-ws16-amd64 5 host-install(5) broken in 159052 pass in 159201
 test-amd64-amd64-amd64-pvgrub 5 host-install(5) broken in 159052 pass in 159201
 test-amd64-amd64-qemuu-freebsd11-amd64 5 host-install(5) broken in 159052 pass in 159201
 test-amd64-i386-xl-pvshim    5 host-install(5) broken in 159052 pass in 159201
 test-amd64-amd64-pygrub      5 host-install(5) broken in 159052 pass in 159201
 test-amd64-amd64-xl-qemut-ws16-amd64 5 host-install(5) broken in 159052 pass in 159201
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 5 host-install(5) broken in 159052 pass in 159201
 test-amd64-i386-xl-qemuu-debianhvm-amd64 5 host-install(5) broken in 159052 pass in 159201
 test-amd64-amd64-libvirt-xsm 5 host-install(5) broken in 159052 pass in 159201
 test-amd64-amd64-xl-shadow   5 host-install(5) broken in 159052 pass in 159201
 test-amd64-amd64-xl-qemut-win7-amd64 5 host-install(5) broken in 159052 pass in 159201
 test-xtf-amd64-amd64-3       5 host-install(5) broken in 159052 pass in 159201
 test-amd64-amd64-xl-pvhv2-amd 5 host-install(5) broken in 159052 pass in 159201
 test-amd64-i386-qemuu-rhel6hvm-amd 5 host-install(5) broken in 159052 pass in 159201
 test-amd64-i386-xl-qemut-win7-amd64 5 host-install(5) broken in 159052 pass in 159201
 test-amd64-i386-livepatch    5 host-install(5) broken in 159052 pass in 159201
 test-amd64-i386-xl-qemuu-ovmf-amd64 5 host-install(5) broken in 159052 pass in 159201
 test-amd64-i386-libvirt      5 host-install(5) broken in 159052 pass in 159201
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 5 host-install(5) broken in 159052 pass in 159201
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 5 host-install(5) broken in 159052 pass in 159201
 test-amd64-amd64-xl-qcow2    5 host-install(5) broken in 159052 pass in 159201
 test-amd64-i386-qemut-rhel6hvm-amd 5 host-install(5) broken in 159052 pass in 159201
 test-amd64-amd64-libvirt     5 host-install(5) broken in 159052 pass in 159201
 test-amd64-amd64-xl-qemut-debianhvm-amd64 5 host-install(5) broken in 159052 pass in 159201
 test-amd64-amd64-xl-credit1  5 host-install(5) broken in 159052 pass in 159201
 test-amd64-amd64-xl-pvhv2-intel 5 host-install(5) broken in 159052 pass in 159201
 test-amd64-amd64-xl-multivcpu 5 host-install(5) broken in 159052 pass in 159201
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 5 host-install(5) broken in 159052 pass in 159201
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 5 host-install(5) broken in 159052 pass in 159201
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 5 host-install(5) broken in 159052 pass in 159201
 test-amd64-amd64-xl-qemuu-ovmf-amd64 5 host-install(5) broken in 159052 pass in 159201
 test-amd64-amd64-livepatch   5 host-install(5) broken in 159052 pass in 159201
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 5 host-install(5) broken in 159052 pass in 159201
 test-amd64-i386-xl-raw       5 host-install(5) broken in 159052 pass in 159201
 test-amd64-amd64-xl          5 host-install(5) broken in 159052 pass in 159201
 test-amd64-i386-qemuu-rhel6hvm-intel 5 host-install(5) broken in 159052 pass in 159201
 test-xtf-amd64-amd64-4       5 host-install(5) broken in 159052 pass in 159201
 test-amd64-amd64-qemuu-nested-intel 5 host-install(5) broken in 159052 pass in 159201
 test-xtf-amd64-amd64-1       5 host-install(5) broken in 159052 pass in 159201
 test-amd64-amd64-xl-credit2  5 host-install(5) broken in 159052 pass in 159201
 test-xtf-amd64-amd64-2       5 host-install(5) broken in 159052 pass in 159201
 test-amd64-amd64-xl-qemuu-ws16-amd64 5 host-install(5) broken in 159052 pass in 159201
 test-amd64-i386-xl-qemuu-win7-amd64 5 host-install(5) broken in 159052 pass in 159201
 test-amd64-i386-xl-shadow    5 host-install(5) broken in 159052 pass in 159201
 test-amd64-i386-freebsd10-i386 5 host-install(5) broken in 159052 pass in 159201
 test-amd64-i386-xl-qemuu-ws16-amd64 5 host-install(5) broken in 159052 pass in 159201
 test-amd64-i386-qemut-rhel6hvm-intel 5 host-install(5) broken in 159052 pass in 159201
 test-amd64-amd64-i386-pvgrub 5 host-install(5) broken in 159052 pass in 159201
 test-amd64-i386-freebsd10-amd64 5 host-install(5) broken in 159052 pass in 159201
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 5 host-install(5) broken in 159052 pass in 159201
 test-amd64-amd64-libvirt-vhd 5 host-install(5) broken in 159052 pass in 159201
 test-amd64-amd64-xl-rtds     5 host-install(5) broken in 159052 pass in 159201
 test-amd64-i386-xl-qemut-debianhvm-amd64 5 host-install(5) broken in 159052 pass in 159201
 test-amd64-i386-migrupgrade 6 host-install/src_host(6) broken in 159052 pass in 159201
 test-amd64-amd64-migrupgrade 6 host-install/src_host(6) broken in 159052 pass in 159201
 test-amd64-amd64-migrupgrade 7 host-install/dst_host(7) broken in 159052 pass in 159201
 test-amd64-amd64-xl-qemuu-win7-amd64 5 host-install(5) broken in 159052 pass in 159201
 test-amd64-amd64-pair 6 host-install/src_host(6) broken in 159052 pass in 159201
 test-amd64-amd64-pair 7 host-install/dst_host(7) broken in 159052 pass in 159201
 test-amd64-i386-libvirt-pair 6 host-install/src_host(6) broken in 159052 pass in 159201
 test-amd64-i386-libvirt-pair 7 host-install/dst_host(7) broken in 159052 pass in 159201
 test-amd64-amd64-libvirt-pair 6 host-install/src_host(6) broken in 159052 pass in 159201
 test-amd64-amd64-libvirt-pair 7 host-install/dst_host(7) broken in 159052 pass in 159201
 test-amd64-i386-pair 6 host-install/src_host(6) broken in 159052 pass in 159201
 test-amd64-i386-pair 7 host-install/dst_host(7) broken in 159052 pass in 159201
 test-amd64-i386-migrupgrade 7 host-install/dst_host(7) broken in 159052 pass in 159201
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 5 host-install(5) broken in 159052 pass in 159201
 test-amd64-i386-xl           5 host-install(5) broken in 159052 pass in 159201
 test-amd64-amd64-qemuu-nested-amd 5 host-install(5) broken in 159052 pass in 159201
 test-amd64-amd64-qemuu-freebsd12-amd64 5 host-install(5) broken in 159052 pass in 159201
 test-armhf-armhf-xl-arndale  13 debian-fixup     fail in 159052 pass in 159201
 test-armhf-armhf-xl-vhd      12 debian-di-install          fail pass in 159052

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked in 159052 n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)           blocked in 159052 n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 1 build-check(1) blocked in 159052 n/a
 test-amd64-i386-xl-xsm        1 build-check(1)           blocked in 159052 n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked in 159052 n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 1 build-check(1) blocked in 159052 n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt 16 saverestore-support-check fail in 159052 like 158556
 test-armhf-armhf-libvirt    15 migrate-support-check fail in 159052 never pass
 test-amd64-amd64-xl-qcow2    19 guest-localmigrate/x10       fail  like 158556
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 158556
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 158556
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 158556
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 158556
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 158556
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 158556
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 158556
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 158556
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 158556
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8d26cdd3b66ab86d560dacd763d76ff3da95723e
baseline version:
 xen                  cce7cbd986c122a86582ff3775b6b559d877407c

Last test of basis   158556  2021-01-21 15:37:26 Z   20 days
Failing since        159017  2021-02-04 15:06:13 Z    6 days    5 attempts
Testing same since   159052  2021-02-05 18:27:22 Z    5 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          broken  
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf-libvirt broken
broken-job test-amd64-amd64-amd64-pvgrub broken
broken-job test-amd64-i386-libvirt-pair broken
broken-job test-amd64-amd64-xl-qcow2 broken
broken-job test-amd64-amd64-xl-credit1 broken
broken-job test-amd64-i386-xl-qemuu-debianhvm-amd64 broken
broken-job test-amd64-amd64-migrupgrade broken
broken-job test-amd64-amd64-qemuu-nested-amd broken
broken-job test-amd64-i386-xl-qemuu-ovmf-amd64 broken
broken-job test-amd64-i386-qemut-rhel6hvm-amd broken
broken-job test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm broken
broken-job test-amd64-i386-xl broken
broken-job test-amd64-amd64-xl broken
broken-job test-amd64-amd64-i386-pvgrub broken
broken-job test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm broken
broken-job test-amd64-i386-xl-raw broken
broken-job test-amd64-amd64-xl-multivcpu broken
broken-job test-amd64-amd64-xl-shadow broken
broken-job test-amd64-i386-pair broken
broken-job test-amd64-amd64-xl-qemuu-win7-amd64 broken
broken-job test-amd64-i386-xl-pvshim broken
broken-job test-amd64-i386-xl-qemuu-win7-amd64 broken
broken-job test-xtf-amd64-amd64-3 broken
broken-job test-amd64-amd64-xl-qemut-ws16-amd64 broken
broken-job test-amd64-i386-xl-qemut-debianhvm-amd64 broken
broken-job test-xtf-amd64-amd64-2 broken
broken-job test-amd64-amd64-pair broken
broken-job test-amd64-amd64-libvirt-vhd broken
broken-job test-amd64-amd64-libvirt-pair broken
broken-job test-amd64-amd64-livepatch broken
broken-job test-amd64-i386-migrupgrade broken
broken-job test-amd64-i386-xl-qemuu-ws16-amd64 broken
broken-job test-amd64-amd64-qemuu-nested-intel broken
broken-job test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict broken
broken-job test-amd64-i386-freebsd10-amd64 broken
broken-job test-amd64-amd64-xl-pvhv2-intel broken
broken-job test-amd64-i386-livepatch broken
broken-job test-amd64-amd64-qemuu-freebsd11-amd64 broken
broken-job test-amd64-amd64-xl-rtds broken
broken-job test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow broken
broken-job test-amd64-i386-qemut-rhel6hvm-intel broken
broken-job test-amd64-amd64-xl-qemut-win7-amd64 broken
broken-job test-amd64-i386-xl-qemut-ws16-amd64 broken
broken-job test-amd64-amd64-xl-pvshim broken
broken-job test-amd64-amd64-xl-xsm broken
broken-job test-amd64-amd64-libvirt broken
broken-job test-amd64-amd64-xl-qemuu-debianhvm-amd64 broken
broken-job test-amd64-amd64-xl-credit2 broken
broken-job test-xtf-amd64-amd64-4 broken
broken-job test-amd64-i386-xl-shadow broken
broken-job test-amd64-amd64-qemuu-freebsd12-amd64 broken
broken-job test-amd64-amd64-xl-qemut-debianhvm-i386-xsm broken
broken-job test-amd64-amd64-xl-pvhv2-amd broken
broken-job test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict broken
broken-job test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm broken
broken-job test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow broken
broken-job test-xtf-amd64-amd64-1 broken
broken-job test-amd64-amd64-xl-qemuu-ovmf-amd64 broken
broken-job test-amd64-amd64-xl-qemuu-ws16-amd64 broken
broken-job test-amd64-i386-libvirt broken
broken-job test-amd64-i386-qemuu-rhel6hvm-intel broken
broken-job test-amd64-i386-freebsd10-i386 broken
broken-job test-amd64-i386-qemuu-rhel6hvm-amd broken
broken-job test-amd64-amd64-pygrub broken
broken-job test-amd64-amd64-xl-qemut-debianhvm-amd64 broken
broken-job build-i386-xsm broken
broken-job test-amd64-amd64-libvirt-xsm broken
broken-job test-amd64-i386-xl-qemut-win7-amd64 broken

Not pushing.

------------------------------------------------------------
commit 8d26cdd3b66ab86d560dacd763d76ff3da95723e
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Feb 5 08:52:54 2021 +0100

    x86/msr: fix handling of MSR_IA32_PERF_{STATUS/CTL} (again, part 2)
    
    X86_VENDOR_* aren't bit masks in the older trees.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit f1f322610718c40680ac09e66f6c82e69c78ba3a
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Feb 4 15:39:45 2021 +0100

    x86/msr: fix handling of MSR_IA32_PERF_{STATUS/CTL} (again)
    
    X86_VENDOR_* aren't bit masks in the older trees.
    
    Reported-by: James Dingwall <james@dingwall.me.uk>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Thu Feb 11 06:56:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 06:56:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83768.156690 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lA5tD-0007X5-HQ; Thu, 11 Feb 2021 06:55:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83768.156690; Thu, 11 Feb 2021 06:55:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lA5tD-0007Wy-EU; Thu, 11 Feb 2021 06:55:47 +0000
Received: by outflank-mailman (input) for mailman id 83768;
 Thu, 11 Feb 2021 06:55:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/E5q=HN=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lA5tC-0007Wp-EA
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 06:55:46 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8511911c-846d-495b-a1f6-fbdd7680f1f9;
 Thu, 11 Feb 2021 06:55:44 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D3D28AC69;
 Thu, 11 Feb 2021 06:55:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8511911c-846d-495b-a1f6-fbdd7680f1f9
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613026544; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=MG2qQKwGA2uwRNd91SQJ/4zh6icXa16Zvxps844WLyY=;
	b=o6p/9ZxnNkhqoyKI3pkOCczBjjBBKV7qZRZR95iTrfxykBKKhe2Qn4Yy1GgTJcDT3sdOmR
	n/5NkhGfdA23FkvLGB4SNaPYN6KIZS0soJtJdfrzjkEJ9DLeZtXAIO6aYMBmuu5fQADFfg
	T0r7TyvFXHK083w4dxDUU9f5ItH/6oI=
Subject: Re: [PATCH] arm/xen: Don't probe xenbus as part of an early initcall
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, boris.ostrovsky@oracle.com,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 Julien Grall <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>,
 dwmw@amazon.co.uk
References: <20210210170654.5377-1-julien@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <06a4feb3-a29e-221d-cdb6-68b9c453b2a4@suse.com>
Date: Thu, 11 Feb 2021 07:55:42 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <20210210170654.5377-1-julien@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="nnFzFztgxzAPbRsxw8PBIWzXP1DcgeEzh"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--nnFzFztgxzAPbRsxw8PBIWzXP1DcgeEzh
Content-Type: multipart/mixed; boundary="A8lgVItKpCdvz0ARwxIlEoRtrBIMamMIS";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, boris.ostrovsky@oracle.com,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 Julien Grall <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>,
 dwmw@amazon.co.uk
Message-ID: <06a4feb3-a29e-221d-cdb6-68b9c453b2a4@suse.com>
Subject: Re: [PATCH] arm/xen: Don't probe xenbus as part of an early initcall
References: <20210210170654.5377-1-julien@xen.org>
In-Reply-To: <20210210170654.5377-1-julien@xen.org>

--A8lgVItKpCdvz0ARwxIlEoRtrBIMamMIS
Content-Type: multipart/mixed;
 boundary="------------95F5BB8C01B298769DA42A6A"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------95F5BB8C01B298769DA42A6A
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 10.02.21 18:06, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
>
> After Commit 3499ba8198cad ("xen: Fix event channel callback via
> INTX/GSI"), xenbus_probe() will be called too early on Arm. This will
> result in a guest hang during boot.
>
> If the hang wasn't there, we would have ended up calling
> xenbus_probe() twice (the second time is in xenbus_probe_initcall()).
>
> We don't need to initialize xenbus_probe() early for Arm guests.
> Therefore, the call in xen_guest_init() is now removed.
>
> After this change, there is no more external caller for xenbus_probe().
> So the function is turned into a static one. Interestingly there were two
> prototypes for it.
>
> Fixes: 3499ba8198cad ("xen: Fix event channel callback via INTX/GSI")
> Reported-by: Ian Jackson <iwj@xenproject.org>
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Pushed to xen/tip.git for-linus-5.11


Juergen

--------------95F5BB8C01B298769DA42A6A
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------95F5BB8C01B298769DA42A6A--

--A8lgVItKpCdvz0ARwxIlEoRtrBIMamMIS--

--nnFzFztgxzAPbRsxw8PBIWzXP1DcgeEzh
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmAk1O4FAwAAAAAACgkQsN6d1ii/Ey8J
IQf+PSQEDt8aCr21sa7tvlRw1eioS5L/bTXlTHmERWCqh1+F3zTIc11A3mGG+TWLR2qjBzdH/qhw
z1uATDVZc3onJB8JiYfY1PWZXKyc/aUdWir687Aoxf9m2fyBM6+vm1s5+h6IhPwS8lSfYqpVzjYB
utWheEjj7FJdfqrHlnKsKfiAYpVuWZwwZZOtQ/3rRISXXQFtmoPr/rkPWFsc8BwraBG1kkrPcbOH
xKOIRkpPxG3f9GnYjuYDRU/n4sVraUJ8AznOT9rPwBWVF0lLxUrUIQOoKO1MN8Wh/KXSiU18Osk4
4xoX7BHd8KR093breswelP6lE30hWD1b1Kd2Dc1Wqg==
=7ZRQ
-----END PGP SIGNATURE-----

--nnFzFztgxzAPbRsxw8PBIWzXP1DcgeEzh--


From xen-devel-bounces@lists.xenproject.org Thu Feb 11 07:10:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 07:10:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83770.156703 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lA66y-0000Jj-Pm; Thu, 11 Feb 2021 07:10:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83770.156703; Thu, 11 Feb 2021 07:10:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lA66y-0000Jc-Mf; Thu, 11 Feb 2021 07:10:00 +0000
Received: by outflank-mailman (input) for mailman id 83770;
 Thu, 11 Feb 2021 07:09:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/E5q=HN=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lA66x-0000JX-IH
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 07:09:59 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4cddac22-2547-4427-b772-2ec0dd7f2112;
 Thu, 11 Feb 2021 07:09:58 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 57B08B113;
 Thu, 11 Feb 2021 07:09:57 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4cddac22-2547-4427-b772-2ec0dd7f2112
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613027397; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=C91ebXZNzoKbv/EE2K4v30/3rR8vNp/LicttouaukRw=;
	b=qNuAiZPpHtlua9QmiOIWYUpNAOS+9eEax533G/t7wi2RZdO+bmAHddRioIDtTF74jYoWxz
	eAeo/pTY9sH9o+4cM8Y8nkQbem/rEMC3ilUp1chL2lcl2e0EH6WFbPXg2OGCPs6a85woot
	WIlz07/w3uYyxb17mC2QN149iItiMC4=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: [PATCH] irq: simplify condition in irq_matrix_reserve()
Date: Thu, 11 Feb 2021 08:09:53 +0100
Message-Id: <20210211070953.5914-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The if condition in irq_matrix_reserve() can be much simpler.

While at it, fix a typo in the comment.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 kernel/irq/matrix.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/kernel/irq/matrix.c b/kernel/irq/matrix.c
index 651a4ad6d711..1f02a5c801a3 100644
--- a/kernel/irq/matrix.c
+++ b/kernel/irq/matrix.c
@@ -337,15 +337,14 @@ void irq_matrix_assign(struct irq_matrix *m, unsigned int bit)
  * irq_matrix_reserve - Reserve interrupts
  * @m:		Matrix pointer
  *
- * This is merily a book keeping call. It increments the number of globally
+ * This is merely a book keeping call. It increments the number of globally
  * reserved interrupt bits w/o actually allocating them. This allows to
  * setup interrupt descriptors w/o assigning low level resources to it.
  * The actual allocation happens when the interrupt gets activated.
  */
 void irq_matrix_reserve(struct irq_matrix *m)
 {
-	if (m->global_reserved <= m->global_available &&
-	    m->global_reserved + 1 > m->global_available)
+	if (m->global_reserved == m->global_available)
 		pr_warn("Interrupt reservation exceeds available resources\n");
 
 	m->global_reserved++;
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu Feb 11 08:12:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 08:12:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83778.156726 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lA75K-00072I-KU; Thu, 11 Feb 2021 08:12:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83778.156726; Thu, 11 Feb 2021 08:12:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lA75K-00072B-HH; Thu, 11 Feb 2021 08:12:22 +0000
Received: by outflank-mailman (input) for mailman id 83778;
 Thu, 11 Feb 2021 08:12:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=szQa=HN=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lA75J-000726-Lm
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 08:12:21 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 18c7dd5b-3b01-4acf-893a-4a84d7b85138;
 Thu, 11 Feb 2021 08:12:18 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 18c7dd5b-3b01-4acf-893a-4a84d7b85138
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
Date: Thu, 11 Feb 2021 09:11:57 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Tim Deegan
	<tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
Subject: Re: [PATCH 04/17] x86/PV: harden guest memory accesses against
 speculative abuse
Message-ID: <YCTmzRcZw9JUJkxw@Air-de-Roger>
References: <4f1975a9-bdd9-f556-9db5-eb6c428f258f@suse.com>
 <5da0c123-3b90-97e8-e1e5-10286be38ce7@suse.com>
 <YCK3sH/4EVLzRfZ3@Air-de-Roger>
 <d3b62090-fdb5-068b-93ab-63f8bebc9d2e@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <d3b62090-fdb5-068b-93ab-63f8bebc9d2e@suse.com>
MIME-Version: 1.0

On Wed, Feb 10, 2021 at 05:55:52PM +0100, Jan Beulich wrote:
> On 09.02.2021 17:26, Roger Pau Monné wrote:
> > On Thu, Jan 14, 2021 at 04:04:57PM +0100, Jan Beulich wrote:
> >> --- a/xen/arch/x86/usercopy.c
> >> +++ b/xen/arch/x86/usercopy.c
> >> @@ -10,12 +10,19 @@
> >>  #include <xen/sched.h>
> >>  #include <asm/uaccess.h>
> >>  
> >> -unsigned __copy_to_user_ll(void __user *to, const void *from, unsigned n)
> >> +#ifndef GUARD
> >> +# define GUARD UA_KEEP
> >> +#endif
> >> +
> >> +unsigned int copy_to_guest_ll(void __user *to, const void *from, unsigned int n)
> >>  {
> >>      unsigned dummy;
> >>  
> >>      stac();
> >>      asm volatile (
> >> +        GUARD(
> >> +        "    guest_access_mask_ptr %[to], %q[scratch1], %q[scratch2]\n"
> > 
> > Don't you need to also take 'n' into account here to assert that the
> > address doesn't end in hypervisor address space? Or that's fine as
> > speculation wouldn't go that far?
> 
> Like elsewhere this leverages that the hypervisor VA range starts
> immediately after the non-canonical hole. I'm unaware of
> speculation being able to cross over that hole.
> 
> > I also wonder why this needs to be done in assembly, could you check
> > the address(es) using C?
> 
> For this to be efficient (in avoiding speculation) the insn
> sequence would better not have any conditional jumps. I don't
> think the compiler can be told so.

Why not use evaluate_nospec to check the address like we do in other
places?
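[Archive editor's note: for illustration only, the branch-free masking being discussed can be sketched in plain C. This is a stand-in, not Xen's actual guest_access_mask_ptr macro, and HYPERVISOR_VIRT_START here is a placeholder constant for the start of the hypervisor VA range.]

```c
#include <stdint.h>

/* Placeholder boundary: in this sketch, everything below it is treated
 * as a guest address, everything at or above it as hypervisor space. */
#define HYPERVISOR_VIRT_START 0xffff800000000000ULL

/* Collapse any pointer reaching into the hypervisor range to zero using
 * pure arithmetic: the comparison result (0 or 1) is negated into an
 * all-zeroes or all-ones mask, so no conditional jump is required. */
static inline uint64_t mask_guest_ptr(uint64_t ptr)
{
    uint64_t mask = (uint64_t)0 - (uint64_t)(ptr < HYPERVISOR_VIRT_START);
    return ptr & mask;
}
```

With optimization, compilers typically lower this to cmp/sbb-style code, but unlike the hand-written assembly there is no guarantee the compiler keeps it free of conditional branches, which is the point Jan makes above.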

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Feb 11 08:16:46 2021
From: Thomas Zimmermann <tzimmermann@suse.de>
To: airlied@linux.ie,
	daniel@ffwll.ch,
	maarten.lankhorst@linux.intel.com,
	mripard@kernel.org
Cc: dri-devel@lists.freedesktop.org,
	linux-aspeed@lists.ozlabs.org,
	linux-arm-kernel@lists.infradead.org,
	linux-mips@vger.kernel.org,
	linux-mediatek@lists.infradead.org,
	linux-amlogic@lists.infradead.org,
	linux-arm-msm@vger.kernel.org,
	freedreno@lists.freedesktop.org,
	linux-renesas-soc@vger.kernel.org,
	linux-rockchip@lists.infradead.org,
	linux-stm32@st-md-mailman.stormreply.com,
	linux-tegra@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>
Subject: [PATCH v2] drm/gem: Move drm_gem_fb_prepare_fb() to GEM atomic helpers
Date: Thu, 11 Feb 2021 09:16:36 +0100
Message-Id: <20210211081636.28311-1-tzimmermann@suse.de>
X-Mailer: git-send-email 2.30.0
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The function drm_gem_fb_prepare_fb() is a helper for atomic modesetting,
but it is currently located next to the framebuffer helpers. Move it to
the GEM atomic helpers, rename it slightly and adapt the drivers. Do the
same for the respective simple-pipe helper.

Compile-tested with x86-64, aarch64 and arm. The patch is fairly large,
but there are no functional changes.

v2:
	* rename to drm_gem_plane_helper_prepare_fb() (Daniel)
	* add tutorial-style documentation

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/aspeed/aspeed_gfx_crtc.c     |  4 +-
 drivers/gpu/drm/drm_gem_atomic_helper.c      | 96 +++++++++++++++++++-
 drivers/gpu/drm/drm_gem_framebuffer_helper.c | 63 -------------
 drivers/gpu/drm/drm_gem_vram_helper.c        |  4 +-
 drivers/gpu/drm/imx/dcss/dcss-plane.c        |  4 +-
 drivers/gpu/drm/imx/ipuv3-plane.c            |  4 +-
 drivers/gpu/drm/ingenic/ingenic-drm-drv.c    |  3 +-
 drivers/gpu/drm/ingenic/ingenic-ipu.c        |  4 +-
 drivers/gpu/drm/mcde/mcde_display.c          |  4 +-
 drivers/gpu/drm/mediatek/mtk_drm_plane.c     |  6 +-
 drivers/gpu/drm/meson/meson_overlay.c        |  8 +-
 drivers/gpu/drm/meson/meson_plane.c          |  4 +-
 drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c    |  4 +-
 drivers/gpu/drm/msm/msm_atomic.c             |  4 +-
 drivers/gpu/drm/mxsfb/mxsfb_kms.c            |  6 +-
 drivers/gpu/drm/pl111/pl111_display.c        |  4 +-
 drivers/gpu/drm/rcar-du/rcar_du_vsp.c        |  4 +-
 drivers/gpu/drm/rockchip/rockchip_drm_vop.c  |  3 +-
 drivers/gpu/drm/stm/ltdc.c                   |  4 +-
 drivers/gpu/drm/sun4i/sun4i_layer.c          |  4 +-
 drivers/gpu/drm/sun4i/sun8i_ui_layer.c       |  4 +-
 drivers/gpu/drm/sun4i/sun8i_vi_layer.c       |  4 +-
 drivers/gpu/drm/tegra/plane.c                |  4 +-
 drivers/gpu/drm/tidss/tidss_plane.c          |  4 +-
 drivers/gpu/drm/tiny/hx8357d.c               |  4 +-
 drivers/gpu/drm/tiny/ili9225.c               |  4 +-
 drivers/gpu/drm/tiny/ili9341.c               |  4 +-
 drivers/gpu/drm/tiny/ili9486.c               |  4 +-
 drivers/gpu/drm/tiny/mi0283qt.c              |  4 +-
 drivers/gpu/drm/tiny/repaper.c               |  3 +-
 drivers/gpu/drm/tiny/st7586.c                |  4 +-
 drivers/gpu/drm/tiny/st7735r.c               |  4 +-
 drivers/gpu/drm/tve200/tve200_display.c      |  4 +-
 drivers/gpu/drm/vc4/vc4_plane.c              |  4 +-
 drivers/gpu/drm/vkms/vkms_plane.c            |  3 +-
 drivers/gpu/drm/xen/xen_drm_front_kms.c      |  3 +-
 include/drm/drm_gem_atomic_helper.h          |  8 ++
 include/drm/drm_gem_framebuffer_helper.h     |  6 +-
 include/drm/drm_modeset_helper_vtables.h     |  2 +-
 include/drm/drm_plane.h                      |  4 +-
 include/drm/drm_simple_kms_helper.h          |  2 +-
 41 files changed, 178 insertions(+), 142 deletions(-)
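
[Archive editor's note: the driver-side conversion pattern can be modeled with a self-contained sketch. The types below are simplified stand-ins, not the real DRM definitions — see the actual declarations in the diff. It shows how a converted driver wires .prepare_fb to the GEM atomic helper, whose only job is to skip planes without a framebuffer and otherwise record the framebuffer's exclusive fence (here modeled by a flag; the real helper calls drm_atomic_set_fence_for_plane()).]

```c
#include <stddef.h>

/* Stand-in types -- purely illustrative, NOT the real DRM structs. */
struct drm_plane;
struct drm_plane_state {
	const char *fb;	/* stand-in for the framebuffer pointer */
};

struct drm_plane_helper_funcs {
	int (*prepare_fb)(struct drm_plane *plane,
			  struct drm_plane_state *state);
};

static int fence_attached;	/* records that a fence would be set */

/* Stub mirroring the renamed helper's control flow: no framebuffer
 * means nothing to synchronize against; otherwise "attach" the fence. */
static int drm_gem_plane_helper_prepare_fb(struct drm_plane *plane,
					   struct drm_plane_state *state)
{
	(void)plane;
	if (!state->fb)
		return 0;
	fence_attached = 1;
	return 0;
}

/* After this patch, a converted driver's helper table looks like: */
static const struct drm_plane_helper_funcs driver_plane_helper_funcs = {
	.prepare_fb = drm_gem_plane_helper_prepare_fb,
};
```

Because the helper only records the fence, drivers whose buffers are always pinned need no matching cleanup_fb hook, as the kernel-doc in the patch notes.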

diff --git a/drivers/gpu/drm/aspeed/aspeed_gfx_crtc.c b/drivers/gpu/drm/aspeed/aspeed_gfx_crtc.c
index e54686c31a90..d8f214e0be82 100644
--- a/drivers/gpu/drm/aspeed/aspeed_gfx_crtc.c
+++ b/drivers/gpu/drm/aspeed/aspeed_gfx_crtc.c
@@ -9,8 +9,8 @@
 #include <drm/drm_device.h>
 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fourcc.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_panel.h>
 #include <drm/drm_simple_kms_helper.h>
 #include <drm/drm_vblank.h>
@@ -219,7 +219,7 @@ static const struct drm_simple_display_pipe_funcs aspeed_gfx_funcs = {
 	.enable		= aspeed_gfx_pipe_enable,
 	.disable	= aspeed_gfx_pipe_disable,
 	.update		= aspeed_gfx_pipe_update,
-	.prepare_fb	= drm_gem_fb_simple_display_pipe_prepare_fb,
+	.prepare_fb	= drm_gem_simple_display_pipe_prepare_fb,
 	.enable_vblank	= aspeed_gfx_enable_vblank,
 	.disable_vblank	= aspeed_gfx_disable_vblank,
 };
diff --git a/drivers/gpu/drm/drm_gem_atomic_helper.c b/drivers/gpu/drm/drm_gem_atomic_helper.c
index fa4eae492b81..a005c5a0ba46 100644
--- a/drivers/gpu/drm/drm_gem_atomic_helper.c
+++ b/drivers/gpu/drm/drm_gem_atomic_helper.c
@@ -1,6 +1,10 @@
 // SPDX-License-Identifier: GPL-2.0-or-later

+#include <linux/dma-resv.h>
+
 #include <drm/drm_atomic_state_helper.h>
+#include <drm/drm_atomic_uapi.h>
+#include <drm/drm_gem.h>
 #include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_simple_kms_helper.h>
@@ -12,8 +16,33 @@
  *
  * The GEM atomic helpers library implements generic atomic-commit
  * functions for drivers that use GEM objects. Currently, it provides
- * plane state and framebuffer BO mappings for planes with shadow
- * buffers.
+ * synchronization helpers, and plane state and framebuffer BO mappings
+ * for planes with shadow buffers.
+ *
+ * Before scanout, a plane's framebuffer needs to be synchronized with
+ * possible writers that draw into the framebuffer. All drivers should
+ * call drm_gem_plane_helper_prepare_fb() from their implementation of
+ * struct &drm_plane_helper_funcs.prepare_fb. It sets the plane's fence from
+ * the framebuffer so that the DRM core can synchronize access automatically.
+ *
+ * drm_gem_plane_helper_prepare_fb() can also be used directly as the
+ * implementation of prepare_fb. For drivers based on
+ * struct drm_simple_display_pipe, drm_gem_simple_display_pipe_prepare_fb()
+ * provides equivalent functionality.
+ *
+ * .. code-block:: c
+ *
+ *	#include <drm/drm_gem_atomic_helper.h>
+ *
+ *	struct drm_plane_helper_funcs driver_plane_helper_funcs = {
+ *		...,
+ *		.prepare_fb = drm_gem_plane_helper_prepare_fb,
+ *	};
+ *
+ *	struct drm_simple_display_pipe_funcs driver_pipe_funcs = {
+ *		...,
+ *		.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
+ *	};
  *
  * A driver using a shadow buffer copies the content of the shadow buffers
  * into the HW's framebuffer memory during an atomic update. This requires
@@ -32,7 +61,7 @@
  *
  * .. code-block:: c
  *
- *	#include <drm/drm/gem_atomic_helper.h>
+ *	#include <drm/drm_gem_atomic_helper.h>
  *
  *	struct drm_plane_funcs driver_plane_funcs = {
  *		...,
@@ -87,6 +116,65 @@
  *	}
  */

+/*
+ * Plane Helpers
+ */
+
+/**
+ * drm_gem_plane_helper_prepare_fb() - Prepare a GEM backed framebuffer
+ * @plane: Plane
+ * @state: Plane state the fence will be attached to
+ *
+ * This function extracts the exclusive fence from &drm_gem_object.resv and
+ * attaches it to plane state for the atomic helper to wait on. This is
+ * necessary to correctly implement implicit synchronization for any buffers
+ * shared as a struct &dma_buf. This function can be used as the
+ * &drm_plane_helper_funcs.prepare_fb callback.
+ *
+ * There is no need for &drm_plane_helper_funcs.cleanup_fb hook for simple
+ * GEM based framebuffer drivers which have their buffers always pinned in
+ * memory.
+ *
+ * See drm_atomic_set_fence_for_plane() for a discussion of implicit and
+ * explicit fencing in atomic modeset updates.
+ */
+int drm_gem_plane_helper_prepare_fb(struct drm_plane *plane, struct drm_plane_state *state)
+{
+	struct drm_gem_object *obj;
+	struct dma_fence *fence;
+
+	if (!state->fb)
+		return 0;
+
+	obj = drm_gem_fb_get_obj(state->fb, 0);
+	fence = dma_resv_get_excl_rcu(obj->resv);
+	drm_atomic_set_fence_for_plane(state, fence);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(drm_gem_plane_helper_prepare_fb);
+
+/**
+ * drm_gem_simple_display_pipe_prepare_fb - prepare_fb helper for &drm_simple_display_pipe
+ * @pipe: Simple display pipe
+ * @plane_state: Plane state
+ *
+ * This function uses drm_gem_plane_helper_prepare_fb() to extract the exclusive fence
+ * from &drm_gem_object.resv and attaches it to plane state for the atomic
+ * helper to wait on. This is necessary to correctly implement implicit
+ * synchronization for any buffers shared as a struct &dma_buf. Drivers can use
+ * this as their &drm_simple_display_pipe_funcs.prepare_fb callback.
+ *
+ * See drm_atomic_set_fence_for_plane() for a discussion of implicit and
+ * explicit fencing in atomic modeset updates.
+ */
+int drm_gem_simple_display_pipe_prepare_fb(struct drm_simple_display_pipe *pipe,
+					   struct drm_plane_state *plane_state)
+{
+	return drm_gem_plane_helper_prepare_fb(&pipe->plane, plane_state);
+}
+EXPORT_SYMBOL(drm_gem_simple_display_pipe_prepare_fb);
+
 /*
  * Shadow-buffered Planes
  */
@@ -198,7 +286,7 @@ int drm_gem_prepare_shadow_fb(struct drm_plane *plane, struct drm_plane_state *p
 	if (!fb)
 		return 0;

-	ret = drm_gem_fb_prepare_fb(plane, plane_state);
+	ret = drm_gem_plane_helper_prepare_fb(plane, plane_state);
 	if (ret)
 		return ret;

diff --git a/drivers/gpu/drm/drm_gem_framebuffer_helper.c b/drivers/gpu/drm/drm_gem_framebuffer_helper.c
index 109d11fb4cd4..5ed2067cebb6 100644
--- a/drivers/gpu/drm/drm_gem_framebuffer_helper.c
+++ b/drivers/gpu/drm/drm_gem_framebuffer_helper.c
@@ -5,13 +5,8 @@
  * Copyright (C) 2017 Noralf Trønnes
  */

-#include <linux/dma-buf.h>
-#include <linux/dma-fence.h>
-#include <linux/dma-resv.h>
 #include <linux/slab.h>

-#include <drm/drm_atomic.h>
-#include <drm/drm_atomic_uapi.h>
 #include <drm/drm_damage_helper.h>
 #include <drm/drm_fb_helper.h>
 #include <drm/drm_fourcc.h>
@@ -19,7 +14,6 @@
 #include <drm/drm_gem.h>
 #include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_modeset_helper.h>
-#include <drm/drm_simple_kms_helper.h>

 #define AFBC_HEADER_SIZE		16
 #define AFBC_TH_LAYOUT_ALIGNMENT	8
@@ -432,60 +426,3 @@ int drm_gem_fb_afbc_init(struct drm_device *dev,
 	return 0;
 }
 EXPORT_SYMBOL_GPL(drm_gem_fb_afbc_init);
-
-/**
- * drm_gem_fb_prepare_fb() - Prepare a GEM backed framebuffer
- * @plane: Plane
- * @state: Plane state the fence will be attached to
- *
- * This function extracts the exclusive fence from &drm_gem_object.resv and
- * attaches it to plane state for the atomic helper to wait on. This is
- * necessary to correctly implement implicit synchronization for any buffers
- * shared as a struct &dma_buf. This function can be used as the
- * &drm_plane_helper_funcs.prepare_fb callback.
- *
- * There is no need for &drm_plane_helper_funcs.cleanup_fb hook for simple
- * gem based framebuffer drivers which have their buffers always pinned in
- * memory.
- *
- * See drm_atomic_set_fence_for_plane() for a discussion of implicit and
- * explicit fencing in atomic modeset updates.
- */
-int drm_gem_fb_prepare_fb(struct drm_plane *plane,
-			  struct drm_plane_state *state)
-{
-	struct drm_gem_object *obj;
-	struct dma_fence *fence;
-
-	if (!state->fb)
-		return 0;
-
-	obj = drm_gem_fb_get_obj(state->fb, 0);
-	fence = dma_resv_get_excl_rcu(obj->resv);
-	drm_atomic_set_fence_for_plane(state, fence);
-
-	return 0;
-}
-EXPORT_SYMBOL_GPL(drm_gem_fb_prepare_fb);
-
-/**
- * drm_gem_fb_simple_display_pipe_prepare_fb - prepare_fb helper for
- *     &drm_simple_display_pipe
- * @pipe: Simple display pipe
- * @plane_state: Plane state
- *
- * This function uses drm_gem_fb_prepare_fb() to extract the exclusive fence
- * from &drm_gem_object.resv and attaches it to plane state for the atomic
- * helper to wait on. This is necessary to correctly implement implicit
- * synchronization for any buffers shared as a struct &dma_buf. Drivers can use
- * this as their &drm_simple_display_pipe_funcs.prepare_fb callback.
- *
- * See drm_atomic_set_fence_for_plane() for a discussion of implicit and
- * explicit fencing in atomic modeset updates.
- */
-int drm_gem_fb_simple_display_pipe_prepare_fb(struct drm_simple_display_pipe *pipe,
-					      struct drm_plane_state *plane_state)
-{
-	return drm_gem_fb_prepare_fb(&pipe->plane, plane_state);
-}
-EXPORT_SYMBOL(drm_gem_fb_simple_display_pipe_prepare_fb);
diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
index 48d4b59d3145..b4d2412d9493 100644
--- a/drivers/gpu/drm/drm_gem_vram_helper.c
+++ b/drivers/gpu/drm/drm_gem_vram_helper.c
@@ -8,7 +8,7 @@
 #include <drm/drm_drv.h>
 #include <drm/drm_file.h>
 #include <drm/drm_framebuffer.h>
-#include <drm/drm_gem_framebuffer_helper.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_ttm_helper.h>
 #include <drm/drm_gem_vram_helper.h>
 #include <drm/drm_managed.h>
@@ -720,7 +720,7 @@ drm_gem_vram_plane_helper_prepare_fb(struct drm_plane *plane,
 			goto err_drm_gem_vram_unpin;
 	}

-	ret = drm_gem_fb_prepare_fb(plane, new_state);
+	ret = drm_gem_plane_helper_prepare_fb(plane, new_state);
 	if (ret)
 		goto err_drm_gem_vram_unpin;

diff --git a/drivers/gpu/drm/imx/dcss/dcss-plane.c b/drivers/gpu/drm/imx/dcss/dcss-plane.c
index 03ba88f7f995..4723da457bad 100644
--- a/drivers/gpu/drm/imx/dcss/dcss-plane.c
+++ b/drivers/gpu/drm/imx/dcss/dcss-plane.c
@@ -6,7 +6,7 @@
 #include <drm/drm_atomic.h>
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_fb_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>

 #include "dcss-dev.h"
@@ -355,7 +355,7 @@ static void dcss_plane_atomic_disable(struct drm_plane *plane,
 }

 static const struct drm_plane_helper_funcs dcss_plane_helper_funcs = {
-	.prepare_fb = drm_gem_fb_prepare_fb,
+	.prepare_fb = drm_gem_plane_helper_prepare_fb,
 	.atomic_check = dcss_plane_atomic_check,
 	.atomic_update = dcss_plane_atomic_update,
 	.atomic_disable = dcss_plane_atomic_disable,
diff --git a/drivers/gpu/drm/imx/ipuv3-plane.c b/drivers/gpu/drm/imx/ipuv3-plane.c
index 075508051b5f..cff783a37162 100644
--- a/drivers/gpu/drm/imx/ipuv3-plane.c
+++ b/drivers/gpu/drm/imx/ipuv3-plane.c
@@ -9,8 +9,8 @@
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fourcc.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_managed.h>
 #include <drm/drm_plane_helper.h>

@@ -704,7 +704,7 @@ static void ipu_plane_atomic_update(struct drm_plane *plane,
 }

 static const struct drm_plane_helper_funcs ipu_plane_helper_funcs = {
-	.prepare_fb = drm_gem_fb_prepare_fb,
+	.prepare_fb = drm_gem_plane_helper_prepare_fb,
 	.atomic_check = ipu_plane_atomic_check,
 	.atomic_disable = ipu_plane_atomic_disable,
 	.atomic_update = ipu_plane_atomic_update,
diff --git a/drivers/gpu/drm/ingenic/ingenic-drm-drv.c b/drivers/gpu/drm/ingenic/ingenic-drm-drv.c
index 7bb31fbee29d..c00961907b10 100644
--- a/drivers/gpu/drm/ingenic/ingenic-drm-drv.c
+++ b/drivers/gpu/drm/ingenic/ingenic-drm-drv.c
@@ -28,6 +28,7 @@
 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fb_helper.h>
 #include <drm/drm_fourcc.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_irq.h>
 #include <drm/drm_managed.h>
@@ -780,7 +781,7 @@ static const struct drm_plane_helper_funcs ingenic_drm_plane_helper_funcs = {
 	.atomic_update		= ingenic_drm_plane_atomic_update,
 	.atomic_check		= ingenic_drm_plane_atomic_check,
 	.atomic_disable		= ingenic_drm_plane_atomic_disable,
-	.prepare_fb		= drm_gem_fb_prepare_fb,
+	.prepare_fb		= drm_gem_plane_helper_prepare_fb,
 };

 static const struct drm_crtc_helper_funcs ingenic_drm_crtc_helper_funcs = {
diff --git a/drivers/gpu/drm/ingenic/ingenic-ipu.c b/drivers/gpu/drm/ingenic/ingenic-ipu.c
index e52777ef85fd..91457263a3ce 100644
--- a/drivers/gpu/drm/ingenic/ingenic-ipu.c
+++ b/drivers/gpu/drm/ingenic/ingenic-ipu.c
@@ -23,7 +23,7 @@
 #include <drm/drm_drv.h>
 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fourcc.h>
-#include <drm/drm_gem_framebuffer_helper.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_plane.h>
 #include <drm/drm_plane_helper.h>
 #include <drm/drm_property.h>
@@ -608,7 +608,7 @@ static const struct drm_plane_helper_funcs ingenic_ipu_plane_helper_funcs = {
 	.atomic_update		= ingenic_ipu_plane_atomic_update,
 	.atomic_check		= ingenic_ipu_plane_atomic_check,
 	.atomic_disable		= ingenic_ipu_plane_atomic_disable,
-	.prepare_fb		= drm_gem_fb_prepare_fb,
+	.prepare_fb		= drm_gem_plane_helper_prepare_fb,
 };

 static int
diff --git a/drivers/gpu/drm/mcde/mcde_display.c b/drivers/gpu/drm/mcde/mcde_display.c
index 7c2e0b865441..dde16ef9650a 100644
--- a/drivers/gpu/drm/mcde/mcde_display.c
+++ b/drivers/gpu/drm/mcde/mcde_display.c
@@ -13,8 +13,8 @@
 #include <drm/drm_device.h>
 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fourcc.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_mipi_dsi.h>
 #include <drm/drm_simple_kms_helper.h>
 #include <drm/drm_bridge.h>
@@ -1481,7 +1481,7 @@ static struct drm_simple_display_pipe_funcs mcde_display_funcs = {
 	.update = mcde_display_update,
 	.enable_vblank = mcde_display_enable_vblank,
 	.disable_vblank = mcde_display_disable_vblank,
-	.prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb,
+	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
 };

 int mcde_display_init(struct drm_device *drm)
diff --git a/drivers/gpu/drm/mediatek/mtk_drm_plane.c b/drivers/gpu/drm/mediatek/mtk_drm_plane.c
index 92141a19681b..c95ceb400b07 100644
--- a/drivers/gpu/drm/mediatek/mtk_drm_plane.c
+++ b/drivers/gpu/drm/mediatek/mtk_drm_plane.c
@@ -6,10 +6,10 @@

 #include <drm/drm_atomic.h>
 #include <drm/drm_atomic_helper.h>
-#include <drm/drm_fourcc.h>
 #include <drm/drm_atomic_uapi.h>
+#include <drm/drm_fourcc.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_plane_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>

 #include "mtk_drm_crtc.h"
 #include "mtk_drm_ddp_comp.h"
@@ -216,7 +216,7 @@ static void mtk_plane_atomic_update(struct drm_plane *plane,
 }

 static const struct drm_plane_helper_funcs mtk_plane_helper_funcs = {
-	.prepare_fb = drm_gem_fb_prepare_fb,
+	.prepare_fb = drm_gem_plane_helper_prepare_fb,
 	.atomic_check = mtk_plane_atomic_check,
 	.atomic_update = mtk_plane_atomic_update,
 	.atomic_disable = mtk_plane_atomic_disable,
diff --git a/drivers/gpu/drm/meson/meson_overlay.c b/drivers/gpu/drm/meson/meson_overlay.c
index 1ffbbecafa22..be6ca49e20b0 100644
--- a/drivers/gpu/drm/meson/meson_overlay.c
+++ b/drivers/gpu/drm/meson/meson_overlay.c
@@ -10,11 +10,11 @@
 #include <drm/drm_atomic.h>
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_device.h>
+#include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fourcc.h>
-#include <drm/drm_plane_helper.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_fb_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
+#include <drm/drm_plane_helper.h>

 #include "meson_overlay.h"
 #include "meson_registers.h"
@@ -742,7 +742,7 @@ static const struct drm_plane_helper_funcs meson_overlay_helper_funcs = {
 	.atomic_check	= meson_overlay_atomic_check,
 	.atomic_disable	= meson_overlay_atomic_disable,
 	.atomic_update	= meson_overlay_atomic_update,
-	.prepare_fb	= drm_gem_fb_prepare_fb,
+	.prepare_fb	= drm_gem_plane_helper_prepare_fb,
 };

 static bool meson_overlay_format_mod_supported(struct drm_plane *plane,
diff --git a/drivers/gpu/drm/meson/meson_plane.c b/drivers/gpu/drm/meson/meson_plane.c
index 35338ed18209..b8309d8fc277 100644
--- a/drivers/gpu/drm/meson/meson_plane.c
+++ b/drivers/gpu/drm/meson/meson_plane.c
@@ -16,8 +16,8 @@
 #include <drm/drm_device.h>
 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fourcc.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_plane_helper.h>

 #include "meson_plane.h"
@@ -417,7 +417,7 @@ static const struct drm_plane_helper_funcs meson_plane_helper_funcs = {
 	.atomic_check	= meson_plane_atomic_check,
 	.atomic_disable	= meson_plane_atomic_disable,
 	.atomic_update	= meson_plane_atomic_update,
-	.prepare_fb	= drm_gem_fb_prepare_fb,
+	.prepare_fb	= drm_gem_plane_helper_prepare_fb,
 };

 static bool meson_plane_format_mod_supported(struct drm_plane *plane,
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
index bc0231a50132..40eb5c911e3c 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
@@ -13,7 +13,7 @@
 #include <drm/drm_atomic_uapi.h>
 #include <drm/drm_damage_helper.h>
 #include <drm/drm_file.h>
-#include <drm/drm_gem_framebuffer_helper.h>
+#include <drm/drm_gem_atomic_helper.h>

 #include "msm_drv.h"
 #include "dpu_kms.h"
@@ -892,7 +892,7 @@ static int dpu_plane_prepare_fb(struct drm_plane *plane,
 	 *       we can use msm_atomic_prepare_fb() instead of doing the
 	 *       implicit fence and fb prepare by hand here.
 	 */
-	drm_gem_fb_prepare_fb(plane, new_state);
+	drm_gem_plane_helper_prepare_fb(plane, new_state);

 	if (pstate->aspace) {
 		ret = msm_framebuffer_prepare(new_state->fb,
diff --git a/drivers/gpu/drm/msm/msm_atomic.c b/drivers/gpu/drm/msm/msm_atomic.c
index 6a326761dc4a..e9c6544b6a01 100644
--- a/drivers/gpu/drm/msm/msm_atomic.c
+++ b/drivers/gpu/drm/msm/msm_atomic.c
@@ -5,7 +5,7 @@
  */

 #include <drm/drm_atomic_uapi.h>
-#include <drm/drm_gem_framebuffer_helper.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_vblank.h>

 #include "msm_atomic_trace.h"
@@ -22,7 +22,7 @@ int msm_atomic_prepare_fb(struct drm_plane *plane,
 	if (!new_state->fb)
 		return 0;

-	drm_gem_fb_prepare_fb(plane, new_state);
+	drm_gem_plane_helper_prepare_fb(plane, new_state);

 	return msm_framebuffer_prepare(new_state->fb, kms->aspace);
 }
diff --git a/drivers/gpu/drm/mxsfb/mxsfb_kms.c b/drivers/gpu/drm/mxsfb/mxsfb_kms.c
index 3e1bb0aefb87..7c19ec5384d4 100644
--- a/drivers/gpu/drm/mxsfb/mxsfb_kms.c
+++ b/drivers/gpu/drm/mxsfb/mxsfb_kms.c
@@ -21,8 +21,8 @@
 #include <drm/drm_encoder.h>
 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fourcc.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_plane.h>
 #include <drm/drm_plane_helper.h>
 #include <drm/drm_vblank.h>
@@ -495,13 +495,13 @@ static bool mxsfb_format_mod_supported(struct drm_plane *plane,
 }

 static const struct drm_plane_helper_funcs mxsfb_plane_primary_helper_funcs = {
-	.prepare_fb = drm_gem_fb_prepare_fb,
+	.prepare_fb = drm_gem_plane_helper_prepare_fb,
 	.atomic_check = mxsfb_plane_atomic_check,
 	.atomic_update = mxsfb_plane_primary_atomic_update,
 };

 static const struct drm_plane_helper_funcs mxsfb_plane_overlay_helper_funcs = {
-	.prepare_fb = drm_gem_fb_prepare_fb,
+	.prepare_fb = drm_gem_plane_helper_prepare_fb,
 	.atomic_check = mxsfb_plane_atomic_check,
 	.atomic_update = mxsfb_plane_overlay_atomic_update,
 };
diff --git a/drivers/gpu/drm/pl111/pl111_display.c b/drivers/gpu/drm/pl111/pl111_display.c
index 69c02e7c82b7..6fd7f13f1aca 100644
--- a/drivers/gpu/drm/pl111/pl111_display.c
+++ b/drivers/gpu/drm/pl111/pl111_display.c
@@ -17,8 +17,8 @@

 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fourcc.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_vblank.h>

 #include "pl111_drm.h"
@@ -440,7 +440,7 @@ static struct drm_simple_display_pipe_funcs pl111_display_funcs = {
 	.enable = pl111_display_enable,
 	.disable = pl111_display_disable,
 	.update = pl111_display_update,
-	.prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb,
+	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
 };

 static int pl111_clk_div_choose_div(struct clk_hw *hw, unsigned long rate,
diff --git a/drivers/gpu/drm/rcar-du/rcar_du_vsp.c b/drivers/gpu/drm/rcar-du/rcar_du_vsp.c
index 53221d8473c1..336ba0648a79 100644
--- a/drivers/gpu/drm/rcar-du/rcar_du_vsp.c
+++ b/drivers/gpu/drm/rcar-du/rcar_du_vsp.c
@@ -11,8 +11,8 @@
 #include <drm/drm_crtc.h>
 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fourcc.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_managed.h>
 #include <drm/drm_plane_helper.h>
 #include <drm/drm_vblank.h>
@@ -236,7 +236,7 @@ static int rcar_du_vsp_plane_prepare_fb(struct drm_plane *plane,
 	if (ret < 0)
 		return ret;

-	return drm_gem_fb_prepare_fb(plane, state);
+	return drm_gem_plane_helper_prepare_fb(plane, state);
 }

 void rcar_du_vsp_unmap_fb(struct rcar_du_vsp *vsp, struct drm_framebuffer *fb,
diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
index 8d15cabdcb02..daea2493bfb8 100644
--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
+++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
@@ -23,6 +23,7 @@
 #include <drm/drm_crtc.h>
 #include <drm/drm_flip_work.h>
 #include <drm/drm_fourcc.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_plane_helper.h>
 #include <drm/drm_probe_helper.h>
@@ -1096,7 +1097,7 @@ static const struct drm_plane_helper_funcs plane_helper_funcs = {
 	.atomic_disable = vop_plane_atomic_disable,
 	.atomic_async_check = vop_plane_atomic_async_check,
 	.atomic_async_update = vop_plane_atomic_async_update,
-	.prepare_fb = drm_gem_fb_prepare_fb,
+	.prepare_fb = drm_gem_plane_helper_prepare_fb,
 };

 static const struct drm_plane_funcs vop_plane_funcs = {
diff --git a/drivers/gpu/drm/stm/ltdc.c b/drivers/gpu/drm/stm/ltdc.c
index 7812094f93d6..15a9eff38abb 100644
--- a/drivers/gpu/drm/stm/ltdc.c
+++ b/drivers/gpu/drm/stm/ltdc.c
@@ -26,8 +26,8 @@
 #include <drm/drm_device.h>
 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fourcc.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_of.h>
 #include <drm/drm_plane_helper.h>
 #include <drm/drm_probe_helper.h>
@@ -911,7 +911,7 @@ static const struct drm_plane_funcs ltdc_plane_funcs = {
 };

 static const struct drm_plane_helper_funcs ltdc_plane_helper_funcs = {
-	.prepare_fb = drm_gem_fb_prepare_fb,
+	.prepare_fb = drm_gem_plane_helper_prepare_fb,
 	.atomic_check = ltdc_plane_atomic_check,
 	.atomic_update = ltdc_plane_atomic_update,
 	.atomic_disable = ltdc_plane_atomic_disable,
diff --git a/drivers/gpu/drm/sun4i/sun4i_layer.c b/drivers/gpu/drm/sun4i/sun4i_layer.c
index acfbfd4463a1..259c10b85ee7 100644
--- a/drivers/gpu/drm/sun4i/sun4i_layer.c
+++ b/drivers/gpu/drm/sun4i/sun4i_layer.c
@@ -7,7 +7,7 @@
  */

 #include <drm/drm_atomic_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_plane_helper.h>

 #include "sun4i_backend.h"
@@ -122,7 +122,7 @@ static bool sun4i_layer_format_mod_supported(struct drm_plane *plane,
 }

 static const struct drm_plane_helper_funcs sun4i_backend_layer_helper_funcs = {
-	.prepare_fb	= drm_gem_fb_prepare_fb,
+	.prepare_fb	= drm_gem_plane_helper_prepare_fb,
 	.atomic_disable	= sun4i_backend_layer_atomic_disable,
 	.atomic_update	= sun4i_backend_layer_atomic_update,
 };
diff --git a/drivers/gpu/drm/sun4i/sun8i_ui_layer.c b/drivers/gpu/drm/sun4i/sun8i_ui_layer.c
index 816ad4ce8996..7cd584e16b53 100644
--- a/drivers/gpu/drm/sun4i/sun8i_ui_layer.c
+++ b/drivers/gpu/drm/sun4i/sun8i_ui_layer.c
@@ -14,8 +14,8 @@
 #include <drm/drm_crtc.h>
 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fourcc.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_plane_helper.h>
 #include <drm/drm_probe_helper.h>

@@ -299,7 +299,7 @@ static void sun8i_ui_layer_atomic_update(struct drm_plane *plane,
 }

 static const struct drm_plane_helper_funcs sun8i_ui_layer_helper_funcs = {
-	.prepare_fb	= drm_gem_fb_prepare_fb,
+	.prepare_fb	= drm_gem_plane_helper_prepare_fb,
 	.atomic_check	= sun8i_ui_layer_atomic_check,
 	.atomic_disable	= sun8i_ui_layer_atomic_disable,
 	.atomic_update	= sun8i_ui_layer_atomic_update,
diff --git a/drivers/gpu/drm/sun4i/sun8i_vi_layer.c b/drivers/gpu/drm/sun4i/sun8i_vi_layer.c
index 8cc294a9969d..404c0d2d49cf 100644
--- a/drivers/gpu/drm/sun4i/sun8i_vi_layer.c
+++ b/drivers/gpu/drm/sun4i/sun8i_vi_layer.c
@@ -7,8 +7,8 @@
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_crtc.h>
 #include <drm/drm_fb_cma_helper.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_plane_helper.h>
 #include <drm/drm_probe_helper.h>

@@ -402,7 +402,7 @@ static void sun8i_vi_layer_atomic_update(struct drm_plane *plane,
 }

 static const struct drm_plane_helper_funcs sun8i_vi_layer_helper_funcs = {
-	.prepare_fb	= drm_gem_fb_prepare_fb,
+	.prepare_fb	= drm_gem_plane_helper_prepare_fb,
 	.atomic_check	= sun8i_vi_layer_atomic_check,
 	.atomic_disable	= sun8i_vi_layer_atomic_disable,
 	.atomic_update	= sun8i_vi_layer_atomic_update,
diff --git a/drivers/gpu/drm/tegra/plane.c b/drivers/gpu/drm/tegra/plane.c
index 539d14935728..19e8847a164b 100644
--- a/drivers/gpu/drm/tegra/plane.c
+++ b/drivers/gpu/drm/tegra/plane.c
@@ -8,7 +8,7 @@
 #include <drm/drm_atomic.h>
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_fourcc.h>
-#include <drm/drm_gem_framebuffer_helper.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_plane_helper.h>

 #include "dc.h"
@@ -198,7 +198,7 @@ int tegra_plane_prepare_fb(struct drm_plane *plane,
 	if (!state->fb)
 		return 0;

-	drm_gem_fb_prepare_fb(plane, state);
+	drm_gem_plane_helper_prepare_fb(plane, state);

 	return tegra_dc_pin(dc, to_tegra_plane_state(state));
 }
diff --git a/drivers/gpu/drm/tidss/tidss_plane.c b/drivers/gpu/drm/tidss/tidss_plane.c
index 35067ae674ea..795d24b44091 100644
--- a/drivers/gpu/drm/tidss/tidss_plane.c
+++ b/drivers/gpu/drm/tidss/tidss_plane.c
@@ -10,7 +10,7 @@
 #include <drm/drm_crtc_helper.h>
 #include <drm/drm_fourcc.h>
 #include <drm/drm_fb_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
+#include <drm/drm_gem_atomic_helper.h>

 #include "tidss_crtc.h"
 #include "tidss_dispc.h"
@@ -151,7 +151,7 @@ static void drm_plane_destroy(struct drm_plane *plane)
 }

 static const struct drm_plane_helper_funcs tidss_plane_helper_funcs = {
-	.prepare_fb = drm_gem_fb_prepare_fb,
+	.prepare_fb = drm_gem_plane_helper_prepare_fb,
 	.atomic_check = tidss_plane_atomic_check,
 	.atomic_update = tidss_plane_atomic_update,
 	.atomic_disable = tidss_plane_atomic_disable,
diff --git a/drivers/gpu/drm/tiny/hx8357d.c b/drivers/gpu/drm/tiny/hx8357d.c
index c6525cd02bc2..3e2c2868a363 100644
--- a/drivers/gpu/drm/tiny/hx8357d.c
+++ b/drivers/gpu/drm/tiny/hx8357d.c
@@ -19,8 +19,8 @@
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_drv.h>
 #include <drm/drm_fb_helper.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_managed.h>
 #include <drm/drm_mipi_dbi.h>
 #include <drm/drm_modeset_helper.h>
@@ -184,7 +184,7 @@ static const struct drm_simple_display_pipe_funcs hx8357d_pipe_funcs = {
 	.enable = yx240qv29_enable,
 	.disable = mipi_dbi_pipe_disable,
 	.update = mipi_dbi_pipe_update,
-	.prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb,
+	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
 };

 static const struct drm_display_mode yx350hv15_mode = {
diff --git a/drivers/gpu/drm/tiny/ili9225.c b/drivers/gpu/drm/tiny/ili9225.c
index 8e98962db5a2..6b87df19eec1 100644
--- a/drivers/gpu/drm/tiny/ili9225.c
+++ b/drivers/gpu/drm/tiny/ili9225.c
@@ -22,8 +22,8 @@
 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fb_helper.h>
 #include <drm/drm_fourcc.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_managed.h>
 #include <drm/drm_mipi_dbi.h>
 #include <drm/drm_rect.h>
@@ -328,7 +328,7 @@ static const struct drm_simple_display_pipe_funcs ili9225_pipe_funcs = {
 	.enable		= ili9225_pipe_enable,
 	.disable	= ili9225_pipe_disable,
 	.update		= ili9225_pipe_update,
-	.prepare_fb	= drm_gem_fb_simple_display_pipe_prepare_fb,
+	.prepare_fb	= drm_gem_simple_display_pipe_prepare_fb,
 };

 static const struct drm_display_mode ili9225_mode = {
diff --git a/drivers/gpu/drm/tiny/ili9341.c b/drivers/gpu/drm/tiny/ili9341.c
index 6ce97f0698eb..a97f3f70e4a6 100644
--- a/drivers/gpu/drm/tiny/ili9341.c
+++ b/drivers/gpu/drm/tiny/ili9341.c
@@ -18,8 +18,8 @@
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_drv.h>
 #include <drm/drm_fb_helper.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_managed.h>
 #include <drm/drm_mipi_dbi.h>
 #include <drm/drm_modeset_helper.h>
@@ -140,7 +140,7 @@ static const struct drm_simple_display_pipe_funcs ili9341_pipe_funcs = {
 	.enable = yx240qv29_enable,
 	.disable = mipi_dbi_pipe_disable,
 	.update = mipi_dbi_pipe_update,
-	.prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb,
+	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
 };

 static const struct drm_display_mode yx240qv29_mode = {
diff --git a/drivers/gpu/drm/tiny/ili9486.c b/drivers/gpu/drm/tiny/ili9486.c
index d7ce40eb166a..6422a7f67079 100644
--- a/drivers/gpu/drm/tiny/ili9486.c
+++ b/drivers/gpu/drm/tiny/ili9486.c
@@ -17,8 +17,8 @@
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_drv.h>
 #include <drm/drm_fb_helper.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_managed.h>
 #include <drm/drm_mipi_dbi.h>
 #include <drm/drm_modeset_helper.h>
@@ -153,7 +153,7 @@ static const struct drm_simple_display_pipe_funcs waveshare_pipe_funcs = {
 	.enable = waveshare_enable,
 	.disable = mipi_dbi_pipe_disable,
 	.update = mipi_dbi_pipe_update,
-	.prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb,
+	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
 };

 static const struct drm_display_mode waveshare_mode = {
diff --git a/drivers/gpu/drm/tiny/mi0283qt.c b/drivers/gpu/drm/tiny/mi0283qt.c
index ff77f983f803..dc76fe53aa72 100644
--- a/drivers/gpu/drm/tiny/mi0283qt.c
+++ b/drivers/gpu/drm/tiny/mi0283qt.c
@@ -16,8 +16,8 @@
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_drv.h>
 #include <drm/drm_fb_helper.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_managed.h>
 #include <drm/drm_mipi_dbi.h>
 #include <drm/drm_modeset_helper.h>
@@ -144,7 +144,7 @@ static const struct drm_simple_display_pipe_funcs mi0283qt_pipe_funcs = {
 	.enable = mi0283qt_enable,
 	.disable = mipi_dbi_pipe_disable,
 	.update = mipi_dbi_pipe_update,
-	.prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb,
+	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
 };

 static const struct drm_display_mode mi0283qt_mode = {
diff --git a/drivers/gpu/drm/tiny/repaper.c b/drivers/gpu/drm/tiny/repaper.c
index 11c602fc9897..2cee07a2e00b 100644
--- a/drivers/gpu/drm/tiny/repaper.c
+++ b/drivers/gpu/drm/tiny/repaper.c
@@ -29,6 +29,7 @@
 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fb_helper.h>
 #include <drm/drm_format_helper.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
 #include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_managed.h>
@@ -860,7 +861,7 @@ static const struct drm_simple_display_pipe_funcs repaper_pipe_funcs = {
 	.enable = repaper_pipe_enable,
 	.disable = repaper_pipe_disable,
 	.update = repaper_pipe_update,
-	.prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb,
+	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
 };

 static int repaper_connector_get_modes(struct drm_connector *connector)
diff --git a/drivers/gpu/drm/tiny/st7586.c b/drivers/gpu/drm/tiny/st7586.c
index ff5cf60f4bd7..7d216fe9267f 100644
--- a/drivers/gpu/drm/tiny/st7586.c
+++ b/drivers/gpu/drm/tiny/st7586.c
@@ -19,8 +19,8 @@
 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fb_helper.h>
 #include <drm/drm_format_helper.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_managed.h>
 #include <drm/drm_mipi_dbi.h>
 #include <drm/drm_rect.h>
@@ -268,7 +268,7 @@ static const struct drm_simple_display_pipe_funcs st7586_pipe_funcs = {
 	.enable		= st7586_pipe_enable,
 	.disable	= st7586_pipe_disable,
 	.update		= st7586_pipe_update,
-	.prepare_fb	= drm_gem_fb_simple_display_pipe_prepare_fb,
+	.prepare_fb	= drm_gem_simple_display_pipe_prepare_fb,
 };

 static const struct drm_display_mode st7586_mode = {
diff --git a/drivers/gpu/drm/tiny/st7735r.c b/drivers/gpu/drm/tiny/st7735r.c
index faaba0a033ea..df8872d62cdd 100644
--- a/drivers/gpu/drm/tiny/st7735r.c
+++ b/drivers/gpu/drm/tiny/st7735r.c
@@ -19,8 +19,8 @@
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_drv.h>
 #include <drm/drm_fb_helper.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_managed.h>
 #include <drm/drm_mipi_dbi.h>

@@ -136,7 +136,7 @@ static const struct drm_simple_display_pipe_funcs st7735r_pipe_funcs = {
 	.enable		= st7735r_pipe_enable,
 	.disable	= mipi_dbi_pipe_disable,
 	.update		= mipi_dbi_pipe_update,
-	.prepare_fb	= drm_gem_fb_simple_display_pipe_prepare_fb,
+	.prepare_fb	= drm_gem_simple_display_pipe_prepare_fb,
 };

 static const struct st7735r_cfg jd_t18003_t01_cfg = {
diff --git a/drivers/gpu/drm/tve200/tve200_display.c b/drivers/gpu/drm/tve200/tve200_display.c
index cb0e837d3dba..50e1fb71869f 100644
--- a/drivers/gpu/drm/tve200/tve200_display.c
+++ b/drivers/gpu/drm/tve200/tve200_display.c
@@ -17,8 +17,8 @@

 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fourcc.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_panel.h>
 #include <drm/drm_vblank.h>

@@ -316,7 +316,7 @@ static const struct drm_simple_display_pipe_funcs tve200_display_funcs = {
 	.enable = tve200_display_enable,
 	.disable = tve200_display_disable,
 	.update = tve200_display_update,
-	.prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb,
+	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
 	.enable_vblank = tve200_display_enable_vblank,
 	.disable_vblank = tve200_display_disable_vblank,
 };
diff --git a/drivers/gpu/drm/vc4/vc4_plane.c b/drivers/gpu/drm/vc4/vc4_plane.c
index 7322169c0682..1a1d6609b80f 100644
--- a/drivers/gpu/drm/vc4/vc4_plane.c
+++ b/drivers/gpu/drm/vc4/vc4_plane.c
@@ -20,7 +20,7 @@
 #include <drm/drm_atomic_uapi.h>
 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fourcc.h>
-#include <drm/drm_gem_framebuffer_helper.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_plane_helper.h>

 #include "uapi/drm/vc4_drm.h"
@@ -1250,7 +1250,7 @@ static int vc4_prepare_fb(struct drm_plane *plane,

 	bo = to_vc4_bo(&drm_fb_cma_get_gem_obj(state->fb, 0)->base);

-	drm_gem_fb_prepare_fb(plane, state);
+	drm_gem_plane_helper_prepare_fb(plane, state);

 	if (plane->state->fb == state->fb)
 		return 0;
diff --git a/drivers/gpu/drm/vkms/vkms_plane.c b/drivers/gpu/drm/vkms/vkms_plane.c
index 0824327cc860..2a02334b72ac 100644
--- a/drivers/gpu/drm/vkms/vkms_plane.c
+++ b/drivers/gpu/drm/vkms/vkms_plane.c
@@ -5,6 +5,7 @@
 #include <drm/drm_atomic.h>
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_fourcc.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_plane_helper.h>
 #include <drm/drm_gem_shmem_helper.h>
@@ -159,7 +160,7 @@ static int vkms_prepare_fb(struct drm_plane *plane,
 	if (ret)
 		DRM_ERROR("vmap failed: %d\n", ret);

-	return drm_gem_fb_prepare_fb(plane, state);
+	return drm_gem_plane_helper_prepare_fb(plane, state);
 }

 static void vkms_cleanup_fb(struct drm_plane *plane,
diff --git a/drivers/gpu/drm/xen/xen_drm_front_kms.c b/drivers/gpu/drm/xen/xen_drm_front_kms.c
index ef11b1e4de39..371202ebe900 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_kms.c
+++ b/drivers/gpu/drm/xen/xen_drm_front_kms.c
@@ -13,6 +13,7 @@
 #include <drm/drm_drv.h>
 #include <drm/drm_fourcc.h>
 #include <drm/drm_gem.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_probe_helper.h>
 #include <drm/drm_vblank.h>
@@ -301,7 +302,7 @@ static const struct drm_simple_display_pipe_funcs display_funcs = {
 	.mode_valid = display_mode_valid,
 	.enable = display_enable,
 	.disable = display_disable,
-	.prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb,
+	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
 	.check = display_check,
 	.update = display_update,
 };
diff --git a/include/drm/drm_gem_atomic_helper.h b/include/drm/drm_gem_atomic_helper.h
index 7abf40bdab3d..cfc5adee3d13 100644
--- a/include/drm/drm_gem_atomic_helper.h
+++ b/include/drm/drm_gem_atomic_helper.h
@@ -9,6 +9,14 @@

 struct drm_simple_display_pipe;

+/*
+ * Plane Helpers
+ */
+
+int drm_gem_plane_helper_prepare_fb(struct drm_plane *plane, struct drm_plane_state *state);
+int drm_gem_simple_display_pipe_prepare_fb(struct drm_simple_display_pipe *pipe,
+					   struct drm_plane_state *plane_state);
+
 /*
  * Helpers for planes with shadow buffers
  */
diff --git a/include/drm/drm_gem_framebuffer_helper.h b/include/drm/drm_gem_framebuffer_helper.h
index 6b013154911d..495d174d9989 100644
--- a/include/drm/drm_gem_framebuffer_helper.h
+++ b/include/drm/drm_gem_framebuffer_helper.h
@@ -9,9 +9,6 @@ struct drm_framebuffer;
 struct drm_framebuffer_funcs;
 struct drm_gem_object;
 struct drm_mode_fb_cmd2;
-struct drm_plane;
-struct drm_plane_state;
-struct drm_simple_display_pipe;

 #define AFBC_VENDOR_AND_TYPE_MASK	GENMASK_ULL(63, 52)

@@ -44,8 +41,4 @@ int drm_gem_fb_afbc_init(struct drm_device *dev,
 			 const struct drm_mode_fb_cmd2 *mode_cmd,
 			 struct drm_afbc_framebuffer *afbc_fb);

-int drm_gem_fb_prepare_fb(struct drm_plane *plane,
-			  struct drm_plane_state *state);
-int drm_gem_fb_simple_display_pipe_prepare_fb(struct drm_simple_display_pipe *pipe,
-					      struct drm_plane_state *plane_state);
 #endif
diff --git a/include/drm/drm_modeset_helper_vtables.h b/include/drm/drm_modeset_helper_vtables.h
index eb706342861d..df77ed843dd6 100644
--- a/include/drm/drm_modeset_helper_vtables.h
+++ b/include/drm/drm_modeset_helper_vtables.h
@@ -1179,7 +1179,7 @@ struct drm_plane_helper_funcs {
 	 * members in the plane structure.
 	 *
 	 * Drivers which always have their buffers pinned should use
-	 * drm_gem_fb_prepare_fb() for this hook.
+	 * drm_gem_plane_helper_prepare_fb() for this hook.
 	 *
 	 * The helpers will call @cleanup_fb with matching arguments for every
 	 * successful call to this hook.
diff --git a/include/drm/drm_plane.h b/include/drm/drm_plane.h
index 95ab14a4336a..1294610e84f4 100644
--- a/include/drm/drm_plane.h
+++ b/include/drm/drm_plane.h
@@ -79,8 +79,8 @@ struct drm_plane_state {
 	 * preserved.
 	 *
 	 * Drivers should store any implicit fence in this from their
-	 * &drm_plane_helper_funcs.prepare_fb callback. See drm_gem_fb_prepare_fb()
-	 * and drm_gem_fb_simple_display_pipe_prepare_fb() for suitable helpers.
+	 * &drm_plane_helper_funcs.prepare_fb callback. See drm_gem_plane_helper_prepare_fb()
+	 * and drm_gem_simple_display_pipe_prepare_fb() for suitable helpers.
 	 */
 	struct dma_fence *fence;

diff --git a/include/drm/drm_simple_kms_helper.h b/include/drm/drm_simple_kms_helper.h
index 40b34573249f..ef9944e9c5fc 100644
--- a/include/drm/drm_simple_kms_helper.h
+++ b/include/drm/drm_simple_kms_helper.h
@@ -117,7 +117,7 @@ struct drm_simple_display_pipe_funcs {
 	 * more details.
 	 *
 	 * Drivers which always have their buffers pinned should use
-	 * drm_gem_fb_simple_display_pipe_prepare_fb() for this hook.
+	 * drm_gem_simple_display_pipe_prepare_fb() for this hook.
 	 */
 	int (*prepare_fb)(struct drm_simple_display_pipe *pipe,
 			  struct drm_plane_state *plane_state);
--
2.30.0



From xen-devel-bounces@lists.xenproject.org Thu Feb 11 08:45:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 08:45:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83784.156757 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lA7bd-0001Xq-OZ; Thu, 11 Feb 2021 08:45:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83784.156757; Thu, 11 Feb 2021 08:45:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lA7bd-0001Xj-LP; Thu, 11 Feb 2021 08:45:45 +0000
Received: by outflank-mailman (input) for mailman id 83784;
 Thu, 11 Feb 2021 08:45:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=szQa=HN=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lA7bc-0001Xe-JV
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 08:45:44 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a5a98f4b-21c3-4b26-9b9a-fe1fc103eabf;
 Thu, 11 Feb 2021 08:45:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Date: Thu, 11 Feb 2021 09:45:31 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Kevin
 Tian" <kevin.tian@intel.com>, Julien Grall <julien@xen.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] VMX: use a single, global APIC access page
Message-ID: <YCTuq5b130PR6G35@Air-de-Roger>
References: <1b6a411b-84e7-bfb1-647e-511a13df838c@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <1b6a411b-84e7-bfb1-647e-511a13df838c@suse.com>
MIME-Version: 1.0

On Wed, Feb 10, 2021 at 05:48:26PM +0100, Jan Beulich wrote:
> I did further consider not allocating any real page at all, but just
> using the address of some unpopulated space (which would require
> announcing this page as reserved to Dom0, so it wouldn't put any PCI
> MMIO BARs there). But I thought this would be too controversial, because
> of the possible risks associated with this.

No, Xen is not capable of allocating a suitable unpopulated page IMO,
so let's not go down that route. Wasting one RAM page seems perfectly
fine to me.

> Perhaps the change to p2m_get_iommu_flags() should be in a separate
> patch.

Maybe; I'm still not fully convinced that's a change we want to make,
TBH.

> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -1007,6 +1007,8 @@ int arch_domain_soft_reset(struct domain
>  
>  void arch_domain_creation_finished(struct domain *d)
>  {
> +    if ( is_hvm_domain(d) )
> +        hvm_domain_creation_finished(d);
>  }
>  
>  #define xen_vcpu_guest_context vcpu_guest_context
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -66,8 +66,7 @@ boolean_param("force-ept", opt_force_ept
>  static void vmx_ctxt_switch_from(struct vcpu *v);
>  static void vmx_ctxt_switch_to(struct vcpu *v);
>  
> -static int  vmx_alloc_vlapic_mapping(struct domain *d);
> -static void vmx_free_vlapic_mapping(struct domain *d);
> +static int alloc_vlapic_mapping(void);
>  static void vmx_install_vlapic_mapping(struct vcpu *v);
>  static void vmx_update_guest_cr(struct vcpu *v, unsigned int cr,
>                                  unsigned int flags);
> @@ -78,6 +77,8 @@ static int vmx_msr_read_intercept(unsign
>  static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content);
>  static void vmx_invlpg(struct vcpu *v, unsigned long linear);
>  
> +static mfn_t __read_mostly apic_access_mfn;
> +
>  /* Values for domain's ->arch.hvm_domain.pi_ops.flags. */
>  #define PI_CSW_FROM (1u << 0)
>  #define PI_CSW_TO   (1u << 1)
> @@ -401,7 +402,6 @@ static int vmx_domain_initialise(struct
>          .to   = vmx_ctxt_switch_to,
>          .tail = vmx_do_resume,
>      };
> -    int rc;
>  
>      d->arch.ctxt_switch = &csw;
>  
> @@ -411,21 +411,16 @@ static int vmx_domain_initialise(struct
>       */
>      d->arch.hvm.vmx.exec_sp = is_hardware_domain(d) || opt_ept_exec_sp;
>  
> -    if ( !has_vlapic(d) )
> -        return 0;
> -
> -    if ( (rc = vmx_alloc_vlapic_mapping(d)) != 0 )
> -        return rc;
> -
>      return 0;
>  }
>  
> -static void vmx_domain_relinquish_resources(struct domain *d)
> +static void domain_creation_finished(struct domain *d)
>  {
> -    if ( !has_vlapic(d) )
> -        return;
>  
> -    vmx_free_vlapic_mapping(d);
> +    if ( !mfn_eq(apic_access_mfn, _mfn(0)) &&
> +         set_mmio_p2m_entry(d, gaddr_to_gfn(APIC_DEFAULT_PHYS_BASE),
> +                            apic_access_mfn, PAGE_ORDER_4K) )
> +        domain_crash(d);
>  }
>  
>  static void vmx_init_ipt(struct vcpu *v)
> @@ -2407,7 +2402,7 @@ static struct hvm_function_table __initd
>      .cpu_up_prepare       = vmx_cpu_up_prepare,
>      .cpu_dead             = vmx_cpu_dead,
>      .domain_initialise    = vmx_domain_initialise,
> -    .domain_relinquish_resources = vmx_domain_relinquish_resources,
> +    .domain_creation_finished = domain_creation_finished,
>      .vcpu_initialise      = vmx_vcpu_initialise,
>      .vcpu_destroy         = vmx_vcpu_destroy,
>      .save_cpu_ctxt        = vmx_save_vmcs_ctxt,
> @@ -2653,7 +2648,7 @@ const struct hvm_function_table * __init
>  {
>      set_in_cr4(X86_CR4_VMXE);
>  
> -    if ( vmx_vmcs_init() )
> +    if ( vmx_vmcs_init() || alloc_vlapic_mapping() )
>      {
>          printk("VMX: failed to initialise.\n");
>          return NULL;
> @@ -3208,7 +3203,7 @@ gp_fault:
>      return X86EMUL_EXCEPTION;
>  }
>  
> -static int vmx_alloc_vlapic_mapping(struct domain *d)
> +static int __init alloc_vlapic_mapping(void)
>  {
>      struct page_info *pg;
>      mfn_t mfn;
> @@ -3216,53 +3211,28 @@ static int vmx_alloc_vlapic_mapping(stru
>      if ( !cpu_has_vmx_virtualize_apic_accesses )
>          return 0;
>  
> -    pg = alloc_domheap_page(d, MEMF_no_refcount);
> +    pg = alloc_domheap_page(NULL, 0);
>      if ( !pg )
>          return -ENOMEM;
>  
> -    if ( !get_page_and_type(pg, d, PGT_writable_page) )
> -    {
> -        /*
> -         * The domain can't possibly know about this page yet, so failure
> -         * here is a clear indication of something fishy going on.
> -         */
> -        domain_crash(d);
> -        return -ENODATA;
> -    }
> -
>      mfn = page_to_mfn(pg);
>      clear_domain_page(mfn);
> -    d->arch.hvm.vmx.apic_access_mfn = mfn;
> +    apic_access_mfn = mfn;
>  
> -    return set_mmio_p2m_entry(d, gaddr_to_gfn(APIC_DEFAULT_PHYS_BASE), mfn,
> -                              PAGE_ORDER_4K);
> -}
> -
> -static void vmx_free_vlapic_mapping(struct domain *d)
> -{
> -    mfn_t mfn = d->arch.hvm.vmx.apic_access_mfn;
> -
> -    d->arch.hvm.vmx.apic_access_mfn = _mfn(0);
> -    if ( !mfn_eq(mfn, _mfn(0)) )
> -    {
> -        struct page_info *pg = mfn_to_page(mfn);
> -
> -        put_page_alloc_ref(pg);
> -        put_page_and_type(pg);
> -    }
> +    return 0;
>  }
>  
>  static void vmx_install_vlapic_mapping(struct vcpu *v)
>  {
>      paddr_t virt_page_ma, apic_page_ma;
>  
> -    if ( mfn_eq(v->domain->arch.hvm.vmx.apic_access_mfn, _mfn(0)) )
> +    if ( mfn_eq(apic_access_mfn, _mfn(0)) )

I think you should check whether the domain has a vlapic, since
apic_access_mfn is now global and will be set regardless of whether the
domain has a vlapic enabled.

Previously apic_access_mfn was only allocated if the domain had vlapic
enabled.
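
A minimal sketch of the check I have in mind (untested pseudocode,
using the existing has_vlapic() helper):

```c
/* Untested sketch: also bail out for domains without a vlapic,
 * now that apic_access_mfn is set unconditionally at boot. */
static void vmx_install_vlapic_mapping(struct vcpu *v)
{
    if ( mfn_eq(apic_access_mfn, _mfn(0)) || !has_vlapic(v->domain) )
        return;

    /* ... rest unchanged ... */
}
```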

>          return;
>  
>      ASSERT(cpu_has_vmx_virtualize_apic_accesses);
>  
>      virt_page_ma = page_to_maddr(vcpu_vlapic(v)->regs_page);
> -    apic_page_ma = mfn_to_maddr(v->domain->arch.hvm.vmx.apic_access_mfn);
> +    apic_page_ma = mfn_to_maddr(apic_access_mfn);
>  
>      vmx_vmcs_enter(v);
>      __vmwrite(VIRTUAL_APIC_PAGE_ADDR, virt_page_ma);

I would consider setting up the VMCS and adding the page to the p2m in
the same function, likely calling it from vlapic_init. We could have a
domain_setup_apic hook in hvm_function_table that takes care of all of
this.
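
Roughly along these lines (hypothetical hook name, untested sketch):

```c
/* Untested sketch of a single per-domain hook that both maps the
 * APIC access page into the p2m and programs the per-vcpu VMCS
 * fields, so the two stay in sync. */
static void vmx_domain_setup_apic(struct domain *d)
{
    struct vcpu *v;

    if ( mfn_eq(apic_access_mfn, _mfn(0)) || !has_vlapic(d) )
        return;

    if ( set_mmio_p2m_entry(d, gaddr_to_gfn(APIC_DEFAULT_PHYS_BASE),
                            apic_access_mfn, PAGE_ORDER_4K) )
        domain_crash(d);

    for_each_vcpu ( d, v )
        vmx_install_vlapic_mapping(v);
}
```

(Exact placement relative to vlapic_init would need working out, since
vlapic_init is per-vcpu; the point is that p2m and VMCS setup would
live together behind one hvm_funcs hook.)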

> --- a/xen/include/asm-x86/hvm/hvm.h
> +++ b/xen/include/asm-x86/hvm/hvm.h
> @@ -106,6 +106,7 @@ struct hvm_function_table {
>       * Initialise/destroy HVM domain/vcpu resources
>       */
>      int  (*domain_initialise)(struct domain *d);
> +    void (*domain_creation_finished)(struct domain *d);
>      void (*domain_relinquish_resources)(struct domain *d);
>      void (*domain_destroy)(struct domain *d);
>      int  (*vcpu_initialise)(struct vcpu *v);
> @@ -390,6 +391,12 @@ static inline bool hvm_has_set_descripto
>      return hvm_funcs.set_descriptor_access_exiting;
>  }
>  
> +static inline void hvm_domain_creation_finished(struct domain *d)
> +{
> +    if ( hvm_funcs.domain_creation_finished )
> +        alternative_vcall(hvm_funcs.domain_creation_finished, d);
> +}
> +
>  static inline int
>  hvm_guest_x86_mode(struct vcpu *v)
>  {
> @@ -765,6 +772,11 @@ static inline void hvm_invlpg(const stru
>  {
>      ASSERT_UNREACHABLE();
>  }
> +
> +static inline void hvm_domain_creation_finished(struct domain *d)
> +{
> +    ASSERT_UNREACHABLE();
> +}
>  
>  /*
>   * Shadow code needs further cleanup to eliminate some HVM-only paths. For
> --- a/xen/include/asm-x86/hvm/vmx/vmcs.h
> +++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
> @@ -58,7 +58,6 @@ struct ept_data {
>  #define _VMX_DOMAIN_PML_ENABLED    0
>  #define VMX_DOMAIN_PML_ENABLED     (1ul << _VMX_DOMAIN_PML_ENABLED)
>  struct vmx_domain {
> -    mfn_t apic_access_mfn;
>      /* VMX_DOMAIN_* */
>      unsigned int status;
>  
> --- a/xen/include/asm-x86/p2m.h
> +++ b/xen/include/asm-x86/p2m.h
> @@ -935,6 +935,9 @@ static inline unsigned int p2m_get_iommu
>          flags = IOMMUF_readable;
>          if ( !rangeset_contains_singleton(mmio_ro_ranges, mfn_x(mfn)) )
>              flags |= IOMMUF_writable;
> +        /* VMX'es APIC access page is global and hence has no owner. */
> +        if ( mfn_valid(mfn) && !page_get_owner(mfn_to_page(mfn)) )
> +            flags = 0;

Is it fine to have this page accessible to devices if the page tables
are shared between the CPU and the IOMMU?

Is it possible for devices to write to it?

I still think both the CPU and the IOMMU page tables should be the
same unless there's a technical reason for them not to be, rather than
us just wanting to hide things from devices.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Feb 11 09:39:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 09:39:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83789.156769 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lA8Rj-0006F5-O0; Thu, 11 Feb 2021 09:39:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83789.156769; Thu, 11 Feb 2021 09:39:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lA8Rj-0006Ey-Kd; Thu, 11 Feb 2021 09:39:35 +0000
Received: by outflank-mailman (input) for mailman id 83789;
 Thu, 11 Feb 2021 09:39:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lA8Rh-0006Eq-T2; Thu, 11 Feb 2021 09:39:33 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lA8Rh-0000aW-IV; Thu, 11 Feb 2021 09:39:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lA8Rh-0005mc-BG; Thu, 11 Feb 2021 09:39:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lA8Rh-0003WA-Ak; Thu, 11 Feb 2021 09:39:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=9TGYrR2wqydvCwvjaWwnhGTw88OLXTWmFzRf05taqRs=; b=uhODCG/PZpXyxhdk+8+SH95hxM
	wz+8cBBK8kKx9gJusHHwkSslmHldOBqJlYwQuH1PB2nio55enRTAWMsc8y3Q/ARfcfE6FlhyRvEFD
	ccr/KgfpM//+cZit+gvh6vNtedcfwYB7yD59bPGVt0g7SOtU5vqi6JQMaZLcfNyHhIAE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [linux-5.4 bisection] complete test-arm64-arm64-xl-xsm
Message-Id: <E1lA8Rh-0003WA-Ak@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 11 Feb 2021 09:39:33 +0000

branch xen-unstable
xenbranch xen-unstable
job test-arm64-arm64-xl-xsm
testid guest-start

Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
  Bug introduced:  a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Bug not present: acc402fa5bf502d471d50e3d495379f093a7f9e4
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/159246/


  commit a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Author: David Woodhouse <dwmw@amazon.co.uk>
  Date:   Wed Jan 13 13:26:02 2021 +0000
  
      xen: Fix event channel callback via INTX/GSI
      
      [ Upstream commit 3499ba8198cad47b731792e5e56b9ec2a78a83a2 ]
      
      For a while, event channel notification via the PCI platform device
      has been broken, because we attempt to communicate with xenstore before
      we even have notifications working, with the xs_reset_watches() call
      in xs_init().
      
      We tend to get away with this on Xen versions below 4.0 because we avoid
      calling xs_reset_watches() anyway, because xenstore might not cope with
      reading a non-existent key. And newer Xen *does* have the vector
      callback support, so we rarely fall back to INTX/GSI delivery.
      
      To fix it, clean up a bit of the mess of xs_init() and xenbus_probe()
      startup. Call xs_init() directly from xenbus_init() only in the !XS_HVM
      case, deferring it to be called from xenbus_probe() in the XS_HVM case
      instead.
      
      Then fix up the invocation of xenbus_probe() to happen either from its
      device_initcall if the callback is available early enough, or when the
      callback is finally set up. This means that the hack of calling
      xenbus_probe() from a workqueue after the first interrupt, or directly
      from the PCI platform device setup, is no longer needed.
      
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Link: https://lore.kernel.org/r/20210113132606.422794-2-dwmw2@infradead.org
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/linux-5.4/test-arm64-arm64-xl-xsm.guest-start.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/linux-5.4/test-arm64-arm64-xl-xsm.guest-start --summary-out=tmp/159246.bisection-summary --basis-template=158387 --blessings=real,real-bisect,real-retry linux-5.4 test-arm64-arm64-xl-xsm guest-start
Searching for failure / basis pass:
 159200 fail [host=laxton1] / 158681 [host=rochester1] 158624 [host=laxton0] 158616 [host=rochester0] 158609 ok.
Failure / basis pass flights: 159200 / 158609
Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 5e1942063dc3633f7a127aa2b159c13507580d21 c530a75c1e6a472b0eb9558310b518f0dfcd8860 472276f59bba2b22bb882c5c6f5479754e68d467 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7
Basis pass 09f983f0c7fc0db79a5f6c883ec3510d424c369c c530a75c1e6a472b0eb9558310b518f0dfcd8860 3b769c5110384fb33bcfeddced80f721ec7838cc 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 452ddbe3592b141b05a7e0676f09c8ae07f98fdd
Generating revisions with ./adhoc-revtuple-generator  git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git#09f983f0c7fc0db79a5f6c883ec3510d424c369c-5e1942063dc3633f7a127aa2b159c13507580d21 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#3b769c5110384fb33bcfeddced80f721ec7838cc-472276f59bba2b22bb882c5c6f5479754e68d467 git://xenbits.xen.org/qemu-xen.git#7ea428895af2840d85c524f0bd11a38aac308308-7ea428895af2840d85c524f0bd11a38aac308308 git://xenbits.xen.org/osstest/seabios.git#ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e-ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e git://xenbits.xen.org/xen.git#452ddbe3592b141b05a7e0676f09c8ae07f98fdd-ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7
Loaded 15001 nodes in revision graph
Searching for test results:
 158593 [host=rochester1]
 158603 [host=laxton0]
 158609 pass 09f983f0c7fc0db79a5f6c883ec3510d424c369c c530a75c1e6a472b0eb9558310b518f0dfcd8860 3b769c5110384fb33bcfeddced80f721ec7838cc 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 452ddbe3592b141b05a7e0676f09c8ae07f98fdd
 158616 [host=rochester0]
 158624 [host=laxton0]
 158681 [host=rochester1]
 158707 fail irrelevant
 158716 fail irrelevant
 158748 fail 131f8d8a889a5ca66a835eea82bba043ac91a7cf c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e f8708b0ed6d549d1d29b8b5cc287f1f2b642bc63
 158765 fail 131f8d8a889a5ca66a835eea82bba043ac91a7cf c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 6677b5a3577c16501fbc51a3341446905bd21c38
 158796 fail irrelevant
 158818 fail irrelevant
 158841 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158863 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158881 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158929 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea56ebf67dd55483105aa9f9996a48213e78337e 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158962 fail irrelevant
 159023 fail irrelevant
 159129 fail irrelevant
 159205 pass 09f983f0c7fc0db79a5f6c883ec3510d424c369c c530a75c1e6a472b0eb9558310b518f0dfcd8860 3b769c5110384fb33bcfeddced80f721ec7838cc 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 452ddbe3592b141b05a7e0676f09c8ae07f98fdd
 159207 fail irrelevant
 159212 fail 38f35023fd301abeb01cfd81e73caa2e4e7ec0b1 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159218 pass d8a487e673abf46c69c901bb25da54e9bc7ba45e c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159226 pass 5a1d7bb7d333849eb7d3ab5ebfbf9805b2cd46c9 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159234 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159235 pass 9cec63a3aacbcaee8d09aecac2ca2f8820efcc70 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159236 pass 8ab3478335ad8fc08f14ec73251b084fe02b3ebb c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159200 fail 5e1942063dc3633f7a127aa2b159c13507580d21 c530a75c1e6a472b0eb9558310b518f0dfcd8860 472276f59bba2b22bb882c5c6f5479754e68d467 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7
 159237 pass acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159240 fail 5e1942063dc3633f7a127aa2b159c13507580d21 c530a75c1e6a472b0eb9558310b518f0dfcd8860 472276f59bba2b22bb882c5c6f5479754e68d467 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7
 159242 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159243 pass acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159244 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159245 pass acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159246 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
Searching for interesting versions
 Result found: flight 158609 (pass), for basis pass
 Result found: flight 159200 (fail), for basis failure (at ancestor ~185)
 Repro found: flight 159205 (pass), for basis pass
 Repro found: flight 159240 (fail), for basis failure
 0 revisions at acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
No revisions left to test, checking graph state.
 Result found: flight 159237 (pass), for last pass
 Result found: flight 159242 (fail), for first failure
 Repro found: flight 159243 (pass), for last pass
 Repro found: flight 159244 (fail), for first failure
 Repro found: flight 159245 (pass), for last pass
 Repro found: flight 159246 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
  Bug introduced:  a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Bug not present: acc402fa5bf502d471d50e3d495379f093a7f9e4
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/159246/


  commit a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Author: David Woodhouse <dwmw@amazon.co.uk>
  Date:   Wed Jan 13 13:26:02 2021 +0000
  
      xen: Fix event channel callback via INTX/GSI
      
      [ Upstream commit 3499ba8198cad47b731792e5e56b9ec2a78a83a2 ]
      
      For a while, event channel notification via the PCI platform device
      has been broken, because we attempt to communicate with xenstore before
      we even have notifications working, with the xs_reset_watches() call
      in xs_init().
      
      We tend to get away with this on Xen versions below 4.0 because we avoid
      calling xs_reset_watches() anyway, because xenstore might not cope with
      reading a non-existent key. And newer Xen *does* have the vector
      callback support, so we rarely fall back to INTX/GSI delivery.
      
      To fix it, clean up a bit of the mess of xs_init() and xenbus_probe()
      startup. Call xs_init() directly from xenbus_init() only in the !XS_HVM
      case, deferring it to be called from xenbus_probe() in the XS_HVM case
      instead.
      
      Then fix up the invocation of xenbus_probe() to happen either from its
      device_initcall if the callback is available early enough, or when the
      callback is finally set up. This means that the hack of calling
      xenbus_probe() from a workqueue after the first interrupt, or directly
      from the PCI platform device setup, is no longer needed.
      
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Link: https://lore.kernel.org/r/20210113132606.422794-2-dwmw2@infradead.org
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>

pnmtopng: 147 colors found
Revision graph left in /home/logs/results/bisect/linux-5.4/test-arm64-arm64-xl-xsm.guest-start.{dot,ps,png,html,svg}.
----------------------------------------
159246: tolerable ALL FAIL

flight 159246 linux-5.4 real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/159246/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-arm64-arm64-xl-xsm      14 guest-start             fail baseline untested


jobs:
 test-arm64-arm64-xl-xsm                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Thu Feb 11 10:17:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 10:17:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83795.156799 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lA92e-0001WM-6n; Thu, 11 Feb 2021 10:17:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83795.156799; Thu, 11 Feb 2021 10:17:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lA92e-0001WF-3O; Thu, 11 Feb 2021 10:17:44 +0000
Received: by outflank-mailman (input) for mailman id 83795;
 Thu, 11 Feb 2021 10:17:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/E5q=HN=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lA92c-0001VI-FC
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 10:17:42 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3863bdfc-b802-4643-845d-1182e4a74dc7;
 Thu, 11 Feb 2021 10:17:41 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6C4EBAEE6;
 Thu, 11 Feb 2021 10:17:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3863bdfc-b802-4643-845d-1182e4a74dc7
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613038660; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=p4KSkF0c5fU4qCPg8or+1uqqop6uTe4bYlvvnGDRxWY=;
	b=DivHVvXIK3d+2H8s2mJSXlaS/ID6mGaFcAur+N//M0KKelVsACyazfqTdU4OBJdNvQbyLH
	gkM2KGQRRMZzRO4po/06SU5P4EBCoFB8pFlh44T9iVqutpsjQ14Y2is+xeZcc8XjsMhUbm
	QwWpkI3/YayYR7YxkZ8FR36tCXSQ7i4=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>
Subject: [PATCH v2 3/8] xen/events: avoid handling the same event on two cpus at the same time
Date: Thu, 11 Feb 2021 11:16:11 +0100
Message-Id: <20210211101616.13788-4-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210211101616.13788-1-jgross@suse.com>
References: <20210211101616.13788-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When changing the cpu affinity of an event it can happen today that
(with some unlucky timing) the same event will be handled on the old
and the new cpu at the same time.

Avoid that by adding an "event active" flag to the per-event data and
call the handler only if this flag isn't set.

Reported-by: Julien Grall <julien@xen.org>
Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- new patch
---
 drivers/xen/events/events_base.c | 19 +++++++++++++++----
 1 file changed, 15 insertions(+), 4 deletions(-)

diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index e157e7506830..f7e22330dcef 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -102,6 +102,7 @@ struct irq_info {
 #define EVT_MASK_REASON_EXPLICIT	0x01
 #define EVT_MASK_REASON_TEMPORARY	0x02
 #define EVT_MASK_REASON_EOI_PENDING	0x04
+	u8 is_active;		/* Is event just being handled? */
 	unsigned irq;
 	evtchn_port_t evtchn;   /* event channel */
 	unsigned short cpu;     /* cpu bound */
@@ -622,6 +623,7 @@ static void xen_irq_lateeoi_locked(struct irq_info *info, bool spurious)
 	}
 
 	info->eoi_time = 0;
+	smp_store_release(&info->is_active, 0);
 	do_unmask(info, EVT_MASK_REASON_EOI_PENDING);
 }
 
@@ -809,13 +811,15 @@ static void pirq_query_unmask(int irq)
 
 static void eoi_pirq(struct irq_data *data)
 {
-	evtchn_port_t evtchn = evtchn_from_irq(data->irq);
+	struct irq_info *info = info_for_irq(data->irq);
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
 	struct physdev_eoi eoi = { .irq = pirq_from_irq(data->irq) };
 	int rc = 0;
 
 	if (!VALID_EVTCHN(evtchn))
 		return;
 
+	smp_store_release(&info->is_active, 0);
 	clear_evtchn(evtchn);
 
 	if (pirq_needs_eoi(data->irq)) {
@@ -1640,6 +1644,8 @@ void handle_irq_for_port(evtchn_port_t port, struct evtchn_loop_ctrl *ctrl)
 	}
 
 	info = info_for_irq(irq);
+	if (xchg_acquire(&info->is_active, 1))
+		return;
 
 	if (ctrl->defer_eoi) {
 		info->eoi_cpu = smp_processor_id();
@@ -1823,11 +1829,13 @@ static void disable_dynirq(struct irq_data *data)
 
 static void ack_dynirq(struct irq_data *data)
 {
-	evtchn_port_t evtchn = evtchn_from_irq(data->irq);
+	struct irq_info *info = info_for_irq(data->irq);
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
 
 	if (!VALID_EVTCHN(evtchn))
 		return;
 
+	smp_store_release(&info->is_active, 0);
 	clear_evtchn(evtchn);
 }
 
@@ -1969,10 +1977,13 @@ static void restore_cpu_ipis(unsigned int cpu)
 /* Clear an irq's pending state, in preparation for polling on it */
 void xen_clear_irq_pending(int irq)
 {
-	evtchn_port_t evtchn = evtchn_from_irq(irq);
+	struct irq_info *info = info_for_irq(irq);
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
 
-	if (VALID_EVTCHN(evtchn))
+	if (VALID_EVTCHN(evtchn)) {
+		smp_store_release(&info->is_active, 0);
 		clear_evtchn(evtchn);
+	}
 }
 EXPORT_SYMBOL(xen_clear_irq_pending);
 void xen_set_irq_pending(int irq)
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu Feb 11 10:17:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 10:17:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83794.156786 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lA92c-0001VU-Uz; Thu, 11 Feb 2021 10:17:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83794.156786; Thu, 11 Feb 2021 10:17:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lA92c-0001VN-Rx; Thu, 11 Feb 2021 10:17:42 +0000
Received: by outflank-mailman (input) for mailman id 83794;
 Thu, 11 Feb 2021 10:17:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/E5q=HN=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lA92b-0001Uu-VA
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 10:17:41 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 93fd298e-33ea-4c26-a35d-e453663e6bc5;
 Thu, 11 Feb 2021 10:17:41 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2F54EADA2;
 Thu, 11 Feb 2021 10:17:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 93fd298e-33ea-4c26-a35d-e453663e6bc5
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613038660; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=in+h4lBDIRM6mJ7t9cKXZa5bGbgsEjGF9LUcGBiK4pw=;
	b=V6/nHildAeeP4mTaW9pkNSIVN3W5JG7oTeZ/pm8uEYQwO4UvFiHUpuKpMve/BpBv1v2hDc
	zSTmNSzLpmjpGw4gEVI+1q7BnJ17tQmdjF7FJ39oRAfLmWt2kbB7tj8L2NeYypbl1pGYHA
	D4i8R84Uh3YQj183ZThezUvLnAY9qNY=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org,
	netdev@vger.kernel.org,
	linux-block@vger.kernel.org,
	linux-scsi@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	stable@vger.kernel.org,
	Wei Liu <wei.liu@kernel.org>,
	Paul Durrant <paul@xen.org>,
	"David S. Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Jens Axboe <axboe@kernel.dk>
Subject: [PATCH v2 0/8] xen/events: bug fixes and some diagnostic aids
Date: Thu, 11 Feb 2021 11:16:08 +0100
Message-Id: <20210211101616.13788-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The first four patches are fixes for XSA-332. They avoid WARN splats
and a performance issue with interdomain events.

Patches 5 and 6 extend event handling in order to add per pv-device
statistics to sysfs and the ability to control the spurious event delay
per backend device.

Patches 7 and 8 are minor fixes I had lying around.

Juergen Gross (8):
  xen/events: reset affinity of 2-level event when tearing it down
  xen/events: don't unmask an event channel when an eoi is pending
  xen/events: avoid handling the same event on two cpus at the same time
  xen/netback: fix spurious event detection for common event case
  xen/events: link interdomain events to associated xenbus device
  xen/events: add per-xenbus device event statistics and settings
  xen/evtchn: use smp barriers for user event ring
  xen/evtchn: use READ/WRITE_ONCE() for accessing ring indices

 .../ABI/testing/sysfs-devices-xenbus          |  41 ++++
 drivers/block/xen-blkback/xenbus.c            |   2 +-
 drivers/net/xen-netback/interface.c           |  24 ++-
 drivers/xen/events/events_2l.c                |  22 +-
 drivers/xen/events/events_base.c              | 190 ++++++++++++++----
 drivers/xen/events/events_fifo.c              |   7 -
 drivers/xen/events/events_internal.h          |  14 +-
 drivers/xen/evtchn.c                          |  29 ++-
 drivers/xen/pvcalls-back.c                    |   4 +-
 drivers/xen/xen-pciback/xenbus.c              |   2 +-
 drivers/xen/xen-scsiback.c                    |   2 +-
 drivers/xen/xenbus/xenbus_probe.c             |  66 ++++++
 include/xen/events.h                          |   7 +-
 include/xen/xenbus.h                          |   7 +
 14 files changed, 323 insertions(+), 94 deletions(-)
 create mode 100644 Documentation/ABI/testing/sysfs-devices-xenbus

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu Feb 11 10:17:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 10:17:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83796.156811 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lA92i-0001Yo-Ft; Thu, 11 Feb 2021 10:17:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83796.156811; Thu, 11 Feb 2021 10:17:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lA92i-0001Yh-CM; Thu, 11 Feb 2021 10:17:48 +0000
Received: by outflank-mailman (input) for mailman id 83796;
 Thu, 11 Feb 2021 10:17:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/E5q=HN=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lA92g-0001Uu-Nr
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 10:17:46 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7325ca90-66f4-46d3-8126-5d42ffe92f14;
 Thu, 11 Feb 2021 10:17:41 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 42709AEE5;
 Thu, 11 Feb 2021 10:17:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7325ca90-66f4-46d3-8126-5d42ffe92f14
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613038660; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=3oKwFjI7QJbP3FIEeR3qa2NxeJ0L76eV38hLUEb9hjo=;
	b=f6mXAjuRCbzMf35BDEDWXTNA6DVlLvcAIChLY3m7jKJkXro1baLzQOjC1iXUb5JEebuLeO
	4+UyZ9/AnZQmz8mMOePJZBkC4fy2koCnhIu4EXJOm3Z8yb46lYhgyNtHTuCxtQyUJtcSRC
	WDmMJJ57vhgngsOi6bgwDZnZNFSiNVs=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	stable@vger.kernel.org,
	Julien Grall <julien@xen.org>
Subject: [PATCH v2 2/8] xen/events: don't unmask an event channel when an eoi is pending
Date: Thu, 11 Feb 2021 11:16:10 +0100
Message-Id: <20210211101616.13788-3-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210211101616.13788-1-jgross@suse.com>
References: <20210211101616.13788-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

An event channel should be kept masked when an eoi is pending for it.
When it is being migrated to another cpu, however, it might be unmasked.

In order to avoid this, keep three different flags for each event channel
to be able to distinguish "normal" masking/unmasking from eoi-related
masking/unmasking and temporary masking. The event channel may only
generate an interrupt if all flags are cleared.

Cc: stable@vger.kernel.org
Fixes: 54c9de89895e0a36047 ("xen/events: add a new late EOI evtchn framework")
Reported-by: Julien Grall <julien@xen.org>
Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- introduce a lock around masking/unmasking
- merge patch 3 into this one (Jan Beulich)
---
 drivers/xen/events/events_2l.c       |   7 --
 drivers/xen/events/events_base.c     | 102 +++++++++++++++++++++------
 drivers/xen/events/events_fifo.c     |   7 --
 drivers/xen/events/events_internal.h |   6 --
 4 files changed, 81 insertions(+), 41 deletions(-)

diff --git a/drivers/xen/events/events_2l.c b/drivers/xen/events/events_2l.c
index a7f413c5c190..b8f2f971c2f0 100644
--- a/drivers/xen/events/events_2l.c
+++ b/drivers/xen/events/events_2l.c
@@ -77,12 +77,6 @@ static bool evtchn_2l_is_pending(evtchn_port_t port)
 	return sync_test_bit(port, BM(&s->evtchn_pending[0]));
 }
 
-static bool evtchn_2l_test_and_set_mask(evtchn_port_t port)
-{
-	struct shared_info *s = HYPERVISOR_shared_info;
-	return sync_test_and_set_bit(port, BM(&s->evtchn_mask[0]));
-}
-
 static void evtchn_2l_mask(evtchn_port_t port)
 {
 	struct shared_info *s = HYPERVISOR_shared_info;
@@ -376,7 +370,6 @@ static const struct evtchn_ops evtchn_ops_2l = {
 	.clear_pending     = evtchn_2l_clear_pending,
 	.set_pending       = evtchn_2l_set_pending,
 	.is_pending        = evtchn_2l_is_pending,
-	.test_and_set_mask = evtchn_2l_test_and_set_mask,
 	.mask              = evtchn_2l_mask,
 	.unmask            = evtchn_2l_unmask,
 	.handle_events     = evtchn_2l_handle_events,
diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index 6c539db81f8f..e157e7506830 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -97,13 +97,18 @@ struct irq_info {
 	short refcnt;
 	u8 spurious_cnt;
 	u8 is_accounted;
-	enum xen_irq_type type; /* type */
+	short type;		/* type: IRQT_* */
+	u8 mask_reason;		/* Why is event channel masked */
+#define EVT_MASK_REASON_EXPLICIT	0x01
+#define EVT_MASK_REASON_TEMPORARY	0x02
+#define EVT_MASK_REASON_EOI_PENDING	0x04
 	unsigned irq;
 	evtchn_port_t evtchn;   /* event channel */
 	unsigned short cpu;     /* cpu bound */
 	unsigned short eoi_cpu; /* EOI must happen on this cpu-1 */
 	unsigned int irq_epoch; /* If eoi_cpu valid: irq_epoch of event */
 	u64 eoi_time;           /* Time in jiffies when to EOI. */
+	spinlock_t lock;
 
 	union {
 		unsigned short virq;
@@ -152,6 +157,7 @@ static DEFINE_RWLOCK(evtchn_rwlock);
  *   evtchn_rwlock
  *     IRQ-desc lock
  *       percpu eoi_list_lock
+ *         irq_info->lock
  */
 
 static LIST_HEAD(xen_irq_list_head);
@@ -302,6 +308,8 @@ static int xen_irq_info_common_setup(struct irq_info *info,
 	info->irq = irq;
 	info->evtchn = evtchn;
 	info->cpu = cpu;
+	info->mask_reason = EVT_MASK_REASON_EXPLICIT;
+	spin_lock_init(&info->lock);
 
 	ret = set_evtchn_to_irq(evtchn, irq);
 	if (ret < 0)
@@ -450,6 +458,34 @@ unsigned int cpu_from_evtchn(evtchn_port_t evtchn)
 	return ret;
 }
 
+static void do_mask(struct irq_info *info, u8 reason)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&info->lock, flags);
+
+	if (!info->mask_reason)
+		mask_evtchn(info->evtchn);
+
+	info->mask_reason |= reason;
+
+	spin_unlock_irqrestore(&info->lock, flags);
+}
+
+static void do_unmask(struct irq_info *info, u8 reason)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&info->lock, flags);
+
+	info->mask_reason &= ~reason;
+
+	if (!info->mask_reason)
+		unmask_evtchn(info->evtchn);
+
+	spin_unlock_irqrestore(&info->lock, flags);
+}
+
 #ifdef CONFIG_X86
 static bool pirq_check_eoi_map(unsigned irq)
 {
@@ -586,7 +622,7 @@ static void xen_irq_lateeoi_locked(struct irq_info *info, bool spurious)
 	}
 
 	info->eoi_time = 0;
-	unmask_evtchn(evtchn);
+	do_unmask(info, EVT_MASK_REASON_EOI_PENDING);
 }
 
 static void xen_irq_lateeoi_worker(struct work_struct *work)
@@ -831,7 +867,8 @@ static unsigned int __startup_pirq(unsigned int irq)
 		goto err;
 
 out:
-	unmask_evtchn(evtchn);
+	do_unmask(info, EVT_MASK_REASON_EXPLICIT);
+
 	eoi_pirq(irq_get_irq_data(irq));
 
 	return 0;
@@ -858,7 +895,7 @@ static void shutdown_pirq(struct irq_data *data)
 	if (!VALID_EVTCHN(evtchn))
 		return;
 
-	mask_evtchn(evtchn);
+	do_mask(info, EVT_MASK_REASON_EXPLICIT);
 	xen_evtchn_close(evtchn);
 	xen_irq_info_cleanup(info);
 }
@@ -1691,10 +1728,10 @@ void rebind_evtchn_irq(evtchn_port_t evtchn, int irq)
 }
 
 /* Rebind an evtchn so that it gets delivered to a specific cpu */
-static int xen_rebind_evtchn_to_cpu(evtchn_port_t evtchn, unsigned int tcpu)
+static int xen_rebind_evtchn_to_cpu(struct irq_info *info, unsigned int tcpu)
 {
 	struct evtchn_bind_vcpu bind_vcpu;
-	int masked;
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
 
 	if (!VALID_EVTCHN(evtchn))
 		return -1;
@@ -1710,7 +1747,7 @@ static int xen_rebind_evtchn_to_cpu(evtchn_port_t evtchn, unsigned int tcpu)
 	 * Mask the event while changing the VCPU binding to prevent
 	 * it being delivered on an unexpected VCPU.
 	 */
-	masked = test_and_set_mask(evtchn);
+	do_mask(info, EVT_MASK_REASON_TEMPORARY);
 
 	/*
 	 * If this fails, it usually just indicates that we're dealing with a
@@ -1720,8 +1757,7 @@ static int xen_rebind_evtchn_to_cpu(evtchn_port_t evtchn, unsigned int tcpu)
 	if (HYPERVISOR_event_channel_op(EVTCHNOP_bind_vcpu, &bind_vcpu) >= 0)
 		bind_evtchn_to_cpu(evtchn, tcpu, false);
 
-	if (!masked)
-		unmask_evtchn(evtchn);
+	do_unmask(info, EVT_MASK_REASON_TEMPORARY);
 
 	return 0;
 }
@@ -1760,7 +1796,7 @@ static int set_affinity_irq(struct irq_data *data, const struct cpumask *dest,
 	unsigned int tcpu = select_target_cpu(dest);
 	int ret;
 
-	ret = xen_rebind_evtchn_to_cpu(evtchn_from_irq(data->irq), tcpu);
+	ret = xen_rebind_evtchn_to_cpu(info_for_irq(data->irq), tcpu);
 	if (!ret)
 		irq_data_update_effective_affinity(data, cpumask_of(tcpu));
 
@@ -1769,18 +1805,20 @@ static int set_affinity_irq(struct irq_data *data, const struct cpumask *dest,
 
 static void enable_dynirq(struct irq_data *data)
 {
-	evtchn_port_t evtchn = evtchn_from_irq(data->irq);
+	struct irq_info *info = info_for_irq(data->irq);
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
 
 	if (VALID_EVTCHN(evtchn))
-		unmask_evtchn(evtchn);
+		do_unmask(info, EVT_MASK_REASON_EXPLICIT);
 }
 
 static void disable_dynirq(struct irq_data *data)
 {
-	evtchn_port_t evtchn = evtchn_from_irq(data->irq);
+	struct irq_info *info = info_for_irq(data->irq);
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
 
 	if (VALID_EVTCHN(evtchn))
-		mask_evtchn(evtchn);
+		do_mask(info, EVT_MASK_REASON_EXPLICIT);
 }
 
 static void ack_dynirq(struct irq_data *data)
@@ -1799,18 +1837,40 @@ static void mask_ack_dynirq(struct irq_data *data)
 	ack_dynirq(data);
 }
 
+static void lateeoi_ack_dynirq(struct irq_data *data)
+{
+	struct irq_info *info = info_for_irq(data->irq);
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
+
+	if (VALID_EVTCHN(evtchn)) {
+		do_mask(info, EVT_MASK_REASON_EOI_PENDING);
+		clear_evtchn(evtchn);
+	}
+}
+
+static void lateeoi_mask_ack_dynirq(struct irq_data *data)
+{
+	struct irq_info *info = info_for_irq(data->irq);
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
+
+	if (VALID_EVTCHN(evtchn)) {
+		do_mask(info, EVT_MASK_REASON_EXPLICIT |
+			      EVT_MASK_REASON_EOI_PENDING);
+		clear_evtchn(evtchn);
+	}
+}
+
 static int retrigger_dynirq(struct irq_data *data)
 {
-	evtchn_port_t evtchn = evtchn_from_irq(data->irq);
-	int masked;
+	struct irq_info *info = info_for_irq(data->irq);
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
 
 	if (!VALID_EVTCHN(evtchn))
 		return 0;
 
-	masked = test_and_set_mask(evtchn);
+	do_mask(info, EVT_MASK_REASON_TEMPORARY);
 	set_evtchn(evtchn);
-	if (!masked)
-		unmask_evtchn(evtchn);
+	do_unmask(info, EVT_MASK_REASON_TEMPORARY);
 
 	return 1;
 }
@@ -2024,8 +2084,8 @@ static struct irq_chip xen_lateeoi_chip __read_mostly = {
 	.irq_mask		= disable_dynirq,
 	.irq_unmask		= enable_dynirq,
 
-	.irq_ack		= mask_ack_dynirq,
-	.irq_mask_ack		= mask_ack_dynirq,
+	.irq_ack		= lateeoi_ack_dynirq,
+	.irq_mask_ack		= lateeoi_mask_ack_dynirq,
 
 	.irq_set_affinity	= set_affinity_irq,
 	.irq_retrigger		= retrigger_dynirq,
diff --git a/drivers/xen/events/events_fifo.c b/drivers/xen/events/events_fifo.c
index b234f1766810..ad9fe51d3fb3 100644
--- a/drivers/xen/events/events_fifo.c
+++ b/drivers/xen/events/events_fifo.c
@@ -209,12 +209,6 @@ static bool evtchn_fifo_is_pending(evtchn_port_t port)
 	return sync_test_bit(EVTCHN_FIFO_BIT(PENDING, word), BM(word));
 }
 
-static bool evtchn_fifo_test_and_set_mask(evtchn_port_t port)
-{
-	event_word_t *word = event_word_from_port(port);
-	return sync_test_and_set_bit(EVTCHN_FIFO_BIT(MASKED, word), BM(word));
-}
-
 static void evtchn_fifo_mask(evtchn_port_t port)
 {
 	event_word_t *word = event_word_from_port(port);
@@ -423,7 +417,6 @@ static const struct evtchn_ops evtchn_ops_fifo = {
 	.clear_pending     = evtchn_fifo_clear_pending,
 	.set_pending       = evtchn_fifo_set_pending,
 	.is_pending        = evtchn_fifo_is_pending,
-	.test_and_set_mask = evtchn_fifo_test_and_set_mask,
 	.mask              = evtchn_fifo_mask,
 	.unmask            = evtchn_fifo_unmask,
 	.handle_events     = evtchn_fifo_handle_events,
diff --git a/drivers/xen/events/events_internal.h b/drivers/xen/events/events_internal.h
index 18a4090d0709..4d3398eff9cd 100644
--- a/drivers/xen/events/events_internal.h
+++ b/drivers/xen/events/events_internal.h
@@ -21,7 +21,6 @@ struct evtchn_ops {
 	void (*clear_pending)(evtchn_port_t port);
 	void (*set_pending)(evtchn_port_t port);
 	bool (*is_pending)(evtchn_port_t port);
-	bool (*test_and_set_mask)(evtchn_port_t port);
 	void (*mask)(evtchn_port_t port);
 	void (*unmask)(evtchn_port_t port);
 
@@ -84,11 +83,6 @@ static inline bool test_evtchn(evtchn_port_t port)
 	return evtchn_ops->is_pending(port);
 }
 
-static inline bool test_and_set_mask(evtchn_port_t port)
-{
-	return evtchn_ops->test_and_set_mask(port);
-}
-
 static inline void mask_evtchn(evtchn_port_t port)
 {
 	return evtchn_ops->mask(port);
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu Feb 11 10:17:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 10:17:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83797.156817 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lA92i-0001ZV-Ul; Thu, 11 Feb 2021 10:17:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83797.156817; Thu, 11 Feb 2021 10:17:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lA92i-0001ZF-M8; Thu, 11 Feb 2021 10:17:48 +0000
Received: by outflank-mailman (input) for mailman id 83797;
 Thu, 11 Feb 2021 10:17:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/E5q=HN=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lA92h-0001VI-7o
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 10:17:47 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0feffe05-2a7d-4ce3-8742-72cb4290f884;
 Thu, 11 Feb 2021 10:17:41 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2F592ADE3;
 Thu, 11 Feb 2021 10:17:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0feffe05-2a7d-4ce3-8742-72cb4290f884
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613038660; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=EdBaXbPjwuisXp+wA3ceCVG/oi9M98Y6TOFrfwWaHa8=;
	b=Hxp5lciP0+R7TqImPb1Oi0mOF1Kt4gS8HSFGF77fTX3A5UvKadVDgCggHJxuXESqpwWZlr
	XXEY8jwagjCeQ20jMrUsdk4x8278lK28+Vn5MRXaJ8eeZdMTvVfKWMeL9iXotkOhz7jmpw
	qiiDgnuNfHNzpjoj4iJqFS4N6cnFV8w=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	stable@vger.kernel.org,
	Julien Grall <julien@xen.org>
Subject: [PATCH v2 1/8] xen/events: reset affinity of 2-level event when tearing it down
Date: Thu, 11 Feb 2021 11:16:09 +0100
Message-Id: <20210211101616.13788-2-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210211101616.13788-1-jgross@suse.com>
References: <20210211101616.13788-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When creating a new event channel with 2-level events the affinity
needs to be reset initially in order to avoid using an old affinity
from an earlier usage of the event channel port. So when tearing an
event channel down, reset all affinity bits.

The same applies to the affinity when onlining a vcpu: all old
affinity settings for this vcpu must be reset. As percpu events are
initialized before the percpu event channel hook is called,
resetting the affinities happens after offlining a vcpu (this
works, as the initial percpu memory is zeroed out).

Cc: stable@vger.kernel.org
Reported-by: Julien Grall <julien@xen.org>
Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- reset affinity when tearing down the event (Julien Grall)
---
 drivers/xen/events/events_2l.c       | 15 +++++++++++++++
 drivers/xen/events/events_base.c     |  1 +
 drivers/xen/events/events_internal.h |  8 ++++++++
 3 files changed, 24 insertions(+)

diff --git a/drivers/xen/events/events_2l.c b/drivers/xen/events/events_2l.c
index da87f3a1e351..a7f413c5c190 100644
--- a/drivers/xen/events/events_2l.c
+++ b/drivers/xen/events/events_2l.c
@@ -47,6 +47,11 @@ static unsigned evtchn_2l_max_channels(void)
 	return EVTCHN_2L_NR_CHANNELS;
 }
 
+static void evtchn_2l_remove(evtchn_port_t evtchn, unsigned int cpu)
+{
+	clear_bit(evtchn, BM(per_cpu(cpu_evtchn_mask, cpu)));
+}
+
 static void evtchn_2l_bind_to_cpu(evtchn_port_t evtchn, unsigned int cpu,
 				  unsigned int old_cpu)
 {
@@ -355,9 +360,18 @@ static void evtchn_2l_resume(void)
 				EVTCHN_2L_NR_CHANNELS/BITS_PER_EVTCHN_WORD);
 }
 
+static int evtchn_2l_percpu_deinit(unsigned int cpu)
+{
+	memset(per_cpu(cpu_evtchn_mask, cpu), 0, sizeof(xen_ulong_t) *
+			EVTCHN_2L_NR_CHANNELS/BITS_PER_EVTCHN_WORD);
+
+	return 0;
+}
+
 static const struct evtchn_ops evtchn_ops_2l = {
 	.max_channels      = evtchn_2l_max_channels,
 	.nr_channels       = evtchn_2l_max_channels,
+	.remove            = evtchn_2l_remove,
 	.bind_to_cpu       = evtchn_2l_bind_to_cpu,
 	.clear_pending     = evtchn_2l_clear_pending,
 	.set_pending       = evtchn_2l_set_pending,
@@ -367,6 +381,7 @@ static const struct evtchn_ops evtchn_ops_2l = {
 	.unmask            = evtchn_2l_unmask,
 	.handle_events     = evtchn_2l_handle_events,
 	.resume	           = evtchn_2l_resume,
+	.percpu_deinit     = evtchn_2l_percpu_deinit,
 };
 
 void __init xen_evtchn_2l_init(void)
diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index e850f79351cb..6c539db81f8f 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -368,6 +368,7 @@ static int xen_irq_info_pirq_setup(unsigned irq,
 static void xen_irq_info_cleanup(struct irq_info *info)
 {
 	set_evtchn_to_irq(info->evtchn, -1);
+	xen_evtchn_port_remove(info->evtchn, info->cpu);
 	info->evtchn = 0;
 	channels_on_cpu_dec(info);
 }
diff --git a/drivers/xen/events/events_internal.h b/drivers/xen/events/events_internal.h
index 0a97c0549db7..18a4090d0709 100644
--- a/drivers/xen/events/events_internal.h
+++ b/drivers/xen/events/events_internal.h
@@ -14,6 +14,7 @@ struct evtchn_ops {
 	unsigned (*nr_channels)(void);
 
 	int (*setup)(evtchn_port_t port);
+	void (*remove)(evtchn_port_t port, unsigned int cpu);
 	void (*bind_to_cpu)(evtchn_port_t evtchn, unsigned int cpu,
 			    unsigned int old_cpu);
 
@@ -54,6 +55,13 @@ static inline int xen_evtchn_port_setup(evtchn_port_t evtchn)
 	return 0;
 }
 
+static inline void xen_evtchn_port_remove(evtchn_port_t evtchn,
+					  unsigned int cpu)
+{
+	if (evtchn_ops->remove)
+		evtchn_ops->remove(evtchn, cpu);
+}
+
 static inline void xen_evtchn_port_bind_to_cpu(evtchn_port_t evtchn,
 					       unsigned int cpu,
 					       unsigned int old_cpu)
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu Feb 11 10:17:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 10:17:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83798.156835 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lA92n-0001fJ-5Y; Thu, 11 Feb 2021 10:17:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83798.156835; Thu, 11 Feb 2021 10:17:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lA92n-0001fA-1v; Thu, 11 Feb 2021 10:17:53 +0000
Received: by outflank-mailman (input) for mailman id 83798;
 Thu, 11 Feb 2021 10:17:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/E5q=HN=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lA92l-0001Uu-O2
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 10:17:51 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8cb82252-c671-4eed-821b-f7d2d3151973;
 Thu, 11 Feb 2021 10:17:42 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 98FF9AF50;
 Thu, 11 Feb 2021 10:17:41 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8cb82252-c671-4eed-821b-f7d2d3151973
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613038661; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=lYH24KPMu2VG4s1NBLZlXtQTIXh2Tv25DANk+uAGYuo=;
	b=u1DgoxnpEm4vOpyw48CO18ajkHrDW1uvBzT3RfgWYFijSi0+reRlAnvlKP1gg0r4P9Pi7i
	Vu5847ApQb2x84mlacvWnCjsZcJwu0yHNhZeZttlgZKVZRpMXgD4XgZOCcNZ5ySrNx83tW
	UvkSODeyNjTFab6FqCgs6SMINmH3iSc=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v2 8/8] xen/evtchn: use READ/WRITE_ONCE() for accessing ring indices
Date: Thu, 11 Feb 2021 11:16:16 +0100
Message-Id: <20210211101616.13788-9-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210211101616.13788-1-jgross@suse.com>
References: <20210211101616.13788-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

To avoid read and write tearing by the compiler, use READ_ONCE()
and WRITE_ONCE() for accessing the ring indices in evtchn.c.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- modify all accesses (Julien Grall)
---
 drivers/xen/evtchn.c | 25 ++++++++++++++++---------
 1 file changed, 16 insertions(+), 9 deletions(-)

diff --git a/drivers/xen/evtchn.c b/drivers/xen/evtchn.c
index 421382c73d88..620008f89dbe 100644
--- a/drivers/xen/evtchn.c
+++ b/drivers/xen/evtchn.c
@@ -162,6 +162,7 @@ static irqreturn_t evtchn_interrupt(int irq, void *data)
 {
 	struct user_evtchn *evtchn = data;
 	struct per_user_data *u = evtchn->user;
+	unsigned int prod, cons;
 
 	WARN(!evtchn->enabled,
 	     "Interrupt for port %u, but apparently not enabled; per-user %p\n",
@@ -171,10 +172,14 @@ static irqreturn_t evtchn_interrupt(int irq, void *data)
 
 	spin_lock(&u->ring_prod_lock);
 
-	if ((u->ring_prod - u->ring_cons) < u->ring_size) {
-		*evtchn_ring_entry(u, u->ring_prod) = evtchn->port;
+	prod = READ_ONCE(u->ring_prod);
+	cons = READ_ONCE(u->ring_cons);
+
+	if ((prod - cons) < u->ring_size) {
+		*evtchn_ring_entry(u, prod) = evtchn->port;
 		smp_wmb(); /* Ensure ring contents visible */
-		if (u->ring_cons == u->ring_prod++) {
+		if (cons == prod++) {
+			WRITE_ONCE(u->ring_prod, prod);
 			wake_up_interruptible(&u->evtchn_wait);
 			kill_fasync(&u->evtchn_async_queue,
 				    SIGIO, POLL_IN);
@@ -210,8 +215,8 @@ static ssize_t evtchn_read(struct file *file, char __user *buf,
 		if (u->ring_overflow)
 			goto unlock_out;
 
-		c = u->ring_cons;
-		p = u->ring_prod;
+		c = READ_ONCE(u->ring_cons);
+		p = READ_ONCE(u->ring_prod);
 		if (c != p)
 			break;
 
@@ -221,7 +226,7 @@ static ssize_t evtchn_read(struct file *file, char __user *buf,
 			return -EAGAIN;
 
 		rc = wait_event_interruptible(u->evtchn_wait,
-					      u->ring_cons != u->ring_prod);
+			READ_ONCE(u->ring_cons) != READ_ONCE(u->ring_prod));
 		if (rc)
 			return rc;
 	}
@@ -251,7 +256,7 @@ static ssize_t evtchn_read(struct file *file, char __user *buf,
 	     copy_to_user(&buf[bytes1], &u->ring[0], bytes2)))
 		goto unlock_out;
 
-	u->ring_cons += (bytes1 + bytes2) / sizeof(evtchn_port_t);
+	WRITE_ONCE(u->ring_cons, c + (bytes1 + bytes2) / sizeof(evtchn_port_t));
 	rc = bytes1 + bytes2;
 
  unlock_out:
@@ -552,7 +557,9 @@ static long evtchn_ioctl(struct file *file,
 		/* Initialise the ring to empty. Clear errors. */
 		mutex_lock(&u->ring_cons_mutex);
 		spin_lock_irq(&u->ring_prod_lock);
-		u->ring_cons = u->ring_prod = u->ring_overflow = 0;
+		WRITE_ONCE(u->ring_cons, 0);
+		WRITE_ONCE(u->ring_prod, 0);
+		u->ring_overflow = 0;
 		spin_unlock_irq(&u->ring_prod_lock);
 		mutex_unlock(&u->ring_cons_mutex);
 		rc = 0;
@@ -595,7 +602,7 @@ static __poll_t evtchn_poll(struct file *file, poll_table *wait)
 	struct per_user_data *u = file->private_data;
 
 	poll_wait(file, &u->evtchn_wait, wait);
-	if (u->ring_cons != u->ring_prod)
+	if (READ_ONCE(u->ring_cons) != READ_ONCE(u->ring_prod))
 		mask |= EPOLLIN | EPOLLRDNORM;
 	if (u->ring_overflow)
 		mask = EPOLLERR;
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu Feb 11 10:17:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 10:17:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83799.156840 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lA92n-0001gG-Ki; Thu, 11 Feb 2021 10:17:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83799.156840; Thu, 11 Feb 2021 10:17:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lA92n-0001fu-BQ; Thu, 11 Feb 2021 10:17:53 +0000
Received: by outflank-mailman (input) for mailman id 83799;
 Thu, 11 Feb 2021 10:17:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/E5q=HN=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lA92m-0001VI-7p
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 10:17:52 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d9b06e69-c918-459a-a5f1-acebf056e50c;
 Thu, 11 Feb 2021 10:17:41 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id ACC86AF0F;
 Thu, 11 Feb 2021 10:17:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d9b06e69-c918-459a-a5f1-acebf056e50c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613038660; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Rhogk/kmOD28gajRsMGG1ocHU7evgTnTuSlQWFIoQW8=;
	b=on0/DGg+97nD1nkvxkD4VEqz3oR9LkeQaHrptcUN0K8YljfnjFg68saLoRag5dRxWts5xM
	gcYtqhCRJfyj1qxrUcVplOvFvSQTRp245Dkbp+hx1i7zitTFMX0alfy6CbM8J1j2ZIVT3N
	SkPqcYJ4CUuD45OjfSfrPTh+cYiLMW0=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wei.liu@kernel.org>,
	Paul Durrant <paul@xen.org>,
	"David S. Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>
Subject: [PATCH v2 4/8] xen/netback: fix spurious event detection for common event case
Date: Thu, 11 Feb 2021 11:16:12 +0100
Message-Id: <20210211101616.13788-5-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210211101616.13788-1-jgross@suse.com>
References: <20210211101616.13788-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In case of a common event for the rx and tx queue, the event should
be regarded as spurious only if no rx and no tx requests are pending.

Unfortunately the condition testing for that is wrong, causing an
event to be deemed spurious if no rx OR no tx requests are pending.

Fix that, and use local variables for the rx/tx pending indicators in
order to separate the function calls from the if condition.

Fixes: 23025393dbeb3b ("xen/netback: use lateeoi irq binding")
Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- new patch, fixing FreeBSD performance issue
---
 drivers/net/xen-netback/interface.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index acb786d8b1d8..e02a4fbb74de 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -162,13 +162,15 @@ irqreturn_t xenvif_interrupt(int irq, void *dev_id)
 {
 	struct xenvif_queue *queue = dev_id;
 	int old;
+	bool has_rx, has_tx;
 
 	old = atomic_fetch_or(NETBK_COMMON_EOI, &queue->eoi_pending);
 	WARN(old, "Interrupt while EOI pending\n");
 
-	/* Use bitwise or as we need to call both functions. */
-	if ((!xenvif_handle_tx_interrupt(queue) |
-	     !xenvif_handle_rx_interrupt(queue))) {
+	has_tx = xenvif_handle_tx_interrupt(queue);
+	has_rx = xenvif_handle_rx_interrupt(queue);
+
+	if (!has_rx && !has_tx) {
 		atomic_andnot(NETBK_COMMON_EOI, &queue->eoi_pending);
 		xen_irq_lateeoi(irq, XEN_EOI_FLAG_SPURIOUS);
 	}
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu Feb 11 10:17:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 10:17:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83800.156859 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lA92s-0001oJ-9C; Thu, 11 Feb 2021 10:17:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83800.156859; Thu, 11 Feb 2021 10:17:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lA92s-0001o6-4H; Thu, 11 Feb 2021 10:17:58 +0000
Received: by outflank-mailman (input) for mailman id 83800;
 Thu, 11 Feb 2021 10:17:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/E5q=HN=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lA92r-0001VI-7w
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 10:17:57 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c958570c-1b54-4414-a274-bdc203af6dae;
 Thu, 11 Feb 2021 10:17:42 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6FD2CAF49;
 Thu, 11 Feb 2021 10:17:41 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c958570c-1b54-4414-a274-bdc203af6dae
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613038661; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=nYiBLOOS0jKaGXZR/xdp5WHTi+5sOW1v2m88NDrZNbI=;
	b=LmkY0cY/xUPQabBZ2K6py6iwlNSohSvZqtVGIKhgfs/Vd+e3i0HPowZRhK5GJgyUl5o28/
	Lx47YRJ3EXvR1/z8sPJDmrvUc6rf00u386Gf1+f8cdN9vhlSbjkIEU7foEmfSwD4BZQf/a
	zldb9aBFKnXvuTux4U/p6Gw35AZbFbs=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2 7/8] xen/evtchn: use smp barriers for user event ring
Date: Thu, 11 Feb 2021 11:16:15 +0100
Message-Id: <20210211101616.13788-8-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210211101616.13788-1-jgross@suse.com>
References: <20210211101616.13788-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The ring buffer for user events is local to the given kernel instance,
so SMP barriers are sufficient for ensuring consistency.

Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
 drivers/xen/evtchn.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/xen/evtchn.c b/drivers/xen/evtchn.c
index a7a85719a8c8..421382c73d88 100644
--- a/drivers/xen/evtchn.c
+++ b/drivers/xen/evtchn.c
@@ -173,7 +173,7 @@ static irqreturn_t evtchn_interrupt(int irq, void *data)
 
 	if ((u->ring_prod - u->ring_cons) < u->ring_size) {
 		*evtchn_ring_entry(u, u->ring_prod) = evtchn->port;
-		wmb(); /* Ensure ring contents visible */
+		smp_wmb(); /* Ensure ring contents visible */
 		if (u->ring_cons == u->ring_prod++) {
 			wake_up_interruptible(&u->evtchn_wait);
 			kill_fasync(&u->evtchn_async_queue,
@@ -245,7 +245,7 @@ static ssize_t evtchn_read(struct file *file, char __user *buf,
 	}
 
 	rc = -EFAULT;
-	rmb(); /* Ensure that we see the port before we copy it. */
+	smp_rmb(); /* Ensure that we see the port before we copy it. */
 	if (copy_to_user(buf, evtchn_ring_entry(u, c), bytes1) ||
 	    ((bytes2 != 0) &&
 	     copy_to_user(&buf[bytes1], &u->ring[0], bytes2)))
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu Feb 11 10:18:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 10:18:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83801.156871 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lA92x-0001wA-KM; Thu, 11 Feb 2021 10:18:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83801.156871; Thu, 11 Feb 2021 10:18:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lA92x-0001w1-Em; Thu, 11 Feb 2021 10:18:03 +0000
Received: by outflank-mailman (input) for mailman id 83801;
 Thu, 11 Feb 2021 10:18:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/E5q=HN=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lA92w-0001VI-8E
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 10:18:02 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 146ca085-a6ed-4f37-b0fd-9402351e30fd;
 Thu, 11 Feb 2021 10:17:42 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3841FAF11;
 Thu, 11 Feb 2021 10:17:41 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 146ca085-a6ed-4f37-b0fd-9402351e30fd
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613038661; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=3cGioF3XhVwmtqHEwtIlRzTuYQpd1I+EFdYzeGvtMh0=;
	b=nJWGxbfsZic/fqYIe+mxFP821DzSm6m2XntEhzUHGLp8ReTw/1Z9p6Wf+Zog36NLKvRkvP
	aBBHU2g6Du1VAciaM66Ag1GwoVvwYFf2dWM8ssG4kS8RlcudQi1k9volZsavYT1ss5EL3j
	JZviBo2JsyCZQjZJXlTkzxRB8siacRs=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v2 6/8] xen/events: add per-xenbus device event statistics and settings
Date: Thu, 11 Feb 2021 11:16:14 +0100
Message-Id: <20210211101616.13788-7-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210211101616.13788-1-jgross@suse.com>
References: <20210211101616.13788-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add sysfs nodes for each xenbus device showing event statistics (number
of events and spurious events, number of associated event channels),
and for setting a spurious event threshold in case a frontend is
sending too many events without being deliberately malicious.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- add documentation (Boris Ostrovsky)
---
 .../ABI/testing/sysfs-devices-xenbus          | 41 ++++++++++++
 drivers/xen/events/events_base.c              | 27 +++++++-
 drivers/xen/xenbus/xenbus_probe.c             | 66 +++++++++++++++++++
 include/xen/xenbus.h                          |  7 ++
 4 files changed, 139 insertions(+), 2 deletions(-)
 create mode 100644 Documentation/ABI/testing/sysfs-devices-xenbus

diff --git a/Documentation/ABI/testing/sysfs-devices-xenbus b/Documentation/ABI/testing/sysfs-devices-xenbus
new file mode 100644
index 000000000000..fd796cb4f315
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-devices-xenbus
@@ -0,0 +1,41 @@
+What:		/sys/devices/*/xenbus/event_channels
+Date:		February 2021
+Contact:	Xen Developers mailing list <xen-devel@lists.xenproject.org>
+Description:
+		Number of Xen event channels associated with a kernel based
+		paravirtualized device frontend or backend.
+
+What:		/sys/devices/*/xenbus/events
+Date:		February 2021
+Contact:	Xen Developers mailing list <xen-devel@lists.xenproject.org>
+Description:
+		Total number of Xen events received for a Xen pv device
+		frontend or backend.
+
+What:		/sys/devices/*/xenbus/jiffies_eoi_delayed
+Date:		February 2021
+Contact:	Xen Developers mailing list <xen-devel@lists.xenproject.org>
+Description:
+		Total time in jiffies that EOIs of interrupts for a Xen
+		pv device have been delayed in order to avoid stalls due
+		to event storms. A rising value is a first sign that the
+		other end of the pv device has gone rogue.
+
+What:		/sys/devices/*/xenbus/spurious_events
+Date:		February 2021
+Contact:	Xen Developers mailing list <xen-devel@lists.xenproject.org>
+Description:
+		Number of events received for a Xen pv device which did not
+		require any action. Too many spurious events in a row will
+		trigger delayed EOI processing.
+
+What:		/sys/devices/*/xenbus/spurious_threshold
+Date:		February 2021
+Contact:	Xen Developers mailing list <xen-devel@lists.xenproject.org>
+Description:
+		Controls the tolerated number of subsequent spurious events
+		before delayed EOI processing is triggered for a Xen pv
+		device. Default is 1. This can be raised in case the other
+		end of the pv device issues spurious events on a regular
+		basis but is known not to be deliberately malicious.
+		Raising the value for such cases can improve performance.
diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index a5cce4c626c2..48210c1e62e3 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -332,6 +332,8 @@ static int xen_irq_info_evtchn_setup(unsigned irq,
 
 	ret = xen_irq_info_common_setup(info, irq, IRQT_EVTCHN, evtchn, 0);
 	info->u.interdomain = dev;
+	if (dev)
+		atomic_inc(&dev->event_channels);
 
 	return ret;
 }
@@ -606,18 +608,28 @@ static void xen_irq_lateeoi_locked(struct irq_info *info, bool spurious)
 		return;
 
 	if (spurious) {
+		struct xenbus_device *dev = info->u.interdomain;
+		unsigned int threshold = 1;
+
+		if (dev && dev->spurious_threshold)
+			threshold = dev->spurious_threshold;
+
 		if ((1 << info->spurious_cnt) < (HZ << 2)) {
 			if (info->spurious_cnt != 0xFF)
 				info->spurious_cnt++;
 		}
-		if (info->spurious_cnt > 1) {
-			delay = 1 << (info->spurious_cnt - 2);
+		if (info->spurious_cnt > threshold) {
+			delay = 1 << (info->spurious_cnt - 1 - threshold);
 			if (delay > HZ)
 				delay = HZ;
 			if (!info->eoi_time)
 				info->eoi_cpu = smp_processor_id();
 			info->eoi_time = get_jiffies_64() + delay;
+			if (dev)
+				atomic_add(delay, &dev->jiffies_eoi_delayed);
 		}
+		if (dev)
+			atomic_inc(&dev->spurious_events);
 	} else {
 		info->spurious_cnt = 0;
 	}
@@ -950,6 +962,7 @@ static void __unbind_from_irq(unsigned int irq)
 
 	if (VALID_EVTCHN(evtchn)) {
 		unsigned int cpu = cpu_from_irq(irq);
+		struct xenbus_device *dev;
 
 		xen_evtchn_close(evtchn);
 
@@ -960,6 +973,11 @@ static void __unbind_from_irq(unsigned int irq)
 		case IRQT_IPI:
 			per_cpu(ipi_to_irq, cpu)[ipi_from_irq(irq)] = -1;
 			break;
+		case IRQT_EVTCHN:
+			dev = info->u.interdomain;
+			if (dev)
+				atomic_dec(&dev->event_channels);
+			break;
 		default:
 			break;
 		}
@@ -1623,6 +1641,7 @@ void handle_irq_for_port(evtchn_port_t port, struct evtchn_loop_ctrl *ctrl)
 {
 	int irq;
 	struct irq_info *info;
+	struct xenbus_device *dev;
 
 	irq = get_evtchn_to_irq(port);
 	if (irq == -1)
@@ -1654,6 +1673,10 @@ void handle_irq_for_port(evtchn_port_t port, struct evtchn_loop_ctrl *ctrl)
 	if (xchg_acquire(&info->is_active, 1))
 		return;
 
+	dev = (info->type == IRQT_EVTCHN) ? info->u.interdomain : NULL;
+	if (dev)
+		atomic_inc(&dev->events);
+
 	if (ctrl->defer_eoi) {
 		info->eoi_cpu = smp_processor_id();
 		info->irq_epoch = __this_cpu_read(irq_epoch);
diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c
index 18ffd0551b54..9494ecad3c92 100644
--- a/drivers/xen/xenbus/xenbus_probe.c
+++ b/drivers/xen/xenbus/xenbus_probe.c
@@ -206,6 +206,65 @@ void xenbus_otherend_changed(struct xenbus_watch *watch,
 }
 EXPORT_SYMBOL_GPL(xenbus_otherend_changed);
 
+#define XENBUS_SHOW_STAT(name)						\
+static ssize_t show_##name(struct device *_dev,				\
+			   struct device_attribute *attr,		\
+			   char *buf)					\
+{									\
+	struct xenbus_device *dev = to_xenbus_device(_dev);		\
+									\
+	return sprintf(buf, "%d\n", atomic_read(&dev->name));		\
+}									\
+static DEVICE_ATTR(name, 0444, show_##name, NULL)
+
+XENBUS_SHOW_STAT(event_channels);
+XENBUS_SHOW_STAT(events);
+XENBUS_SHOW_STAT(spurious_events);
+XENBUS_SHOW_STAT(jiffies_eoi_delayed);
+
+static ssize_t show_spurious_threshold(struct device *_dev,
+				       struct device_attribute *attr,
+				       char *buf)
+{
+	struct xenbus_device *dev = to_xenbus_device(_dev);
+
+	return sprintf(buf, "%d\n", dev->spurious_threshold);
+}
+
+static ssize_t set_spurious_threshold(struct device *_dev,
+				      struct device_attribute *attr,
+				      const char *buf, size_t count)
+{
+	struct xenbus_device *dev = to_xenbus_device(_dev);
+	unsigned int val;
+	ssize_t ret;
+
+	ret = kstrtouint(buf, 0, &val);
+	if (ret)
+		return ret;
+
+	dev->spurious_threshold = val;
+
+	return count;
+}
+
+static DEVICE_ATTR(spurious_threshold, 0644, show_spurious_threshold,
+		   set_spurious_threshold);
+
+static struct attribute *xenbus_attrs[] = {
+	&dev_attr_event_channels.attr,
+	&dev_attr_events.attr,
+	&dev_attr_spurious_events.attr,
+	&dev_attr_jiffies_eoi_delayed.attr,
+	&dev_attr_spurious_threshold.attr,
+	NULL
+};
+
+static const struct attribute_group xenbus_group = {
+	.name = "xenbus",
+	.attrs = xenbus_attrs,
+};
+
 int xenbus_dev_probe(struct device *_dev)
 {
 	struct xenbus_device *dev = to_xenbus_device(_dev);
@@ -253,6 +312,11 @@ int xenbus_dev_probe(struct device *_dev)
 		return err;
 	}
 
+	dev->spurious_threshold = 1;
+	if (sysfs_create_group(&dev->dev.kobj, &xenbus_group))
+		dev_warn(&dev->dev, "sysfs_create_group on %s failed.\n",
+			 dev->nodename);
+
 	return 0;
 fail_put:
 	module_put(drv->driver.owner);
@@ -269,6 +333,8 @@ int xenbus_dev_remove(struct device *_dev)
 
 	DPRINTK("%s", dev->nodename);
 
+	sysfs_remove_group(&dev->dev.kobj, &xenbus_group);
+
 	free_otherend_watch(dev);
 
 	if (drv->remove) {
diff --git a/include/xen/xenbus.h b/include/xen/xenbus.h
index 2c43b0ef1e4d..13ee375a1f05 100644
--- a/include/xen/xenbus.h
+++ b/include/xen/xenbus.h
@@ -88,6 +88,13 @@ struct xenbus_device {
 	struct completion down;
 	struct work_struct work;
 	struct semaphore reclaim_sem;
+
+	/* Event channel based statistics and settings. */
+	atomic_t event_channels;
+	atomic_t events;
+	atomic_t spurious_events;
+	atomic_t jiffies_eoi_delayed;
+	unsigned int spurious_threshold;
 };
 
 static inline struct xenbus_device *to_xenbus_device(struct device *dev)
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu Feb 11 10:18:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 10:18:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83802.156883 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lA932-000234-1L; Thu, 11 Feb 2021 10:18:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83802.156883; Thu, 11 Feb 2021 10:18:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lA931-00022r-QU; Thu, 11 Feb 2021 10:18:07 +0000
Received: by outflank-mailman (input) for mailman id 83802;
 Thu, 11 Feb 2021 10:18:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/E5q=HN=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lA931-0001VI-8G
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 10:18:07 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 09455c06-ebbd-40b2-89ca-328d4c41e203;
 Thu, 11 Feb 2021 10:17:42 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 15E18AF10;
 Thu, 11 Feb 2021 10:17:41 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 09455c06-ebbd-40b2-89ca-328d4c41e203
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613038661; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=GuZoq58aOVo3oObGxElFNA6Kt6uHHJGIx8cU5W0ZP10=;
	b=pkhP3Lm6mYei0SahyUSTm/T2kQz6sZJb4OOfYj17bDA4br/IXuvFtdkIqsVKGzyShv5OAu
	d2jT8J1SIJXORUUW+PIgw04KHUOaCD+hJsA+vIGj4gqWcvMkTnOKQA1VjrHXTh++V8bUVG
	uSR2A9MHYHVSyHQrz3LZL1TxHbdjHfI=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	netdev@vger.kernel.org,
	linux-scsi@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Jens Axboe <axboe@kernel.dk>,
	Wei Liu <wei.liu@kernel.org>,
	Paul Durrant <paul@xen.org>,
	"David S. Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v2 5/8] xen/events: link interdomain events to associated xenbus device
Date: Thu, 11 Feb 2021 11:16:13 +0100
Message-Id: <20210211101616.13788-6-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210211101616.13788-1-jgross@suse.com>
References: <20210211101616.13788-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In order to support per-device event channel settings (e.g. lateeoi
spurious event thresholds), add a xenbus device pointer to struct
irq_info and modify the related event channel binding interfaces to
take a pointer to the xenbus device as a parameter instead of the
domain id of the other side.

While at it, remove the stale prototype of bind_evtchn_to_irq_lateeoi().

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Wei Liu <wei.liu@kernel.org>
---
 drivers/block/xen-blkback/xenbus.c  |  2 +-
 drivers/net/xen-netback/interface.c | 16 +++++------
 drivers/xen/events/events_base.c    | 41 +++++++++++++++++------------
 drivers/xen/pvcalls-back.c          |  4 +--
 drivers/xen/xen-pciback/xenbus.c    |  2 +-
 drivers/xen/xen-scsiback.c          |  2 +-
 include/xen/events.h                |  7 ++---
 7 files changed, 41 insertions(+), 33 deletions(-)

diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
index 9860d4842f36..c2aaf690352c 100644
--- a/drivers/block/xen-blkback/xenbus.c
+++ b/drivers/block/xen-blkback/xenbus.c
@@ -245,7 +245,7 @@ static int xen_blkif_map(struct xen_blkif_ring *ring, grant_ref_t *gref,
 	if (req_prod - rsp_prod > size)
 		goto fail;
 
-	err = bind_interdomain_evtchn_to_irqhandler_lateeoi(blkif->domid,
+	err = bind_interdomain_evtchn_to_irqhandler_lateeoi(blkif->be->dev,
 			evtchn, xen_blkif_be_int, 0, "blkif-backend", ring);
 	if (err < 0)
 		goto fail;
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index e02a4fbb74de..50a94e58c150 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -630,13 +630,13 @@ int xenvif_connect_ctrl(struct xenvif *vif, grant_ref_t ring_ref,
 			unsigned int evtchn)
 {
 	struct net_device *dev = vif->dev;
+	struct xenbus_device *xendev = xenvif_to_xenbus_device(vif);
 	void *addr;
 	struct xen_netif_ctrl_sring *shared;
 	RING_IDX rsp_prod, req_prod;
 	int err;
 
-	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(vif),
-				     &ring_ref, 1, &addr);
+	err = xenbus_map_ring_valloc(xendev, &ring_ref, 1, &addr);
 	if (err)
 		goto err;
 
@@ -650,7 +650,7 @@ int xenvif_connect_ctrl(struct xenvif *vif, grant_ref_t ring_ref,
 	if (req_prod - rsp_prod > RING_SIZE(&vif->ctrl))
 		goto err_unmap;
 
-	err = bind_interdomain_evtchn_to_irq_lateeoi(vif->domid, evtchn);
+	err = bind_interdomain_evtchn_to_irq_lateeoi(xendev, evtchn);
 	if (err < 0)
 		goto err_unmap;
 
@@ -673,8 +673,7 @@ int xenvif_connect_ctrl(struct xenvif *vif, grant_ref_t ring_ref,
 	vif->ctrl_irq = 0;
 
 err_unmap:
-	xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(vif),
-				vif->ctrl.sring);
+	xenbus_unmap_ring_vfree(xendev, vif->ctrl.sring);
 	vif->ctrl.sring = NULL;
 
 err:
@@ -719,6 +718,7 @@ int xenvif_connect_data(struct xenvif_queue *queue,
 			unsigned int tx_evtchn,
 			unsigned int rx_evtchn)
 {
+	struct xenbus_device *dev = xenvif_to_xenbus_device(queue->vif);
 	struct task_struct *task;
 	int err;
 
@@ -755,7 +755,7 @@ int xenvif_connect_data(struct xenvif_queue *queue,
 	if (tx_evtchn == rx_evtchn) {
 		/* feature-split-event-channels == 0 */
 		err = bind_interdomain_evtchn_to_irqhandler_lateeoi(
-			queue->vif->domid, tx_evtchn, xenvif_interrupt, 0,
+			dev, tx_evtchn, xenvif_interrupt, 0,
 			queue->name, queue);
 		if (err < 0)
 			goto err;
@@ -766,7 +766,7 @@ int xenvif_connect_data(struct xenvif_queue *queue,
 		snprintf(queue->tx_irq_name, sizeof(queue->tx_irq_name),
 			 "%s-tx", queue->name);
 		err = bind_interdomain_evtchn_to_irqhandler_lateeoi(
-			queue->vif->domid, tx_evtchn, xenvif_tx_interrupt, 0,
+			dev, tx_evtchn, xenvif_tx_interrupt, 0,
 			queue->tx_irq_name, queue);
 		if (err < 0)
 			goto err;
@@ -776,7 +776,7 @@ int xenvif_connect_data(struct xenvif_queue *queue,
 		snprintf(queue->rx_irq_name, sizeof(queue->rx_irq_name),
 			 "%s-rx", queue->name);
 		err = bind_interdomain_evtchn_to_irqhandler_lateeoi(
-			queue->vif->domid, rx_evtchn, xenvif_rx_interrupt, 0,
+			dev, rx_evtchn, xenvif_rx_interrupt, 0,
 			queue->rx_irq_name, queue);
 		if (err < 0)
 			goto err;
diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index f7e22330dcef..a5cce4c626c2 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -63,6 +63,7 @@
 #include <xen/interface/physdev.h>
 #include <xen/interface/sched.h>
 #include <xen/interface/vcpu.h>
+#include <xen/xenbus.h>
 #include <asm/hw_irq.h>
 
 #include "events_internal.h"
@@ -121,6 +122,7 @@ struct irq_info {
 			unsigned char flags;
 			uint16_t domid;
 		} pirq;
+		struct xenbus_device *interdomain;
 	} u;
 };
 
@@ -322,11 +324,16 @@ static int xen_irq_info_common_setup(struct irq_info *info,
 }
 
 static int xen_irq_info_evtchn_setup(unsigned irq,
-				     evtchn_port_t evtchn)
+				     evtchn_port_t evtchn,
+				     struct xenbus_device *dev)
 {
 	struct irq_info *info = info_for_irq(irq);
+	int ret;
 
-	return xen_irq_info_common_setup(info, irq, IRQT_EVTCHN, evtchn, 0);
+	ret = xen_irq_info_common_setup(info, irq, IRQT_EVTCHN, evtchn, 0);
+	info->u.interdomain = dev;
+
+	return ret;
 }
 
 static int xen_irq_info_ipi_setup(unsigned cpu,
@@ -1158,7 +1165,8 @@ int xen_pirq_from_irq(unsigned irq)
 }
 EXPORT_SYMBOL_GPL(xen_pirq_from_irq);
 
-static int bind_evtchn_to_irq_chip(evtchn_port_t evtchn, struct irq_chip *chip)
+static int bind_evtchn_to_irq_chip(evtchn_port_t evtchn, struct irq_chip *chip,
+				   struct xenbus_device *dev)
 {
 	int irq;
 	int ret;
@@ -1178,7 +1186,7 @@ static int bind_evtchn_to_irq_chip(evtchn_port_t evtchn, struct irq_chip *chip)
 		irq_set_chip_and_handler_name(irq, chip,
 					      handle_edge_irq, "event");
 
-		ret = xen_irq_info_evtchn_setup(irq, evtchn);
+		ret = xen_irq_info_evtchn_setup(irq, evtchn, dev);
 		if (ret < 0) {
 			__unbind_from_irq(irq);
 			irq = ret;
@@ -1205,7 +1213,7 @@ static int bind_evtchn_to_irq_chip(evtchn_port_t evtchn, struct irq_chip *chip)
 
 int bind_evtchn_to_irq(evtchn_port_t evtchn)
 {
-	return bind_evtchn_to_irq_chip(evtchn, &xen_dynamic_chip);
+	return bind_evtchn_to_irq_chip(evtchn, &xen_dynamic_chip, NULL);
 }
 EXPORT_SYMBOL_GPL(bind_evtchn_to_irq);
 
@@ -1254,27 +1262,27 @@ static int bind_ipi_to_irq(unsigned int ipi, unsigned int cpu)
 	return irq;
 }
 
-static int bind_interdomain_evtchn_to_irq_chip(unsigned int remote_domain,
+static int bind_interdomain_evtchn_to_irq_chip(struct xenbus_device *dev,
 					       evtchn_port_t remote_port,
 					       struct irq_chip *chip)
 {
 	struct evtchn_bind_interdomain bind_interdomain;
 	int err;
 
-	bind_interdomain.remote_dom  = remote_domain;
+	bind_interdomain.remote_dom  = dev->otherend_id;
 	bind_interdomain.remote_port = remote_port;
 
 	err = HYPERVISOR_event_channel_op(EVTCHNOP_bind_interdomain,
 					  &bind_interdomain);
 
 	return err ? : bind_evtchn_to_irq_chip(bind_interdomain.local_port,
-					       chip);
+					       chip, dev);
 }
 
-int bind_interdomain_evtchn_to_irq_lateeoi(unsigned int remote_domain,
+int bind_interdomain_evtchn_to_irq_lateeoi(struct xenbus_device *dev,
 					   evtchn_port_t remote_port)
 {
-	return bind_interdomain_evtchn_to_irq_chip(remote_domain, remote_port,
+	return bind_interdomain_evtchn_to_irq_chip(dev, remote_port,
 						   &xen_lateeoi_chip);
 }
 EXPORT_SYMBOL_GPL(bind_interdomain_evtchn_to_irq_lateeoi);
@@ -1387,7 +1395,7 @@ static int bind_evtchn_to_irqhandler_chip(evtchn_port_t evtchn,
 {
 	int irq, retval;
 
-	irq = bind_evtchn_to_irq_chip(evtchn, chip);
+	irq = bind_evtchn_to_irq_chip(evtchn, chip, NULL);
 	if (irq < 0)
 		return irq;
 	retval = request_irq(irq, handler, irqflags, devname, dev_id);
@@ -1422,14 +1430,13 @@ int bind_evtchn_to_irqhandler_lateeoi(evtchn_port_t evtchn,
 EXPORT_SYMBOL_GPL(bind_evtchn_to_irqhandler_lateeoi);
 
 static int bind_interdomain_evtchn_to_irqhandler_chip(
-		unsigned int remote_domain, evtchn_port_t remote_port,
+		struct xenbus_device *dev, evtchn_port_t remote_port,
 		irq_handler_t handler, unsigned long irqflags,
 		const char *devname, void *dev_id, struct irq_chip *chip)
 {
 	int irq, retval;
 
-	irq = bind_interdomain_evtchn_to_irq_chip(remote_domain, remote_port,
-						  chip);
+	irq = bind_interdomain_evtchn_to_irq_chip(dev, remote_port, chip);
 	if (irq < 0)
 		return irq;
 
@@ -1442,14 +1449,14 @@ static int bind_interdomain_evtchn_to_irqhandler_chip(
 	return irq;
 }
 
-int bind_interdomain_evtchn_to_irqhandler_lateeoi(unsigned int remote_domain,
+int bind_interdomain_evtchn_to_irqhandler_lateeoi(struct xenbus_device *dev,
 						  evtchn_port_t remote_port,
 						  irq_handler_t handler,
 						  unsigned long irqflags,
 						  const char *devname,
 						  void *dev_id)
 {
-	return bind_interdomain_evtchn_to_irqhandler_chip(remote_domain,
+	return bind_interdomain_evtchn_to_irqhandler_chip(dev,
 				remote_port, handler, irqflags, devname,
 				dev_id, &xen_lateeoi_chip);
 }
@@ -1723,7 +1730,7 @@ void rebind_evtchn_irq(evtchn_port_t evtchn, int irq)
 	   so there should be a proper type */
 	BUG_ON(info->type == IRQT_UNBOUND);
 
-	(void)xen_irq_info_evtchn_setup(irq, evtchn);
+	(void)xen_irq_info_evtchn_setup(irq, evtchn, NULL);
 
 	mutex_unlock(&irq_mapping_update_lock);
 
diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
index a7d293fa8d14..b47fd8435061 100644
--- a/drivers/xen/pvcalls-back.c
+++ b/drivers/xen/pvcalls-back.c
@@ -348,7 +348,7 @@ static struct sock_mapping *pvcalls_new_active_socket(
 	map->bytes = page;
 
 	ret = bind_interdomain_evtchn_to_irqhandler_lateeoi(
-			fedata->dev->otherend_id, evtchn,
+			fedata->dev, evtchn,
 			pvcalls_back_conn_event, 0, "pvcalls-backend", map);
 	if (ret < 0)
 		goto out;
@@ -948,7 +948,7 @@ static int backend_connect(struct xenbus_device *dev)
 		goto error;
 	}
 
-	err = bind_interdomain_evtchn_to_irq_lateeoi(dev->otherend_id, evtchn);
+	err = bind_interdomain_evtchn_to_irq_lateeoi(dev, evtchn);
 	if (err < 0)
 		goto error;
 	fedata->irq = err;
diff --git a/drivers/xen/xen-pciback/xenbus.c b/drivers/xen/xen-pciback/xenbus.c
index e7c692cfb2cf..5188f02e75fb 100644
--- a/drivers/xen/xen-pciback/xenbus.c
+++ b/drivers/xen/xen-pciback/xenbus.c
@@ -124,7 +124,7 @@ static int xen_pcibk_do_attach(struct xen_pcibk_device *pdev, int gnt_ref,
 	pdev->sh_info = vaddr;
 
 	err = bind_interdomain_evtchn_to_irqhandler_lateeoi(
-		pdev->xdev->otherend_id, remote_evtchn, xen_pcibk_handle_event,
+		pdev->xdev, remote_evtchn, xen_pcibk_handle_event,
 		0, DRV_NAME, pdev);
 	if (err < 0) {
 		xenbus_dev_fatal(pdev->xdev, err,
diff --git a/drivers/xen/xen-scsiback.c b/drivers/xen/xen-scsiback.c
index 862162dca33c..8b59897b2df9 100644
--- a/drivers/xen/xen-scsiback.c
+++ b/drivers/xen/xen-scsiback.c
@@ -799,7 +799,7 @@ static int scsiback_init_sring(struct vscsibk_info *info, grant_ref_t ring_ref,
 	sring = (struct vscsiif_sring *)area;
 	BACK_RING_INIT(&info->ring, sring, PAGE_SIZE);
 
-	err = bind_interdomain_evtchn_to_irq_lateeoi(info->domid, evtchn);
+	err = bind_interdomain_evtchn_to_irq_lateeoi(info->dev, evtchn);
 	if (err < 0)
 		goto unmap_page;
 
diff --git a/include/xen/events.h b/include/xen/events.h
index 8ec418e30c7f..c204262d9fc2 100644
--- a/include/xen/events.h
+++ b/include/xen/events.h
@@ -12,10 +12,11 @@
 #include <asm/xen/hypercall.h>
 #include <asm/xen/events.h>
 
+struct xenbus_device;
+
 unsigned xen_evtchn_nr_channels(void);
 
 int bind_evtchn_to_irq(evtchn_port_t evtchn);
-int bind_evtchn_to_irq_lateeoi(evtchn_port_t evtchn);
 int bind_evtchn_to_irqhandler(evtchn_port_t evtchn,
 			      irq_handler_t handler,
 			      unsigned long irqflags, const char *devname,
@@ -35,9 +36,9 @@ int bind_ipi_to_irqhandler(enum ipi_vector ipi,
 			   unsigned long irqflags,
 			   const char *devname,
 			   void *dev_id);
-int bind_interdomain_evtchn_to_irq_lateeoi(unsigned int remote_domain,
+int bind_interdomain_evtchn_to_irq_lateeoi(struct xenbus_device *dev,
 					   evtchn_port_t remote_port);
-int bind_interdomain_evtchn_to_irqhandler_lateeoi(unsigned int remote_domain,
+int bind_interdomain_evtchn_to_irqhandler_lateeoi(struct xenbus_device *dev,
 						  evtchn_port_t remote_port,
 						  irq_handler_t handler,
 						  unsigned long irqflags,
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu Feb 11 10:27:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 10:27:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83812.156895 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lA9Bv-0003OQ-4C; Thu, 11 Feb 2021 10:27:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83812.156895; Thu, 11 Feb 2021 10:27:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lA9Bv-0003OJ-0l; Thu, 11 Feb 2021 10:27:19 +0000
Received: by outflank-mailman (input) for mailman id 83812;
 Thu, 11 Feb 2021 10:27:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lA9Bt-0003OB-Q3; Thu, 11 Feb 2021 10:27:17 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lA9Bt-0001S8-Ln; Thu, 11 Feb 2021 10:27:17 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lA9Bt-0000Wl-Fq; Thu, 11 Feb 2021 10:27:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lA9Bt-0000IX-FM; Thu, 11 Feb 2021 10:27:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Zp0FmVY7X8XvTaBKe9vj2kb1oY8UgMp2n21C36Xfqic=; b=TvTPWQI78DPTHNd25ry02XFsoE
	gUDy6r9c4HfuiZKSgrzj+S6IcgGZ4c6GYHPw3Cd0pdn2Us0mFZbTtJ7UNLNRRrUC8bybwWg5lMzsu
	Dn206Pzj1LG6VftLquEgV4QNCLoZCKmg+86ZvFtmTpC5N0NVYtuUORGljkJTSPcttQWY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159198-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 159198: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=124f1dd1ee1140b441151043aacbe5d33bb5ab79
X-Osstest-Versions-That:
    ovmf=472276f59bba2b22bb882c5c6f5479754e68d467
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 11 Feb 2021 10:27:17 +0000

flight 159198 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159198/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 124f1dd1ee1140b441151043aacbe5d33bb5ab79
baseline version:
 ovmf                 472276f59bba2b22bb882c5c6f5479754e68d467

Last test of basis   159143  2021-02-08 16:09:49 Z    2 days
Testing same since   159198  2021-02-10 11:26:55 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Bob Feng <bob.c.feng@intel.com>
  Leif Lindholm <leif@nuviainc.com>
  Loo Tung Lun <tung.lun.loo@intel.com>
  Matthew Carlson <matthewfcarlson@gmail.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   472276f59b..124f1dd1ee  124f1dd1ee1140b441151043aacbe5d33bb5ab79 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Feb 11 10:37:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 10:37:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83816.156910 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lA9LL-0004Ox-UG; Thu, 11 Feb 2021 10:37:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83816.156910; Thu, 11 Feb 2021 10:37:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lA9LL-0004Oq-R7; Thu, 11 Feb 2021 10:37:03 +0000
Received: by outflank-mailman (input) for mailman id 83816;
 Thu, 11 Feb 2021 10:37:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cmTu=HN=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lA9LK-0004Ol-H0
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 10:37:02 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2ecda26f-8d8f-4e22-bd78-d1343834501c;
 Thu, 11 Feb 2021 10:37:01 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7E8EBADCD;
 Thu, 11 Feb 2021 10:37:00 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2ecda26f-8d8f-4e22-bd78-d1343834501c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613039820; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=KIKt2IbxDJ+JR+5PH3MQz09qkcO3HbCdRbYVSX8ZwhI=;
	b=j4y2aDoWqmd7hjQAQh7dqAwxVyn+H5MqPwjIcLe03Qtc3Z1vlFRUJiHQsxx7OPoGzQBUad
	aEWlD4y7vM7I/ohV/TBF9YyksvvYuF5ycROe3jkagkKC/r8QHmZ0THYkHHxY0kWVuonqB3
	Le5eRlkjdg9WzspkPk53ZgKhSFsalsU=
Subject: Re: [PATCH] VMX: use a single, global APIC access page
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Kevin Tian <kevin.tian@intel.com>, Julien Grall <julien@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
References: <1b6a411b-84e7-bfb1-647e-511a13df838c@suse.com>
 <YCTuq5b130PR6G35@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <7abc515b-d652-3d39-6038-99966deafdf8@suse.com>
Date: Thu, 11 Feb 2021 11:36:59 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <YCTuq5b130PR6G35@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11.02.2021 09:45, Roger Pau Monné wrote:
> On Wed, Feb 10, 2021 at 05:48:26PM +0100, Jan Beulich wrote:
>> I did further consider not allocating any real page at all, but just
>> using the address of some unpopulated space (which would require
>> announcing this page as reserved to Dom0, so it wouldn't put any PCI
>> MMIO BARs there). But I thought this would be too controversial, because
>> of the possible risks associated with this.
> 
> No, Xen is not capable of allocating a suitable unpopulated page IMO,
> so let's not go down that route. Wasting one RAM page seems perfectly
> fine to me.

Why would Xen not be able to, in principle? It may be difficult,
but there may also be pages we could declare we know we can use
for this purpose.

>> @@ -3216,53 +3211,28 @@ static int vmx_alloc_vlapic_mapping(stru
>>      if ( !cpu_has_vmx_virtualize_apic_accesses )
>>          return 0;
>>  
>> -    pg = alloc_domheap_page(d, MEMF_no_refcount);
>> +    pg = alloc_domheap_page(NULL, 0);
>>      if ( !pg )
>>          return -ENOMEM;
>>  
>> -    if ( !get_page_and_type(pg, d, PGT_writable_page) )
>> -    {
>> -        /*
>> -         * The domain can't possibly know about this page yet, so failure
>> -         * here is a clear indication of something fishy going on.
>> -         */
>> -        domain_crash(d);
>> -        return -ENODATA;
>> -    }
>> -
>>      mfn = page_to_mfn(pg);
>>      clear_domain_page(mfn);
>> -    d->arch.hvm.vmx.apic_access_mfn = mfn;
>> +    apic_access_mfn = mfn;
>>  
>> -    return set_mmio_p2m_entry(d, gaddr_to_gfn(APIC_DEFAULT_PHYS_BASE), mfn,
>> -                              PAGE_ORDER_4K);
>> -}
>> -
>> -static void vmx_free_vlapic_mapping(struct domain *d)
>> -{
>> -    mfn_t mfn = d->arch.hvm.vmx.apic_access_mfn;
>> -
>> -    d->arch.hvm.vmx.apic_access_mfn = _mfn(0);
>> -    if ( !mfn_eq(mfn, _mfn(0)) )
>> -    {
>> -        struct page_info *pg = mfn_to_page(mfn);
>> -
>> -        put_page_alloc_ref(pg);
>> -        put_page_and_type(pg);
>> -    }
>> +    return 0;
>>  }
>>  
>>  static void vmx_install_vlapic_mapping(struct vcpu *v)
>>  {
>>      paddr_t virt_page_ma, apic_page_ma;
>>  
>> -    if ( mfn_eq(v->domain->arch.hvm.vmx.apic_access_mfn, _mfn(0)) )
>> +    if ( mfn_eq(apic_access_mfn, _mfn(0)) )
> 
> You should check if the domain has a vlapic I think, since now
> apic_access_mfn is global and will be set regardless of whether the
> domain has vlapic enabled or not.
> 
> Previously apic_access_mfn was only allocated if the domain had vlapic
> enabled.

Oh, indeed - thanks for spotting. And also in domain_creation_finished().

>>          return;
>>  
>>      ASSERT(cpu_has_vmx_virtualize_apic_accesses);
>>  
>>      virt_page_ma = page_to_maddr(vcpu_vlapic(v)->regs_page);
>> -    apic_page_ma = mfn_to_maddr(v->domain->arch.hvm.vmx.apic_access_mfn);
>> +    apic_page_ma = mfn_to_maddr(apic_access_mfn);
>>  
>>      vmx_vmcs_enter(v);
>>      __vmwrite(VIRTUAL_APIC_PAGE_ADDR, virt_page_ma);
> 
> I would consider setting up the vmcs and adding the page to the p2m in
> the same function, and likely call it from vlapic_init. We could have
> a domain_setup_apic in hvm_function_table that takes care of all this.

Well, I'd prefer to do this just once per domain without needing
to special case this on e.g. vCPU 0.

>> --- a/xen/include/asm-x86/p2m.h
>> +++ b/xen/include/asm-x86/p2m.h
>> @@ -935,6 +935,9 @@ static inline unsigned int p2m_get_iommu
>>          flags = IOMMUF_readable;
>>          if ( !rangeset_contains_singleton(mmio_ro_ranges, mfn_x(mfn)) )
>>              flags |= IOMMUF_writable;
>> +        /* VMX'es APIC access page is global and hence has no owner. */
>> +        if ( mfn_valid(mfn) && !page_get_owner(mfn_to_page(mfn)) )
>> +            flags = 0;
> 
> Is it fine to have this page accessible to devices if the page tables
> are shared between the CPU and the IOMMU?

No, it's not, but what do you do? As said elsewhere, devices
gaining more access than is helpful is the price we pay for
being able to share page tables. But ...

> Is it possible for devices to write to it?

... thinking about it - they would be able to access it only
when interrupt remapping is off. Otherwise the entire range
0xFEExxxxx gets treated differently altogether by the IOMMU,
and is not subject to DMA remapping. IOW it shouldn't even
matter what flags we put in there (and I'd be far less
concerned about the no-intremap case). What this change
matters for then is only to avoid an unnecessary call to
iommu_map() (and, for small enough domain, extra intermediate
page tables to be allocated).

But let me really split this off of this patch.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Feb 11 10:53:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 10:53:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83820.156928 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lA9bD-0006Fs-DJ; Thu, 11 Feb 2021 10:53:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83820.156928; Thu, 11 Feb 2021 10:53:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lA9bD-0006Fl-9v; Thu, 11 Feb 2021 10:53:27 +0000
Received: by outflank-mailman (input) for mailman id 83820;
 Thu, 11 Feb 2021 10:53:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cmTu=HN=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lA9bC-0006Fg-1G
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 10:53:26 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3e292099-fb78-4098-90fa-f55e5f101bf8;
 Thu, 11 Feb 2021 10:53:25 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8840DB11A;
 Thu, 11 Feb 2021 10:53:23 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3e292099-fb78-4098-90fa-f55e5f101bf8
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613040804; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=P9xCMOru1/Zj81RD4aoWE/w0ZfHdvMe1Cb500/kNnBc=;
	b=s0OmhzXzKBbfO921BFeUxxT/ihk9SFXl7epMlPgOjbgR6yN2M4AUqSnSyTjwhH7Dd3Fp7s
	FeDSPBTOPkers18yXi01hmlp6e518rfVmCeJGbzmmAeEgqMq+vGDAIo5LvC7A+YVKAVxbC
	Y9vMsgVC9OJaTxyCYqIQ2pnYd+vRDiw=
Subject: Re: [PATCH v2 4/8] xen/netback: fix spurious event detection for
 common event case
To: Juergen Gross <jgross@suse.com>
Cc: Wei Liu <wei.liu@kernel.org>, Paul Durrant <paul@xen.org>,
 "David S. Miller" <davem@davemloft.net>, Jakub Kicinski <kuba@kernel.org>,
 xen-devel@lists.xenproject.org, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org
References: <20210211101616.13788-1-jgross@suse.com>
 <20210211101616.13788-5-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c164ff57-69f2-8a5f-43f4-ec170bd99c22@suse.com>
Date: Thu, 11 Feb 2021 11:53:17 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210211101616.13788-5-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 11.02.2021 11:16, Juergen Gross wrote:
> --- a/drivers/net/xen-netback/interface.c
> +++ b/drivers/net/xen-netback/interface.c
> @@ -162,13 +162,15 @@ irqreturn_t xenvif_interrupt(int irq, void *dev_id)
>  {
>  	struct xenvif_queue *queue = dev_id;
>  	int old;
> +	bool has_rx, has_tx;
>  
>  	old = atomic_fetch_or(NETBK_COMMON_EOI, &queue->eoi_pending);
>  	WARN(old, "Interrupt while EOI pending\n");
>  
> -	/* Use bitwise or as we need to call both functions. */
> -	if ((!xenvif_handle_tx_interrupt(queue) |
> -	     !xenvif_handle_rx_interrupt(queue))) {
> +	has_tx = xenvif_handle_tx_interrupt(queue);
> +	has_rx = xenvif_handle_rx_interrupt(queue);
> +
> +	if (!has_rx && !has_tx) {
>  		atomic_andnot(NETBK_COMMON_EOI, &queue->eoi_pending);
>  		xen_irq_lateeoi(irq, XEN_EOI_FLAG_SPURIOUS);
>  	}
> 

Ah yes, what was originally meant really was

	if (!(xenvif_handle_tx_interrupt(queue) |
	      xenvif_handle_rx_interrupt(queue))) {

(also hinted at by the otherwise pointless inner parentheses),
which you simply write in an alternative way.

Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Thu Feb 11 11:05:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 11:05:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83822.156940 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lA9mt-0007Ms-Hz; Thu, 11 Feb 2021 11:05:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83822.156940; Thu, 11 Feb 2021 11:05:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lA9mt-0007Ml-EQ; Thu, 11 Feb 2021 11:05:31 +0000
Received: by outflank-mailman (input) for mailman id 83822;
 Thu, 11 Feb 2021 11:05:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cmTu=HN=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lA9ms-0007Mg-Fb
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 11:05:30 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id daa49be3-fe1a-4567-8787-db69d8ba288c;
 Thu, 11 Feb 2021 11:05:29 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 61C60AE65;
 Thu, 11 Feb 2021 11:05:28 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: daa49be3-fe1a-4567-8787-db69d8ba288c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613041528; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=uwIx6/+561xaswH9ZaAMNmCXpvb2vKxetk90b+uGPPM=;
	b=EvdAQVKUHpJL6h1yVYHmx0mRaIGYNyMj42iP9LEfeOnu7lvRZMnKZRmOvavjS5H6fnHxFG
	Ie8GRmoinKOQBse07M+9GLSxbKQyak+PirFV5kQ934ZjOyiZT0aLKkLpJncNpZ4ISiF+4W
	9wBg6dHWyBEC+2umfhoY+y2IGFM0Guo=
Subject: Re: Stable library ABI compatibility checking
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>, George Dunlap
 <george.dunlap@citrix.com>, Juergen Gross <jgross@suse.com>,
 xen-devel <xen-devel@lists.xenproject.org>
References: <22a01569-1ea0-bb87-eda1-1450d0229cf7@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <816d28b2-df85-9259-de96-5a6654c8b341@suse.com>
Date: Thu, 11 Feb 2021 12:05:27 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <22a01569-1ea0-bb87-eda1-1450d0229cf7@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11.02.2021 02:08, Andrew Cooper wrote:
> Hello,
> 
> Last things first, some examples:
> 
> http://xenbits.xen.org/people/andrewcoop/abi/libxenevtchn-1.1_to_1.2.html
> http://xenbits.xen.org/people/andrewcoop/abi/libxenforeignmemory-1.3_to_1.4.html
> 
> These are an ABI compatibility report between RELEASE-4.14.0 and staging.
> 
> They're performed with abi-dumper (takes a shared object and header
> file(s) and write out a text "dump" file which happens to be a perl
> hash) and abi-compliance-checker checker, which takes two dumps and
> produces the HTML reports linked above.  They're both Debian tools
> originally, but have found their way across the ecosystem.  They have no
> impact on build time (when invoked correctly).
> 
> I'm encouraged to see that the foreignmem analysis has even spotted that
> we deliberately renamed one parameter to clarify its purpose.
> 
> 
> For the stable libraries, the RELEASE-$X.$Y.0 tag is the formal point
> when accumulated changes in staging become fixed.  What we ought to be
> doing is taking a "dump" of libraries at this point, and publishing
> them, probably on xenbits.
> 
> Subsequent builds on all the staging branches should be performing an
> ABI check against the appropriate lib version.  This will let us catch
> breakages in staging (c/s e8af54084586f4) as well as breakages if we
> ever need to backport changes to the libs.
> 
> For libraries wrapped by Juergen's tools/libs/ common-makefile changes,
> adding ABI dumping is easy.  The only complication is needing to build
> everything with "-Og -g", but this is easy to arrange, and frankly ought
> to be the default for debug builds anyway (the current use of -O0 is
> silly and interferes with common distro hardening settings).
> 
> What I propose is tweaking the library build to write out
> lib$FOO.so.$X.$Y-$(ARCH).abi.dump files.  A pristine set of these should
> be put somewhere on xenbits, and a task put on the release technicians'
> checklist for future releases.
> 
> That way, subsequent builds which have these tools available can include
> a call to abi-compliance-checker between the authoritative copy and the
> one from the local build, which will make the report available, and exit
> nonzero on breaking problems.
> 
> 
> To make the pristine set, I need to retrofit the tooling to 4.14 and
> ideally 4.13.  This is in contravention of our normal policy of not
> backporting features, but I think, being optional build-time-only
> tooling, it is worthy of an exception considering the gains we get
> (specifically - to be able to check for ABI breakages on these branches
> in OSSTest).  Backporting to 4.12 and older is far more invasive, due to
> it being before the library build systems were common'd.
> 
> 
> Anyway, thoughts?

+1

Not sure about the backporting effects - tools/libs/ had quite a bit
less content in 4.14 and older, so the coverage would be smaller.
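As a rough sketch, the dump-and-compare flow described above could look
like the following shell fragment. The abi-dumper and
abi-compliance-checker options are real, but the library name, versions,
and the include/ header path are purely illustrative:

```shell
# Illustrative names/versions -- substitute the real library and release.
LIB=libxenevtchn
OLD_VER=1.1
NEW_VER=1.2
ARCH=x86_64
DUMP="${LIB}.so.${NEW_VER}-${ARCH}.abi.dump"

if command -v abi-dumper >/dev/null 2>&1; then
    # Dump the ABI of the freshly built library (the build needs
    # "-Og -g" so the DWARF debug info is present).
    abi-dumper "${LIB}.so.${NEW_VER}" -o "${DUMP}" \
        -lver "${NEW_VER}" -public-headers include/

    # Compare against the pristine dump published at release time;
    # exits nonzero when an incompatible change is found.
    abi-compliance-checker -l "${LIB}" \
        -old "${LIB}.so.${OLD_VER}-${ARCH}.abi.dump" -new "${DUMP}"
fi

echo "${DUMP}"
```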

Jan


From xen-devel-bounces@lists.xenproject.org Thu Feb 11 11:16:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 11:16:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83824.156952 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lA9xk-0008TC-Jx; Thu, 11 Feb 2021 11:16:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83824.156952; Thu, 11 Feb 2021 11:16:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lA9xk-0008T5-GY; Thu, 11 Feb 2021 11:16:44 +0000
Received: by outflank-mailman (input) for mailman id 83824;
 Thu, 11 Feb 2021 11:16:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=szQa=HN=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lA9xj-0008T0-9n
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 11:16:43 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 29fb4284-ce13-470b-82bd-2c8e8a3d203b;
 Thu, 11 Feb 2021 11:16:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 29fb4284-ce13-470b-82bd-2c8e8a3d203b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613042202;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=QfSO8XJFp/gqAwCpZEx6So+GBHispVg2IZNXOOHeBu4=;
  b=Rh08ONvQhPIpBD51eLG82AB70Qir3ymUlVHZweBDMx81ZbR0XKiTBNnm
   3aU6guf7rK2+RXRzt2ZrQ/La6T03Ua2zZH2xhx+ddpwjW1TTA5V/rRtRe
   mJRWsBLn75lOZZcc/FLhNzI2X0Zp9QFVYmxjNbCDrFnx61Sw8jjthlCOG
   I=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 38387673
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,170,1610427600"; 
   d="scan'208";a="38387673"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Ka0JUrxY9VPC0qjTWalL9FWewvVAI+noiDNmVP0PugNXiW5eR58l2SHc3GtEkMbBKNzm5j+WwNbry+dQmIhw2wI1H7X1+bATnGQ/sQJoNdw+wLGhjPpIBtFmKoYCs2zCMbXoTaFsIuwql7ISXiWN2m+KhSc3qzpWk1aGNwsqz7whwZn8gh/74J6Hzal7Oow/72e4LWyE18ftKe9dZoEC5d95FORjuU+Ny5+3MkzhaT3gtrFG6ReA9mM1OwCxawwsZ3VsFXPfcU3J3hnFhwWL6bW4uRz0V0dqXuhZMLVc25AVTxpYko60G2Y0xPdIkR40scWl+ZJy6v9FmvmGo2UbgA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3tW8yrBgPPNcCOp7Oxg8mDF0D1A6IN1cdlEKzp2AIv0=;
 b=C/t9hnV3Ravgtu55iwVxPP133NzkCiOE4kXnvgshsnp4sR9Nx63p9HBQgSWM+f9EhqYf+MX/HHPHTLDNxzOiJ8/3XexGLF+YoPtkIhIi8Czo9KwFezrkuQsj/DeaEBSGBLScWWLQ7Fmbo0+ThxCQnTpMCFN116+CLMeY7zt9gUjnvSdLHBTWpnk6bNJ0IuSZo0kqeVeiQJNi33fTTaa+PafNwZDgtbf+WGGSLAmhYZ9c4LJDeVZs/Q1XQqYxlsXRCPgSkOlmQrJ/vprX0khfOiNpkz8zp7WBUa3kHFcBv8Js8E+iBBPmUswYb7ntrvc9yx+8y/6ajOyRUvUb8V0S2Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3tW8yrBgPPNcCOp7Oxg8mDF0D1A6IN1cdlEKzp2AIv0=;
 b=xP2KqeLZW01iYhAQkqAL7ZJEKbeZeB+0IyaLD9qQiFtwXR7NVvAvARx1kUBqdTWwvo2GtJ4hmpm5R1gKO+W2L5edBrpemIxp6kpV5P9QrVj0vhlUK4pz3+LPBH+TimtTu/qxBARALYWB9bwwqUeOL5SQ6evFKVcsAFNLh2Iboqw=
Date: Thu, 11 Feb 2021 12:16:29 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Kevin
 Tian" <kevin.tian@intel.com>, Julien Grall <julien@xen.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] VMX: use a single, global APIC access page
Message-ID: <YCUSDSYpS5X+AZco@Air-de-Roger>
References: <1b6a411b-84e7-bfb1-647e-511a13df838c@suse.com>
 <YCTuq5b130PR6G35@Air-de-Roger>
 <7abc515b-d652-3d39-6038-99966deafdf8@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <7abc515b-d652-3d39-6038-99966deafdf8@suse.com>
X-ClientProxiedBy: MR2P264CA0098.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:33::14) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 13afd357-f49a-4dad-9f29-08d8ce7e7e58
X-MS-TrafficTypeDiagnostic: DM6PR03MB4298:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB4298DFDD894B1BF4F3278D828F8C9@DM6PR03MB4298.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 13afd357-f49a-4dad-9f29-08d8ce7e7e58
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Feb 2021 11:16:35.5227
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ocyCiUh5XCIfrm6r7qZUsQJdwTl7z388jUrpqe1P8KbI4UsXwaBFTPNf6YD7HtQpyx4ccdwV2Yowz/ddYqqlwg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4298
X-OriginatorOrg: citrix.com

On Thu, Feb 11, 2021 at 11:36:59AM +0100, Jan Beulich wrote:
> On 11.02.2021 09:45, Roger Pau Monné wrote:
> > On Wed, Feb 10, 2021 at 05:48:26PM +0100, Jan Beulich wrote:
> >> I did further consider not allocating any real page at all, but just
> >> using the address of some unpopulated space (which would require
> >> announcing this page as reserved to Dom0, so it wouldn't put any PCI
> >> MMIO BARs there). But I thought this would be too controversial, because
> >> of the possible risks associated with this.
> > 
> > No, Xen is not capable of allocating a suitable unpopulated page IMO,
> > so let's not go down that route. Wasting one RAM page seems perfectly
> > fine to me.
> 
> Why would Xen not be able to, in principle? It may be difficult,
> but there may also be pages we could declare we know we can use
> for this purpose.

I was under the impression that there could always be bits in ACPI
dynamic tables that could report MMIO ranges Xen wasn't aware of, but
those should already be marked as reserved in the memory map anyway on
well-behaved systems.

> >>          return;
> >>  
> >>      ASSERT(cpu_has_vmx_virtualize_apic_accesses);
> >>  
> >>      virt_page_ma = page_to_maddr(vcpu_vlapic(v)->regs_page);
> >> -    apic_page_ma = mfn_to_maddr(v->domain->arch.hvm.vmx.apic_access_mfn);
> >> +    apic_page_ma = mfn_to_maddr(apic_access_mfn);
> >>  
> >>      vmx_vmcs_enter(v);
> >>      __vmwrite(VIRTUAL_APIC_PAGE_ADDR, virt_page_ma);
> > 
> > I would consider setting up the vmcs and adding the page to the p2m in
> > the same function, and likely call it from vlapic_init. We could have
> > a domain_setup_apic in hvm_function_table that takes care of all this.
> 
> Well, I'd prefer to do this just once per domain without needing
> to special case this on e.g. vCPU 0.

It seems more natural to me to do this setup together with the rest of
the vlapic initialization, but I'm not going to insist; I also
understand your point about calling the function only once.

> >> --- a/xen/include/asm-x86/p2m.h
> >> +++ b/xen/include/asm-x86/p2m.h
> >> @@ -935,6 +935,9 @@ static inline unsigned int p2m_get_iommu
> >>          flags = IOMMUF_readable;
> >>          if ( !rangeset_contains_singleton(mmio_ro_ranges, mfn_x(mfn)) )
> >>              flags |= IOMMUF_writable;
> >> +        /* VMX'es APIC access page is global and hence has no owner. */
> >> +        if ( mfn_valid(mfn) && !page_get_owner(mfn_to_page(mfn)) )
> >> +            flags = 0;
> > 
> > Is it fine to have this page accessible to devices if the page tables
> > are shared between the CPU and the IOMMU?
> 
> No, it's not, but what do you do? As said elsewhere, devices
> gaining more access than is helpful is the price we pay for
> being able to share page tables. But ...

I'm concerned about allowing devices to write to this shared page, as
it could be used as an unintended way to exchange information between
domains?

> > Is it possible for devices to write to it?
> 
> ... thinking about it - they would be able to access it only
> when interrupt remapping is off. Otherwise the entire range
> 0xFEExxxxx gets treated differently altogether by the IOMMU,

Now that I think of it, the range 0xFEExxxxx must always be handled
specially for device accesses, regardless of whether interrupt
remapping is enabled or not, or else they won't be capable of
delivering MSI messages?

So I assume that whatever gets mapped in the IOMMU page tables at
0xFEExxxxx simply gets ignored?

Or else mapping the lapic access page when using shared page tables
would have prevented CPU#0 from receiving MSI messages.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Feb 11 11:22:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 11:22:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83836.156982 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAA3Y-0001Cj-RF; Thu, 11 Feb 2021 11:22:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83836.156982; Thu, 11 Feb 2021 11:22:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAA3Y-0001Cc-Nh; Thu, 11 Feb 2021 11:22:44 +0000
Received: by outflank-mailman (input) for mailman id 83836;
 Thu, 11 Feb 2021 11:22:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cmTu=HN=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lAA3X-0001CU-Pm
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 11:22:43 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 98e222ea-dca3-453b-b540-9ec105d147fe;
 Thu, 11 Feb 2021 11:22:43 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 21D61AF11;
 Thu, 11 Feb 2021 11:22:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 98e222ea-dca3-453b-b540-9ec105d147fe
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613042562; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=V84uKexLP9S8zctgfRDW7psP9BSdmxXryN/tHgedQaU=;
	b=UTbTIGmlAAzJeETtCPGDXgFh9WLJC3ElTpVDt/YVJTrGTE1prTTSK08GJ4TNcbJXeXkuJc
	+dYuFOMNmlI3tyEQbLmNudDcvEOm15MsYGkfxdu5EC4Uj2Sn7R05LBgqF5Bg6AJ7UWFaP0
	XfnltV2Bk4sKR3l/f2/mdthPbVbAaVg=
Subject: Re: [PATCH] VMX: use a single, global APIC access page
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Kevin Tian <kevin.tian@intel.com>, Julien Grall <julien@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
References: <1b6a411b-84e7-bfb1-647e-511a13df838c@suse.com>
 <YCTuq5b130PR6G35@Air-de-Roger>
 <7abc515b-d652-3d39-6038-99966deafdf8@suse.com>
 <YCUSDSYpS5X+AZco@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <547b40f2-3b7b-10cb-30f6-9445c784eb0b@suse.com>
Date: Thu, 11 Feb 2021 12:22:41 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <YCUSDSYpS5X+AZco@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11.02.2021 12:16, Roger Pau Monné wrote:
> On Thu, Feb 11, 2021 at 11:36:59AM +0100, Jan Beulich wrote:
>> On 11.02.2021 09:45, Roger Pau Monné wrote:
>>> On Wed, Feb 10, 2021 at 05:48:26PM +0100, Jan Beulich wrote:
>>>> --- a/xen/include/asm-x86/p2m.h
>>>> +++ b/xen/include/asm-x86/p2m.h
>>>> @@ -935,6 +935,9 @@ static inline unsigned int p2m_get_iommu
>>>>          flags = IOMMUF_readable;
>>>>          if ( !rangeset_contains_singleton(mmio_ro_ranges, mfn_x(mfn)) )
>>>>              flags |= IOMMUF_writable;
>>>> +        /* VMX'es APIC access page is global and hence has no owner. */
>>>> +        if ( mfn_valid(mfn) && !page_get_owner(mfn_to_page(mfn)) )
>>>> +            flags = 0;
>>>
>>> Is it fine to have this page accessible to devices if the page tables
>>> are shared between the CPU and the IOMMU?
>>
>> No, it's not, but what do you do? As said elsewhere, devices
>> gaining more access than is helpful is the price we pay for
>> being able to share page tables. But ...
> 
> I'm concerned about allowing devices to write to this shared page, as
> it could be used as an unintended way to exchange information between
> domains?

Well, such an abuse would be possible, but it wouldn't be part
of an ABI and hence could break at any time. Similarly I
wouldn't consider it an information leak if a guest abused
this.

>>> Is it possible for devices to write to it?
>>
>> ... thinking about it - they would be able to access it only
>> when interrupt remapping is off. Otherwise the entire range
>> 0xFEExxxxx gets treated differently altogether by the IOMMU,
> 
> Now that I think of it, the range 0xFEExxxxx must always be handled
> specially for device accesses, regardless of whether interrupt
> remapping is enabled or not, or else they won't be capable of
> delivering MSI messages?

I don't think I know exactly how chipsets handle MSI in this case, but
yes, presumably these accesses need to be routed via a different path
even in that case.

> So I assume that whatever gets mapped in the IOMMU page tables at
> 0xFEExxxxx simply gets ignored?

This would be the implication, aiui.

> Or else mapping the lapic access page when using shared page tables
> would have prevented CPU#0 from receiving MSI messages.

I guess I don't understand this part. In particular I see
nothing CPU#0 specific here.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Feb 11 11:28:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 11:28:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83852.157009 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAA9B-0001Y0-Of; Thu, 11 Feb 2021 11:28:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83852.157009; Thu, 11 Feb 2021 11:28:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAA9B-0001Xt-Kw; Thu, 11 Feb 2021 11:28:33 +0000
Received: by outflank-mailman (input) for mailman id 83852;
 Thu, 11 Feb 2021 11:28:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cmTu=HN=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lAA9A-0001Xo-3L
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 11:28:32 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a5299d2b-c6cf-4db1-beac-416304f8de35;
 Thu, 11 Feb 2021 11:28:31 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6DAB1AC69;
 Thu, 11 Feb 2021 11:28:30 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a5299d2b-c6cf-4db1-beac-416304f8de35
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613042910; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=hTOKFnqoHp+bZPBlYfCS6cY1sWDsjMvxEoF7gGl4k00=;
	b=ug53fIXUnUqGhfLvYQxvkM+KIOdaEDmQHHDoaualjhQ8Pzy8dbzq8BArpRhz1n044M+co6
	+gZPtq1ydeknpTMvkxrinMIh4bbbNk/ldBTfOQsHlcdpte8zAxlvjqwBm0pVIfyctnx2Ks
	gFqy6+86kGnzlbD7hefEXQQB89viyOs=
Subject: Re: [PATCH 04/17] x86/PV: harden guest memory accesses against
 speculative abuse
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
References: <4f1975a9-bdd9-f556-9db5-eb6c428f258f@suse.com>
 <5da0c123-3b90-97e8-e1e5-10286be38ce7@suse.com>
 <YCK3sH/4EVLzRfZ3@Air-de-Roger>
 <d3b62090-fdb5-068b-93ab-63f8bebc9d2e@suse.com>
 <YCTmzRcZw9JUJkxw@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <fd0e2d04-97ab-5096-71c3-a031c0b15589@suse.com>
Date: Thu, 11 Feb 2021 12:28:29 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <YCTmzRcZw9JUJkxw@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11.02.2021 09:11, Roger Pau Monné wrote:
> On Wed, Feb 10, 2021 at 05:55:52PM +0100, Jan Beulich wrote:
>> On 09.02.2021 17:26, Roger Pau Monné wrote:
>>> On Thu, Jan 14, 2021 at 04:04:57PM +0100, Jan Beulich wrote:
>>>> --- a/xen/arch/x86/usercopy.c
>>>> +++ b/xen/arch/x86/usercopy.c
>>>> @@ -10,12 +10,19 @@
>>>>  #include <xen/sched.h>
>>>>  #include <asm/uaccess.h>
>>>>  
>>>> -unsigned __copy_to_user_ll(void __user *to, const void *from, unsigned n)
>>>> +#ifndef GUARD
>>>> +# define GUARD UA_KEEP
>>>> +#endif
>>>> +
>>>> +unsigned int copy_to_guest_ll(void __user *to, const void *from, unsigned int n)
>>>>  {
>>>>      unsigned dummy;
>>>>  
>>>>      stac();
>>>>      asm volatile (
>>>> +        GUARD(
>>>> +        "    guest_access_mask_ptr %[to], %q[scratch1], %q[scratch2]\n"
>>>
>>> Don't you need to also take 'n' into account here to assert that the
>>> address doesn't end in hypervisor address space? Or that's fine as
>>> speculation wouldn't go that far?
>>
>> Like elsewhere this leverages that the hypervisor VA range starts
>> immediately after the non-canonical hole. I'm unaware of
>> speculation being able to cross over that hole.
>>
>>> I also wonder why this needs to be done in assembly, could you check
>>> the address(es) using C?
>>
>> For this to be efficient (in avoiding speculation) the insn
>> sequence would better not have any conditional jumps. I don't
>> think the compiler can be told so.
> 
> Why not use evaluate_nospec to check the address like we do in other
> places?

Because LFENCE is far heavier than what we do here, which is
effectively a (distant) relative of array_index_nospec().

Jan


From xen-devel-bounces@lists.xenproject.org Thu Feb 11 11:30:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 11:30:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83853.157021 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAABJ-0002Rd-5Q; Thu, 11 Feb 2021 11:30:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83853.157021; Thu, 11 Feb 2021 11:30:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAABJ-0002RV-2A; Thu, 11 Feb 2021 11:30:45 +0000
Received: by outflank-mailman (input) for mailman id 83853;
 Thu, 11 Feb 2021 11:30:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fzmj=HN=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lAABG-0002R9-O1
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 11:30:43 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b7f0b0cf-c033-413e-90e7-edf39443ea5c;
 Thu, 11 Feb 2021 11:30:41 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b7f0b0cf-c033-413e-90e7-edf39443ea5c
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613043041;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=NxirOC2dBWp+Y4kS0+ky1iYNwg2E7hOxgG6lu1IJ2lU=;
  b=N2x8IkfFz2rWNo4CPWgoW2mBBueAXGCaYHf7AmUxpSQMPorsCrA9PnBi
   R5enkW2KFCI0rTQDo7qs5arBMSX3FQMDMrY0Ra/VWdUYTmP2StF01nGyz
   d6zhLdqGTyUIkPsSRf8Gf7tBnrueVqvvfJliSKO2zN9bIlU7o97jMCM5I
   c=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 37058724
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,170,1610427600"; 
   d="scan'208";a="37058724"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Q9NC+NHHSEZquE41fmxjKbYoc+rQa+4K0QpZ3Ln/oNYoTlHSfMXDwGPiWk8aMEPBtiDMHA9my5uN+OvdsctnHE4V369GJTOfYyrVcI45K6XBIGkSM26MsrLAIgwk7OHFrEa5+T691e17C+UVJh/GVa6lDRT1M62d+MRk1uIlPsaqcbfKpjmp6KeivnJnIcK4VerXOhcxCU6TStrOlBSQy0pFaUsYpZbPQH0U/jTx6WNUY4GXFeZSoIEdxrRjBMq8LuGI5Qao3D7JWbqs0whuFT53+QrhUaRzrwqa+tmgDJIhXk5EWS2GsCnx373LJkxqUhUKmyMv7AWJ3Ksdi4nwmQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=NxirOC2dBWp+Y4kS0+ky1iYNwg2E7hOxgG6lu1IJ2lU=;
 b=AI4fwQYDWmRgl/2Kr+uAqRMle/PgYPBssn/T6OalkzDUnPJyTU8sTvnLJagcDKWhcJJgLh/hPW7UiBEIprOtu7Rw5hMjpiikL4jXE0xJln9ohMIxR8nRi4KOhQ9vLFAXs8gU7Tdd2cfzVsRHdpHCIfNSz5W3QEtUO8r6lsjM8NKa4qFSp/SuHc+niGYfF8Msf+hRFw+jjHwMV/oRLaPz35PT/8eKOyJZuE7Uj9kyHLW83Uz8GHfMrdYdPKe9B65GdkKFoEx3RB55tAjOguK9d65hPFk2rKztIpVnOBe/KwDDdtbAZyXrwXVf1I7Ycomr/Zw8EzMixPf/Tw2n0RQhcg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=NxirOC2dBWp+Y4kS0+ky1iYNwg2E7hOxgG6lu1IJ2lU=;
 b=LnlCo2Fpu/g/SNAHIW4cd8sQJ2A86oJz6UeNlfMGMhk+mOtw3zjugJvLQHJao5AqNd0BSfXSwcSnCGTVvaRpO0MIKvLbY5TVwMMQA8m4qrpkeX7HH+jgAjp0VM9YFtK+dA7sLmYteqvprT6/OBIsiTbxoaIzL23QA6URAIGkn5I=
Subject: Re: Stable library ABI compatibility checking
To: Jan Beulich <jbeulich@suse.com>
CC: Ian Jackson <iwj@xenproject.org>, George Dunlap
	<george.dunlap@citrix.com>, Juergen Gross <jgross@suse.com>, xen-devel
	<xen-devel@lists.xenproject.org>
References: <22a01569-1ea0-bb87-eda1-1450d0229cf7@citrix.com>
 <816d28b2-df85-9259-de96-5a6654c8b341@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <35654104-5445-9e44-792b-3059d601db5e@citrix.com>
Date: Thu, 11 Feb 2021 11:30:29 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
In-Reply-To: <816d28b2-df85-9259-de96-5a6654c8b341@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0080.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:8::20) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-OriginatorOrg: citrix.com

On 11/02/2021 11:05, Jan Beulich wrote:
> On 11.02.2021 02:08, Andrew Cooper wrote:
>> Hello,
>>
>> Last things first, some examples:
>>
>> http://xenbits.xen.org/people/andrewcoop/abi/libxenevtchn-1.1_to_1.2.html
>> http://xenbits.xen.org/people/andrewcoop/abi/libxenforeignmemory-1.3_to_1.4.html
>>
>> These are an ABI compatibility report between RELEASE-4.14.0 and staging.
>>
>> They're performed with abi-dumper (which takes a shared object and header
>> file(s) and writes out a text "dump" file which happens to be a Perl
>> hash) and abi-compliance-checker, which takes two dumps and
>> produces the HTML reports linked above.  They're both Debian tools
>> originally, but have found their way across the ecosystem.  They have no
>> impact on build time (when invoked correctly).
>>
>> I'm encouraged to see that the foreignmem analysis has even spotted that
>> we deliberately renamed one parameter to clarify its purpose.
>>
>>
>> For the stable libraries, the RELEASE-$X.$Y.0 tag is the formal point
>> when accumulated changes in staging become fixed.  What we ought to be
>> doing is taking a "dump" of libraries at this point, and publishing
>> them, probably on xenbits.
>>
>> Subsequent builds on all the staging branches should be performing an
>> ABI check against the appropriate lib version.  This will let us catch
>> breakages in staging (c/s e8af54084586f4) as well as breakages if we
>> ever need to backport changes to the libs.
>>
>> For libraries wrapped by Juergen's tools/libs/ common-makefile changes,
>> adding ABI dumping is easy.  The only complication is needing to build
>> everything with "-Og -g", but this is easy to arrange, and frankly ought
>> to be the default for debug builds anyway (the current use of -O0 is
>> silly and interferes with common distro hardening settings).
>>
>> What I propose is tweaking the library build to write out
>> lib$FOO.so.$X.$Y-$(ARCH).abi.dump files.  A pristine set of these should
>> be put somewhere on xenbits, and a task put on the release technician's
>> checklist for future releases.
>>
>> That way, subsequent builds which have these tools available can include
>> a call to abi-compliance-checker between the authoritative copy and the
>> one from the local build, which will make the report available, and exit
>> nonzero on breaking problems.
>>
>>
>> To make the pristine set, I need to retrofit the tooling to 4.14 and
>> ideally 4.13.  This is in contravention of our normal policy of not
>> backporting features, but I think, being optional build-time-only
>> tooling, it is worthy of an exception considering the gains we get
>> (specifically - to be able to check for ABI breakages on these branches
>> in OSSTest).  Backporting to 4.12 and older is far more invasive, due to
>> it being before the library build systems were common'd.
>>
>>
>> Anyway, thoughts?
> +1
>
> Not sure about the backporting effects - tools/libs/ had quite a bit
> less content in 4.14 and older, so the coverage would be smaller.

tools/libs/ has held the stable libraries since IanC split them out
years ago.  The only odd one out is libxenstored IIRC, which moved
during the 4.15 window.
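For reference, the dump-and-compare step discussed in the quoted proposal
could be sketched roughly as below.  The file names, library version, and
-lver value are illustrative only (following the proposed
lib$FOO.so.$X.$Y-$(ARCH).abi.dump naming), not the actual build layout:

```shell
#!/bin/sh
set -e

# Hypothetical names, per the proposed lib$FOO.so.$X.$Y-$(ARCH) scheme.
LIB=libxenevtchn.so.1.2
DUMP_OLD=libxenevtchn.so.1.1-x86_64.abi.dump   # pristine copy from xenbits
DUMP_NEW=libxenevtchn.so.1.2-x86_64.abi.dump   # from the local build

# Both tools need the library built with DWARF info, i.e. "-Og -g".
# Guarded so the sketch is a no-op where the tools/library are absent.
if command -v abi-dumper >/dev/null 2>&1 && [ -e "$LIB" ]; then
    # Write the text "dump" of the freshly built library's ABI.
    abi-dumper "$LIB" -o "$DUMP_NEW" -lver 1.2

    # Compare against the authoritative dump.  abi-compliance-checker
    # writes an HTML report and exits non-zero on incompatible changes,
    # which is what would let a build/CI job fail automatically.
    abi-compliance-checker -l libxenevtchn -old "$DUMP_OLD" -new "$DUMP_NEW"
fi
```

abi-dumper also accepts a -public-headers path to limit the analysis to
the public API; whether to use that here is a policy choice.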

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Feb 11 11:39:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 11:39:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83857.157033 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAAJC-0002f7-W9; Thu, 11 Feb 2021 11:38:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83857.157033; Thu, 11 Feb 2021 11:38:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAAJC-0002f0-Su; Thu, 11 Feb 2021 11:38:54 +0000
Received: by outflank-mailman (input) for mailman id 83857;
 Thu, 11 Feb 2021 11:38:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAAJB-0002es-Hs; Thu, 11 Feb 2021 11:38:53 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAAJB-0002fc-8k; Thu, 11 Feb 2021 11:38:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAAJA-0003gm-WB; Thu, 11 Feb 2021 11:38:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lAAJA-0006fC-VS; Thu, 11 Feb 2021 11:38:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=N+pE5FJSEPfidijamYv6w8uC2dWxKfEUsEmmRW/dLiU=; b=OPkFECfR1+5xDaiSxXJXiV0Tb+
	Rn9HvwcN5kIM89bJ0fv23Hs2X5vE56n48xtgyCv41ZrPq+Dai1RRCOeNd7+8hIIjvns3OxrZX2/02
	gBEVCKv+iBFxd0LgN20pH/Ou5psUVpJxszU2V0tT0Q68XulilpSGd9/Uhrs+JW7SjfYQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159203-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 159203: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=1214d55d1c41fbab3a9973a05085b8760647e411
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 11 Feb 2021 11:38:52 +0000

flight 159203 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159203/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-win7-amd64    <job status>                 broken
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 5 host-install(5) broken blocked in 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                1214d55d1c41fbab3a9973a05085b8760647e411
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  175 days
Failing since        152659  2020-08-21 14:07:39 Z  173 days  343 attempts
Testing same since   159203  2021-02-10 11:26:17 Z    1 days    1 attempts

------------------------------------------------------------
394 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          broken  
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-i386-xl-qemuu-win7-amd64 broken
broken-step test-amd64-i386-xl-qemuu-win7-amd64 host-install(5)

Not pushing.

(No revision log; it would be 109250 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Feb 11 11:53:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 11:53:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83865.157047 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAAXP-0004WU-JK; Thu, 11 Feb 2021 11:53:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83865.157047; Thu, 11 Feb 2021 11:53:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAAXP-0004WN-GT; Thu, 11 Feb 2021 11:53:35 +0000
Received: by outflank-mailman (input) for mailman id 83865;
 Thu, 11 Feb 2021 11:53:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cmTu=HN=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lAAXO-0004WI-Fj
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 11:53:34 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 31e3a254-52ba-4820-93e2-66e2c0dcb6b6;
 Thu, 11 Feb 2021 11:53:33 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 41810ACD4;
 Thu, 11 Feb 2021 11:53:32 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 31e3a254-52ba-4820-93e2-66e2c0dcb6b6
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613044412; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=B9knYC0hUOS3PzlhvNwRsotZ77rzpcDVfDVuN790IuA=;
	b=mzdy5GnP4F/SXNOFzXB1k8w6kBGgnO61ap6LCrBsFWaogfplBH7fL4Pmlwk31gBKg0NQpA
	KiX65CDIWDuQ6BMm47IKCKo5o7t6o7I0TJIJGHts5etWbU/BanD8hmSkc6rqEYP8JuJo3X
	clAaqRWOKPL1pvKVjhfFWaV6yb/S3Ms=
Subject: Re: Stable library ABI compatibility checking
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>, George Dunlap
 <george.dunlap@citrix.com>, Juergen Gross <jgross@suse.com>,
 xen-devel <xen-devel@lists.xenproject.org>
References: <22a01569-1ea0-bb87-eda1-1450d0229cf7@citrix.com>
 <816d28b2-df85-9259-de96-5a6654c8b341@suse.com>
 <35654104-5445-9e44-792b-3059d601db5e@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c0b35b70-5cac-d25f-92c6-181d95785a89@suse.com>
Date: Thu, 11 Feb 2021 12:53:31 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <35654104-5445-9e44-792b-3059d601db5e@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11.02.2021 12:30, Andrew Cooper wrote:
> On 11/02/2021 11:05, Jan Beulich wrote:
>> On 11.02.2021 02:08, Andrew Cooper wrote:
>>> Hello,
>>>
>>> Last things first, some examples:
>>>
>>> http://xenbits.xen.org/people/andrewcoop/abi/libxenevtchn-1.1_to_1.2.html
>>> http://xenbits.xen.org/people/andrewcoop/abi/libxenforeignmemory-1.3_to_1.4.html
>>>
>>> These are an ABI compatibility report between RELEASE-4.14.0 and staging.
>>>
>>> They're performed with abi-dumper (which takes a shared object and header
>>> file(s) and writes out a text "dump" file which happens to be a Perl
>>> hash) and abi-compliance-checker, which takes two dumps and
>>> produces the HTML reports linked above.  They're both Debian tools
>>> originally, but have found their way across the ecosystem.  They have no
>>> impact on build time (when invoked correctly).
>>>
>>> I'm encouraged to see that the foreignmem analysis has even spotted that
>>> we deliberately renamed one parameter to clarify its purpose.
>>>
>>>
>>> For the stable libraries, the RELEASE-$X.$Y.0 tag is the formal point
>>> when accumulated changes in staging become fixed.  What we ought to be
>>> doing is taking a "dump" of libraries at this point, and publishing
>>> them, probably on xenbits.
>>>
>>> Subsequent builds on all the staging branches should be performing an
>>> ABI check against the appropriate lib version.Â  This will let us catch
>>> breakages in staging (c/s e8af54084586f4) as well as breakages if we
>>> ever need to backport changes to the libs.
>>>
>>> For libraries wrapped by Juergen's tools/libs/ common-makefile changes,
>>> adding ABI dumping is easy.  The only complication is needing to build
>>> everything with "-Og -g", but this is easy to arrange, and frankly ought
>>> to be the default for debug builds anyway (the current use of -O0 is
>>> silly and interferes with common distro hardening settings).
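
[In Makefile terms that is just a change to the debug flags — variable name
illustrative; the exact spelling in the Xen tree may differ:]

```make
# Debug builds: -Og keeps the DWARF output usable for abi-dumper and
# plays well with distro hardening flags, unlike the current -O0.
CFLAGS_debug := -Og -g
```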
>>>
>>> What I propose is tweaking the library build to write out
>>> lib$FOO.so.$X.$Y-$(ARCH).abi.dump files.  A pristine set of these should
>>> be put somewhere on xenbits, and a task put on the release technicians
>>> checklist for future releases.
>>>
>>> That way, subsequent builds which have these tools available can include
>>> a call to abi-compliance-checker between the authoritative copy and the
>>> one from the local build, which will make the report available, and exit
>>> nonzero on breaking problems.
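
[A minimal sketch of that build-time check, assuming the
lib$FOO.so.$X.$Y-$(ARCH).abi.dump naming proposed above. The download URL,
helper name and versions are all hypothetical.]

```shell
#!/bin/sh
# Sketch: compare a locally built dump against the authoritative one.
# dump_name() follows the lib$FOO.so.$X.$Y-$ARCH.abi.dump scheme above.
dump_name() {
    printf 'lib%s.so.%s.%s-%s.abi.dump' "$1" "$2" "$3" "$4"
}

name=$(dump_name xenevtchn 1 2 x86_64)
echo "$name"

# With the tools installed, a build would then do something like:
#   curl -fsSLO "https://xenbits.xen.org/ABI/$name"    # hypothetical path
#   abi-compliance-checker -l libxenevtchn -old "$name" -new local.abi.dump
# and fail the build on a non-zero exit.
```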
>>>
>>>
>>> To make the pristine set, I need to retrofit the tooling to 4.14 and
>>> ideally 4.13.  This is in contravention of our normal policy of not
>>> backporting features, but I think, being optional build-time-only
>>> tooling, it is worthy of an exception considering the gains we get
>>> (specifically - to be able to check for ABI breakages on these branches
>>> in OSSTest).  Backporting to 4.12 and older is far more invasive, due to
>>> it being before the library build systems were common'd.
>>>
>>>
>>> Anyway, thoughts?
>> +1
>>
>> Not sure about the backporting effects - tools/libs/ had quite a bit
>> less content in 4.14 and older, so the coverage would be smaller.
> 
> tools/libs/ has been the stable libraries, since IanC split them years
> ago.  The only odd-one-out is libxenstored IIRC, which moved during the
> 4.15 window.

As well as ctrl/, guest/, light/, stat/, util/, and vchan/.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Feb 11 12:03:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 12:03:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83874.157060 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAAhC-0005ck-Uh; Thu, 11 Feb 2021 12:03:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83874.157060; Thu, 11 Feb 2021 12:03:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAAhC-0005cd-RL; Thu, 11 Feb 2021 12:03:42 +0000
Received: by outflank-mailman (input) for mailman id 83874;
 Thu, 11 Feb 2021 12:03:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ygeM=HN=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1lAAhB-0005cY-1z
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 12:03:41 +0000
Received: from EUR02-HE1-obe.outbound.protection.outlook.com (unknown
 [40.107.1.67]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fc45ceb1-f630-4517-a573-e6a3e13503c8;
 Thu, 11 Feb 2021 12:03:37 +0000 (UTC)
Received: from DB6PR07CA0053.eurprd07.prod.outlook.com (2603:10a6:6:2a::15) by
 AM6PR08MB3541.eurprd08.prod.outlook.com (2603:10a6:20b:51::31) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3825.30; Thu, 11 Feb 2021 12:03:34 +0000
Received: from DB5EUR03FT027.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:6:2a:cafe::9f) by DB6PR07CA0053.outlook.office365.com
 (2603:10a6:6:2a::15) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3868.12 via Frontend
 Transport; Thu, 11 Feb 2021 12:03:34 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT027.mail.protection.outlook.com (10.152.20.121) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3846.25 via Frontend Transport; Thu, 11 Feb 2021 12:03:34 +0000
Received: ("Tessian outbound 587c3d093005:v71");
 Thu, 11 Feb 2021 12:03:34 +0000
Received: from 98a8f2f8d2da.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 A4E41A5C-10F7-40E7-AAA1-E797151A9C8D.1; 
 Thu, 11 Feb 2021 12:03:28 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 98a8f2f8d2da.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 11 Feb 2021 12:03:28 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com (2603:10a6:10:49::10)
 by DBBPR08MB4869.eurprd08.prod.outlook.com (2603:10a6:10:de::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3846.25; Thu, 11 Feb
 2021 12:03:26 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::f5c1:9694:9263:d90d]) by DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::f5c1:9694:9263:d90d%2]) with mapi id 15.20.3825.030; Thu, 11 Feb 2021
 12:03:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fc45ceb1-f630-4517-a573-e6a3e13503c8
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=9ekqrdfrD9g+E+3V2+qd2USJLZKIPla23yyzJHnaAQE=;
 b=p48LqkvYRMT7PJ7KY1W+15WK3/YhIlM1VQpyaMYXX1GFQvk+UpeIxvuoEZVaXNqakd/4HjJguxpcsqLKJ9VIZ1/vkefDzNi8lkc0GPH/UGtVbtG1MUYGuAcaq+mN2FXtJ7oE2/vSotICk9fLnD6msfIPsUznvWtnDFGUoLVnOHE=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 76af0b9c07323150
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=A/Z03Ii3/XXDKIB5uvGt5/NlMp0WzKO3yHi+Qh7WLfK/vPs5rDdybgyHQIlRaNRSY36c33ktkomL2LgNgG7FAmFZM6oH28lLAOOb6eF/MqFMdtYynHqANdtVp7g+aHbWTG0LDtGKtikmmIYd5snOyjleKxGvTKNszeWA5tZemVokDYbDh1Tx8Vhjcnd/OzKDK1Afe56tmbbfQ9B8W4nIlBwHvPcg/4JdhZXwuARF6IN1Y9YLpSkxu7Zlf5Gb/fmwGf5yMRjai15QV4ooAItnNSIxlX70ZiRw3/amyK6v7AE6sS3th9Qfg6rv7QsNYciBmHie/SmVc8N+xbgfonLW/A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=9ekqrdfrD9g+E+3V2+qd2USJLZKIPla23yyzJHnaAQE=;
 b=YfZQXVFnt98aZDoubtINJ9hZMUa6M7+wOl/m8iexhsktO4sXI4bVOPmVgKBSHae3JlfpcXg2exxQEIH+d5cn7sNcidI6abUOqUanzzJv/L6q4joVI/xCp3Zk7wYOjbM/JIN9ZkaJRhmOTpnoyQZ+8wLlszQY4E1tLM6ga/tPqRRzmKeMxSb7zVVXBQ4abUBfFJbjZvNle8f/j51L7NtKmO+mFezi1MwbIB5+Rk+MKnUhzJnouyzAkjynjq/2zqR4XBBG893l4NCN5gj/C+eWPxaijhKtzxK6BmOTQSCltDXXeVrIblZNy30Fef3T2lYscg1HPzVXBmJXU5dWhslQYg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=9ekqrdfrD9g+E+3V2+qd2USJLZKIPla23yyzJHnaAQE=;
 b=p48LqkvYRMT7PJ7KY1W+15WK3/YhIlM1VQpyaMYXX1GFQvk+UpeIxvuoEZVaXNqakd/4HjJguxpcsqLKJ9VIZ1/vkefDzNi8lkc0GPH/UGtVbtG1MUYGuAcaq+mN2FXtJ7oE2/vSotICk9fLnD6msfIPsUznvWtnDFGUoLVnOHE=
From: Rahul Singh <Rahul.Singh@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>, "lucmiccio@gmail.com"
	<lucmiccio@gmail.com>, xen-devel <xen-devel@lists.xenproject.org>, Bertrand
 Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Stefano Stabellini
	<stefano.stabellini@xilinx.com>
Subject: Re: [PATCH v2] xen/arm: fix gnttab_need_iommu_mapping
Thread-Topic: [PATCH v2] xen/arm: fix gnttab_need_iommu_mapping
Thread-Index: AQHW/ks46k4kUfhzpk6LTniK0S/PbqpPzhoAgAB7ugCAATYRAIAAZriAgAD4lgA=
Date: Thu, 11 Feb 2021 12:03:24 +0000
Message-ID: <6CB0A348-0470-430A-818A-546730992474@arm.com>
References: <20210208184932.23468-1-sstabellini@kernel.org>
 <B96B5E21-0600-4664-899D-D38A18DE7A8C@arm.com>
 <alpine.DEB.2.21.2102091226560.8948@sstabellini-ThinkPad-T480s>
 <EFFD35EA-378B-4C5C-8485-7EA5265E89E4@arm.com>
 <alpine.DEB.2.21.2102101309230.7114@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2102101309230.7114@sstabellini-ThinkPad-T480s>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [80.1.41.211]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: be87ff02-c972-4ef9-808d-08d8ce850e79
x-ms-traffictypediagnostic: DBBPR08MB4869:|AM6PR08MB3541:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AM6PR08MB3541DE4E6F372BD80966F6C3FC8C9@AM6PR08MB3541.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:5236;OLM:5236;
X-MS-Exchange-SenderADCheck: 1
Content-Type: text/plain; charset="utf-8"
Content-ID: <EDAFBB5276384E4AA7DFF2443F65E4F2@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB4869
Original-Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT027.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	cd3054b1-2351-4a04-3292-08d8ce85095a
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Feb 2021 12:03:34.1661
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: be87ff02-c972-4ef9-808d-08d8ce850e79
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT027.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB3541

Hello Stefano,

> On 10 Feb 2021, at 9:13 pm, Stefano Stabellini <sstabellini@kernel.org> wrote:
> 
> On Wed, 10 Feb 2021, Rahul Singh wrote:
>>> On 9 Feb 2021, at 8:36 pm, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>> On Tue, 9 Feb 2021, Rahul Singh wrote:
>>>>> On 8 Feb 2021, at 6:49 pm, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>>>> 
>>>>> Commit 91d4eca7add broke gnttab_need_iommu_mapping on ARM.
>>>>> The offending chunk is:
>>>>> 
>>>>> #define gnttab_need_iommu_mapping(d)                    \
>>>>> -    (is_domain_direct_mapped(d) && need_iommu(d))
>>>>> +    (is_domain_direct_mapped(d) && need_iommu_pt_sync(d))
>>>>> 
>>>>> On ARM we need gnttab_need_iommu_mapping to be true for dom0 when it is
>>>>> directly mapped and IOMMU is enabled for the domain, like the old check
>>>>> did, but the new check is always false.
>>>>> 
>>>>> In fact, need_iommu_pt_sync is defined as dom_iommu(d)->need_sync and
>>>>> need_sync is set as:
>>>>> 
>>>>>  if ( !is_hardware_domain(d) || iommu_hwdom_strict )
>>>>>      hd->need_sync = !iommu_use_hap_pt(d);
>>>>> 
>>>>> iommu_use_hap_pt(d) means that the page-table used by the IOMMU is the
>>>>> P2M. It is true on ARM. need_sync means that you have a separate IOMMU
>>>>> page-table and it needs to be updated for every change. need_sync is set
>>>>> to false on ARM. Hence, gnttab_need_iommu_mapping(d) is false too,
>>>>> which is wrong.
>>>>> 
>>>>> As a consequence, when using PV network from a domU on a system where
>>>>> IOMMU is on from Dom0, I get:
>>>>> 
>>>>> (XEN) smmu: /smmu@fd800000: Unhandled context fault: fsr=0x402, iova=0x8424cb148, fsynr=0xb0001, cb=0
>>>>> [   68.290307] macb ff0e0000.ethernet eth0: DMA bus error: HRESP not OK
>>>>> 
>>>>> The fix is to go back to something along the lines of the old
>>>>> implementation of gnttab_need_iommu_mapping.
>>>>> 
>>>>> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
>>>>> Fixes: 91d4eca7add
>>>>> Backport: 4.12+
>>>>> 
>>>>> ---
>>>>> 
>>>>> Given the severity of the bug, I would like to request this patch to be
>>>>> backported to 4.12 too, even if 4.12 is security-fixes only since Oct
>>>>> 2020.
>>>>> 
>>>>> For the 4.12 backport, we can use iommu_enabled() instead of
>>>>> is_iommu_enabled() in the implementation of gnttab_need_iommu_mapping.
>>>>> 
>>>>> Changes in v2:
>>>>> - improve commit message
>>>>> - add is_iommu_enabled(d) to the check
>>>>> ---
>>>>> xen/include/asm-arm/grant_table.h | 2 +-
>>>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>>> 
>>>>> diff --git a/xen/include/asm-arm/grant_table.h b/xen/include/asm-arm/grant_table.h
>>>>> index 6f585b1538..0ce77f9a1c 100644
>>>>> --- a/xen/include/asm-arm/grant_table.h
>>>>> +++ b/xen/include/asm-arm/grant_table.h
>>>>> @@ -89,7 +89,7 @@ int replace_grant_host_mapping(unsigned long gpaddr, mfn_t mfn,
>>>>>   (((i) >= nr_status_frames(t)) ? INVALID_GFN : (t)->arch.status_gfn[i])
>>>>> 
>>>>> #define gnttab_need_iommu_mapping(d)                    \
>>>>> -    (is_domain_direct_mapped(d) && need_iommu_pt_sync(d))
>>>>> +    (is_domain_direct_mapped(d) && is_iommu_enabled(d))
>>>>> 
>>>>> #endif /* __ASM_GRANT_TABLE_H__ */
>>>> 
>>>> I tested the patch and while creating the guest I observed the below warning from Linux for block device.
>>>> https://elixir.bootlin.com/linux/v4.3/source/drivers/block/xen-blkback/xenbus.c#L258
>>> 
>>> So you are creating a guest with "xl create" in dom0 and you see the
>>> warnings below printed by the Dom0 kernel? I imagine the domU has a
>>> virtual "disk" of some sort.
>> 
>> Yes you are right I am trying to create the guest with "xl create" and before that, I created the logical volume and trying to attach the logical volume
>> block to the domain with "xl block-attach". I observed this error with the "xl block-attach" command.
>> 
>> This issue occurs after applying this patch as what I observed this patch introduce the calls to iommu_legacy_{, un}map() to map the grant pages for
>> IOMMU that touches the page-tables. I am not sure but what I observed is that something is written wrong when iomm_unmap calls unmap the pages
>> because of that issue is observed.
>> 
>> You can reproduce the issue by just creating the dummy image file and try to attach it with "xl block-attach"
>> 
>> dd if=/dev/zero of=test.img bs=1024 count=0 seek=1024
>> xl block-attach 0 phy:test.img xvda w
>> 
>> Sequence of command that I follow is as follows to reproduce the issue:
>> 
>> lvs vg-xen/myguest
>> lvcreate -y -L 4G -n myguest vg-xen
>> parted -s /dev/vg-xen/myguest mklabel msdos
>> parted -s /dev/vg-xen/myguest unit MB mkpart primary 1 4096
>> sync
>> xl block-attach 0 phy:/dev/vg-xen/myguest xvda w
> 
> Ah! You are block-attaching a device in dom0 to dom0! So frontend and
> backend are both in the same domain, dom0. Yeah that is not supposed to
> work, and if it did before it was just by coincidence :-)

I think it is very common to attach a block device to dom0 where the
frontend and backend are in the same domain. I found two references that
use the same concept:

https://wiki.xenproject.org/wiki/Access_a_LVM-based_DomU_disk_outside_of_the_domU
https://www.suse.com/support/kb/doc/?id=000016154

Regards,
Rahul

> 
> There are a number of checks in Linux that wouldn't work as intended if
> the page is coming from itself. This is not an ARM-only issue either.
> 
> I tried it with dom0, like you did, and I am seeing the same warning.
> I
> did try to do block-attach to a domU and it works fine.
> 
> I don't think this is a concern.


From xen-devel-bounces@lists.xenproject.org Thu Feb 11 12:17:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 12:17:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83894.157096 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAAui-0006vJ-Jl; Thu, 11 Feb 2021 12:17:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83894.157096; Thu, 11 Feb 2021 12:17:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAAui-0006vC-Gl; Thu, 11 Feb 2021 12:17:40 +0000
Received: by outflank-mailman (input) for mailman id 83894;
 Thu, 11 Feb 2021 12:17:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fzmj=HN=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lAAuh-0006v7-A2
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 12:17:39 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 431f18ba-df61-4803-832a-10fdf6134ba9;
 Thu, 11 Feb 2021 12:17:37 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 431f18ba-df61-4803-832a-10fdf6134ba9
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613045857;
  h=subject:to:references:from:message-id:date:in-reply-to:
   content-transfer-encoding:mime-version;
  bh=0yPHMVpqh8KgihOcoJHIkFlCwxhj0MXJpGUALjSmcm0=;
  b=WKjtUAxG3SAznKrYcBL8oyBSn6fORdrecZ6iU2irK29SMS0hIFVwY0ck
   vvXTUlDAi38FsHgGFXQ60d6gDQLgPlbqNDmUWzQbABtC+gSUptwYEsJwy
   tKwdlPfyhFjPp6vCFJK5E427bgctbfD4wDxoH86cRcnNGctH4WlUwPcdK
   w=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: DCrIMEkgX54EbOCwOsgaZOMjc1upsJou9A9Ae6RfKejq1IpeeIYwUd2XZ3DQREBFIxiekZbY+u
 XTJ+G1mNepDYyWWjikctD5dXPI6caHj1XB674+o78ie80YRIGgNU5Q1ei8DyTqTP3fghtJ6B/Q
 YIDGZmdweXuHEninK+UKtT9V/oJAV+vu8kg81AtRlYhn9vYhtNkFyMxY1UN48ZBLcw5k1fZ5er
 EJ0HuscmauGUiqJ65b/Pqwy57gNrZuwfAC9UPBLiESEHh0Gs+EEH5Zu+qYsm0CY2IdXo3Yxysq
 XG0=
X-SBRS: 5.2
X-MesageID: 37230265
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,170,1610427600"; 
   d="scan'208";a="37230265"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OZQhSsRPphNHbgeN8R9T6bq2OORs14WLMtW5k1YpGfCHY6ow3TtvoTRYjys4GgJut43pr9QG8gZrF54tDWGm9FISrwRnjaWDzfnRiSH2e8CEBQd5S3afnV122LfFetXYBJc5GE5bfG8WcbCju85nUbRUWZNFkGKZ5eTsp5oLJFzGXXprwApFsQt/dodvZiWpE9OED3E04J/Yu+VdMYNCQL1upuRNdLCBhT/Z3RgQGPxfT8QxfCCkeBis24tug6gX2c5Xi2ajDEI6KXJfhVRV28G8WKOwIAMJstt2k93r6PB2SH0V51V6O4i1wM5mU1PZs7Sr6nH7UrBDSx5YIGvM3Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0yPHMVpqh8KgihOcoJHIkFlCwxhj0MXJpGUALjSmcm0=;
 b=FJPM4l2qcw4j7/WlPNrBzwsh6pkNzgigUALMoK3A6TNzHr9PTe99Mw1Wk31gybbmf2+ZbhlH5HdMou3oa2NTEG3CxCuQBY8wPozLGLAHqbFehDRnfO4AXg6QEuh8B6Xlr509kjKAnvVn8OoMZSrLzoyRCrZzGmScKwrRvvexDKbw1NpAQjRbICuyvBL3i86wZnWdg8i2DqGRS6M38uien539K7MngNPSGqh6b0MTJwd59K3wHSgV53uGPV3Q/cwrGBFj02fQaFqfv8FqjsgXoGdJB2pzyJOxddJvBKdLjJSbj5320SdAqyWloebxjYU2YulRs6Jo19qV4eDaOB+1MQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0yPHMVpqh8KgihOcoJHIkFlCwxhj0MXJpGUALjSmcm0=;
 b=K+1Co9I1Ss4iL+ZldvdXnf4PP+KMWgyPeUxcqZ9E7iFF34BF/KJEw1Nxf8y1CEdCx56Gv2VTeR1DNlITSUS7weKPfC1RSc7YUXYBA5XSqzyz/dAsPevwjiv38Ax3qPGhNbthXA5Lyr2eRLxARMRotJSceIty7Trg6wh3uFMblLc=
Subject: Re: Stable library ABI compatibility checking
To: <xen-devel@lists.xenproject.org>
References: <22a01569-1ea0-bb87-eda1-1450d0229cf7@citrix.com>
 <816d28b2-df85-9259-de96-5a6654c8b341@suse.com>
 <35654104-5445-9e44-792b-3059d601db5e@citrix.com>
 <c0b35b70-5cac-d25f-92c6-181d95785a89@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <03f845f2-6d5e-04e2-0cad-47648b3f8949@citrix.com>
Date: Thu, 11 Feb 2021 12:17:26 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
In-Reply-To: <c0b35b70-5cac-d25f-92c6-181d95785a89@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0065.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:153::16) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: a392b750-6172-4aa3-3fdd-08d8ce870221
X-MS-TrafficTypeDiagnostic: SJ0PR03MB5439:
X-Microsoft-Antispam-PRVS: <SJ0PR03MB5439F756B1B1E88B5B22F998BA8C9@SJ0PR03MB5439.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:3044;
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?NUFiQThlQkNHM0RrWU1rU2xVaWZYVG5pZUN6V1h2cVI1NEFNMkFSVXRFQlBP?=
 =?utf-8?B?Zm92N0dEajY4RlB4K2gxMDNhSU5taENxQS85QWxDOVpWUlB4dktMSXdVRmZN?=
 =?utf-8?B?TEkrTTV5VUN2akREelNTY3pwZlBOR2R4NXczUzFjTkxRMHZsU2ZCOEZxeVlk?=
 =?utf-8?B?R0lEMXBGT3FLVXUwTzl6U3lhQWRhTHBBMEJCdXBUbVVVZ3BLazZNUE1GY1Rw?=
 =?utf-8?B?MU9QOGRpS1dRNDVEaThpUEt1WWVhRU5scTltdkt3Y25zY1NJNitvMFEwbmV6?=
 =?utf-8?B?eFErYWgzcW9CVkljQzF1cWpFUE9xUWUwOWw5c3Vnd3JMcFR5Vkk2VmdLVjVX?=
 =?utf-8?B?dGtUWkJxbjAzMmNrTlJMZFVHZ1AwQXVtU1ozM0orYmp0NElBUzRVeVpGalBJ?=
 =?utf-8?B?aTJDOG5vU04wdlhjL3VhKzVQMWZuQ1ZmZDVIVFVYTHM3Y0dxUmE0SWtQK0dj?=
 =?utf-8?B?UG1ON25La3NaaVJQV01VYS9oa2Zva3Q4emdxemg0VWJaT2cvSHIrTDRPb2gw?=
 =?utf-8?B?aUJNdHc2eUg3eGYyZVdLRW9pZE5qRi8wZHNXNE4rS3ZYc3NxWlRKSkdJMmpR?=
 =?utf-8?B?T213SithT1RuLzQ3L1pPYXYzYmRXamZhSHRQd1dGQ1ZOVWtOK1M5VTlkRFpH?=
 =?utf-8?B?WlhQWG9OM3pDMk5RTE93aDNvZ2NzMFIzNE82VlpTQitMNlJyNnlHV3VaaW9q?=
 =?utf-8?B?eEtZVW1HRlcxaXpJYUhqVmJlUXg5NlFuS0dnZlVFU3UxZC9odTRGWlVxTFNH?=
 =?utf-8?B?S3I5ZWhFZTdqaW8wTG1PYnJMaUZYTm9idnRINm83UThmN3JpYUJ0cnl4aGlF?=
 =?utf-8?B?QW1RdHlicUtESnNJQ2d2M3BFWDJtcldza2wyb3l5cUdUd2Nsd0RZN25Jd09G?=
 =?utf-8?B?TXZ1aWNSNEFDdXo3M0Y2YlhjTDREa0hucTFqQ1R3dGxJRnlqRUQzbzFDRFF0?=
 =?utf-8?B?amREYm5oSWhNTGVLNUp4Q0FTSEM3QjhLVE05MEpLamRUN01SZFpqTi9LMHRh?=
 =?utf-8?B?aHJFVmZvVS9QME5lYWpqSE1JTnhLb1kvSlhwbjNFZUVUT0RsVVV5SzNJd1BV?=
 =?utf-8?B?QktsWFVXU3pnSjYraFR4Sjd0V0RYbVNVU1Q4bXRnQ0dISm9hTnFkZm5IaUNN?=
 =?utf-8?B?N2p1NkxQRlI1cVVVdXlOSDk2Um95bGxsdkFMR3lxaGk1dlRWcmE0VnN0bnFJ?=
 =?utf-8?B?a0I2UWZuZUJTRXN6RHRVWWVnYi90dDF6ZS9FMmxnU0l4cTRjMDI1cU9zblBq?=
 =?utf-8?B?Z2tuU2d0NStYMWxBNG9vT3JOVWdRQTBlcjU1RVFqMnlZeWJMZ1pNQXJoVlF4?=
 =?utf-8?B?dlZvcDZQcHNxUFMrTS9yMVNhZXFwUGRPektUSlE4V2VCWE1QQzA5dzA5djFa?=
 =?utf-8?B?R1BDa240R2pmYXdjL3QrVDJQeWNzMDNqMlV6UTQxK21PdnpsSk5adnVyekhZ?=
 =?utf-8?B?QXdZaTZiZHMxK3IyMG5NcHNmWGZyaENWU1JYK05DNEZwc2IxUXlVeHpsYWpz?=
 =?utf-8?B?Q0c3YWZSa08wK1ppRis0L3NDZEoyVXRCZ284cXZDMVpGLzdPZk5kYVdIeE84?=
 =?utf-8?B?YkYvUUUwdGxpc0VFMEJLZDhHMmdKb3VHYTgxbWM5VExyUTUvaDQyNzNoa1Fs?=
 =?utf-8?B?R1dxbGNsWDA1S0gxYjU1Y0tCVXhQNllGbFY3aXFKOEtFMUt4ckt0MkIwRmJr?=
 =?utf-8?B?WWtSWnpRSlFUdHhOREk2a2pTekxjQW9zWk1BemJsU1VDRnB2K3pReGdibkhJ?=
 =?utf-8?Q?Zd5VPYvCXJBHGATIQYOSmLQ4xq3JG6JbQ+pUas+?=
X-MS-Exchange-CrossTenant-Network-Message-Id: a392b750-6172-4aa3-3fdd-08d8ce870221
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Feb 2021 12:17:32.5734
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: BYCfPJOnb5CdAjnWYBuyQsEdk32P2vSlnABFI+e4HFS2gzsDIsJlqf5vzQNdeAc8MFO/vNtKDWdiJL451I0TwSTUPC/5JQ3qvOjdxxRgMKo=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5439
X-OriginatorOrg: citrix.com

On 11/02/2021 11:53, Jan Beulich wrote:
> On 11.02.2021 12:30, Andrew Cooper wrote:
>> On 11/02/2021 11:05, Jan Beulich wrote:
>>> On 11.02.2021 02:08, Andrew Cooper wrote:
>>>> Hello,
>>>>
>>>> Last things first, some examples:
>>>>
>>>> http://xenbits.xen.org/people/andrewcoop/abi/libxenevtchn-1.1_to_1.2.html
>>>> http://xenbits.xen.org/people/andrewcoop/abi/libxenforeignmemory-1.3_to_1.4.html
>>>>
>>>> These are an ABI compatibility report between RELEASE-4.14.0 and staging.
>>>>
>>>> They're performed with abi-dumper (takes a shared object and header
>>>> file(s) and writes out a text "dump" file which happens to be a perl
>>>> hash) and abi-compliance-checker, which takes two dumps and
>>>> produces the HTML reports linked above.  They're both Debian tools
>>>> originally, but have found their way across the ecosystem.  They have no
>>>> impact on build time (when invoked correctly).
>>>>
>>>> I'm encouraged to see that the foreignmem analysis has even spotted that
>>>> we deliberately renamed one parameter to clarify its purpose.
>>>>
>>>>
>>>> For the stable libraries, the RELEASE-$X.$Y.0 tag is the formal point
>>>> when accumulated changes in staging become fixed.  What we ought to be
>>>> doing is taking a "dump" of libraries at this point, and publishing
>>>> them, probably on xenbits.
>>>>
>>>> Subsequent builds on all the staging branches should be performing an
>>>> ABI check against the appropriate lib version.  This will let us catch
>>>> breakages in staging (c/s e8af54084586f4) as well as breakages if we
>>>> ever need to backport changes to the libs.
>>>>
>>>> For libraries wrapped by Juergen's tools/libs/ common-makefile changes,
>>>> adding ABI dumping is easy.  The only complication is needing to build
>>>> everything with "-Og -g", but this is easy to arrange, and frankly ought
>>>> to be the default for debug builds anyway (the current use of -O0 is
>>>> silly and interferes with common distro hardening settings).
>>>>
>>>> What I propose is tweaking the library build to write out
>>>> lib$FOO.so.$X.$Y-$(ARCH).abi.dump files.  A pristine set of these should
>>>> be put somewhere on xenbits, and a task put on the release technicians
>>>> checklist for future releases.
>>>>
>>>> That way, subsequent builds which have these tools available can include
>>>> a call to abi-compliance-checker between the authoritative copy and the
>>>> one from the local build, which will make the report available, and exit
>>>> nonzero on breaking problems.
>>>>
>>>>
>>>> To make the pristine set, I need to retrofit the tooling to 4.14 and
>>>> ideally 4.13.  This is in contravention of our normal policy of not
>>>> backporting features, but I think, being optional build-time-only
>>>> tooling, it is worthy of an exception considering the gains we get
>>>> (specifically - to be able to check for ABI breakages on these branches
>>>> in OSSTest).  Backporting to 4.12 and older is far more invasive, due to
>>>> it being before the library build systems were common'd.
>>>>
>>>>
>>>> Anyway, thoughts?
>>> +1
>>>
>>> Not sure about the backporting effects - tools/libs/ had quite a bit
>>> less content in 4.14 and older, so the coverage would be smaller.
>> tools/libs/ has been the stable libraries, since IanC split them years
ago.  The only odd-one-out is libxenstored IIRC, which moved during the
>> 4.15 window.
> As well as ctrl/, guest/, light/, stat/, util/, and vchan/.

Right, but 5 of those don't have stable ABIs.

~Andrew
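
[For reference, the check proposed above could be sketched roughly as
follows.  The library name, version numbers and paths are illustrative
only, and exact flags may differ between tool versions; the sketch prints
the intended commands where the tools are not installed.]

```shell
#!/bin/sh
# Sketch of the proposed per-build ABI check.  LIB/OLD/NEW are examples.
LIB=libxenevtchn
OLD=1.1
NEW=1.2

# Dump the ABI of the freshly built object (which must carry debug info,
# hence the suggestion to build with "-Og -g").
dump="abi-dumper ${LIB}.so -o ${LIB}-${NEW}.abi.dump -lver ${NEW}"

# Compare against the pristine dump published at release time; the checker
# writes an HTML report and exits non-zero on breaking changes.
check="abi-compliance-checker -l ${LIB} -old ${LIB}-${OLD}.abi.dump -new ${LIB}-${NEW}.abi.dump"

# Run the tools when present, otherwise just show the intended commands.
for cmd in "$dump" "$check"; do
    if command -v "${cmd%% *}" >/dev/null 2>&1; then
        $cmd
    else
        echo "$cmd"
    fi
done
```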


From xen-devel-bounces@lists.xenproject.org Thu Feb 11 12:27:29 2021
Date: Thu, 11 Feb 2021 13:27:10 +0100
From: Roger Pau Monné <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, Kevin Tian <kevin.tian@intel.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Julien
 Grall" <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu
	<wl@xen.org>
Subject: Re: [PATCH] VMX: use a single, global APIC access page
Message-ID: <YCUiniCn+oT9CFwC@Air-de-Roger>
References: <1b6a411b-84e7-bfb1-647e-511a13df838c@suse.com>
 <YCTuq5b130PR6G35@Air-de-Roger>
 <7abc515b-d652-3d39-6038-99966deafdf8@suse.com>
 <YCUSDSYpS5X+AZco@Air-de-Roger>
 <547b40f2-3b7b-10cb-30f6-9445c784eb0b@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <547b40f2-3b7b-10cb-30f6-9445c784eb0b@suse.com>
MIME-Version: 1.0

On Thu, Feb 11, 2021 at 12:22:41PM +0100, Jan Beulich wrote:
> On 11.02.2021 12:16, Roger Pau Monné wrote:
> > On Thu, Feb 11, 2021 at 11:36:59AM +0100, Jan Beulich wrote:
> >> On 11.02.2021 09:45, Roger Pau Monné wrote:
> >>> On Wed, Feb 10, 2021 at 05:48:26PM +0100, Jan Beulich wrote:
> >>>> --- a/xen/include/asm-x86/p2m.h
> >>>> +++ b/xen/include/asm-x86/p2m.h
> >>>> @@ -935,6 +935,9 @@ static inline unsigned int p2m_get_iommu
> >>>>          flags = IOMMUF_readable;
> >>>>          if ( !rangeset_contains_singleton(mmio_ro_ranges, mfn_x(mfn)) )
> >>>>              flags |= IOMMUF_writable;
> >>>> +        /* VMX'es APIC access page is global and hence has no owner. */
> >>>> +        if ( mfn_valid(mfn) && !page_get_owner(mfn_to_page(mfn)) )
> >>>> +            flags = 0;
> >>>
> >>> Is it fine to have this page accessible to devices if the page tables
> >>> are shared between the CPU and the IOMMU?
> >>
> >> No, it's not, but what do you do? As said elsewhere, devices
> >> gaining more access than is helpful is the price we pay for
> >> being able to share page tables. But ...
> > 
> > I'm concerned about allowing devices to write to this shared page, as
> > it could be used as an unintended way to exchange information between
> > domains?
> 
> Well, such an abuse would be possible, but it wouldn't be part
> of an ABI and hence could break at any time. Similarly I
> wouldn't consider it an information leak if a guest abused
> this.

Hm, I'm kind of worried about having such shared page accessible to
guests. Could Intel confirm whether pages in the 0xFEExxxxx range are
accessible to devices in any way when using IOMMU shared page
tables?

> >>> Is it possible for devices to write to it?
> >>
> >> ... thinking about it - they would be able to access it only
> >> when interrupt remapping is off. Otherwise the entire range
> >> 0xFEExxxxx gets treated differently altogether by the IOMMU,
> > 
> > Now that I think of it, the range 0xFEExxxxx must always be specially
> > handled for device accesses, regardless of whether interrupt remapping
> > is enabled or not, or else they won't be capable of delivering MSI
> > messages?
> 
> I don't think I know how exactly chipsets handle MSI in this
> case, but yes, presumably these accesses need to be routed a
> different path even in that case.
> 
> > So I assume that whatever gets mapped in the IOMMU pages tables at
> > 0xFEExxxxx simply gets ignored?
> 
> This would be the implication, aiui.
> 
> > Or else mapping the lapic access page when using shared page tables
> > would have prevented CPU#0 from receiving MSI messages.
> 
> I guess I don't understand this part. In particular I see
> nothing CPU#0 specific here.

Well, the default APIC address is 0xfee00000 which matches the MSI
doorbell for APIC ID 0. APIC ID 1 would instead use 0xfee01000 and
thus the APIC access page mapping won't shadow it anyway.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Feb 11 13:20:50 2021
From: Rahul Singh <Rahul.Singh@arm.com>
To: Julien Grall <julien@xen.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, "lucmiccio@gmail.com"
	<lucmiccio@gmail.com>, xen-devel <xen-devel@lists.xenproject.org>, Bertrand
 Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Stefano Stabellini
	<stefano.stabellini@xilinx.com>
Subject: Re: [PATCH v2] xen/arm: fix gnttab_need_iommu_mapping
Date: Thu, 11 Feb 2021 13:20:18 +0000
Message-ID: <BFC5858A-3631-48E1-AB87-40EECF95FA66@arm.com>
References: <20210208184932.23468-1-sstabellini@kernel.org>
 <B96B5E21-0600-4664-899D-D38A18DE7A8C@arm.com>
 <alpine.DEB.2.21.2102091226560.8948@sstabellini-ThinkPad-T480s>
 <EFFD35EA-378B-4C5C-8485-7EA5265E89E4@arm.com>
 <4e4f7d25-6f5f-1016-b1c9-7aa902d637b8@xen.org>
 <ECC82E19-3504-4E0E-B3EA-D0E46DD842C6@arm.com>
 <c573b3a0-186d-626e-6670-f8fc28285e3d@xen.org>
In-Reply-To: <c573b3a0-186d-626e-6670-f8fc28285e3d@xen.org>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-ID: <735D240996529D4E845AB207BC1169E8@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB4630
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT048.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	d891f881-1d72-4f51-5531-08d8ce8fc6ea
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Feb 2021 13:20:27.0820
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 46f3dc8e-abcd-4637-0c4e-08d8ce8fcc07
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT048.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB3463

Hello Julien,

> On 10 Feb 2021, at 7:52 pm, Julien Grall <julien@xen.org> wrote:
> 
> 
> On 10/02/2021 18:08, Rahul Singh wrote:
>> Hello Julien,
>>> On 10 Feb 2021, at 5:34 pm, Julien Grall <julien@xen.org> wrote:
>>> 
>>> Hi,
>>> 
>>> On 10/02/2021 15:06, Rahul Singh wrote:
>>>>> On 9 Feb 2021, at 8:36 pm, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>>>> 
>>>>> On Tue, 9 Feb 2021, Rahul Singh wrote:
>>>>>>> On 8 Feb 2021, at 6:49 pm, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>>>>>> 
>>>>>>> Commit 91d4eca7add broke gnttab_need_iommu_mapping on ARM.
>>>>>>> The offending chunk is:
>>>>>>> 
>>>>>>> #define gnttab_need_iommu_mapping(d)                    \
>>>>>>> -    (is_domain_direct_mapped(d) && need_iommu(d))
>>>>>>> +    (is_domain_direct_mapped(d) && need_iommu_pt_sync(d))
>>>>>>> 
>>>>>>> On ARM we need gnttab_need_iommu_mapping to be true for dom0 when it is
>>>>>>> directly mapped and IOMMU is enabled for the domain, like the old check
>>>>>>> did, but the new check is always false.
>>>>>>> 
>>>>>>> In fact, need_iommu_pt_sync is defined as dom_iommu(d)->need_sync and
>>>>>>> need_sync is set as:
>>>>>>> 
>>>>>>>   if ( !is_hardware_domain(d) || iommu_hwdom_strict )
>>>>>>>       hd->need_sync = !iommu_use_hap_pt(d);
>>>>>>> 
>>>>>>> iommu_use_hap_pt(d) means that the page-table used by the IOMMU is the
>>>>>>> P2M. It is true on ARM. need_sync means that you have a separate IOMMU
>>>>>>> page-table and it needs to be updated for every change. need_sync is set
>>>>>>> to false on ARM. Hence, gnttab_need_iommu_mapping(d) is false too,
>>>>>>> which is wrong.
>>>>>>> 
>>>>>>> As a consequence, when using PV network from a domU on a system where
>>>>>>> IOMMU is on from Dom0, I get:
>>>>>>> 
>>>>>>> (XEN) smmu: /smmu@fd800000: Unhandled context fault: fsr=0x402, iova=0x8424cb148, fsynr=0xb0001, cb=0
>>>>>>> [   68.290307] macb ff0e0000.ethernet eth0: DMA bus error: HRESP not OK
>>>>>>> 
>>>>>>> The fix is to go back to something along the lines of the old
>>>>>>> implementation of gnttab_need_iommu_mapping.
>>>>>>> 
>>>>>>> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
>>>>>>> Fixes: 91d4eca7add
>>>>>>> Backport: 4.12+
>>>>>>> 
>>>>>>> ---
>>>>>>> 
>>>>>>> Given the severity of the bug, I would like to request this patch to be
>>>>>>> backported to 4.12 too, even if 4.12 is security-fixes only since Oct
>>>>>>> 2020.
>>>>>>> 
>>>>>>> For the 4.12 backport, we can use iommu_enabled() instead of
>>>>>>> is_iommu_enabled() in the implementation of gnttab_need_iommu_mapping.
>>>>>>> 
>>>>>>> Changes in v2:
>>>>>>> - improve commit message
>>>>>>> - add is_iommu_enabled(d) to the check
>>>>>>> ---
>>>>>>> xen/include/asm-arm/grant_table.h | 2 +-
>>>>>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>>> 
>>>>>>> diff --git a/xen/include/asm-arm/grant_table.h b/xen/include/asm-arm/grant_table.h
>>>>>>> index 6f585b1538..0ce77f9a1c 100644
>>>>>>> --- a/xen/include/asm-arm/grant_table.h
>>>>>>> +++ b/xen/include/asm-arm/grant_table.h
>>>>>>> @@ -89,7 +89,7 @@ int replace_grant_host_mapping(unsigned long gpaddr, mfn_t mfn,
>>>>>>>    (((i) >= nr_status_frames(t)) ? INVALID_GFN : (t)->arch.status_gfn[i])
>>>>>>> 
>>>>>>> #define gnttab_need_iommu_mapping(d)                    \
>>>>>>> -    (is_domain_direct_mapped(d) && need_iommu_pt_sync(d))
>>>>>>> +    (is_domain_direct_mapped(d) && is_iommu_enabled(d))
>>>>>>> 
>>>>>>> #endif /* __ASM_GRANT_TABLE_H__ */
>>>>>> 
>>>>>> I tested the patch and while creating the guest I observed the below warning from Linux for block device.
>>>>>> https://elixir.bootlin.com/linux/v4.3/source/drivers/block/xen-blkback/xenbus.c#L258
>>>>> 
>>>>> So you are creating a guest with "xl create" in dom0 and you see the
>>>>> warnings below printed by the Dom0 kernel? I imagine the domU has a
>>>>> virtual "disk" of some sort.
>>>> Yes you are right I am trying to create the guest with "xl create” and before that, I created the logical volume and trying to attach the logical volume
>>>> block to the domain with “xl block-attach”. I observed this error with the "xl block-attach” command.
>>>> This issue occurs after applying this patch as what I observed this patch introduce the calls to iommu_legacy_{, un}map() to map the grant pages for
>>>> IOMMU that touches the page-tables. I am not sure but what I observed is that something is written wrong when iomm_unmap calls unmap the pages
>>>> because of that issue is observed.
>>> 
>>> Can you clarify what you mean by "written wrong"? What sort of error do you see in the iommu_unmap()?
>> I might be wrong as per my understanding for ARM we are sharing the P2M between CPU and IOMMU always and the map_grant_ref() function is written in such a way that we have to call iommu_legacy_{, un}map() only if P2M is not shared.
> 
> map_grant_ref() will call the IOMMU if gnttab_need_iommu_mapping() returns true. I don't really see where this is assuming the P2M is not shared.
> 
> In fact, on x86, this will always be false for HVM domain (they support both shared and separate page-tables).
> 
>> As we are sharing the P2M when we call the iommu_map() function it will overwrite the existing GFN -> MFN ( For DOM0 GFN is same as MFN) entry and when we call iommu_unmap() it will unmap the (GFN -> MFN) entry from the page-table.
> AFAIK, there should be nothing mapped at that GFN because the page belongs to the guest. At worse, we would overwrite a mapping that is the same.

Sorry I should have mention before backend/frontend is dom0 in this case and GFN is mapped. I am trying to attach the block device to DOM0

dd if=/dev/zero of=test.img bs=1024 count=0 seek=1024
xl block-attach 0 phy:test.img xvda w

> 
> Although, I agree that we may end up to remove the entry early and therefore we could get an IOMMU fault (this is not your case here). But that's not an Arm-only problem.
> 
>> Next time when map_grant_ref() tries to map the same frame it will try to get the corresponding GFN but there will no entry in P2M as iommu_unmap() already unmapped the GFN because of that this error might be observed.
> I am afraid, I don't understand what you mean by "try to get the corresponding GFN".  Can you give a pointer to the code?

I mean to say to get the GFN from frame "struct grant_entry_v1 { …. .frame } 
http://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=xen/common/grant_table.c;h=280b7969b6d5821010675182eaf0ea29fc332dd4;hb=HEAD#l1124

> 
>> Sequence of events that results the issue :
>> gnttab_map_grant_ref (GFN=a99fb MFN=a99fb)
> 
> Can you clarify whether the GFN is from the local domain (e.g. dom0/backend) or the remote domain (e.g. the frontend)?

Both backend/frontend in the same domain dom0 .
> 
>> arm_iommu_map_page() DOMID:0 dfn = a99fb mfn = a99fb
>> gnttab_map_grant_ref ( GFN=d9913 MFN=d9913)
>> arm_iommu_map_page() DOMID:0 dfn = d9913 mfn = d9913
>> gnttab_map_grant_ref ( GFN=d9846 MFN=d9846)
>> arm_iommu_map_page() DOMID:0 dfn = d9846 mfn = d9846
>> gnttab_map_grant_ref (GFN=a8474 MFN=a8474)
>> arm_iommu_map_page() DOMID:0 dfn = a8474 mfn = a8474
>> arm_iommu_unmap_page() DOMID:0 dfn = a99fb
>> arm_iommu_unmap_page() DOMID:0 dfn = d9913
>> arm_iommu_unmap_page() DOMID:0 dfn = d9846
>> arm_iommu_unmap_page() DOMID:0 dfn = a8474
>> Try to map the same frame that is unmapped earlier by iommu_unmap call()
>> gnttab_map_grant_ref (GFN=a99fb MFN=0xffffffff).-> Not able to find the GFN in p2m error is observed after that.
> 
> The iommu_map()/iommu_unmap() should only modify the dom0 P2M. It should not modify the guest P2M.

Yes as in this case backend/frontend in same dom0 that why P2M is modified.
> 
> When Dom0 issue a request to map a grant we will:
>  1) Look-up the guest P2M to find the corresponding MFN
>  2) Do all the sanity check
>  3) Map the page in dom0's P2M at the address requested (host_addr)
>  4) Issue iommu_map() to get a 1:1 mapping in the P2M
> 
> So are you saying that the guest P2M walk has failed?

No. I mean to say GFN is unmapped by iommu_unmap() earlier and later point of time when we get the request to map the same frame that is failed as we will not be able to get GFN from frame get_paged_frame() in function map_grant_ref().

Regards,
Rahul
> Cheers,
> 
> -- 
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Feb 11 13:53:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 13:53:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83906.157138 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lACOx-0007pT-JJ; Thu, 11 Feb 2021 13:52:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83906.157138; Thu, 11 Feb 2021 13:52:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lACOx-0007pM-Fw; Thu, 11 Feb 2021 13:52:59 +0000
Received: by outflank-mailman (input) for mailman id 83906;
 Thu, 11 Feb 2021 13:52:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lACOv-0007pH-FE
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 13:52:57 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lACOu-0004rw-0w; Thu, 11 Feb 2021 13:52:56 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lACOt-0007zd-LM; Thu, 11 Feb 2021 13:52:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=EwHUlxv3YE0RtN6+99OTd+wjUZ9HVtOmE/xbephnDhI=; b=af2lcUm7WbnSWjZYGe/kuGHNUU
	lCpA6Aibj5mg4GsgIQMuatC88tURBwszWewCUEkQ7SgnM+wUmAzN8xou4gujEDt6sSOPzWYH7diDf
	6FOpu35w30k/6hnKrteq5DGojXPbkNBzGHFIt3kgSJk9v9lh68dRR5QCKhFGCeFyxMIM=;
Subject: Re: [PATCH v2] xen/arm: fix gnttab_need_iommu_mapping
To: Rahul Singh <Rahul.Singh@arm.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 "lucmiccio@gmail.com" <lucmiccio@gmail.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
References: <20210208184932.23468-1-sstabellini@kernel.org>
 <B96B5E21-0600-4664-899D-D38A18DE7A8C@arm.com>
 <alpine.DEB.2.21.2102091226560.8948@sstabellini-ThinkPad-T480s>
 <EFFD35EA-378B-4C5C-8485-7EA5265E89E4@arm.com>
 <4e4f7d25-6f5f-1016-b1c9-7aa902d637b8@xen.org>
 <ECC82E19-3504-4E0E-B3EA-D0E46DD842C6@arm.com>
 <c573b3a0-186d-626e-6670-f8fc28285e3d@xen.org>
 <BFC5858A-3631-48E1-AB87-40EECF95FA66@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <489c1b89-67f0-5d47-d527-3ea580b7cc43@xen.org>
Date: Thu, 11 Feb 2021 13:52:53 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <BFC5858A-3631-48E1-AB87-40EECF95FA66@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit



On 11/02/2021 13:20, Rahul Singh wrote:
> Hello Julien,

Hi Rahul,

>> On 10 Feb 2021, at 7:52 pm, Julien Grall <julien@xen.org> wrote:
>>
>>
>>
>> On 10/02/2021 18:08, Rahul Singh wrote:
>>> Hello Julien,
>>>> On 10 Feb 2021, at 5:34 pm, Julien Grall <julien@xen.org> wrote:
>>>>
>>>> Hi,
>>>>
>>>> On 10/02/2021 15:06, Rahul Singh wrote:
>>>>>> On 9 Feb 2021, at 8:36 pm, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>>>>>
>>>>>> On Tue, 9 Feb 2021, Rahul Singh wrote:
>>>>>>>> On 8 Feb 2021, at 6:49 pm, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>>>>>>>
>>>>>>>> Commit 91d4eca7add broke gnttab_need_iommu_mapping on ARM.
>>>>>>>> The offending chunk is:
>>>>>>>>
>>>>>>>> #define gnttab_need_iommu_mapping(d)                    \
>>>>>>>> -    (is_domain_direct_mapped(d) && need_iommu(d))
>>>>>>>> +    (is_domain_direct_mapped(d) && need_iommu_pt_sync(d))
>>>>>>>>
>>>>>>>> On ARM we need gnttab_need_iommu_mapping to be true for dom0 when it is
>>>>>>>> directly mapped and IOMMU is enabled for the domain, like the old check
>>>>>>>> did, but the new check is always false.
>>>>>>>>
>>>>>>>> In fact, need_iommu_pt_sync is defined as dom_iommu(d)->need_sync and
>>>>>>>> need_sync is set as:
>>>>>>>>
>>>>>>>>    if ( !is_hardware_domain(d) || iommu_hwdom_strict )
>>>>>>>>        hd->need_sync = !iommu_use_hap_pt(d);
>>>>>>>>
>>>>>>>> iommu_use_hap_pt(d) means that the page-table used by the IOMMU is the
>>>>>>>> P2M. It is true on ARM. need_sync means that you have a separate IOMMU
>>>>>>>> page-table and it needs to be updated for every change. need_sync is set
>>>>>>>> to false on ARM. Hence, gnttab_need_iommu_mapping(d) is false too,
>>>>>>>> which is wrong.
>>>>>>>>
>>>>>>>> As a consequence, when using PV network from a domU on a system where
>>>>>>>> IOMMU is on from Dom0, I get:
>>>>>>>>
>>>>>>>> (XEN) smmu: /smmu@fd800000: Unhandled context fault: fsr=0x402, iova=0x8424cb148, fsynr=0xb0001, cb=0
>>>>>>>> [   68.290307] macb ff0e0000.ethernet eth0: DMA bus error: HRESP not OK
>>>>>>>>
>>>>>>>> The fix is to go back to something along the lines of the old
>>>>>>>> implementation of gnttab_need_iommu_mapping.
>>>>>>>>
>>>>>>>> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
>>>>>>>> Fixes: 91d4eca7add
>>>>>>>> Backport: 4.12+
>>>>>>>>
>>>>>>>> ---
>>>>>>>>
>>>>>>>> Given the severity of the bug, I would like to request this patch to be
>>>>>>>> backported to 4.12 too, even if 4.12 is security-fixes only since Oct
>>>>>>>> 2020.
>>>>>>>>
>>>>>>>> For the 4.12 backport, we can use iommu_enabled() instead of
>>>>>>>> is_iommu_enabled() in the implementation of gnttab_need_iommu_mapping.
>>>>>>>>
>>>>>>>> Changes in v2:
>>>>>>>> - improve commit message
>>>>>>>> - add is_iommu_enabled(d) to the check
>>>>>>>> ---
>>>>>>>> xen/include/asm-arm/grant_table.h | 2 +-
>>>>>>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>>>>
>>>>>>>> diff --git a/xen/include/asm-arm/grant_table.h b/xen/include/asm-arm/grant_table.h
>>>>>>>> index 6f585b1538..0ce77f9a1c 100644
>>>>>>>> --- a/xen/include/asm-arm/grant_table.h
>>>>>>>> +++ b/xen/include/asm-arm/grant_table.h
>>>>>>>> @@ -89,7 +89,7 @@ int replace_grant_host_mapping(unsigned long gpaddr, mfn_t mfn,
>>>>>>>>     (((i) >= nr_status_frames(t)) ? INVALID_GFN : (t)->arch.status_gfn[i])
>>>>>>>>
>>>>>>>> #define gnttab_need_iommu_mapping(d)                    \
>>>>>>>> -    (is_domain_direct_mapped(d) && need_iommu_pt_sync(d))
>>>>>>>> +    (is_domain_direct_mapped(d) && is_iommu_enabled(d))
>>>>>>>>
>>>>>>>> #endif /* __ASM_GRANT_TABLE_H__ */
>>>>>>>
>>>>>>> I tested the patch and while creating the guest I observed the below warning from Linux for block device.
>>>>>>> https://elixir.bootlin.com/linux/v4.3/source/drivers/block/xen-blkback/xenbus.c#L258
>>>>>>
>>>>>> So you are creating a guest with "xl create" in dom0 and you see the
>>>>>> warnings below printed by the Dom0 kernel? I imagine the domU has a
>>>>>> virtual "disk" of some sort.
>>>>> Yes you are right I am trying to create the guest with "xl create” and before that, I created the logical volume and trying to attach the logical volume
>>>>> block to the domain with “xl block-attach”. I observed this error with the "xl block-attach” command.
>>>>> This issue occurs after applying this patch as what I observed this patch introduce the calls to iommu_legacy_{, un}map() to map the grant pages for
>>>>> IOMMU that touches the page-tables. I am not sure but what I observed is that something is written wrong when iomm_unmap calls unmap the pages
>>>>> because of that issue is observed.
>>>>
>>>> Can you clarify what you mean by "written wrong"? What sort of error do you see in the iommu_unmap()?
>>> I might be wrong as per my understanding for ARM we are sharing the P2M between CPU and IOMMU always and the map_grant_ref() function is written in such a way that we have to call iommu_legacy_{, un}map() only if P2M is not shared.
>>
>> map_grant_ref() will call the IOMMU if gnttab_need_iommu_mapping() returns true. I don't really see where this is assuming the P2M is not shared.
>>
>> In fact, on x86, this will always be false for HVM domain (they support both shared and separate page-tables).
>>
>>> As we are sharing the P2M when we call the iommu_map() function it will overwrite the existing GFN -> MFN ( For DOM0 GFN is same as MFN) entry and when we call iommu_unmap() it will unmap the  (GFN -> MFN ) entry from the page-table.
>> AFAIK, there should be nothing mapped at that GFN because the page belongs to the guest. At worse, we would overwrite a mapping that is the same.
> Sorry I should have mention before backend/frontend is dom0 in this
> case and GFN is mapped. I am trying to attach the block device to DOM0

Ah, your log makes a lot more sense now. Thank you for the clarification!

So yes, I agree that iommu_{,un}map() will do the wrong thing if the 
frontend and backend are in the same domain.

I don't know what the state in Linux is, but from the Xen PoV it should 
be possible to have the backend/frontend in the same domain.

I think we want to ignore the IOMMU mapping request when the domain is 
the same. Can you try this small untested patch:

diff --git a/xen/drivers/passthrough/arm/iommu_helpers.c b/xen/drivers/passthrough/arm/iommu_helpers.c
index a36e2b8e6c42..7bad13593146 100644
--- a/xen/drivers/passthrough/arm/iommu_helpers.c
+++ b/xen/drivers/passthrough/arm/iommu_helpers.c
@@ -53,6 +53,9 @@ int __must_check arm_iommu_map_page(struct domain *d, dfn_t dfn, mfn_t mfn,
 
     t = (flags & IOMMUF_writable) ? p2m_iommu_map_rw : p2m_iommu_map_ro;
 
+    if ( page_get_owner(mfn_to_page(mfn)) == d )
+        return 0;
+
     /*
      * The function guest_physmap_add_entry replaces the current mapping
      * if there is already one...
@@ -71,6 +74,9 @@ int __must_check arm_iommu_unmap_page(struct domain *d, dfn_t dfn,
     if ( !is_domain_direct_mapped(d) )
         return -EINVAL;
 
+    if ( page_get_owner(mfn_to_page(mfn)) == d )
+        return 0;
+
     return guest_physmap_remove_page(d, _gfn(dfn_x(dfn)), _mfn(dfn_x(dfn)), 0);
 }

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Feb 11 13:53:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 13:53:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83907.157150 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lACPP-0007uC-TS; Thu, 11 Feb 2021 13:53:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83907.157150; Thu, 11 Feb 2021 13:53:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lACPP-0007u4-PT; Thu, 11 Feb 2021 13:53:27 +0000
Received: by outflank-mailman (input) for mailman id 83907;
 Thu, 11 Feb 2021 13:53:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fzmj=HN=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lACPO-0007tv-Gy
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 13:53:26 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5373a624-1236-4a50-b9e2-873dc13a05c5;
 Thu, 11 Feb 2021 13:53:25 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5373a624-1236-4a50-b9e2-873dc13a05c5
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613051605;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=lXSGzGBDTLVTIC7+IZAPlzSog7065n2VSfHi1CuBn58=;
  b=Y9NUeHxRyVsYwiSp+/4fYFZnee977sJ/De4WbZFfic9JaOFWzjj9KICE
   4uh0lbohr55S8fWgziCmPrxnC9UFWmdLIGQ0JsXcVTkDyM3M2dzOwPnv/
   iA8c6l5eM80M1suFyytv7MLDvw5GDbEOfOIXWo+7MYL7lcO6lqQyYFQEs
   o=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=XXT57OOjhb13TbZlcLP8QSaorO0xgSrsszjAEqbEdhw=;
 b=pMZ1kyN4vcWHEQqRL8R1kIz/YPDBZmt5yVmAuRmf5PzUfogttMTJkwzVvP/HITeTAtvebDHmDgYB31eHQEqcBrbzVfs5j3Mw0+e1UrypubfavKOob1Bkoghk3ZSbo/xoZRnoxwC+8K5rZWDwRF6mcCZ57rrbiSXLbzcqAULCTxg=
Subject: Re: [PATCH] VMX: use a single, global APIC access page
To: Jan Beulich <jbeulich@suse.com>, Roger Pau Monné <roger.pau@citrix.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Kevin
 Tian <kevin.tian@intel.com>, Julien Grall <julien@xen.org>, Wei Liu
	<wl@xen.org>
References: <1b6a411b-84e7-bfb1-647e-511a13df838c@suse.com>
 <YCTuq5b130PR6G35@Air-de-Roger>
 <7abc515b-d652-3d39-6038-99966deafdf8@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <f71f975d-8e01-ecf1-0762-30ea18793690@citrix.com>
Date: Thu, 11 Feb 2021 13:53:14 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
In-Reply-To: <7abc515b-d652-3d39-6038-99966deafdf8@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0055.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:60::19) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: aa643e6f-0483-4e68-50d2-08d8ce94646d
X-MS-TrafficTypeDiagnostic: BYAPR03MB4166:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
 =?utf-8?B?dzJ0Q2gzSm5sL1ZWUEpjTWNHVmVwMjg0Tldub21OQ1ZxcHRNYnRpblV6MjBa?=
 =?utf-8?B?MXdRSmpRUnZGdmNPcGo0S3VXNFYwU3dpQzV5NDhhOVdDZWhmMkV0YUhWOVcv?=
 =?utf-8?Q?jtOa2VuCQ42LuWaVXoTOLZY/VlM1oFB7FLA8OrL?=
X-MS-Exchange-CrossTenant-Network-Message-Id: aa643e6f-0483-4e68-50d2-08d8ce94646d
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Feb 2021 13:53:21.0761
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: myGjkMZnvzZ6s0Jrh3JokeGohdX4Ktx2e0bnlgSHwkjU1VrYto5NQwZRCUfZ88qW0eUYeveLnqRtileEOC6KqkK1CfZBxCPCc3vzjyD4kT0=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4166
X-OriginatorOrg: citrix.com

On 11/02/2021 10:36, Jan Beulich wrote:
> On 11.02.2021 09:45, Roger Pau Monné wrote:
>> On Wed, Feb 10, 2021 at 05:48:26PM +0100, Jan Beulich wrote:
>>> I did further consider not allocating any real page at all, but just
>>> using the address of some unpopulated space (which would require
>>> announcing this page as reserved to Dom0, so it wouldn't put any PCI
>>> MMIO BARs there). But I thought this would be too controversial, because
>>> of the possible risks associated with this.
>> No, Xen is not capable of allocating a suitable unpopulated page IMO,
>> so let's not go down that route. Wasting one RAM page seems perfectly
>> fine to me.
> Why would Xen not be able to, in principle? It may be difficult,
> but there may also be pages we could declare we know we can use
> for this purpose.

There are frames we could use for this purpose, but their locations are
platform specific (holes around TSEG).

I'm also not sure about the implications of their use as a DMA target.

>
>>>  static void vmx_install_vlapic_mapping(struct vcpu *v)
>>>  {
>>>      paddr_t virt_page_ma, apic_page_ma;
>>>  
>>> -    if ( mfn_eq(v->domain->arch.hvm.vmx.apic_access_mfn, _mfn(0)) )
>>> +    if ( mfn_eq(apic_access_mfn, _mfn(0)) )
>> You should check if the domain has a vlapic I think, since now
>> apic_access_mfn is global and will be set regardless of whether the
>> domain has vlapic enabled or not.
>>
>> Previously apic_access_mfn was only allocated if the domain had vlapic
>> enabled.
> Oh, indeed - thanks for spotting. And also in domain_creation_finished().
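Roger's observation can be sketched in isolation (hypothetical stand-in names, not the actual vmx.c code): once apic_access_mfn is a global, a non-zero value no longer implies that the domain in question has a vLAPIC, so the install path needs an explicit per-domain check as well.

```c
#include <stdbool.h>
#include <stddef.h>

/* Stand-in for the now-global apic_access_mfn; 0 => page not allocated. */
static unsigned long apic_access_mfn;

/* Hypothetical domain sketch; has_vlapic mimics has_vlapic(d). */
struct domain_sketch {
    bool has_vlapic;
};

/* Old logic: valid only while apic_access_mfn was per-domain, since it
 * was then non-zero exactly for domains with an accelerated vLAPIC. */
static bool install_old(const struct domain_sketch *d)
{
    (void)d;
    return apic_access_mfn != 0;
}

/* Suggested logic for a global apic_access_mfn: also require that this
 * particular domain actually has a vLAPIC. */
static bool install_fixed(const struct domain_sketch *d)
{
    return d->has_vlapic && apic_access_mfn != 0;
}
```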

Honestly - PVH without LAPIC was a short-sighted move.

It's a prohibited combination at the moment, following XSA-256, and I don't
see any value in lifting the restriction, considering that TPR acceleration
has been around since practically the first generation of HVM support.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Feb 11 13:58:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 13:58:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83910.157161 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lACUE-00087d-G6; Thu, 11 Feb 2021 13:58:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83910.157161; Thu, 11 Feb 2021 13:58:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lACUE-00087W-DB; Thu, 11 Feb 2021 13:58:26 +0000
Received: by outflank-mailman (input) for mailman id 83910;
 Thu, 11 Feb 2021 13:58:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lACUD-00087R-Nx
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 13:58:25 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lACUB-0004xO-IV; Thu, 11 Feb 2021 13:58:23 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lACUB-0008WC-Be; Thu, 11 Feb 2021 13:58:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=neBAoV5Fv2eWcseKDUVFyfEBzVNRpAVb4oidqOMd4qw=; b=h948uGjfRNvdtkZcxVV2wW2z23
	t20KKBo1vEPQBJJqAgpckUKP2ETe+Q3eyuunHLQ9Dk9jyx9A8kxCHH/+pq/UU6TVw/IhNDa8zi+G8
	PdknCeORSVV0THRnNyw+J8QOH23pXUGAV9VaVZhDPHMiq3HJsUAUEQdU+i2ipFpiKPqM=;
Subject: Re: [PATCH v2] xen: workaround missing device_type property in
 pci/pcie nodes
To: Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
Cc: Volodymyr_Babchuk@epam.com, ehem+xen@m5p.com,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
References: <20210209195334.21206-1-sstabellini@kernel.org>
From: Julien Grall <julien@xen.org>
Message-ID: <c7fcaa1e-7f0f-577d-3b6a-388954b492f0@xen.org>
Date: Thu, 11 Feb 2021 13:58:21 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210209195334.21206-1-sstabellini@kernel.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 09/02/2021 19:53, Stefano Stabellini wrote:
> PCI buses differ from default buses in a few important ways, so it is
> important to detect them properly. Normally, PCI buses are expected to
> have the following property:
> 
>      device_type = "pci"
> 
> In reality, it is not always the case. To handle PCI bus nodes that
> don't have the device_type property, also consider the node name: if the
> node name is "pcie" or "pci" then consider the bus as a PCI bus.
> 
> This commit is based on the Linux kernel commit
> d1ac0002dd29 "of: address: Work around missing device_type property in
> pcie nodes".
> 
> This fixes Xen boot on RPi4. Some RPi4 kernels have the following node
> on their device trees:
> 
> &pcie0 {
> 	pci@1,0 {
> 		#address-cells = <3>;
> 		#size-cells = <2>;
> 		ranges;
> 
> 		reg = <0 0 0 0 0>;
> 
> 		usb@1,0 {
> 				reg = <0x10000 0 0 0 0>;
> 				resets = <&reset RASPBERRYPI_FIRMWARE_RESET_ID_USB>;
> 		};
> 	};
> };
> 
> The pci@1,0 node is a PCI bus. If we parse the node and its children as
> a default bus, the reg property under usb@1,0 would have to be
> interpreted as an address range mappable by the CPU, which is not the
> case and would break.
> 
> Link: https://lore.kernel.org/xen-devel/YBmQQ3Tzu++AadKx@mattapan.m5p.com/
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
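The heuristic described in the commit message can be sketched as follows (hypothetical struct and helper names; the real Xen code operates on flattened device tree nodes):

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical node representation; name is the node name without the
 * unit address (e.g. "pci" for the "pci@1,0" node in the example). */
struct dt_node_sketch {
    const char *name;
    const char *device_type;  /* NULL when the property is absent */
};

static bool node_is_pci_bus(const struct dt_node_sketch *node)
{
    /* Normal case: the bus advertises device_type = "pci". */
    if ( node->device_type != NULL && strcmp(node->device_type, "pci") == 0 )
        return true;

    /* Workaround: fall back to the node name for device trees that
     * omit the property on PCI bus nodes. */
    return strcmp(node->name, "pcie") == 0 || strcmp(node->name, "pci") == 0;
}
```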

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Feb 11 14:15:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 14:15:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83914.157179 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lACkN-0001ep-0P; Thu, 11 Feb 2021 14:15:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83914.157179; Thu, 11 Feb 2021 14:15:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lACkM-0001ei-TN; Thu, 11 Feb 2021 14:15:06 +0000
Received: by outflank-mailman (input) for mailman id 83914;
 Thu, 11 Feb 2021 14:15:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5qXs=HN=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1lACkL-0001ec-AB
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 14:15:05 +0000
Received: from mail-wm1-x333.google.com (unknown [2a00:1450:4864:20::333])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7fa85e1c-7f64-47e5-88fe-316f17e06a3e;
 Thu, 11 Feb 2021 14:15:04 +0000 (UTC)
Received: by mail-wm1-x333.google.com with SMTP id n10so4027162wmq.0
 for <xen-devel@lists.xenproject.org>; Thu, 11 Feb 2021 06:15:04 -0800 (PST)
Received: from CBGR90WXYV0 ([2a00:23c5:5785:9a01:f088:412:4748:4eb1])
 by smtp.gmail.com with ESMTPSA id f2sm5093215wrt.7.2021.02.11.06.15.02
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 11 Feb 2021 06:15:03 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7fa85e1c-7f64-47e5-88fe-316f17e06a3e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:content-language
         :thread-index;
        bh=NqJl+7VgG2HfaF9Ked4qUIjOghassFCSe0BNnEt7qj8=;
        b=mPb+s7DITo6YbAtQjaqPCEBNGHpfyHFE4s9EjjWPnx5M8SIZYlPQ9vx4O6elsYgKPY
         +Ej7zP1LSif9NfKboWhGb5BXjisNCMPjXtL6b6THXoqm+F/neJTqbQzUUtQollpmiZdt
         IUpDSRHAT0zzr2yZuzdKroI87pI9a3TqJ1V/ihNLDVMzh1o79Q3Yfok7RhPlJwGA0Vak
         jyLyfyiNscakfPrS2GE8hoP64a9C1ODvEonaJRDa/Qqf7MTD3wXuZiVFOvjueTst8tll
         dZ4VqQdHT0ZLYLZTiuJ0Vy8rhIKV3NfpDzpEK2NmonwJVvCyCILbpuBN7xyALDZbqf9v
         ZBXg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :content-language:thread-index;
        bh=NqJl+7VgG2HfaF9Ked4qUIjOghassFCSe0BNnEt7qj8=;
        b=Cad+MD1LM8ccs50zmN647CubzYTI2SDPwg4IcgfUvaQlfxL59VKNgkw0ect6RyLVcs
         bLI0tOvDDKFVtscwZqHrSt4HNAa1PPsq+U1i6G2XR9OF8ntJ1CCE1Agr/bQNIE2zHdby
         RXXl6s5kK8SoaNNkaGWDIDikfoQMHCkyr7elAVQMzW0koA8s+nqbVjvEIY2iJF+cksQp
         2LRNOy/nMP+x12btafR5zUqijKROSK8DqP+CwAg+KRUdcagTCAqSgzFucmDM7WtsrllG
         cbTENG41bnirwkxoXaxkSR+OeE+JS2BedxKiRtOvfkqXI8zRwzdk8cqo3lrxSVFJJlfy
         0uiQ==
X-Gm-Message-State: AOAM530ezE5COQKZf327HJs/c7k1VjQBkcmo0u81sUXRywrf/leqtdCU
	eK7R1+olLutTVqy62c2BeA8=
X-Google-Smtp-Source: ABdhPJzMRxugvXnzrQVDPQcGqkvzFNvBaMKVtpKaUKUX/H65FT6SVTHvQwoSqVx3xmLWZ2SNHe4UNQ==
X-Received: by 2002:a1c:6a09:: with SMTP id f9mr5485740wmc.104.1613052903733;
        Thu, 11 Feb 2021 06:15:03 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Juergen Gross'" <jgross@suse.com>,
	<xen-devel@lists.xenproject.org>,
	<netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>
Cc: "'Wei Liu'" <wei.liu@kernel.org>,
	"'David S. Miller'" <davem@davemloft.net>,
	"'Jakub Kicinski'" <kuba@kernel.org>
References: <20210211101616.13788-1-jgross@suse.com> <20210211101616.13788-5-jgross@suse.com>
In-Reply-To: <20210211101616.13788-5-jgross@suse.com>
Subject: RE: [PATCH v2 4/8] xen/netback: fix spurious event detection for common event case
Date: Thu, 11 Feb 2021 14:15:02 -0000
Message-ID: <001d01d70080$4a5446d0$defcd470$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="US-ASCII"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQJuRSjpYwlLGVvLkRJGigHTv/cnpwJEXRo/qRJS+MA=

> -----Original Message-----
> From: Juergen Gross <jgross@suse.com>
> Sent: 11 February 2021 10:16
> To: xen-devel@lists.xenproject.org; netdev@vger.kernel.org; linux-kernel@vger.kernel.org
> Cc: Juergen Gross <jgross@suse.com>; Wei Liu <wei.liu@kernel.org>; Paul Durrant <paul@xen.org>; David
> S. Miller <davem@davemloft.net>; Jakub Kicinski <kuba@kernel.org>
> Subject: [PATCH v2 4/8] xen/netback: fix spurious event detection for common event case
> 
> In case of a common event for the rx and tx queues, the event should be
> regarded as spurious if no rx and no tx requests are pending.
> 
> Unfortunately the condition testing for that is wrong, causing an event
> to be deemed spurious if no rx OR no tx requests are pending.
> 
> Fix that, and use local variables for the rx/tx pending indicators in
> order to split the function calls from the if condition.
> 
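The predicate being fixed can be shown with a standalone sketch (hypothetical helper names, not the actual xen-netback code): a shared rx/tx event is spurious only when *neither* queue has work, so the "no work" tests must be combined with AND, not OR.

```c
#include <stdbool.h>

/* Hypothetical stand-ins for the real pending-work checks. */
static bool rx_pending;
static bool tx_pending;

/* Buggy predicate: flags the event as spurious if EITHER queue is idle,
 * so an event arriving with only one kind of work looks spurious. */
static bool event_spurious_buggy(void)
{
    return !rx_pending || !tx_pending;
}

/* Fixed predicate, with the checks pulled into local variables so the
 * function calls are split out of the if condition. */
static bool event_spurious_fixed(void)
{
    bool rx = rx_pending;   /* would be the real rx-pending check */
    bool tx = tx_pending;   /* would be the real tx-pending check */

    return !rx && !tx;
}
```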

Definitely neater.

> Fixes: 23025393dbeb3b ("xen/netback: use lateeoi irq binding")
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Paul Durrant <paul@xen.org>



From xen-devel-bounces@lists.xenproject.org Thu Feb 11 14:16:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 14:16:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83915.157192 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAClq-0001ms-Ek; Thu, 11 Feb 2021 14:16:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83915.157192; Thu, 11 Feb 2021 14:16:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAClq-0001ml-Bf; Thu, 11 Feb 2021 14:16:38 +0000
Received: by outflank-mailman (input) for mailman id 83915;
 Thu, 11 Feb 2021 14:16:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5qXs=HN=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1lAClo-0001mf-B8
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 14:16:36 +0000
Received: from mail-wm1-x332.google.com (unknown [2a00:1450:4864:20::332])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7d8462a0-f720-47e4-b320-dcc045cfc66c;
 Thu, 11 Feb 2021 14:16:35 +0000 (UTC)
Received: by mail-wm1-x332.google.com with SMTP id a16so6026643wmm.0
 for <xen-devel@lists.xenproject.org>; Thu, 11 Feb 2021 06:16:35 -0800 (PST)
Received: from CBGR90WXYV0 ([2a00:23c5:5785:9a01:f088:412:4748:4eb1])
 by smtp.gmail.com with ESMTPSA id z8sm5045343wrr.55.2021.02.11.06.16.33
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 11 Feb 2021 06:16:34 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7d8462a0-f720-47e4-b320-dcc045cfc66c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:content-language
         :thread-index;
        bh=Cy7wIsuzTgl3aboWz/KovEJxl9QgfW9V5yGGjiDAWYM=;
        b=ceGtK16iiqXY4tgCK1vcKqBj4841jW8joof8S5SVv23VqEhPPrve3dH08MDFnAW8gT
         eDso9WyJXR0PA6bWE6qzadYTP4IRUv2MiuarZWCROw9k+VeyhRDA9lUEiPOqTwWyHTiA
         8oFIew0jN2w50DFl/edAVtXnvBA8T6ELRsB0yHtWmRNqXdKBax/B6dt4/+sVOkYxBuJZ
         ygiiDkEQOfvXTthd+sIy5RKUovNOzFQ/jzmU/xL8cASUcfaooC4EmBjuVYOfE6UNP6ID
         gzhTYcoWpGzDsu9Fs4152wzszrgjUAZMBh9fNIHMRM4cKAO+x5jqAZUbhdbwVgvcQxl3
         ks5w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :content-language:thread-index;
        bh=Cy7wIsuzTgl3aboWz/KovEJxl9QgfW9V5yGGjiDAWYM=;
        b=kF8fIR/r/v2nDKcHkfRJECLaRQ6EHrXq+C2ZwrhPJypznR9jl0c534XYrjQpN7pafu
         gICbQaRAs4Y3V71KWCHm+BTE9gwM1YwGjcmqGNpBfROUOjYSWpSJEO1q8SEjbXtc8ji5
         Fr7qFDGhpLFhCej3T0nwC9SVTDmQX2bCZnnbiJ48ADh1X2fbGyWzOg+xA36U4XwBvV5M
         FrifKDpwaZlRogYd0IQ33dO/lWfw9Sxd15SKY8/yh4CKeqse9pN5QZzuR83ip2C2UUO8
         4JIwmnlPn0PGGPBcAj9vdJicFeYW+0jR/7rxWCMLA9XxZzuzEScGWb8Csp7+oZisbwjY
         R21g==
X-Gm-Message-State: AOAM5306kzQ/oxuKWaC3ar/WAK/XYT1NDsIGyjlSlmGc63V+0jIMbmWA
	g4sdBXshgxVSr+KAnR16Eag=
X-Google-Smtp-Source: ABdhPJxTbARAJSnDj9FS90ZX4jupockoPgBhbigpLeuBQY+mCo7W/pQ+NNyaOiH3hCn1BLn43MXmKg==
X-Received: by 2002:a05:600c:2d44:: with SMTP id a4mr5294978wmg.95.1613052994868;
        Thu, 11 Feb 2021 06:16:34 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Juergen Gross'" <jgross@suse.com>,
	<xen-devel@lists.xenproject.org>,
	<linux-block@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>,
	<netdev@vger.kernel.org>,
	<linux-scsi@vger.kernel.org>
Cc: "'Konrad Rzeszutek Wilk'" <konrad.wilk@oracle.com>,
	=?utf-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>,
	"'Jens Axboe'" <axboe@kernel.dk>,
	"'Wei Liu'" <wei.liu@kernel.org>,
	"'David S. Miller'" <davem@davemloft.net>,
	"'Jakub Kicinski'" <kuba@kernel.org>,
	"'Boris Ostrovsky'" <boris.ostrovsky@oracle.com>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>
References: <20210211101616.13788-1-jgross@suse.com> <20210211101616.13788-6-jgross@suse.com>
In-Reply-To: <20210211101616.13788-6-jgross@suse.com>
Subject: RE: [PATCH v2 5/8] xen/events: link interdomain events to associated xenbus device
Date: Thu, 11 Feb 2021 14:16:33 -0000
Message-ID: <001e01d70080$80afd5a0$820f80e0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQJuRSjpYwlLGVvLkRJGigHTv/cnpwH8bIuTqRSTGoA=

> -----Original Message-----
> From: Juergen Gross <jgross@suse.com>
> Sent: 11 February 2021 10:16
> To: xen-devel@lists.xenproject.org; linux-block@vger.kernel.org; linux-kernel@vger.kernel.org;
> netdev@vger.kernel.org; linux-scsi@vger.kernel.org
> Cc: Juergen Gross <jgross@suse.com>; Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>; Roger Pau Monné
> <roger.pau@citrix.com>; Jens Axboe <axboe@kernel.dk>; Wei Liu <wei.liu@kernel.org>; Paul Durrant
> <paul@xen.org>; David S. Miller <davem@davemloft.net>; Jakub Kicinski <kuba@kernel.org>
> Subject: [PATCH v2 5/8] xen/events: link interdomain events to associated xenbus device
> 
> In order to support the possibility of per-device event channel
> settings (e.g. lateeoi spurious event thresholds) add a xenbus device
> pointer to struct irq_info() and modify the related event channel
> binding interfaces to take the pointer to the xenbus device as a
> parameter instead of the domain id of the other side.
> 
> While at it remove the stale prototype of bind_evtchn_to_irq_lateeoi().
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Reviewed-by: Wei Liu <wei.liu@kernel.org>

Reviewed-by: Paul Durrant <paul@xen.org>



From xen-devel-bounces@lists.xenproject.org Thu Feb 11 14:28:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 14:28:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83918.157204 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lACwi-0002qC-LR; Thu, 11 Feb 2021 14:27:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83918.157204; Thu, 11 Feb 2021 14:27:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lACwi-0002q5-Gj; Thu, 11 Feb 2021 14:27:52 +0000
Received: by outflank-mailman (input) for mailman id 83918;
 Thu, 11 Feb 2021 14:27:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MG8K=HN=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1lACwh-0002q0-7o
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 14:27:51 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f36006ec-0cd6-4407-a3e8-cfa8154b6dfb;
 Thu, 11 Feb 2021 14:27:49 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f36006ec-0cd6-4407-a3e8-cfa8154b6dfb
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613053669;
  h=from:to:cc:subject:date:message-id:content-id:
   content-transfer-encoding:mime-version;
  bh=c7iwSsJTOcTGqSZjVHehbQ5NzPp/4ejwmXHlRhTugIk=;
  b=Jk3w5gtYZNTwH9emJUj9G+IyXwF09QGqRPjDxIAEw6Jd+n83zw+Vr7yR
   XNl7wK56hd6Y3r6PqfZJtpvlIXBPJqa4f1nHdFRAAD7mZcIe9HLF54FZT
   vSHPy5BN8+zRD9LsEDPqYlpy/m0979r9lfSLi3lOgfDpVfpMBKLfUHbNZ
   M=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: bZoNhGoIJ5G10Om0Wz5ZoTr8Q4THo4iJWSEF3mu2065qfUOa6OcZ7Z+t2+Rdc/Iw8sCV68tePQ
 ZDQt2fianuNBCjZc4Q/40dPpwdkC0wMPJO3MmKRziRmQ9Yi/5N8FPENez2d1LG9nE6cuFLvSo+
 A+bhn/2xbI+DNvfDh1D+JHAvb9C3kq93BTUXzw4Yl+tH5zvfaysx+e/lFgjz6DdRi5D73jpXTE
 uU6071n1yeLswApBI9sGLRpYZswJ3jMoA0rzlCx44rf9OtngOOJN4vwbCS+ZZxTOdbwXauKwic
 Zbs=
X-SBRS: 5.2
X-MesageID: 37043871
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,170,1610427600"; 
   d="scan'208";a="37043871"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=WFI5r50LCvHn8Abwu3cUoTPPoG6rGziHkzuwQ3W+zZ9tx4J02tfSFDkzQaDP/r+CPb1OyUopFrV+TJ+a4W95+I6osnEoh2omxul17oGyaEqmziU+6XM8TotkpJBs/7T8Nu8np/kGHd247ddTUPIIiEHa0bfBJeUChLaYZKCOfSakg+UOv7rJth9ZdkjeGdnL+XvbrES8jWAkFlKMqx64gl+jYG5/oC5jJ25AhQ/O0nftqWL5ys23qRQkfIKj+P/OAxhGr9YXjb0teW36XuubfFPc+0kjqR8Hk0FSQ4oJfGnXZXj1H6Au6ZzOufj3U5Quz9ezDkfiPGLl95NByjf3Jg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=c7iwSsJTOcTGqSZjVHehbQ5NzPp/4ejwmXHlRhTugIk=;
 b=Kf3abLoUhTgfjWzXy2h/yZ5kPyJJnGEBorvtdihEpGjfd3jHhh7tC6Wikzaze8knBMNKYqJd3UaFiuYTLn70deybvnIC5UXCpYcz3pflBv/uGppLZVb0cnVD0jkbLxEl37pyYRT7e5mF82gITE+0x+Owieds82t+5wZ529THLEJOihA+IB1pipWMNHZSlT8x3KWPrlTMyZ+KogCYF0hUh861lsqmivpJCo6lmHl15lkcZjufFd9p9Rog+pxAp4RjZBSikSnYrMoxfbL1/FnoSIEuOltO/TGXJNUtWBQa1mt8HvKX3HbBuWrf8xYaAHzYypDi9z5HzP7M7zGuc98WKg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=c7iwSsJTOcTGqSZjVHehbQ5NzPp/4ejwmXHlRhTugIk=;
 b=LmOsoZKdWJ+mLa85rqA+lNI3AK/SVM/bPtbxPYlq+P70oKjhZ7y9qlxFJCPqUGF5JIXoPsV06kE4Z8Wpgd7Ucc4KkDbgj4usLSF0VEnaKhn8e9Zu8rLKcmZqWD6wkVEMyMaDZt8EJ5sUfq8ETnPUc8H7ZAFfhb9TmjFHSSkZV4w=
From: George Dunlap <George.Dunlap@citrix.com>
To: "open list:X86" <xen-devel@lists.xenproject.org>
CC: "xen-announce@lists.xenproject.org" <xen-announce@lists.xenproject.org>,
	Rachel Romoff <rromoff@linuxfoundation.org>
Subject: [ANNOUNCEMENT] XenSummit VIRTUAL 2021 May 25-28
Thread-Topic: [ANNOUNCEMENT] XenSummit VIRTUAL 2021 May 25-28
Thread-Index: AQHXAIIQsLp0dQjn7UyDlp9keXk32A==
Date: Thu, 11 Feb 2021 14:27:45 +0000
Message-ID: <47E402B0-306E-4433-85D2-8471EC82101E@citrix.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3654.20.0.2.21)
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: d1e28a49-5eae-4420-4686-08d8ce99336e
x-ms-traffictypediagnostic: PH0PR03MB5927:
x-microsoft-antispam-prvs: <PH0PR03MB59272B2CC9059060E2D73E70998C9@PH0PR03MB5927.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:3631;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 3hal7gFEms/kD8gOeLS0p7byNCqnVuTbrCpeUL8z0Qnj3ivfHo5HxaaazkgYdcXntuFiwhMDmWNsFbexohHbDjcZJs8hxCyJUEI8CiZ80m4dZ7y9SLDOjpm08PyMOYSHrAFggpaM+e4QUUF/BMWsBDKdu2p75Eu/bpoTTM0EzRnw0qJWVX2VM0zBw6lUDmSHnteqq6syxqr81FefvEsp9oE1V3H+kEmHLFC5ceS15XMz95DXqn0eVn75tul1V8r5KDNXNmJigpRXBM89M+N5GsKkfw3WkXpvqkts5hgD+GuMvv+Lgw2hCh4wOvO1FoWznmhhqTisgzvnPsRsdzcFveWE0Omjoe+bz37ii8G9ZzlzV7k6dDHEWnLII0azUP8GNXAF4QwuYCcWE+1StA6IAfnnamnWwgDwOJAACWCHz1njfQX7pu7QHgnppgvBP5b5NoB2Zk2WM75m+5cWreo9gRR9WUoaNm7j8tYA5j8VFe+527uh3DruwU88H9p4pLjYDA+9hjKTjmxJEm6y1pfUqbqdQQLh3Q7kN+ifeu2Yo5GlBllldMJ3O71Dt61BzOtweMihUijESHQrjgm/PZYLOs6EJmGYY18y0Y/RUXT90hhZUBkdNfMwQahasiy2bOo1+UFMyiu1hMro+w5wXvMMwAN1u/5A16Qv5oe/QiXm3wA=
x-forefront-antispam-report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:PH0PR03MB5669.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(346002)(366004)(396003)(376002)(39860400002)(2616005)(83380400001)(5660300002)(4744005)(966005)(91956017)(76116006)(66446008)(86362001)(66946007)(6486002)(64756008)(66556008)(66476007)(478600001)(2906002)(4326008)(6512007)(36756003)(71200400001)(8676002)(33656002)(6916009)(8936002)(26005)(186003)(6506007)(316002)(54906003)(45980500001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: =?us-ascii?Q?dI1XBY2y2UGwQkueWIElmydKGJa37z92r211+MlDSYKOjI1Xs7LK0cUQq6y4?=
 =?us-ascii?Q?1SNUwJw0UkuB7ZG/YFHJO3F2Li2wCsKNUiFLibIimJGPriztXmlpWIYVxCp1?=
 =?us-ascii?Q?yGZBa+5IkShzhvHC0c1ei32ZOr5A0KXynx5/4L6CijYLc6IK6h+u+hGnlerQ?=
 =?us-ascii?Q?oK17hrkTsRxKCYBm5a+Y85gi3fadPolhp+DQCgqAz68Ff8j7caek1Mj91I8b?=
 =?us-ascii?Q?wF6pg6YcBWYQOYP/kF3U6NipPJRC7Dq3KQbxrdk46X908guhPr04Y64yvOup?=
 =?us-ascii?Q?RlsR825K0XEp7if/EWEkVCRS5t11+r1SPKY9NqtwlSJZS+GslWWDBHRanLna?=
 =?us-ascii?Q?tsd2YO2D6XlzIN3nlwsiHwwZEBEkUvDFk/L7oDrDkzrmnDo4pSm7Qilx5R69?=
 =?us-ascii?Q?610EQwHlfr067W8MRQ2re6xvPIS9u12/loHqq14aGlo9rn6UHQIIPMknjwsX?=
 =?us-ascii?Q?eBhcmxkq1k8sbxPhB3BRiwmxaHjM/PI2fS1qEWortjEhiWELEp7Hnw0tnLpW?=
 =?us-ascii?Q?trkBsW78+VNmDkXx1DFfyCQQ5RoGpt0x8WQDJCVK5LekfMisdS/7aa4YBQ/M?=
 =?us-ascii?Q?stXydIScn6CBlA479qPrv8Lh+qT2Suel6z0GUF9///TdoUSh88KQy8ZKBY9a?=
 =?us-ascii?Q?3DwNwbPc+n2Q0YAvyd30q4h3HUT8SvpkBxGkcOaWiicxDECOLewiWrgqB/oV?=
 =?us-ascii?Q?slHXNd2gEVcEA4+0kxQXlDIjzOb9ka8qn6Uv/X72pokA5o+dIE19BHD1BlfC?=
 =?us-ascii?Q?hzBe1XQBTrpjsIczl16Dgk2KdZ+mFEEGLp7T6cmdtR6Jl0FIB6xh585ySL8N?=
 =?us-ascii?Q?Gqghm72SVIuCu/JfaWI+jQlZKwL2zPMIJ7+PsqVPDY2paoKm98ChB7jmoIl9?=
 =?us-ascii?Q?zYOIf4H5ZVxx0c4JUeRyNU76wO14dRSoBHo3BugNR7h2YYgpYVerH3mipLwU?=
 =?us-ascii?Q?rohXRAl9ucp5y+Kdl9xXoLy71vRL1zrp4LvvMoMw1VVnTpNhLUmlBuzVbOzG?=
 =?us-ascii?Q?vlNBATyfTHM4/Y0BFpJ8VmOPQ/cwd1qNvCj+H37dmgU5Pvd/xRhx/pga3O7o?=
 =?us-ascii?Q?7lWszdkyMyt9sm/4KayPoWC1yF/miJwNcGHafrfNOpd5V4VO0him1gVEPH7m?=
 =?us-ascii?Q?XLrmplf3/MraHyq8cIh37pxRDpy1KqkHfMlJvP3MfiVZOMhsia40MncsWFJK?=
 =?us-ascii?Q?cwr6WWHNhGEgXmbLSbAHA37dElm0wsjJfROD/rxUZp8zW/qmnNANHUGS+45M?=
 =?us-ascii?Q?orgLH8wvbeyhNEoTl7y8aOkJbMf2qiksySTroq1eGY2x2ZVOzUDchce3n0ov?=
 =?us-ascii?Q?LtcQLJCXwuHx4K9q6MpotAxPCBROZmLlbejDx3EmEp1emQ=3D=3D?=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <02FA38D31C89AC428265570B6F32D8F9@namprd03.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: PH0PR03MB5669.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d1e28a49-5eae-4420-4686-08d8ce99336e
X-MS-Exchange-CrossTenant-originalarrivaltime: 11 Feb 2021 14:27:46.0053
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: zNvtUKR7d8sAkAbXXxC50B6iAZsP897YPj7nz7ch0nmzZ+7pshugPFiSXP+qnFzpBSVuXhwt6vWB2RK/GfjsfUWXoRWPLJG9HnsxkRBErYM=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB5927
X-OriginatorOrg: citrix.com

Hello everyone!

Submissions are now open for The XenProject Design and Developer Summit, to be held virtually, May 25-28.

CFP closes 11:59pm PST on Friday, April 2, and notifications will be made on 19 April.

As always, a significant chunk of time will be dedicated to attendee-submitted design sessions.

Main event link: https://events.linuxfoundation.org/xen-summit/

CFP Link: https://events.linuxfoundation.org/xen-summit/program/cfp/

 -George Dunlap


From xen-devel-bounces@lists.xenproject.org Thu Feb 11 14:44:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 14:44:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83958.157236 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lADCT-0004zR-DE; Thu, 11 Feb 2021 14:44:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83958.157236; Thu, 11 Feb 2021 14:44:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lADCT-0004zK-AP; Thu, 11 Feb 2021 14:44:09 +0000
Received: by outflank-mailman (input) for mailman id 83958;
 Thu, 11 Feb 2021 14:44:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lADCR-0004zF-TX
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 14:44:07 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lADCR-0005nA-P9
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 14:44:07 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lADCR-000473-NH
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 14:44:07 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lADCO-00039V-Dw; Thu, 11 Feb 2021 14:44:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=HSkH/Ptzv6CviWAvhuwGxWB2qevinfnBevEHYAMj8aA=; b=Lte1iNSwQ3B8zjYaru6P/6cbgz
	MrMERC9h2FUb1JV4bL7YZDWD8QEuMv1USg2JbfFEbN/JhgdXMT+xE0fZLUQkFnStUSJRL3QAZ5YjH
	FUqnU/K/3DnqiJUFertVAHJqCxlSja/wuIjQyVvJFMuZFYAYvv38duQ+X7HD8t9th+Lk=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 8bit
Message-ID: <24613.17076.160988.839468@mariner.uk.xensource.com>
Date: Thu, 11 Feb 2021 14:44:04 +0000
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
    George Dunlap <george.dunlap@citrix.com>,
    Juergen Gross <jgross@suse.com>,
    Jan Beulich <jbeulich@suse.com>
Subject: Re: Stable library ABI compatibility checking
In-Reply-To: <22a01569-1ea0-bb87-eda1-1450d0229cf7@citrix.com>
References: <22a01569-1ea0-bb87-eda1-1450d0229cf7@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Andrew Cooper writes ("Stable library ABI compatibility checking"):
> What I propose is tweaking the library build to write out
> lib$FOO.so.$X.$Y-$(ARCH).abi.dump files.

+1

>   A pristine set of these should be put somewhere on xenbits, and a
> task put on the release technicians checklist for future releases.

I would rather shrink that checklist than grow it.  It's too full of
crazy manual steps as-is.  When we have the relevant changes in
xen.git I will see about making a cron job.

> To make the pristine set, I need to retrofit the tooling to 4.14 and
> ideally 4.13.  This is in contravention of our normal policy of not
> backporting features, but I think, being optional build-time-only
> tooling, it is worthy of an exception considering the gains we get
> (specifically - to be able to check for ABI breakages on these branches
> in OSSTest).  Backporting to 4.12 and older is far more invasive, due to
> it being before the library build systems were common'd.

I'm comfortable with the backports assuming that code review of the
makefile changes can persuade me that there is no change to the
default behaviour.

Ian.


From xen-devel-bounces@lists.xenproject.org Thu Feb 11 15:24:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 15:24:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83965.157248 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lADpH-0000Gf-Bi; Thu, 11 Feb 2021 15:24:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83965.157248; Thu, 11 Feb 2021 15:24:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lADpH-0000GT-8c; Thu, 11 Feb 2021 15:24:15 +0000
Received: by outflank-mailman (input) for mailman id 83965;
 Thu, 11 Feb 2021 15:24:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=V8IX=HN=gmail.com=wei.liu.linux@srs-us1.protection.inumbo.net>)
 id 1lADpF-0000GO-HT
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 15:24:13 +0000
Received: from mail-wr1-f44.google.com (unknown [209.85.221.44])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 149d9afe-3c24-4bd4-a212-38dcc32cf45d;
 Thu, 11 Feb 2021 15:24:12 +0000 (UTC)
Received: by mail-wr1-f44.google.com with SMTP id v1so968168wrd.6
 for <xen-devel@lists.xenproject.org>; Thu, 11 Feb 2021 07:24:12 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id c5sm5469871wrn.77.2021.02.11.07.24.11
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 11 Feb 2021 07:24:11 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 149d9afe-3c24-4bd4-a212-38dcc32cf45d
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=r73cmlC1j2cHDWraNmDzLtHimsKfOTe23uRKBUQvTXY=;
        b=D/wfeZbJ1f4TyBFbC5qhFm7zoRGJZ25ZPqQom9TpHc1chJsB2ypqYimLcuiADXX3dq
         N+v4PFVLe3oMAwJVFyb+JLxl5cD33QhoqjQcR/+nce8yVhIkdR2pzf4NNpTO8o3c0xQc
         XPYK7uxWjNPJidc8vxgJtL3M5/t2BCVyxrUYS0j813oicV3lBjkZt1nJg/O7jLy+AATg
         fE0tskh3k6x1sxC3JJ/y4d3X7+lI5pZhhH6XBZlOIPD6W939yco2e/3sd08Yq9J6TA/+
         +8lwijbQrU/W7dv2mOInNIsiKBGvOBrTMZajONc55fMtdT66USouiUWHKNg9zt+FqsRA
         XveA==
X-Gm-Message-State: AOAM530pYzYg6J7AGl8p8y6lXPfgzFFKHM6UZ+88ylyTIG0gXwyhcpht
	ZapQCsVD4lVQtUDuDzw75/Q=
X-Google-Smtp-Source: ABdhPJw0ZNLx6cIm1oi5TPu/ZBGpav6tW7wPmN3XoUi4E90WuF9Eo8rynhy/JWMbDVpCiMzyDv1snA==
X-Received: by 2002:adf:9bcf:: with SMTP id e15mr6042718wrc.276.1613057051742;
        Thu, 11 Feb 2021 07:24:11 -0800 (PST)
Date: Thu, 11 Feb 2021 15:24:09 +0000
From: Wei Liu <wei.liu@kernel.org>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, Wei Liu <wei.liu@kernel.org>,
	Paul Durrant <paul@xen.org>,
	"David S. Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>
Subject: Re: [PATCH v2 4/8] xen/netback: fix spurious event detection for
 common event case
Message-ID: <20210211152409.knullq66jv3bkis2@liuwe-devbox-debian-v2>
References: <20210211101616.13788-1-jgross@suse.com>
 <20210211101616.13788-5-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210211101616.13788-5-jgross@suse.com>
User-Agent: NeoMutt/20180716

On Thu, Feb 11, 2021 at 11:16:12AM +0100, Juergen Gross wrote:
> In case of a common event for rx and tx queue the event should be
> regarded to be spurious if no rx and no tx requests are pending.
> 
> Unfortunately, the condition testing for that is wrong, causing an
> event to be regarded as spurious if no rx OR no tx requests are
> pending.
> 
> Fix that, and use local variables for the rx/tx pending indicators in
> order to split the function calls from the if condition.
> 
> Fixes: 23025393dbeb3b ("xen/netback: use lateeoi irq binding")
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Wei Liu <wl@xen.org>
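
The bug and its fix can be illustrated with a minimal standalone sketch (an editor's illustration; the function and variable names are not the actual xen-netback identifiers):

```c
#include <stdbool.h>

/* Buggy check: the event is declared spurious whenever EITHER queue is
 * idle, so a valid event for just one of the two queues is wrongly
 * treated as spurious. */
static bool spurious_buggy(bool rx_pending, bool tx_pending)
{
    return !rx_pending || !tx_pending;
}

/* Fixed check: the common event is spurious only if NEITHER queue has
 * pending requests. */
static bool spurious_fixed(bool rx_pending, bool tx_pending)
{
    return !rx_pending && !tx_pending;
}
```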


From xen-devel-bounces@lists.xenproject.org Thu Feb 11 15:41:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 15:41:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83967.157261 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAE5M-0002Cm-O3; Thu, 11 Feb 2021 15:40:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83967.157261; Thu, 11 Feb 2021 15:40:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAE5M-0002Cf-KB; Thu, 11 Feb 2021 15:40:52 +0000
Received: by outflank-mailman (input) for mailman id 83967;
 Thu, 11 Feb 2021 15:40:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cmTu=HN=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lAE5L-0002Ca-1M
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 15:40:51 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a735cf69-9ff8-4e29-992b-c29ccc755003;
 Thu, 11 Feb 2021 15:40:49 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C1D46AE05;
 Thu, 11 Feb 2021 15:40:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a735cf69-9ff8-4e29-992b-c29ccc755003
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613058048; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=heMe2UqSwIMOR1X/pbl/YTzRjZNXtZTrbTPdga4F57E=;
	b=EMye3K7XdzVFT/f+gO5YzjqfCWYSEMqm4vnCoGtZ4Soow6iJU8tftFXJpJEYyxLULwVuh/
	VC5CERemM4lCX1fEh9LLTfT6z+SJYC9dawbeainvOMQNvQoHgPmPKUILxKRaU8gGpnP/Lm
	V3Qh9q3/YwC5ywZXbGYQZYYitvJ37QM=
Subject: Ping: [PATCH v2 11/17] x86/CPUID: adjust extended leaves out of range
 clearing
From: Jan Beulich <jbeulich@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <255f466c-3c95-88c5-3e55-0f04c9ae1b12@suse.com>
 <0f04f568-e55a-ef20-aa97-fbb199dfae37@suse.com>
Message-ID: <4ad26b59-26d1-5a90-c686-d66b7e5673a4@suse.com>
Date: Thu, 11 Feb 2021 16:40:48 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <0f04f568-e55a-ef20-aa97-fbb199dfae37@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 23.11.2020 15:32, Jan Beulich wrote:
> A maximum extended leaf input value with the high half different from
> 0x8000 should not be considered valid - all leaves should be cleared in
> this case.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v2: Integrate into series.

While most other parts of this series are to be delayed until
(at least) 4.16, I consider this one a bug fix.

Jan

> --- a/tools/tests/cpu-policy/test-cpu-policy.c
> +++ b/tools/tests/cpu-policy/test-cpu-policy.c
> @@ -516,11 +516,22 @@ static void test_cpuid_out_of_range_clea
>              },
>          },
>          {
> +            .name = "no extd",
> +            .nr_markers = 0,
> +            .p = {
> +                /* Clears all markers. */
> +                .extd.max_leaf = 0,
> +
> +                .extd.vendor_ebx = 0xc2,
> +                .extd.raw_fms = 0xc2,
> +            },
> +        },
> +        {
>              .name = "extd",
>              .nr_markers = 1,
>              .p = {
>                  /* Retains marker in leaf 0.  Clears others. */
> -                .extd.max_leaf = 0,
> +                .extd.max_leaf = 0x80000000,
>                  .extd.vendor_ebx = 0xc2,
>  
>                  .extd.raw_fms = 0xc2,
> --- a/xen/lib/x86/cpuid.c
> +++ b/xen/lib/x86/cpuid.c
> @@ -232,7 +232,9 @@ void x86_cpuid_policy_clear_out_of_range
>                      ARRAY_SIZE(p->xstate.raw) - 1);
>      }
>  
> -    zero_leaves(p->extd.raw, (p->extd.max_leaf & 0xffff) + 1,
> +    zero_leaves(p->extd.raw,
> +                ((p->extd.max_leaf >> 16) == 0x8000
> +                 ? (p->extd.max_leaf & 0xffff) + 1 : 0),
>                  ARRAY_SIZE(p->extd.raw) - 1);
>  }
>  
> 
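
The corrected range computation in the quoted hunk can be sketched in isolation (an editor's sketch; zero_leaves() itself is not shown, only the start index it is handed):

```c
#include <stdint.h>

/*
 * Fixed start-index computation: only a max_leaf whose high half is
 * 0x8000 is a valid extended-leaf maximum; any other value is treated
 * as invalid, and every extended leaf gets cleared (start at 0).
 */
static unsigned int extd_zero_start(uint32_t max_leaf)
{
    return (max_leaf >> 16) == 0x8000 ? (max_leaf & 0xffff) + 1 : 0;
}
```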



From xen-devel-bounces@lists.xenproject.org Thu Feb 11 15:42:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 15:42:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83970.157279 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAE7F-0002MK-7G; Thu, 11 Feb 2021 15:42:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83970.157279; Thu, 11 Feb 2021 15:42:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAE7F-0002MD-2R; Thu, 11 Feb 2021 15:42:49 +0000
Received: by outflank-mailman (input) for mailman id 83970;
 Thu, 11 Feb 2021 15:42:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cmTu=HN=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lAE7D-0002M6-JC
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 15:42:47 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7a885a60-44bd-4a7d-bd8e-a9d5c2703629;
 Thu, 11 Feb 2021 15:42:46 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 67110B07D;
 Thu, 11 Feb 2021 15:42:45 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7a885a60-44bd-4a7d-bd8e-a9d5c2703629
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613058165; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=hWYjTk2p4fW93pD3xlZCnWexIq9M2POLfp1SKCK9dVg=;
	b=E0NCC7GTaHwjllRdI6PwcBdMz4hBN1I0EYo0vTJXBB0KzSJVh96VHbiSbeW3FLTomtPxrP
	lpDo+avy0jFcoPMfkC68xMQWqQMYhVodsafUtkF8ySYnO8WCDf4NkSMFMW63kjlXvkDP6m
	NAgVady2gc+DVoozbHKtn+U2VhMG1Bs=
Subject: Re: [PATCH v2 12/17] x86/CPUID: shrink max_{,sub}leaf fields
 according to actual leaf contents
From: Jan Beulich <jbeulich@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Paul Durrant <paul@xen.org>, Wei Liu <wl@xen.org>
Cc: George Dunlap <george.dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <255f466c-3c95-88c5-3e55-0f04c9ae1b12@suse.com>
 <2aaffa0e-e17f-6581-6003-e58d2c9fc1d7@suse.com>
Message-ID: <7d1a0041-4b1a-855c-b522-421df21ca9d4@suse.com>
Date: Thu, 11 Feb 2021 16:42:45 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <2aaffa0e-e17f-6581-6003-e58d2c9fc1d7@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 23.11.2020 15:33, Jan Beulich wrote:
> Zapping leaf data for out of range leaves is just one half of it: To
> avoid guests (bogusly or worse) inferring information from mere leaf
> presence, also shrink maximum indicators such that the respective
> trailing entry is not all blank (unless of course it's the initial
> subleaf of a leaf that's not the final one).
> 
> This is also in preparation of bumping the maximum basic leaf we
> support, to ensure guests not getting exposed related features won't
> observe a change in behavior.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v2: New.

While most other parts of this series are to be delayed until
(at least) 4.16, I consider this one a bug fix as well, just
like the previous one. I also realize only now that I forgot to
Cc Paul on the original submission - sorry.

Jan
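
The shrinking operation in the patch below boils down to dropping trailing all-zero leaves; here is a minimal standalone sketch (an editor's illustration using a generic array of 4-register leaves, not the real cpuid_policy layout):

```c
#include <stdint.h>

/* Walk backwards from the current maximum and stop at the last leaf
 * with any non-zero register; that index becomes the new maximum.
 * Leaf 0 is always retained, matching the quoted implementation. */
static unsigned int shrink_max_leaf(const uint32_t leaves[][4],
                                    unsigned int max_leaf)
{
    unsigned int i;

    for ( i = max_leaf; i; --i )
        if ( leaves[i][0] | leaves[i][1] | leaves[i][2] | leaves[i][3] )
            break;

    return i;
}

/* Example policy: leaves 2 and 3 are entirely blank. */
static const uint32_t example_leaves[4][4] = {
    { 1, 0, 0, 0 },
    { 0, 2, 0, 0 },
    { 0, 0, 0, 0 },
    { 0, 0, 0, 0 },
};
```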

> --- a/tools/tests/cpu-policy/test-cpu-policy.c
> +++ b/tools/tests/cpu-policy/test-cpu-policy.c
> @@ -8,10 +8,13 @@
>  #include <err.h>
>  
>  #include <xen-tools/libs.h>
> +#include <xen/asm/x86-defns.h>
>  #include <xen/asm/x86-vendors.h>
>  #include <xen/lib/x86/cpu-policy.h>
>  #include <xen/domctl.h>
>  
> +#define XSTATE_FP_SSE  (X86_XCR0_FP | X86_XCR0_SSE)
> +
>  static unsigned int nr_failures;
>  #define fail(fmt, ...)                          \
>  ({                                              \
> @@ -564,6 +567,103 @@ static void test_cpuid_out_of_range_clea
>      }
>  }
>  
> +static void test_cpuid_maximum_leaf_shrinking(void)
> +{
> +    static const struct test {
> +        const char *name;
> +        struct cpuid_policy p;
> +    } tests[] = {
> +        {
> +            .name = "basic",
> +            .p = {
> +                /* Very basic information only. */
> +                .basic.max_leaf = 1,
> +                .basic.raw_fms = 0xc2,
> +            },
> +        },
> +        {
> +            .name = "cache",
> +            .p = {
> +                /* Cache subleaves present. */
> +                .basic.max_leaf = 4,
> +                .cache.subleaf[0].type = 1,
> +            },
> +        },
> +        {
> +            .name = "feat#0",
> +            .p = {
> +                /* Subleaf 0 only with some valid bit. */
> +                .basic.max_leaf = 7,
> +                .feat.max_subleaf = 0,
> +                .feat.fsgsbase = 1,
> +            },
> +        },
> +        {
> +            .name = "feat#1",
> +            .p = {
> +                /* Subleaf 1 only with some valid bit. */
> +                .basic.max_leaf = 7,
> +                .feat.max_subleaf = 1,
> +                .feat.avx_vnni = 1,
> +            },
> +        },
> +        {
> +            .name = "topo",
> +            .p = {
> +                /* Topology subleaves present. */
> +                .basic.max_leaf = 0xb,
> +                .topo.subleaf[0].type = 1,
> +            },
> +        },
> +        {
> +            .name = "xstate",
> +            .p = {
> +                /* First subleaf always valid (and then non-zero). */
> +                .basic.max_leaf = 0xd,
> +                .xstate.xcr0_low = XSTATE_FP_SSE,
> +            },
> +        },
> +        {
> +            .name = "extd",
> +            .p = {
> +                /* Commonly available information only. */
> +                .extd.max_leaf = 0x80000008,
> +                .extd.maxphysaddr = 0x28,
> +                .extd.maxlinaddr = 0x30,
> +            },
> +        },
> +    };
> +
> +    printf("Testing CPUID maximum leaf shrinking:\n");
> +
> +    for ( size_t i = 0; i < ARRAY_SIZE(tests); ++i )
> +    {
> +        const struct test *t = &tests[i];
> +        struct cpuid_policy *p = memdup(&t->p);
> +
> +        p->basic.max_leaf = ARRAY_SIZE(p->basic.raw) - 1;
> +        p->feat.max_subleaf = ARRAY_SIZE(p->feat.raw) - 1;
> +        p->extd.max_leaf = 0x80000000 | (ARRAY_SIZE(p->extd.raw) - 1);
> +
> +        x86_cpuid_policy_shrink_max_leaves(p);
> +
> +        /* Check the resulting max (sub)leaf values against expectations. */
> +        if ( p->basic.max_leaf != t->p.basic.max_leaf )
> +             fail("  Test %s basic fail - expected %#x, got %#x\n",
> +                  t->name, t->p.basic.max_leaf, p->basic.max_leaf);
> +
> +        if ( p->extd.max_leaf != t->p.extd.max_leaf )
> +             fail("  Test %s extd fail - expected %#x, got %#x\n",
> +                  t->name, t->p.extd.max_leaf, p->extd.max_leaf);
> +
> +        if ( p->feat.max_subleaf != t->p.feat.max_subleaf )
> +             fail("  Test %s feat fail - expected %#x, got %#x\n",
> +                  t->name, t->p.feat.max_subleaf, p->feat.max_subleaf);
> +
> +        free(p);
> +    }
> +}
> +
>  static void test_is_compatible_success(void)
>  {
>      static struct test {
> @@ -679,6 +779,7 @@ int main(int argc, char **argv)
>      test_cpuid_serialise_success();
>      test_cpuid_deserialise_failure();
>      test_cpuid_out_of_range_clearing();
> +    test_cpuid_maximum_leaf_shrinking();
>  
>      test_msr_serialise_success();
>      test_msr_deserialise_failure();
> --- a/xen/arch/x86/cpuid.c
> +++ b/xen/arch/x86/cpuid.c
> @@ -346,6 +346,8 @@ static void __init calculate_host_policy
>          p->extd.raw[0xa].d |= ((1u << SVM_FEATURE_VMCBCLEAN) |
>                                 (1u << SVM_FEATURE_TSCRATEMSR));
>      }
> +
> +    x86_cpuid_policy_shrink_max_leaves(p);
>  }
>  
>  static void __init guest_common_default_feature_adjustments(uint32_t *fs)
> @@ -415,6 +417,8 @@ static void __init calculate_pv_max_poli
>      recalculate_xstate(p);
>  
>      p->extd.raw[0xa] = EMPTY_LEAF; /* No SVM for PV guests. */
> +
> +    x86_cpuid_policy_shrink_max_leaves(p);
>  }
>  
>  static void __init calculate_pv_def_policy(void)
> @@ -435,6 +439,8 @@ static void __init calculate_pv_def_poli
>      sanitise_featureset(pv_featureset);
>      cpuid_featureset_to_policy(pv_featureset, p);
>      recalculate_xstate(p);
> +
> +    x86_cpuid_policy_shrink_max_leaves(p);
>  }
>  
>  static void __init calculate_hvm_max_policy(void)
> @@ -494,6 +500,8 @@ static void __init calculate_hvm_max_pol
>      sanitise_featureset(hvm_featureset);
>      cpuid_featureset_to_policy(hvm_featureset, p);
>      recalculate_xstate(p);
> +
> +    x86_cpuid_policy_shrink_max_leaves(p);
>  }
>  
>  static void __init calculate_hvm_def_policy(void)
> @@ -518,6 +526,8 @@ static void __init calculate_hvm_def_pol
>      sanitise_featureset(hvm_featureset);
>      cpuid_featureset_to_policy(hvm_featureset, p);
>      recalculate_xstate(p);
> +
> +    x86_cpuid_policy_shrink_max_leaves(p);
>  }
>  
>  void __init init_host_cpuid(void)
> @@ -704,6 +714,8 @@ void recalculate_cpuid_policy(struct dom
>  
>      if ( !p->extd.page1gb )
>          p->extd.raw[0x19] = EMPTY_LEAF;
> +
> +    x86_cpuid_policy_shrink_max_leaves(p);
>  }
>  
>  int init_domain_cpuid_policy(struct domain *d)
> --- a/xen/arch/x86/hvm/viridian/viridian.c
> +++ b/xen/arch/x86/hvm/viridian/viridian.c
> @@ -121,7 +121,9 @@ void cpuid_viridian_leaves(const struct
>      switch ( leaf )
>      {
>      case 0:
> -        res->a = 0x40000006; /* Maximum leaf */
> +        /* Maximum leaf */
> +        cpuid_viridian_leaves(v, 0x40000006, 0, res);
> +        res->a = res->a | res->b | res->c | res->d ? 0x40000006 : 0x40000004;
>          memcpy(&res->b, "Micr", 4);
>          memcpy(&res->c, "osof", 4);
>          memcpy(&res->d, "t Hv", 4);
> --- a/xen/arch/x86/traps.c
> +++ b/xen/arch/x86/traps.c
> @@ -964,13 +964,15 @@ void cpuid_hypervisor_leaves(const struc
>      uint32_t base = is_viridian_domain(d) ? 0x40000100 : 0x40000000;
>      uint32_t idx  = leaf - base;
>      unsigned int limit = is_viridian_domain(d) ? p->hv2_limit : p->hv_limit;
> +    unsigned int dflt = is_pv_domain(d) ? XEN_CPUID_MAX_PV_NUM_LEAVES
> +                                        : XEN_CPUID_MAX_HVM_NUM_LEAVES;
>  
>      if ( limit == 0 )
>          /* Default number of leaves */
> -        limit = XEN_CPUID_MAX_NUM_LEAVES;
> +        limit = dflt;
>      else
>          /* Clamp toolstack value between 2 and MAX_NUM_LEAVES. */
> -        limit = min(max(limit, 2u), XEN_CPUID_MAX_NUM_LEAVES + 0u);
> +        limit = min(max(limit, 2u), dflt);
>  
>      if ( idx > limit )
>          return;
> --- a/xen/include/public/arch-x86/cpuid.h
> +++ b/xen/include/public/arch-x86/cpuid.h
> @@ -113,6 +113,10 @@
>  /* Max. address width in bits taking memory hotplug into account. */
>  #define XEN_CPUID_MACHINE_ADDRESS_WIDTH_MASK (0xffu << 0)
>  
> -#define XEN_CPUID_MAX_NUM_LEAVES 5
> +#define XEN_CPUID_MAX_PV_NUM_LEAVES 5
> +#define XEN_CPUID_MAX_HVM_NUM_LEAVES 4
> +#define XEN_CPUID_MAX_NUM_LEAVES \
> +    (XEN_CPUID_MAX_PV_NUM_LEAVES > XEN_CPUID_MAX_HVM_NUM_LEAVES ? \
> +     XEN_CPUID_MAX_PV_NUM_LEAVES : XEN_CPUID_MAX_HVM_NUM_LEAVES)
>  
>  #endif /* __XEN_PUBLIC_ARCH_X86_CPUID_H__ */
> --- a/xen/include/xen/lib/x86/cpuid.h
> +++ b/xen/include/xen/lib/x86/cpuid.h
> @@ -351,6 +351,13 @@ void x86_cpuid_policy_fill_native(struct
>   */
>  void x86_cpuid_policy_clear_out_of_range_leaves(struct cpuid_policy *p);
>  
> +/**
> + * Shrink max leaf/subleaf values such that the last respective valid entry
> + * isn't all blank.  While permitted by the spec, such extraneous leaves may
> + * provide undue "hints" to guests.
> + */
> +void x86_cpuid_policy_shrink_max_leaves(struct cpuid_policy *p);
> +
>  #ifdef __XEN__
>  #include <public/arch-x86/xen.h>
>  typedef XEN_GUEST_HANDLE_64(xen_cpuid_leaf_t) cpuid_leaf_buffer_t;
> --- a/xen/lib/x86/cpuid.c
> +++ b/xen/lib/x86/cpuid.c
> @@ -238,6 +238,45 @@ void x86_cpuid_policy_clear_out_of_range
>                  ARRAY_SIZE(p->extd.raw) - 1);
>  }
>  
> +void x86_cpuid_policy_shrink_max_leaves(struct cpuid_policy *p)
> +{
> +    unsigned int i;
> +
> +    p->basic.raw[0x4] = p->cache.raw[0];
> +
> +    for ( i = p->feat.max_subleaf; i; --i )
> +        if ( p->feat.raw[i].a | p->feat.raw[i].b |
> +             p->feat.raw[i].c | p->feat.raw[i].d )
> +            break;
> +    p->feat.max_subleaf = i;
> +    p->basic.raw[0x7] = p->feat.raw[0];
> +
> +    p->basic.raw[0xb] = p->topo.raw[0];
> +
> +    /*
> +     * Due to the way xstate gets handled in the hypervisor (see
> +     * recalculate_xstate()) there is (for now at least) no need to fiddle
> +     * with the xstate subleaves (IOW we assume they're already in consistent
> +     * shape, coming from either hardware or recalculate_xstate()).
> +     */
> +    p->basic.raw[0xd] = p->xstate.raw[0];
> +
> +    for ( i = p->basic.max_leaf; i; --i )
> +        if ( p->basic.raw[i].a | p->basic.raw[i].b |
> +             p->basic.raw[i].c | p->basic.raw[i].d )
> +            break;
> +    p->basic.max_leaf = i;
> +
> +    for ( i = p->extd.max_leaf & 0xffff; i; --i )
> +        if ( p->extd.raw[i].a | p->extd.raw[i].b |
> +             p->extd.raw[i].c | p->extd.raw[i].d )
> +            break;
> +    if ( i | p->extd.raw[0].b | p->extd.raw[0].c | p->extd.raw[0].d )
> +        p->extd.max_leaf = 0x80000000 | i;
> +    else
> +        p->extd.max_leaf = 0;
> +}
> +
>  const uint32_t *x86_cpuid_lookup_deep_deps(uint32_t feature)
>  {
>      static const uint32_t deep_features[] = INIT_DEEP_FEATURES;
> 
> 
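
The limit clamping changed in the traps.c hunk above can be sketched in isolation (an editor's sketch; `dflt` stands for the per-guest-type maximum introduced by the patch):

```c
/* A toolstack-supplied limit of 0 means "use the default"; any other
 * value is clamped into [2, dflt], mirroring the quoted hunk. */
static unsigned int clamp_hv_limit(unsigned int limit, unsigned int dflt)
{
    if ( limit == 0 )
        return dflt;
    if ( limit < 2 )
        limit = 2;
    return limit < dflt ? limit : dflt;
}
```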



From xen-devel-bounces@lists.xenproject.org Thu Feb 11 15:51:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 15:51:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83973.157291 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAEFO-0003Kd-53; Thu, 11 Feb 2021 15:51:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83973.157291; Thu, 11 Feb 2021 15:51:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAEFO-0003KU-1V; Thu, 11 Feb 2021 15:51:14 +0000
Received: by outflank-mailman (input) for mailman id 83973;
 Thu, 11 Feb 2021 15:51:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fzmj=HN=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lAEFM-0003J1-SD
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 15:51:12 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0c262834-290a-4b06-a34f-4e0815d7ef6b;
 Thu, 11 Feb 2021 15:51:11 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0c262834-290a-4b06-a34f-4e0815d7ef6b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613058671;
  h=from:to:cc:subject:date:message-id:mime-version;
  bh=rLYuxxztor8oevutfFBEJQJ0xtELagH/8VnRXUr1D2I=;
  b=BSXyALP4zwiBEjup10ktmQ0frkLzWAzQcRzCiXqN0HOFq7srTC0fhrMj
   9xz0FCrmfYH8fp66B5AigU1BJQR3IdApsxthGwLMQqsZSI+s0nG+RRDP6
   +j4TSCuDwIb09gIQ6hyB95+PSGXFLjqFq+bhmqB49pPIzD9UlVrnmWV8n
   U=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 37429781
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,170,1610427600"; 
   d="scan'208";a="37429781"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Doug Goldstein
	<cardoe@cardoe.com>, George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Jan Beulich <JBeulich@suse.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Julien Grall <julien@xen.org>
Subject: [PATCH for-4.15] automation: Add Ubuntu Focal builds
Date: Thu, 11 Feb 2021 15:50:41 +0000
Message-ID: <20210211155041.4811-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain

Logical continuation of c/s eb52442d7f "automation: Add Ubuntu:focal
container".

No further changes required.  Everything builds fine.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Doug Goldstein <cardoe@cardoe.com>
CC: George Dunlap <George.Dunlap@eu.citrix.com>
CC: Ian Jackson <iwj@xenproject.org>
CC: Jan Beulich <JBeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Wei Liu <wl@xen.org>
CC: Julien Grall <julien@xen.org>
---
 automation/gitlab-ci/build.yaml | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/automation/gitlab-ci/build.yaml b/automation/gitlab-ci/build.yaml
index db68dd0b69..63258f830e 100644
--- a/automation/gitlab-ci/build.yaml
+++ b/automation/gitlab-ci/build.yaml
@@ -390,6 +390,26 @@ ubuntu-bionic-gcc-debug:
   variables:
     CONTAINER: ubuntu:bionic
 
+focal-gcc:
+  extends: .gcc-x86-64-build
+  variables:
+    CONTAINER: ubuntu:focal
+
+focal-gcc-debug:
+  extends: .gcc-x86-64-build-debug
+  variables:
+    CONTAINER: ubuntu:focal
+
+focal-clang:
+  extends: .clang-x86-64-build
+  variables:
+    CONTAINER: ubuntu:focal
+
+focal-clang-debug:
+  extends: .clang-x86-64-build-debug
+  variables:
+    CONTAINER: ubuntu:focal
+
 opensuse-leap-clang:
   extends: .clang-x86-64-build
   variables:
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Thu Feb 11 15:57:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 15:57:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83975.157303 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAELH-0003Y9-SO; Thu, 11 Feb 2021 15:57:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83975.157303; Thu, 11 Feb 2021 15:57:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAELH-0003Y2-Ni; Thu, 11 Feb 2021 15:57:19 +0000
Received: by outflank-mailman (input) for mailman id 83975;
 Thu, 11 Feb 2021 15:57:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fzmj=HN=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lAELG-0003Xx-HJ
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 15:57:18 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 811e8374-59d5-49b6-88f4-79282f2dbc9f;
 Thu, 11 Feb 2021 15:57:17 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 811e8374-59d5-49b6-88f4-79282f2dbc9f
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613059037;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=fDsorYK/RObkuApe5Ex3u735vz7rMoAh+0h775up/ZE=;
  b=ReWOvtdRcHZrTIm8zDK79uRb2L/F0BV+tVXTp7cyISNWh1+ktmhWinVx
   lmdi0znTjQgiMPCcm15dwuWAH5G2jx+fkrzrmqiooSsi0j20tNsFBLiUy
   nsHZ9MM+bXoi1kqpSWOH9RFTp5EcAHEtdSsI6ncKnzRf1Gx2025h9dR0Z
   4=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 37085690
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,170,1610427600"; 
   d="scan'208";a="37085690"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hDwh4/4ef9VLyR7F7NsuS8PDRC+ZJGdUO5HlYIqmSlw=;
 b=Gz0HHYBqMYWe0r/VadyB++aTIQvK0BSGypUI9XSzKhdgIUErstAGVe5Q6jvTaBdIG1+9spbwtES6u9RqAnfSakW/ahY0d1GY06qNaX81OdgvCy8F08mwP/Aoxpwb3RzwDz8eWa5ULLZnODdIvJyuQNSuirU1we3ZfOUSI4LRBjA=
Subject: Re: [PATCH for-4.15] automation: Add Ubuntu Focal builds
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Doug Goldstein <cardoe@cardoe.com>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Ian Jackson <iwj@xenproject.org>, Jan Beulich
	<JBeulich@suse.com>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>, Julien Grall <julien@xen.org>
References: <20210211155041.4811-1-andrew.cooper3@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <3928a36c-9ef0-dd2b-a4b6-0a94092cab88@citrix.com>
Date: Thu, 11 Feb 2021 15:57:06 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
In-Reply-To: <20210211155041.4811-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0086.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:8::26) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 2ceb4597-7f73-427e-0be4-08d8cea5b21b
X-MS-TrafficTypeDiagnostic: BYAPR03MB3621:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB3621BF8702A06548E906E566BA8C9@BYAPR03MB3621.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:179;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 2ceb4597-7f73-427e-0be4-08d8cea5b21b
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Feb 2021 15:57:12.7996
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: I+K2RqDg6YrZcFZtJYsJxelxkXwOAShJlKURa5D7+uCYpLNOsM1cMOlLfUl8Hnb5E+AONDLEnSk9z43wLw2m6K82w9qQ5mbbL0dr6LauhZU=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB3621
X-OriginatorOrg: citrix.com

On 11/02/2021 15:50, Andrew Cooper wrote:
> Logical continuation of c/s eb52442d7f "automation: Add Ubuntu:focal
> container".
>
> No further changes required.  Everything builds fine.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Doug Goldstein <cardoe@cardoe.com>
> CC: George Dunlap <George.Dunlap@eu.citrix.com>
> CC: Ian Jackson <iwj@xenproject.org>
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Wei Liu <wl@xen.org>
> CC: Julien Grall <julien@xen.org>

Forgot to say -
https://gitlab.com/xen-project/people/andyhhp/xen/-/pipelines/254912313
is a pipeline run with everything green.

Also, I need to prefix the names with ubuntu- which I can do on commit.


From xen-devel-bounces@lists.xenproject.org Thu Feb 11 16:05:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 16:05:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83978.157314 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAET9-00053Q-Lg; Thu, 11 Feb 2021 16:05:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83978.157314; Thu, 11 Feb 2021 16:05:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAET9-00053J-Id; Thu, 11 Feb 2021 16:05:27 +0000
Received: by outflank-mailman (input) for mailman id 83978;
 Thu, 11 Feb 2021 16:05:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ygeM=HN=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1lAET7-00053E-9M
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 16:05:25 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:7e1b::628])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b98b296f-58f7-4d78-bed0-77c64212676d;
 Thu, 11 Feb 2021 16:05:22 +0000 (UTC)
Received: from AM5PR0201CA0002.eurprd02.prod.outlook.com
 (2603:10a6:203:3d::12) by VI1PR08MB5360.eurprd08.prod.outlook.com
 (2603:10a6:803:132::10) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3846.29; Thu, 11 Feb
 2021 16:05:19 +0000
Received: from AM5EUR03FT032.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:3d:cafe::59) by AM5PR0201CA0002.outlook.office365.com
 (2603:10a6:203:3d::12) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3846.26 via Frontend
 Transport; Thu, 11 Feb 2021 16:05:19 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT032.mail.protection.outlook.com (10.152.16.84) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3846.25 via Frontend Transport; Thu, 11 Feb 2021 16:05:18 +0000
Received: ("Tessian outbound 587c3d093005:v71");
 Thu, 11 Feb 2021 16:05:17 +0000
Received: from 57ded33c6e34.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 C6675330-DB1B-45B7-BDB5-60754596E4BE.1; 
 Thu, 11 Feb 2021 16:05:06 +0000
Received: from EUR03-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 57ded33c6e34.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 11 Feb 2021 16:05:06 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com (2603:10a6:10:49::10)
 by DB7PR08MB3081.eurprd08.prod.outlook.com (2603:10a6:5:25::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3846.29; Thu, 11 Feb
 2021 16:05:02 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::f5c1:9694:9263:d90d]) by DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::f5c1:9694:9263:d90d%2]) with mapi id 15.20.3825.030; Thu, 11 Feb 2021
 16:05:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b98b296f-58f7-4d78-bed0-77c64212676d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=sUM9B7FHmZa4CYsKFSzOmpSDQazT5R9dgCoCh2poy7c=;
 b=Iin1v89M/APodQSc/SpEymPadrkyYkac1VqhyRHrHhXJb3XFXTrcf7r99bWw1b/GB/OYiM59kZzHjo9fsPDbxN+C1IAUO7zOXZ4uVyfGgFbozyJ3sNaoQ5iwQP79gba8Sdpk+L54Aj+p0fpevgumTfsvrjiRmKbw4hfbd9Cog6Q=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 3d2d24650d570cb4
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=sUM9B7FHmZa4CYsKFSzOmpSDQazT5R9dgCoCh2poy7c=;
 b=Iin1v89M/APodQSc/SpEymPadrkyYkac1VqhyRHrHhXJb3XFXTrcf7r99bWw1b/GB/OYiM59kZzHjo9fsPDbxN+C1IAUO7zOXZ4uVyfGgFbozyJ3sNaoQ5iwQP79gba8Sdpk+L54Aj+p0fpevgumTfsvrjiRmKbw4hfbd9Cog6Q=
From: Rahul Singh <Rahul.Singh@arm.com>
To: Julien Grall <julien@xen.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, "lucmiccio@gmail.com"
	<lucmiccio@gmail.com>, xen-devel <xen-devel@lists.xenproject.org>, Bertrand
 Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Stefano Stabellini
	<stefano.stabellini@xilinx.com>
Subject: Re: [PATCH v2] xen/arm: fix gnttab_need_iommu_mapping
Thread-Topic: [PATCH v2] xen/arm: fix gnttab_need_iommu_mapping
Thread-Index:
 AQHW/ks46k4kUfhzpk6LTniK0S/PbqpPzhoAgAB7ugCAATYRAIAAKYkAgAAJgACAABz0AIABJMKAgAAJKICAACTqAA==
Date: Thu, 11 Feb 2021 16:05:00 +0000
Message-ID: <DC7F1705-54B3-4543-8222-E7BCF1A501F7@arm.com>
References: <20210208184932.23468-1-sstabellini@kernel.org>
 <B96B5E21-0600-4664-899D-D38A18DE7A8C@arm.com>
 <alpine.DEB.2.21.2102091226560.8948@sstabellini-ThinkPad-T480s>
 <EFFD35EA-378B-4C5C-8485-7EA5265E89E4@arm.com>
 <4e4f7d25-6f5f-1016-b1c9-7aa902d637b8@xen.org>
 <ECC82E19-3504-4E0E-B3EA-D0E46DD842C6@arm.com>
 <c573b3a0-186d-626e-6670-f8fc28285e3d@xen.org>
 <BFC5858A-3631-48E1-AB87-40EECF95FA66@arm.com>
 <489c1b89-67f0-5d47-d527-3ea580b7cc43@xen.org>
In-Reply-To: <489c1b89-67f0-5d47-d527-3ea580b7cc43@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [80.1.41.211]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 4d5817d7-a936-4ffc-35d1-08d8cea6d3c2
x-ms-traffictypediagnostic: DB7PR08MB3081:|VI1PR08MB5360:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR08MB5360A3EC416DD0EB0AC3522EFC8C9@VI1PR08MB5360.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <ED3F552400FB73469284D781206D3F5A@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3081
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT032.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
 0ecaa4b6-7c92-4ae7-b816-08d8cea6ca30
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Feb 2021 16:05:18.4748
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 4d5817d7-a936-4ffc-35d1-08d8cea6d3c2
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
 AM5EUR03FT032.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB5360

Hello Julien,

> On 11 Feb 2021, at 1:52 pm, Julien Grall <julien@xen.org> wrote:
> 
> 
> 
> On 11/02/2021 13:20, Rahul Singh wrote:
>> Hello Julien,
> 
> Hi Rahul,
> 
>>> On 10 Feb 2021, at 7:52 pm, Julien Grall <julien@xen.org> wrote:
>>> 
>>> 
>>> 
>>> On 10/02/2021 18:08, Rahul Singh wrote:
>>>> Hello Julien,
>>>>> On 10 Feb 2021, at 5:34 pm, Julien Grall <julien@xen.org> wrote:
>>>>> 
>>>>> Hi,
>>>>> 
>>>>> On 10/02/2021 15:06, Rahul Singh wrote:
>>>>>>> On 9 Feb 2021, at 8:36 pm, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>>>>>> 
>>>>>>> On Tue, 9 Feb 2021, Rahul Singh wrote:
>>>>>>>>> On 8 Feb 2021, at 6:49 pm, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>>>>>>>> 
>>>>>>>>> Commit 91d4eca7add broke gnttab_need_iommu_mapping on ARM.
>>>>>>>>> The offending chunk is:
>>>>>>>>> 
>>>>>>>>> #define gnttab_need_iommu_mapping(d)                    \
>>>>>>>>> -    (is_domain_direct_mapped(d) && need_iommu(d))
>>>>>>>>> +    (is_domain_direct_mapped(d) && need_iommu_pt_sync(d))
>>>>>>>>> 
>>>>>>>>> On ARM we need gnttab_need_iommu_mapping to be true for dom0 when it is
>>>>>>>>> directly mapped and IOMMU is enabled for the domain, like the old check
>>>>>>>>> did, but the new check is always false.
>>>>>>>>> 
>>>>>>>>> In fact, need_iommu_pt_sync is defined as dom_iommu(d)->need_sync and
>>>>>>>>> need_sync is set as:
>>>>>>>>> 
>>>>>>>>>   if ( !is_hardware_domain(d) || iommu_hwdom_strict )
>>>>>>>>>       hd->need_sync = !iommu_use_hap_pt(d);
>>>>>>>>> 
>>>>>>>>> iommu_use_hap_pt(d) means that the page-table used by the IOMMU is the
>>>>>>>>> P2M. It is true on ARM. need_sync means that you have a separate IOMMU
>>>>>>>>> page-table and it needs to be updated for every change. need_sync is set
>>>>>>>>> to false on ARM. Hence, gnttab_need_iommu_mapping(d) is false too,
>>>>>>>>> which is wrong.
>>>>>>>>> 
>>>>>>>>> As a consequence, when using PV network from a domU on a system where
>>>>>>>>> IOMMU is on from Dom0, I get:
>>>>>>>>> 
>>>>>>>>> (XEN) smmu: /smmu@fd800000: Unhandled context fault: fsr=0x402, iova=0x8424cb148, fsynr=0xb0001, cb=0
>>>>>>>>> [   68.290307] macb ff0e0000.ethernet eth0: DMA bus error: HRESP not OK
>>>>>>>>> 
>>>>>>>>> The fix is to go back to something along the lines of the old
>>>>>>>>> implementation of gnttab_need_iommu_mapping.
>>>>>>>>> 
>>>>>>>>> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
>>>>>>>>> Fixes: 91d4eca7add
>>>>>>>>> Backport: 4.12+
>>>>>>>>> 
>>>>>>>>> ---
>>>>>>>>> 
>>>>>>>>> Given the severity of the bug, I would like to request this patch to be
>>>>>>>>> backported to 4.12 too, even if 4.12 is security-fixes only since Oct
>>>>>>>>> 2020.
>>>>>>>>> 
>>>>>>>>> For the 4.12 backport, we can use iommu_enabled() instead of
>>>>>>>>> is_iommu_enabled() in the implementation of gnttab_need_iommu_mapping.
>>>>>>>>> 
>>>>>>>>> Changes in v2:
>>>>>>>>> - improve commit message
>>>>>>>>> - add is_iommu_enabled(d) to the check
>>>>>>>>> ---
>>>>>>>>> xen/include/asm-arm/grant_table.h | 2 +-
>>>>>>>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>>>>> 
>>>>>>>>> diff --git a/xen/include/asm-arm/grant_table.h b/xen/include/asm-arm/gr
YW50X3RhYmxlLmgNCj4+Pj4+Pj4+PiBpbmRleCA2ZjU4NWIxNTM4Li4wY2U3N2Y5YTFjIDEwMDY0
NA0KPj4+Pj4+Pj4+IC0tLSBhL3hlbi9pbmNsdWRlL2FzbS1hcm0vZ3JhbnRfdGFibGUuaA0KPj4+
Pj4+Pj4+ICsrKyBiL3hlbi9pbmNsdWRlL2FzbS1hcm0vZ3JhbnRfdGFibGUuaA0KPj4+Pj4+Pj4+
IEBAIC04OSw3ICs4OSw3IEBAIGludCByZXBsYWNlX2dyYW50X2hvc3RfbWFwcGluZyh1bnNpZ25l
ZCBsb25nIGdwYWRkciwgbWZuX3QgbWZuLA0KPj4+Pj4+Pj4+ICAgICgoKGkpID49IG5yX3N0YXR1
c19mcmFtZXModCkpID8gSU5WQUxJRF9HRk4gOiAodCktPmFyY2guc3RhdHVzX2dmbltpXSkNCj4+
Pj4+Pj4+PiANCj4+Pj4+Pj4+PiAjZGVmaW5lIGdudHRhYl9uZWVkX2lvbW11X21hcHBpbmcoZCkg
ICAgICAgICAgICAgICAgICAgIFwNCj4+Pj4+Pj4+PiAtICAgIChpc19kb21haW5fZGlyZWN0X21h
cHBlZChkKSAmJiBuZWVkX2lvbW11X3B0X3N5bmMoZCkpDQo+Pj4+Pj4+Pj4gKyAgICAoaXNfZG9t
YWluX2RpcmVjdF9tYXBwZWQoZCkgJiYgaXNfaW9tbXVfZW5hYmxlZChkKSkNCj4+Pj4+Pj4+PiAN
Cj4+Pj4+Pj4+PiAjZW5kaWYgLyogX19BU01fR1JBTlRfVEFCTEVfSF9fICovDQo+Pj4+Pj4+PiAN
Cj4+Pj4+Pj4+IEkgdGVzdGVkIHRoZSBwYXRjaCBhbmQgd2hpbGUgY3JlYXRpbmcgdGhlIGd1ZXN0
IEkgb2JzZXJ2ZWQgdGhlIGJlbG93IHdhcm5pbmcgZnJvbSBMaW51eCBmb3IgYmxvY2sgZGV2aWNl
Lg0KPj4+Pj4+Pj4gaHR0cHM6Ly9lbGl4aXIuYm9vdGxpbi5jb20vbGludXgvdjQuMy9zb3VyY2Uv
ZHJpdmVycy9ibG9jay94ZW4tYmxrYmFjay94ZW5idXMuYyNMMjU4DQo+Pj4+Pj4+IA0KPj4+Pj4+
PiBTbyB5b3UgYXJlIGNyZWF0aW5nIGEgZ3Vlc3Qgd2l0aCAieGwgY3JlYXRlIiBpbiBkb20wIGFu
ZCB5b3Ugc2VlIHRoZQ0KPj4+Pj4+PiB3YXJuaW5ncyBiZWxvdyBwcmludGVkIGJ5IHRoZSBEb20w
IGtlcm5lbD8gSSBpbWFnaW5lIHRoZSBkb21VIGhhcyBhDQo+Pj4+Pj4+IHZpcnR1YWwgImRpc2si
IG9mIHNvbWUgc29ydC4NCj4+Pj4+PiBZZXMgeW91IGFyZSByaWdodCBJIGFtIHRyeWluZyB0byBj
cmVhdGUgdGhlIGd1ZXN0IHdpdGggInhsIGNyZWF0ZeKAnSBhbmQgYmVmb3JlIHRoYXQsIEkgY3Jl
YXRlZCB0aGUgbG9naWNhbCB2b2x1bWUgYW5kIHRyeWluZyB0byBhdHRhY2ggdGhlIGxvZ2ljYWwg
dm9sdW1lDQo+Pj4+Pj4gYmxvY2sgdG8gdGhlIGRvbWFpbiB3aXRoIOKAnHhsIGJsb2NrLWF0dGFj
aOKAnS4gSSBvYnNlcnZlZCB0aGlzIGVycm9yIHdpdGggdGhlICJ4bCBibG9jay1hdHRhY2jigJ0g
Y29tbWFuZC4NCj4+Pj4+PiBUaGlzIGlzc3VlIG9jY3VycyBhZnRlciBhcHBseWluZyB0aGlzIHBh
dGNoIGFzIHdoYXQgSSBvYnNlcnZlZCB0aGlzIHBhdGNoIGludHJvZHVjZSB0aGUgY2FsbHMgdG8g
aW9tbXVfbGVnYWN5X3ssIHVufW1hcCgpIHRvIG1hcCB0aGUgZ3JhbnQgcGFnZXMgZm9yDQo+Pj4+
Pj4gSU9NTVUgdGhhdCB0b3VjaGVzIHRoZSBwYWdlLXRhYmxlcy4gSSBhbSBub3Qgc3VyZSBidXQg
d2hhdCBJIG9ic2VydmVkIGlzIHRoYXQgc29tZXRoaW5nIGlzIHdyaXR0ZW4gd3Jvbmcgd2hlbiBp
b21tX3VubWFwIGNhbGxzIHVubWFwIHRoZSBwYWdlcw0KPj4+Pj4+IGJlY2F1c2Ugb2YgdGhhdCBp
c3N1ZSBpcyBvYnNlcnZlZC4NCj4+Pj4+IA0KPj4+Pj4gQ2FuIHlvdSBjbGFyaWZ5IHdoYXQgeW91
IG1lYW4gYnkgIndyaXR0ZW4gd3JvbmciPyBXaGF0IHNvcnQgb2YgZXJyb3IgZG8geW91IHNlZSBp
biB0aGUgaW9tbXVfdW5tYXAoKT8NCj4+Pj4gSSBtaWdodCBiZSB3cm9uZyBhcyBwZXIgbXkgdW5k
ZXJzdGFuZGluZyBmb3IgQVJNIHdlIGFyZSBzaGFyaW5nIHRoZSBQMk0gYmV0d2VlbiBDUFUgYW5k
IElPTU1VIGFsd2F5cyBhbmQgdGhlIG1hcF9ncmFudF9yZWYoKSBmdW5jdGlvbiBpcyB3cml0dGVu
IGluIHN1Y2ggYSB3YXkgdGhhdCB3ZSBoYXZlIHRvIGNhbGwgaW9tbXVfbGVnYWN5X3ssIHVufW1h
cCgpIG9ubHkgaWYgUDJNIGlzIG5vdCBzaGFyZWQuDQo+Pj4gDQo+Pj4gbWFwX2dyYW50X3JlZigp
IHdpbGwgY2FsbCB0aGUgSU9NTVUgaWYgZ250dGFiX25lZWRfaW9tbXVfbWFwcGluZygpIHJldHVy
bnMgdHJ1ZS4gSSBkb24ndCByZWFsbHkgc2VlIHdoZXJlIHRoaXMgaXMgYXNzdW1pbmcgdGhlIFAy
TSBpcyBub3Qgc2hhcmVkLg0KPj4+IA0KPj4+IEluIGZhY3QsIG9uIHg4NiwgdGhpcyB3aWxsIGFs
d2F5cyBiZSBmYWxzZSBmb3IgSFZNIGRvbWFpbiAodGhleSBzdXBwb3J0IGJvdGggc2hhcmVkIGFu
ZCBzZXBhcmF0ZSBwYWdlLXRhYmxlcykuDQo+Pj4gDQo+Pj4+IEFzIHdlIGFyZSBzaGFyaW5nIHRo
ZSBQMk0gd2hlbiB3ZSBjYWxsIHRoZSBpb21tdV9tYXAoKSBmdW5jdGlvbiBpdCB3aWxsIG92ZXJ3
cml0ZSB0aGUgZXhpc3RpbmcgR0ZOIC0+IE1GTiAoIEZvciBET00wIEdGTiBpcyBzYW1lIGFzIE1G
TikgZW50cnkgYW5kIHdoZW4gd2UgY2FsbCBpb21tdV91bm1hcCgpIGl0IHdpbGwgdW5tYXAgdGhl
ICAoR0ZOIC0+IE1GTiApIGVudHJ5IGZyb20gdGhlIHBhZ2UtdGFibGUuDQo+Pj4gQUZBSUssIHRo
ZXJlIHNob3VsZCBiZSBub3RoaW5nIG1hcHBlZCBhdCB0aGF0IEdGTiBiZWNhdXNlIHRoZSBwYWdl
IGJlbG9uZ3MgdG8gdGhlIGd1ZXN0LiBBdCB3b3JzZSwgd2Ugd291bGQgb3ZlcndyaXRlIGEgbWFw
cGluZyB0aGF0IGlzIHRoZSBzYW1lLg0KPj4gPiBTb3JyeSBJIHNob3VsZCBoYXZlIG1lbnRpb24g
YmVmb3JlIGJhY2tlbmQvZnJvbnRlbmQgaXMgZG9tMCBpbiB0aGlzIA0KPiBjYXNlIGFuZCBHRk4g
aXMgbWFwcGVkLiBJIGFtIHRyeWluZyB0byBhdHRhY2ggdGhlIGJsb2NrIGRldmljZSB0byBET00w
DQo+IA0KPiBBaCwgeW91ciBsb2cgbWFrZXMgYSBsb3QgbW9yZSBzZW5zZSBub3cuIFRoYW5rIHlv
dSBmb3IgdGhlIGNsYXJpZmljYXRpb24hDQo+IA0KPiBTbyB5ZXMsIEkgYWdyZWUgdGhhdCBpb21t
dV97LHVufW1hcCgpIHdpbGwgZG8gdGhlIHdyb25nIHRoaW5nIGlmIHRoZSBmcm9udGVuZCBhbmQg
YmFja2VuZCBpbiB0aGUgc2FtZSBkb21haW4uDQo+IA0KPiBJIGRvbid0IGtub3cgd2hhdCB0aGUg
c3RhdGUgaW4gTGludXgsIGJ1dCBmcm9tIFhlbiBQb1YgaXQgc2hvdWxkIGJlIHBvc3NpYmxlIHRv
IGhhdmUgdGhlIGJhY2tlbmQvZnJvbnRlbmQgaW4gdGhlIHNhbWUgZG9tYWluLg0KPiANCj4gSSB0
aGluayB3ZSB3YW50IHRvIGlnbm9yZSB0aGUgSU9NTVUgbWFwcGluZyByZXF1ZXN0IHdoZW4gdGhl
IGRvbWFpbiBpcyB0aGUgc2FtZS4gQ2FuIHlvdSB0cnkgdGhpcyBzbWFsbCB1bnRlc3RlZCBwYXRj
aDoNCg0KSSB0ZXN0ZWQgdGhlIHBhdGNoIGFuZCBpdCBpcyB3b3JraW5nIGZpbmUgZm9yIGJvdGgg
ZG9tMC9kb21VLiBJIGFtIGFibGUgdG8gYXR0YWNoIHRoZSBibG9jayBkZXZpY2UgdG8gZG9tMC9k
b211Lg0KQWxzbyBJIGRpZG7igJl0IG9ic2VydmUgdGhlIElPTU1VIGZhdWx0IGFsc28gZm9yIGJs
b2NrIGRldmljZSB0aGF0IHdlIGhhdmUgYmVoaW5kIElPTU1VIG9uIG91ciBzeXN0ZW0gYW5kIGF0
dGFjaGVkIHRvIGRvbVUuDQoNClJlZ2FyZHMsDQpSYWh1bCANCj4gDQo+IGRpZmYgLS1naXQgYS94
ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC9hcm0vaW9tbXVfaGVscGVycy5jIGIveGVuL2RyaXZlcnMv
cGFzc3Rocm91Z2gvYXJtL2lvbW11X2hlbHBlcnMuYw0KPiBpbmRleCBhMzZlMmI4ZTZjNDIuLjdi
YWQxMzU5MzE0NiAxMDA2NDQNCj4gLS0tIGEveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvYXJtL2lv
bW11X2hlbHBlcnMuYw0KPiArKysgYi94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC9hcm0vaW9tbXVf
aGVscGVycy5jDQo+IEBAIC01Myw2ICs1Myw5IEBAIGludCBfX211c3RfY2hlY2sgYXJtX2lvbW11
X21hcF9wYWdlKHN0cnVjdCBkb21haW4gKmQsIGRmbl90IGRmbiwgbWZuX3QgbWZuLA0KPiANCj4g
ICAgIHQgPSAoZmxhZ3MgJiBJT01NVUZfd3JpdGFibGUpID8gcDJtX2lvbW11X21hcF9ydyA6IHAy
bV9pb21tdV9tYXBfcm87DQo+IA0KPiArICAgIGlmICggcGFnZV9nZXRfb3duZXIobWZuX3RvX3Bh
Z2UobWZuKSkgPT0gZCApDQo+ICsgICAgICAgIHJldHVybiAwOw0KPiArDQo+ICAgICAvKg0KPiAg
ICAgICogVGhlIGZ1bmN0aW9uIGd1ZXN0X3BoeXNtYXBfYWRkX2VudHJ5IHJlcGxhY2VzIHRoZSBj
dXJyZW50IG1hcHBpbmcNCj4gICAgICAqIGlmIHRoZXJlIGlzIGFscmVhZHkgb25lLi4uDQo+IEBA
IC03MSw2ICs3NCw5IEBAIGludCBfX211c3RfY2hlY2sgYXJtX2lvbW11X3VubWFwX3BhZ2Uoc3Ry
dWN0IGRvbWFpbiAqZCwgZGZuX3QgZGZuLA0KPiAgICAgaWYgKCAhaXNfZG9tYWluX2RpcmVjdF9t
YXBwZWQoZCkgKQ0KPiAgICAgICAgIHJldHVybiAtRUlOVkFMOw0KPiANCj4gKyAgICBpZiAoIHBh
Z2VfZ2V0X293bmVyKG1mbl90b19wYWdlKG1mbikpID09IGQgKQ0KPiArICAgICAgICByZXR1cm4g
MDsNCj4gKw0KPiAgICAgcmV0dXJuIGd1ZXN0X3BoeXNtYXBfcmVtb3ZlX3BhZ2UoZCwgX2dmbihk
Zm5feChkZm4pKSwgX21mbihkZm5feChkZm4pKSwgMCk7DQo+IH0NCj4gDQo+IENoZWVycywNCj4g
DQo+IC0tIA0KPiBKdWxpZW4gR3JhbGwNCg0K


From xen-devel-bounces@lists.xenproject.org Thu Feb 11 16:10:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 16:10:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83985.157327 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAEY8-00061K-AL; Thu, 11 Feb 2021 16:10:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83985.157327; Thu, 11 Feb 2021 16:10:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAEY8-00061D-7N; Thu, 11 Feb 2021 16:10:36 +0000
Received: by outflank-mailman (input) for mailman id 83985;
 Thu, 11 Feb 2021 16:10:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RRkU=HN=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1lAEY7-000618-Hg
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 16:10:35 +0000
Received: from mail-wr1-f46.google.com (unknown [209.85.221.46])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8a806218-d925-4699-a8f3-e19cf9da2446;
 Thu, 11 Feb 2021 16:10:34 +0000 (UTC)
Received: by mail-wr1-f46.google.com with SMTP id b3so4720579wrj.5
 for <xen-devel@lists.xenproject.org>; Thu, 11 Feb 2021 08:10:34 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id k11sm5579507wrl.84.2021.02.11.08.10.33
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 11 Feb 2021 08:10:33 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8a806218-d925-4699-a8f3-e19cf9da2446
Date: Thu, 11 Feb 2021 16:10:31 +0000
From: Wei Liu <wl@xen.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH for-4.15] tools/libxl: Document where the magic MAC
 numbers come from
Message-ID: <20210211161031.2anpirmszn7nwzip@liuwe-devbox-debian-v2>
References: <20210210135335.29180-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210210135335.29180-1-andrew.cooper3@citrix.com>
User-Agent: NeoMutt/20180716

On Wed, Feb 10, 2021 at 01:53:35PM +0000, Andrew Cooper wrote:
> Matches the comment in the xl-network-configuration manpage.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Thu Feb 11 16:10:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 16:10:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.83987.157339 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAEYO-000661-KN; Thu, 11 Feb 2021 16:10:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 83987.157339; Thu, 11 Feb 2021 16:10:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAEYO-00065t-GT; Thu, 11 Feb 2021 16:10:52 +0000
Received: by outflank-mailman (input) for mailman id 83987;
 Thu, 11 Feb 2021 16:10:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RRkU=HN=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1lAEYN-00065f-2P
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 16:10:51 +0000
Received: from mail-wm1-f54.google.com (unknown [209.85.128.54])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1a8cea45-0e26-4382-88aa-86ba0e95b2da;
 Thu, 11 Feb 2021 16:10:50 +0000 (UTC)
Received: by mail-wm1-f54.google.com with SMTP id m1so6286052wml.2
 for <xen-devel@lists.xenproject.org>; Thu, 11 Feb 2021 08:10:50 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id w12sm10165517wmi.4.2021.02.11.08.10.48
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 11 Feb 2021 08:10:49 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1a8cea45-0e26-4382-88aa-86ba0e95b2da
Date: Thu, 11 Feb 2021 16:10:47 +0000
From: Wei Liu <wl@xen.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
	Doug Goldstein <cardoe@cardoe.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Ian Jackson <iwj@xenproject.org>, Jan Beulich <JBeulich@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>
Subject: Re: [PATCH for-4.15] automation: Add Ubuntu Focal builds
Message-ID: <20210211161047.xee6cdmvbxaaks3f@liuwe-devbox-debian-v2>
References: <20210211155041.4811-1-andrew.cooper3@citrix.com>
 <3928a36c-9ef0-dd2b-a4b6-0a94092cab88@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <3928a36c-9ef0-dd2b-a4b6-0a94092cab88@citrix.com>
User-Agent: NeoMutt/20180716

On Thu, Feb 11, 2021 at 03:57:06PM +0000, Andrew Cooper wrote:
> On 11/02/2021 15:50, Andrew Cooper wrote:
> > Logical continuation of c/s eb52442d7f "automation: Add Ubuntu:focal
> > container".
> >
> > No further changes required.  Everything builds fine.
> >
> > Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> > ---
> > CC: Doug Goldstein <cardoe@cardoe.com>
> > CC: George Dunlap <George.Dunlap@eu.citrix.com>
> > CC: Ian Jackson <iwj@xenproject.org>
> > CC: Jan Beulich <JBeulich@suse.com>
> > CC: Stefano Stabellini <sstabellini@kernel.org>
> > CC: Wei Liu <wl@xen.org>
> > CC: Julien Grall <julien@xen.org>
> 
> Forgot to say -
> https://gitlab.com/xen-project/people/andyhhp/xen/-/pipelines/254912313
> is a pipeline run with everything green.
> 
> Also, I need to prefix the names with ubuntu- which I can do on commit.

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Thu Feb 11 17:20:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 17:20:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84027.157406 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAFd9-0003qF-SN; Thu, 11 Feb 2021 17:19:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84027.157406; Thu, 11 Feb 2021 17:19:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAFd9-0003q8-PT; Thu, 11 Feb 2021 17:19:51 +0000
Received: by outflank-mailman (input) for mailman id 84027;
 Thu, 11 Feb 2021 17:19:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kYUG=HN=linaro.org=alex.bennee@srs-us1.protection.inumbo.net>)
 id 1lAFd8-0003q3-2Q
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 17:19:50 +0000
Received: from mail-wm1-x332.google.com (unknown [2a00:1450:4864:20::332])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f973be68-ff6c-43da-b98a-9f14af12bcea;
 Thu, 11 Feb 2021 17:19:49 +0000 (UTC)
Received: by mail-wm1-x332.google.com with SMTP id o10so6279211wmc.1
 for <xen-devel@lists.xenproject.org>; Thu, 11 Feb 2021 09:19:49 -0800 (PST)
Received: from zen.linaroharston ([51.148.130.216])
 by smtp.gmail.com with ESMTPSA id z5sm116078wrn.15.2021.02.11.09.19.46
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 11 Feb 2021 09:19:46 -0800 (PST)
Received: from zen.lan (localhost [127.0.0.1])
 by zen.linaroharston (Postfix) with ESMTP id 0EE761FF7E;
 Thu, 11 Feb 2021 17:19:46 +0000 (GMT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f973be68-ff6c-43da-b98a-9f14af12bcea
From: =?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>
To: qemu-devel@nongnu.org
Cc: julien@xen.org,
	stefano.stabellini@linaro.org,
	stefano.stabellini@xilinx.com,
	andre.przywara@arm.com,
	stratos-dev@op-lists.linaro.org,
	xen-devel@lists.xenproject.org,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>
Subject: [PATCH  v2 0/7] Xen guest loader (to boot Xen+Kernel under TCG)
Date: Thu, 11 Feb 2021 17:19:38 +0000
Message-Id: <20210211171945.18313-1-alex.bennee@linaro.org>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Hi,

These patches have been sitting around as part of a larger series to
improve the support of Xen on AArch64. The second part of the work is
currently awaiting other re-factoring and build work to go in to make
the building of a pure-Xen capable QEMU easier. As that might take
some time and these patches are essentially ready I thought I'd best
push for merging them.

There are no fundamental changes since the last revision. I've
addressed most of the comments although I haven't expanded the use of
the global *fdt to other device models. I figured that work could be
done as and when models have support for type-1 hypervisors.

I have added some documentation to describe the feature and added an
acceptance test which checks that the various versions of Xen can boot.
The only minor wrinkle is using a custom compiled Linux kernel due to
missing support in the distro kernels. If anyone can suggest a distro
which is currently well supported for Xen on AArch64 I can update the
test.

The following patches still need review:

 - tests/avocado: add boot_xen tests
 - docs: add some documentation for the guest-loader
 - docs: move generic-loader documentation into the main manual
 - hw/core: implement a guest-loader to support static hypervisor guests

Alex Bennée (7):
  hw/board: promote fdt from ARM VirtMachineState to MachineState
  hw/riscv: migrate fdt field to generic MachineState
  device_tree: add qemu_fdt_setprop_string_array helper
  hw/core: implement a guest-loader to support static hypervisor guests
  docs: move generic-loader documentation into the main manual
  docs: add some documentation for the guest-loader
  tests/avocado: add boot_xen tests

 docs/generic-loader.txt        |  92 ---------
 docs/system/generic-loader.rst | 117 +++++++++++
 docs/system/guest-loader.rst   |  54 +++++
 docs/system/index.rst          |   2 +
 hw/core/guest-loader.h         |  34 ++++
 include/hw/arm/virt.h          |   1 -
 include/hw/boards.h            |   1 +
 include/hw/riscv/virt.h        |   1 -
 include/sysemu/device_tree.h   |  17 ++
 hw/arm/virt.c                  | 356 +++++++++++++++++----------------
 hw/core/guest-loader.c         | 145 ++++++++++++++
 hw/riscv/virt.c                |  20 +-
 softmmu/device_tree.c          |  26 +++
 MAINTAINERS                    |   9 +-
 hw/core/meson.build            |   2 +
 tests/acceptance/boot_xen.py   | 117 +++++++++++
 16 files changed, 718 insertions(+), 276 deletions(-)
 delete mode 100644 docs/generic-loader.txt
 create mode 100644 docs/system/generic-loader.rst
 create mode 100644 docs/system/guest-loader.rst
 create mode 100644 hw/core/guest-loader.h
 create mode 100644 hw/core/guest-loader.c
 create mode 100644 tests/acceptance/boot_xen.py

-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Feb 11 17:20:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 17:20:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84029.157430 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAFdI-0003uH-JW; Thu, 11 Feb 2021 17:20:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84029.157430; Thu, 11 Feb 2021 17:20:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAFdI-0003u7-G6; Thu, 11 Feb 2021 17:20:00 +0000
Received: by outflank-mailman (input) for mailman id 84029;
 Thu, 11 Feb 2021 17:20:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kYUG=HN=linaro.org=alex.bennee@srs-us1.protection.inumbo.net>)
 id 1lAFdH-0003q3-TQ
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 17:19:59 +0000
Received: from mail-wm1-x332.google.com (unknown [2a00:1450:4864:20::332])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 80d6b348-d2ac-4914-ae23-b3b9f5514827;
 Thu, 11 Feb 2021 17:19:52 +0000 (UTC)
Received: by mail-wm1-x332.google.com with SMTP id i9so6521711wmq.1
 for <xen-devel@lists.xenproject.org>; Thu, 11 Feb 2021 09:19:52 -0800 (PST)
Received: from zen.linaroharston ([51.148.130.216])
 by smtp.gmail.com with ESMTPSA id d10sm5685321wrn.88.2021.02.11.09.19.47
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 11 Feb 2021 09:19:50 -0800 (PST)
Received: from zen.lan (localhost [127.0.0.1])
 by zen.linaroharston (Postfix) with ESMTP id 4642A1FF90;
 Thu, 11 Feb 2021 17:19:47 +0000 (GMT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 80d6b348-d2ac-4914-ae23-b3b9f5514827
From: =?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>
To: qemu-devel@nongnu.org
Cc: julien@xen.org,
	stefano.stabellini@linaro.org,
	stefano.stabellini@xilinx.com,
	andre.przywara@arm.com,
	stratos-dev@op-lists.linaro.org,
	xen-devel@lists.xenproject.org,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>
Subject: [PATCH  v2 4/7] hw/core: implement a guest-loader to support static hypervisor guests
Date: Thu, 11 Feb 2021 17:19:42 +0000
Message-Id: <20210211171945.18313-5-alex.bennee@linaro.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210211171945.18313-1-alex.bennee@linaro.org>
References: <20210211171945.18313-1-alex.bennee@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Hypervisors, especially type-1 ones, need the firmware/bootcode to put
their initial guest somewhere in memory and pass the information to it
via platform data. The guest-loader is modelled after the generic
loader for exactly this sort of purpose:

  $QEMU $ARGS  -kernel ~/xen.git/xen/xen \
    -append "dom0_mem=1G,max:1G loglvl=all guest_loglvl=all" \
    -device guest-loader,addr=0x42000000,kernel=Image,bootargs="root=/dev/sda2 ro console=hvc0 earlyprintk=xen" \
    -device guest-loader,addr=0x47000000,initrd=rootfs.cpio

Message-Id: <20201105175153.30489-5-alex.bennee@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>

---
v2
  - use PRIx64 format string
  - fix long lines to assuage checkpatch
  - add MAINTAINERS entry
  - use current_machine->ram_size
  - checkpatch fixes
---
 hw/core/guest-loader.h |  34 ++++++++++
 hw/core/guest-loader.c | 145 +++++++++++++++++++++++++++++++++++++++++
 MAINTAINERS            |   5 ++
 hw/core/meson.build    |   2 +
 4 files changed, 186 insertions(+)
 create mode 100644 hw/core/guest-loader.h
 create mode 100644 hw/core/guest-loader.c

diff --git a/hw/core/guest-loader.h b/hw/core/guest-loader.h
new file mode 100644
index 0000000000..07f4b4884b
--- /dev/null
+++ b/hw/core/guest-loader.h
@@ -0,0 +1,34 @@
+/*
+ * Guest Loader
+ *
+ * Copyright (C) 2020 Linaro
+ * Written by Alex Bennée <alex.bennee@linaro.org>
+ * (based on the generic-loader by Li Guang <lig.fnst@cn.fujitsu.com>)
+ *
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#ifndef GUEST_LOADER_H
+#define GUEST_LOADER_H
+
+#include "hw/qdev-core.h"
+#include "qom/object.h"
+
+struct GuestLoaderState {
+    /* <private> */
+    DeviceState parent_obj;
+
+    /* <public> */
+    uint64_t addr;
+    char *kernel;
+    char *args;
+    char *initrd;
+};
+
+#define TYPE_GUEST_LOADER "guest-loader"
+OBJECT_DECLARE_SIMPLE_TYPE(GuestLoaderState, GUEST_LOADER)
+
+#endif
diff --git a/hw/core/guest-loader.c b/hw/core/guest-loader.c
new file mode 100644
index 0000000000..bde44e27b4
--- /dev/null
+++ b/hw/core/guest-loader.c
@@ -0,0 +1,145 @@
+/*
+ * Guest Loader
+ *
+ * Copyright (C) 2020 Linaro
+ * Written by Alex Bennée <alex.bennee@linaro.org>
+ * (based on the generic-loader by Li Guang <lig.fnst@cn.fujitsu.com>)
+ *
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+/*
+ * Much like the generic-loader, this is treated as a special device
+ * inside QEMU. Unlike the generic-loader, however, this device is
+ * used to load guest images for hypervisors. As part of that process
+ * the hypervisor needs to have platform information passed to it by
+ * the lower levels of the stack (e.g. firmware/bootloader). If you
+ * boot the hypervisor directly, the guest-loader places the Dom0 (or
+ * equivalent) guest images at the right addresses, just as a boot
+ * loader would.
+ *
+ * This is only relevant for full system emulation.
+ */
+
+#include "qemu/osdep.h"
+#include "hw/core/cpu.h"
+#include "hw/sysbus.h"
+#include "sysemu/dma.h"
+#include "hw/loader.h"
+#include "hw/qdev-properties.h"
+#include "qapi/error.h"
+#include "qemu/module.h"
+#include "guest-loader.h"
+#include "sysemu/device_tree.h"
+#include "hw/boards.h"
+
+/*
+ * Insert some FDT nodes for the loaded blob.
+ */
+static void loader_insert_platform_data(GuestLoaderState *s, int size,
+                                        Error **errp)
+{
+    MachineState *machine = MACHINE(qdev_get_machine());
+    void *fdt = machine->fdt;
+    g_autofree char *node = g_strdup_printf("/chosen/module@0x%08" PRIx64,
+                                            s->addr);
+    uint64_t reg_attr[2] = {cpu_to_be64(s->addr), cpu_to_be64(size)};
+
+    if (!fdt) {
+        error_setg(errp, "Cannot modify FDT fields if the machine has none");
+        return;
+    }
+
+    qemu_fdt_add_subnode(fdt, node);
+    qemu_fdt_setprop(fdt, node, "reg", &reg_attr, sizeof(reg_attr));
+
+    if (s->kernel) {
+        const char *compat[2] = { "multiboot,module", "multiboot,kernel" };
+        if (qemu_fdt_setprop_string_array(fdt, node, "compatible",
+                                          (char **) &compat,
+                                          ARRAY_SIZE(compat)) < 0) {
+            error_setg(errp, "couldn't set %s/compatible", node);
+            return;
+        }
+        if (s->args) {
+            if (qemu_fdt_setprop_string(fdt, node, "bootargs", s->args) < 0) {
+                error_setg(errp, "couldn't set %s/bootargs", node);
+            }
+        }
+    } else if (s->initrd) {
+        const char *compat[2] = { "multiboot,module", "multiboot,ramdisk" };
+        if (qemu_fdt_setprop_string_array(fdt, node, "compatible",
+                                          (char **) &compat,
+                                          ARRAY_SIZE(compat)) < 0) {
+            error_setg(errp, "couldn't set %s/compatible", node);
+            return;
+        }
+    }
+}
+
+static void guest_loader_realize(DeviceState *dev, Error **errp)
+{
+    GuestLoaderState *s = GUEST_LOADER(dev);
+    char *file = s->kernel ? s->kernel : s->initrd;
+    int size = 0;
+
+    /* Perform some error checking on the user's options */
+    if (s->kernel && s->initrd) {
+        error_setg(errp, "Cannot specify a kernel and initrd in same stanza");
+        return;
+    } else if (!s->kernel && !s->initrd)  {
+        error_setg(errp, "Need to specify a kernel or initrd image");
+        return;
+    } else if (!s->addr) {
+        error_setg(errp, "Need to specify the address of guest blob");
+        return;
+    } else if (s->args && !s->kernel) {
+        error_setg(errp, "Boot args only relevant to kernel blobs");
+        return;
+    }
+
+    /* Default to the maximum size being the machine's ram size */
+    size = load_image_targphys_as(file, s->addr, current_machine->ram_size,
+                                  NULL);
+    if (size < 0) {
+        error_setg(errp, "Cannot load specified image %s", file);
+        return;
+    }
+
+    /* Now the image is loaded we need to update the platform data */
+    loader_insert_platform_data(s, size, errp);
+}
+
+static Property guest_loader_props[] = {
+    DEFINE_PROP_UINT64("addr", GuestLoaderState, addr, 0),
+    DEFINE_PROP_STRING("kernel", GuestLoaderState, kernel),
+    DEFINE_PROP_STRING("bootargs", GuestLoaderState, args),
+    DEFINE_PROP_STRING("initrd", GuestLoaderState, initrd),
+    DEFINE_PROP_END_OF_LIST(),
+};
+
+static void guest_loader_class_init(ObjectClass *klass, void *data)
+{
+    DeviceClass *dc = DEVICE_CLASS(klass);
+
+    dc->realize = guest_loader_realize;
+    device_class_set_props(dc, guest_loader_props);
+    dc->desc = "Guest Loader";
+    set_bit(DEVICE_CATEGORY_MISC, dc->categories);
+}
+
+static TypeInfo guest_loader_info = {
+    .name = TYPE_GUEST_LOADER,
+    .parent = TYPE_DEVICE,
+    .instance_size = sizeof(GuestLoaderState),
+    .class_init = guest_loader_class_init,
+};
+
+static void guest_loader_register_type(void)
+{
+    type_register_static(&guest_loader_info);
+}
+
+type_init(guest_loader_register_type)
diff --git a/MAINTAINERS b/MAINTAINERS
index a2b92f973a..ab6877dae6 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1993,6 +1993,11 @@ F: hw/core/generic-loader.c
 F: include/hw/core/generic-loader.h
 F: docs/generic-loader.txt
 
+Guest Loader
+M: Alex Bennée <alex.bennee@linaro.org>
+S: Maintained
+F: hw/core/guest-loader.c
+
 Intel Hexadecimal Object File Loader
 M: Su Hang <suhang16@mails.ucas.ac.cn>
 S: Maintained
diff --git a/hw/core/meson.build b/hw/core/meson.build
index 032576f571..9cd72edf51 100644
--- a/hw/core/meson.build
+++ b/hw/core/meson.build
@@ -37,6 +37,8 @@ softmmu_ss.add(files(
   'clock-vmstate.c',
 ))
 
+softmmu_ss.add(when: 'CONFIG_TCG', if_true: files('guest-loader.c'))
+
 specific_ss.add(when: 'CONFIG_SOFTMMU', if_true: files(
   'machine-qmp-cmds.c',
   'numa.c',
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Feb 11 17:20:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 17:20:05 +0000
From: =?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>
To: qemu-devel@nongnu.org
Cc: julien@xen.org,
	stefano.stabellini@linaro.org,
	stefano.stabellini@xilinx.com,
	andre.przywara@arm.com,
	stratos-dev@op-lists.linaro.org,
	xen-devel@lists.xenproject.org,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	Eduardo Habkost <ehabkost@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	qemu-arm@nongnu.org (open list:Virt)
Subject: [PATCH  v2 1/7] hw/board: promote fdt from ARM VirtMachineState to MachineState
Date: Thu, 11 Feb 2021 17:19:39 +0000
Message-Id: <20210211171945.18313-2-alex.bennee@linaro.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210211171945.18313-1-alex.bennee@linaro.org>
References: <20210211171945.18313-1-alex.bennee@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The use of FDTs is quite common across our various platforms. To
allow the generic loader to tweak the FDT we need to make it
available in the generic MachineState. This creates the field and
migrates the initial user over to it. Other boards will be updated
in later patches.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20201105175153.30489-2-alex.bennee@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>

---
v2
  - rebase
---
 include/hw/arm/virt.h |   1 -
 include/hw/boards.h   |   1 +
 hw/arm/virt.c         | 356 ++++++++++++++++++++++--------------------
 3 files changed, 186 insertions(+), 172 deletions(-)

diff --git a/include/hw/arm/virt.h b/include/hw/arm/virt.h
index ee9a93101e..921416f918 100644
--- a/include/hw/arm/virt.h
+++ b/include/hw/arm/virt.h
@@ -153,7 +153,6 @@ struct VirtMachineState {
     MemMapEntry *memmap;
     char *pciehb_nodename;
     const int *irqmap;
-    void *fdt;
     int fdt_size;
     uint32_t clock_phandle;
     uint32_t gic_phandle;
diff --git a/include/hw/boards.h b/include/hw/boards.h
index a46dfe5d1a..5fda5fd128 100644
--- a/include/hw/boards.h
+++ b/include/hw/boards.h
@@ -258,6 +258,7 @@ struct MachineState {
 
     /*< public >*/
 
+    void *fdt;
     char *dtb;
     char *dumpdtb;
     int phandle_start;
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index 371147f3ae..c08bf11297 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -218,14 +218,14 @@ static bool cpu_type_valid(const char *cpu)
     return false;
 }
 
-static void create_kaslr_seed(VirtMachineState *vms, const char *node)
+static void create_kaslr_seed(MachineState *ms, const char *node)
 {
     uint64_t seed;
 
     if (qemu_guest_getrandom(&seed, sizeof(seed), NULL)) {
         return;
     }
-    qemu_fdt_setprop_u64(vms->fdt, node, "kaslr-seed", seed);
+    qemu_fdt_setprop_u64(ms->fdt, node, "kaslr-seed", seed);
 }
 
 static void create_fdt(VirtMachineState *vms)
@@ -239,7 +239,7 @@ static void create_fdt(VirtMachineState *vms)
         exit(1);
     }
 
-    vms->fdt = fdt;
+    ms->fdt = fdt;
 
     /* Header */
     qemu_fdt_setprop_string(fdt, "/", "compatible", "linux,dummy-virt");
@@ -248,11 +248,11 @@ static void create_fdt(VirtMachineState *vms)
 
     /* /chosen must exist for load_dtb to fill in necessary properties later */
     qemu_fdt_add_subnode(fdt, "/chosen");
-    create_kaslr_seed(vms, "/chosen");
+    create_kaslr_seed(ms, "/chosen");
 
     if (vms->secure) {
         qemu_fdt_add_subnode(fdt, "/secure-chosen");
-        create_kaslr_seed(vms, "/secure-chosen");
+        create_kaslr_seed(ms, "/secure-chosen");
     }
 
     /* Clock node, for the benefit of the UART. The kernel device tree
@@ -316,6 +316,7 @@ static void fdt_add_timer_nodes(const VirtMachineState *vms)
     ARMCPU *armcpu;
     VirtMachineClass *vmc = VIRT_MACHINE_GET_CLASS(vms);
     uint32_t irqflags = GIC_FDT_IRQ_FLAGS_LEVEL_HI;
+    MachineState *ms = MACHINE(vms);
 
     if (vmc->claim_edge_triggered_timers) {
         irqflags = GIC_FDT_IRQ_FLAGS_EDGE_LO_HI;
@@ -327,19 +328,19 @@ static void fdt_add_timer_nodes(const VirtMachineState *vms)
                              (1 << MACHINE(vms)->smp.cpus) - 1);
     }
 
-    qemu_fdt_add_subnode(vms->fdt, "/timer");
+    qemu_fdt_add_subnode(ms->fdt, "/timer");
 
     armcpu = ARM_CPU(qemu_get_cpu(0));
     if (arm_feature(&armcpu->env, ARM_FEATURE_V8)) {
         const char compat[] = "arm,armv8-timer\0arm,armv7-timer";
-        qemu_fdt_setprop(vms->fdt, "/timer", "compatible",
+        qemu_fdt_setprop(ms->fdt, "/timer", "compatible",
                          compat, sizeof(compat));
     } else {
-        qemu_fdt_setprop_string(vms->fdt, "/timer", "compatible",
+        qemu_fdt_setprop_string(ms->fdt, "/timer", "compatible",
                                 "arm,armv7-timer");
     }
-    qemu_fdt_setprop(vms->fdt, "/timer", "always-on", NULL, 0);
-    qemu_fdt_setprop_cells(vms->fdt, "/timer", "interrupts",
+    qemu_fdt_setprop(ms->fdt, "/timer", "always-on", NULL, 0);
+    qemu_fdt_setprop_cells(ms->fdt, "/timer", "interrupts",
                        GIC_FDT_IRQ_TYPE_PPI, ARCH_TIMER_S_EL1_IRQ, irqflags,
                        GIC_FDT_IRQ_TYPE_PPI, ARCH_TIMER_NS_EL1_IRQ, irqflags,
                        GIC_FDT_IRQ_TYPE_PPI, ARCH_TIMER_VIRT_IRQ, irqflags,
@@ -375,35 +376,35 @@ static void fdt_add_cpu_nodes(const VirtMachineState *vms)
         }
     }
 
-    qemu_fdt_add_subnode(vms->fdt, "/cpus");
-    qemu_fdt_setprop_cell(vms->fdt, "/cpus", "#address-cells", addr_cells);
-    qemu_fdt_setprop_cell(vms->fdt, "/cpus", "#size-cells", 0x0);
+    qemu_fdt_add_subnode(ms->fdt, "/cpus");
+    qemu_fdt_setprop_cell(ms->fdt, "/cpus", "#address-cells", addr_cells);
+    qemu_fdt_setprop_cell(ms->fdt, "/cpus", "#size-cells", 0x0);
 
     for (cpu = smp_cpus - 1; cpu >= 0; cpu--) {
         char *nodename = g_strdup_printf("/cpus/cpu@%d", cpu);
         ARMCPU *armcpu = ARM_CPU(qemu_get_cpu(cpu));
         CPUState *cs = CPU(armcpu);
 
-        qemu_fdt_add_subnode(vms->fdt, nodename);
-        qemu_fdt_setprop_string(vms->fdt, nodename, "device_type", "cpu");
-        qemu_fdt_setprop_string(vms->fdt, nodename, "compatible",
+        qemu_fdt_add_subnode(ms->fdt, nodename);
+        qemu_fdt_setprop_string(ms->fdt, nodename, "device_type", "cpu");
+        qemu_fdt_setprop_string(ms->fdt, nodename, "compatible",
                                     armcpu->dtb_compatible);
 
         if (vms->psci_conduit != QEMU_PSCI_CONDUIT_DISABLED && smp_cpus > 1) {
-            qemu_fdt_setprop_string(vms->fdt, nodename,
+            qemu_fdt_setprop_string(ms->fdt, nodename,
                                         "enable-method", "psci");
         }
 
         if (addr_cells == 2) {
-            qemu_fdt_setprop_u64(vms->fdt, nodename, "reg",
+            qemu_fdt_setprop_u64(ms->fdt, nodename, "reg",
                                  armcpu->mp_affinity);
         } else {
-            qemu_fdt_setprop_cell(vms->fdt, nodename, "reg",
+            qemu_fdt_setprop_cell(ms->fdt, nodename, "reg",
                                   armcpu->mp_affinity);
         }
 
         if (ms->possible_cpus->cpus[cs->cpu_index].props.has_node_id) {
-            qemu_fdt_setprop_cell(vms->fdt, nodename, "numa-node-id",
+            qemu_fdt_setprop_cell(ms->fdt, nodename, "numa-node-id",
                 ms->possible_cpus->cpus[cs->cpu_index].props.node_id);
         }
 
@@ -414,71 +415,74 @@ static void fdt_add_cpu_nodes(const VirtMachineState *vms)
 static void fdt_add_its_gic_node(VirtMachineState *vms)
 {
     char *nodename;
+    MachineState *ms = MACHINE(vms);
 
-    vms->msi_phandle = qemu_fdt_alloc_phandle(vms->fdt);
+    vms->msi_phandle = qemu_fdt_alloc_phandle(ms->fdt);
     nodename = g_strdup_printf("/intc/its@%" PRIx64,
                                vms->memmap[VIRT_GIC_ITS].base);
-    qemu_fdt_add_subnode(vms->fdt, nodename);
-    qemu_fdt_setprop_string(vms->fdt, nodename, "compatible",
+    qemu_fdt_add_subnode(ms->fdt, nodename);
+    qemu_fdt_setprop_string(ms->fdt, nodename, "compatible",
                             "arm,gic-v3-its");
-    qemu_fdt_setprop(vms->fdt, nodename, "msi-controller", NULL, 0);
-    qemu_fdt_setprop_sized_cells(vms->fdt, nodename, "reg",
+    qemu_fdt_setprop(ms->fdt, nodename, "msi-controller", NULL, 0);
+    qemu_fdt_setprop_sized_cells(ms->fdt, nodename, "reg",
                                  2, vms->memmap[VIRT_GIC_ITS].base,
                                  2, vms->memmap[VIRT_GIC_ITS].size);
-    qemu_fdt_setprop_cell(vms->fdt, nodename, "phandle", vms->msi_phandle);
+    qemu_fdt_setprop_cell(ms->fdt, nodename, "phandle", vms->msi_phandle);
     g_free(nodename);
 }
 
 static void fdt_add_v2m_gic_node(VirtMachineState *vms)
 {
+    MachineState *ms = MACHINE(vms);
     char *nodename;
 
     nodename = g_strdup_printf("/intc/v2m@%" PRIx64,
                                vms->memmap[VIRT_GIC_V2M].base);
-    vms->msi_phandle = qemu_fdt_alloc_phandle(vms->fdt);
-    qemu_fdt_add_subnode(vms->fdt, nodename);
-    qemu_fdt_setprop_string(vms->fdt, nodename, "compatible",
+    vms->msi_phandle = qemu_fdt_alloc_phandle(ms->fdt);
+    qemu_fdt_add_subnode(ms->fdt, nodename);
+    qemu_fdt_setprop_string(ms->fdt, nodename, "compatible",
                             "arm,gic-v2m-frame");
-    qemu_fdt_setprop(vms->fdt, nodename, "msi-controller", NULL, 0);
-    qemu_fdt_setprop_sized_cells(vms->fdt, nodename, "reg",
+    qemu_fdt_setprop(ms->fdt, nodename, "msi-controller", NULL, 0);
+    qemu_fdt_setprop_sized_cells(ms->fdt, nodename, "reg",
                                  2, vms->memmap[VIRT_GIC_V2M].base,
                                  2, vms->memmap[VIRT_GIC_V2M].size);
-    qemu_fdt_setprop_cell(vms->fdt, nodename, "phandle", vms->msi_phandle);
+    qemu_fdt_setprop_cell(ms->fdt, nodename, "phandle", vms->msi_phandle);
     g_free(nodename);
 }
 
 static void fdt_add_gic_node(VirtMachineState *vms)
 {
+    MachineState *ms = MACHINE(vms);
     char *nodename;
 
-    vms->gic_phandle = qemu_fdt_alloc_phandle(vms->fdt);
-    qemu_fdt_setprop_cell(vms->fdt, "/", "interrupt-parent", vms->gic_phandle);
+    vms->gic_phandle = qemu_fdt_alloc_phandle(ms->fdt);
+    qemu_fdt_setprop_cell(ms->fdt, "/", "interrupt-parent", vms->gic_phandle);
 
     nodename = g_strdup_printf("/intc@%" PRIx64,
                                vms->memmap[VIRT_GIC_DIST].base);
-    qemu_fdt_add_subnode(vms->fdt, nodename);
-    qemu_fdt_setprop_cell(vms->fdt, nodename, "#interrupt-cells", 3);
-    qemu_fdt_setprop(vms->fdt, nodename, "interrupt-controller", NULL, 0);
-    qemu_fdt_setprop_cell(vms->fdt, nodename, "#address-cells", 0x2);
-    qemu_fdt_setprop_cell(vms->fdt, nodename, "#size-cells", 0x2);
-    qemu_fdt_setprop(vms->fdt, nodename, "ranges", NULL, 0);
+    qemu_fdt_add_subnode(ms->fdt, nodename);
+    qemu_fdt_setprop_cell(ms->fdt, nodename, "#interrupt-cells", 3);
+    qemu_fdt_setprop(ms->fdt, nodename, "interrupt-controller", NULL, 0);
+    qemu_fdt_setprop_cell(ms->fdt, nodename, "#address-cells", 0x2);
+    qemu_fdt_setprop_cell(ms->fdt, nodename, "#size-cells", 0x2);
+    qemu_fdt_setprop(ms->fdt, nodename, "ranges", NULL, 0);
     if (vms->gic_version == VIRT_GIC_VERSION_3) {
         int nb_redist_regions = virt_gicv3_redist_region_count(vms);
 
-        qemu_fdt_setprop_string(vms->fdt, nodename, "compatible",
+        qemu_fdt_setprop_string(ms->fdt, nodename, "compatible",
                                 "arm,gic-v3");
 
-        qemu_fdt_setprop_cell(vms->fdt, nodename,
+        qemu_fdt_setprop_cell(ms->fdt, nodename,
                               "#redistributor-regions", nb_redist_regions);
 
         if (nb_redist_regions == 1) {
-            qemu_fdt_setprop_sized_cells(vms->fdt, nodename, "reg",
+            qemu_fdt_setprop_sized_cells(ms->fdt, nodename, "reg",
                                          2, vms->memmap[VIRT_GIC_DIST].base,
                                          2, vms->memmap[VIRT_GIC_DIST].size,
                                          2, vms->memmap[VIRT_GIC_REDIST].base,
                                          2, vms->memmap[VIRT_GIC_REDIST].size);
         } else {
-            qemu_fdt_setprop_sized_cells(vms->fdt, nodename, "reg",
+            qemu_fdt_setprop_sized_cells(ms->fdt, nodename, "reg",
                                  2, vms->memmap[VIRT_GIC_DIST].base,
                                  2, vms->memmap[VIRT_GIC_DIST].size,
                                  2, vms->memmap[VIRT_GIC_REDIST].base,
@@ -488,22 +492,22 @@ static void fdt_add_gic_node(VirtMachineState *vms)
         }
 
         if (vms->virt) {
-            qemu_fdt_setprop_cells(vms->fdt, nodename, "interrupts",
+            qemu_fdt_setprop_cells(ms->fdt, nodename, "interrupts",
                                    GIC_FDT_IRQ_TYPE_PPI, ARCH_GIC_MAINT_IRQ,
                                    GIC_FDT_IRQ_FLAGS_LEVEL_HI);
         }
     } else {
         /* 'cortex-a15-gic' means 'GIC v2' */
-        qemu_fdt_setprop_string(vms->fdt, nodename, "compatible",
+        qemu_fdt_setprop_string(ms->fdt, nodename, "compatible",
                                 "arm,cortex-a15-gic");
         if (!vms->virt) {
-            qemu_fdt_setprop_sized_cells(vms->fdt, nodename, "reg",
+            qemu_fdt_setprop_sized_cells(ms->fdt, nodename, "reg",
                                          2, vms->memmap[VIRT_GIC_DIST].base,
                                          2, vms->memmap[VIRT_GIC_DIST].size,
                                          2, vms->memmap[VIRT_GIC_CPU].base,
                                          2, vms->memmap[VIRT_GIC_CPU].size);
         } else {
-            qemu_fdt_setprop_sized_cells(vms->fdt, nodename, "reg",
+            qemu_fdt_setprop_sized_cells(ms->fdt, nodename, "reg",
                                          2, vms->memmap[VIRT_GIC_DIST].base,
                                          2, vms->memmap[VIRT_GIC_DIST].size,
                                          2, vms->memmap[VIRT_GIC_CPU].base,
@@ -512,13 +516,13 @@ static void fdt_add_gic_node(VirtMachineState *vms)
                                          2, vms->memmap[VIRT_GIC_HYP].size,
                                          2, vms->memmap[VIRT_GIC_VCPU].base,
                                          2, vms->memmap[VIRT_GIC_VCPU].size);
-            qemu_fdt_setprop_cells(vms->fdt, nodename, "interrupts",
+            qemu_fdt_setprop_cells(ms->fdt, nodename, "interrupts",
                                    GIC_FDT_IRQ_TYPE_PPI, ARCH_GIC_MAINT_IRQ,
                                    GIC_FDT_IRQ_FLAGS_LEVEL_HI);
         }
     }
 
-    qemu_fdt_setprop_cell(vms->fdt, nodename, "phandle", vms->gic_phandle);
+    qemu_fdt_setprop_cell(ms->fdt, nodename, "phandle", vms->gic_phandle);
     g_free(nodename);
 }
 
@@ -526,6 +530,7 @@ static void fdt_add_pmu_nodes(const VirtMachineState *vms)
 {
     ARMCPU *armcpu = ARM_CPU(first_cpu);
     uint32_t irqflags = GIC_FDT_IRQ_FLAGS_LEVEL_HI;
+    MachineState *ms = MACHINE(vms);
 
     if (!arm_feature(&armcpu->env, ARM_FEATURE_PMU)) {
         assert(!object_property_get_bool(OBJECT(armcpu), "pmu", NULL));
@@ -538,12 +543,12 @@ static void fdt_add_pmu_nodes(const VirtMachineState *vms)
                              (1 << MACHINE(vms)->smp.cpus) - 1);
     }
 
-    qemu_fdt_add_subnode(vms->fdt, "/pmu");
+    qemu_fdt_add_subnode(ms->fdt, "/pmu");
     if (arm_feature(&armcpu->env, ARM_FEATURE_V8)) {
         const char compat[] = "arm,armv8-pmuv3";
-        qemu_fdt_setprop(vms->fdt, "/pmu", "compatible",
+        qemu_fdt_setprop(ms->fdt, "/pmu", "compatible",
                          compat, sizeof(compat));
-        qemu_fdt_setprop_cells(vms->fdt, "/pmu", "interrupts",
+        qemu_fdt_setprop_cells(ms->fdt, "/pmu", "interrupts",
                                GIC_FDT_IRQ_TYPE_PPI, VIRTUAL_PMU_IRQ, irqflags);
     }
 }
@@ -749,6 +754,7 @@ static void create_uart(const VirtMachineState *vms, int uart,
     const char clocknames[] = "uartclk\0apb_pclk";
     DeviceState *dev = qdev_new(TYPE_PL011);
     SysBusDevice *s = SYS_BUS_DEVICE(dev);
+    MachineState *ms = MACHINE(vms);
 
     qdev_prop_set_chr(dev, "chardev", chr);
     sysbus_realize_and_unref(SYS_BUS_DEVICE(dev), &error_fatal);
@@ -757,28 +763,28 @@ static void create_uart(const VirtMachineState *vms, int uart,
     sysbus_connect_irq(s, 0, qdev_get_gpio_in(vms->gic, irq));
 
     nodename = g_strdup_printf("/pl011@%" PRIx64, base);
-    qemu_fdt_add_subnode(vms->fdt, nodename);
+    qemu_fdt_add_subnode(ms->fdt, nodename);
     /* Note that we can't use setprop_string because of the embedded NUL */
-    qemu_fdt_setprop(vms->fdt, nodename, "compatible",
+    qemu_fdt_setprop(ms->fdt, nodename, "compatible",
                          compat, sizeof(compat));
-    qemu_fdt_setprop_sized_cells(vms->fdt, nodename, "reg",
+    qemu_fdt_setprop_sized_cells(ms->fdt, nodename, "reg",
                                      2, base, 2, size);
-    qemu_fdt_setprop_cells(vms->fdt, nodename, "interrupts",
+    qemu_fdt_setprop_cells(ms->fdt, nodename, "interrupts",
                                GIC_FDT_IRQ_TYPE_SPI, irq,
                                GIC_FDT_IRQ_FLAGS_LEVEL_HI);
-    qemu_fdt_setprop_cells(vms->fdt, nodename, "clocks",
+    qemu_fdt_setprop_cells(ms->fdt, nodename, "clocks",
                                vms->clock_phandle, vms->clock_phandle);
-    qemu_fdt_setprop(vms->fdt, nodename, "clock-names",
+    qemu_fdt_setprop(ms->fdt, nodename, "clock-names",
                          clocknames, sizeof(clocknames));
 
     if (uart == VIRT_UART) {
-        qemu_fdt_setprop_string(vms->fdt, "/chosen", "stdout-path", nodename);
+        qemu_fdt_setprop_string(ms->fdt, "/chosen", "stdout-path", nodename);
     } else {
         /* Mark as not usable by the normal world */
-        qemu_fdt_setprop_string(vms->fdt, nodename, "status", "disabled");
-        qemu_fdt_setprop_string(vms->fdt, nodename, "secure-status", "okay");
+        qemu_fdt_setprop_string(ms->fdt, nodename, "status", "disabled");
+        qemu_fdt_setprop_string(ms->fdt, nodename, "secure-status", "okay");
 
-        qemu_fdt_setprop_string(vms->fdt, "/secure-chosen", "stdout-path",
+        qemu_fdt_setprop_string(ms->fdt, "/secure-chosen", "stdout-path",
                                 nodename);
     }
 
@@ -792,19 +798,20 @@ static void create_rtc(const VirtMachineState *vms)
     hwaddr size = vms->memmap[VIRT_RTC].size;
     int irq = vms->irqmap[VIRT_RTC];
     const char compat[] = "arm,pl031\0arm,primecell";
+    MachineState *ms = MACHINE(vms);
 
     sysbus_create_simple("pl031", base, qdev_get_gpio_in(vms->gic, irq));
 
     nodename = g_strdup_printf("/pl031@%" PRIx64, base);
-    qemu_fdt_add_subnode(vms->fdt, nodename);
-    qemu_fdt_setprop(vms->fdt, nodename, "compatible", compat, sizeof(compat));
-    qemu_fdt_setprop_sized_cells(vms->fdt, nodename, "reg",
+    qemu_fdt_add_subnode(ms->fdt, nodename);
+    qemu_fdt_setprop(ms->fdt, nodename, "compatible", compat, sizeof(compat));
+    qemu_fdt_setprop_sized_cells(ms->fdt, nodename, "reg",
                                  2, base, 2, size);
-    qemu_fdt_setprop_cells(vms->fdt, nodename, "interrupts",
+    qemu_fdt_setprop_cells(ms->fdt, nodename, "interrupts",
                            GIC_FDT_IRQ_TYPE_SPI, irq,
                            GIC_FDT_IRQ_FLAGS_LEVEL_HI);
-    qemu_fdt_setprop_cell(vms->fdt, nodename, "clocks", vms->clock_phandle);
-    qemu_fdt_setprop_string(vms->fdt, nodename, "clock-names", "apb_pclk");
+    qemu_fdt_setprop_cell(ms->fdt, nodename, "clocks", vms->clock_phandle);
+    qemu_fdt_setprop_string(ms->fdt, nodename, "clock-names", "apb_pclk");
     g_free(nodename);
 }
 
@@ -821,32 +828,30 @@ static void virt_powerdown_req(Notifier *n, void *opaque)
     }
 }
 
-static void create_gpio_keys(const VirtMachineState *vms,
-                             DeviceState *pl061_dev,
-static void create_gpio_keys(void *fdt, DeviceState *pl061_dev,
                              uint32_t phandle)
 {
     gpio_key_dev = sysbus_create_simple("gpio-key", -1,
                                         qdev_get_gpio_in(pl061_dev, 3));
 
-    qemu_fdt_add_subnode(vms->fdt, "/gpio-keys");
-    qemu_fdt_setprop_string(vms->fdt, "/gpio-keys", "compatible", "gpio-keys");
-    qemu_fdt_setprop_cell(vms->fdt, "/gpio-keys", "#size-cells", 0);
-    qemu_fdt_setprop_cell(vms->fdt, "/gpio-keys", "#address-cells", 1);
+    qemu_fdt_add_subnode(fdt, "/gpio-keys");
+    qemu_fdt_setprop_string(fdt, "/gpio-keys", "compatible", "gpio-keys");
+    qemu_fdt_setprop_cell(fdt, "/gpio-keys", "#size-cells", 0);
+    qemu_fdt_setprop_cell(fdt, "/gpio-keys", "#address-cells", 1);
 
-    qemu_fdt_add_subnode(vms->fdt, "/gpio-keys/poweroff");
-    qemu_fdt_setprop_string(vms->fdt, "/gpio-keys/poweroff",
+    qemu_fdt_add_subnode(fdt, "/gpio-keys/poweroff");
+    qemu_fdt_setprop_string(fdt, "/gpio-keys/poweroff",
                             "label", "GPIO Key Poweroff");
-    qemu_fdt_setprop_cell(vms->fdt, "/gpio-keys/poweroff", "linux,code",
+    qemu_fdt_setprop_cell(fdt, "/gpio-keys/poweroff", "linux,code",
                           KEY_POWER);
-    qemu_fdt_setprop_cells(vms->fdt, "/gpio-keys/poweroff",
+    qemu_fdt_setprop_cells(fdt, "/gpio-keys/poweroff",
                            "gpios", phandle, 3, 0);
 }
 
 #define SECURE_GPIO_POWEROFF 0
 #define SECURE_GPIO_RESET    1
 
-static void create_secure_gpio_pwr(const VirtMachineState *vms,
-                                   DeviceState *pl061_dev,
+static void create_secure_gpio_pwr(char *fdt, DeviceState *pl061_dev,
                                    uint32_t phandle)
 {
     DeviceState *gpio_pwr_dev;
@@ -860,22 +865,22 @@ static void create_secure_gpio_pwr(const VirtMachineState *vms,
     qdev_connect_gpio_out(pl061_dev, SECURE_GPIO_POWEROFF,
                           qdev_get_gpio_in_named(gpio_pwr_dev, "shutdown", 0));
 
-    qemu_fdt_add_subnode(vms->fdt, "/gpio-poweroff");
-    qemu_fdt_setprop_string(vms->fdt, "/gpio-poweroff", "compatible",
+    qemu_fdt_add_subnode(fdt, "/gpio-poweroff");
+    qemu_fdt_setprop_string(fdt, "/gpio-poweroff", "compatible",
                             "gpio-poweroff");
-    qemu_fdt_setprop_cells(vms->fdt, "/gpio-poweroff",
+    qemu_fdt_setprop_cells(fdt, "/gpio-poweroff",
                            "gpios", phandle, SECURE_GPIO_POWEROFF, 0);
-    qemu_fdt_setprop_string(vms->fdt, "/gpio-poweroff", "status", "disabled");
-    qemu_fdt_setprop_string(vms->fdt, "/gpio-poweroff", "secure-status",
+    qemu_fdt_setprop_string(fdt, "/gpio-poweroff", "status", "disabled");
+    qemu_fdt_setprop_string(fdt, "/gpio-poweroff", "secure-status",
                             "okay");
 
-    qemu_fdt_add_subnode(vms->fdt, "/gpio-restart");
-    qemu_fdt_setprop_string(vms->fdt, "/gpio-restart", "compatible",
+    qemu_fdt_add_subnode(fdt, "/gpio-restart");
+    qemu_fdt_setprop_string(fdt, "/gpio-restart", "compatible",
                             "gpio-restart");
-    qemu_fdt_setprop_cells(vms->fdt, "/gpio-restart",
+    qemu_fdt_setprop_cells(fdt, "/gpio-restart",
                            "gpios", phandle, SECURE_GPIO_RESET, 0);
-    qemu_fdt_setprop_string(vms->fdt, "/gpio-restart", "status", "disabled");
-    qemu_fdt_setprop_string(vms->fdt, "/gpio-restart", "secure-status",
+    qemu_fdt_setprop_string(fdt, "/gpio-restart", "status", "disabled");
+    qemu_fdt_setprop_string(fdt, "/gpio-restart", "secure-status",
                             "okay");
 }
 
@@ -889,6 +894,7 @@ static void create_gpio_devices(const VirtMachineState *vms, int gpio,
     int irq = vms->irqmap[gpio];
     const char compat[] = "arm,pl061\0arm,primecell";
     SysBusDevice *s;
+    MachineState *ms = MACHINE(vms);
 
     pl061_dev = qdev_new("pl061");
     s = SYS_BUS_DEVICE(pl061_dev);
@@ -896,33 +902,33 @@ static void create_gpio_devices(const VirtMachineState *vms, int gpio,
     memory_region_add_subregion(mem, base, sysbus_mmio_get_region(s, 0));
     sysbus_connect_irq(s, 0, qdev_get_gpio_in(vms->gic, irq));
 
-    uint32_t phandle = qemu_fdt_alloc_phandle(vms->fdt);
+    uint32_t phandle = qemu_fdt_alloc_phandle(ms->fdt);
     nodename = g_strdup_printf("/pl061@%" PRIx64, base);
-    qemu_fdt_add_subnode(vms->fdt, nodename);
-    qemu_fdt_setprop_sized_cells(vms->fdt, nodename, "reg",
+    qemu_fdt_add_subnode(ms->fdt, nodename);
+    qemu_fdt_setprop_sized_cells(ms->fdt, nodename, "reg",
                                  2, base, 2, size);
-    qemu_fdt_setprop(vms->fdt, nodename, "compatible", compat, sizeof(compat));
-    qemu_fdt_setprop_cell(vms->fdt, nodename, "#gpio-cells", 2);
-    qemu_fdt_setprop(vms->fdt, nodename, "gpio-controller", NULL, 0);
-    qemu_fdt_setprop_cells(vms->fdt, nodename, "interrupts",
+    qemu_fdt_setprop(ms->fdt, nodename, "compatible", compat, sizeof(compat));
+    qemu_fdt_setprop_cell(ms->fdt, nodename, "#gpio-cells", 2);
+    qemu_fdt_setprop(ms->fdt, nodename, "gpio-controller", NULL, 0);
+    qemu_fdt_setprop_cells(ms->fdt, nodename, "interrupts",
                            GIC_FDT_IRQ_TYPE_SPI, irq,
                            GIC_FDT_IRQ_FLAGS_LEVEL_HI);
-    qemu_fdt_setprop_cell(vms->fdt, nodename, "clocks", vms->clock_phandle);
-    qemu_fdt_setprop_string(vms->fdt, nodename, "clock-names", "apb_pclk");
-    qemu_fdt_setprop_cell(vms->fdt, nodename, "phandle", phandle);
+    qemu_fdt_setprop_cell(ms->fdt, nodename, "clocks", vms->clock_phandle);
+    qemu_fdt_setprop_string(ms->fdt, nodename, "clock-names", "apb_pclk");
+    qemu_fdt_setprop_cell(ms->fdt, nodename, "phandle", phandle);
 
     if (gpio != VIRT_GPIO) {
         /* Mark as not usable by the normal world */
-        qemu_fdt_setprop_string(vms->fdt, nodename, "status", "disabled");
-        qemu_fdt_setprop_string(vms->fdt, nodename, "secure-status", "okay");
+        qemu_fdt_setprop_string(ms->fdt, nodename, "status", "disabled");
+        qemu_fdt_setprop_string(ms->fdt, nodename, "secure-status", "okay");
     }
     g_free(nodename);
 
     /* Child gpio devices */
     if (gpio == VIRT_GPIO) {
-        create_gpio_keys(vms, pl061_dev, phandle);
+        create_gpio_keys(ms->fdt, pl061_dev, phandle);
     } else {
-        create_secure_gpio_pwr(vms, pl061_dev, phandle);
+        create_secure_gpio_pwr(ms->fdt, pl061_dev, phandle);
     }
 }
 
@@ -930,6 +936,7 @@ static void create_virtio_devices(const VirtMachineState *vms)
 {
     int i;
     hwaddr size = vms->memmap[VIRT_MMIO].size;
+    MachineState *ms = MACHINE(vms);
 
     /* We create the transports in forwards order. Since qbus_realize()
      * prepends (not appends) new child buses, the incrementing loop below will
@@ -979,15 +986,15 @@ static void create_virtio_devices(const VirtMachineState *vms)
         hwaddr base = vms->memmap[VIRT_MMIO].base + i * size;
 
         nodename = g_strdup_printf("/virtio_mmio@%" PRIx64, base);
-        qemu_fdt_add_subnode(vms->fdt, nodename);
-        qemu_fdt_setprop_string(vms->fdt, nodename,
+        qemu_fdt_add_subnode(ms->fdt, nodename);
+        qemu_fdt_setprop_string(ms->fdt, nodename,
                                 "compatible", "virtio,mmio");
-        qemu_fdt_setprop_sized_cells(vms->fdt, nodename, "reg",
+        qemu_fdt_setprop_sized_cells(ms->fdt, nodename, "reg",
                                      2, base, 2, size);
-        qemu_fdt_setprop_cells(vms->fdt, nodename, "interrupts",
+        qemu_fdt_setprop_cells(ms->fdt, nodename, "interrupts",
                                GIC_FDT_IRQ_TYPE_SPI, irq,
                                GIC_FDT_IRQ_FLAGS_EDGE_LO_HI);
-        qemu_fdt_setprop(vms->fdt, nodename, "dma-coherent", NULL, 0);
+        qemu_fdt_setprop(ms->fdt, nodename, "dma-coherent", NULL, 0);
         g_free(nodename);
     }
 }
@@ -1068,17 +1075,18 @@ static void virt_flash_fdt(VirtMachineState *vms,
 {
     hwaddr flashsize = vms->memmap[VIRT_FLASH].size / 2;
     hwaddr flashbase = vms->memmap[VIRT_FLASH].base;
+    MachineState *ms = MACHINE(vms);
     char *nodename;
 
     if (sysmem == secure_sysmem) {
         /* Report both flash devices as a single node in the DT */
         nodename = g_strdup_printf("/flash@%" PRIx64, flashbase);
-        qemu_fdt_add_subnode(vms->fdt, nodename);
-        qemu_fdt_setprop_string(vms->fdt, nodename, "compatible", "cfi-flash");
-        qemu_fdt_setprop_sized_cells(vms->fdt, nodename, "reg",
+        qemu_fdt_add_subnode(ms->fdt, nodename);
+        qemu_fdt_setprop_string(ms->fdt, nodename, "compatible", "cfi-flash");
+        qemu_fdt_setprop_sized_cells(ms->fdt, nodename, "reg",
                                      2, flashbase, 2, flashsize,
                                      2, flashbase + flashsize, 2, flashsize);
-        qemu_fdt_setprop_cell(vms->fdt, nodename, "bank-width", 4);
+        qemu_fdt_setprop_cell(ms->fdt, nodename, "bank-width", 4);
         g_free(nodename);
     } else {
         /*
@@ -1086,21 +1094,21 @@ static void virt_flash_fdt(VirtMachineState *vms,
          * only visible to the secure world.
          */
         nodename = g_strdup_printf("/secflash@%" PRIx64, flashbase);
-        qemu_fdt_add_subnode(vms->fdt, nodename);
-        qemu_fdt_setprop_string(vms->fdt, nodename, "compatible", "cfi-flash");
-        qemu_fdt_setprop_sized_cells(vms->fdt, nodename, "reg",
+        qemu_fdt_add_subnode(ms->fdt, nodename);
+        qemu_fdt_setprop_string(ms->fdt, nodename, "compatible", "cfi-flash");
+        qemu_fdt_setprop_sized_cells(ms->fdt, nodename, "reg",
                                      2, flashbase, 2, flashsize);
-        qemu_fdt_setprop_cell(vms->fdt, nodename, "bank-width", 4);
-        qemu_fdt_setprop_string(vms->fdt, nodename, "status", "disabled");
-        qemu_fdt_setprop_string(vms->fdt, nodename, "secure-status", "okay");
+        qemu_fdt_setprop_cell(ms->fdt, nodename, "bank-width", 4);
+        qemu_fdt_setprop_string(ms->fdt, nodename, "status", "disabled");
+        qemu_fdt_setprop_string(ms->fdt, nodename, "secure-status", "okay");
         g_free(nodename);
 
         nodename = g_strdup_printf("/flash@%" PRIx64, flashbase);
-        qemu_fdt_add_subnode(vms->fdt, nodename);
-        qemu_fdt_setprop_string(vms->fdt, nodename, "compatible", "cfi-flash");
-        qemu_fdt_setprop_sized_cells(vms->fdt, nodename, "reg",
+        qemu_fdt_add_subnode(ms->fdt, nodename);
+        qemu_fdt_setprop_string(ms->fdt, nodename, "compatible", "cfi-flash");
+        qemu_fdt_setprop_sized_cells(ms->fdt, nodename, "reg",
                                      2, flashbase + flashsize, 2, flashsize);
-        qemu_fdt_setprop_cell(vms->fdt, nodename, "bank-width", 4);
+        qemu_fdt_setprop_cell(ms->fdt, nodename, "bank-width", 4);
         g_free(nodename);
     }
 }
@@ -1167,17 +1175,17 @@ static FWCfgState *create_fw_cfg(const VirtMachineState *vms, AddressSpace *as)
     fw_cfg_add_i16(fw_cfg, FW_CFG_NB_CPUS, (uint16_t)ms->smp.cpus);
 
     nodename = g_strdup_printf("/fw-cfg@%" PRIx64, base);
-    qemu_fdt_add_subnode(vms->fdt, nodename);
-    qemu_fdt_setprop_string(vms->fdt, nodename,
+    qemu_fdt_add_subnode(ms->fdt, nodename);
+    qemu_fdt_setprop_string(ms->fdt, nodename,
                             "compatible", "qemu,fw-cfg-mmio");
-    qemu_fdt_setprop_sized_cells(vms->fdt, nodename, "reg",
+    qemu_fdt_setprop_sized_cells(ms->fdt, nodename, "reg",
                                  2, base, 2, size);
-    qemu_fdt_setprop(vms->fdt, nodename, "dma-coherent", NULL, 0);
+    qemu_fdt_setprop(ms->fdt, nodename, "dma-coherent", NULL, 0);
     g_free(nodename);
     return fw_cfg;
 }
 
-static void create_pcie_irq_map(const VirtMachineState *vms,
+static void create_pcie_irq_map(const MachineState *ms,
                                 uint32_t gic_phandle,
                                 int first_irq, const char *nodename)
 {
@@ -1205,10 +1213,10 @@ static void create_pcie_irq_map(const VirtMachineState *vms,
         }
     }
 
-    qemu_fdt_setprop(vms->fdt, nodename, "interrupt-map",
+    qemu_fdt_setprop(ms->fdt, nodename, "interrupt-map",
                      full_irq_map, sizeof(full_irq_map));
 
-    qemu_fdt_setprop_cells(vms->fdt, nodename, "interrupt-map-mask",
+    qemu_fdt_setprop_cells(ms->fdt, nodename, "interrupt-map-mask",
                            cpu_to_be16(PCI_DEVFN(3, 0)), /* Slot 3 */
                            0, 0,
                            0x7           /* PCI irq */);
@@ -1225,6 +1233,7 @@ static void create_smmu(const VirtMachineState *vms,
     hwaddr size = vms->memmap[VIRT_SMMU].size;
     const char irq_names[] = "eventq\0priq\0cmdq-sync\0gerror";
     DeviceState *dev;
+    MachineState *ms = MACHINE(vms);
 
     if (vms->iommu != VIRT_IOMMU_SMMUV3 || !vms->iommu_phandle) {
         return;
@@ -1242,26 +1251,26 @@ static void create_smmu(const VirtMachineState *vms,
     }
 
     node = g_strdup_printf("/smmuv3@%" PRIx64, base);
-    qemu_fdt_add_subnode(vms->fdt, node);
-    qemu_fdt_setprop(vms->fdt, node, "compatible", compat, sizeof(compat));
-    qemu_fdt_setprop_sized_cells(vms->fdt, node, "reg", 2, base, 2, size);
+    qemu_fdt_add_subnode(ms->fdt, node);
+    qemu_fdt_setprop(ms->fdt, node, "compatible", compat, sizeof(compat));
+    qemu_fdt_setprop_sized_cells(ms->fdt, node, "reg", 2, base, 2, size);
 
-    qemu_fdt_setprop_cells(vms->fdt, node, "interrupts",
+    qemu_fdt_setprop_cells(ms->fdt, node, "interrupts",
             GIC_FDT_IRQ_TYPE_SPI, irq    , GIC_FDT_IRQ_FLAGS_EDGE_LO_HI,
             GIC_FDT_IRQ_TYPE_SPI, irq + 1, GIC_FDT_IRQ_FLAGS_EDGE_LO_HI,
             GIC_FDT_IRQ_TYPE_SPI, irq + 2, GIC_FDT_IRQ_FLAGS_EDGE_LO_HI,
             GIC_FDT_IRQ_TYPE_SPI, irq + 3, GIC_FDT_IRQ_FLAGS_EDGE_LO_HI);
 
-    qemu_fdt_setprop(vms->fdt, node, "interrupt-names", irq_names,
+    qemu_fdt_setprop(ms->fdt, node, "interrupt-names", irq_names,
                      sizeof(irq_names));
 
-    qemu_fdt_setprop_cell(vms->fdt, node, "clocks", vms->clock_phandle);
-    qemu_fdt_setprop_string(vms->fdt, node, "clock-names", "apb_pclk");
-    qemu_fdt_setprop(vms->fdt, node, "dma-coherent", NULL, 0);
+    qemu_fdt_setprop_cell(ms->fdt, node, "clocks", vms->clock_phandle);
+    qemu_fdt_setprop_string(ms->fdt, node, "clock-names", "apb_pclk");
+    qemu_fdt_setprop(ms->fdt, node, "dma-coherent", NULL, 0);
 
-    qemu_fdt_setprop_cell(vms->fdt, node, "#iommu-cells", 1);
+    qemu_fdt_setprop_cell(ms->fdt, node, "#iommu-cells", 1);
 
-    qemu_fdt_setprop_cell(vms->fdt, node, "phandle", vms->iommu_phandle);
+    qemu_fdt_setprop_cell(ms->fdt, node, "phandle", vms->iommu_phandle);
     g_free(node);
 }
 
@@ -1269,22 +1278,23 @@ static void create_virtio_iommu_dt_bindings(VirtMachineState *vms)
 {
     const char compat[] = "virtio,pci-iommu";
     uint16_t bdf = vms->virtio_iommu_bdf;
+    MachineState *ms = MACHINE(vms);
     char *node;
 
-    vms->iommu_phandle = qemu_fdt_alloc_phandle(vms->fdt);
+    vms->iommu_phandle = qemu_fdt_alloc_phandle(ms->fdt);
 
     node = g_strdup_printf("%s/virtio_iommu@%d", vms->pciehb_nodename, bdf);
-    qemu_fdt_add_subnode(vms->fdt, node);
-    qemu_fdt_setprop(vms->fdt, node, "compatible", compat, sizeof(compat));
-    qemu_fdt_setprop_sized_cells(vms->fdt, node, "reg",
+    qemu_fdt_add_subnode(ms->fdt, node);
+    qemu_fdt_setprop(ms->fdt, node, "compatible", compat, sizeof(compat));
+    qemu_fdt_setprop_sized_cells(ms->fdt, node, "reg",
                                  1, bdf << 8, 1, 0, 1, 0,
                                  1, 0, 1, 0);
 
-    qemu_fdt_setprop_cell(vms->fdt, node, "#iommu-cells", 1);
-    qemu_fdt_setprop_cell(vms->fdt, node, "phandle", vms->iommu_phandle);
+    qemu_fdt_setprop_cell(ms->fdt, node, "#iommu-cells", 1);
+    qemu_fdt_setprop_cell(ms->fdt, node, "phandle", vms->iommu_phandle);
     g_free(node);
 
-    qemu_fdt_setprop_cells(vms->fdt, vms->pciehb_nodename, "iommu-map",
+    qemu_fdt_setprop_cells(ms->fdt, vms->pciehb_nodename, "iommu-map",
                            0x0, vms->iommu_phandle, 0x0, bdf,
                            bdf + 1, vms->iommu_phandle, bdf + 1, 0xffff - bdf);
 }
@@ -1309,6 +1319,7 @@ static void create_pcie(VirtMachineState *vms)
     char *nodename;
     int i, ecam_id;
     PCIHostState *pci;
+    MachineState *ms = MACHINE(vms);
 
     dev = qdev_new(TYPE_GPEX_HOST);
     sysbus_realize_and_unref(SYS_BUS_DEVICE(dev), &error_fatal);
@@ -1369,27 +1380,27 @@ static void create_pcie(VirtMachineState *vms)
     }
 
     nodename = vms->pciehb_nodename = g_strdup_printf("/pcie@%" PRIx64, base);
-    qemu_fdt_add_subnode(vms->fdt, nodename);
-    qemu_fdt_setprop_string(vms->fdt, nodename,
+    qemu_fdt_add_subnode(ms->fdt, nodename);
+    qemu_fdt_setprop_string(ms->fdt, nodename,
                             "compatible", "pci-host-ecam-generic");
-    qemu_fdt_setprop_string(vms->fdt, nodename, "device_type", "pci");
-    qemu_fdt_setprop_cell(vms->fdt, nodename, "#address-cells", 3);
-    qemu_fdt_setprop_cell(vms->fdt, nodename, "#size-cells", 2);
-    qemu_fdt_setprop_cell(vms->fdt, nodename, "linux,pci-domain", 0);
-    qemu_fdt_setprop_cells(vms->fdt, nodename, "bus-range", 0,
+    qemu_fdt_setprop_string(ms->fdt, nodename, "device_type", "pci");
+    qemu_fdt_setprop_cell(ms->fdt, nodename, "#address-cells", 3);
+    qemu_fdt_setprop_cell(ms->fdt, nodename, "#size-cells", 2);
+    qemu_fdt_setprop_cell(ms->fdt, nodename, "linux,pci-domain", 0);
+    qemu_fdt_setprop_cells(ms->fdt, nodename, "bus-range", 0,
                            nr_pcie_buses - 1);
-    qemu_fdt_setprop(vms->fdt, nodename, "dma-coherent", NULL, 0);
+    qemu_fdt_setprop(ms->fdt, nodename, "dma-coherent", NULL, 0);
 
     if (vms->msi_phandle) {
-        qemu_fdt_setprop_cells(vms->fdt, nodename, "msi-parent",
+        qemu_fdt_setprop_cells(ms->fdt, nodename, "msi-parent",
                                vms->msi_phandle);
     }
 
-    qemu_fdt_setprop_sized_cells(vms->fdt, nodename, "reg",
+    qemu_fdt_setprop_sized_cells(ms->fdt, nodename, "reg",
                                  2, base_ecam, 2, size_ecam);
 
     if (vms->highmem) {
-        qemu_fdt_setprop_sized_cells(vms->fdt, nodename, "ranges",
+        qemu_fdt_setprop_sized_cells(ms->fdt, nodename, "ranges",
                                      1, FDT_PCI_RANGE_IOPORT, 2, 0,
                                      2, base_pio, 2, size_pio,
                                      1, FDT_PCI_RANGE_MMIO, 2, base_mmio,
@@ -1398,23 +1409,23 @@ static void create_pcie(VirtMachineState *vms)
                                      2, base_mmio_high,
                                      2, base_mmio_high, 2, size_mmio_high);
     } else {
-        qemu_fdt_setprop_sized_cells(vms->fdt, nodename, "ranges",
+        qemu_fdt_setprop_sized_cells(ms->fdt, nodename, "ranges",
                                      1, FDT_PCI_RANGE_IOPORT, 2, 0,
                                      2, base_pio, 2, size_pio,
                                      1, FDT_PCI_RANGE_MMIO, 2, base_mmio,
                                      2, base_mmio, 2, size_mmio);
     }
 
-    qemu_fdt_setprop_cell(vms->fdt, nodename, "#interrupt-cells", 1);
-    create_pcie_irq_map(vms, vms->gic_phandle, irq, nodename);
+    qemu_fdt_setprop_cell(ms->fdt, nodename, "#interrupt-cells", 1);
+    create_pcie_irq_map(ms, vms->gic_phandle, irq, nodename);
 
     if (vms->iommu) {
-        vms->iommu_phandle = qemu_fdt_alloc_phandle(vms->fdt);
+        vms->iommu_phandle = qemu_fdt_alloc_phandle(ms->fdt);
 
         switch (vms->iommu) {
         case VIRT_IOMMU_SMMUV3:
             create_smmu(vms, vms->bus);
-            qemu_fdt_setprop_cells(vms->fdt, nodename, "iommu-map",
+            qemu_fdt_setprop_cells(ms->fdt, nodename, "iommu-map",
                                    0x0, vms->iommu_phandle, 0x0, 0x10000);
             break;
         default:
@@ -1466,17 +1477,18 @@ static void create_secure_ram(VirtMachineState *vms,
     char *nodename;
     hwaddr base = vms->memmap[VIRT_SECURE_MEM].base;
     hwaddr size = vms->memmap[VIRT_SECURE_MEM].size;
+    MachineState *ms = MACHINE(vms);
 
     memory_region_init_ram(secram, NULL, "virt.secure-ram", size,
                            &error_fatal);
     memory_region_add_subregion(secure_sysmem, base, secram);
 
     nodename = g_strdup_printf("/secram@%" PRIx64, base);
-    qemu_fdt_add_subnode(vms->fdt, nodename);
-    qemu_fdt_setprop_string(vms->fdt, nodename, "device_type", "memory");
-    qemu_fdt_setprop_sized_cells(vms->fdt, nodename, "reg", 2, base, 2, size);
-    qemu_fdt_setprop_string(vms->fdt, nodename, "status", "disabled");
-    qemu_fdt_setprop_string(vms->fdt, nodename, "secure-status", "okay");
+    qemu_fdt_add_subnode(ms->fdt, nodename);
+    qemu_fdt_setprop_string(ms->fdt, nodename, "device_type", "memory");
+    qemu_fdt_setprop_sized_cells(ms->fdt, nodename, "reg", 2, base, 2, size);
+    qemu_fdt_setprop_string(ms->fdt, nodename, "status", "disabled");
+    qemu_fdt_setprop_string(ms->fdt, nodename, "secure-status", "okay");
 
     if (secure_tag_sysmem) {
         create_tag_ram(secure_tag_sysmem, base, size, "mach-virt.secure-tag");
@@ -1489,9 +1501,11 @@ static void *machvirt_dtb(const struct arm_boot_info *binfo, int *fdt_size)
 {
     const VirtMachineState *board = container_of(binfo, VirtMachineState,
                                                  bootinfo);
+    MachineState *ms = MACHINE(board);
+
 
     *fdt_size = board->fdt_size;
-    return board->fdt;
+    return ms->fdt;
 }
 
 static void virt_build_smbios(VirtMachineState *vms)
@@ -1539,7 +1553,7 @@ void virt_machine_done(Notifier *notifier, void *data)
      * while qemu takes charge of the qom stuff.
      */
     if (info->dtb_filename == NULL) {
-        platform_bus_add_all_fdt_nodes(vms->fdt, "/intc",
+        platform_bus_add_all_fdt_nodes(ms->fdt, "/intc",
                                        vms->memmap[VIRT_PLATFORM_BUS].base,
                                        vms->memmap[VIRT_PLATFORM_BUS].size,
                                        vms->irqmap[VIRT_PLATFORM_BUS]);
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Feb 11 17:20:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 17:20:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84030.157441 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAFdN-0004S7-To; Thu, 11 Feb 2021 17:20:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84030.157441; Thu, 11 Feb 2021 17:20:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAFdN-0004RI-PB; Thu, 11 Feb 2021 17:20:05 +0000
Received: by outflank-mailman (input) for mailman id 84030;
 Thu, 11 Feb 2021 17:20:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kYUG=HN=linaro.org=alex.bennee@srs-us1.protection.inumbo.net>)
 id 1lAFdM-0003q3-Tk
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 17:20:04 +0000
Received: from mail-wr1-x434.google.com (unknown [2a00:1450:4864:20::434])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 616edc7d-819b-49bb-90f0-499f35971b39;
 Thu, 11 Feb 2021 17:19:54 +0000 (UTC)
Received: by mail-wr1-x434.google.com with SMTP id b3so4949055wrj.5
 for <xen-devel@lists.xenproject.org>; Thu, 11 Feb 2021 09:19:54 -0800 (PST)
Received: from zen.linaroharston ([51.148.130.216])
 by smtp.gmail.com with ESMTPSA id l1sm9819414wmi.48.2021.02.11.09.19.47
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 11 Feb 2021 09:19:50 -0800 (PST)
Received: from zen.lan (localhost [127.0.0.1])
 by zen.linaroharston (Postfix) with ESMTP id AE1E91FF8C;
 Thu, 11 Feb 2021 17:19:46 +0000 (GMT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 616edc7d-819b-49bb-90f0-499f35971b39
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=t3OFT8GZLSx2s8PIhoV1MKY+HG2WX+w1muUkEk+uEOc=;
        b=jnYha8oxul2sAT33I9n4HYxvBHkU2m5Ig+7LY2M8FP5oTyPkTxZqkWq0YskaDNTih8
         c+0L5cLBONtsrnosF1zy0IMr8LGqeJz56dDlyhZunnPOwuCmxuIb5yjKfReU4Uuv9Ncj
         9/Tski3heTi6sJHoSBenr9YOau3WJ1IAT81dnpNhPG5Ex7vobdrmjYJCorJQ0djlFslO
         6jsJLRrc+fHLa5wKBjH6Zr76+Q1LZZRJERou7VROlcFMIk13/hzX74p341ufYzF5vnQb
         rfGyqPcaeylmOMId3yH+OGjX1PUviN/O18i6+6NuuCjUIKAPW0rxn6ZQY3WS88DITgHp
         l0ww==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=t3OFT8GZLSx2s8PIhoV1MKY+HG2WX+w1muUkEk+uEOc=;
        b=cjkxVQkyI69HDZ8SIPwFxKtcKwuFeVqV6HfzMoNYw5q7LEfnrl8tu4uA9YxfDjHpYD
         N2d83p9G4bb631lgEUtNw9gfTsS9OunjV2BY7d2WexnYV3hObIQ0uv4EAV8uDcVAEvD8
         fbaTnFmrplqqS2xpwvXqzN6FqDAT9fyibLZzi2fgT7heIaYuU9YvdJWHqapEdcw6scyj
         n/IY+YJDBPIBlDNLjtI7noJ4XZ9QarRvBB3PvQw4AzeOfusASbDVjr9pzCLlM8HombdI
         /ZEEnKTjGeqz5nbAggT+HVbKsV8KzrE2I1GcT7umtSPJf6jTfLFvGXJJ+7Dildas6tI8
         Z6aA==
X-Gm-Message-State: AOAM530iNB4cma7vjGe03jXUjOOWBb0lbgOKkiJzD9yu/r1b4+W2zWcf
	rVZQrWk4lTdqrt+9Nw+kXGYSZg==
X-Google-Smtp-Source: ABdhPJy4hZBqYwg1DtCBsL0Xs3oZtCMSprNtTXkS5l8GCnYIalW6zmV+LmO6HhXEH1pfTWKXxOsi3A==
X-Received: by 2002:adf:bb54:: with SMTP id x20mr6691695wrg.112.1613063993294;
        Thu, 11 Feb 2021 09:19:53 -0800 (PST)
From: =?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>
To: qemu-devel@nongnu.org
Cc: julien@xen.org,
	stefano.stabellini@linaro.org,
	stefano.stabellini@xilinx.com,
	andre.przywara@arm.com,
	stratos-dev@op-lists.linaro.org,
	xen-devel@lists.xenproject.org,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	Alistair Francis <alistair.francis@wdc.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
	Palmer Dabbelt <palmer@dabbelt.com>,
	Alistair Francis <Alistair.Francis@wdc.com>,
	Sagar Karandikar <sagark@eecs.berkeley.edu>,
	Bastian Koppelmann <kbastian@mail.uni-paderborn.de>,
	qemu-riscv@nongnu.org (open list:RISC-V TCG CPUs)
Subject: [PATCH  v2 2/7] hw/riscv: migrate fdt field to generic MachineState
Date: Thu, 11 Feb 2021 17:19:40 +0000
Message-Id: <20210211171945.18313-3-alex.bennee@linaro.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210211171945.18313-1-alex.bennee@linaro.org>
References: <20210211171945.18313-1-alex.bennee@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This is a mechanical change to make the fdt available through
MachineState.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20201021170842.25762-3-alex.bennee@linaro.org>
Message-Id: <20201105175153.30489-3-alex.bennee@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
---
 include/hw/riscv/virt.h |  1 -
 hw/riscv/virt.c         | 20 ++++++++++----------
 2 files changed, 10 insertions(+), 11 deletions(-)

diff --git a/include/hw/riscv/virt.h b/include/hw/riscv/virt.h
index 84b7a3848f..632da52018 100644
--- a/include/hw/riscv/virt.h
+++ b/include/hw/riscv/virt.h
@@ -41,7 +41,6 @@ struct RISCVVirtState {
     DeviceState *plic[VIRT_SOCKETS_MAX];
     PFlashCFI01 *flash[2];
 
-    void *fdt;
     int fdt_size;
 };
 
diff --git a/hw/riscv/virt.c b/hw/riscv/virt.c
index 2299b3a6be..8d0ba72d78 100644
--- a/hw/riscv/virt.c
+++ b/hw/riscv/virt.c
@@ -189,14 +189,14 @@ static void create_fdt(RISCVVirtState *s, const struct MemmapEntry *memmap,
     hwaddr flashbase = virt_memmap[VIRT_FLASH].base;
 
     if (mc->dtb) {
-        fdt = s->fdt = load_device_tree(mc->dtb, &s->fdt_size);
+        fdt = mc->fdt = load_device_tree(mc->dtb, &s->fdt_size);
         if (!fdt) {
             error_report("load_device_tree() failed");
             exit(1);
         }
         goto update_bootargs;
     } else {
-        fdt = s->fdt = create_device_tree(&s->fdt_size);
+        fdt = mc->fdt = create_device_tree(&s->fdt_size);
         if (!fdt) {
             error_report("create_device_tree() failed");
             exit(1);
@@ -434,12 +434,12 @@ static void create_fdt(RISCVVirtState *s, const struct MemmapEntry *memmap,
     g_free(name);
 
     name = g_strdup_printf("/soc/flash@%" PRIx64, flashbase);
-    qemu_fdt_add_subnode(s->fdt, name);
-    qemu_fdt_setprop_string(s->fdt, name, "compatible", "cfi-flash");
-    qemu_fdt_setprop_sized_cells(s->fdt, name, "reg",
+    qemu_fdt_add_subnode(mc->fdt, name);
+    qemu_fdt_setprop_string(mc->fdt, name, "compatible", "cfi-flash");
+    qemu_fdt_setprop_sized_cells(mc->fdt, name, "reg",
                                  2, flashbase, 2, flashsize,
                                  2, flashbase + flashsize, 2, flashsize);
-    qemu_fdt_setprop_cell(s->fdt, name, "bank-width", 4);
+    qemu_fdt_setprop_cell(mc->fdt, name, "bank-width", 4);
     g_free(name);
 
 update_bootargs:
@@ -631,9 +631,9 @@ static void virt_machine_init(MachineState *machine)
             hwaddr end = riscv_load_initrd(machine->initrd_filename,
                                            machine->ram_size, kernel_entry,
                                            &start);
-            qemu_fdt_setprop_cell(s->fdt, "/chosen",
+            qemu_fdt_setprop_cell(machine->fdt, "/chosen",
                                   "linux,initrd-start", start);
-            qemu_fdt_setprop_cell(s->fdt, "/chosen", "linux,initrd-end",
+            qemu_fdt_setprop_cell(machine->fdt, "/chosen", "linux,initrd-end",
                                   end);
         }
     } else {
@@ -654,12 +654,12 @@ static void virt_machine_init(MachineState *machine)
 
     /* Compute the fdt load address in dram */
     fdt_load_addr = riscv_load_fdt(memmap[VIRT_DRAM].base,
-                                   machine->ram_size, s->fdt);
+                                   machine->ram_size, machine->fdt);
     /* load the reset vector */
     riscv_setup_rom_reset_vec(machine, &s->soc[0], start_addr,
                               virt_memmap[VIRT_MROM].base,
                               virt_memmap[VIRT_MROM].size, kernel_entry,
-                              fdt_load_addr, s->fdt);
+                              fdt_load_addr, machine->fdt);
 
     /* SiFive Test MMIO device */
     sifive_test_create(memmap[VIRT_TEST].base);
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Feb 11 17:20:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 17:20:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84031.157454 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAFdT-0004jl-5a; Thu, 11 Feb 2021 17:20:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84031.157454; Thu, 11 Feb 2021 17:20:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAFdT-0004jd-1z; Thu, 11 Feb 2021 17:20:11 +0000
Received: by outflank-mailman (input) for mailman id 84031;
 Thu, 11 Feb 2021 17:20:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kYUG=HN=linaro.org=alex.bennee@srs-us1.protection.inumbo.net>)
 id 1lAFdR-0003q3-Tg
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 17:20:09 +0000
Received: from mail-wr1-x42a.google.com (unknown [2a00:1450:4864:20::42a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ea3f0262-83ee-4a72-a225-ede5284102e1;
 Thu, 11 Feb 2021 17:19:55 +0000 (UTC)
Received: by mail-wr1-x42a.google.com with SMTP id v1so1347128wrd.6
 for <xen-devel@lists.xenproject.org>; Thu, 11 Feb 2021 09:19:55 -0800 (PST)
Received: from zen.linaroharston ([51.148.130.216])
 by smtp.gmail.com with ESMTPSA id y15sm5930649wrm.93.2021.02.11.09.19.47
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 11 Feb 2021 09:19:50 -0800 (PST)
Received: from zen.lan (localhost [127.0.0.1])
 by zen.linaroharston (Postfix) with ESMTP id F1B7C1FF8F;
 Thu, 11 Feb 2021 17:19:46 +0000 (GMT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ea3f0262-83ee-4a72-a225-ede5284102e1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=R65y4SBWGHf83/lkymoxqOczXrRrz5R3bZoR1vTdS4s=;
        b=OWTGajGfVbw4fCsYO3v4vddVcuO7px3NM68LIa9yRkSVmsHeVY4bkrU9/2QcbjxzJr
         VcK8a+LqFCjujYpwgMBmZYx1nr0R3zWlAsyxk05immU69nf3odOcHC3zXl3UdT3W+pP+
         UvQiV3poOL7pKqxPyPZyCxm8U4tk8Shx8YSzr9ucxH1Il4gUjM4wNoFRh+uhcRttyUrH
         KYkAno7BPJq8sJkeI9axdoZma5w//RmTNNJlT9hfV+mSo31QZ+S0HPcOcMR/yF778Z0n
         3MGGS0jVgAu8xrE8GDiW64N2zhNUG+m8K6Tq2X8kzTCQ7/sPfvWnPSTLauQy3nrs5Mnu
         eJJA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=R65y4SBWGHf83/lkymoxqOczXrRrz5R3bZoR1vTdS4s=;
        b=fMGUpe7ge1jeCRWAgrlKUiG7HDKLUD/y+2vY2sEE8mQMIJcwMUcVIJHSl+rrMKddvF
         wBS2DS9F6W6b4ZLvuzp5p/njUxaQvbJtPn3Baa6y5stfhlLQaRmy4xJeYibrR5L7MHi5
         6h4K7VrzGqaEA8rtARou+ryNGKG6g1ybML9IiLgG6klLoZVTElfN36w+bJbsQcUk4VFq
         JbWt6WfaV8rruDF/GMWPLD+huGtgqJFbD+LkyiIGNcZ8ebBYnnn2uVcF0QuHYx9YpSe8
         Do3AP/aEaLd9UrAOHS+JSlzCcLJ7V2kOlpxT55TKelO1RlpdHkya/d15rKi6bTTi7uKV
         t4dw==
X-Gm-Message-State: AOAM532BTMP23qQ0JpZGG+MPEVVlaoxKzYhdzYGBiRjzxr048Q193yo/
	Tb65CRjekVDwjRKg29Gfmndv2Q==
X-Google-Smtp-Source: ABdhPJywKrSlQGvz6xL9wq/N3CEq2PsAYq2P+QL9BXzIbFPCJoxtsMQMD8v6tA1/gOUYVLDiz937UA==
X-Received: by 2002:a5d:428a:: with SMTP id k10mr629694wrq.4.1613063994310;
        Thu, 11 Feb 2021 09:19:54 -0800 (PST)
From: =?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>
To: qemu-devel@nongnu.org
Cc: julien@xen.org,
	stefano.stabellini@linaro.org,
	stefano.stabellini@xilinx.com,
	andre.przywara@arm.com,
	stratos-dev@op-lists.linaro.org,
	xen-devel@lists.xenproject.org,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	Alistair Francis <alistair.francis@wdc.com>,
	David Gibson <david@gibson.dropbear.id.au>
Subject: [PATCH  v2 3/7] device_tree: add qemu_fdt_setprop_string_array helper
Date: Thu, 11 Feb 2021 17:19:41 +0000
Message-Id: <20210211171945.18313-4-alex.bennee@linaro.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210211171945.18313-1-alex.bennee@linaro.org>
References: <20210211171945.18313-1-alex.bennee@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

A string array in a device tree is simply a series of \0-terminated
strings laid out next to each other. As libfdt doesn't support writing
these directly, we need to build the property value ourselves.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20201105175153.30489-4-alex.bennee@linaro.org>

---
v2
  - checkpatch long line fix
---
 include/sysemu/device_tree.h | 17 +++++++++++++++++
 softmmu/device_tree.c        | 26 ++++++++++++++++++++++++++
 2 files changed, 43 insertions(+)

diff --git a/include/sysemu/device_tree.h b/include/sysemu/device_tree.h
index 982c89345f..8a2fe55622 100644
--- a/include/sysemu/device_tree.h
+++ b/include/sysemu/device_tree.h
@@ -70,6 +70,23 @@ int qemu_fdt_setprop_u64(void *fdt, const char *node_path,
                          const char *property, uint64_t val);
 int qemu_fdt_setprop_string(void *fdt, const char *node_path,
                             const char *property, const char *string);
+
+/**
+ * qemu_fdt_setprop_string_array: set a string array property
+ *
+ * @fdt: pointer to the dt blob
+ * @node_path: path of the node to modify
+ * @prop: name of the property to set
+ * @array: pointer to an array of string pointers
+ * @len: length of array
+ *
+ * Assigns a string array to a property. This function converts an
+ * array of strings to a sequential string with \0 separators before
+ * setting the property.
+ */
+int qemu_fdt_setprop_string_array(void *fdt, const char *node_path,
+                                  const char *prop, char **array, int len);
+
 int qemu_fdt_setprop_phandle(void *fdt, const char *node_path,
                              const char *property,
                              const char *target_node_path);
diff --git a/softmmu/device_tree.c b/softmmu/device_tree.c
index b9a3ddc518..2691c58cf6 100644
--- a/softmmu/device_tree.c
+++ b/softmmu/device_tree.c
@@ -21,6 +21,7 @@
 #include "qemu/error-report.h"
 #include "qemu/option.h"
 #include "qemu/bswap.h"
+#include "qemu/cutils.h"
 #include "sysemu/device_tree.h"
 #include "sysemu/sysemu.h"
 #include "hw/loader.h"
@@ -397,6 +398,31 @@ int qemu_fdt_setprop_string(void *fdt, const char *node_path,
     return r;
 }
 
+/*
+ * libfdt doesn't allow us to add string arrays directly, but they are
+ * just a series of NUL-terminated strings with a length. We build
+ * the string up here so we can calculate the final length.
+ */
+int qemu_fdt_setprop_string_array(void *fdt, const char *node_path,
+                                  const char *prop, char **array, int len)
+{
+    int ret, i, total_len = 0;
+    char *str, *p;
+    for (i = 0; i < len; i++) {
+        total_len += strlen(array[i]) + 1;
+    }
+    p = str = g_malloc0(total_len);
+    for (i = 0; i < len; i++) {
+        int slen = strlen(array[i]) + 1;
+        pstrcpy(p, slen, array[i]);
+        p += slen;
+    }
+
+    ret = qemu_fdt_setprop(fdt, node_path, prop, str, total_len);
+    g_free(str);
+    return ret;
+}
+
 const void *qemu_fdt_getprop(void *fdt, const char *node_path,
                              const char *property, int *lenp, Error **errp)
 {
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Feb 11 17:20:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 17:20:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84032.157466 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAFdY-0004pM-GH; Thu, 11 Feb 2021 17:20:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84032.157466; Thu, 11 Feb 2021 17:20:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAFdY-0004pD-CI; Thu, 11 Feb 2021 17:20:16 +0000
Received: by outflank-mailman (input) for mailman id 84032;
 Thu, 11 Feb 2021 17:20:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kYUG=HN=linaro.org=alex.bennee@srs-us1.protection.inumbo.net>)
 id 1lAFdW-0003q3-Tv
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 17:20:14 +0000
Received: from mail-wm1-x32e.google.com (unknown [2a00:1450:4864:20::32e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7ba1dfdb-01d5-4529-a047-c4b4dec59cac;
 Thu, 11 Feb 2021 17:19:57 +0000 (UTC)
Received: by mail-wm1-x32e.google.com with SMTP id u14so6515498wmq.4
 for <xen-devel@lists.xenproject.org>; Thu, 11 Feb 2021 09:19:57 -0800 (PST)
Received: from zen.linaroharston ([51.148.130.216])
 by smtp.gmail.com with ESMTPSA id t2sm6150461wru.53.2021.02.11.09.19.48
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 11 Feb 2021 09:19:50 -0800 (PST)
Received: from zen.lan (localhost [127.0.0.1])
 by zen.linaroharston (Postfix) with ESMTP id B7B281FF92;
 Thu, 11 Feb 2021 17:19:47 +0000 (GMT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7ba1dfdb-01d5-4529-a047-c4b4dec59cac
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=uM4w7EGKPDq+ZH1NX/u4HRpRj7Wv3jOo9v6vSIC9Mdc=;
        b=vHLzZeY8yAlnzz3oPqOA73YhmPGd2nzHr7sXDHSuuhg8QK/Np6A21wjmNHmiYg/QC4
         hVuUw+OWoYaCJciF551k0XP7YWaD+JmYjDQ7DS0bRiLpJStdrxMJqdihp14lpTqvV3Mj
         jx4mj2E/FibSVPt+QgZM5WKATDnj8YWTzayC8fN37Dcs6Pm6NXwlVc9HG5gLe1OuqNuP
         +4LW89pCv8R8j8Oj+oxniURRjmuW6K/gnUcM3p5eIV4ldo1YNe5Q/ghbQGH4LvavOI7M
         aujY8A9uT460jUB3xa7cq7hTf8XonUWO3NBTiPHkyIpFo0Wh/2PHkcupgJleiiFHiHpU
         V2kw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=uM4w7EGKPDq+ZH1NX/u4HRpRj7Wv3jOo9v6vSIC9Mdc=;
        b=NCSbXOIfVkLyOOAPouH/+hIRW6S3JsgErf+gW8kJHtuDnGWWshImHfiJj4s5ElYS0w
         xp9vrKWg9G2jv7SxNz2EJu9i7n3e5RgpJiWilDBTDSo6ZEdoDB8WirJEWprU0uzNK8PC
         LkHsH/EvyCIU8GGo9tfIHELcYAbsDG/GXAJvI//A2hyypsyJa/VJ7dTfJLdWYFyOM6Fo
         3L2UuLn2AXmlUI9+ROalILT34P9mtetHFuWHVdr4TU+M05LTBQrDMOJuHvDjvG152c3E
         +GFOpqpFxn3mCLDEeotDaYw2hpIne4ySCigyxUokpVJyq2Gf2s514mo+ETX4aoCE3ZE4
         FoSg==
X-Gm-Message-State: AOAM5331nKSJmeozBYwh60zF/19dspx3fmq110JHoiIEP+kxQrxI8xqI
	EHDDXJ8P2cEfwkKwz+HShgRGQw==
X-Google-Smtp-Source: ABdhPJxzExjbkoFfYqwsuLpSdRjf+qkuuNTtEVUPGY9I1u8td6zf8I21bZRLpAq+wV1D9X5W43+cbQ==
X-Received: by 2002:a05:600c:4894:: with SMTP id j20mr6335347wmp.152.1613063996346;
        Thu, 11 Feb 2021 09:19:56 -0800 (PST)
From: =?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>
To: qemu-devel@nongnu.org
Cc: julien@xen.org,
	stefano.stabellini@linaro.org,
	stefano.stabellini@xilinx.com,
	andre.przywara@arm.com,
	stratos-dev@op-lists.linaro.org,
	xen-devel@lists.xenproject.org,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>
Subject: [PATCH  v2 6/7] docs: add some documentation for the guest-loader
Date: Thu, 11 Feb 2021 17:19:44 +0000
Message-Id: <20210211171945.18313-7-alex.bennee@linaro.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210211171945.18313-1-alex.bennee@linaro.org>
References: <20210211171945.18313-1-alex.bennee@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20201105175153.30489-7-alex.bennee@linaro.org>

---
v2
  - add docs and MAINTAINERS
---
 docs/system/guest-loader.rst | 54 ++++++++++++++++++++++++++++++++++++
 docs/system/index.rst        |  1 +
 MAINTAINERS                  |  1 +
 3 files changed, 56 insertions(+)
 create mode 100644 docs/system/guest-loader.rst

diff --git a/docs/system/guest-loader.rst b/docs/system/guest-loader.rst
new file mode 100644
index 0000000000..37d03cbd89
--- /dev/null
+++ b/docs/system/guest-loader.rst
@@ -0,0 +1,54 @@
+..
+   Copyright (c) 2020, Linaro
+
+Guest Loader
+------------
+
+The guest loader is similar to the `generic-loader`, although it is
+aimed at the particular use case of loading hypervisor guests. This is
+useful for debugging hypervisors without having to jump through the
+hoops of firmware and boot-loaders.
+
+The guest loader does two things:
+
+  - loads blobs (kernels and initial ram disks) into memory
+  - sets platform FDT data so hypervisors can find and boot them
+
+This is what is typically done by a boot-loader like grub using its
+multi-boot capability. A typical example would look like:
+
+.. parsed-literal::
+
+  |qemu_system| -kernel ~/xen.git/xen/xen \
+    -append "dom0_mem=1G,max:1G loglvl=all guest_loglvl=all" \
+    -device guest-loader,addr=0x42000000,kernel=Image,bootargs="root=/dev/sda2 ro console=hvc0 earlyprintk=xen" \
+    -device guest-loader,addr=0x47000000,initrd=rootfs.cpio
+
+In the above example the Xen hypervisor is loaded by the -kernel
+parameter and passed its boot arguments via -append. The Dom0 kernel
+and initrd are loaded into memory by the guest-loader devices. Each
+blob will get a `/chosen/module@<addr>` entry in the FDT to indicate
+its location and size. Additional information can be passed with
+further arguments.
+
+Currently the only supported machines which use FDT data to boot are
+the ARM and RISC-V `virt` machines.
+
+Arguments
+^^^^^^^^^
+
+The full syntax of the guest-loader is::
+
+  -device guest-loader,addr=<addr>[,kernel=<file>,[bootargs=<args>]][,initrd=<file>]
+
+``addr=<addr>``
+  This is mandatory and indicates the start address of the blob.
+
+``kernel|initrd=<file>``
+  Indicates the filename of the kernel or initrd blob. Both blobs will
+  have the "multiboot,module" compatibility string as well as
+  "multiboot,kernel" or "multiboot,ramdisk" as appropriate.
+
+``bootargs=<args>``
+  This is an optional field for kernel blobs which will pass the
+  command line via the `/chosen/module@<addr>/bootargs` node.
diff --git a/docs/system/index.rst b/docs/system/index.rst
index cee1c83540..6ad9c93806 100644
--- a/docs/system/index.rst
+++ b/docs/system/index.rst
@@ -26,6 +26,7 @@ Contents:
    ivshmem
    linuxboot
    generic-loader
+   guest-loader
    vnc-security
    tls
    gdb
diff --git a/MAINTAINERS b/MAINTAINERS
index 774b3ca7a5..853f174fcf 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1997,6 +1997,7 @@ Guest Loader
 M: Alex Bennée <alex.bennee@linaro.org>
 S: Maintained
 F: hw/core/guest-loader.c
+F: docs/system/guest-loader.rst
 
 Intel Hexadecimal Object File Loader
 M: Su Hang <suhang16@mails.ucas.ac.cn>
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Feb 11 17:20:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 17:20:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84033.157478 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAFdc-0004uM-Vo; Thu, 11 Feb 2021 17:20:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84033.157478; Thu, 11 Feb 2021 17:20:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAFdc-0004uE-Ru; Thu, 11 Feb 2021 17:20:20 +0000
Received: by outflank-mailman (input) for mailman id 84033;
 Thu, 11 Feb 2021 17:20:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kYUG=HN=linaro.org=alex.bennee@srs-us1.protection.inumbo.net>)
 id 1lAFdb-0003q3-UD
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 17:20:19 +0000
Received: from mail-wr1-x42c.google.com (unknown [2a00:1450:4864:20::42c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e5a878be-3f47-4681-9632-3f33215129f3;
 Thu, 11 Feb 2021 17:19:58 +0000 (UTC)
Received: by mail-wr1-x42c.google.com with SMTP id r21so4918949wrr.9
 for <xen-devel@lists.xenproject.org>; Thu, 11 Feb 2021 09:19:58 -0800 (PST)
Received: from zen.linaroharston ([51.148.130.216])
 by smtp.gmail.com with ESMTPSA id b19sm10182925wmj.22.2021.02.11.09.19.48
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 11 Feb 2021 09:19:50 -0800 (PST)
Received: from zen.lan (localhost [127.0.0.1])
 by zen.linaroharston (Postfix) with ESMTP id 82A141FF91;
 Thu, 11 Feb 2021 17:19:47 +0000 (GMT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e5a878be-3f47-4681-9632-3f33215129f3
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=mkOxD0K1nZ+fuOlse/PqHe9yZcMohC/Q3ekJsWwmjy4=;
        b=w8LhT4kIP365H63ZCZMUq9dei6vNCOnWNhpYq0chQl/sZTf3C8XFZ22WERgUcdpne6
         75slk0xUqUmSFNQRUgspnf1CdWev7Ci9B+HaSnRszTlUCjzBqRrJyrDajMnbZWqL6mGf
         YXKMe9U31rUSH/40au+pdVa39lmLB7UtUIc0ghB4NyMDi1hklqKqIWs5mTXUeRAubjr+
         9qvHINBtb6D9chDVWxBjexqf2hrKJ0/SlrUiBN0pr6mHCrVgjBQM5ytf7GYEVovJkAot
         6o1/RySMxNB7/73XxQMyz4ez26GQXtmsRC7QP/9FAMXRme+bBojQjJPxGldXt4yPngbj
         X8sA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=mkOxD0K1nZ+fuOlse/PqHe9yZcMohC/Q3ekJsWwmjy4=;
        b=jIQRvl5+sgrgEnV/8cCI9Ncm0kr4UCVT0529+BjCD/VZyqzp5248tJfppbPPI3k5Dz
         o75QEqsEPZCnbo3VeFvDhlwUj9BkusyQHNb8Xu5SKzxtv8XwXvJzsZ/FvDskbuDPNmBI
         FXLGyI8lpbY3FvJiZzjirOcA7a1EX4Mx/p6dLcHC7DeOqXnK54bxCVjifv3bIQXmBDZV
         hs/fBa/JykpJLSAYNYPu+OR2xHbDR+aeD/ZY4ppqYWDLXK8eU7HfVyCXlTOoxEpSl9u0
         lpr5vOG6gJnbHiTegRFEyCA2/j2sv3+q2Iau7P57nJ+lfl3L4gxzFGxmhKp18yPMoSqo
         tDQA==
X-Gm-Message-State: AOAM532E0DvENbkBK9K8xUb1lqGBKeugoRkW6e1mESN+YVWpAzs1MgKd
	i4AP5F5QAM0mng1SJFBdjJmOXPmsPFaHfLr7
X-Google-Smtp-Source: ABdhPJxGNE7xMzp72HGLUrXAgZu+lpGvk+gFxLvTaNvHrXzzraCXUBXJuwLIu8CbSY0xOmMelNvRYw==
X-Received: by 2002:adf:f6d0:: with SMTP id y16mr6751776wrp.351.1613063997176;
        Thu, 11 Feb 2021 09:19:57 -0800 (PST)
From: =?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>
To: qemu-devel@nongnu.org
Cc: julien@xen.org,
	stefano.stabellini@linaro.org,
	stefano.stabellini@xilinx.com,
	andre.przywara@arm.com,
	stratos-dev@op-lists.linaro.org,
	xen-devel@lists.xenproject.org,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	Alistair Francis <alistair@alistair23.me>
Subject: [PATCH  v2 5/7] docs: move generic-loader documentation into the main manual
Date: Thu, 11 Feb 2021 17:19:43 +0000
Message-Id: <20210211171945.18313-6-alex.bennee@linaro.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210211171945.18313-1-alex.bennee@linaro.org>
References: <20210211171945.18313-1-alex.bennee@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

We might as well surface this useful information in the manual so
users can find it easily. It is a fairly simple conversion to rst with
the only textual fixes being QemuOps to QemuOpts.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20201105175153.30489-6-alex.bennee@linaro.org>

---
v2
  - fix whitespace
  - update MAINTAINERS
---
 docs/generic-loader.txt        |  92 --------------------------
 docs/system/generic-loader.rst | 117 +++++++++++++++++++++++++++++++++
 docs/system/index.rst          |   1 +
 MAINTAINERS                    |   2 +-
 4 files changed, 119 insertions(+), 93 deletions(-)
 delete mode 100644 docs/generic-loader.txt
 create mode 100644 docs/system/generic-loader.rst

diff --git a/docs/generic-loader.txt b/docs/generic-loader.txt
deleted file mode 100644
index a9603a2af7..0000000000
--- a/docs/generic-loader.txt
+++ /dev/null
@@ -1,92 +0,0 @@
-Copyright (c) 2016 Xilinx Inc.
-
-This work is licensed under the terms of the GNU GPL, version 2 or later.  See
-the COPYING file in the top-level directory.
-
-
-The 'loader' device allows the user to load multiple images or values into
-QEMU at startup.
-
-Loading Data into Memory Values
--------------------------------
-The loader device allows memory values to be set from the command line. This
-can be done by following the syntax below:
-
-     -device loader,addr=<addr>,data=<data>,data-len=<data-len>
-                   [,data-be=<data-be>][,cpu-num=<cpu-num>]
-
-    <addr>      - The address to store the data in.
-    <data>      - The value to be written to the address. The maximum size of
-                  the data is 8 bytes.
-    <data-len>  - The length of the data in bytes. This argument must be
-                  included if the data argument is.
-    <data-be>   - Set to true if the data to be stored on the guest should be
-                  written as big endian data. The default is to write little
-                  endian data.
-    <cpu-num>   - The number of the CPU's address space where the data should
-                  be loaded. If not specified the address space of the first
-                  CPU is used.
-
-All values are parsed using the standard QemuOps parsing. This allows the user
-to specify any values in any format supported. By default the values
-will be parsed as decimal. To use hex values the user should prefix the number
-with a '0x'.
-
-An example of loading value 0x8000000e to address 0xfd1a0104 is:
-    -device loader,addr=0xfd1a0104,data=0x8000000e,data-len=4
-
-Setting a CPU's Program Counter
--------------------------------
-The loader device allows the CPU's PC to be set from the command line. This
-can be done by following the syntax below:
-
-     -device loader,addr=<addr>,cpu-num=<cpu-num>
-
-    <addr>      - The value to use as the CPU's PC.
-    <cpu-num>   - The number of the CPU whose PC should be set to the
-                  specified value.
-
-All values are parsed using the standard QemuOps parsing. This allows the user
-to specify any values in any format supported. By default the values
-will be parsed as decimal. To use hex values the user should prefix the number
-with a '0x'.
-
-An example of setting CPU 0's PC to 0x8000 is:
-    -device loader,addr=0x8000,cpu-num=0
-
-Loading Files
--------------
-The loader device also allows files to be loaded into memory. It can load ELF,
-U-Boot, and Intel HEX executable formats as well as raw images.  The syntax is
-shown below:
-
-    -device loader,file=<file>[,addr=<addr>][,cpu-num=<cpu-num>][,force-raw=<raw>]
-
-    <file>      - A file to be loaded into memory
-    <addr>      - The memory address where the file should be loaded. This is
-                  required for raw images and ignored for non-raw files.
-    <cpu-num>   - This specifies the CPU that should be used. This is an
-                  optional argument and will cause the CPU's PC to be set to
-                  the memory address where the raw file is loaded or the entry
-                  point specified in the executable format header. This option
-                  should only be used for the boot image.
-                  This will also cause the image to be written to the specified
-                  CPU's address space. If not specified, the default is CPU 0.
-    <force-raw> - Setting force-raw=on forces the file to be treated as a raw
-                  image.  This can be used to load supported executable formats
-                  as if they were raw.
-
-All values are parsed using the standard QemuOps parsing. This allows the user
-to specify any values in any format supported. By default the values
-will be parsed as decimal. To use hex values the user should prefix the number
-with a '0x'.
-
-An example of loading an ELF file which CPU0 will boot is shown below:
-    -device loader,file=./images/boot.elf,cpu-num=0
-
-Restrictions and ToDos
-----------------------
- - At the moment it is just assumed that if you specify a cpu-num then you
-   want to set the PC as well. This might not always be the case. In future
-   the internal state 'set_pc' (which exists in the generic loader now) should
-   be exposed to the user so that they can choose if the PC is set or not.
diff --git a/docs/system/generic-loader.rst b/docs/system/generic-loader.rst
new file mode 100644
index 0000000000..6bf8a4eb48
--- /dev/null
+++ b/docs/system/generic-loader.rst
@@ -0,0 +1,117 @@
+..
+   Copyright (c) 2016, Xilinx Inc.
+
+   This work is licensed under the terms of the GNU GPL, version 2 or
+   later.  See the COPYING file in the top-level directory.
+
+Generic Loader
+--------------
+
+The 'loader' device allows the user to load multiple images or values into
+QEMU at startup.
+
+Loading Data into Memory Values
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+The loader device allows memory values to be set from the command line. This
+can be done by following the syntax below::
+
+   -device loader,addr=<addr>,data=<data>,data-len=<data-len> \
+                   [,data-be=<data-be>][,cpu-num=<cpu-num>]
+
+``<addr>``
+  The address to store the data in.
+
+``<data>``
+  The value to be written to the address. The maximum size of the data
+  is 8 bytes.
+
+``<data-len>``
+  The length of the data in bytes. This argument must be included if
+  the data argument is.
+
+``<data-be>``
+  Set to true if the data to be stored on the guest should be written
+  as big endian data. The default is to write little endian data.
+
+``<cpu-num>``
+  The number of the CPU's address space where the data should be
+  loaded. If not specified the address space of the first CPU is used.
+
+All values are parsed using the standard QemuOpts parsing. This allows the user
+to specify any values in any format supported. By default the values
+will be parsed as decimal. To use hex values the user should prefix the number
+with a '0x'.
+
+An example of loading value 0x8000000e to address 0xfd1a0104 is::
+
+    -device loader,addr=0xfd1a0104,data=0x8000000e,data-len=4
+
+Setting a CPU's Program Counter
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The loader device allows the CPU's PC to be set from the command line. This
+can be done by following the syntax below::
+
+     -device loader,addr=<addr>,cpu-num=<cpu-num>
+
+``<addr>``
+  The value to use as the CPU's PC.
+
+``<cpu-num>``
+  The number of the CPU whose PC should be set to the specified value.
+
+All values are parsed using the standard QemuOpts parsing. This allows the user
+to specify any values in any format supported. By default the values
+will be parsed as decimal. To use hex values the user should prefix the number
+with a '0x'.
+
+An example of setting CPU 0's PC to 0x8000 is::
+
+    -device loader,addr=0x8000,cpu-num=0
+
+Loading Files
+^^^^^^^^^^^^^
+
+The loader device also allows files to be loaded into memory. It can load ELF,
+U-Boot, and Intel HEX executable formats as well as raw images.  The syntax is
+shown below::
+
+    -device loader,file=<file>[,addr=<addr>][,cpu-num=<cpu-num>][,force-raw=<raw>]
+
+``<file>``
+  A file to be loaded into memory
+
+``<addr>``
+  The memory address where the file should be loaded. This is required
+  for raw images and ignored for non-raw files.
+
+``<cpu-num>``
+  This specifies the CPU that should be used. This is an
+  optional argument and will cause the CPU's PC to be set to the
+  memory address where the raw file is loaded or the entry point
+  specified in the executable format header. This option should only
+  be used for the boot image. This will also cause the image to be
+  written to the specified CPU's address space. If not specified, the
+  default is CPU 0.
+
+``<force-raw>``
+  Setting force-raw=on forces the file to be treated as a raw
+  image. This can be used to load supported executable formats
+  as if they were raw.
+
+All values are parsed using the standard QemuOpts parsing. This allows the user
+to specify any values in any format supported. By default the values
+will be parsed as decimal. To use hex values the user should prefix the number
+with a '0x'.
+
+An example of loading an ELF file which CPU0 will boot is shown below::
+
+    -device loader,file=./images/boot.elf,cpu-num=0
+
+Restrictions and ToDos
+^^^^^^^^^^^^^^^^^^^^^^
+
+At the moment it is just assumed that if you specify a cpu-num then
+you want to set the PC as well. This might not always be the case. In
+future the internal state 'set_pc' (which exists in the generic loader
+now) should be exposed to the user so that they can choose if the PC
+is set or not.
+
+
diff --git a/docs/system/index.rst b/docs/system/index.rst
index 625b494372..cee1c83540 100644
--- a/docs/system/index.rst
+++ b/docs/system/index.rst
@@ -25,6 +25,7 @@ Contents:
    usb
    ivshmem
    linuxboot
+   generic-loader
    vnc-security
    tls
    gdb
diff --git a/MAINTAINERS b/MAINTAINERS
index ab6877dae6..774b3ca7a5 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1991,7 +1991,7 @@ M: Alistair Francis <alistair@alistair23.me>
 S: Maintained
 F: hw/core/generic-loader.c
 F: include/hw/core/generic-loader.h
-F: docs/generic-loader.txt
+F: docs/system/generic-loader.rst
 
 Guest Loader
 M: Alex Bennée <alex.bennee@linaro.org>
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Feb 11 17:20:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 17:20:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84034.157490 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAFdi-00050i-9i; Thu, 11 Feb 2021 17:20:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84034.157490; Thu, 11 Feb 2021 17:20:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAFdi-00050a-61; Thu, 11 Feb 2021 17:20:26 +0000
Received: by outflank-mailman (input) for mailman id 84034;
 Thu, 11 Feb 2021 17:20:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kYUG=HN=linaro.org=alex.bennee@srs-us1.protection.inumbo.net>)
 id 1lAFdg-0003q3-UV
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 17:20:24 +0000
Received: from mail-wr1-x433.google.com (unknown [2a00:1450:4864:20::433])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 71906b6c-277f-4780-be41-9c7585268461;
 Thu, 11 Feb 2021 17:20:00 +0000 (UTC)
Received: by mail-wr1-x433.google.com with SMTP id v14so4927494wro.7
 for <xen-devel@lists.xenproject.org>; Thu, 11 Feb 2021 09:20:00 -0800 (PST)
Received: from zen.linaroharston ([51.148.130.216])
 by smtp.gmail.com with ESMTPSA id z15sm5496898wrs.25.2021.02.11.09.19.50
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 11 Feb 2021 09:19:58 -0800 (PST)
Received: from zen.lan (localhost [127.0.0.1])
 by zen.linaroharston (Postfix) with ESMTP id E48451FF93;
 Thu, 11 Feb 2021 17:19:47 +0000 (GMT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 71906b6c-277f-4780-be41-9c7585268461
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=97euSTerjez5Tce8hHGcMNp4cX8yrwBdXJFrXOC2JXc=;
        b=xbD5H3NYhzAJ7qvapOjyovl4bg42BELbyrNRG6zw1q3UPUyzd8jX7jexxdcmxLVxTE
         rv1vIaa1IaulbpkTLDoYY+P8dN2ozs6OZq+itR/iusbN4d3LFziXoMfWzdnCH5SXEArn
         w5sulFaqxDBgw8+ZlxFQYCh/Vb9XFA7i04sms+qLj1NzzQu9wvKkWKWIgxDCpeO8yk5g
         s8F3Hs71+5tVvQvX8ionKzRg9bouu6aWKo9DXTGFQunviL69RSe57aiUtkXl/ZEaEw9C
         wzmpyMj337gzqdX8P9GcEhngmj3qSngmBF7Xzb+FfAJ87MCAWpjgWHrXO2WveLj3LAA+
         FMeg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=97euSTerjez5Tce8hHGcMNp4cX8yrwBdXJFrXOC2JXc=;
        b=hBO6qc8GP+c3Fc0MmSrprn6nY0ysqeJojTpMSLzjeSpqpeyJzL1c2IXWRtxIx4dTWT
         xJGXw1VQ4ZbUcQ/PPaAD/uFAJ9eTbabsS+X/DT6ZpAan0D3NNR3ESp5kl6BfoWrGxZV7
         o4v9lY5Q21E0md+iq2JU0kv68Dqlh5T0HAV2Yt8A2PRXtznyfwMrVSwygb3z1t7Hfnma
         TlMSJkPOKJgN1efIZoBe4a2KtCr6DVoyLur8SLPG9MFabf4Awta7q+XoWNdJUyD8K6gE
         mV30/TvB+XjJ18r+rRRFMqpEzKdeQZVeOdToLhFZcUmtX7Ite3U1jBPTGHqL9P6XxApy
         /dPw==
X-Gm-Message-State: AOAM530ajrKC4+PuL8FwLhhxuX0dvuPjNSZQVx+YueDmbHeS9CvWMWQI
	gYE32OYK23YUdr4QDi6ISV8K3Q==
X-Google-Smtp-Source: ABdhPJyzRo8kcYPUXSrP5mf6sQeEw14/2FdcWAVQ8gyif+dPFUJ6YCsxJGyHCsTfJsdULjTJavJKJw==
X-Received: by 2002:a5d:44cf:: with SMTP id z15mr1759324wrr.191.1613063999472;
        Thu, 11 Feb 2021 09:19:59 -0800 (PST)
From: =?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>
To: qemu-devel@nongnu.org
Cc: julien@xen.org,
	stefano.stabellini@linaro.org,
	stefano.stabellini@xilinx.com,
	andre.przywara@arm.com,
	stratos-dev@op-lists.linaro.org,
	xen-devel@lists.xenproject.org,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	Cleber Rosa <crosa@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
	Wainer dos Santos Moschetta <wainersm@redhat.com>
Subject: [PATCH  v2 7/7] tests/avocado: add boot_xen tests
Date: Thu, 11 Feb 2021 17:19:45 +0000
Message-Id: <20210211171945.18313-8-alex.bennee@linaro.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210211171945.18313-1-alex.bennee@linaro.org>
References: <20210211171945.18313-1-alex.bennee@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

These tests make sure we can boot the Xen hypervisor with a Dom0
kernel using the guest-loader. We currently have to use a kernel I
built myself because there are issues using the Debian kernel images.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
---
 MAINTAINERS                  |   1 +
 tests/acceptance/boot_xen.py | 117 +++++++++++++++++++++++++++++++++++
 2 files changed, 118 insertions(+)
 create mode 100644 tests/acceptance/boot_xen.py

diff --git a/MAINTAINERS b/MAINTAINERS
index 853f174fcf..537ca7a6f3 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1998,6 +1998,7 @@ M: Alex Bennée <alex.bennee@linaro.org>
 S: Maintained
 F: hw/core/guest-loader.c
 F: docs/system/guest-loader.rst
+F: tests/acceptance/boot_xen.py
 
 Intel Hexadecimal Object File Loader
 M: Su Hang <suhang16@mails.ucas.ac.cn>
diff --git a/tests/acceptance/boot_xen.py b/tests/acceptance/boot_xen.py
new file mode 100644
index 0000000000..8c7e091d40
--- /dev/null
+++ b/tests/acceptance/boot_xen.py
@@ -0,0 +1,117 @@
+# Functional test that boots a Xen hypervisor with a Dom0 kernel and
+# checks the console output is vaguely sane.
+#
+# Copyright (c) 2020 Linaro
+#
+# Author:
+#  Alex Bennée <alex.bennee@linaro.org>
+#
+# SPDX-License-Identifier: GPL-2.0-or-later
+#
+# This work is licensed under the terms of the GNU GPL, version 2 or
+# later.  See the COPYING file in the top-level directory.
+
+import os
+
+from avocado import skipIf
+from avocado_qemu import wait_for_console_pattern
+from boot_linux_console import LinuxKernelTest
+
+
+class BootXenBase(LinuxKernelTest):
+    """
+    Boots a Xen hypervisor with a Linux Dom0 kernel.
+    """
+
+    timeout = 90
+    XEN_COMMON_COMMAND_LINE = 'dom0_mem=128M loglvl=all guest_loglvl=all'
+
+    def fetch_guest_kernel(self):
+        # Use a self-built kernel, as the Debian kernel images
+        # currently have boot issues under the guest-loader
+        kernel_url = ('https://fileserver.linaro.org/'
+                      's/JSsewXGZ6mqxPr5/download?path=%2F&files='
+                      'linux-5.9.9-arm64-ajb')
+        kernel_sha1 = '4f92bc4b9f88d5ab792fa7a43a68555d344e1b83'
+        kernel_path = self.fetch_asset(kernel_url,
+                                       asset_hash=kernel_sha1)
+
+        return kernel_path
+
+    def launch_xen(self, xen_path):
+        """
+        Launch Xen with a dom0 guest kernel
+        """
+        self.log.info("launch with xen_path: %s", xen_path)
+        kernel_path = self.fetch_guest_kernel()
+
+        self.vm.set_console()
+
+        xen_command_line = self.XEN_COMMON_COMMAND_LINE
+        self.vm.add_args('-machine', 'virtualization=on',
+                         '-cpu', 'cortex-a57',
+                         '-m', '768',
+                         '-kernel', xen_path,
+                         '-append', xen_command_line,
+                         '-device',
+                         "guest-loader,addr=0x47000000,kernel=%s,bootargs=console=hvc0"
+                         % (kernel_path))
+
+        self.vm.launch()
+
+        console_pattern = 'VFS: Cannot open root device'
+        wait_for_console_pattern(self, console_pattern, "Panic on CPU 0:")
+
+
+class BootXen(BootXenBase):
+
+    @skipIf(os.getenv('GITLAB_CI'), 'Running on GitLab')
+    def test_arm64_xen_411_and_dom0(self):
+        """
+        :avocado: tags=arch:aarch64
+        :avocado: tags=accel:tcg
+        :avocado: tags=cpu:cortex-a57
+        :avocado: tags=machine:virt
+        """
+        xen_url = ('https://deb.debian.org/debian/'
+                   'pool/main/x/xen/'
+                   'xen-hypervisor-4.11-arm64_4.11.4+37-g3263f257ca-1_arm64.deb')
+        xen_sha1 = '034e634d4416adbad1212d59b62bccdcda63e62a'
+        xen_deb = self.fetch_asset(xen_url, asset_hash=xen_sha1)
+        xen_path = self.extract_from_deb(xen_deb, "/boot/xen-4.11-arm64")
+
+        self.launch_xen(xen_path)
+
+    @skipIf(os.getenv('GITLAB_CI'), 'Running on GitLab')
+    def test_arm64_xen_414_and_dom0(self):
+        """
+        :avocado: tags=arch:aarch64
+        :avocado: tags=accel:tcg
+        :avocado: tags=cpu:cortex-a57
+        :avocado: tags=machine:virt
+        """
+        xen_url = ('https://deb.debian.org/debian/'
+                   'pool/main/x/xen/'
+                   'xen-hypervisor-4.14-arm64_4.14.0+80-gd101b417b7-1_arm64.deb')
+        xen_sha1 = 'b9d209dd689ed2b393e625303a225badefec1160'
+        xen_deb = self.fetch_asset(xen_url, asset_hash=xen_sha1)
+        xen_path = self.extract_from_deb(xen_deb, "/boot/xen-4.14-arm64")
+
+        self.launch_xen(xen_path)
+
+    @skipIf(os.getenv('GITLAB_CI'), 'Running on GitLab')
+    def test_arm64_xen_415_and_dom0(self):
+        """
+        :avocado: tags=arch:aarch64
+        :avocado: tags=accel:tcg
+        :avocado: tags=cpu:cortex-a57
+        :avocado: tags=machine:virt
+        """
+
+        xen_url = ('https://fileserver.linaro.org/'
+                   's/JSsewXGZ6mqxPr5/download'
+                   '?path=%2F&files=xen-upstream-4.15-unstable.deb')
+        xen_sha1 = 'fc191172b85cf355abb95d275a24cc0f6d6579d8'
+        xen_deb = self.fetch_asset(xen_url, asset_hash=xen_sha1)
+        xen_path = self.extract_from_deb(xen_deb, "/boot/xen-4.15-unstable")
+
+        self.launch_xen(xen_path)
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Feb 11 17:22:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 17:22:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84035.157502 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAFfb-0005Pk-PW; Thu, 11 Feb 2021 17:22:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84035.157502; Thu, 11 Feb 2021 17:22:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAFfb-0005Pd-MG; Thu, 11 Feb 2021 17:22:23 +0000
Received: by outflank-mailman (input) for mailman id 84035;
 Thu, 11 Feb 2021 17:22:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAFfa-0005PT-D9; Thu, 11 Feb 2021 17:22:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAFfa-0000aN-3b; Thu, 11 Feb 2021 17:22:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAFfZ-0003JB-OW; Thu, 11 Feb 2021 17:22:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lAFfZ-0003v0-O2; Thu, 11 Feb 2021 17:22:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=R/Otw89A1doVdWzqdp7LV72o5fjA0ivOOW3I2o3MReg=; b=vj9i83LOPVaawSWNgyZ4fDuh6X
	obDapRMODvjRPpVcMbELy+paXh8NgOWZYBn/WTNFvdaT6NCfOLXzdZeE5g6BaEcBPmzUH2ha3JJ60
	C+44CLF1hnQ+zweY83QAYS1MZI95wQDbLkC2P1dkmyXdfMutHG4kMyhcRIoujIc4+YUE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159199-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 159199: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:guest-start:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:guest-start:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:guest-start:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-armhf-armhf-libvirt:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-linus:test-armhf-armhf-xl:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
    linux-linus:test-arm64-arm64-xl-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=e0756cfc7d7cd08c98a53b6009c091a3f6a50be6
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 11 Feb 2021 17:22:21 +0000

flight 159199 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159199/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl          10 host-ping-check-xen      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-armhf-armhf-xl-arndale  14 guest-start              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck 14 guest-start            fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152332
 test-armhf-armhf-xl          14 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1  14 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     14 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm      11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-credit2  11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                e0756cfc7d7cd08c98a53b6009c091a3f6a50be6
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  194 days
Failing since        152366  2020-08-01 20:49:34 Z  193 days  339 attempts
Testing same since   159199  2021-02-10 11:26:56 Z    1 days    1 attempts

------------------------------------------------------------
4560 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1029181 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Feb 11 17:41:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 17:41:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84049.157524 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAFxY-0007OL-K5; Thu, 11 Feb 2021 17:40:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84049.157524; Thu, 11 Feb 2021 17:40:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAFxY-0007OE-H5; Thu, 11 Feb 2021 17:40:56 +0000
Received: by outflank-mailman (input) for mailman id 84049;
 Thu, 11 Feb 2021 17:40:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f8zj=HN=linaro.org=peter.maydell@srs-us1.protection.inumbo.net>)
 id 1lAFxX-0007O9-Ao
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 17:40:55 +0000
Received: from mail-pj1-x1033.google.com (unknown [2607:f8b0:4864:20::1033])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 773c9eaa-1cd1-446c-8257-e2ef31715d58;
 Thu, 11 Feb 2021 17:40:54 +0000 (UTC)
Received: by mail-pj1-x1033.google.com with SMTP id q72so3803345pjq.2
 for <xen-devel@lists.xenproject.org>; Thu, 11 Feb 2021 09:40:54 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 773c9eaa-1cd1-446c-8257-e2ef31715d58
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc:content-transfer-encoding;
        bh=QziDmZr8TTpasbvFZVUT/329P3XT1UCmwBzrVbDzVt8=;
        b=CZZ0dXb/akrEVv2gQQECboWALMHG7RAyWsSQM3mxEpIvbiek4NeIVFxUsJbwfNUS1F
         5l2Hwc3f005aESd3X699lGHhAOb7vfkMWnvZw0TzSLlkXPN/mwcP06Ate3WnBqutLlGw
         1vn4omxqbInwoBzJgbYQLZzODZnev3BBGI9as4f0skV7jhURhiiS4e+3Hb90+uogCy9J
         aDQ71xGirH1r0z2TRzWtPDbX7Me+ZLFWNdBoDZpNfRsHhZI1CEw5JOewM68MuwzUM1X8
         zfhcbbe2jGOmNIGQnCbus3G1gzKLviS5KedNNawoXnCKJUGm0kMO1ZiwAC91PUKILivP
         sGyw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc:content-transfer-encoding;
        bh=QziDmZr8TTpasbvFZVUT/329P3XT1UCmwBzrVbDzVt8=;
        b=T+vz03TIbxV47RL0QsUKRjcLYubSQru4N0wsTTd/ncVlCCp//94LW4LScH2qo1hA6o
         uUXDSaXxZzsy5NsUS6IRKzm8EGDq3eX195BBb7allMjfr0KCztUCd+V/9jST8TliAd1A
         yFonxCAfCr8g5oolWBKcYEc+jTHUWhn6bauB1mDI/Eac0/UMBqg+5ofIO6taVe3V7j+W
         gZgJaOr+m1hucXT0h5KPunmhi6MN2T56AHbStxpQnCA0cFP2EKObAozBlaL7yhSHjAm5
         sId1wdIumBZxDUJWa7JHzjlPprp9wBAvOwm/mCLPi/0J3ARaxABwo4bGN+tUiqXnV4aD
         szcQ==
X-Gm-Message-State: AOAM5322CoyGKAMWdWi59hQKQaDfG5cTQ8Aw6ohR7x87ICcf1M0H5Njc
	PB8GwD2bYD5XIRrugqCwwyA/qZ6cyutlM7Nno1M64A==
X-Google-Smtp-Source: ABdhPJwnwxrzPvItcE7Xa8pNbvWQIHYnBMQTywNiPe0cxc7ZYs5qbKGkU8WIgEdLbvZvJ4SMYkxbpVcNIY7Jt3hhMWg=
X-Received: by 2002:a17:902:e989:b029:e2:8c9d:78ba with SMTP id
 f9-20020a170902e989b02900e28c9d78bamr8505067plb.81.1613065253570; Thu, 11 Feb
 2021 09:40:53 -0800 (PST)
MIME-Version: 1.0
References: <20210211171945.18313-1-alex.bennee@linaro.org> <20210211171945.18313-2-alex.bennee@linaro.org>
In-Reply-To: <20210211171945.18313-2-alex.bennee@linaro.org>
From: Peter Maydell <peter.maydell@linaro.org>
Date: Thu, 11 Feb 2021 17:40:42 +0000
Message-ID: <CAFEAcA82Fv34Ri=s97rx8hUPrqMeL4xOS325DbY1fhoWmiE+9A@mail.gmail.com>
Subject: Re: [PATCH v2 1/7] hw/board: promote fdt from ARM VirtMachineState to MachineState
To: =?UTF-8?B?QWxleCBCZW5uw6ll?= <alex.bennee@linaro.org>
Cc: QEMU Developers <qemu-devel@nongnu.org>, julien@xen.org, 
	Stefano Stabellini <stefano.stabellini@linaro.org>, stefano.stabellini@xilinx.com, 
	Andre Przywara <andre.przywara@arm.com>, stratos-dev@op-lists.linaro.org, 
	"open list:X86" <xen-devel@lists.xenproject.org>, 
	=?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@redhat.com>, 
	Eduardo Habkost <ehabkost@redhat.com>, Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, 
	"open list:Virt" <qemu-arm@nongnu.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

On Thu, 11 Feb 2021 at 17:19, Alex Bennée <alex.bennee@linaro.org> wrote:
>
> The use of FDTs is quite common across our various platforms. To
> allow the generic loader to tweak it we need to make it available in
> the generic state. This creates the field and migrates the initial
> user to use the generic field. Other boards will be updated in later
> patches.

This commit message says this is being done for the generic loader,
but I deduce from the rest of the series that you aren't using
the generic loader (cf discussion on a previous version of the series)...

thanks
-- PMM
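[Editor's note] For context, the refactor under review promotes the flattened device tree pointer from the ARM board-specific state into the generic MachineState so that non-board code can reach it. A minimal sketch of that pattern follows; the structs and the machine_get_fdt() helper are illustrative stand-ins, not QEMU's real definitions:

```c
#include <assert.h>
#include <stddef.h>

/* Stand-ins for QEMU's structures (illustration only, not real QEMU code). */
typedef struct MachineState {
    void *fdt;               /* promoted here so generic code can use it */
} MachineState;

typedef struct VirtMachineState {
    MachineState parent;     /* board state embeds the generic state */
    /* void *fdt;  -- formerly lived here, visible only to board code */
} VirtMachineState;

/* Generic code can now reach the fdt without knowing the board type. */
static void *machine_get_fdt(MachineState *ms)
{
    return ms->fdt;
}
```

The point of the move is exactly this last function: once the field lives in the generic state, a generic component (such as a loader) can tweak the fdt through MachineState alone.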


From xen-devel-bounces@lists.xenproject.org Thu Feb 11 20:47:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 20:47:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84062.157560 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAIrj-0007HM-F3; Thu, 11 Feb 2021 20:47:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84062.157560; Thu, 11 Feb 2021 20:47:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAIrj-0007HF-C1; Thu, 11 Feb 2021 20:47:07 +0000
Received: by outflank-mailman (input) for mailman id 84062;
 Thu, 11 Feb 2021 20:47:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZS85=HN=amazon.de=prvs=669fce943=nmanthey@srs-us1.protection.inumbo.net>)
 id 1lAIrh-0007HA-CL
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 20:47:05 +0000
Received: from smtp-fw-4101.amazon.com (unknown [72.21.198.25])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 74546eaa-d682-41f2-9d06-66b45f8f8a19;
 Thu, 11 Feb 2021 20:47:04 +0000 (UTC)
Received: from iad12-co-svc-p1-lb1-vlan2.amazon.com (HELO
 email-inbound-relay-2b-c300ac87.us-west-2.amazon.com) ([10.43.8.2])
 by smtp-border-fw-out-4101.iad4.amazon.com with ESMTP;
 11 Feb 2021 20:46:56 +0000
Received: from EX13D02EUB001.ant.amazon.com
 (pdx1-ws-svc-p6-lb9-vlan2.pdx.amazon.com [10.236.137.194])
 by email-inbound-relay-2b-c300ac87.us-west-2.amazon.com (Postfix) with ESMTPS
 id A2ABCA21E9; Thu, 11 Feb 2021 20:46:55 +0000 (UTC)
Received: from u6fc700a6f3c650.ant.amazon.com (10.43.160.146) by
 EX13D02EUB001.ant.amazon.com (10.43.166.150) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Thu, 11 Feb 2021 20:46:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 74546eaa-d682-41f2-9d06-66b45f8f8a19
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.de; i=@amazon.de; q=dns/txt; s=amazon201209;
  t=1613076424; x=1644612424;
  h=to:cc:references:from:message-id:date:mime-version:
   in-reply-to:content-transfer-encoding:subject;
  bh=qKTds+BDTK7/bjIL7gaXgqoG6yKodT76dFOIrx+0QP0=;
  b=c3GBimdUc2fVi+7ty4C886WeqpHUtwznlWlg51QsGhl7+p8icYt9FA0Y
   B13ZdKG6T2C9qdgilbqsWBUynIPsSedP7PwlVQzwEnM3Fq8eZgtTIp9oq
   f9ngS66Lhqk0c8m/ve71rQTXqD5PZp0dUN1B+8EVXdQG9dG8+wzYg6qMw
   I=;
X-IronPort-AV: E=Sophos;i="5.81,171,1610409600"; 
   d="scan'208";a="82252665"
Subject: Re: [PATCH HVM v2 1/1] hvm: refactor set param
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Ian Jackson <iwj@xenproject.org>,
	<xen-devel@lists.xenproject.org>
References: <20210205203905.8824-1-nmanthey@amazon.de>
 <edf1cd78-2192-2679-9a34-804c5d7b75c5@suse.com>
 <ba146cd6-fd5a-78d8-40bc-59885d265f5f@amazon.de>
 <b8529792-3d99-2e0d-7ebe-31c2c406145f@suse.com>
 <9f753ee9-73c5-aa2c-3c68-eed7e0c2608b@amazon.de>
 <a52cb2ac-fa85-73cd-0c53-3ee002d6b3ea@suse.com>
 <adee7548-0a60-d7d1-731f-474a585edf6c@amazon.de>
 <1a50a8c3-44f5-9ea9-7ff1-0d716bc05ebd@suse.com>
From: Norbert Manthey <nmanthey@amazon.de>
Autocrypt: addr=nmanthey@amazon.de; prefer-encrypt=mutual; keydata=
 xsFNBFoJQc0BEADM8Z7hB7AnW6ErbSMsYkKh4HLAPfoM+wt7Fd7axHurcOgFJEBOY2gz0isR
 /EDiGxYyTgxt5PZHJIfra0OqXRbWuLltbjhJACbu35eaAo8UM4/awgtYx3O1UCbIlvHGsYDg
 kXjF8bBrVjPu0+g55XizX6ot/YPAgmWTdH8qXoLYVZVWJilKlTqpYEVvarSn/BVgCbIsQIps
 K93sOTN9eJKDSqHvbkgKl9XG3WsZ703431egIpIZpfN0zZtzumdZONb7LiodcFHJ717vvd89
 3Hv2bYv8QLSfYsZcSnyU0NVzbPhb1WtaduwXwNmnX1qHJuExzr8EnRT1pyhVSqouxt+xkKbV
 QD9r+cWLChumg3g9bDLzyrOTlEfAUNxIqbzSt03CRR43dWgfgGiLDcrqC2b1QR886WDpz4ok
 xX3fdLaqN492s/3c59qCGNG30ebAj8AbV+v551rsfEba+IWTvvoQnbstc6vKJCc2uG8rom5o
 eHG/bP1Ug2ht6m/0uWRyFq9C27fpU9+FDhb0ZsT4UwOCbthe35/wBZUg72zDpT/h5lm64G6C
 0TRqYRgYcltlP705BJafsymmAXOZ1nTCuXnYAB9G9LzZcKKq5q0rP0kp7KRDbniirCUfp7jK
 VpPCOUEc3tS1RdCCSeWNuVgzLnJdR8W2h9StuEbb7hW4aFhwRQARAQABzSROb3JiZXJ0IE1h
 bnRoZXkgPG5tYW50aGV5QGFtYXpvbi5kZT7CwX0EEwEIACcFAloJQc0CGyMFCQlmAYAFCwkI
 BwIGFQgJCgsCBBYCAwECHgECF4AACgkQZ+8yS8zN62ajmQ/6AlChoY5UlnUaH/jgcabyAfUC
 XayHgCcpL1SoMKvc2rCA8PF0fza3Ep2Sw0idLqC/LyAYbI6gMYavSZsLcsvY6KYAZKeaEriG
 7R6cSdrbmRcKpPjwvv4iY6G0DBTeaqfNjGe1ECY8u522LprDQVquysJIf3YaEyxoK/cLSb0c
 kjzpqI1P9Vh+8BQb5H9gWpakbhFIwbRGHdAF1roT7tezmEshFS0IURJ2ZFEI+ZgWgtl1MBwN
 sBt65im7x5gDo25h8A5xC9gLXTc4j3tk+3huaZjUJ9mCbtI12djVtspjNvDyUPQ5Mxw2Jwar
 C3/ZC+Nkb+VlymmErpnEUZNltcq8gsdYND4TlNbZ2JhD0ibiYFQPkyuCVUiVtimXfh6po9Yt
 OkE0DIgEngxMYfTTx01Zf6iwrbi49eHd/eQQw3zG5nn+yZsEG8UcP1SCrUma8p93KiKOedoL
 n43kTg4RscdZMjj4v6JkISBcGTR4uotMYP4M0zwjklnFXPmrZ6/E5huzUpH9B7ZIe/SUu8Ur
 xww/4dN6rfqbNzMxmya8VGlEQZgUMWcck+cPrRLB09ZOk4zq9i/yaHDEZA1HNOfQ9UCevXV5
 7seXSX7PCY6WDAdsT3+FuaoQ7UoWN3rdpb+064QKZ0FsHeGzUd7MZtlgU4EKrh25mTSNZYRs
 nTz2zT/J33fOwU0EWglBzQEQAKioD1gSELj3Y47NE11oPkzWWdxKZdVr8B8VMu6nVAAGFRSf
 Dms4ZmwGY27skMmOH2srnZyTfm9FaTKr8RI+71Fh9nfB9PMmwzA7OIY9nD73/HqPywzTTleG
 MlALmnuY6xFRSDmqmvxDHgWyzB4TgPWt8+hW3+TJKCx2RgLAdSuULZla4lia+NlS8WNRUDGK
 sFJCCB3BW5I/cocfpBEUqLbbmnPuD9UKpEnFcYWD9YaDNcBTjSc7iDsvtpdrBXg5VETOz/TQ
 /CmVs9h/5zug8O4bXxHEEJpCAxs4cGKxowBqx/XJfkwdWeo/LdaeR+LRbXvq4A32HSkyj9sV
 vygwt2OFEk493JGik8qtAA/oPvuqVPJGacxmZ7zKR12c0mnKCHiexFJzFbC7MSiUhhe8nNiM
 p6Sl6EZmsTUXhV2bd2M12Bqcss3TTJ1AcW04T4HYHVCSxwl0dVfcf3TIaH0BSPiwFxz0FjMk
 10umoRvUhYYoYpPFCz8dujXBlfB8q2tnHltEfoi/EIptt1BMNzTYkHKArj8Fwjf6K+nQ3a8p
 1cWfkYpA5bRqbhbplzpa0u1Ex0hZk6pka0qcVgqmH31O2OcSsqeKfUfHkzj3Q6dmuwm1je/f
 HWH9N1gDPEp1RB5bIxPnOG1Z4SNl9oVQJhc4qoJiqbvkciivYcH7u2CBkboFABEBAAHCwWUE
 GAEIAA8FAloJQc0CGwwFCQlmAYAACgkQZ+8yS8zN62YU9Q//WTnN28aBX1EhDidVho80Ql2b
 tV1cDRh/vWTcM4qoM8vzW4+F/Ive6wDVAJ7zwAv8F8WPzy+acxtHLkyYk14M6VZ1eSy0kV0+
 RZQdQ+nPtlb1MoDKw2N5zhvs8A+WD8xjDIA9i21hQ/BNILUBINuYKyR19448/41szmYIEhuJ
 R2fHoLzNdXNKWQnN3/NPTuvpjcrkXKJm2k32qfiys9KBcZX8/GpuMCc9hMuymzOr+YlXo4z4
 1xarEJoPOQOXnrmxN4Y30/qmf70KHLZ0GQccIm/o/XSOvNGluaYv0ZVJXHoCnYvTbi0eYvz5
 OfOcndqLOfboq9kVHC6Yye1DLNGjIVoShJGSsphxOx2ryGjHwhzqDrLiRkV82gh6dUHKxBWd
 DXfirT8a4Gz/tY9PMxan67aSxQ5ONpXe7g7FrfrAMe91XRTf50G3rHb8+AqZfxZJFrBn+06i
 p1cthq7rJSlYCqna2FedTUT+tK1hU9O0aK4ZYYcRzuTRxjd4gKAWDzJ1F/MQ12ftrfCAvs7U
 sVbXv2TndGIleMnheYv1pIrXEm0+sdz5v91l2/TmvkyyWT8s2ksuZis9luh+OubeLxHq090C
 hfavI9WxhitfYVsfo2kr3EotGG1MnW+cOkCIX68w+3ZS4nixZyJ/TBa7RcTDNr+gjbiGMtd9
 pEddsOqYwOs=
Message-ID: <d2a5c3a5-d163-3ee9-50ff-0083bd52c374@amazon.de>
Date: Thu, 11 Feb 2021 21:46:47 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <1a50a8c3-44f5-9ea9-7ff1-0d716bc05ebd@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Language: en-US
X-Originating-IP: [10.43.160.146]
X-ClientProxiedBy: EX13D22UWC004.ant.amazon.com (10.43.162.198) To
 EX13D02EUB001.ant.amazon.com (10.43.166.150)
Precedence: Bulk
Content-Transfer-Encoding: 8bit

On 2/9/21 3:21 PM, Jan Beulich wrote:
> On 09.02.2021 14:56, Norbert Manthey wrote:
>> On 2/9/21 2:45 PM, Jan Beulich wrote:
>>> On 09.02.2021 14:41, Norbert Manthey wrote:
>>>> On 2/9/21 10:40 AM, Jan Beulich wrote:
>>>>> On 08.02.2021 20:47, Norbert Manthey wrote:
>>>>>> On 2/8/21 3:21 PM, Jan Beulich wrote:
>>>>>>> On 05.02.2021 21:39, Norbert Manthey wrote:
>>>>>>>> @@ -4108,6 +4108,13 @@ static int hvm_allow_set_param(struct domain *d,
>>>>>>>>      if ( rc )
>>>>>>>>          return rc;
>>>>>>>> +    if ( index >= HVM_NR_PARAMS )
>>>>>>>> +        return -EINVAL;
>>>>>>>> +
>>>>>>>> +    /* Make sure we evaluate permissions before loading data of domains. */
>>>>>>>> +    block_speculation();
>>>>>>>> +
>>>>>>>> +    value = d->arch.hvm.params[index];
>>>>>>>>      switch ( index )
>>>>>>>>      {
>>>>>>>>      /* The following parameters should only be changed once. */
>>>>>>> I don't see the need for the heavier block_speculation() here;
>>>>>>> afaict array_access_nospec() should do fine. The switch() in
>>>>>>> context above as well as the switch() further down in the
>>>>>>> function don't have any speculation susceptible code.
>>>>>> The reason to block speculation instead of just using the hardened index
>>>>>> access is to not allow to speculatively load data from another domain.
>>>>> Okay, looks like I got mislead by the added bounds check. Why
>>>>> do you add that, when the sole caller already has one? It'll
>>>>> suffice since you move the array access past the barrier,
>>>>> won't it?
>>>> I can drop that bound check again. This was added to make sure other
>>>> callers would be save as well. Thinking about this a little more, the
>>>> check could actually be moved into the hvm_allow_set_param function,
>>>> above the first index access in that function. Are there good reasons to
>>>> not move the index check into the allow function?
>>> I guess I'm confused: We're talking about dropping the check
>>> you add to hvm_allow_set_param() and you suggest to "move" it
>>> right there?
>> Yes. I can either just no introduce that check in that function (which
>> is what you suggested). As an alternative, to have all checks in one
>> function, I proposed to instead move it into hvm_allow_set_param.
> Oh, I see. What I'd like to be the result of your re-arrangement is
> symmetry between "get" and "set" where possible in this regard, and
> asymmetry having a clear reason. Seeing that hvm_{get,set}_param()
> are the non-static functions here, I think it would be quite
> desirable for the bounds checking to live there. Since
> hvm_allow_{get,set}_param() are specifically helpers of the two
> earlier named functions, checks consistently living there would as
> well be fine with me.

I agree with the symmetry for get and set. This is what I'd aim for:

 1. hvmop_set_param and hvmop_get_param (static) both check for the
index, and afterwards use the is_hvm_domain(d) function with its barrier
 2. hvm_set_param (static) and hvm_get_param both call their allow
helper function, evaluate the return code, and afterwards block speculation.
 2.1. hvm_get_param is declared in a public header, and cannot be turned
into a static function, hence needs the index check
 2.2. hvm_set_param is only called from hvmop_set_param, and index is
already checked there, hence, do not add check
 3. hvm_allow_set_param (static) and hvm_allow_get_param (static) do not
validate the index parameter
 3.1. hvm_allow_set_param blocks speculative execution with a barrier
after domain permissions have been evaluated, before accessing the
parameters of the domain. hvm_allow_get_param does not access the params
member of the domain, and hence does not require additional protection.

To simplify the code, I propose to furthermore make the hvmop_set_param
function static as well.

Please let me know whether the above would be acceptable.

Best,
Norbert

>
> Jan




Amazon Development Center Germany GmbH
Krausenstr. 38
10117 Berlin
Geschaeftsfuehrung: Christian Schlaeger, Jonathan Weiss
Eingetragen am Amtsgericht Charlottenburg unter HRB 149173 B
Sitz: Berlin
Ust-ID: DE 289 237 879

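[Editor's note] The Spectre-v1 hardening pattern negotiated in this thread (validate the index, then ensure the array load cannot happen under a mis-speculated check) can be sketched as below. The stubs are illustrative stand-ins, not Xen's real block_speculation()/array_access_nospec() implementations, and HVM_NR_PARAMS and get_param() are simplified placeholders:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

#define HVM_NR_PARAMS 64  /* hypothetical size, stands in for Xen's value */

/* Stub: in Xen this emits a real speculation barrier (e.g. an lfence);
 * a no-op here keeps the sketch self-contained. */
static inline void block_speculation(void) { }

/* Branchless index clamp in the spirit of Xen's nospec helpers: an
 * out-of-range index becomes 0 even under mis-speculation, because the
 * mask is computed arithmetically rather than via a branch. */
static inline size_t array_index_nospec(size_t index, size_t size)
{
    /* (index - size) is negative as a signed value iff index < size;
     * the arithmetic right shift smears the sign bit into a full mask. */
    size_t mask = (size_t)((intptr_t)(index - size) >>
                           (sizeof(size_t) * 8 - 1));
    return index & mask;
}

static uint64_t params[HVM_NR_PARAMS];

/* The ordering debated above: validate the index, then place the barrier
 * so the bounds/permission checks architecturally resolve before any
 * per-domain data is loaded from the array. */
static int get_param(unsigned int index, uint64_t *value)
{
    if ( index >= HVM_NR_PARAMS )
        return -22;                      /* -EINVAL in the real code */

    block_speculation();                 /* checks resolve before the load */
    *value = params[array_index_nospec(index, HVM_NR_PARAMS)];
    return 0;
}
```

Jan's lighter alternative would drop the barrier and rely on the masked access alone; Norbert's point is that the full barrier also stops the load itself from executing speculatively past a mis-predicted permission check.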



From xen-devel-bounces@lists.xenproject.org Thu Feb 11 20:51:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 20:51:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84063.157572 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAIvX-0008CH-0G; Thu, 11 Feb 2021 20:51:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84063.157572; Thu, 11 Feb 2021 20:51:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAIvW-0008CA-TS; Thu, 11 Feb 2021 20:51:02 +0000
Received: by outflank-mailman (input) for mailman id 84063;
 Thu, 11 Feb 2021 20:51:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAIvV-0008C2-TR; Thu, 11 Feb 2021 20:51:01 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAIvV-00044z-MO; Thu, 11 Feb 2021 20:51:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAIvV-0005Qh-E6; Thu, 11 Feb 2021 20:51:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lAIvV-0001Bz-Dg; Thu, 11 Feb 2021 20:51:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=qIskreAGBCFlnDDWDEYiKbm3ptovDurk2J3yrB8vfzg=; b=2jz23p0KJ0yPr2Lwm+0QQZj6ZM
	0u9fJSueiytgz/kVUw96iOSfNIrDetuNdSxc9L6YCCJORyGlSS2hJ+9oGez2HCYJIkQsSGIzw5WK5
	Jddf1+nYBblLl8s9O1Oy3VzXq7i9e2v+CGBikNI2k5iScs5Kyrzvb96jzZRYmeg+pqxc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159258-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 159258: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f3e1eb2f0234c955243a915d69ebd84f26eec130
X-Osstest-Versions-That:
    xen=01456785ce093d95c6a8515e6b8feeb39e1820b8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 11 Feb 2021 20:51:01 +0000

flight 159258 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159258/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f3e1eb2f0234c955243a915d69ebd84f26eec130
baseline version:
 xen                  01456785ce093d95c6a8515e6b8feeb39e1820b8

Last test of basis   159220  2021-02-10 18:00:50 Z    1 days
Testing same since   159258  2021-02-11 17:00:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   01456785ce..f3e1eb2f02  f3e1eb2f0234c955243a915d69ebd84f26eec130 -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Feb 11 20:55:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 20:55:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84068.157588 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAIzr-0008N5-KW; Thu, 11 Feb 2021 20:55:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84068.157588; Thu, 11 Feb 2021 20:55:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAIzr-0008My-HK; Thu, 11 Feb 2021 20:55:31 +0000
Received: by outflank-mailman (input) for mailman id 84068;
 Thu, 11 Feb 2021 20:55:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Fhqb=HN=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lAIzq-0008Mt-CD
 for xen-devel@lists.xenproject.org; Thu, 11 Feb 2021 20:55:30 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9389cdcb-e5f0-4b25-a3f8-649bae3af2cc;
 Thu, 11 Feb 2021 20:55:29 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id B707464DE2;
 Thu, 11 Feb 2021 20:55:28 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9389cdcb-e5f0-4b25-a3f8-649bae3af2cc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1613076929;
	bh=oUTzgSo/90aQbk722svSv2dI4MFCYozgAb8Qn5BYOmI=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=CtorfvTNjwp+1TIpYiE8xC7WWl1EezGAbbPGgz5cb4kKIlfdgdg4Kg7Ss6UHZkVVA
	 YPz0G/JU5zteLOIFXdiC3kvxXR7gmvYKIqARqUu800ECSIja41nquvIayPUqPmZkH9
	 MH1SRLu8hNNjX3Kj+O7Iyzyzk5RYz/EH4kC6OpS+KbhJfbdQQhPmiyxpyB9uAW9Hpp
	 0FD9Q8xyIwN4n0jsRkcGtK4JlC2QN/tRXVV08pWcLScyLp8g3MbPiv6uQ8weplYazc
	 jwFC8+++rgMRvfrOMSan+s5ZT19evJkih/pjy4bq8gwNeVenHPxuCeK46Nlj9AWI+k
	 0EVAlXVV0+dXA==
Date: Thu, 11 Feb 2021 12:55:20 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Rahul Singh <Rahul.Singh@arm.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    "lucmiccio@gmail.com" <lucmiccio@gmail.com>, 
    xen-devel <xen-devel@lists.xenproject.org>, 
    Bertrand Marquis <Bertrand.Marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: Re: [PATCH v2] xen/arm: fix gnttab_need_iommu_mapping
In-Reply-To: <489c1b89-67f0-5d47-d527-3ea580b7cc43@xen.org>
Message-ID: <alpine.DEB.2.21.2102111253060.9128@sstabellini-ThinkPad-T480s>
References: <20210208184932.23468-1-sstabellini@kernel.org> <B96B5E21-0600-4664-899D-D38A18DE7A8C@arm.com> <alpine.DEB.2.21.2102091226560.8948@sstabellini-ThinkPad-T480s> <EFFD35EA-378B-4C5C-8485-7EA5265E89E4@arm.com> <4e4f7d25-6f5f-1016-b1c9-7aa902d637b8@xen.org>
 <ECC82E19-3504-4E0E-B3EA-D0E46DD842C6@arm.com> <c573b3a0-186d-626e-6670-f8fc28285e3d@xen.org> <BFC5858A-3631-48E1-AB87-40EECF95FA66@arm.com> <489c1b89-67f0-5d47-d527-3ea580b7cc43@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-1276794850-1613076130=:9128"
Content-ID: <alpine.DEB.2.21.2102111245400.9128@sstabellini-ThinkPad-T480s>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1276794850-1613076130=:9128
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.21.2102111245401.9128@sstabellini-ThinkPad-T480s>

On Thu, 11 Feb 2021, Julien Grall wrote:
> On 11/02/2021 13:20, Rahul Singh wrote:
> > > On 10 Feb 2021, at 7:52 pm, Julien Grall <julien@xen.org> wrote:
> > > On 10/02/2021 18:08, Rahul Singh wrote:
> > > > Hello Julien,
> > > > > On 10 Feb 2021, at 5:34 pm, Julien Grall <julien@xen.org> wrote:
> > > > > On 10/02/2021 15:06, Rahul Singh wrote:
> > > > > > > On 9 Feb 2021, at 8:36 pm, Stefano Stabellini
> > > > > > > <sstabellini@kernel.org> wrote:
> > > > > > > 
> > > > > > > On Tue, 9 Feb 2021, Rahul Singh wrote:
> > > > > > > > > On 8 Feb 2021, at 6:49 pm, Stefano Stabellini
> > > > > > > > > <sstabellini@kernel.org> wrote:
> > > > > > > > > 
> > > > > > > > > Commit 91d4eca7add broke gnttab_need_iommu_mapping on ARM.
> > > > > > > > > The offending chunk is:
> > > > > > > > > 
> > > > > > > > > #define gnttab_need_iommu_mapping(d)                    \
> > > > > > > > > -    (is_domain_direct_mapped(d) && need_iommu(d))
> > > > > > > > > +    (is_domain_direct_mapped(d) && need_iommu_pt_sync(d))
> > > > > > > > > 
> > > > > > > > > On ARM we need gnttab_need_iommu_mapping to be true for dom0
> > > > > > > > > when it is
> > > > > > > > > directly mapped and IOMMU is enabled for the domain, like the
> > > > > > > > > old check
> > > > > > > > > did, but the new check is always false.
> > > > > > > > > 
> > > > > > > > > In fact, need_iommu_pt_sync is defined as
> > > > > > > > > dom_iommu(d)->need_sync and
> > > > > > > > > need_sync is set as:
> > > > > > > > > 
> > > > > > > > >    if ( !is_hardware_domain(d) || iommu_hwdom_strict )
> > > > > > > > >        hd->need_sync = !iommu_use_hap_pt(d);
> > > > > > > > > 
> > > > > > > > > iommu_use_hap_pt(d) means that the page-table used by the
> > > > > > > > > IOMMU is the
> > > > > > > > > P2M. It is true on ARM. need_sync means that you have a
> > > > > > > > > separate IOMMU
> > > > > > > > > page-table and it needs to be updated for every change.
> > > > > > > > > need_sync is set
> > > > > > > > > to false on ARM. Hence, gnttab_need_iommu_mapping(d) is false
> > > > > > > > > too,
> > > > > > > > > which is wrong.
> > > > > > > > > 
> > > > > > > > > As a consequence, when using PV network from a domU on a
> > > > > > > > > system where
> > > > > > > > > IOMMU is on from Dom0, I get:
> > > > > > > > > 
> > > > > > > > > (XEN) smmu: /smmu@fd800000: Unhandled context fault:
> > > > > > > > > fsr=0x402, iova=0x8424cb148, fsynr=0xb0001, cb=0
> > > > > > > > > [   68.290307] macb ff0e0000.ethernet eth0: DMA bus error:
> > > > > > > > > HRESP not OK
> > > > > > > > > 
> > > > > > > > > The fix is to go back to something along the lines of the old
> > > > > > > > > implementation of gnttab_need_iommu_mapping.
> > > > > > > > > 
> > > > > > > > > Signed-off-by: Stefano Stabellini
> > > > > > > > > <stefano.stabellini@xilinx.com>
> > > > > > > > > Fixes: 91d4eca7add
> > > > > > > > > Backport: 4.12+
> > > > > > > > > 
> > > > > > > > > ---
> > > > > > > > > 
> > > > > > > > > Given the severity of the bug, I would like to request this
> > > > > > > > > patch to be
> > > > > > > > > backported to 4.12 too, even if 4.12 is security-fixes only
> > > > > > > > > since Oct
> > > > > > > > > 2020.
> > > > > > > > > 
> > > > > > > > > For the 4.12 backport, we can use iommu_enabled() instead of
> > > > > > > > > is_iommu_enabled() in the implementation of
> > > > > > > > > gnttab_need_iommu_mapping.
> > > > > > > > > 
> > > > > > > > > Changes in v2:
> > > > > > > > > - improve commit message
> > > > > > > > > - add is_iommu_enabled(d) to the check
> > > > > > > > > ---
> > > > > > > > > xen/include/asm-arm/grant_table.h | 2 +-
> > > > > > > > > 1 file changed, 1 insertion(+), 1 deletion(-)
> > > > > > > > > 
> > > > > > > > > diff --git a/xen/include/asm-arm/grant_table.h
> > > > > > > > > b/xen/include/asm-arm/grant_table.h
> > > > > > > > > index 6f585b1538..0ce77f9a1c 100644
> > > > > > > > > --- a/xen/include/asm-arm/grant_table.h
> > > > > > > > > +++ b/xen/include/asm-arm/grant_table.h
> > > > > > > > > @@ -89,7 +89,7 @@ int replace_grant_host_mapping(unsigned long
> > > > > > > > > gpaddr, mfn_t mfn,
> > > > > > > > >     (((i) >= nr_status_frames(t)) ? INVALID_GFN :
> > > > > > > > > (t)->arch.status_gfn[i])
> > > > > > > > > 
> > > > > > > > > #define gnttab_need_iommu_mapping(d)                    \
> > > > > > > > > -    (is_domain_direct_mapped(d) && need_iommu_pt_sync(d))
> > > > > > > > > +    (is_domain_direct_mapped(d) && is_iommu_enabled(d))
> > > > > > > > > 
> > > > > > > > > #endif /* __ASM_GRANT_TABLE_H__ */
> > > > > > > > 
> > > > > > > > I tested the patch, and while creating the guest I observed the
> > > > > > > > warning below from Linux for the block device:
> > > > > > > > https://elixir.bootlin.com/linux/v4.3/source/drivers/block/xen-blkback/xenbus.c#L258
> > > > > > > 
> > > > > > > So you are creating a guest with "xl create" in dom0 and you see
> > > > > > > the
> > > > > > > warnings below printed by the Dom0 kernel? I imagine the domU has
> > > > > > > a
> > > > > > > virtual "disk" of some sort.
> > > > > > Yes, you are right: I am trying to create the guest with "xl create".
> > > > > > Before that, I created a logical volume and tried to attach the
> > > > > > logical volume block to the domain with "xl block-attach". I observed
> > > > > > this error with the "xl block-attach" command.
> > > > > > This issue occurs after applying this patch: it introduces calls to
> > > > > > iommu_legacy_{,un}map() to map the grant pages in the IOMMU, which
> > > > > > touches the page-tables. I am not sure, but from what I observed,
> > > > > > something is written incorrectly when iommu_unmap() unmaps the
> > > > > > pages, and that is when the issue appears.
> > > > > 
> > > > > Can you clarify what you mean by "written wrong"? What sort of error
> > > > > do you see in the iommu_unmap()?
> > > > I might be wrong, but my understanding is that on ARM we always share
> > > > the P2M between the CPU and the IOMMU, and that map_grant_ref() is
> > > > written so that iommu_legacy_{,un}map() is only called when the P2M is
> > > > not shared.
> > > 
> > > map_grant_ref() will call the IOMMU if gnttab_need_iommu_mapping() returns
> > > true. I don't really see where this is assuming the P2M is not shared.
> > > 
> > > In fact, on x86, this will always be false for HVM domain (they support
> > > both shared and separate page-tables).
> > > 
> > > > As we are sharing the P2M, calling iommu_map() will overwrite the
> > > > existing GFN -> MFN entry (for Dom0, the GFN is the same as the MFN),
> > > > and calling iommu_unmap() will then remove that (GFN -> MFN) entry
> > > > from the page-table.
> > > AFAIK, there should be nothing mapped at that GFN because the page belongs
> > > to the guest. At worse, we would overwrite a mapping that is the same.
> > Sorry, I should have mentioned before: the backend/frontend is dom0 in
> > this case, and the GFN is mapped. I am trying to attach the block device
> > to Dom0.
> 
> Ah, your log makes a lot more sense now. Thank you for the clarification!
> 
> So yes, I agree that iommu_{,un}map() will do the wrong thing if the frontend
> and backend are in the same domain.
> 
> I don't know what the state is in Linux, but from the Xen PoV it should be
> possible to have the backend/frontend in the same domain.
> 
> I think we want to ignore the IOMMU mapping request when the domain is the
> same. Can you try this small untested patch:

Given that all the pages owned by the domain should already be in the
pagetable shared between the MMU and the IOMMU, there is no need to
create a second mapping. In fact, it is guaranteed to overlap with an
existing mapping.

In theory, if guest_physmap_add_entry returned -EEXIST when a mapping
identical to the one we want to add is already in the pagetable, we
would see -EEXIST being returned in this instance.

Based on that, I cannot think of unwanted side-effects for this patch.
It looks OK to me.

Given that it solves a different issue, I think it should be a separate
patch from [1]. Julien, are you OK with that or would you rather merge
the two?

[1] https://marc.info/?l=xen-devel&m=161281017406202


> diff --git a/xen/drivers/passthrough/arm/iommu_helpers.c
> b/xen/drivers/passthrough/arm/iommu_helpers.c
> index a36e2b8e6c42..7bad13593146 100644
> --- a/xen/drivers/passthrough/arm/iommu_helpers.c
> +++ b/xen/drivers/passthrough/arm/iommu_helpers.c
> @@ -53,6 +53,9 @@ int __must_check arm_iommu_map_page(struct domain *d, dfn_t
> dfn, mfn_t mfn,
> 
>      t = (flags & IOMMUF_writable) ? p2m_iommu_map_rw : p2m_iommu_map_ro;
> 
> +    if ( page_get_owner(mfn_to_page(mfn)) == d )
> +        return 0;
> +
>      /*
>       * The function guest_physmap_add_entry replaces the current mapping
>       * if there is already one...
> @@ -71,6 +74,9 @@ int __must_check arm_iommu_unmap_page(struct domain *d,
> dfn_t dfn,
>      if ( !is_domain_direct_mapped(d) )
>          return -EINVAL;
> 
> +    if ( page_get_owner(mfn_to_page(mfn)) == d )
> +        return 0;
> +
>      return guest_physmap_remove_page(d, _gfn(dfn_x(dfn)), _mfn(dfn_x(dfn)),
> 0);
>  }


From xen-devel-bounces@lists.xenproject.org Thu Feb 11 21:48:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 21:48:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84072.157600 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAJp8-0004gX-So; Thu, 11 Feb 2021 21:48:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84072.157600; Thu, 11 Feb 2021 21:48:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAJp8-0004gQ-P5; Thu, 11 Feb 2021 21:48:30 +0000
Received: by outflank-mailman (input) for mailman id 84072;
 Thu, 11 Feb 2021 21:48:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAJp7-0004gI-9p; Thu, 11 Feb 2021 21:48:29 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAJp7-00051k-1c; Thu, 11 Feb 2021 21:48:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAJp6-00013K-QX; Thu, 11 Feb 2021 21:48:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lAJp6-0007cy-Q2; Thu, 11 Feb 2021 21:48:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=XKcW6JJ6mlA2z/jJekgGL/bI94ZNTnFJB7NV4nz/f5o=; b=UJaFpGhHYSTTvmYOe64titxNH1
	A7qCBIMvTbnpvtLNLaEEIMowxt5lDIOCuN41npBcsdbMaKCFpufBtD+A6gg+bvF0upYsxVj/I35pC
	n13i4CwHm1rOM4dDXQjt4jf6Xxc+PUpD5Jr2cTFCupNmyntnlcxwWoDM8LpKMVCfGeKg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [linux-5.4 bisection] complete test-arm64-arm64-xl-credit2
Message-Id: <E1lAJp6-0007cy-Q2@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 11 Feb 2021 21:48:28 +0000

branch xen-unstable
xenbranch xen-unstable
job test-arm64-arm64-xl-credit2
testid guest-start

Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
  Bug introduced:  a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Bug not present: acc402fa5bf502d471d50e3d495379f093a7f9e4
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/159265/


  commit a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Author: David Woodhouse <dwmw@amazon.co.uk>
  Date:   Wed Jan 13 13:26:02 2021 +0000
  
      xen: Fix event channel callback via INTX/GSI
      
      [ Upstream commit 3499ba8198cad47b731792e5e56b9ec2a78a83a2 ]
      
      For a while, event channel notification via the PCI platform device
      has been broken, because we attempt to communicate with xenstore before
      we even have notifications working, with the xs_reset_watches() call
      in xs_init().
      
      We tend to get away with this on Xen versions below 4.0 because we avoid
      calling xs_reset_watches() anyway, because xenstore might not cope with
      reading a non-existent key. And newer Xen *does* have the vector
      callback support, so we rarely fall back to INTX/GSI delivery.
      
      To fix it, clean up a bit of the mess of xs_init() and xenbus_probe()
      startup. Call xs_init() directly from xenbus_init() only in the !XS_HVM
      case, deferring it to be called from xenbus_probe() in the XS_HVM case
      instead.
      
      Then fix up the invocation of xenbus_probe() to happen either from its
      device_initcall if the callback is available early enough, or when the
      callback is finally set up. This means that the hack of calling
      xenbus_probe() from a workqueue after the first interrupt, or directly
      from the PCI platform device setup, is no longer needed.
      
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Link: https://lore.kernel.org/r/20210113132606.422794-2-dwmw2@infradead.org
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/linux-5.4/test-arm64-arm64-xl-credit2.guest-start.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/linux-5.4/test-arm64-arm64-xl-credit2.guest-start --summary-out=tmp/159265.bisection-summary --basis-template=158387 --blessings=real,real-bisect,real-retry linux-5.4 test-arm64-arm64-xl-credit2 guest-start
Searching for failure / basis pass:
 159200 fail [host=laxton1] / 158681 [host=rochester0] 158624 [host=rochester1] 158616 ok.
Failure / basis pass flights: 159200 / 158616
Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 5e1942063dc3633f7a127aa2b159c13507580d21 c530a75c1e6a472b0eb9558310b518f0dfcd8860 472276f59bba2b22bb882c5c6f5479754e68d467 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7
Basis pass 09f983f0c7fc0db79a5f6c883ec3510d424c369c c530a75c1e6a472b0eb9558310b518f0dfcd8860 96a9acfc527964dc5ab7298862a0cd8aa5fffc6a 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 452ddbe3592b141b05a7e0676f09c8ae07f98fdd
Generating revisions with ./adhoc-revtuple-generator  git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git#09f983f0c7fc0db79a5f6c883ec3510d424c369c-5e1942063dc3633f7a127aa2b159c13507580d21 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#96a9acfc527964dc5ab7298862a0cd8aa5fffc6a-472276f59bba2b22bb882c5c6f5479754e68d467 git://xenbits.xen.org/qemu-xen.git#7ea4288\
 95af2840d85c524f0bd11a38aac308308-7ea428895af2840d85c524f0bd11a38aac308308 git://xenbits.xen.org/osstest/seabios.git#ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e-ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e git://xenbits.xen.org/xen.git#452ddbe3592b141b05a7e0676f09c8ae07f98fdd-ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7
Loaded 15001 nodes in revision graph
Searching for test results:
 158609 [host=laxton0]
 158616 pass 09f983f0c7fc0db79a5f6c883ec3510d424c369c c530a75c1e6a472b0eb9558310b518f0dfcd8860 96a9acfc527964dc5ab7298862a0cd8aa5fffc6a 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 452ddbe3592b141b05a7e0676f09c8ae07f98fdd
 158624 [host=rochester1]
 158681 [host=rochester0]
 158707 fail irrelevant
 158716 fail irrelevant
 158748 fail 131f8d8a889a5ca66a835eea82bba043ac91a7cf c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e f8708b0ed6d549d1d29b8b5cc287f1f2b642bc63
 158765 fail 131f8d8a889a5ca66a835eea82bba043ac91a7cf c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 6677b5a3577c16501fbc51a3341446905bd21c38
 158796 fail irrelevant
 158818 fail irrelevant
 158841 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158863 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158881 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158929 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea56ebf67dd55483105aa9f9996a48213e78337e 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158962 fail irrelevant
 159023 fail irrelevant
 159129 fail irrelevant
 159200 fail 5e1942063dc3633f7a127aa2b159c13507580d21 c530a75c1e6a472b0eb9558310b518f0dfcd8860 472276f59bba2b22bb882c5c6f5479754e68d467 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7
 159247 pass 09f983f0c7fc0db79a5f6c883ec3510d424c369c c530a75c1e6a472b0eb9558310b518f0dfcd8860 96a9acfc527964dc5ab7298862a0cd8aa5fffc6a 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 452ddbe3592b141b05a7e0676f09c8ae07f98fdd
 159249 fail 5e1942063dc3633f7a127aa2b159c13507580d21 c530a75c1e6a472b0eb9558310b518f0dfcd8860 472276f59bba2b22bb882c5c6f5479754e68d467 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7
 159251 fail 38f35023fd301abeb01cfd81e73caa2e4e7ec0b1 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159252 pass d8a487e673abf46c69c901bb25da54e9bc7ba45e c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159253 pass 5a1d7bb7d333849eb7d3ab5ebfbf9805b2cd46c9 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159254 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159255 pass 9cec63a3aacbcaee8d09aecac2ca2f8820efcc70 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159256 pass 8ab3478335ad8fc08f14ec73251b084fe02b3ebb c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159257 pass acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159259 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159261 pass acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159262 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159263 pass acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159265 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
Searching for interesting versions
 Result found: flight 158616 (pass), for basis pass
 Result found: flight 159200 (fail), for basis failure (at ancestor ~185)
 Repro found: flight 159247 (pass), for basis pass
 Repro found: flight 159249 (fail), for basis failure
 0 revisions at acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
No revisions left to test, checking graph state.
 Result found: flight 159257 (pass), for last pass
 Result found: flight 159259 (fail), for first failure
 Repro found: flight 159261 (pass), for last pass
 Repro found: flight 159262 (fail), for first failure
 Repro found: flight 159263 (pass), for last pass
 Repro found: flight 159265 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
  Bug introduced:  a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Bug not present: acc402fa5bf502d471d50e3d495379f093a7f9e4
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/159265/


  commit a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Author: David Woodhouse <dwmw@amazon.co.uk>
  Date:   Wed Jan 13 13:26:02 2021 +0000
  
      xen: Fix event channel callback via INTX/GSI
      
      [ Upstream commit 3499ba8198cad47b731792e5e56b9ec2a78a83a2 ]
      
      For a while, event channel notification via the PCI platform device
      has been broken, because we attempt to communicate with xenstore before
      we even have notifications working, with the xs_reset_watches() call
      in xs_init().
      
      We tend to get away with this on Xen versions below 4.0 because we avoid
      calling xs_reset_watches() anyway, because xenstore might not cope with
      reading a non-existent key. And newer Xen *does* have the vector
      callback support, so we rarely fall back to INTX/GSI delivery.
      
      To fix it, clean up a bit of the mess of xs_init() and xenbus_probe()
      startup. Call xs_init() directly from xenbus_init() only in the !XS_HVM
      case, deferring it to be called from xenbus_probe() in the XS_HVM case
      instead.
      
      Then fix up the invocation of xenbus_probe() to happen either from its
      device_initcall if the callback is available early enough, or when the
      callback is finally set up. This means that the hack of calling
      xenbus_probe() from a workqueue after the first interrupt, or directly
      from the PCI platform device setup, is no longer needed.
      
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Link: https://lore.kernel.org/r/20210113132606.422794-2-dwmw2@infradead.org
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>

pnmtopng: 141 colors found
Revision graph left in /home/logs/results/bisect/linux-5.4/test-arm64-arm64-xl-credit2.guest-start.{dot,ps,png,html,svg}.
----------------------------------------
159265: tolerable ALL FAIL

flight 159265 linux-5.4 real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/159265/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-arm64-arm64-xl-credit2  14 guest-start             fail baseline untested


jobs:
 test-arm64-arm64-xl-credit2                                  fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Thu Feb 11 22:04:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 22:04:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84078.157617 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAK4n-0006bC-A4; Thu, 11 Feb 2021 22:04:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84078.157617; Thu, 11 Feb 2021 22:04:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAK4n-0006b5-71; Thu, 11 Feb 2021 22:04:41 +0000
Received: by outflank-mailman (input) for mailman id 84078;
 Thu, 11 Feb 2021 22:04:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAK4m-0006ax-QG; Thu, 11 Feb 2021 22:04:40 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAK4m-0005Ip-Hk; Thu, 11 Feb 2021 22:04:40 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAK4m-0001ig-4X; Thu, 11 Feb 2021 22:04:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lAK4m-0001u3-41; Thu, 11 Feb 2021 22:04:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=DomIgvt0Q0yy3ArsFHTYzvFeoF97JSdMyTiwB10Mq4o=; b=y7KF9+SAksnTJAtguiZuhkPMNh
	lO6hsHBr597lup+3CPIw3lIFnO4NpfS46aK4W+4ZrFl78LABQZHfjpB6xaQnhRgRqP7TvnVd2XcWt
	GnyP/qdFpcPdHIXVJTbbKsUbwiMheNk84xiJxMEIe37oMSglWkuuOH71I1/Ibsfm5OkY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159202-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 159202: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    xen-unstable:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=687121f8a0e7c1ea1c4fa3d056637e5819342f14
X-Osstest-Versions-That:
    xen=ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 11 Feb 2021 22:04:40 +0000

flight 159202 xen-unstable real [real]
flight 159264 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/159202/
http://logs.test-lab.xenproject.org/osstest/logs/159264/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 159036
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 159036

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 159036
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 159036
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 159036
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 159036
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 159036
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 159036
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 159036
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 159036
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 159036
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 159036
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  687121f8a0e7c1ea1c4fa3d056637e5819342f14
baseline version:
 xen                  ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7

Last test of basis   159036  2021-02-05 08:46:54 Z    6 days
Failing since        159077  2021-02-06 11:11:30 Z    5 days    4 attempts
Testing same since   159202  2021-02-10 11:26:56 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Michał Leszczyński <michal.leszczynski@cert.pl>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Tamas K Lengyel <tamas.lengyel@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 435 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Feb 11 23:16:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 Feb 2021 23:16:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84087.157645 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lALCO-0004kR-1c; Thu, 11 Feb 2021 23:16:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84087.157645; Thu, 11 Feb 2021 23:16:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lALCN-0004kK-Uk; Thu, 11 Feb 2021 23:16:35 +0000
Received: by outflank-mailman (input) for mailman id 84087;
 Thu, 11 Feb 2021 23:16:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lALCM-0004kC-Il; Thu, 11 Feb 2021 23:16:34 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lALCM-0006SO-9m; Thu, 11 Feb 2021 23:16:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lALCM-0005QZ-3X; Thu, 11 Feb 2021 23:16:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lALCM-0006y1-2z; Thu, 11 Feb 2021 23:16:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=SlK+VQshIQdBZBdF06Sa42gUhKbzggs0p78CQ6D3ruI=; b=B8koYppn3Ogou9YcPtuAnolZCv
	ciOKLEBZwlK2p/O/KcRlyD07MyeW8lPLYDSVJGZfmVuO1G4JK2ar2uexFptJbmFmoXB9p972T/oqA
	EzameXshhb0xWqQPuaaxL4bN6LaZJsl/WyavzPaKH3akJa9S/d3sbxO989I7w7Kxrmik=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159266-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 159266: trouble: blocked/broken/pass
X-Osstest-Failures:
    xen-unstable-smoke:build-arm64-xsm:<job status>:broken:regression
    xen-unstable-smoke:build-arm64-xsm:host-install(4):broken:regression
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=d66cf56fa26a052bce2f8c746dc0dbac9061b593
X-Osstest-Versions-That:
    xen=f3e1eb2f0234c955243a915d69ebd84f26eec130
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 11 Feb 2021 23:16:34 +0000

flight 159266 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159266/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-xsm                 <job status>                 broken
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 159258

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  d66cf56fa26a052bce2f8c746dc0dbac9061b593
baseline version:
 xen                  f3e1eb2f0234c955243a915d69ebd84f26eec130

Last test of basis   159258  2021-02-11 17:00:27 Z    0 days
Testing same since   159266  2021-02-11 21:00:28 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              broken  
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-arm64-xsm broken
broken-step build-arm64-xsm host-install(4)

Not pushing.

------------------------------------------------------------
commit d66cf56fa26a052bce2f8c746dc0dbac9061b593
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Feb 11 13:25:58 2021 +0000

    automation: Add Ubuntu Focal builds
    
    Logical continuation of c/s eb52442d7f "automation: Add Ubuntu:focal
    container".
    
    No further changes required.  Everything builds fine.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 9ca7632bc45cf045fa78a9d7e1275af55240b243
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Feb 10 13:51:21 2021 +0000

    tools/libxl: Document where the magic MAC numbers come from
    
    Matches the comment in the xl-network-configuration manpage.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri Feb 12 00:11:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 Feb 2021 00:11:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84095.157672 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAM2g-0002RU-Js; Fri, 12 Feb 2021 00:10:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84095.157672; Fri, 12 Feb 2021 00:10:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAM2g-0002RN-Gi; Fri, 12 Feb 2021 00:10:38 +0000
Received: by outflank-mailman (input) for mailman id 84095;
 Fri, 12 Feb 2021 00:10:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAM2e-0002RF-Nw; Fri, 12 Feb 2021 00:10:36 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAM2e-0007vW-9U; Fri, 12 Feb 2021 00:10:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAM2d-000802-Vr; Fri, 12 Feb 2021 00:10:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lAM2d-0002tY-VQ; Fri, 12 Feb 2021 00:10:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=kZQGa21+YcXovjtvFVHApJxwWbjOcQ9GqUgZ2uTm7SY=; b=oW/tXWL2r9fK8yr7NalDKmN8d/
	ELmQaZW9BjbdX0a7baf1uEpwtntUX+EMUowtoB5e2vQxbKuLnsTlZBiT3Vjc+VQXezkv8HjHU21H5
	6vbR1CF1Ezq1SBnYE/4X11x//9XuFqiOGMYWm2ivqpiJjxa24WF42ltOfN5wIkq56Uuk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159204-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.11-testing test] 159204: regressions - FAIL
X-Osstest-Failures:
    xen-4.11-testing:test-amd64-i386-xl-qemut-ws16-amd64:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemuu-ovmf-amd64:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemuu-win7-amd64:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemut-debianhvm-amd64:<job status>:broken:regression
    xen-4.11-testing:test-amd64-amd64-qemuu-freebsd11-amd64:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl:<job status>:broken:regression
    xen-4.11-testing:test-amd64-amd64-xl-qemut-ws16-amd64:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-xsm:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-freebsd10-i386:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-qemut-rhel6hvm-intel:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemut-win7-amd64:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-libvirt-xsm:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-pvshim:<job status>:broken:regression
    xen-4.11-testing:test-amd64-amd64-libvirt-vhd:<job status>:broken:regression
    xen-4.11-testing:test-amd64-amd64-xl-pvhv2-amd:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-libvirt:<job status>:broken:regression
    xen-4.11-testing:test-amd64-amd64-libvirt-xsm:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:<job status>:broken:regression
    xen-4.11-testing:test-amd64-amd64-xl-qemut-win7-amd64:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-shadow:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-qemut-rhel6hvm-amd:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64:<job status>:broken:regression
    xen-4.11-testing:test-amd64-amd64-pygrub:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-freebsd10-amd64:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-qemuu-rhel6hvm-intel:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-qemuu-ws16-amd64:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-xl-raw:<job status>:broken:regression
    xen-4.11-testing:test-amd64-i386-qemuu-rhel6hvm-amd:<job status>:broken:regression
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:<job status>:broken:regression
    xen-4.11-testing:test-amd64-amd64-amd64-pvgrub:<job status>:broken:regression
    xen-4.11-testing:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    xen-4.11-testing:test-armhf-armhf-libvirt-raw:guest-start:fail:regression
    xen-4.11-testing:test-amd64-i386-libvirt-xsm:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-qemuu-rhel6hvm-intel:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl-qemuu-ws16-amd64:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-amd64-xl-qemut-win7-amd64:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-freebsd10-i386:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl-qemut-win7-amd64:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-amd64-pygrub:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl-raw:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-qemut-rhel6hvm-amd:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-qemuu-rhel6hvm-amd:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-amd64-libvirt-vhd:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl-qemut-debianhvm-amd64:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-amd64-libvirt-xsm:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-freebsd10-amd64:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl-xsm:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-qemut-rhel6hvm-intel:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl-pvshim:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl-qemuu-win7-amd64:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-libvirt:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl-qemut-ws16-amd64:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-amd64-amd64-pvgrub:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-amd64-xl-qemut-ws16-amd64:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-amd64-qemuu-freebsd11-amd64:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl-qemuu-ovmf-amd64:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-i386-xl-shadow:host-install(5):broken:heisenbug
    xen-4.11-testing:test-amd64-amd64-xl-pvhv2-amd:host-install(5):broken:heisenbug
    xen-4.11-testing:test-armhf-armhf-xl-vhd:debian-di-install:fail:heisenbug
    xen-4.11-testing:test-armhf-armhf-libvirt-raw:debian-di-install:fail:heisenbug
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=1c7d984645f9ade9b47e862b5880734ad498fea8
X-Osstest-Versions-That:
    xen=310ab79875cb705cc2c7daddff412b5a4899f8c9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 12 Feb 2021 00:10:35 +0000

flight 159204 xen-4.11-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159204/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-ws16-amd64    <job status>           broken in 159042
 test-amd64-i386-xl-qemuu-ovmf-amd64    <job status>           broken in 159042
 test-amd64-i386-xl-qemuu-win7-amd64    <job status>           broken in 159042
 test-amd64-i386-xl-qemut-debianhvm-amd64    <job status>      broken in 159042
 test-amd64-amd64-qemuu-freebsd11-amd64    <job status>        broken in 159042
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  <job status> broken in 159042
 test-amd64-i386-xl              <job status>                 broken  in 159042
 test-amd64-amd64-xl-qemut-ws16-amd64    <job status>          broken in 159042
 test-amd64-i386-xl-xsm          <job status>                 broken  in 159042
 test-amd64-i386-freebsd10-i386    <job status>                broken in 159042
 test-amd64-i386-qemut-rhel6hvm-intel    <job status>          broken in 159042
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm <job status> broken in 159042
 test-amd64-i386-xl-qemut-win7-amd64    <job status>           broken in 159042
 test-amd64-i386-libvirt-xsm     <job status>                 broken  in 159042
 test-amd64-i386-xl-pvshim       <job status>                 broken  in 159042
 test-amd64-amd64-libvirt-vhd    <job status>                 broken  in 159042
 test-amd64-amd64-xl-pvhv2-amd    <job status>                 broken in 159042
 test-amd64-i386-libvirt         <job status>                 broken  in 159042
 test-amd64-amd64-libvirt-xsm    <job status>                 broken  in 159042
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm    <job status>   broken in 159042
 test-amd64-amd64-xl-qemut-win7-amd64    <job status>          broken in 159042
 test-amd64-i386-xl-shadow       <job status>                 broken  in 159042
 test-amd64-i386-qemut-rhel6hvm-amd    <job status>            broken in 159042
 test-amd64-i386-xl-qemuu-debianhvm-amd64    <job status>      broken in 159042
 test-amd64-amd64-pygrub         <job status>                 broken  in 159042
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm    <job status>   broken in 159042
 test-amd64-i386-freebsd10-amd64    <job status>               broken in 159042
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict <job status> broken in 159042
 test-amd64-i386-qemuu-rhel6hvm-intel    <job status>          broken in 159042
 test-amd64-i386-xl-qemuu-ws16-amd64    <job status>           broken in 159042
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm <job status> broken in 159042
 test-amd64-i386-xl-raw          <job status>                 broken  in 159042
 test-amd64-i386-qemuu-rhel6hvm-amd    <job status>            broken in 159042
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict <job status> broken in 159042
 test-amd64-amd64-amd64-pvgrub    <job status>                 broken in 159042
 test-armhf-armhf-xl-vhd      13 guest-start    fail in 159042 REGR. vs. 157566
 test-armhf-armhf-libvirt-raw 13 guest-start    fail in 159042 REGR. vs. 157566

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt-xsm  5 host-install(5) broken in 159042 pass in 159204
 test-amd64-i386-qemuu-rhel6hvm-intel 5 host-install(5) broken in 159042 pass in 159204
 test-amd64-i386-xl-qemuu-ws16-amd64 5 host-install(5) broken in 159042 pass in 159204
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 5 host-install(5) broken in 159042 pass in 159204
 test-amd64-amd64-xl-qemut-win7-amd64 5 host-install(5) broken in 159042 pass in 159204
 test-amd64-i386-freebsd10-i386 5 host-install(5) broken in 159042 pass in 159204
 test-amd64-i386-xl-qemut-win7-amd64 5 host-install(5) broken in 159042 pass in 159204
 test-amd64-i386-xl-qemuu-debianhvm-amd64 5 host-install(5) broken in 159042 pass in 159204
 test-amd64-amd64-pygrub      5 host-install(5) broken in 159042 pass in 159204
 test-amd64-i386-xl-raw       5 host-install(5) broken in 159042 pass in 159204
 test-amd64-i386-qemut-rhel6hvm-amd 5 host-install(5) broken in 159042 pass in 159204
 test-amd64-i386-qemuu-rhel6hvm-amd 5 host-install(5) broken in 159042 pass in 159204
 test-amd64-amd64-libvirt-vhd 5 host-install(5) broken in 159042 pass in 159204
 test-amd64-i386-xl-qemut-debianhvm-amd64 5 host-install(5) broken in 159042 pass in 159204
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 5 host-install(5) broken in 159042 pass in 159204
 test-amd64-amd64-libvirt-xsm 5 host-install(5) broken in 159042 pass in 159204
 test-amd64-i386-freebsd10-amd64 5 host-install(5) broken in 159042 pass in 159204
 test-amd64-i386-xl-xsm       5 host-install(5) broken in 159042 pass in 159204
 test-amd64-i386-qemut-rhel6hvm-intel 5 host-install(5) broken in 159042 pass in 159204
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 5 host-install(5) broken in 159042 pass in 159204
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 5 host-install(5) broken in 159042 pass in 159204
 test-amd64-i386-xl-pvshim    5 host-install(5) broken in 159042 pass in 159204
 test-amd64-i386-xl-qemuu-win7-amd64 5 host-install(5) broken in 159042 pass in 159204
 test-amd64-i386-libvirt      5 host-install(5) broken in 159042 pass in 159204
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 5 host-install(5) broken in 159042 pass in 159204
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 5 host-install(5) broken in 159042 pass in 159204
 test-amd64-i386-xl-qemut-ws16-amd64 5 host-install(5) broken in 159042 pass in 159204
 test-amd64-amd64-amd64-pvgrub 5 host-install(5) broken in 159042 pass in 159204
 test-amd64-amd64-xl-qemut-ws16-amd64 5 host-install(5) broken in 159042 pass in 159204
 test-amd64-amd64-qemuu-freebsd11-amd64 5 host-install(5) broken in 159042 pass in 159204
 test-amd64-i386-xl-qemuu-ovmf-amd64 5 host-install(5) broken in 159042 pass in 159204
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 5 host-install(5) broken in 159042 pass in 159204
 test-amd64-i386-xl           5 host-install(5) broken in 159042 pass in 159204
 test-amd64-i386-xl-shadow    5 host-install(5) broken in 159042 pass in 159204
 test-amd64-amd64-xl-pvhv2-amd 5 host-install(5) broken in 159042 pass in 159204
 test-armhf-armhf-xl-vhd      12 debian-di-install          fail pass in 159042
 test-armhf-armhf-libvirt-raw 12 debian-di-install          fail pass in 159042

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 157566
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 157566
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157566
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157566
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157566
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157566
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157566
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157566
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157566
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157566
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157566
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157566
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  1c7d984645f9ade9b47e862b5880734ad498fea8
baseline version:
 xen                  310ab79875cb705cc2c7daddff412b5a4899f8c9

Last test of basis   157566  2020-12-15 14:05:54 Z   58 days
Failing since        159016  2021-02-04 15:05:58 Z    7 days    7 attempts
Testing same since   159042  2021-02-05 12:13:30 Z    6 days    6 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-i386-xl-qemut-ws16-amd64 broken
broken-job test-amd64-i386-xl-qemuu-ovmf-amd64 broken
broken-job test-amd64-i386-xl-qemuu-win7-amd64 broken
broken-job test-amd64-i386-xl-qemut-debianhvm-amd64 broken
broken-job test-amd64-amd64-qemuu-freebsd11-amd64 broken
broken-job test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow broken
broken-job test-amd64-i386-xl broken
broken-job test-amd64-amd64-xl-qemut-ws16-amd64 broken
broken-job test-amd64-i386-xl-xsm broken
broken-job test-amd64-i386-freebsd10-i386 broken
broken-job test-amd64-i386-qemut-rhel6hvm-intel broken
broken-job test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm broken
broken-job test-amd64-i386-xl-qemut-win7-amd64 broken
broken-job test-amd64-i386-libvirt-xsm broken
broken-job test-amd64-i386-xl-pvshim broken
broken-job test-amd64-amd64-libvirt-vhd broken
broken-job test-amd64-amd64-xl-pvhv2-amd broken
broken-job test-amd64-i386-libvirt broken
broken-job test-amd64-amd64-libvirt-xsm broken
broken-job test-amd64-i386-xl-qemuu-debianhvm-i386-xsm broken
broken-job test-amd64-amd64-xl-qemut-win7-amd64 broken
broken-job test-amd64-i386-xl-shadow broken
broken-job test-amd64-i386-qemut-rhel6hvm-amd broken
broken-job test-amd64-i386-xl-qemuu-debianhvm-amd64 broken
broken-job test-amd64-amd64-pygrub broken
broken-job test-amd64-i386-xl-qemut-debianhvm-i386-xsm broken
broken-job test-amd64-i386-freebsd10-amd64 broken
broken-job test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict broken
broken-job test-amd64-i386-qemuu-rhel6hvm-intel broken
broken-job test-amd64-i386-xl-qemuu-ws16-amd64 broken
broken-job test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm broken
broken-job test-amd64-i386-xl-raw broken
broken-job test-amd64-i386-qemuu-rhel6hvm-amd broken
broken-job test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict broken
broken-job test-amd64-amd64-amd64-pvgrub broken

Not pushing.

------------------------------------------------------------
commit 1c7d984645f9ade9b47e862b5880734ad498fea8
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Feb 5 08:54:03 2021 +0100

    x86/msr: fix handling of MSR_IA32_PERF_{STATUS/CTL} (again, part 2)
    
    X86_VENDOR_* aren't bit masks in the older trees.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit f9090d990e201a5ca045976b8ddaab9fa6ee69dd
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Feb 4 15:41:12 2021 +0100

    x86/msr: fix handling of MSR_IA32_PERF_{STATUS/CTL} (again)
    
    X86_VENDOR_* aren't bit masks in the older trees.
    
    Reported-by: James Dingwall <james@dingwall.me.uk>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri Feb 12 02:04:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 Feb 2021 02:04:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84110.157709 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lANoW-0006ZB-2W; Fri, 12 Feb 2021 02:04:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84110.157709; Fri, 12 Feb 2021 02:04:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lANoV-0006Z4-Ug; Fri, 12 Feb 2021 02:04:07 +0000
Received: by outflank-mailman (input) for mailman id 84110;
 Fri, 12 Feb 2021 02:04:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lANoU-0006Yw-V3; Fri, 12 Feb 2021 02:04:06 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lANoU-0003Ix-MJ; Fri, 12 Feb 2021 02:04:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lANoU-00050U-Dm; Fri, 12 Feb 2021 02:04:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lANoU-0000HJ-DL; Fri, 12 Feb 2021 02:04:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=SF5afrLRDmWWM5T3rjAisrEHsawVAD03S5StD9WhDjw=; b=52+DI9lMUzwBoCtC3nTyhu/REN
	BY+Y+GnlIXPQ3yWay1NJcGnshQWZFDNwkVEHoKzMwZ2f+lEsC5Lf/mKjLUG5aHfyMhXN2Wvll0t8m
	szdjTC/2thDDh86o6KQowfC23o1zDZw7HrZUY7oG9vi9GcSCfqQES4vbVAovKeKdsPNc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159273-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 159273: trouble: blocked/broken/pass
X-Osstest-Failures:
    xen-unstable-smoke:build-arm64-xsm:<job status>:broken:regression
    xen-unstable-smoke:build-arm64-xsm:host-install(4):broken:regression
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=d66cf56fa26a052bce2f8c746dc0dbac9061b593
X-Osstest-Versions-That:
    xen=f3e1eb2f0234c955243a915d69ebd84f26eec130
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 12 Feb 2021 02:04:06 +0000

flight 159273 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159273/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-xsm                 <job status>                 broken
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 159258

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  d66cf56fa26a052bce2f8c746dc0dbac9061b593
baseline version:
 xen                  f3e1eb2f0234c955243a915d69ebd84f26eec130

Last test of basis   159258  2021-02-11 17:00:27 Z    0 days
Testing same since   159266  2021-02-11 21:00:28 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              broken  
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-arm64-xsm broken
broken-step build-arm64-xsm host-install(4)

Not pushing.

------------------------------------------------------------
commit d66cf56fa26a052bce2f8c746dc0dbac9061b593
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Feb 11 13:25:58 2021 +0000

    automation: Add Ubuntu Focal builds
    
    Logical continuation of c/s eb52442d7f "automation: Add Ubuntu:focal
    container".
    
    No further changes required.  Everything builds fine.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 9ca7632bc45cf045fa78a9d7e1275af55240b243
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Feb 10 13:51:21 2021 +0000

    tools/libxl: Document where the magic MAC numbers come from
    
    Matches the comment in the xl-network-configuration manpage.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri Feb 12 02:46:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 Feb 2021 02:46:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84117.157729 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAOTW-0001wP-Bh; Fri, 12 Feb 2021 02:46:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84117.157729; Fri, 12 Feb 2021 02:46:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAOTW-0001wI-8h; Fri, 12 Feb 2021 02:46:30 +0000
Received: by outflank-mailman (input) for mailman id 84117;
 Fri, 12 Feb 2021 02:46:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAOTU-0001wA-RX; Fri, 12 Feb 2021 02:46:28 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAOTU-0003wg-LV; Fri, 12 Feb 2021 02:46:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAOTU-0007ar-Ft; Fri, 12 Feb 2021 02:46:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lAOTU-0008O1-FN; Fri, 12 Feb 2021 02:46:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=3onlltnpr9YkYESTeCmrkgzAAzwfwjx9sZWTkClzhUE=; b=kTO1ASxv3gwNmONKw7ZRw8vDZV
	KzmphWQ3umboB7PadPXJ4GupGH8zh2JrqjD7DSQWk7xRZCzAYDtVCMgULTNQpWsLEe9nh3R4W/P47
	08GDDZucA0tmSAtfHmkCW9jRlmQ/AsguP6DvXgE+MIe8lwjgLt7EcYQEgcE5jWWzzukw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159239-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 159239: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=b9a063cd8e50a07dac179671b2e26c8144df0915
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 12 Feb 2021 02:46:28 +0000

flight 159239 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159239/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              b9a063cd8e50a07dac179671b2e26c8144df0915
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  216 days
Failing since        151818  2020-07-11 04:18:52 Z  215 days  208 attempts
Testing same since   159239  2021-02-11 04:18:53 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 41230 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Feb 12 05:43:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 Feb 2021 05:43:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84136.157778 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAREU-00029e-Ja; Fri, 12 Feb 2021 05:43:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84136.157778; Fri, 12 Feb 2021 05:43:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAREU-00029X-Fy; Fri, 12 Feb 2021 05:43:10 +0000
Received: by outflank-mailman (input) for mailman id 84136;
 Fri, 12 Feb 2021 05:43:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lARET-00029P-Bl; Fri, 12 Feb 2021 05:43:09 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lARET-0007GJ-4D; Fri, 12 Feb 2021 05:43:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lARES-00070y-Sv; Fri, 12 Feb 2021 05:43:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lARES-0000rb-SI; Fri, 12 Feb 2021 05:43:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=J7SyNDhKfusG1qYmEeMFW3NLanpVpQE2NDOUkfXJtYk=; b=O/sBO+csGB5HlX37aNTmsDjtIo
	+s0rNj+9sGHZdWfzN/M5WBzfbPH2esClHDkQKcJk4D8x5JWw0NXWYe5VyRhU4bgB0ZjziL0myFRxZ
	vTTbt/4rmfroku232OfWQ5IgkI7v1H6k8bVF9Lusl4h04sYSQyGLteCVUqJ4wUXYWAjM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159280-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 159280: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=5a4087004d1adbbb223925f3306db0e5824a2bdc
X-Osstest-Versions-That:
    xen=f3e1eb2f0234c955243a915d69ebd84f26eec130
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 12 Feb 2021 05:43:08 +0000

flight 159280 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159280/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  5a4087004d1adbbb223925f3306db0e5824a2bdc
baseline version:
 xen                  f3e1eb2f0234c955243a915d69ebd84f26eec130

Last test of basis   159258  2021-02-11 17:00:27 Z    0 days
Failing since        159266  2021-02-11 21:00:28 Z    0 days    3 attempts
Testing same since   159280  2021-02-12 03:01:28 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Elliott Mitchell <ehem+xen@m5p.com>
  Jukka Kaartinen <jukka.kaartinen@unikie.com>
  Julien Grall <jgrall@amazon.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   f3e1eb2f02..5a4087004d  5a4087004d1adbbb223925f3306db0e5824a2bdc -> smoke


From xen-devel-bounces@lists.xenproject.org Fri Feb 12 06:01:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 Feb 2021 06:01:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84144.157798 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lARVz-0004A3-7P; Fri, 12 Feb 2021 06:01:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84144.157798; Fri, 12 Feb 2021 06:01:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lARVz-00049w-4Q; Fri, 12 Feb 2021 06:01:15 +0000
Received: by outflank-mailman (input) for mailman id 84144;
 Fri, 12 Feb 2021 06:01:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DDLl=HO=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lARVy-00049k-6h
 for xen-devel@lists.xenproject.org; Fri, 12 Feb 2021 06:01:14 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 60bb828b-6fa0-4e17-b5e1-bc7f6fadaf0a;
 Fri, 12 Feb 2021 06:01:13 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 635A2B1E6;
 Fri, 12 Feb 2021 06:01:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 60bb828b-6fa0-4e17-b5e1-bc7f6fadaf0a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613109672; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=0v46qQRuE5c4pAwWDjq4Hvrs44PX0xNfVaw3r/wqYzE=;
	b=eyJWTr4jFEUfHoWMr1kClNLREwQz9QTj0mAfBKTo8FIZZwCqU7RRFsLTlmvASzRX3Paw0h
	a4H7RV8cAgE14HCpP3a8oNubwqP2Az+wuERcgt/e+EMvgPRPQwteKLgAiaXm2/wctYsEBe
	dRMiC+IL9Ye52glqAAjj9sGj4uSqSpA=
From: Juergen Gross <jgross@suse.com>
To: torvalds@linux-foundation.org
Cc: linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com
Subject: [GIT PULL] xen: branch for v5.11-rc8
Date: Fri, 12 Feb 2021 07:01:11 +0100
Message-Id: <20210212060111.22013-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Linus,

Please git pull the following tag:

 git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.11-rc8-tag

xen: branch for v5.11-rc8

It contains a single fix for an issue introduced in 5.11: when running
as a Xen guest on Arm systems the kernel will hang during boot.

Thanks.

Juergen

 arch/arm/xen/enlighten.c          | 2 --
 drivers/xen/xenbus/xenbus.h       | 1 -
 drivers/xen/xenbus/xenbus_probe.c | 2 +-
 include/xen/xenbus.h              | 2 --
 4 files changed, 1 insertion(+), 6 deletions(-)

Julien Grall (1):
      arm/xen: Don't probe xenbus as part of an early initcall


From xen-devel-bounces@lists.xenproject.org Fri Feb 12 09:22:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 Feb 2021 09:22:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84165.157847 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAUeq-0006J0-Lz; Fri, 12 Feb 2021 09:22:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84165.157847; Fri, 12 Feb 2021 09:22:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAUeq-0006Ir-Gw; Fri, 12 Feb 2021 09:22:36 +0000
Received: by outflank-mailman (input) for mailman id 84165;
 Fri, 12 Feb 2021 09:22:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAUeo-0006Ig-V6; Fri, 12 Feb 2021 09:22:34 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAUeo-0002wS-O3; Fri, 12 Feb 2021 09:22:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAUeo-0000zB-D4; Fri, 12 Feb 2021 09:22:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lAUeo-0005fT-CW; Fri, 12 Feb 2021 09:22:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=dRyW40dunMrJxBKpARRwHx/OJLkvTeTgKB+oM85/Hjk=; b=y9NriXqZsJjpCeiZMJRqrSFgL6
	JISg4IOcZXBuQs2LcOQrDtbJd4VN0oLtCg0N424eftk5NHVmlVA5mrnJbiM8X3biVhrhbCGbUZnC1
	VzfPe28xcPPHaAsWl95CZ+3YC8iR8uetq757Y2KGoqYvmA+okXFgyb7WBrQjWMS4rEG4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159238-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 159238: regressions - FAIL
X-Osstest-Failures:
    linux-5.4:test-arm64-arm64-xl-credit1:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-xsm:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-credit2:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-seattle:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-thunderx:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-libvirt:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-cubietruck:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-arndale:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=5e1942063dc3633f7a127aa2b159c13507580d21
X-Osstest-Versions-That:
    linux=a829146c3fdcf6d0b76d9c54556a223820f1f73b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 12 Feb 2021 09:22:34 +0000

flight 159238 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159238/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-credit1  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-xsm      14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl          14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-credit2  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-seattle  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-credit1  14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-credit2  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-thunderx 14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-multivcpu 14 guest-start             fail REGR. vs. 158387
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-cubietruck 14 guest-start            fail REGR. vs. 158387
 test-armhf-armhf-xl          14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-arndale  14 guest-start              fail REGR. vs. 158387

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     14 guest-start              fail REGR. vs. 158387

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 158387
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 158387
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 158387
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 158387
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 158387
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 158387
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 158387
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 158387
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 158387
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 158387
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                5e1942063dc3633f7a127aa2b159c13507580d21
baseline version:
 linux                a829146c3fdcf6d0b76d9c54556a223820f1f73b

Last test of basis   158387  2021-01-12 19:40:06 Z   30 days
Failing since        158473  2021-01-17 13:42:20 Z   25 days   36 attempts
Testing same since   159200  2021-02-10 11:25:12 Z    1 days    2 attempts

------------------------------------------------------------
453 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 13546 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Feb 12 10:04:58 2021
Subject: Re: [PATCH HVM v2 1/1] hvm: refactor set param
To: Norbert Manthey <nmanthey@amazon.de>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Ian Jackson <iwj@xenproject.org>,
 xen-devel@lists.xenproject.org
References: <20210205203905.8824-1-nmanthey@amazon.de>
 <edf1cd78-2192-2679-9a34-804c5d7b75c5@suse.com>
 <ba146cd6-fd5a-78d8-40bc-59885d265f5f@amazon.de>
 <b8529792-3d99-2e0d-7ebe-31c2c406145f@suse.com>
 <9f753ee9-73c5-aa2c-3c68-eed7e0c2608b@amazon.de>
 <a52cb2ac-fa85-73cd-0c53-3ee002d6b3ea@suse.com>
 <adee7548-0a60-d7d1-731f-474a585edf6c@amazon.de>
 <1a50a8c3-44f5-9ea9-7ff1-0d716bc05ebd@suse.com>
 <d2a5c3a5-d163-3ee9-50ff-0083bd52c374@amazon.de>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a48464da-659a-1dea-0b1f-fdd264b1db69@suse.com>
Date: Fri, 12 Feb 2021 11:04:49 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <d2a5c3a5-d163-3ee9-50ff-0083bd52c374@amazon.de>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11.02.2021 21:46, Norbert Manthey wrote:
> I agree with the symmetry for get and set. This is what I'd aim for:
> 
> 1. hvmop_set_param and hvmop_get_param (static) both check for the
> index, and afterwards use the is_hvm_domain(d) function with its barrier
> 2. hvm_set_param (static) and hvm_get_param both call their allow
> helper function, evaluate the return code, and afterwards block speculation.
> 2.1. hvm_get_param is declared in a public header, and cannot be turned
> into a static function, hence needs the index check

But both further call sites are in bounded loops, with the bounds not
guest controlled. It can rely on the callers just as much as ...

> 2.2. hvm_set_param is only called from hvmop_set_param, and index is
> already checked there, hence, do not add check

... this.

> 3. hvm_allow_set_param (static) and hvm_allow_get_param (static) do not
> validate the index parameter
> 3.1. hvm_allow_set_param blocks speculative execution with a barrier
> after domain permissions have been evaluated, before accessing the
> parameters of the domain. hvm_allow_get_param does not access the params
> member of the domain, and hence does not require additional protection.
> 
> To simplify the code, I propose to furthermore make the hvmop_set_param
> function static as well.

Yes - this not being so already is likely simply an oversight,
supported by the fact that there's no declaration in any header.
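
For the archives, a minimal standalone sketch of the pattern agreed on
above. The constant, the params array, and the barrier implementation are
all illustrative stand-ins, not Xen's actual definitions (the real code
operates on d->arch.hvm.params and uses an lfence-based barrier):

```c
#include <stdint.h>

/* Illustrative stand-ins for Xen's primitives. */
#define HVM_NR_PARAMS 64
#define block_speculation() __asm__ volatile ( "" ::: "memory" )

static uint64_t params[HVM_NR_PARAMS];

/*
 * Validate the index once at the hypercall entry point, then block
 * speculation before touching the params array, so that a mispredicted
 * bounds check cannot be used to access an out-of-range element.
 */
static int hvm_set_param_sketch(unsigned int index, uint64_t value)
{
    if ( index >= HVM_NR_PARAMS )
        return -1;           /* architectural bounds check */

    block_speculation();     /* no speculative access past this point */
    params[index] = value;

    return 0;
}
```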

Jan


From xen-devel-bounces@lists.xenproject.org Fri Feb 12 10:46:31 2021
Date: Fri, 12 Feb 2021 11:41:18 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Tim Deegan
	<tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
Subject: Re: [PATCH 04/17] x86/PV: harden guest memory accesses against
 speculative abuse
Message-ID: <YCZbToiL3+Ji3y48@Air-de-Roger>
References: <4f1975a9-bdd9-f556-9db5-eb6c428f258f@suse.com>
 <5da0c123-3b90-97e8-e1e5-10286be38ce7@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <5da0c123-3b90-97e8-e1e5-10286be38ce7@suse.com>
MIME-Version: 1.0

On Thu, Jan 14, 2021 at 04:04:57PM +0100, Jan Beulich wrote:
> Inspired by
> https://lore.kernel.org/lkml/f12e7d3cecf41b2c29734ea45a393be21d4a8058.1597848273.git.jpoimboe@redhat.com/
> and prior work in that area of x86 Linux, suppress speculation with
> guest specified pointer values by suitably masking the addresses to
> non-canonical space in case they fall into Xen's virtual address range.
> 
> Introduce a new Kconfig control.
> 
> Note that it is necessary in such code to avoid using "m" kind operands:
> If we didn't, there would be no guarantee that the register passed to
> guest_access_mask_ptr is also the (base) one used for the memory access.
> 
> As a minor unrelated change in get_unsafe_asm() the unnecessary "itype"
> parameter gets dropped and the XOR on the fixup path gets changed to be
> a 32-bit one in all cases: This way we avoid pointless REX.W or operand
> size overrides, or writes to partial registers.
> 
> Requested-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> The insn sequence chosen is certainly up for discussion; I've picked
> this one despite the RCR because alternatives I could come up with,
> like
> 
> 	mov	$(HYPERVISOR_VIRT_END), %rax
> 	mov	$~0, %rdx
> 	mov	$0x7fffffffffffffff, %rcx
> 	cmp	%rax, %rdi
> 	cmovb	%rcx, %rdx
> 	and	%rdx, %rdi
> 
> weren't necessarily better: Either, as above, they are longer and
> require a 3rd scratch register, or they also utilize the carry flag in
> some similar way.
> ---
> Judging from the comment ahead of put_unsafe_asm() we might as well not
> tell gcc at all anymore about the memory access there, now that there's
> no use of the operand anymore in the assembly code.
> 
> --- a/xen/arch/x86/usercopy.c
> +++ b/xen/arch/x86/usercopy.c
> @@ -10,12 +10,19 @@
>  #include <xen/sched.h>
>  #include <asm/uaccess.h>
>  
> -unsigned __copy_to_user_ll(void __user *to, const void *from, unsigned n)
> +#ifndef GUARD
> +# define GUARD UA_KEEP
> +#endif
> +
> +unsigned int copy_to_guest_ll(void __user *to, const void *from, unsigned int n)
>  {
>      unsigned dummy;
>  
>      stac();
>      asm volatile (
> +        GUARD(
> +        "    guest_access_mask_ptr %[to], %q[scratch1], %q[scratch2]\n"
> +        )
>          "    cmp  $"STR(2*BYTES_PER_LONG-1)", %[cnt]\n"
>          "    jbe  1f\n"
>          "    mov  %k[to], %[cnt]\n"
> @@ -42,6 +49,7 @@ unsigned __copy_to_user_ll(void __user *
>          _ASM_EXTABLE(1b, 2b)
>          : [cnt] "+c" (n), [to] "+D" (to), [from] "+S" (from),
>            [aux] "=&r" (dummy)
> +          GUARD(, [scratch1] "=&r" (dummy), [scratch2] "=&r" (dummy))
>          : "[aux]" (n)
>          : "memory" );
>      clac();
> @@ -49,12 +57,15 @@ unsigned __copy_to_user_ll(void __user *
>      return n;
>  }
>  
> -unsigned __copy_from_user_ll(void *to, const void __user *from, unsigned n)
> +unsigned int copy_from_guest_ll(void *to, const void __user *from, unsigned int n)
>  {
>      unsigned dummy;
>  
>      stac();
>      asm volatile (
> +        GUARD(
> +        "    guest_access_mask_ptr %[from], %q[scratch1], %q[scratch2]\n"
> +        )
>          "    cmp  $"STR(2*BYTES_PER_LONG-1)", %[cnt]\n"
>          "    jbe  1f\n"
>          "    mov  %k[to], %[cnt]\n"
> @@ -87,6 +98,7 @@ unsigned __copy_from_user_ll(void *to, c
>          _ASM_EXTABLE(1b, 6b)
>          : [cnt] "+c" (n), [to] "+D" (to), [from] "+S" (from),
>            [aux] "=&r" (dummy)
> +          GUARD(, [scratch1] "=&r" (dummy), [scratch2] "=&r" (dummy))
>          : "[aux]" (n)
>          : "memory" );
>      clac();
> @@ -94,6 +106,8 @@ unsigned __copy_from_user_ll(void *to, c
>      return n;
>  }
>  
> +#if GUARD(1) + 0

Why do you need the '+ 0' here? I guess it's to prevent the
preprocessor from complaining when GUARD(1) gets replaced by nothing?
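
For the record, the idiom can be demonstrated standalone. The UA_KEEP /
UA_DROP definitions below are written out as I understand them from this
series:

```c
/*
 * UA_KEEP() expands to its arguments, UA_DROP() to nothing.  With GUARD
 * defined to UA_DROP, "#if GUARD(1)" alone would become "#if" with an
 * empty expression, which is a preprocessing error.  Appending "+ 0"
 * keeps the expression well-formed either way: "1 + 0" (true) for
 * UA_KEEP, and a unary "+ 0" (false) for UA_DROP.
 */
#define UA_KEEP(args...) args
#define UA_DROP(args...)

#define GUARD UA_KEEP   /* flip to UA_DROP to drop the guarded section */

#if GUARD(1) + 0
#define GUARDED_SECTION_COMPILED 1
#else
#define GUARDED_SECTION_COMPILED 0
#endif
```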

> +
>  /**
>   * copy_to_user: - Copy a block of data into user space.
>   * @to:   Destination address, in user space.
> @@ -128,8 +142,11 @@ unsigned clear_user(void __user *to, uns
>  {
>      if ( access_ok(to, n) )
>      {
> +        long dummy;
> +
>          stac();
>          asm volatile (
> +            "    guest_access_mask_ptr %[to], %[scratch1], %[scratch2]\n"
>              "0:  rep stos"__OS"\n"
>              "    mov  %[bytes], %[cnt]\n"
>              "1:  rep stosb\n"
> @@ -140,7 +157,8 @@ unsigned clear_user(void __user *to, uns
>              ".previous\n"
>              _ASM_EXTABLE(0b,3b)
>              _ASM_EXTABLE(1b,2b)
> -            : [cnt] "=&c" (n), [to] "+D" (to)
> +            : [cnt] "=&c" (n), [to] "+D" (to), [scratch1] "=&r" (dummy),
> +              [scratch2] "=&r" (dummy)
>              : [bytes] "r" (n & (BYTES_PER_LONG - 1)),
>                [longs] "0" (n / BYTES_PER_LONG), "a" (0) );
>          clac();
> @@ -174,6 +192,16 @@ unsigned copy_from_user(void *to, const
>      return n;
>  }
>  
> +# undef GUARD
> +# define GUARD UA_DROP
> +# define copy_to_guest_ll copy_to_unsafe_ll
> +# define copy_from_guest_ll copy_from_unsafe_ll
> +# undef __user
> +# define __user
> +# include __FILE__
> +
> +#endif /* GUARD(1) */
> +
>  /*
>   * Local variables:
>   * mode: C
> --- a/xen/arch/x86/x86_64/entry.S
> +++ b/xen/arch/x86/x86_64/entry.S
> @@ -446,6 +446,8 @@ UNLIKELY_START(g, create_bounce_frame_ba
>          jmp   asm_domain_crash_synchronous  /* Does not return */
>  __UNLIKELY_END(create_bounce_frame_bad_sp)
>  
> +        guest_access_mask_ptr %rsi, %rax, %rcx
> +
>  #define STORE_GUEST_STACK(reg, n) \
>  0:      movq  %reg,(n)*8(%rsi); \
>          _ASM_EXTABLE(0b, domain_crash_page_fault_ ## n ## x8)
> --- a/xen/common/Kconfig
> +++ b/xen/common/Kconfig
> @@ -114,6 +114,24 @@ config SPECULATIVE_HARDEN_BRANCH
>  
>  	  If unsure, say Y.
>  
> +config SPECULATIVE_HARDEN_GUEST_ACCESS
> +	bool "Speculative PV Guest Memory Access Hardening"
> +	default y
> +	depends on PV
> +	help
> +	  Contemporary processors may use speculative execution as a
> +	  performance optimisation, but this can potentially be abused by an
> +	  attacker to leak data via speculative sidechannels.
> +
> +	  One source of data leakage is via speculative accesses to hypervisor
> +	  memory through guest controlled values used to access guest memory.
> +
> +	  When enabled, code paths accessing PV guest memory will have guest
> +	  controlled addresses massaged such that memory accesses through them
> +	  won't touch hypervisor address space.
> +
> +	  If unsure, say Y.
> +
>  endmenu
>  
>  config HYPFS
> --- a/xen/include/asm-x86/asm-defns.h
> +++ b/xen/include/asm-x86/asm-defns.h
> @@ -44,3 +44,16 @@
>  .macro INDIRECT_JMP arg:req
>      INDIRECT_BRANCH jmp \arg
>  .endm
> +
> +.macro guest_access_mask_ptr ptr:req, scratch1:req, scratch2:req
> +#if defined(CONFIG_SPECULATIVE_HARDEN_GUEST_ACCESS)
> +    mov $(HYPERVISOR_VIRT_END - 1), \scratch1
> +    mov $~0, \scratch2
> +    cmp \ptr, \scratch1
> +    rcr $1, \scratch2
> +    and \scratch2, \ptr

If my understanding is correct, that's equivalent to:

ptr &= ~0ull >> (ptr < HYPERVISOR_VIRT_END);

It might be helpful to add this as a comment, to clarify the intended
functionality of the assembly bit.

I wonder if the C code above can generate any jumps? As you pointed
out, we already use something similar in array_index_mask_nospec and
that's fine to do in C.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Fri Feb 12 12:29:12 2021
Subject: Re: [PATCH HVM v2 1/1] hvm: refactor set param
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>,
	Roger Pau Monné <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Ian Jackson <iwj@xenproject.org>,
	<xen-devel@lists.xenproject.org>
References: <20210205203905.8824-1-nmanthey@amazon.de>
 <edf1cd78-2192-2679-9a34-804c5d7b75c5@suse.com>
 <ba146cd6-fd5a-78d8-40bc-59885d265f5f@amazon.de>
 <b8529792-3d99-2e0d-7ebe-31c2c406145f@suse.com>
 <9f753ee9-73c5-aa2c-3c68-eed7e0c2608b@amazon.de>
 <a52cb2ac-fa85-73cd-0c53-3ee002d6b3ea@suse.com>
 <adee7548-0a60-d7d1-731f-474a585edf6c@amazon.de>
 <1a50a8c3-44f5-9ea9-7ff1-0d716bc05ebd@suse.com>
 <d2a5c3a5-d163-3ee9-50ff-0083bd52c374@amazon.de>
 <a48464da-659a-1dea-0b1f-fdd264b1db69@suse.com>
From: Norbert Manthey <nmanthey@amazon.de>
Message-ID: <2633df5f-df68-4a16-bc5c-522b2a589b00@amazon.de>
Date: Fri, 12 Feb 2021 13:28:33 +0100
In-Reply-To: <a48464da-659a-1dea-0b1f-fdd264b1db69@suse.com>

On 2/12/21 11:04 AM, Jan Beulich wrote:
> On 11.02.2021 21:46, Norbert Manthey wrote:
>> I agree with the symmetry for get and set. This is what I'd aim for:
>>
>>  1. hvmop_set_param and hvmop_get_param (static) both check for the
>> index, and afterwards use the is_hvm_domain(d) function with its barrier
>>  2. hvm_set_param (static) and hvm_get_param both call their allow
>> helper function, evaluate the return code, and afterwards block speculation.
>>  2.1. hvm_get_param is declared in a public header, and cannot be turned
>> into a static function, hence needs the index check
> But both further call sites are in bounded loops, with the bounds not
> guest controlled. It can rely on the callers just as much as ...
Okay, so I will not add the check there either. I thought about future
modifications that allow to call that function from other places, or
modified call environments with eventually guest control - but I am fine
to not consider these.
>
>>  2.2. hvm_set_param is only called from hvmop_set_param, and index is
>> already checked there, hence, do not add check
> ... this.
>
>>  3. hvm_allow_set_param (static) and hvm_allow_get_param (static) do not
>> validate the index parameter
>>  3.1. hvm_allow_set_param blocks speculative execution with a barrier
>> after domain permissions have been evaluated, before accessing the
>> parameters of the domain. hvm_allow_get_param does not access the params
>> member of the domain, and hence does not require additional protection.
>>
>> To simplify the code, I propose to furthermore make the hvmop_set_param
>> function static as well.
> Yes - this not being so already is likely simply an oversight,
> supported by the fact that there's no declaration in any header.

Okay.

Best,
Norbert



From xen-devel-bounces@lists.xenproject.org Fri Feb 12 12:48:52 2021
Subject: Re: [PATCH 04/17] x86/PV: harden guest memory accesses against
 speculative abuse
To: Roger Pau Monné <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
References: <4f1975a9-bdd9-f556-9db5-eb6c428f258f@suse.com>
 <5da0c123-3b90-97e8-e1e5-10286be38ce7@suse.com>
 <YCZbToiL3+Ji3y48@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ece2bf60-4154-756d-df5a-1f55170f9451@suse.com>
Date: Fri, 12 Feb 2021 13:48:43 +0100
In-Reply-To: <YCZbToiL3+Ji3y48@Air-de-Roger>

On 12.02.2021 11:41, Roger Pau Monné wrote:
> On Thu, Jan 14, 2021 at 04:04:57PM +0100, Jan Beulich wrote:
>> @@ -94,6 +106,8 @@ unsigned __copy_from_user_ll(void *to, c
>>      return n;
>>  }
>>  
>> +#if GUARD(1) + 0
> 
> Why do you need the '+ 0' here? I guess it's to prevent the
> preprocessor from complaining when GUARD(1) gets replaced by nothing?

Yes. "#if" with nothing after it is an error, as far as I know.

>> --- a/xen/include/asm-x86/asm-defns.h
>> +++ b/xen/include/asm-x86/asm-defns.h
>> @@ -44,3 +44,16 @@
>>  .macro INDIRECT_JMP arg:req
>>      INDIRECT_BRANCH jmp \arg
>>  .endm
>> +
>> +.macro guest_access_mask_ptr ptr:req, scratch1:req, scratch2:req
>> +#if defined(CONFIG_SPECULATIVE_HARDEN_GUEST_ACCESS)
>> +    mov $(HYPERVISOR_VIRT_END - 1), \scratch1
>> +    mov $~0, \scratch2
>> +    cmp \ptr, \scratch1
>> +    rcr $1, \scratch2
>> +    and \scratch2, \ptr
> 
> If my understanding is correct, that's equivalent to:
> 
> ptr &= ~0ull >> (ptr < HYPERVISOR_VIRT_END);
> 
> It might be helpful to add this as a comment, to clarify the intended
> functionality of the assembly bit.
> 
> I wonder if the C code above can generate any jumps? As you pointed
> out, we already use something similar in array_index_mask_nospec and
> that's fine to do in C.

Note how array_index_mask_nospec() gets away without any use of
relational operators. They're what poses the risk of getting
translated to branches. (Quite likely the compiler wouldn't use
any in the case here, as the code can easily get away without,
but we don't want to chance it. Afaict it would instead use a
3rd scratch register, so register pressure might still lead to
using a branch instead in some exceptional case.)

Jan


From xen-devel-bounces@lists.xenproject.org Fri Feb 12 13:02:39 2021
Date: Fri, 12 Feb 2021 14:02:24 +0100
From: Roger Pau Monné <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Tim Deegan
	<tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
Subject: Re: [PATCH 04/17] x86/PV: harden guest memory accesses against
 speculative abuse
Message-ID: <YCZ8YB+Y/HyHNOPm@Air-de-Roger>
References: <4f1975a9-bdd9-f556-9db5-eb6c428f258f@suse.com>
 <5da0c123-3b90-97e8-e1e5-10286be38ce7@suse.com>
 <YCZbToiL3+Ji3y48@Air-de-Roger>
 <ece2bf60-4154-756d-df5a-1f55170f9451@suse.com>
In-Reply-To: <ece2bf60-4154-756d-df5a-1f55170f9451@suse.com>

On Fri, Feb 12, 2021 at 01:48:43PM +0100, Jan Beulich wrote:
> On 12.02.2021 11:41, Roger Pau Monné wrote:
> > On Thu, Jan 14, 2021 at 04:04:57PM +0100, Jan Beulich wrote:
> >> @@ -94,6 +106,8 @@ unsigned __copy_from_user_ll(void *to, c
> >>      return n;
> >>  }
> >>  
> >> +#if GUARD(1) + 0
> > 
> > Why do you need the '+ 0' here? I guess it's to prevent the
> > preprocessor from complaining when GUARD(1) gets replaced by nothing?
> 
> Yes. "#if" with nothing after it is an error, as far as I know.
> 
> >> --- a/xen/include/asm-x86/asm-defns.h
> >> +++ b/xen/include/asm-x86/asm-defns.h
> >> @@ -44,3 +44,16 @@
> >>  .macro INDIRECT_JMP arg:req
> >>      INDIRECT_BRANCH jmp \arg
> >>  .endm
> >> +
> >> +.macro guest_access_mask_ptr ptr:req, scratch1:req, scratch2:req
> >> +#if defined(CONFIG_SPECULATIVE_HARDEN_GUEST_ACCESS)
> >> +    mov $(HYPERVISOR_VIRT_END - 1), \scratch1
> >> +    mov $~0, \scratch2
> >> +    cmp \ptr, \scratch1
> >> +    rcr $1, \scratch2
> >> +    and \scratch2, \ptr
> > 
> > If my understanding is correct, that's equivalent to:
> > 
> > ptr &= ~0ull >> (ptr < HYPERVISOR_VIRT_END);
> > 
> > It might be helpful to add this as a comment, to clarify the intended
> > functionality of the assembly bit.
> > 
> > I wonder if the C code above can generate any jumps? As you pointed
> > out, we already use something similar in array_index_mask_nospec and
> > that's fine to do in C.
> 
> Note how array_index_mask_nospec() gets away without any use of
> relational operators. They're what poses the risk of getting
> translated to branches. (Quite likely the compiler wouldn't use
> any in the case here, as the code can easily get away without,
> but we don't want to chance it. Afaict it would instead use a
> 3rd scratch register, so register pressure might still lead to
> using a branch instead in some exceptional case.)

I see, it's not easy to build such a construct without using any
relational operator. Would you mind adding the C equivalent as a comment
next to the assembly chunk?

I don't think I have further comments:

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Fri Feb 12 13:15:50 2021
Subject: Re: [PATCH 04/17] x86/PV: harden guest memory accesses against
 speculative abuse
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
References: <4f1975a9-bdd9-f556-9db5-eb6c428f258f@suse.com>
 <5da0c123-3b90-97e8-e1e5-10286be38ce7@suse.com>
 <YCZbToiL3+Ji3y48@Air-de-Roger>
 <ece2bf60-4154-756d-df5a-1f55170f9451@suse.com>
 <YCZ8YB+Y/HyHNOPm@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <9f680f3e-78a5-e2d2-b1f1-8426ebc9613e@suse.com>
Date: Fri, 12 Feb 2021 14:15:45 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <YCZ8YB+Y/HyHNOPm@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 12.02.2021 14:02, Roger Pau Monné wrote:
> On Fri, Feb 12, 2021 at 01:48:43PM +0100, Jan Beulich wrote:
>> On 12.02.2021 11:41, Roger Pau Monné wrote:
>>> On Thu, Jan 14, 2021 at 04:04:57PM +0100, Jan Beulich wrote:
>>>> --- a/xen/include/asm-x86/asm-defns.h
>>>> +++ b/xen/include/asm-x86/asm-defns.h
>>>> @@ -44,3 +44,16 @@
>>>>  .macro INDIRECT_JMP arg:req
>>>>      INDIRECT_BRANCH jmp \arg
>>>>  .endm
>>>> +
>>>> +.macro guest_access_mask_ptr ptr:req, scratch1:req, scratch2:req
>>>> +#if defined(CONFIG_SPECULATIVE_HARDEN_GUEST_ACCESS)
>>>> +    mov $(HYPERVISOR_VIRT_END - 1), \scratch1
>>>> +    mov $~0, \scratch2
>>>> +    cmp \ptr, \scratch1
>>>> +    rcr $1, \scratch2
>>>> +    and \scratch2, \ptr
>>>
>>> If my understanding is correct, that's equivalent to:
>>>
>>> ptr &= ~0ull >> (ptr < HYPERVISOR_VIRT_END);
>>>
>>> It might be helpful to add this as a comment, to clarify the intended
>>> functionality of the assembly bit.
>>>
>>> I wonder if the C code above can generate any jumps? As you pointed
>>> out, we already use something similar in array_index_mask_nospec and
>>> that's fine to do in C.
>>
>> Note how array_index_mask_nospec() gets away without any use of
>> relational operators. They're what poses the risk of getting
>> translated to branches. (Quite likely the compiler wouldn't use
>> any in the case here, as the code can easily get away without,
>> but we don't want to chance it. Afaict it would instead use a
>> 3rd scratch register, so register pressure might still lead to
>> using a branch instead in some exceptional case.)
> 
> I see, it's not easy to build such a construct without using any
> relational operator. Would you mind adding the C equivalent next to the
> assembly chunk?

Sure:

    /*
     * Here we want
     *
     * ptr &= ~0ull >> (ptr < HYPERVISOR_VIRT_END);
     *
     * but guaranteed without any conditional branches (hence in assembly).
     */

> I don't think I have further comments:
> 
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks much!

Jan


From xen-devel-bounces@lists.xenproject.org Fri Feb 12 13:30:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 Feb 2021 13:30:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84276.157970 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAYWq-0005QE-Sn; Fri, 12 Feb 2021 13:30:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84276.157970; Fri, 12 Feb 2021 13:30:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAYWq-0005Q7-Ph; Fri, 12 Feb 2021 13:30:36 +0000
Received: by outflank-mailman (input) for mailman id 84276;
 Fri, 12 Feb 2021 13:30:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lAYWq-0005Q2-1L
 for xen-devel@lists.xenproject.org; Fri, 12 Feb 2021 13:30:36 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lAYWo-0006zW-PQ; Fri, 12 Feb 2021 13:30:34 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lAYWo-0001Aj-Cf; Fri, 12 Feb 2021 13:30:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=7n5wDDGeAwEfdH+2hCR80r+QfUuZy4aXiWCDCBy8vxI=; b=QRevInfTv2owthgSF6imfou8Zm
	Kgkyc07ZWG/VrBBKi4jiNOWSXuiwVbMlsCcpv2j5TEholB+rLJXx5X41RY3PhaNtr2oxEC5mm+xHj
	OCe6xlQvunDORLAqU3vtJhmPLi9FAkpxn2906F9F1GoLh6e5P7btaWMgjA2ZbC4dlRIo=;
Subject: Re: [PATCH v2] xen/arm: fix gnttab_need_iommu_mapping
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Rahul Singh <Rahul.Singh@arm.com>,
 "lucmiccio@gmail.com" <lucmiccio@gmail.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
References: <20210208184932.23468-1-sstabellini@kernel.org>
 <B96B5E21-0600-4664-899D-D38A18DE7A8C@arm.com>
 <alpine.DEB.2.21.2102091226560.8948@sstabellini-ThinkPad-T480s>
 <EFFD35EA-378B-4C5C-8485-7EA5265E89E4@arm.com>
 <4e4f7d25-6f5f-1016-b1c9-7aa902d637b8@xen.org>
 <ECC82E19-3504-4E0E-B3EA-D0E46DD842C6@arm.com>
 <c573b3a0-186d-626e-6670-f8fc28285e3d@xen.org>
 <BFC5858A-3631-48E1-AB87-40EECF95FA66@arm.com>
 <489c1b89-67f0-5d47-d527-3ea580b7cc43@xen.org>
 <alpine.DEB.2.21.2102111253060.9128@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <170a971e-8e10-bb9d-a324-e09e40ed994c@xen.org>
Date: Fri, 12 Feb 2021 13:30:32 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2102111253060.9128@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Stefano,

On 11/02/2021 20:55, Stefano Stabellini wrote:
> On Thu, 11 Feb 2021, Julien Grall wrote:
>> On 11/02/2021 13:20, Rahul Singh wrote:
>>>> On 10 Feb 2021, at 7:52 pm, Julien Grall <julien@xen.org> wrote:
>>>> On 10/02/2021 18:08, Rahul Singh wrote:
>>>>> Hello Julien,
>>>>>> On 10 Feb 2021, at 5:34 pm, Julien Grall <julien@xen.org> wrote:
>>>>>> On 10/02/2021 15:06, Rahul Singh wrote:
>>>>>>>> On 9 Feb 2021, at 8:36 pm, Stefano Stabellini
>>>>>>>> <sstabellini@kernel.org> wrote:
>>>>>>>>
>>>>>>>> On Tue, 9 Feb 2021, Rahul Singh wrote:
>>>>>>>>>> On 8 Feb 2021, at 6:49 pm, Stefano Stabellini
>>>>>>>>>> <sstabellini@kernel.org> wrote:
>>>>>>>>>>
>>>>>>>>>> Commit 91d4eca7add broke gnttab_need_iommu_mapping on ARM.
>>>>>>>>>> The offending chunk is:
>>>>>>>>>>
>>>>>>>>>> #define gnttab_need_iommu_mapping(d)                    \
>>>>>>>>>> -    (is_domain_direct_mapped(d) && need_iommu(d))
>>>>>>>>>> +    (is_domain_direct_mapped(d) && need_iommu_pt_sync(d))
>>>>>>>>>>
>>>>>>>>>> On ARM we need gnttab_need_iommu_mapping to be true for dom0
>>>>>>>>>> when it is
>>>>>>>>>> directly mapped and IOMMU is enabled for the domain, like the
>>>>>>>>>> old check
>>>>>>>>>> did, but the new check is always false.
>>>>>>>>>>
>>>>>>>>>> In fact, need_iommu_pt_sync is defined as
>>>>>>>>>> dom_iommu(d)->need_sync and
>>>>>>>>>> need_sync is set as:
>>>>>>>>>>
>>>>>>>>>>     if ( !is_hardware_domain(d) || iommu_hwdom_strict )
>>>>>>>>>>         hd->need_sync = !iommu_use_hap_pt(d);
>>>>>>>>>>
>>>>>>>>>> iommu_use_hap_pt(d) means that the page-table used by the
>>>>>>>>>> IOMMU is the
>>>>>>>>>> P2M. It is true on ARM. need_sync means that you have a
>>>>>>>>>> separate IOMMU
>>>>>>>>>> page-table and it needs to be updated for every change.
>>>>>>>>>> need_sync is set
>>>>>>>>>> to false on ARM. Hence, gnttab_need_iommu_mapping(d) is false
>>>>>>>>>> too,
>>>>>>>>>> which is wrong.
>>>>>>>>>>
>>>>>>>>>> As a consequence, when using PV network from a domU on a
>>>>>>>>>> system where
>>>>>>>>>> IOMMU is on from Dom0, I get:
>>>>>>>>>>
>>>>>>>>>> (XEN) smmu: /smmu@fd800000: Unhandled context fault:
>>>>>>>>>> fsr=0x402, iova=0x8424cb148, fsynr=0xb0001, cb=0
>>>>>>>>>> [   68.290307] macb ff0e0000.ethernet eth0: DMA bus error:
>>>>>>>>>> HRESP not OK
>>>>>>>>>>
>>>>>>>>>> The fix is to go back to something along the lines of the old
>>>>>>>>>> implementation of gnttab_need_iommu_mapping.
>>>>>>>>>>
>>>>>>>>>> Signed-off-by: Stefano Stabellini
>>>>>>>>>> <stefano.stabellini@xilinx.com>
>>>>>>>>>> Fixes: 91d4eca7add
>>>>>>>>>> Backport: 4.12+
>>>>>>>>>>
>>>>>>>>>> ---
>>>>>>>>>>
>>>>>>>>>> Given the severity of the bug, I would like to request this
>>>>>>>>>> patch to be
>>>>>>>>>> backported to 4.12 too, even if 4.12 is security-fixes only
>>>>>>>>>> since Oct
>>>>>>>>>> 2020.
>>>>>>>>>>
>>>>>>>>>> For the 4.12 backport, we can use iommu_enabled() instead of
>>>>>>>>>> is_iommu_enabled() in the implementation of
>>>>>>>>>> gnttab_need_iommu_mapping.
>>>>>>>>>>
>>>>>>>>>> Changes in v2:
>>>>>>>>>> - improve commit message
>>>>>>>>>> - add is_iommu_enabled(d) to the check
>>>>>>>>>> ---
>>>>>>>>>> xen/include/asm-arm/grant_table.h | 2 +-
>>>>>>>>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>>>>>>
>>>>>>>>>> diff --git a/xen/include/asm-arm/grant_table.h
>>>>>>>>>> b/xen/include/asm-arm/grant_table.h
>>>>>>>>>> index 6f585b1538..0ce77f9a1c 100644
>>>>>>>>>> --- a/xen/include/asm-arm/grant_table.h
>>>>>>>>>> +++ b/xen/include/asm-arm/grant_table.h
>>>>>>>>>> @@ -89,7 +89,7 @@ int replace_grant_host_mapping(unsigned long
>>>>>>>>>> gpaddr, mfn_t mfn,
>>>>>>>>>>      (((i) >= nr_status_frames(t)) ? INVALID_GFN :
>>>>>>>>>> (t)->arch.status_gfn[i])
>>>>>>>>>>
>>>>>>>>>> #define gnttab_need_iommu_mapping(d)                    \
>>>>>>>>>> -    (is_domain_direct_mapped(d) && need_iommu_pt_sync(d))
>>>>>>>>>> +    (is_domain_direct_mapped(d) && is_iommu_enabled(d))
>>>>>>>>>>
>>>>>>>>>> #endif /* __ASM_GRANT_TABLE_H__ */
>>>>>>>>>
>>>>>>>>> I tested the patch and while creating the guest I observed the
>>>>>>>>> below warning from Linux for block device.
>>>>>>>>> https://elixir.bootlin.com/linux/v4.3/source/drivers/block/xen-blkback/xenbus.c#L258
>>>>>>>>
>>>>>>>> So you are creating a guest with "xl create" in dom0 and you see
>>>>>>>> the
>>>>>>>> warnings below printed by the Dom0 kernel? I imagine the domU has
>>>>>>>> a
>>>>>>>> virtual "disk" of some sort.
>>>>>>> Yes, you are right: I am trying to create the guest with "xl create",
>>>>>>> and before that I created the logical volume and tried to attach it
>>>>>>> to the domain with "xl block-attach". I observed this error with the
>>>>>>> "xl block-attach" command.
>>>>>>> This issue occurs after applying this patch: from what I observed,
>>>>>>> the patch introduces calls to iommu_legacy_{,un}map() to map the
>>>>>>> grant pages for the IOMMU, which touches the page-tables. I am not
>>>>>>> sure, but it looks like something is written wrong when
>>>>>>> iommu_unmap() unmaps the pages, and because of that the issue is
>>>>>>> observed.
>>>>>>
>>>>>> Can you clarify what you mean by "written wrong"? What sort of error
>>>>>> do you see in the iommu_unmap()?
>>>>> I might be wrong, but as per my understanding, on ARM we always
>>>>> share the P2M between the CPU and the IOMMU, and the map_grant_ref()
>>>>> function is written in such a way that we have to call
>>>>> iommu_legacy_{,un}map() only if the P2M is not shared.
>>>>
>>>> map_grant_ref() will call the IOMMU if gnttab_need_iommu_mapping() returns
>>>> true. I don't really see where this is assuming the P2M is not shared.
>>>>
>>>> In fact, on x86, this will always be false for HVM domain (they support
>>>> both shared and separate page-tables).
>>>>
>>>>> As we are sharing the P2M, when we call the iommu_map() function it
>>>>> will overwrite the existing GFN -> MFN entry (for dom0, the GFN is
>>>>> the same as the MFN), and when we call iommu_unmap() it will unmap
>>>>> the (GFN -> MFN) entry from the page-table.
>>>> AFAIK, there should be nothing mapped at that GFN because the page belongs
>>>> to the guest. At worse, we would overwrite a mapping that is the same.
>>> Sorry, I should have mentioned before that the backend/frontend is
>>> dom0 in this case and the GFN is mapped. I am trying to attach the
>>> block device to dom0.
>>
>> Ah, your log makes a lot more sense now. Thank you for the clarification!
>>
>> So yes, I agree that iommu_{,un}map() will do the wrong thing if the
>> frontend and backend are in the same domain.
>>
>> I don't know what the state is in Linux, but from Xen's PoV it should
>> be possible to have the backend/frontend in the same domain.
>>
>> I think we want to ignore the IOMMU mapping request when the domain is the
>> same. Can you try this small untested patch:
> 
> Given that all the pages already owned by the domain should already be
> in the shared pagetable between MMU and IOMMU, there is no need to
> create a second mapping. In fact it is guaranteed to overlap with an
> existing mapping.

It is **almost** guaranteed :). I can see a few reasons for this not to 
be valid:
    - Using the domain shared info page in a grant
    - With the right timing, a different vCPU could remove the mapping 
after the P2M walk

That said, I feel this is not expected behavior for a guest, so it is 
not something we should care about, at least for now.

> In theory, if guest_physmap_add_entry returned -EEXIST if a mapping
> identical to the one we want to add is already in the pagetable, in this
> instance we would see -EEXIST being returned.

While I agree that the GFN and MFN would be the same, the mapping would 
still not be identical, because the P2M type (and potentially the 
permissions) will differ.

However, guest_physmap_add_entry() doesn't do such a check today. It 
will just happily replace any mapping. It would be good to harden the 
P2M code, as this is not the first time we have seen reports of 
mappings being overwritten.

I actually have a task on my todo list, but I never got the chance to 
spend time on it.

> 
> Based on that, I cannot think of unwanted side-effects for this patch.
> It looks OK to me.
> 
> Given that it solves a different issue, I think it should be a separate
> patch from [1]. Julien, are you OK with that or would you rather merge
> the two?

They are two distinct issues. In fact, the bug has always been present 
on Arm. I will send a separate patch.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Feb 12 13:50:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 Feb 2021 13:50:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84279.157981 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAYqE-0007Jf-JD; Fri, 12 Feb 2021 13:50:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84279.157981; Fri, 12 Feb 2021 13:50:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAYqE-0007JY-Ft; Fri, 12 Feb 2021 13:50:38 +0000
Received: by outflank-mailman (input) for mailman id 84279;
 Fri, 12 Feb 2021 13:50:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAYqD-0007JQ-0p; Fri, 12 Feb 2021 13:50:37 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAYqC-0007JR-RR; Fri, 12 Feb 2021 13:50:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAYqC-00005I-Fi; Fri, 12 Feb 2021 13:50:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lAYqC-0000yD-F4; Fri, 12 Feb 2021 13:50:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=/zYZBpR5BFG42zPjSI55YcwPsmHpMYZ4mDVvW9zH2qo=; b=Dy3jmo1hMVfsAuSR0otxDj3vhr
	oj4aKw9LGF/ceFcOg52grGGTQO4oqciq+IX4VXnbmWTULedubCFT9WlW2FDYmtz5Sr8dX1A3fRdk7
	hh3PLvshhT7r9kZ0oPB/oCWw7jloKgkHr80Jtyf6JI0RnlE2YUwDz8xqIzPdTT0TPh/U=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159241-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.12-testing test] 159241: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:guest-localmigrate/x10:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8d26cdd3b66ab86d560dacd763d76ff3da95723e
X-Osstest-Versions-That:
    xen=cce7cbd986c122a86582ff3775b6b559d877407c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 12 Feb 2021 13:50:36 +0000

flight 159241 xen-4.12-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159241/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qcow2    19 guest-localmigrate/x10       fail  like 158556
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 158556
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 158556
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 158556
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 158556
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 158556
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 158556
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 158556
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 158556
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 158556
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 158556
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 158556
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8d26cdd3b66ab86d560dacd763d76ff3da95723e
baseline version:
 xen                  cce7cbd986c122a86582ff3775b6b559d877407c

Last test of basis   158556  2021-01-21 15:37:26 Z   21 days
Failing since        159017  2021-02-04 15:06:13 Z    7 days    6 attempts
Testing same since   159052  2021-02-05 18:27:22 Z    6 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   cce7cbd986..8d26cdd3b6  8d26cdd3b66ab86d560dacd763d76ff3da95723e -> stable-4.12


From xen-devel-bounces@lists.xenproject.org Fri Feb 12 14:23:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 Feb 2021 14:23:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84292.158010 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAZMC-0001u0-NF; Fri, 12 Feb 2021 14:23:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84292.158010; Fri, 12 Feb 2021 14:23:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAZMC-0001tt-KL; Fri, 12 Feb 2021 14:23:40 +0000
Received: by outflank-mailman (input) for mailman id 84292;
 Fri, 12 Feb 2021 14:23:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAZMB-0001tl-Bn; Fri, 12 Feb 2021 14:23:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAZMB-0007wH-5p; Fri, 12 Feb 2021 14:23:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAZMA-000126-Tx; Fri, 12 Feb 2021 14:23:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lAZMA-0001iZ-TV; Fri, 12 Feb 2021 14:23:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=l7CMZcYdOZL0Q88BUcPEBjL31Bkf514UB7r4SCsZy4A=; b=ZQOPvVyiyob1HzFAJotSTH3ScL
	/WlufR2vYWI3lq42Al8sWO5dQff2FFxVkIxVtNwYHFFrcUn8QgI9A/pnEcUf+TwgC9U2NG5S42Jfj
	/0ZwJHA80LpliOtY08L+cvl9Zhop5HGR6/GZqHes9E8DFejDu+lASCrC/pzs9Za4CsS0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159248-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 159248: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=1d27e58e401faea284309039f3962cb3cb4549fc
X-Osstest-Versions-That:
    ovmf=124f1dd1ee1140b441151043aacbe5d33bb5ab79
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 12 Feb 2021 14:23:38 +0000

flight 159248 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159248/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 1d27e58e401faea284309039f3962cb3cb4549fc
baseline version:
 ovmf                 124f1dd1ee1140b441151043aacbe5d33bb5ab79

Last test of basis   159198  2021-02-10 11:26:55 Z    2 days
Testing same since   159248  2021-02-11 10:27:56 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michael Kubacki <michael.kubacki@microsoft.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   124f1dd1ee..1d27e58e40  1d27e58e401faea284309039f3962cb3cb4549fc -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Feb 12 15:40:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 Feb 2021 15:40:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84308.158049 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAaYV-0000nE-37; Fri, 12 Feb 2021 15:40:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84308.158049; Fri, 12 Feb 2021 15:40:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAaYU-0000n7-Vu; Fri, 12 Feb 2021 15:40:26 +0000
Received: by outflank-mailman (input) for mailman id 84308;
 Fri, 12 Feb 2021 15:40:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o46S=HO=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lAaYT-0000ix-Nr
 for xen-devel@lists.xenproject.org; Fri, 12 Feb 2021 15:40:25 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 01228532-2bd6-448b-a830-03df601056c7;
 Fri, 12 Feb 2021 15:40:16 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 01228532-2bd6-448b-a830-03df601056c7
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613144416;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=AkTySBnlGFyVMAxg/gcAuosdJVnwo4RsV9buhZjnQmI=;
  b=Fto+Pz5B0Aef1KmZxa1alOzmlvDsusbB3sqf88EPWlMLjp7zLCuIso6G
   mfC/QAYmExw9rfZutIm72t1eKhfvdNwJgJT5F8u7YmVOnvNxW44AIO+c6
   GBMeoPHf+A5alb2okvxLmGfAEwIi1hN5BXTS0SzF1oQM79PEvuWbsq+IM
   4=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: HownhAwJMB882tcBvJ6G+QOZ8MyF74dHFD5WPKB4/YdYun8zrgjFznZa7eMmo3fa6yqRCEK46f
 c4CFRvhfwylaL3mc3COpuLJLVUeU7P5OkvJI9wV6troHksPpTN0Pzi1s9/ff44nmfA9bo+25Nf
 ZsGTKWfrZzgZGcZtPbpTKihFpvu9Y03/bNMaXVxR0YjKj8hzxYAJ3t6h7trMXH8etYxCBhGmYi
 JsCe0LW4lgIg8TcIcG9FrNXj6eokQGwB1VwTJdDlV0JDuXCk4din0IC+sfucPWf3j80XJbtVVH
 g08=
X-SBRS: 5.2
X-MesageID: 38508885
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,174,1610427600"; 
   d="scan'208";a="38508885"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Anthony PERARD
	<anthony.perard@citrix.com>
Subject: [PATCH 05/10] tools/libxl: Fix uninitialised variable in libxl__write_stub_dmargs()
Date: Fri, 12 Feb 2021 15:39:48 +0000
Message-ID: <20210212153953.4582-6-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210212153953.4582-1-andrew.cooper3@citrix.com>
References: <20210212153953.4582-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Various versions of gcc, when compiling with -Og, complain:

  libxl_dm.c: In function ‘libxl__write_stub_dmargs’:
  libxl_dm.c:2166:16: error: ‘dmargs’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
               rc = libxl__xs_write_checked(gc, t, path, dmargs);
               ~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

It can't, but only because of how the is_linux_stubdom checks line up.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Ian Jackson <iwj@xenproject.org>
CC: Wei Liu <wl@xen.org>
CC: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/libs/light/libxl_dm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
index 1ca21e4b81..7bbb8792ea 100644
--- a/tools/libs/light/libxl_dm.c
+++ b/tools/libs/light/libxl_dm.c
@@ -2099,7 +2099,7 @@ static int libxl__write_stub_dmargs(libxl__gc *gc,
 {
     struct xs_permissions roperm[2];
     xs_transaction_t t = XBT_NULL;
-    char *dmargs;
+    char *dmargs = NULL;
     int rc;
 
     roperm[0].id = 0;
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Feb 12 15:40:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 Feb 2021 15:40:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84306.158025 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAaYL-0000j9-HT; Fri, 12 Feb 2021 15:40:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84306.158025; Fri, 12 Feb 2021 15:40:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAaYL-0000j2-EQ; Fri, 12 Feb 2021 15:40:17 +0000
Received: by outflank-mailman (input) for mailman id 84306;
 Fri, 12 Feb 2021 15:40:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o46S=HO=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lAaYJ-0000ix-UV
 for xen-devel@lists.xenproject.org; Fri, 12 Feb 2021 15:40:16 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0497549c-6442-4ccf-ba19-070ba799dfdc;
 Fri, 12 Feb 2021 15:40:14 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0497549c-6442-4ccf-ba19-070ba799dfdc
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613144414;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version;
  bh=9ilKt//pD9WWwJmxzImNAH8JgkNnaX0KcmVT6/nsyOQ=;
  b=TGuIyEm4uG2FoOiaRkgb+qI7ura6onEHhWcky31AlNrEr+T0yqwoMQj+
   i7gGdjG8lUGQhXdCxfTRGI1pzPLMzLTOuGwwhmpgzQHwCx1SSxpjPBDgs
   lUVC1vKwJ/dwecPMiM+KQPwJxhzpOorOIpDQbmXjgQkwEbElYUPBBkoWY
   s=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: yZYg3LdXWpas36oecYc8GfAQ44qTZtj34YojGj+i3Y/nUAffyNtu041+sfrB7t0+QzgSQnU4Vx
 CE6fbK/BHU37/SdXYauPMkSVq4Kuw7kZVQWw66pkpyl63zDqcRMX+75ukBgAJSkBtFuDTsr15f
 L6DBL/e93xWopf1n88mpx6DOurQDlD37sa1pHwN62lENUtfkIuodDMnYt7WNe4rESdpMxNza9J
 PvevuBg0Zbcdri/Nx9avtE4h5McP0/HWlr4Jux+x7NdRYdJG9XZlDDMbfSfTYgyfoPr6ekp8ut
 iTc=
X-SBRS: 5.2
X-MesageID: 38508879
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,174,1610427600"; 
   d="scan'208";a="38508879"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>
Subject: [PATCH 03/10] tools/libxg: Fix uninitialised variable in meminit()
Date: Fri, 12 Feb 2021 15:39:46 +0000
Message-ID: <20210212153953.4582-4-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210212153953.4582-1-andrew.cooper3@citrix.com>
References: <20210212153953.4582-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain

Various versions of gcc, when compiling with -Og, complain:

  xg_dom_arm.c: In function 'meminit':
  xg_dom_arm.c:420:19: error: 'p2m_size' may be used uninitialized in this function [-Werror=maybe-uninitialized]
    420 |     dom->p2m_size = p2m_size;
        |     ~~~~~~~~~~~~~~^~~~~~~~~~

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Ian Jackson <iwj@xenproject.org>
CC: Wei Liu <wl@xen.org>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>

Julien/Stefano: I can't work out how this variable is supposed to work, and
the fact that it isn't a straight accumulation across the RAM banks looks
suspect.
---
 tools/libs/guest/xg_dom_arm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/libs/guest/xg_dom_arm.c b/tools/libs/guest/xg_dom_arm.c
index 94948d2b20..f1b8d06f75 100644
--- a/tools/libs/guest/xg_dom_arm.c
+++ b/tools/libs/guest/xg_dom_arm.c
@@ -373,7 +373,7 @@ static int meminit(struct xc_dom_image *dom)
     const uint64_t modsize = dtb_size + ramdisk_size;
     const uint64_t ram128mb = bankbase[0] + (128<<20);
 
-    xen_pfn_t p2m_size;
+    xen_pfn_t p2m_size = 0;
     uint64_t bank0end;
 
     assert(dom->rambase_pfn << XC_PAGE_SHIFT == bankbase[0]);
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Feb 12 15:40:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 Feb 2021 15:40:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84309.158062 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAaYa-0000rW-Dn; Fri, 12 Feb 2021 15:40:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84309.158062; Fri, 12 Feb 2021 15:40:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAaYa-0000rN-9l; Fri, 12 Feb 2021 15:40:32 +0000
Received: by outflank-mailman (input) for mailman id 84309;
 Fri, 12 Feb 2021 15:40:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o46S=HO=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lAaYY-0000ix-O2
 for xen-devel@lists.xenproject.org; Fri, 12 Feb 2021 15:40:30 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b53a4ed4-952e-4bde-aa3e-303df37baa6d;
 Fri, 12 Feb 2021 15:40:16 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b53a4ed4-952e-4bde-aa3e-303df37baa6d
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613144416;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version;
  bh=THEW0Vd+o8+5NNZotQAfhag3Lg3d+kZffhIX6B7tke8=;
  b=M3SFRRzPFVN3UbIedZ6vE/toxx2jRMFqDrxjXKPppSUZM88OIDw+aNeh
   LsZnCJngO53TjpDiSp3vkD4YOtULBiBvlwJNOPkb2qNdmAhELiQFIwafA
   oOyEHlDBaq8z4BtL3PB9hi+A9Im3T6U72ameDN/ZCpymNvI3SXFM54Cii
   M=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 6Fw2KaezQaX/2xsw/xTG3V/cEq3H06/GILUVL4Y3NEK8P+ifbKijqr9FjN+/AIxZbV19l1Foc0
 li4F9FqP2dlWBqgmjJkF7KiuLPTCRnOgXq7WTZ+pqtPBfkQ8gKYMumEST+xKSCHPokhEe2qLsN
 dQW70Ec2AFsvezMup9xPwZw/THN//JMfLN/oJSP16nuZ1HcYj6CDdmeKim3GYNV5MkCYiDW7M3
 Caa+K5RSzxr5qTNo32reNxfMf9L34HCUOAf1KCEef0pxUNhdNY8oGc6dKqiXvIGH7LhpYHRwni
 fpY=
X-SBRS: 5.2
X-MesageID: 37085875
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,174,1610427600"; 
   d="scan'208";a="37085875"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Juergen Gross <jgross@suse.com>
Subject: [PATCH 07/10] tools: Use -Og for debug builds when available
Date: Fri, 12 Feb 2021 15:39:50 +0000
Message-ID: <20210212153953.4582-8-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210212153953.4582-1-andrew.cooper3@citrix.com>
References: <20210212153953.4582-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain

The recommended optimisation level for debugging is -Og, and is what tools
such as gdb prefer.  In practice, it equates to -O1 with a few specific
optimisations turned off.

abi-dumper in particular wants the libraries it inspects in this form.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Ian Jackson <iwj@xenproject.org>
CC: Wei Liu <wl@xen.org>
CC: Juergen Gross <jgross@suse.com>
---
 tools/Rules.mk | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/tools/Rules.mk b/tools/Rules.mk
index f61da81f4a..2907ed2d39 100644
--- a/tools/Rules.mk
+++ b/tools/Rules.mk
@@ -106,8 +106,9 @@ endif
 CFLAGS_libxenlight += $(CFLAGS_libxenctrl)
 
 ifeq ($(debug),y)
-# Disable optimizations
-CFLAGS += -O0 -fno-omit-frame-pointer
+# Use -Og if available, -O0 otherwise
+dbg_opt_level := $(call cc-option,$(CC),-Og,-O0)
+CFLAGS += $(dbg_opt_level) -fno-omit-frame-pointer
 # But allow an override to -O0 in case Python enforces -D_FORTIFY_SOURCE=<n>.
 PY_CFLAGS += $(PY_NOOPT_CFLAGS)
 else
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Feb 12 15:40:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 Feb 2021 15:40:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84307.158038 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAaYP-0000kD-R6; Fri, 12 Feb 2021 15:40:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84307.158038; Fri, 12 Feb 2021 15:40:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAaYP-0000k6-Ma; Fri, 12 Feb 2021 15:40:21 +0000
Received: by outflank-mailman (input) for mailman id 84307;
 Fri, 12 Feb 2021 15:40:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o46S=HO=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lAaYO-0000ix-NY
 for xen-devel@lists.xenproject.org; Fri, 12 Feb 2021 15:40:20 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 03ecbdec-a497-49fe-88a0-a97db26888a2;
 Fri, 12 Feb 2021 15:40:15 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 03ecbdec-a497-49fe-88a0-a97db26888a2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613144415;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=8n3LNq3HpJRCzFtQilT2o7NpxGGvtcLJdJkdvWcMQnw=;
  b=UOgfUupnQL5p0yk51e3fisezYp6hlsMNH6UyPVrKApSDfGMmJffenVBO
   yLtlsPvab9cHXjk/9dayObOQudrs3DofQhA1IZYPWhHkOhsQ68k4/Ng/b
   L9CkvL9/mnE0xHIMfLZB265/5icY6aSu+4R1mB/SU+0a9Hb8zsNy3xpM2
   E=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: Guvd7p8XjY8XoUmY5MoiKRxmWTdrFrJQvYwSBTWTuijsXIsyt20eYDpEYtfH2O5MPWoj0PTZW5
 gW+twg/8I+eXOb4i9h+6tmEvxZiVKtnFr7su7KUCv3KWc2HZ9ijINDwmS7ktj4xKd+vHu5iKde
 7SttGN2e6fKdhIZ4GNCVU0c2pjUepQS0r3ElBOzU6MexQBetZO8BHtO5ScZgd190YiX2gG8pAt
 im17uWP0xr7Pmh6vpvmx8DyPXiQ/G5vx8rrkiQvPtyYgOqwRnZ1CHMkXgr+L0m0bunnMLWCv8d
 qm4=
X-SBRS: 5.2
X-MesageID: 38508881
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,174,1610427600"; 
   d="scan'208";a="38508881"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Juergen Gross <jgross@suse.com>
Subject: [PATCH 06/10] stubdom/xenstored: Fix uninitialised variables in lu_read_state()
Date: Fri, 12 Feb 2021 15:39:49 +0000
Message-ID: <20210212153953.4582-7-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210212153953.4582-1-andrew.cooper3@citrix.com>
References: <20210212153953.4582-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Various versions of gcc, when compiling with -Og, complain:

  xenstored_control.c: In function ‘lu_read_state’:
  xenstored_control.c:540:11: error: ‘state.size’ is used uninitialized in this
  function [-Werror=uninitialized]
    if (state.size == 0)
        ~~~~~^~~~~
  xenstored_control.c:543:6: error: ‘state.buf’ may be used uninitialized in
  this function [-Werror=maybe-uninitialized]
    pre = state.buf;
    ~~~~^~~~~~~~~~~
  xenstored_control.c:550:23: error: ‘state.buf’ may be used uninitialized in
  this function [-Werror=maybe-uninitialized]
     (void *)head - state.buf < state.size;
                    ~~~~~^~~~
  xenstored_control.c:550:35: error: ‘state.size’ may be used uninitialized in
  this function [-Werror=maybe-uninitialized]
     (void *)head - state.buf < state.size;
                                ~~~~~^~~~~

Interestingly, this is only in the stubdom build.  I can't identify any
relevant differences vs the regular tools build.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Ian Jackson <iwj@xenproject.org>
CC: Wei Liu <wl@xen.org>
CC: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/xenstored_control.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index 1f733e0a04..f10beaf85e 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -530,7 +530,7 @@ static const char *lu_dump_state(const void *ctx, struct connection *conn)
 
 void lu_read_state(void)
 {
-	struct lu_dump_state state;
+	struct lu_dump_state state = {};
 	struct xs_state_record_header *head;
 	void *ctx = talloc_new(NULL); /* Work context for subfunctions. */
 	struct xs_state_preamble *pre;
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Feb 12 15:40:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 Feb 2021 15:40:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84310.158074 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAaYf-0000xK-Ni; Fri, 12 Feb 2021 15:40:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84310.158074; Fri, 12 Feb 2021 15:40:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAaYf-0000xB-Jr; Fri, 12 Feb 2021 15:40:37 +0000
Received: by outflank-mailman (input) for mailman id 84310;
 Fri, 12 Feb 2021 15:40:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o46S=HO=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lAaYd-0000ix-O8
 for xen-devel@lists.xenproject.org; Fri, 12 Feb 2021 15:40:35 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b6990a41-5224-4e58-86c7-bbec74fd5ff9;
 Fri, 12 Feb 2021 15:40:16 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b6990a41-5224-4e58-86c7-bbec74fd5ff9
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613144416;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version;
  bh=xRy21DvjvCrpVWjUNEKJmq6KENL31WHuSFgYxPxdZ5Y=;
  b=BXgLuG64lHW7VUdMo+kj1T03GBulBkrF7agZRDm92wN4Rg9lccD1tOe4
   Wmdk5HEfXGdVxayckaXRfzablsU1en8Pp2wdnzbICxIyGCnnyk/aFgBvA
   O16rPrv8XtATh75ZdMruQXm0jqMPGop1q/cD+bD7n81sV8yBysYoV7WmN
   4=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 1GPl3yxibCxgECKAjeZrzdszMYAG2TEfiCuJFMKA6SDBa3uPSy8e9DGGtSxuqhLr6synXsJZOL
 dw98UqTfaA1mUFGvgnR5N9EP6+tdudSPFnvx6VHznAbe+sKdRMej/e23fe4zxLg+JhZrA65YOq
 8/m0Zgv0qQ5FQATavZE6BIEDEFTEo7YvIblWmNtAWdydiQtbdu1hHYLQ58JqrlersoyvoEQgpF
 i3B2L8ANF3HB15BjhG6euS/wJOEp3uP2qFIo/wTYL8axOjWuc63b8bRrgUch9gf8n1tn4M1Rjz
 7SA=
X-SBRS: 5.2
X-MesageID: 38508886
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,174,1610427600"; 
   d="scan'208";a="38508886"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Anthony PERARD
	<anthony.perard@citrix.com>
Subject: [PATCH 01/10] tools/xl: Fix exit code for `xl vkbattach`
Date: Fri, 12 Feb 2021 15:39:44 +0000
Message-ID: <20210212153953.4582-2-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210212153953.4582-1-andrew.cooper3@citrix.com>
References: <20210212153953.4582-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain

Various versions of gcc, when compiling with -Og, complain:

  xl_vkb.c: In function 'main_vkbattach':
  xl_vkb.c:79:12: error: 'rc' may be used uninitialized in this function [-Werror=maybe-uninitialized]
     79 |     return rc;
        |            ^~

The dryrun_only path really does leave rc uninitialised.  Introduce a done
label for success paths to use.

Fixes: a15166af7c3 ("xl: add vkb config parser and CLI")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Ian Jackson <iwj@xenproject.org>
CC: Wei Liu <wl@xen.org>
CC: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/xl/xl_vkb.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/tools/xl/xl_vkb.c b/tools/xl/xl_vkb.c
index f6ed9e05ee..728ac9470b 100644
--- a/tools/xl/xl_vkb.c
+++ b/tools/xl/xl_vkb.c
@@ -64,7 +64,7 @@ int main_vkbattach(int argc, char **argv)
         char *json = libxl_device_vkb_to_json(ctx, &vkb);
         printf("vkb: %s\n", json);
         free(json);
-        goto out;
+        goto done;
     }
 
     if (libxl_device_vkb_add(ctx, domid, &vkb, 0)) {
@@ -72,6 +72,7 @@ int main_vkbattach(int argc, char **argv)
         rc = ERROR_FAIL; goto out;
     }
 
+done:
     rc = 0;
 
 out:
-- 
2.11.0
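The control-flow shape of the fix can be sketched in isolation; attach() and its parameters below are hypothetical stand-ins, not the real xl code:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical reduction of main_vkbattach(): the dry-run path used to
 * jump straight to "out" with rc still uninitialised.  Routing success
 * paths through a "done" label that sets rc = 0 gives every exit a
 * defined value. */
static int attach(bool dryrun_only, bool add_fails)
{
    int rc; /* deliberately left uninitialised, as in the original */

    if (dryrun_only)
        goto done;   /* previously: goto out, leaving rc unset */

    if (add_fails) {
        rc = 1;      /* stand-in for ERROR_FAIL */
        goto out;
    }

done:
    rc = 0;

out:
    return rc;
}
```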



From xen-devel-bounces@lists.xenproject.org Fri Feb 12 15:40:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 Feb 2021 15:40:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84311.158086 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAaYk-00011y-05; Fri, 12 Feb 2021 15:40:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84311.158086; Fri, 12 Feb 2021 15:40:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAaYj-00011o-TF; Fri, 12 Feb 2021 15:40:41 +0000
Received: by outflank-mailman (input) for mailman id 84311;
 Fri, 12 Feb 2021 15:40:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o46S=HO=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lAaYi-0000ix-OA
 for xen-devel@lists.xenproject.org; Fri, 12 Feb 2021 15:40:40 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a48c940c-c2a4-4e10-b77d-e5dc69be1924;
 Fri, 12 Feb 2021 15:40:17 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a48c940c-c2a4-4e10-b77d-e5dc69be1924
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613144417;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version;
  bh=d2FNP0eIgRtKpMK8y8WPj0QMRGu85RJJU+Z7e2YOdHA=;
  b=Cinlk9HDiCvq3Z9iFMvpRfsXcbNp5c0TJ35xGP08Z7SsPnmIA0O1G9AY
   6QaTQxP52OReibXY5EUW++M6dBhQsVq+LBl8TXNs2lX77GS9hhV1E/vbD
   3v3tcCVPHXRbpr4ismX9N710M11zAhxgTnmPPdZKRk9f3CbpRLvG6F/pp
   Y=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: +3kB8HBQiTyA76xd0+qDZKlRG1tgbdPr5WmecmHMH9p0eflak7dFXBsNRNKf0YpyOBw/ab8LZU
 F/3isO+RAzyYQxC5H/zOoTDy9X8JlkzPpEfXSOdWO3eiyeZFxvwREPxKdYu9R/17l7RregcuU8
 DhhRcATOjEHz83jU3MYZ/OibaGmZgsRg5W/a8RwAGFLxTQ7sCXHDbpAtfRrh8XpyT7qK4q4Q0X
 W+qKAVXl/ZvGfy8bULV4x9F7d9WnUIYpglids200kwapbW9k1x8R+HMf5hI2rQzubnu8MGnKda
 CPU=
X-SBRS: 5.2
X-MesageID: 37518634
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,174,1610427600"; 
   d="scan'208";a="37518634"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Juergen Gross <jgross@suse.com>
Subject: [PATCH 09/10] tools/libs: Add rule to generate headers.lst
Date: Fri, 12 Feb 2021 15:39:52 +0000
Message-ID: <20210212153953.4582-10-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210212153953.4582-1-andrew.cooper3@citrix.com>
References: <20210212153953.4582-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain

abi-dumper needs a list of the public header files for shared objects, and
only accepts this in the form of a file.

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Ian Jackson <iwj@xenproject.org>
CC: Wei Liu <wl@xen.org>
CC: Juergen Gross <jgross@suse.com>
---
 tools/libs/.gitignore | 1 +
 tools/libs/libs.mk    | 9 ++++++++-
 2 files changed, 9 insertions(+), 1 deletion(-)
 create mode 100644 tools/libs/.gitignore

diff --git a/tools/libs/.gitignore b/tools/libs/.gitignore
new file mode 100644
index 0000000000..4a13126144
--- /dev/null
+++ b/tools/libs/.gitignore
@@ -0,0 +1 @@
+*/headers.lst
diff --git a/tools/libs/libs.mk b/tools/libs/libs.mk
index 0b3381755a..ac68996ab2 100644
--- a/tools/libs/libs.mk
+++ b/tools/libs/libs.mk
@@ -76,6 +76,10 @@ endif
 
 headers.chk: $(AUTOINCS)
 
+headers.lst: FORCE
+	@{ $(foreach h,$(LIBHEADERS),echo $(h);) } > $@.tmp
+	@$(call move-if-changed,$@.tmp,$@)
+
 libxen$(LIBNAME).map:
 	echo 'VERS_$(MAJOR).$(MINOR) { global: *; };' >$@
 
@@ -118,9 +122,12 @@ TAGS:
 clean:
 	rm -rf *.rpm $(LIB) *~ $(DEPS_RM) $(LIB_OBJS) $(PIC_OBJS)
 	rm -f lib$(LIB_FILE_NAME).so.$(MAJOR).$(MINOR) lib$(LIB_FILE_NAME).so.$(MAJOR)
-	rm -f headers.chk
+	rm -f headers.chk headers.lst
 	rm -f $(PKG_CONFIG)
 	rm -f _paths.h
 
 .PHONY: distclean
 distclean: clean
+
+.PHONY: FORCE
+FORCE:
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Feb 12 15:40:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 Feb 2021 15:40:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84312.158098 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAaYp-00017R-9P; Fri, 12 Feb 2021 15:40:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84312.158098; Fri, 12 Feb 2021 15:40:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAaYp-00017K-5l; Fri, 12 Feb 2021 15:40:47 +0000
Received: by outflank-mailman (input) for mailman id 84312;
 Fri, 12 Feb 2021 15:40:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o46S=HO=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lAaYn-0000ix-OS
 for xen-devel@lists.xenproject.org; Fri, 12 Feb 2021 15:40:45 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6c936ab3-6ef4-4388-881b-171fce2cd81f;
 Fri, 12 Feb 2021 15:40:17 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6c936ab3-6ef4-4388-881b-171fce2cd81f
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613144417;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version;
  bh=EN+VlYU9zCh2xnbJq+2hMB7Ag2s+TPKO2KR/6NYIL/0=;
  b=UnuOktGy7AVOGKmNfwMaiEzuP/pZRZbVQwO6mcGgMIhgKko1+XbBbtYL
   /Xxn+ZihSSkRtmVZ7yH1HkPDF3Z8QYi2P9c6CyXsV7T6PBRQMqwUlx0xd
   MhAkeKe/m56zSwcjJSgioXmP0xWEjzeDJVvg0ahiHTp4Oumfg3eGRClaa
   M=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: bGzisur3WNKmJPki3iBXyYIzRvvHVmcfLmOJo5gzjij9cHnm2JNK6F6XLJ3fRR4y5vWY1FXyjp
 48tPVCEb/xI6AzDiTbJDZPJWCPtLFZmfiD2KRP+/cLaBGke428bEuexBDSwzphs6MrNrkCjK5y
 Wt72sy+xp09jkKjBIiMO6xs5LgK5jqPDQk8VoiZhnZZ6mvvsOzoR1vQsYMKRhetY3RyOIoV5Mi
 kQb8zdB6iRncX46VdYwn8SuUOb8U7xPWvjcwdZ46YhVN8Q5pJDtCZMHlAqrlXTX6nTLqaWOjGl
 aLg=
X-SBRS: 5.2
X-MesageID: 37085879
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,174,1610427600"; 
   d="scan'208";a="37085879"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: [PATCH 02/10] tools/libxg: Fix uninitialised variable in write_x86_cpu_policy_records()
Date: Fri, 12 Feb 2021 15:39:45 +0000
Message-ID: <20210212153953.4582-3-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210212153953.4582-1-andrew.cooper3@citrix.com>
References: <20210212153953.4582-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain

Various versions of gcc, when compiling with -Og, complain:

  xg_sr_common_x86.c: In function 'write_x86_cpu_policy_records':
  xg_sr_common_x86.c:92:12: error: 'rc' may be used uninitialized in this function [-Werror=maybe-uninitialized]
     92 |     return rc;
        |            ^~

The complaint is legitimate: the uninitialised read can occur given unexpected
behaviour of two related hypercalls, in combination with a libc which permits
zero-length malloc()s.

Have an explicit rc = 0 on the success path, and make the MSRs record error
handling consistent with the CPUID record before it.

Fixes: f6b2b8ec53d ("libxc/save: Write X86_{CPUID,MSR}_DATA records")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Ian Jackson <iwj@xenproject.org>
CC: Wei Liu <wl@xen.org>
---
 tools/libs/guest/xg_sr_common_x86.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/tools/libs/guest/xg_sr_common_x86.c b/tools/libs/guest/xg_sr_common_x86.c
index 6f12483907..3168c5485f 100644
--- a/tools/libs/guest/xg_sr_common_x86.c
+++ b/tools/libs/guest/xg_sr_common_x86.c
@@ -83,7 +83,13 @@ int write_x86_cpu_policy_records(struct xc_sr_context *ctx)
 
     msrs.length = nr_msrs * sizeof(xen_msr_entry_t);
     if ( msrs.length )
+    {
         rc = write_record(ctx, &msrs);
+        if ( rc )
+            goto out;
+    }
+
+    rc = 0;
 
  out:
     free(cpuid.data);
-- 
2.11.0
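A standalone sketch of the failure mode; write_policy() and its sizes are invented for illustration, whereas the real function writes CPUID and MSR records via write_record():

```c
#include <assert.h>
#include <stddef.h>

/* Invented reduction of write_x86_cpu_policy_records(): when the MSR
 * buffer length is zero, write_record() is skipped entirely and, before
 * the fix, rc was never assigned on that path. */
static int write_policy(size_t nr_msrs)
{
    int rc;
    size_t length = nr_msrs * 16; /* stand-in for sizeof(xen_msr_entry_t) */

    if (length) {
        rc = 0;      /* stand-in for write_record() succeeding */
        if (rc)
            goto out;
    }

    rc = 0;          /* the fix: explicit success on every exit path */

 out:
    return rc;
}
```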



From xen-devel-bounces@lists.xenproject.org Fri Feb 12 15:40:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 Feb 2021 15:40:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84313.158110 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAaYu-0001DH-OA; Fri, 12 Feb 2021 15:40:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84313.158110; Fri, 12 Feb 2021 15:40:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAaYu-0001D6-Ju; Fri, 12 Feb 2021 15:40:52 +0000
Received: by outflank-mailman (input) for mailman id 84313;
 Fri, 12 Feb 2021 15:40:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o46S=HO=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lAaYs-0000ix-Of
 for xen-devel@lists.xenproject.org; Fri, 12 Feb 2021 15:40:50 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a1986579-83f5-4a77-bdee-3db06d09e860;
 Fri, 12 Feb 2021 15:40:18 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a1986579-83f5-4a77-bdee-3db06d09e860
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613144418;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version;
  bh=gxhqNkRAqt4D8bXDbOgXUWNa09LADvFwdwUf17wrXSw=;
  b=R80rtD6giKjL9kkupOQKKmhZjbkW2sQNjKwX05b5yJXbs9z/+Wh2Oh8Y
   wBCRzI1kVVIlH+GrxxTQFljGsMG7vdolxK2D/VyDk63Pi2f5hqjzjhX/D
   BhQI/1/RlqriAzW01Nxx+rlG1MMpaSoKDgbnxk4zUtE/sFQykLjeylwmX
   c=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: RKCf/rOpegXLOR/rGRr2O17NtjZ96m2jnR3RcZHKCvoQth9eyEg3hr5TXOSDVPvgM0hU3RP9Aa
 ZVOhfKHBDiTJiOdeiIU6J5DxCSf0wNjoLhH7UBPAxE4pBz20I1moCLLE/ShGQEXpmfs+JaYhox
 12z0e8R2UXettahbW/Q5PZParO3n5jo7rd6f5wowS7uXtCZ9xn/G+nqF08eMN5LuhHyNStgQIT
 Yi08sSeTUiPiUoq9vfqO6JvSBTsNMk9XGB1JbJlu/wUEmezF15cR6TZYRsKS9L+4qlOtAEsiUh
 usQ=
X-SBRS: 5.2
X-MesageID: 37518635
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,174,1610427600"; 
   d="scan'208";a="37518635"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Anthony PERARD
	<anthony.perard@citrix.com>
Subject: [PATCH 04/10] tools/libxl: Fix uninitialised variable in libxl__domain_get_device_model_uid()
Date: Fri, 12 Feb 2021 15:39:47 +0000
Message-ID: <20210212153953.4582-5-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210212153953.4582-1-andrew.cooper3@citrix.com>
References: <20210212153953.4582-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain

Various versions of gcc, when compiling with -Og, complain:

  libxl_dm.c: In function 'libxl__domain_get_device_model_uid':
  libxl_dm.c:256:12: error: 'kill_by_uid' may be used uninitialized in this function [-Werror=maybe-uninitialized]
    256 |         if (kill_by_uid)
        |            ^

The logic is sufficiently complicated that I can't figure out whether the
complaint is legitimate or not.  There is exactly one path wanting kill_by_uid
set to true, so default it to false and drop the existing workarounds for this
problem at other optimisation levels.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Ian Jackson <iwj@xenproject.org>
CC: Wei Liu <wl@xen.org>
CC: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/libs/light/libxl_dm.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
index 291dee9b3f..1ca21e4b81 100644
--- a/tools/libs/light/libxl_dm.c
+++ b/tools/libs/light/libxl_dm.c
@@ -128,7 +128,7 @@ static int libxl__domain_get_device_model_uid(libxl__gc *gc,
     int rc;
     char *user;
     uid_t intended_uid = -1;
-    bool kill_by_uid;
+    bool kill_by_uid = false;
 
     /* Only qemu-upstream can run as a different uid */
     if (b_info->device_model_version != LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN)
@@ -176,7 +176,6 @@ static int libxl__domain_get_device_model_uid(libxl__gc *gc,
         LOGD(DEBUG, guest_domid,
              "dm_restrict disabled, starting QEMU as root");
         user = NULL; /* Should already be null, but just in case */
-        kill_by_uid = false; /* Keep older versions of gcc happy */
         rc = 0;
         goto out;
     }
@@ -227,7 +226,6 @@ static int libxl__domain_get_device_model_uid(libxl__gc *gc,
         LOGD(WARN, guest_domid, "Could not find user %s, falling back to %s",
              LIBXL_QEMU_USER_RANGE_BASE, LIBXL_QEMU_USER_SHARED);
         intended_uid = user_base->pw_uid;
-        kill_by_uid = false;
         rc = 0;
         goto out;
     }
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Feb 12 15:40:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 Feb 2021 15:40:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84314.158122 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAaYz-0001Id-2a; Fri, 12 Feb 2021 15:40:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84314.158122; Fri, 12 Feb 2021 15:40:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAaYy-0001IS-Us; Fri, 12 Feb 2021 15:40:56 +0000
Received: by outflank-mailman (input) for mailman id 84314;
 Fri, 12 Feb 2021 15:40:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o46S=HO=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lAaYx-0000ix-Oy
 for xen-devel@lists.xenproject.org; Fri, 12 Feb 2021 15:40:55 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 99fc8af0-a1b5-402c-8def-985e4836a66a;
 Fri, 12 Feb 2021 15:40:17 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 99fc8af0-a1b5-402c-8def-985e4836a66a
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613144416;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version;
  bh=lsfae/ZYMUTtHloQUljGbO2IeSsesWCyE9jSqEqM860=;
  b=PJ+jrVw2narBucB2O8lVOiUQP7ECcfL9NAcYtcWXamn3TauhlcsIv7UZ
   xBmOnqrAGGK6iwhXUdzLQeiCwhNaNcMUHVXb5jqLq6H61oXouOeygKa2X
   vov6d5KnhjX2he4rUe1OnzdYAnUyLLkjEB4Jemd0OO62Hz16M4ZKCJcFr
   A=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: yEUYvSEFIim5Jdww6A+wuZKUVYJBIVxGk/i2RPOJroGXupap9BtvLjgddQdvX1G0VHk2n+RkTA
 OhwPiTmolbklJkbR6p1H0zPWtnYB6lgJ+CfEGD8l1UUVMo4ny20DCUvksAbbDipn5eLBehOXsQ
 m+CF6O7Jo0ksSqTgC702zvQOuEdb57hY6od5ZkM5wBUBn0qosfdajOcyVRCm1IU3EvzYpFbtuF
 /dGkkjUraGXyhe2E819M1jGeZyinFk/mebU4g9S/bstypBwq2c8DkA/yLOIgJrvp5PESSaSUYe
 GX4=
X-SBRS: 5.2
X-MesageID: 38508888
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,174,1610427600"; 
   d="scan'208";a="38508888"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Juergen Gross <jgross@suse.com>
Subject: [PATCH 08/10] tools: Check for abi-dumper in ./configure
Date: Fri, 12 Feb 2021 15:39:51 +0000
Message-ID: <20210212153953.4582-9-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210212153953.4582-1-andrew.cooper3@citrix.com>
References: <20210212153953.4582-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain

This will be optional.  No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Ian Jackson <iwj@xenproject.org>
CC: Wei Liu <wl@xen.org>
CC: Juergen Gross <jgross@suse.com>
---
 config/Tools.mk.in |  1 +
 tools/configure    | 41 +++++++++++++++++++++++++++++++++++++++++
 tools/configure.ac |  1 +
 3 files changed, 43 insertions(+)

diff --git a/config/Tools.mk.in b/config/Tools.mk.in
index 48bd9ab731..d47936686b 100644
--- a/config/Tools.mk.in
+++ b/config/Tools.mk.in
@@ -19,6 +19,7 @@ BCC                 := @BCC@
 IASL                := @IASL@
 AWK                 := @AWK@
 FETCHER             := @FETCHER@
+ABI_DUMPER          := @ABI_DUMPER@
 
 # Extra folder for libs/includes
 PREPEND_INCLUDES    := @PREPEND_INCLUDES@
diff --git a/tools/configure b/tools/configure
index e63ca3797d..bb5acf9d43 100755
--- a/tools/configure
+++ b/tools/configure
@@ -682,6 +682,7 @@ OCAMLOPT
 OCAMLLIB
 OCAMLVERSION
 OCAMLC
+ABI_DUMPER
 INSTALL_DATA
 INSTALL_SCRIPT
 INSTALL_PROGRAM
@@ -5442,6 +5443,46 @@ $as_echo "no" >&6; }
 fi
 
 
+# Extract the first word of "abi-dumper", so it can be a program name with args.
+set dummy abi-dumper; ac_word=$2
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+$as_echo_n "checking for $ac_word... " >&6; }
+if ${ac_cv_path_ABI_DUMPER+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  case $ABI_DUMPER in
+  [\\/]* | ?:[\\/]*)
+  ac_cv_path_ABI_DUMPER="$ABI_DUMPER" # Let the user override the test with a path.
+  ;;
+  *)
+  as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+  IFS=$as_save_IFS
+  test -z "$as_dir" && as_dir=.
+    for ac_exec_ext in '' $ac_executable_extensions; do
+  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+    ac_cv_path_ABI_DUMPER="$as_dir/$ac_word$ac_exec_ext"
+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+    break 2
+  fi
+done
+  done
+IFS=$as_save_IFS
+
+  ;;
+esac
+fi
+ABI_DUMPER=$ac_cv_path_ABI_DUMPER
+if test -n "$ABI_DUMPER"; then
+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ABI_DUMPER" >&5
+$as_echo "$ABI_DUMPER" >&6; }
+else
+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+fi
+
+
 # Extract the first word of "perl", so it can be a program name with args.
 set dummy perl; ac_word=$2
 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
diff --git a/tools/configure.ac b/tools/configure.ac
index 6b611deb13..636e7077be 100644
--- a/tools/configure.ac
+++ b/tools/configure.ac
@@ -310,6 +310,7 @@ AC_PROG_CC
 AC_PROG_MAKE_SET
 AC_PROG_INSTALL
 AC_PATH_PROG([FLEX], [flex])
+AC_PATH_PROG([ABI_DUMPER], [abi-dumper])
 AX_PATH_PROG_OR_FAIL([PERL], [perl])
 AX_PATH_PROG_OR_FAIL([AWK], [awk])
 
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Feb 12 15:41:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 Feb 2021 15:41:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84316.158134 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAaZ4-0001Or-DY; Fri, 12 Feb 2021 15:41:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84316.158134; Fri, 12 Feb 2021 15:41:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAaZ4-0001Of-9l; Fri, 12 Feb 2021 15:41:02 +0000
Received: by outflank-mailman (input) for mailman id 84316;
 Fri, 12 Feb 2021 15:41:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o46S=HO=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lAaZ2-0000ix-PI
 for xen-devel@lists.xenproject.org; Fri, 12 Feb 2021 15:41:00 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5c825c4b-0a26-462a-891b-669940f5c0bc;
 Fri, 12 Feb 2021 15:40:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5c825c4b-0a26-462a-891b-669940f5c0bc
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613144422;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=Rz8UsAOinQtm3EzgaPvbSv/VidINXBxSC5At43dOJUQ=;
  b=D+/bns3hGvQNEpInGTrxn6sKpTUm/pnfRvOMnP41AbvTJtHOW3C32Cw5
   0VqK9q/AA5Ut1rzZD2tDjb/14vxQKKikE79wB6XGBl9X8zOAvaRBDxXd1
   16WSqYCOWaTxGburDuqVdRgtXyynHalwMjjMmZD9dyqVSh+xm0jrEgYcN
   E=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: +Wf30pnhbcD3GN9U9MLRbwvAkNGL9BlifUKjHxW3NtKLlThUKO1VZbaKVnqwO3KvD8YsRhWnfG
 MPiUbmNFZcEJ/QpmO9/gK8ds3y2k64xPl8lLVsfPIrmjvaYHoxeNHoGqnb5F3/5VMDVHuYDpsl
 Sbd0Ma5Gob6Ko9fSx4m7sOK5oDKcIDWYejrsydTAXdzcItpn95YAhMV54DUbO1B8xybaerd9me
 HZ9qEfMTQRsTJTICi2Dzh/rMq2QVDREALg8q3gg8K23Xvnp3VZrPiWb0kS8v/bs9MZk9q60aII
 PkQ=
X-SBRS: 5.2
X-MesageID: 37344043
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,174,1610427600"; 
   d="scan'208";a="37344043"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Anthony PERARD
	<anthony.perard@citrix.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>, Juergen Gross <jgross@suse.com>
Subject: [PATCH for-4.15 00/10] tools: Support to use abi-dumper on libraries
Date: Fri, 12 Feb 2021 15:39:43 +0000
Message-ID: <20210212153953.4582-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

The first 6 patches are fixes to build with the -Og optimisation level, which
is an expectation of abi-dumper and a good idea generally.  There are 2
definite bugfixes, and 4 of more questionable usefulness.  All fixes are
simple.

The next patch switches to -Og by default.  This is potentially risky - I've
fixed up all build failures that Gitlab CI can spot, but I don't guarantee
that I've fixed all of them.  However, it only affects debug builds - release
builds are unchanged, and we're before -RC1 so have plenty of time to react to
any fallout.

The final 3 patches arrange for abi-dumper to run if it is available in the
build environment.  This is strictly optional: it has no effect if abi-dumper
isn't in the build environment, and writes out one extra file when it is.


With this tooling in place, we can now add support to OSSTest to check for ABI
breakages in builds.

Andrew Cooper (10):
  tools/xl: Fix exit code for `xl vkbattach`
  tools/libxg: Fix uninitialised variable in write_x86_cpu_policy_records()
  tools/libxg: Fix uninitialised variable in meminit()
  tools/libxl: Fix uninitialised variable in libxl__domain_get_device_model_uid()
  tools/libxl: Fix uninitialised variable in libxl__write_stub_dmargs()
  stubdom/xenstored: Fix uninitialised variables in lu_read_state()
  tools: Use -Og for debug builds when available
  tools: Check for abi-dumper in ./configure
  tools/libs: Add rule to generate headers.lst
  tools/libs: Write out an ABI analysis when abi-dumper is available

 config/Tools.mk.in                  |  1 +
 tools/Rules.mk                      |  5 +++--
 tools/configure                     | 41 +++++++++++++++++++++++++++++++++++++
 tools/configure.ac                  |  1 +
 tools/libs/.gitignore               |  1 +
 tools/libs/guest/xg_dom_arm.c       |  2 +-
 tools/libs/guest/xg_sr_common_x86.c |  6 ++++++
 tools/libs/libs.mk                  | 18 +++++++++++++++-
 tools/libs/light/libxl_dm.c         |  6 ++----
 tools/xenstore/xenstored_control.c  |  2 +-
 tools/xl/xl_vkb.c                   |  3 ++-
 11 files changed, 76 insertions(+), 10 deletions(-)
 create mode 100644 tools/libs/.gitignore

-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Feb 12 15:45:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 Feb 2021 15:45:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84332.158146 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAadf-0001te-1g; Fri, 12 Feb 2021 15:45:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84332.158146; Fri, 12 Feb 2021 15:45:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAade-0001tX-Uv; Fri, 12 Feb 2021 15:45:46 +0000
Received: by outflank-mailman (input) for mailman id 84332;
 Fri, 12 Feb 2021 15:45:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=pKAX=HO=gmail.com=snitzer@srs-us1.protection.inumbo.net>)
 id 1lAadd-0001tS-HP
 for xen-devel@lists.xenproject.org; Fri, 12 Feb 2021 15:45:45 +0000
Received: from mail-pl1-f180.google.com (unknown [209.85.214.180])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id aed3b706-dc8a-43af-b503-f19b77648660;
 Fri, 12 Feb 2021 15:45:44 +0000 (UTC)
Received: by mail-pl1-f180.google.com with SMTP id s15so49589plr.9
 for <xen-devel@lists.xenproject.org>; Fri, 12 Feb 2021 07:45:44 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aed3b706-dc8a-43af-b503-f19b77648660
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=M8C1yZZgp4rNPfghNpxWN41GpCK1+InKLAelhxbvplw=;
        b=j0utPCU15ZdF4aMvuBalv8kNhluW4XUid0Ei0j/wrOqVu0XNhoipRGtO3JBrPg80T2
         QBHZaYU0lWlGoAW9xFSnkZEnych3aJgHqJKITskkg6jPsoqo7Uy9q5ocRd2MsmVWflSw
         CPjQWvrW/A2rW4ST9apzwhp4BkY+j728CBjuBrxLW4seYUI4S/8pjpOEEDgnaLFc8PFC
         G+3YiClxb8+mWtYDrzIxsn/gX75xIG2o5sbwXip11ECwRGm/bP2aBfCDsX38uk30FdeW
         Jnoynsw/+sYPsv1JtGlGVf6gFlVeK5BF+I2kePfYK3BHMuPjsll7XHZeRwSYA9Ax/8yk
         kNiw==
X-Gm-Message-State: AOAM533In0NeOFkBRyCDblgAB5Vz/FtLS7u1eb1b5G3Mg3FvuJxxAuXZ
	/5ixh/HCtNRi3oWnX7Sfu8iCD1PYbA1hAoI8Pj8=
X-Google-Smtp-Source: ABdhPJyYCXBmQgiV5syIf5Cv8mFWKTwy1gjS5Yhyhg4Qi/jYzZasdmgfRgJp2ybsEGtlgybQDIghy8J0Yy2+jaKx2kI=
X-Received: by 2002:a17:90a:4e1:: with SMTP id g88mr3222886pjg.7.1613144743400;
 Fri, 12 Feb 2021 07:45:43 -0800 (PST)
MIME-Version: 1.0
References: <20201116145809.410558-1-hch@lst.de> <20201116145809.410558-13-hch@lst.de>
In-Reply-To: <20201116145809.410558-13-hch@lst.de>
From: Mike Snitzer <snitzer@redhat.com>
Date: Fri, 12 Feb 2021 10:45:32 -0500
Message-ID: <CAMM=eLfD0_Am3--X+PsKPTfc9qzejxpMNjYwEh=WtjSa-iSncg@mail.gmail.com>
Subject: Re: [PATCH 12/78] dm: use set_capacity_and_notify
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Justin Sanders <justin@coraid.com>, 
	Josef Bacik <josef@toxicpanda.com>, Ilya Dryomov <idryomov@gmail.com>, 
	Jack Wang <jinpu.wang@cloud.ionos.com>, "Michael S. Tsirkin" <mst@redhat.com>, 
	Jason Wang <jasowang@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>, 
	Stefan Hajnoczi <stefanha@redhat.com>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, 
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Minchan Kim <minchan@kernel.org>, Song Liu <song@kernel.org>, 
	"Martin K. Petersen" <martin.petersen@oracle.com>, 
	device-mapper development <dm-devel@redhat.com>, linux-block <linux-block@vger.kernel.org>, 
	drbd-dev@lists.linbit.com, nbd@other.debian.org, ceph-devel@vger.kernel.org, 
	xen-devel@lists.xenproject.org, 
	"linux-raid@vger.kernel.org" <linux-raid@vger.kernel.org>, linux-nvme@lists.infradead.org, 
	"linux-scsi@vger.kernel.org" <linux-scsi@vger.kernel.org>, linux-fsdevel <linux-fsdevel@vger.kernel.org>, 
	Hannes Reinecke <hare@suse.de>
Content-Type: text/plain; charset="UTF-8"

On Mon, Nov 16, 2020 at 10:05 AM Christoph Hellwig <hch@lst.de> wrote:
>
> Use set_capacity_and_notify to set the size of both the disk and block
> device.  This also gets the uevent notifications for the resize for free.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Hannes Reinecke <hare@suse.de>
> ---
>  drivers/md/dm.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/drivers/md/dm.c b/drivers/md/dm.c
> index c18fc25485186d..62ad44925e73ec 100644
> --- a/drivers/md/dm.c
> +++ b/drivers/md/dm.c
> @@ -1971,8 +1971,7 @@ static struct dm_table *__bind(struct mapped_device *md, struct dm_table *t,
>         if (size != dm_get_size(md))
>                 memset(&md->geometry, 0, sizeof(md->geometry));
>
> -       set_capacity(md->disk, size);
> -       bd_set_nr_sectors(md->bdev, size);
> +       set_capacity_and_notify(md->disk, size);
>
>         dm_table_event_callback(t, event_callback, md);
>

I haven't yet pinned down _why_ DM is calling set_capacity_and_notify()
with a size of 0, but when running various DM regression tests I'm
seeing a lot of noise like:

[  689.240037] dm-2: detected capacity change from 2097152 to 0

Is this pr_info really useful?  Should it be moved below the
if (!capacity || !size) check, so that it only prints when a uevent is
actually sent?

Mike


From xen-devel-bounces@lists.xenproject.org Fri Feb 12 15:52:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 Feb 2021 15:52:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84346.158158 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAak9-0002u2-PK; Fri, 12 Feb 2021 15:52:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84346.158158; Fri, 12 Feb 2021 15:52:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAak9-0002tv-MH; Fri, 12 Feb 2021 15:52:29 +0000
Received: by outflank-mailman (input) for mailman id 84346;
 Fri, 12 Feb 2021 15:52:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o46S=HO=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lAak8-0002tq-68
 for xen-devel@lists.xenproject.org; Fri, 12 Feb 2021 15:52:28 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1085d0ec-aa4b-47ae-a060-fd302c24a649;
 Fri, 12 Feb 2021 15:52:27 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1085d0ec-aa4b-47ae-a060-fd302c24a649
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613145147;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version;
  bh=71PBv1yaSXwV1DXb00ABA1ndEflYaw9hZ+l02M+OUzY=;
  b=EdoI7/2zxyHXmZis2df1wf5fE+VvTRHJc24JzUjSvqhSPaEJT+0pQGOs
   2ZelLzjNXflGCeJmgIXwSICBNxYorpt3gIh9FYfnWIJbdJDCrf5HjE8gY
   +JlSfAjinO1SjEStSgoOQye6YuQ942B0+0S5NsD3nrvB167+LIDC7exqQ
   Q=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 1XPwlT/SsWH1wkhbtbN8GhOIc5iKNlVmziz9mOZJaCQVt0oHdU0oxUA29altqRi5xQ5SaMJjUZ
 wwH3XXgJ/Qv+Z3q12XBW6gWNtIEHgc25AfQAVj6yn0MfxwXGckMgN+HHnWsRCjv3GxAxpirRAd
 nu6IqyFkDugU2+Y4P5nr8FeQG7KcYV5HinC5pxQt40zFcYLDRMDUUXYe9Ya/A16YikQPF2Fgca
 cTYCS/joN3bTiBxs904fuGrSV1vMTBc+Jc3T3H66BgX8GPTJgpig+rLzNZd8U7BRajGInPvxLB
 x4Y=
X-SBRS: 5.2
X-MesageID: 37145664
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,174,1610427600"; 
   d="scan'208";a="37145664"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Juergen Gross <jgross@suse.com>
Subject: [PATCH 10/10] tools/libs: Write out an ABI analysis when abi-dumper is available
Date: Fri, 12 Feb 2021 15:39:53 +0000
Message-ID: <20210212153953.4582-11-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210212153953.4582-1-andrew.cooper3@citrix.com>
References: <20210212153953.4582-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Ian Jackson <iwj@xenproject.org>
CC: Wei Liu <wl@xen.org>
CC: Juergen Gross <jgross@suse.com>
---
 tools/libs/libs.mk | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/tools/libs/libs.mk b/tools/libs/libs.mk
index ac68996ab2..2a4ce8a90c 100644
--- a/tools/libs/libs.mk
+++ b/tools/libs/libs.mk
@@ -49,6 +49,8 @@ PKG_CONFIG_LOCAL := $(PKG_CONFIG_DIR)/$(PKG_CONFIG)
 LIBHEADER ?= $(LIB_FILE_NAME).h
 LIBHEADERS = $(foreach h, $(LIBHEADER), $(XEN_INCLUDE)/$(h))
 
+PKG_ABI := lib$(LIB_FILE_NAME).so.$(MAJOR).$(MINOR)-$(XEN_COMPILE_ARCH)-abi.dump
+
 $(PKG_CONFIG_LOCAL): PKG_CONFIG_PREFIX = $(XEN_ROOT)
 $(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_INCLUDE)
 $(PKG_CONFIG_LOCAL): PKG_CONFIG_LIBDIR = $(CURDIR)
@@ -94,6 +96,13 @@ lib$(LIB_FILE_NAME).so.$(MAJOR): lib$(LIB_FILE_NAME).so.$(MAJOR).$(MINOR)
 lib$(LIB_FILE_NAME).so.$(MAJOR).$(MINOR): $(PIC_OBJS) libxen$(LIBNAME).map
 	$(CC) $(LDFLAGS) $(PTHREAD_LDFLAGS) -Wl,$(SONAME_LDFLAG) -Wl,lib$(LIB_FILE_NAME).so.$(MAJOR) $(SHLIB_LDFLAGS) -o $@ $(PIC_OBJS) $(LDUSELIBS) $(APPEND_LDFLAGS)
 
+# If abi-dumper is available, write out the ABI analysis
+ifneq ($(ABI_DUMPER),)
+libs: $(PKG_ABI)
+$(PKG_ABI): lib$(LIB_FILE_NAME).so.$(MAJOR).$(MINOR) headers.lst
+	abi-dumper $< -o $@ -public-headers headers.lst -lver $(MAJOR).$(MINOR)
+endif
+
 .PHONY: install
 install: build
 	$(INSTALL_DIR) $(DESTDIR)$(libdir)
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Feb 12 15:54:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 Feb 2021 15:54:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84349.158170 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAam8-00031e-5B; Fri, 12 Feb 2021 15:54:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84349.158170; Fri, 12 Feb 2021 15:54:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAam8-00031X-1w; Fri, 12 Feb 2021 15:54:32 +0000
Received: by outflank-mailman (input) for mailman id 84349;
 Fri, 12 Feb 2021 15:54:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aDps=HO=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lAam6-00031R-Ol
 for xen-devel@lists.xenproject.org; Fri, 12 Feb 2021 15:54:30 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 01c6bf23-4629-4233-a6a5-ab47d3435a42;
 Fri, 12 Feb 2021 15:54:30 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 50A45AD29;
 Fri, 12 Feb 2021 15:54:29 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 01c6bf23-4629-4233-a6a5-ab47d3435a42
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613145269; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=C7GMDUyMOdTLI3rewXsY5zd2XTLH56wGuBSolmtWDq4=;
	b=klk5ZmyMZS+rwZq5oJVvqQ6K9dOpJivuQmr3cvKDL0jBd+MEPA7vZXr8On+/5F4fb9UJCz
	4su0Rz2IBQSizFAt5NMU2cDFDrUCx5jdZn79DO1leJikogfvw/UCPupC5lZ9fJM2kAzbWU
	c8AcFfJ+WW+7IKyUpi3foeI2JlYFuZs=
Subject: Re: [PATCH 02/10] tools/libxg: Fix uninitialised variable in
 write_x86_cpu_policy_records()
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20210212153953.4582-1-andrew.cooper3@citrix.com>
 <20210212153953.4582-3-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b8aed3ea-16b1-3e0f-abfb-ec90f7ce302c@suse.com>
Date: Fri, 12 Feb 2021 16:54:30 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210212153953.4582-3-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 12.02.2021 16:39, Andrew Cooper wrote:
> Various versions of gcc, when compiling with -Og, complain:
> 
>   xg_sr_common_x86.c: In function 'write_x86_cpu_policy_records':
>   xg_sr_common_x86.c:92:12: error: 'rc' may be used uninitialized in this function [-Werror=maybe-uninitialized]
>      92 |     return rc;
>         |            ^~
> 
> The complaint is legitimate, and can occur with unexpected behaviour of two
> related hypercalls in combination with a libc which permits zero-length
> malloc()s.
> 
> Have an explicit rc = 0 on the success path, and make the MSRs record error
> handling consistent with the CPUID record before it.
> 
> Fixes: f6b2b8ec53d ("libxc/save: Write X86_{CPUID,MSR}_DATA records")
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>



From xen-devel-bounces@lists.xenproject.org Fri Feb 12 15:56:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 Feb 2021 15:56:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84352.158182 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAanW-00038o-Ha; Fri, 12 Feb 2021 15:55:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84352.158182; Fri, 12 Feb 2021 15:55:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAanW-00038h-D8; Fri, 12 Feb 2021 15:55:58 +0000
Received: by outflank-mailman (input) for mailman id 84352;
 Fri, 12 Feb 2021 15:55:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lAanV-00038a-IW
 for xen-devel@lists.xenproject.org; Fri, 12 Feb 2021 15:55:57 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lAanU-0000z3-C4; Fri, 12 Feb 2021 15:55:56 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lAanU-0004j9-61; Fri, 12 Feb 2021 15:55:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=QAwqRGeNVcQ/ggu7wEF3m0P68VKWsSn976OQ/azoyeI=; b=aEf1n+WhcJ3B1Pbrk+r+Jj7sDZ
	rVybiQauOZh0gZZe8k0r+Qa3lDDbGeZzUwDRQCxzi3UzcLvj1CK8tTE+5bOrjWt0cTe5D/qERVBfA
	G0GNzOltJYkzPc93VkX2ubuEnJh7DObMpzTAUHO/SNhcjAzwj2SCllPn31yppJ4S2m38=;
Subject: Re: [PATCH 03/10] tools/libxg: Fix uninitialised variable in
 meminit()
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20210212153953.4582-1-andrew.cooper3@citrix.com>
 <20210212153953.4582-4-andrew.cooper3@citrix.com>
From: Julien Grall <julien@xen.org>
Message-ID: <2437c26c-2bb6-ec43-37bd-3051b97eff56@xen.org>
Date: Fri, 12 Feb 2021 15:55:54 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210212153953.4582-4-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Andrew,

On 12/02/2021 15:39, Andrew Cooper wrote:
> Various versions of gcc, when compiling with -Og, complain:
> 
>    xg_dom_arm.c: In function 'meminit':
>    xg_dom_arm.c:420:19: error: 'p2m_size' may be used uninitialized in this function [-Werror=maybe-uninitialized]
>      420 |     dom->p2m_size = p2m_size;
>          |     ~~~~~~~~~~~~~~^~~~~~~~~~
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

This was reported nearly 3 years ago (see [1]) and it is pretty sad this 
was never merged :(.

> ---
> CC: Ian Jackson <iwj@xenproject.org>
> CC: Wei Liu <wl@xen.org>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Julien Grall <julien@xen.org>
> 
> Julien/Stefano: I can't work out how this variable is supposed to work, and
> the fact that it isn't a straight accumulation across the RAM banks looks
> suspect.

It looks buggy, but the P2M is never used on Arm. In fact, you sent a 
patch a year ago to drop it (see [2]). It would be nice to revive it.

> ---
>   tools/libs/guest/xg_dom_arm.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/tools/libs/guest/xg_dom_arm.c b/tools/libs/guest/xg_dom_arm.c
> index 94948d2b20..f1b8d06f75 100644
> --- a/tools/libs/guest/xg_dom_arm.c
> +++ b/tools/libs/guest/xg_dom_arm.c
> @@ -373,7 +373,7 @@ static int meminit(struct xc_dom_image *dom)
>       const uint64_t modsize = dtb_size + ramdisk_size;
>       const uint64_t ram128mb = bankbase[0] + (128<<20);
>   
> -    xen_pfn_t p2m_size;
> +    xen_pfn_t p2m_size = 0;
>       uint64_t bank0end;
>   
>       assert(dom->rambase_pfn << XC_PAGE_SHIFT == bankbase[0]);
>

If your original series is too risky for 4.15, I would consider removing 
p2m_size completely and always setting dom->p2m_size to 0.

Cheers,

[1] 
https://lore.kernel.org/xen-devel/20180314123203.30646-1-wei.liu2@citrix.com/
[2] 
https://patchwork.kernel.org/project/xen-devel/patch/20191217201550.15864-3-andrew.cooper3@citrix.com/

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Feb 12 16:04:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 Feb 2021 16:04:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84358.158194 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAaw0-0004hC-Cs; Fri, 12 Feb 2021 16:04:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84358.158194; Fri, 12 Feb 2021 16:04:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAaw0-0004h5-9d; Fri, 12 Feb 2021 16:04:44 +0000
Received: by outflank-mailman (input) for mailman id 84358;
 Fri, 12 Feb 2021 16:04:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aDps=HO=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lAavz-0004h0-4p
 for xen-devel@lists.xenproject.org; Fri, 12 Feb 2021 16:04:43 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5e3c1277-d2ac-4405-95cc-f2f6a5f1036c;
 Fri, 12 Feb 2021 16:04:42 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8D2A2AD29;
 Fri, 12 Feb 2021 16:04:41 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5e3c1277-d2ac-4405-95cc-f2f6a5f1036c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613145881; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Hw8Go3FxPM3rNPpPAh6+TMrhXG8uCHjrpH86/xDkyKo=;
	b=JV+UoQqKlwJPUV7J1JLrbdXE7epIdvgACTfTToq8IsaGCnNnWTQjsQZGQQuQJj1GiXeUjP
	hmUFjQihqDME5XkpOSjOVeT3//VDt1GCwXdnmfFUrjmSucI2tgRJeqUMF+ZbCneEzaLMIx
	gQ6PIeEF0W0TeCRpJooK8hahO12eENA=
Subject: Re: [PATCH 07/10] tools: Use -Og for debug builds when available
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Juergen Gross <jgross@suse.com>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210212153953.4582-1-andrew.cooper3@citrix.com>
 <20210212153953.4582-8-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <04c93a14-ee95-e4a6-33b9-f80fcd03a010@suse.com>
Date: Fri, 12 Feb 2021 17:04:42 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210212153953.4582-8-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 12.02.2021 16:39, Andrew Cooper wrote:
> The recommended optimisation level for debugging is -Og, which is what tools
> such as gdb prefer.  In practice, it equates to -O1 with a few specific
> optimisations turned off.
> 
> abi-dumper in particular wants the libraries it inspects in this form.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

> --- a/tools/Rules.mk
> +++ b/tools/Rules.mk
> @@ -106,8 +106,9 @@ endif
>  CFLAGS_libxenlight += $(CFLAGS_libxenctrl)
>  
>  ifeq ($(debug),y)
> -# Disable optimizations
> -CFLAGS += -O0 -fno-omit-frame-pointer
> +# Use -Og if available, -O0 otherwise
> +dbg_opt_level := $(call cc-option,$(CC),-Og,-O0)
> +CFLAGS += $(dbg_opt_level) -fno-omit-frame-pointer

I wonder if we shouldn't do something similar for the hypervisor,
where we use -O1 for debug builds right now. At least when
DEBUG_INFO is also enabled, -Og may be better.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Feb 12 16:08:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 Feb 2021 16:08:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84362.158208 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAazI-0004rb-Tj; Fri, 12 Feb 2021 16:08:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84362.158208; Fri, 12 Feb 2021 16:08:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAazI-0004rU-Qn; Fri, 12 Feb 2021 16:08:08 +0000
Received: by outflank-mailman (input) for mailman id 84362;
 Fri, 12 Feb 2021 16:08:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DDLl=HO=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lAazH-0004rP-03
 for xen-devel@lists.xenproject.org; Fri, 12 Feb 2021 16:08:07 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 554b83a8-3006-4472-bdf9-4365c5a67bf4;
 Fri, 12 Feb 2021 16:08:05 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2263FB773;
 Fri, 12 Feb 2021 16:08:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 554b83a8-3006-4472-bdf9-4365c5a67bf4
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613146085; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=EBTa6uQGZ2JnAJEQi8Cw9Dd3OJRkwy7U15wp9gpQuvc=;
	b=ogw4o064XnfXXXNOk4TKdvuLhcVnONSacDQZweKnSFjK/jf3Q/6FQBAbDH7j5maxzax+rB
	8ORmYcxzYY8xD5av+VSifUlka/jlFRWVLulZ7xnorFt8E1+RxTMyZGbZz6XqDAao9JfCAY
	TPFrDDCBy+R6LTYaG1214mihzPDP3JY=
Subject: Re: [PATCH 06/10] stubdom/xenstored: Fix uninitialised variables in
 lu_read_state()
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210212153953.4582-1-andrew.cooper3@citrix.com>
 <20210212153953.4582-7-andrew.cooper3@citrix.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <cea38f47-7dc3-ea67-104a-e5b1899a7f3b@suse.com>
Date: Fri, 12 Feb 2021 17:08:04 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <20210212153953.4582-7-andrew.cooper3@citrix.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="RB8B3HmeeuY4YhLkuLHMPNaKjUGqpdzne"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--RB8B3HmeeuY4YhLkuLHMPNaKjUGqpdzne
Content-Type: multipart/mixed; boundary="pKKo9JB3kWQkMvqkyPhiol9jY1VJMC2Di";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <cea38f47-7dc3-ea67-104a-e5b1899a7f3b@suse.com>
Subject: Re: [PATCH 06/10] stubdom/xenstored: Fix uninitialised variables in
 lu_read_state()
References: <20210212153953.4582-1-andrew.cooper3@citrix.com>
 <20210212153953.4582-7-andrew.cooper3@citrix.com>
In-Reply-To: <20210212153953.4582-7-andrew.cooper3@citrix.com>

--pKKo9JB3kWQkMvqkyPhiol9jY1VJMC2Di
Content-Type: multipart/mixed;
 boundary="------------FEDE9DAFA83136D86D129C31"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------FEDE9DAFA83136D86D129C31
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 12.02.21 16:39, Andrew Cooper wrote:
> Various versions of gcc, when compiling with -Og, complain:
>
>    xenstored_control.c: In function ‘lu_read_state’:
>    xenstored_control.c:540:11: error: ‘state.size’ is used uninitialized in this
>    function [-Werror=uninitialized]
>      if (state.size == 0)
>          ~~~~~^~~~~
>    xenstored_control.c:543:6: error: ‘state.buf’ may be used uninitialized in
>    this function [-Werror=maybe-uninitialized]
>      pre = state.buf;
>      ~~~~^~~~~~~~~~~
>    xenstored_control.c:550:23: error: ‘state.buf’ may be used uninitialized in
>    this function [-Werror=maybe-uninitialized]
>       (void *)head - state.buf < state.size;
>                      ~~~~~^~~~
>    xenstored_control.c:550:35: error: ‘state.size’ may be used uninitialized in
>    this function [-Werror=maybe-uninitialized]
>       (void *)head - state.buf < state.size;
>                                  ~~~~~^~~~~
>
> Interestingly, this is only in the stubdom build.  I can't identify any
> relevant differences vs the regular tools build.

But I can. :-)

lu_get_dump_state() is empty for the stubdom case (this will change when
LU is implemented for stubdom, too). In the daemon case this function
sets all the relevant fields.
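For illustration, a minimal C sketch of the situation described above. The struct layout and function names here are simplified stand-ins, not the actual xenstored code: an empty stub helper never writes to the caller's struct, so later reads are of indeterminate values, which is what gcc's -Og -Wmaybe-uninitialized diagnoses. Zero-initializing in the caller is the usual fix:

```c
#include <stddef.h>

/* Simplified stand-in for the dump-state struct; not the real layout. */
struct lu_dump_state {
    void *buf;
    size_t size;
};

/*
 * Stubdom-style variant: an empty stub that never writes to *state.
 * If the caller does not initialize the struct itself, every later
 * read of state.buf or state.size is of an indeterminate value.
 */
void lu_get_dump_state_stub(struct lu_dump_state *state)
{
    (void)state;
}

/* Caller with the defensive fix: zero-initialize before the call. */
int lu_read_state_sketch(void)
{
    struct lu_dump_state state = { NULL, 0 };

    lu_get_dump_state_stub(&state);

    if (state.size == 0)
        return -1;      /* nothing to restore */

    return 0;
}
```

With the stub helper, the zero-initialized path always takes the `state.size == 0` branch; in the daemon build the helper would fill in the fields before they are read.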

> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen

--------------FEDE9DAFA83136D86D129C31
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------FEDE9DAFA83136D86D129C31--

--pKKo9JB3kWQkMvqkyPhiol9jY1VJMC2Di--

--RB8B3HmeeuY4YhLkuLHMPNaKjUGqpdzne
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmAmp+QFAwAAAAAACgkQsN6d1ii/Ey//
1wf+M+x1gMb3ErfWAlF9kE+gqqtRiO2UMDB5K0h+/RvaWoPrhbFnUeCyAZ11Vp8e4I0F1R+NMeCy
/KDrKL8ZbPkcFDUUjmhoLxDVkhwWHI0oiVfTcfD2pNmK8nYyTE7ItXDOnT6WPaKQLLq4hMX7MORA
h4d6v8F9zD1bRShKKfHFnxxTJpuj10yXS0QAyyA7ZKMt745HQsFOmBg5u4LiiEDdharqsQOhI7Vj
cTADhdAcisPmgnXsoyXaKOg93SJ+7rBT/H8beTK7TE/0sPlL5KMxP7Hdam5H9SZY6YL1EN8IFT9P
u7bgXjPQWSsXHHbXY6AmHMkO2RvQZLPuEKFn58SMvQ==
=zNBf
-----END PGP SIGNATURE-----

--RB8B3HmeeuY4YhLkuLHMPNaKjUGqpdzne--


From xen-devel-bounces@lists.xenproject.org Fri Feb 12 16:09:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 Feb 2021 16:09:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84367.158221 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAb0h-00054f-9F; Fri, 12 Feb 2021 16:09:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84367.158221; Fri, 12 Feb 2021 16:09:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAb0h-00054Y-5k; Fri, 12 Feb 2021 16:09:35 +0000
Received: by outflank-mailman (input) for mailman id 84367;
 Fri, 12 Feb 2021 16:09:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o46S=HO=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lAb0f-00054S-3i
 for xen-devel@lists.xenproject.org; Fri, 12 Feb 2021 16:09:33 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 44edb0c5-7e48-4170-86ed-09c8a9e5de9b;
 Fri, 12 Feb 2021 16:09:32 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 44edb0c5-7e48-4170-86ed-09c8a9e5de9b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613146171;
  h=subject:to:references:from:message-id:date:in-reply-to:
   content-transfer-encoding:mime-version;
  bh=Y90NweLWeH8Wgrzk/VYq1vqvQimd/C0kTsKrLs0WwJA=;
  b=TwDcwyUPNaTSOEzCUvvIVlUXPCPbToGhKFFSwcHM+k7z0SPSro0QGm3D
   YRG8BFZ2sQYHol454AWq/ej2lo91paaKuP0NzBFduZ4Ac13oQBtmKCxdU
   WWq//aY6i+Py8KR96jLlTfr7F/RVysaU3T2W9E4n+W6y7GCxpiJpPibYg
   Y=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: IpiVNqxszCKcX/2LAt0UcCjkiwBHjtIqA27gnV8X8Ty03+05BICZcsZ4LM+xpFGTe0K0jNbBxY
 qt1RAW8XNzZ2ARsXFWJ0cObCe1RSMJd7iuvtNkIw+sK9Op0J2BgHAuD6utY8bD9wPNUy+Q7HtX
 ez5qnSTVG1LhYBH8VJHKjxsgWgEAAHQZiFkgzwBMkD5+UhFFfrQj9yConsBCHwMx4ezDuCKN5V
 FslUR50yVH8Z+dOJSUc1ng2U8Vs8PQ66KdfjbZbvngQgsyrljVNyQCAXETiU9AbnXsESTpH0Wq
 UXw=
X-SBRS: 5.2
X-MesageID: 37147148
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,174,1610427600"; 
   d="scan'208";a="37147148"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ja77gf+Xyshu94faMRdlkt0yqlcJLR7gMakyak69Ris2P9WUjXPzRkX5D19j0c994w2g/zaxKb9zTQqwiSrhsl63o9jHohRfrnL4cvWvwaLQi3YkFT1NxmIYQ2T+uYU5zCJEsxAP0aNsoKNDbrMTOF1ChCgidoMWfBOFTmm27oSmbxcdrtu1Yg9P7pC2hO7iA6BXH9P8zn8/VlgtrZ6TiQDNrnY1gxQjr/4JnPM4GWNItv0G9NyrMJvtddH5jdDh4bs3ViPMG+aZco90zADlz3MTakwxRF8JFtlCVBGVUNxCqFe0nPBKdxjI7NFAg6C2g2ZvhZKmt80hcDpvofBIjg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=j+LRUSpwIrw8tx0aiADHj3ZAhtrgGCphqQrMBT1Pqwo=;
 b=R55pNphujnWw7rHZuIuWkL4JWZLgFS+c3HBZKZhQX5+AlXRZvIWbKuPP6uRVmHYJQCecANKCFRzl11+r/YO4J9Sr8xVYp3/qBzocOdz+6kMxyT45BRadh8xSGVgIyhuApXyqq5u4ZbRUWdM5NzS2Fek3mss1PQTJ3udXzkqDI3x85mTadkQpfv43GARBhRBwegXu8Z9Lj5vIgYifLvfUIHiScsQfzUgAOwbw2PU2te4fLMkQ+ID70T9wlJoY0Hycp4ZNMqUR8E+WxRSJPkoGD2y39gRkwcXaPCXWiVU5zErViuDw0jz/zbpVKrC5UP1+UQeeFIGsja718VU4kihhyQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=j+LRUSpwIrw8tx0aiADHj3ZAhtrgGCphqQrMBT1Pqwo=;
 b=JqvDvt32Gx/ukIqV7vZ2k1lUXlAe06oIGs8PlLz9wLfN7N2WOEpk1tYLzi9t6gKfhfB8+RdktjWn9MhuGormlbSw/Zxg89VI72R2rgVJ08O520/i5w49aSE6DPHVGoMkobcziUZ1xldv49CYBS5SiADQV7vmUnJYLZ/3UPCMsh0=
Subject: Re: [PATCH 07/10] tools: Use -Og for debug builds when available
To: <xen-devel@lists.xenproject.org>
References: <20210212153953.4582-1-andrew.cooper3@citrix.com>
 <20210212153953.4582-8-andrew.cooper3@citrix.com>
 <04c93a14-ee95-e4a6-33b9-f80fcd03a010@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <3b02dfa7-923b-9bf7-a349-68bbba0590ad@citrix.com>
Date: Fri, 12 Feb 2021 16:09:18 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
In-Reply-To: <04c93a14-ee95-e4a6-33b9-f80fcd03a010@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P123CA0063.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1::27) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 5c64cff7-5528-4c84-f831-08d8cf709047
X-MS-TrafficTypeDiagnostic: BYAPR03MB3863:
X-Microsoft-Antispam-PRVS: <BYAPR03MB3863E61689D9CE1C06CEFF9CBA8B9@BYAPR03MB3863.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: PazBsET/vQrIhiK7HP28PF75hmjLS116RUPvhmUwB5z5UBAWHO1ZOjr6WOXLVnEydSfkZSC97XBErVMLkvIcaJcb023Hg8ckZCbOOx+gWiZIArTU0QcR4+V3LN190PCZ7bE2JBPHzWEL4EC7yJUoXOhWruiEaMqvEivuil5QU8A9nu2YOkY7fwktLxBTqkzdXbeGmVp2SwSmfkoZnn7M8TsVjJFE5ka67YxS0YxBDRZtM/fpVuVHSpb/G8JXQoUdNjUKnkEJSp8R4DYEUz3jgotZbcGGSJO4odLmNLubMSb4MxaxgCzQG0+jP0l3QZJ9kB2yJY4SlboEWojfcytlMBjZfUpmca00q+O6wgnV2/4SNkt7kfN/636TeuFHaVYJJisQc0BpYRvJ9VbsX/yMS9E9s8cP4lSoHu+l1iEamWDAAANLO5Y4OSnAi6bG16lzyODLdjRVGtv6vQgqPi0WcTbcPXP4TUvfwhaibmWA7TyI12BS+i6WiJo8bS/IUB/ITTBUZwIGBeO2q1r/JhjfIbR8y9U2LDQzs8+IExsQIBvxGVHKje4w9ByrnZN05X/IegvxzZZRvsB0s6IVN6tBUVTiLU+Jnnpaskn80b6IJ0o=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(346002)(396003)(366004)(376002)(39860400002)(136003)(16526019)(186003)(86362001)(36756003)(83380400001)(26005)(6666004)(2616005)(6916009)(66476007)(66946007)(2906002)(478600001)(6486002)(16576012)(31686004)(53546011)(956004)(5660300002)(31696002)(8936002)(316002)(8676002)(66556008)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?UklnTEoxeHd5SnV1RnlwR0pEODJHMHdyYUF2L3psKzhhcHFKc00yUTNHNGQx?=
 =?utf-8?B?eWN0bklVcXllVnBEdm9ycFRzQUgxdWk3dUZHOERZRWFaSTF6NEZESkhoN25i?=
 =?utf-8?B?WlFSbkR5TDYzSTRPTnNxejNON090QVV3YW1mYnpYY0IwRnBiN3VsSWo5dHVG?=
 =?utf-8?B?S3B4Qnc4c2RlS0JUbTVWUFJZSy80NU91TkJlQkY5OVJZY1ljZ0YxTmdFeEVS?=
 =?utf-8?B?bmE3Yk5nOW90Sk5rUkwyT2hzSDQyMXA3RThKQ0dmaDNNdU9RdnVyN0lCM0hN?=
 =?utf-8?B?TUtqZ2ZhdGZsN0YwNWNCQ1pjSDU4SER3MTJZSnpoaGxLVFdzQUhZZjB4d2FW?=
 =?utf-8?B?SHZrNXVzeFRMeFpZc2RuMk83MjhrTVk1RmxpZWF4SFQxSVA2NHVhQnh3dVU4?=
 =?utf-8?B?MzU3ZmF4U1VYTGdxQ3ZYM0FNYjYvL2NGN25pTVZHaE9EaXY3SFFhbW1rOXJq?=
 =?utf-8?B?TnZHUnZZUlkzNmtlWlhBWWdrbWgrZFdOcWk0elRVbGN2MlVzMGZrVUtjUTh0?=
 =?utf-8?B?UEIzQ0x4clplL0YzWVdONm1MbzNtSFBFT0tmVWh6SVN0cmxwWWMxMTNlNHhN?=
 =?utf-8?B?OURwQytpSGs3RUo2UkRZNDdTTCtqcHNjZ3J5dnlqNlpBN2FlNzFzdDRLdm1k?=
 =?utf-8?B?RVMyTGE2NlRXSWhsWVh1VlpQc1VxTjFWeHErMytRZ3h1MVI3L003RzEvTTFt?=
 =?utf-8?B?bGVvcW9wZC9BWkJOWlJGNVRpZG9oMW1RTFJjak9TNWtBS0FVU2ljWXp3czJR?=
 =?utf-8?B?WVAzaGI3N0pIbWtYMFhzbVl0WGxYTVR3dFY1QVFLWG9YMnBFbTBpQjhCMTB6?=
 =?utf-8?B?MVpOcTN4MzFpQ2JWTE5qK1phSCtvZ2d6aDFBMlAybTk2RlVkdkVEdWYyRm1x?=
 =?utf-8?B?dmtJcC9SNXY5TWFFVnk3Z3NuNmNyN3lZYnBRWHNLNnBjUjhnNWQ5RTJSazNm?=
 =?utf-8?B?V21RUEFkTE5jNFV4cmUvUUw3bGdzVUJYRWo2SnVETnFxMWRmRnh3L1FiZlVY?=
 =?utf-8?B?YnFZVEE3NXZmcWNDYUdXTm1rL3A5VVhqVk5DYWU1NjVGU0I3bFVEYWw0V0F2?=
 =?utf-8?B?SkFwbmxQNFhwVmRDR1oxOURTU3dYOUtGVlZGR2xKL3FtcVd1TCtOcFkvaWw5?=
 =?utf-8?B?UCtvM0hQMG5OVysrTmd3czR0TjVMZituWlJkWXBCL0s3cnhtS1QwSVVoRlU5?=
 =?utf-8?B?NzhPTlY5WGVwYjVVcS96cWRHYlVjWkxPcDZCdURHQjJCdEgrcVNpcHlyV3cv?=
 =?utf-8?B?QWZHcU5CMUFvSE1XYm4wV0FoUFkyUXN1YTE4UzBwaHVTMGo4ZmVlcmJUYmVu?=
 =?utf-8?B?dWQwOXUrQUxJRTUvQkVKL3FIQmJFTGlBMTYxUGFtUUNZTE1UMkw2MExBTUwr?=
 =?utf-8?B?dmxzMGhjb2owNGEza016SDVKUmEya1pxQW1YckRINWl3R3IvQXNFcm8wZng0?=
 =?utf-8?B?dzdBeWRNRVRnNlNmZDRMaDQ4MUtaNDVZOFJEY0t2dEY0MWNLem5jVUtmNW9w?=
 =?utf-8?B?aURVc1I2QnBFWFIvSDhDZzQ2d3VZWUZyaktpTkFnLzZpaWY3eDFuYndFZzdh?=
 =?utf-8?B?Sll3d0JvUk1hNGFTbVhLWlhsNU5YQzd3Y1R2dEpCVlEzWTdwc2g3TSt0SG8y?=
 =?utf-8?B?dkZPWEM1c3ZTS3BsbGpJbUhMcXNkenhUWTZsTEhmVDNrbkhzYU4xMnU1WkFm?=
 =?utf-8?B?L09haU1TM1ZXM2F5a1I2c201aW55RVVVcHBLUFo4Z1dTQ1JhTkY0TWZWMHVY?=
 =?utf-8?Q?Q/OGaOX1lih6rUxvowWbbgNLnmzdwkAk6dz+wsU?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 5c64cff7-5528-4c84-f831-08d8cf709047
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 Feb 2021 16:09:24.0295
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: gfPaTLc9E4/buRfoeVA1H/bEi3TRijUdacHSpOz8jOws22h2FAMvxUafe9SGbCb1QoKGIMexxvfaj5KLeKyRLMzxRma1CuKEECkbkP4iJnY=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB3863
X-OriginatorOrg: citrix.com

On 12/02/2021 16:04, Jan Beulich wrote:
> On 12.02.2021 16:39, Andrew Cooper wrote:
>> The recommended optimisation level for debugging is -Og, and is what tools
>> such as gdb prefer.  In practice, it equates to -O1 with a few specific
>> optimisations turned off.
>>
>> abi-dumper in particular wants the libraries it inspects in this form.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Thanks,

>
>> --- a/tools/Rules.mk
>> +++ b/tools/Rules.mk
>> @@ -106,8 +106,9 @@ endif
>>  CFLAGS_libxenlight += $(CFLAGS_libxenctrl)
>>  
>>  ifeq ($(debug),y)
>> -# Disable optimizations
>> -CFLAGS += -O0 -fno-omit-frame-pointer
>> +# Use -Og if available, -O0 otherwise
>> +dbg_opt_level := $(call cc-option,$(CC),-Og,-O0)
>> +CFLAGS += $(dbg_opt_level) -fno-omit-frame-pointer
> I wonder if we shouldn't do something similar for the hypervisor,
> where we use -O1 for debug builds right now. At least when
> DEBUG_INFO is also enabled, -Og may be better.

I also made that work... it's rather more invasive in terms of changes -
all for "maybe uninitialised" warnings.

$ git diff e2bab84984^ --stat
 xen/Makefile                    | 3 ++-
 xen/arch/arm/domain_build.c     | 2 +-
 xen/arch/x86/irq.c              | 2 +-
 xen/arch/x86/mm/shadow/common.c | 2 +-
 xen/arch/x86/pv/shim.c          | 6 +++---
 xen/arch/x86/sysctl.c           | 4 ++--
 xen/common/efi/boot.c           | 2 +-
 7 files changed, 11 insertions(+), 10 deletions(-)

is what is required to make Gitlab happy.  I was planning to defer it to
4.16 at this point.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Feb 12 16:12:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 Feb 2021 16:12:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84373.158236 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAb3g-0005vO-Od; Fri, 12 Feb 2021 16:12:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84373.158236; Fri, 12 Feb 2021 16:12:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAb3g-0005vH-Li; Fri, 12 Feb 2021 16:12:40 +0000
Received: by outflank-mailman (input) for mailman id 84373;
 Fri, 12 Feb 2021 16:12:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aDps=HO=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lAb3f-0005vC-Qv
 for xen-devel@lists.xenproject.org; Fri, 12 Feb 2021 16:12:39 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2c3fa973-6f9d-40c4-a2c8-8a23a723f7fc;
 Fri, 12 Feb 2021 16:12:39 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5777FB7A7;
 Fri, 12 Feb 2021 16:12:38 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2c3fa973-6f9d-40c4-a2c8-8a23a723f7fc
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613146358; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=0D8qcfOgFn7FHOECrrXxpmKqC4jFpbIEWCm0H1u4gBc=;
	b=FC+L907nin3PVfz5jXvXFY7E1lSL8dBHhSKne8bw1lY4pzlYQ7aYC0x+P9mXjwlHRbZXSF
	9/zjy/qg33Zon+7wKtf0yNVFBgDN59ULqCmh6iJoOLMrv6nAMpOgZXev+OGtJus3/KSm9M
	e9rbt472ch1fSViE2Yney21HPvR9n48=
Subject: Re: [PATCH 10/10] tools/libs: Write out an ABI analysis when
 abi-dumper is available
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Juergen Gross <jgross@suse.com>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210212153953.4582-1-andrew.cooper3@citrix.com>
 <20210212153953.4582-11-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <6bac2a21-736d-9b07-4ee0-4654b5273ce5@suse.com>
Date: Fri, 12 Feb 2021 17:12:39 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210212153953.4582-11-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 12.02.2021 16:39, Andrew Cooper wrote:
> --- a/tools/libs/libs.mk
> +++ b/tools/libs/libs.mk
> @@ -49,6 +49,8 @@ PKG_CONFIG_LOCAL := $(PKG_CONFIG_DIR)/$(PKG_CONFIG)
>  LIBHEADER ?= $(LIB_FILE_NAME).h
>  LIBHEADERS = $(foreach h, $(LIBHEADER), $(XEN_INCLUDE)/$(h))
>  
> +PKG_ABI := lib$(LIB_FILE_NAME).so.$(MAJOR).$(MINOR)-$(XEN_COMPILE_ARCH)-abi.dump

Don't you mean $(XEN_TARGET_ARCH) here?

Jan


From xen-devel-bounces@lists.xenproject.org Fri Feb 12 16:14:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 Feb 2021 16:14:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84376.158248 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAb5B-00062V-3z; Fri, 12 Feb 2021 16:14:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84376.158248; Fri, 12 Feb 2021 16:14:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAb5B-00062O-0P; Fri, 12 Feb 2021 16:14:13 +0000
Received: by outflank-mailman (input) for mailman id 84376;
 Fri, 12 Feb 2021 16:14:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aDps=HO=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lAb59-00062I-VH
 for xen-devel@lists.xenproject.org; Fri, 12 Feb 2021 16:14:11 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 454d563a-17ee-4427-adcc-17180619b9a5;
 Fri, 12 Feb 2021 16:14:11 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 76266AD29;
 Fri, 12 Feb 2021 16:14:10 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 454d563a-17ee-4427-adcc-17180619b9a5
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613146450; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Vgi+fK7GsoYWXBXPeO1U9BYDDEMJnP3aT/KwQLfoC7Q=;
	b=ghSsOf0SZw+g89EeBlVhhRR8axpMIhqkDFHS4g8EgjCwwb2AnJ6pceuar6QhS/k2JmdnjY
	UqXB+l8wuB/FBm6FWjob5WEofEIR1EVELPVzIKm0OrHkLhfyCUeYXE1P6bJBoaEw5sLb53
	yD9uVU7OxLzafqxEc5qvDFLEPhecC40=
Subject: Re: [PATCH 07/10] tools: Use -Og for debug builds when available
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20210212153953.4582-1-andrew.cooper3@citrix.com>
 <20210212153953.4582-8-andrew.cooper3@citrix.com>
 <04c93a14-ee95-e4a6-33b9-f80fcd03a010@suse.com>
 <3b02dfa7-923b-9bf7-a349-68bbba0590ad@citrix.com>
Cc: xen-devel@lists.xenproject.org
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <da48c866-68cb-1655-2887-434d38e50b21@suse.com>
Date: Fri, 12 Feb 2021 17:14:11 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <3b02dfa7-923b-9bf7-a349-68bbba0590ad@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 12.02.2021 17:09, Andrew Cooper wrote:
> On 12/02/2021 16:04, Jan Beulich wrote:
>> On 12.02.2021 16:39, Andrew Cooper wrote:
>>> --- a/tools/Rules.mk
>>> +++ b/tools/Rules.mk
>>> @@ -106,8 +106,9 @@ endif
>>>  CFLAGS_libxenlight += $(CFLAGS_libxenctrl)
>>>  
>>>  ifeq ($(debug),y)
>>> -# Disable optimizations
>>> -CFLAGS += -O0 -fno-omit-frame-pointer
>>> +# Use -Og if available, -O0 otherwise
>>> +dbg_opt_level := $(call cc-option,$(CC),-Og,-O0)
>>> +CFLAGS += $(dbg_opt_level) -fno-omit-frame-pointer
>> I wonder if we shouldn't do something similar for the hypervisor,
>> where we use -O1 for debug builds right now. At least when
>> DEBUG_INFO is also enabled, -Og may be better.
> 
> I also made that work... it's rather more invasive in terms of changes -
> all for "maybe uninitialised" warnings.
> 
> $ git diff e2bab84984^ --stat
>  xen/Makefile                    | 3 ++-
>  xen/arch/arm/domain_build.c     | 2 +-
>  xen/arch/x86/irq.c              | 2 +-
>  xen/arch/x86/mm/shadow/common.c | 2 +-
>  xen/arch/x86/pv/shim.c          | 6 +++---
>  xen/arch/x86/sysctl.c           | 4 ++--
>  xen/common/efi/boot.c           | 2 +-
>  7 files changed, 11 insertions(+), 10 deletions(-)
> 
> is what is required to make Gitlab happy.

Oh, good to know. Thanks!

>  I was planning to defer it to 4.16 at this point.

Of course.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Feb 12 17:01:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 Feb 2021 17:01:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84389.158260 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAbp3-0002GL-M1; Fri, 12 Feb 2021 17:01:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84389.158260; Fri, 12 Feb 2021 17:01:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAbp3-0002GE-Hh; Fri, 12 Feb 2021 17:01:37 +0000
Received: by outflank-mailman (input) for mailman id 84389;
 Fri, 12 Feb 2021 17:01:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o46S=HO=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lAbp1-0002G9-Qz
 for xen-devel@lists.xenproject.org; Fri, 12 Feb 2021 17:01:35 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 489dcfd4-9cb1-40f3-96d0-f0565531e5f8;
 Fri, 12 Feb 2021 17:01:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 489dcfd4-9cb1-40f3-96d0-f0565531e5f8
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613149294;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=vnW0b30z1hZxcDTtu+jo26HtPh1/6e5wL35MPxOOP64=;
  b=BRc0qDO/nSN33Mlpib8gyb0L1wGvhUbTgVguwq3yvrDAB9xxvC8pjO3p
   /msTM8rOgqqhACUIyPzv+6grDBJouPwlpIYsSe2tpKnPS46fmgF3tpnME
   Z+qIlVKR9uIOsmDhnqFuBWVMw3OjwETw5Ko55GPr9GkNCiH/3IKTxzj+j
   8=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: gdn48AqrquRY5iJo6FIf8CWvWhUa1x1/Tp+xPr+N5yZJHiW16WT/k+0N61wMLpdwt3mqKUXRB/
 /CrEJryussyGex1YHluldO485Cf2sd1BImLDCzejhB/BlUe7oFjmWUoZ1LL+Yusk7wEJb8I9T1
 ZadScgLWRjvGDG4F7SxHtXG7UULWAddGcwfMCR1+qsQN0Z2OfBDYqPU080fJNmrEFLqAppZUp5
 /I9Jbk0n6mb+my9if5gXcOUILKHYfD4uZUIk27DNaZLE8F/ngJZekYoeAvQ2AhNmeuqMbaqDiY
 cQ0=
X-SBRS: 5.2
X-MesageID: 37179681
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,174,1610427600"; 
   d="scan'208";a="37179681"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=XYZhlOvM4l5a+AsIA50e4QQ5aHgqv9GSh7M1C45w+3+woYUn6OQ+hu3CAYLVtmhlIiYvVYVUtLCIfdvc64QtNrg/4G4BwTUS/F8Pv7ttowcmFF8QTkpJTxUq7RSy2VL582PmwjQE8wilgi388He3ofO7zlg8rhqIawkmbbzvD8aYNuN/gN1KKZbuQWD79mXKLOxVtlMCzW6fsnpHho5YPxquuDLw8Fz1s1JUVlFW/a5TmBmtDMyk1x4TN44lxj4ZHq1gAZkJibh93qgTsK6u1/K9R1CCpKiKNRmgwWTCIXT18dqqkp4NNt2wPHoS4mPBOF7ubji4eJhFQJGPbgzfcA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=vnW0b30z1hZxcDTtu+jo26HtPh1/6e5wL35MPxOOP64=;
 b=dkGlL1UYsAJJvCv8PnO0bDqpgosHcBJA1MMo0o0BcaG8rz2dm3P1uQGm6zfhBQvdNfZpOOXSlNVE1BngXWzYQAfLvxPRBFylou9x7L74ckZTLo5O6e7n2vm2/NnSkfqIWSS/eBQOpggqedaEf73j+sya+P4n97cX7MOtQm4flWtr9pKaXnoM0jyd5z6yrLKLDU7aacHNwNTnQTaCUieUH1Ci/2yKTQLQmt1sovaXuAB9qDA8DZY0fVCog4nEKMfz3xBHQk8v+JJqDHkkEdtXNobpaIPq7+N9ADxM4ZPhvJfLY4OMfO26w7vlKzpD6SVTeD1PzbGKLKY4b+NpRuetRg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=vnW0b30z1hZxcDTtu+jo26HtPh1/6e5wL35MPxOOP64=;
 b=M+kIdi67rmWsPY9EpG7swpy6PloOr5Di2owF1+0buHT1NOcSz6xVWZVoYZApJQ5ONapKv0SQ2tdjSjkX6QmTh4mwbYBXAJDyWGn0cYqdCaJ3d/O1acOXOwF2LrM9IASFbWuF8ouxhBZB0L2cw/NRIfch6jqPQq4E000RdwsLxlM=
Subject: Re: [PATCH 06/10] stubdom/xenstored: Fix uninitialised variables in
 lu_read_state()
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>, Xen-devel
	<xen-devel@lists.xenproject.org>
CC: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210212153953.4582-1-andrew.cooper3@citrix.com>
 <20210212153953.4582-7-andrew.cooper3@citrix.com>
 <cea38f47-7dc3-ea67-104a-e5b1899a7f3b@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <226c35c2-2280-8353-74d0-cc35e7f84de6@citrix.com>
Date: Fri, 12 Feb 2021 17:01:25 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
In-Reply-To: <cea38f47-7dc3-ea67-104a-e5b1899a7f3b@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0270.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:194::23) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 4aba431c-4dc5-488d-2972-08d8cf77d87b
X-MS-TrafficTypeDiagnostic: BY5PR03MB5348:
X-Microsoft-Antispam-PRVS: <BY5PR03MB5348FF5CDBFD8FBCAF497559BA8B9@BY5PR03MB5348.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7219;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 4aba431c-4dc5-488d-2972-08d8cf77d87b
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 Feb 2021 17:01:31.6031
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: pPqF5w9KQqHx1myJvmhftdpQFEvCBMji1ptm1fvTpjcMZesscGKBHskYgrQpzWP42u5nrQxg0zB3peCgGGKoF4xuQVl5QlzeR+T+Bitb2xw=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR03MB5348
X-OriginatorOrg: citrix.com

On 12/02/2021 16:08, Jürgen Groß wrote:
> On 12.02.21 16:39, Andrew Cooper wrote:
>> Various versions of gcc, when compiling with -Og, complain:
>>
>>   xenstored_control.c: In function 'lu_read_state':
>>   xenstored_control.c:540:11: error: 'state.size' is used
>> uninitialized in this
>>   function [-Werror=uninitialized]
>>     if (state.size == 0)
>>         ~~~~~^~~~~
>>   xenstored_control.c:543:6: error: 'state.buf' may be used
>> uninitialized in
>>   this function [-Werror=maybe-uninitialized]
>>     pre = state.buf;
>>     ~~~~^~~~~~~~~~~
>>   xenstored_control.c:550:23: error: 'state.buf' may be used
>> uninitialized in
>>   this function [-Werror=maybe-uninitialized]
>>      (void *)head - state.buf < state.size;
>>                     ~~~~~^~~~
>>   xenstored_control.c:550:35: error: 'state.size' may be used
>> uninitialized in
>>   this function [-Werror=maybe-uninitialized]
>>      (void *)head - state.buf < state.size;
>>                                 ~~~~~^~~~~
>>
>> Interestingly, this is only in the stubdom build.  I can't identify any
>> relevant differences vs the regular tools build.
>
> But I can. :-)
>
> lu_get_dump_state() is empty in the stubdom case (this will change when
> LU is implemented for stubdom, too). In the daemon case this function
> sets all of the relevant fields.

So I spotted that.  This instance of lu_read_state() is already within
the ifdefery, so doesn't get to see the empty stub (I think).

Also, I'd expect the compiler to complain at -O2 if it spotted that code.

>
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>
> Reviewed-by: Juergen Gross <jgross@suse.com>

Thanks,

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Feb 12 17:03:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 Feb 2021 17:03:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84391.158271 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAbrG-0002NM-29; Fri, 12 Feb 2021 17:03:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84391.158271; Fri, 12 Feb 2021 17:03:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAbrF-0002NF-VL; Fri, 12 Feb 2021 17:03:53 +0000
Received: by outflank-mailman (input) for mailman id 84391;
 Fri, 12 Feb 2021 17:03:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o46S=HO=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lAbrE-0002N9-6S
 for xen-devel@lists.xenproject.org; Fri, 12 Feb 2021 17:03:52 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ba1f990a-4cd9-4d6d-bc79-85ff6111e4fa;
 Fri, 12 Feb 2021 17:03:50 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ba1f990a-4cd9-4d6d-bc79-85ff6111e4fa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613149430;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=LQ0+AEYf/60avuYzwyCIOrw+QdwYj4rEREvb8mJXd5k=;
  b=GED2R+ijDJ3Scs6AsrFI5Aeew3dHArmyRmgq+HOQUDdmQ8nhrPTUH9PV
   MyeyT/1nm9/oze+NjFSblPeRxPF+wyrX8KRPymSReEeLPQvY+SWxd9ym0
   xGG/aLK1WtdaZnIlDXErl3zBY1pX7FyN8Mh0bIAM3tDtUGvXEvSgk8TTk
   8=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: tjazUEmBJWcxYkmGR61S1Md7C4CHgOo6lMk1cDin8DEP1nI/odxPPIP5b+N1fyVbYpnJ4olEBh
 Uu2OSXEG7b8VJMfhvSePIPbISr5MxSFLa1VzfvBrJitfcpoW0Ssh+bKQi7ghHQI8G4Zqllmxku
 FAUKwyUoS1RH2pMLcUmRkTwo33d6/yZ5uVMlD0fxBmBpLZNOTTwnSjO6nUxdp6gNlLBgvFaIoO
 JiO6OHMgnQEyrL5rMve4HBiTAQF8cCHnkmc8v1k5jTneMzX5XjTRYos4VNOJS97NfyhIF+EGxD
 FWU=
X-SBRS: 5.2
X-MesageID: 37092377
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,174,1610427600"; 
   d="scan'208";a="37092377"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=PfaWYIA5vucTa0GvEBg4TGQkJsCPZX6Fd9YGpRFcaK3JSr9gtWDvpe+ukbWte1kZxtDjl7jALNBFv1XwbxnAXINGoVGjWtGDEtP9d0a4dCIxMXW2fUL0HVbyNZB7Chjoxi/3pV4vK20sHUQr+m9W7sUP0w4DBi4t5w6GXYlCnl3UkC9OzXiRlfLKgOqAYwujXkJxSK502YFxKmwV3h74TNhRymr/r5JfINuFep4T+v3h01tTK3muqTO75P+K4IIJsKtqiSK0e8Vks2Y3g5m5xSLjLzvXRDMfAiZks/xHCX1RLCH+n10k4OGSTDKNtQSCtrrbGqFX4o0yjqLBM7XmMg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=pBABoIiwDpqr3bWUxNXG+E/gzKX9JbPvZWXP0lV2Oe8=;
 b=HFNedp1CnfXGJo7Jfk9gF9X9EZYp9pDi0eO4Cl8INNVt+z9Jn2SeZFspG6imDVUrNYzsUttDPsG++gaJP8mbO7CcfPsGnc8p+Py/XuLE27iCWOkEBJisMby1S5V6K0ixQPEkzm4Z9/7CCOyhNNV0cmh1qjS5vRac/gsIR4d20S64zM0rHrEkn7Ih9CJ+k8O3dAAj2a/fC+bBozGoQ1cSzPjJx9ZRqnK0+d553imxvOnFkVBrIYrlG3AVLfdH761nWYxQmEyMsSCn3JyN6JA2HuaN/CgfaEtlGHVqW/zm1seKtlUCSbkxLarSFcFzInL6QZCYsJOu1bvojYCFYYQ0DA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=pBABoIiwDpqr3bWUxNXG+E/gzKX9JbPvZWXP0lV2Oe8=;
 b=Y46An8oxdwvDiBuw5QeZSwBqwKTDu7IwY0pz4XAdnJqoI5FyH6R/fRvSEOAibIP0dapVCGUh76FNqQQJg+UUIHYlAP6ZGyDwR2PJLKiaL70eNa9TOhPtV1WegPkLFbrZsBfrvTlvUCe8WvlKo7Hcx25Lk1hDUKDGvaLL5MginaY=
Subject: Re: [PATCH 10/10] tools/libs: Write out an ABI analysis when
 abi-dumper is available
To: Jan Beulich <jbeulich@suse.com>
CC: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>, Juergen Gross
	<jgross@suse.com>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210212153953.4582-1-andrew.cooper3@citrix.com>
 <20210212153953.4582-11-andrew.cooper3@citrix.com>
 <6bac2a21-736d-9b07-4ee0-4654b5273ce5@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <8de5cecb-68eb-5a5d-687b-9b8e6d0721ba@citrix.com>
Date: Fri, 12 Feb 2021 17:03:43 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
In-Reply-To: <6bac2a21-736d-9b07-4ee0-4654b5273ce5@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LNXP265CA0051.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:5d::15) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 8f2d7714-0df7-4d98-c790-08d8cf782a02
X-MS-TrafficTypeDiagnostic: SJ0PR03MB5776:
X-Microsoft-Antispam-PRVS: <SJ0PR03MB5776C0F1B2ACAE0CAF5EC0BEBA8B9@SJ0PR03MB5776.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:626;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 8f2d7714-0df7-4d98-c790-08d8cf782a02
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 Feb 2021 17:03:48.3763
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: EmTpjhMLPhmHNu0ObG/IdrUBOR94sYFUfOCkixhxOkBBQ3VQ3OOifwMMwgQNx5n5dJ758A/mKiM0ej0NVZaRbfrN4+RLWiJ69posrCaFzpc=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5776
X-OriginatorOrg: citrix.com

On 12/02/2021 16:12, Jan Beulich wrote:
> On 12.02.2021 16:39, Andrew Cooper wrote:
>> --- a/tools/libs/libs.mk
>> +++ b/tools/libs/libs.mk
>> @@ -49,6 +49,8 @@ PKG_CONFIG_LOCAL := $(PKG_CONFIG_DIR)/$(PKG_CONFIG)
>>  LIBHEADER ?= $(LIB_FILE_NAME).h
>>  LIBHEADERS = $(foreach h, $(LIBHEADER), $(XEN_INCLUDE)/$(h))
>>  
>> +PKG_ABI := lib$(LIB_FILE_NAME).so.$(MAJOR).$(MINOR)-$(XEN_COMPILE_ARCH)-abi.dump
> Don't you mean $(XEN_TARGET_ARCH) here?

Yes, I do.  Will fix up.

Thanks,

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Feb 12 18:02:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 Feb 2021 18:02:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84407.158296 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAcll-0007ym-LI; Fri, 12 Feb 2021 18:02:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84407.158296; Fri, 12 Feb 2021 18:02:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAcll-0007yf-HN; Fri, 12 Feb 2021 18:02:17 +0000
Received: by outflank-mailman (input) for mailman id 84407;
 Fri, 12 Feb 2021 18:02:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o46S=HO=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lAclk-0007ya-Nm
 for xen-devel@lists.xenproject.org; Fri, 12 Feb 2021 18:02:16 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 37b347a6-ba12-4c97-9bee-4f83c9989f03;
 Fri, 12 Feb 2021 18:02:14 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 37b347a6-ba12-4c97-9bee-4f83c9989f03
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613152934;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version;
  bh=jPG4MEbtuunoyNDu7GuAbGxANDQvxig1cec2/Yjf4ZA=;
  b=JMglDUzUp5VFEKoz80mnk1myCC8Jf5OJteEhKvRk0Um9HsXjyIzgBp6b
   Zm/Z8Yll3EVa3B/2CJCJ8AKqpXB5pMmuCp/PNjUxoNQcPHEt05rcP5h33
   H31VHfTyV73wda0++adT6UxHzqJyDbyCD+wwx+y0oW6KYpglgv3DQE1IO
   I=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: qXwvS7ETl5dTnO3owjy4p1Rx9r95Gyc4KyttyFm0G+AwtCB8byxjDBFjtTpBLXEhAl4T5XKg6Y
 CDJMJl0g22J9VYA++fjPIxz2Kh/4bpbnuCmT80/aU0s8OoKkikvH8wmbG8GVFNEqkpcaB0FUI/
 qc0073opEKtxojGdGN5tCPiIRF5nyt+Zq4Hm0QmR4HOxgd+kKOzGChXHueuGFEmqQFXaq1vOK9
 gXl+a/JOTW9gSx4bATc+05PK3cDvXPvx4WtXqnHmby3mkR2HIxkkIaoW8PZqFTUwseOJW5xVLy
 ihk=
X-SBRS: 5.2
X-MesageID: 37530461
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,174,1610427600"; 
   d="scan'208";a="37530461"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Juergen Gross <jgross@suse.com>
Subject: [PATCH v1.1 10/10] tools/libs: Write out an ABI analysis when abi-dumper is available
Date: Fri, 12 Feb 2021 18:01:43 +0000
Message-ID: <20210212180143.22477-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210212153953.4582-11-andrew.cooper3@citrix.com>
References: <20210212153953.4582-11-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Ian Jackson <iwj@xenproject.org>
CC: Wei Liu <wl@xen.org>
CC: Juergen Gross <jgross@suse.com>

v2:
 * Swap XEN_COMPILE_ARCH for XEN_TARGET_ARCH
 * Swap abi-dumper for $(ABI_DUMPER)
---
 tools/libs/libs.mk | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/tools/libs/libs.mk b/tools/libs/libs.mk
index ac68996ab2..a82d1783cc 100644
--- a/tools/libs/libs.mk
+++ b/tools/libs/libs.mk
@@ -49,6 +49,8 @@ PKG_CONFIG_LOCAL := $(PKG_CONFIG_DIR)/$(PKG_CONFIG)
 LIBHEADER ?= $(LIB_FILE_NAME).h
 LIBHEADERS = $(foreach h, $(LIBHEADER), $(XEN_INCLUDE)/$(h))
 
+PKG_ABI := lib$(LIB_FILE_NAME).so.$(MAJOR).$(MINOR)-$(XEN_TARGET_ARCH)-abi.dump
+
 $(PKG_CONFIG_LOCAL): PKG_CONFIG_PREFIX = $(XEN_ROOT)
 $(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_INCLUDE)
 $(PKG_CONFIG_LOCAL): PKG_CONFIG_LIBDIR = $(CURDIR)
@@ -94,6 +96,13 @@ lib$(LIB_FILE_NAME).so.$(MAJOR): lib$(LIB_FILE_NAME).so.$(MAJOR).$(MINOR)
 lib$(LIB_FILE_NAME).so.$(MAJOR).$(MINOR): $(PIC_OBJS) libxen$(LIBNAME).map
 	$(CC) $(LDFLAGS) $(PTHREAD_LDFLAGS) -Wl,$(SONAME_LDFLAG) -Wl,lib$(LIB_FILE_NAME).so.$(MAJOR) $(SHLIB_LDFLAGS) -o $@ $(PIC_OBJS) $(LDUSELIBS) $(APPEND_LDFLAGS)
 
+# If abi-dumper is available, write out the ABI analysis
+ifneq ($(ABI_DUMPER),)
+libs: $(PKG_ABI)
+$(PKG_ABI): lib$(LIB_FILE_NAME).so.$(MAJOR).$(MINOR) headers.lst
+	$(ABI_DUMPER) $< -o $@ -public-headers headers.lst -lver $(MAJOR).$(MINOR)
+endif
+
 .PHONY: install
 install: build
 	$(INSTALL_DIR) $(DESTDIR)$(libdir)
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Feb 12 18:32:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 Feb 2021 18:32:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84416.158314 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAdF8-0002XD-2A; Fri, 12 Feb 2021 18:32:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84416.158314; Fri, 12 Feb 2021 18:32:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAdF7-0002X6-VH; Fri, 12 Feb 2021 18:32:37 +0000
Received: by outflank-mailman (input) for mailman id 84416;
 Fri, 12 Feb 2021 18:32:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAdF6-0002Wy-Rx; Fri, 12 Feb 2021 18:32:36 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAdF6-0004B7-Fa; Fri, 12 Feb 2021 18:32:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAdF6-0003v2-7p; Fri, 12 Feb 2021 18:32:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lAdF6-0000sw-7M; Fri, 12 Feb 2021 18:32:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=jqWR7mr2210CwytGPx9gXIXBgHmiQJGRgoXkOybvzzE=; b=WD4Fm0DR92dWE6xg3dMLBDxVM6
	o1fQ6t9VVhg/LATxHICL/BxTCxeeBD0dGP8CpbqarRJbQY7GG15GGqLkKK9LbWvCqQEdmcCdIN1jj
	+u2nv6YaZ1M7c7608QYd7UQpBe1d74ScEWx6N8mD/eV0hvxaTdHG0QA9CS2DxPDp+XB0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [linux-5.4 bisection] complete test-arm64-arm64-xl-credit1
Message-Id: <E1lAdF6-0000sw-7M@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 12 Feb 2021 18:32:36 +0000

branch xen-unstable
xenbranch xen-unstable
job test-arm64-arm64-xl-credit1
testid guest-start

Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
  Bug introduced:  a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Bug not present: acc402fa5bf502d471d50e3d495379f093a7f9e4
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/159305/


  commit a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Author: David Woodhouse <dwmw@amazon.co.uk>
  Date:   Wed Jan 13 13:26:02 2021 +0000
  
      xen: Fix event channel callback via INTX/GSI
      
      [ Upstream commit 3499ba8198cad47b731792e5e56b9ec2a78a83a2 ]
      
      For a while, event channel notification via the PCI platform device
      has been broken, because we attempt to communicate with xenstore before
      we even have notifications working, with the xs_reset_watches() call
      in xs_init().
      
      We tend to get away with this on Xen versions below 4.0 because we avoid
      calling xs_reset_watches() anyway, because xenstore might not cope with
      reading a non-existent key. And newer Xen *does* have the vector
      callback support, so we rarely fall back to INTX/GSI delivery.
      
      To fix it, clean up a bit of the mess of xs_init() and xenbus_probe()
      startup. Call xs_init() directly from xenbus_init() only in the !XS_HVM
      case, deferring it to be called from xenbus_probe() in the XS_HVM case
      instead.
      
      Then fix up the invocation of xenbus_probe() to happen either from its
      device_initcall if the callback is available early enough, or when the
      callback is finally set up. This means that the hack of calling
      xenbus_probe() from a workqueue after the first interrupt, or directly
      from the PCI platform device setup, is no longer needed.
      
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Link: https://lore.kernel.org/r/20210113132606.422794-2-dwmw2@infradead.org
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/linux-5.4/test-arm64-arm64-xl-credit1.guest-start.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/linux-5.4/test-arm64-arm64-xl-credit1.guest-start --summary-out=tmp/159305.bisection-summary --basis-template=158387 --blessings=real,real-bisect,real-retry linux-5.4 test-arm64-arm64-xl-credit1 guest-start
Searching for failure / basis pass:
 159238 fail [host=laxton0] / 158681 [host=laxton1] 158624 [host=rochester0] 158616 ok.
Failure / basis pass flights: 159238 / 158616
Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 5e1942063dc3633f7a127aa2b159c13507580d21 c530a75c1e6a472b0eb9558310b518f0dfcd8860 472276f59bba2b22bb882c5c6f5479754e68d467 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7
Basis pass 09f983f0c7fc0db79a5f6c883ec3510d424c369c c530a75c1e6a472b0eb9558310b518f0dfcd8860 96a9acfc527964dc5ab7298862a0cd8aa5fffc6a 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 452ddbe3592b141b05a7e0676f09c8ae07f98fdd
Generating revisions with ./adhoc-revtuple-generator  git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git#09f983f0c7fc0db79a5f6c883ec3510d424c369c-5e1942063dc3633f7a127aa2b159c13507580d21 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#96a9acfc527964dc5ab7298862a0cd8aa5fffc6a-472276f59bba2b22bb882c5c6f5479754e68d467 git://xenbits.xen.org/qemu-xen.git#7ea4288\
 95af2840d85c524f0bd11a38aac308308-7ea428895af2840d85c524f0bd11a38aac308308 git://xenbits.xen.org/osstest/seabios.git#ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e-ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e git://xenbits.xen.org/xen.git#452ddbe3592b141b05a7e0676f09c8ae07f98fdd-ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7
Loaded 15001 nodes in revision graph
Searching for test results:
 158609 [host=rochester1]
 158616 pass 09f983f0c7fc0db79a5f6c883ec3510d424c369c c530a75c1e6a472b0eb9558310b518f0dfcd8860 96a9acfc527964dc5ab7298862a0cd8aa5fffc6a 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 452ddbe3592b141b05a7e0676f09c8ae07f98fdd
 158624 [host=rochester0]
 158681 [host=laxton1]
 158707 fail irrelevant
 158716 fail irrelevant
 158748 fail 131f8d8a889a5ca66a835eea82bba043ac91a7cf c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e f8708b0ed6d549d1d29b8b5cc287f1f2b642bc63
 158765 fail 131f8d8a889a5ca66a835eea82bba043ac91a7cf c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 6677b5a3577c16501fbc51a3341446905bd21c38
 158796 fail irrelevant
 158818 fail irrelevant
 158841 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158863 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158881 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158929 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea56ebf67dd55483105aa9f9996a48213e78337e 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158962 fail irrelevant
 159023 fail irrelevant
 159129 fail irrelevant
 159200 fail 5e1942063dc3633f7a127aa2b159c13507580d21 c530a75c1e6a472b0eb9558310b518f0dfcd8860 472276f59bba2b22bb882c5c6f5479754e68d467 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7
 159288 pass 09f983f0c7fc0db79a5f6c883ec3510d424c369c c530a75c1e6a472b0eb9558310b518f0dfcd8860 96a9acfc527964dc5ab7298862a0cd8aa5fffc6a 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 452ddbe3592b141b05a7e0676f09c8ae07f98fdd
 159289 fail 5e1942063dc3633f7a127aa2b159c13507580d21 c530a75c1e6a472b0eb9558310b518f0dfcd8860 472276f59bba2b22bb882c5c6f5479754e68d467 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7
 159290 fail 38f35023fd301abeb01cfd81e73caa2e4e7ec0b1 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159291 pass d8a487e673abf46c69c901bb25da54e9bc7ba45e c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159292 pass 5a1d7bb7d333849eb7d3ab5ebfbf9805b2cd46c9 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159238 fail 5e1942063dc3633f7a127aa2b159c13507580d21 c530a75c1e6a472b0eb9558310b518f0dfcd8860 472276f59bba2b22bb882c5c6f5479754e68d467 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7
 159293 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159294 pass 9cec63a3aacbcaee8d09aecac2ca2f8820efcc70 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159296 pass 8ab3478335ad8fc08f14ec73251b084fe02b3ebb c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159297 pass acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159299 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159301 pass acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159303 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159304 pass acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159305 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
Searching for interesting versions
 Result found: flight 158616 (pass), for basis pass
 Result found: flight 159200 (fail), for basis failure (at ancestor ~185)
 Repro found: flight 159288 (pass), for basis pass
 Repro found: flight 159289 (fail), for basis failure
 0 revisions at acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
No revisions left to test, checking graph state.
 Result found: flight 159297 (pass), for last pass
 Result found: flight 159299 (fail), for first failure
 Repro found: flight 159301 (pass), for last pass
 Repro found: flight 159303 (fail), for first failure
 Repro found: flight 159304 (pass), for last pass
 Repro found: flight 159305 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
  Bug introduced:  a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Bug not present: acc402fa5bf502d471d50e3d495379f093a7f9e4
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/159305/


  commit a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Author: David Woodhouse <dwmw@amazon.co.uk>
  Date:   Wed Jan 13 13:26:02 2021 +0000
  
      xen: Fix event channel callback via INTX/GSI
      
      [ Upstream commit 3499ba8198cad47b731792e5e56b9ec2a78a83a2 ]
      
      For a while, event channel notification via the PCI platform device
      has been broken, because we attempt to communicate with xenstore before
      we even have notifications working, with the xs_reset_watches() call
      in xs_init().
      
      We tend to get away with this on Xen versions below 4.0 because we avoid
      calling xs_reset_watches() anyway, because xenstore might not cope with
      reading a non-existent key. And newer Xen *does* have the vector
      callback support, so we rarely fall back to INTX/GSI delivery.
      
      To fix it, clean up a bit of the mess of xs_init() and xenbus_probe()
      startup. Call xs_init() directly from xenbus_init() only in the !XS_HVM
      case, deferring it to be called from xenbus_probe() in the XS_HVM case
      instead.
      
      Then fix up the invocation of xenbus_probe() to happen either from its
      device_initcall if the callback is available early enough, or when the
      callback is finally set up. This means that the hack of calling
      xenbus_probe() from a workqueue after the first interrupt, or directly
      from the PCI platform device setup, is no longer needed.
      
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Link: https://lore.kernel.org/r/20210113132606.422794-2-dwmw2@infradead.org
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
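The init-ordering fix described in the commit message above can be sketched roughly as follows. This is hypothetical, heavily simplified C, not the actual kernel code; the names `callback_ready`, `probed`, `do_probe`, `initcall` and `callback_setup` are invented stand-ins for the real kernel state and entry points:

```c
#include <assert.h>
#include <stdbool.h>

/* Invented flags standing in for the real kernel state. */
static bool callback_ready;   /* event-channel callback is usable */
static bool probed;           /* xenbus probe has run */

static void do_probe(void)
{
    probed = true;            /* stands in for xenbus_probe() */
}

/* Runs at device_initcall time: probe only if the callback already works. */
static void initcall(void)
{
    if (callback_ready)
        do_probe();
}

/* Runs when the callback is finally set up (e.g. the INTX/GSI case). */
static void callback_setup(void)
{
    callback_ready = true;
    if (!probed)
        do_probe();           /* deferred probe; no workqueue hack needed */
}
```

The point of the sketch is that the probe happens exactly once, either eagerly from the initcall or lazily once the callback exists, which is why the workqueue-after-first-interrupt hack becomes unnecessary.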

pnmtopng: 143 colors found
Revision graph left in /home/logs/results/bisect/linux-5.4/test-arm64-arm64-xl-credit1.guest-start.{dot,ps,png,html,svg}.
----------------------------------------
159305: tolerable ALL FAIL

flight 159305 linux-5.4 real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/159305/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:

 test-arm64-arm64-xl-credit1  14 guest-start             fail baseline untested


jobs:
 test-arm64-arm64-xl-credit1                                  fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Fri Feb 12 18:56:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 Feb 2021 18:56:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84422.158331 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAdcA-0004W9-74; Fri, 12 Feb 2021 18:56:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84422.158331; Fri, 12 Feb 2021 18:56:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAdcA-0004W2-3z; Fri, 12 Feb 2021 18:56:26 +0000
Received: by outflank-mailman (input) for mailman id 84422;
 Fri, 12 Feb 2021 18:56:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dd8J=HO=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lAdc8-0004Vx-TZ
 for xen-devel@lists.xenproject.org; Fri, 12 Feb 2021 18:56:25 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 27280d76-3980-46a0-ae1a-aab2d5d38db5;
 Fri, 12 Feb 2021 18:56:24 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 0222164D9A;
 Fri, 12 Feb 2021 18:56:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 27280d76-3980-46a0-ae1a-aab2d5d38db5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1613156183;
	bh=Mq2Ilw0F0UEjsE+TXzIgC/+QC2l/0mn9P4z0OJzZIM8=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=si0957cY9Kxof26z3e30q/IssxsPmZvbViYHC2uZCJcmJZ8xvkDcRyb9KrYvDmdsG
	 CAtW4blmwqVzEOukCpDtoWcNYDRXaEqFcHsD3PhQ41mkJYiZkcRmOpO849uJNgbq6E
	 CTQ5y7OCsbpX7b2OQO0Ffms6kjTL76Nn1L7BA9ta517Z73D9rD+MYvgRWTlqtaecMF
	 TuJ6bvSpVqk9+wo+ErS8EsMgnqQoR+gdBtxtJQLn9HHlFOkmFqVm+uFt6XdZw3l3j/
	 Om075m3DJokyHl/THAsuAqx16hqTFX/09jj9E02mHuSbdW/SiV7iEHXzS30LfcUz+l
	 RU1aTNh4DA1EQ==
Date: Fri, 12 Feb 2021 10:56:17 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Rahul Singh <Rahul.Singh@arm.com>, 
    "lucmiccio@gmail.com" <lucmiccio@gmail.com>, 
    xen-devel <xen-devel@lists.xenproject.org>, 
    Bertrand Marquis <Bertrand.Marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: Re: [PATCH v2] xen/arm: fix gnttab_need_iommu_mapping
In-Reply-To: <170a971e-8e10-bb9d-a324-e09e40ed994c@xen.org>
Message-ID: <alpine.DEB.2.21.2102121055220.3234@sstabellini-ThinkPad-T480s>
References: <20210208184932.23468-1-sstabellini@kernel.org> <B96B5E21-0600-4664-899D-D38A18DE7A8C@arm.com> <alpine.DEB.2.21.2102091226560.8948@sstabellini-ThinkPad-T480s> <EFFD35EA-378B-4C5C-8485-7EA5265E89E4@arm.com> <4e4f7d25-6f5f-1016-b1c9-7aa902d637b8@xen.org>
 <ECC82E19-3504-4E0E-B3EA-D0E46DD842C6@arm.com> <c573b3a0-186d-626e-6670-f8fc28285e3d@xen.org> <BFC5858A-3631-48E1-AB87-40EECF95FA66@arm.com> <489c1b89-67f0-5d47-d527-3ea580b7cc43@xen.org> <alpine.DEB.2.21.2102111253060.9128@sstabellini-ThinkPad-T480s>
 <170a971e-8e10-bb9d-a324-e09e40ed994c@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-649706624-1613156183=:3234"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-649706624-1613156183=:3234
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Fri, 12 Feb 2021, Julien Grall wrote:
> On 11/02/2021 20:55, Stefano Stabellini wrote:
> > On Thu, 11 Feb 2021, Julien Grall wrote:
> > > On 11/02/2021 13:20, Rahul Singh wrote:
> > > > > On 10 Feb 2021, at 7:52 pm, Julien Grall <julien@xen.org> wrote:
> > > > > On 10/02/2021 18:08, Rahul Singh wrote:
> > > > > > Hello Julien,
> > > > > > > On 10 Feb 2021, at 5:34 pm, Julien Grall <julien@xen.org> wrote:
> > > > > > > On 10/02/2021 15:06, Rahul Singh wrote:
> > > > > > > > > On 9 Feb 2021, at 8:36 pm, Stefano Stabellini
> > > > > > > > > <sstabellini@kernel.org> wrote:
> > > > > > > > > 
> > > > > > > > > On Tue, 9 Feb 2021, Rahul Singh wrote:
> > > > > > > > > > > On 8 Feb 2021, at 6:49 pm, Stefano Stabellini
> > > > > > > > > > > <sstabellini@kernel.org> wrote:
> > > > > > > > > > > 
> > > > > > > > > > > Commit 91d4eca7add broke gnttab_need_iommu_mapping on ARM.
> > > > > > > > > > > The offending chunk is:
> > > > > > > > > > > 
> > > > > > > > > > > #define gnttab_need_iommu_mapping(d)                    \
> > > > > > > > > > > -    (is_domain_direct_mapped(d) && need_iommu(d))
> > > > > > > > > > > +    (is_domain_direct_mapped(d) && need_iommu_pt_sync(d))
> > > > > > > > > > > 
> > > > > > > > > > > On ARM we need gnttab_need_iommu_mapping to be true for
> > > > > > > > > > > dom0
> > > > > > > > > > > when it is
> > > > > > > > > > > directly mapped and IOMMU is enabled for the domain, like
> > > > > > > > > > > the
> > > > > > > > > > > old check
> > > > > > > > > > > did, but the new check is always false.
> > > > > > > > > > > 
> > > > > > > > > > > In fact, need_iommu_pt_sync is defined as
> > > > > > > > > > > dom_iommu(d)->need_sync and
> > > > > > > > > > > need_sync is set as:
> > > > > > > > > > > 
> > > > > > > > > > >     if ( !is_hardware_domain(d) || iommu_hwdom_strict )
> > > > > > > > > > >         hd->need_sync = !iommu_use_hap_pt(d);
> > > > > > > > > > > 
> > > > > > > > > > > iommu_use_hap_pt(d) means that the page-table used by the
> > > > > > > > > > > IOMMU is the
> > > > > > > > > > > P2M. It is true on ARM. need_sync means that you have a
> > > > > > > > > > > separate IOMMU
> > > > > > > > > > > page-table and it needs to be updated for every change.
> > > > > > > > > > > need_sync is set
> > > > > > > > > > > to false on ARM. Hence, gnttab_need_iommu_mapping(d) is
> > > > > > > > > > > false
> > > > > > > > > > > too,
> > > > > > > > > > > which is wrong.
> > > > > > > > > > > 
> > > > > > > > > > > As a consequence, when using PV network from a domU on a
> > > > > > > > > > > system where
> > > > > > > > > > > IOMMU is on from Dom0, I get:
> > > > > > > > > > > 
> > > > > > > > > > > (XEN) smmu: /smmu@fd800000: Unhandled context fault:
> > > > > > > > > > > fsr=0x402, iova=0x8424cb148, fsynr=0xb0001, cb=0
> > > > > > > > > > > [   68.290307] macb ff0e0000.ethernet eth0: DMA bus error:
> > > > > > > > > > > HRESP not OK
> > > > > > > > > > > 
> > > > > > > > > > > The fix is to go back to something along the lines of the
> > > > > > > > > > > old
> > > > > > > > > > > implementation of gnttab_need_iommu_mapping.
> > > > > > > > > > > 
> > > > > > > > > > > Signed-off-by: Stefano Stabellini
> > > > > > > > > > > <stefano.stabellini@xilinx.com>
> > > > > > > > > > > Fixes: 91d4eca7add
> > > > > > > > > > > Backport: 4.12+
> > > > > > > > > > > 
> > > > > > > > > > > ---
> > > > > > > > > > > 
> > > > > > > > > > > Given the severity of the bug, I would like to request
> > > > > > > > > > > this
> > > > > > > > > > > patch to be
> > > > > > > > > > > backported to 4.12 too, even if 4.12 is security-fixes
> > > > > > > > > > > only
> > > > > > > > > > > since Oct
> > > > > > > > > > > 2020.
> > > > > > > > > > > 
> > > > > > > > > > > For the 4.12 backport, we can use iommu_enabled() instead
> > > > > > > > > > > of
> > > > > > > > > > > is_iommu_enabled() in the implementation of
> > > > > > > > > > > gnttab_need_iommu_mapping.
> > > > > > > > > > > 
> > > > > > > > > > > Changes in v2:
> > > > > > > > > > > - improve commit message
> > > > > > > > > > > - add is_iommu_enabled(d) to the check
> > > > > > > > > > > ---
> > > > > > > > > > > xen/include/asm-arm/grant_table.h | 2 +-
> > > > > > > > > > > 1 file changed, 1 insertion(+), 1 deletion(-)
> > > > > > > > > > > 
> > > > > > > > > > > diff --git a/xen/include/asm-arm/grant_table.h
> > > > > > > > > > > b/xen/include/asm-arm/grant_table.h
> > > > > > > > > > > index 6f585b1538..0ce77f9a1c 100644
> > > > > > > > > > > --- a/xen/include/asm-arm/grant_table.h
> > > > > > > > > > > +++ b/xen/include/asm-arm/grant_table.h
> > > > > > > > > > > @@ -89,7 +89,7 @@ int replace_grant_host_mapping(unsigned long gpaddr, mfn_t mfn,
> > > > > > > > > > >      (((i) >= nr_status_frames(t)) ? INVALID_GFN : (t)->arch.status_gfn[i])
> > > > > > > > > > > 
> > > > > > > > > > > #define gnttab_need_iommu_mapping(d)                    \
> > > > > > > > > > > -    (is_domain_direct_mapped(d) && need_iommu_pt_sync(d))
> > > > > > > > > > > +    (is_domain_direct_mapped(d) && is_iommu_enabled(d))
> > > > > > > > > > > 
> > > > > > > > > > > #endif /* __ASM_GRANT_TABLE_H__ */
> > > > > > > > > > 
> > > > > > > > > > I tested the patch and while creating the guest I observed
> > > > > > > > > > the
> > > > > > > > > > below warning from Linux for block device.
> > > > > > > > > > https://elixir.bootlin.com/linux/v4.3/source/drivers/block/xen-blkback/xenbus.c#L258
> > > > > > > > > 
> > > > > > > > > So you are creating a guest with "xl create" in dom0 and you
> > > > > > > > > see
> > > > > > > > > the
> > > > > > > > > warnings below printed by the Dom0 kernel? I imagine the domU
> > > > > > > > > has
> > > > > > > > > a
> > > > > > > > > virtual "disk" of some sort.
> > > > > > > > Yes, you are right: I am trying to create the guest with "xl create",
> > > > > > > > and before that I created a logical volume and tried to attach it
> > > > > > > > to the domain with "xl block-attach". I observed this error with
> > > > > > > > the "xl block-attach" command.
> > > > > > > > This issue occurs after applying this patch: it introduces calls to
> > > > > > > > iommu_legacy_{,un}map() to map the grant pages in the IOMMU, which
> > > > > > > > touches the page-tables. I am not sure, but what I observed suggests
> > > > > > > > that something is written incorrectly when iommu_unmap() unmaps the
> > > > > > > > pages, and that is what triggers the issue.
> > > > > > > 
> > > > > > > Can you clarify what you mean by "written wrong"? What sort of
> > > > > > > error
> > > > > > > do you see in the iommu_unmap()?
> > > > > > I might be wrong, but my understanding is that on ARM the P2M is
> > > > > > always shared between the CPU and the IOMMU, and map_grant_ref() is
> > > > > > written so that iommu_legacy_{,un}map() should only be called when
> > > > > > the P2M is not shared.
> > > > > 
> > > > > map_grant_ref() will call the IOMMU if gnttab_need_iommu_mapping()
> > > > > returns
> > > > > true. I don't really see where this is assuming the P2M is not shared.
> > > > > 
> > > > > In fact, on x86, this will always be false for HVM domain (they
> > > > > support
> > > > > both shared and separate page-tables).
> > > > > 
> > > > > > Since the P2M is shared, calling iommu_map() will overwrite the
> > > > > > existing GFN -> MFN entry (for DOM0, the GFN is the same as the MFN),
> > > > > > and calling iommu_unmap() will then unmap that GFN -> MFN entry from
> > > > > > the page-table.
> > > > > AFAIK, there should be nothing mapped at that GFN because the page
> > > > > belongs
> > > > > to the guest. At worse, we would overwrite a mapping that is the same.
> > > > Sorry, I should have mentioned before: the backend/frontend is dom0 in
> > > > this case and the GFN is mapped. I am trying to attach the block device
> > > > to DOM0.
> > > 
> > > Ah, your log makes a lot more sense now. Thank you for the clarification!
> > > 
> > > So yes, I agree that iommu_{,un}map() will do the wrong thing if the
> > > frontend and backend are in the same domain.
> > > 
> > > I don't know what the state is in Linux, but from the Xen PoV it should
> > > be possible to have the backend and frontend in the same domain.
> > > 
> > > I think we want to ignore the IOMMU mapping request when the domain is the
> > > same. Can you try this small untested patch:
> > 
> > Given that all the pages already owned by the domain should already be
> > in the shared pagetable between MMU and IOMMU, there is no need to
> > create a second mapping. In fact it is guaranteed to overlap with an
> > existing mapping.
> 
> It is **almost** guaranteed :). I can see a few reasons for this not to be
> valid:
>    - Using the domain shared info in a grant
>    - With the right timing, it would be possible for a different vCPU to
> remove the mapping after the P2M walk
> 
> That said, I feel this is not expected behavior for a guest domain, so it is
> not something we should care about, at least for now.
> 
> > In theory, if guest_physmap_add_entry returned -EEXIST if a mapping
> > identical to the one we want to add is already in the pagetable, in this
> > instance we would see -EEXIST being returned.
> 
> While I agree that the GFN and MFN would be the same, the mappings may still
> not be identical, because the P2M type (and potentially the permissions) may
> differ.
> 
> However, guest_physmap_add_entry() doesn't do such a check today; it will
> just happily replace any mapping. It would be good to harden the P2M code, as
> this is not the first time we have seen reports of mappings being overwritten.
> 
> I actually have a task on my todo list, but I never got the chance to spend
> time on it.
> 
> > 
> > Based on that, I cannot think of unwanted side-effects for this patch.
> > It looks OK to me.
> > 
> > Given that it solves a different issue, I think it should be a separate
> > patch from [1]. Julien, are you OK with that or would you rather merge
> > the two?
> 
> They are two distinct issues. In fact, the bug has always been present on Arm.
> I will send a separate patch.

Excellent, thank you!
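The predicate change discussed in this thread can be illustrated with a small sketch. This is simplified, hypothetical C; the `struct dom` fields are invented stand-ins for Xen's real `is_domain_direct_mapped()`, `need_iommu_pt_sync()` and `is_iommu_enabled()` predicates, and `need_mapping_*` stand in for the two versions of the `gnttab_need_iommu_mapping()` macro:

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for a domain's relevant state. For a dom0 on ARM:
 * direct-mapped, IOMMU enabled, and a shared P2M (so no separate
 * IOMMU page-table needing sync). */
struct dom {
    bool direct_mapped;   /* is_domain_direct_mapped(d) */
    bool iommu_enabled;   /* is_iommu_enabled(d) */
    bool need_pt_sync;    /* need_iommu_pt_sync(d): separate IOMMU page-table */
};

/* Check after commit 91d4eca7add: always false on ARM, because the IOMMU
 * shares the P2M there, so need_pt_sync is false. */
static bool need_mapping_broken(const struct dom *d)
{
    return d->direct_mapped && d->need_pt_sync;
}

/* Check from Stefano's patch: true for a direct-mapped dom0 with the
 * IOMMU enabled, matching the pre-91d4eca7add behaviour. */
static bool need_mapping_fixed(const struct dom *d)
{
    return d->direct_mapped && d->iommu_enabled;
}
```

Evaluating both predicates on an ARM-dom0-like domain shows why the old check silently stopped requesting IOMMU mappings for grants while the patched check restores them.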
--8323329-649706624-1613156183=:3234--


From xen-devel-bounces@lists.xenproject.org Fri Feb 12 19:36:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 Feb 2021 19:36:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84429.158348 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAeEV-0008Fm-4o; Fri, 12 Feb 2021 19:36:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84429.158348; Fri, 12 Feb 2021 19:36:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAeEV-0008Ff-1d; Fri, 12 Feb 2021 19:36:03 +0000
Received: by outflank-mailman (input) for mailman id 84429;
 Fri, 12 Feb 2021 19:36:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o46S=HO=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lAeET-0008Fa-C2
 for xen-devel@lists.xenproject.org; Fri, 12 Feb 2021 19:36:01 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 484e50de-db2c-48db-8d81-41cf6074c5f9;
 Fri, 12 Feb 2021 19:35:59 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 484e50de-db2c-48db-8d81-41cf6074c5f9
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613158559;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=JLB/N6vyzFsgB8EuuJagPk41G9hnHiknEi9TCRPI1GA=;
  b=bZjStSUHCeE4cclXrn/eWcS27m9lwO1b0W50P07WoCYE9mXxAok4qbYl
   JS24ARlFJGuE5IcJXwsHrTAmCl1+LQQMaoxyS3XhUrZcjoJjkRXiHAkjY
   aA9P6gNObk09hBLJ8fufhg65Gz9zdnj8y8SIR0aLlL4LGV1Yk+0sFUpgJ
   8=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: A8QwpC+iga+gRWHwRpliqUFyDXuq907TblMdGliqZGio+xmdnNqVvU495jcbYxm3u2gm1d1xDp
 pb65gXsKZTjJp0J9rJBIRhIEuBqBKo8G9skbag90FNNefHbCOCR6oXZ59RVezxVQp3mS/995p8
 EWuHuNEo8j1nwb/BGUiNatBMrAYhYPHRBEJEYKiQtQy2H9FDWypQerD7NrfCiAuRUe5PflS6lX
 A40Zsy/NDXl8X0RlXfg/8Hchd+Z4X50vLGm14np11gVnrxgELunmD2lHRPR9BJEOz1y0TF916V
 /3Y=
X-SBRS: 5.2
X-MesageID: 37104963
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,174,1610427600"; 
   d="scan'208";a="37104963"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=KYYVIoizhj5lhw8fojnVOx1bAIbE02ffPYtbY05UTi7HGrFlm8RnUpE5o2Cn9EFI0YUDVIdndviPdfwp9mJxtriQNbrRSpcXoYpJhoqZbKyX6x1KuyKEbZ5KnBg48jsodv1dDPWgnSo1Mgkog8c1aVWyXBnIBiIXnj/GjNjhlU1OZPBBDS40puNsxNjoDip8tf98Ji1H9n6OqQt2DwFUpWhaP1w56UU02tCmb81/pHChDf5sl4GvksIVOg3FnKLjxqiFyQj9eIdzx5/cmEH5MxVr9ggm2hSx1G8Jxc1UeSvtXOUVGxalYXLCwX6NbbfvEWQLO4UnRfOcxcfo7HuWDg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=JLB/N6vyzFsgB8EuuJagPk41G9hnHiknEi9TCRPI1GA=;
 b=FSQLgRPWSBl0diNYK7dyaCMFlBY5LyRLbBABcyDFVbRJQ1HP72SpTgvWL7AsI6t3wxHP+ORtjVkHhk8AldTqjns3cL3emmKKloss2oTHaD9o0Z4pnww8/P0uoaw3FXC43edMRDuaUits22naziZpX5NfAE42Bct5tomwJzH//IeeBLTX8YoYMBGDvaYK248smJ4HRijs5fJkkmgjBWOil/pke0BJTt3eifYnFGzJQxCaXY8XZ+tL+hf9014mIGp3Lm2FQBgR0iyBcElZ3QqaVDLlOXiF4MrmQc7+IQ3jiuWIafN2ZPfwMOtZHm7KQmiPyXMm7oLudWavOA65FcpwAw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=JLB/N6vyzFsgB8EuuJagPk41G9hnHiknEi9TCRPI1GA=;
 b=ZL5vHXQnByWxkzV4z7SGErgzHe8L7Xqz2A0iHQLSG0/t1TIiGZC44slgTSTY+aeXbEKsQgA76Ffz3BQwztspd0pbRySf3uj7RcbohRdiN37mFPPkAZFtam6dsesQuLF+0oQrJFhI2tvE1tWFT8qf9WkCk/LtE9+KRZiTDmmMAo4=
Subject: Re: [PATCH 03/10] tools/libxg: Fix uninitialised variable in
 meminit()
To: Julien Grall <julien@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
CC: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>
References: <20210212153953.4582-1-andrew.cooper3@citrix.com>
 <20210212153953.4582-4-andrew.cooper3@citrix.com>
 <2437c26c-2bb6-ec43-37bd-3051b97eff56@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <2747d0fd-888c-6de0-fe6b-77a7b00ab46d@citrix.com>
Date: Fri, 12 Feb 2021 19:35:28 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
In-Reply-To: <2437c26c-2bb6-ec43-37bd-3051b97eff56@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0018.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:150::23) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 8d23a8cf-7af7-4898-7c28-08d8cf8d697d
X-MS-TrafficTypeDiagnostic: BY5PR03MB5282:
X-Microsoft-Antispam-PRVS: <BY5PR03MB5282DEA6482DD0613F80F320BA8B9@BY5PR03MB5282.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7219;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 8d23a8cf-7af7-4898-7c28-08d8cf8d697d
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 Feb 2021 19:35:54.8979
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: PvrwqNHmUVcNUMr0of6lam2lr+UEVX/Ez1w4tyeNWm8GzuFLyqK7pMCAi2EutEfcBwebLfujvBmBEKV3wa+am6OdOyk6KnI70DJfk45Hv2c=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR03MB5282
X-OriginatorOrg: citrix.com

On 12/02/2021 15:55, Julien Grall wrote:
> Hi Andrew,
>
> On 12/02/2021 15:39, Andrew Cooper wrote:
>> Various versions of gcc, when compiling with -Og, complain:
>>
>>    xg_dom_arm.c: In function 'meminit':
>>    xg_dom_arm.c:420:19: error: 'p2m_size' may be used uninitialized
>> in this function [-Werror=maybe-uninitialized]
>>      420 |     dom->p2m_size = p2m_size;
>>          |     ~~~~~~~~~~~~~~^~~~~~~~~~
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>
> This was reported nearly 3 years ago (see [1]), and it is pretty sad
> that it was never merged :(.

:( We've got far too many patches which fall through the cracks like this.

>
>> ---
>> CC: Ian Jackson <iwj@xenproject.org>
>> CC: Wei Liu <wl@xen.org>
>> CC: Stefano Stabellini <sstabellini@kernel.org>
>> CC: Julien Grall <julien@xen.org>
>>
>> Julien/Stefano: I can't work out how this variable is supposed to
>> work, and the fact that it isn't a straight accumulation across the
>> RAM banks looks suspect.
>
> It looks buggy, but the P2M is never used on Arm. In fact, you sent a
> patch a year ago to drop it (see [2]). It would be nice to revive it.


That series was committed more than a year ago - ee21f10d70^..97e34ad22d
- and tbh, I'd forgotten about it.

In light of that, I think I'll just delete the p2m_size references
here.  It's easy to prove correctness via inspection, and it removes a
dubious construct entirely.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Feb 12 20:02:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 Feb 2021 20:02:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84433.158360 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAedm-0002hh-7o; Fri, 12 Feb 2021 20:02:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84433.158360; Fri, 12 Feb 2021 20:02:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAedm-0002ha-4n; Fri, 12 Feb 2021 20:02:10 +0000
Received: by outflank-mailman (input) for mailman id 84433;
 Fri, 12 Feb 2021 20:02:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o46S=HO=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lAedl-0002hV-Ez
 for xen-devel@lists.xenproject.org; Fri, 12 Feb 2021 20:02:09 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id acbcb1ba-be5d-4367-898a-fee14ba32728;
 Fri, 12 Feb 2021 20:02:07 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: acbcb1ba-be5d-4367-898a-fee14ba32728
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613160127;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version;
  bh=VdsXABp7VUh6OG8RFFA9bEJy3dHargOx/0u9ZIwHPNw=;
  b=ZhLDYYhrqSk9B6ZfJ6tLHtoZgZvFGfdfVl7BwMCrMUNuBVrP2idRp04C
   29BGl5ENFIdmw+rXrB+6voPrJDbzbkA8V13tpl1YDqWEEJ5Cs/NflhgpC
   WCzf+bk8yFpuvtEdCIX0dF5NCPVJnKac/IQeAA1Kb9dxUiBOS5Iu8v0vI
   o=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: LWEkUM+oXQstXv+mNowpsf79ZzHXf6KklcwLD47XRLOrOjuZsJBDy3hl9drAHJhg4Ix0B/GRwr
 DNikCgEvPFa8tyAwj7DFH9H/ztDxJczet4QS287Ko+GrTzX3gZUf37isbkYeg9tLzUby7xus3Q
 CEX+qJ2JqlDgWBzkBaHx+pxIY8KZC0bhSHy/gWXpwyUrj3jlkR3gQ5mtZ+Bmscv4i7NohnGWbF
 7ah8Pu3P8ItK8RxMPEzl9lBU7pz7f4RScR/gMRONOGWO+uw6grY1Toz+KcTlpHpVO3ghO9SiG3
 UZE=
X-SBRS: 5.2
X-MesageID: 37165914
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,174,1610427600"; 
   d="scan'208";a="37165914"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>
Subject: [PATCH v1.1 03/10] tools/libxg: Drop stale p2m logic from ARM's meminit()
Date: Fri, 12 Feb 2021 20:01:39 +0000
Message-ID: <20210212200139.26911-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210212153953.4582-4-andrew.cooper3@citrix.com>
References: <20210212153953.4582-4-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain

Various versions of gcc, when compiling with -Og, complain:

  xg_dom_arm.c: In function 'meminit':
  xg_dom_arm.c:420:19: error: 'p2m_size' may be used uninitialized in this function [-Werror=maybe-uninitialized]
    420 |     dom->p2m_size = p2m_size;
        |     ~~~~~~~~~~~~~~^~~~~~~~~~

This is actually entirely stale code since ee21f10d70^..97e34ad22d, which
removed the 1:1 identity p2m for translated domains.

Drop the write of dom->p2m_size and the p2m_size local variable.  Reposition
the p2m_size field in struct xc_dom_image and correct some stale
documentation.

This change really ought to have been part of the original cleanup series.

No actual change to how ARM domains are constructed.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Ian Jackson <iwj@xenproject.org>
CC: Wei Liu <wl@xen.org>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>

v2:
 * Delete stale p2m_size infrastructure.
---
 tools/include/xenguest.h      | 5 ++---
 tools/libs/guest/xg_dom_arm.c | 5 -----
 2 files changed, 2 insertions(+), 8 deletions(-)

diff --git a/tools/include/xenguest.h b/tools/include/xenguest.h
index 775cf34c04..217022b6e7 100644
--- a/tools/include/xenguest.h
+++ b/tools/include/xenguest.h
@@ -145,6 +145,7 @@ struct xc_dom_image {
      * eventually copied into guest context.
      */
     xen_pfn_t *pv_p2m;
+    xen_pfn_t p2m_size;         /* number of pfns covered by pv_p2m */
 
     /* physical memory
      *
@@ -154,12 +155,10 @@ struct xc_dom_image {
      *
      * An ARM guest has GUEST_RAM_BANKS regions of RAM, with
      * rambank_size[i] pages in each. The lowest RAM address
-     * (corresponding to the base of the p2m arrays above) is stored
-     * in rambase_pfn.
+     * is stored in rambase_pfn.
      */
     xen_pfn_t rambase_pfn;
     xen_pfn_t total_pages;
-    xen_pfn_t p2m_size;         /* number of pfns covered by p2m */
     struct xc_dom_phys *phys_pages;
 #if defined (__arm__) || defined(__aarch64__)
     xen_pfn_t rambank_size[GUEST_RAM_BANKS];
diff --git a/tools/libs/guest/xg_dom_arm.c b/tools/libs/guest/xg_dom_arm.c
index 94948d2b20..b4c24f15fb 100644
--- a/tools/libs/guest/xg_dom_arm.c
+++ b/tools/libs/guest/xg_dom_arm.c
@@ -373,7 +373,6 @@ static int meminit(struct xc_dom_image *dom)
     const uint64_t modsize = dtb_size + ramdisk_size;
     const uint64_t ram128mb = bankbase[0] + (128<<20);
 
-    xen_pfn_t p2m_size;
     uint64_t bank0end;
 
     assert(dom->rambase_pfn << XC_PAGE_SHIFT == bankbase[0]);
@@ -409,16 +408,12 @@ static int meminit(struct xc_dom_image *dom)
 
         ramsize -= banksize;
 
-        p2m_size = ( bankbase[i] + banksize - bankbase[0] ) >> XC_PAGE_SHIFT;
-
         dom->rambank_size[i] = banksize >> XC_PAGE_SHIFT;
     }
 
     assert(dom->rambank_size[0] != 0);
     assert(ramsize == 0); /* Too much RAM is rejected above */
 
-    dom->p2m_size = p2m_size;
-
     /* setup initial p2m and allocate guest memory */
     for ( i = 0; i < GUEST_RAM_BANKS && dom->rambank_size[i]; i++ )
     {
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Feb 12 20:12:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 Feb 2021 20:12:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84438.158372 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAenz-0003jV-Bb; Fri, 12 Feb 2021 20:12:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84438.158372; Fri, 12 Feb 2021 20:12:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAenz-0003jO-89; Fri, 12 Feb 2021 20:12:43 +0000
Received: by outflank-mailman (input) for mailman id 84438;
 Fri, 12 Feb 2021 20:12:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=c5RG=HO=kernel.org=pr-tracker-bot@srs-us1.protection.inumbo.net>)
 id 1lAeny-0003jJ-7t
 for xen-devel@lists.xenproject.org; Fri, 12 Feb 2021 20:12:42 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e0a0294e-a9a2-412f-ad16-8420414fdf23;
 Fri, 12 Feb 2021 20:12:41 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPS id BC30364E00;
 Fri, 12 Feb 2021 20:12:40 +0000 (UTC)
Received: from pdx-korg-docbuild-2.ci.codeaurora.org (localhost.localdomain
 [127.0.0.1])
 by pdx-korg-docbuild-2.ci.codeaurora.org (Postfix) with ESMTP id AC1E660A2B;
 Fri, 12 Feb 2021 20:12:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e0a0294e-a9a2-412f-ad16-8420414fdf23
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1613160760;
	bh=kAG+MDkUcogL/a24ByGyYkIfAvRYJiLx0RY3ITP4sXI=;
	h=Subject:From:In-Reply-To:References:Date:To:Cc:From;
	b=LeqvodM7vNQRhcCy4kpTkP/yu2q9sY3z0lVn8ql4qnbXcqlqBXqdZ/4gaRwOkv5eE
	 QYSOqAsYwlsV708lxc0Yv9OfnncI+juKimab0V9vA4LPGC+yL7brD6afM/qrsB2odD
	 /KsPheEvgP2N+lU/Ke2W9a9TCqsjptfI1fTRR3dxQ/BJmYAdi8vNhzyKGoJk6SQYvI
	 tQk2rERbgwAqjBs/FMA0gc9Usbg8kqYO68yNbUzJtFa/9djRltF3EH1Q0Y1QOfLlrN
	 i72HaJdKHV0l0o9dkzBXQSxpg1NMcij7wB4MG/zzfMQ557Nf6HBnv7qvtWOWIVBKNg
	 Z0FMzCM5GnAXA==
Subject: Re: [GIT PULL] xen: branch for v5.11-rc8
From: pr-tracker-bot@kernel.org
In-Reply-To: <20210212060111.22013-1-jgross@suse.com>
References: <20210212060111.22013-1-jgross@suse.com>
X-PR-Tracked-List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
X-PR-Tracked-Message-Id: <20210212060111.22013-1-jgross@suse.com>
X-PR-Tracked-Remote: git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.11-rc8-tag
X-PR-Tracked-Commit-Id: c4295ab0b485b8bc50d2264bcae2acd06f25caaf
X-PR-Merge-Tree: torvalds/linux.git
X-PR-Merge-Refname: refs/heads/master
X-PR-Merge-Commit-Id: 2dbbaae5f7b3855697e2decc5de79c7574403254
Message-Id: <161316076064.13717.12655994858613087035.pr-tracker-bot@kernel.org>
Date: Fri, 12 Feb 2021 20:12:40 +0000
To: Juergen Gross <jgross@suse.com>
Cc: torvalds@linux-foundation.org, linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com

The pull request you sent on Fri, 12 Feb 2021 07:01:11 +0100:

> git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.11-rc8-tag

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/2dbbaae5f7b3855697e2decc5de79c7574403254

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/prtracker.html


From xen-devel-bounces@lists.xenproject.org Fri Feb 12 20:56:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 Feb 2021 20:56:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84444.158390 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAfUZ-0007Uv-SD; Fri, 12 Feb 2021 20:56:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84444.158390; Fri, 12 Feb 2021 20:56:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAfUZ-0007Uo-P6; Fri, 12 Feb 2021 20:56:43 +0000
Received: by outflank-mailman (input) for mailman id 84444;
 Fri, 12 Feb 2021 20:56:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAfUY-0007Ug-Bj; Fri, 12 Feb 2021 20:56:42 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAfUY-0006XW-39; Fri, 12 Feb 2021 20:56:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAfUX-0001BS-Q5; Fri, 12 Feb 2021 20:56:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lAfUX-0008WJ-Pb; Fri, 12 Feb 2021 20:56:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=3ygHQGGe44XqnwP4K0XfJpKMwx7GLy36TNYPkaR0WCA=; b=3gTFVsgtMs8WWhXnLHSD3TGkC9
	nUwpG4MQqXWAz9a1aQBmbx9zQ9fFf0qZvoPPclp5bcIOyAAlQmuOud1DkvMckOPalF9Tg3kjPhJC3
	RN4+6RUEaBO+yyUL1teV5Q073sovahX2QNleuGYJ0wIYJvRiyZ5Cu1ySGyDysem91usE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159250-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 159250: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=83339e21d05c824ebc9131d644f25c23d0e41ecf
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 12 Feb 2021 20:56:41 +0000

flight 159250 qemu-mainline real [real]
flight 159307 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/159250/
http://logs.test-lab.xenproject.org/osstest/logs/159307/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                83339e21d05c824ebc9131d644f25c23d0e41ecf
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  176 days
Failing since        152659  2020-08-21 14:07:39 Z  175 days  344 attempts
Testing same since   159250  2021-02-11 11:40:58 Z    1 days    1 attempts

------------------------------------------------------------
398 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 110291 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Feb 12 21:39:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 Feb 2021 21:39:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84450.158405 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAg9c-0002uF-7n; Fri, 12 Feb 2021 21:39:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84450.158405; Fri, 12 Feb 2021 21:39:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAg9c-0002u8-32; Fri, 12 Feb 2021 21:39:08 +0000
Received: by outflank-mailman (input) for mailman id 84450;
 Fri, 12 Feb 2021 21:39:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EDRu=HO=gmail.com=alistair23@srs-us1.protection.inumbo.net>)
 id 1lAg9a-0002u3-OC
 for xen-devel@lists.xenproject.org; Fri, 12 Feb 2021 21:39:06 +0000
Received: from mail-io1-xd33.google.com (unknown [2607:f8b0:4864:20::d33])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5ffab29c-e438-43c6-bbc9-a92e731618b8;
 Fri, 12 Feb 2021 21:39:05 +0000 (UTC)
Received: by mail-io1-xd33.google.com with SMTP id e133so662271iof.8
 for <xen-devel@lists.xenproject.org>; Fri, 12 Feb 2021 13:39:05 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5ffab29c-e438-43c6-bbc9-a92e731618b8
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc:content-transfer-encoding;
        bh=zz+K8ViQbR5P7cwxMLQZ2JvbtEjOgwVqQlaY++GZmjs=;
        b=okcpfQZNHJk1Be7bxSr6RwmHYT2YpvVGqIbIFZN47YBLqAtGs4OUCgdNm5ZgUlAYD/
         Xfg0U9Dz3g6E9jldOIEfMrcPXNunefE72Et8343hW3LE8QThVXKmSpEvMYN7vwZm/t11
         I3G31nJvr6yTumHqu2/ZFMxuG8fL0SKE9qfb4J/zsKD2UpjO5Mwlo3l6lKNGYEEf9DkX
         NLdhR8eMHOJAgN/w5PtiC0B7JUBuYQPug0n4h3Ro79Nd6ujgPi6rGp6VkpnAH7lX98TB
         DUPvdnD0gd2azLgJK/NwjDEmzN6lW8E6B2tT2xowUMRsIw5JQ62gSsd09ESfRRjveW2o
         UfVA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc:content-transfer-encoding;
        bh=zz+K8ViQbR5P7cwxMLQZ2JvbtEjOgwVqQlaY++GZmjs=;
        b=htx4I66TaziKGpMhaMxsboxQ6+e75FcHcoWczE4G5vLz4cKk9s0bxK/e22gqWRhHFl
         Mj0ddnV1OG0hpDcjYMyMF8OS+bM3QR930VxaTEM3KqmRSZS8Qo8pVeUIzr717RhQPd0V
         /Y432XB9HAt/oaADZCqqLRJSyFGhWCY6wa8PFqFknpDx8YXgR2VNSAV68mGFM4iKEH/c
         LYLy2d+1RB2JWOOrXnZk8Ui55KEeCgHjLKKOb8uYEi5sIzLIBwyAm+0MtXdpCtXu4Hlw
         myVrJW1n5FfayfoPZHbX++FK3h0pCvPMAMa0Nd9lanp/P0iRbtLW+QZShzIf8qdvgnm+
         Cdow==
X-Gm-Message-State: AOAM530Um86/90uVRx3bhXi1Xugolvd/Mc9a24j8ulJkJYIcVCXoZehq
	NDlm4qYlovkvT/L8cnqGh2DBh26uUxbIzkIp+uw=
X-Google-Smtp-Source: ABdhPJzIPFQ4dxwLawQzAV54jJd8sPnN9VkMaBieOJymlzBGoY/jAcE1kFV1Jl1mpR+5pYPl3zFEZYfopONhXgLuKsA=
X-Received: by 2002:a6b:7d42:: with SMTP id d2mr3739673ioq.176.1613165944675;
 Fri, 12 Feb 2021 13:39:04 -0800 (PST)
MIME-Version: 1.0
References: <20210211171945.18313-1-alex.bennee@linaro.org> <20210211171945.18313-6-alex.bennee@linaro.org>
In-Reply-To: <20210211171945.18313-6-alex.bennee@linaro.org>
From: Alistair Francis <alistair23@gmail.com>
Date: Fri, 12 Feb 2021 13:38:24 -0800
Message-ID: <CAKmqyKNQHhwZfTvqEU8EzQQs2tXN5gbrWC6ooc6ERpYjS1gNhw@mail.gmail.com>
Subject: Re: [PATCH v2 5/7] docs: move generic-loader documentation into the
 main manual
To: Alex Bennée <alex.bennee@linaro.org>
Cc: "qemu-devel@nongnu.org Developers" <qemu-devel@nongnu.org>, julien@xen.org, stefano.stabellini@linaro.org, 
	stefano.stabellini@xilinx.com, andre.przywara@arm.com, 
	stratos-dev@op-lists.linaro.org, 
	"open list:X86" <xen-devel@lists.xenproject.org>, Alistair Francis <alistair@alistair23.me>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, Feb 11, 2021 at 9:21 AM Alex Bennée <alex.bennee@linaro.org> wrote:
>
> We might as well surface this useful information in the manual so
> users can find it easily. It is a fairly simple conversion to rst with
> the only textual fixes being QemuOps to QemuOpts.
>
> Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
> Message-Id: <20201105175153.30489-6-alex.bennee@linaro.org>

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

>
> ---
> v2
>   - fix whitespace
>   - update MAINTAINERS
> ---
>  docs/generic-loader.txt        |  92 --------------------------
>  docs/system/generic-loader.rst | 117 +++++++++++++++++++++++++++++++++
>  docs/system/index.rst          |   1 +
>  MAINTAINERS                    |   2 +-
>  4 files changed, 119 insertions(+), 93 deletions(-)
>  delete mode 100644 docs/generic-loader.txt
>  create mode 100644 docs/system/generic-loader.rst
>
> diff --git a/docs/generic-loader.txt b/docs/generic-loader.txt
> deleted file mode 100644
> index a9603a2af7..0000000000
> --- a/docs/generic-loader.txt
> +++ /dev/null
> @@ -1,92 +0,0 @@
> -Copyright (c) 2016 Xilinx Inc.
> -
> -This work is licensed under the terms of the GNU GPL, version 2 or later.  See
> -the COPYING file in the top-level directory.
> -
> -
> -The 'loader' device allows the user to load multiple images or values into
> -QEMU at startup.
> -
> -Loading Data into Memory Values
> --------------------------------
> -The loader device allows memory values to be set from the command line. This
> -can be done by following the syntax below:
> -
> -     -device loader,addr=<addr>,data=<data>,data-len=<data-len>
> -                   [,data-be=<data-be>][,cpu-num=<cpu-num>]
> -
> -    <addr>      - The address to store the data in.
> -    <data>      - The value to be written to the address. The maximum size of
> -                  the data is 8 bytes.
> -    <data-len>  - The length of the data in bytes. This argument must be
> -                  included if the data argument is.
> -    <data-be>   - Set to true if the data to be stored on the guest should be
> -                  written as big endian data. The default is to write little
> -                  endian data.
> -    <cpu-num>   - The number of the CPU's address space where the data should
> -                  be loaded. If not specified the address space of the first
> -                  CPU is used.
> -
> -All values are parsed using the standard QemuOps parsing. This allows the user
> -to specify any values in any format supported. By default the values
> -will be parsed as decimal. To use hex values the user should prefix the number
> -with a '0x'.
> -
> -An example of loading value 0x8000000e to address 0xfd1a0104 is:
> -    -device loader,addr=0xfd1a0104,data=0x8000000e,data-len=4
> -
> -Setting a CPU's Program Counter
> --------------------------------
> -The loader device allows the CPU's PC to be set from the command line. This
> -can be done by following the syntax below:
> -
> -     -device loader,addr=<addr>,cpu-num=<cpu-num>
> -
> -    <addr>      - The value to use as the CPU's PC.
> -    <cpu-num>   - The number of the CPU whose PC should be set to the
> -                  specified value.
> -
> -All values are parsed using the standard QemuOps parsing. This allows the user
> -to specify any values in any format supported. By default the values
> -will be parsed as decimal. To use hex values the user should prefix the number
> -with a '0x'.
> -
> -An example of setting CPU 0's PC to 0x8000 is:
> -    -device loader,addr=0x8000,cpu-num=0
> -
> -Loading Files
> --------------
> -The loader device also allows files to be loaded into memory. It can load ELF,
> -U-Boot, and Intel HEX executable formats as well as raw images.  The syntax is
> -shown below:
> -
> -    -device loader,file=<file>[,addr=<addr>][,cpu-num=<cpu-num>][,force-raw=<raw>]
> -
> -    <file>      - A file to be loaded into memory
> -    <addr>      - The memory address where the file should be loaded. This is
> -                  required for raw images and ignored for non-raw files.
> -    <cpu-num>   - This specifies the CPU that should be used. This is an
> -                  optional argument and will cause the CPU's PC to be set to
> -                  the memory address where the raw file is loaded or the entry
> -                  point specified in the executable format header. This option
> -                  should only be used for the boot image.
> -                  This will also cause the image to be written to the specified
> -                  CPU's address space. If not specified, the default is CPU 0.
> -    <force-raw> - Setting force-raw=on forces the file to be treated as a raw
> -                  image.  This can be used to load supported executable formats
> -                  as if they were raw.
> -
> -All values are parsed using the standard QemuOps parsing. This allows the user
> -to specify any values in any format supported. By default the values
> -will be parsed as decimal. To use hex values the user should prefix the number
> -with a '0x'.
> -
> -An example of loading an ELF file which CPU0 will boot is shown below:
> -    -device loader,file=./images/boot.elf,cpu-num=0
> -
> -Restrictions and ToDos
> -----------------------
> - - At the moment it is just assumed that if you specify a cpu-num then you
> -   want to set the PC as well. This might not always be the case. In future
> -   the internal state 'set_pc' (which exists in the generic loader now) should
> -   be exposed to the user so that they can choose if the PC is set or not.
> diff --git a/docs/system/generic-loader.rst b/docs/system/generic-loader.rst
> new file mode 100644
> index 0000000000..6bf8a4eb48
> --- /dev/null
> +++ b/docs/system/generic-loader.rst
> @@ -0,0 +1,117 @@
> +..
> +   Copyright (c) 2016, Xilinx Inc.
> +
> +This work is licensed under the terms of the GNU GPL, version 2 or later.  See
> +the COPYING file in the top-level directory.
> +
> +Generic Loader
> +--------------
> +
> +The 'loader' device allows the user to load multiple images or values into
> +QEMU at startup.
> +
> +Loading Data into Memory Values
> +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> +The loader device allows memory values to be set from the command line. This
> +can be done by following the syntax below::
> +
> +   -device loader,addr=<addr>,data=<data>,data-len=<data-len> \
> +                   [,data-be=<data-be>][,cpu-num=<cpu-num>]
> +
> +``<addr>``
> +  The address to store the data in.
> +
> +``<data>``
> +  The value to be written to the address. The maximum size of the data
> +  is 8 bytes.
> +
> +``<data-len>``
> +  The length of the data in bytes. This argument must be included if
> +  the data argument is.
> +
> +``<data-be>``
> +  Set to true if the data to be stored on the guest should be written
> +  as big endian data. The default is to write little endian data.
> +
> +``<cpu-num>``
> +  The number of the CPU's address space where the data should be
> +  loaded. If not specified the address space of the first CPU is used.
> +
> +All values are parsed using the standard QemuOpts parsing. This allows the user
> +to specify any values in any format supported. By default the values
> +will be parsed as decimal. To use hex values the user should prefix the number
> +with a '0x'.
> +
> +An example of loading value 0x8000000e to address 0xfd1a0104 is::
> +
> +    -device loader,addr=0xfd1a0104,data=0x8000000e,data-len=4
> +
> +Setting a CPU's Program Counter
> +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> +
> +The loader device allows the CPU's PC to be set from the command line. This
> +can be done by following the syntax below::
> +
> +     -device loader,addr=<addr>,cpu-num=<cpu-num>
> +
> +``<addr>``
> +  The value to use as the CPU's PC.
> +
> +``<cpu-num>``
> +  The number of the CPU whose PC should be set to the specified value.
> +
> +All values are parsed using the standard QemuOpts parsing. This allows the user
> +to specify any values in any format supported. By default the values
> +will be parsed as decimal. To use hex values the user should prefix the number
> +with a '0x'.
> +
> +An example of setting CPU 0's PC to 0x8000 is::
> +
> +    -device loader,addr=0x8000,cpu-num=0
> +
> +Loading Files
> +^^^^^^^^^^^^^
> +
> +The loader device also allows files to be loaded into memory. It can load ELF,
> +U-Boot, and Intel HEX executable formats as well as raw images.  The syntax is
> +shown below::
> +
> +    -device loader,file=<file>[,addr=<addr>][,cpu-num=<cpu-num>][,force-raw=<raw>]
> +
> +``<file>``
> +  A file to be loaded into memory
> +
> +``<addr>``
> +  The memory address where the file should be loaded. This is required
> +  for raw images and ignored for non-raw files.
> +
> +``<cpu-num>``
> +  This specifies the CPU that should be used. This is an
> +  optional argument and will cause the CPU's PC to be set to the
> +  memory address where the raw file is loaded or the entry point
> +  specified in the executable format header. This option should only
> +  be used for the boot image. This will also cause the image to be
> +  written to the specified CPU's address space. If not specified, the
> +  default is CPU 0.
> +
> +``<force-raw>``
> +  Setting force-raw=on forces the file to be treated as a raw image.
> +  This can be used to load supported executable formats as if they
> +  were raw.
> +
> +All values are parsed using the standard QemuOpts parsing. This allows the user
> +to specify any values in any format supported. By default the values
> +will be parsed as decimal. To use hex values the user should prefix the number
> +with a '0x'.
> +
> +An example of loading an ELF file which CPU0 will boot is shown below::
> +
> +    -device loader,file=./images/boot.elf,cpu-num=0
> +
> +Restrictions and ToDos
> +^^^^^^^^^^^^^^^^^^^^^^
> +
> +At the moment it is just assumed that if you specify a cpu-num then
> +you want to set the PC as well. This might not always be the case. In
> +future the internal state 'set_pc' (which exists in the generic loader
> +now) should be exposed to the user so that they can choose if the PC
> +is set or not.
> +
> +
> diff --git a/docs/system/index.rst b/docs/system/index.rst
> index 625b494372..cee1c83540 100644
> --- a/docs/system/index.rst
> +++ b/docs/system/index.rst
> @@ -25,6 +25,7 @@ Contents:
>     usb
>     ivshmem
>     linuxboot
> +   generic-loader
>     vnc-security
>     tls
>     gdb
> diff --git a/MAINTAINERS b/MAINTAINERS
> index ab6877dae6..774b3ca7a5 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -1991,7 +1991,7 @@ M: Alistair Francis <alistair@alistair23.me>
>  S: Maintained
>  F: hw/core/generic-loader.c
>  F: include/hw/core/generic-loader.h
> -F: docs/generic-loader.txt
> +F: docs/system/generic-loader.rst
>
>  Guest Loader
>  M: Alex Bennée <alex.bennee@linaro.org>
> --
> 2.20.1
>
>
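As a quick illustration of the `-device loader` option syntax documented in the patch above, the sketch below assembles the argument strings from the docs' own examples. The `loader_args` helper is my own illustration, not part of the patch or of QEMU:

```python
def loader_args(**opts):
    """Assemble a QEMU ``-device loader,...`` option string from keyword
    arguments, following the syntax documented above. Underscores in
    keyword names become hyphens (data_len -> data-len)."""
    parts = ["loader"]
    for key, value in opts.items():
        parts.append(f"{key.replace('_', '-')}={value}")
    return ",".join(parts)

# Reproduces the documented example: load value 0x8000000e at address
# 0xfd1a0104 as a 4-byte write.
print(loader_args(addr="0xfd1a0104", data="0x8000000e", data_len=4))
# loader,addr=0xfd1a0104,data=0x8000000e,data-len=4

# Reproduces the documented example: set CPU 0's PC to 0x8000.
print(loader_args(addr="0x8000", cpu_num=0))
# loader,addr=0x8000,cpu-num=0
```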


From xen-devel-bounces@lists.xenproject.org Fri Feb 12 21:40:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 Feb 2021 21:40:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84452.158417 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAgAT-00033h-Lj; Fri, 12 Feb 2021 21:40:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84452.158417; Fri, 12 Feb 2021 21:40:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAgAT-00033Z-Hv; Fri, 12 Feb 2021 21:40:01 +0000
Received: by outflank-mailman (input) for mailman id 84452;
 Fri, 12 Feb 2021 21:40:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EDRu=HO=gmail.com=alistair23@srs-us1.protection.inumbo.net>)
 id 1lAgAS-00030H-KF
 for xen-devel@lists.xenproject.org; Fri, 12 Feb 2021 21:40:00 +0000
Received: from mail-il1-x12a.google.com (unknown [2607:f8b0:4864:20::12a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7ee7315b-b2e2-4b8f-bda5-5808c2261b31;
 Fri, 12 Feb 2021 21:39:59 +0000 (UTC)
Received: by mail-il1-x12a.google.com with SMTP id g9so545072ilc.3
 for <xen-devel@lists.xenproject.org>; Fri, 12 Feb 2021 13:39:59 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7ee7315b-b2e2-4b8f-bda5-5808c2261b31
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc:content-transfer-encoding;
        bh=Jp4CCHwWK240T1rzKA41Yn74M6YlFRWIOSQ6mAraccc=;
        b=RWzc+u5PJMONDFlBzkgnv9gzrcnhqfOiQi8SEexMjFuNfqDtxyTl7d3D+DU4N5rn8X
         dNqYO/cLgfTHOw/rxa920ZsS57PySmZH4/DD6nNdeEbBna1uKhu0c3bnCIFWHTKD1uSC
         7ys4Z250E3DdBz6ssIM8sihIfzgsE2CE4OGpXryRjoUSfBGSfIeJA8vBBlkukgO5I75v
         omdwytRj4y+Q7H2d6R5QrTLUMape5tPICR0gUqesni70IiM9wZeXQfvntglMCHhoFalm
         8VgR6hoHEJo/bk9ElYvXHl6Yua1fEKH3GnGN+UyZClvVWF+lIysi23UeOOVH3KKIVz9J
         j9nQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc:content-transfer-encoding;
        bh=Jp4CCHwWK240T1rzKA41Yn74M6YlFRWIOSQ6mAraccc=;
        b=ATvNcZhhrQzdmvazD7sS8zye2KzhewI0Xw5RblDDhkbwEiimzQ0otv3/o9YfMBSlbT
         kN3PfgMr5sdFjyNjc2rMONNgGhwGZHiPUKRVq3mly0OLFurDY3mmqa2hGduDj8vIh8qV
         2BVgdYPtOLPZV51cer/Ljd9ReIQygLh9AJ7p2lY7E7EeXkhMVTjGS2y8ImZZZGgIgUkJ
         cGCNSDZeDo1VvzE1d0mzVMPO/Sjzxtae07esuZiAbz2Bjvzn8tDo/SNaPvziut27IEoP
         DNHcyR1VDLsNK5gerWPxUzXLisyVrt7xtxtU58Ylgp+cpXdvYo66Nh4KOnmN7odLiQLJ
         +Sdg==
X-Gm-Message-State: AOAM533ZORG6dFfuF9I7SRGH2gCqt1y+fky+CdQ4uVVq+0RjQWhCFS2L
	ln2uDKU4/mZPL+SL2XlRLZZCbnuYtBpnAgdhPuQ=
X-Google-Smtp-Source: ABdhPJzSYfVvPhaNdhr6PllD44RL2BM6sEzXFDBlNJvKz9+9scpPbwgU89qj2yJK8Dpm+bOe9xIKp2iFVZnlOlkRWBk=
X-Received: by 2002:a92:d445:: with SMTP id r5mr3870130ilm.227.1613165999438;
 Fri, 12 Feb 2021 13:39:59 -0800 (PST)
MIME-Version: 1.0
References: <20210211171945.18313-1-alex.bennee@linaro.org> <20210211171945.18313-7-alex.bennee@linaro.org>
In-Reply-To: <20210211171945.18313-7-alex.bennee@linaro.org>
From: Alistair Francis <alistair23@gmail.com>
Date: Fri, 12 Feb 2021 13:39:18 -0800
Message-ID: <CAKmqyKM6JPDfk555+Dswn4V-hd-qqDPr+V-a31QeVJg=148iWQ@mail.gmail.com>
Subject: Re: [PATCH v2 6/7] docs: add some documentation for the guest-loader
To: Alex Bennée <alex.bennee@linaro.org>
Cc: "qemu-devel@nongnu.org Developers" <qemu-devel@nongnu.org>, julien@xen.org, andre.przywara@arm.com, 
	stefano.stabellini@linaro.org, 
	"open list:X86" <xen-devel@lists.xenproject.org>, stefano.stabellini@xilinx.com, 
	stratos-dev@op-lists.linaro.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, Feb 11, 2021 at 9:20 AM Alex Bennée <alex.bennee@linaro.org> wrote:
>
> Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
> Message-Id: <20201105175153.30489-7-alex.bennee@linaro.org>

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

>
> ---
> v2
>   - add docs and MAINTAINERS
> ---
>  docs/system/guest-loader.rst | 54 ++++++++++++++++++++++++++++++++++++
>  docs/system/index.rst        |  1 +
>  MAINTAINERS                  |  1 +
>  3 files changed, 56 insertions(+)
>  create mode 100644 docs/system/guest-loader.rst
>
> diff --git a/docs/system/guest-loader.rst b/docs/system/guest-loader.rst
> new file mode 100644
> index 0000000000..37d03cbd89
> --- /dev/null
> +++ b/docs/system/guest-loader.rst
> @@ -0,0 +1,54 @@
> +..
> +   Copyright (c) 2020, Linaro
> +
> +Guest Loader
> +------------
> +
> +The guest loader is similar to the `generic-loader` although it is
> +aimed at a particular use case of loading hypervisor guests. This is
> +useful for debugging hypervisors without having to jump through the
> +hoops of firmware and boot-loaders.
> +
> +The guest loader does two things:
> +
> +  - load blobs (kernels and initial ram disks) into memory
> +  - sets platform FDT data so hypervisors can find and boot them
> +
> +This is what is typically done by a boot-loader like grub using its
> +multi-boot capability. A typical example would look like:
> +
> +.. parsed-literal::
> +
> +  |qemu_system| -kernel ~/xen.git/xen/xen \
> +    -append "dom0_mem=1G,max:1G loglvl=all guest_loglvl=all" \
> +    -device guest-loader,addr=0x42000000,kernel=Image,bootargs="root=/dev/sda2 ro console=hvc0 earlyprintk=xen" \
> +    -device guest-loader,addr=0x47000000,initrd=rootfs.cpio
> +
> +In the above example the Xen hypervisor is loaded by the -kernel
> +parameter and passed its boot arguments via -append. The Dom0 kernel
> +and initrd are loaded into the specified areas of memory. Each blob
> +will get a `/chosen/module@<addr>` entry in the FDT to indicate its
> +location and size. Additional information can be passed with
> +additional arguments.
> +
> +Currently the only supported machines which use FDT data to boot are
> +the ARM and RISC-V `virt` machines.
> +
> +Arguments
> +^^^^^^^^^
> +
> +The full syntax of the guest-loader is::
> +
> +  -device guest-loader,addr=<addr>[,kernel=<file>,[bootargs=<args>]][,initrd=<file>]
> +
> +``addr=<addr>``
> +  This is mandatory and indicates the start address of the blob.
> +
> +``kernel|initrd=<file>``
> +  Indicates the filename of the kernel or initrd blob. Both blobs will
> +  have the "multiboot,module" compatibility string as well as
> +  "multiboot,kernel" or "multiboot,ramdisk" as appropriate.
> +
> +``bootargs=<args>``
> +  This is an optional field for kernel blobs which will pass command
> +  line arguments via the `/chosen/module@<addr>/bootargs` node.
> diff --git a/docs/system/index.rst b/docs/system/index.rst
> index cee1c83540..6ad9c93806 100644
> --- a/docs/system/index.rst
> +++ b/docs/system/index.rst
> @@ -26,6 +26,7 @@ Contents:
>     ivshmem
>     linuxboot
>     generic-loader
> +   guest-loader
>     vnc-security
>     tls
>     gdb
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 774b3ca7a5..853f174fcf 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -1997,6 +1997,7 @@ Guest Loader
>  M: Alex Bennée <alex.bennee@linaro.org>
>  S: Maintained
>  F: hw/core/guest-loader.c
> +F: docs/system/guest-loader.rst
>
>  Intel Hexadecimal Object File Loader
>  M: Su Hang <suhang16@mails.ucas.ac.cn>
> --
> 2.20.1
>
>
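The `/chosen/module@<addr>` naming convention documented in the patch above can be sketched as follows. The `module_node` helper is my own illustration (not QEMU code), and it assumes the conventional device-tree unit-address format (lowercase hex, no `0x` prefix); the exact node name QEMU emits may differ:

```python
def module_node(addr: int) -> str:
    """Return the FDT node path for a guest-loader blob loaded at the
    given address, per the /chosen/module@<addr> convention above."""
    return f"/chosen/module@{addr:x}"

# For the example command line above, the kernel blob at 0x42000000
# and the initrd at 0x47000000 would get these nodes.
print(module_node(0x42000000))
# /chosen/module@42000000
print(module_node(0x47000000))
# /chosen/module@47000000
```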


From xen-devel-bounces@lists.xenproject.org Fri Feb 12 22:08:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 Feb 2021 22:08:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84458.158435 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAgbY-0005nn-TU; Fri, 12 Feb 2021 22:08:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84458.158435; Fri, 12 Feb 2021 22:08:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAgbY-0005ng-Q8; Fri, 12 Feb 2021 22:08:00 +0000
Received: by outflank-mailman (input) for mailman id 84458;
 Fri, 12 Feb 2021 22:07:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAgbW-0005nW-QB; Fri, 12 Feb 2021 22:07:58 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAgbW-0007jd-IA; Fri, 12 Feb 2021 22:07:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAgbW-0003sh-4y; Fri, 12 Feb 2021 22:07:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lAgbW-0000OK-4P; Fri, 12 Feb 2021 22:07:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ZB1X6klElGmMlmmfeZTvYtLpRjiGQBQ3+UGJ09WU+dE=; b=bQPNb5LKl7W8qp189Ev6tj5f8u
	yG5ZVf7JJWcRxzmRO29BDOO/zN+FQLOR66W9CHZz1a/NmqZJiwp0vw8oGNpL7T8rhhU5pJioPtsH2
	mk37f1WKD6HGOt/HY12d2JKycMsF9qwtGK0TRuMQmj+g0KBJymQSIgNmN7trjTEkPF28=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159260-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 159260: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:guest-start:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:guest-start:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:guest-start:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-armhf-armhf-libvirt:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:allowable
    linux-linus:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
    linux-linus:test-arm64-arm64-xl-credit1:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=291009f656e8eaebbdfd3a8d99f6b190a9ce9deb
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 12 Feb 2021 22:07:58 +0000

flight 159260 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159260/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      10 host-ping-check-xen      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-armhf-armhf-xl-arndale  14 guest-start              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck 14 guest-start            fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl          14 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1  14 guest-start              fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     18 guest-localmigrate       fail REGR. vs. 152332
 test-armhf-armhf-xl-rtds     14 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit1  11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                291009f656e8eaebbdfd3a8d99f6b190a9ce9deb
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  196 days
Failing since        152366  2020-08-01 20:49:34 Z  195 days  340 attempts
Testing same since   159260  2021-02-11 17:25:09 Z    1 days    1 attempts

------------------------------------------------------------
4568 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1031630 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Feb 12 22:55:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 Feb 2021 22:55:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84464.158450 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAhLg-0002BH-EQ; Fri, 12 Feb 2021 22:55:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84464.158450; Fri, 12 Feb 2021 22:55:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAhLg-0002BA-BK; Fri, 12 Feb 2021 22:55:40 +0000
Received: by outflank-mailman (input) for mailman id 84464;
 Fri, 12 Feb 2021 22:55:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAhLf-0002B2-Pw; Fri, 12 Feb 2021 22:55:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAhLf-0008Sb-Fd; Fri, 12 Feb 2021 22:55:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAhLf-0006iT-7V; Fri, 12 Feb 2021 22:55:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lAhLf-0004Lw-71; Fri, 12 Feb 2021 22:55:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=f0RcCJjhYQivP7ckj4iXgVUzJXaaF0D1WJyv2oOFb0U=; b=w4BhTP1GOXvEmCGlPLz2aRHq5z
	5fhu+xv2ls93VRNHzeL3KdT950Tb0/MJ8w8SLzL+NFLD+qaVCERa4T2oxcYIiEZsjnqhrmzS3iJdC
	Kjh4N3Odg8STfOtWGmUWKOmPcfFt30V5GhdSoFWGfRehOVvVi7gpk16qa9qO084mWcOM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159308-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 159308: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=04085ec1ac05a362812e9b0c6b5a8713d7dc88ad
X-Osstest-Versions-That:
    xen=5a4087004d1adbbb223925f3306db0e5824a2bdc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 12 Feb 2021 22:55:39 +0000

flight 159308 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159308/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  04085ec1ac05a362812e9b0c6b5a8713d7dc88ad
baseline version:
 xen                  5a4087004d1adbbb223925f3306db0e5824a2bdc

Last test of basis   159280  2021-02-12 03:01:28 Z    0 days
Testing same since   159308  2021-02-12 20:01:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   5a4087004d..04085ec1ac  04085ec1ac05a362812e9b0c6b5a8713d7dc88ad -> smoke


From xen-devel-bounces@lists.xenproject.org Sat Feb 13 01:38:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 13 Feb 2021 01:38:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84474.158471 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAjt7-0002bK-SY; Sat, 13 Feb 2021 01:38:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84474.158471; Sat, 13 Feb 2021 01:38:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAjt7-0002bC-L5; Sat, 13 Feb 2021 01:38:21 +0000
Received: by outflank-mailman (input) for mailman id 84474;
 Sat, 13 Feb 2021 01:38:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8A84=HP=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lAjt6-0002b5-3p
 for xen-devel@lists.xenproject.org; Sat, 13 Feb 2021 01:38:20 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 57219d8d-e9bf-4dc3-8ac1-5f69cdff5306;
 Sat, 13 Feb 2021 01:38:19 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id E155B64DEE;
 Sat, 13 Feb 2021 01:38:17 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 57219d8d-e9bf-4dc3-8ac1-5f69cdff5306
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1613180298;
	bh=AAM0GsFSCFy3GGKIJsyvQXq6QgQww1bLbn609ZU5zdI=;
	h=From:To:Cc:Subject:Date:From;
	b=B1hb8f0sJcIncX6aY8cH1lkCOZ/po3kn7O80aAsqcOkPTBeaixEwN71LXO1BOoiHY
	 ArVv/3tfmjKtAve6yE2yjqycjRLHE46XJcoLhtt+cBu8jBVFjd28to2XVEQcUXwilj
	 OOY1/K160j6J0lt+EYr0Ipmv18PVcUfccW0eTgM4SJjvJpi52ya4ZC6EjZKeZgXtx2
	 7onYPhXYs6PVh5886afIgekWRzal6SMb6HsdcbuioB5AJOWNEAAMQJ8NqUdZuReQ40
	 m4SDZQHDC8Yh2aCB1B0gbhv59K87eabTN1jAamWJUJaII3pI0t49Gvra4IAHfPWXsj
	 Ub/T0B8G6t9mQ==
From: Stefano Stabellini <sstabellini@kernel.org>
To: cardoe@cardoe.com
Cc: andrew.cooper3@citrix.com,
	wl@xen.org,
	xen-devel@lists.xenproject.org,
	sstabellini@kernel.org,
	Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: [PATCH] automation: add arm32 cross-build tests for Xen
Date: Fri, 12 Feb 2021 17:38:13 -0800
Message-Id: <20210213013813.30114-1-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1

Add a Debian build container with the arm32 cross-gcc toolchain installed.
Add build jobs that cross-compile only the Xen hypervisor for arm32.
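
As a sketch (not part of the patch itself): the container exports
CROSS_COMPILE as a toolchain prefix, which the build system expands into
concrete cross-tool names. Paths below assume the Debian
gcc-arm-linux-gnueabihf package layout:

```shell
# Sketch only: shows how the CROSS_COMPILE prefix set in the container
# resolves into a concrete cross-compiler name.
XEN_TARGET_ARCH=arm32
CROSS_COMPILE=/usr/bin/arm-linux-gnueabihf-
cross_gcc="${CROSS_COMPILE}gcc"   # the tool the build invokes as CC
echo "$cross_gcc"
```

With this prefix, the build runs /usr/bin/arm-linux-gnueabihf-gcc
instead of the host gcc.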

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---
 .../debian/unstable-arm32-gcc.dockerfile      | 24 +++++++++
 automation/gitlab-ci/build.yaml               | 50 +++++++++++++++++++
 automation/scripts/build                      |  9 ++++
 3 files changed, 83 insertions(+)
 create mode 100644 automation/build/debian/unstable-arm32-gcc.dockerfile

diff --git a/automation/build/debian/unstable-arm32-gcc.dockerfile b/automation/build/debian/unstable-arm32-gcc.dockerfile
new file mode 100644
index 0000000000..b41a57f197
--- /dev/null
+++ b/automation/build/debian/unstable-arm32-gcc.dockerfile
@@ -0,0 +1,24 @@
+FROM debian:unstable
+LABEL maintainer.name="The Xen Project" \
+      maintainer.email="xen-devel@lists.xenproject.org"
+
+ENV DEBIAN_FRONTEND=noninteractive
+ENV USER root
+ENV CROSS_COMPILE /usr/bin/arm-linux-gnueabihf-
+
+RUN mkdir /build
+WORKDIR /build
+
+# build depends
+RUN apt-get update && \
+    apt-get --quiet --yes install \
+        build-essential \
+        flex \
+        bison \
+        git \
+        gcc-arm-linux-gnueabihf \
+        && \
+        apt-get autoremove -y && \
+        apt-get clean && \
+        rm -rf /var/lib/apt/lists* /tmp/* /var/tmp/*
+
diff --git a/automation/gitlab-ci/build.yaml b/automation/gitlab-ci/build.yaml
index d00b8a5123..22114662f2 100644
--- a/automation/gitlab-ci/build.yaml
+++ b/automation/gitlab-ci/build.yaml
@@ -117,6 +117,33 @@
   variables:
     <<: *clang
 
+.arm32-cross-build-tmpl:
+  <<: *build
+  variables:
+    XEN_TARGET_ARCH: arm32
+  tags:
+    - x86_64
+
+.arm32-cross-build:
+  extends: .arm32-cross-build-tmpl
+  variables:
+    debug: n
+
+.arm32-cross-build-debug:
+  extends: .arm32-cross-build-tmpl
+  variables:
+    debug: y
+
+.gcc-arm32-cross-build:
+  extends: .arm32-cross-build
+  variables:
+    <<: *gcc
+
+.gcc-arm32-cross-build-debug:
+  extends: .arm32-cross-build-debug
+  variables:
+    <<: *gcc
+
 .arm64-build-tmpl:
   <<: *build
   variables:
@@ -454,6 +481,29 @@ alpine-3.12-clang-debug:
     CONTAINER: alpine:3.12
   allow_failure: true
 
+# Arm32 cross-build
+
+debian-unstable-gcc-arm32:
+  extends: .gcc-arm32-cross-build
+  variables:
+    CONTAINER: debian:unstable-arm32-gcc
+
+debian-unstable-gcc-arm32-debug:
+  extends: .gcc-arm32-cross-build-debug
+  variables:
+    CONTAINER: debian:unstable-arm32-gcc
+
+debian-unstable-gcc-arm32-randconfig:
+  extends: .gcc-arm32-cross-build
+  variables:
+    CONTAINER: debian:unstable-arm32-gcc
+    RANDCONFIG: y
+
+debian-unstable-gcc-arm32-debug-randconfig:
+  extends: .gcc-arm32-cross-build-debug
+  variables:
+    CONTAINER: debian:unstable-arm32-gcc
+    RANDCONFIG: y
 
 # Arm builds
 
diff --git a/automation/scripts/build b/automation/scripts/build
index d8990c3bf4..e7d68f7a9d 100755
--- a/automation/scripts/build
+++ b/automation/scripts/build
@@ -15,6 +15,15 @@ else
     make -j$(nproc) -C xen defconfig
 fi
 
+# arm32 only cross-compiles the hypervisor
+if [[ "${XEN_TARGET_ARCH}" = "arm32" ]]; then
+    make -j$(nproc) xen
+    cp xen/.config xen-config
+    mkdir binaries
+    cp xen/xen binaries/xen
+    exit 0
+fi
+
 # build up our configure options
 cfgargs=()
 cfgargs+=("--enable-docs")
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Sat Feb 13 02:05:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 13 Feb 2021 02:05:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84480.158489 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAkJe-0005iM-1M; Sat, 13 Feb 2021 02:05:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84480.158489; Sat, 13 Feb 2021 02:05:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAkJd-0005iF-Uf; Sat, 13 Feb 2021 02:05:45 +0000
Received: by outflank-mailman (input) for mailman id 84480;
 Sat, 13 Feb 2021 02:05:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8A84=HP=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lAkJc-0005iA-Hk
 for xen-devel@lists.xenproject.org; Sat, 13 Feb 2021 02:05:44 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 557ddec8-3825-44be-8940-54ff37dfe3bc;
 Sat, 13 Feb 2021 02:05:43 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 0258C64E02;
 Sat, 13 Feb 2021 02:05:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 557ddec8-3825-44be-8940-54ff37dfe3bc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1613181943;
	bh=AP/Wps9XW8Z6HndHtp5zQ4prDNlOHmK5VMAi50oZeGo=;
	h=From:To:Cc:Subject:Date:From;
	b=tS710+zSkQ12Erqlct6ZnUeEIPpP7uVNkDZaDS/zYNX9hmW+rt8AAHjDc1cNaJcvY
	 GRBjVr9MlzUMuOY+MGghZVvfuvWESg+ubFuYABLz1hJQWOunQnQrCCImeMxyVRnrCj
	 xQYfAQYAkAb/rnTbzc5FZGyx6mcMtRSIZNK02YcE8qp97rjf3VJQvYN7jz8xfYLrAM
	 sFneIBvqDz/FEql1W29Edm7JT2cvdn4HvgzpzjDdayDffw0z3Mc+6vF2eqw2dwDB0M
	 g3pm6HsCPWrYvV+e9l+Bd01evq1mSGD5tsGOuVRQTyl7eAUbfcL7Ul2TnFw1cpKlII
	 h3X6OA/teFhPA==
From: Stefano Stabellini <sstabellini@kernel.org>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	cardoe@cardoe.com,
	andrew.cooper3@citrix.com,
	wl@xen.org,
	iwj@xenproject.org,
	anthony.perard@citrix.com,
	Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: [PATCH] firmware: don't build hvmloader if it is not needed
Date: Fri, 12 Feb 2021 18:05:40 -0800
Message-Id: <20210213020540.27894-1-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1

If rombios, seabios and ovmf are all disabled, don't attempt to build
hvmloader.

This patch fixes the x86 Alpine Linux builds currently failing in
gitlab-ci.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---
 tools/firmware/Makefile | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/tools/firmware/Makefile b/tools/firmware/Makefile
index 1f27117794..e68cd0d358 100644
--- a/tools/firmware/Makefile
+++ b/tools/firmware/Makefile
@@ -13,7 +13,16 @@ SUBDIRS-$(CONFIG_ROMBIOS) += rombios
 SUBDIRS-$(CONFIG_ROMBIOS) += vgabios
 SUBDIRS-$(CONFIG_IPXE) += etherboot
 SUBDIRS-$(CONFIG_PV_SHIM) += xen-dir
-SUBDIRS-y += hvmloader
+ifeq ($(CONFIG_ROMBIOS),y)
+CONFIG_HVMLOADER ?= y
+endif
+ifeq ($(CONFIG_SEABIOS),y)
+CONFIG_HVMLOADER ?= y
+endif
+ifeq ($(CONFIG_OVMF),y)
+CONFIG_HVMLOADER ?= y
+endif
+SUBDIRS-$(CONFIG_HVMLOADER) += hvmloader
 
 SEABIOSCC ?= $(CC)
 SEABIOSLD ?= $(LD)
-- 
2.17.1
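
[Editorial note, not part of the submitted patch: the three repeated
ifeq blocks can also be written as a single condition with GNU Make's
$(filter ...) function. A minimal equivalent sketch:]

```make
# Enable hvmloader if any BIOS payload it can bundle is enabled.
# $(filter y,...) is non-empty when at least one variable equals y,
# so this is equivalent to the three ifeq blocks in the patch above.
ifneq ($(filter y,$(CONFIG_ROMBIOS) $(CONFIG_SEABIOS) $(CONFIG_OVMF)),)
CONFIG_HVMLOADER ?= y
endif
SUBDIRS-$(CONFIG_HVMLOADER) += hvmloader
```

[The ?= assignment still lets a user force CONFIG_HVMLOADER on or off
explicitly, matching the behaviour of the patch.]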



From xen-devel-bounces@lists.xenproject.org Sat Feb 13 03:56:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 13 Feb 2021 03:56:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84483.158501 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAm27-0007Bb-54; Sat, 13 Feb 2021 03:55:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84483.158501; Sat, 13 Feb 2021 03:55:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAm27-0007BU-1h; Sat, 13 Feb 2021 03:55:47 +0000
Received: by outflank-mailman (input) for mailman id 84483;
 Sat, 13 Feb 2021 03:55:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAm26-0007BM-4C; Sat, 13 Feb 2021 03:55:46 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAm25-0007Pf-UA; Sat, 13 Feb 2021 03:55:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAm25-0005rc-Jq; Sat, 13 Feb 2021 03:55:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lAm25-00062Z-JO; Sat, 13 Feb 2021 03:55:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=gShPLppME2P8gWmWU1s+wIQvhfJ/z2OFTPc1+k7hTfI=; b=3SS1Vg6RjBP06xgbIpYwVZwqyo
	Yr56YUvqxDnT2bLXkRKtQVnvaR6/B0DC3D5+JUyr7OUyvLA5zqvU1Dbj59F0m0BAngodLt1l6F85G
	4XNBHL9zAewv97X44UbWybsQzRiJzRsbESFT7dFq82xgjOFE6xuAQiuSz3671e/E/mhk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159268-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 159268: trouble: broken/fail/pass
X-Osstest-Failures:
    xen-unstable:test-arm64-arm64-xl-seattle:<job status>:broken:regression
    xen-unstable:test-arm64-arm64-xl-seattle:host-install(5):broken:regression
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f3e1eb2f0234c955243a915d69ebd84f26eec130
X-Osstest-Versions-That:
    xen=ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 13 Feb 2021 03:55:45 +0000

flight 159268 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159268/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-seattle     <job status>                 broken
 test-arm64-arm64-xl-seattle   5 host-install(5)        broken REGR. vs. 159036

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail blocked in 159036
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 159036
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 159036
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 159036
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 159036
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 159036
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 159036
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 159036
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 159036
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 159036
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 159036
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f3e1eb2f0234c955243a915d69ebd84f26eec130
baseline version:
 xen                  ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7

Last test of basis   159036  2021-02-05 08:46:54 Z    7 days
Failing since        159077  2021-02-06 11:11:30 Z    6 days    5 attempts
Testing same since   159268  2021-02-11 22:07:04 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Michał Leszczyński <michal.leszczynski@cert.pl>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Tamas K Lengyel <tamas.lengyel@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  broken  
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-arm64-arm64-xl-seattle broken
broken-step test-arm64-arm64-xl-seattle host-install(5)

Not pushing.

(No revision log; it would be 560 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Feb 13 06:23:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 13 Feb 2021 06:23:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84492.158527 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAoKn-0004aG-Vo; Sat, 13 Feb 2021 06:23:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84492.158527; Sat, 13 Feb 2021 06:23:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAoKn-0004a9-Sg; Sat, 13 Feb 2021 06:23:13 +0000
Received: by outflank-mailman (input) for mailman id 84492;
 Sat, 13 Feb 2021 06:23:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAoKm-0004a1-66; Sat, 13 Feb 2021 06:23:12 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAoKl-0001we-LT; Sat, 13 Feb 2021 06:23:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAoKl-0005ZL-AO; Sat, 13 Feb 2021 06:23:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lAoKl-0008IZ-9t; Sat, 13 Feb 2021 06:23:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=V2jz68gKkypY0vtWm10+eCTRT6mm+WcvTWD9TgFTiZk=; b=dgEn0HcVx75h3yglvFzVi01pcd
	jSyVX2TxP1RXHqzMB63YT09XWo/eBros/4izILgzMEHI/jX/EikluiZ3dTFMOXpoz5YZJjf1LBYzK
	5e5NRZ+eN7PN4YwOu3sH4+c4EOrMDJnDDZqstHPqA7Kwyk6ai2WIhhqff5WDpW7LlI9Y=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159284-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 159284: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=bebaafd6b4a54b35f0d6676ab9156ea1489cbf5e
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 13 Feb 2021 06:23:11 +0000

flight 159284 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159284/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              bebaafd6b4a54b35f0d6676ab9156ea1489cbf5e
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  218 days
Failing since        151818  2020-07-11 04:18:52 Z  217 days  209 attempts
Testing same since   159284  2021-02-12 04:18:50 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 41998 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Feb 13 08:52:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 13 Feb 2021 08:52:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84537.158573 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAqed-00021v-Uz; Sat, 13 Feb 2021 08:51:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84537.158573; Sat, 13 Feb 2021 08:51:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAqed-00021o-Qz; Sat, 13 Feb 2021 08:51:51 +0000
Received: by outflank-mailman (input) for mailman id 84537;
 Sat, 13 Feb 2021 08:51:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAqec-00021g-Ps; Sat, 13 Feb 2021 08:51:50 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAqec-0004o5-Gx; Sat, 13 Feb 2021 08:51:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAqec-0006Dl-9S; Sat, 13 Feb 2021 08:51:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lAqec-0002Zr-8o; Sat, 13 Feb 2021 08:51:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=/9fKWra61mDjW6MwhbZGw1gDuiZvSdwSFDgMXpIsBgI=; b=h4BsEeRUzdBFgZZLWcZiciVsej
	Fom/43amtlp1PWmF/yBJEHOckrcTquV+vuxCTOufwICASyvmhESUuZMyX/cvxvonVI3OF4UGfQoqn
	dKwM6QfZF+/bFn4FosT/XlznArHOIJq9r5REGV/yFzjPMU037AbyPp2S07Y8bxKaydGI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159274-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.11-testing test] 159274: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.11-testing:test-armhf-armhf-xl-vhd:debian-di-install:fail:heisenbug
    xen-4.11-testing:test-armhf-armhf-libvirt-raw:debian-di-install:fail:heisenbug
    xen-4.11-testing:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=1c7d984645f9ade9b47e862b5880734ad498fea8
X-Osstest-Versions-That:
    xen=310ab79875cb705cc2c7daddff412b5a4899f8c9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 13 Feb 2021 08:51:50 +0000

flight 159274 xen-4.11-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159274/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-vhd     12 debian-di-install fail in 159204 pass in 159274
 test-armhf-armhf-libvirt-raw 12 debian-di-install fail in 159204 pass in 159274
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat  fail pass in 159204

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 157566
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 157566
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157566
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157566
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157566
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157566
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157566
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157566
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157566
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157566
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157566
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157566
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157566
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  1c7d984645f9ade9b47e862b5880734ad498fea8
baseline version:
 xen                  310ab79875cb705cc2c7daddff412b5a4899f8c9

Last test of basis   157566  2020-12-15 14:05:54 Z   59 days
Failing since        159016  2021-02-04 15:05:58 Z    8 days    8 attempts
Testing same since   159042  2021-02-05 12:13:30 Z    7 days    7 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    




Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   310ab79875..1c7d984645  1c7d984645f9ade9b47e862b5880734ad498fea8 -> stable-4.11


From xen-devel-bounces@lists.xenproject.org Sat Feb 13 09:37:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 13 Feb 2021 09:37:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84547.158602 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lArMG-0005p1-Ka; Sat, 13 Feb 2021 09:36:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84547.158602; Sat, 13 Feb 2021 09:36:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lArMG-0005ou-Gw; Sat, 13 Feb 2021 09:36:56 +0000
Received: by outflank-mailman (input) for mailman id 84547;
 Sat, 13 Feb 2021 09:36:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=THDE=HP=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lArMF-0005op-GJ
 for xen-devel@lists.xenproject.org; Sat, 13 Feb 2021 09:36:55 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7b8ac8c3-846b-4689-ba32-251d99791c7f;
 Sat, 13 Feb 2021 09:36:54 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7A1C1AD3E;
 Sat, 13 Feb 2021 09:36:53 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7b8ac8c3-846b-4689-ba32-251d99791c7f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613209013; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=t0AdJDF1YujlJWijwSI8dXpLbz3azvJQLlSeIv8+cTk=;
	b=apsE50Nrr5x943I6SrSHqgYv4/K5FLYJ+Jkto6S6WlDAg100eX+fXer6Xe/Ve+Sc1jfcka
	VIthcU7e1nkEJ2jimjGzikoIX2ugTg4TBln5trojo/DFO9CEbydzipG02A30iE1RXG5JRQ
	KPzAn8imKKGFUVqm/oKaFACTsqYMlI0=
Subject: Re: [PATCH 06/10] stubdom/xenstored: Fix uninitialised variables in
 lu_read_state()
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210212153953.4582-1-andrew.cooper3@citrix.com>
 <20210212153953.4582-7-andrew.cooper3@citrix.com>
 <cea38f47-7dc3-ea67-104a-e5b1899a7f3b@suse.com>
 <226c35c2-2280-8353-74d0-cc35e7f84de6@citrix.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <f77b1634-b250-33c0-42ca-25dd00cb5a02@suse.com>
Date: Sat, 13 Feb 2021 10:36:52 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <226c35c2-2280-8353-74d0-cc35e7f84de6@citrix.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="iKTNcUck0VyoTrJDEnAnySQk8neBU6enD"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--iKTNcUck0VyoTrJDEnAnySQk8neBU6enD
Content-Type: multipart/mixed; boundary="FpK9mDCxOPKybHuE2giqzm6aMcSW6LRBh";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <f77b1634-b250-33c0-42ca-25dd00cb5a02@suse.com>
Subject: Re: [PATCH 06/10] stubdom/xenstored: Fix uninitialised variables in
 lu_read_state()
References: <20210212153953.4582-1-andrew.cooper3@citrix.com>
 <20210212153953.4582-7-andrew.cooper3@citrix.com>
 <cea38f47-7dc3-ea67-104a-e5b1899a7f3b@suse.com>
 <226c35c2-2280-8353-74d0-cc35e7f84de6@citrix.com>
In-Reply-To: <226c35c2-2280-8353-74d0-cc35e7f84de6@citrix.com>

--FpK9mDCxOPKybHuE2giqzm6aMcSW6LRBh
Content-Type: multipart/mixed;
 boundary="------------A4CC2795F7CDDBA915610DA1"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------A4CC2795F7CDDBA915610DA1
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 12.02.21 18:01, Andrew Cooper wrote:
> On 12/02/2021 16:08, Jürgen Groß wrote:
>> On 12.02.21 16:39, Andrew Cooper wrote:
>>> Various versions of gcc, when compiling with -Og, complain:
>>>
>>>    xenstored_control.c: In function ‘lu_read_state’:
>>>    xenstored_control.c:540:11: error: ‘state.size’ is used uninitialized
>>>    in this function [-Werror=uninitialized]
>>>      if (state.size == 0)
>>>          ~~~~~^~~~~
>>>    xenstored_control.c:543:6: error: ‘state.buf’ may be used uninitialized
>>>    in this function [-Werror=maybe-uninitialized]
>>>      pre = state.buf;
>>>      ~~~~^~~~~~~~~~~
>>>    xenstored_control.c:550:23: error: ‘state.buf’ may be used uninitialized
>>>    in this function [-Werror=maybe-uninitialized]
>>>       (void *)head - state.buf < state.size;
>>>                      ~~~~~^~~~
>>>    xenstored_control.c:550:35: error: ‘state.size’ may be used uninitialized
>>>    in this function [-Werror=maybe-uninitialized]
>>>       (void *)head - state.buf < state.size;
>>>                                  ~~~~~^~~~~
>>>
>>> Interestingly, this is only in the stubdom build.  I can't identify any
>>> relevant differences vs the regular tools build.
>>
>> But I can. :-)
>>
>> lu_get_dump_state() is empty for the stubdom case (this will change when
>> LU is implemented for stubdom, too). In the daemon case this function
>> sets all the relevant fields.
> 
> So I spotted that.  This instance of lu_read_state() is already within
> the ifdefary, so doesn't get to see the empty stub (I think).

There is only one instance of lu_read_state().


Juergen
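
For illustration, the situation described above can be condensed into a small
self-contained sketch. The names here (lu_dump_state, lu_get_dump_state_stub,
lu_read_state_sketch) are hypothetical stand-ins mirroring the warning text,
not the real xenstored declarations: when the state-fetching function is an
empty stub, the caller's struct is never written, so gcc at -Og rightly flags
the later reads; zero-initialising the struct at its declaration is the usual
defensive fix.

```c
#include <stddef.h>

/* Hypothetical mirror of the dump-state struct; field names follow the
 * gcc warning text (state.buf, state.size), not the real xenstored header. */
struct lu_dump_state {
    void *buf;
    size_t size;
};

/* Stand-in for the stubdom build, where the equivalent of
 * lu_get_dump_state() is an empty stub: it writes nothing into *state,
 * so without initialisation the caller's fields hold stack garbage. */
static void lu_get_dump_state_stub(struct lu_dump_state *state)
{
    (void)state;        /* empty stub: leaves *state untouched */
}

/* Stand-in for lu_read_state(): zero-initialising 'state' at the
 * declaration makes the subsequent reads well-defined and silences
 * -Werror=uninitialized without changing daemon-build behaviour. */
size_t lu_read_state_sketch(void)
{
    struct lu_dump_state state = { NULL, 0 };

    lu_get_dump_state_stub(&state);

    if (state.size == 0)        /* the read gcc flagged at line 540 */
        return 0;

    return state.size;
}
```

With the empty stub, the sketch always takes the state.size == 0 path, which
is exactly the read gcc could not prove initialised in the stubdom build.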


--------------A4CC2795F7CDDBA915610DA1--

--FpK9mDCxOPKybHuE2giqzm6aMcSW6LRBh--


--iKTNcUck0VyoTrJDEnAnySQk8neBU6enD--


From xen-devel-bounces@lists.xenproject.org Sat Feb 13 11:25:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 13 Feb 2021 11:25:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84561.158626 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAt3O-0007OJ-7d; Sat, 13 Feb 2021 11:25:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84561.158626; Sat, 13 Feb 2021 11:25:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAt3O-0007OC-48; Sat, 13 Feb 2021 11:25:34 +0000
Received: by outflank-mailman (input) for mailman id 84561;
 Sat, 13 Feb 2021 11:25:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAt3N-0007O4-6s; Sat, 13 Feb 2021 11:25:33 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAt3M-0007JS-VQ; Sat, 13 Feb 2021 11:25:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAt3M-0004UG-Lx; Sat, 13 Feb 2021 11:25:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lAt3M-0000nU-LR; Sat, 13 Feb 2021 11:25:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=/+ADUMxswiEub3KtG/wCWdlW8W91RpGDp6zpNpatz9U=; b=ClSAxMmJjRrOZFfqzTsfWklRn+
	aMHY1siwAVyozIPza2cHI4Wn++LWmJxQ8XTXhl4jAJw676pJ+fRaqwgJYPgBNexcVaXJ/DDhpWzo5
	0hflovMLkEqJBxbkZ5dm72yJVIRznN3PInxVRG9k5etAFl747ZJcBs4mVk/+7p2OBc7k=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159295-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 159295: regressions - FAIL
X-Osstest-Failures:
    linux-5.4:test-arm64-arm64-xl-credit1:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-xsm:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-credit2:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-seattle:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-thunderx:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-libvirt:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-cubietruck:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-arndale:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=5e1942063dc3633f7a127aa2b159c13507580d21
X-Osstest-Versions-That:
    linux=a829146c3fdcf6d0b76d9c54556a223820f1f73b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 13 Feb 2021 11:25:32 +0000

flight 159295 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159295/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-credit1  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-xsm      14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl          14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-credit2  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-seattle  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-credit1  14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-credit2  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-thunderx 14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-multivcpu 14 guest-start             fail REGR. vs. 158387
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-cubietruck 14 guest-start            fail REGR. vs. 158387
 test-armhf-armhf-xl          14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-arndale  14 guest-start              fail REGR. vs. 158387

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     14 guest-start              fail REGR. vs. 158387

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 158387
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 158387
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 158387
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 158387
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 158387
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 158387
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 158387
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 158387
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 158387
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 158387
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                5e1942063dc3633f7a127aa2b159c13507580d21
baseline version:
 linux                a829146c3fdcf6d0b76d9c54556a223820f1f73b

Last test of basis   158387  2021-01-12 19:40:06 Z   31 days
Failing since        158473  2021-01-17 13:42:20 Z   26 days   37 attempts
Testing same since   159200  2021-02-10 11:25:12 Z    2 days    3 attempts

------------------------------------------------------------
453 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 13546 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Feb 13 12:24:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 13 Feb 2021 12:24:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84575.158646 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAty0-0004bQ-0w; Sat, 13 Feb 2021 12:24:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84575.158646; Sat, 13 Feb 2021 12:24:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAtxz-0004bJ-Tt; Sat, 13 Feb 2021 12:24:03 +0000
Received: by outflank-mailman (input) for mailman id 84575;
 Sat, 13 Feb 2021 12:24:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAtxy-0004bB-VD; Sat, 13 Feb 2021 12:24:03 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAtxy-0008GP-N6; Sat, 13 Feb 2021 12:24:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAtxy-0006rF-GO; Sat, 13 Feb 2021 12:24:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lAtxy-0005bP-Fq; Sat, 13 Feb 2021 12:24:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=tUglYL5CalXB/0+Da2uri6+y2VhUCHB4UYYkjA/0l1s=; b=T/Kl7rEGgyzirShlv4I6ouC2wg
	H7czpkt4CyEYJllkwDZddlx8G92Swq+EbyBoGtFlGj9FgvTBhZQNoEn+WPaieXe4Pd80k7A1fo4Ys
	w7QVKnFrgKfbX46eIHp3NhNfI+viLcRfBPAs3WbSBhfTZ25hvOg6mXm1OdYu/DF7T4LM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159300-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 159300: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=2e1e8c35f3178df95d79da81ac6deec242da74c2
X-Osstest-Versions-That:
    ovmf=1d27e58e401faea284309039f3962cb3cb4549fc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 13 Feb 2021 12:24:02 +0000

flight 159300 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159300/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 2e1e8c35f3178df95d79da81ac6deec242da74c2
baseline version:
 ovmf                 1d27e58e401faea284309039f3962cb3cb4549fc

Last test of basis   159248  2021-02-11 10:27:56 Z    2 days
Testing same since   159300  2021-02-12 14:25:09 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Guo Dong <guo.dong@intel.com>
  Joey Gouly <joey.gouly@arm.com>
  Patrick Rudolph <patrick.rudolph@9elements.com>
  Tim Crawford <tcrawford@system76.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   1d27e58e40..2e1e8c35f3  2e1e8c35f3178df95d79da81ac6deec242da74c2 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat Feb 13 13:51:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 13 Feb 2021 13:51:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84597.158668 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAvKC-0004FO-6X; Sat, 13 Feb 2021 13:51:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84597.158668; Sat, 13 Feb 2021 13:51:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAvKC-0004FH-36; Sat, 13 Feb 2021 13:51:04 +0000
Received: by outflank-mailman (input) for mailman id 84597;
 Sat, 13 Feb 2021 13:51:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=r+3A=HP=invisiblethingslab.com=marmarek@srs-us1.protection.inumbo.net>)
 id 1lAvKA-0004FC-Q6
 for xen-devel@lists.xenproject.org; Sat, 13 Feb 2021 13:51:03 +0000
Received: from out3-smtp.messagingengine.com (unknown [66.111.4.27])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3277022c-ac15-4d4f-8869-88b01d3b5c9c;
 Sat, 13 Feb 2021 13:51:01 +0000 (UTC)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailout.nyi.internal (Postfix) with ESMTP id 6C4A65C00CE;
 Sat, 13 Feb 2021 08:51:01 -0500 (EST)
Received: from mailfrontend1 ([10.202.2.162])
 by compute3.internal (MEProxy); Sat, 13 Feb 2021 08:51:01 -0500
Received: from mail-itl (ip5b434f04.dynamic.kabel-deutschland.de [91.67.79.4])
 by mail.messagingengine.com (Postfix) with ESMTPA id DCC11240057;
 Sat, 13 Feb 2021 08:50:59 -0500 (EST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3277022c-ac15-4d4f-8869-88b01d3b5c9c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:content-type:date:from:in-reply-to
	:message-id:mime-version:references:subject:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm2; bh=OmsRd5
	fT8lTKWh0LAKPh0yMk5Y9O+be3M4lCrp7Ghx4=; b=HRdQxaFnWYnGqFNihk5cxD
	ORl+1Rct0JkXz53veGmMAvNElsUv3E1GNIelFfEgbfUNXhS53BcMXIpCYq+TZfU5
	3C5bpKwyxlGkfZaFxjPB8oHGE/cOnyz/nta6hotPQjiVzX8bb0xL9fxCM9QcbGSX
	06SZ/wexXWLW+HnCGj76Vm/X8qi7MRV8qkzmekBH4DQ9QYvQ7HsDhNK1C2Sdq/zI
	geP3wPsz92wgxqsYjVJmlpV80ybECsX2i2DWriqOeHQlZrec4sbbbmYxf5fuRwCP
	8dzqLGNMGWtgIPzMTSJNzU2Xadp4OVFlPewODOMMIOFY1IYy52kP3XhcIdU14wdQ
	==
X-ME-Sender: <xms:RNknYL4OcxhjVlgwE3FsoHM9-gioUDG1lKBB9JRsS5J5lNKCyKl29g>
    <xme:RNknYA4XWmtFmtqj6oRxhaxDtZ7wbr7i83Nrspw7XD1X85_YACDLhm_De0L40bwxn
    3AvatiGls1gGA>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgeduledrieefgdehvdcutefuodetggdotefrodftvf
    curfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfghnecu
    uegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmdenuc
    fjughrpeffhffvuffkfhggtggujgesghdtreertddtjeenucfhrhhomhepofgrrhgvkhcu
    ofgrrhgtiiihkhhofihskhhiqdfikphrvggtkhhiuceomhgrrhhmrghrvghksehinhhvih
    hsihgslhgvthhhihhnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhepteevffei
    gffhkefhgfegfeffhfegveeikeettdfhheevieehieeitddugeefteffnecukfhppeelud
    drieejrdejledrgeenucevlhhushhtvghrufhiiigvpedtnecurfgrrhgrmhepmhgrihhl
    fhhrohhmpehmrghrmhgrrhgvkhesihhnvhhishhisghlvghthhhinhhgshhlrggsrdgtoh
    hm
X-ME-Proxy: <xmx:RNknYCe9WgLVRQxzMWqX99UIi67xTSfPayA4-ocMo0kEcbS37V-ypQ>
    <xmx:RNknYML_av1Tyodzhxx1jzP8sSncP3xwIJSUh1-xn5aFTXOQjn2iqQ>
    <xmx:RNknYPL1ckcRgnSOvms8gSeESy2iVoryF-8dPy9ucKJyZkkLZI1SJw>
    <xmx:RdknYMFTmO2UAkNaL-gPCS2qBjDpdQua29hERAWa9fm8QLYDbkxcuw>
Date: Sat, 13 Feb 2021 14:50:56 +0100
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, cardoe@cardoe.com,
	andrew.cooper3@citrix.com, wl@xen.org, iwj@xenproject.org,
	anthony.perard@citrix.com,
	Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: Re: [PATCH] firmware: don't build hvmloader if it is not needed
Message-ID: <20210213135056.GA6191@mail-itl>
References: <20210213020540.27894-1-sstabellini@kernel.org>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="1yeeQ81UyVL57Vl7"
Content-Disposition: inline
In-Reply-To: <20210213020540.27894-1-sstabellini@kernel.org>


--1yeeQ81UyVL57Vl7
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Subject: Re: [PATCH] firmware: don't build hvmloader if it is not needed

On Fri, Feb 12, 2021 at 06:05:40PM -0800, Stefano Stabellini wrote:
> If rombios, seabios and ovmf are all disabled, don't attempt to build
> hvmloader.

What if you choose not to build any of rombios, seabios, or ovmf, but use
a system-provided one instead? Wouldn't that wrongly exclude hvmloader too?

This heuristic seems like a bit too much; maybe instead add an explicit
option to skip hvmloader?
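
For illustration, such an explicit option could look like this at configure
time. The --disable-hvmloader flag is hypothetical (it is the option being
proposed, not one that exists); the other flags follow the style of Xen's
existing tools configure options:

```shell
# Hypothetical sketch: skip building all firmware locally, point the
# build at a system-provided SeaBIOS, and (with the proposed flag)
# explicitly skip hvmloader instead of inferring it heuristically.
./configure --disable-rombios --disable-seabios --disable-ovmf \
            --with-system-seabios=/usr/share/seabios/bios.bin \
            --disable-hvmloader   # hypothetical, not in the current tree
```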

--=20
Best Regards,
Marek Marczykowski-G=C3=B3recki
Invisible Things Lab

--1yeeQ81UyVL57Vl7
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmAn2UAACgkQ24/THMrX
1yzU9Af9Gmygp9ELtsgMuobBJp2H6cb6ILXlKKt9l2CK3zNDXntu7nNggntcn/us
O1/1iUCrH3rv0tqf9PxAKTLizKiWK1n7jMQWmhEazTOSKz1z1cgS4LhfTfCxS5gz
nuE6VNrpLqTyEcDl7/k98NW+5UvFVfcTxvL/wlzpAHVR6R4NtQQGS9DFXRUr2sxI
93HfkERFeMLpDnMKflAHUhi7P1ZFcmL4rr6LqoDsBYUhA36uQWZuMwhesl536pl7
djrMvLcAXMGUh9pbiQzv0z0ksVmsOltPZekG8gHDg7FbJAeYMRY0Oe6nBHNUAxxK
00OnttGWp1F999V91M94j56ImetkGA==
=rKuN
-----END PGP SIGNATURE-----

--1yeeQ81UyVL57Vl7--


From xen-devel-bounces@lists.xenproject.org Sat Feb 13 14:49:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 13 Feb 2021 14:49:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84620.158698 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAwEA-0000l0-U2; Sat, 13 Feb 2021 14:48:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84620.158698; Sat, 13 Feb 2021 14:48:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAwEA-0000kt-Pk; Sat, 13 Feb 2021 14:48:54 +0000
Received: by outflank-mailman (input) for mailman id 84620;
 Sat, 13 Feb 2021 14:48:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uKQL=HP=daemonizer.de=maxi@srs-us1.protection.inumbo.net>)
 id 1lAwE9-0000ko-Kf
 for xen-devel@lists.xenproject.org; Sat, 13 Feb 2021 14:48:53 +0000
Received: from mx1.somlen.de (unknown [89.238.87.226])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0be3515e-d966-4bb2-aa4a-a8bf189a4583;
 Sat, 13 Feb 2021 14:48:51 +0000 (UTC)
Received: by mx1.somlen.de with ESMTPSA id F2934C36AAE;
 Sat, 13 Feb 2021 15:48:49 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0be3515e-d966-4bb2-aa4a-a8bf189a4583
From: Maximilian Engelhardt <maxi@daemonizer.de>
To: xen-devel@lists.xenproject.org
Cc: Maximilian Engelhardt <maxi@daemonizer.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Jan Beulich <jbeulich@suse.com>
Subject: [RESEND] [XEN PATCH v1 0/1] Use reproducible date in docs
Date: Sat, 13 Feb 2021 15:48:24 +0100
Message-Id: <cover.1613227389.git.maxi@daemonizer.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This is a resend of my v1 patch from [1] which has not been accepted
yet.

The -d "@..." syntax was introduced in GNU date around 2005, so I assume
a version supporting this syntax is available if SOURCE_DATE_EPOCH is
defined. If SOURCE_DATE_EPOCH is not defined, nothing changes with
respect to the current behavior.
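
The fallback chain from the reproducible-builds recipe can be sketched in
plain shell (the epoch value below is purely illustrative, not taken from
the patch):

```shell
# Sketch of the reproducible date logic: GNU date accepts -d "@SECONDS",
# BSD date accepts -r SECONDS; if neither form works, fall back to the
# (non-reproducible) current time.
SOURCE_DATE_EPOCH=1613227389   # illustrative value: 2021-02-13 UTC
DATE_FMT='+%Y-%m-%d'
DATE=$(date -u -d "@$SOURCE_DATE_EPOCH" "$DATE_FMT" 2>/dev/null \
    || date -u -r "$SOURCE_DATE_EPOCH" "$DATE_FMT" 2>/dev/null \
    || date -u "$DATE_FMT")
echo "$DATE"
```

With SOURCE_DATE_EPOCH set, the printed date depends only on the epoch
value, which is what makes repeated builds byte-identical.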

In response to my first submission, Jan Beulich mentioned in [2] that it
would be good to probe for the availability of the -d "@..." syntax of
GNU date, as Jan couldn't find it when looking at older documentation of
GNU date. I later found out and reported in [3] that while the syntax
was implemented in GNU date around 2005, the documentation was only
updated in 2011.

I also submitted a v2 version that implemented the suggested probing of
the capabilities of the "date" command in [4]. However, Wei Liu responded
that a shorter version of the patch would be preferred, so I'm resending
v1 now.

I'm fine with v1 or v2; both make the documentation build reproducible.
This patch is the last upstream patch needed to get xen built
reproducibly in the next Debian release.

If neither patch is seen as appropriate, please tell me what changes
I should implement.

Thanks,
Maxi

[1] https://lists.xenproject.org/archives/html/xen-devel/2020-12/msg01518.html
[2] https://lists.xenproject.org/archives/html/xen-devel/2020-12/msg01564.html
[3] https://lists.xenproject.org/archives/html/xen-devel/2020-12/msg01698.html
[4] https://lists.xenproject.org/archives/html/xen-devel/2020-12/msg01715.html
[5] https://lists.xenproject.org/archives/html/xen-devel/2021-01/msg00178.html

Maximilian Engelhardt (1):
  docs: set date to SOURCE_DATE_EPOCH if available

 docs/Makefile | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Sat Feb 13 14:49:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 13 Feb 2021 14:49:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84622.158713 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAwEI-0000nN-5Y; Sat, 13 Feb 2021 14:49:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84622.158713; Sat, 13 Feb 2021 14:49:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAwEI-0000nG-22; Sat, 13 Feb 2021 14:49:02 +0000
Received: by outflank-mailman (input) for mailman id 84622;
 Sat, 13 Feb 2021 14:49:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uKQL=HP=daemonizer.de=maxi@srs-us1.protection.inumbo.net>)
 id 1lAwEG-0000mz-Tx
 for xen-devel@lists.xenproject.org; Sat, 13 Feb 2021 14:49:00 +0000
Received: from mx1.somlen.de (unknown [2a00:1828:a019::100:0])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 86049f9a-5c79-4b4c-ae22-852cd601aefe;
 Sat, 13 Feb 2021 14:48:59 +0000 (UTC)
Received: by mx1.somlen.de with ESMTPSA id 3E6C9C3AF0D;
 Sat, 13 Feb 2021 15:48:58 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 86049f9a-5c79-4b4c-ae22-852cd601aefe
From: Maximilian Engelhardt <maxi@daemonizer.de>
To: xen-devel@lists.xenproject.org
Cc: Maximilian Engelhardt <maxi@daemonizer.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Jan Beulich <jbeulich@suse.com>
Subject: [XEN PATCH v1 1/1] docs: set date to SOURCE_DATE_EPOCH if available
Date: Sat, 13 Feb 2021 15:48:25 +0100
Message-Id: <b607a676b98320977a6c592079569a3c01e46781.1613227389.git.maxi@daemonizer.de>
In-Reply-To: <cover.1613227389.git.maxi@daemonizer.de>
References: <cover.1613227389.git.maxi@daemonizer.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Use the solution described in [1] to replace the call to the 'date'
command with a version that uses SOURCE_DATE_EPOCH if available. This
is needed for reproducible builds.

[1] https://reproducible-builds.org/docs/source-date-epoch/

Signed-off-by: Maximilian Engelhardt <maxi@daemonizer.de>
---
 docs/Makefile | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/docs/Makefile b/docs/Makefile
index 8de1efb6f5..ac6792ff7c 100644
--- a/docs/Makefile
+++ b/docs/Makefile
@@ -3,7 +3,13 @@ include $(XEN_ROOT)/Config.mk
 -include $(XEN_ROOT)/config/Docs.mk
 
 VERSION		:= $(shell $(MAKE) -C $(XEN_ROOT)/xen --no-print-directory xenversion)
-DATE		:= $(shell date +%Y-%m-%d)
+
+DATE_FMT	:= +%Y-%m-%d
+ifdef SOURCE_DATE_EPOCH
+DATE		:= $(shell date -u -d "@$(SOURCE_DATE_EPOCH)" "$(DATE_FMT)" 2>/dev/null || date -u -r "$(SOURCE_DATE_EPOCH)" "$(DATE_FMT)" 2>/dev/null || date -u "$(DATE_FMT)")
+else
+DATE		:= $(shell date "$(DATE_FMT)")
+endif
 
 DOC_ARCHES      := arm x86_32 x86_64
 MAN_SECTIONS    := 1 5 7 8
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Sat Feb 13 15:36:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 13 Feb 2021 15:36:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84643.158728 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAwyG-0005Q4-Gb; Sat, 13 Feb 2021 15:36:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84643.158728; Sat, 13 Feb 2021 15:36:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAwyG-0005Px-Bg; Sat, 13 Feb 2021 15:36:32 +0000
Received: by outflank-mailman (input) for mailman id 84643;
 Sat, 13 Feb 2021 15:36:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uKQL=HP=daemonizer.de=maxi@srs-us1.protection.inumbo.net>)
 id 1lAwyF-0005Ps-IX
 for xen-devel@lists.xenproject.org; Sat, 13 Feb 2021 15:36:31 +0000
Received: from mx1.somlen.de (unknown [89.238.87.226])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d1707a3e-872d-46a8-8d6c-797b24671964;
 Sat, 13 Feb 2021 15:36:29 +0000 (UTC)
Received: by mx1.somlen.de with ESMTPSA id A6CECC36AAE;
 Sat, 13 Feb 2021 16:36:27 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d1707a3e-872d-46a8-8d6c-797b24671964
From: Maximilian Engelhardt <maxi@daemonizer.de>
To: xen-devel@lists.xenproject.org
Cc: pkg-xen-devel@lists.alioth.debian.org
Subject: [BUG] Linux pvh vm not getting destroyed on shutdown
Date: Sat, 13 Feb 2021 16:36:24 +0100
Message-ID: <2195346.r5JaYcbZso@localhost>
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="nextPart2014495.WFqFFtlU6v"; micalg="pgp-sha512"; protocol="application/pgp-signature"

--nextPart2014495.WFqFFtlU6v
Content-Transfer-Encoding: 7Bit
Content-Type: text/plain; charset="us-ascii"

Hi,

after a recent upgrade of one of our test systems to Debian Bullseye, we 
noticed an issue where, on shutdown of a pvh vm, the vm was not automatically 
destroyed by xen. It could still be destroyed by manually issuing an 'xl 
destroy $vm' command.

We can reproduce the hang reliably with the following vm configuration:

type = 'pvh'
memory = '512'
kernel = '/usr/lib/grub-xen/grub-i386-xen_pvh.bin'
[... disk/name/vif ]
on_poweroff = 'destroy'
on_reboot   = 'restart'
on_crash    = 'restart'
vcpus = '1'
maxvcpus = '2'

and then issuing a shutdown command in the vm (e.g. by calling 'poweroff').


Here are some things I noticed while trying to debug this issue:

* It happens on a Debian buster dom0 as well as on a bullseye dom0

* It seems to only affect pvh vms.

* shutdown from the pvgrub menu ("c" -> "halt") does work

* the vm seems to shut down normally; the last lines in the console are:

[  228.461167] systemd-shutdown[1]: All filesystems, swaps, loop devices, MD 
devices and DM devices detached.
[  228.476794] systemd-shutdown[1]: Syncing filesystems and block devices.
[  228.477878] systemd-shutdown[1]: Powering off.
[  233.709498] xenbus_probe_frontend: xenbus_frontend_dev_shutdown: device/
vif/0 timeout closing device
[  233.745642] reboot: System halted

* issuing a reboot instead of a shutdown does work fine.

* The issue started with Debian kernel 5.8.3+1~exp1 running in the vm; Debian 
kernel 5.7.17-1 does not show the issue.

* setting vcpus equal to maxvcpus does *not* show the hang.


Below is the output of "xl debug-keys q; xl dmesg" for the affected vm in the 
'hang' state as suggested by andyhhp on #xen to attach to this bug report:

(XEN) General information for domain 55:
(XEN)     refcnt=3 dying=0 pause_count=0
(XEN)     nr_pages=131088 xenheap_pages=4 shared_pages=0 paged_pages=0 
dirty_cpus={} max_pages=131328
(XEN)     handle=275e3a73-247f-4649-af86-6d5c0c72e8e4 vm_assist=00000020
(XEN)     paging assistance: hap refcounts translate external 
(XEN) Rangesets belonging to domain 55:
(XEN)     Interrupts { }
(XEN)     I/O Memory { }
(XEN)     I/O Ports  { }
(XEN)     log-dirty  { }
(XEN) Memory pages belonging to domain 55:
(XEN)     DomPage list too long to display
(XEN)     PoD entries=0 cachesize=0
(XEN)     XenPage 0000000000080125: caf=c000000000000001, taf=e400000000000001
(XEN)     XenPage 00000000001412c9: caf=c000000000000001, taf=e400000000000001
(XEN)     XenPage 0000000000140da0: caf=c000000000000001, taf=e400000000000001
(XEN)     XenPage 0000000000140d9a: caf=c000000000000001, taf=e400000000000001
(XEN)     ExtraPage 00000000001412d3: caf=8040000000000002, 
taf=e400000000000001
(XEN) NODE affinity for domain 55: [0]
(XEN) VCPU information and callbacks for domain 55:
(XEN)   UNIT0 affinities: hard={0-7} soft={0-3}
(XEN)     VCPU0: CPU2 [has=F] poll=0 upcall_pend=01 upcall_mask=00 
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 4 levels
(XEN) No periodic timer
(XEN)   UNIT1 affinities: hard={0-7} soft={0-3}
(XEN)     VCPU1: CPU1 [has=F] poll=0 upcall_pend=00 upcall_mask=00 
(XEN)     pause_count=0 pause_flags=1
(XEN)     paging assistance: hap, 4 levels
(XEN) No periodic timer


Please let me know if more information is necessary.

Thanks,
Maxi

--nextPart2014495.WFqFFtlU6v
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part.
Content-Transfer-Encoding: 7Bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCgAdFiEEQ8gZ7vwsPje0uPkIgepkfSQr0hUFAmAn8fgACgkQgepkfSQr
0hU3pw//SPKfPm7iQHDWRgsciWfTyZeYMDyO34L8YabVmsCkEm4pkvyuTK4DDdry
UjUspkyUtbCHmbRxWQXxa6elLkIeV2J3UMp+/iB8NNS9Kpfmjy1QVxnjWJe8J8uK
vpNs4+43aGdJCemimx9447PQCokBN5e5WPs6f4G+/qlBxdLY0VHSPCWvQHoeBJcq
cC58r13Vt+p0SM5dh6PUAmma9vaw9JbFtvNgTE3j/RZseq3EFWDOfRqpdTY+3rlx
YW1P++WsNMFNxk0wAS/lVURkwxXkSuLM/Aaj2aDFFlS06yU08w0vT9rQ0yORP0QV
ROAzCK4fHYBbs1TLMirEMscX/t/7/CymI/4cr1bKMaQrGAe8+9WnqqYpsThQZhMB
OR68IvVodMVKadYh4erZ7dmLH1VhjbZhSZcFHAGYEbNaRv7AUPP/BFWF+FdEncUk
bvc8A7v5FON206D0jKlyrCowMYAGsKQwYsn0i+8b9dri4Oj9YMtqHA9uWtCrbxyu
t+te0IilXS812b2zFrQDEYPF0od8kHrWb6qxt+29XF27Pp5oGn3VHll3Pf2Inh8U
SaV2aa87ipBShCc34tthWlLOb1x3cAuCCZolNPH1Zrfb3q8mkWelReSBLAfzA5Qt
0Vn+gzONGAOkLRNu8NA8wJomQfplLmjYiPudQH17X7RwLnr5acI=
=x4tO
-----END PGP SIGNATURE-----

--nextPart2014495.WFqFFtlU6v--





From xen-devel-bounces@lists.xenproject.org Sat Feb 13 15:43:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 13 Feb 2021 15:43:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84649.158739 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAx4k-0006M3-6W; Sat, 13 Feb 2021 15:43:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84649.158739; Sat, 13 Feb 2021 15:43:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAx4k-0006Lw-3B; Sat, 13 Feb 2021 15:43:14 +0000
Received: by outflank-mailman (input) for mailman id 84649;
 Sat, 13 Feb 2021 15:43:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAx4j-0006Lo-AT; Sat, 13 Feb 2021 15:43:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAx4j-00035M-0T; Sat, 13 Feb 2021 15:43:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAx4i-0008Sb-P0; Sat, 13 Feb 2021 15:43:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lAx4i-0001Ua-OV; Sat, 13 Feb 2021 15:43:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=Mdrw2Y6AvyNNyMZloyIyvsqCZ4gEVMDClbLLKUoVxx4=; b=qi8FV9iW0VmlWg00P3i6kYCsas
	eJABikX5QJrBYmrVaplQUPDRGk7VY6Z/S4mnd3o34p9quwoUKZMP70GQLRxN2eXY+6ScwBP8ccC2m
	Oxca4pIGeZrP09oRVuVwBXLWcZjFAcP3pnrHZUmlca8ua6kLAFfWdH484fvPbtpd0EQ0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [linux-5.4 bisection] complete test-arm64-arm64-libvirt-xsm
Message-Id: <E1lAx4i-0001Ua-OV@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 13 Feb 2021 15:43:12 +0000

branch xen-unstable
xenbranch xen-unstable
job test-arm64-arm64-libvirt-xsm
testid guest-start

Tree: libvirt git://xenbits.xen.org/libvirt.git
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
  Bug introduced:  a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Bug not present: acc402fa5bf502d471d50e3d495379f093a7f9e4
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/159327/


  commit a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Author: David Woodhouse <dwmw@amazon.co.uk>
  Date:   Wed Jan 13 13:26:02 2021 +0000
  
      xen: Fix event channel callback via INTX/GSI
      
      [ Upstream commit 3499ba8198cad47b731792e5e56b9ec2a78a83a2 ]
      
      For a while, event channel notification via the PCI platform device
      has been broken, because we attempt to communicate with xenstore before
      we even have notifications working, with the xs_reset_watches() call
      in xs_init().
      
      We tend to get away with this on Xen versions below 4.0 because we avoid
      calling xs_reset_watches() anyway, because xenstore might not cope with
      reading a non-existent key. And newer Xen *does* have the vector
      callback support, so we rarely fall back to INTX/GSI delivery.
      
      To fix it, clean up a bit of the mess of xs_init() and xenbus_probe()
      startup. Call xs_init() directly from xenbus_init() only in the !XS_HVM
      case, deferring it to be called from xenbus_probe() in the XS_HVM case
      instead.
      
      Then fix up the invocation of xenbus_probe() to happen either from its
      device_initcall if the callback is available early enough, or when the
      callback is finally set up. This means that the hack of calling
      xenbus_probe() from a workqueue after the first interrupt, or directly
      from the PCI platform device setup, is no longer needed.
      
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Link: https://lore.kernel.org/r/20210113132606.422794-2-dwmw2@infradead.org
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/linux-5.4/test-arm64-arm64-libvirt-xsm.guest-start.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/linux-5.4/test-arm64-arm64-libvirt-xsm.guest-start --summary-out=tmp/159327.bisection-summary --basis-template=158387 --blessings=real,real-bisect,real-retry linux-5.4 test-arm64-arm64-libvirt-xsm guest-start
Searching for failure / basis pass:
 159295 fail [host=rochester1] / 158681 [host=laxton0] 158624 [host=laxton1] 158616 [host=rochester0] 158609 ok.
Failure / basis pass flights: 159295 / 158609
Tree: libvirt git://xenbits.xen.org/libvirt.git
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 5e1942063dc3633f7a127aa2b159c13507580d21 c530a75c1e6a472b0eb9558310b518f0dfcd8860 124f1dd1ee1140b441151043aacbe5d33bb5ab79 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7
Basis pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 09f983f0c7fc0db79a5f6c883ec3510d424c369c c530a75c1e6a472b0eb9558310b518f0dfcd8860 3b769c5110384fb33bcfeddced80f721ec7838cc 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 452ddbe3592b141b05a7e0676f09c8ae07f98fdd
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/libvirt.git#2c846fa6bcc11929c9fb857a22430fb9945654ad-2c846fa6bcc11929c9fb857a22430fb9945654ad https://gitlab.com/keycodemap/keycodemapdb.git#27acf0ef828bf719b2053ba398b195829413dbdd-27acf0ef828bf719b2053ba398b195829413dbdd git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git#09f983f0c7fc0db79a5f6c883ec3510d424c369c-5e1942063dc3633f7a127aa2b159c13507580d21 git://xenbits.xen.org/osstest/linux-firmware.git#\
 c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#3b769c5110384fb33bcfeddced80f721ec7838cc-124f1dd1ee1140b441151043aacbe5d33bb5ab79 git://xenbits.xen.org/qemu-xen.git#7ea428895af2840d85c524f0bd11a38aac308308-7ea428895af2840d85c524f0bd11a38aac308308 git://xenbits.xen.org/osstest/seabios.git#ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e-ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e git://xenbits.xen.org/xen.git#452ddbe3592b141b05a7e0676f\
 09c8ae07f98fdd-ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7
Loaded 15001 nodes in revision graph
Searching for test results:
 158593 [host=laxton1]
 158603 [host=laxton0]
 158609 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 09f983f0c7fc0db79a5f6c883ec3510d424c369c c530a75c1e6a472b0eb9558310b518f0dfcd8860 3b769c5110384fb33bcfeddced80f721ec7838cc 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 452ddbe3592b141b05a7e0676f09c8ae07f98fdd
 158616 [host=rochester0]
 158624 [host=laxton1]
 158681 [host=laxton0]
 158707 fail irrelevant
 158716 fail irrelevant
 158748 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 131f8d8a889a5ca66a835eea82bba043ac91a7cf c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e f8708b0ed6d549d1d29b8b5cc287f1f2b642bc63
 158765 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 131f8d8a889a5ca66a835eea82bba043ac91a7cf c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 6677b5a3577c16501fbc51a3341446905bd21c38
 158796 fail irrelevant
 158818 fail irrelevant
 158841 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158863 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158881 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158929 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea56ebf67dd55483105aa9f9996a48213e78337e 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158962 fail irrelevant
 159023 fail irrelevant
 159129 fail irrelevant
 159200 fail irrelevant
 159238 fail irrelevant
 159306 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 09f983f0c7fc0db79a5f6c883ec3510d424c369c c530a75c1e6a472b0eb9558310b518f0dfcd8860 3b769c5110384fb33bcfeddced80f721ec7838cc 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 452ddbe3592b141b05a7e0676f09c8ae07f98fdd
 159309 fail irrelevant
 159311 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 38f35023fd301abeb01cfd81e73caa2e4e7ec0b1 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159313 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd d8a487e673abf46c69c901bb25da54e9bc7ba45e c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159314 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 5a1d7bb7d333849eb7d3ab5ebfbf9805b2cd46c9 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159316 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159317 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 9cec63a3aacbcaee8d09aecac2ca2f8820efcc70 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159319 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 8ab3478335ad8fc08f14ec73251b084fe02b3ebb c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159320 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159321 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159322 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159295 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 5e1942063dc3633f7a127aa2b159c13507580d21 c530a75c1e6a472b0eb9558310b518f0dfcd8860 124f1dd1ee1140b441151043aacbe5d33bb5ab79 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7
 159323 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159325 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 5e1942063dc3633f7a127aa2b159c13507580d21 c530a75c1e6a472b0eb9558310b518f0dfcd8860 124f1dd1ee1140b441151043aacbe5d33bb5ab79 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7
 159326 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159327 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
Searching for interesting versions
 Result found: flight 158609 (pass), for basis pass
 Result found: flight 159295 (fail), for basis failure (at ancestor ~210)
 Repro found: flight 159306 (pass), for basis pass
 Repro found: flight 159325 (fail), for basis failure
 0 revisions at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
No revisions left to test, checking graph state.
 Result found: flight 159320 (pass), for last pass
 Result found: flight 159321 (fail), for first failure
 Repro found: flight 159322 (pass), for last pass
 Repro found: flight 159323 (fail), for first failure
 Repro found: flight 159326 (pass), for last pass
 Repro found: flight 159327 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
  Bug introduced:  a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Bug not present: acc402fa5bf502d471d50e3d495379f093a7f9e4
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/159327/


  commit a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Author: David Woodhouse <dwmw@amazon.co.uk>
  Date:   Wed Jan 13 13:26:02 2021 +0000
  
      xen: Fix event channel callback via INTX/GSI
      
      [ Upstream commit 3499ba8198cad47b731792e5e56b9ec2a78a83a2 ]
      
      For a while, event channel notification via the PCI platform device
      has been broken, because we attempt to communicate with xenstore before
      we even have notifications working, with the xs_reset_watches() call
      in xs_init().
      
      We tend to get away with this on Xen versions below 4.0 because we avoid
      calling xs_reset_watches() anyway, because xenstore might not cope with
      reading a non-existent key. And newer Xen *does* have the vector
      callback support, so we rarely fall back to INTX/GSI delivery.
      
      To fix it, clean up a bit of the mess of xs_init() and xenbus_probe()
      startup. Call xs_init() directly from xenbus_init() only in the !XS_HVM
      case, deferring it to be called from xenbus_probe() in the XS_HVM case
      instead.
      
      Then fix up the invocation of xenbus_probe() to happen either from its
      device_initcall if the callback is available early enough, or when the
      callback is finally set up. This means that the hack of calling
      xenbus_probe() from a workqueue after the first interrupt, or directly
      from the PCI platform device setup, is no longer needed.
      
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Link: https://lore.kernel.org/r/20210113132606.422794-2-dwmw2@infradead.org
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>

pnmtopng: 147 colors found
Revision graph left in /home/logs/results/bisect/linux-5.4/test-arm64-arm64-libvirt-xsm.guest-start.{dot,ps,png,html,svg}.
----------------------------------------
159327: tolerable FAIL

flight 159327 linux-5.4 real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/159327/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 14 guest-start             fail baseline untested


jobs:
 build-arm64-libvirt                                          pass    
 test-arm64-arm64-libvirt-xsm                                 fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Sat Feb 13 18:03:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 13 Feb 2021 18:03:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84678.158758 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAzGF-0002Z6-SB; Sat, 13 Feb 2021 18:03:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84678.158758; Sat, 13 Feb 2021 18:03:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAzGF-0002Yz-Ol; Sat, 13 Feb 2021 18:03:15 +0000
Received: by outflank-mailman (input) for mailman id 84678;
 Sat, 13 Feb 2021 18:03:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAzGE-0002Yr-M0; Sat, 13 Feb 2021 18:03:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAzGE-0005uW-El; Sat, 13 Feb 2021 18:03:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lAzGE-0006FM-1b; Sat, 13 Feb 2021 18:03:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lAzGE-0003mz-17; Sat, 13 Feb 2021 18:03:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=TJMJ/ez//wl0TUajmZBY7u/6UnKCEAcuMAuqN/LZFVg=; b=TVYp5pLhRAe1Mka7OETSI2xkw0
	EVz0myv0SyrBX3pjYTDVw34CtJ5Iv6d6euX4TN+k0SElxQgUUgjZif5qExwROJ/xVxrl0ZVifcCxJ
	NRgH6Dnbxh2H1qdNrKamR7X0N5HNNXme33Rp/EL16ofXFIGzRU3OzMaMkqnBMvJE9KuI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159318-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 159318: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=d2b0ee0aff09d784eecaca0035502516ed0d2a0f
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 13 Feb 2021 18:03:14 +0000

flight 159318 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159318/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              d2b0ee0aff09d784eecaca0035502516ed0d2a0f
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  218 days
Failing since        151818  2020-07-11 04:18:52 Z  217 days  210 attempts
Testing same since   159318  2021-02-13 06:23:47 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 42304 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Feb 13 18:22:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 13 Feb 2021 18:22:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84690.158773 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAzYb-0004Q5-Gl; Sat, 13 Feb 2021 18:22:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84690.158773; Sat, 13 Feb 2021 18:22:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lAzYb-0004Py-DH; Sat, 13 Feb 2021 18:22:13 +0000
Received: by outflank-mailman (input) for mailman id 84690;
 Sat, 13 Feb 2021 18:22:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=G3Bv=HP=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1lAzYa-0004Pt-L0
 for xen-devel@lists.xenproject.org; Sat, 13 Feb 2021 18:22:12 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f0840c19-aaf9-4ba0-9d37-cb3a1ed926bf;
 Sat, 13 Feb 2021 18:22:11 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.16.1/8.15.2) with ESMTPS id 11DILu80096894
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Sat, 13 Feb 2021 13:22:02 -0500 (EST) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.16.1/8.15.2/Submit) id 11DILuWU096893;
 Sat, 13 Feb 2021 10:21:56 -0800 (PST) (envelope-from ehem)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f0840c19-aaf9-4ba0-9d37-cb3a1ed926bf
Date: Sat, 13 Feb 2021 10:21:56 -0800
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Maximilian Engelhardt <maxi@daemonizer.de>
Cc: xen-devel@lists.xenproject.org, pkg-xen-devel@lists.alioth.debian.org
Subject: Re: [BUG] Linux pvh vm not getting destroyed on shutdown
Message-ID: <YCgYxOxXwitkFB0T@mattapan.m5p.com>
References: <2195346.r5JaYcbZso@localhost>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <2195346.r5JaYcbZso@localhost>
X-Spam-Status: No, score=0.4 required=10.0 tests=KHOP_HELO_FCRDNS autolearn=no
	autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

On Sat, Feb 13, 2021 at 04:36:24PM +0100, Maximilian Engelhardt wrote:
> after a recent upgrade of one of our test systems to Debian Bullseye we 
> noticed an issue where on shutdown of a pvh vm the vm was not destroyed by xen 
> automatically. It could still be destroyed by manually issuing a 'xl destroy 
> $vm' command.

Usually I would expect such an issue to show up in the Debian bug
database before xen-devel.  In particular, as this is a behavior change
that arrived with security updates, there is a good chance it isn't
attributable to the Xen Project.  Additionally, the Xen Project's
support window is rather narrow.  I've been observing the same (or a
similar) issue for a while too.


> Here are some things I noticed while trying to debug this issue:
> 
> * It happens on a Debian buster dom0 as well as on a bullseye dom0

I stick with stable on non-development machines, so I can't say anything
about this.

> * It seems to only affect pvh vms.

I've observed it with pv and hvm VMs as well.

> * shutdown from the pvgrub menu ("c" -> "halt") does work

Woah!  That is quite the observation.  Since I had a handy opportunity,
I tried this, and it reproduces for me.

> * the vm seems to shut down normal, the last lines in the console are:

I agree with this.  Everything appears typical until the last moment.

> * issuing a reboot instead of a shutdown does work fine.

I disagree with this.  I'm seeing the issue occur with restart attempts
too.

> * The issue started with Debian kernel 5.8.3+1~exp1 running in the vm, Debian 
> kernel 5.7.17-1 does not show the issue.

I think the first kernel update during which I saw the issue was around
linux-image-4.19.0-12-amd64 or linux-image-4.19.0-13-amd64, though the
last security update to the Xen packages was in a similar timeframe, so
rate this portion as unreliable.  I can definitely state this occurs
with Debian's linux-image-4.19.0-13-amd64 and kernels built from the
corresponding source; it may have shown up earlier.

> * setting vcpus equal to maxvcpus does *not* show the hang.

I haven't tried things related to this, so I can't comment on this
part.


Fresh observation: during a similar timeframe I started noticing VM
creation leaving an `xl create` process behind.  I had discovered this
process could be freely killed without appearing to affect the VM, and
had thus been doing so (memory in a lean Dom0 is precious).

While typing this I realized there was another scenario I needed to try.
It turns out that if I boot PV GRUB and get to its command line (press
'c'), detach from the VM console, kill the `xl create` process, then
return to the console and type "halt", the result is a hung VM.
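
For concreteness, the sequence just described might look like the
following from a Dom0 shell.  This is only a sketch: the guest name
"guest" and its config path are hypothetical, and it obviously needs a
Xen host with a pvgrub guest to reproduce anything.

```shell
# Hypothetical repro sketch; "guest" and /etc/xen/guest.cfg are assumptions.
xl create /etc/xen/guest.cfg             # leaves a resident `xl create` process behind
xl console guest                         # attach; at the pvgrub menu press 'c' for a CLI
# Detach from the console with Ctrl-], then kill the lingering process:
pkill -f 'xl create /etc/xen/guest.cfg'
xl console guest                         # reattach and type "halt" at the grub> prompt
xl list                                  # the guest lingers instead of being destroyed
xl destroy guest                         # manual cleanup, as in the original report
```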

Are you perhaps either killing the `xl create` process for affected VMs,
or migrating the VM and thus splitting the `xl create` process from the
affected VMs?

This seems more a Debian issue than a Xen Project issue right now.


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Sat Feb 13 18:56:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 13 Feb 2021 18:56:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84696.158791 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lB064-0007DO-CB; Sat, 13 Feb 2021 18:56:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84696.158791; Sat, 13 Feb 2021 18:56:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lB064-0007DH-7h; Sat, 13 Feb 2021 18:56:48 +0000
Received: by outflank-mailman (input) for mailman id 84696;
 Sat, 13 Feb 2021 18:56:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lB063-0007D9-44; Sat, 13 Feb 2021 18:56:47 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lB062-0006iI-SJ; Sat, 13 Feb 2021 18:56:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lB062-0007sz-EV; Sat, 13 Feb 2021 18:56:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lB062-0000QK-E2; Sat, 13 Feb 2021 18:56:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=I5BxN5shYxOpluGrfOUD7sOJ4toZ2eJODheg2tZUmtw=; b=q09nHG68cXra1OVCZaG/f/asXT
	rYHnn+du4cWihtVM028rtRLIdDRMD8n3sJdiCy2mhbsMBCYiqi7DA8Z3OtVAR4AdMko8m3R993KzK
	s/eBWBU8sZj51UpkyXtlSRS5+hySYGzcqvNK2c8yQ/qLL8islDyt+R5qEMCFumNWb48w=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159310-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 159310: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=eac92d316351b855ba79eb374dd21cc367f1f9c1
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 13 Feb 2021 18:56:46 +0000

flight 159310 qemu-mainline real [real]
flight 159329 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/159310/
http://logs.test-lab.xenproject.org/osstest/logs/159329/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                eac92d316351b855ba79eb374dd21cc367f1f9c1
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  177 days
Failing since        152659  2020-08-21 14:07:39 Z  176 days  345 attempts
Testing same since   159310  2021-02-12 20:58:39 Z    0 days    1 attempts

------------------------------------------------------------
401 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 110609 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Feb 13 21:16:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 13 Feb 2021 21:16:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84719.158811 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lB2HI-0002rf-I2; Sat, 13 Feb 2021 21:16:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84719.158811; Sat, 13 Feb 2021 21:16:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lB2HI-0002rY-Ep; Sat, 13 Feb 2021 21:16:32 +0000
Received: by outflank-mailman (input) for mailman id 84719;
 Sat, 13 Feb 2021 21:16:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lB2HH-0002rQ-BZ; Sat, 13 Feb 2021 21:16:31 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lB2HH-0000bg-4U; Sat, 13 Feb 2021 21:16:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lB2HG-0007fl-OB; Sat, 13 Feb 2021 21:16:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lB2HG-0004eG-Nh; Sat, 13 Feb 2021 21:16:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=s+Q0TIQAVDXWK2uv9Qr935P974M/HR+THlbDO6IaE8Q=; b=0Kw15JjUBz3bc7RyGQRDhmG1Im
	NJHN5fInYjUFewCKfm6bRYuKBRIhNqwZYxEYpfjy5cxcJB3HBCkZgQfK+cWjFakFvYN5foBi1ow5A
	aVn375OuVNM9NMvUJw3kLRbOEzOCggqex7n9JWtcIFujhcCUhrMaQ2rR9topbl+f9TQw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159312-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 159312: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=c6d8570e4d642a0c0bfbe7362ffa1b1433c72db1
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 13 Feb 2021 21:16:30 +0000

flight 159312 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159312/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  10 host-ping-check-xen      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                c6d8570e4d642a0c0bfbe7362ffa1b1433c72db1
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  197 days
Failing since        152366  2020-08-01 20:49:34 Z  196 days  341 attempts
Testing same since   159312  2021-02-12 22:10:53 Z    0 days    1 attempts

------------------------------------------------------------
4570 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1032403 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Feb 13 22:38:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 13 Feb 2021 22:38:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84731.158827 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lB3Xz-0001X5-AA; Sat, 13 Feb 2021 22:37:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84731.158827; Sat, 13 Feb 2021 22:37:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lB3Xz-0001Wy-7B; Sat, 13 Feb 2021 22:37:51 +0000
Received: by outflank-mailman (input) for mailman id 84731;
 Sat, 13 Feb 2021 22:37:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=QNNA=HP=vivier.eu=laurent@srs-us1.protection.inumbo.net>)
 id 1lB3Xy-0001Wt-Ne
 for xen-devel@lists.xenproject.org; Sat, 13 Feb 2021 22:37:50 +0000
Received: from mout.kundenserver.de (unknown [212.227.126.131])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id caa44f35-4888-486a-80d2-929de50c2fb2;
 Sat, 13 Feb 2021 22:37:49 +0000 (UTC)
Received: from [192.168.100.1] ([82.252.149.54]) by mrelayeu.kundenserver.de
 (mreue010 [213.165.67.103]) with ESMTPSA (Nemesis) id
 1N7iOw-1ly5gQ30T4-014nob; Sat, 13 Feb 2021 23:37:44 +0100
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: caa44f35-4888-486a-80d2-929de50c2fb2
Subject: Re: [PATCH] hw/i386/xen: Remove dead code
To: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>,
 qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Eduardo Habkost <ehabkost@redhat.com>, "Michael S. Tsirkin"
 <mst@redhat.com>, qemu-trivial@nongnu.org, Paul Durrant <paul@xen.org>,
 Richard Henderson <richard.henderson@linaro.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 xen-devel@lists.xenproject.org, Anthony Perard <anthony.perard@citrix.com>,
 Paolo Bonzini <pbonzini@redhat.com>
References: <20210202155644.998812-1-philmd@redhat.com>
From: Laurent Vivier <laurent@vivier.eu>
Message-ID: <d8c7c2c9-031c-3534-d828-c3b232edc674@vivier.eu>
Date: Sat, 13 Feb 2021 23:37:43 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <20210202155644.998812-1-philmd@redhat.com>
Content-Type: text/plain; charset=utf-8
Content-Language: fr
Content-Transfer-Encoding: 8bit
X-Provags-ID: V03:K1:fg0slwWmDeG5W6CXwD+LFOmr7nTkZRJYbYVrZF2KgwJh4tqjenM
 /hoyI99CpsCV3jiaMWNDOkcczLbBun/CfFUn76o4NhdEcPcjBcQLizPk/jxuAmjF9DPXyzV
 6G+ldbCmL3v7cvczTf6OCu7yC/BYDtFsk4yAJ3rMRTESsBqOdvV/houE9RMe95Ll73EzaSh
 p04cUr9ZL3aCPiOAf69JQ==
X-Spam-Flag: NO
X-UI-Out-Filterresults: notjunk:1;V03:K0:oBoG6KzYfQA=:I1QJIt5t3ob4iCy8v8Avoz
 pN1hkJnA/17x4K02MXs94pUg5fkJWO2R1rJk4b0qHhUcLy+QSTkYegggX06uj4vaSOOyyKHd/
 ohf5fakZi3ZurGMJqqcmv+NT2IxP8eftr77lSqM9bFkfvgRFaCOc5E303yzg5kSGhAaaYbMLw
 kDZL2wGKFRczeCJQaRnAqTP5FeOK5+BmQZ5F5HfmH7o65gicGik2qpiUoV/WROV9pIcIWsaey
 XjIn0ounM2CztblyWF/Qa27qFP23hoFcVe7tuKaCQXUMfHBOMBNC9n1V6EJjptO0d6Iq2T2O4
 8nUVxIzyY66I+0iwRCnBk9TOe0F78rVcx+CpxTqq1lDrUXkHSnq4SkGRNICzdPgWycnsaGYlN
 iuLDhH3DguoBQD9OuhM5rMDD+JIwgJbQYkvw77bPuYF9lGKd/j6W7cYZq4BTuvVB9gP80wILa
 E9sYo8oLNw==

On 02/02/2021 at 16:56, Philippe Mathieu-Daudé wrote:
> 'drivers_blacklisted' is never accessed, remove it.
> 
> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> ---
>  hw/i386/xen/xen_platform.c | 13 ++-----------
>  1 file changed, 2 insertions(+), 11 deletions(-)
> 
> diff --git a/hw/i386/xen/xen_platform.c b/hw/i386/xen/xen_platform.c
> index 7c4db35debb..01ae1fb1618 100644
> --- a/hw/i386/xen/xen_platform.c
> +++ b/hw/i386/xen/xen_platform.c
> @@ -60,7 +60,6 @@ struct PCIXenPlatformState {
>      MemoryRegion bar;
>      MemoryRegion mmio_bar;
>      uint8_t flags; /* used only for version_id == 2 */
> -    int drivers_blacklisted;
>      uint16_t driver_product_version;
>  
>      /* Log from guest drivers */
> @@ -245,18 +244,10 @@ static void platform_fixed_ioport_writeb(void *opaque, uint32_t addr, uint32_t v
>  
>  static uint32_t platform_fixed_ioport_readw(void *opaque, uint32_t addr)
>  {
> -    PCIXenPlatformState *s = opaque;
> -
>      switch (addr) {
>      case 0:
> -        if (s->drivers_blacklisted) {
> -            /* The drivers will recognise this magic number and refuse
> -             * to do anything. */
> -            return 0xd249;
> -        } else {
> -            /* Magic value so that you can identify the interface. */
> -            return 0x49d2;
> -        }
> +        /* Magic value so that you can identify the interface. */
> +        return 0x49d2;
>      default:
>          return 0xffff;
>      }
> 

Applied to my trivial-patches branch.

Thanks,
Laurent


From xen-devel-bounces@lists.xenproject.org Sun Feb 14 01:36:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 14 Feb 2021 01:36:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84761.158933 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lB6Kz-0003QS-Um; Sun, 14 Feb 2021 01:36:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84761.158933; Sun, 14 Feb 2021 01:36:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lB6Kz-0003QL-R3; Sun, 14 Feb 2021 01:36:37 +0000
Received: by outflank-mailman (input) for mailman id 84761;
 Sun, 14 Feb 2021 01:36:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lB6Ky-0003PU-EO; Sun, 14 Feb 2021 01:36:36 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lB6Ky-0006xK-9Z; Sun, 14 Feb 2021 01:36:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lB6Kx-00044J-TX; Sun, 14 Feb 2021 01:36:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lB6Kx-0008PM-Sv; Sun, 14 Feb 2021 01:36:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=M9h19oltUu2hdciJ3B1IeHz7WvnZhhILvfL7z/jA0oo=; b=PIM8gdEyotbRPBBYTNB63UTz3s
	uHQG4LT3E2h/Bq/sULwKWh/hnU737aJu+P6MDt3rBfvUzNICli6drr6fNMnVUqLrim5bZCtVKR9Do
	wypBamvPPCaXqbXsc74VmI6hS/JEB8pV+9iMyubPiJTmZcAR9ecMkJS34uFWWxeWqt+M=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159315-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 159315: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=04085ec1ac05a362812e9b0c6b5a8713d7dc88ad
X-Osstest-Versions-That:
    xen=ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 14 Feb 2021 01:36:35 +0000

flight 159315 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159315/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail blocked in 159036
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 159036
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 159036
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 159036
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 159036
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 159036
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 159036
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 159036
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 159036
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 159036
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 159036
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  04085ec1ac05a362812e9b0c6b5a8713d7dc88ad
baseline version:
 xen                  ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7

Last test of basis   159036  2021-02-05 08:46:54 Z    8 days
Failing since        159077  2021-02-06 11:11:30 Z    7 days    6 attempts
Testing same since   159315  2021-02-13 03:58:47 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Elliott Mitchell <ehem+xen@m5p.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jukka Kaartinen <jukka.kaartinen@unikie.com>
  Julien Grall <jgrall@amazon.com>
  Michał Leszczyński <michal.leszczynski@cert.pl>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Tamas K Lengyel <tamas.lengyel@intel.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   ff522e2e91..04085ec1ac  04085ec1ac05a362812e9b0c6b5a8713d7dc88ad -> master


From xen-devel-bounces@lists.xenproject.org Sun Feb 14 04:40:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 14 Feb 2021 04:40:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84793.159016 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lB9D7-00041j-Je; Sun, 14 Feb 2021 04:40:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84793.159016; Sun, 14 Feb 2021 04:40:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lB9D7-00041c-GR; Sun, 14 Feb 2021 04:40:41 +0000
Received: by outflank-mailman (input) for mailman id 84793;
 Sun, 14 Feb 2021 04:40:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lB9D6-00041U-3W; Sun, 14 Feb 2021 04:40:40 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lB9D5-0001zl-Tj; Sun, 14 Feb 2021 04:40:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lB9D5-00041R-Hy; Sun, 14 Feb 2021 04:40:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lB9D5-0000x0-HR; Sun, 14 Feb 2021 04:40:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=yiVM5fqA33ZK0kRUq+u/DWWu0MtbeLPN/tUiOGuAndI=; b=vvxIj67Z0HlbHEWdHQJQDTZ6XK
	2O4SftxlJEzjx7V55u3kmPHSAeuTbSHpY62/PCLZwMh8Z6ndgMPd4f6uVIaPe2jAC6i8gUO+/xZw9
	Acn+2gCwpKvU67jWsrDDhQhHnD8T6QkGvXwGJPTqMtSPWVmnmWVvDAetmg0KxBEi+1/8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159324-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 159324: regressions - FAIL
X-Osstest-Failures:
    linux-5.4:test-arm64-arm64-xl-credit1:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-xsm:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-credit2:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-seattle:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-arndale:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-thunderx:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-libvirt:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-cubietruck:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=5e1942063dc3633f7a127aa2b159c13507580d21
X-Osstest-Versions-That:
    linux=a829146c3fdcf6d0b76d9c54556a223820f1f73b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 14 Feb 2021 04:40:39 +0000

flight 159324 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159324/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-credit1  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-xsm      14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl          14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-credit2  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-seattle  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-credit1  14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-credit2  14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-arndale  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-thunderx 14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-multivcpu 14 guest-start             fail REGR. vs. 158387
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-cubietruck 14 guest-start            fail REGR. vs. 158387
 test-armhf-armhf-xl          14 guest-start              fail REGR. vs. 158387

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     14 guest-start              fail REGR. vs. 158387

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 158387
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 158387
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 158387
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 158387
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 158387
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 158387
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 158387
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 158387
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 158387
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 158387
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                5e1942063dc3633f7a127aa2b159c13507580d21
baseline version:
 linux                a829146c3fdcf6d0b76d9c54556a223820f1f73b

Last test of basis   158387  2021-01-12 19:40:06 Z   32 days
Failing since        158473  2021-01-17 13:42:20 Z   27 days   38 attempts
Testing same since   159200  2021-02-10 11:25:12 Z    3 days    4 attempts

------------------------------------------------------------
453 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 13546 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Feb 14 09:38:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 14 Feb 2021 09:38:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84844.159085 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBDr7-0005XB-Vx; Sun, 14 Feb 2021 09:38:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84844.159085; Sun, 14 Feb 2021 09:38:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBDr7-0005X4-Ss; Sun, 14 Feb 2021 09:38:17 +0000
Received: by outflank-mailman (input) for mailman id 84844;
 Sun, 14 Feb 2021 09:38:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBDr6-0005Ww-Rf; Sun, 14 Feb 2021 09:38:16 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBDr6-0007mo-Ly; Sun, 14 Feb 2021 09:38:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBDr6-0002R8-Aa; Sun, 14 Feb 2021 09:38:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lBDr6-0004Gy-9y; Sun, 14 Feb 2021 09:38:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=qnVLNWg4Equ/p1WU9/8Hs2q+YMuBpp6jWD0A3/sXRsg=; b=eHtpNqKWAu1czd+23bHmhj8w+G
	CfEuy7afqDf4Vh4tIJeXKi+1K/C4mWy0cuNdkKBOqhV3pLpb4R+e6AYNk1Xrz/dXtRwWJm+LrdiPW
	ATIqfIa/9IN+WyoDrZkdYxcnSmak1fMuTqUU1/WQu3+BbD6TTNRWIi0wUS/FaKdQucvA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159338-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 159338: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=97c57c67858e54916207520c693939741adae450
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 14 Feb 2021 09:38:16 +0000

flight 159338 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159338/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              97c57c67858e54916207520c693939741adae450
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  219 days
Failing since        151818  2020-07-11 04:18:52 Z  218 days  211 attempts
Testing same since   159338  2021-02-14 04:19:55 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 42318 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Feb 14 09:48:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 14 Feb 2021 09:48:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84850.159100 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBE1O-0006WP-2M; Sun, 14 Feb 2021 09:48:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84850.159100; Sun, 14 Feb 2021 09:48:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBE1N-0006WI-V1; Sun, 14 Feb 2021 09:48:53 +0000
Received: by outflank-mailman (input) for mailman id 84850;
 Sun, 14 Feb 2021 09:48:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBE1M-0006WA-BZ; Sun, 14 Feb 2021 09:48:52 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBE1M-0007x0-4n; Sun, 14 Feb 2021 09:48:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBE1L-00033o-Tz; Sun, 14 Feb 2021 09:48:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lBE1L-00029a-TT; Sun, 14 Feb 2021 09:48:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ZNdzCzwGB8tNhjOddTJbrHmLpFBsMaYU/Li82+n4mRw=; b=G/VEElVDwsv4HepXwkX3uYiSi7
	V3bT1EztsoeU5kKP438KTG7RAdkrg6wzL+CMOJzoqjt+yMM6bjgp9leUAE7qMsxXF4IgMeKj5u3Vt
	v6xm0Q807qvlYw9KlItfTWXG8SE5QoA88QevZ79Ckq063+B+UFzvqoDGwSjHck7ARcKM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159344-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 159344: all pass - PUSHED
X-Osstest-Versions-This:
    xen=04085ec1ac05a362812e9b0c6b5a8713d7dc88ad
X-Osstest-Versions-That:
    xen=687121f8a0e7c1ea1c4fa3d056637e5819342f14
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 14 Feb 2021 09:48:51 +0000

flight 159344 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159344/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  04085ec1ac05a362812e9b0c6b5a8713d7dc88ad
baseline version:
 xen                  687121f8a0e7c1ea1c4fa3d056637e5819342f14

Last test of basis   159197  2021-02-10 09:18:32 Z    4 days
Testing same since   159344  2021-02-14 09:18:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Elliott Mitchell <ehem+xen@m5p.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Jukka Kaartinen <jukka.kaartinen@unikie.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Wei Liu <wl@xen.org>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   687121f8a0..04085ec1ac  04085ec1ac05a362812e9b0c6b5a8713d7dc88ad -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Sun Feb 14 10:45:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 14 Feb 2021 10:45:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84861.159121 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBEto-0003T4-Hn; Sun, 14 Feb 2021 10:45:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84861.159121; Sun, 14 Feb 2021 10:45:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBEto-0003Sx-Eg; Sun, 14 Feb 2021 10:45:08 +0000
Received: by outflank-mailman (input) for mailman id 84861;
 Sun, 14 Feb 2021 10:45:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBEtn-0003Sp-7e; Sun, 14 Feb 2021 10:45:07 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBEtn-0000Ul-0T; Sun, 14 Feb 2021 10:45:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBEtm-0004xC-PA; Sun, 14 Feb 2021 10:45:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lBEtm-0001qS-Og; Sun, 14 Feb 2021 10:45:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=h0vT9IuiTANgZSIpAn5D9CHf8NSUXG70BZboc5YtMBM=; b=Z2hxvqM6ipbMYXDRt0EFV1bGkd
	m67dM6+gpIH32DTf/MbCM0dV60yGwL3hzuVrc/RdwgbOYlmGe+3i6aNWfIRdcGD1FlEIu3hu8NDVB
	w2FFvtNPcdnITfJCMoIxgQZubjJfJmXuuRVeSbClObNhXj/n3b9muuVt2BNt6t/mF2Vk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159331-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 159331: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-credit2:guest-localmigrate/x10:fail:heisenbug
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=abb8b29aff352f226bf91cb59e5ac7e3e6082ce8
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 14 Feb 2021 10:45:06 +0000

flight 159331 qemu-mainline real [real]
flight 159343 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/159331/
http://logs.test-lab.xenproject.org/osstest/logs/159343/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-credit2 20 guest-localmigrate/x10 fail pass in 159343-retest

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     14 guest-start              fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                abb8b29aff352f226bf91cb59e5ac7e3e6082ce8
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  178 days
Failing since        152659  2020-08-21 14:07:39 Z  176 days  346 attempts
Testing same since   159331  2021-02-13 18:58:50 Z    0 days    1 attempts

------------------------------------------------------------
401 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 110866 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Feb 14 12:34:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 14 Feb 2021 12:34:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84895.159148 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBGb9-0004qD-9e; Sun, 14 Feb 2021 12:33:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84895.159148; Sun, 14 Feb 2021 12:33:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBGb9-0004q6-6c; Sun, 14 Feb 2021 12:33:59 +0000
Received: by outflank-mailman (input) for mailman id 84895;
 Sun, 14 Feb 2021 12:33:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBGb7-0004py-R1; Sun, 14 Feb 2021 12:33:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBGb7-0002Dm-HE; Sun, 14 Feb 2021 12:33:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBGb7-0002Se-6B; Sun, 14 Feb 2021 12:33:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lBGb7-00060E-5g; Sun, 14 Feb 2021 12:33:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=yRHpje+RFiAz1mfPiC4zG3dZEubTEFrwdQKYJwU/kb4=; b=CZE+i6qaoI8PKRo8WIaAlp5uvZ
	6b2JqkO8+cg8M2tXMuh6SZtcT9yukPV4CU2kr5xGyxTlMURGFOBZNOOO4f6BT7oo42ndaievErNnz
	tAwhB6TUoklLvKqF0tn7AQRyWQ3RYlSxaP/YBzXe0dDvUYo8Cuxh3W15Qfo/L//zF/cs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159333-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 159333: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=ac30d8ce28d61c05ac3a8b1452e889371136f3af
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 14 Feb 2021 12:33:57 +0000

flight 159333 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159333/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl          10 host-ping-check-xen      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-seattle  11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                ac30d8ce28d61c05ac3a8b1452e889371136f3af
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  197 days
Failing since        152366  2020-08-01 20:49:34 Z  196 days  342 attempts
Testing same since   159333  2021-02-13 21:19:06 Z    0 days    1 attempts

------------------------------------------------------------
4576 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1032827 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Feb 14 14:32:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 14 Feb 2021 14:32:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84917.159175 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBIRX-000735-RY; Sun, 14 Feb 2021 14:32:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84917.159175; Sun, 14 Feb 2021 14:32:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBIRX-00072y-N1; Sun, 14 Feb 2021 14:32:11 +0000
Received: by outflank-mailman (input) for mailman id 84917;
 Sun, 14 Feb 2021 14:32:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lBIRW-00072t-7B
 for xen-devel@lists.xenproject.org; Sun, 14 Feb 2021 14:32:10 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lBIRU-0004B5-RC; Sun, 14 Feb 2021 14:32:08 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lBIRU-0004lo-Fv; Sun, 14 Feb 2021 14:32:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=m5iut/WYHR7DkdBnCVfuEsE8J1Rh3jEUkPkymCm/aPM=; b=MBIxO691U94ZAa7Z5i6wMlzMi9
	YId38bQUJNDWMDS0vkMby1CbrzeRiumwCu+tKV/n10JzS+xG4Kheh4+v8uf+eD/nvGfHS9MvtU1++
	hopLMCVTrsvZQQyGcBgWT+yaOhfmuFijSnx41VlO8dZvyE1gzLsNr6/Zcb4zjUkwd6Vs=;
Subject: Re: [PATCH v2] xen/arm: fix gnttab_need_iommu_mapping
To: Rahul Singh <Rahul.Singh@arm.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 "lucmiccio@gmail.com" <lucmiccio@gmail.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
References: <20210208184932.23468-1-sstabellini@kernel.org>
 <B96B5E21-0600-4664-899D-D38A18DE7A8C@arm.com>
 <alpine.DEB.2.21.2102091226560.8948@sstabellini-ThinkPad-T480s>
 <EFFD35EA-378B-4C5C-8485-7EA5265E89E4@arm.com>
 <4e4f7d25-6f5f-1016-b1c9-7aa902d637b8@xen.org>
 <ECC82E19-3504-4E0E-B3EA-D0E46DD842C6@arm.com>
 <c573b3a0-186d-626e-6670-f8fc28285e3d@xen.org>
 <BFC5858A-3631-48E1-AB87-40EECF95FA66@arm.com>
 <489c1b89-67f0-5d47-d527-3ea580b7cc43@xen.org>
 <DC7F1705-54B3-4543-8222-E7BCF1A501F7@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <acbcdd06-83b1-28ff-ea7e-2ce1ba681ac1@xen.org>
Date: Sun, 14 Feb 2021 14:32:06 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <DC7F1705-54B3-4543-8222-E7BCF1A501F7@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Rahul,

On 11/02/2021 16:05, Rahul Singh wrote:
>> On 11 Feb 2021, at 1:52 pm, Julien Grall <julien@xen.org> wrote:
>>
>>
>>
>> On 11/02/2021 13:20, Rahul Singh wrote:
>>> Hello Julien,
>>
>> Hi Rahul,
>>
>>>> On 10 Feb 2021, at 7:52 pm, Julien Grall <julien@xen.org> wrote:
>>>>
>>>>
>>>>
>>>> On 10/02/2021 18:08, Rahul Singh wrote:
>>>>> Hello Julien,
>>>>>> On 10 Feb 2021, at 5:34 pm, Julien Grall <julien@xen.org> wrote:
>>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> On 10/02/2021 15:06, Rahul Singh wrote:
>>>>>>>> On 9 Feb 2021, at 8:36 pm, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>>>>>>>
>>>>>>>> On Tue, 9 Feb 2021, Rahul Singh wrote:
>>>>>>>>>> On 8 Feb 2021, at 6:49 pm, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>>>>>>>>>
>>>>>>>>>> Commit 91d4eca7add broke gnttab_need_iommu_mapping on ARM.
>>>>>>>>>> The offending chunk is:
>>>>>>>>>>
>>>>>>>>>> #define gnttab_need_iommu_mapping(d)                    \
>>>>>>>>>> -    (is_domain_direct_mapped(d) && need_iommu(d))
>>>>>>>>>> +    (is_domain_direct_mapped(d) && need_iommu_pt_sync(d))
>>>>>>>>>>
>>>>>>>>>> On ARM we need gnttab_need_iommu_mapping to be true for dom0 when it is
>>>>>>>>>> directly mapped and IOMMU is enabled for the domain, like the old check
>>>>>>>>>> did, but the new check is always false.
>>>>>>>>>>
>>>>>>>>>> In fact, need_iommu_pt_sync is defined as dom_iommu(d)->need_sync and
>>>>>>>>>> need_sync is set as:
>>>>>>>>>>
>>>>>>>>>>    if ( !is_hardware_domain(d) || iommu_hwdom_strict )
>>>>>>>>>>        hd->need_sync = !iommu_use_hap_pt(d);
>>>>>>>>>>
>>>>>>>>>> iommu_use_hap_pt(d) means that the page-table used by the IOMMU is the
>>>>>>>>>> P2M. It is true on ARM. need_sync means that you have a separate IOMMU
>>>>>>>>>> page-table and it needs to be updated for every change. need_sync is set
>>>>>>>>>> to false on ARM. Hence, gnttab_need_iommu_mapping(d) is false too,
>>>>>>>>>> which is wrong.
>>>>>>>>>>
>>>>>>>>>> As a consequence, when using PV network from a domU on a system where
>>>>>>>>>> IOMMU is on from Dom0, I get:
>>>>>>>>>>
>>>>>>>>>> (XEN) smmu: /smmu@fd800000: Unhandled context fault: fsr=0x402, iova=0x8424cb148, fsynr=0xb0001, cb=0
>>>>>>>>>> [   68.290307] macb ff0e0000.ethernet eth0: DMA bus error: HRESP not OK
>>>>>>>>>>
>>>>>>>>>> The fix is to go back to something along the lines of the old
>>>>>>>>>> implementation of gnttab_need_iommu_mapping.
>>>>>>>>>>
>>>>>>>>>> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
>>>>>>>>>> Fixes: 91d4eca7add
>>>>>>>>>> Backport: 4.12+
>>>>>>>>>>
>>>>>>>>>> ---
>>>>>>>>>>
>>>>>>>>>> Given the severity of the bug, I would like to request this patch to be
>>>>>>>>>> backported to 4.12 too, even if 4.12 is security-fixes only since Oct
>>>>>>>>>> 2020.
>>>>>>>>>>
>>>>>>>>>> For the 4.12 backport, we can use iommu_enabled() instead of
>>>>>>>>>> is_iommu_enabled() in the implementation of gnttab_need_iommu_mapping.
>>>>>>>>>>
>>>>>>>>>> Changes in v2:
>>>>>>>>>> - improve commit message
>>>>>>>>>> - add is_iommu_enabled(d) to the check
>>>>>>>>>> ---
>>>>>>>>>> xen/include/asm-arm/grant_table.h | 2 +-
>>>>>>>>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>>>>>>
>>>>>>>>>> diff --git a/xen/include/asm-arm/grant_table.h b/xen/include/asm-arm/grant_table.h
>>>>>>>>>> index 6f585b1538..0ce77f9a1c 100644
>>>>>>>>>> --- a/xen/include/asm-arm/grant_table.h
>>>>>>>>>> +++ b/xen/include/asm-arm/grant_table.h
>>>>>>>>>> @@ -89,7 +89,7 @@ int replace_grant_host_mapping(unsigned long gpaddr, mfn_t mfn,
>>>>>>>>>>     (((i) >= nr_status_frames(t)) ? INVALID_GFN : (t)->arch.status_gfn[i])
>>>>>>>>>>
>>>>>>>>>> #define gnttab_need_iommu_mapping(d)                    \
>>>>>>>>>> -    (is_domain_direct_mapped(d) && need_iommu_pt_sync(d))
>>>>>>>>>> +    (is_domain_direct_mapped(d) && is_iommu_enabled(d))
>>>>>>>>>>
>>>>>>>>>> #endif /* __ASM_GRANT_TABLE_H__ */
>>>>>>>>>
>>>>>>>>> I tested the patch and while creating the guest I observed the below warning from Linux for block device.
>>>>>>>>> https://elixir.bootlin.com/linux/v4.3/source/drivers/block/xen-blkback/xenbus.c#L258
>>>>>>>>
>>>>>>>> So you are creating a guest with "xl create" in dom0 and you see the
>>>>>>>> warnings below printed by the Dom0 kernel? I imagine the domU has a
>>>>>>>> virtual "disk" of some sort.
>>>>>>> Yes you are right I am trying to create the guest with "xl create" and before that, I created the logical volume and trying to attach the logical volume
>>>>>>> block to the domain with "xl block-attach". I observed this error with the "xl block-attach" command.
>>>>>>> This issue occurs after applying this patch: as far as I observed, the patch introduces calls to iommu_legacy_{,un}map() to map the grant pages in the
>>>>>>> IOMMU, which touches the page-tables. I am not sure, but what I observed is that something is written wrong when the iommu_unmap() calls unmap the pages,
>>>>>>> and the issue is observed because of that.
>>>>>>
>>>>>> Can you clarify what you mean by "written wrong"? What sort of error do you see in the iommu_unmap()?
>>>>> I might be wrong, but as per my understanding, on ARM we always share the P2M between the CPU and the IOMMU, and the map_grant_ref() function is written in such a way that we have to call iommu_legacy_{,un}map() only if the P2M is not shared.
>>>>
>>>> map_grant_ref() will call the IOMMU if gnttab_need_iommu_mapping() returns true. I don't really see where this is assuming the P2M is not shared.
>>>>
>>>> In fact, on x86, this will always be false for HVM domain (they support both shared and separate page-tables).
>>>>
>>>>> As we are sharing the P2M, when we call the iommu_map() function it will overwrite the existing GFN -> MFN entry (for dom0, GFN is the same as MFN), and when we call iommu_unmap() it will unmap the (GFN -> MFN) entry from the page-table.
>>>> AFAIK, there should be nothing mapped at that GFN because the page belongs to the guest. At worst, we would overwrite a mapping that is the same.
>>> Sorry I should have mentioned before: backend/frontend is dom0 in this
>>> case and GFN is mapped. I am trying to attach the block device to DOM0
>>
>> Ah, your log makes a lot more sense now. Thank you for the clarification!
>>
>> So yes, I agree that iommu_{,un}map() will do the wrong thing if the frontend and backend in the same domain.
>>
>> I don't know what the state in Linux, but from Xen PoV it should be possible to have the backend/frontend in the same domain.
>>
>> I think we want to ignore the IOMMU mapping request when the domain is the same. Can you try this small untested patch:
> 
> I tested the patch and it is working fine for both dom0 and domU: I am able to attach the block device to either.
> Also, I didn't observe any IOMMU fault for the block device that we have behind the IOMMU on our system and attached to domU.

Thanks for the testing. I noticed that my patch doesn't build because 
arm_iommu_unmap_page() doesn't have a parameter mfn. Can you confirm 
whether you had to replace mfn with _mfn(dfn_x(dfn))?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sun Feb 14 14:35:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 14 Feb 2021 14:35:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84920.159186 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBIUQ-0007Br-Db; Sun, 14 Feb 2021 14:35:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84920.159186; Sun, 14 Feb 2021 14:35:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBIUQ-0007Bk-AU; Sun, 14 Feb 2021 14:35:10 +0000
Received: by outflank-mailman (input) for mailman id 84920;
 Sun, 14 Feb 2021 14:35:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lBIUO-0007Be-Q4
 for xen-devel@lists.xenproject.org; Sun, 14 Feb 2021 14:35:08 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lBIUO-0004DK-9b; Sun, 14 Feb 2021 14:35:08 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lBIUN-00057D-WE; Sun, 14 Feb 2021 14:35:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From;
	bh=m/LlNL0TamjvVVErh/pPovK2MwMDVTxbRIY3ju8BM/k=; b=4DbgDOufF2y1YcJUTFqmEFkZkM
	APBxrcLN5LfRljx07JdtnDU0WjYWeC11Cr2DMH+Xha6Ru0FegqY3c6RJHHCJW7LUqamltDQ9YgAso
	rigAa87oBUkROeFUb4djd9/CcFPFLVa2ZMqCCrdmrMdBrpEXkY3JT/BjKasggs6Iu+AI=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Rahul Singh <Rahul.Singh@arm.com>
Subject: [PATCH] xen/iommu: arm: Don't insert an IOMMU mapping when the grantee and granter...
Date: Sun, 14 Feb 2021 14:35:04 +0000
Message-Id: <20210214143504.23099-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1

From: Julien Grall <jgrall@amazon.com>

... are the same.

When the IOMMU is enabled and the domain is direct mapped (e.g. Dom0),
Xen will insert a 1:1 mapping for each grant mapping in the P2M to
allow DMA.

This works quite well when the grantee and granter are not the same
because the GFN in the P2M should not be mapped. However, if they are
the same, we will overwrite the mapping. Worse, it will be completely
removed when the grant is unmapped.

As the domain is direct mapped, a 1:1 mapping should always be present
in the P2M. This is not 100% guaranteed if the domain decides to mess
with its P2M; however, such a domain would already be in trouble, as the
page would soon be freed (when the last reference is dropped).

Add an additional check in arm_iommu_{,un}map_page() to check whether
the page belongs to the domain. If it does, then ignore the
request.

Note that we don't need to hold an extra reference on the page because
the grant code will already do it for us.

Reported-by: Rahul Singh <Rahul.Singh@arm.com>
Fixes: 552710b38863 ("xen/arm: grant: Add another entry to map MFN 1:1 in dom0 p2m")
Signed-off-by: Julien Grall <jgrall@amazon.com>

---

This is a candidate for 4.15. Without the patch, it would not be
possible to get the frontend and backend in the same domain (useful when
trying to map the guest block device in dom0).
---
 xen/drivers/passthrough/arm/iommu_helpers.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/xen/drivers/passthrough/arm/iommu_helpers.c b/xen/drivers/passthrough/arm/iommu_helpers.c
index a36e2b8e6c42..35a813308b8c 100644
--- a/xen/drivers/passthrough/arm/iommu_helpers.c
+++ b/xen/drivers/passthrough/arm/iommu_helpers.c
@@ -53,6 +53,17 @@ int __must_check arm_iommu_map_page(struct domain *d, dfn_t dfn, mfn_t mfn,
 
     t = (flags & IOMMUF_writable) ? p2m_iommu_map_rw : p2m_iommu_map_ro;
 
+    /*
+     * The granter and grantee can be the same domain. In normal
+     * conditions, the 1:1 mapping should already be present in the P2M, so
+     * we want to avoid overwriting it.
+     *
+     * Note that there is no guarantee the 1:1 mapping will be present
+     * if the domain decides to mess with the P2M.
+     */
+    if ( page_get_owner(mfn_to_page(mfn)) == d )
+        return 0;
+
     /*
      * The function guest_physmap_add_entry replaces the current mapping
      * if there is already one...
@@ -71,6 +82,13 @@ int __must_check arm_iommu_unmap_page(struct domain *d, dfn_t dfn,
     if ( !is_domain_direct_mapped(d) )
         return -EINVAL;
 
+    /*
+     * When the granter and grantee are the same, we didn't insert a
+     * mapping. So ignore the unmap request.
+     */
+    if ( page_get_owner(mfn_to_page(_mfn(dfn_x(dfn)))) == d )
+        return 0;
+
     return guest_physmap_remove_page(d, _gfn(dfn_x(dfn)), _mfn(dfn_x(dfn)), 0);
 }
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Sun Feb 14 14:52:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 14 Feb 2021 14:52:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84928.159198 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBIlK-0000aA-MC; Sun, 14 Feb 2021 14:52:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84928.159198; Sun, 14 Feb 2021 14:52:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBIlK-0000a3-JG; Sun, 14 Feb 2021 14:52:38 +0000
Received: by outflank-mailman (input) for mailman id 84928;
 Sun, 14 Feb 2021 14:52:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBIlJ-0000Zv-8U; Sun, 14 Feb 2021 14:52:37 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBIlI-0004V2-UO; Sun, 14 Feb 2021 14:52:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBIlI-0001YP-N1; Sun, 14 Feb 2021 14:52:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lBIlI-0006FW-Ma; Sun, 14 Feb 2021 14:52:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=GOTx7rVtbob76CmC5vOVs8tZZPgT6acJhjHK253bVTY=; b=kIKLY7oGMNmedh9v0DSkrG9hzo
	E/zqqyXjQxB/HgYj3BY/AlnyMtzUxfICWryA8U4/jwPaRhcRM7xcda6crSLPqFe81RUdLCsD6PiM6
	io7j2qQzd7hUW0kSywOzKOMAYf/shLh/5HGEtAxWSd3Vyvq5Lxx/0M6h3U9fSL2MaYiI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [linux-5.4 bisection] complete test-armhf-armhf-xl-credit1
Message-Id: <E1lBIlI-0006FW-Ma@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 14 Feb 2021 14:52:36 +0000

branch xen-unstable
xenbranch xen-unstable
job test-armhf-armhf-xl-credit1
testid guest-start

Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
  Bug introduced:  a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Bug not present: acc402fa5bf502d471d50e3d495379f093a7f9e4
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/159351/


  commit a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Author: David Woodhouse <dwmw@amazon.co.uk>
  Date:   Wed Jan 13 13:26:02 2021 +0000
  
      xen: Fix event channel callback via INTX/GSI
      
      [ Upstream commit 3499ba8198cad47b731792e5e56b9ec2a78a83a2 ]
      
      For a while, event channel notification via the PCI platform device
      has been broken, because we attempt to communicate with xenstore before
      we even have notifications working, with the xs_reset_watches() call
      in xs_init().
      
      We tend to get away with this on Xen versions below 4.0 because we avoid
      calling xs_reset_watches() anyway, because xenstore might not cope with
      reading a non-existent key. And newer Xen *does* have the vector
      callback support, so we rarely fall back to INTX/GSI delivery.
      
      To fix it, clean up a bit of the mess of xs_init() and xenbus_probe()
      startup. Call xs_init() directly from xenbus_init() only in the !XS_HVM
      case, deferring it to be called from xenbus_probe() in the XS_HVM case
      instead.
      
      Then fix up the invocation of xenbus_probe() to happen either from its
      device_initcall if the callback is available early enough, or when the
      callback is finally set up. This means that the hack of calling
      xenbus_probe() from a workqueue after the first interrupt, or directly
      from the PCI platform device setup, is no longer needed.
      
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Link: https://lore.kernel.org/r/20210113132606.422794-2-dwmw2@infradead.org
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/linux-5.4/test-armhf-armhf-xl-credit1.guest-start.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/linux-5.4/test-armhf-armhf-xl-credit1.guest-start --summary-out=tmp/159351.bisection-summary --basis-template=158387 --blessings=real,real-bisect,real-retry linux-5.4 test-armhf-armhf-xl-credit1 guest-start
Searching for failure / basis pass:
 159324 fail [host=arndale-bluewater] / 158681 [host=cubietruck-gleizes] 158624 [host=cubietruck-braque] 158616 [host=cubietruck-picasso] 158609 [host=cubietruck-metzinger] 158603 [host=arndale-metrocentre] 158593 [host=arndale-lakeside] 158583 [host=arndale-westfield] 158563 ok.
Failure / basis pass flights: 159324 / 158563
Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 5e1942063dc3633f7a127aa2b159c13507580d21 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1d27e58e401faea284309039f3962cb3cb4549fc 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7
Basis pass d26b3110041a9fddc6c6e36398f53f7eab8cff82 c530a75c1e6a472b0eb9558310b518f0dfcd8860 339371ef78eb3a6f2e9848f8b058379de5e87d39 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e e8adbf680b56a3f4b9600c7bcc04fec1877a6213
Generating revisions with ./adhoc-revtuple-generator  git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git#d26b3110041a9fddc6c6e36398f53f7eab8cff82-5e1942063dc3633f7a127aa2b159c13507580d21 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#339371ef78eb3a6f2e9848f8b058379de5e87d39-1d27e58e401faea284309039f3962cb3cb4549fc git://xenbits.xen.org/qemu-xen.git#7ea4288\
 95af2840d85c524f0bd11a38aac308308-7ea428895af2840d85c524f0bd11a38aac308308 git://xenbits.xen.org/osstest/seabios.git#ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e-ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e git://xenbits.xen.org/xen.git#e8adbf680b56a3f4b9600c7bcc04fec1877a6213-ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7
Loaded 15001 nodes in revision graph
Searching for test results:
 158552 [host=cubietruck-gleizes]
 158563 pass d26b3110041a9fddc6c6e36398f53f7eab8cff82 c530a75c1e6a472b0eb9558310b518f0dfcd8860 339371ef78eb3a6f2e9848f8b058379de5e87d39 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e e8adbf680b56a3f4b9600c7bcc04fec1877a6213
 158583 [host=arndale-westfield]
 158593 [host=arndale-lakeside]
 158603 [host=arndale-metrocentre]
 158609 [host=cubietruck-metzinger]
 158616 [host=cubietruck-picasso]
 158624 [host=cubietruck-braque]
 158681 [host=cubietruck-gleizes]
 158707 fail irrelevant
 158716 fail irrelevant
 158748 fail 131f8d8a889a5ca66a835eea82bba043ac91a7cf c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e f8708b0ed6d549d1d29b8b5cc287f1f2b642bc63
 158765 fail 131f8d8a889a5ca66a835eea82bba043ac91a7cf c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 6677b5a3577c16501fbc51a3341446905bd21c38
 158796 fail irrelevant
 158818 fail irrelevant
 158841 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158863 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158881 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158929 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea56ebf67dd55483105aa9f9996a48213e78337e 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158962 fail irrelevant
 159023 fail irrelevant
 159129 fail irrelevant
 159200 fail irrelevant
 159238 fail irrelevant
 159295 fail 5e1942063dc3633f7a127aa2b159c13507580d21 c530a75c1e6a472b0eb9558310b518f0dfcd8860 124f1dd1ee1140b441151043aacbe5d33bb5ab79 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7
 159328 pass d26b3110041a9fddc6c6e36398f53f7eab8cff82 c530a75c1e6a472b0eb9558310b518f0dfcd8860 339371ef78eb3a6f2e9848f8b058379de5e87d39 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e e8adbf680b56a3f4b9600c7bcc04fec1877a6213
 159330 fail 5e1942063dc3633f7a127aa2b159c13507580d21 c530a75c1e6a472b0eb9558310b518f0dfcd8860 124f1dd1ee1140b441151043aacbe5d33bb5ab79 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7
 159332 pass e2d69319b713c30ca21428c3955a79f3a7bf6c23 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3b769c5110384fb33bcfeddced80f721ec7838cc 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 452ddbe3592b141b05a7e0676f09c8ae07f98fdd
 159334 fail 8c3d3b385ed868660c7dff0336da1bd5a9fb134d c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159336 pass 4d1cf8eeda5b3f411440d9910a484b1d06484aa7 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159324 fail 5e1942063dc3633f7a127aa2b159c13507580d21 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1d27e58e401faea284309039f3962cb3cb4549fc 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7
 159337 pass 68f99105752d132d411231bfc60cf78eceaac5e0 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159340 fail 5e1942063dc3633f7a127aa2b159c13507580d21 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1d27e58e401faea284309039f3962cb3cb4549fc 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e ff522e2e9163b27fe4d80ba55c18408f9b1f1cb7
 159341 pass acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159342 fail 7eef736858712ab65afea3908f49eb4e7775fa93 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159345 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159347 pass acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159348 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159350 pass acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159351 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
Searching for interesting versions
 Result found: flight 158563 (pass), for basis pass
 Result found: flight 159295 (fail), for basis failure (at ancestor ~214)
 Repro found: flight 159328 (pass), for basis pass
 Repro found: flight 159340 (fail), for basis failure
 0 revisions at acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
No revisions left to test, checking graph state.
 Result found: flight 159341 (pass), for last pass
 Result found: flight 159345 (fail), for first failure
 Repro found: flight 159347 (pass), for last pass
 Repro found: flight 159348 (fail), for first failure
 Repro found: flight 159350 (pass), for last pass
 Repro found: flight 159351 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
  Bug introduced:  a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Bug not present: acc402fa5bf502d471d50e3d495379f093a7f9e4
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/159351/


  commit a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Author: David Woodhouse <dwmw@amazon.co.uk>
  Date:   Wed Jan 13 13:26:02 2021 +0000
  
      xen: Fix event channel callback via INTX/GSI
      
      [ Upstream commit 3499ba8198cad47b731792e5e56b9ec2a78a83a2 ]
      
      For a while, event channel notification via the PCI platform device
      has been broken, because we attempt to communicate with xenstore before
      we even have notifications working, with the xs_reset_watches() call
      in xs_init().
      
      We tend to get away with this on Xen versions below 4.0 because we avoid
      calling xs_reset_watches() anyway, because xenstore might not cope with
      reading a non-existent key. And newer Xen *does* have the vector
      callback support, so we rarely fall back to INTX/GSI delivery.
      
      To fix it, clean up a bit of the mess of xs_init() and xenbus_probe()
      startup. Call xs_init() directly from xenbus_init() only in the !XS_HVM
      case, deferring it to be called from xenbus_probe() in the XS_HVM case
      instead.
      
      Then fix up the invocation of xenbus_probe() to happen either from its
      device_initcall if the callback is available early enough, or when the
      callback is finally set up. This means that the hack of calling
      xenbus_probe() from a workqueue after the first interrupt, or directly
      from the PCI platform device setup, is no longer needed.
      
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Link: https://lore.kernel.org/r/20210113132606.422794-2-dwmw2@infradead.org
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>

pnmtopng: 131 colors found
Revision graph left in /home/logs/results/bisect/linux-5.4/test-armhf-armhf-xl-credit1.guest-start.{dot,ps,png,html,svg}.
----------------------------------------
159351: tolerable ALL FAIL

flight 159351 linux-5.4 real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/159351/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-armhf-armhf-xl-credit1  14 guest-start             fail baseline untested


jobs:
 test-armhf-armhf-xl-credit1                                  fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Sun Feb 14 18:37:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 14 Feb 2021 18:37:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84993.159271 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBMGQ-00042O-OC; Sun, 14 Feb 2021 18:36:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84993.159271; Sun, 14 Feb 2021 18:36:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBMGQ-00042H-KR; Sun, 14 Feb 2021 18:36:58 +0000
Received: by outflank-mailman (input) for mailman id 84993;
 Sun, 14 Feb 2021 18:36:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lBMGP-00042C-70
 for xen-devel@lists.xenproject.org; Sun, 14 Feb 2021 18:36:57 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lBMGO-0000DI-U9; Sun, 14 Feb 2021 18:36:56 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lBMGO-0002P5-FX; Sun, 14 Feb 2021 18:36:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=cv+PoO2hB3Ur/fcAURS3NBl+Is4TBPmhFrEjfDJTO5o=; b=kzuvCnlR95SOwLAiXp25xrvVLm
	+gngY086PBKMUbCMb+dRw8dWqBaSaCd+EYAIuhea0TAM2Dz2AOOzxVw5v4xUR+3u5GAL29bueNbOM
	h3Gv0zjTC5GXA6g6H4dK5vXA+zg5kURm95oujtMrmSrEKxpMgf+L8fHc+BhT0WUApKBg=;
Subject: Re: [RFC PATCH v2 00/15] xen/arm: port Linux LL/SC and LSE atomics
 helpers to Xen
To: Ash Wilding <ash.j.wilding@gmail.com>
Cc: bertrand.marquis@arm.com, rahul.singh@arm.com, sstabellini@kernel.org,
 xen-devel@lists.xenproject.org
References: <cb0f7055-6d9a-5c39-6198-109593fd3424@xen.org>
 <20201217153742.14034-1-ash.j.wilding@gmail.com>
From: Julien Grall <julien@xen.org>
Message-ID: <5d79ab3f-7307-d61d-f743-6b14cde2477f@xen.org>
Date: Sun, 14 Feb 2021 18:36:54 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20201217153742.14034-1-ash.j.wilding@gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 17/12/2020 15:37, Ash Wilding wrote:
> Hi Julien,

Hi,

First of all, apologies for the very late reply.

> Thanks for taking a look at the patches and providing feedback. I've seen your
> other comments and will reply to those separately when I get a chance (maybe at
> the weekend or over the Christmas break).
> 
> RE the differences in ordering semantics between Xen's and Linux's atomics
> helpers, please find my notes below.
> 
> Thoughts?

Thank you for the very detailed answer, it made it a lot easier to
understand the differences!

I think it would be good to have some (if not all) of this content in
the documentation for future reference.

[...]

> The tables below use format AAA/BBB/CCC/DDD/EEE, where:
> 
>   - AAA is the memory barrier before the operation
>   - BBB is the acquire semantics of the atomic operation
>   - CCC is the release semantics of the atomic operation
>   - DDD is whether the asm() block clobbers memory
>   - EEE is the memory barrier after the operation
> 
> For example, ---/---/rel/mem/dmb would mean:
> 
>   - No memory barrier before the operation
>   - The atomic does *not* have acquire semantics
>   - The atomic *does* have release semantics
>   - The asm() block clobbers memory
>   - There is a DMB memory barrier after the atomic operation
> 
> 
>      arm64 LL/SC
>      ===========
> 
>          Xen Function            Xen                     Linux                   Inconsistent
>          ============            ===                     =====                   ============
> 
>          atomic_add              ---/---/---/---/---     ---/---/---/---/---     ---
>          atomic_add_return       ---/---/rel/mem/dmb     ---/---/rel/mem/dmb     --- (1)
>          atomic_sub              ---/---/---/---/---     ---/---/---/---/---     ---
>          atomic_sub_return       ---/---/rel/mem/dmb     ---/---/rel/mem/dmb     --- (1)
>          atomic_and              ---/---/---/---/---     ---/---/---/---/---     ---
>          atomic_cmpxchg          dmb/---/---/---/dmb     ---/---/rel/mem/---     YES (2)

If I am not mistaken, Linux is implementing atomic_cmpxchg() with the 
*_mb() version. So the semantic would be:

---/---/rel/mem/dmb

>          atomic_xchg             ---/---/rel/mem/dmb     ---/acq/rel/mem/dmb     YES (3)

 From Linux:

#define __XCHG_CASE(w, sfx, name, sz, mb, nop_lse, acq, acq_lse, rel, cl)   \

[...]

        /* LL/SC */                                                         \
        "       prfm    pstl1strm, %2\n"                                    \
        "1:     ld" #acq "xr" #sfx "\t%" #w "0, %2\n"                       \
        "       st" #rel "xr" #sfx "\t%w1, %" #w "3, %2\n"                  \
        "       cbnz    %w1, 1b\n"                                          \
        "       " #mb,                                                      \

[...]

__XCHG_CASE(w,  ,  mb_, 32, dmb ish, nop,  , a, l, "memory")

So I think the Linux semantic would be:

---/---/rel/mem/dmb

Therefore there would be no inconsistency between Xen and Linux.

> 
> (1) It's actually interesting to me that Linux does it this way. As with the
>      LSE atomics below, I'd have expected acq/rel semantics and ditch the DMB.

I have done some digging. The original implementation of atomic_{sub, 
add}_return was actually using acq/rel semantics up until Linux 3.14. 
But this was reworked by 8e86f0b409a4 "arm64: atomics: fix use of 
acquire + release for full barrier semantics".

>      Unless I'm missing something where there is a concern around taking an IRQ
>      between the LDAXR and the STLXR, which can't happen in the LSE atomic case
>      since it's a single instruction. But the exclusive monitor is cleared on
>      exception return in AArch64 so I'm struggling to see what that potential
>      issue may be. Regardless, Linux and Xen are consistent so we're OK ;-)

The commit I pointed out above contains a lot of detail on the issue. 
For convenience, I have copied the most relevant bits below:

"
On arm64, these operations have been incorrectly implemented as follows:

             // A, B, C are independent memory locations

             <Access [A]>

             // atomic_op (B)
     1:      ldaxr   x0, [B]         // Exclusive load with acquire
             <op(B)>
             stlxr   w1, x0, [B]     // Exclusive store with release
             cbnz    w1, 1b

             <Access [C]>

The assumption here being that two half barriers are equivalent to a
full barrier, so the only permitted ordering would be A -> B -> C
(where B is the atomic operation involving both a load and a store).

Unfortunately, this is not the case by the letter of the architecture
and, in fact, the accesses to A and C are permitted to pass their
nearest half barrier resulting in orderings such as Bl -> A -> C -> Bs
or Bl -> C -> A -> Bs (where Bl is the load-acquire on B and Bs is the
store-release on B). This is a clear violation of the full barrier
requirement.
"

> (2) The Linux version uses either STLXR with rel semantics if the comparison
>      passes, or DMB if the comparison fails.

I think the DMB is only on the success path and there is no barrier on 
the failure path. Commit 4e39715f4b5c "arm64: cmpxchg: avoid memory 
barrier on comparison failure" seems to corroborate that.

> This is weaker than Xen's version,
>      which is quite blunt in always wrapping the operation between two DMBs. This
>      may be a holdover from Xen's arm32 versions being ported to arm64, as we
>      didn't support acq/rel semantics on LDREX and STREX in Armv7-A? Regardless,

The atomic helpers used in Xen were originally taken from Linux 3.14. 
Back then, atomic_cmpxchg() was using two full barriers; this was 
introduced by the commit I pointed out in (1).

This was then optimized with commit 4e39715f4b5c "arm64: cmpxchg: avoid 
memory barrier on comparison failure".

>      this is quite a big discrepancy and I've not yet given it enough thought to
>      determine whether it would actually cause an issue. My feeling is that the
>      Linux LL/SC atomic_cmpxchg() should have acq semantics on the LL, but
>      like you said these helpers are well tested so I'd be surprised if there
>      is a bug. See (5) below though, where the Linux LSE atomic_cmpxchg() *does*
>      have acq semantics.

If my understanding is correct, the semantics would be (Xen vs Linux):

  - failure: dmb/---/---/---/dmb     ---/---/rel/mem/---
  - success: dmb/---/---/---/dmb     ---/---/rel/mem/dmb

I think the success path is not going to be a problem. But we would need 
to check whether any caller relies on a full barrier for the failure path 
(I would hope not).

> 
> (3) The Linux version just adds acq semantics to the LL, so we're OK here.
> 
> 
>      arm64 LSE (comparison to Xen's LL/SC)
>      =====================================
> 
>          Xen Function            Xen                     Linux                   Inconsistent
>          ============            ===                     =====                   ============
> 
>          atomic_add              ---/---/---/---/---     ---/---/---/---/---     ---
>          atomic_add_return       ---/---/rel/mem/dmb     ---/acq/rel/mem/---     YES (4)
>          atomic_sub              ---/---/---/---/---     ---/---/---/---/---     ---
>          atomic_sub_return       ---/---/rel/mem/dmb     ---/acq/rel/mem/---     YES (4)
>          atomic_and              ---/---/---/---/---     ---/---/---/---/---     ---
>          atomic_cmpxchg          dmb/---/---/---/dmb     ---/acq/rel/mem/---     YES (5)
>          atomic_xchg             ---/---/rel/mem/dmb     ---/acq/rel/mem/---     YES (4)
> 
> (4) As noted in (1), this is how I would have expected Linux's LL/SC atomics to
>      work too. I don't think this discrepancy will cause any issues.

IIUC, the LSE implementation uses a single instruction that has both 
acquire and release semantics, and therefore acts as a full barrier.

In the LL/SC implementation, on the other hand, the acquire and release 
semantics come from two different instructions, which together do not 
act as a full barrier.

So I think the memory ordering is going to be equivalent between Xen and 
the Linux LSE implementation.
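
For reference, with LSE the whole operation is one instruction carrying 
both orderings, e.g. for atomic_add_return (a sketch; the "al" suffix 
denotes acquire + release):

```
         // w0 = value to add, [x1] = atomic variable, w2 = old value
         ldaddal w0, w2, [x1]    // atomic add with acq + rel semantics
```

Because it is a single instruction, nothing (not even an interrupt) can 
split the acquire half from the release half, unlike the LDAXR/STLXR 
pair.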

> 
> (5) As with (2) above, this is quite a big discrepancy to Xen. However at least
>      this version has acq semantics unlike the LL/SC version in (2), so I'm more
>      confident that there won't be regressions going from Xen LL/SC to Linux LSE
>      version of atomic_cmpxchg().

Are they really different? In both cases, the helper will act as a full 
barrier; the main difference is how that ordering is achieved.

> 
> 
>      arm32 LL/SC
>      ===========
> 
>          Xen Function            Xen                     Linux                   Inconsistent
>          ============            ===                     =====                   ============
> 
>          atomic_add              ---/---/---/---/---     ---/---/---/---/---     ---
>          atomic_add_return       dmb/---/---/---/dmb     XXX/XXX/XXX/XXX/XXX     YES (6)
>          atomic_sub              ---/---/---/---/---     ---/---/---/---/---     ---
>          atomic_sub_return       dmb/---/---/---/dmb     XXX/XXX/XXX/XXX/XXX     YES (6)
>          atomic_and              ---/---/---/---/---     ---/---/---/---/---     ---
>          atomic_cmpxchg          dmb/---/---/---/dmb     XXX/XXX/XXX/XXX/XXX     YES (6)
>          atomic_xchg             dmb/---/---/---/dmb     XXX/XXX/XXX/XXX/XXX     YES (6)
> 
> (6) Linux only provides relaxed variants of these functions, such as
>      atomic_add_return_relaxed() and atomic_xchg_relaxed(). Patches #13 and #14
>      in the series add the stricter versions expected by Xen, wrapping calls to
>      Linux's relaxed variants in between two calls to smp_mb(). This makes them
>      consistent with Xen's existing helpers, though is quite blunt.

Linux will do the same when the fully ordered version is not implemented 
(see include/linux/atomic-fallback.h).

>  It is worth
>      noting that Armv8-A AArch32 does support acq/rel semantics on exclusive
>      accesses, with LDAEX and STLEX, so I could imagine us introducing a new
>      arm32 hwcap to detect whether we're on actual Armv7-A hardware or Armv8-A
>      AArch32, then swap to lighter-weight STLEX versions of these helpers rather
>      than the heavyweight double DMB versions. Whether that would actually give
>      measurable performance improvements is another story!

That's good to know! So far, I haven't heard of anyone using Xen 32-bit 
on Armv8. I would wait until there is a request before introducing a 3rd 
(4th if counting the LL/SC as 2) implementation of the atomic helpers.

That said, the 32-bit port is only meant to be supported on Armv7. It 
should boot on Armv8, but there is no promise it will be fully 
functional or stable.

Overall, it looks to me like re-syncing the atomic implementation with 
Linux should not be a major problem.

We are in the middle of the 4.15 freeze, so I will try to go through the 
series ASAP.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sun Feb 14 18:50:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 14 Feb 2021 18:50:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.84997.159283 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBMTg-0005le-4R; Sun, 14 Feb 2021 18:50:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 84997.159283; Sun, 14 Feb 2021 18:50:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBMTg-0005lX-1I; Sun, 14 Feb 2021 18:50:40 +0000
Received: by outflank-mailman (input) for mailman id 84997;
 Sun, 14 Feb 2021 18:50:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBMTe-0005lP-Gf; Sun, 14 Feb 2021 18:50:38 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBMTe-0000Q2-9v; Sun, 14 Feb 2021 18:50:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBMTd-0004BH-Vm; Sun, 14 Feb 2021 18:50:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lBMTd-000290-VG; Sun, 14 Feb 2021 18:50:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=9MdH8pqxG1drpTSb0DV479TvFrZfUkhfN1GBliztCn4=; b=pjH0FpA9PWaf7O5nkUTbudzLRg
	hyHrjsljZ2J6cTfe3/TBCxi7egJqzTu6etsYshOxmGYZL5PcIHFsi14RvKg4HlRhhSLr9ka057glS
	ff/VUtdLQl2iqukem47ZdG6ApfIufORww/SQvA5cJS0MsYOEBEXygILtK2Dk9QPrGz38=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159335-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 159335: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=04085ec1ac05a362812e9b0c6b5a8713d7dc88ad
X-Osstest-Versions-That:
    xen=04085ec1ac05a362812e9b0c6b5a8713d7dc88ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 14 Feb 2021 18:50:37 +0000

flight 159335 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159335/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-examine      4 memdisk-try-append         fail pass in 159315
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat  fail pass in 159315

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 159315
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 159315
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 159315
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 159315
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 159315
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 159315
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 159315
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 159315
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 159315
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 159315
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 159315
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  04085ec1ac05a362812e9b0c6b5a8713d7dc88ad
baseline version:
 xen                  04085ec1ac05a362812e9b0c6b5a8713d7dc88ad

Last test of basis   159335  2021-02-14 01:53:36 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun Feb 14 21:18:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 14 Feb 2021 21:18:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85009.159310 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBOmJ-0001Os-TT; Sun, 14 Feb 2021 21:18:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85009.159310; Sun, 14 Feb 2021 21:18:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBOmJ-0001Ol-Ol; Sun, 14 Feb 2021 21:18:03 +0000
Received: by outflank-mailman (input) for mailman id 85009;
 Sun, 14 Feb 2021 21:18:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lBOmI-0001Og-3n
 for xen-devel@lists.xenproject.org; Sun, 14 Feb 2021 21:18:02 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lBOmF-0002tO-Rd; Sun, 14 Feb 2021 21:17:59 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lBOmF-0006vA-Ir; Sun, 14 Feb 2021 21:17:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=/QYTDglDUzmSr9H0s41Nh/i0d4YrXWxPDKOebpqx8dE=; b=NcY8RSnZ3LKaGcib3T0HtChLhF
	M1sLaMxJGp/Kp2l4MeydJBd2feanETtRQcVu06AuiG4/2zW30vwcvB+BBKHyQJrkYanE817356IgL
	8AoJUiKNh+Lp/4nQUrIV37x4O/nhjAkAEWpn2K/Ca8ELj/RVIL+ETmjzuONRlMF2jgK0=;
Subject: Re: [PATCH v2 1/8] xen/events: reset affinity of 2-level event when
 tearing it down
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, stable@vger.kernel.org
References: <20210211101616.13788-1-jgross@suse.com>
 <20210211101616.13788-2-jgross@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <2abf73b0-ec8d-8e9a-665a-1adc47972fe7@xen.org>
Date: Sun, 14 Feb 2021 21:17:57 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210211101616.13788-2-jgross@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 11/02/2021 10:16, Juergen Gross wrote:
> When creating a new event channel with 2-level events the affinity
> needs to be reset initially in order to avoid using an old affinity
> from earlier usage of the event channel port. So when tearing an event
> channel down reset all affinity bits.
> 
> The same applies to the affinity when onlining a vcpu: all old
> affinity settings for this vcpu must be reset. As percpu events get
> initialized before the percpu event channel hook is called,
> resetting of the affinities happens after offlining a vcpu (this is
> working, as initial percpu memory is zeroed out).
> 
> Cc: stable@vger.kernel.org
> Reported-by: Julien Grall <julien@xen.org>
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

> ---
> V2:
> - reset affinity when tearing down the event (Julien Grall)
> ---
>   drivers/xen/events/events_2l.c       | 15 +++++++++++++++
>   drivers/xen/events/events_base.c     |  1 +
>   drivers/xen/events/events_internal.h |  8 ++++++++
>   3 files changed, 24 insertions(+)
> 
> diff --git a/drivers/xen/events/events_2l.c b/drivers/xen/events/events_2l.c
> index da87f3a1e351..a7f413c5c190 100644
> --- a/drivers/xen/events/events_2l.c
> +++ b/drivers/xen/events/events_2l.c
> @@ -47,6 +47,11 @@ static unsigned evtchn_2l_max_channels(void)
>   	return EVTCHN_2L_NR_CHANNELS;
>   }
>   
> +static void evtchn_2l_remove(evtchn_port_t evtchn, unsigned int cpu)
> +{
> +	clear_bit(evtchn, BM(per_cpu(cpu_evtchn_mask, cpu)));
> +}
> +
>   static void evtchn_2l_bind_to_cpu(evtchn_port_t evtchn, unsigned int cpu,
>   				  unsigned int old_cpu)
>   {
> @@ -355,9 +360,18 @@ static void evtchn_2l_resume(void)
>   				EVTCHN_2L_NR_CHANNELS/BITS_PER_EVTCHN_WORD);
>   }
>   
> +static int evtchn_2l_percpu_deinit(unsigned int cpu)
> +{
> +	memset(per_cpu(cpu_evtchn_mask, cpu), 0, sizeof(xen_ulong_t) *
> +			EVTCHN_2L_NR_CHANNELS/BITS_PER_EVTCHN_WORD);
> +
> +	return 0;
> +}
> +
>   static const struct evtchn_ops evtchn_ops_2l = {
>   	.max_channels      = evtchn_2l_max_channels,
>   	.nr_channels       = evtchn_2l_max_channels,
> +	.remove            = evtchn_2l_remove,
>   	.bind_to_cpu       = evtchn_2l_bind_to_cpu,
>   	.clear_pending     = evtchn_2l_clear_pending,
>   	.set_pending       = evtchn_2l_set_pending,
> @@ -367,6 +381,7 @@ static const struct evtchn_ops evtchn_ops_2l = {
>   	.unmask            = evtchn_2l_unmask,
>   	.handle_events     = evtchn_2l_handle_events,
>   	.resume	           = evtchn_2l_resume,
> +	.percpu_deinit     = evtchn_2l_percpu_deinit,
>   };
>   
>   void __init xen_evtchn_2l_init(void)
> diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
> index e850f79351cb..6c539db81f8f 100644
> --- a/drivers/xen/events/events_base.c
> +++ b/drivers/xen/events/events_base.c
> @@ -368,6 +368,7 @@ static int xen_irq_info_pirq_setup(unsigned irq,
>   static void xen_irq_info_cleanup(struct irq_info *info)
>   {
>   	set_evtchn_to_irq(info->evtchn, -1);
> +	xen_evtchn_port_remove(info->evtchn, info->cpu);
>   	info->evtchn = 0;
>   	channels_on_cpu_dec(info);
>   }
> diff --git a/drivers/xen/events/events_internal.h b/drivers/xen/events/events_internal.h
> index 0a97c0549db7..18a4090d0709 100644
> --- a/drivers/xen/events/events_internal.h
> +++ b/drivers/xen/events/events_internal.h
> @@ -14,6 +14,7 @@ struct evtchn_ops {
>   	unsigned (*nr_channels)(void);
>   
>   	int (*setup)(evtchn_port_t port);
> +	void (*remove)(evtchn_port_t port, unsigned int cpu);
>   	void (*bind_to_cpu)(evtchn_port_t evtchn, unsigned int cpu,
>   			    unsigned int old_cpu);
>   
> @@ -54,6 +55,13 @@ static inline int xen_evtchn_port_setup(evtchn_port_t evtchn)
>   	return 0;
>   }
>   
> +static inline void xen_evtchn_port_remove(evtchn_port_t evtchn,
> +					  unsigned int cpu)
> +{
> +	if (evtchn_ops->remove)
> +		evtchn_ops->remove(evtchn, cpu);
> +}
> +
>   static inline void xen_evtchn_port_bind_to_cpu(evtchn_port_t evtchn,
>   					       unsigned int cpu,
>   					       unsigned int old_cpu)
> 
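For readers unfamiliar with the evtchn_ops indirection used above: the new .remove hook follows the driver's existing optional-callback pattern, where the wrapper is a silent no-op for backends that do not implement the hook. A minimal userspace sketch of that pattern (hypothetical names and a toy counter, not the kernel code):

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-ins for the kernel types, for illustration only. */
typedef unsigned int evtchn_port_t;

struct evtchn_ops {
	void (*remove)(evtchn_port_t port, unsigned int cpu); /* may be NULL */
};

static unsigned int removed_port, removed_cpu, calls;

/* Demo backend implementation recording what it was asked to remove. */
static void demo_remove(evtchn_port_t port, unsigned int cpu)
{
	removed_port = port;
	removed_cpu = cpu;
	calls++;
}

/* Wrapper: dispatches only when the backend provides the hook. */
static void port_remove(const struct evtchn_ops *ops,
			evtchn_port_t port, unsigned int cpu)
{
	if (ops->remove)
		ops->remove(port, cpu);
}
```

A backend without the hook (e.g. FIFO-based events, which have no per-cpu mask to clear) simply leaves .remove as NULL and the call is skipped.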

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sun Feb 14 21:34:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 14 Feb 2021 21:34:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85013.159322 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBP2S-00039v-97; Sun, 14 Feb 2021 21:34:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85013.159322; Sun, 14 Feb 2021 21:34:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBP2S-00039o-5v; Sun, 14 Feb 2021 21:34:44 +0000
Received: by outflank-mailman (input) for mailman id 85013;
 Sun, 14 Feb 2021 21:34:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lBP2Q-00039j-Eu
 for xen-devel@lists.xenproject.org; Sun, 14 Feb 2021 21:34:42 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lBP2H-00039D-1g; Sun, 14 Feb 2021 21:34:33 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lBP2G-0008FF-Pr; Sun, 14 Feb 2021 21:34:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=sr90ghnrrGFMQJUJbkuM9kxkBCV/fRb8wi8FFVB7UVM=; b=zYsYKWf6svYxuu2NL22bZxT/Ki
	lWJTcTuHslPksK7qPqnxkie4ebX1D1ya7QT4OZX3xO0iYKAn+GNR+CLRzAyCVlTJcgxXjka4p3ijV
	X28cbCzHUG8LUdXlGxuGd0XUvOAZrdLuVW0hJt7Ro21kji+Nwi8ZZPg1kTL68MNE2ouI=;
Subject: Re: [PATCH v2 3/8] xen/events: avoid handling the same event on two
 cpus at the same time
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20210211101616.13788-1-jgross@suse.com>
 <20210211101616.13788-4-jgross@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <eed12192-a740-e767-1762-828c75de66ce@xen.org>
Date: Sun, 14 Feb 2021 21:34:31 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210211101616.13788-4-jgross@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 11/02/2021 10:16, Juergen Gross wrote:
> When changing the cpu affinity of an event it can happen today that
> (with some unlucky timing) the same event will be handled on the old
> and the new cpu at the same time.
> 
> Avoid that by adding an "event active" flag to the per-event data and
> call the handler only if this flag isn't set.
> 
> Reported-by: Julien Grall <julien@xen.org>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
> V2:
> - new patch
> ---
>   drivers/xen/events/events_base.c | 19 +++++++++++++++----
>   1 file changed, 15 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
> index e157e7506830..f7e22330dcef 100644
> --- a/drivers/xen/events/events_base.c
> +++ b/drivers/xen/events/events_base.c
> @@ -102,6 +102,7 @@ struct irq_info {
>   #define EVT_MASK_REASON_EXPLICIT	0x01
>   #define EVT_MASK_REASON_TEMPORARY	0x02
>   #define EVT_MASK_REASON_EOI_PENDING	0x04
> +	u8 is_active;		/* Is event just being handled? */
>   	unsigned irq;
>   	evtchn_port_t evtchn;   /* event channel */
>   	unsigned short cpu;     /* cpu bound */
> @@ -622,6 +623,7 @@ static void xen_irq_lateeoi_locked(struct irq_info *info, bool spurious)
>   	}
>   
>   	info->eoi_time = 0;
> +	smp_store_release(&info->is_active, 0);
>   	do_unmask(info, EVT_MASK_REASON_EOI_PENDING);
>   }
>   
> @@ -809,13 +811,15 @@ static void pirq_query_unmask(int irq)
>   
>   static void eoi_pirq(struct irq_data *data)
>   {
> -	evtchn_port_t evtchn = evtchn_from_irq(data->irq);
> +	struct irq_info *info = info_for_irq(data->irq);
> +	evtchn_port_t evtchn = info ? info->evtchn : 0;
>   	struct physdev_eoi eoi = { .irq = pirq_from_irq(data->irq) };
>   	int rc = 0;
>   
>   	if (!VALID_EVTCHN(evtchn))
>   		return;
>   
> +	smp_store_release(&info->is_active, 0);

Would you mind explaining why you are using release semantics?

It is also not clear to me whether there is any expected ordering between 
clearing is_active and clearing the pending bit.

>   	clear_evtchn(evtchn);


These two lines seem to be a common pattern in this patch, so I would 
suggest creating a new helper.
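By way of illustration only, such a helper could factor out the repeated sequence. The sketch below uses a hypothetical name (event_handler_exit) and C11 atomics as a userspace stand-in for the kernel's smp_store_release(); the real definitions in events_base.c would of course differ:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Toy stand-in for the kernel's struct irq_info, illustration only. */
struct irq_info {
	atomic_uchar is_active;	/* kernel: u8 written via smp_store_release() */
};

static _Atomic bool pending;	/* stand-in for the shared pending bit */

/* Stand-in for clear_evtchn(): clears the (toy) pending bit. */
static void clear_evtchn_stub(void)
{
	atomic_store(&pending, false);
}

/*
 * Hypothetical helper factoring out the two-line pattern from the
 * patch: release is_active first, then clear the pending bit.
 */
static void event_handler_exit(struct irq_info *info)
{
	atomic_store_explicit(&info->is_active, 0, memory_order_release);
	clear_evtchn_stub();
}
```

The helper would also keep the ordering of the two stores in one place, which matters if the answer to the ordering question above is that it is significant.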

>   
>   	if (pirq_needs_eoi(data->irq)) {
> @@ -1640,6 +1644,8 @@ void handle_irq_for_port(evtchn_port_t port, struct evtchn_loop_ctrl *ctrl)
>   	}
>   
>   	info = info_for_irq(irq);
> +	if (xchg_acquire(&info->is_active, 1))
> +		return;
>   
>   	if (ctrl->defer_eoi) {
>   		info->eoi_cpu = smp_processor_id();
> @@ -1823,11 +1829,13 @@ static void disable_dynirq(struct irq_data *data)
>   
>   static void ack_dynirq(struct irq_data *data)
>   {
> -	evtchn_port_t evtchn = evtchn_from_irq(data->irq);
> +	struct irq_info *info = info_for_irq(data->irq);
> +	evtchn_port_t evtchn = info ? info->evtchn : 0;
>   
>   	if (!VALID_EVTCHN(evtchn))
>   		return;
>   
> +	smp_store_release(&info->is_active, 0);
>   	clear_evtchn(evtchn);
>   }
>   
> @@ -1969,10 +1977,13 @@ static void restore_cpu_ipis(unsigned int cpu)
>   /* Clear an irq's pending state, in preparation for polling on it */
>   void xen_clear_irq_pending(int irq)
>   {
> -	evtchn_port_t evtchn = evtchn_from_irq(irq);
> +	struct irq_info *info = info_for_irq(irq);
> +	evtchn_port_t evtchn = info ? info->evtchn : 0;
>   
> -	if (VALID_EVTCHN(evtchn))
> +	if (VALID_EVTCHN(evtchn)) {
> +		smp_store_release(&info->is_active, 0);
>   		clear_evtchn(evtchn);
> +	}
>   }
>   EXPORT_SYMBOL(xen_clear_irq_pending);
>   void xen_set_irq_pending(int irq)
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sun Feb 14 21:39:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 14 Feb 2021 21:39:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85018.159334 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBP6p-0003LD-0Z; Sun, 14 Feb 2021 21:39:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85018.159334; Sun, 14 Feb 2021 21:39:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBP6o-0003L6-S1; Sun, 14 Feb 2021 21:39:14 +0000
Received: by outflank-mailman (input) for mailman id 85018;
 Sun, 14 Feb 2021 21:39:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBP6n-0003Ky-Va; Sun, 14 Feb 2021 21:39:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBP6n-0003Dv-Qz; Sun, 14 Feb 2021 21:39:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBP6n-0003Os-G3; Sun, 14 Feb 2021 21:39:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lBP6n-0006XX-FX; Sun, 14 Feb 2021 21:39:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=qbpwa4YB58HnO/pDZ5ZsUnwyqvblVbmF5khLhViUEMU=; b=0a25cKspYG07IT8eRWrSkm+J8K
	rAp7sfm6YSuBbzK5Z1fRlUfZEgrW0QXKqNIh/3cdjrOyoD6IvvJ/YzFepA8uCKQIUsZuE26sE9TBO
	U2h+mTkEFRj8CU9s0U7bTBYybn0777opSn67AoF2hN2AUHGz+ZYaGV5zA7RuMzLnNays=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159339-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 159339: regressions - FAIL
X-Osstest-Failures:
    linux-5.4:test-arm64-arm64-xl-credit1:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-xsm:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-credit2:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-seattle:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-arndale:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-thunderx:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-libvirt:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-cubietruck:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=5b9a4104c902d7dec14c9e3c5652a638194487c6
X-Osstest-Versions-That:
    linux=a829146c3fdcf6d0b76d9c54556a223820f1f73b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 14 Feb 2021 21:39:13 +0000

flight 159339 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159339/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-credit1  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-xsm      14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl          14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-credit2  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-seattle  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-credit1  14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-credit2  14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-arndale  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-thunderx 14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-multivcpu 14 guest-start             fail REGR. vs. 158387
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-cubietruck 14 guest-start            fail REGR. vs. 158387
 test-armhf-armhf-xl          14 guest-start              fail REGR. vs. 158387

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     14 guest-start              fail REGR. vs. 158387

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 158387
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 158387
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 158387
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 158387
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 158387
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 158387
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 158387
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 158387
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 158387
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 158387
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                5b9a4104c902d7dec14c9e3c5652a638194487c6
baseline version:
 linux                a829146c3fdcf6d0b76d9c54556a223820f1f73b

Last test of basis   158387  2021-01-12 19:40:06 Z   33 days
Failing since        158473  2021-01-17 13:42:20 Z   28 days   39 attempts
Testing same since   159339  2021-02-14 04:42:28 Z    0 days    1 attempts

------------------------------------------------------------
467 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 14254 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Feb 14 22:46:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 14 Feb 2021 22:46:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85027.159355 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBQ9Q-0000zu-Uk; Sun, 14 Feb 2021 22:46:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85027.159355; Sun, 14 Feb 2021 22:46:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBQ9Q-0000zn-R4; Sun, 14 Feb 2021 22:46:00 +0000
Received: by outflank-mailman (input) for mailman id 85027;
 Sun, 14 Feb 2021 22:45:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0kZh=HQ=daemonizer.de=maxi@srs-us1.protection.inumbo.net>)
 id 1lBQ9P-0000zi-PL
 for xen-devel@lists.xenproject.org; Sun, 14 Feb 2021 22:45:59 +0000
Received: from mx1.somlen.de (unknown [89.238.87.226])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 84c5f96c-93bc-48de-84c2-5900121afe07;
 Sun, 14 Feb 2021 22:45:56 +0000 (UTC)
Received: by mx1.somlen.de with ESMTPSA id E0D95C36AAE;
 Sun, 14 Feb 2021 23:45:54 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 84c5f96c-93bc-48de-84c2-5900121afe07
From: Maximilian Engelhardt <maxi@daemonizer.de>
To: Elliott Mitchell <ehem+xen@m5p.com>
Cc: xen-devel@lists.xenproject.org, pkg-xen-devel@lists.alioth.debian.org
Subject: Re: [BUG] Linux pvh vm not getting destroyed on shutdown
Date: Sun, 14 Feb 2021 23:45:47 +0100
Message-ID: <22128555.Kfurr2TIWe@localhost>
In-Reply-To: <YCgYxOxXwitkFB0T@mattapan.m5p.com>
References: <2195346.r5JaYcbZso@localhost> <YCgYxOxXwitkFB0T@mattapan.m5p.com>
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="nextPart1779699.oyBIJXxD3g"; micalg="pgp-sha512"; protocol="application/pgp-signature"

--nextPart1779699.oyBIJXxD3g
Content-Transfer-Encoding: 7Bit
Content-Type: text/plain; charset="us-ascii"

On Saturday, 13 February 2021 19:21:56 CET Elliott Mitchell wrote:
> On Sat, Feb 13, 2021 at 04:36:24PM +0100, Maximilian Engelhardt wrote:
> > after a recent upgrade of one of our test systems to Debian Bullseye we
> > noticed an issue where on shutdown of a pvh vm the vm was not destroyed by
> > xen automatically. It could still be destroyed by manually issuing a 'xl
> > destroy $vm' command.
> 
> Usually I would expect such an issue to show on the Debian bug database
> before xen-devel.  In particular as this is a behavior change with
> security updates, there is a good chance this isn't attributable to the
> Xen Project.  Additionally the Xen Project's support window is rather
> narrow.  I've been observing the same (or similar) issue for a bit too.

I posted this bug report to the xen-devel list because I was told to do so on 
the upstream #xen IRC channel.
Before writing my mail, I also checked the Debian kernel packaging for 
anything that might be related to our issue, but could not find anything.
Please note we didn't observe any behavior change in Debian buster on our 
systems, and we also didn't notice the shutdown issue there. For us the issue 
only started with kernel version 5.8.3+1~exp1.

> > Here are some things I noticed while trying to debug this issue:
> > 
> > * It happens on a Debian buster dom0 as well as on a bullseye dom0
> 
> I stick with stable on non-development machines, so I can't speak to
> this.
> 
> > * It seems to only affect pvh vms.
> 
> I've observed it with pv and hvm VMs as well.
> 
> > * shutdown from the pvgrub menu ("c" -> "halt") does work
> 
> Woah!  That is quite the observation.  Since I had a handy opportunity,
> I tried this and it reproduces for me.
> 
> > * the vm seems to shut down normally, the last lines in the console are:
> I agree with this.  Everything appears typical until the last moment.
> 
> > * issuing a reboot instead of a shutdown does work fine.
> 
> I disagree with this.  I'm seeing the issue occur with restart attempts
> too.
> 
> > * The issue started with Debian kernel 5.8.3+1~exp1 running in the vm,
> > Debian kernel 5.7.17-1 does not show the issue.
> 
> I think the first kernel update during which I saw the issue was around
> linux-image-4.19.0-12-amd64 or linux-image-4.19.0-13-amd64.  I think
> the last security update to the Xen packages was in a similar timeframe
> though.  Rate this portion as unreliable though.  I can definitely state
> this occurs with Debian's linux-image-4.19.0-13-amd64 and kernels built
> from corresponding source, this may have shown earlier.

We don't see any issues with the current Debian buster (Debian stable) kernel 
(4.19.0-14-amd64 #1 SMP Debian 4.19.171-2 (2021-01-30) x86_64 GNU/Linux) and 
also did not notice any issues with the older kernel packages in buster. Also, 
the security update of Xen in buster did not cause any behavior change for us. 
In our case everything in buster is working as we expect it to work (using the 
latest updates and security updates).

> > * setting vcpus equal to maxvcpus does *not* show the hang.
> 
> I haven't tried things related to this, so I can't comment on this
> part.
> 
> 
> Fresh observation.  During a similar timeframe I started noticing VM
> creation leaving an `xl create` process behind.  I had discovered this
> process could be freely killed without appearing to affect the VM, and had
> thus been doing so (memory in a lean Dom0 is precious).
> 
> While typing this I realized there was another scenario I needed to try.
> It turns out that if I boot PV GRUB and get to its command line (press
> 'c'), detach from the VM console, kill the `xl create` process, then
> return to the console and type "halt", the result is a hung VM.
> 
> Are you perhaps either killing the `xl create` process for affected VMs,
> or migrating the VM and thus splitting the `xl create` process from the
> affected VMs?
> 
> This seems more a Debian issue than a Xen Project issue right now.

We don't migrate the VMs, we don't kill any processes running on the dom0, and 
I don't see anything in our logs indicating that something gets killed on the 
dom0.  On our systems the running 'xl create' processes use only very little 
memory.

Have you checked whether you still observe your hangs if you don't kill the 
xl processes?
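
For reference, this is how we check for leftover toolstack processes and 
their memory footprint on the dom0 (a hedged sketch; adjust the process 
names if your packaging wraps xl differently):

```shell
# List any lingering xl toolstack processes on the dom0, together with
# their resident memory (RSS, in KiB) and elapsed running time.
ps -C xl -o pid,rss,etime,args

# Alternatively, match on the full command line to catch both the
# `xl create` and `xl restore` variants discussed in this thread:
pgrep -a -f 'xl (create|restore)'
```

On our systems the RSS column for these processes stays in the low 
single-digit MiB range, which is why we never bothered killing them.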
--nextPart1779699.oyBIJXxD3g
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part.
Content-Transfer-Encoding: 7Bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCgAdFiEEQ8gZ7vwsPje0uPkIgepkfSQr0hUFAmApqBsACgkQgepkfSQr
0hWQexAArihR0+TqNRoVfPjTFQCO4a4ByZNufXeappdTaKahvwK29qr+2aRIjn4L
8pcSPYuJV/CsMH+ZVR19pfR/DC1R5EUYOeBd9+K/Q0VlJ/XzMPJ7cTbccyzm7Nub
muCCWHRidlGVUNGXVvANdj7MlU8LAAmkjSFcCav2jiySnsdiwm/Q8PKvUWzqA3Ju
5syeYqkUh3PWqIwGGfSLuAJyAKSzYdFWSk/5Rep07J5CPK0rzMTxuYUZcqw+fD+8
KAcFV7uXSusDKSl+8qVCbraPvOPisZ0xvy1gDSMH5Jsz+aS2nnbdls3zE6U77IhC
/Z4WXGXe3QvVhyuLd8iT36EhD+PjQfV0bUvY9ECCqmuaIryQLrIcJiayYmX03K3X
gC8LAIVAHX/G65c3S+BzsNaHPQ8GZ2LiFG3jFYUlAjoMlNerB1F+aCwuI6ZLwPML
v7qdtJ9nKJsU4kVDKan5tUc5IZ7NM9VwQeSI83/EhkMIfUCfdHVKHyrNdfWWj4Gx
LS62SKjPC4gDIOfUr6booAvxB6vCQMYp90r8+GagKNNCxqvXezI3lWE2woBW49BC
BxtIJ26AYBrBWHUYGmpj9LUFPA0GwkArPt9opKQfvtL154+nD+HY9buMGPZaxaqA
17lL2i4HhzcUn5sD0urQg2pFI8asPSBv9l7HQsIuvexaPh4OzDI=
=zb5T
-----END PGP SIGNATURE-----

--nextPart1779699.oyBIJXxD3g--





From xen-devel-bounces@lists.xenproject.org Mon Feb 15 03:28:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 Feb 2021 03:28:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85038.159379 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBUYO-0003HI-E8; Mon, 15 Feb 2021 03:28:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85038.159379; Mon, 15 Feb 2021 03:28:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBUYO-0003HA-7k; Mon, 15 Feb 2021 03:28:04 +0000
Received: by outflank-mailman (input) for mailman id 85038;
 Mon, 15 Feb 2021 03:28:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VQg6=HR=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1lBUYM-0003H5-PJ
 for xen-devel@lists.xenproject.org; Mon, 15 Feb 2021 03:28:02 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f2c302b4-af6d-40b3-a4bd-ea374c696290;
 Mon, 15 Feb 2021 03:28:01 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.16.1/8.15.2) with ESMTPS id 11F3Rll6003347
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Sun, 14 Feb 2021 22:27:53 -0500 (EST) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.16.1/8.15.2/Submit) id 11F3RkkO003346;
 Sun, 14 Feb 2021 19:27:46 -0800 (PST) (envelope-from ehem)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f2c302b4-af6d-40b3-a4bd-ea374c696290
Date: Sun, 14 Feb 2021 19:27:46 -0800
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Maximilian Engelhardt <maxi@daemonizer.de>
Cc: xen-devel@lists.xenproject.org, pkg-xen-devel@lists.alioth.debian.org
Subject: Re: [BUG] Linux pvh vm not getting destroyed on shutdown
Message-ID: <YCnqMh+YB2ZDsMUl@mattapan.m5p.com>
References: <2195346.r5JaYcbZso@localhost>
 <YCgYxOxXwitkFB0T@mattapan.m5p.com>
 <22128555.Kfurr2TIWe@localhost>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <22128555.Kfurr2TIWe@localhost>
X-Spam-Status: No, score=0.4 required=10.0 tests=KHOP_HELO_FCRDNS autolearn=no
	autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

On Sun, Feb 14, 2021 at 11:45:47PM +0100, Maximilian Engelhardt wrote:
> On Saturday, 13 February 2021 19:21:56 CET Elliott Mitchell wrote:
> > On Sat, Feb 13, 2021 at 04:36:24PM +0100, Maximilian Engelhardt wrote:
> > > * The issue started with Debian kernel 5.8.3+1~exp1 running in the vm,
> > > Debian kernel 5.7.17-1 does not show the issue.
> > 
> > I think the first kernel update during which I saw the issue was around
> > linux-image-4.19.0-12-amd64 or linux-image-4.19.0-13-amd64.  I think
> > the last security update to the Xen packages was in a similar timeframe
> > though.  Rate this portion as unreliable though.  I can definitely state
> > this occurs with Debian's linux-image-4.19.0-13-amd64 and kernels built
> > from corresponding source, this may have shown earlier.
> 
> We don't see any issues with the current Debian buster (Debian stable) kernel 
> (4.19.0-14-amd64 #1 SMP Debian 4.19.171-2 (2021-01-30) x86_64 GNU/Linux) and 
> also did not notice any issues with the older kernel packages in buster. Also 
> the security update of xen in buster did not cause any behavior change for us. 
> In our case everything in buster is working as we expect it to work (using 
> latest updates and security updates).

I can't really say much here.  I keep up to date and I cannot point to a
key ingredient as the one which caused this breakage.


> > Fresh observation.  During a similar timeframe I started noticing VM
> > creation leaving an `xl create` process behind.  I had discovered this
> > process could be freely killed without appearing to affect the VM, and had
> > thus been doing so (memory in a lean Dom0 is precious).
> > 
> > While typing this I realized there was another scenario I needed to try.
> > It turns out that if I boot PV GRUB and get to its command line (press
> > 'c'), detach from the VM console, kill the `xl create` process, then
> > return to the console and type "halt", the result is a hung VM.
> > 
> > Are you perhaps either killing the `xl create` process for affected VMs,
> > or migrating the VM and thus splitting the `xl create` process from the
> > affected VMs?
> > 
> > This seems more a Debian issue than a Xen Project issue right now.
> 
> We don't migrate the VMs, we don't kill any processes running on the dom0, 
> and I don't see anything in our logs indicating that something gets killed 
> on the dom0.  On our systems the running 'xl create' processes use only 
> very little memory.
> 
> Have you checked whether you still observe your hangs if you don't kill the 
> xl processes?

That is exactly what I pointed to above.  On stable, killing the
mysterious left-behind `xl create` process causes the problem to
manifest, while leaving it undisturbed appears to make the problem not
manifest.

After a save/restore it is instead an `xl restore` process that is left
behind.  I /suspect/ this plays a similar role; I'm unsure how far this
goes, though.  Might you try telling a VM to reboot, then doing a save
followed by a restore of it?

I'm curious whether respawning the `xl restore` process could work around
what is occurring.
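
Concretely, the experiment I have in mind looks something like this (a 
sketch only; "guest1" and the save-file path are placeholders, and the 
timing of the reboot relative to the save may matter):

```shell
# From inside the guest (e.g. via `xl console guest1`), issue a reboot.
# Then, back on the dom0, save and restore the domain:
xl save guest1 /var/tmp/guest1.save   # suspends the domain and writes its state
xl restore /var/tmp/guest1.save       # resumes it, leaving an `xl restore` process behind

# Finally, shut the guest down and see whether the domain lingers
# instead of being destroyed:
xl shutdown guest1
xl list                               # a hung guest remains listed here
```

If the hang reproduces only after the save/restore cycle, that would 
support the theory that the surviving xl process is what normally 
handles the domain's death event.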


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Mon Feb 15 03:29:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 Feb 2021 03:29:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85040.159391 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBUZK-0003PO-Li; Mon, 15 Feb 2021 03:29:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85040.159391; Mon, 15 Feb 2021 03:29:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBUZK-0003PH-IK; Mon, 15 Feb 2021 03:29:02 +0000
Received: by outflank-mailman (input) for mailman id 85040;
 Mon, 15 Feb 2021 03:29:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBUZK-0003P9-0C; Mon, 15 Feb 2021 03:29:02 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBUZJ-0003vX-N6; Mon, 15 Feb 2021 03:29:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBUZJ-0003yP-EC; Mon, 15 Feb 2021 03:29:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lBUZJ-0003JS-Dj; Mon, 15 Feb 2021 03:29:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=feKAhElX6lW2sCP6CFuJJliczN7ZujRjseD8ro5kQxg=; b=x6md4Ub+0a4HF4Pz4+uNQdSnUh
	4lYOZLaQY/eSrzcupSsuye+gGf5J8WH3NzdZlx47z09yuQrvTWfyKoeYKvzuRm/pxd2LoBn/yScUZ
	E2HQjV3x34y2Gw8COd2GsQdEDQ/hv7ZQrg1Xv0FzveOeTiRe1+ra/76bXjIrJON2ck7I=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159346-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 159346: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=f4ceebdec531243dd72e38f85f085287e6e63258
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 15 Feb 2021 03:29:01 +0000

flight 159346 qemu-mainline real [real]
flight 159361 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/159346/
http://logs.test-lab.xenproject.org/osstest/logs/159361/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                f4ceebdec531243dd72e38f85f085287e6e63258
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  178 days
Failing since        152659  2020-08-21 14:07:39 Z  177 days  347 attempts
Testing same since   159346  2021-02-14 10:47:07 Z    0 days    1 attempts

------------------------------------------------------------
402 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 111031 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Feb 15 05:03:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 Feb 2021 05:03:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85050.159411 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBW2g-0004E8-TQ; Mon, 15 Feb 2021 05:03:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85050.159411; Mon, 15 Feb 2021 05:03:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBW2g-0004E1-QR; Mon, 15 Feb 2021 05:03:26 +0000
Received: by outflank-mailman (input) for mailman id 85050;
 Mon, 15 Feb 2021 05:03:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBW2f-0004Dt-Ow; Mon, 15 Feb 2021 05:03:25 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBW2f-00069W-Dq; Mon, 15 Feb 2021 05:03:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBW2f-000051-2q; Mon, 15 Feb 2021 05:03:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lBW2f-0001NE-2I; Mon, 15 Feb 2021 05:03:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=9E0OZXPMuvwT653wAcz80ZHHoyHz8DMGscIEFh+1H8Y=; b=b+fHDTt+subqjqBjTMmRNdFBff
	ldS89BOBnixQ6L8mJ4o+bfD0mKHJ9J+kXW3xERW3WVDd1xYFwtfyf74SFxjXuT17598vovGWu1/aA
	Ih04ADyM1bmEEDXlVZ4zCjWgJr4mEYnL0jX1bOAO7LHtOX8Wyl8iwXeWkMmrdRXi3N/w=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159349-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 159349: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=358feceebbf68f33c44c6650d14455389e65282d
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 15 Feb 2021 05:03:25 +0000

flight 159349 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159349/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 12 debian-install           fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-seattle  11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                358feceebbf68f33c44c6650d14455389e65282d
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  198 days
Failing since        152366  2020-08-01 20:49:34 Z  197 days  343 attempts
Testing same since   159349  2021-02-14 12:37:48 Z    0 days    1 attempts

------------------------------------------------------------
4577 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1032903 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Feb 15 06:55:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 Feb 2021 06:55:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85061.159445 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBXmy-0005Vy-TP; Mon, 15 Feb 2021 06:55:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85061.159445; Mon, 15 Feb 2021 06:55:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBXmy-0005Vr-QQ; Mon, 15 Feb 2021 06:55:20 +0000
Received: by outflank-mailman (input) for mailman id 85061;
 Mon, 15 Feb 2021 06:55:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lx6I=HR=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lBXmw-0005Vm-Jq
 for xen-devel@lists.xenproject.org; Mon, 15 Feb 2021 06:55:18 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a1f5ee29-3f76-4d3c-bac5-230785bc2e34;
 Mon, 15 Feb 2021 06:55:16 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 998F8ADDB;
 Mon, 15 Feb 2021 06:55:15 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a1f5ee29-3f76-4d3c-bac5-230785bc2e34
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613372115; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=Gb2YctauaZwx1wpb7VFmE/CdTXsDy0TKF2f3m6ky/to=;
	b=IQ0cjTrz20AhAVvJ2K+wanLj3F/5JLuQJvUBuMSW9EdUMgSwHK8yJ8yeibvVEmDOHsSlhp
	jc43rB2SAM0oPnVSOGncHvIp6rG1B38sZRwuKpspDqiOKfUe82ragNpfDL/VNLXybFn3Yh
	BjmwQBUiqRWEbOewn4ToXQxrn2lPL5s=
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20210211101616.13788-1-jgross@suse.com>
 <20210211101616.13788-4-jgross@suse.com>
 <eed12192-a740-e767-1762-828c75de66ce@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [PATCH v2 3/8] xen/events: avoid handling the same event on two
 cpus at the same time
Message-ID: <c4d930c1-331f-6a1e-7d26-cf066cecda33@suse.com>
Date: Mon, 15 Feb 2021 07:55:14 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <eed12192-a740-e767-1762-828c75de66ce@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="7zfISEbWiXTW2yLiOjoHHWptpgt3WYDwu"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--7zfISEbWiXTW2yLiOjoHHWptpgt3WYDwu
Content-Type: multipart/mixed; boundary="PN3jkXVSs90bn2otpr7vqVYheMbLKyUDx";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Message-ID: <c4d930c1-331f-6a1e-7d26-cf066cecda33@suse.com>
Subject: Re: [PATCH v2 3/8] xen/events: avoid handling the same event on two
 cpus at the same time
References: <20210211101616.13788-1-jgross@suse.com>
 <20210211101616.13788-4-jgross@suse.com>
 <eed12192-a740-e767-1762-828c75de66ce@xen.org>
In-Reply-To: <eed12192-a740-e767-1762-828c75de66ce@xen.org>

--PN3jkXVSs90bn2otpr7vqVYheMbLKyUDx
Content-Type: multipart/mixed;
 boundary="------------E8383365BF43AE4646CD0A35"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------E8383365BF43AE4646CD0A35
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 14.02.21 22:34, Julien Grall wrote:
> Hi Juergen,
> 
> On 11/02/2021 10:16, Juergen Gross wrote:
>> When changing the cpu affinity of an event it can happen today that
>> (with some unlucky timing) the same event will be handled on the old
>> and the new cpu at the same time.
>>
>> Avoid that by adding an "event active" flag to the per-event data and
>> call the handler only if this flag isn't set.
>>
>> Reported-by: Julien Grall <julien@xen.org>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>> V2:
>> - new patch
>> ---
>>   drivers/xen/events/events_base.c | 19 +++++++++++++++----
>>   1 file changed, 15 insertions(+), 4 deletions(-)
>>
>> diff --git a/drivers/xen/events/events_base.c
>> b/drivers/xen/events/events_base.c
>> index e157e7506830..f7e22330dcef 100644
>> --- a/drivers/xen/events/events_base.c
>> +++ b/drivers/xen/events/events_base.c
>> @@ -102,6 +102,7 @@ struct irq_info {
>>   #define EVT_MASK_REASON_EXPLICIT    0x01
>>   #define EVT_MASK_REASON_TEMPORARY   0x02
>>   #define EVT_MASK_REASON_EOI_PENDING 0x04
>> +    u8 is_active;       /* Is event just being handled? */
>>       unsigned irq;
>>       evtchn_port_t evtchn;   /* event channel */
>>       unsigned short cpu;     /* cpu bound */
>> @@ -622,6 +623,7 @@ static void xen_irq_lateeoi_locked(struct irq_info
>> *info, bool spurious)
>>       }
>>       info->eoi_time = 0;
>> +    smp_store_release(&info->is_active, 0);
>>       do_unmask(info, EVT_MASK_REASON_EOI_PENDING);
>>   }
>> @@ -809,13 +811,15 @@ static void pirq_query_unmask(int irq)
>>   static void eoi_pirq(struct irq_data *data)
>>   {
>> -    evtchn_port_t evtchn = evtchn_from_irq(data->irq);
>> +    struct irq_info *info = info_for_irq(data->irq);
>> +    evtchn_port_t evtchn = info ? info->evtchn : 0;
>>       struct physdev_eoi eoi = { .irq = pirq_from_irq(data->irq) };
>>       int rc = 0;
>>       if (!VALID_EVTCHN(evtchn))
>>           return;
>> +    smp_store_release(&info->is_active, 0);
> 
> Would you mind to explain why you are using the release semantics?

It is basically releasing a lock. So release semantics seem to be
appropriate.

> It is also not clear to me if there are any expected ordering between
> clearing is_active and clearing the pending bit.

No, I don't think there is a specific ordering required. is_active is
just guarding against two simultaneous IRQ handler calls for the same
event being active. Clearing the pending bit is not part of the guarded
section.

> 
>>       clear_evtchn(evtchn);
> 
> 
> The 2 lines here seems to be a common pattern in this patch. So I would
> suggest to create a new helper.

Okay.


Juergen

--PN3jkXVSs90bn2otpr7vqVYheMbLKyUDx--

--7zfISEbWiXTW2yLiOjoHHWptpgt3WYDwu

--7zfISEbWiXTW2yLiOjoHHWptpgt3WYDwu--


From xen-devel-bounces@lists.xenproject.org Mon Feb 15 07:15:25 2021
Subject: Re: Null scheduler and vwfi native problem
To: Dario Faggioli <dfaggioli@suse.com>, Julien Grall <julien@xen.org>,
 xen-devel@lists.xenproject.org, Stefano Stabellini <sstabellini@kernel.org>
From: =?UTF-8?Q?Anders_T=c3=b6rnqvist?= <anders.tornqvist@codiax.se>
Message-ID: <1d86afab-691e-e362-627f-8524cb0494c2@codiax.se>
Date: Mon, 15 Feb 2021 08:15:09 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <a9d80e262760f6692f7086c9b6a0622caf19e795.camel@suse.com>
Content-Type: multipart/mixed;
 boundary="------------8DCEB5763D2D09BD74FF88B6"
Content-Language: en-GB

This is a multi-part message in MIME format.
--------------8DCEB5763D2D09BD74FF88B6
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 1/30/21 6:59 PM, Dario Faggioli wrote:
> On Fri, 2021-01-29 at 09:08 +0100, Anders Törnqvist wrote:
>> On 1/26/21 11:31 PM, Dario Faggioli wrote:
>>> Thanks again for letting us see these logs.
>> Thanks for the attention to this :-)
>>
>> Any ideas for how to solve it?
>>
> So, you're up for testing patches, right?
>
> How about applying these two, and letting me know what happens? :-D

Great work guys!

Hi. I have now had time to test the patches.

They did not apply cleanly to the code version I am using, which is
commit b64b8df622963accf85b227e468fe12b2d56c128 from
https://source.codeaurora.org/external/imx/imx-xen.

I did some editing to get them into my tree; I think I also had to
remove some sched_tick_suspend/sched_tick_resume calls.
See the attached patches for what I actually applied.

Anyway, after applying the patches, including the original
rcu-quiesc-patch.patch, destroying the domU seems to work.
I have rebooted, destroyed/created, and used the Xen watchdog to reboot
the domU about 20 times in total, and so far it has destroyed cleanly
and been able to start a new instance of the domU every time.

So it looks promising, although my edited patches probably need some fixing.


>
> They are on top of current staging. I can try to rebase on something
> else, if it's easier for you to test.
>
> Besides being attached, they're also available here:
>
> https://gitlab.com/xen-project/people/dfaggioli/xen/-/tree/rcu-quiet-fix
>
> I could not test them properly on ARM, as I don't have an ARM system
> handy, so everything is possible really... just let me know.
>
> It should at least build fine, AFAICT from here:
>
> https://gitlab.com/xen-project/people/dfaggioli/xen/-/pipelines/249101213
>
> Julien, back in:
>
>   https://lore.kernel.org/xen-devel/315740e1-3591-0e11-923a-718e06c36445@arm.com/
>
>
> you said I should hook in enter_hypervisor_head(),
> leave_hypervisor_tail(). Those functions are gone now and looking at
> how the code changed, this is where I figured I should put the calls
> (see the second patch). But feel free to educate me otherwise.
>
> For x86 people that are listening... Do we have, in our beloved arch,
> equally handy places (i.e., right before leaving Xen for a guest and
> right after entering Xen from one), preferrably in a C file, and for
> all guests... like it seems to be the case on ARM?
>
> Regards



--------------8DCEB5763D2D09BD74FF88B6
Content-Type: text/x-patch; charset=UTF-8;
 name="1_1-xen-rcu-rename-idle-ignore.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="1_1-xen-rcu-rename-idle-ignore.patch"

diff --git a/xen/common/rcupdate.c b/xen/common/rcupdate.c
index d6dc4b48db..42ab9dbbd6 100644
--- a/xen/common/rcupdate.c
+++ b/xen/common/rcupdate.c
@@ -52,8 +52,8 @@ static struct rcu_ctrlblk {
     int  next_pending;  /* Is the next batch already waiting?         */
 
     spinlock_t  lock __cacheline_aligned;
-    cpumask_t   cpumask; /* CPUs that need to switch in order ... */
-    cpumask_t   idle_cpumask; /* ... unless they are already idle */
+    cpumask_t   cpumask; /* CPUs that need to switch in order ...   */
+    cpumask_t   ignore_cpumask; /* ... unless they are already idle */
     /* for current batch to proceed.        */
 } __cacheline_aligned rcu_ctrlblk = {
     .cur = -300,
@@ -86,8 +86,8 @@ struct rcu_data {
     long            last_rs_qlen;     /* qlen during the last resched */
 
     /* 3) idle CPUs handling */
-    struct timer idle_timer;
-    bool idle_timer_active;
+    struct timer cb_timer;
+    bool cb_timer_active;
 };
 
 /*
@@ -116,22 +116,22 @@ struct rcu_data {
  * CPU that is going idle. The user can change this, via a boot time
  * parameter, but only up to 100ms.
  */
-#define IDLE_TIMER_PERIOD_MAX     MILLISECS(100)
-#define IDLE_TIMER_PERIOD_DEFAULT MILLISECS(10)
-#define IDLE_TIMER_PERIOD_MIN     MICROSECS(100)
+#define CB_TIMER_PERIOD_MAX     MILLISECS(100)
+#define CB_TIMER_PERIOD_DEFAULT MILLISECS(10)
+#define CB_TIMER_PERIOD_MIN     MICROSECS(100)
 
-static s_time_t __read_mostly idle_timer_period;
+static s_time_t __read_mostly cb_timer_period;
 
 /*
- * Increment and decrement values for the idle timer handler. The algorithm
+ * Increment and decrement values for the callback timer handler. The algorithm
  * works as follows:
  * - if the timer actually fires, and it finds out that the grace period isn't
- *   over yet, we add IDLE_TIMER_PERIOD_INCR to the timer's period;
+ *   over yet, we add CB_TIMER_PERIOD_INCR to the timer's period;
  * - if the timer actually fires and it finds the grace period over, we
  *   subtract IDLE_TIMER_PERIOD_DECR from the timer's period.
  */
-#define IDLE_TIMER_PERIOD_INCR    MILLISECS(10)
-#define IDLE_TIMER_PERIOD_DECR    MICROSECS(100)
+#define CB_TIMER_PERIOD_INCR    MILLISECS(10)
+#define CB_TIMER_PERIOD_DECR    MICROSECS(100)
 
 static DEFINE_PER_CPU(struct rcu_data, rcu_data);
 
@@ -309,7 +309,7 @@ static void rcu_start_batch(struct rcu_ctrlblk *rcp)
         * This barrier is paired with the one in rcu_idle_enter().
         */
         smp_mb();
-        cpumask_andnot(&rcp->cpumask, &cpu_online_map, &rcp->idle_cpumask);
+        cpumask_andnot(&rcp->cpumask, &cpu_online_map, &rcp->ignore_cpumask);
     }
 }
 
@@ -455,7 +455,7 @@ int rcu_needs_cpu(int cpu)
 {
     struct rcu_data *rdp = &per_cpu(rcu_data, cpu);
 
-    return (rdp->curlist && !rdp->idle_timer_active) || rcu_pending(cpu);
+    return (rdp->curlist && !rdp->cb_timer_active) || rcu_pending(cpu);
 }
 
 /*
@@ -463,7 +463,7 @@ int rcu_needs_cpu(int cpu)
  * periodically poke rcu_pedning(), so that it will invoke the callback
  * not too late after the end of the grace period.
  */
-void rcu_idle_timer_start()
+static void cb_timer_start(void)
 {
     struct rcu_data *rdp = &this_cpu(rcu_data);
 
@@ -475,48 +475,48 @@ void rcu_idle_timer_start()
     if (likely(!rdp->curlist))
         return;
 
-    set_timer(&rdp->idle_timer, NOW() + idle_timer_period);
-    rdp->idle_timer_active = true;
+    set_timer(&rdp->cb_timer, NOW() + cb_timer_period);
+    rdp->cb_timer_active = true;
 }
 
-void rcu_idle_timer_stop()
+static void cb_timer_stop(void)
 {
     struct rcu_data *rdp = &this_cpu(rcu_data);
 
-    if (likely(!rdp->idle_timer_active))
+    if (likely(!rdp->cb_timer_active))
         return;
 
-    rdp->idle_timer_active = false;
+    rdp->cb_timer_active = false;
 
     /*
      * In general, as the CPU is becoming active again, we don't need the
-     * idle timer, and so we want to stop it.
+     * callback timer, and so we want to stop it.
      *
-     * However, in case we are here because idle_timer has (just) fired and
+     * However, in case we are here because cb_timer has (just) fired and
      * has woken up the CPU, we skip stop_timer() now. In fact, when a CPU
      * wakes up from idle, this code always runs before do_softirq() has the
      * chance to check and deal with TIMER_SOFTIRQ. And if we stop the timer
      * now, the TIMER_SOFTIRQ handler will see it as inactive, and will not
-     * call rcu_idle_timer_handler().
+     * call cb_timer_handler().
      *
      * Therefore, if we see that the timer is expired already, we leave it
      * alone. The TIMER_SOFTIRQ handler will then run the timer routine, and
      * deactivate it.
      */
-    if ( !timer_is_expired(&rdp->idle_timer) )
-        stop_timer(&rdp->idle_timer);
+    if ( !timer_is_expired(&rdp->cb_timer) )
+        stop_timer(&rdp->cb_timer);
 }
 
-static void rcu_idle_timer_handler(void* data)
+static void cb_timer_handler(void* data)
 {
-    perfc_incr(rcu_idle_timer);
+    perfc_incr(rcu_callback_timer);
 
     if ( !cpumask_empty(&rcu_ctrlblk.cpumask) )
-        idle_timer_period = min(idle_timer_period + IDLE_TIMER_PERIOD_INCR,
-                                IDLE_TIMER_PERIOD_MAX);
+        cb_timer_period = min(cb_timer_period + CB_TIMER_PERIOD_INCR,
+                                CB_TIMER_PERIOD_MAX);
     else
-        idle_timer_period = max(idle_timer_period - IDLE_TIMER_PERIOD_DECR,
-                                IDLE_TIMER_PERIOD_MIN);
+        cb_timer_period = max(cb_timer_period - CB_TIMER_PERIOD_DECR,
+                                CB_TIMER_PERIOD_MIN);
 }
 
 void rcu_check_callbacks(int cpu)
@@ -537,7 +537,7 @@ static void rcu_move_batch(struct rcu_data *this_rdp, struct rcu_head *list,
 static void rcu_offline_cpu(struct rcu_data *this_rdp,
                             struct rcu_ctrlblk *rcp, struct rcu_data *rdp)
 {
-    kill_timer(&rdp->idle_timer);
+    kill_timer(&rdp->cb_timer);
 
     /* If the cpu going offline owns the grace period we can block
      * indefinitely waiting for it, so flush it here.
@@ -567,7 +567,7 @@ static void rcu_init_percpu_data(int cpu, struct rcu_ctrlblk *rcp,
     rdp->qs_pending = 0;
     rdp->cpu = cpu;
     rdp->blimit = blimit;
-    init_timer(&rdp->idle_timer, rcu_idle_timer_handler, rdp, cpu);
+    init_timer(&rdp->cb_timer, cb_timer_handler, rdp, cpu);
 }
 
 static int cpu_callback(
@@ -596,25 +596,39 @@ static struct notifier_block cpu_nfb = {
     .notifier_call = cpu_callback
 };
 
+/*
+ * We're changing the name of the parameter, to better reflect the fact that
+ * the timer is used for callbacks in general, when the CPU is either idle
+ * or executing guest code. We still accept the old parameter but, if both
+ * are specified, the new one ("rcu-callback-timer-period-ms") has priority.
+ */
+#define CB_TIMER_PERIOD_DEFAULT_MS ( CB_TIMER_PERIOD_DEFAULT / MILLISECS(1) )
+static unsigned int __initdata cb_timer_period_ms = CB_TIMER_PERIOD_DEFAULT_MS;
+integer_param("rcu-callback-timer-period-ms", cb_timer_period_ms);
+
+static unsigned int __initdata idle_timer_period_ms = CB_TIMER_PERIOD_DEFAULT_MS;
+integer_param("rcu-idle-timer-period-ms", idle_timer_period_ms);
+
 void __init rcu_init(void)
 {
     void *cpu = (void *)(long)smp_processor_id();
-    static unsigned int __initdata idle_timer_period_ms =
-                                    IDLE_TIMER_PERIOD_DEFAULT / MILLISECS(1);
-    integer_param("rcu-idle-timer-period-ms", idle_timer_period_ms);
+
+    if (idle_timer_period_ms != CB_TIMER_PERIOD_DEFAULT_MS &&
+        cb_timer_period_ms == CB_TIMER_PERIOD_DEFAULT_MS)
+        cb_timer_period_ms = idle_timer_period_ms;
 
     /* We don't allow 0, or anything higher than IDLE_TIMER_PERIOD_MAX */
-    if ( idle_timer_period_ms == 0 ||
-         idle_timer_period_ms > IDLE_TIMER_PERIOD_MAX / MILLISECS(1) )
+    if ( cb_timer_period_ms == 0 ||
+         cb_timer_period_ms > CB_TIMER_PERIOD_MAX / MILLISECS(1) )
     {
-        idle_timer_period_ms = IDLE_TIMER_PERIOD_DEFAULT / MILLISECS(1);
-        printk("WARNING: rcu-idle-timer-period-ms outside of "
+        cb_timer_period_ms = CB_TIMER_PERIOD_DEFAULT / MILLISECS(1);
+        printk("WARNING: rcu-callback-timer-period-ms outside of "
                "(0,%"PRI_stime"]. Resetting it to %u.\n",
-               IDLE_TIMER_PERIOD_MAX / MILLISECS(1), idle_timer_period_ms);
+               CB_TIMER_PERIOD_MAX / MILLISECS(1), cb_timer_period_ms);
     }
-    idle_timer_period = MILLISECS(idle_timer_period_ms);
+    cb_timer_period = MILLISECS(cb_timer_period_ms);
 
-    cpumask_clear(&rcu_ctrlblk.idle_cpumask);
+    cpumask_clear(&rcu_ctrlblk.ignore_cpumask);
     cpu_callback(&cpu_nfb, CPU_UP_PREPARE, cpu);
     register_cpu_notifier(&cpu_nfb);
     open_softirq(RCU_SOFTIRQ, rcu_process_callbacks);
@@ -626,8 +640,8 @@ void __init rcu_init(void)
  */
 void rcu_idle_enter(unsigned int cpu)
 {
-    ASSERT(!cpumask_test_cpu(cpu, &rcu_ctrlblk.idle_cpumask));
-    cpumask_set_cpu(cpu, &rcu_ctrlblk.idle_cpumask);
+    ASSERT(!cpumask_test_cpu(cpu, &rcu_ctrlblk.ignore_cpumask));
+    cpumask_set_cpu(cpu, &rcu_ctrlblk.ignore_cpumask);
     /*
      * If some other CPU is starting a new grace period, we'll notice that
      * by seeing a new value in rcp->cur (different than our quiescbatch).
@@ -637,10 +651,12 @@ void rcu_idle_enter(unsigned int cpu)
      * Se the comment before cpumask_andnot() in  rcu_start_batch().
      */
     smp_mb();
+    cb_timer_start();
 }
 
 void rcu_idle_exit(unsigned int cpu)
 {
-    ASSERT(cpumask_test_cpu(cpu, &rcu_ctrlblk.idle_cpumask));
-    cpumask_clear_cpu(cpu, &rcu_ctrlblk.idle_cpumask);
+    cb_timer_stop();
+    ASSERT(cpumask_test_cpu(cpu, &rcu_ctrlblk.ignore_cpumask));
+    cpumask_clear_cpu(cpu, &rcu_ctrlblk.ignore_cpumask);
 }
diff --git a/xen/include/xen/perfc_defn.h b/xen/include/xen/perfc_defn.h
index 08b182ccd9..d142534383 100644
--- a/xen/include/xen/perfc_defn.h
+++ b/xen/include/xen/perfc_defn.h
@@ -12,7 +12,7 @@ PERFCOUNTER(calls_from_multicall,       "calls from multicall")
 PERFCOUNTER(irqs,                   "#interrupts")
 PERFCOUNTER(ipis,                   "#IPIs")
 
-PERFCOUNTER(rcu_idle_timer,         "RCU: idle_timer")
+PERFCOUNTER(rcu_callback_timer,     "RCU: callback_timer")
 
 /* Generic scheduler counters (applicable to all schedulers) */
 PERFCOUNTER(sched_irq,              "sched: timer")

--------------8DCEB5763D2D09BD74FF88B6
Content-Type: text/x-patch; charset=UTF-8;
 name="2_1-rcu-idle-guest.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="2_1-rcu-idle-guest.patch"

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index a9ca09acb2..e4439b2397 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -46,6 +46,8 @@ static void do_idle(void)
 {
     unsigned int cpu = smp_processor_id();
 
+    rcu_quiet_enter();
+
     sched_tick_suspend();
     /* sched_tick_suspend() can raise TIMER_SOFTIRQ. Process it now. */
     process_pending_softirqs();
@@ -59,6 +61,8 @@ static void do_idle(void)
     local_irq_enable();
 
     sched_tick_resume();
+
+    rcu_quiet_exit();
 }
 
 void idle_loop(void)
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 1d2b762e22..5158a03746 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2007,6 +2007,8 @@ void enter_hypervisor_from_guest(void)
 {
     struct vcpu *v = current;
 
+    rcu_quiet_exit();
+
     /*
      * If we pended a virtual abort, preserve it until it gets cleared.
      * See ARM ARM DDI 0487A.j D1.14.3 (Virtual Interrupts) for details,
@@ -2264,6 +2266,8 @@ static void check_for_vcpu_work(void)
  */
 void leave_hypervisor_to_guest(void)
 {
+    rcu_quiet_enter();
+
     local_irq_disable();
 
     check_for_vcpu_work();
diff --git a/xen/arch/x86/acpi/cpu_idle.c b/xen/arch/x86/acpi/cpu_idle.c
index 836f524ef4..3d8dcec143 100644
--- a/xen/arch/x86/acpi/cpu_idle.c
+++ b/xen/arch/x86/acpi/cpu_idle.c
@@ -647,7 +647,8 @@ static void acpi_processor_idle(void)
     cpufreq_dbs_timer_suspend();
 
     sched_tick_suspend();
-    /* sched_tick_suspend() can raise TIMER_SOFTIRQ. Process it now. */
+    rcu_quiet_enter();
+    /* rcu_quiet_enter() can raise TIMER_SOFTIRQ. Process it now. */
     process_pending_softirqs();
 
     /*
@@ -660,6 +661,7 @@ static void acpi_processor_idle(void)
     {
         local_irq_enable();
         sched_tick_resume();
+        rcu_quiet_exit();
         cpufreq_dbs_timer_resume();
         return;
     }
@@ -785,6 +787,7 @@ static void acpi_processor_idle(void)
         power->last_state = &power->states[0];
         local_irq_enable();
         sched_tick_resume();
+        rcu_quiet_exit();
         cpufreq_dbs_timer_resume();
         return;
     }
@@ -793,6 +796,7 @@ static void acpi_processor_idle(void)
     power->last_state = &power->states[0];
 
     sched_tick_resume();
+    rcu_quiet_exit();
     cpufreq_dbs_timer_resume();
 
     if ( cpuidle_current_governor->reflect )
diff --git a/xen/arch/x86/cpu/mwait-idle.c b/xen/arch/x86/cpu/mwait-idle.c
index 52413e6da1..2657ec76f4 100644
--- a/xen/arch/x86/cpu/mwait-idle.c
+++ b/xen/arch/x86/cpu/mwait-idle.c
@@ -756,7 +756,8 @@ static void mwait_idle(void)
 	cpufreq_dbs_timer_suspend();
 
 	sched_tick_suspend();
-	/* sched_tick_suspend() can raise TIMER_SOFTIRQ. Process it now. */
+	rcu_quiet_enter();
+	/* rcu_quiet_enter() can raise TIMER_SOFTIRQ. Process it now. */
 	process_pending_softirqs();
 
 	/* Interrupts must be disabled for C2 and higher transitions. */
@@ -765,6 +766,7 @@ static void mwait_idle(void)
 	if (!cpu_is_haltable(cpu)) {
 		local_irq_enable();
 		sched_tick_resume();
+		rcu_quiet_exit();
 		cpufreq_dbs_timer_resume();
 		return;
 	}
@@ -807,6 +809,7 @@ static void mwait_idle(void)
 		lapic_timer_on();
 
 	sched_tick_resume();
+	rcu_quiet_exit();
 	cpufreq_dbs_timer_resume();
 
 	if ( cpuidle_current_governor->reflect )
diff --git a/xen/common/rcupdate.c b/xen/common/rcupdate.c
index 42ab9dbbd6..a9c24b5889 100644
--- a/xen/common/rcupdate.c
+++ b/xen/common/rcupdate.c
@@ -52,8 +52,8 @@ static struct rcu_ctrlblk {
     int  next_pending;  /* Is the next batch already waiting?         */
 
     spinlock_t  lock __cacheline_aligned;
-    cpumask_t   cpumask; /* CPUs that need to switch in order ...   */
-    cpumask_t   ignore_cpumask; /* ... unless they are already idle */
+    cpumask_t   cpumask; /* CPUs that need to switch in order...      */
+    cpumask_t   ignore_cpumask; /* ...unless already idle or in guest */
     /* for current batch to proceed.        */
 } __cacheline_aligned rcu_ctrlblk = {
     .cur = -300,
@@ -85,7 +85,7 @@ struct rcu_data {
     struct rcu_head barrier;
     long            last_rs_qlen;     /* qlen during the last resched */
 
-    /* 3) idle CPUs handling */
+    /* 3) idle (or in guest mode) CPUs handling */
     struct timer cb_timer;
     bool cb_timer_active;
 };
@@ -107,6 +107,12 @@ struct rcu_data {
  * 3) it is stopped immediately, if the CPU wakes up from idle and
  *    resumes 'normal' execution.
  *
+ * Note also that the same happens if a CPU starts executing a guest that
+ * (almost) never comes back into the hypervisor. This may be the case if
+ * the guest uses "idle=poll" / "vwfi=native". Therefore, we need to handle
+ * guest entry events in the same way as the CPU going idle, i.e., consider
+ * it quiesced and arm the timer.
+ *
  * About how far in the future the timer should be programmed each time,
  * it's hard to tell (guess!!). Since this mimics Linux's periodic timer
  * tick, take values used there as an indication. In Linux 2.6.21, tick
@@ -304,9 +310,10 @@ static void rcu_start_batch(struct rcu_ctrlblk *rcp)
         * Make sure the increment of rcp->cur is visible so, even if a
         * CPU that is about to go idle, is captured inside rcp->cpumask,
         * rcu_pending() will return false, which then means cpu_quiet()
-        * will be invoked, before the CPU would actually enter idle.
+        * will be invoked, before the CPU would actually go idle (or
+	* enter a guest).
         *
-        * This barrier is paired with the one in rcu_idle_enter().
+        * This barrier is paired with the one in rcu_quiet_enter().
         */
         smp_mb();
         cpumask_andnot(&rcp->cpumask, &cpu_online_map, &rcp->ignore_cpumask);
@@ -463,14 +470,15 @@ int rcu_needs_cpu(int cpu)
  * periodically poke rcu_pedning(), so that it will invoke the callback
  * not too late after the end of the grace period.
  */
-static void cb_timer_start(void)
+static void cb_timer_start(unsigned int cpu)
 {
-    struct rcu_data *rdp = &this_cpu(rcu_data);
+    struct rcu_data *rdp = &per_cpu(rcu_data, cpu);
 
     /*
      * Note that we don't check rcu_pending() here. In fact, we don't want
      * the timer armed on CPUs that are in the process of quiescing while
-     * going idle, unless they really are the ones with a queued callback.
+     * going idle or entering guest mode, unless they really have queued
+     * callbacks.
      */
     if (likely(!rdp->curlist))
         return;
@@ -479,9 +487,9 @@ static void cb_timer_start(void)
     rdp->cb_timer_active = true;
 }
 
-static void cb_timer_stop(void)
+static void cb_timer_stop(unsigned int cpu)
 {
-    struct rcu_data *rdp = &this_cpu(rcu_data);
+    struct rcu_data *rdp = &per_cpu(rcu_data, cpu);
 
     if (likely(!rdp->cb_timer_active))
         return;
@@ -635,11 +643,14 @@ void __init rcu_init(void)
 }
 
 /*
- * The CPU is becoming idle, so no more read side critical
- * sections, and one more step toward grace period.
+ * The CPU is becoming about to either idle or enter the guest. In any of
+ * these cases, it can't have any outstanding read side critical sections
+ * so this is one step toward the end of the grace period.
  */
-void rcu_idle_enter(unsigned int cpu)
+void rcu_quiet_enter()
 {
+    unsigned int cpu = smp_processor_id();
+
     ASSERT(!cpumask_test_cpu(cpu, &rcu_ctrlblk.ignore_cpumask));
     cpumask_set_cpu(cpu, &rcu_ctrlblk.ignore_cpumask);
     /*
@@ -652,11 +663,15 @@ void rcu_idle_enter(unsigned int cpu)
      */
     smp_mb();
     cb_timer_start();
+    cb_timer_start(cpu);
 }
 
-void rcu_idle_exit(unsigned int cpu)
+
+void rcu_quiet_exit()
 {
-    cb_timer_stop();
+    unsigned int cpu = smp_processor_id();
+
+    cb_timer_stop(cpu);
     ASSERT(cpumask_test_cpu(cpu, &rcu_ctrlblk.ignore_cpumask));
     cpumask_clear_cpu(cpu, &rcu_ctrlblk.ignore_cpumask);
 }
diff --git a/xen/include/xen/rcupdate.h b/xen/include/xen/rcupdate.h
index 13850865ed..63db0f9887 100644
--- a/xen/include/xen/rcupdate.h
+++ b/xen/include/xen/rcupdate.h
@@ -145,8 +145,8 @@ void call_rcu(struct rcu_head *head,
 
 int rcu_barrier(void);
 
-void rcu_idle_enter(unsigned int cpu);
-void rcu_idle_exit(unsigned int cpu);
+void rcu_quiet_enter(void);
+void rcu_quiet_exit(void);
 
 void rcu_idle_timer_start(void);
 void rcu_idle_timer_stop(void);

--------------8DCEB5763D2D09BD74FF88B6
Content-Type: text/x-patch; charset=UTF-8;
 name="3_1-rcu-adaptations.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="3_1-rcu-adaptations.patch"

diff --git a/xen/common/domain.c b/xen/common/domain.c
index 0902a15e8d..a8e203a1c1 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -935,7 +935,7 @@ static void complete_domain_destroy(struct rcu_head *head)
     struct domain *d = container_of(head, struct domain, rcu);
     struct vcpu *v;
     int i;
-
+    printk("complete_domain_destroy BEGIN\n");
     /*
      * Flush all state for the vCPU previously having run on the current CPU.
      * This is in particular relevant for x86 HVM ones on VMX, so that this
@@ -991,6 +991,7 @@ static void complete_domain_destroy(struct rcu_head *head)
     _domain_destroy(d);
 
     send_global_virq(VIRQ_DOM_EXC);
+    printk("complete_domain_destroy END\n");
 }
 
 /* Release resources belonging to task @p. */
diff --git a/xen/common/rcupdate.c b/xen/common/rcupdate.c
index a9c24b5889..1bdf4ecc53 100644
--- a/xen/common/rcupdate.c
+++ b/xen/common/rcupdate.c
@@ -662,7 +662,6 @@ void rcu_quiet_enter()
      * Se the comment before cpumask_andnot() in  rcu_start_batch().
      */
     smp_mb();
-    cb_timer_start();
     cb_timer_start(cpu);
 }
 
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 6d24a3a135..4a63c11ed1 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -3111,14 +3111,12 @@ void schedule_dump(struct cpupool *c)
 
 void sched_tick_suspend(void)
 {
-    rcu_idle_enter(smp_processor_id());
-    rcu_idle_timer_start();
+    rcu_quiet_enter();
 }
 
 void sched_tick_resume(void)
 {
-    rcu_idle_timer_stop();
-    rcu_idle_exit(smp_processor_id());
+    rcu_quiet_exit();
 }
 
 void wait(void)

--------------8DCEB5763D2D09BD74FF88B6
Content-Type: text/x-patch; charset=UTF-8;
 name="rcu-quiesc-patch.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="rcu-quiesc-patch.patch"

commit 0d2beb3d4125d65c415860d66974db9a5532ac84
Author: Dario Faggioli <dfaggioli@suse.com>
Date:   Wed Sep 26 11:47:06 2018 +0200

    xen: RCU: bootparam to force quiescence at every call.
    
    Signed-off-by: Dario Faggioli <dfaggioli@suse.com>

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 0f4b1f2a5d..536eb17017 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -110,7 +110,10 @@ static enum {
 static int __init parse_vwfi(const char *s)
 {
 	if ( !strcmp(s, "native") )
+	{
+		rcu_always_quiesc = true;
 		vwfi = NATIVE;
+	}
 	else
 		vwfi = TRAP;
 
diff --git a/xen/common/rcupdate.c b/xen/common/rcupdate.c
index 3517790913..219dd2884f 100644
--- a/xen/common/rcupdate.c
+++ b/xen/common/rcupdate.c
@@ -140,6 +140,9 @@ static int qhimark = 10000;
 static int qlowmark = 100;
 static int rsinterval = 1000;
 
+bool rcu_always_quiesc = false;
+boolean_param("rcu_force_quiesc", rcu_always_quiesc);
+
 struct rcu_barrier_data {
     struct rcu_head head;
     atomic_t *cpu_count;
@@ -562,6 +565,13 @@ static void rcu_init_percpu_data(int cpu, struct rcu_ctrlblk *rcp,
     rdp->quiescbatch = rcp->completed;
     rdp->qs_pending = 0;
     rdp->cpu = cpu;
+    if ( rcu_always_quiesc )
+    {
+        blimit = INT_MAX;
+        qhimark = 0;
+        qlowmark = 0;
+        //rsinterval = 0;
+    }
     rdp->blimit = blimit;
     init_timer(&rdp->idle_timer, rcu_idle_timer_handler, rdp, cpu);
 }
diff --git a/xen/include/xen/rcupdate.h b/xen/include/xen/rcupdate.h
index 3402eb5caf..274a01acf6 100644
--- a/xen/include/xen/rcupdate.h
+++ b/xen/include/xen/rcupdate.h
@@ -56,6 +56,8 @@ struct rcu_head {
 } while (0)
 
 
+extern bool rcu_always_quiesc;
+
 int rcu_pending(int cpu);
 int rcu_needs_cpu(int cpu);
 

--------------8DCEB5763D2D09BD74FF88B6--


From xen-devel-bounces@lists.xenproject.org Mon Feb 15 07:37:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 Feb 2021 07:37:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85068.159469 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBYS0-0000nj-LW; Mon, 15 Feb 2021 07:37:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85068.159469; Mon, 15 Feb 2021 07:37:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBYS0-0000nc-HF; Mon, 15 Feb 2021 07:37:44 +0000
Received: by outflank-mailman (input) for mailman id 85068;
 Mon, 15 Feb 2021 07:37:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Dghk=HR=codiax.se=anders.tornqvist@srs-us1.protection.inumbo.net>)
 id 1lBYRz-0000nH-Di
 for xen-devel@lists.xenproject.org; Mon, 15 Feb 2021 07:37:43 +0000
Received: from mailrelay1-3.pub.mailoutpod1-cph3.one.com (unknown
 [46.30.212.10]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 20ed8bb5-fa94-4ab7-9243-5538a7f83d63;
 Mon, 15 Feb 2021 07:37:41 +0000 (UTC)
Received: from [192.168.101.129] (h87-96-135-119.cust.a3fiber.se
 [87.96.135.119])
 by mailrelay1.pub.mailoutpod1-cph3.one.com (Halon) with ESMTPSA
 id ae742993-6f60-11eb-9248-d0431ea8a283;
 Mon, 15 Feb 2021 07:37:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 20ed8bb5-fa94-4ab7-9243-5538a7f83d63
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=codiax.se; s=20191106;
	h=content-transfer-encoding:content-type:mime-version:date:message-id:to:
	 subject:from:from;
	bh=nRwWovs734qaKdA60euEkY5q39AlOcB9TXQaCO+vki4=;
	b=VFDfkwvhUV5LPl2bPLPCwWV4mwlaqetBAeAGGOc5AmxFDg9euYUN176JdReBU4GbSXk7E5hRiKQ9h
	 AEwjkmzzj3BFhJNzWekC8/6vV98DQG91jPbILul+Y/1CvgzsqYzTDJvp3ULq1KqD1j7y0phB/eSKRQ
	 k415zNSCg7rXKYHzgh7yqmByrtkpz75ZhK6zF9g8zvlDELel7lvzDgZC4WHpdUM5s0wyGYFS1PKkas
	 DOvOZi6utKH1yR+3jQlG5h11PEXoPYrcLs3yBRjjTHdXS3jyqn59Jh0ak8oSGK5j+ncfBUW/SD/hHy
	 YY1fh7rRDDLeTJTezATkZX3CwJMFuGQ==
X-HalOne-Cookie: 7dc683fc829a40dbfc6b6ccd077c5325c7ceba05
X-HalOne-ID: ae742993-6f60-11eb-9248-d0431ea8a283
From: =?UTF-8?Q?Anders_T=c3=b6rnqvist?= <anders.tornqvist@codiax.se>
Subject: Boot time and 3 sec in warning_print
To: xen-devel@lists.xenproject.org
Message-ID: <fcc14668-b07d-aec8-1587-bc2d7add84da@codiax.se>
Date: Mon, 15 Feb 2021 08:37:39 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-GB

Hi,

I would like to shorten the boot time in our system if possible.

In xen/common/warning.c there is warning_print() which contains a 3-second
loop that calls process_pending_softirqs().

What would the potential problems be if that loop is radically shortened?

/Anders



From xen-devel-bounces@lists.xenproject.org Mon Feb 15 08:13:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 Feb 2021 08:13:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85076.159487 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBZ0M-0004oq-Ij; Mon, 15 Feb 2021 08:13:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85076.159487; Mon, 15 Feb 2021 08:13:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBZ0M-0004oj-Fe; Mon, 15 Feb 2021 08:13:14 +0000
Received: by outflank-mailman (input) for mailman id 85076;
 Mon, 15 Feb 2021 08:13:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=X1yw=HR=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lBZ0L-0004oe-85
 for xen-devel@lists.xenproject.org; Mon, 15 Feb 2021 08:13:13 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5cc8ae3b-e56a-4109-aea9-4ed6e3968418;
 Mon, 15 Feb 2021 08:13:12 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4A9CFB125;
 Mon, 15 Feb 2021 08:13:11 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5cc8ae3b-e56a-4109-aea9-4ed6e3968418
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613376791; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=fwQC7DNBt5mvznk9zQZ/2vgKpu2DIGAAc5dNymRR2wQ=;
	b=Er/2MQpqVt3Sb+EhLvE8IW1bLvydTGL9IBa030cUDoEQWBW3CPcVmZxJQLAIkk0LDqJdd/
	1GY2wMZBBJIPpl2leYDMQeylOkPr45V8MK/7g2aSCTdQ46cT9gCt3/xH9KDlFVaYqh+hw0
	OaEwyMUEzSnG+bqhRMFfiiTsTiHSN6A=
Subject: Re: [PATCH] xen/iommu: arm: Don't insert an IOMMU mapping when the
 grantee and granter...
To: Julien Grall <julien@xen.org>
Cc: Julien Grall <jgrall@amazon.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Rahul Singh <Rahul.Singh@arm.com>, xen-devel@lists.xenproject.org
References: <20210214143504.23099-1-julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <3a827099-1d8f-826d-42ef-743d86d9ccce@suse.com>
Date: Mon, 15 Feb 2021 09:13:14 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210214143504.23099-1-julien@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 14.02.2021 15:35, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> ... are the same.
> 
> When the IOMMU is enabled and the domain is direct mapped (e.g. Dom0),
> Xen will insert a 1:1 mapping for each grant mapping in the P2M to
> allow DMA.
> 
> This works quite well when the grantee and granter are not the same
> because the GFN in the P2M should not be mapped. However, if they are
> the same, we will overwrite the mapping. Worse, it will be completely
> removed when the grant is unmapped.
> 
> As the domain is direct mapped, a 1:1 mapping should always be present in
> the P2M. This is not 100% guaranteed if the domain decides to mess with
> the P2M. However, such a domain would already end up in trouble as the
> page would soon be freed (when the last reference is dropped).
> 
> Add an additional check in arm_iommu_{,un}map_page() to check whether
> the page belongs to the domain. If it belongs to it, then ignore the
> request.

Doesn't this want / need solving in grant_table.c itself, as it also
affects PV on x86? Or alternatively in gnttab_need_iommu_mapping(),
handing the macro the MFN alongside the domain? No matter which one
was chosen, it could at the same time avoid the expensive mapkind()
invocation in this case.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Feb 15 08:13:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 Feb 2021 08:13:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85077.159498 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBZ0r-0004ta-SD; Mon, 15 Feb 2021 08:13:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85077.159498; Mon, 15 Feb 2021 08:13:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBZ0r-0004tT-PK; Mon, 15 Feb 2021 08:13:45 +0000
Received: by outflank-mailman (input) for mailman id 85077;
 Mon, 15 Feb 2021 08:13:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lx6I=HR=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lBZ0r-0004tN-31
 for xen-devel@lists.xenproject.org; Mon, 15 Feb 2021 08:13:45 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1cde3bf7-9789-476f-9544-f74e0a5d1f27;
 Mon, 15 Feb 2021 08:13:44 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 92B57AD78;
 Mon, 15 Feb 2021 08:13:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1cde3bf7-9789-476f-9544-f74e0a5d1f27
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613376823; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=rHrhNXRAm/hsUN1PH3Xvs7N0BH+MVYsqdTdWKIURCHk=;
	b=Ji8z/lSV2LSSU4sgVOBKFrrd41OtyIi1ypA9yfuD5WMtuqJj2B+y0EW2mQqJuguezkApMN
	hW9olWIpRg3J88vJWwV7i9mZqUDqIOb/c9eBxjW54Fje/L+lbMIXFg6M+9+Wx7CJFk8rn0
	RJLnkLNigJ51nLocO/joijSdcjJ4eW8=
Subject: Re: Boot time and 3 sec in warning_print
To: =?UTF-8?Q?Anders_T=c3=b6rnqvist?= <anders.tornqvist@codiax.se>,
 xen-devel@lists.xenproject.org
References: <fcc14668-b07d-aec8-1587-bc2d7add84da@codiax.se>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <4f256544-7e1a-1a65-4942-04b3bc00e373@suse.com>
Date: Mon, 15 Feb 2021 09:13:42 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <fcc14668-b07d-aec8-1587-bc2d7add84da@codiax.se>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="TvVk8jY0ORGMRIquAvDf9eZXRJhPV5VP1"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--TvVk8jY0ORGMRIquAvDf9eZXRJhPV5VP1
Content-Type: multipart/mixed; boundary="umdRZdy40tqnGmjYdGfMgHLwSwKuHOVIt";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: =?UTF-8?Q?Anders_T=c3=b6rnqvist?= <anders.tornqvist@codiax.se>,
 xen-devel@lists.xenproject.org
Message-ID: <4f256544-7e1a-1a65-4942-04b3bc00e373@suse.com>
Subject: Re: Boot time and 3 sec in warning_print
References: <fcc14668-b07d-aec8-1587-bc2d7add84da@codiax.se>
In-Reply-To: <fcc14668-b07d-aec8-1587-bc2d7add84da@codiax.se>

--umdRZdy40tqnGmjYdGfMgHLwSwKuHOVIt
Content-Type: multipart/mixed;
 boundary="------------5B4F665EA105AFAA20D6F57A"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------5B4F665EA105AFAA20D6F57A
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 15.02.21 08:37, Anders Törnqvist wrote:
> Hi,
>
> I would like to shorten the boot time in our system if possible.
>
> In xen/common/warning.c there is warning_print() which contains a 3-second
> loop that calls process_pending_softirqs().
>
> What would the potential problems be if that loop is radically shortened?

A user might not notice the warning(s) printed.

But I can see your point. I think adding a boot option for setting
another timeout value (e.g. 0) would do the job without compromising
the default case.


Juergen

--------------5B4F665EA105AFAA20D6F57A
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------5B4F665EA105AFAA20D6F57A--

--umdRZdy40tqnGmjYdGfMgHLwSwKuHOVIt--

--TvVk8jY0ORGMRIquAvDf9eZXRJhPV5VP1
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmAqLTYFAwAAAAAACgkQsN6d1ii/Ey9e
Agf/elalu+jlMAe/TcVBeb6DbF7SUfsH03aO6jsQ9RGdUIJIgbf8PYduqE1IRyjIbvQTOUsYowYG
h2oYEcz583je8Zh8+ITrCxFAPKhzV/kEfGRbRkafzKakqtlfOAtsjRiGf1vB7pgJUr6WxwXW1xCg
kEK2NuMAJpPAHqokgQvAVOJdcGHp6EZEv8kjzfHIt8G/JEF6EynN1BQHA7p6Luypcs9RD3szMsle
rgwGXJCWgrXGhUnlbjDg0vhvOPUIq72e2avd1bWelVFmia5xRFOZ67CZITvXOUpjQIHJDZHyaltG
AEZNc+B5xB+7OHzaPFE0qdfsLzIqIJrdPKnRtUdNsw==
=pur1
-----END PGP SIGNATURE-----

--TvVk8jY0ORGMRIquAvDf9eZXRJhPV5VP1--


From xen-devel-bounces@lists.xenproject.org Mon Feb 15 08:29:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 Feb 2021 08:29:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85085.159511 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBZFp-000608-6w; Mon, 15 Feb 2021 08:29:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85085.159511; Mon, 15 Feb 2021 08:29:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBZFp-000601-3R; Mon, 15 Feb 2021 08:29:13 +0000
Received: by outflank-mailman (input) for mailman id 85085;
 Mon, 15 Feb 2021 08:29:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=X1yw=HR=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lBZFo-0005zw-AJ
 for xen-devel@lists.xenproject.org; Mon, 15 Feb 2021 08:29:12 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fec3e451-c3fc-4633-aef3-05a460a88629;
 Mon, 15 Feb 2021 08:29:10 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8BD59AE03;
 Mon, 15 Feb 2021 08:29:09 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fec3e451-c3fc-4633-aef3-05a460a88629
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613377749; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=wbWKccuo2a3z5eO8bLDWyss52tPylGtvuT6EX1VGiGk=;
	b=Dw5oEpDGrFciPPG6xRO//TnP2MkmSBCnTP0N9Xx6mtm7iolQLIK5tAGza/3SrTh6NPXGj7
	mJie/LMCgs/ElOez5z+U3H2UIyujmVWg7BTmVqFO2rF4TUF+ZWXW6RPuGtfGGrOJ5dZ3US
	G6ARbWNcVBm99Vw0Y1V6yfSztoRZIac=
Subject: Re: [PATCH] firmware: don't build hvmloader if it is not needed
To: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, cardoe@cardoe.com,
 andrew.cooper3@citrix.com, wl@xen.org, iwj@xenproject.org,
 anthony.perard@citrix.com, Stefano Stabellini <stefano.stabellini@xilinx.com>
References: <20210213020540.27894-1-sstabellini@kernel.org>
 <20210213135056.GA6191@mail-itl>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4d9200cd-bd4b-e429-5c96-7a4399bb00b4@suse.com>
Date: Mon, 15 Feb 2021 09:29:12 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210213135056.GA6191@mail-itl>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 13.02.2021 14:50, Marek Marczykowski-Górecki wrote:
> On Fri, Feb 12, 2021 at 06:05:40PM -0800, Stefano Stabellini wrote:
>> If rombios, seabios and ovmf are all disabled, don't attempt to build
>> hvmloader.
> 
> What if you choose to not build any of rombios, seabios, ovmf, but use
> a system one instead? Wouldn't that exclude hvmloader too?

Even further - one can disable all firmware and have every guest
config explicitly specify the firmware to use, afaict.

> This heuristic seems like a bit too much, maybe instead add an explicit
> option to skip hvmloader?

+1 (If making this configurable is needed at all - is having
hvmloader without needing it really a problem?)

Jan


From xen-devel-bounces@lists.xenproject.org Mon Feb 15 08:31:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 Feb 2021 08:31:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85087.159523 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBZHv-0006pO-Iw; Mon, 15 Feb 2021 08:31:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85087.159523; Mon, 15 Feb 2021 08:31:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBZHv-0006pH-Fh; Mon, 15 Feb 2021 08:31:23 +0000
Received: by outflank-mailman (input) for mailman id 85087;
 Mon, 15 Feb 2021 08:31:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBZHu-0006p8-Fn; Mon, 15 Feb 2021 08:31:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBZHu-0001lh-4w; Mon, 15 Feb 2021 08:31:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBZHt-0002Pp-Qi; Mon, 15 Feb 2021 08:31:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lBZHt-0005ts-QF; Mon, 15 Feb 2021 08:31:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=OuYwwtd7G6yNJLjym9Bb25yXVkXahmnKaI2jd9+zrMk=; b=hJv+Spl4xulq4SNdjPUnAVeCYT
	7XeEPXdhmE1gk2YY05M6Mzd9QsuhuKanHT27NDhZ3knD+7q5osWvksRyzxBTwnZMua5jD4Fjby/fq
	9BEwcTpVDCpQ4ZfuCWss0s4ZNWGsjk1R3LOtXHW052HBNvchfaROLhkig0eMPRCR9w0w=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159366-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 159366: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=97c57c67858e54916207520c693939741adae450
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 15 Feb 2021 08:31:21 +0000

flight 159366 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159366/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              97c57c67858e54916207520c693939741adae450
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  220 days
Failing since        151818  2020-07-11 04:18:52 Z  219 days  212 attempts
Testing same since   159338  2021-02-14 04:19:55 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 42318 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Feb 15 08:38:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 Feb 2021 08:38:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85093.159538 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBZP4-00071r-EJ; Mon, 15 Feb 2021 08:38:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85093.159538; Mon, 15 Feb 2021 08:38:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBZP4-00071k-BL; Mon, 15 Feb 2021 08:38:46 +0000
Received: by outflank-mailman (input) for mailman id 85093;
 Mon, 15 Feb 2021 08:38:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=X1yw=HR=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lBZP2-00071M-7L
 for xen-devel@lists.xenproject.org; Mon, 15 Feb 2021 08:38:44 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 68ff295d-05b1-4c48-9ad2-c8de1852afd6;
 Mon, 15 Feb 2021 08:38:40 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0027AAD78;
 Mon, 15 Feb 2021 08:38:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 68ff295d-05b1-4c48-9ad2-c8de1852afd6
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613378320; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=v5OmUhAPyHsjLq24iNVaOjuspmmJfGHeH5jqOEAPucM=;
	b=pkI8gRBq4Lb43TQjdcoN3nAgch8IhU8NRVunS2oCh3rHziMTlvP0tEBR1/rttkFGBYbab/
	JBPVWPYrDjQYx7q2HprfASg2q+cUWomovYFVYIa115sTONjKiKquAP6+4pqpsMaXawAP79
	v6w6enxt7WqImGtzCL4ZdZwc5tHpbEQ=
Subject: Re: Boot time and 3 sec in warning_print
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 =?UTF-8?Q?Anders_T=c3=b6rnqvist?= <anders.tornqvist@codiax.se>
References: <fcc14668-b07d-aec8-1587-bc2d7add84da@codiax.se>
 <4f256544-7e1a-1a65-4942-04b3bc00e373@suse.com>
Cc: xen-devel@lists.xenproject.org
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d9d4ebf7-09b3-29f2-1359-cd9db73f9c94@suse.com>
Date: Mon, 15 Feb 2021 09:38:43 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <4f256544-7e1a-1a65-4942-04b3bc00e373@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 15.02.2021 09:13, Jürgen Groß wrote:
> On 15.02.21 08:37, Anders Törnqvist wrote:
>> I would like to shorten the boot time in our system if possible.
>>
>> In xen/common/warning.c there is warning_print() which contains a 3
>> seconds loop that calls process_pending_softirqs().
>>
>> What would the potential problems be if that loop is radically shortened?
> 
> A user might not notice the warning(s) printed.
> 
> But I can see your point. I think adding a boot option for setting
> another timeout value (e.g. 0) would do the job without compromising
> the default case.

I don't think I agree - the solution to this is to eliminate the
reason leading to the warning. The delay is intentionally this way
to annoy the admin and force them to take measures.

Jan

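For reference, the delay under discussion is a simple once-per-second countdown loop. A minimal userspace sketch of Jürgen's suggestion (a settable timeout defaulting to today's 3 seconds) might look as follows; the `warning_timeout` variable and its setter are assumptions for illustration only, not existing Xen code, and a real implementation would register the option with Xen's boot-parameter machinery:

```c
/* Hypothetical model of a boot-settable warning delay.  In Xen the
 * loop lives in warning_print() (xen/common/warning.c) and each
 * iteration calls process_pending_softirqs() while waiting a second. */

static unsigned int warning_timeout = 3;  /* seconds; current hard-coded value */

/* Model of the delay loop: returns the number of one-second
 * iterations actually run. */
static unsigned int warning_delay(void)
{
    unsigned int i;

    for (i = 0; i < warning_timeout; i++)
        ;  /* in Xen: process_pending_softirqs() + a one-second wait */

    return i;
}

/* A boot option handler would simply overwrite the default;
 * a value of 0 skips the countdown entirely. */
static void set_warning_timeout(unsigned int seconds)
{
    warning_timeout = seconds;
}
```

With such a tunable, booting with the timeout set to 0 would eliminate the wait while leaving the default behaviour (and the annoyance Jan argues for) intact.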

From xen-devel-bounces@lists.xenproject.org Mon Feb 15 08:51:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 Feb 2021 08:51:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85098.159550 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBZbH-0000Lz-OI; Mon, 15 Feb 2021 08:51:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85098.159550; Mon, 15 Feb 2021 08:51:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBZbH-0000Ls-KO; Mon, 15 Feb 2021 08:51:23 +0000
Received: by outflank-mailman (input) for mailman id 85098;
 Mon, 15 Feb 2021 08:51:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lx6I=HR=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lBZbG-0000Ln-9H
 for xen-devel@lists.xenproject.org; Mon, 15 Feb 2021 08:51:22 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7efd6cfc-3ae9-4718-a513-06674940aca6;
 Mon, 15 Feb 2021 08:51:19 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D0E1BAD78;
 Mon, 15 Feb 2021 08:51:18 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7efd6cfc-3ae9-4718-a513-06674940aca6
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613379078; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=N9CZCjedQ0b3MQfPvdH0zEp3wMLdymsVdz1SBJzZqqA=;
	b=RIgxf/6jG5363gCrmGnL1qdYsUCkjLMoiU/O8LwVn+XhcfDkR6YyX5kUUIBQ6zxUh5tKfZ
	VufY55xh9Myv+iSlQkpeP4Oh2q+jCSOHYRpnv60+8lYZqLzzMZCIq5Zyr4Zd9/VTLn//ZY
	o/LOfTMXQd2pFR17phwIUnMFvtE9gJk=
Subject: Re: Boot time and 3 sec in warning_print
To: Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?Q?Anders_T=c3=b6rnqvist?= <anders.tornqvist@codiax.se>
Cc: xen-devel@lists.xenproject.org
References: <fcc14668-b07d-aec8-1587-bc2d7add84da@codiax.se>
 <4f256544-7e1a-1a65-4942-04b3bc00e373@suse.com>
 <d9d4ebf7-09b3-29f2-1359-cd9db73f9c94@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <2668f69a-2150-afbe-2cae-69c79a2d63d8@suse.com>
Date: Mon, 15 Feb 2021 09:51:18 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <d9d4ebf7-09b3-29f2-1359-cd9db73f9c94@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="ZCDeHTgHUP18KBY6GltLo6MZh990mMxAD"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--ZCDeHTgHUP18KBY6GltLo6MZh990mMxAD
Content-Type: multipart/mixed; boundary="ob3Yv1UZf7M0lbJoDFWtMnZIpQjcZyAYM";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?Q?Anders_T=c3=b6rnqvist?= <anders.tornqvist@codiax.se>
Cc: xen-devel@lists.xenproject.org
Message-ID: <2668f69a-2150-afbe-2cae-69c79a2d63d8@suse.com>
Subject: Re: Boot time and 3 sec in warning_print
References: <fcc14668-b07d-aec8-1587-bc2d7add84da@codiax.se>
 <4f256544-7e1a-1a65-4942-04b3bc00e373@suse.com>
 <d9d4ebf7-09b3-29f2-1359-cd9db73f9c94@suse.com>
In-Reply-To: <d9d4ebf7-09b3-29f2-1359-cd9db73f9c94@suse.com>

--ob3Yv1UZf7M0lbJoDFWtMnZIpQjcZyAYM
Content-Type: multipart/mixed;
 boundary="------------16E95B248085B7F87567790F"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------16E95B248085B7F87567790F
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 15.02.21 09:38, Jan Beulich wrote:
> On 15.02.2021 09:13, Jürgen Groß wrote:
>> On 15.02.21 08:37, Anders Törnqvist wrote:
>>> I would like to shorten the boot time in our system if possible.
>>>
>>> In xen/common/warning.c there is warning_print() which contains a 3
>>> seconds loop that calls process_pending_softirqs().
>>>
>>> What would the potential problems be if that loop is radically shortened?
>>
>> A user might not notice the warning(s) printed.
>>
>> But I can see your point. I think adding a boot option for setting
>> another timeout value (e.g. 0) would do the job without compromising
>> the default case.
> 
> I don't think I agree - the solution to this is to eliminate the
> reason leading to the warning. The delay is intentionally this way
> to annoy the admin and force them to take measures.

OTOH there are some warnings which can't be dealt with via boot
parameters or which can be solved by other means, e.g.:

"WARNING: SILO mode is not enabled.\n"
"It has implications on the security of the system,\n"
"unless the communications have been forbidden between\n"
"untrusted domains.\n"

"WARNING: HMP COMPUTING HAS BEEN ENABLED.\n"
"It has implications on the security and stability of the system,\n"
"unless the cpu affinity of all domains is specified.\n"


Juergen

--------------16E95B248085B7F87567790F--

--ob3Yv1UZf7M0lbJoDFWtMnZIpQjcZyAYM--

--ZCDeHTgHUP18KBY6GltLo6MZh990mMxAD--


From xen-devel-bounces@lists.xenproject.org Mon Feb 15 09:00:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 Feb 2021 09:00:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85103.159561 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBZkD-0001JV-Mt; Mon, 15 Feb 2021 09:00:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85103.159561; Mon, 15 Feb 2021 09:00:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBZkD-0001JO-Js; Mon, 15 Feb 2021 09:00:37 +0000
Received: by outflank-mailman (input) for mailman id 85103;
 Mon, 15 Feb 2021 09:00:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fYcZ=HR=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lBZkB-0001JJ-MV
 for xen-devel@lists.xenproject.org; Mon, 15 Feb 2021 09:00:35 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 371702f4-140f-4a9b-8bdc-40c79731569b;
 Mon, 15 Feb 2021 09:00:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 371702f4-140f-4a9b-8bdc-40c79731569b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613379634;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=YtvlF13IrUo6q7IT8UHZ4QFT8ApPquBv0Ln2e2Sd5eY=;
  b=Lxu4Hfl6hENDEMy3syfpid4mH1a0QtHgrBpM65/g9ID7W6g7RXuY7fDD
   eRh6cjYfJnWjRmv5sVEzggMmviF3HWHUIP/DTqmhB9rG4rVm6UqMFnpYC
   ZKdGvNnqPqoPNW3m9a0ewCnYq1wNVCTbAG5tzyg3JZMsy6TUq/OVqJtPL
   k=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: Kz0FDnN74ipmbbRLVWAdsyGKWrbcL13JdKO9l895DUv5f/HKTdEPf/zXrjGpxrRgda5bEZiJUp
 UxpMZjInSjKYLMD/pY69lYs/ZH2hKJvw0jK0yrCjprJ7b6uXZIm5J4DbDa4xIcQmEyuJRaEjFr
 X4ftgsdLDaR/N7QiiJXVCWNlTH9Ok0EpmCHU11VCXFiZqwmU9xdRgOxwLUNtyiEpQqIjoUov4K
 31fACMFXP2tXNsqtvs+5aKBGeEW9rwPMlkS0iVhA0mjM1PmNopw0L7queliu5TVU27uSiIuo1v
 0v0=
X-SBRS: 5.2
X-MesageID: 37239783
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,180,1610427600"; 
   d="scan'208";a="37239783"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Iv1oEYxpb3vD8PujlyCM50RQz2p1n1AxWvg+iOCbgI9XzzDv9KjEmwJdRnAT6mG8RgNHpXkHpaA6GJOjgCpE4Tea5CEfjs7Q84T5ypc4zEzBv1sdWzx0DcWtvZQ5PmkQbzI3FlvBXYVPGwEs7ZqRJApj2EtyKt+YiT2FbM/wmkmwHsuQCubw7Y+odKCzlHlsfjvdzYDRZeLEMkPL6zllRu3/HBQi1v+lD2eXqv3O+EqtvoHkmmGjyRRrgueQywt0Tj0/qAFK9CW+8DayrnrCG0CXI1Tb7MB6EfzM1MmhlNsCl9ZH3uGiNdRUZCV35Pu0+CwF4ta/7HROVk+ACfYN+g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=7A+SDUPTUJPa7Mxd4brTfMtyHyQijV0QKoLVGPD+R+M=;
 b=ZU+kUphZlQtr58n5De6uPtqMCjT7QauY/qJ7V2F9dHGkjyQl1Ely7k0EUy0VGHGhVZ83vRvjoqx+tr+rzFJxOSzsmIjPAXNHZap0XeOY4nibtDw+0M1qHvzJk0yeuKaGWsaQYE+vNkWlIWhyK1WaSOO8cmBY8AV47DcaNeeLBELO0AXTr7cv/g4yn/mmPyJLQoSz1wBqTnyTtKT9KrujLRkelaGAIVpj/fuIj+nw1XRIoBB69F/Gu0XKV4+v9nycltDbhqQ4mRbvsot3wAA9IDcIe2pw8eGT4kl1//8PoqC3COxVoVH7PV2tcvknOV9x+ErULv89XZz86XuszLckFQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=7A+SDUPTUJPa7Mxd4brTfMtyHyQijV0QKoLVGPD+R+M=;
 b=St1B1MN2l4dMSrcuUymhCtjJ0gbS+T7Xn+NSWqsFOZOPfO8HZBAy1lSTd24UciuKJHXKzAIDtWNXmXJYLgp9ToPn7VwO2iYPbEbDzjlMGNBUIF8S9HNQiYCkCvHsNe7XL0j65RaEJihVuth4+X7JfdVpeK51U5l9QCKGlPTXqVg=
Date: Mon, 15 Feb 2021 10:00:23 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Elliott Mitchell <ehem+xen@m5p.com>
CC: Maximilian Engelhardt <maxi@daemonizer.de>,
	<xen-devel@lists.xenproject.org>, <pkg-xen-devel@lists.alioth.debian.org>
Subject: Re: [BUG] Linux pvh vm not getting destroyed on shutdown
Message-ID: <YCo4J2DPtecdfIir@Air-de-Roger>
References: <2195346.r5JaYcbZso@localhost> <YCgYxOxXwitkFB0T@mattapan.m5p.com>
 <22128555.Kfurr2TIWe@localhost> <YCnqMh+YB2ZDsMUl@mattapan.m5p.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <YCnqMh+YB2ZDsMUl@mattapan.m5p.com>
X-ClientProxiedBy: MR2P264CA0055.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:31::19) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 06e20d8b-aed2-4ad6-2ca5-08d8d190250c
X-MS-TrafficTypeDiagnostic: DM6PR03MB5179:
X-Microsoft-Antispam-PRVS: <DM6PR03MB5179F90873B1FDB8B9D36A7A8F889@DM6PR03MB5179.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: t2fW6GgXg8mydHUzbdTDLrN0DxpctIoGfp9Y8YeQeMo3DdnY9tVlunQwKLRJTUuL2RW02c1q9nks8QLwDOw3RntfhXU0HL+6bftbK+h5QJ5tjRC0tPDVXId935sFviKG1w7OuW4jrId01E2PzMbGLHPLVT/uTyhr0WzGt9FCTOSVosnHgnSALnxFrNWCj6Txwd6wwxhuAzlU/WxP7kthWh3BQNRKxD8iVMLfUSEejsjQp68Z3zT8Tuis1x3ga5Ufr/ycjsI30N58JJrZUIixZunlG1bYGArbMYKkaVhsTDTrWbgsdB6C/t86uP2aopNbAovYXrweGGAMOTuOzWBYeQwAjhA8KAcJks7tP916BfMj7sk5NVFgW0jvJP9l1uVml2t39zyMllh6NdASFJBrzMf5vfl8o/VKOeK5u5RPblwjQfG6+GDEush4ZDyft0ew1kXA6e33IMMWuY50fq4F7wvk1opPnAsSyrzNkdnMc0d80eN5a1kAE9v9GWx2KNCq8ZU9LOL++QaJULKNBsNEwQ==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(376002)(136003)(366004)(396003)(39850400004)(346002)(66476007)(8936002)(5660300002)(66556008)(66946007)(33716001)(83380400001)(85182001)(8676002)(16526019)(26005)(9686003)(4326008)(6666004)(956004)(86362001)(186003)(6486002)(2906002)(316002)(6496006)(478600001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?VS9sZm81Q0FJZUhvdjVZZ3VsL25TVW4zMWhldzM2V3JZVTZJT29WdlNoZlNH?=
 =?utf-8?B?V2hPVXR2WmFhY3U5bVI5a2pNNEhveGhSRVp1UXkvSkszYTRXZnJDRmZobHps?=
 =?utf-8?B?c2RuSFBqd3IxMG1IVnBPaHpFcjh2cUNvUDVsdXhMdlNKZERac3BlZVZaSEZD?=
 =?utf-8?B?dUl4TjRNeXA3NHFTYkwwYXdoWkhmQkhCaDZYOWpGMHUwaG4xclpNcEdxY1dH?=
 =?utf-8?B?ZGJGMWFnTDZ2Vkhxd1pRaFJtck8wZWFNbE5pMmFxZzkvQk5sSW1mMDMvK2cv?=
 =?utf-8?B?SmZjTjVMdDRyTUpjU21XL1ludG42TnY2WlVaT28rOS9kUEIwYmZaMkxYbXBF?=
 =?utf-8?B?bUdBZEJQYmZpY1RWMjQ4T3NRMkJlTGFXVjJuSzNyY3Y5QnorS28yQVkySDZR?=
 =?utf-8?B?T3lpUXhGSHNTTGZqWk9XTE1TRlh5ZjBRVnh1dS9vQnNUMjJVdW9rMllUQ01k?=
 =?utf-8?B?OGtyS2NOUlhSVDVFUU94QzlYUUhLVWVCQnAwYVN1UHU5YzVUc2k3cUZaSDBP?=
 =?utf-8?B?WlQxM29WazNoUXpndnpaTEkyYVlXLzN0NDhsNmZqMEc1Q21pVlNWUENaT01E?=
 =?utf-8?B?aGlxZkhDZEd3TlVwaEkwbHl2bnVtZjMxRFVYU3hpYWJCem4wbjYyZzI0Mk1X?=
 =?utf-8?B?Z0dmVlFTVmtCT1g4T05KUWE0Zmp5THB4Q2FSekZ4M3A5dVFMZjliUHhTOGZ4?=
 =?utf-8?B?U0NMbUJyVExuNG1UNmJXeDlOdkdjMGdlQnEvWWVoOE1acFoyMlJpeHJCQXRy?=
 =?utf-8?B?WmhaMDNaQzNLNHdnejZiamhaUG1NZTcyd0p5RUFhUldHcVJsVXp6MjdGd29Y?=
 =?utf-8?B?ZThZMjR4VS9vTDBJUW5hclFyNUFTM3Nmd2VzZm9xY0xaNENBNmY3alpkOVJ5?=
 =?utf-8?B?TlljbjhWOHh1cWRzWnQrbDZhdVFXZVZ3UEhNbTdmeCtKRlZMajUvN3FkQzd0?=
 =?utf-8?B?WURCQTR2MVVUV20rNmRjN2FZM2pUMXhGKzNBTUJpZGVDUEhDZFRvcE1qL0l2?=
 =?utf-8?B?WURadHJBMFFCQzFHWk1DVkxlV0pIQUVneDJWazNDTmFTQnZJTm42bVZLZXl1?=
 =?utf-8?B?QzZSemVsVHRVb011NXVnVE5jQWdHaUU3V0ZhZjZ0bVZ1UkVzdnU0OWVTRDFx?=
 =?utf-8?B?Q0gzcjc1UDUyQk9PMG5ITUlqYjVIOERwOTIvbGlZREtRdzRGUk13TExEcWt4?=
 =?utf-8?B?TGx6QmFkdGFjZWxqKzJGRkV4eHIxNkcvR1daNXlBdlVBeWFjaDQzWXZWdWl5?=
 =?utf-8?B?SlFLdWVUM1ZBQnZ3Zm9lU1psb3VDU2o3WnIwc2VtTXpwYUJ6Y3BkVkJXVHg2?=
 =?utf-8?B?NHo5clRyYzZoUmQ5bXlSZ1didW95eVNtWnR0UGNqQkJvRWtlWHZ3MHc2bW81?=
 =?utf-8?B?eG5WbldNWGVUOHU2ZSt0UWVhZHluck85N1ZtZWxueWJIbE9rcEpZSi9JVmYz?=
 =?utf-8?B?L2JqeUlWOVRWKzBvUmlJRnYzYU9kU1duK0o1YnJ3MUlBTERTVStRZlFrbDZq?=
 =?utf-8?B?dkdsR05BemJIZGFPdlh5TDNTSG5taE1lcE5YYnhLd2kzNVdWME5xQUM3OGtG?=
 =?utf-8?B?dU42T3pqb3YyQXZ6Y2lkYnhYWlRzZFZ5QmM4cEJPalUydHliZzR5aUpqeFVv?=
 =?utf-8?B?VExXRzh1ZEdVUUdMWUxUM3hJK2p6aEU0VDNkUDdBL2Y5MTZIdEFJWXNsdU9W?=
 =?utf-8?B?NEErZURlRjNsbnBrM0FNWDEySE1hTVQwR1hvU2p6RjdkczZ5eDZOdWxwbmds?=
 =?utf-8?Q?cnFYTLOioa3moEp9wj6DUxNkUBSlUl7TSI6wLIP?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 06e20d8b-aed2-4ad6-2ca5-08d8d190250c
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 Feb 2021 09:00:30.2365
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Y1tLJIYoU3//8Is64d86ekgz0mgFMQEJCvrRHJmU9roO1I4MQHVMX+9gyRkzZX/XmCg24mgBWUPmkU2e4Htrkw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB5179
X-OriginatorOrg: citrix.com

On Sun, Feb 14, 2021 at 07:27:46PM -0800, Elliott Mitchell wrote:
> On Sun, Feb 14, 2021 at 11:45:47PM +0100, Maximilian Engelhardt wrote:
> > On Samstag, 13. Februar 2021 19:21:56 CET Elliott Mitchell wrote:
> > > On Sat, Feb 13, 2021 at 04:36:24PM +0100, Maximilian Engelhardt wrote:
> > > > * The issue started with Debian kernel 5.8.3+1~exp1 running in the vm,
> > > > Debian kernel 5.7.17-1 does not show the issue.
> > > 
> > > I think the first kernel update during which I saw the issue was around
> > > linux-image-4.19.0-12-amd64 or linux-image-4.19.0-13-amd64.  I think
> > > the last security update to the Xen packages was in a similar timeframe
> > > though.  Rate this portion as unreliable though.  I can definitely state
> > > this occurs with Debian's linux-image-4.19.0-13-amd64 and kernels built
> > > from corresponding source, this may have shown earlier.
> > 
> > We don't see any issues with the current Debian buster (Debian stable) kernel 
> > (4.19.0-14-amd64 #1 SMP Debian 4.19.171-2 (2021-01-30) x86_64 GNU/Linux) and 
> > also did not notice any issues with the older kernel packages in buster. Also 
> > the security update of xen in buster did not cause any behavior change for us. 
> > In our case everything in buster is working as we expect it to work (using 
> > latest updates and security updates).
> 
> I can't really say much here.  I keep up to date and I cannot point to a
> key ingredient as the one which caused this breakage.
> 
> 
> > > Fresh observation.  During a similar timeframe I started noticing VM
> > > creation leaving a `xl create` process behind.  I had discovered this
process could be freely killed without appearing to affect the VM and had
> > > thus been doing so (memory in a lean Dom0 is precious).
> > > 
> > > While typing this I realized there was another scenario I needed to try.
> > > Turns out if I boot PV GRUB and get to its command-line (press 'c'), then
> > > get away from the VM console, kill the `xl create` process, return to
> > > the console and type "halt".  This results in a hung VM.
> > > 
Are you perhaps either killing the `xl create` process for affected VMs,
> > > or migrating the VM and thus splitting the `xl create` process from the
> > > affected VMs?
> > > 
> > > This seems more a Debian issue than a Xen Project issue right now.
> > 
> > We don't migrate the vms, we don't kill any processes running on the dom0 and 
> > I don't see anything in our logs indicating something gets killed on the dom0. 
> > On our systems the running 'xl create' processes only use very little memory.
> > 
Have you tried whether you still observe your hangs if you don't kill the xl
> > processes?
> 
That is exactly what I pointed to above.  On stable, killing the
> mysterious left-behind `xl create` process causes the problem to
> manifest, while leaving it undisturbed appears to make the problem not
> manifest.

You cannot kill the 'xl create' process, or else events for the domain
(like shutdown) won't be handled by the toolstack, and thus the domain
won't be destroyed when the guest shuts down. The same happens if the
guest tries to reboot: the reboot won't work properly because the reboot
request won't be handled by the toolstack, as you have just killed the
xl process that's in charge of doing it.

Roger.


From xen-devel-bounces@lists.xenproject.org Mon Feb 15 09:12:54 2021
From: Rahul Singh <Rahul.Singh@arm.com>
To: Julien Grall <julien@xen.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, "lucmiccio@gmail.com"
	<lucmiccio@gmail.com>, xen-devel <xen-devel@lists.xenproject.org>, Bertrand
 Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Stefano Stabellini
	<stefano.stabellini@xilinx.com>
Subject: Re: [PATCH v2] xen/arm: fix gnttab_need_iommu_mapping
Date: Mon, 15 Feb 2021 09:12:32 +0000
Message-ID: <8A85CE97-A38D-4580-BBDD-38DD0542A3F8@arm.com>
References: <20210208184932.23468-1-sstabellini@kernel.org>
 <B96B5E21-0600-4664-899D-D38A18DE7A8C@arm.com>
 <alpine.DEB.2.21.2102091226560.8948@sstabellini-ThinkPad-T480s>
 <EFFD35EA-378B-4C5C-8485-7EA5265E89E4@arm.com>
 <4e4f7d25-6f5f-1016-b1c9-7aa902d637b8@xen.org>
 <ECC82E19-3504-4E0E-B3EA-D0E46DD842C6@arm.com>
 <c573b3a0-186d-626e-6670-f8fc28285e3d@xen.org>
 <BFC5858A-3631-48E1-AB87-40EECF95FA66@arm.com>
 <489c1b89-67f0-5d47-d527-3ea580b7cc43@xen.org>
 <DC7F1705-54B3-4543-8222-E7BCF1A501F7@arm.com>
 <acbcdd06-83b1-28ff-ea7e-2ce1ba681ac1@xen.org>
In-Reply-To: <acbcdd06-83b1-28ff-ea7e-2ce1ba681ac1@xen.org>

Hello Julien,

> On 14 Feb 2021, at 2:32 pm, Julien Grall <julien@xen.org> wrote:
> 
> Hi Rahul,
> 
> On 11/02/2021 16:05, Rahul Singh wrote:
>>> On 11 Feb 2021, at 1:52 pm, Julien Grall <julien@xen.org> wrote:
>>> 
>>> 
>>> 
>>> On 11/02/2021 13:20, Rahul Singh wrote:
>>>> Hello Julien,
>>> 
>>> Hi Rahul,
>>> 
>>>>> On 10 Feb 2021, at 7:52 pm, Julien Grall <julien@xen.org> wrote:
>>>>> 
>>>>> 
>>>>> 
>>>>> On 10/02/2021 18:08, Rahul Singh wrote:
>>>>>> Hello Julien,
>>>>>>> On 10 Feb 2021, at 5:34 pm, Julien Grall <julien@xen.org> wrote:
>>>>>>> 
>>>>>>> Hi,
>>>>>>> 
>>>>>>> On 10/02/2021 15:06, Rahul Singh wrote:
>>>>>>>>> On 9 Feb 2021, at 8:36 pm, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>>>>>>>> 
>>>>>>>>> On Tue, 9 Feb 2021, Rahul Singh wrote:
>>>>>>>>>>> On 8 Feb 2021, at 6:49 pm, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>>>>>>>>>> 
>>>>>>>>>>> Commit 91d4eca7add broke gnttab_need_iommu_mapping on ARM.
>>>>>>>>>>> The offending chunk is:
>>>>>>>>>>> 
>>>>>>>>>>> #define gnttab_need_iommu_mapping(d)                    \
>>>>>>>>>>> -    (is_domain_direct_mapped(d) && need_iommu(d))
>>>>>>>>>>> +    (is_domain_direct_mapped(d) && need_iommu_pt_sync(d))
>>>>>>>>>>> 
>>>>>>>>>>> On ARM we need gnttab_need_iommu_mapping to be true for dom0 when it is
>>>>>>>>>>> directly mapped and IOMMU is enabled for the domain, like the old check
>>>>>>>>>>> did, but the new check is always false.
>>>>>>>>>>> 
>>>>>>>>>>> In fact, need_iommu_pt_sync is defined as dom_iommu(d)->need_sync and
>>>>>>>>>>> need_sync is set as:
>>>>>>>>>>> 
>>>>>>>>>>>   if ( !is_hardware_domain(d) || iommu_hwdom_strict )
>>>>>>>>>>>       hd->need_sync = !iommu_use_hap_pt(d);
>>>>>>>>>>> 
>>>>>>>>>>> iommu_use_hap_pt(d) means that the page-table used by the IOMMU is the
>>>>>>>>>>> P2M. It is true on ARM. need_sync means that you have a separate IOMMU
>>>>>>>>>>> page-table and it needs to be updated for every change. need_sync is set
>>>>>>>>>>> to false on ARM. Hence, gnttab_need_iommu_mapping(d) is false too,
>>>>>>>>>>> which is wrong.
>>>>>>>>>>> 
>>>>>>>>>>> As a consequence, when using PV network from a domU on a system where
>>>>>>>>>>> IOMMU is on from Dom0, I get:
>>>>>>>>>>> 
>>>>>>>>>>> (XEN) smmu: /smmu@fd800000: Unhandled context fault: fsr=0x402, iova=0x8424cb148, fsynr=0xb0001, cb=0
>>>>>>>>>>> [   68.290307] macb ff0e0000.ethernet eth0: DMA bus error: HRESP not OK
>>>>>>>>>>> 
>>>>>>>>>>> The fix is to go back to something along the lines of the old
>>>>>>>>>>> implementation of gnttab_need_iommu_mapping.
>>>>>>>>>>> 
>>>>>>>>>>> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
>>>>>>>>>>> Fixes: 91d4eca7add
>>>>>>>>>>> Backport: 4.12+
>>>>>>>>>>> 
>>>>>>>>>>> ---
>>>>>>>>>>> 
>>>>>>>>>>> Given the severity of the bug, I would like to request this patch to be
>>>>>>>>>>> backported to 4.12 too, even if 4.12 is security-fixes only since Oct
>>>>>>>>>>> 2020.
>>>>>>>>>>> 
>>>>>>>>>>> For the 4.12 backport, we can use iommu_enabled() instead of
>>>>>>>>>>> is_iommu_enabled() in the implementation of gnttab_need_iommu_mapping.
>>>>>>>>>>> 
>>>>>>>>>>> Changes in v2:
>>>>>>>>>>> - improve commit message
>>>>>>>>>>> - add is_iommu_enabled(d) to the check
>>>>>>>>>>> ---
>>>>>>>>>>> xen/include/asm-arm/grant_table.h | 2 +-
>>>>>>>>>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>>>>>>> 
>>>>>>>>>>> diff --git a/xen/include/asm-arm/grant_table.h b/xen/include/asm-arm/grant_table.h
>>>>>>>>>>> index 6f585b1538..0ce77f9a1c 100644
>>>>>>>>>>> --- a/xen/include/asm-arm/grant_table.h
>>>>>>>>>>> +++ b/xen/include/asm-arm/grant_table.h
>>>>>>>>>>> @@ -89,7 +89,7 @@ int replace_grant_host_mapping(unsigned long gpaddr, mfn_t mfn,
>>>>>>>>>>>    (((i) >= nr_status_frames(t)) ? INVALID_GFN : (t)->arch.status_gfn[i])
>>>>>>>>>>> 
>>>>>>>>>>> #define gnttab_need_iommu_mapping(d)                    \
>>>>>>>>>>> -    (is_domain_direct_mapped(d) && need_iommu_pt_sync(d))
>>>>>>>>>>> +    (is_domain_direct_mapped(d) && is_iommu_enabled(d))
>>>>>>>>>>> 
>>>>>>>>>>> #endif /* __ASM_GRANT_TABLE_H__ */
>>>>>>>>>> 
>>>>>>>>>> I tested the patch and while creating the guest I observed the below warning from Linux for block device.
>>>>>>>>>> https://elixir.bootlin.com/linux/v4.3/source/drivers/block/xen-blkback/xenbus.c#L258
>>>>>>>>> 
>>>>>>>>> So you are creating a guest with "xl create" in dom0 and you see the
>>>>>>>>> warnings below printed by the Dom0 kernel? I imagine the domU has a
>>>>>>>>> virtual "disk" of some sort.
>>>>>>>> Yes you are right I am trying to create the guest with "xl create” and before that, I created the logical volume and trying to attach the logical volume
>>>>>>>> block to the domain with “xl block-attach”. I observed this error with the "xl block-attach” command.
>>>>>>>> This issue occurs after applying this patch as what I observed this patch introduce the calls to iommu_legacy_{, un}map() to map the grant pages for
>>>>>>>> IOMMU that touches the page-tables. I am not sure but what I observed is that something is written wrong when iomm_unmap calls unmap the pages
>>>>>>>> because of that issue is observed.
>>>>>>> 
>>>>>>> Can you clarify what you mean by "written wrong"? What sort of error do you see in the iommu_unmap()?
>>>>>> I might be wrong as per my understanding for ARM we are sharing the P2M between CPU and IOMMU always and the map_grant_ref() function is written in such a way that we have to call iommu_legacy_{, un}map() only if P2M is not shared.
>>>>> 
>>>>> map_grant_ref() will call the IOMMU if gnttab_need_iommu_mapping() returns true. I don't really see where this is assuming the P2M is not shared.
>>>>> 
>>>>> In fact, on x86, this will always be false for HVM domain (they support both shared and separate page-tables).
>>>>> 
>>>>>> As we are sharing the P2M when we call the iommu_map() function it will overwrite the existing GFN -> MFN ( For DOM0 GFN is same as MFN) entry and when we call iommu_unmap() it will unmap the  (GFN -> MFN ) entry from the page-table.
>>>>> AFAIK, there should be nothing mapped at that GFN because the page belongs to the guest. At worse, we would overwrite a mapping that is the same.
>>>>> Sorry I should have mention before backend/frontend is dom0 in this
>>> case and GFN is mapped. I am trying to attach the block device to DOM0
>>> 
>>> Ah, your log makes a lot more sense now. Thank you for the clarification!
>>> 
>>> So yes, I agree that iommu_{,un}map() will do the wrong thing if the frontend and backend in the same domain.
>>> 
>>> I don't know what the state in Linux, but from Xen PoV it should be possible to have the backend/frontend in the same domain.
>>> 
>>> I think we want to ignore the IOMMU mapping request when the domain is the same. Can you try this small untested patch:
>> I tested the patch and it is working fine for both dom0/domU. I am able to attach the block device to dom0/domu.
>> Also I didn’t observe the IOMMU fault also for block device that we have behind IOMMU on our system and attached to domU.
> 
> Thanks for the testing. I noticed that my patch doesn't build because arm_iommu_unmap_page() doesn't have a parameter mfn. Can you confirm whether you had to replace mfn with _mfn(dfn_x(dfn))?

Yes I have to replace the mfn with _mfn(dfn_x(dfn)) to test the patch.

Regards,
Rahul
> 
> Cheers,
> 
> -- 
> Julien Grall
> 


From xen-devel-bounces@lists.xenproject.org Mon Feb 15 09:24:30 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159359-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 159359: regressions - FAIL
X-Osstest-Failures:
    linux-5.4:test-arm64-arm64-xl-credit1:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-xsm:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-credit2:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-seattle:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-arndale:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-thunderx:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-libvirt:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-cubietruck:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=5b9a4104c902d7dec14c9e3c5652a638194487c6
X-Osstest-Versions-That:
    linux=a829146c3fdcf6d0b76d9c54556a223820f1f73b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 15 Feb 2021 09:24:13 +0000

flight 159359 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159359/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-credit1  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-xsm      14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl          14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-credit2  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-seattle  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-credit1  14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-arndale  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-thunderx 14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-multivcpu 14 guest-start             fail REGR. vs. 158387
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-cubietruck 14 guest-start            fail REGR. vs. 158387
 test-armhf-armhf-xl          14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-credit2  14 guest-start              fail REGR. vs. 158387

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     14 guest-start              fail REGR. vs. 158387

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 158387
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 158387
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 158387
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 158387
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 158387
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 158387
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 158387
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 158387
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 158387
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 158387
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                5b9a4104c902d7dec14c9e3c5652a638194487c6
baseline version:
 linux                a829146c3fdcf6d0b76d9c54556a223820f1f73b

Last test of basis   158387  2021-01-12 19:40:06 Z   33 days
Failing since        158473  2021-01-17 13:42:20 Z   28 days   40 attempts
Testing same since   159339  2021-02-14 04:42:28 Z    1 days    2 attempts

------------------------------------------------------------
467 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 14254 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Feb 15 10:05:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 Feb 2021 10:05:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85119.159607 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBakq-0006w2-DP; Mon, 15 Feb 2021 10:05:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85119.159607; Mon, 15 Feb 2021 10:05:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBakq-0006vv-AQ; Mon, 15 Feb 2021 10:05:20 +0000
Received: by outflank-mailman (input) for mailman id 85119;
 Mon, 15 Feb 2021 10:05:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lBako-0006vq-LA
 for xen-devel@lists.xenproject.org; Mon, 15 Feb 2021 10:05:18 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lBakn-0003ND-T6; Mon, 15 Feb 2021 10:05:17 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lBakn-000782-MZ; Mon, 15 Feb 2021 10:05:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=Z8fhdcbp2lZw6hOwH1OSTqhEgUBoiEhtwEp5Y62NxBc=; b=gdAdCAZ6R/gYeOOEA2Wq7hPiGS
	s9hUVlHzIf1gi1VDr6u5cm8TtygPsXwAyaix8j4x5O39se/3wgOQsMZWJ3jFDc1BpOPr2jZuTlSMt
	NnexzPLxZ8s/icqNCI/s6uSVlTuWdB/JYrFx5Lly9JM8agmzjazGc6kZMHvgX6nMPYv8=;
Subject: Re: Boot time and 3 sec in warning_print
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?Q?Anders_T=c3=b6rnqvist?= <anders.tornqvist@codiax.se>
Cc: xen-devel@lists.xenproject.org
References: <fcc14668-b07d-aec8-1587-bc2d7add84da@codiax.se>
 <4f256544-7e1a-1a65-4942-04b3bc00e373@suse.com>
 <d9d4ebf7-09b3-29f2-1359-cd9db73f9c94@suse.com>
 <2668f69a-2150-afbe-2cae-69c79a2d63d8@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <9f9d7e52-00e9-796b-fa6f-f6df04015773@xen.org>
Date: Mon, 15 Feb 2021 10:05:15 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <2668f69a-2150-afbe-2cae-69c79a2d63d8@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Juergen,

On 15/02/2021 08:51, Jürgen Groß wrote:
> On 15.02.21 09:38, Jan Beulich wrote:
>> On 15.02.2021 09:13, Jürgen Groß wrote:
>>> On 15.02.21 08:37, Anders Törnqvist wrote:
>>>> I would like to shorten the boot time in our system if possible.
>>>>
>>>> In xen/common/warning.c there is warning_print() which contains a 3
>>>> seconds loop that calls process_pending_softirqs().
>>>>
>>>> What would the potential problems be if that loop is radically 
>>>> shortened?
>>>
>>> A user might not notice the warning(s) printed.
>>>
>>> But I can see your point. I think adding a boot option for setting
>>> another timeout value (e.g. 0) would do the job without compromising
>>> the default case.
>>
>> I don't think I agree - the solution to this is to eliminate the
>> reason leading to the warning. The delay is intentionally this way
>> to annoy the admin and force them to take measures.
> 
> OTOH there are some warnings which can't be dealt with via boot
> parameters or which can be solved by other means, e.g.:
Both of these warnings can be avoided...

> "WARNING: SILO mode is not enabled.\n"
> "It has implications on the security of the system,\n"
> "unless the communications have been forbidden between\n"
> "untrusted domains.\n"

The default Arm config uses SILO mode. You would have needed to 
tweak your .config in order to get this message.

There are ongoing discussions to allow FLASK again on Armv8.1+ platforms 
(see [1]).

> 
> "WARNING: HMP COMPUTING HAS BEEN ENABLED.\n"
> "It has implications on the security and stability of the system,\n"
> "unless the cpu affinity of all domains is specified.\n"

This is only printed if the admin adds "hmp-unsafe=yes" on the command line.

Cheers,

[1]
https://lore.kernel.org/xen-devel/20201111215203.80336-1-ash.j.wilding@gmail.com/

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Feb 15 10:36:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 Feb 2021 10:36:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85133.159618 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBbEL-0001BO-Pk; Mon, 15 Feb 2021 10:35:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85133.159618; Mon, 15 Feb 2021 10:35:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBbEL-0001BH-Mq; Mon, 15 Feb 2021 10:35:49 +0000
Received: by outflank-mailman (input) for mailman id 85133;
 Mon, 15 Feb 2021 10:35:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lBbEK-0001B9-6S
 for xen-devel@lists.xenproject.org; Mon, 15 Feb 2021 10:35:48 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lBbEJ-0003qE-C1; Mon, 15 Feb 2021 10:35:47 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lBbEJ-00019S-54; Mon, 15 Feb 2021 10:35:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=xG2ulb23qHt4SiokEosUYzaGqt7iLv3eJrf7QlH12Wo=; b=G1+S7yf9BSS/i8UWIUPl+Qtiao
	NiDMscxGT7w01va+6sL0TtctGcYJ02XICfIBuJXC3jvIDvfyNmjYtvTMcPqlUMjGB2SjJ7WVdROPn
	dQZiiiTFDYn1korrfK1cMXypRZcY2W30u7t2FxQBahVDjnSh9oPpollOjIO03QFmrEd4=;
Subject: Re: Boot time and 3 sec in warning_print
To: Jan Beulich <jbeulich@suse.com>, =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?=
 <jgross@suse.com>, =?UTF-8?Q?Anders_T=c3=b6rnqvist?=
 <anders.tornqvist@codiax.se>
Cc: xen-devel@lists.xenproject.org
References: <fcc14668-b07d-aec8-1587-bc2d7add84da@codiax.se>
 <4f256544-7e1a-1a65-4942-04b3bc00e373@suse.com>
 <d9d4ebf7-09b3-29f2-1359-cd9db73f9c94@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <84a8cc60-e63d-24db-750c-39bb6049c045@xen.org>
Date: Mon, 15 Feb 2021 10:35:45 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <d9d4ebf7-09b3-29f2-1359-cd9db73f9c94@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Jan,

On 15/02/2021 08:38, Jan Beulich wrote:
> On 15.02.2021 09:13, Jürgen Groß wrote:
>> On 15.02.21 08:37, Anders Törnqvist wrote:
>>> I would like to shorten the boot time in our system if possible.
>>>
>>> In xen/common/warning.c there is warning_print() which contains a 3
>> seconds loop that calls process_pending_softirqs().
>>>
>>> What would the potential problems be if that loop is radically shortened?
>>
>> A user might not notice the warning(s) printed.
>>
>> But I can see your point. I think adding a boot option for setting
>> another timeout value (e.g. 0) would do the job without compromising
>> the default case.
> 
> I don't think I agree - the solution to this is to eliminate the
> reason leading to the warning. 
 >
> The delay is intentionally this way
> to annoy the admin and force them to take measures.
Given they are warnings, an admin may have assessed them and concluded 
that no remediation is necessary.

We encountered the same problem with LiveUpdate. If you happen to have a 
warning (e.g. sync_console for debugging), then you are adding 3s to 
your downtime (this can more than double the actual figure).

What was just an "annoyance" at boot can now completely wreck your 
guests and system (not much software can tolerate a large downtime).

So I think we either want to drop the 3s pause completely or allow the 
user to decide whether they care about it via a command line option.

I am leaning towards the former at the moment.

Cheers,

-- 
Julien Grall
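
[Editorial note: the behaviour being debated — print the warning(s), then spin for 3 seconds while servicing softirqs — can be modelled roughly as below. This is a simplified stand-alone sketch, not Xen's actual warning.c: the real loop polls NOW() against a deadline, and the `warning_timeout` knob stands in for the boot option proposed in the thread, which does not exist upstream.]

```c
#include <assert.h>
#include <stdio.h>

/* Hypothetical tunable standing in for the proposed boot option
 * (e.g. "warning-timeout=0"); upstream warning_print() hard-codes
 * the equivalent of 3 seconds. */
unsigned int warning_timeout = 3;

/* Counter standing in for Xen's process_pending_softirqs(): it records
 * how often the wait loop would have serviced softirqs instead of
 * simply sleeping. */
unsigned long softirq_polls;

void process_pending_softirqs_model(void)
{
    softirq_polls++;
}

/* Rough model of the tail of warning_print(): emit the queued warning,
 * then busy-wait for warning_timeout "seconds" (one loop iteration per
 * second here, so the example runs instantly) while keeping softirqs
 * serviced.  The real code compares NOW() against a computed deadline. */
void warning_print_model(const char *warning)
{
    unsigned int elapsed;

    fputs(warning, stdout);

    for ( elapsed = 0; elapsed < warning_timeout; elapsed++ )
        process_pending_softirqs_model();
}
```

With `warning_timeout = 0` the loop body never runs, which is the opt-out Jürgen suggests; dropping the loop entirely is the alternative Julien leans towards.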


From xen-devel-bounces@lists.xenproject.org Mon Feb 15 10:41:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 Feb 2021 10:41:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85137.159630 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBbJv-00027F-FI; Mon, 15 Feb 2021 10:41:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85137.159630; Mon, 15 Feb 2021 10:41:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBbJv-000278-CF; Mon, 15 Feb 2021 10:41:35 +0000
Received: by outflank-mailman (input) for mailman id 85137;
 Mon, 15 Feb 2021 10:41:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=X1yw=HR=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lBbJu-000273-E2
 for xen-devel@lists.xenproject.org; Mon, 15 Feb 2021 10:41:34 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9f561364-0321-47ab-b597-8b002a93bc22;
 Mon, 15 Feb 2021 10:41:31 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 08991AC32;
 Mon, 15 Feb 2021 10:41:31 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9f561364-0321-47ab-b597-8b002a93bc22
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613385691; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=K+tRXwQ5Ct7R9Eny+tyaBcKdcNvbILR4Ppv/xJyxewI=;
	b=sKprtr3aGelwXzmVxyVFToWNO0aOctFxCQ4A0nzA77WkL30Vf3gyh6KEIoye5KkxGnlcBG
	uYnCLWfBw2i7QXP7Atqbu4W8fNzoj4/81pp85OMU8ektMjvUYtJtzwR0VWNgQkJ7wD8+wP
	VZHJ/UfB98FEWZxeqppiA07uLUZGRvU=
Subject: Re: Boot time and 3 sec in warning_print
To: Julien Grall <julien@xen.org>
Cc: xen-devel@lists.xenproject.org, =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?=
 <jgross@suse.com>, =?UTF-8?Q?Anders_T=c3=b6rnqvist?=
 <anders.tornqvist@codiax.se>
References: <fcc14668-b07d-aec8-1587-bc2d7add84da@codiax.se>
 <4f256544-7e1a-1a65-4942-04b3bc00e373@suse.com>
 <d9d4ebf7-09b3-29f2-1359-cd9db73f9c94@suse.com>
 <84a8cc60-e63d-24db-750c-39bb6049c045@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <29a1b91f-891c-2a9e-a5f0-b7b01296c880@suse.com>
Date: Mon, 15 Feb 2021 11:41:34 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <84a8cc60-e63d-24db-750c-39bb6049c045@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 15.02.2021 11:35, Julien Grall wrote:
> On 15/02/2021 08:38, Jan Beulich wrote:
>> On 15.02.2021 09:13, Jürgen Groß wrote:
>>> On 15.02.21 08:37, Anders Törnqvist wrote:
>>>> I would like to shorten the boot time in our system if possible.
>>>>
>>>> In xen/common/warning.c there is warning_print() which contains a 3
>>> seconds loop that calls process_pending_softirqs().
>>>>
>>>> What would the potential problems be if that loop is radically shortened?
>>>
>>> A user might not notice the warning(s) printed.
>>>
>>> But I can see your point. I think adding a boot option for setting
>>> another timeout value (e.g. 0) would do the job without compromising
>>> the default case.
>>
>> I don't think I agree - the solution to this is to eliminate the
>> reason leading to the warning. 
>  >
>> The delay is intentionally this way
>> to annoy the admin and force them to take measures.
> Given they are warnings, an admin may have assessed them and concluded 
> that no remediation is necessary.
> 
> We encountered the same problem with LiveUpdate. If you happen to have a 
> warning (e.g. sync_console for debugging), then you are adding 3s to 
> your downtime (this can more than double the actual figure).

One should very explicitly not run with sync_console in production.

> What was just an "annoyance" at boot can now completely wreck your 
> guests and system (not much software can tolerate a large downtime).
> 
> So I think we either want to drop the 3s pause completely or allow the 
> user to decide whether they care about it via a command line option.
> 
> I am leaning towards the former at the moment.

I'm afraid I'm -2 towards complete removal. I'm at least -1 towards
shortening of the pause, as already indicated.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Feb 15 10:50:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 Feb 2021 10:50:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85143.159643 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBbSN-00033i-Fs; Mon, 15 Feb 2021 10:50:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85143.159643; Mon, 15 Feb 2021 10:50:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBbSN-00033b-B2; Mon, 15 Feb 2021 10:50:19 +0000
Received: by outflank-mailman (input) for mailman id 85143;
 Mon, 15 Feb 2021 10:50:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lBbSM-00033W-70
 for xen-devel@lists.xenproject.org; Mon, 15 Feb 2021 10:50:18 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lBbSL-00044w-FT; Mon, 15 Feb 2021 10:50:17 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lBbSL-0002PD-9R; Mon, 15 Feb 2021 10:50:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=I45Zkb6sYCYdIE3W20B5kPPpphDexcs7TeZauDQdg7w=; b=WTo3EHDdHbj6a/wPhqyzAOPCmk
	InAB2ZdxgpBZEy+UCW+I8FJvj+sUJ3ZOmYXonyhROFXifgp5T5oGnXJZD3xgyijbvrxmlTLKTqDXX
	8u9yXQkVKF+nnrVwp4DOybvlZfTnxqLT5d4L2pABMVUribqbetZrupDQi8y917jSv+oM=;
Subject: Re: Boot time and 3 sec in warning_print
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org, =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?=
 <jgross@suse.com>, =?UTF-8?Q?Anders_T=c3=b6rnqvist?=
 <anders.tornqvist@codiax.se>
References: <fcc14668-b07d-aec8-1587-bc2d7add84da@codiax.se>
 <4f256544-7e1a-1a65-4942-04b3bc00e373@suse.com>
 <d9d4ebf7-09b3-29f2-1359-cd9db73f9c94@suse.com>
 <84a8cc60-e63d-24db-750c-39bb6049c045@xen.org>
 <29a1b91f-891c-2a9e-a5f0-b7b01296c880@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <677c66e0-6f06-569c-7447-d3bd07dcda81@xen.org>
Date: Mon, 15 Feb 2021 10:50:15 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <29a1b91f-891c-2a9e-a5f0-b7b01296c880@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Jan,

On 15/02/2021 10:41, Jan Beulich wrote:
> On 15.02.2021 11:35, Julien Grall wrote:
>> On 15/02/2021 08:38, Jan Beulich wrote:
>>> On 15.02.2021 09:13, Jürgen Groß wrote:
>>>> On 15.02.21 08:37, Anders Törnqvist wrote:
>>>>> I would like to shorten the boot time in our system if possible.
>>>>>
>>>>> In xen/common/warning.c there is warning_print(), which contains a
>>>>> 3-second loop that calls process_pending_softirqs().
>>>>>
>>>>> What would the potential problems be if that loop is radically shortened?
>>>>
>>>> A user might not notice the warning(s) printed.
>>>>
>>>> But I can see your point. I think adding a boot option for setting
>>>> another timeout value (e.g. 0) would do the job without compromising
>>>> the default case.
>>>
>>> I don't think I agree - the solution to this is to eliminate the
>>> reason leading to the warning.
>>> The delay is intentionally this way
>>> to annoy the admin and force them to take measures.
>> Given they are warnings, an admin may have assessed them and concluded
>> there is no remediation necessary.
>>
>> We encountered the same problem with LiveUpdate. If you happen to have a
>> warning (e.g. sync_console for debugging), then you are adding 3s to
>> your downtime (this can more than double the actual figure).
> 
> One very explicitly should not run with sync_console in production.

I knew it would be misinterpreted ;). I agree that sync_console must not 
be used in production.

I gave the example of sync_console because it is something that impacts 
debugging of LiveUpdate. If you have a console issue and need to add 
sync_console, then you may end up completely wrecking your platform when 
LiveUpdating.

Without the 3s delay, you have a chance to LiveUpdate and figure out 
the problem.

> 
>> What was just an "annoyance" at boot can now completely wreck your
>> guests and system (not much software can tolerate a large downtime).
>>
>> So I think we either want to drop the 3s pause completely or allow the
>> user to decide whether he/she cares about it via a command line option.
>>
>> I am leaning towards the former at the moment.
> 
> I'm afraid I'm -2 towards complete removal. I'm at least -1 towards
> shortening of the pause, as already indicated.

So how do you suggest we approach the issues I discussed above?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Feb 15 10:51:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 Feb 2021 10:51:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85150.159655 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBbTv-0003Ef-63; Mon, 15 Feb 2021 10:51:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85150.159655; Mon, 15 Feb 2021 10:51:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBbTv-0003EY-2W; Mon, 15 Feb 2021 10:51:55 +0000
Received: by outflank-mailman (input) for mailman id 85150;
 Mon, 15 Feb 2021 10:51:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3evm=HR=redhat.com=kwolf@srs-us1.protection.inumbo.net>)
 id 1lBbTt-0003ET-NG
 for xen-devel@lists.xenproject.org; Mon, 15 Feb 2021 10:51:53 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 89e54135-31a7-40fb-b1ad-76dcac7d86c7;
 Mon, 15 Feb 2021 10:51:52 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-497-en3XCfTZM9KwMZkEje0NTw-1; Mon, 15 Feb 2021 05:51:50 -0500
Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.phx2.redhat.com
 [10.5.11.11])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 0BAB31E56C;
 Mon, 15 Feb 2021 10:51:49 +0000 (UTC)
Received: from merkur.fritz.box (ovpn-113-28.ams2.redhat.com [10.36.113.28])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 19B65722C7;
 Mon, 15 Feb 2021 10:51:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 89e54135-31a7-40fb-b1ad-76dcac7d86c7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1613386312;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=hYY5Jg9wnLbXzPop0ndlZ0ZadkwesoK2qW32ai1FRls=;
	b=JPJk0qMwY3djpl0OVdlf1iwL6WmIzWugfinr3TRZof5b5zD6Owvuc6jzmq2JiihF5LL1YD
	N4e97x2xGN0peu1lfRlWzvJrZkgWx0G3ClQ/2rWsFMe5CbzS9p/TQiGHLFLgxgqt28Vaqo
	Enx/Pg3hYhIx6g5lUkcdU617RjsDH/k=
X-MC-Unique: en3XCfTZM9KwMZkEje0NTw-1
Date: Mon, 15 Feb 2021 11:51:44 +0100
From: Kevin Wolf <kwolf@redhat.com>
To: paul@xen.org
Cc: 'Roger Pau Monne' <roger.pau@citrix.com>, qemu-devel@nongnu.org,
	'Arthur Borsboom' <arthurborsboom@gmail.com>,
	'Stefano Stabellini' <sstabellini@kernel.org>,
	'Anthony Perard' <anthony.perard@citrix.com>,
	'Max Reitz' <mreitz@redhat.com>, xen-devel@lists.xenproject.org,
	qemu-block@nongnu.org
Subject: Re: [PATCH] xen-block: fix reporting of discard feature
Message-ID: <20210215105144.GG7226@merkur.fritz.box>
References: <20210118153330.82324-1-roger.pau@citrix.com>
 <00d701d6edb1$894122f0$9bc368d0$@xen.org>
MIME-Version: 1.0
In-Reply-To: <00d701d6edb1$894122f0$9bc368d0$@xen.org>
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=kwolf@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit

Am 18.01.2021 um 16:49 hat Paul Durrant geschrieben:
> > -----Original Message-----
> > From: Roger Pau Monne <roger.pau@citrix.com>
> > Sent: 18 January 2021 15:34
> > To: qemu-devel@nongnu.org
> > Cc: Roger Pau Monne <roger.pau@citrix.com>; Arthur Borsboom <arthurborsboom@gmail.com>; Stefano
> > Stabellini <sstabellini@kernel.org>; Anthony Perard <anthony.perard@citrix.com>; Paul Durrant
> > <paul@xen.org>; Kevin Wolf <kwolf@redhat.com>; Max Reitz <mreitz@redhat.com>; xen-
> > devel@lists.xenproject.org; qemu-block@nongnu.org
> > Subject: [PATCH] xen-block: fix reporting of discard feature
> > 
> > Linux blkfront expects both "discard-granularity" and
> > "discard-alignment" present on xenbus in order to properly enable the
> > feature, not exposing "discard-alignment" left some Linux blkfront
> > versions with a broken discard setup. This has also been addressed in
> > Linux with:
> > 
> > https://lore.kernel.org/lkml/20210118151528.81668-1-roger.pau@citrix.com/T/#u
> > 
> > Fix QEMU to report a "discard-alignment" of 0, in order for it to work
> > with older Linux frontends.
> > 
> > Reported-by: Arthur Borsboom <arthurborsboom@gmail.com>
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Reviewed-by: Paul Durrant <paul@xen.org>

Thanks, applied to the block branch.

Kevin



From xen-devel-bounces@lists.xenproject.org Mon Feb 15 10:57:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 Feb 2021 10:57:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85155.159667 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBbZV-0003QN-R9; Mon, 15 Feb 2021 10:57:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85155.159667; Mon, 15 Feb 2021 10:57:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBbZV-0003QG-OA; Mon, 15 Feb 2021 10:57:41 +0000
Received: by outflank-mailman (input) for mailman id 85155;
 Mon, 15 Feb 2021 10:57:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBbZU-0003Q7-6S; Mon, 15 Feb 2021 10:57:40 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBbZT-0004DP-VX; Mon, 15 Feb 2021 10:57:40 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBbZT-0001GW-O4; Mon, 15 Feb 2021 10:57:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lBbZT-0003x9-Nb; Mon, 15 Feb 2021 10:57:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=onuMpMTv3x3bWTPeegbteUE46aUPQeQzqte/3HoH/ZM=; b=1hWlAPqe3aVZkj/wxSoGaKK163
	pPjwNV4RURdKAhjmyV9nQMJeHE0kRkUSQt5gOncxI7Zr9QVR26ixpa+x16gtRhe2NT6sAUdPtxvK0
	7y9cJEfDhRSuMSUtHSVWYDF4wsVcgVDmeBRRhiDr6r+5tq/Gwje9pGaOXXzWtYdUnHYc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [linux-5.4 bisection] complete test-armhf-armhf-xl-credit2
Message-Id: <E1lBbZT-0003x9-Nb@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 15 Feb 2021 10:57:39 +0000

branch xen-unstable
xenbranch xen-unstable
job test-armhf-armhf-xl-credit2
testid guest-start

Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
  Bug introduced:  a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Bug not present: acc402fa5bf502d471d50e3d495379f093a7f9e4
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/159373/


  commit a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Author: David Woodhouse <dwmw@amazon.co.uk>
  Date:   Wed Jan 13 13:26:02 2021 +0000
  
      xen: Fix event channel callback via INTX/GSI
      
      [ Upstream commit 3499ba8198cad47b731792e5e56b9ec2a78a83a2 ]
      
      For a while, event channel notification via the PCI platform device
      has been broken, because we attempt to communicate with xenstore before
      we even have notifications working, with the xs_reset_watches() call
      in xs_init().
      
      We tend to get away with this on Xen versions below 4.0 because we avoid
      calling xs_reset_watches() anyway, because xenstore might not cope with
      reading a non-existent key. And newer Xen *does* have the vector
      callback support, so we rarely fall back to INTX/GSI delivery.
      
      To fix it, clean up a bit of the mess of xs_init() and xenbus_probe()
      startup. Call xs_init() directly from xenbus_init() only in the !XS_HVM
      case, deferring it to be called from xenbus_probe() in the XS_HVM case
      instead.
      
      Then fix up the invocation of xenbus_probe() to happen either from its
      device_initcall if the callback is available early enough, or when the
      callback is finally set up. This means that the hack of calling
      xenbus_probe() from a workqueue after the first interrupt, or directly
      from the PCI platform device setup, is no longer needed.
      
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Link: https://lore.kernel.org/r/20210113132606.422794-2-dwmw2@infradead.org
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/linux-5.4/test-armhf-armhf-xl-credit2.guest-start.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/linux-5.4/test-armhf-armhf-xl-credit2.guest-start --summary-out=tmp/159373.bisection-summary --basis-template=158387 --blessings=real,real-bisect,real-retry linux-5.4 test-armhf-armhf-xl-credit2 guest-start
Searching for failure / basis pass:
 159359 fail [host=arndale-bluewater] / 158681 [host=arndale-lakeside] 158624 [host=cubietruck-braque] 158616 [host=arndale-metrocentre] 158609 [host=cubietruck-gleizes] 158603 [host=cubietruck-metzinger] 158593 [host=arndale-westfield] 158583 [host=cubietruck-picasso] 158563 [host=arndale-lakeside] 158552 ok.
Failure / basis pass flights: 159359 / 158552
Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 5b9a4104c902d7dec14c9e3c5652a638194487c6 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2e1e8c35f3178df95d79da81ac6deec242da74c2 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 04085ec1ac05a362812e9b0c6b5a8713d7dc88ad
Basis pass d26b3110041a9fddc6c6e36398f53f7eab8cff82 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ca272b9513a6de5772e6e8ef5bbecd2e23cf9fb3 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 3487f4cf8bf5cef47a4c3918c13a502afc9891f6
Generating revisions with ./adhoc-revtuple-generator  git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git#d26b3110041a9fddc6c6e36398f53f7eab8cff82-5b9a4104c902d7dec14c9e3c5652a638194487c6 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#ca272b9513a6de5772e6e8ef5bbecd2e23cf9fb3-2e1e8c35f3178df95d79da81ac6deec242da74c2 git://xenbits.xen.org/qemu-xen.git#7ea4288\
 95af2840d85c524f0bd11a38aac308308-7ea428895af2840d85c524f0bd11a38aac308308 git://xenbits.xen.org/osstest/seabios.git#ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e-ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e git://xenbits.xen.org/xen.git#3487f4cf8bf5cef47a4c3918c13a502afc9891f6-04085ec1ac05a362812e9b0c6b5a8713d7dc88ad
Loaded 15001 nodes in revision graph
Searching for test results:
 158544 [host=cubietruck-braque]
 158552 pass d26b3110041a9fddc6c6e36398f53f7eab8cff82 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ca272b9513a6de5772e6e8ef5bbecd2e23cf9fb3 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 3487f4cf8bf5cef47a4c3918c13a502afc9891f6
 158563 [host=arndale-lakeside]
 158583 [host=cubietruck-picasso]
 158593 [host=arndale-westfield]
 158603 [host=cubietruck-metzinger]
 158609 [host=cubietruck-gleizes]
 158616 [host=arndale-metrocentre]
 158624 [host=cubietruck-braque]
 158681 [host=arndale-lakeside]
 158707 fail irrelevant
 158716 fail irrelevant
 158748 fail 131f8d8a889a5ca66a835eea82bba043ac91a7cf c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e f8708b0ed6d549d1d29b8b5cc287f1f2b642bc63
 158765 fail 131f8d8a889a5ca66a835eea82bba043ac91a7cf c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 6677b5a3577c16501fbc51a3341446905bd21c38
 158796 fail irrelevant
 158818 fail irrelevant
 158841 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158863 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158881 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158929 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea56ebf67dd55483105aa9f9996a48213e78337e 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158962 fail irrelevant
 159023 fail irrelevant
 159129 fail irrelevant
 159200 fail irrelevant
 159238 fail irrelevant
 159295 fail irrelevant
 159324 fail irrelevant
 159352 pass d26b3110041a9fddc6c6e36398f53f7eab8cff82 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ca272b9513a6de5772e6e8ef5bbecd2e23cf9fb3 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 3487f4cf8bf5cef47a4c3918c13a502afc9891f6
 159353 fail irrelevant
 159354 pass 6d57b582fb35d321ea42fe6a75f7251451a55569 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3b769c5110384fb33bcfeddced80f721ec7838cc 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 452ddbe3592b141b05a7e0676f09c8ae07f98fdd
 159355 fail 5fa6987258a757a9fae70ff28188dff07f01bf50 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159356 pass 60066d5181be6448def7e97b9ad0fc2741f6c1bb c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159339 fail 5b9a4104c902d7dec14c9e3c5652a638194487c6 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2e1e8c35f3178df95d79da81ac6deec242da74c2 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 04085ec1ac05a362812e9b0c6b5a8713d7dc88ad
 159357 pass 6b59bd9eea08dea84df61bfa847579f14213684c c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159358 fail 5b9a4104c902d7dec14c9e3c5652a638194487c6 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2e1e8c35f3178df95d79da81ac6deec242da74c2 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 04085ec1ac05a362812e9b0c6b5a8713d7dc88ad
 159360 pass 5a1d7bb7d333849eb7d3ab5ebfbf9805b2cd46c9 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159363 pass c074680653e27f19eb584522df06758607277f77 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159365 pass acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159368 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159369 pass acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159370 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159359 fail 5b9a4104c902d7dec14c9e3c5652a638194487c6 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2e1e8c35f3178df95d79da81ac6deec242da74c2 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 04085ec1ac05a362812e9b0c6b5a8713d7dc88ad
 159371 pass acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159373 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
Searching for interesting versions
 Result found: flight 158552 (pass), for basis pass
 Result found: flight 159339 (fail), for basis failure (at ancestor ~280)
 Repro found: flight 159352 (pass), for basis pass
 Repro found: flight 159358 (fail), for basis failure
 0 revisions at acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
No revisions left to test, checking graph state.
 Result found: flight 159365 (pass), for last pass
 Result found: flight 159368 (fail), for first failure
 Repro found: flight 159369 (pass), for last pass
 Repro found: flight 159370 (fail), for first failure
 Repro found: flight 159371 (pass), for last pass
 Repro found: flight 159373 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
  Bug introduced:  a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Bug not present: acc402fa5bf502d471d50e3d495379f093a7f9e4
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/159373/


  commit a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Author: David Woodhouse <dwmw@amazon.co.uk>
  Date:   Wed Jan 13 13:26:02 2021 +0000
  
      xen: Fix event channel callback via INTX/GSI
      
      [ Upstream commit 3499ba8198cad47b731792e5e56b9ec2a78a83a2 ]
      
      For a while, event channel notification via the PCI platform device
      has been broken, because we attempt to communicate with xenstore before
      we even have notifications working, with the xs_reset_watches() call
      in xs_init().
      
      We tend to get away with this on Xen versions below 4.0 because we avoid
      calling xs_reset_watches() anyway, because xenstore might not cope with
      reading a non-existent key. And newer Xen *does* have the vector
      callback support, so we rarely fall back to INTX/GSI delivery.
      
      To fix it, clean up a bit of the mess of xs_init() and xenbus_probe()
      startup. Call xs_init() directly from xenbus_init() only in the !XS_HVM
      case, deferring it to be called from xenbus_probe() in the XS_HVM case
      instead.
      
      Then fix up the invocation of xenbus_probe() to happen either from its
      device_initcall if the callback is available early enough, or when the
      callback is finally set up. This means that the hack of calling
      xenbus_probe() from a workqueue after the first interrupt, or directly
      from the PCI platform device setup, is no longer needed.
      
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Link: https://lore.kernel.org/r/20210113132606.422794-2-dwmw2@infradead.org
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>

neato: graph is too large for cairo-renderer bitmaps. Scaling by 0.883041 to fit
pnmtopng: 85 colors found
Revision graph left in /home/logs/results/bisect/linux-5.4/test-armhf-armhf-xl-credit2.guest-start.{dot,ps,png,html,svg}.
----------------------------------------
159373: tolerable ALL FAIL

flight 159373 linux-5.4 real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/159373/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-armhf-armhf-xl-credit2  14 guest-start             fail baseline untested


jobs:
 test-armhf-armhf-xl-credit2                                  fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Mon Feb 15 11:34:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 Feb 2021 11:34:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85193.159743 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBc9A-0007P7-MO; Mon, 15 Feb 2021 11:34:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85193.159743; Mon, 15 Feb 2021 11:34:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBc9A-0007P0-JE; Mon, 15 Feb 2021 11:34:32 +0000
Received: by outflank-mailman (input) for mailman id 85193;
 Mon, 15 Feb 2021 11:34:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lBc99-0007Ov-8C
 for xen-devel@lists.xenproject.org; Mon, 15 Feb 2021 11:34:31 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lBc98-0004oU-2k; Mon, 15 Feb 2021 11:34:30 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lBc97-0005kE-Se; Mon, 15 Feb 2021 11:34:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=zTO4fwO09MK8vZivCZ95NSeaukYvebOxV5ual5fKQ2g=; b=1JockMy1ZTRbntXg0v55DyVQ+N
	yAcuIzi/Cv4PRGpSwmMYlRLCx/7/4lrqL0WQAGHGoDAvw9yfFeyHbNuV8SrYizvhJZK6aickKWvCx
	DCBzoBk271zbC3pI36lfDJnBFPuYllGN0pdsKTUK0I5tdFIucuXteQjDZUPZCYLF4a1E=;
Subject: Re: [PATCH v2] xen/arm: fix gnttab_need_iommu_mapping
To: Rahul Singh <Rahul.Singh@arm.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 "lucmiccio@gmail.com" <lucmiccio@gmail.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
References: <20210208184932.23468-1-sstabellini@kernel.org>
 <B96B5E21-0600-4664-899D-D38A18DE7A8C@arm.com>
 <alpine.DEB.2.21.2102091226560.8948@sstabellini-ThinkPad-T480s>
 <EFFD35EA-378B-4C5C-8485-7EA5265E89E4@arm.com>
 <4e4f7d25-6f5f-1016-b1c9-7aa902d637b8@xen.org>
 <ECC82E19-3504-4E0E-B3EA-D0E46DD842C6@arm.com>
 <c573b3a0-186d-626e-6670-f8fc28285e3d@xen.org>
 <BFC5858A-3631-48E1-AB87-40EECF95FA66@arm.com>
 <489c1b89-67f0-5d47-d527-3ea580b7cc43@xen.org>
 <DC7F1705-54B3-4543-8222-E7BCF1A501F7@arm.com>
 <acbcdd06-83b1-28ff-ea7e-2ce1ba681ac1@xen.org>
 <8A85CE97-A38D-4580-BBDD-38DD0542A3F8@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <aeb8fe57-b2c8-b87f-d80e-ea2dae12315e@xen.org>
Date: Mon, 15 Feb 2021 11:34:28 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <8A85CE97-A38D-4580-BBDD-38DD0542A3F8@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Rahul,

On 15/02/2021 09:12, Rahul Singh wrote:
>> Thanks for the testing. I noticed that my patch doesn't build because arm_iommu_unmap_page() doesn't have a parameter mfn. Can you confirm whether you had to replace mfn with _mfn(dfn_x(dfn))?
> 
> Yes I have to replace the mfn with _mfn(dfn_x(dfn)) to test the patch.

Thanks.

In the future, I would suggest mentioning such changes when reporting your testing.

This makes it easier to figure out how the patch was tested and whether
the fix was the same.
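For context, the conversion discussed above can be sketched with minimal models of Xen's typed frame-number wrappers (assumed shapes for illustration; the real definitions live in the Xen headers):

```c
#include <assert.h>
#include <stdint.h>

/* Minimal models of Xen's typed frame-number wrappers (assumed shapes;
 * not the real definitions). */
typedef struct { uint64_t dfn; } dfn_t;  /* device frame number (IOMMU side) */
typedef struct { uint64_t mfn; } mfn_t;  /* machine frame number */

static dfn_t _dfn(uint64_t d) { dfn_t x = { d }; return x; }
static uint64_t dfn_x(dfn_t d) { return d.dfn; }
static mfn_t _mfn(uint64_t m) { mfn_t x = { m }; return x; }
static uint64_t mfn_x(mfn_t m) { return m.mfn; }

/* The workaround discussed above: where the unmap path only has a dfn
 * in scope, an mfn argument can be synthesized as _mfn(dfn_x(dfn)). */
static mfn_t mfn_from_dfn(dfn_t dfn)
{
    return _mfn(dfn_x(dfn));
}
```

The typed wrappers exist so that dfn/mfn values cannot be mixed up silently; the `_mfn(dfn_x(dfn))` spelling makes the deliberate crossing of that type boundary visible in the code.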

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Feb 15 11:38:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 Feb 2021 11:38:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85197.159755 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBcCr-0007ZV-7L; Mon, 15 Feb 2021 11:38:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85197.159755; Mon, 15 Feb 2021 11:38:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBcCr-0007ZO-49; Mon, 15 Feb 2021 11:38:21 +0000
Received: by outflank-mailman (input) for mailman id 85197;
 Mon, 15 Feb 2021 11:38:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lBcCp-0007ZH-L9
 for xen-devel@lists.xenproject.org; Mon, 15 Feb 2021 11:38:19 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lBcCo-0004sq-NR; Mon, 15 Feb 2021 11:38:18 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lBcCo-00062n-F5; Mon, 15 Feb 2021 11:38:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=VKBO3d9fdC0qpQyUBH4WAjr47OsJtTcMzoOQwIP4sWU=; b=i9b9jxNb/SWGRmzHNhjyDumdbR
	FCL8ChNhYy0IHedm940u5emVicJvUUwpWe+um6VPE8zxxsBIn380X2CTYSu9rd8MVKHZnjyqUsBI/
	nHmrNe2gesBfB8b2RqxLbxWhVDYETDvfqxQSYZRmjdASUEzWrKjKVkRLbV6Wkp0ft9y0=;
Subject: Re: [for-4.15][PATCH v2 1/5] xen/x86: p2m: Don't map the special
 pages in the IOMMU page-tables
To: Jan Beulich <jbeulich@suse.com>
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20210209152816.15792-1-julien@xen.org>
 <20210209152816.15792-2-julien@xen.org>
 <d2485d44-180e-499c-d917-80da3486d98e@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <797ac673-9c7b-ff39-1266-94c96bde0f26@xen.org>
Date: Mon, 15 Feb 2021 11:38:16 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <d2485d44-180e-499c-d917-80da3486d98e@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 10/02/2021 11:26, Jan Beulich wrote:
> On 09.02.2021 16:28, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> Currently, the IOMMU page-tables will be populated early in the domain
>> creation if the hardware is able to virtualize the local APIC. However,
>> the IOMMU page tables will not be freed during early failure and will
>> result in a leak.
>>
>> An assigned device should not need to DMA into the vLAPIC page, so we
>> can avoid mapping the page in the IOMMU page-tables.
> 
> Here and below, may I ask that you use the correct term "APIC
> access page", as there are other pages involved in vLAPIC
> handling (in particular the virtual APIC page, which is where
> accesses that translate to the APIC access page in EPT actually
> go).
> 
>> This statement is also true for any special pages (the vLAPIC page is
>> one of them). So take the opportunity to prevent the mapping for all
>> of them.
> 
> I probably should have realized this earlier, but there is a
> downside to this: A guest wanting to core dump itself may want
> to dump e.g. shared info and vcpu info pages. Hence ...
> 
>> --- a/xen/include/asm-x86/p2m.h
>> +++ b/xen/include/asm-x86/p2m.h
>> @@ -919,6 +919,10 @@ static inline unsigned int p2m_get_iommu_flags(p2m_type_t p2mt, mfn_t mfn)
>>   {
>>       unsigned int flags;
>>   
>> +    /* Don't map special pages in the IOMMU page-tables. */
>> +    if ( mfn_valid(mfn) && is_special_page(mfn_to_page(mfn)) )
>> +        return 0;
> 
> ... instead of is_special_page() I think you want to limit the
> check here to seeing whether PGC_extra is set.
> 
> But as said on irc, since this crude way of setting up the APIC
> access page is now firmly a problem, I intend to try to redo it.

Given this series needs to go in 4.15 (we would introduce a 0-day
regression otherwise), could you clarify whether your patch [1] is
intended to replace this one in 4.15?

Cheers,

[1] <1b6a411b-84e7-bfb1-647e-511a13df838c@suse.com>

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Feb 15 12:29:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 Feb 2021 12:29:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85222.159781 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBd0B-0003kj-KQ; Mon, 15 Feb 2021 12:29:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85222.159781; Mon, 15 Feb 2021 12:29:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBd0B-0003kc-HK; Mon, 15 Feb 2021 12:29:19 +0000
Received: by outflank-mailman (input) for mailman id 85222;
 Mon, 15 Feb 2021 12:29:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=X1yw=HR=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lBd09-0003kX-TO
 for xen-devel@lists.xenproject.org; Mon, 15 Feb 2021 12:29:17 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 01a0a0ac-0f89-48cc-a5b6-3903e03c6696;
 Mon, 15 Feb 2021 12:29:15 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 82473B0BE;
 Mon, 15 Feb 2021 12:29:14 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 01a0a0ac-0f89-48cc-a5b6-3903e03c6696
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613392154; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=woplbVM4gqPTxKjJtko/zRXKYmPCYa5pT66zZOsos7g=;
	b=nMQASbnoOX854rzI5E48XQlHkouFK6UWUbIb1IsOP29b8VT4IV0vHuFsuyU+uWZpsRZABr
	dG/2iw0bGsUHPEjParHD5y6nLe8I+gYuFdppvKEY2T62PRk2AqkZABRdk9QMwAlrvumO4/
	K8TWbSw/0yDHw9I7DojlUggmu++IweQ=
Subject: Re: Boot time and 3 sec in warning_print
To: Julien Grall <julien@xen.org>
Cc: xen-devel@lists.xenproject.org, =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?=
 <jgross@suse.com>, =?UTF-8?Q?Anders_T=c3=b6rnqvist?=
 <anders.tornqvist@codiax.se>
References: <fcc14668-b07d-aec8-1587-bc2d7add84da@codiax.se>
 <4f256544-7e1a-1a65-4942-04b3bc00e373@suse.com>
 <d9d4ebf7-09b3-29f2-1359-cd9db73f9c94@suse.com>
 <84a8cc60-e63d-24db-750c-39bb6049c045@xen.org>
 <29a1b91f-891c-2a9e-a5f0-b7b01296c880@suse.com>
 <677c66e0-6f06-569c-7447-d3bd07dcda81@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <6c850ae1-62a9-4adb-fb94-ed90ee1780ff@suse.com>
Date: Mon, 15 Feb 2021 13:29:17 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <677c66e0-6f06-569c-7447-d3bd07dcda81@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 15.02.2021 11:50, Julien Grall wrote:
> Hi Jan,
> 
> On 15/02/2021 10:41, Jan Beulich wrote:
>> On 15.02.2021 11:35, Julien Grall wrote:
>>> On 15/02/2021 08:38, Jan Beulich wrote:
>>>> On 15.02.2021 09:13, Jürgen Groß wrote:
>>>>> On 15.02.21 08:37, Anders Törnqvist wrote:
>>>>>> I would like to shorten the boot time in our system if possible.
>>>>>>
>>>>>> In xen/common/warning.c there is warning_print() which contains a
>>>>>> 3-second loop that calls process_pending_softirqs().
>>>>>>
>>>>>> What would the potential problems be if that loop is radically shortened?
>>>>>
>>>>> A user might not notice the warning(s) printed.
>>>>>
>>>>> But I can see your point. I think adding a boot option for setting
>>>>> another timeout value (e.g. 0) would do the job without compromising
>>>>> the default case.
>>>>
>>>> I don't think I agree - the solution to this is to eliminate the
>>>> reason leading to the warning.
>>>>
>>>> The delay is intentionally this way
>>>> to annoy the admin and force them to take measures.
>>> Given they are warnings, an admin may have assessed them and considered
>>> that no remediation is necessary.
>>>
>>> We encountered the same problem with LiveUpdate. If you happen to have a
>>> warning (e.g. sync_console for debugging), then you are adding 3s to
>>> your downtime (this can more than double the actual figure).
>>
>> One very explicitly should not run with sync_console in production.
> 
> I knew it would be misinterpreted ;). I agree that sync_console must not 
> be used in production.
> 
> I gave the example of sync_console because this is something impacting 
> debugging of LiveUpdate. If you have a console issue and need to add 
> sync_console, then you may end up completely wrecking your platform 
> when LiveUpdating.
> 
> Without the 3s delay, you have a chance to LiveUpdate and figure 
> out the problem.

I'm afraid I don't see how LU comes into the picture here: We're
talking about a boot time delay. The delay doesn't recur at any
point at runtime.
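For reference, the boot-parameter variant Jürgen suggested earlier in the thread could look roughly like this sketch (everything here is mocked for illustration; in Xen the variable would be wired up via integer_param() and the loop would poll NOW() against a deadline):

```c
#include <assert.h>

/* Hypothetical "warning-timeout=" boot option; default keeps the
 * current 3-second behaviour, 0 would skip the delay entirely. */
static unsigned int warning_timeout_s = 3;

static unsigned int softirq_polls;
static void process_pending_softirqs(void) { softirq_polls++; }

/* Returns how many times softirqs were polled, for demonstration. */
static unsigned int warning_delay(unsigned int polls_per_second)
{
    unsigned int i, total = warning_timeout_s * polls_per_second;

    for ( i = 0; i < total; i++ )
        process_pending_softirqs();

    return total;
}
```

This keeps the annoy-the-admin default that Jan argues for while letting a deployment that has assessed its warnings opt out of the delay.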

Jan


From xen-devel-bounces@lists.xenproject.org Mon Feb 15 12:37:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 Feb 2021 12:37:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85229.159796 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBd7a-0004fe-EL; Mon, 15 Feb 2021 12:36:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85229.159796; Mon, 15 Feb 2021 12:36:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBd7a-0004fX-BM; Mon, 15 Feb 2021 12:36:58 +0000
Received: by outflank-mailman (input) for mailman id 85229;
 Mon, 15 Feb 2021 12:36:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=X1yw=HR=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lBd7Y-0004fS-No
 for xen-devel@lists.xenproject.org; Mon, 15 Feb 2021 12:36:56 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1907643d-8205-443d-88be-65ba63d13eed;
 Mon, 15 Feb 2021 12:36:55 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id DFB47AC32;
 Mon, 15 Feb 2021 12:36:54 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1907643d-8205-443d-88be-65ba63d13eed
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613392615; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=t0nUKBi1kY1NyjDocwmWML3EhkrFRPb0UgcYYprp/F4=;
	b=p+TTTVcNIfb5htzy2G/20udAs0rMo2OOx0p9hJKTHLw0jkrtLY9NVCO8yts/4ehdPwx7nt
	Ql+gQzk5vTgfCHLMEU7Tx6jfPSZeBxYPFXwZ4ogeg8q59x9mtn3bwoy1Yw/7edqRAS5Nth
	K1Fh++lVRsjq276m3RXaECrxx3YeHE0=
Subject: Re: [for-4.15][PATCH v2 1/5] xen/x86: p2m: Don't map the special
 pages in the IOMMU page-tables
To: Julien Grall <julien@xen.org>
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20210209152816.15792-1-julien@xen.org>
 <20210209152816.15792-2-julien@xen.org>
 <d2485d44-180e-499c-d917-80da3486d98e@suse.com>
 <797ac673-9c7b-ff39-1266-94c96bde0f26@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d394dc9c-d02c-f24a-7414-ec626ac5e82b@suse.com>
Date: Mon, 15 Feb 2021 13:36:58 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <797ac673-9c7b-ff39-1266-94c96bde0f26@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 15.02.2021 12:38, Julien Grall wrote:
> On 10/02/2021 11:26, Jan Beulich wrote:
>> On 09.02.2021 16:28, Julien Grall wrote:
>>> From: Julien Grall <jgrall@amazon.com>
>>>
>>> Currently, the IOMMU page-tables will be populated early in the domain
>>> creation if the hardware is able to virtualize the local APIC. However,
>>> the IOMMU page tables will not be freed during early failure and will
>>> result in a leak.
>>>
>>> An assigned device should not need to DMA into the vLAPIC page, so we
>>> can avoid mapping the page in the IOMMU page-tables.
>>
>> Here and below, may I ask that you use the correct term "APIC
>> access page", as there are other pages involved in vLAPIC
>> handling (in particular the virtual APIC page, which is where
>> accesses actually go to that translate to the APIC access page
>> in EPT).
>>
>>> This statement is also true for any special pages (the vLAPIC page is
>>> one of them). So take the opportunity to prevent the mapping for all
>>> of them.
>>
>> I probably should have realized this earlier, but there is a
>> downside to this: A guest wanting to core dump itself may want
>> to dump e.g. shared info and vcpu info pages. Hence ...
>>
>>> --- a/xen/include/asm-x86/p2m.h
>>> +++ b/xen/include/asm-x86/p2m.h
>>> @@ -919,6 +919,10 @@ static inline unsigned int p2m_get_iommu_flags(p2m_type_t p2mt, mfn_t mfn)
>>>   {
>>>       unsigned int flags;
>>>   
>>> +    /* Don't map special pages in the IOMMU page-tables. */
>>> +    if ( mfn_valid(mfn) && is_special_page(mfn_to_page(mfn)) )
>>> +        return 0;
>>
>> ... instead of is_special_page() I think you want to limit the
>> check here to seeing whether PGC_extra is set.
>>
>> But as said on irc, since this crude way of setting up the APIC
>> access page is now firmly a problem, I intend to try to redo it.
> 
> Given this series needs to go in 4.15 (we would introduce a 0-day 
> otherwise), could you clarify whether your patch [1] is intended to 
> replace this one in 4.15?

Yes, that or a cut-down variant (simply moving the invocation of
set_mmio_p2m_entry()), all the more so since the controversy there
continued regarding the adjustment to p2m_get_iommu_flags(). I
did indicate there that I've dropped it for v2.

> [1] <1b6a411b-84e7-bfb1-647e-511a13df838c@suse.com>

Given the context I was able to guess what mail you refer to, but
I would very much like to ask you and anyone else to provide links
rather than mail IDs as references. Not every mail UI allows
looking up by ID.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Feb 15 12:55:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 Feb 2021 12:55:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85235.159809 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBdOj-0006Qe-Vt; Mon, 15 Feb 2021 12:54:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85235.159809; Mon, 15 Feb 2021 12:54:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBdOj-0006QX-SE; Mon, 15 Feb 2021 12:54:41 +0000
Received: by outflank-mailman (input) for mailman id 85235;
 Mon, 15 Feb 2021 12:54:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lBdOh-0006QS-Rx
 for xen-devel@lists.xenproject.org; Mon, 15 Feb 2021 12:54:39 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lBdOg-000674-OL; Mon, 15 Feb 2021 12:54:38 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lBdOg-0003m2-D5; Mon, 15 Feb 2021 12:54:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=EijVhNi2kvori+/sMggkz8otUpdHhzbDaoiFWULhLMY=; b=HCEarl/G8nH3TpEbnaIYVdmu6M
	6ZafW0YAejjkJ1+6e0OSwiwwY8gC75dns6PzhcyLt/Lm/5EdhATVETA90p7FuyR/ryzT/C9/VUYOn
	DJVYZhAYKXOfYnwF7EY2JTRNcMpOnNHjaBzSd6XMQ2sMqV+LsGl7k7PJ+eSX5iex7AOU=;
Subject: Re: [for-4.15][PATCH v2 1/5] xen/x86: p2m: Don't map the special
 pages in the IOMMU page-tables
To: Jan Beulich <jbeulich@suse.com>
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20210209152816.15792-1-julien@xen.org>
 <20210209152816.15792-2-julien@xen.org>
 <d2485d44-180e-499c-d917-80da3486d98e@suse.com>
 <797ac673-9c7b-ff39-1266-94c96bde0f26@xen.org>
 <d394dc9c-d02c-f24a-7414-ec626ac5e82b@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <4ddf0f24-0379-e724-84e1-9b296167e000@xen.org>
Date: Mon, 15 Feb 2021 12:54:36 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <d394dc9c-d02c-f24a-7414-ec626ac5e82b@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 15/02/2021 12:36, Jan Beulich wrote:
> On 15.02.2021 12:38, Julien Grall wrote:
>> On 10/02/2021 11:26, Jan Beulich wrote:
>>> On 09.02.2021 16:28, Julien Grall wrote:
>>>> From: Julien Grall <jgrall@amazon.com>
>>>>
>>>> Currently, the IOMMU page-tables will be populated early in the domain
>>>> creation if the hardware is able to virtualize the local APIC. However,
>>>> the IOMMU page tables will not be freed during early failure and will
>>>> result in a leak.
>>>>
>>>> An assigned device should not need to DMA into the vLAPIC page, so we
>>>> can avoid mapping the page in the IOMMU page-tables.
>>>
>>> Here and below, may I ask that you use the correct term "APIC
>>> access page", as there are other pages involved in vLAPIC
>>> handling (in particular the virtual APIC page, which is where
>>> accesses actually go to that translate to the APIC access page
>>> in EPT).
>>>
>>>> This statement is also true for any special pages (the vLAPIC page is
>>>> one of them). So take the opportunity to prevent the mapping for all
>>>> of them.
>>>
>>> I probably should have realized this earlier, but there is a
>>> downside to this: A guest wanting to core dump itself may want
>>> to dump e.g. shared info and vcpu info pages. Hence ...
>>>
>>>> --- a/xen/include/asm-x86/p2m.h
>>>> +++ b/xen/include/asm-x86/p2m.h
>>>> @@ -919,6 +919,10 @@ static inline unsigned int p2m_get_iommu_flags(p2m_type_t p2mt, mfn_t mfn)
>>>>    {
>>>>        unsigned int flags;
>>>>    
>>>> +    /* Don't map special pages in the IOMMU page-tables. */
>>>> +    if ( mfn_valid(mfn) && is_special_page(mfn_to_page(mfn)) )
>>>> +        return 0;
>>>
>>> ... instead of is_special_page() I think you want to limit the
>>> check here to seeing whether PGC_extra is set.
>>>
>>> But as said on irc, since this crude way of setting up the APIC
>>> access page is now firmly a problem, I intend to try to redo it.
>>
>> Given this series needs to go in 4.15 (we would introduce a 0-day
>> otherwise), could you clarify whether your patch [1] is intended to
>> replace this one in 4.15?
> 
> Yes, that or a cut down variant (simply moving the invocation of
> set_mmio_p2m_entry()). The more that there the controversy
> continued regarding the adjustment to p2m_get_iommu_flags(). I
> did indicate there that I've dropped it for v2.

Do you have a link to v2? I would like to try with my series.

> 
>> [1] <1b6a411b-84e7-bfb1-647e-511a13df838c@suse.com>
> 
> Given the context I was able to guess what mail you refer to, but
> I would very much like to ask you and anyone else to provide links
> rather than mail IDs as references. Not every mail UI allows
> looking up by ID.

It is pretty trivial nowadays to search for a message by ID online:

https://lore.kernel.org/xen-devel/<message-id>
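As a concrete (hypothetical) usage of that scheme, resolving the message-ID Julien cited earlier in this thread:

```shell
# Turn a bare Message-ID (angle brackets stripped) into a lore URL.
mid='1b6a411b-84e7-bfb1-647e-511a13df838c@suse.com'
url="https://lore.kernel.org/xen-devel/${mid}/"
echo "$url"
```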

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Feb 15 13:14:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 Feb 2021 13:14:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85238.159820 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBdha-0008GM-Gc; Mon, 15 Feb 2021 13:14:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85238.159820; Mon, 15 Feb 2021 13:14:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBdha-0008GF-DZ; Mon, 15 Feb 2021 13:14:10 +0000
Received: by outflank-mailman (input) for mailman id 85238;
 Mon, 15 Feb 2021 13:14:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=X1yw=HR=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lBdhZ-0008GA-M5
 for xen-devel@lists.xenproject.org; Mon, 15 Feb 2021 13:14:09 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a501e710-ef7f-4fb1-a631-7033aba1b629;
 Mon, 15 Feb 2021 13:14:08 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5DF7AAF9C;
 Mon, 15 Feb 2021 13:14:07 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a501e710-ef7f-4fb1-a631-7033aba1b629
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613394847; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=6pf2KbnvElpgSOm5vSIgPcPiZ3O1yDUXCyzNOzGlsEk=;
	b=T75T/bIB/Qo5O1w1iaveTr/tGByjn3YIw6UyRH017Z00Gq0bO0qmm7jE3QXYATFNg4xfEe
	nLopvAMSmt46Fr2EWAerbcmnYpBt/ktGHBMEMS5amOKfC5861ellp9NmWwfVipwHIiyB64
	05j7ZUFt+XUJZDA73zhKwDEn/tNV1aU=
Subject: Re: [for-4.15][PATCH v2 1/5] xen/x86: p2m: Don't map the special
 pages in the IOMMU page-tables
To: Julien Grall <julien@xen.org>
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20210209152816.15792-1-julien@xen.org>
 <20210209152816.15792-2-julien@xen.org>
 <d2485d44-180e-499c-d917-80da3486d98e@suse.com>
 <797ac673-9c7b-ff39-1266-94c96bde0f26@xen.org>
 <d394dc9c-d02c-f24a-7414-ec626ac5e82b@suse.com>
 <4ddf0f24-0379-e724-84e1-9b296167e000@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <237bb7b8-038e-4f10-98e8-559edc764f59@suse.com>
Date: Mon, 15 Feb 2021 14:14:10 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <4ddf0f24-0379-e724-84e1-9b296167e000@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 15.02.2021 13:54, Julien Grall wrote:
> On 15/02/2021 12:36, Jan Beulich wrote:
>> On 15.02.2021 12:38, Julien Grall wrote:
>>> Given this series needs to go in 4.15 (we would introduce a 0-day
>>> otherwise), could you clarify whether your patch [1] is intended to
>>> replace this one in 4.15?
>>
>> Yes, that or a cut-down variant (simply moving the invocation of
>> set_mmio_p2m_entry()), all the more so since the controversy there
>> continued regarding the adjustment to p2m_get_iommu_flags(). I
>> did indicate there that I've dropped it for v2.
> 
> Do you have a link to v2? I would like to try with my series.

I didn't post it yet, as I didn't consider the v1 discussion
settled so far. The intermediate version I have at present is
below.

Jan

VMX: use a single, global APIC access page

The address of this page is used by the CPU only to recognize when to
access the virtual APIC page instead. No accesses would ever go to this
page. It only needs to be present in the (CPU) page tables so that
address translation will produce its address as result for respective
accesses.

By making this page global, we also eliminate the need to refcount it,
or to assign it to any domain in the first place.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Avoid insertion when !has_vlapic(). Split off change to
    p2m_get_iommu_flags().
---
Hooking p2m insertion onto arch_domain_creation_finished() isn't very
nice, but I couldn't find any better hook (nor a good place where to
introduce a new one). In particular there look to be no hvm_funcs hooks
being used on a domain-wide basis (except for init/destroy of course).
I did consider connecting this to the setting of HVM_PARAM_IDENT_PT, but
considered this no better, all the more so since the tool stack could be
smarter and avoid setting that param when not needed.

I did further consider not allocating any real page at all, but just
using the address of some unpopulated space (which would require
announcing this page as reserved to Dom0, so it wouldn't put any PCI
MMIO BARs there). But I thought this would be too controversial, because
of the possible risks associated with this.

--- unstable.orig/xen/arch/x86/domain.c
+++ unstable/xen/arch/x86/domain.c
@@ -1005,6 +1005,8 @@ int arch_domain_soft_reset(struct domain
 
 void arch_domain_creation_finished(struct domain *d)
 {
+    if ( is_hvm_domain(d) )
+        hvm_domain_creation_finished(d);
 }
 
 #define xen_vcpu_guest_context vcpu_guest_context
--- unstable.orig/xen/arch/x86/hvm/vmx/vmx.c
+++ unstable/xen/arch/x86/hvm/vmx/vmx.c
@@ -66,8 +66,7 @@ boolean_param("force-ept", opt_force_ept
 static void vmx_ctxt_switch_from(struct vcpu *v);
 static void vmx_ctxt_switch_to(struct vcpu *v);
 
-static int  vmx_alloc_vlapic_mapping(struct domain *d);
-static void vmx_free_vlapic_mapping(struct domain *d);
+static int alloc_vlapic_mapping(void);
 static void vmx_install_vlapic_mapping(struct vcpu *v);
 static void vmx_update_guest_cr(struct vcpu *v, unsigned int cr,
                                 unsigned int flags);
@@ -78,6 +77,8 @@ static int vmx_msr_read_intercept(unsign
 static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content);
 static void vmx_invlpg(struct vcpu *v, unsigned long linear);
 
+static mfn_t __read_mostly apic_access_mfn;
+
 /* Values for domain's ->arch.hvm_domain.pi_ops.flags. */
 #define PI_CSW_FROM (1u << 0)
 #define PI_CSW_TO   (1u << 1)
@@ -401,7 +402,6 @@ static int vmx_domain_initialise(struct
         .to   = vmx_ctxt_switch_to,
         .tail = vmx_do_resume,
     };
-    int rc;
 
     d->arch.ctxt_switch = &csw;
 
@@ -411,21 +411,16 @@ static int vmx_domain_initialise(struct
      */
     d->arch.hvm.vmx.exec_sp = is_hardware_domain(d) || opt_ept_exec_sp;
 
-    if ( !has_vlapic(d) )
-        return 0;
-
-    if ( (rc = vmx_alloc_vlapic_mapping(d)) != 0 )
-        return rc;
-
     return 0;
 }
 
-static void vmx_domain_relinquish_resources(struct domain *d)
+static void domain_creation_finished(struct domain *d)
 {
-    if ( !has_vlapic(d) )
-        return;
 
-    vmx_free_vlapic_mapping(d);
+    if ( has_vlapic(d) && !mfn_eq(apic_access_mfn, _mfn(0)) &&
+         set_mmio_p2m_entry(d, gaddr_to_gfn(APIC_DEFAULT_PHYS_BASE),
+                            apic_access_mfn, PAGE_ORDER_4K) )
+        domain_crash(d);
 }
 
 static int vmx_vcpu_initialise(struct vcpu *v)
@@ -2264,7 +2259,7 @@ static struct hvm_function_table __initd
     .cpu_up_prepare       = vmx_cpu_up_prepare,
     .cpu_dead             = vmx_cpu_dead,
     .domain_initialise    = vmx_domain_initialise,
-    .domain_relinquish_resources = vmx_domain_relinquish_resources,
+    .domain_creation_finished = domain_creation_finished,
     .vcpu_initialise      = vmx_vcpu_initialise,
     .vcpu_destroy         = vmx_vcpu_destroy,
     .save_cpu_ctxt        = vmx_save_vmcs_ctxt,
@@ -2503,7 +2498,7 @@ const struct hvm_function_table * __init
 {
     set_in_cr4(X86_CR4_VMXE);
 
-    if ( vmx_vmcs_init() )
+    if ( vmx_vmcs_init() || alloc_vlapic_mapping() )
     {
         printk("VMX: failed to initialise.\n");
         return NULL;
@@ -3064,7 +3059,7 @@ gp_fault:
     return X86EMUL_EXCEPTION;
 }
 
-static int vmx_alloc_vlapic_mapping(struct domain *d)
+static int __init alloc_vlapic_mapping(void)
 {
     struct page_info *pg;
     mfn_t mfn;
@@ -3072,53 +3067,28 @@ static int vmx_alloc_vlapic_mapping(stru
     if ( !cpu_has_vmx_virtualize_apic_accesses )
         return 0;
 
-    pg = alloc_domheap_page(d, MEMF_no_refcount);
+    pg = alloc_domheap_page(NULL, 0);
     if ( !pg )
         return -ENOMEM;
 
-    if ( !get_page_and_type(pg, d, PGT_writable_page) )
-    {
-        /*
-         * The domain can't possibly know about this page yet, so failure
-         * here is a clear indication of something fishy going on.
-         */
-        domain_crash(d);
-        return -ENODATA;
-    }
-
     mfn = page_to_mfn(pg);
     clear_domain_page(mfn);
-    d->arch.hvm.vmx.apic_access_mfn = mfn;
+    apic_access_mfn = mfn;
 
-    return set_mmio_p2m_entry(d, gaddr_to_gfn(APIC_DEFAULT_PHYS_BASE), mfn,
-                              PAGE_ORDER_4K);
-}
-
-static void vmx_free_vlapic_mapping(struct domain *d)
-{
-    mfn_t mfn = d->arch.hvm.vmx.apic_access_mfn;
-
-    d->arch.hvm.vmx.apic_access_mfn = _mfn(0);
-    if ( !mfn_eq(mfn, _mfn(0)) )
-    {
-        struct page_info *pg = mfn_to_page(mfn);
-
-        put_page_alloc_ref(pg);
-        put_page_and_type(pg);
-    }
+    return 0;
 }
 
 static void vmx_install_vlapic_mapping(struct vcpu *v)
 {
     paddr_t virt_page_ma, apic_page_ma;
 
-    if ( mfn_eq(v->domain->arch.hvm.vmx.apic_access_mfn, _mfn(0)) )
+    if ( !has_vlapic(v->domain) || mfn_eq(apic_access_mfn, _mfn(0)) )
         return;
 
     ASSERT(cpu_has_vmx_virtualize_apic_accesses);
 
     virt_page_ma = page_to_maddr(vcpu_vlapic(v)->regs_page);
-    apic_page_ma = mfn_to_maddr(v->domain->arch.hvm.vmx.apic_access_mfn);
+    apic_page_ma = mfn_to_maddr(apic_access_mfn);
 
     vmx_vmcs_enter(v);
     __vmwrite(VIRTUAL_APIC_PAGE_ADDR, virt_page_ma);
--- unstable.orig/xen/include/asm-x86/hvm/hvm.h
+++ unstable/xen/include/asm-x86/hvm/hvm.h
@@ -106,6 +106,7 @@ struct hvm_function_table {
      * Initialise/destroy HVM domain/vcpu resources
      */
     int  (*domain_initialise)(struct domain *d);
+    void (*domain_creation_finished)(struct domain *d);
     void (*domain_relinquish_resources)(struct domain *d);
     void (*domain_destroy)(struct domain *d);
     int  (*vcpu_initialise)(struct vcpu *v);
@@ -383,6 +384,12 @@ static inline bool hvm_has_set_descripto
     return hvm_funcs.set_descriptor_access_exiting;
 }
 
+static inline void hvm_domain_creation_finished(struct domain *d)
+{
+    if ( hvm_funcs.domain_creation_finished )
+        alternative_vcall(hvm_funcs.domain_creation_finished, d);
+}
+
 static inline int
 hvm_guest_x86_mode(struct vcpu *v)
 {
@@ -715,6 +722,11 @@ static inline void hvm_invlpg(const stru
 {
     ASSERT_UNREACHABLE();
 }
+
+static inline void hvm_domain_creation_finished(struct domain *d)
+{
+    ASSERT_UNREACHABLE();
+}
 
 /*
  * Shadow code needs further cleanup to eliminate some HVM-only paths. For
--- unstable.orig/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ unstable/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -58,7 +58,6 @@ struct ept_data {
 #define _VMX_DOMAIN_PML_ENABLED    0
 #define VMX_DOMAIN_PML_ENABLED     (1ul << _VMX_DOMAIN_PML_ENABLED)
 struct vmx_domain {
-    mfn_t apic_access_mfn;
     /* VMX_DOMAIN_* */
     unsigned int status;
 


From xen-devel-bounces@lists.xenproject.org Mon Feb 15 14:00:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 Feb 2021 14:00:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85246.159839 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBePt-0003U9-5f; Mon, 15 Feb 2021 13:59:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85246.159839; Mon, 15 Feb 2021 13:59:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBePt-0003U2-0q; Mon, 15 Feb 2021 13:59:57 +0000
Received: by outflank-mailman (input) for mailman id 85246;
 Mon, 15 Feb 2021 13:59:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lBePr-0003Tx-NZ
 for xen-devel@lists.xenproject.org; Mon, 15 Feb 2021 13:59:55 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lBePr-00079w-1E; Mon, 15 Feb 2021 13:59:55 +0000
Received: from [54.239.6.190] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lBePq-0000w9-Qn; Mon, 15 Feb 2021 13:59:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=If32U9ooWov0ZV7Yqm353+IJpeCNL3uN+LcztntuAV8=; b=D3Y/CNe9jLmTCappYS5AeQtTz3
	4THxUmzu3R+h+d+fUwwqYYODQvSxcMRHyjBSfecOTgyUXOL81TcyZm7007ENXAGYquZOVcsE9pqkm
	1EX0zt6F8M344QbMZKvDt0KQxBJBMss0Eq8kBMpJ0ks0Pca7psZxdY06V8ywKC4+9fAk=;
Subject: Re: Boot time and 3 sec in warning_print
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org, =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?=
 <jgross@suse.com>, =?UTF-8?Q?Anders_T=c3=b6rnqvist?=
 <anders.tornqvist@codiax.se>
References: <fcc14668-b07d-aec8-1587-bc2d7add84da@codiax.se>
 <4f256544-7e1a-1a65-4942-04b3bc00e373@suse.com>
 <d9d4ebf7-09b3-29f2-1359-cd9db73f9c94@suse.com>
 <84a8cc60-e63d-24db-750c-39bb6049c045@xen.org>
 <29a1b91f-891c-2a9e-a5f0-b7b01296c880@suse.com>
 <677c66e0-6f06-569c-7447-d3bd07dcda81@xen.org>
 <6c850ae1-62a9-4adb-fb94-ed90ee1780ff@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <173df539-6f89-061a-100e-4ed0b461dff1@xen.org>
Date: Mon, 15 Feb 2021 13:59:53 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <6c850ae1-62a9-4adb-fb94-ed90ee1780ff@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Jan,

On 15/02/2021 12:29, Jan Beulich wrote:
> On 15.02.2021 11:50, Julien Grall wrote:
>> On 15/02/2021 10:41, Jan Beulich wrote:
>>> On 15.02.2021 11:35, Julien Grall wrote:
>>>> On 15/02/2021 08:38, Jan Beulich wrote:
>>>>> On 15.02.2021 09:13, Jürgen Groß wrote:
>>>>>> On 15.02.21 08:37, Anders Törnqvist wrote:
>>>>>>> I would like to shorten the boot time in our system if possible.
>>>>>>>
>>>>>>> In xen/common/warning.c there is warning_print() which contains a 3
>>>>>>> seconds loop that calls process_pending_softirqs().
>>>>>>>
>>>>>>> What would the potential problems be if that loop is radically shortened?
>>>>>>
>>>>>> A user might not notice the warning(s) printed.
>>>>>>
>>>>>> But I can see your point. I think adding a boot option for setting
>>>>>> another timeout value (e.g. 0) would do the job without compromising
>>>>>> the default case.
>>>>>
>>>>> I don't think I agree - the solution to this is to eliminate the
>>>>> reason leading to the warning.
>>>>> The delay is intentionally this way
>>>>> to annoy the admin and force them to take measures.
>>>> Given they are warnings, an admin may have assessed them and considered
>>>> there is no remediation necessary.
>>>>
>>>> We encountered the same problem with LiveUpdate. If you happen to have a
>>>> warning (e.g. sync_console for debugging), then you are adding 3s in
>>>> your downtime (this can more than double-up the actual one).
>>>
>>> One very explicitly should not run with sync_console in production.
>>
>> I knew it would be misinterpreted ;). I agree that sync_console must not
>> be used in production.
>>
>> I gave the example of sync_console because this is something impacting
>> debugging of LiveUpdate. If you have a console issue and need to add
>> sync_console, then you may end up completely wrecking your platform when
>> LiveUpdating.
>>
>> Without the 3s delay, then you have a chance to LiveUpdate and figure
>> out the problem.
> 
> I'm afraid I don't see how LU comes into the picture here: We're
> talking about a boot time delay. The delay doesn't recur at any
> point at runtime.

Live Update is a reboot of the hypervisor. The main difference is we are 
carrying the state of each domain to the new Xen along with some 
strictly necessary global state (e.g. IOMMU).

So the new Xen will mostly follow the same boot path as it would after a 
"classic" (re)boot. Only a few twists are necessary for 
LiveUpdate (e.g. reserving memory, creating/restoring multiple domains).

We would need to gate the 3s pause in the case of LiveUpdate. At which 
point, it seems we may want to take an approach that also benefits other 
users.
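
Jürgen's suggestion upthread (a boot option for the timeout, defaulting to
today's 3 seconds) could be reduced to something like the following. This
is a hypothetical standalone sketch, not Xen's actual warning_print() code;
the names warning_delay_secs, set_warning_delay() and
warning_pause_iterations() are all invented for illustration:

```c
#include <assert.h>

/*
 * Stand-in for the hard-coded 3-second pause: the delay becomes a
 * tunable that a command-line option (or a Live Update path) could
 * set to 0, while the default preserves current behaviour.
 */
static unsigned int warning_delay_secs = 3;

void set_warning_delay(unsigned int secs)
{
    warning_delay_secs = secs;
}

/*
 * Number of one-second "process pending softirqs, then wait"
 * iterations the warning loop would run with the current setting.
 */
unsigned int warning_pause_iterations(void)
{
    return warning_delay_secs;
}
```

With a setting of 0 the loop body simply never runs, so warnings are still
printed but the boot-time (or Live Update downtime) penalty disappears.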

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Feb 15 14:12:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 Feb 2021 14:12:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85250.159850 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBecA-0005Jy-CW; Mon, 15 Feb 2021 14:12:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85250.159850; Mon, 15 Feb 2021 14:12:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBecA-0005Jr-99; Mon, 15 Feb 2021 14:12:38 +0000
Received: by outflank-mailman (input) for mailman id 85250;
 Mon, 15 Feb 2021 14:12:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/QLJ=HR=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lBec8-0005Jg-23
 for xen-devel@lists.xenproject.org; Mon, 15 Feb 2021 14:12:36 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6fdde6c7-d997-49bf-aeb1-a71a7270094b;
 Mon, 15 Feb 2021 14:12:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6fdde6c7-d997-49bf-aeb1-a71a7270094b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613398354;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=1qMNtljXf0apYUhwBsXK9LdL5JcXbm/+2a8v3UZBFRk=;
  b=PtQv6m9WnUi6GcCXuIW/GksZojZbiF8oB1qRgPhok7I/jJ44rFSnoMFX
   ZJhqIKocJdanMtekNPi+YsXlc7xr/DUN2I8Y5D3KSgeGGWrRf6CszRu6+
   9zBg4lY2MIiHyE8PVoOwxdLzL9DGAd6vOykUIo22t6HSbVZpIVK6d/2o5
   k=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 37257266
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,180,1610427600"; 
   d="scan'208";a="37257266"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=1qMNtljXf0apYUhwBsXK9LdL5JcXbm/+2a8v3UZBFRk=;
 b=rhhbFXAOqNQ7K2vEjY0sVH72FYbhqMHOJ2BQAbb/BXgA12JG/wBjSiYqTZnc5iEInhjvOAr4KE2WUoqUIXiSRCnykV4q3i+eNFuIqExxRw2xE8rfN0OcitWmkw4J6yT75aNv3YVhHWNj3Yr3buSOYJDy5X4EhsD+GXVXzGHJKcQ=
Subject: Re: [PATCH 06/10] stubdom/xenstored: Fix uninitialised variables in
 lu_read_state()
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>, Xen-devel
	<xen-devel@lists.xenproject.org>
CC: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210212153953.4582-1-andrew.cooper3@citrix.com>
 <20210212153953.4582-7-andrew.cooper3@citrix.com>
 <cea38f47-7dc3-ea67-104a-e5b1899a7f3b@suse.com>
 <226c35c2-2280-8353-74d0-cc35e7f84de6@citrix.com>
 <f77b1634-b250-33c0-42ca-25dd00cb5a02@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <b6db2e90-873a-85e9-51ec-6d4818f752cf@citrix.com>
Date: Mon, 15 Feb 2021 14:12:22 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
In-Reply-To: <f77b1634-b250-33c0-42ca-25dd00cb5a02@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0135.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:193::14) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 293ba4d3-307e-4807-31df-08d8d1bbba53
X-MS-TrafficTypeDiagnostic: BYAPR03MB3495:
X-Microsoft-Antispam-PRVS: <BYAPR03MB3495E4C485CF8C99B919A14BBA889@BYAPR03MB3495.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:923;
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-CrossTenant-Network-Message-Id: 293ba4d3-307e-4807-31df-08d8d1bbba53
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 Feb 2021 14:12:29.0587
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB3495
X-OriginatorOrg: citrix.com

On 13/02/2021 09:36, Jürgen Groß wrote:
> On 12.02.21 18:01, Andrew Cooper wrote:
>> On 12/02/2021 16:08, Jürgen Groß wrote:
>>> On 12.02.21 16:39, Andrew Cooper wrote:
>>>> Various versions of gcc, when compiling with -Og, complain:
>>>>
>>>>     xenstored_control.c: In function 'lu_read_state':
>>>>     xenstored_control.c:540:11: error: 'state.size' is used
>>>> uninitialized in this
>>>>     function [-Werror=uninitialized]
>>>>       if (state.size == 0)
>>>>           ~~~~~^~~~~
>>>>     xenstored_control.c:543:6: error: 'state.buf' may be used
>>>> uninitialized in
>>>>     this function [-Werror=maybe-uninitialized]
>>>>       pre = state.buf;
>>>>       ~~~~^~~~~~~~~~~
>>>>     xenstored_control.c:550:23: error: 'state.buf' may be used
>>>> uninitialized in
>>>>     this function [-Werror=maybe-uninitialized]
>>>>        (void *)head - state.buf < state.size;
>>>>                      ~~~~~^~~~
>>>>     xenstored_control.c:550:35: error: 'state.size' may be used
>>>> uninitialized in
>>>>     this function [-Werror=maybe-uninitialized]
>>>>        (void *)head - state.buf < state.size;
>>>>                                  ~~~~~^~~~~
>>>>
>>>> Interestingly, this is only in the stubdom build.  I can't identify
>>>> any
>>>> relevant differences vs the regular tools build.
>>>
>>> But me. :-)
>>>
>>> lu_get_dump_state() is empty for the stubdom case (this will change
>>> when LU is implemented for stubdom, too). In the daemon case this
>>> function sets all the relevant fields.
>>
>> So I spotted that.  This instance of lu_read_state() is already within
>> the ifdefary, so doesn't get to see the empty stub (I think).
>
> There is only one instance of lu_read_state().

Ahh - I got the NO_LIVE_UPDATE and __MINIOS__ ifdefary inverted.  (It is
very confusing to follow).

I'll reword the commit message, but leave the fix the same.
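
The warning class under discussion can be reproduced in miniature. When the
initialising helper compiles to an empty stub under an ifdef, gcc's -Og flow
analysis sees a path on which the caller reads the struct without it ever
being written; initialising at declaration silences it on every path. The
names below are compressed/hypothetical stand-ins for the xenstored code,
not the actual implementation:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical reduction of the state passed around by lu_read_state(). */
struct lu_state {
    void *buf;
    size_t size;
};

/*
 * With NO_LIVE_UPDATE_STUB defined (the stubdom-like configuration),
 * this helper becomes an empty stub, so -Og -Werror=maybe-uninitialized
 * warns that the caller may read buf/size uninitialised.
 */
static void lu_get_dump_state(struct lu_state *state)
{
#ifndef NO_LIVE_UPDATE_STUB
    state->buf = NULL;
    state->size = 0;
#endif
}

/* Initialising the struct at declaration keeps every path well-defined. */
size_t lu_read_state_size(void)
{
    struct lu_state state = { NULL, 0 };

    lu_get_dump_state(&state);
    return state.size;
}
```

This is the shape of the fix being kept here: the initialiser costs nothing
on the paths where the helper does populate the struct, and removes the
warning where it does not.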

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Feb 15 15:01:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 Feb 2021 15:01:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85267.159868 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBfMz-0001Gh-5w; Mon, 15 Feb 2021 15:01:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85267.159868; Mon, 15 Feb 2021 15:01:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBfMz-0001Ga-2s; Mon, 15 Feb 2021 15:01:01 +0000
Received: by outflank-mailman (input) for mailman id 85267;
 Mon, 15 Feb 2021 15:01:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/QLJ=HR=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lBfMy-0001GV-6G
 for xen-devel@lists.xenproject.org; Mon, 15 Feb 2021 15:01:00 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e256dcd2-f978-40b6-90f2-abb3be25828a;
 Mon, 15 Feb 2021 15:00:58 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e256dcd2-f978-40b6-90f2-abb3be25828a
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613401258;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=dgXGA8O4xw9D+ikICTiinvDUGkt79fMsZa4zjuKS3WE=;
  b=PonEcvRZv+JV2Zi4qcpsBU48f18/zdw9ROEqSV3/tgWAOt/iQAtO8jUk
   5f6cMSR02cBX522O8tvDqqZh9mIfh6l/7lKyhvhQSyxg2QiJ8edpRaZfW
   lKgOGjaN8HXUD//vtxwpufsMmFrl9Vnvx7CZ+5VxcLHSeAUXx90OsdeR+
   o=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 37201565
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,180,1610427600"; 
   d="scan'208";a="37201565"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=pxxANHvwZGVlFMZK1QmL5U5xQvz2wmhp4Z3FsK4LR9w=;
 b=HaoS/u7CgJuHLe+/TDKgSibz16FCsPPMxzwDFB+hOGifQN/Y7vkBTpKEqMj6jyjvqlMkIMpUfFwV8hk7mUuIlWEQ7Esrnfu+TvtNgCvP/c/xE82SHImgfFcJrGdU+uJ7QTIY+MaCol5TRek7ruz6n3y0mpS+iHUIL69DNS+S2q8=
Subject: Re: Boot time and 3 sec in warning_print
To: Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>
CC: <xen-devel@lists.xenproject.org>, =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?=
	<jgross@suse.com>, =?UTF-8?Q?Anders_T=c3=b6rnqvist?=
	<anders.tornqvist@codiax.se>
References: <fcc14668-b07d-aec8-1587-bc2d7add84da@codiax.se>
 <4f256544-7e1a-1a65-4942-04b3bc00e373@suse.com>
 <d9d4ebf7-09b3-29f2-1359-cd9db73f9c94@suse.com>
 <84a8cc60-e63d-24db-750c-39bb6049c045@xen.org>
 <29a1b91f-891c-2a9e-a5f0-b7b01296c880@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <e4c5e236-b801-27af-d519-204943aa32d4@citrix.com>
Date: Mon, 15 Feb 2021 15:00:42 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
In-Reply-To: <29a1b91f-891c-2a9e-a5f0-b7b01296c880@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0495.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1ab::14) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 7990e63b-0bd2-4874-1474-08d8d1c27aaf
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:
X-Microsoft-Antispam-PRVS: <BYAPR03MB3623979DF80F93E42C708662BA889@BYAPR03MB3623.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-CrossTenant-Network-Message-Id: 7990e63b-0bd2-4874-1474-08d8d1c27aaf
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 Feb 2021 15:00:48.7598
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: RAdASi3tH+jpCm9g3UNxZmJYD2tDxTxb2xPT38Y2Tjk7g9GWyMuizOCd2iq0i5RRgLnrez6S3lliRj9xI1H2DtWUXtP3DXYDqPzvR6FYkpk=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB3623
X-OriginatorOrg: citrix.com

On 15/02/2021 10:41, Jan Beulich wrote:
> On 15.02.2021 11:35, Julien Grall wrote:
>> On 15/02/2021 08:38, Jan Beulich wrote:
>>> On 15.02.2021 09:13, Jürgen Groß wrote:
>> What was just an "annoyance" at boot can now completely wreck your
>> guests and system (not much software can tolerate long downtime).
>>
>> So I think we either want to drop the 3s pause completely or allow the 
>> user to decide whether he/she cares about it via a command line option.
>>
>> I am leaning towards the former at the moment.
> I'm afraid I'm -2 towards complete removal. I'm at least -1 towards
> shortening of the pause, as already indicated.

A 3s delay on boot doesn't even cause most people to notice.  The
infrastructure has failed at its intended purpose.

Therefore, we should now consider replacing this largely-failed
experiment with something better.


Personally, I think ARM is abusing this in the first place.  Adding a 3
second delay for someone who's explicitly chosen hmp_unsafe is petty.
So is adding a 3 second delay for anyone who's explicitly chosen a
non-default configuration.  In retrospect, I think the delay for hvm_fep
is also wrong, especially as we also have a taint for it.


The *only* way to make users deal with the warnings is to surface them
very obviously in the UI they use to interact with their Xen system.
That is XenCenter/XenOrchestra/virt-manager/etc, and possibly the SSH
MOTD - not a logfile that approximately no one reads.

To make this happen, warnings need to be available somewhere which isn't
the dmesg ring.  hypfs would be the obvious candidate at the moment.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Feb 15 15:10:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 Feb 2021 15:10:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85270.159881 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBfWD-0002E9-VV; Mon, 15 Feb 2021 15:10:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85270.159881; Mon, 15 Feb 2021 15:10:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBfWD-0002E2-S7; Mon, 15 Feb 2021 15:10:33 +0000
Received: by outflank-mailman (input) for mailman id 85270;
 Mon, 15 Feb 2021 15:10:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lBfWD-0002Dx-F1
 for xen-devel@lists.xenproject.org; Mon, 15 Feb 2021 15:10:33 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lBfWC-0008NR-Px; Mon, 15 Feb 2021 15:10:32 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lBfWC-0007mx-H8; Mon, 15 Feb 2021 15:10:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=HBoxuV2cYZP+cucKP3j6vy3oNG8hP2mNkjtQxmf9GsM=; b=pNAuWUVYG0UD8P91nQFDviWDsf
	h4kS9UHRQUV+Q7rz93+4KL1OPs7Lbw0V5jywfhL4w3319mredSyj4ACAuHjNK9bhahEndCfi2xyJC
	eq4qNHMGAfKt2CXfpDDph7cU9wbZUmOKGwIP5BOI4nqZTAc3MxIjieEvuLfRVTfE/daA=;
Subject: Re: Boot time and 3 sec in warning_print
To: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org, =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?=
 <jgross@suse.com>, =?UTF-8?Q?Anders_T=c3=b6rnqvist?=
 <anders.tornqvist@codiax.se>
References: <fcc14668-b07d-aec8-1587-bc2d7add84da@codiax.se>
 <4f256544-7e1a-1a65-4942-04b3bc00e373@suse.com>
 <d9d4ebf7-09b3-29f2-1359-cd9db73f9c94@suse.com>
 <84a8cc60-e63d-24db-750c-39bb6049c045@xen.org>
 <29a1b91f-891c-2a9e-a5f0-b7b01296c880@suse.com>
 <e4c5e236-b801-27af-d519-204943aa32d4@citrix.com>
From: Julien Grall <julien@xen.org>
Message-ID: <6c238a10-cf7e-527d-88ea-c1d730ad9e87@xen.org>
Date: Mon, 15 Feb 2021 15:10:30 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <e4c5e236-b801-27af-d519-204943aa32d4@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Andrew,

On 15/02/2021 15:00, Andrew Cooper wrote:
> On 15/02/2021 10:41, Jan Beulich wrote:
>> On 15.02.2021 11:35, Julien Grall wrote:
>>> On 15/02/2021 08:38, Jan Beulich wrote:
>>>> On 15.02.2021 09:13, Jürgen Groß wrote:
>>> What was just an "annoyance" at boot can now completely wreck your
>>> guests and system (not much software can tolerate long downtime).
>>>
>>> So I think we either want to drop the 3s pause completely or allow the
>>> user to decide whether he/she cares about it via a command line option.
>>>
>>> I am leaning towards the former at the moment.
>> I'm afraid I'm -2 towards complete removal. I'm at least -1 towards
>> shortening of the pause, as already indicated.
> 
> A 3s delay on boot doesn't even cause most people to notice.  The
> infrastructure has failed at its intended purpose.
> 
> Therefore, we should now consider replacing this largely-failed
> experiment with something better.
> 
> 
> Personally, I think ARM is abusing this in the first place.  Adding a 3
> second delay for someone who's explicitly chosen hmp_unsafe is petty.

For Arm, the original goal was to get all the important warnings in a
single place so they are easy to find.

The 3 seconds is unfortunately an unintended consequence of using 
warning_add().
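To illustrate the point (this is a hypothetical sketch, not Xen's actual warning_add() implementation): collecting warnings into one buffer is separable from pausing the boot, so the delay can be made a caller decision rather than an automatic consequence of having added any warning at all.

```c
#include <stdio.h>

#define MAX_WARNINGS 8

/* Hypothetical warning collector: names and layout are illustrative. */
static const char *warnings[MAX_WARNINGS];
static unsigned int nr_warnings;

static void warning_collect(const char *msg)
{
    if (nr_warnings < MAX_WARNINGS)
        warnings[nr_warnings++] = msg;
}

/* Prints all collected warnings in one place.  The pause is opt-in,
 * so callers that only want the "single place to find warnings"
 * behaviour never pay the boot delay. */
static unsigned int warning_print(int pause_secs)
{
    unsigned int i;

    for (i = 0; i < nr_warnings; i++)
        printf("WARNING: %s\n", warnings[i]);

    (void)pause_secs; /* e.g. sleep(pause_secs) only when requested */

    return nr_warnings;
}
```
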

> So is adding a 3 second delay for anyone who's explicitly chosen a
> non-default configuration.  In retrospect, I think the delay for hvm_fep
> is also wrong, especially as we also have a taint for it.
> 
> The *only* way to make users deal with the warnings is to surface them
> very obviously in the UI they use to interact with their Xen system.
> That is XenCenter/XenOrchestra/virt-manager/etc, and possibly the SSH
> MOTD - not a logfile that approximately no one reads.
> 
> To make this happen, warnings need to be available somewhere which isn't
> the dmesg ring.  hypfs would be the obvious candidate at the moment.

Could we also taint Xen if there is a major warning?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Feb 15 15:37:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 Feb 2021 15:37:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85273.159893 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBfwA-00047K-3k; Mon, 15 Feb 2021 15:37:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85273.159893; Mon, 15 Feb 2021 15:37:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBfwA-00047D-0r; Mon, 15 Feb 2021 15:37:22 +0000
Received: by outflank-mailman (input) for mailman id 85273;
 Mon, 15 Feb 2021 15:37:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/QLJ=HR=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lBfw9-000478-5Z
 for xen-devel@lists.xenproject.org; Mon, 15 Feb 2021 15:37:21 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id aba63bde-7cad-4555-8b0e-636286fb3c35;
 Mon, 15 Feb 2021 15:37:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aba63bde-7cad-4555-8b0e-636286fb3c35
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613403439;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=Zf7KG8h/f/t89+M/y8zcenH/KRVdl9lbrjSfPKDQUDc=;
  b=PuQMaMlkzTENGOn9tQf++aaTVdU9n+cCOO+fBGEBDBLPEcLwRQaAU16B
   tUT8HJlAG1hn1N1Rk2t1L5R4ku9ST/v3L7QCJVnOOoN9ime0wdl64vrzx
   IL399TUpDXHnvZ8NOieIikYymMcP1nRrCZCriPV/miNq5o0ip4O8m1CB5
   c=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: uYHojOcCv3PBzUrbrwWZn9PAmgetAW51eL4awKKfNOzIJ2ZucZQZ5UR4v1Dv4AveR7n/WsvR17
 eni5K0cFLtkGeMZhHDC2SKG1wOh3ADwC9jJya6LbXgW7jNW4OW7v9Luo0iq5BUGLCN99xzqYGH
 ZmrSxpJYjQIws6bUWJrpUCO+oc27xhdSM2zlMxdt+4UCm2Z+zsp40IR18e71bvpeUSbeYRIjlJ
 HrI82reXOprjuihekmoOtp5YYFprVcI9D/r1P/+e4cJR2S7q2g7hI51u3M5otJlpsvSaGPkpy/
 FOU=
X-SBRS: 5.1
X-MesageID: 37460986
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,180,1610427600"; 
   d="scan'208";a="37460986"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Juergen Gross <jgross@suse.com>
Subject: [PATCH v1.1 06/10] stubdom/xenstored: Fix uninitialised variables in lu_read_state()
Date: Mon, 15 Feb 2021 15:36:49 +0000
Message-ID: <20210215153649.4055-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210212153953.4582-7-andrew.cooper3@citrix.com>
References: <20210212153953.4582-7-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Various versions of gcc, when compiling with -Og, complain:

  xenstored_control.c: In function ‘lu_read_state’:
  xenstored_control.c:540:11: error: ‘state.size’ is used uninitialized in this
  function [-Werror=uninitialized]
    if (state.size == 0)
        ~~~~~^~~~~
  xenstored_control.c:543:6: error: ‘state.buf’ may be used uninitialized in
  this function [-Werror=maybe-uninitialized]
    pre = state.buf;
    ~~~~^~~~~~~~~~~
  xenstored_control.c:550:23: error: ‘state.buf’ may be used uninitialized in
  this function [-Werror=maybe-uninitialized]
     (void *)head - state.buf < state.size;
                    ~~~~~^~~~
  xenstored_control.c:550:35: error: ‘state.size’ may be used uninitialized in
  this function [-Werror=maybe-uninitialized]
     (void *)head - state.buf < state.size;
                                ~~~~~^~~~~

for the stubdom build.  This is because lu_get_dump_state() is a no-op stub in
MiniOS, and state really is operated on uninitialised.
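The fix works because an empty initialiser zero-fills every member of the struct, so the size check takes its well-defined "nothing to read" path even when no helper populates the state.  A minimal sketch of the stub-build situation (struct and function names here are illustrative, not the real xenstored ones; `= {0}` is used instead of the GNU/C23 `= {}` for portability):

```c
#include <stddef.h>

/* Hypothetical cut-down of the struct in question: just the two
 * members the compiler warns about. */
struct demo_state {
    void *buf;
    size_t size;
};

/* Mimics the MiniOS build, where the "get dump state" helper is a
 * no-op stub: nothing ever writes to 'state', so without the
 * initialiser the reads below would be of indeterminate values. */
static int lu_read_state_demo(void)
{
    struct demo_state state = {0};  /* zero-fills buf and size */

    if (state.size == 0)
        return 0;   /* nothing to process -- the stub case */

    return 1;       /* would walk state.buf here */
}
```
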

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
---
CC: Ian Jackson <iwj@xenproject.org>
CC: Wei Liu <wl@xen.org>
CC: Juergen Gross <jgross@suse.com>

v2:
 * Reword commit message.
---
 tools/xenstore/xenstored_control.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index 1f733e0a04..f10beaf85e 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -530,7 +530,7 @@ static const char *lu_dump_state(const void *ctx, struct connection *conn)
 
 void lu_read_state(void)
 {
-	struct lu_dump_state state;
+	struct lu_dump_state state = {};
 	struct xs_state_record_header *head;
 	void *ctx = talloc_new(NULL); /* Work context for subfunctions. */
 	struct xs_state_preamble *pre;
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Feb 15 15:48:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 Feb 2021 15:48:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85279.159911 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBg79-00056g-Cy; Mon, 15 Feb 2021 15:48:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85279.159911; Mon, 15 Feb 2021 15:48:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBg79-00056Z-80; Mon, 15 Feb 2021 15:48:43 +0000
Received: by outflank-mailman (input) for mailman id 85279;
 Mon, 15 Feb 2021 15:48:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBg77-00056R-JR; Mon, 15 Feb 2021 15:48:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBg77-0000ZH-AQ; Mon, 15 Feb 2021 15:48:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBg76-0000Z9-SV; Mon, 15 Feb 2021 15:48:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lBg76-00068z-S0; Mon, 15 Feb 2021 15:48:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=P/WqCjGFbMIFkn1yBl9wfulFRLPyx+iDm/xqUZW2FtM=; b=nUp0tMAtfQr/yzbc+kKtmg8PfT
	4bVD/PtLNoaIBmD1MdDEXFjcU8FnxpUQHukraE1abLpDIj9OpnZ5f+luTPgYhxU3IHU44V97MObDa
	L144RfhiDHHlTMQh4+y4hzW79iJ7wr/KFNtrrRKj1lUX3zSe2H/Ul1KfHej7h/dJtZfA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159362-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 159362: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=04085ec1ac05a362812e9b0c6b5a8713d7dc88ad
X-Osstest-Versions-That:
    xen=04085ec1ac05a362812e9b0c6b5a8713d7dc88ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 15 Feb 2021 15:48:40 +0000

flight 159362 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159362/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-examine    4 memdisk-try-append fail in 159335 pass in 159362
 test-armhf-armhf-xl-rtds 18 guest-start/debian.repeat fail in 159335 pass in 159362
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 159335
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat  fail pass in 159335

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 159335
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 159335
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 159335
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 159335
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 159335
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 159335
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 159335
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 159335
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 159335
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 159335
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 159335
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  04085ec1ac05a362812e9b0c6b5a8713d7dc88ad
baseline version:
 xen                  04085ec1ac05a362812e9b0c6b5a8713d7dc88ad

Last test of basis   159362  2021-02-15 01:51:33 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Mon Feb 15 21:15:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 Feb 2021 21:15:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85353.159998 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBlCz-0000kr-2f; Mon, 15 Feb 2021 21:15:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85353.159998; Mon, 15 Feb 2021 21:15:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBlCy-0000kk-Vd; Mon, 15 Feb 2021 21:15:04 +0000
Received: by outflank-mailman (input) for mailman id 85353;
 Mon, 15 Feb 2021 21:15:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBlCx-0000kc-OU; Mon, 15 Feb 2021 21:15:03 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBlCx-0006Y6-HP; Mon, 15 Feb 2021 21:15:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBlCx-0008FD-8G; Mon, 15 Feb 2021 21:15:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lBlCx-0007e4-7h; Mon, 15 Feb 2021 21:15:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=gYJwDLw6FzpM7LM2ZEpju6CSddpDu/mWhhdwRL03Hdk=; b=h96/213yEaunYnVE6XNl4JFES4
	BjLxoQFxE9QmOxARYtXa31KEIKt6+LZG8M/5OrdK1fHQ8FsSXGQ0hgalDzxt0OnaFRIUAFJrmJ49n
	8BrDdmxhCewLKjr6UtK0w8pZMwMmAtH5JNlIWyv+bbfPRpHEmuzyqHWbRcqxrw8OK/aY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159364-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 159364: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=392b9a74b9b621c52d05e37bc6f41f1bbab5c6f8
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 15 Feb 2021 21:15:03 +0000

flight 159364 qemu-mainline real [real]
flight 159385 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/159364/
http://logs.test-lab.xenproject.org/osstest/logs/159385/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                392b9a74b9b621c52d05e37bc6f41f1bbab5c6f8
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  179 days
Failing since        152659  2020-08-21 14:07:39 Z  178 days  348 attempts
Testing same since   159364  2021-02-15 03:31:02 Z    0 days    1 attempts

------------------------------------------------------------
402 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 111151 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Feb 15 21:35:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 Feb 2021 21:35:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85360.160019 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBlWa-0002bZ-0G; Mon, 15 Feb 2021 21:35:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85360.160019; Mon, 15 Feb 2021 21:35:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBlWZ-0002bS-Sf; Mon, 15 Feb 2021 21:35:19 +0000
Received: by outflank-mailman (input) for mailman id 85360;
 Mon, 15 Feb 2021 21:35:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1d0k=HR=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1lBlWY-0002bM-5D
 for xen-devel@lists.xenproject.org; Mon, 15 Feb 2021 21:35:18 +0000
Received: from userp2130.oracle.com (unknown [156.151.31.86])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7897ad91-ddde-4cbc-8b5c-822bea6edeb7;
 Mon, 15 Feb 2021 21:35:16 +0000 (UTC)
Received: from pps.filterd (userp2130.oracle.com [127.0.0.1])
 by userp2130.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 11FLYvnH117107;
 Mon, 15 Feb 2021 21:35:14 GMT
Received: from aserp3020.oracle.com (aserp3020.oracle.com [141.146.126.70])
 by userp2130.oracle.com with ESMTP id 36p66qw3tm-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Mon, 15 Feb 2021 21:35:14 +0000
Received: from pps.filterd (aserp3020.oracle.com [127.0.0.1])
 by aserp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 11FLU4Ld029654;
 Mon, 15 Feb 2021 21:35:13 GMT
Received: from nam12-bn8-obe.outbound.protection.outlook.com
 (mail-bn8nam12lp2174.outbound.protection.outlook.com [104.47.55.174])
 by aserp3020.oracle.com with ESMTP id 36prnx7r4p-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Mon, 15 Feb 2021 21:35:13 +0000
Received: from BYAPR10MB3288.namprd10.prod.outlook.com (2603:10b6:a03:156::21)
 by BYAPR10MB2808.namprd10.prod.outlook.com (2603:10b6:a03:8d::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3846.29; Mon, 15 Feb
 2021 21:35:12 +0000
Received: from BYAPR10MB3288.namprd10.prod.outlook.com
 ([fe80::f489:4e25:63e0:c721]) by BYAPR10MB3288.namprd10.prod.outlook.com
 ([fe80::f489:4e25:63e0:c721%7]) with mapi id 15.20.3846.042; Mon, 15 Feb 2021
 21:35:12 +0000
Received: from [10.74.96.113] (138.3.200.49) by
 SJ0PR03CA0323.namprd03.prod.outlook.com (2603:10b6:a03:39d::28) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3846.25 via Frontend
 Transport; Mon, 15 Feb 2021 21:35:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7897ad91-ddde-4cbc-8b5c-822bea6edeb7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : in-reply-to : content-type :
 content-transfer-encoding : mime-version; s=corp-2020-01-29;
 bh=t6ljEiRGC4oNugrEri7bzXoKa0KoqCwTlkn+NuRj/AI=;
 b=VbwkssN0brvGGRD8meq6qlSZiIlS4+YPDCvikRWgyv8Vp2QYhHxVk5yH226EC2ER3gzl
 usFPmmsYtNao+jQeI9qVeq0rTjPj26Zk8+5hA8L0xvjkBQG0Lj6NQxf+VSrtsxwWF9qb
 5EciVMhpeb88X9VkRlSoCkolNwPK+MxL9iHU5vuZBiRyr4SHWNh+qLslQogx4k7p5fRH
 MQcAjJ/ENxZru2wOThZfIlkjyvMYbm2hZ7uKmWExN3UY9qKUURmKqfiMQOHZYavkZA6v
 cpYZy06SMNlYVknh7CnfnNSpXQZPaQy2iYR9Y3/TQUromsYOBgbqqjh6eVeTvsEUFvGp YA== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=erVomIEROZIaL9vYiXh6qwSwCM1nVj3NXDuYih2+HjuBpVcCy22mhTBVA7Cd8qQm39+wUmlKiAD0WHH3xLfZrSQwosULiL8xCo2KckFKkCznjYPmqm3q6L4JK/+5ZTiYy+A4u+1rE+Bc/YHOI6gywfBrFFb7jQn4oaaZZafbwvIUZS5OMIXMtoulqIWkJskOC216LVgtssBrEnwRkjWfRxGJCurwIturTjVUKo/sxy7u4h56ifOsNwkQbK45dY2of/x3hdGLJAfeDtxSrdlaFkeZBxkJBf9jqyfB+CFCXyq5IoT58cmFT+TEJaCiZlN0Rq3/7HF0YZhMznpz/Ay35g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=t6ljEiRGC4oNugrEri7bzXoKa0KoqCwTlkn+NuRj/AI=;
 b=T23XqMzxxujybVqwhEwhhda93JTgVRtGBfrggv0Fb9poBLRWspNywsDutUxWUTVrbx7THFRtTMqMp28uXUargHBFtH60VQMtTOlBe1pUIuAtRdhTKXowA7o/3xG/hFxxkSTr3HCbA7l9BLgvQsrRePXpnbhVwoapmUo4b2xhXqcEs50sRbVsVnEthFwpDiC9axe2qN8ZqEZwmzf/vf4X8XNHW+eeETuLTrGEBFJdgC8vUqhJoTEIpulDpjnomQapmJV6UHo3iCIhWhtZQ9InlEde7g/q6UC1z6caMlt45ujQ+XwwQIl8PGeNGIB6EpH3bw2EErS1Tz3o7mjS4IAQ8g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=t6ljEiRGC4oNugrEri7bzXoKa0KoqCwTlkn+NuRj/AI=;
 b=bFZTFzIVHuJp71+F+Rcx4+CAzny/CgntECGCWIoXtz6FsfG0T1gFV4yasw4NzGMpVdXOOWTkGZLXNJ14uw/cbiaL6X+i9ZY7PVU4XG4mJh/YQW5ac70U/dKYiu6hrWVaNxCipwlgwF/LkCyQoEd6W55/5aVZCSjd5DND2zogNlo=
Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=oracle.com;
Subject: Re: [PATCH v2 3/8] xen/events: avoid handling the same event on two
 cpus at the same time
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
        linux-kernel@vger.kernel.org
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
References: <20210211101616.13788-1-jgross@suse.com>
 <20210211101616.13788-4-jgross@suse.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <6cc74d6b-d537-0e9f-9da8-45456f6b703e@oracle.com>
Date: Mon, 15 Feb 2021 16:35:08 -0500
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
In-Reply-To: <20210211101616.13788-4-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Originating-IP: [138.3.200.49]
X-ClientProxiedBy: SJ0PR03CA0323.namprd03.prod.outlook.com
 (2603:10b6:a03:39d::28) To BYAPR10MB3288.namprd10.prod.outlook.com
 (2603:10b6:a03:156::21)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: fa3f262a-dd60-4324-ed13-08d8d1f992e8
X-MS-TrafficTypeDiagnostic: BYAPR10MB2808:
X-Microsoft-Antispam-PRVS: 
	<BYAPR10MB2808846360EB00018A10E2868A889@BYAPR10MB2808.namprd10.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:2512;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: fa3f262a-dd60-4324-ed13-08d8d1f992e8
X-MS-Exchange-CrossTenant-AuthSource: BYAPR10MB3288.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 Feb 2021 21:35:11.9203
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 5y5OG7lZIri+uLa/pbR1Ndkx3TVsUByP9zfh8H6FcYHMzSVhjWTvWpzVpWkCleLZAROUThYXgG7A2NAImMty7UM5ViwqZFqV0/Y6VkZ4ALE=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR10MB2808
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9896 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 phishscore=0 mlxlogscore=999
 bulkscore=0 suspectscore=0 spamscore=0 malwarescore=0 mlxscore=0
 adultscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2102150164
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9896 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 lowpriorityscore=0 suspectscore=0
 impostorscore=0 priorityscore=1501 clxscore=1015 spamscore=0 mlxscore=0
 phishscore=0 malwarescore=0 bulkscore=0 adultscore=0 mlxlogscore=999
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2102150164


On 2/11/21 5:16 AM, Juergen Gross wrote:

> @@ -622,6 +623,7 @@ static void xen_irq_lateeoi_locked(struct irq_info *info, bool spurious)
>  	}
>  
>  	info->eoi_time = 0;
> +	smp_store_release(&info->is_active, 0);


Can this be done in lateeoi_ack_dynirq()/lateeoi_mask_ack_dynirq(), after we've masked the channel? Then it will be consistent with how other chips do it, especially with the new helper.


-boris


>  	do_unmask(info, EVT_MASK_REASON_EOI_PENDING);
>  }
>  


From xen-devel-bounces@lists.xenproject.org Mon Feb 15 21:54:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 Feb 2021 21:54:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85365.160031 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBlp4-0004Oi-Ir; Mon, 15 Feb 2021 21:54:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85365.160031; Mon, 15 Feb 2021 21:54:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBlp4-0004Ob-FV; Mon, 15 Feb 2021 21:54:26 +0000
Received: by outflank-mailman (input) for mailman id 85365;
 Mon, 15 Feb 2021 21:54:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1d0k=HR=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1lBlp2-0004OT-QO
 for xen-devel@lists.xenproject.org; Mon, 15 Feb 2021 21:54:24 +0000
Received: from userp2130.oracle.com (unknown [156.151.31.86])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9448ec0f-5bb7-497d-aeb7-b6a2e62e687a;
 Mon, 15 Feb 2021 21:54:23 +0000 (UTC)
Received: from pps.filterd (userp2130.oracle.com [127.0.0.1])
 by userp2130.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 11FLjSDL133829;
 Mon, 15 Feb 2021 21:54:21 GMT
Received: from aserp3020.oracle.com (aserp3020.oracle.com [141.146.126.70])
 by userp2130.oracle.com with ESMTP id 36p66qw4cd-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Mon, 15 Feb 2021 21:54:21 +0000
Received: from pps.filterd (aserp3020.oracle.com [127.0.0.1])
 by aserp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 11FLk3dk072646;
 Mon, 15 Feb 2021 21:54:20 GMT
Received: from nam02-sn1-obe.outbound.protection.outlook.com
 (mail-sn1nam02lp2055.outbound.protection.outlook.com [104.47.36.55])
 by aserp3020.oracle.com with ESMTP id 36prnx835m-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Mon, 15 Feb 2021 21:54:20 +0000
Received: from BYAPR10MB3288.namprd10.prod.outlook.com (2603:10b6:a03:156::21)
 by SJ0PR10MB4557.namprd10.prod.outlook.com (2603:10b6:a03:2d4::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3846.29; Mon, 15 Feb
 2021 21:54:19 +0000
Received: from BYAPR10MB3288.namprd10.prod.outlook.com
 ([fe80::f489:4e25:63e0:c721]) by BYAPR10MB3288.namprd10.prod.outlook.com
 ([fe80::f489:4e25:63e0:c721%7]) with mapi id 15.20.3846.042; Mon, 15 Feb 2021
 21:54:19 +0000
Received: from [10.74.96.113] (138.3.200.49) by
 BY3PR05CA0036.namprd05.prod.outlook.com (2603:10b6:a03:39b::11) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3868.11 via Frontend
 Transport; Mon, 15 Feb 2021 21:54:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9448ec0f-5bb7-497d-aeb7-b6a2e62e687a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : in-reply-to : content-type :
 content-transfer-encoding : mime-version; s=corp-2020-01-29;
 bh=5TcrC9CRttlghKdwb1T/jMH/Irp34EOihRLM/Z4mHX4=;
 b=czj40iN734tbMeCZq+3V1KMFMRGNV5N/MHCLJ6C1PF7CMGpUMu3bVjQIEhgAd/CTejS2
 u2qbGARJF5kSF4tzQhUkvUau9WLhgG26jH2LpOe0UTZ1d2oAAqpA0pCUfESpfqyVi2K9
 XriISX4EXz4AtUg8xuOmNjNaL2hgbzy+s9+oP0t72DGNvQsISYADVh+IVwVouAwDyqfW
 aXGKoANyNInGstmhuztDTHJSjyEL4RDrE9vsdz5WkwG3/NlV+ScccDv+6pDKu5HpPCYQ
 4p3oOBKF6dpa8JXauXwvqVpoR+Ec1nQGlS6TS1c4IDgHElhK80uitrRPbT4IhDp5kq7k Mg== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=UkcAOqKxAmzYpl2tzY+z8JFdlfPvYeckKd75UukjvbjrRYy76+GtWhZgSMmc+YV+DpHjefgvIm7yDDm+oZGJ1jJVh9untA4HdSeuEvqqDufrM6lHCQfHX3c90sP12U0HOELl4wxMeZBCRP9JWyQg6kaLfqf1iZiYMKOa0LRyHuoe3lbrg/kEvgf//Qvgy1xmdSrdb/MzbQN0rSV7kdkkErtU/GfQ/lr4A0PcdB2ThOenrNtQEOro99EaByo3VXhgQ6avMpx+l2E8ufUjW5A3GQgbGv4h9yXuuozUVsHEcI0wcNQxBA90zMW80ftGAJOYkCueRLvcVbJV3VTRzVDTlw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5TcrC9CRttlghKdwb1T/jMH/Irp34EOihRLM/Z4mHX4=;
 b=cf2P/50/wB/odezndYa5vPjakowfkbjtsdPO2vFvhlQTh7saw7FhgzWwvR3RoZaAv/xNwWLbWfbMXfKMXf6JRW3OJ7FAeTWpmDWpQGP8prEvsFBTtBtGUo7j8w1CMD4TpXVQkijUhpRfrYbdwPV32WlaODSMpPOmIbHWhYiufk0XT0tGRUw/hF4+VL569st4L5Ub50gqK9N0ZIaRQWSPE7gKrzjrqjzbyzWFBXIRNMstCouvsk/wadDFcIs1Ea4fvfSQt9I7aPU8g6fJePj08ZP3j3UXPGoYnVK+WSMQBsw6/rEdiNoQaNKWu9SFHtmz3sblLcGGAIJ0FFqlSeLVKw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5TcrC9CRttlghKdwb1T/jMH/Irp34EOihRLM/Z4mHX4=;
 b=cweXIZ8A3uqyPARtmwYe4vBNHOxtv6H6DD08vp7s8vPyUvdQWaBQFRdUANICqfc5eQbHh0E5daL6+d/2625MDG1qv/wcBevd4mUxrjkvzIbf+3YmYOCYh8ro3jjnRNci0eCcAM2U+dvn9TdCY3y8QdxgWcVKfKJjU0x98/h0EcQ=
Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=oracle.com;
Subject: Re: [PATCH v2 6/8] xen/events: add per-xenbus device event statistics
 and settings
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
        linux-kernel@vger.kernel.org
Cc: Stefano Stabellini <sstabellini@kernel.org>
References: <20210211101616.13788-1-jgross@suse.com>
 <20210211101616.13788-7-jgross@suse.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <afbd1081-53fd-2cd2-5504-a6342d38a5ae@oracle.com>
Date: Mon, 15 Feb 2021 16:54:15 -0500
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
In-Reply-To: <20210211101616.13788-7-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Originating-IP: [138.3.200.49]
X-ClientProxiedBy: BY3PR05CA0036.namprd05.prod.outlook.com
 (2603:10b6:a03:39b::11) To BYAPR10MB3288.namprd10.prod.outlook.com
 (2603:10b6:a03:156::21)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: afbbd218-c6af-45e4-5216-08d8d1fc3eeb
X-MS-TrafficTypeDiagnostic: SJ0PR10MB4557:
X-Microsoft-Antispam-PRVS: 
	<SJ0PR10MB455748EE35DA12BD74E2EB678A889@SJ0PR10MB4557.namprd10.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:475;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: afbbd218-c6af-45e4-5216-08d8d1fc3eeb
X-MS-Exchange-CrossTenant-AuthSource: BYAPR10MB3288.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 Feb 2021 21:54:19.2367
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ZZ9rz2CfcbWokm7kPZfiB7bUaUxKtH5lnTNKecce84RAU2NVx+TrZqlLEwSiCOJ+XqSm5M9uqrXxsnDvI8qwxKxbEqY+xvGWooIXjCS/8GA=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR10MB4557
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9896 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 phishscore=0 mlxlogscore=999
 bulkscore=0 suspectscore=0 spamscore=0 malwarescore=0 mlxscore=0
 adultscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2102150166
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9896 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 lowpriorityscore=0 suspectscore=0
 impostorscore=0 priorityscore=1501 clxscore=1015 spamscore=0 mlxscore=0
 phishscore=0 malwarescore=0 bulkscore=0 adultscore=0 mlxlogscore=999
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2102150166


On 2/11/21 5:16 AM, Juergen Gross wrote:
> Add sysfs nodes for each xenbus device showing event statistics (number
> of events and spurious events, number of associated event channels)
> and for setting a spurious event threshold in case a frontend is
> sending too many events without being rogue on purpose.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>


Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>



From xen-devel-bounces@lists.xenproject.org Mon Feb 15 22:24:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 Feb 2021 22:24:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85371.160048 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBmHq-00075W-Vv; Mon, 15 Feb 2021 22:24:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85371.160048; Mon, 15 Feb 2021 22:24:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBmHq-00075P-SP; Mon, 15 Feb 2021 22:24:10 +0000
Received: by outflank-mailman (input) for mailman id 85371;
 Mon, 15 Feb 2021 22:24:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBmHp-00075H-PV; Mon, 15 Feb 2021 22:24:09 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBmHp-0007fl-Fl; Mon, 15 Feb 2021 22:24:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBmHp-00028u-4Y; Mon, 15 Feb 2021 22:24:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lBmHp-00019v-42; Mon, 15 Feb 2021 22:24:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=pOUwuHecvsAd4lBwIP2E/IdWnCOrmNE+huqZciMhs+Q=; b=GQ+MoBhD8L0HQIP9yrTOI/IyQn
	98SnTMYaA4D9ZFhI73Yxvo9cUkkc/jcb7mX7a5pgKoPVwwzdmNJZXtkVAzDwcIR6kTLPYQuUSJ0kB
	XrCwmpPZvldVK6r6lzmdftvvbSwkpQOhgICrgzB1Z1TMPlzowj29g/pw3V5uokxOXbcA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159367-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 159367: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-install:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-localmigrate/x10:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=f40ddce88593482919761f74910f42f4b84c004b
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 15 Feb 2021 22:24:09 +0000

flight 159367 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159367/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 10 host-ping-check-xen      fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      12 debian-install           fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 19 guest-localmigrate/x10   fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl          11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-credit2  11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                f40ddce88593482919761f74910f42f4b84c004b
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  199 days
Failing since        152366  2020-08-01 20:49:34 Z  198 days  344 attempts
Testing same since   159367  2021-02-15 05:07:31 Z    0 days    1 attempts

------------------------------------------------------------
4578 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1033140 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Feb 15 23:28:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 Feb 2021 23:28:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85380.160069 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBnHm-00042O-Fs; Mon, 15 Feb 2021 23:28:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85380.160069; Mon, 15 Feb 2021 23:28:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBnHm-00042H-Cr; Mon, 15 Feb 2021 23:28:10 +0000
Received: by outflank-mailman (input) for mailman id 85380;
 Mon, 15 Feb 2021 23:28:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBnHl-000429-Rg; Mon, 15 Feb 2021 23:28:09 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBnHl-0000I0-Kf; Mon, 15 Feb 2021 23:28:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBnHl-0004eM-Cu; Mon, 15 Feb 2021 23:28:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lBnHl-0001Cs-CM; Mon, 15 Feb 2021 23:28:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=3roV0lBe1C3RjunBqx79sp/uGQPjrGFd1XGMPhxakII=; b=Wx9CX9uC276voiqKlm5O0R8wnJ
	DB+dq0WSyx3CG3607aKbbWlUcGyulS2kaBMu6TJ4hyhQ1Ct0Og+cvl5z+Hv4DXxP687VsoQ+OgpQK
	Z3veF8IONqSE4hEunFFvnJVCNM1yOZEUfOH8I8JTNJcQblfn5n96soqoXLpetSARCg5M=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [linux-5.4 bisection] complete test-armhf-armhf-xl-arndale
Message-Id: <E1lBnHl-0001Cs-CM@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 15 Feb 2021 23:28:09 +0000

branch xen-unstable
xenbranch xen-unstable
job test-armhf-armhf-xl-arndale
testid guest-start

Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
  Bug introduced:  a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Bug not present: acc402fa5bf502d471d50e3d495379f093a7f9e4
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/159392/


  commit a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Author: David Woodhouse <dwmw@amazon.co.uk>
  Date:   Wed Jan 13 13:26:02 2021 +0000
  
      xen: Fix event channel callback via INTX/GSI
      
      [ Upstream commit 3499ba8198cad47b731792e5e56b9ec2a78a83a2 ]
      
      For a while, event channel notification via the PCI platform device
      has been broken, because we attempt to communicate with xenstore before
      we even have notifications working, with the xs_reset_watches() call
      in xs_init().
      
      We tend to get away with this on Xen versions below 4.0 because we avoid
      calling xs_reset_watches() anyway, because xenstore might not cope with
      reading a non-existent key. And newer Xen *does* have the vector
      callback support, so we rarely fall back to INTX/GSI delivery.
      
      To fix it, clean up a bit of the mess of xs_init() and xenbus_probe()
      startup. Call xs_init() directly from xenbus_init() only in the !XS_HVM
      case, deferring it to be called from xenbus_probe() in the XS_HVM case
      instead.
      
      Then fix up the invocation of xenbus_probe() to happen either from its
      device_initcall if the callback is available early enough, or when the
      callback is finally set up. This means that the hack of calling
      xenbus_probe() from a workqueue after the first interrupt, or directly
      from the PCI platform device setup, is no longer needed.
      
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Link: https://lore.kernel.org/r/20210113132606.422794-2-dwmw2@infradead.org
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
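
The init-ordering change this commit describes can be illustrated with a small simulation. The names below mirror the kernel functions named in the commit message, but this is a hypothetical sketch of the ordering constraint, not kernel code: in the XS_HVM case, xs_init() (which talks to xenstore, e.g. via xs_reset_watches()) must not run until event-channel notifications work, so it is deferred into xenbus_probe(), which itself runs either from the initcall or once the callback is finally set up.

```python
# Hypothetical simulation of the xenbus startup ordering; not kernel code.

class XenbusSim:
    def __init__(self, hvm):
        self.hvm = hvm             # True for the XS_HVM case
        self.callback_ready = False
        self.events = []           # order in which steps actually ran

    def xs_init(self):
        # Talks to xenstore, so in the HVM case it must not run before
        # event-channel notifications work.
        assert not self.hvm or self.callback_ready
        self.events.append("xs_init")

    def xenbus_init(self):
        self.events.append("xenbus_init")
        if not self.hvm:
            self.xs_init()         # !XS_HVM: safe to talk to xenstore now

    def xenbus_probe(self):
        if self.hvm:
            self.xs_init()         # XS_HVM: deferred until callback works
        self.events.append("xenbus_probe")

    def device_initcall(self):
        # Probe from the initcall only if notifications already work;
        # otherwise wait for set_callback() to trigger the probe.
        if self.callback_ready or not self.hvm:
            self.xenbus_probe()

    def set_callback(self):
        # Late INTx/GSI (or vector) callback setup: probe now if deferred.
        was_ready = self.callback_ready
        self.callback_ready = True
        if self.hvm and not was_ready and "xenbus_probe" not in self.events:
            self.xenbus_probe()
```

With this ordering, an HVM guest whose callback only becomes available late still runs xs_init() strictly after the callback is ready, which is the hang the arndale bisection below tripped over when the backport misbehaved.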


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/linux-5.4/test-armhf-armhf-xl-arndale.guest-start.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/linux-5.4/test-armhf-armhf-xl-arndale.guest-start --summary-out=tmp/159392.bisection-summary --basis-template=158387 --blessings=real,real-bisect,real-retry linux-5.4 test-armhf-armhf-xl-arndale guest-start
Searching for failure / basis pass:
 159359 fail [host=arndale-westfield] / 158681 [host=arndale-bluewater] 158624 [host=arndale-metrocentre] 158616 [host=arndale-lakeside] 158609 ok.
Failure / basis pass flights: 159359 / 158609
Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 5b9a4104c902d7dec14c9e3c5652a638194487c6 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2e1e8c35f3178df95d79da81ac6deec242da74c2 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 04085ec1ac05a362812e9b0c6b5a8713d7dc88ad
Basis pass 09f983f0c7fc0db79a5f6c883ec3510d424c369c c530a75c1e6a472b0eb9558310b518f0dfcd8860 3b769c5110384fb33bcfeddced80f721ec7838cc 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 452ddbe3592b141b05a7e0676f09c8ae07f98fdd
Generating revisions with ./adhoc-revtuple-generator  git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git#09f983f0c7fc0db79a5f6c883ec3510d424c369c-5b9a4104c902d7dec14c9e3c5652a638194487c6 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#3b769c5110384fb33bcfeddced80f721ec7838cc-2e1e8c35f3178df95d79da81ac6deec242da74c2 git://xenbits.xen.org/qemu-xen.git#7ea428895af2840d85c524f0bd11a38aac308308-7ea428895af2840d85c524f0bd11a38aac308308 git://xenbits.xen.org/osstest/seabios.git#ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e-ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e git://xenbits.xen.org/xen.git#452ddbe3592b141b05a7e0676f09c8ae07f98fdd-04085ec1ac05a362812e9b0c6b5a8713d7dc88ad
Loaded 15001 nodes in revision graph
Searching for test results:
 158593 [host=arndale-bluewater]
 158603 [host=arndale-metrocentre]
 158609 pass 09f983f0c7fc0db79a5f6c883ec3510d424c369c c530a75c1e6a472b0eb9558310b518f0dfcd8860 3b769c5110384fb33bcfeddced80f721ec7838cc 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 452ddbe3592b141b05a7e0676f09c8ae07f98fdd
 158616 [host=arndale-lakeside]
 158624 [host=arndale-metrocentre]
 158681 [host=arndale-bluewater]
 158707 fail irrelevant
 158716 fail irrelevant
 158748 fail 131f8d8a889a5ca66a835eea82bba043ac91a7cf c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e f8708b0ed6d549d1d29b8b5cc287f1f2b642bc63
 158765 blocked 131f8d8a889a5ca66a835eea82bba043ac91a7cf c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 6677b5a3577c16501fbc51a3341446905bd21c38
 158796 fail irrelevant
 158818 fail irrelevant
 158841 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158863 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158881 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158929 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea56ebf67dd55483105aa9f9996a48213e78337e 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158962 fail irrelevant
 159023 fail irrelevant
 159129 fail irrelevant
 159200 fail irrelevant
 159238 fail irrelevant
 159295 fail irrelevant
 159324 fail irrelevant
 159339 fail 5b9a4104c902d7dec14c9e3c5652a638194487c6 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2e1e8c35f3178df95d79da81ac6deec242da74c2 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 04085ec1ac05a362812e9b0c6b5a8713d7dc88ad
 159359 fail 5b9a4104c902d7dec14c9e3c5652a638194487c6 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2e1e8c35f3178df95d79da81ac6deec242da74c2 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 04085ec1ac05a362812e9b0c6b5a8713d7dc88ad
 159374 pass 09f983f0c7fc0db79a5f6c883ec3510d424c369c c530a75c1e6a472b0eb9558310b518f0dfcd8860 3b769c5110384fb33bcfeddced80f721ec7838cc 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 452ddbe3592b141b05a7e0676f09c8ae07f98fdd
 159376 fail 5b9a4104c902d7dec14c9e3c5652a638194487c6 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2e1e8c35f3178df95d79da81ac6deec242da74c2 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 04085ec1ac05a362812e9b0c6b5a8713d7dc88ad
 159378 fail 38f35023fd301abeb01cfd81e73caa2e4e7ec0b1 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159379 pass d8a487e673abf46c69c901bb25da54e9bc7ba45e c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159380 pass 5a1d7bb7d333849eb7d3ab5ebfbf9805b2cd46c9 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159381 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159382 pass 9cec63a3aacbcaee8d09aecac2ca2f8820efcc70 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159383 pass 8ab3478335ad8fc08f14ec73251b084fe02b3ebb c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159384 pass acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159386 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159387 pass acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159388 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159390 pass acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159392 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
Searching for interesting versions
 Result found: flight 158609 (pass), for basis pass
 Result found: flight 159339 (fail), for basis failure (at ancestor ~280)
 Repro found: flight 159374 (pass), for basis pass
 Repro found: flight 159376 (fail), for basis failure
 0 revisions at acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
No revisions left to test, checking graph state.
 Result found: flight 159384 (pass), for last pass
 Result found: flight 159386 (fail), for first failure
 Repro found: flight 159387 (pass), for last pass
 Repro found: flight 159388 (fail), for first failure
 Repro found: flight 159390 (pass), for last pass
 Repro found: flight 159392 (fail), for first failure
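
The confirmation loop in the log above — narrowing to a last-pass/first-fail boundary, then re-running both sides until each result reproduces — can be sketched roughly like this. This is a simplified stand-in for what cs-bisection-step does over its revision-tuple graph: `is_bad` here abstracts away running an entire test flight, and real osstest bisects a graph, not a flat list.

```python
def bisect_first_bad(revs, is_bad, repros=2):
    """Find the first failing revision in a linear history, then re-test
    both sides of the boundary to confirm the results reproduce.

    `revs` is ordered oldest to newest; the first entry must pass and the
    last must fail, mirroring the basis-pass / failure flights."""
    assert not is_bad(revs[0]) and is_bad(revs[-1])
    lo, hi = 0, len(revs) - 1          # invariant: lo passes, hi fails
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_bad(revs[mid]):
            hi = mid
        else:
            lo = mid
    # Repro runs: the boundary only counts if it repeats consistently.
    for _ in range(repros):
        assert not is_bad(revs[lo]), "last-pass did not reproduce"
        assert is_bad(revs[hi]), "first-fail did not reproduce"
    return revs[lo], revs[hi]          # (bug not present, bug introduced)
```

In the flight above this is the process that converged on acc402fa5bf5 as the last pass and a09d4e7acdbf as the first failure, each confirmed by two further repro flights.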

*** Found and reproduced problem changeset ***

  Bug is in tree:  linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
  Bug introduced:  a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Bug not present: acc402fa5bf502d471d50e3d495379f093a7f9e4
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/159392/


  commit a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Author: David Woodhouse <dwmw@amazon.co.uk>
  Date:   Wed Jan 13 13:26:02 2021 +0000
  
      xen: Fix event channel callback via INTX/GSI
      
      [ Upstream commit 3499ba8198cad47b731792e5e56b9ec2a78a83a2 ]
      
      For a while, event channel notification via the PCI platform device
      has been broken, because we attempt to communicate with xenstore before
      we even have notifications working, with the xs_reset_watches() call
      in xs_init().
      
      We tend to get away with this on Xen versions below 4.0 because we avoid
      calling xs_reset_watches() anyway, because xenstore might not cope with
      reading a non-existent key. And newer Xen *does* have the vector
      callback support, so we rarely fall back to INTX/GSI delivery.
      
      To fix it, clean up a bit of the mess of xs_init() and xenbus_probe()
      startup. Call xs_init() directly from xenbus_init() only in the !XS_HVM
      case, deferring it to be called from xenbus_probe() in the XS_HVM case
      instead.
      
      Then fix up the invocation of xenbus_probe() to happen either from its
      device_initcall if the callback is available early enough, or when the
      callback is finally set up. This means that the hack of calling
      xenbus_probe() from a workqueue after the first interrupt, or directly
      from the PCI platform device setup, is no longer needed.
      
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Link: https://lore.kernel.org/r/20210113132606.422794-2-dwmw2@infradead.org
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>

pnmtopng: 116 colors found
Revision graph left in /home/logs/results/bisect/linux-5.4/test-armhf-armhf-xl-arndale.guest-start.{dot,ps,png,html,svg}.
----------------------------------------
159392: tolerable ALL FAIL

flight 159392 linux-5.4 real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/159392/

Failures :-/ but no regressions.

Tests which did not succeed, including tests which could not be run:
 test-armhf-armhf-xl-arndale  14 guest-start             fail baseline untested


jobs:
 test-armhf-armhf-xl-arndale                                  fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Mon Feb 15 23:46:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 Feb 2021 23:46:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85387.160088 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBnZp-0005qZ-6Q; Mon, 15 Feb 2021 23:46:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85387.160088; Mon, 15 Feb 2021 23:46:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBnZp-0005qS-2W; Mon, 15 Feb 2021 23:46:49 +0000
Received: by outflank-mailman (input) for mailman id 85387;
 Mon, 15 Feb 2021 23:46:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MmGi=HR=runbox.com=m.v.b@srs-us1.protection.inumbo.net>)
 id 1lBnZn-0005qN-3k
 for xen-devel@lists.xenproject.org; Mon, 15 Feb 2021 23:46:48 +0000
Received: from aibo.runbox.com (unknown [91.220.196.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c493b652-372d-4272-8cd9-77c60a419d08;
 Mon, 15 Feb 2021 23:46:44 +0000 (UTC)
Received: from [10.9.9.72] (helo=submission01.runbox)
 by mailtransmit02.runbox with esmtp (Exim 4.86_2)
 (envelope-from <m.v.b@runbox.com>)
 id 1lBnZi-0005eU-98; Tue, 16 Feb 2021 00:46:42 +0100
Received: by submission01.runbox with esmtpsa [Authenticated alias (536975)]
 (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1)
 id 1lBnZW-0007iB-Pt; Tue, 16 Feb 2021 00:46:31 +0100
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c493b652-372d-4272-8cd9-77c60a419d08
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=runbox.com;
	 s=selector2; h=Content-Transfer-Encoding:MIME-Version:Message-Id:Date:
	Subject:Cc:To:From; bh=4v9QKILfUudtdP3y6/ul7/qyjBU4cjEWU/fKUxb5z5I=; b=zFkxIb
	3wy3vTud7ItWm9o9/+cgnFqhS9de5gbgW4WThxhZpOtN7qwchyNXqut+NKtvoBGZc0NKuLpGQxeKS
	rjB6E8WbyvIhzqDdxDVRlesN51CNGc1rdWlTLkbrwdCOaBUZg3goil+9zqu5WsOCuTyKWqPdOSz5R
	UQUlR9+AloKgdSjOXf/XL0B0If8RLGp2Viz4nFatYpgn2r+R+GqYpsXpqSgiwE51ZGS2OGsQ3pAc0
	aTuEeGNGxSFzS/asu7qJp4clCtVz5VUty0HPXC6zh5GK/Kk1W69Qxk4myZ+UYrMHwWiMhi2thJiJO
	PEQi4NdjyqDgQfhTKXkFCF//K9HA==;
From: "M. Vefa Bicakci" <m.v.b@runbox.com>
To: xen-devel@lists.xenproject.org
Cc: m.v.b@runbox.com,
	marmarek@invisiblethingslab.com,
	roger.pau@citrix.com,
	jbeulich@suse.com
Subject: [PATCH 0/1] Fix for a buggy XSA-321 resolution in Xen 4.9
Date: Mon, 15 Feb 2021 18:46:18 -0500
Message-Id: <20210215234619.245422-1-m.v.b@runbox.com>
X-Mailer: git-send-email 2.29.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Hello all,

I found a bug in Xen 4.9 involving what I think is an incorrectly backported
patch for XSA-321. I realize that Xen 4.9 no longer receives security support,
but as a Qubes OS 4.0 user, I still rely on Xen 4.8, and this bug affects me
negatively.

I would really appreciate it if anyone could review the following patch in this
e-mail thread. I am mainly looking for a confirmation as to whether the patch
fixes the buggy backport in an acceptable manner. The description of the patch
is very (and possibly too) detailed and should have all of the necessary
background.

Thank you in advance,

Vefa

M. Vefa Bicakci (1):
  x86/ept: Fix buggy XSA-321 backport

 xen/arch/x86/mm/p2m-ept.c | 57 +++++++++++++++++++--------------------
 1 file changed, 28 insertions(+), 29 deletions(-)


base-commit: 4597fc97b3b8870c39214e3aa4132ab711a40691
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Feb 15 23:46:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 Feb 2021 23:46:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85388.160100 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBnZt-0005s0-E5; Mon, 15 Feb 2021 23:46:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85388.160100; Mon, 15 Feb 2021 23:46:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBnZt-0005rt-AE; Mon, 15 Feb 2021 23:46:53 +0000
Received: by outflank-mailman (input) for mailman id 85388;
 Mon, 15 Feb 2021 23:46:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MmGi=HR=runbox.com=m.v.b@srs-us1.protection.inumbo.net>)
 id 1lBnZs-0005qN-0D
 for xen-devel@lists.xenproject.org; Mon, 15 Feb 2021 23:46:52 +0000
Received: from aibo.runbox.com (unknown [91.220.196.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f471da53-0c34-4266-b0f5-992b90e3e36f;
 Mon, 15 Feb 2021 23:46:44 +0000 (UTC)
Received: from [10.9.9.72] (helo=submission01.runbox)
 by mailtransmit03.runbox with esmtp (Exim 4.86_2)
 (envelope-from <m.v.b@runbox.com>)
 id 1lBnZi-0000sc-HJ; Tue, 16 Feb 2021 00:46:42 +0100
Received: by submission01.runbox with esmtpsa [Authenticated alias (536975)]
 (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1)
 id 1lBnZY-0007iB-4j; Tue, 16 Feb 2021 00:46:32 +0100
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f471da53-0c34-4266-b0f5-992b90e3e36f
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=runbox.com;
	 s=selector2; h=Content-Transfer-Encoding:MIME-Version:References:In-Reply-To
	:Message-Id:Date:Subject:Cc:To:From;
	bh=sgze2FZO6yCdRuipkR7pE2WcN1p1yyBzt+7Q3Qr39S8=; b=Mq4G7D3yeew6G2d6PlTVZRMF2A
	PWFnsq5E1jdibEdTisiob2FtLL5/VoGqlBy+QpYG+Mcdb+IB3jeA1Vv57Wuh1F3irmZhEq+NBH7Ak
	ESYYpjA/GnGJ2pVYO9IE9wkix3ShdC4ZCAgFm0YxJoRRQFHnokhAzUZj6Xd+guOnCRwcKSrnZMqcV
	1ShtF/YlZ6/zuXzp1F3CHeRQILSTwOnvg4UbOXMwqFmlF5yAi/OfKAEUvmBuYKYw7cR5/Vq/ISp/A
	pExOYIJr5mBiSuw0F0vy+gIi8KUaXJg4D0hVfvkU4n1R9WH91rFiFutmHdp+tu6H1WtpEbiy9oMnU
	bmo7ohQw==;
From: "M. Vefa Bicakci" <m.v.b@runbox.com>
To: xen-devel@lists.xenproject.org
Cc: m.v.b@runbox.com,
	marmarek@invisiblethingslab.com,
	roger.pau@citrix.com,
	jbeulich@suse.com
Subject: [PATCH 1/1] x86/ept: Fix buggy XSA-321 backport
Date: Mon, 15 Feb 2021 18:46:19 -0500
Message-Id: <20210215234619.245422-2-m.v.b@runbox.com>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20210215234619.245422-1-m.v.b@runbox.com>
References: <20210215234619.245422-1-m.v.b@runbox.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This commit aims to fix commit a852040fe3ab ("x86/ept: flush cache when
modifying PTEs and sharing page tables"). The aforementioned commit is
for the stable-4.9 branch of Xen and is a backported version of commit
c23274fd0412 ("x86/ept: flush cache when modifying PTEs and sharing page
tables"), which was for Xen 4.14.0-rc5 and which fixes the security
issue reported by XSA-321.

Prior to the latter commit, the function atomic_write_ept_entry in Xen
4.14.y consisted mostly of a call to p2m_entry_modify followed by an
atomic replacement of a page table entry; the latter commit adds a call
to iommu_sync_cache under a specific condition:

   static int atomic_write_ept_entry(struct p2m_domain *p2m,
                                     ept_entry_t *entryptr, ept_entry_t new,
                                     int level)
   {
       int rc = p2m_entry_modify(p2m, new.sa_p2mt, entryptr->sa_p2mt,
                                 _mfn(new.mfn), _mfn(entryptr->mfn), level + 1);

       if ( rc )
           return rc;

       write_atomic(&entryptr->epte, new.epte);

  +    /* snipped comment block */
  +    if ( !new.recalc && iommu_use_hap_pt(p2m->domain) )
  +        iommu_sync_cache(entryptr, sizeof(*entryptr));
  +
       return 0;
   }

However, the backport to Xen 4.9.y is a bit different, because
p2m_entry_modify does not exist in Xen 4.9.y; atomic_write_ept_entry
instead open-codes the equivalent logic. I am quoting from Xen 4.8.y for
convenience:

   static int atomic_write_ept_entry(ept_entry_t *entryptr, ept_entry_t new,
                                     int level)
   {
       int rc;
       unsigned long oldmfn = mfn_x(INVALID_MFN);
       bool_t check_foreign = (new.mfn != entryptr->mfn ||
                               new.sa_p2mt != entryptr->sa_p2mt);

       if ( level )
       {
           ASSERT(!is_epte_superpage(&new) || !p2m_is_foreign(new.sa_p2mt));
           write_atomic(&entryptr->epte, new.epte);
           return 0;
       }

       if ( unlikely(p2m_is_foreign(new.sa_p2mt)) )
       {
           rc = -EINVAL;
           if ( !is_epte_present(&new) )
                   goto out;

           if ( check_foreign )
           {
               struct domain *fdom;

               if ( !mfn_valid(new.mfn) )
                   goto out;

               rc = -ESRCH;
               fdom = page_get_owner(mfn_to_page(new.mfn));
               if ( fdom == NULL )
                   goto out;

               /* get refcount on the page */
               rc = -EBUSY;
               if ( !get_page(mfn_to_page(new.mfn), fdom) )
                   goto out;
           }
       }

       if ( unlikely(p2m_is_foreign(entryptr->sa_p2mt)) && check_foreign )
           oldmfn = entryptr->mfn;

       write_atomic(&entryptr->epte, new.epte);

  +    /* snipped comment block */
  +    if ( !new.recalc && iommu_hap_pt_share )
  +        iommu_sync_cache(entryptr, sizeof(*entryptr));
  +
       if ( unlikely(oldmfn != mfn_x(INVALID_MFN)) )
           put_page(mfn_to_page(oldmfn));

       rc = 0;

    out:
       if ( rc )
           gdprintk(XENLOG_ERR, "epte o:%"PRIx64" n:%"PRIx64" rc:%d\n",
                    entryptr->epte, new.epte, rc);
       return rc;
   }

Based on inspection of the p2m_entry_modify function in Xen 4.14.1, it
appears that the part of atomic_write_ept_entry above the call to
write_atomic is encapsulated by p2m_entry_modify, which uncovers one
issue with the backport: In Xen 4.14, if p2m_entry_modify returns
without an error, then the calls to write_atomic and iommu_sync_cache
still occur (assuming that the corresponding if condition is satisfied),
whereas in Xen 4.9.y, there is a code path that skips the call to
iommu_sync_cache when the variable level is non-zero:

  if ( level )
  {
     ASSERT(!is_epte_superpage(&new) || !p2m_is_foreign(new.sa_p2mt));
     write_atomic(&entryptr->epte, new.epte);
     return 0;
  }

This patch reorganizes atomic_write_ept_entry to ensure that the
call to iommu_sync_cache is not inadvertently skipped.

Furthermore, in Xen 4.14.1, p2m_entry_modify calls

  put_page(mfn_to_page(oldmfn));

before the potential call to iommu_sync_cache in atomic_write_ept_entry.
I am not sufficiently familiar with Xen to determine if this is a
significant behavioural change, but this patch makes Xen 4.9.y similar
to Xen 4.14.1 in that regard as well, by further re-organizing the code
in atomic_write_ept_entry.

The incorrect backport was detected with a software configuration
consisting of Qubes OS 4.0 running Xen 4.8.5 in conjunction with Linux
5.10.y. Linux 5.10.y has the recently introduced "unpopulated-alloc"
feature.

The "unpopulated-alloc" feature uncovered the following error message in
Xen's log buffer, whenever an HVM domain with PCI passthrough for a USB
controller exposed an external USB SSD's block device to a PVH domain
via xen-blkback/xen-blkfront, and whenever the PVH domain in question made
rapid write accesses to the exposed block device:

  [VT-D]DMAR:[DMA Read] Request device [0000:00:14.0] fault addr 38600000, iommu reg = ...
  [VT-D]DMAR: reason 06 - PTE Read access is not set

This was only observed when EPT page table sharing was enabled for the
IOMMU. Upon debugging this issue further, it was noticed that Xen 4.12.0
had the same issue, but Xen 4.12.4 did not. Bisection led to the commit
that fixed this issue, which was later discovered to not have been
backported correctly to Xen 4.9. This patch has been tested with Xen 4.8
and Qubes OS 4.0.

Signed-off-by: M. Vefa Bicakci <m.v.b@runbox.com>
---
 xen/arch/x86/mm/p2m-ept.c | 57 +++++++++++++++++++--------------------
 1 file changed, 28 insertions(+), 29 deletions(-)

diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index 036771f43ccd..8492f4c1d321 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -48,7 +48,7 @@ static inline bool_t is_epte_valid(ept_entry_t *e)
 static int atomic_write_ept_entry(ept_entry_t *entryptr, ept_entry_t new,
                                   int level)
 {
-    int rc;
+    int rc = 0;
     unsigned long oldmfn = mfn_x(INVALID_MFN);
     bool_t check_foreign = (new.mfn != entryptr->mfn ||
                             new.sa_p2mt != entryptr->sa_p2mt);
@@ -56,37 +56,39 @@ static int atomic_write_ept_entry(ept_entry_t *entryptr, ept_entry_t new,
     if ( level )
     {
         ASSERT(!is_epte_superpage(&new) || !p2m_is_foreign(new.sa_p2mt));
-        write_atomic(&entryptr->epte, new.epte);
-        return 0;
-    }
-
-    if ( unlikely(p2m_is_foreign(new.sa_p2mt)) )
-    {
-        rc = -EINVAL;
-        if ( !is_epte_present(&new) )
-                goto out;
-
-        if ( check_foreign )
+    } else {
+        if ( unlikely(p2m_is_foreign(new.sa_p2mt)) )
         {
-            struct domain *fdom;
+            rc = -EINVAL;
+            if ( !is_epte_present(&new) )
+                    goto out;
 
-            if ( !mfn_valid(_mfn(new.mfn)) )
-                goto out;
+            if ( check_foreign )
+            {
+                struct domain *fdom;
 
-            rc = -ESRCH;
-            fdom = page_get_owner(mfn_to_page(new.mfn));
-            if ( fdom == NULL )
-                goto out;
+                if ( !mfn_valid(_mfn(new.mfn)) )
+                    goto out;
 
-            /* get refcount on the page */
-            rc = -EBUSY;
-            if ( !get_page(mfn_to_page(new.mfn), fdom) )
-                goto out;
+                rc = -ESRCH;
+                fdom = page_get_owner(mfn_to_page(new.mfn));
+                if ( fdom == NULL )
+                    goto out;
+
+                /* get refcount on the page */
+                rc = -EBUSY;
+                if ( !get_page(mfn_to_page(new.mfn), fdom) )
+                    goto out;
+            }
         }
-    }
 
-    if ( unlikely(p2m_is_foreign(entryptr->sa_p2mt)) && check_foreign )
-        oldmfn = entryptr->mfn;
+        if ( unlikely(p2m_is_foreign(entryptr->sa_p2mt)) && check_foreign )
+            oldmfn = entryptr->mfn;
+
+        if ( unlikely(oldmfn != mfn_x(INVALID_MFN)) )
+            put_page(mfn_to_page(oldmfn));
+
+    }
 
     write_atomic(&entryptr->epte, new.epte);
 
@@ -103,9 +105,6 @@ static int atomic_write_ept_entry(ept_entry_t *entryptr, ept_entry_t new,
     if ( !new.recalc && iommu_hap_pt_share )
         iommu_sync_cache(entryptr, sizeof(*entryptr));
 
-    if ( unlikely(oldmfn != mfn_x(INVALID_MFN)) )
-        put_page(mfn_to_page(oldmfn));
-
     rc = 0;
 
  out:
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Tue Feb 16 03:39:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 03:39:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85404.160130 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBrCy-0003JC-Ob; Tue, 16 Feb 2021 03:39:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85404.160130; Tue, 16 Feb 2021 03:39:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBrCy-0003J4-IE; Tue, 16 Feb 2021 03:39:28 +0000
Received: by outflank-mailman (input) for mailman id 85404;
 Tue, 16 Feb 2021 03:39:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBrCx-0003Iw-IK; Tue, 16 Feb 2021 03:39:27 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBrCx-000719-9H; Tue, 16 Feb 2021 03:39:27 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBrCx-0000OI-1R; Tue, 16 Feb 2021 03:39:27 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lBrCx-0003Gr-0r; Tue, 16 Feb 2021 03:39:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=0JY7v5zD+ow2v1NeGYqs94vrwgm7n5mcPA2T1uiMRWE=; b=S0h6EhzLqBbWT0oVzrZ8voTWKR
	w28vwTUKzcHasAWsZrWBrS5bDzNjP7zBPYiCACivNRutX1TyOnuNQnUfHXEHh23shxUgRI6xYi0vI
	ytA+EnEeuHgQwKQl63rNTDbGbtRqmjEvs4kN27Lothl99xxeF4aI82vrs+Z8WtrgCIVA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159372-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 159372: regressions - FAIL
X-Osstest-Failures:
    linux-5.4:test-arm64-arm64-xl-credit1:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-xsm:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-credit2:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-seattle:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-arndale:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-thunderx:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-libvirt:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-cubietruck:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=5b9a4104c902d7dec14c9e3c5652a638194487c6
X-Osstest-Versions-That:
    linux=a829146c3fdcf6d0b76d9c54556a223820f1f73b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 16 Feb 2021 03:39:27 +0000

flight 159372 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159372/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-credit1  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-xsm      14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-credit2  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl          14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-seattle  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-credit1  14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-arndale  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-thunderx 14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-multivcpu 14 guest-start             fail REGR. vs. 158387
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-cubietruck 14 guest-start            fail REGR. vs. 158387
 test-armhf-armhf-xl          14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-credit2  14 guest-start              fail REGR. vs. 158387

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-examine      4 memdisk-try-append         fail pass in 159359

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     14 guest-start              fail REGR. vs. 158387

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 158387
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 158387
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 158387
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 158387
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 158387
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 158387
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 158387
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 158387
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 158387
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 158387
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                5b9a4104c902d7dec14c9e3c5652a638194487c6
baseline version:
 linux                a829146c3fdcf6d0b76d9c54556a223820f1f73b

Last test of basis   158387  2021-01-12 19:40:06 Z   34 days
Failing since        158473  2021-01-17 13:42:20 Z   29 days   41 attempts
Testing same since   159339  2021-02-14 04:42:28 Z    1 days    3 attempts

------------------------------------------------------------
467 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 14254 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Feb 16 08:01:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 08:01:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85422.160169 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBvIH-00028f-Tm; Tue, 16 Feb 2021 08:01:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85422.160169; Tue, 16 Feb 2021 08:01:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBvIH-00028Y-Q2; Tue, 16 Feb 2021 08:01:13 +0000
Received: by outflank-mailman (input) for mailman id 85422;
 Tue, 16 Feb 2021 08:01:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBvIG-000277-Et; Tue, 16 Feb 2021 08:01:12 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBvIG-0003sJ-8n; Tue, 16 Feb 2021 08:01:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBvIF-0003VS-TG; Tue, 16 Feb 2021 08:01:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lBvIF-0007Gx-Sl; Tue, 16 Feb 2021 08:01:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=nodN9cNrMj6FDFVtCEpCMReMDu25NE5qJCSi5qHcqtE=; b=owCQx0r+tOGt2vDKgDtN7A5g6F
	t1t0c28Jgi6eNyn3TBrDqKEGpV0eXjuGMojmvh36LmTKvDhRCuxYzPPXNb4glVewlMs9NxJX667BU
	yWQ3qbMjg7+C/KacG3WbFP25MYaIcpJS9WlPQBvtX8BwpViQhXuRSSO33Guv4INafkJE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159394-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 159394: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=4f4d862c1c7232a18347616d94c343c929657fdb
X-Osstest-Versions-That:
    ovmf=2e1e8c35f3178df95d79da81ac6deec242da74c2
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 16 Feb 2021 08:01:11 +0000

flight 159394 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159394/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 4f4d862c1c7232a18347616d94c343c929657fdb
baseline version:
 ovmf                 2e1e8c35f3178df95d79da81ac6deec242da74c2

Last test of basis   159300  2021-02-12 14:25:09 Z    3 days
Testing same since   159394  2021-02-15 23:39:45 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Pierre Gondois <Pierre.Gondois@arm.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   2e1e8c35f3..4f4d862c1c  4f4d862c1c7232a18347616d94c343c929657fdb -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Feb 16 09:15:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 09:15:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85455.160213 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBwS0-000096-7K; Tue, 16 Feb 2021 09:15:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85455.160213; Tue, 16 Feb 2021 09:15:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBwS0-00008z-47; Tue, 16 Feb 2021 09:15:20 +0000
Received: by outflank-mailman (input) for mailman id 85455;
 Tue, 16 Feb 2021 09:15:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBwRz-00008r-Al; Tue, 16 Feb 2021 09:15:19 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBwRy-00054o-Vz; Tue, 16 Feb 2021 09:15:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBwRy-0006p8-OI; Tue, 16 Feb 2021 09:15:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lBwRy-0001lm-Np; Tue, 16 Feb 2021 09:15:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=LE7b5Nf8COF0fsMzF8Sw8LxVm6rlssaa7XNQlFyQ07c=; b=xk+H34o3xhIibECopPlemDNXr1
	lr8PtusNc3h8tHgFH56SX0WNth27/qvw8HuA9I8dGcwlKNwOUvjvsaZewozCe/qmoS2b/TkCy6zXV
	+93Mrxx8JMvf9x8GxGT+O7sweNNzCrDVCftXoNtRhVekHz2OoNC6NiLmvf53elXEq5Pc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159389-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 159389: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=8ba4bca570ace1e60614a0808631a517cf5df67a
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 16 Feb 2021 09:15:18 +0000

flight 159389 qemu-mainline real [real]
flight 159405 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/159389/
http://logs.test-lab.xenproject.org/osstest/logs/159405/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                8ba4bca570ace1e60614a0808631a517cf5df67a
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  180 days
Failing since        152659  2020-08-21 14:07:39 Z  178 days  349 attempts
Testing same since   159389  2021-02-15 21:37:04 Z    0 days    1 attempts

------------------------------------------------------------
411 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 112131 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Feb 16 09:20:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 09:20:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85462.160229 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBwX1-00014m-07; Tue, 16 Feb 2021 09:20:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85462.160229; Tue, 16 Feb 2021 09:20:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBwX0-00014f-T5; Tue, 16 Feb 2021 09:20:30 +0000
Received: by outflank-mailman (input) for mailman id 85462;
 Tue, 16 Feb 2021 09:20:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IdGf=HS=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lBwWz-00014a-8o
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 09:20:29 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 898d1e5a-8ce4-475c-ada8-6bbcf7c1b1df;
 Tue, 16 Feb 2021 09:20:27 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 898d1e5a-8ce4-475c-ada8-6bbcf7c1b1df
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613467226;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=LWgAT44xPd6EhzLZAqorQfM5a79jN/+8hx0V0aZtcyc=;
  b=X/3YrZrr/J4lkhdb6jVC0dU9OWq6K+l+riXG7yjvJ0D+8qqPh42sejTj
   pp2nQPfyQicel8v6QtoVwvy5073eqb9JBGBMIr5ZsK5A0ohgN3ylJ0JxO
   +9kLn6ccpsVfV/arhwV3wvTcsZuwJVPds/WsJdyfibTfTwmKmiVWDQ7UD
   Y=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 37306400
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,183,1610427600"; 
   d="scan'208";a="37306400"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ie2mLSSOeyTBPFRAA379GPwMFv6CXETEvmwD9NdXUFKjQSCZ0p8mepDPO+GKs202aLj6ehLbTjD5NaaEX6hpzlU7NrDV1S+IkorfzW6/bpANTSDf2L4aDefODPgmIvyJGh7+4J225nOBbhe6J9mRoYUJ63PqesUInsXJ0Y/So7DDQuzQ6lXMbUD4nBY4dsRUgOEPcB/x8SoSuJEDIzc/tNuOGNpBgRfrjFZZMu/y18iUYgpSrL5bt5lGrRgpBircHIfs8cjXm3kUPVyhrO/6XOvj6ZE6ecZ7UWAYRRPFNiZkz9ZP4nNvadlkZP1G9NOT0FAoo/qo56sg5j1bz53Zlw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=50rRlYd/WoiMQgu4iiiur1BQeY/ousyczYTvbIpF768=;
 b=nz2kxoip8k4GLukuK3bUNCBYXitvMusJGRw7pYLrI21CHJ3VSgMQB8kS5xi+JTlzYY0pI1dlNCeOmjgTbCE/JbBMHFNshd+aIX/yfQdNOlY6pG3EPVHOHPwykaK/3cOY+0TNsUn9ppAT1ocdldk34vFLkufrzoz+5zCLU0NgIXpnRRm2ruqNUtQgSO5zIojvAf3jKIvZfLUH1/uzXkUAfaMgttkkd6ekyhfy3xOx9BWYVIPTx8p6s9vJ0lamgjNrzgz1oCuXP+zBFUyGhFD6BZMiX4Nflxz5xMyWyCHeVI8AEwps6eu18CLBdE2DqntAiew6FmbVKvhUsgSi6bnrYQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=50rRlYd/WoiMQgu4iiiur1BQeY/ousyczYTvbIpF768=;
 b=uLjPnTFdlEvHHWfAuWrnXy6lynhgPE9Rd4n1JpKuOirrS/KtbYW+jNG4eQEJQy++q72JHO6ndH2uTX7QoARsTNsRY5EDBhu4nRcnyI6sI/uenI3mDFu82x1fzHyso42xIziHX+IFlJLQFCVN61fwJFynG4RqFQxmYpWSTjeIbKc=
Date: Tue, 16 Feb 2021 10:20:03 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: "M. Vefa Bicakci" <m.v.b@runbox.com>
CC: <xen-devel@lists.xenproject.org>, <marmarek@invisiblethingslab.com>,
	<jbeulich@suse.com>
Subject: Re: [PATCH 1/1] x86/ept: Fix buggy XSA-321 backport
Message-ID: <YCuOQ3qpFD6RgIld@Air-de-Roger>
References: <20210215234619.245422-1-m.v.b@runbox.com>
 <20210215234619.245422-2-m.v.b@runbox.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20210215234619.245422-2-m.v.b@runbox.com>
X-ClientProxiedBy: AS8PR04CA0025.eurprd04.prod.outlook.com
 (2603:10a6:20b:310::30) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: e09ab386-2e91-4559-4301-08d8d25c0eb3
X-MS-TrafficTypeDiagnostic: DM6PR03MB4843:
X-Microsoft-Antispam-PRVS: <DM6PR03MB484331647457DF909FD60D0F8F879@DM6PR03MB4843.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: e09ab386-2e91-4559-4301-08d8d25c0eb3
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Feb 2021 09:20:10.2998
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: iMUqQQH176kQzMpjL8Io3vI78BGctoU5Xj3st+fgXPIsaEieCUXNKX9w5ZWc96riXvNIUOEn5N11Ue7mPHm6qg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4843
X-OriginatorOrg: citrix.com

On Mon, Feb 15, 2021 at 06:46:19PM -0500, M. Vefa Bicakci wrote:
> This commit aims to fix commit a852040fe3ab ("x86/ept: flush cache when
> modifying PTEs and sharing page tables"). The aforementioned commit is
> for the stable-4.9 branch of Xen and is a backported version of commit
> c23274fd0412 ("x86/ept: flush cache when modifying PTEs and sharing page
> tables"), which was for Xen 4.14.0-rc5 and which fixes the security
> issue reported by XSA-321.
> 
> Prior to the latter commit, the function atomic_write_ept_entry in Xen
> 4.14.y consisted mostly of a call to p2m_entry_modify followed by an
> atomic replacement of a page table entry, and the latter commit adds
> a call to iommu_sync_cache for a specific condition:
> 
>    static int atomic_write_ept_entry(struct p2m_domain *p2m,
>                                      ept_entry_t *entryptr, ept_entry_t new,
>                                      int level)
>    {
>        int rc = p2m_entry_modify(p2m, new.sa_p2mt, entryptr->sa_p2mt,
>                                  _mfn(new.mfn), _mfn(entryptr->mfn), level + 1);
> 
>        if ( rc )
>            return rc;
> 
>        write_atomic(&entryptr->epte, new.epte);
> 
>   +    /* snipped comment block */
>   +    if ( !new.recalc && iommu_use_hap_pt(p2m->domain) )
>   +        iommu_sync_cache(entryptr, sizeof(*entryptr));
>   +
>        return 0;
>    }
> 
> However, the backport to Xen 4.9.y is a bit different, because
> p2m_entry_modify does not exist in Xen 4.9.y, so atomic_write_ept_entry
> does not consist solely of a call to it. I am quoting from Xen 4.8.y
> for convenience:
> 
>    static int atomic_write_ept_entry(ept_entry_t *entryptr, ept_entry_t new,
>                                      int level)
>    {
>        int rc;
>        unsigned long oldmfn = mfn_x(INVALID_MFN);
>        bool_t check_foreign = (new.mfn != entryptr->mfn ||
>                                new.sa_p2mt != entryptr->sa_p2mt);
> 
>        if ( level )
>        {
>            ASSERT(!is_epte_superpage(&new) || !p2m_is_foreign(new.sa_p2mt));
>            write_atomic(&entryptr->epte, new.epte);
>            return 0;
>        }
> 
>        if ( unlikely(p2m_is_foreign(new.sa_p2mt)) )
>        {
>            rc = -EINVAL;
>            if ( !is_epte_present(&new) )
>            goto out;
> 
>            if ( check_foreign )
>            {
>                struct domain *fdom;
> 
>                if ( !mfn_valid(new.mfn) )
>                    goto out;
> 
>                rc = -ESRCH;
>                fdom = page_get_owner(mfn_to_page(new.mfn));
>                if ( fdom == NULL )
>                    goto out;
> 
>                /* get refcount on the page */
>                rc = -EBUSY;
>                if ( !get_page(mfn_to_page(new.mfn), fdom) )
>                    goto out;
>            }
>        }
> 
>        if ( unlikely(p2m_is_foreign(entryptr->sa_p2mt)) && check_foreign )
>            oldmfn = entryptr->mfn;
> 
>        write_atomic(&entryptr->epte, new.epte);
> 
>   +    /* snipped comment block */
>   +    if ( !new.recalc && iommu_hap_pt_share )
>   +        iommu_sync_cache(entryptr, sizeof(*entryptr));
>   +
>        if ( unlikely(oldmfn != mfn_x(INVALID_MFN)) )
>            put_page(mfn_to_page(oldmfn));
> 
>        rc = 0;
> 
>     out:
>        if ( rc )
>            gdprintk(XENLOG_ERR, "epte o:%"PRIx64" n:%"PRIx64" rc:%d\n",
>                     entryptr->epte, new.epte, rc);
>        return rc;
>    }
> 
> Based on inspection of the p2m_entry_modify function in Xen 4.14.1, it
> appears that the part of atomic_write_ept_entry above the call to
> write_atomic is encapsulated by p2m_entry_modify, which uncovers one
> issue with the backport: In Xen 4.14, if p2m_entry_modify returns early
> without an error, then the calls to write_atomic and iommu_sync_cache
> will still occur (assuming that the corresponding if condition is
> satisfied), whereas in Xen 4.9.y, there is a code path that can skip the
> call to iommu_sync_cache, in case the variable level is not zero:
> 
>   if ( level )
>   {
>      ASSERT(!is_epte_superpage(&new) || !p2m_is_foreign(new.sa_p2mt));
>      write_atomic(&entryptr->epte, new.epte);
>      return 0;
>   }
> 
> This patch reorganizes the atomic_write_ept_entry to ensure that the
> call to iommu_sync_cache is not inadvertently skipped.

IMO this is likely too much change in a single patch. If we really
wanted to do this, you should have a pre-patch that re-arranges the
code without any functional change, followed by a patch that fixes the
issue.

In any case I think this is too much change, so I would go for a
smaller fix like my proposal below. Can you please test it?

> Furthermore, in Xen 4.14.1, p2m_entry_modify calls
> 
>   put_page(mfn_to_page(oldmfn));
> 
> before the potential call to iommu_sync_cache in atomic_write_ept_entry.
> I am not sufficiently familiar with Xen to determine if this is a
> significant behavioural change, but this patch makes Xen 4.9.y similar
> to Xen 4.14.1 in that regard as well, by further re-organizing the code
> in atomic_write_ept_entry.

Well, that put_page is only relevant to PVH dom0, but you shouldn't
remove it. The put_page call in newer versions has been moved by
commit ce0224bf96a1a1f82 into p2m_entry_modify.

Here is my proposed fix. I think we could even do away with the else
branch, but if level != 0 then p2m_is_foreign should be false, so
keeping it avoids an extra check.

Thanks, Roger.
---8<---
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index 036771f43c..086739ffdd 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -56,11 +56,8 @@ static int atomic_write_ept_entry(ept_entry_t *entryptr, ept_entry_t new,
     if ( level )
     {
         ASSERT(!is_epte_superpage(&new) || !p2m_is_foreign(new.sa_p2mt));
-        write_atomic(&entryptr->epte, new.epte);
-        return 0;
     }
-
-    if ( unlikely(p2m_is_foreign(new.sa_p2mt)) )
+    else if ( unlikely(p2m_is_foreign(new.sa_p2mt)) )
     {
         rc = -EINVAL;
         if ( !is_epte_present(&new) )



From xen-devel-bounces@lists.xenproject.org Tue Feb 16 09:36:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 09:36:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85468.160241 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBwmA-00026I-Cv; Tue, 16 Feb 2021 09:36:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85468.160241; Tue, 16 Feb 2021 09:36:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBwmA-00026B-9P; Tue, 16 Feb 2021 09:36:10 +0000
Received: by outflank-mailman (input) for mailman id 85468;
 Tue, 16 Feb 2021 09:36:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBwm9-000263-3x; Tue, 16 Feb 2021 09:36:09 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBwm8-0005Om-UI; Tue, 16 Feb 2021 09:36:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBwm8-0007we-Lt; Tue, 16 Feb 2021 09:36:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lBwm8-00076a-LO; Tue, 16 Feb 2021 09:36:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=a0n6JXMKnMYT9wegwMISRioJ26rIIWzsb04XuS9YSsA=; b=FhLCHR6NLE9nTBwMLSnfcn6iJE
	msnyLZ/jq4Ldqjp/gv5KEXpM7+pZsaJkd3+zx61t929gfCGUjIjIHlbg/uNn74Ww0KbayKP9mLy/N
	6HsL1NfA1NTnrThIZu8bTnFUiQQgtL5WWoqFlLO8HX1X447y/AwbsJgXGcsVNPa7XfBo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159401-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 159401: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=6a1f5e8a4f3184bb54b9dcaa3afcf8c97adccb62
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 16 Feb 2021 09:36:08 +0000

flight 159401 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159401/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              6a1f5e8a4f3184bb54b9dcaa3afcf8c97adccb62
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  221 days
Failing since        151818  2020-07-11 04:18:52 Z  220 days  213 attempts
Testing same since   159401  2021-02-16 04:19:55 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 42364 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Feb 16 10:29:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 10:29:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85478.160267 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBxbK-0006e1-Gh; Tue, 16 Feb 2021 10:29:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85478.160267; Tue, 16 Feb 2021 10:29:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBxbK-0006de-Dm; Tue, 16 Feb 2021 10:29:02 +0000
Received: by outflank-mailman (input) for mailman id 85478;
 Tue, 16 Feb 2021 10:29:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uFEE=HS=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1lBxbI-0006dZ-D4
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 10:29:00 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 560467d0-e5af-4044-bd93-81e2276c3ccd;
 Tue, 16 Feb 2021 10:28:58 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 560467d0-e5af-4044-bd93-81e2276c3ccd
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613471338;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=wktsdWIC+5TMN24qfT7chXZPcSVXhF+Od3SnpJcmwSI=;
  b=gq2Co1PmUKtmchcniDdtKO+0D5jK+SgZRBagmXGOk19XqbdiSGV1w0oy
   I1Wudjbrv2T3NCLjx/8uuftdbFbKKQrn6u6/vXBxPVxpMpR6Pwky/7K4g
   bekqOiz3bQEkS9mDWn+EyBa+eUsUGsicf4InK6es6RmsHpjcpR8q5C2Bt
   Y=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: UPofGbnA1xwvmSSvUcCt9fJIvr2o+7ilBz2vJoLDQ4+DUPqEnn7E+YhBNG2eB0/3ZfRyVEULJL
 sVdEBgpsb3CCZw/4FMVxRhE1W9qPZcdja3eAM7iqOByv/mgWTM0LGDkhUj66XsLu3q7WyUx/Ns
 l0Cf04CNqUauEaWxw54fcB13zwdZMQ8Pla4azLKE/9tdEbxTvfoifc3eeZ/fhg6JQUYaI70bAg
 yiV28tEq4ZgciC/Id4TzOEsFZrpn5uVLcB84WVTzQ+EUGM/F+A1ngw8dEFFXCipfIn3xQ5XJ4s
 2Yg=
X-SBRS: 5.1
X-MesageID: 37250563
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,183,1610427600"; 
   d="scan'208";a="37250563"
From: George Dunlap <george.dunlap@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: George Dunlap <george.dunlap@citrix.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>, Roger Pau Monne
	<roger.pau@citrix.com>, Stefano Stabellini <sstabellini@kernel.org>, "Julien
 Grall" <julien@xen.org>
Subject: [PATCH DO NOT APPLY] docs: Document allocator properties and the rubric for using them
Date: Tue, 16 Feb 2021 10:28:39 +0000
Message-ID: <20210216102839.1801667-1-george.dunlap@citrix.com>
X-Mailer: git-send-email 2.30.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Document the properties of the various allocators and lay out a clear
rubric for when to use each.

Signed-off-by: George Dunlap <george.dunlap@citrix.com>
---

This doc is my understanding of the properties of the current
allocators (alloc_xenheap_pages, xmalloc, and vmalloc), and of Jan's
proposed new wrapper, xvmalloc.

xmalloc, vmalloc, and xvmalloc were designed more or less to mirror
similar functions in Linux (kmalloc, vmalloc, and kvmalloc
respectively).

CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Roger Pau Monne <roger.pau@citrix.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>
---
 .../memory-allocation-functions.rst           | 118 ++++++++++++++++++
 1 file changed, 118 insertions(+)
 create mode 100644 docs/hypervisor-guide/memory-allocation-functions.rst

diff --git a/docs/hypervisor-guide/memory-allocation-functions.rst b/docs/hypervisor-guide/memory-allocation-functions.rst
new file mode 100644
index 0000000000..15aa2a1a65
--- /dev/null
+++ b/docs/hypervisor-guide/memory-allocation-functions.rst
@@ -0,0 +1,118 @@
+.. SPDX-License-Identifier: CC-BY-4.0
+
+Xenheap memory allocation functions
+===================================
+
+In general Xen contains two pools (or "heaps") of memory: the *xen
+heap* and the *dom heap*.  Please see the comment at the top of
+``xen/common/page_alloc.c`` for the canonical explanation.
+
+This document describes the various functions available to allocate
+memory from the xen heap: their properties and rules for when they should be
+used.
+
+
+TLDR guidelines
+---------------
+
+* By default, ``xvmalloc`` (or its helper cognates) should be used
+  unless you know you have specific properties that need to be met.
+
+* If you need memory which needs to be physically contiguous, and may
+  be larger than ``PAGE_SIZE``...
+  
+  - ...and is an exact power-of-two number of pages, use ``alloc_xenheap_pages``.
+    
+  - ...and is not an exact power-of-two number of pages, use ``xmalloc`` (or its helper cognates).
+
+* If you don't need memory to be physically contiguous, and know the
+  allocation will always be larger than ``PAGE_SIZE``, you may use
+  ``vmalloc`` (or one of its helper cognates).
+
+* If you know that the allocation will always be less than ``PAGE_SIZE``,
+  you may use ``xmalloc``.
+
+Properties of various allocation functions
+------------------------------------------
+
+Ultimately, the underlying allocator for all of these functions is
+``alloc_xenheap_pages``.  They differ on several different properties:
+
+1. What the underlying allocation sizes are.  This in turn has an
+   effect on:
+
+   - How much memory is wasted when the requested size doesn't match
+
+   - How such allocations are affected by memory fragmentation
+
+   - How such allocations affect memory fragmentation
+
+2. Whether the underlying pages are physically contiguous
+
+3. Whether allocation and deallocation require the cost of mapping and
+   unmapping
+
+``alloc_xenheap_pages`` will allocate a physically contiguous set of
+pages in power-of-two orders.  No mapping or unmapping is done.
+However, if this is used for sizes not close to ``PAGE_SIZE * (1 << n)``, a lot of
+space will be wasted.  Such allocations may fail if the memory becomes
+very fragmented; but such allocations do not tend to contribute to
+that memory fragmentation much.
+
+As such, ``alloc_xenheap_pages`` should be used when you need a size
+of exactly ``PAGE_SIZE * (1 << n)`` physically contiguous pages.
+
+``xmalloc`` is actually two separate allocators.  Allocations of <
+``PAGE_SIZE`` are handled using ``xmem_pool_alloc()``, and allocations >=
+``PAGE_SIZE`` are handled using ``xmalloc_whole_pages()``.
+
+``xmem_pool_alloc()`` is a pool allocator which allocates xenheap
+pages on demand as needed.  This is ideal for small, quick
+allocations: no pages are mapped or unmapped; sub-page allocations are
+expected, and so a minimum of space is wasted; and because xenheap
+pages are allocated one-by-one, 1) they are unlikely to fail unless
+Xen is genuinely out of memory, and 2) it doesn't have a major effect
+on fragmentation of memory.
+
+Allocations of >= ``PAGE_SIZE`` are not possible with the pool
+allocator, so for such sizes, ``xmalloc`` calls
+``xmalloc_whole_pages()``, which in turn calls ``alloc_xenheap_pages``
+with an order large enough to satisfy the request, and then frees all
+the pages which aren't used.
+
+Like the pool allocator, this incurs no mapping or unmapping
+overhead.  Allocations will be physically contiguous (like
+``alloc_xenheap_pages``), but not as much is wasted as a plain
+``alloc_xenheap_pages`` allocation.  However, such an allocation may
+fail if memory fragmented to the point that a contiguous allocation of
+the appropriate size cannot be found; such allocations also tend to
+fragment memory more.
+
+As such, ``xmalloc`` may be called in cases where you know the
+allocation will be less than ``PAGE_SIZE``; or when you need a
+physically contiguous allocation which may be more than
+``PAGE_SIZE``.  
+
+``vmalloc`` will allocate pages one-by-one and map them into a virtual
+memory area designated for the purpose, separated by a guard page.
+Only full pages are allocated, so using it for allocations smaller
+than ``PAGE_SIZE`` is wasteful.  The underlying memory will not
+be physically contiguous.  As such, it is not adversely affected by
+excessive system fragmentation, nor does it contribute to it.
+However, allocating and freeing requires a map and unmap operation
+respectively, both of which adversely affect system performance.
+
+Therefore, ``vmalloc`` should be used for allocations larger than
+``PAGE_SIZE`` which don't need to be physically contiguous.
+
+``xvmalloc`` is like ``xmalloc``, except that for allocations >
+``PAGE_SIZE``, it calls ``vmalloc`` instead.  Thus ``xvmalloc`` should
+always be preferred unless:
+
+1. You need physically contiguous memory, and your size may end up
+   greater than ``PAGE_SIZE``; in which case you should use
+   ``xmalloc`` or ``alloc_xenheap_pages`` as appropriate
+
+2. You are positive that ``xvmalloc`` will choose one specific
+   underlying implementation; in which case you should simply call
+   that implementation directly.
-- 
2.30.0



From xen-devel-bounces@lists.xenproject.org Tue Feb 16 10:55:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 10:55:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85484.160280 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBy15-0000tW-Rq; Tue, 16 Feb 2021 10:55:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85484.160280; Tue, 16 Feb 2021 10:55:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBy15-0000tP-Op; Tue, 16 Feb 2021 10:55:39 +0000
Received: by outflank-mailman (input) for mailman id 85484;
 Tue, 16 Feb 2021 10:55:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lBy14-0000tK-JX
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 10:55:38 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lBy13-0006j6-8C; Tue, 16 Feb 2021 10:55:37 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lBy12-0003Nt-W4; Tue, 16 Feb 2021 10:55:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=CW1+G+rNU8ZB4rLsu/W9CEIluhc25LaTtj4IRPG5cXg=; b=uZQ/76TIeoQJ1koN8+V58wX0rl
	spxoVnVLpT+F4NdxvoLcAOQTM7qWXLdy+Tyd/m0NQskHig65ZcvcVrRs7Lw2203G5vD86pnCS6jIx
	GYYcsu1mqdQqhPz4YSivdR4g56nvi/siKJf9oftOObi3mrzxpArVoGwfQ2EEjI8lkf6o=;
Subject: Re: [PATCH DO NOT APPLY] docs: Document allocator properties and the
 rubric for using them
To: George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
 <jbeulich@suse.com>, Roger Pau Monne <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20210216102839.1801667-1-george.dunlap@citrix.com>
From: Julien Grall <julien@xen.org>
Message-ID: <c5eb64fc-a90b-6e28-bb0d-075e3a870299@xen.org>
Date: Tue, 16 Feb 2021 10:55:34 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210216102839.1801667-1-george.dunlap@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi George,

On 16/02/2021 10:28, George Dunlap wrote:
> Document the properties of the various allocators and lay out a clear
> rubric for when to use each.
> 
> Signed-off-by: George Dunlap <george.dunlap@citrix.com>
> ---
> 
> This doc is my understanding of the properties of the current
> allocators (alloc_xenheap_pages, xmalloc, and vmalloc), and of Jan's
> proposed new wrapper, xvmalloc.
> 
> xmalloc, vmalloc, and xvmalloc were designed more or less to mirror
> similar functions in Linux (kmalloc, vmalloc, and kvmalloc
> respectively).
> 
> CC: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Jan Beulich <jbeulich@suse.com>
> CC: Roger Pau Monne <roger.pau@citrix.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Julien Grall <julien@xen.org>
> ---
>   .../memory-allocation-functions.rst           | 118 ++++++++++++++++++
>   1 file changed, 118 insertions(+)
>   create mode 100644 docs/hypervisor-guide/memory-allocation-functions.rst
> 
> diff --git a/docs/hypervisor-guide/memory-allocation-functions.rst b/docs/hypervisor-guide/memory-allocation-functions.rst
> new file mode 100644
> index 0000000000..15aa2a1a65
> --- /dev/null
> +++ b/docs/hypervisor-guide/memory-allocation-functions.rst
> @@ -0,0 +1,118 @@
> +.. SPDX-License-Identifier: CC-BY-4.0
> +
> +Xenheap memory allocation functions
> +===================================
> +
> +In general Xen contains two pools (or "heaps") of memory: the *xen
> +heap* and the *dom heap*.  Please see the comment at the top of
> +``xen/common/page_alloc.c`` for the canonical explanation.
> +
> +This document describes the various functions available to allocate
> +memory from the xen heap: their properties and rules for when they should be
> +used.
> +
> +
> +TLDR guidelines
> +---------------
> +
> +* By default, ``xvmalloc`` (or its helper cognates) should be used
> +  unless you know you have specific properties that need to be met.
> +
> +* If you need memory which needs to be physically contiguous, and may
> +  be larger than ``PAGE_SIZE``...
> +
> +  - ...and is order 2, use ``alloc_xenheap_pages``.
> +
> +  - ...and is not order 2, use ``xmalloc`` (or its helper cognates)..
> +
> +* If you don't need memory to be physically contiguous, and know the
> +  allocation will always be larger than ``PAGE_SIZE``, you may use
> +  ``vmalloc`` (or one of its helper cognates).
> +
> +* If you know that allocation will always be less than ``PAGE_SIZE``,
> +  you may use ``xmalloc``.

AFAICT, the determining factor is PAGE_SIZE. This is a single value on 
x86 (e.g. 4KB), but on other architectures there may be multiple 
possible values.

For instance, on Arm, this could be 4KB, 16KB, or 64KB (note that only 
the first is so far supported on Xen).

For Arm and common code, it feels to me we can't make a clear decision 
based on PAGE_SIZE. Instead, I continue to think that the decision 
should only be based on physical vs virtually contiguous.

We can then add further rules for x86-specific code if the maintainers want.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Feb 16 10:58:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 10:58:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85489.160298 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBy3m-00014y-Bt; Tue, 16 Feb 2021 10:58:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85489.160298; Tue, 16 Feb 2021 10:58:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBy3m-00014r-8o; Tue, 16 Feb 2021 10:58:26 +0000
Received: by outflank-mailman (input) for mailman id 85489;
 Tue, 16 Feb 2021 10:58:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uFEE=HS=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1lBy3k-00014m-IL
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 10:58:24 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1c168669-766f-4b6d-b921-059614787fea;
 Tue, 16 Feb 2021 10:58:23 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1c168669-766f-4b6d-b921-059614787fea
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613473103;
  h=from:to:subject:date:message-id:references:in-reply-to:
   content-id:content-transfer-encoding:mime-version;
  bh=jjyGTJwvkLj5uA8DyYObNDRsgh7IhPrL2gMWZLzlAUM=;
  b=HftqPTfcRwXGeru3czIzXio2+a6swVWbgJzuYTXhtowPYaU9X69orfQn
   tajcWuw0kWUC63cIdAv3i4dLv8lre0K+5LsZmBZvIGmU2Xi7OOMPTepzg
   rvjztcdRQ4LdccdxawdStkLMoe5zeOD2LAw1pCh+vxPSqG29QDZpeG7cv
   Q=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: CCG9AWwCfmEca/DAkqnGqUku2P3D+SjRuqHO2+eAuPVZp+OeBO3RofcU1TWB2G2J0kHT1j9pN0
 4xYG8y1PGtKKP9/So+L08W61GvJzrg40eb1GHoCdAj0GotvbEtEoKfUUnCs8ZeBCWGpMgcAcoz
 93+KnBdgUK7/VJmFysAh9FQQo8uiSEs7DAOeHvD9P716zCvuKes/aqIJNcKjARxi4a6QHMvWFR
 W1SbwHzmBbg8h3D49AM07LAn3tE178fFgi54Gx1tPqymO3HUkEqTbkLgryFuZKrIbKxJxyDcWf
 5d8=
X-SBRS: 5.2
X-MesageID: 37336199
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,183,1610427600"; 
   d="scan'208";a="37336199"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=m1+kmodrfPEpXZa6Pbtthv9La0zCR2CDdaa42Aca80vQrI8nir4RPH1cxnXetZ4Cl/X5WKSYV0YiYjgxP0f+/BHKpM4w1bKnG+jzJ69KCPhIMqm76rMJzMuBCBDanipa/ECjEgNM5Luek9Uj2FFl6Hx+tmKD0O7cbyCtvCUbCAuwfHkfFY+X0s5I0LwA9c+gifxSJc6luNusuhUyDvtRFTnMsTW3xHB0GvcgUQlICKyT/VesIDTgTH9IoFIM2kep8iPWQ8gVlOJvlnKbu2s31o4mcc41KGrNL+OuPWadu/QcS5ZMs14Lj0ILQxscm6M3XAvZEwB7IU5KiRITvpKI2A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=jjyGTJwvkLj5uA8DyYObNDRsgh7IhPrL2gMWZLzlAUM=;
 b=LM19gqSnFJVFQ2rEkPf2iRMKAS0qkNGcnbRuNEjXm56yIhNXakcnXw+izMIdnRR31/PMrqandkbHZ4bVIBX3gg1S6i5drO7CnER8hPaeXR4aeHTzAW/7sj6qm4s4e9uV6tTtv+0omLyr8vmwPU2TqZfXuUnp40Bt2SpSu4ws0leEpCQnrj6R/9g8xMEt3/sc061fY/ITGQReo8DodLL+2CCqMtn0o+n7w+8Z0O9lA77VSJ1qjRjHnAL9F8aimE0Fmx27x7Ijm5Cee0Li68cGzhY4BeizFJWpCircU2QClF1RUZQUPMiMPt95GAnHAc6baKTRNqpYerHOcBANy7wjzg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=jjyGTJwvkLj5uA8DyYObNDRsgh7IhPrL2gMWZLzlAUM=;
 b=ZjrRCxHIZqUqHXly7TU6NLGDaKgukGZixkkBoguH5nlOG2FSuw0lzH4iNgv4pwrxTTkzhMkIgHQO8OBDkWxC4mhc08D3YC7ePrnDbG3eoHaeSIPtLXAF+uQ7opdF9G9CT66g7LqNV4D2KHUGGgGhqDB2y/vs1wI+cIOajv+CBTs=
From: George Dunlap <George.Dunlap@citrix.com>
To: "open list:X86" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH DO NOT APPLY] docs: Document allocator properties and the
 rubric for using them
Thread-Topic: [PATCH DO NOT APPLY] docs: Document allocator properties and the
 rubric for using them
Thread-Index: AQHXBE6IzoDpW0Psi0SLnVWLvJvx9KpanKQA
Date: Tue, 16 Feb 2021 10:58:18 +0000
Message-ID: <80FEC93D-2304-4A92-844C-481FDA9CA7F6@citrix.com>
References: <20210216102839.1801667-1-george.dunlap@citrix.com>
In-Reply-To: <20210216102839.1801667-1-george.dunlap@citrix.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3654.20.0.2.21)
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 577090af-27a4-4bfd-37df-08d8d269c495
x-ms-traffictypediagnostic: PH0PR03MB5767:
x-microsoft-antispam-prvs: <PH0PR03MB57670D59C511DDA9FAB1401299879@PH0PR03MB5767.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:7219;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: yRuyUMmHkDdjiXHcqJ0un/Z7sATeNuvilNa1KrIENXbGV38z1BSne3WuZ8oJoxxnywYzBaiQUs8nAw2JQo32CKzXD+qJcJpKFe2mNnUfvQ3hk8KPveq+wUJ4H2g1IFJkIo07W9x6WZIMqX6ZmPJGD2FKykUKmoPqX0vqimQMrX5YlspFQ+sq9oP/3kjC6HSEFCvU3oxqUNnDwt+sqpy8QkvUscro/aG2uUzb+71OP/Z2FwTcxgmAAn5VjJ/nHU6XXuAmC2J6tMJ/GJfA5XVslU5/fTND/mI1VXeuUhPlIG90ca021WEUupkONymmr8+4YUvDP8HbeyPnBuS/Mnj7pRUCLHfQrlft9q7V8EquTIqTSj0ks5xWTiU1rAho7oG8n7vmwt/r+xcW8G4cxXDgsbH1KzUw6AM5UdEGotrnSCHPEzDLNBh6YmfdkEYbIb+bEaOh0OVrfUvHh77UAlm4hWhtOngi614TO2mZcm+ShDvoRY3nrupes+MWZxDZyVK4Q1g9KVDFY9lnR58YjSWhzgHRbWMDMcM0EV1gNM+W96xzktjNn/IxAjFf0YYJOEgctZ4fUdy3ZwdApZd1sRcKOg==
x-forefront-antispam-report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:PH0PR03MB5669.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(366004)(376002)(136003)(39860400002)(346002)(396003)(53546011)(64756008)(66556008)(66446008)(186003)(26005)(8936002)(86362001)(66476007)(91956017)(76116006)(8676002)(6486002)(36756003)(71200400001)(83380400001)(66946007)(6506007)(6512007)(33656002)(2616005)(478600001)(5660300002)(2906002)(6916009)(316002)(45980500001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: =?utf-8?B?ZkNLQUppbDcxU3k1dVhObDN4VGtTL2d1M1lmcnZUVXEwYzE0YXBZMlQvckp2?=
 =?utf-8?B?Z3BFNEgxbi9KNTUwWHRTazQrRTA2MFM2Y1lQRmhxUUNhc3hhcnFIaHFnK0xi?=
 =?utf-8?B?d1lrY0xMWjlnMFBCc3ZrUlBUcDYvWXhKQk9TWDl1dW5SQ25QRmhnelY2RlZH?=
 =?utf-8?B?TkEzY1Y1NnV3WEFLVlNOcnJmckxrWW5yYms3WnUySC9RTU1uTVdIdjZIaW9K?=
 =?utf-8?B?dmplZHRRRHA5cUNOMjBIMDRUZXVaUzhOeG1zbGFTaDVWTzMyb1BkWGl6VHRr?=
 =?utf-8?B?c0EwSWkwQXg5ZHQzQko3VDQ1UkxOSFM4RkZjZDYxWGY3TytzVXpaNS80bHY4?=
 =?utf-8?B?ZC9ER1pXNVQzVW4vWjVLa3RZeENNMC91YUpwQjZFd0t2aS9nck80RHhzNTV5?=
 =?utf-8?B?VFViL3hWT1p6UFF2c0Z6UGs5L0RrS1RSNnVJTGhiZWFGaEJ2RHAzdjJoOVRF?=
 =?utf-8?B?UE05UzVPbkcyeGVvR0J0eUpwMEJGWFJCNnRNRUlBSkxvQXVEdm5lSUxxSmx1?=
 =?utf-8?B?S1hkVk5POFlKLzdmSHlzNEkwM2d5cFpORElDaWhPOCt0SDE5YzhrRE9lU0lp?=
 =?utf-8?B?dVZ5VVZiMnhrOWRRZnNuVVZmaXNOL1d3UFdTMjN2bHptakVGSEg4NEhQSFNC?=
 =?utf-8?B?UW9EbUhrTzlrdWNiU2xpMEpmQ29PUGxiNnNXTmgxcGNNOFg2bGV3T0NjSGR5?=
 =?utf-8?B?YzF4S0E1WlU0OTA4eVNlNk5vWXRFY0hETEZmbk4xWVo1ejlLUEQ3VHBhSVkv?=
 =?utf-8?B?U2wyZEhma1RPd2wxc1Y4ZVlLNWI1ZzF1Q1puM1kzRHVRb3RvNUxsN2NHdHpD?=
 =?utf-8?B?WnQ0QkxIdEhCVnpMS2JyQXg3aFo3ek1odEYyb3JPTnpPeXVoRmtVMG5VZ09m?=
 =?utf-8?B?OEtxSHM5dWw2bUFLMDlWMDY5NVBRVDNXekVyZkJRZ2F0K0o0WXh2KzBhU2o4?=
 =?utf-8?B?OXpEMG5HL3JYL1JJWFNvdm5yK0dDR2xaY05DSnlMK2ZZU1FtcWt0VjlEUGw0?=
 =?utf-8?B?aVpOd3dUUWpuMUx5RlA1QWdlcjBOTnZpM0pnRUNaRlBydDdlbXZxVldlVjJV?=
 =?utf-8?B?a3dsTGxYRzBoWVowcUFyRUFOa3BwZ3pid25vWjBMVWJnY2xWMUxrS0FDQTlM?=
 =?utf-8?B?OXcycU45cEhZUi9pNyszOUJXR01PVk1VeEprNFM2SWoxelFhb1UybmxkY0ly?=
 =?utf-8?B?ZXNmY1hza21NUGFXakRoMWpwVEtxVlRRUFd4M0p0R1J6SXlaaDFONkdYMHQ5?=
 =?utf-8?B?WTFNUWk4Q05rV2M5RGlrdFMyTit5dTlZbnJUL0h2d2RIMk1laW5NQnF6bk5m?=
 =?utf-8?B?Zk80di9FeTgxeDNmaHluQ1FHVFZldmpTTVdpSlpTS0RwSXNxNnd5RHliTHRE?=
 =?utf-8?B?U0dMK2d5TWJOOWNWcjh2ckVTdm1ienBEM0V2ZnlDNnU2UkZwY2xZc01laDky?=
 =?utf-8?B?U1p1MHVuaCtOQ0QrNTZpUHQ3VjAwUE5ldGl1YzlIdTZpV2Z6ZnUyOFlpWjZv?=
 =?utf-8?B?cXJtTzd3cDJVRXd4WVFFM2V1MWhDRHJJdWF2WUZQZk96Q3p4NkRDNGFucmYw?=
 =?utf-8?B?UkhlVUtuc2tvbVRZSFpSN2greUJFYTUrVmt1QnFKbzdaTTE0MlYzaitWR01J?=
 =?utf-8?B?d0Qzejk2QS9UT0QrcDVSaWUvenFqWWlGWm9md0UxOER1ZkdVTGE1MnY0U3V5?=
 =?utf-8?B?Q25uZkk3M1g3Qjk0TXhLbERMZVVWSkN4eU9KOWVlRTM4VDk0Sm1sMHJ3bWl1?=
 =?utf-8?B?TUUrd1lhYjM1OUJDcklGeFhRNlRPcnVmbHNYYklON1dHTDhKcUZqZ3ZrRmFv?=
 =?utf-8?B?Zk5xVE1aTUJDaERpVURLUT09?=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <E84114AFEF0CE147A21ECDB854346DE9@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: PH0PR03MB5669.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 577090af-27a4-4bfd-37df-08d8d269c495
X-MS-Exchange-CrossTenant-originalarrivaltime: 16 Feb 2021 10:58:18.3033
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: n+hElF+KPHf25Pp8DyDn5mzpIM8nQhFudBV9GG3RsKKeZgBuo9sT78sciJLuz6CMCZcWaztG7ssry/FsxSafmqlHqoSi2UIwymw6n1YKSL4=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB5767
X-OriginatorOrg: citrix.com



> On Feb 16, 2021, at 10:28 AM, George Dunlap <george.dunlap@citrix.com> wrote:
> 
> Document the properties of the various allocators and lay out a clear
> rubric for when to use each.
> 
> Signed-off-by: George Dunlap <george.dunlap@citrix.com>
> ---
> 
> This doc is my understanding of the properties of the current
> allocators (alloc_xenheap_pages, xmalloc, and vmalloc), and of Jan's
> proposed new wrapper, xvmalloc.
> 
> xmalloc, vmalloc, and xvmalloc were designed more or less to mirror
> similar functions in Linux (kmalloc, vmalloc, and kvmalloc
> respectively).
> 
> CC: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Jan Beulich <jbeulich@suse.com>
> CC: Roger Pau Monne <roger.pau@citrix.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Julien Grall <julien@xen.org>
> ---
> .../memory-allocation-functions.rst           | 118 ++++++++++++++++++
> 1 file changed, 118 insertions(+)
> create mode 100644 docs/hypervisor-guide/memory-allocation-functions.rst
> 
> diff --git a/docs/hypervisor-guide/memory-allocation-functions.rst b/docs/hypervisor-guide/memory-allocation-functions.rst
> new file mode 100644
> index 0000000000..15aa2a1a65
> --- /dev/null
> +++ b/docs/hypervisor-guide/memory-allocation-functions.rst
> @@ -0,0 +1,118 @@
> +.. SPDX-License-Identifier: CC-BY-4.0
> +
> +Xenheap memory allocation functions
> +===================================
> +
> +In general Xen contains two pools (or "heaps") of memory: the *xen
> +heap* and the *dom heap*.  Please see the comment at the top of
> +``xen/common/page_alloc.c`` for the canonical explanation.
> +
> +This document describes the various functions available to allocate
> +memory from the xen heap: their properties and rules for when they should be
> +used.
> +
> +
> +TLDR guidelines
> +---------------
> +
> +* By default, ``xvmalloc`` (or its helper cognates) should be used
> +  unless you know you have specific properties that need to be met.
> +
> +* If you need memory which needs to be physically contiguous, and may
> +  be larger than ``PAGE_SIZE``...
> +  
> +  - ...and is order 2, use ``alloc_xenheap_pages``.
> +    
> +  - ...and is not order 2, use ``xmalloc`` (or its helper cognates)..
> +
> +* If you don't need memory to be physically contiguous, and know the
> +  allocation will always be larger than ``PAGE_SIZE``, you may use
> +  ``vmalloc`` (or one of its helper cognates).
> +
> +* If you know that allocation will always be less than ``PAGE_SIZE``,
> +  you may use ``xmalloc``.
> +
> +Properties of various allocation functions
> +------------------------------------------
> +
> +Ultimately, the underlying allocator for all of these functions is
> +``alloc_xenheap_pages``.  They differ on several different properties:
> +
> +1. What underlying allocation sizes are.  This in turn has an effect
> +   on:
> +
> +   - How much memory is wasted when requested size doesn't match
> +
> +   - How such allocations are affected by memory fragmentation
> +
> +   - How such allocations affect memory fragmentation
> +
> +2. Whether the underlying pages are physically contiguous
> +
> +3. Whether allocation and deallocation require the cost of mapping and
> +   unmapping
> +
> +``alloc_xenheap_pages`` will allocate a physically contiguous set of
> +pages on orders of 2.  No mapping or unmapping is done.  However, if
> +this is used for sizes not close to ``PAGE_SIZE * (1 << n)``, a lot of
> +space will be wasted.  Such allocations may fail if the memory becomes
> +very fragmented; but such allocations do not tend to contribute to
> +that memory fragmentation much.
> +
> +As such, ``alloc_xenheap_pages`` should be used when you need a size
> +of exactly ``PAGE_SIZE * (1 << n)`` physically contiguous pages.
> +
> +``xmalloc`` is actually two separate allocators.  Allocations of <
> +``PAGE_SIZE`` are handled using ``xmem_pool_alloc()``, and allocations >=
> +``PAGE_SIZE`` are handled using ``xmalloc_whole_pages()``.
> +
> +``xmem_pool_alloc()`` is a pool allocator which allocates xenheap
> +pages on demand as needed.  This is ideal for small, quick
> +allocations: no pages are mapped or unmapped; sub-page allocations are
> +expected, and so a minimum of space is wasted; and because xenheap
> +pages are allocated one-by-one, 1) they are unlikely to fail unless
> +Xen is genuinely out of memory, and 2) it doesn't have a major effect
> +on fragmentation of memory.
> +
> +Allocations of > ``PAGE_SIZE`` are not possible with the pool
> +allocator, so for such sizes, ``xmalloc`` calls
> +``xmalloc_whole_pages()``, which in turn calls ``alloc_xenheap_pages``
> +with an order large enough to satisfy the request, and then frees all
> +the pages which aren't used.
> +
> +Like the other allocator, this incurs no mapping or unmapping
> +overhead.  Allocations will be physically contiguous (like
> +``alloc_xenheap_pages``), but not as much is wasted as a plain
> +``alloc_xenheap_pages`` allocation.  However, such an allocation may
> +fail if memory fragmented to the point that a contiguous allocation of
> +the appropriate size cannot be found; such allocations also tend to
> +fragment memory more.
> +
> +As such, ``xmalloc`` may be called in cases where you know the
> +allocation will be less than ``PAGE_SIZE``; or when you need a
> +physically contiguous allocation which may be more than
> +``PAGE_SIZE``.
> +
> +``vmalloc`` will allocate pages one-by-one and map them into a virtual
> +memory area designated for the purpose, separated by a guard page.
> +Only full pages are allocated, so using it from less tham
> +``PAGE_SIZE`` allocations is wasteful.  The underlying memory will not
> +be physically contiguous.  As such, it is not adversely affected by
> +excessive system fragmentation, nor does it contribute to it.
> +However, allocating and freeing requires a map and unmap operation
> +respectively, both of which adversely affect system performance.
> +
> +Therefore, ``vmalloc`` should be used for page allocations over a page
> +size in length which don't need to be physically contiguous.
> +
> +``xvmalloc`` is like ``xmalloc``, except that for allocations >
> +``PAGE_SIZE``, it calls ``vmalloc`` instead.  Thus ``xvmalloc`` should
> +always be preferred unless:
> +
> +1. You need physically contiguous memory, and your size may end up
> +   greater than ``PAGE_SIZE``; in which case you should use
> +   ``xmalloc`` or ``alloc_xenheap_pages`` as appropriate
> +
> +2. You are positive that ``xvmalloc`` will choose one specific
> +   underlying implementation; in which case you should simply call
> +   that implementation directly.

Basically, the more I look at this whole thing — particularly the fact that xmalloc already has an `if ( size > PAGE_SIZE)` inside of it — the more I think this last point is just a waste of everyone's time.

I'm inclined to go with Julien's suggestion, that we use xmalloc when we need physically contiguous memory (with a comment), and xvmalloc everywhere else.  We can implement xvmalloc such that it's no slower than xmalloc is currently (i.e., it directly calls `xmem_pool_alloc` when size < PAGE_SIZE, rather than calling xmalloc and having xmalloc do the comparison again).

 -George


From xen-devel-bounces@lists.xenproject.org Tue Feb 16 11:16:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 11:16:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85494.160309 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lByKu-0002vh-UQ; Tue, 16 Feb 2021 11:16:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85494.160309; Tue, 16 Feb 2021 11:16:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lByKu-0002va-RQ; Tue, 16 Feb 2021 11:16:08 +0000
Received: by outflank-mailman (input) for mailman id 85494;
 Tue, 16 Feb 2021 11:16:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uFEE=HS=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1lByKs-0002vS-Qi
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 11:16:06 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dcb7f0d3-c7d4-4fc2-9ca2-0dfccb8eaa70;
 Tue, 16 Feb 2021 11:16:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dcb7f0d3-c7d4-4fc2-9ca2-0dfccb8eaa70
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613474165;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=prvIo8r5uqxUuSL4WmFE54fDoRX4oi5o+88Tm/eLTe4=;
  b=YUutq58Uqm10RqdJ1IWFZehSNcL1Lk9nYAtm1FsPZ1kuRJqBDyugCL0W
   PG4CQTo5vfDnT4zy1qyS5np7QbL+AvzT6aXCkT5HktYy9/ol8BcWWxR0e
   8k+I/PSF43cSx8Ttu7wf3z5MpgZcQwUEYOdFh6Ri+kShd07MPb5XooBBS
   o=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: /aE+jzHXvAR33K7j797FKAtzpQ9lfZHI5n1HcDaxHdW9uXEIfJydkO/W3frGD2SFadT07i1ypr
 sFPEeCNshELYrsEm1Uqyuh9HCrm49jlRVZ91TJYH9HO6Vzd2Gi9mdqYGXAQ5hAdGl2alAmpbB5
 0zqFBVZTLfeXiu+szdqDEFhNVSlQbdVF37SRjWbjPtyFApaRGjzpizEJz+nOJ08urGXJJX3Qi6
 8lbopTlC0+AXxzNyU8+N8Yj7eOHDY6/1MWk9q62m6TUzQUQ6VS4Lk3tHXSw6Am6aKPidOMl/ji
 VFA=
X-SBRS: 5.2
X-MesageID: 38678394
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,183,1610427600"; 
   d="scan'208";a="38678394"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=VdomDXxFi8kvUxmcHfcsQDrxi/xm/+6z4qWmug5PvaFReefQo6ynpUKAFZGfIyYOSXLT+sUCWaDs/7tFRFQq1ZbipUD0/vEmlWG5CeTwfgQwbPeFm0zOVkDSwVT6IM9+TLtOvrhcAwYAKu7YqlOC5gSRhImOhRlvPxm6OB6Lwg6sBm4OQdYP1/cAbbLcgswA+7CipfPlabH0g/VsAuPPQQwuvHk6GmhgaSTFHIdmQi2sGvcrAfB9xD7ekr+hUG7nCNuw6WaHFUlJ3+F5RoCZhsCT070pdChydphHQnHHz0pmQa3694M3mjwp7CkwgN3JSF9/9DNEMstaBTRzkMry1Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=prvIo8r5uqxUuSL4WmFE54fDoRX4oi5o+88Tm/eLTe4=;
 b=nUU2RTzeUJlvgjVDg+zIyC0mGL+ba0+qAf395u8eoQT7l8bgkaUWF5mszbXFbq9FQ1/0Y818+SbWt+FziMQTPajXPxvldtRTETm54cg02QhudLUYFmUZhyGu+Yie6zKEuetrRtiNNiRDfedoX1tPaXG7QDc84nj6+MN2FZCoSukAEY/wBe4lBaYCCpOyo4po3QMDyuV2PtAp3As5qiPgSeon7jxnxIEoDW8gd3SykfD68nonAtTHsayaYt4SwJe9yRr6vWEJOIM9DXHm1C6EgCtdSVywHgNKogguzMFj7tcyAYH/SonBAy7nb/TCZMTFVt/RE05ZkrSVBtKufQmlyQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=prvIo8r5uqxUuSL4WmFE54fDoRX4oi5o+88Tm/eLTe4=;
 b=JIHHyNAily+5l45JKbySor+Z43B4PVofduFkJvPAa6TqPQ6/9VCnEtNXrD18lf90eqNgymsr7fpSDPXLxrAFWZgykrdPFRVNdswQwDF0MZmvuyv8D30d3XK4aBgd/WcEKpoXkrwVzHsjWgu9wIo7LqteDa1F0Bb2GlM2lkriFR8=
From: George Dunlap <George.Dunlap@citrix.com>
To: Julien Grall <julien@xen.org>
CC: "open list:X86" <xen-devel@lists.xenproject.org>, Andrew Cooper
	<Andrew.Cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>, Roger Pau Monne
	<roger.pau@citrix.com>, Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH DO NOT APPLY] docs: Document allocator properties and the
 rubric for using them
Thread-Topic: [PATCH DO NOT APPLY] docs: Document allocator properties and the
 rubric for using them
Thread-Index: AQHXBE6IzoDpW0Psi0SLnVWLvJvx9Kpam+MAgAAFtoA=
Date: Tue, 16 Feb 2021 11:16:02 +0000
Message-ID: <E820CE9D-9671-4ED3-872E-3AECE21505AC@citrix.com>
References: <20210216102839.1801667-1-george.dunlap@citrix.com>
 <c5eb64fc-a90b-6e28-bb0d-075e3a870299@xen.org>
In-Reply-To: <c5eb64fc-a90b-6e28-bb0d-075e3a870299@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3654.20.0.2.21)
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 0b7cbc34-48c9-4582-2874-08d8d26c3e97
x-ms-traffictypediagnostic: PH0PR03MB5782:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: <PH0PR03MB578214A3C6E2A94554A40A5499879@PH0PR03MB5782.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:9508;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <6AD3F8BABA54EE44AE4257E8BD110A92@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: PH0PR03MB5669.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 0b7cbc34-48c9-4582-2874-08d8d26c3e97
X-MS-Exchange-CrossTenant-originalarrivaltime: 16 Feb 2021 11:16:02.0135
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 354THSTQfW2IzJUlxHBuGHoffeauNm/sAIq0rrdynuNCXsG0w74KX3NguOjWw+EFigwtWp9swTI/NEiEnnBKBMNh1yZRO55dSnwOA7iV6Ak=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB5782
X-OriginatorOrg: citrix.com


> On Feb 16, 2021, at 10:55 AM, Julien Grall <julien@xen.org> wrote:
> 
> Hi George,
> 
> On 16/02/2021 10:28, George Dunlap wrote:
>> Document the properties of the various allocators and lay out a clear
>> rubric for when to use each.
>> Signed-off-by: George Dunlap <george.dunlap@citrix.com>
>> ---
>> This doc is my understanding of the properties of the current
>> allocators (alloc_xenheap_pages, xmalloc, and vmalloc), and of Jan's
>> proposed new wrapper, xvmalloc.
>> xmalloc, vmalloc, and xvmalloc were designed more or less to mirror
>> similar functions in Linux (kmalloc, vmalloc, and kvmalloc
>> respectively).
>> CC: Andrew Cooper <andrew.cooper3@citrix.com>
>> CC: Jan Beulich <jbeulich@suse.com>
>> CC: Roger Pau Monne <roger.pau@citrix.com>
>> CC: Stefano Stabellini <sstabellini@kernel.org>
>> CC: Julien Grall <julien@xen.org>
>> ---
>>  .../memory-allocation-functions.rst           | 118 ++++++++++++++++++
>>  1 file changed, 118 insertions(+)
>>  create mode 100644 docs/hypervisor-guide/memory-allocation-functions.rst
>> diff --git a/docs/hypervisor-guide/memory-allocation-functions.rst b/docs/hypervisor-guide/memory-allocation-functions.rst
>> new file mode 100644
>> index 0000000000..15aa2a1a65
>> --- /dev/null
>> +++ b/docs/hypervisor-guide/memory-allocation-functions.rst
>> @@ -0,0 +1,118 @@
>> +.. SPDX-License-Identifier: CC-BY-4.0
>> +
>> +Xenheap memory allocation functions
>> +===================================
>> +
>> +In general Xen contains two pools (or "heaps") of memory: the *xen
>> +heap* and the *dom heap*.  Please see the comment at the top of
>> +``xen/common/page_alloc.c`` for the canonical explanation.
>> +
>> +This document describes the various functions available to allocate
>> +memory from the xen heap: their properties and rules for when they should be
>> +used.
>> +
>> +
>> +TLDR guidelines
>> +---------------
>> +
>> +* By default, ``xvmalloc`` (or its helper cognates) should be used
>> +  unless you know you have specific properties that need to be met.
>> +
>> +* If you need memory which needs to be physically contiguous, and may
>> +  be larger than ``PAGE_SIZE``...
>> +
>> +  - ...and is order 2, use ``alloc_xenheap_pages``.
>> +
>> +  - ...and is not order 2, use ``xmalloc`` (or its helper cognates).
>> +
>> +* If you don't need memory to be physically contiguous, and know the
>> +  allocation will always be larger than ``PAGE_SIZE``, you may use
>> +  ``vmalloc`` (or one of its helper cognates).
>> +
>> +* If you know that allocation will always be less than ``PAGE_SIZE``,
>> +  you may use ``xmalloc``.
> 
> AFAICT, the determining factor is PAGE_SIZE. This is a single value on x86 (e.g. 4KB), but on other architectures this may be multiple values.
> 
> For instance, on Arm, this could be 4KB, 16KB, or 64KB (note that only the first is so far supported on Xen).
> 
> For Arm and common code, it feels to me we can't make a clear decision based on PAGE_SIZE. Instead, I continue to think that the decision should only be based on physical vs virtually contiguous.
> 
> We can then add further rules for x86-specific code if the maintainers want.

Sorry, my second mail was somewhat delayed — my intent was: 1) post the document I'd agreed to write, 2) say why I think the proposal is a bad idea. :-)

Re page size — the vast majority of the time, we're talking about "knowing" that the size is less than 4k.  If we're confident that no architecture will ever have a page size less than 4k, then we know that all allocations less than 4k will always be less than PAGE_SIZE.  Obviously larger page sizes then become an issue.

But in any case — unless we have BUG_ON(size > PAGE_SIZE), we're going to have to have a fallback, which is going to cost one precious conditional, making the whole exercise pointless.

 -George


From xen-devel-bounces@lists.xenproject.org Tue Feb 16 11:17:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 11:17:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85497.160322 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lByMX-00034a-ED; Tue, 16 Feb 2021 11:17:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85497.160322; Tue, 16 Feb 2021 11:17:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lByMX-00034T-B0; Tue, 16 Feb 2021 11:17:49 +0000
Received: by outflank-mailman (input) for mailman id 85497;
 Tue, 16 Feb 2021 11:17:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uFEE=HS=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1lByMW-00034N-0o
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 11:17:48 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6e20ea3a-92a4-4e46-8392-5a436c750d18;
 Tue, 16 Feb 2021 11:17:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6e20ea3a-92a4-4e46-8392-5a436c750d18
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613474266;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=zVHhheOKCyOgarTdRy6kcVMNKFqBQcaf5V00FWxK8Mo=;
  b=XDEZIY+k9Mk+SSrGEa1r29Z/fu/IhVLEjlrcCXtWiz/9PVBXtvfPdmSP
   eRGB4G6N7MttPyWRI3u7zW0IPf8kMURkidqs5twzv3ju8u8nEkeoVRGlx
   +cVAdONsd5HKuRlYAWv/3lcBEWAlrVao4qZ5qHpVcfkyMPuNmuc7u1L8r
   Q=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 37312113
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,183,1610427600"; 
   d="scan'208";a="37312113"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=E4gD5UAW1AHZzX1/gfcliolnJ4/1oDKojmqPUwzVz/wkky5ZsKlSK1Xpmcu0eAUrS7TmIcRToOYffKXKGvG7xR4JJYn2WoaMQtX1u6c4NXWfTvykoO9+/nmuWmjr/wa4w2Z6U3cbbWI1+Tpg9Bf8iPZfntayfNwl1lE1qv7okOZTpW2bgqj5y0VcugV4Ipn2bemb7x73vKrLTh5yc2TxOxI8kM7Uy7WKwFVoJcaZGStiWycYU3t3U1eLNhlJhbcQglJp4XfeTGrBY/6cvYANovNXs/QmWGvhzuivEmVr0zFQVtIGdCKdY+1WRYpIgKdJc9h6l4KRfHZRTnOWLHMRtg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zVHhheOKCyOgarTdRy6kcVMNKFqBQcaf5V00FWxK8Mo=;
 b=oMHu9MR9ZR6Wrc/b7TOvbPILgfbOaMgSo5oH6xAgYvNJZdIvNaVFpHVK97Jxgv6d2nqtwKvMju0iQPKamyKxFvbAuxDcBMovcPH7fJaRcxeA6i10CzJYXFTtYoxeH3+rbsWcahWCr/l5xLYHS3YZJhcXQPccbUba3aAct7ecKAcgetSeoRa/KyP8jqJSRIlz9ALdH5iB3uGHNfOHEVH/IbDgY3w1HmhoKo52oWLd4BZit8zgBM0PKQBlYH5gg3A4TC/L9rqRdRB6i6OSBEETQr/Tc3IhCFq3lcngOtPCBNzT8Ycm0AINQGnaderUac2qSf71PUSB8EWULSX4cgBTlA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zVHhheOKCyOgarTdRy6kcVMNKFqBQcaf5V00FWxK8Mo=;
 b=rmidaeOMivcCcBJb1qjdz4Eo6gsWIYKo+ugO0+j0/i+6HWSRXds6etqsuLUlYZDJydg44WecISUljLlI7ADgYF2NfIcIycBdm6mqN34Nm4w4RDzJxftLStBxco79Tgn8nH0Q77VK4gYXT8kFkKaaalvO32jcap7ajG5nxQjxiKQ=
From: George Dunlap <George.Dunlap@citrix.com>
To: Julien Grall <julien@xen.org>
CC: "open list:X86" <xen-devel@lists.xenproject.org>, Andrew Cooper
	<Andrew.Cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>, Roger Pau Monne
	<roger.pau@citrix.com>, Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH DO NOT APPLY] docs: Document allocator properties and the
 rubric for using them
Thread-Topic: [PATCH DO NOT APPLY] docs: Document allocator properties and the
 rubric for using them
Thread-Index: AQHXBE6IzoDpW0Psi0SLnVWLvJvx9Kpam+MAgAAFtoCAAAB2AA==
Date: Tue, 16 Feb 2021 11:17:41 +0000
Message-ID: <E0E24EA5-CF14-45AA-8C0A-122F87051EC0@citrix.com>
References: <20210216102839.1801667-1-george.dunlap@citrix.com>
 <c5eb64fc-a90b-6e28-bb0d-075e3a870299@xen.org>
 <E820CE9D-9671-4ED3-872E-3AECE21505AC@citrix.com>
In-Reply-To: <E820CE9D-9671-4ED3-872E-3AECE21505AC@citrix.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3654.20.0.2.21)
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: bdf9687e-b470-491a-d838-08d8d26c7a26
x-ms-traffictypediagnostic: PH0PR03MB5893:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: <PH0PR03MB5893F86D22F0CAEB50D4AAC299879@PH0PR03MB5893.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:9508;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <5378BE6D9F446B4097B29C427759F275@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: PH0PR03MB5669.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: bdf9687e-b470-491a-d838-08d8d26c7a26
X-MS-Exchange-CrossTenant-originalarrivaltime: 16 Feb 2021 11:17:41.9437
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: HCU5XUEByvDnv3FjD++16sX3Gs1ALZ3mzLKT4JbwBNE5v3sepavMVZV73O4ZdNl4KM0iQWe0z6Qxeggb/sLEm0HVJqtdq+gtBdBQUZJOuJk=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB5893
X-OriginatorOrg: citrix.com


> On Feb 16, 2021, at 11:16 AM, George Dunlap <george.dunlap@citrix.com> wrote:
> 
> 
> 
>> On Feb 16, 2021, at 10:55 AM, Julien Grall <julien@xen.org> wrote:
>> 
>> Hi George,
>> 
>> On 16/02/2021 10:28, George Dunlap wrote:
>>> Document the properties of the various allocators and lay out a clear
>>> rubric for when to use each.
>>> Signed-off-by: George Dunlap <george.dunlap@citrix.com>
>>> ---
>>> This doc is my understanding of the properties of the current
>>> allocators (alloc_xenheap_pages, xmalloc, and vmalloc), and of Jan's
>>> proposed new wrapper, xvmalloc.
>>> xmalloc, vmalloc, and xvmalloc were designed more or less to mirror
>>> similar functions in Linux (kmalloc, vmalloc, and kvmalloc
>>> respectively).
>>> CC: Andrew Cooper <andrew.cooper3@citrix.com>
>>> CC: Jan Beulich <jbeulich@suse.com>
>>> CC: Roger Pau Monne <roger.pau@citrix.com>
>>> CC: Stefano Stabellini <sstabellini@kernel.org>
>>> CC: Julien Grall <julien@xen.org>
>>> ---
>>> .../memory-allocation-functions.rst           | 118 ++++++++++++++++++
>>> 1 file changed, 118 insertions(+)
>>> create mode 100644 docs/hypervisor-guide/memory-allocation-functions.rst
>>> diff --git a/docs/hypervisor-guide/memory-allocation-functions.rst b/docs/hypervisor-guide/memory-allocation-functions.rst
>>> new file mode 100644
>>> index 0000000000..15aa2a1a65
>>> --- /dev/null
>>> +++ b/docs/hypervisor-guide/memory-allocation-functions.rst
>>> @@ -0,0 +1,118 @@
>>> +.. SPDX-License-Identifier: CC-BY-4.0
>>> +
>>> +Xenheap memory allocation functions
>>> +===================================
>>> +
>>> +In general Xen contains two pools (or "heaps") of memory: the *xen
>>> +heap* and the *dom heap*.  Please see the comment at the top of
>>> +``xen/common/page_alloc.c`` for the canonical explanation.
>>> +
>>> +This document describes the various functions available to allocate
>>> +memory from the xen heap: their properties and rules for when they should be
>>> +used.
>>> +
>>> +
>>> +TLDR guidelines
>>> +---------------
>>> +
>>> +* By default, ``xvmalloc`` (or its helper cognates) should be used
>>> +  unless you know you have specific properties that need to be met.
>>> +
>>> +* If you need memory which needs to be physically contiguous, and may
>>> +  be larger than ``PAGE_SIZE``...
>>> +
>>> +  - ...and is order 2, use ``alloc_xenheap_pages``.
>>> +
>>> +  - ...and is not order 2, use ``xmalloc`` (or its helper cognates).
>>> +
>>> +* If you don't need memory to be physically contiguous, and know the
>>> +  allocation will always be larger than ``PAGE_SIZE``, you may use
>>> +  ``vmalloc`` (or one of its helper cognates).
>>> +
>>> +* If you know that allocation will always be less than ``PAGE_SIZE``,
>>> +  you may use ``xmalloc``.
>> 
>> AFAICT, the determining factor is PAGE_SIZE. This is a single value on x86 (e.g. 4KB), but on other architectures this may be multiple values.
>> 
>> For instance, on Arm, this could be 4KB, 16KB, or 64KB (note that only the first is so far supported on Xen).
>> 
>> For Arm and common code, it feels to me we can't make a clear decision based on PAGE_SIZE. Instead, I continue to think that the decision should only be based on physical vs virtually contiguous.
>> 
>> We can then add further rules for x86-specific code if the maintainers want.
> 
> Sorry, my second mail was somewhat delayed — my intent was: 1) post the document I'd agreed to write, 2) say why I think the proposal is a bad idea. :-)
> 
> Re page size — the vast majority of the time, we're talking about "knowing" that the size is less than 4k.  If we're confident that no architecture will ever have a page size less than 4k, then we know that all allocations less than 4k will always be less than PAGE_SIZE.  Obviously larger page sizes then become an issue.
> 
> But in any case — unless we have BUG_ON(size > PAGE_SIZE), we're going to have to have a fallback, which is going to cost one precious conditional, making the whole exercise pointless.

Er, just in case it wasn't clear — I agree with this:

>> I continue to think that the decision should only be based on physical vs virtually contiguous.


 -George


From xen-devel-bounces@lists.xenproject.org Tue Feb 16 11:35:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 11:35:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85502.160334 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBydH-0004qt-Jq; Tue, 16 Feb 2021 11:35:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85502.160334; Tue, 16 Feb 2021 11:35:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBydH-0004qm-Er; Tue, 16 Feb 2021 11:35:07 +0000
Received: by outflank-mailman (input) for mailman id 85502;
 Tue, 16 Feb 2021 11:35:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBydG-0004qe-Ol; Tue, 16 Feb 2021 11:35:06 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBydG-0007OG-Dh; Tue, 16 Feb 2021 11:35:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBydG-0005QT-3j; Tue, 16 Feb 2021 11:35:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lBydG-0007ZI-3F; Tue, 16 Feb 2021 11:35:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=01cr1DhFGWR/71oh8Pn99yZ9lu/1zOusZZ4MhD0uPlM=; b=5EHS1Z+TJ5C+47CZJFm7WMTeGS
	JxnFM9YTPGE2q51Bs/uW67xE3mj2KI4kt/Xf6RAHeLOjMpH7Tg0vQxIjcyYmaDYRZz0v399PAycNb
	9rIrDtDRlINL6vkuYzi+8LrDPEEn4gFDFGS2wDnGbexClRz0hqa7bdZg+Z9B05AWdFOE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159391-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 159391: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-install:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-localmigrate/x10:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-examine:reboot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=f40ddce88593482919761f74910f42f4b84c004b
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 16 Feb 2021 11:35:06 +0000

flight 159391 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159391/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  10 host-ping-check-xen      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 10 host-ping-check-xen fail in 159367 REGR. vs. 152332
 test-arm64-arm64-xl-xsm      12 debian-install fail in 159367 REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu  fail in 159367 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-credit1   8 xen-boot         fail in 159367 pass in 159391
 test-amd64-amd64-i386-pvgrub 19 guest-localmigrate/x10 fail in 159367 pass in 159391
 test-arm64-arm64-xl          10 host-ping-check-xen        fail pass in 159367
 test-arm64-arm64-examine      8 reboot                     fail pass in 159367
 test-arm64-arm64-xl-xsm       8 xen-boot                   fail pass in 159367
 test-arm64-arm64-libvirt-xsm  8 xen-boot                   fail pass in 159367
 test-arm64-arm64-xl-credit2   8 xen-boot                   fail pass in 159367

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl   11 leak-check/basis(11) fail in 159367 blocked in 152332
 test-arm64-arm64-xl-credit2 11 leak-check/basis(11) fail in 159367 blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                f40ddce88593482919761f74910f42f4b84c004b
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  199 days
Failing since        152366  2020-08-01 20:49:34 Z  198 days  345 attempts
Testing same since   159367  2021-02-15 05:07:31 Z    1 days    2 attempts

------------------------------------------------------------
4578 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1033140 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Feb 16 11:46:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 11:46:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85511.160355 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lByoH-0005sg-QE; Tue, 16 Feb 2021 11:46:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85511.160355; Tue, 16 Feb 2021 11:46:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lByoH-0005sZ-N0; Tue, 16 Feb 2021 11:46:29 +0000
Received: by outflank-mailman (input) for mailman id 85511;
 Tue, 16 Feb 2021 11:46:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=miGH=HS=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lByoG-0005sU-O7
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 11:46:28 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7764075b-d0a5-473f-8e57-2b4a13ade34a;
 Tue, 16 Feb 2021 11:46:27 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 5738667373; Tue, 16 Feb 2021 12:46:24 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7764075b-d0a5-473f-8e57-2b4a13ade34a
Date: Tue, 16 Feb 2021 12:46:24 +0100
From: Christoph Hellwig <hch@lst.de>
To: Mike Snitzer <snitzer@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>,
	Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Roger Pau Monné <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>, Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	device-mapper development <dm-devel@redhat.com>,
	linux-block <linux-block@vger.kernel.org>,
	drbd-dev@lists.linbit.com, nbd@other.debian.org,
	ceph-devel@vger.kernel.org, xen-devel@lists.xenproject.org,
	"linux-raid@vger.kernel.org" <linux-raid@vger.kernel.org>,
	linux-nvme@lists.infradead.org,
	"linux-scsi@vger.kernel.org" <linux-scsi@vger.kernel.org>,
	linux-fsdevel <linux-fsdevel@vger.kernel.org>,
	Hannes Reinecke <hare@suse.de>
Subject: Re: [PATCH 12/78] dm: use set_capacity_and_notify
Message-ID: <20210216114624.GA1221@lst.de>
References: <20201116145809.410558-1-hch@lst.de> <20201116145809.410558-13-hch@lst.de> <CAMM=eLfD0_Am3--X+PsKPTfc9qzejxpMNjYwEh=WtjSa-iSncg@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CAMM=eLfD0_Am3--X+PsKPTfc9qzejxpMNjYwEh=WtjSa-iSncg@mail.gmail.com>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Fri, Feb 12, 2021 at 10:45:32AM -0500, Mike Snitzer wrote:
> On Mon, Nov 16, 2020 at 10:05 AM Christoph Hellwig <hch@lst.de> wrote:
> >
> > Use set_capacity_and_notify to set the size of both the disk and block
> > device.  This also gets the uevent notifications for the resize for free.
> >
> > Signed-off-by: Christoph Hellwig <hch@lst.de>
> > Reviewed-by: Hannes Reinecke <hare@suse.de>
> > ---
> >  drivers/md/dm.c | 3 +--
> >  1 file changed, 1 insertion(+), 2 deletions(-)
> >
> > diff --git a/drivers/md/dm.c b/drivers/md/dm.c
> > index c18fc25485186d..62ad44925e73ec 100644
> > --- a/drivers/md/dm.c
> > +++ b/drivers/md/dm.c
> > @@ -1971,8 +1971,7 @@ static struct dm_table *__bind(struct mapped_device *md, struct dm_table *t,
> >         if (size != dm_get_size(md))
> >                 memset(&md->geometry, 0, sizeof(md->geometry));
> >
> > -       set_capacity(md->disk, size);
> > -       bd_set_nr_sectors(md->bdev, size);
> > +       set_capacity_and_notify(md->disk, size);
> >
> >         dm_table_event_callback(t, event_callback, md);
> >
> 
> Not yet pinned down _why_ DM is calling set_capacity_and_notify() with
> a size of 0 but, when running various DM regression tests, I'm seeing
> a lot of noise like:
> 
> [  689.240037] dm-2: detected capacity change from 2097152 to 0
> 
> Is this pr_info really useful?  Should it be moved to below: if
> (!capacity || !size) so that it only prints if a uevent is sent?

In general I suspect such a size change might be interesting to users
if it e.g. comes from a remote event.  So I'd be curious why this happens
with DM, and if we can detect some higher-level gendisk state to suppress
it if it is indeed spurious.
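
For context, the ordering being discussed can be modelled roughly as follows. This is a hedged user-space sketch of the 5.11-era set_capacity_and_notify() flow, not the kernel source; `struct gendisk_model` and the `_model` suffix are illustrative:

```c
#include <stdbool.h>
#include <stdio.h>

typedef unsigned long long sector_t;

struct gendisk_model {
	const char *name;
	sector_t capacity;
};

/* Toy model of set_capacity_and_notify(): returns true when a RESIZE
 * uevent would be emitted. */
static bool set_capacity_and_notify_model(struct gendisk_model *disk,
					  sector_t size)
{
	sector_t capacity = disk->capacity;

	disk->capacity = size;
	if (size == capacity)
		return false;

	/* This is the message Mike is seeing for the 2097152 -> 0 case;
	 * it prints even when no uevent follows. */
	printf("%s: detected capacity change from %llu to %llu\n",
	       disk->name, capacity, size);

	/* No uevent is sent for a change to or from an empty device, so
	 * moving the print below this check would silence that noise. */
	if (capacity == 0 || size == 0)
		return false;

	return true;
}
```

In this model, a 2097152 -> 0 transition prints the message but returns false (no uevent), which is exactly the case Mike's suggestion would make silent.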


From xen-devel-bounces@lists.xenproject.org Tue Feb 16 12:11:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 12:11:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85519.160366 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBzCZ-000074-W6; Tue, 16 Feb 2021 12:11:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85519.160366; Tue, 16 Feb 2021 12:11:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBzCZ-00006x-T3; Tue, 16 Feb 2021 12:11:35 +0000
Received: by outflank-mailman (input) for mailman id 85519;
 Tue, 16 Feb 2021 12:11:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TaMt=HS=infradead.org=peterz@srs-us1.protection.inumbo.net>)
 id 1lBzCX-00006n-2H
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 12:11:34 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 20371ff9-624d-44aa-9ce3-628e848906d4;
 Tue, 16 Feb 2021 12:11:27 +0000 (UTC)
Received: from j217100.upc-j.chello.nl ([24.132.217.100]
 helo=noisy.programming.kicks-ass.net)
 by casper.infradead.org with esmtpsa (Exim 4.94 #2 (Red Hat Linux))
 id 1lBzBS-00GpyS-Qf; Tue, 16 Feb 2021 12:10:52 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id BCFB13059DD;
 Tue, 16 Feb 2021 13:10:25 +0100 (CET)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 1000)
 id A5D6B2B9C6CCA; Tue, 16 Feb 2021 13:10:25 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 20371ff9-624d-44aa-9ce3-628e848906d4
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=In-Reply-To:Content-Type:MIME-Version:
	References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=qydaGrdRobsyV5LaMgil96UsQSn4K3xiCgp4dxh0juc=; b=W1OSaSNnkgJkSGPTgy8i13Fnth
	Q3yMBTUQI35ZC32HOj6lyR72QpKJCByW9IX627qEuospypeWxHjJGIexIeRPMxqE0ne9uQQtNhjrq
	bpoTv7CK6Y9Cg9FFNDqGjoFa67aUgp2jyFviMRH0o/L4iQMLWden+bu1eFoSy6H1so5yKTP+0qufM
	SbTdZckJ2IAXD57Vh/FXcxwpn5F8FXBRV1s8PxB45PtPKblZ4qOdDNxWQBJdHGBMLZWA/6dB/GbRW
	Ryy2BGjIC60/A5VLY8h27Nsvwue8yeJOw2/C9GZKFS6+uYPlFf8abXeJ8FKrixF4vTrxoLGrcBvM1
	rJh/I4Uw==;
Date: Tue, 16 Feb 2021 13:10:25 +0100
From: Peter Zijlstra <peterz@infradead.org>
To: Nadav Amit <nadav.amit@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>, linux-kernel@vger.kernel.org,
	Andy Lutomirski <luto@kernel.org>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Nadav Amit <namit@vmware.com>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Stephen Hemminger <sthemmin@microsoft.com>,
	Sasha Levin <sashal@kernel.org>, Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>, x86@kernel.org,
	Juergen Gross <jgross@suse.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	linux-hyperv@vger.kernel.org,
	virtualization@lists.linux-foundation.org, kvm@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	Michael Kelley <mikelley@microsoft.com>
Subject: Re: [PATCH v5 4/8] x86/mm/tlb: Flush remote and local TLBs
 concurrently
Message-ID: <YCu2MQFdV4JTrUQb@hirez.programming.kicks-ass.net>
References: <20210209221653.614098-1-namit@vmware.com>
 <20210209221653.614098-5-namit@vmware.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210209221653.614098-5-namit@vmware.com>

On Tue, Feb 09, 2021 at 02:16:49PM -0800, Nadav Amit wrote:
> @@ -816,8 +821,8 @@ STATIC_NOPV void native_flush_tlb_others(const struct cpumask *cpumask,
>  	 * doing a speculative memory access.
>  	 */
>  	if (info->freed_tables) {
> -		smp_call_function_many(cpumask, flush_tlb_func,
> -			       (void *)info, 1);
> +		on_each_cpu_cond_mask(NULL, flush_tlb_func, (void *)info, true,
> +				      cpumask);
>  	} else {
>  		/*
>  		 * Although we could have used on_each_cpu_cond_mask(),
> @@ -844,14 +849,15 @@ STATIC_NOPV void native_flush_tlb_others(const struct cpumask *cpumask,
>  			if (tlb_is_not_lazy(cpu))
>  				__cpumask_set_cpu(cpu, cond_cpumask);
>  		}
> -		smp_call_function_many(cond_cpumask, flush_tlb_func, (void *)info, 1);
> +		on_each_cpu_cond_mask(NULL, flush_tlb_func, (void *)info, true,
> +				      cpumask);
>  	}
>  }

Surely on_each_cpu_mask() is more appropriate? There the compiler can do
the NULL propagation because it's on the same TU.

--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -821,8 +821,7 @@ STATIC_NOPV void native_flush_tlb_multi(
 	 * doing a speculative memory access.
 	 */
 	if (info->freed_tables) {
-		on_each_cpu_cond_mask(NULL, flush_tlb_func, (void *)info, true,
-				      cpumask);
+		on_each_cpu_mask(cpumask, flush_tlb_func, (void *)info, true);
 	} else {
 		/*
 		 * Although we could have used on_each_cpu_cond_mask(),
@@ -849,8 +848,7 @@ STATIC_NOPV void native_flush_tlb_multi(
 			if (tlb_is_not_lazy(cpu))
 				__cpumask_set_cpu(cpu, cond_cpumask);
 		}
-		on_each_cpu_cond_mask(NULL, flush_tlb_func, (void *)info, true,
-				      cpumask);
+		on_each_cpu_mask(cpumask, flush_tlb_func, (void *)info, true);
 	}
 }
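
The NULL-propagation point can be illustrated with a toy user-space model. This is illustrative only: the real helpers live in kernel/smp.c, take a struct cpumask and a wait flag, and actually IPI other CPUs; here a "cpumask" is just a bool array:

```c
#include <stdbool.h>
#include <stddef.h>

#define NR_CPUS 4

typedef void (*smp_call_func_t)(void *info);
typedef bool (*smp_cond_func_t)(int cpu, void *info);

/* Model of on_each_cpu_cond_mask(): call func on each cpu in mask for
 * which cond returns true; a NULL cond means "all cpus in the mask". */
static void on_each_cpu_cond_mask_model(smp_cond_func_t cond,
					smp_call_func_t func, void *info,
					const bool *mask)
{
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		if (mask[cpu] && (!cond || cond(cpu, info)))
			func(info);
}

/* Model of on_each_cpu_mask(): the cond == NULL case spelled directly,
 * so the "is there a condition?" branch disappears entirely -- this is
 * the optimisation the compiler can do when the NULL is visible in the
 * same translation unit. */
static void on_each_cpu_mask_model(const bool *mask, smp_call_func_t func,
				   void *info)
{
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		if (mask[cpu])
			func(info);
}

/* Trivial callback used to observe how many "CPUs" were called. */
static void count_call(void *info)
{
	++*(int *)info;
}
```

Both entry points visit the same CPUs for a NULL condition; the direct on_each_cpu_mask() form simply states that intent where the compiler (and the reader) can see it.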
 


From xen-devel-bounces@lists.xenproject.org Tue Feb 16 12:36:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 12:36:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85536.160472 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBzaE-0002GD-9U; Tue, 16 Feb 2021 12:36:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85536.160472; Tue, 16 Feb 2021 12:36:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBzaE-0002G5-3s; Tue, 16 Feb 2021 12:36:02 +0000
Received: by outflank-mailman (input) for mailman id 85536;
 Tue, 16 Feb 2021 12:36:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=y7oK=HS=xenbits.xen.org=iwj@srs-us1.protection.inumbo.net>)
 id 1lBzaC-0001zG-QO
 for xen-devel@lists.xen.org; Tue, 16 Feb 2021 12:36:00 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cc82297d-811e-47da-95c3-8465355eaf5e;
 Tue, 16 Feb 2021 12:35:35 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1lBzZi-0008Mx-Q1; Tue, 16 Feb 2021 12:35:30 +0000
Received: from iwj by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1lBzZi-0002bO-P6; Tue, 16 Feb 2021 12:35:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cc82297d-811e-47da-95c3-8465355eaf5e
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
	Content-Transfer-Encoding:Content-Type;
	bh=CNx9HA2TuAgcqoV4Hd77YdCoDcvs1DeLFjIGIr9cIf8=; b=VNhge+CiWitYtuDMDdiguZfn3B
	obUidPsr5cz3im2ad4rV5YsGbjJbYFE5FYlUnP8elWnZbO9bjGekvUFcfkltrDh9aipBXOd65NcP0
	j4WJcnoja+Yva7WGZjdXG/fdKbPHTdz3VuXv+wY+CqwcQZprR16SROClEB74DU33CmgQ=;
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 363 v3 (CVE-2021-26934) - Linux: display
 frontend "be-alloc" mode is unsupported
Message-Id: <E1lBzZi-0002bO-P6@xenbits.xenproject.org>
Date: Tue, 16 Feb 2021 12:35:30 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

            Xen Security Advisory CVE-2021-26934 / XSA-363
                               version 3

        Linux: display frontend "be-alloc" mode is unsupported

UPDATES IN VERSION 3
====================

Public release.

ISSUE DESCRIPTION
=================

The backend allocation mode of Linux's drm_xen_front driver was
not meant to be a supported configuration, but this wasn't stated
accordingly in its support status entry.

IMPACT
======

Use of the feature may have unknown effects.

VULNERABLE SYSTEMS
==================

Linux versions from 4.18 onwards are affected.  Earlier Linux versions
do not provide the affected driver.

MITIGATION
==========

Not using the driver or its backend allocation mode will avoid the
vulnerability.

CREDITS
=======

This issue was discovered by Jan Beulich of SUSE.

RESOLUTION
==========

Applying the attached patch documents the situation.  The patch does
not fix any security issues.

xsa363.patch           xen-unstable

$ sha256sum xsa363*
cf2f2eff446aec625b19d9d01301ec66098b58b792d74012235f10c62a21bb68  xsa363.patch
$

-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAmAru/UMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZSocH/3jAI0MeZtnhvuyOM4CxkNmr0fI4HIXnA1xGNhWY
Wa2WgtOuFVaPUFX1Tj/e6zCoibatl1gicETI9hL+w4Dg6/GzIeTogOuzv5D6Ux91
9a6n2tryFfSAs0OxTKq6etLv63VEEicYMHrZT8n700JFvJsAWYAMvuanMDknGxBP
5/Z+DASnZxT09cpvP4REKuG7rW9vIif+6EZ0T0kU87InouDts/YOhzNsdvBD1wKH
y5e/MZh2sOyMOovuhgbvoK+YezHTAcZeGWnUk3yQoTGnW3p+W9XZVURsc8/e2FbZ
heY3Tj918LsY50wGpMZ2PDoHC8PSHaUqEOTq0MPmnPlppvU=
=tJD0
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa363.patch"
Content-Disposition: attachment; filename="xsa363.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBTVVBQT1JULm1kOiBQViBkaXNwbGF5IGZyb250ZW5kIGlzIHVuc3VwcG9y
dGVkIGluICJiYWNrZW5kIGFsbG9jYXRpb24iIG1vZGUKClRoaXMgd2Fzbid0
IG1lYW50IHRvIGJlIHN1cHBvcnRlZCwgYnV0IHdhc24ndCBzdGF0ZWQgdGhp
cyB3YXkuCgpUaGlzIGlzIFhTQS0zNjMuCgpSZXBvcnRlZC1ieTogSmFuIEJl
bGljaCA8amJldWxpY2hAc3VzZS5jb20+ClNpZ25lZC1vZmYtYnk6IEphbiBC
ZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4KCi0tLSBhL1NVUFBPUlQubWQK
KysrIGIvU1VQUE9SVC5tZApAQCAtNDE0LDcgKzQxNCw4IEBAIEd1ZXN0LXNp
ZGUgZHJpdmVyIGNhcGFibGUgb2Ygc3BlYWtpbmcgdGgKIAogR3Vlc3Qtc2lk
ZSBkcml2ZXIgY2FwYWJsZSBvZiBzcGVha2luZyB0aGUgWGVuIFBWIGRpc3Bs
YXkgcHJvdG9jb2wKIAotICAgIFN0YXR1cywgTGludXg6IFN1cHBvcnRlZAor
ICAgIFN0YXR1cywgTGludXg6IFN1cHBvcnRlZCAob3V0c2lkZSBvZiAiYmFj
a2VuZCBhbGxvY2F0aW9uIiBtb2RlKQorICAgIFN0YXR1cywgTGludXg6IEV4
cGVyaW1lbnRhbCAoaW4gImJhY2tlbmQgYWxsb2NhdGlvbiIgbW9kZSkKIAog
IyMjIFBWIENvbnNvbGUgKGZyb250ZW5kKQogCg==

--=separator--


From xen-devel-bounces@lists.xenproject.org Tue Feb 16 12:36:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 12:36:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85529.160393 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBza1-000229-4v; Tue, 16 Feb 2021 12:35:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85529.160393; Tue, 16 Feb 2021 12:35:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBza1-000221-1Z; Tue, 16 Feb 2021 12:35:49 +0000
Received: by outflank-mailman (input) for mailman id 85529;
 Tue, 16 Feb 2021 12:35:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=y7oK=HS=xenbits.xen.org=iwj@srs-us1.protection.inumbo.net>)
 id 1lBza0-0001zb-4Z
 for xen-devel@lists.xen.org; Tue, 16 Feb 2021 12:35:48 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e7cbe405-2f49-4d8d-b1e0-17b00f36c61f;
 Tue, 16 Feb 2021 12:35:36 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1lBzZk-0008NO-5U; Tue, 16 Feb 2021 12:35:32 +0000
Received: from iwj by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1lBzZk-0002dG-4W; Tue, 16 Feb 2021 12:35:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e7cbe405-2f49-4d8d-b1e0-17b00f36c61f
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
	Content-Transfer-Encoding:Content-Type;
	bh=6tV55iGlOj7+xKBOO1uuq23eO+x9bBJGrqnaJXGM2+U=; b=roUx5Vw3x3kv1tIxON22i8ODSL
	MsKzPjCaj+6V9eQ8nrLYY4nOCpHM789U9I2GXZR4Pzx3QDB99QDFDwUCtg1m0/ShnLcSmp+Douwsi
	zEYJDCmJqNDdvdWN5BUUOGSds7M2oo9OFj+OwoRAzEyJ7NYSvexCbsO6z6mKFGcGXsTs=;
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 365 v3 (CVE-2021-26930) - Linux: error
 handling issues in blkback's grant mapping
Message-Id: <E1lBzZk-0002dG-4W@xenbits.xenproject.org>
Date: Tue, 16 Feb 2021 12:35:32 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

            Xen Security Advisory CVE-2021-26930 / XSA-365
                               version 3

        Linux: error handling issues in blkback's grant mapping

UPDATES IN VERSION 3
====================

Public release.

ISSUE DESCRIPTION
=================

To service requests, the driver maps grant references provided by the
frontend.  In this process, errors may be encountered.  In one case an
error encountered earlier might be discarded by later processing,
resulting in the caller assuming successful mapping, and hence
subsequent operations trying to access space that wasn't mapped.  In
another case internal state would be insufficiently updated, preventing
safe recovery from the error.
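
The first case above is the classic "last iteration wins" pattern: a per-batch return code overwrites an error recorded by an earlier batch. A hedged user-space reduction of that shape (all names here are hypothetical, not the actual blkback code):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical: map one batch of grant refs; returns 0 or a negative
 * value on any mapping error within the batch. */
static int map_one_batch(const int *batch_status, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (batch_status[i] != 0)
            return -1;  /* stand-in for a grant mapping error */
    return 0;
}

/* Buggy shape: ret is unconditionally overwritten each pass, so an
 * error in an early batch is lost if a later batch succeeds. */
static int map_all_buggy(const int *status, size_t n, size_t batch)
{
    int ret = 0;
    for (size_t off = 0; off < n; off += batch)
        ret = map_one_batch(status + off, batch < n - off ? batch : n - off);
    return ret;
}

/* Fixed shape (what the XSA-365 patch does): stop looping once an
 * error has been seen, so it is reported to the caller. */
static int map_all_fixed(const int *status, size_t n, size_t batch)
{
    int ret = 0;
    for (size_t off = 0; off < n && !ret; off += batch)
        ret = map_one_batch(status + off, batch < n - off ? batch : n - off);
    return ret;
}
```

With an error in the first batch only, the buggy variant reports success and the caller proceeds to touch space that was never mapped.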

IMPACT
======

A malicious or buggy frontend driver may be able to crash the
corresponding backend driver, potentially affecting the entire domain
running the backend driver.  In configurations without driver domains
or similar disaggregation, that is a host-wide denial of service.

Privilege escalation and information leaks cannot be ruled out.

VULNERABLE SYSTEMS
==================

Linux versions from at least 3.11 onwards are vulnerable.

MITIGATION
==========

Reconfiguring guests to use alternative (e.g. qemu-based) backends may
avoid the vulnerability.

CREDITS
=======

This issue was discovered by Olivier Benjamin, Norbert Manthey, Martin
Mazein, and Jan H. Schönherr, all from Amazon.

RESOLUTION
==========

Applying the attached patch resolves this issue.

xsa365-linux.patch           Linux 5.11-rc - 5.10

$ sha256sum xsa365*
7e45fcf3c70eb40debe9997a1773de7c4a2edcde5c23f76aeb5c1b6e3a34a654  xsa365-linux.patch
$

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches described above (or others which are
substantially similar) is permitted during the embargo, even on
public-facing systems with untrusted guest users and administrators.

HOWEVER, deployment of the non-kernel-based backends mitigation
described above is NOT permitted during the embargo on public-facing
systems with untrusted guest users and administrators.  This is because
such a configuration change may be recognizable by the affected guests.

AND: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.

(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decisionmaking.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAmAru/UMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZnpQH/jMHOQao08C5s4VlCUIDJTJ8AZXIjFKW2zOKBqt5
Gp7HiRZSLKa2s/dqxIdiVHTnMzGyFegfzK0AeLjLeftSbOANSvI9tx/S6ajOr6Mx
s5j0r2JzCBsh1bULJbRV7MBVaRqyOR77i3sREu7o0uuRxMd0RNnck7rVm0slmG1P
FoFfC2tF+gxnYZi8tpBS4aY/e3tZ4y+J6s0Fgyfln4p33/j1JwILzzYscGnRdDvG
31DnotOq3E+TqcTZRK4BrLJqZodZLsd9en1DriJj2dDqrobs6QS4sZkHKX20gcxC
RnGvkdHXI+u/du6qpb3GHep2F5pg5+2vMzBNvxxBjr8vmi4=
=HBCB
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa365-linux.patch"
Content-Disposition: attachment; filename="xsa365-linux.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ZW4tYmxrYmFjazogZml4IGVycm9yIGhhbmRsaW5nIGluIHhlbl9ibGti
a19tYXAoKQoKVGhlIGZ1bmN0aW9uIHVzZXMgYSBnb3RvLWJhc2VkIGxvb3As
IHdoaWNoIG1heSBsZWFkIHRvIGFuIGVhcmxpZXIgZXJyb3IKZ2V0dGluZyBk
aXNjYXJkZWQgYnkgYSBsYXRlciBpdGVyYXRpb24uIEV4aXQgdGhpcyBhZC1o
b2MgbG9vcCB3aGVuIGFuCmVycm9yIHdhcyBlbmNvdW50ZXJlZC4KClRoZSBv
dXQtb2YtbWVtb3J5IGVycm9yIHBhdGggYWRkaXRpb25hbGx5IGZhaWxzIHRv
IGZpbGwgYSBzdHJ1Y3R1cmUKZmllbGQgbG9va2VkIGF0IGJ5IHhlbl9ibGti
a191bm1hcF9wcmVwYXJlKCkgYmVmb3JlIGluc3BlY3RpbmcgdGhlCmhhbmRs
ZSB3aGljaCBkb2VzIGdldCBwcm9wZXJseSBzZXQgKHRvIEJMS0JBQ0tfSU5W
QUxJRF9IQU5ETEUpLgoKU2luY2UgdGhlIGVhcmxpZXIgZXhpdGluZyBmcm9t
IHRoZSBhZC1ob2MgbG9vcCByZXF1aXJlcyB0aGUgc2FtZSBmaWVsZApmaWxs
aW5nIChpbnZhbGlkYXRpb24pIGFzIHRoYXQgb24gdGhlIG91dC1vZi1tZW1v
cnkgcGF0aCwgZm9sZCBib3RoCnBhdGhzLiBXaGlsZSBkb2luZyBzbywgZHJv
cCB0aGUgcHJfYWxlcnQoKSwgYXMgZXh0cmEgbG9nIG1lc3NhZ2VzIGFyZW4n
dApnb2luZyB0byBoZWxwIHRoZSBzaXR1YXRpb24gKHRoZSBrZXJuZWwgd2ls
bCBsb2cgb29tIGNvbmRpdGlvbnMgYWxyZWFkeQphbnl3YXkpLgoKVGhpcyBp
cyBYU0EtMzY1LgoKUmVwb3J0ZWQtYnk6IEJqb2VybiBEb2ViZWwgPGRvZWJl
bEBhbWF6b24uZGU+ClNpZ25lZC1vZmYtYnk6IEphbiBCZXVsaWNoIDxqYmV1
bGljaEBzdXNlLmNvbT4KUmV2aWV3ZWQtYnk6IEp1ZXJnZW4gR3Jvc3MgPGpn
cm9zc0BzdXNlLmNvbT4KUmV2aWV3ZWQtYnk6IEp1bGllbiBHcmFsbCA8anVs
aWVuQHhlbi5vcmc+Ci0tLQp2MjogQXZvaWQgb3ZlcndyaXRpbmcgdmFsaWQg
LT5wZXJzaXN0ZW50X2dudCBmaWVsZHMuCgotLS0gYS9kcml2ZXJzL2Jsb2Nr
L3hlbi1ibGtiYWNrL2Jsa2JhY2suYworKysgYi9kcml2ZXJzL2Jsb2NrL3hl
bi1ibGtiYWNrL2Jsa2JhY2suYwpAQCAtNzk0LDggKzc5NCwxMyBAQCBhZ2Fp
bjoKIAkJCXBhZ2VzW2ldLT5wZXJzaXN0ZW50X2dudCA9IHBlcnNpc3RlbnRf
Z250OwogCQl9IGVsc2UgewogCQkJaWYgKGdudHRhYl9wYWdlX2NhY2hlX2dl
dCgmcmluZy0+ZnJlZV9wYWdlcywKLQkJCQkJCSAgJnBhZ2VzW2ldLT5wYWdl
KSkKLQkJCQlnb3RvIG91dF9vZl9tZW1vcnk7CisJCQkJCQkgICZwYWdlc1tp
XS0+cGFnZSkpIHsKKwkJCQlnbnR0YWJfcGFnZV9jYWNoZV9wdXQoJnJpbmct
PmZyZWVfcGFnZXMsCisJCQkJCQkgICAgICBwYWdlc190b19nbnQsCisJCQkJ
CQkgICAgICBzZWdzX3RvX21hcCk7CisJCQkJcmV0ID0gLUVOT01FTTsKKwkJ
CQlnb3RvIG91dDsKKwkJCX0KIAkJCWFkZHIgPSB2YWRkcihwYWdlc1tpXS0+
cGFnZSk7CiAJCQlwYWdlc190b19nbnRbc2Vnc190b19tYXBdID0gcGFnZXNb
aV0tPnBhZ2U7CiAJCQlwYWdlc1tpXS0+cGVyc2lzdGVudF9nbnQgPSBOVUxM
OwpAQCAtODgwLDE3ICs4ODUsMTggQEAgbmV4dDoKIAl9CiAJc2Vnc190b19t
YXAgPSAwOwogCWxhc3RfbWFwID0gbWFwX3VudGlsOwotCWlmIChtYXBfdW50
aWwgIT0gbnVtKQorCWlmICghcmV0ICYmIG1hcF91bnRpbCAhPSBudW0pCiAJ
CWdvdG8gYWdhaW47CiAKLQlyZXR1cm4gcmV0OwotCi1vdXRfb2ZfbWVtb3J5
OgotCXByX2FsZXJ0KCIlczogb3V0IG9mIG1lbW9yeVxuIiwgX19mdW5jX18p
OwotCWdudHRhYl9wYWdlX2NhY2hlX3B1dCgmcmluZy0+ZnJlZV9wYWdlcywg
cGFnZXNfdG9fZ250LCBzZWdzX3RvX21hcCk7Ci0JZm9yIChpID0gbGFzdF9t
YXA7IGkgPCBudW07IGkrKykKK291dDoKKwlmb3IgKGkgPSBsYXN0X21hcDsg
aSA8IG51bTsgaSsrKSB7CisJCS8qIERvbid0IHphcCBjdXJyZW50IGJhdGNo
J3MgdmFsaWQgcGVyc2lzdGVudCBncmFudHMuICovCisJCWlmKGkgPj0gbGFz
dF9tYXAgKyBzZWdzX3RvX21hcCkKKwkJCXBhZ2VzW2ldLT5wZXJzaXN0ZW50
X2dudCA9IE5VTEw7CiAJCXBhZ2VzW2ldLT5oYW5kbGUgPSBCTEtCQUNLX0lO
VkFMSURfSEFORExFOwotCXJldHVybiAtRU5PTUVNOworCX0KKworCXJldHVy
biByZXQ7CiB9CiAKIHN0YXRpYyBpbnQgeGVuX2Jsa2JrX21hcF9zZWcoc3Ry
dWN0IHBlbmRpbmdfcmVxICpwZW5kaW5nX3JlcSkK

--=separator--


From xen-devel-bounces@lists.xenproject.org Tue Feb 16 12:36:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 12:36:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85541.160514 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBzaM-0002U8-6p; Tue, 16 Feb 2021 12:36:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85541.160514; Tue, 16 Feb 2021 12:36:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBzaM-0002Tu-1Q; Tue, 16 Feb 2021 12:36:10 +0000
Received: by outflank-mailman (input) for mailman id 85541;
 Tue, 16 Feb 2021 12:36:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=y7oK=HS=xenbits.xen.org=iwj@srs-us1.protection.inumbo.net>)
 id 1lBzaK-0001zb-55
 for xen-devel@lists.xen.org; Tue, 16 Feb 2021 12:36:08 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d9ef4170-2346-4d91-8c5f-8c90166e5d6e;
 Tue, 16 Feb 2021 12:35:36 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1lBzZj-0008N9-FT; Tue, 16 Feb 2021 12:35:31 +0000
Received: from iwj by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1lBzZj-0002cK-EW; Tue, 16 Feb 2021 12:35:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d9ef4170-2346-4d91-8c5f-8c90166e5d6e
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
	Content-Transfer-Encoding:Content-Type;
	bh=IHjlcNqY9zVQG6xPpJPwS1esZBtm3kl4cjbgTTWsTRU=; b=Ww+Gi67omVf9Ru77k14K1xTV/Z
	h3mTaS7e/r1khuCnYVxpZuOomlXQz+ykuTN4mO1X4xfDDy1ChAc4a4lwU0tVy4yIifF0tXvA7LxU7
	/9uRxUDcrduK7p9U1CnLG9275u2h31jacTH0Yh+J7ZERTlYM9RfSGhAXQpMg7WMtrb+0=;
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 364 v3 (CVE-2021-26933) - arm: The cache
 may not be cleaned for newly allocated scrubbed pages
Message-Id: <E1lBzZj-0002cK-EW@xenbits.xenproject.org>
Date: Tue, 16 Feb 2021 12:35:31 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

            Xen Security Advisory CVE-2021-26933 / XSA-364
                               version 3

 arm: The cache may not be cleaned for newly allocated scrubbed pages

UPDATES IN VERSION 3
====================

Public release.

ISSUE DESCRIPTION
=================

On Arm, a guest is allowed to control whether memory accesses bypass the
cache.  This means that Xen needs to ensure that all writes (such as
the ones during scrubbing) have reached memory before handing over the
page to a guest.

Unfortunately, the operation to clean the cache happens before checking
whether the page was scrubbed.  Therefore there is no guarantee that all
the writes will have reached memory by the time the page is handed over.
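
A minimal user-space model of the ordering bug (illustrative names only; the real code is Xen's page allocator, alloc_heap_pages()): if the page is flushed to RAM before the scrub runs, the scrub's zeroes may never reach memory, so a guest reading with caches disabled sees the old contents.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define PAGE_WORDS 4

/* Toy model: a page with a separate "cache" copy and "RAM" copy. */
struct page {
    int cache[PAGE_WORDS];
    int ram[PAGE_WORDS];
    bool scrubbed;
};

static void flush_page_to_ram(struct page *pg)
{
    memcpy(pg->ram, pg->cache, sizeof(pg->ram));
}

static void scrub_page(struct page *pg)
{
    memset(pg->cache, 0, sizeof(pg->cache));  /* zeroes land in the cache */
    pg->scrubbed = true;
}

/* Buggy order: flush first, scrub afterwards -> zeroes never reach RAM. */
static void alloc_buggy(struct page *pg)
{
    flush_page_to_ram(pg);
    if (!pg->scrubbed)
        scrub_page(pg);
}

/* Fixed order (XSA-364): flush only once the contents are final. */
static void alloc_fixed(struct page *pg)
{
    if (!pg->scrubbed)
        scrub_page(pg);
    flush_page_to_ram(pg);
}
```

In the buggy order a cache-bypassing guest reads the previous owner's data straight from RAM.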

IMPACT
======

A malicious guest may be able to read sensitive data from memory that
previously belonged to another guest.

VULNERABLE SYSTEMS
==================

Xen versions 4.9 onwards are vulnerable.  Only Arm systems are vulnerable.

MITIGATION
==========

There is no known mitigation.

CREDITS
=======

This issue was discovered by Julien Grall of Amazon.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.

Note that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball.  Downstreams are encouraged to update to the
tip of the stable branch before applying these patches.

xsa364.patch           xen-unstable - 4.11

$ sha256sum xsa364*
c9dcb3052bb6ca4001e02b3ad889c70b4eebf1931bef83dfb7de86452851f3c8  xsa364.meta
dc313c70bb07b4096bbc4612cbbc180589923277411dede2fda37f04ecc846d6  xsa364.patch
$

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches and/or mitigations described above (or
others which are substantially similar) is permitted during the
embargo, even on public-facing systems with untrusted guest users and
administrators.

But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.

(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decisionmaking.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAmAru/UMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZT0UH/0Lzw4sShqmyO06n0HWcXyzXKx7Qh67tjBglmB0D
XHKrlTKR0Cs1S2NR3GCSZCSPNKXcXU689qEXlvK07EpheO/xCUgpZNkt/Eab/JFK
NngYbuev1z6+bGeCi70b6RItCXoWiwDWEJqLlLKROwBXMZaodwgjY7/o3GR2D8ZV
Qyz2EcAdJUIYmMsLC3hJ7gTLXvdySp+0lZ9oO6qe4YYQ3CIwPJnlflWFTzcASfML
D9lMVG6u6ratiqt4N1egE0gxBe3/QP8KoptSqiV+MDdwPnsK009g/G+0Ea430ZEh
lviVSgCxhdELx2Tv+Q7qSSbnfMSdnibSHAxipcbyhvjiEJU=
=mHyv
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa364.meta"
Content-Disposition: attachment; filename="xsa364.meta"
Content-Transfer-Encoding: base64

ewogICJYU0EiOiAzNjQsCiAgIlN1cHBvcnRlZFZlcnNpb25zIjogWwogICAg
Im1hc3RlciIsCiAgICAiNC4xNCIsCiAgICAiNC4xMyIsCiAgICAiNC4xMiIs
CiAgICAiNC4xMSIKICBdLAogICJUcmVlcyI6IFsKICAgICJ4ZW4iCiAgXSwK
ICAiUmVjaXBlcyI6IHsKICAgICI0LjExIjogewogICAgICAiUmVjaXBlcyI6
IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0YWJsZVJlZiI6ICIz
MTBhYjc5ODc1Y2I3MDVjYzJjN2RhZGRmZjQxMmI1YTQ4OTlmOGM5IiwKICAg
ICAgICAgICJQcmVyZXFzIjogW10sCiAgICAgICAgICAiUGF0Y2hlcyI6IFsK
ICAgICAgICAgICAgInhzYTM2NC5wYXRjaCIKICAgICAgICAgIF0KICAgICAg
ICB9CiAgICAgIH0KICAgIH0sCiAgICAiNC4xMiI6IHsKICAgICAgIlJlY2lw
ZXMiOiB7CiAgICAgICAgInhlbiI6IHsKICAgICAgICAgICJTdGFibGVSZWYi
OiAiY2NlN2NiZDk4NmMxMjJhODY1ODJmZjM3NzViNmI1NTlkODc3NDA3YyIs
CiAgICAgICAgICAiUHJlcmVxcyI6IFtdLAogICAgICAgICAgIlBhdGNoZXMi
OiBbCiAgICAgICAgICAgICJ4c2EzNjQucGF0Y2giCiAgICAgICAgICBdCiAg
ICAgICAgfQogICAgICB9CiAgICB9LAogICAgIjQuMTMiOiB7CiAgICAgICJS
ZWNpcGVzIjogewogICAgICAgICJ4ZW4iOiB7CiAgICAgICAgICAiU3RhYmxl
UmVmIjogImU0MTYxOTM4YjMxNWYzYjljNmExM2FkZTMwZDE2YzExNTA0YTJk
MTYiLAogICAgICAgICAgIlByZXJlcXMiOiBbXSwKICAgICAgICAgICJQYXRj
aGVzIjogWwogICAgICAgICAgICAieHNhMzY0LnBhdGNoIgogICAgICAgICAg
XQogICAgICAgIH0KICAgICAgfQogICAgfSwKICAgICI0LjE0IjogewogICAg
ICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0
YWJsZVJlZiI6ICI0MTcwMjE4Y2I5NjU0NjQyNjY2NGU1YzFkMDBjNWE4NDhh
MjZhZTllIiwKICAgICAgICAgICJQcmVyZXFzIjogW10sCiAgICAgICAgICAi
UGF0Y2hlcyI6IFsKICAgICAgICAgICAgInhzYTM2NC5wYXRjaCIKICAgICAg
ICAgIF0KICAgICAgICB9CiAgICAgIH0KICAgIH0sCiAgICAibWFzdGVyIjog
ewogICAgICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAgICAg
ICAgIlN0YWJsZVJlZiI6ICI1ZTdhYTkwNDQwNWZhMmYyNjhjM2FmMjEzNTE2
YmFlMjcxZGUzMjY1IiwKICAgICAgICAgICJQcmVyZXFzIjogW10sCiAgICAg
ICAgICAiUGF0Y2hlcyI6IFsKICAgICAgICAgICAgInhzYTM2NC5wYXRjaCIK
ICAgICAgICAgIF0KICAgICAgICB9CiAgICAgIH0KICAgIH0KICB9Cn0K

--=separator
Content-Type: application/octet-stream; name="xsa364.patch"
Content-Disposition: attachment; filename="xsa364.patch"
Content-Transfer-Encoding: base64

RnJvbSBkYWRiNWI0YjIxYzkwNGNlNTkwMjRjNjg2ZWIxYzU1YmU4ZjQ2YzUy
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBKdWxpZW4gR3JhbGwg
PGpncmFsbEBhbWF6b24uY29tPgpEYXRlOiBUaHUsIDIxIEphbiAyMDIxIDEw
OjE2OjA4ICswMDAwClN1YmplY3Q6IFtQQVRDSF0geGVuL3BhZ2VfYWxsb2M6
IE9ubHkgZmx1c2ggdGhlIHBhZ2UgdG8gUkFNIG9uY2Ugd2Uga25vdyB0aGV5
CiBhcmUgc2NydWJiZWQKCkF0IHRoZSBtb21lbnQsIGVhY2ggcGFnZSBhcmUg
Zmx1c2hlZCB0byBSQU0ganVzdCBhZnRlciB0aGUgYWxsb2NhdG9yCmZvdW5k
IHNvbWUgZnJlZSBwYWdlcy4gSG93ZXZlciwgdGhpcyBpcyBoYXBwZW5pbmcg
YmVmb3JlIGNoZWNrIGlmIHRoZQpwYWdlIHdhcyBzY3J1YmJlZC4KCkFzIGEg
Y29uc2VxdWVuY2UsIG9uIEFybSwgYSBndWVzdCBtYXkgYmUgYWJsZSB0byBh
Y2Nlc3MgdGhlIG9sZCBjb250ZW50Cm9mIHRoZSBzY3J1YmJlZCBwYWdlcyBp
ZiBpdCBoYXMgY2FjaGUgZGlzYWJsZWQgKGRlZmF1bHQgYXQgYm9vdCkgYW5k
CnRoZSBjb250ZW50IGRpZG4ndCByZWFjaCB0aGUgUG9pbnQgb2YgQ29oZXJl
bmN5LgoKVGhlIGZsdXNoIGlzIG5vdyBtb3ZlZCBhZnRlciB3ZSBrbm93IHRo
ZSBjb250ZW50IG9mIHRoZSBwYWdlIHdpbGwgbm90CmNoYW5nZS4gVGhpcyBh
bHNvIGhhcyB0aGUgYmVuZWZpdCB0byByZWR1Y2UgdGhlIGFtb3VudCBvZiB3
b3JrIGhhcHBlbmluZwp3aXRoIHRoZSBoZWFwX2xvY2sgaGVsZC4KClRoaXMg
aXMgWFNBLTM2NC4KCkZpeGVzOiAzMDdjM2JlM2NjYjIgKCJtbTogRG9uJ3Qg
c2NydWIgcGFnZXMgd2hpbGUgaG9sZGluZyBoZWFwIGxvY2sgaW4gYWxsb2Nf
aGVhcF9wYWdlcygpIikKU2lnbmVkLW9mZi1ieTogSnVsaWVuIEdyYWxsIDxq
Z3JhbGxAYW1hem9uLmNvbT4KUmV2aWV3ZWQtYnk6IEphbiBCZXVsaWNoIDxq
YmV1bGljaEBzdXNlLmNvbT4KLS0tCiB4ZW4vY29tbW9uL3BhZ2VfYWxsb2Mu
YyB8IDE0ICsrKysrKysrKy0tLS0tCiAxIGZpbGUgY2hhbmdlZCwgOSBpbnNl
cnRpb25zKCspLCA1IGRlbGV0aW9ucygtKQoKZGlmZiAtLWdpdCBhL3hlbi9j
b21tb24vcGFnZV9hbGxvYy5jIGIveGVuL2NvbW1vbi9wYWdlX2FsbG9jLmMK
aW5kZXggMDJhYzFmYTYxM2U3Li4xNzQ0ZTZmYWE1YzQgMTAwNjQ0Ci0tLSBh
L3hlbi9jb21tb24vcGFnZV9hbGxvYy5jCisrKyBiL3hlbi9jb21tb24vcGFn
ZV9hbGxvYy5jCkBAIC05MjQsNiArOTI0LDcgQEAgc3RhdGljIHN0cnVjdCBw
YWdlX2luZm8gKmFsbG9jX2hlYXBfcGFnZXMoCiAgICAgYm9vbCBuZWVkX3Rs
YmZsdXNoID0gZmFsc2U7CiAgICAgdWludDMyX3QgdGxiZmx1c2hfdGltZXN0
YW1wID0gMDsKICAgICB1bnNpZ25lZCBpbnQgZGlydHlfY250ID0gMDsKKyAg
ICBtZm5fdCBtZm47CiAKICAgICAvKiBNYWtlIHN1cmUgdGhlcmUgYXJlIGVu
b3VnaCBiaXRzIGluIG1lbWZsYWdzIGZvciBub2RlSUQuICovCiAgICAgQlVJ
TERfQlVHX09OKChfTUVNRl9iaXRzIC0gX01FTUZfbm9kZSkgPCAoOCAqIHNp
emVvZihub2RlaWRfdCkpKTsKQEAgLTEwMjIsMTEgKzEwMjMsNiBAQCBzdGF0
aWMgc3RydWN0IHBhZ2VfaW5mbyAqYWxsb2NfaGVhcF9wYWdlcygKICAgICAg
ICAgcGdbaV0udS5pbnVzZS50eXBlX2luZm8gPSAwOwogICAgICAgICBwYWdl
X3NldF9vd25lcigmcGdbaV0sIE5VTEwpOwogCi0gICAgICAgIC8qIEVuc3Vy
ZSBjYWNoZSBhbmQgUkFNIGFyZSBjb25zaXN0ZW50IGZvciBwbGF0Zm9ybXMg
d2hlcmUgdGhlCi0gICAgICAgICAqIGd1ZXN0IGNhbiBjb250cm9sIGl0cyBv
d24gdmlzaWJpbGl0eSBvZi90aHJvdWdoIHRoZSBjYWNoZS4KLSAgICAgICAg
ICovCi0gICAgICAgIGZsdXNoX3BhZ2VfdG9fcmFtKG1mbl94KHBhZ2VfdG9f
bWZuKCZwZ1tpXSkpLAotICAgICAgICAgICAgICAgICAgICAgICAgICAhKG1l
bWZsYWdzICYgTUVNRl9ub19pY2FjaGVfZmx1c2gpKTsKICAgICB9CiAKICAg
ICBzcGluX3VubG9jaygmaGVhcF9sb2NrKTsKQEAgLTEwNjIsNiArMTA1OCwx
NCBAQCBzdGF0aWMgc3RydWN0IHBhZ2VfaW5mbyAqYWxsb2NfaGVhcF9wYWdl
cygKICAgICBpZiAoIG5lZWRfdGxiZmx1c2ggKQogICAgICAgICBmaWx0ZXJl
ZF9mbHVzaF90bGJfbWFzayh0bGJmbHVzaF90aW1lc3RhbXApOwogCisgICAg
LyoKKyAgICAgKiBFbnN1cmUgY2FjaGUgYW5kIFJBTSBhcmUgY29uc2lzdGVu
dCBmb3IgcGxhdGZvcm1zIHdoZXJlIHRoZSBndWVzdAorICAgICAqIGNhbiBj
b250cm9sIGl0cyBvd24gdmlzaWJpbGl0eSBvZi90aHJvdWdoIHRoZSBjYWNo
ZS4KKyAgICAgKi8KKyAgICBtZm4gPSBwYWdlX3RvX21mbihwZyk7CisgICAg
Zm9yICggaSA9IDA7IGkgPCAoMVUgPDwgb3JkZXIpOyBpKysgKQorICAgICAg
ICBmbHVzaF9wYWdlX3RvX3JhbShtZm5feChtZm4pICsgaSwgIShtZW1mbGFn
cyAmIE1FTUZfbm9faWNhY2hlX2ZsdXNoKSk7CisKICAgICByZXR1cm4gcGc7
CiB9CiAKLS0gCjIuMTcuMQoK

--=separator--


From xen-devel-bounces@lists.xenproject.org Tue Feb 16 12:36:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 12:36:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85544.160544 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBzaT-0002kc-M0; Tue, 16 Feb 2021 12:36:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85544.160544; Tue, 16 Feb 2021 12:36:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBzaT-0002kG-EO; Tue, 16 Feb 2021 12:36:17 +0000
Received: by outflank-mailman (input) for mailman id 85544;
 Tue, 16 Feb 2021 12:36:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=y7oK=HS=xenbits.xen.org=iwj@srs-us1.protection.inumbo.net>)
 id 1lBzaR-0001zG-R5
 for xen-devel@lists.xen.org; Tue, 16 Feb 2021 12:36:15 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 64ffd9fb-5b5f-406e-9569-84fe0147aac7;
 Tue, 16 Feb 2021 12:35:34 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1lBzZh-0008Mj-8f; Tue, 16 Feb 2021 12:35:29 +0000
Received: from iwj by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1lBzZh-0002ZO-3z; Tue, 16 Feb 2021 12:35:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 64ffd9fb-5b5f-406e-9569-84fe0147aac7
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
	Content-Transfer-Encoding:Content-Type;
	bh=6uIFfku8/Q5IsCoK+QTF4nmrkhXMtquEX3JaPS97YIU=; b=bDZA++vMBKc3gSRgVHpMDYWhO6
	a53h5gkN/Mu4INhupxvR4tjZNbkCU5xxc3by3YuhgFlU4ZuYvxfBO3WrHXXNTrSrmxQJavmiQhG4D
	gPwrhsRPEY7e7CMImoH5qAIWoeh59TPhU8+VouZK4Tu9liTXg5qO6UH9yomsgTpaaZ9o=;
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 361 v4 (CVE-2021-26932) - Linux: grant
 mapping error handling issues
Message-Id: <E1lBzZh-0002ZO-3z@xenbits.xenproject.org>
Date: Tue, 16 Feb 2021 12:35:29 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

            Xen Security Advisory CVE-2021-26932 / XSA-361
                               version 4

                Linux: grant mapping error handling issues

UPDATES IN VERSION 4
====================

Public release.

ISSUE DESCRIPTION
=================

Grant mapping operations often occur in batch hypercalls, where a
number of operations are done in a single hypercall, the success or
failure of each one reported to the backend driver, and the backend
driver then loops over the results, performing follow-up actions based
on the success or failure of each operation.

Unfortunately, when running in PV mode, the Linux backend drivers
mishandle this: Some errors are ignored, their success effectively
being implied from the success of related batch elements.  In other
cases, an error in one batch element prevents further batch elements
from being inspected, and hence elements which did succeed cannot be
properly unmapped during error recovery.

IMPACT
======

A malicious or buggy frontend driver may be able to crash the
corresponding backend driver, causing a denial of service potentially
affecting the entire domain running the backend driver.

A malicious or buggy frontend driver may be able to cause resource
leaks in the domain running the corresponding backend driver, leading
to a denial of service.

VULNERABLE SYSTEMS
==================

All Linux versions back to at least 3.2 are vulnerable, when running in
PV mode on x86 or when running on Arm.

On x86, only systems with Linux backends running in PV mode are
vulnerable.  Linux backends running in HVM / PVH modes are not
vulnerable.

MITIGATION
==========

On x86, running the backends in HVM or PVH domains will avoid the
vulnerability.

For protocols where other, e.g. non-kernel-based, backends are
available, reconfiguring guests to use alternative (e.g. qemu-based)
backends may make it possible to avoid the vulnerability, as long as
these backends don't rely on similar functionality provided by the
xen-gntdev (/dev/gntdev) driver.

In all other cases there is no known mitigation.

CREDITS
=======

This issue was discovered by Jan Beulich of SUSE.

RESOLUTION
==========

Applying the attached patches resolves this issue.

xsa361-linux-1.patch           Linux 5.11-rc - 3.19
xsa361-linux-2.patch           Linux 5.11-rc - 3.15
xsa361-linux-3.patch           Linux 5.11-rc - 4.19
xsa361-linux-4.patch           Linux 5.11-rc - 4.19
xsa361-linux-5.patch           Linux 5.11-rc - 4.4

$ sha256sum xsa361*
bb00ab6319b4fc536566af50c73e064f10f8b99eaa6b0f0b35a8d174c285a905  xsa361-linux-1.patch
73b6a54aa3773ce11f0de6b9aa1d80dd7f4c297dc71924b1a3886bc3b99ac859  xsa361-linux-2.patch
8e554cfab8cdb4fe1b74601a9432ea4c570f74a952ad757f9294ba1666cbeaea  xsa361-linux-3.patch
8c290895d10fc148f99e2a6587811b3037f29c3a0201d69d448ff520cea6f96d  xsa361-linux-4.patch
231ae3e1b9bec1b75dbbbee4b5acff620ef7ac2853332aa7b3c4957c6ca7f341  xsa361-linux-5.patch
$

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches described above (or others which are
substantially similar) is permitted during the embargo, even on
public-facing systems with untrusted guest users and administrators.

Deployment of the mitigation to switch to HVM / PVH backend domains is
also permitted during the embargo, even on public-facing systems with
untrusted guest users and administrators.

HOWEVER, deployment of the non-kernel-based backends mitigation
described above is NOT permitted during the embargo on public-facing
systems with untrusted guest users and administrators.  This is because
such a configuration change may be recognizable by the affected guests.

AND: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.

(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decisionmaking.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAmAru/QMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZmFkH/Ay1RoZbbcA4ywdhy9xdnpt0DHMFLjZSbE4sNTi+
J+m9rn69UTK01VDD0RUohTcmWO0nv8ZD+jKETsSq31GiYhVk7XnSmCJkzILGujr8
cf+7jUWWJPcqBmN7xcLBaor9lhpKfMpYlMLBG7twIRHfqOSw6Sm+iD4YC23nkGKF
Cb8tpkYCpX3dPMMP74nX00Wta2rqd1BrpAGvAnt9hrHIBfTcpwWE8A4H1eFL/7Dv
5+pVvrSMkyzaR5kI/QBeriXsuOP509CiafUBpeXU85pGWpLgZAqD+puodEVQ2fpT
/MqATdNRhgnCzqSqh/ElN/1ZdB7406DbdCnErJiyDdN/OCE=
=DUXr
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa361-linux-1.patch"
Content-Disposition: attachment; filename="xsa361-linux-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBYZW4veDg2OiBkb24ndCBiYWlsIGVhcmx5IGZyb20gY2xlYXJfZm9yZWln
bl9wMm1fbWFwcGluZygpCgpJdHMgc2libGluZyAoc2V0X2ZvcmVpZ25fcDJt
X21hcHBpbmcoKSkgYXMgd2VsbCBhcyB0aGUgc2libGluZyBvZiBpdHMKb25s
eSBjYWxsZXIgKGdudHRhYl9tYXBfcmVmcygpKSBkb24ndCBjbGVhbiB1cCBh
ZnRlciB0aGVtc2VsdmVzIGluIGNhc2UKb2YgZXJyb3IuIEhpZ2hlciBsZXZl
bCBjYWxsZXJzIGFyZSBleHBlY3RlZCB0byBkbyBzby4gSG93ZXZlciwgaW4g
b3JkZXIKZm9yIHRoYXQgdG8gcmVhbGx5IGNsZWFuIHVwIGFueSBwYXJ0aWFs
bHkgc2V0IHVwIHN0YXRlLCB0aGUgb3BlcmF0aW9uCnNob3VsZCBub3QgdGVy
bWluYXRlIHVwb24gZW5jb3VudGVyaW5nIGFuIGVudHJ5IGluIHVuZXhwZWN0
ZWQgc3RhdGUuIEl0CmlzIHBhcnRpY3VsYXJseSByZWxldmFudCB0byBub3Rp
Y2UgaGVyZSB0aGF0IHNldF9mb3JlaWduX3AybV9tYXBwaW5nKCkKd291bGQg
c2tpcCBzZXR0aW5nIHVwIGEgcDJtIGVudHJ5IGlmIGl0cyBncmFudCBtYXBw
aW5nIGZhaWxlZCwgYnV0IGl0CndvdWxkIGNvbnRpbnVlIHRvIHNldCB1cCBm
dXJ0aGVyIHAybSBlbnRyaWVzIGFzIGxvbmcgYXMgdGhlaXIgbWFwcGluZ3MK
c3VjY2VlZGVkLgoKQXJndWFibHkgZG93biB0aGUgcm9hZCBzZXRfZm9yZWln
bl9wMm1fbWFwcGluZygpIG1heSB3YW50IGl0cyBwYWdlIHN0YXRlCnJlbGF0
ZWQgV0FSTl9PTigpIGFsc28gY29udmVydGVkIHRvIGFuIGVycm9yIHJldHVy
bi4KClRoaXMgaXMgcGFydCBvZiBYU0EtMzYxLgoKU2lnbmVkLW9mZi1ieTog
SmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpDYzogc3RhYmxlQHZn
ZXIua2VybmVsLm9yZwpSZXZpZXdlZC1ieTogSnVlcmdlbiBHcm9zcyA8amdy
b3NzQHN1c2UuY29tPgoKLS0tIGEvYXJjaC94ODYveGVuL3AybS5jCisrKyBi
L2FyY2gveDg2L3hlbi9wMm0uYwpAQCAtNzUwLDE3ICs3NTAsMTUgQEAgaW50
IGNsZWFyX2ZvcmVpZ25fcDJtX21hcHBpbmcoc3RydWN0IGdudAogCQl1bnNp
Z25lZCBsb25nIG1mbiA9IF9fcGZuX3RvX21mbihwYWdlX3RvX3BmbihwYWdl
c1tpXSkpOwogCQl1bnNpZ25lZCBsb25nIHBmbiA9IHBhZ2VfdG9fcGZuKHBh
Z2VzW2ldKTsKIAotCQlpZiAobWZuID09IElOVkFMSURfUDJNX0VOVFJZIHx8
ICEobWZuICYgRk9SRUlHTl9GUkFNRV9CSVQpKSB7CisJCWlmIChtZm4gIT0g
SU5WQUxJRF9QMk1fRU5UUlkgJiYgKG1mbiAmIEZPUkVJR05fRlJBTUVfQklU
KSkKKwkJCXNldF9waHlzX3RvX21hY2hpbmUocGZuLCBJTlZBTElEX1AyTV9F
TlRSWSk7CisJCWVsc2UKIAkJCXJldCA9IC1FSU5WQUw7Ci0JCQlnb3RvIG91
dDsKLQkJfQotCi0JCXNldF9waHlzX3RvX21hY2hpbmUocGZuLCBJTlZBTElE
X1AyTV9FTlRSWSk7CiAJfQogCWlmIChrdW5tYXBfb3BzKQogCQlyZXQgPSBI
WVBFUlZJU09SX2dyYW50X3RhYmxlX29wKEdOVFRBQk9QX3VubWFwX2dyYW50
X3JlZiwKLQkJCQkJCWt1bm1hcF9vcHMsIGNvdW50KTsKLW91dDoKKwkJCQkJ
CWt1bm1hcF9vcHMsIGNvdW50KSA/OiByZXQ7CisKIAlyZXR1cm4gcmV0Owog
fQogRVhQT1JUX1NZTUJPTF9HUEwoY2xlYXJfZm9yZWlnbl9wMm1fbWFwcGlu
Zyk7Cg==

--=separator
Content-Type: application/octet-stream; name="xsa361-linux-2.patch"
Content-Disposition: attachment; filename="xsa361-linux-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBYZW4veDg2OiBhbHNvIGNoZWNrIGtlcm5lbCBtYXBwaW5nIGluIHNldF9m
b3JlaWduX3AybV9tYXBwaW5nKCkKCldlIHNob3VsZCBub3Qgc2V0IHVwIGZ1
cnRoZXIgc3RhdGUgaWYgZWl0aGVyIG1hcHBpbmcgZmFpbGVkOyBwYXlpbmcK
YXR0ZW50aW9uIHRvIGp1c3QgdGhlIHVzZXIgbWFwcGluZydzIHN0YXR1cyBp
c24ndCBlbm91Z2guCgpBbHNvIHVzZSBHTlRTVF9va2F5IGluc3RlYWQgb2Yg
aW1wbHlpbmcgaXRzIHZhbHVlICh6ZXJvKS4KClRoaXMgaXMgcGFydCBvZiBY
U0EtMzYxLgoKU2lnbmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNo
QHN1c2UuY29tPgpDYzogc3RhYmxlQHZnZXIua2VybmVsLm9yZwpSZXZpZXdl
ZC1ieTogSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuY29tPgoKLS0tIGEv
YXJjaC94ODYveGVuL3AybS5jCisrKyBiL2FyY2gveDg2L3hlbi9wMm0uYwpA
QCAtNzEyLDcgKzcxMiw4IEBAIGludCBzZXRfZm9yZWlnbl9wMm1fbWFwcGlu
ZyhzdHJ1Y3QgZ250dGEKIAkJdW5zaWduZWQgbG9uZyBtZm4sIHBmbjsKIAog
CQkvKiBEbyBub3QgYWRkIHRvIG92ZXJyaWRlIGlmIHRoZSBtYXAgZmFpbGVk
LiAqLwotCQlpZiAobWFwX29wc1tpXS5zdGF0dXMpCisJCWlmIChtYXBfb3Bz
W2ldLnN0YXR1cyAhPSBHTlRTVF9va2F5IHx8CisJCSAgICAoa21hcF9vcHMg
JiYga21hcF9vcHNbaV0uc3RhdHVzICE9IEdOVFNUX29rYXkpKQogCQkJY29u
dGludWU7CiAKIAkJaWYgKG1hcF9vcHNbaV0uZmxhZ3MgJiBHTlRNQVBfY29u
dGFpbnNfcHRlKSB7Cg==

--=separator
Content-Type: application/octet-stream; name="xsa361-linux-3.patch"
Content-Disposition: attachment; filename="xsa361-linux-3.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBYZW4vZ250ZGV2OiBjb3JyZWN0IGRldl9idXNfYWRkciBoYW5kbGluZyBp
biBnbnRkZXZfbWFwX2dyYW50X3BhZ2VzKCkKCldlIG1heSBub3Qgc2tpcCBz
ZXR0aW5nIHRoZSBmaWVsZCBpbiB0aGUgdW5tYXAgc3RydWN0dXJlIHdoZW4K
R05UTUFQX2RldmljZV9tYXAgaXMgaW4gdXNlIC0gc3VjaCBhbiB1bm1hcCB3
b3VsZCBmYWlsIHRvIHJlbGVhc2UgdGhlCnJlc3BlY3RpdmUgcmVzb3VyY2Vz
IChhIHBhZ2UgcmVmIGluIHRoZSBoeXBlcnZpc29yKS4gT3RvaCB0aGUgZmll
bGQKZG9lc24ndCBuZWVkIHNldHRpbmcgYXQgYWxsIHdoZW4gR05UTUFQX2Rl
dmljZV9tYXAgaXMgbm90IGluIHVzZS4KClRvIHJlY29yZCB0aGUgdmFsdWUg
Zm9yIHVubWFwcGluZywgd2UgYWxzbyBiZXR0ZXIgZG9uJ3QgdXNlIG91ciBs
b2NhbApwMm06IEluIHBhcnRpY3VsYXIgYWZ0ZXIgYSBzdWJzZXF1ZW50IGNo
YW5nZSBpdCBtYXkgbm90IGhhdmUgZ290IHVwZGF0ZWQKZm9yIGFsbCB0aGUg
YmF0Y2ggZWxlbWVudHMuIEluc3RlYWQgaXQgY2FuIHNpbXBseSBiZSB0YWtl
biBmcm9tIHRoZQpyZXNwZWN0aXZlIG1hcCdzIHJlc3VsdHMuCgpXZSBjYW4g
YWRkaXRpb25hbGx5IGF2b2lkIHBsYXlpbmcgdGhpcyBnYW1lIGFsdG9nZXRo
ZXIgZm9yIHRoZSBrZXJuZWwKcGFydCBvZiB0aGUgbWFwcGluZ3MgaW4gKHg4
NikgUFYgbW9kZS4KClRoaXMgaXMgcGFydCBvZiBYU0EtMzYxLgoKU2lnbmVk
LW9mZi1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpDYzog
c3RhYmxlQHZnZXIua2VybmVsLm9yZwpSZXZpZXdlZC1ieTogU3RlZmFubyBT
dGFiZWxsaW5pIDxzc3RhYmVsbGluaUBrZXJuZWwub3JnPgotLS0KdjQ6IFNw
bGl0IGZyb20gc3Vic2VxdWVudCBwYXRjaC4KCi0tLSBhL2RyaXZlcnMveGVu
L2dudGRldi5jCisrKyBiL2RyaXZlcnMveGVuL2dudGRldi5jCkBAIC0zMDks
MTggKzMwOSwyNSBAQCBpbnQgZ250ZGV2X21hcF9ncmFudF9wYWdlcyhzdHJ1
Y3QgZ250ZGV2CiAJCSAqIHRvIHRoZSBrZXJuZWwgbGluZWFyIGFkZHJlc3Nl
cyBvZiB0aGUgc3RydWN0IHBhZ2VzLgogCQkgKiBUaGVzZSBwdGVzIGFyZSBj
b21wbGV0ZWx5IGRpZmZlcmVudCBmcm9tIHRoZSB1c2VyIHB0ZXMgZGVhbHQK
IAkJICogd2l0aCBmaW5kX2dyYW50X3B0ZXMuCisJCSAqIE5vdGUgdGhhdCBH
TlRNQVBfZGV2aWNlX21hcCBpc24ndCBuZWVkZWQgaGVyZTogVGhlCisJCSAq
IGRldl9idXNfYWRkciBvdXRwdXQgZmllbGQgZ2V0cyBjb25zdW1lZCBvbmx5
IGZyb20gLT5tYXBfb3BzLAorCQkgKiBhbmQgYnkgbm90IHJlcXVlc3Rpbmcg
aXQgd2hlbiBtYXBwaW5nIHdlIGFsc28gYXZvaWQgbmVlZGluZworCQkgKiB0
byBtaXJyb3IgZGV2X2J1c19hZGRyIGludG8gLT51bm1hcF9vcHMgKGFuZCBo
b2xkaW5nIGFuIGV4dHJhCisJCSAqIHJlZmVyZW5jZSB0byB0aGUgcGFnZSBp
biB0aGUgaHlwZXJ2aXNvcikuCiAJCSAqLworCQl1bnNpZ25lZCBpbnQgZmxh
Z3MgPSAobWFwLT5mbGFncyAmIH5HTlRNQVBfZGV2aWNlX21hcCkgfAorCQkJ
CSAgICAgR05UTUFQX2hvc3RfbWFwOworCiAJCWZvciAoaSA9IDA7IGkgPCBt
YXAtPmNvdW50OyBpKyspIHsKIAkJCXVuc2lnbmVkIGxvbmcgYWRkcmVzcyA9
ICh1bnNpZ25lZCBsb25nKQogCQkJCXBmbl90b19rYWRkcihwYWdlX3RvX3Bm
bihtYXAtPnBhZ2VzW2ldKSk7CiAJCQlCVUdfT04oUGFnZUhpZ2hNZW0obWFw
LT5wYWdlc1tpXSkpOwogCi0JCQlnbnR0YWJfc2V0X21hcF9vcCgmbWFwLT5r
bWFwX29wc1tpXSwgYWRkcmVzcywKLQkJCQltYXAtPmZsYWdzIHwgR05UTUFQ
X2hvc3RfbWFwLAorCQkJZ250dGFiX3NldF9tYXBfb3AoJm1hcC0+a21hcF9v
cHNbaV0sIGFkZHJlc3MsIGZsYWdzLAogCQkJCW1hcC0+Z3JhbnRzW2ldLnJl
ZiwKIAkJCQltYXAtPmdyYW50c1tpXS5kb21pZCk7CiAJCQlnbnR0YWJfc2V0
X3VubWFwX29wKCZtYXAtPmt1bm1hcF9vcHNbaV0sIGFkZHJlc3MsCi0JCQkJ
bWFwLT5mbGFncyB8IEdOVE1BUF9ob3N0X21hcCwgLTEpOworCQkJCWZsYWdz
LCAtMSk7CiAJCX0KIAl9CiAKQEAgLTMzNiwxNyArMzQzLDEyIEBAIGludCBn
bnRkZXZfbWFwX2dyYW50X3BhZ2VzKHN0cnVjdCBnbnRkZXYKIAkJCWNvbnRp
bnVlOwogCQl9CiAKKwkJaWYgKG1hcC0+ZmxhZ3MgJiBHTlRNQVBfZGV2aWNl
X21hcCkKKwkJCW1hcC0+dW5tYXBfb3BzW2ldLmRldl9idXNfYWRkciA9IG1h
cC0+bWFwX29wc1tpXS5kZXZfYnVzX2FkZHI7CisKIAkJbWFwLT51bm1hcF9v
cHNbaV0uaGFuZGxlID0gbWFwLT5tYXBfb3BzW2ldLmhhbmRsZTsKIAkJaWYg
KHVzZV9wdGVtb2QpCiAJCQltYXAtPmt1bm1hcF9vcHNbaV0uaGFuZGxlID0g
bWFwLT5rbWFwX29wc1tpXS5oYW5kbGU7Ci0jaWZkZWYgQ09ORklHX1hFTl9H
UkFOVF9ETUFfQUxMT0MKLQkJZWxzZSBpZiAobWFwLT5kbWFfdmFkZHIpIHsK
LQkJCXVuc2lnbmVkIGxvbmcgYmZuOwotCi0JCQliZm4gPSBwZm5fdG9fYmZu
KHBhZ2VfdG9fcGZuKG1hcC0+cGFnZXNbaV0pKTsKLQkJCW1hcC0+dW5tYXBf
b3BzW2ldLmRldl9idXNfYWRkciA9IF9fcGZuX3RvX3BoeXMoYmZuKTsKLQkJ
fQotI2VuZGlmCiAJfQogCXJldHVybiBlcnI7CiB9Cg==

--=separator
Content-Type: application/octet-stream; name="xsa361-linux-4.patch"
Content-Disposition: attachment; filename="xsa361-linux-4.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBYZW4vZ250ZGV2OiBjb3JyZWN0IGVycm9yIGNoZWNraW5nIGluIGdudGRl
dl9tYXBfZ3JhbnRfcGFnZXMoKQoKRmFpbHVyZSBvZiB0aGUga2VybmVsIHBh
cnQgb2YgdGhlIG1hcHBpbmcgb3BlcmF0aW9uIHNob3VsZCBhbHNvIGJlCmlu
ZGljYXRlZCBhcyBhbiBlcnJvciB0byB0aGUgY2FsbGVyLCBvciBlbHNlIGl0
IG1heSBhc3N1bWUgdGhlCnJlc3BlY3RpdmUga2VybmVsIFZBIGlzIG9rYXkg
dG8gYWNjZXNzLgoKRnVydGhlcm1vcmUgZ250dGFiX21hcF9yZWZzKCkgZmFp
bGluZyBzdGlsbCByZXF1aXJlcyByZWNvcmRpbmcKc3VjY2Vzc2Z1bGx5IG1h
cHBlZCBoYW5kbGVzLCBzbyB0aGV5IGNhbiBiZSB1bm1hcHBlZCBzdWJzZXF1
ZW50bHkuIFRoaXMKaW4gdHVybiByZXF1aXJlcyB0aGVyZSB0byBiZSBhIHdh
eSB0byB0ZWxsIGZ1bGwgaHlwZXJjYWxsIGZhaWx1cmUgZnJvbQpwYXJ0aWFs
IHN1Y2Nlc3MgLSBwcmVzZXQgbWFwX29wIHN0YXR1cyBmaWVsZHMgc3VjaCB0
aGF0IHRoZXkgd29uJ3QKImhhcHBlbiIgdG8gbG9vayBhcyBpZiB0aGUgb3Bl
cmF0aW9uIHN1Y2NlZWRlZC4KCkFsc28gYWdhaW4gdXNlIEdOVFNUX29rYXkg
aW5zdGVhZCBvZiBpbXBseWluZyBpdHMgdmFsdWUgKHplcm8pLgoKVGhpcyBp
cyBwYXJ0IG9mIFhTQS0zNjEuCgpTaWduZWQtb2ZmLWJ5OiBKYW4gQmV1bGlj
aCA8amJldWxpY2hAc3VzZS5jb20+CkNjOiBzdGFibGVAdmdlci5rZXJuZWwu
b3JnClJldmlld2VkLWJ5OiBKdWVyZ2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5j
b20+Ci0tLQp2NDogU3BsaXQgb3V0IHRoZSB2MyBjaGFuZ2VzIGFuZCByZS1i
YXNlIG92ZXIgdGhlIHJlc3VsdGluZyBlYXJsaWVyCiAgICBwYXRjaC4KdjM6
IEFsc28gZml4IEdOVE1BUF9kZXZpY2VfbWFwIC8gLT5kZXZfYnVzX2FkZHIg
aGFuZGxpbmcuCnYyOiBEcm9wIGFuIGluYXBwbGljYWJsZSBwYXJ0IG9mIHRo
ZSBkZXNjcmlwdGlvbi4gVXNlIDEgaW5zdGVhZCBvZgogICAgX19MSU5FX18u
CgotLS0gYS9kcml2ZXJzL3hlbi9nbnRkZXYuYworKysgYi9kcml2ZXJzL3hl
bi9nbnRkZXYuYwpAQCAtMzM0LDIxICszMzQsMjIgQEAgaW50IGdudGRldl9t
YXBfZ3JhbnRfcGFnZXMoc3RydWN0IGdudGRldgogCXByX2RlYnVnKCJtYXAg
JWQrJWRcbiIsIG1hcC0+aW5kZXgsIG1hcC0+Y291bnQpOwogCWVyciA9IGdu
dHRhYl9tYXBfcmVmcyhtYXAtPm1hcF9vcHMsIHVzZV9wdGVtb2QgPyBtYXAt
PmttYXBfb3BzIDogTlVMTCwKIAkJCW1hcC0+cGFnZXMsIG1hcC0+Y291bnQp
OwotCWlmIChlcnIpCi0JCXJldHVybiBlcnI7CiAKIAlmb3IgKGkgPSAwOyBp
IDwgbWFwLT5jb3VudDsgaSsrKSB7Ci0JCWlmIChtYXAtPm1hcF9vcHNbaV0u
c3RhdHVzKSB7CisJCWlmIChtYXAtPm1hcF9vcHNbaV0uc3RhdHVzID09IEdO
VFNUX29rYXkpCisJCQltYXAtPnVubWFwX29wc1tpXS5oYW5kbGUgPSBtYXAt
Pm1hcF9vcHNbaV0uaGFuZGxlOworCQllbHNlIGlmICghZXJyKQogCQkJZXJy
ID0gLUVJTlZBTDsKLQkJCWNvbnRpbnVlOwotCQl9CiAKIAkJaWYgKG1hcC0+
ZmxhZ3MgJiBHTlRNQVBfZGV2aWNlX21hcCkKIAkJCW1hcC0+dW5tYXBfb3Bz
W2ldLmRldl9idXNfYWRkciA9IG1hcC0+bWFwX29wc1tpXS5kZXZfYnVzX2Fk
ZHI7CiAKLQkJbWFwLT51bm1hcF9vcHNbaV0uaGFuZGxlID0gbWFwLT5tYXBf
b3BzW2ldLmhhbmRsZTsKLQkJaWYgKHVzZV9wdGVtb2QpCi0JCQltYXAtPmt1
bm1hcF9vcHNbaV0uaGFuZGxlID0gbWFwLT5rbWFwX29wc1tpXS5oYW5kbGU7
CisJCWlmICh1c2VfcHRlbW9kKSB7CisJCQlpZiAobWFwLT5rbWFwX29wc1tp
XS5zdGF0dXMgPT0gR05UU1Rfb2theSkKKwkJCQltYXAtPmt1bm1hcF9vcHNb
aV0uaGFuZGxlID0gbWFwLT5rbWFwX29wc1tpXS5oYW5kbGU7CisJCQllbHNl
IGlmICghZXJyKQorCQkJCWVyciA9IC1FSU5WQUw7CisJCX0KIAl9CiAJcmV0
dXJuIGVycjsKIH0KLS0tIGEvaW5jbHVkZS94ZW4vZ3JhbnRfdGFibGUuaAor
KysgYi9pbmNsdWRlL3hlbi9ncmFudF90YWJsZS5oCkBAIC0xNTcsNiArMTU3
LDcgQEAgZ250dGFiX3NldF9tYXBfb3Aoc3RydWN0IGdudHRhYl9tYXBfZ3Jh
bgogCW1hcC0+ZmxhZ3MgPSBmbGFnczsKIAltYXAtPnJlZiA9IHJlZjsKIAlt
YXAtPmRvbSA9IGRvbWlkOworCW1hcC0+c3RhdHVzID0gMTsgLyogYXJiaXRy
YXJ5IHBvc2l0aXZlIHZhbHVlICovCiB9CiAKIHN0YXRpYyBpbmxpbmUgdm9p
ZAo=

--=separator
Content-Type: application/octet-stream; name="xsa361-linux-5.patch"
Content-Disposition: attachment; filename="xsa361-linux-5.patch"
Content-Transfer-Encoding: base64

RnJvbTogU3RlZmFubyBTdGFiZWxsaW5pIDxzdGVmYW5vLnN0YWJlbGxpbmlA
eGlsaW54LmNvbT4KU3ViamVjdDogeGVuL2FybTogZG9uJ3QgaWdub3JlIHJl
dHVybiBlcnJvcnMgZnJvbSBzZXRfcGh5c190b19tYWNoaW5lCgpzZXRfcGh5
c190b19tYWNoaW5lIGNhbiBmYWlsIGR1ZSB0byBsYWNrIG9mIG1lbW9yeSwg
c2VlIHRoZSBremFsbG9jIGNhbGwKaW4gYXJjaC9hcm0veGVuL3AybS5jOl9f
c2V0X3BoeXNfdG9fbWFjaGluZV9tdWx0aS4KCkRvbid0IGlnbm9yZSB0aGUg
cG90ZW50aWFsIHJldHVybiBlcnJvciBpbiBzZXRfZm9yZWlnbl9wMm1fbWFw
cGluZywKcmV0dXJuaW5nIGl0IHRvIHRoZSBjYWxsZXIgaW5zdGVhZC4KClRo
aXMgaXMgcGFydCBvZiBYU0EtMzYxLgoKU2lnbmVkLW9mZi1ieTogU3RlZmFu
byBTdGFiZWxsaW5pIDxzdGVmYW5vLnN0YWJlbGxpbmlAeGlsaW54LmNvbT4K
Q2M6IHN0YWJsZUB2Z2VyLmtlcm5lbC5vcmcKUmV2aWV3ZWQtYnk6IEp1bGll
biBHcmFsbCA8amdyYWxsQGFtYXpvbi5jb20+CgotLS0gYS9hcmNoL2FybS94
ZW4vcDJtLmMKKysrIGIvYXJjaC9hcm0veGVuL3AybS5jCkBAIC05NSw4ICs5
NSwxMCBAQCBpbnQgc2V0X2ZvcmVpZ25fcDJtX21hcHBpbmcoc3RydWN0IGdu
dHRhYl9tYXBfZ3JhbnRfcmVmICptYXBfb3BzLAogCWZvciAoaSA9IDA7IGkg
PCBjb3VudDsgaSsrKSB7CiAJCWlmIChtYXBfb3BzW2ldLnN0YXR1cykKIAkJ
CWNvbnRpbnVlOwotCQlzZXRfcGh5c190b19tYWNoaW5lKG1hcF9vcHNbaV0u
aG9zdF9hZGRyID4+IFhFTl9QQUdFX1NISUZULAotCQkJCSAgICBtYXBfb3Bz
W2ldLmRldl9idXNfYWRkciA+PiBYRU5fUEFHRV9TSElGVCk7CisJCWlmICh1
bmxpa2VseSghc2V0X3BoeXNfdG9fbWFjaGluZShtYXBfb3BzW2ldLmhvc3Rf
YWRkciA+PiBYRU5fUEFHRV9TSElGVCwKKwkJCQkgICAgbWFwX29wc1tpXS5k
ZXZfYnVzX2FkZHIgPj4gWEVOX1BBR0VfU0hJRlQpKSkgeworCQkJcmV0dXJu
IC1FTk9NRU07CisJCX0KIAl9CiAKIAlyZXR1cm4gMDsKCg==

--=separator--


From xen-devel-bounces@lists.xenproject.org Tue Feb 16 12:39:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 12:39:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85636.160592 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBzdM-0004T0-4g; Tue, 16 Feb 2021 12:39:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85636.160592; Tue, 16 Feb 2021 12:39:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBzdM-0004Sq-0F; Tue, 16 Feb 2021 12:39:16 +0000
Received: by outflank-mailman (input) for mailman id 85636;
 Tue, 16 Feb 2021 12:39:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=y7oK=HS=xenbits.xen.org=iwj@srs-us1.protection.inumbo.net>)
 id 1lBzaW-0001zG-R3
 for xen-devel@lists.xen.org; Tue, 16 Feb 2021 12:36:20 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 058e238a-214d-4b02-9732-042ee8802a6c;
 Tue, 16 Feb 2021 12:35:35 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1lBzZi-0008Mp-47; Tue, 16 Feb 2021 12:35:30 +0000
Received: from iwj by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1lBzZi-0002aT-2R; Tue, 16 Feb 2021 12:35:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 058e238a-214d-4b02-9732-042ee8802a6c
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
	Content-Transfer-Encoding:Content-Type;
	bh=lZWifSAdWtqIapxaF/ABMz5U1NS2kxDVUZoclZKPguo=; b=3nniZQLh/q0JG34VPD0xs4VeUH
	HRegZFYig+FQgXOTHx7CBcDBYFzPhSB9NE/4qvF9EaEZzkFN4C7HWi+Ijs4luDq/2w1LavEck4otJ
	K5LTGecfopcu3c7q8XREM2KCw7iwukc1ZQArBpWjLiqQSujSvRNvJ+g9EccWSLbSk00g=;
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 362 v3 (CVE-2021-26931) - Linux: backends
 treating grant mapping errors as bugs
Message-Id: <E1lBzZi-0002aT-2R@xenbits.xenproject.org>
Date: Tue, 16 Feb 2021 12:35:30 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

            Xen Security Advisory CVE-2021-26931 / XSA-362
                               version 3

         Linux: backends treating grant mapping errors as bugs

UPDATES IN VERSION 3
====================

Public release.

ISSUE DESCRIPTION
=================

Block, net, and SCSI backends treat certain errors as plain bugs,
deliberately causing a kernel crash.  For errors which may be at least
partially under the influence of guests, such as out-of-memory
conditions, this assumption is not correct.  Memory allocations
potentially causing such crashes occur only when Linux is running in
PV mode, though.

IMPACT
======

A malicious or buggy frontend driver may be able to crash the
corresponding backend driver, potentially affecting the entire domain
running the backend driver.

VULNERABLE SYSTEMS
==================

Linux versions from at least 2.6.39 onwards are vulnerable, when run in
PV mode.  Earlier versions differ significantly in behavior and may
therefore instead surface other issues under the same conditions.  Linux
running in HVM / PVH modes is not vulnerable.

MITIGATION
==========

For Linux, running the backends in HVM or PVH domains will avoid the
vulnerability.

For protocols where non-Linux-kernel-based backends are available,
reconfiguring guests to use alternative (e.g. qemu-based) backends may
make it possible to avoid the vulnerability.

In all other cases there is no known mitigation.

CREDITS
=======

This issue was discovered by Jan Beulich of SUSE.

RESOLUTION
==========

Applying the appropriate attached patches resolves this issue.

xsa362-linux-1.patch           Linux 5.11-rc - 5.10
xsa362-linux-2.patch           Linux 5.11-rc - 3.16
xsa362-linux-3.patch           Linux 5.11-rc - 4.1

$ sha256sum xsa362*
d64334807f16ff9909503b3cc9b8b93fd42d2c36e1fb0e508b89a765a53071a8  xsa362-linux-1.patch
b6d02952e7fbede55b868cb2dc4d8853284996883dc72518a0cd5b14d6c7fdd4  xsa362-linux-2.patch
0a2661380d8f786fefe12e5a8b1528d4a79f1ad058c26b417c52449a7e16a302  xsa362-linux-3.patch
$

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches described above (or others which are
substantially similar) is permitted during the embargo, even on
public-facing systems with untrusted guest users and administrators.

Deployment of the mitigation to switch to HVM / PVH backend domains
is also permitted during the embargo, even on public-facing systems with
untrusted guest users and administrators.

HOWEVER, deployment of the non-kernel-based backends mitigation
described above is NOT permitted during the embargo on public-facing
systems with untrusted guest users and administrators.  This is because
such a configuration change may be recognizable by the affected guests.

AND: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.

(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decisionmaking.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAmAru/UMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZszQH/jwCgehGBbejtpFjiOqEPdqIQhd0X+Q1feFD9PB6
07gfGanmSds5mitr0ezTHbfLw85CoFbAJhalNdx9XeQrZTIvRAizkCi779rE9UYZ
H0CN73GoObF4E8q+tVRpZni0Rcnb77bETRsmlYjRYRjtZNZ1+7vbn4tf4JMccoo0
qhz1/bqY3e4yHPcdxb9P3T/DQKNG+nJjkn4kNueYo1PUGUetxw6HXbXWHh6WvbOr
mfd+sTxRSf+Nk2OZhtofjIYEIeL058axZoSuARBIPphBmOCumUTGzrypZwe5BTuF
GMQqlguxPU0rFscGd/Js05suFhQQR4ccJlSGRs7pswt9i0M=
=KnG3
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa362-linux-1.patch"
Content-Disposition: attachment; filename="xsa362-linux-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ZW4tYmxrYmFjazogZG9uJ3QgImhhbmRsZSIgZXJyb3IgYnkgQlVHKCkK
CkluIHBhcnRpY3VsYXIgLUVOT01FTSBtYXkgY29tZSBiYWNrIGhlcmUsIGZy
b20gc2V0X2ZvcmVpZ25fcDJtX21hcHBpbmcoKS4KRG9uJ3QgbWFrZSBwcm9i
bGVtcyB3b3JzZSwgdGhlIG1vcmUgdGhhdCBoYW5kbGluZyBlbHNld2hlcmUg
KHRvZ2V0aGVyCndpdGggbWFwJ3Mgc3RhdHVzIGZpZWxkcyBub3cgaW5kaWNh
dGluZyB3aGV0aGVyIGEgbWFwcGluZyB3YXNuJ3QgZXZlbgphdHRlbXB0ZWQs
IGFuZCBoZW5jZSBoYXMgdG8gYmUgY29uc2lkZXJlZCBmYWlsZWQpIGRvZXNu
J3QgcmVxdWlyZSB0aGlzCm9kZCB3YXkgb2YgZGVhbGluZyB3aXRoIGVycm9y
cy4KClRoaXMgaXMgcGFydCBvZiBYU0EtMzYyLgoKU2lnbmVkLW9mZi1ieTog
SmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpDYzogc3RhYmxlQHZn
ZXIua2VybmVsLm9yZwpSZXZpZXdlZC1ieTogSnVlcmdlbiBHcm9zcyA8amdy
b3NzQHN1c2UuY29tPgoKLS0tIGEvZHJpdmVycy9ibG9jay94ZW4tYmxrYmFj
ay9ibGtiYWNrLmMKKysrIGIvZHJpdmVycy9ibG9jay94ZW4tYmxrYmFjay9i
bGtiYWNrLmMKQEAgLTgxMSwxMCArODExLDggQEAgYWdhaW46CiAJCQlicmVh
azsKIAl9CiAKLQlpZiAoc2Vnc190b19tYXApIHsKKwlpZiAoc2Vnc190b19t
YXApCiAJCXJldCA9IGdudHRhYl9tYXBfcmVmcyhtYXAsIE5VTEwsIHBhZ2Vz
X3RvX2dudCwgc2Vnc190b19tYXApOwotCQlCVUdfT04ocmV0KTsKLQl9CiAK
IAkvKgogCSAqIE5vdyBzd2l6emxlIHRoZSBNRk4gaW4gb3VyIGRvbWFpbiB3
aXRoIHRoZSBNRk4gZnJvbSB0aGUgb3RoZXIgZG9tYWluCkBAIC04MzAsNyAr
ODI4LDcgQEAgYWdhaW46CiAJCQkJZ250dGFiX3BhZ2VfY2FjaGVfcHV0KCZy
aW5nLT5mcmVlX3BhZ2VzLAogCQkJCQkJICAgICAgJnBhZ2VzW3NlZ19pZHhd
LT5wYWdlLCAxKTsKIAkJCQlwYWdlc1tzZWdfaWR4XS0+aGFuZGxlID0gQkxL
QkFDS19JTlZBTElEX0hBTkRMRTsKLQkJCQlyZXQgfD0gMTsKKwkJCQlyZXQg
fD0gIXJldDsKIAkJCQlnb3RvIG5leHQ7CiAJCQl9CiAJCQlwYWdlc1tzZWdf
aWR4XS0+aGFuZGxlID0gbWFwW25ld19tYXBfaWR4XS5oYW5kbGU7Cg==

--=separator
Content-Type: application/octet-stream; name="xsa362-linux-2.patch"
Content-Disposition: attachment; filename="xsa362-linux-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ZW4tbmV0YmFjazogZG9uJ3QgImhhbmRsZSIgZXJyb3IgYnkgQlVHKCkK
CkluIHBhcnRpY3VsYXIgLUVOT01FTSBtYXkgY29tZSBiYWNrIGhlcmUsIGZy
b20gc2V0X2ZvcmVpZ25fcDJtX21hcHBpbmcoKS4KRG9uJ3QgbWFrZSBwcm9i
bGVtcyB3b3JzZSwgdGhlIG1vcmUgdGhhdCBoYW5kbGluZyBlbHNld2hlcmUg
KHRvZ2V0aGVyCndpdGggbWFwJ3Mgc3RhdHVzIGZpZWxkcyBub3cgaW5kaWNh
dGluZyB3aGV0aGVyIGEgbWFwcGluZyB3YXNuJ3QgZXZlbgphdHRlbXB0ZWQs
IGFuZCBoZW5jZSBoYXMgdG8gYmUgY29uc2lkZXJlZCBmYWlsZWQpIGRvZXNu
J3QgcmVxdWlyZSB0aGlzCm9kZCB3YXkgb2YgZGVhbGluZyB3aXRoIGVycm9y
cy4KClRoaXMgaXMgcGFydCBvZiBYU0EtMzYyLgoKU2lnbmVkLW9mZi1ieTog
SmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpDYzogc3RhYmxlQHZn
ZXIua2VybmVsLm9yZwpSZXZpZXdlZC1ieTogSnVlcmdlbiBHcm9zcyA8amdy
b3NzQHN1c2UuY29tPgoKLS0tIGEvZHJpdmVycy9uZXQveGVuLW5ldGJhY2sv
bmV0YmFjay5jCisrKyBiL2RyaXZlcnMvbmV0L3hlbi1uZXRiYWNrL25ldGJh
Y2suYwpAQCAtMTM0MiwxMyArMTM0MiwxMSBAQCBpbnQgeGVudmlmX3R4X2Fj
dGlvbihzdHJ1Y3QgeGVudmlmX3F1ZXVlCiAJCXJldHVybiAwOwogCiAJZ250
dGFiX2JhdGNoX2NvcHkocXVldWUtPnR4X2NvcHlfb3BzLCBucl9jb3BzKTsK
LQlpZiAobnJfbW9wcyAhPSAwKSB7CisJaWYgKG5yX21vcHMgIT0gMCkKIAkJ
cmV0ID0gZ250dGFiX21hcF9yZWZzKHF1ZXVlLT50eF9tYXBfb3BzLAogCQkJ
CSAgICAgIE5VTEwsCiAJCQkJICAgICAgcXVldWUtPnBhZ2VzX3RvX21hcCwK
IAkJCQkgICAgICBucl9tb3BzKTsKLQkJQlVHX09OKHJldCk7Ci0JfQogCiAJ
d29ya19kb25lID0geGVudmlmX3R4X3N1Ym1pdChxdWV1ZSk7CiAK

--=separator
Content-Type: application/octet-stream; name="xsa362-linux-3.patch"
Content-Disposition: attachment; filename="xsa362-linux-3.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ZW4tc2NzaWJhY2s6IGRvbid0ICJoYW5kbGUiIGVycm9yIGJ5IEJVRygp
CgpJbiBwYXJ0aWN1bGFyIC1FTk9NRU0gbWF5IGNvbWUgYmFjayBoZXJlLCBm
cm9tIHNldF9mb3JlaWduX3AybV9tYXBwaW5nKCkuCkRvbid0IG1ha2UgcHJv
YmxlbXMgd29yc2UsIHRoZSBtb3JlIHRoYXQgaGFuZGxpbmcgZWxzZXdoZXJl
ICh0b2dldGhlcgp3aXRoIG1hcCdzIHN0YXR1cyBmaWVsZHMgbm93IGluZGlj
YXRpbmcgd2hldGhlciBhIG1hcHBpbmcgd2Fzbid0IGV2ZW4KYXR0ZW1wdGVk
LCBhbmQgaGVuY2UgaGFzIHRvIGJlIGNvbnNpZGVyZWQgZmFpbGVkKSBkb2Vz
bid0IHJlcXVpcmUgdGhpcwpvZGQgd2F5IG9mIGRlYWxpbmcgd2l0aCBlcnJv
cnMuCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTM2Mi4KClNpZ25lZC1vZmYtYnk6
IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4KQ2M6IHN0YWJsZUB2
Z2VyLmtlcm5lbC5vcmcKUmV2aWV3ZWQtYnk6IEp1ZXJnZW4gR3Jvc3MgPGpn
cm9zc0BzdXNlLmNvbT4KCi0tLSBhL2RyaXZlcnMveGVuL3hlbi1zY3NpYmFj
ay5jCisrKyBiL2RyaXZlcnMveGVuL3hlbi1zY3NpYmFjay5jCkBAIC0zODYs
MTIgKzM4NiwxMiBAQCBzdGF0aWMgaW50IHNjc2liYWNrX2dudHRhYl9kYXRh
X21hcF9iYXRjCiAJCXJldHVybiAwOwogCiAJZXJyID0gZ250dGFiX21hcF9y
ZWZzKG1hcCwgTlVMTCwgcGcsIGNudCk7Ci0JQlVHX09OKGVycik7CiAJZm9y
IChpID0gMDsgaSA8IGNudDsgaSsrKSB7CiAJCWlmICh1bmxpa2VseShtYXBb
aV0uc3RhdHVzICE9IEdOVFNUX29rYXkpKSB7CiAJCQlwcl9lcnIoImludmFs
aWQgYnVmZmVyIC0tIGNvdWxkIG5vdCByZW1hcCBpdFxuIik7CiAJCQltYXBb
aV0uaGFuZGxlID0gU0NTSUJBQ0tfSU5WQUxJRF9IQU5ETEU7Ci0JCQllcnIg
PSAtRU5PTUVNOworCQkJaWYgKCFlcnIpCisJCQkJZXJyID0gLUVOT01FTTsK
IAkJfSBlbHNlIHsKIAkJCWdldF9wYWdlKHBnW2ldKTsKIAkJfQo=

--=separator--


From xen-devel-bounces@lists.xenproject.org Tue Feb 16 12:49:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 12:49:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85724.160619 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBzmi-00063u-Kf; Tue, 16 Feb 2021 12:48:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85724.160619; Tue, 16 Feb 2021 12:48:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBzmi-00063n-HJ; Tue, 16 Feb 2021 12:48:56 +0000
Received: by outflank-mailman (input) for mailman id 85724;
 Tue, 16 Feb 2021 12:48:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jGlc=HS=runbox.com=m.v.b@srs-us1.protection.inumbo.net>)
 id 1lBzmg-00063i-H8
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 12:48:55 +0000
Received: from aibo.runbox.com (unknown [91.220.196.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0471b6f0-6bbe-4215-9ad4-c8f5733e4933;
 Tue, 16 Feb 2021 12:48:51 +0000 (UTC)
Received: from [10.9.9.74] (helo=submission03.runbox)
 by mailtransmit02.runbox with esmtp (Exim 4.86_2)
 (envelope-from <m.v.b@runbox.com>)
 id 1lBzmZ-0005SJ-Sk; Tue, 16 Feb 2021 13:48:48 +0100
Received: by submission03.runbox with esmtpsa [Authenticated alias (536975)]
 (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1)
 id 1lBzmY-0002Xj-7s; Tue, 16 Feb 2021 13:48:46 +0100
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0471b6f0-6bbe-4215-9ad4-c8f5733e4933
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=runbox.com;
	 s=selector2; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=f+nKs8JpK4wFd/s5GZi1+0U4NVGMehxgEcNl1XW0KAo=; b=mDOzVrKoeD08JED7svMnb7Q645
	3qinbUo4jpPJKhIojrnEOITbmyE60csS5s0vLvUeJaxtbFOExvq33FK5BwPgH0vR7ybn8hdqU84rR
	R61nQII2iMxDMy0155EJK/OJ2VG6udUKex7eN7YgAhLldusW8SmyOUzbJn4Mw2lSfoDk6+FZW21/3
	K8+zbxO0VI0hJ8RdDzCQDUbWKOn8jjXwNqKFBPLJ/WFILEW3kMHJB3WsTAy7QaJp+Ej86moPODbni
	JhW5v6RCwhSDlZdILKaQ1SCbk32OR5VYc2rgQKaWIyi7OCZnprIGfPfL70BInqWtr3arZn9XYxGxI
	wfYE+dtA==;
Subject: Re: [PATCH 1/1] x86/ept: Fix buggy XSA-321 backport
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, marmarek@invisiblethingslab.com,
 jbeulich@suse.com
References: <20210215234619.245422-1-m.v.b@runbox.com>
 <20210215234619.245422-2-m.v.b@runbox.com> <YCuOQ3qpFD6RgIld@Air-de-Roger>
From: "M. Vefa Bicakci" <m.v.b@runbox.com>
Message-ID: <5517e20e-c485-7016-da89-81570cc43b3b@runbox.com>
Date: Tue, 16 Feb 2021 07:48:42 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <YCuOQ3qpFD6RgIld@Air-de-Roger>
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Language: en-CA
Content-Transfer-Encoding: 8bit

On 16/02/2021 04.20, Roger Pau Monné wrote:
> On Mon, Feb 15, 2021 at 06:46:19PM -0500, M. Vefa Bicakci wrote:
>> This commit aims to fix commit a852040fe3ab ("x86/ept: flush cache when
>> modifying PTEs and sharing page tables"). The aforementioned commit is
>> for the stable-4.9 branch of Xen and is a backported version of commit
>> c23274fd0412 ("x86/ept: flush cache when modifying PTEs and sharing page
>> tables"), which was for Xen 4.14.0-rc5 and which fixes the security
>> issue reported by XSA-321.
>>
>> Prior to the latter commit, the function atomic_write_ept_entry in Xen
>> 4.14.y consisted mostly of a call to p2m_entry_modify followed by an
>> atomic replacement of a page table entry, and the latter commit adds
>> a call to iommu_sync_cache for a specific condition:
>>
>>     static int atomic_write_ept_entry(struct p2m_domain *p2m,
>>                                       ept_entry_t *entryptr, ept_entry_t new,
>>                                       int level)
>>     {
>>         int rc = p2m_entry_modify(p2m, new.sa_p2mt, entryptr->sa_p2mt,
>>                                   _mfn(new.mfn), _mfn(entryptr->mfn), level + 1);
>>
>>         if ( rc )
>>             return rc;
>>
>>         write_atomic(&entryptr->epte, new.epte);
>>
>>    +    /* snipped comment block */
>>    +    if ( !new.recalc && iommu_use_hap_pt(p2m->domain) )
>>    +        iommu_sync_cache(entryptr, sizeof(*entryptr));
>>    +
>>         return 0;
>>     }
>>
>> However, the backport to Xen 4.9.y is a bit different because
>> atomic_write_ept_entry does not consist solely of a call to
>> p2m_entry_modify, which does not exist in Xen 4.9.y. I am quoting from
>> Xen 4.8.y for convenience:
>>
>>     static int atomic_write_ept_entry(ept_entry_t *entryptr, ept_entry_t new,
>>                                       int level)
>>     {
>>         int rc;
>>         unsigned long oldmfn = mfn_x(INVALID_MFN);
>>         bool_t check_foreign = (new.mfn != entryptr->mfn ||
>>                                 new.sa_p2mt != entryptr->sa_p2mt);
>>
>>         if ( level )
>>         {
>>             ASSERT(!is_epte_superpage(&new) || !p2m_is_foreign(new.sa_p2mt));
>>             write_atomic(&entryptr->epte, new.epte);
>>             return 0;
>>         }
>>
>>         if ( unlikely(p2m_is_foreign(new.sa_p2mt)) )
>>         {
>>             rc = -EINVAL;
>>             if ( !is_epte_present(&new) )
>>                     goto out;
>>
>>             if ( check_foreign )
>>             {
>>                 struct domain *fdom;
>>
>>                 if ( !mfn_valid(new.mfn) )
>>                     goto out;
>>
>>                 rc = -ESRCH;
>>                 fdom = page_get_owner(mfn_to_page(new.mfn));
>>                 if ( fdom == NULL )
>>                     goto out;
>>
>>                 /* get refcount on the page */
>>                 rc = -EBUSY;
>>                 if ( !get_page(mfn_to_page(new.mfn), fdom) )
>>                     goto out;
>>             }
>>         }
>>
>>         if ( unlikely(p2m_is_foreign(entryptr->sa_p2mt)) && check_foreign )
>>             oldmfn = entryptr->mfn;
>>
>>         write_atomic(&entryptr->epte, new.epte);
>>
>>    +    /* snipped comment block */
>>    +    if ( !new.recalc && iommu_hap_pt_share )
>>    +        iommu_sync_cache(entryptr, sizeof(*entryptr));
>>    +
>>         if ( unlikely(oldmfn != mfn_x(INVALID_MFN)) )
>>             put_page(mfn_to_page(oldmfn));
>>
>>         rc = 0;
>>
>>      out:
>>         if ( rc )
>>             gdprintk(XENLOG_ERR, "epte o:%"PRIx64" n:%"PRIx64" rc:%d\n",
>>                      entryptr->epte, new.epte, rc);
>>         return rc;
>>     }
>>
>> Based on inspection of the p2m_entry_modify function in Xen 4.14.1, it
>> appears that the part of atomic_write_ept_entry above the call to
>> write_atomic is encapsulated by p2m_entry_modify, which uncovers one
>> issue with the backport: In Xen 4.14, if p2m_entry_modify returns early
>> without an error, then the calls to write_atomic and iommu_sync_cache
>> will still occur (assuming that the corresponding if condition is
>> satisfied), whereas in Xen 4.9.y, there is a code path that can skip the
>> call to iommu_sync_cache, in case the variable level is not zero:
>>
>>    if ( level )
>>    {
>>       ASSERT(!is_epte_superpage(&new) || !p2m_is_foreign(new.sa_p2mt));
>>       write_atomic(&entryptr->epte, new.epte);
>>       return 0;
>>    }
>>
>> This patch reorganizes the atomic_write_ept_entry function to ensure
>> that the call to iommu_sync_cache is not inadvertently skipped.
> 
> IMO this is likely too much change in a single patch; if we really
> wanted to do this, you should have a pre-patch that re-arranges the
> code without any functional change, followed by a patch that fixes the
> issue.

Thank you for the feedback; this is a good point.

> In any case I think this is too much change, so I would go for a
> smaller fix like my proposal below. Can you please test it?

Thank you! I will test your patch later today, and I will report
back by tomorrow.

>> Furthermore, in Xen 4.14.1, p2m_entry_modify calls
>>
>>    put_page(mfn_to_page(oldmfn));
>>
>> before the potential call to iommu_sync_cache in atomic_write_ept_entry.
>> I am not sufficiently familiar with Xen to determine if this is a
>> significant behavioural change, but this patch makes Xen 4.9.y similar
>> to Xen 4.14.1 in that regard as well, by further re-organizing the code
>> in atomic_write_ept_entry.
> 
> Well, that put_page is only relevant to PVH dom0, but you shouldn't
> remove it. The put_page call in newer versions has been moved by
> commit ce0224bf96a1a1f82 into p2m_entry_modify.

Ah, but my patch moves the call to put_page to a location above the
call to iommu_sync_cache, to make the code a bit more similar to the
same function in Xen 4.14. This may not be necessary, though. This goes back
to your aforementioned point about having two separate patches, as my
patch does not make the move of the call to put_page obvious. In any
case, let's focus on your patch.

> Here is my proposed fix, I think we could even do away with the else
> branch, but if level is != 0 p2m_is_foreign should be false, so we
> avoid an extra check.
>
> Thanks, Roger.

I will test this. Thanks again! I really appreciate that you have
taken the time and effort.

Vefa

> ---8<---
> diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
> index 036771f43c..086739ffdd 100644
> --- a/xen/arch/x86/mm/p2m-ept.c
> +++ b/xen/arch/x86/mm/p2m-ept.c
> @@ -56,11 +56,8 @@ static int atomic_write_ept_entry(ept_entry_t *entryptr, ept_entry_t new,
>       if ( level )
>       {
>           ASSERT(!is_epte_superpage(&new) || !p2m_is_foreign(new.sa_p2mt));
> -        write_atomic(&entryptr->epte, new.epte);
> -        return 0;
>       }
> -
> -    if ( unlikely(p2m_is_foreign(new.sa_p2mt)) )
> +    else if ( unlikely(p2m_is_foreign(new.sa_p2mt)) )
>       {
>           rc = -EINVAL;
>           if ( !is_epte_present(&new) )
> 


From xen-devel-bounces@lists.xenproject.org Tue Feb 16 12:49:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 12:49:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85762.160631 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBznZ-0006EU-VE; Tue, 16 Feb 2021 12:49:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85762.160631; Tue, 16 Feb 2021 12:49:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBznZ-0006EN-Re; Tue, 16 Feb 2021 12:49:49 +0000
Received: by outflank-mailman (input) for mailman id 85762;
 Tue, 16 Feb 2021 12:49:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ZW7A=HS=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lBzfZ-0001zG-Iy
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 12:41:33 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c4d441d8-01e8-4a07-9053-c3c19e8fddbb;
 Tue, 16 Feb 2021 12:40:17 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8760FAF2C;
 Tue, 16 Feb 2021 12:40:16 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c4d441d8-01e8-4a07-9053-c3c19e8fddbb
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613479216; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=X3blwfF7NbGXbiKfZpPPBVAn/s2HDWS/Ulzem1ZAomk=;
	b=jk7/SGiWFBznoGqTALSPOXjELreX060pqbsJdjQkx3ySvnDY886OEXayPN/m7HrhTFyq8v
	4DnGLRQQ1ok0Oi2aumVaFc8pWBRyIG4be3Op9woQ0CyVFtjY9dcGOKEu4OT5IVxMI8gqd1
	m6KegIYDvLBojLzL5pKVsLn5xCINBwM=
From: Juergen Gross <jgross@suse.com>
To: torvalds@linux-foundation.org
Cc: linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com
Subject: [GIT PULL] xen: branch for v5.12-rc1
Date: Tue, 16 Feb 2021 13:40:15 +0100
Message-Id: <20210216124015.28923-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Linus,

Please git pull the following tag:

 git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.12-rc1-tag

xen: branch for v5.12-rc1

This batch contains a series of Xen security fixes, all addressing
insufficient error handling in Xen backend drivers.


Thanks.

Juergen

 arch/arm/xen/p2m.c                  |  6 ++++--
 arch/x86/xen/p2m.c                  | 15 +++++++--------
 drivers/block/xen-blkback/blkback.c | 32 ++++++++++++++++++--------------
 drivers/net/xen-netback/netback.c   |  4 +---
 drivers/xen/gntdev.c                | 37 ++++++++++++++++++++-----------------
 drivers/xen/xen-scsiback.c          |  4 ++--
 include/xen/grant_table.h           |  1 +
 7 files changed, 53 insertions(+), 46 deletions(-)

Jan Beulich (8):
      Xen/x86: don't bail early from clear_foreign_p2m_mapping()
      Xen/x86: also check kernel mapping in set_foreign_p2m_mapping()
      Xen/gntdev: correct dev_bus_addr handling in gntdev_map_grant_pages()
      Xen/gntdev: correct error checking in gntdev_map_grant_pages()
      xen-blkback: don't "handle" error by BUG()
      xen-netback: don't "handle" error by BUG()
      xen-scsiback: don't "handle" error by BUG()
      xen-blkback: fix error handling in xen_blkbk_map()

Stefano Stabellini (1):
      xen/arm: don't ignore return errors from set_phys_to_machine


From xen-devel-bounces@lists.xenproject.org Tue Feb 16 13:01:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 13:01:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85826.160643 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBzz4-000849-W1; Tue, 16 Feb 2021 13:01:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85826.160643; Tue, 16 Feb 2021 13:01:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lBzz4-000842-Ss; Tue, 16 Feb 2021 13:01:42 +0000
Received: by outflank-mailman (input) for mailman id 85826;
 Tue, 16 Feb 2021 13:01:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBzz3-00083t-Bk; Tue, 16 Feb 2021 13:01:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBzz3-0000RW-4D; Tue, 16 Feb 2021 13:01:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lBzz2-00032l-TL; Tue, 16 Feb 2021 13:01:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lBzz2-0006ST-St; Tue, 16 Feb 2021 13:01:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=KoCdJ/LAgtbnKaVeDISt1mfKhY4FtmT75KHU6LpFLoc=; b=s+cKUCfJPeHJD4k4A8i3VxsmVs
	6tX7/AY0nTfM5xOe81oU+dkKsTFJ4kblIp/tY6t7ZeHrXNT04vo+hNbz46shRn9iEnTozhT6fs5Qk
	dX9o8VotBaZTE38hwYLvLNnpow2JtO9XckXa8qUPF2VyWLDoCz40gpXLvKGG7+jleH28=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [linux-5.4 bisection] complete test-arm64-arm64-xl-thunderx
Message-Id: <E1lBzz2-0006ST-St@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 16 Feb 2021 13:01:40 +0000

branch xen-unstable
xenbranch xen-unstable
job test-arm64-arm64-xl-thunderx
testid guest-start

Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
  Bug introduced:  a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Bug not present: acc402fa5bf502d471d50e3d495379f093a7f9e4
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/159412/


  commit a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Author: David Woodhouse <dwmw@amazon.co.uk>
  Date:   Wed Jan 13 13:26:02 2021 +0000
  
      xen: Fix event channel callback via INTX/GSI
      
      [ Upstream commit 3499ba8198cad47b731792e5e56b9ec2a78a83a2 ]
      
      For a while, event channel notification via the PCI platform device
      has been broken, because we attempt to communicate with xenstore before
      we even have notifications working, with the xs_reset_watches() call
      in xs_init().
      
      We tend to get away with this on Xen versions below 4.0 because we avoid
      calling xs_reset_watches() anyway, because xenstore might not cope with
      reading a non-existent key. And newer Xen *does* have the vector
      callback support, so we rarely fall back to INTX/GSI delivery.
      
      To fix it, clean up a bit of the mess of xs_init() and xenbus_probe()
      startup. Call xs_init() directly from xenbus_init() only in the !XS_HVM
      case, deferring it to be called from xenbus_probe() in the XS_HVM case
      instead.
      
      Then fix up the invocation of xenbus_probe() to happen either from its
      device_initcall if the callback is available early enough, or when the
      callback is finally set up. This means that the hack of calling
      xenbus_probe() from a workqueue after the first interrupt, or directly
      from the PCI platform device setup, is no longer needed.
      
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Link: https://lore.kernel.org/r/20210113132606.422794-2-dwmw2@infradead.org
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/linux-5.4/test-arm64-arm64-xl-thunderx.guest-start.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/linux-5.4/test-arm64-arm64-xl-thunderx.guest-start --summary-out=tmp/159412.bisection-summary --basis-template=158387 --blessings=real,real-bisect,real-retry linux-5.4 test-arm64-arm64-xl-thunderx guest-start
Searching for failure / basis pass:
 159372 fail [host=rochester0] / 158681 [host=rochester1] 158624 ok.
Failure / basis pass flights: 159372 / 158624
Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 5b9a4104c902d7dec14c9e3c5652a638194487c6 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2e1e8c35f3178df95d79da81ac6deec242da74c2 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 04085ec1ac05a362812e9b0c6b5a8713d7dc88ad
Basis pass 09f983f0c7fc0db79a5f6c883ec3510d424c369c c530a75c1e6a472b0eb9558310b518f0dfcd8860 96a9acfc527964dc5ab7298862a0cd8aa5fffc6a 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 452ddbe3592b141b05a7e0676f09c8ae07f98fdd
Generating revisions with ./adhoc-revtuple-generator  git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git#09f983f0c7fc0db79a5f6c883ec3510d424c369c-5b9a4104c902d7dec14c9e3c5652a638194487c6 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#96a9acfc527964dc5ab7298862a0cd8aa5fffc6a-2e1e8c35f3178df95d79da81ac6deec242da74c2 git://xenbits.xen.org/qemu-xen.git#7ea4288\
 95af2840d85c524f0bd11a38aac308308-7ea428895af2840d85c524f0bd11a38aac308308 git://xenbits.xen.org/osstest/seabios.git#ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e-ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e git://xenbits.xen.org/xen.git#452ddbe3592b141b05a7e0676f09c8ae07f98fdd-04085ec1ac05a362812e9b0c6b5a8713d7dc88ad
Loaded 15001 nodes in revision graph
Searching for test results:
 158609 pass irrelevant
 158616 [host=rochester1]
 158624 pass 09f983f0c7fc0db79a5f6c883ec3510d424c369c c530a75c1e6a472b0eb9558310b518f0dfcd8860 96a9acfc527964dc5ab7298862a0cd8aa5fffc6a 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 452ddbe3592b141b05a7e0676f09c8ae07f98fdd
 158681 [host=rochester1]
 158707 fail irrelevant
 158716 fail irrelevant
 158748 fail 131f8d8a889a5ca66a835eea82bba043ac91a7cf c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e f8708b0ed6d549d1d29b8b5cc287f1f2b642bc63
 158765 fail 131f8d8a889a5ca66a835eea82bba043ac91a7cf c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 6677b5a3577c16501fbc51a3341446905bd21c38
 158796 fail irrelevant
 158818 fail irrelevant
 158841 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158863 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158881 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158929 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea56ebf67dd55483105aa9f9996a48213e78337e 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158962 fail irrelevant
 159023 fail irrelevant
 159129 fail irrelevant
 159200 fail irrelevant
 159238 fail irrelevant
 159295 fail irrelevant
 159324 fail irrelevant
 159339 fail 5b9a4104c902d7dec14c9e3c5652a638194487c6 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2e1e8c35f3178df95d79da81ac6deec242da74c2 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 04085ec1ac05a362812e9b0c6b5a8713d7dc88ad
 159359 fail 5b9a4104c902d7dec14c9e3c5652a638194487c6 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2e1e8c35f3178df95d79da81ac6deec242da74c2 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 04085ec1ac05a362812e9b0c6b5a8713d7dc88ad
 159393 pass 09f983f0c7fc0db79a5f6c883ec3510d424c369c c530a75c1e6a472b0eb9558310b518f0dfcd8860 96a9acfc527964dc5ab7298862a0cd8aa5fffc6a 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 452ddbe3592b141b05a7e0676f09c8ae07f98fdd
 159395 fail 5b9a4104c902d7dec14c9e3c5652a638194487c6 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2e1e8c35f3178df95d79da81ac6deec242da74c2 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 04085ec1ac05a362812e9b0c6b5a8713d7dc88ad
 159397 fail 38f35023fd301abeb01cfd81e73caa2e4e7ec0b1 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159372 fail 5b9a4104c902d7dec14c9e3c5652a638194487c6 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2e1e8c35f3178df95d79da81ac6deec242da74c2 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 04085ec1ac05a362812e9b0c6b5a8713d7dc88ad
 159398 pass d8a487e673abf46c69c901bb25da54e9bc7ba45e c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159400 pass 5a1d7bb7d333849eb7d3ab5ebfbf9805b2cd46c9 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159402 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159403 pass 9cec63a3aacbcaee8d09aecac2ca2f8820efcc70 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159404 pass 8ab3478335ad8fc08f14ec73251b084fe02b3ebb c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159406 pass acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159407 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159409 pass acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159410 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159411 pass acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159412 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
Searching for interesting versions
 Result found: flight 158624 (pass), for basis pass
 Result found: flight 159339 (fail), for basis failure (at ancestor ~280)
 Repro found: flight 159393 (pass), for basis pass
 Repro found: flight 159395 (fail), for basis failure
 0 revisions at acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
No revisions left to test, checking graph state.
 Result found: flight 159406 (pass), for last pass
 Result found: flight 159407 (fail), for first failure
 Repro found: flight 159409 (pass), for last pass
 Repro found: flight 159410 (fail), for first failure
 Repro found: flight 159411 (pass), for last pass
 Repro found: flight 159412 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
  Bug introduced:  a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Bug not present: acc402fa5bf502d471d50e3d495379f093a7f9e4
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/159412/


  commit a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Author: David Woodhouse <dwmw@amazon.co.uk>
  Date:   Wed Jan 13 13:26:02 2021 +0000
  
      xen: Fix event channel callback via INTX/GSI
      
      [ Upstream commit 3499ba8198cad47b731792e5e56b9ec2a78a83a2 ]
      
      For a while, event channel notification via the PCI platform device
      has been broken, because we attempt to communicate with xenstore before
      we even have notifications working, with the xs_reset_watches() call
      in xs_init().
      
      We tend to get away with this on Xen versions below 4.0 because we avoid
      calling xs_reset_watches() anyway, because xenstore might not cope with
      reading a non-existent key. And newer Xen *does* have the vector
      callback support, so we rarely fall back to INTX/GSI delivery.
      
      To fix it, clean up a bit of the mess of xs_init() and xenbus_probe()
      startup. Call xs_init() directly from xenbus_init() only in the !XS_HVM
      case, deferring it to be called from xenbus_probe() in the XS_HVM case
      instead.
      
      Then fix up the invocation of xenbus_probe() to happen either from its
      device_initcall if the callback is available early enough, or when the
      callback is finally set up. This means that the hack of calling
      xenbus_probe() from a workqueue after the first interrupt, or directly
      from the PCI platform device setup, is no longer needed.
      
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Link: https://lore.kernel.org/r/20210113132606.422794-2-dwmw2@infradead.org
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>

pnmtopng: 118 colors found
Revision graph left in /home/logs/results/bisect/linux-5.4/test-arm64-arm64-xl-thunderx.guest-start.{dot,ps,png,html,svg}.
----------------------------------------
159412: tolerable ALL FAIL

flight 159412 linux-5.4 real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/159412/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-arm64-arm64-xl-thunderx 14 guest-start             fail baseline untested


jobs:
 test-arm64-arm64-xl-thunderx                                 fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Tue Feb 16 13:19:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 13:19:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85843.160661 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC0GF-0000w8-Ko; Tue, 16 Feb 2021 13:19:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85843.160661; Tue, 16 Feb 2021 13:19:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC0GF-0000w1-Hb; Tue, 16 Feb 2021 13:19:27 +0000
Received: by outflank-mailman (input) for mailman id 85843;
 Tue, 16 Feb 2021 13:19:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+CYm=HS=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lC0GE-0000vw-0H
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 13:19:26 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id be77653d-027b-4d95-bc34-40ebdce87f51;
 Tue, 16 Feb 2021 13:19:24 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4BA94ACBF;
 Tue, 16 Feb 2021 13:19:23 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: be77653d-027b-4d95-bc34-40ebdce87f51
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613481563; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=QAN5Wz+ZSjzG6pV8EQCugEBAawJOPuPe6sT/ik3O//E=;
	b=Mg9xC9YHNJn8jRkUj7lf6XC70P7AHxZwHh8NlDS9fQE/IhRy92s0zLL3YjG0iIDhxmqSS8
	PYJc8o3d2w5zi1OYjg2IxsdsYuEZoDoSKGmu7s7JKvoS0cq69hNGi8dCW7g+MekPQU9uxT
	CNlBtd0Zf9HHyTBJHjaSAGsQJ3g8+c0=
Subject: Re: [PATCH] x86/EFI: work around GNU ld 2.36 issue
From: Jan Beulich <jbeulich@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <e6d59277-35b2-e7df-0e68-a794c8855ac0@suse.com>
 <8450b84d-93f2-7568-362e-af27954e5157@suse.com>
 <881b97a1-a4e3-f213-f81b-ac07ca46ed27@citrix.com>
 <5a02763e-715f-a76d-e926-87a2a4c38449@suse.com>
Message-ID: <2a3696b9-9799-ec75-508d-c2e562eb26b8@suse.com>
Date: Tue, 16 Feb 2021 14:19:22 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <5a02763e-715f-a76d-e926-87a2a4c38449@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 05.02.2021 12:13, Jan Beulich wrote:
> On 05.02.2021 11:33, Andrew Cooper wrote:
>> On 05/02/2021 08:11, Jan Beulich wrote:
>>> On 04.02.2021 14:38, Jan Beulich wrote:
>>>> Our linker capability check fails with the recent binutils release's ld:
>>>>
>>>> .../check.o:(.debug_aranges+0x6): relocation truncated to fit: R_X86_64_32 against `.debug_info'
>>>> .../check.o:(.debug_info+0x6): relocation truncated to fit: R_X86_64_32 against `.debug_abbrev'
>>>> .../check.o:(.debug_info+0xc): relocation truncated to fit: R_X86_64_32 against `.debug_str'+76
>>>> .../check.o:(.debug_info+0x11): relocation truncated to fit: R_X86_64_32 against `.debug_str'+d
>>>> .../check.o:(.debug_info+0x15): relocation truncated to fit: R_X86_64_32 against `.debug_str'+2b
>>>> .../check.o:(.debug_info+0x29): relocation truncated to fit: R_X86_64_32 against `.debug_line'
>>>> .../check.o:(.debug_info+0x30): relocation truncated to fit: R_X86_64_32 against `.debug_str'+19
>>>> .../check.o:(.debug_info+0x37): relocation truncated to fit: R_X86_64_32 against `.debug_str'+71
>>>> .../check.o:(.debug_info+0x3e): relocation truncated to fit: R_X86_64_32 against `.debug_str'
>>>> .../check.o:(.debug_info+0x45): relocation truncated to fit: R_X86_64_32 against `.debug_str'+5e
>>>> .../check.o:(.debug_info+0x4c): additional relocation overflows omitted from the output
>>>>
>>>> Tell the linker to strip debug info as a workaround. Oddly enough debug
>>>> info has been getting stripped when linking the actual xen.efi, without
>>>> me being able to tell why this would be.
>>> I've changed this to
>>>
>>> "Tell the linker to strip debug info as a workaround. Debug info has been
>>>  getting stripped already anyway when linking the actual xen.efi."
>>>
>>> as I noticed I did look for -S only yesterday, while we have
>>>
>>> EFI_LDFLAGS += --image-base=$(1) --stack=0,0 --heap=0,0 --strip-debug
>>
>> So, in terms of the bugfix, Acked-by: Andrew Cooper
>> <andrew.cooper3@citrix.com>
> 
> Thanks.
> 
>> However, we ought be keeping the debug symbols for xen-syms.efi (or
>> equiv) seeing as there is logic included here which isn't in the regular
>> xen-syms.
> 
> Well, perhaps. Besides the 2.36 binutils regression needing fixing
> (or preventing us from avoiding the stripping in case that's the
> linker version used), there are a few more points relevant here:
> 
> - Checking with a random older binutils (2.32) I observe the linker
>   working fine, but our mkreloc utility choking on the (admittedly
>   suspicious, at least at the first glance) output. This may be
>   possible to deal with, but still.
> 
> - It would need checking whether the resulting binary works at all.
>   All the .debug_* sections come first. Of course there are surely
>   again ways to overcome this (albeit it smells like a binutils
>   bug).

I've now convinced myself that the resulting images wouldn't work.
This can be hacked around in binutils, presumably, but the question
is whether that's worth it: A correct binary would include the
entire debug data as part of the loadable image, i.e. would require
quite a bit of memory (and time) for EFI to load. This is because
of requirements resulting from (I'm inclined to say shortcomings
in) how at least some of the PE loaders work.

On the positive side, while investigating I came across a change
made to binutils a little over a year ago that - if it works
correctly (not yet tried out) - could allow us to avoid the use of
our mkreloc tool.

> - While in ELF binaries the particular .debug_* sections are
>   conventionally assumed to hold Dwarf debug info, no such
>   assumption is true for PE executables. In particular I observe
>   objdump (2.32 as well as 2.36) to merely dump the COFF symbol
>   table when handed -g. Are you aware of consumers of the
>   information, if we indeed kept it?

I noticed Cygwin uses Dwarf in PE images, so there is at least a
precedent.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Feb 16 15:29:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 15:29:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85872.160703 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC2Hy-0004Dt-Nv; Tue, 16 Feb 2021 15:29:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85872.160703; Tue, 16 Feb 2021 15:29:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC2Hy-0004Dm-Kt; Tue, 16 Feb 2021 15:29:22 +0000
Received: by outflank-mailman (input) for mailman id 85872;
 Tue, 16 Feb 2021 15:29:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+CYm=HS=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lC2Hw-0004Dg-V6
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 15:29:20 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bcd36bae-9c04-4e6a-b0e2-4cc19cba6bb1;
 Tue, 16 Feb 2021 15:29:18 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C43B7AE87;
 Tue, 16 Feb 2021 15:29:17 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bcd36bae-9c04-4e6a-b0e2-4cc19cba6bb1
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613489357; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=wMIMMImt700WSdImEdH1Sc2CYLw5jnB37NAWri3bCHA=;
	b=FgjKl5X2iZ2IiuV/v2Ego+3rl/9DOb8TbyhTZimQntujCSI4e13qGMK1Ny+E85zyuaqiVi
	n1L/waVO2GLlUz+D1iPZZdDzZ7BQ1woqY+e5vofyTSQ8XmG/VsCLbZthwcN2eNjV3nYufm
	xgpSxzuREtpn4wTD7atOTggkTdhs30o=
Subject: Re: [PATCH DO NOT APPLY] docs: Document allocator properties and the
 rubric for using them
To: George Dunlap <george.dunlap@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Roger Pau Monne <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210216102839.1801667-1-george.dunlap@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b225be0f-3eed-426e-8829-6e7c57cd7635@suse.com>
Date: Tue, 16 Feb 2021 16:29:17 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210216102839.1801667-1-george.dunlap@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 16.02.2021 11:28, George Dunlap wrote:
> --- /dev/null
> +++ b/docs/hypervisor-guide/memory-allocation-functions.rst
> @@ -0,0 +1,118 @@
> +.. SPDX-License-Identifier: CC-BY-4.0
> +
> +Xenheap memory allocation functions
> +===================================
> +
> +In general Xen contains two pools (or "heaps") of memory: the *xen
> +heap* and the *dom heap*.  Please see the comment at the top of
> +``xen/common/page_alloc.c`` for the canonical explanation.
> +
> +This document describes the various functions available to allocate
> +memory from the xen heap: their properties and rules for when they should be
> +used.

Irrespective of your subsequent indication that you dislike the
proposal (which I understand affects only the guidelines further
down anyway), I'd like to point out that vmalloc() does not
allocate from the Xen heap. Therefore a benefit of always
recommending use of xvmalloc() would be that the function could
fall back to vmalloc() (and hence the larger domain heap) when
xmalloc() failed.
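The fallback being suggested can be modelled in miniature. The stand-ins below are plain malloc() wrappers, purely illustrative and not Xen's real allocators; the point is only the shape of the logic (try the size-limited xen heap first, fall back to a vmalloc-style allocation on failure):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

#define PAGE_SIZE 4096u

/* Hypothetical stand-in for xmalloc_bytes(): drawing from the limited
 * xen heap.  Modelled here as failing for any multi-page request. */
static void *model_xmalloc_bytes(size_t size)
{
    return size > PAGE_SIZE ? NULL : malloc(size);
}

/* Hypothetical stand-in for vmalloc(): in the real thing this would
 * map non-contiguous pages from the (larger) domain heap. */
static void *model_vmalloc(size_t size)
{
    return malloc(size);
}

/* The fallback shape: try the xen heap first, fall back to vmalloc()
 * (and hence the larger domain heap) when that fails. */
static void *model_xvmalloc(size_t size)
{
    void *p = model_xmalloc_bytes(size);

    return p ? p : model_vmalloc(size);
}
```

With this shape, callers using only the xvmalloc entry point automatically benefit from the fallback without knowing which heap backed their allocation.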

> +TLDR guidelines
> +---------------
> +
> +* By default, ``xvmalloc`` (or its helper cognates) should be used
> +  unless you know you have specific properties that need to be met.
> +
> +* If you need memory which needs to be physically contiguous, and may
> +  be larger than ``PAGE_SIZE``...
> +  
> +  - ...and is order 2, use ``alloc_xenheap_pages``.
> +    
> +  - ...and is not order 2, use ``xmalloc`` (or its helper cognates).

ITYM "an exact power of 2 number of pages"?

> +* If you don't need memory to be physically contiguous, and know the
> +  allocation will always be larger than ``PAGE_SIZE``, you may use
> +  ``vmalloc`` (or one of its helper cognates).
> +
> +* If you know that allocation will always be less than ``PAGE_SIZE``,
> +  you may use ``xmalloc``.

As per Julien's and your own replies, this wants to be "minimum
possible page size", which of course depends on where in the
tree the piece of code is to live. (It would be "maximum
possible page size" in the earlier paragraph.)

> +Properties of various allocation functions
> +------------------------------------------
> +
> +Ultimately, the underlying allocator for all of these functions is
> +``alloc_xenheap_pages``.  They differ on several different properties:
> +
> +1. What underlying allocation sizes are.  This in turn has an effect
> +   on:
> +
> +   - How much memory is wasted when requested size doesn't match
> +
> +   - How such allocations are affected by memory fragmentation
> +
> +   - How such allocations affect memory fragmentation
> +
> +2. Whether the underlying pages are physically contiguous
> +
> +3. Whether allocation and deallocation require the cost of mapping and
> +   unmapping
> +
> +``alloc_xenheap_pages`` will allocate a physically contiguous set of
> +pages on orders of 2.  No mapping or unmapping is done.

That's the case today, but meant to change rather sooner than later
(when the 1:1 map disappears).
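For context on the "orders of 2" property being discussed: an order-n page allocation returns PAGE_SIZE << n bytes, so a request just past a power-of-two boundary rounds up to the next order and can waste nearly half of what was allocated. A self-contained model (not Xen's actual helpers):

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SIZE 4096u

/* Bytes handed back by an order-n page allocation. */
static size_t order_bytes(unsigned int order)
{
    return (size_t)PAGE_SIZE << order;
}

/* Smallest order whose allocation covers `size` bytes; illustrative
 * stand-in, not Xen's real get_order-style helper. */
static unsigned int order_for_size(size_t size)
{
    unsigned int order = 0;

    while (order_bytes(order) < size)
        order++;
    return order;
}
```

For example, a 3-page request must be served at order 2 (4 pages), wasting one page; this is the fragmentation/waste trade-off item 1 in the quoted document refers to.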

Jan


From xen-devel-bounces@lists.xenproject.org Tue Feb 16 15:57:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 15:57:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85888.160718 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC2is-0006tc-VX; Tue, 16 Feb 2021 15:57:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85888.160718; Tue, 16 Feb 2021 15:57:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC2is-0006tV-SW; Tue, 16 Feb 2021 15:57:10 +0000
Received: by outflank-mailman (input) for mailman id 85888;
 Tue, 16 Feb 2021 15:57:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC2ir-0006tQ-N6
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 15:57:09 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC2ir-0003Oa-ME
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 15:57:09 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC2ir-0002vk-LE
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 15:57:09 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lC2io-0000h0-BU; Tue, 16 Feb 2021 15:57:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=I5DR9HPMrvd5IRmtN2NQE3F2mFp0XM5vCEOoF2ijlXA=; b=DyjrxL/GJSUJD3gs86YzphYkwU
	8akfydlcklfXPTu21QiYe9HOk3nwrADlEuklLR3q/zBMrc6MBzi57SGg2M0jR3JilUegXtizXGhdW
	zkm/tEgRY85Sd0epme4RxRII94g0+YaCZgMeeJcpadxUEz8AUDH95sY87T8J6YpZ/Bjo=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24619.60242.137678.510238@mariner.uk.xensource.com>
Date: Tue, 16 Feb 2021 15:57:06 +0000
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
    Wei Liu <wl@xen.org>,
    Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH 01/10] tools/xl: Fix exit code for `xl vkbattach`
In-Reply-To: <20210212153953.4582-2-andrew.cooper3@citrix.com>
References: <20210212153953.4582-1-andrew.cooper3@citrix.com>
	<20210212153953.4582-2-andrew.cooper3@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Andrew Cooper writes ("[PATCH 01/10] tools/xl: Fix exit code for `xl vkbattach`"):
> Various version of gcc, when compiling with -Og, complain:
> 
>   xl_vkb.c: In function 'main_vkbattach':
>   xl_vkb.c:79:12: error: 'rc' may be used uninitialized in this function [-Werror=maybe-uninitialized]
>      79 |     return rc;
>         |            ^~
> 
> The dryrun_only path really does leave rc uninitialised.  Introduce a done
> label for success paths to use.
> 
> Fixes: a15166af7c3 ("xl: add vkb config parser and CLI")
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Release-Acked-by: Ian Jackson <iwj@xenproject.org>


From xen-devel-bounces@lists.xenproject.org Tue Feb 16 15:57:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 15:57:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85890.160732 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC2jN-0006yw-9l; Tue, 16 Feb 2021 15:57:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85890.160732; Tue, 16 Feb 2021 15:57:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC2jN-0006yp-6o; Tue, 16 Feb 2021 15:57:41 +0000
Received: by outflank-mailman (input) for mailman id 85890;
 Tue, 16 Feb 2021 15:57:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC2jM-0006yg-9z
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 15:57:40 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC2jM-0003Ov-8L
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 15:57:40 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC2jM-0002yD-6d
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 15:57:40 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lC2jI-0000hF-Rr; Tue, 16 Feb 2021 15:57:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=6ipkBe/QGkaONQoMmbhX9lP8UodzZGUN7veOmeNVZDY=; b=tmYlvbG+HQBbPN0JkBCHbr1GN5
	C61AcwEgG/0+21PGjQeAWTgoutTrOQiOaTGaP9tKZEggj/Vd1wUK7vG+0raj3/n+l2gJ8u7TLYTZ2
	JRFlYye1xewKFFFEVT00MvH9jV8PmGm8ZJAu2IrZ5trgo5tEw3RUQOg264excsj9OpN8=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24619.60272.652150.820628@mariner.uk.xensource.com>
Date: Tue, 16 Feb 2021 15:57:36 +0000
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH 02/10] tools/libxg: Fix uninitialised variable in write_x86_cpu_policy_records()
In-Reply-To: <20210212153953.4582-3-andrew.cooper3@citrix.com>
References: <20210212153953.4582-1-andrew.cooper3@citrix.com>
	<20210212153953.4582-3-andrew.cooper3@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Andrew Cooper writes ("[PATCH 02/10] tools/libxg: Fix uninitialised variable in write_x86_cpu_policy_records()"):
> Fixes: f6b2b8ec53d ("libxc/save: Write X86_{CPUID,MSR}_DATA records")
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Release-Acked-by: Ian Jackson <iwj@xenproject.org>


From xen-devel-bounces@lists.xenproject.org Tue Feb 16 16:00:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 16:00:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85897.160745 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC2lq-0008IM-NY; Tue, 16 Feb 2021 16:00:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85897.160745; Tue, 16 Feb 2021 16:00:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC2lq-0008IF-Jn; Tue, 16 Feb 2021 16:00:14 +0000
Received: by outflank-mailman (input) for mailman id 85897;
 Tue, 16 Feb 2021 16:00:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC2lp-0008I9-NI
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 16:00:13 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC2lp-0003zW-MZ
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 16:00:13 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC2lp-0003KD-LR
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 16:00:13 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lC2lm-0000hw-H6; Tue, 16 Feb 2021 16:00:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=P6eWWio+X0l5UPkWFzkpOKLW9fWf8eZJUZuO0IBWcXo=; b=lB/hLiaBBaMQ7GMhFWJvenGH0I
	JRv9c6mL8BtcX96AfRbfA/HA9HrZZPLYwhhbW6SwWhJFFASveU9yTDftO3FszzbGlAVtjr7poukPz
	MUTIRrhpDd8tfjcz003YKaS35tqoYxdm/lVaajDhaO9WrQevTVq0OAEezXucaWBmEcik=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24619.60426.285539.113926@mariner.uk.xensource.com>
Date: Tue, 16 Feb 2021 16:00:10 +0000
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
    Wei Liu <wl@xen.org>,
    Stefano Stabellini <sstabellini@kernel.org>,
    Julien Grall <julien@xen.org>
Subject: Re: [PATCH v1.1 03/10] tools/libxg: Drop stale p2m logic from ARM's meminit()
In-Reply-To: <20210212200139.26911-1-andrew.cooper3@citrix.com>
References: <20210212153953.4582-4-andrew.cooper3@citrix.com>
	<20210212200139.26911-1-andrew.cooper3@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Andrew Cooper writes ("[PATCH v1.1 03/10] tools/libxg: Drop stale p2m logic from ARM's meminit()"):
> Various version of gcc, when compiling with -Og, complain:
> 
>   xg_dom_arm.c: In function 'meminit':
>   xg_dom_arm.c:420:19: error: 'p2m_size' may be used uninitialized in this function [-Werror=maybe-uninitialized]
>     420 |     dom->p2m_size = p2m_size;
>         |     ~~~~~~~~~~~~~~^~~~~~~~~~
> 
> This is actually entirely stale code since ee21f10d70^..97e34ad22d which
> removed the 1:1 identity p2m for translated domains.
> 
> Drop the write of d->p2m_size, and the p2m_size local variable.  Reposition
> the p2m_size field in struct xc_dom_image and correct some stale
> documentation.
> 
> This change really ought to have been part of the original cleanup series.
> 
> No actual change to how ARM domains are constructed.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Release-Acked-by: Ian Jackson <iwj@xenproject.org>


From xen-devel-bounces@lists.xenproject.org Tue Feb 16 16:07:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 16:07:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85903.160757 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC2sU-0000Cx-FK; Tue, 16 Feb 2021 16:07:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85903.160757; Tue, 16 Feb 2021 16:07:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC2sU-0000Cq-Ba; Tue, 16 Feb 2021 16:07:06 +0000
Received: by outflank-mailman (input) for mailman id 85903;
 Tue, 16 Feb 2021 16:07:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC2sS-0000CQ-VD
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 16:07:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC2sS-00047J-Qz
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 16:07:04 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC2sS-0003qC-Pi
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 16:07:04 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lC2sP-0000j7-Kn; Tue, 16 Feb 2021 16:07:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=KPxoidAHBajHVmadRtmD47OFaWAuEME/xon5R2DQX0U=; b=Bl7xw4wGHCPuYX7E7xLSZsQ+8i
	b5I+II1XYD4QzBojXS/i7aHWta9zLrFQraCGyydibWwtUxRDe/FUkZWeAz9KkbhYhDBsmTc2G3yVh
	uQn+Td4mU/k3PtkLdUEvgk5ioj+pFF2nTG5LtwEFlBPiqUPpwaeK3QrY79Ley1TKpuC4=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24619.60837.362896.565993@mariner.uk.xensource.com>
Date: Tue, 16 Feb 2021 16:07:01 +0000
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
    Ian Jackson <iwj@xenproject.org>,
    Wei Liu <wl@xen.org>,
    Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH 04/10] tools/libxl: Fix uninitialised variable in libxl__domain_get_device_model_uid()
In-Reply-To: <20210212153953.4582-5-andrew.cooper3@citrix.com>
References: <20210212153953.4582-1-andrew.cooper3@citrix.com>
	<20210212153953.4582-5-andrew.cooper3@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Andrew Cooper writes ("[PATCH 04/10] tools/libxl: Fix uninitialised variable in libxl__domain_get_device_model_uid()"):
> The logic is sufficiently complicated I can't figure out if the complaint is
> legitimate or not.  There is exactly one path wanting kill_by_uid set to true,
> so default it to false and drop the existing workaround for this problem at
> other optimisation levels.

The place where it's used is here:

    if (!rc && user) {
        state->dm_runas = user;
        if (kill_by_uid)
            state->dm_kill_uid = GCSPRINTF("%ld",...
This is gated by !rc.  So for this to be used uninitialised, we'd have
to get here with rc==0 but uninitialised kill_by_uid.

The label `out` is preceded by a nonzero assignment to rc.

All the `goto out` are preceded by either (i) nonzero assignment to
rc, or (ii) assignment to kill_by_uid and setting rc=0.

So the compiler is wrong.

If only we had sum types.

In the absence of sum types I suggest the following restructuring:
Change all the `rc = ERROR...; goto out;` to `goto err` and make `goto
out` be the success path only.
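The suggested restructuring can be sketched as follows. This is a hypothetical skeleton, not the actual libxl function: every failure jumps to `err` (which owns the nonzero rc), and `out` is reachable only after kill_by_uid has been assigned, so the compiler can see the variable is initialised on every path that uses it:

```c
#include <assert.h>
#include <stdbool.h>

#define ERROR_INVAL (-6) /* stand-in for a libxl-style error code */

/* Hypothetical skeleton of the suggested shape: `goto out` is the
 * success path only, all failures take `goto err`. */
static int get_uid_sketch(bool fail, bool by_uid, bool *kill_by_uid_out)
{
    int rc;
    bool kill_by_uid;

    if (fail)
        goto err;

    /* Success path: kill_by_uid is always assigned before `out`. */
    kill_by_uid = by_uid;
    rc = 0;
    goto out;

 err:
    rc = ERROR_INVAL;
    kill_by_uid = false; /* never consumed when rc != 0, set for clarity */

 out:
    *kill_by_uid_out = kill_by_uid;
    return rc;
}
```

With this shape the data-flow from assignment to use is visible to the compiler, so -Og's maybe-uninitialized analysis no longer has an unproven path to warn about.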

Ian.


From xen-devel-bounces@lists.xenproject.org Tue Feb 16 16:08:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 16:08:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85905.160768 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC2tO-0000J3-P9; Tue, 16 Feb 2021 16:08:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85905.160768; Tue, 16 Feb 2021 16:08:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC2tO-0000Iw-MG; Tue, 16 Feb 2021 16:08:02 +0000
Received: by outflank-mailman (input) for mailman id 85905;
 Tue, 16 Feb 2021 16:08:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC2tN-0000Ir-Vz
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 16:08:01 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC2tN-00048E-UQ
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 16:08:01 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC2tN-0003wF-Tb
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 16:08:01 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lC2tK-0000jV-K1; Tue, 16 Feb 2021 16:07:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=Jb235spog02YHjKBB0tR6jF2pxu57DBz4D6J3M1QWFk=; b=m64ozJRMtv6TOS2McyYKcAFwn7
	hKsnKv41RfvpH7Qam3M5CUZBJgaavCSe6G4gXEVzTst6KeDfElhvl3CnrgAb1N4Xbz/yt8P9fxIQJ
	J+QhbW16X6sZi9sMZ4UaoSknsH4IlirQ0QNCLrZjx+bdmRRuC8WhgRk1OszeYxAoDySo=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Message-ID: <24619.60894.384400.644194@mariner.uk.xensource.com>
Date: Tue, 16 Feb 2021 16:07:58 +0000
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
    Wei Liu <wl@xen.org>,
    Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH 05/10] tools/libxl: Fix uninitialised variable in libxl__write_stub_dmargs()
In-Reply-To: <20210212153953.4582-6-andrew.cooper3@citrix.com>
References: <20210212153953.4582-1-andrew.cooper3@citrix.com>
	<20210212153953.4582-6-andrew.cooper3@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Andrew Cooper writes ("[PATCH 05/10] tools/libxl: Fix uninitialised variable in libxl__write_stub_dmargs()"):
> Various versions of gcc, when compiling with -Og, complain:
> 
>   libxl_dm.c: In function ‘libxl__write_stub_dmargs’:
>   libxl_dm.c:2166:16: error: ‘dmargs’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
>                rc = libxl__xs_write_checked(gc, t, path, dmargs);
>                ~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> 
> It can't, but only because of how the is_linux_stubdom checks line up.

Release-Acked-by: Ian Jackson <iwj@xenproject.org>


From xen-devel-bounces@lists.xenproject.org Tue Feb 16 16:10:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 16:10:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85909.160780 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC2w2-0001C8-6X; Tue, 16 Feb 2021 16:10:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85909.160780; Tue, 16 Feb 2021 16:10:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC2w2-0001C1-3c; Tue, 16 Feb 2021 16:10:46 +0000
Received: by outflank-mailman (input) for mailman id 85909;
 Tue, 16 Feb 2021 16:10:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC2w1-0001Bw-CV
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 16:10:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC2w1-0004A8-AJ
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 16:10:45 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC2w1-00047e-9a
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 16:10:45 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lC2vy-0000kA-5V; Tue, 16 Feb 2021 16:10:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=wCbS+McCL2JPV+sCiQ+JDxAUUe/J7CsQjnJ4BiKHHJ0=; b=LipKPbop7/HxsqHVOmxneAfClf
	bEF64tAnQiNlzTRbSazo08aNM/NACpUmD2fZ79gcqi5e9Vr6pikwchvyMztnsuw8XQaZZKqkvl+gs
	dR5kY8P7KasM7uoHD4LMfmqIci3YyS8qwnjcx0AKU0wJAjyCF1ozYF5aezEMU1xQWkBI=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Message-ID: <24619.61057.971290.452710@mariner.uk.xensource.com>
Date: Tue, 16 Feb 2021 16:10:41 +0000
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
    Wei Liu <wl@xen.org>,
    Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH 06/10] stubdom/xenstored: Fix uninitialised variables in lu_read_state()
In-Reply-To: <20210212153953.4582-7-andrew.cooper3@citrix.com>
References: <20210212153953.4582-1-andrew.cooper3@citrix.com>
	<20210212153953.4582-7-andrew.cooper3@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Andrew Cooper writes ("[PATCH 06/10] stubdom/xenstored: Fix uninitialised variables in lu_read_state()"):
> Various versions of gcc, when compiling with -Og, complain:
> 
>   xenstored_control.c: In function ‘lu_read_state’:
>   xenstored_control.c:540:11: error: ‘state.size’ is used uninitialized in this
>   function [-Werror=uninitialized]
>     if (state.size == 0)
>         ~~~~~^~~~~
>   xenstored_control.c:543:6: error: ‘state.buf’ may be used uninitialized in
>   this function [-Werror=maybe-uninitialized]
>     pre = state.buf;
>     ~~~~^~~~~~~~~~~
>   xenstored_control.c:550:23: error: ‘state.buf’ may be used uninitialized in
>   this function [-Werror=maybe-uninitialized]
>      (void *)head - state.buf < state.size;
>                     ~~~~~^~~~
>   xenstored_control.c:550:35: error: ‘state.size’ may be used uninitialized in
>   this function [-Werror=maybe-uninitialized]
>      (void *)head - state.buf < state.size;
>                                 ~~~~~^~~~~
> 
> Interestingly, this is only in the stubdom build.  I can't identify any
> relevant differences vs the regular tools build.

  #ifdef __MINIOS__
  static void lu_get_dump_state(struct lu_dump_state *state)
  {
  }

So the compiler is right to complain that

  lu_get_dump_state(&state);
  if (state.size == 0)
          barf_perror("No state found after live-update");

this will use state.size uninitialised.

It's probably just luck that this works at all, if it does,
anywhere...
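
One plausible shape for a fix (a sketch with a cut-down struct, not
the actual patch): even a do-nothing __MINIOS__ stub must leave the
out-parameter in a defined state, so the caller's size check is well
defined and reliably takes the barf_perror() path.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* cut-down stand-in for the struct in xenstored_control.c */
struct lu_dump_state {
    void *buf;
    size_t size;
};

/* __MINIOS__ variant: zero the state instead of leaving it untouched,
 * so `state.size == 0` afterwards is guaranteed rather than lucky. */
static void lu_get_dump_state(struct lu_dump_state *state)
{
    memset(state, 0, sizeof(*state));
}
```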

Reviewed-by: Ian Jackson <iwj@xenproject.org>
Release-Acked-by: Ian Jackson <iwj@xenproject.org>

Ian.


From xen-devel-bounces@lists.xenproject.org Tue Feb 16 16:11:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 16:11:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85913.160793 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC2wl-0001Kl-MS; Tue, 16 Feb 2021 16:11:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85913.160793; Tue, 16 Feb 2021 16:11:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC2wl-0001Ke-J7; Tue, 16 Feb 2021 16:11:31 +0000
Received: by outflank-mailman (input) for mailman id 85913;
 Tue, 16 Feb 2021 16:11:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC2wk-0001KZ-6R
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 16:11:30 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC2wk-0004Aw-5Z
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 16:11:30 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC2wk-0004MG-4Z
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 16:11:30 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lC2wh-0000kW-0Z; Tue, 16 Feb 2021 16:11:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=Ihf4Ig4kaZSydDmfXnGJYIGUJcOgkHM5mhFknI5r/94=; b=6T1RZKKO9cfh0McByDGiIYhTC3
	rcPjTq/63qy8T/500EYQqAKtksGZowusyoLR+EmN+bNK8/ZU8Vs+zKZv3xnUL26gVOaw1WW0jM9MG
	gXLG9PH9rQElY9g8CXwlhInIMx9dqmONb2wT7ny7T4f+i3Cq1tqG/IFPLV4E2Zx02RbU=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24619.61102.815443.66032@mariner.uk.xensource.com>
Date: Tue, 16 Feb 2021 16:11:26 +0000
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
    Wei Liu <wl@xen.org>,
    Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH 07/10] tools: Use -Og for debug builds when available
In-Reply-To: <20210212153953.4582-8-andrew.cooper3@citrix.com>
References: <20210212153953.4582-1-andrew.cooper3@citrix.com>
	<20210212153953.4582-8-andrew.cooper3@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Andrew Cooper writes ("[PATCH 07/10] tools: Use -Og for debug builds when available"):
> The recommended optimisation level for debugging is -Og, and is what tools
> such as gdb prefer.  In practice, it equates to -O1 with a few specific
> optimisations turned off.
> 
> abi-dumper in particular wants the libraries it inspects in this form.

Release-Acked-by: Ian Jackson <iwj@xenproject.org>

I would prefer to have this in 4.15 now than to backport it later...
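
For reference, the sort of compiler-flag probe involved can be
sketched in shell like so ($CC handling and the fallback flags are my
assumptions, not the hunk from the patch):

```shell
# Probe whether the compiler accepts -Og; fall back to -O1 -g if not.
cc="${CC:-cc}"
if echo 'int main(void){return 0;}' | "$cc" -Og -g -x c -c -o /dev/null - 2>/dev/null
then
    debug_cflags="-Og -g"
else
    debug_cflags="-O1 -g"
fi
echo "debug CFLAGS: $debug_cflags"
```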

Ian.


From xen-devel-bounces@lists.xenproject.org Tue Feb 16 16:12:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 16:12:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85915.160804 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC2xD-0001QJ-VU; Tue, 16 Feb 2021 16:11:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85915.160804; Tue, 16 Feb 2021 16:11:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC2xD-0001QC-SA; Tue, 16 Feb 2021 16:11:59 +0000
Received: by outflank-mailman (input) for mailman id 85915;
 Tue, 16 Feb 2021 16:11:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC2xC-0001Pz-QY
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 16:11:58 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC2xC-0004D3-Pi
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 16:11:58 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC2xC-0004Q0-Ow
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 16:11:58 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lC2x9-0000kl-F8; Tue, 16 Feb 2021 16:11:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=xB+egjqVb1RSiDxyWknyUN31Dv96ZV29UYoSz8npmGc=; b=GLRvpwzyDN5JYinB77Vm1KIac5
	jr2LFvB6R6Xm/m8y5olbgTqUuG+4CZMDRxFEDHhLWeUN30SxXUcSny9gKd/qvNALggBsF4WzRziyA
	/6y3Dn7VW6XPq7VUgW3B/N9l2MmwNmIhhby/O4Vvqol8IO4JKwYTeiUOGJmrWM0wOCZI=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24619.61131.239989.717826@mariner.uk.xensource.com>
Date: Tue, 16 Feb 2021 16:11:55 +0000
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
    Wei Liu <wl@xen.org>,
    Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH 08/10] tools: Check for abi-dumper in ./configure
In-Reply-To: <20210212153953.4582-9-andrew.cooper3@citrix.com>
References: <20210212153953.4582-1-andrew.cooper3@citrix.com>
	<20210212153953.4582-9-andrew.cooper3@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Andrew Cooper writes ("[PATCH 08/10] tools: Check for abi-dumper in ./configure"):
> This will be optional.  No functional change.

Release-Acked-by: Ian Jackson <iwj@xenproject.org>


From xen-devel-bounces@lists.xenproject.org Tue Feb 16 16:13:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 16:13:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85919.160816 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC2yz-0001Zv-BE; Tue, 16 Feb 2021 16:13:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85919.160816; Tue, 16 Feb 2021 16:13:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC2yz-0001Zo-89; Tue, 16 Feb 2021 16:13:49 +0000
Received: by outflank-mailman (input) for mailman id 85919;
 Tue, 16 Feb 2021 16:13:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC2yy-0001Zi-Dd
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 16:13:48 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC2yy-0004En-C3
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 16:13:48 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC2yy-0004WE-B9
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 16:13:48 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lC2yv-0000lF-0T; Tue, 16 Feb 2021 16:13:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=xk/YdcwjCHmroxzlKQxTpW5y3R7/G6VMsvaVS7Nkriw=; b=Zv1KGPu1SgOnumJSQ/nbCu1BIB
	R+u/wlNS2/y3jvADIWLqV1J72VHkg2rFrYk/bawK7oCCQCXgU6QBC5tM+levY1NKaYB71NuSTGYP1
	VNmY4vQrM2NaWBoFp3y4t/kpGOf7t9BmIP9Ql0FAVZgoCMP4yCyWP3pgrJIXqf5MCdcE=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24619.61240.746260.35610@mariner.uk.xensource.com>
Date: Tue, 16 Feb 2021 16:13:44 +0000
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
    Wei Liu <wl@xen.org>,
    Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH 09/10] tools/libs: Add rule to generate headers.lst
In-Reply-To: <20210212153953.4582-10-andrew.cooper3@citrix.com>
References: <20210212153953.4582-1-andrew.cooper3@citrix.com>
	<20210212153953.4582-10-andrew.cooper3@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Andrew Cooper writes ("[PATCH 09/10] tools/libs: Add rule to generate headers.lst"):
> abi-dumper needs a list of the public header files for shared objects, and
> only accepts this in the form of a file.

Release-Acked-by: Ian Jackson <iwj@xenproject.org>

because it's not run by default, but...

> +headers.lst: FORCE
> +	@{ $(foreach h,$(LIBHEADERS),echo $(h);) } > $@.tmp

Missing set -e.  If the disk fills up temporarily you might get a
partial file here...

> +	@$(call move-if-changed,$@.tmp,$@)
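
To illustrate the failure mode (illustrative file names; `sh -c`
standing in for the recipe's brace group):

```shell
# Without `set -e` in the recipe's shell fragment, a failure mid-way
# (e.g. ENOSPC) still leaves a truncated headers.lst.tmp for
# move-if-changed to promote.  With it, the shell stops at the first
# failing command and exits nonzero, so Make aborts before the move.
rm -f headers.lst.tmp
status=0
sh -c 'set -e; echo libxl.h; false; echo libxl_uuid.h' > headers.lst.tmp || status=$?
lines=$(wc -l < headers.lst.tmp)
echo "exit=$status, lines written=$lines"
```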

Ian.


From xen-devel-bounces@lists.xenproject.org Tue Feb 16 16:15:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 16:15:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85922.160828 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC30H-0001hH-Me; Tue, 16 Feb 2021 16:15:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85922.160828; Tue, 16 Feb 2021 16:15:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC30H-0001hA-J1; Tue, 16 Feb 2021 16:15:09 +0000
Received: by outflank-mailman (input) for mailman id 85922;
 Tue, 16 Feb 2021 16:15:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC30G-0001h1-Bm
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 16:15:08 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC30G-0004GR-B3
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 16:15:08 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC30G-0004b6-9M
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 16:15:08 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lC30C-0000lk-VC; Tue, 16 Feb 2021 16:15:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=Un0skDIbFRJ6CDyTmDUbqHNXGXX9kVPGz8wVqwHbRWE=; b=0w8j4ii7/wci/qlUX9WeDiiz1S
	YX2+NSr3dXovseXWQdPnYxWwPerPCXCz9IDDFGbx/J+d47Y63o0I/Otfr0hjlqnHjRm42UeUbe6D1
	IAaAWVCeO2BuHKWnv4zfNBr7TBBWO+EpUfA4xHEZ+xSq5KALcoLNYHYoK5IJWeEZ8gHA=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24619.61320.695920.365775@mariner.uk.xensource.com>
Date: Tue, 16 Feb 2021 16:15:04 +0000
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
    Wei Liu <wl@xen.org>,
    Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH 10/10] tools/libs: Write out an ABI analysis when abi-dumper is available
In-Reply-To: <20210212153953.4582-11-andrew.cooper3@citrix.com>
References: <20210212153953.4582-1-andrew.cooper3@citrix.com>
	<20210212153953.4582-11-andrew.cooper3@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Andrew Cooper writes ("[PATCH 10/10] tools/libs: Write out an ABI analysis when abi-dumper is available"):
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Release-Acked-by: Ian Jackson <iwj@xenproject.org>



From xen-devel-bounces@lists.xenproject.org Tue Feb 16 16:25:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 16:25:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85931.160840 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC39o-0002gh-LM; Tue, 16 Feb 2021 16:25:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85931.160840; Tue, 16 Feb 2021 16:25:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC39o-0002ga-IE; Tue, 16 Feb 2021 16:25:00 +0000
Received: by outflank-mailman (input) for mailman id 85931;
 Tue, 16 Feb 2021 16:24:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lC39n-0002gS-0b; Tue, 16 Feb 2021 16:24:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lC39m-0004QD-OZ; Tue, 16 Feb 2021 16:24:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lC39m-0006AD-HU; Tue, 16 Feb 2021 16:24:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lC39m-0000PU-H3; Tue, 16 Feb 2021 16:24:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=heBOpk8iLupTVtQvY9D63MVV7VxkFMcpg/SbdKa77bs=; b=NFVhij9/IJ2jcbl3hHCBXPAFjM
	x/iTez/YNs8RLJM8IoLy974EIWo2X442VKHmINrVqf4UXIeEZsWoXYUZszOm5/uB4+wA+8D3sTxmq
	ViLfFHQEPq3DHad+CYwUaRoqr6F9U+6+b+STfxw0IiUEEZDt1rU2qQLDYGRqwldq2500=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159396-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 159396: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:guest-saverestore.2:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-vhd:leak-check/check:fail:heisenbug
    xen-unstable:test-amd64-amd64-examine:memdisk-try-append:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=04085ec1ac05a362812e9b0c6b5a8713d7dc88ad
X-Osstest-Versions-That:
    xen=04085ec1ac05a362812e9b0c6b5a8713d7dc88ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 16 Feb 2021 16:24:58 +0000

flight 159396 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159396/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail in 159362 pass in 159396
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail in 159362 pass in 159396
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 16 guest-saverestore.2 fail pass in 159362
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 159362
 test-armhf-armhf-xl-vhd      20 leak-check/check           fail pass in 159362

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-examine      4 memdisk-try-append           fail  like 159335
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 159362
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 159362
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 159362
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 159362
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 159362
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 159362
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 159362
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 159362
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 159362
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 159362
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 159362
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  04085ec1ac05a362812e9b0c6b5a8713d7dc88ad
baseline version:
 xen                  04085ec1ac05a362812e9b0c6b5a8713d7dc88ad

Last test of basis   159396  2021-02-16 01:52:34 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Tue Feb 16 16:25:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 16:25:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85935.160855 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC3AS-0002nK-3m; Tue, 16 Feb 2021 16:25:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85935.160855; Tue, 16 Feb 2021 16:25:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC3AS-0002nD-0t; Tue, 16 Feb 2021 16:25:40 +0000
Received: by outflank-mailman (input) for mailman id 85935;
 Tue, 16 Feb 2021 16:25:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC3AQ-0002n3-C8
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 16:25:38 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC3AQ-0004Qc-AI
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 16:25:38 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC3AQ-0005W6-91
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 16:25:38 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lC3AM-0000oC-VA; Tue, 16 Feb 2021 16:25:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=qDVkbLnzCb0Q3HYzyaRnu68N0QtY+p24YOetEZdk+Fk=; b=Y/DjeG74m7wWDtTzgLNlPMYUzC
	lPHSgcK1MsqYYb/WZcY6FlWePdl6R39lotG2JHvM0iVL1/OtEWqGDxRsSBfBJJ1x5KvubbDlynOih
	37+yo+9VYn+Oo3wvPMfV90xKRzfh9UKRgOm+9REpB/ThlHsgk0W/+7QJbGc1rgnVLPLo=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24619.61950.742612.203689@mariner.uk.xensource.com>
Date: Tue, 16 Feb 2021 16:25:34 +0000
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
    Wei Liu <wl@xen.org>,
    Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH 01/10] tools/xl: Fix exit code for `xl vkbattach`
In-Reply-To: <20210212153953.4582-2-andrew.cooper3@citrix.com>
References: <20210212153953.4582-1-andrew.cooper3@citrix.com>
	<20210212153953.4582-2-andrew.cooper3@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Andrew Cooper writes ("[PATCH 01/10] tools/xl: Fix exit code for `xl vkbattach`"):
> Various versions of gcc, when compiling with -Og, complain:
> 
>   xl_vkb.c: In function 'main_vkbattach':
>   xl_vkb.c:79:12: error: 'rc' may be used uninitialized in this function [-Werror=maybe-uninitialized]
>      79 |     return rc;
>         |            ^~
> 
> The dryrun_only path really does leave rc uninitialised.  Introduce a done
> label for success paths to use.
> 
> Fixes: a15166af7c3 ("xl: add vkb config parser and CLI")
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Ian Jackson <iwj@xenproject.org>


From xen-devel-bounces@lists.xenproject.org Tue Feb 16 16:26:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 16:26:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85937.160868 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC3BH-0002w2-FX; Tue, 16 Feb 2021 16:26:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85937.160868; Tue, 16 Feb 2021 16:26:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC3BH-0002vu-Bm; Tue, 16 Feb 2021 16:26:31 +0000
Received: by outflank-mailman (input) for mailman id 85937;
 Tue, 16 Feb 2021 16:26:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC3BG-0002vo-CO
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 16:26:30 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC3BG-0004RS-BW
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 16:26:30 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC3BG-0005aR-Ak
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 16:26:30 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lC3BC-0000oW-Sv; Tue, 16 Feb 2021 16:26:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=xujiUnVe7uienQ0IKm30itwOU1hhpKUE444zbfdEHH8=; b=3d8U3E7xJakLqYq42wJXXrwPbn
	OzU+sglZ/SeEZDYl65vfgAn8w4Et2XiqmU+muaZmyR/PZtDAcTpnvszT10Vtpv1h5Xb+g3lUPwokG
	y7iGdrFT7I2Lo3pYmJ2KWHnZD0mugbidTtIY2m94VbdRxO1KU0tfjqGbRi/f6xpClEXQ=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Message-ID: <24619.62002.658637.648192@mariner.uk.xensource.com>
Date: Tue, 16 Feb 2021 16:26:26 +0000
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
    Wei Liu <wl@xen.org>,
    Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH 05/10] tools/libxl: Fix uninitialised variable in libxl__write_stub_dmargs()
In-Reply-To: <20210212153953.4582-6-andrew.cooper3@citrix.com>
References: <20210212153953.4582-1-andrew.cooper3@citrix.com>
	<20210212153953.4582-6-andrew.cooper3@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Andrew Cooper writes ("[PATCH 05/10] tools/libxl: Fix uninitialised variable in libxl__write_stub_dmargs()"):
> Various versions of gcc, when compiling with -Og, complain:
> 
>   libxl_dm.c: In function 'libxl__write_stub_dmargs':
>   libxl_dm.c:2166:16: error: 'dmargs' may be used uninitialized in this function [-Werror=maybe-uninitialized]
>                rc = libxl__xs_write_checked(gc, t, path, dmargs);
>                ~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Reviewed-by: Ian Jackson <iwj@xenproject.org>


From xen-devel-bounces@lists.xenproject.org Tue Feb 16 16:26:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 16:26:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85941.160879 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC3Bf-00031u-Ns; Tue, 16 Feb 2021 16:26:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85941.160879; Tue, 16 Feb 2021 16:26:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC3Bf-00031n-L3; Tue, 16 Feb 2021 16:26:55 +0000
Received: by outflank-mailman (input) for mailman id 85941;
 Tue, 16 Feb 2021 16:26:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC3Be-00031c-IC
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 16:26:54 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC3Be-0004Th-HO
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 16:26:54 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC3Be-0005eo-GV
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 16:26:54 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lC3BW-0000ok-Q7; Tue, 16 Feb 2021 16:26:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=tO9ieUmc95MRXolLDZFGCe/H/aGMflPRypjajX1GP8g=; b=eLKfVf5MAwVIs6aztlyul5H5CR
	LpNT5JT582itSs4DvkvGAmzJ39oaO2xlygXv0i90qyD6kLRWtICOOnDfltw9fQqPIo14uSyfrmnoV
	kS3fXm/BX90z+TiKz6BxLg9qAi2QEnYuSZYr0kK3JyalGllpZidFG7B5PDb6RGWyDSSQ=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24619.62022.547747.36963@mariner.uk.xensource.com>
Date: Tue, 16 Feb 2021 16:26:46 +0000
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
    Wei Liu <wl@xen.org>,
    Juergen Gross <jgross@suse.com>,
    Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 07/10] tools: Use -Og for debug builds when available
In-Reply-To: <04c93a14-ee95-e4a6-33b9-f80fcd03a010@suse.com>
References: <20210212153953.4582-1-andrew.cooper3@citrix.com>
	<20210212153953.4582-8-andrew.cooper3@citrix.com>
	<04c93a14-ee95-e4a6-33b9-f80fcd03a010@suse.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Jan Beulich writes ("Re: [PATCH 07/10] tools: Use -Og for debug builds when available"):
> On 12.02.2021 16:39, Andrew Cooper wrote:
> > The recommended optimisation level for debugging is -Og, and is what tools
> > such as gdb prefer.  In practice, it equates to -O1 with a few specific
> > optimisations turned off.
> > 
> > abi-dumper in particular wants the libraries it inspects in this form.
> > 
> > Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> 
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Ian Jackson <iwj@xenproject.org>


From xen-devel-bounces@lists.xenproject.org Tue Feb 16 16:27:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 16:27:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85942.160892 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC3C0-00037h-1x; Tue, 16 Feb 2021 16:27:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85942.160892; Tue, 16 Feb 2021 16:27:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC3Bz-00037Z-Tw; Tue, 16 Feb 2021 16:27:15 +0000
Received: by outflank-mailman (input) for mailman id 85942;
 Tue, 16 Feb 2021 16:27:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC3By-00037F-OO
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 16:27:14 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC3By-0004US-NY
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 16:27:14 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC3By-0005hn-Mh
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 16:27:14 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lC3Bv-0000p7-Cb; Tue, 16 Feb 2021 16:27:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=RlSG4wixyrZiUAKT77HPortKpgXFhVgri9OD3UR4GQk=; b=ts4PJI3MlfiAVx0YV76qGLiNV3
	3b7c5GX2w8/Ud/H2+1ACet2PuE4tfKs+L1sRF4mWw+zkt4wTC8N7K8KxFNeLZw+0PDG5NezKUFSKi
	jdWRBgbB/+1apH7rlL7o7Y8an47ynN+Nr8QGusmDSPFGVgMPKJWNQi08kTVqpYsl8e2A=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24619.62047.178128.680533@mariner.uk.xensource.com>
Date: Tue, 16 Feb 2021 16:27:11 +0000
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
    Wei Liu <wl@xen.org>,
    Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH 08/10] tools: Check for abi-dumper in ./configure
In-Reply-To: <20210212153953.4582-9-andrew.cooper3@citrix.com>
References: <20210212153953.4582-1-andrew.cooper3@citrix.com>
	<20210212153953.4582-9-andrew.cooper3@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Andrew Cooper writes ("[PATCH 08/10] tools: Check for abi-dumper in ./configure"):
> This will be optional.  No functional change.

Reviewed-by: Ian Jackson <iwj@xenproject.org>


From xen-devel-bounces@lists.xenproject.org Tue Feb 16 16:27:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 16:27:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85945.160904 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC3CQ-0003EY-A3; Tue, 16 Feb 2021 16:27:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85945.160904; Tue, 16 Feb 2021 16:27:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC3CQ-0003ER-6v; Tue, 16 Feb 2021 16:27:42 +0000
Received: by outflank-mailman (input) for mailman id 85945;
 Tue, 16 Feb 2021 16:27:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lC3CO-0003EF-V9
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 16:27:41 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lC3CN-0004Ui-Vb; Tue, 16 Feb 2021 16:27:39 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lC3CN-0005jD-Ou; Tue, 16 Feb 2021 16:27:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=DCI+BtU6LfMqY9BMnR7Jf3s3R8a2DrxZCuIrxTkqtAI=; b=5Ra9B4eh8aKd30TYqZKTDi63bA
	GApQCrBP9BpvB/Thtql0JSL3DCste0Es8cLn0sdpfVB0tCjmbH33z/bVbAO+KjkOO6Hg0zD/w6TvW
	hZT3EL7SaCL5rks9QBMg1OuV9p7I/3Hk6qm636m1ZA8pGH4JPFp06GEhAlo2W4YhwiB0=;
Subject: Re: [PATCH v1.1 03/10] tools/libxg: Drop stale p2m logic from ARM's
 meminit()
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20210212153953.4582-4-andrew.cooper3@citrix.com>
 <20210212200139.26911-1-andrew.cooper3@citrix.com>
From: Julien Grall <julien@xen.org>
Message-ID: <0f973282-770e-bab8-2918-541f1740eace@xen.org>
Date: Tue, 16 Feb 2021 16:27:37 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210212200139.26911-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Andrew,

On 12/02/2021 20:01, Andrew Cooper wrote:
> Various versions of gcc, when compiling with -Og, complain:
> 
>    xg_dom_arm.c: In function 'meminit':
>    xg_dom_arm.c:420:19: error: 'p2m_size' may be used uninitialized in this function [-Werror=maybe-uninitialized]
>      420 |     dom->p2m_size = p2m_size;
>          |     ~~~~~~~~~~~~~~^~~~~~~~~~
> 
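A self-contained illustration (hypothetical code, not the actual xg_dom_arm.c) of the pattern that triggers this warning: a variable assigned only inside a loop that gcc cannot prove runs at least once.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sketch of the -Wmaybe-uninitialized pattern: `partial` is
 * written only when the loop body executes, so at -Og gcc warns that the
 * final read may use an uninitialized value when nr == 0. */
static long last_partial_sum(const long *banks, size_t nr)
{
    long total = 0;
    long partial;

    for ( size_t i = 0; i < nr; i++ )
    {
        total += banks[i];
        partial = total;    /* only assignment to `partial` */
    }

    return partial;         /* flagged: maybe-uninitialized if nr == 0 */
}
```

The patch resolves the equivalent situation by deleting the dead variable rather than initialising it, since the value was no longer consumed anywhere.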
> This is actually entirely stale code since ee21f10d70^..97e34ad22d, which
> removed the 1:1 identity p2m for translated domains.
> 
> Drop the write of d->p2m_size, and the p2m_size local variable.  Reposition
> the p2m_size field in struct xc_dom_image and correct some stale
> documentation.
> 
> This change really ought to have been part of the original cleanup series.
> 
> No actual change to how ARM domains are constructed.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

> ---
> CC: Ian Jackson <iwj@xenproject.org>
> CC: Wei Liu <wl@xen.org>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Julien Grall <julien@xen.org>
> 
> v2:
>   * Delete stale p2m_size infrastructure.
> ---
>   tools/include/xenguest.h      | 5 ++---
>   tools/libs/guest/xg_dom_arm.c | 5 -----
>   2 files changed, 2 insertions(+), 8 deletions(-)
> 
> diff --git a/tools/include/xenguest.h b/tools/include/xenguest.h
> index 775cf34c04..217022b6e7 100644
> --- a/tools/include/xenguest.h
> +++ b/tools/include/xenguest.h
> @@ -145,6 +145,7 @@ struct xc_dom_image {
>        * eventually copied into guest context.
>        */
>       xen_pfn_t *pv_p2m;
> +    xen_pfn_t p2m_size;         /* number of pfns covered by pv_p2m */
>   
>       /* physical memory
>        *
> @@ -154,12 +155,10 @@ struct xc_dom_image {
>        *
>        * An ARM guest has GUEST_RAM_BANKS regions of RAM, with
>        * rambank_size[i] pages in each. The lowest RAM address
> -     * (corresponding to the base of the p2m arrays above) is stored
> -     * in rambase_pfn.
> +     * is stored in rambase_pfn.
>        */
>       xen_pfn_t rambase_pfn;
>       xen_pfn_t total_pages;
> -    xen_pfn_t p2m_size;         /* number of pfns covered by p2m */
>       struct xc_dom_phys *phys_pages;
>   #if defined (__arm__) || defined(__aarch64__)
>       xen_pfn_t rambank_size[GUEST_RAM_BANKS];
> diff --git a/tools/libs/guest/xg_dom_arm.c b/tools/libs/guest/xg_dom_arm.c
> index 94948d2b20..b4c24f15fb 100644
> --- a/tools/libs/guest/xg_dom_arm.c
> +++ b/tools/libs/guest/xg_dom_arm.c
> @@ -373,7 +373,6 @@ static int meminit(struct xc_dom_image *dom)
>       const uint64_t modsize = dtb_size + ramdisk_size;
>       const uint64_t ram128mb = bankbase[0] + (128<<20);
>   
> -    xen_pfn_t p2m_size;
>       uint64_t bank0end;
>   
>       assert(dom->rambase_pfn << XC_PAGE_SHIFT == bankbase[0]);
> @@ -409,16 +408,12 @@ static int meminit(struct xc_dom_image *dom)
>   
>           ramsize -= banksize;
>   
> -        p2m_size = ( bankbase[i] + banksize - bankbase[0] ) >> XC_PAGE_SHIFT;
> -
>           dom->rambank_size[i] = banksize >> XC_PAGE_SHIFT;
>       }
>   
>       assert(dom->rambank_size[0] != 0);
>       assert(ramsize == 0); /* Too much RAM is rejected above */
>   
> -    dom->p2m_size = p2m_size;
> -
>       /* setup initial p2m and allocate guest memory */
>       for ( i = 0; i < GUEST_RAM_BANKS && dom->rambank_size[i]; i++ )
>       {
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Feb 16 16:31:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 16:31:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85954.160916 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC3FV-0004At-P6; Tue, 16 Feb 2021 16:30:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85954.160916; Tue, 16 Feb 2021 16:30:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC3FV-0004Am-Lr; Tue, 16 Feb 2021 16:30:53 +0000
Received: by outflank-mailman (input) for mailman id 85954;
 Tue, 16 Feb 2021 16:30:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC3FT-0004Ah-L1
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 16:30:51 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC3FT-0004Xx-Gx
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 16:30:51 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC3FT-0005tN-EK
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 16:30:51 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lC3FQ-0000pm-3u; Tue, 16 Feb 2021 16:30:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=OhJRJlAMpzAAh07WuVPnM89I4Bmu//KXS2FG5XMcqVk=; b=lT4/iBXjo+IsctsMKaRK/FCSHM
	Ru1xR5NMxNKhUVU7J6qVzzAZ6X0v+uKbZCGH28izGJ/CAKbQ5LPY8CGXl8OXEAMjElvIXU8V7tk2T
	zRyKbqjPNyj645CXMKwNVgRP+RnY6Alvas91LDvHkx8477re3/ZhGu9iMfJ6w3c0P/NA=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24619.62263.884017.217332@mariner.uk.xensource.com>
Date: Tue, 16 Feb 2021 16:30:47 +0000
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
    Wei Liu <wl@xen.org>,
    Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH 10/10] tools/libs: Write out an ABI analysis when abi-dumper is available
In-Reply-To: <20210212153953.4582-11-andrew.cooper3@citrix.com>
References: <20210212153953.4582-1-andrew.cooper3@citrix.com>
	<20210212153953.4582-11-andrew.cooper3@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Andrew Cooper writes ("[PATCH 10/10] tools/libs: Write out an ABI analysis when abi-dumper is available"):
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
...
> +# If abi-dumper is available, write out the ABI analysis
> +ifneq ($(ABI_DUMPER),)
> +libs: $(PKG_ABI)
> +$(PKG_ABI): lib$(LIB_FILE_NAME).so.$(MAJOR).$(MINOR) headers.lst
> +	abi-dumper $< -o $@ -public-headers headers.lst -lver $(MAJOR).$(MINOR)
> +endif

Kind of annoying that we don't have a variable for
   lib$(LIB_FILE_NAME).so.$(MAJOR).$(MINOR)
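One way that could look (a hypothetical sketch against libs.mk; LIB_SONAME_FULL is an assumed variable name, not an existing one):

```make
# Hypothetical sketch: name the fully-versioned shared object once, then
# reuse it in the abi-dumper rule (and anywhere else it appears, e.g. the
# clean target).
LIB_SONAME_FULL := lib$(LIB_FILE_NAME).so.$(MAJOR).$(MINOR)

ifneq ($(ABI_DUMPER),)
libs: $(PKG_ABI)
$(PKG_ABI): $(LIB_SONAME_FULL) headers.lst
	abi-dumper $< -o $@ -public-headers headers.lst -lver $(MAJOR).$(MINOR)
endif
```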

Reviewed-by: Ian Jackson <iwj@xenproject.org>


From xen-devel-bounces@lists.xenproject.org Tue Feb 16 16:48:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 16:48:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85962.160928 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC3Wm-0005Gq-AA; Tue, 16 Feb 2021 16:48:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85962.160928; Tue, 16 Feb 2021 16:48:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC3Wm-0005Gj-5L; Tue, 16 Feb 2021 16:48:44 +0000
Received: by outflank-mailman (input) for mailman id 85962;
 Tue, 16 Feb 2021 16:48:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IlKa=HS=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lC3Wk-0005Ge-Ti
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 16:48:43 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8353bae6-55be-4058-8f0c-8d6eb701fd7f;
 Tue, 16 Feb 2021 16:48:41 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8353bae6-55be-4058-8f0c-8d6eb701fd7f
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613494121;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version;
  bh=HWCoZ/WJgui/TOZPaunhOxHAHwsS6IwogKSNyRuE1MM=;
  b=U9QXR0D95B7FHPYJEgVXYLIknMY5eBvw2qf6HPU8fgNaKnx2s3lzyoKx
   91vc9WSmpenWQlWaFgZp5QIEUhm2bmtJ7RK8WmKNnwBSD85Kapv67VdGv
   WRfXnLld7AWFNpckekM30RADBztxBjpnuF2rpv7Hea0ONxXG5uTtHsEad
   E=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: Vu5xwLrJvSApH8hUwNswQFMM3cM8P1xVSKSRrVVgu6uFp5kKddaamlvAD0Ja9xdiCtuuTwjo+S
 /GqAkMC6L9qSbPCsTi+/Uy+rtWFrHj4I7rWZOGRiauEZ7x1NmxlVnTJIK6OAOKhZ6Xog/a9LtI
 zreVNWJsefFuDqqvQIDiXrolj6Hx2SANzXvLqeeZtSS+pEuDpgp5/1LcwnCpqy4Y/6OMIGR+AK
 YpIpxNFbIsmWZU+D5hr4hOScO8O43TS+Y/HfEdITYAMMiA1XkSaLpSPSv244PL3G0j026WCnk2
 3rY=
X-SBRS: 5.1
X-MesageID: 37281190
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,184,1610427600"; 
   d="scan'208";a="37281190"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Juergen Gross <jgross@suse.com>
Subject: [PATCH v2 09/10] tools/libs: Add rule to generate headers.lst
Date: Tue, 16 Feb 2021 16:48:16 +0000
Message-ID: <20210216164816.27948-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210212153953.4582-10-andrew.cooper3@citrix.com>
References: <20210212153953.4582-10-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain

abi-dumper needs a list of the public header files for shared objects, and
only accepts this in the form of a file.
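A rough shell sketch of what the added rule does, assuming move-if-changed compares the temp file with the target and renames only on a difference, and using a made-up two-header list:

```shell
#!/bin/sh
# Hypothetical sketch of the headers.lst rule: emit one header path per
# line under `set -e` (so a transient failure aborts rather than leaving
# truncated content), then install the file only if it changed.
set -e

LIBHEADERS="xenguest.h xenctrl.h"   # assumed example header list
out=headers.lst

for h in $LIBHEADERS; do
    echo "$h"
done > "$out.tmp"

# Mimic move-if-changed: replace the target only when content differs,
# so make does not see a spurious timestamp update.
if ! cmp -s "$out.tmp" "$out" 2>/dev/null; then
    mv "$out.tmp" "$out"
else
    rm -f "$out.tmp"
fi
```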

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Release-Acked-by: Ian Jackson <iwj@xenproject.org>
---
CC: Ian Jackson <iwj@xenproject.org>
CC: Wei Liu <wl@xen.org>
CC: Juergen Gross <jgross@suse.com>

v2:
 * Use set -e to avoid truncated content on transient errors
---
 tools/libs/.gitignore | 1 +
 tools/libs/libs.mk    | 9 ++++++++-
 2 files changed, 9 insertions(+), 1 deletion(-)
 create mode 100644 tools/libs/.gitignore

diff --git a/tools/libs/.gitignore b/tools/libs/.gitignore
new file mode 100644
index 0000000000..4a13126144
--- /dev/null
+++ b/tools/libs/.gitignore
@@ -0,0 +1 @@
+*/headers.lst
diff --git a/tools/libs/libs.mk b/tools/libs/libs.mk
index 0b3381755a..7970627ac8 100644
--- a/tools/libs/libs.mk
+++ b/tools/libs/libs.mk
@@ -76,6 +76,10 @@ endif
 
 headers.chk: $(AUTOINCS)
 
+headers.lst: FORCE
+	@{ set -e; $(foreach h,$(LIBHEADERS),echo $(h);) } > $@.tmp
+	@$(call move-if-changed,$@.tmp,$@)
+
 libxen$(LIBNAME).map:
 	echo 'VERS_$(MAJOR).$(MINOR) { global: *; };' >$@
 
@@ -118,9 +122,12 @@ TAGS:
 clean:
 	rm -rf *.rpm $(LIB) *~ $(DEPS_RM) $(LIB_OBJS) $(PIC_OBJS)
 	rm -f lib$(LIB_FILE_NAME).so.$(MAJOR).$(MINOR) lib$(LIB_FILE_NAME).so.$(MAJOR)
-	rm -f headers.chk
+	rm -f headers.chk headers.lst
 	rm -f $(PKG_CONFIG)
 	rm -f _paths.h
 
 .PHONY: distclean
 distclean: clean
+
+.PHONY: FORCE
+FORCE:
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Feb 16 17:23:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 17:23:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85969.160945 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC44c-0000Qx-4G; Tue, 16 Feb 2021 17:23:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85969.160945; Tue, 16 Feb 2021 17:23:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC44c-0000Qq-1D; Tue, 16 Feb 2021 17:23:42 +0000
Received: by outflank-mailman (input) for mailman id 85969;
 Tue, 16 Feb 2021 17:23:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lC44a-0000Qi-HO; Tue, 16 Feb 2021 17:23:40 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lC44a-0005Pn-AJ; Tue, 16 Feb 2021 17:23:40 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lC44a-0008LZ-2o; Tue, 16 Feb 2021 17:23:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lC44a-0004Yi-2M; Tue, 16 Feb 2021 17:23:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=6yPbcmSxx6RNOJQG9Zxzruwzq2CVgybIopIiJX09+3Q=; b=x8xmZSqa0ZzSTTDyKvkQjcORBc
	2/m7Qo8DoGZ2CeYSh+meYnqx0FEXYZ8WRutKE73Nt4BhgSss5i60VX7A+is/BQC9H6EJKCNVxgrzX
	NgmCFY/kunQpa5Gr3ni8TVaq9D3YnxfKTmflWvmYmshObD70O1kq7nTQTIJIGbxjLFso=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159416-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 159416: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=3b1cc15f1931ba56d0ee256fe9bfe65509733b27
X-Osstest-Versions-That:
    xen=04085ec1ac05a362812e9b0c6b5a8713d7dc88ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 16 Feb 2021 17:23:40 +0000

flight 159416 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159416/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  3b1cc15f1931ba56d0ee256fe9bfe65509733b27
baseline version:
 xen                  04085ec1ac05a362812e9b0c6b5a8713d7dc88ad

Last test of basis   159308  2021-02-12 20:01:25 Z    3 days
Testing same since   159416  2021-02-16 15:01:29 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   04085ec1ac..3b1cc15f19  3b1cc15f1931ba56d0ee256fe9bfe65509733b27 -> smoke


From xen-devel-bounces@lists.xenproject.org Tue Feb 16 17:33:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 17:33:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85974.160961 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC4DQ-0001QA-4k; Tue, 16 Feb 2021 17:32:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85974.160961; Tue, 16 Feb 2021 17:32:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC4DQ-0001Q3-1b; Tue, 16 Feb 2021 17:32:48 +0000
Received: by outflank-mailman (input) for mailman id 85974;
 Tue, 16 Feb 2021 17:32:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC4DO-0001Py-It
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 17:32:46 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC4DO-0005Z6-H1
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 17:32:46 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC4DO-0007Ni-G0
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 17:32:46 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lC4DL-0000y9-9c; Tue, 16 Feb 2021 17:32:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=af7Cjns37ALhTzRlM7EI0Izs1Esi8Vz92QP9PJocfdg=; b=dPkyvy0a6D3j/JoBcHQ55QX+Tl
	31oGZD63ncUQtQ2JB+06WQg7/a2XrKX5avmkC2ufwhEP1zYE17HoYAAljRZDRg6/3kMrKrqZt0OAX
	1dBcxrU7EFNT1AEi4heV1XvlcTln8W7VLQ/RuvhJpxA/RVjvM9imjYWGxPy/GbEdSt+w=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24620.443.68948.235522@mariner.uk.xensource.com>
Date: Tue, 16 Feb 2021 17:32:43 +0000
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
    Wei Liu <wl@xen.org>,
    Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH v2 09/10] tools/libs: Add rule to generate headers.lst
In-Reply-To: <20210216164816.27948-1-andrew.cooper3@citrix.com>
References: <20210212153953.4582-10-andrew.cooper3@citrix.com>
	<20210216164816.27948-1-andrew.cooper3@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Andrew Cooper writes ("[PATCH v2 09/10] tools/libs: Add rule to generate headers.lst"):
> abi-dumper needs a list of the public header files for shared objects, and
> only accepts this in the form of a file.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Release-Acked-by: Ian Jackson <iwj@xenproject.org>

Reviewed-by: Ian Jackson <iwj@xenproject.org>


From xen-devel-bounces@lists.xenproject.org Tue Feb 16 17:42:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 17:42:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85979.160973 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC4Mo-0002OX-3H; Tue, 16 Feb 2021 17:42:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85979.160973; Tue, 16 Feb 2021 17:42:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC4Mo-0002OQ-07; Tue, 16 Feb 2021 17:42:30 +0000
Received: by outflank-mailman (input) for mailman id 85979;
 Tue, 16 Feb 2021 17:42:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=hb7A=HS=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1lC4Mn-0002OL-Cx
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 17:42:29 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b4885f4a-432a-4b36-a1ce-22d518a91853;
 Tue, 16 Feb 2021 17:42:28 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.16.1/8.15.2) with ESMTPS id 11GHgHmT012011
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Tue, 16 Feb 2021 12:42:22 -0500 (EST) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.16.1/8.15.2/Submit) id 11GHgG8v012010;
 Tue, 16 Feb 2021 09:42:16 -0800 (PST) (envelope-from ehem)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b4885f4a-432a-4b36-a1ce-22d518a91853
Message-Id: <cover.1613496229.git.ehem+xen@m5p.com>
From: Elliott Mitchell <ehem+xen@m5p.com>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
Date: Tue, 16 Feb 2021 09:23:49 -0800
Subject: [RESEND PATCH 0/2] Adding const to many libxl/xl functions
X-Spam-Status: No, score=0.4 required=10.0 tests=KHOP_HELO_FCRDNS autolearn=no
	autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

The rest of the series seems hopeless for stable, so right now I'm merely
resending the two simpler patches.  While working on the full series I came
across a bunch of xl, and later libxl, functions which could have their
arguments declared const.

These are the input arguments of *_is_empty() and *_is_default(), which
are merely read from.  There are also *_gen_json() functions where the
yajl handle needs to be writeable, but the input data structure isn't
modified.

The second patch merely spreads these const qualifiers further outwards.
Once libxl marks its functions' arguments const, portions of `xl` can
similarly have their functions' arguments marked const.

NOTE: Order is important on these two!

Elliott Mitchell (2):
  tools/libxl: Mark pointer args of many functions constant
  tools/xl: Mark libxl_domain_config * arg of printf_info_*() const

 tools/include/libxl_json.h        | 22 ++++++++++++----------
 tools/libs/light/gentypes.py      |  8 ++++----
 tools/libs/light/libxl_cpuid.c    |  2 +-
 tools/libs/light/libxl_internal.c |  4 ++--
 tools/libs/light/libxl_internal.h | 18 +++++++++---------
 tools/libs/light/libxl_json.c     | 18 ++++++++++--------
 tools/libs/light/libxl_nocpuid.c  |  4 ++--
 tools/xl/xl.h                     |  2 +-
 tools/xl/xl_info.c                |  2 +-
 tools/xl/xl_sxp.c                 |  6 +++---
 10 files changed, 45 insertions(+), 41 deletions(-)

-- 


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445





From xen-devel-bounces@lists.xenproject.org Tue Feb 16 17:43:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 17:43:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85983.160985 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC4NZ-0002UM-Cf; Tue, 16 Feb 2021 17:43:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85983.160985; Tue, 16 Feb 2021 17:43:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC4NZ-0002UF-9N; Tue, 16 Feb 2021 17:43:17 +0000
Received: by outflank-mailman (input) for mailman id 85983;
 Tue, 16 Feb 2021 17:43:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=hb7A=HS=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1lC4NX-0002UA-Or
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 17:43:15 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9900c24b-01a1-4917-802e-17d8fe9f6f91;
 Tue, 16 Feb 2021 17:43:14 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.16.1/8.15.2) with ESMTPS id 11GHh4gP012022
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Tue, 16 Feb 2021 12:43:10 -0500 (EST) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.16.1/8.15.2/Submit) id 11GHh4RL012021;
 Tue, 16 Feb 2021 09:43:04 -0800 (PST) (envelope-from ehem)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9900c24b-01a1-4917-802e-17d8fe9f6f91
Message-Id: <2e18b114fc91028459b2d6cbfd0f1204501dfde9.1613496229.git.ehem+xen@m5p.com>
In-Reply-To: <cover.1613496229.git.ehem+xen@m5p.com>
References: <cover.1613496229.git.ehem+xen@m5p.com>
From: Elliott Mitchell <ehem+xen@m5p.com>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
Date: Fri, 18 Dec 2020 13:37:44 -0800
Subject: [RESEND PATCH 1/2] tools/libxl: Mark pointer args of many functions constant
X-Spam-Status: No, score=2.5 required=10.0 tests=DATE_IN_PAST_96_XX,
	KHOP_HELO_FCRDNS autolearn=no autolearn_force=no version=3.4.4
X-Spam-Level: **
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

Anything named *_is_empty(), *_is_default(), or *_gen_json() examines
the pointed-to object without modifying it.  Marking these pointer
arguments const potentially results in higher-performance output.  It
also allows const to propagate further, enabling more compile-time
checking and better security.
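The effect of the constification can be sketched with a simplified,
hypothetical stand-in for libxl_mac (demo_mac and demo_mac_is_default are
illustrative names, not libxl's actual types): taking the argument as a
pointer-to-const documents that the predicate only reads the MAC, and lets
the compiler reject accidental writes through it.

```c
/* Hypothetical 6-byte MAC type, standing in for libxl_mac. */
typedef unsigned char demo_mac[6];

/* const here promises the caller the MAC is only examined, never
 * modified; writing through *mac would now be a compile error. */
static int demo_mac_is_default(const demo_mac *mac)
{
    return !(*mac)[0] && !(*mac)[1] && !(*mac)[2] &&
           !(*mac)[3] && !(*mac)[4] && !(*mac)[5];
}
```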

Signed-off-by: Elliott Mitchell <ehem+xen@m5p.com>
---
 tools/include/libxl_json.h        | 22 ++++++++++++----------
 tools/libs/light/gentypes.py      |  8 ++++----
 tools/libs/light/libxl_cpuid.c    |  2 +-
 tools/libs/light/libxl_internal.c |  4 ++--
 tools/libs/light/libxl_internal.h | 18 +++++++++---------
 tools/libs/light/libxl_json.c     | 18 ++++++++++--------
 tools/libs/light/libxl_nocpuid.c  |  4 ++--
 7 files changed, 40 insertions(+), 36 deletions(-)

diff --git a/tools/include/libxl_json.h b/tools/include/libxl_json.h
index 260783bfde..63f0e58fe1 100644
--- a/tools/include/libxl_json.h
+++ b/tools/include/libxl_json.h
@@ -23,17 +23,19 @@
 #endif
 
 yajl_gen_status libxl__uint64_gen_json(yajl_gen hand, uint64_t val);
-yajl_gen_status libxl_defbool_gen_json(yajl_gen hand, libxl_defbool *p);
-yajl_gen_status libxl_uuid_gen_json(yajl_gen hand, libxl_uuid *p);
-yajl_gen_status libxl_mac_gen_json(yajl_gen hand, libxl_mac *p);
-yajl_gen_status libxl_bitmap_gen_json(yajl_gen hand, libxl_bitmap *p);
+yajl_gen_status libxl_defbool_gen_json(yajl_gen hand, const libxl_defbool *p);
+yajl_gen_status libxl_uuid_gen_json(yajl_gen hand, const libxl_uuid *p);
+yajl_gen_status libxl_mac_gen_json(yajl_gen hand, const libxl_mac *p);
+yajl_gen_status libxl_bitmap_gen_json(yajl_gen hand, const libxl_bitmap *p);
 yajl_gen_status libxl_cpuid_policy_list_gen_json(yajl_gen hand,
-                                                 libxl_cpuid_policy_list *p);
-yajl_gen_status libxl_string_list_gen_json(yajl_gen hand, libxl_string_list *p);
+                                                 const libxl_cpuid_policy_list *p);
+yajl_gen_status libxl_string_list_gen_json(yajl_gen hand,
+                                           const libxl_string_list *p);
 yajl_gen_status libxl_key_value_list_gen_json(yajl_gen hand,
-                                              libxl_key_value_list *p);
-yajl_gen_status libxl_hwcap_gen_json(yajl_gen hand, libxl_hwcap *p);
-yajl_gen_status libxl_ms_vm_genid_gen_json(yajl_gen hand, libxl_ms_vm_genid *p);
+                                              const libxl_key_value_list *p);
+yajl_gen_status libxl_hwcap_gen_json(yajl_gen hand, const libxl_hwcap *p);
+yajl_gen_status libxl_ms_vm_genid_gen_json(yajl_gen hand,
+                                           const libxl_ms_vm_genid *p);
 
 #include <_libxl_types_json.h>
 
@@ -91,6 +93,6 @@ static inline yajl_gen libxl_yajl_gen_alloc(const yajl_alloc_funcs *allocFuncs)
 #endif /* !HAVE_YAJL_V2 */
 
 yajl_gen_status libxl_domain_config_gen_json(yajl_gen hand,
-                                             libxl_domain_config *p);
+                                             const libxl_domain_config *p);
 
 #endif /* LIBXL_JSON_H */
diff --git a/tools/libs/light/gentypes.py b/tools/libs/light/gentypes.py
index 9a45e45acc..7e02a5366f 100644
--- a/tools/libs/light/gentypes.py
+++ b/tools/libs/light/gentypes.py
@@ -632,7 +632,7 @@ if __name__ == '__main__':
                                                ty.make_arg("p"),
                                                ku.keyvar.type.make_arg(ku.keyvar.name)))
         if ty.json_gen_fn is not None:
-            f.write("%schar *%s_to_json(libxl_ctx *ctx, %s);\n" % (ty.hidden(), ty.typename, ty.make_arg("p")))
+            f.write("%schar *%s_to_json(libxl_ctx *ctx, const %s);\n" % (ty.hidden(), ty.typename, ty.make_arg("p")))
         if ty.json_parse_fn is not None:
             f.write("%sint %s_from_json(libxl_ctx *ctx, %s, const char *s);\n" % (ty.hidden(), ty.typename, ty.make_arg("p", passby=idl.PASS_BY_REFERENCE)))
         if isinstance(ty, idl.Enumeration):
@@ -662,7 +662,7 @@ if __name__ == '__main__':
 """ % (header_json_define, header_json_define, " ".join(sys.argv)))
 
     for ty in [ty for ty in types if ty.json_gen_fn is not None]:
-        f.write("%syajl_gen_status %s_gen_json(yajl_gen hand, %s);\n" % (ty.hidden(), ty.typename, ty.make_arg("p", passby=idl.PASS_BY_REFERENCE)))
+        f.write("%syajl_gen_status %s_gen_json(yajl_gen hand, const %s);\n" % (ty.hidden(), ty.typename, ty.make_arg("p", passby=idl.PASS_BY_REFERENCE)))
 
     f.write("\n")
     f.write("""#endif /* %s */\n""" % header_json_define)
@@ -766,13 +766,13 @@ if __name__ == '__main__':
         f.write("\n")
 
     for ty in [t for t in types if t.json_gen_fn is not None]:
-        f.write("yajl_gen_status %s_gen_json(yajl_gen hand, %s)\n" % (ty.typename, ty.make_arg("p", passby=idl.PASS_BY_REFERENCE)))
+        f.write("yajl_gen_status %s_gen_json(yajl_gen hand, const %s)\n" % (ty.typename, ty.make_arg("p", passby=idl.PASS_BY_REFERENCE)))
         f.write("{\n")
         f.write(libxl_C_type_gen_json(ty, "p"))
         f.write("}\n")
         f.write("\n")
 
-        f.write("char *%s_to_json(libxl_ctx *ctx, %s)\n" % (ty.typename, ty.make_arg("p")))
+        f.write("char *%s_to_json(libxl_ctx *ctx, const %s)\n" % (ty.typename, ty.make_arg("p")))
         f.write("{\n")
         f.write(libxl_C_type_to_json(ty, "p"))
         f.write("}\n")
diff --git a/tools/libs/light/libxl_cpuid.c b/tools/libs/light/libxl_cpuid.c
index 289c59c742..b3e4212edf 100644
--- a/tools/libs/light/libxl_cpuid.c
+++ b/tools/libs/light/libxl_cpuid.c
@@ -14,7 +14,7 @@
 
 #include "libxl_internal.h"
 
-int libxl__cpuid_policy_is_empty(libxl_cpuid_policy_list *pl)
+int libxl__cpuid_policy_is_empty(const libxl_cpuid_policy_list *pl)
 {
     return !libxl_cpuid_policy_list_length(pl);
 }
diff --git a/tools/libs/light/libxl_internal.c b/tools/libs/light/libxl_internal.c
index d93a75533f..32b8788b59 100644
--- a/tools/libs/light/libxl_internal.c
+++ b/tools/libs/light/libxl_internal.c
@@ -332,7 +332,7 @@ _hidden int libxl__parse_mac(const char *s, libxl_mac mac)
     return 0;
 }
 
-_hidden int libxl__compare_macs(libxl_mac *a, libxl_mac *b)
+_hidden int libxl__compare_macs(const libxl_mac *a, const libxl_mac *b)
 {
     int i;
 
@@ -344,7 +344,7 @@ _hidden int libxl__compare_macs(libxl_mac *a, libxl_mac *b)
     return 0;
 }
 
-_hidden int libxl__mac_is_default(libxl_mac *mac)
+_hidden int libxl__mac_is_default(const libxl_mac *mac)
 {
     return (!(*mac)[0] && !(*mac)[1] && !(*mac)[2] &&
             !(*mac)[3] && !(*mac)[4] && !(*mac)[5]);
diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
index 028bc013d9..e3df881d08 100644
--- a/tools/libs/light/libxl_internal.h
+++ b/tools/libs/light/libxl_internal.h
@@ -2073,9 +2073,9 @@ struct libxl__xen_console_reader {
 /* parse the string @s as a sequence of 6 colon separated bytes in to @mac */
 _hidden int libxl__parse_mac(const char *s, libxl_mac mac);
 /* compare mac address @a and @b. 0 if the same, -ve if a<b and +ve if a>b */
-_hidden int libxl__compare_macs(libxl_mac *a, libxl_mac *b);
+_hidden int libxl__compare_macs(const libxl_mac *a, const libxl_mac *b);
 /* return true if mac address is all zero (the default value) */
-_hidden int libxl__mac_is_default(libxl_mac *mac);
+_hidden int libxl__mac_is_default(const libxl_mac *mac);
 /* init a recursive mutex */
 _hidden int libxl__init_recursive_mutex(libxl_ctx *ctx, pthread_mutex_t *lock);
 
@@ -4571,7 +4571,7 @@ _hidden int libxl__ms_vm_genid_set(libxl__gc *gc, uint32_t domid,
 #define LIBXL__DEFBOOL_STR_DEFAULT "<default>"
 #define LIBXL__DEFBOOL_STR_FALSE   "False"
 #define LIBXL__DEFBOOL_STR_TRUE    "True"
-static inline int libxl__defbool_is_default(libxl_defbool *db)
+static inline int libxl__defbool_is_default(const libxl_defbool *db)
 {
     return !db->val;
 }
@@ -4666,22 +4666,22 @@ int libxl__random_bytes(libxl__gc *gc, uint8_t *buf, size_t len);
 #include "_libxl_types_internal_private.h"
 
 /* This always return false, there's no "default value" for hw cap */
-static inline int libxl__hwcap_is_default(libxl_hwcap *hwcap)
+static inline int libxl__hwcap_is_default(const libxl_hwcap *hwcap)
 {
     return 0;
 }
 
-static inline int libxl__string_list_is_empty(libxl_string_list *psl)
+static inline int libxl__string_list_is_empty(const libxl_string_list *psl)
 {
     return !libxl_string_list_length(psl);
 }
 
-static inline int libxl__key_value_list_is_empty(libxl_key_value_list *pkvl)
+static inline int libxl__key_value_list_is_empty(const libxl_key_value_list *pkvl)
 {
     return !libxl_key_value_list_length(pkvl);
 }
 
-int libxl__cpuid_policy_is_empty(libxl_cpuid_policy_list *pl);
+int libxl__cpuid_policy_is_empty(const libxl_cpuid_policy_list *pl);
 
 /* Portability note: a proper flock(2) implementation is required */
 typedef struct {
@@ -4812,12 +4812,12 @@ void* libxl__device_list(libxl__gc *gc, const libxl__device_type *dt,
 void libxl__device_list_free(const libxl__device_type *dt,
                              void *list, int num);
 
-static inline bool libxl__timer_mode_is_default(libxl_timer_mode *tm)
+static inline bool libxl__timer_mode_is_default(const libxl_timer_mode *tm)
 {
     return *tm == LIBXL_TIMER_MODE_DEFAULT;
 }
 
-static inline bool libxl__string_is_default(char **s)
+static inline bool libxl__string_is_default(char *const *s)
 {
     return *s == NULL;
 }
diff --git a/tools/libs/light/libxl_json.c b/tools/libs/light/libxl_json.c
index 9b8ef2cab9..88e81f9905 100644
--- a/tools/libs/light/libxl_json.c
+++ b/tools/libs/light/libxl_json.c
@@ -95,7 +95,7 @@ yajl_gen_status libxl__yajl_gen_enum(yajl_gen hand, const char *str)
  * YAJL generators for builtin libxl types.
  */
 yajl_gen_status libxl_defbool_gen_json(yajl_gen hand,
-                                       libxl_defbool *db)
+                                       const libxl_defbool *db)
 {
     return libxl__yajl_gen_asciiz(hand, libxl_defbool_to_string(*db));
 }
@@ -137,7 +137,7 @@ int libxl__bool_parse_json(libxl__gc *gc, const libxl__json_object *o,
 }
 
 yajl_gen_status libxl_uuid_gen_json(yajl_gen hand,
-                                    libxl_uuid *uuid)
+                                    const libxl_uuid *uuid)
 {
     char buf[LIBXL_UUID_FMTLEN+1];
     snprintf(buf, sizeof(buf), LIBXL_UUID_FMT, LIBXL_UUID_BYTES((*uuid)));
@@ -154,7 +154,7 @@ int libxl__uuid_parse_json(libxl__gc *gc, const libxl__json_object *o,
 }
 
 yajl_gen_status libxl_bitmap_gen_json(yajl_gen hand,
-                                      libxl_bitmap *bitmap)
+                                      const libxl_bitmap *bitmap)
 {
     yajl_gen_status s;
     int i;
@@ -208,7 +208,7 @@ int libxl__bitmap_parse_json(libxl__gc *gc, const libxl__json_object *o,
 }
 
 yajl_gen_status libxl_key_value_list_gen_json(yajl_gen hand,
-                                              libxl_key_value_list *pkvl)
+                                              const libxl_key_value_list *pkvl)
 {
     libxl_key_value_list kvl = *pkvl;
     yajl_gen_status s;
@@ -269,7 +269,8 @@ int libxl__key_value_list_parse_json(libxl__gc *gc, const libxl__json_object *o,
     return 0;
 }
 
-yajl_gen_status libxl_string_list_gen_json(yajl_gen hand, libxl_string_list *pl)
+yajl_gen_status libxl_string_list_gen_json(yajl_gen hand,
+                                           const libxl_string_list *pl)
 {
     libxl_string_list l = *pl;
     yajl_gen_status s;
@@ -322,7 +323,7 @@ int libxl__string_list_parse_json(libxl__gc *gc, const libxl__json_object *o,
     return 0;
 }
 
-yajl_gen_status libxl_mac_gen_json(yajl_gen hand, libxl_mac *mac)
+yajl_gen_status libxl_mac_gen_json(yajl_gen hand, const libxl_mac *mac)
 {
     char buf[LIBXL_MAC_FMTLEN+1];
     snprintf(buf, sizeof(buf), LIBXL_MAC_FMT, LIBXL_MAC_BYTES((*mac)));
@@ -339,7 +340,7 @@ int libxl__mac_parse_json(libxl__gc *gc, const libxl__json_object *o,
 }
 
 yajl_gen_status libxl_hwcap_gen_json(yajl_gen hand,
-                                     libxl_hwcap *p)
+                                     const libxl_hwcap *p)
 {
     yajl_gen_status s;
     int i;
@@ -377,7 +378,8 @@ int libxl__hwcap_parse_json(libxl__gc *gc, const libxl__json_object *o,
     return 0;
 }
 
-yajl_gen_status libxl_ms_vm_genid_gen_json(yajl_gen hand, libxl_ms_vm_genid *p)
+yajl_gen_status libxl_ms_vm_genid_gen_json(yajl_gen hand,
+                                           const libxl_ms_vm_genid *p)
 {
     yajl_gen_status s;
     int i;
diff --git a/tools/libs/light/libxl_nocpuid.c b/tools/libs/light/libxl_nocpuid.c
index f47336565b..73580351b3 100644
--- a/tools/libs/light/libxl_nocpuid.c
+++ b/tools/libs/light/libxl_nocpuid.c
@@ -14,7 +14,7 @@
 
 #include "libxl_internal.h"
 
-int libxl__cpuid_policy_is_empty(libxl_cpuid_policy_list *pl)
+int libxl__cpuid_policy_is_empty(const libxl_cpuid_policy_list *pl)
 {
     return 1;
 }
@@ -40,7 +40,7 @@ void libxl__cpuid_legacy(libxl_ctx *ctx, uint32_t domid, bool restore,
 }
 
 yajl_gen_status libxl_cpuid_policy_list_gen_json(yajl_gen hand,
-                                libxl_cpuid_policy_list *pcpuid)
+                                const libxl_cpuid_policy_list *pcpuid)
 {
     return 0;
 }
-- 


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445





From xen-devel-bounces@lists.xenproject.org Tue Feb 16 17:43:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 17:43:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85985.160997 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC4Np-0002ZF-Ld; Tue, 16 Feb 2021 17:43:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85985.160997; Tue, 16 Feb 2021 17:43:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC4Np-0002Z8-IW; Tue, 16 Feb 2021 17:43:33 +0000
Received: by outflank-mailman (input) for mailman id 85985;
 Tue, 16 Feb 2021 17:43:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=hb7A=HS=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1lC4No-0002Yi-4Q
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 17:43:32 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6a3334e1-2ac7-4881-9f7d-a66d05dbff6c;
 Tue, 16 Feb 2021 17:43:31 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.16.1/8.15.2) with ESMTPS id 11GHhLI7012036
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Tue, 16 Feb 2021 12:43:27 -0500 (EST) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.16.1/8.15.2/Submit) id 11GHhL93012035;
 Tue, 16 Feb 2021 09:43:21 -0800 (PST) (envelope-from ehem)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6a3334e1-2ac7-4881-9f7d-a66d05dbff6c
Message-Id: <7de11de51475d98197ad30124674b9785a731dda.1613496229.git.ehem+xen@m5p.com>
In-Reply-To: <cover.1613496229.git.ehem+xen@m5p.com>
References: <cover.1613496229.git.ehem+xen@m5p.com>
From: Elliott Mitchell <ehem+xen@m5p.com>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
Date: Fri, 18 Dec 2020 13:32:33 -0800
Subject: [RESEND PATCH 2/2] tools/xl: Mark libxl_domain_config * arg of
 printf_info_*() const
X-Spam-Status: No, score=2.5 required=10.0 tests=DATE_IN_PAST_96_XX,
	KHOP_HELO_FCRDNS autolearn=no autolearn_force=no version=3.4.4
X-Spam-Level: **
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

With libxl now considerably more const-correct, printf_info_sexp() and
printf_info_one_json() can take const arguments too.  This may not be
particularly important, but it is best to mark things const when they
are known to be so.

Signed-off-by: Elliott Mitchell <ehem+xen@m5p.com>
---
 tools/xl/xl.h      | 2 +-
 tools/xl/xl_info.c | 2 +-
 tools/xl/xl_sxp.c  | 6 +++---
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/tools/xl/xl.h b/tools/xl/xl.h
index 137a29077c..30a6ddb86f 100644
--- a/tools/xl/xl.h
+++ b/tools/xl/xl.h
@@ -300,7 +300,7 @@ typedef enum {
     DOMAIN_RESTART_SOFT_RESET,   /* Soft reset should be performed */
 } domain_restart_type;
 
-extern void printf_info_sexp(int domid, libxl_domain_config *d_config, FILE *fh);
+extern void printf_info_sexp(int domid, const libxl_domain_config *d_config, FILE *fh);
 extern void apply_global_affinity_masks(libxl_domain_type type,
                                         libxl_bitmap *vcpu_affinity_array,
                                         unsigned int size);
diff --git a/tools/xl/xl_info.c b/tools/xl/xl_info.c
index 8383e4a6df..18c88abf34 100644
--- a/tools/xl/xl_info.c
+++ b/tools/xl/xl_info.c
@@ -59,7 +59,7 @@ static int maybe_printf(const char *fmt, ...)
 }
 
 static yajl_gen_status printf_info_one_json(yajl_gen hand, int domid,
-                                            libxl_domain_config *d_config)
+                                            const libxl_domain_config *d_config)
 {
     yajl_gen_status s;
 
diff --git a/tools/xl/xl_sxp.c b/tools/xl/xl_sxp.c
index 359a001570..d5b9051dfc 100644
--- a/tools/xl/xl_sxp.c
+++ b/tools/xl/xl_sxp.c
@@ -26,13 +26,13 @@
 /* In general you should not add new output to this function since it
  * is intended only for legacy use.
  */
-void printf_info_sexp(int domid, libxl_domain_config *d_config, FILE *fh)
+void printf_info_sexp(int domid, const libxl_domain_config *d_config, FILE *fh)
 {
     int i;
     libxl_dominfo info;
 
-    libxl_domain_create_info *c_info = &d_config->c_info;
-    libxl_domain_build_info *b_info = &d_config->b_info;
+    const libxl_domain_create_info *c_info = &d_config->c_info;
+    const libxl_domain_build_info *b_info = &d_config->b_info;
 
     fprintf(fh, "(domain\n\t(domid %d)\n", domid);
     fprintf(fh, "\t(create_info)\n");
-- 


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445





From xen-devel-bounces@lists.xenproject.org Tue Feb 16 17:43:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 17:43:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.85991.161009 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC4OB-0002fx-0M; Tue, 16 Feb 2021 17:43:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 85991.161009; Tue, 16 Feb 2021 17:43:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC4OA-0002fq-Sl; Tue, 16 Feb 2021 17:43:54 +0000
Received: by outflank-mailman (input) for mailman id 85991;
 Tue, 16 Feb 2021 17:43:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IlKa=HS=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lC4O9-0002fX-Aa
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 17:43:53 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 22ce567e-6be4-4656-83ed-d8dec9bf333e;
 Tue, 16 Feb 2021 17:43:51 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 22ce567e-6be4-4656-83ed-d8dec9bf333e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613497431;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version;
  bh=v3W8OOM9yiuJH+VdAlk5xvI6YpLMlAEGALiJ1ZliNPE=;
  b=gH/GS3i7DU3uQJ23XbUbWCbS5hVQpu0n6J9ATpPGvnMVYY59sw855j1H
   MbuH49Hhn2VDNxT4Tdb96RhhJ+MFGmStR/XIj6w8pzQ0s5INCgZsuHfJV
   6gPcyhrh64mmxStbeSXpLiXnJN5qfHXMKdlOfPYDrBDwqD3KVXRy1oE9m
   Q=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: m0X4lj2Rk4pQoP5+5j84UjOCPRaIYdHed2aQ8zxQCYclWeKVMkjkFOD1M3UsuS8VhXcNH3fb8g
 zYFkXdwy7ITQwdg/cYmpiRTfAFCQXpsREODgeGPdZeOMES7qcD54JFVTi3OJx00I57o2Ixq+X1
 Ql9KZ/D6zmZba/B/umAXp/JVC7DhhCFlMjIw+zXH3dBdMzf868RtVPhLAWADto1TFkvmK/kprp
 60hTaBWLRAjcbQPzwIbXaKaro8iBXakTkgs0Ni52jWqTSA+hSlB+qfPDaZiftLjVhQPYMByo72
 N2A=
X-SBRS: 5.1
X-MesageID: 37349327
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,184,1610427600"; 
   d="scan'208";a="37349327"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Anthony PERARD
	<anthony.perard@citrix.com>
Subject: [PATCH v2 04/10] tools/libxl: Fix uninitialised variable in libxl__domain_get_device_model_uid()
Date: Tue, 16 Feb 2021 17:43:27 +0000
Message-ID: <20210216174327.8086-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210212153953.4582-5-andrew.cooper3@citrix.com>
References: <20210212153953.4582-5-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain

Various versions of gcc, when compiling with -Og, complain:

  libxl_dm.c: In function 'libxl__domain_get_device_model_uid':
  libxl_dm.c:256:12: error: 'kill_by_uid' may be used uninitialized in this function [-Werror=maybe-uninitialized]
    256 |         if (kill_by_uid)
        |            ^

The logic is very tangled.  Split the out and err labels apart, using out for
success cases only.

assert() that rc is 0 in the success case.  This allows for the removal of the
`if (!rc)` nesting in the out path, which reduces the cyclomatic complexity,
which is the root cause of false positive maybe-uninitialised warnings.

This also allows dropping the two assignments which set kill_by_uid to
false purely to work around this warning at other optimisation levels.
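The label-split pattern described above can be sketched in isolation
(demo_acquire is a hypothetical illustration, not libxl code): a variable
assigned only on success paths is trivially initialised at `out`, because
every failure path jumps directly to `err` without touching it, and the
assert() documents the invariant.

```c
#include <assert.h>

/* Sketch of the out/err split: `out` is reached only on success, so
 * `resource` is guaranteed initialised there; all failure paths jump
 * straight to `err` and never read it. */
static int demo_acquire(int want_fail, int *result)
{
    int rc;
    int resource;            /* assigned only on the success path */

    if (want_fail) {
        rc = -1;             /* error code */
        goto err;
    }

    resource = 42;
    rc = 0;
    goto out;

 out:
    assert(rc == 0);         /* invariant: out is for success only */
    *result = resource;      /* safe: cannot be uninitialised here */
 err:
    return rc;
}
```

With the single exit label of the original code, the compiler has to prove
`resource` initialised across both success and failure paths, which is
where the false-positive maybe-uninitialised warnings come from.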

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Ian Jackson <iwj@xenproject.org>
CC: Wei Liu <wl@xen.org>
CC: Anthony PERARD <anthony.perard@citrix.com>

v2:
 * Split the out and err paths.
---
 tools/libs/light/libxl_dm.c | 33 +++++++++++++++++----------------
 1 file changed, 17 insertions(+), 16 deletions(-)

diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
index 291dee9b3f..dbd2ab607d 100644
--- a/tools/libs/light/libxl_dm.c
+++ b/tools/libs/light/libxl_dm.c
@@ -135,8 +135,8 @@ static int libxl__domain_get_device_model_uid(libxl__gc *gc,
         return 0;
 
     /*
-     * From this point onward, all paths should go through the `out`
-     * label.  The invariants should be:
+     * From this point onward, all paths should go through the `out` (success)
+     * or `err` (failure) labels.  The invariants should be:
      * - rc may be 0, or an error code.
      * - if rc is an error code, user and intended_uid are ignored.
      * - if rc is 0, user may be set or not set.
@@ -153,13 +153,13 @@ static int libxl__domain_get_device_model_uid(libxl__gc *gc,
     if (user) {
         rc = userlookup_helper_getpwnam(gc, user, &user_pwbuf, &user_base);
         if (rc)
-            goto out;
+            goto err;
 
         if (!user_base) {
             LOGD(ERROR, guest_domid, "Couldn't find device_model_user %s",
                  user);
             rc = ERROR_INVAL;
-            goto out;
+            goto err;
         }
 
         intended_uid = user_base->pw_uid;
@@ -176,7 +176,6 @@ static int libxl__domain_get_device_model_uid(libxl__gc *gc,
         LOGD(DEBUG, guest_domid,
              "dm_restrict disabled, starting QEMU as root");
         user = NULL; /* Should already be null, but just in case */
-        kill_by_uid = false; /* Keep older versions of gcc happy */
         rc = 0;
         goto out;
     }
@@ -188,7 +187,7 @@ static int libxl__domain_get_device_model_uid(libxl__gc *gc,
     rc = userlookup_helper_getpwnam(gc, LIBXL_QEMU_USER_RANGE_BASE,
                                          &user_pwbuf, &user_base);
     if (rc)
-        goto out;
+        goto err;
     if (user_base) {
         struct passwd *user_clash, user_clash_pwbuf;
 
@@ -196,14 +195,14 @@ static int libxl__domain_get_device_model_uid(libxl__gc *gc,
         rc = userlookup_helper_getpwuid(gc, intended_uid,
                                          &user_clash_pwbuf, &user_clash);
         if (rc)
-            goto out;
+            goto err;
         if (user_clash) {
             LOGD(ERROR, guest_domid,
                  "wanted to use uid %ld (%s + %d) but that is user %s !",
                  (long)intended_uid, LIBXL_QEMU_USER_RANGE_BASE,
                  guest_domid, user_clash->pw_name);
             rc = ERROR_INVAL;
-            goto out;
+            goto err;
         }
 
         LOGD(DEBUG, guest_domid, "using uid %ld", (long)intended_uid);
@@ -222,12 +221,11 @@ static int libxl__domain_get_device_model_uid(libxl__gc *gc,
     user = LIBXL_QEMU_USER_SHARED;
     rc = userlookup_helper_getpwnam(gc, user, &user_pwbuf, &user_base);
     if (rc)
-        goto out;
+        goto err;
     if (user_base) {
         LOGD(WARN, guest_domid, "Could not find user %s, falling back to %s",
              LIBXL_QEMU_USER_RANGE_BASE, LIBXL_QEMU_USER_SHARED);
         intended_uid = user_base->pw_uid;
-        kill_by_uid = false;
         rc = 0;
         goto out;
     }
@@ -240,23 +238,26 @@ static int libxl__domain_get_device_model_uid(libxl__gc *gc,
          "Could not find user %s or range base pseudo-user %s, cannot restrict",
          LIBXL_QEMU_USER_SHARED, LIBXL_QEMU_USER_RANGE_BASE);
     rc = ERROR_INVAL;
+    goto err;
 
 out:
+    assert(rc == 0);
+
     /* First, do a root check if appropriate */
-    if (!rc) {
-        if (user && intended_uid == 0) {
-            LOGD(ERROR, guest_domid, "intended_uid is 0 (root)!");
-            rc = ERROR_INVAL;
-        }
+    if (user && intended_uid == 0) {
+        LOGD(ERROR, guest_domid, "intended_uid is 0 (root)!");
+        rc = ERROR_INVAL;
+        goto err;
     }
 
     /* Then do the final set, if still appropriate */
-    if (!rc && user) {
+    if (user) {
         state->dm_runas = user;
         if (kill_by_uid)
             state->dm_kill_uid = GCSPRINTF("%ld", (long)intended_uid);
     }
 
+ err:
     return rc;
 }
 
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Feb 16 17:58:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 17:58:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86008.161021 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC4bo-0003oQ-D8; Tue, 16 Feb 2021 17:58:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86008.161021; Tue, 16 Feb 2021 17:58:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC4bo-0003oJ-8y; Tue, 16 Feb 2021 17:58:00 +0000
Received: by outflank-mailman (input) for mailman id 86008;
 Tue, 16 Feb 2021 17:57:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC4bm-0003oE-VJ
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 17:57:58 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC4bm-0005y0-RS
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 17:57:58 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC4bm-0001PP-OZ
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 17:57:58 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lC4bj-00013M-F7; Tue, 16 Feb 2021 17:57:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=zdLuxJR9BFfxTCVW0soQh3MiN+REhXu+qW4qHAdtfF0=; b=4HpC/DHi+wtnXoLYxYOOOZBpJi
	ilvzPRF6g3UQ1UW2qP75PCrI4ZpaGZVQuAE/9dK56sQpynGim/0t/8TobkNUfuYNay9Rifgka6v8A
	VGaOHfZuptgx2yXVMPa72FnakQN7L00diYLPxUpSJYigksY9sbEhd5nHXIle8yWTZlUY=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24620.1955.243860.253052@mariner.uk.xensource.com>
Date: Tue, 16 Feb 2021 17:57:55 +0000
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
    Wei Liu <wl@xen.org>,
    Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v2 04/10] tools/libxl: Fix uninitialised variable in libxl__domain_get_device_model_uid()
In-Reply-To: <20210216174327.8086-1-andrew.cooper3@citrix.com>
References: <20210212153953.4582-5-andrew.cooper3@citrix.com>
	<20210216174327.8086-1-andrew.cooper3@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Andrew Cooper writes ("[PATCH v2 04/10] tools/libxl: Fix uninitialised variable in libxl__domain_get_device_model_uid()"):
> Various versions of gcc, when compiling with -Og, complain:
> 
>   libxl_dm.c: In function 'libxl__domain_get_device_model_uid':
>   libxl_dm.c:256:12: error: 'kill_by_uid' may be used uninitialized in this function [-Werror=maybe-uninitialized]
>     256 |         if (kill_by_uid)
...
> @@ -176,7 +176,6 @@ static int libxl__domain_get_device_model_uid(libxl__gc *gc,
>          LOGD(DEBUG, guest_domid,
>               "dm_restrict disabled, starting QEMU as root");
>          user = NULL; /* Should already be null, but just in case */
> -        kill_by_uid = false; /* Keep older versions of gcc happy */
>          rc = 0;
>          goto out;

Uhhhhh.  I think this and the other one seem to be stray hunks which
each introduce a new uninitialised variable bug ?
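For context, the pattern gcc's -Wmaybe-uninitialized flags can be reproduced with a minimal sketch (hypothetical names, not the actual libxl code): a variable assigned only on some paths before a `goto` to code that reads it. Deleting the early assignment, as the stray hunk does, reopens exactly that hole.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical reduction of the libxl pattern: kill_by_uid must be
 * assigned on every path that reaches the "out" label, because the
 * code after the label reads it.  Removing the assignment in the
 * early-exit branch makes gcc -Og -Wmaybe-uninitialized warn, and
 * makes the read genuinely undefined on that path. */
static int get_uid(bool restricted, bool *kill_by_uid_out)
{
    bool kill_by_uid;
    int rc;

    if (!restricted) {
        /* Without this assignment, kill_by_uid would be read
         * uninitialized at "out" on this path. */
        kill_by_uid = false;
        rc = 0;
        goto out;
    }

    kill_by_uid = true;
    rc = 0;

 out:
    if (rc == 0)
        *kill_by_uid_out = kill_by_uid;
    return rc;
}
```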

Ian.


From xen-devel-bounces@lists.xenproject.org Tue Feb 16 18:05:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 18:05:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86015.161035 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC4if-0004pf-4h; Tue, 16 Feb 2021 18:05:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86015.161035; Tue, 16 Feb 2021 18:05:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC4if-0004pY-1P; Tue, 16 Feb 2021 18:05:05 +0000
Received: by outflank-mailman (input) for mailman id 86015;
 Tue, 16 Feb 2021 18:05:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC4id-0004pT-Ns
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 18:05:03 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC4id-0006B6-LM
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 18:05:03 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lC4id-00026J-KO
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 18:05:03 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lC4ia-00014T-GV; Tue, 16 Feb 2021 18:05:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:To:Date:
	Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=z+ZPSMf938mJ4x62YBWP/JOKWCZStOwFzCP7XTqj4CM=; b=fC/fO8IHjiMhiOH+pS1PtwBGAU
	ZtxMLaMmHg3gnd5Nf6oTDPDdiaNjnzNFg8DepbBnveLAlgdwXqNY7tVujAgXq0AeXwCF1mORbxZeN
	GMb6ZxI10j51m2LRrHD7+GOp1py/z5L7FxA+kwYgjK0+1SLyApQy1GDz3XVFbrBnmpuc=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24620.2380.253008.883244@mariner.uk.xensource.com>
Date: Tue, 16 Feb 2021 18:05:00 +0000
To: Andrew Cooper <andrew.cooper3@citrix.com>,
    Xen-devel <xen-devel@lists.xenproject.org>,
    Wei Liu <wl@xen.org>,
    Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v2 04/10] tools/libxl: Fix uninitialised variable in libxl__domain_get_device_model_uid()
In-Reply-To: <24620.1955.243860.253052@mariner.uk.xensource.com>
References: <20210212153953.4582-5-andrew.cooper3@citrix.com>
	<20210216174327.8086-1-andrew.cooper3@citrix.com>
	<24620.1955.243860.253052@mariner.uk.xensource.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Ian Jackson writes ("Re: [PATCH v2 04/10] tools/libxl: Fix uninitialised variable in libxl__domain_get_device_model_uid()"):
> Uhhhhh.  I think this and the other one seem to be stray hunks which
> each introduce a new uninitialised variable bug ?

I now think (following IRC discussion) that I was wrong about this.

Use of intended_uid is conditional on user being set.  This is very
confusing.  This code is simulating a sum type.  If only we had sum
types.
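For illustration only (all names hypothetical, not the libxl code): in C the sum type being simulated would be a tagged union, which makes the "user set implies intended_uid is valid" invariant explicit instead of implied by a NULL check.

```c
#include <assert.h>
#include <sys/types.h>

/* Hypothetical tagged union: either run the device model as root,
 * or as a specific user with an intended uid.  The tag makes it
 * clear that intended_uid is only meaningful when a user was chosen. */
struct dm_user {
    enum { DM_USER_ROOT, DM_USER_UID } tag;
    union {
        struct {
            const char *name;
            uid_t intended_uid;
        } uid;
    } u;
};

static struct dm_user dm_user_root(void)
{
    struct dm_user d = { .tag = DM_USER_ROOT };
    return d;
}

static struct dm_user dm_user_uid(const char *name, uid_t uid)
{
    struct dm_user d = { .tag = DM_USER_UID };
    d.u.uid.name = name;
    d.u.uid.intended_uid = uid;
    return d;
}

/* Consumers branch on the tag instead of on "user != NULL". */
static int needs_kill_by_uid(const struct dm_user *d)
{
    return d->tag == DM_USER_UID;
}
```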

Reviewed-by: Ian Jackson <iwj@xenproject.org>

I think, given how confusing this is, that I would like another
careful review before I give my release-ack.

Ian.


From xen-devel-bounces@lists.xenproject.org Tue Feb 16 18:31:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 18:31:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86021.161050 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC58R-0007Y4-Aw; Tue, 16 Feb 2021 18:31:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86021.161050; Tue, 16 Feb 2021 18:31:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC58R-0007Xx-84; Tue, 16 Feb 2021 18:31:43 +0000
Received: by outflank-mailman (input) for mailman id 86021;
 Tue, 16 Feb 2021 18:31:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QcaE=HS=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lC58Q-0007Xs-4s
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 18:31:42 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2014e51b-c195-4263-92b6-386ef5a81ba1;
 Tue, 16 Feb 2021 18:31:39 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 8001564E13;
 Tue, 16 Feb 2021 18:31:38 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2014e51b-c195-4263-92b6-386ef5a81ba1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1613500298;
	bh=mhvB5DZ9XKO0ImPntfsEEVWA2S7YxYKMmuSSOj+XapE=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=QO0xNW5ipQKutc6TgcF94ChIujWgDEeyCICGs9EenwP4nf1v+RnvgrpBUAJg9Q5Tt
	 b4uEdStSQo7s8VTBi1j6n4w3k1SCFg74Ixeamh92s2cR+KOvWiT0Pt3Tpap+gHOHW9
	 M5SL3FHMWjZpbbADeZzRoDsoMHFmT1EEudxf0QhGj4aKuh58wz1uPLqXn6oIY33TvL
	 aHvJvNr5vh6QPNPaKeZevSHDmvpeD0an4SG1bZ/syjRQHBNP0I7O3hFaFXK3GuMdPK
	 iHVT1A02jwdH6gZ7wSCjSQoZUiFvp6ZulpnxywPv58UeDVC6D6KtJtYTTCUbWcXBEx
	 ZlXqdn408eouQ==
Date: Tue, 16 Feb 2021 10:31:37 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Jan Beulich <jbeulich@suse.com>
cc: =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, cardoe@cardoe.com, 
    andrew.cooper3@citrix.com, wl@xen.org, iwj@xenproject.org, 
    anthony.perard@citrix.com, 
    Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: Re: [PATCH] firmware: don't build hvmloader if it is not needed
In-Reply-To: <4d9200cd-bd4b-e429-5c96-7a4399bb00b4@suse.com>
Message-ID: <alpine.DEB.2.21.2102161016000.3234@sstabellini-ThinkPad-T480s>
References: <20210213020540.27894-1-sstabellini@kernel.org> <20210213135056.GA6191@mail-itl> <4d9200cd-bd4b-e429-5c96-7a4399bb00b4@suse.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-1061101881-1613499727=:3234"
Content-ID: <alpine.DEB.2.21.2102161022390.3234@sstabellini-ThinkPad-T480s>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1061101881-1613499727=:3234
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.21.2102161022391.3234@sstabellini-ThinkPad-T480s>

On Mon, 15 Feb 2021, Jan Beulich wrote:
> On 13.02.2021 14:50, Marek Marczykowski-Górecki wrote:
> > On Fri, Feb 12, 2021 at 06:05:40PM -0800, Stefano Stabellini wrote:
> >> If rombios, seabios and ovmf are all disabled, don't attempt to build
> >> hvmloader.
> > 
> > What if you choose to not build any of rombios, seabios, ovmf, but use
> > system one instead? Wouldn't that exclude hvmloader too?
> 
> Even further - one can disable all firmware and have every guest
> config explicitly specify the firmware to use, afaict.

I didn't realize there was a valid reason for wanting to build hvmloader
without rombios, seabios, and ovmf.


> > This heuristic seems like a bit too much, maybe instead add an explicit
> > option to skip hvmloader?
> 
> +1 (If making this configurable is needed at all - is having
> hvmloader without needing it really a problem?)

The hvmloader build fails on Alpine Linux x86:

https://gitlab.com/xen-project/xen/-/jobs/1033722465



I admit I was just trying to find the fastest way to make those Alpine
Linux builds succeed to unblock patchew: although the Alpine Linux
builds are marked as "allow_failure: true" in gitlab-ci, patchew will
still report the whole battery of tests as "failure". As a consequence
the notification emails from patchew after a build of a contributed
patch series always say "job failed" today, making them kind of useless.
See attached.

I would love it if somebody else took over this fix, as I am doing this
after hours, but if you have a simple suggestion on how to fix the
Alpine Linux hvmloader builds, or skip the build when appropriate, I can
try to follow up.

Otherwise for now it might be best to just temporarily remove the Alpine
Linux x86 builds from gitlab-ci.

Any thoughts?
--8323329-1061101881-1613499727=:3234
Content-Type: text/plain; CHARSET=US-ASCII; NAME=patchew.email
Content-Transfer-Encoding: BASE64
Content-ID: <alpine.DEB.2.21.2102161027550.3234@sstabellini-ThinkPad-T480s>
Content-Description: 
Content-Disposition: ATTACHMENT; FILENAME=patchew.email

RnJvbTogbm8tcmVwbHlAcGF0Y2hldy5vcmcNClRvOiBmYW16aGVuZ0BhbWF6
b24uY29tDQpDYzogc3N0YWJlbGxpbmlAa2VybmVsLm9yZywsIGNhcmRvZUBj
YXJkb2UuY29tLCwgd2xAeGVuLm9yZywsIEJlcnRyYW5kLk1hcnF1aXNAYXJt
LmNvbSwsIGp1bGllbkB4ZW4ub3JnLCwgYW5kcmV3LmNvb3BlcjNAY2l0cml4
LmNvbQ0KRGF0ZTogVGh1LCAxMSBGZWIgMjAyMSAwNTowMzo1MCAtMDgwMCAo
UFNUKQ0KDQpIaSwNCg0KUGF0Y2hldyBhdXRvbWF0aWNhbGx5IHJhbiBnaXRs
YWItY2kgcGlwZWxpbmUgd2l0aCB0aGlzIHBhdGNoIChzZXJpZXMpIGFwcGxp
ZWQsIGJ1dCB0aGUgam9iIGZhaWxlZC4gTWF5YmUgdGhlcmUncyBhIGJ1ZyBp
biB0aGUgcGF0Y2hlcz8NCg0KWW91IGNhbiBmaW5kIHRoZSBsaW5rIHRvIHRo
ZSBwaXBlbGluZSBuZWFyIHRoZSBlbmQgb2YgdGhlIHJlcG9ydCBiZWxvdzoN
Cg0KVHlwZTogc2VyaWVzDQpNZXNzYWdlLWlkOiAyMDIxMDIxMDA5MjIxMS41
MzM1OS0xLXJvZ2VyLnBhdUBjaXRyaXguY29tDQpTdWJqZWN0OiBbUEFUQ0hd
IHg4Ni9pcnE6IHNpbXBsaWZ5IGxvb3AgaW4gdW5tYXBfZG9tYWluX3BpcnEN
Cg0KPT09IFRFU1QgU0NSSVBUIEJFR0lOID09PQ0KIyEvYmluL2Jhc2gNCnNs
ZWVwIDEwDQpwYXRjaGV3IGdpdGxhYi1waXBlbGluZS1jaGVjayAtcCB4ZW4t
cHJvamVjdC9wYXRjaGV3L3hlbg0KPT09IFRFU1QgU0NSSVBUIEVORCA9PT0N
Cg0Kd2FybmluZzogcmVkaXJlY3RpbmcgdG8gaHR0cHM6Ly9naXRsYWIuY29t
L3hlbi1wcm9qZWN0L3BhdGNoZXcveGVuLmdpdC8NCndhcm5pbmc6IHJlZGly
ZWN0aW5nIHRvIGh0dHBzOi8vZ2l0bGFiLmNvbS94ZW4tcHJvamVjdC9wYXRj
aGV3L3hlbi5naXQvDQpGcm9tIGh0dHBzOi8vZ2l0bGFiLmNvbS94ZW4tcHJv
amVjdC9wYXRjaGV3L3hlbg0KICogW25ldyB0YWddICAgICAgICAgICAgICAg
cGF0Y2hldy8yMDIxMDIxMDA5MjIxMS41MzM1OS0xLXJvZ2VyLnBhdUBjaXRy
aXguY29tIC0+IHBhdGNoZXcvMjAyMTAyMTAwOTIyMTEuNTMzNTktMS1yb2dl
ci5wYXVAY2l0cml4LmNvbQ0KU3dpdGNoZWQgdG8gYSBuZXcgYnJhbmNoICd0
ZXN0Jw0KNThiMzZiNzdkNSB4ODYvaXJxOiBzaW1wbGlmeSBsb29wIGluIHVu
bWFwX2RvbWFpbl9waXJxDQoNCj09PSBPVVRQVVQgQkVHSU4gPT09DQpbMjAy
MS0wMi0xMSAxMToxMDowN10gTG9va2luZyB1cCBwaXBlbGluZS4uLg0KWzIw
MjEtMDItMTEgMTE6MTA6MDddIEZvdW5kIHBpcGVsaW5lIDI1NDc2MDQzMzoN
Cg0KaHR0cHM6Ly9naXRsYWIuY29tL3hlbi1wcm9qZWN0L3BhdGNoZXcveGVu
Ly0vcGlwZWxpbmVzLzI1NDc2MDQzMw0KDQpbMjAyMS0wMi0xMSAxMToxMDow
N10gV2FpdGluZyBmb3IgcGlwZWxpbmUgdG8gZmluaXNoLi4uDQpbMjAyMS0w
Mi0xMSAxMToyNToxMl0gU3RpbGwgd2FpdGluZy4uLg0KWzIwMjEtMDItMTEg
MTE6NDA6MTddIFN0aWxsIHdhaXRpbmcuLi4NClsyMDIxLTAyLTExIDExOjU1
OjIxXSBTdGlsbCB3YWl0aW5nLi4uDQpbMjAyMS0wMi0xMSAxMjoxMDoyOV0g
U3RpbGwgd2FpdGluZy4uLg0KWzIwMjEtMDItMTEgMTI6MjU6MzVdIFN0aWxs
IHdhaXRpbmcuLi4NClsyMDIxLTAyLTExIDEyOjQwOjQwXSBTdGlsbCB3YWl0
aW5nLi4uDQpbMjAyMS0wMi0xMSAxMjo1NTo0NV0gU3RpbGwgd2FpdGluZy4u
Lg0KWzIwMjEtMDItMTEgMTM6MDM6NDhdIFBpcGVsaW5lIGZhaWxlZA0KWzIw
MjEtMDItMTEgMTM6MDM6NDhdIEpvYiAnYWxwaW5lLTMuMTItZ2NjLWRlYnVn
LWFybTY0JyBpbiBzdGFnZSAnYnVpbGQnIGlzIGZhaWxlZA0KWzIwMjEtMDIt
MTEgMTM6MDM6NDhdIEpvYiAnYWxwaW5lLTMuMTItZ2NjLWFybTY0JyBpbiBz
dGFnZSAnYnVpbGQnIGlzIGZhaWxlZA0KWzIwMjEtMDItMTEgMTM6MDM6NDhd
IEpvYiAncWVtdS1zbW9rZS14ODYtNjQtY2xhbmctcHZoJyBpbiBzdGFnZSAn
dGVzdCcgaXMgc2tpcHBlZA0KWzIwMjEtMDItMTEgMTM6MDM6NDhdIEpvYiAn
cWVtdS1zbW9rZS14ODYtNjQtZ2NjLXB2aCcgaW4gc3RhZ2UgJ3Rlc3QnIGlz
IHNraXBwZWQNClsyMDIxLTAyLTExIDEzOjAzOjQ4XSBKb2IgJ3FlbXUtc21v
a2UteDg2LTY0LWNsYW5nJyBpbiBzdGFnZSAndGVzdCcgaXMgc2tpcHBlZA0K
WzIwMjEtMDItMTEgMTM6MDM6NDhdIEpvYiAncWVtdS1zbW9rZS14ODYtNjQt
Z2NjJyBpbiBzdGFnZSAndGVzdCcgaXMgc2tpcHBlZA0KWzIwMjEtMDItMTEg
MTM6MDM6NDhdIEpvYiAncWVtdS1zbW9rZS1hcm02NC1nY2MnIGluIHN0YWdl
ICd0ZXN0JyBpcyBza2lwcGVkDQpbMjAyMS0wMi0xMSAxMzowMzo0OF0gSm9i
ICdxZW11LWFscGluZS1hcm02NC1nY2MnIGluIHN0YWdlICd0ZXN0JyBpcyBz
a2lwcGVkDQpbMjAyMS0wMi0xMSAxMzowMzo0OF0gSm9iICdidWlsZC1lYWNo
LWNvbW1pdC1nY2MnIGluIHN0YWdlICd0ZXN0JyBpcyBza2lwcGVkDQpbMjAy
MS0wMi0xMSAxMzowMzo0OF0gSm9iICdhbHBpbmUtMy4xMi1jbGFuZy1kZWJ1
ZycgaW4gc3RhZ2UgJ2J1aWxkJyBpcyBmYWlsZWQNClsyMDIxLTAyLTExIDEz
OjAzOjQ4XSBKb2IgJ2FscGluZS0zLjEyLWNsYW5nJyBpbiBzdGFnZSAnYnVp
bGQnIGlzIGZhaWxlZA0KWzIwMjEtMDItMTEgMTM6MDM6NDhdIEpvYiAnYWxw
aW5lLTMuMTItZ2NjLWRlYnVnJyBpbiBzdGFnZSAnYnVpbGQnIGlzIGZhaWxl
ZA0KWzIwMjEtMDItMTEgMTM6MDM6NDhdIEpvYiAnYWxwaW5lLTMuMTItZ2Nj
JyBpbiBzdGFnZSAnYnVpbGQnIGlzIGZhaWxlZA0KPT09IE9VVFBVVCBFTkQg
PT09DQoNClRlc3QgY29tbWFuZCBleGl0ZWQgd2l0aCBjb2RlOiAxDQo=

--8323329-1061101881-1613499727=:3234--


From xen-devel-bounces@lists.xenproject.org Tue Feb 16 18:50:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 18:50:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86029.161068 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC5QQ-0000yt-TH; Tue, 16 Feb 2021 18:50:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86029.161068; Tue, 16 Feb 2021 18:50:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC5QQ-0000ym-QL; Tue, 16 Feb 2021 18:50:18 +0000
Received: by outflank-mailman (input) for mailman id 86029;
 Tue, 16 Feb 2021 18:50:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QcaE=HS=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lC5QO-0000yg-VG
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 18:50:16 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 00d293e8-e9a1-46c8-a9fc-75ee1a041f98;
 Tue, 16 Feb 2021 18:50:16 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 1C97064D99;
 Tue, 16 Feb 2021 18:50:15 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 00d293e8-e9a1-46c8-a9fc-75ee1a041f98
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1613501415;
	bh=GgsQbSrSQW88fNbO/wEDbTzgE3oo75sp7dX9OoFQ8ao=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=V/Z95bWkzedD8WdfPR1O7ydPkR5ksfTPYTpFyrGaMCAxEabELRbgAxKvXYQX2BD+E
	 LKMDA86qkwNlRWsczw43atb3hq/78jGnxdcj1GB692B5rdtBW1Dr2Z5GRI+g+CfGum
	 kDYfzi/PRDcPDOtZ0IX9oBMSeN9Bu65bjX+25L8LnPSD49v741YExZu8niJHMo+2Wm
	 7MkYilBELQ8t3wa1YlKKdl+XoB84XA+RnWt63Vxdesu03xrxbxDSLc1Djw4PtVcVu0
	 xWFIj9TYfSyeo1opfNIcD4edhgUnCAvyEQbd5+SjAguTf37SkFEQJySxKeJPQvvwG5
	 xj8pOiD8gJSHA==
Date: Tue, 16 Feb 2021 10:50:14 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Jan Beulich <jbeulich@suse.com>
cc: Julien Grall <julien@xen.org>, Julien Grall <jgrall@amazon.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Rahul Singh <Rahul.Singh@arm.com>, xen-devel@lists.xenproject.org
Subject: Re: [PATCH] xen/iommu: arm: Don't insert an IOMMU mapping when the
 grantee and granter...
In-Reply-To: <3a827099-1d8f-826d-42ef-743d86d9ccce@suse.com>
Message-ID: <alpine.DEB.2.21.2102161049030.3234@sstabellini-ThinkPad-T480s>
References: <20210214143504.23099-1-julien@xen.org> <3a827099-1d8f-826d-42ef-743d86d9ccce@suse.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 15 Feb 2021, Jan Beulich wrote:
> On 14.02.2021 15:35, Julien Grall wrote:
> > From: Julien Grall <jgrall@amazon.com>
> > 
> > ... are the same.
> > 
> > When the IOMMU is enabled and the domain is direct mapped (e.g. Dom0),
> > Xen will insert a 1:1 mapping for each grant mapping in the P2M to
> > allow DMA.
> > 
> > This works quite well when the grantee and granter are not the same
> > because the GFN in the P2M should not be mapped. However, if they are
> > the same, we will overwrite the mapping. Worse, it will be completely
> > removed when the grant is unmapped.
> > 
> > As the domain is direct mapped, a 1:1 mapping should always be present
> > in the P2M. This is not 100% guaranteed if the domain decides to mess
> > with the P2M. However, such a domain would already be in trouble, as
> > the page would soon be freed (when the last reference is dropped).
> > 
> > Add an additional check in arm_iommu_{,un}map_page() to check whether
> > the page belongs to the domain. If it belongs to the domain, then
> > ignore the request.
> 
> Doesn't this want / need solving in grant_table.c itself, as it also
> affects PV on x86? Or alternatively in gnttab_need_iommu_mapping(),
> handing the macro the MFN alongside the domain? No matter which one
> was chosen, it could at the same time avoid the expensive mapkind()
> invocation in this case.

Not knowing the x86 side I don't have an opinion on the best location
for the check. But I wanted to say for the records that the patch has
already been tested successfully and looks good to me.
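The ownership check the patch describes can be modeled as a toy sketch (all names hypothetical, not the Xen implementation): mapping a granted page the direct-mapped domain already owns is ignored, so unmapping the grant can never tear down the 1:1 mapping.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model: a direct-mapped domain owns pages [0, nr_pages), which
 * are 1:1 mapped in the IOMMU.  Grant mappings of foreign pages get
 * an extra mapping; grant mappings of the domain's own pages must be
 * a no-op, or removing the grant would remove the direct mapping. */
struct domain {
    unsigned long nr_pages;   /* pages 0..nr_pages-1 are 1:1 mapped */
    bool extra_mapped[64];    /* extra IOMMU mappings (foreign pages) */
};

static bool page_owned_by(const struct domain *d, unsigned long mfn)
{
    return mfn < d->nr_pages;
}

static void iommu_map_page(struct domain *d, unsigned long mfn)
{
    if (page_owned_by(d, mfn))
        return;               /* already 1:1 mapped: ignore the request */
    d->extra_mapped[mfn] = true;
}

static void iommu_unmap_page(struct domain *d, unsigned long mfn)
{
    if (page_owned_by(d, mfn))
        return;               /* never remove the direct mapping */
    d->extra_mapped[mfn] = false;
}

static bool dma_allowed(const struct domain *d, unsigned long mfn)
{
    return page_owned_by(d, mfn) || d->extra_mapped[mfn];
}
```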


From xen-devel-bounces@lists.xenproject.org Tue Feb 16 18:52:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 18:52:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86030.161081 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC5SV-00017S-BP; Tue, 16 Feb 2021 18:52:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86030.161081; Tue, 16 Feb 2021 18:52:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC5SV-00017L-71; Tue, 16 Feb 2021 18:52:27 +0000
Received: by outflank-mailman (input) for mailman id 86030;
 Tue, 16 Feb 2021 18:52:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mdBD=HS=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1lC5SU-00017F-GY
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 18:52:26 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.8.45]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e61aa7c9-daef-48e1-a942-7c197774199c;
 Tue, 16 Feb 2021 18:52:22 +0000 (UTC)
Received: from MR2P264CA0159.FRAP264.PROD.OUTLOOK.COM (2603:10a6:501:1::22) by
 DBAPR08MB5688.eurprd08.prod.outlook.com (2603:10a6:10:1a0::10) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3846.28; Tue, 16 Feb 2021 18:52:20 +0000
Received: from VE1EUR03FT053.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:501:1:cafe::7f) by MR2P264CA0159.outlook.office365.com
 (2603:10a6:501:1::22) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3846.38 via Frontend
 Transport; Tue, 16 Feb 2021 18:52:20 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT053.mail.protection.outlook.com (10.152.19.198) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3846.25 via Frontend Transport; Tue, 16 Feb 2021 18:52:19 +0000
Received: ("Tessian outbound e7cb4a6f0881:v71");
 Tue, 16 Feb 2021 18:52:18 +0000
Received: from cb8c037f6224.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 B6E9DABD-0376-438A-B165-DD5EC9082A4E.1; 
 Tue, 16 Feb 2021 18:52:13 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id cb8c037f6224.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 16 Feb 2021 18:52:13 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com (2603:10a6:10:49::10)
 by DB6PR0802MB2374.eurprd08.prod.outlook.com (2603:10a6:4:8a::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3846.36; Tue, 16 Feb
 2021 18:52:12 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::f5c1:9694:9263:d90d]) by DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::f5c1:9694:9263:d90d%2]) with mapi id 15.20.3846.043; Tue, 16 Feb 2021
 18:52:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e61aa7c9-daef-48e1-a942-7c197774199c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0NsScOyP/RjOMccIdFQRHi2n/MAzJVlLB32h7r44Nd8=;
 b=iNBRb4dCNkC/aqG/uV0NiUp7YgGWdqZoUrPdXrl0j4VTsgkZiJ1BNDcDQ3ElifniEU5CqGBmPzJElD9Pk1DEewTYmds0KcW7QIQptpgT+/MAteUJ2mirreDZsQLGGJGtS8HaPuOY64bILewxHXYNrgaC1BA2tftseg4AUieY9js=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 1188f4d710f32df5
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=nXZ+IHgEs58YwGZBm9ANnWUYgVknWz8R/fiEvtqAaCNeC1apNOp+YY7i2+N7f8e9i1iNvj1vl/urr9m3FbxnSblzVHo5yIBjNgsCDX48JpzqYLt6QxvCPMFyn286FtfnIXR94RedTqPb6CY9yQxFe9OyrskQBQgDeJ8CUPX2t6sKI9G4IedWEhSa01uFnqmnYkfRYJgkngBKT5+FcPL09eMs0x5MYzXbP33m2+fgEX7Qmo1L+JJXD3SQVkMKtr3LgDukMp9EqhBG2ft4OIq8r9Q0uMCXMjOG3Yjle8CFd7ua3/nhdr9KrJ4f3soIM1Ed3FCJTgzpBP0POSAgHCWhoA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0NsScOyP/RjOMccIdFQRHi2n/MAzJVlLB32h7r44Nd8=;
 b=fL2y8/v1oz3JV5KLaoL6JnaQyxxSLWOUp692miXzHBAEYrtPgo9Y47jFKhufbB8YGSY1XoRFYpW23r7P2oJ2ybkcioREMxzRdwiLaoblbVn/EwhCw4jXd7dXwPVpWU0y4OXNnqfR+kSrH26ZzYcAAWUNiCwx59o4CGeISxPEKh/H6aklLmGLN0CgDPswWGlDu1TbsmTUBe8klV+gB6fqKz5SRXvtQIi7kfyKIRpRgfiWE8onJMKJjFVJjdB3yoFqfpwyb72jYb/iMBmIJ5jUTmrVe0vhYMiXfdJajII6PWYykPQoG6nRtSP68Brq/DtNxP4iMwZFtd7ZuFfNKDk6tg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0NsScOyP/RjOMccIdFQRHi2n/MAzJVlLB32h7r44Nd8=;
 b=iNBRb4dCNkC/aqG/uV0NiUp7YgGWdqZoUrPdXrl0j4VTsgkZiJ1BNDcDQ3ElifniEU5CqGBmPzJElD9Pk1DEewTYmds0KcW7QIQptpgT+/MAteUJ2mirreDZsQLGGJGtS8HaPuOY64bILewxHXYNrgaC1BA2tftseg4AUieY9js=
From: Rahul Singh <Rahul.Singh@arm.com>
To: Julien Grall <julien@xen.org>
CC: xen-devel <xen-devel@lists.xenproject.org>, Julien Grall
	<jgrall@amazon.com>, Stefano Stabellini <sstabellini@kernel.org>, Volodymyr
 Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/iommu: arm: Don't insert an IOMMU mapping when the
 grantee and granter...
Thread-Topic: [PATCH] xen/iommu: arm: Don't insert an IOMMU mapping when the
 grantee and granter...
Thread-Index: AQHXAt6oFaIGN7XEykaD6M/rWzXvuKpbI+sA
Date: Tue, 16 Feb 2021 18:52:11 +0000
Message-ID: <E5D9902F-7105-4D6F-8CD4-F774EAE0821F@arm.com>
References: <20210214143504.23099-1-julien@xen.org>
In-Reply-To: <20210214143504.23099-1-julien@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [80.1.41.211]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 8af9af91-adac-4bfc-da1d-08d8d2abfcc8
x-ms-traffictypediagnostic: DB6PR0802MB2374:|DBAPR08MB5688:
X-Microsoft-Antispam-PRVS:
	<DBAPR08MB5688A0238013B6504C8F5520FC879@DBAPR08MB5688.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 RcbLZcAsCZuOQ3j/+8FY0iZ8rtkhvCjJUsZIG71tc4ifQjlpTzlS/QOxBF1FF1nS0C3YgpV5CPqLYG1tCyKqvns/YpMfNphg/ojM162SoCJPkza0re0rO0BFuYG4KJYbC6kgjcMctRQ9PkkWMXh4ZZtWIRdNOgMEkqWTXo6oIIMyfTCCVMqc9h7DWA2HYxfiHgVn+SOSMANsHPfnO6bPYv/d253jW1vUZZRzBWpvEStRAfwi4mGx3zX84o4eTX7R3j26bPuL2bCSm9+un4GSNcBP+L4IsafG7twKT/iybbk65t3r9dlXBvJwkphYK9UbgolAM+yCDoLs9ZIJ/cpyDdbWIo2e7SWrnuGtgC/bePI+qGVDS+SWGqADJ8OSmTOnNVM5nS3uzdXNZFWXlqhDwwP6Myu0vFwrzDrLs1VTf23h+Ar+dh1cbqIc4meAmkAhwWaqB6vajzL1YJ83wepiuZz4oz3tDkubMnZq4r6OKMnPmf3jIthVFtXGfZIZxv269XQgU5j7iCy6/5nf9BzcHX/vm58oq1ElJDfDkNAeKqEAQ8QgtE2Omv8oxMFLjsXIy5Uqf6GV+kxdsnveEMH5T1hdiQB0wboFnSqLMm1cXTM=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3500.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(346002)(366004)(376002)(136003)(396003)(39830400003)(64756008)(66476007)(66556008)(66946007)(66446008)(2906002)(71200400001)(76116006)(6506007)(91956017)(83380400001)(478600001)(33656002)(316002)(2616005)(26005)(5660300002)(6916009)(6512007)(186003)(53546011)(54906003)(6486002)(86362001)(8936002)(8676002)(4326008)(36756003)(6314003)(45980500001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 =?us-ascii?Q?c0g4GesZk4lQPbgU4Zl3Afd+gt2Txm8oitofNy8HsaV2Ozz5JmDlBJs4dIfz?=
 =?us-ascii?Q?wxc+usuceW3un7fB0cPoTXsZsi1cnQnI3F4pbdyYCFxBXR1neOz7ba/ZKpBu?=
 =?us-ascii?Q?5xBhRKvs9eIHAQ6qbRuawyiPObNgmgCcl1ts5WzlTTTAm2Cm1uL7XSCdTp7T?=
 =?us-ascii?Q?CjlQyQMnc0yTCDU6Gsbo7udDcPTbpHQfRxWqVfW+5PGR8XfsST+0ODhm90JZ?=
 =?us-ascii?Q?Np6QJp6uf/ku6P50wAiYHmDDakH5ZObX9SiU+wzn3z0FLuNrE632BgND+DYd?=
 =?us-ascii?Q?4ygLAlA/4HynTTMh4EtywqDJsHdTy+3IRzgAw77DgOs258AvjljP0G81RPND?=
 =?us-ascii?Q?kkddHcT8xMA2eQmpoPx2qeB64Sl1u05q1iFNvSX0WF/U/tSnjsqP9hPniHAC?=
 =?us-ascii?Q?cxhWDjRUq4S4o+HdkCzhxxDcTo58jHWx4xHso5UFHKVfcOvt0UFv6U9dgnag?=
 =?us-ascii?Q?B8zqp8H95hYHZ1bWBmro4U2PA0w4rzAxFwMXDDFka7h60WUdgEDMflsTWInM?=
 =?us-ascii?Q?rsi/IIFTDiOy5MA2P0+ByOI7wsUT2mVbgnPoyISPc2cGWQeFKB3Usn+0D3Qt?=
 =?us-ascii?Q?L0k06d6+Iipr8eXcFDjdsB6Mmq4c35Tl4cqMnsnJbzFYaYIDpDvtZUV1FYIE?=
 =?us-ascii?Q?dLqDOvOAbUxGJlDr//nNKkCsFH+vnqCVTWWllD1DpzDHcr5fxrZCciGMmxCW?=
 =?us-ascii?Q?rfoVOOQureRYtQLrPbnBxiRgbX6+yPgCy0U/73dJOC/5fujSvqc/3VrKPMZd?=
 =?us-ascii?Q?GvhdSl+F5cQARyHmkiJ12bOA/DnMz1fonr9iinieYDIbtVLbou/dkO1a6BUS?=
 =?us-ascii?Q?C0d5JEaRizEHsdm4tlA1Z6dR+aGEBq5tSwqtWNW62W+Np9x/YVaGLtyjbM4+?=
 =?us-ascii?Q?r/8ItLFC5snEVBod7tH5M4FURMOgGeFdPPe/KUNXHUhVt81Cr5knyo3Uyzn0?=
 =?us-ascii?Q?fvLnti4gH9TuwgfimcUo4JBAhaLmWZJWHFcYXTiTrv+JnXrWZM3f6tAxQ4Ew?=
 =?us-ascii?Q?b9aqboXUqJH7tUh0eke7bxY8rTAlizPYJ7Z6Wv0/oXWkRdQZV89DCmQYzRq5?=
 =?us-ascii?Q?17RbBKoUzSnBN78defveYh8YtbLr4Ae7y9UTXFfJux3JC8ZmvuX/FKs5NspV?=
 =?us-ascii?Q?xW+VpIotrES9k4++gV0PT1Inq0DWKNfEXMeaGwaKQDIFq7etUMlUmF3L9DSI?=
 =?us-ascii?Q?Nhg3+7NYmFZmNyGJMelouszXkD6HfFfj7qFgXwJx0d1mDq7blYoIlvBto6H3?=
 =?us-ascii?Q?B26DshNSmMFxjDVSbPOvZL5UrzXwohmjmG5KzvdENXBJz8PkKnRuFhnR741q?=
 =?us-ascii?Q?xnyNUP6eu892ZZV2LhFqvS8a?=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <9DF969F0A8356449816588662CBBB96B@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0802MB2374
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT053.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	2414cea8-2454-4003-2fd4-08d8d2abf859
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Feb 2021 18:52:19.3907
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 8af9af91-adac-4bfc-da1d-08d8d2abfcc8
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT053.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR08MB5688

Hello Julien,

> On 14 Feb 2021, at 2:35 pm, Julien Grall <julien@xen.org> wrote:
>
> From: Julien Grall <jgrall@amazon.com>
>
> ... are the same.
>
> When the IOMMU is enabled and the domain is direct mapped (e.g. Dom0),
> Xen will insert a 1:1 mapping for each grant mapping in the P2M to
> allow DMA.
>
> This works quite well when the grantee and granter and

s/and/are/

> not the same
> because the GFN in the P2M should not be mapped. However, if they are
> the same, we will overwrite the mapping. Worse, it will be completely
> removed when the grant is unmapped.
>
> As the domain is direct mapped, a 1:1 mapping should always be present
> in the P2M. This is not 100% guaranteed if the domain decides to mess
> with the P2M. However, such a domain would already be in trouble, as
> the page would soon be freed (when the last reference is dropped).
>
> Add an additional check in arm_iommu_{,un}map_page() to check whether
> the page belongs to the domain. If it belongs to the domain, ignore
> the request.
>
> Note that we don't need to hold an extra reference on the page because
> the grant code will already do it for us.
>
> Reported-by: Rahul Singh <Rahul.Singh@arm.com>
> Fixes: 552710b38863 ("xen/arm: grant: Add another entry to map MFN 1:1 in dom0 p2m")
> Signed-off-by: Julien Grall <jgrall@amazon.com>

With the typo fixed.

Reviewed-by: Rahul Singh <rahul.singh@arm.com>
Tested-by: Rahul Singh <rahul.singh@arm.com>

Regards,
Rahul
>
> ---
>
> This is a candidate for 4.15. Without the patch, it would not be
> possible to get the frontend and backend in the same domain (useful when
> trying to map the guest block device in dom0).
> ---
> xen/drivers/passthrough/arm/iommu_helpers.c | 18 ++++++++++++++++++
> 1 file changed, 18 insertions(+)
>
> diff --git a/xen/drivers/passthrough/arm/iommu_helpers.c b/xen/drivers/passthrough/arm/iommu_helpers.c
> index a36e2b8e6c42..35a813308b8c 100644
> --- a/xen/drivers/passthrough/arm/iommu_helpers.c
> +++ b/xen/drivers/passthrough/arm/iommu_helpers.c
> @@ -53,6 +53,17 @@ int __must_check arm_iommu_map_page(struct domain *d, dfn_t dfn, mfn_t mfn,
>
>     t = (flags & IOMMUF_writable) ? p2m_iommu_map_rw : p2m_iommu_map_ro;
>
> +    /*
> +     * The granter and grantee can be the same domain. In normal
> +     * condition, the 1:1 mapping should already present in the P2M so
> +     * we want to avoid overwriting it.
> +     *
> +     * Note that there is no guarantee the 1:1 mapping will be present
> +     * if the domain decides to mess with the P2M.
> +     */
> +    if ( page_get_owner(mfn_to_page(mfn)) == d )
> +        return 0;
> +
>     /*
>      * The function guest_physmap_add_entry replaces the current mapping
>      * if there is already one...
> @@ -71,6 +82,13 @@ int __must_check arm_iommu_unmap_page(struct domain *d, dfn_t dfn,
>     if ( !is_domain_direct_mapped(d) )
>         return -EINVAL;
>
> +    /*
> +     * When the granter and grantee are the same, we didn't insert a
> +     * mapping. So ignore the unmap request.
> +     */
> +    if ( page_get_owner(mfn_to_page(_mfn(dfn_x(dfn)))) == d )
> +        return 0;
> +
>     return guest_physmap_remove_page(d, _gfn(dfn_x(dfn)), _mfn(dfn_x(dfn)), 0);
> }
>
> --
> 2.17.1
>
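The guard added by the patch above can be illustrated outside Xen with a minimal sketch. The struct definitions below are hypothetical, simplified stand-ins for Xen's `struct domain` and `struct page_info`, and `map_page_sketch()` only mimics the early-return logic of `arm_iommu_map_page()`, not the real P2M code:

```c
#include <stddef.h>

/* Hypothetical stand-ins for Xen's struct domain / struct page_info. */
struct domain { int id; };
struct page_info { struct domain *owner; };

/* Stand-in for Xen's page_get_owner(). */
static struct domain *page_get_owner(struct page_info *pg)
{
    return pg->owner;
}

/* Returns 1 if a new P2M entry would be written, 0 if the request is
 * ignored because the granter and grantee are the same domain. */
static int map_page_sketch(struct domain *d, struct page_info *pg)
{
    /* In a direct-mapped domain the page already has a 1:1 mapping in
     * the P2M; overwriting it (and later removing it on unmap) would
     * break the domain's own access to the page. */
    if (page_get_owner(pg) == d)
        return 0;
    /* ... guest_physmap_add_entry(...) would run here ... */
    return 1;
}
```

With this shape, a self-mapping request becomes a no-op while a mapping of a foreign page proceeds, which is exactly why the matching check in the unmap path must also return early.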



From xen-devel-bounces@lists.xenproject.org Tue Feb 16 19:18:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 19:18:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86035.161093 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC5rD-00035n-IL; Tue, 16 Feb 2021 19:17:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86035.161093; Tue, 16 Feb 2021 19:17:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC5rD-00035g-FF; Tue, 16 Feb 2021 19:17:59 +0000
Received: by outflank-mailman (input) for mailman id 86035;
 Tue, 16 Feb 2021 19:17:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=g3ND=HS=vmware.com=namit@srs-us1.protection.inumbo.net>)
 id 1lC5rC-00035b-5T
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 19:17:58 +0000
Received: from NAM11-BN8-obe.outbound.protection.outlook.com (unknown
 [40.107.236.72]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3e4c36da-d8a4-4f8a-bd03-0e2ef1c1d8b9;
 Tue, 16 Feb 2021 19:17:57 +0000 (UTC)
Received: from BYAPR05MB4776.namprd05.prod.outlook.com (2603:10b6:a03:4a::18)
 by BY3PR05MB8083.namprd05.prod.outlook.com (2603:10b6:a03:36e::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3868.11; Tue, 16 Feb
 2021 19:17:55 +0000
Received: from BYAPR05MB4776.namprd05.prod.outlook.com
 ([fe80::ddba:e1e9:fde7:3b31]) by BYAPR05MB4776.namprd05.prod.outlook.com
 ([fe80::ddba:e1e9:fde7:3b31%3]) with mapi id 15.20.3846.039; Tue, 16 Feb 2021
 19:17:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3e4c36da-d8a4-4f8a-bd03-0e2ef1c1d8b9
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=vmware.com; dmarc=pass action=none header.from=vmware.com;
 dkim=pass header.d=vmware.com; arc=none
From: Nadav Amit <namit@vmware.com>
To: Peter Zijlstra <peterz@infradead.org>
CC: Thomas Gleixner <tglx@linutronix.de>, LKML <linux-kernel@vger.kernel.org>,
	Andy Lutomirski <luto@kernel.org>, Dave Hansen <dave.hansen@linux.intel.com>,
	"K. Y. Srinivasan" <kys@microsoft.com>, Haiyang Zhang
	<haiyangz@microsoft.com>, Stephen Hemminger <sthemmin@microsoft.com>, Sasha
 Levin <sashal@kernel.org>, Ingo Molnar <mingo@redhat.com>, Borislav Petkov
	<bp@alien8.de>, X86 ML <x86@kernel.org>, Juergen Gross <jgross@suse.com>,
	Paolo Bonzini <pbonzini@redhat.com>, Boris Ostrovsky
	<boris.ostrovsky@oracle.com>, "linux-hyperv@vger.kernel.org"
	<linux-hyperv@vger.kernel.org>, Linux Virtualization
	<virtualization@lists.linux-foundation.org>, KVM <kvm@vger.kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Michael
 Kelley <mikelley@microsoft.com>
Subject: Re: [PATCH v5 4/8] x86/mm/tlb: Flush remote and local TLBs
 concurrently
Thread-Topic: [PATCH v5 4/8] x86/mm/tlb: Flush remote and local TLBs
 concurrently
Thread-Index: AQHXBFzuDk/+97dVSEa5i3JTefGxeKpbKCAA
Date: Tue, 16 Feb 2021 19:17:54 +0000
Message-ID: <AFA552C0-88B1-4D58-B3C5-1A571DBF0EA6@vmware.com>
References: <20210209221653.614098-1-namit@vmware.com>
 <20210209221653.614098-5-namit@vmware.com>
 <YCu2MQFdV4JTrUQb@hirez.programming.kicks-ass.net>
In-Reply-To: <YCu2MQFdV4JTrUQb@hirez.programming.kicks-ass.net>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
authentication-results: infradead.org; dkim=none (message not signed)
 header.d=none;infradead.org; dmarc=none action=none header.from=vmware.com;
x-originating-ip: [24.6.216.183]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 8d0a60f0-c67d-4e85-3a49-08d8d2af9010
x-ms-traffictypediagnostic: BY3PR05MB8083:
x-microsoft-antispam-prvs:
 <BY3PR05MB8083829349529D99571540C2D0879@BY3PR05MB8083.namprd05.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:5516;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <9390762EDE2D8D4786E55865DEDAE795@namprd05.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: vmware.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR05MB4776.namprd05.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8d0a60f0-c67d-4e85-3a49-08d8d2af9010
X-MS-Exchange-CrossTenant-originalarrivaltime: 16 Feb 2021 19:17:54.9690
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b39138ca-3cee-4b4a-a4d6-cd83d9dd62f0
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: SKPOxAghkr3M152dfPKEi0yLDWpx04TQBHLmVJW8qpd/dObjW4As10/cLm99NXak6CNWCIRfDRVTBzg4MqdH1A==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY3PR05MB8083

> On Feb 16, 2021, at 4:10 AM, Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Tue, Feb 09, 2021 at 02:16:49PM -0800, Nadav Amit wrote:
>> @@ -816,8 +821,8 @@ STATIC_NOPV void native_flush_tlb_others(const struct cpumask *cpumask,
>> 	 * doing a speculative memory access.
>> 	 */
>> 	if (info->freed_tables) {
>> -		smp_call_function_many(cpumask, flush_tlb_func,
>> -			       (void *)info, 1);
>> +		on_each_cpu_cond_mask(NULL, flush_tlb_func, (void *)info, true,
>> +				      cpumask);
>> 	} else {
>> 		/*
>> 		 * Although we could have used on_each_cpu_cond_mask(),
>> @@ -844,14 +849,15 @@ STATIC_NOPV void native_flush_tlb_others(const struct cpumask *cpumask,
>> 			if (tlb_is_not_lazy(cpu))
>> 				__cpumask_set_cpu(cpu, cond_cpumask);
>> 		}
>> -		smp_call_function_many(cond_cpumask, flush_tlb_func, (void *)info, 1);
>> +		on_each_cpu_cond_mask(NULL, flush_tlb_func, (void *)info, true,
>> +				      cpumask);
>> 	}
>> }
>
> Surely on_each_cpu_mask() is more appropriate? There the compiler can do
> the NULL propagation because it's in the same TU.
>
> --- a/arch/x86/mm/tlb.c
> +++ b/arch/x86/mm/tlb.c
> @@ -821,8 +821,7 @@ STATIC_NOPV void native_flush_tlb_multi(
> 	 * doing a speculative memory access.
> 	 */
> 	if (info->freed_tables) {
> -		on_each_cpu_cond_mask(NULL, flush_tlb_func, (void *)info, true,
> -				      cpumask);
> +		on_each_cpu_mask(cpumask, flush_tlb_func, (void *)info, true);
> 	} else {
> 		/*
> 		 * Although we could have used on_each_cpu_cond_mask(),
> @@ -849,8 +848,7 @@ STATIC_NOPV void native_flush_tlb_multi(
> 			if (tlb_is_not_lazy(cpu))
> 				__cpumask_set_cpu(cpu, cond_cpumask);
> 		}
> -		on_each_cpu_cond_mask(NULL, flush_tlb_func, (void *)info, true,
> -				      cpumask);
> +		on_each_cpu_mask(cpumask, flush_tlb_func, (void *)info, true);
> 	}
> }

Indeed, and there is actually an additional bug: I used cpumask in the
second on_each_cpu_cond_mask() instead of cond_cpumask.
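The relationship between the two helpers Peter points at can be sketched in plain userspace C. This is a toy model under stated assumptions: an `unsigned` bitmask stands in for `struct cpumask`, everything runs on the calling thread, and the `_sketch` suffixes mark the functions as hypothetical rather than the kernel's actual implementations:

```c
#include <stddef.h>

#define NR_CPUS 8

typedef int  (*cond_fn)(int cpu, void *info);
typedef void (*work_fn)(void *info);

/* Call counter so tests can observe how many "CPUs" ran the work. */
static int calls;

static void count_call(void *info) { (void)info; calls++; }

/* Example condition: only odd-numbered CPUs run the work. */
static int odd_cpu(int cpu, void *info) { (void)info; return cpu & 1; }

/* Model of on_each_cpu_cond_mask(): run func on every CPU in mask for
 * which cond passes; a NULL cond means "unconditionally". */
static void on_each_cpu_cond_mask_sketch(cond_fn cond, work_fn func,
                                         void *info, unsigned mask)
{
    for (int cpu = 0; cpu < NR_CPUS; cpu++)
        if ((mask & (1u << cpu)) && (!cond || cond(cpu, info)))
            func(info);
}

/* on_each_cpu_mask() is exactly the cond == NULL special case.  With
 * both functions in the same translation unit, the compiler can see
 * the NULL and fold the per-CPU cond check away entirely. */
static void on_each_cpu_mask_sketch(unsigned mask, work_fn func, void *info)
{
    on_each_cpu_cond_mask_sketch(NULL, func, info, mask);
}
```

The model also makes the admitted bug easy to see: passing the wrong mask (`cpumask` instead of `cond_cpumask`) silently changes which CPUs receive the flush, with no compile-time diagnostic.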



From xen-devel-bounces@lists.xenproject.org Tue Feb 16 20:33:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 20:33:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86043.161111 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC72J-0001lp-76; Tue, 16 Feb 2021 20:33:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86043.161111; Tue, 16 Feb 2021 20:33:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC72J-0001li-3q; Tue, 16 Feb 2021 20:33:31 +0000
Received: by outflank-mailman (input) for mailman id 86043;
 Tue, 16 Feb 2021 20:33:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cMbj=HS=intel.com=lkp@srs-us1.protection.inumbo.net>)
 id 1lC72H-0001ld-6H
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 20:33:29 +0000
Received: from mga04.intel.com (unknown [192.55.52.120])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b9e81a9d-c4e1-4997-95c1-2bea7a4ec69e;
 Tue, 16 Feb 2021 20:33:25 +0000 (UTC)
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
 by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 16 Feb 2021 12:33:23 -0800
Received: from lkp-server02.sh.intel.com (HELO cd560a204411) ([10.239.97.151])
 by fmsmga002.fm.intel.com with ESMTP; 16 Feb 2021 12:33:21 -0800
Received: from kbuild by cd560a204411 with local (Exim 4.92)
 (envelope-from <lkp@intel.com>)
 id 1lC728-0008OS-Rz; Tue, 16 Feb 2021 20:33:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b9e81a9d-c4e1-4997-95c1-2bea7a4ec69e
X-IronPort-AV: E=McAfee;i="6000,8403,9897"; a="180451123"
X-IronPort-AV: E=Sophos;i="5.81,184,1610438400"; 
   d="gz'50?scan'50,208,50";a="180451123"
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.81,184,1610438400"; 
   d="gz'50?scan'50,208,50";a="426603832"
Date: Wed, 17 Feb 2021 04:32:23 +0800
From: kernel test robot <lkp@intel.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: kbuild-all@lists.01.org, xen-devel@lists.xenproject.org,
	Juergen Gross <jgross@suse.com>
Subject: [xen-tip:linux-next 7/9] drivers/net/xen-netback/netback.c:1334:17:
 warning: variable 'ret' set but not used
Message-ID: <202102170412.vJgSEw1F-lkp@intel.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="zYM0uCDKw75PZbzx"
Content-Disposition: inline
User-Agent: Mutt/1.10.1 (2018-07-13)


--zYM0uCDKw75PZbzx
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

tree:   https://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git linux-next
head:   871997bc9e423f05c7da7c9178e62dde5df2a7f8
commit: 3194a1746e8aabe86075fd3c5e7cf1f4632d7f16 [7/9] xen-netback: don't "handle" error by BUG()
config: arm64-allyesconfig (attached as .config)
compiler: aarch64-linux-gcc (GCC) 9.3.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git/commit/?id=3194a1746e8aabe86075fd3c5e7cf1f4632d7f16
        git remote add xen-tip https://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git
        git fetch --no-tags xen-tip linux-next
        git checkout 3194a1746e8aabe86075fd3c5e7cf1f4632d7f16
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=arm64 

If you fix the issue, kindly add the following tag as appropriate:
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

   drivers/net/xen-netback/netback.c: In function 'xenvif_tx_action':
>> drivers/net/xen-netback/netback.c:1334:17: warning: variable 'ret' set but not used [-Wunused-but-set-variable]
    1334 |  int work_done, ret;
         |                 ^~~


vim +/ret +1334 drivers/net/xen-netback/netback.c

3e2234b3149f66 Zoltan Kiss  2014-03-06  1328  
f53c3fe8dad725 Zoltan Kiss  2014-03-06  1329  
f942dc2552b8bf Ian Campbell 2011-03-15  1330  /* Called after netfront has transmitted */
e9ce7cb6b10740 Wei Liu      2014-06-04  1331  int xenvif_tx_action(struct xenvif_queue *queue, int budget)
f942dc2552b8bf Ian Campbell 2011-03-15  1332  {
bdab82759b8e36 Zoltan Kiss  2014-04-02  1333  	unsigned nr_mops, nr_cops = 0;
f53c3fe8dad725 Zoltan Kiss  2014-03-06 @1334  	int work_done, ret;
b3f980bd827e6e Wei Liu      2013-08-26  1335  
e9ce7cb6b10740 Wei Liu      2014-06-04  1336  	if (unlikely(!tx_work_todo(queue)))
b3f980bd827e6e Wei Liu      2013-08-26  1337  		return 0;
f942dc2552b8bf Ian Campbell 2011-03-15  1338  
e9ce7cb6b10740 Wei Liu      2014-06-04  1339  	xenvif_tx_build_gops(queue, budget, &nr_cops, &nr_mops);
f942dc2552b8bf Ian Campbell 2011-03-15  1340  
bdab82759b8e36 Zoltan Kiss  2014-04-02  1341  	if (nr_cops == 0)
b3f980bd827e6e Wei Liu      2013-08-26  1342  		return 0;
f942dc2552b8bf Ian Campbell 2011-03-15  1343  
e9ce7cb6b10740 Wei Liu      2014-06-04  1344  	gnttab_batch_copy(queue->tx_copy_ops, nr_cops);
3194a1746e8aab Jan Beulich  2021-02-15  1345  	if (nr_mops != 0)
e9ce7cb6b10740 Wei Liu      2014-06-04  1346  		ret = gnttab_map_refs(queue->tx_map_ops,
f53c3fe8dad725 Zoltan Kiss  2014-03-06  1347  				      NULL,
e9ce7cb6b10740 Wei Liu      2014-06-04  1348  				      queue->pages_to_map,
9074ce24932186 Zoltan Kiss  2014-04-02  1349  				      nr_mops);
f942dc2552b8bf Ian Campbell 2011-03-15  1350  
e9ce7cb6b10740 Wei Liu      2014-06-04  1351  	work_done = xenvif_tx_submit(queue);
b3f980bd827e6e Wei Liu      2013-08-26  1352  
b3f980bd827e6e Wei Liu      2013-08-26  1353  	return work_done;
f942dc2552b8bf Ian Campbell 2011-03-15  1354  }
f942dc2552b8bf Ian Campbell 2011-03-15  1355  

:::::: The code at line 1334 was first introduced by commit
:::::: f53c3fe8dad725b014e9c7682720d8e3e2a8a5b3 xen-netback: Introduce TX grant mapping

:::::: TO: Zoltan Kiss <zoltan.kiss@citrix.com>
:::::: CC: David S. Miller <davem@davemloft.net>
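The mechanics of the warning are straightforward: once commit 3194a1746e8a dropped the BUG()-style handling, `ret` in `xenvif_tx_action()` is assigned but never read, which is precisely what `-Wunused-but-set-variable` flags. A hedged sketch of one remedy, with `map_refs_stub` and `tx_action_sketch` as hypothetical stand-ins for `gnttab_map_refs()` and `xenvif_tx_action()` (this is not the actual upstream fix):

```c
#include <stddef.h>

/* Hypothetical stand-in for gnttab_map_refs(): fails when asked to
 * map more refs than this toy model supports. */
static int map_refs_stub(int nr_mops)
{
    return nr_mops > 4 ? -1 : 0;
}

/* The warned-about shape assigns 'ret' without ever reading it.
 * Consuming the value, e.g. by bailing out on error instead of
 * crashing, both silences the warning and keeps the error handled. */
static int tx_action_sketch(int nr_mops)
{
    int ret = 0;

    if (nr_mops != 0)
        ret = map_refs_stub(nr_mops);
    if (ret)            /* reading 'ret' avoids -Wunused-but-set-variable */
        return 0;       /* error path: no work submitted */
    return nr_mops;     /* stand-in for work_done from xenvif_tx_submit() */
}
```

The alternative remedy, simply deleting `ret` and ignoring the mapping result, would be what the robot's warning nudges the author to decide explicitly.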

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

--zYM0uCDKw75PZbzx
Content-Type: application/gzip
Content-Disposition: attachment; filename=".config.gz"
Content-Transfer-Encoding: base64

tfVNYYlPenNkIqu1tLqiS2UlJaaFj9RZ8nDThlrbMU90MPK2wc/QyM2Vj+rg07a2hBxpdZm6
BoFxeILWE73d+eTB2DsbFaHmEPN7E6P09dfzhz/fnkADCJwI3GkLXG/WirDPykMB7zntxwXj
YY9LDWZeRmJ68IbzdzHHWvhIYqqkY3kGCgz5WaKsioCvTbRlHThXnt+UqjQdK7BDZmTcZPY9
5gCrzU2MkxxOqmdFKU8N6eornv/48vofS1+Ueflz683y/OBZSaNnwTEzpJ+BT88c9FtzLqW0
AztIKUddjFqo8/jaCTFT5p4ArP8enesTuBnRduvwENWv3EcOHCVYY9PUgm2DGTOOoSeMDzn1
0rM9PiK6+E1EmdfarZG54Mn/kkTaw5YUib8GMMOAO3IkmH5W1qQwY6F9IPPSLNZ3XT01ynd6
lObZc0sNq+2rM1L8h1uPUUayBFpbN26sI90TVPvolH9eLnZr1LjTHOtTGPHhp2tdZaCQZ+4A
Z+L2gTnHDqYXf7bOL9hghTE0yT2pADs7xMzOoVG1jE0Mx8gSr+q+ZGcxQfbGDkAw1CB/DnYj
9n5Id8qvBqbjlqqZ1V7TA2zXmTx7oxizrt9PersM2WOnGwnz51S3Ipzi/1kUsDn7Pyjsz3/7
9H+//A2Hel9X1WxB4P3+nLjVQcJEhyrnH+iwwaWxTunNJwr+89/+7y9/fiR55CyC6ljWz719
z2SyaP2W1CbniPT4gGu84tZ6tqBVlWKN//SQNg2+tDROYuZtWzIak3Rv1yZJotZm/PD9lbHa
R+xJw5EWJAbTSmUb/j4VannLQC0ABVaR4dXzBQkIWrG9v5CLwlE0kMY3hMpMr60dWlOkMbBC
HBKoJctRp1ZSOmj56Ye8YLHYMQUyFlpfywl0geBf7uc12r7PMnKSwtRqdA9qynKwzDOHVk1y
xCe4AKYMptpS6xJYa9f9Hlb7tBwP1LVMUj6/gVkgeFDlCCNqbbq3c2h+90kmrKaHwxj8C7+5
0AiO0tqnvuqH0zkAaysL6A62vjj8AlOI+BpBoyI/VgTCKr8aYt7CaFye96D+kdmHopowi68T
HLRsZItO90wuTgRIbS11k4UaX7JDm92njw7g+XQK+4Y2tm/pkY2iIiZ13iW1NraOjMBbIAme
oa6Z1UaSxG5kFDo9pAYNQ3TGA+oCezVTZCkdZmNiIJZqPQvM6ZSGEMI2vzFxamOzr2yxbmLi
XEhp3yMqpi5r+rtPTrELwisMF21EQ1opqzMHOWpt+eLcUaJvzyW6epzCc0kwvnqgtobCkZex
E8MFvlXDdVZIJbsHHGiZQ5OPILRW95kzB9UX27wGQOeEL+mhOjvAXCsS9zc0bDSAhs2IuCN/
ZMiIyExm8TjToB5CNL+aYUF3aPTqQxwM9cDAjbhyMECq24AmizXwIWn155G5r5ioPfIMM6Lx
mcev6hPXquISOqEam2HpwR/3tm7JhF/So21ydMLLCwPCWQPeUU5Uzn30kpYVAz+mdn+Z4CxX
y6faUjBUEvOlipMjV8f7xhYYJ8PvrB+rkR2bwIkGFc1KllMAqNqbIXQlfydEWd0MMPaEm4F0
Nd0MoSrsJq+q7ibfkHwSemyCn//24c9fXj78zW6aIlmhu301Ga3xr2EtgkuAA8f0+JBBE8Yr
BSzlfUJnlrUzL63diWntn5nWnqlp7c5NkJUiq2mBMnvMmajeGWztopAEmrE1IpFsPyD9Gnke
AbSEF276/Kh9rFNCst9Ci5tG0DIwInzkGwsXZPG8B+0ACrvr4AR+J0F32TPfSY/rPr+yOdTc
qbAfgc04ci9i+lydMymBlE/uQ2t38dIYWTkMhru9we7P4DsUdjB4wQZzs6CsWQjbcSmkX7f1
IDMdHt0o9elRq1Yo+a2osVentKXKoBPELFv7JkuOKYpl7Dp8eX2GDcivL2Cq1Ododk6Z2/wM
1LBr4qiDKDK1gzOZuBGACno4ZeKjzuWJx1I3QF5xNTjRlbR6TgnOXcpSb7QRqr2REUFwgFVC
yLzF/AlIanRJyHygJx3DptxuY7Ows5ceDp4aHnwktbqJyNFikp/VPdLD62FFkm7NU361ssU1
z2CB3CJk3HqiKFkvz9rUkw0BNlCEhzzQNCfmFNnmohGVNbGHYbYNiFc9YZ9V2HUWbuXSW511
7c2rFKWv9DLzRWqdsrfM4LVhvj/MtLHceWtoHfOz2j7hBErh/ObaDGCaY8BoYwBGCw2YU1wA
3bOZgSiEVNMIfhs7F0dtyFTP6x5RNLqqTRDZws+4M08cWriEQXrwgOH8qWrIjQMKLOHokNRN
nwHL0phsQzCeBQFww0A1YETXGMmyILGcJVZh1f4dkgIBoxO1hirkek5/8V1Ka8BgTsWOLzIw
ptUwcQXaOoQDwCSGz7oAMUc0pGSSFKt1+kbL95jkXLN9wIcfrgmPq9xz+FBLLmV6kHla43TO
meO6fjd1cy04dPqG9dvdhy9//PLy+fnj3R9fQAPoGyc0dC1d32wKeukN2tgpQ998e3r97fnN
96lWNEc4ycAuyLkg2tMgcojDhuKkMzfU7VJYoTgx0A34nawnMmZFpTnEKf8O//1MwHWD9jB3
OxiyCs0G4MWuOcCNrOA5holbgjfA79RFefhuFsqDV3q0AlVUHGQCwVExusJgA7nrD1svtxaj
OVybfi8AnYO4MNhLIxfkh7qu2gcV/A4BhVH7fXhSU9PB/cfT24ffb8wjLThBSpIGb4WZQGgf
yPDUGy0XJD9LzxZrDqO2Amnpa8gxTFnuH9vUVytzKLIj9YUiCzYf6kZTzYFudeghVH2+yROJ
ngmQXr5f1TcmNBMgjcvbvLwdH4SB79ebX5Kdg9xuH+ZWyQ3SYPMHbJjL7d6Sh+3tr+RpebQv
b7gg360PdMbC8t/pY+bsB1kQYUKVB9/efgqCpS2Gx+p/TAh6rcgFOT1Kzw5+DnPffnfuodKs
G+L2KjGESUXuE07GEPH35h6ye2YCUNGWCYKVAj0h9OHtd0I1/CHWHOTm6jEEQW+NmABn7VJt
NhN564xrTAZsd5P7Vm0qAPxNzi5lBlS7xYMTQCf8xJDDSZskfiUNpw1/MAkOOB5nmLuVntZh
86YKbMmUevqoWwZNeQmV2M00bxG3OH8RFZlhNYKB1S5ZaZNeJPnpXF4ARnTJDKi2P8Nr6XB4
kaFm6Lu316fP38DmHrxMffvy4cunu09fnj7e/fL06enzB1Dp+EYtMJrkzAFWSy7BJ+KceAhB
Vjqb8xLixOPD3DAX59v4kINmt2loClcXymMnkAvhix9AqsvBSWnvRgTM+WTilEw6SOGGSRMK
lQ+oIuTJXxeq102dYWvFKW7EKUycrEzSDvegp69fP7180JPR3e/Pn766cQ+t06zlIaYdu6/T
4fhrSPv//MC5/gEu/Bqh70ksi0EKN6uCi5udBIMPJ14En09sHAIOO1xUH8h4EsfXA/gwg0bh
Utdn9DQRwJyAnkybM8ayqOH9duYePzontQDi82TVVgrPakYpROHD9ubE40gEtommpndBNtu2
OSX44NPeFJ+7IdI9zzI02qejGNwmFgWgO3iSGbpRHotWHnNfisO+LfMlylTkuDF166oRVwqN
9gQprvoW367C10KKmIsyP6m7MXiH0f3f6x8b3/M4XuMhNY3jNTfUKG6PY0IMI42gwzjGieMB
izkuGd9Hx0GLVu61b2CtfSPLItJzZptMQxxMkB4KDjE81Cn3EJBv6pMDBSh8meQ6kU23HkI2
borMKeHAeL7hnRxslpsd1vxwXTNja+0bXGtmirG/y88xdoiybvEIuzWA2PVxPS6tSRp/fn77
geGnApb6aLE/NmIPPueqxs7E9xJyh6Vzg65G2nC1X6T0/mQg3GsUPXzcpNB1JiZH9YFDn+7p
ABs4RcAtKFICsajW6VeIRG1rMdtF2EcsIwpke8pm7BXewjMfvGZxcjhiMXgzZhHO0YDFyZb/
/CW3zRHjYjRpbZu3tcjEV2GQt56n3KXUzp4vQXRybuHkTH3PLXD4aNAoXMazOo0ZTQq4i+Ms
+eYbRkNCPQQKmc3ZREYe2BenPTTEQjNinJfu3qzOBRncq52ePvwL2UAaE+bTJLGsSPj0Bn71
yf4Il6qxfe5jiFE1UGsMa/0o0NX72VKF9IYDEzusvqA3RlmV3FsoHd7NgY8dTPvYPcR8ESlc
Idti6gexlAAI2kkDQNq8zeoY/zL+V3q7+S0YbcA1Tq3TahDnU9j+KNQPJYjak86IqLrrs7gg
TI50OQAp6kpgZN+E6+2Sw1RnoQMQnxDDL/dJnEYvEQEyGi+1D5LRTHZEs23hTr3O5JEd1f5J
llWFFdoGFqbDYangaOYDfXygZpr1RCPxASwLgItaWGOCB54SzS6KAp4Dt0CuIhgJcCMqzO7I
w50d4pTmedyk6T1PH+WVvoAYKfj3Vq681ZB6maL1ZONevueJps2XvSe1Kk6ROXmbe4g9kVSv
2EW2H3iblO9EECxWPKkEGrDIN5O6h5E2n7H+eLG7mEUUiDCyHf3tPMLJ7XMs9cPSchWtsG0k
w2s8Udd5iuGsTvBRoPoJRpjsDXMXWmXPRW3NaPWpQtlcqx0Y8lM8AO7MMBLlKWZB/WqCZ0Bi
xneiNnuqap7AGzqbKap9lqMtgc06ptVtEs3jI3FUBNgoPSUNn53jrZgwdXM5tVPlK8cOgXeV
XAiqUZ2mKfTE1ZLD+jIf/ki7Ws2dUP+2eQIrJL3wsSine6g1mn7TrNHGPJAWfB7+fP7zWckt
/xzMACHBZwjdx/sHJ4n+1O4Z8CBjF0VL6wiCD14X1VeOzNcaoqeiQXlgsiAPTPQ2fcgZdH9w
wXgvXTBtmZCt4MtwZDObSFeBHHD1b8pUT9I0TO088F+U93ueiE/VferCD1wdxdhaxgiD9Sie
iQWXNpf06cRUX52xsXmcfbirU0EGLOb2YoLOfsqcFzWHh9sPdqACboYYa+l7gVThbgaROCeE
VWLiodIGQuy1x3BDKX/+29dfX3790v/69O3tb8M7gU9P3769/DpcVODhHeekohTgHJAPcBub
KxCH0JPd0sVtf1AjZu53B3AAtAFzF3XHi/6YvNQ8umZygGw9jiijPWTKTbSOpiSIcoLG9fEc
snoKTKphDhvMHkchQ8X0KfOAa8UjlkHVaOHkJGkmwK43S8SizBKWyWqZ8nGQcZ+xQgRRAgHA
6G2kLn5EoY/CPAvYuwHBlgGdTgGXoqhzJmEnawBSRUSTtZQqmZqEM9oYGr3f88FjqoNqcl3T
cQUoPi4aUafX6WQ5HTDDtPgBnpXDomIqKjswtWSUvd0X8+YDXHPRfqiS1Z908jgQ7no0EOws
0sajfQVmScjs4iax1UmSEgxnyyq/oMNJJW8Iba+Uw8Y/PaT9VtDCE3TCNuO2x3gLLvBzEjsh
fLRhMXB6i0ThSm0xL2qziCYUC8Svbmzi0qGehuKkZWqbZLo4Vg0uvEmDCc7V7n+PFA+NSUwu
KUxwe1v9woQ+0aODBxC1b65wGHfzoFE1AzBP6Utbt+AkqXClK4dqj/V5BLcToJ+EqIembfCv
XtpuCzSiMkGQ4kSe/Zex7RIKfvVVWoAd095cjFidq7GNvDQHqT2IWGXsbH4w9wnfwOPQIhxj
D3oL3PX7s3zUTlysTmoLz2q66t+hw/UaLM81qSgcA8qQpL43HM/jbZspd2/P396c/UZ93+Kn
NHAc0FS12keWGbmDcRIihG2VZWp6UTQi0XUyGD7+8K/nt7vm6ePLl0kPyPY5jDbo8EvNBYXo
ZY4sPKpsIhe1jbGwYVzVd/9PuLr7PGT24/N/v3x4dt1vF/eZLd+uazTE9vVDCu5V7JnjUbvj
hReYScfiJwZXTTRjj9rZ7uzF/lZGpy5kzyzqB74HBGCPXFDBxpgEeBfsoh2GMlnN6kwKuEvM
1x3vmxD44uTh0jmQzB0IDWIAYpHHoAsEr9fteQQ40e4CjBzy1P3MsXGgd6J832fqrwjj9xcB
rQJeBG2XcTqz53KZYajL1NSIv1cbcY2UwQNph+3gi4DlYvK1ON5sFgykGkZwMJ94pr3tlrR0
hZvFgs9GcSPnhmvVf5bdqsNcnYp7tmJV6zQuwmUSDiIXC1IHaSHdTBqwiDNSM4dtsF4Evjbn
M+wpRszi7ifrvHNTGUriNt1I8PULXqCdUTCAfTw9HoPBKevs7mV0XUwG5ymLgoA0TxHX4coD
Op1lhOGBrDk3nLWB3W9PeTrLvTdPWzigVQHcdnRBmQAYYvTIhBya1sGLeC9cVDehg57NwEAF
JAXBE9j+PJpzkzQemTGned9equGaP00ahDQHEMsYqG+RawYVt0xrB1DlddUDBspoqjJsXLQ4
pVOWEECin/ZWT/10zjp1kATHKeQB73r3rXtUDtfnjoc/C+zT2NZTtRlZTCvW/tOfz29fvrz9
7l3eQVmhbG2JDSopJvXeYh5dqUClxNm+RZ3IAntxbqvBaxMfgH5uItAlkU3QDGlCJsj+vUbP
omk5DOQQtOxa1GnJwmV1nznF1sw+ljVLiPYUOSXQTO7kX8PRNWtSlnEbaf66U3saZ+pI40zj
mcwe113HMkVzcas7LsJF5ITf12oqd9ED0zmSNg/cRoxiB8vPqVobnb5zOSEvCEw2AeidXuE2
iupmTiiFOX3nQc0+aENlMtLo3dI053nH3CSsH9R+prFVB0aEXFfNsDaiq3a4tiQ+sWTr3nT3
yPX4ob+3e4hnSwS6lQ32BwV9MUeH2yOCD0SuqX5xbXdcDYGpEAJJ2yfWECizhd/DEa6G7Ntx
fQUVaPs3YDfbDQvrTpqDpXDtikxJBZIJFKfg4TMzPtL6qjxzgQbv8+BvCTw9Nukx2TPBwD75
6PINgmiHrEw4MMIt5iBg6+Bvf2M+qn6keX7OlQB4ypABFRTIuGMGPY+GrYXhLJ6L7potnuql
ScRo5pmhr6ilEQyXgihSnu1J442I0XNRsWovF6OzZkK29xlHko4/3CsGLmI82cUM0cRg/RrG
RM6zk6HsHwn189/+ePn87e31+VP/+9vfnIBFah/2TDAWECbYaTM7HTka7MXnTCiuCleeGbKs
MmoZfaQGG5y+mu2LvPCTsnVMZs8N4DiNn6gq3nu5bC8drauJrP1UUec3OLUC+NnTtaj9rGpB
UEh2Jl0cIpb+mtABbmS9TXI/adp1MMzCdQ1og+E5Xaemsffp7AqwOdxntthhfpPeN4BZWduW
eQb0WNOz811Nfzt+iAa4o4dnCsMadwNIja6L7IB/cSEgMjlFyQ5kW5PWJ6yYOSKgNaW2FDTZ
kYXZnj/QLw/ouQ5o7h0zpCEBYGmLKQMA/kFcEAscgJ5oXHlK8slTZvn89Hp3eHn+9PEu/vLH
H39+Ht98/V0F/ccgfthWD1QCbXPY7DYLQZLNCgzAzB7Ypw4AHuy90AD0WUgqoS5XyyUDsSGj
iIFww80wm0DIVFuRxU2FfYEj2E0Jy44j4mbEoO4HAWYTdVtatmGg/qUtMKBuKrJ1u5DBfGGZ
3tXVTD80IJNKdLg25YoFfaG3XDvIdrfSKhbWCfgPddkxkZq7TkU3h64NxhHBF5iJqhriAuLY
VFrwsh2ogAMQ7bRWtGnfUUsGhi8k0exQMw82dKbN6WN7/+Aho0KzR9qeWnAkUFIzacY1znyf
YTTBPefOAuySF3vbem96VFKnOO1Jiug8jv7ok6oQyKmtBY5eBjA5eBdCoPZvsrdl69EpC8SA
ADi4sCtkAByXIYD3adzEJKisCxfhtGkmTntdBBdYrK4LDgZC8Q8FThvtWbeMOdV1nfe6IMXu
k5oUpq9bUph+f6VVkODKUh0xcwDt5ty0m8tp1waji03S1rDzoRhZEgFqjLPmwXuQPtsh3aI9
7zGi7+EoiGzHA6D2+LgGpochxRl3sj6rLuQLDamIWqArRA2FNRI3ABst46BWhZtGuCVNwSqe
r0khjKenaU6Kg7/f6BCefsMFTJsQ/sPkxRpd/JATcX2DUfJ1wbOxN0V5qicBRP2++/Dl89vr
l0+fnl/dw0X9HdEkF6SNoUtmrpH68kra9tCq/yLJA1A9p5EU8BXKBKnMSjp5aNzefEKaEM65
+Z8Ibsobc80XJSbTUd9BGgzkjttL1Mu0oCDMPm2W07kjw+cqM8bcn1gk/Sj4VVKbCFqrBnSz
qCulPZ3LBK6N0uIG64xk1QBq3YxPWe2B2TYbuZTG0o9p2pT2KHgAIVsyzYCPrqMkLZz2p0zN
hmkzmSxOnr+9/Pb5+vT6rPu0tu4iqZENM3PTWTm5crlXKO1vSSM2XcdhbgIj4ZRdpQutzKOe
jGiK5ibtHsuKTMFZ0a1JdFmnogkimm848Gor2rtHlCnPRNF85OJR9fNY1KkPdwdu5nRlOJml
HVnNrYnot7SbKCG1TmNazgHlanCknLbQR/JIiUDD91mTsf3N6ZxF6vZMPdMFu6UH5jI4cU4O
z2VWnzIqY02wGwF7Oro1Koznwy+/qBn/5RPQz7dGDTy2uKQZERYnmCvVxDH93eocakJd2nm+
kSVzJfv08fnzh2dDz2vXN9emjv5SLJIU+dazUS7bI+VU7UgwxbGpW2myA/ndJgxSBmIGocFT
5Nny+/UxearmF/tJEEg/f/z65eUzrkElJyZ1lZUkJyPaG+xAZUElMg43n+jz0yemj37798vb
h9+/K4TI66BKZ1yuo0T9Scwp4PsnqvFgfvdgQbmPbRclEM3shoYM//Th6fXj3S+vLx9/sw9m
HuFhzRxN/+yrkCJKHqlOFLQ9QBgERAyQX52QlTxl9k6xTtab0FJ4yrbhYhfa5YICwKtcbYrN
1voTdYZuzAagb2WmOpmLa28To8XvaEHpYTfRdH3b6bMnySRRQNGO6OB64sgV2JTsuaCvBkYu
PhX2Rf0IF/D1PjaHibrVmqevLx/BIbnpJ07/soq+2nTMh2rZdwwO4ddbPrya1UKXaTrNRHYP
9uRO5/z4/Pn59eXDcGhwV1FHcOIMoqsA96b2lv2szfg7ZisR3GsnXvNtlqqvtqjtcTwiaiJH
LgpUVyoTkWPhoTFpH7KmuArwmHXO8ukt2OHl9Y9/wyIEVtBsU1aHqx5z6BpzhPQZTKISss6A
zH3c+BEr93Oss9ZPJCVn6WkfzYUbXVkibjx+mtqOFmwMexWlPlSyXRGPTZaD2irP+VCtm9Nk
6PBp0thpUklRrURiIvTUU25d9A+VtHySWJtEcB3MeLjVyQlzaWIShRcV6c9/jAFMYiOXkmTl
o+xPj6rCL5m03UGOni/B4yOcQphEWfpyztUPod97IrdnUu2LUNdv0iMyG2V+q73ybuOA6CB0
wGSeFUyC+EB2wgoXvAYOVBRooh0+bntWHxNUAy3BCiQjE9uvE8YkIib/tdqwX2ytK5h15Uk0
ZiwdUB8CD5xaMCFmnse618dNqmWqvDoifTnPxGRUlv785t5cwClnbJ9IDMBysXC28GLwBAn+
Faumt62dDlvE/piBWlKDVFKCHj1y1kBnfbGoutZ+dgQifa4W5LLP7cM8tYfqr6l9nQJ7kz7d
Z7YPvgxOtWEwoc4iz+VqAaduoYN3Wd/YdxHDIa/6VWL3yho/2j1rkvbVSGpT8slL2umJahDE
rPlK5qBBhwIXp4wFnOvKAQbJaz5HmRVkrBaeJCVTEHuA61ML6lHmWEryCzS5MvvOTYNFe88T
MmsOPHPedw5RtAn6Mbhh+mPU0X99e9G3F1+fXr9hrXkVVjQbULOxsw/wPi7WakPNUXGRwMU8
R1UHDjVaPKrTqcWuRW9V4Ptq8fXHaZsO4zDKa9XqTBQ1+sGN5i3KmAHSrtK1I/ifAm8Cqkfq
M2TRpsmN72g3veClFwnsTpXrljirP9XWT3uLuBMqaAs2VD+Zq6X86T9O2+zze7X40ZbBLuwP
LboSpL/6xrYzhvnmkODoUh4S5MgV07qFkb9k3VKyRVpVupWQy/KhPdsMtJrUFG1eBU2Sqyj+
2VTFPw+fnr6pDc7vL1+Z5x3Q7Q4ZTvJdmqQxWVABV4O5Z2AVX78UA3d7VUn7tCLLinpEH5m9
kvUewc+y4tlj8DFg7glIgh3Tqkjb5hHnAZa2vSjv+2uWtKc+uMmGN9nlTXZ7+7vrm3QUujWX
BQzGhVsyGMkN8oM7BYJDLKTkNbVokUg6/QGuBHjhouc2I/25sU+BNVARQOylsegx72b8PdYc
OD19/Qqvpwbw7tcvrybU0we1mtBuXcEq20E111grUA+b06MsnLFkQMfzj82p8jftz4u/tgv9
Py5InpY/swS0tm7sn0OOrg78J0EYcWpvJJl7Aps+pkVWZh6uVrtKcHxB5ph4FS7ihNRNmbaa
IAuiXK0WBEN3RwbAByYz1ouyKh/VFpG0jjlbvTRq6iCZg0OwBr8P+16v0F1HPn/69Sc45HnS
roVUUv4nb/CZIl6tyOAzWA9qeVnHUlQQUkwiWnHIkdcoBPfXJjPer5E/IBzGGbpFfKrD6D5c
kSlFn9er5YU0gJRtuCLjU+bOCK1PDqT+TzH1W0n1rciNgtlysVsTVm22ZGrYINw6S2xoxCpz
8/Ly7V8/VZ9/iqG9fEoOujKq+GhbbTS+RtTWsvg5WLpo+/Ny7iDfb3ujTyXKBH8UEKLarGfS
MgWGBYeWNM3Kh3DuFm1SikLJ+0eedPrBSIQdLMxHd84V137I6nCY9e9/Ksnp6dOn50+6vHe/
mql2PvllaiBRH8lJl7IId8DbZNIynCqk4vNWMFylpqbQg0ML36CmgyMaoBXlsWLwQehlmFgc
Ui7jbZFywQvRXNKcY2QewxYxCruOi3eThQ2r29MMFRfLTdeVzNxiqqQrhWTwY11kvSdN2PRl
h5hhLod1sMAKj3MROg5Vs9Yhj6mQazqGuGQl22XartuVyaHgEnz3frnZLhhCre1pmam9YuyL
tlzcIMPV3tOrzBc95EGyudR7dQaH44LVYskw+O5yrlX77ZNV13TeMPWG9SPm3LRFFPaqPrnx
RK4frR6ScUPF1S6wxgq5JZuHi1oJBPcRs8DnRy45S1vASH0v3z7gWUm6VhKn2PAfpM86p4vv
T+b+mMn7qsT6CQxptj6MI+RbYRN9DLz4ftBTdrydt36/b5lFBY7s7BledXS17P2mFjr3SnNK
lR8NCoWbspMo8LNzT4CeHwFDIDNqpiWYy9akFArrrs58XqsKu/tf5t/wTsmId388//Hl9T+8
kKaD4Sw8gBWaaZM6feL7CTt1SgXPAdT64EvtQlntziXd1I6h5BUMzkq41vJsV5mQajnvL1U+
SvPehO/TlNsE69NbJQGmCW4awI0GwoGgoAKs/qX7//PeBfpr3rcn1ZtPlVphidCnA+zT/WBQ
I1xQDmyDObstIMCJL/c1chYDsL4FwMql+yJWosTaNiWYtFYZ7Q1VdYCj0BbfLihQ5LmKZFvX
q8CrgGjBJT0ClWidP/LUfbV/h4DksRRFFuMvDbOBjaGD/Eo/ZEC/VYRUSRYwWxeUgOcICAMF
41w84gTPSOtRiTvo0dYA9KLbbje7tUsoAX7poiWc0tlaaGWNfkwvkPRLpfkG3DVkono+jYyV
Fvf5PbaQMQCqZKrl9rZhU8r05u2WUSjO7NUiTtB+eowI6hZSwuKb1Vgke49Ea/gF+p/6oKDP
31cNHrCYfy/VhoM73KLJLH8oVPVjaZ3iHwi3XYbMRILC/Py3T//3y0+vn57/hmi9FOH7T42r
fgqnwNrmP7a2PNTxGfXkEQXrTTwKT+/Mk6eft5Q39rP5uEmzt9Zo+OXvDlPHsaOMoLznwG7r
gqiPWOCQ/WDNcc4mWvdNMDgUJ5eEdNkRHi7j5FwlmL6Stw4CFErgfhVZ3e7ScjgJ7w9NpSQy
e1dgkdDMiBsMaLGDr+HqsJHobfmIsvUNKNg0R9Z+EamnxOmYu7wUqat3BijZuk+tfEHO/iCg
cSkpkG9LwE9XbBgMsIPYK0lSEpQ8ZtMBYwIgg/IG0Z5EWBCU2KVacc88izu9zTA5GRg3QyPu
T83keZbV7MqepHP3xlampVTiEbjMi/LLIrRfpCercNX1SW1b7LZAfLNuE+gaPTkXxSNeP+uT
KFt7Xje7jCJTuxdbT6rNDgXpGxpS+2nbc0Asd1Eol7btHL3976VtTVjtfPJKnuHZOCgpxLaC
wanus9zaTOm74LhSu190VqBhEJGwVYA6kbvtIhT2C6VM5uFuYVstN4h9LjvWfauY1Yoh9qcA
mVUacf3FnW2/4VTE62hl7R4TGay3SEcMPJzarztAPMpAAzKuI+daWTb0lcekCogFs0GpXiYH
2+hQAWpkTSttdeRLLUpb0NKS7im7Tx/JA9BwkHzMNilVe4TC3SIZXLVzaEk9M7hywDw9CtsD
7AAXoltvN27wXRTbStYT2nVLF86Stt/uTnVqF3jg0jRYLJDqKynSVO79JliQ3m4w+uJ1BtU2
Qp6L6TpP11j7/NfTt7sM3rf/+cfz57dvd99+f3p9/mj5q/wE27uPaj54+Qp/zrXawrWRndf/
PxLjZhYyVZi3C7IVtX25n5bXh5T+no4z4BlABYpOMayPj/NWPY1PFel5IlfVSE47xx7pg1Ef
PIm9KEUvbPUMMJhoVw2aT83VRSyz8cDa6bBA9sjCaiMyOKds7QfiEpl01HHQKqER5yWiRrUe
xmHqBjozQy7u3v7z9fnu76qR/vW/796evj7/77s4+Ul1wn9YBoZGgckWZU6NwZgF3jaBOYU7
Mph9KqczOk3EBI+1/ipSI9F4Xh2PSELVqNRG9kBdDZW4HfvlN1L1etPtVrZaU1k40//lGCmk
F8+zvRR8BNqIgOrnONLWAjRUU09fmO9GSOlIFV1zsK5irzaAYx+zGtKKG/JRHmg24+64j0wg
hlmyzL7sQi/RqbqtbLEuDUnQsS9F175T/9MjgiR0qiWtORV619li6oi6VS+wQrjBRMx8R2Tx
BiU6AKDrox8BDmbTLAPcYwjY+oMeqNrR94X8eWVdNo9BzGRttKfdTwx7cCHvf3ZigkEZY/cA
Xk9iv09Dtnc027vvZnv3/WzvbmZ7dyPbux/K9m5Jsg0AXepMF8jMcPHAeEI30+zFDa4xNn3D
tKoceUozWlzOBU1dHyzLR6evgdJiQ8BUJR3aB5RKCtHzfplekZ3aibCVOmdQZPm+6hiGijUT
wdRA3UYsGkL5tSGSI7oItmPd4kNmzivgOdYDrbrzQZ5iOvQMyDSjIvrkGoN1b5bUsZzbjilq
DNZAbvBj0v4Q+Mpngt2HjROF37xNcOu8/5movaT9EVD67G8uFPFENkySSgKkq0jxaCvOjpCV
JhwcmCXQOVNQ65i9B9U/7akc/zItjoT7CRpmCWe1SYouCnYB7QsH+iTeRplecExaKl5ktbOW
lxkydTOCAr1fNlluU7qwyMdiFcVbNTmFXgZUrIdTY7hw0QbQAl/YwVJVK47SOmoioWC46RDr
pS9E4ZappvOPQqjW94Tj9wcaflCylmozNcZpxTzkAh1LtHEBWIjWTAtkZ1pIhIgAD2osoV8H
EievD7RfAeTtV3G0W/1Fp2aos91mSeBrsgl2tLm5fNcFJzLUxXZhnzkYweeA60mD1NaSkapO
aS6zihtLozjne8EmTiJYhd38NGPAx9FD8TIr3wmzt6CUaXEHNt0MNMb+wLVDR1ty6ptE0AIr
9FT38urCacGEFflZOLIu2UiNcYyNFDgndSduLGVDGPK4UuiHeAXWMgRwNKimN5mYUp+IyQkt
vvjQH3pfV0lCsHo28BpbLzb//fL2+93nL59/kofD3eent5f/fp4N9lq7Fv0lZFlKQ9ohWqoG
QWG8o1hb4SkKVzcnbaojplBWdASJ04sgEFJyMMhFjROCEZ0KjRGNB40Rgwsae6ga22+XLglV
iZyLJ1O1P7JlUk2pwHGwDjsaQ7+OZGpSZrl9eqShw2FsMmidD7TZPvz57e3LH3dqvuearE7U
ZhLv1yHRB4keTJhvd+TL+8JENN9WCJ8BHcx6cAPdLMtokZVo4yJ9lSe9mztg6KQ34heOAA0I
UJCl/fJCgJICcOyVSdpq2D7I2DAOIilyuRLknNMGvmS0sJesVWv05Fqh/tF61jMH0q0ziG1l
1iBakaaPDw7e2hKawVrVci5Yb9f2+1SNqu3ceumAcoX0fCcwYsE1BR/J20eNKumkIZASL6M1
jQ2gk00Au7Dk0IgFcX/UBJqQDNJuw4DG1yAN+U6bjqPfd3T+NFqmbcygsFTaGv4GldvNMlgR
VI0nPPYMqoRxt1RqaggXoVNhMGNUOe1E4GEEbTANar9M0YiMg3BB2xoduBlEX8NdK2w3ahho
662TQEaDuS/SNdpk4NKCoGjMaeSalftqVnyqs+qnL58//YeOOzLYdI9fYJHfNDxVRNBNzDSE
aTRaugrdMJlGoFIWL12Y6Acf07wf/D+gN92/Pn369MvTh3/d/fPu0/NvTx8YDanaFSnM6kct
GwHq7PeZi1kbKxL96jZJW/T+UMHwgs0e6kWiz98WDhK4iBtoiTTcE+6ithgu9lHu+zg/S2ze
n1yJm9+O4yuDDifJzsHOQJvnzE16zKTaBfG6BEmhtZHbjOVmLCnoR3TMgy3Yj2GMWpSaeEpx
TJsefqATbBJOu/ZzzQRD+hkoyWVIATTRxuzUKG3hLX6ChF7FncEAclbbio8K1ecGCJGlqOWp
wmB7yvTTsUumtiYlzQ1pmRHpZfGAUK1z4gZObeWtRD8/wIlhawMKAe99FXrKDLcB+nm/rNGu
VjF4i6aA92mD24bplDba236qECFbD3HyMlklSHsjjS9AziQynFPgptTPahF0yAXyuqcgeMjQ
ctD4xKGpqlYbG5bZ8QeDgdqkmrPB5oT6XEM7whAR3flClyLO5obm0t1BkqLCxoBm+z08jpyR
QbOBKADEKjbROgTsoDZF9lAErMZnDQBB17FW99EZnaPgoZO034ub+xQSykbNNYklb+5rJ/zh
LNEcZH5jfYkBsz8+BrOPWQeMOZYdGKTTP2DIrd+ITddreuECj9B3QbRb3v398PL6fFX//4d7
m3nImhSbMBiRvkIbrQlW1REyMNKjnNFKoufENzM1xja2pbFiR5ERn3lE00j1cdy3QVll/gmZ
OZ7RHdIE0dUgfTirDcJ7x3+d3Ymow+k2tdUsRkSfH6q9eiUS7AcSB2jAjkRT7bPSG0KUSeX9
gIjb7KJVA6kz2zkM2EnZi1zgVwMixq5IAWhtreGshgB9HkmKod8oDnE6SR1N7kWTIrfsR/TG
SsTSnoxAuK9KWRGbxAPmav0qDvss1L4EFQK30m2j/kDt2u4dS+YNvOxu6W+wk0Tf1w1M4zLI
5yOqHMX0F91/m0pK5M7oglQFB40/lJUyp14z+4vtMFn718SPNE4ZTgKeuqUFPEi1hNQmRmHM
715tSQIXXKxcEHn+G7DYLvWIVcVu8ddfPtye9ceUM7VIcOHVdsneMRMC7zYoGaMzw2KwhENB
PIEAhC7hAVD9XGQYSksXoBPMCGubt/tzY88MI6dh6HTB+nqD3d4il7fI0Es2Nz/a3Ppoc+uj
jfvRMovhdTcL6ncfqrtmfjZL2s1G9UgcQqOhrUNno1xjTFwTX3pkYhuxfIbsDaf5zX1C7TNT
1ftSHtVJOxfXKEQLd/FgaGG+aUK8+ebC5k7ka6fUUwQ1ldp3lMbpAx0UGkWe4DRysgUzjUyX
JON747fXl1/+fHv+ONpME68ffn95e/7w9ucr5yBtZb86XkVa4Yea0wK80IboOAIep3KEbMSe
J8A5GfE1nEgBbz57eQhdgqjvDugpa6Q2c1eCzbI8btL0nokryjZ76I9KyGbSKNoNOvab8Mt2
m64Xa46CszL9Yu1evud8K7uhdsvN5geCEOcD3mDY/wEXbLvZrX4giCclXXZ0Q+lQ/TGvlIDD
tNUcpG65CpdxrDZAecalDpxUsmhOfSIAK5pdFAUuDu410axECD4fI9kKpiOO5CV3uYdYbJlu
Bobj2/Qe2yaY0lMlg464i2zNZo7luwAKUSTU/wwEGU7olVASbyKu6UgAvulpIOvMbjab+4NT
zCTggwNlJPG4JVD79aRq+ojYOdZ3qlG8sq+gZ3Rr2fa8VA1SQWgf61PlSG/mKyIRdZsiHXwN
aNMnB7Q7s2MdU5tJ2yAKOj5kLmJ9kmNf+oItNSk94fNrVpb2LKkdE/cpckWOYrQpsgkXp0iN
xPzuqwKsGGZHtVu1lyCjRtxKTzkL8d5OOy0F04Qogv34oUi2AfiGs4XrGgRCdDkw3K8XMdq7
qMh9d7TNL41In8RkC0guQyeov4R8LtU2Uy0EtvTwgA827cC2Mw71Q7cE2QOPsFVTEMg1BW+n
C/VYIdE3R4JTHuBfKf6JlME9ne/cVOjmWP/uy/12u1iwMcyGGT0XtB0cqR/G0QG4OU1zdBw+
cFAxt3gLiAtoJDtI2dlOf1E31l03or/poyWtM0t+KtkDedHYH1FL6Z+QGUExRqtNmxXEr0rV
N8gv54OAHXLtaaU6HOA8gJCoR2uEPsZCTQQ2COzwgg3oWioQ9mfglxZKT1c11xU1YVBTmV1l
3qWJUCPLNxPF4pKdrdoanSTA9GP7J7LxiwffHzueaGzCfBEv6nn2cMYGm0cEfczOt9EispId
1IragMP64MjAEYMtOQw3toVjJaaZsHM9oti52wAaV4eOeqP5bd6Qjonaz6ym6LVM4576S7Si
jHrQbB1mMra+iZcgO5w2mWt1WKOgwqwqcQfeNdDJ/Q45hze/jW7RZGb09NjjQ6jEt1Ql5Kyr
b8+5PVUnaRgsbH2BAVCyUT5v9kgk/bMvrpkDIU1Dg5WidsIBpkakEs/VBEdu24ZL4H67xLUQ
LKxZU6WyCtfIh4VeX7usiek55lgT+CVMkoe2XooaevjockRImawEwY2RLYDt0xDP8/q3M3cb
VP3DYJGD6QPVxoHl/eNJXO/5fL3Hq7H53Ze1HK4ZC7gNTH095iAaJR0+8pzacIKPMPs03+5g
YG7ogKywJ7UQoCsnWjWDicVqEW1XOHz9QKRjAPX0S/BjJkqkcgIBTfq26DWi4Q0Yz04zpaZY
uFtENkMVCXUVMxCaamfULY7Bb6UORrj5Kj+/y1p5dnr6obi8C7a8RHSsqqPdRscLP51NFo9n
9pR1q1MS9nj50y8oDinB6sUS1/EpC6IuoHFLSWrkZFsUBVrtww4Ywb1TIRH+1Z/i/JgSDDXq
HOpyIKi365/O4praXrIy3wSfbcMV3XKOFPbrniLV8TRYOD+tYmTHPfpBJw8F2aXJOhQebyT0
TycBd2thIL0sEpB+SgFOuCXK/nJBExcoEcWj3/aEeyiCxb1dVOsz7wq+A7s22i7rJeziUbcs
Lrj/FXA/YhvXutTIPB38xNJW3YlgvcWpynu7A8IvRzkSMBD+sU7i/WOIf9F4VQx73bYL+wI9
5Zlxe7iUCbipleNNlVa0QDeVczRbPJ1Ru0VAz4/4EBsQV1Qe28B54wLkiIJFfh8Dp+zeDX+h
2lWU6CVT3qn5pnQA3OM0SAw8AkQNeY7BiGcOha/c6Kse3vTmBDvUR8HEpHlcQR5Fgxx/D2jT
YQN8AGOnGyYkVZcw31KyqkCqWoCqpcTBhlw5FTUwWV1llICy0cGuCQ5TSXOwTgMJ4SaHDqLi
uyB4+GnTFGuUKEbhTvsMGJ3tLAYE70LklMNPvDWEjiMNZKqf1NGEd6GD12ncNvYmEeNOQ0gQ
hcuMZvBg3WbZQyOLkd/6e7ndLkP8275ENb9VgijOexWp84/q8eDcWqzKONy+s+8HRsTo7VCD
t4rtwqWirRhqSG/UBO3/JPaaqI/HKzXy4CWyrmy8J3R5PuVH250o/AoWRyTDirzkM1WKFmfJ
BeQ22ob8QZP6M23Q7kaG9kp06exswK/RYwu8xMI3hTjZpiortCgekCfvuhd1PZzGuLjY62tO
TJAJ0v6cXVr9KOOHNhLbaIecfpoHSR3WBKBGygaAWtUo0/CeaPia9OrY9/nykiX24afeQSdo
Cc/r2J/96h597dQj6UqlQxe2IV4NppXawbGVLeiKAlbmGXhMwfXPgSrljMmkpQSlHEsiqnzS
6PA0a6IechGhy6yHHB8zmt/0BG9A0eQ0YO5BHbwJxWnaCnnqR5/bB70A0M+l9vkeBMB2jgBx
3wCSAyRAqorfoIOaFVxWWqFjsUEC+ADgq6ERxO7PjdcXJK40ha/zIA38Zr1Y8vPDcIU2c9sg
2tlKIPC7tYs3AD2y6DqCWt+jvWZYTXpkt4HtGg5Q/c6nGR74W/ndBuudJ79lip9wn7BQ3IjL
no+ptr12puhvK6hjL1vqHQr6jh08TR94osqV1JULZD4Evbg8xH1hO0TQQJyA9ZUSo6TrTgFd
iyOKOUC3KzkMf87Oa4auhWS8Cxf05ncKatd/JnfoaXImgx3f1+BG1QpYxLvAPT/TcGy7DEzr
LMavnyGIHRUSZpClZ01UGwJQa7OvE2QJTqpSDKgoVFFvSqLVsoIVvi3gvAjvzwwm0/xgfPxQ
xr34SK6Aw3M2cImGUjOU89LCwGoxxKu8gbP6YbuwDykNrFadYNs5sLuPGnHpJk1sgRvQzFDt
CR0lGcq9ozO4agy8ixlg++XLCBX2feYA4necE7h1wKywzSsOmLblht2njm3jEUulrfd4UrLM
Y5HaQrPRPpx/xwLe2SP55cwn/FhWNXpJBd2gy/FZ1ox5c9impzOyTEd+20GRAbvRiDpZYywC
n2K04NMdtjCnR+jkDuGGNBIy0kXVlD02FHAfaQu8Jk55lj6WRJvjIAWTFk1sVunR8y/1o29O
6EpmgsgJPOAXJfHH6E2AlfA1e4+WZfO7v67QNDahkUYna6oDrp2gaQ9YrM1VK1RWuuHcUKJ8
5HPkaqUMxaCe6Qf7edA7cmQzfCBER7vOQOS56oS+YxZ6YWLdo4S2eYxDYltfSNIDmsDgJzUz
cW/vSNTUgxwmViJpzlgBZMbULrFRe4wGP77Xs1tWk1tguccnqmo04HseDdjmTa5INTlX8mPb
ZEd4fIWIQ9alCYbkYTZgn2V3ivM6nAEtDhRXz+L9scuJZnQCr6gQMmhtENRsi/YYHfUYCBoX
q2UALyIJapzUEVDbjqLgdrndBi66YYL28eOxBNeAFIfWoZUfZzG4cEdhh7tSDMKU5xQsi+uc
finvWhJILyrdVTySgGBjqQ0WQRCTljFnyDwYLI6E0GcvLmaUDD1wGzAMnCJguNQ3oYKkDhbd
W9DOo5Uv2u0iItiDm+qopkdALcgTcBASSK8HTTyMtGmwsJ+jw/mwau4sJgkmNRyNhC7Yxtsg
YMIutwy43nDgDoOjGh8ChwnwqEZr2BzRC5+hHe/ldrdb2btOo/RLVAA0iAzVVweyHI/xkPtd
DWrNNoIRXS6NGUP/9KNZuxfoBFSj8LQNzDgy+BnOESlBlVY0SNyCAMRd9GkCn4pqv9oXZAjT
YHAep+qZfqmoOrSX1qC5gaDfqR+Wi2DnokqSXhJ0UJiZ5mSF3RV/fnp7+frp+S/sWmJov744
d26rAjpO0EFI+8IYwFvnA8/U5pS2fuuZp5293uEQau1s0ulpXR1L79KiuL6r7ScmgOSPWlaY
3Wq6KUzBkS5HXeMf/V4m2kg8AtUKr8T0FIOHLEcHDYAVdU1C6cKTNbmuK9EWGEDRWvz9Kg8J
Mhn0tCD9hBs9IJCoqDI/xZib3Hrb404T2iwdwfQ7N/jLOphUY8CoEtPXDEDEwlYbAOReXNG2
ErA6PQp5JlGbNt8GtqXnGQwxCEfqaDsJoPo/koHHbIIcEWw6H7Hrg81WuGycxFp1iWX61N5x
2UQZM4S5ZPfzQBT7jGGSYre2X4yNuGx2m8WCxbcsrqapzYpW2cjsWOaYr8MFUzMlyBRb5iMg
quxduIjlZhsx4ZsSLnGxFSq7SuR5L/WxMjax6QbBHPhrK1briHQaUYabkORin+b39mG0DtcU
auieSYWktZorw+12Szp3HKLDpzFv78W5of1b57nbhlGw6J0RAeS9yIuMqfAHJd9cr4Lk8yQr
N6gSBVdBRzoMVFR9qpzRkdUnJx8yA82l3gl7yddcv4pPu5DDxUMcBCQbZihHfWoPgSvaK8Ov
WYG/QEdD6vc2DJDq9Ml5mIMSsMsGgZ0nZCdz6aRNt0tMgI3WUb0AnsVr4PQD4eK0MWbg0Rmp
Crq6Jz+Z/KyMoQt71jEofntpAqpvqPoXavuY40zt7vvTlSKOh3kLZXKiuOQwWA45OMnv27hK
OzX6aqwyrVkamOZdQeK0d77Gf0m2eodg/pVtFjsh2m6347IODZEdMnuZG0jVXLGTy2vlVFlz
uM/ww0VdZabK9dtndKQ7lrZKC6YK+rIarN47bWWvmBPkq5DTtSmdphqa0Vy224eDsWjyXWC7
SRgROAaQDOx8dmKutl+HCXXzs77P6e9eoo3DAKLVYsDcngioY/1lwNXoozZRRbNahZbq3DVT
y1iwcIA+k1pp2SWcj40E1yJIgcv87rFFRA3RMQAYHQSAOfUEIK0nHbCsYgd0K29C3WwzvWUg
uNrWCfGj6hqX0doWIAaA/3BwT3+7FREwFRawxQs8xQs8pQi4YuNFA7lGJT/1ExkKmUt+Gm+z
jlcL4jnB/hD3ICdCP+jTFYVIOzUdRK05UgfstatMzU9HtjgEe6o7B1FxmfNc4P0Pg6LvPAyK
SIceS4XvcnU6DnB67I8uVLpQXrvYiWQDT3aAkHkLIGomaxlRg2ITdKtO5hC3amYI5WRswN3s
DYQvk9hYoJUNUrFzaN1jan1UkaSk21ihgPV1nfkbTrAxUBMX59Y2WQmIxA+1FHJgEbC21cIZ
T+InC3ncnw8MTbreCKMROacVZymG3QkE0GRvLwzWeCbvZETWVMgohh2W6Etn9TVEFzUDAHfy
GbKOOhKkEwAc0gRCXwJAgBHFililMYyxQxqfkTf7kUTXrCNIMpNne8XQ306Wr3RsKWS5W68Q
EO2WAOgDopd/f4Kfd/+EvyDkXfL8y5+//fby+be76uvby5fPtkPGKz9cMH5ATkR+5ANWOlfk
cXQAyHhWaHIp0O+C/Nax9mDKaDhcssxN3S6gjumWb4YPkiPgmNfq2/OLb29haddtkAla2L/b
Hcn8BnNVxRUpohCiLy/Ij9ZA1/aT1xGzhYEBs8cWKLqmzm9tG7BwUGOV73AFP7XYqJz6tJNU
WyQOVsIz9NyBYUlwMS0deGBXabZSzV/FFZ6k6tXS2b4B5gTC2oIKQBetAzDZ2Ke7EeBx99UV
aPultXuC8xBBDXQlHNoX4SOCczqhMRcUz9ozbJdkQt2px+Cqsk8MDAYcofvdoLxJTgHwFQAM
KvtF3ACQYowoXmVGlKSY25YnUI076jKFEjMXwRkDVFccINyuGsJfBYTkWUF/LUKifTyATuS/
Fowzc4DPFCBZ+yvkI4ZOOJLSIiIhghWbUrAi4cKwv+LbHgWuI3P8pW+OmFTW0ZkCuEJ36Duo
2Vy9crWjjPEl/oiQRphhu/9P6EnNYtUeJuWG/7ba56BriKYNO/uz6vdysUDzhoJWDrQOaJit
G81A6q8I2SZBzMrHrPxxwt2CZg/1v6bdRASA2Dzkyd7AMNkbmU3EM1zGB8aT2rm8L6trSSk8
0maMaJmYJrxN0JYZcVolHfPVMay7gFskfYpuUXiqsQhHJhk4MuOi7kuVhfUZ8nZBgY0DONnI
4ciKQNtgF8apA0kXSgi0CSPhQnsacbtN3bQotA0Dmhbk64wgLG0OAG1nA5JGZuXE8SPOXDeU
hMPNoW9m39ZA6K7rzi6iOjkcUNvnRE17ta9P9E+yVhmMlAogVUnhngNjB1S5px+FkIEbEtJ0
Pq4TdVFIlQsbuGGdqp7Ag2c/2NgK/+pHj/SUG8nI8wDipQIQ3PTabaQtnNjftJsxvmJT+ea3
CY4/ghi0JFlJtwgPQvthlvlN4xoMr3wKRIeKOdYgvua465jfNGGD0SVVLYmz61RsI9wux/vH
xJZmYep+n2ALnvA7CJqri9ya1rS+W1ra1jIe2hIfgQwAERmHjUMjHmN3O6H2yys7cyr6dqEy
A4ZguMtlc/+Kb+DA0GCPJxt083hK8hj/wpZKR4S8kgeUnJBo7NAQAOlmaKSz/RWr2lD9Tz6W
KHsdOo+NFgv0fuQgGqw4ARYIznFMygLGu/pEhutVaNvAFvWe6AGAvWWoV7VdclQgLO4g7tN8
z1Ki3a6bQ2jfiXMss4ufQxUqyPLdkk8ijkPk/ASljiYJm0kOm9B+VGknKLboEsWhbuc1bpAm
gUWRrqmfWGnjwYzjPYsEw8yIuxTwys4S0QYjEn2KR/ASX22b5FAWYGQcRJZXyKJlJpMS/wJr
vMhMp9paE29yUzDw3p7kKRanCpym/qk6YE2hPKiySQv3D4Dufn96/fjvJ87Sp4lyOsTUk7NB
tWYSg+P9nEbFpTg0Wfue4lpl7yA6isP2uMTabRq/rtf24xoDqkp+h4wFmoygATkkWwsXk7ZN
lNI+UVM/+nqf37vINCEbS+6fv/755vVHnZX12bZkDz/p0Z7GDge1Ky9y5CnIMLJW0056X6Az
Vs0Uom2ybmB0Zs7fnl8/PX3+OLvN+kby0hfVWaboHQPG+1oKW2eFsBLsppZ993OwCJe3wzz+
vFlvcZB31SPz6fTCgk4lJ6aSE9pVTYT79HFfISPyI6ImpJhFa+zZCTO2iEmYHcfUtWo9eyDP
VHu/57L10AaLFfd9IDY8EQZrjojzWm7QO7KJ0rab4D3G2rYeNNH5PZ+5tN6h/ehEYF1NBGvD
WimXWhuL9TJY88x2GXB1bbo3l+ViG9k384iIOKIQ3SZacc1W2OLPjNaNEr4YQpYX2dfXBnkH
mVjkVMtG1ZDo+Shlem3tqW6uF+zSb8KrOi1BGOWyXRcZOC3lMuE8DJ0brsqTQwaPUcENCpes
bKuruAou+1KPO/ASz5Hnku9b6mM6FptgYSvB2mktsz5v+KGcPUjkkXCuLTU5LtleF6lhzMVo
i7Bvq3N84turvebLRcQNwc4zyuG5Qp9yuVYLOrxMYJi9rdk298r2XjcxOzlbSxv8VNN4yEC9
yO2XSTO+f0w4GB7Hq39tWXomlTAsaqxJxZC9LJD2/xzEcY03UyD/3Gt1Oo5NwUo3Mpjrcv7P
yhRuTe1qtL6rWz5jv3qoYjhE4j/Lfk2mTYbMkmhUrxT6Q5SB10fIga6B40dhP90yIJSTvCxA
+E2Oze1FqqlDOB8iuvmmYFPjMl+ZSbxBGCUAUL6zxKoRgZfAqrtxhH0OM6P2TGChGYPG1d6e
TCf8eAi5nBwb+4wdwX3BMmcwQl7Ybr4mTl90IltDEyWzJL1mwzsMSrYFW8CMeMolBK5zSoa2
LvNEqt1Ek1VcHgpx1KakuLyDZ7Cq4T6mqT0ywDJzoM7Kl/eaJeoHw7w/peXpzLVfst9xrSEK
8KvFfePc7Cu1rh46ruvI1cJWC54IkFrPbLt3teC6JsD94eBjsPxvNUN+r3qKkvy4TNRSx0US
JkPyn627hutLB5mJtTNEW9CSt5106d9GpT1OY5HwVFajA3aLOonyit5hWdz9Xv1gGedpx8CZ
SVXVVlwVSyfvMK2a/YcVcQZBK6UGzUN0NW/x221dbNeLjmdFIjfb5dpHbra24waH293i8EzK
8KjlMe+L2KhNWnAjYVA17Atb9Zil+zbyFesM1lS6OGt4fn8Og4XtVNYhQ0+lwDVnVaZ9Fpfb
yN4e+AKtbJ8OKNDjNm4LEdgnXC5/DAIv37aypv7x3ADeah54b/sZnhre40J85xNL/zcSsVtE
Sz9nP4xCHKzltrUQmzyJopanzJfrNG09uVEjOxeeIWY4R3RCQTo4mvU0l2Pz1SaPVZVkng+f
1GKc1jyX5Znqq56I5GGjTcm1fNysA09mzuV7X9Xdt4cwCD2jLkUrMmY8TaVny/66XSw8mTEB
vB1MbaCDYOuLrDbRK2+DFIUMAk/XUxPMAbRsstoXgMjJqN6Lbn3O+1Z68pyVaZd56qO43wSe
Lq/210qOLT2TYpq0/aFddQvPItAIWe/TpnmEpfjq+Xh2rDwTpv67yY4nz+f139fM0/zakkgU
rTp/pZzjvZoJPU11ayq/Jq22YeDtItdii1ycYG636W5wvrkbOF87ac6ztOjHalVRVxJZ+0CN
0El6nIDp0JOnIg6izfbGh2/NblqwEeW7zNO+wEeFn8vaG2SqxVs/f2PCATopYug3vnVQf765
MR51gIQqWjiZADNQSn77TkLHqq08kzHQ74REPnmcqvBNhJoMPeuSvph9BPuQ2a20WyURxcsV
2mnRQDfmHp2GkI83akD/nbWhr3+3crn1DWLVhHr19Hxd0eFi0d2QNkwIz4RsSM/QMKRn1RrI
PvPlrEZeKdGkWvStR16XWZ6iHQnipH+6km2AdsOYKw7eD+IDSkRhYxWYanzyp6IOal8V+YU3
2W3XK1971HK9Wmw80837tF2HoacTvScnCUigrPJs32T95bDyZLupTsUgwnvSzx7kyjfpvweF
6cy9Zsqkc7o57sj6qkRHshbrI9XOKVg6HzEo7hmIQQ0xME0Glmuuzf7covP6iX5flQKsquHz
0IHWOynVvcmQN+xebU7sWh7uv6Ju0fNfUyXeLQPnGmIiwSrRRTWfwC82BtpcEXhiw0XJRnUo
vj4Nu4uGcjL0dheuvHG3u93GF9Usqv4aLgqxXbq1pG+d9kpuT52SaipJ4yrxcLqKKBPDLHSj
oZWI1cBBn+2CZLp/lGppH2iH7dp3O6cxwMRwIdzQjylRtx0yVwQLJxFwlJ1DU3uqtlFigb9A
ev4Ig+2NInd1qAZYnTrZGe5CbiQ+BGBrWpFg25Unz+zFeS3yQkj/9+pYTVfrSHWj4sxwW+QZ
cICvhaf/AMPmrbnfgptIdvzojtVUrWgewYY31/cSsQm3C99UYTbj/BDSnGd4AbeOeM5I5j1X
X65SgUi6POImTQ3zs6ahmGkzK1RrxU5bqJUhXO/csVcIvK9HMPdpUOS53ye8ls+gPlHFwzSq
ZulGuBXXXEJYXXztAfR6dZve+GhtEkqPcqZZGnEBNUN/d1YC02acyh2uhZk8oA3eFBk9SNIQ
qjuNoNYySLEnyMH2PzoiVLjUeJjAtZq01xsT3j5QH5CQIvZ16oAsHURQZOWEWU1P/E6jWlP2
z+oONHIsbRGSff0T/osNUBi4Fg261DWoKPbi3jZjPwSOM3TpalAlRzEoUnUcUjWONpnACgJ1
KydCE3OhRc19sAJ77aK2lcKGkuuLdSaGUeiw8TOpOrhnwbU2In0pV6stg+dLBkyLc7C4Dxjm
UJgzp+ltIdewI8dqYunuEP/+9Pr04e35dWCt3oBMXl1sVeZKdedcP3AsZa5th0g75BiAw3qZ
o6PE05UNPcP9Hiyh2lck5zLrdmpVbm2zuuMraQ+oUoNzq3A1OSHPEyVQ64fjg0tJXR3y+fXl
6ZOr8jfcrKSiyR9jZITbENvQFsAsUIlZdQOu+MCgfE2qyg5XlzVPBOvVaiH6ixKkBVJCsQMd
4Cr1nuec+kXZs1+0o/zEGU+kna0YiD7kyVyhj4b2PFk22iC+/HnJsY1qtaxIbwVJO1jb0sTz
bVGCU8PGV3HGOmJ/wUb57RDyBA9ps+bB175tGrd+vpGeCk6u2ICsRe3jItxGK6RsiFpb5r40
PZlow+3Wk1iF1CcpA7NABVZrz55AjtlxVPvtemXf+9mcGsb1KUs9fQmuwtFRE/6m9HW1zNMP
iM7YQFUH21y7ngHKL59/ghh338xUAFOlq7w6xIflTqWwCNzBP1PeATgFCW5Q3tjjXATG1Hqw
z4qNvI0JYZspNurPl2brxK19w6guIdwv3R+TfV/StV8RxNK8jXqz4CptEsIb03XzgHAzk/TL
27wz04ys76t899Jo39pSPmW8KRaii7CDBBt3KwYpWM6YN33gvMsZVAK2Bk4Ib7JTgGleD2hV
npT07vYSA1vRtnwAb7sb2lukgefWu5OEWSwKmVlspvxdFW0pLNCNMYo02MnuEOWdbUdhbGwe
8+ZFGzOHidPP+CswO2QXH+yNBSqFmbv0GdifT+Y7cVx27hpjYH+m42CdyU1HD+0pfSMi2v85
LNoLjoM3K/ZpkwgmP4M5dB/un3LNDuddK46sJEL4H01nFqIfa8GsmUPwW5/UyahJx8hQdF60
A+3FOWngxC4IVuFicSOkd046dOtu7c554CaLzeNI+GfRTioZn4s6Md64g5nvWvLfxrQ/B6AC
+2Mh3CZomCW4if2trzg1gZqmovNuU4dOBIXNM24UEhbe8OU1m7OZ8mZGB8nKQ552/iRm/sb8
WqotR9n2SXbMYrVbcyVGN4h/wmiVyM8MeA37mwjuZYJo5carG1fgBPBGBpDDGxv1f/6S7s98
FzGUL2J1ddcZhXnDq0mNw/wZy/J9KuBQWtKDI8r2/ASCw3hXGSVRsMUfCZihPP1+CjInPp2L
kO0+zVvcNjlR8h6oUqXVijJBj6q087EWH/vEj3EuElulMn58T4yJgBl7Y68sx/rknTAGw1EG
HstYP1s62ncA9uN2+sZvesWCDnRs1EhNbu2X/dEWQMrqfYVcVJ7zHCdq/Es21RkZcDeoRLc2
p0s8vOzFGNpHA9DZ6qoDwJxo6/Rid2Drh6xnd+UDXDevyj9uMaiPulHNcc9hfZ5e1IZqOkbS
qF2InBFW6hq94oP331xnz+oiA43gJEdXI4Am8H99YUcI2L6S5/UGF+BfUb9bYhnZYg+45ivG
cJku0QG/sgXa7mQGUMIhga4CfEFVNGV90F8daOj7WPb7wjayao5jANcBEFnW2q+Mhx2i7luG
U8j+RulO174Bp5gFA4G0p7pMVaQsuxdL26GeRZgzHI7SWpF9Ux6RQYiZxycyM276DZui2syp
9GKOO6HJYcaJr5CZICvUTJCN/ExQNyBWFHsUzXDaPZa2mUOr7HWbsrmCxudwuFtuK74ksRrh
die3Cg922e0dO7w8yoyp2MFVBph7uPvgPx6fJmL72BPs3xSi7Jfosm5GbU0XGTchuk2sr1mT
Du+cLY8bnoyM0VS3Rn0TDEPQmRVWR42nF2kfgqvfZPKL1f9rfhzYsA6XSaoqZVA3GNbfmcE+
bpASzcg8lg9nbKJ9pOCJFjnTsin3gbzNludL1VKSSY1PBa1EAMT2Ex8ALqqO4BlF94jxA+Co
X08lbaPofR0u/QxR06Isqlm1Jckf0Wo3IsTEyQRXB7uvuZdBc6cyfaA5g0H82jZGZDP7qmrh
OmX2nqNyzzz9t4skYtUPoHWqukmPyN8loPp9p2qQCsOgwmofQmrspIKi5/IKNK55jCef2YmP
zlf8+8tXNnNqI7U3l3wqyTxPS9ud9pAokQtnFPkCGuG8jZeRrRg9EnUsdqtl4CP+YoisBMHF
JYyjHwtM0pvhi7yL6zyxO8DNGrLjn9K8Tht9fYYTJs8ndWXmx2qftS6oimh3k+kCc//nN6tZ
hun4TqWs8N+/fHu7+/Dl89vrl0+foKM6Fg904lmwstfeCVxHDNhRsEg2q7WDbZG3jQFUu/cQ
g6esW50SAmbogYBGJFJ3U0idZd2S9ui2v8YYK7VuIknfeCVXve9MmiOTq9Vu5YBrZBfHYLs1
6bjIleYAmBcvuk1g4PL1L2O9TZgngP98e3v+4+4X1X5D+Lu//6Ea8tN/7p7/+OX548fnj3f/
[base64-encoded MIME attachment payload omitted]
3G8XTEqNXMURV+5M5kHIxTAE11wDw3y8UzhTvjo+YHvqiFhwta6ZyMt4iS1DFMug3XINpXG+
m+yTzWIVMtWyf4jCexd2jP1PuRJ5ISQTAZRXkFMnxOwCJi3FbBcL2xD81LzxqmXLDsQ6YMa0
jFbRbiFc4lBg54ZTSmoO4DKl8NWWy5IKz3X2tIgWIdOlm4vCuZ6r8Ijphc1li9yqTgVbFQyY
qIlkO86qarG8PatCz9h5etLOM+EsfBMbUweAL5n0Ne6ZCHf8VLPeBdwssEOOhOc2WfJtBbPD
0jvJMSVTgy0MuCFdxPVmR4rM+LqGJoBzq+8ucImMQq75Dd6frujkDWfP18t2MdufgPEl2HRr
43ECvx74TtaDkJuiFb4KmFYAfMX3ivV21R9EkeX8KrjWh+eTUgFiduyDcivIJtyuvhtm+QNh
tjgMlwrbkOFywY0pclmAcG5MKZxbFpSsyswH7X2waQXX45fblms0wCNu7Vb4iplfC1msQ668
+4fllhtRTb2KuTEL3ZIZ+uZGhsdXTHhzLs/gWOXIGkCwMLNCYsRKfe8fy4eidvHBY/I4dL58
/imuz7cHjpDFLlwz33BUdSYiO9L740laKbqEiXGQ8NS+AItKDbNgaPUlD9xfmjZ2OaypcBJg
HT0CtVomLNIbmnpqvYvYJjoxvaJZBlzYOuelkJwVG0CHr1F1zbUncFIUTNd29MinTLXbFZeU
PJdrbhBiVZRJyumWu4gbURcmk00hEoE0HaZ+RxUGp5Zv1V+sKBNXp90iiLiaki3Xt/G9/bwE
BlgfcSSMO2Rui0Guwi0CX7FNHy627BeI6uKUo45pLQX2F2YikuWFkVczUOXjUiHKfBPehsiV
yoyvI3ZH027W3GaDHEpMs+Um4iZLrYLLNCzfUE2bBOhqc56ABsXXyQ+GfP787cvr7WnLMsYM
12TMwHGUCqdZO8vjqreV9xPwTjwa0XUwepBhMRekvgR6hQm1xibkYxmrcdanpTZzC3o1ZZo7
+t1wFJqWx8xuAMDgFPusTavoeDiHRA0ZkMrSeRuOpQp5RKdcogDlsnxhj2TRZUTTEHRdpQrY
CPulxjBsbVeI8FVHMw1AGIL2XlCf9oog6CiGp6zkyuTm/8fYtTW7bSPpv+Ka550dkZQo6iEP
FElJyOEFh6B0dPzCytgaryuOT8p2aiv767cbICk00KRcKcfW9zXuVwKNbjNf09M6XFYKD3km
yEko4VwCVEc0j+eCVx9Qzjm6NmoNWLz20Eb2KZF+imh8MLsEiSkA8RBTZQenDKOyMLr6Jpqf
I351NUJlLx19Zdl3FIGxTfR49W8yI+HzchrmGvXCvtIdAFS1Vb+sR7Tey8PQXHfR5sXRAJTo
JoIAZRStXMhpA6Pjz0PUzY5GKyop29wJa5SgnI6lZ/Fw1adyT8UNEaychoXZxhEc1XF1BjIG
dxpMz7I0CvP4mMXMVm+Roq3/3omn6p76k/Kg7NmD8IEH1APB9euLfVr1PnqUgkFPOEL66mgb
VLkTZJRjtTja0wPqixFdSdQxdiNDAKVs5wDq7PSAgzNSxif0VEp31QJKbZspGFArbJa2Tmat
F/luLxNujnGqJptYEIFJ4+x0thHrj+W5MNcsLi2VKAmG4vglALO3JYxzYy7TNBxnyGnNyr58
vn39wa1Zbpbpc8/7kjUuEmOU+/PBtzqvI0XjDlalvmjUGjMmMEkDfsPOB3b4ddOJw6vHqaI8
YMaUx5wKYurQRvX1hX1JTUhjaHi6TXdKNFXT+eoZqUGzNNQDSr7GVdDTWxpwui6lKhPC8aDS
BfETURPN8tAq1GDPCpVRbBVa/XMydrVy4LbR7bChsFH5xe8uRR6oGnaPZtpH7h//uB9LDEXu
9yVsPg7syYUtUjPnFhbvKC47xToT2wT4+MLW8UdADl9N5EEIEnlVVCyR2ltBBFTRZg2xOYvx
ZoJ51AsEKio6ou2ZPDwHqDrEtku8ywGfrECPOOung4HDwJ7v+ZBT0BGpGx3cQcksOSKwH7Dn
mQmGrcvVhT1D4RrGDeOMJHz6ldciT69HnKXbgpgBoJJplV+P+2JZCDaSh7K4wr84sYrc8U3Q
eAdJGZzw7OtYow/SigvRuEOU1K7+jaqbZw+k1Tth3hv2gbrAdOvLE9WYAdynZdnYs8OAi1ra
GkFj3iouw/opUoU+iYre+5oZhPRuG8ZakQ9GcCwJmln4hU9GfaQnd+zikF3sRzaowUJjmiAa
8KKNIomms42VGLAlekEXapfUiDitozEmekWeQRvsosgDkQGkhdeYXkEHDzH3Fh5crHz49vb9
7T8/3p3+/vP27Z+Xd5/+un3/YT1bnpaOR6Ja9nr7Oirtei+f0X2k1z0sEFWtmva1PzWdLO3P
VJRRWXveo6qV/op1rE+hAA6e4gIfol7k2RPxVwmgfRePMmhOIO04BpUJTjCvtY4JTeTgDxpq
8j1iInmsqdbkHevdjYGm2rTudBmwLjKWxI9kSsKXN3Y7FKIh5AWdN87lbWS5qtE9jWckTDgw
LClIzskRQHP6/RUmwYLiOiu9POaihYFrKmDqW0y3GcMe2+KVGCgbgL6wFe1V5yjhQWZVFVLV
F2jmwj7UNb/dQ5EJNfq6el8m3hf90/6XcLVOFsSq9GpLrhzRSqjMn7UHct/UuQfSTeoAemY/
B1wp6Fq19HCh0tlUZVYST+MWbC/yNhyzsH16fIcT+yjPhtlIEvsoZoKriMtKWskSKlM04WqF
JZwRkFkYxct8HLE8LDLE7YAN+4XK04xFVRBXfvUCvkrYVHUIDuXygsIzeLzmstOFyYrJDcBM
H9CwX/Ea3vDwloXt114jXFVRmPpd+FBumB6T4r5NNEHY+/0DOSHapmeqTWhLCOHqKfOoLL7i
3U/jEZXMYq675c9B6M0kfQ1M16dhsPFbYeD8JDRRMWmPRBD7MwFwZbqXGdtrYJCkfhBA85Qd
gBWXOsBnrkLwjeVz5OFqw84EYnaqScLNhm47p7qF/72ksHLnjT8NazbFiAOiz+HTG2Yo2DTT
Q2w65lp9ouOr34vvdLictTBczFoUhIv0hhm0Fn1ls1ZiXcdERYty22s0Gw4maK42NLcLmMni
znHp4Y2XCMibfpdja2Dk/N5357h8Dlw8G2efMz2dLClsR7WWlEU+jhZ5Ec4uaEgyS2mGu7hs
NudmPeGSzDv65HeEX2t9whismL5zhF3KSTL7JPjyv/oZF5l0TWlN2XreN2mLfpD8LPza8pX0
hE+AztTq11gL2n2kXt3muTkm96dNw1TzgSouVFWsufJU6Czq2YNh3o43ob8wapypfMSJXq6F
b3ncrAtcXdZ6RuZ6jGG4ZaDt8g0zGFXMTPcVMcB2jxq+z8l3wn2FycT8XhTqXG9/iMkS0sMZ
otbdrN/CkJ1ncUyvZ3hTezynzyF85vmcGmfh6bPkeH1mPlPIvNtxm+Jah4q5mR7w/Ow3vIHR
fPgMpcSx8nvvpXpKuEEPq7M/qHDJ5tdxZhPyZP4mqvvMzLo0q/LNPttqM12Pg9vm3JHPw7aD
z41deL4/mQME8+787rP2VcIHbZZVco7rnsQs91JQChMtKALr215ZULINQutIqIXPoqSwMoq/
YOl3fAK2HezI7Mpqsq5oauYxyaWLY2jXP8jvGH6bpwOieff9x+CPbVJZMH6NP3y4fbl9e/vj
9oMoMqS5gGEb2lq1A6S1Vu4+jml4E+fX3768fUK3Rh8/f/r847cv+M4PEnVT2JJvRvhtLCTf
416Kx05ppP/9+Z8fP3+7fcB7kJk0u21EE9UAteY0giLMmOw8Ssw4cPrtz98+gNjXD7efqAfy
qQG/t+vYTvhxZOZiS+cG/jK0+vvrj/+5ff9Mktol9qZW/17bSc3GYVxE3n7879u333VN/P1/
t2//9U788efto85YxhZts4siO/6fjGHomj+gq0LI27dPf7/THQw7sMjsBIptYk9yAzA0nQOq
wXfa1HXn4jfvf27f377g4dXD9gtVEAak5z4KOzkcZwbmGO9h36tq63pZLCpi3e6Q9/XFviB6
Kl71/syB8eV0o7Fe2kdsBqEONQyWvrfn9eEozji2s6YdkRcNHmAWR/hczy+dS53SmnjDsVFU
YkmqGc43pWVoVHwZM2Ee3P93dd38K/7X9l11+/j5t3fqr3/7PifvYemR+whvB3xqmKVYaehB
DTQvMjdevD9fu+BYLjaEowVpgX1W5C3x6KBNvF/s1cKIv2/atGbBPs/szxCbed9G8SqeIffn
93PxBTNByqq074Q9qp0LmF5UXLzSqyFkHWVGC+w7OWm4pV8/fnv7/NHWFDiZ+yxrEjcibsfW
g+aeQNkV/TGv4FP0el9UD6It0M+QZ8r38NJ1r3hS3HdNh16VtPvReO3zGQ5NQ0eTf4ej6g/y
mOINtjXYa6FeFRrBtNLZ9539QN/87tNjFYTx+qk/lB63z+M4Wtsv4wbidIWpf7WveWKbs/gm
msEZedg17gJbC9/CI/trhOAbHl/PyNvu3Cx8nczhsYfLLIfFwa+gNk2SrZ8dFeerMPWjBzwI
QgYvJGzimHhOQbDyc6NUHoTJjsXJ+yGC8/EQ7WUb3zB4t91GG6+vaTzZXTwcdt6vRBFkxEuV
hCu/Ns9ZEAd+sgCT10kjLHMQ3zLxvGjbJE1nj4JRc4iBcKusbOMG+uoUDY3XRW2r5FTeHa1G
VHMmJhD0bSxOPg6Wiyp0ILIFeVJbomY+3ne55uhtWOv8ZQ1ZJ0YBnCta+zn2SMAcpe0s+Ayx
aD6CjsGcCbYPbe9gI/fEJdrIOBuIEUZnNx7o+7WaytSK/Fjk1F3QSFIjPCNK6njKzQtTL4qt
Z7LtH0FqSHpC7UvHqZ3a7GRVNeok695BtQYH7eP+Agu/dZqk6txXTDYrmweTKFBjxtalEmu9
2A7eZ7//fvvh74DGde6Yqqei6w9tWhUvTWtbghkkUllch6MOe+F0Ih5DXUWJ6tDY8Q5WBWuj
ANrjkX1ffqrQxCLWHLS2vZuBerwOjD4XbRvYUrY0oNYKIyP0SWb0GHIAelr9I0oaewTpKB1A
qjZa2spmLwKWcefnYK6jLC5FeTdKbigB38uryg1gUNqnCMPHeLBSRtdfJxHF2xWNRslKaLVy
pOiXwknE6zDQEndiMqQ30JfYrlH/mcKIQLezrWlkJ5iOiklRSrlMo/qOWJq7vwWjAG2REWxl
pY6MrDp10odJS49gKZl4oVN1jQM/7XO0eMUZFBuDoUIg6dlTIihP9FpH5rJnktfNbKunTCXQ
b0eIn6aJomYnRthx+KBh6Bgyx8n8WLg5MpSrsuq/PRkRP6sTA52ULKUT0RVlgd5QrQSqoizT
urkyqnzG/p6vKDTgxGR1ed0f+q6ic45BcWXMutKFpd33GugLpJQauDaBvd+7Y0RUndtDmrE9
faQimNy6zlaIujN6qewbCUUSnMTRHlRTWm3jx6lfFWa2LS/4gcpGsE4T62mjIERTSLI1yLTS
rxPJhN3flZojrS9vk91jbbwxbat37e0/t283PL35ePv++ZOtTy0ycowN8SmZkOtCgC7F1Tjq
bBQ5a/vJxOyoTirni+EbyKAkfBVsWM6xn2ExMFUSS6oWpbJKzBByhhAb8h3jUJtZytHksJj1
LLNdscy+CpKEp7I8K7YrvvaQI2ZMbE6ZtViyrH5zWxZXNVMpyKuU545FJWqect1U2IUPK6nI
NTeA3UsZr9Z8wfHZEPx9tPXwEH9uWnsbh1CpglWY4Ku1MhdHNjbn8aLFlE12qtNj2rKsazTE
puyNroU313omxCXj26qqZOh+i9i9I9/i6zC+ocQV1kpH+wRrT7uaUhTEh1eK6nSM6JZFdy6a
1imsZ3vRqf6lheoGsA6TE1nbMcepeELvzk5z77ugz7IzthNP5LYDVk3AxnsbBH1+kT5BtugD
2Mfk4baNwuab3K0OFPXYYVWt43tjlM9ej/VZ+fipDX2wVn6+qTnjEVQtxVoYS/uibV9nRihs
PjdBnF2iFT98NL+bo+J4NlQ8M0exDhropEw8Q2kNfb0Vtj9yzntW2CJm87ZvFFmB8dU0WTsH
AKb6M61LfdJdMVjNYJLBnn3s+SrHpVl8/XT7+vnDO/WWMV644aOuqAXk7OibJ7Y590m6y4Wb
/Ty5XQiYzHDXgHy9USqJGKqDIWpq/H6FwpWdabzR/fI90k5AQwnagncMN137AjWcq952k92J
wab0EJDfG+lbg+72O2br3hL2jIt3GF0xs2Ppwu2KX/YNBfMtMZPnC4jq+EACLyAeiJzE4YEE
Hq8tS+xz+UAC1p0HEsdoUcLRvaDUowyAxIO6Aolf5fFBbYFQdThmB37xHyUWWw0EHrUJihT1
gki8jWdWeE2ZNX45OBqVfiBxzIoHEksl1QKLda4lLvro81E6h0fRVEKKVfozQvufEAp+Jqbg
Z2IKfyamcDGmLb+6GupBE4DAgyZACbnYziDxoK+AxHKXNiIPujQWZmlsaYnFWSTe7rYL1IO6
AoEHdQUSj8qJIovlpDZPPGp5qtUSi9O1llisJJCY61BIPczAbjkDSRDNTU1JEM81D1LL2dYS
i+2jJRZ7kJFY6ARaYLmJk2AbLVAPok/mwybRo2lbyywORS3xoJISYz8DD8/5/a8jNLdBmYTS
vHwcT10vyTxoteRxtT5sNRRZHJiJ+8qAUvfeOX96RbaD1o5xeBdnTrj++PL2CTayfw4GBb/P
7BtR3aQtjuTRtCeQn9OSfsu6EhX9BnZpeSLmJXx+MbTCfy6nfxE5RvJAKm3wR7YgURSPJDLo
Z/lrPZfQ8brfs0R65Tse4AtnJccgTP2+sNzQYzTGhBnqgWayPxWltI/tBzJClzrkm2UKlaxi
z9+NHeWZDZfJIFh54YwJldw286WhVlYZX9nUPLgWTjcR6Sca1FUoM4VmERNisdSmM0PvOLqV
bkL6o7vKZxhArQusVD7DBjbrk1WypmhVebAAOJVK0f4+ofHKfhsjhpjXK/vcY0R5WWizK0VL
FjWyth4P1I9ByXHFhJKavaO2Nb076sZQ+mhuZHex/VAQ0dJHIQZTl17EJjm3GIMwW7rdjkdj
NgoXHoQTB5VnFh8jSexOpIY2tbKh0Ekpym4D+2wDXwILJTn8yIGlRIeHuJSyQXQmPbiCIB5o
lA486bwa8pmsNxTWHdJuHCxnd0bbBrSoiD/HSnWNdOpgiMWP2lSuC49Z9Iihyjxc145PXHWq
9hsGdY8jtNVlx+YPONCTNLn2ZA3sSk+FceUngobAO3l0No/TEVmHjOmpA5ldnnBmuWbOye/x
MFQJJENj11OcMe1EwaIqLs5Bb/s+dY7E263ahe61XJuk2yhd+yA5ILyDbioajDhww4FbNlIv
pxrds2jGxlBwstuEA3cMuOMi3XFx7rgK2HH1t+MqgEyTFsomFbMxsFW4S1iULxefs9SVBSQ+
0qexuPieoL+4omiB7FjUIWwxjjwVzVBntYdQ6BkXzWexXR1D4mTo3loQliiHWCyMQX5L76ud
IyqvkVu2C2yanGue9ilakfdOBgtCXy5I3E74FIWMXMjIRSsGCxksZrAd/4Wv4NvxbD+lUlEW
ryc3pnTfqDbygoYHOc741u4jmKCW+PUSuXkQeBPGy/x6OXObdbjIw744XswgfvApXW9kyzyw
gFM/aWjXcSZHhgvnuXXEcrrNxEFcCg7rZWs/HNWmJtkUkFDZLsH65IkoZRKmrxQmyAxYxTGQ
ocq1o+qzySK7s4tk0rOv2QASl/4QoB8a5VGblehTbFUOD1BNYY5oWeoUz8G+/FrH5Mv7BYhB
Mgo8OAE4jFg44uEk6jj8xEpfIr++EjSCE3Jwu/aLssMkfRilKYital5t7KV9z2kwfWhwmDlY
6PChv3dvPxoxpWh5rPAWkY3HtU1/elFS1NQt+x1zLbnfCfq5axFKtAeekPYLE5uglqpPqqj6
82A+3TosUm9/fUPdJ/d+V/thJYaVDUKNLxtMX2qSylJt5mhvjLrGjn/XUVXBxQdD/B48muH3
iBet2L6AkrIcuq5qVzB+nADiKnGBctDpeZWD65Og2EVRxcSNIPcKaIawD8IAPikHHp4kUdDY
wnfRWmbV1i/BYKu+77rMpQZfCF4I04j5/oqp4HRJRpxU2yC4JzMZvUy7MlVbgzM2L9HgshdG
tqJKw9kwNXTptnDzN968ey1Y69pC1ePUa7ChUFKoLoUGbTwG5gPifWmAja3lUvp9nLx7TNuh
dhWH9fF6LzqbqbSqv9cgBEdLdaprC9sFoCPRNGWPWvdpS9+VaFPhLdTFGcRXq2RjaySiBksJ
Y6meRII4WOn/SEKwvI0CEMHOfos0LE8jfa6f6ualpsGHLCqZ2J/5QFy2lTZHK+wpMu0qtNRK
aklDjtInVv2wMaoynxo2nlSVbfTH4Y5WVGvrW6m8sfiqRt+SCg0hZ7ZxZ7Qc7crj9uZBHB0d
RDqzv+JRDS2zGluWpDmhVXe2XRAMXx8NdFJGmCRZTO3RCS8jaIUi7Yip4nG8XG3j8kmE007V
JgxmHw8OoPSLjE9mj9JvN8Q7+7DVZFabpYeazDp/KKuOquCnHaxPXeBPjJOyDw9D/MTk5ogT
sBIZLAW4EEAaMJp/8U7kndV0CpiKct9cyQDoq9PZA4h1ev0qmQSbLMaSsLKM4NujchOYzvXb
F+j2lMYNSCjLs2JwDfVPqAavzRn+Em5ibyV08mWfK40OGIjEuPxTtBOjcWWoojolDwqMypwT
wCjYOeBQu44NRHNXgFcCwu4gZi09KbcIxkC9KkUFuxk/873MMwYdzPZSwqwWglSKtj5f5c+O
6GDNXkjhxqH37ZU6UhTnGCqoi0nTMjaWRXOxHTk0qbKf6hmZ1N7LGejuTte8/EJDBp8/vNPk
O/nbp5v2RP9OuSZbx0R7eezQ54efnZHBA8dH9GTffEFOLyLqoYAd1f3d2YNi0Ti9VyojbAx6
4vlpd4LF92hdCDWH3jFWPQQi7jVMlV9S6sdWj0In9B3zfMyOY9IJMSzPDmr6/5AwYYYF1ZG3
Uc/DtETwUtk2fnAio/GOyOiFOe/6vahzmGEVI5QLpRtv/6qPpfevvg3fSfYSsSBWqFUN0Q4/
Il+86kHcr2ccqA5khhjFRlvVAzrY9vjj7cftz29vHxgXPkXVdIXjxHfC+oy8+8KJnwswLkoX
eYYdC6Ew28p+FSHbMxuHLowUJTrXbhy4NXlpqDavpp7jy2aBSXNbI/yOV7ZN+DssUxZ+yTxx
WJf9JF+yGi9PHQ8Q6H0GpnWu0P/f2ps1N4706ML351c46momovtt7Za/iL6gSEpimZtJSpbr
huG21VWKt2zXsV0z1fPrPyCTC4AEVTURJ6IX6wEymXsikUjAUos5JxLHKk7H2Q799vT2WelL
/mzQ/DSP9yRGHypYxOkmC9t78ThKr4cp/BraoZbMxTohl9S/msU79+99C7CadlMEzxPom6Id
4SDePD/enl6PbsinjteNi9aTTOAgjcD1Hj3eBB+w3qy9yj5us0XJ/Iv/KP95ez8+XWTPF/6X
07f/vHj7dnw4/Q3reiD7DA/leVIHsIxGaemYQXBy+43WpqJ8UYJyNWYgXrqng7lBjZmIV+7o
uzhL2oAsm/l88nUUVgRGDMMzxDL0d8U5hoR+tPdaolTP1tu8d9KrbWkodKM8TjqTEMo0y3KH
kk88PYlWNLcEvYR/NcYkNRVnOrBcF23vrV5f7h8fXp70erQCqfAyQN6nSBJmD6n4SxwDynDk
DZfMoOGrc+Y7Ri2mdYx1yP9Yvx6Pbw/3IJrcvLxGN3pdWscU/adaBJal0L9m/uyQtAJpW4jL
DOaCoQkwpqe4+YUU6AOAvvW+2UW+74R9wxvxMs5uOcK9Fu6olHsTYrwv/s3Njj4ZtrEwoB2o
osc68IAfZUaFVOQtfN4zP2v/zlOU3iv2lOvvJ+rENCO4cVXFHES5n0Cl5I8fAx+xCsubZONq
MVPz6rd/0+JmY7IPn43oG5/ej/bjq++nr4/oX79dTZ2vxlEVkklgfpoa+dTNQvflX/+CjTRB
7BqVdbc5SXHZB+QnLxfyECwqhccMPRE1FhO3BVXAN7syM9bsMX1dra47I9E+7oVWcFOlm+/3
X2GOD6xF9qiKkTfYJYI1NQOZC0NbBytJyAuBoHBYU/tKi5arSEBx7EuJ8Lq4y+p4Unu+Dwfy
rJByWAJSe5zBzi4/mvlM8LDiZVA0O6wj5qEbCZVSJNW6rN28uJldB+WBCzpY6Wan2/MhIy5T
lWy8MsknskfKpHTSyx2diKl8y2u0Emx6qIODrhOOHY5R/nbmDhJ3DC8oPFLh+QA8VuGFnvdC
z2ShZ7LUuS912FNhZqFD4FD9JLP9ITA1/ikwFo/vcR2ur0JOMxN4pjOPNJhaCRFmlXfgc2MV
XejMCz3nhZ7JREWXeh6XOuw5cJKteEy8jnmm5zFT6zJTS0dtxAjq6xmHar3ZWCEwGyut4mVD
72aJOsbuGgppSCBwLGZa25Byr2E1iyzeWorAB6ho3MC5otTJUQVYJK4L6o7uFrMh9Q5+/GyX
x+IS84DqTPqa3GJlwW+x8AbLqKPG0wnWX6WhUdAQbbxcDNOuZsN5TgUNG9+S1jsWALPHQTbl
S3lPY25gCIziP77OF3YJHcdkVO+zuEL1t9uKLdP0Z0zsOt48NNb6rVGBwFHA8x3NiLN/VZEM
6NdkDSd4jK0c1XJs78w9cXcusyG+Tl9PzwOCaxPOdU8NORptuDg2tSitVB8Oyv0EbZxPdEf/
dJhcLS4HMvo1fUKbFeYR7tdFeNPWtfl5sXkBxucXWtWGVG+yPQaTgy6sszQIUawj5xLCBOIR
3vJ4TIvCGLCFSm8/QIYhXJS5N5jaK0tr5MVK7uhMcGlopnvjdK2pMKHjIecccQkNFaAdgUa3
00QlWbPCOkiUpH3jWy9Rbi0N3JY9zajCTWXJ2erIWbpFO6Dxy8JD5ffKqPDH+8PLc6MUcxvS
Mtde4NcfmavDllBEn5i7jgZfl97VjG61Dc7dFjZgE5Q7raYz+l6BUf1tBVKpQ0y8w3g2v7zU
CNMptYrv8cvLxdVUJyxnKmF5deV+QTqiaeEqnTMr+Aa30jYavmOINIdcVMury6nbkGUyn9Mw
Vw2Mrq/VtgSC7/qSo8QK/sv8zdqIhGQQBcKkI4/Hl5M6YXtFY9wQwL7ooCE9trWqoyBfUxeQ
1RjPbMwxGVq0hUnE7LZqDpirpA33XdZC8oon2cNvnBFsO0dtCZotpGFV+2uOR2uSr3XUUadh
Iu8qqD+rwDNR4mGdoDXJ4+kctqaE3ZhYW4ciZyFw7YXrOvEnvNVaO5CEdSLO+PlsghHVHRxE
FKqItgtUIq+PUSSBGtXiopPSejSi4yvCuKciCGmP1f5KhfmdIcOlGo1Qt7dGzbVL5Mfs1T4L
LIlwVUTo0k8Jk4pU+ye7BO3TOKzmqyVuZB3LhLKUt0742gZWc+yL1i74vxRmgh73GuiKQod4
ejlxABm2wYLMgeQq8ZifH/g9Gzm/nTQz6S90lfiwyhmlS6yjMg9CYTkF3oRuFIE3pU7JYKAU
AfW2ZoErAdAHO9soaPxFNp+jTrpNLzcuIi1VRgO+PpTBlfgpfKUaiHtKPfgfr8ejMdk+En/K
wlwliQdnwbkD8IxakH0QQf6qMfGWs/mEAVfz+bjmnl4bVAK0kAcfunbOgAWLiFP6Hg+vVVbX
yyl1MYPAypv/PwuDUpuoPjDL4OhAR/Pl6GpczBkypkHG8PcVmxSXk4UIqHI1Fr8FP33qCL9n
lzz9YuT8hl3DuKT0Cgz7EA+QxcQEEWQhfi9rXjTmJQp/i6JfUhkGY8csL9nvqwmnX82u+O8r
anMVXM0WLH1k3P2B3EdAe+fFMby9chHYvrx5MBGUQz4ZHVxsueQYWicZV28CDgs4d4k8fXze
MBJF8HMfZAIGBd4VLj+bnKOxzC9M92Gc5RiSuwp95pa7VV5QdjQKjguUjhlsrm8OkzlHtxGI
l9SO9cDC0rYmEiwNxuIQTR7ny0vZZHHuo0NCB5xOHLDyJ7PLsQCoea0BqBxuATI6UOgeTQQw
HtNFwiJLDkyoV08EpjQcAnoeZS7xEz8HOfXAgRl1CoPAFUvSeBFDDzPTxUh0FiHCkaH2dgdB
T+tPY9m0zT21V3A0n6CDF4al3u6Sxc1Fi3bOYs8Mchiao8EeR5FqppIn0LWH+pC5icx5IhrA
9wM4wKS7rS7+rsh4SYt0Xi3Goi26A6VsjtKfXMqRBqsF5MwhM5QxqJfVwNE9BOVg2wR0B+tw
CQVr85pbYbYUmQSmNIPM8xl/tBwrGH180mKzckQNyC08noynSwccLdH7qcu7LEdzF16MedhB
A0MG1IGAxfhdh8WWU+ratsEWS1moEuYeizKHaAIH5IPTKlXsz+Z0ola38Ww0HcH8ZJzoKHbq
rKj79WIspt0+AlnaxmBieKNGa+bg/z7I2fr15fn9Inx+pHfKIN0VIYgscajkSVI0RjLfvp7+
PgnxYzmle/M28WfGoS+xPelS2cdIX45PpwcMDnZ8fmPaM/OipM63jTRK90gkhJ8yh7JKwsVy
JH9LUdpg3FOwX7L41pF3w+dGnqBHWXq94AdTGRnAYuxjFpIBfrDYURHhwrjJqZBb5iWLqfRp
acSM3tZcNhbtOe7svRSFUzjOEusYzgFeuok7deH29Nh81wQa81+enl6e++4i5wZ7FuRrsSD3
p72ucnr+tIhJ2ZXOtrI1CCvzNp0skzlaljlpEiyUqHjPYB3k95phJ2OWrBKF0WlsnAla00NN
uD07XWHm3tv5pov389GCCe3zKbt9hd9c8p3PJmP+e7YQv5lkO59fTYp6xdw1NagApgIY8XIt
JrNCCu5z5jfd/nZ5rhYy4N78cj4Xv5f892IsfvPCXF6OeGnleWDKQ1Mul1SHEORZBTIwFX3L
2YwenloJkjGB5Ddm504UBRd0e0wWkyn77R3mYy4ZzpcTLtSh51wOXE3YcdLs4p675XtSOqgw
eipszhPY2+YSns8vxxK7ZLqFBlvQw6zdwOzXSRTIM0O7iyj6+P3p6Z/mLofP4GCXJHd1uGeu
081Usncqhj5McSJKOAyd2otFUmQFMsVcvx7/7/fj88M/XSTL/4EqXARB+Ucex20MVPsgyDw0
uH9/ef0jOL29v57++o6RPVnwzPmEBbM8m87knH+5fzv+HgPb8fEifnn5dvEf8N3/vPi7K9cb
KRf91hqOTmxZAMD0b/f1/23ebbqftAlb2z7/8/ry9vDy7Xjx5mz2Rk034msXQuOpAi0kNOGL
4KEoJ1cSmc2ZZLAZL5zfUlIwGFuf1gevnMBZjfL1GE9PcJYH2QrNyYEq2JJ8Nx3RgjaAusfY
1BgvSSdBmnNkKJRDrjZT6xDdmb1u51mp4Hj/9f0Lkd5a9PX9orh/P14kL8+nd97X63A2Y+ut
AahjLu8wHckTMSITJjBoHyFEWi5bqu9Pp8fT+z/K8EsmU3pkCLYVXeq2eC6hZ2kAJqMBrel2
l0RBVJEVaVuVE7qK29+8SxuMD5Rqx57dRpdM2Yi/J6yvnAo2/txhrT1BFz4d79++vx6fjiDH
f4cGc+Yf02U30MKFLucOxKXuSMytSJlbkTK3snLJAje0iJxXDcrVyslhwfRB+zryk9lkwZ3C
96iYUpTChTagwCxcmFnI7nQoQebVEjT5Ly6TRVAehnB1rre0M/nV0ZTtu2f6nWaAPcifoVG0
3xzNWIpPn7+8K/PHh7XEi6mlYvARZgQTGLxgh5ovOp7iKZtF8BuWH6q2zoPyioWEMAi3OCwv
pxP6ndV2zAId42/mugrEoTEN6YkAc0EFZ3uq/4XfCzrx8PeCXgzQ85OJ2YWeMUj/bvKJl4+o
VsMiUNfRiN7G3ZQLWARYQ3aHjDKGPY0qBTmF+mwyyJjKifRWh+ZOcF7kj6U3nlDRrsiL0Zwt
R+1BMZnOaXjeuCrmVHqO99DHM5/aUHsHWO/F8o4IOYmkmccjlGZ5BQOB5JtDAScjjpXReEzL
gr+ZBWF1PZ3SEQezZ7ePyslcgcRRvoPZFKz8cjqjoZMMQG8X23aqoFPmVGVrgKUALmlSAGZz
GnZ1V87HywmRF/Z+GvOmtAgLAhkmRtskEWpwuY8XzNnjJ2juib1I7dYTPvetxf795+fju72n
UlaFa+6F0/yme8f16IopoJtrzsTbpCqoXooaAr/w8zaw8Oi7M3KHVZaEVVhwySvxp/MJi1hi
V1eTvy5GtWU6R1akrHZEbBN/zmxmBEEMQEFkVW6JRTJlchPH9Qwbmohnr3at7fTvX99P374e
f/AHKaig2TF1FWNsRJGHr6fnofFCdUSpH0ep0k2ExxoS1EVWtQ8EydanfMeUoHo9ff6MJ5Tf
L97e758f4Tz6fOS12BaNBw3NIgHtqYpil1c6uXUYcyYHy3KGocIdBMPjDqTHiI2aAk2vWrNt
P4OwDMfvR/j38/ev8Pe3l7cTHi3dbjC70KzOs5LP/p9nwU57317eQeA4KUYa8wld5IISVh5+
kzWfSa0IC8FtAaon8fMZ2xoRGE+F4mQugTETPqo8lieMgaqo1YQmpwJ1nORXTUCiwexsEnu0
fz2+oYymLKKrfLQYJcQ0a5XkEy5v42+5NhrMkRZbKWXlFfRtXLyF/YAapebldGABNQEWCSWn
fRf5+Vgc3PJ4zLw5m9/CasNifA3P4ylPWM75/ab5LTKyGM8IsOmlmEKVrAZFVfnbUvjWP2en
2G0+GS1Iwk+5B1LlwgF49i0oVl9nPPTS9/Pp+bMyTMrp1ZTdtLjMzUh7+XF6wkMiTuXHEy4V
D8q4MzIkF+SiwCvMmz/mrSZZjZn0nLNXp8U6QDe/VB4q1szz8+GKS2SHKxbSD9nJzEbxZsoO
Eft4Po1H7amJtODZejaOG95evmJUhZ9a1UxKrk+alGOhJ/lJXnbzOT59Q+2eOtHNsjvyYGMJ
6aMBVBpfLfn6GCV1tQ2LJLMPA9R5ynNJ4sPVaEHlVIuwy9oEzigL8ZvMnAp2HjoezG8qjKKS
ZrycL9impFS5k/Hp22n4ge8LOBAFFQfK26jytxW1zkUYx1ye0XGHaJVlseAL6Yud5pPiebNJ
WXhp2bjDaYdZEjZRxk1Xws+L1evp8bNi9o2sFRw9ZkuefO1dhyz9y/3ro5Y8Qm44s84p95CR
OfKi4T+ZgdT9GfyQQZ4REua+CBnzYwWqt7Ef+G6ullhR21WEOwMkF+axKRuUx700oLFVEph8
a49g645QoNJ+29T3VgBhfsUe9CPWuIrj4DZa7SsORclGAoexg1AbnwbiXs4MaOzz442E7WLA
weswTFbeHQfjfHpFzxAWs9dRpV85BDRqkmBZukidU6+/PepE2kaSMfMRED7Ojqh3GMsoAxUa
9CAKYMzSg0T4bENK7ntXi6UYMMxPHQL82aZBGgty5pbOEBwf3WbGyAd5BhQ+kg0WT5Z+HgcC
ResdCRWSqYokwDy3dhBzQNmguSwHuhXlkDFLF1AU+l7uYNvCmdzVbewAGMeeg9YXKcc+dTG+
o+Lm4uHL6Vsbr4bsdcUNb3N8KLGJfAeo88TFYJ+p0+LPscT3E4VZ+oqyWB3RG0GO85EvaNa3
BCHHsPeEfOfyYBWhdYFF7nI0XdbxGCtO8OZxRzzheOO9NmKvInq/ncALQlfErkMTfKHv8Ww+
GgeTHi1JOwtg1fGROWevVFsidI6LYqwHQWrHvsmOygWzJeoaaFloyFVGaLPfLkuRTedGg9TS
PEqB3sglFtHnhBbKAvrUxGI5bQwLlSHhikt87MMKCFDprze8h3OvqCJUOqBE4dMlxfoMgx6C
/69gZNDDO6Cth2noliCkHiaNnSRy8HdJjVsF0SzAV1YhyxvRtLIKmLYfupebhTvh6LNOjWhe
gYoeaV3CqbWLM5C/TCwvf8vbmVFYM8Lob0ZDr7mRa0ZXw9zzr2v2GsmaxVUwXSdc54XmVpAg
8ytqdmXeAW9xKJtQzj51WdKNxvMUr9rSV/sNeChZ9AaLGl9AVPfcwEK2aVAp3TC4McqT1G0Z
XEsMLZ4dzIgYm1uJx15aRTcOamUHCYsdnoA2ehm0rlN8tOCVmOKC2RI6XygqgU0Bi6sRxS2p
9BMXMxYfDor7a5KP506rlZm/zjeeA/NwBRa0k0lDRRw0S3DdzHO83sQ7p6T4GrrHGhf0bVBy
Nch4S9TimDO3+VYvsL27KL//9WZeQvc7deuIDch9HgQ0gWXrgJERbkVQfGqZVRtOND5Xewh5
0Mm+k4n1lQ5kB0YfqfqHbQABLQ06xcTXm5xgxvByZYLEKJR6c4iHaeOJ91PiFOWJUOPA0H/n
aKaGyFB7qRdnm7N8bku03sigDFtO8e826a5Uvo2HtLLgrdd56jdhdLSv1GmptEJPEC2elhPl
04jiQAiYmIz5mIgdHn3N1MFONzcVcLPvHOFnRcGejlOi24YtpYzQ2/kAzYv3GSeZ97D4CP7G
LWISHWA5Huizxs+xk6hxiqzguD/gbqxkVeL+nmZK37TimJOf3RPqfXGYoPd/pxkbegFiHM/V
+p2eXs7NM+l4V+LFjTtYzO6n9aYluI1lZDTId2RC3jgZUvquoqs+pS4PZxLbiJUaHY6V9WSZ
JrDTUnmGkdy2RZJbjySfDqBu5sbJvFtaQHdMjdOAh1Ll3QZOc6DvMjPuSkGxb77c8nl5vsVY
C0mQLJghDVIzP4wztGYuglAUy8hJbn6NE6yb5WgxUwZB49L6BiNeDiSOTOLDUGIcsBMFZ+7V
etTtPIPjMrQtBwhlmsMpMEyqjGnBRWLZpYRkxs1Q5tpX2zo7LVJ4xgOsi7eh0nRY25B6mtsm
jCbW8t5JRj5IwF+H0QA5TBJ/gGTWLncUc7pSXEaHoe6usr0/JrchupAvd3k4VDKnxZuTUZDb
kI0q0cy8YbJblNaLgTPpO4JT9zaYmktp3B8gxdlhO7nTTUZJ0wGSW/L+lLv1Re/h8wdUjo2n
UExoEkd86+izAXq0nY0uFQHPaMoAhh+i36ywe3CSGBzdReWTHadYxxROgiBZjrWJ6CWL+Uxd
CD9eTsZhfRt96mGj2/TtEZPvkIbC+wBP4lEeiqZHJyRjFmbToFG9SaKIxzi0uz0eABvtsTLj
ON2pXaeiNnJGNkR0823eqHUxsvq7OXbW6JKgcyOmnoyCGH1HfwypHjuhFw7wg6vrELCBQeyp
5viKQbHN1d+TNRJ2tZGo4vON+yzhvR5AdN2g4fMfPzQ8FUAiACeJccXGQmtRh/YOe1DuONhK
muhFhVOsx8+JBoqMq+0uDUIQ+ThsI2M4RYDpqZQr8RcT0VJGALVI3/Nn+qNNV1C/QDAaZ/xX
G5Whvi2iKhS0a1h6KnGhZxMlXgs3zyAfX19Oj2QQpEGRMdfSFjAxCzAYDYs2w2h0fRaprI1S
+eeHv07Pj8fX3778d/PHfz0/2r8+DH9PjWDRFrxNFkerdB9ECREpVrHxiwttT92gpgES2G8/
9iLBUZGGYz+AmK/JaLYfVbHAo5Fl1rIclgnjePUgJGm8ETKM/ID6aIDIvEW3g2jXJQ71WhTT
/SkvUi1odK+Rw4tw5mc0aKwg1CXVWjcuk0LuiNAmabUpIYYwcL7UUpVvoYsFUQg8BYiPWGl4
reVt3ryXAfXf2MtuPJcOV8qB53K1Mey2DR9WGtv6jqRzshMs1Fayr89kda0bfc7fecdX8ynT
fQmNuqGOlgtvj35InB5o3u6r+cjgkCaKT8tpX6TcXry/3j8YIx25K/GwXlWCBtpw6Fh57HDR
E9BjdcUJ4mUcQmW2K/zQ9YNOaFsQv6pV6FUqdV0VzMGgFQCqrYvwjblD+W7XwRs1i1JFQfzV
Pldp+baGDv0jGbfNu82UKXzxV51sClcVLCkYkZVsATYOVo5ruHhy6ZDMfbqSccsoTM4k3d/n
ChHH3WBdoPuq6CB9q3b0RmbTvwrr5kw+2mlpiedvD9lEoa6KKNi4jbAuwvBT6FCbAuS4dzp+
T01+RbiJqFIddhgVN2Cwjl2kXiehjtbMnz6jyIIy4tC3a2+9U1A2M1i/JbnsOaqrgR91Ghof
anWaBSGnJJ7RuvEbREKw79tdHP4rvPkREnot4qSSxWwxyCpE13IczJi72bBb8+BP1ytsllsO
+rMut0md7nB9i9B56QY27TGxLCP5dOv6Lq4iGDKH/sUSMUJXYgrs0IXH5vJqQlq8AcvxjBoe
IspbFpEm1q1m8u4ULoctMKfbQcRCvMEv41GVfwQD03GPygA0gQC4V+IOTzeBoBmjdfg7ZQcp
iqJQMkxZUunSJabniDcDRB75xyEZiWGfVTLEK2dKymR5RWNPDrDQVyguS1bimegcx41fsgeq
LgfGJkCT+TLiMfBUxnP00r9kL3EUjiBZ0otlhSNZMieqKsfkZxwiYAJjceyjGNUqm3oirOMp
60n6BsJPK0lo308wErozvQnpflehCtcLAqqn60N1Vv6q9r284rGYeFzPDF91oVaWBpQxaBOY
rbfd59aX1h/A6evxwioxqNdnH3bCEAPzBk1UjT7rvYdG1BVIUSXakDCrzbUJtkfVH+GhmtT0
vNEA9cGraEjUFobhF8HS5ccuyUSKYo+SgTKVmU+Hc5kO5jKTucyGc5mdyUVYpBqsP86TT3xc
BRP+S6bFgCIr0w1EgA+jEo/qrLQdaMIlKbhxpMddpJOMZEdQktIAlOw2wkdRto96Jh8HE4tG
MIz4NApjLZN8D+I7+LsJ/lnvZxy/2WX0nuWgFwlhaiqNv7M0Rru40i+o0EEoRZh7UcFJogYI
eSU0WVWvPWbKs1mXfGY0QB1nJupVHcRkQoOQLthbpM4mVJvYwZ0j9bq5ylR4sG2dLE0NUHi6
Zvf1lEjLsarkiGwRrZ07mhmtTcRzNgw6jmKHt6wwee7k7LEsoqUtaNtayy1cY5ToaE0+lUax
bNX1RFTGANhOGpucPC2sVLwluePeUGxzOJ8wbqbYWdXmY2K3Wq0yl9mbr+BVML72UYnxp0wD
Zy74qawCNX1BBYRPWRrKViu5psv+BvGRyeH6CouzmC/HFqlXODNA/qTfiTDobibCxGF0AvQu
eDdAh7zC1C/uctF4FIYj3qYcokV2/pvfjAdHGOvbFlKW94aw2kUg8Kfo8zb1cKdnX02zig3Z
QAKRBcTzibUn+Vqk2c/R6DOJzACh0az4Wml+wmGtMne0Rjxas8EIp5q0athuvSJlrWxhUW8L
VgU9Kd2sE1i2xxKYiFTMlNjbVdm65Pu2xfg4hGZhgM90XE0kXCcF1/5CR8XeHV98OwwWliAq
UGIM6FagMXjxrXcH5ctiFiuQsKLaXP1yfYB+NhVUqUkIzZPl2N3Wf9P9wxca4HRdCkmiAeQG
0MJokpNtWMSbluSMYwtnK1yL6jiisqkh4RQsNUxmRSj0+71zKVspW8Hg9yJL/gj2gZFgHQE2
KrMrNDZiwkgWR9Re+RMwUfouWFv+/ov6V+zb2az8A3b0P8ID/jet9HKsxb6RlJCOIXvJgr/b
6Nx+FoSoNPhzNr3U6FGGoXtLqNWH09vLcjm/+n38QWPcVWuiBzBlFiLvQLbf3/9edjmmlZhe
BhDdaLDilh08zrWVveN8O35/fLn4W2tDI78y+wEEroV3S8T2ySDYvrQPdsyMBhnQwJXFbkAQ
Wx1OUSB9UOecNsb0NoqDgr5cuA6LlBZQXLFUSe781LY+SxAihQUj1JRRh4Db3QaW5RXNt4FM
0cmIC5N1ADtVyAIEdnbkm2iD5nC+SGX/J3obJufeK8QcUXqu+3RU+mYHhvaowoQun4WXbqTM
4AU6YAdTi61locwmrEN4R1J6G7YrbUV6+J2DfMwFWFk0A0h502kdefaRsmWLNDmNHNzcBssA
Fj0VKI4Ia6nlLkm8woHd0dTh6qmsPRUoRzMkEaEStZtcdLAsn5gTJosxcdNCxjuFA+5WkfWA
wb+awNCvU5AxL05vF88v6L7l/f8oLCCMZE2x1SzK6BPLQmVae/tsV0CRlY9B+UQftwgM1T1G
vApsGykMrBE6lDdXDzOx28IeNll7yFXSiI7ucLcz+0Lvqm2Ik9/jcrAPGy+TmcxvK35blRIn
JLS05c3OK7dsNWwQK4y3gkjX+pxsRSWl8Ts2vFJJcujNxrGvm1HDYRTpaoernM0zqHOfFm3c
4bwbO5gdqQiaKejhk5ZvqbVsPTOmEWghgUNaYQiTVRgEoZZ2XXibBEODNfIfZjDtZBGpV0mi
FFYJJvgmcv3MBXCTHmYutNAhsaYWTvYWWXn+Ncb2ubODkPa6ZIDBqPa5k1FWbZW+tmywwLUf
and+EEiZaGF+dxLTdVLCjnBX4fXOaDIbuWwxqkzbFdTJBwbFOeLsLHHrD5OXs8kwEcfXMHWQ
IGvTtgLtFqVeLZvaPUpVf5Gf1P5XUtAG+RV+1kZaAr3Rujb58Hj8++v9+/GDwyisFxo8h5Hk
gNJgoYHZAa0tb5a6jMxoqsfwX1zQP8jCIc0MabM+LGYKGR8wg1CJL+omCjk/n7qp/RkOW2XJ
AJLknu/Acke2W5u0q3OXmrCQuoIWGeJ0rixaXNNitTTloqAlfaLPVTu0e3eCB5A4SqKqfySe
htVtVlzrMnUqz2aoYpqI31P5mxfbYDP+u7yl9zmWg0YqahBqdJ22u3ns3WW7SlDkymq4Yzgb
khRP8nu1efqIO5dnNXBBG8X1w7+Pr8/Hr/96ef38wUmVRJtCSDcNre0Y+OKKGhsXWVbVqWxI
R4GCIGqKbOywOkhFAnkoRigqMeZevQtyV45rWxHnVFDjiYTRAv4LOtbpuED2bqB1byD7NzAd
ICDTRbLzDKX0y0gltD2oEk3NjP6wLmnoy5Y41BkbswaAYBZlpAWMHCp+OsMWKq63sozqUO7S
gtrF2t/1hm58DYbSg7/10pSWsaHxaQII1Akzqa+L1dzhbsdClJqqh6hcxtca7jfFQGrQQ15U
dcFiK/phvuWqTguIgdug2qLVkoZ6w49Y9niKMPrDiQA91G/2VZPh9QzPbejBJnGLOoitIO1y
H3IQoFh7DWaqIDCpU+wwWUh7kYXqIGHGa6lD5Shv0wFCsmoOL4Lg9gCiuMwQKAs8rvqQqhC3
ap6Wd8dXQ9OzuDJXOcvQ/BSJDaYNDEtwt7KUuuGFH73Q42ojkdyqM+sZ9WbHKJfDFOp2lVGW
1ApEUCaDlOHchkqwXAx+h7rtFpTBElA/uoIyG6QMlpqGLBKUqwHK1XQozdVgi15Nh+rDwgvy
ElyK+kRlhqOjXg4kGE8Gvw8k0dRe6UeRnv9Yhyc6PNXhgbLPdXihw5c6fDVQ7oGijAfKMhaF
uc6iZV0o2I5jiefjgddLXdgP44raaPc47OI76nizoxQZSFpqXndFFMdabhsv1PEipP61WjiC
UrHA8x0h3UXVQN3UIlW74jqiOw8S+CUJM7+AH3L93aWRz8xXG6BOsyLx4uiTFVTJ+5eGL8rq
W+Yvhtlg2XhQx4fvr+j38eUbOqcllyF8r8JfIDHe7MKyqsVqDlJTGcEZIa2QrYhSep29crKq
Cjx3BAJt7rwdHH7VwbbO4COeUAQjyVw1N3pFKtK0gkWQhKVx41EVEd0w3S2mS4InOiMybbPs
WslzrX2nOTAplAh+ptGKjSaZrD6sqZO4jpx71KI/LhOMqpujsqz2MHT6dHK5WLbkLT662HpF
EKbQinhLjxe1RkbyeQREh+kMqV5DBiiOnuMx5sY5Hf5rkIbRBsC+gyBVw1OVb1KiFnwbxjk3
dVTIthk+/PH21+n5j+9vx9enl8fj71+OX7+RB2Fdm8E0gEl6UFqzodQrkIgwhq7W4i1PIzaf
4whN+NYzHN7el9feDo+xxoF5ha9S0OBxF/a3NQ5zGQUwMo0kC/MK8r06xzqBMU+Vr5P5wmVP
WM9yHI34081OraKhw+iFgxi3VeUcXp6HaWAtTmKtHaosye6yQYJR/qAdSV7BClEVd39ORrPl
WeZdEFU12pOhenSIM0uiititxRl6hBsuRXfC6Exowqpil31dCqixB2NXy6wliaOITieqzkE+
eWLTGRpLNa31BaO9xAzPcmpvRvtjHLQj85InKdCJsDL42ry68+gZsx9H3hp9MEXa6mnO4xmc
k2Bl/Am5Dr0iJuucMfAyRLxSD+PaFMtc/v1JlMsDbJ0xoarPHUhkqAFeg8GezZO2+7Vro9hB
vdWWRvTKuyQJcY8T22fPQrbdIpLG6JaldVfq8mD31btwHQ1mb+YdIdDOhB8wtrwSZ1DuF3UU
HGB2Uir2ULGz5jtdO0bmFXKCpdJuZJGcbjoOmbKMNj9L3d6pdFl8OD3d//7ca/cok5mU5dYb
yw9JBlhn1WGh8c7Hk1/jvc1/mbVMpj+pr1l/Prx9uR+zmhpVNpzKQVC+451nVYUKAZaFwouo
oZtB0ebjHLtZR8/naITNCG8koiK59QrcxKhcqfJehwcMyvpzRhMW+peytGU8x6mIE4wO34LU
nDg8GYHYCtHWcrIyM7+5Smy2H1iHYZXL0oCZYmDaVQzbLtrG6VmbeXyY09hBCCPSSlnH94c/
/n385+2PHwjChPgXfXfPatYUDMTbSp/sw8sSMMFZYhfaddm0ocLS7LogO2OV20ZbMY1WuE/Y
jxr1d/W63O3onoGE8FAVXiOYGC1fKRIGgYorjYbwcKMd/+uJNVo77xQZtZvGLg+WU53xDquV
Un6Nt93If4078HxlLcHt9gMG4Hx8+e/n3/65f7r/7evL/eO30/Nvb/d/H4Hz9Pjb6fn9+BmP
lr+9Hb+enr//+O3t6f7h37+9vzy9/PPy2/23b/cgyL/+9te3vz/Ys+i1uV65+HL/+ng0oRj6
M6l9nHgE/n8uTs8nDNR2+p97HiQUhyHK2yiYim184/t407FByQ1GkV/FqBRG+U+pHWPG2QS8
7IRiIWMuf23OYka8Ho9GLo8d/KWWvNilxtDGOWmYehizcBAkui7JUpcDHwVzhv5ppd5WLXm4
qbt4zlIx0H78ADPR3NpQpXF5l8qAuRZLwsSn50uLHliEcgPlNxKBNSZYwDrsZ3tJqroDGqTD
Y1PNLigcJiyzw2X0DXj0sHa+r/98e3+5eHh5PV68vF7Y02U/uCwzmup7LBY6hScuDvumCrqs
5bUf5Vt6CBEEN4m40ehBl7WgG0GPqYzuyaMt+GBJvKHCX+e5y31N3/W2OaA5g8uaeKm3UfJt
cDcBf5zAubvhIB75NFyb9XiyTHaxQ0h3sQ66n8/FQ40GNv9TRoIxi/MdnJ+u2nEQJW4OYQrL
VPdYPP/+19fTw++wEV08mOH8+fX+25d/nFFclM40qAN3KIW+W7TQVxmLQMmyTNwGgn1lH07m
8/FVW2jv+/sXDPD0cP9+fLwIn03JMU7Wf5/ev1x4b28vDydDCu7f752q+NRpctuRCuZvPfhn
MgJx7o6HSuxm5SYqxzQupCDoHVCGN9FeaZCtB4v0vq3jykSrRrXVm1uDldvK/nrlYpU7sH1l
GIe+mzamhs0NlinfyLXCHJSPgKh2W3juNE63ww0cRF5a7dyuQTvfrqW2929fhhoq8dzCbTXw
oFVjbznbcGTHt3f3C4U/nSi9gbD7kYO6/oIAfh1O3Ka1uNuSkHk1HgXR2h3Gav6D7ZsEMwVT
+CIYnManrlvTIgm0KYAwc5ndwZP5QoOnE5e7OTo7oJaFPRlr8NQFEwXDZ1+rzN3zqk0xvnIz
NqfrThI4ffvC/F50C4Hbe4DVlSIPpLtVpHAXvttHIEvdriN1JFmCY0fSjhwvCeM4ctdd37go
GUpUVu6YQNTthUCp8Frf4K633idF1Cm9uPSUsdCuxspyGmprbJEzL9Rdz7utWYVue1S3mdrA
Dd43le3+l6dvGE+OnS26FlnH7HVLu75SS+sGW87cccbstHts687ExiDbBl67f358ebpIvz/9
dXy92ByfMUSYVjwvLaPazzVhLyhWqN9NdzpFXUYtRVuEDEXbkJDggB+jqgrRj3jBrpqIxFZr
QnVL0IvQUQcF545Daw9KhOG/d7eyjkMV4jtqmBqRMluhlakyNMQFEJHSW08J9Pjx9fTX6z2c
215fvr+fnpVNEIOMawuRwbXlxUQlt3tPG4ngHI+2Zm3t1SJy2YmrZmBJZ79xLnUnF57PgYqP
LllbsRBvN02QfPGofnW2joM7LMvpXCnP5vBTSRSZBvbFrSvGoXMrL45vozRV5gBSbVyI0m0Z
Sqz1VcNyLGFVcUc2JTrWcAqLvpJQjp+Woa7OcwxX0hB/WsqftwNwDNYDvcv6npcM7b2cpxl6
6GA+LJXFlzJ7Zr34Jd7zGQ03QMfyUR9FHd2oprVZxLh4LKQhDuvrqK62cfAnzOqfshuFmeUm
18Dnm/eXu+HmJ6xdJ5xny6/9nzPhrnOOKcg9bzLcn3nkZwc/VNQSZqRCSQvlhA+kxn364Bif
u/ugWWBM8MohdQXhUNbfnlppy3NPLpWtoadGysGsp2qqCpYzjBc9d9/Xqwx4Hbj7vmml/Gwq
+3M4U+uQWaWjA9pgKGsmdnv7aJcIrOdNI5C1DmdItZ+m8/lBZ0k82GQHxlZDg7VY0yEBQ+ZX
YZZWh8GyNUVnb1II+WZgp7nBJzpDol7HMDCEkNYIataeursZ0JnaD6mXJANJtp5ylyDLd2ts
VOIw/RMOnCpTlgzOzijZVKE/vP+4wTkJsXGXOTRD3VCjtMu2YVxG+pC2jk1UkgkvlCsnDrNG
rUNcwQamCfPaQijGS3oZ6pO9Jbrnro56o6+nhjY0Ig1xmxdnSjQolnhJnGE8zM1BrymhnxNO
vImilkZKG4Yg80uji9COxAN8qqJwiJcpGvmtron9oRLz3SpueMrdapCtyhOdx1yw+mHRWHiG
jhNA2GrLpXF4ilTMQ3K0eWspL1t7pwEqauExcY839915aN+cGQ8J/Zt2e447vr6f/jbK7LeL
v9Ff/+nzsw3o/fDl+PDv0/Nn4lC1s0Iw3/nwAInf/sAUwFb/+/jPv74dn3oLR/MOb9h0wKWX
5L1lQ7V34KRRnfQOh73enI2uqPmgtT34aWHOmCM4HEa8Mm59oNS9Z5xfaNA2y1WUYqGMr6h1
2yPx4JHaXjDSi8cWqVcg52xhryPrIfrh8ora+BOhL5U94fJrBRtsCEODGsWYs6U5ZWrUNkBg
CcuIjxa3hQlwREckZYHdY4CaYljEKqKGln5WBCy8UoEidLpLViE1c7C21cxpYBu1EOOeck+b
GFC5tl5uyITG2uE7RT/JD/7WWsEV4VpwoCeYNeonG+fDLLBjlwcsDrWXplkl7bqjtPFwlfM9
x8coIxUTOPzxgnO4ana/jqpdzVNxTT/8VOzqGxwWt3B1t+TiBKHMBsQHw+IVt8IwTXDASFEF
Cn/Blniu8vEv6YBduRcaPtHuyxsMazrrKDYsbPoGL2m9QZYhKkyZIEvUltSdBiBqHWZwHL1f
oNaM62A/Wc2PQHU/B4hqOeuOD4Y8HiC3Wj7dy4GBNf7Dp5q547W/68Ny4WAm3E/u8kYeHQ4N
6NHHBD1WbWHaO4QSdj8335X/0cF41/UVqjdMdCKEFRAmKiX+RE0xCIG6J2H82QA+U3Hu0KRd
zJS3ECBpBnWZxVnCQ8/2KD5NWQ6Q4ItDJEhFVyCZjNJWPpmFFWzAZYjTTcPqa+pVjOCrRIXX
1DJ6xb0hmmfSaBbD4YNXFCD+mQWaCmxl5sMRITI7GTDQ3c04Y6YxSSxkvOSyrQNxZoSDkYaY
n83UtJMlwE7HAl0YGhLwEQwq1uX+gzR8GFNX9WK2olaJgTF/9WPPOMTYhjysabc1lWG1y91C
dfQKWtVYdg+zGNsjJK+zQt8lHS4WE71jQSoM5lwpb3kbZVW84tVLs7TlNM+EOLUj5VkWc1IR
OtzNLqtQfNl7eViAONES7PX68e/771/fLx5ent9Pn7+/fH+7eLL2aPevx3uQ4P7n+P+Rmw1j
rP0prJPGrc3CoZR4X22pdLOlZPSphD4bNgN7KssqSn+ByTto+y+OsBjOAegg4s8lbQirnmVn
OAbXpaDgKFYEzXIT24WK7NvGZ6/yAgBGDrpPrrP12lgWMkpd8N67oeJdnK34L2V3TmP+Fj4u
dvLtnx9/qiuPZIXx7vOMao+SPOJeq9xqBFHCWODHOiAFwUhfGDSkrKi9885Hh3QVP1YYUbtd
7/dBSbaNFt3g450kzNYBXcVoGqtopoLnOksr18sDopJp+WPpIHSxN9Dix3gsoMsf9NGtgTC0
Y6xk6IFUnyo4OtGqZz+Uj40ENB79GMvUeMHhlhTQ8eTHZCJg2DnGix9TCS9omdAvD0j4FUP4
8tMtdhiYjGvEAZBxYjruXeNaeB3vyq10TIBMZjLcerE03A3CnNqOl7BHsDmCttH0GWO2+uht
6KHVjDY11JxzzpQDy4qpNqhZ47qCGjvkcZCsb9sltDPQbVUGBv32enp+//fFPXzx8en49tl9
tWvOvtc193rYgOhLgilPG9dIcbaJ8TFjZ/h5Ochxs0PHtrO+j6wCxcmh4zCG/c33A3TZQib9
XeolkeN3hMHCphjOhSt8j1GHRQFcdAUx3PAvnK1XWRnSnhlstc7e4/T1+Pv76alRKbwZ1geL
v7ptvC7g08YL9Z/L8dWEjo0c5CQMfEfdJeHjGat6prLYNsQnh+hvFQYmXS6bvcL6W0fPpolX
+fy5IKOYgmCcgDuZh312tt6lfuNmHBbeekpt08xcufVg4to65ZmRCemSRnH9A9bBStiKMb3m
5lcb1nSDMWs5PbQDPzj+9f3zZ7RMj57f3l+/Px2f32koIg+1puVdWRDtDQHbgdho5/+E5U7j
KmFiUqWHS0MDzx0GrieqNDcyQYs0DmnEHUNHRftjw5DgtefACwyW04Az0l6fdL0JSH+6v+pt
lma7xmKf+8w25KaWvvQRZ4jCTrrHjNtC9gqH0MyUb7blD/vxejwafWBs16yQwepMZyH1Orxb
ZR4Nw4wo/FlF6Q7dfFZeiaZF28jv3073W8Wq9JroDijZsalmaGQ99EmKFXRRUAreARQn3wCp
3EbrSoJBtK8/hUUm8V0Ka4W/5c9Bmnys6hk9sa+ZT/e2XJmsF7QmNbY91w5G020b46mfv780
I/kMsO9d5bxA78ztvta8S+kyIzsXbiRwcA1THkvC5oFUIRQLQnsR57xeMBnDmY1p+I3aP4vK
jEcS6POsme7S4kUWeJUn9CD9ec3w3B5kKop0WtVKuAc3v8Vu14DOhbHN1vrBH4IVaZ7T1+zM
z2kmntRgztzPBacV/s7sbEN06/nWDXHFuURPdqtBGe9WLSt9ZI6wsLMzK2QzKEHi4m+4fg1H
CdSIq837rsWof+ElOE1DPw0Quwdaa2dAdTwYb6Eufc8Z9/Y9265kPtNLEHiDhoTuFUQ8JjEi
91CLTcUXlZbiIsZOnkvUHalYKWC+Wcfexhkt2ldlwaKi2nnOcjEAQ1NhnBX+2LQBrRcYjHRb
FFnhBEFvZrUVUfDYLQeK3Uo9thsIAt7dV6yekmErGZr9xFJde0FLxdmER4o069fmIOB6YlEy
mWEnPLByKGKDpWc7jLISugltrJnBdFbBMOag5g7CXpwasr3hpLuKswGI8b6NjEjX6JGA6SJ7
+fb220X88vDv79+sBLm9f/5MzzzQcD6KJBnTyDG48a0y5kSjBthVvdiAUhIqAMMKBhBz4pGt


From xen-devel-bounces@lists.xenproject.org Tue Feb 16 20:34:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 20:34:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86045.161123 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC73W-0001sE-NW; Tue, 16 Feb 2021 20:34:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86045.161123; Tue, 16 Feb 2021 20:34:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC73W-0001s7-K5; Tue, 16 Feb 2021 20:34:46 +0000
Received: by outflank-mailman (input) for mailman id 86045;
 Tue, 16 Feb 2021 20:34:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QcaE=HS=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lC73W-0001s1-2o
 for xen-devel@lists.xenproject.org; Tue, 16 Feb 2021 20:34:46 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 81bf7c56-8467-40c8-a8ed-72d5ad0433c9;
 Tue, 16 Feb 2021 20:34:45 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 9DD3464EBD;
 Tue, 16 Feb 2021 20:34:44 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 81bf7c56-8467-40c8-a8ed-72d5ad0433c9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1613507684;
	bh=nmjR6xz7JkrX1qcxgA69YTQHoKWl2Gx1LS3y140JtE8=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=CYsQiWe6uj8Q0mXeezJZYj3fEbl4/UsslbpKDx0L79GjjPUllmkKdPTrRwWX7HwN+
	 WoGRXaDG+OItKvPdXhUFijWvtaA7y7DH4LchV4HKka2od4UVQf0ZasqPHLgsk8k2G5
	 9cyEMricyk4dm6sXzJx5mr6szBdD/8czvbKcq4EN5wD2VwbLuPKpy/xD0ULPs3PDtd
	 sr2Ur4tZM775nFniPWDgLY77Pzp0uay13zxPrbDYUrcmqJnUMd6+/kTl6mp4xVhvTa
	 EwHbCjvPGSlE2qjCB6OanT6Zc9JbQn9F6eY1RqFQs+F0sJdvJwqXhQY/lPfsSBLpUk
	 EEMbd53nxtx3g==
Date: Tue, 16 Feb 2021 12:34:43 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Roman Shaposhnik <roman@zededa.com>
cc: Xen-devel <xen-devel@lists.xenproject.org>, jbeulich@suse.com, 
    andrew.cooper3@citrix.com, roger.pau@citrix.com, wl@xen.org, 
    george.dunlap@citrix.com, sstabellini@kernel.org
Subject: Re: Linux DomU freezes and dies under heavy memory shuffling
In-Reply-To: <CAMmSBy-_UOK6DTrwGNOw8Y59Muv8H8wxmsc4-BXcv3N_u5USBA@mail.gmail.com>
Message-ID: <alpine.DEB.2.21.2102161232310.3234@sstabellini-ThinkPad-T480s>
References: <CAMmSBy-_UOK6DTrwGNOw8Y59Muv8H8wxmsc4-BXcv3N_u5USBA@mail.gmail.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

+ x86 maintainers

It looks like the tlbflush is getting stuck?


On Sat, 6 Feb 2021, Roman Shaposhnik wrote:
> Hi!
> 
> All of a sudden (but only after a few days of running normally), on a stock
> Ubuntu 18.04 (Bionic with 4.15.0 kernel) DomU I'm seeing Microsoft's .NET
> runtime go into a heavy GC cycle and then freeze and die like what is
> shown below. This is under stock Xen 4.14.0 on a pretty unremarkable
> x86_64 box made by Supermicro.
> 
> I would really appreciate any thoughts on the subject, or at least directions
> in which I should go to investigate this. At this point this part of Xen is
> a bit of a mystery to me, but I'm very much willing to learn ;-)
> 
> From my completely uneducated guess, it feels like some kind of an issue
> between the DomU shuffling memory much more than normal and Xen somehow
> getting unhappy about that:
> 
> [376900.874560] watchdog: BUG: soft lockup - CPU#0 stuck for 23s! [dotnet:3518]
> [376900.874764] Kernel panic - not syncing: softlockup: hung tasks
> [376900.874793] CPU: 0 PID: 3518 Comm: dotnet Tainted: G L 4.15.0-112-generic #113-Ubuntu
> [376900.874824] Hardware name: Xen HVM domU, BIOS 4.14.0 12/15/2020
> [376900.874847] Call Trace:
> [376900.874860] <IRQ>
> [376900.874874] dump_stack+0x6d/0x8e
> [376900.874892] panic+0xe4/0x254
> [376900.874911] watchdog_timer_fn+0x21e/0x230
> [376900.874928] ? watchdog+0x30/0x30
> [376900.874947] __hrtimer_run_queues+0xdf/0x230
> [376900.874970] hrtimer_interrupt+0xa0/0x1d0
> [376900.874989] xen_timer_interrupt+0x20/0x30
> [376900.875008] __handle_irq_event_percpu+0x44/0x1a0
> [376900.875031] handle_irq_event_percpu+0x32/0x80
> [376900.875053] handle_percpu_irq+0x3d/0x60
> [376900.875071] generic_handle_irq+0x28/0x40
> [376900.875090] __evtchn_fifo_handle_events+0x172/0x190
> [376900.875112] evtchn_fifo_handle_events+0x10/0x20
> [376900.875133] __xen_evtchn_do_upcall+0x49/0x80
> [376900.875156] xen_evtchn_do_upcall+0x2b/0x50
> [376900.875177] xen_hvm_callback_vector+0x90/0xa0
> [376900.875197] </IRQ>
> [376900.875211] RIP: 0010:smp_call_function_single+0xdc/0x100
> [376900.875230] RSP: 0018:ffffaaa3c1807c20 EFLAGS: 00000202 ORIG_RAX: ffffffffffffff0c
> [376900.875261] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
> [376900.875288] RDX: 0000000000000001 RSI: 0000000000000003 RDI: 0000000000000003
> [376900.875314] RBP: ffffaaa3c1807c70 R08: fffffffffffffffc R09: 0000000000000002
> [376900.875341] R10: 0000000000000040 R11: 0000000000000000 R12: ffff8e0ab2c1de70
> [376900.875368] R13: 0000000000000000 R14: ffffffff95a7ecd0 R15: ffffaaa3c1807d08
> [376900.875396] ? flush_tlb_func_common.constprop.10+0x230/0x230
> [376900.875424] ? flush_tlb_func_common.constprop.10+0x230/0x230
> [376900.875449] ? unmap_page_range+0xbbc/0xd00
> [376900.875470] smp_call_function_many+0x1cc/0x250
> [376900.875491] ? smp_call_function_many+0x1cc/0x250
> [376900.875513] native_flush_tlb_others+0x3c/0xf0
> [376900.875534] flush_tlb_mm_range+0xae/0x110
> [376900.875552] tlb_flush_mmu_tlbonly+0x5f/0xc0
> [376900.875574] arch_tlb_finish_mmu+0x3f/0x80
> [376900.875592] tlb_finish_mmu+0x23/0x30
> [376900.875610] unmap_region+0xf7/0x130
> [376900.875629] do_munmap+0x276/0x450
> [376900.875647] vm_munmap+0x69/0xb0
> [376900.875664] SyS_munmap+0x22/0x30
> [376900.875682] do_syscall_64+0x73/0x130
> [376900.875701] entry_SYSCALL_64_after_hwframe+0x41/0xa6
> [376900.875721] RIP: 0033:0x7f05ad52dd59
> [376900.875737] RSP: 002b:00007f05a8037150 EFLAGS: 00000246 ORIG_RAX: 000000000000000b
> [376900.875765] RAX: ffffffffffffffda RBX: 000056517e2a08c0 RCX: 00007f05ad52dd59
> [376900.875791] RDX: 0000000000000000 RSI: 0000000000006a00 RDI: 00007f05aad8f000
> [376900.875818] RBP: 0000000000006a00 R08: 0000000000020b18 R09: 0000000000000000
> [376900.875844] R10: 0000000000020ad0 R11: 0000000000000246 R12: 0000000000000001
> [376900.875870] R13: 0000000000000000 R14: 000056517eb02300 R15: 00007f05aad8f000
> 
> Thanks,
> Roman.
> 


From xen-devel-bounces@lists.xenproject.org Tue Feb 16 22:08:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 Feb 2021 22:08:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86060.161146 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC8WH-0001XL-Um; Tue, 16 Feb 2021 22:08:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86060.161146; Tue, 16 Feb 2021 22:08:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lC8WH-0001XE-Rh; Tue, 16 Feb 2021 22:08:33 +0000
Received: by outflank-mailman (input) for mailman id 86060;
 Tue, 16 Feb 2021 22:08:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lC8WG-0001X6-9T; Tue, 16 Feb 2021 22:08:32 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lC8WF-0001oT-UB; Tue, 16 Feb 2021 22:08:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lC8WF-0005nI-Kf; Tue, 16 Feb 2021 22:08:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lC8WF-00066V-K8; Tue, 16 Feb 2021 22:08:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=4JDxop6YYfsQSC3+/2wU0KLMYdxX8YR2q5TWNXqvGEM=; b=Tkvyug1o3fQHQ9tnR3nncSgu9D
	BgTXuTgpXkJb0Q8RUKGzgcBmKYxIBdfmR8XCr1R2UKf8juDUeEdisnCe6z0bgHt0pkpffJw8f+lf/
	sBsJaq66ZzLgxJhkf3a7jySJaiHVfXwSqI42orehxObpDDNkJjJSg3MpPM0brlsKA+7g=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159399-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 159399: regressions - FAIL
X-Osstest-Failures:
    linux-5.4:test-arm64-arm64-xl-credit1:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-credit2:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-seattle:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-xsm:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-thunderx:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-arndale:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-libvirt:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-cubietruck:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=5b9a4104c902d7dec14c9e3c5652a638194487c6
X-Osstest-Versions-That:
    linux=a829146c3fdcf6d0b76d9c54556a223820f1f73b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 16 Feb 2021 22:08:31 +0000

flight 159399 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159399/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-credit1  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-credit2  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl          14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-seattle  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-xsm      14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-thunderx 14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-credit1  14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-arndale  14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-multivcpu 14 guest-start             fail REGR. vs. 158387
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-cubietruck 14 guest-start            fail REGR. vs. 158387
 test-armhf-armhf-xl          14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-credit2  14 guest-start              fail REGR. vs. 158387

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-examine      4 memdisk-try-append         fail pass in 159359

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     14 guest-start              fail REGR. vs. 158387

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 158387
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 158387
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 158387
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 158387
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 158387
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 158387
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 158387
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 158387
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 158387
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 158387
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                5b9a4104c902d7dec14c9e3c5652a638194487c6
baseline version:
 linux                a829146c3fdcf6d0b76d9c54556a223820f1f73b

Last test of basis   158387  2021-01-12 19:40:06 Z   35 days
Failing since        158473  2021-01-17 13:42:20 Z   30 days   42 attempts
Testing same since   159339  2021-02-14 04:42:28 Z    2 days    4 attempts

------------------------------------------------------------
467 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 14254 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 01:13:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 01:13:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86079.161187 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCBPH-00042W-Ok; Wed, 17 Feb 2021 01:13:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86079.161187; Wed, 17 Feb 2021 01:13:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCBPH-00042P-Lo; Wed, 17 Feb 2021 01:13:31 +0000
Received: by outflank-mailman (input) for mailman id 86079;
 Wed, 17 Feb 2021 01:13:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCBPG-00042H-SI; Wed, 17 Feb 2021 01:13:30 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCBPG-0006st-Li; Wed, 17 Feb 2021 01:13:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCBPG-0007CT-BR; Wed, 17 Feb 2021 01:13:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lCBPG-0007WJ-Av; Wed, 17 Feb 2021 01:13:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Ei55XRTfnnZErS7uG5k/6dUN1NkNiLLgSedYz9vswHo=; b=AQllw7RXaQTS8RZtQ67sPMY4cT
	3L6XiGzHYAWhldnyH5em2yMdOHOrbAE22ozg112BalgMbTw59ybQX9cPOQeCEUFC+ZK/jDiPkurld
	JeBJVwC3XFbt8ApcjdhdmvIpM+BTnLmlVBRqztVVcav7idydUUiF/i/d1qYtP1Og0ZzE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159408-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 159408: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-i386-xl-xsm:guest-localmigrate:fail:heisenbug
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=8ba4bca570ace1e60614a0808631a517cf5df67a
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 17 Feb 2021 01:13:30 +0000

flight 159408 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159408/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-xsm       18 guest-localmigrate         fail pass in 159389

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds 18 guest-start/debian.repeat fail in 159389 like 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                8ba4bca570ace1e60614a0808631a517cf5df67a
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  180 days
Failing since        152659  2020-08-21 14:07:39 Z  179 days  350 attempts
Testing same since   159389  2021-02-15 21:37:04 Z    1 days    2 attempts

------------------------------------------------------------
411 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 112131 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 01:23:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 01:23:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86085.161203 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCBZG-00052B-Tt; Wed, 17 Feb 2021 01:23:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86085.161203; Wed, 17 Feb 2021 01:23:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCBZG-000524-Qe; Wed, 17 Feb 2021 01:23:50 +0000
Received: by outflank-mailman (input) for mailman id 86085;
 Wed, 17 Feb 2021 01:23:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qrWw=HT=ozlabs.org=dgibson@srs-us1.protection.inumbo.net>)
 id 1lCBZE-00051z-R2
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 01:23:49 +0000
Received: from ozlabs.org (unknown [203.11.71.1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d1540d77-9901-44cf-a17e-1308f2308352;
 Wed, 17 Feb 2021 01:23:46 +0000 (UTC)
Received: by ozlabs.org (Postfix, from userid 1007)
 id 4DgKqL3R2Nz9sBy; Wed, 17 Feb 2021 12:23:42 +1100 (AEDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d1540d77-9901-44cf-a17e-1308f2308352
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
	d=gibson.dropbear.id.au; s=201602; t=1613525022;
	bh=i5csGyCZLyAc9i3KKGsKbSkGDjy55sFRwjE1x4+/oxY=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=Ff/K0Qzt0Q0AP2MItzycNnHmFdzI6mf9iC+lPlww1oKVeZmInviIjsR7ruUoYr7V9
	 ymzTMv9Yst0C5F2rAOX9Zn3/plSkuqmotvRNBmq1cTFoofq3So1dLvfeGPXrDZbUFn
	 J7yiV4TKbyfDnYdtPavRMb1n0MclRaE4K6TOf7zY=
Date: Wed, 17 Feb 2021 11:44:41 +1100
From: David Gibson <david@gibson.dropbear.id.au>
To: Alex Bennée <alex.bennee@linaro.org>
Cc: qemu-devel@nongnu.org, julien@xen.org, stefano.stabellini@linaro.org,
	stefano.stabellini@xilinx.com, andre.przywara@arm.com,
	stratos-dev@op-lists.linaro.org, xen-devel@lists.xenproject.org,
	Alistair Francis <alistair.francis@wdc.com>
Subject: Re: [PATCH  v2 3/7] device_tree: add qemu_fdt_setprop_string_array
 helper
Message-ID: <YCxm+e2UzO9cTnl8@yekko.fritz.box>
References: <20210211171945.18313-1-alex.bennee@linaro.org>
 <20210211171945.18313-4-alex.bennee@linaro.org>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="TO/drffHYp5rg2K/"
Content-Disposition: inline
In-Reply-To: <20210211171945.18313-4-alex.bennee@linaro.org>


--TO/drffHYp5rg2K/
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Thu, Feb 11, 2021 at 05:19:41PM +0000, Alex Bennée wrote:
> A string array in device tree is simply a series of \0 terminated
> strings next to each other. As libfdt doesn't support that directly
> we need to build it ourselves.

Hm, that might not make a bad extension to libfdt...

> 
> Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
> Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
> Message-Id: <20201105175153.30489-4-alex.bennee@linaro.org>
> 
> ---
> v2
>   - checkpatch long line fix
> ---
>  include/sysemu/device_tree.h | 17 +++++++++++++++++
>  softmmu/device_tree.c        | 26 ++++++++++++++++++++++++++
>  2 files changed, 43 insertions(+)
> 
> diff --git a/include/sysemu/device_tree.h b/include/sysemu/device_tree.h
> index 982c89345f..8a2fe55622 100644
> --- a/include/sysemu/device_tree.h
> +++ b/include/sysemu/device_tree.h
> @@ -70,6 +70,23 @@ int qemu_fdt_setprop_u64(void *fdt, const char *node_path,
>                           const char *property, uint64_t val);
>  int qemu_fdt_setprop_string(void *fdt, const char *node_path,
>                              const char *property, const char *string);
> +
> +/**
> + * qemu_fdt_setprop_string_array: set a string array property
> + *
> + * @fdt: pointer to the dt blob
> + * @node_path: node path
> + * @prop: property name
> + * @array: pointer to an array of string pointers
> + * @len: length of array
> + *
> + * Assigns a string array to a property. This function converts an
> + * array of strings to a single sequential string with \0 separators
> + * before setting the property.
> + */
> +int qemu_fdt_setprop_string_array(void *fdt, const char *node_path,
> +                                  const char *prop, char **array, int len);
> +
>  int qemu_fdt_setprop_phandle(void *fdt, const char *node_path,
>                               const char *property,
>                               const char *target_node_path);
> diff --git a/softmmu/device_tree.c b/softmmu/device_tree.c
> index b9a3ddc518..2691c58cf6 100644
> --- a/softmmu/device_tree.c
> +++ b/softmmu/device_tree.c
> @@ -21,6 +21,7 @@
>  #include "qemu/error-report.h"
>  #include "qemu/option.h"
>  #include "qemu/bswap.h"
> +#include "qemu/cutils.h"
>  #include "sysemu/device_tree.h"
>  #include "sysemu/sysemu.h"
>  #include "hw/loader.h"
> @@ -397,6 +398,31 @@ int qemu_fdt_setprop_string(void *fdt, const char *node_path,
>      return r;
>  }
> 
> +/*
> + * libfdt doesn't allow us to add string arrays directly, but they are
> + * just a series of null-terminated strings with a length. We build
> + * the string up here so we can calculate the final length.
> + */
> +int qemu_fdt_setprop_string_array(void *fdt, const char *node_path,
> +                                  const char *prop, char **array, int len)
> +{
> +    int ret, i, total_len = 0;
> +    char *str, *p;
> +    for (i = 0; i < len; i++) {
> +        total_len += strlen(array[i]) + 1;
> +    }
> +    p = str = g_malloc0(total_len);
> +    for (i = 0; i < len; i++) {
> +        int len = strlen(array[i]) + 1;
> +        pstrcpy(p, len, array[i]);
> +        p += len;
> +    }
> +
> +    ret = qemu_fdt_setprop(fdt, node_path, prop, str, total_len);
> +    g_free(str);
> +    return ret;
> +}
> +
>  const void *qemu_fdt_getprop(void *fdt, const char *node_path,
>                               const char *property, int *lenp, Error **errp)
>  {

--=20
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

--TO/drffHYp5rg2K/
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEdfRlhq5hpmzETofcbDjKyiDZs5IFAmAsZvkACgkQbDjKyiDZ
s5JkeBAAh1nq//jzuoFXH1RTHHchSWcXwSOQwwoKtEgNymV0OYnJvVeI87h02Ryf
Q4AkWX0TRNC7xTP7fk9rJpYlIvDu+3M3fEQKu9Paz9taWc3lat8lHdRN+qoBaDuG
91smdKfWYGI1RHSj0C+pjDPa73He3JaNE4ooyJcpcAJEP38/jdVTBx1g82ZrksJQ
iKBSapz/RBX5e7LSfm5NIB7OMDXWo8vORFopz+pUKIKDKNHTWGsf29zQchuNj7v9
n62ahTYFfN7CvOsEkwccA0sqaHEKSptUFeZ9YApO9zMj/FVlyjw73RCMGFbaWTlm
m5dzj+wS+2n8iriX2pjbDp2bpREI3AJM23D5qArXfeH4ArmXNEc98canOD4/Y5zO
KrnErOnvghtt7CXfjYYyKoWixTCMw0StbSnXsWJBnF0NXhtRr2DXKLbI2mPFRzac
AnNXXAEs8CKtvWt/xPAwCTEMCTw9PkaUk18USAx2edDm+8lOWwzvRgMfDvvMRl7L
ZJxr5MXP6ZWflZCY3tLXd4nD2DT+okaxNiKv8ZuNW0HEo4pYdwni0IVc8m8ZWlIB
kEvPySjaxF5s38NoxB0ryAsuCEMnuJR8KP2cz3bYyVJ/tLE86+gevg1ztqCQzkyz
sw23wgg9Ze5ypekPbLZJ/fpVNr5/bOs3tEAvLoc14qPVaJ4PpaQ=
=OXJn
-----END PGP SIGNATURE-----

--TO/drffHYp5rg2K/--


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 01:56:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 01:56:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86093.161215 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCC4r-0007nG-J4; Wed, 17 Feb 2021 01:56:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86093.161215; Wed, 17 Feb 2021 01:56:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCC4r-0007n9-Et; Wed, 17 Feb 2021 01:56:29 +0000
Received: by outflank-mailman (input) for mailman id 86093;
 Wed, 17 Feb 2021 01:56:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nkg6=HT=runbox.com=m.v.b@srs-us1.protection.inumbo.net>)
 id 1lCC4p-0007n4-Na
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 01:56:28 +0000
Received: from aibo.runbox.com (unknown [91.220.196.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e26e0f9f-3e25-45d5-867e-cf79746ae4c4;
 Wed, 17 Feb 2021 01:56:25 +0000 (UTC)
Received: from [10.9.9.73] (helo=submission02.runbox)
 by mailtransmit03.runbox with esmtp (Exim 4.86_2)
 (envelope-from <m.v.b@runbox.com>)
 id 1lCC4k-000717-43; Wed, 17 Feb 2021 02:56:22 +0100
Received: by submission02.runbox with esmtpsa [Authenticated alias (536975)]
 (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1)
 id 1lCC4Q-0007xM-Ug; Wed, 17 Feb 2021 02:56:03 +0100
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e26e0f9f-3e25-45d5-867e-cf79746ae4c4
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=runbox.com;
	 s=selector2; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:References:Cc:To:From:Subject;
	bh=pyYlGNUAqjEgpRSUkNyi4+J7eM7ekIrYP7jIlz2vcPk=; b=zwnVgG3RmBo4O3l+qQeQ+OmnMZ
	cpsBI+HQMNg83pJwMM4K9wL7blFsi6EiXD3y1r7bYMkqV1qLa9OBKvL2cZga51HkXFVdoE5yo9zCd
	Z9oUxgaCEMOIJzowypdLrFjcTbhUCQwyQM52cWZiE8G/Wk8Ntq9MVdkk6ENEgClXHzGLabHf6kd5h
	RCq3ZAa3oBWBbutOXIjCvL78rWSD7mkd2D+FAzb+n15fYiLEFMfNLYnNnFbTGkmRpbzQ2ReFnZif6
	1DO30YPL7VQVjcSNOdHykHM7Gc3zUT1WZ9fd1O1L/N3CKGpw6ngJ09StzM5ooZYVnGDNLXe0zdNWm
	CI30NJaw==;
Subject: Re: [PATCH 1/1] x86/ept: Fix buggy XSA-321 backport
From: "M. Vefa Bicakci" <m.v.b@runbox.com>
To: Roger Pau Monné <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, marmarek@invisiblethingslab.com,
 jbeulich@suse.com
References: <20210215234619.245422-1-m.v.b@runbox.com>
 <20210215234619.245422-2-m.v.b@runbox.com> <YCuOQ3qpFD6RgIld@Air-de-Roger>
 <5517e20e-c485-7016-da89-81570cc43b3b@runbox.com>
Message-ID: <aad766de-2e2a-2f64-a839-9c7a5a9e4d6f@runbox.com>
Date: Tue, 16 Feb 2021 20:55:59 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <5517e20e-c485-7016-da89-81570cc43b3b@runbox.com>
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Language: en-CA
Content-Transfer-Encoding: 8bit

On 16/02/2021 07.48, M. Vefa Bicakci wrote:
> On 16/02/2021 04.20, Roger Pau Monné wrote:
>> On Mon, Feb 15, 2021 at 06:46:19PM -0500, M. Vefa Bicakci wrote:
> 
[snipped by Vefa]
>> In any case I think this is too much change, so I would go for a
>> smaller fix like my proposal below. Can you please test it?
> 
> Thank you! I will test your patch later today, and I will report
> back by tomorrow.
> 
[snipped by Vefa]
> 
>> Here is my proposed fix, I think we could even do away with the else
>> branch, but if level is != 0 p2m_is_foreign should be false, so we
>> avoid an extra check.
>>
>> Thanks, Roger.
> 
> I will test this. Thanks again! I really appreciate that you have
> taken the time and effort.
> 
> Vefa

Hello Roger,

I have tested your patch, and I am happy to confirm that it too resolves
the issue I have described in my original patch description. Thank you!

When I find some more time, I would like to prepare a GitHub pull request
for Qubes OS 4.0's version of Xen 4.8.5 with your patch so that other users
do not encounter the same issue. I would like to properly credit your
contribution. Would you be able to send a patch with a Signed-off-by tag
in its description?

Thanks again,

Vefa

>> ---8<---
>> diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
>> index 036771f43c..086739ffdd 100644
>> --- a/xen/arch/x86/mm/p2m-ept.c
>> +++ b/xen/arch/x86/mm/p2m-ept.c
>> @@ -56,11 +56,8 @@ static int atomic_write_ept_entry(ept_entry_t *entryptr, ept_entry_t new,
>>       if ( level )
>>       {
>>           ASSERT(!is_epte_superpage(&new) || !p2m_is_foreign(new.sa_p2mt));
>> -        write_atomic(&entryptr->epte, new.epte);
>> -        return 0;
>>       }
>> -
>> -    if ( unlikely(p2m_is_foreign(new.sa_p2mt)) )
>> +    else if ( unlikely(p2m_is_foreign(new.sa_p2mt)) )
>>       {
>>           rc = -EINVAL;
>>           if ( !is_epte_present(&new) )
>>


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 02:01:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 02:01:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86096.161227 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCC9G-0000cX-4A; Wed, 17 Feb 2021 02:01:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86096.161227; Wed, 17 Feb 2021 02:01:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCC9G-0000cQ-1H; Wed, 17 Feb 2021 02:01:02 +0000
Received: by outflank-mailman (input) for mailman id 86096;
 Wed, 17 Feb 2021 02:01:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RvmJ=HT=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lCC9E-0000cL-IS
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 02:01:00 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0c52f67a-e934-4578-ad52-b3876bb01e7c;
 Wed, 17 Feb 2021 02:00:59 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id F185461490;
 Wed, 17 Feb 2021 02:00:58 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0c52f67a-e934-4578-ad52-b3876bb01e7c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1613527259;
	bh=lCB1K9uBBMXlnp4KoJgH5VnUKB9q33drNtgI1C16sO8=;
	h=Date:From:To:cc:Subject:From;
	b=pGwA5+jmwgoHhBT1B3Yf0mMT6h5uBcYOyl6LTECyO8U0gTOEuaAB088225J7tzg3I
	 8XBggWjyazDMdlf7F8Dho+9K1phbMg6At2bt0cJx4KW1PMJz7rA43AXfYwGANUztuN
	 O8P36zSMAIv4JOxjooiYcQ4PhP46HH5pFTcxTYOlJTu7twdgePf35vWIB+5yWfGOvS
	 TaTmu9TZy3C0U/wKKiwOOPGGTB1ErSczaY30dP17hubPyomvV8KwpsAlfSSY3DoUwh
	 V6SNI0jwHD2uiYHUaVgpMAX+qRYcVe7cOPNbfoTT0n3JV9szFYE1UJC0g8+F8AqmlA
	 I33z8pJMCpe8g==
Date: Tue, 16 Feb 2021 18:00:57 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: julien@xen.org
cc: sstabellini@kernel.org, xen-devel@lists.xenproject.org, 
    Bertrand.Marquis@arm.com, Volodymyr_Babchuk@epam.com, rahul.singh@arm.com
Subject: [RFC] xen/arm: introduce XENFEAT_ARM_dom0_iommu
Message-ID: <alpine.DEB.2.21.2102161333090.3234@sstabellini-ThinkPad-T480s>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

Hi all,

Today Linux uses the swiotlb-xen driver (drivers/xen/swiotlb-xen.c) to
translate addresses for DMA operations in Dom0. Specifically,
swiotlb-xen is used to translate the address of a foreign page (a page
belonging to a domU) mapped into Dom0 before using it for DMA.

This is important because although Dom0 is 1:1 mapped, DomUs are not.
On systems without an IOMMU, swiotlb-xen enables PV drivers to work as
long as the backends are in Dom0. Thanks to swiotlb-xen, the DMA
operation ends up using the MFN rather than the GFN.


On systems with an IOMMU, this is not necessary: when a foreign page is
mapped in Dom0, it is added to the Dom0 p2m. A new GFN->MFN translation
is established for both MMU and SMMU. Dom0 could safely use the GFN
address (instead of the MFN) for DMA operations and they would work. It
would be more efficient than using swiotlb-xen.

If you recall my presentation from Xen Summit 2020, Xilinx is working on
cache coloring. With cache coloring, no domain is 1:1 mapped, not even
Dom0. In a scenario where Dom0 is not 1:1 mapped, swiotlb-xen does not
work as intended.


The suggested solution for both of these issues is to add a new feature
flag "XENFEAT_ARM_dom0_iommu" that tells Dom0 that it is safe not to use
swiotlb-xen because IOMMU translations are available for Dom0. If
XENFEAT_ARM_dom0_iommu is set, Linux should skip the swiotlb-xen
initialization. I have tested this scheme with and without cache
coloring (hence with and without a 1:1 mapped Dom0) on a ZCU102, and it
works as expected: DMA operations succeed.
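On the guest side, the change could amount to gating the swiotlb-xen setup on the new flag. A minimal sketch follows; xen_feature() is stubbed locally here (in Linux it is the real helper, filled from the XENVER_get_features hypercall at boot), and need_swiotlb_xen() is a hypothetical name for this sketch:

```c
#include <stdbool.h>

#define XENFEAT_ARM_dom0_iommu 16

/* Stub of the feature bitmap a guest reads from the hypervisor at
 * boot via XENVER_get_features; Linux's xen_feature() behaves the
 * same way on the real bitmap. */
static unsigned long xen_features_map;

static bool xen_feature(unsigned int f)
{
    return (xen_features_map >> f) & 1;
}

/* Does this domain still need the swiotlb-xen bounce buffers? */
static bool need_swiotlb_xen(void)
{
    /* If the hypervisor guarantees IOMMU translations for this
     * domain, DMA on GFNs just works and swiotlb-xen can be
     * skipped entirely. */
    return !xen_feature(XENFEAT_ARM_dom0_iommu);
}
```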


What about systems where an IOMMU is present but not all devices are
protected?

There is no way for Xen to know which devices are protected and which
are not: devices that lack the "iommus" property may or may not be DMA
masters.

Perhaps Xen could populate a whitelist of devices protected by the IOMMU
based on the "iommus" property. It would require some added complexity
in Xen, and especially in the swiotlb-xen driver in Linux to make use of
it, which is not ideal. Moreover, this approach would not help with
cache coloring, where Dom0 is not 1:1 mapped, so swiotlb-xen cannot be
used either way.


For these reasons, I would like to propose a single flag,
XENFEAT_ARM_dom0_iommu, which says that the IOMMU can be relied upon for
DMA translations. In situations where a DMA master is not SMMU
protected, XENFEAT_ARM_dom0_iommu should not be set. For example, on a
platform where an IOMMU is present and protects most DMA masters but
leaves out the MMC controller, XENFEAT_ARM_dom0_iommu should not be set
(because PV block is not going to work without swiotlb-xen). This also
means that cache coloring won't be usable on such a system (at least not
with the MMC controller, so the system integrator should take special
care when setting up the system).

It is worth noting that if we wanted to extend the interface to add a
list of protected devices in the future, it would still be possible. It
would be compatible with XENFEAT_ARM_dom0_iommu.


How to set XENFEAT_ARM_dom0_iommu?

We could set XENFEAT_ARM_dom0_iommu automatically when
is_iommu_enabled(d) is true for Dom0. We could also have a
platform-specific override (xen/arch/arm/platforms/) so that a specific
platform can disable XENFEAT_ARM_dom0_iommu. For debugging purposes and
for advanced users, it would also be useful to be able to override it
via a Xen command line parameter.
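Taken together, the precedence described above (an explicit command line override wins, then the platform quirk, then the is_iommu_enabled() default) could reduce to a single predicate. This is only a sketch: advertise_dom0_iommu and the tri-state cmdline_override parameter are hypothetical names, not code from the appended patch.

```c
#include <stdbool.h>

/* Hypothetical decision helper. cmdline_override models an assumed
 * tri-state Xen command line parameter (-1 = unset, 0 = force off,
 * 1 = force on); platform_quirk maps onto a
 * PLATFORM_QUIRK_DOM0_IOMMU-style veto; iommu_enabled maps onto
 * is_iommu_enabled(d) for the hardware domain. */
static bool advertise_dom0_iommu(int cmdline_override,
                                 bool platform_quirk,
                                 bool iommu_enabled)
{
    if (cmdline_override >= 0)   /* explicit user choice wins */
        return cmdline_override == 1;
    if (platform_quirk)          /* platform: IOMMU coverage incomplete */
        return false;
    return iommu_enabled;        /* default: follow is_iommu_enabled() */
}
```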

See appended patch as a reference.


Cheers,

Stefano


diff --git a/xen/common/kernel.c b/xen/common/kernel.c
index 7a345ae45e..4dbef48199 100644
--- a/xen/common/kernel.c
+++ b/xen/common/kernel.c
@@ -16,6 +16,7 @@
 #include <xen/hypfs.h>
 #include <xsm/xsm.h>
 #include <asm/current.h>
+#include <asm/platform.h>
 #include <public/version.h>
 
 #ifndef COMPAT
@@ -549,6 +550,9 @@ DO(xen_version)(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
                 fi.submap |= 1U << XENFEAT_dom0;
 #ifdef CONFIG_ARM
             fi.submap |= (1U << XENFEAT_ARM_SMCCC_supported);
+            if ( !platform_has_quirk(PLATFORM_QUIRK_DOM0_IOMMU) &&
+                 is_hardware_domain(d) && is_iommu_enabled(d) )
+                fi.submap |= (1U << XENFEAT_ARM_dom0_iommu);
 #endif
 #ifdef CONFIG_X86
             if ( is_pv_domain(d) )
diff --git a/xen/include/asm-arm/platform.h b/xen/include/asm-arm/platform.h
index 997eb25216..094a76d677 100644
--- a/xen/include/asm-arm/platform.h
+++ b/xen/include/asm-arm/platform.h
@@ -48,6 +48,11 @@ struct platform_desc {
  * stride.
  */
 #define PLATFORM_QUIRK_GIC_64K_STRIDE (1 << 0)
+/*
+ * Quirk for platforms where the IOMMU is present but doesn't protect
+ * all DMA-capable devices.
+ */
+#define PLATFORM_QUIRK_DOM0_IOMMU (1 << 1)
 
 void platform_init(void);
 int platform_init_time(void);
diff --git a/xen/include/asm-x86/platform.h b/xen/include/asm-x86/platform.h
new file mode 100644
index 0000000000..5427e8b851
--- /dev/null
+++ b/xen/include/asm-x86/platform.h
@@ -0,0 +1,13 @@
+#ifndef __ASM_X86_PLATFORM_H
+#define __ASM_X86_PLATFORM_H
+
+#endif /* __ASM_X86_PLATFORM_H */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/public/features.h b/xen/include/public/features.h
index 1613b2aab8..adaa2a995d 100644
--- a/xen/include/public/features.h
+++ b/xen/include/public/features.h
@@ -114,6 +114,11 @@
  */
 #define XENFEAT_linux_rsdp_unrestricted   15
 
+/*
+ * arm: dom0 is started with IOMMU protection.
+ */
+#define XENFEAT_ARM_dom0_iommu            16
+
 #define XENFEAT_NR_SUBMAPS 1
 
 #endif /* __XEN_PUBLIC_FEATURES_H__ */


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 05:12:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 05:12:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86106.161257 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCF8F-000113-Vy; Wed, 17 Feb 2021 05:12:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86106.161257; Wed, 17 Feb 2021 05:12:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCF8F-00010w-S7; Wed, 17 Feb 2021 05:12:11 +0000
Received: by outflank-mailman (input) for mailman id 86106;
 Wed, 17 Feb 2021 05:12:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SSRj=HT=invisiblethingslab.com=marmarek@srs-us1.protection.inumbo.net>)
 id 1lCF8E-00010r-8F
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 05:12:10 +0000
Received: from out1-smtp.messagingengine.com (unknown [66.111.4.25])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0ba0c179-b4d4-451b-bb23-fd706effe475;
 Wed, 17 Feb 2021 05:12:08 +0000 (UTC)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailout.nyi.internal (Postfix) with ESMTP id 20A985C00DE
 for <xen-devel@lists.xenproject.org>; Wed, 17 Feb 2021 00:12:08 -0500 (EST)
Received: from mailfrontend2 ([10.202.2.163])
 by compute3.internal (MEProxy); Wed, 17 Feb 2021 00:12:08 -0500
Received: from mail-itl (ip5b434f04.dynamic.kabel-deutschland.de [91.67.79.4])
 by mail.messagingengine.com (Postfix) with ESMTPA id A01161080057
 for <xen-devel@lists.xenproject.org>; Wed, 17 Feb 2021 00:12:07 -0500 (EST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0ba0c179-b4d4-451b-bb23-fd706effe475
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=content-type:date:from:message-id
	:mime-version:subject:to:x-me-proxy:x-me-proxy:x-me-sender
	:x-me-sender:x-sasl-enc; s=fm2; bh=hqxfMtVLYQTpWK3DxeliVp4K2OtYZ
	NHD5kIqDxzCy+c=; b=Z3zWm3gMsUE94b2X4X4jaf754x+6lwrkThw5Tj6Y33lmv
	gViGmsLHfseJMS33iDps1hG0j10Uu5y4HFjmY62vFfF4pPD1kKwppBJ9SFnPFpbA
	qIff6At2uTClTarOlyLBRUXT61ypXoZQYSo3YgVe7PssyDbXZTQYq1Vucj+chXEx
	uZPd2+riZf6RzuVTyalRVOuxDNTGFXqE24CWknZIaAn4Myj8f6Qk1dpPpM18RyLP
	+NCv2y877gUPVy6NRAr0eBzkPoLD1umuoU6Vx0Jmv2BwSzWa+olFkb7imUC748uu
	RNwgT8gIUNQ3Ay9DhQ9ho9fwOH+sTKfGAQkUJdbXA==
X-ME-Sender: <xms:p6UsYPAdQ-loRQ3RSETy8bsijQLwoBa9Y9b2rwzUJz8K4jt3JK-IdQ>
    <xme:p6UsYFh7r58ORawhUeuM7J_T9b3qdkNjNM7SOztuUQdmkvcKiB89KkQ0Cx9i68LP2
    Q6-D9HovgEM2Q>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgeduledrjedugdejkecutefuodetggdotefrodftvf
    curfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfghnecu
    uegrihhlohhuthemuceftddtnecunecujfgurhepfffhvffukfggtggusehgtderredttd
    ejnecuhfhrohhmpeforghrvghkucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcu
    oehmrghrmhgrrhgvkhesihhnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenuc
    ggtffrrghtthgvrhhnpedtudfgteduveduieevvefgteeujeelgffggffhhffhhedtffef
    fefgudeugeefhfenucfkphepledurdeijedrjeelrdegnecuvehluhhsthgvrhfuihiivg
    eptdenucfrrghrrghmpehmrghilhhfrhhomhepmhgrrhhmrghrvghksehinhhvihhsihgs
    lhgvthhhihhnghhslhgrsgdrtghomh
X-ME-Proxy: <xmx:p6UsYKnsCKPOcmqZ6nzdv1HPcMBT6gr_HtwSInovcZwwew4VUrVMug>
    <xmx:p6UsYBwGHG5B-1boEHoTW8-WgJN_tmSOr2Wir4Opd5Mw-hiApNeBqA>
    <xmx:p6UsYESGfzX493s0St6CJztqRVkLtp9FaTVsPMhWFDS6qQ31KwdYEw>
    <xmx:qKUsYKB2HdIOfspB7ZWLJi1OzTVG5TDEM7OPahErgl37JBrKZ94jog>
Date: Wed, 17 Feb 2021 06:12:04 +0100
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel <xen-devel@lists.xenproject.org>
Subject: Linux PV/PVH domU crash on (guest) resume from suspend
Message-ID: <YCylpKU8F+Hsg8YL@mail-itl>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="L2+TYTG2qmBDfrpw"
Content-Disposition: inline


--L2+TYTG2qmBDfrpw
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Wed, 17 Feb 2021 06:12:04 +0100
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel <xen-devel@lists.xenproject.org>
Subject: Linux PV/PVH domU crash on (guest) resume from suspend

Hi,

I'm observing Linux PV/PVH guest crash when I resume it from sleep. I do
this with:

    virsh -c xen dompmsuspend <vmname> mem
    virsh -c xen dompmwakeup <vmname>

But it's possible to trigger it with plain xl too:

    xl save -c <vmname> <some-file>

The same on HVM works fine.

This is on Xen 4.14.1, and with guest kernel 5.4.90, the same happens
with 5.4.98. Dom0 kernel is the same, but I'm not sure if that's
relevant here. I can reliably reproduce it.

The crash message:

[  219.844995] Freezing user space processes ... (elapsed 0.011 seconds) done.
[  219.856564] OOM killer disabled.
[  219.856566] Freezing remaining freezable tasks ... (elapsed 0.001 seconds) done.
[  277.562118] register_vcpu_info failed: cpu=0 err=-22
[  219.858384] xen:grant_table: Grant tables using version 1 layout
[  219.858442] ------------[ cut here ]------------
[  219.858446] kernel BUG at drivers/xen/events/events_fifo.c:369!
[  219.858503] invalid opcode: 0000 [#1] SMP NOPTI
[  219.858511] CPU: 0 PID: 11 Comm: migration/0 Not tainted 5.4.90-1.qubes.x86_64 #1
[  219.858527] RIP: e030:evtchn_fifo_resume+0x58/0x90
[  219.858532] Code: eb 48 8b 04 dd 80 29 3e 82 4e 8b 04 20 4d 85 c0 74 d5 48 0f a3 1d b8 40 20 01 73 10 4c 89 c6 89 ef e8 5c fb ff ff 85 c0 79 bd <0f> 0b 31 f6 4c 89 c7 e8 7c 8a c8 ff 48 8b 04 dd 80 29 3e 82 4a c7
[  219.858538] RSP: e02b:ffffc9000025be10 EFLAGS: 00010082
[  219.858542] RAX: ffffffffffffffea RBX: 0000000000000000 RCX: 0000000000000000
[  219.858545] RDX: ffff888018400000 RSI: ffffc9000025bde0 RDI: 000000000000000b
[  219.858548] RBP: 0000000000000000 R08: ffff888018143000 R09: 00000000000001e0
[  219.858552] R10: ffff88800e50f440 R11: ffffc9000025bcbd R12: 0000000000026ea0
[  219.858555] R13: 0000000000000000 R14: ffffc9000029bdf8 R15: 0000000000000003
[  219.858567] FS:  0000000000000000(0000) GS:ffff888018400000(0000) knlGS:0000000000000000
[  219.858571] CS:  10000e030 DS: 0000 ES: 0000 CR0: 0000000080050033
[  219.858574] CR2: 0000581c2753e030 CR3: 000000000260a000 CR4: 0000000000000660
[  219.858578] Call Trace:
[  219.858615]  xen_irq_resume+0x1b/0xe0
[  219.858620]  xen_suspend+0x13e/0x190
[  219.858626]  multi_cpu_stop+0x6c/0x100
[  219.858630]  ? stop_machine_yield+0x10/0x10
[  219.858633]  cpu_stopper_thread+0xb0/0x110
[  219.858638]  smpboot_thread_fn+0xc5/0x160
[  219.858641]  ? smpboot_register_percpu_thread+0xf0/0xf0
[  219.858645]  kthread+0x115/0x140
[  219.858648]  ? __kthread_bind_mask+0x60/0x60
[  219.858653]  ret_from_fork+0x22/0x40
[  219.858657] Modules linked in: nf_conntrack_netlink nft_reject_ipv4 nft_reject xt_nat nf_tables_set nft_ct nf_tables nfnetlink e1000e rfkill xt_REDIRECT ip6table_filter ip6table_mangle ip6table_raw ip6_tables edac_mce_amd pcspkr ipt_REJECT nf_reject_ipv4 xen_pcifront xt_state xt_conntrack iptable_filter iptable_mangle iptable_raw xt_MASQUERADE iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 libcrc32c xen_scsiback target_core_mod xen_netback xen_privcmd xen_gntdev xen_gntalloc xen_blkback xen_evtchn drm fuse ip_tables overlay xen_blkfront
[  219.858754] ---[ end trace 54d868ea756768db ]---
[  219.858758] RIP: e030:evtchn_fifo_resume+0x58/0x90
[  219.858762] Code: eb 48 8b 04 dd 80 29 3e 82 4e 8b 04 20 4d 85 c0 74 d5 48 0f a3 1d b8 40 20 01 73 10 4c 89 c6 89 ef e8 5c fb ff ff 85 c0 79 bd <0f> 0b 31 f6 4c 89 c7 e8 7c 8a c8 ff 48 8b 04 dd 80 29 3e 82 4a c7
[  219.858768] RSP: e02b:ffffc9000025be10 EFLAGS: 00010082
[  219.858770] RAX: ffffffffffffffea RBX: 0000000000000000 RCX: 0000000000000000
[  219.858774] RDX: ffff888018400000 RSI: ffffc9000025bde0 RDI: 000000000000000b
[  219.858777] RBP: 0000000000000000 R08: ffff888018143000 R09: 00000000000001e0
[  219.858780] R10: ffff88800e50f440 R11: ffffc9000025bcbd R12: 0000000000026ea0
[  219.858783] R13: 0000000000000000 R14: ffffc9000029bdf8 R15: 0000000000000003
[  219.858790] FS:  0000000000000000(0000) GS:ffff888018400000(0000) knlGS:0000000000000000
[  219.858793] CS:  10000e030 DS: 0000 ES: 0000 CR0: 0000000080050033
[  219.858796] CR2: 0000581c2753e030 CR3: 000000000260a000 CR4: 0000000000000660
[  219.858801] Kernel panic - not syncing: Fatal exception
[  219.858819] Kernel Offset: disabled

Note the timestamp beside "register_vcpu_info failed" - it is in the
future. I think the offset depends on the uptime; with a shorter uptime
I get a much smaller difference (like 49 vs 51).

Any ideas?

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

--L2+TYTG2qmBDfrpw
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmAspaQACgkQ24/THMrX
1yxwjwgAj8AL+J6HxH9lwMsGR19Ghz8uJu6zC93lTr6xH/AiSqwoiEsVwRdcvxs6
YiEDGyJjfnu5ea+GhTooWplyTcIMI01IdLbogtncnzlH4YrKJn/Oo/31Of3toL0n
ubyDfPHHnbBOqDiD+0XL009xWbSpdNxhiuQbA+3TGjn/y7k99boPjqZuuoWe+NwE
AK6ebMMw6czL9+HJnSiOuxODy69e02rN5FPds9ZMz+uJFExhq3A3on5BTAN05zcB
bltwV/fvMhy0UEybouzYwSuJYkuZucWPHrz3mOfte20oMkExn642H4OHFCdiM5+W
84BmZ2lNewz0/v0h5BJaRNJNlUMAyw==
=MOTm
-----END PGP SIGNATURE-----

--L2+TYTG2qmBDfrpw--


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 05:24:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 05:24:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86108.161268 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCFKU-00020d-2K; Wed, 17 Feb 2021 05:24:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86108.161268; Wed, 17 Feb 2021 05:24:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCFKT-00020W-Vd; Wed, 17 Feb 2021 05:24:49 +0000
Received: by outflank-mailman (input) for mailman id 86108;
 Wed, 17 Feb 2021 05:24:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCFKT-00020O-4G; Wed, 17 Feb 2021 05:24:49 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCFKS-0003Sw-Sa; Wed, 17 Feb 2021 05:24:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCFKS-0004LY-H0; Wed, 17 Feb 2021 05:24:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lCFKS-0001nl-GU; Wed, 17 Feb 2021 05:24:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=9/likg3V9T223CBhVpWnYFwz8/74bJ+3e1ufka676b0=; b=yXnZZifwthGlh+kg3vNpMPYDoR
	TJ6JU29bDUPlJNa9ePwDZmXvUEjhvEEopLKC0EipXHZHNz448zzywgM39HKBFreYEAEnBKmkXPuw+
	+OyOwyjumdOxgXLip4DTXJJ/yZhx8uMJcRD1x1krR9H0oTymIdKjWvcMMZaAFXqFspsg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [linux-5.4 bisection] complete test-armhf-armhf-xl-multivcpu
Message-Id: <E1lCFKS-0001nl-GU@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 17 Feb 2021 05:24:48 +0000

branch xen-unstable
xenbranch xen-unstable
job test-armhf-armhf-xl-multivcpu
testid guest-start

Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
  Bug introduced:  a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Bug not present: acc402fa5bf502d471d50e3d495379f093a7f9e4
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/159438/


  commit a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Author: David Woodhouse <dwmw@amazon.co.uk>
  Date:   Wed Jan 13 13:26:02 2021 +0000
  
      xen: Fix event channel callback via INTX/GSI
      
      [ Upstream commit 3499ba8198cad47b731792e5e56b9ec2a78a83a2 ]
      
      For a while, event channel notification via the PCI platform device
      has been broken, because we attempt to communicate with xenstore before
      we even have notifications working, with the xs_reset_watches() call
      in xs_init().
      
      We tend to get away with this on Xen versions below 4.0 because we avoid
      calling xs_reset_watches() anyway, because xenstore might not cope with
      reading a non-existent key. And newer Xen *does* have the vector
      callback support, so we rarely fall back to INTX/GSI delivery.
      
      To fix it, clean up a bit of the mess of xs_init() and xenbus_probe()
      startup. Call xs_init() directly from xenbus_init() only in the !XS_HVM
      case, deferring it to be called from xenbus_probe() in the XS_HVM case
      instead.
      
      Then fix up the invocation of xenbus_probe() to happen either from its
      device_initcall if the callback is available early enough, or when the
      callback is finally set up. This means that the hack of calling
      xenbus_probe() from a workqueue after the first interrupt, or directly
      from the PCI platform device setup, is no longer needed.
      
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Link: https://lore.kernel.org/r/20210113132606.422794-2-dwmw2@infradead.org
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/linux-5.4/test-armhf-armhf-xl-multivcpu.guest-start.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/linux-5.4/test-armhf-armhf-xl-multivcpu.guest-start --summary-out=tmp/159438.bisection-summary --basis-template=158387 --blessings=real,real-bisect,real-retry linux-5.4 test-armhf-armhf-xl-multivcpu guest-start
Searching for failure / basis pass:
 159399 fail [host=arndale-metrocentre] / 158681 [host=arndale-westfield] 158624 [host=arndale-lakeside] 158616 [host=cubietruck-braque] 158609 [host=cubietruck-gleizes] 158603 [host=arndale-bluewater] 158593 [host=cubietruck-picasso] 158583 ok.
Failure / basis pass flights: 159399 / 158583
Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 5b9a4104c902d7dec14c9e3c5652a638194487c6 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2e1e8c35f3178df95d79da81ac6deec242da74c2 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 04085ec1ac05a362812e9b0c6b5a8713d7dc88ad
Basis pass d26b3110041a9fddc6c6e36398f53f7eab8cff82 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5b4a97bbc39ed8e7eb50038b9cffe2e948e49995 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e e8adbf680b56a3f4b9600c7bcc04fec1877a6213
Generating revisions with ./adhoc-revtuple-generator  git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git#d26b3110041a9fddc6c6e36398f53f7eab8cff82-5b9a4104c902d7dec14c9e3c5652a638194487c6 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#5b4a97bbc39ed8e7eb50038b9cffe2e948e49995-2e1e8c35f3178df95d79da81ac6deec242da74c2 git://xenbits.xen.org/qemu-xen.git#7ea428895af2840d85c524f0bd11a38aac308308-7ea428895af2840d85c524f0bd11a38aac308308 git://xenbits.xen.org/osstest/seabios.git#ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e-ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e git://xenbits.xen.org/xen.git#e8adbf680b56a3f4b9600c7bcc04fec1877a6213-04085ec1ac05a362812e9b0c6b5a8713d7dc88ad
Loaded 15001 nodes in revision graph
Searching for test results:
 158563 [host=cubietruck-metzinger]
 158583 pass d26b3110041a9fddc6c6e36398f53f7eab8cff82 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5b4a97bbc39ed8e7eb50038b9cffe2e948e49995 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e e8adbf680b56a3f4b9600c7bcc04fec1877a6213
 158593 [host=cubietruck-picasso]
 158603 [host=arndale-bluewater]
 158609 [host=cubietruck-gleizes]
 158616 [host=cubietruck-braque]
 158624 [host=arndale-lakeside]
 158681 [host=arndale-westfield]
 158707 fail irrelevant
 158716 fail irrelevant
 158748 fail 131f8d8a889a5ca66a835eea82bba043ac91a7cf c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e f8708b0ed6d549d1d29b8b5cc287f1f2b642bc63
 158765 fail 131f8d8a889a5ca66a835eea82bba043ac91a7cf c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 6677b5a3577c16501fbc51a3341446905bd21c38
 158796 fail irrelevant
 158818 fail irrelevant
 158841 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158863 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158881 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158929 fail 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea56ebf67dd55483105aa9f9996a48213e78337e 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158962 fail irrelevant
 159023 fail irrelevant
 159129 fail irrelevant
 159200 fail irrelevant
 159238 fail irrelevant
 159295 fail irrelevant
 159324 fail irrelevant
 159339 fail 5b9a4104c902d7dec14c9e3c5652a638194487c6 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2e1e8c35f3178df95d79da81ac6deec242da74c2 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 04085ec1ac05a362812e9b0c6b5a8713d7dc88ad
 159359 fail 5b9a4104c902d7dec14c9e3c5652a638194487c6 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2e1e8c35f3178df95d79da81ac6deec242da74c2 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 04085ec1ac05a362812e9b0c6b5a8713d7dc88ad
 159372 fail 5b9a4104c902d7dec14c9e3c5652a638194487c6 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2e1e8c35f3178df95d79da81ac6deec242da74c2 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 04085ec1ac05a362812e9b0c6b5a8713d7dc88ad
 159415 pass d26b3110041a9fddc6c6e36398f53f7eab8cff82 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5b4a97bbc39ed8e7eb50038b9cffe2e948e49995 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e e8adbf680b56a3f4b9600c7bcc04fec1877a6213
 159421 fail 5b9a4104c902d7dec14c9e3c5652a638194487c6 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2e1e8c35f3178df95d79da81ac6deec242da74c2 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 04085ec1ac05a362812e9b0c6b5a8713d7dc88ad
 159422 pass 09f983f0c7fc0db79a5f6c883ec3510d424c369c c530a75c1e6a472b0eb9558310b518f0dfcd8860 3b769c5110384fb33bcfeddced80f721ec7838cc 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 452ddbe3592b141b05a7e0676f09c8ae07f98fdd
 159423 fail 38f35023fd301abeb01cfd81e73caa2e4e7ec0b1 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159425 pass d8a487e673abf46c69c901bb25da54e9bc7ba45e c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159426 pass 5a1d7bb7d333849eb7d3ab5ebfbf9805b2cd46c9 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159427 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159428 pass 9cec63a3aacbcaee8d09aecac2ca2f8820efcc70 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159399 fail 5b9a4104c902d7dec14c9e3c5652a638194487c6 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2e1e8c35f3178df95d79da81ac6deec242da74c2 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 04085ec1ac05a362812e9b0c6b5a8713d7dc88ad
 159429 pass 8ab3478335ad8fc08f14ec73251b084fe02b3ebb c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159430 pass acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159432 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159433 pass acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159435 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159436 pass acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159438 fail a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
Searching for interesting versions
 Result found: flight 158583 (pass), for basis pass
 Result found: flight 159339 (fail), for basis failure (at ancestor ~280)
 Repro found: flight 159415 (pass), for basis pass
 Repro found: flight 159421 (fail), for basis failure
 0 revisions at acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
No revisions left to test, checking graph state.
 Result found: flight 159430 (pass), for last pass
 Result found: flight 159432 (fail), for first failure
 Repro found: flight 159433 (pass), for last pass
 Repro found: flight 159435 (fail), for first failure
 Repro found: flight 159436 (pass), for last pass
 Repro found: flight 159438 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
  Bug introduced:  a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Bug not present: acc402fa5bf502d471d50e3d495379f093a7f9e4
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/159438/


  commit a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Author: David Woodhouse <dwmw@amazon.co.uk>
  Date:   Wed Jan 13 13:26:02 2021 +0000
  
      xen: Fix event channel callback via INTX/GSI
      
      [ Upstream commit 3499ba8198cad47b731792e5e56b9ec2a78a83a2 ]
      
      For a while, event channel notification via the PCI platform device
      has been broken, because we attempt to communicate with xenstore before
      we even have notifications working, with the xs_reset_watches() call
      in xs_init().
      
      We tend to get away with this on Xen versions below 4.0 because we avoid
      calling xs_reset_watches() anyway, because xenstore might not cope with
      reading a non-existent key. And newer Xen *does* have the vector
      callback support, so we rarely fall back to INTX/GSI delivery.
      
      To fix it, clean up a bit of the mess of xs_init() and xenbus_probe()
      startup. Call xs_init() directly from xenbus_init() only in the !XS_HVM
      case, deferring it to be called from xenbus_probe() in the XS_HVM case
      instead.
      
      Then fix up the invocation of xenbus_probe() to happen either from its
      device_initcall if the callback is available early enough, or when the
      callback is finally set up. This means that the hack of calling
      xenbus_probe() from a workqueue after the first interrupt, or directly
      from the PCI platform device setup, is no longer needed.
      
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Link: https://lore.kernel.org/r/20210113132606.422794-2-dwmw2@infradead.org
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>

neato: graph is too large for cairo-renderer bitmaps. Scaling by 0.913366 to fit
pnmtopng: 91 colors found
Revision graph left in /home/logs/results/bisect/linux-5.4/test-armhf-armhf-xl-multivcpu.guest-start.{dot,ps,png,html,svg}.
----------------------------------------
159438: tolerable ALL FAIL

flight 159438 linux-5.4 real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/159438/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-armhf-armhf-xl-multivcpu 14 guest-start            fail baseline untested


jobs:
 test-armhf-armhf-xl-multivcpu                                fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Wed Feb 17 05:29:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 05:29:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86116.161287 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCFPH-0002HQ-Ov; Wed, 17 Feb 2021 05:29:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86116.161287; Wed, 17 Feb 2021 05:29:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCFPH-0002HJ-Ln; Wed, 17 Feb 2021 05:29:47 +0000
Received: by outflank-mailman (input) for mailman id 86116;
 Wed, 17 Feb 2021 05:29:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCFPG-0002H9-Dq; Wed, 17 Feb 2021 05:29:46 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCFPG-0003Xu-8w; Wed, 17 Feb 2021 05:29:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCFPG-0004fi-16; Wed, 17 Feb 2021 05:29:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lCFPG-0005ty-0Y; Wed, 17 Feb 2021 05:29:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=iZHITwBQNxQLYJ4H+b42xzxYYtUvkhg2aWHpclafKWw=; b=1V2U8x7KzZ/rZ/vk8pXKbC+IHN
	m//lW2GGDti+euC923R5CCvhA8ewqJQZdEMnW/cscc2RmPrG881B3r7B4BRV5UE0brBroowyuW76g
	C7G3KfPedLgMZ5+bI+jeorGNlgoU2Of2uacPNXzzw5ervnuDd4uI5p1i/Aog5+m1c50I=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159413-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 159413: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-install:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-localmigrate/x10:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-examine:reboot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=f40ddce88593482919761f74910f42f4b84c004b
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 17 Feb 2021 05:29:46 +0000

flight 159413 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159413/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 10 host-ping-check-xen      fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      12 debian-install fail in 159367 REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu  fail in 159367 REGR. vs. 152332
 test-arm64-arm64-xl-credit1 10 host-ping-check-xen fail in 159391 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-i386-pvgrub 19 guest-localmigrate/x10 fail in 159367 pass in 159413
 test-arm64-arm64-xl-xsm       8 xen-boot         fail in 159391 pass in 159413
 test-arm64-arm64-libvirt-xsm  8 xen-boot         fail in 159391 pass in 159413
 test-arm64-arm64-xl          10 host-ping-check-xen        fail pass in 159367
 test-arm64-arm64-xl-xsm      10 host-ping-check-xen        fail pass in 159367
 test-arm64-arm64-examine      8 reboot                     fail pass in 159367
 test-arm64-arm64-xl-credit2   8 xen-boot                   fail pass in 159367
 test-arm64-arm64-xl-credit1   8 xen-boot                   fail pass in 159391

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl   11 leak-check/basis(11) fail in 159367 blocked in 152332
 test-arm64-arm64-xl-credit2 11 leak-check/basis(11) fail in 159367 blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                f40ddce88593482919761f74910f42f4b84c004b
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  200 days
Failing since        152366  2020-08-01 20:49:34 Z  199 days  346 attempts
Testing same since   159367  2021-02-15 05:07:31 Z    2 days    3 attempts

------------------------------------------------------------
4578 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images



Not pushing.

(No revision log; it would be 1033140 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 06:47:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 06:47:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86123.161302 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCGc8-0000wV-DL; Wed, 17 Feb 2021 06:47:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86123.161302; Wed, 17 Feb 2021 06:47:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCGc8-0000wO-9z; Wed, 17 Feb 2021 06:47:08 +0000
Received: by outflank-mailman (input) for mailman id 86123;
 Wed, 17 Feb 2021 06:47:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=T8s0=HT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lCGc7-0000wJ-39
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 06:47:07 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ca82ad5e-9af1-494f-b9ac-436d6ef575cf;
 Wed, 17 Feb 2021 06:47:05 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6F956B1C8;
 Wed, 17 Feb 2021 06:47:04 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ca82ad5e-9af1-494f-b9ac-436d6ef575cf
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613544424; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=P/zoz/I1xLZaXQg7lCRnP0IrXBCbBdM7Ny+FAZcPeyA=;
	b=GDq2k7CGwGFS1qHUYWpHS5lWFg1m4iUPUZohx2I/2+a9qIbiOujmOk5jfjWkCdO1SDFuXX
	bEUZlB3YAfrn5ogZnZ1V4tcAoa28UHQxr45jcEVxSMWHMg74ynSki4FTMhphA7hUAdLhjn
	PtvKOVraTblUl12YaKKdM4lqU4sQbGM=
To: Stefano Stabellini <sstabellini@kernel.org>,
 Roman Shaposhnik <roman@zededa.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, jbeulich@suse.com,
 andrew.cooper3@citrix.com, roger.pau@citrix.com, wl@xen.org,
 george.dunlap@citrix.com
References: <CAMmSBy-_UOK6DTrwGNOw8Y59Muv8H8wxmsc4-BXcv3N_u5USBA@mail.gmail.com>
 <alpine.DEB.2.21.2102161232310.3234@sstabellini-ThinkPad-T480s>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: Linux DomU freezes and dies under heavy memory shuffling
Message-ID: <45b8ef4c-6d36-e91b-ca1a-a82eeca5aaf5@suse.com>
Date: Wed, 17 Feb 2021 07:47:03 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2102161232310.3234@sstabellini-ThinkPad-T480s>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="M6CPskvQftaQ9vNnWZkwIDUQDLKr6zsKe"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--M6CPskvQftaQ9vNnWZkwIDUQDLKr6zsKe
Content-Type: multipart/mixed; boundary="Xl4SWKGDcTKXMa7tAnXUj2ynJt3ze48j0";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Stefano Stabellini <sstabellini@kernel.org>,
 Roman Shaposhnik <roman@zededa.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, jbeulich@suse.com,
 andrew.cooper3@citrix.com, roger.pau@citrix.com, wl@xen.org,
 george.dunlap@citrix.com
Message-ID: <45b8ef4c-6d36-e91b-ca1a-a82eeca5aaf5@suse.com>
Subject: Re: Linux DomU freezes and dies under heavy memory shuffling
References: <CAMmSBy-_UOK6DTrwGNOw8Y59Muv8H8wxmsc4-BXcv3N_u5USBA@mail.gmail.com>
 <alpine.DEB.2.21.2102161232310.3234@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2102161232310.3234@sstabellini-ThinkPad-T480s>

--Xl4SWKGDcTKXMa7tAnXUj2ynJt3ze48j0
Content-Type: multipart/mixed;
 boundary="------------8F4CDFD94349F7FF19611F9A"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------8F4CDFD94349F7FF19611F9A
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 16.02.21 21:34, Stefano Stabellini wrote:
> + x86 maintainers
>
> It looks like the tlbflush is getting stuck?

I have seen this case multiple times on customer systems now, but
reproducing it reliably seems to be very hard.

I suspected FIFO events to be to blame, but just yesterday I was
informed of another case with FIFO events disabled in the guest.

One common pattern so far: I have seen this effect only on systems
with Intel Gold CPUs. Can that be confirmed for this case, too?

In case anybody has a reproducer (either in a guest or dom0) with a
setup where a diagnostic kernel can be used, I'd be _very_ interested!
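For reference, the "heavy memory shuffling" can be approximated with a simple ballooning loop. This is only a sketch, assuming the trigger is ballooning via "xl mem-set" (as in the original report); the domain name, memory bounds, and iteration count are placeholders, and with DRY_RUN set the xl commands are printed rather than executed:

```shell
#!/bin/sh
# Sketch of a memory-shuffling stress loop, assuming the trigger is
# ballooning via "xl mem-set". All names and numbers are placeholders.
DRY_RUN=1   # unset to actually execute the xl commands

run() {
    # Print instead of execute when DRY_RUN is set.
    if [ -n "$DRY_RUN" ]; then echo "$@"; else "$@"; fi
}

# stress <domain> <low-mb> <high-mb> <iterations>
stress() {
    dom="$1"; low="$2"; high="$3"; n="$4"
    i=0
    while [ "$i" -lt "$n" ]; do
        run xl mem-set "$dom" "${low}m"
        run xl mem-set "$dom" "${high}m"
        i=$((i + 1))
    done
}

stress domU 512 2048 2
```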


Juergen

--------------8F4CDFD94349F7FF19611F9A
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------8F4CDFD94349F7FF19611F9A--

--Xl4SWKGDcTKXMa7tAnXUj2ynJt3ze48j0--

--M6CPskvQftaQ9vNnWZkwIDUQDLKr6zsKe
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmAsu+cFAwAAAAAACgkQsN6d1ii/Ey94
nwf9HFQ8uf6gS+7thbedwNQVLrbT4ZIVcZB2wxy2Muj2kBr3aefr/pFU+moMEuHF84+J3zWiTQJe
9roq6NH79Hm6PEvYQiPPt1WMqK74OBjJQjz5Cuoir7bT3Fvg3HGssRVVGjxmC+DVwwzBSOdJ2nlN
YSzXddQHfnEadCwIW0R0BJlimsuAcrFEdSF4Utb5cd2Lp2geB3qJAh3KVNABVnk877UVhEvftlrz
oBtfbwlj2QRlP02R03byHksydVzkA2WXlqnDIHjKBQwMT18cLXGrvzgjA4M3MMEea56pjh6X/glT
6EQFjY61YnYa6oKx7qoue19X8vRqoyoYoMR/UqGoyQ==
=d+Cg
-----END PGP SIGNATURE-----

--M6CPskvQftaQ9vNnWZkwIDUQDLKr6zsKe--


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 06:51:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 06:51:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86124.161313 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCGgb-0001s7-2G; Wed, 17 Feb 2021 06:51:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86124.161313; Wed, 17 Feb 2021 06:51:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCGga-0001s0-Vf; Wed, 17 Feb 2021 06:51:44 +0000
Received: by outflank-mailman (input) for mailman id 86124;
 Wed, 17 Feb 2021 06:51:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=T8s0=HT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lCGga-0001rv-9Q
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 06:51:44 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fab37a28-e966-452e-83c7-1a5ac703eeb3;
 Wed, 17 Feb 2021 06:51:43 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A4A61AFBF;
 Wed, 17 Feb 2021 06:51:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fab37a28-e966-452e-83c7-1a5ac703eeb3
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613544702; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=mtZwWOa8RPZDhC9LBp92rSU7qze88B6rRKpZiclGDcY=;
	b=lm+KY83wTrR857Kbo3iypkG0qbxGvV4CQS/X6QCN7IzjS7rc0ygjnrkCzCtuZ0GVhPnVk7
	nQHMvnpZiUrMAe1IU9SXo3nfdvDqbMxlTOO5ZINNrMVmpEsV+rMH8mjZnIAxJEmpXzdpJ/
	ljAAIDFckXe9kFIJ9IR88gxS55Ro7fg=
Subject: Re: Linux PV/PVH domU crash on (guest) resume from suspend
To: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>, xen-devel <xen-devel@lists.xenproject.org>
References: <YCylpKU8F+Hsg8YL@mail-itl>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <0b71a671-592a-53ab-6b4a-1fe15b9eb453@suse.com>
Date: Wed, 17 Feb 2021 07:51:42 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <YCylpKU8F+Hsg8YL@mail-itl>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="UX4saGOiEUpFPtBDrnrAEnG2ZNeobw8mK"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--UX4saGOiEUpFPtBDrnrAEnG2ZNeobw8mK
Content-Type: multipart/mixed; boundary="yE44TUMEKY1977Vr2eREM0Yy0cWlzNhOf";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>, xen-devel <xen-devel@lists.xenproject.org>
Message-ID: <0b71a671-592a-53ab-6b4a-1fe15b9eb453@suse.com>
Subject: Re: Linux PV/PVH domU crash on (guest) resume from suspend
References: <YCylpKU8F+Hsg8YL@mail-itl>
In-Reply-To: <YCylpKU8F+Hsg8YL@mail-itl>

--yE44TUMEKY1977Vr2eREM0Yy0cWlzNhOf
Content-Type: multipart/mixed;
 boundary="------------7CF6C416BAAF436A25352388"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------7CF6C416BAAF436A25352388
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 17.02.21 06:12, Marek Marczykowski-Górecki wrote:
> Hi,
>
> I'm observing a Linux PV/PVH guest crash when I resume it from sleep. I do
> this with:
>
>      virsh -c xen dompmsuspend <vmname> mem
>      virsh -c xen dompmwakeup <vmname>
>
> But it's possible to trigger it with plain xl too:
>
>      xl save -c <vmname> <some-file>
>=20
> The same on HVM works fine.
>
> This is on Xen 4.14.1 with guest kernel 5.4.90; the same happens with
> 5.4.98. The dom0 kernel is the same, but I'm not sure whether that's
> relevant here. I can reliably reproduce it.
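
(For reference, the quoted suspend/resume cycle can be looped for stress testing. This is just a sketch: the domain name and iteration count are placeholders, and with DRY_RUN set it only prints the virsh commands instead of executing them.)

```shell
#!/bin/sh
# Sketch: cycle a guest through dompmsuspend/dompmwakeup repeatedly.
# The domain name and iteration count are placeholders.
DRY_RUN=1   # unset to actually execute the virsh commands

run() {
    # Print instead of execute when DRY_RUN is set.
    if [ -n "$DRY_RUN" ]; then echo "$@"; else "$@"; fi
}

# cycle <vmname> <iterations>
cycle() {
    vm="$1"; n="$2"
    i=0
    while [ "$i" -lt "$n" ]; do
        run virsh -c xen dompmsuspend "$vm" mem
        run virsh -c xen dompmwakeup "$vm"
        i=$((i + 1))
    done
}

cycle guest1 1
```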

This is already on my list of issues to look at.

The problem seems to be related to the XSA-332 patches. You could try
the patches I sent out recently addressing other fallout from XSA-332;
they _might_ fix this issue, too:

https://patchew.org/Xen/20210211101616.13788-1-jgross@suse.com/


Juergen

--------------7CF6C416BAAF436A25352388
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------7CF6C416BAAF436A25352388--

--yE44TUMEKY1977Vr2eREM0Yy0cWlzNhOf--

--UX4saGOiEUpFPtBDrnrAEnG2ZNeobw8mK
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmAsvP4FAwAAAAAACgkQsN6d1ii/Ey/a
Bgf/eKI57Fh58nFthQFuvkD0jHeBLPYqMP5mccGjOTuh5M6Qnk6OtsS4FEdCveKRaEeaEiwcidGC
ezf06ylBzoGMjikjsfm3DV8j3fz3sOkC6cb8qLJvENXpi6ZLThn5VfuMnGzQHoJANbWaiQfvV0Bh
xs0PMrVpwP6FjNd6FVjqdAZhxShyK0ctmoGV8UnOqmcAp7A7MwCOEgOtBim/Lrqo12JhG+CB9949
ACFe+5pGsRcWDrebX5Y0dLBvkGJImUpKhH3I5maWIYHnA0XLJCPl9H6yhngFXbjRSd8Uhq9UHW1i
P4db8IPaZXMGgWx5F96EfSMPC0nwKH3ZXwjAt6QryA==
=9eXr
-----END PGP SIGNATURE-----

--UX4saGOiEUpFPtBDrnrAEnG2ZNeobw8mK--


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 07:54:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 07:54:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86127.161326 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCHet-0007JJ-Q1; Wed, 17 Feb 2021 07:54:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86127.161326; Wed, 17 Feb 2021 07:54:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCHet-0007JC-Mc; Wed, 17 Feb 2021 07:54:03 +0000
Received: by outflank-mailman (input) for mailman id 86127;
 Wed, 17 Feb 2021 07:54:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCHes-0007J4-Cw; Wed, 17 Feb 2021 07:54:02 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCHes-0005vs-5i; Wed, 17 Feb 2021 07:54:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCHer-0003Yz-Pv; Wed, 17 Feb 2021 07:54:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lCHer-0001XP-NZ; Wed, 17 Feb 2021 07:54:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=uV8YS6m6T2lbiLx7/jCA3x9FHrd3ha7TRcbL2Qs7hzc=; b=WlFSlVseusrNuJsXTiD5t754Kn
	IIJo/QLg/oZDUf1IZ86ER2Xt7kOWQPa4aCnidcfoBJphTdX8QXlwRu08pGTDlJoGb0Lc2A9ihkyGU
	NhtOPffWMJZuGXw6CFTUxS+4VTyVoJFUsIIHKm83CjD/UE5jCYlkZwjWjklZvklgo8iw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159417-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.11-testing test] 159417: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=80cad584fb4c2599ae174226e2c913bb23df3bfa
X-Osstest-Versions-That:
    xen=1c7d984645f9ade9b47e862b5880734ad498fea8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 17 Feb 2021 07:54:01 +0000

flight 159417 xen-4.11-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159417/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 159274
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 159274
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 159274
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 159274
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 159274
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 159274
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 159274
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 159274
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 159274
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 159274
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 159274
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 159274
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 159274
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  80cad584fb4c2599ae174226e2c913bb23df3bfa
baseline version:
 xen                  1c7d984645f9ade9b47e862b5880734ad498fea8

Last test of basis   159274  2021-02-12 00:12:35 Z    5 days
Testing same since   159417  2021-02-16 15:05:59 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <jgrall@amazon.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   1c7d984645..80cad584fb  80cad584fb4c2599ae174226e2c913bb23df3bfa -> stable-4.11


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 08:12:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 08:12:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86139.161345 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCHwa-0001Ip-KF; Wed, 17 Feb 2021 08:12:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86139.161345; Wed, 17 Feb 2021 08:12:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCHwa-0001Ii-H3; Wed, 17 Feb 2021 08:12:20 +0000
Received: by outflank-mailman (input) for mailman id 86139;
 Wed, 17 Feb 2021 08:12:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DxxK=HT=zededa.com=roman@srs-us1.protection.inumbo.net>)
 id 1lCHwZ-0001Id-Q6
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 08:12:19 +0000
Received: from mail-qk1-x72b.google.com (unknown [2607:f8b0:4864:20::72b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e3a59a0a-b68a-40c0-8488-1526509153c7;
 Wed, 17 Feb 2021 08:12:17 +0000 (UTC)
Received: by mail-qk1-x72b.google.com with SMTP id c3so11340750qkj.11
 for <xen-devel@lists.xenproject.org>; Wed, 17 Feb 2021 00:12:17 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e3a59a0a-b68a-40c0-8488-1526509153c7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=zededa.com; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc:content-transfer-encoding;
        bh=VTjdFI+DhbdiZu1W84z4/bry4Ozmd78oQz1jcG+yAz8=;
        b=axOw/BlkT6+Fgu929G7QhIJhl+AsLB9nEhtRfHwYvE2H5ceqtkinWuagD51jTVbVCR
         AKUe9tVsk9liOydGR2Fgtq9kcStxAsgXTKw1tM2P8UnKVx1dSM8QpbePiq48t99DXV0J
         YSx+kSv7UBur7zrkYkuNlHMNKMDHwNWPfoEz0LKR7t/5GvZP4RnMR3uH6yiNhxV6fIPN
         yADghDzeYgRTEXnHW55jKoTt+zhWNFK3m630+DKVkI8k0+3KeGu2z7JgICx7onRZd+OW
         4MuVpOXV9nyswuXcoTl5/xZp6kb/3s8ggCmvv+jllrOvP6+d+3DAMiN6JaBVGLXnxlnf
         wVOw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc:content-transfer-encoding;
        bh=VTjdFI+DhbdiZu1W84z4/bry4Ozmd78oQz1jcG+yAz8=;
        b=ISTxfW2yT5mEJ9mrb0WhSIsFP8ibYQI4iYCjBMhZyJHBhTGB5k/zEIwV2lwfLcp+nk
         PLJ/WJmxRuTQSlSHmiKFWv8o9kOSsuvAHiaSwCux5RxY1nXae350swITOLGH0EA1Brzc
         58areGHI/PJ4dWY5lm/iUltr+gtZ1W2UtyKESgX0T9Q4ExMdguaDjXkzr9ggCmZkootC
         yXRt055QMVerHdZflwY0YS/MtD3e9e8ZVXQi3gBZseHqMlH2IKAevlZz8FSGjpzzF3MD
         +ubb+nBL3nP4c5PglqCzgY+9Z4wpKFgkWGNdC2Hczr8RrzgkjYm9qAZ0luqeNgzvkg6X
         yfsg==
X-Gm-Message-State: AOAM532woVIUQTbYTtOZFKuFMilcbFQYDWxwQJ64BmGbwnyQb/4gVR6I
	btoN3gwMHi7Hcjz4F0MXHJYVXjlu3bcyCv+igBBAvQ==
X-Google-Smtp-Source: ABdhPJyDPOkIN7B3Sy9stn/hUhmweh/Z4xG0/NMCd9Jaw3934wXaTtNnai8UHWvY6Xa6ZuqbUaxvst3ZymC3O68IUR4=
X-Received: by 2002:a37:d0b:: with SMTP id 11mr14688599qkn.267.1613549537469;
 Wed, 17 Feb 2021 00:12:17 -0800 (PST)
MIME-Version: 1.0
References: <CAMmSBy-_UOK6DTrwGNOw8Y59Muv8H8wxmsc4-BXcv3N_u5USBA@mail.gmail.com>
 <alpine.DEB.2.21.2102161232310.3234@sstabellini-ThinkPad-T480s> <45b8ef4c-6d36-e91b-ca1a-a82eeca5aaf5@suse.com>
In-Reply-To: <45b8ef4c-6d36-e91b-ca1a-a82eeca5aaf5@suse.com>
From: Roman Shaposhnik <roman@zededa.com>
Date: Wed, 17 Feb 2021 00:12:07 -0800
Message-ID: <CAMmSBy8k0Y50Xkq9Kq+oES27gsoG==T++Hz9SiR0gDgAKnpvRA@mail.gmail.com>
Subject: Re: Linux DomU freezes and dies under heavy memory shuffling
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Xen-devel <xen-devel@lists.xenproject.org>, 
	Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>, 
	George Dunlap <george.dunlap@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi Jürgen, thanks for taking a look at this. A few comments below:

On Tue, Feb 16, 2021 at 10:47 PM Jürgen Groß <jgross@suse.com> wrote:
>
> On 16.02.21 21:34, Stefano Stabellini wrote:
> > + x86 maintainers
> >
> > It looks like the tlbflush is getting stuck?
>
> I have seen this case multiple times on customer systems now, but
> reproducing it reliably seems to be very hard.

It is reliably reproducible under my workload, but it takes a long time
(~3 days of the workload running in the lab).

> I suspected fifo events to be blamed, but just yesterday I've been
> informed of another case with fifo events disabled in the guest.
>
> One common pattern seems to be that up to now I have seen this effect
> only on systems with Intel Gold cpus. Can it be confirmed to be true
> in this case, too?

I am pretty sure mine isn't -- I can get you full CPU specs if that's useful.

> In case anybody has a reproducer (either in a guest or dom0) with a
> setup where a diagnostic kernel can be used, I'd be _very_ interested!

I can easily add things to Dom0 and DomU. Whether that will disrupt the
experiment is, of course, another matter. Still, please let me know what
would be helpful to do.

Thanks,
Roman.


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 08:16:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 08:16:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86142.161357 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCI0Y-0001Tu-95; Wed, 17 Feb 2021 08:16:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86142.161357; Wed, 17 Feb 2021 08:16:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCI0Y-0001Tn-5b; Wed, 17 Feb 2021 08:16:26 +0000
Received: by outflank-mailman (input) for mailman id 86142;
 Wed, 17 Feb 2021 08:16:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XFXw=HT=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lCI0X-0001Ti-81
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 08:16:25 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 88919c7c-27dc-4eb2-b48d-ba0d520c285e;
 Wed, 17 Feb 2021 08:16:24 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B7242AFF1;
 Wed, 17 Feb 2021 08:16:23 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 88919c7c-27dc-4eb2-b48d-ba0d520c285e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613549783; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=wkFJ/b05L0J/NukyWmjGp+kEZkAv1xGr0xeiUNwWK/w=;
	b=KdPnlmPHlzQfK0Wz1c7Wj9W/mNsWgoszF16+GESNvky6Z8/zQcaTd6KrslfsEoHiYh+DRe
	JQChSMjaF63949Ol0LP3oFsxQmp+bBFOvazW0UvKV5hnigOTPj7juPaOlgc6GXh2+BJDS8
	GPZ6q5lIBn0Zad5b4PRd9zEiHNjOUgE=
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2 0/8] x86/PV: avoid speculation abuse through guest
 accessors
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Ian Jackson <iwj@xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>
Message-ID: <b466a19e-e547-3c7c-e39b-1a4c848a053a@suse.com>
Date: Wed, 17 Feb 2021 09:16:19 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Re-sending primarily for the purpose of getting a release ack, an
explicit release nak, or an indication of there not being a need,
all for at least the first three patches here (which are otherwise
ready to go in). I've dropped the shadow part of the series from
this re-submission, because it has all got reviewed by Tim already
and is intended for 4.16 only anyway. I'm re-including the follow
up patches getting the code base in consistent shape again, as I
continue to think this consistency goal is at least worth a
consideration towards a freeze exception.

1: split __{get,put}_user() into "guest" and "unsafe" variants
2: split __copy_{from,to}_user() into "guest" and "unsafe" variants
3: PV: harden guest memory accesses against speculative abuse
4: rename {get,put}_user() to {get,put}_guest()
5: gdbsx: convert "user" to "guest" accesses
6: rename copy_{from,to}_user() to copy_{from,to}_guest_pv()
7: move stac()/clac() from {get,put}_unsafe_asm() ...
8: PV: use get_unsafe() instead of copy_from_unsafe()

Jan


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 08:19:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 08:19:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86145.161369 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCI3s-0001gB-OS; Wed, 17 Feb 2021 08:19:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86145.161369; Wed, 17 Feb 2021 08:19:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCI3s-0001g4-Kx; Wed, 17 Feb 2021 08:19:52 +0000
Received: by outflank-mailman (input) for mailman id 86145;
 Wed, 17 Feb 2021 08:19:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XFXw=HT=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lCI3q-0001fw-Pe
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 08:19:50 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b351726e-cebe-4d68-ac0b-110aa6d43cd9;
 Wed, 17 Feb 2021 08:19:49 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 30458AFF1;
 Wed, 17 Feb 2021 08:19:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b351726e-cebe-4d68-ac0b-110aa6d43cd9
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613549988; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=xgld3+LJHJd9MalP1rtZpn+6haeSaOlZWapLR/Xv3Tw=;
	b=MhiS8JXOXg9FZO9Ma+IyGotMI01A8R56O1bbrW/rWtCFtfCMZ+MaUq83hZNKYdnBjNKf/2
	CPA08pE+UajHpKQwhdS2u/q/K00tA3yxQTgmMYWAiemFO8zwLXL9wpOy5uATYugQy4ehU8
	LRNsgE8npCcOuSRf3RKmhw6wAfd0UCY=
Subject: [PATCH v2 1/8] x86: split __{get,put}_user() into "guest" and
 "unsafe" variants
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>
References: <b466a19e-e547-3c7c-e39b-1a4c848a053a@suse.com>
Message-ID: <75f52f7b-0f94-5c7c-fbd8-f2c85a8a7044@suse.com>
Date: Wed, 17 Feb 2021 09:19:47 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <b466a19e-e547-3c7c-e39b-1a4c848a053a@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

The "guest" variants are intended to work with (potentially) fully guest
controlled addresses, while the "unsafe" variants are intended to be
used in order to access addresses not (directly) under guest control,
within Xen's part of virtual address space. (For linear page table and
descriptor table accesses the low bits of the addresses may still be
guest controlled, but this still won't allow speculation to "escape"
into unwanted areas.) Subsequently we will want them to have distinct
behavior, so as a first step identify which one is which. For now, both
groups of constructs alias one another.

Double underscore prefixes are retained only on __{get,put}_guest(), to
allow still distinguishing them from their "checking" counterparts once
they also get renamed (to {get,put}_guest()).

Since for them it's almost a full re-write, move what becomes
{get,put}_unsafe_size() into the "common" uaccess.h (x86_64/*.h should
disappear at some point anyway).

In __copy_to_user() one of the two casts in each put_guest_size()
invocation gets dropped. They're not needed and did break symmetry with
__copy_from_user().

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Tim Deegan <tim@xen.org> [shadow]
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
---
v2: Use __get_guest() in {,compat_}show_guest_stack().

--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -776,9 +776,9 @@ shadow_write_entries(void *d, void *s, i
     /* Because we mirror access rights at all levels in the shadow, an
      * l2 (or higher) entry with the RW bit cleared will leave us with
      * no write access through the linear map.
-     * We detect that by writing to the shadow with __put_user() and
+     * We detect that by writing to the shadow with put_unsafe() and
      * using map_domain_page() to get a writeable mapping if we need to. */
-    if ( __put_user(*dst, dst) )
+    if ( put_unsafe(*dst, dst) )
     {
         perfc_incr(shadow_linear_map_failed);
         map = map_domain_page(mfn);
--- a/xen/arch/x86/pv/emul-gate-op.c
+++ b/xen/arch/x86/pv/emul-gate-op.c
@@ -40,7 +40,7 @@ static int read_gate_descriptor(unsigned
          ((gate_sel >> 3) + !is_pv_32bit_vcpu(v) >=
           (gate_sel & 4 ? v->arch.pv.ldt_ents
                         : v->arch.pv.gdt_ents)) ||
-         __get_user(desc, pdesc) )
+         get_unsafe(desc, pdesc) )
         return 0;
 
     *sel = (desc.a >> 16) & 0x0000fffc;
@@ -59,7 +59,7 @@ static int read_gate_descriptor(unsigned
     {
         if ( (*ar & 0x1f00) != 0x0c00 ||
              /* Limit check done above already. */
-             __get_user(desc, pdesc + 1) ||
+             get_unsafe(desc, pdesc + 1) ||
              (desc.b & 0x1f00) )
             return 0;
 
@@ -294,7 +294,7 @@ void pv_emulate_gate_op(struct cpu_user_
         { \
             --stkp; \
             esp -= 4; \
-            rc = __put_user(item, stkp); \
+            rc = __put_guest(item, stkp); \
             if ( rc ) \
             { \
                 pv_inject_page_fault(PFEC_write_access, \
@@ -362,7 +362,7 @@ void pv_emulate_gate_op(struct cpu_user_
                     unsigned int parm;
 
                     --ustkp;
-                    rc = __get_user(parm, ustkp);
+                    rc = __get_guest(parm, ustkp);
                     if ( rc )
                     {
                         pv_inject_page_fault(0, (unsigned long)(ustkp + 1) - rc);
--- a/xen/arch/x86/pv/emulate.c
+++ b/xen/arch/x86/pv/emulate.c
@@ -34,13 +34,13 @@ int pv_emul_read_descriptor(unsigned int
     if ( sel < 4 ||
          /*
           * Don't apply the GDT limit here, as the selector may be a Xen
-          * provided one. __get_user() will fail (without taking further
+          * provided one. get_unsafe() will fail (without taking further
           * action) for ones falling in the gap between guest populated
           * and Xen ones.
           */
          ((sel & 4) && (sel >> 3) >= v->arch.pv.ldt_ents) )
         desc.b = desc.a = 0;
-    else if ( __get_user(desc, gdt_ldt_desc_ptr(sel)) )
+    else if ( get_unsafe(desc, gdt_ldt_desc_ptr(sel)) )
         return 0;
     if ( !insn_fetch )
         desc.b &= ~_SEGMENT_L;
--- a/xen/arch/x86/pv/iret.c
+++ b/xen/arch/x86/pv/iret.c
@@ -114,15 +114,15 @@ unsigned int compat_iret(void)
     regs->rsp = (u32)regs->rsp;
 
     /* Restore EAX (clobbered by hypercall). */
-    if ( unlikely(__get_user(regs->eax, (u32 *)regs->rsp)) )
+    if ( unlikely(__get_guest(regs->eax, (u32 *)regs->rsp)) )
     {
         domain_crash(v->domain);
         return 0;
     }
 
     /* Restore CS and EIP. */
-    if ( unlikely(__get_user(regs->eip, (u32 *)regs->rsp + 1)) ||
-        unlikely(__get_user(regs->cs, (u32 *)regs->rsp + 2)) )
+    if ( unlikely(__get_guest(regs->eip, (u32 *)regs->rsp + 1)) ||
+        unlikely(__get_guest(regs->cs, (u32 *)regs->rsp + 2)) )
     {
         domain_crash(v->domain);
         return 0;
@@ -132,7 +132,7 @@ unsigned int compat_iret(void)
      * Fix up and restore EFLAGS. We fix up in a local staging area
      * to avoid firing the BUG_ON(IOPL) check in arch_get_info_guest.
      */
-    if ( unlikely(__get_user(eflags, (u32 *)regs->rsp + 3)) )
+    if ( unlikely(__get_guest(eflags, (u32 *)regs->rsp + 3)) )
     {
         domain_crash(v->domain);
         return 0;
@@ -164,16 +164,16 @@ unsigned int compat_iret(void)
         {
             for (i = 1; i < 10; ++i)
             {
-                rc |= __get_user(x, (u32 *)regs->rsp + i);
-                rc |= __put_user(x, (u32 *)(unsigned long)ksp + i);
+                rc |= __get_guest(x, (u32 *)regs->rsp + i);
+                rc |= __put_guest(x, (u32 *)(unsigned long)ksp + i);
             }
         }
         else if ( ksp > regs->esp )
         {
             for ( i = 9; i > 0; --i )
             {
-                rc |= __get_user(x, (u32 *)regs->rsp + i);
-                rc |= __put_user(x, (u32 *)(unsigned long)ksp + i);
+                rc |= __get_guest(x, (u32 *)regs->rsp + i);
+                rc |= __put_guest(x, (u32 *)(unsigned long)ksp + i);
             }
         }
         if ( rc )
@@ -189,7 +189,7 @@ unsigned int compat_iret(void)
             eflags &= ~X86_EFLAGS_IF;
         regs->eflags &= ~(X86_EFLAGS_VM|X86_EFLAGS_RF|
                           X86_EFLAGS_NT|X86_EFLAGS_TF);
-        if ( unlikely(__put_user(0, (u32 *)regs->rsp)) )
+        if ( unlikely(__put_guest(0, (u32 *)regs->rsp)) )
         {
             domain_crash(v->domain);
             return 0;
@@ -205,8 +205,8 @@ unsigned int compat_iret(void)
     else if ( ring_1(regs) )
         regs->esp += 16;
     /* Return to ring 2/3: restore ESP and SS. */
-    else if ( __get_user(regs->ss, (u32 *)regs->rsp + 5) ||
-              __get_user(regs->esp, (u32 *)regs->rsp + 4) )
+    else if ( __get_guest(regs->ss, (u32 *)regs->rsp + 5) ||
+              __get_guest(regs->esp, (u32 *)regs->rsp + 4) )
     {
         domain_crash(v->domain);
         return 0;
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -274,7 +274,7 @@ static void compat_show_guest_stack(stru
     {
         if ( (((long)stack - 1) ^ ((long)(stack + 1) - 1)) & mask )
             break;
-        if ( __get_user(addr, stack) )
+        if ( __get_guest(addr, stack) )
         {
             if ( i != 0 )
                 printk("\n    ");
@@ -343,7 +343,7 @@ static void show_guest_stack(struct vcpu
     {
         if ( (((long)stack - 1) ^ ((long)(stack + 1) - 1)) & mask )
             break;
-        if ( __get_user(addr, stack) )
+        if ( __get_guest(addr, stack) )
         {
             if ( i != 0 )
                 printk("\n    ");
--- a/xen/include/asm-x86/uaccess.h
+++ b/xen/include/asm-x86/uaccess.h
@@ -59,13 +59,11 @@ extern void __put_user_bad(void);
   __put_user_check((__typeof__(*(ptr)))(x),(ptr),sizeof(*(ptr)))
 
 /**
- * __get_user: - Get a simple variable from user space, with less checking.
+ * __get_guest: - Get a simple variable from guest space, with less checking.
  * @x:   Variable to store result.
- * @ptr: Source address, in user space.
+ * @ptr: Source address, in guest space.
  *
- * Context: User context only.  This function may sleep.
- *
- * This macro copies a single simple variable from user space to kernel
+ * This macro copies a single simple variable from guest space to hypervisor
  * space.  It supports simple types like char and int, but not larger
  * data types like structures or arrays.
  *
@@ -78,17 +76,15 @@ extern void __put_user_bad(void);
  * Returns zero on success, or -EFAULT on error.
  * On error, the variable @x is set to zero.
  */
-#define __get_user(x,ptr) \
-  __get_user_nocheck((x),(ptr),sizeof(*(ptr)))
+#define __get_guest(x, ptr) get_guest_nocheck(x, ptr, sizeof(*(ptr)))
+#define get_unsafe __get_guest
 
 /**
- * __put_user: - Write a simple value into user space, with less checking.
- * @x:   Value to copy to user space.
- * @ptr: Destination address, in user space.
+ * __put_guest: - Write a simple value into guest space, with less checking.
+ * @x:   Value to store in guest space.
+ * @ptr: Destination address, in guest space.
  *
- * Context: User context only.  This function may sleep.
- *
- * This macro copies a single simple value from kernel space to user
+ * This macro copies a single simple value from hypervisor space to guest
  * space.  It supports simple types like char and int, but not larger
  * data types like structures or arrays.
  *
@@ -100,13 +96,14 @@ extern void __put_user_bad(void);
  *
  * Returns zero on success, or -EFAULT on error.
  */
-#define __put_user(x,ptr) \
-  __put_user_nocheck((__typeof__(*(ptr)))(x),(ptr),sizeof(*(ptr)))
+#define __put_guest(x, ptr) \
+    put_guest_nocheck((__typeof__(*(ptr)))(x), ptr, sizeof(*(ptr)))
+#define put_unsafe __put_guest
 
-#define __put_user_nocheck(x, ptr, size)				\
+#define put_guest_nocheck(x, ptr, size)					\
 ({									\
 	int err_; 							\
-	__put_user_size(x, ptr, size, err_, -EFAULT);			\
+	put_guest_size(x, ptr, size, err_, -EFAULT);			\
 	err_;								\
 })
 
@@ -114,14 +111,14 @@ extern void __put_user_bad(void);
 ({									\
 	__typeof__(*(ptr)) __user *ptr_ = (ptr);			\
 	__typeof__(size) size_ = (size);				\
-	access_ok(ptr_, size_) ? __put_user_nocheck(x, ptr_, size_)	\
+	access_ok(ptr_, size_) ? put_guest_nocheck(x, ptr_, size_)	\
 			       : -EFAULT;				\
 })
 
-#define __get_user_nocheck(x, ptr, size)				\
+#define get_guest_nocheck(x, ptr, size)					\
 ({									\
 	int err_; 							\
-	__get_user_size(x, ptr, size, err_, -EFAULT);			\
+	get_guest_size(x, ptr, size, err_, -EFAULT);			\
 	err_;								\
 })
 
@@ -129,7 +126,7 @@ extern void __put_user_bad(void);
 ({									\
 	__typeof__(*(ptr)) __user *ptr_ = (ptr);			\
 	__typeof__(size) size_ = (size);				\
-	access_ok(ptr_, size_) ? __get_user_nocheck(x, ptr_, size_)	\
+	access_ok(ptr_, size_) ? get_guest_nocheck(x, ptr_, size_)	\
 			       : -EFAULT;				\
 })
 
@@ -141,7 +138,7 @@ struct __large_struct { unsigned long bu
  * we do not write to any memory gcc knows about, so there are no
  * aliasing issues.
  */
-#define __put_user_asm(x, addr, err, itype, rtype, ltype, errret)	\
+#define put_unsafe_asm(x, addr, err, itype, rtype, ltype, errret)	\
 	stac();								\
 	__asm__ __volatile__(						\
 		"1:	mov"itype" %"rtype"1,%2\n"			\
@@ -155,7 +152,7 @@ struct __large_struct { unsigned long bu
 		: ltype (x), "m"(__m(addr)), "i"(errret), "0"(err));	\
 	clac()
 
-#define __get_user_asm(x, addr, err, itype, rtype, ltype, errret)	\
+#define get_unsafe_asm(x, addr, err, itype, rtype, ltype, errret)	\
 	stac();								\
 	__asm__ __volatile__(						\
 		"1:	mov"itype" %2,%"rtype"1\n"			\
@@ -170,6 +167,34 @@ struct __large_struct { unsigned long bu
 		: "m"(__m(addr)), "i"(errret), "0"(err));		\
 	clac()
 
+#define put_unsafe_size(x, ptr, size, retval, errret)                      \
+do {                                                                       \
+    retval = 0;                                                            \
+    switch ( size )                                                        \
+    {                                                                      \
+    case 1: put_unsafe_asm(x, ptr, retval, "b", "b", "iq", errret); break; \
+    case 2: put_unsafe_asm(x, ptr, retval, "w", "w", "ir", errret); break; \
+    case 4: put_unsafe_asm(x, ptr, retval, "l", "k", "ir", errret); break; \
+    case 8: put_unsafe_asm(x, ptr, retval, "q",  "", "ir", errret); break; \
+    default: __put_user_bad();                                             \
+    }                                                                      \
+} while ( false )
+#define put_guest_size put_unsafe_size
+
+#define get_unsafe_size(x, ptr, size, retval, errret)                      \
+do {                                                                       \
+    retval = 0;                                                            \
+    switch ( size )                                                        \
+    {                                                                      \
+    case 1: get_unsafe_asm(x, ptr, retval, "b", "b", "=q", errret); break; \
+    case 2: get_unsafe_asm(x, ptr, retval, "w", "w", "=r", errret); break; \
+    case 4: get_unsafe_asm(x, ptr, retval, "l", "k", "=r", errret); break; \
+    case 8: get_unsafe_asm(x, ptr, retval, "q",  "", "=r", errret); break; \
+    default: __get_user_bad();                                             \
+    }                                                                      \
+} while ( false )
+#define get_guest_size get_unsafe_size
+
 /**
  * __copy_to_user: - Copy a block of data into user space, with less checking
  * @to:   Destination address, in user space.
@@ -192,16 +217,16 @@ __copy_to_user(void __user *to, const vo
 
         switch (n) {
         case 1:
-            __put_user_size(*(const u8 *)from, (u8 __user *)to, 1, ret, 1);
+            put_guest_size(*(const uint8_t *)from, to, 1, ret, 1);
             return ret;
         case 2:
-            __put_user_size(*(const u16 *)from, (u16 __user *)to, 2, ret, 2);
+            put_guest_size(*(const uint16_t *)from, to, 2, ret, 2);
             return ret;
         case 4:
-            __put_user_size(*(const u32 *)from, (u32 __user *)to, 4, ret, 4);
+            put_guest_size(*(const uint32_t *)from, to, 4, ret, 4);
             return ret;
         case 8:
-            __put_user_size(*(const u64 *)from, (u64 __user *)to, 8, ret, 8);
+            put_guest_size(*(const uint64_t *)from, to, 8, ret, 8);
             return ret;
         }
     }
@@ -233,16 +258,16 @@ __copy_from_user(void *to, const void __
 
         switch (n) {
         case 1:
-            __get_user_size(*(u8 *)to, from, 1, ret, 1);
+            get_guest_size(*(uint8_t *)to, from, 1, ret, 1);
             return ret;
         case 2:
-            __get_user_size(*(u16 *)to, from, 2, ret, 2);
+            get_guest_size(*(uint16_t *)to, from, 2, ret, 2);
             return ret;
         case 4:
-            __get_user_size(*(u32 *)to, from, 4, ret, 4);
+            get_guest_size(*(uint32_t *)to, from, 4, ret, 4);
             return ret;
         case 8:
-            __get_user_size(*(u64*)to, from, 8, ret, 8);
+            get_guest_size(*(uint64_t *)to, from, 8, ret, 8);
             return ret;
         }
     }
--- a/xen/include/asm-x86/x86_64/uaccess.h
+++ b/xen/include/asm-x86/x86_64/uaccess.h
@@ -57,28 +57,4 @@ extern void *xlat_malloc(unsigned long *
     (likely((count) < (~0U / (size))) && \
      compat_access_ok(addr, 0 + (count) * (size)))
 
-#define __put_user_size(x,ptr,size,retval,errret)			\
-do {									\
-	retval = 0;							\
-	switch (size) {							\
-	case 1: __put_user_asm(x,ptr,retval,"b","b","iq",errret);break;	\
-	case 2: __put_user_asm(x,ptr,retval,"w","w","ir",errret);break; \
-	case 4: __put_user_asm(x,ptr,retval,"l","k","ir",errret);break;	\
-	case 8: __put_user_asm(x,ptr,retval,"q","","ir",errret);break;	\
-	default: __put_user_bad();					\
-	}								\
-} while (0)
-
-#define __get_user_size(x,ptr,size,retval,errret)			\
-do {									\
-	retval = 0;							\
-	switch (size) {							\
-	case 1: __get_user_asm(x,ptr,retval,"b","b","=q",errret);break;	\
-	case 2: __get_user_asm(x,ptr,retval,"w","w","=r",errret);break;	\
-	case 4: __get_user_asm(x,ptr,retval,"l","k","=r",errret);break;	\
-	case 8: __get_user_asm(x,ptr,retval,"q","","=r",errret); break;	\
-	default: __get_user_bad();					\
-	}								\
-} while (0)
-
 #endif /* __X86_64_UACCESS_H */
--- a/xen/test/livepatch/xen_hello_world_func.c
+++ b/xen/test/livepatch/xen_hello_world_func.c
@@ -26,7 +26,7 @@ const char *xen_hello_world(void)
      * Any BUG, or WARN_ON will contain symbol and payload name. Furthermore
      * exceptions will be caught and processed properly.
      */
-    rc = __get_user(tmp, non_canonical_addr);
+    rc = get_unsafe(tmp, non_canonical_addr);
     BUG_ON(rc != -EFAULT);
 #endif
 #if defined(CONFIG_ARM)


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 08:20:12 2021
Subject-bearing headers retained below; repeated transport/routing headers trimmed.
Subject: [PATCH v2 2/8] x86: split __copy_{from,to}_user() into "guest" and
 "unsafe" variants
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>
References: <b466a19e-e547-3c7c-e39b-1a4c848a053a@suse.com>
Message-ID: <2c7530b5-5e56-bac8-6011-6c3a6aa529fa@suse.com>
Date: Wed, 17 Feb 2021 09:20:07 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <b466a19e-e547-3c7c-e39b-1a4c848a053a@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

The "guest" variants are intended to work with (potentially) fully guest
controlled addresses, while the "unsafe" variants are intended for
accessing addresses not (directly) under guest control, within Xen's
part of the virtual address space. Subsequently we will want them to
have distinct behavior, so as a first step identify which one is which.
For now, both groups of constructs alias one another.

Double underscore prefixes are retained only on
__copy_{from,to}_guest_pv(), to allow still distinguishing them from
their "checking" counterparts once they also get renamed (to
copy_{from,to}_guest_pv()).

Add previously missing __user at some call sites.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Tim Deegan <tim@xen.org> [shadow]
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
---
Instead of __copy_{from,to}_guest_pv(), perhaps name them just
__copy_{from,to}_pv()?

--- a/xen/arch/x86/gdbstub.c
+++ b/xen/arch/x86/gdbstub.c
@@ -33,13 +33,13 @@ gdb_arch_signal_num(struct cpu_user_regs
 unsigned int
 gdb_arch_copy_from_user(void *dest, const void *src, unsigned len)
 {
-    return __copy_from_user(dest, src, len);
+    return copy_from_unsafe(dest, src, len);
 }
 
 unsigned int 
 gdb_arch_copy_to_user(void *dest, const void *src, unsigned len)
 {
-    return __copy_to_user(dest, src, len);
+    return copy_to_unsafe(dest, src, len);
 }
 
 void
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -2614,7 +2614,7 @@ static int sh_page_fault(struct vcpu *v,
         {
             shadow_l2e_t sl2e;
             mfn_t gl1mfn;
-            if ( (__copy_from_user(&sl2e,
+            if ( (copy_from_unsafe(&sl2e,
                                    (sh_linear_l2_table(v)
                                     + shadow_l2_linear_offset(va)),
                                    sizeof(sl2e)) != 0)
@@ -2633,7 +2633,7 @@ static int sh_page_fault(struct vcpu *v,
 #endif /* SHOPT_OUT_OF_SYNC */
         /* The only reasons for reserved bits to be set in shadow entries
          * are the two "magic" shadow_l1e entries. */
-        if ( likely((__copy_from_user(&sl1e,
+        if ( likely((copy_from_unsafe(&sl1e,
                                       (sh_linear_l1_table(v)
                                        + shadow_l1_linear_offset(va)),
                                       sizeof(sl1e)) == 0)
@@ -3308,10 +3308,10 @@ static bool sh_invlpg(struct vcpu *v, un
                    sh_linear_l4_table(v)[shadow_l4_linear_offset(linear)])
                & _PAGE_PRESENT) )
             return false;
-        /* This must still be a copy-from-user because we don't have the
+        /* This must still be a copy-from-unsafe because we don't have the
          * paging lock, and the higher-level shadows might disappear
          * under our feet. */
-        if ( __copy_from_user(&sl3e, (sh_linear_l3_table(v)
+        if ( copy_from_unsafe(&sl3e, (sh_linear_l3_table(v)
                                       + shadow_l3_linear_offset(linear)),
                               sizeof (sl3e)) != 0 )
         {
@@ -3330,9 +3330,9 @@ static bool sh_invlpg(struct vcpu *v, un
         return false;
 #endif
 
-    /* This must still be a copy-from-user because we don't have the shadow
+    /* This must still be a copy-from-unsafe because we don't have the shadow
      * lock, and the higher-level shadows might disappear under our feet. */
-    if ( __copy_from_user(&sl2e,
+    if ( copy_from_unsafe(&sl2e,
                           sh_linear_l2_table(v) + shadow_l2_linear_offset(linear),
                           sizeof (sl2e)) != 0 )
     {
@@ -3371,11 +3371,11 @@ static bool sh_invlpg(struct vcpu *v, un
              * hold the paging lock yet.  Check again with the lock held. */
             paging_lock(d);
 
-            /* This must still be a copy-from-user because we didn't
+            /* This must still be a copy-from-unsafe because we didn't
              * have the paging lock last time we checked, and the
              * higher-level shadows might have disappeared under our
              * feet. */
-            if ( __copy_from_user(&sl2e,
+            if ( copy_from_unsafe(&sl2e,
                                   sh_linear_l2_table(v)
                                   + shadow_l2_linear_offset(linear),
                                   sizeof (sl2e)) != 0 )
--- a/xen/arch/x86/pv/emul-gate-op.c
+++ b/xen/arch/x86/pv/emul-gate-op.c
@@ -149,12 +149,12 @@ static int read_mem(enum x86_segment seg
 
     addr = (uint32_t)addr;
 
-    if ( (rc = __copy_from_user(p_data, (void *)addr, bytes)) )
+    if ( (rc = __copy_from_guest_pv(p_data, (void __user *)addr, bytes)) )
     {
         /*
          * TODO: This should report PFEC_insn_fetch when goc->insn_fetch &&
          * cpu_has_nx, but we'd then need a "fetch" variant of
-         * __copy_from_user() respecting NX, SMEP, and protection keys.
+         * __copy_from_guest_pv() respecting NX, SMEP, and protection keys.
          */
         x86_emul_pagefault(0, addr + bytes - rc, ctxt);
         return X86EMUL_EXCEPTION;
--- a/xen/arch/x86/pv/emul-priv-op.c
+++ b/xen/arch/x86/pv/emul-priv-op.c
@@ -649,7 +649,8 @@ static int rep_ins(uint16_t port,
         if ( rc != X86EMUL_OKAY )
             return rc;
 
-        if ( (rc = __copy_to_user((void *)addr, &data, bytes_per_rep)) != 0 )
+        if ( (rc = __copy_to_guest_pv((void __user *)addr, &data,
+                                      bytes_per_rep)) != 0 )
         {
             x86_emul_pagefault(PFEC_write_access,
                                addr + bytes_per_rep - rc, ctxt);
@@ -716,7 +717,8 @@ static int rep_outs(enum x86_segment seg
         if ( rc != X86EMUL_OKAY )
             return rc;
 
-        if ( (rc = __copy_from_user(&data, (void *)addr, bytes_per_rep)) != 0 )
+        if ( (rc = __copy_from_guest_pv(&data, (void __user *)addr,
+                                        bytes_per_rep)) != 0 )
         {
             x86_emul_pagefault(0, addr + bytes_per_rep - rc, ctxt);
             return X86EMUL_EXCEPTION;
@@ -1253,12 +1255,12 @@ static int insn_fetch(enum x86_segment s
     if ( rc != X86EMUL_OKAY )
         return rc;
 
-    if ( (rc = __copy_from_user(p_data, (void *)addr, bytes)) != 0 )
+    if ( (rc = __copy_from_guest_pv(p_data, (void __user *)addr, bytes)) != 0 )
     {
         /*
          * TODO: This should report PFEC_insn_fetch when goc->insn_fetch &&
          * cpu_has_nx, but we'd then need a "fetch" variant of
-         * __copy_from_user() respecting NX, SMEP, and protection keys.
+         * __copy_from_guest_pv() respecting NX, SMEP, and protection keys.
          */
         x86_emul_pagefault(0, addr + bytes - rc, ctxt);
         return X86EMUL_EXCEPTION;
--- a/xen/arch/x86/pv/mm.c
+++ b/xen/arch/x86/pv/mm.c
@@ -41,7 +41,7 @@ l1_pgentry_t *map_guest_l1e(unsigned lon
         return NULL;
 
     /* Find this l1e and its enclosing l1mfn in the linear map. */
-    if ( __copy_from_user(&l2e,
+    if ( copy_from_unsafe(&l2e,
                           &__linear_l2_table[l2_linear_offset(linear)],
                           sizeof(l2_pgentry_t)) )
         return NULL;
--- a/xen/arch/x86/pv/mm.h
+++ b/xen/arch/x86/pv/mm.h
@@ -22,7 +22,7 @@ static inline l1_pgentry_t guest_get_eff
         toggle_guest_pt(curr);
 
     if ( unlikely(!__addr_ok(linear)) ||
-         __copy_from_user(&l1e,
+         copy_from_unsafe(&l1e,
                           &__linear_l1_table[l1_linear_offset(linear)],
                           sizeof(l1_pgentry_t)) )
         l1e = l1e_empty();
--- a/xen/arch/x86/pv/ro-page-fault.c
+++ b/xen/arch/x86/pv/ro-page-fault.c
@@ -43,7 +43,7 @@ static int ptwr_emulated_read(enum x86_s
     unsigned long addr = offset;
 
     if ( !__addr_ok(addr) ||
-         (rc = __copy_from_user(p_data, (void *)addr, bytes)) )
+         (rc = __copy_from_guest_pv(p_data, (void *)addr, bytes)) )
     {
         x86_emul_pagefault(0, addr + bytes - rc, ctxt);  /* Read fault. */
         return X86EMUL_EXCEPTION;
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -1103,7 +1103,7 @@ void do_invalid_op(struct cpu_user_regs
     }
 
     if ( !is_active_kernel_text(regs->rip) ||
-         __copy_from_user(bug_insn, eip, sizeof(bug_insn)) ||
+         copy_from_unsafe(bug_insn, eip, sizeof(bug_insn)) ||
          memcmp(bug_insn, "\xf\xb", sizeof(bug_insn)) )
         goto die;
 
--- a/xen/arch/x86/usercopy.c
+++ b/xen/arch/x86/usercopy.c
@@ -110,7 +110,7 @@ unsigned __copy_from_user_ll(void *to, c
 unsigned copy_to_user(void __user *to, const void *from, unsigned n)
 {
     if ( access_ok(to, n) )
-        n = __copy_to_user(to, from, n);
+        n = __copy_to_guest_pv(to, from, n);
     return n;
 }
 
@@ -168,7 +168,7 @@ unsigned clear_user(void __user *to, uns
 unsigned copy_from_user(void *to, const void __user *from, unsigned n)
 {
     if ( access_ok(from, n) )
-        n = __copy_from_user(to, from, n);
+        n = __copy_from_guest_pv(to, from, n);
     else
         memset(to, 0, n);
     return n;
--- a/xen/include/asm-x86/guest_access.h
+++ b/xen/include/asm-x86/guest_access.h
@@ -28,11 +28,11 @@
 #define __raw_copy_to_guest(dst, src, len)      \
     (is_hvm_vcpu(current) ?                     \
      copy_to_user_hvm((dst), (src), (len)) :    \
-     __copy_to_user((dst), (src), (len)))
+     __copy_to_guest_pv(dst, src, len))
 #define __raw_copy_from_guest(dst, src, len)    \
     (is_hvm_vcpu(current) ?                     \
      copy_from_user_hvm((dst), (src), (len)) :  \
-     __copy_from_user((dst), (src), (len)))
+     __copy_from_guest_pv(dst, src, len))
 #define __raw_clear_guest(dst,  len)            \
     (is_hvm_vcpu(current) ?                     \
      clear_user_hvm((dst), (len)) :             \
--- a/xen/include/asm-x86/uaccess.h
+++ b/xen/include/asm-x86/uaccess.h
@@ -196,21 +196,20 @@ do {
 #define get_guest_size get_unsafe_size
 
 /**
- * __copy_to_user: - Copy a block of data into user space, with less checking
- * @to:   Destination address, in user space.
- * @from: Source address, in kernel space.
+ * __copy_to_guest_pv: - Copy a block of data into guest space, with less
+ *                       checking
+ * @to:   Destination address, in guest space.
+ * @from: Source address, in hypervisor space.
  * @n:    Number of bytes to copy.
  *
- * Context: User context only.  This function may sleep.
- *
- * Copy data from kernel space to user space.  Caller must check
+ * Copy data from hypervisor space to guest space.  Caller must check
  * the specified block with access_ok() before calling this function.
  *
  * Returns number of bytes that could not be copied.
  * On success, this will be zero.
  */
 static always_inline unsigned long
-__copy_to_user(void __user *to, const void *from, unsigned long n)
+__copy_to_guest_pv(void __user *to, const void *from, unsigned long n)
 {
     if (__builtin_constant_p(n)) {
         unsigned long ret;
@@ -232,16 +231,16 @@ __copy_to_user(void __user *to, const vo
     }
     return __copy_to_user_ll(to, from, n);
 }
+#define copy_to_unsafe __copy_to_guest_pv
 
 /**
- * __copy_from_user: - Copy a block of data from user space, with less checking
- * @to:   Destination address, in kernel space.
- * @from: Source address, in user space.
+ * __copy_from_guest_pv: - Copy a block of data from guest space, with less
+ *                         checking
+ * @to:   Destination address, in hypervisor space.
+ * @from: Source address, in guest space.
  * @n:    Number of bytes to copy.
  *
- * Context: User context only.  This function may sleep.
- *
- * Copy data from user space to kernel space.  Caller must check
+ * Copy data from guest space to hypervisor space.  Caller must check
  * the specified block with access_ok() before calling this function.
  *
  * Returns number of bytes that could not be copied.
@@ -251,7 +250,7 @@ __copy_to_user(void __user *to, const vo
  * data to the requested size using zero bytes.
  */
 static always_inline unsigned long
-__copy_from_user(void *to, const void __user *from, unsigned long n)
+__copy_from_guest_pv(void *to, const void __user *from, unsigned long n)
 {
     if (__builtin_constant_p(n)) {
         unsigned long ret;
@@ -273,6 +272,7 @@ __copy_from_user(void *to, const void __
     }
     return __copy_from_user_ll(to, from, n);
 }
+#define copy_from_unsafe __copy_from_guest_pv
 
 /*
  * The exception table consists of pairs of addresses: the first is the



From xen-devel-bounces@lists.xenproject.org Wed Feb 17 08:20:43 2021
Subject-bearing headers retained below; repeated transport/routing headers trimmed.
Subject: [PATCH v2 3/8] x86/PV: harden guest memory accesses against
 speculative abuse
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>
References: <b466a19e-e547-3c7c-e39b-1a4c848a053a@suse.com>
Message-ID: <ad99a113-ecc5-2548-c91e-81fa4fe6154a@suse.com>
Date: Wed, 17 Feb 2021 09:20:37 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <b466a19e-e547-3c7c-e39b-1a4c848a053a@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

Inspired by
https://lore.kernel.org/lkml/f12e7d3cecf41b2c29734ea45a393be21d4a8058.1597848273.git.jpoimboe@redhat.com/
and prior work in that area of x86 Linux, suppress speculation with
guest specified pointer values by suitably masking the addresses to
non-canonical space in case they fall into Xen's virtual address range.

Introduce a new Kconfig control.

Note that it is necessary in such code to avoid using "m" kind operands:
If we didn't, there would be no guarantee that the register passed to
guest_access_mask_ptr is also the (base) one used for the memory access.

As a minor, unrelated change, get_unsafe_asm() has its unnecessary
"itype" parameter dropped, and the XOR on the fixup path is changed to
be a 32-bit one in all cases: this way we avoid pointless REX.W or
operand size overrides, as well as writes to partial registers.

Requested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
---
v2: Add comment to assembler macro.
---
The insn sequence chosen is certainly up for discussion; I've picked
this one despite the RCR because alternatives I could come up with,
like

	mov	$(HYPERVISOR_VIRT_END), %rax
	mov	$~0, %rdx
	mov	$0x7fffffffffffffff, %rcx
	cmp	%rax, %rdi
	cmovb	%rcx, %rdx
	and	%rdx, %rdi

weren't necessarily better: Either, as above, they are longer and
require a 3rd scratch register, or they also utilize the carry flag in
some similar way.
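For reference, the chosen CMP/RCR sequence can be modeled in C as follows.
The HYPERVISOR_VIRT_END value used here is a placeholder for illustration
only; the real constant comes from Xen's memory layout headers.

```c
#include <assert.h>
#include <stdint.h>

/* Placeholder layout constant, for illustration only. */
#define HYPERVISOR_VIRT_END 0xffff880000000000ULL

/*
 * Model of guest_access_mask_ptr: CMP sets CF exactly when
 * ptr >= HYPERVISOR_VIRT_END, and RCR $1 rotates CF into bit 63 of an
 * all-ones mask.  Pointers below HYPERVISOR_VIRT_END therefore get
 * their top bit cleared (turning any Xen-range pointer non-canonical),
 * while higher, guest-owned pointers pass through unchanged.
 */
static uint64_t mask_guest_ptr(uint64_t ptr)
{
    uint64_t cf = (HYPERVISOR_VIRT_END - 1) < ptr; /* borrow from CMP */
    uint64_t mask = (cf << 63) | (~0ULL >> 1);     /* RCR $1 on ~0   */
    return ptr & mask;                             /* final AND      */
}
```

This matches the comment in the asm macro, ptr &= ~0ull >> (ptr <
HYPERVISOR_VIRT_END), without any conditional branch.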
---
Judging from the comment ahead of put_unsafe_asm(), we might as well
stop telling gcc about the memory access there altogether, now that the
operand is no longer used in the assembly code.

--- a/xen/arch/x86/usercopy.c
+++ b/xen/arch/x86/usercopy.c
@@ -10,12 +10,19 @@
 #include <xen/sched.h>
 #include <asm/uaccess.h>
 
-unsigned __copy_to_user_ll(void __user *to, const void *from, unsigned n)
+#ifndef GUARD
+# define GUARD UA_KEEP
+#endif
+
+unsigned int copy_to_guest_ll(void __user *to, const void *from, unsigned int n)
 {
     unsigned dummy;
 
     stac();
     asm volatile (
+        GUARD(
+        "    guest_access_mask_ptr %[to], %q[scratch1], %q[scratch2]\n"
+        )
         "    cmp  $"STR(2*BYTES_PER_LONG-1)", %[cnt]\n"
         "    jbe  1f\n"
         "    mov  %k[to], %[cnt]\n"
@@ -42,6 +49,7 @@ unsigned __copy_to_user_ll(void __user *
         _ASM_EXTABLE(1b, 2b)
         : [cnt] "+c" (n), [to] "+D" (to), [from] "+S" (from),
           [aux] "=&r" (dummy)
+          GUARD(, [scratch1] "=&r" (dummy), [scratch2] "=&r" (dummy))
         : "[aux]" (n)
         : "memory" );
     clac();
@@ -49,12 +57,15 @@ unsigned __copy_to_user_ll(void __user *
     return n;
 }
 
-unsigned __copy_from_user_ll(void *to, const void __user *from, unsigned n)
+unsigned int copy_from_guest_ll(void *to, const void __user *from, unsigned int n)
 {
     unsigned dummy;
 
     stac();
     asm volatile (
+        GUARD(
+        "    guest_access_mask_ptr %[from], %q[scratch1], %q[scratch2]\n"
+        )
         "    cmp  $"STR(2*BYTES_PER_LONG-1)", %[cnt]\n"
         "    jbe  1f\n"
         "    mov  %k[to], %[cnt]\n"
@@ -87,6 +98,7 @@ unsigned __copy_from_user_ll(void *to, c
         _ASM_EXTABLE(1b, 6b)
         : [cnt] "+c" (n), [to] "+D" (to), [from] "+S" (from),
           [aux] "=&r" (dummy)
+          GUARD(, [scratch1] "=&r" (dummy), [scratch2] "=&r" (dummy))
         : "[aux]" (n)
         : "memory" );
     clac();
@@ -94,6 +106,8 @@ unsigned __copy_from_user_ll(void *to, c
     return n;
 }
 
+#if GUARD(1) + 0
+
 /**
  * copy_to_user: - Copy a block of data into user space.
  * @to:   Destination address, in user space.
@@ -128,8 +142,11 @@ unsigned clear_user(void __user *to, uns
 {
     if ( access_ok(to, n) )
     {
+        long dummy;
+
         stac();
         asm volatile (
+            "    guest_access_mask_ptr %[to], %[scratch1], %[scratch2]\n"
             "0:  rep stos"__OS"\n"
             "    mov  %[bytes], %[cnt]\n"
             "1:  rep stosb\n"
@@ -140,7 +157,8 @@ unsigned clear_user(void __user *to, uns
             ".previous\n"
             _ASM_EXTABLE(0b,3b)
             _ASM_EXTABLE(1b,2b)
-            : [cnt] "=&c" (n), [to] "+D" (to)
+            : [cnt] "=&c" (n), [to] "+D" (to), [scratch1] "=&r" (dummy),
+              [scratch2] "=&r" (dummy)
             : [bytes] "r" (n & (BYTES_PER_LONG - 1)),
               [longs] "0" (n / BYTES_PER_LONG), "a" (0) );
         clac();
@@ -174,6 +192,16 @@ unsigned copy_from_user(void *to, const
     return n;
 }
 
+# undef GUARD
+# define GUARD UA_DROP
+# define copy_to_guest_ll copy_to_unsafe_ll
+# define copy_from_guest_ll copy_from_unsafe_ll
+# undef __user
+# define __user
+# include __FILE__
+
+#endif /* GUARD(1) */
+
 /*
  * Local variables:
  * mode: C
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -458,6 +458,8 @@ UNLIKELY_START(g, create_bounce_frame_ba
         jmp   asm_domain_crash_synchronous  /* Does not return */
 __UNLIKELY_END(create_bounce_frame_bad_sp)
 
+        guest_access_mask_ptr %rsi, %rax, %rcx
+
 #define STORE_GUEST_STACK(reg, n) \
 0:      movq  %reg,(n)*8(%rsi); \
         _ASM_EXTABLE(0b, domain_crash_page_fault_ ## n ## x8)
--- a/xen/common/Kconfig
+++ b/xen/common/Kconfig
@@ -111,6 +111,24 @@ config SPECULATIVE_HARDEN_BRANCH
 
 	  If unsure, say Y.
 
+config SPECULATIVE_HARDEN_GUEST_ACCESS
+	bool "Speculative PV Guest Memory Access Hardening"
+	default y
+	depends on PV
+	help
+	  Contemporary processors may use speculative execution as a
+	  performance optimisation, but this can potentially be abused by an
+	  attacker to leak data via speculative sidechannels.
+
+	  One source of data leakage is via speculative accesses to hypervisor
+	  memory through guest controlled values used to access guest memory.
+
+	  When enabled, code paths accessing PV guest memory will have guest
+	  controlled addresses massaged such that memory accesses through them
+	  won't touch hypervisor address space.
+
+	  If unsure, say Y.
+
 endmenu
 
 config HYPFS
--- a/xen/include/asm-x86/asm-defns.h
+++ b/xen/include/asm-x86/asm-defns.h
@@ -56,3 +56,23 @@
 .macro INDIRECT_JMP arg:req
     INDIRECT_BRANCH jmp \arg
 .endm
+
+.macro guest_access_mask_ptr ptr:req, scratch1:req, scratch2:req
+#if defined(CONFIG_SPECULATIVE_HARDEN_GUEST_ACCESS)
+    /*
+     * Here we want
+     *
+     * ptr &= ~0ull >> (ptr < HYPERVISOR_VIRT_END);
+     *
+     * but guaranteed without any conditional branches (hence in assembly).
+     */
+    mov $(HYPERVISOR_VIRT_END - 1), \scratch1
+    mov $~0, \scratch2
+    cmp \ptr, \scratch1
+    rcr $1, \scratch2
+    and \scratch2, \ptr
+#elif defined(CONFIG_DEBUG) && defined(CONFIG_PV)
+    xor $~\@, \scratch1
+    xor $~\@, \scratch2
+#endif
+.endm
--- a/xen/include/asm-x86/uaccess.h
+++ b/xen/include/asm-x86/uaccess.h
@@ -12,13 +12,19 @@
 unsigned copy_to_user(void *to, const void *from, unsigned len);
 unsigned clear_user(void *to, unsigned len);
 unsigned copy_from_user(void *to, const void *from, unsigned len);
+
 /* Handles exceptions in both to and from, but doesn't do access_ok */
-unsigned __copy_to_user_ll(void __user*to, const void *from, unsigned n);
-unsigned __copy_from_user_ll(void *to, const void __user *from, unsigned n);
+unsigned int copy_to_guest_ll(void __user*to, const void *from, unsigned int n);
+unsigned int copy_from_guest_ll(void *to, const void __user *from, unsigned int n);
+unsigned int copy_to_unsafe_ll(void *to, const void *from, unsigned int n);
+unsigned int copy_from_unsafe_ll(void *to, const void *from, unsigned int n);
 
 extern long __get_user_bad(void);
 extern void __put_user_bad(void);
 
+#define UA_KEEP(args...) args
+#define UA_DROP(args...)
+
 /**
  * get_user: - Get a simple variable from user space.
  * @x:   Variable to store result.
@@ -77,7 +83,6 @@ extern void __put_user_bad(void);
  * On error, the variable @x is set to zero.
  */
 #define __get_guest(x, ptr) get_guest_nocheck(x, ptr, sizeof(*(ptr)))
-#define get_unsafe __get_guest
 
 /**
  * __put_guest: - Write a simple value into guest space, with less checking.
@@ -98,7 +103,13 @@ extern void __put_user_bad(void);
  */
 #define __put_guest(x, ptr) \
     put_guest_nocheck((__typeof__(*(ptr)))(x), ptr, sizeof(*(ptr)))
-#define put_unsafe __put_guest
+
+#define put_unsafe(x, ptr)						\
+({									\
+	int err_; 							\
+	put_unsafe_size(x, ptr, sizeof(*(ptr)), UA_DROP, err_, -EFAULT);\
+	err_;								\
+})
 
 #define put_guest_nocheck(x, ptr, size)					\
 ({									\
@@ -115,6 +126,13 @@ extern void __put_user_bad(void);
 			       : -EFAULT;				\
 })
 
+#define get_unsafe(x, ptr)						\
+({									\
+	int err_; 							\
+	get_unsafe_size(x, ptr, sizeof(*(ptr)), UA_DROP, err_, -EFAULT);\
+	err_;								\
+})
+
 #define get_guest_nocheck(x, ptr, size)					\
 ({									\
 	int err_; 							\
@@ -138,62 +156,87 @@ struct __large_struct { unsigned long bu
  * we do not write to any memory gcc knows about, so there are no
  * aliasing issues.
  */
-#define put_unsafe_asm(x, addr, err, itype, rtype, ltype, errret)	\
+#define put_unsafe_asm(x, addr, GUARD, err, itype, rtype, ltype, errret) \
 	stac();								\
 	__asm__ __volatile__(						\
-		"1:	mov"itype" %"rtype"1,%2\n"			\
+		GUARD(							\
+		"	guest_access_mask_ptr %[ptr], %[scr1], %[scr2]\n" \
+		)							\
+		"1:	mov"itype" %"rtype"[val], (%[ptr])\n"		\
 		"2:\n"							\
 		".section .fixup,\"ax\"\n"				\
-		"3:	mov %3,%0\n"					\
+		"3:	mov %[errno], %[ret]\n"				\
 		"	jmp 2b\n"					\
 		".previous\n"						\
 		_ASM_EXTABLE(1b, 3b)					\
-		: "=r"(err)						\
-		: ltype (x), "m"(__m(addr)), "i"(errret), "0"(err));	\
+		: [ret] "+r" (err), [ptr] "=&r" (dummy_)		\
+		  GUARD(, [scr1] "=&r" (dummy_), [scr2] "=&r" (dummy_))	\
+		: [val] ltype (x), "m" (__m(addr)),			\
+		  "[ptr]" (addr), [errno] "i" (errret));		\
 	clac()
 
-#define get_unsafe_asm(x, addr, err, itype, rtype, ltype, errret)	\
+#define get_unsafe_asm(x, addr, GUARD, err, rtype, ltype, errret)	\
 	stac();								\
 	__asm__ __volatile__(						\
-		"1:	mov"itype" %2,%"rtype"1\n"			\
+		GUARD(							\
+		"	guest_access_mask_ptr %[ptr], %[scr1], %[scr2]\n" \
+		)							\
+		"1:	mov (%[ptr]), %"rtype"[val]\n"			\
 		"2:\n"							\
 		".section .fixup,\"ax\"\n"				\
-		"3:	mov %3,%0\n"					\
-		"	xor"itype" %"rtype"1,%"rtype"1\n"		\
+		"3:	mov %[errno], %[ret]\n"				\
+		"	xor %k[val], %k[val]\n"				\
 		"	jmp 2b\n"					\
 		".previous\n"						\
 		_ASM_EXTABLE(1b, 3b)					\
-		: "=r"(err), ltype (x)					\
-		: "m"(__m(addr)), "i"(errret), "0"(err));		\
+		: [ret] "+r" (err), [val] ltype (x),			\
+		  [ptr] "=&r" (dummy_)					\
+		  GUARD(, [scr1] "=&r" (dummy_), [scr2] "=&r" (dummy_))	\
+		: "m" (__m(addr)), "[ptr]" (addr),			\
+		  [errno] "i" (errret));				\
 	clac()
 
-#define put_unsafe_size(x, ptr, size, retval, errret)                      \
+#define put_unsafe_size(x, ptr, size, grd, retval, errret)                 \
 do {                                                                       \
     retval = 0;                                                            \
     switch ( size )                                                        \
     {                                                                      \
-    case 1: put_unsafe_asm(x, ptr, retval, "b", "b", "iq", errret); break; \
-    case 2: put_unsafe_asm(x, ptr, retval, "w", "w", "ir", errret); break; \
-    case 4: put_unsafe_asm(x, ptr, retval, "l", "k", "ir", errret); break; \
-    case 8: put_unsafe_asm(x, ptr, retval, "q",  "", "ir", errret); break; \
+    long dummy_;                                                           \
+    case 1:                                                                \
+        put_unsafe_asm(x, ptr, grd, retval, "b", "b", "iq", errret);       \
+        break;                                                             \
+    case 2:                                                                \
+        put_unsafe_asm(x, ptr, grd, retval, "w", "w", "ir", errret);       \
+        break;                                                             \
+    case 4:                                                                \
+        put_unsafe_asm(x, ptr, grd, retval, "l", "k", "ir", errret);       \
+        break;                                                             \
+    case 8:                                                                \
+        put_unsafe_asm(x, ptr, grd, retval, "q",  "", "ir", errret);       \
+        break;                                                             \
     default: __put_user_bad();                                             \
     }                                                                      \
 } while ( false )
-#define put_guest_size put_unsafe_size
 
-#define get_unsafe_size(x, ptr, size, retval, errret)                      \
+#define put_guest_size(x, ptr, size, retval, errret) \
+    put_unsafe_size(x, ptr, size, UA_KEEP, retval, errret)
+
+#define get_unsafe_size(x, ptr, size, grd, retval, errret)                 \
 do {                                                                       \
     retval = 0;                                                            \
     switch ( size )                                                        \
     {                                                                      \
-    case 1: get_unsafe_asm(x, ptr, retval, "b", "b", "=q", errret); break; \
-    case 2: get_unsafe_asm(x, ptr, retval, "w", "w", "=r", errret); break; \
-    case 4: get_unsafe_asm(x, ptr, retval, "l", "k", "=r", errret); break; \
-    case 8: get_unsafe_asm(x, ptr, retval, "q",  "", "=r", errret); break; \
+    long dummy_;                                                           \
+    case 1: get_unsafe_asm(x, ptr, grd, retval, "b", "=q", errret); break; \
+    case 2: get_unsafe_asm(x, ptr, grd, retval, "w", "=r", errret); break; \
+    case 4: get_unsafe_asm(x, ptr, grd, retval, "k", "=r", errret); break; \
+    case 8: get_unsafe_asm(x, ptr, grd, retval,  "", "=r", errret); break; \
     default: __get_user_bad();                                             \
     }                                                                      \
 } while ( false )
-#define get_guest_size get_unsafe_size
+
+#define get_guest_size(x, ptr, size, retval, errret)                       \
+    get_unsafe_size(x, ptr, size, UA_KEEP, retval, errret)
 
 /**
  * __copy_to_guest_pv: - Copy a block of data into guest space, with less
@@ -229,9 +272,8 @@ __copy_to_guest_pv(void __user *to, cons
             return ret;
         }
     }
-    return __copy_to_user_ll(to, from, n);
+    return copy_to_guest_ll(to, from, n);
 }
-#define copy_to_unsafe __copy_to_guest_pv
 
 /**
  * __copy_from_guest_pv: - Copy a block of data from guest space, with less
@@ -270,9 +312,87 @@ __copy_from_guest_pv(void *to, const voi
             return ret;
         }
     }
-    return __copy_from_user_ll(to, from, n);
+    return copy_from_guest_ll(to, from, n);
+}
+
+/**
+ * copy_to_unsafe: - Copy a block of data to unsafe space, with exception
+ *                   checking
+ * @to:   Unsafe destination address.
+ * @from: Safe source address, in hypervisor space.
+ * @n:    Number of bytes to copy.
+ *
+ * Copy data from hypervisor space to a potentially unmapped area.
+ *
+ * Returns number of bytes that could not be copied.
+ * On success, this will be zero.
+ */
+static always_inline unsigned int
+copy_to_unsafe(void __user *to, const void *from, unsigned int n)
+{
+    if ( __builtin_constant_p(n) )
+    {
+        unsigned long ret;
+
+        switch ( n )
+        {
+        case 1:
+            put_unsafe_size(*(const uint8_t *)from, to, 1, UA_DROP, ret, 1);
+            return ret;
+        case 2:
+            put_unsafe_size(*(const uint16_t *)from, to, 2, UA_DROP, ret, 2);
+            return ret;
+        case 4:
+            put_unsafe_size(*(const uint32_t *)from, to, 4, UA_DROP, ret, 4);
+            return ret;
+        case 8:
+            put_unsafe_size(*(const uint64_t *)from, to, 8, UA_DROP, ret, 8);
+            return ret;
+        }
+    }
+
+    return copy_to_unsafe_ll(to, from, n);
+}
+
+/**
+ * copy_from_unsafe: - Copy a block of data from unsafe space, with exception
+ *                     checking
+ * @to:   Safe destination address, in hypervisor space.
+ * @from: Unsafe source address.
+ * @n:    Number of bytes to copy.
+ *
+ * Copy data from a potentially unmapped area to hypervisor space.
+ *
+ * Returns number of bytes that could not be copied.
+ * On success, this will be zero.
+ *
+ * If some data could not be copied, this function will pad the copied
+ * data to the requested size using zero bytes.
+ */
+static always_inline unsigned int
+copy_from_unsafe(void *to, const void __user *from, unsigned int n)
+{
+    if ( __builtin_constant_p(n) )
+    {
+        unsigned long ret;
+
+        switch ( n )
+        {
+        case 1:
+            get_unsafe_size(*(uint8_t *)to, from, 1, UA_DROP, ret, 1);
+            return ret;
+        case 2:
+            get_unsafe_size(*(uint16_t *)to, from, 2, UA_DROP, ret, 2);
+            return ret;
+        case 4:
+            get_unsafe_size(*(uint32_t *)to, from, 4, UA_DROP, ret, 4);
+            return ret;
+        case 8:
+            get_unsafe_size(*(uint64_t *)to, from, 8, UA_DROP, ret, 8);
+            return ret;
+        }
+    }
+
+    return copy_from_unsafe_ll(to, from, n);
 }
-#define copy_from_unsafe __copy_from_guest_pv
 
 /*
  * The exception table consists of pairs of addresses: the first is the



From xen-devel-bounces@lists.xenproject.org Wed Feb 17 08:21:10 2021
Subject: [PATCH v2 4/8] x86: rename {get,put}_user() to {get,put}_guest()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>
References: <b466a19e-e547-3c7c-e39b-1a4c848a053a@suse.com>
Message-ID: <369ae5ec-ee2a-78d4-438f-b18d04c81c4c@suse.com>
Date: Wed, 17 Feb 2021 09:21:05 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <b466a19e-e547-3c7c-e39b-1a4c848a053a@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Bring them (back) in line with __{get,put}_guest().

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1649,19 +1649,19 @@ static void load_segments(struct vcpu *n
 
             if ( !ring_1(regs) )
             {
-                ret  = put_user(regs->ss,       esp-1);
-                ret |= put_user(regs->esp,      esp-2);
+                ret  = put_guest(regs->ss,  esp - 1);
+                ret |= put_guest(regs->esp, esp - 2);
                 esp -= 2;
             }
 
             if ( ret |
-                 put_user(rflags,              esp-1) |
-                 put_user(cs_and_mask,         esp-2) |
-                 put_user(regs->eip,           esp-3) |
-                 put_user(uregs->gs,           esp-4) |
-                 put_user(uregs->fs,           esp-5) |
-                 put_user(uregs->es,           esp-6) |
-                 put_user(uregs->ds,           esp-7) )
+                 put_guest(rflags,      esp - 1) |
+                 put_guest(cs_and_mask, esp - 2) |
+                 put_guest(regs->eip,   esp - 3) |
+                 put_guest(uregs->gs,   esp - 4) |
+                 put_guest(uregs->fs,   esp - 5) |
+                 put_guest(uregs->es,   esp - 6) |
+                 put_guest(uregs->ds,   esp - 7) )
             {
                 gprintk(XENLOG_ERR,
                         "error while creating compat failsafe callback frame\n");
@@ -1690,17 +1690,17 @@ static void load_segments(struct vcpu *n
         cs_and_mask = (unsigned long)regs->cs |
             ((unsigned long)vcpu_info(n, evtchn_upcall_mask) << 32);
 
-        if ( put_user(regs->ss,            rsp- 1) |
-             put_user(regs->rsp,           rsp- 2) |
-             put_user(rflags,              rsp- 3) |
-             put_user(cs_and_mask,         rsp- 4) |
-             put_user(regs->rip,           rsp- 5) |
-             put_user(uregs->gs,           rsp- 6) |
-             put_user(uregs->fs,           rsp- 7) |
-             put_user(uregs->es,           rsp- 8) |
-             put_user(uregs->ds,           rsp- 9) |
-             put_user(regs->r11,           rsp-10) |
-             put_user(regs->rcx,           rsp-11) )
+        if ( put_guest(regs->ss,    rsp -  1) |
+             put_guest(regs->rsp,   rsp -  2) |
+             put_guest(rflags,      rsp -  3) |
+             put_guest(cs_and_mask, rsp -  4) |
+             put_guest(regs->rip,   rsp -  5) |
+             put_guest(uregs->gs,   rsp -  6) |
+             put_guest(uregs->fs,   rsp -  7) |
+             put_guest(uregs->es,   rsp -  8) |
+             put_guest(uregs->ds,   rsp -  9) |
+             put_guest(regs->r11,   rsp - 10) |
+             put_guest(regs->rcx,   rsp - 11) )
         {
             gprintk(XENLOG_ERR,
                     "error while creating failsafe callback frame\n");
--- a/xen/include/asm-x86/uaccess.h
+++ b/xen/include/asm-x86/uaccess.h
@@ -26,14 +26,12 @@ extern void __put_user_bad(void);
 #define UA_DROP(args...)
 
 /**
- * get_user: - Get a simple variable from user space.
+ * get_guest: - Get a simple variable from guest space.
  * @x:   Variable to store result.
- * @ptr: Source address, in user space.
- *
- * Context: User context only.  This function may sleep.
+ * @ptr: Source address, in guest space.
  *
- * This macro copies a single simple variable from user space to kernel
- * space.  It supports simple types like char and int, but not larger
+ * This macro loads a single simple variable from guest space.
+ * It supports simple types like char and int, but not larger
  * data types like structures or arrays.
  *
  * @ptr must have pointer-to-simple-variable type, and the result of
@@ -42,18 +40,15 @@ extern void __put_user_bad(void);
  * Returns zero on success, or -EFAULT on error.
  * On error, the variable @x is set to zero.
  */
-#define get_user(x,ptr)	\
-  __get_user_check((x),(ptr),sizeof(*(ptr)))
+#define get_guest(x, ptr) get_guest_check(x, ptr, sizeof(*(ptr)))
 
 /**
- * put_user: - Write a simple value into user space.
- * @x:   Value to copy to user space.
- * @ptr: Destination address, in user space.
- *
- * Context: User context only.  This function may sleep.
+ * put_guest: - Write a simple value into guest space.
+ * @x:   Value to store in guest space.
+ * @ptr: Destination address, in guest space.
  *
- * This macro copies a single simple value from kernel space to user
- * space.  It supports simple types like char and int, but not larger
+ * This macro stores a single simple value into guest space.
+ * It supports simple types like char and int, but not larger
  * data types like structures or arrays.
  *
  * @ptr must have pointer-to-simple-variable type, and @x must be assignable
@@ -61,8 +56,8 @@ extern void __put_user_bad(void);
  *
  * Returns zero on success, or -EFAULT on error.
  */
-#define put_user(x,ptr)							\
-  __put_user_check((__typeof__(*(ptr)))(x),(ptr),sizeof(*(ptr)))
+#define put_guest(x, ptr) \
+    put_guest_check((__typeof__(*(ptr)))(x), ptr, sizeof(*(ptr)))
 
 /**
  * __get_guest: - Get a simple variable from guest space, with less checking.
@@ -118,7 +113,7 @@ extern void __put_user_bad(void);
 	err_;								\
 })
 
-#define __put_user_check(x, ptr, size)					\
+#define put_guest_check(x, ptr, size)					\
 ({									\
 	__typeof__(*(ptr)) __user *ptr_ = (ptr);			\
 	__typeof__(size) size_ = (size);				\
@@ -140,7 +135,7 @@ extern void __put_user_bad(void);
 	err_;								\
 })
 
-#define __get_user_check(x, ptr, size)					\
+#define get_guest_check(x, ptr, size)					\
 ({									\
 	__typeof__(*(ptr)) __user *ptr_ = (ptr);			\
 	__typeof__(size) size_ = (size);				\



From xen-devel-bounces@lists.xenproject.org Wed Feb 17 08:21:40 2021
Subject: [PATCH v2 5/8] x86/gdbsx: convert "user" to "guest" accesses
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>
References: <b466a19e-e547-3c7c-e39b-1a4c848a053a@suse.com>
Message-ID: <d1a1b9eb-33b4-4d07-9465-189699f88323@suse.com>
Date: Wed, 17 Feb 2021 09:21:36 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <b466a19e-e547-3c7c-e39b-1a4c848a053a@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

By using copy_{from,to}_user(), this code assumed it would only ever be
called on behalf of PV guests. Use copy_{from,to}_guest() instead,
turning the incoming structure field into a guest handle (the field
should really have been one in the first place). Also stop transforming
the debuggee address into a pointer.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Re-base (bug fix side effect was taken care of already).

--- a/xen/arch/x86/debug.c
+++ b/xen/arch/x86/debug.c
@@ -108,12 +108,11 @@ dbg_pv_va2mfn(dbgva_t vaddr, struct doma
 }
 
 /* Returns: number of bytes remaining to be copied */
-static unsigned int dbg_rw_guest_mem(struct domain *dp, void * __user gaddr,
-                                     void * __user buf, unsigned int len,
-                                     bool toaddr, uint64_t pgd3)
+static unsigned int dbg_rw_guest_mem(struct domain *dp, unsigned long addr,
+                                     XEN_GUEST_HANDLE_PARAM(void) buf,
+                                     unsigned int len, bool toaddr,
+                                     uint64_t pgd3)
 {
-    unsigned long addr = (unsigned long)gaddr;
-
     while ( len > 0 )
     {
         char *va;
@@ -134,20 +133,18 @@ static unsigned int dbg_rw_guest_mem(str
 
         if ( toaddr )
         {
-            copy_from_user(va, buf, pagecnt);    /* va = buf */
+            copy_from_guest(va, buf, pagecnt);
             paging_mark_dirty(dp, mfn);
         }
         else
-        {
-            copy_to_user(buf, va, pagecnt);    /* buf = va */
-        }
+            copy_to_guest(buf, va, pagecnt);
 
         unmap_domain_page(va);
         if ( !gfn_eq(gfn, INVALID_GFN) )
             put_gfn(dp, gfn_x(gfn));
 
         addr += pagecnt;
-        buf += pagecnt;
+        guest_handle_add_offset(buf, pagecnt);
         len -= pagecnt;
     }
 
@@ -161,7 +158,7 @@ static unsigned int dbg_rw_guest_mem(str
  * pgd3: value of init_mm.pgd[3] in guest. see above.
  * Returns: number of bytes remaining to be copied.
  */
-unsigned int dbg_rw_mem(void * __user addr, void * __user buf,
+unsigned int dbg_rw_mem(unsigned long gva, XEN_GUEST_HANDLE_PARAM(void) buf,
                         unsigned int len, domid_t domid, bool toaddr,
                         uint64_t pgd3)
 {
@@ -170,7 +167,7 @@ unsigned int dbg_rw_mem(void * __user ad
     if ( d )
     {
         if ( !d->is_dying )
-            len = dbg_rw_guest_mem(d, addr, buf, len, toaddr, pgd3);
+            len = dbg_rw_guest_mem(d, gva, buf, len, toaddr, pgd3);
         rcu_unlock_domain(d);
     }
 
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -40,10 +40,8 @@
 #ifdef CONFIG_GDBSX
 static int gdbsx_guest_mem_io(domid_t domid, struct xen_domctl_gdbsx_memio *iop)
 {
-    void * __user gva = (void *)iop->gva, * __user uva = (void *)iop->uva;
-
-    iop->remain = dbg_rw_mem(gva, uva, iop->len, domid,
-                             !!iop->gwr, iop->pgd3val);
+    iop->remain = dbg_rw_mem(iop->gva, guest_handle_from_ptr(iop->uva, void),
+                             iop->len, domid, iop->gwr, iop->pgd3val);
 
     return iop->remain ? -EFAULT : 0;
 }
--- a/xen/include/asm-x86/debugger.h
+++ b/xen/include/asm-x86/debugger.h
@@ -93,9 +93,9 @@ static inline bool debugger_trap_entry(
 #endif
 
 #ifdef CONFIG_GDBSX
-unsigned int dbg_rw_mem(void * __user addr, void * __user buf,
+unsigned int dbg_rw_mem(unsigned long gva, XEN_GUEST_HANDLE_PARAM(void) buf,
                         unsigned int len, domid_t domid, bool toaddr,
-                        uint64_t pgd3);
+                        unsigned long pgd3);
 #endif
 
 #endif /* __X86_DEBUGGER_H__ */



From xen-devel-bounces@lists.xenproject.org Wed Feb 17 08:22:38 2021
Subject: [PATCH v2 6/8] x86: rename copy_{from,to}_user() to
 copy_{from,to}_guest_pv()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>
References: <b466a19e-e547-3c7c-e39b-1a4c848a053a@suse.com>
Message-ID: <5104a32f-e2a1-06a5-a637-9702e4562b81@suse.com>
Date: Wed, 17 Feb 2021 09:22:32 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <b466a19e-e547-3c7c-e39b-1a4c848a053a@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Bring them (back) in line with __copy_{from,to}_guest_pv(). Since it
falls in the same group, also convert clear_user(). Instead of adjusting
__raw_clear_guest(), drop it - it's unused and would require a non-
checking __clear_guest_pv() which we don't have.

Add previously missing __user at some call sites and in the function
declarations.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/pv/emul-inv-op.c
+++ b/xen/arch/x86/pv/emul-inv-op.c
@@ -33,7 +33,7 @@ static int emulate_forced_invalid_op(str
     eip = regs->rip;
 
     /* Check for forced emulation signature: ud2 ; .ascii "xen". */
-    if ( (rc = copy_from_user(sig, (char *)eip, sizeof(sig))) != 0 )
+    if ( (rc = copy_from_guest_pv(sig, (char __user *)eip, sizeof(sig))) != 0 )
     {
         pv_inject_page_fault(0, eip + sizeof(sig) - rc);
         return EXCRET_fault_fixed;
@@ -43,7 +43,8 @@ static int emulate_forced_invalid_op(str
     eip += sizeof(sig);
 
     /* We only emulate CPUID. */
-    if ( ( rc = copy_from_user(instr, (char *)eip, sizeof(instr))) != 0 )
+    if ( (rc = copy_from_guest_pv(instr, (char __user *)eip,
+                                  sizeof(instr))) != 0 )
     {
         pv_inject_page_fault(0, eip + sizeof(instr) - rc);
         return EXCRET_fault_fixed;
--- a/xen/arch/x86/pv/iret.c
+++ b/xen/arch/x86/pv/iret.c
@@ -54,8 +54,8 @@ unsigned long do_iret(void)
     struct iret_context iret_saved;
     struct vcpu *v = current;
 
-    if ( unlikely(copy_from_user(&iret_saved, (void *)regs->rsp,
-                                 sizeof(iret_saved))) )
+    if ( unlikely(copy_from_guest_pv(&iret_saved, (void __user *)regs->rsp,
+                                     sizeof(iret_saved))) )
     {
         gprintk(XENLOG_ERR,
                 "Fault while reading IRET context from guest stack\n");
--- a/xen/arch/x86/pv/ro-page-fault.c
+++ b/xen/arch/x86/pv/ro-page-fault.c
@@ -90,7 +90,8 @@ static int ptwr_emulated_update(unsigned
 
         /* Align address; read full word. */
         addr &= ~(sizeof(full) - 1);
-        if ( (rc = copy_from_user(&full, (void *)addr, sizeof(full))) != 0 )
+        if ( (rc = copy_from_guest_pv(&full, (void __user *)addr,
+                                      sizeof(full))) != 0 )
         {
             x86_emul_pagefault(0, /* Read fault. */
                                addr + sizeof(full) - rc,
--- a/xen/arch/x86/usercopy.c
+++ b/xen/arch/x86/usercopy.c
@@ -109,19 +109,17 @@ unsigned int copy_from_guest_ll(void *to
 #if GUARD(1) + 0
 
 /**
- * copy_to_user: - Copy a block of data into user space.
- * @to:   Destination address, in user space.
- * @from: Source address, in kernel space.
+ * copy_to_guest_pv: - Copy a block of data into guest space.
+ * @to:   Destination address, in guest space.
+ * @from: Source address, in hypervisor space.
  * @n:    Number of bytes to copy.
  *
- * Context: User context only.  This function may sleep.
- *
- * Copy data from kernel space to user space.
+ * Copy data from hypervisor space to guest space.
  *
  * Returns number of bytes that could not be copied.
  * On success, this will be zero.
  */
-unsigned copy_to_user(void __user *to, const void *from, unsigned n)
+unsigned int copy_to_guest_pv(void __user *to, const void *from, unsigned int n)
 {
     if ( access_ok(to, n) )
         n = __copy_to_guest_pv(to, from, n);
@@ -129,16 +127,16 @@ unsigned copy_to_user(void __user *to, c
 }
 
 /**
- * clear_user: - Zero a block of memory in user space.
- * @to:   Destination address, in user space.
+ * clear_guest_pv: - Zero a block of memory in guest space.
+ * @to:   Destination address, in guest space.
  * @n:    Number of bytes to zero.
  *
- * Zero a block of memory in user space.
+ * Zero a block of memory in guest space.
  *
  * Returns number of bytes that could not be cleared.
  * On success, this will be zero.
  */
-unsigned clear_user(void __user *to, unsigned n)
+unsigned int clear_guest_pv(void __user *to, unsigned int n)
 {
     if ( access_ok(to, n) )
     {
@@ -168,14 +166,12 @@ unsigned clear_user(void __user *to, uns
 }
 
 /**
- * copy_from_user: - Copy a block of data from user space.
- * @to:   Destination address, in kernel space.
- * @from: Source address, in user space.
+ * copy_from_guest_pv: - Copy a block of data from guest space.
+ * @to:   Destination address, in hypervisor space.
+ * @from: Source address, in guest space.
  * @n:    Number of bytes to copy.
  *
- * Context: User context only.  This function may sleep.
- *
- * Copy data from user space to kernel space.
+ * Copy data from guest space to hypervisor space.
  *
  * Returns number of bytes that could not be copied.
  * On success, this will be zero.
@@ -183,7 +179,8 @@ unsigned clear_user(void __user *to, uns
  * If some data could not be copied, this function will pad the copied
  * data to the requested size using zero bytes.
  */
-unsigned copy_from_user(void *to, const void __user *from, unsigned n)
+unsigned int copy_from_guest_pv(void *to, const void __user *from,
+                                unsigned int n)
 {
     if ( access_ok(from, n) )
         n = __copy_from_guest_pv(to, from, n);
--- a/xen/include/asm-x86/guest_access.h
+++ b/xen/include/asm-x86/guest_access.h
@@ -16,15 +16,15 @@
 #define raw_copy_to_guest(dst, src, len)        \
     (is_hvm_vcpu(current) ?                     \
      copy_to_user_hvm((dst), (src), (len)) :    \
-     copy_to_user((dst), (src), (len)))
+     copy_to_guest_pv(dst, src, len))
 #define raw_copy_from_guest(dst, src, len)      \
     (is_hvm_vcpu(current) ?                     \
      copy_from_user_hvm((dst), (src), (len)) :  \
-     copy_from_user((dst), (src), (len)))
+     copy_from_guest_pv(dst, src, len))
 #define raw_clear_guest(dst,  len)              \
     (is_hvm_vcpu(current) ?                     \
      clear_user_hvm((dst), (len)) :             \
-     clear_user((dst), (len)))
+     clear_guest_pv(dst, len))
 #define __raw_copy_to_guest(dst, src, len)      \
     (is_hvm_vcpu(current) ?                     \
      copy_to_user_hvm((dst), (src), (len)) :    \
@@ -33,10 +33,6 @@
     (is_hvm_vcpu(current) ?                     \
      copy_from_user_hvm((dst), (src), (len)) :  \
      __copy_from_guest_pv(dst, src, len))
-#define __raw_clear_guest(dst,  len)            \
-    (is_hvm_vcpu(current) ?                     \
-     clear_user_hvm((dst), (len)) :             \
-     clear_user((dst), (len)))
 
 /*
  * Pre-validate a guest handle.
--- a/xen/include/asm-x86/uaccess.h
+++ b/xen/include/asm-x86/uaccess.h
@@ -9,9 +9,11 @@
 
 #include <asm/x86_64/uaccess.h>
 
-unsigned copy_to_user(void *to, const void *from, unsigned len);
-unsigned clear_user(void *to, unsigned len);
-unsigned copy_from_user(void *to, const void *from, unsigned len);
+unsigned int copy_to_guest_pv(void __user *to, const void *from,
+                              unsigned int len);
+unsigned int clear_guest_pv(void __user *to, unsigned int len);
+unsigned int copy_from_guest_pv(void *to, const void __user *from,
+                                unsigned int len);
 
 /* Handles exceptions in both to and from, but doesn't do access_ok */
 unsigned int copy_to_guest_ll(void __user*to, const void *from, unsigned int n);



From xen-devel-bounces@lists.xenproject.org Wed Feb 17 08:23:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 08:23:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86157.161441 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCI6x-00030v-PB; Wed, 17 Feb 2021 08:23:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86157.161441; Wed, 17 Feb 2021 08:23:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCI6x-00030o-MB; Wed, 17 Feb 2021 08:23:03 +0000
Received: by outflank-mailman (input) for mailman id 86157;
 Wed, 17 Feb 2021 08:23:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XFXw=HT=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lCI6w-00030g-DB
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 08:23:02 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4f2ca825-4c10-4e89-ac6c-415e0ff5ded6;
 Wed, 17 Feb 2021 08:23:01 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 35FB9B926;
 Wed, 17 Feb 2021 08:23:00 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4f2ca825-4c10-4e89-ac6c-415e0ff5ded6
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613550180; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=q8/6WRQIWFwJzkQJIPD65ozYOHWLMe51rYFHqRcSohA=;
	b=ZUi7cArqJrGBozceb78RVr3NSFRmzNXE1MiWZN/tiPbvDbJsMSawfDawf6oIOIoacT4LXd
	E3ZJcL5xvWEiBTO0Hr+lQ6s82NOe44lDL/2osqH3yIpDg3cwGDXaNgA+3aaJ9E45TrtThL
	iIVtIZn+iB0/v3leNVtwXRHE0+X0CiQ=
Subject: [PATCH v2 7/8] x86: move stac()/clac() from {get,put}_unsafe_asm()
 ...
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>
References: <b466a19e-e547-3c7c-e39b-1a4c848a053a@suse.com>
Message-ID: <c817e9be-bcdf-3c38-2298-0a3a58773964@suse.com>
Date: Wed, 17 Feb 2021 09:22:59 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <b466a19e-e547-3c7c-e39b-1a4c848a053a@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

... to {get,put}_unsafe_size(). There's no need to have the macros
expanded once per case label in the latter. This also makes the former
well-formed single statements again. No change in generated code.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/include/asm-x86/uaccess.h
+++ b/xen/include/asm-x86/uaccess.h
@@ -154,7 +154,6 @@ struct __large_struct { unsigned long bu
  * aliasing issues.
  */
 #define put_unsafe_asm(x, addr, GUARD, err, itype, rtype, ltype, errret) \
-	stac();								\
 	__asm__ __volatile__(						\
 		GUARD(							\
 		"	guest_access_mask_ptr %[ptr], %[scr1], %[scr2]\n" \
@@ -169,11 +168,9 @@ struct __large_struct { unsigned long bu
 		: [ret] "+r" (err), [ptr] "=&r" (dummy_)		\
 		  GUARD(, [scr1] "=&r" (dummy_), [scr2] "=&r" (dummy_))	\
 		: [val] ltype (x), "m" (__m(addr)),			\
-		  "[ptr]" (addr), [errno] "i" (errret));		\
-	clac()
+		  "[ptr]" (addr), [errno] "i" (errret))
 
 #define get_unsafe_asm(x, addr, GUARD, err, rtype, ltype, errret)	\
-	stac();								\
 	__asm__ __volatile__(						\
 		GUARD(							\
 		"	guest_access_mask_ptr %[ptr], %[scr1], %[scr2]\n" \
@@ -190,12 +187,12 @@ struct __large_struct { unsigned long bu
 		  [ptr] "=&r" (dummy_)					\
 		  GUARD(, [scr1] "=&r" (dummy_), [scr2] "=&r" (dummy_))	\
 		: "m" (__m(addr)), "[ptr]" (addr),			\
-		  [errno] "i" (errret));				\
-	clac()
+		  [errno] "i" (errret))
 
 #define put_unsafe_size(x, ptr, size, grd, retval, errret)                 \
 do {                                                                       \
     retval = 0;                                                            \
+    stac();                                                                \
     switch ( size )                                                        \
     {                                                                      \
     long dummy_;                                                           \
@@ -213,6 +210,7 @@ do {
         break;                                                             \
     default: __put_user_bad();                                             \
     }                                                                      \
+    clac();                                                                \
 } while ( false )
 
 #define put_guest_size(x, ptr, size, retval, errret) \
@@ -221,6 +219,7 @@ do {
 #define get_unsafe_size(x, ptr, size, grd, retval, errret)                 \
 do {                                                                       \
     retval = 0;                                                            \
+    stac();                                                                \
     switch ( size )                                                        \
     {                                                                      \
     long dummy_;                                                           \
@@ -230,6 +229,7 @@ do {
     case 8: get_unsafe_asm(x, ptr, grd, retval,  "", "=r", errret); break; \
     default: __get_user_bad();                                             \
     }                                                                      \
+    clac();                                                                \
 } while ( false )
 
 #define get_guest_size(x, ptr, size, retval, errret)                       \


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 08:23:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 08:23:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86161.161453 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCI7V-00037u-3B; Wed, 17 Feb 2021 08:23:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86161.161453; Wed, 17 Feb 2021 08:23:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCI7U-00037n-WC; Wed, 17 Feb 2021 08:23:36 +0000
Received: by outflank-mailman (input) for mailman id 86161;
 Wed, 17 Feb 2021 08:23:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XFXw=HT=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lCI7T-00037d-Sc
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 08:23:35 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 23ef8b20-fa74-49e5-bbfa-948cd9c0f6af;
 Wed, 17 Feb 2021 08:23:35 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 726B3B926;
 Wed, 17 Feb 2021 08:23:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 23ef8b20-fa74-49e5-bbfa-948cd9c0f6af
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613550214; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=e67EQSav/B2t339nJdJXrH2VD3Eb4kam/ZfJ4x2oFeo=;
	b=DzUrPXTx9dh4UaCf0KdD/Xf2NdXoreX1z0q9cXNSYlhaT8AE/rbrEMsLr9zxlchRMblcz0
	Azj2XdComQn+sFem/DcXTFLI6h7M483P9EVDXZzDY1RuknXQjld7wRltXiNtXfY90amog7
	GEAH2FEp3t2FNi3eeJGqEIRt2V4K9ao=
Subject: [PATCH v2 8/8] x86/PV: use get_unsafe() instead of copy_from_unsafe()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>
References: <b466a19e-e547-3c7c-e39b-1a4c848a053a@suse.com>
Message-ID: <0a59ae2f-448e-610d-e8a2-a7c3f9f3918f@suse.com>
Date: Wed, 17 Feb 2021 09:23:33 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <b466a19e-e547-3c7c-e39b-1a4c848a053a@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

The former expands to a single (memory accessing) insn, which the latter
does not guarantee. Yet we'd prefer to read consistent PTEs rather than
risking a split read racing with an update done elsewhere.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/pv/mm.c
+++ b/xen/arch/x86/pv/mm.c
@@ -41,9 +41,7 @@ l1_pgentry_t *map_guest_l1e(unsigned lon
         return NULL;
 
     /* Find this l1e and its enclosing l1mfn in the linear map. */
-    if ( copy_from_unsafe(&l2e,
-                          &__linear_l2_table[l2_linear_offset(linear)],
-                          sizeof(l2_pgentry_t)) )
+    if ( get_unsafe(l2e, &__linear_l2_table[l2_linear_offset(linear)]) )
         return NULL;
 
     /* Check flags that it will be safe to read the l1e. */
--- a/xen/arch/x86/pv/mm.h
+++ b/xen/arch/x86/pv/mm.h
@@ -22,9 +22,7 @@ static inline l1_pgentry_t guest_get_eff
         toggle_guest_pt(curr);
 
     if ( unlikely(!__addr_ok(linear)) ||
-         copy_from_unsafe(&l1e,
-                          &__linear_l1_table[l1_linear_offset(linear)],
-                          sizeof(l1_pgentry_t)) )
+         get_unsafe(l1e, &__linear_l1_table[l1_linear_offset(linear)]) )
         l1e = l1e_empty();
 
     if ( user_mode )



From xen-devel-bounces@lists.xenproject.org Wed Feb 17 08:27:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 08:27:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86165.161464 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCIBa-0003Lv-KN; Wed, 17 Feb 2021 08:27:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86165.161464; Wed, 17 Feb 2021 08:27:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCIBa-0003Lo-HR; Wed, 17 Feb 2021 08:27:50 +0000
Received: by outflank-mailman (input) for mailman id 86165;
 Wed, 17 Feb 2021 08:27:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XFXw=HT=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lCIBZ-0003Lg-CS
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 08:27:49 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bf55f6c5-16cf-4d63-8501-65dc9911df82;
 Wed, 17 Feb 2021 08:27:48 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 64CBAB030;
 Wed, 17 Feb 2021 08:27:47 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bf55f6c5-16cf-4d63-8501-65dc9911df82
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613550467; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=JSjKfYtiyV+pXkloEbdvxdhSElSQtZVjZjlL17MCa3Q=;
	b=rqZqZUrrPjudd504MvFJIbIstllnfE1UTWPIeISFtLRmR4ES3ral3EhNboUPPHhxYKp+Vf
	wZnY7zWfEB68HfyBT7YcmgVzw5H3p65D35uKhxqS12VUQcYMCLNS6Su9wGxNXX9JAqwVhf
	D8QS9RUZeQUV9PXCZ3beSBpQ0M0ABiY=
Subject: Ping: [PATCH v3 0/4] x86/time: calibration rendezvous adjustments
From: Jan Beulich <jbeulich@suse.com>
To: Ian Jackson <iwj@xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <bb7494b9-f4d1-f0c0-2fb2-5201559c1962@suse.com>
 <a3be96d8-1480-8af4-601b-a55ab3819f97@suse.com>
Message-ID: <36e71ec6-e02a-87a9-9f43-714a1f2cfcf4@suse.com>
Date: Wed, 17 Feb 2021 09:27:46 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <a3be96d8-1480-8af4-601b-a55ab3819f97@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Ian?

On 09.02.2021 14:02, Jan Beulich wrote:
> On 09.02.2021 13:53, Jan Beulich wrote:
>> The middle two patches are meant to address a regression reported on
>> the list under "Problems with APIC on versions 4.9 and later (4.8
>> works)". In the course of analyzing output from a debugging patch I
>> ran into another anomaly again, which I thought I should finally try
>> to address. Hence patch 1. Patch 4 is new in v3 and RFC for now.
> 
> Of course this is the kind of change I'd prefer doing early in a
> release cycle. I don't think there are severe risks from patch 1, but
> I'm not going to claim patches 2 and 3 are risk free. They fix booting
> Xen on a system left in rather awkward state by the firmware. And
> they shouldn't affect well behaved modern systems at all (due to
> those using a different rendezvous function). While we've been having
> this issue for years, I also consider this set a backporting
> candidate. Hence I can see reasons pro and con inclusion in 4.15.
> 
> Jan
> 
>> 1: change initiation of the calibration timer
>> 2: adjust time recording time_calibration_tsc_rendezvous()
>> 3: don't move TSC backwards in time_calibration_tsc_rendezvous()
>> 4: re-arrange struct calibration_rendezvous
>>
>> Jan
>>
> 



From xen-devel-bounces@lists.xenproject.org Wed Feb 17 08:29:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 08:29:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86167.161477 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCIDJ-0003Z5-2Z; Wed, 17 Feb 2021 08:29:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86167.161477; Wed, 17 Feb 2021 08:29:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCIDI-0003Yy-TL; Wed, 17 Feb 2021 08:29:36 +0000
Received: by outflank-mailman (input) for mailman id 86167;
 Wed, 17 Feb 2021 08:29:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=T8s0=HT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lCIDH-0003Ym-1q
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 08:29:35 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1794ed29-bcbf-42b5-9169-e1b1d8d2fd7a;
 Wed, 17 Feb 2021 08:29:34 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 23F34B030;
 Wed, 17 Feb 2021 08:29:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1794ed29-bcbf-42b5-9169-e1b1d8d2fd7a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613550573; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=YtcTVMfz0A0mn83jtQfp1GPwlNrriqd5zAvGUY7Md4c=;
	b=Ivkovvrcn/ckMsf2Lakda5GTLQLqtFJZXQ477ye9DJAeUOPrEVrkWXqFVBM9qJKhGsUQVj
	5Fe1q7pTsVS3Z6+QN9VLabMaU6lqQUO5hxuUxP2xxatPDv8VlMVGOqUC439mCu9K+IU/kN
	z7PteCa+aRASMHRBZHc9gxqwp9fi6no=
To: Roman Shaposhnik <roman@zededa.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich <jbeulich@suse.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>
References: <CAMmSBy-_UOK6DTrwGNOw8Y59Muv8H8wxmsc4-BXcv3N_u5USBA@mail.gmail.com>
 <alpine.DEB.2.21.2102161232310.3234@sstabellini-ThinkPad-T480s>
 <45b8ef4c-6d36-e91b-ca1a-a82eeca5aaf5@suse.com>
 <CAMmSBy8k0Y50Xkq9Kq+oES27gsoG==T++Hz9SiR0gDgAKnpvRA@mail.gmail.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: Linux DomU freezes and dies under heavy memory shuffling
Message-ID: <49344e8d-5518-68c6-a417-68522a915e72@suse.com>
Date: Wed, 17 Feb 2021 09:29:32 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <CAMmSBy8k0Y50Xkq9Kq+oES27gsoG==T++Hz9SiR0gDgAKnpvRA@mail.gmail.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="VWsJdD0BxsHUO9CZsPG3eM17N61YKLn9M"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--VWsJdD0BxsHUO9CZsPG3eM17N61YKLn9M
Content-Type: multipart/mixed; boundary="awxEAb2lYGwwf0Kkvb0N17bv6mC3HNApN";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Roman Shaposhnik <roman@zededa.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich <jbeulich@suse.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>
Message-ID: <49344e8d-5518-68c6-a417-68522a915e72@suse.com>
Subject: Re: Linux DomU freezes and dies under heavy memory shuffling
References: <CAMmSBy-_UOK6DTrwGNOw8Y59Muv8H8wxmsc4-BXcv3N_u5USBA@mail.gmail.com>
 <alpine.DEB.2.21.2102161232310.3234@sstabellini-ThinkPad-T480s>
 <45b8ef4c-6d36-e91b-ca1a-a82eeca5aaf5@suse.com>
 <CAMmSBy8k0Y50Xkq9Kq+oES27gsoG==T++Hz9SiR0gDgAKnpvRA@mail.gmail.com>
In-Reply-To: <CAMmSBy8k0Y50Xkq9Kq+oES27gsoG==T++Hz9SiR0gDgAKnpvRA@mail.gmail.com>

--awxEAb2lYGwwf0Kkvb0N17bv6mC3HNApN
Content-Type: multipart/mixed;
 boundary="------------AFE1857A4354CA1A2D1E08A5"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------AFE1857A4354CA1A2D1E08A5
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 17.02.21 09:12, Roman Shaposhnik wrote:
> Hi Jürgen, thanks for taking a look at this. A few comments below:
>
> On Tue, Feb 16, 2021 at 10:47 PM Jürgen Groß <jgross@suse.com> wrote:
>>
>> On 16.02.21 21:34, Stefano Stabellini wrote:
>>> + x86 maintainers
>>>
>>> It looks like the tlbflush is getting stuck?
>>
>> I have seen this case multiple times on customer systems now, but
>> reproducing it reliably seems to be very hard.
>
> It is reliably reproducible under my workload, but it takes a long time
> (~3 days of the workload running in the lab).

This is by far the best reproduction rate I have seen up to now.

The next best reproducer seems to be a huge installation with several
hundred hosts and thousands of VMs, with about 1 crash each week.

>
>> I suspected fifo events to be blamed, but just yesterday I've been
>> informed of another case with fifo events disabled in the guest.
>>
>> One common pattern seems to be that up to now I have seen this effect
>> only on systems with Intel Gold cpus. Can it be confirmed to be true
>> in this case, too?
>
> I am pretty sure mine isn't -- I can get you full CPU specs if that's useful.

Just the output of "grep model /proc/cpuinfo" should be enough.

>
>> In case anybody has a reproducer (either in a guest or dom0) with a
>> setup where a diagnostic kernel can be used, I'd be _very_ interested!
>
> I can easily add things to Dom0 and DomU. Whether that will disrupt the
> experiment is, of course, another matter. Still, please let me know what
> would be helpful to do.

Is there a chance to switch to an upstream kernel in the guest? I'd like
to add some diagnostic code to the kernel and creating the patches will
be easier this way.


Juergen

--------------AFE1857A4354CA1A2D1E08A5--

--awxEAb2lYGwwf0Kkvb0N17bv6mC3HNApN--

--VWsJdD0BxsHUO9CZsPG3eM17N61YKLn9M--


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 08:30:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 08:30:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86171.161489 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCIE8-0004MT-F7; Wed, 17 Feb 2021 08:30:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86171.161489; Wed, 17 Feb 2021 08:30:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCIE8-0004MM-By; Wed, 17 Feb 2021 08:30:28 +0000
Received: by outflank-mailman (input) for mailman id 86171;
 Wed, 17 Feb 2021 08:30:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XFXw=HT=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lCIE7-0004MG-6s
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 08:30:27 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2c92ddd4-e935-4f56-99ac-035d70cb1833;
 Wed, 17 Feb 2021 08:30:26 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7D70DB03A;
 Wed, 17 Feb 2021 08:30:25 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2c92ddd4-e935-4f56-99ac-035d70cb1833
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613550625; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=XnR69bt22wd/7RRrZ2BwquGMng6qj0XhIE7ADQoX+Ns=;
	b=B95tbhCRzNRSVm9zs71qMpOqCsGTbisZDDfnSi6nNq6keMB1BlgvuYsUNTeXVrD4qlOFR8
	NIaWPxfvOfyztCSb95/LLtf33SBRMrJoEQwSH4VhXThSjr6OuDkrIRCUFJJe4Nc5dF1hpR
	haSvDBAKj3QYhA/3I7cx5DcLKqUpNIo=
Subject: Ping: [PATCH v2 2/2] IOREQ: refine when to send mapcache invalidation
 request
From: Jan Beulich <jbeulich@suse.com>
To: paul@xen.org
Cc: 'Andrew Cooper' <andrew.cooper3@citrix.com>, 'Wei Liu' <wl@xen.org>,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>,
 'Julien Grall' <julien@xen.org>,
 'Stefano Stabellini' <sstabellini@kernel.org>,
 'George Dunlap' <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
References: <0e7265fe-8d89-facb-790d-9232c742c3fa@suse.com>
 <e2682f84-b3bb-b9fb-edd8-863b9036de95@suse.com>
 <03fb01d6fad7$c39087b0$4ab19710$@xen.org>
 <ad73c330-4cbd-0ee4-fee7-2453dab00eef@suse.com>
Message-ID: <006bd542-e213-a6ad-7812-e91fed7093a3@suse.com>
Date: Wed, 17 Feb 2021 09:30:24 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <ad73c330-4cbd-0ee4-fee7-2453dab00eef@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Paul (or others), thoughts?

On 04.02.2021 12:24, Jan Beulich wrote:
> On 04.02.2021 10:26, Paul Durrant wrote:
>>> From: Jan Beulich <jbeulich@suse.com>
>>> Sent: 02 February 2021 15:15
>>>
>>> XENMEM_decrease_reservation isn't the only means by which pages can get
>>> removed from a guest, yet all removals ought to be signaled to qemu. Put
>>> setting of the flag into the central p2m_remove_page() underlying all
>>> respective hypercalls as well as a few similar places, mainly in PoD
>>> code.
>>>
>>> Additionally there's no point sending the request for the local domain
>>> when the domain acted upon is a different one. The latter domain's ioreq
>>> server mapcaches need invalidating. We assume that domain to be paused
>>> at the point the operation takes place, so sending the request in this
>>> case happens from the hvm_do_resume() path, which as one of its first
>>> steps calls handle_hvm_io_completion().
>>>
>>> Even without the remote operation aspect a single domain-wide flag
>>> doesn't do: Guests may e.g. decrease-reservation on multiple vCPU-s in
>>> parallel. Each of them needs to issue an invalidation request in due
>>> course, in particular because exiting to guest context should not happen
>>> before the request was actually seen by (all) the emulator(s).
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>> ---
>>> v2: Preemption related adjustment split off. Make flag per-vCPU. More
>>>     places to set the flag. Also handle acting on a remote domain.
>>>     Re-base.
>>
>> I'm wondering if a per-vcpu flag is overkill actually. We just need
>> to make sure that we don't miss sending an invalidation where
>> multiple vcpus are in play. The mapcache in the emulator is global
>> so issuing an invalidate for every vcpu is going to cause an
>> unnecessary storm of ioreqs, isn't it?
> 
> The only time a truly unnecessary storm would occur is when for
> an already running guest mapcache invalidation gets triggered
> by a remote domain. This should be a pretty rare event, so I
> think the storm in this case ought to be tolerable.
> 
>> Could we get away with the per-domain atomic counter?
> 
> Possible, but quite involved afaict: The potential storm above
> is the price to pay for the simplicity of the model. It is
> important to note that while we don't need all of the vCPU-s
> to send these ioreqs, we need all of them to wait for the
> request(s) to be acked. And this waiting is what we get "for
> free" using the approach here, whereas we'd need to introduce
> new logic for this with an atomic counter (afaict at least).
> 
> Jan
> 



From xen-devel-bounces@lists.xenproject.org Wed Feb 17 08:32:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 08:32:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86172.161501 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCIGM-0004WT-S5; Wed, 17 Feb 2021 08:32:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86172.161501; Wed, 17 Feb 2021 08:32:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCIGM-0004WM-Oy; Wed, 17 Feb 2021 08:32:46 +0000
Received: by outflank-mailman (input) for mailman id 86172;
 Wed, 17 Feb 2021 08:32:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XFXw=HT=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lCIGL-0004WF-9t
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 08:32:45 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c3520f3e-9c25-49a3-a2a9-7d6541cc38b4;
 Wed, 17 Feb 2021 08:32:44 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C9768B03B;
 Wed, 17 Feb 2021 08:32:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c3520f3e-9c25-49a3-a2a9-7d6541cc38b4
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613550763; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=y1/G31dhI4NC4cwy4l2zlxtzFa6ohofSNDIQI8F53Pw=;
	b=s7GiXzDf+V2G28i/DNoOpz+/fHAFXBTq4LqFcn8GZVBxgwW15yo7T69b/NvTlJq+ictmwF
	0jcFNJ5/cda49mAcpJFFzimyGxrgT87wZwSoUWnqyPrHgZSrWD51HNhoyztFC7+6AarTQF
	XeHaduEo8t2MoawFe7oDQWGxomWtUog=
Subject: Ping: [PATCH] x86emul: de-duplicate scatters to the same linear
 address
From: Jan Beulich <jbeulich@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <6064996d-943f-1be3-9bfd-e872149da2a1@suse.com>
 <fcf7e123-3cdd-fd4d-6c58-36facb26a68e@citrix.com>
 <2e559806-5bc0-0f61-8e23-95e0dba34c41@suse.com>
Message-ID: <ca22a3b6-8194-7880-8e84-e709ee20bcf3@suse.com>
Date: Wed, 17 Feb 2021 09:32:43 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <2e559806-5bc0-0f61-8e23-95e0dba34c41@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 05.02.2021 12:28, Jan Beulich wrote:
> On 05.02.2021 11:41, Andrew Cooper wrote:
>> On 10/11/2020 13:26, Jan Beulich wrote:
>>> The SDM specifically allows for earlier writes to fully overlapping
>>> ranges to be dropped. If a guest did so, hvmemul_phys_mmio_access()
>>> would crash it if varying data was written to the same address. Detect
>>> overlaps early, as doing so in hvmemul_{linear,phys}_mmio_access() would
>>> be quite a bit more difficult.
>>
>> Are you saying that there is currently a bug if a guest does encode such
>> an instruction, and we emulate it?
> 
> That is my take on it, yes.
> 
>>> Note that due to cache slot use being linear address based, there's no
>>> similar issue with multiple writes to the same physical address (mapped
>>> through different linear addresses).
>>>
>>> Since this requires an adjustment to the EVEX Disp8 scaling test,
>>> correct a comment there at the same time.
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>> ---
>>> TBD: The SDM isn't entirely unambiguous about the faulting behavior in
>>>      this case: If a fault would need delivering on the earlier slot
>>>      despite the write getting squashed, we'd have to call ops->write()
>>>      with size set to zero for the earlier write(s). However,
>>>      hvm/emulate.c's handling of zero-byte accesses extends only to the
>>>      virtual-to-linear address conversions (and raising of involved
>>>      faults), so in order to also observe #PF changes to that logic
>>>      would then also be needed. Can we live with a possible misbehavior
>>>      here?
>>
>> Do you have a chapter/section reference?
> 
> The instruction pages. They say in particular
> 
> "If two or more destination indices completely overlap, the “earlier”
>  write(s) may be skipped."
> 
> and
> 
> "Faults are delivered in a right-to-left manner. That is, if a fault
>  is triggered by an element and delivered ..."
> 
> To me this may or may not mean the skipping of indices includes the
> skipping of faults (which a later element then would raise anyway).

Does the above address your concerns / questions? If not, what else
do I need to provide?

Jan


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 08:50:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 08:50:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86175.161512 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCIXa-0006Nw-Au; Wed, 17 Feb 2021 08:50:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86175.161512; Wed, 17 Feb 2021 08:50:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCIXa-0006Np-7y; Wed, 17 Feb 2021 08:50:34 +0000
Received: by outflank-mailman (input) for mailman id 86175;
 Wed, 17 Feb 2021 08:50:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lCIXZ-0006Nj-11
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 08:50:33 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lCIXW-0007Q5-Da; Wed, 17 Feb 2021 08:50:30 +0000
Received: from gw1.octic.net ([81.187.162.82] helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lCIXW-0005NU-2x; Wed, 17 Feb 2021 08:50:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=dRe5LduMaKQ8pMQNQAp8DB1c1uwOmrlK8GMRylhbikQ=; b=SvpaVaj9Qzw0KUlH8ph+pWFLmF
	nqjvK45DGjZqDzI0qF+hXFjR+NOweRdBHNnjpDX7E0qB4nLPJoTPNUqe2C18IzP9jneu4D1nhlOQZ
	fMSj9sAYkQsUJOBvi1eGn0kw2A+QGgG47qR2eF+vKfEv5UGWsJ91VMBmswTfm1bnMMxc=;
Subject: Re: [RFC] xen/arm: introduce XENFEAT_ARM_dom0_iommu
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, Bertrand.Marquis@arm.com,
 Volodymyr_Babchuk@epam.com, rahul.singh@arm.com
References: <alpine.DEB.2.21.2102161333090.3234@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <2d22d5b8-6cda-f27b-e938-4806b65794a5@xen.org>
Date: Wed, 17 Feb 2021 08:50:26 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2102161333090.3234@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 17/02/2021 02:00, Stefano Stabellini wrote:
> Hi all,
> 
> Today Linux uses the swiotlb-xen driver (drivers/xen/swiotlb-xen.c) to
> translate addresses for DMA operations in Dom0. Specifically,
> swiotlb-xen is used to translate the address of a foreign page (a page
> belonging to a domU) mapped into Dom0 before using it for DMA.
> 
> This is important because although Dom0 is 1:1 mapped, DomUs are not. On
> systems without an IOMMU swiotlb-xen enables PV drivers to work as long
> as the backends are in Dom0. Thanks to swiotlb-xen, the DMA operation
> ends up using the MFN, rather than the GFN.
> 
> 
> On systems with an IOMMU, this is not necessary: when a foreign page is
> mapped in Dom0, it is added to the Dom0 p2m. A new GFN->MFN translation
> is established for both MMU and SMMU. Dom0 could safely use the GFN
> address (instead of the MFN) for DMA operations and they would work. It
> would be more efficient than using swiotlb-xen.
> 
> If you recall my presentation from Xen Summit 2020, Xilinx is working on
> cache coloring. With cache coloring, no domain is 1:1 mapped, not even
> Dom0. In a scenario where Dom0 is not 1:1 mapped, swiotlb-xen does not
> work as intended.
> 
> 
> The suggested solution for both these issues is to add a new feature
> flag "XENFEAT_ARM_dom0_iommu" that tells Dom0 that it is safe not to use
> swiotlb-xen because IOMMU translations are available for Dom0. If
> XENFEAT_ARM_dom0_iommu is set, Linux should skip the swiotlb-xen
> initialization. I have tested this scheme with and without cache
> coloring (hence with and without 1:1 mapping of Dom0) on ZCU102 and it
> works as expected: DMA operations succeed.
> 
> 
> What about systems where an IOMMU is present but not all devices are
> protected?
> 
> There is no way for Xen to know which devices are protected and which
> ones are not: devices that do not have the "iommus" property could or
> could not be DMA masters.
> 
> Perhaps Xen could populate a whitelist of devices protected by the IOMMU
> based on the "iommus" property. It would require some added complexity
> in Xen and especially in the swiotlb-xen driver in Linux to use it,
> which is not ideal.

You are trading a bit more complexity in Xen and Linux against the
risk that a user may not be able to use the hypervisor on their
platform without a quirk in Xen (see more below).

> However, this approach would not work for cache
> coloring where dom0 is not 1:1 mapped so the swiotlb-xen should not be
> used either way

Not every Dom0 Linux kernel will be able to work with cache colouring.
So you will need a way for the kernel to say "Hey, I can avoid using
swiotlb".

> 
> For these reasons, I would like to propose a single flag
> XENFEAT_ARM_dom0_iommu which says that the IOMMU can be relied upon for
> DMA translations. In situations where a DMA master is not SMMU
> protected, XENFEAT_ARM_dom0_iommu should not be set. For example, on a
> platform where an IOMMU is present and protects most DMA masters but it
> is leaving out the MMC controller, then XENFEAT_ARM_dom0_iommu should
> not be set (because PV block is not going to work without swiotlb-xen.)
> This also means that cache coloring won't be usable on such a system (at
> least not usable with the MMC controller so the system integrator should
> pay special care to setup the system).
> 
> It is worth noting that if we wanted to extend the interface to add a
> list of protected devices in the future, it would still be possible. It
> would be compatible with XENFEAT_ARM_dom0_iommu.

I imagine by compatible, you mean XENFEAT_ARM_dom0_iommu would be 
cleared and instead the device-tree list would be used. Is that correct?

> 
> 
> How to set XENFEAT_ARM_dom0_iommu?
> 
> We could set XENFEAT_ARM_dom0_iommu automatically when
> is_iommu_enabled(d) for Dom0. We could also have a platform specific
> (xen/arch/arm/platforms/) override so that a specific platform can
> disable XENFEAT_ARM_dom0_iommu. For debugging purposes and advanced
> users, it would also be useful to be able to override it via a Xen
> command line parameter.
Platform quirks should be limited to a small set of platforms.

In this case, the quirk would not only be per-platform but also
per-firmware-table, as a developer can decide to remove/add IOMMU nodes
in the DT at any time.

In addition to that, it means we are introducing a regression for those
users: Xen 4.14 would have worked on their platform, but newer Xen would
not. They would need to go through all the nodes and find out which ones
are not protected.

This is a bit of a daunting task and we are going to end up having a
lot of per-platform quirks in Xen.

So this approach of selecting the flag automatically is a no-go for me.
FAOD, the inverted idea (i.e. only setting XENFEAT_ARM_dom0_iommu
per-platform) is a no-go as well.

I don't have a good idea of how to set the flag automatically. My
requirement is that newer Xen should continue to work on all supported
platforms without any additional per-platform effort.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 09:42:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 09:42:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86180.161525 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCJLO-0002Wt-1P; Wed, 17 Feb 2021 09:42:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86180.161525; Wed, 17 Feb 2021 09:42:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCJLN-0002Wm-UZ; Wed, 17 Feb 2021 09:42:01 +0000
Received: by outflank-mailman (input) for mailman id 86180;
 Wed, 17 Feb 2021 09:42:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IRng=HT=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1lCJLM-0002Wh-30
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 09:42:00 +0000
Received: from mail-wr1-x42e.google.com (unknown [2a00:1450:4864:20::42e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1611a42b-b7b4-4c0a-b152-a0ba389d5b12;
 Wed, 17 Feb 2021 09:41:58 +0000 (UTC)
Received: by mail-wr1-x42e.google.com with SMTP id r21so16597736wrr.9
 for <xen-devel@lists.xenproject.org>; Wed, 17 Feb 2021 01:41:58 -0800 (PST)
Received: from ?IPv6:2a00:23c5:5785:9a01:71b0:bf69:5f0d:b70f?
 ([2a00:23c5:5785:9a01:71b0:bf69:5f0d:b70f])
 by smtp.gmail.com with ESMTPSA id o13sm3968880wrs.45.2021.02.17.01.41.57
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 17 Feb 2021 01:41:57 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1611a42b-b7b4-4c0a-b152-a0ba389d5b12
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:subject:to:cc:references:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=H9YiCxPC3EWKAD/DFSNhs4Ry4EYfjJv5o1ZFvoYgAxU=;
        b=X8SNqkdKYWiEpgfGlmblSR1TfoWzv/Zw+itn4ZAw9ABm9hLvgspWpFUaGxzdiYgGKO
         cP/PB8nMsOnBtxuogTHBwjBrUmNOsHZueHl30HXHxrPg31/lft//ccAcJX3IjPhyakUy
         Et7S65k/fIJiL+HGvxeplAG1N20fOA0DobTp23sdgxgfGHVw6wQo6217tCeqOjlLjzhB
         niSy2VqU/eEthqhnzC3Pq8K26halit0KSqPConW7Ux6v6Ts9kBSRttOcvp+5C/132dpG
         YQ4xWi20s6jKFMWeN+GylMMa8R69J5lW2m5MFoL+gShVtsBFVRnmPo3FyhQpmvfAlq8s
         33yA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:subject:to:cc:references
         :message-id:date:user-agent:mime-version:in-reply-to
         :content-language:content-transfer-encoding;
        bh=H9YiCxPC3EWKAD/DFSNhs4Ry4EYfjJv5o1ZFvoYgAxU=;
        b=mI/jtJihQ4aE49VwBHx0SuNHrgIImtRYXoPDi2hvkv5JzgSyLG0CkWbplsNKVOFlRM
         B93V+XfKxmk9HsUkwciD/s7Chc7ASSkUgIQXQxBvPPiu/US74vFv31P3SVEXzeXzvIaV
         yT5hZsqiWyiMOXpPlAvPItmhS5KEjciBDT8aMT1gjkw4u/cHuHjrzDkbftYy+Ug1z2iw
         PRvJH2AGM+PVEJs17Q86fcowqQ2yIiy5Q8AtSd3WFQ2kqqe8UzCRw6Dy+PKuPoXRA/M9
         +lsJO9AGhGU65gajvRSpNw85Vjf6jy8M0vv6Tzbq/WMx4cxbj/LMRhOL8cnNqtT1wB6R
         Jh0Q==
X-Gm-Message-State: AOAM533hYluius0O1dY7gha9Q3G++Q2uVmH0AOIBBUd1o2MJUpDaj2Le
	J2B+a3ZAu5bNzEn1xw/ZK6fPJhe1kouPgQ==
X-Google-Smtp-Source: ABdhPJzoVsEvclI2KTyTxhqm3DQdUCEZtjnNXr2qlaRSsp548D0t0p/l5wZODJjOi3+9Povuwcrw7w==
X-Received: by 2002:a5d:4850:: with SMTP id n16mr28684852wrs.296.1613554917998;
        Wed, 17 Feb 2021 01:41:57 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: Paul Durrant <paul@xen.org>
Reply-To: paul@xen.org
Subject: Re: Ping: [PATCH v2 2/2] IOREQ: refine when to send mapcache
 invalidation request
To: Jan Beulich <jbeulich@suse.com>
Cc: 'Andrew Cooper' <andrew.cooper3@citrix.com>, 'Wei Liu' <wl@xen.org>,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>,
 'Julien Grall' <julien@xen.org>,
 'Stefano Stabellini' <sstabellini@kernel.org>,
 'George Dunlap' <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
References: <0e7265fe-8d89-facb-790d-9232c742c3fa@suse.com>
 <e2682f84-b3bb-b9fb-edd8-863b9036de95@suse.com>
 <03fb01d6fad7$c39087b0$4ab19710$@xen.org>
 <ad73c330-4cbd-0ee4-fee7-2453dab00eef@suse.com>
 <006bd542-e213-a6ad-7812-e91fed7093a3@suse.com>
Message-ID: <56900eda-9718-f68a-8a05-99a8e713446d@xen.org>
Date: Wed, 17 Feb 2021 09:41:35 +0000
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <006bd542-e213-a6ad-7812-e91fed7093a3@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 17/02/2021 08:30, Jan Beulich wrote:
> Paul (or others), thoughts?
> 
> On 04.02.2021 12:24, Jan Beulich wrote:
>> On 04.02.2021 10:26, Paul Durrant wrote:
>>>> From: Jan Beulich <jbeulich@suse.com>
>>>> Sent: 02 February 2021 15:15
>>>>
>>>> XENMEM_decrease_reservation isn't the only means by which pages can get
>>>> removed from a guest, yet all removals ought to be signaled to qemu. Put
>>>> setting of the flag into the central p2m_remove_page() underlying all
>>>> respective hypercalls as well as a few similar places, mainly in PoD
>>>> code.
>>>>
>>>> Additionally there's no point sending the request for the local domain
>>>> when the domain acted upon is a different one. The latter domain's ioreq
>>>> server mapcaches need invalidating. We assume that domain to be paused
>>>> at the point the operation takes place, so sending the request in this
>>>> case happens from the hvm_do_resume() path, which as one of its first
>>>> steps calls handle_hvm_io_completion().
>>>>
>>>> Even without the remote operation aspect a single domain-wide flag
>>>> doesn't do: Guests may e.g. decrease-reservation on multiple vCPU-s in
>>>> parallel. Each of them needs to issue an invalidation request in due
>>>> course, in particular because exiting to guest context should not happen
>>>> before the request was actually seen by (all) the emulator(s).
>>>>
>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>> ---
>>>> v2: Preemption related adjustment split off. Make flag per-vCPU. More
>>>>      places to set the flag. Also handle acting on a remote domain.
>>>>      Re-base.
>>>
>>> I'm wondering if a per-vcpu flag is overkill actually. We just need
>>> to make sure that we don't miss sending an invalidation where
>>> multiple vcpus are in play. The mapcache in the emulator is global
>>> so issuing an invalidate for every vcpu is going to cause an
>>> unnecessary storm of ioreqs, isn't it?
>>
>> The only time a truly unnecessary storm would occur is when for
>> an already running guest mapcache invalidation gets triggered
>> by a remote domain. This should be a pretty rare event, so I
>> think the storm in this case ought to be tolerable.
>>
>>> Could we get away with the per-domain atomic counter?
>>
>> Possible, but quite involved afaict: The potential storm above
>> is the price to pay for the simplicity of the model. It is
>> important to note that while we don't need all of the vCPU-s
>> to send these ioreqs, we need all of them to wait for the
>> request(s) to be acked. And this waiting is what we get "for
>> free" using the approach here, whereas we'd need to introduce
>> new logic for this with an atomic counter (afaict at least).
>>
>> Jan
>>
> 

Ok, let's take the patch as-is then.

Reviewed-by: Paul Durrant <paul@xen.org>



From xen-devel-bounces@lists.xenproject.org Wed Feb 17 09:47:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 09:47:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86183.161536 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCJR0-0002i2-Mm; Wed, 17 Feb 2021 09:47:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86183.161536; Wed, 17 Feb 2021 09:47:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCJR0-0002hv-Jc; Wed, 17 Feb 2021 09:47:50 +0000
Received: by outflank-mailman (input) for mailman id 86183;
 Wed, 17 Feb 2021 09:47:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XFXw=HT=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lCJQz-0002hn-O0
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 09:47:49 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8da034df-f1cb-4c01-9c1a-fd9a2ddc11ea;
 Wed, 17 Feb 2021 09:47:48 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A859BB7A5;
 Wed, 17 Feb 2021 09:47:47 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8da034df-f1cb-4c01-9c1a-fd9a2ddc11ea
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613555267; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=MmsjZvs0OUslKZz6am/QNr2+fo8zfsU45uUVmMv+tMA=;
	b=SAA3nDknxyvtr6jr7KS7gqck7Mo7QdsjpzSqBUb+G2iY4L2OirgRsHGxNKtsQtkOCWEMjn
	zK3tTweNCfeGIqvuVHcRI4TubLAbxIIMgtSjgT7HOhIClgDYMvYgzup9QKNd4tIg1Zgu9w
	ndGBqhbkSAfx53TEJz3hXm6B1pqntFs=
Subject: Re: [PATCH] firmware: don't build hvmloader if it is not needed
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>, xen-devel@lists.xenproject.org,
 cardoe@cardoe.com, andrew.cooper3@citrix.com, wl@xen.org,
 iwj@xenproject.org, anthony.perard@citrix.com,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
References: <20210213020540.27894-1-sstabellini@kernel.org>
 <20210213135056.GA6191@mail-itl>
 <4d9200cd-bd4b-e429-5c96-7a4399bb00b4@suse.com>
 <alpine.DEB.2.21.2102161016000.3234@sstabellini-ThinkPad-T480s>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <5a574326-9560-e771-b84f-9d4f348b7f5f@suse.com>
Date: Wed, 17 Feb 2021 10:47:46 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2102161016000.3234@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 16.02.2021 19:31, Stefano Stabellini wrote:
> On Mon, 15 Feb 2021, Jan Beulich wrote:
>> On 13.02.2021 14:50, Marek Marczykowski-Górecki wrote:
>>> On Fri, Feb 12, 2021 at 06:05:40PM -0800, Stefano Stabellini wrote:
>>>> If rombios, seabios and ovmf are all disabled, don't attempt to build
>>>> hvmloader.
>>>
>>> What if you choose to not build any of rombios, seabios, ovmf, but use
>>> system one instead? Wouldn't that exclude hvmloader too?
>>
>> Even further - one can disable all firmware and have every guest
>> config explicitly specify the firmware to use, afaict.
> 
> I didn't realize there was a valid reason for wanting to build hvmloader
> without rombios, seabios, and ovmf.
> 
> 
>>> This heuristic seems like a bit too much, maybe instead add an explicit
>>> option to skip hvmloader?
>>
>> +1 (If making this configurable is needed at all - is having
>> hvmloader without needing it really a problem?)
> 
> The hvmloader build fails on Alpine Linux x86:
> 
> https://gitlab.com/xen-project/xen/-/jobs/1033722465
> 
> 
> 
> I admit I was just trying to find the fastest way to make those Alpine
> Linux builds succeed to unblock patchew: although the Alpine Linux
> builds are marked as "allow_failure: true" in gitlab-ci, patchew will
> still report the whole battery of tests as "failure". As a consequence
> the notification emails from patchew after a build of a contributed
> patch series always say "job failed" today, making them kind of useless.
> See attached.
> 
> I would love if somebody else took over this fix as I am doing this
> after hours, but if you have a simple suggestion on how to fix the
> Alpine Linux hvmloader builds, or skip the build when appropriate, I can
> try to follow up.

There is an issue with the definition of uint64_t there. Initial
errors like

hvmloader.c: In function 'init_vm86_tss':
hvmloader.c:202:39: error: left shift count >= width of type [-Werror=shift-count-overflow]
  202 |                   ((uint64_t)TSS_SIZE << 32) | virt_to_phys(tss));

already hint at this, but then

util.c: In function 'get_cpu_mhz':
util.c:824:15: error: conversion from 'long long unsigned int' to 'uint64_t' {aka 'long unsigned int'} changes value from '4294967296000000' to '0' [-Werror=overflow]
  824 |     cpu_khz = 1000000ull << 32;

is quite explicit: "aka 'long unsigned int'"? This is a 32-bit
environment, after all. I suspect the build picks up headers
(stdint.h here in particular) intended for 64-bit builds only.
Can you check whether "gcc -m32" properly sets include paths
_different_ from those plain "gcc" sets, if the headers found in
the latter case aren't suitable for the former? Or alternatively
is the Alpine build environment set up incorrectly, in that it
lacks 32-bit devel packages?

As an aside I don't think it's really a good idea to have
hvmloader depend on any external headers. Just like the
hypervisor it's a free-standing binary, and hence ought to be
free of any dependencies on the build/host environment.
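
One way to achieve that (a hypothetical sketch, with illustrative names not taken from the Xen tree) is for the freestanding binary to define its fixed-width types locally, so the build can never pick up an unsuitable host stdint.h:

```c
/* Hypothetical self-contained fixed-width types for an ILP32
 * freestanding build; no host headers involved. */
typedef unsigned int       hvm_uint32_t;
typedef unsigned long long hvm_uint64_t;

/* Compile-time checks that the chosen base types have the intended
 * width (both hold on ILP32 as well as on common LP64 hosts). */
_Static_assert(sizeof(hvm_uint32_t) == 4, "need a 4-byte uint32_t");
_Static_assert(sizeof(hvm_uint64_t) == 8, "need an 8-byte uint64_t");
```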

Jan


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 09:52:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 09:52:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86186.161551 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCJVU-0003fV-Dw; Wed, 17 Feb 2021 09:52:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86186.161551; Wed, 17 Feb 2021 09:52:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCJVU-0003fO-B2; Wed, 17 Feb 2021 09:52:28 +0000
Received: by outflank-mailman (input) for mailman id 86186;
 Wed, 17 Feb 2021 09:52:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCJVT-0003fG-Ms; Wed, 17 Feb 2021 09:52:27 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCJVT-0008SN-HI; Wed, 17 Feb 2021 09:52:27 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCJVT-0000iT-7f; Wed, 17 Feb 2021 09:52:27 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lCJVT-0006To-7D; Wed, 17 Feb 2021 09:52:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=jw2gRgns+aoU5vteKgD9U2R0xbR3tAByT7Y8a4Z6D84=; b=B9ofobUPOb1TdmMwqn3GSN7dvT
	zcUE2OusDApTZqT2AUP1/lFWGJQ7F42krUY/GBjbmzsTN/sP0g0PRW7e+bRKsTVt5KfLT3JWqS2A0
	8hcpBFv4eMz0UuT0Ob0mxWNVuT3NcX+flBzVO+OeYr1YjqO5ayDF7mqzMQlKN8/mYblI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159441-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 159441: all pass - PUSHED
X-Osstest-Versions-This:
    xen=3b1cc15f1931ba56d0ee256fe9bfe65509733b27
X-Osstest-Versions-That:
    xen=04085ec1ac05a362812e9b0c6b5a8713d7dc88ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 17 Feb 2021 09:52:27 +0000

flight 159441 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159441/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  3b1cc15f1931ba56d0ee256fe9bfe65509733b27
baseline version:
 xen                  04085ec1ac05a362812e9b0c6b5a8713d7dc88ad

Last test of basis   159344  2021-02-14 09:18:27 Z    3 days
Testing same since   159441  2021-02-17 09:20:56 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   04085ec1ac..3b1cc15f19  3b1cc15f1931ba56d0ee256fe9bfe65509733b27 -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 10:02:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 10:02:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86194.161570 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCJen-0004kO-Cn; Wed, 17 Feb 2021 10:02:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86194.161570; Wed, 17 Feb 2021 10:02:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCJen-0004kH-9l; Wed, 17 Feb 2021 10:02:05 +0000
Received: by outflank-mailman (input) for mailman id 86194;
 Wed, 17 Feb 2021 10:02:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XFXw=HT=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lCJel-0004kC-KS
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 10:02:03 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 06ba39fc-1995-4513-bee8-63d2c3b41f9f;
 Wed, 17 Feb 2021 10:02:02 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BBDFDB7A7;
 Wed, 17 Feb 2021 10:02:01 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 06ba39fc-1995-4513-bee8-63d2c3b41f9f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613556121; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Iv15mlQqaiiuwkeJAdzOnIHYlDJnW7R5IWksA1/NNVM=;
	b=bFJYfDfqZ5LQNgoULYOYGUM08RGu95+AmZAIp7kYUKA2APeAMTusaOB92L6Vx7OKI3PzlZ
	lwWxgpROGq1p0//HiIvXCp6m2BZUOTb/buG/tF8EnIJ0pMcXRlKy7Mcmy1Lfr+dqMqSNrI
	KfwhvXe0FxFb8fPhAYrdeJKFDzVgwJ0=
Subject: Re: [PATCH v2 2/2] IOREQ: refine when to send mapcache invalidation
 request
From: Jan Beulich <jbeulich@suse.com>
To: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Paul Durrant <paul@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <0e7265fe-8d89-facb-790d-9232c742c3fa@suse.com>
 <e2682f84-b3bb-b9fb-edd8-863b9036de95@suse.com>
Message-ID: <454ffaeb-f0d5-a2fe-420b-f28e51d9aabf@suse.com>
Date: Wed, 17 Feb 2021 11:02:00 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <e2682f84-b3bb-b9fb-edd8-863b9036de95@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 02.02.2021 16:14, Jan Beulich wrote:
> XENMEM_decrease_reservation isn't the only means by which pages can get
> removed from a guest, yet all removals ought to be signaled to qemu. Put
> setting of the flag into the central p2m_remove_page() underlying all
> respective hypercalls as well as a few similar places, mainly in PoD
> code.
> 
> Additionally there's no point sending the request for the local domain
> when the domain acted upon is a different one. The latter domain's ioreq
> server mapcaches need invalidating. We assume that domain to be paused
> at the point the operation takes place, so sending the request in this
> case happens from the hvm_do_resume() path, which as one of its first
> steps calls handle_hvm_io_completion().
> 
> Even without the remote operation aspect a single domain-wide flag
> doesn't do: Guests may e.g. decrease-reservation on multiple vCPU-s in
> parallel. Each of them needs to issue an invalidation request in due
> course, in particular because exiting to guest context should not happen
> before the request was actually seen by (all) the emulator(s).
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v2: Preemption related adjustment split off. Make flag per-vCPU. More
>     places to set the flag. Also handle acting on a remote domain.
>     Re-base.

Can I get an Arm side ack (or otherwise) please?

Thanks, Jan

> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -759,10 +759,9 @@ static void p2m_free_entry(struct p2m_do
>           * has failed (error case).
>           * So, at worst, the spurious mapcache invalidation might be sent.
>           */
> -        if ( (p2m->domain == current->domain) &&
> -              domain_has_ioreq_server(p2m->domain) &&
> -              p2m_is_ram(entry.p2m.type) )
> -            p2m->domain->mapcache_invalidate = true;
> +        if ( p2m_is_ram(entry.p2m.type) &&
> +             domain_has_ioreq_server(p2m->domain) )
> +            ioreq_request_mapcache_invalidate(p2m->domain);
>  #endif
>  
>          p2m->stats.mappings[level]--;
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -1509,8 +1509,8 @@ static void do_trap_hypercall(struct cpu
>       * Note that sending the invalidation request causes the vCPU to block
>       * until all the IOREQ servers have acknowledged the invalidation.
>       */
> -    if ( unlikely(curr->domain->mapcache_invalidate) &&
> -         test_and_clear_bool(curr->domain->mapcache_invalidate) )
> +    if ( unlikely(curr->mapcache_invalidate) &&
> +         test_and_clear_bool(curr->mapcache_invalidate) )
>          ioreq_signal_mapcache_invalidate();
>  #endif
>  }
> --- a/xen/arch/x86/hvm/hypercall.c
> +++ b/xen/arch/x86/hvm/hypercall.c
> @@ -32,7 +32,6 @@
>  
>  static long hvm_memory_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>  {
> -    const struct vcpu *curr = current;
>      long rc;
>  
>      switch ( cmd & MEMOP_CMD_MASK )
> @@ -42,14 +41,11 @@ static long hvm_memory_op(int cmd, XEN_G
>          return -ENOSYS;
>      }
>  
> -    if ( !curr->hcall_compat )
> +    if ( !current->hcall_compat )
>          rc = do_memory_op(cmd, arg);
>      else
>          rc = compat_memory_op(cmd, arg);
>  
> -    if ( (cmd & MEMOP_CMD_MASK) == XENMEM_decrease_reservation )
> -        curr->domain->mapcache_invalidate = true;
> -
>      return rc;
>  }
>  
> @@ -327,9 +323,11 @@ int hvm_hypercall(struct cpu_user_regs *
>  
>      HVM_DBG_LOG(DBG_LEVEL_HCALL, "hcall%lu -> %lx", eax, regs->rax);
>  
> -    if ( unlikely(currd->mapcache_invalidate) &&
> -         test_and_clear_bool(currd->mapcache_invalidate) )
> +    if ( unlikely(curr->mapcache_invalidate) )
> +    {
> +        curr->mapcache_invalidate = false;
>          ioreq_signal_mapcache_invalidate();
> +    }
>  
>      return curr->hcall_preempted ? HVM_HCALL_preempted : HVM_HCALL_completed;
>  }
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -28,6 +28,7 @@
>  #include <xen/vm_event.h>
>  #include <xen/event.h>
>  #include <xen/grant_table.h>
> +#include <xen/ioreq.h>
>  #include <xen/param.h>
>  #include <public/vm_event.h>
>  #include <asm/domain.h>
> @@ -815,6 +816,8 @@ p2m_remove_page(struct p2m_domain *p2m,
>          }
>      }
>  
> +    ioreq_request_mapcache_invalidate(p2m->domain);
> +
>      return p2m_set_entry(p2m, gfn, INVALID_MFN, page_order, p2m_invalid,
>                           p2m->default_access);
>  }
> @@ -1301,6 +1304,8 @@ static int set_typed_p2m_entry(struct do
>              ASSERT(mfn_valid(mfn_add(omfn, i)));
>              set_gpfn_from_mfn(mfn_x(omfn) + i, INVALID_M2P_ENTRY);
>          }
> +
> +        ioreq_request_mapcache_invalidate(d);
>      }
>  
>      P2M_DEBUG("set %d %lx %lx\n", gfn_p2mt, gfn_l, mfn_x(mfn));
> --- a/xen/arch/x86/mm/p2m-pod.c
> +++ b/xen/arch/x86/mm/p2m-pod.c
> @@ -20,6 +20,7 @@
>   */
>  
>  #include <xen/event.h>
> +#include <xen/ioreq.h>
>  #include <xen/mm.h>
>  #include <xen/sched.h>
>  #include <xen/trace.h>
> @@ -647,6 +648,8 @@ p2m_pod_decrease_reservation(struct doma
>                  set_gpfn_from_mfn(mfn_x(mfn), INVALID_M2P_ENTRY);
>              p2m_pod_cache_add(p2m, page, cur_order);
>  
> +            ioreq_request_mapcache_invalidate(d);
> +
>              steal_for_cache =  ( p2m->pod.entry_count > p2m->pod.count );
>  
>              ram -= n;
> @@ -835,6 +838,8 @@ p2m_pod_zero_check_superpage(struct p2m_
>      p2m_pod_cache_add(p2m, mfn_to_page(mfn0), PAGE_ORDER_2M);
>      p2m->pod.entry_count += SUPERPAGE_PAGES;
>  
> +    ioreq_request_mapcache_invalidate(d);
> +
>      ret = SUPERPAGE_PAGES;
>  
>  out_reset:
> @@ -997,6 +1002,8 @@ p2m_pod_zero_check(struct p2m_domain *p2
>              /* Add to cache, and account for the new p2m PoD entry */
>              p2m_pod_cache_add(p2m, mfn_to_page(mfns[i]), PAGE_ORDER_4K);
>              p2m->pod.entry_count++;
> +
> +            ioreq_request_mapcache_invalidate(d);
>          }
>      }
>  
> @@ -1315,6 +1322,8 @@ guest_physmap_mark_populate_on_demand(st
>          p2m->pod.entry_count -= pod_count;
>          BUG_ON(p2m->pod.entry_count < 0);
>          pod_unlock(p2m);
> +
> +        ioreq_request_mapcache_invalidate(d);
>      }
>  
>  out:
> --- a/xen/common/ioreq.c
> +++ b/xen/common/ioreq.c
> @@ -35,6 +35,17 @@
>  #include <public/hvm/ioreq.h>
>  #include <public/hvm/params.h>
>  
> +void ioreq_request_mapcache_invalidate(const struct domain *d)
> +{
> +    struct vcpu *v = current;
> +
> +    if ( d == v->domain )
> +        v->mapcache_invalidate = true;
> +    else if ( d->creation_finished )
> +        for_each_vcpu ( d, v )
> +            v->mapcache_invalidate = true;
> +}
> +
>  /* Ask ioemu mapcache to invalidate mappings. */
>  void ioreq_signal_mapcache_invalidate(void)
>  {
> @@ -206,6 +217,7 @@ bool vcpu_ioreq_handle_completion(struct
>      struct ioreq_server *s;
>      struct ioreq_vcpu *sv;
>      enum vio_completion completion;
> +    bool res = true;
>  
>      if ( has_vpci(d) && vpci_process_pending(v) )
>      {
> @@ -232,17 +244,27 @@ bool vcpu_ioreq_handle_completion(struct
>          break;
>  
>      case VIO_mmio_completion:
> -        return arch_ioreq_complete_mmio();
> +        res = arch_ioreq_complete_mmio();
> +        break;
>  
>      case VIO_pio_completion:
> -        return handle_pio(vio->req.addr, vio->req.size,
> -                          vio->req.dir);
> +        res = handle_pio(vio->req.addr, vio->req.size,
> +                         vio->req.dir);
> +        break;
>  
>      default:
> -        return arch_vcpu_ioreq_completion(completion);
> +        res = arch_vcpu_ioreq_completion(completion);
> +        break;
>      }
>  
> -    return true;
> +    if ( res && unlikely(v->mapcache_invalidate) )
> +    {
> +        v->mapcache_invalidate = false;
> +        ioreq_signal_mapcache_invalidate();
> +        res = false;
> +    }
> +
> +    return res;
>  }
>  
>  static int ioreq_server_alloc_mfn(struct ioreq_server *s, bool buf)
> --- a/xen/include/xen/ioreq.h
> +++ b/xen/include/xen/ioreq.h
> @@ -103,6 +103,7 @@ struct ioreq_server *ioreq_server_select
>  int ioreq_send(struct ioreq_server *s, ioreq_t *proto_p,
>                 bool buffered);
>  unsigned int ioreq_broadcast(ioreq_t *p, bool buffered);
> +void ioreq_request_mapcache_invalidate(const struct domain *d);
>  void ioreq_signal_mapcache_invalidate(void);
>  
>  void ioreq_domain_init(struct domain *d);
> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -225,6 +225,14 @@ struct vcpu
>      bool             hcall_compat;
>  #endif
>  
> +#ifdef CONFIG_IOREQ_SERVER
> +    /*
> +     * Indicates that mapcache invalidation request should be sent to
> +     * the device emulator.
> +     */
> +    bool             mapcache_invalidate;
> +#endif
> +
>      /* The CPU, if any, which is holding onto this VCPU's state. */
>  #define VCPU_CPU_CLEAN (~0u)
>      unsigned int     dirty_cpu;
> @@ -444,11 +452,6 @@ struct domain
>       * unpaused for the first time by the systemcontroller.
>       */
>      bool             creation_finished;
> -    /*
> -     * Indicates that mapcache invalidation request should be sent to
> -     * the device emulator.
> -     */
> -    bool             mapcache_invalidate;
>  
>      /* Which guest this guest has privileges on */
>      struct domain   *target;
> 



From xen-devel-bounces@lists.xenproject.org Wed Feb 17 10:05:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 10:05:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86197.161581 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCJiC-0004si-QR; Wed, 17 Feb 2021 10:05:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86197.161581; Wed, 17 Feb 2021 10:05:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCJiC-0004sb-NW; Wed, 17 Feb 2021 10:05:36 +0000
Received: by outflank-mailman (input) for mailman id 86197;
 Wed, 17 Feb 2021 10:05:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=am+m=HT=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1lCJiA-0004sW-OZ
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 10:05:34 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 8e77610b-4e3b-4f2d-9232-281804662c2e;
 Wed, 17 Feb 2021 10:05:32 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 2394B1042;
 Wed, 17 Feb 2021 02:05:32 -0800 (PST)
Received: from scm-wfh-server-rahsin01.stack04.eu02.mi.arm.com (unknown
 [10.58.246.76])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 57A783F73B;
 Wed, 17 Feb 2021 02:05:31 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8e77610b-4e3b-4f2d-9232-281804662c2e
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Bertrand Marquis <bertrand.marquis@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH] xen/arm : smmuv3: Fix to handle multiple StreamIds per device.
Date: Wed, 17 Feb 2021 10:05:14 +0000
Message-Id: <43de5b58df37d8b8de037cb23c47ab8454caf37c.1613492577.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.17.1

The SMMUv3 driver does not handle multiple StreamIDs if the master device
supports more than one StreamID.

This bug was introduced when the driver was ported from Linux to Xen.
dt_device_set_protected(..) should be called from add_device(..), not
from dt_xlate(..).

Move dt_device_set_protected(..) from dt_xlate(..) to add_device().

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
---
This patch is a candidate for 4.15, as without it it is not possible to
assign multiple StreamIDs to the same device when the device is protected
behind an SMMUv3.
---
 xen/drivers/passthrough/arm/smmu-v3.c | 29 ++++++++++-----------------
 1 file changed, 11 insertions(+), 18 deletions(-)

diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
index 914cdc1cf4..53d150cdb6 100644
--- a/xen/drivers/passthrough/arm/smmu-v3.c
+++ b/xen/drivers/passthrough/arm/smmu-v3.c
@@ -2207,24 +2207,6 @@ static int arm_smmu_add_device(u8 devfn, struct device *dev)
 	 */
 	arm_smmu_enable_pasid(master);
 
-	return 0;
-
-err_free_master:
-	xfree(master);
-	dev_iommu_priv_set(dev, NULL);
-	return ret;
-}
-
-static int arm_smmu_dt_xlate(struct device *dev,
-				const struct dt_phandle_args *args)
-{
-	int ret;
-	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
-
-	ret = iommu_fwspec_add_ids(dev, args->args, 1);
-	if (ret)
-		return ret;
-
 	if (dt_device_is_protected(dev_to_dt(dev))) {
 		dev_err(dev, "Already added to SMMUv3\n");
 		return -EEXIST;
@@ -2237,6 +2219,17 @@ static int arm_smmu_dt_xlate(struct device *dev,
 			dev_name(fwspec->iommu_dev), fwspec->num_ids);
 
 	return 0;
+
+err_free_master:
+	xfree(master);
+	dev_iommu_priv_set(dev, NULL);
+	return ret;
+}
+
+static int arm_smmu_dt_xlate(struct device *dev,
+				const struct dt_phandle_args *args)
+{
+	return iommu_fwspec_add_ids(dev, args->args, 1);
 }
 
 /* Probing and initialisation functions */
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Feb 17 10:43:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 10:43:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86199.161593 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCKIC-0008UL-OX; Wed, 17 Feb 2021 10:42:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86199.161593; Wed, 17 Feb 2021 10:42:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCKIC-0008UE-Lg; Wed, 17 Feb 2021 10:42:48 +0000
Received: by outflank-mailman (input) for mailman id 86199;
 Wed, 17 Feb 2021 10:42:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XFXw=HT=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lCKIB-0008U9-Hy
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 10:42:47 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c19b84b1-a893-48f8-84af-fcd536900b0f;
 Wed, 17 Feb 2021 10:42:46 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7E84DB154;
 Wed, 17 Feb 2021 10:42:45 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c19b84b1-a893-48f8-84af-fcd536900b0f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613558565; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=JptSK0y1QceLM0NpaVhOXeL+/6/LV1pc12AWn8xXNy0=;
	b=ZFQ4Q64Tt7EQ1Lx3hjCDtSX7YzoK1/23jigqMw3IGuo2nz2+cCoNUXxQK0CI33+LQD40tO
	IUdg1h4Oj2OERY87hmGDDkg5yiWAW/tfOoVT+/uYy0JS2V6s7gDa0DsSJjHn0BUvw71Xi/
	b2CtQx1tP1zkSbgydpsFL/EXW1HxAlA=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH 0/3] gnttab: misc fixes
Message-ID: <156559d5-853a-5bb9-942b-f623627e0907@suse.com>
Date: Wed, 17 Feb 2021 11:42:44 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Patches 1 and 2 clearly are intended for 4.15; patch 3 is possibly
controversial (along the lines of similar relaxation proposed for
other (sub-)hypercalls), and hence unlikely to be a candidate as
well.

1: never permit mapping transitive grants
2: bypass IOMMU (un)mapping when a domain is (un)mapping its own grant
3: GTF_sub_page is a v2-only flag

Jan


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 10:46:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 10:46:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86201.161606 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCKLM-0000BB-7s; Wed, 17 Feb 2021 10:46:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86201.161606; Wed, 17 Feb 2021 10:46:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCKLM-0000B4-4M; Wed, 17 Feb 2021 10:46:04 +0000
Received: by outflank-mailman (input) for mailman id 86201;
 Wed, 17 Feb 2021 10:46:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XFXw=HT=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lCKLK-0000Az-Gm
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 10:46:02 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 427a70d8-bde4-4401-b4b1-beb4744f5412;
 Wed, 17 Feb 2021 10:46:01 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C2AE5B1AB;
 Wed, 17 Feb 2021 10:46:00 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 427a70d8-bde4-4401-b4b1-beb4744f5412
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613558760; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=EkWeEv6ILJ0/X/gZ638Qi3CNdeAfWvOX5jOISvz4cYE=;
	b=MTK3OG8cAjod+VeIaR/YSVwwkk9abgMZFwLVj7zB9qAvvFVyFeFi5dKkwx3zPNWIgzrjvG
	Bb85kiMDcrhMKE/1h+h6IatAZUDUFJE2qANorzarHHrV0Nom6IdDdKCiA6hesNmePN2EvF
	WgV7Ux7SR21+djg1OQiqOm+Cac3V0dU=
Subject: [PATCH 1/3] gnttab: never permit mapping transitive grants
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
References: <156559d5-853a-5bb9-942b-f623627e0907@suse.com>
Message-ID: <3620b977-4182-db2c-e2f9-71e1c6c4e721@suse.com>
Date: Wed, 17 Feb 2021 11:46:00 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <156559d5-853a-5bb9-942b-f623627e0907@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Transitive grants allow an intermediate domain I to grant a target
domain T access to a page which origin domain O granted I access to.
As an implementation restriction, T is not allowed to map such a grant.
This restriction is currently meant to be enforced by marking active
entries resulting from transitive grants as is-sub-page; sub-page
grants, for obvious reasons, don't allow mapping. However, marking (and
checking) only active entries is insufficient, as a map attempt may also
occur on a grant not otherwise in use. When the grant is not presently
in use (pin count zero), the grant type itself needs checking. Otherwise
T may be able to map an unrelated page owned by I: the "transitive"
sub-structure of the v2 union would end up being interpreted as the
"full_page" sub-structure instead. The low 32 bits of the GFN used would
match the grant reference specified in I's transitive grant entry, while
the upper 32 bits could be random (depending on how exactly I sets up
its grant table entries).

Note that if one mapping already exists and the granting domain _then_
changes the grant to GTF_transitive (which the domain is not supposed to
do), the changed type will only be honored after the pin count has gone
back to zero. This is no different from e.g. GTF_readonly or
GTF_sub_page becoming set when a grant is already in use.

While adjusting the implementation, also adjust commentary in the public
header to better reflect reality.

Fixes: 3672ce675c93 ("Transitive grant support")
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -851,9 +851,10 @@ static int _set_status_v2(const grant_en
         mask |= GTF_sub_page;
 
     /* If not already pinned, check the grant domid and type. */
-    if ( !act->pin && ((((scombo.flags & mask) != GTF_permit_access) &&
-                        ((scombo.flags & mask) != GTF_transitive)) ||
-                       (scombo.domid != ldomid)) )
+    if ( !act->pin &&
+         ((((scombo.flags & mask) != GTF_permit_access) &&
+           (mapflag || ((scombo.flags & mask) != GTF_transitive))) ||
+          (scombo.domid != ldomid)) )
         PIN_FAIL(done, GNTST_general_error,
                  "Bad flags (%x) or dom (%d); expected d%d, flags %x\n",
                  scombo.flags, scombo.domid, ldomid, mask);
@@ -879,7 +880,7 @@ static int _set_status_v2(const grant_en
     if ( !act->pin )
     {
         if ( (((scombo.flags & mask) != GTF_permit_access) &&
-              ((scombo.flags & mask) != GTF_transitive)) ||
+              (mapflag || ((scombo.flags & mask) != GTF_transitive))) ||
              (scombo.domid != ldomid) ||
              (!readonly && (scombo.flags & GTF_readonly)) )
         {
--- a/xen/include/public/grant_table.h
+++ b/xen/include/public/grant_table.h
@@ -166,11 +166,13 @@ typedef struct grant_entry_v1 grant_entr
 #define GTF_type_mask       (3U<<0)
 
 /*
- * Subflags for GTF_permit_access.
+ * Subflags for GTF_permit_access and GTF_transitive.
  *  GTF_readonly: Restrict @domid to read-only mappings and accesses. [GST]
  *  GTF_reading: Grant entry is currently mapped for reading by @domid. [XEN]
  *  GTF_writing: Grant entry is currently mapped for writing by @domid. [XEN]
- *  GTF_PAT, GTF_PWT, GTF_PCD: (x86) cache attribute flags for the grant [GST]
+ * Further subflags for GTF_permit_access only.
+ *  GTF_PAT, GTF_PWT, GTF_PCD: (x86) cache attribute flags to be used for
+ *                             mappings of the grant [GST]
  *  GTF_sub_page: Grant access to only a subrange of the page.  @domid
  *                will only be allowed to copy from the grant, and not
  *                map it. [GST]



From xen-devel-bounces@lists.xenproject.org Wed Feb 17 10:46:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 10:46:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86204.161618 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCKLp-0000Is-GX; Wed, 17 Feb 2021 10:46:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86204.161618; Wed, 17 Feb 2021 10:46:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCKLp-0000Il-DX; Wed, 17 Feb 2021 10:46:33 +0000
Received: by outflank-mailman (input) for mailman id 86204;
 Wed, 17 Feb 2021 10:46:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XFXw=HT=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lCKLn-0000Ie-UB
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 10:46:31 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9d84df09-bc85-4edf-a52d-801e5272fe32;
 Wed, 17 Feb 2021 10:46:31 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 45E22B1AB;
 Wed, 17 Feb 2021 10:46:30 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9d84df09-bc85-4edf-a52d-801e5272fe32
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613558790; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=VudHzvHkbrWxXAQT3YkDMWmcJ5pINo+1wlM1FgK45WU=;
	b=XFf2PzRUoaBtA3RmlfcRvSi0PQbmgi3lL7ZdHaV/xzen1zkf8LJh+ir+L4pcwjyNfisEBj
	hy+zaBbbBt+ZIxBst9hdcFFbRZgFnxiKYJ5Zu0eBueBlFGNY8jd0j6/+OdRAz++VBMi42u
	JjRzZKA+e8B4zFzVjp6hcenZyWZ+rkk=
Subject: [PATCH 2/3] gnttab: bypass IOMMU (un)mapping when a domain is
 (un)mapping its own grant
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Rahul Singh <Rahul.Singh@arm.com>
References: <156559d5-853a-5bb9-942b-f623627e0907@suse.com>
Message-ID: <5bb4fba7-a10b-90c4-82f7-8cde6e8cacfb@suse.com>
Date: Wed, 17 Feb 2021 11:46:29 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <156559d5-853a-5bb9-942b-f623627e0907@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Mappings for a domain's own pages should already be present in the
IOMMU. While installing the same mapping again is merely redundant (and
inefficient), removing the mapping when the grant mapping gets removed
is outright wrong in this case: the mapping was there before the map,
so it should remain in place after the unmap.

This affects
- Arm Dom0 in the direct mapped case,
- x86 PV Dom0 in the "iommu=dom0-strict" / "dom0-iommu=strict" cases,
- all x86 PV DomU-s, including driver domains.

Reported-by: Rahul Singh <Rahul.Singh@arm.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -1243,7 +1243,7 @@ map_grant_ref(
         goto undo_out;
     }
 
-    need_iommu = gnttab_need_iommu_mapping(ld);
+    need_iommu = ld != rd && gnttab_need_iommu_mapping(ld);
     if ( need_iommu )
     {
         unsigned int kind;
@@ -1493,7 +1493,7 @@ unmap_common(
     if ( put_handle )
         put_maptrack_handle(lgt, op->handle);
 
-    if ( rc == GNTST_okay && gnttab_need_iommu_mapping(ld) )
+    if ( rc == GNTST_okay && ld != rd && gnttab_need_iommu_mapping(ld) )
     {
         unsigned int kind;
         int err = 0;



From xen-devel-bounces@lists.xenproject.org Wed Feb 17 10:47:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 10:47:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86206.161629 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCKMF-0000PW-TK; Wed, 17 Feb 2021 10:46:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86206.161629; Wed, 17 Feb 2021 10:46:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCKMF-0000PP-QJ; Wed, 17 Feb 2021 10:46:59 +0000
Received: by outflank-mailman (input) for mailman id 86206;
 Wed, 17 Feb 2021 10:46:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XFXw=HT=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lCKMD-0000PB-Ua
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 10:46:57 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f951a317-8485-4d2b-a593-4ce277995d9c;
 Wed, 17 Feb 2021 10:46:56 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4855CB1AD;
 Wed, 17 Feb 2021 10:46:55 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f951a317-8485-4d2b-a593-4ce277995d9c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613558815; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=IJzykk1fycKpMnuMtQ/GtfxP7KxLImeCXhtduVS0e7A=;
	b=pO4jeeQnzTStydoed62HEhA5D0tBkXOa4xYfhDPT9njPu/0Wn9V3MNtsC9KAp7agIBJT4t
	fVQpZLxB//2DS5cmvDXM4dRg1nPkQbE1HMU2PYlaVGiOfiiGM9cX0vq22dZM1UoZURHRwO
	Yiuym74dE3n/Hy9lpKEG/OQOpBpUXmo=
Subject: [PATCH 3/3] gnttab: GTF_sub_page is a v2-only flag
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
References: <156559d5-853a-5bb9-942b-f623627e0907@suse.com>
Message-ID: <2bf46266-785d-0de3-5f61-48c3fd191a5c@suse.com>
Date: Wed, 17 Feb 2021 11:46:54 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <156559d5-853a-5bb9-942b-f623627e0907@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Prior to its introduction, v1 entries weren't checked for this flag,
and the flag has always been meaningless for v1 entries. Therefore it
shouldn't be checked for them either. (The only consistent alternative
would be to also check that all currently undefined flags are clear.)

Fixes: b545941b6638 ("Implement sub-page grant support")
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -762,13 +762,11 @@ static int _set_status_v1(const grant_en
                           struct domain *rd,
                           struct active_grant_entry *act,
                           int readonly,
-                          int mapflag,
-                          domid_t  ldomid)
+                          domid_t ldomid)
 {
     int rc = GNTST_okay;
     uint32_t *raw_shah = (uint32_t *)shah;
     union grant_combo scombo;
-    uint16_t mask = GTF_type_mask;
 
     /*
      * We bound the number of times we retry CMPXCHG on memory locations that
@@ -780,11 +778,6 @@ static int _set_status_v1(const grant_en
      */
     int retries = 0;
 
-    /* if this is a grant mapping operation we should ensure GTF_sub_page
-       is not set */
-    if ( mapflag )
-        mask |= GTF_sub_page;
-
     scombo.raw = ACCESS_ONCE(*raw_shah);
 
     /*
@@ -798,8 +791,9 @@ static int _set_status_v1(const grant_en
         union grant_combo prev, new;
 
         /* If not already pinned, check the grant domid and type. */
-        if ( !act->pin && (((scombo.flags & mask) != GTF_permit_access) ||
-                           (scombo.domid != ldomid)) )
+        if ( !act->pin &&
+             (((scombo.flags & GTF_type_mask) != GTF_permit_access) ||
+              (scombo.domid != ldomid)) )
             PIN_FAIL(done, GNTST_general_error,
                      "Bad flags (%x) or dom (%d); expected d%d\n",
                      scombo.flags, scombo.domid, ldomid);
@@ -916,7 +910,7 @@ static int _set_status(const grant_entry
 {
 
     if ( evaluate_nospec(rgt_version == 1) )
-        return _set_status_v1(shah, rd, act, readonly, mapflag, ldomid);
+        return _set_status_v1(shah, rd, act, readonly, ldomid);
     else
         return _set_status_v2(shah, status, rd, act, readonly, mapflag, ldomid);
 }
--- a/xen/include/public/grant_table.h
+++ b/xen/include/public/grant_table.h
@@ -175,7 +175,7 @@ typedef struct grant_entry_v1 grant_entr
  *                             mappings of the grant [GST]
  *  GTF_sub_page: Grant access to only a subrange of the page.  @domid
  *                will only be allowed to copy from the grant, and not
- *                map it. [GST]
+ *                map it. [GST, v2]
  */
 #define _GTF_readonly       (2)
 #define GTF_readonly        (1U<<_GTF_readonly)



From xen-devel-bounces@lists.xenproject.org Wed Feb 17 10:58:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 10:58:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86209.161642 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCKXJ-0001SA-05; Wed, 17 Feb 2021 10:58:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86209.161642; Wed, 17 Feb 2021 10:58:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCKXI-0001S3-SP; Wed, 17 Feb 2021 10:58:24 +0000
Received: by outflank-mailman (input) for mailman id 86209;
 Wed, 17 Feb 2021 10:58:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lCKXI-0001Ry-4G
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 10:58:24 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lCKXH-0001AD-Un; Wed, 17 Feb 2021 10:58:23 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lCKXH-00069I-He; Wed, 17 Feb 2021 10:58:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=jwE2jZ2S8KglN7IeYMtqSO/MS+kCOzL7n8HsK39R9SU=; b=JNCMnsO5tq0HkOH9wh/hPCC+Qb
	4raWFFQk7hP6sJ7lixxsM6CHfW/bdtlRRR9axblUvl8Kj/qj/ZiHAJaKOIRP60LZOcCIPqT0+PiXV
	IsDi2+hZUTHODHmsx3x5671fteHr+3jxWXehNCfmZoUp5ahGZnXX5FJCapSLLFVZ9dYY=;
Subject: Re: [PATCH v2 2/2] IOREQ: refine when to send mapcache invalidation
 request
To: Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Paul Durrant <paul@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <0e7265fe-8d89-facb-790d-9232c742c3fa@suse.com>
 <e2682f84-b3bb-b9fb-edd8-863b9036de95@suse.com>
 <454ffaeb-f0d5-a2fe-420b-f28e51d9aabf@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <0bd56ccf-da71-84f4-4a24-dd3e590c3e1c@xen.org>
Date: Wed, 17 Feb 2021 10:58:21 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <454ffaeb-f0d5-a2fe-420b-f28e51d9aabf@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 17/02/2021 10:02, Jan Beulich wrote:
> On 02.02.2021 16:14, Jan Beulich wrote:
>> XENMEM_decrease_reservation isn't the only means by which pages can get
>> removed from a guest, yet all removals ought to be signaled to qemu. Put
>> setting of the flag into the central p2m_remove_page() underlying all
>> respective hypercalls as well as a few similar places, mainly in PoD
>> code.
>>
>> Additionally there's no point sending the request for the local domain
>> when the domain acted upon is a different one. The latter domain's ioreq
>> server mapcaches need invalidating. We assume that domain to be paused
>> at the point the operation takes place, so sending the request in this
>> case happens from the hvm_do_resume() path, which as one of its first
>> steps calls handle_hvm_io_completion().
>>
>> Even without the remote operation aspect a single domain-wide flag
>> doesn't do: Guests may e.g. decrease-reservation on multiple vCPU-s in
>> parallel. Each of them needs to issue an invalidation request in due
>> course, in particular because exiting to guest context should not happen
>> before the request was actually seen by (all) the emulator(s).
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> v2: Preemption related adjustment split off. Make flag per-vCPU. More
>>      places to set the flag. Also handle acting on a remote domain.
>>      Re-base.
> 
> Can I get an Arm side ack (or otherwise) please?

This looks good to me.

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,


> 
> Thanks, Jan
> 
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -759,10 +759,9 @@ static void p2m_free_entry(struct p2m_do
>>            * has failed (error case).
>>            * So, at worst, the spurious mapcache invalidation might be sent.
>>            */
>> -        if ( (p2m->domain == current->domain) &&
>> -              domain_has_ioreq_server(p2m->domain) &&
>> -              p2m_is_ram(entry.p2m.type) )
>> -            p2m->domain->mapcache_invalidate = true;
>> +        if ( p2m_is_ram(entry.p2m.type) &&
>> +             domain_has_ioreq_server(p2m->domain) )
>> +            ioreq_request_mapcache_invalidate(p2m->domain);
>>   #endif
>>   
>>           p2m->stats.mappings[level]--;
>> --- a/xen/arch/arm/traps.c
>> +++ b/xen/arch/arm/traps.c
>> @@ -1509,8 +1509,8 @@ static void do_trap_hypercall(struct cpu
>>        * Note that sending the invalidation request causes the vCPU to block
>>        * until all the IOREQ servers have acknowledged the invalidation.
>>        */
>> -    if ( unlikely(curr->domain->mapcache_invalidate) &&
>> -         test_and_clear_bool(curr->domain->mapcache_invalidate) )
>> +    if ( unlikely(curr->mapcache_invalidate) &&
>> +         test_and_clear_bool(curr->mapcache_invalidate) )
>>           ioreq_signal_mapcache_invalidate();
>>   #endif
>>   }
>> --- a/xen/arch/x86/hvm/hypercall.c
>> +++ b/xen/arch/x86/hvm/hypercall.c
>> @@ -32,7 +32,6 @@
>>   
>>   static long hvm_memory_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>>   {
>> -    const struct vcpu *curr = current;
>>       long rc;
>>   
>>       switch ( cmd & MEMOP_CMD_MASK )
>> @@ -42,14 +41,11 @@ static long hvm_memory_op(int cmd, XEN_G
>>           return -ENOSYS;
>>       }
>>   
>> -    if ( !curr->hcall_compat )
>> +    if ( !current->hcall_compat )
>>           rc = do_memory_op(cmd, arg);
>>       else
>>           rc = compat_memory_op(cmd, arg);
>>   
>> -    if ( (cmd & MEMOP_CMD_MASK) == XENMEM_decrease_reservation )
>> -        curr->domain->mapcache_invalidate = true;
>> -
>>       return rc;
>>   }
>>   
>> @@ -327,9 +323,11 @@ int hvm_hypercall(struct cpu_user_regs *
>>   
>>       HVM_DBG_LOG(DBG_LEVEL_HCALL, "hcall%lu -> %lx", eax, regs->rax);
>>   
>> -    if ( unlikely(currd->mapcache_invalidate) &&
>> -         test_and_clear_bool(currd->mapcache_invalidate) )
>> +    if ( unlikely(curr->mapcache_invalidate) )
>> +    {
>> +        curr->mapcache_invalidate = false;
>>           ioreq_signal_mapcache_invalidate();
>> +    }
>>   
>>       return curr->hcall_preempted ? HVM_HCALL_preempted : HVM_HCALL_completed;
>>   }
>> --- a/xen/arch/x86/mm/p2m.c
>> +++ b/xen/arch/x86/mm/p2m.c
>> @@ -28,6 +28,7 @@
>>   #include <xen/vm_event.h>
>>   #include <xen/event.h>
>>   #include <xen/grant_table.h>
>> +#include <xen/ioreq.h>
>>   #include <xen/param.h>
>>   #include <public/vm_event.h>
>>   #include <asm/domain.h>
>> @@ -815,6 +816,8 @@ p2m_remove_page(struct p2m_domain *p2m,
>>           }
>>       }
>>   
>> +    ioreq_request_mapcache_invalidate(p2m->domain);
>> +
>>       return p2m_set_entry(p2m, gfn, INVALID_MFN, page_order, p2m_invalid,
>>                            p2m->default_access);
>>   }
>> @@ -1301,6 +1304,8 @@ static int set_typed_p2m_entry(struct do
>>               ASSERT(mfn_valid(mfn_add(omfn, i)));
>>               set_gpfn_from_mfn(mfn_x(omfn) + i, INVALID_M2P_ENTRY);
>>           }
>> +
>> +        ioreq_request_mapcache_invalidate(d);
>>       }
>>   
>>       P2M_DEBUG("set %d %lx %lx\n", gfn_p2mt, gfn_l, mfn_x(mfn));
>> --- a/xen/arch/x86/mm/p2m-pod.c
>> +++ b/xen/arch/x86/mm/p2m-pod.c
>> @@ -20,6 +20,7 @@
>>    */
>>   
>>   #include <xen/event.h>
>> +#include <xen/ioreq.h>
>>   #include <xen/mm.h>
>>   #include <xen/sched.h>
>>   #include <xen/trace.h>
>> @@ -647,6 +648,8 @@ p2m_pod_decrease_reservation(struct doma
>>                   set_gpfn_from_mfn(mfn_x(mfn), INVALID_M2P_ENTRY);
>>               p2m_pod_cache_add(p2m, page, cur_order);
>>   
>> +            ioreq_request_mapcache_invalidate(d);
>> +
>>               steal_for_cache =  ( p2m->pod.entry_count > p2m->pod.count );
>>   
>>               ram -= n;
>> @@ -835,6 +838,8 @@ p2m_pod_zero_check_superpage(struct p2m_
>>       p2m_pod_cache_add(p2m, mfn_to_page(mfn0), PAGE_ORDER_2M);
>>       p2m->pod.entry_count += SUPERPAGE_PAGES;
>>   
>> +    ioreq_request_mapcache_invalidate(d);
>> +
>>       ret = SUPERPAGE_PAGES;
>>   
>>   out_reset:
>> @@ -997,6 +1002,8 @@ p2m_pod_zero_check(struct p2m_domain *p2
>>               /* Add to cache, and account for the new p2m PoD entry */
>>               p2m_pod_cache_add(p2m, mfn_to_page(mfns[i]), PAGE_ORDER_4K);
>>               p2m->pod.entry_count++;
>> +
>> +            ioreq_request_mapcache_invalidate(d);
>>           }
>>       }
>>   
>> @@ -1315,6 +1322,8 @@ guest_physmap_mark_populate_on_demand(st
>>           p2m->pod.entry_count -= pod_count;
>>           BUG_ON(p2m->pod.entry_count < 0);
>>           pod_unlock(p2m);
>> +
>> +        ioreq_request_mapcache_invalidate(d);
>>       }
>>   
>>   out:
>> --- a/xen/common/ioreq.c
>> +++ b/xen/common/ioreq.c
>> @@ -35,6 +35,17 @@
>>   #include <public/hvm/ioreq.h>
>>   #include <public/hvm/params.h>
>>   
>> +void ioreq_request_mapcache_invalidate(const struct domain *d)
>> +{
>> +    struct vcpu *v = current;
>> +
>> +    if ( d == v->domain )
>> +        v->mapcache_invalidate = true;
>> +    else if ( d->creation_finished )
>> +        for_each_vcpu ( d, v )
>> +            v->mapcache_invalidate = true;
>> +}
>> +
>>   /* Ask ioemu mapcache to invalidate mappings. */
>>   void ioreq_signal_mapcache_invalidate(void)
>>   {
>> @@ -206,6 +217,7 @@ bool vcpu_ioreq_handle_completion(struct
>>       struct ioreq_server *s;
>>       struct ioreq_vcpu *sv;
>>       enum vio_completion completion;
>> +    bool res = true;
>>   
>>       if ( has_vpci(d) && vpci_process_pending(v) )
>>       {
>> @@ -232,17 +244,27 @@ bool vcpu_ioreq_handle_completion(struct
>>           break;
>>   
>>       case VIO_mmio_completion:
>> -        return arch_ioreq_complete_mmio();
>> +        res = arch_ioreq_complete_mmio();
>> +        break;
>>   
>>       case VIO_pio_completion:
>> -        return handle_pio(vio->req.addr, vio->req.size,
>> -                          vio->req.dir);
>> +        res = handle_pio(vio->req.addr, vio->req.size,
>> +                         vio->req.dir);
>> +        break;
>>   
>>       default:
>> -        return arch_vcpu_ioreq_completion(completion);
>> +        res = arch_vcpu_ioreq_completion(completion);
>> +        break;
>>       }
>>   
>> -    return true;
>> +    if ( res && unlikely(v->mapcache_invalidate) )
>> +    {
>> +        v->mapcache_invalidate = false;
>> +        ioreq_signal_mapcache_invalidate();
>> +        res = false;
>> +    }
>> +
>> +    return res;
>>   }
>>   
>>   static int ioreq_server_alloc_mfn(struct ioreq_server *s, bool buf)
>> --- a/xen/include/xen/ioreq.h
>> +++ b/xen/include/xen/ioreq.h
>> @@ -103,6 +103,7 @@ struct ioreq_server *ioreq_server_select
>>   int ioreq_send(struct ioreq_server *s, ioreq_t *proto_p,
>>                  bool buffered);
>>   unsigned int ioreq_broadcast(ioreq_t *p, bool buffered);
>> +void ioreq_request_mapcache_invalidate(const struct domain *d);
>>   void ioreq_signal_mapcache_invalidate(void);
>>   
>>   void ioreq_domain_init(struct domain *d);
>> --- a/xen/include/xen/sched.h
>> +++ b/xen/include/xen/sched.h
>> @@ -225,6 +225,14 @@ struct vcpu
>>       bool             hcall_compat;
>>   #endif
>>   
>> +#ifdef CONFIG_IOREQ_SERVER
>> +    /*
>> +     * Indicates that mapcache invalidation request should be sent to
>> +     * the device emulator.
>> +     */
>> +    bool             mapcache_invalidate;
>> +#endif
>> +
>>       /* The CPU, if any, which is holding onto this VCPU's state. */
>>   #define VCPU_CPU_CLEAN (~0u)
>>       unsigned int     dirty_cpu;
>> @@ -444,11 +452,6 @@ struct domain
>>        * unpaused for the first time by the systemcontroller.
>>        */
>>       bool             creation_finished;
>> -    /*
>> -     * Indicates that mapcache invalidation request should be sent to
>> -     * the device emulator.
>> -     */
>> -    bool             mapcache_invalidate;
>>   
>>       /* Which guest this guest has privileges on */
>>       struct domain   *target;
>>
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 11:03:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 11:03:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86211.161654 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCKcI-0002Pg-K2; Wed, 17 Feb 2021 11:03:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86211.161654; Wed, 17 Feb 2021 11:03:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCKcI-0002PZ-Gc; Wed, 17 Feb 2021 11:03:34 +0000
Received: by outflank-mailman (input) for mailman id 86211;
 Wed, 17 Feb 2021 11:03:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lCKcH-0002PU-HE
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 11:03:33 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lCKcE-0001Hy-He; Wed, 17 Feb 2021 11:03:30 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lCKcE-0006av-8e; Wed, 17 Feb 2021 11:03:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=7ALOija2AEB9dftlJYgOTQ5dqXwH3+OBL7GDBY1YNtg=; b=SxveSZHQHoy4gyVOHUpmi1YIJ0
	lOg8KuBAaht8/5MQGNPsJbjZkYDhqfVkF7A5YuVn/T4OtkGZ3hINOcIpiKJnnyYEp1F07rw9qfjMv
	DsGILgV6+Pu2H5HL7qfcCIrxK9fhBR/poiSmsuQyIVP7VUozkLOQ/Kb3lRkD3qfMav3E=;
Subject: Re: [PATCH 2/3] gnttab: bypass IOMMU (un)mapping when a domain is
 (un)mapping its own grant
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Rahul Singh <Rahul.Singh@arm.com>
References: <156559d5-853a-5bb9-942b-f623627e0907@suse.com>
 <5bb4fba7-a10b-90c4-82f7-8cde6e8cacfb@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <a267959c-538e-0f90-f9a5-72e836f33cb4@xen.org>
Date: Wed, 17 Feb 2021 11:03:28 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <5bb4fba7-a10b-90c4-82f7-8cde6e8cacfb@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 17/02/2021 10:46, Jan Beulich wrote:
> Mappings for a domain's own pages should already be present in the
> IOMMU. While installing the same mapping again is merely redundant (and
> inefficient), removing the mapping when the grant mapping gets removed
> is outright wrong in this case: The mapping was there before the map, so
> should remain in place after unmapping.
> 
> This affects
> - Arm Dom0 in the direct mapped case,
> - x86 PV Dom0 in the "iommu=dom0-strict" / "dom0-iommu=strict" cases,
> - all x86 PV DomU-s, including driver domains.
> 
> Reported-by: Rahul Singh <Rahul.Singh@arm.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/xen/common/grant_table.c
> +++ b/xen/common/grant_table.c
> @@ -1243,7 +1243,7 @@ map_grant_ref(
>           goto undo_out;
>       }
>   
> -    need_iommu = gnttab_need_iommu_mapping(ld);
> +    need_iommu = ld != rd && gnttab_need_iommu_mapping(ld);

AFAICT, the owner of the page may not always be rd. So do we want to 
check against the owner instead?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 11:21:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 11:21:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86213.161666 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCKtc-0004IQ-2l; Wed, 17 Feb 2021 11:21:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86213.161666; Wed, 17 Feb 2021 11:21:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCKtb-0004IJ-Vj; Wed, 17 Feb 2021 11:21:27 +0000
Received: by outflank-mailman (input) for mailman id 86213;
 Wed, 17 Feb 2021 11:21:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lCKta-0004IC-4d
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 11:21:26 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lCKtZ-0001YV-48; Wed, 17 Feb 2021 11:21:25 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lCKtY-0007sS-R9; Wed, 17 Feb 2021 11:21:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=Khun2zGuAGcEpvtr5KfIBZmX2CJvevecPfohkAT+DxI=; b=0ukaaQMSOoXI29xQwOUOlA9OTK
	xZwj2BI0i8mflLIgrz15cbvvltIo9laVFYkE4swKwjRyWK5wOlMVJf4j8xX0y/ywZwQr/Yc06xG2/
	kr9ZBBXQH8N3kOqkS1xA2HOePM+U0GZ19rVEtakGKO4K+V5nJ3gnpE7oCQea3XOH1PU0=;
Subject: Re: [for-4.15][PATCH v2 1/5] xen/x86: p2m: Don't map the special
 pages in the IOMMU page-tables
To: Jan Beulich <jbeulich@suse.com>
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20210209152816.15792-1-julien@xen.org>
 <20210209152816.15792-2-julien@xen.org>
 <d2485d44-180e-499c-d917-80da3486d98e@suse.com>
 <797ac673-9c7b-ff39-1266-94c96bde0f26@xen.org>
 <d394dc9c-d02c-f24a-7414-ec626ac5e82b@suse.com>
 <4ddf0f24-0379-e724-84e1-9b296167e000@xen.org>
 <237bb7b8-038e-4f10-98e8-559edc764f59@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <83fe56bc-42e7-68a0-c2e9-8bef7e412062@xen.org>
Date: Wed, 17 Feb 2021 11:21:22 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <237bb7b8-038e-4f10-98e8-559edc764f59@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 15/02/2021 13:14, Jan Beulich wrote:
> On 15.02.2021 13:54, Julien Grall wrote:
>> On 15/02/2021 12:36, Jan Beulich wrote:
>>> On 15.02.2021 12:38, Julien Grall wrote:
>>>> Given this series needs to go in 4.15 (we would introduce a 0-day
>>>> otherwise), could you clarify whether your patch [1] is intended to
>>>> replace this one in 4.15?
>>>
>>> Yes, that or a cut down variant (simply moving the invocation of
>>> set_mmio_p2m_entry()). The more that there the controversy
>>> continued regarding the adjustment to p2m_get_iommu_flags(). I
>>> did indicate there that I've dropped it for v2.
>>
>> Do you have a link to v2? I would like to try with my series.
> 
> I didn't post it yet, as I didn't consider the v1 discussion
> settled so far. The intermediate version I have at present is
> below.

Thanks! I will drop patch #1 from my series and resend it.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 11:25:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 11:25:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86216.161678 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCKxY-0004Qx-Ke; Wed, 17 Feb 2021 11:25:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86216.161678; Wed, 17 Feb 2021 11:25:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCKxY-0004Qq-HV; Wed, 17 Feb 2021 11:25:32 +0000
Received: by outflank-mailman (input) for mailman id 86216;
 Wed, 17 Feb 2021 11:25:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lCKxW-0004Ql-VX
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 11:25:30 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lCKxU-0001cz-DY; Wed, 17 Feb 2021 11:25:28 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lCKxU-0008G6-5t; Wed, 17 Feb 2021 11:25:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=BjARDfv67VUgnEkyeC9djvBnhG9dcZLvDOnPv6Do+vA=; b=2PBiSKPTL9HefLyoDxwwlrbIah
	sNhgM6KBDxNcgkvH9wqLoHY1BfFGsWNm92/YGRjiDjme4QJraeKbh5eNQ97AR1H63r3Fe7yqY3+7d
	MFERpOwxr0Hg0sG1wZBt7AnZ1V0kfWXC0yO8KkuX+I4p222HsGPv3md8NdvwbHfTSJls=;
Subject: Re: [for-4.15][PATCH v2 2/5] xen/iommu: Check if the IOMMU was
 initialized before tearing down
To: paul@xen.org, xen-devel@lists.xenproject.org
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org,
 'Julien Grall' <jgrall@amazon.com>, 'Jan Beulich' <jbeulich@suse.com>
References: <20210209152816.15792-1-julien@xen.org>
 <20210209152816.15792-3-julien@xen.org>
 <04f401d6ff21$3b167720$b1436560$@xen.org>
From: Julien Grall <julien@xen.org>
Message-ID: <78eb9b66-6676-d7d8-d427-2f345c2673f5@xen.org>
Date: Wed, 17 Feb 2021 11:25:26 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <04f401d6ff21$3b167720$b1436560$@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Paul,

On 09/02/2021 20:22, Paul Durrant wrote:
>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Sent: 09 February 2021 15:28
>> To: xen-devel@lists.xenproject.org
>> Cc: hongyxia@amazon.co.uk; iwj@xenproject.org; Julien Grall <jgrall@amazon.com>; Jan Beulich
>> <jbeulich@suse.com>; Paul Durrant <paul@xen.org>
>> Subject: [for-4.15][PATCH v2 2/5] xen/iommu: Check if the IOMMU was initialized before tearing down
>>
>> From: Julien Grall <jgrall@amazon.com>
>>
>> is_iommu_enabled() will return true even if the IOMMU has not been
>> initialized (e.g. the ops are not set).
>>
>> In the case of an early failure in arch_domain_init(), the function
>> iommu_destroy_domain() will be called even if the IOMMU is not
>> initialized.
>>
>> This will result in dereferencing the ops, which will be NULL, and a
>> host crash.
>>
>> Fix the issue by checking that ops has been set before accessing it.
>>
>> Fixes: 71e617a6b8f6 ("use is_iommu_enabled() where appropriate...")
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> Reviewed-by: Paul Durrant <paul@xen.org>

Thanks! Ian gave his Release-Acked-by so I will commit this patch now.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 11:33:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 11:33:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86221.161693 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCL5R-0005Qu-Fq; Wed, 17 Feb 2021 11:33:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86221.161693; Wed, 17 Feb 2021 11:33:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCL5R-0005Qn-Cx; Wed, 17 Feb 2021 11:33:41 +0000
Received: by outflank-mailman (input) for mailman id 86221;
 Wed, 17 Feb 2021 11:33:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lCL5Q-0005Qi-0U
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 11:33:40 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lCL5K-0001mO-Tl; Wed, 17 Feb 2021 11:33:34 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lCL5K-0000Lw-M6; Wed, 17 Feb 2021 11:33:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=oqaMdQ/nzsMQZ/Hlhzzn1T22q0aaSC/8cEbBXqyCgec=; b=TVPO+SeRi3SsoQLW4XeapTlcRb
	yuVreDMD831uGqs/pguBGUcBJeOyl7K052V3lvn+iM70iuPijqymqL9NoFk4I6w/vTAoJdLDHFZ52
	D+2cE/g3cD7FDTwClxLyvIPXtm6sdwMzZuGacBrLitcTvy8xPAtYa7tqBwkkVNugA5lU=;
Subject: Re: [for-4.15][PATCH v2 0/5] xen/iommu: Collection of bug fixes for
 IOMMU teardown
To: Ian Jackson <iwj@xenproject.org>
Cc: xen-devel@lists.xenproject.org, hongyxia@amazon.co.uk,
 Julien Grall <jgrall@amazon.com>, Jan Beulich <jbeulich@suse.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Paul Durrant <paul@xen.org>, Kevin Tian <kevin.tian@intel.com>
References: <20210209152816.15792-1-julien@xen.org>
 <24610.48309.568020.376765@mariner.uk.xensource.com>
From: Julien Grall <julien@xen.org>
Message-ID: <ff2d2f16-66b6-36a6-fb46-18b96686cf87@xen.org>
Date: Wed, 17 Feb 2021 11:33:32 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <24610.48309.568020.376765@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Ian,

On 09/02/2021 16:47, Ian Jackson wrote:
> Julien Grall writes ("[for-4.15][PATCH v2 0/5] xen/iommu: Collection of bug fixes for IOMMU teardown"):
>> From: Julien Grall <jgrall@amazon.com>
> ...
>> This series is a collection of bug fixes for the IOMMU teardown code.
>> All of them are candidate for 4.15 as they can either leak memory or
>> lead to host crash/host corruption.
>>
>> This is sent directly on xen-devel because all the issues were either
>> introduced in 4.15 or happen in the domain creation code.
> 
> I think by current freeze rules this does not need a release-ack but
> for the avoidance of doubt
> 
> Release-Acked-by: Ian Jackson <iwj@xenproject.org>

Thanks!

> 
> assuming it's committed by the end of the week.

I saw you extended the freeze by a week, so I will assume that I have 
until the end of this week (19th February) to commit it.

Please let me know if I misunderstood the extension.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 11:38:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 11:38:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86225.161706 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCL9p-0005bp-72; Wed, 17 Feb 2021 11:38:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86225.161706; Wed, 17 Feb 2021 11:38:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCL9p-0005bi-3U; Wed, 17 Feb 2021 11:38:13 +0000
Received: by outflank-mailman (input) for mailman id 86225;
 Wed, 17 Feb 2021 11:38:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Nrto=HT=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1lCL9n-0005bd-JT
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 11:38:11 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:fe0c::602])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a2c40c3b-ac2f-4c0f-b5d7-5a287013461d;
 Wed, 17 Feb 2021 11:38:09 +0000 (UTC)
Received: from DB6PR0802CA0036.eurprd08.prod.outlook.com (2603:10a6:4:a3::22)
 by VI1PR08MB3472.eurprd08.prod.outlook.com (2603:10a6:803:80::27)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3846.27; Wed, 17 Feb
 2021 11:38:07 +0000
Received: from DB5EUR03FT033.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:a3:cafe::d4) by DB6PR0802CA0036.outlook.office365.com
 (2603:10a6:4:a3::22) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3868.27 via Frontend
 Transport; Wed, 17 Feb 2021 11:38:07 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT033.mail.protection.outlook.com (10.152.20.76) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3868.27 via Frontend Transport; Wed, 17 Feb 2021 11:38:06 +0000
Received: ("Tessian outbound fb307b4548b2:v71");
 Wed, 17 Feb 2021 11:38:06 +0000
Received: from ddd18b687afb.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 7AF98EF7-189B-4D9F-ABCA-F473FACADEBA.1; 
 Wed, 17 Feb 2021 11:38:01 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id ddd18b687afb.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 17 Feb 2021 11:38:01 +0000
Received: from VE1PR08MB5696.eurprd08.prod.outlook.com (2603:10a6:800:1ae::15)
 by VI1PR0801MB1981.eurprd08.prod.outlook.com (2603:10a6:800:89::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3846.27; Wed, 17 Feb
 2021 11:38:00 +0000
Received: from VE1PR08MB5696.eurprd08.prod.outlook.com
 ([fe80::5c93:6e79:8f1e:a839]) by VE1PR08MB5696.eurprd08.prod.outlook.com
 ([fe80::5c93:6e79:8f1e:a839%6]) with mapi id 15.20.3846.042; Wed, 17 Feb 2021
 11:37:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a2c40c3b-ac2f-4c0f-b5d7-5a287013461d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=RF+AlTdElpX6bajun38j3sEa2fmjFdgbKVXV1GWenE0=;
 b=dd3WrAEhAMRocqaJWwQyB1wgS/YyLLLo+L94HGGZpAx3lajETswWjDY1kopcUnnRZBkgBSbZJVWyEHitQiE9UvZmzYTA6F8HccYkoLFvZgt2ZG13DDL6UV0PB9BhS1qBHW311EOvt/xC4+BMmv7Y6BgUZHPsm6pH6TQH72YiuOY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 9a0711ad9db05bf3
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=L3GdUuE5f7+B71rN25pnVpviZGtiqGMIHKBk5ePyhdvP7YY8dyhuXCt7F/GasRjdq0fJlfBAThXjSCK7ZAFJI1PYryIlPs8fOH81eXIBXAh1mGtR0nbzUPb/6alRL+wrSlnG6VgO6tkXR7Bj5SNdoyxFocIpqheZPzfBYYYNs5ZjlxGIV94a6whzIt7+4aod09aHDG+4/gIn8X1YiDAUTGUYkekf0u/5QAc4Vyj293xAM0kX+9tJWoplS3zpDTtmHIXVKWLWwp2bmilpcQdc7Z7wGVWbpeIqJqh6K8AfEyTrbcLbtFlHkZDlvosdNn4ajIggTIMetDkD1tVhYHAsqA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=RF+AlTdElpX6bajun38j3sEa2fmjFdgbKVXV1GWenE0=;
 b=R9HRWrpQVXQLEfUXwox5RRaqT8YL9hEg3aSfsqO2ssj2I4vevVDymlj0lYRprZU5w6PWH3LzgLZRjLJZ5bcCWSTXZIwa6HdTJeLJ4BbSZ5+qRFfazlUdchr00N/pp059ucftupKqNPiv8rdFgzdI6O9gEWvMEgjrCWHGpQghjK7UjDUdQhGzAy1fdcM3OMJTHYbRQgwzXoBceNrXItV7GNhaPEcstCSSUsQ4xCAZeX9SAq8DCuxEo4dgQSvKA4rKjXIlOowELywfdXs6jj1wM3bnpeNXlHNb3l9+d4A/aG7uKU2GzGUvonBAs4T9tVt+FVyBEk6njo01wsx7ahetSA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=RF+AlTdElpX6bajun38j3sEa2fmjFdgbKVXV1GWenE0=;
 b=dd3WrAEhAMRocqaJWwQyB1wgS/YyLLLo+L94HGGZpAx3lajETswWjDY1kopcUnnRZBkgBSbZJVWyEHitQiE9UvZmzYTA6F8HccYkoLFvZgt2ZG13DDL6UV0PB9BhS1qBHW311EOvt/xC4+BMmv7Y6BgUZHPsm6pH6TQH72YiuOY=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Rahul Singh <Rahul.Singh@arm.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm : smmuv3: Fix to handle multiple StreamIds per
 device.
Thread-Topic: [PATCH] xen/arm : smmuv3: Fix to handle multiple StreamIds per
 device.
Thread-Index: AQHXBRR8dZBOnjHcyUSWoTi7lR9udKpcOIQA
Date: Wed, 17 Feb 2021 11:37:59 +0000
Message-ID: <5145C563-A8AA-41B1-8EBB-B32D1FFC2219@arm.com>
References:
 <43de5b58df37d8b8de037cb23c47ab8454caf37c.1613492577.git.rahul.singh@arm.com>
In-Reply-To:
 <43de5b58df37d8b8de037cb23c47ab8454caf37c.1613492577.git.rahul.singh@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: arm.com; dkim=none (message not signed)
 header.d=none;arm.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [86.26.33.241]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 331a3eda-9853-438a-d917-08d8d3387eae
x-ms-traffictypediagnostic: VI1PR0801MB1981:|VI1PR08MB3472:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR08MB34729722E09AD9BB650E10F19D869@VI1PR08MB3472.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 VfILVkqKGslMLAjXkigTiyCdRiZHXcUYrjUVt+b9qz/TOkUGcKix4UxhKCDDPH0bk5ugluYAwXDGXCBfsurxjTCuhi2cz/5HJkQ3Mlm9Q38yQK96j13/U3L/wAzzC4MppwdeG8Kwx/1SleZktwL3ZqarOsO9G+ONMMGIzHH2Eb23qeHT6Xokozl2FKOwJzx73klLWkagdNtd51eEblaGnyqHs8XaMfUhxos4CO0Krn0I4+WsucbGAT+HMrgZEFoKZJICC1KxEpgrLQwPdRAORQKhGyP6b/djY24YhNggFi3AMTUufKMuc0pK7mdy0gjttMjhTwF9VXgPR7aS9NuOXzbAnGFNUaNahnVJx6/zNC809NTxNSi4fkcq/sJ08+D6JvOaeBxX6mDggq6vjcz6r0MzaXdy6+kFDcvvYDXchBZYJWvd0HqSgKLsPdOR5WUT5jsCwhO6yuv6avfvNSEhKOwCX+ulxr4C47l4Wa0G8wcLP3xn6VKpBboWpuu9lFlUQ2Q4jpGF/Enzw7ZLX3DwwgNcH0ziR27S25ax9UqPp39gb7h2UynEW3+fLJntzQgH
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR08MB5696.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(396003)(366004)(39860400002)(346002)(376002)(53546011)(8676002)(26005)(6636002)(4326008)(33656002)(76116006)(86362001)(55236004)(71200400001)(8936002)(6506007)(6862004)(2616005)(6486002)(478600001)(186003)(66446008)(37006003)(2906002)(83380400001)(66556008)(316002)(66946007)(66476007)(91956017)(5660300002)(6512007)(54906003)(36756003)(64756008)(45980500001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 =?us-ascii?Q?5k5ST2449UjZ//CejpvVK3yD6tSDIMV3hatYouAsgJjjMDzA+UrxIx7uGOvj?=
 =?us-ascii?Q?1KPlIRu2m6niEG/8+0eGi1GonuvQ73K0BCcb1ItNPjXZftTvYmuA70W/Hc6n?=
 =?us-ascii?Q?HI7UhfuiIqGvprxVk7TnUsqdZ9Zld8Zo1s3WF16uIYez+6va12GetMNQK4rx?=
 =?us-ascii?Q?+jBx8Q6KSQQNhqE8uuAQqhEPaJPHyo+u8NtFu2OFLB9Xjhvy6gXKAAvhXJzz?=
 =?us-ascii?Q?JLssElhf7VIZ4RhHUWE28cBvXfUe5UkaKPn+5jGzescqVYCdjwH4rEuOpay2?=
 =?us-ascii?Q?lKAEiGXFPJz85NW0i3bntqXEbjQuiagFFp/tjqsXujWQRQG5zkQFYYuDA6oQ?=
 =?us-ascii?Q?kk9fkLMxnHhZ8nGKflimvNuFCoLrNTicn1Z8ajlGVF1msBkGHzUVGZiFvyip?=
 =?us-ascii?Q?Xc0PQkoQwrondTA/Ba9UI0vWj4Glqctv4mGFTMSp8ppIVnsIntSPOrEh4Q0W?=
 =?us-ascii?Q?lRUGs0/GBnhxOyQ13HX2vgmh1gEjn19doRZq4PLuFe6bLQ9pcEWBOWd+oPNx?=
 =?us-ascii?Q?4H/GnN17WQN64KrzF2rf3FnldKmc0TNde/frfw0tl9A8fZ8VMXqEJss6vOM+?=
 =?us-ascii?Q?6kzoN45tzCfio1WfNOV1feQatg8eMLFNACtMB++krMWPnlqwnsrgoqwma8+e?=
 =?us-ascii?Q?Fsf+gZu3ah/Nr+9xcJ8U2gcAeT20VSXDVr0LsalPzq+3FT08PLqt53WCj53Q?=
 =?us-ascii?Q?IfQuy19lUXYURrfWapWrrsiDfUpoitO6uj3kq0sK9oVagq5XL6pcGRV0FJHn?=
 =?us-ascii?Q?joTf875tuQbK5ocTKAKHOSpvolEazN3TjAednPc5KSTe+tBPEZ04zkWTE++b?=
 =?us-ascii?Q?vtsiAokgHtVIZ7BOCQS8RQy0ho/n+7M8EbgGK5m1WuONoa1NN2sf06G0HzEA?=
 =?us-ascii?Q?cST+rFlye65dDN6VqCQDLDuQaCO7glb6C7OGFOHX/7KsEA6hNqjkvWzv6Req?=
 =?us-ascii?Q?D8kmO2b+hXoV5YFIakqMoBsC24JcdhmS2itodMpWoR5qTV9p1UvYVFyigyyS?=
 =?us-ascii?Q?4jqPMbOi+sDZSEANpEyAr2V8SDIhz5DANiyZyJIc8bPnEPEj43q76dJEJi3H?=
 =?us-ascii?Q?fvMHlTXeRkNONu31bQkjyfUgh/gov4LOFpY959GHaYgyeu/AQUaqHfXBFVNd?=
 =?us-ascii?Q?yW2ygj8sVnHh6QTjKBixoxNCs6QTU4FxOYaeQy6Xt02cysg/b2e6Mcci4ZPw?=
 =?us-ascii?Q?IOynTtQhotQOc7RLr/pMnp4mAcwND33FASPHB9G4rDqzw+9LGGEVmqIm9SuH?=
 =?us-ascii?Q?kXeUHpNUQkgX9Bb9ZxtOASgsLlLxmatXdthQ6cv7Sy1SVAWIHMgaCtZz9mSF?=
 =?us-ascii?Q?MkT6GUB6ZumTf6tsAGlFHfo3?=
Content-Type: text/plain; charset="us-ascii"
Content-ID: <86E140E06AFC214AB3EA8D2CF28F24F9@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0801MB1981
Original-Authentication-Results: arm.com; dkim=none (message not signed)
 header.d=none;arm.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT033.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	f0503722-c568-4952-9805-08d8d3387a40
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	iubWsZQyNfXfLzmDRTeztuFp5Nr+Nc9Uyl23TfMOXNVQhJ7nnfMcm+q3WGNJPbj90KlgMnHl7G/kOol3zsIH0GQIAyBklD2OM0bAeLUnRp5cZDgy199Z046m8Ho5MhH+iYNajPdTOvyTNBX3DlflWROhWaIrGP5BvkhAo0kem9G6RPbqMk6WXXr2XrUjYFRDeflYjpgcIM4bVZvFC4suXGRSn+rT67hhzbVcGwJ7BhqxEGo8JQu+xpvJj7lc6SQjbM2PRVgJ7Wabmi/COJVG0k7pjMMPLA5g8veVqhvlyktcyNJpBTMuglfd2KtI1TEVNUEpK9XMNjty5DITj/4fPTHtSwh2VRNsqjEZshJPgLNOEXyQd3M0IFqQh2g5aCSO31cy4FhIfil4uHP0ztSHx8PIOMg/rcjguHKLL868DT5FNSs1DE49wvEjaHW/2OQzC9oUiRLuajaS5CGAElTd/hdsYDXN4ndEqrI83z4HfGqjs42a2DC2mg1WrIIWnycVhwkRc9XhzLGLVm+vyPR7iNj3xjMB8z7MePi86vmxAhQsFC+AWtN2FhDumm0NOJau1q+FDopgkYB1yd8+XS0ITwbPNHbLJk8K9zEEiHZeEjJAabYa8dgmOk99OiX2aF1w0YABcX3hf95QpWnjNUzgAg==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(136003)(346002)(376002)(39860400002)(396003)(36840700001)(46966006)(6636002)(6506007)(107886003)(6512007)(86362001)(81166007)(53546011)(82740400003)(2616005)(55236004)(47076005)(336012)(2906002)(33656002)(26005)(70586007)(70206006)(6862004)(478600001)(82310400003)(8936002)(83380400001)(356005)(54906003)(316002)(4326008)(8676002)(36756003)(37006003)(5660300002)(36860700001)(186003)(6486002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Feb 2021 11:38:06.9872
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 331a3eda-9853-438a-d917-08d8d3387eae
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT033.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB3472

Hi Rahul,


> On 17 Feb 2021, at 10:05, Rahul Singh <Rahul.Singh@arm.com> wrote:
>
> The SMMUv3 driver does not handle multiple StreamIDs if the master
> device supports more than one StreamID.
>
> This bug was introduced when the driver was ported from Linux to Xen.
> dt_device_set_protected(..) should be called from add_device(..), not
> from dt_xlate(..).
>
> Move dt_device_set_protected(..) from dt_xlate(..) to add_device().
>
> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Thanks a lot, this fixes the issues with multiple stream IDs for one device :-)

Cheers
Bertrand

> ---
> This patch is a candidate for 4.15, as without it, it is not possible to
> assign multiple StreamIDs to the same device when the device is protected
> behind an SMMUv3.
> ---
> xen/drivers/passthrough/arm/smmu-v3.c | 29 ++++++++++-----------------
> 1 file changed, 11 insertions(+), 18 deletions(-)
>
> diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
> index 914cdc1cf4..53d150cdb6 100644
> --- a/xen/drivers/passthrough/arm/smmu-v3.c
> +++ b/xen/drivers/passthrough/arm/smmu-v3.c
> @@ -2207,24 +2207,6 @@ static int arm_smmu_add_device(u8 devfn, struct device *dev)
> 	 */
> 	arm_smmu_enable_pasid(master);
>
> -	return 0;
> -
> -err_free_master:
> -	xfree(master);
> -	dev_iommu_priv_set(dev, NULL);
> -	return ret;
> -}
> -
> -static int arm_smmu_dt_xlate(struct device *dev,
> -				const struct dt_phandle_args *args)
> -{
> -	int ret;
> -	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
> -
> -	ret = iommu_fwspec_add_ids(dev, args->args, 1);
> -	if (ret)
> -		return ret;
> -
> 	if (dt_device_is_protected(dev_to_dt(dev))) {
> 		dev_err(dev, "Already added to SMMUv3\n");
> 		return -EEXIST;
> @@ -2237,6 +2219,17 @@ static int arm_smmu_dt_xlate(struct device *dev,
> 			dev_name(fwspec->iommu_dev), fwspec->num_ids);
>
> 	return 0;
> +
> +err_free_master:
> +	xfree(master);
> +	dev_iommu_priv_set(dev, NULL);
> +	return ret;
> +}
> +
> +static int arm_smmu_dt_xlate(struct device *dev,
> +				const struct dt_phandle_args *args)
> +{
> +	return iommu_fwspec_add_ids(dev, args->args, 1);
> }
>
> /* Probing and initialisation functions */
> --
> 2.17.1
>
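For readers less familiar with the fwspec flow: the point of the patch is that dt_xlate() runs once per StreamID listed in the device tree, while add_device() runs once per master device, so any one-shot state change (like marking the device as protected) belongs in the latter. The following is a minimal, self-contained sketch of that flow — the structures and names are simplified stand-ins, not the actual Xen code:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Hypothetical stand-ins for Xen's iommu_fwspec and device structures,
 * used only to illustrate the call flow changed by this patch. */
#define MAX_IDS 8

struct fwspec {
    unsigned int ids[MAX_IDS];
    unsigned int num_ids;
};

struct device {
    struct fwspec fwspec;
    bool protected;            /* models dt_device_is_protected() */
};

/* dt_xlate() is invoked once per StreamID in the "iommus" property,
 * so after the patch it only accumulates IDs. */
static int dt_xlate(struct device *dev, unsigned int streamid)
{
    if (dev->fwspec.num_ids >= MAX_IDS)
        return -1;
    dev->fwspec.ids[dev->fwspec.num_ids++] = streamid;
    return 0;
}

/* add_device() runs once per master device: the right place for the
 * one-shot "protected" marking (previously done in dt_xlate, which
 * made translating a second StreamID fail with -EEXIST). */
static int add_device(struct device *dev)
{
    if (dev->protected)
        return -17;            /* -EEXIST */
    dev->protected = true;
    return 0;
}
```

With the protection check in dt_xlate(), the second dt_xlate() call for the same device would have failed; after the move, multiple StreamIDs accumulate cleanly.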



From xen-devel-bounces@lists.xenproject.org Wed Feb 17 11:38:40 2021
Subject: Re: [PATCH 2/3] gnttab: bypass IOMMU (un)mapping when a domain is
 (un)mapping its own grant
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Rahul Singh <Rahul.Singh@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <156559d5-853a-5bb9-942b-f623627e0907@suse.com>
 <5bb4fba7-a10b-90c4-82f7-8cde6e8cacfb@suse.com>
 <a267959c-538e-0f90-f9a5-72e836f33cb4@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <283d8514-16fb-8973-e395-0a86bf820306@suse.com>
Date: Wed, 17 Feb 2021 12:38:35 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <a267959c-538e-0f90-f9a5-72e836f33cb4@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 17.02.2021 12:03, Julien Grall wrote:
> On 17/02/2021 10:46, Jan Beulich wrote:
>> Mappings for a domain's own pages should already be present in the
>> IOMMU. While installing the same mapping again is merely redundant (and
>> inefficient), removing the mapping when the grant mapping gets removed
>> is outright wrong in this case: The mapping was there before the map, so
>> should remain in place after unmapping.
>>
>> This affects
>> - Arm Dom0 in the direct mapped case,
>> - x86 PV Dom0 in the "iommu=dom0-strict" / "dom0-iommu=strict" cases,
>> - all x86 PV DomU-s, including driver domains.
>>
>> Reported-by: Rahul Singh <Rahul.Singh@arm.com>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>
>> --- a/xen/common/grant_table.c
>> +++ b/xen/common/grant_table.c
>> @@ -1243,7 +1243,7 @@ map_grant_ref(
>>           goto undo_out;
>>       }
>>   
>> -    need_iommu = gnttab_need_iommu_mapping(ld);
>> +    need_iommu = ld != rd && gnttab_need_iommu_mapping(ld);
> 
> AFAICT, the owner of the page may not always be rd. So do we want to 
> check against the owner instead?

For the DomIO case - specifically not. And the DomCOW case can't
happen when an IOMMU is in use. Did I overlook any other cases
where the page may not be owned by rd?

Jan


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 11:41:54 2021
Subject: Re: [PATCH 2/3] gnttab: bypass IOMMU (un)mapping when a domain is
 (un)mapping its own grant
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Rahul Singh <Rahul.Singh@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <156559d5-853a-5bb9-942b-f623627e0907@suse.com>
 <5bb4fba7-a10b-90c4-82f7-8cde6e8cacfb@suse.com>
 <a267959c-538e-0f90-f9a5-72e836f33cb4@xen.org>
 <283d8514-16fb-8973-e395-0a86bf820306@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <6ece0308-504f-5127-b7af-5c801630253b@xen.org>
Date: Wed, 17 Feb 2021 11:41:43 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <283d8514-16fb-8973-e395-0a86bf820306@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 17/02/2021 11:38, Jan Beulich wrote:
> On 17.02.2021 12:03, Julien Grall wrote:
>> On 17/02/2021 10:46, Jan Beulich wrote:
>>> Mappings for a domain's own pages should already be present in the
>>> IOMMU. While installing the same mapping again is merely redundant (and
>>> inefficient), removing the mapping when the grant mapping gets removed
>>> is outright wrong in this case: The mapping was there before the map, so
>>> should remain in place after unmapping.
>>>
>>> This affects
>>> - Arm Dom0 in the direct mapped case,
>>> - x86 PV Dom0 in the "iommu=dom0-strict" / "dom0-iommu=strict" cases,
>>> - all x86 PV DomU-s, including driver domains.
>>>
>>> Reported-by: Rahul Singh <Rahul.Singh@arm.com>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>
>>> --- a/xen/common/grant_table.c
>>> +++ b/xen/common/grant_table.c
>>> @@ -1243,7 +1243,7 @@ map_grant_ref(
>>>            goto undo_out;
>>>        }
>>>    
>>> -    need_iommu = gnttab_need_iommu_mapping(ld);
>>> +    need_iommu = ld != rd && gnttab_need_iommu_mapping(ld);
>>
>> AFAICT, the owner of the page may not always be rd. So do we want to
>> check against the owner instead?
> 
> For the DomIO case - specifically not. And the DomCOW case can't
> happen when an IOMMU is in use. Did I overlook any other cases
> where the page may not be owned by rd?

For the current code, it looks like not. But it feels to me that this code 
is fragile, as we are assuming that the other cases can never happen.

I think it would be worth explaining, in a comment and in the commit 
message, why checking rd rather than the page owner is sufficient.
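For reference, the guard under discussion can be modelled as follows — a simplified stand-in rather than the actual map_grant_ref() code, with gnttab_need_iommu_mapping() reduced to a boolean parameter:

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal stand-in for a Xen domain handle; only identity matters here. */
struct domain {
    int id;
};

/* Models the patched condition in map_grant_ref(): when the local
 * domain maps its own grant (ld == rd), its pages are already present
 * in the IOMMU, so no extra IOMMU (un)mapping should be performed --
 * in particular, unmap must not tear down a pre-existing mapping. */
static bool need_iommu(const struct domain *ld, const struct domain *rd,
                       bool ld_needs_iommu_mapping)
{
    return ld != rd && ld_needs_iommu_mapping;
}
```

The unmap path uses the same condition, which is what prevents the outright-wrong removal of a mapping that existed before the grant map.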

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 11:49:47 2021
Subject: Re: [for-4.15][PATCH v2 4/5] xen/iommu: x86: Don't leak the IOMMU
 page-tables
To: Jan Beulich <jbeulich@suse.com>
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Paul Durrant <paul@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210209152816.15792-1-julien@xen.org>
 <20210209152816.15792-5-julien@xen.org>
 <62a791cb-a880-4097-5fec-4f728751b58b@suse.com>
 <712042bf-bec6-dc0f-67ee-b0807887772f@xen.org>
 <01546a53-1b1a-af35-a67e-7612e619961d@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <8c9ec11b-6ee8-4bc5-ed2d-c84c0dc23afb@xen.org>
Date: Wed, 17 Feb 2021 11:49:28 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <01546a53-1b1a-af35-a67e-7612e619961d@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 10/02/2021 16:12, Jan Beulich wrote:
> On 10.02.2021 16:04, Julien Grall wrote:
>> On 10/02/2021 14:32, Jan Beulich wrote:
>>> On 09.02.2021 16:28, Julien Grall wrote:
>>>> From: Julien Grall <jgrall@amazon.com>
>>>>
>>>> The new IOMMU page-table allocator will release the pages when the
>>>> domain resources are relinquished. However, this is not sufficient
>>>> when the domain is dying, because nothing prevents page-tables from
>>>> being allocated.
>>>>
>>>> iommu_alloc_pgtable() now checks whether the domain is dying before
>>>> adding the page to the list. We rely on &hd->arch.pgtables.lock
>>>> to synchronize on d->is_dying.
>>>
>>> As said in reply to an earlier patch, I think suppressing
>>> (really: ignoring) new mappings would be better.
>>
>> This is exactly what I suggested in v1 but you wrote:
>>
>> "Ignoring requests there seems fragile to me. Paul - what are your
>> thoughts about bailing early from hvm_add_ioreq_gfn() when the
>> domain is dying?"
> 
> Was this on the thread of this patch? I didn't find such a
> reply of mine. I need more context here because you name
> hvm_add_ioreq_gfn() above, while I refer to iommu_map()
> (and downwards the call stack).

See [1].

> 
>> Are you know saying that the following snipped would be fine:
>>
>> if ( d->is_dying )
>>     return 0;
> 
> In {amd,intel}_iommu_map_page(), after the lock was acquired
> and with it suitably released, yes. And if that's what you
> suggested, then I'm sorry - I don't think I can see anything
> fragile there anymore.

Duplicating the check sounds good to me.

> 
>>> You could
>>> utilize the same lock, but you'd need to duplicate the
>>> checking in {amd,intel}_iommu_map_page().
>>>
>>> I'm not entirely certain in the case about unmap requests:
>>> It may be possible to also suppress/ignore them, but this
>>> may require some further thought.
>>
>> I think the unmap part is quite risky to d->is_dying because the PCI
>> devices may not quiesced and still assigned to the domain.
> 
> Hmm, yes, good point. Of course upon first unmap with is_dying
> observed set we could zap the root page table, but I don't
> suppose that's something we want to do for 4.15.

We would still need to zap the root page table in the relinquish path. 
So I am not sure what benefit it would give us to zap the page tables 
on the first iommu_unmap() after the domain dies.
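The duplicated check agreed on above can be sketched roughly like this — a simplified model with hypothetical names, not the actual {amd,intel}_iommu_map_page() code; in the real version the check sits after acquiring the lock that also guards d->is_dying:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical, simplified stand-ins for the Xen structures involved. */
struct domain {
    bool is_dying;
};

struct domain_iommu {
    /* In Xen this would carry hd->arch.pgtables.lock; elided here. */
    int mapped_pages;
};

/* Models the suggested behaviour: map requests against a dying domain
 * are ignored (reporting success), so no new page-tables can be
 * allocated once the relinquish path has started tearing them down. */
static int iommu_map_page(struct domain *d, struct domain_iommu *hd)
{
    /* In the real code, the lock is taken before this check. */
    if (d->is_dying)
        return 0;       /* ignore: don't allocate page-tables */

    hd->mapped_pages++; /* stands in for the actual mapping work */
    return 0;
}
```

Unmap requests deliberately keep working, since devices may not yet be quiesced while the domain is dying.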

Cheers,

[1] 
https://lore.kernel.org/xen-devel/f21f1f61-5213-55a8-320c-43e5fe80100f@suse.com/

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 12:26:13 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159418-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.12-testing test] 159418: tolerable FAIL - PUSHED
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 17 Feb 2021 12:26:03 +0000

flight 159418 xen-4.12-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159418/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 159241
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 159241
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 159241
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 159241
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 159241
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 159241
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 159241
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 159241
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 159241
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 159241
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 159241
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  4cf5929606adc2fb1ab4e2921c14ba4b8046ecd1
baseline version:
 xen                  8d26cdd3b66ab86d560dacd763d76ff3da95723e

Last test of basis   159241  2021-02-11 05:00:05 Z    6 days
Testing same since   159418  2021-02-16 15:06:11 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <jgrall@amazon.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   8d26cdd3b6..4cf5929606  4cf5929606adc2fb1ab4e2921c14ba4b8046ecd1 -> stable-4.12


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 12:57:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 12:57:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86260.161793 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCMOJ-00057D-RK; Wed, 17 Feb 2021 12:57:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86260.161793; Wed, 17 Feb 2021 12:57:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCMOJ-000576-O2; Wed, 17 Feb 2021 12:57:15 +0000
Received: by outflank-mailman (input) for mailman id 86260;
 Wed, 17 Feb 2021 12:57:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XFXw=HT=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lCMOI-000571-9L
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 12:57:14 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 38932570-4c32-4c0f-83d2-9a3bf47912d0;
 Wed, 17 Feb 2021 12:57:13 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 33C2CAF7B;
 Wed, 17 Feb 2021 12:57:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 38932570-4c32-4c0f-83d2-9a3bf47912d0
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613566632; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=7W3+ffwYr1nMpqLUtPZyxxnlYdErmYNvREl7MIUVtcQ=;
	b=HkjcU9wSvw0LnjOqd8HBrSvqELrgxIyBceyRDIrMKSHyIPQbBgUJvW5QR7KyGBGwpYLJsC
	uYF5QaTwXARi5bene2OYhI24EKYsOm/PcMwyy/oMA1PP+M+3/Ei0LfDJ9iZbhjwvjCT4km
	q2vl9DsgyS/g1ZNmnMAXW67NONJtv30=
Subject: Re: [for-4.15][PATCH v2 4/5] xen/iommu: x86: Don't leak the IOMMU
 page-tables
To: Julien Grall <julien@xen.org>
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Paul Durrant <paul@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210209152816.15792-1-julien@xen.org>
 <20210209152816.15792-5-julien@xen.org>
 <62a791cb-a880-4097-5fec-4f728751b58b@suse.com>
 <712042bf-bec6-dc0f-67ee-b0807887772f@xen.org>
 <01546a53-1b1a-af35-a67e-7612e619961d@suse.com>
 <8c9ec11b-6ee8-4bc5-ed2d-c84c0dc23afb@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b2d58a3d-cb5e-3170-db77-807a8704c079@suse.com>
Date: Wed, 17 Feb 2021 13:57:11 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <8c9ec11b-6ee8-4bc5-ed2d-c84c0dc23afb@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 17.02.2021 12:49, Julien Grall wrote:
> On 10/02/2021 16:12, Jan Beulich wrote:
>> On 10.02.2021 16:04, Julien Grall wrote:
>>> Are you now saying that the following snippet would be fine:
>>>
>>> if ( d->is_dying )
>>>     return 0;
>>
>> In {amd,intel}_iommu_map_page(), after the lock was acquired
>> and with it suitably released, yes. And if that's what you
>> suggested, then I'm sorry - I don't think I can see anything
>> fragile there anymore.
> 
> Duplicating the check sounds good to me.

The checks in said functions are mandatory, and I didn't really
have any duplication in mind. But yes, iommu_map() could have
an early (but racy) check, if so wanted.
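[Editorial sketch] The check placement being discussed can be sketched as follows. All types and helpers here are hypothetical stand-ins, not the actual Xen structures or {amd,intel}_iommu_map_page() code; the locking is reduced to comments.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in; illustration only, not Xen's real struct domain. */
struct domain {
    bool is_dying;
    int nr_mappings;
};

/*
 * Per-vendor map helper (the {amd,intel}_iommu_map_page() role): the
 * is_dying check here is the mandatory one, performed after the
 * page-table lock has been acquired, so it cannot race with domain
 * destruction.
 */
static int vendor_iommu_map_page(struct domain *d)
{
    /* lock(d->page_table_lock); */
    if ( !d->is_dying )
        d->nr_mappings++;           /* perform the actual mapping */
    /* unlock(d->page_table_lock); */
    return 0;                       /* suppressed maps still succeed */
}

/*
 * Common entry point (the iommu_map() role): an early check here is
 * optional and inherently racy - is_dying may flip right after being
 * read - so it can only be a fast path, never the authoritative check.
 */
static int iommu_map(struct domain *d)
{
    if ( d->is_dying )
        return 0;
    return vendor_iommu_map_page(d);
}

/* Small demo helpers. */
static int mappings_when_alive(void)
{
    struct domain d = { false, 0 };
    iommu_map(&d);
    return d.nr_mappings;
}

static int mappings_when_dying(void)
{
    struct domain d = { true, 0 };
    iommu_map(&d);
    return d.nr_mappings;
}
```

With this shape, a map attempted against a dying domain is silently dropped at either level, while a live domain's map goes through.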

>>>> You could
>>>> utilize the same lock, but you'd need to duplicate the
>>>> checking in {amd,intel}_iommu_map_page().
>>>>
>>>> I'm not entirely certain in the case about unmap requests:
>>>> It may be possible to also suppress/ignore them, but this
>>>> may require some further thought.
>>>
>>> I think the unmap part is quite risky with respect to d->is_dying, because
>>> the PCI devices may not be quiesced and may still be assigned to the domain.
>>
>> Hmm, yes, good point. Of course upon first unmap with is_dying
>> observed set we could zap the root page table, but I don't
>> suppose that's something we want to do for 4.15.
> 
> We would still need to zap the root page table in the relinquish path.
> So I am not sure what benefit it would give us to zap the page tables
> on the first iommu_unmap() after the domain dies.

I guess we are thinking of different aspects of the zapping - what I'm
suggesting here aims at the effect that no translations remain
active anymore. I'm not after freeing the memory at this point;
that will have to happen on the relinquish path, as you say. The
problem with allowing individual unmaps to proceed (unlike your
plan for maps) is that these, too, can trigger allocations (when
a large page needs shattering).
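[Editorial sketch] The allocation hazard mentioned above comes down to simple arithmetic; this is a simplified model assuming x86-style 2MiB superpages covering 512 4KiB entries, not actual IOMMU driver code.

```c
#include <assert.h>

/*
 * Simplified model: unmapping a single 4KiB page currently covered by
 * a 2MiB superpage cannot be done by just clearing an entry. The
 * superpage entry must be replaced by a newly allocated leaf page
 * table holding 512 4KiB entries, of which all but the unmapped one
 * stay present. That allocation is exactly what becomes problematic
 * once the domain is dying.
 */
enum { PTES_PER_TABLE = 512 };      /* 2MiB / 4KiB on x86 */

static int tables_to_allocate_for_shatter(void)
{
    return 1;                       /* one new leaf page table */
}

static int entries_still_mapped_after_shatter(void)
{
    return PTES_PER_TABLE - 1;      /* 511 of the 512 remain mapped */
}
```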

Jan


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 13:17:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 13:17:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86263.161805 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCMhH-00071L-FH; Wed, 17 Feb 2021 13:16:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86263.161805; Wed, 17 Feb 2021 13:16:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCMhH-00071E-CD; Wed, 17 Feb 2021 13:16:51 +0000
Received: by outflank-mailman (input) for mailman id 86263;
 Wed, 17 Feb 2021 13:16:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XFXw=HT=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lCMhF-000719-RK
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 13:16:49 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 07485c78-c394-4470-8cbe-c0fb8c8874d0;
 Wed, 17 Feb 2021 13:16:49 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id F214BB8F2;
 Wed, 17 Feb 2021 13:16:47 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 07485c78-c394-4470-8cbe-c0fb8c8874d0
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613567808; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=+GgjWvj6kfrzUcRteP3Et3jYZHDS9Moqn4t7OAoGUWo=;
	b=l0dnk2++ZxGSknyBH/XyNiNa4fi51+gk/s0CFVzRfrNoGrRY7NhWMGVZi9MWtxlszoJq7w
	HJI1waenaPDx7fKJM6EWNVU+TRaQMi5RWCToaa7yLpFGs7OWiFG/VCevuL73iGQUNKG5J6
	jgQOhPAtCNWB7NqmJguALcrYC/nXRf0=
Subject: Re: [PATCH 2/3] gnttab: bypass IOMMU (un)mapping when a domain is
 (un)mapping its own grant
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Rahul Singh <Rahul.Singh@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <156559d5-853a-5bb9-942b-f623627e0907@suse.com>
 <5bb4fba7-a10b-90c4-82f7-8cde6e8cacfb@suse.com>
 <a267959c-538e-0f90-f9a5-72e836f33cb4@xen.org>
 <283d8514-16fb-8973-e395-0a86bf820306@suse.com>
 <6ece0308-504f-5127-b7af-5c801630253b@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ef8991ad-ba9e-c4f0-e4cc-fb4655608549@suse.com>
Date: Wed, 17 Feb 2021 14:16:47 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <6ece0308-504f-5127-b7af-5c801630253b@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 17.02.2021 12:41, Julien Grall wrote:
> Hi Jan,
> 
> On 17/02/2021 11:38, Jan Beulich wrote:
>> On 17.02.2021 12:03, Julien Grall wrote:
>>> On 17/02/2021 10:46, Jan Beulich wrote:
>>>> Mappings for a domain's own pages should already be present in the
>>>> IOMMU. While installing the same mapping again is merely redundant (and
>>>> inefficient), removing the mapping when the grant mapping gets removed
>>>> is outright wrong in this case: The mapping was there before the map, so
>>>> should remain in place after unmapping.
>>>>
>>>> This affects
>>>> - Arm Dom0 in the direct mapped case,
>>>> - x86 PV Dom0 in the "iommu=dom0-strict" / "dom0-iommu=strict" cases,
>>>> - all x86 PV DomU-s, including driver domains.
>>>>
>>>> Reported-by: Rahul Singh <Rahul.Singh@arm.com>
>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>>
>>>> --- a/xen/common/grant_table.c
>>>> +++ b/xen/common/grant_table.c
>>>> @@ -1243,7 +1243,7 @@ map_grant_ref(
>>>>            goto undo_out;
>>>>        }
>>>>    
>>>> -    need_iommu = gnttab_need_iommu_mapping(ld);
>>>> +    need_iommu = ld != rd && gnttab_need_iommu_mapping(ld);
>>>
>>> AFAICT, the owner of the page may not always be rd. So do we want to
>>> check against the owner instead?
>>
>> For the DomIO case - specifically not. And the DomCOW case can't
>> happen when an IOMMU is in use. Did I overlook any other cases
>> where the page may not be owned by rd?
> 
> For the current code, it looks like not. But it feels to me that this code
> is fragile, as we are assuming that the other cases can never happen.
> 
> I think it would be worth explaining in a comment and the commit message
> why checking rd rather than the page owner is sufficient.

Well, I've added

    /*
     * This is deliberately not checking the page's owner: get_paged_frame()
     * explicitly rejects foreign pages, and all success paths above yield
     * either owner == rd or owner == dom_io (the dom_cow case is irrelevant
     * as mem-sharing and IOMMU use are incompatible). The dom_io case would
     * need checking separately if we compared against owner here.
     */

to map_grant_ref(), and a reference to this comment to both
unmap_common() and the commit message. Will this do?
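[Editorial sketch] The guard under discussion, applied symmetrically on the map and unmap paths, can be sketched like this. The struct and the need_iommu_mapping() helper are hypothetical stand-ins for the real domain type and gnttab_need_iommu_mapping(ld).

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins; not the real Xen types or helpers. */
struct domain { bool strict_iommu; };

static bool need_iommu_mapping(const struct domain *ld)
{
    return ld->strict_iommu;
}

/*
 * Core of the fix: when the local domain maps its own grant (ld == rd),
 * the page is already present in ld's IOMMU page tables, so no IOMMU
 * update is wanted - neither on map (merely redundant) nor on unmap
 * (which would wrongly remove a pre-existing mapping). The same
 * expression therefore has to be used on both paths.
 */
static bool grant_needs_iommu(const struct domain *ld,
                              const struct domain *rd)
{
    return ld != rd && need_iommu_mapping(ld);
}

/* Demo helpers. */
static bool self_map_needs_iommu(void)
{
    struct domain d = { true };
    return grant_needs_iommu(&d, &d);   /* own grant: never */
}

static bool foreign_map_needs_iommu(void)
{
    struct domain l = { true }, r = { false };
    return grant_needs_iommu(&l, &r);   /* foreign grant, strict mode */
}
```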

Jan


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 13:29:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 13:29:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86268.161822 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCMtJ-000882-MQ; Wed, 17 Feb 2021 13:29:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86268.161822; Wed, 17 Feb 2021 13:29:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCMtJ-00087v-JH; Wed, 17 Feb 2021 13:29:17 +0000
Received: by outflank-mailman (input) for mailman id 86268;
 Wed, 17 Feb 2021 13:29:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4zEa=HT=citrix.com=ross.lagerwall@srs-us1.protection.inumbo.net>)
 id 1lCMtI-00087q-CS
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 13:29:16 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9880bbd2-ba38-45a0-a00b-2974685bad31;
 Wed, 17 Feb 2021 13:29:15 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9880bbd2-ba38-45a0-a00b-2974685bad31
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613568555;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=bp3fBtBumnQYDuMT4p4O4vegfrz8kk/nF90ppoE+EIQ=;
  b=GUGOGoBqzJ98mWrdB1Xf/t7r4p6w4QMXIW53oyws7NDPNqOuNDHuI5K9
   aAJwl2ZDakcWECFlcZIIfEEF91ICiYuGC7MHb/47Ld247wB3PPg7p9HqV
   Z8lzm4W9S7UTRoExQVRMfB+c2tAqFfRFA64QD1ZIxh4AqT2jimZaSOUh8
   4=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: o7LaJcbQT5QuBw1xPwkFm1kmmIpHK9sbb9lZj5IyJs4mA6AKdPwwmE6rjsZVBkmNqq+f1rg7v9
 LhPjd42AWS+GmnJwjqQu2oMcrOsirvyJVxH5cCnpbZzFpjLStPB9qB5/CJH5wL2kyX3hIiHH41
 BiJdvO1WYeF49G5IAxgFWe2Doce5WdQcFnPhB0gMWpqLGNUyZ04XS+fO43Jds6xWB/x8VdXjLc
 TVqBAY/waBGvsB5Nt+Gp+8G4G7JcUyHaEeserRUmk28EhENWIwM7fRM4ds93N++2/jeX9TnB7a
 ItU=
X-SBRS: 5.1
X-MesageID: 37785213
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,184,1610427600"; 
   d="scan'208";a="37785213"
Subject: Re: [PATCH v2 8/8] xen/evtchn: use READ/WRITE_ONCE() for accessing
 ring indices
To: Juergen Gross <jgross@suse.com>, <xen-devel@lists.xenproject.org>,
	<linux-kernel@vger.kernel.org>
CC: Boris Ostrovsky <boris.ostrovsky@oracle.com>, Stefano Stabellini
	<sstabellini@kernel.org>
References: <20210211101616.13788-1-jgross@suse.com>
 <20210211101616.13788-9-jgross@suse.com>
From: Ross Lagerwall <ross.lagerwall@citrix.com>
Message-ID: <6818fcde-abab-1250-119c-d0ccb8c80488@citrix.com>
Date: Wed, 17 Feb 2021 13:29:19 +0000
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210211101616.13788-9-jgross@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 2021-02-11 10:16, Juergen Gross wrote:
> To avoid read- and write-tearing by the compiler, use READ_ONCE()
> and WRITE_ONCE() for accessing the ring indices in evtchn.c.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
> V2:
> - modify all accesses (Julien Grall)
> ---
>  drivers/xen/evtchn.c | 25 ++++++++++++++++---------
>  1 file changed, 16 insertions(+), 9 deletions(-)
> 
> diff --git a/drivers/xen/evtchn.c b/drivers/xen/evtchn.c
> index 421382c73d88..620008f89dbe 100644
> --- a/drivers/xen/evtchn.c
> +++ b/drivers/xen/evtchn.c
> @@ -162,6 +162,7 @@ static irqreturn_t evtchn_interrupt(int irq, void *data)
>  {
>  	struct user_evtchn *evtchn = data;
>  	struct per_user_data *u = evtchn->user;
> +	unsigned int prod, cons;
>  
>  	WARN(!evtchn->enabled,
>  	     "Interrupt for port %u, but apparently not enabled; per-user %p\n",
> @@ -171,10 +172,14 @@ static irqreturn_t evtchn_interrupt(int irq, void *data)
>  
>  	spin_lock(&u->ring_prod_lock);
>  
> -	if ((u->ring_prod - u->ring_cons) < u->ring_size) {
> -		*evtchn_ring_entry(u, u->ring_prod) = evtchn->port;
> +	prod = READ_ONCE(u->ring_prod);
> +	cons = READ_ONCE(u->ring_cons);
> +
> +	if ((prod - cons) < u->ring_size) {
> +		*evtchn_ring_entry(u, prod) = evtchn->port;
>  		smp_wmb(); /* Ensure ring contents visible */
> -		if (u->ring_cons == u->ring_prod++) {
> +		if (cons == prod++) {
> +			WRITE_ONCE(u->ring_prod, prod);
>  			wake_up_interruptible(&u->evtchn_wait);
>  			kill_fasync(&u->evtchn_async_queue,
>  				    SIGIO, POLL_IN);

This doesn't work correctly: u->ring_prod is now only updated when cons == prod, i.e. when the ring was previously empty, so entries added to a non-empty ring are never published to the consumer.

Ross


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 13:49:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 13:49:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86273.161834 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCNCO-0001Vm-EK; Wed, 17 Feb 2021 13:49:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86273.161834; Wed, 17 Feb 2021 13:49:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCNCO-0001Vf-B7; Wed, 17 Feb 2021 13:49:00 +0000
Received: by outflank-mailman (input) for mailman id 86273;
 Wed, 17 Feb 2021 13:48:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SSRj=HT=invisiblethingslab.com=marmarek@srs-us1.protection.inumbo.net>)
 id 1lCNCN-0001VX-E7
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 13:48:59 +0000
Received: from wout2-smtp.messagingengine.com (unknown [64.147.123.25])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 36722e62-5a52-44cd-8208-363e8c21879a;
 Wed, 17 Feb 2021 13:48:58 +0000 (UTC)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailout.west.internal (Postfix) with ESMTP id 7B4DED32;
 Wed, 17 Feb 2021 08:48:57 -0500 (EST)
Received: from mailfrontend1 ([10.202.2.162])
 by compute3.internal (MEProxy); Wed, 17 Feb 2021 08:48:57 -0500
Received: from mail-itl (unknown [91.67.79.4])
 by mail.messagingengine.com (Postfix) with ESMTPA id 637FE240062;
 Wed, 17 Feb 2021 08:48:56 -0500 (EST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 36722e62-5a52-44cd-8208-363e8c21879a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:content-type:date:from:in-reply-to
	:message-id:mime-version:references:subject:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm2; bh=5d4LBG
	6Qg/xjUkSWjDeu9znpavuBLKz6fQQ9smZ7o3c=; b=P6Wu4WTOlx+W2gUNm1cg1C
	JA6iZaFUM+Sqzi0M4v4F6y8TMfNuYWS4PxqsaUJ3ti7knc8+GNAegE/4uXDi8Vw7
	tAlE82D7dbCePw6GJX9MYluLtUYYRq4rUtWf8Iv72K4Tc6e+JcScqB/5JDMo5Q8q
	Meese+ze4W13/5laaVr9OXD//NypdY933p/5SzbFVeQt3riCI3EUqi2B9DX3Ge/u
	xd7hBy3CuNgtvSMvfxpPmtvBARqQ93KttB/2A2Tl38mrTjnp5e/nx+R/HI+64Pc7
	r47q3vd73/9aAE9TKXLnOj99+3JjyFJetaASowwbFszRi+Rnq/xUl//2B6L1BC2g
	==
X-ME-Sender: <xms:yB4tYCY0l_S-l9pXzXfxOpK6QAoDcPpcO49aBEP07LpyAA1p1HZ8Eg>
    <xme:yB4tYE-XYaJWFg14mJsYFY1EgTrH-slbCKy6mhaiLPbJSLwFEy8sWepzqieLhREE7
    fpSxY0dgr-6pw>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgeduledrjedvgdefudcutefuodetggdotefrodftvf
    curfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfghnecu
    uegrihhlohhuthemuceftddtnecunecujfgurhepfffhvffukfhfgggtuggjsehgtderre
    dttdejnecuhfhrohhmpeforghrvghkucforghrtgiihihkohifshhkihdqifpkrhgvtghk
    ihcuoehmrghrmhgrrhgvkhesihhnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhmqe
    enucggtffrrghtthgvrhhnpeffkeffudevvddtkefhveelvdfgudehvdeitdduheeugfei
    jeektdeuhfefveefhfenucffohhmrghinhepphgrthgthhgvfidrohhrghenucfkpheple
    durdeijedrjeelrdegnecuvehluhhsthgvrhfuihiivgeptdenucfrrghrrghmpehmrghi
    lhhfrhhomhepmhgrrhhmrghrvghksehinhhvihhsihgslhgvthhhihhnghhslhgrsgdrtg
    homh
X-ME-Proxy: <xmx:yB4tYBT6zbKCLY67AsDji7Km9jcS1wP-uhhW93i8T3bWq2ysL4wHag>
    <xmx:yB4tYHLysVOZJmgD1zbIufowupXWSmEEoMmSxKuVsv-ihXZTf5D_Jg>
    <xmx:yB4tYLQ3aiLf5PsUKncGg7zAJi9x2kb27-Gro_syYoUQUrEG_5tC1g>
    <xmx:yR4tYNGz3saZyQrLXfiu5-oTLG7vGs2gUSaTPSIqWXBe_5arlgmA7Q>
Date: Wed, 17 Feb 2021 14:48:51 +0100
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: Linux PV/PVH domU crash on (guest) resume from suspend
Message-ID: <YC0exVYljxxvwFFt@mail-itl>
References: <YCylpKU8F+Hsg8YL@mail-itl>
 <0b71a671-592a-53ab-6b4a-1fe15b9eb453@suse.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="Eept/bozI/Ba/6p9"
Content-Disposition: inline
In-Reply-To: <0b71a671-592a-53ab-6b4a-1fe15b9eb453@suse.com>


--Eept/bozI/Ba/6p9
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Wed, 17 Feb 2021 14:48:51 +0100
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: Linux PV/PVH domU crash on (guest) resume from suspend

On Wed, Feb 17, 2021 at 07:51:42AM +0100, Jürgen Groß wrote:
> On 17.02.21 06:12, Marek Marczykowski-Górecki wrote:
> > Hi,
> >
> > I'm observing Linux PV/PVH guest crash when I resume it from sleep. I do
> > this with:
> >
> >      virsh -c xen dompmsuspend <vmname> mem
> >      virsh -c xen dompmwakeup <vmname>
> >
> > But it's possible to trigger it with plain xl too:
> >
> >      xl save -c <vmname> <some-file>
> >
> > The same on HVM works fine.
> >
> > This is on Xen 4.14.1, and with guest kernel 5.4.90, the same happens
> > with 5.4.98. Dom0 kernel is the same, but I'm not sure if that's
> > relevant here. I can reliably reproduce it.
>
> This is already on my list of issues to look at.
>
> The problem seems to be related to the XSA-332 patches. You could try
> the patches I've sent out recently addressing other fallout from XSA-332
> which _might_ fix this issue, too:
>
> https://patchew.org/Xen/20210211101616.13788-1-jgross@suse.com/

Thanks for the patches. Sadly it doesn't change anything - I get exactly
the same crash. I applied that on top of 5.11-rc7 (that's what I had
handy). If you think there may be a difference with the final 5.11 or
another branch, please let me know.

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

--Eept/bozI/Ba/6p9
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmAtHsUACgkQ24/THMrX
1yxFhQgAmfd5RKukJUsaQpSaNUrTV/lAylrIb9bSgHpKk+OFOUvQlcVzpADT/eB1
WLTdNHZ0fENjW96HSI3NUZR4YmzOl/2Zw5vV1wiokwhi6eMR3PrLwGqRY7gMdU4f
NZf31wQhukgZ/yxzJesCar1EnpoAmfIMZyGx73BDXoRcXlqx5UKf7HIZ6Ta/LIkE
WAq1jXXT6RK6O/FjsmYmJaigbqSqCMyymC1JdxBGNCk7gtw6Sh2NOfYbbC/VnC+2
38z6LJVTpjZUTKEkXQYlgYv88PHTFf24SnUwNN/ju6IMPv/cD4lw8qbcPh459RLm
rn7DoiUL5VJv+6qIapngzWEMdFqVmw==
=Yj3r
-----END PGP SIGNATURE-----

--Eept/bozI/Ba/6p9--


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 13:54:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 13:54:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86276.161847 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCNI5-0002SB-3J; Wed, 17 Feb 2021 13:54:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86276.161847; Wed, 17 Feb 2021 13:54:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCNI5-0002S4-0R; Wed, 17 Feb 2021 13:54:53 +0000
Received: by outflank-mailman (input) for mailman id 86276;
 Wed, 17 Feb 2021 13:54:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lCNI3-0002Rw-Vm
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 13:54:52 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lCNHx-00045M-MY; Wed, 17 Feb 2021 13:54:45 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lCNHx-0002fc-EM; Wed, 17 Feb 2021 13:54:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=/BMI8jzOWOIBwNHI/soZA7yadtHGIBIyCRbij6dumj4=; b=xa7WckS1UZuapeyC5uvhAFU381
	y52Rg0i9K8gtuYyB4kU4FJ2GbUJUKX71fpKIPcIL6DswSmd6vBWeKlKbzouFLuZdhIdtijkl72yS0
	nia8H5KQxANTdX3fghYT0edGs8JCdDBTf5l3kSa9iuZzoQbVskaJWCTH2NLITdlfh0MU=;
Subject: Re: [for-4.15][PATCH v2 5/5] xen/iommu: x86: Clear the root
 page-table before freeing the page-tables
To: "Tian, Kevin" <kevin.tian@intel.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: "hongyxia@amazon.co.uk" <hongyxia@amazon.co.uk>,
 "iwj@xenproject.org" <iwj@xenproject.org>, Julien Grall <jgrall@amazon.com>,
 Jan Beulich <jbeulich@suse.com>, "Cooper, Andrew"
 <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>
References: <20210209152816.15792-1-julien@xen.org>
 <20210209152816.15792-6-julien@xen.org>
 <MWHPR11MB18869B3DB550711B3F6F6CA48C8D9@MWHPR11MB1886.namprd11.prod.outlook.com>
From: Julien Grall <julien@xen.org>
Message-ID: <a8636cac-c4da-27f6-404e-20e9e58a77b1@xen.org>
Date: Wed, 17 Feb 2021 13:54:43 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <MWHPR11MB18869B3DB550711B3F6F6CA48C8D9@MWHPR11MB1886.namprd11.prod.outlook.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Kevin,

On 10/02/2021 02:21, Tian, Kevin wrote:
>> From: Julien Grall <julien@xen.org>
>> Sent: Tuesday, February 9, 2021 11:28 PM
>>
>> From: Julien Grall <jgrall@amazon.com>
>>
>> The new per-domain IOMMU page-table allocator will now free the
>> page-tables when domain's resources are relinquished. However, the root
>> page-table (i.e. hd->arch.pg_maddr) will not be cleared.
> 
> It's the pointer not the table itself.

I don't think the sentence is incorrect. "Clearing" can be read as 
either clearing the page-table itself or the pointer to it.
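For what it's worth, the two readings can be sketched in plain C (illustrative names only, not the actual Xen structures or code):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical per-domain IOMMU state, standing in for hd->arch. */
struct domain_iommu_sketch {
    uint64_t *root;   /* root page-table: an array of entries */
};

/* Reading 1: clear the table itself, i.e. zero every entry so that
 * all translations become non-present. The table stays allocated. */
static void clear_table(struct domain_iommu_sketch *hd, size_t nr)
{
    for ( size_t i = 0; i < nr; i++ )
        hd->root[i] = 0;
}

/* Reading 2: clear the pointer kept in the per-domain structure, so
 * the (soon to be freed) table is no longer reachable from it. */
static void clear_pointer(struct domain_iommu_sketch *hd)
{
    hd->root = NULL;
}
```

The patch below does the latter: the pages are freed by the allocator, and only the dangling reference is dropped.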

> 
>>
>> Xen may access the IOMMU page-tables afterwards at least in the case of
>> PV domain:
>>
>> (XEN) Xen call trace:
>> (XEN)    [<ffff82d04025b4b2>] R
>> iommu.c#addr_to_dma_page_maddr+0x12e/0x1d8
>> (XEN)    [<ffff82d04025b695>] F
>> iommu.c#intel_iommu_unmap_page+0x5d/0xf8
>> (XEN)    [<ffff82d0402695f3>] F iommu_unmap+0x9c/0x129
>> (XEN)    [<ffff82d0402696a6>] F iommu_legacy_unmap+0x26/0x63
>> (XEN)    [<ffff82d04033c5c7>] F mm.c#cleanup_page_mappings+0x139/0x144
>> (XEN)    [<ffff82d04033c61d>] F put_page+0x4b/0xb3
>> (XEN)    [<ffff82d04033c87f>] F put_page_from_l1e+0x136/0x13b
>> (XEN)    [<ffff82d04033cada>] F devalidate_page+0x256/0x8dc
>> (XEN)    [<ffff82d04033d396>] F mm.c#_put_page_type+0x236/0x47e
>> (XEN)    [<ffff82d04033d64d>] F mm.c#put_pt_page+0x6f/0x80
>> (XEN)    [<ffff82d04033d8d6>] F mm.c#put_page_from_l2e+0x8a/0xcf
>> (XEN)    [<ffff82d04033cc27>] F devalidate_page+0x3a3/0x8dc
>> (XEN)    [<ffff82d04033d396>] F mm.c#_put_page_type+0x236/0x47e
>> (XEN)    [<ffff82d04033d64d>] F mm.c#put_pt_page+0x6f/0x80
>> (XEN)    [<ffff82d04033d807>] F mm.c#put_page_from_l3e+0x8a/0xcf
>> (XEN)    [<ffff82d04033cdf0>] F devalidate_page+0x56c/0x8dc
>> (XEN)    [<ffff82d04033d396>] F mm.c#_put_page_type+0x236/0x47e
>> (XEN)    [<ffff82d04033d64d>] F mm.c#put_pt_page+0x6f/0x80
>> (XEN)    [<ffff82d04033d6c7>] F mm.c#put_page_from_l4e+0x69/0x6d
>> (XEN)    [<ffff82d04033cf24>] F devalidate_page+0x6a0/0x8dc
>> (XEN)    [<ffff82d04033d396>] F mm.c#_put_page_type+0x236/0x47e
>> (XEN)    [<ffff82d04033d92e>] F put_page_type_preemptible+0x13/0x15
>> (XEN)    [<ffff82d04032598a>] F domain.c#relinquish_memory+0x1ff/0x4e9
>> (XEN)    [<ffff82d0403295f2>] F domain_relinquish_resources+0x2b6/0x36a
>> (XEN)    [<ffff82d040205cdf>] F domain_kill+0xb8/0x141
>> (XEN)    [<ffff82d040236cac>] F do_domctl+0xb6f/0x18e5
>> (XEN)    [<ffff82d04031d098>] F pv_hypercall+0x2f0/0x55f
>> (XEN)    [<ffff82d04039b432>] F lstar_enter+0x112/0x120
>>
>> This will result to a use after-free and possibly an host crash or
>> memory corruption.
>>
>> Freeing the page-tables further down in domain_relinquish_resources()
>> would not work because pages may not be released until later if another
>> domain hold a reference on them.
>>
>> Once all the PCI devices have been de-assigned, it is actually pointless
>> to access modify the IOMMU page-tables. So we can simply clear the root
>> page-table address.
> 
> Above two paragraphs are confusing to me. I don't know what exact change
> is proposed until looking over the whole patch. Isn't it clearer to just say
> "We should clear the root page table pointer in iommu_free_pgtables to
> avoid use-after-free after all pgtables are moved to the free list"?

Your sentence explains the approach taken but not the rationale behind 
it. Both are equally important for future reference.

I will try to reword it.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 14:25:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 14:25:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86283.161866 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCNlN-0005KJ-Tq; Wed, 17 Feb 2021 14:25:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86283.161866; Wed, 17 Feb 2021 14:25:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCNlN-0005K6-MU; Wed, 17 Feb 2021 14:25:09 +0000
Received: by outflank-mailman (input) for mailman id 86283;
 Wed, 17 Feb 2021 14:25:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lCNlM-0005JW-KY
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 14:25:08 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lCNlJ-0004eR-Rh; Wed, 17 Feb 2021 14:25:05 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lCNlJ-00057m-Fa; Wed, 17 Feb 2021 14:25:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=E10KWobPtnAxShnfJroEM2NY8XPPcudrmSVydi6i4NA=; b=kMCv4+ftElp2JtZlu3sfOUMXe
	W4wdDP4OTTLeHmeSn+X/qNJwqRQlDtAyI4YE6aX64pUTNugb2TNnXjMUuGj6zQy3esu/i8juW89EL
	CEZ4iWtCGncr3Ie7CLnIDBBlyFyx2iXp6TPKpQNA2kSmawFCCQ7QqPN2s3af4kcQ7qp7Q=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: hongyxia@amazon.co.uk,
	iwj@xenproject.org,
	Julien Grall <jgrall@amazon.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Kevin Tian <kevin.tian@intel.com>,
	Paul Durrant <paul@xen.org>
Subject: [for-4.15][PATCH v3 1/3] xen/iommu: x86: Clear the root page-table before freeing the page-tables
Date: Wed, 17 Feb 2021 14:24:56 +0000
Message-Id: <20210217142458.3769-2-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210217142458.3769-1-julien@xen.org>
References: <20210217142458.3769-1-julien@xen.org>

From: Julien Grall <jgrall@amazon.com>

The new per-domain IOMMU page-table allocator will now free the
page-tables when the domain's resources are relinquished. However, the
per-domain IOMMU structure will still contain a dangling pointer to
the root page-table.

Xen may access the IOMMU page-tables afterwards at least in the case of
PV domain:

(XEN) Xen call trace:
(XEN)    [<ffff82d04025b4b2>] R iommu.c#addr_to_dma_page_maddr+0x12e/0x1d8
(XEN)    [<ffff82d04025b695>] F iommu.c#intel_iommu_unmap_page+0x5d/0xf8
(XEN)    [<ffff82d0402695f3>] F iommu_unmap+0x9c/0x129
(XEN)    [<ffff82d0402696a6>] F iommu_legacy_unmap+0x26/0x63
(XEN)    [<ffff82d04033c5c7>] F mm.c#cleanup_page_mappings+0x139/0x144
(XEN)    [<ffff82d04033c61d>] F put_page+0x4b/0xb3
(XEN)    [<ffff82d04033c87f>] F put_page_from_l1e+0x136/0x13b
(XEN)    [<ffff82d04033cada>] F devalidate_page+0x256/0x8dc
(XEN)    [<ffff82d04033d396>] F mm.c#_put_page_type+0x236/0x47e
(XEN)    [<ffff82d04033d64d>] F mm.c#put_pt_page+0x6f/0x80
(XEN)    [<ffff82d04033d8d6>] F mm.c#put_page_from_l2e+0x8a/0xcf
(XEN)    [<ffff82d04033cc27>] F devalidate_page+0x3a3/0x8dc
(XEN)    [<ffff82d04033d396>] F mm.c#_put_page_type+0x236/0x47e
(XEN)    [<ffff82d04033d64d>] F mm.c#put_pt_page+0x6f/0x80
(XEN)    [<ffff82d04033d807>] F mm.c#put_page_from_l3e+0x8a/0xcf
(XEN)    [<ffff82d04033cdf0>] F devalidate_page+0x56c/0x8dc
(XEN)    [<ffff82d04033d396>] F mm.c#_put_page_type+0x236/0x47e
(XEN)    [<ffff82d04033d64d>] F mm.c#put_pt_page+0x6f/0x80
(XEN)    [<ffff82d04033d6c7>] F mm.c#put_page_from_l4e+0x69/0x6d
(XEN)    [<ffff82d04033cf24>] F devalidate_page+0x6a0/0x8dc
(XEN)    [<ffff82d04033d396>] F mm.c#_put_page_type+0x236/0x47e
(XEN)    [<ffff82d04033d92e>] F put_page_type_preemptible+0x13/0x15
(XEN)    [<ffff82d04032598a>] F domain.c#relinquish_memory+0x1ff/0x4e9
(XEN)    [<ffff82d0403295f2>] F domain_relinquish_resources+0x2b6/0x36a
(XEN)    [<ffff82d040205cdf>] F domain_kill+0xb8/0x141
(XEN)    [<ffff82d040236cac>] F do_domctl+0xb6f/0x18e5
(XEN)    [<ffff82d04031d098>] F pv_hypercall+0x2f0/0x55f
(XEN)    [<ffff82d04039b432>] F lstar_enter+0x112/0x120

This will result in a use-after-free and possibly a host crash or
memory corruption.

It would not be possible to free the page-tables further down in
domain_relinquish_resources() because cleanup_page_mappings() will only
be called when the last reference on the page is dropped. This may
happen much later if another domain still holds a reference.

After all the PCI devices have been de-assigned, nobody should use the
IOMMU page-tables and it is therefore pointless to try to modify them.

So we can simply clear any reference to the root page-table in the
per-domain IOMMU structure. This requires introducing a new callback;
its implementation depends on the IOMMU driver used.

Fixes: 3eef6d07d722 ("x86/iommu: convert VT-d code to use new page table allocator")
Signed-off-by: Julien Grall <jgrall@amazon.com>

---
    Changes in v3:
        - Move the patch earlier in the series
        - Reword the commit message

    Changes in v2:
        - Introduce clear_root_pgtable()
        - Move the patch later in the series
---
 xen/drivers/passthrough/amd/pci_amd_iommu.c | 12 +++++++++++-
 xen/drivers/passthrough/vtd/iommu.c         | 12 +++++++++++-
 xen/drivers/passthrough/x86/iommu.c         |  9 +++++++++
 xen/include/xen/iommu.h                     |  1 +
 4 files changed, 32 insertions(+), 2 deletions(-)

diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
index 42b5a5a9bec4..81add0ba26b4 100644
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -381,9 +381,18 @@ static int amd_iommu_assign_device(struct domain *d, u8 devfn,
     return reassign_device(pdev->domain, d, devfn, pdev);
 }
 
+static void iommu_clear_root_pgtable(struct domain *d)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+
+    spin_lock(&hd->arch.mapping_lock);
+    hd->arch.amd.root_table = NULL;
+    spin_unlock(&hd->arch.mapping_lock);
+}
+
 static void amd_iommu_domain_destroy(struct domain *d)
 {
-    dom_iommu(d)->arch.amd.root_table = NULL;
+    ASSERT(!dom_iommu(d)->arch.amd.root_table);
 }
 
 static int amd_iommu_add_device(u8 devfn, struct pci_dev *pdev)
@@ -565,6 +574,7 @@ static const struct iommu_ops __initconstrel _iommu_ops = {
     .remove_device = amd_iommu_remove_device,
     .assign_device  = amd_iommu_assign_device,
     .teardown = amd_iommu_domain_destroy,
+    .clear_root_pgtable = iommu_clear_root_pgtable,
     .map_page = amd_iommu_map_page,
     .unmap_page = amd_iommu_unmap_page,
     .iotlb_flush = amd_iommu_flush_iotlb_pages,
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index d136fe36883b..e1871f6c2bc1 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1726,6 +1726,15 @@ out:
     return ret;
 }
 
+static void iommu_clear_root_pgtable(struct domain *d)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+
+    spin_lock(&hd->arch.mapping_lock);
+    hd->arch.vtd.pgd_maddr = 0;
+    spin_unlock(&hd->arch.mapping_lock);
+}
+
 static void iommu_domain_teardown(struct domain *d)
 {
     struct domain_iommu *hd = dom_iommu(d);
@@ -1740,7 +1749,7 @@ static void iommu_domain_teardown(struct domain *d)
         xfree(mrmrr);
     }
 
-    hd->arch.vtd.pgd_maddr = 0;
+    ASSERT(!hd->arch.vtd.pgd_maddr);
 }
 
 static int __must_check intel_iommu_map_page(struct domain *d, dfn_t dfn,
@@ -2719,6 +2728,7 @@ static struct iommu_ops __initdata vtd_ops = {
     .remove_device = intel_iommu_remove_device,
     .assign_device  = intel_iommu_assign_device,
     .teardown = iommu_domain_teardown,
+    .clear_root_pgtable = iommu_clear_root_pgtable,
     .map_page = intel_iommu_map_page,
     .unmap_page = intel_iommu_unmap_page,
     .lookup_page = intel_iommu_lookup_page,
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index cea1032b3d02..f54fc8093f18 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -267,6 +267,15 @@ int iommu_free_pgtables(struct domain *d)
     struct page_info *pg;
     unsigned int done = 0;
 
+    if ( !is_iommu_enabled(d) )
+        return 0;
+
+    /*
+     * Pages will be moved to the free list below. So we want to
+     * clear the root page-table to avoid any potential use after-free.
+     */
+    hd->platform_ops->clear_root_pgtable(d);
+
     while ( (pg = page_list_remove_head(&hd->arch.pgtables.list)) )
     {
         free_domheap_page(pg);
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 863a68fe1622..d59ed7cbad43 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -272,6 +272,7 @@ struct iommu_ops {
 
     int (*adjust_irq_affinities)(void);
     void (*sync_cache)(const void *addr, unsigned int size);
+    void (*clear_root_pgtable)(struct domain *d);
 #endif /* CONFIG_X86 */
 
     int __must_check (*suspend)(void);
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Feb 17 14:25:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 14:25:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86285.161885 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCNlO-0005Ln-Kf; Wed, 17 Feb 2021 14:25:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86285.161885; Wed, 17 Feb 2021 14:25:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCNlO-0005LS-At; Wed, 17 Feb 2021 14:25:10 +0000
Received: by outflank-mailman (input) for mailman id 86285;
 Wed, 17 Feb 2021 14:25:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lCNlM-0005Jc-W0
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 14:25:08 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lCNlL-0004eT-3C; Wed, 17 Feb 2021 14:25:07 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lCNlK-00057m-QC; Wed, 17 Feb 2021 14:25:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=t6YsWVHT5mEBG6Q0BImhmwLAtI3B16h1jksP0n7ERIQ=; b=3uZES943amRECcWSte8d/Dw7R
	ooUX+hMcyyiJcd2+SiVkaL03GfOgZudyjNCtUsOlGFIAXRWUKX6zPL084P0mq3UBltQLNE3oIQNtO
	zRGebw4/RErIbC+mIgjphC42rG3SWLKaKVC/grLUGVeLLo8qTPRk6Fg09klw7eKzFDrbQ=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: hongyxia@amazon.co.uk,
	iwj@xenproject.org,
	Julien Grall <jgrall@amazon.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Kevin Tian <kevin.tian@intel.com>,
	Paul Durrant <paul@xen.org>
Subject: [for-4.15][PATCH v3 2/3] xen/x86: iommu: Ignore IOMMU mapping requests when a domain is dying
Date: Wed, 17 Feb 2021 14:24:57 +0000
Message-Id: <20210217142458.3769-3-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210217142458.3769-1-julien@xen.org>
References: <20210217142458.3769-1-julien@xen.org>

From: Julien Grall <jgrall@amazon.com>

The new x86 IOMMU page-table allocator will release the pages when
relinquishing the domain resources. However, this is not sufficient
when the domain is dying, because nothing prevents page-tables from
being allocated.

Currently page-table allocations can only happen from iommu_map(). As
the domain is dying, there is no good reason to continue to modify the
IOMMU page-tables.

In order to observe d->is_dying correctly, we need to rely on per-arch
locking, so the check to ignore IOMMU mappings is added in the
per-driver map_page() callback.

Signed-off-by: Julien Grall <jgrall@amazon.com>

---

Changes in v3:
    - Patch added. This is a replacement of "xen/iommu: iommu_map: Don't
    crash the domain if it is dying"
---
 xen/drivers/passthrough/amd/iommu_map.c | 13 +++++++++++++
 xen/drivers/passthrough/vtd/iommu.c     | 13 +++++++++++++
 xen/drivers/passthrough/x86/iommu.c     |  3 +++
 3 files changed, 29 insertions(+)

diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
index d3a8b1aec766..ed78a083ba12 100644
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -285,6 +285,19 @@ int amd_iommu_map_page(struct domain *d, dfn_t dfn, mfn_t mfn,
 
     spin_lock(&hd->arch.mapping_lock);
 
+    /*
+     * IOMMU mapping request can be safely ignored when the domain is dying.
+     *
+     * hd->arch.mapping_lock guarantees that d->is_dying will be observed
+     * before any page tables are freed (see iommu_free_pgtables() and
+     * iommu_clear_root_pgtable()).
+     */
+    if ( d->is_dying )
+    {
+        spin_unlock(&hd->arch.mapping_lock);
+        return 0;
+    }
+
     rc = amd_iommu_alloc_root(d);
     if ( rc )
     {
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index e1871f6c2bc1..239a63f74f64 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1771,6 +1771,19 @@ static int __must_check intel_iommu_map_page(struct domain *d, dfn_t dfn,
 
     spin_lock(&hd->arch.mapping_lock);
 
+    /*
+     * IOMMU mapping request can be safely ignored when the domain is dying.
+     *
+     * hd->arch.mapping_lock guarantees that d->is_dying will be observed
+     * before any page tables are freed (see iommu_free_pgtables() and
+     * iommu_clear_root_pgtable()).
+     */
+    if ( d->is_dying )
+    {
+        spin_unlock(&hd->arch.mapping_lock);
+        return 0;
+    }
+
     pg_maddr = addr_to_dma_page_maddr(d, dfn_to_daddr(dfn), 1);
     if ( !pg_maddr )
     {
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index f54fc8093f18..faa0078db595 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -273,6 +273,9 @@ int iommu_free_pgtables(struct domain *d)
     /*
      * Pages will be moved to the free list below. So we want to
      * clear the root page-table to avoid any potential use after-free.
+     *
+     * After this call, no more IOMMU mappings can be created for the
+     * domain.
      */
     hd->platform_ops->clear_root_pgtable(d);
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Feb 17 14:25:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 14:25:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86284.161875 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCNlO-0005L2-6e; Wed, 17 Feb 2021 14:25:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86284.161875; Wed, 17 Feb 2021 14:25:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCNlO-0005Kq-0d; Wed, 17 Feb 2021 14:25:10 +0000
Received: by outflank-mailman (input) for mailman id 86284;
 Wed, 17 Feb 2021 14:25:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lCNlM-0005Jb-Vv
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 14:25:08 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lCNlM-0004eb-6H; Wed, 17 Feb 2021 14:25:08 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lCNlL-00057m-TP; Wed, 17 Feb 2021 14:25:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=eHbALjFmdHYfqoCsH6vCHJNGB5r0g1/oYxtKAGV8sK0=; b=JYG72ERHoej3PUjHk2Ezhu6HI
	6R/Jd6aWbu6IG7m/idXbr+zjnMm4YDUpSLitQsM0p3sdFlVMYxW+Y4SehyOKFh6vWnDyufcfjxkcE
	giAPs4bmXmGSCWr/sm29fqlQmzIR+ogVnElCABJ24D83UU9bgwQGoOm1acNo3zL1PLYMs=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: hongyxia@amazon.co.uk,
	iwj@xenproject.org,
	Julien Grall <jgrall@amazon.com>,
	Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>
Subject: [for-4.15][PATCH v3 3/3] xen/iommu: x86: Harden the IOMMU page-table allocator
Date: Wed, 17 Feb 2021 14:24:58 +0000
Message-Id: <20210217142458.3769-4-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210217142458.3769-1-julien@xen.org>
References: <20210217142458.3769-1-julien@xen.org>

From: Julien Grall <jgrall@amazon.com>

At the moment, we are assuming that only iommu_map() can allocate
IOMMU page-tables.

Given the complexity of the IOMMU framework, it would be sensible to
have a check closer to the IOMMU allocator. This would avoid leaking
IOMMU page-tables again in the future.

iommu_alloc_pgtable() now checks whether the domain is dying before
adding the page to the list. We rely on &hd->arch.pgtables.lock
to synchronize d->is_dying.

Take the opportunity to add an ASSERT() in arch_iommu_domain_destroy()
to check that all the IOMMU page-tables have been freed.

Signed-off-by: Julien Grall <jgrall@amazon.com>

---

Changes in v3:
    - Rename the patch. This was originally "xen/iommu: x86: Don't leak
    the IOMMU page-tables"
    - Rework the commit message
    - Move the patch towards the end of the series

Changes in v2:
    - Rework the approach
    - Move the patch earlier in the series
---
 xen/drivers/passthrough/x86/iommu.c | 33 ++++++++++++++++++++++++++++-
 1 file changed, 32 insertions(+), 1 deletion(-)

diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index faa0078db595..a67075f0045d 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -149,6 +149,13 @@ int arch_iommu_domain_init(struct domain *d)
 
 void arch_iommu_domain_destroy(struct domain *d)
 {
+    /*
+     * There should be no page-tables left allocated by the time the
+     * domain is destroyed. Note that arch_iommu_domain_destroy() is
+     * called unconditionally, so pgtables may be uninitialized.
+     */
+    ASSERT(dom_iommu(d)->platform_ops == NULL ||
+           page_list_empty(&dom_iommu(d)->arch.pgtables.list));
 }
 
 static bool __hwdom_init hwdom_iommu_map(const struct domain *d,
@@ -279,6 +286,9 @@ int iommu_free_pgtables(struct domain *d)
      */
     hd->platform_ops->clear_root_pgtable(d);
 
+    /* After this barrier no new page allocations can occur. */
+    spin_barrier(&hd->arch.pgtables.lock);
+
     while ( (pg = page_list_remove_head(&hd->arch.pgtables.list)) )
     {
         free_domheap_page(pg);
@@ -296,6 +306,7 @@ struct page_info *iommu_alloc_pgtable(struct domain *d)
     unsigned int memflags = 0;
     struct page_info *pg;
     void *p;
+    bool alive = false;
 
 #ifdef CONFIG_NUMA
     if ( hd->node != NUMA_NO_NODE )
@@ -315,9 +326,29 @@ struct page_info *iommu_alloc_pgtable(struct domain *d)
     unmap_domain_page(p);
 
     spin_lock(&hd->arch.pgtables.lock);
-    page_list_add(pg, &hd->arch.pgtables.list);
+    /*
+     * The IOMMU page-tables are freed when relinquishing the domain, but
+     * nothing prevents allocations from happening afterwards. There is
+     * no valid reason to continue to update the IOMMU page-tables while
+     * the domain is dying.
+     *
+     * So prevent page-table allocation when the domain is dying.
+     *
+     * We rely on &hd->arch.pgtables.lock to synchronize d->is_dying.
+     */
+    if ( likely(!d->is_dying) )
+    {
+        alive = true;
+        page_list_add(pg, &hd->arch.pgtables.list);
+    }
     spin_unlock(&hd->arch.pgtables.lock);
 
+    if ( unlikely(!alive) )
+    {
+        free_domheap_page(pg);
+        pg = NULL;
+    }
+
     return pg;
 }
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Feb 17 14:25:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 14:25:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86282.161859 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCNlN-0005Js-HQ; Wed, 17 Feb 2021 14:25:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86282.161859; Wed, 17 Feb 2021 14:25:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCNlN-0005Jl-EQ; Wed, 17 Feb 2021 14:25:09 +0000
Received: by outflank-mailman (input) for mailman id 86282;
 Wed, 17 Feb 2021 14:25:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lCNlL-0005JR-TG
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 14:25:07 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lCNlI-0004eP-Gy; Wed, 17 Feb 2021 14:25:04 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lCNlI-00057m-58; Wed, 17 Feb 2021 14:25:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From;
	bh=ZmVFimIAQ1oAEuVpblWKPooP062mJ9LJUPpU+7TXU8A=; b=MQMzPDIzMHLhMJ9HqUd6HvPP7U
	DE6DTLxnzzpX6zYovY4qsA0Uk2Fva/LTTQujD8KUOpGqorneYe3kYiNqzogIuRHQd9DljUntB2qZP
	HOgs+wp/crzg1jRNTRaz2YDg1e9in6WSOHgKiuT9bWO7KS+WbVdCKuO2wHUn2kssCZEg=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: hongyxia@amazon.co.uk,
	iwj@xenproject.org,
	Julien Grall <jgrall@amazon.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Kevin Tian <kevin.tian@intel.com>,
	Paul Durrant <paul@xen.org>
Subject: [for-4.15][PATCH v3 0/3] xen/iommu: Collection of bug fixes for IOMMU teardown
Date: Wed, 17 Feb 2021 14:24:55 +0000
Message-Id: <20210217142458.3769-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1

From: Julien Grall <jgrall@amazon.com>

Hi all,

This series is a collection of bug fixes for the IOMMU teardown code.
All of them are candidates for 4.15 as they can either leak memory or
lead to host crash/host corruption.

This is sent directly on xen-devel because all the issues were either
introduced in 4.15 or happen in the domain creation code.

Major changes since v2:
    - patch #1 "xen/x86: p2m: Don't map the special pages in the IOMMU
    page-tables" has been removed. This requires Jan's patch [1] to
    fully mitigate memory leaks.

Cheers,

[1] https://lore.kernel.org/xen-devel/1b6a411b-84e7-bfb1-647e-511a13df838c@suse.com

Julien Grall (3):
  xen/iommu: x86: Clear the root page-table before freeing the
    page-tables
  xen/x86: iommu: Ignore IOMMU mapping requests when a domain is dying
  xen/iommu: x86: Harden the IOMMU page-table allocator

 xen/drivers/passthrough/amd/iommu_map.c     | 13 ++++++
 xen/drivers/passthrough/amd/pci_amd_iommu.c | 12 +++++-
 xen/drivers/passthrough/vtd/iommu.c         | 25 +++++++++++-
 xen/drivers/passthrough/x86/iommu.c         | 45 ++++++++++++++++++++-
 xen/include/xen/iommu.h                     |  1 +
 5 files changed, 93 insertions(+), 3 deletions(-)

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Feb 17 14:27:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 14:27:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86294.161907 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCNnb-0005nK-5T; Wed, 17 Feb 2021 14:27:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86294.161907; Wed, 17 Feb 2021 14:27:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCNnb-0005nD-1Z; Wed, 17 Feb 2021 14:27:27 +0000
Received: by outflank-mailman (input) for mailman id 86294;
 Wed, 17 Feb 2021 14:27:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lCNnZ-0005n7-VK
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 14:27:25 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lCNnZ-0004if-2n; Wed, 17 Feb 2021 14:27:25 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lCNnY-0005H0-TD; Wed, 17 Feb 2021 14:27:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=SqzphOsvkZYjVu8uGONDWuZ5Zn6sou//Nag8FX3HAAQ=; b=EcHACXGE6wVnU/jMIjaYkzv6ec
	vYv7FqOsExKGYo+NaTTquCY8HmTX4mo+XzeZZVvOS5gJ/1kp3KT5K2pxwnjc4s3Zi/BRz7hNROLpo
	iH1gyPyMcJNb3RZI++XPEmQ5i4CVj1JkLE1ljT8EBt0N1SUQ3dPn8tTsvV25gbp6iBGs=;
Subject: Re: [PATCH 2/3] gnttab: bypass IOMMU (un)mapping when a domain is
 (un)mapping its own grant
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Rahul Singh <Rahul.Singh@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <156559d5-853a-5bb9-942b-f623627e0907@suse.com>
 <5bb4fba7-a10b-90c4-82f7-8cde6e8cacfb@suse.com>
 <a267959c-538e-0f90-f9a5-72e836f33cb4@xen.org>
 <283d8514-16fb-8973-e395-0a86bf820306@suse.com>
 <6ece0308-504f-5127-b7af-5c801630253b@xen.org>
 <ef8991ad-ba9e-c4f0-e4cc-fb4655608549@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <c9e48ef3-7de3-f6c8-7976-a574b7b8be90@xen.org>
Date: Wed, 17 Feb 2021 14:27:22 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <ef8991ad-ba9e-c4f0-e4cc-fb4655608549@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 17/02/2021 13:16, Jan Beulich wrote:
> On 17.02.2021 12:41, Julien Grall wrote:
>> Hi Jan,
>>
>> On 17/02/2021 11:38, Jan Beulich wrote:
>>> On 17.02.2021 12:03, Julien Grall wrote:
>>>> On 17/02/2021 10:46, Jan Beulich wrote:
>>>>> Mappings for a domain's own pages should already be present in the
>>>>> IOMMU. While installing the same mapping again is merely redundant (and
>>>>> inefficient), removing the mapping when the grant mapping gets removed
>>>>> is outright wrong in this case: The mapping was there before the map, so
>>>>> should remain in place after unmapping.
>>>>>
>>>>> This affects
>>>>> - Arm Dom0 in the direct mapped case,
>>>>> - x86 PV Dom0 in the "iommu=dom0-strict" / "dom0-iommu=strict" cases,
>>>>> - all x86 PV DomU-s, including driver domains.
>>>>>
>>>>> Reported-by: Rahul Singh <Rahul.Singh@arm.com>
>>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>>>
>>>>> --- a/xen/common/grant_table.c
>>>>> +++ b/xen/common/grant_table.c
>>>>> @@ -1243,7 +1243,7 @@ map_grant_ref(
>>>>>             goto undo_out;
>>>>>         }
>>>>>     
>>>>> -    need_iommu = gnttab_need_iommu_mapping(ld);
>>>>> +    need_iommu = ld != rd && gnttab_need_iommu_mapping(ld);
>>>>
>>>> AFAICT, the owner of the page may not always be rd. So do we want to
>>>> check against the owner instead?
>>>
>>> For the DomIO case - specifically not. And the DomCOW case can't
>>> happen when an IOMMU is in use. Did I overlook any other cases
>>> where the page may not be owned by rd?
>>
>> For the current code, it looks like not. But it feels to me this code is
>> fragile as we are assuming that other cases should never happen.
>>
>> I think it would be worth explaining in a comment and the commit message
>> why checking rd rather than the page owner is sufficient.
> 
> Well, I've added
> 
>      /*
>       * This is deliberately not checking the page's owner: get_paged_frame()
>       * explicitly rejects foreign pages, and all success paths above yield
>       * either owner == rd or owner == dom_io (the dom_cow case is irrelevant
>       * as mem-sharing and IOMMU use are incompatible). The dom_io case would
>       * need checking separately if we compared against owner here.
>       */
> 
> to map_grant_ref(), and a reference to this comment to both
> unmap_common() and the commit message. Will this do?

LGTM. With that, you can add:

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 14:32:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 14:32:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86297.161918 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCNrw-0006ji-N9; Wed, 17 Feb 2021 14:31:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86297.161918; Wed, 17 Feb 2021 14:31:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCNrw-0006jb-K5; Wed, 17 Feb 2021 14:31:56 +0000
Received: by outflank-mailman (input) for mailman id 86297;
 Wed, 17 Feb 2021 14:31:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GfGR=HT=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1lCNrv-0006jW-Oo
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 14:31:55 +0000
Received: from mail-wr1-x432.google.com (unknown [2a00:1450:4864:20::432])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ef74b978-a6b5-4adb-8a5e-9912f77cc903;
 Wed, 17 Feb 2021 14:31:54 +0000 (UTC)
Received: by mail-wr1-x432.google.com with SMTP id g6so17596810wrs.11
 for <xen-devel@lists.xenproject.org>; Wed, 17 Feb 2021 06:31:54 -0800 (PST)
Received: from [192.168.1.36] (68.red-83-57-175.dynamicip.rima-tde.net.
 [83.57.175.68])
 by smtp.gmail.com with ESMTPSA id f14sm3183012wmc.32.2021.02.17.06.31.52
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 17 Feb 2021 06:31:53 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
X-Inumbo-ID: ef74b978-a6b5-4adb-8a5e-9912f77cc903
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=sender:subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=BHNj0CsMa8fBSRbxi+Me5az+imrRkM91kLCrLrDQvAA=;
        b=RkKkcOCsCXWjZNtTrohUTRVjmHmeqePt0CR5mWa8AjZQDEcn4wCQT9VbjL6XWj1zz6
         Xu1/MV+8SUi31koMrWxrOPw/BJNraCbgEs9uZ4lP29GPqJQWcbq4/EI7y3JshdOZ43K0
         Ol0eLHEe9MqOwWnIbcGm9fghtTF8KYJ/jGlOLlAzR03Re7cSYc7MnoLVJcyfnpANEz9C
         43ajeTmAhhF/xtHKBcNkDTCFaBRz43cakBojyQq+paTwjtuKs5RgFkvUg1pi3OgzKcDY
         0osS45Jfe5PG1YqiYJwzIH9x9A+DCU2YqMzeHdQvu5qOzrRDmWjD9YkMxKnIQXWM/ODT
         ZHRA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:sender:subject:to:cc:references:from:message-id
         :date:user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=BHNj0CsMa8fBSRbxi+Me5az+imrRkM91kLCrLrDQvAA=;
        b=nHkhXiOIyJ1NGJi2KSYFLLWVfpiCX9wQLdtOuE0xTSRd+FN+HUczSsHxL5AZSp4+sl
         VLZ/kXViP+PPMbG8CFhOAaiIitKxSKXl+KFwTdzSWx8Zttqkhetqnwoq87eG9fyJs61K
         exmDDQNH1zpQUKHcJSTYKRs8PzrXJIE4Hs079mfriVCHIiJLQMxD1Fd1VItEhi5YJREL
         8l+vtl6y4B8Dp4GpmKnZgYKkzmhscSHp5raeqzD5Uva45V6zWZ9QiReson8ogK4xMiPM
         0IA+PkS7gOFX+XHxRjwzBAJ0ed6hho3Yq3z36C4JgtoqTIGdfhSgGMW9HvmCLJLZQ16/
         YzTA==
X-Gm-Message-State: AOAM530nuf7/80YYn3/h3YPEPgTQrbwQthjoOEEQ7690/tLs0u6NW6YH
	8N4enHzUmQi8RxSC+JiN2uM=
X-Google-Smtp-Source: ABdhPJwwpnlY8JJhdMQeN9JKBjWyuJTspQXY6pyTHoJ9TDkPo2M5y1c0t9Zqctp5xYzHnvaUxDoegw==
X-Received: by 2002:adf:a2c2:: with SMTP id t2mr10810395wra.47.1613572314058;
        Wed, 17 Feb 2021 06:31:54 -0800 (PST)
Sender: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philippe.mathieu.daude@gmail.com>
Subject: Re: [PATCH v2 7/7] tests/avocado: add boot_xen tests
To: =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>,
 qemu-devel@nongnu.org
Cc: julien@xen.org, stefano.stabellini@linaro.org,
 stefano.stabellini@xilinx.com, andre.przywara@arm.com,
 stratos-dev@op-lists.linaro.org, xen-devel@lists.xenproject.org,
 Cleber Rosa <crosa@redhat.com>,
 Wainer dos Santos Moschetta <wainersm@redhat.com>
References: <20210211171945.18313-1-alex.bennee@linaro.org>
 <20210211171945.18313-8-alex.bennee@linaro.org>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <f4bug@amsat.org>
Message-ID: <16d8366f-6f94-3a97-b9e0-9962e3d173a1@amsat.org>
Date: Wed, 17 Feb 2021 15:31:51 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <20210211171945.18313-8-alex.bennee@linaro.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 2/11/21 6:19 PM, Alex Bennée wrote:
> These tests make sure we can boot the Xen hypervisor with a Dom0
> kernel using the guest-loader. We currently have to use a kernel I
> built myself because there are issues using the Debian kernel images.
> 
> Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
> ---
>  MAINTAINERS                  |   1 +
>  tests/acceptance/boot_xen.py | 117 +++++++++++++++++++++++++++++++++++
>  2 files changed, 118 insertions(+)
>  create mode 100644 tests/acceptance/boot_xen.py

> diff --git a/tests/acceptance/boot_xen.py b/tests/acceptance/boot_xen.py
> new file mode 100644
> index 0000000000..8c7e091d40
> --- /dev/null
> +++ b/tests/acceptance/boot_xen.py
> @@ -0,0 +1,117 @@
> +# Functional test that boots a Xen hypervisor with a domU kernel and
> +# checks the console output is vaguely sane.
> +#
> +# Copyright (c) 2020 Linaro

2021?

Otherwise:
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 14:54:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 14:54:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86303.161931 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCODh-0000Cy-Kp; Wed, 17 Feb 2021 14:54:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86303.161931; Wed, 17 Feb 2021 14:54:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCODh-0000Cr-Gr; Wed, 17 Feb 2021 14:54:25 +0000
Received: by outflank-mailman (input) for mailman id 86303;
 Wed, 17 Feb 2021 14:54:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XFXw=HT=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lCODf-0000Cj-SP
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 14:54:23 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e1914e24-ecf2-4c54-9d08-0b923d4dc645;
 Wed, 17 Feb 2021 14:54:23 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 421DBB8FA;
 Wed, 17 Feb 2021 14:54:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e1914e24-ecf2-4c54-9d08-0b923d4dc645
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613573662; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=yF/cluvsiJCFqkm34Ubp610JZTcmodW0JUMP+pXIss8=;
	b=GCaOQtiLlIF7lQuBb8zh9KNKgIFxZOxRHwnAoJtipEtFqjHs96GoZ72ccI+IC0uNQG7Y2t
	q4OJqDNKwmm/AvmC9JRrk8rGDeJyoeICNdb/0kCPromd1+glij/3k7IxJYzzGktRU8jgvb
	boeyf3hoDdYfLmf/JCYF62bWb68j83o=
Subject: Re: [for-4.15][PATCH v3 1/3] xen/iommu: x86: Clear the root
 page-table before freeing the page-tables
To: Julien Grall <julien@xen.org>
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>, Paul Durrant <paul@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210217142458.3769-1-julien@xen.org>
 <20210217142458.3769-2-julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d20d31ed-4392-a7fb-66ee-575eb254ae84@suse.com>
Date: Wed, 17 Feb 2021 15:54:21 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210217142458.3769-2-julien@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 17.02.2021 15:24, Julien Grall wrote:
> --- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
> +++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
> @@ -381,9 +381,18 @@ static int amd_iommu_assign_device(struct domain *d, u8 devfn,
>      return reassign_device(pdev->domain, d, devfn, pdev);
>  }
>  
> +static void iommu_clear_root_pgtable(struct domain *d)

Nit: amd_iommu_ as a prefix would be okay here considering other
(static) functions also use it. Since it is a static function,
no prefix at all would also do (my personal preference). But
iommu_ as a prefix isn't helpful and results in needless re-use
of VT-d's name.

> --- a/xen/drivers/passthrough/x86/iommu.c
> +++ b/xen/drivers/passthrough/x86/iommu.c
> @@ -267,6 +267,15 @@ int iommu_free_pgtables(struct domain *d)
>      struct page_info *pg;
>      unsigned int done = 0;
>  
> +    if ( !is_iommu_enabled(d) )
> +        return 0;
> +
> +    /*
> +     * Pages will be moved to the free list below. So we want to
> +     * clear the root page-table to avoid any potential use after-free.
> +     */
> +    hd->platform_ops->clear_root_pgtable(d);

Taking amd_iommu_alloc_root() as example, is this really correct
prior to what is now patch 2? What guarantees a new root table
won't get allocated subsequently?

Jan


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 15:00:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 15:00:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86307.161943 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCOJo-0001C7-A6; Wed, 17 Feb 2021 15:00:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86307.161943; Wed, 17 Feb 2021 15:00:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCOJo-0001C0-6n; Wed, 17 Feb 2021 15:00:44 +0000
Received: by outflank-mailman (input) for mailman id 86307;
 Wed, 17 Feb 2021 15:00:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lCOJm-0001Bv-IG
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 15:00:42 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lCOJj-0005Gs-K8; Wed, 17 Feb 2021 15:00:39 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lCOJj-0007yi-BB; Wed, 17 Feb 2021 15:00:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=cOHzgwAC2Z9qphFZQv3GbMtfyp4vQstE7EdLfDTxIpA=; b=Cq0Vm+Vfm1JJuwsgpavP6jOo4n
	ssIpJoDLdEr6iP3PdOzmHhxzHdtACSvStnL6Zjdp72VPxwKu6QzEotZlggCpQ2VkQZDWEmjL2cbAY
	ZRyf/hCU4m45OA3g2t41BtBUwndsfB/Eh8mWDwrWUgkai2O8U4bN9zV6/SVcVROechaI=;
Subject: Re: [for-4.15][PATCH v3 1/3] xen/iommu: x86: Clear the root
 page-table before freeing the page-tables
To: Jan Beulich <jbeulich@suse.com>
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>, Paul Durrant <paul@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210217142458.3769-1-julien@xen.org>
 <20210217142458.3769-2-julien@xen.org>
 <d20d31ed-4392-a7fb-66ee-575eb254ae84@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <87548d76-36fa-f587-0137-6806280617ad@xen.org>
Date: Wed, 17 Feb 2021 15:00:37 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <d20d31ed-4392-a7fb-66ee-575eb254ae84@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 17/02/2021 14:54, Jan Beulich wrote:
> On 17.02.2021 15:24, Julien Grall wrote:
>> --- a/xen/drivers/passthrough/x86/iommu.c
>> +++ b/xen/drivers/passthrough/x86/iommu.c
>> @@ -267,6 +267,15 @@ int iommu_free_pgtables(struct domain *d)
>>       struct page_info *pg;
>>       unsigned int done = 0;
>>   
>> +    if ( !is_iommu_enabled(d) )
>> +        return 0;
>> +
>> +    /*
>> +     * Pages will be moved to the free list below. So we want to
>> +     * clear the root page-table to avoid any potential use after-free.
>> +     */
>> +    hd->platform_ops->clear_root_pgtable(d);
> 
> Taking amd_iommu_alloc_root() as example, is this really correct
> prior to what is now patch 2? 

Yes, there is no more use-after-free...
> What guarantees a new root table
> won't get allocated subsequently?

It doesn't prevent root table allocation. I view the two as distinct
issues, hence the two patches.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 15:01:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 15:01:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86310.161955 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCOKb-0001JP-Jh; Wed, 17 Feb 2021 15:01:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86310.161955; Wed, 17 Feb 2021 15:01:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCOKb-0001JI-Gc; Wed, 17 Feb 2021 15:01:33 +0000
Received: by outflank-mailman (input) for mailman id 86310;
 Wed, 17 Feb 2021 15:01:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XFXw=HT=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lCOKa-0001JD-Ej
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 15:01:32 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f981051a-7bad-430d-a71a-ae8792356cb3;
 Wed, 17 Feb 2021 15:01:31 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A5B0CB8FB;
 Wed, 17 Feb 2021 15:01:30 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f981051a-7bad-430d-a71a-ae8792356cb3
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613574090; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=XlcbOFN/qMdHxITM+PVmddFB6cOmVw5dejK6K51LolI=;
	b=FS1iGIboZm/U7U2+XCcEC5N9I/3YxiV67EeRH2I8NxEGTk3khhfkOPHG7tWwTMXZ+z+H6Y
	cM5f2Qccs7C0V18WRZ5WgfZZOiZO2yplNQZz9I5ZOgquxhWjPUGaiUrNaymqvy6PpoUTYC
	S6CtRZx2YWWhHHdB7QJaSukbLz0GLSo=
Subject: Re: [for-4.15][PATCH v3 2/3] xen/x86: iommu: Ignore IOMMU mapping
 requests when a domain is dying
To: Julien Grall <julien@xen.org>
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>, Paul Durrant <paul@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210217142458.3769-1-julien@xen.org>
 <20210217142458.3769-3-julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <20f68b12-a767-b1d3-a3dd-9f92172def5f@suse.com>
Date: Wed, 17 Feb 2021 16:01:29 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210217142458.3769-3-julien@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 17.02.2021 15:24, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> The new x86 IOMMU page-tables allocator will release the pages when
> relinquishing the domain resources. However, this is not sufficient
> when the domain is dying because nothing prevents page-tables from
> being allocated.
> 
> Currently page-table allocations can only happen from iommu_map(). As
> the domain is dying, there is no good reason to continue to modify the
> IOMMU page-tables.

While I agree this to be the case right now, I'm not sure it is a
good idea to build on it (in that you leave the unmap paths
untouched). Imo there's a fair chance this would be overlooked at
the point super page mappings get introduced (which has been long
overdue), and I thought prior discussion had led to a possible
approach without risking use-after-free due to squashed unmap
requests.
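
The concern above can be made concrete with a hypothetical sketch (not Xen code; names are invented for illustration): once superpages exist, an unmap that covers only part of a superpage must split it, which itself allocates a lower-level table. If the dying check lives only in the map path, that allocation bypasses it; putting the check in the common page-table allocator covers both paths.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of the two call paths into the allocator. */
struct domain { bool is_dying; };

/* The single choke point: refuse page-table allocation for a dying
 * domain, regardless of which caller needs the table. */
static bool pgtable_alloc(struct domain *d)
{
    return !d->is_dying;
}

static bool iommu_map(struct domain *d)
{
    /* Mapping may need to allocate intermediate tables. */
    return pgtable_alloc(d);
}

static bool iommu_unmap_split_superpage(struct domain *d)
{
    /* Splitting a superpage on partial unmap allocates too, so it is
     * covered by the same check without any change to the unmap path. */
    return pgtable_alloc(d);
}
```

This is only a sketch of the design argument, not how the series is structured; the actual patches place the check around the list insertion in the allocator, as shown earlier in the thread.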

> --- a/xen/drivers/passthrough/x86/iommu.c
> +++ b/xen/drivers/passthrough/x86/iommu.c
> @@ -273,6 +273,9 @@ int iommu_free_pgtables(struct domain *d)
>      /*
>       * Pages will be moved to the free list below. So we want to
>       * clear the root page-table to avoid any potential use after-free.
> +     *
> +     * After this call, no more IOMMU mapping can happen.
> +     *
>       */
>      hd->platform_ops->clear_root_pgtable(d);

I.e. you utilize the call in place of spin_barrier(). Maybe worth
saying in the comment?

Also, nit: Stray blank comment line.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 15:13:46 2021
Subject: Re: [for-4.15][PATCH v3 3/3] xen/iommu: x86: Harden the IOMMU
 page-table allocator
To: Julien Grall <julien@xen.org>
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Paul Durrant <paul@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210217142458.3769-1-julien@xen.org>
 <20210217142458.3769-4-julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <cf303aee-3d89-5608-f358-6bef5c32d706@suse.com>
Date: Wed, 17 Feb 2021 16:13:32 +0100
In-Reply-To: <20210217142458.3769-4-julien@xen.org>

On 17.02.2021 15:24, Julien Grall wrote:
> --- a/xen/drivers/passthrough/x86/iommu.c
> +++ b/xen/drivers/passthrough/x86/iommu.c
> @@ -149,6 +149,13 @@ int arch_iommu_domain_init(struct domain *d)
>  
>  void arch_iommu_domain_destroy(struct domain *d)
>  {
> +    /*
> +     * There should be not page-tables left allocated by the time the
Nit: s/not/no/ ?

> +     * domain is destroyed. Note that arch_iommu_domain_destroy() is
> +     * called unconditionally, so pgtables may be unitialized.
> +     */
> +    ASSERT(dom_iommu(d)->platform_ops == NULL ||
> +           page_list_empty(&dom_iommu(d)->arch.pgtables.list));
>  }
>  
>  static bool __hwdom_init hwdom_iommu_map(const struct domain *d,
> @@ -279,6 +286,9 @@ int iommu_free_pgtables(struct domain *d)
>       */
>      hd->platform_ops->clear_root_pgtable(d);
>  
> +    /* After this barrier no new page allocations can occur. */
> +    spin_barrier(&hd->arch.pgtables.lock);

Didn't patch 2 utilize the call to ->clear_root_pgtable() itself as
the barrier? Why introduce another one (with a similar comment)
explicitly now?

> @@ -315,9 +326,29 @@ struct page_info *iommu_alloc_pgtable(struct domain *d)
>      unmap_domain_page(p);
>  
>      spin_lock(&hd->arch.pgtables.lock);
> -    page_list_add(pg, &hd->arch.pgtables.list);
> +    /*
> +     * The IOMMU page-tables are freed when relinquishing the domain, but
> +     * nothing prevent allocation to happen afterwards. There is no valid
> +     * reasons to continue to update the IOMMU page-tables while the
> +     * domain is dying.
> +     *
> +     * So prevent page-table allocation when the domain is dying.
> +     *
> +     * We relying on &hd->arch.pgtables.lock to synchronize d->is_dying.
> +     */
> +    if ( likely(!d->is_dying) )
> +    {
> +        alive = true;
> +        page_list_add(pg, &hd->arch.pgtables.list);
> +    }
>      spin_unlock(&hd->arch.pgtables.lock);
>  
> +    if ( unlikely(!alive) )
> +    {
> +        free_domheap_page(pg);
> +        pg = NULL;
> +    }
> +
>      return pg;
>  }

As before I'm concerned of this forcing error paths to be taken
elsewhere, in case an allocation still happens (e.g. from unmap
once super page mappings are supported). Considering some of the
error handling in the IOMMU code is to invoke domain_crash(), it
would be quite unfortunate if we ended up crashing a domain
while it is being cleaned up after.

Additionally, the (at present still hypothetical) unmap case, if
failing because of the change here, would then again risk leaving
mappings in place while the underlying pages get freed. As
this would likely require an XSA, the change doesn't feel like
"hardening" to me.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 15:17:41 2021
Subject: Re: [for-4.15][PATCH v3 1/3] xen/iommu: x86: Clear the root
 page-table before freeing the page-tables
To: Julien Grall <julien@xen.org>
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>, Paul Durrant <paul@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210217142458.3769-1-julien@xen.org>
 <20210217142458.3769-2-julien@xen.org>
 <d20d31ed-4392-a7fb-66ee-575eb254ae84@suse.com>
 <87548d76-36fa-f587-0137-6806280617ad@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <334ee115-c710-88c7-aa27-975bdb6c6912@suse.com>
Date: Wed, 17 Feb 2021 16:17:36 +0100
In-Reply-To: <87548d76-36fa-f587-0137-6806280617ad@xen.org>

On 17.02.2021 16:00, Julien Grall wrote:
> Hi Jan,
> 
> On 17/02/2021 14:54, Jan Beulich wrote:
>> On 17.02.2021 15:24, Julien Grall wrote:
>>> --- a/xen/drivers/passthrough/x86/iommu.c
>>> +++ b/xen/drivers/passthrough/x86/iommu.c
>>> @@ -267,6 +267,15 @@ int iommu_free_pgtables(struct domain *d)
>>>       struct page_info *pg;
>>>       unsigned int done = 0;
>>>   
>>> +    if ( !is_iommu_enabled(d) )
>>> +        return 0;
>>> +
>>> +    /*
>>> +     * Pages will be moved to the free list below. So we want to
>>> +     * clear the root page-table to avoid any potential use after-free.
>>> +     */
>>> +    hd->platform_ops->clear_root_pgtable(d);
>>
>> Taking amd_iommu_alloc_root() as example, is this really correct
>> prior to what is now patch 2? 
> 
> Yes, there are no more use-after-free...

And this is because of ...? The necessary lock isn't being held
here, so on another CPU allocation of a new root and then of new
page tables could happen before you make enough progress here,
and hence it looks to me as if there might then still be pages
which get freed while present in the page tables (and hence
accessible by devices).

Jan

>> What guarantees a new root table
>> won't get allocated subsequently?
> 
> It doesn't prevent root table allocation. I view the two as distincts 
> issues, hence the two patches.
> 
> Cheers,
> 



From xen-devel-bounces@lists.xenproject.org Wed Feb 17 15:34:56 2021
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "Volodymyr_Babchuk@epam.com"
	<Volodymyr_Babchuk@epam.com>, Rahul Singh <Rahul.Singh@arm.com>
Subject: Re: [RFC] xen/arm: introduce XENFEAT_ARM_dom0_iommu
Date: Wed, 17 Feb 2021 15:34:03 +0000
Message-ID: <538F9995-6D8D-4B02-A9B6-7C5F26F95657@arm.com>
References: <alpine.DEB.2.21.2102161333090.3234@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2102161333090.3234@sstabellini-ThinkPad-T480s>

SGkgU3RlZmFubywNCg0KPiBPbiAxNyBGZWIgMjAyMSwgYXQgMDI6MDAsIFN0ZWZhbm8gU3RhYmVs
bGluaSA8c3N0YWJlbGxpbmlAa2VybmVsLm9yZz4gd3JvdGU6DQo+IA0KPiBIaSBhbGwsDQo+IA0K
PiBUb2RheSBMaW51eCB1c2VzIHRoZSBzd2lvdGxiLXhlbiBkcml2ZXIgKGRyaXZlcnMveGVuL3N3
aW90bGIteGVuLmMpIHRvDQo+IHRyYW5zbGF0ZSBhZGRyZXNzZXMgZm9yIERNQSBvcGVyYXRpb25z
IGluIERvbTAuIFNwZWNpZmljYWxseSwNCj4gc3dpb3RsYi14ZW4gaXMgdXNlZCB0byB0cmFuc2xh
dGUgdGhlIGFkZHJlc3Mgb2YgYSBmb3JlaWduIHBhZ2UgKGEgcGFnZQ0KPiBiZWxvbmdpbmcgdG8g
YSBkb21VKSBtYXBwZWQgaW50byBEb20wIGJlZm9yZSB1c2luZyBpdCBmb3IgRE1BLg0KPiANCj4g
VGhpcyBpcyBpbXBvcnRhbnQgYmVjYXVzZSBhbHRob3VnaCBEb20wIGlzIDE6MSBtYXBwZWQsIERv
bVVzIGFyZSBub3QuIE9uDQo+IHN5c3RlbXMgd2l0aG91dCBhbiBJT01NVSBzd2lvdGxiLXhlbiBl
bmFibGVzIFBWIGRyaXZlcnMgdG8gd29yayBhcyBsb25nDQo+IGFzIHRoZSBiYWNrZW5kcyBhcmUg
aW4gRG9tMC4gVGhhbmtzIHRvIHN3aW90bGIteGVuLCB0aGUgRE1BIG9wZXJhdGlvbg0KPiBlbmRz
IHVwIHVzaW5nIHRoZSBNRk4sIHJhdGhlciB0aGFuIHRoZSBHRk4uDQo+IA0KPiANCj4gT24gc3lz
dGVtcyB3aXRoIGFuIElPTU1VLCB0aGlzIGlzIG5vdCBuZWNlc3Nhcnk6IHdoZW4gYSBmb3JlaWdu
IHBhZ2UgaXMNCj4gbWFwcGVkIGluIERvbTAsIGl0IGlzIGFkZGVkIHRvIHRoZSBEb20wIHAybS4g
QSBuZXcgR0ZOLT5NRk4gdHJhbnNsYXRpb24NCj4gaXMgZW5zdGFibGlzaGVkIGZvciBib3RoIE1N
VSBhbmQgU01NVS4gRG9tMCBjb3VsZCBzYWZlbHkgdXNlIHRoZSBHRk4NCj4gYWRkcmVzcyAoaW5z
dGVhZCBvZiB0aGUgTUZOKSBmb3IgRE1BIG9wZXJhdGlvbnMgYW5kIHRoZXkgd291bGQgd29yay4g
SXQNCj4gd291bGQgYmUgbW9yZSBlZmZpY2llbnQgdGhhbiB1c2luZyBzd2lvdGxiLXhlbi4NCj4g
DQo+IElmIHlvdSByZWNhbGwgbXkgcHJlc2VudGF0aW9uIGZyb20gWGVuIFN1bW1pdCAyMDIwLCBY
aWxpbnggaXMgd29ya2luZyBvbg0KPiBjYWNoZSBjb2xvcmluZy4gV2l0aCBjYWNoZSBjb2xvcmlu
Zywgbm8gZG9tYWluIGlzIDE6MSBtYXBwZWQsIG5vdCBldmVuDQo+IERvbTAuIEluIGEgc2NlbmFy
aW8gd2hlcmUgRG9tMCBpcyBub3QgMToxIG1hcHBlZCwgc3dpb3RsYi14ZW4gZG9lcyBub3QNCj4g
d29yayBhcyBpbnRlbmRlZC4NCj4gDQo+IA0KPiBUaGUgc3VnZ2VzdGVkIHNvbHV0aW9uIGZvciBi
b3RoIHRoZXNlIGlzc3VlcyBpcyB0byBhZGQgYSBuZXcgZmVhdHVyZQ0KPiBmbGFnICJYRU5GRUFU
X0FSTV9kb20wX2lvbW11IiB0aGF0IHRlbGxzIERvbTAgdGhhdCBpdCBpcyBzYWZlIG5vdCB0byB1
c2UNCj4gc3dpb3RsYi14ZW4gYmVjYXVzZSBJT01NVSB0cmFuc2xhdGlvbnMgYXJlIGF2YWlsYWJs
ZSBmb3IgRG9tMC4gSWYNCj4gWEVORkVBVF9BUk1fZG9tMF9pb21tdSBpcyBzZXQsIExpbnV4IHNo
b3VsZCBza2lwIHRoZSBzd2lvdGxiLXhlbg0KPiBpbml0aWFsaXphdGlvbi4gSSBoYXZlIHRlc3Rl
ZCB0aGlzIHNjaGVtZSB3aXRoIGFuZCB3aXRob3V0IGNhY2hlDQo+IGNvbG9yaW5nIChoZW5jZSB3
aXRoIGFuZCB3aXRob3V0IDE6MSBtYXBwaW5nIG9mIERvbTApIG9uIFpDVTEwMiBhbmQgaXQNCj4g
d29ya3MgYXMgZXhwZWN0ZWQ6IERNQSBvcGVyYXRpb25zIHN1Y2NlZWQuDQoNCldvdWxkbuKAmXQg
aXQgYmUgZWFzaWVyIHRvIG5hbWUgdGhlIGZsYWcgWEVORkVBVF9BUk1fc3dpb3RsYl9uZWVkZWQg
Pw0KDQo+IA0KPiANCj4gV2hhdCBhYm91dCBzeXN0ZW1zIHdoZXJlIGFuIElPTU1VIGlzIHByZXNl
bnQgYnV0IG5vdCBhbGwgZGV2aWNlcyBhcmUNCj4gcHJvdGVjdGVkPw0KPiANCj4gVGhlcmUgaXMg
bm8gd2F5IGZvciBYZW4gdG8ga25vdyB3aGljaCBkZXZpY2VzIGFyZSBwcm90ZWN0ZWQgYW5kIHdo
aWNoDQo+IG9uZXMgYXJlIG5vdDogZGV2aWNlcyB0aGF0IGRvIG5vdCBoYXZlIHRoZSAiaW9tbXVz
IiBwcm9wZXJ0eSBjb3VsZCBvcg0KPiBjb3VsZCBub3QgYmUgRE1BIG1hc3RlcnMuDQo+IA0KPiBQ
ZXJoYXBzIFhlbiBjb3VsZCBwb3B1bGF0ZSBhIHdoaXRlbGlzdCBvZiBkZXZpY2VzIHByb3RlY3Rl
ZCBieSB0aGUgSU9NTVUNCj4gYmFzZWQgb24gdGhlICJpb21tdXMiIHByb3BlcnR5LiBJdCB3b3Vs
ZCByZXF1aXJlIHNvbWUgYWRkZWQgY29tcGxleGl0eQ0KPiBpbiBYZW4gYW5kIGVzcGVjaWFsbHkg
aW4gdGhlIHN3aW90bGIteGVuIGRyaXZlciBpbiBMaW51eCB0byB1c2UgaXQsDQo+IHdoaWNoIGlz
IG5vdCBpZGVhbC4gSG93ZXZlciwgdGhpcyBhcHByb2FjaCB3b3VsZCBub3Qgd29yayBmb3IgY2Fj
aGUNCj4gY29sb3Jpbmcgd2hlcmUgZG9tMCBpcyBub3QgMToxIG1hcHBlZCBzbyB0aGUgc3dpb3Rs
Yi14ZW4gc2hvdWxkIG5vdCBiZQ0KPiB1c2VkIGVpdGhlciB3YXkuDQoNCldvdWxkIGl0IGJlIHJl
YWxpc3RpYyB0byBzYXkgdGhhdCBjYWNoZSBjb2xvcmluZyBjYW5ub3QgYmUgdXNlZCB3aXRob3V0
IGFuIElPTU1VID8NCg0KSGF2aW5nIGEgZmxhZyBmb3IgdGhlIHN3aW90bGIgaXMgYSBnb29kIGlk
ZWEgYmVjYXVzZSBpbiB0aGUgY3VycmVudCBzdGF0dXMgdGhlIHN3aXRjaA0KdG8gdXNlIG9yIG5v
dCBzd2lvdGxiIGlzIG1vcmUgb3IgbGVzcyBpbXBsaWNpdC4gDQpCdXQgc29tZWhvdyB0aGVyZSBh
cmUgdXNlIGNhc2VzIHdoZXJlIHdlIHNob3VsZCBzaW1wbHkgc2F5IOKAnG5vdCBzdXBwb3J0ZWTi
gJ0gYXMNCndlIGRvIGZvciBleGFtcGxlIGZvciBwYXNzdGhyb3VnaCByaWdodCBub3cuIE1heWJl
IGNhY2hlIGNvbG9yaW5nIGlzIGEgY2FzZSBsaWtlDQp0aGF0Lg0KDQo+IA0KPiANCj4gRm9yIHRo
ZXNlIHJlYXNvbnMsIEkgd291bGQgbGlrZSB0byBwcm9wb3NlIGEgc2luZ2xlIGZsYWcNCj4gWEVO
RkVBVF9BUk1fZG9tMF9pb21tdSB3aGljaCBzYXlzIHRoYXQgdGhlIElPTU1VIGNhbiBiZSByZWxp
ZWQgdXBvbiBmb3INCj4gRE1BIHRyYW5zbGF0aW9ucy4gSW4gc2l0dWF0aW9ucyB3aGVyZSBhIERN
QSBtYXN0ZXIgaXMgbm90IFNNTVUNCj4gcHJvdGVjdGVkLCBYRU5GRUFUX0FSTV9kb20wX2lvbW11
IHNob3VsZCBub3QgYmUgc2V0LiBGb3IgZXhhbXBsZSwgb24gYQ0KPiBwbGF0Zm9ybSB3aGVyZSBh
biBJT01NVSBpcyBwcmVzZW50IGFuZCBwcm90ZWN0cyBtb3N0IERNQSBtYXN0ZXJzIGJ1dCBpdA0K
PiBpcyBsZWF2aW5nIG91dCB0aGUgTU1DIGNvbnRyb2xsZXIsIHRoZW4gWEVORkVBVF9BUk1fZG9t
MF9pb21tdSBzaG91bGQNCj4gbm90IGJlIHNldCAoYmVjYXVzZSBQViBibG9jayBpcyBub3QgZ29p
bmcgdG8gd29yayB3aXRob3V0IHN3aW90bGIteGVuLikNCj4gVGhpcyBhbHNvIG1lYW5zIHRoYXQg
Y2FjaGUgY29sb3Jpbmcgd29uJ3QgYmUgdXNhYmxlIG9uIHN1Y2ggYSBzeXN0ZW0gKGF0DQo+IGxl
YXN0IG5vdCB1c2FibGUgd2l0aCB0aGUgTU1DIGNvbnRyb2xsZXIgc28gdGhlIHN5c3RlbSBpbnRl
Z3JhdG9yIHNob3VsZA0KPiBwYXkgc3BlY2lhbCBjYXJlIHRvIHNldHVwIHRoZSBzeXN0ZW0pLg0K
DQpBbnkgc3lzdGVtIHdoZXJlIHlvdSBoYXZlIGFuIElPTU1VIGJ1dCBub3QgY292ZXJpbmcgYWxs
IGRldmljZXMgaXMNCmFsbW9zdCBpbXBvc3NpYmxlIHRvIG1hZ2ljYWxseSBoYW5kbGUgc21vb3Ro
bHkgYW5kIHdpbGwgcmVxdWlyZSBzb21lDQpjYXJlZnVsbCBjb25maWd1cmF0aW9uLiBTYWRseSBh
cyB5b3Ugc3RhdGVkLCB3ZSBkbyBub3QgaGF2ZSBhIHdheSB0bw0KYXV0by1kZXRlY3Qgc3VjaCBh
IGNhc2UuDQoNCj4gDQo+IEl0IGlzIHdvcnRoIG5vdGluZyB0aGF0IGlmIHdlIHdhbnRlZCB0byBl
eHRlbmQgdGhlIGludGVyZmFjZSB0byBhZGQgYQ0KPiBsaXN0IG9mIHByb3RlY3RlZCBkZXZpY2Vz
IGluIHRoZSBmdXR1cmUsIGl0IHdvdWxkIHN0aWxsIGJlIHBvc3NpYmxlLiBJdA0KPiB3b3VsZCBi
ZSBjb21wYXRpYmxlIHdpdGggWEVORkVBVF9BUk1fZG9tMF9pb21tdS4NCj4gDQo+IA0KPiBIb3cg
dG8gc2V0IFhFTkZFQVRfQVJNX2RvbTBfaW9tbXU/DQo+IA0KPiBXZSBjb3VsZCBzZXQgWEVORkVB
VF9BUk1fZG9tMF9pb21tdSBhdXRvbWF0aWNhbGx5IHdoZW4NCj4gaXNfaW9tbXVfZW5hYmxlZChk
KSBmb3IgRG9tMC4gV2UgY291bGQgYWxzbyBoYXZlIGEgcGxhdGZvcm0gc3BlY2lmaWMNCj4gKHhl
bi9hcmNoL2FybS9wbGF0Zm9ybXMvKSBvdmVycmlkZSBzbyB0aGF0IGEgc3BlY2lmaWMgcGxhdGZv
cm0gY2FuDQo+IGRpc2FibGUgWEVORkVBVF9BUk1fZG9tMF9pb21tdS4gRm9yIGRlYnVnZ2luZyBw
dXJwb3NlcyBhbmQgYWR2YW5jZWQNCj4gdXNlcnMsIGl0IHdvdWxkIGFsc28gYmUgdXNlZnVsIHRv
IGJlIGFibGUgdG8gb3ZlcnJpZGUgaXQgdmlhIGEgWGVuDQo+IGNvbW1hbmQgbGluZSBwYXJhbWV0
ZXIuDQo+IA0KPiBTZWUgYXBwZW5kZWQgcGF0Y2ggYXMgYSByZWZlcmVuY2UuDQoNCkkgcmVhbGx5
IHRoaW5rIHRoZSBuYW1pbmcgb2YgdGhlIGZsYWcgd2lsbCBuZWVkIHRvIGJlIG1vZGlmaWVkLg0K
DQpDaGVlcnMNCkJlcnRyYW5kDQoNCj4gDQo+IA0KPiBDaGVlcnMsDQo+IA0KPiBTdGVmYW5vDQo+
>
> diff --git a/xen/common/kernel.c b/xen/common/kernel.c
> index 7a345ae45e..4dbef48199 100644
> --- a/xen/common/kernel.c
> +++ b/xen/common/kernel.c
> @@ -16,6 +16,7 @@
> #include <xen/hypfs.h>
> #include <xsm/xsm.h>
> #include <asm/current.h>
> +#include <asm/platform.h>
> #include <public/version.h>
> 
> #ifndef COMPAT
> @@ -549,6 +550,9 @@ DO(xen_version)(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>                 fi.submap |= 1U << XENFEAT_dom0;
> #ifdef CONFIG_ARM
>             fi.submap |= (1U << XENFEAT_ARM_SMCCC_supported);
> +            if ( !platform_has_quirk(PLATFORM_QUIRK_DOM0_IOMMU) &&
> +                 is_hardware_domain(d) && is_iommu_enabled(d) )
> +                fi.submap |= (1U << XENFEAT_ARM_dom0_iommu);
> #endif
> #ifdef CONFIG_X86
>             if ( is_pv_domain(d) )
> diff --git a/xen/include/asm-arm/platform.h b/xen/include/asm-arm/platform.h
> index 997eb25216..094a76d677 100644
> --- a/xen/include/asm-arm/platform.h
> +++ b/xen/include/asm-arm/platform.h
> @@ -48,6 +48,11 @@ struct platform_desc {
>  * stride.
>  */
> #define PLATFORM_QUIRK_GIC_64K_STRIDE (1 << 0)
> +/*
> + * Quirk for platforms where the IOMMU is present but doesn't protect
> + * all DMA-capable devices.
> + */
> +#define PLATFORM_QUIRK_DOM0_IOMMU (1 << 1)
> 
> void platform_init(void);
> int platform_init_time(void);
> diff --git a/xen/include/asm-x86/platform.h b/xen/include/asm-x86/platform.h
> new file mode 100644
> index 0000000000..5427e8b851
> --- /dev/null
> +++ b/xen/include/asm-x86/platform.h
> @@ -0,0 +1,13 @@
> +#ifndef __ASM_X86_PLATFORM_H
> +#define __ASM_X86_PLATFORM_H
> +
> +#endif /* __ASM_X86_PLATFORM_H */
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/include/public/features.h b/xen/include/public/features.h
> index 1613b2aab8..adaa2a995d 100644
> --- a/xen/include/public/features.h
> +++ b/xen/include/public/features.h
> @@ -114,6 +114,11 @@
>  */
> #define XENFEAT_linux_rsdp_unrestricted   15
> 
> +/*
> + * arm: dom0 is started with IOMMU protection.
> + */
> +#define XENFEAT_ARM_dom0_iommu            16
> +
> #define XENFEAT_NR_SUBMAPS 1
> 
> #endif /* __XEN_PUBLIC_FEATURES_H__ */
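The quoted patch advertises the new feature by setting one bit in the 32-bit feature submap returned by the xen_version hypercall. A minimal stand-alone sketch of that bit test follows; the constant matches the quoted features.h hunk, but the helper name is invented for illustration and is not a Xen API:

```c
#include <stdint.h>

/* Bit number taken from the quoted features.h hunk. */
#define XENFEAT_ARM_dom0_iommu 16

/* Illustrative helper (not a Xen API): test one feature bit in a 32-bit
 * submap word, mirroring what the kernel.c hunk does when it sets
 * fi.submap |= (1U << XENFEAT_ARM_dom0_iommu). */
int submap_has_feature(uint32_t submap, unsigned int feat)
{
    return (submap & (1U << feat)) != 0;
}
```

A guest would obtain the submap via XENVER_get_features and apply a test like this before deciding how to set up DMA.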


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 15:37:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 15:37:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86325.162002 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCOt6-0004WT-98; Wed, 17 Feb 2021 15:37:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86325.162002; Wed, 17 Feb 2021 15:37:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCOt6-0004WM-5t; Wed, 17 Feb 2021 15:37:12 +0000
Received: by outflank-mailman (input) for mailman id 86325;
 Wed, 17 Feb 2021 15:37:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=T8s0=HT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lCOt5-0004WG-FV
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 15:37:11 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 85511ce3-67c2-4f31-9d86-dba1a5b66ac4;
 Wed, 17 Feb 2021 15:37:10 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D3783B0D2;
 Wed, 17 Feb 2021 15:37:09 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 85511ce3-67c2-4f31-9d86-dba1a5b66ac4
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613576230; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=Csa02k/l1wzUrNiTUBspOuJjzDMlX3lKkeqhEKz7vDA=;
	b=V71+0nLpDOJwBxa866mbQkpy/gOV8ZF/b+SKFIOg5ZeYEr9nujn0POs2Mvyi5Gc0AGRij9
	3uqC7wJp+IVBf9wtL4buSwC1r4ywaegMSypy6Qlnn+rt8ajXbRu/1eeuR6pXCgKOpAAArX
	hl3whYrYYkMDVUDeGXCSsZf0XJ92N+Y=
Subject: Re: Linux PV/PVH domU crash on (guest) resume from suspend
To: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>
References: <YCylpKU8F+Hsg8YL@mail-itl>
 <0b71a671-592a-53ab-6b4a-1fe15b9eb453@suse.com> <YC0exVYljxxvwFFt@mail-itl>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <c31654c2-f486-33ea-da00-7bfa5dac0e51@suse.com>
Date: Wed, 17 Feb 2021 16:37:09 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <YC0exVYljxxvwFFt@mail-itl>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="xAh7i6poRwVEodvveb7vrLt6b4KvPhF1J"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--xAh7i6poRwVEodvveb7vrLt6b4KvPhF1J
Content-Type: multipart/mixed; boundary="kVJtkqr3cnuwMf7dEZLcNYdtQr55iWuDh";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>
Message-ID: <c31654c2-f486-33ea-da00-7bfa5dac0e51@suse.com>
Subject: Re: Linux PV/PVH domU crash on (guest) resume from suspend
References: <YCylpKU8F+Hsg8YL@mail-itl>
 <0b71a671-592a-53ab-6b4a-1fe15b9eb453@suse.com> <YC0exVYljxxvwFFt@mail-itl>
In-Reply-To: <YC0exVYljxxvwFFt@mail-itl>

--kVJtkqr3cnuwMf7dEZLcNYdtQr55iWuDh
Content-Type: multipart/mixed;
 boundary="------------FBDABACC5782ED7AA9FBCE95"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------FBDABACC5782ED7AA9FBCE95
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 17.02.21 14:48, Marek Marczykowski-Górecki wrote:
> On Wed, Feb 17, 2021 at 07:51:42AM +0100, Jürgen Groß wrote:
>> On 17.02.21 06:12, Marek Marczykowski-Górecki wrote:
>>> Hi,
>>>
>>> I'm observing a Linux PV/PVH guest crash when I resume it from sleep. I do
>>> this with:
>>>
>>>       virsh -c xen dompmsuspend <vmname> mem
>>>       virsh -c xen dompmwakeup <vmname>
>>>
>>> But it's possible to trigger it with plain xl too:
>>>
>>>       xl save -c <vmname> <some-file>
>>>
>>> The same on HVM works fine.
>>>
>>> This is on Xen 4.14.1, and with guest kernel 5.4.90; the same happens
>>> with 5.4.98. Dom0 kernel is the same, but I'm not sure if that's
>>> relevant here. I can reliably reproduce it.
>>
>> This is already on my list of issues to look at.
>>
>> The problem seems to be related to the XSA-332 patches. You could try
>> the patches I've sent out recently addressing other fallout from XSA-332
>> which _might_ fix this issue, too:
>>
>> https://patchew.org/Xen/20210211101616.13788-1-jgross@suse.com/
> 
> Thanks for the patches. Sadly they don't change anything - I get exactly
> the same crash. I applied them on top of 5.11-rc7 (that's what I had
> handy). If you think there may be a difference with the final 5.11 or
> another branch, please let me know.
> 

Okay, thanks for testing.

I hope to find some time to look into the issue soon.


Juergen

--------------FBDABACC5782ED7AA9FBCE95
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------FBDABACC5782ED7AA9FBCE95--

--kVJtkqr3cnuwMf7dEZLcNYdtQr55iWuDh--

--xAh7i6poRwVEodvveb7vrLt6b4KvPhF1J
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmAtOCUFAwAAAAAACgkQsN6d1ii/Ey8Z
WAf+KJoqbeCqSs2dca7ZFfvD3VSo7IxOh4m1A6TE9J314m+Y/6s0Za2oX0s77K0FmfcvxC1Y+hYL
CLDGWzdk1N5c7fqNCKlvq6841+iDigA76t+zKUiJDUbRm9KZsjcA2JUjOQExDSFh0ZK9fDMkmSBy
z8b9LCbRt8d3Qqis0Atcqy3WwogXjoOinAaROXplV54oYhL8jh2MtVw5LuoELf4LvJD8R7DVhNC1
byeOxCHUjISMfT3OmVgcrhyz6EiLRwD+IU2kYkBxVc6NiC5UkNVQhTI8+oJXv/i5Xt3CB3Cp7LZ8
j3R50RQZYElCJfFjz+cZhQ2rwAjFXDmR40blhYNnnQ==
=/CmS
-----END PGP SIGNATURE-----

--xAh7i6poRwVEodvveb7vrLt6b4KvPhF1J--


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 15:38:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 15:38:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86328.162014 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCOuB-0004eO-N4; Wed, 17 Feb 2021 15:38:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86328.162014; Wed, 17 Feb 2021 15:38:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCOuB-0004eH-K1; Wed, 17 Feb 2021 15:38:19 +0000
Received: by outflank-mailman (input) for mailman id 86328;
 Wed, 17 Feb 2021 15:38:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Nrto=HT=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1lCOuA-0004eA-18
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 15:38:18 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com (unknown
 [40.107.13.85]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7dd6f48e-e634-4dc3-a13e-c35fa940d2ad;
 Wed, 17 Feb 2021 15:38:15 +0000 (UTC)
Received: from AM0PR10CA0017.EURPRD10.PROD.OUTLOOK.COM (2603:10a6:208:17c::27)
 by AS8PR08MB6743.eurprd08.prod.outlook.com (2603:10a6:20b:399::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3868.27; Wed, 17 Feb
 2021 15:38:13 +0000
Received: from AM5EUR03FT037.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:208:17c:cafe::a4) by AM0PR10CA0017.outlook.office365.com
 (2603:10a6:208:17c::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3868.27 via Frontend
 Transport; Wed, 17 Feb 2021 15:38:13 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT037.mail.protection.outlook.com (10.152.17.241) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3868.27 via Frontend Transport; Wed, 17 Feb 2021 15:38:13 +0000
Received: ("Tessian outbound e7cb4a6f0881:v71");
 Wed, 17 Feb 2021 15:38:12 +0000
Received: from 68169c2f96a3.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 4FB785DD-2CB2-4365-8369-0B2841FA733B.1; 
 Wed, 17 Feb 2021 15:37:55 +0000
Received: from EUR03-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 68169c2f96a3.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 17 Feb 2021 15:37:54 +0000
Received: from VE1PR08MB5696.eurprd08.prod.outlook.com (2603:10a6:800:1ae::15)
 by VI1PR08MB5295.eurprd08.prod.outlook.com (2603:10a6:803:e3::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3868.27; Wed, 17 Feb
 2021 15:37:53 +0000
Received: from VE1PR08MB5696.eurprd08.prod.outlook.com
 ([fe80::5c93:6e79:8f1e:a839]) by VE1PR08MB5696.eurprd08.prod.outlook.com
 ([fe80::5c93:6e79:8f1e:a839%6]) with mapi id 15.20.3846.042; Wed, 17 Feb 2021
 15:37:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7dd6f48e-e634-4dc3-a13e-c35fa940d2ad
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=N9z5DojfRds/MZyMmVcgqQxaHneUZe/0lwMvjpw8rlo=;
 b=Zb6wpNYV1b7ruX+jLvjuczpyQp71gyDOojuzJjVpL6FtQmFwBHXuHrXsp63IvvXCmvyGdnSI6ctFQ1AJ54ZybHxKiLK15UxZTzm4rbP05ioimjtndMkMOSFHaHB9Jxmo/IDZrp43fgAErYeJ8DEbst9b/+gd//s4Wak+zYD6siw=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 77254a5e94b361a9
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=g+AU3WQZN4708/B7WWGopUp9Kk4tXntb5D0LtGzTQ9U1ds6tZZFso5Q84zHxC81N348cwwBJKxz6Ct9BxFggkkHZpzYq36Gi5mBYz8EiWja3f0LwFglKikEkro/4RWzL88gDRztY+xI0daH+0Jvi5LTp+AxBUhNEw52Q68c8VEKzMDEuw5iRRRJASTL+shTUaFXSSrzwqD52tbSp+gybkhVc483SYGk828iLPduDbgASnkA9P4Dg9UoKhuoCjTNXA0YLpiMG3qHdmxZhNhPMXsdlzWdCMiS5ncaUMjE6MAxuw8S2CpBT/oQALAq0l91C+igkKO4AIiAJN6NvdrKN8Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=N9z5DojfRds/MZyMmVcgqQxaHneUZe/0lwMvjpw8rlo=;
 b=K96aGYuJSVaKAE0OmEgLJgVqua0JxsOJJ/oRe7O2nh8DieLquBHb2EHGYivVyn3zN++tvu4HbU0ty2QRlzoh/+iE5ReG5QUtWm36Hbl4Hx3NfZrMu/1PsO/Y1Ig7H1kdJ63ZWTtFV7fKLQKbNfZkInitwOHaPzaMydFCcJhRySQia7ZRc2RN1p9wHhekD03yCz6yhjv63Dcd2pR8DOQIzn7NGQy1HuAKvw9l+m8biCJxmjDExHt+SpbxIGlht1kfHEkRJLr1sQ4uaIikrJZDEYzhfaMQmQx4PstMybTYi3yHN49XEEnlze0NeK7nUvvFfYnSnPqjJE2HO5tAJmaPPw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=N9z5DojfRds/MZyMmVcgqQxaHneUZe/0lwMvjpw8rlo=;
 b=Zb6wpNYV1b7ruX+jLvjuczpyQp71gyDOojuzJjVpL6FtQmFwBHXuHrXsp63IvvXCmvyGdnSI6ctFQ1AJ54ZybHxKiLK15UxZTzm4rbP05ioimjtndMkMOSFHaHB9Jxmo/IDZrp43fgAErYeJ8DEbst9b/+gd//s4Wak+zYD6siw=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: Stefano Stabellini <sstabellini@kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"Volodymyr_Babchuk@epam.com" <Volodymyr_Babchuk@epam.com>, Rahul Singh
	<Rahul.Singh@arm.com>
Subject: Re: [RFC] xen/arm: introduce XENFEAT_ARM_dom0_iommu
Thread-Topic: [RFC] xen/arm: introduce XENFEAT_ARM_dom0_iommu
Thread-Index: AQHXBNDUDoMffJM9X0qFA5iW4kfZOqpcCj0AgABx1gA=
Date: Wed, 17 Feb 2021 15:37:53 +0000
Message-ID: <505CE19F-B324-4DE2-8EC4-D885780FD17A@arm.com>
References: <alpine.DEB.2.21.2102161333090.3234@sstabellini-ThinkPad-T480s>
 <2d22d5b8-6cda-f27b-e938-4806b65794a5@xen.org>
In-Reply-To: <2d22d5b8-6cda-f27b-e938-4806b65794a5@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [86.26.33.241]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 24beb32e-a4ad-420b-fb8d-08d8d35a0961
x-ms-traffictypediagnostic: VI1PR08MB5295:|AS8PR08MB6743:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AS8PR08MB6743AC3D882DDF1425CB01269D869@AS8PR08MB6743.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 3cVu1sYVKoWGRRwWuEPTBqW0xF+J4VqIyFcqwU5UvSdmdqnrhsyQ5c9nWKGvQPjtxw6E8DRvy9Rd4NWqZ82P57rOAAZi8xQUB1oYwQfRMwJfqvn2Y+FyM6jKx3qceD0C3Ew5jyWApbzdFPd4T3xbeBUBd6ctxZI1mj4LE+Bx5MJL3f0DMOxYqpwihMyD4SlCQcP3WEF4rMtIhELZkPenCKqmp/ojpcQoO1UlZ8ICKT49PvosssOggAleLEtzUP1In4VtdUnLX8JhkZp4/WvET0824T4yTp+BODpAEjsJYvgE/j3ddctwc7etWvSjSKDI82UzI68f6hVCnGTPQuOqqoGy0AubgvydUmUCoUs3tLoShlu8Nv3O0/zph6aBSglO49KAWk41J+7uy13ej0NORLxPg94eCJtTnFON6apHYxyoPMmYqqDDdD+gdhbtLdqV3Q7Udoa5l511YYzpvBxrH7YQv+zP8myKAjwpVw4NDF4tQhAILBkE2LGzaYHxOxapG0hiGW73S8wcmSbTvV4PjblGiUl5BwWrX2ozHF+XZEG0wDI+onOtw8iMKx00TvZQRrkElvUdNwTBjuQR2n6q4A==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR08MB5696.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39860400002)(346002)(396003)(376002)(366004)(136003)(83380400001)(53546011)(5660300002)(6506007)(86362001)(186003)(6916009)(2906002)(6512007)(4326008)(478600001)(6486002)(54906003)(71200400001)(76116006)(36756003)(91956017)(66446008)(66476007)(66556008)(64756008)(33656002)(8936002)(66946007)(55236004)(26005)(2616005)(8676002)(316002)(45980500001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 =?utf-8?B?bGdOQjR5WEx1Nncwc0xjR1RZWGdkdHNIU25TeUZxL0ZFTWhBQlY0alZ2T21O?=
 =?utf-8?B?NG1uZ2YrTHJZbzA2bWltWk5ZZWNZVEdjRHVMUFZPYnNXL3pqN2ZxL3ZvQVR0?=
 =?utf-8?B?NnRETXRrYkIxOXNXUzZLSmZRdEgyazRpNTFSRDFsNVpoREZYTzdreVQxcE41?=
 =?utf-8?B?dkRaOE9wVEpja3VhV1g0KzkwK05jb29lWUlsdmFNUXp0WUcxNmFHQ0xaL1Fo?=
 =?utf-8?B?L2JTWkFEcjNqaFFzNWZRWWxhR0xmVVJzalc5UG83Rmc5bSttQW9ZWkRDSU03?=
 =?utf-8?B?Rld2TW5heUJPZDJJRkJpWDJGcStCNElyNnF6dWVJOFYrRkZTZiswZEYwV1hD?=
 =?utf-8?B?eDVTN1Nvei92UmZtMEh0QU5nMldNVnU0RW5sc0swakdxc1Q4NjAvVkJmK0xj?=
 =?utf-8?B?U00wNXphVHFtZW8ycXZDNGV3cjU1YU5GUGdLMjhoUjczRTJlb1NKU1dBdjJ6?=
 =?utf-8?B?M2lTTlBWMG9QeUQ4c2JJNFd4dVJNNHFLa1ZaNEhoYlA2clM1MWs5WlhMdE4y?=
 =?utf-8?B?NXlMRElmYkhnZWZkR1FSYlh1ZEM2aGQxVGtxaDJ0QmdvOFdmb0VueG5CL2FU?=
 =?utf-8?B?MXpSdW5HM1dvUkt5cG1jcDE3WDhqc2xvZllxdmpDNjBaL1U1dzRTemxmdjM2?=
 =?utf-8?B?ZFhnOURnYmhsb0FqVGk3ci80VFZrcmxncDdhaUYxdURiSXdpTHFDT0J1OWZB?=
 =?utf-8?B?U0NieElLcFh5QUFCYWtsWGZ3UGpibE8yRlNSMjZwR281eHFUVnNyVlVsSVM3?=
 =?utf-8?B?UzhxZEVKN0pzd3h0OUtzaHVrOHpQaHcrNVkydHRMT3ZQc1c0Y0VyT1ZYaHBH?=
 =?utf-8?B?UDZsOXNRaVUrcDI4OGRBVTROUDlkYkF4MVB4UUtndktRSVgrNnhFK3c4cWtt?=
 =?utf-8?B?OVJPOGYrL1ZvTWF0bERvZnMxZmN4Zm9XU2lUMzZCNjR1UDFPdnUxSjA5UG9p?=
 =?utf-8?B?aGhkTUg0RUNBMVFBT1BUU0c3ZE5qZVlkZjI0L0I2bzRoUXhjNCtLa1dqMmo2?=
 =?utf-8?B?RHZ0VFNxdFJnbjJlMFhCWGNPZEtuMy9wbWF0LzZEMXV5ajgrZ0VTTVFnZGll?=
 =?utf-8?B?RStibEh6RVJrRVY5MDk4clNvcFdjbG5Ec3l4SlZWQWtoUWxDTlp3L0EyWnND?=
 =?utf-8?B?ZFNKWVVHUnFVVTVsT2ErVnRleXd2NjNWYWZ2enRJRUFLbWhXMzdHa0puSnFF?=
 =?utf-8?B?ZUQ2ajVFakIzaThpVElVTkN6SkRWRGgrQkMzNDM4SDFIS3pIWHZ4SVVvb09Z?=
 =?utf-8?B?NEJ3M0lqMGI0UGRhY0JPQVI5UHBvNUFJSGpjdUdTc3o3S0Zxa05CN2NNK3lP?=
 =?utf-8?B?Q01lbzNyWm10dnJTK01NQVVzb0pBOGttK05PVmVmYnZRendGQ0FOK1hpL2tr?=
 =?utf-8?B?VUhNa1JXbHNmdlNncDNCMHAxMjlVdG80TW5ndEVyaVVHSS8zZzk0ZnVMdlN0?=
 =?utf-8?B?ZEZkT09TY1BHU2w3UUVxc0NzdzZBYTl2TldhZlh2MDNnbGxITzdkUGxLNDgx?=
 =?utf-8?B?K0ZWcnE4YkRVY3lFYXBvRzhuOTRrSDBsbS80ZS8vQnJXT1JKQThSYVNSbytM?=
 =?utf-8?B?ME9ZaTYxMlBaVjMvUzJ1TjhCNmNHVW9VTUlleHRUQm1ER1V2UDJMbXQxNnNQ?=
 =?utf-8?B?UjF3Z09Pa05lLzdwT0FXT01KK1BqK25LekJWaXFxNTZ1bTNJcGhzK2k4N2Y2?=
 =?utf-8?B?bGhXclZrbEVxZkMzM0xnRVhyQlFET1ZqaHpRK3pZVHByYjc3dkVhYStid2p1?=
 =?utf-8?Q?WAJX9FU9G57JxmHqXmsaI+qhjFFr9m2UIvRh8hX?=
Content-Type: text/plain; charset="utf-8"
Content-ID: <AE2959492815A041A11E0860008AA97E@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB5295
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT037.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	eec8377b-cb7c-41e2-1c0d-08d8d359fd9f
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	iBMHPBCAYtACw69A9oSUOphV8fT7N2xUKtSX1jUY09bJzC8Iad0FECAZJtwJ7Whp042VSLNOLW6AMK6III0JnIJiLQ1NbICY8FwLaofueOkouby6o8NjFjNxrwZLziXR0d51y+hVnrkSHfHzQSEi0WPoE1Q1PjcdtFUGG6z237Mmh5dESmiPsYXSmVIYQZ/NCp6/Is9wlYiZtGSk7iR+MUpBbQ419chq7OsvbrO+BRnMOyr3ZCTFHtdKMF/+qxG4Wh30KPSXT/G9CoiNtc4v4+mirZVE6RaH8RkfXsYuiYAy8GBvfFGz5zb3NFeiBv9TGh7U6HvTtNrHHNZQjJ8OZFSqAFoTSK2Rfw21ZY49izJkzj0QGMmmElqAjIsGs63S5woB/psg0p+XV0+yR3Adc3mJ/vIojINj0AmYijQV1GLsxuOE4xva+EXmHvBJvlOUGsjk3oJFjkUu/AiQUzMeCIn0i3pf69mWL2Pr4tMkjuhT15G5wYyBdvaVcx7uYcwCPc2fQLAXgvJ1l8BybM4w4jCvhjmIJPtsHo9QePZpaJDKOjbtTaxy186FV8OQ+ooD/VCUwc32MAPujqPgpLrs9EO842WWsMF8kC831Fr4XDkj01HyK77mcgB9ELWdfxanAd00L7ypFisV+xXKyCWq80U+OYCjkpBGgYSZ8nn0ATg=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(136003)(396003)(39860400002)(346002)(376002)(36840700001)(46966006)(4326008)(8936002)(8676002)(6862004)(55236004)(5660300002)(6486002)(53546011)(6512007)(2616005)(86362001)(54906003)(478600001)(26005)(70586007)(70206006)(316002)(186003)(82310400003)(6506007)(36860700001)(36756003)(2906002)(336012)(33656002)(47076005)(356005)(83380400001)(82740400003)(81166007);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Feb 2021 15:38:13.0159
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 24beb32e-a4ad-420b-fb8d-08d8d35a0961
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT037.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB6743

Hi Julien,

> On 17 Feb 2021, at 08:50, Julien Grall <julien@xen.org> wrote:
> 
> 
> 
> On 17/02/2021 02:00, Stefano Stabellini wrote:
>> Hi all,
>> Today Linux uses the swiotlb-xen driver (drivers/xen/swiotlb-xen.c) to
>> translate addresses for DMA operations in Dom0. Specifically,
>> swiotlb-xen is used to translate the address of a foreign page (a page
>> belonging to a domU) mapped into Dom0 before using it for DMA.
>> This is important because although Dom0 is 1:1 mapped, DomUs are not. On
>> systems without an IOMMU, swiotlb-xen enables PV drivers to work as long
>> as the backends are in Dom0. Thanks to swiotlb-xen, the DMA operation
>> ends up using the MFN, rather than the GFN.
>> On systems with an IOMMU, this is not necessary: when a foreign page is
>> mapped in Dom0, it is added to the Dom0 p2m. A new GFN->MFN translation
>> is established for both MMU and SMMU. Dom0 could safely use the GFN
>> address (instead of the MFN) for DMA operations and they would work. It
>> would be more efficient than using swiotlb-xen.
>> If you recall my presentation from Xen Summit 2020, Xilinx is working on
>> cache coloring. With cache coloring, no domain is 1:1 mapped, not even
>> Dom0. In a scenario where Dom0 is not 1:1 mapped, swiotlb-xen does not
>> work as intended.
>> The suggested solution for both these issues is to add a new feature
>> flag "XENFEAT_ARM_dom0_iommu" that tells Dom0 that it is safe not to use
>> swiotlb-xen because IOMMU translations are available for Dom0. If
>> XENFEAT_ARM_dom0_iommu is set, Linux should skip the swiotlb-xen
>> initialization. I have tested this scheme with and without cache
>> coloring (hence with and without 1:1 mapping of Dom0) on ZCU102 and it
>> works as expected: DMA operations succeed.
>> What about systems where an IOMMU is present but not all devices are
>> protected?
>> There is no way for Xen to know which devices are protected and which
>> ones are not: devices that do not have the "iommus" property may or
>> may not be DMA masters.
>> Perhaps Xen could populate a whitelist of devices protected by the IOMMU
>> based on the "iommus" property. It would require some added complexity
>> in Xen and especially in the swiotlb-xen driver in Linux to use it,
>> which is not ideal.
> 
> You are trading a bit more complexity in Xen and Linux against a user who may not be able to use the hypervisor on his/her platform without a quirk in Xen (see more below).
> 
>> However, this approach would not work for cache
>> coloring where dom0 is not 1:1 mapped so the swiotlb-xen should not be
>> used either way
> 
> Not every Dom0 Linux kernel will be able to work with cache colouring. So you will need a way for the kernel to say "Hey, I can avoid using swiotlb".

I fully agree, and from my current understanding the condition is “having an IOMMU”.

> 
>> For these reasons, I would like to propose a single flag
>> XENFEAT_ARM_dom0_iommu which says that the IOMMU can be relied upon for
>> DMA translations. In situations where a DMA master is not SMMU
>> protected, XENFEAT_ARM_dom0_iommu should not be set. For example, on a
>> platform where an IOMMU is present and protects most DMA masters but
>> leaves out the MMC controller, XENFEAT_ARM_dom0_iommu should
>> not be set (because PV block is not going to work without swiotlb-xen.)
>> This also means that cache coloring won't be usable on such a system (at
>> least not usable with the MMC controller, so the system integrator should
>> take special care when setting up the system).
>> It is worth noting that if we wanted to extend the interface to add a
>> list of protected devices in the future, it would still be possible. It
>> would be compatible with XENFEAT_ARM_dom0_iommu.
> 
> I imagine by compatible, you mean XENFEAT_ARM_dom0_iommu would be cleared and instead the device-tree list would be used. Is that correct?

What do you mean by device tree list here?

> 
>> How to set XENFEAT_ARM_dom0_iommu?
>> We could set XENFEAT_ARM_dom0_iommu automatically when
>> is_iommu_enabled(d) for Dom0. We could also have a platform specific
>> (xen/arch/arm/platforms/) override so that a specific platform can
>> disable XENFEAT_ARM_dom0_iommu. For debugging purposes and advanced
>> users, it would also be useful to be able to override it via a Xen
>> command line parameter.
> Platform quirks should be limited to a small set of platforms.
> 
> In this case, this would not only be per-platform but also per-firmware table, as a developer can decide to remove/add IOMMU nodes in the DT at any time.
> 
> In addition to that, it means we are introducing a regression for those users, as Xen 4.14 would have worked on their platform but not anymore. They would need to go through all the nodes and find out which one is not protected.

I am not sure I understand your point here, as we cannot detect whether a device is protected by an IOMMU: we do not know which devices require one.
Could you explain what use case that worked before would not work here?

> 
> This is a bit of a daunting task and we are going to end up having a lot of per-platform quirks in Xen.

From my understanding the quirks here would be in Linux, as it would have to decide what to use swiotlb for.
What quirk do you imagine we could implement in Xen?

Cheers
Bertrand

> 
> So this approach to selecting the flag is a no-go for me. FAOD, the inverted idea (i.e. only setting XENFEAT_ARM_dom0_iommu per-platform) is a no-go as well.
> 
> I don't have a good idea how to set the flag automatically. My requirement is that newer Xen should continue to work on all supported platforms without any additional per-platform effort.
> 
> Cheers,
> 
> -- 
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 15:59:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 15:59:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86333.162027 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCPE8-0006Ye-Ej; Wed, 17 Feb 2021 15:58:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86333.162027; Wed, 17 Feb 2021 15:58:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCPE8-0006YX-B8; Wed, 17 Feb 2021 15:58:56 +0000
Received: by outflank-mailman (input) for mailman id 86333;
 Wed, 17 Feb 2021 15:58:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCPE6-0006YP-Kd; Wed, 17 Feb 2021 15:58:54 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCPE6-0006CM-DC; Wed, 17 Feb 2021 15:58:54 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCPE6-0003Qc-4d; Wed, 17 Feb 2021 15:58:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lCPE6-0005gy-49; Wed, 17 Feb 2021 15:58:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Qf8DKKcHWabE9Ib+8Qe1QZQTuewvx21vbqjwY2Noiic=; b=hqM2bJwxiiIHI+IZCg/TGVkuy6
	w71BK0n0RwGF4WT3XkR+wYqfNLhcSZp6OOb4624FwiEwIRoWnf7khpEfR55njae67ZEAdlfFt+wwk
	xoSY5kM9hYlneaEjdA+4MvNC4wk9MNb4AibgrMDTAZQZTmMFt2hlz0/Y6eKepkNmek88=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159443-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 159443: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=d670ef3401b91d04c58d72cd8ce5579b4fa900d8
X-Osstest-Versions-That:
    xen=3b1cc15f1931ba56d0ee256fe9bfe65509733b27
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 17 Feb 2021 15:58:54 +0000

flight 159443 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159443/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  d670ef3401b91d04c58d72cd8ce5579b4fa900d8
baseline version:
 xen                  3b1cc15f1931ba56d0ee256fe9bfe65509733b27

Last test of basis   159416  2021-02-16 15:01:29 Z    1 days
Testing same since   159443  2021-02-17 12:00:28 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <jgrall@amazon.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   3b1cc15f19..d670ef3401  d670ef3401b91d04c58d72cd8ce5579b4fa900d8 -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 16:07:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 16:07:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86340.162042 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCPM4-00085f-AW; Wed, 17 Feb 2021 16:07:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86340.162042; Wed, 17 Feb 2021 16:07:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCPM4-00085Y-6R; Wed, 17 Feb 2021 16:07:08 +0000
Received: by outflank-mailman (input) for mailman id 86340;
 Wed, 17 Feb 2021 16:07:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lCPM2-00085T-6c
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 16:07:06 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lCPLz-0006t3-Vb; Wed, 17 Feb 2021 16:07:03 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lCPLz-00058q-Lk; Wed, 17 Feb 2021 16:07:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=xcnzsdAIA2LNIs/7vC0zkePqE7Axn2nUGgNJd7ObN5s=; b=UaymHbSaRMAsimqRxPTaHSI14Y
	Yt3qltAcQJMch0HRWwNLW9ksJm8j8mGXkr2pg4RSOXjQUahqkAvqTx8lEo8ctfBxyKI/pksixQVdT
	DAQJKHPMT95tqD++FEVj353jSLIzmZX1IjqghlvRXiGkPRw4Vk5eD1ttTxZlCs07bGsg=;
Subject: Re: [for-4.15][PATCH v3 2/3] xen/x86: iommu: Ignore IOMMU mapping
 requests when a domain is dying
To: Jan Beulich <jbeulich@suse.com>
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>, Paul Durrant <paul@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210217142458.3769-1-julien@xen.org>
 <20210217142458.3769-3-julien@xen.org>
 <20f68b12-a767-b1d3-a3dd-9f92172def5f@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <d1116875-dd98-8f51-9113-314c1a62b4bf@xen.org>
Date: Wed, 17 Feb 2021 16:07:01 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20f68b12-a767-b1d3-a3dd-9f92172def5f@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 17/02/2021 15:01, Jan Beulich wrote:
> On 17.02.2021 15:24, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> The new x86 IOMMU page-tables allocator will release the pages when
>> relinquishing the domain resources. However, this is not sufficient
>> when the domain is dying because nothing prevents page-table to be
>> allocated.
>>
>> Currently page-table allocations can only happen from iommu_map(). As
>> the domain is dying, there is no good reason to continue to modify the
>> IOMMU page-tables.
> 
> While I agree this to be the case right now, I'm not sure it is a
> good idea to build on it (in that you leave the unmap paths
> untouched).

I don't build on that assumption. See next patch.

> Imo there's a fair chance this would be overlooked at
> the point super page mappings get introduced (which has been long
> overdue), and I thought prior discussion had lead to a possible
> approach without risking use-after-free due to squashed unmap
> requests.

I know you suggested zapping the root page-tables... However, I don't 
think this is 4.15 material, and you agreed with this (you were the one 
who pointed that out).

>> --- a/xen/drivers/passthrough/x86/iommu.c
>> +++ b/xen/drivers/passthrough/x86/iommu.c
>> @@ -273,6 +273,9 @@ int iommu_free_pgtables(struct domain *d)
>>       /*
>>        * Pages will be moved to the free list below. So we want to
>>        * clear the root page-table to avoid any potential use after-free.
>> +     *
>> +     * After this call, no more IOMMU mapping can happen.
>> +     *
>>        */
>>       hd->platform_ops->clear_root_pgtable(d);
> 
> I.e. you utilize the call in place of spin_barrier(). Maybe worth
> saying in the comment?

Sure.

> 
> Also, nit: Stray blank comment line.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 16:29:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 16:29:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86344.162054 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCPi0-0001en-CG; Wed, 17 Feb 2021 16:29:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86344.162054; Wed, 17 Feb 2021 16:29:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCPi0-0001eg-7E; Wed, 17 Feb 2021 16:29:48 +0000
Received: by outflank-mailman (input) for mailman id 86344;
 Wed, 17 Feb 2021 16:29:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lCPhz-0001eb-3Y
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 16:29:47 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lCPhx-0007EK-TV; Wed, 17 Feb 2021 16:29:45 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lCPhx-00071v-Ic; Wed, 17 Feb 2021 16:29:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=Ac7GcIC4jmtgcfioE38Gw/EadoejlB+Xws4jOjnk3do=; b=xBvVfLikMwkMQvglC9qBLmYqvm
	DsP+IUfbKVkcw6ZWqVxdlwbHZO0/0xMnYKK/sY2tCSrlpP7bbs9rbn86TN2SzH8+xV1HQmRkHOvs7
	hVk07gw24+TktH2G3pVJQVf1WmUFtUgnabm2nwJ69N9S/8QQX975cQwl0/n6I+kCRF4c=;
Subject: Re: [for-4.15][PATCH v3 3/3] xen/iommu: x86: Harden the IOMMU
 page-table allocator
To: Jan Beulich <jbeulich@suse.com>
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Paul Durrant <paul@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210217142458.3769-1-julien@xen.org>
 <20210217142458.3769-4-julien@xen.org>
 <cf303aee-3d89-5608-f358-6bef5c32d706@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <51618338-daff-5b9a-5214-e0788d95992b@xen.org>
Date: Wed, 17 Feb 2021 16:29:43 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <cf303aee-3d89-5608-f358-6bef5c32d706@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 17/02/2021 15:13, Jan Beulich wrote:
> On 17.02.2021 15:24, Julien Grall wrote:
>> --- a/xen/drivers/passthrough/x86/iommu.c
>> +++ b/xen/drivers/passthrough/x86/iommu.c
>> @@ -149,6 +149,13 @@ int arch_iommu_domain_init(struct domain *d)
>>  
>>  void arch_iommu_domain_destroy(struct domain *d)
>>  {
>> +    /*
>> +     * There should be not page-tables left allocated by the time the
> 
> Nit: s/not/no/ ?
> 
>> +     * domain is destroyed. Note that arch_iommu_domain_destroy() is
>> +     * called unconditionally, so pgtables may be unitialized.
>> +     */
>> +    ASSERT(dom_iommu(d)->platform_ops == NULL ||
>> +           page_list_empty(&dom_iommu(d)->arch.pgtables.list));
>>   }
>>   
>>   static bool __hwdom_init hwdom_iommu_map(const struct domain *d,
>> @@ -279,6 +286,9 @@ int iommu_free_pgtables(struct domain *d)
>>        */
>>       hd->platform_ops->clear_root_pgtable(d);
>>   
>> +    /* After this barrier no new page allocations can occur. */
>> +    spin_barrier(&hd->arch.pgtables.lock);
> 
> Didn't patch 2 utilize the call to ->clear_root_pgtable() itself as
> the barrier? Why introduce another one (with a similar comment)
> explicitly now?
The barriers act differently: one will gate against any IOMMU page-table 
modification, the other against allocation.

There is no guarantee that the former will prevent the latter.

> 
>> @@ -315,9 +326,29 @@ struct page_info *iommu_alloc_pgtable(struct domain *d)
>>       unmap_domain_page(p);
>>   
>>       spin_lock(&hd->arch.pgtables.lock);
>> -    page_list_add(pg, &hd->arch.pgtables.list);
>> +    /*
>> +     * The IOMMU page-tables are freed when relinquishing the domain, but
>> +     * nothing prevent allocation to happen afterwards. There is no valid
>> +     * reasons to continue to update the IOMMU page-tables while the
>> +     * domain is dying.
>> +     *
>> +     * So prevent page-table allocation when the domain is dying.
>> +     *
>> +     * We relying on &hd->arch.pgtables.lock to synchronize d->is_dying.
>> +     */
>> +    if ( likely(!d->is_dying) )
>> +    {
>> +        alive = true;
>> +        page_list_add(pg, &hd->arch.pgtables.list);
>> +    }
>>       spin_unlock(&hd->arch.pgtables.lock);
>>   
>> +    if ( unlikely(!alive) )
>> +    {
>> +        free_domheap_page(pg);
>> +        pg = NULL;
>> +    }
>> +
>>       return pg;
>>   }
> 
> As before I'm concerned of this forcing error paths to be taken
> elsewhere, in case an allocation still happens (e.g. from unmap
> once super page mappings are supported). Considering some of the
> error handling in the IOMMU code is to invoke domain_crash(), it
> would be quite unfortunate if we ended up crashing a domain
> while it is being cleaned up after.

It is unfortunate, but I think this is better than leaking page 
tables.

> 
> Additionally, the (at present still hypothetical) unmap case, if
> failing because of the change here, would then again chance to
> leave mappings in place while the underlying pages get freed. As
> this would likely require an XSA, the change doesn't feel like
> "hardening" to me.

I would agree with this if memory allocations could never fail. That's 
not the case, and it will become worse as we use the IOMMU pool.

Do you have callers in mind that don't check the return value of iommu_unmap()?
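
The pattern in the quoted hunk can be sketched as a self-contained mock
(an illustration only, not Xen code: the trivial lock below stands in for
Xen's spinlock, and the names mirror hd->arch.pgtables.lock, d->is_dying
and the pgtables list from the patch):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* Single-threaded mock lock marking the critical sections; in Xen this
 * is a real spinlock, and the lock/unlock pair in the teardown path
 * plays the part of spin_barrier(). */
typedef int mock_lock_t;
static void mock_lock(mock_lock_t *l)   { assert(!*l); *l = 1; }
static void mock_unlock(mock_lock_t *l) { assert(*l);  *l = 0; }

struct mock_domain {
    mock_lock_t pgtables_lock;  /* role of hd->arch.pgtables.lock */
    bool is_dying;              /* role of d->is_dying */
    int nr_pgtables;            /* stands in for the pgtables list */
};

/* Allocation checks is_dying under the lock: once the teardown path has
 * taken and released the lock with is_dying set, no further page can be
 * added to the list. */
static void *mock_iommu_alloc_pgtable(struct mock_domain *d)
{
    void *pg = malloc(64);
    bool alive = false;

    if (!pg)
        return NULL;

    mock_lock(&d->pgtables_lock);
    if (!d->is_dying) {
        alive = true;
        d->nr_pgtables++;       /* page_list_add() in the real patch */
    }
    mock_unlock(&d->pgtables_lock);

    if (!alive) {
        free(pg);               /* domain dying: fail the allocation */
        pg = NULL;
    }
    return pg;
}

/* Teardown path: after the lock/unlock "barrier", the list can be
 * drained without racing against new insertions. */
static void mock_free_pgtables(struct mock_domain *d)
{
    mock_lock(&d->pgtables_lock);
    mock_unlock(&d->pgtables_lock);
    d->nr_pgtables = 0;         /* drain and free the list */
}
```

This is what distinguishes the two barriers discussed above: clearing the
root page-table stops the hardware walking the tables, while the lock here
gates the software-side allocation path.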

Cheers,


-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 16:41:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 16:41:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86347.162066 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCPsu-0003OQ-Bu; Wed, 17 Feb 2021 16:41:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86347.162066; Wed, 17 Feb 2021 16:41:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCPsu-0003OJ-93; Wed, 17 Feb 2021 16:41:04 +0000
Received: by outflank-mailman (input) for mailman id 86347;
 Wed, 17 Feb 2021 16:41:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lCPss-0003OE-LB
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 16:41:02 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lCPss-0007PD-8U; Wed, 17 Feb 2021 16:41:02 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lCPsr-000815-TF; Wed, 17 Feb 2021 16:41:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=e45F8tRQETYHRvyIL3xkas+Ry4NXrjDpWegxK+BOBVM=; b=F3D6NkkAr2JcueAQXK+zKcRTou
	0eRedKyNP8ziMA86mkfJyqFSmAhwRutl4qErm2LInCHEzEwRyYypmeZT/t/CRNd14UvJNo3EOhv/R
	iwwqZtxB3XO80TbPNmyRsdOR50qsK3rWTwdHC0HoqMbZm7Pm6N5UUIBIa3f2rgS3KZgc=;
Subject: Re: [RFC] xen/arm: introduce XENFEAT_ARM_dom0_iommu
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "Volodymyr_Babchuk@epam.com" <Volodymyr_Babchuk@epam.com>,
 Rahul Singh <Rahul.Singh@arm.com>
References: <alpine.DEB.2.21.2102161333090.3234@sstabellini-ThinkPad-T480s>
 <2d22d5b8-6cda-f27b-e938-4806b65794a5@xen.org>
 <505CE19F-B324-4DE2-8EC4-D885780FD17A@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <463b0594-a78b-3f9b-e816-2cbd2a1d16dd@xen.org>
Date: Wed, 17 Feb 2021 16:41:00 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <505CE19F-B324-4DE2-8EC4-D885780FD17A@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit



On 17/02/2021 15:37, Bertrand Marquis wrote:
> Hi Julien,

Hi Bertrand,

>> On 17 Feb 2021, at 08:50, Julien Grall <julien@xen.org> wrote:
>>
>>
>>
>> On 17/02/2021 02:00, Stefano Stabellini wrote:
>>> Hi all,
>>> Today Linux uses the swiotlb-xen driver (drivers/xen/swiotlb-xen.c) to
>>> translate addresses for DMA operations in Dom0. Specifically,
>>> swiotlb-xen is used to translate the address of a foreign page (a page
>>> belonging to a domU) mapped into Dom0 before using it for DMA.
>>> This is important because although Dom0 is 1:1 mapped, DomUs are not. On
>>> systems without an IOMMU swiotlb-xen enables PV drivers to work as long
>>> as the backends are in Dom0. Thanks to swiotlb-xen, the DMA operation
>>> ends up using the MFN, rather than the GFN.
>>> On systems with an IOMMU, this is not necessary: when a foreign page is
>>> mapped in Dom0, it is added to the Dom0 p2m. A new GFN->MFN translation
>>> is established for both MMU and SMMU. Dom0 could safely use the GFN
>>> address (instead of the MFN) for DMA operations and they would work. It
>>> would be more efficient than using swiotlb-xen.
>>> If you recall my presentation from Xen Summit 2020, Xilinx is working on
>>> cache coloring. With cache coloring, no domain is 1:1 mapped, not even
>>> Dom0. In a scenario where Dom0 is not 1:1 mapped, swiotlb-xen does not
>>> work as intended.
>>> The suggested solution for both these issues is to add a new feature
>>> flag "XENFEAT_ARM_dom0_iommu" that tells Dom0 that it is safe not to use
>>> swiotlb-xen because IOMMU translations are available for Dom0. If
>>> XENFEAT_ARM_dom0_iommu is set, Linux should skip the swiotlb-xen
>>> initialization. I have tested this scheme with and without cache
>>> coloring (hence with and without 1:1 mapping of Dom0) on ZCU102 and it
>>> works as expected: DMA operations succeed.
>>> What about systems where an IOMMU is present but not all devices are
>>> protected?
>>> There is no way for Xen to know which devices are protected and which
>>> ones are not: devices that do not have the "iommus" property could or
>>> could not be DMA masters.
>>> Perhaps Xen could populate a whitelist of devices protected by the IOMMU
>>> based on the "iommus" property. It would require some added complexity
>>> in Xen and especially in the swiotlb-xen driver in Linux to use it,
>>> which is not ideal.
>>
>> You are trading a bit more complexity in Xen and Linux against the risk that a user may not be able to use the hypervisor on his/her platform without a quirk in Xen (see more below).
>>
>>> However, this approach would not work for cache
>>> coloring where dom0 is not 1:1 mapped so the swiotlb-xen should not be
>>> used either way
>>
>> Not all Dom0 Linux kernels will be able to work with cache colouring. So you will need a way for the kernel to say "Hey, I can avoid using swiotlb".
> 
> I fully agree and from my current understanding the condition is “having an iommu”.
> 
>>
>>> For these reasons, I would like to propose a single flag
>>> XENFEAT_ARM_dom0_iommu which says that the IOMMU can be relied upon for
>>> DMA translations. In situations where a DMA master is not SMMU
>>> protected, XENFEAT_ARM_dom0_iommu should not be set. For example, on a
>>> platform where an IOMMU is present and protects most DMA masters but it
>>> is leaving out the MMC controller, then XENFEAT_ARM_dom0_iommu should
>>> not be set (because PV block is not going to work without swiotlb-xen.)
>>> This also means that cache coloring won't be usable on such a system (at
>>> least not usable with the MMC controller so the system integrator should
>>> pay special care to setup the system).
>>> It is worth noting that if we wanted to extend the interface to add a
>>> list of protected devices in the future, it would still be possible. It
>>> would be compatible with XENFEAT_ARM_dom0_iommu.
>>
>> I imagine by compatible, you mean XENFEAT_ARM_dom0_iommu would be cleared and instead the device-tree list would be used. Is that correct?
> 
> What do you mean by device tree list here ?

Sorry I meant "device list". I was referring to Stefano's suggestion to 
describe the list of devices protected in the device-tree.

> 
>>
>>> How to set XENFEAT_ARM_dom0_iommu?
>>> We could set XENFEAT_ARM_dom0_iommu automatically when
>>> is_iommu_enabled(d) for Dom0. We could also have a platform specific
>>> (xen/arch/arm/platforms/) override so that a specific platform can
>>> disable XENFEAT_ARM_dom0_iommu. For debugging purposes and advanced
>>> users, it would also be useful to be able to override it via a Xen
>>> command line parameter.
>> Platform quirks should be limited to a small set of platforms.
>>
>> In this case, this would not be only per-platform but also per-firmware table as a developer can decide to remove/add IOMMU nodes in the DT at any time.
>>
>> In addition to that, it means we are introducing a regression for those users, as Xen 4.14 would have worked on their platform but not anymore. They would need to go through all the nodes and find out which one is not protected.
> 
> I am not sure I understand your point here, as we cannot detect whether a device is protected by an IOMMU: we do not know which devices require one.

That's correct...

> Could you explain which use case that worked before would not work here?

 From Stefano's e-mail, Xen would expose XENFEAT_ARM_dom0_iommu if all 
the devices are protected by the IOMMU.

This implies that Xen is aware of whether every DMA-capable device is 
protected. As you rightfully pointed out, this cannot work.

>>
>> This is a bit of a daunting task and we are going to end up having a lot of per-platform quirk in Xen.
> 
>  From my understanding, the quirks here would be in Linux, as it would have to decide whether to use swiotlb or not.

This is not how I understood Stefano's e-mail. But even if it is 
happening in Linux, then we need a way to tell Linux which devices have 
been protected by Xen.

> What quirk do you imagine we could implement in Xen ?

Me? None. That's Stefano's idea, and I don't think it can work.
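
For reference, the kernel-side decision Stefano's proposal would require can
be sketched as below. Everything here is hypothetical: XENFEAT_ARM_dom0_iommu
is only an RFC at this point, its bit value below is a stand-in, and the
bitmap lookup merely mimics how a guest tests the feature flags it obtains
from the hypervisor via the XENVER_get_features hypercall:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical feature bit: XENFEAT_ARM_dom0_iommu is only proposed in
 * this RFC; the value below is a stand-in, not an allocated number. */
enum { XENFEAT_ARM_dom0_iommu = 16 };

/* Stand-in for the feature bitmap a guest fills in from the hypervisor
 * via the XENVER_get_features hypercall. */
static unsigned long xen_features_bitmap;

static bool mock_xen_feature(int f)
{
    return (xen_features_bitmap >> f) & 1UL;
}

static bool swiotlb_xen_in_use;

/* Sketch of the proposed Dom0 init decision: skip swiotlb-xen when the
 * hypervisor vouches that the IOMMU covers Dom0's DMA masters. */
static void mock_arm_xen_dma_init(void)
{
    if (mock_xen_feature(XENFEAT_ARM_dom0_iommu)) {
        swiotlb_xen_in_use = false;   /* GFN-based DMA is safe */
        return;
    }
    swiotlb_xen_in_use = true;        /* stand-in for xen_swiotlb_init() */
}
```

The all-or-nothing shape of this check is exactly the sticking point in the
thread: the flag can only be truthfully set if every DMA master is protected,
which Xen cannot determine on its own.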

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 16:43:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 16:43:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86351.162078 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCPv5-0003YB-Oy; Wed, 17 Feb 2021 16:43:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86351.162078; Wed, 17 Feb 2021 16:43:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCPv5-0003Y4-Ly; Wed, 17 Feb 2021 16:43:19 +0000
Received: by outflank-mailman (input) for mailman id 86351;
 Wed, 17 Feb 2021 16:43:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=n2sW=HT=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lCPv3-0003Xr-UC
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 16:43:18 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id eef189c4-32ee-4253-a924-6e49a546f455;
 Wed, 17 Feb 2021 16:43:16 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eef189c4-32ee-4253-a924-6e49a546f455
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613580196;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version;
  bh=rw2lC3aWi2/OP/LKqhOTIZxiMs1Fg5QFKgrBJsMDhWI=;
  b=QWNKOOhfcWdcWfp4KbNerctvGYLiriqnX4UdCd12FnJ7lVgdK2oYX1KV
   9gCv8y7irlEYPsGGnYm3wp62MQ4SAOXHgZdVvvKqczucfYCrwlJLY0kPD
   yYe/KHWsgHqz0/IXxYrsNcprK2uQCp5eRllMJbmlUGYT1QH58Vg9+/NKP
   w=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: NN3IXByZhZFFMdUNirgUZMv0TMLHOrJW8YpwFEgpp6NxIfTDTNzdOqrjyAm6tdbJnAZnCS6Kn5
 BJaRUD0EAqcCU4G88fS++B+hcjQ4G2xQj/YIpXn+/tT75CStlemscJ1D1nR4HeW0j7hfwZhISE
 2+OfOyKv/x3EwUrKV4X3fyB/W0TjQdwecFg29K7Q+Vrko4NnNIODCWzq+mtyXoDS86msbVP7mf
 lzQNvSAHSDu90OwJW+zIPG+FdFx0SfA0n0oblYaJOGt6Hi+k10S/u4xIhkgdp0fH0gKfPdjstF
 LrI=
X-SBRS: 5.1
X-MesageID: 37456607
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,184,1610427600"; 
   d="scan'208";a="37456607"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Anthony PERARD
	<anthony.perard@citrix.com>
Subject: [PATCH 1/3] tools/libxl: Split out and err paths in libxl__domain_get_device_model_uid()
Date: Wed, 17 Feb 2021 16:42:49 +0000
Message-ID: <20210217164251.11005-2-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210217164251.11005-1-andrew.cooper3@citrix.com>
References: <20210217164251.11005-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain

All paths with a nonzero rc now head to err.  All paths with a zero rc keep
heading towards out.

The comment discussing invariants is now arguably stale - it will be fixed in
one coherent manner when further fixes are in place.

No functional change.
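
As an aside for reviewers, a minimal sketch (not libxl code; function and
constant names below are hypothetical, and ERROR_INVAL stands in for libxl's
real value) of the control-flow convention this patch introduces: every
failure jumps to `err`, every success continues towards `out`, so a
success-only epilogue can later be placed under `out`:

```c
#include <assert.h>
#include <stddef.h>

#define ERROR_INVAL -6  /* hypothetical stand-in; libxl defines the real value */

/* Hypothetical analogue of libxl__domain_get_device_model_uid()'s shape:
 * nonzero-rc paths go to err, zero-rc paths fall through to out. */
static int get_uid(int lookup_ok, int found, long *uid_out)
{
    int rc;
    long uid;

    if (!lookup_ok) {           /* lookup failed outright */
        rc = ERROR_INVAL;
        goto err;
    }
    if (!found) {               /* lookup succeeded, but no such user */
        rc = ERROR_INVAL;
        goto err;
    }

    uid = 1234;                 /* placeholder for user_base->pw_uid */
    rc = 0;
    goto out;

 out:                           /* success-only epilogue */
    *uid_out = uid;
 err:                           /* single return for both outcomes */
    return rc;
}
```

With this split, nothing under `out` ever runs on a failure path.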

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Ian Jackson <iwj@xenproject.org>
CC: Wei Liu <wl@xen.org>
CC: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/libs/light/libxl_dm.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
index db4cec6a76..30b3242e57 100644
--- a/tools/libs/light/libxl_dm.c
+++ b/tools/libs/light/libxl_dm.c
@@ -153,13 +153,13 @@ static int libxl__domain_get_device_model_uid(libxl__gc *gc,
     if (user) {
         rc = userlookup_helper_getpwnam(gc, user, &user_pwbuf, &user_base);
         if (rc)
-            goto out;
+            goto err;
 
         if (!user_base) {
             LOGD(ERROR, guest_domid, "Couldn't find device_model_user %s",
                  user);
             rc = ERROR_INVAL;
-            goto out;
+            goto err;
         }
 
         intended_uid = user_base->pw_uid;
@@ -188,7 +188,7 @@ static int libxl__domain_get_device_model_uid(libxl__gc *gc,
     rc = userlookup_helper_getpwnam(gc, LIBXL_QEMU_USER_RANGE_BASE,
                                          &user_pwbuf, &user_base);
     if (rc)
-        goto out;
+        goto err;
     if (user_base) {
         struct passwd *user_clash, user_clash_pwbuf;
 
@@ -196,14 +196,14 @@ static int libxl__domain_get_device_model_uid(libxl__gc *gc,
         rc = userlookup_helper_getpwuid(gc, intended_uid,
                                          &user_clash_pwbuf, &user_clash);
         if (rc)
-            goto out;
+            goto err;
         if (user_clash) {
             LOGD(ERROR, guest_domid,
                  "wanted to use uid %ld (%s + %d) but that is user %s !",
                  (long)intended_uid, LIBXL_QEMU_USER_RANGE_BASE,
                  guest_domid, user_clash->pw_name);
             rc = ERROR_INVAL;
-            goto out;
+            goto err;
         }
 
         LOGD(DEBUG, guest_domid, "using uid %ld", (long)intended_uid);
@@ -222,7 +222,7 @@ static int libxl__domain_get_device_model_uid(libxl__gc *gc,
     user = LIBXL_QEMU_USER_SHARED;
     rc = userlookup_helper_getpwnam(gc, user, &user_pwbuf, &user_base);
     if (rc)
-        goto out;
+        goto err;
     if (user_base) {
         LOGD(WARN, guest_domid, "Could not find user %s, falling back to %s",
              LIBXL_QEMU_USER_RANGE_BASE, LIBXL_QEMU_USER_SHARED);
@@ -240,6 +240,7 @@ static int libxl__domain_get_device_model_uid(libxl__gc *gc,
          "Could not find user %s or range base pseudo-user %s, cannot restrict",
          LIBXL_QEMU_USER_SHARED, LIBXL_QEMU_USER_RANGE_BASE);
     rc = ERROR_INVAL;
+    goto err;
 
 out:
     /* First, do a root check if appropriate */
@@ -257,6 +258,7 @@ static int libxl__domain_get_device_model_uid(libxl__gc *gc,
             state->dm_kill_uid = GCSPRINTF("%ld", (long)intended_uid);
     }
 
+ err:
     return rc;
 }
 
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed Feb 17 16:43:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 16:43:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86352.162084 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCPv6-0003Ye-4h; Wed, 17 Feb 2021 16:43:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86352.162084; Wed, 17 Feb 2021 16:43:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCPv5-0003YQ-TC; Wed, 17 Feb 2021 16:43:19 +0000
Received: by outflank-mailman (input) for mailman id 86352;
 Wed, 17 Feb 2021 16:43:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=n2sW=HT=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lCPv4-0003Xs-2T
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 16:43:18 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b3dac973-a7df-4fe8-9898-4f22d82209cc;
 Wed, 17 Feb 2021 16:43:16 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b3dac973-a7df-4fe8-9898-4f22d82209cc
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613580196;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version;
  bh=Rf44n99j0ALTC6ho3Q828btTMXx7uCUyMKUasv55xao=;
  b=Fm8Iy5htOtH/gcUefKdml5qdYcNV8k9l/qjm3MDjfUKJshHew+YRkOX+
   gedntByLm4r+IzJjy475ltCrRSRX66Jx05RcwnU/JDvbtxbY3z7kNraJO
   5X0KTvGOX2M2zv7PQDruaNsj3nJR7ppc7uN4esXJuc+AC+Slg8JNcfm79
   E=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: SiPIjv4hMhCbTbF9Li2GzodEYaWtYzebbOnjDJq1FDMG9tEThZ2R7a9YCB48cgX6uEHaBITRT4
 S1MU13tEemk7tW/ioIcGjF3UMYuDCTRiOekD4tbY4hplH0tDOpfIQGN5Z4bTP+ON3xjQ9CxNWJ
 ODJhNaW4yjyv66kmcCP2HZvWNq19LrStlzhADTAWsT/S3N9ZtcMBm6aBTGHb2M/SVhz2gxeKPm
 3hvAURFkEalFszMdzDLQXKkxRIrBj6DrabDr3wPY/AFrgYtcJAGoiiLmwrJFj0vt5GsYNMxRap
 C/w=
X-SBRS: 5.1
X-MesageID: 37627938
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,184,1610427600"; 
   d="scan'208";a="37627938"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Anthony PERARD
	<anthony.perard@citrix.com>
Subject: [PATCH 3/3] tools/libxl: Rework invariants in libxl__domain_get_device_model_uid()
Date: Wed, 17 Feb 2021 16:42:51 +0000
Message-ID: <20210217164251.11005-4-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210217164251.11005-1-andrew.cooper3@citrix.com>
References: <20210217164251.11005-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain

Various versions of gcc, when compiling with -Og, complain:

  libxl_dm.c: In function 'libxl__domain_get_device_model_uid':
  libxl_dm.c:256:12: error: 'kill_by_uid' may be used uninitialized in this function [-Werror=maybe-uninitialized]
    256 |         if (kill_by_uid)
        |            ^

The logic is very tangled.

Two paths unconditionally set user before checking for associated errors.
This interferes with the expected use of uninitialised-variable heuristics to
force compile failures for ill-formed exit paths.

Use b_info->device_model_user and LIBXL_QEMU_USER_SHARED as applicable, and
only set user on the goto out paths.

All goto out paths now take one of the forms:
  user = NULL;
  rc = 0;

or:
  user = non-NULL;
  intended_uid = ...;
  kill_by_uid = ...;
  rc = 0;

As a consequence, intended_uid can drop its default of -1, and the dm_restrict
path can drop both its now-stale "just in case" comment and the redundant
setting of kill_by_uid that worked around this issue at other optimisation
levels.

Finally, rewrite the comment about invariants, indicating the split between
the out and err labels, and associated rc values.  Additionally, reword the
"is {not,} set" terminology to "is {non-,}NULL" to be more precise.

No functional change.
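
For reviewers, a minimal sketch (not libxl code; the function name and the
fallback user string below are hypothetical) of why deliberately leaving the
variable without a default is useful: if any success path forgets to set it,
-Og -Werror=maybe-uninitialized turns the omission into a build failure
rather than a latent runtime bug:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical analogue of the reworked function: `kill` has no default,
 * so every path reaching `out` must have set it, or gcc complains. */
static int pick_user(int restricted, const char **user, bool *kill_by_uid)
{
    bool kill;          /* no default: each success path must assign it */
    int rc;

    if (!restricted) {
        *user = NULL;   /* run as root; kill-by-uid is meaningless */
        kill = false;
        rc = 0;
        goto out;
    }

    *user = "xen-qemuuser-shared";  /* hypothetical fallback user name */
    kill = false;                   /* shared user: cannot kill by uid */
    rc = 0;

 out:
    *kill_by_uid = kill;
    return rc;
}
```

Deleting any one `kill = ...;` assignment above reintroduces exactly the
class of warning quoted at the top of this message.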

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Ian Jackson <iwj@xenproject.org>
CC: Wei Liu <wl@xen.org>
CC: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/libs/light/libxl_dm.c | 31 ++++++++++++++++---------------
 1 file changed, 16 insertions(+), 15 deletions(-)

diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
index 7843c283ca..8a7e084d89 100644
--- a/tools/libs/light/libxl_dm.c
+++ b/tools/libs/light/libxl_dm.c
@@ -127,7 +127,7 @@ static int libxl__domain_get_device_model_uid(libxl__gc *gc,
     struct passwd *user_base, user_pwbuf;
     int rc;
     char *user;
-    uid_t intended_uid = -1;
+    uid_t intended_uid;
     bool kill_by_uid;
 
     /* Only qemu-upstream can run as a different uid */
@@ -135,33 +135,34 @@ static int libxl__domain_get_device_model_uid(libxl__gc *gc,
         return 0;
 
     /*
-     * From this point onward, all paths should go through the `out`
-     * label.  The invariants should be:
+     * From this point onward, all paths should go through the `out` (success,
+     * rc = 0) or `err` (failure, rc != 0) labels.  The invariants should be:
      * - rc may be 0, or an error code.
-     * - if rc is an error code, user and intended_uid are ignored.
-     * - if rc is 0, user may be set or not set.
-     * - if user is set, then intended_uid must be set to a UID matching
+     * - if rc is an error code, all settings are ignored.
+     * - if rc is 0, user may be NULL or non-NULL.
+     * - if user is non-NULL, then intended_uid must be set to a UID matching
      *   the username `user`, and kill_by_uid must be set to the appropriate
      *   value.  intended_uid will be checked for root (0).
      */
-    
+
     /*
      * If device_model_user is present, set `-runas` even if
      * dm_restrict isn't in use
      */
-    user = b_info->device_model_user;
-    if (user) {
-        rc = userlookup_helper_getpwnam(gc, user, &user_pwbuf, &user_base);
+    if (b_info->device_model_user) {
+        rc = userlookup_helper_getpwnam(gc, b_info->device_model_user,
+                                        &user_pwbuf, &user_base);
         if (rc)
             goto err;
 
         if (!user_base) {
             LOGD(ERROR, guest_domid, "Couldn't find device_model_user %s",
-                 user);
+                 b_info->device_model_user);
             rc = ERROR_INVAL;
             goto err;
         }
 
+        user = b_info->device_model_user;
         intended_uid = user_base->pw_uid;
         kill_by_uid = true;
         rc = 0;
@@ -175,8 +176,7 @@ static int libxl__domain_get_device_model_uid(libxl__gc *gc,
     if (!libxl_defbool_val(b_info->dm_restrict)) {
         LOGD(DEBUG, guest_domid,
              "dm_restrict disabled, starting QEMU as root");
-        user = NULL; /* Should already be null, but just in case */
-        kill_by_uid = false; /* Keep older versions of gcc happy */
+        user = NULL;
         rc = 0;
         goto out;
     }
@@ -219,13 +219,14 @@ static int libxl__domain_get_device_model_uid(libxl__gc *gc,
      * QEMU_USER_SHARED.  NB for QEMU_USER_SHARED, all QEMU will run
      * as the same UID, we can't kill by uid; therefore don't set uid.
      */
-    user = LIBXL_QEMU_USER_SHARED;
-    rc = userlookup_helper_getpwnam(gc, user, &user_pwbuf, &user_base);
+    rc = userlookup_helper_getpwnam(gc, LIBXL_QEMU_USER_SHARED,
+                                    &user_pwbuf, &user_base);
     if (rc)
         goto err;
     if (user_base) {
         LOGD(WARN, guest_domid, "Could not find user %s, falling back to %s",
              LIBXL_QEMU_USER_RANGE_BASE, LIBXL_QEMU_USER_SHARED);
+        user = LIBXL_QEMU_USER_SHARED;
         intended_uid = user_base->pw_uid;
         kill_by_uid = false;
         rc = 0;
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed Feb 17 16:43:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 16:43:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86353.162102 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCPvA-0003d1-AA; Wed, 17 Feb 2021 16:43:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86353.162102; Wed, 17 Feb 2021 16:43:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCPvA-0003cs-6S; Wed, 17 Feb 2021 16:43:24 +0000
Received: by outflank-mailman (input) for mailman id 86353;
 Wed, 17 Feb 2021 16:43:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=n2sW=HT=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lCPv8-0003Xr-Sd
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 16:43:22 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 24364b36-9608-4611-8e6a-3e4b241f46c7;
 Wed, 17 Feb 2021 16:43:18 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 24364b36-9608-4611-8e6a-3e4b241f46c7
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613580198;
  h=from:to:cc:subject:date:message-id:mime-version;
  bh=hQzng0cFb39JNB90gro8hw3unlKdNOE3YEmb8ba439E=;
  b=QJLXwOsWUIa9AobFGJhjSx1zue+wXXMmBDWFFIDjUA/sN4xoCf46L9dx
   lWN0zzkVvswtUDZKNPcipbZo7DYDFgp4U2WqNCkPVfqL3LwESegeSFgqE
   P5k93E9jq+E55vp3PATjE4ZSzZxICe7vt0Vs4otjaNW0eVIjK8v4ed281
   U=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: Vyes8em4VtOVJla2FF8K4DXvfh9SPL5ZsNLcAMvlFKEs4OWv9lEz4u7BaoUrapy20mw5RQCbzz
 3J+nk1QRYUCMA9c6dT/9N/rbDDS+h3DTzGGGGxJcKloshLuwLzF52gdAd8IOMMxkOwRI0tX9WJ
 XSUhe8zfqBb0+s+mBbL3IYun84sfgX3WFaRIL3zAFZIghhL2//Xn0AI9MVQSub3D5uMneKUe1V
 i7UeYeHRfEUCX69AhQTMj/v58hFncOa4ex61chF4/qU5FXUBbRrqK3juF/zQ4yHXy1VVZiONqD
 T2M=
X-SBRS: 5.1
X-MesageID: 37456609
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,184,1610427600"; 
   d="scan'208";a="37456609"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Anthony PERARD
	<anthony.perard@citrix.com>
Subject: [PATCH for-4.15 0/3] tools/libxl: -Og fixes for libxl__domain_get_device_model_uid()
Date: Wed, 17 Feb 2021 16:42:48 +0000
Message-ID: <20210217164251.11005-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain

Split out of previous series, and split up for clarity.

Andrew Cooper (3):
  tools/libxl: Split out and err paths in libxl__domain_get_device_model_uid()
  tools/libxl: Simplify the out path in libxl__domain_get_device_model_uid()
  tools/libxl: Rework invariants in libxl__domain_get_device_model_uid()

 tools/libs/light/libxl_dm.c | 58 ++++++++++++++++++++++++---------------------
 1 file changed, 31 insertions(+), 27 deletions(-)

-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed Feb 17 16:43:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 16:43:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86354.162113 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCPvF-0003iX-Nt; Wed, 17 Feb 2021 16:43:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86354.162113; Wed, 17 Feb 2021 16:43:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCPvF-0003iO-KR; Wed, 17 Feb 2021 16:43:29 +0000
Received: by outflank-mailman (input) for mailman id 86354;
 Wed, 17 Feb 2021 16:43:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=n2sW=HT=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lCPvD-0003Xr-Su
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 16:43:27 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ed5c5453-c7cc-4fbf-967e-74a5772ed26d;
 Wed, 17 Feb 2021 16:43:18 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ed5c5453-c7cc-4fbf-967e-74a5772ed26d
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613580198;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version;
  bh=YoxtGpvRbXNh9imaug3gf79BTXd/2y72nycSyNyRXCg=;
  b=eI46XjKk6Em84++U+WHODnrzt2CgHcS/U+obnWZgKmWInA1hyFT9NhVj
   EcuHYt7PoUaX2VADDrbp3hmjaYSjJ+ZY2nW7nl+Prqq+8HGBE1Ki+k1g1
   GmQNgtigNstLd32xKEpOGDlvWhEfGX3GXA2XLp9GjtfMk0LdwvRnHf7KL
   I=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: veN/DLGoe6oZH6JG+7YX0Jwp/aRqbZ4EWH5CTCNGti/k9h0KWFwEcpAPxKIpLwOdBQ6emzfnxL
 NpDKAXMPXrMEW8p5Nt86aszLYI7RRh+ln64NwbQjGF9TXkTRRdxYo4ZZqvxXVsO1Y2VVwN0WQ2
 wJCZmeQnRVpc0oZYUjEWTNZ8YFptQpxdQqrC7C6vL3WmtAxnR68eP+SLKlp2v4d4dv4iatVKDS
 ujmmJ9rc6fY5qTm9Ah3YAMEsosAr5xG0ftnNYjI1cIZgUKJmrNQ8UW62WfvKc75wsZU7wjwBth
 KZg=
X-SBRS: 5.1
X-MesageID: 37456612
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,184,1610427600"; 
   d="scan'208";a="37456612"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Anthony PERARD
	<anthony.perard@citrix.com>
Subject: [PATCH 2/3] tools/libxl: Simplify the out path in libxl__domain_get_device_model_uid()
Date: Wed, 17 Feb 2021 16:42:50 +0000
Message-ID: <20210217164251.11005-3-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210217164251.11005-1-andrew.cooper3@citrix.com>
References: <20210217164251.11005-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain

All paths heading towards `out` have rc = 0.  Assert this property.

The intended_uid check is an error path.  Use the err label rather than
falling into subsequent success logic.

With the above two changes, the two `if (!rc)` checks can be dropped.

Now, both remaining tests start with `if (user ...)`.  Combine the two blocks.

No functional change, but far simpler logic to follow.
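
For reviewers, a minimal sketch (not libxl code; the function name is
hypothetical and ERROR_INVAL stands in for libxl's real value) of the shape
after this change: with rc == 0 guaranteed at `out`, the repeated `if (!rc)`
guards collapse, and the root check becomes a plain jump to `err`:

```c
#include <assert.h>
#include <stddef.h>

#define ERROR_INVAL -6   /* hypothetical stand-in; libxl defines the real value */

/* Hypothetical analogue of the `out` epilogue after this patch: callers
 * only reach it with rc == 0, so the invariant is asserted once. */
static int finalize(const char *user, long intended_uid, const char **runas)
{
    int rc = 0;                 /* every path arriving at `out` has rc == 0 */

    assert(rc == 0);
    if (user) {
        /* First, do a root check if appropriate */
        if (intended_uid == 0) {
            rc = ERROR_INVAL;
            goto err;
        }

        /* Then do the final set. */
        *runas = user;
    }
 err:
    return rc;
}
```

Both remaining conditions now hang off a single `if (user)` block, matching
the combined structure in the diff below.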

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Ian Jackson <iwj@xenproject.org>
CC: Wei Liu <wl@xen.org>
CC: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/libs/light/libxl_dm.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
index 30b3242e57..7843c283ca 100644
--- a/tools/libs/light/libxl_dm.c
+++ b/tools/libs/light/libxl_dm.c
@@ -243,16 +243,17 @@ static int libxl__domain_get_device_model_uid(libxl__gc *gc,
     goto err;
 
 out:
-    /* First, do a root check if appropriate */
-    if (!rc) {
-        if (user && intended_uid == 0) {
+    assert(rc == 0);
+
+    if (user) {
+        /* First, do a root check if appropriate */
+        if (intended_uid == 0) {
             LOGD(ERROR, guest_domid, "intended_uid is 0 (root)!");
             rc = ERROR_INVAL;
+            goto err;
         }
-    }
 
-    /* Then do the final set, if still appropriate */
-    if (!rc && user) {
+        /* Then do the final set. */
         state->dm_runas = user;
         if (kill_by_uid)
             state->dm_kill_uid = GCSPRINTF("%ld", (long)intended_uid);
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed Feb 17 16:48:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 16:48:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86366.162126 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCPzo-00043r-CY; Wed, 17 Feb 2021 16:48:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86366.162126; Wed, 17 Feb 2021 16:48:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCPzo-00043k-8W; Wed, 17 Feb 2021 16:48:12 +0000
Received: by outflank-mailman (input) for mailman id 86366;
 Wed, 17 Feb 2021 16:48:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Nrto=HT=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1lCPzm-00043f-Gp
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 16:48:10 +0000
Received: from EUR03-AM5-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:fe08::631])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 54b3872d-771c-4bf9-a166-02dae3546360;
 Wed, 17 Feb 2021 16:48:08 +0000 (UTC)
Received: from AS8PR04CA0100.eurprd04.prod.outlook.com (2603:10a6:20b:31e::15)
 by AM5PR0802MB2385.eurprd08.prod.outlook.com (2603:10a6:203:9e::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3846.39; Wed, 17 Feb
 2021 16:48:06 +0000
Received: from AM5EUR03FT033.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:31e:cafe::a6) by AS8PR04CA0100.outlook.office365.com
 (2603:10a6:20b:31e::15) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3868.27 via Frontend
 Transport; Wed, 17 Feb 2021 16:48:06 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT033.mail.protection.outlook.com (10.152.16.99) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3868.27 via Frontend Transport; Wed, 17 Feb 2021 16:48:06 +0000
Received: ("Tessian outbound 46f6cf9da5e8:v71");
 Wed, 17 Feb 2021 16:48:06 +0000
Received: from e898bf6028c3.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 D9389E49-D43D-4DD5-9858-8EA1BBAAEE17.1; 
 Wed, 17 Feb 2021 16:48:00 +0000
Received: from EUR02-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id e898bf6028c3.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 17 Feb 2021 16:48:00 +0000
Received: from VE1PR08MB5696.eurprd08.prod.outlook.com (2603:10a6:800:1ae::15)
 by VE1PR08MB5726.eurprd08.prod.outlook.com (2603:10a6:800:1b2::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3868.27; Wed, 17 Feb
 2021 16:47:58 +0000
Received: from VE1PR08MB5696.eurprd08.prod.outlook.com
 ([fe80::5c93:6e79:8f1e:a839]) by VE1PR08MB5696.eurprd08.prod.outlook.com
 ([fe80::5c93:6e79:8f1e:a839%6]) with mapi id 15.20.3846.042; Wed, 17 Feb 2021
 16:47:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 54b3872d-771c-4bf9-a166-02dae3546360
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=bWAmjMiwA8AKar4agnndqNS4ujLBO6/4OMpatASobmI=;
 b=ARKlRwYI2YOpbRmmbkkMjI8GfCAdWNWMYvDZm6UZ/9vITlPuo2VMHHvIjJYxCcsDISY/zgvw0PyH1cSucEknnMfiCNiWYSsPOc1KyvWDY5JEQHA9e8fyXHSo8of1gyz+wY2DOCZ+I+zlaSPKNa8rg35d/YQajtt/De4cOgidmQY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 19543790204007ba
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=SPUIF7aHN5TQoGMzOVpqvrI4AALrCQKaaCG2UwSWh1GrTZCWrLMi979AVGeg6Sbm7x9Z7m8FqOfj/kJotug5TI51QPPS3slyfL4pBdFr0YhfvTpgSG/BfKW/MU+9DhoDazj/PHFl+SBn/cBDIR897EvR17//2AuOvJlTBN+zd8n8dN4gHicsiTmAoNjBFKRQYX7xpxnjbwuRK8DTGYBO14SREHDhZ8BJ/pBrZHmF3ByaAHH3l7/+Y+PnHjQpqicHMBc6nXxYu3hz2McCwlge1MWvV6JYDUoH3YjJIuSGZJtiGa9RddZ/VlYXV93dIrmHKEcQvMDgJQVgwQDl4a+yfg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=bWAmjMiwA8AKar4agnndqNS4ujLBO6/4OMpatASobmI=;
 b=MGwlTmX20+dJkjWUm/wTVujvnlnyNMid0mF6A1dSLUksOip/rE70RhELEoaqPVsCNu5X/qL0e+Znj65JNaIT4DgIL2uRdlL8yWChy9dyXzqe1keGJj2INMXL8iDARmKU9rJ6Fs2wLXKBtZyGqYO3Nlm+eqouN6l5bHvUt70aIZrDyHEp18744CRs5r6RjRaeDaolxwqvu+CMDvjRsf5qOjqiKSieXz/sbv6J4JgZghDty+F5463Ff5qyQvUKyDfQpWeibmu9MJ0D2V0x/ReHQuZ2LG5jKPUrwaHfmxoC/fvb0oiUdwMG2ucwSEPtB3k4+pvW8BYrXGn/9owlHbzjIQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=bWAmjMiwA8AKar4agnndqNS4ujLBO6/4OMpatASobmI=;
 b=ARKlRwYI2YOpbRmmbkkMjI8GfCAdWNWMYvDZm6UZ/9vITlPuo2VMHHvIjJYxCcsDISY/zgvw0PyH1cSucEknnMfiCNiWYSsPOc1KyvWDY5JEQHA9e8fyXHSo8of1gyz+wY2DOCZ+I+zlaSPKNa8rg35d/YQajtt/De4cOgidmQY=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: Stefano Stabellini <sstabellini@kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"Volodymyr_Babchuk@epam.com" <Volodymyr_Babchuk@epam.com>, Rahul Singh
	<Rahul.Singh@arm.com>
Subject: Re: [RFC] xen/arm: introduce XENFEAT_ARM_dom0_iommu
Thread-Topic: [RFC] xen/arm: introduce XENFEAT_ARM_dom0_iommu
Thread-Index: AQHXBNDUDoMffJM9X0qFA5iW4kfZOqpcCj0AgABx1gCAABGjAIAAAeqA
Date: Wed, 17 Feb 2021 16:47:51 +0000
Message-ID: <49FAB3DC-AA0B-4E71-B435-315EF99A76EA@arm.com>
References: <alpine.DEB.2.21.2102161333090.3234@sstabellini-ThinkPad-T480s>
 <2d22d5b8-6cda-f27b-e938-4806b65794a5@xen.org>
 <505CE19F-B324-4DE2-8EC4-D885780FD17A@arm.com>
 <463b0594-a78b-3f9b-e816-2cbd2a1d16dd@xen.org>
In-Reply-To: <463b0594-a78b-3f9b-e816-2cbd2a1d16dd@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [86.26.33.241]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 8ee9e3a7-d0c0-40c7-5bd7-08d8d363cca8
x-ms-traffictypediagnostic: VE1PR08MB5726:|AM5PR0802MB2385:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AM5PR0802MB2385E6457DC0C153D3FCE8A69D869@AM5PR0802MB2385.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 lpkjjXLnfK+Q5P9sp5WRQVGsxXSWTgwHPIr2bxkY0H0KFr6+dr3P2Vle/3eX3mLL3CMNVIUIRP3fdp5oRDkbj609Mg3N73KoNcKT3hiJKpoiJHQyRZe8vJTgYlgwNffljkAYvlw5u6hE3SLV3S7NyqpaE2yZY4Rc0y1yv7lIQQE1GGtLSCuCeOTj9OvER2C1EahjBYtvDVZq37emCtDR1QT8d7v1zZ0+0IWGgg/cWaUmGVio3z32EOnT12Mi7LfMyifGr0MR+llJvf/78k7mJZjXHglTjsH2B2nAnvZGPGahVgVzkYqgVY3opezXx6kKBLAdPW6sSYhDsq9IRzfy5Hz9o7B4UfrbE0LyXZUdlpMAtnK2mffM6gjFv30cCHM498r4G2fPNMebIYxoqOdNS7/cRBvOqTtdQ+CL98f/u1tgWec47s/4wHRva0NfZcTWPvGjAhDx7EGOFjFqHZL7pAa7+dZmC9Hezs5WcmsVkp2Fq81CjmYZL1e3tiOtDZYp5kMTPKUxy6WXau5i1RYRglLPEr6Wt2vWFidO1NWfd6sHJXmq5UIGjmA867oX3PAhg4XAspzrNAZeIB0uPntKyg==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR08MB5696.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(366004)(376002)(346002)(396003)(136003)(39860400002)(86362001)(2616005)(76116006)(91956017)(316002)(6916009)(66946007)(66476007)(66446008)(64756008)(66556008)(33656002)(6506007)(53546011)(71200400001)(55236004)(26005)(186003)(8676002)(5660300002)(6486002)(83380400001)(2906002)(4326008)(8936002)(54906003)(478600001)(36756003)(6512007)(45980500001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 =?utf-8?B?S3ZpTnJCSG51VnNIRjdSamRod0o4b2NrN3ZWZXhUM3Z4TE9UL1BZandDNkNO?=
 =?utf-8?B?TFRMYkY2VGV5N2pXb3h0SXZ6M2lKamlPL2Q1Yk9YVDA3RWtlbFRwcnZnRVQ1?=
 =?utf-8?B?YkxaRlR2Z0xnQnFlbGdoeW9NZGJVZWVka3FhSkNPYnkyOFVIa3FIQ1c0NFVJ?=
 =?utf-8?B?WlFYYzZXWFFCL2Zkc05qQ0NpVFRocHB3VmZiTnBCa3d2QjBmZExKdDBxYkEy?=
 =?utf-8?B?L1lHb3VqUjJsZmdjcnI4RDJWcHJQVXM4Z2tIQjhWMWttTm0zdkZmcUR0aDRy?=
 =?utf-8?B?S2lacEcvc2tKUU9oSFVvNW9JTlRGaW1ROXZGSDRIZ3VFSUtKTVJyNlVvR1lN?=
 =?utf-8?B?RWorbTRnZkZkNVg5VnFPZWRaQ3luYU01Z0x6dU1yaXRjYkp5YVZ0RWVqZ3Za?=
 =?utf-8?B?SExZNzFWWlF3Y3VVNDl2SHNYOWdkM0NkeDBPR0gvb2JWVHBtZ0hCSTF4Mk84?=
 =?utf-8?B?N2U4eFdlWXRFVlh1NGhKQ04wYm5hOTMwdEpHcGlvRURKNW5uSUQ3djlRcC9h?=
 =?utf-8?B?d1JUenhidGFlbjFxRFYzQnZEb0g1bUg0VFVjVWwrYWNFN0JOc3JxUGVyZzlX?=
 =?utf-8?B?TFg5d3JUMXpSSHRpdERBTVU2WFlmalRPNHdMcSthYkdGWGk1ZElOMjBEMXRp?=
 =?utf-8?B?M1EwazRMb3RsSXRyTTFKMHNFd0lGeXdmYVcvMmFqNnhTZHlYbkNpWEdkU1Rz?=
 =?utf-8?B?ajZOdjJGYTYzaHRVMEhDSHpRZXpxWnRGTnNwalN5dWJPY25LT21KVlQzWFhD?=
 =?utf-8?B?bVZWaFFTdXBsazJBT2RQaFVjWkdNeGU5Q1pLaFc3VUZkYVdTTmg1ZVVsa1BY?=
 =?utf-8?B?MmUzTXViUkI1MTFjZWN6MktrT3pSQXdTMlpnS0hlN29lSjJIb3BJb3Vhc3ZJ?=
 =?utf-8?B?bXB4NGZMQjJIMjFTbSs3S3pWd1lVdCtUL2NXcXhCK0lVV2d3Vjh1RWR6QnNT?=
 =?utf-8?B?WENTRCt2emdLcGZLbEJnYlJJemo3UkJacldqUlc0SitQbFowcUpDa1BtNGdH?=
 =?utf-8?B?Z2JnazJKQUlFeER0YzVTQ2V6NWcrSzFiajhwNTFoNHdTRlBtMUZxZGNlcEVJ?=
 =?utf-8?B?TUxzME1PSXJrMTZacEg3cTh1cXJkZUd2T3RPdGJ1Q1lrSmRNWlUvS2ZkejRM?=
 =?utf-8?B?YmJOSDBkeEpFeDZqaU1pVWs2WXI0a204RjlOdUZjMVo3MFJhbkd0WnVLbUNC?=
 =?utf-8?B?NW43WkhkaElaYVlXcjVXRkkvRGFYQlpIQml5Ymloa3VnTW5NN0NIVi96Mkkx?=
 =?utf-8?B?VHdleklCZmlOenJVeVQ0MHgyaTA5SWhlRzNXNEVLNElibncvd1JrU3FzYi9x?=
 =?utf-8?B?MThIOG8yT05RVGpVaElhSndmbm1lbDZFS1pMMURhNGVsRjY0M3BOeGZDNTFR?=
 =?utf-8?B?Y1BYdEJvd2pNZ0llQnFMZCtVcVU1ZnhFT1JndSsra1pzUDRuVjBvaFdlWlRh?=
 =?utf-8?B?Tnl2SVdmcEI2Mmd4RjRYaGtPemxPRXNYNUpnK3M2YmRsRFdvekROWWw2NEh5?=
 =?utf-8?B?RjgyS0g1bndvUWdkaUNDL3ZiOUVldGpTUEI2NFZobXJ1ZlpIUUtsdlE3cms0?=
 =?utf-8?B?b3RldGw2cHhjNE9hK1hMNlRFdTkrUFVRRjZxY1AwOGtzVWNiUUIxTmRBb0ZT?=
 =?utf-8?B?NktTQWRwM0tEV1NvQm5KN0RVR2FqWWEvMGU5bW40eFVJYUpteGZBaW1pTGR6?=
 =?utf-8?B?ZVZUYWYveDhpSmYzNGdCR2Y3enRhVEZ1WDNMUVlpblVWNzhFb3krb0dUejdu?=
 =?utf-8?Q?VZZlQ35BcseJb3WeWmCwlbSFtPZlNc/7kfbcKlh?=
Content-Type: text/plain; charset="utf-8"
Content-ID: <A9498A896DA13D49A57AA4DE5739ED9B@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR08MB5726
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT033.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	e22f453f-514a-4bad-3dfa-08d8d363c428
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	VzdlB00nRsvIkMHStJJcgiFRwKqLxqv6ZgugY+yoL02LQyGR1apLEEB0lVewfwlzkck9OfawokJUdS7hfbDn256gHbuRM12Trv1A0mW9CUB8SQgC36z1UiltE0kZmLxUczJcuqmJQ75bU/9MtV4J7XBS3YurC4FIEZXxLV7QkmiiMoBd0/OeqMH4F2ohBmUrjTaTfAe1dcNvSAfw1HaBA0DvVfhFLsVBHivl3z/VlfASRLS3KysWbVyRE44NkLbFQN1YtlkqWdntt1WgLt19ZZ+WHtA2cT6zxcqjaDLkuNYHC8O/6+lO9Bp//PquU3nxpVFXYMh6mqWKk63zKkBEWOmx2oue1btey8G6QvWWtuv46qNIwh7bvzW1HBX+X88aI/ny7nEV63j/yu/7XL98/7zbgQjuiwfYYhBAkx1gks/IEmSaEUytu7N4IcEkBSVvoy71yrzUA5Tx7/v43+YK7EBxbxFjlV1VD8zh4SoSxhY01f8k3D6FdGi0b9MRIRxknRZxw9kpzyVyccv2bKq+YXPS8Mglh1zWitc8kEh8Ow4JwbqYgX7RLCQ4CwRID7qhGqtU/LWVV520zGbGtRQqkgNpl9QpfVQNqIUmBW3pDneeMmeCZWNjATtDLQKrjaRHkD6Zeybxn7YUOMcvPphdXpbEr8/Qd9c6Fatn1olKRx0=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(396003)(39860400002)(376002)(346002)(136003)(36840700001)(46966006)(6506007)(53546011)(2906002)(8936002)(6512007)(4326008)(81166007)(47076005)(8676002)(5660300002)(33656002)(36860700001)(83380400001)(336012)(36756003)(186003)(54906003)(26005)(2616005)(6862004)(86362001)(478600001)(316002)(82310400003)(55236004)(6486002)(356005)(70586007)(82740400003)(70206006);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Feb 2021 16:48:06.0874
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 8ee9e3a7-d0c0-40c7-5bd7-08d8d363cca8
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT033.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM5PR0802MB2385

Hi Julien,

> On 17 Feb 2021, at 16:41, Julien Grall <julien@xen.org> wrote:
> 
> 
> 
> On 17/02/2021 15:37, Bertrand Marquis wrote:
>> Hi Julien,
> 
> Hi Bertrand,
> 
>>> On 17 Feb 2021, at 08:50, Julien Grall <julien@xen.org> wrote:
>>> 
>>> 
>>> 
>>> On 17/02/2021 02:00, Stefano Stabellini wrote:
>>>> Hi all,
>>>> Today Linux uses the swiotlb-xen driver (drivers/xen/swiotlb-xen.c) to
>>>> translate addresses for DMA operations in Dom0. Specifically,
>>>> swiotlb-xen is used to translate the address of a foreign page (a page
>>>> belonging to a domU) mapped into Dom0 before using it for DMA.
>>>> This is important because although Dom0 is 1:1 mapped, DomUs are not. On
>>>> systems without an IOMMU swiotlb-xen enables PV drivers to work as long
>>>> as the backends are in Dom0. Thanks to swiotlb-xen, the DMA operation
>>>> ends up using the MFN, rather than the GFN.
>>>> On systems with an IOMMU, this is not necessary: when a foreign page is
>>>> mapped in Dom0, it is added to the Dom0 p2m. A new GFN->MFN translation
>>>> is established for both MMU and SMMU. Dom0 could safely use the GFN
>>>> address (instead of the MFN) for DMA operations and they would work. It
>>>> would be more efficient than using swiotlb-xen.
>>>> If you recall my presentation from Xen Summit 2020, Xilinx is working on
>>>> cache coloring. With cache coloring, no domain is 1:1 mapped, not even
>>>> Dom0. In a scenario where Dom0 is not 1:1 mapped, swiotlb-xen does not
>>>> work as intended.
>>>> The suggested solution for both these issues is to add a new feature
>>>> flag "XENFEAT_ARM_dom0_iommu" that tells Dom0 that it is safe not to use
>>>> swiotlb-xen because IOMMU translations are available for Dom0. If
>>>> XENFEAT_ARM_dom0_iommu is set, Linux should skip the swiotlb-xen
>>>> initialization. I have tested this scheme with and without cache
>>>> coloring (hence with and without 1:1 mapping of Dom0) on ZCU102 and it
>>>> works as expected: DMA operations succeed.
>>>> What about systems where an IOMMU is present but not all devices are
>>>> protected?
>>>> There is no way for Xen to know which devices are protected and which
>>>> ones are not: devices that do not have the "iommus" property could or
>>>> could not be DMA masters.
>>>> Perhaps Xen could populate a whitelist of devices protected by the IOMMU
>>>> based on the "iommus" property. It would require some added complexity
>>>> in Xen and especially in the swiotlb-xen driver in Linux to use it,
>>>> which is not ideal.
>>> 
>>> You are trading a bit more complexity in Xen and Linux with a user who may not be able to use the hypervisor on his/her platform without a quirk in Xen (see more below).
>>> 
>>>> However, this approach would not work for cache
>>>> coloring where dom0 is not 1:1 mapped so the swiotlb-xen should not be
>>>> used either way
>>> 
>>> Not all Dom0 Linux kernels will be able to work with cache colouring. So you will need a way for the kernel to say "Hey, I can avoid using swiotlb".
>> I fully agree and from my current understanding the condition is “having an iommu”.
>>> 
>>>> For these reasons, I would like to propose a single flag
>>>> XENFEAT_ARM_dom0_iommu which says that the IOMMU can be relied upon for
>>>> DMA translations. In situations where a DMA master is not SMMU
>>>> protected, XENFEAT_ARM_dom0_iommu should not be set. For example, on a
>>>> platform where an IOMMU is present and protects most DMA masters but it
>>>> is leaving out the MMC controller, then XENFEAT_ARM_dom0_iommu should
>>>> not be set (because PV block is not going to work without swiotlb-xen.)
>>>> This also means that cache coloring won't be usable on such a system (at
>>>> least not usable with the MMC controller so the system integrator should
>>>> pay special care to setup the system).
>>>> It is worth noting that if we wanted to extend the interface to add a
>>>> list of protected devices in the future, it would still be possible. It
>>>> would be compatible with XENFEAT_ARM_dom0_iommu.
>>> 
>>> I imagine by compatible, you mean XENFEAT_ARM_dom0_iommu would be cleared and instead the device-tree list would be used. Is that correct?
>> What do you mean by device tree list here ?
> 
> Sorry I meant "device list". I was referring to Stefano's suggestion to describe the list of devices protected in the device-tree.

Ok you mean adding to the device tree some kind of device list for which swiotlb should be used (or maybe the opposite list in fact).

> 
>>> 
>>>> How to set XENFEAT_ARM_dom0_iommu?
>>>> We could set XENFEAT_ARM_dom0_iommu automatically when
>>>> is_iommu_enabled(d) for Dom0. We could also have a platform specific
>>>> (xen/arch/arm/platforms/) override so that a specific platform can
>>>> disable XENFEAT_ARM_dom0_iommu. For debugging purposes and advanced
>>>> users, it would also be useful to be able to override it via a Xen
>>>> command line parameter.
>>> Platform quirks should be limited to a small set of platforms.
>>> 
>>> In this case, this would not be only per-platform but also per-firmware table as a developer can decide to remove/add IOMMU nodes in the DT at any time.
>>> 
>>> In addition to that, it means we are introducing a regression for those users as Xen 4.14 would have worked on their platform but not anymore. They would need to go through all the nodes and find out which ones are not protected.
>> I am not sure I understand your point here as we cannot detect if a device is protected or not by an IOMMU as we do not know which device requires one.
> 
> That's correct...
> 
>> Could you explain what use case working before would not work here ?
> 
> From Stefano's e-mail, Xen would expose XENFEAT_ARM_dom0_iommu if all the devices are protected by the IOMMU.
> 
> This implies that Xen is aware whether every DMA-capable device is protected. As you rightfully pointed out this cannot work.

But this is also an issue right now.

> 
>>> 
>>> This is a bit of a daunting task and we are going to end up having a lot of per-platform quirks in Xen.
>> From my understanding the quirks here would be in Linux as it would have to decide whether to use swiotlb or not.
> 
> This is not how I understood Stefano's e-mail. But even if it is happening in Linux, then we need a way to tell Linux which devices have been protected by Xen.

So basically leave some info in the device nodes to let Linux know that they are protected by an IOMMU, which would mean replacing the iommu link node with something else.

> 
>> What quirk do you imagine we could implement in Xen ?
> 
> Me? None. That's Stefano's idea and I don't think it can work.

Definitely there is a problem to solve here, maybe the how requires more discussion :-)

I see the same kind of problem coming once we have some guests using direct-map and some others not.
In the end there is some kind of matrix with swiotlb depending on direct-map and IOMMU presence, with some
very nasty combinations if we try to add the use case of some devices only protected by an IOMMU.

Cheers
Bertrand 

> 
> Cheers,
> 
> -- 
> Julien Grall
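[Editor's note: on the Linux side, the effect of the proposed flag reduces to a one-bit decision at boot: if XENFEAT_ARM_dom0_iommu is advertised, skip swiotlb-xen initialization. A minimal sketch of that decision, assuming the proposed (not yet merged) flag; xen_feature() is stubbed out here, and the bit number is made up — the real implementation would read the feature bitmap returned by HYPERVISOR_xen_version(XENVER_get_features).]

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical feature bit from the proposal; the value is made up. */
#define XENFEAT_ARM_dom0_iommu 15

/* Stand-in for Linux's xen_feature(); the real one reads a bitmap the
 * hypervisor fills in at boot. */
static bool feature_map[32];
static bool xen_feature(int flag) { return feature_map[flag]; }

/* Decide whether Dom0 needs swiotlb-xen for DMA address translation. */
static bool need_swiotlb_xen(void)
{
    /* If Xen guarantees IOMMU translation for all Dom0 DMA masters,
     * Dom0 can hand GFNs straight to devices and skip the bounce buffer. */
    return !xen_feature(XENFEAT_ARM_dom0_iommu);
}
```

With the flag clear (the default), need_swiotlb_xen() returns true and initialization proceeds as today; a set flag short-circuits it.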


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 16:48:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 16:48:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86367.162138 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCPzt-00045v-KS; Wed, 17 Feb 2021 16:48:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86367.162138; Wed, 17 Feb 2021 16:48:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCPzt-00045o-H5; Wed, 17 Feb 2021 16:48:17 +0000
Received: by outflank-mailman (input) for mailman id 86367;
 Wed, 17 Feb 2021 16:48:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lCPzs-00045I-Ea
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 16:48:16 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lCPzp-0007YB-Ta; Wed, 17 Feb 2021 16:48:13 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lCPzp-0000CD-Mk; Wed, 17 Feb 2021 16:48:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=KLnzYZP1FGPH1cvVhn/Z/3hV4pR+fGuD5Tn7xqVIhEA=; b=T27ee3qb/OpaNGh+U2AuzM7bCJ
	o5GIkYKcW9fLQe/Dfjn1//0n/ELR2qLkxfEU9/fNtL5/QCb+p49HpM4+u4FBU+bnpUWZVO79xmllW
	5EDY2KYAiheXIJejsmHtzQhBlHFUKcEMbWT4ohw7qhqDdZDnMPTBvID9UjGoCLtn1GdI=;
Subject: Re: [for-4.15][PATCH v3 1/3] xen/iommu: x86: Clear the root
 page-table before freeing the page-tables
To: Jan Beulich <jbeulich@suse.com>
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>, Paul Durrant <paul@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210217142458.3769-1-julien@xen.org>
 <20210217142458.3769-2-julien@xen.org>
 <d20d31ed-4392-a7fb-66ee-575eb254ae84@suse.com>
 <87548d76-36fa-f587-0137-6806280617ad@xen.org>
 <334ee115-c710-88c7-aa27-975bdb6c6912@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <cfa7bf49-bb45-1dee-b35e-271ce73c8d70@xen.org>
Date: Wed, 17 Feb 2021 16:48:11 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <334ee115-c710-88c7-aa27-975bdb6c6912@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 17/02/2021 15:17, Jan Beulich wrote:
> On 17.02.2021 16:00, Julien Grall wrote:
>> Hi Jan,
>>
>> On 17/02/2021 14:54, Jan Beulich wrote:
>>> On 17.02.2021 15:24, Julien Grall wrote:
>>>> --- a/xen/drivers/passthrough/x86/iommu.c
>>>> +++ b/xen/drivers/passthrough/x86/iommu.c
>>>> @@ -267,6 +267,15 @@ int iommu_free_pgtables(struct domain *d)
>>>>        struct page_info *pg;
>>>>        unsigned int done = 0;
>>>>    
>>>> +    if ( !is_iommu_enabled(d) )
>>>> +        return 0;
>>>> +
>>>> +    /*
>>>> +     * Pages will be moved to the free list below. So we want to
>>>> +     * clear the root page-table to avoid any potential use after-free.
>>>> +     */
>>>> +    hd->platform_ops->clear_root_pgtable(d);
>>>
>>> Taking amd_iommu_alloc_root() as example, is this really correct
>>> prior to what is now patch 2?
>>
>> Yes, there are no more use-after-free...
> 
> And this is because of ...? The necessary lock isn't being held
> here, so on another CPU allocation of a new root and then of new
> page tables could happen before you make enough progress here,
> and hence it looks to me as if there might then still be pages
> which get freed while present in the page tables (and hence
> accessible by devices).

Ah yes. I forgot that patch #3 is not first anymore. I can move 
patch #3 back to first, although I know you dislike the approach taken 
there...

Cheers,

-- 
Julien Grall
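[Editor's note: Jan's objection is about ordering and concurrency: the page-table pages must stop being reachable from the root table before they hit the free list, and without the lock another CPU can re-create a root in between. A toy model of the ordering alone — illustrative names only, not Xen's actual structures, and the locking race is deliberately left out:]

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-ins for a domain's IOMMU state. */
struct toy_domain {
    void *root_pgtable;      /* what the device walks                    */
    int   pages_on_freelist; /* stands in for the freed page-table pages */
};

static void clear_root_pgtable(struct toy_domain *d)
{
    d->root_pgtable = NULL;  /* devices can no longer reach the tables */
}

static int free_pgtables(struct toy_domain *d)
{
    /* The order matters: detach the root first, then release the pages,
     * so a device walk can never land on a freed (and reused) page. */
    clear_root_pgtable(d);
    d->pages_on_freelist = 1;
    return 0;
}
```

The remaining question in the thread is exactly what this sketch omits: which lock guarantees no new root (and no new tables) appear while the freeing is in progress.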


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 16:57:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 16:57:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86373.162150 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCQ8o-0005CX-J1; Wed, 17 Feb 2021 16:57:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86373.162150; Wed, 17 Feb 2021 16:57:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCQ8o-0005CQ-Fu; Wed, 17 Feb 2021 16:57:30 +0000
Received: by outflank-mailman (input) for mailman id 86373;
 Wed, 17 Feb 2021 16:57:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lCQ8n-0005CL-5t
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 16:57:29 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lCQ8m-0007hm-Q2; Wed, 17 Feb 2021 16:57:28 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lCQ8m-0004l3-Ff; Wed, 17 Feb 2021 16:57:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=exN3TKpeawJxnUGL6V0tudRamlUCZ1+0UpTddTxd+fk=; b=UkpurHd0SGrZJv8zyNDFhASS78
	r/JKFJLUshexTZelubiAxMV/etKc2GvwU2mile2Rw2EC5NuiygnQgmao2hYqJXb/miaL+fDc4/7xv
	FzVLRkz3ioH/AVDqAbvho9+BrmHxfM6cf3srqGJUcVmaExLJgXTSiUxKzAdgPHqmcMDs=;
Subject: Re: [RFC] xen/arm: introduce XENFEAT_ARM_dom0_iommu
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "Volodymyr_Babchuk@epam.com" <Volodymyr_Babchuk@epam.com>,
 Rahul Singh <Rahul.Singh@arm.com>
References: <alpine.DEB.2.21.2102161333090.3234@sstabellini-ThinkPad-T480s>
 <2d22d5b8-6cda-f27b-e938-4806b65794a5@xen.org>
 <505CE19F-B324-4DE2-8EC4-D885780FD17A@arm.com>
 <463b0594-a78b-3f9b-e816-2cbd2a1d16dd@xen.org>
 <49FAB3DC-AA0B-4E71-B435-315EF99A76EA@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <3ac0e70e-ca6c-9fc6-1743-510c548782cb@xen.org>
Date: Wed, 17 Feb 2021 16:57:26 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <49FAB3DC-AA0B-4E71-B435-315EF99A76EA@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit



On 17/02/2021 16:47, Bertrand Marquis wrote:
> Hi Julien,
> 
>> On 17 Feb 2021, at 16:41, Julien Grall <julien@xen.org> wrote:
>>
>>
>>
>> On 17/02/2021 15:37, Bertrand Marquis wrote:
>>> Hi Julien,
>>
>> Hi Bertrand,
>>
>>>> On 17 Feb 2021, at 08:50, Julien Grall <julien@xen.org> wrote:
>>>>
>>>>
>>>>
>>>> On 17/02/2021 02:00, Stefano Stabellini wrote:
>>>>> Hi all,
>>>>> Today Linux uses the swiotlb-xen driver (drivers/xen/swiotlb-xen.c) to
>>>>> translate addresses for DMA operations in Dom0. Specifically,
>>>>> swiotlb-xen is used to translate the address of a foreign page (a page
>>>>> belonging to a domU) mapped into Dom0 before using it for DMA.
>>>>> This is important because although Dom0 is 1:1 mapped, DomUs are not. On
>>>>> systems without an IOMMU swiotlb-xen enables PV drivers to work as long
>>>>> as the backends are in Dom0. Thanks to swiotlb-xen, the DMA operation
>>>>> ends up using the MFN, rather than the GFN.
>>>>> On systems with an IOMMU, this is not necessary: when a foreign page is
>>>>> mapped in Dom0, it is added to the Dom0 p2m. A new GFN->MFN translation
>>>>> is established for both MMU and SMMU. Dom0 could safely use the GFN
>>>>> address (instead of the MFN) for DMA operations and they would work. It
>>>>> would be more efficient than using swiotlb-xen.
>>>>> If you recall my presentation from Xen Summit 2020, Xilinx is working on
>>>>> cache coloring. With cache coloring, no domain is 1:1 mapped, not even
>>>>> Dom0. In a scenario where Dom0 is not 1:1 mapped, swiotlb-xen does not
>>>>> work as intended.
>>>>> The suggested solution for both these issues is to add a new feature
>>>>> flag "XENFEAT_ARM_dom0_iommu" that tells Dom0 that it is safe not to use
>>>>> swiotlb-xen because IOMMU translations are available for Dom0. If
>>>>> XENFEAT_ARM_dom0_iommu is set, Linux should skip the swiotlb-xen
>>>>> initialization. I have tested this scheme with and without cache
>>>>> coloring (hence with and without 1:1 mapping of Dom0) on ZCU102 and it
>>>>> works as expected: DMA operations succeed.
>>>>> What about systems where an IOMMU is present but not all devices are
>>>>> protected?
>>>>> There is no way for Xen to know which devices are protected and which
>>>>> ones are not: devices that do not have the "iommus" property could or
>>>>> could not be DMA masters.
>>>>> Perhaps Xen could populate a whitelist of devices protected by the IOMMU
>>>>> based on the "iommus" property. It would require some added complexity
>>>>> in Xen and especially in the swiotlb-xen driver in Linux to use it,
>>>>> which is not ideal.
>>>>
>>>> You are trading a bit more complexity in Xen and Linux with a user who may not be able to use the hypervisor on his/her platform without a quirk in Xen (see more below).
>>>>
>>>>> However, this approach would not work for cache
>>>>> coloring where dom0 is not 1:1 mapped so the swiotlb-xen should not be
>>>>> used either way
>>>>
>>>> Not all Dom0 Linux kernels will be able to work with cache colouring. So you will need a way for the kernel to say "Hey, I can avoid using swiotlb".
>>> I fully agree and from my current understanding the condition is “having an iommu”.
>>>>
>>>>> For these reasons, I would like to propose a single flag
>>>>> XENFEAT_ARM_dom0_iommu which says that the IOMMU can be relied upon for
>>>>> DMA translations. In situations where a DMA master is not SMMU
>>>>> protected, XENFEAT_ARM_dom0_iommu should not be set. For example, on a
>>>>> platform where an IOMMU is present and protects most DMA masters but it
>>>>> is leaving out the MMC controller, then XENFEAT_ARM_dom0_iommu should
>>>>> not be set (because PV block is not going to work without swiotlb-xen.)
>>>>> This also means that cache coloring won't be usable on such a system (at
>>>>> least not usable with the MMC controller so the system integrator should
>>>>> pay special care to setup the system).
>>>>> It is worth noting that if we wanted to extend the interface to add a
>>>>> list of protected devices in the future, it would still be possible. It
>>>>> would be compatible with XENFEAT_ARM_dom0_iommu.
>>>>
>>>> I imagine by compatible, you mean XENFEAT_ARM_dom0_iommu would be cleared and instead the device-tree list would be used. Is that correct?
>>> What do you mean by device tree list here ?
>>
>> Sorry I meant "device list". I was referring to Stefano's suggestion to describe the list of devices protected in the device-tree.
> 
> Ok you mean adding to the device tree some kind of device list for which swiotlb should be used (or maybe the opposite list in fact).

I think the list of protected devices is better because we could create 
a Xen IOMMU node.

> 
>>
>>>>
>>>>> How to set XENFEAT_ARM_dom0_iommu?
>>>>> We could set XENFEAT_ARM_dom0_iommu automatically when
>>>>> is_iommu_enabled(d) for Dom0. We could also have a platform specific
>>>>> (xen/arch/arm/platforms/) override so that a specific platform can
>>>>> disable XENFEAT_ARM_dom0_iommu. For debugging purposes and advanced
>>>>> users, it would also be useful to be able to override it via a Xen
>>>>> command line parameter.
>>>> Platform quirks should be limited to a small set of platforms.
>>>>
>>>> In this case, this would not be only per-platform but also per-firmware table as a developer can decide to remove/add IOMMU nodes in the DT at any time.
>>>>
>>>> In addition to that, it means we are introducing a regression for those users as Xen 4.14 would have worked on their platform but not anymore. They would need to go through all the nodes and find out which ones are not protected.
>>> I am not sure I understand your point here as we cannot detect if a device is protected or not by an IOMMU as we do not know which device requires one.
>>
>> That's correct...
>>
>>> Could you explain what use case working before would not work here ?
>>
>>  From Stefano's e-mail, Xen would expose XENFEAT_ARM_dom0_iommu if all the devices are protected by the IOMMU.
>>
>> This implies that Xen is aware whether every DMA-capable device is protected. As you rightfully pointed out this cannot work.
> 
> But this is also an issue right now.

Aside from the issue reported by Rahul, Linux will always use the host 
physical address, which also has a direct mapping in the P2M. This 
should work.

Would you mind explaining why it can't work today?

>>
>>>>
>>>> This is a bit of a daunting task and we are going to end up having a lot of per-platform quirks in Xen.
>>>  From my understanding the quirks here would be in Linux as it would have to decide whether to use swiotlb or not.
>>
>> This is not how I understood Stefano's e-mail. But even if it is happening in Linux, then we need a way to tell Linux which devices have been protected by Xen.
> 
> So basically leave some info in the device nodes to let Linux know that they are protected by an IOMMU, which would mean replacing the iommu link node with something else.

Correct.

> 
>>
>>> What quirk do you imagine we could implement in Xen ?
>>
>> Me? None. That's Stefano's idea and I don't think it can work.
> 
> Definitely there is a problem to solve here, maybe the how requires more discussion :-)
> 
> I see the same kind of problem coming once we have some guests using direct-map and some others not.
> In the end there is some kind of matrix with swiotlb depending on direct-map and IOMMU presence, with some
> very nasty combinations if we try to add the use case of some devices only protected by an IOMMU.

Once you tell Linux which devices are protected, it is easy to handle 
because it is possible to set per-device DMA ops.

Unprotected devices would have to use the swiotlb DMA ops.

What's more complicated is to fully disable the IOMMU mapping added by 
Xen. Linux would need a way (possibly via the VM_assist hypercall) to 
indicate the mapping is not necessary.

Cheers,

-- 
Julien Grall
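[Editor's note: the per-device DMA-ops remark above is the crux: once Linux knows which devices Xen protects, dispatch can differ per device. A simplified sketch of that idea — the structures are stand-ins, not the kernel's struct dma_map_ops, and the swiotlb offset is a fake placeholder for the bounce-buffer translation:]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct dma_ops {
    uint64_t (*map)(uint64_t addr);   /* returns the bus address to program */
};

/* Protected device: the IOMMU translates, so the GFN is used as-is. */
static uint64_t direct_map(uint64_t addr)  { return addr; }
/* Unprotected device: pretend-bounce through swiotlb (offset is fake). */
static uint64_t swiotlb_map(uint64_t addr) { return addr + 0x1000; }

static const struct dma_ops direct_ops  = { direct_map };
static const struct dma_ops swiotlb_ops = { swiotlb_map };

struct device {
    bool iommu_protected;          /* would come from Xen, e.g. via DT */
    const struct dma_ops *ops;
};

/* Pick DMA ops per device: protected devices bypass the bounce buffer. */
static void setup_dma_ops(struct device *dev)
{
    dev->ops = dev->iommu_protected ? &direct_ops : &swiotlb_ops;
}
```

The design point is that the choice is made once at device setup, so the hot DMA path pays only an indirect call, exactly as the kernel's real per-device dma_map_ops dispatch does.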


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 17:03:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 17:03:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86377.162162 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCQEO-0006BC-8F; Wed, 17 Feb 2021 17:03:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86377.162162; Wed, 17 Feb 2021 17:03:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCQEO-0006B5-58; Wed, 17 Feb 2021 17:03:16 +0000
Received: by outflank-mailman (input) for mailman id 86377;
 Wed, 17 Feb 2021 17:03:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XFXw=HT=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lCQEM-0006B0-RC
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 17:03:14 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ff620281-393a-4fce-a474-807e649b114f;
 Wed, 17 Feb 2021 17:03:13 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D249CB7AE;
 Wed, 17 Feb 2021 17:03:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ff620281-393a-4fce-a474-807e649b114f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613581392; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ofAHBmxiXPw1XkhhaHsSb0iNu3YjM+dKi/K/AamBjAo=;
	b=fa6pjqKtXTEqcyIiun3ywb4rGaRjbEvMOHurmwb0Uuzu7zD/3w63IAvVXFDrGWkpgomLcJ
	SJdjaHg6S2BUTX/yjQnX3ri/OE0ia+espKFYJKtPVAd8mip4JkGMmNehWF/fw/keIEc+ZR
	y9YgRDH/o8j2Sauq6uJBsaHJY5cXb7s=
Subject: Re: [RFC] xen/arm: introduce XENFEAT_ARM_dom0_iommu
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: Julien Grall <julien@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "Volodymyr_Babchuk@epam.com" <Volodymyr_Babchuk@epam.com>,
 Rahul Singh <Rahul.Singh@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <alpine.DEB.2.21.2102161333090.3234@sstabellini-ThinkPad-T480s>
 <538F9995-6D8D-4B02-A9B6-7C5F26F95657@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d416b29a-ac05-3bc7-2c60-0b0f3e8febba@suse.com>
Date: Wed, 17 Feb 2021 18:03:12 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <538F9995-6D8D-4B02-A9B6-7C5F26F95657@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 17.02.2021 16:34, Bertrand Marquis wrote:
>> On 17 Feb 2021, at 02:00, Stefano Stabellini <sstabellini@kernel.org> wrote:
>> Today Linux uses the swiotlb-xen driver (drivers/xen/swiotlb-xen.c) to
>> translate addresses for DMA operations in Dom0. Specifically,
>> swiotlb-xen is used to translate the address of a foreign page (a page
>> belonging to a domU) mapped into Dom0 before using it for DMA.
>>
>> This is important because although Dom0 is 1:1 mapped, DomUs are not. On
>> systems without an IOMMU swiotlb-xen enables PV drivers to work as long
>> as the backends are in Dom0. Thanks to swiotlb-xen, the DMA operation
>> ends up using the MFN, rather than the GFN.
>>
>>
>> On systems with an IOMMU, this is not necessary: when a foreign page is
>> mapped in Dom0, it is added to the Dom0 p2m. A new GFN->MFN translation
>> is established for both MMU and SMMU. Dom0 could safely use the GFN
>> address (instead of the MFN) for DMA operations and they would work. It
>> would be more efficient than using swiotlb-xen.
>>
>> If you recall my presentation from Xen Summit 2020, Xilinx is working on
>> cache coloring. With cache coloring, no domain is 1:1 mapped, not even
>> Dom0. In a scenario where Dom0 is not 1:1 mapped, swiotlb-xen does not
>> work as intended.
>>
>>
>> The suggested solution for both these issues is to add a new feature
>> flag "XENFEAT_ARM_dom0_iommu" that tells Dom0 that it is safe not to use
>> swiotlb-xen because IOMMU translations are available for Dom0. If
>> XENFEAT_ARM_dom0_iommu is set, Linux should skip the swiotlb-xen
>> initialization. I have tested this scheme with and without cache
>> coloring (hence with and without 1:1 mapping of Dom0) on ZCU102 and it
>> works as expected: DMA operations succeed.
> 
> Wouldn't it be easier to name the flag XENFEAT_ARM_swiotlb_needed?

Except that "swiotlb" is Linux terminology, which I don't think a
Xen feature should be named after. Names should be generic, except
maybe when they're really targeting exactly one kind of guest
(which imo would better never be the case).
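Whatever name the flag ends up with, the guest-side check being proposed amounts to a feature-bitmap test gating swiotlb-xen setup. The sketch below is a userspace model, not Linux code: in the kernel the bitmap is filled from the XENVER_get_features hypercall and queried via xen_feature(), and the flag number used here is purely illustrative since XENFEAT_ARM_dom0_iommu has no assigned value yet.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative flag number only -- the real value would be assigned
 * in Xen's public/features.h. */
#define XENFEAT_ARM_dom0_iommu 17

/* In Linux this bitmap would be populated once at boot from the
 * XENVER_get_features hypercall. */
static uint32_t xen_features_bitmap;

static bool xen_feature(int nr)
{
    return (xen_features_bitmap >> nr) & 1;
}

/* Dom0 skips swiotlb-xen initialisation when the flag is advertised,
 * because the IOMMU already translates its DMA addresses. */
bool need_swiotlb_xen(void)
{
    return !xen_feature(XENFEAT_ARM_dom0_iommu);
}

void advertise_feature(int nr)
{
    xen_features_bitmap |= 1u << nr;
}
```

The default (flag absent) preserves today's behaviour, so old hypervisors keep working with new kernels.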

Jan


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 17:51:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 17:51:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86390.162184 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCQyW-0002R9-1c; Wed, 17 Feb 2021 17:50:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86390.162184; Wed, 17 Feb 2021 17:50:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCQyV-0002R2-So; Wed, 17 Feb 2021 17:50:55 +0000
Received: by outflank-mailman (input) for mailman id 86390;
 Wed, 17 Feb 2021 17:50:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lCQyU-0002Qx-Ow
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 17:50:54 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lCQyU-00006z-LQ
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 17:50:54 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lCQyU-0000ku-Ja
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 17:50:54 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lCQyR-0003yD-8i; Wed, 17 Feb 2021 17:50:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=eeUEqWk2bSDD/sRRFkEI0O2xe4p+VmENySKL0yccvbk=; b=IIsohx6sG67H+6I1JX2ipDeeMF
	ZFwA0OwiT0ZaZKsMGZxrPPDOY+YHrtxlstfmjLBaFI1DumudSmfutrltXemO9aIC9uAvWbdLbneVI
	zr/ZZQq20PRATNcioapQcntd4+LNqIym8QpfVpylSNhY6c5SDbZgnTMibI5hwz0LABxw=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24621.22394.619514.991835@mariner.uk.xensource.com>
Date: Wed, 17 Feb 2021 17:50:50 +0000
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
    Wei Liu <wl@xen.org>,
    Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH 3/3] tools/libxl: Rework invariants in libxl__domain_get_device_model_uid()
In-Reply-To: <20210217164251.11005-4-andrew.cooper3@citrix.com>
References: <20210217164251.11005-1-andrew.cooper3@citrix.com>
	<20210217164251.11005-4-andrew.cooper3@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Andrew Cooper writes ("[PATCH 3/3] tools/libxl: Rework invariants in libxl__domain_get_device_model_uid()"):
> Various versions of gcc, when compiling with -Og, complain:
> 
>   libxl_dm.c: In function 'libxl__domain_get_device_model_uid':
>   libxl_dm.c:256:12: error: 'kill_by_uid' may be used uninitialized in this function [-Werror=maybe-uninitialized]
>     256 |         if (kill_by_uid)
>         |            ^

Thanks for working on this.  I have reviewed your changes and I see
where you are coming from.  The situation is not very nice, mostly
because we don't have proper sum types in C.

I'm sorry to say that with my release manager hat on I think it is too
late for this kind of reorganisation for 4.15, especially just to work
around an overzealous compiler warning.

I think we can fix the compiler warning simply by setting the
`kill_by_uid` variable on more of the exit paths.  This approach was
already taken in this code for one of the paths.

I would prefer that approach at this stage of the release.
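The pattern Ian suggests can be sketched as follows: assign the flag on every path out of the function, so gcc's -Wmaybe-uninitialized can prove it is set before use. The function name, arguments and uid value here are illustrative, not libxl's actual code.

```c
#include <stdbool.h>

/* Sketch: every exit path assigns *kill_by_uid, so the caller's
 * `if (kill_by_uid)` check cannot read an uninitialized value and
 * gcc stays quiet at -Og. */
int get_device_model_uid(int dm_restrict, bool *kill_by_uid, int *uid_out)
{
    if (!dm_restrict) {
        /* Not restricted: no dedicated uid, nothing to kill. */
        *kill_by_uid = false;
        *uid_out = 0;
        return 0;
    }

    /* Restricted: pretend a dedicated uid was resolved. */
    *kill_by_uid = true;
    *uid_out = 65534;
    return 0;
}
```

Compared with restructuring the invariants, this keeps the diff minimal, which matters this close to a release.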

Sorry,
Ian.


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 18:01:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 18:01:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86394.162200 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCR8m-0003ak-2c; Wed, 17 Feb 2021 18:01:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86394.162200; Wed, 17 Feb 2021 18:01:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCR8l-0003ad-Vw; Wed, 17 Feb 2021 18:01:31 +0000
Received: by outflank-mailman (input) for mailman id 86394;
 Wed, 17 Feb 2021 18:01:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCR8k-0003aV-PD; Wed, 17 Feb 2021 18:01:30 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCR8k-0000Oo-Jt; Wed, 17 Feb 2021 18:01:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCR8k-0000gA-Aj; Wed, 17 Feb 2021 18:01:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lCR8k-0000mG-7m; Wed, 17 Feb 2021 18:01:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=6HucEuaiH1+hDGIgkENPGJ6sjxYlHoJ5CI2QSqUf/Zs=; b=1WXoRsW7f0d50/kget4jlfDBh2
	aCHzWL71vosgzq1ZfILoqeqJtVIN8283+nZviuXy4yrjteOf3lMOTYbjvLIcg4qOCJEi5Iq8RA8IH
	Bt4GhhpG7/qlC1BH1YUTHoEF5IL0G7zBK/vDQHGqmj3mqhgB60W8ZWOvA8gFZI0ipLnQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159419-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.13-testing test] 159419: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.13-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-4.13-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=ab995b6af9ab723b0b52e5ea0e342b612f1a7b89
X-Osstest-Versions-That:
    xen=e4161938b315f3b9c6a13ade30d16c11504a2d16
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 17 Feb 2021 18:01:30 +0000

flight 159419 xen-4.13-testing real [real]
flight 159446 xen-4.13-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/159419/
http://logs.test-lab.xenproject.org/osstest/logs/159446/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 159446-retest

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 158557
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 158557
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 158557
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 158557
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 158557
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 158557
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 158557
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 158557
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 158557
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 158557
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 158557
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 158557
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  ab995b6af9ab723b0b52e5ea0e342b612f1a7b89
baseline version:
 xen                  e4161938b315f3b9c6a13ade30d16c11504a2d16

Last test of basis   158557  2021-01-21 15:37:26 Z   27 days
Testing same since   159419  2021-02-16 15:06:29 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <jgrall@amazon.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   e4161938b3..ab995b6af9  ab995b6af9ab723b0b52e5ea0e342b612f1a7b89 -> stable-4.13


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 18:13:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 18:13:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86402.162215 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCRK7-0004da-AI; Wed, 17 Feb 2021 18:13:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86402.162215; Wed, 17 Feb 2021 18:13:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCRK7-0004dT-7D; Wed, 17 Feb 2021 18:13:15 +0000
Received: by outflank-mailman (input) for mailman id 86402;
 Wed, 17 Feb 2021 18:13:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=n2sW=HT=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lCRK5-0004dO-PN
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 18:13:14 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 58ae33e3-e9c6-42f9-bc40-f7d3dfc8ecb5;
 Wed, 17 Feb 2021 18:13:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 58ae33e3-e9c6-42f9-bc40-f7d3dfc8ecb5
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613585592;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=QRpiycKoHpqbeXMkjOnMDaBDtQtNB995D+xygeHxhYs=;
  b=YhwsXY2qsCrMMORjsGQdZ+9NLPXG2C4d6iFHe58+InhH8k2DiNBNjd1M
   Dk5UZsVLhheVULNcpb78zSIMqudyoB2R4wd2J8V4QD3znEBckcSrOhEST
   Nx5LsFwZH7TNoY8qeVw244b43gv9PAWmMwE9UDzGWhU6xV3r1x7V/XVUk
   4=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 37638885
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,185,1610427600"; 
   d="scan'208";a="37638885"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OdURUj2tRSO5CW7tpKApyC2xU2EhRTYx7540VrWQJePGvvRnIttxgxZ9q7+FrhabtU+cagBtZwatOiFzEh6CjoBbkTr6uhqlu1JFkKLqJPZFHRIYnkvAz+A3qriya10F/Wk6aDbi5lgn94vB7RSVGPlkDgwU+NdzBWJLyjc7hZTL/dHQmIoJicCMU9Qvu+/1VC62aqtbrzbFavWFjbN2yUKQ8gbFN5L/Eo0QaGF0dqKT1HlMdzviAsaDcmqHxObw+mjThS+ewA0j37tUwJhUUA7FndLM6c3N7J6cdogBBZtp8OXR5WUpnRNIRuZYneCnCv9K2GhL03Qrl475iDN9Mg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=efY9GnvQWruZXPsXoFCvJRoWo8O/XI2YOmFSN4SnK+w=;
 b=aZEwj+dBqghRsBi/oMc9ZIGnkNunblYurJeCqo+qdeRznzdHsi2hCBsSm6PibxH0BH3ltkzW7dzbaoFJQVl5TI/9UXM63kpu8E7AoVNAI7DTHctb5qNprVm9ZubMpprMB/563nMTDL19F+ryuu+Z/ZRq8g3r69zIwv6gi0bH1i9r2EB3s3vDQ1qNsh7sixvWFk+03G5PyJJJ0WU74ScIh+v7lnuYUWusEMPKA8QSwk9vdRLhnbDM7agB00FgFWqfcpXwTxObeYG+D/b8JCtXgewTtiHn/5LiCL6wTWhqkx66hafgk4sMCeVIf7wgsEug+ZC9gc4aWTvhnwTks6jmIw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=efY9GnvQWruZXPsXoFCvJRoWo8O/XI2YOmFSN4SnK+w=;
 b=v/25i0l3rgwd57Pi0UdJW9mOUipdSBDpC5qZzg+rmIFbC4VZYpJphfW4CIRgLwmfkCrvaC7yxJUwPkakTB3KVejF5ROvFO7MhqBi/O9JhY5QCR41ElZ4sdWSrxL4MQ869ivRn1E8GbsRYcMZGOUNuauzJh/3bgVOXb/jO7Ml5R8=
Subject: Re: [PATCH 3/3] tools/libxl: Rework invariants in
 libxl__domain_get_device_model_uid()
To: Ian Jackson <iwj@xenproject.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>, "Anthony
 PERARD" <anthony.perard@citrix.com>
References: <20210217164251.11005-1-andrew.cooper3@citrix.com>
 <20210217164251.11005-4-andrew.cooper3@citrix.com>
 <24621.22394.619514.991835@mariner.uk.xensource.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <c9cc1195-fb5a-edf6-3afa-1b32cfb37c48@citrix.com>
Date: Wed, 17 Feb 2021 18:13:01 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
In-Reply-To: <24621.22394.619514.991835@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0327.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:18c::8) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 39768408-e83c-4ea6-4891-08d8d36fada5
X-MS-TrafficTypeDiagnostic: BYAPR03MB3557:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB3557724D7B1A71CC12695E9BBA869@BYAPR03MB3557.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 39768408-e83c-4ea6-4891-08d8d36fada5
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Feb 2021 18:13:08.5973
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 6r6UgATvwaLoHwy/JZIMRsfgD5PYgbEqTvAYsPA4bXTDfZuZH+3TXqWBgr4SFyYqKWyiC8H9x+wudzThLFnCaw/7S0wn2a15JMK2UWxl6OI=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB3557
X-OriginatorOrg: citrix.com

On 17/02/2021 17:50, Ian Jackson wrote:
> Andrew Cooper writes ("[PATCH 3/3] tools/libxl: Rework invariants in libxl__domain_get_device_model_uid()"):
>> Various versions of gcc, when compiling with -Og, complain:
>>
>>   libxl_dm.c: In function 'libxl__domain_get_device_model_uid':
>>   libxl_dm.c:256:12: error: 'kill_by_uid' may be used uninitialized in this function [-Werror=maybe-uninitialized]
>>     256 |         if (kill_by_uid)
>>         |            ^
> Thanks for working on this.  I have reviewed your changes and I see
> where you are coming from.  The situation is not very nice, mostly
> because we don't have proper sum types in C.
>
> I'm sorry to say that with my release manager hat on I think it is too
> late for this kind of reorganisation for 4.15, especially just to work
> around an overzealous compiler warning.
>
> I think we can fix the compiler warning simply by setting the
> `kill_by_uid` variable on more of the exit paths.  This approach was
> already taken in this code for one of the paths.
>
> I would prefer that approach at this stage of the release.

Well - I have explained why I'm not happy with that approach, but you
are the maintainer and RM.

I will trade you a minimal patch for formal R-b's so the time invested
so far fixing this mess isn't wasted when 4.16 opens.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 18:20:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 18:20:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86411.162230 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCRQt-0005dI-3r; Wed, 17 Feb 2021 18:20:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86411.162230; Wed, 17 Feb 2021 18:20:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCRQt-0005dB-0p; Wed, 17 Feb 2021 18:20:15 +0000
Received: by outflank-mailman (input) for mailman id 86411;
 Wed, 17 Feb 2021 18:20:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VV3x=HT=daemonizer.de=maxi@srs-us1.protection.inumbo.net>)
 id 1lCRQr-0005d6-Av
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 18:20:13 +0000
Received: from mx1.somlen.de (unknown [2a00:1828:a019::100:0])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 924806b7-d942-45d4-8821-89b50adfa9ee;
 Wed, 17 Feb 2021 18:20:11 +0000 (UTC)
Received: by mx1.somlen.de with ESMTPSA id 9866FC36AAE;
 Wed, 17 Feb 2021 19:20:09 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 924806b7-d942-45d4-8821-89b50adfa9ee
From: Maximilian Engelhardt <maxi@daemonizer.de>
To: xen-devel@lists.xenproject.org
Cc: pkg-xen-devel@lists.alioth.debian.org
Subject: Re: [BUG] Linux pvh vm not getting destroyed on shutdown
Date: Wed, 17 Feb 2021 19:19:58 +0100
Message-ID: <2712384.y2I76aXMJt@localhost>
In-Reply-To: <2195346.r5JaYcbZso@localhost>
References: <2195346.r5JaYcbZso@localhost>
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="nextPart2401644.GujMsZU7iO"; micalg="pgp-sha512"; protocol="application/pgp-signature"

--nextPart2401644.GujMsZU7iO
Content-Transfer-Encoding: 7Bit
Content-Type: text/plain; charset="us-ascii"

On Saturday, 13 February 2021 16:36:24 CET Maximilian Engelhardt wrote:
> Here are some things I noticed while trying to debug this issue:
> 
> * It happens on a Debian buster dom0 as well as on a bullseye dom0
> 
> * It seems to only affect pvh vms.
> 
> * shutdown from the pvgrub menu ("c" -> "halt") does work
> 
> * the vm seems to shut down normally; the last lines in the console are:
[...]
> 
> * issuing a reboot instead of a shutdown does work fine.
> 
> * The issue started with Debian kernel 5.8.3+1~exp1 running in the vm,
> Debian kernel 5.7.17-1 does not show the issue.
> 
> * setting vcpus equal to maxvcpus does *not* show the hang.

One thing I realized I forgot to mention in my initial report: this issue is
also present for us on a modern kernel. We tested with Debian kernel 5.10.13-1.
--nextPart2401644.GujMsZU7iO
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part.
Content-Transfer-Encoding: 7Bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCgAdFiEEQ8gZ7vwsPje0uPkIgepkfSQr0hUFAmAtXk4ACgkQgepkfSQr
0hVZnBAAlR1926bhLwZ9/eKezs1tlhnBdCJpxZhabVDyFKshycl+43skylXlftXh
5fmyqrKNUnXXXDUOAUl44+5ezvDVISC2DEb1OTfgXY1JJFz61wku10yCIoe2xU0u
NnW8YbhuLIkW+6RDXxVWe7IP7mCnT1W2MEfAqGC/Ed8P4ETo05VK/nnTAB4Cotjr
4Hq/Ho9xgQEyVhJzlVIX0je8QcCn/PikpQ0u+1fRmLeIg6hWur5qf+54Mbga4+iK
v22dAfKq2ml8FF69UfkjVkQdjJai3L49QERQtXjrQpv4SrvLX9mCC1TcxynfM8E2
hb8plgRKV4C5pBCK0fB0pybsiXQUuUPi/eZDkO+oQkwz2uOEqC+GXIMOv/UicacB
u/2w7mBsKvKZwBilgDAFxTqLYjdedmJ7wTAlplN6HoyH666xhf8pCw3OXagwtCjC
rfahDuSMnWgG+tZ4VL4CGSJHGRrTHoa3XcUqFWdGahOKZfsZWTjWlr8NlB4+VfjN
nvbpRkDnhYrcfoeyBTsRsxEfhzB0rlyCPPl8ZlzUflGfgiA2XL3E+gnyKYihRNcz
bjKqPhNaoXtmyp7QBTDkVaXSCm1xQknXaTEwyR1xDlP5q4IjVmWK03+pX8Iq0Twa
EwYoZQRgefvbZz2pdLzY756TTeykdtkbUbIeD9NoQDCS5ulUykY=
=Xcx9
-----END PGP SIGNATURE-----

--nextPart2401644.GujMsZU7iO--


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 18:55:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 18:55:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86421.162246 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCRyl-000067-S8; Wed, 17 Feb 2021 18:55:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86421.162246; Wed, 17 Feb 2021 18:55:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCRyl-000060-PD; Wed, 17 Feb 2021 18:55:15 +0000
Received: by outflank-mailman (input) for mailman id 86421;
 Wed, 17 Feb 2021 18:55:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCRyk-00005r-DY; Wed, 17 Feb 2021 18:55:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCRyj-0001FS-C2; Wed, 17 Feb 2021 18:55:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCRyj-0005kp-4g; Wed, 17 Feb 2021 18:55:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lCRyj-0003WL-4A; Wed, 17 Feb 2021 18:55:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=kpIpAmSl+sWJx0DrWGIZgir4nq/BfTz99Lr4MirX1aw=; b=NjJxSc4uBCvTgnRNL87CEc/vTG
	OjT+oweo2FkgvqRGQdtkCw1UCt0dam/kQsLZfellJdTE/jyTHXcMGMtX/O+hbVqlkX4rPKoVilTo1
	nKGL4NYBgLvtU22SNjcZfUt/El1Bl2ES3BKrwVQb6hKOEtu4+tCwt5RsXQ7yVz1/N+8o=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159445-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 159445: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=7a4133feaf42000923eb9d84badb6b171625f137
X-Osstest-Versions-That:
    xen=d670ef3401b91d04c58d72cd8ce5579b4fa900d8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 17 Feb 2021 18:55:13 +0000

flight 159445 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159445/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  7a4133feaf42000923eb9d84badb6b171625f137
baseline version:
 xen                  d670ef3401b91d04c58d72cd8ce5579b4fa900d8

Last test of basis   159443  2021-02-17 12:00:28 Z    0 days
Testing same since   159445  2021-02-17 16:00:31 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Julien Grall <jgrall@amazon.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   d670ef3401..7a4133feaf  7a4133feaf42000923eb9d84badb6b171625f137 -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 21:29:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 21:29:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86434.162268 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCUNn-0005oo-5M; Wed, 17 Feb 2021 21:29:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86434.162268; Wed, 17 Feb 2021 21:29:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCUNn-0005oh-1n; Wed, 17 Feb 2021 21:29:15 +0000
Received: by outflank-mailman (input) for mailman id 86434;
 Wed, 17 Feb 2021 21:29:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCUNm-0005oZ-7R; Wed, 17 Feb 2021 21:29:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCUNm-0003o5-1x; Wed, 17 Feb 2021 21:29:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCUNl-0005CM-OV; Wed, 17 Feb 2021 21:29:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lCUNl-0003GD-O1; Wed, 17 Feb 2021 21:29:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=o3BcDtmu5pI0TDgF8ae29wvms1XdbA9i+gS+fbgDDW8=; b=DBiacppZ1A8h1Aym6zG9toSHWQ
	BnbeTYU6CnF6gX2KCDIdVJrlfx5pMlfzBg3jgkN8WAHu/5VaTMpovNMhnKDJVXvuAh1/NEYvPRohG
	YulALoBorUIcwH1bq9OwshF+7TdUJ74+0AApv8lC7GMHeQYcxXZHJ6ZGZ9nwXLG+zkgM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159420-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.14-testing test] 159420: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.14-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=9f357fe3e4593fc1109962b76d4db73d589ebef5
X-Osstest-Versions-That:
    xen=4170218cb96546426664e5c1d00c5a848a26ae9e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 17 Feb 2021 21:29:13 +0000

flight 159420 xen-4.14-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159420/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 158558
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 158558
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 158558
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 158558
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 158558
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 158558
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 158558
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 158558
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 158558
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 158558
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 158558
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  9f357fe3e4593fc1109962b76d4db73d589ebef5
baseline version:
 xen                  4170218cb96546426664e5c1d00c5a848a26ae9e

Last test of basis   158558  2021-01-21 15:37:26 Z   27 days
Testing same since   159420  2021-02-16 15:06:34 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <jgrall@amazon.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/xen.git
   4170218cb9..9f357fe3e4  9f357fe3e4593fc1109962b76d4db73d589ebef5 -> stable-4.14


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 22:03:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 22:03:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86441.162287 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCUuV-00013V-UY; Wed, 17 Feb 2021 22:03:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86441.162287; Wed, 17 Feb 2021 22:03:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCUuV-00013O-QS; Wed, 17 Feb 2021 22:03:03 +0000
Received: by outflank-mailman (input) for mailman id 86441;
 Wed, 17 Feb 2021 22:03:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PVDq=HT=ucw.cz=pavel@srs-us1.protection.inumbo.net>)
 id 1lCUuU-00013J-TE
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 22:03:02 +0000
Received: from jabberwock.ucw.cz (unknown [46.255.230.98])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0b907ee9-65d6-4140-842b-5f7305433625;
 Wed, 17 Feb 2021 22:03:01 +0000 (UTC)
Received: by jabberwock.ucw.cz (Postfix, from userid 1017)
 id 88A821C0B8E; Wed, 17 Feb 2021 23:02:59 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0b907ee9-65d6-4140-842b-5f7305433625
Date: Wed, 17 Feb 2021 23:02:58 +0100
From: Pavel Machek <pavel@ucw.cz>
To: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Cc: linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	dm-devel@redhat.com, linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org, drbd-dev@lists.linbit.com,
	xen-devel@lists.xenproject.org, linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org, target-devel@vger.kernel.org,
	linux-fscrypt@vger.kernel.org, jfs-discussion@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org, ocfs2-devel@oss.oracle.com,
	linux-pm@vger.kernel.org, linux-mm@kvack.org, axboe@kernel.dk,
	philipp.reisner@linbit.com, lars.ellenberg@linbit.com,
	konrad.wilk@oracle.com, roger.pau@citrix.com, minchan@kernel.org,
	ngupta@vflare.org, sergey.senozhatsky.work@gmail.com,
	agk@redhat.com, snitzer@redhat.com, hch@lst.de, sagi@grimberg.me,
	martin.petersen@oracle.com, viro@zeniv.linux.org.uk, tytso@mit.edu,
	jaegeuk@kernel.org, ebiggers@kernel.org, djwong@kernel.org,
	shaggy@kernel.org, konishi.ryusuke@gmail.com, mark@fasheh.com,
	jlbec@evilplan.org, joseph.qi@linux.alibaba.com,
	damien.lemoal@wdc.com, naohiro.aota@wdc.com, jth@kernel.org,
	rjw@rjwysocki.net, len.brown@intel.com, akpm@linux-foundation.org,
	hare@suse.de, gustavoars@kernel.org, tiwai@suse.de,
	alex.shi@linux.alibaba.com, asml.silence@gmail.com,
	ming.lei@redhat.com, tj@kernel.org, osandov@fb.com,
	bvanassche@acm.org, jefflexu@linux.alibaba.com
Subject: Re: [RFC PATCH 29/34] power/swap: use bio_new in hib_submit_io
Message-ID: <20210217220257.GA10791@amd>
References: <20210128071133.60335-1-chaitanya.kulkarni@wdc.com>
 <20210128071133.60335-30-chaitanya.kulkarni@wdc.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="LQksG6bCIzRHxTLp"
Content-Disposition: inline
In-Reply-To: <20210128071133.60335-30-chaitanya.kulkarni@wdc.com>
User-Agent: Mutt/1.5.23 (2014-03-12)


--LQksG6bCIzRHxTLp
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

Hi!
> 
> diff --git a/kernel/power/swap.c b/kernel/power/swap.c
> index c73f2e295167..e92e36c053a6 100644
> --- a/kernel/power/swap.c
> +++ b/kernel/power/swap.c
> @@ -271,13 +271,12 @@ static int hib_submit_io(int op, int op_flags, pgoff_t page_off, void *addr,
>  		struct hib_bio_batch *hb)
>  {
>  	struct page *page = virt_to_page(addr);
> +	sector_t sect = page_off * (PAGE_SIZE >> 9);
>  	struct bio *bio;
>  	int error = 0;
> 
> -	bio = bio_alloc(GFP_NOIO | __GFP_HIGH, 1);
> -	bio->bi_iter.bi_sector = page_off * (PAGE_SIZE >> 9);
> -	bio_set_dev(bio, hib_resume_bdev);
> -	bio_set_op_attrs(bio, op, op_flags);
> +	bio = bio_new(hib_resume_bdev, sect, op, op_flags, 1,
> +		      GFP_NOIO | __GFP_HIGH);
> 

C function with 6 arguments... dunno. Old version looks comparable or
even more readable...

Best regards,
							Pavel

-- 
http://www.livejournal.com/~pavelmachek

--LQksG6bCIzRHxTLp
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: Digital signature

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEARECAAYFAmAtkpEACgkQMOfwapXb+vL5ywCguk9XRtMJ4/rJgwKlR42qzH7B
ww4AoK8H3c5uHgpu/eHAUqpvoYMrxHuL
=Rk1V
-----END PGP SIGNATURE-----

--LQksG6bCIzRHxTLp--
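Pavel's readability concern above — one six-argument call versus an alloc plus three short setters — can be illustrated outside the kernel. The sketch below is hypothetical Python, not the kernel API: the `Bio` record and `bio_new` helper are invented stand-ins whose fields merely mirror the arguments in the patch (device, sector, op, op flags, vec count, gfp mask).

```python
from dataclasses import dataclass

# Hypothetical stand-in for the kernel's struct bio; field names only
# mirror the arguments consolidated by the patch.
@dataclass
class Bio:
    bdev: str
    sector: int
    op: int
    op_flags: int
    max_vecs: int
    gfp_mask: int

def bio_new(bdev, sector, op, op_flags, max_vecs, gfp_mask):
    """One call replacing alloc + three setters, as in the patch."""
    return Bio(bdev, sector, op, op_flags, max_vecs, gfp_mask)

PAGE_SIZE = 4096

def hib_submit_io_sketch(op, op_flags, page_off):
    # sector = page_off * (PAGE_SIZE >> 9), as computed in the patch
    sect = page_off * (PAGE_SIZE >> 9)
    return bio_new("hib_resume_bdev", sect, op, op_flags, 1, 0)

bio = hib_submit_io_sketch(op=0, op_flags=0, page_off=3)
print(bio.sector)  # 3 * (4096 >> 9) = 24
```

The trade-off debated in the thread is visible even here: the single call is shorter, but every argument is positional, so nothing at the call site says which value is the vec count and which is the gfp mask.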


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 22:24:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 22:24:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86445.162299 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCVEu-00039U-KK; Wed, 17 Feb 2021 22:24:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86445.162299; Wed, 17 Feb 2021 22:24:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCVEu-00039N-HK; Wed, 17 Feb 2021 22:24:08 +0000
Received: by outflank-mailman (input) for mailman id 86445;
 Wed, 17 Feb 2021 22:24:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mYKF=HT=linaro.org=alex.bennee@srs-us1.protection.inumbo.net>)
 id 1lCVEt-00039I-BK
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 22:24:07 +0000
Received: from mail-wr1-x433.google.com (unknown [2a00:1450:4864:20::433])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e699e8ee-2c96-4371-9f50-e39a183f86ea;
 Wed, 17 Feb 2021 22:24:06 +0000 (UTC)
Received: by mail-wr1-x433.google.com with SMTP id n4so286610wrx.1
 for <xen-devel@lists.xenproject.org>; Wed, 17 Feb 2021 14:24:06 -0800 (PST)
Received: from zen.linaroharston ([51.148.130.216])
 by smtp.gmail.com with ESMTPSA id x11sm5526509wrv.83.2021.02.17.14.24.04
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 17 Feb 2021 14:24:04 -0800 (PST)
Received: from zen (localhost [127.0.0.1])
 by zen.linaroharston (Postfix) with ESMTP id 583D51FF7E;
 Wed, 17 Feb 2021 22:24:03 +0000 (GMT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e699e8ee-2c96-4371-9f50-e39a183f86ea
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=references:user-agent:from:to:cc:subject:date:in-reply-to
         :message-id:mime-version:content-transfer-encoding;
        bh=UBErDwxItRQDaQKbAHMCfrEjjIOCFPLlsoGBCNNVrxc=;
        b=dHVoXOEhkEQmqkxhT1z9RJDaL3PKUPKjab59XnBq+VAabO0DBXbi6J6gE3yWhZG0O9
         O4Z8HNEn+spF9oAtjqGOyFzymowEkimnrfehoc1B1DMs1OXoRPeJSMsE+ktV+nlPuzfe
         f82zgsatGikwu+U9h5iQCza80bsmYxienCH/LsZaUiqHvK8kp6In4Z0beWPK3abQyT9C
         w0CLqfxBzVJRmJIIXGxhuVHqMXjGB3I2m9a4YMXTpQBLpIKPh++Ei6x9oZCjSObqS671
         nyyAAiEsbmasyWY7A9bVH1PLFdJql5yGJ0rgFCpOBuYoTgABeTmRI5H5VmVtRMbFiKIK
         6BTA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:references:user-agent:from:to:cc:subject:date
         :in-reply-to:message-id:mime-version:content-transfer-encoding;
        bh=UBErDwxItRQDaQKbAHMCfrEjjIOCFPLlsoGBCNNVrxc=;
        b=JubkAlXrHJ6SWoAD0V/vDmwjno29lsh/YFrMh53RhIQAircSEo6+oMv04hhpYo5p9o
         h7+4lN30tasiMLSr5V7b7qI6cFocdgzvDIzYh84TF0VYSPoPL242Is2WjhCkQ6J9mdo9
         WTihtPTVxGmSLRWUjJHJVZlJK6QsrwYfrl4s38DWNfwL3hEZbN1GgzCFHUsXhkJld1Cr
         YdS9VfsmSMQExP//sy9oGNUhCvDNO3H7555rYBZTVsqK0FJInZ9x5HXeyXGmBg4O4Gr7
         MF/TJhy9GhOA29AuYgpjkf6jSWiQ8oQ5Cuar1qNMolmoAF3fFx0jBjjCBz6zjxqcWv+l
         Sy1w==
X-Gm-Message-State: AOAM533Ri0/O6Hz2nZP22BYfRXHwMyVlQ82tQVkbOOQ2YDpcS3wbmS+r
	9wVrqux87s+/zxdnyGHl+3s0rQ==
X-Google-Smtp-Source: ABdhPJwv23fB+ttVoJXnMOjEDxGHSwBNNyZx5lUAwYZzR2vi+hrXJ0Dh3JQH0hBHHsnpt0xkUXTEbw==
X-Received: by 2002:a5d:67c2:: with SMTP id n2mr1291969wrw.298.1613600645220;
        Wed, 17 Feb 2021 14:24:05 -0800 (PST)
References: <20210211171945.18313-1-alex.bennee@linaro.org>
 <20210211171945.18313-8-alex.bennee@linaro.org>
 <20210217204654.GA353754@localhost.localdomain>
User-agent: mu4e 1.5.8; emacs 28.0.50
From: Alex Bennée <alex.bennee@linaro.org>
To: Cleber Rosa <crosa@redhat.com>
Cc: qemu-devel@nongnu.org, julien@xen.org, andre.przywara@arm.com,
 stefano.stabellini@linaro.org, Wainer dos Santos Moschetta
 <wainersm@redhat.com>, Philippe Mathieu-Daudé
 <philmd@redhat.com>,
 xen-devel@lists.xenproject.org, stefano.stabellini@xilinx.com,
 stratos-dev@op-lists.linaro.org
Subject: Re: [PATCH  v2 7/7] tests/avocado: add boot_xen tests
Date: Wed, 17 Feb 2021 22:22:50 +0000
In-reply-to: <20210217204654.GA353754@localhost.localdomain>
Message-ID: <87sg5us58c.fsf@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable


Cleber Rosa <crosa@redhat.com> writes:

> On Thu, Feb 11, 2021 at 05:19:45PM +0000, Alex Bennée wrote:
>> These tests make sure we can boot the Xen hypervisor with a Dom0
>> kernel using the guest-loader. We currently have to use a kernel I
>> built myself because there are issues using the Debian kernel images.
>> 
>> Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
>> ---
>>  MAINTAINERS                  |   1 +
>>  tests/acceptance/boot_xen.py | 117 +++++++++++++++++++++++++++++++++++
>>  2 files changed, 118 insertions(+)
>>  create mode 100644 tests/acceptance/boot_xen.py
>> 
>> diff --git a/MAINTAINERS b/MAINTAINERS
>> index 853f174fcf..537ca7a6f3 100644
>> --- a/MAINTAINERS
>> +++ b/MAINTAINERS
>> @@ -1998,6 +1998,7 @@ M: Alex Bennée <alex.bennee@linaro.org>
>>  S: Maintained
>>  F: hw/core/guest-loader.c
>>  F: docs/system/guest-loader.rst
>> +F: tests/acceptance/boot_xen.py
>> 
>>  Intel Hexadecimal Object File Loader
>>  M: Su Hang <suhang16@mails.ucas.ac.cn>
>> diff --git a/tests/acceptance/boot_xen.py b/tests/acceptance/boot_xen.py
>> new file mode 100644
>> index 0000000000..8c7e091d40
>> --- /dev/null
>> +++ b/tests/acceptance/boot_xen.py
>> @@ -0,0 +1,117 @@
>> +# Functional test that boots a Xen hypervisor with a domU kernel and
>> +# checks the console output is vaguely sane.
>> +#
>> +# Copyright (c) 2020 Linaro
>> +#
>> +# Author:
>> +#  Alex Bennée <alex.bennee@linaro.org>
>> +#
>> +# SPDX-License-Identifier: GPL-2.0-or-later
>> +#
>> +# This work is licensed under the terms of the GNU GPL, version 2 or
>> +# later.  See the COPYING file in the top-level directory.
>> +
>> +import os
>> +
>> +from avocado import skipIf
>> +from avocado_qemu import wait_for_console_pattern
>> +from boot_linux_console import LinuxKernelTest
>> +
>> +
>> +class BootXenBase(LinuxKernelTest):
>> +    """
>> +    Boots a Xen hypervisor with a Linux DomU kernel.
>> +    """
>> +
>> +    timeout = 90
>> +    XEN_COMMON_COMMAND_LINE = 'dom0_mem=128M loglvl=all guest_loglvl=all'
>> +
>> +    def fetch_guest_kernel(self):
>> +        # Using my own built kernel - which works
>> +        kernel_url = ('https://fileserver.linaro.org/'
>> +                      's/JSsewXGZ6mqxPr5/download?path=%2F&files='
>> +                      'linux-5.9.9-arm64-ajb')
>> +        kernel_sha1 = '4f92bc4b9f88d5ab792fa7a43a68555d344e1b83'
>> +        kernel_path = self.fetch_asset(kernel_url,
>> +                                       asset_hash=kernel_sha1)
>> +
>> +        return kernel_path
>> +
>> +    def launch_xen(self, xen_path):
>> +        """
>> +        Launch Xen with a dom0 guest kernel
>> +        """
>> +        self.log.info("launch with xen_path: %s", xen_path)
>> +        kernel_path = self.fetch_guest_kernel()
>> +
>> +        self.vm.set_console()
>> +
>> +        xen_command_line = self.XEN_COMMON_COMMAND_LINE
>> +        self.vm.add_args('-machine', 'virtualization=on',
>> +                         '-cpu', 'cortex-a57',
>> +                         '-m', '768',
>> +                         '-kernel', xen_path,
>> +                         '-append', xen_command_line,
>> +                         '-device',
>> +                         "guest-loader,addr=0x47000000,kernel=%s,bootargs=console=hvc0"
>
> Nitpick/OCD: single quotes to match all other args?
>
>> +                         % (kernel_path))
>> +
>> +        self.vm.launch()
>> +
>> +        console_pattern = 'VFS: Cannot open root device'
>> +        wait_for_console_pattern(self, console_pattern, "Panic on CPU 0:")
>> +
>> +
>> +class BootXen(BootXenBase):
>> +
>> +    @skipIf(os.getenv('GITLAB_CI'), 'Running on GitLab')
>> +    def test_arm64_xen_411_and_dom0(self):
>> +        """
>> +        :avocado: tags=arch:aarch64
>> +        :avocado: tags=accel:tcg
>> +        :avocado: tags=cpu:cortex-a57
>> +        :avocado: tags=machine:virt
>> +        """
>> +        xen_url = ('https://deb.debian.org/debian/'
>> +                   'pool/main/x/xen/'
>> +                   'xen-hypervisor-4.11-arm64_4.11.4+37-g3263f257ca-1_arm64.deb')
>> +        xen_sha1 = '034e634d4416adbad1212d59b62bccdcda63e62a'
>
> This URL is already giving 404s because of a new package.  I found
> this to work (but yeah, won't probably last long):
>
>         xen_url = ('http://deb.debian.org/debian/'
>                    'pool/main/x/xen/'
>                    'xen-hypervisor-4.11-arm64_4.11.4+57-g41a822c392-2_arm64.deb')
>         xen_sha1 = 'b5a6810fc67fd50fa36afdfdfe88ce3153dd3a55'

I think the solution is to use archive links here. There is a snapshot
archive of sid (we've used it in the past) but I suspect there isn't an
archive of old stable packages for a reason.

>
>> +        xen_deb = self.fetch_asset(xen_url, asset_hash=xen_sha1)
>> +        xen_path = self.extract_from_deb(xen_deb, "/boot/xen-4.11-arm64")
>> +
>> +        self.launch_xen(xen_path)
>> +
>> +    @skipIf(os.getenv('GITLAB_CI'), 'Running on GitLab')
>> +    def test_arm64_xen_414_and_dom0(self):
>> +        """
>> +        :avocado: tags=arch:aarch64
>> +        :avocado: tags=accel:tcg
>> +        :avocado: tags=cpu:cortex-a57
>> +        :avocado: tags=machine:virt
>> +        """
>> +        xen_url = ('https://deb.debian.org/debian/'
>> +                   'pool/main/x/xen/'
>> +                   'xen-hypervisor-4.14-arm64_4.14.0+80-gd101b417b7-1_arm64.deb')
>> +        xen_sha1 = 'b9d209dd689ed2b393e625303a225badefec1160'
>
> Likewise here:
>
>         xen_url = ('https://deb.debian.org/debian/'
>                    'pool/main/x/xen/'
>                    'xen-hypervisor-4.14-arm64_4.14.0+88-g1d1d1f5391-2_arm64.deb')
>         xen_sha1 = 'f316049beaadd50482644e4955c4cdd63e3a07d5'
>
>> +        xen_deb = self.fetch_asset(xen_url, asset_hash=xen_sha1)
>> +        xen_path = self.extract_from_deb(xen_deb, "/boot/xen-4.14-arm64")
>> +
>> +        self.launch_xen(xen_path)
>> +
>> +    @skipIf(os.getenv('GITLAB_CI'), 'Running on GitLab')
>> +    def test_arm64_xen_415_and_dom0(self):
>> +        """
>> +        :avocado: tags=arch:aarch64
>> +        :avocado: tags=accel:tcg
>> +        :avocado: tags=cpu:cortex-a57
>> +        :avocado: tags=machine:virt
>> +        """
>> +
>> +        xen_url = ('https://fileserver.linaro.org/'
>> +                   's/JSsewXGZ6mqxPr5/download'
>> +                   '?path=%2F&files=xen-upstream-4.15-unstable.deb')
>> +        xen_sha1 = 'fc191172b85cf355abb95d275a24cc0f6d6579d8'
>> +        xen_deb = self.fetch_asset(xen_url, asset_hash=xen_sha1)
>> +        xen_path = self.extract_from_deb(xen_deb, "/boot/xen-4.15-unstable")
>> +
>> +        self.launch_xen(xen_path)
>> -- 
>> 2.20.1
>> 
>> 
>
> With those changes,
>
> Reviewed-by: Cleber Rosa <crosa@redhat.com>
> Tested-by: Cleber Rosa <crosa@redhat.com>


-- 
Alex Bennée
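
The vanishing-URL problem discussed in the message above (Debian pool packages disappear when a newer revision lands) is commonly handled by trying a list of candidate URLs in order and verifying a checksum. The sketch below is hedged: it is not avocado's actual `fetch_asset` API, and the `fetch_verified` helper and all names in it are invented for illustration; the fetcher is injected so the example needs no network.

```python
import hashlib

def fetch_verified(candidates, fetch):
    """Try (url, sha1) pairs in order; return the first payload whose
    SHA-1 matches.  `fetch` is any callable url -> bytes, injected so
    this sketch runs without network access."""
    last_err = None
    for url, sha1 in candidates:
        try:
            data = fetch(url)
        except OSError as err:          # e.g. a 404 from a vanished package
            last_err = err
            continue
        if hashlib.sha1(data).hexdigest() == sha1:
            return data
    raise RuntimeError("no candidate matched; last error: %s" % last_err)

# Fake fetcher standing in for a real download: the first URL "404s".
payload = b"xen-hypervisor.deb"

def fake_fetch(url):
    if "gone" in url:
        raise OSError("404 Not Found")
    return payload

good_sha1 = hashlib.sha1(payload).hexdigest()
data = fetch_verified(
    [("https://example.org/gone.deb", good_sha1),
     ("https://example.org/mirror.deb", good_sha1)],
    fake_fetch,
)
print(data == payload)  # True
```

The checksum step matters as much as the fallback: a pool URL can be recycled for a newer package, so a successful download with the wrong hash must be rejected, exactly as `asset_hash` does in the quoted tests.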


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 23:24:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 23:24:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86450.162311 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCWAe-0000da-7P; Wed, 17 Feb 2021 23:23:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86450.162311; Wed, 17 Feb 2021 23:23:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCWAe-0000dT-4V; Wed, 17 Feb 2021 23:23:48 +0000
Received: by outflank-mailman (input) for mailman id 86450;
 Wed, 17 Feb 2021 23:23:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RvmJ=HT=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lCWAc-0000dO-0f
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 23:23:46 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e0655dd7-d175-4ec5-8c17-54cbdc8525ab;
 Wed, 17 Feb 2021 23:23:45 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id CBB5F64E2F;
 Wed, 17 Feb 2021 23:23:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e0655dd7-d175-4ec5-8c17-54cbdc8525ab
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1613604224;
	bh=3Ac/wfKtJ+Bk7gd+4GzFKr/tjDHTJw1C8Ict01DZ9uE=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=kzpS3EYt2RCCnxwN4yxk8CQydrGoODH28ShDhoZT1tPrEDpbv4KVa2uxPBq5wYgGB
	 GswV+qA5tSrj1OPSfySugcoQWn2Ae1/qOp5KIvLd49NTWDo2iP+trlE1wMvZg/nP6Q
	 8L/CM738nhX+Cuqy/R74SquOJqYeeV2cudVCYhgpz5YyXryocFlucviKcCU3bWCRqo
	 sAdNQVptOuwgifX+F3FXghYxpFcXYmTFMMlJqlCrnOSLdKkYrqP9Ommu1isbgSOx0Y
	 gRNUtnWJscMD84+uKCdtTb6oklD02TNFd1J74Hh1+sDo88NBH/UYwxLmjZeGqQ9U5a
	 iaA7zCeRDCDRw==
Date: Wed, 17 Feb 2021 15:23:43 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
cc: Rahul Singh <Rahul.Singh@arm.com>, 
    Xen-devel <xen-devel@lists.xenproject.org>, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, iwj@xenproject.org
Subject: Re: [PATCH] xen/arm : smmuv3: Fix to handle multiple StreamIds per
 device.
In-Reply-To: <5145C563-A8AA-41B1-8EBB-B32D1FFC2219@arm.com>
Message-ID: <alpine.DEB.2.21.2102171522420.3234@sstabellini-ThinkPad-T480s>
References: <43de5b58df37d8b8de037cb23c47ab8454caf37c.1613492577.git.rahul.singh@arm.com> <5145C563-A8AA-41B1-8EBB-B32D1FFC2219@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

+IanJ

On Wed, 17 Feb 2021, Bertrand Marquis wrote:
> Hi Rahul,
> 
> 
> > On 17 Feb 2021, at 10:05, Rahul Singh <Rahul.Singh@arm.com> wrote:
> > 
> > SMMUv3 driver does not handle multiple StreamIDs if the master device
> > supports more than one StreamID.
> > 
> > This bug was introduced when the driver was ported from Linux to Xen.
> > dt_device_set_protected(..) should be called from add_device(..) not
> > from the dt_xlate(..).
> > 
> > Move dt_device_set_protected(..) from dt_xlate(..) to add_device().
> > 
> > Signed-off-by: Rahul Singh <rahul.singh@arm.com>
> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
> 
> Thanks a lot, this is fixing issues with multiple stream ids for one device :-)

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> > ---
> > This patch is a candidate for 4.15 as without this patch it is not possible to
> > assign multiple StreamIDs to the same device when the device is protected
> > behind an SMMUv3.

I agree, especially considering that the impact is limited to smmu-v3.c.


> > ---
> > xen/drivers/passthrough/arm/smmu-v3.c | 29 ++++++++++-----------------
> > 1 file changed, 11 insertions(+), 18 deletions(-)
> > 
> > diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
> > index 914cdc1cf4..53d150cdb6 100644
> > --- a/xen/drivers/passthrough/arm/smmu-v3.c
> > +++ b/xen/drivers/passthrough/arm/smmu-v3.c
> > @@ -2207,24 +2207,6 @@ static int arm_smmu_add_device(u8 devfn, struct device *dev)
> > 	 */
> > 	arm_smmu_enable_pasid(master);
> > 
> > -	return 0;
> > -
> > -err_free_master:
> > -	xfree(master);
> > -	dev_iommu_priv_set(dev, NULL);
> > -	return ret;
> > -}
> > -
> > -static int arm_smmu_dt_xlate(struct device *dev,
> > -				const struct dt_phandle_args *args)
> > -{
> > -	int ret;
> > -	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
> > -
> > -	ret = iommu_fwspec_add_ids(dev, args->args, 1);
> > -	if (ret)
> > -		return ret;
> > -
> > 	if (dt_device_is_protected(dev_to_dt(dev))) {
> > 		dev_err(dev, "Already added to SMMUv3\n");
> > 		return -EEXIST;
> > @@ -2237,6 +2219,17 @@ static int arm_smmu_dt_xlate(struct device *dev,
> > 			dev_name(fwspec->iommu_dev), fwspec->num_ids);
> > 
> > 	return 0;
> > +
> > +err_free_master:
> > +	xfree(master);
> > +	dev_iommu_priv_set(dev, NULL);
> > +	return ret;
> > +}
> > +
> > +static int arm_smmu_dt_xlate(struct device *dev,
> > +				const struct dt_phandle_args *args)
> > +{
> > +	return iommu_fwspec_add_ids(dev, args->args, 1);
> > }
> > 
> > /* Probing and initialisation functions */
> > -- 
> > 2.17.1
> > 
> 


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 23:45:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 23:45:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86454.162323 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCWVk-0002Ya-1s; Wed, 17 Feb 2021 23:45:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86454.162323; Wed, 17 Feb 2021 23:45:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCWVj-0002YT-VD; Wed, 17 Feb 2021 23:45:35 +0000
Received: by outflank-mailman (input) for mailman id 86454;
 Wed, 17 Feb 2021 23:45:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RvmJ=HT=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lCWVj-0002YO-FQ
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 23:45:35 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fae42472-e4ac-4cf7-b4fe-e80d17f5f7a5;
 Wed, 17 Feb 2021 23:45:34 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 6EC9A64D9C;
 Wed, 17 Feb 2021 23:45:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fae42472-e4ac-4cf7-b4fe-e80d17f5f7a5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1613605533;
	bh=ZsCe8pnh3yb9rrl+xuQd+RNHy5q8RsFnRp4dROlRyD8=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=rMHq/InzpRnJiAzejYqlM/h3fauhGIBceK8iyAmt2uvdtbp0L8+r4+tHBXouf5N0a
	 B4Gdz0baCTVa9+yEzV96TD/5DYcP72BinYSqcOlT86l/vCM0lpxv46+KG/feVGi0T+
	 DeG1Mm3WWyZLg2etRgrQE4iuWg9LkUXJlcgEdSi9EUTs6bDPyJutmzz/NUKKGBaqx3
	 C4I+hawB8spZFcSxzLIRbzvAdGYAQ2buabQjXihPtrdCf82WD0TIiWmSr9sqID4Jbj
	 QiQhuoaTq0ufxUH/F4HkjBWqRGBUGjtkhp76FKNJ1Rw1YCQcXHVXf8n9Axs/yzlcfT
	 lpmPpBmUY2NBg==
Date: Wed, 17 Feb 2021 15:45:32 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Jan Beulich <jbeulich@suse.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>, 
    xen-devel@lists.xenproject.org, cardoe@cardoe.com, 
    andrew.cooper3@citrix.com, wl@xen.org, iwj@xenproject.org, 
    anthony.perard@citrix.com, 
    Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: Re: [PATCH] firmware: don't build hvmloader if it is not needed
In-Reply-To: <5a574326-9560-e771-b84f-9d4f348b7f5f@suse.com>
Message-ID: <alpine.DEB.2.21.2102171529460.3234@sstabellini-ThinkPad-T480s>
References: <20210213020540.27894-1-sstabellini@kernel.org> <20210213135056.GA6191@mail-itl> <4d9200cd-bd4b-e429-5c96-7a4399bb00b4@suse.com> <alpine.DEB.2.21.2102161016000.3234@sstabellini-ThinkPad-T480s> <5a574326-9560-e771-b84f-9d4f348b7f5f@suse.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-2053857798-1613604828=:3234"
Content-ID: <alpine.DEB.2.21.2102171533570.3234@sstabellini-ThinkPad-T480s>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-2053857798-1613604828=:3234
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.21.2102171533571.3234@sstabellini-ThinkPad-T480s>

On Wed, 17 Feb 2021, Jan Beulich wrote:
> On 16.02.2021 19:31, Stefano Stabellini wrote:
> > On Mon, 15 Feb 2021, Jan Beulich wrote:
> >> On 13.02.2021 14:50, Marek Marczykowski-Górecki wrote:
> >>> On Fri, Feb 12, 2021 at 06:05:40PM -0800, Stefano Stabellini wrote:
> >>>> If rombios, seabios and ovmf are all disabled, don't attempt to build
> >>>> hvmloader.
> >>>
> >>> What if you choose to not build any of rombios, seabios, ovmf, but use
> >>> system one instead? Wouldn't that exclude hvmloader too?
> >>
> >> Even further - one can disable all firmware and have every guest
> >> config explicitly specify the firmware to use, afaict.
> > 
> > I didn't realize there was a valid reason for wanting to build hvmloader
> > without rombios, seabios, and ovmf.
> > 
> > 
> >>> This heuristic seems like a bit too much, maybe instead add an explicit
> >>> option to skip hvmloader?
> >>
> >> +1 (If making this configurable is needed at all - is having
> >> hvmloader without needing it really a problem?)
> > 
> > The hvmloader build fails on Alpine Linux x86:
> > 
> > https://gitlab.com/xen-project/xen/-/jobs/1033722465
> > 
> > 
> > 
> > I admit I was just trying to find the fastest way to make those Alpine
> > Linux builds succeed to unblock patchew: although the Alpine Linux
> > builds are marked as "allow_failure: true" in gitlab-ci, patchew will
> > still report the whole battery of tests as "failure". As a consequence
> > the notification emails from patchew after a build of a contributed
> > patch series always says "job failed" today, making it kind of useless.
> > See attached.
> > 
> > I would love if somebody else took over this fix as I am doing this
> > after hours, but if you have a simple suggestion on how to fix the
> > Alpine Linux hvmloader builds, or skip the build when appropriate, I can
> > try to follow up.
> 
> There is an issue with the definition of uint64_t there. Initial
> errors like
> 
> hvmloader.c: In function 'init_vm86_tss':
> hvmloader.c:202:39: error: left shift count >= width of type [-Werror=shift-count-overflow]
>   202 |                   ((uint64_t)TSS_SIZE << 32) | virt_to_phys(tss));
> 
> already hint at this, but then
> 
> util.c: In function 'get_cpu_mhz':
> util.c:824:15: error: conversion from 'long long unsigned int' to 'uint64_t' {aka 'long unsigned int'} changes value from '4294967296000000' to '0' [-Werror=overflow]
>   824 |     cpu_khz = 1000000ull << 32;
> 
> is quite explicit: "aka 'long unsigned int'"? This is a 32-bit
> environment, after all. I suspect the build picks up headers
> (stdint.h here in particular) intended for 64-bit builds only.
> Can you check whether "gcc -m32" properly sets include paths
> _different_ from those plain "gcc" sets, if the headers found in
> the latter case aren't suitable for the former? Or alternatively
> is the Alpine build environment set up incorrectly, in that it
> lacks 32-bit devel packages?
> 
> As an aside I don't think it's really a good idea to have
> hvmloader depend on any external headers. Just like the
> hypervisor it's a free-standing binary, and hence ought to be
> free of any dependencies on the build/host environment.

All the automation containers are available for anybody to use, so FYI
you can repro the issue by doing inside your Xen repo:

docker run -it -v `pwd`:/build registry.gitlab.com/xen-project/xen/alpine:3.12
CC=gcc bash automation/scripts/build


I did just that and ran a simple test:

~ # gcc -m32 test.c -o test
/usr/lib/gcc/x86_64-alpine-linux-musl/9.3.0/../../../../x86_64-alpine-linux-musl/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-alpine-linux-musl/9.3.0/../../../libssp_nonshared.a when searching for -lssp_nonshared
/usr/lib/gcc/x86_64-alpine-linux-musl/9.3.0/../../../../x86_64-alpine-linux-musl/bin/ld: skipping incompatible /usr/lib/libssp_nonshared.a when searching for -lssp_nonshared
/usr/lib/gcc/x86_64-alpine-linux-musl/9.3.0/../../../../x86_64-alpine-linux-musl/bin/ld: cannot find -lssp_nonshared
/usr/lib/gcc/x86_64-alpine-linux-musl/9.3.0/../../../../x86_64-alpine-linux-musl/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-alpine-linux-musl/9.3.0/libgcc.a when searching for -lgcc
/usr/lib/gcc/x86_64-alpine-linux-musl/9.3.0/../../../../x86_64-alpine-linux-musl/bin/ld: cannot find -lgcc
/usr/lib/gcc/x86_64-alpine-linux-musl/9.3.0/../../../../x86_64-alpine-linux-musl/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-alpine-linux-musl/9.3.0/../../../libgcc_s.so.1 when searching for libgcc_s.so.1
/usr/lib/gcc/x86_64-alpine-linux-musl/9.3.0/../../../../x86_64-alpine-linux-musl/bin/ld: skipping incompatible /usr/lib/libgcc_s.so.1 when searching for libgcc_s.so.1
/usr/lib/gcc/x86_64-alpine-linux-musl/9.3.0/../../../../x86_64-alpine-linux-musl/bin/ld: cannot find libgcc_s.so.1
/usr/lib/gcc/x86_64-alpine-linux-musl/9.3.0/../../../../x86_64-alpine-linux-musl/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-alpine-linux-musl/9.3.0/libgcc.a when searching for -lgcc
/usr/lib/gcc/x86_64-alpine-linux-musl/9.3.0/../../../../x86_64-alpine-linux-musl/bin/ld: cannot find -lgcc
collect2: error: ld returned 1 exit status
~ # cat test.c 
#include <stdio.h>

int main() {
    printf("DEBUG\n");
}


Given this, I take it there is no 32-bit build env? A bit of Googling tells
me that gcc on Alpine Linux is compiled without multilib support.


That said I was looking at the Alpine Linux APKBUILD script:
https://gitlab.alpinelinux.org/alpine/aports/-/blob/master/main/xen/APKBUILD

And I noticed this patch that looks suspicious:
https://gitlab.alpinelinux.org/alpine/aports/-/blob/master/main/xen/musl-hvmloader-fix-stdint.patch
--8323329-2053857798-1613604828=:3234--


From xen-devel-bounces@lists.xenproject.org Wed Feb 17 23:54:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 Feb 2021 23:54:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86458.162338 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCWeV-0003a7-1P; Wed, 17 Feb 2021 23:54:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86458.162338; Wed, 17 Feb 2021 23:54:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCWeU-0003a0-Sw; Wed, 17 Feb 2021 23:54:38 +0000
Received: by outflank-mailman (input) for mailman id 86458;
 Wed, 17 Feb 2021 23:54:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RvmJ=HT=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lCWeU-0003Zv-43
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 23:54:38 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5c3c9570-9cd0-46e1-b5d1-ea5a50a7affb;
 Wed, 17 Feb 2021 23:54:37 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 731A664E63;
 Wed, 17 Feb 2021 23:54:36 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5c3c9570-9cd0-46e1-b5d1-ea5a50a7affb
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1613606076;
	bh=/+lRb1DonEWYzxPO9qdel/N7h5iRPfdgS0uSyL+o2Pw=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=XrsYBdVT8dMW9tJlvyNt/5SNotM86J9fb5UZsaT5/3U4yFk5bDsq7AqKFxiGwRu7V
	 GBGMd4Q01h21IEgp3XbIQ6zL3Ip+JmHQajB7FaRBIaKvweF0ieQJbXE7+Ryu1tj3EQ
	 Wftxtuc4mOoJapEFvO2D7bInj+ftFhdFh5QWHeb0gJ+xmuTmBffOrZPgtABETV57O+
	 YTLvcO1udntTTPFusnWuBa5uLAuH+rtY15pc2w5bgNfLTCzr8aq2U4PVs8KMW/C3+C
	 tv9bvnNUUMWtvx/BkN22CbhScTQrXmM0RjRwzxNgT9Tv0kaW4JaRdgoivpsD+QW/IN
	 Zz6h5YqiJXyVg==
Date: Wed, 17 Feb 2021 15:54:35 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, Bertrand.Marquis@arm.com, 
    Volodymyr_Babchuk@epam.com, rahul.singh@arm.com
Subject: Re: [RFC] xen/arm: introduce XENFEAT_ARM_dom0_iommu
In-Reply-To: <2d22d5b8-6cda-f27b-e938-4806b65794a5@xen.org>
Message-ID: <alpine.DEB.2.21.2102171233270.3234@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2102161333090.3234@sstabellini-ThinkPad-T480s> <2d22d5b8-6cda-f27b-e938-4806b65794a5@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 17 Feb 2021, Julien Grall wrote:
> On 17/02/2021 02:00, Stefano Stabellini wrote:
> > Hi all,
> > 
> > Today Linux uses the swiotlb-xen driver (drivers/xen/swiotlb-xen.c) to
> > translate addresses for DMA operations in Dom0. Specifically,
> > swiotlb-xen is used to translate the address of a foreign page (a page
> > belonging to a domU) mapped into Dom0 before using it for DMA.
> > 
> > This is important because although Dom0 is 1:1 mapped, DomUs are not. On
> > systems without an IOMMU swiotlb-xen enables PV drivers to work as long
> > as the backends are in Dom0. Thanks to swiotlb-xen, the DMA operation
> > ends up using the MFN, rather than the GFN.
> > 
> > 
> > On systems with an IOMMU, this is not necessary: when a foreign page is
> > mapped in Dom0, it is added to the Dom0 p2m. A new GFN->MFN translation
> > is established for both MMU and SMMU. Dom0 could safely use the GFN
> > address (instead of the MFN) for DMA operations and they would work. It
> > would be more efficient than using swiotlb-xen.
> > 
> > If you recall my presentation from Xen Summit 2020, Xilinx is working on
> > cache coloring. With cache coloring, no domain is 1:1 mapped, not even
> > Dom0. In a scenario where Dom0 is not 1:1 mapped, swiotlb-xen does not
> > work as intended.
> > 
> > 
> > The suggested solution for both these issues is to add a new feature
> > flag "XENFEAT_ARM_dom0_iommu" that tells Dom0 that it is safe not to use
> > swiotlb-xen because IOMMU translations are available for Dom0. If
> > XENFEAT_ARM_dom0_iommu is set, Linux should skip the swiotlb-xen
> > initialization. I have tested this scheme with and without cache
> > coloring (hence with and without 1:1 mapping of Dom0) on ZCU102 and it
> > works as expected: DMA operations succeed.
> > 
> > 
> > What about systems where an IOMMU is present but not all devices are
> > protected?
> > 
> > There is no way for Xen to know which devices are protected and which
> > ones are not: devices that do not have the "iommus" property could or
> > could not be DMA masters.
> > 
> > Perhaps Xen could populate a whitelist of devices protected by the IOMMU
> > based on the "iommus" property. It would require some added complexity
> > in Xen and especially in the swiotlb-xen driver in Linux to use it,
> > which is not ideal.
> 
> You are trading a bit more complexity in Xen and Linux against a user that may
> not be able to use the hypervisor on his/her platform without a quirk in Xen
> (see more below).
> 
> > However, this approach would not work for cache
> > coloring where dom0 is not 1:1 mapped so the swiotlb-xen should not be
> > used either way
> 
> Not all Dom0 Linux kernels will be able to work with cache colouring. So
> you will need a way for the kernel to say "Hey, I can avoid using swiotlb".

An unmodified dom0 Linux kernel can work with Xen cache coloring enabled.
The swiotlb-xen is unneeded, and all the pfn_valid() checks to detect
if a page is local or foreign don't work as intended. However, it still
works on systems where the IOMMU can be relied upon. That's because the
IOMMU does the GFN->MFN translations, and also because both swiotlb-xen
code paths (cache flush local and cache flush via hypercall) work.

Also, considering that somebody who enables cache coloring has to know
the entire system well, I don't think we need a runtime flag from Linux
to say "Hey I can avoid using swiotlb".

That said, I think it is important to fix Linux because the code might
work today, but it is more by accident than by design.


> > For these reasons, I would like to propose a single flag
> > XENFEAT_ARM_dom0_iommu which says that the IOMMU can be relied upon for
> > DMA translations. In situations where a DMA master is not SMMU
> > protected, XENFEAT_ARM_dom0_iommu should not be set. For example, on a
> > platform where an IOMMU is present and protects most DMA masters but it
> > is leaving out the MMC controller, then XENFEAT_ARM_dom0_iommu should
> > not be set (because PV block is not going to work without swiotlb-xen.)
> > This also means that cache coloring won't be usable on such a system (at
> > least not usable with the MMC controller so the system integrator should
> > pay special care to setup the system).
> > 
> > It is worth noting that if we wanted to extend the interface to add a
> > list of protected devices in the future, it would still be possible. It
> > would be compatible with XENFEAT_ARM_dom0_iommu.
> 
> I imagine by compatible, you mean XENFEAT_ARM_dom0_iommu would be cleared and
> instead the device-tree list would be used. Is that correct?

If XENFEAT_ARM_dom0_iommu is set, the device list is redundant.
If XENFEAT_ARM_dom0_iommu is not set, the device list is useful.

The device list and XENFEAT_ARM_dom0_iommu serve different purposes. The
device list is meant to skip the swiotlb-xen for some devices.

XENFEAT_ARM_dom0_iommu is meant to skip the swiotlb-xen for all devices,
which is especially useful when dom0 is not 1:1 mapped, because in that
case swiotlb-xen is useless and wrong.

Thinking more about this, it is clear that swiotlb-xen should not be
considered when cache coloring is enabled. It can't help by design. The
device list and the flag are really for different use-cases. And the
cache coloring use-case is better served by a XENFEAT_direct_mapped and
XENFEAT_not_direct_mapped flags. More on this below.


> > How to set XENFEAT_ARM_dom0_iommu?
> > 
> > We could set XENFEAT_ARM_dom0_iommu automatically when
> > is_iommu_enabled(d) for Dom0. We could also have a platform specific
> > (xen/arch/arm/platforms/) override so that a specific platform can
> > disable XENFEAT_ARM_dom0_iommu. For debugging purposes and advanced
> > users, it would also be useful to be able to override it via a Xen
> > command line parameter.
> Platform quirks should be limited to a small set of platforms.
> 
> In this case, this would not be only per-platform but also per-firmware table
> as a developer can decide to remove/add IOMMU nodes in the DT at any time.
> 
> In addition to that, it means we are introducing a regression for those users
> as Xen 4.14 would have worked on their platform but not anymore. They would
> need to go through all the nodes and find out which one is not protected.
> 
> This is a bit of a daunting task and we are going to end up having a lot of
> per-platform quirk in Xen.
> 
> So this approach of selecting the flag is a no-go for me. FAOD, the inverted idea
> (i.e. only setting XENFEAT_ARM_dom0_iommu per-platform) is a no go as well.
>
> I don't have a good idea of how to set the flag automatically. My
> requirement is that newer Xen should continue to work on all supported
> platforms without any additional per-platform effort.

Absolutely agreed.


One option would be to rename the flag to XENFEAT_ARM_cache_coloring and
only set it when cache coloring is enabled.  Obviously you need to know
what you are doing if you are enabling cache coloring. If we go down
that route, we don't risk breaking compatibility with any platforms.
Given that cache coloring is not upstream yet (upstreaming should start
very soon), maybe the only thing to do now would be to reserve a XENFEAT
number.

But actually it was always wrong for Linux to enable swiotlb-xen without
checking whether it is 1:1 mapped or not. Today we enable swiotlb-xen in
dom0 and disable it in domU, while we should have enabled swiotlb-xen if
1:1 mapped no matter dom0/domU. (swiotlb-xen could be useful in a 1:1
mapped domU driver domain.)


There is an argument (Andy was making on IRC) that being 1:1 mapped or
not is an important information that Xen should provide to the domain
regardless of anything else.

So maybe we should add two flags:

- XENFEAT_direct_mapped
- XENFEAT_not_direct_mapped

Both would be exposed to all domains. This is not even ARM specific. Today
dom0 would get XENFEAT_direct_mapped and domUs XENFEAT_not_direct_mapped.
With cache
coloring all domains will get XENFEAT_not_direct_mapped. With Bertrand's
team work on 1:1 mapping domUs, some domUs might start to get
XENFEAT_direct_mapped also one day soon.

Now I think this is the best option because it is descriptive, doesn't
imply anything about what Linux should or should not do, and doesn't
depend on unreliable IOMMU information.



Instead, if we follow my original proposal of using
XENFEAT_ARM_dom0_iommu and set it automatically when Dom0 is protected
by IOMMU, we risk breaking PV drivers for platforms where that protection
is incomplete. I have no idea how many there are out there today. I have
the feeling that there aren't that many but I am not sure. So yes, it
could be that we start passing XENFEAT_ARM_dom0_iommu for a given
platform, Linux skips the swiotlb-xen initialization when actually it is
needed for a network/block device, and a PV driver breaks. I can see why
you say this is a no-go.


Third option. We still use XENFEAT_ARM_dom0_iommu but we never set
XENFEAT_ARM_dom0_iommu automatically. It needs a platform-specific flag
to be set. We add the flag to xen/arch/arm/platforms/xilinx-zynqmp.c and
any other platforms that qualify. Basically it is "opt in" instead of
"opt out". We don't risk breaking anything because platforms would have
XENFEAT_ARM_dom0_iommu disabled by default. Still, I think the
XENFEAT_not/direct_mapped route is much cleaner and simpler.



I saw that the topic has generated quite a bit of discussion. I suggest
we keep gathering data and do brainstorming on the thread for a few days
and in the meantime schedule a call for late next week to figure out a
way forward?


From xen-devel-bounces@lists.xenproject.org Thu Feb 18 01:28:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 Feb 2021 01:28:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86462.162353 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCY78-00066K-Ni; Thu, 18 Feb 2021 01:28:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86462.162353; Thu, 18 Feb 2021 01:28:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCY78-00066D-KN; Thu, 18 Feb 2021 01:28:18 +0000
Received: by outflank-mailman (input) for mailman id 86462;
 Thu, 18 Feb 2021 01:28:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=K4tY=HU=wdc.com=prvs=6762db3dd=chaitanya.kulkarni@srs-us1.protection.inumbo.net>)
 id 1lCY77-000668-EF
 for xen-devel@lists.xenproject.org; Thu, 18 Feb 2021 01:28:17 +0000
Received: from esa5.hgst.iphmx.com (unknown [216.71.153.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ca3682e2-02f8-482d-8cf3-08fefaf86fe8;
 Thu, 18 Feb 2021 01:28:14 +0000 (UTC)
Received: from mail-bn8nam08lp2045.outbound.protection.outlook.com (HELO
 NAM04-BN8-obe.outbound.protection.outlook.com) ([104.47.74.45])
 by ob1.hgst.iphmx.com with ESMTP; 18 Feb 2021 09:28:05 +0800
Received: from BYAPR04MB4965.namprd04.prod.outlook.com (2603:10b6:a03:4d::25)
 by BYAPR04MB4759.namprd04.prod.outlook.com (2603:10b6:a03:15::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3846.38; Thu, 18 Feb
 2021 01:28:03 +0000
Received: from BYAPR04MB4965.namprd04.prod.outlook.com
 ([fe80::1d83:38d9:143:4c9c]) by BYAPR04MB4965.namprd04.prod.outlook.com
 ([fe80::1d83:38d9:143:4c9c%5]) with mapi id 15.20.3846.041; Thu, 18 Feb 2021
 01:28:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ca3682e2-02f8-482d-8cf3-08fefaf86fe8
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=wdc.com; i=@wdc.com; q=dns/txt; s=dkim.wdc.com;
  t=1613611694; x=1645147694;
  h=from:to:cc:subject:date:message-id:references:
   content-transfer-encoding:mime-version;
  bh=9qH2YcBXObP3U7J9syAbJhGF6oEFdit3Uuies5zV6L0=;
  b=jRIiLwxGwDDlwFBZ9wzV8ieaActAaGzXZKfiau8M0idAh9GC1/b8ynU+
   5lbHQFcmSQVNAwjcH5gAY5Z3Ew2NpvzWTNHkQD51hY11blAMl0Y8GUg7a
   xIbG8UJ7dFoZeX+DizISrJ+P8pZotxnoMO58lrwTEGWscLT+2NXv8obdO
   SKI3qVG5Mn3rJJ3u+bq7hjOlm1TrojqAIksOGVa1Cwlo3KAiE3i2TsN6b
   sP4Kcklv/51crOBaV4k+bIHK+ZkFescFlnRawdiOfqavlbFn2I8QTp2AZ
   vhpMB/WSVZV4QxMnfR01Cen4BSBSq89hPJXZzUfiUq/wiBS2QEs3EnDnR
   g==;
IronPort-SDR: 4M88/9R/M4HR78hDDgozsqeLNcrKDcQhQIJmP9TXkmbtTPL6yPoZ1bM7xqHkXXxRFDDdTwigus
 mX5gd0CwlopugYCRTtOHQJopUAAsTISKaYaw4apfQyBsiluX3a4Rjf7VpS2Lf3x7MzVjK/nTdx
 djmGlB7C12dt0Rta8wMbhG6TZWfwu4HmiE1uxo6qqVm89CyfifeWMzkq23vWWRp6oFDDG6pKxJ
 IGK+ZbjuTPsRQYy55cOH0fLinw9UoYX1CD67RhnSXlGc+WYIAKau3aAnc6EaAEKVzmR4RuLma5
 7b0=
X-IronPort-AV: E=Sophos;i="5.81,185,1610380800"; 
   d="scan'208";a="160235283"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=GsvHQavf7uNlQ8d/3oXOF87+GvxVIS7rl+xPiEGXnHP+bBUzAqaNm79hvp6eEOelgsqnMdrw+/UPYXBqUqOTa5pnRU4tFNgDa9nLm4urO01I3oavRBVKe/vBSZY0cmZCNUfeMdAy448tzUkbPFZ+HA0L678ORN689HKD3GlNknbjj/aa/inu26GRcHgxm6ZoaTp4vnJIX0csxabEjE6nXT7No/iQFEwefWDC2u9zDR33iE8JiBL/QdUGhtOfICUkWaWMVhHbQwy1/JhuPajet2bj67dnvUlXiTo9kDSgmRpWiIUCJN5wOqA3M1VNKsXEe2jfnCtxL0fSMYuJw4vVOA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nVXbQhzjMljenFFGdNZDlx1QlYnqBP6C3iQWXjZAeDQ=;
 b=D5Ytc/df2DjxEIjW1QryLJ4PGUbnB4Ea3TV4TMBIJJe8DPrtS7/fCrYYQqqU4ssKhfL6DXKyI+ZNIR6LwOebIgPjxKUsjt0VplzRsXrmumld0mbBvENF/awZ15Ok/IV6/9F9mTCXij7K1iD7W+Z0sFanabFIx+4VFlXzVHtIjaNPcKGT03ZluX8jLrza9njERNtaKgX2RI3q74SGBrf4x8krmb/G7SmtLcmpf9Z+p8jh5zsmy03qoXUJFM7u3O4PUmydrU65J49EK9YrYeUvGRQPEtmB28UyPJfcA10fq1voUzC4x1k0u63AqIPz8W0Gv9XhFgSoMRGCHlVRMjeeKw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=wdc.com; dmarc=pass action=none header.from=wdc.com; dkim=pass
 header.d=wdc.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=sharedspace.onmicrosoft.com; s=selector2-sharedspace-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nVXbQhzjMljenFFGdNZDlx1QlYnqBP6C3iQWXjZAeDQ=;
 b=iMEzjUgowwL4XBlFMbTYGFc9RSs3LvDC4WW6RrbatNzS7M+z+PyPrjOSGE6fAOGm592UnzFD8A7h93qDIuXnq9uKDUbExBwysGUd8rs6aYwe7pKW6YjBYZL8YQshg3SnC3pqPIF1BU1dXdY1nP0og2xoM8Xmxfh5ANKbKh/4HrE=
From: Chaitanya Kulkarni <Chaitanya.Kulkarni@wdc.com>
To: Pavel Machek <pavel@ucw.cz>
CC: "linux-xfs@vger.kernel.org" <linux-xfs@vger.kernel.org>,
	"linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>,
	"dm-devel@redhat.com" <dm-devel@redhat.com>, "linux-block@vger.kernel.org"
	<linux-block@vger.kernel.org>, "linux-kernel@vger.kernel.org"
	<linux-kernel@vger.kernel.org>, "drbd-dev@lists.linbit.com"
	<drbd-dev@lists.linbit.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "linux-nvme@lists.infradead.org"
	<linux-nvme@lists.infradead.org>, "linux-scsi@vger.kernel.org"
	<linux-scsi@vger.kernel.org>, "target-devel@vger.kernel.org"
	<target-devel@vger.kernel.org>, "linux-fscrypt@vger.kernel.org"
	<linux-fscrypt@vger.kernel.org>, "jfs-discussion@lists.sourceforge.net"
	<jfs-discussion@lists.sourceforge.net>, "linux-nilfs@vger.kernel.org"
	<linux-nilfs@vger.kernel.org>, "ocfs2-devel@oss.oracle.com"
	<ocfs2-devel@oss.oracle.com>, "linux-pm@vger.kernel.org"
	<linux-pm@vger.kernel.org>, "linux-mm@kvack.org" <linux-mm@kvack.org>,
	"axboe@kernel.dk" <axboe@kernel.dk>, "philipp.reisner@linbit.com"
	<philipp.reisner@linbit.com>, "lars.ellenberg@linbit.com"
	<lars.ellenberg@linbit.com>, "konrad.wilk@oracle.com"
	<konrad.wilk@oracle.com>, "roger.pau@citrix.com" <roger.pau@citrix.com>,
	"minchan@kernel.org" <minchan@kernel.org>, "ngupta@vflare.org"
	<ngupta@vflare.org>, "sergey.senozhatsky.work@gmail.com"
	<sergey.senozhatsky.work@gmail.com>, "agk@redhat.com" <agk@redhat.com>,
	"snitzer@redhat.com" <snitzer@redhat.com>, "hch@lst.de" <hch@lst.de>,
	"sagi@grimberg.me" <sagi@grimberg.me>, "martin.petersen@oracle.com"
	<martin.petersen@oracle.com>, "viro@zeniv.linux.org.uk"
	<viro@zeniv.linux.org.uk>, "tytso@mit.edu" <tytso@mit.edu>,
	"jaegeuk@kernel.org" <jaegeuk@kernel.org>, "ebiggers@kernel.org"
	<ebiggers@kernel.org>, "djwong@kernel.org" <djwong@kernel.org>,
	"shaggy@kernel.org" <shaggy@kernel.org>, "konishi.ryusuke@gmail.com"
	<konishi.ryusuke@gmail.com>, "mark@fasheh.com" <mark@fasheh.com>,
	"jlbec@evilplan.org" <jlbec@evilplan.org>, "joseph.qi@linux.alibaba.com"
	<joseph.qi@linux.alibaba.com>, Damien Le Moal <Damien.LeMoal@wdc.com>,
	Naohiro Aota <Naohiro.Aota@wdc.com>, "jth@kernel.org" <jth@kernel.org>,
	"rjw@rjwysocki.net" <rjw@rjwysocki.net>, "len.brown@intel.com"
	<len.brown@intel.com>, "akpm@linux-foundation.org"
	<akpm@linux-foundation.org>, "hare@suse.de" <hare@suse.de>,
	"gustavoars@kernel.org" <gustavoars@kernel.org>, "tiwai@suse.de"
	<tiwai@suse.de>, "alex.shi@linux.alibaba.com" <alex.shi@linux.alibaba.com>,
	"asml.silence@gmail.com" <asml.silence@gmail.com>, "ming.lei@redhat.com"
	<ming.lei@redhat.com>, "tj@kernel.org" <tj@kernel.org>, "osandov@fb.com"
	<osandov@fb.com>, "bvanassche@acm.org" <bvanassche@acm.org>,
	"jefflexu@linux.alibaba.com" <jefflexu@linux.alibaba.com>
Subject: Re: [RFC PATCH 29/34] power/swap: use bio_new in hib_submit_io
Thread-Topic: [RFC PATCH 29/34] power/swap: use bio_new in hib_submit_io
Thread-Index: AQHW9UVtzcRBxm47UEq6Z+ZGJ3C99A==
Date: Thu, 18 Feb 2021 01:28:03 +0000
Message-ID:
 <BYAPR04MB49652FAADFDCAA3DF72DDEE186859@BYAPR04MB4965.namprd04.prod.outlook.com>
References: <20210128071133.60335-1-chaitanya.kulkarni@wdc.com>
 <20210128071133.60335-30-chaitanya.kulkarni@wdc.com>
 <20210217220257.GA10791@amd>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
authentication-results: ucw.cz; dkim=none (message not signed)
 header.d=none;ucw.cz; dmarc=none action=none header.from=wdc.com;
x-originating-ip: [199.255.45.62]
x-ms-publictraffictype: Email
x-ms-office365-filtering-ht: Tenant
x-ms-office365-filtering-correlation-id: 06721654-7df7-4cf0-a01b-08d8d3ac6fd8
x-ms-traffictypediagnostic: BYAPR04MB4759:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs:
 <BYAPR04MB475955F370F81E8B007520E886859@BYAPR04MB4759.namprd04.prod.outlook.com>
wdcipoutbound: EOP-TRUE
x-ms-oob-tlc-oobclassifiers: OLM:4941;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info:
 RcRwgz6IM2A/u/TWxloQlzmC7jpW7IdpwapwE2srZFbILm7vBO1dKGMJs5xJorby8uToHqvtIxmTesokOK2N67DDeAojN22PAHvdQmSqNaSTDLBzH2qMKpFV1+WzWIVjxYMdCnh4jbF3SPjmDqwN/yIYFDFyk1EwDVJOfFLLMnaVxJAscRAP1ZFSZCNBzbges1ZNat9Mh1a+bsGirLXNwdSp2Afj7WrLg+DqkEDznkumbqbojWv2cikOO/w0aDpnRPnXREJ+NQdYMomG/M/yAnArKOyCvjTCoBmEv3pXZZerNQ1l33hI6zSUNjzJcTlPZ2+8e2cI2VO0+TzZ/hOuxZ06W2eFjAmeVDqVAaTFUCCfPrlmFghyrFtQzTvLF3BAta6ZiKq6LdIOocMNcqDB7SvM+Vy8jGmeQRmHkzsdJBhthIkjCGU/2Ujs3bqZtAIGKXyX+QheehpSxQ7dv8hPsjXXDBRnon9r6T5eMc3v1712HMeHqbj4g63qCPkU2S/O4NB5wD8r4YmNhCQ8vmr6qQ==
x-forefront-antispam-report:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR04MB4965.namprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(376002)(396003)(39860400002)(366004)(346002)(6916009)(33656002)(2906002)(55016002)(5660300002)(7406005)(9686003)(8676002)(4326008)(86362001)(8936002)(478600001)(71200400001)(7366002)(316002)(186003)(7416002)(52536014)(54906003)(76116006)(66556008)(66946007)(64756008)(7696005)(53546011)(6506007)(66446008)(66476007)(26005);DIR:OUT;SFP:1102;
x-ms-exchange-antispam-messagedata:
 =?us-ascii?Q?FIItwhJNoqlHqrXr7ohF0LCXk2DS0EdG1yLZV0Z4VhMmvaysA9SgTmrClqIG?=
 =?us-ascii?Q?H3mrueUZG2rfqbYjvGN1Y1jpOkrI0tSChqG3PmuxYoz9EC8+CqA2NMt8YiG0?=
 =?us-ascii?Q?avib5faAAruYFRaTRBrtOZqj5Kae+D1MvohlzKuTpITQC0MUwI3xYA7RkXrc?=
 =?us-ascii?Q?sgn3HFfoZar+Qd1DoDBCAFo7j8O/MTD5eZZo9pgTs5AtwYk1hYbuFqmFZlpl?=
 =?us-ascii?Q?bSWSn/tV9sXNp9d/FpFzxvANMuy2h0Z4koAntonbxCVQWy4RXgZuCfKx8ihl?=
 =?us-ascii?Q?8ITpSwYLWRPvHfIrYc71WuBsdFzfr/s7nQZKVj5B6Oc/989Pk5sDFQTMhl8s?=
 =?us-ascii?Q?8d+5CSeJfjq/AR24QEulThpVLzLCKx9lznAAO1BPf9880E0vT+ocFYFXhz+T?=
 =?us-ascii?Q?2pCjGrPdpVJOwWMBa+WyzoiugWCSlCBlIQtpNUlB/QyBMx5eN5PUQsDN1btn?=
 =?us-ascii?Q?qlENDWj+45WaUXLdHYx6Ks+YpBEKLM2r1XMI5fdEEGkfwh7f7GqMUFbfOk0K?=
 =?us-ascii?Q?HIIBEvTq3oJXoFzib/yy596Ppl2fJiJGTDpXIdDPU/zN4dtzkkyv0VD085vR?=
 =?us-ascii?Q?0HkezDO7QBfwWpmkoV6OewO/RJOFoq5jNR/QJiiiqsPEzV7Z6T1Q95PJdvTP?=
 =?us-ascii?Q?PZuGAalpLyPcO7ms4hMFMo1U8+6XyfAmlkCrgWrtJieSu89xTLjPjcIurFvC?=
 =?us-ascii?Q?jufErXi6AIdXYhjfC7Qn3j6LJFeUffVyeQJ3yqz7aMaCXyh0N61ZHZ49g2hv?=
 =?us-ascii?Q?2SUpJ4MmLH7bSeIDLDUkJs9k1ufFvCwpfwdb3NBo6YxbGJPxFKXaR8idCCSb?=
 =?us-ascii?Q?QJNDgOlpvtM1yL3zcYDbkemuOABjcaDowGs8058X3FlzN1YAwF/CVtD0ThkW?=
 =?us-ascii?Q?zZNPs/x8dmVxzxocn+ZAXR+OsGnBy+YX/PRHIFYX90Am3Qbmfrzs+aJchYGK?=
 =?us-ascii?Q?zIm2tUM2jO4CTM7LW+UBFu8AJ0Cei4vmpe6L49QadsHoZuf5OahJLq6jE/j1?=
 =?us-ascii?Q?5pn6S4Byv5bclnFanhxBSiVv6JKQ6F5huJ3GY5KxMKvPj/X0TWWcX6uK0vq6?=
 =?us-ascii?Q?tHS/FYwrjHMEiMY+qErRsjEL30TJZlvtn9X9cKL0SUutNIDqZfNys6WxLiUX?=
 =?us-ascii?Q?oDsEk2EkVDQxEL/eb/QRixTUrTZcELUTddej0C3ypQcmIgJHJPhLB/8bzobn?=
 =?us-ascii?Q?RzpfPgCl6hVEWzSynB3Lqqljq5oh5VUFHrvF2iVFdEpMKWDlnBnVzKP1MoVY?=
 =?us-ascii?Q?+9+yJJYgYEJjcZPzXmUL4J2skzLncVRkrpGKxdRSLWhJrGgqm7HHSe67r4PW?=
 =?us-ascii?Q?BAQaAMKewt7oKQDpMEem1Qmz?=
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: wdc.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR04MB4965.namprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 06721654-7df7-4cf0-a01b-08d8d3ac6fd8
X-MS-Exchange-CrossTenant-originalarrivaltime: 18 Feb 2021 01:28:03.6019
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b61c8803-16f3-4c35-9b17-6f65f441df86
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: iwT3NHWXg8QVXADRxHceCUGDm5QEez0FiJmRfbimNLUHwFL3p95byR8mLhVMJL3VGD62zPKA+aTh7kf1DD7ORrPoFj0oolcWym1ymC0x1KQ=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR04MB4759

On 2/17/21 14:03, Pavel Machek wrote:
> Hi!
>> diff --git a/kernel/power/swap.c b/kernel/power/swap.c
>> index c73f2e295167..e92e36c053a6 100644
>> --- a/kernel/power/swap.c
>> +++ b/kernel/power/swap.c
>> @@ -271,13 +271,12 @@ static int hib_submit_io(int op, int op_flags, pgoff_t page_off, void *addr,
>>  		struct hib_bio_batch *hb)
>>  {
>>  	struct page *page = virt_to_page(addr);
>> +	sector_t sect = page_off * (PAGE_SIZE >> 9);
>>  	struct bio *bio;
>>  	int error = 0;
>>  
>> -	bio = bio_alloc(GFP_NOIO | __GFP_HIGH, 1);
>> -	bio->bi_iter.bi_sector = page_off * (PAGE_SIZE >> 9);
>> -	bio_set_dev(bio, hib_resume_bdev);
>> -	bio_set_op_attrs(bio, op, op_flags);
>> +	bio = bio_new(hib_resume_bdev, sect, op, op_flags, 1,
>> +		      GFP_NOIO | __GFP_HIGH);
>>  
> C function with 6 arguments... dunno. Old version looks comparable or
> even more readable...
>
> Best regards,
> 							Pavel
Library functions already in the kernel tree, used across different
file systems and fabrics drivers, do take 6 arguments.

Plus, what is the point of duplicating code for mandatory
parameters all over the kernel?



From xen-devel-bounces@lists.xenproject.org Thu Feb 18 03:37:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 Feb 2021 03:37:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86469.162379 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCa8C-0001Yr-Te; Thu, 18 Feb 2021 03:37:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86469.162379; Thu, 18 Feb 2021 03:37:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCa8C-0001Yk-QU; Thu, 18 Feb 2021 03:37:32 +0000
Received: by outflank-mailman (input) for mailman id 86469;
 Thu, 18 Feb 2021 03:37:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCa8C-0001Yc-1q; Thu, 18 Feb 2021 03:37:32 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCa8B-0003tc-Tn; Thu, 18 Feb 2021 03:37:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCa8B-0006ns-J8; Thu, 18 Feb 2021 03:37:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lCa8B-0008ER-IZ; Thu, 18 Feb 2021 03:37:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=COMwe52hsENWBq5RCppFpzXojcxLGEAEvKZo9tvvNgM=; b=bE5fB/Bq/oK/sOeN5Y9ZpqJC97
	R1HyyMx2xhIaSjLkdhf93HEu+HGnKoUb2DMVvVYh6qrh8IN1JfTICcIseD21wymf/PN1LGDwltJIE
	9GeeUHxj7bd8FsXxq02D/WUHkOc3LXPT5+3F9dt91Pr8ziolulIwZOE4bmZuM7G2q30c=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159424-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 159424: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=3b1cc15f1931ba56d0ee256fe9bfe65509733b27
X-Osstest-Versions-That:
    xen=04085ec1ac05a362812e9b0c6b5a8713d7dc88ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 18 Feb 2021 03:37:31 +0000

flight 159424 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159424/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 159335
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail like 159362
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 159396
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 159396
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 159396
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 159396
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 159396
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 159396
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 159396
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 159396
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 159396
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 159396
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 159396
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  3b1cc15f1931ba56d0ee256fe9bfe65509733b27
baseline version:
 xen                  04085ec1ac05a362812e9b0c6b5a8713d7dc88ad

Last test of basis   159396  2021-02-16 01:52:34 Z    2 days
Testing same since   159424  2021-02-16 17:37:44 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   04085ec1ac..3b1cc15f19  3b1cc15f1931ba56d0ee256fe9bfe65509733b27 -> master


From xen-devel-bounces@lists.xenproject.org Thu Feb 18 04:45:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 Feb 2021 04:45:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86476.162394 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCbBo-000063-39; Thu, 18 Feb 2021 04:45:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86476.162394; Thu, 18 Feb 2021 04:45:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCbBn-00005v-VJ; Thu, 18 Feb 2021 04:45:19 +0000
Received: by outflank-mailman (input) for mailman id 86476;
 Thu, 18 Feb 2021 04:45:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCbBm-00005n-8u; Thu, 18 Feb 2021 04:45:18 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCbBl-00051E-QI; Thu, 18 Feb 2021 04:45:17 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCbBk-0000o4-Kz; Thu, 18 Feb 2021 04:45:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lCbBk-0002JF-KS; Thu, 18 Feb 2021 04:45:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=cRDxmOg+UNOags7CNyBGLtuDcV6sjKGmTTth7oP1VqU=; b=tnLU2uUNq/th6a6nERVStLpUKK
	fkZ29ExShm0QZ9glSn0SN/M/mZFSnnx/Kz5xiF+VfYGjgA97ZurDfzg/GIH+KW4OV2/oDyX/b57pp
	L+Fkz1Aus7wXuV3yTn/BL8hMHqSwoiABoYauqC607njxYcQBaSQL5sasEK169ZX/FiVs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159437-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 159437: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=040a5bc307eb8cd304a4330b7538370315db3774
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 18 Feb 2021 04:45:16 +0000

flight 159437 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159437/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              040a5bc307eb8cd304a4330b7538370315db3774
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  223 days
Failing since        151818  2020-07-11 04:18:52 Z  222 days  214 attempts
Testing same since   159437  2021-02-17 04:18:55 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 42887 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Feb 18 05:08:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 Feb 2021 05:08:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86430.162409 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCbYY-0002Os-V9; Thu, 18 Feb 2021 05:08:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86430.162409; Thu, 18 Feb 2021 05:08:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCbYY-0002Ol-Ri; Thu, 18 Feb 2021 05:08:50 +0000
Received: by outflank-mailman (input) for mailman id 86430;
 Wed, 17 Feb 2021 20:47:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0xF/=HT=redhat.com=crosa@srs-us1.protection.inumbo.net>)
 id 1lCTj2-00022J-Lb
 for xen-devel@lists.xenproject.org; Wed, 17 Feb 2021 20:47:08 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id c3272381-f90d-4553-b4c5-433c72b31c0a;
 Wed, 17 Feb 2021 20:47:07 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-430-fQu7eeJtN0CedJ2OgkwHOQ-1; Wed, 17 Feb 2021 15:47:02 -0500
Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.phx2.redhat.com
 [10.5.11.13])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id CC95C801965;
 Wed, 17 Feb 2021 20:47:00 +0000 (UTC)
Received: from localhost.localdomain (ovpn-114-28.rdu2.redhat.com
 [10.10.114.28])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 6C8796060F;
 Wed, 17 Feb 2021 20:46:56 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c3272381-f90d-4553-b4c5-433c72b31c0a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1613594826;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=pnqM8mef723a/lVpNQvs8vnov3IYns0l6DZWn5GscnM=;
	b=BJNjfxznCHtuA4XzkyezbA0wbIvqUglLNYhzCUInyJLcroqtRbzEahWuoscszQ4JGKqlbz
	Du114Vi0ERrXp3CIO+NQeSA2M27VzDPcrqmC1ZUu/vKxyt1mfNvwgEX8tQJZ8xWuLOfedP
	0V2hjIXe0CqtTRZh7hzVWzOyiPZ2xG4=
X-MC-Unique: fQu7eeJtN0CedJ2OgkwHOQ-1
Date: Wed, 17 Feb 2021 15:46:54 -0500
From: Cleber Rosa <crosa@redhat.com>
To: Alex =?iso-8859-1?Q?Benn=E9e?= <alex.bennee@linaro.org>
Cc: qemu-devel@nongnu.org, julien@xen.org, andre.przywara@arm.com,
	stefano.stabellini@linaro.org,
	Wainer dos Santos Moschetta <wainersm@redhat.com>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@redhat.com>,
	xen-devel@lists.xenproject.org, stefano.stabellini@xilinx.com,
	stratos-dev@op-lists.linaro.org
Subject: Re: [PATCH  v2 7/7] tests/avocado: add boot_xen tests
Message-ID: <20210217204654.GA353754@localhost.localdomain>
References: <20210211171945.18313-1-alex.bennee@linaro.org>
 <20210211171945.18313-8-alex.bennee@linaro.org>
MIME-Version: 1.0
In-Reply-To: <20210211171945.18313-8-alex.bennee@linaro.org>
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.13
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=crosa@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="ikeVEW9yuYc//A+q"
Content-Disposition: inline

--ikeVEW9yuYc//A+q
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Thu, Feb 11, 2021 at 05:19:45PM +0000, Alex Bennée wrote:
> These tests make sure we can boot the Xen hypervisor with a Dom0
> kernel using the guest-loader. We currently have to use a kernel I
> built myself because there are issues using the Debian kernel images.
>
> Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
> ---
>  MAINTAINERS                  |   1 +
>  tests/acceptance/boot_xen.py | 117 +++++++++++++++++++++++++++++++++++
>  2 files changed, 118 insertions(+)
>  create mode 100644 tests/acceptance/boot_xen.py
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 853f174fcf..537ca7a6f3 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -1998,6 +1998,7 @@ M: Alex Bennée <alex.bennee@linaro.org>
>  S: Maintained
>  F: hw/core/guest-loader.c
>  F: docs/system/guest-loader.rst
> +F: tests/acceptance/boot_xen.py
>
>  Intel Hexadecimal Object File Loader
>  M: Su Hang <suhang16@mails.ucas.ac.cn>
> diff --git a/tests/acceptance/boot_xen.py b/tests/acceptance/boot_xen.py
> new file mode 100644
> index 0000000000..8c7e091d40
> --- /dev/null
> +++ b/tests/acceptance/boot_xen.py
> @@ -0,0 +1,117 @@
> +# Functional test that boots a Xen hypervisor with a domU kernel and
> +# checks the console output is vaguely sane .
> +#
> +# Copyright (c) 2020 Linaro
> +#
> +# Author:
> +#  Alex Bennée <alex.bennee@linaro.org>
> +#
> +# SPDX-License-Identifier: GPL-2.0-or-later
> +#
> +# This work is licensed under the terms of the GNU GPL, version 2 or
> +# later.  See the COPYING file in the top-level directory.
> +
> +import os
> +
> +from avocado import skipIf
> +from avocado_qemu import wait_for_console_pattern
> +from boot_linux_console import LinuxKernelTest
> +
> +
> +class BootXenBase(LinuxKernelTest):
> +    """
> +    Boots a Xen hypervisor with a Linux DomU kernel.
> +    """
> +
> +    timeout = 90
> +    XEN_COMMON_COMMAND_LINE = 'dom0_mem=128M loglvl=all guest_loglvl=all'
> +
> +    def fetch_guest_kernel(self):
> +        # Using my own built kernel - which works
> +        kernel_url = ('https://fileserver.linaro.org/'
> +                      's/JSsewXGZ6mqxPr5/download?path=%2F&files='
> +                      'linux-5.9.9-arm64-ajb')
> +        kernel_sha1 = '4f92bc4b9f88d5ab792fa7a43a68555d344e1b83'
> +        kernel_path = self.fetch_asset(kernel_url,
> +                                       asset_hash=kernel_sha1)
> +
> +        return kernel_path
> +
> +    def launch_xen(self, xen_path):
> +        """
> +        Launch Xen with a dom0 guest kernel
> +        """
> +        self.log.info("launch with xen_path: %s", xen_path)
> +        kernel_path = self.fetch_guest_kernel()
> +
> +        self.vm.set_console()
> +
> +        xen_command_line = self.XEN_COMMON_COMMAND_LINE
> +        self.vm.add_args('-machine', 'virtualization=on',
> +                         '-cpu', 'cortex-a57',
> +                         '-m', '768',
> +                         '-kernel', xen_path,
> +                         '-append', xen_command_line,
> +                         '-device',
> +                         "guest-loader,addr=0x47000000,kernel=%s,bootargs=console=hvc0"

Nitpick/OCD: single quotes to match all other args?
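
The string contains no quote characters, so switching to single quotes is a drop-in change. A minimal standalone sketch of the resulting argument (the kernel path here is a hypothetical stand-in for the fetched asset):

```python
# Build the -device argument with single quotes, matching the other args.
# kernel_path is a hypothetical placeholder; in the real test it comes
# from self.fetch_guest_kernel().
kernel_path = '/tmp/linux-5.9.9-arm64-ajb'
device_arg = ('guest-loader,addr=0x47000000,kernel=%s,bootargs=console=hvc0'
              % kernel_path)
print(device_arg)
```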

> +                         % (kernel_path))
> +
> +        self.vm.launch()
> +
> +        console_pattern = 'VFS: Cannot open root device'
> +        wait_for_console_pattern(self, console_pattern, "Panic on CPU 0:")
> +
> +
> +class BootXen(BootXenBase):
> +
> +    @skipIf(os.getenv('GITLAB_CI'), 'Running on GitLab')
> +    def test_arm64_xen_411_and_dom0(self):
> +        """
> +        :avocado: tags=arch:aarch64
> +        :avocado: tags=accel:tcg
> +        :avocado: tags=cpu:cortex-a57
> +        :avocado: tags=machine:virt
> +        """
> +        xen_url = ('https://deb.debian.org/debian/'
> +                   'pool/main/x/xen/'
> +                   'xen-hypervisor-4.11-arm64_4.11.4+37-g3263f257ca-1_arm64.deb')
> +        xen_sha1 = '034e634d4416adbad1212d59b62bccdcda63e62a'

This URL is already giving 404s because of a new package.  I found
this to work (but yeah, it probably won't last long):

        xen_url = ('http://deb.debian.org/debian/'
                   'pool/main/x/xen/'
                   'xen-hypervisor-4.11-arm64_4.11.4+57-g41a822c392-2_arm64.deb')
        xen_sha1 = 'b5a6810fc67fd50fa36afdfdfe88ce3153dd3a55'
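
A more durable alternative: every Debian upload is preserved on snapshot.debian.org under a timestamped path, so a test asset can be pinned there instead of in the rotating pool/ directory. A hedged sketch of building such a URL (the timestamp is hypothetical and would need to be looked up in the package's history on snapshot.debian.org):

```python
# Sketch: snapshot.debian.org keeps every upload, so a URL built this way
# does not rot when a newer package version replaces the one in the pool.
def snapshot_url(timestamp, pool_path):
    """Return a snapshot.debian.org URL for an archive timestamp
    (e.g. '20210101T000000Z') and a pool-relative path."""
    return 'https://snapshot.debian.org/archive/debian/%s/%s' % (
        timestamp, pool_path)

# Hypothetical timestamp; the package name is the one under review.
url = snapshot_url(
    '20210101T000000Z',
    'pool/main/x/xen/xen-hypervisor-4.11-arm64_4.11.4+37-g3263f257ca-1_arm64.deb')
print(url)
```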

> +        xen_deb = self.fetch_asset(xen_url, asset_hash=xen_sha1)
> +        xen_path = self.extract_from_deb(xen_deb, "/boot/xen-4.11-arm64")
> +
> +        self.launch_xen(xen_path)
> +
> +    @skipIf(os.getenv('GITLAB_CI'), 'Running on GitLab')
> +    def test_arm64_xen_414_and_dom0(self):
> +        """
> +        :avocado: tags=arch:aarch64
> +        :avocado: tags=accel:tcg
> +        :avocado: tags=cpu:cortex-a57
> +        :avocado: tags=machine:virt
> +        """
> +        xen_url = ('https://deb.debian.org/debian/'
> +                   'pool/main/x/xen/'
> +                   'xen-hypervisor-4.14-arm64_4.14.0+80-gd101b417b7-1_arm64.deb')
> +        xen_sha1 = 'b9d209dd689ed2b393e625303a225badefec1160'

Likewise here:

        xen_url = ('https://deb.debian.org/debian/'
                   'pool/main/x/xen/'
                   'xen-hypervisor-4.14-arm64_4.14.0+88-g1d1d1f5391-2_arm64.deb')
        xen_sha1 = 'f316049beaadd50482644e4955c4cdd63e3a07d5'

> +        xen_deb = self.fetch_asset(xen_url, asset_hash=xen_sha1)
> +        xen_path = self.extract_from_deb(xen_deb, "/boot/xen-4.14-arm64")
> +
> +        self.launch_xen(xen_path)
> +
> +    @skipIf(os.getenv('GITLAB_CI'), 'Running on GitLab')
> +    def test_arm64_xen_415_and_dom0(self):
> +        """
> +        :avocado: tags=arch:aarch64
> +        :avocado: tags=accel:tcg
> +        :avocado: tags=cpu:cortex-a57
> +        :avocado: tags=machine:virt
> +        """
> +
> +        xen_url = ('https://fileserver.linaro.org/'
> +                   's/JSsewXGZ6mqxPr5/download'
> +                   '?path=%2F&files=xen-upstream-4.15-unstable.deb')
> +        xen_sha1 = 'fc191172b85cf355abb95d275a24cc0f6d6579d8'
> +        xen_deb = self.fetch_asset(xen_url, asset_hash=xen_sha1)
> +        xen_path = self.extract_from_deb(xen_deb, "/boot/xen-4.15-unstable")
> +
> +        self.launch_xen(xen_path)
> --
> 2.20.1
>
>

With those changes,

Reviewed-by: Cleber Rosa <crosa@redhat.com>
Tested-by: Cleber Rosa <crosa@redhat.com>

--ikeVEW9yuYc//A+q
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEeruW64tGuU1eD+m7ZX6NM6XyCfMFAmAtgLMACgkQZX6NM6Xy
CfOLJg/+LdIhvsNQD1wFhsh5OXvvTm8SLwTvFcM7YOiCeOLsOC6LGpTQyedugWAb
OLw5aib6qidCrOK+pjV+tISOw0hTJ0H28GVAyT/Y9AGsCNtu2xxAOw4jt1jJRrl3
JIYwoPfl4EWCnHGTobb1ZvHPfK6sumasHrZkZNujaKFKi3DZfIY7rtl+nHLdU5B8
OUd5lNcG0N1BBcLJJ8bZKhG8LdBfZrSIdwRekiyHjK/8udQ+gC9q64xVxerjjaT6
q76GFkUJT5rci+D5BOIaSDqlaDOwm+wKoFSYSa1syBx4ZvGcWHnxFBnDmWkwERTh
FJ8T/0EYGiNIlZvyFW4w0aPOGOyCvcOkc+kdKEP6q5JkJ7+44Ja76HA4fAodFgGe
uvDPfLOO930y7Co9Haudbd05NjAhWQSJG3hVZ2wkZ+QAnVzpvsAW5jKrhASMQvKH
N/mSMnR2a8nGVCvOvVDfjK6Kx/hPYvl3cxjqIqOs/JFXaIDUG+e/uGijz841L54T
KYVM/7tHLdzeasdW/kJLEApFlUJVdBJnl9CQd8vApEeW6GEuU3WqvxWoaJK51sCr
QiVgKxR2Byv1RadtLg8uj0NhpfJC3EEbnLqHeZHnhiZ7X62gvtg3am8osBIMFH2V
EnCZ/0LU2ut5yPAA/rALG+Kk/yMqvgBfNCKU4pGzWKXjz6X0u0w=
=sLVe
-----END PGP SIGNATURE-----

--ikeVEW9yuYc//A+q--



From xen-devel-bounces@lists.xenproject.org Thu Feb 18 05:21:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 Feb 2021 05:21:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86484.162421 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCbkt-0004FC-38; Thu, 18 Feb 2021 05:21:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86484.162421; Thu, 18 Feb 2021 05:21:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCbks-0004F5-Vy; Thu, 18 Feb 2021 05:21:34 +0000
Received: by outflank-mailman (input) for mailman id 86484;
 Thu, 18 Feb 2021 05:21:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0cV5=HU=zededa.com=roman@srs-us1.protection.inumbo.net>)
 id 1lCbkr-0004Ey-Ki
 for xen-devel@lists.xenproject.org; Thu, 18 Feb 2021 05:21:34 +0000
Received: from mail-qv1-xf33.google.com (unknown [2607:f8b0:4864:20::f33])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f57f9047-4b19-4c3d-8dcf-675fab17a18f;
 Thu, 18 Feb 2021 05:21:32 +0000 (UTC)
Received: by mail-qv1-xf33.google.com with SMTP id a1so401858qvd.13
 for <xen-devel@lists.xenproject.org>; Wed, 17 Feb 2021 21:21:32 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f57f9047-4b19-4c3d-8dcf-675fab17a18f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=zededa.com; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=1aGb/J92Mb3UQqgHwhGKXT1nP4FiVjOT1qjd79sU3k4=;
        b=Pp1+Mt82q6+hCnorlaQaoh/Us1KLXpzFpd5sfB1DAqqSkNuDbNe5YM7A4iyVZm0dn/
         nYFnxG1J7kiwI4wxJ/+KPSol6Nytn9gLNdpB/9h9qhk8X4yfMSaFLpSuiz/bGzmXNpMN
         K2eYP999AMzmFDkYOTMlSnlumLXtGM0CygPDQfoKcMVzqiSEmz93OptpDBuBgVxp3cMx
         NiNp06Np5JdL6Od/AqDQrIH+eJ+7x7k3+i9jmcFnpLtjQ2n6rAUd9Y6IY+aXkV2tcy/w
         qKkHcVoDHGUfdncujo1IlwBRsTJGdopsY4tS1VOgtKycMt/mKp3VkQmuxEU/cOoA1eCz
         5m4g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=1aGb/J92Mb3UQqgHwhGKXT1nP4FiVjOT1qjd79sU3k4=;
        b=shEb7toKHTZwVuZc5apAlgcT0jWFSjP5EDqvSaqXtJkqPU7zjcGcUaKc/pzJlcs2d0
         Cn+m5kUUKTkQ75nYaUPYOYzmua5onMirTy0sFtI/R50afOIlEuQNpBLcUhvwAoy9rGy8
         RKp/35oSIR/z9ZGQF1+FlmY/IkxK+25LyDbyqWQMZT86lmpu7UMsJmpno+WGumiawA5b
         eF71o9SOeOcoNNKh8UYC8i22bHBKfZWHoB9OF9km5XgxCOw3Cwxp/Ltc5GqmbYKQwVVt
         B+Deu2x5cg2VFmNiav3CT+BEe36kpav+rdb0hli3SbHv638s/nU5Q5VYok3soclxsgqD
         0h3A==
X-Gm-Message-State: AOAM531xN+4hv3XKgbA99VmKpum2LRew1wkWssJNfsTt/gxKuitjIr0k
	bqzNDFQx4/XYED56dICOjsrfnQgZ3Lg2SNJ1W5MpUA==
X-Google-Smtp-Source: ABdhPJw8B0Ch3knZDj5DCyDYJolQTNPl2596yRXZJe5rbaeDsacvV/YoEug1L9bakztRYWQmSqyjLCmo0CTSCxRZR0c=
X-Received: by 2002:a0c:fa90:: with SMTP id o16mr2677614qvn.55.1613625692210;
 Wed, 17 Feb 2021 21:21:32 -0800 (PST)
MIME-Version: 1.0
References: <CAMmSBy-_UOK6DTrwGNOw8Y59Muv8H8wxmsc4-BXcv3N_u5USBA@mail.gmail.com>
 <alpine.DEB.2.21.2102161232310.3234@sstabellini-ThinkPad-T480s>
 <45b8ef4c-6d36-e91b-ca1a-a82eeca5aaf5@suse.com> <CAMmSBy8k0Y50Xkq9Kq+oES27gsoG==T++Hz9SiR0gDgAKnpvRA@mail.gmail.com>
 <49344e8d-5518-68c6-a417-68522a915e72@suse.com>
In-Reply-To: <49344e8d-5518-68c6-a417-68522a915e72@suse.com>
From: Roman Shaposhnik <roman@zededa.com>
Date: Wed, 17 Feb 2021 21:21:21 -0800
Message-ID: <CAMmSBy-3y+Y3nhyf1uGN6KB_wNLVAqYRfc0hpkdKHtvdGSM5wg@mail.gmail.com>
Subject: Re: Linux DomU freezes and dies under heavy memory shuffling
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Xen-devel <xen-devel@lists.xenproject.org>, 
	Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>, 
	George Dunlap <george.dunlap@citrix.com>
Content-Type: multipart/alternative; boundary="0000000000002f50b305bb95827a"

--0000000000002f50b305bb95827a
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, Feb 17, 2021 at 12:29 AM Jürgen Groß <jgross@suse.com> wrote:

> On 17.02.21 09:12, Roman Shaposhnik wrote:
> > Hi Jürgen, thanks for taking a look at this. A few comments below:
> >
> > On Tue, Feb 16, 2021 at 10:47 PM Jürgen Groß <jgross@suse.com> wrote:
> >>
> >> On 16.02.21 21:34, Stefano Stabellini wrote:
> >>> + x86 maintainers
> >>>
> >>> It looks like the tlbflush is getting stuck?
> >>
> >> I have seen this case multiple times on customer systems now, but
> >> reproducing it reliably seems to be very hard.
> >
> > It is reliably reproducible under my workload but it takes a long time
> > (~3 days of the workload running in the lab).
>
> This is by far the best reproduction rate I have seen up to now.
>
> The next best reproducer seems to be a huge installation with several
> hundred hosts and thousands of VMs with about 1 crash each week.
>
> >
> >> I suspected fifo events to be blamed, but just yesterday I've been
> >> informed of another case with fifo events disabled in the guest.
> >>
> >> One common pattern seems to be that up to now I have seen this effect
> >> only on systems with Intel Gold cpus. Can it be confirmed to be true
> >> in this case, too?
> >
> > I am pretty sure mine isn't -- I can get you full CPU specs if that's
> useful.
>
> Just the output of "grep model /proc/cpuinfo" should be enough.
>

processor : 3
vendor_id : GenuineIntel
cpu family : 6
model : 77
model name : Intel(R) Atom(TM) CPU  C2550  @ 2.40GHz
stepping : 8
microcode : 0x12d
cpu MHz : 1200.070
cache size : 1024 KB
physical id : 0
siblings : 4
core id : 3
cpu cores : 4
apicid : 6
initial apicid : 6
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm
constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc
cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16
xtpr pdcm sse4_1 sse4_2 movbe popcnt tsc_deadline_timer aes rdrand lahf_lm
3dnowprefetch cpuid_fault epb pti ibrs ibpb stibp tpr_shadow vnmi
flexpriority ept vpid tsc_adjust smep erms dtherm ida arat md_clear
vmx flags : vnmi preemption_timer invvpid ept_x_only flexpriority
tsc_offset vtpr mtf vapic ept vpid unrestricted_guest
bugs : cpu_meltdown spectre_v1 spectre_v2 mds msbds_only
bogomips : 4800.19
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:


> >
> >> In case anybody has a reproducer (either in a guest or dom0) with a
> >> setup where a diagnostic kernel can be used, I'd be _very_ interested!
> >
> > I can easily add things to Dom0 and DomU. Whether that will disrupt the
> > experiment is, of course, another matter. Still please let me know what
> > would be helpful to do.
>
> Is there a chance to switch to an upstream kernel in the guest? I'd like
> to add some diagnostic code to the kernel and creating the patches will
> be easier this way.
>

That's a bit tough -- the VM is based on stock Ubuntu and if I upgrade the
kernel I'll have to fiddle with a lot of things to make the workload
functional again.

However, I can install a debug kernel (from Ubuntu, etc.)

Of course, if patching the kernel is the only way to make progress -- let's
try that -- please let me know.

Thanks,
Roman.



From xen-devel-bounces@lists.xenproject.org Thu Feb 18 08:15:19 2021
Date: Thu, 18 Feb 2021 08:14:08 +0000
From: Christoph Hellwig <hch@infradead.org>
To: Nadav Amit <nadav.amit@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>, linux-kernel@vger.kernel.org,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Nadav Amit <namit@vmware.com>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Stephen Hemminger <sthemmin@microsoft.com>,
	Sasha Levin <sashal@kernel.org>, Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>, x86@kernel.org,
	Juergen Gross <jgross@suse.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	linux-hyperv@vger.kernel.org,
	virtualization@lists.linux-foundation.org, kvm@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	Michael Kelley <mikelley@microsoft.com>
Subject: Re: [PATCH v5 4/8] x86/mm/tlb: Flush remote and local TLBs
 concurrently
Message-ID: <20210218081408.GB335524@infradead.org>
References: <20210209221653.614098-1-namit@vmware.com>
 <20210209221653.614098-5-namit@vmware.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210209221653.614098-5-namit@vmware.com>

Given that the last patch killed the last previously existing
user of on_each_cpu_cond_mask, these are now the only users.

>  	if (info->freed_tables) {
> -		smp_call_function_many(cpumask, flush_tlb_func,
> -			       (void *)info, 1);
> +		on_each_cpu_cond_mask(NULL, flush_tlb_func, (void *)info, true,
> +				      cpumask);

.. 

> +		on_each_cpu_cond_mask(NULL, flush_tlb_func, (void *)info, true,
> +				      cpumask);

Which means the cond_func is unused, and thus on_each_cpu_cond_mask can
go away entirely in favor of on_each_cpu_cond.


From xen-devel-bounces@lists.xenproject.org Thu Feb 18 08:57:55 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159431-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 159431: regressions - FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 18 Feb 2021 08:57:45 +0000

flight 159431 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159431/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-credit1  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-credit2  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl          14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-seattle  14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-xsm      14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-xl-thunderx 14 guest-start              fail REGR. vs. 158387
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-credit1  14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-arndale  14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-multivcpu 14 guest-start             fail REGR. vs. 158387
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-cubietruck 14 guest-start            fail REGR. vs. 158387
 test-armhf-armhf-xl          14 guest-start              fail REGR. vs. 158387
 test-armhf-armhf-xl-credit2  14 guest-start              fail REGR. vs. 158387

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-examine    4 memdisk-try-append fail in 159399 pass in 159431
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 18 guest-start/debianhvm.repeat fail pass in 159399

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     14 guest-start              fail REGR. vs. 158387

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 158387
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 158387
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 158387
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 158387
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 158387
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 158387
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 158387
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 158387
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 158387
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 158387
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                5b9a4104c902d7dec14c9e3c5652a638194487c6
baseline version:
 linux                a829146c3fdcf6d0b76d9c54556a223820f1f73b

Last test of basis   158387  2021-01-12 19:40:06 Z   36 days
Failing since        158473  2021-01-17 13:42:20 Z   31 days   43 attempts
Testing same since   159339  2021-02-14 04:42:28 Z    4 days    5 attempts

------------------------------------------------------------
467 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 14254 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Feb 18 09:00:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 Feb 2021 09:00:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86508.162472 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCfB7-0000jV-0E; Thu, 18 Feb 2021 09:00:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86508.162472; Thu, 18 Feb 2021 09:00:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCfB6-0000jO-Sg; Thu, 18 Feb 2021 09:00:52 +0000
Received: by outflank-mailman (input) for mailman id 86508;
 Thu, 18 Feb 2021 09:00:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gXm/=HU=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lCfB5-0000jI-T5
 for xen-devel@lists.xenproject.org; Thu, 18 Feb 2021 09:00:52 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3d0f4040-f5d8-472e-b868-d48073333d2c;
 Thu, 18 Feb 2021 09:00:50 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3d0f4040-f5d8-472e-b868-d48073333d2c
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613638850;
  h=date:from:to:cc:subject:message-id:mime-version;
  bh=MmZPfkgR5G4XryRQNLK3h4HyNlDR4RLco8Ob7NtogyI=;
  b=Bjk2vVvr4rxoIRXWxxlz2EyGfgFKwaTCjYw1YmB+3edsSZ5TVqxaGL8f
   3TEBsQH1vUL0qP2g0Gkutw8C8qNYCHFhTPEqWB3vBamkttkFXtFiwz+Ve
   fCZAmGH8KP1CG6ry2SjPazZfTogepk25FNW98urNMClesc9UPlowOaBC9
   Y=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 38868723
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,186,1610427600"; 
   d="scan'208";a="38868723"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=MmZPfkgR5G4XryRQNLK3h4HyNlDR4RLco8Ob7NtogyI=;
 b=NZsfKAH+AbpYaf+saFTkDaGm/OvAxgabnXVo8nptU8mP9z88aaeXQImbLkQgvQYLUZL4O1QviJxdnflrlbqbxIR1ELisrkb58W9J7z2o2mlMsPduRQnQVy8QcTBJQJY90xxCm1PK73n5EmxC7ivKxfWvbW5Wr2CSmo+d+TKD3bc=
Date: Thu, 18 Feb 2021 10:00:40 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>
CC: <xen-devel@lists.xenproject.org>
Subject: HEADS UP: FreeBSD/Xen dom0 UEFI support
Message-ID: <YC4suFy4awWuaSz1@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
X-ClientProxiedBy: PR1P264CA0022.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:102:19f::9) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 6c13ceea-669a-4be7-41fc-08d8d3ebad2b
X-MS-TrafficTypeDiagnostic: DM6PR03MB4217:
X-Microsoft-Antispam-PRVS: <DM6PR03MB42175BFFC363FD864F7204C98F859@DM6PR03MB4217.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 6c13ceea-669a-4be7-41fc-08d8d3ebad2b
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Feb 2021 09:00:45.0029
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Ae8E3e9zBfc1kcxTG3trOHTbs80TxGvBjBFfcM575JoF5Ya82WnuynRgasFyEbnphJZgHtLeqTx6W9wwVOmVww==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4217
X-OriginatorOrg: citrix.com

Hello,

Since commit 97527e9c4fd37140 on the main branch, FreeBSD should be able
to boot and work as a Xen dom0 from UEFI.

Booting from UEFI also requires xen-kernel 4.14.1_1; previous versions
of xen-kernel won't boot correctly under UEFI.

The way to set up the system is exactly the same as when using BIOS:
the xen_kernel and xen_cmdline options should be set in loader.conf, see:

https://docs.freebsd.org/en_US.ISO8859-1/books/handbook/virtualization-host-xen.html
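
For reference, a minimal /boot/loader.conf sketch along the lines of the
handbook chapter above (the memory, vCPU, and console values here are
illustrative, not recommendations — adjust them to the host):

```
xen_kernel="/boot/xen"
xen_cmdline="dom0_mem=8192M dom0_max_vcpus=4 dom0=pvh console=vga,com1 com1=115200,8n1"
```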

I don't have plans to backport this for 13.0, but will consider
backporting to stable/13 at a later point if there are no issues, so the
functionality could be used in 13.1.

This has been tested on a single Intel box with UEFI; some wider testing
would be appreciated.

Thanks to tsoome, imp and kib for the reviews.

Roger.


From xen-devel-bounces@lists.xenproject.org Thu Feb 18 09:34:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 Feb 2021 09:34:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86511.162484 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCfhL-0003eK-QD; Thu, 18 Feb 2021 09:34:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86511.162484; Thu, 18 Feb 2021 09:34:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCfhL-0003eD-NF; Thu, 18 Feb 2021 09:34:11 +0000
Received: by outflank-mailman (input) for mailman id 86511;
 Thu, 18 Feb 2021 09:34:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UbXw=HU=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lCfhK-0003e8-K8
 for xen-devel@lists.xenproject.org; Thu, 18 Feb 2021 09:34:10 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a2c6039f-a2b0-4678-bdf3-3ba294f18306;
 Thu, 18 Feb 2021 09:34:08 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EDE3FACE5;
 Thu, 18 Feb 2021 09:34:07 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a2c6039f-a2b0-4678-bdf3-3ba294f18306
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613640848; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=v0jfYtZwVgb7PTear6gYtmpMP/Ium18ZC+jrWab3JrI=;
	b=n6KHcmY2eophbPa/suL1PvzEhsWTgcW222t1pJL4IaA+pS7fDQMCTTRkSrXvSMz7MXD5VP
	l2+NcwUgKKr0qQWKd2DogRof6zeoA1/Vy+XgGOEc0iVc64jyUGcFpmkDo9q1aarLgXgeiW
	TzIFTQAfnrlPuMoW9iEKMQ51N0tGX0Y=
Subject: Re: Linux DomU freezes and dies under heavy memory shuffling
To: Roman Shaposhnik <roman@zededa.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich <jbeulich@suse.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>
References: <CAMmSBy-_UOK6DTrwGNOw8Y59Muv8H8wxmsc4-BXcv3N_u5USBA@mail.gmail.com>
 <alpine.DEB.2.21.2102161232310.3234@sstabellini-ThinkPad-T480s>
 <45b8ef4c-6d36-e91b-ca1a-a82eeca5aaf5@suse.com>
 <CAMmSBy8k0Y50Xkq9Kq+oES27gsoG==T++Hz9SiR0gDgAKnpvRA@mail.gmail.com>
 <49344e8d-5518-68c6-a417-68522a915e72@suse.com>
 <CAMmSBy-3y+Y3nhyf1uGN6KB_wNLVAqYRfc0hpkdKHtvdGSM5wg@mail.gmail.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <f8b7cd40-7f58-ac64-e2c2-a2a4c9cc90ce@suse.com>
Date: Thu, 18 Feb 2021 10:34:06 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <CAMmSBy-3y+Y3nhyf1uGN6KB_wNLVAqYRfc0hpkdKHtvdGSM5wg@mail.gmail.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="GNJRm2pyggWu40eK3nt8thzH0tNlKfd4Y"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--GNJRm2pyggWu40eK3nt8thzH0tNlKfd4Y
Content-Type: multipart/mixed; boundary="C9gK4prqazbYKcbofgeDVBEzotxWg6ffV";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Roman Shaposhnik <roman@zededa.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich <jbeulich@suse.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>
Message-ID: <f8b7cd40-7f58-ac64-e2c2-a2a4c9cc90ce@suse.com>
Subject: Re: Linux DomU freezes and dies under heavy memory shuffling
References: <CAMmSBy-_UOK6DTrwGNOw8Y59Muv8H8wxmsc4-BXcv3N_u5USBA@mail.gmail.com>
 <alpine.DEB.2.21.2102161232310.3234@sstabellini-ThinkPad-T480s>
 <45b8ef4c-6d36-e91b-ca1a-a82eeca5aaf5@suse.com>
 <CAMmSBy8k0Y50Xkq9Kq+oES27gsoG==T++Hz9SiR0gDgAKnpvRA@mail.gmail.com>
 <49344e8d-5518-68c6-a417-68522a915e72@suse.com>
 <CAMmSBy-3y+Y3nhyf1uGN6KB_wNLVAqYRfc0hpkdKHtvdGSM5wg@mail.gmail.com>
In-Reply-To: <CAMmSBy-3y+Y3nhyf1uGN6KB_wNLVAqYRfc0hpkdKHtvdGSM5wg@mail.gmail.com>

--C9gK4prqazbYKcbofgeDVBEzotxWg6ffV
Content-Type: multipart/mixed;
 boundary="------------457068B34F1ADD00FE05D41D"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------457068B34F1ADD00FE05D41D
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 18.02.21 06:21, Roman Shaposhnik wrote:
> On Wed, Feb 17, 2021 at 12:29 AM Jürgen Groß <jgross@suse.com
> <mailto:jgross@suse.com>> wrote:
>
>     On 17.02.21 09:12, Roman Shaposhnik wrote:
>      > Hi Jürgen, thanks for taking a look at this. A few comments below:
>      >
>      > On Tue, Feb 16, 2021 at 10:47 PM Jürgen Groß <jgross@suse.com
>     <mailto:jgross@suse.com>> wrote:
>      >>
>      >> On 16.02.21 21:34, Stefano Stabellini wrote:
>      >>> + x86 maintainers
>      >>>
>      >>> It looks like the tlbflush is getting stuck?
>      >>
>      >> I have seen this case multiple times on customer systems now, but
>      >> reproducing it reliably seems to be very hard.
>      >
>      > It is reliably reproducible under my workload but it takes a long time
>      > (~3 days of the workload running in the lab).
>
>     This is by far the best reproduction rate I have seen up to now.
>
>     The next best reproducer seems to be a huge installation with several
>     hundred hosts and thousands of VMs with about 1 crash each week.
>
>      >
>      >> I suspected fifo events to be blamed, but just yesterday I've been
>      >> informed of another case with fifo events disabled in the guest.
>      >>
>      >> One common pattern seems to be that up to now I have seen this
>     effect
>      >> only on systems with Intel Gold cpus. Can it be confirmed to be true
>      >> in this case, too?
>      >
>      > I am pretty sure mine isn't -- I can get you full CPU specs if
>     that's useful.
>
>     Just the output of "grep model /proc/cpuinfo" should be enough.
>
>
> processor: 3
> vendor_id: GenuineIntel
> cpu family: 6
> model: 77
> model name: Intel(R) Atom(TM) CPU  C2550  @ 2.40GHz
> stepping: 8
> microcode: 0x12d
> cpu MHz: 1200.070
> cache size: 1024 KB
> physical id: 0
> siblings: 4
> core id: 3
> cpu cores: 4
> apicid: 6
> initial apicid: 6
> fpu: yes
> fpu_exception: yes
> cpuid level: 11
> wp: yes
> flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
> pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp
> lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology
> nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est
> tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 movbe popcnt tsc_deadline_timer
> aes rdrand lahf_lm 3dnowprefetch cpuid_fault epb pti ibrs ibpb stibp
> tpr_shadow vnmi flexpriority ept vpid tsc_adjust smep erms dtherm ida
> arat md_clear
> vmx flags: vnmi preemption_timer invvpid ept_x_only flexpriority
> tsc_offset vtpr mtf vapic ept vpid unrestricted_guest
> bugs: cpu_meltdown spectre_v1 spectre_v2 mds msbds_only
> bogomips: 4800.19
> clflush size: 64
> cache_alignment: 64
> address sizes: 36 bits physical, 48 bits virtual
> power management:
>
>      >
>      >> In case anybody has a reproducer (either in a guest or dom0) with a
>      >> setup where a diagnostic kernel can be used, I'd be _very_
>     interested!
>      >
>      > I can easily add things to Dom0 and DomU. Whether that will
>     disrupt the
>      > experiment is, of course, another matter. Still, please let me
>     know what
>      > would be helpful to do.
>
>     Is there a chance to switch to an upstream kernel in the guest? I'd like
>     to add some diagnostic code to the kernel, and creating the patches will
>     be easier this way.
>
>
> That's a bit tough -- the VM is based on stock Ubuntu, and if I upgrade
> the kernel I'll have to fiddle with a lot of things to make the workload
> functional again.
>
> However, I can install a debug kernel (from Ubuntu, etc.)
>
> Of course, if patching the kernel is the only way to make progress --
> let's try that -- please let me know.

I have found a nice upstream patch which - with some modifications - I
plan to give our customer as a workaround.

The patch is for kernel 4.12, but chances are good it will apply to a
4.15 kernel, too.

Are you able to give it a try? I hope it will fix the hangs, and in case
a hang gets fixed up this way there should be a message on the console.


Juergen

--------------457068B34F1ADD00FE05D41D
Content-Type: text/x-patch; charset=UTF-8;
 name="0001-kernel-smp-Provide-CSD-lock-timeout-diagnostics.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename*0="0001-kernel-smp-Provide-CSD-lock-timeout-diagnostics.patch"

From: "Juergen Gross" <jgross@suse.com>
Date: Thu, 18 Feb 2021 09:22:54 +0100
Subject: [PATCH] kernel/smp: Provide CSD lock timeout diagnostics

This commit causes csd_lock_wait() to emit diagnostics when a CPU
fails to respond quickly enough to one of the smp_call_function()
family of function calls.

In case such a stall is detected, the CPU which ought to execute the
function will be pinged again, in case the IPI somehow got lost.

This commit is based on an upstream patch by Paul E. McKenney.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
diff --git a/kernel/smp.c b/kernel/smp.c
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -19,6 +19,9 @@
 #include <linux/sched.h>
 #include <linux/sched/idle.h>
 #include <linux/hypervisor.h>
+#include <linux/sched/clock.h>
+#include <linux/nmi.h>
+#include <linux/sched/debug.h>
 
 #include "smpboot.h"
 
@@ -96,6 +99,79 @@ void __init call_function_init(void)
 	smpcfd_prepare_cpu(smp_processor_id());
 }
 
+static DEFINE_PER_CPU(call_single_data_t *, cur_csd);
+static DEFINE_PER_CPU(smp_call_func_t, cur_csd_func);
+static DEFINE_PER_CPU(void *, cur_csd_info);
+
+#define CSD_LOCK_TIMEOUT (5ULL * NSEC_PER_SEC)
+atomic_t csd_bug_count = ATOMIC_INIT(0);
+
+/* Record current CSD work for current CPU, NULL to erase. */
+static void csd_lock_record(call_single_data_t *csd)
+{
+	if (!csd) {
+		smp_mb(); /* NULL cur_csd after unlock. */
+		__this_cpu_write(cur_csd, NULL);
+		return;
+	}
+	__this_cpu_write(cur_csd_func, csd->func);
+	__this_cpu_write(cur_csd_info, csd->info);
+	smp_wmb(); /* func and info before csd. */
+	__this_cpu_write(cur_csd, csd);
+	smp_mb(); /* Update cur_csd before function call. */
+		  /* Or before unlock, as the case may be. */
+}
+
+/* Complain if too much time spent waiting. */
+static __always_inline bool csd_lock_wait_toolong(call_single_data_t *csd, u64 ts0, u64 *ts1, int *bug_id, unsigned int cpu)
+{
+	bool firsttime;
+	u64 ts2, ts_delta;
+	call_single_data_t *cpu_cur_csd;
+	unsigned int flags = READ_ONCE(csd->flags);
+
+	if (!(flags & CSD_FLAG_LOCK)) {
+		if (!unlikely(*bug_id))
+			return true;
+		pr_alert("csd: CSD lock (#%d) got unstuck on CPU#%02d, CPU#%02d released the lock.\n",
+			 *bug_id, raw_smp_processor_id(), cpu);
+		return true;
+	}
+
+	ts2 = sched_clock();
+	ts_delta = ts2 - *ts1;
+	if (likely(ts_delta <= CSD_LOCK_TIMEOUT))
+		return false;
+
+	firsttime = !*bug_id;
+	if (firsttime)
+		*bug_id = atomic_inc_return(&csd_bug_count);
+	cpu_cur_csd = smp_load_acquire(&per_cpu(cur_csd, cpu)); /* Before func and info. */
+	pr_alert("csd: %s non-responsive CSD lock (#%d) on CPU#%d, waiting %llu ns for CPU#%02d %pS(%ps).\n",
+		 firsttime ? "Detected" : "Continued", *bug_id, raw_smp_processor_id(), ts2 - ts0,
+		 cpu, csd->func, csd->info);
+	if (cpu_cur_csd && csd != cpu_cur_csd) {
+		pr_alert("\tcsd: CSD lock (#%d) handling prior %pS(%ps) request.\n",
+			 *bug_id, READ_ONCE(per_cpu(cur_csd_func, cpu)),
+			 READ_ONCE(per_cpu(cur_csd_info, cpu)));
+	} else {
+		pr_alert("\tcsd: CSD lock (#%d) %s.\n",
+			 *bug_id, !cpu_cur_csd ? "unresponsive" : "handling this request");
+	}
+	if (cpu >= 0) {
+		if (!trigger_single_cpu_backtrace(cpu))
+			dump_cpu_task(cpu);
+		if (!cpu_cur_csd) {
+			pr_alert("csd: Re-sending CSD lock (#%d) IPI from CPU#%02d to CPU#%02d\n", *bug_id, raw_smp_processor_id(), cpu);
+			arch_send_call_function_single_ipi(cpu);
+		}
+	}
+	dump_stack();
+	*ts1 = ts2;
+
+	return false;
+}
+
 /*
  * csd_lock/csd_unlock used to serialize access to per-cpu csd resources
  *
@@ -103,14 +179,23 @@ void __init call_function_init(void)
  * previous function call. For multi-cpu calls its even more interesting
  * as we'll have to ensure no other cpu is observing our csd.
  */
-static __always_inline void csd_lock_wait(call_single_data_t *csd)
+static __always_inline void csd_lock_wait(call_single_data_t *csd, unsigned int cpu)
 {
-	smp_cond_load_acquire(&csd->flags, !(VAL & CSD_FLAG_LOCK));
+	int bug_id = 0;
+	u64 ts0, ts1;
+
+	ts1 = ts0 = sched_clock();
+	for (;;) {
+		if (csd_lock_wait_toolong(csd, ts0, &ts1, &bug_id, cpu))
+			break;
+		cpu_relax();
+	}
+	smp_acquire__after_ctrl_dep();
 }
 
-static __always_inline void csd_lock(call_single_data_t *csd)
+static __always_inline void csd_lock(call_single_data_t *csd, unsigned int cpu)
 {
-	csd_lock_wait(csd);
+	csd_lock_wait(csd, cpu);
 	csd->flags |= CSD_FLAG_LOCK;
 
 	/*
@@ -148,9 +233,11 @@ static int generic_exec_single(int cpu,
 		 * We can unlock early even for the synchronous on-stack case,
 		 * since we're doing this from the same CPU..
 		 */
+		csd_lock_record(csd);
 		csd_unlock(csd);
 		local_irq_save(flags);
 		func(info);
+		csd_lock_record(NULL);
 		local_irq_restore(flags);
 		return 0;
 	}
@@ -238,6 +325,7 @@ static void flush_smp_call_function_queu
 		smp_call_func_t func = csd->func;
 		void *info = csd->info;
 
+		csd_lock_record(csd);
 		/* Do we wait until *after* callback? */
 		if (csd->flags & CSD_FLAG_SYNCHRONOUS) {
 			func(info);
@@ -246,6 +334,7 @@ static void flush_smp_call_function_queu
 			csd_unlock(csd);
 			func(info);
 		}
+		csd_lock_record(NULL);
 	}
 
 	/*
@@ -293,13 +382,13 @@ int smp_call_function_single(int cpu, sm
 	csd = &csd_stack;
 	if (!wait) {
 		csd = this_cpu_ptr(&csd_data);
-		csd_lock(csd);
+		csd_lock(csd, cpu);
 	}
 
 	err = generic_exec_single(cpu, csd, func, info);
 
 	if (wait)
-		csd_lock_wait(csd);
+		csd_lock_wait(csd, cpu);
 
 	put_cpu();
 
@@ -331,7 +420,7 @@ int smp_call_function_single_async(int c
 
 	/* We could deadlock if we have to wait here with interrupts disabled! */
 	if (WARN_ON_ONCE(csd->flags & CSD_FLAG_LOCK))
-		csd_lock_wait(csd);
+		csd_lock_wait(csd, cpu);
 
 	csd->flags = CSD_FLAG_LOCK;
 	smp_wmb();
@@ -448,7 +537,7 @@ void smp_call_function_many(const struct
 	for_each_cpu(cpu, cfd->cpumask) {
 		call_single_data_t *csd = per_cpu_ptr(cfd->csd, cpu);
 
-		csd_lock(csd);
+		csd_lock(csd, cpu);
 		if (wait)
 			csd->flags |= CSD_FLAG_SYNCHRONOUS;
 		csd->func = func;
@@ -465,7 +554,7 @@ void smp_call_function_many(const struct
 			call_single_data_t *csd;
 
 			csd = per_cpu_ptr(cfd->csd, cpu);
-			csd_lock_wait(csd);
+			csd_lock_wait(csd, cpu);
 		}
 	}
 }

--------------457068B34F1ADD00FE05D41D
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------457068B34F1ADD00FE05D41D--

--C9gK4prqazbYKcbofgeDVBEzotxWg6ffV--

--GNJRm2pyggWu40eK3nt8thzH0tNlKfd4Y
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmAuNI4FAwAAAAAACgkQsN6d1ii/Ey+9
nQf+Osx/VUZ6+S0+UYwDjYW2xrVye1BhzhUXgw3s3VZbauUjVrtS1IfN1hmG+kGzTV3Os88nzalt
L2F2CbyangRjHIv9evE4meUOMeFJxcgYRmuY436hyczDuGTB/zvba+Uf93AwEjrTovXuOjKJPdYg
wZovPCnUzm4TLk2VsHqUtdSi3q2Ir9tt21IhL6ot/qN+a/IhlCENXKxU3lxTOdwKUY+GQvd+NItB
xn5DrWhsXJ5qAdM1rX7jrjJJ7Tq8OV59se82K63PRvK7gXsa0cAATujRs7dUksQG7P5wFFuRuw/c
IL9H9644sld1q3aV4mEx22orm0asNqNMTP2oB/gvjg==
=nn8W
-----END PGP SIGNATURE-----

--GNJRm2pyggWu40eK3nt8thzH0tNlKfd4Y--


From xen-devel-bounces@lists.xenproject.org Thu Feb 18 09:44:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 Feb 2021 09:44:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86513.162495 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCfqs-0004gM-Oc; Thu, 18 Feb 2021 09:44:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86513.162495; Thu, 18 Feb 2021 09:44:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCfqs-0004gF-Lh; Thu, 18 Feb 2021 09:44:02 +0000
Received: by outflank-mailman (input) for mailman id 86513;
 Thu, 18 Feb 2021 09:44:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=m/a8=HU=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1lCfqr-0004gA-Bi
 for xen-devel@lists.xenproject.org; Thu, 18 Feb 2021 09:44:01 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id e99db446-47e3-47f7-ac96-8939532d7a74;
 Thu, 18 Feb 2021 09:43:59 +0000 (UTC)
Received: from mail-wr1-f71.google.com (mail-wr1-f71.google.com
 [209.85.221.71]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-317-ppBynGg2PLObngwSRJiYUw-1; Thu, 18 Feb 2021 04:43:57 -0500
Received: by mail-wr1-f71.google.com with SMTP id y6so765462wrl.9
 for <xen-devel@lists.xenproject.org>; Thu, 18 Feb 2021 01:43:57 -0800 (PST)
Received: from [192.168.1.36] (68.red-83-57-175.dynamicip.rima-tde.net.
 [83.57.175.68])
 by smtp.gmail.com with ESMTPSA id e12sm7570704wrv.59.2021.02.18.01.43.55
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 18 Feb 2021 01:43:55 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e99db446-47e3-47f7-ac96-8939532d7a74
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1613641439;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Z/oUH83yvgTbTgpGRwfa+lYGHr4STadHNFna7DZctk8=;
	b=Tozs2pPi5zwHQS+CNc+GCxwCn2NCzr5U5tOVoI/siy6FbOYgUq1skDOKzgPfh5LsTqJJOn
	QnBQkXZQsMbGbl7yOP1sUJzjAXCBmCk66un4VE53prTVT/Ejlyzy8qPlzTCJr2C/jq1TsT
	vpCgQC21Rcx/53AoNzAzB4AU33tZyQE=
X-MC-Unique: ppBynGg2PLObngwSRJiYUw-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=Z/oUH83yvgTbTgpGRwfa+lYGHr4STadHNFna7DZctk8=;
        b=trEkTbtF+tjtmyxoSWN4Ww9rHc2bH0nPWkjD4PgU8ExILV+OnhLy9Z9hVumUNwuC5p
         RRjYL5n99vjk2xTpci6YfhbDQWV3NB0G4IlDC2FPsjJwuzn2zE5JjaFBlpBvuB/VJbke
         rfWW9X23uG4kHUFuXmNLspIPAgxDpOTHpN+Fkk5jFVZtsALUhAUYU+32af9ylMHXlzua
         iAz2qCyeRJs7My+Ltvyn6sYQMq/y0eDp+x7jYbAhZ/D5opK+6UWmcEXvQ4S29RSUzIaH
         gRp5rOssr6i/1No1gXfPhqlrMm3EV/plKynsb4NDoLxeUZhqw5sgVTOVxKGXyivY5d09
         iLLQ==
X-Gm-Message-State: AOAM532K69Qcr5vIY8DZmzx8Kq2IgdL7/4fkwd6Qr/nZu43RZFM40p8R
	kOOIapXJqj6wG2B0+tH3zSPM4RhfGv0PMjg2rLHKfhS9CUFo//ClSjpAkvVeXvM9tFcnn0NEU/N
	5x/3HvOZSH1ditVEWn49rnMYxH04=
X-Received: by 2002:adf:ce91:: with SMTP id r17mr3344094wrn.219.1613641436361;
        Thu, 18 Feb 2021 01:43:56 -0800 (PST)
X-Google-Smtp-Source: ABdhPJwMOb3IDgaYqdEW2tkZ33eFn3YO+zdFXyxi+LRDW60vFVymK1fuZ5dq/fUeAHNxFm21ZFgR7Q==
X-Received: by 2002:adf:ce91:: with SMTP id r17mr3344076wrn.219.1613641436195;
        Thu, 18 Feb 2021 01:43:56 -0800 (PST)
Subject: Re: [PATCH v2 7/7] tests/avocado: add boot_xen tests
To: Cleber Rosa <crosa@redhat.com>, =?UTF-8?Q?Alex_Benn=c3=a9e?=
 <alex.bennee@linaro.org>, Willian Rampazzo <wrampazz@redhat.com>
Cc: qemu-devel@nongnu.org, julien@xen.org, andre.przywara@arm.com,
 stefano.stabellini@linaro.org,
 Wainer dos Santos Moschetta <wainersm@redhat.com>,
 xen-devel@lists.xenproject.org, stefano.stabellini@xilinx.com,
 stratos-dev@op-lists.linaro.org
References: <20210211171945.18313-1-alex.bennee@linaro.org>
 <20210211171945.18313-8-alex.bennee@linaro.org>
 <20210217204654.GA353754@localhost.localdomain>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>
Message-ID: <2948d7db-2168-7c5e-a73e-969a67496daa@redhat.com>
Date: Thu, 18 Feb 2021 10:43:54 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <20210217204654.GA353754@localhost.localdomain>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=WINDOWS-1252
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 2/17/21 9:46 PM, Cleber Rosa wrote:
> On Thu, Feb 11, 2021 at 05:19:45PM +0000, Alex Bennée wrote:
>> These tests make sure we can boot the Xen hypervisor with a Dom0
>> kernel using the guest-loader. We currently have to use a kernel I
>> built myself because there are issues using the Debian kernel images.
>>
>> Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
>> ---
>>  MAINTAINERS                  |   1 +
>>  tests/acceptance/boot_xen.py | 117 +++++++++++++++++++++++++++++++++++
>>  2 files changed, 118 insertions(+)
>>  create mode 100644 tests/acceptance/boot_xen.py

>> +class BootXen(BootXenBase):
>> +
>> +    @skipIf(os.getenv('GITLAB_CI'), 'Running on GitLab')
>> +    def test_arm64_xen_411_and_dom0(self):
>> +        """
>> +        :avocado: tags=arch:aarch64
>> +        :avocado: tags=accel:tcg
>> +        :avocado: tags=cpu:cortex-a57
>> +        :avocado: tags=machine:virt
>> +        """
>> +        xen_url = ('https://deb.debian.org/debian/'
>> +                   'pool/main/x/xen/'
>> +                   'xen-hypervisor-4.11-arm64_4.11.4+37-g3263f257ca-1_arm64.deb')
>> +        xen_sha1 = '034e634d4416adbad1212d59b62bccdcda63e62a'
> 
> This URL is already giving 404s because of a new package.  I found
> this to work (but it probably won't last long):
> 
>         xen_url = ('http://deb.debian.org/debian/'
>                    'pool/main/x/xen/'
>                    'xen-hypervisor-4.11-arm64_4.11.4+57-g41a822c392-2_arm64.deb')
>         xen_sha1 = 'b5a6810fc67fd50fa36afdfdfe88ce3153dd3a55'

This is not the same package version... Please understand that the
developer has to download the Debian package sources and check again
the set of downstream changes between 37 and 57. Each distribution
revision might contain multiple downstream patches. Then the testing
has to be done again, often enabling tracing or single-stepping in gdb.
This has a cost in productivity. This is why I insist that I prefer to
use archived, well-tested artifacts rather than changing the package
URL randomly.
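The pinning itself boils down to a hash comparison, which is exactly why
a repacked Debian upload at the same URL pattern breaks the test. A
minimal sketch of that check (the helper below is hypothetical, not
avocado's actual fetch_asset() implementation):

```python
import hashlib

# Hypothetical helper: avocado's fetch_asset() caches the download and
# verifies it against the pinned hash, rejecting any re-uploaded artifact.
def verify_asset(data: bytes, expected_sha1: str) -> bool:
    """Return True only if the downloaded blob matches the pinned SHA-1."""
    return hashlib.sha1(data).hexdigest() == expected_sha1

blob = b"xen-hypervisor-4.11-arm64.deb contents"
pinned = hashlib.sha1(blob).hexdigest()
assert verify_asset(blob, pinned)             # unchanged artifact: accepted
assert not verify_asset(blob + b"!", pinned)  # new upload: rejected
```

Archiving the exact tested artifact keeps the pinned hash valid forever,
whereas a distro pool URL invalidates it on every package revision.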

>> +        xen_deb = self.fetch_asset(xen_url, asset_hash=xen_sha1)
>> +        xen_path = self.extract_from_deb(xen_deb, "/boot/xen-4.11-arm64")
>> +
>> +        self.launch_xen(xen_path)
>> +
>> +    @skipIf(os.getenv('GITLAB_CI'), 'Running on GitLab')
>> +    def test_arm64_xen_414_and_dom0(self):
>> +        """
>> +        :avocado: tags=arch:aarch64
>> +        :avocado: tags=accel:tcg
>> +        :avocado: tags=cpu:cortex-a57
>> +        :avocado: tags=machine:virt
>> +        """
>> +        xen_url = ('https://deb.debian.org/debian/'
>> +                   'pool/main/x/xen/'
>> +                   'xen-hypervisor-4.14-arm64_4.14.0+80-gd101b417b7-1_arm64.deb')
>> +        xen_sha1 = 'b9d209dd689ed2b393e625303a225badefec1160'
> 
> Likewise here:
> 
>         xen_url = ('https://deb.debian.org/debian/'
>                    'pool/main/x/xen/'
>                    'xen-hypervisor-4.14-arm64_4.14.0+88-g1d1d1f5391-2_arm64.deb')
>         xen_sha1 = 'f316049beaadd50482644e4955c4cdd63e3a07d5'

Likewise not the same package version.

Regards,

Phil.



From xen-devel-bounces@lists.xenproject.org Thu Feb 18 09:46:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 Feb 2021 09:46:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86517.162511 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCftD-0004pv-7z; Thu, 18 Feb 2021 09:46:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86517.162511; Thu, 18 Feb 2021 09:46:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCftD-0004po-4A; Thu, 18 Feb 2021 09:46:27 +0000
Received: by outflank-mailman (input) for mailman id 86517;
 Thu, 18 Feb 2021 09:46:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gXm/=HU=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lCftB-0004pj-EJ
 for xen-devel@lists.xenproject.org; Thu, 18 Feb 2021 09:46:25 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5584014d-611a-4628-8f50-77fa3e66d2f4;
 Thu, 18 Feb 2021 09:46:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5584014d-611a-4628-8f50-77fa3e66d2f4
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613641583;
  h=date:from:to:cc:subject:message-id:mime-version;
  bh=In0H3GKAIQ6+4OBnb5ov4Vo8dGUkbVupydG+4XFu8Bo=;
  b=M6hd8RZRb56DZBh/0PHgTwYiNFYZz5/9o/z365QbAt9uGyVrL4dNup6r
   OMJ6FBDMvRfEiXsrK2L1ux138+W2co/7ZRiyx+96Q7qfubPJ9Sw1JlsW+
   T6C69CEyyh41I/skSgsZ2OhS4iu+q5s6NtWK24Fv0/yBFfN+XEfXTEHje
   0=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: UoJTiv5V4jRBe0Y00jc3hlmzX7xSgE58DcB7/2mGHALHi2Jigd/PG8oMRglOQk+eCti6W+VPA+
 aX7YWUMbmGObi4qvI52u1E1PqYUN6rbsDhp91cIgdk3DYKyyFqivdaMiFsFU2XeYUz7yd+TogK
 Pgkn2z4ISwjbJOPk4eiQSRKfNALTDAgswaPSTJadkDnXUPQV1+5nN0mBGFXehKgtlpyVdUkdFm
 Y+Riaai+upuggRApBfoeEiGo5WtKLrw78JAHpEJ2g8ZpALyogMipKz5uWsXc+Yg1GcsPOuZKPS
 6Jk=
X-SBRS: 5.2
X-MesageID: 38871673
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,186,1610427600"; 
   d="scan'208";a="38871673"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=djLiSb0LGj8ENRaRSn0sFZSvs1J4CeL1L5mhw4wiZzQ6qQHo4t/jF71Ad2fYVv1bV+xtTgSFGhHW8cSsVVYoCgy7Iz5OjMLqL9S7z3DLuXuH3nynKTYv8FFyWzlCXradEvpxiWvWfq1YLb51zteSU6aIH7+rq9UNqhDe0jnoWLC50di5H4X/KTS62h3qzt9z4LwIdHYDPwkoZf1fS69+w+X22GWg1Vwe6RdSYyNW3wQfhtOfiJoSSfQfR5AsFunBN8BkjD/H0RJtQ1fobQrrrmwGq/7LdnEYjysPHiyxUi4ysjXX7ZVCmwKqu8vTMA6JmJ7TUgN4ne/zio9/0DakIQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=In0H3GKAIQ6+4OBnb5ov4Vo8dGUkbVupydG+4XFu8Bo=;
 b=GM8EQKW7LPM2TOXfAm91a/WUECTlh2fP2HdJLvefUpsynRhj8cmUeTU5KfEiLJS18Jg9bhPdUkKPhTG1lm3Pu1VRewdT6QOdaCkLg5RQYtlWPrhrd6zV0vfbVmZlYZd7CtOOB/nnxATHDoljJkMY3kLMjegBvUISXzxrlmUgut8sCbZK68IxfvR0qvQ/gcgijmPROVhOJCtG9JVgc33VmE4RwakIDC1zKWdQC0NHOo362NJQWO+G4avpSzeS+1qonCvDO8WjUeAQroeYthPPlIHQlE0AicnJKlrus2l2XW62C6oNLFzJveWJhCZpYkE0Lk+9iYHtuQD6HvW+PydcVA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=In0H3GKAIQ6+4OBnb5ov4Vo8dGUkbVupydG+4XFu8Bo=;
 b=wOYJ4bd9F+3qlNjmcJ5ZVGWKLueyHr5OSv191ShQBkulZgAYSg+cGZURPmRQPAOVq3IRlUUlXWGFMPzAKmBfxcn63F+gaeCDxFFy6zlNhRPV9x+KYR01wA2fjluIc/NOv8GsGAEl0WFoi/bJZIyEeF57kvJIFMkde3QBx8xYUtU=
Date: Thu, 18 Feb 2021 10:46:14 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Christian Lindig <christian.lindig@citrix.com>, Edwin
 =?utf-8?B?VMO2csO2aw==?= <edvin.torok@citrix.com>,
	=?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>, Ian Jackson
	<iwj@xenproject.org>
Subject: oxenstored restart after system crash
Message-ID: <YC43ZrCyScGxHEVE@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
X-ClientProxiedBy: MR2P264CA0009.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:1::21) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: a1700421-028e-485b-38ce-08d8d3f20b56
X-MS-TrafficTypeDiagnostic: DM6PR03MB4841:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB48414C4DB0AAFB455B39BE638F859@DM6PR03MB4841.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: a1700421-028e-485b-38ce-08d8d3f20b56
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Feb 2021 09:46:19.9916
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: /6glbdv3I7yKtzWLkpHLeHGEGLSs1zXcjsAprn54ZoPYZIkzLFRxXmpKkrRfHacP+2M4YRp1S69l16Z26zCJwA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4841
X-OriginatorOrg: citrix.com

Hello,

Last month I got a query from a FreeBSD Xen user having issues with
xenstored after a power failure:

https://lists.freebsd.org/pipermail/freebsd-xen/2021-January/003446.html

I'm not sure what the right approach is here. I've been told cxenstored
will attempt to unlink the tdb file when starting; does oxenstored
attempt to do the same?

Should the tdb file be placed in a path that's cleaned up on boot?

Should xencommons remove the stale tdb before starting xenstored?
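That last option would amount to something like the sketch below before
launching the daemon (the path is the usual Linux default and everything
here is an illustration of the idea, not what xencommons actually does):

```python
import os
import tempfile

# Sketch only: the real database typically lives at /var/lib/xenstored/tdb,
# but the path and the cleanup policy are distro-specific assumptions.
def remove_stale_db(path: str) -> bool:
    """Unlink a leftover xenstore database; return True if one was removed."""
    try:
        os.unlink(path)
        return True
    except FileNotFoundError:
        return False

# Demonstrate on a throwaway file rather than the real path.
fd, tmp = tempfile.mkstemp()
os.close(fd)
assert remove_stale_db(tmp)       # stale file from before the crash: removed
assert not remove_stale_db(tmp)   # clean start: nothing left to unlink
```

Placing the file on a tmpfs that is wiped on boot would achieve the same
effect without any explicit cleanup step.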

Mostly wanted to know what's the approach on Linux so that I can do
the same on FreeBSD.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Feb 18 09:57:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 Feb 2021 09:57:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86522.162526 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCg3a-0005sk-Di; Thu, 18 Feb 2021 09:57:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86522.162526; Thu, 18 Feb 2021 09:57:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCg3a-0005sd-AU; Thu, 18 Feb 2021 09:57:10 +0000
Received: by outflank-mailman (input) for mailman id 86522;
 Thu, 18 Feb 2021 09:57:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lCg3Y-0005sY-QF
 for xen-devel@lists.xenproject.org; Thu, 18 Feb 2021 09:57:08 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lCg3X-0002vZ-4P; Thu, 18 Feb 2021 09:57:07 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lCg3W-0000AL-NT; Thu, 18 Feb 2021 09:57:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=vEjg5XJVHUXx9pXONWEXqqHBcvyqRSVXUjBeUHcGnug=; b=gpFRm6of+x+3vRmwoaWzXZdUq9
	kXGJMysOv7E9FttYy8lXshmjj9XvrD6YWQpljuhBwqFm2+dHuA6fAkLoY1hJHyONrYptM9w17aIPb
	4NiVPngiOUtMFc03ZPgdTKO81LkWVs41bjpGczfiC6PXsCG5lOyEn3oJDprHJgv3lKww=;
Subject: Re: [RFC] xen/arm: introduce XENFEAT_ARM_dom0_iommu
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, Bertrand.Marquis@arm.com,
 Volodymyr_Babchuk@epam.com, rahul.singh@arm.com
References: <alpine.DEB.2.21.2102161333090.3234@sstabellini-ThinkPad-T480s>
 <2d22d5b8-6cda-f27b-e938-4806b65794a5@xen.org>
 <alpine.DEB.2.21.2102171233270.3234@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <0be0196f-5b3f-73f9-5ab7-7a54faabec5c@xen.org>
Date: Thu, 18 Feb 2021 09:57:05 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2102171233270.3234@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 17/02/2021 23:54, Stefano Stabellini wrote:
> On Wed, 17 Feb 2021, Julien Grall wrote:
>> On 17/02/2021 02:00, Stefano Stabellini wrote:
>>> Hi all,
>>>
>>> Today Linux uses the swiotlb-xen driver (drivers/xen/swiotlb-xen.c) to
>>> translate addresses for DMA operations in Dom0. Specifically,
>>> swiotlb-xen is used to translate the address of a foreign page (a page
>>> belonging to a domU) mapped into Dom0 before using it for DMA.
>>>
>>> This is important because although Dom0 is 1:1 mapped, DomUs are not. On
>>> systems without an IOMMU swiotlb-xen enables PV drivers to work as long
>>> as the backends are in Dom0. Thanks to swiotlb-xen, the DMA operation
>>> ends up using the MFN, rather than the GFN.
>>>
>>>
>>> On systems with an IOMMU, this is not necessary: when a foreign page is
>>> mapped in Dom0, it is added to the Dom0 p2m. A new GFN->MFN translation
is established for both MMU and SMMU. Dom0 could safely use the GFN
>>> address (instead of the MFN) for DMA operations and they would work. It
>>> would be more efficient than using swiotlb-xen.
>>>
>>> If you recall my presentation from Xen Summit 2020, Xilinx is working on
>>> cache coloring. With cache coloring, no domain is 1:1 mapped, not even
>>> Dom0. In a scenario where Dom0 is not 1:1 mapped, swiotlb-xen does not
>>> work as intended.
>>>
>>>
>>> The suggested solution for both these issues is to add a new feature
>>> flag "XENFEAT_ARM_dom0_iommu" that tells Dom0 that it is safe not to use
>>> swiotlb-xen because IOMMU translations are available for Dom0. If
>>> XENFEAT_ARM_dom0_iommu is set, Linux should skip the swiotlb-xen
>>> initialization. I have tested this scheme with and without cache
>>> coloring (hence with and without 1:1 mapping of Dom0) on ZCU102 and it
>>> works as expected: DMA operations succeed.
>>>
>>>
>>> What about systems where an IOMMU is present but not all devices are
>>> protected?
>>>
>>> There is no way for Xen to know which devices are protected and which
>>> ones are not: devices that do not have the "iommus" property could or
>>> could not be DMA masters.
>>>
>>> Perhaps Xen could populate a whitelist of devices protected by the IOMMU
>>> based on the "iommus" property. It would require some added complexity
>>> in Xen and especially in the swiotlb-xen driver in Linux to use it,
>>> which is not ideal.
>>
>> You are trading a bit more complexity in Xen and Linux against the risk
>> that a user may not be able to use the hypervisor on their platform
>> without a quirk in Xen (see more below).
>>
>>> However, this approach would not work for cache
>>> coloring where dom0 is not 1:1 mapped so the swiotlb-xen should not be
>>> used either way
>>
>> Not all Dom0 Linux kernels will be able to work with cache colouring. So
>> you will need a way for the kernel to say "Hey, I can avoid using swiotlb".
> 
> An unmodified dom0 Linux kernel can work with Xen cache coloring enabled.

Really? I was expecting Linux to configure the DMA transaction to use 
the MFN for foreign pages. So a mapping GFN == MFN would be necessary.

> The swiotlb-xen is unneeded and also all the pfn_valid() checks to detect
> if a page is local or foreign don't work as intended. 

I am not sure I understand this. Are you saying that Linux will 
"mistakenly" consider the foreign page as local? Therefore, it would 
always use the GFN rather than the MFN?

> However, it still
> works on systems where the IOMMU can be relied upon. That's because the
> IOMMU does the GFN->MFN translations, and also because both swiotlb-xen
> code paths (cache flush local and cache flush via hypercall) work.
> 
> Also considering that somebody that enables cache coloring has to know
> the entire system well, I don't think we need a runtime flag from Linux
> to say "Hey I can avoid using swiotlb".

I think we should avoid hardcoding any decision again. Otherwise, 
sooner or later this will come back to bite us.

[...]

>>> How to set XENFEAT_ARM_dom0_iommu?
>>>
>>> We could set XENFEAT_ARM_dom0_iommu automatically when
>>> is_iommu_enabled(d) for Dom0. We could also have a platform specific
>>> (xen/arch/arm/platforms/) override so that a specific platform can
>>> disable XENFEAT_ARM_dom0_iommu. For debugging purposes and advanced
>>> users, it would also be useful to be able to override it via a Xen
>>> command line parameter.
>> Platform quirks should be limited to a small set of platforms.
>>
>> In this case, this would not be only per-platform but also per-firmware table
>> as a developer can decide to remove/add IOMMU nodes in the DT at any time.
>>
>> In addition to that, it means we are introducing a regression for those
>> users, as Xen 4.14 would have worked on their platform but newer Xen would
>> not. They would need to go through all the nodes and find out which one is
>> not protected.
>>
>> This is a bit of a daunting task and we are going to end up having a lot of
>> per-platform quirks in Xen.
>>
>> So this approach to selecting the flag is a no-go for me. FAOD, the
>> inverted idea (i.e. only setting XENFEAT_ARM_dom0_iommu per-platform) is a
>> no-go as well.
>>
>> I don't have a good idea how to set the flag automatically. My
>> requirement is that newer Xen should continue to work on all supported
>> platforms without any additional per-platform effort.
> 
> Absolutely agreed.
> 
> 
> One option would be to rename the flag to XENFEAT_ARM_cache_coloring and
> only set it when cache coloring is enabled.  Obviously you need to know
> what you are doing if you are enabling cache coloring. If we go down
> that route, we don't risk breaking compatibility with any platforms.
> Given that cache coloring is not upstream yet (upstreaming should start
> very soon), maybe the only thing to do now would be to reserve a XENFEAT
> number.

At least in this context, I can't see how the problem described is cache 
coloring specific. That said, we may need to expose such a flag for other 
purposes in the future.

> 
> But actually it was always wrong for Linux to enable swiotlb-xen without
> checking whether it is 1:1 mapped or not. Today we enable swiotlb-xen in
> dom0 and disable it in domU, while we should have enabled swiotlb-xen if
> 1:1 mapped no matter dom0/domU. (swiotlb-xen could be useful in a 1:1
> mapped domU driver domain.)
> 
> 
> There is an argument (Andy was making on IRC) that being 1:1 mapped or
> not is an important information that Xen should provide to the domain
> regardless of anything else.
> 
> So maybe we should add two flags:
> 
> - XENFEAT_direct_mapped
> - XENFEAT_not_direct_mapped

I am guessing the two flags are there to allow Linux to fall back to the 
default behavior (depending on dom0 vs domU) on older hypervisors. On 
newer hypervisors, one of these flags would always be set. Is that correct?
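If so, the guest-side decision would look roughly like this sketch (the
flag names come from this thread; the set-based query and everything
else is a hypothetical model, not actual Linux code):

```python
# Illustrative model of the guest-side swiotlb-xen decision. The flag
# names are from the proposal; their numeric values and the mechanism
# for querying them are assumptions for this sketch.
DIRECT_MAPPED = "XENFEAT_direct_mapped"
NOT_DIRECT_MAPPED = "XENFEAT_not_direct_mapped"

def use_swiotlb_xen(features: set, is_dom0: bool) -> bool:
    """Decide whether the guest should initialize swiotlb-xen."""
    if DIRECT_MAPPED in features:
        return True          # 1:1 mapped: DMA needs the MFN, use swiotlb-xen
    if NOT_DIRECT_MAPPED in features:
        return False         # not 1:1 mapped: swiotlb-xen cannot help
    # Older hypervisor advertises neither flag: keep today's heuristic
    # (dom0 is assumed 1:1 mapped, domUs are not).
    return is_dom0

assert use_swiotlb_xen({DIRECT_MAPPED}, is_dom0=False)         # 1:1 domU
assert not use_swiotlb_xen({NOT_DIRECT_MAPPED}, is_dom0=True)  # colored dom0
assert use_swiotlb_xen(set(), is_dom0=True)                    # legacy fallback
```

Note the fallback branch is exactly the current dom0/domU behavior, so an
old kernel on a new hypervisor and a new kernel on an old hypervisor both
keep working.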

> 
> To all domains. This is not even ARM specific. Today dom0 would get
> XENFEAT_direct_mapped and domUs XENFEAT_not_direct_mapped. With cache
> coloring all domains will get XENFEAT_not_direct_mapped. With Bertrand's
> team work on 1:1 mapping domUs, some domUs might start to get
> XENFEAT_direct_mapped also one day soon.
> 
> Now I think this is the best option because it is descriptive, doesn't
> imply anything about what Linux should or should not do, and doesn't
> depend on unreliable IOMMU information.

That's a good first step, but this still doesn't solve the problem of 
whether the swiotlb can be disabled per-device, or of disabling the 
expensive 1:1 mapping in the IOMMU page-tables.

It feels to me we need a more complete solution (not necessarily 
implemented yet) so we don't paint ourselves into a corner again.

> Instead, if we follow my original proposal of using
> XENFEAT_ARM_dom0_iommu and set it automatically when Dom0 is protected
> by IOMMU, we risk breaking PV drivers for platforms where that protection
> is incomplete. I have no idea how many there are out there today. 

This can affect virtually any platform, as it is easy to disable an IOMMU 
in the firmware table.

> I have
> the feeling that there aren't that many but I am not sure. So yes, it
> could be that we start passing XENFEAT_ARM_dom0_iommu for a given
> platform, Linux skips the swiotlb-xen initialization, actually it is
> needed for a network/block device, and a PV driver breaks. I can see why
> you say this is a no-go.
> 
> 
> Third option. We still use XENFEAT_ARM_dom0_iommu but we never set
> XENFEAT_ARM_dom0_iommu automatically. It needs a platform specific flag
> to be set. We add the flag to xen/arch/arm/platforms/xilinx-zynqmp.c and
> any other platforms that qualify. Basically it is "opt in" instead of
> "opt out". We don't risk breaking anything because platforms would have
> XENFEAT_ARM_dom0_iommu disabled by default.
Well, yes, you will not break other platforms. However, you are still at 
risk of breaking your own platform if the firmware table is updated to 
disable some but not all IOMMUs (for performance concerns, brokenness...).

> Still, I think the
> XENFEAT_not/direct_mapped route is much cleaner and simpler.

The XENFEAT_{not_,}direct_mapped approach also doesn't rely on every 
platform adding code in Xen to take advantage of new features.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Feb 18 10:13:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 Feb 2021 10:13:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86523.162537 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCgJe-0007qd-SK; Thu, 18 Feb 2021 10:13:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86523.162537; Thu, 18 Feb 2021 10:13:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCgJe-0007qW-Op; Thu, 18 Feb 2021 10:13:46 +0000
Received: by outflank-mailman (input) for mailman id 86523;
 Thu, 18 Feb 2021 10:13:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dJ9M=HU=citrix.com=edvin.torok@srs-us1.protection.inumbo.net>)
 id 1lCgJd-0007qR-KH
 for xen-devel@lists.xenproject.org; Thu, 18 Feb 2021 10:13:45 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7eca57f8-13b3-46a1-ba2b-826405a8d42f;
 Thu, 18 Feb 2021 10:13:44 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7eca57f8-13b3-46a1-ba2b-826405a8d42f
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613643223;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:mime-version;
  bh=mvD//k+mnJawP31ZUy/eUC99eCmCfx6eayuNUTQap3o=;
  b=DvOiwHMMUZTuoaQQ+vvVMsrSTze1ALeOd9nN8l3JTGHaU41JCE0NDGWl
   0XTbJnq9Ghy34yXrcFU8XoE20GwCE0gP/kmmzOWg2cLtjZLK2DfXx7I5M
   goj/ievGQs+Y1TkeidCGdAiUeaIqwX1kERTX41Ob0GnFWArvCkCzRt1uF
   k=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 37520510
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,186,1610427600"; 
   d="scan'208,217";a="37520510"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=mvD//k+mnJawP31ZUy/eUC99eCmCfx6eayuNUTQap3o=;
 b=dx6oPCkYm9vDVJDmNyHRavjkn6gLM1amB+mMh/efNHNdsu0iwHI7jfYhWRCmpdxw4dMMncfjFlsEytjEHHJkcQwnuF2+kpHcNrPxeL6VeiMCkxhqWF5awkwTaDl17zv6yKQvl++SsxreIUYPZgRfqz+qlq9rcBnmPH9mGZCehFw=
From: Edwin Torok <edvin.torok@citrix.com>
To: Roger Pau Monne <roger.pau@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Christian Lindig <christian.lindig@citrix.com>,
	=?iso-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>, Ian Jackson
	<iwj@xenproject.org>
Subject: Re: oxenstored restart after system crash
Thread-Topic: oxenstored restart after system crash
Thread-Index: AQHXBdrqiMKXv9cDnkSLt+pdzqCGoKpdrxcg
Date: Thu, 18 Feb 2021 10:13:38 +0000
Message-ID: <SJ0PR03MB5888497D41C2C4B3FD14D86D9B859@SJ0PR03MB5888.namprd03.prod.outlook.com>
References: <YC43ZrCyScGxHEVE@Air-de-Roger>
In-Reply-To: <YC43ZrCyScGxHEVE@Air-de-Roger>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: a7a1de27-39bc-4e3c-d4b7-08d8d3f5dc33
x-ms-traffictypediagnostic: BYAPR03MB4184:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: <BYAPR03MB4184A4A75FD56D2A4CC7FCB29B859@BYAPR03MB4184.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
Content-Type: multipart/alternative;
	boundary="_000_SJ0PR03MB5888497D41C2C4B3FD14D86D9B859SJ0PR03MB5888namp_"
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB5888.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a7a1de27-39bc-4e3c-d4b7-08d8d3f5dc33
X-MS-Exchange-CrossTenant-originalarrivaltime: 18 Feb 2021 10:13:38.6971
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 1K/C/K9SMi2pElsRt/uC83MJUtMfFyvDyDDM2d9IbuhTOu+H/4z46e1zwKSbkOaF7Zsf6HrCFy1E1WLOWGYNOQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4184
X-OriginatorOrg: citrix.com

--_000_SJ0PR03MB5888497D41C2C4B3FD14D86D9B859SJ0PR03MB5888namp_
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable

Hi,

oxenstored doesn't have a tdb file; by default it stores the entire tree in memory only.

There is a way to persistently store the tree (--persistent), but that is not enabled by default and I don't know whether it even works.
Master (and the hotfixed releases) now have live-update functionality that dumps and restores state properly (it reuses some of the persistent disk code, but also dumps some additional state).

The default location of the "persistent" database is /var/run/xenstored, which is a tmpfs and thus cleared on every boot. So if you ensure that oxenstored uses the equivalent of that on FreeBSD (or have a boot script that clears it), that would solve any issues like this.

I don't know about C xenstored's behaviour; I'll let someone else answer that.

Best regards,
--Edwin

________________________________
From: Roger Pau Monne <roger.pau@citrix.com>
Sent: 18 February 2021 09:46
To: xen-devel@lists.xenproject.org <xen-devel@lists.xenproject.org>
Cc: Christian Lindig <christian.lindig@citrix.com>; Edwin Torok <edvin.torok@citrix.com>; Jürgen Groß <jgross@suse.com>; Ian Jackson <iwj@xenproject.org>
Subject: oxenstored restart after system crash

Hello,

Last month I got a query from a FreeBSD Xen user having issues with
xenstored after a power failure:

https://lists.freebsd.org/pipermail/freebsd-xen/2021-January/003446.html

I'm not sure what the right approach is here. I've been told cxenstored
will attempt to unlink the tdb file when starting; does oxenstored
attempt to do the same?

Should the tdb file be placed in a path that's cleaned up on boot?

Should xencommons remove the stale tdb before starting xenstored?

Mostly wanted to know what's the approach on Linux so that I can do
the same on FreeBSD.

Thanks, Roger.
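For concreteness, the unlink-on-start behaviour described above could be sketched as below. This is an illustrative sketch only: TDB_PATH and the helper name are hypothetical, not the actual cxenstored code or its build-time default path.

```c
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Hypothetical path; the real default is set at build time. */
#define TDB_PATH "/var/run/xenstored/tdb"

/* Remove a database file left over from a previous run before the
 * daemon rebuilds its state.  Returns 0 on success, including the
 * normal clean-boot case where no stale file exists. */
static int remove_stale_db(const char *path)
{
    if (unlink(path) == 0)
        return 0;                 /* stale file removed */
    if (errno == ENOENT)
        return 0;                 /* clean boot: nothing to remove */
    fprintf(stderr, "could not remove %s: %s\n", path, strerror(errno));
    return -1;
}
```

Whether the file is unlinked by the daemon itself or by xencommons before startup, the effect is the same: the stale state cannot be picked up after a crash.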

--_000_SJ0PR03MB5888497D41C2C4B3FD14D86D9B859SJ0PR03MB5888namp_--


From xen-devel-bounces@lists.xenproject.org Thu Feb 18 10:25:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 Feb 2021 10:25:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86526.162549 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCgUb-0000Rw-Uj; Thu, 18 Feb 2021 10:25:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86526.162549; Thu, 18 Feb 2021 10:25:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCgUb-0000Rp-RU; Thu, 18 Feb 2021 10:25:05 +0000
Received: by outflank-mailman (input) for mailman id 86526;
 Thu, 18 Feb 2021 10:25:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lCgUa-0000Rk-Ct
 for xen-devel@lists.xenproject.org; Thu, 18 Feb 2021 10:25:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lCgUY-0003SU-Dh; Thu, 18 Feb 2021 10:25:02 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lCgUY-0002ca-5V; Thu, 18 Feb 2021 10:25:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=wZx6CleLLX9Pj2wjhauaBUeBoimEpEIPPkSGGFkk4+Y=; b=woQQlUpFYTj8OQG0LapCwZPANB
	sY5sd4w31fbDCmgKl0wH+PCVQ7raDj6kt5B28/0GPPGmbi2rpiCcx72SOpi7YXdVHyEZzGt3RPR8S
	Yy7YNQ1+GeqpDHxVmQ+gvIvqXPiW4d+u8LbaO+tbUEqUP4M05Jkf5cVmL9ww/TB3yFJE=;
Subject: Re: [PATCH 1/3] gnttab: never permit mapping transitive grants
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <156559d5-853a-5bb9-942b-f623627e0907@suse.com>
 <3620b977-4182-db2c-e2f9-71e1c6c4e721@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <6423c7ef-66d2-8867-50a0-b75ae63aaef6@xen.org>
Date: Thu, 18 Feb 2021 10:25:00 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <3620b977-4182-db2c-e2f9-71e1c6c4e721@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 17/02/2021 10:46, Jan Beulich wrote:
> Transitive grants allow an intermediate domain I to grant a target
> domain T access to a page which origin domain O did grant I access to.
> As an implementation restriction, T is not allowed to map such a grant.
> An attempt is currently made to enforce this restriction by marking
> active entries resulting from transitive grants as is-sub-page; sub-page grants
> for obvious reasons don't allow mapping. However, marking (and checking)
> only active entries is insufficient, as a map attempt may also occur on
> a grant not otherwise in use. When not presently in use (pin count zero)
> the grant type itself needs checking. Otherwise T may be able to map an
> unrelated page owned by I. This is because the "transitive" sub-
> structure of the v2 union would end up being interpreted as "full_page"
> sub-structure instead. The low 32 bits of the GFN used would match the
> grant reference specified in I's transitive grant entry, while the upper
> 32 bits could be random (depending on how exactly I sets up its grant
> table entries).
> 
> Note that if one mapping already exists and the granting domain _then_
> changes the grant to GTF_transitive (which the domain is not supposed to
> do), the changed type will only be honored after the pin count has gone
> back to zero. This is no different from e.g. GTF_readonly or
> GTF_sub_page becoming set when a grant is already in use.
> 
> While adjusting the implementation, also adjust commentary in the public
> header to better reflect reality.
> 
> Fixes: 3672ce675c93 ("Transitive grant support")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
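The aliasing the commit message describes can be illustrated with a simplified re-statement of the v2 entry layout. This is a sketch, not the actual public header: the typedefs are stand-ins for the real Xen ones and the sub_page member is omitted.

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in typedefs for the real Xen types. */
typedef uint16_t domid_t;
typedef uint32_t grant_ref_t;

typedef struct {
    uint16_t flags;
    domid_t  domid;
} grant_entry_header_t;

union grant_entry_v2 {
    grant_entry_header_t hdr;
    struct {
        grant_entry_header_t hdr;
        uint32_t pad0;
        uint64_t frame;      /* GFN, starts at byte offset 8 */
    } full_page;
    struct {
        grant_entry_header_t hdr;
        domid_t trans_domid;
        uint16_t pad0;
        grant_ref_t gref;    /* also starts at byte offset 8 */
    } transitive;
};
```

Because gref and the low half of frame occupy the same bytes, a transitive entry misread as full_page yields (on a little-endian host) a GFN whose low 32 bits are I's grant reference, with the upper 32 bits whatever happens to follow in the table.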

The change in grant_table.c looks good to me:

Acked-by: Julien Grall <jgrall@amazon.com>

> 
> --- a/xen/common/grant_table.c
> +++ b/xen/common/grant_table.c
> @@ -851,9 +851,10 @@ static int _set_status_v2(const grant_en
>           mask |= GTF_sub_page;
>   
>       /* If not already pinned, check the grant domid and type. */
> -    if ( !act->pin && ((((scombo.flags & mask) != GTF_permit_access) &&
> -                        ((scombo.flags & mask) != GTF_transitive)) ||
> -                       (scombo.domid != ldomid)) )
> +    if ( !act->pin &&
> +         ((((scombo.flags & mask) != GTF_permit_access) &&
> +           (mapflag || ((scombo.flags & mask) != GTF_transitive))) ||
> +          (scombo.domid != ldomid)) )
>           PIN_FAIL(done, GNTST_general_error,
>                    "Bad flags (%x) or dom (%d); expected d%d, flags %x\n",
>                    scombo.flags, scombo.domid, ldomid, mask);
> @@ -879,7 +880,7 @@ static int _set_status_v2(const grant_en
>       if ( !act->pin )
>       {
>           if ( (((scombo.flags & mask) != GTF_permit_access) &&
> -              ((scombo.flags & mask) != GTF_transitive)) ||
> +              (mapflag || ((scombo.flags & mask) != GTF_transitive))) ||
>                (scombo.domid != ldomid) ||
>                (!readonly && (scombo.flags & GTF_readonly)) )
>           {
> --- a/xen/include/public/grant_table.h
> +++ b/xen/include/public/grant_table.h
> @@ -166,11 +166,13 @@ typedef struct grant_entry_v1 grant_entr
>   #define GTF_type_mask       (3U<<0)
>   
>   /*
> - * Subflags for GTF_permit_access.
> + * Subflags for GTF_permit_access and GTF_transitive.
>    *  GTF_readonly: Restrict @domid to read-only mappings and accesses. [GST]
>    *  GTF_reading: Grant entry is currently mapped for reading by @domid. [XEN]
>    *  GTF_writing: Grant entry is currently mapped for writing by @domid. [XEN]
> - *  GTF_PAT, GTF_PWT, GTF_PCD: (x86) cache attribute flags for the grant [GST]
> + * Further subflags for GTF_permit_access only.
> + *  GTF_PAT, GTF_PWT, GTF_PCD: (x86) cache attribute flags to be used for
> + *                             mappings of the grant [GST]
>    *  GTF_sub_page: Grant access to only a subrange of the page.  @domid
>    *                will only be allowed to copy from the grant, and not
>    *                map it. [GST]

Do we have any check preventing GTF_sub_page and GTF_transitive from 
being set together?
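For what it's worth, a check of the shape being asked about could look like the sketch below. The flag values mirror the public header, but the helper itself is hypothetical, not existing Xen code.

```c
#include <stdint.h>

/* Flag values mirroring xen/include/public/grant_table.h. */
#define GTF_permit_access (1U << 0)
#define GTF_transitive    (3U << 0)
#define GTF_type_mask     (3U << 0)
#define GTF_sub_page      (1U << 8)

/* Hypothetical validity check: GTF_sub_page is a GTF_permit_access
 * subflag, so reject it on a transitive grant. */
static int grant_flags_valid(uint16_t flags)
{
    if ((flags & GTF_type_mask) == GTF_transitive &&
        (flags & GTF_sub_page))
        return 0;   /* invalid combination */
    return 1;
}
```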

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Feb 18 10:34:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 Feb 2021 10:34:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86529.162561 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCgdk-0001TP-T5; Thu, 18 Feb 2021 10:34:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86529.162561; Thu, 18 Feb 2021 10:34:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCgdk-0001TI-Pq; Thu, 18 Feb 2021 10:34:32 +0000
Received: by outflank-mailman (input) for mailman id 86529;
 Thu, 18 Feb 2021 10:34:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lCgdi-0001TD-Sp
 for xen-devel@lists.xenproject.org; Thu, 18 Feb 2021 10:34:30 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lCgdh-0003cD-RC; Thu, 18 Feb 2021 10:34:29 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lCgdh-0003KJ-Ja; Thu, 18 Feb 2021 10:34:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=/thO9y/7clmp/qafZw7NmxL30/2KKyrk1CbITDs9Fx8=; b=0L2RFhHHuDdjx5lhgg1O06vmsg
	8cpJ1mjnKsh1YdqqLzKQSREaXvCumixWy3ccu9XA6Q8IP1k1jqmdnsXMjq5gQxWPoE4LsgxnlE8i4
	mciCWg8zxwMnYacKrpWIKQxvpdh5PN6Qz2bOUDCeCJpwnDgUGH3Y3Ub5Scvrl1myUxG8=;
Subject: Re: [RFC] xen/arm: introduce XENFEAT_ARM_dom0_iommu
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, Bertrand.Marquis@arm.com,
 Volodymyr_Babchuk@epam.com, rahul.singh@arm.com
References: <alpine.DEB.2.21.2102161333090.3234@sstabellini-ThinkPad-T480s>
 <2d22d5b8-6cda-f27b-e938-4806b65794a5@xen.org>
 <alpine.DEB.2.21.2102171233270.3234@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <b9ffc8ac-e87a-253e-0658-8e27c5cc046e@xen.org>
Date: Thu, 18 Feb 2021 10:34:27 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2102171233270.3234@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 17/02/2021 23:54, Stefano Stabellini wrote:
> On Wed, 17 Feb 2021, Julien Grall wrote:
>> On 17/02/2021 02:00, Stefano Stabellini wrote:
> 
> I saw that the topic has generated quite a bit of discussion. I suggest
> we keep gathering data and do brainstorming on the thread for a few days
> and in the meantime schedule a call for late next week to figure out a
> way forward?

I forgot to answer this bit, sorry. I am happy to have a call about this.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Feb 18 10:42:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 Feb 2021 10:42:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86532.162574 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCglS-0002Td-P2; Thu, 18 Feb 2021 10:42:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86532.162574; Thu, 18 Feb 2021 10:42:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCglS-0002TW-KB; Thu, 18 Feb 2021 10:42:30 +0000
Received: by outflank-mailman (input) for mailman id 86532;
 Thu, 18 Feb 2021 10:42:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gXm/=HU=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lCglS-0002TP-5B
 for xen-devel@lists.xenproject.org; Thu, 18 Feb 2021 10:42:30 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 60ba374a-ea8f-4079-94fe-1f3f604140fd;
 Thu, 18 Feb 2021 10:42:28 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 60ba374a-ea8f-4079-94fe-1f3f604140fd
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613644948;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=kBHOqF4mdLh2Vw0vTNHog/2fBR0VtFO3kawcyzOyAHU=;
  b=aznO5C+GYPXbWmR6MYSiy8xJmwu0d93BlpyWRlrV4+TQ4Y6lMY0cVn3o
   00ef5RopiNPm9HiSeelHzsksbxa0CSevmzIVkcd1FjJmpCyQP2XIKMr5R
   CIpStzNM5P3tJWOc6JUPSuz+BNbduy5WYGM+68BMHUP8uUGkPg+FJAUtP
   o=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 37497121
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,187,1610427600"; 
   d="scan'208";a="37497121"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=tadfgq7x1Rhc5P1xiKE+Bs5UFp8iNQGJW6YmQvc1qxY=;
 b=AoYN8TuLCYKODofFXwB6+QObk8/iMXkxN+zlC7IVCKyKrjnC2UQiqOg2i3li5a/R38jChn1Eem3NHCN16bDUkh8eOUELUF6Ucyg9JVvV1ykl0c+fi2K/EELULlXWGIOZ6Gof00OXSUMvSoq4Q+XXsqaUEJjf3oP+6feltEGG5R4=
Date: Thu, 18 Feb 2021 11:42:18 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
CC: <xen-devel@lists.xenproject.org>, <iwj@xenproject.org>, <wl@xen.org>,
	<anthony.perard@citrix.com>, <jbeulich@suse.com>,
	<andrew.cooper3@citrix.com>, <jun.nakajima@intel.com>, <kevin.tian@intel.com>
Subject: Re: [PATCH v2 1/4] xl: Add support for ignore_msrs option
Message-ID: <YC5EitRCZB+VCeCC@Air-de-Roger>
References: <1611182952-9941-1-git-send-email-boris.ostrovsky@oracle.com>
 <1611182952-9941-2-git-send-email-boris.ostrovsky@oracle.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <1611182952-9941-2-git-send-email-boris.ostrovsky@oracle.com>

On Wed, Jan 20, 2021 at 05:49:09PM -0500, Boris Ostrovsky wrote:
> This option allows the guest administrator to specify what should happen
> when the guest accesses an MSR which is not explicitly emulated by the hypervisor.
> 
> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> ---
>  docs/man/xl.cfg.5.pod.in         | 20 +++++++++++++++++++-
>  tools/libs/light/libxl_types.idl |  7 +++++++
>  tools/xl/xl_parse.c              |  7 +++++++
>  3 files changed, 33 insertions(+), 1 deletion(-)
> 
> diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
> index c8e017f950de..96ce97c42cab 100644
> --- a/docs/man/xl.cfg.5.pod.in
> +++ b/docs/man/xl.cfg.5.pod.in
> @@ -2044,7 +2044,25 @@ Do not provide a VM generation ID.
>  See also "Virtual Machine Generation ID" by Microsoft:
>  L<https://docs.microsoft.com/en-us/windows/win32/hyperv_v2/virtual-machine-generation-identifier>
>  
> -=back 
> +=over
> +
> +=item B<ignore_msrs="STRING">
> +
> +Determine hypervisor behavior on accesses to MSRs that are not emulated by the hypervisor.
> +
> +=over 4
> +
> +=item B<never>
> +
> +Issue a warning to the log and inject #GP into the guest. This is the default.
> +
> +=item B<silent>
> +
> +MSR reads return 0, MSR writes are ignored. No warnings to the log.
> +
> +=item B<verbose>
> +
> +Similar to B<silent> but a warning is written.

Would it make sense to allow for this option to be more fine-grained
in the future?

Not that you need to implement the full thing now, but maybe we could
have something like:

"
=item B<ignore_msrs=[ "MSR_RANGE", "MSR_RANGE", ... ]>

Specify a list of MSR ranges that will be ignored by the hypervisor:
reads will return zeros and writes will be discarded without raising a
#GP.

Each MSR_RANGE is given in hexadecimal format and may be a range, e.g.
c00102f0-c00102f1 (inclusive), or a single MSR, e.g. c00102f1.
"

Then the hypervisor could print the messages using a guest log level,
which can be adjusted on demand to get more verbose output.

I don't think selecting from xl whether the messages are printed is
that helpful, as the same could be achieved using guest_loglvl.

Also I think it would be fine to only implement:

ignore_msrs=[ "0-ffffffff" ]

for now, and to return an error for any other combination, so that we
can get something in soon and expand it later.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Feb 18 10:51:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 Feb 2021 10:51:51 +0000
Date: Thu, 18 Feb 2021 11:51:26 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
CC: <xen-devel@lists.xenproject.org>, <iwj@xenproject.org>, <wl@xen.org>,
	<anthony.perard@citrix.com>, <jbeulich@suse.com>,
	<andrew.cooper3@citrix.com>, <jun.nakajima@intel.com>, <kevin.tian@intel.com>
Subject: Re: [PATCH v2 2/4] x86: Introduce MSR_UNHANDLED
Message-ID: <YC5GrgqwsR/eBwpy@Air-de-Roger>
References: <1611182952-9941-1-git-send-email-boris.ostrovsky@oracle.com>
 <1611182952-9941-3-git-send-email-boris.ostrovsky@oracle.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <1611182952-9941-3-git-send-email-boris.ostrovsky@oracle.com>

On Wed, Jan 20, 2021 at 05:49:10PM -0500, Boris Ostrovsky wrote:
> When toolstack updates MSR policy, this MSR offset (which is the last
> index in the hypervisor MSR range) is used to indicate hypervisor
> behavior when guest accesses an MSR which is not explicitly emulated.

It's kind of weird to use an MSR to store this. I assume this is done
for migration reasons?

Isn't it possible to convey this data in the xl migration stream
instead of having to pack it with the MSRs?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Feb 18 11:24:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 Feb 2021 11:24:35 +0000
Date: Thu, 18 Feb 2021 12:24:17 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
CC: <xen-devel@lists.xenproject.org>, <iwj@xenproject.org>, <wl@xen.org>,
	<anthony.perard@citrix.com>, <jbeulich@suse.com>,
	<andrew.cooper3@citrix.com>, <jun.nakajima@intel.com>, <kevin.tian@intel.com>
Subject: Re: [PATCH v2 3/4] x86: Allow non-faulting accesses to non-emulated
 MSRs if policy permits this
Message-ID: <YC5OYZOAkx+jutJz@Air-de-Roger>
References: <1611182952-9941-1-git-send-email-boris.ostrovsky@oracle.com>
 <1611182952-9941-4-git-send-email-boris.ostrovsky@oracle.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <1611182952-9941-4-git-send-email-boris.ostrovsky@oracle.com>

On Wed, Jan 20, 2021 at 05:49:11PM -0500, Boris Ostrovsky wrote:
> Starting with commit 84e848fd7a16 ("x86/hvm: disallow access to unknown MSRs")
> accesses to unhandled MSRs result in #GP sent to the guest. This caused a
> regression for Solaris, which tries to access MSR_RAPL_POWER_UNIT and (unlike,
> for example, Linux) does not catch exceptions when accessing MSRs that
> potentially may not be present.
> 
> Instead of special-casing RAPL registers we decide what to do when any
> non-emulated MSR is accessed based on ignore_msrs field of msr_policy.
> 
> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> ---
> Changes in v2:
> * define x86_emul_guest_msr_access() and use it to determine whether emulated
>   instruction is rd/wrmsr.
> * Don't use ignore_msrs for MSR accesses that are not guest's rd/wrmsr.
> * Clear @val for writes too in guest_unhandled_msr()
> 
>  xen/arch/x86/hvm/svm/svm.c             | 10 ++++------
>  xen/arch/x86/hvm/vmx/vmx.c             | 10 ++++------
>  xen/arch/x86/msr.c                     | 28 ++++++++++++++++++++++++++++
>  xen/arch/x86/pv/emul-priv-op.c         | 10 ++++++----
>  xen/arch/x86/x86_emulate/x86_emulate.h |  6 ++++++
>  xen/include/asm-x86/msr.h              |  3 +++
>  6 files changed, 51 insertions(+), 16 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
> index b819897a4a9f..7b59885b2619 100644
> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -1965,8 +1965,8 @@ static int svm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
>          break;
>  
>      default:
> -        gdprintk(XENLOG_WARNING, "RDMSR 0x%08x unimplemented\n", msr);
> -        goto gpf;
> +        if ( guest_unhandled_msr(v, msr, msr_content, false, true) )
> +            goto gpf;
>      }
>  
>      HVM_DBG_LOG(DBG_LEVEL_MSR, "returns: ecx=%x, msr_value=%"PRIx64,
> @@ -2151,10 +2151,8 @@ static int svm_msr_write_intercept(unsigned int msr, uint64_t msr_content)
>          break;
>  
>      default:
> -        gdprintk(XENLOG_WARNING,
> -                 "WRMSR 0x%08x val 0x%016"PRIx64" unimplemented\n",
> -                 msr, msr_content);
> -        goto gpf;
> +        if ( guest_unhandled_msr(v, msr, &msr_content, true, true) )
> +            goto gpf;
>      }
>  
>      return X86EMUL_OKAY;
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index 2d4475ee3de2..87baca57d33f 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -3017,8 +3017,8 @@ static int vmx_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
>              break;
>          }
>  
> -        gdprintk(XENLOG_WARNING, "RDMSR 0x%08x unimplemented\n", msr);
> -        goto gp_fault;
> +        if ( guest_unhandled_msr(curr, msr, msr_content, false, true) )
> +            goto gp_fault;
>      }
>  
>  done:
> @@ -3319,10 +3319,8 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
>               is_last_branch_msr(msr) )
>              break;
>  
> -        gdprintk(XENLOG_WARNING,
> -                 "WRMSR 0x%08x val 0x%016"PRIx64" unimplemented\n",
> -                 msr, msr_content);
> -        goto gp_fault;
> +        if ( guest_unhandled_msr(v, msr, &msr_content, true, true) )
> +            goto gp_fault;
>      }

I think this fallback could live in hvm_msr_read_intercept instead of
having to call guest_unhandled_msr from each vendor-specific handler?

Oh, I see, that's likely done to differentiate between guest MSR
accesses and emulator ones? I'm not sure we really need to
distinguish guest MSR accesses from emulator ones; surely in the past
they were treated equally?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Feb 18 11:31:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 Feb 2021 11:31:45 +0000
X-Inumbo-ID: cc69c32a-09f4-4a16-a659-07b3b6161685
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613647899; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=dQ7Bmw3qV/Mx0DUjr1xZLIgeHRg07uwzun2XAH/yTdM=;
	b=J6bkn30ZJlMhpujoLJ46dQEVIAD1UttsLvtLsiAaGE95xiy19w/YbPkpRtwcAQRhKm0LXb
	o87nzZmK4icCye6F0ZzQEu4hD/TFIuAokLYU1SpKv5jJNlSEfcgy2/g5IYGEi8VAXr+e16
	V9+j4/Mag2qKM7pj/3anQe+y5mQMOBc=
Subject: Re: [PATCH 1/3] gnttab: never permit mapping transitive grants
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <156559d5-853a-5bb9-942b-f623627e0907@suse.com>
 <3620b977-4182-db2c-e2f9-71e1c6c4e721@suse.com>
 <6423c7ef-66d2-8867-50a0-b75ae63aaef6@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <0c918000-c5aa-e297-569e-621e6d5c5d3b@suse.com>
Date: Thu, 18 Feb 2021 12:31:34 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <6423c7ef-66d2-8867-50a0-b75ae63aaef6@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 18.02.2021 11:25, Julien Grall wrote:
> On 17/02/2021 10:46, Jan Beulich wrote:
>> Transitive grants allow an intermediate domain I to grant a target
>> domain T access to a page which origin domain O did grant I access to.
>> As an implementation restriction, T is not allowed to map such a grant.
>> An attempt is currently made to enforce this restriction by marking
>> active entries resulting from transitive grants as is-sub-page; sub-page
>> grants, for obvious reasons, don't allow mapping. However, marking (and checking)
>> only active entries is insufficient, as a map attempt may also occur on
>> a grant not otherwise in use. When not presently in use (pin count zero)
>> the grant type itself needs checking. Otherwise T may be able to map an
>> unrelated page owned by I. This is because the "transitive" sub-
>> structure of the v2 union would end up being interpreted as "full_page"
>> sub-structure instead. The low 32 bits of the GFN used would match the
>> grant reference specified in I's transitive grant entry, while the upper
>> 32 bits could be random (depending on how exactly I sets up its grant
>> table entries).
>>
>> Note that if one mapping already exists and the granting domain _then_
>> changes the grant to GTF_transitive (which the domain is not supposed to
>> do), the changed type will only be honored after the pin count has gone
>> back to zero. This is no different from e.g. GTF_readonly or
>> GTF_sub_page becoming set when a grant is already in use.
>>
>> While adjusting the implementation, also adjust commentary in the public
>> header to better reflect reality.
>>
>> Fixes: 3672ce675c93 ("Transitive grant support")
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> The change in grant_table.c looks good to me:
> 
> Acked-by: Julien Grall <jgrall@amazon.com>

Thanks.

>> --- a/xen/include/public/grant_table.h
>> +++ b/xen/include/public/grant_table.h
>> @@ -166,11 +166,13 @@ typedef struct grant_entry_v1 grant_entr
>>   #define GTF_type_mask       (3U<<0)
>>   
>>   /*
>> - * Subflags for GTF_permit_access.
>> + * Subflags for GTF_permit_access and GTF_transitive.
>>    *  GTF_readonly: Restrict @domid to read-only mappings and accesses. [GST]
>>    *  GTF_reading: Grant entry is currently mapped for reading by @domid. [XEN]
>>    *  GTF_writing: Grant entry is currently mapped for writing by @domid. [XEN]
>> - *  GTF_PAT, GTF_PWT, GTF_PCD: (x86) cache attribute flags for the grant [GST]
>> + * Further subflags for GTF_permit_access only.
>> + *  GTF_PAT, GTF_PWT, GTF_PCD: (x86) cache attribute flags to be used for
>> + *                             mappings of the grant [GST]
>>    *  GTF_sub_page: Grant access to only a subrange of the page.  @domid
>>    *                will only be allowed to copy from the grant, and not
>>    *                map it. [GST]
> 
> Do we have any check preventing GTF_sub_page and GTF_transitive to be 
> set together?

No, and I also don't think we need one. For one thing, transitive
grants are implicitly treated as sub-page ones (I admit this is an
implementation detail, not really an ABI aspect), so the flag is
simply ignored in this case. Much like - see patch 3 - the flag
ought to also be ignored for v1 grants.
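
To illustrate the aliasing the description above talks about, here is a
stand-alone reduction of the v2 union layout (fields as in the public
grant_table.h, but with the typedefs flattened to fixed-width types;
little-endian assumed, as on x86):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

typedef uint16_t domid_t;
typedef uint32_t grant_ref_t;

typedef struct {
    uint16_t flags;
    domid_t  domid;
} grant_entry_header_t;

/* Reduced copy of the public grant_entry_v2 union: the "transitive"
 * and "full_page" views overlay the same 16 bytes of storage. */
union grant_entry_v2 {
    grant_entry_header_t hdr;
    struct {
        grant_entry_header_t hdr;
        uint32_t pad0;
        uint64_t frame;
    } full_page;
    struct {
        grant_entry_header_t hdr;
        domid_t  trans_domid;
        uint16_t pad0;
        grant_ref_t gref;
    } transitive;
    uint32_t spacer[4];
};

/* Reading a transitive entry through the full_page view: the low
 * 32 bits of "frame" are exactly the transitive grant reference,
 * while the upper 32 bits come from unrelated storage. */
uint64_t frame_seen_by_full_page_view(grant_ref_t gref)
{
    union grant_entry_v2 e;

    memset(&e, 0, sizeof(e));
    e.transitive.gref = gref;
    return e.full_page.frame;
}
```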

Jan


From xen-devel-bounces@lists.xenproject.org Thu Feb 18 11:40:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 Feb 2021 11:40:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86542.162621 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lChfX-0008QH-3r; Thu, 18 Feb 2021 11:40:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86542.162621; Thu, 18 Feb 2021 11:40:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lChfX-0008QA-0P; Thu, 18 Feb 2021 11:40:27 +0000
Received: by outflank-mailman (input) for mailman id 86542;
 Thu, 18 Feb 2021 11:40:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=x3vz=HU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lChfW-0008Q5-5f
 for xen-devel@lists.xenproject.org; Thu, 18 Feb 2021 11:40:26 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 14695b22-27e0-48d1-8a49-0424e7033bfd;
 Thu, 18 Feb 2021 11:40:24 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 221AAAD78;
 Thu, 18 Feb 2021 11:40:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 14695b22-27e0-48d1-8a49-0424e7033bfd
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613648424; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=LHtRAmoS/Hgdkg+Xo3Sq2kdtxx+/kmpAVLud7f4+3YE=;
	b=J6nGnTXm4+E+YCtZcyU8eEDGqr8xCNbqfFcXMrVY8JMjKzp3X9t3SbAo6nfBwYxViNCFsb
	LRrIs+soXlJSUfgxS3lHVyx1HXiyW1HCGa0/eTnLC25z2EWqSlKfi1IDprQLjV9vZ8NwjJ
	rw+rmnhc1nRyYXUsw707DWq3aObJgeg=
Subject: Re: [PATCH] firmware: don't build hvmloader if it is not needed
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>, xen-devel@lists.xenproject.org,
 cardoe@cardoe.com, andrew.cooper3@citrix.com, wl@xen.org,
 iwj@xenproject.org, anthony.perard@citrix.com,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
References: <20210213020540.27894-1-sstabellini@kernel.org>
 <20210213135056.GA6191@mail-itl>
 <4d9200cd-bd4b-e429-5c96-7a4399bb00b4@suse.com>
 <alpine.DEB.2.21.2102161016000.3234@sstabellini-ThinkPad-T480s>
 <5a574326-9560-e771-b84f-9d4f348b7f5f@suse.com>
 <alpine.DEB.2.21.2102171529460.3234@sstabellini-ThinkPad-T480s>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <416e26b7-0e24-a9ee-6f9a-732f77f7e0cc@suse.com>
Date: Thu, 18 Feb 2021 12:40:23 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2102171529460.3234@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 18.02.2021 00:45, Stefano Stabellini wrote:
> Given this, I take it there is no 32-bit build env? A bit of Googling
> tells me that gcc on Alpine Linux is compiled without multilib support.
> 
> 
> That said I was looking at the Alpine Linux APKBUILD script:
> https://gitlab.alpinelinux.org/alpine/aports/-/blob/master/main/xen/APKBUILD
> 
> And I noticed this patch that looks suspicious:
> https://gitlab.alpinelinux.org/alpine/aports/-/blob/master/main/xen/musl-hvmloader-fix-stdint.patch

Indeed. I find it very odd that they have a bi-arch gcc (allowing
-m32) but no suitable further infrastructure (headers). So perhaps
configure should probe for "gcc -m32" producing a uint64_t that is
actually 64 bits wide, and disable building hvmloader otherwise
(and - importantly - no matter whether it would actually be needed;
the alternative being to fail configuring altogether)? Until - as said
before - we've made hvmloader properly freestanding.
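
The probe's test program could be as simple as the sketch below; the
configure glue (not shown) would compile it with "gcc -m32 -c" and
treat a compile failure as "no usable 32-bit environment":

```c
/* conftest.c - compile-tested by configure with "gcc -m32 -c";
 * nothing needs to run.  The array type is only valid if uint64_t is
 * really 64 bits wide, so an environment whose -m32 headers are
 * broken (as in the musl case above) fails the probe at compile
 * time via a negative array size. */
#include <stdint.h>

typedef char uint64_t_is_64_bits[(sizeof(uint64_t) == 8) ? 1 : -1];
```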

Jan


From xen-devel-bounces@lists.xenproject.org Thu Feb 18 11:46:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 Feb 2021 11:46:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86544.162634 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lChl2-00009S-Pt; Thu, 18 Feb 2021 11:46:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86544.162634; Thu, 18 Feb 2021 11:46:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lChl2-00009L-MM; Thu, 18 Feb 2021 11:46:08 +0000
Received: by outflank-mailman (input) for mailman id 86544;
 Thu, 18 Feb 2021 11:46:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UbXw=HU=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lChl1-00009G-S1
 for xen-devel@lists.xenproject.org; Thu, 18 Feb 2021 11:46:07 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 198238a0-3583-45e7-a27f-4142ec84de20;
 Thu, 18 Feb 2021 11:46:07 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3B2AEACD4;
 Thu, 18 Feb 2021 11:46:06 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 198238a0-3583-45e7-a27f-4142ec84de20
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613648766; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=IBzfMKiSeUEFSAHrVku/Sd9rNaUrGBe91U4xWJTdn5k=;
	b=pbyeaglKkYHe3qoageLH/xq1rK520BEBJihrV1YqsS0Ux9Azs2fSTUkHwz5+hKLzahIEOe
	MJX2pyQ5wcGYkqsAlGBDIfJRjyxPLBqk0wBhx54ZaOWpgOblErKLNJhNjioCfTv3p+KXIJ
	yR6HKu3KiaT452SdbFdDpNtnlCubZfQ=
Subject: Re: [PATCH v2 8/8] xen/evtchn: use READ/WRITE_ONCE() for accessing
 ring indices
To: Ross Lagerwall <ross.lagerwall@citrix.com>,
 xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20210211101616.13788-1-jgross@suse.com>
 <20210211101616.13788-9-jgross@suse.com>
 <6818fcde-abab-1250-119c-d0ccb8c80488@citrix.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <ee288ba1-d043-81a7-3900-0b9587625743@suse.com>
Date: Thu, 18 Feb 2021 12:46:05 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <6818fcde-abab-1250-119c-d0ccb8c80488@citrix.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="wPTjx05UixzXWOdncPbHQbYBNlaLnhaEg"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--wPTjx05UixzXWOdncPbHQbYBNlaLnhaEg
Content-Type: multipart/mixed; boundary="gBsCtQn6MpSbLUGhbnq3HWLUqY6yu4TxQ";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Ross Lagerwall <ross.lagerwall@citrix.com>,
 xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Message-ID: <ee288ba1-d043-81a7-3900-0b9587625743@suse.com>
Subject: Re: [PATCH v2 8/8] xen/evtchn: use READ/WRITE_ONCE() for accessing
 ring indices
References: <20210211101616.13788-1-jgross@suse.com>
 <20210211101616.13788-9-jgross@suse.com>
 <6818fcde-abab-1250-119c-d0ccb8c80488@citrix.com>
In-Reply-To: <6818fcde-abab-1250-119c-d0ccb8c80488@citrix.com>

--gBsCtQn6MpSbLUGhbnq3HWLUqY6yu4TxQ
Content-Type: multipart/mixed;
 boundary="------------C2FAD25BBA0FE1C6ACA23C63"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------C2FAD25BBA0FE1C6ACA23C63
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 17.02.21 14:29, Ross Lagerwall wrote:
> On 2021-02-11 10:16, Juergen Gross wrote:
>> For avoiding read- and write-tearing by the compiler use READ_ONCE()
>> and WRITE_ONCE() for accessing the ring indices in evtchn.c.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>> V2:
>> - modify all accesses (Julien Grall)
>> ---
>>   drivers/xen/evtchn.c | 25 ++++++++++++++++---------
>>   1 file changed, 16 insertions(+), 9 deletions(-)
>>
>> diff --git a/drivers/xen/evtchn.c b/drivers/xen/evtchn.c
>> index 421382c73d88..620008f89dbe 100644
>> --- a/drivers/xen/evtchn.c
>> +++ b/drivers/xen/evtchn.c
>> @@ -162,6 +162,7 @@ static irqreturn_t evtchn_interrupt(int irq, void *data)
>>   {
>>   	struct user_evtchn *evtchn = data;
>>   	struct per_user_data *u = evtchn->user;
>> +	unsigned int prod, cons;
>>   
>>   	WARN(!evtchn->enabled,
>>   	     "Interrupt for port %u, but apparently not enabled; per-user %p\n",
>> @@ -171,10 +172,14 @@ static irqreturn_t evtchn_interrupt(int irq, void *data)
>>   
>>   	spin_lock(&u->ring_prod_lock);
>>   
>> -	if ((u->ring_prod - u->ring_cons) < u->ring_size) {
>> -		*evtchn_ring_entry(u, u->ring_prod) = evtchn->port;
>> +	prod = READ_ONCE(u->ring_prod);
>> +	cons = READ_ONCE(u->ring_cons);
>> +
>> +	if ((prod - cons) < u->ring_size) {
>> +		*evtchn_ring_entry(u, prod) = evtchn->port;
>>   		smp_wmb(); /* Ensure ring contents visible */
>> -		if (u->ring_cons == u->ring_prod++) {
>> +		if (cons == prod++) {
>> +			WRITE_ONCE(u->ring_prod, prod);
>>   			wake_up_interruptible(&u->evtchn_wait);
>>   			kill_fasync(&u->evtchn_async_queue,
>>   				    SIGIO, POLL_IN);
> 
> This doesn't work correctly since now u->ring_prod is only updated if cons == prod++.

Right. Thanks for noticing.
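
For the record, the shape of the fix (publish the producer index on
every successful insert, not only in the ring-was-empty case) can be
modelled stand-alone as below; READ_ONCE/WRITE_ONCE are reduced to
volatile accesses, and the spinlock, smp_wmb() and real wakeups are
stubbed out:

```c
#include <stdint.h>

/* Stand-alone model of the corrected evtchn producer: the index must
 * be published after every successful insert, not only when the ring
 * was previously empty. */
#define READ_ONCE(x)     (*(volatile __typeof__(x) *)&(x))
#define WRITE_ONCE(x, v) (*(volatile __typeof__(x) *)&(x) = (v))

struct ring {
    unsigned int ring_prod, ring_cons, ring_size;
    uint32_t entries[8];
    int wakeups;                       /* counts would-be wake_up calls */
};

void produce(struct ring *u, uint32_t port)
{
    unsigned int prod = READ_ONCE(u->ring_prod);
    unsigned int cons = READ_ONCE(u->ring_cons);

    if ((prod - cons) < u->ring_size) {
        u->entries[prod & (u->ring_size - 1)] = port;
        /* smp_wmb() would go here: contents visible before the index. */
        WRITE_ONCE(u->ring_prod, prod + 1);   /* publish unconditionally */
        if (cons == prod)
            u->wakeups++;              /* ring was empty: wake consumer */
    }
}
```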


Juergen


--------------C2FAD25BBA0FE1C6ACA23C63
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------C2FAD25BBA0FE1C6ACA23C63--

--gBsCtQn6MpSbLUGhbnq3HWLUqY6yu4TxQ--

--wPTjx05UixzXWOdncPbHQbYBNlaLnhaEg
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmAuU30FAwAAAAAACgkQsN6d1ii/Ey/P
Kwf/VW9ZcEHbrwGNFvfbGIyw3smta72xdAKPF0aGstPUpbZ+sB98qWqf08JRnY9vFMj9KsVDHiA7
htlUBRCQbfK2KuLNP87L5GHc3sNyi5k37ylJ/TdRshwR/J9DpaBEGQzUB0Y3L3CEzE7cNWna3GHr
s0YIsBZhjeU0Sk/j+yqHp/PeXBk4XKtfHIOlUQjhHKxlfW9N3X61lK0MEG4GLKENIjfKIr2CnMGf
ovSJQqOiU43BR7rV4zgclN+WKyv2dZntDalcqG2rB0VmueNQZkgusCXVdKZWd6JdUjPHaSuu4TIB
bJBQEhoO7PaqLARC4bebpOaF6frb03kZc882C86I6w==
=HqmF
-----END PGP SIGNATURE-----

--wPTjx05UixzXWOdncPbHQbYBNlaLnhaEg--


From xen-devel-bounces@lists.xenproject.org Thu Feb 18 11:47:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 Feb 2021 11:47:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86546.162645 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lChm0-0000HI-4J; Thu, 18 Feb 2021 11:47:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86546.162645; Thu, 18 Feb 2021 11:47:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lChm0-0000H3-1D; Thu, 18 Feb 2021 11:47:08 +0000
Received: by outflank-mailman (input) for mailman id 86546;
 Thu, 18 Feb 2021 11:47:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UbXw=HU=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lChly-0000Gy-W2
 for xen-devel@lists.xenproject.org; Thu, 18 Feb 2021 11:47:07 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 239e4b99-9bc6-4847-8fc0-a589a73e4141;
 Thu, 18 Feb 2021 11:47:06 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6C8E9AD78;
 Thu, 18 Feb 2021 11:47:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 239e4b99-9bc6-4847-8fc0-a589a73e4141
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613648825; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=vP1S07ca3sqxgRiRY6So84F1f0cq3n+/55ZYZDA6jwQ=;
	b=bbqMqQPt/mCi0FQMo/l8RuuHzWubuWl+3oL3+ISKNpzdVgWomKjvgyFU4aLwfBrO5r6YSB
	bBsbYGkktn/vBHy77BiKC4W4Wa5m57T8d12DH4zahL7+jS6c92GYd9iAcQZQ1j0dYFgVu2
	SmPIKAaqojOJEWKS/EnIXqK8M2Zhyv0=
Subject: Re: [PATCH v2 3/8] xen/events: avoid handling the same event on two
 cpus at the same time
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
References: <20210211101616.13788-1-jgross@suse.com>
 <20210211101616.13788-4-jgross@suse.com>
 <6cc74d6b-d537-0e9f-9da8-45456f6b703e@oracle.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <63e42922-204d-9d0e-6a02-49f64e7e8885@suse.com>
Date: Thu, 18 Feb 2021 12:47:04 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <6cc74d6b-d537-0e9f-9da8-45456f6b703e@oracle.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="nyMyUXtbBYfCQJffcUfegBSEpdoFj1b9E"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--nyMyUXtbBYfCQJffcUfegBSEpdoFj1b9E
Content-Type: multipart/mixed; boundary="aIEYVeglYW0935UFABA3RFn6vG3A7vknV";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
Message-ID: <63e42922-204d-9d0e-6a02-49f64e7e8885@suse.com>
Subject: Re: [PATCH v2 3/8] xen/events: avoid handling the same event on two
 cpus at the same time
References: <20210211101616.13788-1-jgross@suse.com>
 <20210211101616.13788-4-jgross@suse.com>
 <6cc74d6b-d537-0e9f-9da8-45456f6b703e@oracle.com>
In-Reply-To: <6cc74d6b-d537-0e9f-9da8-45456f6b703e@oracle.com>

--aIEYVeglYW0935UFABA3RFn6vG3A7vknV
Content-Type: multipart/mixed;
 boundary="------------456ECA628170F25D28062DF7"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------456ECA628170F25D28062DF7
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 15.02.21 22:35, Boris Ostrovsky wrote:
> 
> On 2/11/21 5:16 AM, Juergen Gross wrote:
> 
>> @@ -622,6 +623,7 @@ static void xen_irq_lateeoi_locked(struct irq_info *info, bool spurious)
>>   	}
>>   
>>   	info->eoi_time = 0;
>> +	smp_store_release(&info->is_active, 0);
> 
> 
> Can this be done in lateeoi_ack_dynirq()/lateeoi_mask_ack_dynirq(), after we've masked the channel? Then it will be consistent with how other chips do it, especially with the new helper.

Yes, I think this should work.
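
A minimal model of that placement (mask first, then release-store the
flag); C11 atomics stand in for the kernel primitives, and the struct
and helpers below are simplified stand-ins, not the real event-channel
code:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Toy model of clearing is_active in the ack path only after the
 * channel has been masked, so another cpu re-raising the event cannot
 * observe is_active == 0 while the handler may still run unmasked. */
struct irq_info {
    atomic_bool masked;
    atomic_int  is_active;
};

static void do_mask(struct irq_info *info)
{
    atomic_store(&info->masked, true);
}

void lateeoi_ack_dynirq(struct irq_info *info)
{
    do_mask(info);                              /* mask first ... */
    /* ... then the equivalent of smp_store_release(&info->is_active, 0) */
    atomic_store_explicit(&info->is_active, 0, memory_order_release);
}
```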


Juergen


--------------456ECA628170F25D28062DF7--

--aIEYVeglYW0935UFABA3RFn6vG3A7vknV--

--nyMyUXtbBYfCQJffcUfegBSEpdoFj1b9E
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmAuU7gFAwAAAAAACgkQsN6d1ii/Ey98
ogf+OYHpzRVAg+cjmXBOIZ/G9w3jDjtACYbBCL6aLdsTm5mBymNr96GdNITM2eHwIKj2RjY/A9NO
ZbJFT8kzKZLnKDI7Pz1s/qENqqe2aINBtEeXzln/8IBuZB2dOg236Q+BOtfTuNNK/MljOUm1qRkg
XwlZbaBB5CGe8xZ4uS9tPWvmrQCWYshArgfg8UEoeZGbBeLzQ7Hdyh8Qj9Kh+oaUnKWjmi3xJ9Xc
Y0KJ3N0kohBYNneknpMdWf6YmPXXmjud2QGBY85EO1gvjCw0ozxxqjiTzDQg7HUBEMqvHQvp4lT3
8EzJBjCDsZUk6jbkMzqFJQHYwS9xCPo4EoCFBut7/Q==
=19YR
-----END PGP SIGNATURE-----

--nyMyUXtbBYfCQJffcUfegBSEpdoFj1b9E--


From xen-devel-bounces@lists.xenproject.org Thu Feb 18 11:47:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 Feb 2021 11:47:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86550.162679 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lChmQ-0000PI-13; Thu, 18 Feb 2021 11:47:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86550.162679; Thu, 18 Feb 2021 11:47:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lChmP-0000PA-SS; Thu, 18 Feb 2021 11:47:33 +0000
Received: by outflank-mailman (input) for mailman id 86550;
 Thu, 18 Feb 2021 11:47:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZheJ=HU=xenbits.xen.org=gdunlap@srs-us1.protection.inumbo.net>)
 id 1lChmO-0000Mp-Cm
 for xen-devel@lists.xen.org; Thu, 18 Feb 2021 11:47:32 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 98c26d30-926e-4a86-8b2e-550b9188d054;
 Thu, 18 Feb 2021 11:47:21 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <gdunlap@xenbits.xen.org>)
 id 1lChm8-0004oj-9n; Thu, 18 Feb 2021 11:47:16 +0000
Received: from gdunlap by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <gdunlap@xenbits.xen.org>)
 id 1lChm8-0008V9-8E; Thu, 18 Feb 2021 11:47:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 98c26d30-926e-4a86-8b2e-550b9188d054
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
	Content-Transfer-Encoding:Content-Type;
	bh=qAwJ86xNj6iy18bFGlL0Zzk4WSB6wYKH7ThjA2nJMbc=; b=bcGgWZH0tqptJxUlJZbVCtjYcp
	LFW4FUt251tKzjm4otY0GBQoMCyB8XyeeY6drDkaC+tshsCnldctbYVNIh9loeP0DKpWCDANAOodX
	G0/IwRq008RCT+2y99kvrhTMeWI2hWWfildC8sQFJigWFEOPpQoZPIOGc06X1dkvPXD8=;
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 366 v1 - missed flush in XSA-321 backport
Message-Id: <E1lChm8-0008V9-8E@xenbits.xenproject.org>
Date: Thu, 18 Feb 2021 11:47:16 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

                    Xen Security Advisory XSA-366

                   missed flush in XSA-321 backport

ISSUE DESCRIPTION
=================

An oversight was made when backporting XSA-321, leading to entries in
the IOMMU not being properly updated under certain circumstances.

IMPACT
======

A malicious guest may be able to retain read/write DMA access to
frames returned to Xen's free pool, and later reused for another
purpose.  Host crashes (leading to a Denial of Service) and privilege
escalation cannot be ruled out.

VULNERABLE SYSTEMS
==================

Xen versions up to 4.11, from at least 3.2 onwards, are affected.  Xen
versions 4.12 and newer are not affected.

Only x86 Intel systems are affected.  x86 AMD as well as Arm systems are
not affected.

Only x86 HVM guests using hardware assisted paging (HAP), having a
passed through PCI device assigned, and having page table sharing
enabled can leverage the vulnerability.  Note that page table
sharing will be enabled (by default) only if Xen considers IOMMU and
CPU large page size support compatible.

MITIGATION
==========

Suppressing the use of page table sharing will avoid the vulnerability
(command line option "iommu=no-sharept").

Suppressing the use of large HAP pages will avoid the vulnerability
(command line options "hap_2mb=no hap_1gb=no").

Not passing through PCI devices to HVM guests will avoid the
vulnerability.
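
As a concrete illustration of the first option, on a dom0 that boots Xen
via GRUB2 the setting might be applied roughly as below.  Only the
"iommu=no-sharept" option itself comes from this advisory; the file path
and variable name are assumptions about a typical Linux dom0:

```
# /etc/default/grub (hypothetical example; only "iommu=no-sharept" comes
# from the advisory).  The option must go on the Xen hypervisor command
# line, not the dom0 kernel line.  Regenerate the grub configuration
# (e.g. with your distribution's update-grub equivalent) and reboot.
GRUB_CMDLINE_XEN_DEFAULT="$GRUB_CMDLINE_XEN_DEFAULT iommu=no-sharept"
```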

CREDITS
=======

This issue was reported as a bug by M. Vefa Bicakci, and recognized as
a security issue by Roger Pau Monne of Citrix.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.

Note that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball.  Downstreams are encouraged to update to the
tip of the stable branch before applying these patches.

xsa366-4.11.patch      Xen 4.11.x

$ sha256sum xsa366*
3131c9487b9446655e2e21df4ccf1e003bec471881396d7b2b1a0939f5cbae96  xsa366.meta
8c8c18ca8425e6167535c3cf774ffeb9dcb4572e81c8d2ff4a73fefede2d4d94  xsa366-4.11.patch
$
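
The checksum step above can also be reproduced without the "sha256sum"
binary.  A minimal Python sketch, assuming the attachments have been
saved under the file names given in the advisory:

```python
import hashlib

def sha256_of(path):
    """Return the hex SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Expected digests as published in the advisory above.
EXPECTED = {
    "xsa366.meta":
        "3131c9487b9446655e2e21df4ccf1e003bec471881396d7b2b1a0939f5cbae96",
    "xsa366-4.11.patch":
        "8c8c18ca8425e6167535c3cf774ffeb9dcb4572e81c8d2ff4a73fefede2d4d94",
}

def verify(files=EXPECTED):
    """Map each file name to True/False depending on digest match."""
    return {name: sha256_of(name) == digest for name, digest in files.items()}
```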

NOTE REGARDING LACK OF EMBARGO
==============================

This was reported and debugged publicly, before the security
implications were apparent.
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAmAuU5EMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZMCkIAKq1dU6xOMN3lFqY6LeIV+Pn+JQDvJKhDT+lJT9b
KAP+a44ks5bHHSD6CPyiq5boU5APE7yqiyJnXBycXVDLH6GGjh7uBvc6A00YkeHU
y08l8jxa6/FAyrvCj5P0pYItALwH0NZDtfUE57ueloYUu3KJnyBRtl9icvx/sCa9
CUkpKDpS0te+Rk+G57UPDjGvSPwpIh01vphJ5tyf+2Lrk8rsHTJYWQ7eD8A09jCr
DtSD6FylzEuGGY30vPGLUzXgOm8Nji/WgnXnmmbILCEo8PQs3CcoxN53/F8cYvr6
NRERHKZFhHoLmUUCImoFcApxzzdt11USDnCdEXiAkrEOYsk=
=w9OA
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa366.meta"
Content-Disposition: attachment; filename="xsa366.meta"
Content-Transfer-Encoding: base64

ewogICJYU0EiOiAzNjYsCiAgIlN1cHBvcnRlZFZlcnNpb25zIjogWwogICAg
IjQuMTEiCiAgXSwKICAiVHJlZXMiOiBbCiAgICAieGVuIgogIF0sCiAgIlJl
Y2lwZXMiOiB7CiAgICAiNC4xMSI6IHsKICAgICAgIlJlY2lwZXMiOiB7CiAg
ICAgICAgInhlbiI6IHsKICAgICAgICAgICJTdGFibGVSZWYiOiAiODBjYWQ1
ODRmYjRjMjU5OWFlMTc0MjI2ZTJjOTEzYmIyM2RmM2JmYSIsCiAgICAgICAg
ICAiUHJlcmVxcyI6IFtdLAogICAgICAgICAgIlBhdGNoZXMiOiBbCiAgICAg
ICAgICAgICJ4c2EzNjYtNC4xMS5wYXRjaCIKICAgICAgICAgIF0KICAgICAg
ICB9CiAgICAgIH0KICAgIH0KICB9Cn0=

--=separator
Content-Type: application/octet-stream; name="xsa366-4.11.patch"
Content-Disposition: attachment; filename="xsa366-4.11.patch"
Content-Transfer-Encoding: base64

RnJvbTogUm9nZXIgUGF1IE1vbm5lIDxyb2dlci5wYXVAY2l0cml4LmNvbT4K
U3ViamVjdDogeDg2L2VwdDogZml4IG1pc3NpbmcgSU9NTVUgZmx1c2ggaW4g
YXRvbWljX3dyaXRlX2VwdF9lbnRyeQoKQmFja3BvcnQgb2YgWFNBLTMyMSBt
aXNzZWQgYSBmbHVzaCBpbiBhdG9taWNfd3JpdGVfZXB0X2VudHJ5IHdoZW4K
bGV2ZWwgd2FzIGRpZmZlcmVudCB0aGFuIDAuIFN1Y2ggb21pc3Npb24gd2ls
bCB1bmRlcm1pbmUgdGhlIGZpeCBmb3IKWFNBLTMyMSwgYmVjYXVzZSBwYWdl
IHRhYmxlIGVudHJpZXMgY2FjaGVkIGluIHRoZSBJT01NVSBjYW4gZ2V0IG91
dApvZiBzeW5jIGFuZCBjb250YWluIHN0YWxlIGVudHJpZXMuCgpGaXggdGhp
cyBieSBzbGlnaHRseSByZS1hcnJhbmdpbmcgdGhlIGNvZGUgdG8gcHJldmVu
dCB0aGUgZWFybHkgcmV0dXJuCndoZW4gbGV2ZWwgaXMgZGlmZmVyZW50IHRo
YXQgMC4gTm90ZSB0aGF0IHRoZSBlYXJseSByZXR1cm4gaXMganVzdCBhbgpv
cHRpbWl6YXRpb24gYmVjYXVzZSBmb3JlaWduIGVudHJpZXMgY2Fubm90IGhh
dmUgbGV2ZWwgPiAwLgoKVGhpcyBpcyBYU0EtMzY2LgoKUmVwb3J0ZWQtYnk6
IE0uIFZlZmEgQmljYWtjaSA8bS52LmJAcnVuYm94LmNvbT4KU2lnbmVkLW9m
Zi1ieTogUm9nZXIgUGF1IE1vbm7DqSA8cm9nZXIucGF1QGNpdHJpeC5jb20+
ClJldmlld2VkLWJ5OiBKYW4gQmV1bGljaCA8amJldWxpY2hAc3VzZS5jb20+
Ci0tLQogeGVuL2FyY2gveDg2L21tL3AybS1lcHQuYyB8IDcgKy0tLS0tLQog
MSBmaWxlIGNoYW5nZWQsIDEgaW5zZXJ0aW9uKCspLCA2IGRlbGV0aW9ucygt
KQoKZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni9tbS9wMm0tZXB0LmMgYi94
ZW4vYXJjaC94ODYvbW0vcDJtLWVwdC5jCmluZGV4IDAzNjc3MWY0M2MuLmZk
ZTJmNWY3ZTMgMTAwNjQ0Ci0tLSBhL3hlbi9hcmNoL3g4Ni9tbS9wMm0tZXB0
LmMKKysrIGIveGVuL2FyY2gveDg2L21tL3AybS1lcHQuYwpAQCAtNTMsMTIg
KzUzLDcgQEAgc3RhdGljIGludCBhdG9taWNfd3JpdGVfZXB0X2VudHJ5KGVw
dF9lbnRyeV90ICplbnRyeXB0ciwgZXB0X2VudHJ5X3QgbmV3LAogICAgIGJv
b2xfdCBjaGVja19mb3JlaWduID0gKG5ldy5tZm4gIT0gZW50cnlwdHItPm1m
biB8fAogICAgICAgICAgICAgICAgICAgICAgICAgICAgIG5ldy5zYV9wMm10
ICE9IGVudHJ5cHRyLT5zYV9wMm10KTsKIAotICAgIGlmICggbGV2ZWwgKQot
ICAgIHsKLSAgICAgICAgQVNTRVJUKCFpc19lcHRlX3N1cGVycGFnZSgmbmV3
KSB8fCAhcDJtX2lzX2ZvcmVpZ24obmV3LnNhX3AybXQpKTsKLSAgICAgICAg
d3JpdGVfYXRvbWljKCZlbnRyeXB0ci0+ZXB0ZSwgbmV3LmVwdGUpOwotICAg
ICAgICByZXR1cm4gMDsKLSAgICB9CisgICAgQVNTRVJUKCFsZXZlbCB8fCAh
aXNfZXB0ZV9zdXBlcnBhZ2UoJm5ldykgfHwgIXAybV9pc19mb3JlaWduKG5l
dy5zYV9wMm10KSk7CiAKICAgICBpZiAoIHVubGlrZWx5KHAybV9pc19mb3Jl
aWduKG5ldy5zYV9wMm10KSkgKQogICAgIHsK

--=separator--


From xen-devel-bounces@lists.xenproject.org Thu Feb 18 11:48:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 Feb 2021 11:48:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86580.162706 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lChne-0000pB-18; Thu, 18 Feb 2021 11:48:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86580.162706; Thu, 18 Feb 2021 11:48:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lChnd-0000p4-Tv; Thu, 18 Feb 2021 11:48:49 +0000
Received: by outflank-mailman (input) for mailman id 86580;
 Thu, 18 Feb 2021 11:48:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gXm/=HU=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lChnc-0000m1-I4
 for xen-devel@lists.xenproject.org; Thu, 18 Feb 2021 11:48:48 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5039829f-1079-44d6-9b90-2b9138f9cebd;
 Thu, 18 Feb 2021 11:48:45 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5039829f-1079-44d6-9b90-2b9138f9cebd
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613648925;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=FXHFAYYGqfbINU5bAK19nnqPuljhQWtO3/JhdCU9WU4=;
  b=O1NfPotsqmNpY24038KXY1JBIkH6bJByDYjoPulAlRkZtFl9OGouQsKa
   NpMQb+WDcZ1ThEwNbvC8IwTH8iAQpuqIzX7hNEENmvusRz2+/jmx7SJsw
   D7gjS92GayCUprrFPD8UI/3w+NLSnJ1G8YIymRsW6HfQtjBcP3VnSiqT0
   A=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: R8d/KEYbsq9hLrw5VvgPGwPa5P0OoHRiJKQnf30sTjOpxijvRhqtaA1wXGsbjYmNJUiH0BifMo
 JA/vxD3cAGGjNMFVu3/26KCGkS9D7YFdUsgfNo0IX8TypYwjFcmqO9pC9vuQIZ+F7xSDfHqUQ9
 CzxgDuQ1WSt0msyo+KYnw9jifrGwrW1Ikzm1+rtsuEXbBaMHDgncdfhRpgXXdxu9W/TIepBOW9
 hJSj+mzytYTCKecGvi/g3nfIe2KL3qGL32ILw3gSQ5MYY8tGKlyK/Bbo8ExZItosmMCL6JyVgC
 jGA=
X-SBRS: 5.2
X-MesageID: 37872918
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,187,1610427600"; 
   d="scan'208";a="37872918"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=aFJxjkdIeGmEkSbhFkxPpJ5KWsp/wxHI7OSd7sA8/cwqLDKcn72VsqzGAqIln0jc6hxUTUEI3ujGD5sagr5iKMtbid/YASItGzpk58/6cksN1Pl/+vWFNrSK88wuKIQT/9vZqUIyXhNo+x9a6odJeUOUAabggxFkUIGHwiMXqVR0fo5a1sAYrvibDZvJSGtrtw1WI37pg7oNADglYImIzbre2QJaK7jpZR/mRTgnuy3wwYR7b2UvfyDxRRoocnssLkgaZnSkx9iID9tGaKfvgcKELsByGxxSQKRxTqV4o8kzkHscXazN/twA0IXoc5PTbTQ/VzID3f8rQWASRfRwGQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ELahRdeN5J9kMXWhPEXU97y4TG25/2rHS0WXuLbMYDk=;
 b=EMcJvgPw4Fm70zggwKL44EBhh2ssgMnDLwCZzyPT3xZVruCO7Ekw7+2YGvbyTxsNlx4y9gzG9MaarfzUcj4wyqROviAeQhSk2cfnOYU54lEIk7rdMjQaJmlzeq7LruVFmULbIPsXLHjdcbH1PrfKw3uogfa6kMvgg7VYyuQCpR6eb2p0oul/u1uax30LgssIycIHfLt/QfOTK7ycUyxszLpIcN4Q+v/F95/B1Jxm7ivBmJ9T/dZcuQ9jjCINxukyAIbzP2o4xT9g+DFGXbpndzeV2pYHLx9mPk4Dkx89zqAXYmX00EP62xXgIsAvNH6cC3AUdjdbli5VKcUuW4OxNg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ELahRdeN5J9kMXWhPEXU97y4TG25/2rHS0WXuLbMYDk=;
 b=bM9iEkd9TXd4rRYk9rcG4P4owTg5b0ds8vxDDn0BJhFsKdnPQrKBcTI3XpFIZqxD4r53AbCdHJhDu2Ej+tIEPS/Lad0cAU33PM/STl/zb9YYAWiMIHpSOQUCnaO2aZr//YjXXXAqXJAFhNOM5cHhBX5WL5xzMxAgVvmVu/28UPU=
Date: Thu, 18 Feb 2021 12:48:35 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
CC: <xen-devel@lists.xenproject.org>, <iwj@xenproject.org>, <wl@xen.org>,
	<anthony.perard@citrix.com>, <jbeulich@suse.com>,
	<andrew.cooper3@citrix.com>, <jun.nakajima@intel.com>, <kevin.tian@intel.com>
Subject: Re: [PATCH v2 4/4] tools/libs: Apply MSR policy to a guest
Message-ID: <YC5UEzGwoqmLvh0n@Air-de-Roger>
References: <1611182952-9941-1-git-send-email-boris.ostrovsky@oracle.com>
 <1611182952-9941-5-git-send-email-boris.ostrovsky@oracle.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <1611182952-9941-5-git-send-email-boris.ostrovsky@oracle.com>
X-ClientProxiedBy: PR1P264CA0012.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:102:19e::17) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: a38e1963-c3eb-4a3e-94fa-08d8d40322e1
X-MS-TrafficTypeDiagnostic: DM6PR03MB5323:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB53236C41867BC625E3FC83878F859@DM6PR03MB5323.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7691;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: T5vlmrz9C4nLozIFGJinmUzKe1fzoUU4exNGrgPr/bQ6BUrNxvKFRRrLJ1lQXXNdO7fgZEw98eE8Q32aCM0PHYM1AdpSuzRZIvoBpSbE/tFWrBC0GZ1EGMlmo0LlLCj62SjeKskEVvlKl/yZlTmDlLOwat//oXTRmX2xVoZjKacHxz6ncw1PU1xK/KltrwzJkox8Xl6lS/PoGrJzUS+vUOjNiIzhIIWgR5s4oOiD6QCaytPGQmHIM1wMQzVYeBP09rXFdAJhHwzoeQMQDRi66eLeHYxDJYJq3pPYGs8LcWIztzypO/W2yaXY4FFF2zMys1Zck81LcNjaYcU9meTDSKmJ/FoZVPCnjIoWHJ7+uf6KnT01JnTj4LQXGoav7fhrTnlb7grA4o5P7tcqk7YnoJKD2A1x16sBJfGUuTlQXNSZ9w8ed2uHkOdogpHcLiXZ3bMGFIkjvTDwd2amtcQdcYjbBZ82b5UhemEaZYPjn6XSR9kn3zZHOjosVxn0HyOrOtZzaiq5QbBHowyzXpJbFglOpH/wcCrGLBr28EM3+NpnbHvVn8PU1RildI7x+0KeoUdRM3x1nKaFgOyzRc+lkEjk6AR6CNyyKYB2ZLv9805iVRfW5DIEubjlG/ZJDlQjO25D0j/Uhl87OYdI3IEsDw==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(396003)(136003)(346002)(376002)(366004)(39860400002)(6666004)(26005)(16526019)(956004)(6486002)(316002)(6496006)(186003)(83380400001)(8676002)(4326008)(33716001)(86362001)(85182001)(9686003)(6916009)(478600001)(5660300002)(66476007)(2906002)(66556008)(8936002)(66946007)(2004002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?VFE1ZUxsUmtZRVQrbTVRcGtOSWdZOFkxQVM3V3ZrOWtGRlFVRlBOMHd5SHpo?=
 =?utf-8?B?dXRIYSswMGZBczBVUzZqUEQvV1NldlZTK0c5Q2NlS0FUdjB0T2t2TGc4cnBS?=
 =?utf-8?B?RzRJVytTNVRHVFU5dGxDcEtIWW9BaVJyR1d2aFNUbkkvd1B6aktiUTBDcUtF?=
 =?utf-8?B?YkxTTHNod21qR1I5SEJYK21mdGJ6dDFOTmZOcWJSaGlVR3JEUXloODVacjdH?=
 =?utf-8?B?c0ZZUnZXQTY1T3FLd2JBVkJ1enlaUmQ0VkQrZlJ6amJrcFh5UHpscFNZZTg1?=
 =?utf-8?B?UkNHR3ZLcGVob2hHMC9PdmdaeWdwOGxxN1NRbk1ZczF6SWlQTG1nSU1waHJY?=
 =?utf-8?B?bFhLd0V2NjBWSEhna0ZUWDlqQlp2S3BzR2Q2eTQrVHRFRWUwUS9vMXF1dnNl?=
 =?utf-8?B?MjZOSllOaWtvcUFqMEd3M1QzNFIyc003b2Q1TWpiR2p6VXk4bGdGZXR6ZUk5?=
 =?utf-8?B?bDZnUitGUEZSMmtwc1BiOWUwTXljK3RuMVlaanY0VmRuZWtJbWlpT1R5eHV2?=
 =?utf-8?B?b3prQUpIRm4wN1ROaEh6Ky94Z0FHcVRocG9sS2V6cVR1VkRyWlVPODgvWm5n?=
 =?utf-8?B?SVFzTHVQUmUvaXRMUGhWWlhUbUowQmpHRlNjRUtUVHpGVVp4TndvaU94Z0Jk?=
 =?utf-8?B?THZ0UnZ2UWxVc0tlZmJkdXNEV2xxT0FKSk85ZVlSRXpwVE5adUVqOXVXckRN?=
 =?utf-8?B?d3VKRXpGYS9TU2ZGdjBUZUJCaTlaMVFjY2crSk54aFlPQUlQS0JVYk9QM1h6?=
 =?utf-8?B?eVRHSG1DYU1xMmxXbmRVeGlLNk5nVFNIQllZa3dVNm9DL0pqKzY3UHZiWXFB?=
 =?utf-8?B?b01MTm9jR1BHZUlzeS9KQktmQTlhZFJ5NTkzaFlrUVVOTjFUQ3pNVERQUDFp?=
 =?utf-8?B?dDhrQjQ2c0FBdGwyK3dzTGc0YmxvVnNYeVl6bkwwQ1pTb2IvL3R6RmhNUTNU?=
 =?utf-8?B?US92VzZVdjVnblYrd0tUOXphTmtFSGM4ck0rbEZxemR2em1LUkVZOExrczUy?=
 =?utf-8?B?ZlVrc3ZsVll3dWFSTGFpcDBsamhlNFFHN0kvREVFZDJDaWdsRmJMZHluOXRp?=
 =?utf-8?B?WnJpSGlEYmFFRFFtdktvVmJZOVloZkdRMTIxU1hCOWMraWZrZVJnM2dqamp3?=
 =?utf-8?B?czVCZmlhTW5JWVM5ZTc4eitjSmhpNzRDMGZKTlJkSWZMdm5HNWZZZGtDc1B4?=
 =?utf-8?B?REJiR04wRlpUamh5dVE1S1VpMmZ2TFZmbS8yUlpQemp6YkZLbko1V3lrRzFt?=
 =?utf-8?B?L2FWc1YrVTN3bWpDazVqSFQzWGZGR1VYa1RUejErSTQrbWhPNmROSnF5cGVt?=
 =?utf-8?B?MVNZY1UrK3VGcmxvTlZOOGI1ak05ZW13WUFiallNNmtmcVNwK3RoTDlOdUNi?=
 =?utf-8?B?Z3hQQW4vTlJsQWs1MHljNXFxNnlUZnRxUFJwNTVlTUV2Y3BYSFNDK3djTnRU?=
 =?utf-8?B?MWRWcEhRN2p6VDBVNEdGWGNqYzkwQVR4d0RxblUwUG1hcnhGY2plTTcyNkQz?=
 =?utf-8?B?cnlpZUptdmpVZjFTVzhIdmtxRXBhUEVzWGdHSE1sa0tCTjNZQ0dicGtISGtW?=
 =?utf-8?B?a202NWovQXhYejkrbVIwQlZHMjYyRTdMaDI0SUlId0J2akpRWW1FanRoeGNL?=
 =?utf-8?B?Q1FKTDllUXp4RVZJWlVMNWUycGdJMjF3ZUpNYnJhYXI1eWJIVmcvNjQyRXQ3?=
 =?utf-8?B?eGZtQlFiNjJtLzdwVmxHVnlOb1BFdGl4eUlOZ2dQWXZ0SWxBaXJqV283YnJx?=
 =?utf-8?Q?SOp+zhjoPW/5eHPCoUXqXw/Em5Zsiy4WrZWmq23?=
X-MS-Exchange-CrossTenant-Network-Message-Id: a38e1963-c3eb-4a3e-94fa-08d8d40322e1
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Feb 2021 11:48:40.9568
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Had/HMqjVVEfdqNK37gCw/syx5C/JuomAFVb6jTo0FLxFVtOEGRG+ZY2+woanyr7WQMN8PEUY2gusLFZx0ntkQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB5323
X-OriginatorOrg: citrix.com

On Wed, Jan 20, 2021 at 05:49:12PM -0500, Boris Ostrovsky wrote:
> When creating a guest, if ignore_msrs option has been specified,
> apply it to guest's MSR policy.
> 
> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> ---
>  tools/include/xenctrl.h           |   2 +
>  tools/libs/guest/Makefile         |   1 +
>  tools/libs/guest/xg_msrs_x86.c    | 110 ++++++++++++++++++++++++++++++++++++++
>  tools/libs/light/libxl_dom.c      |   5 +-
>  tools/libs/light/libxl_internal.h |   2 +
>  tools/libs/light/libxl_x86.c      |   7 +++
>  6 files changed, 125 insertions(+), 2 deletions(-)
>  create mode 100644 tools/libs/guest/xg_msrs_x86.c
> 
> diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
> index 3796425e1eca..1d6a38e73dcf 100644
> --- a/tools/include/xenctrl.h
> +++ b/tools/include/xenctrl.h
> @@ -1835,6 +1835,8 @@ int xc_cpuid_apply_policy(xc_interface *xch,
>                            const uint32_t *featureset,
>                            unsigned int nr_features, bool pae, bool itsc,
>                            bool nested_virt, const struct xc_xend_cpuid *xend);
> +int xc_msr_apply_policy(xc_interface *xch, uint32_t domid,
> +                        unsigned int ignore_msr);
>  int xc_mca_op(xc_interface *xch, struct xen_mc *mc);
>  int xc_mca_op_inject_v2(xc_interface *xch, unsigned int flags,
>                          xc_cpumap_t cpumap, unsigned int nr_cpus);
> diff --git a/tools/libs/guest/Makefile b/tools/libs/guest/Makefile
> index 1c729040b337..452155ea0385 100644
> --- a/tools/libs/guest/Makefile
> +++ b/tools/libs/guest/Makefile
> @@ -56,6 +56,7 @@ SRCS-y                 += xg_dom_compat_linux.c
>  
>  SRCS-$(CONFIG_X86)     += xg_dom_x86.c
>  SRCS-$(CONFIG_X86)     += xg_cpuid_x86.c
> +SRCS-$(CONFIG_X86)     += xg_msrs_x86.c
>  SRCS-$(CONFIG_ARM)     += xg_dom_arm.c
>  
>  ifeq ($(CONFIG_LIBXC_MINIOS),y)
> diff --git a/tools/libs/guest/xg_msrs_x86.c b/tools/libs/guest/xg_msrs_x86.c
> new file mode 100644
> index 000000000000..464ce9292ad8
> --- /dev/null
> +++ b/tools/libs/guest/xg_msrs_x86.c
> @@ -0,0 +1,110 @@
> +/******************************************************************************
> + * xc_msrs_x86.c
> + *
> + * Update MSR policy of a domain.
> + *
> + * Copyright (c) 2021, Oracle and/or its affiliates.
> + *
> + * This library is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU Lesser General Public
> + * License as published by the Free Software Foundation;
> + * version 2.1 of the License.
> + *
> + * This library is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> + * Lesser General Public License for more details.
> + *
> + * You should have received a copy of the GNU Lesser General Public
> + * License along with this library; If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include "xc_private.h"
> +#include "xen/lib/x86/msr.h"
> +
> +
> +
> +int xc_msr_apply_policy(xc_interface *xch, uint32_t domid, unsigned int ignore_msr)
> +{
> +    int rc;
> +    unsigned int nr_leaves, nr_msrs;
> +    xen_msr_entry_t *msrs = NULL;
> +    struct msr_policy *p = NULL;
> +    xc_dominfo_t di;
> +    unsigned int err_leaf, err_subleaf, err_msr;
> +
> +    if ( xc_domain_getinfo(xch, domid, 1, &di) != 1 ||
> +         di.domid != domid )
> +    {
> +        ERROR("Failed to obtain d%d info", domid);
> +        rc = -ESRCH;
> +        goto out;
> +    }
> +
> +    rc = xc_get_cpu_policy_size(xch, &nr_leaves, &nr_msrs);
> +    if ( rc )
> +    {
> +        PERROR("Failed to obtain policy info size");
> +        rc = -errno;
> +        goto out;
> +    }
> +
> +    rc = -ENOMEM;
> +    if ( (msrs = calloc(nr_msrs, sizeof(*msrs))) == NULL ||
> +         (p = calloc(1, sizeof(*p))) == NULL )
> +        goto out;
> +
> +    /* Get the domain's default policy. */
> +    nr_leaves = 0;
> +    rc = xc_get_system_cpu_policy(xch, di.hvm ? XEN_SYSCTL_cpu_policy_hvm_default
> +                                              : XEN_SYSCTL_cpu_policy_pv_default,
> +                                  &nr_leaves, NULL, &nr_msrs, msrs);
> +    if ( rc )
> +    {
> +        PERROR("Failed to obtain %s default policy", di.hvm ? "hvm" : "pv");
> +        rc = -errno;
> +        goto out;
> +    }

Why not use xc_get_domain_cpu_policy instead so that you can avoid the
call to xc_domain_getinfo?

It would also seem safer, as you won't be discarding any adjustments
made to the default policy by the hypervisor for this specific domain.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Feb 18 11:54:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 Feb 2021 11:54:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86603.162718 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lChsv-00022H-MR; Thu, 18 Feb 2021 11:54:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86603.162718; Thu, 18 Feb 2021 11:54:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lChsv-00022A-J3; Thu, 18 Feb 2021 11:54:17 +0000
Received: by outflank-mailman (input) for mailman id 86603;
 Thu, 18 Feb 2021 11:54:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=x3vz=HU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lChsu-000225-E7
 for xen-devel@lists.xenproject.org; Thu, 18 Feb 2021 11:54:16 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 81fa8e0f-598e-4254-8b63-3a48d9af044a;
 Thu, 18 Feb 2021 11:54:15 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9CFECAF0C;
 Thu, 18 Feb 2021 11:54:14 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 81fa8e0f-598e-4254-8b63-3a48d9af044a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613649254; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=uLeCNH+QoFBFXb0fm7m+wzQKZXmls20NMPsf/D+qskk=;
	b=YdYRq6aJ3vn/BJ+A3ckF15fML3BMBEnplleVbAIP8PHA7Qsg43FERMfdaTOysfwaDKagWg
	8KwgVRTFP1RJZziMXC+4OBrManqO7lXgtKi/7tfBqfxOroyoc7fl8A7UC4dPGyTbxW9tGM
	1TJqR9Yqecxn4RGmFPq6aWoQUtjQ85Y=
Subject: Re: [PATCH v2 1/4] xl: Add support for ignore_msrs option
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, iwj@xenproject.org, wl@xen.org,
 anthony.perard@citrix.com, andrew.cooper3@citrix.com,
 jun.nakajima@intel.com, kevin.tian@intel.com,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
References: <1611182952-9941-1-git-send-email-boris.ostrovsky@oracle.com>
 <1611182952-9941-2-git-send-email-boris.ostrovsky@oracle.com>
 <YC5EitRCZB+VCeCC@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a78a4b94-47cc-64c0-1b1f-8429665822b2@suse.com>
Date: Thu, 18 Feb 2021 12:54:13 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <YC5EitRCZB+VCeCC@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 18.02.2021 11:42, Roger Pau Monné wrote:
> On Wed, Jan 20, 2021 at 05:49:09PM -0500, Boris Ostrovsky wrote:
>> This option allows the guest administrator to specify what should happen when
>> guest accesses an MSR which is not explicitly emulated by the hypervisor.
>>
>> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>> ---
>>  docs/man/xl.cfg.5.pod.in         | 20 +++++++++++++++++++-
>>  tools/libs/light/libxl_types.idl |  7 +++++++
>>  tools/xl/xl_parse.c              |  7 +++++++
>>  3 files changed, 33 insertions(+), 1 deletion(-)
>>
>> diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
>> index c8e017f950de..96ce97c42cab 100644
>> --- a/docs/man/xl.cfg.5.pod.in
>> +++ b/docs/man/xl.cfg.5.pod.in
>> @@ -2044,7 +2044,25 @@ Do not provide a VM generation ID.
>>  See also "Virtual Machine Generation ID" by Microsoft:
>>  L<https://docs.microsoft.com/en-us/windows/win32/hyperv_v2/virtual-machine-generation-identifier>
>>  
>> -=back 
>> +=over
>> +
>> +=item B<ignore_msrs="STRING">
>> +
>> +Determine hypervisor behavior on accesses to MSRs that are not emulated by the hypervisor.
>> +
>> +=over 4
>> +
>> +=item B<never>
>> +
>> Issue a warning to the log and #GP to the guest. This is the default.
>> +
>> +=item B<silent>
>> +
>> +MSR reads return 0, MSR writes are ignored. No warnings to the log.
>> +
>> +=item B<verbose>
>> +
>> +Similar to B<silent> but a warning is written.
> 
> Would it make sense to allow for this option to be more fine-grained
> in the future?

>From an abstract perspective - maybe. But remember that this information
will need to be migrated with the guest. It would seem to me that
Boris's approach is easier migration-wise.

> Not that you need to implement the full thing now, but maybe we could
> have something like:
> 
> "
> =item B<ignore_msrs=[ "MSR_RANGE, "MSR_RANGE", ..]>
> 
> Specify a list of MSR ranges that will be ignored by the hypervisor:
> reads will return zeros and writes will be discarded without raising a
> #GP.
> 
> Each MSR_RANGE is given in hexadecimal format and may be a range, e.g.
> c00102f0-c00102f1 (inclusive), or a single MSR, e.g. c00102f1.
> "
> 
> Then you can print the messages in the hypervisor using a guest log
> level and modify it on demand in order to get more verbose output?

"Modify on demand"? Irrespective of what you mean with this, ...

> I don't think selecting whether the messages are printed or not from
> xl is that helpful as the same could be achieved using guest_loglvl.

... controlling this via guest_loglvl would affect various other
log messages' visibility.

Jan
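
The MSR_RANGE syntax proposed in the quoted reply above — a hexadecimal
single MSR, or an inclusive hexadecimal range such as
"c00102f0-c00102f1" — could be parsed along these lines.  This is a
hypothetical sketch for illustration, not code from any posted patch:

```python
def parse_msr_range(spec):
    """Parse 'c00102f0-c00102f1' or 'c00102f1' (hex, inclusive)
    into a (start, end) tuple of MSR indices."""
    lo, sep, hi = spec.partition("-")
    start = int(lo, 16)
    end = int(hi, 16) if sep else start  # single MSR: range of one
    if end < start:
        raise ValueError("descending MSR range: %s" % spec)
    return (start, end)

def parse_ignore_msrs(specs):
    """Parse a list of MSR_RANGE strings into (start, end) tuples."""
    return [parse_msr_range(s) for s in specs]
```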


From xen-devel-bounces@lists.xenproject.org Thu Feb 18 11:57:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 Feb 2021 11:57:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86605.162729 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lChvq-0002Be-4S; Thu, 18 Feb 2021 11:57:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86605.162729; Thu, 18 Feb 2021 11:57:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lChvq-0002BX-1G; Thu, 18 Feb 2021 11:57:18 +0000
Received: by outflank-mailman (input) for mailman id 86605;
 Thu, 18 Feb 2021 11:57:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=x3vz=HU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lChvo-0002BP-3H
 for xen-devel@lists.xenproject.org; Thu, 18 Feb 2021 11:57:16 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 48a9391d-8549-4fc7-aa01-25d8e5e64783;
 Thu, 18 Feb 2021 11:57:15 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B540BAF1B;
 Thu, 18 Feb 2021 11:57:14 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 48a9391d-8549-4fc7-aa01-25d8e5e64783
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613649434; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=A8c74sPvl5d2onmBiOM54dv7aEMSFVP3Md/vEjhTS5g=;
	b=TMxSGla0TK56fDd/xKwnLjJqDfLpWOcHYPxkqtBPurgiG/mR7NLcs7AwyBc8JB6fqyXhyd
	qBDRnlcla64+edGOmarE+OiYbJZ+WqGSQQuovFl3GGUpErCTQosNkXhps8qIQLVdGluF9L
	lR90q1BLhb9ot6EOwJGpT4y+iaYLfKc=
Subject: Re: [PATCH v2 3/4] x86: Allow non-faulting accesses to non-emulated
 MSRs if policy permits this
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, iwj@xenproject.org, wl@xen.org,
 anthony.perard@citrix.com, andrew.cooper3@citrix.com,
 jun.nakajima@intel.com, kevin.tian@intel.com,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
References: <1611182952-9941-1-git-send-email-boris.ostrovsky@oracle.com>
 <1611182952-9941-4-git-send-email-boris.ostrovsky@oracle.com>
 <YC5OYZOAkx+jutJz@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <785a4925-31f2-9df1-a4b3-1760ad17e01e@suse.com>
Date: Thu, 18 Feb 2021 12:57:13 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <YC5OYZOAkx+jutJz@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 18.02.2021 12:24, Roger Pau Monné wrote:
> On Wed, Jan 20, 2021 at 05:49:11PM -0500, Boris Ostrovsky wrote:
>> --- a/xen/arch/x86/hvm/vmx/vmx.c
>> +++ b/xen/arch/x86/hvm/vmx/vmx.c
>> @@ -3017,8 +3017,8 @@ static int vmx_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
>>              break;
>>          }
>>  
>> -        gdprintk(XENLOG_WARNING, "RDMSR 0x%08x unimplemented\n", msr);
>> -        goto gp_fault;
>> +        if ( guest_unhandled_msr(curr, msr, msr_content, false, true) )
>> +            goto gp_fault;
>>      }
>>  
>>  done:
>> @@ -3319,10 +3319,8 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
>>               is_last_branch_msr(msr) )
>>              break;
>>  
>> -        gdprintk(XENLOG_WARNING,
>> -                 "WRMSR 0x%08x val 0x%016"PRIx64" unimplemented\n",
>> -                 msr, msr_content);
>> -        goto gp_fault;
>> +        if ( guest_unhandled_msr(v, msr, &msr_content, true, true) )
>> +            goto gp_fault;
>>      }
> 
> I think this could be done in hvm_msr_read_intercept instead of having
> to call guest_unhandled_msr from each vendor specific handler?
> 
> Oh, I see, that's likely done to differentiate between guest MSR
> accesses and emulator ones? I'm not sure we really need to make a
> difference between guest MSR accesses and emulator ones, surely in
> the past they would be treated equally?

We did discuss this before. Even if they were treated the same in
the past, that's not correct, and hence we shouldn't suppress the
distinction going forward. A guest explicitly asking to access an
MSR (via RDMSR/WRMSR) is entirely different from the emulator
perhaps just probing an MSR, falling back to some default behavior
if it's unavailable.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Feb 18 13:06:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 Feb 2021 13:06:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86641.162758 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCj07-0000YT-R4; Thu, 18 Feb 2021 13:05:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86641.162758; Thu, 18 Feb 2021 13:05:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCj07-0000YM-Nk; Thu, 18 Feb 2021 13:05:47 +0000
Received: by outflank-mailman (input) for mailman id 86641;
 Thu, 18 Feb 2021 13:05:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=x3vz=HU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lCj06-0000YD-Ck
 for xen-devel@lists.xenproject.org; Thu, 18 Feb 2021 13:05:46 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a3251abe-4c87-4528-bac8-13f09b19fd4c;
 Thu, 18 Feb 2021 13:05:45 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 85101ACE5;
 Thu, 18 Feb 2021 13:05:44 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a3251abe-4c87-4528-bac8-13f09b19fd4c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613653544; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=4zHRJzE4DwRp/+P2uIqjaOI1hbKMfhgC5lubSPgQPCE=;
	b=AdMnZeE6Rh8a0l7sFBVXg8pdZyJyHx44VBmfh56/ELG5900K9WyZhbXTRXUSMlAiikWISp
	QtNvYCxtwu4HFva8FXKA1uxdyvkmgcye+RVhyhAh+n8tmDbGF5kgLioDdr+pzE+F46ojmb
	aAEf4hxlcZ/Njc2sKqgCeQRnOHGxHzA=
Subject: Re: [for-4.15][PATCH v3 2/3] xen/x86: iommu: Ignore IOMMU mapping
 requests when a domain is dying
To: Julien Grall <julien@xen.org>, Paul Durrant <paul@xen.org>
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>, xen-devel@lists.xenproject.org
References: <20210217142458.3769-1-julien@xen.org>
 <20210217142458.3769-3-julien@xen.org>
 <20f68b12-a767-b1d3-a3dd-9f92172def5f@suse.com>
 <d1116875-dd98-8f51-9113-314c1a62b4bf@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <268f1991-bacb-c888-04c0-0e4cf670b6b5@suse.com>
Date: Thu, 18 Feb 2021 14:05:43 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <d1116875-dd98-8f51-9113-314c1a62b4bf@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 17.02.2021 17:07, Julien Grall wrote:
> On 17/02/2021 15:01, Jan Beulich wrote:
>> On 17.02.2021 15:24, Julien Grall wrote:
>>> From: Julien Grall <jgrall@amazon.com>
>>>
>>> The new x86 IOMMU page-tables allocator will release the pages when
>>> relinquishing the domain resources. However, this is not sufficient
>>> when the domain is dying because nothing prevents page-table to be
>>> allocated.
>>>
>>> Currently page-table allocations can only happen from iommu_map(). As
>>> the domain is dying, there is no good reason to continue to modify the
>>> IOMMU page-tables.
>>
>> While I agree this to be the case right now, I'm not sure it is a
>> good idea to build on it (in that you leave the unmap paths
>> untouched).
> 
> I don't build on that assumption. See next patch.

Yet, as said there, that patch perhaps makes unmapping more fragile,
by introducing a latent error source into the path.

>> Imo there's a fair chance this would be overlooked at
>> the point super page mappings get introduced (which has been long
>> overdue), and I thought prior discussion had lead to a possible
>> approach without risking use-after-free due to squashed unmap
>> requests.
> 
> I know you suggested zapping the root page-tables... However, I don't 
> think this is 4.15 material, and you agree with this (you were the one 
> who pointed that out).

Paul - do you have any thoughts here? Outright zapping isn't
going to work, we'd need to switch to quarantine page tables at
the very least to prevent the issue with babbling devices. But
that still seems better to me than the introduction of a latent
issue on the unmap paths.

>>> --- a/xen/drivers/passthrough/x86/iommu.c
>>> +++ b/xen/drivers/passthrough/x86/iommu.c
>>> @@ -273,6 +273,9 @@ int iommu_free_pgtables(struct domain *d)
>>>       /*
>>>        * Pages will be moved to the free list below. So we want to
>>>        * clear the root page-table to avoid any potential use after-free.
>>> +     *
>>> +     * After this call, no more IOMMU mapping can happen.
>>> +     *
>>>        */
>>>       hd->platform_ops->clear_root_pgtable(d);
>>
>> I.e. you utilize the call in place of spin_barrier(). Maybe worth
>> saying in the comment?
> 
> Sure.

Btw - "no more IOMMU mapping" is also possibly ambiguous here:
One might take it to mean both maps and unmaps. How about "no
new IOMMU mappings can be inserted", as long as the unmap paths
don't follow suit?

Jan


From xen-devel-bounces@lists.xenproject.org Thu Feb 18 13:10:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 Feb 2021 13:10:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86644.162770 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCj4U-0001Vo-D9; Thu, 18 Feb 2021 13:10:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86644.162770; Thu, 18 Feb 2021 13:10:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCj4U-0001Vh-9s; Thu, 18 Feb 2021 13:10:18 +0000
Received: by outflank-mailman (input) for mailman id 86644;
 Thu, 18 Feb 2021 13:10:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=x3vz=HU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lCj4S-0001Vb-Sx
 for xen-devel@lists.xenproject.org; Thu, 18 Feb 2021 13:10:16 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 46dfe39e-c5e1-4c95-9f5f-2bf023dfdbb4;
 Thu, 18 Feb 2021 13:10:15 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A8497ACD4;
 Thu, 18 Feb 2021 13:10:14 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 46dfe39e-c5e1-4c95-9f5f-2bf023dfdbb4
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613653814; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=DkihpiD9twC+qInFw8JwaUWcXXcm86I8D9b69rHendI=;
	b=TY5M1aQRVrfuw1ZbCSWOGvPOehwQcjJ+Hoa9lttFpaeget+rCwL4SeuR/up9puyMui/ssh
	WF1me6v+1KU+WjwQClWVyVobdooR2yFMLtCo/OguN5weAMUEc8jvzBGdZedCcD4krP27Fm
	/tGXfvh5BU+EtFtw/nO6oeuQ8He+JLk=
Subject: Re: [for-4.15][PATCH v3 3/3] xen/iommu: x86: Harden the IOMMU
 page-table allocator
To: Julien Grall <julien@xen.org>
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Paul Durrant <paul@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210217142458.3769-1-julien@xen.org>
 <20210217142458.3769-4-julien@xen.org>
 <cf303aee-3d89-5608-f358-6bef5c32d706@suse.com>
 <51618338-daff-5b9a-5214-e0788d95992b@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ee3d628e-a369-fddc-4824-e860ebabe8af@suse.com>
Date: Thu, 18 Feb 2021 14:10:14 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <51618338-daff-5b9a-5214-e0788d95992b@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 17.02.2021 17:29, Julien Grall wrote:
> On 17/02/2021 15:13, Jan Beulich wrote:
>> On 17.02.2021 15:24, Julien Grall wrote:
>>> --- a/xen/drivers/passthrough/x86/iommu.c
>>> +++ b/xen/drivers/passthrough/x86/iommu.c
>>> @@ -149,6 +149,13 @@ int arch_iommu_domain_init(struct domain *d)
>>>  
>>>  void arch_iommu_domain_destroy(struct domain *d)
>>>  {
>>> +    /*
>>> +     * There should be not page-tables left allocated by the time the
>> Nit: s/not/no/ ?
>>
>>> +     * domain is destroyed. Note that arch_iommu_domain_destroy() is
>>> +     * called unconditionally, so pgtables may be unitialized.
>>> +     */
>>> +    ASSERT(dom_iommu(d)->platform_ops == NULL ||
>>> +           page_list_empty(&dom_iommu(d)->arch.pgtables.list));
>>>   }
>>>   
>>>   static bool __hwdom_init hwdom_iommu_map(const struct domain *d,
>>> @@ -279,6 +286,9 @@ int iommu_free_pgtables(struct domain *d)
>>>        */
>>>       hd->platform_ops->clear_root_pgtable(d);
>>>   
>>> +    /* After this barrier no new page allocations can occur. */
>>> +    spin_barrier(&hd->arch.pgtables.lock);
>>
>> Didn't patch 2 utilize the call to ->clear_root_pgtable() itself as
>> the barrier? Why introduce another one (with a similar comment)
>> explicitly now?
> The barriers act differently: one gates against any IOMMU page-table 
> modification, the other gates against allocation.
> 
> There is no guarantee that the former will prevent the latter.

Oh, right - different locks. I got confused here because in both
cases the goal is to prevent allocations.

>>> @@ -315,9 +326,29 @@ struct page_info *iommu_alloc_pgtable(struct domain *d)
>>>       unmap_domain_page(p);
>>>   
>>>       spin_lock(&hd->arch.pgtables.lock);
>>> -    page_list_add(pg, &hd->arch.pgtables.list);
>>> +    /*
>>> +     * The IOMMU page-tables are freed when relinquishing the domain, but
>>> +     * nothing prevent allocation to happen afterwards. There is no valid
>>> +     * reasons to continue to update the IOMMU page-tables while the
>>> +     * domain is dying.
>>> +     *
>>> +     * So prevent page-table allocation when the domain is dying.
>>> +     *
>>> +     * We relying on &hd->arch.pgtables.lock to synchronize d->is_dying.
>>> +     */
>>> +    if ( likely(!d->is_dying) )
>>> +    {
>>> +        alive = true;
>>> +        page_list_add(pg, &hd->arch.pgtables.list);
>>> +    }
>>>       spin_unlock(&hd->arch.pgtables.lock);
>>>   
>>> +    if ( unlikely(!alive) )
>>> +    {
>>> +        free_domheap_page(pg);
>>> +        pg = NULL;
>>> +    }
>>> +
>>>       return pg;
>>>   }
>>
>> As before I'm concerned about this forcing error paths to be taken
>> elsewhere, in case an allocation still happens (e.g. from unmap
>> once super page mappings are supported). Considering some of the
>> error handling in the IOMMU code is to invoke domain_crash(), it
>> would be quite unfortunate if we ended up crashing a domain
>> while it is being cleaned up after.
> 
> It is unfortunate, but I think this is better than having to leak page 
> tables.
> 
>>
>> Additionally, the (at present still hypothetical) unmap case, if
>> failing because of the change here, would then again stand a chance
>> of leaving mappings in place while the underlying pages get freed. As
>> this would likely require an XSA, the change doesn't feel like
>> "hardening" to me.
> 
> I would agree with this if memory allocations could never fail. That's 
> not the case, and it will become worse as we use the IOMMU pool.
> 
> Do you have callers in mind that don't check the return value of iommu_unmap()?

The function is marked __must_check, so there won't be any direct
callers ignoring errors (albeit I may be wrong here - we used to
have cases where we simply suppressed the resulting compiler
diagnostic, without really handling errors; not sure if all of
these are gone by now). Risks might be elsewhere.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Feb 18 13:20:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 Feb 2021 13:20:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86648.162782 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCjDu-0001vd-D9; Thu, 18 Feb 2021 13:20:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86648.162782; Thu, 18 Feb 2021 13:20:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCjDu-0001v6-9S; Thu, 18 Feb 2021 13:20:02 +0000
Received: by outflank-mailman (input) for mailman id 86648;
 Thu, 18 Feb 2021 13:20:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lCjDs-0001qy-Vn
 for xen-devel@lists.xenproject.org; Thu, 18 Feb 2021 13:20:01 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lCjDr-0006L0-RU; Thu, 18 Feb 2021 13:19:59 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lCjDr-0000FF-Il; Thu, 18 Feb 2021 13:19:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=I3jDrGGz0+nT6fQwnHbdKx3HlkwXs4KJGk4KN9Pj9do=; b=oHpzVvQA4mwwI/k2jKJYUjAOmT
	bthluPrJMLblI7GudZuiHylM3kY6Q+Le0SaytkjR0a+O/uV9Z9AFs4HhsMwwvkKaJ5QFZTtvG8+FR
	PDib/hcDENcMceBvMgdrG2HY0KYhFBXkXLBA8xGt/pp6lcySaz7YY+5n08qZXTHdQReE=;
Subject: Re: [for-4.15][PATCH v3 3/3] xen/iommu: x86: Harden the IOMMU
 page-table allocator
To: Jan Beulich <jbeulich@suse.com>
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Paul Durrant <paul@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210217142458.3769-1-julien@xen.org>
 <20210217142458.3769-4-julien@xen.org>
 <cf303aee-3d89-5608-f358-6bef5c32d706@suse.com>
 <51618338-daff-5b9a-5214-e0788d95992b@xen.org>
 <ee3d628e-a369-fddc-4824-e860ebabe8af@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <96971bbb-05ec-7df0-a8d7-931cc0b41a77@xen.org>
Date: Thu, 18 Feb 2021 13:19:57 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <ee3d628e-a369-fddc-4824-e860ebabe8af@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 18/02/2021 13:10, Jan Beulich wrote:
> On 17.02.2021 17:29, Julien Grall wrote:
>> On 17/02/2021 15:13, Jan Beulich wrote:
>>> On 17.02.2021 15:24, Julien Grall wrote:
>>>> --- a/xen/drivers/passthrough/x86/iommu.c
>>>> +++ b/xen/drivers/passthrough/x86/iommu.c
>>>> @@ -149,6 +149,13 @@ int arch_iommu_domain_init(struct domain *d)
>>>>  
>>>>  void arch_iommu_domain_destroy(struct domain *d)
>>>>  {
>>>> +    /*
>>>> +     * There should be not page-tables left allocated by the time the
>>> Nit: s/not/no/ ?
>>>
>>>> +     * domain is destroyed. Note that arch_iommu_domain_destroy() is
>>>> +     * called unconditionally, so pgtables may be unitialized.
>>>> +     */
>>>> +    ASSERT(dom_iommu(d)->platform_ops == NULL ||
>>>> +           page_list_empty(&dom_iommu(d)->arch.pgtables.list));
>>>>    }
>>>>    
>>>>    static bool __hwdom_init hwdom_iommu_map(const struct domain *d,
>>>> @@ -279,6 +286,9 @@ int iommu_free_pgtables(struct domain *d)
>>>>         */
>>>>        hd->platform_ops->clear_root_pgtable(d);
>>>>    
>>>> +    /* After this barrier no new page allocations can occur. */
>>>> +    spin_barrier(&hd->arch.pgtables.lock);
>>>
>>> Didn't patch 2 utilize the call to ->clear_root_pgtable() itself as
>>> the barrier? Why introduce another one (with a similar comment)
>>> explicitly now?
>> The barriers act differently: one gates against any IOMMU page-table
>> modification, the other gates against allocation.
>>
>> There is no guarantee that the former will prevent the latter.
> 
> Oh, right - different locks. I got confused here because in both
> cases the goal is to prevent allocations.
> 
>>>> @@ -315,9 +326,29 @@ struct page_info *iommu_alloc_pgtable(struct domain *d)
>>>>        unmap_domain_page(p);
>>>>    
>>>>        spin_lock(&hd->arch.pgtables.lock);
>>>> -    page_list_add(pg, &hd->arch.pgtables.list);
>>>> +    /*
>>>> +     * The IOMMU page-tables are freed when relinquishing the domain, but
>>>> +     * nothing prevent allocation to happen afterwards. There is no valid
>>>> +     * reasons to continue to update the IOMMU page-tables while the
>>>> +     * domain is dying.
>>>> +     *
>>>> +     * So prevent page-table allocation when the domain is dying.
>>>> +     *
>>>> +     * We relying on &hd->arch.pgtables.lock to synchronize d->is_dying.
>>>> +     */
>>>> +    if ( likely(!d->is_dying) )
>>>> +    {
>>>> +        alive = true;
>>>> +        page_list_add(pg, &hd->arch.pgtables.list);
>>>> +    }
>>>>        spin_unlock(&hd->arch.pgtables.lock);
>>>>    
>>>> +    if ( unlikely(!alive) )
>>>> +    {
>>>> +        free_domheap_page(pg);
>>>> +        pg = NULL;
>>>> +    }
>>>> +
>>>>        return pg;
>>>>    }
>>>
>>> As before I'm concerned about this forcing error paths to be taken
>>> elsewhere, in case an allocation still happens (e.g. from unmap
>>> once super page mappings are supported). Considering some of the
>>> error handling in the IOMMU code is to invoke domain_crash(), it
>>> would be quite unfortunate if we ended up crashing a domain
>>> while it is being cleaned up after.
>>
>> It is unfortunate, but I think this is better than having to leak page
>> tables.
>>
>>>
>>> Additionally, the (at present still hypothetical) unmap case, if
>>> failing because of the change here, would then again stand a chance
>>> of leaving mappings in place while the underlying pages get freed. As
>>> this would likely require an XSA, the change doesn't feel like
>>> "hardening" to me.
>>
>> I would agree with this if memory allocations could never fail. That's
>> not the case, and it will become worse as we use the IOMMU pool.
>>
>> Do you have callers in mind that don't check the return value of iommu_unmap()?
> 
> The function is marked __must_check, so there won't be any direct
> callers ignoring errors (albeit I may be wrong here - we used to
> have cases where we simply suppressed the resulting compiler
> diagnostic, without really handling errors; not sure if all of
> these are gone by now). Risks might be elsewhere.

But this is not a new risk. So I don't understand why you think my patch 
is the one that may lead to an XSA in the future.

What my patch could do is expose such issues more easily, rather than 
waiting until an OOM condition occurs.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Feb 18 13:25:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 Feb 2021 13:25:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86651.162793 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCjIx-0002hr-06; Thu, 18 Feb 2021 13:25:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86651.162793; Thu, 18 Feb 2021 13:25:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCjIw-0002hk-TJ; Thu, 18 Feb 2021 13:25:14 +0000
Received: by outflank-mailman (input) for mailman id 86651;
 Thu, 18 Feb 2021 13:25:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lCjIv-0002hf-Tk
 for xen-devel@lists.xenproject.org; Thu, 18 Feb 2021 13:25:13 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lCjIm-0006R9-7W; Thu, 18 Feb 2021 13:25:04 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lCjIl-0000me-UU; Thu, 18 Feb 2021 13:25:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=kbUgVWmOaNiyuJGVBS9FHtCRD8oLJxkmHRRV3QktNjw=; b=lyOsDD3+qoxwiy7/NxA1DbNbIb
	OUMcUrMPwV7urGENX8jzFio2fh+D9ppQYz8GgGi423JhutWZa1TU4roXTNw3b+ReUnQm2dbNAzBKf
	yrliYrTbRIRXrmrj2lLuHQIckPu+5wSjG6NS2phnWPLYV06MN4h5BcZH/luJoVIDFd44=;
Subject: Re: [for-4.15][PATCH v3 2/3] xen/x86: iommu: Ignore IOMMU mapping
 requests when a domain is dying
To: Jan Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>, xen-devel@lists.xenproject.org
References: <20210217142458.3769-1-julien@xen.org>
 <20210217142458.3769-3-julien@xen.org>
 <20f68b12-a767-b1d3-a3dd-9f92172def5f@suse.com>
 <d1116875-dd98-8f51-9113-314c1a62b4bf@xen.org>
 <268f1991-bacb-c888-04c0-0e4cf670b6b5@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <c713f440-8c3d-42fe-1d71-56b23e53a495@xen.org>
Date: Thu, 18 Feb 2021 13:25:02 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <268f1991-bacb-c888-04c0-0e4cf670b6b5@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 18/02/2021 13:05, Jan Beulich wrote:
> On 17.02.2021 17:07, Julien Grall wrote:
>> On 17/02/2021 15:01, Jan Beulich wrote:
>>> On 17.02.2021 15:24, Julien Grall wrote:
>>>> From: Julien Grall <jgrall@amazon.com>
>>>>
>>>> The new x86 IOMMU page-tables allocator will release the pages when
>>>> relinquishing the domain resources. However, this is not sufficient
>>>> when the domain is dying because nothing prevents page-table to be
>>>> allocated.
>>>>
>>>> Currently page-table allocations can only happen from iommu_map(). As
>>>> the domain is dying, there is no good reason to continue to modify the
>>>> IOMMU page-tables.
>>>
>>> While I agree this to be the case right now, I'm not sure it is a
>>> good idea to build on it (in that you leave the unmap paths
>>> untouched).
>>
>> I don't build on that assumption. See next patch.
> 
> Yet as said there that patch makes unmapping perhaps more fragile,
> by introducing a latent error source into the path.

I still don't see what latent error my patch will introduce. Allocations 
of page-tables are not guaranteed to succeed.

So are you implying that code may rely on the page allocation succeeding?

> 
>>> Imo there's a fair chance this would be overlooked at
>>> the point super page mappings get introduced (which has been long
>>> overdue), and I thought prior discussion had lead to a possible
>>> approach without risking use-after-free due to squashed unmap
>>> requests.
>>
>> I know you suggested zapping the root page-tables... However, I don't
>> think this is 4.15 material, and you agree with this (you were the one
>> who pointed that out).
> 
> Paul - do you have any thoughts here? Outright zapping isn't
> going to work, we'd need to switch to quarantine page tables at
> the very least to prevent the issue with babbling devices. But
> that still seems better to me than the introduction of a latent
> issue on the unmap paths.

I am afraid I am not going to be able to come up with such a patch for 
4.15. If you want to go that route for 4.15, then feel free to suggest a 
patch.

[...]

> Btw - "no more IOMMU mapping" is also possibly ambiguous here:
> One might take it to mean both maps and unmaps. How about "no
> new IOMMU mappings can be inserted", as long as the unmap paths
> don't follow suit?

Sure.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Feb 18 14:00:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 Feb 2021 14:00:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86653.162805 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCjrP-0006WY-PE; Thu, 18 Feb 2021 14:00:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86653.162805; Thu, 18 Feb 2021 14:00:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCjrP-0006WR-ME; Thu, 18 Feb 2021 14:00:51 +0000
Received: by outflank-mailman (input) for mailman id 86653;
 Thu, 18 Feb 2021 14:00:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YzyM=HU=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1lCjrO-0006WM-3y
 for xen-devel@lists.xenproject.org; Thu, 18 Feb 2021 14:00:50 +0000
Received: from mail-wr1-x42f.google.com (unknown [2a00:1450:4864:20::42f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8a014e25-a244-4342-9352-03043f5e257b;
 Thu, 18 Feb 2021 14:00:48 +0000 (UTC)
Received: by mail-wr1-x42f.google.com with SMTP id t15so3033697wrx.13
 for <xen-devel@lists.xenproject.org>; Thu, 18 Feb 2021 06:00:48 -0800 (PST)
Received: from ?IPv6:2a00:23c5:5785:9a01:a970:d87c:bd19:86c1?
 ([2a00:23c5:5785:9a01:a970:d87c:bd19:86c1])
 by smtp.gmail.com with ESMTPSA id q140sm10141888wme.0.2021.02.18.06.00.46
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 18 Feb 2021 06:00:47 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8a014e25-a244-4342-9352-03043f5e257b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:subject:to:cc:references:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=BY4qrM6o2zEMu6zv5MXmimY1AG6/hvex7B7y3DLJ5nQ=;
        b=D5NUjmuVo6WGLe9JMEk1PEDc23kq+DnE8YXguyeLFIwQ+OCg7I5B9RMgOgrWm5RUDz
         ErLrotxhCvl/0Mmglo/kMK2li4EAJ7tKuCUAlymOsihu+ghLTblwwN7zF/Nb6zsLNLtI
         ZV3vHQP97w3UoBEYKRhWPcHNQdQk4noT0cVUxGwA2kKyzCn6QfRbhOoQWCRvDS4o34a1
         4i9SKX1/mTMMmO2qT1aoRez3Wjw/sjL2qEMoECJfHj+xYW2IILnMg5SoRuGxsT5MSUW1
         PflzK6ZNy2WejJUV3pN0kgJ8Mem9tXfeirRfZeJN5i3qxCOjch5pdpdZPrOg0ngS2Vo2
         Q84A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:subject:to:cc:references
         :message-id:date:user-agent:mime-version:in-reply-to
         :content-language:content-transfer-encoding;
        bh=BY4qrM6o2zEMu6zv5MXmimY1AG6/hvex7B7y3DLJ5nQ=;
        b=bK1DCU6X4QSciHbVc6baah0yi0ia95oK2iRaJqsTjSRxSGNnm1QEW6VS84s/vYeNB4
         qp8sIS5JTB69QQLpzJ4DNIWMJ87C/RmCYbd+pbJ9+3bI9qz+JpTlnA3rotQxd0KXqdN4
         W6jm1WYekKJoLKdEI3FQYz7I6XKAlZ+cV1/va79PuGyNhbP/SwbZWugLsRDecMUJogwA
         Mv4EiSnxLKnEF2tzJy0WZQD/Y6q57azMOX8b2SKzO0K6dlRKfz4RxRAUU2QjxzVSt5Fn
         2bGff7pNvOfSIMPfCrghBNKoGI5+IGBXzN7x+309w+g00CuL4AcxaVq77bLI1uCfLa7s
         TJag==
X-Gm-Message-State: AOAM533RN268dKI1wSLppmqt7F6eGncl1791g5QIHw4Bp0nuZI+bvaIc
	YQ+eiQ35pN1q436zRRaGLjX1h+7PsSw=
X-Google-Smtp-Source: ABdhPJzG5daJBGPloKEMMggcE1pS7AE5OpbDhT+VvuKEGdkVYB8cMOm+j1vW+JVwvQz1M6tndZ6+jQ==
X-Received: by 2002:adf:97d3:: with SMTP id t19mr4622560wrb.164.1613656847766;
        Thu, 18 Feb 2021 06:00:47 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: Paul Durrant <paul@xen.org>
Reply-To: paul@xen.org
Subject: Re: [for-4.15][PATCH v3 2/3] xen/x86: iommu: Ignore IOMMU mapping
 requests when a domain is dying
To: Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>, xen-devel@lists.xenproject.org
References: <20210217142458.3769-1-julien@xen.org>
 <20210217142458.3769-3-julien@xen.org>
 <20f68b12-a767-b1d3-a3dd-9f92172def5f@suse.com>
 <d1116875-dd98-8f51-9113-314c1a62b4bf@xen.org>
 <268f1991-bacb-c888-04c0-0e4cf670b6b5@suse.com>
Message-ID: <14f21eac-7d5d-9dda-18d2-206614e91339@xen.org>
Date: Thu, 18 Feb 2021 14:00:46 +0000
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <268f1991-bacb-c888-04c0-0e4cf670b6b5@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 18/02/2021 13:05, Jan Beulich wrote:
> On 17.02.2021 17:07, Julien Grall wrote:
>> On 17/02/2021 15:01, Jan Beulich wrote:
>>> On 17.02.2021 15:24, Julien Grall wrote:
>>>> From: Julien Grall <jgrall@amazon.com>
>>>>
>>>> The new x86 IOMMU page-tables allocator will release the pages when
>>>> relinquishing the domain resources. However, this is not sufficient
>>>> when the domain is dying, because nothing prevents page-tables from
>>>> being allocated.
>>>>
>>>> Currently page-table allocations can only happen from iommu_map(). As
>>>> the domain is dying, there is no good reason to continue to modify the
>>>> IOMMU page-tables.
>>>
>>> While I agree this is the case right now, I'm not sure it is a
>>> good idea to build on it (in that you leave the unmap paths
>>> untouched).
>>
>> I don't build on that assumption. See next patch.
> 
> Yet as said there, that patch makes unmapping perhaps more fragile,
> by introducing a latent error source into the path.
> 
>>> Imo there's a fair chance this would be overlooked at
>>> the point super page mappings get introduced (which has been long
>>> overdue), and I thought prior discussion had led to a possible
>>> approach without risking use-after-free due to squashed unmap
>>> requests.
>>
>> I know you suggested zapping the root page-tables... However, I don't
>> think this is 4.15 material, and you agree with this (you were the one
>> who pointed that out).
> 
> Paul - do you have any thoughts here? Outright zapping isn't
> going to work; we'd need to switch to quarantine page tables at
> the very least to prevent the issue with babbling devices. But
> that still seems better to me than introducing a latent
> issue on the unmap paths.
> 

I've not really been following this. AFAICT clear_root_pgtable() does 
not actually mess with any context entries, so the device view of the 
tables won't change, will it?

   Paul
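[Editorial note: the guard under discussion in this thread — ignoring new IOMMU map requests once the domain is dying, so no fresh page-tables can be allocated — can be sketched roughly as below. The structures and function here are simplified stand-ins modelled on Xen's names (struct domain, is_dying, iommu_map); the bodies are hypothetical illustrations, not the actual Xen code.]

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-in for Xen's struct domain; only the field relevant
 * to this discussion is modelled. */
struct domain {
    bool is_dying;              /* set once domain destruction has begun */
};

static unsigned int pt_pages_allocated; /* stand-in for page-table allocs */

/* Hypothetical sketch of the guard: once the domain is dying, map
 * requests become successful no-ops, so the mapping path can no longer
 * allocate page-table pages that the relinquish path may already be
 * freeing. */
static int iommu_map(struct domain *d, unsigned long dfn, unsigned long mfn)
{
    (void)dfn;
    (void)mfn;

    if ( d->is_dying )
        return 0;               /* ignore: no new page-tables for a dying domain */

    pt_pages_allocated++;       /* simulates the real allocation + mapping work */
    return 0;
}
```

A dying domain's map request leaves the allocation count untouched, while a live domain's request goes through the (simulated) allocating path.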


From xen-devel-bounces@lists.xenproject.org Thu Feb 18 14:23:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 Feb 2021 14:23:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86657.162818 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCkCu-0008W1-QA; Thu, 18 Feb 2021 14:23:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86657.162818; Thu, 18 Feb 2021 14:23:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCkCu-0008Vu-Mw; Thu, 18 Feb 2021 14:23:04 +0000
Received: by outflank-mailman (input) for mailman id 86657;
 Thu, 18 Feb 2021 14:23:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5S6F=HU=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lCkCt-0008Vp-UR
 for xen-devel@lists.xenproject.org; Thu, 18 Feb 2021 14:23:04 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0feda2f3-a4b1-4026-8034-8425074e4bf6;
 Thu, 18 Feb 2021 14:23:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0feda2f3-a4b1-4026-8034-8425074e4bf6
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613658182;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=+j0B3tTIqYe23JUhyzIlAO1rYhkU6ufEHOyZT/ZzWXg=;
  b=MoztygSU/W12US200XA++hK/hB93SPGh8omjyK7RXvJS4EHdywBCON+n
   r5PXeVstGysGiRvg/bSQyD1aOQ0n8drDqeFlfiyB0zsS+3g+yZT/XHp0w
   EwNZJNZC3tdgP0zn+4TthaFBddhd9bfjHhXP2K3cQk4Qs39P1ppByyqF+
   A=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: oAQDl4fKCScSRfgYOmya+DHqRM0Xh4jDni0L6Md7I9GAwYKSz9gyFsX0+mQRpQHZPoYFlrnGZa
 XBrfVC73KThkpA9ndqDCPQfNLKZLzia+8ue9ngkpof0t39/ivTre8RZKUJvpnhU4WxSBSWr798
 tK63G3B040fSBU17gmfzFnGGksBEr4JdsBH71N9YgtiJMegDqE6G1PvF/tLWTrVM2pV3bYNDk0
 zHeRYYSZPQACMQlvHn5yURQWlmv5uRRCYIEaUGa7g3COs43mGFb4O2qisBtVReWormD7ncg/bp
 dsY=
X-SBRS: 5.2
X-MesageID: 37539209
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,187,1610427600"; 
   d="scan'208";a="37539209"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ENhQGFML8nTZt0X7KD0LMERyNwfXz1aTfw5QB0s6aZunq4o+8RzUwFMjntuLh+Zl43R0UqM3xByULQPpq+NHXr/Z2vF5zquCaYSNAWnsALoiEnzi2U7Tgfo9eGtFcs1xgd1GmnDGkE2stflseI7Mvx3Ci6xpxUi2h1ZoyU3017peo15PLzEQ3BFgAWF/z96b9KnTpfKuYw4NoBjbx+lfz2K+wladantjk2n/uwEzeGP9UZ9Q79vsUcC4aPq8S4WFvXOGnv6N26lVnYp7nTH1B6it52zJPIr49ElxcEmIUPyzdwgysNAlruUobumOkmgOFL3wDmAySloCirN8FtshDg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+j0B3tTIqYe23JUhyzIlAO1rYhkU6ufEHOyZT/ZzWXg=;
 b=At1P1c2iShszeFkOpN9tSgleqvEMmJhoFwSPfs0fBY8Idic88SxBjXyb2OtMwbozBk/+IA+3wMUfKeG/4yY5xPkYfoDZCNrmJoNhaXojflIstNNaA+akQF8c26WMNaWIvNStU59fWUO0+e2it9V9Epn6VrTa9yQGNCPAfh512yh1U2iWsZkUQxfpq06EvM5zNQ8crDbvha1YMRNKveCpDzxtPva12QFeefDbXab/petz+XJIoXU3qvCgHJdMY2D0jwxen7lh2VuOiuWIeMu2xCGYT7TSVGds+ZNeE1IWZnNO9c6bmp9x91bcTuth2Q1N5Hn2dYI+Mk0X3vXHn2ygCw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+j0B3tTIqYe23JUhyzIlAO1rYhkU6ufEHOyZT/ZzWXg=;
 b=jI5tt/sfP7/nHfjFBK6tUWsCL2pu7kxsAzGlxWGkD7tT8bmlofCOdOBr6aTdh3xY3lcPdBuSvmnyohquDqJCoeATrVeEGWqdEzAkuMn0os/arBhk42uMFEoL74gjsKljQYrw9P5eB8lF5FiIRrVihyzT+IN8jpFT7PPxKvwvpaI=
Subject: Re: [PATCH 3/3] gnttab: GTF_sub_page is a v2-only flag
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: George Dunlap <george.dunlap@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <156559d5-853a-5bb9-942b-f623627e0907@suse.com>
 <2bf46266-785d-0de3-5f61-48c3fd191a5c@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <e00d9bed-20e4-b5a5-91e8-02f7a6689f86@citrix.com>
Date: Thu, 18 Feb 2021 14:22:49 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
In-Reply-To: <2bf46266-785d-0de3-5f61-48c3fd191a5c@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0409.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:189::18) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: f01e0a88-b96f-459b-4911-08d8d418b04c
X-MS-TrafficTypeDiagnostic: BYAPR03MB4550:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB4550975F5AE2A202EC448FFBBA859@BYAPR03MB4550.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: t827W26ZAS/2g1JiCKKTN8gEyvyc4xunUGDPNpeU/qjqoBdeLPEWZ+gyxn44uQLhGOmw4tuZ6P2TV+8MEDmwG0tdCXMdziny5ZcKzT8jwfwgsaUupOkaSed/7bmRyMbDRHmzb6T78RL3SqRbTcdrCoFh2EYLVc6Rx8OFyC8Xdqd1ZAwTGIwnedqTVk3oyfeCXdI10d5Y7PQyK97PL9YiNkOlC/+X1a7S4c60a6eQRWD230zeEV4bxp2LxTIQ7o7peYXdY5H/NY+8pY++QbO+YV6m63pxVpFghTwhRzqsJiNJxGPRXFsY6iKCj01YF68mOlAzSMYr5fOHcwjTO/tsqvdgvZ96vrwSF7D3EAlqr++A+bT9MIF8Y+PwPkQc5w2t5vmBlF99IPwSlfWJWQhEAngmS++DyrRIq9pYLS7nhzNFOxw2v8D50n7khnHQdfF53KJNbFT8tlDcflFjqb0OXTqw0SEIFBHgq2EhUoU4tkUExRP4JG2jak2QquN5MjjTRcL7O3p0JcdrGI2RjfqkmAXDcv1Ccpg9mpeCfafyuNVjiJvsw4eF+6J6zuRhm4v29zchgOXsunUpEcS1HVFbDP6f4CYckEV07yKyL4RoZSI=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(39860400002)(376002)(346002)(366004)(396003)(54906003)(4744005)(2906002)(66556008)(5660300002)(36756003)(8676002)(956004)(2616005)(478600001)(66946007)(31696002)(186003)(4326008)(110136005)(83380400001)(26005)(8936002)(16526019)(86362001)(6486002)(316002)(66476007)(16576012)(6666004)(31686004)(53546011)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?SjVuSTFhcUN6K1hkR3JOQThpdExJQ3MxWXZCZXdtNSsybUt6KzYyeXVSQlBL?=
 =?utf-8?B?d3pQSDdBb2FCWTZqTzhpSUxWTVlsUEVEU0hveG9yRkdkdnllK3pCak5RcktV?=
 =?utf-8?B?aEg3VWNtdXZZZloyc1Y2cTVBNldOUURSdEdiUlFyQnJYNkVZcW5uZnZiL1ZD?=
 =?utf-8?B?cjloaWduRTF6UnhoYUt6cTk0bmx0T213ZGRWUUcySGJ2T2JOTzhCSXA1QVQx?=
 =?utf-8?B?UlFBdkwycVhvT1M4RFp5d2RiRDNoSXFOQVJRdFlXMDNKYmY1SHVTS2tyd3oy?=
 =?utf-8?B?SDBpRHlMSVJROTVmczY2c3BlNGVFNEc4WjRhZEk0M25FdVpOVFRPbEhTQ2VI?=
 =?utf-8?B?NVNmZ0QyV3ZmL0RLY3BhNkN1QjR6MmVEWVZiaDFwTnZ4ZXVCdkJTRkg2cEp5?=
 =?utf-8?B?U2RkVUZOY0tjM2tSZTNpQXB5WFFQOEdSMmZyUCtQL1V6NUNhTzgvRHMzQ05Z?=
 =?utf-8?B?Z2NkL1pyREVvdnV0RHJCN3ZjNFd4WTVCeUsyK3hEd21kTXJpSUlkOE02bWdH?=
 =?utf-8?B?bC91dFBYcHVoWXg5SFRwaXo0YnVRM1NXTVFucmE5dkplTmtSS1NrU1ZwZXVm?=
 =?utf-8?B?UVZlOUI2eFg1Z2ZNZHJLcnZBVlBrK05RUlVzaDlkcU9nbU1ZSE0zWE5UUi9K?=
 =?utf-8?B?RytIanhmRUR0aEJQSGc3K1VMZElLUzdodkREOVd4NFgxZitVa2ZoSExSWjQw?=
 =?utf-8?B?Y2NoMlV5ank0M1VLNldNRUp4T3R0YnYwR1gvQ0xjYjZaaFlQOTdpSFZ0TnJV?=
 =?utf-8?B?VjNmc0kzeHRXb2JEK21UOFBDaEorZzZqaXdNUkdwSndOTzF3L1BLWjB5K0Iv?=
 =?utf-8?B?T2ZVMDlZeFJPU2dsMUloa2QwOWhVTC9QN2ZiWWpyTUJZS2RyaU8rWldoalFZ?=
 =?utf-8?B?b2RHQzVkbWRuVUFwNG5FZ2FGYUxSUFlSVkNXa0lBZWx0YkV1MDBDN0lPTGFT?=
 =?utf-8?B?RFZKYXpSY2ZtUFA0R2s5VjY0d2RxTlkzbGE1ZTFQdUFjanNGd2NRNGl0TnlS?=
 =?utf-8?B?a2lYTG5RWndXMENaRGt3UWpRMitNUzYxQllXZjRwVnBadFNrSEpRTFU4Wm1M?=
 =?utf-8?B?UDV6VFpLR1lCd3QzR2dOcjVBeFpDdDVGMm5tb0lLWENQQWVLNEQyQkRidE05?=
 =?utf-8?B?amdvM3RHei8rNXZwbjJ1bkdLOVQvS29GUXh4dFRyMndRZmtUVFU3MzhXd3Zz?=
 =?utf-8?B?aGRzN2lyVENoT2Q3anRKRWoxMHA0N0lWRVliVGVYeTRUQUk4M29HWEZjU1NJ?=
 =?utf-8?B?NnpwMjBLV2VFSGI0MWlFT2UrcDZjNC94M0piVnd3Q0ZIaE96M2VwNDdSUnFl?=
 =?utf-8?B?RTA3aHpoNGtjNFkwZWtYNlVhWDNaVWlSNUkyMHAydmRVOU5YM3U1eFh1V0lG?=
 =?utf-8?B?TCtCRUovVXo4T2diYkVKTVJRWjdGOFN2ZVd3TzI1V0xLL2orREpDOEx1OXlR?=
 =?utf-8?B?MzRac2lRN3ZiNHp4c3NQaWJTYmVTRStXMTk1d3NKRVlRN3pWdVdmVm5ZY240?=
 =?utf-8?B?NktvckVwZytPSkNLYnlyTDRLZUlUVFVvYmNnYjFOZVdRRGVlZ3NzK2FNb3BY?=
 =?utf-8?B?Ti90OVo4OTN1cEZLNDUxU0hjczhudGtnS0xiMUt2VkRVVVlDTW52NE93ODU0?=
 =?utf-8?B?dE5JelZJTUdDNUFET0k5Yk9RVW85dHptQnFodmF2QUhlQjd2TFdKa0ZOVEll?=
 =?utf-8?B?cStEa0NwcGRtcGFGeDdhYVBNWGszUUt3QVBGUTVjRlVMdUpoaHp4Q1pkVDN6?=
 =?utf-8?Q?nhZVIQOIl1JGdtWHRESBu/SeDUUrpmIYBCEyd35?=
X-MS-Exchange-CrossTenant-Network-Message-Id: f01e0a88-b96f-459b-4911-08d8d418b04c
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Feb 2021 14:22:57.6337
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: P9jQarwPaKwQ3DXygpFf8VXO6vDLplMPyTLhp5IlsUnL/5pCIHGtvYxUAeBTXU6yFuiyxz43nRhZAeLP+oskQIhdu482fd7sg/YZU9CMl38=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4550
X-OriginatorOrg: citrix.com

On 17/02/2021 10:46, Jan Beulich wrote:
> Prior to its introduction, v1 entries weren't checked for this flag, and
> the flag has also been meaningless for v1 entries. Therefore it also
> shouldn't be checked. (The only consistent alternative would be to
> check that all currently undefined flags are clear.)

We recently had a similar discussion for the stable libs.

Whatever we do, an unexpected corner case needs to break. Checking for
all undefined flags up front is far cleaner - absolutely nothing good
can come of a guest which sets GTF_sub_page with v1 expecting it to
work, and this way we do all the breaking in one go, rather than
breaking $N times in the future as new flags get added.

~Andrew
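[Editorial note: the "check all undefined flags up front" option favoured above might look something like the sketch below. The flag names echo Xen's public grant_table.h, but the specific values and the v1 mask are illustrative assumptions for this example, not the authoritative header.]

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative flag values modelled on Xen's public grant_table.h;
 * treat them as assumptions for this sketch, not the real header. */
#define GTF_type_mask   0x0003u    /* invalid / permit_access / accept_transfer */
#define GTF_readonly    (1u << 2)
#define GTF_sub_page    (1u << 8)  /* defined for v2 entries only */

/* Hypothetical mask of everything a v1 entry may legitimately carry. */
#define GTF_v1_allowed  (GTF_type_mask | GTF_readonly)

/* Reject up front any flag v1 does not define, so every current and
 * future v2-only flag breaks in one go rather than one at a time. */
static int v1_flags_valid(uint16_t flags)
{
    return (flags & ~GTF_v1_allowed) == 0;
}
```

With this shape, a v1 entry setting GTF_sub_page (or any later v2-only flag) fails validation immediately, rather than being silently accepted until each new flag grows its own check.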


From xen-devel-bounces@lists.xenproject.org Thu Feb 18 14:27:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 Feb 2021 14:27:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86659.162830 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCkGz-0000Eq-C5; Thu, 18 Feb 2021 14:27:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86659.162830; Thu, 18 Feb 2021 14:27:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCkGz-0000Ej-91; Thu, 18 Feb 2021 14:27:17 +0000
Received: by outflank-mailman (input) for mailman id 86659;
 Thu, 18 Feb 2021 14:27:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCkGy-0000Eb-Cr; Thu, 18 Feb 2021 14:27:16 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCkGy-0007WT-6M; Thu, 18 Feb 2021 14:27:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCkGx-0004iA-RV; Thu, 18 Feb 2021 14:27:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lCkGx-0002KZ-Qn; Thu, 18 Feb 2021 14:27:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=h3z4ViQ3wy9EFZh3nU3fA/Z1oEvKqT4V9x2set1rsCA=; b=2s8ZRAKh1L0blRU0DcrymStzDF
	om+355VW8mkmDHX0hvxMbn2oZtrh+XcaxjlxBxaIO1iUZh3JEeGa5tdlwa09Q6VtaRqBmbeDHXREW
	p5lEswMftWth75e9OdD3wKh1LnBHgTsTXkark6Tji7vLAXRijplrjrckrVbB+/xLF9K4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159434-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 159434: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=18543229fd7a2c79dcd6818c7b1f0f62512b5220
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 18 Feb 2021 14:27:15 +0000

flight 159434 qemu-mainline real [real]
flight 159460 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/159434/
http://logs.test-lab.xenproject.org/osstest/logs/159460/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                18543229fd7a2c79dcd6818c7b1f0f62512b5220
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  182 days
Failing since        152659  2020-08-21 14:07:39 Z  181 days  351 attempts
Testing same since   159434  2021-02-17 01:39:41 Z    1 days    1 attempts

------------------------------------------------------------
411 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 112344 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Feb 18 14:55:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 Feb 2021 14:55:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86668.162845 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCkhu-00035S-G3; Thu, 18 Feb 2021 14:55:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86668.162845; Thu, 18 Feb 2021 14:55:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCkhu-00035L-CA; Thu, 18 Feb 2021 14:55:06 +0000
Received: by outflank-mailman (input) for mailman id 86668;
 Thu, 18 Feb 2021 14:55:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=x3vz=HU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lCkhs-00035G-JP
 for xen-devel@lists.xenproject.org; Thu, 18 Feb 2021 14:55:04 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fff36cdb-53c3-4362-923a-5b11829a516d;
 Thu, 18 Feb 2021 14:55:03 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3560FAECE;
 Thu, 18 Feb 2021 14:55:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fff36cdb-53c3-4362-923a-5b11829a516d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613660102; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=lVOs/JzlefvQeAhcnj1Kr4X404vRAEhukY2mkTftP+c=;
	b=Y+zusE3y3dM0OYiS/bJpg3/4XHhFrlb7Estz9KSkVUOsDMwSB2qdYJqoPc1VyE9igaDw9U
	CmUirEl99R670nD24boM2mImnADv2lzMcfSXn0zB2xjrimJq2SbAuuau3QGilRUYLURVfa
	EmpQEtyD/Gw9aBKbfzMKfdqhUjmFuGA=
Subject: Re: ld 2.36 regression linking EFI binary from ELF input
To: Binutils <binutils@sourceware.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Jeremy Drake <sourceware-bugzilla@jdrake.com>
References: <79812876-b43d-7729-da34-3b4cd1c31f24@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8467d4dd-c702-19e2-4511-92f26a7d7b1f@suse.com>
Date: Thu, 18 Feb 2021 15:55:01 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <79812876-b43d-7729-da34-3b4cd1c31f24@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 04.02.2021 14:21, Jan Beulich via Binutils wrote:
> the Xen project hypervisor build system includes building the
> hypervisor binary as an EFI application, as an option (i.e.
> as long as the tool chain supports this). Already when probing
> the linker we now suddenly get several "relocation truncated
> to fit: R_X86_64_32 against `.debug_...'" errors. I have not
> had the time to figure out what exactly broke this, and I'm
> sending this mail in the hope that it may ring a bell for
> someone.
> 
> For reference, the probing is as simple as
> 
> $(LD) -mi386pep --subsystem=10 -o efi/check.efi efi/check.o
> 
> As was to be expected, the errors disappear with -S, but that's
> an option only for the probing, not for the actual linking of
> the binary.

Actually, that was just the easily visible problem. There's a
worse one, again resulting from 514b4e191d5f ("Change the
default characteristics of DLLs built by the linker to more
secure settings"): Prior to this a .reloc section would not
have been created by ld for executables. To work around this
we've been hand-crafting relocations (by linking the image at
two different base addresses and working out the delta). Now
that ld does this by default, we get two base relocations for
every field that needs relocating, which obviously isn't
going to work.

We also can't easily use ld's way of populating .reloc, as
it's buggy (I'll send separate mail about that) and apart
from this the resulting relocations differ subtly for the
pre-populated page tables (using physical addresses, not
virtual ones) that the binary contains.

The immediate workaround at our end is therefore going to be
to use --disable-reloc-section when available, but I have to
admit that this is far worse breakage than I would have
expected from a single-step binutils version increment. I
wouldn't be surprised if Cygwin / MingW (or users thereof,
when they are creating their own programs on top) aren't
similarly affected. Luckily(?) the Windows loader looks to
fall back to ignoring .relocs when it encounters an error
processing one of the relocations, at least for executables
(for DLLs this may not be an option).

Jan


From xen-devel-bounces@lists.xenproject.org Thu Feb 18 15:02:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 Feb 2021 15:02:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86671.162856 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCkot-00048S-Ap; Thu, 18 Feb 2021 15:02:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86671.162856; Thu, 18 Feb 2021 15:02:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCkot-00048L-7k; Thu, 18 Feb 2021 15:02:19 +0000
Received: by outflank-mailman (input) for mailman id 86671;
 Thu, 18 Feb 2021 15:02:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=x3vz=HU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lCkor-00048G-Mf
 for xen-devel@lists.xenproject.org; Thu, 18 Feb 2021 15:02:17 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5ac93afd-5cec-4253-945a-d9c8402057f3;
 Thu, 18 Feb 2021 15:02:16 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7931DAE6E;
 Thu, 18 Feb 2021 15:02:15 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5ac93afd-5cec-4253-945a-d9c8402057f3
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613660535; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=JWCDGoQxl/o3awx8vCD48SSwoRU3YNAxpRjz1eq7XqM=;
	b=mUOP/pE27q42jvBJvzpWUj0C4AKBMSyLHrcDEyVcC8xsIfBu5mkiiw74NrBn2IqSu0B8zf
	lPVPwuUpQLjmb4rJebOIevthPYnnSI5dFwB4bwkTxhOPh2Hoivi2SNooNuWba1DYDiRaSM
	YDk8cRpRtFodurMfHtmuSz+5hWlOHdw=
Subject: Re: [PATCH 3/3] gnttab: GTF_sub_page is a v2-only flag
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <156559d5-853a-5bb9-942b-f623627e0907@suse.com>
 <2bf46266-785d-0de3-5f61-48c3fd191a5c@suse.com>
 <e00d9bed-20e4-b5a5-91e8-02f7a6689f86@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a1a0587e-3e20-a023-6dd0-3b37b34fdf17@suse.com>
Date: Thu, 18 Feb 2021 16:02:14 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <e00d9bed-20e4-b5a5-91e8-02f7a6689f86@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 18.02.2021 15:22, Andrew Cooper wrote:
> On 17/02/2021 10:46, Jan Beulich wrote:
>> Prior to its introduction, v1 entries weren't checked for this flag, and
>> the flag also has been meaningless for v1 entries. Therefore it also
>> shouldn't be checked. (The only consistent alternative would be to also
>> check for all currently undefined flags to be clear.)
> 
> We recently had a similar discussion for the stable libs.
> 
> Whatever we do, an unexpected corner case needs to break. Checking for
> all undefined flags up front is far cleaner - absolutely nothing good
> can come for a guest which set GTF_sub_page with v1, and is expecting it
> to work, and this way, we do all breaking in one go, rather than
> breaking $N times in the future as new flags get added.

Except that there doesn't need to be any breaking: v1 could continue
to ignore all originally undefined flags. v1 and v2 could continue to
ignore all flags presently undefined. See also Julien's question
about GTF_sub_page on a transitive grant entry. That's presently an
ignored setting as well (and, as an implementation detail, in fact
getting forced to be set, but with a range covering the entire page).

Retroactively starting to check (and reject) undefined flags isn't
going to be nice; nevertheless I wanted to put this up as an at
least theoretical alternative. Perhaps a topic for the next community
call, if I don't forget by the time an agenda page appears.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Feb 18 15:02:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 Feb 2021 15:02:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86672.162869 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCkp2-0004BH-K1; Thu, 18 Feb 2021 15:02:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86672.162869; Thu, 18 Feb 2021 15:02:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCkp2-0004BA-GM; Thu, 18 Feb 2021 15:02:28 +0000
Received: by outflank-mailman (input) for mailman id 86672;
 Thu, 18 Feb 2021 15:02:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lY7L=HU=amazon.de=prvs=6763206ab=nmanthey@srs-us1.protection.inumbo.net>)
 id 1lCkp2-0004Ax-0m
 for xen-devel@lists.xenproject.org; Thu, 18 Feb 2021 15:02:28 +0000
Received: from smtp-fw-6001.amazon.com (unknown [52.95.48.154])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 08f468dd-fcad-4044-ae56-2603f61b7e48;
 Thu, 18 Feb 2021 15:02:27 +0000 (UTC)
Received: from iad12-co-svc-p1-lb1-vlan2.amazon.com (HELO
 email-inbound-relay-1a-7d76a15f.us-east-1.amazon.com) ([10.43.8.2])
 by smtp-border-fw-out-6001.iad6.amazon.com with ESMTP;
 18 Feb 2021 15:02:18 +0000
Received: from EX13D02EUB003.ant.amazon.com
 (iad12-ws-svc-p26-lb9-vlan2.iad.amazon.com [10.40.163.34])
 by email-inbound-relay-1a-7d76a15f.us-east-1.amazon.com (Postfix) with ESMTPS
 id 2E8B9A177C; Thu, 18 Feb 2021 15:02:15 +0000 (UTC)
Received: from EX13MTAUWC001.ant.amazon.com (10.43.162.135) by
 EX13D02EUB003.ant.amazon.com (10.43.166.172) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Thu, 18 Feb 2021 15:02:14 +0000
Received: from u6fc700a6f3c650.ant.amazon.com (10.1.213.30) by
 mail-relay.amazon.com (10.43.162.232) with Microsoft SMTP Server id
 15.0.1497.2 via Frontend Transport; Thu, 18 Feb 2021 15:02:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 08f468dd-fcad-4044-ae56-2603f61b7e48
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.de; i=@amazon.de; q=dns/txt; s=amazon201209;
  t=1613660547; x=1645196547;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version;
  bh=j78hOZFqSeIc4Quq5tg7IMbvyJPUwGTss72xF+BItXg=;
  b=X5oh+4Cuz9641nhe77cvk1i9DfVPZHi9876BFdjYe+GfCIaOdDL+VR9P
   q/GBtzmDVZAOgm0nkK0dSOg2NsDGegnWGavVbFHE1R9nBhElDVxO9eEiT
   ij6goSDSS0uv7iyLRzGm1pCidwr+9dOOyvqLlhmEb6uuh5DvgTmG98YA4
   g=;
X-IronPort-AV: E=Sophos;i="5.81,187,1610409600"; 
   d="scan'208";a="89076133"
From: Norbert Manthey <nmanthey@amazon.de>
To: <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Norbert Manthey
	<nmanthey@amazon.de>, Ian Jackson <iwj@xenproject.org>
Subject: [PATCH HVM v4 1/1] hvm: refactor set param
Date: Thu, 18 Feb 2021 16:01:58 +0100
Message-ID: <20210218150158.28265-1-nmanthey@amazon.de>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <2633df5f-df68-4a16-bc5c-522b2a589b00@amazon.de>
References: <2633df5f-df68-4a16-bc5c-522b2a589b00@amazon.de>
MIME-Version: 1.0
Content-Type: text/plain
Precedence: Bulk

To prevent leaking HVM params via L1TF and similar issues on a
hyperthread pair, let's load values of domains only after performing all
relevant checks, and blocking speculative execution.

For both get and set, the value of the index is already checked in the
outer calling function. The block_speculation calls in hvmop_get_param
and hvmop_set_param are removed, because is_hvm_domain already blocks
speculation.

Furthermore, speculative barriers are re-arranged to make sure we do not
allow guests running on co-located VCPUs to leak hvm parameter values of
other domains.

To improve symmetry between the get and set operations, function
hvmop_set_param is made static.

This is part of the speculative hardening effort.

Signed-off-by: Norbert Manthey <nmanthey@amazon.de>
Reported-by: Hongyan Xia <hongyxia@amazon.co.uk>
Release-Acked-by: Ian Jackson <iwj@xenproject.org>

---
v4: * add 'static' attribute to hvmop_set_param
    * drop introduced bound checks, e.g. in hvm_allow_set_param
    * drop existing bound check from hvm_set_param
    * do not introduce block_speculation in hvmop_set_param,
      as is_hvm_domain already blocks speculation
    * fix comments

 xen/arch/x86/hvm/hvm.c | 23 ++++++++++++-----------
 1 file changed, 12 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4060,7 +4060,7 @@ static int hvm_allow_set_param(struct domain *d,
                                uint32_t index,
                                uint64_t new_value)
 {
-    uint64_t value = d->arch.hvm.params[index];
+    uint64_t value;
     int rc;
 
     rc = xsm_hvm_param(XSM_TARGET, d, HVMOP_set_param);
@@ -4108,6 +4108,10 @@ static int hvm_allow_set_param(struct domain *d,
     if ( rc )
         return rc;
 
+    /* Make sure we evaluate permissions before loading data of domains. */
+    block_speculation();
+
+    value = d->arch.hvm.params[index];
     switch ( index )
     {
     /* The following parameters should only be changed once. */
@@ -4134,13 +4138,13 @@ static int hvm_set_param(struct domain *d, uint32_t index, uint64_t value)
     struct vcpu *v;
     int rc;
 
-    if ( index >= HVM_NR_PARAMS )
-        return -EINVAL;
-
     rc = hvm_allow_set_param(d, index, value);
     if ( rc )
         return rc;
 
+    /* Make sure we evaluate permissions before loading data of domains. */
+    block_speculation();
+
     switch ( index )
     {
     case HVM_PARAM_CALLBACK_IRQ:
@@ -4305,7 +4309,7 @@ static int hvm_set_param(struct domain *d, uint32_t index, uint64_t value)
     return rc;
 }
 
-int hvmop_set_param(
+static int hvmop_set_param(
     XEN_GUEST_HANDLE_PARAM(xen_hvm_param_t) arg)
 {
     struct xen_hvm_param a;
@@ -4318,9 +4322,6 @@ int hvmop_set_param(
     if ( a.index >= HVM_NR_PARAMS )
         return -EINVAL;
 
-    /* Make sure the above bound check is not bypassed during speculation. */
-    block_speculation();
-
     d = rcu_lock_domain_by_any_id(a.domid);
     if ( d == NULL )
         return -ESRCH;
@@ -4388,6 +4389,9 @@ int hvm_get_param(struct domain *d, uint32_t index, uint64_t *value)
     if ( rc )
         return rc;
 
+    /* Make sure the above domain permissions check is respected. */
+    block_speculation();
+
     switch ( index )
     {
     case HVM_PARAM_ACPI_S_STATE:
@@ -4428,9 +4432,6 @@ static int hvmop_get_param(
     if ( a.index >= HVM_NR_PARAMS )
         return -EINVAL;
 
-    /* Make sure the above bound check is not bypassed during speculation. */
-    block_speculation();
-
     d = rcu_lock_domain_by_any_id(a.domid);
     if ( d == NULL )
         return -ESRCH;
-- 
2.17.1




Amazon Development Center Germany GmbH
Krausenstr. 38
10117 Berlin
Geschaeftsfuehrung: Christian Schlaeger, Jonathan Weiss
Eingetragen am Amtsgericht Charlottenburg unter HRB 149173 B
Sitz: Berlin
Ust-ID: DE 289 237 879





From xen-devel-bounces@lists.xenproject.org Thu Feb 18 15:15:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 Feb 2021 15:15:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86678.162881 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCl1r-0005Jd-MH; Thu, 18 Feb 2021 15:15:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86678.162881; Thu, 18 Feb 2021 15:15:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCl1r-0005JW-Im; Thu, 18 Feb 2021 15:15:43 +0000
Received: by outflank-mailman (input) for mailman id 86678;
 Thu, 18 Feb 2021 15:15:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2Wg/=HU=redhat.com=crosa@srs-us1.protection.inumbo.net>)
 id 1lCl1q-0005JR-IB
 for xen-devel@lists.xenproject.org; Thu, 18 Feb 2021 15:15:42 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 91f814d1-a31a-405b-936a-51e970c0d183;
 Thu, 18 Feb 2021 15:15:41 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-382-m8pRphbRNZyoQTqxE2nwVg-1; Thu, 18 Feb 2021 10:15:39 -0500
Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.phx2.redhat.com
 [10.5.11.16])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 44F53192D7BA;
 Thu, 18 Feb 2021 15:15:36 +0000 (UTC)
Received: from localhost.localdomain (ovpn-114-28.rdu2.redhat.com
 [10.10.114.28])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 18FC067CC2;
 Thu, 18 Feb 2021 15:15:30 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 91f814d1-a31a-405b-936a-51e970c0d183
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1613661341;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=hdBF8OBirgPbspYnaG5GA5tWKPIadlNyyNrXBvpWLb0=;
	b=TSgR1edoNqLpTe2MPY5wsqIFePJaq+53IXqg125OPNkBsUcxIcHtNUlwMrnFvOpDgydfrT
	1PbRe5YseS4a2OVcQWIB40VYIflIYjSbWGjXtbrVdgvbm4WXuiBJ7RuC0SIzzAENTUghmm
	QG4ZYhixUrHrCoHY5/pldkpzlLcn8xc=
X-MC-Unique: m8pRphbRNZyoQTqxE2nwVg-1
Date: Thu, 18 Feb 2021 10:15:28 -0500
From: Cleber Rosa <crosa@redhat.com>
To: Alex =?iso-8859-1?Q?Benn=E9e?= <alex.bennee@linaro.org>
Cc: qemu-devel@nongnu.org, julien@xen.org, andre.przywara@arm.com,
	stefano.stabellini@linaro.org,
	Wainer dos Santos Moschetta <wainersm@redhat.com>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@redhat.com>,
	xen-devel@lists.xenproject.org, stefano.stabellini@xilinx.com,
	stratos-dev@op-lists.linaro.org
Subject: Re: [PATCH  v2 7/7] tests/avocado: add boot_xen tests
Message-ID: <20210218151528.GA433029@localhost.localdomain>
References: <20210211171945.18313-1-alex.bennee@linaro.org>
 <20210211171945.18313-8-alex.bennee@linaro.org>
 <20210217204654.GA353754@localhost.localdomain>
 <87sg5us58c.fsf@linaro.org>
MIME-Version: 1.0
In-Reply-To: <87sg5us58c.fsf@linaro.org>
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=crosa@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="IS0zKkzwUGydFO0o"
Content-Disposition: inline

--IS0zKkzwUGydFO0o
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Wed, Feb 17, 2021 at 10:22:50PM +0000, Alex Bennée wrote:
> 
> I think the solution is to use archive links here. There is a snapshot
> archive of sid (we've used it in the past) but I suspect there isn't an
> archive of old stable packages for a reason.
>

If the packages you need are available on archives (which I wasn't
aware of), then without a doubt it is the best solution.

I assume you're going to look into that (I'll keep an eye on the list
about it).

Regards,
- Cleber.

--IS0zKkzwUGydFO0o
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEeruW64tGuU1eD+m7ZX6NM6XyCfMFAmAuhI0ACgkQZX6NM6Xy
CfP2Aw//cvewBBcbcAWjx/pOHRnaVZW8kOlzWmOLN3LYxwQWqbyN+s2735/dCRHb
VD2uZYIawq4T0bWTf/cOC77oT8YG3rRqwFWVM4lFayUqhJcBDyi+7RM+SLmQZbdg
M/4EiHzQc3ocPyUt9ZNVcvU0GHQbEefOobG8aK4s6+y3BPjdsoCPLWT6vqtctcyH
eGi2URX1WLWkCqCrYtgrWdm51jcFU59vtbrIqdR5KOaB0G676LdbCR94g6NWPeN9
9c7g3h10/C4z6KvSFUYRXWXuKYQjSLpYyDn1xMDWMJ+M/3oOf8Zmmd7U7lP9iOxA
p38MRhY+rdqPWmphvHcvwuWkIeq3gZlKc3RkVIw05Q7g92XNMZCjb03GJ3DxX+Ka
Mn9F+zTduy2/v7PhahbC+4xCzm2JBg9VAdvvAgXPddSCV5KsaqQbQ4BoZH/ik7EP
hy7055UxhwQI3giXbSu9drgJZ4NDKM12GRpcB83+SYp13ctu0o81Nei/XL7lEcC3
4/nBh8yEw4t5GlroU4L3Y0Agodmf6qvg7ejaS7EeJZ6kDnqn5Z3whz+l5UTzu/vr
81ekZ6kVaMjW9eFF6vyo4MgfPxTAh/LSYz5crvN/Ocu1zg8dGCGBDvDsQimHU24M
0gG3uhrdAxH7U3KYGy7wYYZJebQpQre8idUNBq6xZJSnipYswiI=
=oUm8
-----END PGP SIGNATURE-----

--IS0zKkzwUGydFO0o--



From xen-devel-bounces@lists.xenproject.org Thu Feb 18 15:22:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 Feb 2021 15:22:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86681.162893 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCl8q-0006KD-Ey; Thu, 18 Feb 2021 15:22:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86681.162893; Thu, 18 Feb 2021 15:22:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCl8q-0006K6-BE; Thu, 18 Feb 2021 15:22:56 +0000
Received: by outflank-mailman (input) for mailman id 86681;
 Thu, 18 Feb 2021 15:22:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2Wg/=HU=redhat.com=crosa@srs-us1.protection.inumbo.net>)
 id 1lCl8o-0006K1-Oi
 for xen-devel@lists.xenproject.org; Thu, 18 Feb 2021 15:22:54 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 26227b49-892a-4837-8012-636ba45eecdb;
 Thu, 18 Feb 2021 15:22:53 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-306-iWZITrk4M0qFn9y3Kf7wcA-1; Thu, 18 Feb 2021 10:22:49 -0500
Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.phx2.redhat.com
 [10.5.11.22])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 2AF131936B66;
 Thu, 18 Feb 2021 15:22:48 +0000 (UTC)
Received: from localhost.localdomain (ovpn-114-28.rdu2.redhat.com
 [10.10.114.28])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 48D3C10016DB;
 Thu, 18 Feb 2021 15:22:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 26227b49-892a-4837-8012-636ba45eecdb
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1613661773;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=oKW300fgiDXqSH0LNzsHF56e+kWpLknmKKlM0u8WQq0=;
	b=buT45IyQakZsxhvUkL3keobib3wI6ZghlvAt4L35C3a3eMS4P6tuwLBHBW35imfQM+14Zd
	+vqhwpfQPNtBuGE1whJxWNCvvqS76kafF31FTbEBPPlJqRc0rjRsOH4p2etB4d9y4axxHi
	G5vlr5Zlpidqo6BzcEs+tbkPosGGtEM=
X-MC-Unique: iWZITrk4M0qFn9y3Kf7wcA-1
Date: Thu, 18 Feb 2021 10:22:41 -0500
From: Cleber Rosa <crosa@redhat.com>
To: Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@redhat.com>
Cc: Alex =?iso-8859-1?Q?Benn=E9e?= <alex.bennee@linaro.org>,
	Willian Rampazzo <wrampazz@redhat.com>, julien@xen.org,
	andre.przywara@arm.com, stefano.stabellini@linaro.org,
	qemu-devel@nongnu.org,
	Wainer dos Santos Moschetta <wainersm@redhat.com>,
	xen-devel@lists.xenproject.org, stefano.stabellini@xilinx.com,
	stratos-dev@op-lists.linaro.org
Subject: Re: [PATCH v2 7/7] tests/avocado: add boot_xen tests
Message-ID: <20210218152241.GB433029@localhost.localdomain>
References: <20210211171945.18313-1-alex.bennee@linaro.org>
 <20210211171945.18313-8-alex.bennee@linaro.org>
 <20210217204654.GA353754@localhost.localdomain>
 <2948d7db-2168-7c5e-a73e-969a67496daa@redhat.com>
MIME-Version: 1.0
In-Reply-To: <2948d7db-2168-7c5e-a73e-969a67496daa@redhat.com>
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.22
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=crosa@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="dTy3Mrz/UPE2dbVg"
Content-Disposition: inline

--dTy3Mrz/UPE2dbVg
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Thu, Feb 18, 2021 at 10:43:54AM +0100, Philippe Mathieu-Daudé wrote:
> On 2/17/21 9:46 PM, Cleber Rosa wrote:
> > On Thu, Feb 11, 2021 at 05:19:45PM +0000, Alex Bennée wrote:
> >> These tests make sure we can boot the Xen hypervisor with a Dom0
> >> kernel using the guest-loader. We currently have to use a kernel I
> >> built myself because there are issues using the Debian kernel images.
> >>
> >> Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
> >> ---
> >>  MAINTAINERS                  |   1 +
> >>  tests/acceptance/boot_xen.py | 117 ++++++++++++++++++++++++++++++++++++
> >>  2 files changed, 118 insertions(+)
> >>  create mode 100644 tests/acceptance/boot_xen.py
> 
> >> +class BootXen(BootXenBase):
> >> +
> >> +    @skipIf(os.getenv('GITLAB_CI'), 'Running on GitLab')
> >> +    def test_arm64_xen_411_and_dom0(self):
> >> +        """
> >> +        :avocado: tags=arch:aarch64
> >> +        :avocado: tags=accel:tcg
> >> +        :avocado: tags=cpu:cortex-a57
> >> +        :avocado: tags=machine:virt
> >> +        """
> >> +        xen_url = ('https://deb.debian.org/debian/'
> >> +                   'pool/main/x/xen/'
> >> +                   'xen-hypervisor-4.11-arm64_4.11.4+37-g3263f257ca-1_arm64.deb')
> >> +        xen_sha1 = '034e634d4416adbad1212d59b62bccdcda63e62a'
> > 
> > This URL is already giving 404s because of a new package.  I found
> > this to work (but yeah, it probably won't last long):
> > 
> >         xen_url = ('http://deb.debian.org/debian/'
> >                    'pool/main/x/xen/'
> >                    'xen-hypervisor-4.11-arm64_4.11.4+57-g41a822c392-2_arm64.deb')
> >         xen_sha1 = 'b5a6810fc67fd50fa36afdfdfe88ce3153dd3a55'
> 
> This is not the same package version... Please understand that the
> developer has to download the Debian package sources and check again
> the set of downstream changes between +37 and +57. Each distribution
> revision might contain multiple downstream patches. Then the testing
> has to be done again, often enabling tracing or single-stepping in
> gdb. This has a cost in productivity. This is why I insist that I
> prefer to use archived, well-tested artifacts, rather than changing
> the package URL at random.
>

I understand it's not the same version... but from my different and
limited PoV it was the obvious thing to suggest during a review.  Of
course using stable archived versions is much better (I believe Alex
will look into that for these packages).
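
[Editorial note: the pin-and-verify pattern being debated above — record an exact
artifact URL plus its checksum, and fail early when the mirror rotates the
package — can be sketched outside Avocado roughly as below. This is an
illustrative sketch only, not code from Alex's patch; the function names
`sha1_of_file` and `fetch_asset` are made up here, though Avocado provides a
similar `fetch_asset()` helper for its tests.]

```python
import hashlib
import os
import urllib.request


def sha1_of_file(path, chunk_size=65536):
    """Compute the SHA1 hex digest of a file without reading it whole."""
    digest = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def fetch_asset(url, expected_sha1, dest):
    """Download url to dest unless a verified copy already exists.

    A mirror that has replaced the package (new revision under the same
    pool directory) will typically return 404 or serve different bytes;
    either way the mismatch is caught here, before anything is booted.
    """
    if not (os.path.exists(dest) and sha1_of_file(dest) == expected_sha1):
        urllib.request.urlretrieve(url, dest)
    actual = sha1_of_file(dest)
    if actual != expected_sha1:
        raise RuntimeError("checksum mismatch for %s: expected %s, got %s"
                           % (url, expected_sha1, actual))
    return dest
```

Pinning the checksum alongside the URL is what makes a rotated mirror a loud
failure rather than a silently different test subject, which is the substance
of the archived-artifacts argument above.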

Best,
- Cleber.

--dTy3Mrz/UPE2dbVg
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEeruW64tGuU1eD+m7ZX6NM6XyCfMFAmAuhjkACgkQZX6NM6Xy
CfNVdRAAzlG3dNdeqwCKfa6leo0ZGC4j0XorAhWP9lS8UE1QvS6BFUQaO/esZcm+
BOo5YWsmbAg/e8MGu9Q4eZvCnkAPrjDDWQN9/6RYJ2qrBZfE/WOyV38UhR2SkMLz
RfgvQpQE075ZgQZed+puHfdWEgwELovkBlFklwDfzQbhp1jgCVaQXHA1pbJjMeVw
ryTNfBSRb2hQ7CKkeTL3XKxhzljR4McqaNfWONZJnfxwGGYRX5NJ9FrbsHuyyyZe
wjqylXUATtNEswQlUPG5YLwrn20vt7e6V/Oh6Wz+gaZbBXNO9sqRnKu+Hq8pUTEX
SbWNwy4s8TUMvFNwedvSJE/C6Xyc6VaocP2eCeOtQhnriraRNG720//4b81Zo0Wv
rwd4/va7y+UegpB0W69LdAcw78SQ5Knl1BKr3wGqxa+/ae9TXTs5EnUOzeh2DoYH
3lI6yI+Q4Fk4Sv6M/ailVOMq2jS5zXGnnQkKG55JdCkjp46DIbV/FnNPpI5OdUxL
u3E0SytsouftiOG83nNDlqn7GSqJE/wg7qi0FNLZm/IvE1CBhSooPs12mcp1XBkm
rpHEB00duys35EY131gkKZ9Cr8FUvpetmzWXxvoE3jPFTZfQeO86zDe/t1UxALZA
s9sI2FLvEMO18BsCLOPq/ga6N3gEgUE9ByMmPllTiB75d2UicIs=
=XUNL
-----END PGP SIGNATURE-----

--dTy3Mrz/UPE2dbVg--



From xen-devel-bounces@lists.xenproject.org Thu Feb 18 15:39:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 Feb 2021 15:39:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86682.162905 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lClOS-0007XW-Sf; Thu, 18 Feb 2021 15:39:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86682.162905; Thu, 18 Feb 2021 15:39:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lClOS-0007XP-P4; Thu, 18 Feb 2021 15:39:04 +0000
Received: by outflank-mailman (input) for mailman id 86682;
 Thu, 18 Feb 2021 15:39:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lClOR-0007XH-8o; Thu, 18 Feb 2021 15:39:03 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lClOQ-0000EX-Uu; Thu, 18 Feb 2021 15:39:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lClOQ-0008Gb-DW; Thu, 18 Feb 2021 15:39:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lClOQ-0001gl-Cq; Thu, 18 Feb 2021 15:39:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=39/N3R684x2A/rK6wP3/+GUVWoQXIZ/rS8CwgQFnVwo=; b=Rt0jFSqNhc4edZWlwvfRGUu9f1
	VRR8NdqPQLDlljAWUI+qMddcfLFpneRjF2JL1cGj4CIRMySHzs+eAiHY6qZcvZLbm3FaPvno5HPv6
	nkbY0NlU0w2o8xupsMI4+fRPeTj4XTgENZq/Zpct5IBCbmryfNATWypg6/DYfVUOdDZ0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159440-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 159440: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-install:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-install:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:host-ping-check-xen:fail:heisenbug
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-localmigrate/x10:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-examine:reboot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:leak-check/basis(11):fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=f40ddce88593482919761f74910f42f4b84c004b
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 18 Feb 2021 15:39:02 +0000

flight 159440 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159440/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  12 debian-install           fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      12 debian-install fail in 159367 REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu  fail in 159367 REGR. vs. 152332
 test-arm64-arm64-xl-credit1 10 host-ping-check-xen fail in 159391 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-libvirt-xsm 10 host-ping-check-xen fail in 159367 pass in 159440
 test-amd64-amd64-i386-pvgrub 19 guest-localmigrate/x10 fail in 159367 pass in 159440
 test-arm64-arm64-xl-seattle   8 xen-boot         fail in 159367 pass in 159440
 test-arm64-arm64-xl       10 host-ping-check-xen fail in 159391 pass in 159367
 test-arm64-arm64-libvirt-xsm  8 xen-boot         fail in 159391 pass in 159440
 test-arm64-arm64-xl-credit2   8 xen-boot         fail in 159391 pass in 159440
 test-arm64-arm64-xl-xsm   10 host-ping-check-xen fail in 159413 pass in 159367
 test-arm64-arm64-examine      8 reboot                     fail pass in 159367
 test-arm64-arm64-xl-credit1   8 xen-boot                   fail pass in 159391
 test-arm64-arm64-xl-xsm       8 xen-boot                   fail pass in 159413
 test-arm64-arm64-xl           8 xen-boot                   fail pass in 159413
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 159413

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-seattle  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl   11 leak-check/basis(11) fail in 159367 blocked in 152332
 test-arm64-arm64-xl-credit2 11 leak-check/basis(11) fail in 159367 blocked in 152332
 test-armhf-armhf-xl-rtds 18 guest-start/debian.repeat fail in 159413 like 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                f40ddce88593482919761f74910f42f4b84c004b
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  201 days
Failing since        152366  2020-08-01 20:49:34 Z  200 days  347 attempts
Testing same since   159367  2021-02-15 05:07:31 Z    3 days    4 attempts

------------------------------------------------------------
4578 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1033140 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Feb 18 15:39:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 Feb 2021 15:39:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86684.162919 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lClOu-0007cy-EQ; Thu, 18 Feb 2021 15:39:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86684.162919; Thu, 18 Feb 2021 15:39:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lClOu-0007cr-BX; Thu, 18 Feb 2021 15:39:32 +0000
Received: by outflank-mailman (input) for mailman id 86684;
 Thu, 18 Feb 2021 15:39:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lClOt-0007cd-44; Thu, 18 Feb 2021 15:39:31 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lClOs-0000Ey-W4; Thu, 18 Feb 2021 15:39:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lClOs-0008Ix-O6; Thu, 18 Feb 2021 15:39:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lClOs-0002aI-Na; Thu, 18 Feb 2021 15:39:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=yzrx+tXbFS3nkLvjOvoHLiKWzkhPj09cIsXknxA9DPw=; b=UmUOos5OH6GoVTWqOK6BnqKZuD
	3xbcK4Ryt25zCkaT/efoBSFWQpcmRq3ZkCR5eOHT4vORWyE0FYmkMvrfjzElREE0IlXaA6mzHkaH1
	Ob5h4iZPvphJHF9/4nVfC4h9iOosN2wQD4bDpe2OO/qAK03ZstN582yLHRje8ABS4bms=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159461-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 159461: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=e8185c5f01c68f7d29d23a4a91bc1be1ff2cc1ca
X-Osstest-Versions-That:
    xen=7a4133feaf42000923eb9d84badb6b171625f137
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 18 Feb 2021 15:39:30 +0000

flight 159461 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159461/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  e8185c5f01c68f7d29d23a4a91bc1be1ff2cc1ca
baseline version:
 xen                  7a4133feaf42000923eb9d84badb6b171625f137

Last test of basis   159445  2021-02-17 16:00:31 Z    0 days
Testing same since   159461  2021-02-18 13:01:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   7a4133feaf..e8185c5f01  e8185c5f01c68f7d29d23a4a91bc1be1ff2cc1ca -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Feb 18 15:52:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 Feb 2021 15:52:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86696.162934 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lClbe-00011a-KG; Thu, 18 Feb 2021 15:52:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86696.162934; Thu, 18 Feb 2021 15:52:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lClbe-00011T-HG; Thu, 18 Feb 2021 15:52:42 +0000
Received: by outflank-mailman (input) for mailman id 86696;
 Thu, 18 Feb 2021 15:52:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gXm/=HU=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lClbd-00011O-Cr
 for xen-devel@lists.xenproject.org; Thu, 18 Feb 2021 15:52:41 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id da28f838-c329-4f6b-b398-6ba171ec62cf;
 Thu, 18 Feb 2021 15:52:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: da28f838-c329-4f6b-b398-6ba171ec62cf
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613663560;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=twyTTeoclOMZv6TC1+k9zhbDS+Y/mNLgV8KCOu/lapw=;
  b=bS+8KJoVOxU+io8Kotf/n9Yn7w2lNHX1sTdXi4CoLjyroyvByEJcfCKp
   U1wMosD/jeAoBy7oquY7IgQGGONESCycHrJscLs9HBr1OutuaqFh5QoI7
   uZubzOhUN6lYJjp5IT4w3ZxNiGN0jA/MIL8ZL5Jdk5FQo5wk1jJI5ff8x
   M=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: rKXZRHjwaaIRhoggsqCHTbfcZmRVABR7nLazA2LEssiCakesJYU5T6OTgPjv2W5oKOcGol/IGh
 rMdysEnVrjNbEXqaZp9rsOSJiDih+HxK/EjQkwKDHcGWgqf18V1x9eJ9naZ2Bc13e2YhvBFLKC
 aEGGkhvwL/CM9Tc7RjjNN3/7VDi8ha6pyidaxBKoJtx7vIl/eCX1V+uxvZgYI7e3x0t0jjmqcD
 TO5tztHSN683CYcKsyQEGKLd6P4w1hLz9VTHBhoR9RuLCQspRXSMvT3WYp6Me/Gx32t+HA3iFN
 Lr4=
X-SBRS: 5.2
X-MesageID: 38902774
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,187,1610427600"; 
   d="scan'208";a="38902774"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=COKgsJgTBQTP2lcSwsWRb/1sy20DRLCpUrh5Wvp8r0ADa5esiu6EnKwqQsMTkUjsFtDNkMEK4cxKakbMD7YyZfqw9FLpSKczQLTlPd/Hr8NK4fYm34HSs6OldAJQbahNtguZcqizUbpm9/p1gJLyvd4qpEpMWiWaiLuiLKuAscwhehsGvpJGI9AR+Hchb5TVg3Skkp+Npi/89mpYORCIx8XE8uyCj4v7D+E6iDbC+VpHWUVMgR4Eqkp+fXvIhUOkE4T/zTOGqykh1HpJSG4r42sNeNC0cD2BEJbxmQuwm1k/8aEHWlX8ELmEOArUQax0zEkFTy8ifBT6qjjBbEFBhA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=KFSM7SXcOrlix1cTLQ7haTv07XVezgiuCtOfOD8r7Yc=;
 b=oJ7Eb4Rp3CVjDkeovP+FAvXJiO5jF+WfdMOOrnHwXvfuewS7V8tTfuQdGU59vIqLn5vymCDWrZzjhTmdKXU7ryp0nf4Bj5sTyOCmNdVvpjLCl1MlRWk2sueKzJ/t/H+xThRPJEVRB7AzB2i3sXsI1OqTi67PWEcLfevSMJ3YOVQc+pnJSbiUhtVB6HVdKRlldfcHlEPBbzH16DX4/Z2mqsMyyv4OzH1c6zxsz4FBo41J6mCuKcSSTtc0T6j85IMZgqJ58t0qbcUukLB6FgCEY33NEfDgXDLrYyVDyb3HeNndEv6MAcNcc3E47uo0pUILkzclVhvBJVGmA7uUL4ywEg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=KFSM7SXcOrlix1cTLQ7haTv07XVezgiuCtOfOD8r7Yc=;
 b=MN/Usc702do5KWcmeeUE4QsstO5pLLaEskg6CSj1PvXPx0HpDej3olU8dUcCt5zohsBXZqWxTdczcQwYP5kwMx1F67/0FQ2gXpIDp9VVpn/zJuXTjZue/jPdAh7PwN0yBqZ8q3ZHajmxOkhL6iqlvCZecPuLq7kXzQnYHWFifL0=
Date: Thu, 18 Feb 2021 16:52:29 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: <xen-devel@lists.xenproject.org>, <iwj@xenproject.org>, <wl@xen.org>,
	<anthony.perard@citrix.com>, <andrew.cooper3@citrix.com>,
	<jun.nakajima@intel.com>, <kevin.tian@intel.com>, Boris Ostrovsky
	<boris.ostrovsky@oracle.com>
Subject: Re: [PATCH v2 1/4] xl: Add support for ignore_msrs option
Message-ID: <YC6NPcym62a0Nu0M@Air-de-Roger>
References: <1611182952-9941-1-git-send-email-boris.ostrovsky@oracle.com>
 <1611182952-9941-2-git-send-email-boris.ostrovsky@oracle.com>
 <YC5EitRCZB+VCeCC@Air-de-Roger>
 <a78a4b94-47cc-64c0-1b1f-8429665822b2@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <a78a4b94-47cc-64c0-1b1f-8429665822b2@suse.com>
X-ClientProxiedBy: PR0P264CA0273.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:100:1::21) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: c28ddce7-6982-4adc-cec1-08d8d425361d
X-MS-TrafficTypeDiagnostic: DM5PR03MB2633:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM5PR03MB2633EF9901667548DE5D07898F859@DM5PR03MB2633.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: c28ddce7-6982-4adc-cec1-08d8d425361d
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Feb 2021 15:52:36.1479
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 3iyqD9ht2GbOM2MD5+4iQTnAdxSE8TYB8gyLZmzMxQlRxUesocbu1HDkRQqptniBgHRcQJ0OwXQkNnpEtg6Jgg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB2633
X-OriginatorOrg: citrix.com

On Thu, Feb 18, 2021 at 12:54:13PM +0100, Jan Beulich wrote:
> On 18.02.2021 11:42, Roger Pau Monné wrote:
> > On Wed, Jan 20, 2021 at 05:49:09PM -0500, Boris Ostrovsky wrote:
> >> This option allows the guest administrator to specify what should happen
> >> when the guest accesses an MSR which is not explicitly emulated by the
> >> hypervisor.
> >>
> >> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> >> ---
> >>  docs/man/xl.cfg.5.pod.in         | 20 +++++++++++++++++++-
> >>  tools/libs/light/libxl_types.idl |  7 +++++++
> >>  tools/xl/xl_parse.c              |  7 +++++++
> >>  3 files changed, 33 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
> >> index c8e017f950de..96ce97c42cab 100644
> >> --- a/docs/man/xl.cfg.5.pod.in
> >> +++ b/docs/man/xl.cfg.5.pod.in
> >> @@ -2044,7 +2044,25 @@ Do not provide a VM generation ID.
> >>  See also "Virtual Machine Generation ID" by Microsoft:
> >>  L<https://docs.microsoft.com/en-us/windows/win32/hyperv_v2/virtual-machine-generation-identifier>
> >>  
> >> -=back 
> >> +=over
> >> +
> >> +=item B<ignore_msrs="STRING">
> >> +
> >> +Determine hypervisor behavior on accesses to MSRs that are not emulated by the hypervisor.
> >> +
> >> +=over 4
> >> +
> >> +=item B<never>
> >> +
> >> +Issue a warning to the log and inject #GP into the guest. This is the default.
> >> +
> >> +=item B<silent>
> >> +
> >> +MSR reads return 0, MSR writes are ignored. No warnings to the log.
> >> +
> >> +=item B<verbose>
> >> +
> >> +Similar to B<silent> but a warning is written.
> > 
> > Would it make sense to allow for this option to be more fine-grained
> > in the future?
> 
> From an abstract perspective - maybe. But remember that this information
> will need to be migrated with the guest. It would seem to me that
> Boris's approach is easier migration-wise.

I'm not an expert on migration, but I seem to recall there's already a
libxl blob that gets migrated that contains the domain configuration,
so having the MSR configuration there seems like a sensible thing to
do.

> > Not that you need to implement the full thing now, but maybe we could
> > have something like:
> > 
> > "
> > =item B<ignore_msrs=[ "MSR_RANGE, "MSR_RANGE", ..]>
> > 
> > Specify a list of MSR ranges that will be ignored by the hypervisor:
> > reads will return zeros and writes will be discarded without raising a
> > #GP.
> > 
> > Each MSR_RANGE is given in hexadecimal format and may be a range, e.g.
> > c00102f0-c00102f1 (inclusive), or a single MSR, e.g. c00102f1.
> > "
> > 
> > Then you can print the messages in the hypervisor using a guest log
> > level and modify it on demand in order to get more verbose output?
> 
> "Modify on demand"? Irrespective of what you mean by this, ...
> 
> > I don't think selecting whether the messages are printed or not from
> > xl is that helpful as the same could be achieved using guest_loglvl.
> 
> ... controlling this via guest_loglvl would affect various other
> log messages' visibility.

Right, but do we really need this level of per-guest log control,
implemented in this way exclusively for MSRs?

We don't have a way for other parts of the code to have such
fine-grained control over what messages should be printed, and I
don't think MSRs should be an exception. I assume this would be used
to detect which MSRs a guest is trying to access, and I would be fine
just using guest_loglvl to that end, especially since it can now be
modified at run time.

In any case I'm more worried about having a big switch to ignore all
unhandled MSRs than about whether accesses should print a message or
not. Worst comes to worst, we could always add a new option afterwards
to selectively ignore MSR accesses, but that would be confusing for
users IMO.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Feb 18 15:53:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 Feb 2021 15:53:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86698.162947 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lClcc-00017S-0V; Thu, 18 Feb 2021 15:53:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86698.162947; Thu, 18 Feb 2021 15:53:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lClcb-00017L-S5; Thu, 18 Feb 2021 15:53:41 +0000
Received: by outflank-mailman (input) for mailman id 86698;
 Thu, 18 Feb 2021 15:53:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gXm/=HU=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lClca-00017F-PK
 for xen-devel@lists.xenproject.org; Thu, 18 Feb 2021 15:53:40 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3e0192fe-5015-4943-a8f4-4809c78731e9;
 Thu, 18 Feb 2021 15:53:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3e0192fe-5015-4943-a8f4-4809c78731e9
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613663619;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=3NFFt6nzc3+RcweZ/92bmGJ5t6EfgOUhE5axgD1CeVE=;
  b=JWqq8tKTFbClDVSvp4VTLMzAnXyh9yGcYIdvAfLJFczNBhM54hTbGfas
   RXUUhV2dKhinMah+ExKj1U8W8qLhzXGaJzD15xqUcM1EJjFEszkR3xoE9
   BlzXxCVFPDDMT3umFavKMCKwLsZzAegg7T9RYwWhsMhFXaCtjpV68NjOp
   4=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: FntlC+WxqKmqkLSwvnmCEbb72x4QGzoLFPCvPmEH1xUuBZsaQihcgJjDxWCxHLr9Z+fGkSuZFw
 0vA7ABCcHZalM5rVQLFD05PFUI2zLZ6Ikg9kEEYw+4Ph9A4ldTiVIqtdT/lBCGEfy20j73I8OP
 hgBzoCTKJ5ALO9tD8JNGsJkcHUiNDXG5HqoNZWGLMN7x92SjpSlcmzfvalP36QJXriyh6/kr6P
 cn9mFnk9O3Jtu+KVE5iS+DcapqnGthTHrFgxoeWD0Y9aEgtJlNI1k021b/EmTLFydZYJ9O8u2E
 bcQ=
X-SBRS: 5.2
X-MesageID: 37460161
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,187,1610427600"; 
   d="scan'208";a="37460161"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=RmpR7lQ3oEAK3b+F83nevG3fyqFShiq4IKraSRYVVQYbj5Q7lPwYOTJ0GHV5FMLOJLIgVZh0PNmXvEc4cG4Qrzk6aZeo2tB3D9Lim9B6woZrSN4Qs+XnE80PbZzv5hH/s9Qa7yw3UTWWdQfviaGchFyzf0EvnZryt9Gz04fKHEeKJFZolDK+VkDVKOI2VxvcMTTRU2xI+DIf1jjkDzapLujil3Ijb5uwF1XAGKf3wZDmWpsM2KFfh7z2QODq5YJevxlD+ftcPsbqv4EJjtqyUGNhck8miWvE+k783SQaBlktrBRZx0+rDclTVT9llscGEVNFbWZXCyCEFzeU0yITDA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Z20XgBSmtQH9S2Bv/kNA51nmdRbVFUITvV+X3oM96Y4=;
 b=JqlsBEX13C/44ROoQqlC1Ynlu0vs8ao2KcYeEzP25fU7MorMcYjLKQp/LXTzi/7In1Px53MvwdVq0MHYsDXlU4UOrYqCRh5AM1kGKN4o4L93DQRW1DbwWdLu01lRllEhbIIYGTuo8JZfB42IWJnPXPAKIY2H5Km0g5ke2G9cGsj5DM3Av3c4xnsq9KZkSzvQTFtzY61IZKK9rzkBaKuoU/SPMlmIU1pDf/GE9KWEJ/4aN0v7LpJ9rHa95vhFq9hatyew20gxXAHDz3v/WbaFZ3hak9GvpRtn/ofkmRakB0S1JoVQmQagYne78u7tI5cVKnFSTKqMw5uZDFpLKxdbtQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Z20XgBSmtQH9S2Bv/kNA51nmdRbVFUITvV+X3oM96Y4=;
 b=o6S7cP5nb/1CmShBBSkFHNlNhwmYFJff+uHvsw0Lg0v3gzySaxTFHM5Zx5W6YdNBj5W5R+COFSk6m1TIYE65/cTVQamA23Ejx/6TI7tVocTRQlkvrcZtaDUd2XGDvo5TOuxXza6+CqIPFFoWqgs3g0uRMQCPUPSVT/vFCX5pZSk=
Date: Thu, 18 Feb 2021 16:53:31 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: <xen-devel@lists.xenproject.org>, <iwj@xenproject.org>, <wl@xen.org>,
	<anthony.perard@citrix.com>, <andrew.cooper3@citrix.com>,
	<jun.nakajima@intel.com>, <kevin.tian@intel.com>, Boris Ostrovsky
	<boris.ostrovsky@oracle.com>
Subject: Re: [PATCH v2 3/4] x86: Allow non-faulting accesses to non-emulated
 MSRs if policy permits this
Message-ID: <YC6Ne2ZEg5alzRk2@Air-de-Roger>
References: <1611182952-9941-1-git-send-email-boris.ostrovsky@oracle.com>
 <1611182952-9941-4-git-send-email-boris.ostrovsky@oracle.com>
 <YC5OYZOAkx+jutJz@Air-de-Roger>
 <785a4925-31f2-9df1-a4b3-1760ad17e01e@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <785a4925-31f2-9df1-a4b3-1760ad17e01e@suse.com>
X-ClientProxiedBy: PR3P189CA0049.EURP189.PROD.OUTLOOK.COM
 (2603:10a6:102:53::24) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 45a25cfe-488f-4309-b441-08d8d42559c7
X-MS-TrafficTypeDiagnostic: DM6PR03MB3577:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB3577B4F2841BF72470B6704A8F859@DM6PR03MB3577.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 45a25cfe-488f-4309-b441-08d8d42559c7
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Feb 2021 15:53:36.0652
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: JN9bCbUgt+G0tGPr85C7I8O0N4cY99sJVYjx31k+zT2H4frmk5mmUurhXX7A2j+8Rxik2Qzek7hDnLyM8Uj3wQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB3577
X-OriginatorOrg: citrix.com

On Thu, Feb 18, 2021 at 12:57:13PM +0100, Jan Beulich wrote:
> On 18.02.2021 12:24, Roger Pau Monné wrote:
> > On Wed, Jan 20, 2021 at 05:49:11PM -0500, Boris Ostrovsky wrote:
> >> --- a/xen/arch/x86/hvm/vmx/vmx.c
> >> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> >> @@ -3017,8 +3017,8 @@ static int vmx_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
> >>              break;
> >>          }
> >>  
> >> -        gdprintk(XENLOG_WARNING, "RDMSR 0x%08x unimplemented\n", msr);
> >> -        goto gp_fault;
> >> +        if ( guest_unhandled_msr(curr, msr, msr_content, false, true) )
> >> +            goto gp_fault;
> >>      }
> >>  
> >>  done:
> >> @@ -3319,10 +3319,8 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
> >>               is_last_branch_msr(msr) )
> >>              break;
> >>  
> >> -        gdprintk(XENLOG_WARNING,
> >> -                 "WRMSR 0x%08x val 0x%016"PRIx64" unimplemented\n",
> >> -                 msr, msr_content);
> >> -        goto gp_fault;
> >> +        if ( guest_unhandled_msr(v, msr, &msr_content, true, true) )
> >> +            goto gp_fault;
> >>      }
> > 
> > I think this could be done in hvm_msr_read_intercept instead of having
> > to call guest_unhandled_msr from each vendor specific handler?
> > 
> > Oh, I see, that's likely done to differentiate between guest MSR
> > accesses and emulator ones? I'm not sure we really need to
> > distinguish between guest MSR accesses and emulator ones; surely in
> > the past they were treated equally?
> 
> We did discuss this before. Even if they were treated the same in
> the past, that's not correct, and hence we shouldn't suppress the
> distinction going forward. A guest explicitly asking to access an
> MSR (via RDMSR/WRMSR) is entirely different from the emulator
> perhaps just probing an MSR, falling back to some default behavior
> if it's unavailable.

Ack, then placing the calls to guest_unhandled_msr in vendor code
seems like the best option.

Thanks, Roger.
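[For readers without patch 2/4 of the series in front of them, a minimal standalone sketch of the helper being discussed. Everything below is an assumption for illustration only (simplified signature with no vcpu argument, invented policy flag), not the posted code:]

```c
#include <stdbool.h>
#include <stdint.h>

/* Assumed policy flag, standing in for the per-domain ignore_msrs
 * setting the series introduces; not the actual data structure. */
static bool ignore_unhandled_msrs = true;

/*
 * Sketch of a guest_unhandled_msr()-style helper: returns true when a
 * #GP should be injected.  The from_guest distinction is the point made
 * above: an explicit guest RDMSR/WRMSR may be ignored under the policy
 * (reads return 0, writes are dropped), while an emulator probe keeps
 * faulting so the caller can fall back to its default behaviour.
 */
bool guest_unhandled_msr(uint32_t msr, uint64_t *val,
                         bool is_write, bool from_guest)
{
    (void)msr;                   /* a real helper would match/log this */

    if ( from_guest && ignore_unhandled_msrs )
    {
        if ( !is_write )
            *val = 0;            /* reads return zero */
        return false;            /* access ignored, no #GP */
    }
    return true;                 /* unhandled: inject #GP */
}
```

[Vendor call sites would then mirror the quoted hunks, e.g. `if ( guest_unhandled_msr(msr, &msr_content, true, true) ) goto gp_fault;`.]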


From xen-devel-bounces@lists.xenproject.org Thu Feb 18 15:57:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 Feb 2021 15:57:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86702.162959 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lClfy-0001IS-Fy; Thu, 18 Feb 2021 15:57:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86702.162959; Thu, 18 Feb 2021 15:57:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lClfy-0001IL-CR; Thu, 18 Feb 2021 15:57:10 +0000
Received: by outflank-mailman (input) for mailman id 86702;
 Thu, 18 Feb 2021 15:57:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=x3vz=HU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lClfw-0001IF-CN
 for xen-devel@lists.xenproject.org; Thu, 18 Feb 2021 15:57:08 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 34a89aa1-8a97-4477-9280-a1eb20102512;
 Thu, 18 Feb 2021 15:57:07 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BACE2AD57;
 Thu, 18 Feb 2021 15:57:06 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 34a89aa1-8a97-4477-9280-a1eb20102512
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613663826; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=1HFE6TUCiuqRztdB/XB1rh8grG5LF44712bwwFCFtZs=;
	b=qn65r9eCaNmJqux2b6wa5QgfXBiBaLLJOMMAdthfG0EkFcXw3E88+nijwZxCbTN8YQBuX6
	JxR6dx9b2I0RsCuXz7GWRJclYTt2AkD1X0PBQowJf/2MM2WBAhjH3zTaXEbgJpkVUCFxi9
	PNv5dwwOnL/VVkt0p1uV+DkiR1YVbxE=
Subject: Re: [PATCH v2 1/4] xl: Add support for ignore_msrs option
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, iwj@xenproject.org, wl@xen.org,
 anthony.perard@citrix.com, andrew.cooper3@citrix.com,
 jun.nakajima@intel.com, kevin.tian@intel.com,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
References: <1611182952-9941-1-git-send-email-boris.ostrovsky@oracle.com>
 <1611182952-9941-2-git-send-email-boris.ostrovsky@oracle.com>
 <YC5EitRCZB+VCeCC@Air-de-Roger>
 <a78a4b94-47cc-64c0-1b1f-8429665822b2@suse.com>
 <YC6NPcym62a0Nu0M@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8ffd4f51-5fc6-349b-146f-e52c35c59b4d@suse.com>
Date: Thu, 18 Feb 2021 16:57:05 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <YC6NPcym62a0Nu0M@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 18.02.2021 16:52, Roger Pau Monné wrote:
> On Thu, Feb 18, 2021 at 12:54:13PM +0100, Jan Beulich wrote:
>> On 18.02.2021 11:42, Roger Pau Monné wrote:
>>> Not that you need to implement the full thing now, but maybe we could
>>> have something like:
>>>
>>> "
>>> =item B<ignore_msrs=[ "MSR_RANGE", "MSR_RANGE", ..]>
>>>
>>> Specify a list of MSR ranges that will be ignored by the hypervisor:
>>> reads will return zeros and writes will be discarded without raising a
>>> #GP.
>>>
>>> Each MSR_RANGE is given in hexadecimal format and may be a range, e.g.
>>> c00102f0-c00102f1 (inclusive), or a single MSR, e.g. c00102f1.
>>> "
>>>
>>> Then you can print the messages in the hypervisor using a guest log
>>> level and modify it on demand in order to get more verbose output?
>>
>> "Modify on demand"? Irrespective of what you mean with this, ...
>>
>>> I don't think selecting whether the messages are printed or not from
>>> xl is that helpful as the same could be achieved using guest_loglvl.
>>
>> ... controlling this via guest_loglvl would affect various other
>> log messages' visibility.
> 
> Right, but do we really need this level of per-guest log control,
> implemented in this way exclusively for MSRs?
> 
> We don't have a way for other parts of the code to have such
> fine-grained control about what messages should be printed, and I
> don't think MSR should be an exception. I assume this would be used to
> detect which MSRs a guest is trying to access, and I would be fine
> just using guest_loglvl to that end, especially since it can now be
> modified at run time.

I can certainly see your point. The problem is that for guests
heavily accessing such MSRs, all other messages may disappear
due to rate limiting. That's not going to be helpful.

Jan
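[The MSR_RANGE syntax proposed earlier in this thread — hex, either "c00102f0-c00102f1" (inclusive) or a single "c00102f1" — is straightforward to parse. A hedged sketch of such a parser; parse_msr_range() is a hypothetical helper for illustration, not part of any posted patch:]

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

/* Parse one MSR_RANGE: hex, either "first-last" (inclusive) or a
 * single MSR.  Hypothetical helper, not from the series. */
bool parse_msr_range(const char *s, uint32_t *first, uint32_t *last)
{
    char *end;
    unsigned long v = strtoul(s, &end, 16);

    if ( end == s || v > UINT32_MAX )
        return false;
    *first = (uint32_t)v;

    if ( *end == '\0' )          /* single MSR, e.g. "c00102f1" */
    {
        *last = *first;
        return true;
    }
    if ( *end != '-' )
        return false;

    s = end + 1;                 /* range, e.g. "c00102f0-c00102f1" */
    v = strtoul(s, &end, 16);
    if ( end == s || *end != '\0' || v > UINT32_MAX || v < *first )
        return false;
    *last = (uint32_t)v;
    return true;
}
```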


From xen-devel-bounces@lists.xenproject.org Thu Feb 18 16:42:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 Feb 2021 16:42:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86709.162976 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCmNb-0006Yv-7T; Thu, 18 Feb 2021 16:42:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86709.162976; Thu, 18 Feb 2021 16:42:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCmNb-0006Yo-4Z; Thu, 18 Feb 2021 16:42:15 +0000
Received: by outflank-mailman (input) for mailman id 86709;
 Thu, 18 Feb 2021 16:35:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5cwq=HU=cerno.tech=maxime@srs-us1.protection.inumbo.net>)
 id 1lCmGy-0005eH-BU
 for xen-devel@lists.xenproject.org; Thu, 18 Feb 2021 16:35:25 +0000
Received: from wnew2-smtp.messagingengine.com (unknown [64.147.123.27])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id beef2542-b4a1-4d7d-b048-973b9bc22af8;
 Thu, 18 Feb 2021 16:35:22 +0000 (UTC)
Received: from compute6.internal (compute6.nyi.internal [10.202.2.46])
 by mailnew.west.internal (Postfix) with ESMTP id DF1EFE1F;
 Thu, 18 Feb 2021 11:35:18 -0500 (EST)
Received: from mailfrontend1 ([10.202.2.162])
 by compute6.internal (MEProxy); Thu, 18 Feb 2021 11:35:21 -0500
Received: from localhost (lfbn-tou-1-1502-76.w90-89.abo.wanadoo.fr
 [90.89.68.76])
 by mail.messagingengine.com (Postfix) with ESMTPA id DE99E24005C;
 Thu, 18 Feb 2021 11:35:14 -0500 (EST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: beef2542-b4a1-4d7d-b048-973b9bc22af8
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=cerno.tech; h=
	date:from:to:cc:subject:message-id:references:mime-version
	:content-type:in-reply-to; s=fm2; bh=9glu1NqYQ89GMrISno2hyWbjb5g
	/OTCuIq2T05YOits=; b=G6pm+BxV6Dh9oiD6SJUz1xXNJ7zPPCZpBW9AEchzgQY
	SsCnHwIhIZleE9PczJeaxSABPuNfyZNS1u3Zj2kDGJdZzYOuF3md0m5GftSNoeev
	ZavMKbuw+WSPSBcY4VpVTBkV/ddPDz1xHbCCETt+RAuJ3CD0g/aAsRM9ST0bJ2+t
	+6ifUhzBy+1w+NmK0Bu03Ar0ji4npJeGmU9jySby/2b7puTSkKn/YECDpUC+LyuL
	6Guj5EkOzfnyrZ7fuCEdeNrI7VmN9ZSWN6LpeNmu5/FnzVAUNcjxOj673ayo+RK2
	5fiMH3d3rCy4RcepJW9kS8GD2lNioOxaP+2WrEFLkFQ==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:content-type:date:from:in-reply-to
	:message-id:mime-version:references:subject:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm2; bh=9glu1N
	qYQ89GMrISno2hyWbjb5g/OTCuIq2T05YOits=; b=F8tizhTRDZ1QmkV4IX/nKz
	GFGeeljOv0nlv95ycFp0ze5k8JaZwhxcx+uGwtKUsVGuJaf3wBvlVHShQ6AEUuUa
	nYa/M+CR3Q7nQv96fSu/CfaeIwuzJxRKrhn2B8yNMcGRvghgtZ0Vp/RbSek1ItaO
	Huq8VE54K1HbMzSfgJ/qk4vTktg7KPqCaeXhHlIBqpyMVIsKqMHjMXcuPGS+ft/T
	80Nr4qrm62gEv7vJ1wWgcbm6pcoegFAPfttgQ+bfSTEtA0b5vB4AgmFNGzLe9Jjp
	2mDgst8VHznkjEYhvASc+FuMU+CJuXELKFptery4MIiY6qnvOy5wsWIYxYIQt8qA
	==
X-ME-Sender: <xms:Q5cuYNHGf4DP6s5Yrk94g87bEt9maVsvZUvIZ5_OSwOa3ygXsvSckA>
    <xme:Q5cuYCWJqVHxc0TBvW8RaM0wSdODYkxfn4yClOKahSKwcF7ZJMIFzdXO9brx_KJON
    wVOak8LO-iBIzeTj3U>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgeduledrjeeggdekjecutefuodetggdotefrodftvf
    curfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfghnecu
    uegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmdenuc
    fjughrpeffhffvuffkfhggtggujgesghdtreertddtvdenucfhrhhomhepofgrgihimhgv
    ucftihhprghrugcuoehmrgigihhmvgestggvrhhnohdrthgvtghhqeenucggtffrrghtth
    gvrhhnpeelkeeghefhuddtleejgfeljeffheffgfeijefhgfeufefhtdevteegheeiheeg
    udenucfkphepledtrdekledrieekrdejieenucevlhhushhtvghrufhiiigvpedtnecurf
    grrhgrmhepmhgrihhlfhhrohhmpehmrgigihhmvgestggvrhhnohdrthgvtghh
X-ME-Proxy: <xmx:Q5cuYPL64pGtEXDOIjB2LytBtepgEjTDIlirUSWB4Z_2bk7CHwRyNw>
    <xmx:Q5cuYDGxFYNq90q1Kozd9_sE8xI5mkNSdlmaFJwB_t_-qpDhKS53yA>
    <xmx:Q5cuYDU83mA2iruvPx6pKrkqUS3fSvQrhe9kV_x71Rp0kdmagu5q-g>
    <xmx:RpcuYHmzfxpDFq6rYEjERoNkSWa7GUcR73y9VFLtc6uo7hgIIot_W0suCF8>
Date: Thu, 18 Feb 2021 17:35:12 +0100
From: Maxime Ripard <maxime@cerno.tech>
To: Thomas Zimmermann <tzimmermann@suse.de>
Cc: airlied@linux.ie, daniel@ffwll.ch, maarten.lankhorst@linux.intel.com,
	dri-devel@lists.freedesktop.org, linux-aspeed@lists.ozlabs.org,
	linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org,
	linux-mediatek@lists.infradead.org,
	linux-amlogic@lists.infradead.org, linux-arm-msm@vger.kernel.org,
	freedreno@lists.freedesktop.org, linux-renesas-soc@vger.kernel.org,
	linux-rockchip@lists.infradead.org,
	linux-stm32@st-md-mailman.stormreply.com,
	linux-tegra@vger.kernel.org, xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org
Subject: Re: [PATCH v2] drm/gem: Move drm_gem_fb_prepare_fb() to GEM atomic
 helpers
Message-ID: <20210218163512.arnmixdkygysxrqk@gilmour>
References: <20210211081636.28311-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="qyrmapmaovukaths"
Content-Disposition: inline
In-Reply-To: <20210211081636.28311-1-tzimmermann@suse.de>


--qyrmapmaovukaths
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

Hi,

On Thu, Feb 11, 2021 at 09:16:36AM +0100, Thomas Zimmermann wrote:
> diff --git a/include/drm/drm_gem_framebuffer_helper.h b/include/drm/drm_gem_framebuffer_helper.h
> index 6b013154911d..495d174d9989 100644
> --- a/include/drm/drm_gem_framebuffer_helper.h
> +++ b/include/drm/drm_gem_framebuffer_helper.h
> @@ -9,9 +9,11 @@ struct drm_framebuffer;
>  struct drm_framebuffer_funcs;
>  struct drm_gem_object;
>  struct drm_mode_fb_cmd2;
> +#if 0
>  struct drm_plane;
>  struct drm_plane_state;
>  struct drm_simple_display_pipe;
> +#endif

That's probably not what you meant?

With that fixed,
Acked-by: Maxime Ripard <mripard@kernel.org>

Thanks!
Maxime

--qyrmapmaovukaths
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iHUEABYIAB0WIQRcEzekXsqa64kGDp7j7w1vZxhRxQUCYC6XQAAKCRDj7w1vZxhR
xeYjAQDBp12JMmHuiBCHQBmWyl9fGbmCMg6R9psxq9edd+0vigD+MjBWZAmh8A1d
2S0DtBQtnfgH07vDxZs1Eb8jJZ+x/QQ=
=WmxA
-----END PGP SIGNATURE-----

--qyrmapmaovukaths--


From xen-devel-bounces@lists.xenproject.org Thu Feb 18 17:04:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 Feb 2021 17:04:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86714.162989 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCmio-00007V-6L; Thu, 18 Feb 2021 17:04:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86714.162989; Thu, 18 Feb 2021 17:04:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCmio-00007O-1Q; Thu, 18 Feb 2021 17:04:10 +0000
Received: by outflank-mailman (input) for mailman id 86714;
 Thu, 18 Feb 2021 17:04:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=x3vz=HU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lCmin-00007J-4Y
 for xen-devel@lists.xenproject.org; Thu, 18 Feb 2021 17:04:09 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c28e1cc6-c552-494c-82f6-dcf358f2d43d;
 Thu, 18 Feb 2021 17:04:07 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id F36C8ACD4;
 Thu, 18 Feb 2021 17:04:06 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c28e1cc6-c552-494c-82f6-dcf358f2d43d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613667847; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=djexdEIqwLwefnVmIQrAri1RtlqUfPMIhNwImDgvcV4=;
	b=HjE2iM9FcJy4LBkH0C81pCVZwk59Ok7E2zGyd2ZklzhSYHjq/Z+480aETttrRAVLzP5QuT
	ouWfqm9WY7sU99ek5FmKwyHzEMueW1sSOzWBlIK9Q9tkXE33IZn3GmYmLbyC74LhOzDUoI
	jnYh+3FGjqwCqCI9C+sWoNvu+sZXNKs=
Subject: Re: [for-4.15][PATCH v3 3/3] xen/iommu: x86: Harden the IOMMU
 page-table allocator
To: Julien Grall <julien@xen.org>
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Paul Durrant <paul@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210217142458.3769-1-julien@xen.org>
 <20210217142458.3769-4-julien@xen.org>
 <cf303aee-3d89-5608-f358-6bef5c32d706@suse.com>
 <51618338-daff-5b9a-5214-e0788d95992b@xen.org>
 <ee3d628e-a369-fddc-4824-e860ebabe8af@suse.com>
 <96971bbb-05ec-7df0-a8d7-931cc0b41a77@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <141ea545-3725-5305-d352-057ff7c70c4f@suse.com>
Date: Thu, 18 Feb 2021 18:04:06 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <96971bbb-05ec-7df0-a8d7-931cc0b41a77@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 18.02.2021 14:19, Julien Grall wrote:
> 
> 
> On 18/02/2021 13:10, Jan Beulich wrote:
>> On 17.02.2021 17:29, Julien Grall wrote:
>>> On 17/02/2021 15:13, Jan Beulich wrote:
>>>> On 17.02.2021 15:24, Julien Grall wrote:
>>>>> --- a/xen/drivers/passthrough/x86/iommu.c
>>>>> +++ b/xen/drivers/passthrough/x86/iommu.c
>>>>> @@ -149,6 +149,13 @@ int arch_iommu_domain_init(struct domain *d)
>>>>>  
>>>>>  void arch_iommu_domain_destroy(struct domain *d)
>>>>>  {
>>>>> +    /*
>>>>> +     * There should be not page-tables left allocated by the time the
>>>> Nit: s/not/no/ ?
>>>>
>>>>> +     * domain is destroyed. Note that arch_iommu_domain_destroy() is
>>>>> +     * called unconditionally, so pgtables may be unitialized.
>>>>> +     */
>>>>> +    ASSERT(dom_iommu(d)->platform_ops == NULL ||
>>>>> +           page_list_empty(&dom_iommu(d)->arch.pgtables.list));
>>>>>    }
>>>>>    
>>>>>    static bool __hwdom_init hwdom_iommu_map(const struct domain *d,
>>>>> @@ -279,6 +286,9 @@ int iommu_free_pgtables(struct domain *d)
>>>>>         */
>>>>>        hd->platform_ops->clear_root_pgtable(d);
>>>>>    
>>>>> +    /* After this barrier no new page allocations can occur. */
>>>>> +    spin_barrier(&hd->arch.pgtables.lock);
>>>>
>>>> Didn't patch 2 utilize the call to ->clear_root_pgtable() itself as
>>>> the barrier? Why introduce another one (with a similar comment)
>>>> explicitly now?
>>> The barriers act differently, one will gate against any IOMMU page-tables
>>> modification. The other one will gate against allocation.
>>>
>>> There is no guarantee that the former will prevent the latter.
>>
>> Oh, right - different locks. I got confused here because in both
>> cases the goal is to prevent allocations.
>>
>>>>> @@ -315,9 +326,29 @@ struct page_info *iommu_alloc_pgtable(struct domain *d)
>>>>>        unmap_domain_page(p);
>>>>>    
>>>>>        spin_lock(&hd->arch.pgtables.lock);
>>>>> -    page_list_add(pg, &hd->arch.pgtables.list);
>>>>> +    /*
>>>>> +     * The IOMMU page-tables are freed when relinquishing the domain, but
>>>>> +     * nothing prevent allocation to happen afterwards. There is no valid
>>>>> +     * reasons to continue to update the IOMMU page-tables while the
>>>>> +     * domain is dying.
>>>>> +     *
>>>>> +     * So prevent page-table allocation when the domain is dying.
>>>>> +     *
>>>>> +     * We relying on &hd->arch.pgtables.lock to synchronize d->is_dying.
>>>>> +     */
>>>>> +    if ( likely(!d->is_dying) )
>>>>> +    {
>>>>> +        alive = true;
>>>>> +        page_list_add(pg, &hd->arch.pgtables.list);
>>>>> +    }
>>>>>        spin_unlock(&hd->arch.pgtables.lock);
>>>>>    
>>>>> +    if ( unlikely(!alive) )
>>>>> +    {
>>>>> +        free_domheap_page(pg);
>>>>> +        pg = NULL;
>>>>> +    }
>>>>> +
>>>>>        return pg;
>>>>>    }
>>>>
>>>> As before I'm concerned of this forcing error paths to be taken
>>>> elsewhere, in case an allocation still happens (e.g. from unmap
>>>> once super page mappings are supported). Considering some of the
>>>> error handling in the IOMMU code is to invoke domain_crash(), it
>>>> would be quite unfortunate if we ended up crashing a domain
>>>> while it is being cleaned up after.
>>>
>>> It is unfortunate, but I think this is better than having to leak page
>>> tables.
>>>
>>>>
>>>> Additionally, the (at present still hypothetical) unmap case, if
>>>> failing because of the change here, would then again chance to
>>>> leave mappings in place while the underlying pages get freed. As
>>>> this would likely require an XSA, the change doesn't feel like
>>>> "hardening" to me.
>>>
>>> I would agree with this if memory allocations could never fail. That's
>>> not that case and will become worse as we use IOMMU pool.
>>>
>>> Do you have callers in mind that don't check the return value of iommu_unmap()?
>>
>> The function is marked __must_check, so there won't be any direct
>> callers ignoring errors (albeit I may be wrong here - we used to
>> have cases where we simply suppressed the resulting compiler
>> diagnostic, without really handling errors; not sure if all of
>> these are gone by now). Risks might be elsewhere.
> 
> But this is not a new risk. So I don't understand why you think my patch 
> is the one that may lead to an XSA in the future.

I didn't mean to imply it would _lead_ to an XSA (you're
right that the problem was there already before), but the term
"harden" suggests to me that the patch aims at eliminating
possible conditions. IOW the result here looks to me as if it
would yield a false sense of safety.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Feb 18 17:41:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 Feb 2021 17:41:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86716.163000 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCnJ5-0003vS-3x; Thu, 18 Feb 2021 17:41:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86716.163000; Thu, 18 Feb 2021 17:41:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCnJ5-0003vL-0z; Thu, 18 Feb 2021 17:41:39 +0000
Received: by outflank-mailman (input) for mailman id 86716;
 Thu, 18 Feb 2021 17:41:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lCnJ4-0003vG-76
 for xen-devel@lists.xenproject.org; Thu, 18 Feb 2021 17:41:38 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lCnJ3-0002kd-0O; Thu, 18 Feb 2021 17:41:37 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lCnJ2-0000hr-J5; Thu, 18 Feb 2021 17:41:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=quajQGjmwB5BnZPmECJw4fmBHD5TtmnbeXjlQMd/HrU=; b=RluOuw40NpTjfgQImBRz+RvrSg
	vkhS2gYRefOwf4lNKsaoYzUStzxRX7JiRjgAe8jW+QWh6cg8d2n9RIu8msYPmfxJPsmAAaI7oqrjq
	pz8b3akt1ug+ut2IzAh44II4QDkLE8qDrQxfQ69d43QrUlQAyKpP85dDP2fSPjFy4bhM=;
Subject: Re: [for-4.15][PATCH v3 3/3] xen/iommu: x86: Harden the IOMMU
 page-table allocator
To: Jan Beulich <jbeulich@suse.com>
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Paul Durrant <paul@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210217142458.3769-1-julien@xen.org>
 <20210217142458.3769-4-julien@xen.org>
 <cf303aee-3d89-5608-f358-6bef5c32d706@suse.com>
 <51618338-daff-5b9a-5214-e0788d95992b@xen.org>
 <ee3d628e-a369-fddc-4824-e860ebabe8af@suse.com>
 <96971bbb-05ec-7df0-a8d7-931cc0b41a77@xen.org>
 <141ea545-3725-5305-d352-057ff7c70c4f@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <6e467ed0-34f1-498d-a9ce-7e0f2e606033@xen.org>
Date: Thu, 18 Feb 2021 17:41:34 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <141ea545-3725-5305-d352-057ff7c70c4f@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 18/02/2021 17:04, Jan Beulich wrote:
> On 18.02.2021 14:19, Julien Grall wrote:
>>
>>
>> On 18/02/2021 13:10, Jan Beulich wrote:
>>> On 17.02.2021 17:29, Julien Grall wrote:
>>>> On 17/02/2021 15:13, Jan Beulich wrote:
>>>>> On 17.02.2021 15:24, Julien Grall wrote:
>>>>>> --- a/xen/drivers/passthrough/x86/iommu.c
>>>>>> +++ b/xen/drivers/passthrough/x86/iommu.c
>>>>>> @@ -149,6 +149,13 @@ int arch_iommu_domain_init(struct domain *d)
>>>>>>  
>>>>>>  void arch_iommu_domain_destroy(struct domain *d)
>>>>>>  {
>>>>>> +    /*
>>>>>> +     * There should be not page-tables left allocated by the time the
>>>>> Nit: s/not/no/ ?
>>>>>
>>>>>> +     * domain is destroyed. Note that arch_iommu_domain_destroy() is
>>>>>> +     * called unconditionally, so pgtables may be unitialized.
>>>>>> +     */
>>>>>> +    ASSERT(dom_iommu(d)->platform_ops == NULL ||
>>>>>> +           page_list_empty(&dom_iommu(d)->arch.pgtables.list));
>>>>>>     }
>>>>>>     
>>>>>>     static bool __hwdom_init hwdom_iommu_map(const struct domain *d,
>>>>>> @@ -279,6 +286,9 @@ int iommu_free_pgtables(struct domain *d)
>>>>>>          */
>>>>>>         hd->platform_ops->clear_root_pgtable(d);
>>>>>>     
>>>>>> +    /* After this barrier no new page allocations can occur. */
>>>>>> +    spin_barrier(&hd->arch.pgtables.lock);
>>>>>
>>>>> Didn't patch 2 utilize the call to ->clear_root_pgtable() itself as
>>>>> the barrier? Why introduce another one (with a similar comment)
>>>>> explicitly now?
>>>> The barriers act differently, one will gate against any IOMMU page-tables
>>>> modification. The other one will gate against allocation.
>>>>
>>>> There is no guarantee that the former will prevent the latter.
>>>
>>> Oh, right - different locks. I got confused here because in both
>>> cases the goal is to prevent allocations.
>>>
>>>>>> @@ -315,9 +326,29 @@ struct page_info *iommu_alloc_pgtable(struct domain *d)
>>>>>>         unmap_domain_page(p);
>>>>>>     
>>>>>>         spin_lock(&hd->arch.pgtables.lock);
>>>>>> -    page_list_add(pg, &hd->arch.pgtables.list);
>>>>>> +    /*
>>>>>> +     * The IOMMU page-tables are freed when relinquishing the domain, but
>>>>>> +     * nothing prevent allocation to happen afterwards. There is no valid
>>>>>> +     * reasons to continue to update the IOMMU page-tables while the
>>>>>> +     * domain is dying.
>>>>>> +     *
>>>>>> +     * So prevent page-table allocation when the domain is dying.
>>>>>> +     *
>>>>>> +     * We relying on &hd->arch.pgtables.lock to synchronize d->is_dying.
>>>>>> +     */
>>>>>> +    if ( likely(!d->is_dying) )
>>>>>> +    {
>>>>>> +        alive = true;
>>>>>> +        page_list_add(pg, &hd->arch.pgtables.list);
>>>>>> +    }
>>>>>>         spin_unlock(&hd->arch.pgtables.lock);
>>>>>>     
>>>>>> +    if ( unlikely(!alive) )
>>>>>> +    {
>>>>>> +        free_domheap_page(pg);
>>>>>> +        pg = NULL;
>>>>>> +    }
>>>>>> +
>>>>>>         return pg;
>>>>>>     }
>>>>>
>>>>> As before I'm concerned of this forcing error paths to be taken
>>>>> elsewhere, in case an allocation still happens (e.g. from unmap
>>>>> once super page mappings are supported). Considering some of the
>>>>> error handling in the IOMMU code is to invoke domain_crash(), it
>>>>> would be quite unfortunate if we ended up crashing a domain
>>>>> while it is being cleaned up after.
>>>>
>>>> It is unfortunate, but I think this is better than having to leak page
>>>> tables.
>>>>
>>>>>
>>>>> Additionally, the (at present still hypothetical) unmap case, if
>>>>> failing because of the change here, would then again chance to
>>>>> leave mappings in place while the underlying pages get freed. As
>>>>> this would likely require an XSA, the change doesn't feel like
>>>>> "hardening" to me.
>>>>
>>>> I would agree with this if memory allocations could never fail. That's
>>>> not that case and will become worse as we use IOMMU pool.
>>>>
>>>> Do you have callers in mind that don't check the return value of iommu_unmap()?
>>>
>>> The function is marked __must_check, so there won't be any direct
>>> callers ignoring errors (albeit I may be wrong here - we used to
>>> have cases where we simply suppressed the resulting compiler
>>> diagnostic, without really handling errors; not sure if all of
>>> these are gone by now). Risks might be elsewhere.
>>
>> But this is not a new risk. So I don't understand why you think my patch
>> is the one that may lead to an XSA in the future.
> 
> I didn't mean to imply it would _lead_ to an XSA (you're
> right that the problem was there already before), but the term
> "harden" suggests to me that the patch aims at eliminating
> possible conditions.

It eliminates the risk that someone inadvertently calls 
iommu_alloc_pgtable() when the domain is dying. If that happens 
after the page tables have been freed, we would end up leaking memory.

> IOW the result here looks to me as if it
> would yield a false sense of safety.

So you are concerned about the wording rather than the code itself. Is 
that correct?

If so, how about "xen/iommu: Make the IOMMU page-table allocator 
slightly firmer"?

Cheers,

-- 
Julien Grall
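[The allocation-vs-teardown race the patch closes can be shown in isolation. A standalone sketch of the pattern, with pthread mutexes standing in for Xen's spinlock and spin_barrier(), and a counter standing in for the page list — all names below are illustrative, not Xen's:]

```c
/* A "dying" flag synchronised by the same lock that guards the
 * page-table list, so that once teardown has run, no new page can be
 * tracked (and a racing allocation is freed rather than leaked). */
#include <pthread.h>
#include <stdbool.h>
#include <stdlib.h>

struct domain {
    pthread_mutex_t pgtables_lock;
    bool is_dying;
    int nr_pgtables;             /* stand-in for the tracked page list */
};

/* Returns NULL when the domain is dying; the caller must cope. */
void *alloc_pgtable(struct domain *d)
{
    void *pg = malloc(4096);
    bool alive = false;

    if ( !pg )
        return NULL;

    pthread_mutex_lock(&d->pgtables_lock);
    if ( !d->is_dying )
    {
        alive = true;
        d->nr_pgtables++;        /* page_list_add() in the real code */
    }
    pthread_mutex_unlock(&d->pgtables_lock);

    if ( !alive )
    {
        free(pg);                /* freed outside the lock */
        pg = NULL;
    }
    return pg;
}

void begin_teardown(struct domain *d)
{
    /* Plays the role of the spin_barrier(): after this returns, any
     * concurrent alloc_pgtable() has either already added its page
     * (so teardown will free it) or will see is_dying and back off. */
    pthread_mutex_lock(&d->pgtables_lock);
    d->is_dying = true;
    pthread_mutex_unlock(&d->pgtables_lock);
}
```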


From xen-devel-bounces@lists.xenproject.org Thu Feb 18 20:28:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 Feb 2021 20:28:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86723.163018 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCpug-0002GC-OT; Thu, 18 Feb 2021 20:28:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86723.163018; Thu, 18 Feb 2021 20:28:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCpug-0002G5-LN; Thu, 18 Feb 2021 20:28:38 +0000
Received: by outflank-mailman (input) for mailman id 86723;
 Thu, 18 Feb 2021 20:28:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCpue-0002Fx-Mh; Thu, 18 Feb 2021 20:28:36 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCpue-0005cT-Du; Thu, 18 Feb 2021 20:28:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCpue-0008Dh-2L; Thu, 18 Feb 2021 20:28:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lCpue-0006Vh-1r; Thu, 18 Feb 2021 20:28:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=B4+FrCAuJb6306KF6OzdztzmEcq95ufxUhR/+llne1o=; b=Tppjg2OiT9uuiaIzZvkpxvQee0
	1yNNdqxx4BAffD66otpE7Wbs3D5s2Cx8JPD0cw7VOgmIDf78M5LPIpGDLLFQx+XVrPQOfG1bFByN8
	nh1YJ4bBZtyni5vw9SWd53SXsxzpGtNWcnTlGCqgiAJfGYxJFUsiWghR1+dArg9xNfmQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159448-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.13-testing test] 159448: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.13-testing:test-amd64-i386-libvirt-xsm:guest-saverestore:fail:heisenbug
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=d7a1e06efd3ae2b16d5bb335932376b7d7eaf633
X-Osstest-Versions-That:
    xen=ab995b6af9ab723b0b52e5ea0e342b612f1a7b89
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 18 Feb 2021 20:28:36 +0000

flight 159448 xen-4.13-testing real [real]
flight 159465 xen-4.13-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/159448/
http://logs.test-lab.xenproject.org/osstest/logs/159465/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt-xsm  17 guest-saverestore   fail pass in 159465-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 159419
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 159419
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 159419
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 159419
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 159419
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 159419
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 159419
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 159419
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 159419
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 159419
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 159419
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  d7a1e06efd3ae2b16d5bb335932376b7d7eaf633
baseline version:
 xen                  ab995b6af9ab723b0b52e5ea0e342b612f1a7b89

Last test of basis   159419  2021-02-16 15:06:29 Z    2 days
Testing same since   159448  2021-02-17 18:03:21 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   ab995b6af9..d7a1e06efd  d7a1e06efd3ae2b16d5bb335932376b7d7eaf633 -> stable-4.13


From xen-devel-bounces@lists.xenproject.org Thu Feb 18 22:29:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 Feb 2021 22:29:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86734.163037 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCrnH-0005Fy-4n; Thu, 18 Feb 2021 22:29:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86734.163037; Thu, 18 Feb 2021 22:29:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCrnH-0005Fr-1W; Thu, 18 Feb 2021 22:29:07 +0000
Received: by outflank-mailman (input) for mailman id 86734;
 Thu, 18 Feb 2021 22:29:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCrnF-0005Fj-Iv; Thu, 18 Feb 2021 22:29:05 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCrnF-0007Xn-D3; Thu, 18 Feb 2021 22:29:05 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCrnF-0004o3-4L; Thu, 18 Feb 2021 22:29:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lCrnF-0005qp-3h; Thu, 18 Feb 2021 22:29:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=g8/SfeTGriTyUiIZ7dHqNVfCCa9k1gSTx7GB45U5yVc=; b=zaSEb94VO8vm0Sc7wE2sUhykZT
	etZk+M42rVd5nVINz3+H4z7hrgMkxqKurGMHmBKjY8l6qOb4WWNHYbSZh3sf1+gkddTMsLFba/nEn
	VGdgh1f7uqAXwDCeg04NnI3xyrGKJOsz6aR2m3+sFqwBgG2OV1YsKVBi1o2vtcKiPefU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159454-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 159454: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=33ddfaf4e6f50c45d103063805dcdf6b70320daf
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 18 Feb 2021 22:29:05 +0000

flight 159454 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159454/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              33ddfaf4e6f50c45d103063805dcdf6b70320daf
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  223 days
Failing since        151818  2020-07-11 04:18:52 Z  222 days  215 attempts
Testing same since   159454  2021-02-18 04:47:59 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 43179 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Feb 18 23:34:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 Feb 2021 23:34:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86743.163063 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCsoG-0003JC-7N; Thu, 18 Feb 2021 23:34:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86743.163063; Thu, 18 Feb 2021 23:34:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCsoG-0003J5-4N; Thu, 18 Feb 2021 23:34:12 +0000
Received: by outflank-mailman (input) for mailman id 86743;
 Thu, 18 Feb 2021 23:34:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCsoF-0003Iw-FG; Thu, 18 Feb 2021 23:34:11 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCsoF-00009L-6C; Thu, 18 Feb 2021 23:34:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCsoE-0008B0-UH; Thu, 18 Feb 2021 23:34:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lCsoE-0008Ke-Tn; Thu, 18 Feb 2021 23:34:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=d0OM0VEWZ7wmsBNzi1iRoUFxUOqvv5g4nJiXdCpfNss=; b=KRIO85N+0hbSJWglbnRh6SmZ3N
	uCf+b4atSu5H1QrlykwGrk3fjOOyruYiAXzLx4gLpHgj1lNSMEUhjvRJzGoWTSe3eZ6AqDu8CC/0w
	5+Y5TNYzV2ZmuUd8tmHM7xzXB7zQ0z1Y4Br8mNtyMjhNjLi+5BJCF06OCiwb3ZRIoFQo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159450-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.14-testing test] 159450: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.14-testing:test-amd64-i386-libvirt-xsm:xen-boot:fail:heisenbug
    xen-4.14-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=b0b734a8b3e516ff1040884b755a8d47afed31ea
X-Osstest-Versions-That:
    xen=9f357fe3e4593fc1109962b76d4db73d589ebef5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 18 Feb 2021 23:34:10 +0000

flight 159450 xen-4.14-testing real [real]
flight 159467 xen-4.14-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/159450/
http://logs.test-lab.xenproject.org/osstest/logs/159467/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt-xsm   8 xen-boot            fail pass in 159467-retest

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-xsm 15 migrate-support-check fail in 159467 never pass
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 159420
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 159420
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 159420
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 159420
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 159420
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 159420
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 159420
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 159420
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 159420
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 159420
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 159420
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  b0b734a8b3e516ff1040884b755a8d47afed31ea
baseline version:
 xen                  9f357fe3e4593fc1109962b76d4db73d589ebef5

Last test of basis   159420  2021-02-16 15:06:34 Z    2 days
Testing same since   159450  2021-02-17 21:31:04 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   9f357fe3e4..b0b734a8b3  b0b734a8b3e516ff1040884b755a8d47afed31ea -> stable-4.14


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 01:42:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 01:42:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86751.163084 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCuoe-0001Fc-D4; Fri, 19 Feb 2021 01:42:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86751.163084; Fri, 19 Feb 2021 01:42:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCuoe-0001FT-5y; Fri, 19 Feb 2021 01:42:44 +0000
Received: by outflank-mailman (input) for mailman id 86751;
 Fri, 19 Feb 2021 01:42:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cTCa=HV=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lCuoc-0001FO-Bx
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 01:42:42 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 35959384-beba-4392-bd20-1b3687373ce6;
 Fri, 19 Feb 2021 01:42:41 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id AA0DE64EC7;
 Fri, 19 Feb 2021 01:42:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 35959384-beba-4392-bd20-1b3687373ce6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1613698960;
	bh=ES19ySQ2YCJfx/AJS03/x+T+dDPTIgq86RcyudNFWL8=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=Z1F5A8obpo6TaFIOLgmezi6is//V3GqqHGcx91+aeARdOHDYoxj84PVmZsebY8QCo
	 yQLanZYTM8VwlTE+AuXp68+YM4mj+pswMpOwsqUqsrrYaEIPNIlMLq59jH1LTnPyd+
	 LaPCfdRG4SWCvNqaOs/HDn54ThwjK3yEpCei9lLeZ08Ex0Fw7QuN+aciBbgywPYYT1
	 AYZ+kIAiFcrnxBsQ+a7pTLg2NcBtIz8qLvbZM8GT3HS6OY+nNZu+eu30Liqal3pYbc
	 ut9Lb4pEy/E7YL7GK6n7r2IgnIVNyY7BhAW/jpwLREh3EeZCF/EnzminpxEZTb1HXu
	 VnhDVot4YKbdA==
Date: Thu, 18 Feb 2021 17:42:38 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Jan Beulich <jbeulich@suse.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>, 
    xen-devel@lists.xenproject.org, cardoe@cardoe.com, 
    andrew.cooper3@citrix.com, wl@xen.org, iwj@xenproject.org, 
    anthony.perard@citrix.com, 
    Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: Re: [PATCH] firmware: don't build hvmloader if it is not needed
In-Reply-To: <416e26b7-0e24-a9ee-6f9a-732f77f7e0cc@suse.com>
Message-ID: <alpine.DEB.2.21.2102181737310.3234@sstabellini-ThinkPad-T480s>
References: <20210213020540.27894-1-sstabellini@kernel.org> <20210213135056.GA6191@mail-itl> <4d9200cd-bd4b-e429-5c96-7a4399bb00b4@suse.com> <alpine.DEB.2.21.2102161016000.3234@sstabellini-ThinkPad-T480s> <5a574326-9560-e771-b84f-9d4f348b7f5f@suse.com>
 <alpine.DEB.2.21.2102171529460.3234@sstabellini-ThinkPad-T480s> <416e26b7-0e24-a9ee-6f9a-732f77f7e0cc@suse.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 18 Feb 2021, Jan Beulich wrote:
> On 18.02.2021 00:45, Stefano Stabellini wrote:
> > Given this, I take it there is no 32bit build env? A bit of Googling tells
> > me that gcc on Alpine Linux is compiled without multilib support.
> > 
> > 
> > That said I was looking at the Alpine Linux APKBUILD script:
> > https://gitlab.alpinelinux.org/alpine/aports/-/blob/master/main/xen/APKBUILD
> > 
> > And I noticed this patch that looks suspicious:
> > https://gitlab.alpinelinux.org/alpine/aports/-/blob/master/main/xen/musl-hvmloader-fix-stdint.patch
> 
> Indeed. I find it very odd that they have a bimodal gcc (allowing
> -m32) but no suitable further infrastructure (headers). So perhaps
> configure should probe for "gcc -m32" producing a uint64_t that is
> actually 64 bits wide, and disable hvmloader building otherwise
> (and - important - no matter whether it would actually be needed;
> alternative being to fail configuring altogether)? Until - as said
> before - we've made hvmloader properly freestanding.

OK, it took me a lot longer than expected (I have never had the dubious
pleasure of working with autoconf before), but the following seems to
work, tested on both Alpine Linux and Debian Unstable. Of course I had
to run autoreconf first.


diff --git a/config/Tools.mk.in b/config/Tools.mk.in
index 48bd9ab731..d5e4f1679f 100644
--- a/config/Tools.mk.in
+++ b/config/Tools.mk.in
@@ -50,6 +50,7 @@ CONFIG_OVMF         := @ovmf@
 CONFIG_ROMBIOS      := @rombios@
 CONFIG_SEABIOS      := @seabios@
 CONFIG_IPXE         := @ipxe@
+CONFIG_HVMLOADER    := @hvmloader@
 CONFIG_QEMU_TRAD    := @qemu_traditional@
 CONFIG_QEMU_XEN     := @qemu_xen@
 CONFIG_QEMUU_EXTRA_ARGS:= @EXTRA_QEMUU_CONFIGURE_ARGS@
diff --git a/tools/Makefile b/tools/Makefile
index 757a560be0..6cff5766f3 100644
--- a/tools/Makefile
+++ b/tools/Makefile
@@ -14,7 +14,7 @@ SUBDIRS-y += examples
 SUBDIRS-y += hotplug
 SUBDIRS-y += xentrace
 SUBDIRS-$(CONFIG_XCUTILS) += xcutils
-SUBDIRS-$(CONFIG_X86) += firmware
+SUBDIRS-$(CONFIG_HVMLOADER) += firmware
 SUBDIRS-y += console
 SUBDIRS-y += xenmon
 SUBDIRS-y += xentop
diff --git a/tools/configure.ac b/tools/configure.ac
index 6b611deb13..a3a52cec41 100644
--- a/tools/configure.ac
+++ b/tools/configure.ac
@@ -307,6 +307,10 @@ AC_ARG_VAR([AWK], [Path to awk tool])
 
 # Checks for programs.
 AC_PROG_CC
+AC_LANG(C)
+AC_LANG_CONFTEST([AC_LANG_SOURCE([[int main() { return 0;}]])])
+AS_IF([gcc -m32 conftest.c -o - 2>/dev/null], [hvmloader=y], [AC_MSG_WARN(hvmloader build disabled as the compiler cannot build 32bit binaries)])
+AC_SUBST(hvmloader)
 AC_PROG_MAKE_SET
 AC_PROG_INSTALL
 AC_PATH_PROG([FLEX], [flex])


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 01:43:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 01:43:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86752.163096 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCuou-0001Iy-HN; Fri, 19 Feb 2021 01:43:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86752.163096; Fri, 19 Feb 2021 01:43:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCuou-0001Ir-EM; Fri, 19 Feb 2021 01:43:00 +0000
Received: by outflank-mailman (input) for mailman id 86752;
 Fri, 19 Feb 2021 01:42:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cTCa=HV=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lCuot-0001IQ-6f
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 01:42:59 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 41e0c28a-ee77-4bf5-af49-4967c6e7371c;
 Fri, 19 Feb 2021 01:42:56 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 1E99964E92;
 Fri, 19 Feb 2021 01:42:55 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 41e0c28a-ee77-4bf5-af49-4967c6e7371c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1613698975;
	bh=XF5c/LiOQ6+/57pE7wpWL6218t2PqLuap1Fk4canAHU=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=J5lonfZr6/9hfj/SYa2ITGZu1SdnUNu5oqWx30JRAI+bhc2F4MPBANEEX8phklIE0
	 rQ5Imz1fBcjeGstEYkS2dTF0PwuVSUkHzyOhLN/Dmr0wdY3ogPMLjzAaHkWXQC+xNy
	 JN/pIujrabw7dFokJ8XR7c7j+dCSf3zjri2utBwjkG47Mg0z4EOuZEyXJ8LPGGKD8E
	 sCi/AF51DuSc/2dwT1oj0XPpOpII0wmJ/KkfuVZ+otQWTh4OhVH0D+zccDEIrM5u5K
	 sVwY8qchkORdYzx5FIajfjT6Ws7CwIpIIgV19c8LSDkFvi6uJaM7jFJJ+VSP4uJooH
	 FxJ378q/zeahA==
Date: Thu, 18 Feb 2021 17:42:54 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, Bertrand.Marquis@arm.com, 
    Volodymyr_Babchuk@epam.com, rahul.singh@arm.com
Subject: Re: [RFC] xen/arm: introduce XENFEAT_ARM_dom0_iommu
In-Reply-To: <0be0196f-5b3f-73f9-5ab7-7a54faabec5c@xen.org>
Message-ID: <alpine.DEB.2.21.2102180920570.3234@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2102161333090.3234@sstabellini-ThinkPad-T480s> <2d22d5b8-6cda-f27b-e938-4806b65794a5@xen.org> <alpine.DEB.2.21.2102171233270.3234@sstabellini-ThinkPad-T480s> <0be0196f-5b3f-73f9-5ab7-7a54faabec5c@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 18 Feb 2021, Julien Grall wrote:
> On 17/02/2021 23:54, Stefano Stabellini wrote:
> > On Wed, 17 Feb 2021, Julien Grall wrote:
> > > On 17/02/2021 02:00, Stefano Stabellini wrote:
> > > > Hi all,
> > > > 
> > > > Today Linux uses the swiotlb-xen driver (drivers/xen/swiotlb-xen.c) to
> > > > translate addresses for DMA operations in Dom0. Specifically,
> > > > swiotlb-xen is used to translate the address of a foreign page (a page
> > > > belonging to a domU) mapped into Dom0 before using it for DMA.
> > > > 
> > > > This is important because although Dom0 is 1:1 mapped, DomUs are not. On
> > > > systems without an IOMMU swiotlb-xen enables PV drivers to work as long
> > > > as the backends are in Dom0. Thanks to swiotlb-xen, the DMA operation
> > > > ends up using the MFN, rather than the GFN.
> > > > 
> > > > 
> > > > On systems with an IOMMU, this is not necessary: when a foreign page is
> > > > mapped in Dom0, it is added to the Dom0 p2m. A new GFN->MFN translation
> > > > is established for both MMU and SMMU. Dom0 could safely use the GFN
> > > > address (instead of the MFN) for DMA operations and they would work. It
> > > > would be more efficient than using swiotlb-xen.
> > > > 
> > > > If you recall my presentation from Xen Summit 2020, Xilinx is working on
> > > > cache coloring. With cache coloring, no domain is 1:1 mapped, not even
> > > > Dom0. In a scenario where Dom0 is not 1:1 mapped, swiotlb-xen does not
> > > > work as intended.
> > > > 
> > > > 
> > > > The suggested solution for both these issues is to add a new feature
> > > > flag "XENFEAT_ARM_dom0_iommu" that tells Dom0 that it is safe not to use
> > > > swiotlb-xen because IOMMU translations are available for Dom0. If
> > > > XENFEAT_ARM_dom0_iommu is set, Linux should skip the swiotlb-xen
> > > > initialization. I have tested this scheme with and without cache
> > > > coloring (hence with and without 1:1 mapping of Dom0) on ZCU102 and it
> > > > works as expected: DMA operations succeed.
> > > > 
> > > > 
> > > > What about systems where an IOMMU is present but not all devices are
> > > > protected?
> > > > 
> > > > There is no way for Xen to know which devices are protected and which
> > > > ones are not: devices that do not have the "iommus" property could or
> > > > could not be DMA masters.
> > > > 
> > > > Perhaps Xen could populate a whitelist of devices protected by the IOMMU
> > > > based on the "iommus" property. It would require some added complexity
> > > > in Xen and especially in the swiotlb-xen driver in Linux to use it,
> > > > which is not ideal.
> > > 
> > > You are trading a bit more complexity in Xen and Linux against a user who
> > > may not be able to use the hypervisor on his/her platform without a quirk
> > > in Xen (see more below).
> > > 
> > > > However, this approach would not work for cache
> > > > coloring, where dom0 is not 1:1 mapped, so the swiotlb-xen should not
> > > > be used either way.
> > > 
> > > Not all Dom0 Linux kernels will be able to work with cache colouring. So
> > > you will need a way for the kernel to say "Hey, I can avoid using
> > > swiotlb".
> > 
> > An unmodified dom0 Linux kernel can work with Xen cache coloring enabled.
> 
> Really? I was expecting Linux to configure the DMA transaction to use the MFN
> for foreign pages. So a mapping GFN == MFN would be necessary.
>
> > The swiotlb-xen is unneeded and also all the pfn_valid() checks to detect
> > if a page is local or foreign don't work as intended. 
> 
> I am not sure to understand this. Are you saying that Linux will
> "mistakenly" consider the foreign page as local?

The pfn_valid(addr) check is based on the idea that gfn == mfn for local
pages and gfn != mfn for foreign pages. If you take the mfn of a foreign
page, pfn_valid(mfn) should fail.

However, it only works as long as Dom0 is 1:1 mapped. If it is not 1:1
mapped, pfn_valid(mfn) could return true for a foreign page. It could
mistake a foreign page for a local one. It could also mistake a local
page for a foreign one if we called pfn_valid() passing the mfn of one
of our local pages.
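To make the failure mode concrete, here is a toy Python model of that heuristic (illustrative only, not Linux's actual pfn_valid() implementation; the frame numbers are made up):

```python
def pfn_valid(pfn, ram_gfns):
    # Model: a frame number counts as "local RAM" if it is in Dom0's own
    # RAM map, which is expressed in GFNs.
    return pfn in ram_gfns

# Dom0's guest frame numbers (its RAM map), the same in both scenarios:
dom0_gfns = {0x100, 0x101, 0x102}

# 1:1 mapped Dom0: gfn == mfn for local pages, so any foreign mfn falls
# outside the set and is correctly rejected.
assert pfn_valid(0x100, dom0_gfns)       # local page: valid
assert not pfn_valid(0x500, dom0_gfns)   # foreign page's mfn: rejected

# Cache-colored Dom0: local pages are backed by scattered mfns (gfn != mfn),
# so a foreign page whose mfn happens to equal one of Dom0's gfns is
# mistaken for a local page:
assert pfn_valid(0x101, dom0_gfns)       # foreign mfn 0x101 misclassified
```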


> Therefore, it would always use the GFN rather than MFN?

For DMA operations:
- local pages use GFN as dev_addr
- foreign pages use MFN as dev_addr   <- requires 1:1 MFN mapping

For cache flush operations:
- local pages might flush locally, works as expected
- local pages might flush via hypercall, needlessly slow but works
  (this is what I was talking about)
- foreign pages might flush locally, requires 1:1 MFN mapping
- foreign pages might flush via hypercall, works as expected
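As a toy sketch of the address selection above (the field names are made up for illustration, not swiotlb-xen's real data structures):

```python
def dma_dev_addr(page):
    # Local pages use the GFN; foreign pages use the MFN, which only
    # resolves correctly if foreign pages are mapped 1:1 at their MFN.
    return page["mfn"] if page["foreign"] else page["gfn"]

local = {"gfn": 0x100, "mfn": 0x100, "foreign": False}   # 1:1 Dom0 page
foreign = {"gfn": 0x200, "mfn": 0x500, "foreign": True}  # mapped domU page
assert dma_dev_addr(local) == 0x100
assert dma_dev_addr(foreign) == 0x500
```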

The 1:1 MFN mapping of foreign pages is required, and it is problematic
as it could conflict with existing mappings. Sorry for not mentioning it
earlier. So yeah, it looks like a change in Linux is required.


> > However, it still
> > works on systems where the IOMMU can be relied upon. That's because the
> > IOMMU does the GFN->MFN translations, and also because both swiotlb-xen
> > code paths (cache flush local and cache flush via hypercall) work.
> > 
> > Also considering that somebody that enables cache coloring has to know
> > the entire system well, I don't think we need a runtime flag from Linux
> > to say "Hey I can avoid using swiotlb".
> 
> I think we should avoid hardcoding any decision again. Otherwise, sooner or
> later this will come back to bite us.

That's fair.

Let's say that Linux needs some sort of change to work with cache
colored Xen. One important point is that a user cannot really expect the
system to enable cache coloring on its own when appropriate. The user has
to go in and manually set it up in the config files of all domUs and
even the Xen command line. So if the user turns it on and Linux breaks
at boot because it can't support it, I think it is entirely fine.

In other words, I don't think cache coloring needs to be dynamically
discoverable and enabled transparently. But certainly if you turn it
on, you don't want Linux to fail silently after a few hours. If it is
going to fail, we want it to fail straight away and clearly. For
example, a BUG_ON at boot in Linux or Xen would be fine.


 
> > > > How to set XENFEAT_ARM_dom0_iommu?
> > > > 
> > > > We could set XENFEAT_ARM_dom0_iommu automatically when
> > > > is_iommu_enabled(d) for Dom0. We could also have a platform specific
> > > > (xen/arch/arm/platforms/) override so that a specific platform can
> > > > disable XENFEAT_ARM_dom0_iommu. For debugging purposes and advanced
> > > > users, it would also be useful to be able to override it via a Xen
> > > > command line parameter.
> > > Platform quirks should be limited to a small set of platforms.
> > > 
> > > In this case, this would not be only per-platform but also per-firmware
> > > table, as a developer can decide to remove/add IOMMU nodes in the DT at
> > > any time.
> > > 
> > > In addition to that, it means we are introducing a regression for those
> > > users, as Xen 4.14 would have worked on their platform but not anymore.
> > > They would need to go through all the nodes and find out which one is
> > > not protected.
> > > 
> > > This is a bit of a daunting task, and we are going to end up having a
> > > lot of per-platform quirks in Xen.
> > > 
> > > So this approach to selecting the flag is a no-go for me. FAOD, the
> > > inverted idea (i.e. only setting XENFEAT_ARM_dom0_iommu per-platform) is
> > > a no-go as well.
> > > 
> > > I don't have a good idea how to set the flag automatically. My
> > > requirement is newer Xen should continue to work on all supported
> > > platforms without any additional per-platform effort.
> > 
> > Absolutely agreed.
> > 
> > 
> > One option would be to rename the flag to XENFEAT_ARM_cache_coloring and
> > only set it when cache coloring is enabled.  Obviously you need to know
> > what you are doing if you are enabling cache coloring. If we go down
> > that route, we don't risk breaking compatibility with any platforms.
> > Given that cache coloring is not upstream yet (upstreaming should start
> > very soon), maybe the only thing to do now would be to reserve a XENFEAT
> > number.
> 
> At least in this context, I can't see how the problem described is cache
> coloring specific. Although, we may need to expose such a flag for other
> purposes in the future.
 
I agree.


> > But actually it was always wrong for Linux to enable swiotlb-xen without
> > checking whether it is 1:1 mapped or not. Today we enable swiotlb-xen in
> > dom0 and disable it in domU, while we should have enabled swiotlb-xen if
> > 1:1 mapped no matter dom0/domU. (swiotlb-xen could be useful in a 1:1
> > mapped domU driver domain.)
> > 
> > 
> > There is an argument (which Andy was making on IRC) that being 1:1 mapped
> > or not is important information that Xen should provide to the domain
> > regardless of anything else.
> > 
> > So maybe we should add two flags:
> > 
> > - XENFEAT_direct_mapped
> > - XENFEAT_not_direct_mapped
> 
> I am guessing the two flags are to allow Linux to fall back to the default
> behavior (depending on dom0 vs domU) on older hypervisors. On newer
> hypervisors, one of these flags would always be set. Is that correct?

Yes. On a newer hypervisor one of the two would be present and Linux can
make an informed decision. On an older hypervisor, neither flag would be
present, so Linux will have to keep doing what it is currently doing.
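Sketched in Python (the decision logic only; the flag names are from this thread, and the helper is hypothetical, not Linux's actual xen_feature() interface):

```python
def use_swiotlb_xen(features, is_dom0):
    # Decide whether to initialize swiotlb-xen based on the two proposed
    # flags, falling back to today's behavior on an older hypervisor.
    if "XENFEAT_direct_mapped" in features:
        return True    # 1:1 mapped: swiotlb-xen works and is needed
    if "XENFEAT_not_direct_mapped" in features:
        return False   # not 1:1 mapped: skip swiotlb-xen
    # Neither flag present: older hypervisor, keep the dom0/domU heuristic.
    return is_dom0

assert use_swiotlb_xen({"XENFEAT_direct_mapped"}, is_dom0=False)
assert not use_swiotlb_xen({"XENFEAT_not_direct_mapped"}, is_dom0=True)
```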

 
> > To all domains. This is not even ARM specific. Today dom0 would get
> > XENFEAT_direct_mapped and domUs XENFEAT_not_direct_mapped. With cache
> > coloring all domains will get XENFEAT_not_direct_mapped. With Bertrand's
> > team's work on 1:1 mapping domUs, some domUs might start to get
> > XENFEAT_direct_mapped too one day soon.
> > 
> > Now I think this is the best option because it is descriptive, doesn't
> > imply anything about what Linux should or should not do, and doesn't
> > depend on unreliable IOMMU information.
> 
> That's a good first step, but this still doesn't solve the problem of whether
> the swiotlb can be disabled per-device, or of disabling the expensive 1:1
> mapping in the IOMMU page-tables.
>
> It feels to me we need to have a more complete solution (not necessarily
> implemented) so we don't put ourselves in a corner again.

Yeah, the XENFEAT_{not_,}direct_mapped flags help clean things up, but
don't solve the issues you described. Those are difficult to solve; it
would be nice to have some ideas.

One issue is that the information passed via device tree is limited to
the "iommus" property. If that's all we have, there isn't much we can
do. The device tree whitelist is maybe the only option, although it
feels a bit complex intuitively. We could maybe replace the real iommu
node with a fake iommu node, only to use it to "tag" devices protected
by the real iommu.

I like the idea of rewarding well-designed boards: boards that have an
IOMMU that works for all DMA-mastering devices. It would be great to be
able to optimize those in a simple way, without breaking the others. But
unfortunately, due to the limited info in the device tree, I cannot
think of a way to do it automatically. And it is not great to rely on
platform files.



> > Instead, if we follow my original proposal of using
> > XENFEAT_ARM_dom0_iommu and set it automatically when Dom0 is protected
> > by IOMMU, we risk breaking PV drivers for platforms where that protection
> > is incomplete. I have no idea how many there are out there today. 
> 
> This can virtually affect any platform as it is easy to disable an IOMMU in
> the firmware table.
> 
> > I have
> > the feeling that there aren't that many but I am not sure. So yes, it
> > could be that we start passing XENFEAT_ARM_dom0_iommu for a given
> > platform, Linux skips the swiotlb-xen initialization, actually it is
> > needed for a network/block device, and a PV driver breaks. I can see why
> > you say this is a no-go.
> > 
> > 
> > Third option. We still use XENFEAT_ARM_dom0_iommu but we never set
> > XENFEAT_ARM_dom0_iommu automatically. It needs a platform specific flag
> > to be set. We add the flag to xen/arch/arm/platforms/xilinx-zynqmp.c and
> > any other platforms that qualify. Basically it is "opt in" instead of
> > "opt out". We don't risk breaking anything because platforms would have
> > XENFEAT_ARM_dom0_iommu disabled by default.
> Well, yes, you will not break other platforms. However, you are still at
> risk of breaking your platform if the firmware table is updated to disable
> some but not all IOMMUs (for performance concerns, brokenness, ...).

This is something we might be able to detect: we can detect if an IOMMU
is disabled.



From xen-devel-bounces@lists.xenproject.org Fri Feb 19 03:41:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 03:41:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86758.163108 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCwfA-0004XZ-8K; Fri, 19 Feb 2021 03:41:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86758.163108; Fri, 19 Feb 2021 03:41:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCwfA-0004XS-4C; Fri, 19 Feb 2021 03:41:04 +0000
Received: by outflank-mailman (input) for mailman id 86758;
 Fri, 19 Feb 2021 03:41:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCwf8-0004XK-PA; Fri, 19 Feb 2021 03:41:02 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCwf8-0006jA-Gr; Fri, 19 Feb 2021 03:41:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCwf8-0004Gx-8M; Fri, 19 Feb 2021 03:41:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lCwf8-0001MX-7N; Fri, 19 Feb 2021 03:41:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=AxJPj6s906IAs2x+dw3xQLtjyPnu63GPqEkw/Yy9KGM=; b=6koQV2E/BaK8B8uo7CIPlfMrKq
	uppp/o/C7V/c3s4A9RbDKOqsEytBJW1BGg+3bh9qnjnaD5NC0bc1ZNYQY7ZMcDenTTmeMnH/Y0jYL
	/ie1jDuNfu0AZjSKHRm54YR1P5f3Q4XqTApS/mvarNYZr3BvJOFDTV7BNLRLwM1dAdao=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [linux-5.4 bisection] complete test-armhf-armhf-libvirt
Message-Id: <E1lCwf8-0001MX-7N@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 19 Feb 2021 03:41:02 +0000

branch xen-unstable
xenbranch xen-unstable
job test-armhf-armhf-libvirt
testid guest-start

Tree: libvirt git://xenbits.xen.org/libvirt.git
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
  Bug introduced:  a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Bug not present: acc402fa5bf502d471d50e3d495379f093a7f9e4
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/159469/


  commit a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Author: David Woodhouse <dwmw@amazon.co.uk>
  Date:   Wed Jan 13 13:26:02 2021 +0000
  
      xen: Fix event channel callback via INTX/GSI
      
      [ Upstream commit 3499ba8198cad47b731792e5e56b9ec2a78a83a2 ]
      
      For a while, event channel notification via the PCI platform device
      has been broken, because we attempt to communicate with xenstore before
      we even have notifications working, with the xs_reset_watches() call
      in xs_init().
      
      We tend to get away with this on Xen versions below 4.0 because we avoid
      calling xs_reset_watches() anyway, because xenstore might not cope with
      reading a non-existent key. And newer Xen *does* have the vector
      callback support, so we rarely fall back to INTX/GSI delivery.
      
      To fix it, clean up a bit of the mess of xs_init() and xenbus_probe()
      startup. Call xs_init() directly from xenbus_init() only in the !XS_HVM
      case, deferring it to be called from xenbus_probe() in the XS_HVM case
      instead.
      
      Then fix up the invocation of xenbus_probe() to happen either from its
      device_initcall if the callback is available early enough, or when the
      callback is finally set up. This means that the hack of calling
      xenbus_probe() from a workqueue after the first interrupt, or directly
      from the PCI platform device setup, is no longer needed.
      
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Link: https://lore.kernel.org/r/20210113132606.422794-2-dwmw2@infradead.org
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/linux-5.4/test-armhf-armhf-libvirt.guest-start.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/linux-5.4/test-armhf-armhf-libvirt.guest-start --summary-out=tmp/159469.bisection-summary --basis-template=158387 --blessings=real,real-bisect,real-retry linux-5.4 test-armhf-armhf-libvirt guest-start
Searching for failure / basis pass:
 159431 fail [host=cubietruck-gleizes] / 158681 [host=cubietruck-braque] 158624 [host=cubietruck-picasso] 158616 [host=arndale-bluewater] 158609 [host=arndale-metrocentre] 158603 [host=cubietruck-metzinger] 158593 [host=arndale-westfield] 158583 [host=arndale-lakeside] 158563 ok.
Failure / basis pass flights: 159431 / 158563
Tree: libvirt git://xenbits.xen.org/libvirt.git
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 5b9a4104c902d7dec14c9e3c5652a638194487c6 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4f4d862c1c7232a18347616d94c343c929657fdb 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 04085ec1ac05a362812e9b0c6b5a8713d7dc88ad
Basis pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd d26b3110041a9fddc6c6e36398f53f7eab8cff82 c530a75c1e6a472b0eb9558310b518f0dfcd8860 339371ef78eb3a6f2e9848f8b058379de5e87d39 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e e8adbf680b56a3f4b9600c7bcc04fec1877a6213
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/libvirt.git#2c846fa6bcc11929c9fb857a22430fb9945654ad-2c846fa6bcc11929c9fb857a22430fb9945654ad https://gitlab.com/keycodemap/keycodemapdb.git#27acf0ef828bf719b2053ba398b195829413dbdd-27acf0ef828bf719b2053ba398b195829413dbdd git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git#d26b3110041a9fddc6c6e36398f53f7eab8cff82-5b9a4104c902d7dec14c9e3c5652a638194487c6 git://xenbits.xen.org/osstest/linux-firmware.git#\
 c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#339371ef78eb3a6f2e9848f8b058379de5e87d39-4f4d862c1c7232a18347616d94c343c929657fdb git://xenbits.xen.org/qemu-xen.git#7ea428895af2840d85c524f0bd11a38aac308308-7ea428895af2840d85c524f0bd11a38aac308308 git://xenbits.xen.org/osstest/seabios.git#ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e-ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e git://xenbits.xen.org/xen.git#e8adbf680b56a3f4b9600c7bcc\
 04fec1877a6213-04085ec1ac05a362812e9b0c6b5a8713d7dc88ad
Loaded 15001 nodes in revision graph
Searching for test results:
 158552 [host=cubietruck-braque]
 158563 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd d26b3110041a9fddc6c6e36398f53f7eab8cff82 c530a75c1e6a472b0eb9558310b518f0dfcd8860 339371ef78eb3a6f2e9848f8b058379de5e87d39 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e e8adbf680b56a3f4b9600c7bcc04fec1877a6213
 158583 [host=arndale-lakeside]
 158593 [host=arndale-westfield]
 158603 [host=cubietruck-metzinger]
 158609 [host=arndale-metrocentre]
 158616 [host=arndale-bluewater]
 158624 [host=cubietruck-picasso]
 158681 [host=cubietruck-braque]
 158707 fail irrelevant
 158716 fail irrelevant
 158748 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 131f8d8a889a5ca66a835eea82bba043ac91a7cf c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e f8708b0ed6d549d1d29b8b5cc287f1f2b642bc63
 158765 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 131f8d8a889a5ca66a835eea82bba043ac91a7cf c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 6677b5a3577c16501fbc51a3341446905bd21c38
 158796 fail irrelevant
 158818 fail irrelevant
 158841 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158863 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158881 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c6be6dab9c4bdf135bc02b61ecc304d5511c3588 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158929 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 0fbca6ce4174724f28be5268c5d210f51ed96e31 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea56ebf67dd55483105aa9f9996a48213e78337e 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 9dc687f155a57216b83b17f9cde55dd43e06b0cd
 158962 fail irrelevant
 159023 fail irrelevant
 159129 fail irrelevant
 159200 fail irrelevant
 159238 fail irrelevant
 159295 fail irrelevant
 159324 fail irrelevant
 159339 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 5b9a4104c902d7dec14c9e3c5652a638194487c6 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2e1e8c35f3178df95d79da81ac6deec242da74c2 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 04085ec1ac05a362812e9b0c6b5a8713d7dc88ad
 159359 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 5b9a4104c902d7dec14c9e3c5652a638194487c6 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2e1e8c35f3178df95d79da81ac6deec242da74c2 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 04085ec1ac05a362812e9b0c6b5a8713d7dc88ad
 159372 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 5b9a4104c902d7dec14c9e3c5652a638194487c6 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2e1e8c35f3178df95d79da81ac6deec242da74c2 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 04085ec1ac05a362812e9b0c6b5a8713d7dc88ad
 159399 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 5b9a4104c902d7dec14c9e3c5652a638194487c6 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2e1e8c35f3178df95d79da81ac6deec242da74c2 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 04085ec1ac05a362812e9b0c6b5a8713d7dc88ad
 159439 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd d26b3110041a9fddc6c6e36398f53f7eab8cff82 c530a75c1e6a472b0eb9558310b518f0dfcd8860 339371ef78eb3a6f2e9848f8b058379de5e87d39 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e e8adbf680b56a3f4b9600c7bcc04fec1877a6213
 159442 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 5b9a4104c902d7dec14c9e3c5652a638194487c6 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2e1e8c35f3178df95d79da81ac6deec242da74c2 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 04085ec1ac05a362812e9b0c6b5a8713d7dc88ad
 159444 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd e2d69319b713c30ca21428c3955a79f3a7bf6c23 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3b769c5110384fb33bcfeddced80f721ec7838cc 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 452ddbe3592b141b05a7e0676f09c8ae07f98fdd
 159447 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 8c3d3b385ed868660c7dff0336da1bd5a9fb134d c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159449 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 4d1cf8eeda5b3f411440d9910a484b1d06484aa7 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159451 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 68f99105752d132d411231bfc60cf78eceaac5e0 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159452 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159455 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 7eef736858712ab65afea3908f49eb4e7775fa93 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159431 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 5b9a4104c902d7dec14c9e3c5652a638194487c6 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4f4d862c1c7232a18347616d94c343c929657fdb 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 04085ec1ac05a362812e9b0c6b5a8713d7dc88ad
 159456 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159458 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 5b9a4104c902d7dec14c9e3c5652a638194487c6 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4f4d862c1c7232a18347616d94c343c929657fdb 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 04085ec1ac05a362812e9b0c6b5a8713d7dc88ad
 159464 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159466 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159468 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
 159469 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd a09d4e7acdbf276b2096661ee82454ae3dd24d2b c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
Searching for interesting versions
 Result found: flight 158563 (pass), for basis pass
 Result found: flight 159339 (fail), for basis failure (at ancestor ~281)
 Repro found: flight 159439 (pass), for basis pass
 Repro found: flight 159458 (fail), for basis failure
 0 revisions at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd acc402fa5bf502d471d50e3d495379f093a7f9e4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d6fc9d36fd5ff15972bedab919f37bb4ee951d0 7ea428895af2840d85c524f0bd11a38aac308308 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 464301737acfa90b46b79659b19d7f456861def3
No revisions left to test, checking graph state.
 Result found: flight 159452 (pass), for last pass
 Result found: flight 159456 (fail), for first failure
 Repro found: flight 159464 (pass), for last pass
 Repro found: flight 159466 (fail), for first failure
 Repro found: flight 159468 (pass), for last pass
 Repro found: flight 159469 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
  Bug introduced:  a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Bug not present: acc402fa5bf502d471d50e3d495379f093a7f9e4
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/159469/


  commit a09d4e7acdbf276b2096661ee82454ae3dd24d2b
  Author: David Woodhouse <dwmw@amazon.co.uk>
  Date:   Wed Jan 13 13:26:02 2021 +0000
  
      xen: Fix event channel callback via INTX/GSI
      
      [ Upstream commit 3499ba8198cad47b731792e5e56b9ec2a78a83a2 ]
      
      For a while, event channel notification via the PCI platform device
      has been broken, because we attempt to communicate with xenstore before
      we even have notifications working, with the xs_reset_watches() call
      in xs_init().
      
      We tend to get away with this on Xen versions below 4.0 because we avoid
      calling xs_reset_watches() anyway, because xenstore might not cope with
      reading a non-existent key. And newer Xen *does* have the vector
      callback support, so we rarely fall back to INTX/GSI delivery.
      
      To fix it, clean up a bit of the mess of xs_init() and xenbus_probe()
      startup. Call xs_init() directly from xenbus_init() only in the !XS_HVM
      case, deferring it to be called from xenbus_probe() in the XS_HVM case
      instead.
      
      Then fix up the invocation of xenbus_probe() to happen either from its
      device_initcall if the callback is available early enough, or when the
      callback is finally set up. This means that the hack of calling
      xenbus_probe() from a workqueue after the first interrupt, or directly
      from the PCI platform device setup, is no longer needed.
      
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Link: https://lore.kernel.org/r/20210113132606.422794-2-dwmw2@infradead.org
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>

neato: graph is too large for cairo-renderer bitmaps. Scaling by 0.910397 to fit
pnmtopng: 105 colors found
Revision graph left in /home/logs/results/bisect/linux-5.4/test-armhf-armhf-libvirt.guest-start.{dot,ps,png,html,svg}.
----------------------------------------
159469: tolerable FAIL

flight 159469 linux-5.4 real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/159469/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-armhf-armhf-libvirt     14 guest-start             fail baseline untested


jobs:
 build-armhf-libvirt                                          pass    
 test-armhf-armhf-libvirt                                     fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Fri Feb 19 06:24:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 06:24:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86778.163162 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCzDV-0003lC-L5; Fri, 19 Feb 2021 06:24:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86778.163162; Fri, 19 Feb 2021 06:24:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lCzDV-0003l5-I7; Fri, 19 Feb 2021 06:24:41 +0000
Received: by outflank-mailman (input) for mailman id 86778;
 Fri, 19 Feb 2021 06:24:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCzDU-0003kx-Mu; Fri, 19 Feb 2021 06:24:40 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCzDU-0001pa-Ik; Fri, 19 Feb 2021 06:24:40 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lCzDU-0002RV-9X; Fri, 19 Feb 2021 06:24:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lCzDU-0000nM-93; Fri, 19 Feb 2021 06:24:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ZZLonCwKtJDuM3P/nQtc+wEAPkpBbco8gFNIGzJrNwI=; b=Km26v8e2fBM4a2CQbOpB/AKMDn
	VA1FGxY3KFoBHjrike9V9p6wq/sUBe/dhUBKChJxVQGhF5NfHASnegWalMf6R29qvg4cqRRPHCOWg
	pdFyy548rdZC0pSvBfVNIIKyvpPRovEgGDAqyljyA/rEM2lYh4ATwc82Pcqtu1Q0VT4g=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159453-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 159453: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-arm64-arm64-libvirt-xsm:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=7a4133feaf42000923eb9d84badb6b171625f137
X-Osstest-Versions-That:
    xen=3b1cc15f1931ba56d0ee256fe9bfe65509733b27
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 19 Feb 2021 06:24:40 +0000

flight 159453 xen-unstable real [real]
flight 159473 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/159453/
http://logs.test-lab.xenproject.org/osstest/logs/159473/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-libvirt-xsm 18 guest-start/debian.repeat fail pass in 159473-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 159424
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 159424
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 159424
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 159424
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 159424
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 159424
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 159424
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 159424
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 159424
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 159424
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 159424
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  7a4133feaf42000923eb9d84badb6b171625f137
baseline version:
 xen                  3b1cc15f1931ba56d0ee256fe9bfe65509733b27

Last test of basis   159424  2021-02-16 17:37:44 Z    2 days
Testing same since   159453  2021-02-18 03:39:33 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Julien Grall <jgrall@amazon.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs



Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   3b1cc15f19..7a4133feaf  7a4133feaf42000923eb9d84badb6b171625f137 -> master


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 08:10:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 08:10:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86795.163189 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD0rF-0005cM-TR; Fri, 19 Feb 2021 08:09:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86795.163189; Fri, 19 Feb 2021 08:09:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD0rF-0005cF-Q3; Fri, 19 Feb 2021 08:09:49 +0000
Received: by outflank-mailman (input) for mailman id 86795;
 Fri, 19 Feb 2021 08:09:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TOKe=HV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lD0rF-0005br-96
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 08:09:49 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2c365cdf-9d6c-4e2d-a2ad-dfde9c35b33d;
 Fri, 19 Feb 2021 08:09:48 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5AEACAC69;
 Fri, 19 Feb 2021 08:09:47 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2c365cdf-9d6c-4e2d-a2ad-dfde9c35b33d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613722187; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=+Mj267HPvJ4mQpGRW6TYWARtkHTN06xccO011IR9F90=;
	b=HeQ2c+LNm5BbukyHw5AP3xGAfsdLZ4hXalf38/rYH8QK+Aly1KUufg1xbkSCCxwqHDVJTl
	fPIW+k86eDCVQ7fe41/1qF/axBJcOz8ixxQQ6rQra8qEMBX1MUv1esdtINF+UKWWF9lX4O
	L/JL+uFNr6HQWOSBjf3PFaeiVJ5gSr4=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86/EFI: suppress GNU ld 2.36'es creation of base relocs
Message-ID: <6ce5b1a7-d7c2-c30c-ad78-233379ea130b@suse.com>
Date: Fri, 19 Feb 2021 09:09:43 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

All of a sudden ld creates base relocations itself for PE
executables - as a result we now have two of them for every entity to
be relocated. While we will likely want to use this down the road, it
doesn't work quite right yet in corner cases, so rather than suppressing
our own way of creating the relocations we need to tell ld to avoid
doing so.

Probe whether --disable-reloc-section (which was introduced by the same
commit making relocation generation the default) is recognized by ld's PE
emulation, and use the option if so. (To limit redundancy, move the first
part of setting EFI_LDFLAGS earlier, and use it already while probing.)

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -123,8 +123,13 @@ ifneq ($(efi-y),)
 # Check if the compiler supports the MS ABI.
 export XEN_BUILD_EFI := $(shell $(CC) $(XEN_CFLAGS) -c efi/check.c -o efi/check.o 2>/dev/null && echo y)
 # Check if the linker supports PE.
-XEN_BUILD_PE := $(if $(XEN_BUILD_EFI),$(shell $(LD) -mi386pep --subsystem=10 -S -o efi/check.efi efi/check.o 2>/dev/null && echo y))
+EFI_LDFLAGS = $(patsubst -m%,-mi386pep,$(XEN_LDFLAGS)) --subsystem=10 --strip-debug
+XEN_BUILD_PE := $(if $(XEN_BUILD_EFI),$(shell $(LD) $(EFI_LDFLAGS) -o efi/check.efi efi/check.o 2>/dev/null && echo y))
 CFLAGS-$(XEN_BUILD_EFI) += -DXEN_BUILD_EFI
+# Check if the linker produces fixups in PE by default (we need to disable it doing so for now).
+XEN_NO_PE_FIXUPS := $(if $(XEN_BUILD_EFI), \
+                         $(shell $(LD) $(EFI_LDFLAGS) --disable-reloc-section -o efi/check.efi efi/check.o 2>/dev/null && \
+                                 echo --disable-reloc-section))
 endif
 
 ALL_OBJS := $(BASEDIR)/arch/x86/boot/built_in.o $(BASEDIR)/arch/x86/efi/built_in.o $(ALL_OBJS)
@@ -177,8 +182,7 @@ note.o: $(TARGET)-syms
 		--rename-section=.data=.note.gnu.build-id -S $@.bin $@
 	rm -f $@.bin
 
-EFI_LDFLAGS = $(patsubst -m%,-mi386pep,$(XEN_LDFLAGS)) --subsystem=10
-EFI_LDFLAGS += --image-base=$(1) --stack=0,0 --heap=0,0 --strip-debug
+EFI_LDFLAGS += --image-base=$(1) --stack=0,0 --heap=0,0 $(XEN_NO_PE_FIXUPS)
 EFI_LDFLAGS += --section-alignment=0x200000 --file-alignment=0x20
 EFI_LDFLAGS += --major-image-version=$(XEN_VERSION)
 EFI_LDFLAGS += --minor-image-version=$(XEN_SUBVERSION)


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 08:29:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 08:29:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86800.163201 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD19t-0007bA-Hg; Fri, 19 Feb 2021 08:29:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86800.163201; Fri, 19 Feb 2021 08:29:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD19t-0007b3-EX; Fri, 19 Feb 2021 08:29:05 +0000
Received: by outflank-mailman (input) for mailman id 86800;
 Fri, 19 Feb 2021 08:29:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TOKe=HV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lD19s-0007ay-8W
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 08:29:04 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ed4b9576-fe56-4e20-bc07-d9b151e47382;
 Fri, 19 Feb 2021 08:28:55 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 77AA6AC69;
 Fri, 19 Feb 2021 08:28:54 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ed4b9576-fe56-4e20-bc07-d9b151e47382
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613723334; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=oxrkD2bmNOcTxTC61FFJFdIn89f2Nu1xMQpb4dU6cWQ=;
	b=pXP0GGm2vKhd+ZXSbAsWySM5VGswYb24Yf2gNTBzALa4QPYt4e/7ZKUVyK0aSjujLdNKK2
	gHT6dQDQH1paafFM4Arq9EoiEj2RLFywm+hsQCh2DZgAgoxyEtzhvCBeWoqh0Tp89vpPu7
	RW12pIcJODY+Nq67dFYiKxkzcaafzKw=
Subject: Re: [PATCH] firmware: don't build hvmloader if it is not needed
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>, xen-devel@lists.xenproject.org,
 cardoe@cardoe.com, andrew.cooper3@citrix.com, wl@xen.org,
 iwj@xenproject.org, anthony.perard@citrix.com,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
References: <20210213020540.27894-1-sstabellini@kernel.org>
 <20210213135056.GA6191@mail-itl>
 <4d9200cd-bd4b-e429-5c96-7a4399bb00b4@suse.com>
 <alpine.DEB.2.21.2102161016000.3234@sstabellini-ThinkPad-T480s>
 <5a574326-9560-e771-b84f-9d4f348b7f5f@suse.com>
 <alpine.DEB.2.21.2102171529460.3234@sstabellini-ThinkPad-T480s>
 <416e26b7-0e24-a9ee-6f9a-732f77f7e0cc@suse.com>
 <alpine.DEB.2.21.2102181737310.3234@sstabellini-ThinkPad-T480s>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <3723a430-e7de-017a-294f-4c3fdb35da51@suse.com>
Date: Fri, 19 Feb 2021 09:28:54 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2102181737310.3234@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 19.02.2021 02:42, Stefano Stabellini wrote:
> OK it took me a lot longer than expected (I have never had the dubious
> pleasure of working with autoconf before) but the following seems to
> work, tested on both Alpine Linux and Debian Unstable. Of course I had
> to run autoreconf first.
> 
> 
> diff --git a/config/Tools.mk.in b/config/Tools.mk.in
> index 48bd9ab731..d5e4f1679f 100644
> --- a/config/Tools.mk.in
> +++ b/config/Tools.mk.in
> @@ -50,6 +50,7 @@ CONFIG_OVMF         := @ovmf@
>  CONFIG_ROMBIOS      := @rombios@
>  CONFIG_SEABIOS      := @seabios@
>  CONFIG_IPXE         := @ipxe@
> +CONFIG_HVMLOADER    := @hvmloader@
>  CONFIG_QEMU_TRAD    := @qemu_traditional@
>  CONFIG_QEMU_XEN     := @qemu_xen@
>  CONFIG_QEMUU_EXTRA_ARGS:= @EXTRA_QEMUU_CONFIGURE_ARGS@
> diff --git a/tools/Makefile b/tools/Makefile
> index 757a560be0..6cff5766f3 100644
> --- a/tools/Makefile
> +++ b/tools/Makefile
> @@ -14,7 +14,7 @@ SUBDIRS-y += examples
>  SUBDIRS-y += hotplug
>  SUBDIRS-y += xentrace
>  SUBDIRS-$(CONFIG_XCUTILS) += xcutils
> -SUBDIRS-$(CONFIG_X86) += firmware
> +SUBDIRS-$(CONFIG_HVMLOADER) += firmware

But there are more subdirs under firmware/ than just hvmloader.
In particular you'd now also skip building the shim if hvmloader
was disabled.

> --- a/tools/configure.ac
> +++ b/tools/configure.ac
> @@ -307,6 +307,10 @@ AC_ARG_VAR([AWK], [Path to awk tool])
>  
>  # Checks for programs.
>  AC_PROG_CC
> +AC_LANG(C)
> +AC_LANG_CONFTEST([AC_LANG_SOURCE([[int main() { return 0;}]])])
> +AS_IF([gcc -m32 conftest.c -o - 2>/dev/null], [hvmloader=y], [AC_MSG_WARN(hvmloader build disabled as the compiler cannot build 32bit binaries)])
> +AC_SUBST(hvmloader)

I'm puzzled: "gcc -m32" looked to work fine on its own. I suppose
the above fails at the linking stage, but that's not what we care
about (we don't link with any system libraries). Instead, as said,
you want to check "gcc -m32 -c" produces correct code, in
particular with sizeof(uint64_t) being 8. Of course all of this
would be easier if their headers at least caused some form of
error, instead of silently causing bad code to be generated.

The way you do it, someone simply not having 32-bit C libraries
installed would then also have hvmloader build disabled, even if
their compiler and headers are fine to use.

Also I don't think "-o -" does what you want - it'll produce a
binary named "-" (if compilation and linking succeed), while I
suppose what you want is to discard the output (i.e. probably
"-o /dev/null"). Albeit even that doesn't look to be the commonly
used approach - a binary named "conftest" would normally be
specified as the output, according to other configure.ac I've
looked at. Such tests then have a final "rm -f conftest*".

Jan


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 08:46:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 08:46:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86806.163213 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD1Qx-0001Ab-4c; Fri, 19 Feb 2021 08:46:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86806.163213; Fri, 19 Feb 2021 08:46:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD1Qx-0001AU-0s; Fri, 19 Feb 2021 08:46:43 +0000
Received: by outflank-mailman (input) for mailman id 86806;
 Fri, 19 Feb 2021 08:46:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TOKe=HV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lD1Qv-0001AP-Vx
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 08:46:42 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id be2a6f6e-ec37-43c3-92e6-fb57a8345150;
 Fri, 19 Feb 2021 08:46:40 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 18929ACCF;
 Fri, 19 Feb 2021 08:46:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: be2a6f6e-ec37-43c3-92e6-fb57a8345150
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613724400; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=r3RyBAYQCeDYzG/Exeg8ZGxCTU7G67wdtJL3rkXfd9g=;
	b=BPxy0O2Pk5cCHPNBxhDbSu95YAqmx+wnOPtTyDhfvAj81AEA8r9T9b53xx6DiWoKoCUPI1
	fnyeq4O9Ngk0832No9vOxBKgrqoWmj/7W5CxMuCWIp6emOYwpibOeJTarK+zFgu68rNYlt
	VOhrAnVZQ4wTq4M9YZwjoZ5rSBCbmjo=
Subject: Re: [for-4.15][PATCH v3 3/3] xen/iommu: x86: Harden the IOMMU
 page-table allocator
To: Julien Grall <julien@xen.org>
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Paul Durrant <paul@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210217142458.3769-1-julien@xen.org>
 <20210217142458.3769-4-julien@xen.org>
 <cf303aee-3d89-5608-f358-6bef5c32d706@suse.com>
 <51618338-daff-5b9a-5214-e0788d95992b@xen.org>
 <ee3d628e-a369-fddc-4824-e860ebabe8af@suse.com>
 <96971bbb-05ec-7df0-a8d7-931cc0b41a77@xen.org>
 <141ea545-3725-5305-d352-057ff7c70c4f@suse.com>
 <6e467ed0-34f1-498d-a9ce-7e0f2e606033@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <f3697524-3565-2cbc-29f1-8ffaf1c772fb@suse.com>
Date: Fri, 19 Feb 2021 09:46:40 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <6e467ed0-34f1-498d-a9ce-7e0f2e606033@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 18.02.2021 18:41, Julien Grall wrote:
> 
> 
> On 18/02/2021 17:04, Jan Beulich wrote:
>> On 18.02.2021 14:19, Julien Grall wrote:
>>>
>>>
>>> On 18/02/2021 13:10, Jan Beulich wrote:
>>>> On 17.02.2021 17:29, Julien Grall wrote:
>>>>> On 17/02/2021 15:13, Jan Beulich wrote:
>>>>>> On 17.02.2021 15:24, Julien Grall wrote:
>>>>>>> --- a/xen/drivers/passthrough/x86/iommu.c
>>>>>>> +++ b/xen/drivers/passthrough/x86/iommu.c
>>>>>>> @@ -149,6 +149,13 @@ int arch_iommu_domain_init(struct domain *d)
>>>>>>>  
>>>>>>>  void arch_iommu_domain_destroy(struct domain *d)
>>>>>>>  {
>>>>>>> +    /*
>>>>>>> +     * There should be not page-tables left allocated by the time the
>>>>>> Nit: s/not/no/ ?
>>>>>>
>>>>>>> +     * domain is destroyed. Note that arch_iommu_domain_destroy() is
>>>>>>> +     * domain is destroyed. Note that arch_iommu_domain_destroy() is
>>>>>>> +     * called unconditionally, so pgtables may be uninitialized.
>>>>>>> +     */
>>>>>>> +    ASSERT(dom_iommu(d)->platform_ops == NULL ||
>>>>>>> +           page_list_empty(&dom_iommu(d)->arch.pgtables.list));
>>>>>>>     }
>>>>>>>     
>>>>>>>     static bool __hwdom_init hwdom_iommu_map(const struct domain *d,
>>>>>>> @@ -279,6 +286,9 @@ int iommu_free_pgtables(struct domain *d)
>>>>>>>          */
>>>>>>>         hd->platform_ops->clear_root_pgtable(d);
>>>>>>>     
>>>>>>> +    /* After this barrier no new page allocations can occur. */
>>>>>>> +    spin_barrier(&hd->arch.pgtables.lock);
>>>>>>
>>>>>> Didn't patch 2 utilize the call to ->clear_root_pgtable() itself as
>>>>>> the barrier? Why introduce another one (with a similar comment)
>>>>>> explicitly now?
>>>>> The barriers act differently: one will gate against any IOMMU page-table
>>>>> modification, the other will gate against allocation.
>>>>>
>>>>> There is no guarantee that the former will prevent the latter.
>>>>
>>>> Oh, right - different locks. I got confused here because in both
>>>> cases the goal is to prevent allocations.
>>>>
>>>>>>> @@ -315,9 +326,29 @@ struct page_info *iommu_alloc_pgtable(struct domain *d)
>>>>>>>         unmap_domain_page(p);
>>>>>>>     
>>>>>>>         spin_lock(&hd->arch.pgtables.lock);
>>>>>>> -    page_list_add(pg, &hd->arch.pgtables.list);
>>>>>>> +    /*
>>>>>>> +     * The IOMMU page-tables are freed when relinquishing the domain, but
>>>>>>> +     * nothing prevents allocations from happening afterwards. There is
>>>>>>> +     * no valid reason to continue updating the IOMMU page-tables while the
>>>>>>> +     * domain is dying.
>>>>>>> +     *
>>>>>>> +     * So prevent page-table allocation when the domain is dying.
>>>>>>> +     *
>>>>>>> +     * We rely on &hd->arch.pgtables.lock to synchronize d->is_dying.
>>>>>>> +     */
>>>>>>> +    if ( likely(!d->is_dying) )
>>>>>>> +    {
>>>>>>> +        alive = true;
>>>>>>> +        page_list_add(pg, &hd->arch.pgtables.list);
>>>>>>> +    }
>>>>>>>         spin_unlock(&hd->arch.pgtables.lock);
>>>>>>>     
>>>>>>> +    if ( unlikely(!alive) )
>>>>>>> +    {
>>>>>>> +        free_domheap_page(pg);
>>>>>>> +        pg = NULL;
>>>>>>> +    }
>>>>>>> +
>>>>>>>         return pg;
>>>>>>>     }
>>>>>>
>>>>>> As before I'm concerned of this forcing error paths to be taken
>>>>>> elsewhere, in case an allocation still happens (e.g. from unmap
>>>>>> once super page mappings are supported). Considering some of the
>>>>>> error handling in the IOMMU code is to invoke domain_crash(), it
>>>>>> would be quite unfortunate if we ended up crashing a domain
>>>>>> while it is being cleaned up after.
>>>>>
>>>>> It is unfortunate, but I think this is better than having to leak page
>>>>> tables.
>>>>>
>>>>>>
>>>>>> Additionally, the (at present still hypothetical) unmap case, if
>>>>>> failing because of the change here, would then again chance to
>>>>>> leave mappings in place while the underlying pages get freed. As
>>>>>> this would likely require an XSA, the change doesn't feel like
>>>>>> "hardening" to me.
>>>>>
>>>>> I would agree with this if memory allocations could never fail. That's
>>>>> not that case and will become worse as we use IOMMU pool.
>>>>>
>>>>> Do you have callers in mind that don't check the return value of iommu_unmap()?
>>>>
>>>> The function is marked __must_check, so there won't be any direct
>>>> callers ignoring errors (albeit I may be wrong here - we used to
>>>> have cases where we simply suppressed the resulting compiler
>>>> diagnostic, without really handling errors; not sure if all of
>>>> these are gone by now). Risks might be elsewhere.
>>>
>>> But this is not a new risk. So I don't understand why you think my patch
>>> is the one that may lead to an XSA in the future.
>>
>> I didn't mean to imply it would _lead_ to an XSA (you're
>> right that the problem was there already before), but the term
>> "harden" suggests to me that the patch aims at eliminating
>> possible conditions.
> 
> It eliminates the risk that someone inadvertently calls 
> iommu_alloc_pgtable() when the domain is dying. If this happens 
> after the page tables have been freed, then we would end up leaking memory.
> 
>> IOW the result here looks to me as if it
>> would yield a false sense of safety.
> 
> So you are concerned about the wording rather than the code itself. Is 
> that correct?

In a way, yes. First of all I'd like us to settle on what to do
with late unmap requests, for 4.15 (and if need be longer term).

Jan

> If so, how about "xen/iommu: Make the IOMMU page-table allocator 
> slightly firmer"?
> 
> Cheers,
> 



From xen-devel-bounces@lists.xenproject.org Fri Feb 19 08:49:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 08:49:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86808.163225 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD1U6-0001Qu-Ie; Fri, 19 Feb 2021 08:49:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86808.163225; Fri, 19 Feb 2021 08:49:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD1U6-0001Qn-FT; Fri, 19 Feb 2021 08:49:58 +0000
Received: by outflank-mailman (input) for mailman id 86808;
 Fri, 19 Feb 2021 08:49:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TOKe=HV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lD1U4-0001Qi-9b
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 08:49:56 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e7f6ec65-a373-4d26-97bb-d7e28b308470;
 Fri, 19 Feb 2021 08:49:55 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A25D8AD29;
 Fri, 19 Feb 2021 08:49:54 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e7f6ec65-a373-4d26-97bb-d7e28b308470
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613724594; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=WwfJzREW3w3iAgJqzedGFL8X5T4KF6M7SnNM51vcU58=;
	b=rjeZ1ZffG+WtIWL+4gazAEU9QcXAo6xQksKJI6IIFLBDRIO6ZuIEDwHn77ZS1HSsSS+zeB
	AMDjZ0gcHOcMl7jaQQFhhKGP1C/eR27CSCxgPetez8x2pCXlaFxC02X8VX70pYJGoX5bYs
	xNemxZv5GQ3zOGSavNlWDwvZmj+/ybY=
Subject: Re: [for-4.15][PATCH v3 2/3] xen/x86: iommu: Ignore IOMMU mapping
 requests when a domain is dying
To: Julien Grall <julien@xen.org>
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>, xen-devel@lists.xenproject.org,
 Paul Durrant <paul@xen.org>
References: <20210217142458.3769-1-julien@xen.org>
 <20210217142458.3769-3-julien@xen.org>
 <20f68b12-a767-b1d3-a3dd-9f92172def5f@suse.com>
 <d1116875-dd98-8f51-9113-314c1a62b4bf@xen.org>
 <268f1991-bacb-c888-04c0-0e4cf670b6b5@suse.com>
 <c713f440-8c3d-42fe-1d71-56b23e53a495@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <aa672eaa-e828-968b-73bf-dd4249d5de16@suse.com>
Date: Fri, 19 Feb 2021 09:49:54 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <c713f440-8c3d-42fe-1d71-56b23e53a495@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 18.02.2021 14:25, Julien Grall wrote:
> Hi,
> 
> On 18/02/2021 13:05, Jan Beulich wrote:
>> On 17.02.2021 17:07, Julien Grall wrote:
>>> On 17/02/2021 15:01, Jan Beulich wrote:
>>>> On 17.02.2021 15:24, Julien Grall wrote:
>>>>> From: Julien Grall <jgrall@amazon.com>
>>>>>
>>>>> The new x86 IOMMU page-tables allocator will release the pages when
>>>>> relinquishing the domain resources. However, this is not sufficient
>>>>> when the domain is dying because nothing prevents page-tables from
>>>>> being allocated.
>>>>>
>>>>> Currently page-table allocations can only happen from iommu_map(). As
>>>>> the domain is dying, there is no good reason to continue to modify the
>>>>> IOMMU page-tables.
>>>>
>>>> While I agree this to be the case right now, I'm not sure it is a
>>>> good idea to build on it (in that you leave the unmap paths
>>>> untouched).
>>>
>>> I don't build on that assumption. See next patch.
>>
>> Yet as said there that patch makes unmapping perhaps more fragile,
>> by introducing a latent error source into the path.
> 
> I still don't see what latent error my patch will introduce. Allocations 
> of page-tables are not guaranteed to succeed.
> 
> So are you implying that a code may rely on the page allocation to succeed?

I'm implying that failure to split a superpage may have unknown
consequences. Since we make no use of superpages when not
sharing page tables, I call this a latent issue which may go
unnoticed for quite some time once no longer latent.
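For reference, the allocation-gating pattern being debated in this series (check the dying flag under the page-table lock; refuse the allocation and free the page when the domain is dying) can be modeled in a few lines. This is a deliberately simplified illustration with hypothetical names, not Xen's actual code; the lock is indicated only in comments:

```c
/* Simplified model of gating page-table allocations on a dying domain. */
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

struct page { struct page *next; };

struct domain {
    bool is_dying;            /* set (under the lock) when teardown starts */
    struct page *pgtables;    /* singly-linked list of tracked page-tables */
};

/* Returns the new page, or NULL if the domain is already dying (or on
 * allocation failure). Once is_dying is observed under the lock, no new
 * page can ever be added to the list, so freeing the list is safe. */
static struct page *alloc_pgtable(struct domain *d)
{
    struct page *pg = malloc(sizeof(*pg));
    bool alive = false;

    if ( !pg )
        return NULL;

    /* spin_lock(&d->pgtables_lock); */
    if ( !d->is_dying )
    {
        alive = true;
        pg->next = d->pgtables;
        d->pgtables = pg;
    }
    /* spin_unlock(&d->pgtables_lock); */

    if ( !alive )
    {
        /* Free outside the lock, mirroring the patch under review. */
        free(pg);
        pg = NULL;
    }
    return pg;
}
```

The caller therefore has to treat a NULL return on the map path as an error, which is exactly the condition the discussion above worries about on late (un)map requests.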

Jan


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 08:56:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 08:56:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86809.163236 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD1aV-0002Jb-9K; Fri, 19 Feb 2021 08:56:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86809.163236; Fri, 19 Feb 2021 08:56:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD1aV-0002JU-6Q; Fri, 19 Feb 2021 08:56:35 +0000
Received: by outflank-mailman (input) for mailman id 86809;
 Fri, 19 Feb 2021 08:56:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TOKe=HV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lD1aT-0002JP-QJ
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 08:56:33 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 91cd361b-6633-4316-9f5f-11fa1442adcf;
 Fri, 19 Feb 2021 08:56:32 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id CB986AED2;
 Fri, 19 Feb 2021 08:56:31 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 91cd361b-6633-4316-9f5f-11fa1442adcf
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613724991; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ITrIOhijD+0MgNfyXd0YvcHKDVayblTBb3Txi7ML4fk=;
	b=rR4bkhKpFpDpi3jQc1XlnSqgfuMMYh/hfv1rZduUKPP0uHqtEzRaCWRty3Sy+CwxoRJdJo
	KjxnJCnAhy+Sn+IaCFJQAhgC0SJ2w5Kh+Aw89QOXujU6CNMscne+gzimXidt+itmwN3SZt
	JqSYM2ET5xdtcxiRj+eqj0TPGX0wpF4=
Subject: Re: [for-4.15][PATCH v3 2/3] xen/x86: iommu: Ignore IOMMU mapping
 requests when a domain is dying
To: paul@xen.org
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>, xen-devel@lists.xenproject.org,
 Julien Grall <julien@xen.org>
References: <20210217142458.3769-1-julien@xen.org>
 <20210217142458.3769-3-julien@xen.org>
 <20f68b12-a767-b1d3-a3dd-9f92172def5f@suse.com>
 <d1116875-dd98-8f51-9113-314c1a62b4bf@xen.org>
 <268f1991-bacb-c888-04c0-0e4cf670b6b5@suse.com>
 <14f21eac-7d5d-9dda-18d2-206614e91339@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d818154a-2176-2607-b893-6e77e57e5c82@suse.com>
Date: Fri, 19 Feb 2021 09:56:32 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <14f21eac-7d5d-9dda-18d2-206614e91339@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 18.02.2021 15:00, Paul Durrant wrote:
> On 18/02/2021 13:05, Jan Beulich wrote:
>> On 17.02.2021 17:07, Julien Grall wrote:
>>> On 17/02/2021 15:01, Jan Beulich wrote:
>>>> On 17.02.2021 15:24, Julien Grall wrote:
>>>>> From: Julien Grall <jgrall@amazon.com>
>>>>>
>>>>> The new x86 IOMMU page-tables allocator will release the pages when
>>>>> relinquishing the domain resources. However, this is not sufficient
>>>>> when the domain is dying because nothing prevents page-tables from
>>>>> being allocated.
>>>>>
>>>>> Currently page-table allocations can only happen from iommu_map(). As
>>>>> the domain is dying, there is no good reason to continue to modify the
>>>>> IOMMU page-tables.
>>>>
>>>> While I agree this to be the case right now, I'm not sure it is a
>>>> good idea to build on it (in that you leave the unmap paths
>>>> untouched).
>>>
>>> I don't build on that assumption. See next patch.
>>
>> Yet as said there that patch makes unmapping perhaps more fragile,
>> by introducing a latent error source into the path.
>>
>>>> Imo there's a fair chance this would be overlooked at
>>>> the point super page mappings get introduced (which has been long
>>>> overdue), and I thought prior discussion had lead to a possible
>>>> approach without risking use-after-free due to squashed unmap
>>>> requests.
>>>
>>> I know you suggested to zap the root page-tables... However, I don't
>>> think this is 4.15 material and you agree with this (you were the one
>>> pointed out that out).
>>
>> Paul - do you have any thoughts here? Outright zapping isn't
>> going to work, we'd need to switch to quarantine page tables at
>> the very least to prevent the issue with babbling devices. But
>> that still seems better to me than the introduction of a latent
>> issue on the unmap paths.
>>
> 
> I've not really been following this. AFAICT clear_root_pgtable() does 
> not actually mess with any context entries so the device view of the 
> tables won't change, will it?

Correct. Elsewhere I said that "zapping" here has two meanings,
one is what Julien does and the other is what I mean and what I
understand you also refer to - to actually replace the mappings
referenced by a context entry as soon as we no longer guarantee
to update them correctly.

My concern with not touching the unmap paths is that if there
was any "best effort" unmap left anywhere in the tree, we may
still end up with a use-after-free when late unmaps can now be
made to fail by failing late allocation attempts.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 08:58:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 08:58:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86812.163248 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD1bw-0002Ql-Lh; Fri, 19 Feb 2021 08:58:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86812.163248; Fri, 19 Feb 2021 08:58:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD1bw-0002Qe-Ia; Fri, 19 Feb 2021 08:58:04 +0000
Received: by outflank-mailman (input) for mailman id 86812;
 Fri, 19 Feb 2021 08:58:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lD1bv-0002QZ-1p
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 08:58:03 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lD1bu-0004p5-4j; Fri, 19 Feb 2021 08:58:02 +0000
Received: from gw1.octic.net ([81.187.162.82] helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lD1bt-0005uM-QI; Fri, 19 Feb 2021 08:58:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=2r7D/U3uoeookFxvmHy5QwL08Te8ZxOYGJv1LZZeXIM=; b=ajoweDahHJU0SwcliooMI33ZNj
	63QGwjq7P/rOuxYOB01hfY6NY9D1cTg1pUGzZqFgmWWSNwhAZEawDB9/StI/9FFFvrHIPS4mCYvPp
	AgbmW87jC1Gmbt7Y8+dMb0VBTAxrD3j8vaWcjRF0/xKxvoqLANx17ob90RGAgV8l0EO4=;
Subject: Re: [for-4.15][PATCH v3 3/3] xen/iommu: x86: Harden the IOMMU
 page-table allocator
To: Jan Beulich <jbeulich@suse.com>
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Paul Durrant <paul@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210217142458.3769-1-julien@xen.org>
 <20210217142458.3769-4-julien@xen.org>
 <cf303aee-3d89-5608-f358-6bef5c32d706@suse.com>
 <51618338-daff-5b9a-5214-e0788d95992b@xen.org>
 <ee3d628e-a369-fddc-4824-e860ebabe8af@suse.com>
 <96971bbb-05ec-7df0-a8d7-931cc0b41a77@xen.org>
 <141ea545-3725-5305-d352-057ff7c70c4f@suse.com>
 <6e467ed0-34f1-498d-a9ce-7e0f2e606033@xen.org>
 <f3697524-3565-2cbc-29f1-8ffaf1c772fb@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <4b05a1f5-bf61-942a-ff4c-40c6c9dd079c@xen.org>
Date: Fri, 19 Feb 2021 08:57:59 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <f3697524-3565-2cbc-29f1-8ffaf1c772fb@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 19/02/2021 08:46, Jan Beulich wrote:
> On 18.02.2021 18:41, Julien Grall wrote:
>>
>>
>> On 18/02/2021 17:04, Jan Beulich wrote:
>>> On 18.02.2021 14:19, Julien Grall wrote:
>>>>
>>>>
>>>> On 18/02/2021 13:10, Jan Beulich wrote:
>>>>> On 17.02.2021 17:29, Julien Grall wrote:
>>>>>> On 17/02/2021 15:13, Jan Beulich wrote:
>>>>>>> On 17.02.2021 15:24, Julien Grall wrote:
>>>>>>>> --- a/xen/drivers/passthrough/x86/iommu.c
>>>>>>>> +++ b/xen/drivers/passthrough/x86/iommu.c
>>>>>>>> @@ -149,6 +149,13 @@ int arch_iommu_domain_init(struct domain *d)
>>>>>>>>  
>>>>>>>>  void arch_iommu_domain_destroy(struct domain *d)
>>>>>>>>  {
>>>>>>>> +    /*
>>>>>>>> +     * There should be not page-tables left allocated by the time the
>>>>>>> Nit: s/not/no/ ?
>>>>>>>
>>>>>>>> +     * domain is destroyed. Note that arch_iommu_domain_destroy() is
>>>>>>>> +     * called unconditionally, so pgtables may be uninitialized.
>>>>>>>> +     */
>>>>>>>> +    ASSERT(dom_iommu(d)->platform_ops == NULL ||
>>>>>>>> +           page_list_empty(&dom_iommu(d)->arch.pgtables.list));
>>>>>>>>      }
>>>>>>>>      
>>>>>>>>      static bool __hwdom_init hwdom_iommu_map(const struct domain *d,
>>>>>>>> @@ -279,6 +286,9 @@ int iommu_free_pgtables(struct domain *d)
>>>>>>>>           */
>>>>>>>>          hd->platform_ops->clear_root_pgtable(d);
>>>>>>>>      
>>>>>>>> +    /* After this barrier no new page allocations can occur. */
>>>>>>>> +    spin_barrier(&hd->arch.pgtables.lock);
>>>>>>>
>>>>>>> Didn't patch 2 utilize the call to ->clear_root_pgtable() itself as
>>>>>>> the barrier? Why introduce another one (with a similar comment)
>>>>>>> explicitly now?
>>>>>> The barriers act differently: one will gate against any IOMMU page-table
>>>>>> modification. The other one will gate against allocation.
>>>>>>
>>>>>> There is no guarantee that the former will prevent the latter.
>>>>>
>>>>> Oh, right - different locks. I got confused here because in both
>>>>> cases the goal is to prevent allocations.
>>>>>
>>>>>>>> @@ -315,9 +326,29 @@ struct page_info *iommu_alloc_pgtable(struct domain *d)
>>>>>>>>          unmap_domain_page(p);
>>>>>>>>      
>>>>>>>>          spin_lock(&hd->arch.pgtables.lock);
>>>>>>>> -    page_list_add(pg, &hd->arch.pgtables.list);
>>>>>>>> +    /*
>>>>>>>> +     * The IOMMU page-tables are freed when relinquishing the domain, but
>>>>>>>> +     * nothing prevents allocation from happening afterwards. There is no
>>>>>>>> +     * valid reason to continue to update the IOMMU page-tables while the
>>>>>>>> +     * domain is dying.
>>>>>>>> +     *
>>>>>>>> +     * So prevent page-table allocation when the domain is dying.
>>>>>>>> +     *
>>>>>>>> +     * We rely on &hd->arch.pgtables.lock to synchronize d->is_dying.
>>>>>>>> +     */
>>>>>>>> +    if ( likely(!d->is_dying) )
>>>>>>>> +    {
>>>>>>>> +        alive = true;
>>>>>>>> +        page_list_add(pg, &hd->arch.pgtables.list);
>>>>>>>> +    }
>>>>>>>>          spin_unlock(&hd->arch.pgtables.lock);
>>>>>>>>      
>>>>>>>> +    if ( unlikely(!alive) )
>>>>>>>> +    {
>>>>>>>> +        free_domheap_page(pg);
>>>>>>>> +        pg = NULL;
>>>>>>>> +    }
>>>>>>>> +
>>>>>>>>          return pg;
>>>>>>>>      }
>>>>>>>
>>>>>>> As before I'm concerned of this forcing error paths to be taken
>>>>>>> elsewhere, in case an allocation still happens (e.g. from unmap
>>>>>>> once super page mappings are supported). Considering some of the
>>>>>>> error handling in the IOMMU code is to invoke domain_crash(), it
>>>>>>> would be quite unfortunate if we ended up crashing a domain
>>>>>>> while it is being cleaned up after.
>>>>>>
>>>>>> It is unfortunate, but I think this is better than having to leak page
>>>>>> tables.
>>>>>>
>>>>>>>
>>>>>>> Additionally, the (at present still hypothetical) unmap case, if
>>>>>>> failing because of the change here, would then again chance to
>>>>>>> leave mappings in place while the underlying pages get freed. As
>>>>>>> this would likely require an XSA, the change doesn't feel like
>>>>>>> "hardening" to me.
>>>>>>
>>>>>> I would agree with this if memory allocations could never fail. That's
>>>>>> not the case, and it will become worse as we use the IOMMU pool.
>>>>>>
>>>>>> Do you have callers in mind that don't check the return value of
>>>>>> iommu_unmap()?
>>>>>
>>>>> The function is marked __must_check, so there won't be any direct
>>>>> callers ignoring errors (albeit I may be wrong here - we used to
>>>>> have cases where we simply suppressed the resulting compiler
>>>>> diagnostic, without really handling errors; not sure if all of
>>>>> these are gone by now). Risks might be elsewhere.
>>>>
>>>> But this is not a new risk. So I don't understand why you think my patch
>>>> is the one that may lead to an XSA in the future.
>>>
>>> I didn't mean to imply it would _lead_ to an XSA (you're
>>> right that the problem was there already before), but the term
>>> "harden" suggests to me that the patch aims at eliminating
>>> possible conditions.
>>
>> It eliminates the risk that someone inadvertently calls
>> iommu_alloc_pgtable() when the domain is dying. If this happens
>> after the page tables have been freed, then we would end up leaking memory.
>>
>>> IOW the result here looks to me as if it
>>> would yield a false sense of safety.
>>
>> So you are concerned about the wording rather than the code itself. Is
>> that correct?
> 
> In a way, yes. First of all I'd like us to settle on what to do
> with late unmap requests, for 4.15 (and if need be longer term).

iommu_unmap() doesn't allocate memory today and is very unlikely to do 
so for the lifetime of 4.15. So why should we address the late unmap 
requests in 4.15?

At best, this looks to me like introducing a risk in order to fix a so 
far non-existent bug.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 08:59:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 08:59:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86816.163261 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD1dB-0002fx-11; Fri, 19 Feb 2021 08:59:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86816.163261; Fri, 19 Feb 2021 08:59:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD1dA-0002fp-TL; Fri, 19 Feb 2021 08:59:20 +0000
Received: by outflank-mailman (input) for mailman id 86816;
 Fri, 19 Feb 2021 08:59:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LCn1=HV=oracle.com=dan.carpenter@srs-us1.protection.inumbo.net>)
 id 1lD1d9-0002fi-Tl
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 08:59:19 +0000
Received: from userp2120.oracle.com (unknown [156.151.31.85])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 060d340e-b71c-4df6-b871-1de4ae3bc77f;
 Fri, 19 Feb 2021 08:59:19 +0000 (UTC)
Received: from pps.filterd (userp2120.oracle.com [127.0.0.1])
 by userp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 11J8oEwu089650;
 Fri, 19 Feb 2021 08:59:18 GMT
Received: from userp3020.oracle.com (userp3020.oracle.com [156.151.31.79])
 by userp2120.oracle.com with ESMTP id 36p7dnrmq7-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 19 Feb 2021 08:59:18 +0000
Received: from pps.filterd (userp3020.oracle.com [127.0.0.1])
 by userp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 11J8tluC182172;
 Fri, 19 Feb 2021 08:59:16 GMT
Received: from aserv0121.oracle.com (aserv0121.oracle.com [141.146.126.235])
 by userp3020.oracle.com with ESMTP id 36prhvdt6f-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 19 Feb 2021 08:59:16 +0000
Received: from abhmp0008.oracle.com (abhmp0008.oracle.com [141.146.116.14])
 by aserv0121.oracle.com (8.14.4/8.13.8) with ESMTP id 11J8xEaX004380;
 Fri, 19 Feb 2021 08:59:14 GMT
Received: from mwanda (/10.175.194.66) by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Fri, 19 Feb 2021 00:59:14 -0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 060d340e-b71c-4df6-b871-1de4ae3bc77f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=date : from : to : cc
 : subject : message-id : mime-version : content-type; s=corp-2020-01-29;
 bh=hVrBwJZyZrcozmvRMSPGChZCnCSYVBUl6bBNcrGOdYw=;
 b=PvZe8gqiauPCdqpWnwOV8wL4kc3qPa31MxW2NhR56mviODnh5aRcHJX31jyVu4sjt/2u
 g+hE02V+KeooZPLftvwl56ZswAWq/6fTxbNWmsZ2KforuUUJl1bGSRh0E6w13Cd/WS1m
 Y9hG1/mYdAMH5Oe+i5mu/CAtl+/hqaenwL0/I7zLGoxoBsc5bqT9L43KascJEnn09Bl6
 o9ouSESqXurilKQ9QcQ2lUxaFP9B29BqgyZ7+GcV2Ejlus0CnAu+ookRU0BzTuqYdSZJ
 9FyZkgnw5I6eNE5r6EtAw/w7fs1Tjy0WaJGELQ/ROXH8l/4BlBd6dGtmfvA0ukHBmEB3 NA== 
Date: Fri, 19 Feb 2021 11:59:05 +0300
From: Dan Carpenter <dan.carpenter@oracle.com>
To: jbeulich@suse.com
Cc: xen-devel@lists.xenproject.org
Subject: [bug report] xen-blkback: don't "handle" error by BUG()
Message-ID: <YC992dHyignVEe5R@mwanda>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
X-Proofpoint-IMR: 1
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9899 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxlogscore=846 adultscore=0 mlxscore=0
 bulkscore=0 suspectscore=0 malwarescore=0 spamscore=0 phishscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2102190070
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9899 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 bulkscore=0 suspectscore=0 mlxscore=0
 phishscore=0 spamscore=0 adultscore=0 clxscore=1011 impostorscore=0
 priorityscore=1501 lowpriorityscore=0 malwarescore=0 mlxlogscore=783
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2102190069

Hello Jan Beulich,

The patch 5a264285ed1c: "xen-blkback: don't "handle" error by BUG()"
from Feb 15, 2021, leads to the following static checker warning:

	drivers/block/xen-blkback/blkback.c:836 xen_blkbk_map()
	warn: should this be a bitwise negate mask?

drivers/block/xen-blkback/blkback.c
   823           * Now swizzle the MFN in our domain with the MFN from the other domain
   824           * so that when we access vaddr(pending_req,i) it has the contents of
   825           * the page from the other domain.
   826           */
   827          for (seg_idx = last_map, new_map_idx = 0; seg_idx < map_until; seg_idx++) {
   828                  if (!pages[seg_idx]->persistent_gnt) {
   829                          /* This is a newly mapped grant */
   830                          BUG_ON(new_map_idx >= segs_to_map);
   831                          if (unlikely(map[new_map_idx].status != 0)) {
   832                                  pr_debug("invalid buffer -- could not remap it\n");
   833                                  gnttab_page_cache_put(&ring->free_pages,
   834                                                        &pages[seg_idx]->page, 1);
   835                                  pages[seg_idx]->handle = BLKBACK_INVALID_HANDLE;
   836                                  ret |= !ret;

Originally this code was:

	ret |= 1;

Now it's equivalent to:

	if (!ret)
		ret = 1;

This is the second user of this idiom in the kernel.  Both were
introduced in the last month so maybe it's a new trend or something...
Anyway.  False positive warning.

But my main question isn't really related to this patch.  What does
the 1 mean in this context?  I always feel like there should be
documentation when functions return a mix of error codes, zero and one
but there isn't any here.

I have reviewed this and, so far as I can see, setting "ret = 1;" is
always treated exactly the same as an error code by everything.  Every
single place where this is checked just tests whether ret is zero.  The
value is propagated two functions back and then it becomes -EIO.

???

   837                                  goto next;
   838                          }
   839                          pages[seg_idx]->handle = map[new_map_idx].handle;
   840                  } else {
   841                          continue;
   842                  }
   843                  if (use_persistent_gnts &&

regards,
dan carpenter


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 09:24:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 09:24:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86823.163273 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD21H-0005Sq-6l; Fri, 19 Feb 2021 09:24:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86823.163273; Fri, 19 Feb 2021 09:24:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD21H-0005Sj-35; Fri, 19 Feb 2021 09:24:15 +0000
Received: by outflank-mailman (input) for mailman id 86823;
 Fri, 19 Feb 2021 09:24:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lD21F-0005Se-Rv
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 09:24:13 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lD21D-0005Gp-2R; Fri, 19 Feb 2021 09:24:11 +0000
Received: from gw1.octic.net ([81.187.162.82] helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lD21C-0008PI-QB; Fri, 19 Feb 2021 09:24:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=NW4IO4W+Z6u0V6oCvWPiguInxloPgJvQIgvjYCtuh94=; b=5P2KG/i4ZACihexX2Lz89vRI6W
	V04k+E0c2rBMpyqM+tOg5xPzL9WGb1M4HslPRcMHD3wbaBarueiV8+6Z9Q/Pitmn7Z04f4jeF3Fzc
	UdnuGwdJ2131nFj9XQPrin/ZnF1006c1gT0M4DkOxkhYw/1dZRu14OD/aXEB1HQxaUJ0=;
Subject: Re: [for-4.15][PATCH v3 2/3] xen/x86: iommu: Ignore IOMMU mapping
 requests when a domain is dying
To: Jan Beulich <jbeulich@suse.com>
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>, xen-devel@lists.xenproject.org,
 Paul Durrant <paul@xen.org>
References: <20210217142458.3769-1-julien@xen.org>
 <20210217142458.3769-3-julien@xen.org>
 <20f68b12-a767-b1d3-a3dd-9f92172def5f@suse.com>
 <d1116875-dd98-8f51-9113-314c1a62b4bf@xen.org>
 <268f1991-bacb-c888-04c0-0e4cf670b6b5@suse.com>
 <c713f440-8c3d-42fe-1d71-56b23e53a495@xen.org>
 <aa672eaa-e828-968b-73bf-dd4249d5de16@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <223ab610-34bc-c0a8-dfe2-6f54b86e7635@xen.org>
Date: Fri, 19 Feb 2021 09:24:08 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <aa672eaa-e828-968b-73bf-dd4249d5de16@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 19/02/2021 08:49, Jan Beulich wrote:
> On 18.02.2021 14:25, Julien Grall wrote:
>> Hi,
>>
>> On 18/02/2021 13:05, Jan Beulich wrote:
>>> On 17.02.2021 17:07, Julien Grall wrote:
>>>> On 17/02/2021 15:01, Jan Beulich wrote:
>>>>> On 17.02.2021 15:24, Julien Grall wrote:
>>>>>> From: Julien Grall <jgrall@amazon.com>
>>>>>>
>>>>>> The new x86 IOMMU page-tables allocator will release the pages when
>>>>>> relinquishing the domain resources. However, this is not sufficient
>>>>>> when the domain is dying because nothing prevents page-tables from
>>>>>> being allocated.
>>>>>>
>>>>>> Currently page-table allocations can only happen from iommu_map(). As
>>>>>> the domain is dying, there is no good reason to continue to modify the
>>>>>> IOMMU page-tables.
>>>>>
>>>>> While I agree this to be the case right now, I'm not sure it is a
>>>>> good idea to build on it (in that you leave the unmap paths
>>>>> untouched).
>>>>
>>>> I don't build on that assumption. See next patch.
>>>
>>> Yet as said there that patch makes unmapping perhaps more fragile,
>>> by introducing a latent error source into the path.
>>
>> I still don't see what latent error my patch will introduce. Allocations
>> of page-tables are not guaranteed to succeed.
>>
>> So are you implying that code may rely on the page allocation succeeding?
> 
> I'm implying that failure to split a superpage may have unknown
> consequences.

As can a failure to flush the TLBs (see below).

> Since we make no use of superpages when not
> sharing page tables, I call this a latent issue which may go
> unnoticed for quite some time once no longer latent.

And an OOM would go unnoticed for a lot longer, as it should happen 
less often ;).

But I think it is wrong to blame the lower layer for a (latent) bug 
in the upper layer...

Even with your solution to zap page-tables, there is still a risk of 
failure because the TLB flush may fail (the operation can return an error).

So regardless of the solution, we still need callers that function 
correctly.

Anyway, what matters right now is fixing a host crash when running a PV 
guest with passthrough.  Neither patch #3 nor the iommu_unmap() part is 
strictly necessary for 4.15 (as you say, this is latent).

So I would suggest deferring this discussion until post-4.15.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 10:33:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 10:33:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86831.163291 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD35a-0003pD-A9; Fri, 19 Feb 2021 10:32:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86831.163291; Fri, 19 Feb 2021 10:32:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD35a-0003p6-6w; Fri, 19 Feb 2021 10:32:46 +0000
Received: by outflank-mailman (input) for mailman id 86831;
 Fri, 19 Feb 2021 10:32:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lD35Y-0003p1-Uz
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 10:32:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lD35X-0006RO-Sb; Fri, 19 Feb 2021 10:32:43 +0000
Received: from gw1.octic.net ([81.187.162.82] helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lD35X-0005om-Gi; Fri, 19 Feb 2021 10:32:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=8sl1RSejo9EYD+aPu8zswMb1564lhYS/l4h4l2C6a1w=; b=PGWRJvunY5lFUkBIEd/sWjJKiB
	IbrsIO1c6cen4snB65UQfbBvy+WxrMFeCY2B2HEeYU9yYO3H4xlTjBuTo9Us4J22+FcoXStkorZEw
	NoCxuW/G+xZk+YF7Y2qWwEOemyyK4lLyp01ByCdlgLqb31WjcXXwd7xQoEfp24hIcR50=;
Subject: Re: [RFC] xen/arm: introduce XENFEAT_ARM_dom0_iommu
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, Bertrand.Marquis@arm.com,
 Volodymyr_Babchuk@epam.com, rahul.singh@arm.com
References: <alpine.DEB.2.21.2102161333090.3234@sstabellini-ThinkPad-T480s>
 <2d22d5b8-6cda-f27b-e938-4806b65794a5@xen.org>
 <alpine.DEB.2.21.2102171233270.3234@sstabellini-ThinkPad-T480s>
 <0be0196f-5b3f-73f9-5ab7-7a54faabec5c@xen.org>
 <alpine.DEB.2.21.2102180920570.3234@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <98a15b6d-7460-31a0-0b4a-acf035571a17@xen.org>
Date: Fri, 19 Feb 2021 10:32:41 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2102180920570.3234@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 19/02/2021 01:42, Stefano Stabellini wrote:
> On Thu, 18 Feb 2021, Julien Grall wrote:
>> On 17/02/2021 23:54, Stefano Stabellini wrote:
>>> On Wed, 17 Feb 2021, Julien Grall wrote:
>>>> On 17/02/2021 02:00, Stefano Stabellini wrote:
>>> But actually it was always wrong for Linux to enable swiotlb-xen without
>>> checking whether it is 1:1 mapped or not. Today we enable swiotlb-xen in
>>> dom0 and disable it in domU, while we should have enabled swiotlb-xen if
>>> 1:1 mapped no matter dom0/domU. (swiotlb-xen could be useful in a 1:1
>>> mapped domU driver domain.)
>>>
>>>
>>> There is an argument (Andy was making on IRC) that being 1:1 mapped or
>>> not is an important information that Xen should provide to the domain
>>> regardless of anything else.
>>>
>>> So maybe we should add two flags:
>>>
>>> - XENFEAT_direct_mapped
>>> - XENFEAT_not_direct_mapped
>>
>> I am guessing the two flags are to allow Linux to fall back to the default
>> behavior (depending on dom0 vs domU) on older hypervisors. On newer
>> hypervisors, one of these flags would always be set. Is that correct?
> 
> Yes. On a newer hypervisor one of the two would be present and Linux can
> make an informed decision. On an older hypervisor, neither flag would be
> present, so Linux will have to keep doing what it is currently doing.
> 
>   
>>> To all domains. This is not even ARM specific. Today dom0 would get
>>> XENFEAT_direct_mapped and domUs XENFEAT_not_direct_mapped. With cache
>>> coloring all domains will get XENFEAT_not_direct_mapped. With Bertrand's
>>> team work on 1:1 mapping domUs, some domUs might start to get
>>> XENFEAT_direct_mapped also one day soon.
>>>
>>> Now I think this is the best option because it is descriptive, doesn't
>>> imply anything about what Linux should or should not do, and doesn't
>>> depend on unreliable IOMMU information.
>>
>> That's a good first step but this still doesn't solve the problem on whether
>> the swiotlb can be disabled per-device or even disabling the expensive 1:1
>> mapping in the IOMMU page-tables.
>>
>> It feels to me we need to have a more complete solution (not necessary
>> implemented) so we don't put ourself in the corner again.
> 
> Yeah, XENFEAT_{not,}_direct_mapped help cleaning things up, but don't
> solve the issues you described. Those are difficult to solve, it would
> be nice to have some idea.
> 
> One issue is that we only have limited information passed via device
> tree, limited to the "iommus" property. If that's all we have, there
> isn't much we can do.

We can actually do a lot with that :). See more below.

> The device tree list is maybe the only option,
> although it feels a bit complex intuitively. We could maybe replace the
> real iommu node with a fake iommu node only to use it to "tag" devices
> protected by the real iommu.
> 
> I like the idea of rewarding well-designed boards; boards that have an
> IOMMU that works for all DMA-mastering devices. It would be great to be
> able to optimize those in a simple way, without breaking the others. But
> unfortunately due to the limited info on device tree, I cannot think of
> a way to do it automatically. And it is not great to rely on platform
> files.

We would not be able to automate this in Xen alone; however, we can 
enlist the help of Linux.

Xen is able to tell whether it has protected the device with an IOMMU or 
not. When creating the domain device-tree, it could replace the IOMMU 
node with a Xen specific one.

With the Xen IOMMU nodes, Linux could find out whether the device needs 
to use the swiotlb ops or not.

Skipping extra mapping in the IOMMU is a bit trickier. I can see two 
solutions:
   - A per-domain toggle to skip the IOMMU mapping. This assumes that 
Linux is able to know that all DMA-capable devices are protected. The 
problem is that a driver may be loaded later. Such drivers are unlikely 
to use existing grants, so the toggle could be used to say "all grants 
after this point will require a mapping (or not)".

   - A per-grant flag to indicate whether an IOMMU mapping is necessary. 
This is assuming we are able to know whether a grant will be used for DMA.

>>> Instead, if we follow my original proposal of using
>>> XENFEAT_ARM_dom0_iommu and set it automatically when Dom0 is protected
>>> by IOMMU, we risk breaking PV drivers for platforms where that protection
>>> is incomplete. I have no idea how many there are out there today.
>>
>> This can virtually affect any platform as it is easy to disable an IOMMU in
>> the firmware table.
>>
>>> I have
>>> the feeling that there aren't that many but I am not sure. So yes, it
>>> could be that we start passing XENFEAT_ARM_dom0_iommu for a given
>>> platform, Linux skips the swiotlb-xen initialization, actually it is
>>> needed for a network/block device, and a PV driver breaks. I can see why
>>> you say this is a no-go.
>>>
>>>
>>> Third option. We still use XENFEAT_ARM_dom0_iommu but we never set
>>> XENFEAT_ARM_dom0_iommu automatically. It needs a platform specific flag
>>> to be set. We add the flag to xen/arch/arm/platforms/xilinx-zynqmp.c and
>>> any other platforms that qualify. Basically it is "opt in" instead of
>>> "opt out". We don't risk breaking anything because platforms would have
>>> XENFEAT_ARM_dom0_iommu disabled by default.
>> Well, yes, you will not break other platforms. However, you are still at
>> risk of breaking your platform if the firmware table is updated to disable
>> some but not all IOMMUs (for performance concerns, brokenness...).
> 
> This is something we might be able to detect: we can detect if an IOMMU
> is disabled.

This is assuming that the node has not been removed... :) Anyway, as I 
pointed out in my original answer, I don't think a platform quirk (or 
enablement) is a viable solution here.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 10:48:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 10:48:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86835.163303 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD3Ke-0004wJ-Ks; Fri, 19 Feb 2021 10:48:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86835.163303; Fri, 19 Feb 2021 10:48:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD3Ke-0004wC-GH; Fri, 19 Feb 2021 10:48:20 +0000
Received: by outflank-mailman (input) for mailman id 86835;
 Fri, 19 Feb 2021 10:48:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lD3Kd-0004w2-4g; Fri, 19 Feb 2021 10:48:19 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lD3Kc-0006gF-SF; Fri, 19 Feb 2021 10:48:18 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lD3Kc-0001Ah-I3; Fri, 19 Feb 2021 10:48:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lD3Kc-0001dm-HW; Fri, 19 Feb 2021 10:48:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=pgFmw7q/e8lcZXG/ohrxQdtGS1lrrzGK2VjVNSOFBkE=; b=WVjzWlDauvOP4rtcOV3+hXtCd+
	PE0V59RhwHnsWwvMEpUcPb4Jrn+3gdkHBN/6AxtrJDGW93LpW/Y00YVpg3eaBQJzdxmf/2VY3CJVm
	oSRUVreslke4/LFfTKDRCjtv7RsXjTOjgCnOQAEWj0ftJ3FpeQqWSJ9F7IIypo6bv76M=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159457-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 159457: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=850e6a95deb5a9e6e922ace64bf2dd0ed290ecb7
X-Osstest-Versions-That:
    linux=a829146c3fdcf6d0b76d9c54556a223820f1f73b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 19 Feb 2021 10:48:18 +0000

flight 159457 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159457/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 158387
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 158387
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 158387
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 158387
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 158387
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 158387
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 158387
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 158387
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 158387
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 158387
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 158387
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 158387
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                850e6a95deb5a9e6e922ace64bf2dd0ed290ecb7
baseline version:
 linux                a829146c3fdcf6d0b76d9c54556a223820f1f73b

Last test of basis   158387  2021-01-12 19:40:06 Z   37 days
Failing since        158473  2021-01-17 13:42:20 Z   32 days   44 attempts
Testing same since   159457  2021-02-18 08:59:55 Z    1 days    1 attempts

------------------------------------------------------------
512 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   a829146c3fdc..850e6a95deb5  850e6a95deb5a9e6e922ace64bf2dd0ed290ecb7 -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 11:07:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 11:07:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86845.163318 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD3d7-000701-Av; Fri, 19 Feb 2021 11:07:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86845.163318; Fri, 19 Feb 2021 11:07:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD3d7-0006zu-74; Fri, 19 Feb 2021 11:07:25 +0000
Received: by outflank-mailman (input) for mailman id 86845;
 Fri, 19 Feb 2021 11:07:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TOKe=HV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lD3d6-0006zp-LW
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 11:07:24 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5ad6a248-4987-42e3-b7b7-9f423cfa0097;
 Fri, 19 Feb 2021 11:07:23 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 69C83AFCD;
 Fri, 19 Feb 2021 11:07:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5ad6a248-4987-42e3-b7b7-9f423cfa0097
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613732842; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=8UOkix4EP/5Ua9OQdNnxiZgSKUwqBjpGAIkG+Sptj9I=;
	b=haldrOyQszsZCowU1rQgI/64rM0myLAMVfJ378rwglDRlrkhUsAMaJ5oydzck6H11pc1Wp
	Ca1T8WQHzFRfCLXYjLcPAXRoZHMiVBGFNkdP9o9XbW/Z5UL2XI/XNIAT0MpJFAQndWHccd
	bROw+29w5ZBiyuYsnMeQ+hLZCrSSQ/s=
Subject: Re: [bug report] xen-blkback: don't "handle" error by BUG()
To: Dan Carpenter <dan.carpenter@oracle.com>
Cc: xen-devel@lists.xenproject.org
References: <YC992dHyignVEe5R@mwanda>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <60a27374-d84d-19c5-5a04-2ab981e8ad83@suse.com>
Date: Fri, 19 Feb 2021 12:07:22 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <YC992dHyignVEe5R@mwanda>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 19.02.2021 09:59, Dan Carpenter wrote:
> Hello Jan Beulich,
> 
> The patch 5a264285ed1c: "xen-blkback: don't "handle" error by BUG()"
> from Feb 15, 2021, leads to the following static checker warning:
> 
> 	drivers/block/xen-blkback/blkback.c:836 xen_blkbk_map()
> 	warn: should this be a bitwise negate mask?
> 
> drivers/block/xen-blkback/blkback.c
>    823           * Now swizzle the MFN in our domain with the MFN from the other domain
>    824           * so that when we access vaddr(pending_req,i) it has the contents of
>    825           * the page from the other domain.
>    826           */
>    827          for (seg_idx = last_map, new_map_idx = 0; seg_idx < map_until; seg_idx++) {
>    828                  if (!pages[seg_idx]->persistent_gnt) {
>    829                          /* This is a newly mapped grant */
>    830                          BUG_ON(new_map_idx >= segs_to_map);
>    831                          if (unlikely(map[new_map_idx].status != 0)) {
>    832                                  pr_debug("invalid buffer -- could not remap it\n");
>    833                                  gnttab_page_cache_put(&ring->free_pages,
>    834                                                        &pages[seg_idx]->page, 1);
>    835                                  pages[seg_idx]->handle = BLKBACK_INVALID_HANDLE;
>    836                                  ret |= !ret;
> 
> Originally this code was:
> 
> 	ret |= 1;
> 
> Now it's equivalent to:
> 
> 	if (!ret)
> 		ret = 1;
> 
> This is the second user of this idiom in the kernel.  Both were
> introduced in the last month so maybe it's a new trend or something...
> Anyway.  False positive warning.
> 
> But my main question isn't really related to this patch.  What does
> the 1 mean in this context?  I always feel like there should be
> documentation when functions return a mix of error codes, zero and one
> but there isn't any here.

I agree. The literal 1 was there before, and the security fix
was not a good place to change this. I suppose you really
should ask whoever introduced that use of literal 1. I find
this as odd as you do, and ...

> I have reviewed this and so far as I can see setting "ret = 1;" is
> always treated exactly the same as an error code by everything.  Every
> single place where this is checked just checks for ret is zero.  The
> value is propagated two functions back and then it becomes -EIO.

... I've come to the same conclusions.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 12:48:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 12:48:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86866.163333 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD5D4-0008Mc-39; Fri, 19 Feb 2021 12:48:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86866.163333; Fri, 19 Feb 2021 12:48:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD5D3-0008MV-W7; Fri, 19 Feb 2021 12:48:37 +0000
Received: by outflank-mailman (input) for mailman id 86866;
 Fri, 19 Feb 2021 12:48:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jZy7=HV=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lD5D1-0008MQ-RL
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 12:48:35 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 008eb9b8-fd22-4161-9abe-366ecfd94a6d;
 Fri, 19 Feb 2021 12:48:34 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E2959ABAE;
 Fri, 19 Feb 2021 12:48:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 008eb9b8-fd22-4161-9abe-366ecfd94a6d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613738914; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=MzK5JEYbjIi/MmVDTjIAh8Zkph4uP/esrwdgZYP62yM=;
	b=BPG7cYSeBfwOhhzM/rgxEtx1vDiO/uq89NFh9DkKixPUx4kB36tFlaHvHjZHLvdWzBnp/0
	7w2FNcPpU5b71/X3aShD4AC30PZoavLkBOeQ6sL4973+Dm8ok6tNeT8bOgzegEhb3tnToo
	MABbg8sTtwGfgOojk34r5czNyoSG66M=
To: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
 Jan Beulich <JBeulich@suse.com>
References: <YCylpKU8F+Hsg8YL@mail-itl>
 <0b71a671-592a-53ab-6b4a-1fe15b9eb453@suse.com> <YC0exVYljxxvwFFt@mail-itl>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: Linux PV/PVH domU crash on (guest) resume from suspend
Message-ID: <d848053e-b708-167a-ab7c-e7985e61d608@suse.com>
Date: Fri, 19 Feb 2021 13:48:33 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <YC0exVYljxxvwFFt@mail-itl>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="5aKjBXTyAgyPuCInfeqeldumVEITylnc5"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--5aKjBXTyAgyPuCInfeqeldumVEITylnc5
Content-Type: multipart/mixed; boundary="BuSeWw4MLN1X0b3xvWIhMRmhtLWRC6NqT";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
 Jan Beulich <JBeulich@suse.com>
Message-ID: <d848053e-b708-167a-ab7c-e7985e61d608@suse.com>
Subject: Re: Linux PV/PVH domU crash on (guest) resume from suspend
References: <YCylpKU8F+Hsg8YL@mail-itl>
 <0b71a671-592a-53ab-6b4a-1fe15b9eb453@suse.com> <YC0exVYljxxvwFFt@mail-itl>
In-Reply-To: <YC0exVYljxxvwFFt@mail-itl>

--BuSeWw4MLN1X0b3xvWIhMRmhtLWRC6NqT
Content-Type: multipart/mixed;
 boundary="------------87A5A1D9A3A31E5DA1054892"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------87A5A1D9A3A31E5DA1054892
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 17.02.21 14:48, Marek Marczykowski-Górecki wrote:
> On Wed, Feb 17, 2021 at 07:51:42AM +0100, Jürgen Groß wrote:
>> On 17.02.21 06:12, Marek Marczykowski-Górecki wrote:
>>> Hi,
>>>
>>> I'm observing Linux PV/PVH guest crash when I resume it from sleep. I do
>>> this with:
>>>
>>>       virsh -c xen dompmsuspend <vmname> mem
>>>       virsh -c xen dompmwakeup <vmname>
>>>
>>> But it's possible to trigger it with plain xl too:
>>>
>>>       xl save -c <vmname> <some-file>
>>>
>>> The same on HVM works fine.
>>>
>>> This is on Xen 4.14.1, and with guest kernel 5.4.90, the same happens
>>> with 5.4.98. Dom0 kernel is the same, but I'm not sure if that's
>>> relevant here. I can reliably reproduce it.
>>
>> This is already on my list of issues to look at.
>>
>> The problem seems to be related to the XSA-332 patches. You could try
>> the patches I've sent out recently addressing other fallout from XSA-332
>> which _might_ fix this issue, too:
>>
>> https://patchew.org/Xen/20210211101616.13788-1-jgross@suse.com/
>
> Thanks for the patches. Sadly it doesn't change anything - I get exactly
> the same crash. I applied that on top of 5.11-rc7 (that's what I had
> handy). If you think there may be a difference with the final 5.11 or
> another branch, please let me know.
>

Some more tests reveal that this seems to be a hypervisor regression.
I can reproduce the very same problem with a 4.12 kernel from 2019.

It seems as if the EVTCHNOP_init_control hypercall is returning
-EINVAL when the domain is continuing to run after the suspend
hypercall (in contrast to the case where a new domain has been created
when doing a "xl restore").


Juergen

--BuSeWw4MLN1X0b3xvWIhMRmhtLWRC6NqT--

--5aKjBXTyAgyPuCInfeqeldumVEITylnc5
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmAvs6EFAwAAAAAACgkQsN6d1ii/Ey+Z
2Qf9HoQFMpmjgrDdk8oK3S2Z6iUOW9FM3y+TEsnYCQ4mTNDdLPOWYNfJYmSO91tcdx3c6M8eLfEG
thgLOoavL8wsYHPcDQwNeV3AKckoH6zCjwISOaBTh+wu0AZNx/NmrUowFFFvHoCXokx6BHMZ2KoD
rXY02wnCas5DXKptDCdMymd0T8yUGSvHNm6ajjIbYASvpL0uA8MQjdzh+R7tRtVB9g7WpLhRMSZa
/mgt/PG9C1ttiZBvVREGhxKawf07dcjT7T4uCvTHz20K3mnleqgvm7skdbVoGeI0OBC+Wvp7NLgJ
iptOyfwEZNDojyMgDXr1uIvzNnb5kITS8WnMx2jLXw==
=YQ/S
-----END PGP SIGNATURE-----

--5aKjBXTyAgyPuCInfeqeldumVEITylnc5--


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 13:10:23 2021
Subject: Re: Linux PV/PVH domU crash on (guest) resume from suspend
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?= <marmarek@invisiblethingslab.com>
References: <YCylpKU8F+Hsg8YL@mail-itl>
 <0b71a671-592a-53ab-6b4a-1fe15b9eb453@suse.com> <YC0exVYljxxvwFFt@mail-itl>
 <d848053e-b708-167a-ab7c-e7985e61d608@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <08117ed3-7e84-b233-4a74-248896e2a2d8@suse.com>
Date: Fri, 19 Feb 2021 14:10:08 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <d848053e-b708-167a-ab7c-e7985e61d608@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 19.02.2021 13:48, Jürgen Groß wrote:
> On 17.02.21 14:48, Marek Marczykowski-Górecki wrote:
>> On Wed, Feb 17, 2021 at 07:51:42AM +0100, Jürgen Groß wrote:
>>> On 17.02.21 06:12, Marek Marczykowski-Górecki wrote:
>>>> Hi,
>>>>
>>>> I'm observing Linux PV/PVH guest crash when I resume it from sleep. I do
>>>> this with:
>>>>
>>>>       virsh -c xen dompmsuspend <vmname> mem
>>>>       virsh -c xen dompmwakeup <vmname>
>>>>
>>>> But it's possible to trigger it with plain xl too:
>>>>
>>>>       xl save -c <vmname> <some-file>
>>>>
>>>> The same on HVM works fine.
>>>>
>>>> This is on Xen 4.14.1, and with guest kernel 5.4.90, the same happens
>>>> with 5.4.98. Dom0 kernel is the same, but I'm not sure if that's
>>>> relevant here. I can reliably reproduce it.
>>>
>>> This is already on my list of issues to look at.
>>>
>>> The problem seems to be related to the XSA-332 patches. You could try
>>> the patches I've sent out recently addressing other fallout from XSA-332
>>> which _might_ fix this issue, too:
>>>
>>> https://patchew.org/Xen/20210211101616.13788-1-jgross@suse.com/
>>
>> Thanks for the patches. Sadly it doesn't change anything - I get exactly
>> the same crash. I applied that on top of 5.11-rc7 (that's what I had
>> handy). If you think there may be a difference with the final 5.11 or
>> another branch, please let me know.
>>
> 
> Some more tests reveal that this seems to be a hypervisor regression.
> I can reproduce the very same problem with a 4.12 kernel from 2019.
> 
> It seems as if the EVTCHNOP_init_control hypercall is returning
> -EINVAL when the domain is continuing to run after the suspend
> hypercall (in contrast to the case where a new domain has been created
> when doing a "xl restore").

But when you resume the same domain, the kernel isn't supposed to
call EVTCHNOP_init_control, as that's a one time operation (per
vCPU, and unless EVTCHNOP_reset was called of course). In the
hypervisor map_control_block() has (always had) as its first step

    if ( v->evtchn_fifo->control_block )
        return -EINVAL;

Re-setup is needed only when resuming in a new domain.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 13:18:19 2021
Subject: Re: Linux PV/PVH domU crash on (guest) resume from suspend
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?= <marmarek@invisiblethingslab.com>
References: <YCylpKU8F+Hsg8YL@mail-itl>
 <0b71a671-592a-53ab-6b4a-1fe15b9eb453@suse.com> <YC0exVYljxxvwFFt@mail-itl>
 <d848053e-b708-167a-ab7c-e7985e61d608@suse.com>
 <08117ed3-7e84-b233-4a74-248896e2a2d8@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <81dfdb6c-00bc-86c1-a27f-2f7b312b4360@suse.com>
Date: Fri, 19 Feb 2021 14:18:10 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <08117ed3-7e84-b233-4a74-248896e2a2d8@suse.com>

On 19.02.21 14:10, Jan Beulich wrote:
> On 19.02.2021 13:48, Jürgen Groß wrote:
>> On 17.02.21 14:48, Marek Marczykowski-Górecki wrote:
>>> On Wed, Feb 17, 2021 at 07:51:42AM +0100, Jürgen Groß wrote:
>>>> On 17.02.21 06:12, Marek Marczykowski-Górecki wrote:
>>>>> Hi,
>>>>>
>>>>> I'm observing Linux PV/PVH guest crash when I resume it from sleep. I do
>>>>> this with:
>>>>>
>>>>>        virsh -c xen dompmsuspend <vmname> mem
>>>>>        virsh -c xen dompmwakeup <vmname>
>>>>>
>>>>> But it's possible to trigger it with plain xl too:
>>>>>
>>>>>        xl save -c <vmname> <some-file>
>>>>>
>>>>> The same on HVM works fine.
>>>>>
>>>>> This is on Xen 4.14.1, and with guest kernel 5.4.90, the same happens
>>>>> with 5.4.98. Dom0 kernel is the same, but I'm not sure if that's
>>>>> relevant here. I can reliably reproduce it.
>>>>
>>>> This is already on my list of issues to look at.
>>>>
>>>> The problem seems to be related to the XSA-332 patches. You could try
>>>> the patches I've sent out recently addressing other fallout from XSA-332
>>>> which _might_ fix this issue, too:
>>>>
>>>> https://patchew.org/Xen/20210211101616.13788-1-jgross@suse.com/
>>>
>>> Thanks for the patches. Sadly it doesn't change anything - I get exactly
>>> the same crash. I applied that on top of 5.11-rc7 (that's what I had
>>> handy). If you think there may be a difference with the final 5.11 or
>>> another branch, please let me know.
>>>
>>
>> Some more tests reveal that this seems to be a hypervisor regression.
>> I can reproduce the very same problem with a 4.12 kernel from 2019.
>>
>> It seems as if the EVTCHNOP_init_control hypercall is returning
>> -EINVAL when the domain is continuing to run after the suspend
>> hypercall (in contrast to the case where a new domain has been created
>> when doing a "xl restore").
>
> But when you resume the same domain, the kernel isn't supposed to
> call EVTCHNOP_init_control, as that's a one time operation (per
> vCPU, and unless EVTCHNOP_reset was called of course). In the
> hypervisor map_control_block() has (always had) as its first step
>
>      if ( v->evtchn_fifo->control_block )
>          return -EINVAL;
>
> Re-setup is needed only when resuming in a new domain.

But the same guest will not crash when doing the same on a 4.12
hypervisor.


Juergen




From xen-devel-bounces@lists.xenproject.org Fri Feb 19 13:38:09 2021
Subject: Re: Linux PV/PVH domU crash on (guest) resume from suspend
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?= <marmarek@invisiblethingslab.com>
References: <YCylpKU8F+Hsg8YL@mail-itl>
 <0b71a671-592a-53ab-6b4a-1fe15b9eb453@suse.com> <YC0exVYljxxvwFFt@mail-itl>
 <d848053e-b708-167a-ab7c-e7985e61d608@suse.com>
 <08117ed3-7e84-b233-4a74-248896e2a2d8@suse.com>
 <81dfdb6c-00bc-86c1-a27f-2f7b312b4360@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <f61868e0-95e9-5134-f415-80039ea7b5a5@suse.com>
Date: Fri, 19 Feb 2021 14:37:59 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <81dfdb6c-00bc-86c1-a27f-2f7b312b4360@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 19.02.2021 14:18, Jürgen Groß wrote:
> On 19.02.21 14:10, Jan Beulich wrote:
>> On 19.02.2021 13:48, Jürgen Groß wrote:
>>> On 17.02.21 14:48, Marek Marczykowski-Górecki wrote:
>>>> On Wed, Feb 17, 2021 at 07:51:42AM +0100, Jürgen Groß wrote:
>>>>> On 17.02.21 06:12, Marek Marczykowski-Górecki wrote:
>>>>>> Hi,
>>>>>>
>>>>>> I'm observing Linux PV/PVH guest crash when I resume it from sleep. I do
>>>>>> this with:
>>>>>>
>>>>>>        virsh -c xen dompmsuspend <vmname> mem
>>>>>>        virsh -c xen dompmwakeup <vmname>
>>>>>>
>>>>>> But it's possible to trigger it with plain xl too:
>>>>>>
>>>>>>        xl save -c <vmname> <some-file>
>>>>>>
>>>>>> The same on HVM works fine.
>>>>>>
>>>>>> This is on Xen 4.14.1, and with guest kernel 5.4.90, the same happens
>>>>>> with 5.4.98. Dom0 kernel is the same, but I'm not sure if that's
>>>>>> relevant here. I can reliably reproduce it.
>>>>>
>>>>> This is already on my list of issues to look at.
>>>>>
>>>>> The problem seems to be related to the XSA-332 patches. You could try
>>>>> the patches I've sent out recently addressing other fallout from XSA-332
>>>>> which _might_ fix this issue, too:
>>>>>
>>>>> https://patchew.org/Xen/20210211101616.13788-1-jgross@suse.com/
>>>>
>>>> Thanks for the patches. Sadly it doesn't change anything - I get exactly
>>>> the same crash. I applied that on top of 5.11-rc7 (that's what I had
>>>> handy). If you think there may be a difference with the final 5.11 or
>>>> another branch, please let me know.
>>>>
>>>
>>> Some more tests reveal that this seems to be a hypervisor regression.
>>> I can reproduce the very same problem with a 4.12 kernel from 2019.
>>>
>>> It seems as if the EVTCHNOP_init_control hypercall is returning
>>> -EINVAL when the domain is continuing to run after the suspend
>>> hypercall (in contrast to the case where a new domain has been created
>>> when doing a "xl restore").
>>
>> But when you resume the same domain, the kernel isn't supposed to
>> call EVTCHNOP_init_control, as that's a one time operation (per
>> vCPU, and unless EVTCHNOP_reset was called of course). In the
>> hypervisor map_control_block() has (always had) as its first step
>>
>>      if ( v->evtchn_fifo->control_block )
>>          return -EINVAL;
>>
>> Re-setup is needed only when resuming in a new domain.
> 
> But the same guest will not crash when doing the same on a 4.12
> hypervisor.

Is the kernel perhaps not given the bit of information anymore that
it needs to tell apart the two resume modes?

Jan


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 13:41:08 2021
Subject: Re: Linux PV/PVH domU crash on (guest) resume from suspend
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?= <marmarek@invisiblethingslab.com>
References: <YCylpKU8F+Hsg8YL@mail-itl>
 <0b71a671-592a-53ab-6b4a-1fe15b9eb453@suse.com> <YC0exVYljxxvwFFt@mail-itl>
 <d848053e-b708-167a-ab7c-e7985e61d608@suse.com>
 <08117ed3-7e84-b233-4a74-248896e2a2d8@suse.com>
 <81dfdb6c-00bc-86c1-a27f-2f7b312b4360@suse.com>
 <f61868e0-95e9-5134-f415-80039ea7b5a5@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <78b41fc5-f50f-3d56-f854-a14f978bb574@suse.com>
Date: Fri, 19 Feb 2021 14:41:03 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <f61868e0-95e9-5134-f415-80039ea7b5a5@suse.com>

On 19.02.21 14:37, Jan Beulich wrote:
> On 19.02.2021 14:18, Jürgen Groß wrote:
>> On 19.02.21 14:10, Jan Beulich wrote:
>>> On 19.02.2021 13:48, Jürgen Groß wrote:
>>>> On 17.02.21 14:48, Marek Marczykowski-Górecki wrote:
>>>>> On Wed, Feb 17, 2021 at 07:51:42AM +0100, Jürgen Groß wrote:
>>>>>> On 17.02.21 06:12, Marek Marczykowski-Górecki wrote:
>>>>>>> Hi,
>>>>>>>
>>>>>>> I'm observing Linux PV/PVH guest crash when I resume it from sleep. I do
>>>>>>> this with:
>>>>>>>
>>>>>>>         virsh -c xen dompmsuspend <vmname> mem
>>>>>>>         virsh -c xen dompmwakeup <vmname>
>>>>>>>
>>>>>>> But it's possible to trigger it with plain xl too:
>>>>>>>
>>>>>>>         xl save -c <vmname> <some-file>
>>>>>>>
>>>>>>> The same on HVM works fine.
>>>>>>>
>>>>>>> This is on Xen 4.14.1, and with guest kernel 5.4.90, the same happens
>>>>>>> with 5.4.98. Dom0 kernel is the same, but I'm not sure if that's
>>>>>>> relevant here. I can reliably reproduce it.
>>>>>>
>>>>>> This is already on my list of issues to look at.
>>>>>>
>>>>>> The problem seems to be related to the XSA-332 patches. You could try
>>>>>> the patches I've sent out recently addressing other fallout from XSA-332
>>>>>> which _might_ fix this issue, too:
>>>>>>
>>>>>> https://patchew.org/Xen/20210211101616.13788-1-jgross@suse.com/
>>>>>
>>>>> Thanks for the patches. Sadly it doesn't change anything - I get exactly
>>>>> the same crash. I applied that on top of 5.11-rc7 (that's what I had
>>>>> handy). If you think there may be a difference with the final 5.11 or
>>>>> another branch, please let me know.
>>>>>
>>>>
>>>> Some more tests reveal that this seems to be a hypervisor regression.
>>>> I can reproduce the very same problem with a 4.12 kernel from 2019.
>>>>
>>>> It seems as if the EVTCHNOP_init_control hypercall is returning
>>>> -EINVAL when the domain is continuing to run after the suspend
>>>> hypercall (in contrast to the case where a new domain has been created
>>>> when doing a "xl restore").
>>>
>>> But when you resume the same domain, the kernel isn't supposed to
>>> call EVTCHNOP_init_control, as that's a one time operation (per
>>> vCPU, and unless EVTCHNOP_reset was called of course). In the
>>> hypervisor map_control_block() has (always had) as its first step
>>>
>>>       if ( v->evtchn_fifo->control_block )
>>>           return -EINVAL;
>>>
>>> Re-setup is needed only when resuming in a new domain.
>>
>> But the same guest will not crash when doing the same on a 4.12
>> hypervisor.
>
> Is the kernel perhaps not given the bit of information anymore that
> it needs to tell apart the two resume modes?

Ah, yes, this might be the problem.

EVTCHNOP_init_control is indeed used only if the suspend hypercall did
return 0.


Juergen

--------------58E6AF6AA545396E9142EA3F--

--kSaBhwffdNA39SZcm9rraOsZjQVs32cQE--

--InpOaW6LgH7B0jYTWlVrjD0XqFDH2zmog--


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 13:44:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 13:44:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86883.163392 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD655-0006Ga-AS; Fri, 19 Feb 2021 13:44:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86883.163392; Fri, 19 Feb 2021 13:44:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD655-0006GT-7W; Fri, 19 Feb 2021 13:44:27 +0000
Received: by outflank-mailman (input) for mailman id 86883;
 Fri, 19 Feb 2021 13:44:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TOKe=HV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lD654-0006GO-HT
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 13:44:26 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 339e94c2-2e4a-45c7-83c4-2a25fa455593;
 Fri, 19 Feb 2021 13:44:25 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9808EAC6E;
 Fri, 19 Feb 2021 13:44:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 339e94c2-2e4a-45c7-83c4-2a25fa455593
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613742264; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=+SP6WAoYFimwm4BW7FGybNQJGKcYzR4XHeUA/R4j+gI=;
	b=ZW+0TOsXH7jEUsZphbLahSRxJhKh+/r/nI/o5smRcuNifhuFrXo2Jmx3YWhopzrnSzhHYj
	RoGRof5xm6EescRxfG2K/6ZOt8fwI8zw2xyxhZn4TenY8RhQMwNuyFeLGtcQ6oYiVljOHS
	4DxqC0ChTPpolcR87HcQMTg4fncg/So=
Subject: Re: [PATCH HVM v4 1/1] hvm: refactor set param
To: Norbert Manthey <nmanthey@amazon.de>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Ian Jackson <iwj@xenproject.org>,
 xen-devel@lists.xenproject.org
References: <2633df5f-df68-4a16-bc5c-522b2a589b00@amazon.de>
 <20210218150158.28265-1-nmanthey@amazon.de>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <6d3a067d-3071-5c81-ee1f-0615f770c9fb@suse.com>
Date: Fri, 19 Feb 2021 14:44:24 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210218150158.28265-1-nmanthey@amazon.de>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 18.02.2021 16:01, Norbert Manthey wrote:
> To prevent leaking HVM params via L1TF and similar issues on a
> hyperthread pair, let's load values of domains only after performing all
> relevant checks, and blocking speculative execution.
> 
> For both get and set, the value of the index is already checked in the
> outer calling function. The block_speculation calls in hvmop_get_param
> and hvmop_set_param are removed, because is_hvm_domain already blocks
> speculation.
> 
> Furthermore, speculative barriers are re-arranged to make sure we do not
> allow guests running on co-located VCPUs to leak hvm parameter values of
> other domains.
> 
> To improve symmetry between the get and set operations, function
> hvmop_set_param is made static.
> 
> This is part of the speculative hardening effort.
> 
> Signed-off-by: Norbert Manthey <nmanthey@amazon.de>
> Reported-by: Hongyan Xia <hongyxia@amazon.co.uk>
> Release-Acked-by: Ian Jackson <iwj@xenproject.org>

Reviewed-by: Jan Beulich <jbeulich@suse.com>


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 14:14:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 14:14:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86887.163404 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD6XS-0000x8-Ml; Fri, 19 Feb 2021 14:13:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86887.163404; Fri, 19 Feb 2021 14:13:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD6XS-0000x1-Jq; Fri, 19 Feb 2021 14:13:46 +0000
Received: by outflank-mailman (input) for mailman id 86887;
 Fri, 19 Feb 2021 14:13:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jZy7=HV=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lD6XR-0000ww-9D
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 14:13:45 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2cacaf6f-74c1-4b44-90c4-9ccb488c96e0;
 Fri, 19 Feb 2021 14:13:44 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1B8CDABAE;
 Fri, 19 Feb 2021 14:13:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2cacaf6f-74c1-4b44-90c4-9ccb488c96e0
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613744023; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=m9LvAwR4BamUvVwgeaJtp/lMusRaDgmJHEvq2QOW/hM=;
	b=cAK+uYSmkCA7h8hxcrzuToVJDOK+x2eSDT8qYT/xICl3nynXtvsMtE/VoVBwMCN/pukKoW
	Cx0r5Fw4KU0aeeSwBGXO1VFClxuMjhl4gcxJvnymsjrh3NrefuKEl9iK57O+PCMUA7vA05
	Y1vcGcFOmJqmqeofXt/K/yHQjpej8Jk=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
Subject: [PATCH-for-4.15] tools/libs/light: fix xl save -c handling
Date: Fri, 19 Feb 2021 15:13:37 +0100
Message-Id: <20210219141337.6934-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

libxl_domain_resume() won't work correctly when it was called due to
an "xl save -c" command, i.e. to continue the suspended domain.

The information to do that is not saved in libxl__dm_resume_state for
non-HVM domains.

Fixes: 6298f0eb8f443 ("libxl: Re-introduce libxl__domain_resume")
Reported-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/libs/light/libxl_dom_suspend.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/tools/libs/light/libxl_dom_suspend.c b/tools/libs/light/libxl_dom_suspend.c
index 25d1571895..f7823bbc8f 100644
--- a/tools/libs/light/libxl_dom_suspend.c
+++ b/tools/libs/light/libxl_dom_suspend.c
@@ -630,12 +630,13 @@ void libxl__domain_resume(libxl__egc *egc,
         goto out;
     }
 
+    dmrs->suspend_cancel = suspend_cancel;
+
     if (type != LIBXL_DOMAIN_TYPE_HVM) {
         rc = 0;
         goto out;
     }
 
-    dmrs->suspend_cancel = suspend_cancel;
     dmrs->dm_resumed_callback = domain_resume_done;
     libxl__dm_resume(egc, dmrs); /* must be last */
     return;
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Feb 19 14:15:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 14:15:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86890.163417 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD6ZX-00014X-3A; Fri, 19 Feb 2021 14:15:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86890.163417; Fri, 19 Feb 2021 14:15:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD6ZW-00014Q-WE; Fri, 19 Feb 2021 14:15:55 +0000
Received: by outflank-mailman (input) for mailman id 86890;
 Fri, 19 Feb 2021 14:15:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TOKe=HV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lD6ZW-00014L-1s
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 14:15:54 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9b8a634b-4203-446c-912f-84de8b4f57a6;
 Fri, 19 Feb 2021 14:15:52 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 14FCEABAE;
 Fri, 19 Feb 2021 14:15:52 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9b8a634b-4203-446c-912f-84de8b4f57a6
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613744152; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=7Ycka/xTMYs/DV3CnJFaLl8REG1O2rk4dF5PdWYzoWY=;
	b=jsGEjuLgR/i3+6meOJ7tMesy8/9vVPGiOTIbJCgpbf5z1o8ck2eSJq53wEgRrMtKZv2HFk
	7sjC30Z/a4PB+ixi1l6yfQlAbHKQdc4vhlwlPhIQtnQXwsc6xDQ0LVnIH0RzoH5QQ7Xj3h
	40jYj0v/6VgjItZhPWrRyNivpQPvECA=
Subject: Re: [PATCH-for-4.15] tools/libs/light: fix xl save -c handling
To: Juergen Gross <jgross@suse.com>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>,
 =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>, xen-devel@lists.xenproject.org
References: <20210219141337.6934-1-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1c02c3af-0a9b-6c68-e110-9d0963275e17@suse.com>
Date: Fri, 19 Feb 2021 15:15:52 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210219141337.6934-1-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 19.02.2021 15:13, Juergen Gross wrote:
> libxl_domain_resume() won't work correctly when it was called due to
> an "xl save -c" command, i.e. to continue the suspended domain.
> 
> The information to do that is not saved in libxl__dm_resume_state for
> non-HVM domains.
> 
> Fixes: 6298f0eb8f443 ("libxl: Re-introduce libxl__domain_resume")
> Reported-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 14:39:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 14:39:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86900.163433 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD6wE-0003Ak-1P; Fri, 19 Feb 2021 14:39:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86900.163433; Fri, 19 Feb 2021 14:39:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD6wD-0003Ad-Uk; Fri, 19 Feb 2021 14:39:21 +0000
Received: by outflank-mailman (input) for mailman id 86900;
 Fri, 19 Feb 2021 14:39:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TOKe=HV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lD6wC-0003AY-2M
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 14:39:20 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 955d387b-96eb-448a-900d-52fc9ab7259e;
 Fri, 19 Feb 2021 14:39:18 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2AFBAB10B;
 Fri, 19 Feb 2021 14:39:17 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 955d387b-96eb-448a-900d-52fc9ab7259e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613745557; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=lBZOODJxUXudV3lFWoXDl/wMzTDy0bjvGE3IRHW8zu4=;
	b=TtExeoZCT8EEi4bDTgzP87iroJMMZZZH/l2BxWK7Cj2n5gT882IY2JnEVE4fuYvI+Rj4L9
	Mq09qQ6eD55C1RJ6UpfNJLjwtJvg69lW0+Jq94TRn5uhVMdf83ua925nJVU3YetKvkEJ4M
	PcWIZcFR2rjvFvsrdSo3/yuQBdBL81U=
Subject: Re: [PATCH] x86/irq: simplify loop in unmap_domain_pirq
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210210092211.53359-1-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <f80e0026-9a0b-d25d-a0a4-81774da8cba8@suse.com>
Date: Fri, 19 Feb 2021 15:39:14 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210210092211.53359-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 10.02.2021 10:22, Roger Pau Monne wrote:
> The for loop in unmap_domain_pirq is unnecessary complicated, with
> several places where the index is incremented, and also different
> exit conditions spread between the loop body.
> 
> Simplify it by looping over each possible PIRQ using the for loop
> syntax, and remove all possible in-loop exit points.
> 
> No functional change intended.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

Quite a bit better indeed. Just one nit below (can be taken care of
while committing, once the tree re-opens).

> @@ -2356,11 +2355,23 @@ int unmap_domain_pirq(struct domain *d, int pirq)
>      if ( msi_desc != NULL )
>          pci_disable_msi(msi_desc);
>  
> -    spin_lock_irqsave(&desc->lock, flags);
> -
> -    for ( i = 0; ; )
> +    for ( i = 0; i < nr; i++, info = pirq_info(d, pirq + i) )
>      {
> +        unsigned long flags;
> +
> +        if ( !info || info->arch.irq <= 0 )
> +        {
> +            printk(XENLOG_G_ERR "dom%d: MSI pirq %d not mapped\n",
> +                   d->domain_id, pirq + i);

%pd please as you touch/move this anyway.

> @@ -2378,45 +2389,6 @@ int unmap_domain_pirq(struct domain *d, int pirq)
>              desc->msi_desc = NULL;
>          }
>  
> -        if ( ++i == nr )
> -            break;
> -
> -        spin_unlock_irqrestore(&desc->lock, flags);
> -
> -        if ( !forced_unbind )
> -           cleanup_domain_irq_pirq(d, irq, info);
> -
> -        rc = irq_deny_access(d, irq);
> -        if ( rc )
> -        {
> -            printk(XENLOG_G_ERR
> -                   "dom%d: could not deny access to IRQ%d (pirq %d)\n",
> -                   d->domain_id, irq, pirq + i);

Looks like the pirq number logged here also was off by one, which
the re-arrangement takes care of.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 14:46:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 14:46:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86902.163445 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD72v-0004C4-PX; Fri, 19 Feb 2021 14:46:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86902.163445; Fri, 19 Feb 2021 14:46:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD72v-0004Bx-Lm; Fri, 19 Feb 2021 14:46:17 +0000
Received: by outflank-mailman (input) for mailman id 86902;
 Fri, 19 Feb 2021 14:46:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lD72u-0004Bs-7g
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 14:46:16 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lD72s-0002F3-Ob; Fri, 19 Feb 2021 14:46:14 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lD72s-0000h0-Gk; Fri, 19 Feb 2021 14:46:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=v2y+ymMSDQX82piVPwjU1Es9uWoU4rS0eoNMp1/EwQc=; b=J4AqVIV/pSle2KRvPPw/UG2KHK
	KbC3wfFxxrBhNIfbfww9q/u0KsjcqXikvlLJhK4iJ8NE6bkdxm8Aa9IwFl2m2IZLq9bBcsw1uco1l
	zqA9HjGurjczOVNEeY7iZebDPH4FoPcOSVv6MH/824qSWn+xrRwk0EDR1z+BJuKY9hE4=;
Subject: Re: [PATCH] xen/arm : smmuv3: Fix to handle multiple StreamIds per
 device.
To: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: Rahul Singh <Rahul.Singh@arm.com>,
 Xen-devel <xen-devel@lists.xenproject.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, iwj@xenproject.org
References: <43de5b58df37d8b8de037cb23c47ab8454caf37c.1613492577.git.rahul.singh@arm.com>
 <5145C563-A8AA-41B1-8EBB-B32D1FFC2219@arm.com>
 <alpine.DEB.2.21.2102171522420.3234@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <32e14e21-86de-9c74-e94a-ff29bda56b3a@xen.org>
Date: Fri, 19 Feb 2021 14:46:12 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2102171522420.3234@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 17/02/2021 23:23, Stefano Stabellini wrote:
> +IanJ
> 
> On Wed, 17 Feb 2021, Bertrand Marquis wrote:
>> Hi Rahul,
>>
>>
>>> On 17 Feb 2021, at 10:05, Rahul Singh <Rahul.Singh@arm.com> wrote:
>>>
>>> The SMMUv3 driver does not handle multiple StreamIDs if the master
>>> device supports more than one StreamID.
>>>
>>> This bug was introduced when the driver was ported from Linux to Xen.
>>> dt_device_set_protected(..) should be called from add_device(..) not
>>> from the dt_xlate(..).
>>>
>>> Move dt_device_set_protected(..) from dt_xlate(..) to add_device().
>>>
>>> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
>> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>
>> Thanks a lot, this is fixing issues with multiple stream ids for one device :-)
> 
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> 
> 
>>> ---
>>> This patch is a candidate for 4.15 as without this patch it is not possible to
>>> assign multiple StreamIds to the same device when device is protected behind
>>> SMMUv3.
> 
> I agree especially considering that the impact is limited to smmu-v3.c.

SMMUv3 is in tech preview, so the risk is pretty low as we don't support 
it :).

But a release ack from Ian is not necessary for bug fixes until
tonight (19th February). So I have committed the patch.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 14:50:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 14:50:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86905.163456 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD771-00053q-DF; Fri, 19 Feb 2021 14:50:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86905.163456; Fri, 19 Feb 2021 14:50:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD771-00053j-AE; Fri, 19 Feb 2021 14:50:31 +0000
Received: by outflank-mailman (input) for mailman id 86905;
 Fri, 19 Feb 2021 14:50:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rPBP=HV=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1lD770-00053e-Dr
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 14:50:30 +0000
Received: from userp2120.oracle.com (unknown [156.151.31.85])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ebc54774-0da2-42e9-906a-4759bba68947;
 Fri, 19 Feb 2021 14:50:28 +0000 (UTC)
Received: from pps.filterd (userp2120.oracle.com [127.0.0.1])
 by userp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 11JEoMqa047171;
 Fri, 19 Feb 2021 14:50:26 GMT
Received: from aserp3020.oracle.com (aserp3020.oracle.com [141.146.126.70])
 by userp2120.oracle.com with ESMTP id 36p7dnsn68-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 19 Feb 2021 14:50:26 +0000
Received: from pps.filterd (aserp3020.oracle.com [127.0.0.1])
 by aserp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 11JEoMxJ125464;
 Fri, 19 Feb 2021 14:50:25 GMT
Received: from nam12-mw2-obe.outbound.protection.outlook.com
 (mail-mw2nam12lp2041.outbound.protection.outlook.com [104.47.66.41])
 by aserp3020.oracle.com with ESMTP id 36prp2ykae-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 19 Feb 2021 14:50:25 +0000
Received: from BYAPR10MB3288.namprd10.prod.outlook.com (2603:10b6:a03:156::21)
 by SJ0PR10MB4717.namprd10.prod.outlook.com (2603:10b6:a03:2ac::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3846.27; Fri, 19 Feb
 2021 14:50:18 +0000
Received: from BYAPR10MB3288.namprd10.prod.outlook.com
 ([fe80::f489:4e25:63e0:c721]) by BYAPR10MB3288.namprd10.prod.outlook.com
 ([fe80::f489:4e25:63e0:c721%7]) with mapi id 15.20.3868.029; Fri, 19 Feb 2021
 14:50:17 +0000
Received: from [10.74.102.113] (138.3.200.49) by
 SA9PR03CA0024.namprd03.prod.outlook.com (2603:10b6:806:20::29) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3868.28 via Frontend Transport; Fri, 19 Feb 2021 14:50:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ebc54774-0da2-42e9-906a-4759bba68947
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : in-reply-to : content-type :
 content-transfer-encoding : mime-version; s=corp-2020-01-29;
 bh=kUskgIqVNaYeclYwmhNYIZ2o+1wQAjk9lE4jRdS2drI=;
 b=0CkEfwLMxbJuTQWT8qEwP8DDHGuBl/2N5Xi+RisBR7qnWD1uJsN5MEqnaOzwIlOxcF67
 bn/q6LFW/MQdDgk1VK3hsO2BuKkIRxzrkIVXGrm5MCwyerBvPQhIpVT10J/wI5gkKNan
 5aDJjC3tljQRodoO5l2xnCsQsREHSPKHz7tn5hkjWwGB/wYa3Jv6o9wONV+ErkCHnDLW
 R0/sOY5Ar47TQuLm6UCrqaHB2HA4YBqlNh30lT2EusdNurgCgPwX5R8K5hG5T2tgBBfb
 i7IEHwEsAZSAwRPkpEwkFsY0hG/xaIEltfYDJmXYyrSpfD7Tg+7NTJ0U23jtxTutGiqJ zg== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=nyUMEkpVaK6x2Ym2Bd8x/O8zODtYq7BlPrMXRWu1QBvKjxcWuR0B4nK1orNF85rvz8Z3i0nL2VUI6tSJr9d6Cdg/ToAMBguOrQbL5TBqQb8QzTZxhlGMHVUQpDFLNz34GshpzNvy3re7yWRncZYpRvkLseVb4NMBy0tReV5B7OyvdyxZ5+p5HFv5kJD7vVVxck13klzTb5fwzFp3C/9E+ZrO7l/AI6fKoqTHZ/qBg4StgDtXXLYyDqnDSk2fSi4lFs94o103s+ojC3c4IrIpn47PAgsWmggYbqKIJ1v/4XD9/Dzc4rWNvDtdRdhBm1799r34nlGuEtM5N0fGpKHvbQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=kUskgIqVNaYeclYwmhNYIZ2o+1wQAjk9lE4jRdS2drI=;
 b=CuvOnTmh9JUoBG14A8Bi2XhhwcRV0TShc5AlY5QvggrEAz/+ejSIZA8/I9zIaZzcEhI8P6qNRiReSTUiN8E1U8tvKipfxmp2Vl9F8B6ptmvOEbJpCLo3ABXTOnOZcBlgTp93x/K9chTZp3xsUCV4ff40DXG+GMx2fXUhxmg0SB+/Rujkd0GKganEhcR7cZKpOlGaOGFv7GtZNlJJUB1YqC29CkmxsHGIlcr9uIcMggzBl6w31YoZPYrRR6tUkmg6BW3FBYzgZG4d0iO3HlGjofP5Qb3jpICVgCA1MJXP8lS9EXXZHSnw6Qa5fgEFK1iubnJjh8e2isttWDF0fawvVA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=kUskgIqVNaYeclYwmhNYIZ2o+1wQAjk9lE4jRdS2drI=;
 b=iI7bySvn6HyR81f+WcWPsUupjYXILeSkeOGdmmn8LILQPdB5WzTE7becuJHTedC/c3ujRi0HDIorhFEcI9lzWrIaeURf++Qq4MlLIbomXLkPRBVmD1UhsftzSM1lvCOPi//gDmgZxJKLLm2hrhTnPW4Li3VPR/WhHloZWRSMwwE=
Authentication-Results: intel.com; dkim=none (message not signed)
 header.d=none;intel.com; dmarc=none action=none header.from=oracle.com;
Subject: Re: [PATCH v2 1/4] xl: Add support for ignore_msrs option
To: Jan Beulich <jbeulich@suse.com>,
        =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, iwj@xenproject.org, wl@xen.org,
        anthony.perard@citrix.com, andrew.cooper3@citrix.com,
        jun.nakajima@intel.com, kevin.tian@intel.com
References: <1611182952-9941-1-git-send-email-boris.ostrovsky@oracle.com>
 <1611182952-9941-2-git-send-email-boris.ostrovsky@oracle.com>
 <YC5EitRCZB+VCeCC@Air-de-Roger>
 <a78a4b94-47cc-64c0-1b1f-8429665822b2@suse.com>
 <YC6NPcym62a0Nu0M@Air-de-Roger>
 <8ffd4f51-5fc6-349b-146f-e52c35c59b4d@suse.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <5b286dfd-278b-8675-cd88-3ee2706c06e1@oracle.com>
Date: Fri, 19 Feb 2021 09:50:12 -0500
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
In-Reply-To: <8ffd4f51-5fc6-349b-146f-e52c35c59b4d@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-US
X-Originating-IP: [138.3.200.49]
X-ClientProxiedBy: SA9PR03CA0024.namprd03.prod.outlook.com
 (2603:10b6:806:20::29) To BYAPR10MB3288.namprd10.prod.outlook.com
 (2603:10b6:a03:156::21)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: e1de6ba4-7170-4a8d-9914-08d8d4e5ac56
X-MS-TrafficTypeDiagnostic: SJ0PR10MB4717:
X-Microsoft-Antispam-PRVS: 
	<SJ0PR10MB471793DAF52E233B69B20F968A849@SJ0PR10MB4717.namprd10.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 
	A1yZ6Me+10rbT0asUAlkvUeq/6Yz7RI1i4O6qghSiItFPWsaaTSAYRq41SXgFGFDq61HyzkQ86oGzluIF7Vm7UD+hir72Mffhg4eHhUim6joaMyP/+1n+3nerVkBZLIxhg9F/gasrs9vAW++juZvSKHOnVXRgwSi7i8cbd/5HxH1kJj2aOlkC82UcSPJQyGKAFYRNZW2D6XKaTZSJszfATIlEwnhBPmcxB0NqjC6nbh8JY80NGoYK+as7luYxjcPnYs2nkrmMOgFUcP/od5xwmLkPFP4Qf0OlujvgauW1JXm5wHIy7MWulmlReG35g4KId44mOPUpFzM2DdVMCawQj7LkyYsY3Tt9twsXKnbdf6/ASONDCIWht13HsMAZI2QbMJrNrzdRB9M2eCZXxpNz/c5FQxsd7VXU9HLxWPG1jPjj6epNdkU9l2pBaxq7P9lpuFKTgfXokstuqiUyDQ5P5xCk3FPYamHRxJUUYapu9X81TnFSgJ+PonJSsFDHuwGtEYdIBuRWWwTAfxjCEYVmYj23PVAivwibJIMQQFMM2rfjpIxYBOE9MZeeyLzAdPi6jra9Les7RCdQZXhds9J9HM58RtvQ7qETgGQFw4NXAk=
X-Forefront-Antispam-Report: 
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR10MB3288.namprd10.prod.outlook.com;PTR:;CAT:NONE;SFS:(136003)(396003)(366004)(346002)(39860400002)(376002)(316002)(4326008)(8936002)(956004)(2616005)(16526019)(66946007)(66476007)(83380400001)(66556008)(6666004)(31686004)(26005)(86362001)(5660300002)(8676002)(186003)(31696002)(478600001)(110136005)(36756003)(6486002)(2906002)(44832011)(16576012)(53546011)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: 
	=?utf-8?B?cjd3N1pZQm14OHF6VEM1MTc0OVFDaDVFODdtZUVvaTZOR0pqMEpTMUVRcjUx?=
 =?utf-8?B?c1FSZTgwVFI3MmwwSUUwaW40a0VOb0NCdHEwSGZwaklld3RCZTdXL3VLdjFZ?=
 =?utf-8?B?bXZYZDFtQ1NOZWl2aXlnMGs2MmcwaHBwMWp0akF6Y050SDd3QTBibm5raHY1?=
 =?utf-8?B?NTY3Si9pdThxWE9zUHd3WjhOb0tLTm9CejE1UmV3dzF1bmxyMU8zMEpvVWhn?=
 =?utf-8?B?aC9nNzJTYkN1cCtSNEV0RG5aR0E0N3hTY29UNFJqUlkwTDh1MVJzd3N6Y1pq?=
 =?utf-8?B?ZmhnSkdMeEFXNXNNb0FCT1Bwbi81UEJ5aUw1cUphVVhndlNOV1JCWjhLVDg3?=
 =?utf-8?B?YmxKWEoxd3p6aTE4LzVWNUFXQmovRks5OTA4Vkk1SzhEdmViYmRyL2l2cWhL?=
 =?utf-8?B?RzZnS0svRXZHc1ovQ0lGdFZtR3ZueTRxZGIzYXdCRVVtZ2VrYjNwdzlBWXpz?=
 =?utf-8?B?WDVTNTlSQ3FXcnRmYUVzUWNYS0w4SlJRbWh4Vlo3ejJVNzNMV01FV0hxRlZu?=
 =?utf-8?B?Y3pkdXFpTG9hUEorMFJMWTA3NXFiZ3U3c2l0TnlqZTRQNnYwdTlxbmYvdG16?=
 =?utf-8?B?TTAxSUV3dldnNmdQekprY2tmRWhxMlZ1KzdIT2VMeWVyVWs2ZXR4UFZtY0pm?=
 =?utf-8?B?bU9rcnNROXBIZEs1Y25TczhPRng3QnVxNnFyMU1wNmwwcHBQZ0VFa3lUcHF2?=
 =?utf-8?B?V0w0ZGkxNytjUlVKZUZhdEtiUkZ0OVpuR2NCREVVUENGdkVGL3RoWFVrOStL?=
 =?utf-8?B?SERvRGF3SFhZVG5IUVFwdTRjVTRtY3haektUY0ZxaHlzU09EWHZPOWhLcURG?=
 =?utf-8?B?a29uNVJrK3BCSGZiYldCbEI5QUtyb3F2QXgzZFZ3NW4vQWNWc0pjbmErVUFn?=
 =?utf-8?B?L3pPNFJ2Q1FwYVVOWkhNaGprZ0ppa01ESjYwdWJmR1lCOCtGbE5TRWdsVS9l?=
 =?utf-8?B?bTI3dEhySkNOdWQzRFBYc2V6UjRIenBqYllodWlHK2pYWGo0YlZXclFEOUZZ?=
 =?utf-8?B?WlBma3dscFQ2OGJHbzV3YXcvU0dBOFZKT0tDYjdKcGVGMlJXaEdXelBHZXdS?=
 =?utf-8?B?V3VsZm5Qc1VCZEY0aHF2ZmFPK3FtQUtZajh5Mm90ekVFN29xbGVJVjVHbkZU?=
 =?utf-8?B?OWVzdGw2TjFDcUtOa0NZcEY3L2ZlOUVMSVg1azlaajZaNGVCVmIzMzFqWWtM?=
 =?utf-8?B?SW1QWjEzUDJxUmViSU5hUi96MHdITkgwR3Rmd0p5UjBLLzlscFNISXpZbWVa?=
 =?utf-8?B?WDRVMy8yZmFNK3ovMXRHRGV3S3RCQzM3b1hFLzhVUXpRZlVvclg3aFFBOGVK?=
 =?utf-8?B?d3A3RkZYNXQwN2NZakplNFRQNlJkbjdkUXVZNkhtZnB3Wkl5VGJSTHBORWVO?=
 =?utf-8?B?bEVZWEs5NGJXR3hPNmVFYXRFNEw5Z0ZwTzAyU1c0dXhoVFZEWGpVWDhPQTBr?=
 =?utf-8?B?T2Zhd0NEN01EUFNmZEd0WmRUS3lmNm5PMlBZblRWZmpjUHlwZGpocllMNWZQ?=
 =?utf-8?B?aUxRaWg3MXV2c2U4SXY2VVcyVkkrcGg4R0pWdDNsc3ozNzVYUXlvckhnTTJa?=
 =?utf-8?B?dExyTFZMZ1d0YkdxOXNKWjNEazlUUGpNbTQ3WWhIZS9FMXN0WVNQcjlxT1Er?=
 =?utf-8?B?NjdFK0ZhZENBWGMyaDFla2NiVXc3dXRwbmpDRDNDa09vcXBKNWYwME1vMjZ2?=
 =?utf-8?B?N3lxUDdXU2p3OWdiWlJiWlVDb243YzR4blJ3d20yaGhoNXNCdUhBblJHN2ZK?=
 =?utf-8?Q?iGCph9983YHiK6pieJ/WqrY1p7rHJ4iibSKs4mE?=
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e1de6ba4-7170-4a8d-9914-08d8d4e5ac56
X-MS-Exchange-CrossTenant-AuthSource: BYAPR10MB3288.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Feb 2021 14:50:17.8569
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: xAURUbRBreJ1IyqM2W6rAjaQ+8KhMYKns4CLFGrH9lV4HMgMARCXoxt+OVuRCohcbVBSa30OpUQALIDAKrmLr/lVmrKTCTvmDz1UXzoQaII=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR10MB4717
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9899 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 phishscore=0 mlxlogscore=999
 bulkscore=0 suspectscore=0 spamscore=0 malwarescore=0 mlxscore=0
 adultscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2102190120
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9899 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 bulkscore=0 suspectscore=0 mlxscore=0
 phishscore=0 spamscore=0 adultscore=0 clxscore=1015 impostorscore=0
 priorityscore=1501 lowpriorityscore=0 malwarescore=0 mlxlogscore=999
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2102190120


On 2/18/21 10:57 AM, Jan Beulich wrote:
> On 18.02.2021 16:52, Roger Pau Monné wrote:
>> On Thu, Feb 18, 2021 at 12:54:13PM +0100, Jan Beulich wrote:
>>> On 18.02.2021 11:42, Roger Pau Monné wrote:
>>>> Not that you need to implement the full thing now, but maybe we could
>>>> have something like:
>>>>
>>>> "
>>>> =item B<ignore_msrs=[ "MSR_RANGE", "MSR_RANGE", ..]>
>>>>
>>>> Specify a list of MSR ranges that will be ignored by the hypervisor:
>>>> reads will return zeros and writes will be discarded without raising a
>>>> #GP.
>>>>
>>>> Each MSR_RANGE is given in hexadecimal format and may be a range, e.g.
>>>> c00102f0-c00102f1 (inclusive), or a single MSR, e.g. c00102f1.
>>>> "
>>>>
>>>> Then you can print the messages in the hypervisor using a guest log
>>>> level and modify it on demand in order to get more verbose output?
>>> "Modify on demand"? Irrespective of what you mean with this, ...
>>>
>>>> I don't think selecting whether the messages are printed or not from
>>>> xl is that helpful as the same could be achieved using guest_loglvl.
>>> ... controlling this via guest_loglvl would affect various other
>>> log messages' visibility.
>> Right, but do we really need this level of per-guest log control,
>> implemented in this way exclusively for MSRs?


In a multi-tenant environment we may need to figure out why a particular guest is failing to boot, without affecting behavior of other guests.


If we had per-guest log level in general then what you are suggesting would be the right thing to do IMO. Maybe that's what we should add?


-boris


>>
>> We don't have a way for other parts of the code to have such
>> fine-grained control about what messages should be printed, and I
>> don't think MSR should be an exception. I assume this would be used to
>> detect which MSRs a guest is trying to access, and I would be fine
>> just using guest_loglvl to that end, especially since it can now be
>> modified at run time.
> I can certainly see your point. The problem is that for guests
> heavily accessing such MSRs, all other messages may disappear
> due to rate limiting. That's not going to be helpful.





From xen-devel-bounces@lists.xenproject.org Fri Feb 19 14:56:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 14:56:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86908.163469 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD7D3-0005Gd-3Q; Fri, 19 Feb 2021 14:56:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86908.163469; Fri, 19 Feb 2021 14:56:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD7D3-0005GW-04; Fri, 19 Feb 2021 14:56:45 +0000
Received: by outflank-mailman (input) for mailman id 86908;
 Fri, 19 Feb 2021 14:56:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rPBP=HV=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1lD7D1-0005GR-Ci
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 14:56:43 +0000
Received: from userp2130.oracle.com (unknown [156.151.31.86])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ee634ed7-9bdd-4ad8-8eb7-60b73af9827c;
 Fri, 19 Feb 2021 14:56:42 +0000 (UTC)
Received: from pps.filterd (userp2130.oracle.com [127.0.0.1])
 by userp2130.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 11JEt2uu037831;
 Fri, 19 Feb 2021 14:56:40 GMT
Received: from userp3020.oracle.com (userp3020.oracle.com [156.151.31.79])
 by userp2130.oracle.com with ESMTP id 36p66r9sd5-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 19 Feb 2021 14:56:39 +0000
Received: from pps.filterd (userp3020.oracle.com [127.0.0.1])
 by userp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 11JEtlTH009114;
 Fri, 19 Feb 2021 14:56:39 GMT
Received: from nam12-mw2-obe.outbound.protection.outlook.com
 (mail-mw2nam12lp2048.outbound.protection.outlook.com [104.47.66.48])
 by userp3020.oracle.com with ESMTP id 36prhvqwnt-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 19 Feb 2021 14:56:39 +0000
Received: from BYAPR10MB3288.namprd10.prod.outlook.com (2603:10b6:a03:156::21)
 by BYAPR10MB3654.namprd10.prod.outlook.com (2603:10b6:a03:123::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3846.25; Fri, 19 Feb
 2021 14:56:37 +0000
Received: from BYAPR10MB3288.namprd10.prod.outlook.com
 ([fe80::f489:4e25:63e0:c721]) by BYAPR10MB3288.namprd10.prod.outlook.com
 ([fe80::f489:4e25:63e0:c721%7]) with mapi id 15.20.3868.029; Fri, 19 Feb 2021
 14:56:37 +0000
Received: from [10.74.102.113] (138.3.200.49) by
 BYAPR06CA0065.namprd06.prod.outlook.com (2603:10b6:a03:14b::42) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3868.27 via Frontend
 Transport; Fri, 19 Feb 2021 14:56:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ee634ed7-9bdd-4ad8-8eb7-60b73af9827c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : in-reply-to : content-type :
 content-transfer-encoding : mime-version; s=corp-2020-01-29;
 bh=bRxSbWplBavRs1LQ1gzX3mYcuKe7una/4Vqo3LzU7p0=;
 b=Qj3nhrhI69EsznbQ585tQ984C1xGOc1mGuXFxNkNm0Y5Cc4bYxPnn+6rvYNBgw8YYT3L
 KokV+2iqObSLYiCRuTuOS79+cUj76Si0NAnHZY6Dw09ZwI5U83NI8k33Go4sejuUURST
 dlcXgsHDXleYr+pYmecZfoNCAlY0IbnbXYJPkn6ZlNiEWdcHFZvDYkeF1LgOEhkQSo7M
 TRcEmJergrxojdrif5atLGi5AH/hxzybbRJ+q62qzMUhe0pIKhpwn5FYEtzl1XjQFgDK
 xKT5Nag9kWZvt5IMOBVadKJgzcunehTXdtHOKYxuyRVfflY5xGHGrGVZM4LekCgRadu+ 0A== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=C6qprb7UXLoQkqSLLU/7JdgOKi/RKegWj7v+YnD465LNgBS+CvQKWuiLj6+t5DXOO4IfS7YGkH4wCVbb8TPTRlPt0+XZ0MDeBUoLUEZs6wuJ1lVDjIPIcypmJJ6wtYc0/0/sgLyWRzfODdAK1VVXR7newOfyV9C5hiW3I5EVjiqQFmACL8ejS9HB8ipPjxSUkN49iDfTxofzzO8f612ulJ9uOhC13oKQBx1pYXEpm/q9e5fC4N5M8xpHR2cQSTjvbHjN2pqAOUYOEK2kUGPZozyKGaBIaajKZ1zerTekouqdgxoRBJA3KpAdOCXcXAftuYWWhEXx9u4bCw9Q5GKmkA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=bRxSbWplBavRs1LQ1gzX3mYcuKe7una/4Vqo3LzU7p0=;
 b=Xzg7ubfP0IDg8rQxMz+fIOPkLQPc0scxX9DnmkbPFq/aQYV12BAkLZjPi9zyhH5OClQZ+CYsA76a5V/lrf0UIOLXc7fzsKJ12syfuvdSAHMOlv3tLmhdtdD3kvmBc6eTQKLqSw9bBL7jypvvRqg9fH9oRYK4xHE7bgzkbOIi+uIWLVEETKdSFCpxE2Eur9vxVkgYA8UyS/kcxkt2RXs3JioQL7juKuyGjIyFVC4FCsOOvFzfoYa7x2Odn8QyPWreN7KBT0zkR87SQLFoYPJo1EFcRTDnrmgmXVBNaE7IE/QwihVwi+44u74FTKN6asMxgTjQ61VwJ+nweQIidm0CrA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=bRxSbWplBavRs1LQ1gzX3mYcuKe7una/4Vqo3LzU7p0=;
 b=Uw1ptUWt2zS3YZ3p9lsQVvDTrHPYxLmfWdgzpS1VKeAVcOQMXKkqkqV9LYHd1Cl78p69hY02ydPm2apYeeva6loYFiv7CgHsWuPTSoQeTWCw6jrpmnUZcdI319L/+7iAg/YUM7jZWlXvatioLxZczdf56+DkbsUAhI3exjWeM6Q=
Authentication-Results: intel.com; dkim=none (message not signed)
 header.d=none;intel.com; dmarc=none action=none header.from=oracle.com;
Subject: Re: [PATCH v2 2/4] x86: Introduce MSR_UNHANDLED
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, iwj@xenproject.org, wl@xen.org,
        anthony.perard@citrix.com, jbeulich@suse.com,
        andrew.cooper3@citrix.com, jun.nakajima@intel.com,
        kevin.tian@intel.com
References: <1611182952-9941-1-git-send-email-boris.ostrovsky@oracle.com>
 <1611182952-9941-3-git-send-email-boris.ostrovsky@oracle.com>
 <YC5GrgqwsR/eBwpy@Air-de-Roger>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <4e585daa-f255-fbff-d1cf-38ef49f146f5@oracle.com>
Date: Fri, 19 Feb 2021 09:56:32 -0500
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
In-Reply-To: <YC5GrgqwsR/eBwpy@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-US
X-Originating-IP: [138.3.200.49]
X-ClientProxiedBy: BYAPR06CA0065.namprd06.prod.outlook.com
 (2603:10b6:a03:14b::42) To BYAPR10MB3288.namprd10.prod.outlook.com
 (2603:10b6:a03:156::21)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: d6029434-4c19-49db-8724-08d8d4e68e87
X-MS-TrafficTypeDiagnostic: BYAPR10MB3654:
X-Microsoft-Antispam-PRVS: 
	<BYAPR10MB36542292772E13AC4295E4B58A849@BYAPR10MB3654.namprd10.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:5797;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 
	zikiqoz02OsF5q6jmVWNx3g83h/oQVt3Z903rC03TJMW1D4w+cxzR35R5a64zvJlQe+XslAHqnw/WPRx9CVam+4nqQ2NmaF+VNrdgPACsfWmOeE+UPeKrhSD8wo5vJaqPjYO5GYw9/nsFzaJAr5g7MFp2V4woGlXNzBT68wPPSF9FkzYGTT4XLn7DDTyim1oOl3FNCUle0ogv5fc6NdKqyYuOHY+28WEZELg4OTqLNuTE2DDDg+tmdy8BJlGQWPR9Aak9mo9cWbP48coTiLX5TBBoKLrfQBdZxt7zoAzOab+zMU8Cp6/z5iYzCnoMxJBY9AIAICbJLQYaRfdIAaNQhqrVi+d+8FnrODKtX9+5N+67XzpkFAJzi44mDuXB3Mwf2sVbJnrJoHrvwh7+yF0GVOmEtoM6GeNhAAHDhcJ/KqxjzE7PimYl3Sg7WnpWczbWivmXY494De+UH2o94I9gwVJJPXASnN0OrEEOhBuzYIqq+d7Q2DNZ83S8a8rt8K1prC8sukbabpXDeSwZI9MxjHG1X9ZM4qdU8uJc6dDLhavEJW4DBl5MT48p8RD43iEr/QohbAXykCQ096UBHiOTrKBo4qYWLkTFi/ZkNYgT1s=
X-Forefront-Antispam-Report: 
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR10MB3288.namprd10.prod.outlook.com;PTR:;CAT:NONE;SFS:(366004)(396003)(136003)(376002)(39860400002)(346002)(66476007)(83380400001)(66556008)(31686004)(36756003)(5660300002)(186003)(6486002)(4326008)(53546011)(4744005)(6666004)(44832011)(2616005)(478600001)(8676002)(2906002)(956004)(26005)(316002)(86362001)(16526019)(8936002)(31696002)(66946007)(16576012)(6916009)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: 
	=?utf-8?B?QTR2MmRHNnJuSGJ2ZWFZNGZsYVhFYmE2YmhQTUFiUkFGRXhndUlLRU9JUDlX?=
 =?utf-8?B?WHFSWmNPWTk5K3kwWDRHVC92OW9pcjVWdmtXb2VhWG1xM3ZFREhoejh6RDRt?=
 =?utf-8?B?SHdWZEFheTlIYjRMMFdIWjF1Z2VOclhTQjlEMTFpMUxseGJqTm5aQU1lVVRO?=
 =?utf-8?B?QUhiQ1hnOFF1MzFFS0FFRzd4R0YzVG1aN2hpbG8wdjgyZWpTZHE0dmNKWFR1?=
 =?utf-8?B?aEJ6Q3NsMXFGaWp5b0d0UU1FWG9qY0JDWDJsNE1CTFg3TlB0ejIzN3BXeUFh?=
 =?utf-8?B?dUtLbFlrS2c1Ly93a05CT0FzTytjcFRkSklQM1ErOGs1Q2RHMGNwLzY0b3hs?=
 =?utf-8?B?QndBSnREaWR5aGp5Wk55Q2NkUW1QN2U0VnFsUkttRlY1YTNoL3MwMEZSV28x?=
 =?utf-8?B?SFpBKzhXcXpyWXZ3MTZkQmNuamZYT3c0YlJSaTZzWWJ1MXdLVFJwd2VBMGMx?=
 =?utf-8?B?Q0lPWjQxcTRWUW10UDM4Q09rcURlcVZRZWFlN2pIZm9OT0VaTFVMRTVSbGVW?=
 =?utf-8?B?K3BlbU9mUHhEVUpzeW4yVWwwQ1VEb0drZGhMMWRkYkx3SGVldmh3ZGUrRVd5?=
 =?utf-8?B?ZG5XZmtUNlRUcitodTQwS2FCVjNaVHV3TTFFUS8reEk5eUV1N1daVDdnMExT?=
 =?utf-8?B?WFJLYjdoRVlobjl5bUdobzhiUDh2MktwWUpEMGxuT3RmN1BuVkV6LzE4REJH?=
 =?utf-8?B?Zm55YkVaL2NpQUdvSmpFNmhHd0VVdmh5MThvRnVYL0hqekFtcXFCeWJ3RkZs?=
 =?utf-8?B?T2pkeXdYMFFTR1lYZzZOVGo0eVVVN1hkbjhSWTlzSms2SkhraUNWNk5yRkNN?=
 =?utf-8?B?alpFS2Q2R1VPbWJzYjA4T3k0a0FaaXNnUUxiK2REN1dGT3NnaFdSRzRKUGor?=
 =?utf-8?B?UUxEaktuYUVyOUYyKzVYdVVsc3J6VVpmbStqY3dZMHV2RkVXUU82S29pNzBy?=
 =?utf-8?B?U21PWnF0OVVucmNSeTNnN2FJaXJJaVdOY01UYjJUblhDLzFScFNnZWtDRzhh?=
 =?utf-8?B?RGtnSkRlbjJLYkV4ZzRFa3VOeWdZc3UwWGM3RVZMQ2QyYXpuRGNzRG12Y2h4?=
 =?utf-8?B?Vmg1T0NTYi9DTUV3WVNpUzltQldiUHZjYkxTRzhrSXg3d0RLS3hyMHhJZ2Rk?=
 =?utf-8?B?TGVwWXBSOVZLZjFDL2F6bUNoMEExdEZETFVCZEVWaUc4eHJZTFZoTGw3Q3JM?=
 =?utf-8?B?SldCYjZoemR5MU1MbS96cHNtT3NHVkkra0hiak5OZXFCSDhGUmw5Y2lBV2Jq?=
 =?utf-8?B?QVlac21OcnIyL00yQzY2OXpWMnRGOERsdUFQR241ZmRvcmJjRW5sV3M5YTdE?=
 =?utf-8?B?K2p5SVlMRTI2M0VXQis3dGRuMk1MdXJEMVp4NVFXeE95Wk5QQXVLaGVqY0pS?=
 =?utf-8?B?MVF1cmx4MUJSd0N0YWIvWVJyemhpU3ZRV080V21uelRDYXFsTnhZMEN0dG5h?=
 =?utf-8?B?YzNZcjJHOC9sVnBvM01IY0NPTG1tYUJPZXYvajNnNUZVRXNkeDdJTEJSQnYw?=
 =?utf-8?B?TWUrMEFxWDJQdi91b2EyM1hQWEg0SEJLV2s4QWJhc09DWXJrRjlxdUtFOTFR?=
 =?utf-8?B?TUI3cU5qeEpUUWVFNWo3ODJRYWh5OVNJeTZCbVcyMHB5Yi9GdDZTR1JFa3o0?=
 =?utf-8?B?RTFjcHRTNHMyaUVHM0FhMVk5MUtvbUVWSHFOelJqSXFadzhPdGYzRFpEN3Zo?=
 =?utf-8?B?WllPSFpYdHZHNkNyc2tTdzd0THVLYllXWnVQcGtLZEMydmtlN0U3QnZLSVEw?=
 =?utf-8?Q?seiQx9pmzFVZdirYV6zm0d/WXB/4xiL52nQFzHt?=
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d6029434-4c19-49db-8724-08d8d4e68e87
X-MS-Exchange-CrossTenant-AuthSource: BYAPR10MB3288.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Feb 2021 14:56:37.3367
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: WRyKB4SJVqeW75lPIkDk2ROM0Qv2s54fQwkmtmsbZhJG1abD8BXzdn9jF4r44ojNvnp3xo9jWKsxbFk1tk90QzEgWr1RbTnAHmcV0klNPCI=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR10MB3654
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9899 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxlogscore=999 adultscore=0 mlxscore=0
 bulkscore=0 suspectscore=0 malwarescore=0 spamscore=0 phishscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2102190121
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9899 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 lowpriorityscore=0 suspectscore=0
 impostorscore=0 priorityscore=1501 clxscore=1015 spamscore=0 mlxscore=0
 phishscore=0 malwarescore=0 bulkscore=0 adultscore=0 mlxlogscore=999
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2102190121


On 2/18/21 5:51 AM, Roger Pau Monné wrote:
> On Wed, Jan 20, 2021 at 05:49:10PM -0500, Boris Ostrovsky wrote:
>> When toolstack updates MSR policy, this MSR offset (which is the last
>> index in the hypervisor MSR range) is used to indicate hypervisor
>> behavior when guest accesses an MSR which is not explicitly emulated.
> It's kind of weird to use an MSR to store this. I assume this is done
> for migration reasons?


Not really. It just seemed to me that MSR policy is the logical place to do that. Because it *is* a policy of how to deal with such accesses, isn't it?


> Isn't is possible to convey this data in the xl migration stream
> instead of having to pack it with MSRs?


I haven't looked at this but again --- the feature itself has nothing to do with migration. The fact that folding it into policy makes migration of this information "just work" is just a nice side benefit of this implementation.


-boris





From xen-devel-bounces@lists.xenproject.org Fri Feb 19 14:57:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 14:57:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86910.163481 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD7Du-0005Mc-DR; Fri, 19 Feb 2021 14:57:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86910.163481; Fri, 19 Feb 2021 14:57:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD7Du-0005MV-AI; Fri, 19 Feb 2021 14:57:38 +0000
Received: by outflank-mailman (input) for mailman id 86910;
 Fri, 19 Feb 2021 14:57:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rPBP=HV=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1lD7Ds-0005MJ-Fz
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 14:57:36 +0000
Received: from userp2120.oracle.com (unknown [156.151.31.85])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4304c511-87e4-4737-a6b1-2e42ccef9ead;
 Fri, 19 Feb 2021 14:57:35 +0000 (UTC)
Received: from pps.filterd (userp2120.oracle.com [127.0.0.1])
 by userp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 11JEt72b050821;
 Fri, 19 Feb 2021 14:57:31 GMT
Received: from userp3030.oracle.com (userp3030.oracle.com [156.151.31.80])
 by userp2120.oracle.com with ESMTP id 36p7dnsp06-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 19 Feb 2021 14:57:31 +0000
Received: from pps.filterd (userp3030.oracle.com [127.0.0.1])
 by userp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 11JEuF56176290;
 Fri, 19 Feb 2021 14:57:31 GMT
Received: from nam12-mw2-obe.outbound.protection.outlook.com
 (mail-mw2nam12lp2046.outbound.protection.outlook.com [104.47.66.46])
 by userp3030.oracle.com with ESMTP id 36prq1ypxy-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 19 Feb 2021 14:57:31 +0000
Received: from BYAPR10MB3288.namprd10.prod.outlook.com (2603:10b6:a03:156::21)
 by BYAPR10MB3654.namprd10.prod.outlook.com (2603:10b6:a03:123::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3846.25; Fri, 19 Feb
 2021 14:57:28 +0000
Received: from BYAPR10MB3288.namprd10.prod.outlook.com
 ([fe80::f489:4e25:63e0:c721]) by BYAPR10MB3288.namprd10.prod.outlook.com
 ([fe80::f489:4e25:63e0:c721%7]) with mapi id 15.20.3868.029; Fri, 19 Feb 2021
 14:57:28 +0000
Received: from [10.74.102.113] (138.3.200.49) by
 BYAPR06CA0037.namprd06.prod.outlook.com (2603:10b6:a03:14b::14) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3868.27 via Frontend
 Transport; Fri, 19 Feb 2021 14:57:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4304c511-87e4-4737-a6b1-2e42ccef9ead
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : in-reply-to : content-type :
 content-transfer-encoding : mime-version; s=corp-2020-01-29;
 bh=cdkCO4q8WDY5V2H9ToXTzu8WfAE5LQu8USvb4eEJaFo=;
 b=kkial8LvkvIfrrDAcETElmu2k2XipGLT+ukdJ3lJkORJNDMfTR5ioo6CEJBR9O6Db7qO
 crukLskcMYG4qTSejXCTSZsBOc7b5BH5Zn8Yq768j8ZcFrx7QhM69dZbTYVNNdo0jNsc
 69X795kmQbLG+rs7shgNtXb9xpcvD00ZdWLbYw+weIqsHpVJbXelSmqcdKN+xv1sTmeu
 1QRzcN6Rh96jh81yoxpW+jJF7y6K/wo2P+fBVUJzUctbAo8TCBycnDB54v1RvZRi0lFv
 Ws1TP4FTh2PzKv4lwEyeZdhK2CexJSbmq1CXvPEC5ZuWtjIPhJR7iqj/PL8eNQsRU2B2 iQ== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fI4FWdURaF9Xwi9s/sVAtlSb42wqBnogNxAfDNmUbtHGarHGN4tNrK7uYu58HejjAyF46oh5oJ+ts8Jf7EEo/m0uYX7mn6gY9Cx1z/jOtfA/CfnPE9otpFzL8pfw9DhhDqMJQztPIdkRDg0uaWwnxDB+/dBHAQNatolH9inV/M4eZWkJp0ICHjQtl/frDnXJdrqNjaVa7badyUrzD2jaPsmPPXDZodEUp6DiKGOKwi9HcC1ek9c0GSZBHPKee8g89uAD3P8b6YLZZeTrzbOQuRYJoZ24DPrW0QUe0QT4WR0W638cbtN6Hi7/ELH30f7/oLKPD7F7qsGPh+J6FDu7XQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=cdkCO4q8WDY5V2H9ToXTzu8WfAE5LQu8USvb4eEJaFo=;
 b=lxWnZpFL0rBEoU1clsYcRZ5Uhjs62OiCqoudVm2NKSCBWiP9WS4pffawHDMnSCuFQuzUBWvDGDYUHsI12qh/gfO2TXcxYyP1K82+DOCjn1GtKrJIH/Gkr50wEV4AbBUa+QP8rQXWUJeGKy1MbKRvm30Dx6vGOVwaFCi1ylVSMLFhWQOqsCY5EylBOIDcWpclKlH5Hi4XEPKQBU3gWbL5Ec5iHCSRTDLEMKWIMdZpfkEP5ao4PT1dR4AFHC3jZP+dEFdcZK06CwYU/57/fuUpwmZMJTFMp5zHcaBbNq6M2vlFc/vuYEQHJDzjyf0ApK5bqyreIEXz5IZUjrWcP4wRXg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=cdkCO4q8WDY5V2H9ToXTzu8WfAE5LQu8USvb4eEJaFo=;
 b=fNIJRJYXH14LpIq2vjPMUowi00jWT2B4nDXndsXm75RqE1UEXxxoxnRBvmO5xaLilfFjo5ELV5Dy23hGTlXkV/x9waIp0geYhLjIlUeJZRgIw+ChCHw8g4XPkO9BKfCC2ItzK8e06o5ppC+KuLhkW/Qlg7WG4xISFeeTfSBWn80=
Authentication-Results: intel.com; dkim=none (message not signed)
 header.d=none;intel.com; dmarc=none action=none header.from=oracle.com;
Subject: Re: [PATCH v2 4/4] tools/libs: Apply MSR policy to a guest
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, iwj@xenproject.org, wl@xen.org,
        anthony.perard@citrix.com, jbeulich@suse.com,
        andrew.cooper3@citrix.com, jun.nakajima@intel.com,
        kevin.tian@intel.com
References: <1611182952-9941-1-git-send-email-boris.ostrovsky@oracle.com>
 <1611182952-9941-5-git-send-email-boris.ostrovsky@oracle.com>
 <YC5UEzGwoqmLvh0n@Air-de-Roger>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <8138cad1-248d-c5d5-4fe6-9c30016cfeae@oracle.com>
Date: Fri, 19 Feb 2021 09:57:24 -0500
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
In-Reply-To: <YC5UEzGwoqmLvh0n@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-US
X-Originating-IP: [138.3.200.49]
X-ClientProxiedBy: BYAPR06CA0037.namprd06.prod.outlook.com
 (2603:10b6:a03:14b::14) To BYAPR10MB3288.namprd10.prod.outlook.com
 (2603:10b6:a03:156::21)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 7da2df8b-0129-4ed3-fc1c-08d8d4e6ac4b
X-MS-TrafficTypeDiagnostic: BYAPR10MB3654:
X-Microsoft-Antispam-PRVS: 
	<BYAPR10MB36541522D9D1279DD94FE3F08A849@BYAPR10MB3654.namprd10.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:1824;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 7da2df8b-0129-4ed3-fc1c-08d8d4e6ac4b
X-MS-Exchange-CrossTenant-AuthSource: BYAPR10MB3288.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Feb 2021 14:57:28.0337
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: i02c3iVO0Zy+JioJTyl3moUQaVQk7+Th4aAAlP/QiZ/vQpp0Ph4XRpP164PVS8gfloKHQxzx+SFIMfMp3N5d6Cb6cRemdDr61oOHCFu80mQ=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR10MB3654
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9899 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 bulkscore=0 mlxlogscore=904
 phishscore=0 adultscore=0 mlxscore=0 suspectscore=0 malwarescore=0
 spamscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2102190121
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9899 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 bulkscore=0 suspectscore=0 mlxscore=0
 phishscore=0 spamscore=0 adultscore=0 clxscore=1015 impostorscore=0
 priorityscore=1501 lowpriorityscore=0 malwarescore=0 mlxlogscore=999
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2102190121


On 2/18/21 6:48 AM, Roger Pau Monné wrote:
>
>> +    /* Get the domain's default policy. */
>> +    nr_leaves = 0;
>> +    rc = xc_get_system_cpu_policy(xch, di.hvm ? XEN_SYSCTL_cpu_policy_hvm_default
>> +                                              : XEN_SYSCTL_cpu_policy_pv_default,
>> +                                  &nr_leaves, NULL, &nr_msrs, msrs);
>> +    if ( rc )
>> +    {
>> +        PERROR("Failed to obtain %s default policy", di.hvm ? "hvm" : "pv");
>> +        rc = -errno;
>> +        goto out;
>> +    }
> Why not use xc_get_domain_cpu_policy instead so that you can avoid the
> call to xc_domain_getinfo?


Yes, indeed.


-boris


>
> It would also seem safer, as you won't be discarding any adjustments
> made to the default policy by the hypervisor for this specific domain.
>
> Thanks, Roger.
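
For reference, Roger's suggestion amounts to replacing the system-policy fetch (and the preceding xc_domain_getinfo() call used to pick hvm vs pv) with a single per-domain query. A minimal sketch only, reusing the variable names from the quoted hunk and assuming the xc_get_domain_cpu_policy() prototype from this series; not a drop-in replacement:

```c
/* Sketch: fetch the domain's own policy directly, so no prior
 * xc_domain_getinfo() is needed and any hypervisor adjustments
 * to this domain's policy are preserved. */
nr_leaves = 0;
rc = xc_get_domain_cpu_policy(xch, domid, &nr_leaves, NULL,
                              &nr_msrs, msrs);
if ( rc )
{
    PERROR("Failed to obtain d%u's policy", domid);
    rc = -errno;
    goto out;
}
```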


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 15:01:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 15:01:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86915.163497 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD7Hr-0006YT-5R; Fri, 19 Feb 2021 15:01:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86915.163497; Fri, 19 Feb 2021 15:01:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD7Hr-0006YM-0Q; Fri, 19 Feb 2021 15:01:43 +0000
Received: by outflank-mailman (input) for mailman id 86915;
 Fri, 19 Feb 2021 15:01:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Kvll=HV=invisiblethingslab.com=marmarek@srs-us1.protection.inumbo.net>)
 id 1lD7Hq-0006YH-IS
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 15:01:42 +0000
Received: from wout3-smtp.messagingengine.com (unknown [64.147.123.19])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a22a2a18-20dc-4bae-8908-e5ff7b00e800;
 Fri, 19 Feb 2021 15:01:41 +0000 (UTC)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailout.west.internal (Postfix) with ESMTP id 95582A3E;
 Fri, 19 Feb 2021 10:01:39 -0500 (EST)
Received: from mailfrontend2 ([10.202.2.163])
 by compute3.internal (MEProxy); Fri, 19 Feb 2021 10:01:40 -0500
Received: from mail-itl (ip5b434f04.dynamic.kabel-deutschland.de [91.67.79.4])
 by mail.messagingengine.com (Postfix) with ESMTPA id 31DC6108006B;
 Fri, 19 Feb 2021 10:01:37 -0500 (EST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a22a2a18-20dc-4bae-8908-e5ff7b00e800
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:content-type:date:from:in-reply-to
	:message-id:mime-version:references:subject:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm2; bh=+FpTDQ
	EfJBhWk/fzo6gHGJpB659VTaavb2b6nZ7KaN8=; b=nPUmiRf4+tPkiMbhejTjkS
	ou2WbYuj+JJ9kxs1sbyxj4TnQbSgBqR1tEwdET6DK6Wvr1+Whta/9GFxxg2OJWlS
	ffvRYuyw+qocAqSu5k3kF7VgN/yfiCPdVboDYnu8xYPCbWo9CHYUMXxmYbPAFOB8
	0cu8G7Mph+gaGQdJSPzh09J8svJVSHXu6wggUkbKBOP95w8GxGUmz12HcRm+TT0+
	ep5RpsSQUJ1i2OqoHMX8AlWVaJxNknItMjFFzneuLQL9o2/Z3cD69O6Ovw1DyOsf
	Kj4LsvUqT7xnO2egw03HWtI0XMl9pvTJURxwoNJrmd55yS/ZNCT9dI/7PL+EhZuQ
	==
Date: Fri, 19 Feb 2021 16:01:32 +0100
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Juergen Gross <jgross@suse.com>, Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH-for-4.15] tools/libs/light: fix xl save -c handling
Message-ID: <YC/SzYyDP7e6830X@mail-itl>
References: <20210219141337.6934-1-jgross@suse.com>
 <1c02c3af-0a9b-6c68-e110-9d0963275e17@suse.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="kP/HQT7mM1GlbVAs"
Content-Disposition: inline
In-Reply-To: <1c02c3af-0a9b-6c68-e110-9d0963275e17@suse.com>


--kP/HQT7mM1GlbVAs
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Fri, 19 Feb 2021 16:01:32 +0100
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Juergen Gross <jgross@suse.com>, Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH-for-4.15] tools/libs/light: fix xl save -c handling

On Fri, Feb 19, 2021 at 03:15:52PM +0100, Jan Beulich wrote:
> On 19.02.2021 15:13, Juergen Gross wrote:
> > libxl_domain_resume() won't work correctly for the case it was called
> > due to a "xl save -c" command, i.e. to continue the suspended domain.
> >
> > The information to do that is not saved in libxl__dm_resume_state for
> > non-HVM domains.
> >
> > Fixes: 6298f0eb8f443 ("libxl: Re-introduce libxl__domain_resume")
> > Reported-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> > Signed-off-by: Juergen Gross <jgross@suse.com>
>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Works with both xl save and libvirt now.

Tested-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Thanks!

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

--kP/HQT7mM1GlbVAs
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmAv0s4ACgkQ24/THMrX
1yyPDQf/RByCAYvl8/HxXGBaVwa6BocOwML2P3gW+MNEYZcEjojZmZHtX+3cii+g
sLSqlW5Pg7cuRhaNRM2x9UhEubpMhJM7hbgWrudvMPaIecp7poEH2gC7gImQfrdQ
SIzUwv889m313lou6qGii4P4rG5CzxxoPZMER8ddaerkaFIe+/cET/sEw1W/A1WY
IGOc1cz+mpg3fbePSSVvwmLQRzUyA7LvvofNZPF1V6SAvJ9Snx0rp67m6Z5MpumB
gXG4qpBWsQAKMUfsjbBN5qxVZO85jdmnur4WPCsnaalfSgokTPZ9ALhJuFbHQagw
5+9bOPbwjlb41QxIC0hpwu8oYD5Qcg==
=8xMO
-----END PGP SIGNATURE-----

--kP/HQT7mM1GlbVAs--


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 15:04:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 15:04:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86917.163509 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD7Kv-0006gV-Jm; Fri, 19 Feb 2021 15:04:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86917.163509; Fri, 19 Feb 2021 15:04:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD7Kv-0006gO-FK; Fri, 19 Feb 2021 15:04:53 +0000
Received: by outflank-mailman (input) for mailman id 86917;
 Fri, 19 Feb 2021 15:04:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6Nbw=HV=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lD7Ku-0006gF-4T
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 15:04:52 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 90bb71f3-8dd2-490f-91cd-f1e7aa094536;
 Fri, 19 Feb 2021 15:04:51 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 90bb71f3-8dd2-490f-91cd-f1e7aa094536
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613747091;
  h=from:to:cc:subject:date:message-id:mime-version;
  bh=qL6LlSUT3VuR8v4v64EDX2BiNYj+Drd+LyVkTuFvHc0=;
  b=RgF0x83gY6CCdtuHdGFq8wPX4aTeGh6k7CK9/ZzYEe7Pp9bYmi4Qjp0G
   70Smuh2hh59BQWnAtSuBr2HX6kVUChOMZ+ZaxOVNKfyZjo6xvqq8gLgkk
   i/+zekL99L13atta3EBFD83oO6UVnzzMUDnCm4WnIKpJKQlASar+aJTA+
   w=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 37809213
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,189,1610427600"; 
   d="scan'208";a="37809213"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Anthony PERARD
	<anthony.perard@citrix.com>
Subject: [PATCH v2 for-4.15] tools/libxl: Work around uninitialised variable in libxl__domain_get_device_model_uid()
Date: Fri, 19 Feb 2021 15:04:26 +0000
Message-ID: <20210219150426.8498-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain

Various versions of gcc, when compiling with -Og, complain:

  libxl_dm.c: In function 'libxl__domain_get_device_model_uid':
  libxl_dm.c:256:12: error: 'kill_by_uid' may be used uninitialized in this function [-Werror=maybe-uninitialized]
    256 |         if (kill_by_uid)
        |            ^

The logic is very tangled.  Set kill_by_uid on every path.

No functional change.

Requested-by: Ian Jackson <iwj@xenproject.org>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Not-acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Ian Jackson <iwj@xenproject.org>
CC: Wei Liu <wl@xen.org>
CC: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/libs/light/libxl_dm.c | 22 ++++++++++++++++++----
 1 file changed, 18 insertions(+), 4 deletions(-)

diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
index db4cec6a76..5309496c58 100644
--- a/tools/libs/light/libxl_dm.c
+++ b/tools/libs/light/libxl_dm.c
@@ -152,13 +152,16 @@ static int libxl__domain_get_device_model_uid(libxl__gc *gc,
     user = b_info->device_model_user;
     if (user) {
         rc = userlookup_helper_getpwnam(gc, user, &user_pwbuf, &user_base);
-        if (rc)
+        if (rc) {
+            kill_by_uid = false;
             goto out;
+        }
 
         if (!user_base) {
             LOGD(ERROR, guest_domid, "Couldn't find device_model_user %s",
                  user);
             rc = ERROR_INVAL;
+            kill_by_uid = false;
             goto out;
         }
 
@@ -187,22 +190,29 @@ static int libxl__domain_get_device_model_uid(libxl__gc *gc,
      */
     rc = userlookup_helper_getpwnam(gc, LIBXL_QEMU_USER_RANGE_BASE,
                                          &user_pwbuf, &user_base);
-    if (rc)
+    if (rc) {
+        kill_by_uid = false;
         goto out;
+    }
+
     if (user_base) {
         struct passwd *user_clash, user_clash_pwbuf;
 
         intended_uid = user_base->pw_uid + guest_domid;
         rc = userlookup_helper_getpwuid(gc, intended_uid,
                                          &user_clash_pwbuf, &user_clash);
-        if (rc)
+        if (rc) {
+            kill_by_uid = false;
             goto out;
+        }
+
         if (user_clash) {
             LOGD(ERROR, guest_domid,
                  "wanted to use uid %ld (%s + %d) but that is user %s !",
                  (long)intended_uid, LIBXL_QEMU_USER_RANGE_BASE,
                  guest_domid, user_clash->pw_name);
             rc = ERROR_INVAL;
+            kill_by_uid = false;
             goto out;
         }
 
@@ -221,8 +231,11 @@ static int libxl__domain_get_device_model_uid(libxl__gc *gc,
      */
     user = LIBXL_QEMU_USER_SHARED;
     rc = userlookup_helper_getpwnam(gc, user, &user_pwbuf, &user_base);
-    if (rc)
+    if (rc) {
+        kill_by_uid = false;
         goto out;
+    }
+
     if (user_base) {
         LOGD(WARN, guest_domid, "Could not find user %s, falling back to %s",
              LIBXL_QEMU_USER_RANGE_BASE, LIBXL_QEMU_USER_SHARED);
@@ -240,6 +253,7 @@ static int libxl__domain_get_device_model_uid(libxl__gc *gc,
          "Could not find user %s or range base pseudo-user %s, cannot restrict",
          LIBXL_QEMU_USER_SHARED, LIBXL_QEMU_USER_RANGE_BASE);
     rc = ERROR_INVAL;
+    kill_by_uid = false;
 
 out:
     /* First, do a root check if appropriate */
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Feb 19 15:22:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 15:22:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86923.163524 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD7bn-0000FK-2M; Fri, 19 Feb 2021 15:22:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86923.163524; Fri, 19 Feb 2021 15:22:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD7bm-0000FD-VW; Fri, 19 Feb 2021 15:22:18 +0000
Received: by outflank-mailman (input) for mailman id 86923;
 Fri, 19 Feb 2021 15:22:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lD7bl-0000F3-SR; Fri, 19 Feb 2021 15:22:17 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lD7bl-0002s8-NP; Fri, 19 Feb 2021 15:22:17 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lD7bl-0005m7-CK; Fri, 19 Feb 2021 15:22:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lD7bl-0006k1-Bp; Fri, 19 Feb 2021 15:22:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Ab8gzahnn1G9Ebnr10VI9KN/DRzmbr2sd6CIa8uLJec=; b=nh2eMj2Kv9tHh53f9mGJSQY7qY
	NJpnNJcK58ylc8EZubQt3hLoVC5VZKGmncBFQ4HwRDcFfnWuA1w5ABk0Tr30LvWBjMoCtu+WRwU8g
	wp497jSh66MrdJ+VZY+hWnDiFI70VTohTea2jJNYkIewK/WlIdXjpQHf2MI91yxXJ3gE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159459-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.11-testing test] 159459: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.11-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=76d369d33179a5f8e5f6607f3917db9ab8c22968
X-Osstest-Versions-That:
    xen=80cad584fb4c2599ae174226e2c913bb23df3bfa
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 19 Feb 2021 15:22:17 +0000

flight 159459 xen-4.11-testing real [real]
flight 159479 xen-4.11-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/159459/
http://logs.test-lab.xenproject.org/osstest/logs/159479/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 159479-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 159417
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 159417
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 159417
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 159417
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 159417
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 159417
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 159417
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 159417
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 159417
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 159417
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 159417
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 159417
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 159417
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  76d369d33179a5f8e5f6607f3917db9ab8c22968
baseline version:
 xen                  80cad584fb4c2599ae174226e2c913bb23df3bfa

Last test of basis   159417  2021-02-16 15:05:59 Z    3 days
Testing same since   159459  2021-02-18 12:07:00 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   80cad584fb..76d369d331  76d369d33179a5f8e5f6607f3917db9ab8c22968 -> stable-4.11


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 15:40:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 15:40:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86930.163539 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD7tZ-0002FO-SN; Fri, 19 Feb 2021 15:40:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86930.163539; Fri, 19 Feb 2021 15:40:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD7tZ-0002FH-PS; Fri, 19 Feb 2021 15:40:41 +0000
Received: by outflank-mailman (input) for mailman id 86930;
 Fri, 19 Feb 2021 15:40:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jZy7=HV=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lD7tY-0002F7-7O
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 15:40:40 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4c6d1683-92bd-4b9a-9465-7bbe691d60bf;
 Fri, 19 Feb 2021 15:40:39 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 16522ACBF;
 Fri, 19 Feb 2021 15:40:38 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4c6d1683-92bd-4b9a-9465-7bbe691d60bf
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613749238; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=FO0fz98U2g5weSeglUcsg+1IlA+co2jioZAxAjIzYHw=;
	b=QHsiSyjEB+XxqHutx5nmU495s5WuRT0+qc5uj4y7BVCW2JiShJb0alShJHO9TdfRz2ErIk
	oECbNApBN2vIETzf/74XNNkgdqqKi1NF/cJVQDze0mmLQXz/o0ELCqD4f7DDFbTncYvnLU
	PMx2UHjoxGknPHWP9OyA2OLZyRYUoR0=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	stable@vger.kernel.org,
	Julien Grall <julien@xen.org>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v3 1/8] xen/events: reset affinity of 2-level event when tearing it down
Date: Fri, 19 Feb 2021 16:40:23 +0100
Message-Id: <20210219154030.10892-2-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210219154030.10892-1-jgross@suse.com>
References: <20210219154030.10892-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When creating a new event channel with 2-level events, the affinity
needs to be reset initially in order to avoid using a stale affinity
left over from an earlier use of the event channel port. So reset all
affinity bits when tearing an event channel down.

The same applies to the affinity when onlining a vcpu: all old
affinity settings for this vcpu must be reset. As percpu events get
initialized before the percpu event channel hook is called, the
affinities are instead reset when a vcpu is offlined (this works
because the initial percpu memory is zeroed out).

Cc: stable@vger.kernel.org
Reported-by: Julien Grall <julien@xen.org>
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
V2:
- reset affinity when tearing down the event (Julien Grall)
---
 drivers/xen/events/events_2l.c       | 15 +++++++++++++++
 drivers/xen/events/events_base.c     |  1 +
 drivers/xen/events/events_internal.h |  8 ++++++++
 3 files changed, 24 insertions(+)

diff --git a/drivers/xen/events/events_2l.c b/drivers/xen/events/events_2l.c
index da87f3a1e351..a7f413c5c190 100644
--- a/drivers/xen/events/events_2l.c
+++ b/drivers/xen/events/events_2l.c
@@ -47,6 +47,11 @@ static unsigned evtchn_2l_max_channels(void)
 	return EVTCHN_2L_NR_CHANNELS;
 }
 
+static void evtchn_2l_remove(evtchn_port_t evtchn, unsigned int cpu)
+{
+	clear_bit(evtchn, BM(per_cpu(cpu_evtchn_mask, cpu)));
+}
+
 static void evtchn_2l_bind_to_cpu(evtchn_port_t evtchn, unsigned int cpu,
 				  unsigned int old_cpu)
 {
@@ -355,9 +360,18 @@ static void evtchn_2l_resume(void)
 				EVTCHN_2L_NR_CHANNELS/BITS_PER_EVTCHN_WORD);
 }
 
+static int evtchn_2l_percpu_deinit(unsigned int cpu)
+{
+	memset(per_cpu(cpu_evtchn_mask, cpu), 0, sizeof(xen_ulong_t) *
+			EVTCHN_2L_NR_CHANNELS/BITS_PER_EVTCHN_WORD);
+
+	return 0;
+}
+
 static const struct evtchn_ops evtchn_ops_2l = {
 	.max_channels      = evtchn_2l_max_channels,
 	.nr_channels       = evtchn_2l_max_channels,
+	.remove            = evtchn_2l_remove,
 	.bind_to_cpu       = evtchn_2l_bind_to_cpu,
 	.clear_pending     = evtchn_2l_clear_pending,
 	.set_pending       = evtchn_2l_set_pending,
@@ -367,6 +381,7 @@ static const struct evtchn_ops evtchn_ops_2l = {
 	.unmask            = evtchn_2l_unmask,
 	.handle_events     = evtchn_2l_handle_events,
 	.resume	           = evtchn_2l_resume,
+	.percpu_deinit     = evtchn_2l_percpu_deinit,
 };
 
 void __init xen_evtchn_2l_init(void)
diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index e850f79351cb..6c539db81f8f 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -368,6 +368,7 @@ static int xen_irq_info_pirq_setup(unsigned irq,
 static void xen_irq_info_cleanup(struct irq_info *info)
 {
 	set_evtchn_to_irq(info->evtchn, -1);
+	xen_evtchn_port_remove(info->evtchn, info->cpu);
 	info->evtchn = 0;
 	channels_on_cpu_dec(info);
 }
diff --git a/drivers/xen/events/events_internal.h b/drivers/xen/events/events_internal.h
index 0a97c0549db7..18a4090d0709 100644
--- a/drivers/xen/events/events_internal.h
+++ b/drivers/xen/events/events_internal.h
@@ -14,6 +14,7 @@ struct evtchn_ops {
 	unsigned (*nr_channels)(void);
 
 	int (*setup)(evtchn_port_t port);
+	void (*remove)(evtchn_port_t port, unsigned int cpu);
 	void (*bind_to_cpu)(evtchn_port_t evtchn, unsigned int cpu,
 			    unsigned int old_cpu);
 
@@ -54,6 +55,13 @@ static inline int xen_evtchn_port_setup(evtchn_port_t evtchn)
 	return 0;
 }
 
+static inline void xen_evtchn_port_remove(evtchn_port_t evtchn,
+					  unsigned int cpu)
+{
+	if (evtchn_ops->remove)
+		evtchn_ops->remove(evtchn, cpu);
+}
+
 static inline void xen_evtchn_port_bind_to_cpu(evtchn_port_t evtchn,
 					       unsigned int cpu,
 					       unsigned int old_cpu)
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Feb 19 15:40:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 15:40:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86931.163547 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD7ta-0002Fp-8W; Fri, 19 Feb 2021 15:40:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86931.163547; Fri, 19 Feb 2021 15:40:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD7ta-0002Fa-10; Fri, 19 Feb 2021 15:40:42 +0000
Received: by outflank-mailman (input) for mailman id 86931;
 Fri, 19 Feb 2021 15:40:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jZy7=HV=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lD7tY-0002F6-7h
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 15:40:40 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fc0cb927-d5f2-4efb-9a95-43ea58351532;
 Fri, 19 Feb 2021 15:40:38 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 17FFFAED2;
 Fri, 19 Feb 2021 15:40:38 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fc0cb927-d5f2-4efb-9a95-43ea58351532
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613749238; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=MHxag6mUxa0dootKy2U+lJVR2k8UiaKbpKfVyKM/EZ4=;
	b=jjsdawg1KZYf+nItuzbxyqaXeDKHiLv6YoeaErBu4tJer4zXSytLV7xk92MixdQUnvPRED
	iXUz6DrGfs47sGkqcF453+W0zBJW6WReMsknyI3WKg9iAusmhGukYAazPdCEFJ6k8mopm3
	ZGwTClr2lapCAwa2bLNQ99xzaXwou3k=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org,
	netdev@vger.kernel.org,
	linux-block@vger.kernel.org,
	linux-scsi@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	stable@vger.kernel.org,
	Wei Liu <wei.liu@kernel.org>,
	Paul Durrant <paul@xen.org>,
	"David S. Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Jens Axboe <axboe@kernel.dk>
Subject: [PATCH v3 0/8] xen/events: bug fixes and some diagnostic aids
Date: Fri, 19 Feb 2021 16:40:22 +0100
Message-Id: <20210219154030.10892-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The first four patches are fixes for XSA-332. They avoid WARN splats
and fix a performance issue with interdomain events.

Patches 5 and 6 extend event handling in order to add per pv-device
statistics to sysfs and a per backend device control for the spurious
event delay.

Patches 7 and 8 are minor fixes I had lying around.

Juergen Gross (8):
  xen/events: reset affinity of 2-level event when tearing it down
  xen/events: don't unmask an event channel when an eoi is pending
  xen/events: avoid handling the same event on two cpus at the same time
  xen/netback: fix spurious event detection for common event case
  xen/events: link interdomain events to associated xenbus device
  xen/events: add per-xenbus device event statistics and settings
  xen/evtchn: use smp barriers for user event ring
  xen/evtchn: use READ/WRITE_ONCE() for accessing ring indices

 .../ABI/testing/sysfs-devices-xenbus          |  41 ++++
 drivers/block/xen-blkback/xenbus.c            |   2 +-
 drivers/net/xen-netback/interface.c           |  24 ++-
 drivers/xen/events/events_2l.c                |  22 +-
 drivers/xen/events/events_base.c              | 199 +++++++++++++-----
 drivers/xen/events/events_fifo.c              |   7 -
 drivers/xen/events/events_internal.h          |  14 +-
 drivers/xen/evtchn.c                          |  29 ++-
 drivers/xen/pvcalls-back.c                    |   4 +-
 drivers/xen/xen-pciback/xenbus.c              |   2 +-
 drivers/xen/xen-scsiback.c                    |   2 +-
 drivers/xen/xenbus/xenbus_probe.c             |  66 ++++++
 include/xen/events.h                          |   7 +-
 include/xen/xenbus.h                          |   7 +
 14 files changed, 327 insertions(+), 99 deletions(-)
 create mode 100644 Documentation/ABI/testing/sysfs-devices-xenbus

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Feb 19 15:40:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 15:40:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86932.163564 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD7te-0002Il-D8; Fri, 19 Feb 2021 15:40:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86932.163564; Fri, 19 Feb 2021 15:40:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD7te-0002Ie-8v; Fri, 19 Feb 2021 15:40:46 +0000
Received: by outflank-mailman (input) for mailman id 86932;
 Fri, 19 Feb 2021 15:40:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jZy7=HV=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lD7tc-0002F6-V2
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 15:40:44 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id db1eec34-b373-4506-9779-f75a822cf7ce;
 Fri, 19 Feb 2021 15:40:39 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7C5A8AF23;
 Fri, 19 Feb 2021 15:40:38 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: db1eec34-b373-4506-9779-f75a822cf7ce
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613749238; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=l3hsQtjVhbUJuzJrwS+bXrEjvj+YDAiuRUM066+18HU=;
	b=Mc5dLu2iK4ri54BVS8sVlgoCdpzZyxnZb6PrS1izcSrnyoWAfPFIU02dYvCgRltNbGbIHW
	oI3sqwmM0Q2vgPMX7v3/FBtanmeY4inkYGxGOt//Jb19mpHGoSbfO8NU+nPaL4L7JnOwyJ
	lXkAGmpRFn162CNsNJmSx3FibIYwqZw=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	stable@vger.kernel.org,
	Julien Grall <julien@xen.org>
Subject: [PATCH v3 3/8] xen/events: avoid handling the same event on two cpus at the same time
Date: Fri, 19 Feb 2021 16:40:25 +0100
Message-Id: <20210219154030.10892-4-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210219154030.10892-1-jgross@suse.com>
References: <20210219154030.10892-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When changing the cpu affinity of an event it can happen today that
(with some unlucky timing) the same event will be handled on the old
and the new cpu at the same time.

Avoid that by adding an "event active" flag to the per-event data and
calling the handler only if this flag isn't set.

Cc: stable@vger.kernel.org
Reported-by: Julien Grall <julien@xen.org>
Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- new patch
V3:
- use common helper for end of handler action (Julien Grall)
- move setting is_active to 0 for lateeoi (Boris Ostrovsky)
---
 drivers/xen/events/events_base.c | 32 +++++++++++++++++++++-----------
 1 file changed, 21 insertions(+), 11 deletions(-)

diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index e157e7506830..9d7ba7623510 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -102,6 +102,7 @@ struct irq_info {
 #define EVT_MASK_REASON_EXPLICIT	0x01
 #define EVT_MASK_REASON_TEMPORARY	0x02
 #define EVT_MASK_REASON_EOI_PENDING	0x04
+	u8 is_active;		/* Is event just being handled? */
 	unsigned irq;
 	evtchn_port_t evtchn;   /* event channel */
 	unsigned short cpu;     /* cpu bound */
@@ -791,6 +792,12 @@ static void xen_evtchn_close(evtchn_port_t port)
 		BUG();
 }
 
+static void event_handler_exit(struct irq_info *info)
+{
+	smp_store_release(&info->is_active, 0);
+	clear_evtchn(info->evtchn);
+}
+
 static void pirq_query_unmask(int irq)
 {
 	struct physdev_irq_status_query irq_status;
@@ -809,14 +816,15 @@ static void pirq_query_unmask(int irq)
 
 static void eoi_pirq(struct irq_data *data)
 {
-	evtchn_port_t evtchn = evtchn_from_irq(data->irq);
+	struct irq_info *info = info_for_irq(data->irq);
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
 	struct physdev_eoi eoi = { .irq = pirq_from_irq(data->irq) };
 	int rc = 0;
 
 	if (!VALID_EVTCHN(evtchn))
 		return;
 
-	clear_evtchn(evtchn);
+	event_handler_exit(info);
 
 	if (pirq_needs_eoi(data->irq)) {
 		rc = HYPERVISOR_physdev_op(PHYSDEVOP_eoi, &eoi);
@@ -1640,6 +1648,8 @@ void handle_irq_for_port(evtchn_port_t port, struct evtchn_loop_ctrl *ctrl)
 	}
 
 	info = info_for_irq(irq);
+	if (xchg_acquire(&info->is_active, 1))
+		return;
 
 	if (ctrl->defer_eoi) {
 		info->eoi_cpu = smp_processor_id();
@@ -1823,12 +1833,11 @@ static void disable_dynirq(struct irq_data *data)
 
 static void ack_dynirq(struct irq_data *data)
 {
-	evtchn_port_t evtchn = evtchn_from_irq(data->irq);
-
-	if (!VALID_EVTCHN(evtchn))
-		return;
+	struct irq_info *info = info_for_irq(data->irq);
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
 
-	clear_evtchn(evtchn);
+	if (VALID_EVTCHN(evtchn))
+		event_handler_exit(info);
 }
 
 static void mask_ack_dynirq(struct irq_data *data)
@@ -1844,7 +1853,7 @@ static void lateeoi_ack_dynirq(struct irq_data *data)
 
 	if (VALID_EVTCHN(evtchn)) {
 		do_mask(info, EVT_MASK_REASON_EOI_PENDING);
-		clear_evtchn(evtchn);
+		event_handler_exit(info);
 	}
 }
 
@@ -1856,7 +1865,7 @@ static void lateeoi_mask_ack_dynirq(struct irq_data *data)
 	if (VALID_EVTCHN(evtchn)) {
 		do_mask(info, EVT_MASK_REASON_EXPLICIT |
 			      EVT_MASK_REASON_EOI_PENDING);
-		clear_evtchn(evtchn);
+		event_handler_exit(info);
 	}
 }
 
@@ -1969,10 +1978,11 @@ static void restore_cpu_ipis(unsigned int cpu)
 /* Clear an irq's pending state, in preparation for polling on it */
 void xen_clear_irq_pending(int irq)
 {
-	evtchn_port_t evtchn = evtchn_from_irq(irq);
+	struct irq_info *info = info_for_irq(irq);
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
 
 	if (VALID_EVTCHN(evtchn))
-		clear_evtchn(evtchn);
+		event_handler_exit(info);
 }
 EXPORT_SYMBOL(xen_clear_irq_pending);
 void xen_set_irq_pending(int irq)
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Feb 19 15:40:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 15:40:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86933.163571 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD7te-0002JY-SK; Fri, 19 Feb 2021 15:40:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86933.163571; Fri, 19 Feb 2021 15:40:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD7te-0002JI-Hs; Fri, 19 Feb 2021 15:40:46 +0000
Received: by outflank-mailman (input) for mailman id 86933;
 Fri, 19 Feb 2021 15:40:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jZy7=HV=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lD7tc-0002F7-Vf
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 15:40:44 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4387be5f-7ab4-46f3-840c-020f1ce719c3;
 Fri, 19 Feb 2021 15:40:39 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 48961AEE7;
 Fri, 19 Feb 2021 15:40:38 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4387be5f-7ab4-46f3-840c-020f1ce719c3
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613749238; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=3oKwFjI7QJbP3FIEeR3qa2NxeJ0L76eV38hLUEb9hjo=;
	b=Krq+2gAPtl2iwBk9OuqOgdBVyD8pTmW+cAprW8bgBNVfAHrb+ZskD1FHPqIDjh3m0fr6xh
	aZ+CpDR7AuySaiogHQX+JCH2eUdJ9zIM4Uw+yiBWCY8TfxV6PECxUtU7c4Vbb4YLFO9FSv
	7t7T3Vp9+/dR5Si/Ui3GUKoIJuvlaF8=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	stable@vger.kernel.org,
	Julien Grall <julien@xen.org>
Subject: [PATCH v3 2/8] xen/events: don't unmask an event channel when an eoi is pending
Date: Fri, 19 Feb 2021 16:40:24 +0100
Message-Id: <20210219154030.10892-3-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210219154030.10892-1-jgross@suse.com>
References: <20210219154030.10892-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

An event channel should be kept masked while an eoi is pending for it.
It might be unmasked when being migrated to another cpu, though.

In order to avoid this, keep three different flags for each event channel
to be able to distinguish "normal" masking/unmasking from eoi related
masking/unmasking and from temporary masking. The event channel is only
able to generate an interrupt when all flags are cleared.

Cc: stable@vger.kernel.org
Fixes: 54c9de89895e0a36047 ("xen/events: add a new late EOI evtchn framework")
Reported-by: Julien Grall <julien@xen.org>
Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- introduce a lock around masking/unmasking
- merge patch 3 into this one (Jan Beulich)
---
 drivers/xen/events/events_2l.c       |   7 --
 drivers/xen/events/events_base.c     | 102 +++++++++++++++++++++------
 drivers/xen/events/events_fifo.c     |   7 --
 drivers/xen/events/events_internal.h |   6 --
 4 files changed, 81 insertions(+), 41 deletions(-)

diff --git a/drivers/xen/events/events_2l.c b/drivers/xen/events/events_2l.c
index a7f413c5c190..b8f2f971c2f0 100644
--- a/drivers/xen/events/events_2l.c
+++ b/drivers/xen/events/events_2l.c
@@ -77,12 +77,6 @@ static bool evtchn_2l_is_pending(evtchn_port_t port)
 	return sync_test_bit(port, BM(&s->evtchn_pending[0]));
 }
 
-static bool evtchn_2l_test_and_set_mask(evtchn_port_t port)
-{
-	struct shared_info *s = HYPERVISOR_shared_info;
-	return sync_test_and_set_bit(port, BM(&s->evtchn_mask[0]));
-}
-
 static void evtchn_2l_mask(evtchn_port_t port)
 {
 	struct shared_info *s = HYPERVISOR_shared_info;
@@ -376,7 +370,6 @@ static const struct evtchn_ops evtchn_ops_2l = {
 	.clear_pending     = evtchn_2l_clear_pending,
 	.set_pending       = evtchn_2l_set_pending,
 	.is_pending        = evtchn_2l_is_pending,
-	.test_and_set_mask = evtchn_2l_test_and_set_mask,
 	.mask              = evtchn_2l_mask,
 	.unmask            = evtchn_2l_unmask,
 	.handle_events     = evtchn_2l_handle_events,
diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index 6c539db81f8f..e157e7506830 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -97,13 +97,18 @@ struct irq_info {
 	short refcnt;
 	u8 spurious_cnt;
 	u8 is_accounted;
-	enum xen_irq_type type; /* type */
+	short type;		/* type: IRQT_* */
+	u8 mask_reason;		/* Why is event channel masked */
+#define EVT_MASK_REASON_EXPLICIT	0x01
+#define EVT_MASK_REASON_TEMPORARY	0x02
+#define EVT_MASK_REASON_EOI_PENDING	0x04
 	unsigned irq;
 	evtchn_port_t evtchn;   /* event channel */
 	unsigned short cpu;     /* cpu bound */
 	unsigned short eoi_cpu; /* EOI must happen on this cpu-1 */
 	unsigned int irq_epoch; /* If eoi_cpu valid: irq_epoch of event */
 	u64 eoi_time;           /* Time in jiffies when to EOI. */
+	spinlock_t lock;
 
 	union {
 		unsigned short virq;
@@ -152,6 +157,7 @@ static DEFINE_RWLOCK(evtchn_rwlock);
  *   evtchn_rwlock
  *     IRQ-desc lock
  *       percpu eoi_list_lock
+ *         irq_info->lock
  */
 
 static LIST_HEAD(xen_irq_list_head);
@@ -302,6 +308,8 @@ static int xen_irq_info_common_setup(struct irq_info *info,
 	info->irq = irq;
 	info->evtchn = evtchn;
 	info->cpu = cpu;
+	info->mask_reason = EVT_MASK_REASON_EXPLICIT;
+	spin_lock_init(&info->lock);
 
 	ret = set_evtchn_to_irq(evtchn, irq);
 	if (ret < 0)
@@ -450,6 +458,34 @@ unsigned int cpu_from_evtchn(evtchn_port_t evtchn)
 	return ret;
 }
 
+static void do_mask(struct irq_info *info, u8 reason)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&info->lock, flags);
+
+	if (!info->mask_reason)
+		mask_evtchn(info->evtchn);
+
+	info->mask_reason |= reason;
+
+	spin_unlock_irqrestore(&info->lock, flags);
+}
+
+static void do_unmask(struct irq_info *info, u8 reason)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&info->lock, flags);
+
+	info->mask_reason &= ~reason;
+
+	if (!info->mask_reason)
+		unmask_evtchn(info->evtchn);
+
+	spin_unlock_irqrestore(&info->lock, flags);
+}
+
 #ifdef CONFIG_X86
 static bool pirq_check_eoi_map(unsigned irq)
 {
@@ -586,7 +622,7 @@ static void xen_irq_lateeoi_locked(struct irq_info *info, bool spurious)
 	}
 
 	info->eoi_time = 0;
-	unmask_evtchn(evtchn);
+	do_unmask(info, EVT_MASK_REASON_EOI_PENDING);
 }
 
 static void xen_irq_lateeoi_worker(struct work_struct *work)
@@ -831,7 +867,8 @@ static unsigned int __startup_pirq(unsigned int irq)
 		goto err;
 
 out:
-	unmask_evtchn(evtchn);
+	do_unmask(info, EVT_MASK_REASON_EXPLICIT);
+
 	eoi_pirq(irq_get_irq_data(irq));
 
 	return 0;
@@ -858,7 +895,7 @@ static void shutdown_pirq(struct irq_data *data)
 	if (!VALID_EVTCHN(evtchn))
 		return;
 
-	mask_evtchn(evtchn);
+	do_mask(info, EVT_MASK_REASON_EXPLICIT);
 	xen_evtchn_close(evtchn);
 	xen_irq_info_cleanup(info);
 }
@@ -1691,10 +1728,10 @@ void rebind_evtchn_irq(evtchn_port_t evtchn, int irq)
 }
 
 /* Rebind an evtchn so that it gets delivered to a specific cpu */
-static int xen_rebind_evtchn_to_cpu(evtchn_port_t evtchn, unsigned int tcpu)
+static int xen_rebind_evtchn_to_cpu(struct irq_info *info, unsigned int tcpu)
 {
 	struct evtchn_bind_vcpu bind_vcpu;
-	int masked;
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
 
 	if (!VALID_EVTCHN(evtchn))
 		return -1;
@@ -1710,7 +1747,7 @@ static int xen_rebind_evtchn_to_cpu(evtchn_port_t evtchn, unsigned int tcpu)
 	 * Mask the event while changing the VCPU binding to prevent
 	 * it being delivered on an unexpected VCPU.
 	 */
-	masked = test_and_set_mask(evtchn);
+	do_mask(info, EVT_MASK_REASON_TEMPORARY);
 
 	/*
 	 * If this fails, it usually just indicates that we're dealing with a
@@ -1720,8 +1757,7 @@ static int xen_rebind_evtchn_to_cpu(evtchn_port_t evtchn, unsigned int tcpu)
 	if (HYPERVISOR_event_channel_op(EVTCHNOP_bind_vcpu, &bind_vcpu) >= 0)
 		bind_evtchn_to_cpu(evtchn, tcpu, false);
 
-	if (!masked)
-		unmask_evtchn(evtchn);
+	do_unmask(info, EVT_MASK_REASON_TEMPORARY);
 
 	return 0;
 }
@@ -1760,7 +1796,7 @@ static int set_affinity_irq(struct irq_data *data, const struct cpumask *dest,
 	unsigned int tcpu = select_target_cpu(dest);
 	int ret;
 
-	ret = xen_rebind_evtchn_to_cpu(evtchn_from_irq(data->irq), tcpu);
+	ret = xen_rebind_evtchn_to_cpu(info_for_irq(data->irq), tcpu);
 	if (!ret)
 		irq_data_update_effective_affinity(data, cpumask_of(tcpu));
 
@@ -1769,18 +1805,20 @@ static int set_affinity_irq(struct irq_data *data, const struct cpumask *dest,
 
 static void enable_dynirq(struct irq_data *data)
 {
-	evtchn_port_t evtchn = evtchn_from_irq(data->irq);
+	struct irq_info *info = info_for_irq(data->irq);
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
 
 	if (VALID_EVTCHN(evtchn))
-		unmask_evtchn(evtchn);
+		do_unmask(info, EVT_MASK_REASON_EXPLICIT);
 }
 
 static void disable_dynirq(struct irq_data *data)
 {
-	evtchn_port_t evtchn = evtchn_from_irq(data->irq);
+	struct irq_info *info = info_for_irq(data->irq);
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
 
 	if (VALID_EVTCHN(evtchn))
-		mask_evtchn(evtchn);
+		do_mask(info, EVT_MASK_REASON_EXPLICIT);
 }
 
 static void ack_dynirq(struct irq_data *data)
@@ -1799,18 +1837,40 @@ static void mask_ack_dynirq(struct irq_data *data)
 	ack_dynirq(data);
 }
 
+static void lateeoi_ack_dynirq(struct irq_data *data)
+{
+	struct irq_info *info = info_for_irq(data->irq);
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
+
+	if (VALID_EVTCHN(evtchn)) {
+		do_mask(info, EVT_MASK_REASON_EOI_PENDING);
+		clear_evtchn(evtchn);
+	}
+}
+
+static void lateeoi_mask_ack_dynirq(struct irq_data *data)
+{
+	struct irq_info *info = info_for_irq(data->irq);
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
+
+	if (VALID_EVTCHN(evtchn)) {
+		do_mask(info, EVT_MASK_REASON_EXPLICIT |
+			      EVT_MASK_REASON_EOI_PENDING);
+		clear_evtchn(evtchn);
+	}
+}
+
 static int retrigger_dynirq(struct irq_data *data)
 {
-	evtchn_port_t evtchn = evtchn_from_irq(data->irq);
-	int masked;
+	struct irq_info *info = info_for_irq(data->irq);
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
 
 	if (!VALID_EVTCHN(evtchn))
 		return 0;
 
-	masked = test_and_set_mask(evtchn);
+	do_mask(info, EVT_MASK_REASON_TEMPORARY);
 	set_evtchn(evtchn);
-	if (!masked)
-		unmask_evtchn(evtchn);
+	do_unmask(info, EVT_MASK_REASON_TEMPORARY);
 
 	return 1;
 }
@@ -2024,8 +2084,8 @@ static struct irq_chip xen_lateeoi_chip __read_mostly = {
 	.irq_mask		= disable_dynirq,
 	.irq_unmask		= enable_dynirq,
 
-	.irq_ack		= mask_ack_dynirq,
-	.irq_mask_ack		= mask_ack_dynirq,
+	.irq_ack		= lateeoi_ack_dynirq,
+	.irq_mask_ack		= lateeoi_mask_ack_dynirq,
 
 	.irq_set_affinity	= set_affinity_irq,
 	.irq_retrigger		= retrigger_dynirq,
diff --git a/drivers/xen/events/events_fifo.c b/drivers/xen/events/events_fifo.c
index b234f1766810..ad9fe51d3fb3 100644
--- a/drivers/xen/events/events_fifo.c
+++ b/drivers/xen/events/events_fifo.c
@@ -209,12 +209,6 @@ static bool evtchn_fifo_is_pending(evtchn_port_t port)
 	return sync_test_bit(EVTCHN_FIFO_BIT(PENDING, word), BM(word));
 }
 
-static bool evtchn_fifo_test_and_set_mask(evtchn_port_t port)
-{
-	event_word_t *word = event_word_from_port(port);
-	return sync_test_and_set_bit(EVTCHN_FIFO_BIT(MASKED, word), BM(word));
-}
-
 static void evtchn_fifo_mask(evtchn_port_t port)
 {
 	event_word_t *word = event_word_from_port(port);
@@ -423,7 +417,6 @@ static const struct evtchn_ops evtchn_ops_fifo = {
 	.clear_pending     = evtchn_fifo_clear_pending,
 	.set_pending       = evtchn_fifo_set_pending,
 	.is_pending        = evtchn_fifo_is_pending,
-	.test_and_set_mask = evtchn_fifo_test_and_set_mask,
 	.mask              = evtchn_fifo_mask,
 	.unmask            = evtchn_fifo_unmask,
 	.handle_events     = evtchn_fifo_handle_events,
diff --git a/drivers/xen/events/events_internal.h b/drivers/xen/events/events_internal.h
index 18a4090d0709..4d3398eff9cd 100644
--- a/drivers/xen/events/events_internal.h
+++ b/drivers/xen/events/events_internal.h
@@ -21,7 +21,6 @@ struct evtchn_ops {
 	void (*clear_pending)(evtchn_port_t port);
 	void (*set_pending)(evtchn_port_t port);
 	bool (*is_pending)(evtchn_port_t port);
-	bool (*test_and_set_mask)(evtchn_port_t port);
 	void (*mask)(evtchn_port_t port);
 	void (*unmask)(evtchn_port_t port);
 
@@ -84,11 +83,6 @@ static inline bool test_evtchn(evtchn_port_t port)
 	return evtchn_ops->is_pending(port);
 }
 
-static inline bool test_and_set_mask(evtchn_port_t port)
-{
-	return evtchn_ops->test_and_set_mask(port);
-}
-
 static inline void mask_evtchn(evtchn_port_t port)
 {
 	return evtchn_ops->mask(port);
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Feb 19 15:40:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 15:40:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86934.163588 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD7tk-0002QA-0W; Fri, 19 Feb 2021 15:40:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86934.163588; Fri, 19 Feb 2021 15:40:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD7tj-0002Q1-Sa; Fri, 19 Feb 2021 15:40:51 +0000
Received: by outflank-mailman (input) for mailman id 86934;
 Fri, 19 Feb 2021 15:40:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jZy7=HV=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lD7th-0002F6-V9
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 15:40:49 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d945f9fa-43c4-4d82-b864-f9b70dc12d2e;
 Fri, 19 Feb 2021 15:40:39 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C5B9AB10B;
 Fri, 19 Feb 2021 15:40:38 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d945f9fa-43c4-4d82-b864-f9b70dc12d2e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613749238; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=QmVbRiyGhWjgyRhmW9maPS5XeYQ8D0P7LNo1+euH3hI=;
	b=kUPpmw0yKQA70gsRODhJFsj/8eUsG8GG0hrbpqX7y81r0C26/2h9EUbs0xUKId7cwep9Am
	gireTmt9iiSmUipV/+ZpaGWiJ2siBWquCtwD2sAbj6VLLHfWhdZl4VbbcwKpZAGnf74FYh
	QpyLJ4Yxuslqb5hRXglPLi6uk8bEdGY=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wei.liu@kernel.org>,
	Paul Durrant <paul@xen.org>,
	"David S. Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>,
	stable@vger.kernel.org,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 4/8] xen/netback: fix spurious event detection for common event case
Date: Fri, 19 Feb 2021 16:40:26 +0100
Message-Id: <20210219154030.10892-5-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210219154030.10892-1-jgross@suse.com>
References: <20210219154030.10892-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In case of a common event for the rx and tx queue the event should be
regarded as spurious if no rx and no tx requests are pending.

Unfortunately the condition testing for that is wrong, causing an
event to be regarded as spurious if no rx OR no tx requests are
pending.

Fix that, and use local variables for the rx/tx pending indicators in
order to split the function calls from the if condition.

Cc: stable@vger.kernel.org
Fixes: 23025393dbeb3b ("xen/netback: use lateeoi irq binding")
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Paul Durrant <paul@xen.org>
Reviewed-by: Wei Liu <wl@xen.org>
---
V2:
- new patch, fixing FreeBSD performance issue
---
 drivers/net/xen-netback/interface.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index acb786d8b1d8..e02a4fbb74de 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -162,13 +162,15 @@ irqreturn_t xenvif_interrupt(int irq, void *dev_id)
 {
 	struct xenvif_queue *queue = dev_id;
 	int old;
+	bool has_rx, has_tx;
 
 	old = atomic_fetch_or(NETBK_COMMON_EOI, &queue->eoi_pending);
 	WARN(old, "Interrupt while EOI pending\n");
 
-	/* Use bitwise or as we need to call both functions. */
-	if ((!xenvif_handle_tx_interrupt(queue) |
-	     !xenvif_handle_rx_interrupt(queue))) {
+	has_tx = xenvif_handle_tx_interrupt(queue);
+	has_rx = xenvif_handle_rx_interrupt(queue);
+
+	if (!has_rx && !has_tx) {
 		atomic_andnot(NETBK_COMMON_EOI, &queue->eoi_pending);
 		xen_irq_lateeoi(irq, XEN_EOI_FLAG_SPURIOUS);
 	}
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Feb 19 15:40:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 15:40:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86935.163595 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD7tk-0002R3-FH; Fri, 19 Feb 2021 15:40:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86935.163595; Fri, 19 Feb 2021 15:40:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD7tk-0002Qo-7w; Fri, 19 Feb 2021 15:40:52 +0000
Received: by outflank-mailman (input) for mailman id 86935;
 Fri, 19 Feb 2021 15:40:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jZy7=HV=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lD7th-0002F7-Vq
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 15:40:50 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 17195ed2-3ee0-4ae2-ad36-57118e02a687;
 Fri, 19 Feb 2021 15:40:40 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8FC36B114;
 Fri, 19 Feb 2021 15:40:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 17195ed2-3ee0-4ae2-ad36-57118e02a687
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613749239; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=nYiBLOOS0jKaGXZR/xdp5WHTi+5sOW1v2m88NDrZNbI=;
	b=mqlyaLZe/YgvYSsQIvPpN9XoaMOuiHW3Pbur9d3Wz3NOMHY4Oy4s66u3aQWq9fxarixX/O
	Ey3afn8U9k0KK6JKqUCXGc1zWME5EppFL1W/JOkfTp3ilVzkPrWMZUrxql+Y+EqqtO2aWl
	dpSxPChifLttkm90i7WQkWP8NL1KFr4=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v3 7/8] xen/evtchn: use smp barriers for user event ring
Date: Fri, 19 Feb 2021 16:40:29 +0100
Message-Id: <20210219154030.10892-8-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210219154030.10892-1-jgross@suse.com>
References: <20210219154030.10892-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The ring buffer for user events is local to the given kernel instance,
so SMP barriers are sufficient for ensuring consistency.

Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
 drivers/xen/evtchn.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/xen/evtchn.c b/drivers/xen/evtchn.c
index a7a85719a8c8..421382c73d88 100644
--- a/drivers/xen/evtchn.c
+++ b/drivers/xen/evtchn.c
@@ -173,7 +173,7 @@ static irqreturn_t evtchn_interrupt(int irq, void *data)
 
 	if ((u->ring_prod - u->ring_cons) < u->ring_size) {
 		*evtchn_ring_entry(u, u->ring_prod) = evtchn->port;
-		wmb(); /* Ensure ring contents visible */
+		smp_wmb(); /* Ensure ring contents visible */
 		if (u->ring_cons == u->ring_prod++) {
 			wake_up_interruptible(&u->evtchn_wait);
 			kill_fasync(&u->evtchn_async_queue,
@@ -245,7 +245,7 @@ static ssize_t evtchn_read(struct file *file, char __user *buf,
 	}
 
 	rc = -EFAULT;
-	rmb(); /* Ensure that we see the port before we copy it. */
+	smp_rmb(); /* Ensure that we see the port before we copy it. */
 	if (copy_to_user(buf, evtchn_ring_entry(u, c), bytes1) ||
 	    ((bytes2 != 0) &&
 	     copy_to_user(&buf[bytes1], &u->ring[0], bytes2)))
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Feb 19 15:40:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 15:40:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86936.163612 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD7tp-0002a4-7M; Fri, 19 Feb 2021 15:40:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86936.163612; Fri, 19 Feb 2021 15:40:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD7tp-0002Zr-1Q; Fri, 19 Feb 2021 15:40:57 +0000
Received: by outflank-mailman (input) for mailman id 86936;
 Fri, 19 Feb 2021 15:40:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jZy7=HV=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lD7tm-0002F6-VW
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 15:40:55 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d4088ab8-1c64-409b-ae27-ad1bf24d0844;
 Fri, 19 Feb 2021 15:40:40 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B65E9B115;
 Fri, 19 Feb 2021 15:40:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d4088ab8-1c64-409b-ae27-ad1bf24d0844
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613749239; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=dflticAMbc09PwXAGDjeFedypgLhrYxdjLRQqHKEDDU=;
	b=cAj/RR/Np2Vu9mTDqZssOVj4+tOadc3EcJK5OUtUYHgAsHlAkUaM7EnBllZPfqdGIbJiFN
	EixYe2DRhOopG/yuDiYRwWsU8do5usxDeCMFtoxvVAkcN6xMoINQmWwvZIObrIh6jbFbvN
	GI/e8gT4izrCIcwyQ6tiv+FG6FYEIsA=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v3 8/8] xen/evtchn: use READ/WRITE_ONCE() for accessing ring indices
Date: Fri, 19 Feb 2021 16:40:30 +0100
Message-Id: <20210219154030.10892-9-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210219154030.10892-1-jgross@suse.com>
References: <20210219154030.10892-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

To avoid read and write tearing by the compiler, use READ_ONCE()
and WRITE_ONCE() when accessing the ring indices in evtchn.c.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- modify all accesses (Julien Grall)
V3:
- fix incrementing producer index (Ross Lagerwall)
---
 drivers/xen/evtchn.c | 25 ++++++++++++++++---------
 1 file changed, 16 insertions(+), 9 deletions(-)

diff --git a/drivers/xen/evtchn.c b/drivers/xen/evtchn.c
index 421382c73d88..c99415a70051 100644
--- a/drivers/xen/evtchn.c
+++ b/drivers/xen/evtchn.c
@@ -162,6 +162,7 @@ static irqreturn_t evtchn_interrupt(int irq, void *data)
 {
 	struct user_evtchn *evtchn = data;
 	struct per_user_data *u = evtchn->user;
+	unsigned int prod, cons;
 
 	WARN(!evtchn->enabled,
 	     "Interrupt for port %u, but apparently not enabled; per-user %p\n",
@@ -171,10 +172,14 @@ static irqreturn_t evtchn_interrupt(int irq, void *data)
 
 	spin_lock(&u->ring_prod_lock);
 
-	if ((u->ring_prod - u->ring_cons) < u->ring_size) {
-		*evtchn_ring_entry(u, u->ring_prod) = evtchn->port;
+	prod = READ_ONCE(u->ring_prod);
+	cons = READ_ONCE(u->ring_cons);
+
+	if ((prod - cons) < u->ring_size) {
+		*evtchn_ring_entry(u, prod) = evtchn->port;
 		smp_wmb(); /* Ensure ring contents visible */
-		if (u->ring_cons == u->ring_prod++) {
+		WRITE_ONCE(u->ring_prod, prod + 1);
+		if (cons == prod) {
 			wake_up_interruptible(&u->evtchn_wait);
 			kill_fasync(&u->evtchn_async_queue,
 				    SIGIO, POLL_IN);
@@ -210,8 +215,8 @@ static ssize_t evtchn_read(struct file *file, char __user *buf,
 		if (u->ring_overflow)
 			goto unlock_out;
 
-		c = u->ring_cons;
-		p = u->ring_prod;
+		c = READ_ONCE(u->ring_cons);
+		p = READ_ONCE(u->ring_prod);
 		if (c != p)
 			break;
 
@@ -221,7 +226,7 @@ static ssize_t evtchn_read(struct file *file, char __user *buf,
 			return -EAGAIN;
 
 		rc = wait_event_interruptible(u->evtchn_wait,
-					      u->ring_cons != u->ring_prod);
+			READ_ONCE(u->ring_cons) != READ_ONCE(u->ring_prod));
 		if (rc)
 			return rc;
 	}
@@ -251,7 +256,7 @@ static ssize_t evtchn_read(struct file *file, char __user *buf,
 	     copy_to_user(&buf[bytes1], &u->ring[0], bytes2)))
 		goto unlock_out;
 
-	u->ring_cons += (bytes1 + bytes2) / sizeof(evtchn_port_t);
+	WRITE_ONCE(u->ring_cons, c + (bytes1 + bytes2) / sizeof(evtchn_port_t));
 	rc = bytes1 + bytes2;
 
  unlock_out:
@@ -552,7 +557,9 @@ static long evtchn_ioctl(struct file *file,
 		/* Initialise the ring to empty. Clear errors. */
 		mutex_lock(&u->ring_cons_mutex);
 		spin_lock_irq(&u->ring_prod_lock);
-		u->ring_cons = u->ring_prod = u->ring_overflow = 0;
+		WRITE_ONCE(u->ring_cons, 0);
+		WRITE_ONCE(u->ring_prod, 0);
+		u->ring_overflow = 0;
 		spin_unlock_irq(&u->ring_prod_lock);
 		mutex_unlock(&u->ring_cons_mutex);
 		rc = 0;
@@ -595,7 +602,7 @@ static __poll_t evtchn_poll(struct file *file, poll_table *wait)
 	struct per_user_data *u = file->private_data;
 
 	poll_wait(file, &u->evtchn_wait, wait);
-	if (u->ring_cons != u->ring_prod)
+	if (READ_ONCE(u->ring_cons) != READ_ONCE(u->ring_prod))
 		mask |= EPOLLIN | EPOLLRDNORM;
 	if (u->ring_overflow)
 		mask = EPOLLERR;
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Feb 19 15:40:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 15:40:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86937.163618 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD7tp-0002b6-Mh; Fri, 19 Feb 2021 15:40:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86937.163618; Fri, 19 Feb 2021 15:40:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD7tp-0002ag-C5; Fri, 19 Feb 2021 15:40:57 +0000
Received: by outflank-mailman (input) for mailman id 86937;
 Fri, 19 Feb 2021 15:40:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jZy7=HV=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lD7tm-0002F7-Vw
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 15:40:55 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b9c76555-797f-4038-8c26-98d27ef634e8;
 Fri, 19 Feb 2021 15:40:40 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 349CDB112;
 Fri, 19 Feb 2021 15:40:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b9c76555-797f-4038-8c26-98d27ef634e8
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613749239; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Ak07w92nb7YZjr3VNNPBcdRWHdpt4Qfsb+0QEiXHvow=;
	b=m4oBqkbD/J0XMEHCZgXSg4TG7sdainHTuh9UGcR5bUtj6PJT9S5rbPq0yawzRopsO1fKg/
	fjSM1x0hUPEPyBkPo6wvzAa/BWiMgBbeMiebc2U5iC19UpbuoIMh3EdMB1hDw5ObeZ9Tyc
	FCiyRKbsyQ6RmR5diqVsVt7inPEpljI=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	netdev@vger.kernel.org,
	linux-scsi@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Jens Axboe <axboe@kernel.dk>,
	Wei Liu <wei.liu@kernel.org>,
	Paul Durrant <paul@xen.org>,
	"David S. Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v3 5/8] xen/events: link interdomain events to associated xenbus device
Date: Fri, 19 Feb 2021 16:40:27 +0100
Message-Id: <20210219154030.10892-6-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210219154030.10892-1-jgross@suse.com>
References: <20210219154030.10892-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In order to support the possibility of per-device event channel
settings (e.g. lateeoi spurious event thresholds), add a xenbus device
pointer to struct irq_info and modify the related event channel
binding interfaces to take a pointer to the xenbus device as a
parameter instead of the domain id of the other side.

While at it remove the stale prototype of bind_evtchn_to_irq_lateeoi().

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Wei Liu <wei.liu@kernel.org>
Reviewed-by: Paul Durrant <paul@xen.org>
---
 drivers/block/xen-blkback/xenbus.c  |  2 +-
 drivers/net/xen-netback/interface.c | 16 +++++------
 drivers/xen/events/events_base.c    | 41 +++++++++++++++++------------
 drivers/xen/pvcalls-back.c          |  4 +--
 drivers/xen/xen-pciback/xenbus.c    |  2 +-
 drivers/xen/xen-scsiback.c          |  2 +-
 include/xen/events.h                |  7 ++---
 7 files changed, 41 insertions(+), 33 deletions(-)

diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
index 9860d4842f36..c2aaf690352c 100644
--- a/drivers/block/xen-blkback/xenbus.c
+++ b/drivers/block/xen-blkback/xenbus.c
@@ -245,7 +245,7 @@ static int xen_blkif_map(struct xen_blkif_ring *ring, grant_ref_t *gref,
 	if (req_prod - rsp_prod > size)
 		goto fail;
 
-	err = bind_interdomain_evtchn_to_irqhandler_lateeoi(blkif->domid,
+	err = bind_interdomain_evtchn_to_irqhandler_lateeoi(blkif->be->dev,
 			evtchn, xen_blkif_be_int, 0, "blkif-backend", ring);
 	if (err < 0)
 		goto fail;
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index e02a4fbb74de..50a94e58c150 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -630,13 +630,13 @@ int xenvif_connect_ctrl(struct xenvif *vif, grant_ref_t ring_ref,
 			unsigned int evtchn)
 {
 	struct net_device *dev = vif->dev;
+	struct xenbus_device *xendev = xenvif_to_xenbus_device(vif);
 	void *addr;
 	struct xen_netif_ctrl_sring *shared;
 	RING_IDX rsp_prod, req_prod;
 	int err;
 
-	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(vif),
-				     &ring_ref, 1, &addr);
+	err = xenbus_map_ring_valloc(xendev, &ring_ref, 1, &addr);
 	if (err)
 		goto err;
 
@@ -650,7 +650,7 @@ int xenvif_connect_ctrl(struct xenvif *vif, grant_ref_t ring_ref,
 	if (req_prod - rsp_prod > RING_SIZE(&vif->ctrl))
 		goto err_unmap;
 
-	err = bind_interdomain_evtchn_to_irq_lateeoi(vif->domid, evtchn);
+	err = bind_interdomain_evtchn_to_irq_lateeoi(xendev, evtchn);
 	if (err < 0)
 		goto err_unmap;
 
@@ -673,8 +673,7 @@ int xenvif_connect_ctrl(struct xenvif *vif, grant_ref_t ring_ref,
 	vif->ctrl_irq = 0;
 
 err_unmap:
-	xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(vif),
-				vif->ctrl.sring);
+	xenbus_unmap_ring_vfree(xendev, vif->ctrl.sring);
 	vif->ctrl.sring = NULL;
 
 err:
@@ -719,6 +718,7 @@ int xenvif_connect_data(struct xenvif_queue *queue,
 			unsigned int tx_evtchn,
 			unsigned int rx_evtchn)
 {
+	struct xenbus_device *dev = xenvif_to_xenbus_device(queue->vif);
 	struct task_struct *task;
 	int err;
 
@@ -755,7 +755,7 @@ int xenvif_connect_data(struct xenvif_queue *queue,
 	if (tx_evtchn == rx_evtchn) {
 		/* feature-split-event-channels == 0 */
 		err = bind_interdomain_evtchn_to_irqhandler_lateeoi(
-			queue->vif->domid, tx_evtchn, xenvif_interrupt, 0,
+			dev, tx_evtchn, xenvif_interrupt, 0,
 			queue->name, queue);
 		if (err < 0)
 			goto err;
@@ -766,7 +766,7 @@ int xenvif_connect_data(struct xenvif_queue *queue,
 		snprintf(queue->tx_irq_name, sizeof(queue->tx_irq_name),
 			 "%s-tx", queue->name);
 		err = bind_interdomain_evtchn_to_irqhandler_lateeoi(
-			queue->vif->domid, tx_evtchn, xenvif_tx_interrupt, 0,
+			dev, tx_evtchn, xenvif_tx_interrupt, 0,
 			queue->tx_irq_name, queue);
 		if (err < 0)
 			goto err;
@@ -776,7 +776,7 @@ int xenvif_connect_data(struct xenvif_queue *queue,
 		snprintf(queue->rx_irq_name, sizeof(queue->rx_irq_name),
 			 "%s-rx", queue->name);
 		err = bind_interdomain_evtchn_to_irqhandler_lateeoi(
-			queue->vif->domid, rx_evtchn, xenvif_rx_interrupt, 0,
+			dev, rx_evtchn, xenvif_rx_interrupt, 0,
 			queue->rx_irq_name, queue);
 		if (err < 0)
 			goto err;
diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index 9d7ba7623510..b60df189ecbc 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -63,6 +63,7 @@
 #include <xen/interface/physdev.h>
 #include <xen/interface/sched.h>
 #include <xen/interface/vcpu.h>
+#include <xen/xenbus.h>
 #include <asm/hw_irq.h>
 
 #include "events_internal.h"
@@ -121,6 +122,7 @@ struct irq_info {
 			unsigned char flags;
 			uint16_t domid;
 		} pirq;
+		struct xenbus_device *interdomain;
 	} u;
 };
 
@@ -322,11 +324,16 @@ static int xen_irq_info_common_setup(struct irq_info *info,
 }
 
 static int xen_irq_info_evtchn_setup(unsigned irq,
-				     evtchn_port_t evtchn)
+				     evtchn_port_t evtchn,
+				     struct xenbus_device *dev)
 {
 	struct irq_info *info = info_for_irq(irq);
+	int ret;
 
-	return xen_irq_info_common_setup(info, irq, IRQT_EVTCHN, evtchn, 0);
+	ret = xen_irq_info_common_setup(info, irq, IRQT_EVTCHN, evtchn, 0);
+	info->u.interdomain = dev;
+
+	return ret;
 }
 
 static int xen_irq_info_ipi_setup(unsigned cpu,
@@ -1162,7 +1169,8 @@ int xen_pirq_from_irq(unsigned irq)
 }
 EXPORT_SYMBOL_GPL(xen_pirq_from_irq);
 
-static int bind_evtchn_to_irq_chip(evtchn_port_t evtchn, struct irq_chip *chip)
+static int bind_evtchn_to_irq_chip(evtchn_port_t evtchn, struct irq_chip *chip,
+				   struct xenbus_device *dev)
 {
 	int irq;
 	int ret;
@@ -1182,7 +1190,7 @@ static int bind_evtchn_to_irq_chip(evtchn_port_t evtchn, struct irq_chip *chip)
 		irq_set_chip_and_handler_name(irq, chip,
 					      handle_edge_irq, "event");
 
-		ret = xen_irq_info_evtchn_setup(irq, evtchn);
+		ret = xen_irq_info_evtchn_setup(irq, evtchn, dev);
 		if (ret < 0) {
 			__unbind_from_irq(irq);
 			irq = ret;
@@ -1209,7 +1217,7 @@ static int bind_evtchn_to_irq_chip(evtchn_port_t evtchn, struct irq_chip *chip)
 
 int bind_evtchn_to_irq(evtchn_port_t evtchn)
 {
-	return bind_evtchn_to_irq_chip(evtchn, &xen_dynamic_chip);
+	return bind_evtchn_to_irq_chip(evtchn, &xen_dynamic_chip, NULL);
 }
 EXPORT_SYMBOL_GPL(bind_evtchn_to_irq);
 
@@ -1258,27 +1266,27 @@ static int bind_ipi_to_irq(unsigned int ipi, unsigned int cpu)
 	return irq;
 }
 
-static int bind_interdomain_evtchn_to_irq_chip(unsigned int remote_domain,
+static int bind_interdomain_evtchn_to_irq_chip(struct xenbus_device *dev,
 					       evtchn_port_t remote_port,
 					       struct irq_chip *chip)
 {
 	struct evtchn_bind_interdomain bind_interdomain;
 	int err;
 
-	bind_interdomain.remote_dom  = remote_domain;
+	bind_interdomain.remote_dom  = dev->otherend_id;
 	bind_interdomain.remote_port = remote_port;
 
 	err = HYPERVISOR_event_channel_op(EVTCHNOP_bind_interdomain,
 					  &bind_interdomain);
 
 	return err ? : bind_evtchn_to_irq_chip(bind_interdomain.local_port,
-					       chip);
+					       chip, dev);
 }
 
-int bind_interdomain_evtchn_to_irq_lateeoi(unsigned int remote_domain,
+int bind_interdomain_evtchn_to_irq_lateeoi(struct xenbus_device *dev,
 					   evtchn_port_t remote_port)
 {
-	return bind_interdomain_evtchn_to_irq_chip(remote_domain, remote_port,
+	return bind_interdomain_evtchn_to_irq_chip(dev, remote_port,
 						   &xen_lateeoi_chip);
 }
 EXPORT_SYMBOL_GPL(bind_interdomain_evtchn_to_irq_lateeoi);
@@ -1391,7 +1399,7 @@ static int bind_evtchn_to_irqhandler_chip(evtchn_port_t evtchn,
 {
 	int irq, retval;
 
-	irq = bind_evtchn_to_irq_chip(evtchn, chip);
+	irq = bind_evtchn_to_irq_chip(evtchn, chip, NULL);
 	if (irq < 0)
 		return irq;
 	retval = request_irq(irq, handler, irqflags, devname, dev_id);
@@ -1426,14 +1434,13 @@ int bind_evtchn_to_irqhandler_lateeoi(evtchn_port_t evtchn,
 EXPORT_SYMBOL_GPL(bind_evtchn_to_irqhandler_lateeoi);
 
 static int bind_interdomain_evtchn_to_irqhandler_chip(
-		unsigned int remote_domain, evtchn_port_t remote_port,
+		struct xenbus_device *dev, evtchn_port_t remote_port,
 		irq_handler_t handler, unsigned long irqflags,
 		const char *devname, void *dev_id, struct irq_chip *chip)
 {
 	int irq, retval;
 
-	irq = bind_interdomain_evtchn_to_irq_chip(remote_domain, remote_port,
-						  chip);
+	irq = bind_interdomain_evtchn_to_irq_chip(dev, remote_port, chip);
 	if (irq < 0)
 		return irq;
 
@@ -1446,14 +1453,14 @@ static int bind_interdomain_evtchn_to_irqhandler_chip(
 	return irq;
 }
 
-int bind_interdomain_evtchn_to_irqhandler_lateeoi(unsigned int remote_domain,
+int bind_interdomain_evtchn_to_irqhandler_lateeoi(struct xenbus_device *dev,
 						  evtchn_port_t remote_port,
 						  irq_handler_t handler,
 						  unsigned long irqflags,
 						  const char *devname,
 						  void *dev_id)
 {
-	return bind_interdomain_evtchn_to_irqhandler_chip(remote_domain,
+	return bind_interdomain_evtchn_to_irqhandler_chip(dev,
 				remote_port, handler, irqflags, devname,
 				dev_id, &xen_lateeoi_chip);
 }
@@ -1727,7 +1734,7 @@ void rebind_evtchn_irq(evtchn_port_t evtchn, int irq)
 	   so there should be a proper type */
 	BUG_ON(info->type == IRQT_UNBOUND);
 
-	(void)xen_irq_info_evtchn_setup(irq, evtchn);
+	(void)xen_irq_info_evtchn_setup(irq, evtchn, NULL);
 
 	mutex_unlock(&irq_mapping_update_lock);
 
diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
index a7d293fa8d14..b47fd8435061 100644
--- a/drivers/xen/pvcalls-back.c
+++ b/drivers/xen/pvcalls-back.c
@@ -348,7 +348,7 @@ static struct sock_mapping *pvcalls_new_active_socket(
 	map->bytes = page;
 
 	ret = bind_interdomain_evtchn_to_irqhandler_lateeoi(
-			fedata->dev->otherend_id, evtchn,
+			fedata->dev, evtchn,
 			pvcalls_back_conn_event, 0, "pvcalls-backend", map);
 	if (ret < 0)
 		goto out;
@@ -948,7 +948,7 @@ static int backend_connect(struct xenbus_device *dev)
 		goto error;
 	}
 
-	err = bind_interdomain_evtchn_to_irq_lateeoi(dev->otherend_id, evtchn);
+	err = bind_interdomain_evtchn_to_irq_lateeoi(dev, evtchn);
 	if (err < 0)
 		goto error;
 	fedata->irq = err;
diff --git a/drivers/xen/xen-pciback/xenbus.c b/drivers/xen/xen-pciback/xenbus.c
index e7c692cfb2cf..5188f02e75fb 100644
--- a/drivers/xen/xen-pciback/xenbus.c
+++ b/drivers/xen/xen-pciback/xenbus.c
@@ -124,7 +124,7 @@ static int xen_pcibk_do_attach(struct xen_pcibk_device *pdev, int gnt_ref,
 	pdev->sh_info = vaddr;
 
 	err = bind_interdomain_evtchn_to_irqhandler_lateeoi(
-		pdev->xdev->otherend_id, remote_evtchn, xen_pcibk_handle_event,
+		pdev->xdev, remote_evtchn, xen_pcibk_handle_event,
 		0, DRV_NAME, pdev);
 	if (err < 0) {
 		xenbus_dev_fatal(pdev->xdev, err,
diff --git a/drivers/xen/xen-scsiback.c b/drivers/xen/xen-scsiback.c
index 862162dca33c..8b59897b2df9 100644
--- a/drivers/xen/xen-scsiback.c
+++ b/drivers/xen/xen-scsiback.c
@@ -799,7 +799,7 @@ static int scsiback_init_sring(struct vscsibk_info *info, grant_ref_t ring_ref,
 	sring = (struct vscsiif_sring *)area;
 	BACK_RING_INIT(&info->ring, sring, PAGE_SIZE);
 
-	err = bind_interdomain_evtchn_to_irq_lateeoi(info->domid, evtchn);
+	err = bind_interdomain_evtchn_to_irq_lateeoi(info->dev, evtchn);
 	if (err < 0)
 		goto unmap_page;
 
diff --git a/include/xen/events.h b/include/xen/events.h
index 8ec418e30c7f..c204262d9fc2 100644
--- a/include/xen/events.h
+++ b/include/xen/events.h
@@ -12,10 +12,11 @@
 #include <asm/xen/hypercall.h>
 #include <asm/xen/events.h>
 
+struct xenbus_device;
+
 unsigned xen_evtchn_nr_channels(void);
 
 int bind_evtchn_to_irq(evtchn_port_t evtchn);
-int bind_evtchn_to_irq_lateeoi(evtchn_port_t evtchn);
 int bind_evtchn_to_irqhandler(evtchn_port_t evtchn,
 			      irq_handler_t handler,
 			      unsigned long irqflags, const char *devname,
@@ -35,9 +36,9 @@ int bind_ipi_to_irqhandler(enum ipi_vector ipi,
 			   unsigned long irqflags,
 			   const char *devname,
 			   void *dev_id);
-int bind_interdomain_evtchn_to_irq_lateeoi(unsigned int remote_domain,
+int bind_interdomain_evtchn_to_irq_lateeoi(struct xenbus_device *dev,
 					   evtchn_port_t remote_port);
-int bind_interdomain_evtchn_to_irqhandler_lateeoi(unsigned int remote_domain,
+int bind_interdomain_evtchn_to_irqhandler_lateeoi(struct xenbus_device *dev,
 						  evtchn_port_t remote_port,
 						  irq_handler_t handler,
 						  unsigned long irqflags,
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Feb 19 15:41:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 15:41:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86938.163636 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD7tt-0002im-VX; Fri, 19 Feb 2021 15:41:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86938.163636; Fri, 19 Feb 2021 15:41:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD7tt-0002ic-P4; Fri, 19 Feb 2021 15:41:01 +0000
Received: by outflank-mailman (input) for mailman id 86938;
 Fri, 19 Feb 2021 15:41:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jZy7=HV=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lD7tr-0002F6-VV
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 15:41:00 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a996cdc6-4437-4d1d-ac26-0ef392d6b9b6;
 Fri, 19 Feb 2021 15:40:40 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5717EB113;
 Fri, 19 Feb 2021 15:40:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a996cdc6-4437-4d1d-ac26-0ef392d6b9b6
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613749239; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=6VpM+WfMANjvkY5w7Gc3UZ8tT64NvTb2gpQ7A/fT8ao=;
	b=bvKz+jSrLdLNwd5s+FG6C4BKp/wBnu2wr9z2vfOmJ6cWicKe/y0A5AFof5tF2XSXaFqiWW
	wO5XbI53YJWxw5Mq1h/yWAjyLBJuRNB4rFCHFn0z4tXncQ8Ffjp+CHP9IvO92IAYAT7bOS
	h1XcIapQO8Q1rwbguFV1CFoIGtsMR30=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v3 6/8] xen/events: add per-xenbus device event statistics and settings
Date: Fri, 19 Feb 2021 16:40:28 +0100
Message-Id: <20210219154030.10892-7-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210219154030.10892-1-jgross@suse.com>
References: <20210219154030.10892-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add sysfs nodes for each xenbus device showing event statistics (number
of events and spurious events, number of associated event channels)
and for setting a spurious event threshold in case a frontend is
sending too many events without being rogue on purpose.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
V2:
- add documentation (Boris Ostrovsky)
---
 .../ABI/testing/sysfs-devices-xenbus          | 41 ++++++++++++
 drivers/xen/events/events_base.c              | 27 +++++++-
 drivers/xen/xenbus/xenbus_probe.c             | 66 +++++++++++++++++++
 include/xen/xenbus.h                          |  7 ++
 4 files changed, 139 insertions(+), 2 deletions(-)
 create mode 100644 Documentation/ABI/testing/sysfs-devices-xenbus

diff --git a/Documentation/ABI/testing/sysfs-devices-xenbus b/Documentation/ABI/testing/sysfs-devices-xenbus
new file mode 100644
index 000000000000..fd796cb4f315
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-devices-xenbus
@@ -0,0 +1,41 @@
+What:		/sys/devices/*/xenbus/event_channels
+Date:		February 2021
+Contact:	Xen Developers mailing list <xen-devel@lists.xenproject.org>
+Description:
+		Number of Xen event channels associated with a kernel based
+		paravirtualized device frontend or backend.
+
+What:		/sys/devices/*/xenbus/events
+Date:		February 2021
+Contact:	Xen Developers mailing list <xen-devel@lists.xenproject.org>
+Description:
+		Total number of Xen events received for a Xen pv device
+		frontend or backend.
+
+What:		/sys/devices/*/xenbus/jiffies_eoi_delayed
+Date:		February 2021
+Contact:	Xen Developers mailing list <xen-devel@lists.xenproject.org>
+Description:
+		Summed up time in jiffies the EOI of an interrupt for a Xen
+		pv device has been delayed in order to avoid stalls due to
+		event storms. This value rising is a first sign for a rogue
+		other end of the pv device.
+
+What:		/sys/devices/*/xenbus/spurious_events
+Date:		February 2021
+Contact:	Xen Developers mailing list <xen-devel@lists.xenproject.org>
+Description:
+		Number of events received for a Xen pv device which did not
+		require any action. Too many spurious events in a row will
+		trigger delayed EOI processing.
+
+What:		/sys/devices/*/xenbus/spurious_threshold
+Date:		February 2021
+Contact:	Xen Developers mailing list <xen-devel@lists.xenproject.org>
+Description:
+		Controls the tolerated number of subsequent spurious events
+		before delayed EOI processing is triggered for a Xen pv
+		device. Default is 1. This can be modified in case the other
+		end of the pv device is issuing spurious events on a regular
+		basis and is known not to be malicious on purpose. Raising
+		the value for such cases can improve pv device performance.
diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index b60df189ecbc..a3636fb03417 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -332,6 +332,8 @@ static int xen_irq_info_evtchn_setup(unsigned irq,
 
 	ret = xen_irq_info_common_setup(info, irq, IRQT_EVTCHN, evtchn, 0);
 	info->u.interdomain = dev;
+	if (dev)
+		atomic_inc(&dev->event_channels);
 
 	return ret;
 }
@@ -606,18 +608,28 @@ static void xen_irq_lateeoi_locked(struct irq_info *info, bool spurious)
 		return;
 
 	if (spurious) {
+		struct xenbus_device *dev = info->u.interdomain;
+		unsigned int threshold = 1;
+
+		if (dev && dev->spurious_threshold)
+			threshold = dev->spurious_threshold;
+
 		if ((1 << info->spurious_cnt) < (HZ << 2)) {
 			if (info->spurious_cnt != 0xFF)
 				info->spurious_cnt++;
 		}
-		if (info->spurious_cnt > 1) {
-			delay = 1 << (info->spurious_cnt - 2);
+		if (info->spurious_cnt > threshold) {
+			delay = 1 << (info->spurious_cnt - 1 - threshold);
 			if (delay > HZ)
 				delay = HZ;
 			if (!info->eoi_time)
 				info->eoi_cpu = smp_processor_id();
 			info->eoi_time = get_jiffies_64() + delay;
+			if (dev)
+				atomic_add(delay, &dev->jiffies_eoi_delayed);
 		}
+		if (dev)
+			atomic_inc(&dev->spurious_events);
 	} else {
 		info->spurious_cnt = 0;
 	}
@@ -954,6 +966,7 @@ static void __unbind_from_irq(unsigned int irq)
 
 	if (VALID_EVTCHN(evtchn)) {
 		unsigned int cpu = cpu_from_irq(irq);
+		struct xenbus_device *dev;
 
 		xen_evtchn_close(evtchn);
 
@@ -964,6 +977,11 @@ static void __unbind_from_irq(unsigned int irq)
 		case IRQT_IPI:
 			per_cpu(ipi_to_irq, cpu)[ipi_from_irq(irq)] = -1;
 			break;
+		case IRQT_EVTCHN:
+			dev = info->u.interdomain;
+			if (dev)
+				atomic_dec(&dev->event_channels);
+			break;
 		default:
 			break;
 		}
@@ -1627,6 +1645,7 @@ void handle_irq_for_port(evtchn_port_t port, struct evtchn_loop_ctrl *ctrl)
 {
 	int irq;
 	struct irq_info *info;
+	struct xenbus_device *dev;
 
 	irq = get_evtchn_to_irq(port);
 	if (irq == -1)
@@ -1658,6 +1677,10 @@ void handle_irq_for_port(evtchn_port_t port, struct evtchn_loop_ctrl *ctrl)
 	if (xchg_acquire(&info->is_active, 1))
 		return;
 
+	dev = (info->type == IRQT_EVTCHN) ? info->u.interdomain : NULL;
+	if (dev)
+		atomic_inc(&dev->events);
+
 	if (ctrl->defer_eoi) {
 		info->eoi_cpu = smp_processor_id();
 		info->irq_epoch = __this_cpu_read(irq_epoch);
diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c
index 8a75092bb148..97f0d234482d 100644
--- a/drivers/xen/xenbus/xenbus_probe.c
+++ b/drivers/xen/xenbus/xenbus_probe.c
@@ -206,6 +206,65 @@ void xenbus_otherend_changed(struct xenbus_watch *watch,
 }
 EXPORT_SYMBOL_GPL(xenbus_otherend_changed);
 
+#define XENBUS_SHOW_STAT(name)						\
+static ssize_t show_##name(struct device *_dev,				\
+			   struct device_attribute *attr,		\
+			   char *buf)					\
+{									\
+	struct xenbus_device *dev = to_xenbus_device(_dev);		\
+									\
+	return sprintf(buf, "%d\n", atomic_read(&dev->name));		\
+}									\
+static DEVICE_ATTR(name, 0444, show_##name, NULL)
+
+XENBUS_SHOW_STAT(event_channels);
+XENBUS_SHOW_STAT(events);
+XENBUS_SHOW_STAT(spurious_events);
+XENBUS_SHOW_STAT(jiffies_eoi_delayed);
+
+static ssize_t show_spurious_threshold(struct device *_dev,
+				       struct device_attribute *attr,
+				       char *buf)
+{
+	struct xenbus_device *dev = to_xenbus_device(_dev);
+
+	return sprintf(buf, "%d\n", dev->spurious_threshold);
+}
+
+static ssize_t set_spurious_threshold(struct device *_dev,
+				      struct device_attribute *attr,
+				      const char *buf, size_t count)
+{
+	struct xenbus_device *dev = to_xenbus_device(_dev);
+	unsigned int val;
+	ssize_t ret;
+
+	ret = kstrtouint(buf, 0, &val);
+	if (ret)
+		return ret;
+
+	dev->spurious_threshold = val;
+
+	return count;
+}
+
+static DEVICE_ATTR(spurious_threshold, 0644, show_spurious_threshold,
+		   set_spurious_threshold);
+
+static struct attribute *xenbus_attrs[] = {
+	&dev_attr_event_channels.attr,
+	&dev_attr_events.attr,
+	&dev_attr_spurious_events.attr,
+	&dev_attr_jiffies_eoi_delayed.attr,
+	&dev_attr_spurious_threshold.attr,
+	NULL
+};
+
+static const struct attribute_group xenbus_group = {
+	.name = "xenbus",
+	.attrs = xenbus_attrs,
+};
+
 int xenbus_dev_probe(struct device *_dev)
 {
 	struct xenbus_device *dev = to_xenbus_device(_dev);
@@ -253,6 +312,11 @@ int xenbus_dev_probe(struct device *_dev)
 		return err;
 	}
 
+	dev->spurious_threshold = 1;
+	if (sysfs_create_group(&dev->dev.kobj, &xenbus_group))
+		dev_warn(&dev->dev, "sysfs_create_group on %s failed.\n",
+			 dev->nodename);
+
 	return 0;
 fail_put:
 	module_put(drv->driver.owner);
@@ -269,6 +333,8 @@ int xenbus_dev_remove(struct device *_dev)
 
 	DPRINTK("%s", dev->nodename);
 
+	sysfs_remove_group(&dev->dev.kobj, &xenbus_group);
+
 	free_otherend_watch(dev);
 
 	if (drv->remove) {
diff --git a/include/xen/xenbus.h b/include/xen/xenbus.h
index bf3cfc7c35d0..0b1386073d49 100644
--- a/include/xen/xenbus.h
+++ b/include/xen/xenbus.h
@@ -88,6 +88,13 @@ struct xenbus_device {
 	struct completion down;
 	struct work_struct work;
 	struct semaphore reclaim_sem;
+
+	/* Event channel based statistics and settings. */
+	atomic_t event_channels;
+	atomic_t events;
+	atomic_t spurious_events;
+	atomic_t jiffies_eoi_delayed;
+	unsigned int spurious_threshold;
 };
 
 static inline struct xenbus_device *to_xenbus_device(struct device *dev)
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Feb 19 15:50:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 15:50:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86951.163648 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD83K-00044D-8r; Fri, 19 Feb 2021 15:50:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86951.163648; Fri, 19 Feb 2021 15:50:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD83K-000446-5k; Fri, 19 Feb 2021 15:50:46 +0000
Received: by outflank-mailman (input) for mailman id 86951;
 Fri, 19 Feb 2021 15:50:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lD83J-00043k-N6
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 15:50:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lD83J-0003K6-M8
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 15:50:45 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lD83J-0006KL-Kq
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 15:50:45 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lD83G-0001EC-Cl; Fri, 19 Feb 2021 15:50:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=u8nZQgrCi40A/rBG4PVPftmjeH8BU5sMLURTLylJDSo=; b=4j+ayX1tIq3ozDwnqyyMmNyiHL
	nQZXJi2qfh6fSN8j7c6cmFNYqgwa9gpvw51RJ1xxiKiJ/WQjEOx/hx1mQIPBdULjVasWyp0jj19RT
	MtuYvCRuo5HEgRP6RSdSfBTa0ha9ppDBSaC06V5/znrzyMN5tavWXtcwv7EhT8zHPUAk=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24623.56913.290437.499946@mariner.uk.xensource.com>
Date: Fri, 19 Feb 2021 15:50:41 +0000
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel\@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
    Andrew Cooper <andrew.cooper3@citrix.com>,
    Wei Liu <wl@xen.org>,
    Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
    George Dunlap <george.dunlap@citrix.com>
Subject: Re: [PATCH v2 0/8] x86/PV: avoid speculation abuse through guest
 accessors
In-Reply-To: <b466a19e-e547-3c7c-e39b-1a4c848a053a@suse.com>
References: <b466a19e-e547-3c7c-e39b-1a4c848a053a@suse.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Jan Beulich writes ("[PATCH v2 0/8] x86/PV: avoid speculation abuse through guest accessors"):
> Re-sending primarily for the purpose of getting a release ack, an
> explicit release nak, or an indication of there not being a need,
> all for at least the first three patches here (which are otherwise
> ready to go in). I've dropped the shadow part of the series from
> this re-submission, because it has all got reviewed by Tim already
> and is intended for 4.16 only anyway. I'm re-including the follow
> up patches getting the code base in consistent shape again, as I
> continue to think this consistency goal is at least worth a
> consideration towards a freeze exception.
> 
> 1: split __{get,put}_user() into "guest" and "unsafe" variants
> 2: split __copy_{from,to}_user() into "guest" and "unsafe" variants
> 3: PV: harden guest memory accesses against speculative abuse

These three:

Release-Acked-by: Ian Jackson <iwj@xenproject.org>

On the grounds that this is probably severe enough to be a blocking
issue for 4.15.

> 4: rename {get,put}_user() to {get,put}_guest()
> 5: gdbsx: convert "user" to "guest" accesses
> 6: rename copy_{from,to}_user() to copy_{from,to}_guest_pv()
> 7: move stac()/clac() from {get,put}_unsafe_asm() ...
> 8: PV: use get_unsafe() instead of copy_from_unsafe()

These have not got a maintainer review yet.  To grant a release-ack
I'd like an explanation of the downsides and upsides of taking this
series in 4.15?

You say "consistency" but in practical terms, what will happen if the
code is not "consistent" in this sense?

I'd also like to hear from another hypervisor maintainer.

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 15:56:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 15:56:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86955.163659 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD894-0004Nh-UZ; Fri, 19 Feb 2021 15:56:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86955.163659; Fri, 19 Feb 2021 15:56:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD894-0004Na-Rf; Fri, 19 Feb 2021 15:56:42 +0000
Received: by outflank-mailman (input) for mailman id 86955;
 Fri, 19 Feb 2021 15:56:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TOKe=HV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lD894-0004NV-10
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 15:56:42 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 60fd18f0-2509-4154-8bbb-9b2e36e92450;
 Fri, 19 Feb 2021 15:56:41 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 466EDAF13;
 Fri, 19 Feb 2021 15:56:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 60fd18f0-2509-4154-8bbb-9b2e36e92450
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613750200; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Yz2tmJss4Lazm5SIaE9ql6MWR4N5/YdjMrP/jH5CWzI=;
	b=HX/Tc/A5Fk/rQvEcNQu5Nyc8tSx15BfFBxl4cBuNaGkiqE1ItOqXYJQODtP/IOgd1iwAVn
	PYWT7UHPq4e4hEs5dtUTWxGW0G8H3H8iQcJpJTICmHsJBhix0f1Yx9Lqx4msThdNwKfnon
	2mhTIWmq1zv3dPGcMoyGSAVHJ6e1f8o=
Subject: Re: [PATCH v2 0/8] x86/PV: avoid speculation abuse through guest
 accessors
To: Ian Jackson <iwj@xenproject.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>
References: <b466a19e-e547-3c7c-e39b-1a4c848a053a@suse.com>
 <24623.56913.290437.499946@mariner.uk.xensource.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <381560e0-e108-c77a-7c43-ae6eb559bba9@suse.com>
Date: Fri, 19 Feb 2021 16:56:40 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <24623.56913.290437.499946@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 19.02.2021 16:50, Ian Jackson wrote:
> Jan Beulich writes ("[PATCH v2 0/8] x86/PV: avoid speculation abuse through guest accessors"):
>> Re-sending primarily for the purpose of getting a release ack, an
>> explicit release nak, or an indication of there not being a need,
>> all for at least the first three patches here (which are otherwise
>> ready to go in). I've dropped the shadow part of the series from
>> this re-submission, because it has all been reviewed by Tim already
>> and is intended for 4.16 only anyway. I'm re-including the follow-up
>> patches that get the code base into consistent shape again, as I
>> continue to think this consistency goal is at least worth
>> considering for a freeze exception.
>>
>> 1: split __{get,put}_user() into "guest" and "unsafe" variants
>> 2: split __copy_{from,to}_user() into "guest" and "unsafe" variants
>> 3: PV: harden guest memory accesses against speculative abuse
> 
> These three:
> 
> Release-Acked-by: Ian Jackson <iwj@xenproject.org>
> 
> On the grounds that this is probably severe enough to be a blocking
> issue for 4.15.

Thanks.

>> 4: rename {get,put}_user() to {get,put}_guest()
>> 5: gdbsx: convert "user" to "guest" accesses
>> 6: rename copy_{from,to}_user() to copy_{from,to}_guest_pv()
>> 7: move stac()/clac() from {get,put}_unsafe_asm() ...
>> 8: PV: use get_unsafe() instead of copy_from_unsafe()
> 
> These have not got a maintainer review yet.  To grant a release-ack
> I'd like an explanation of the downsides and upsides of taking this
> series in 4.15 ?
> 
> You say "consistency" but in practical terms, what will happen if the
> code is not "consistent" in this sense ?

Patches 4-6: The code is harder to understand with the mix of names.
Backports from future versions to 4.15 may require more attention to
get right (and then again the same level of attention when moving to
4.14).

Patch 7 is simply a minor improvement. Patch 8 is the equivalent
of the one patch from the earlier version which has already gone in.
Just like that other one, it's more to avoid a latent issue than any
active one.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 16:05:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 16:05:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86958.163672 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD8HA-0005vV-Qo; Fri, 19 Feb 2021 16:05:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86958.163672; Fri, 19 Feb 2021 16:05:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD8HA-0005vO-N5; Fri, 19 Feb 2021 16:05:04 +0000
Received: by outflank-mailman (input) for mailman id 86958;
 Fri, 19 Feb 2021 16:05:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TOKe=HV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lD8H9-0005vJ-OY
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 16:05:03 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fa4619ff-6d22-4ad9-9014-62ab168cc0c9;
 Fri, 19 Feb 2021 16:05:03 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 37FF2AF23;
 Fri, 19 Feb 2021 16:05:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fa4619ff-6d22-4ad9-9014-62ab168cc0c9
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613750702; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=bQJYuFPwzPF7OCEzYZIA6HHVIIEyxPJUuCa9MHlhA4o=;
	b=KnslhaFTERezln7xZLeZ6OCu132qfrGlGdAl1odTkX/7TUtc0O18Ghoxk6tXfmEhbRoBWF
	OUsQ58qL7b1ghfLEVWYTAn07wNzvUlW2yulf7jM+cbHldQnnmurEJ5FpFlWOHgnmroSLDV
	RUaH5oj/sO9m/nUidd3fxGzg+w4715Q=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] build: remove more absolute paths from dependency tracking
 files
Message-ID: <0a68efee-9595-b272-fc8b-8ceb284d3163@suse.com>
Date: Fri, 19 Feb 2021 17:05:02 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

d6b12add90da ("DEPS handling: Remove absolute paths from references to
cwd") took care of massaging the dependencies of the output file, but
for our passing of -MP to the compiler to take effect the same needs to
be done on the "phony" rules that the compiler emits.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/Config.mk
+++ b/Config.mk
@@ -63,7 +63,7 @@ DEPS_INCLUDE = $(addsuffix .d2, $(basena
 DEPS_RM = $(DEPS) $(DEPS_INCLUDE)
 
 %.d2: %.d
-	sed "s! $$PWD/! !" $^ >$@.tmp && mv -f $@.tmp $@
+	sed "s!\(^\| \)$$PWD/! !" $^ >$@.tmp && mv -f $@.tmp $@
 
 include $(XEN_ROOT)/config/$(XEN_OS).mk
 include $(XEN_ROOT)/config/$(XEN_TARGET_ARCH).mk
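For illustration, the effect of the tightened expression can be reproduced
standalone (using a hypothetical /tmp/build directory in place of $PWD; the
\| alternation is a GNU sed extension):

```shell
# Two dependency lines as the compiler might emit with -MP: a normal
# prerequisite list, plus a "phony" target rule starting at column 0.
deps='foo.o: /tmp/build/foo.c
/tmp/build/foo.h:'

# Old pattern: only strips the path when preceded by a space, so the
# phony rule at the start of the line keeps its absolute path.
printf '%s\n' "$deps" | sed "s! /tmp/build/! !"

# New pattern: also matches at the start of a line, relativising the
# phony rule as well.
printf '%s\n' "$deps" | sed "s!\(^\| \)/tmp/build/! !"
```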


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 16:06:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 16:06:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86960.163683 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD8IM-000637-4B; Fri, 19 Feb 2021 16:06:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86960.163683; Fri, 19 Feb 2021 16:06:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD8IM-000630-1N; Fri, 19 Feb 2021 16:06:18 +0000
Received: by outflank-mailman (input) for mailman id 86960;
 Fri, 19 Feb 2021 16:06:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lD8IJ-00062t-VP
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 16:06:15 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lD8IJ-00048Q-SZ
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 16:06:15 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lD8IJ-0007jo-QD
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 16:06:15 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lD8IG-0001GV-IL; Fri, 19 Feb 2021 16:06:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=zhtwyNPYJpdzEYbBgenSUE5dcpa0r2iqJkD+QVTI2Z4=; b=4Ph/+81mfUCdLlOYZHUoJfteMf
	6hoK/4IWwtoE8wQQAkXaN8gLi1rRsbxrSISyFLYiL7IXOIXFNS5zaGNECDgZaXPxJtO4SfCRcSOdj
	WAFrAGaN7vfAwkbq2uT3OX8/wRjjTsrWenjxvX7OW2AzuXUs7cE7ZDt1vHBLl3ydqJtM=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24623.57844.337638.750772@mariner.uk.xensource.com>
Date: Fri, 19 Feb 2021 16:06:12 +0000
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel\@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
    Andrew Cooper <andrew.cooper3@citrix.com>,
    Wei Liu <wl@xen.org>,
    Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
    Claudemir Todo Bom <claudemir@todobom.com>,
    Ian Jackson <iwj@xenproject.org>
Subject: Re: [PATCH v3 0/4] x86/time: calibration rendezvous adjustments
In-Reply-To: <bb7494b9-f4d1-f0c0-2fb2-5201559c1962@suse.com>
References: <bb7494b9-f4d1-f0c0-2fb2-5201559c1962@suse.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Jan Beulich writes ("[PATCH v3 0/4] x86/time: calibration rendezvous adjustments"):
> The middle two patches are meant to address a regression reported on
> the list under "Problems with APIC on versions 4.9 and later (4.8
> works)". In the course of analyzing output from a debugging patch I
> ran into another anomaly again, which I thought I should finally try
> to address. Hence patch 1. Patch 4 is new in v3 and RFC for now.

For patches 1-3:

Release-Acked-by: Ian Jackson <iwj@xenproject.org>

Ian.


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 16:12:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 16:12:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86962.163696 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD8OA-00074f-TP; Fri, 19 Feb 2021 16:12:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86962.163696; Fri, 19 Feb 2021 16:12:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD8OA-00074Y-Pe; Fri, 19 Feb 2021 16:12:18 +0000
Received: by outflank-mailman (input) for mailman id 86962;
 Fri, 19 Feb 2021 16:10:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/VLz=HV=gmail.com=kevinnegy@srs-us1.protection.inumbo.net>)
 id 1lD8MY-00071h-08
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 16:10:38 +0000
Received: from mail-ot1-x32f.google.com (unknown [2607:f8b0:4864:20::32f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 152974fb-6c79-4267-8752-4bcdc551d0b5;
 Fri, 19 Feb 2021 16:10:37 +0000 (UTC)
Received: by mail-ot1-x32f.google.com with SMTP id q4so5521347otm.9
 for <xen-devel@lists.xenproject.org>; Fri, 19 Feb 2021 08:10:37 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 152974fb-6c79-4267-8752-4bcdc551d0b5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:from:date:message-id:subject:to;
        bh=v7MkuIaxgQ1ml4ate7F2T/nEuIWneFQOQs/Dm6gkCfw=;
        b=lf9NBKI9DxnzQEiCcAdaAPYWqEt77bmkca/HJc8mT2PIqqY4AvS0zciEhEXubfrpHN
         nGovZXcOAZeIeP0zD4Y9Ceuoq8jNwX71a+pt+TWmb45G6EQczHmjZvk3cn28MoI/V3sX
         bTGHCGxA9gA6+4KNta0IRv5nAMJ+xi/MU+H/QFc3Fj4thgO1G3CYG2xcLbdNgk/d1qyL
         4HFhFJVH/g4c8ij1tV9rTSgsOP/OI/PFkUWut5vcYF2w9UJd9pKIrZTH5bl/eYRz4QkR
         eq8FSugWI/nd/Qi02d4m9IrS7iazp8F9pDL6oQuye3jnS1iYBk3U2LQQk1fcO1yVaSKF
         wkYA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:from:date:message-id:subject:to;
        bh=v7MkuIaxgQ1ml4ate7F2T/nEuIWneFQOQs/Dm6gkCfw=;
        b=DfuiwqIKB9XMgIkNgac9aK3punj+Pjdt/HfESU/a3sAaM9Fcr1PdmJARje2U2Suurz
         Enurzjgl8GYwUg7OA/iJm1n3mOd/oiQbBwpLCLjT5l+hdFdK3xomGsERNgdJo+XW6Dfb
         yJJoKe5NJnD5v5HidHl61rueQM0kaC/PS9jy1nYXsTdP/Ni2OphuFf+wl9E42fFOJIX/
         e8/7h+Du7dNhwsEg4xo+ePYp/arZkmGH1F4aA1XhbS/LvxoSB3a9GnYX+3DoeWwROfdN
         KD/vwvCZ8UOV6gm1FIuahhFT25SIJYwNUeQVpuC50w339hfl3v6r6Qb8NJNh5r/u6Rn1
         QKiQ==
X-Gm-Message-State: AOAM530g98D5+NkavZvQCP3DAL73G30yqVhtqnqW1nBha5jiUB31aeW4
	HQDzkudln8REjNTmX3KWstcKBEJ1Luzp6nTDz206OAPQwQ==
X-Google-Smtp-Source: ABdhPJwOoeGr8acsF+dBNLbmaVXbY2yvyZys9FcN7wdThmiDFwKSF3BX35N2v72curU1KzeZIGdoKAXdzPp97aB5Ovw=
X-Received: by 2002:a9d:d87:: with SMTP id 7mr7559451ots.256.1613751036403;
 Fri, 19 Feb 2021 08:10:36 -0800 (PST)
MIME-Version: 1.0
From: Kevin Negy <kevinnegy@gmail.com>
Date: Fri, 19 Feb 2021 11:10:00 -0500
Message-ID: <CACZWC-qK_biKgyi+ZiXnsHRscAbK9pz=kncdBA25QYWY129HCQ@mail.gmail.com>
Subject: How does shadow page table work during migration?
To: xen-devel@lists.xenproject.org
Content-Type: multipart/alternative; boundary="00000000000047e49b05bbb2b138"

--00000000000047e49b05bbb2b138
Content-Type: text/plain; charset="UTF-8"

Hello,

I'm trying to understand how the shadow page table works in Xen,
specifically during live migration. My understanding is that after shadow
paging is enabled (sh_enable_log_dirty() in
xen/arch/x86/mm/shadow/common.c), a shadow page table is created, which is
a complete copy of the current guest page table. Then the CR3 register is
switched to use this shadow page table as the active table while the guest
page table is stored elsewhere. The guest page table itself (and not the
individual entries in the page table) is marked as read only so that any
guest memory access that requires the page table will result in a page
fault. These page faults happen and are trapped to the Xen hypervisor. Xen
will then update the shadow page table to match what the guest sees on its
page tables.

Is this understanding correct?

If so, here is where I get confused. During the migration pre-copy phase,
each pre-copy iteration reads the dirty bitmap (paging_log_dirty_op() in
xen/arch/x86/mm/paging.c) and cleans it. This process seems to destroy all
the shadow page tables of the domain with the call to shadow_blow_tables()
in sh_clean_dirty_bitmap().

How is the dirty bitmap related to shadow page tables? Why destroy the
entire shadow page table if it is the only legitimate page table in CR3 for
the domain?

Thank you,
Kevin

--00000000000047e49b05bbb2b138--


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 16:13:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 16:13:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86964.163708 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD8P3-0007Ad-68; Fri, 19 Feb 2021 16:13:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86964.163708; Fri, 19 Feb 2021 16:13:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD8P3-0007AV-36; Fri, 19 Feb 2021 16:13:13 +0000
Received: by outflank-mailman (input) for mailman id 86964;
 Fri, 19 Feb 2021 16:13:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lD8P1-0007AN-LE
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 16:13:11 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lD8P1-0004H2-JA
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 16:13:11 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lD8P1-0008PJ-Hs
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 16:13:11 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lD8Oy-0001Hz-A6; Fri, 19 Feb 2021 16:13:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=WxeqxFuHJbqY6zzQggWwRIr+agzwPbNQklTmSllrMsc=; b=4hfFgGcARyqSy9L+C2gcTkFp/M
	4NiqSAI4qxatzMY1IqkrhPq06fm5qAMQc+njIY7nvqCKOlBsc7Akdy7nvPFJU1fbXOLYOzFuLBaZt
	a4Ah/qEKsJWEIyyGYZLFMbbJP9TQL8usp+9eaeBCVkwsDld9XY98JF7snnaNbW13JRto=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24623.58260.98531.223090@mariner.uk.xensource.com>
Date: Fri, 19 Feb 2021 16:13:08 +0000
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel\@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
    Andrew Cooper <andrew.cooper3@citrix.com>,
    Wei Liu <wl@xen.org>,
    Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
    George Dunlap <george.dunlap@citrix.com>
Subject: Re: [PATCH v2 0/8] x86/PV: avoid speculation abuse through guest
 accessors
In-Reply-To: <381560e0-e108-c77a-7c43-ae6eb559bba9@suse.com>
References: <b466a19e-e547-3c7c-e39b-1a4c848a053a@suse.com>
	<24623.56913.290437.499946@mariner.uk.xensource.com>
	<381560e0-e108-c77a-7c43-ae6eb559bba9@suse.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Jan Beulich writes ("Re: [PATCH v2 0/8] x86/PV: avoid speculation abuse through guest accessors"):
> On 19.02.2021 16:50, Ian Jackson wrote:
> > You say "consistency" but in practical terms, what will happen if the
> > code is not "consistent" in this sense ?
> 
> Patches 4-6: The code is harder to understand with the mix of names.
> Backports from future versions to 4.15 may require more attention to
> get right (and then again the same level of attention when moving to
> 4.14).
> 
> Patch 7 is simply a minor improvement. Patch 8 is the equivalent
> of the one patch from the earlier version which has already gone in.
> Just like that other one, it's more to avoid a latent issue than any
> active one.

Thank you for this clear explanation.

I think 4-6 and 8 are good candidates for the reasons you give, and
because they seem low risk to me.  Have you used any automatic
techniques to check that there is no unintentional codegen change ?
(Eg, binary diffs, diffing sedderied versions, or something.)
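Such a check could be approximated, for instance, by diffing address-normalised
disassembly of the hypervisor built before and after the series. A sketch, not
a definitive recipe (the before/ and after/ trees are hypothetical):

```shell
# Assumption: before/ and after/ hold xen-syms built without and with
# the rename-only patches. Addresses are stripped before diffing so
# that only genuine instruction changes show up.
normalize() { sed 's/^ *[0-9a-f]*:[[:space:]]*//'; }

objdump -d before/xen-syms | normalize > before.asm
objdump -d after/xen-syms  | normalize > after.asm
diff -u before.asm after.asm && echo "no codegen differences"
```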

To my naive eye patch 7 looks scary because it might be moving the
scope of a critical section.  Am I wrong about that ?

Ian.


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 16:17:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 16:17:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86968.163719 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD8Sf-0007Lo-Mx; Fri, 19 Feb 2021 16:16:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86968.163719; Fri, 19 Feb 2021 16:16:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD8Sf-0007Lh-Jz; Fri, 19 Feb 2021 16:16:57 +0000
Received: by outflank-mailman (input) for mailman id 86968;
 Fri, 19 Feb 2021 16:16:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TOKe=HV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lD8Se-0007Lc-Ev
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 16:16:56 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a12b14bb-5534-4e53-a73a-2a9c777b8bd3;
 Fri, 19 Feb 2021 16:16:55 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E89E7AC6E;
 Fri, 19 Feb 2021 16:16:54 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a12b14bb-5534-4e53-a73a-2a9c777b8bd3
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613751415; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=tBkMQYeVZQwzEvmOaxxs4K2wmCKetPkqRG4QRy5Q4tw=;
	b=Sd2O2v66TsNXNIAYSZ/Bbs6qxaJxvAFIOc6Eck9YbUz30UDmYTYtRC+WRk3xbAlxhc+aoH
	5t9qEmvLITpdodIoxUxbNOr4wO/CCJs4DPeOfuxHX0McDeprrq43KuZPvEypyht9auH+zS
	wlPPSc5Hezt4DLFNh/K0m7Ce4A3qMUw=
Subject: Re: [PATCH v2 0/8] x86/PV: avoid speculation abuse through guest
 accessors
To: Ian Jackson <iwj@xenproject.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>
References: <b466a19e-e547-3c7c-e39b-1a4c848a053a@suse.com>
 <24623.56913.290437.499946@mariner.uk.xensource.com>
 <381560e0-e108-c77a-7c43-ae6eb559bba9@suse.com>
 <24623.58260.98531.223090@mariner.uk.xensource.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <fbac488e-383c-c5a9-585a-6609b81e7acc@suse.com>
Date: Fri, 19 Feb 2021 17:16:55 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <24623.58260.98531.223090@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 19.02.2021 17:13, Ian Jackson wrote:
> Jan Beulich writes ("Re: [PATCH v2 0/8] x86/PV: avoid speculation abuse through guest accessors"):
>> On 19.02.2021 16:50, Ian Jackson wrote:
>>> You say "consistency" but in practical terms, what will happen if the
>>> code is not "consistent" in this sense ?
>>
>> Patches 4-6: The code is harder to understand with the mix of names.
>> Backports from future versions to 4.15 may require more attention to
>> get right (and then again the same level of attention when moving to
>> 4.14).
>>
>> Patch 7 is simply a minor improvement. Patch 8 is the equivalent
>> of the one patch from the earlier version which has already gone in.
>> Just like that other one, it's more to avoid a latent issue than any
>> active one.
> 
> Thank you for this clear explanation.
> 
> I think 4-6 and 8 are good candidates for the reasons you give, and
> because they seem low risk to me.  Have you used any automatic
> techniques to check that there is no unintentional codegen change ?
> (Eg, binary diffs, diffing sedderied versions, or something.)

I did some manual inspection at the time of putting together that
work, but nothing further to be honest.

> To my naive eye patch 7 looks scary because it might be moving the
> scope of a critical section.  Am I wrong about that ?

At the source level it moves things, yes. Generated code, again as
per manual inspection, doesn't change, due to the pieces that the
compiler is able to eliminate. So I guess it's not as scary as it
may look.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 16:31:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 16:31:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86973.163731 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD8gD-0000pg-43; Fri, 19 Feb 2021 16:30:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86973.163731; Fri, 19 Feb 2021 16:30:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD8gD-0000pZ-0K; Fri, 19 Feb 2021 16:30:57 +0000
Received: by outflank-mailman (input) for mailman id 86973;
 Fri, 19 Feb 2021 16:30:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lD8gB-0000pU-Bh
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 16:30:55 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lD8gB-0004YX-AC
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 16:30:55 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lD8gB-0001LB-7P
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 16:30:55 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lD8g8-0001MC-1n; Fri, 19 Feb 2021 16:30:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=Fei5JGxqGwxzts9ixih8J+hpvtO9kql5cvdXDxjILpY=; b=JEhivi0Hg14kd3muGbKAvhIAwS
	A1HTrlYSUBHoL1POqUktwt3QEzz9KkBmLJnoOZhxterrejQVFJYTTsRDlTMpbH7YoQMqUV956pFO9
	XWtoqRYQfo2Qx4mcP0BUXMZg+/r0lL9ifo21VH56dwl9WooxRHZbHUrGTOMKd5pQ3gAU=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24623.59323.764544.88044@mariner.uk.xensource.com>
Date: Fri, 19 Feb 2021 16:30:51 +0000
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel\@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
    Andrew Cooper <andrew.cooper3@citrix.com>,
    Wei Liu <wl@xen.org>,
    Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
    George Dunlap <george.dunlap@citrix.com>
Subject: Re: [PATCH v2 0/8] x86/PV: avoid speculation abuse through guest
 accessors
In-Reply-To: <fbac488e-383c-c5a9-585a-6609b81e7acc@suse.com>
References: <b466a19e-e547-3c7c-e39b-1a4c848a053a@suse.com>
	<24623.56913.290437.499946@mariner.uk.xensource.com>
	<381560e0-e108-c77a-7c43-ae6eb559bba9@suse.com>
	<24623.58260.98531.223090@mariner.uk.xensource.com>
	<fbac488e-383c-c5a9-585a-6609b81e7acc@suse.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Jan Beulich writes ("Re: [PATCH v2 0/8] x86/PV: avoid speculation abuse through guest accessors"):
> On 19.02.2021 17:13, Ian Jackson wrote:
> > Jan Beulich writes ("Re: [PATCH v2 0/8] x86/PV: avoid speculation abuse through guest accessors"):
> > I think 4-6 and 8 are good candidates for the reasons you give, and
> > because they seem low risk to me.  Have you used any automatic
> > techniques to check that there is no unintentional codegen change ?
> > (Eg, binary diffs, diffing sed-processed versions, or something.)
> 
> I did some manual inspection at the time of putting together that
> work, but nothing further to be honest.

I think that something automatic might be worthwhile, but I would like
an opinion from another hypervisor maintainer about the level of risk
posed by the possibility of manual slips.  Eg, how likely it is that
the compiler would catch them.
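For concreteness, an automatic check of the sort being discussed might be
sketched as below.  This is only an illustration, not an existing Xen
script; the normalisation regex and the disassembly file names
(`before.dis`/`after.dis`) are assumptions.

```shell
# Sketch: compare objdump disassembly of two builds, ignoring the
# address/encoding columns that may legitimately differ.
# Hypothetical inputs, e.g. produced with:
#   objdump -d xen-syms > before.dis   (and likewise for the patched build)

normalize() {
    # Drop the leading "addr:<tab>raw bytes<tab>" columns of objdump -d
    # output and canonicalise absolute hex operands.
    sed -e 's/^ *[0-9a-f]*:\t[0-9a-f ]*\t/\t/' \
        -e 's/0x[0-9a-f][0-9a-f]*/0xN/g' "$1"
}

codegen_diff() {
    # Exit status 0 iff the normalised disassemblies are identical.
    normalize "$1" > .a.norm
    normalize "$2" > .b.norm
    diff -u .a.norm .b.norm
}
```

Run as `codegen_diff before.dis after.dis`; an empty diff suggests (but
does not by itself prove) that there was no codegen change.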

> > To my naive eye patch 7 looks scary because it might be moving the
> > scope of a critical section.  Am I wrong about that ?
> 
> At the source level it moves things, yes. Generated code, again as
> per manual inspection, doesn't change, due to the pieces that the
> compiler is able to eliminate. So I guess it's not as scary as it
> may look.

Oh, you eyeballed the generated code ?  Cool.

Ian.


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 16:35:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 16:35:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86982.163772 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD8kJ-00018N-0R; Fri, 19 Feb 2021 16:35:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86982.163772; Fri, 19 Feb 2021 16:35:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD8kI-00018G-T0; Fri, 19 Feb 2021 16:35:10 +0000
Received: by outflank-mailman (input) for mailman id 86982;
 Fri, 19 Feb 2021 16:35:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lD8kI-00018B-5I
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 16:35:10 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lD8kI-0004eG-4Y
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 16:35:10 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lD8kI-0001th-3f
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 16:35:10 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lD8kE-0001NK-V9; Fri, 19 Feb 2021 16:35:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=3AsbATIsLWhono4AHYbRjfdqSrx+oa1XHvVDsYGMgDs=; b=Tf2LjBfcqb7XLUpKa1UssD7xQQ
	lfRnWOB2bz7aJ6YHXyOR44phaHV1qHQgMGUqTS9vvl4JQVnF+cgIy9nVtb6FKyuOuB5fJJeIB0UQI
	NshXzYLh2L4BCAtlcemAHBpgWP8JHScsGFsuF9QLAC32Igxhgti6j24l5W60SrGc8ydc=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24623.59578.746852.719352@mariner.uk.xensource.com>
Date: Fri, 19 Feb 2021 16:35:06 +0000
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
    Wei Liu <wl@xen.org>,
    Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v2 for-4.15] tools/libxl: Work around unintialised variable libxl__domain_get_device_model_uid()
In-Reply-To: <20210219150426.8498-1-andrew.cooper3@citrix.com>
References: <20210219150426.8498-1-andrew.cooper3@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Andrew Cooper writes ("[PATCH v2 for-4.15] tools/libxl: Work around unintialised variable libxl__domain_get_device_model_uid()"):
> Various versions of gcc, when compiling with -Og, complain:
> 
>   libxl_dm.c: In function 'libxl__domain_get_device_model_uid':
>   libxl_dm.c:256:12: error: 'kill_by_uid' may be used uninitialized in this function [-Werror=maybe-uninitialized]
>     256 |         if (kill_by_uid)
>         |            ^
> 
> The logic is very tangled.  Set kill_by_uid on every path.
> 
> No functional change.
> 
> Requested-by: Ian Jackson <iwj@xenproject.org>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Not-acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Ian Jackson <iwj@xenproject.org>
Release-Acked-by: Ian Jackson <iwj@xenproject.org>
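The class of warning being worked around here can be reproduced outside
libxl with a minimal test case (illustrative only; `demo.c` below is not
the libxl code, just the same shape of control flow):

```shell
# A variable assigned on some, but not all, paths before use: gcc's
# -Wmaybe-uninitialized analysis at -Og cannot prove the read is safe.
cat > demo.c <<'EOF'
#include <stdbool.h>
int cond(void);

int demo(void)
{
    bool kill_by_uid;

    if (cond() > 0)
        kill_by_uid = true;
    else if (cond() < 0)
        kill_by_uid = false;
    /* no assignment on the cond() == 0 path */

    return kill_by_uid ? 1 : 0;   /* may read uninitialized */
}
EOF
# The fix taken in the patch is the usual one for this warning:
# assign kill_by_uid on every path before it is read.
gcc -Og -Wall -Wmaybe-uninitialized -c demo.c 2>&1 || true
```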


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 16:36:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 16:36:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86985.163784 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD8lY-0001HH-AS; Fri, 19 Feb 2021 16:36:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86985.163784; Fri, 19 Feb 2021 16:36:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD8lY-0001HA-7S; Fri, 19 Feb 2021 16:36:28 +0000
Received: by outflank-mailman (input) for mailman id 86985;
 Fri, 19 Feb 2021 16:36:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TOKe=HV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lD8lW-0001H2-WE
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 16:36:27 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f3726355-8d64-4e67-8d0c-c32f77c47b61;
 Fri, 19 Feb 2021 16:36:26 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 74512ABAE;
 Fri, 19 Feb 2021 16:36:25 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f3726355-8d64-4e67-8d0c-c32f77c47b61
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613752585; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=sgxil5tD+XZQAMhCseq/9FJubb3RX2X/+D4pVtsfG1k=;
	b=ghzL4CmdyUaWjZLXnhxczVX5P1693R0v1eZoTn5xsWLGeL+TMLJrNDkt0qXx0iSP43aeUW
	Jx9g8Pf6xgzYvpMCqkwOYS8uh7mPglaLwgvvXKiIdo8lxGqW3pPjxksLkq13iTitrNzMwF
	bf0Du45v9c6HuHHtkf0il0hvFKnm8No=
Subject: Re: How does shadow page table work during migration?
To: Kevin Negy <kevinnegy@gmail.com>
References: <CACZWC-qK_biKgyi+ZiXnsHRscAbK9pz=kncdBA25QYWY129HCQ@mail.gmail.com>
Cc: xen-devel@lists.xenproject.org
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4d3a6f57-31e3-3709-4ed1-a39b5fe55347@suse.com>
Date: Fri, 19 Feb 2021 17:36:25 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <CACZWC-qK_biKgyi+ZiXnsHRscAbK9pz=kncdBA25QYWY129HCQ@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 19.02.2021 17:10, Kevin Negy wrote:
> I'm trying to understand how the shadow page table works in Xen,
> specifically during live migration. My understanding is that after shadow
> paging is enabled (sh_enable_log_dirty() in
> xen/arch/x86/mm/shadow/common.c), a shadow page table is created, which is
> a complete copy of the current guest page table. Then the CR3 register is
> switched to use this shadow page table as the active table while the guest
> page table is stored elsewhere. The guest page table itself (and not the
> individual entries in the page table) is marked as read only so that any
> guest memory access that requires the page table will result in a page
> fault. These page faults happen and are trapped to the Xen hypervisor. Xen
> will then update the shadow page table to match what the guest sees on its
> page tables.
> 
> Is this understanding correct?

Partly. For HVM, shadow mode (if so used) would be active already. For
PV, page tables would be read-only already. Log-dirty mode isn't about
page table modifications alone; its purpose is to notice _any_ page
that gets written to.

> If so, here is where I get confused. During the migration pre-copy phase,
> each pre-copy iteration reads the dirty bitmap (paging_log_dirty_op() in
> xen/arch/x86/mm/paging.c) and cleans it. This process seems to destroy all
> the shadow page tables of the domain with the call to shadow_blow_tables()
> in sh_clean_dirty_bitmap().
> 
> How is the dirty bitmap related to shadow page tables?

Shadow page tables are the mechanism to populate the dirty bitmap.

> Why destroy the
> entire shadow page table if it is the only legitimate page table in CR3 for
> the domain?

Shadow page tables will get re-populated as the guest touches memory.
Blowing the tables is not the same as turning off shadow mode.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 16:42:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 16:42:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86988.163796 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD8rG-0002Td-Vl; Fri, 19 Feb 2021 16:42:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86988.163796; Fri, 19 Feb 2021 16:42:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD8rG-0002TW-Sa; Fri, 19 Feb 2021 16:42:22 +0000
Received: by outflank-mailman (input) for mailman id 86988;
 Fri, 19 Feb 2021 16:42:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lD8rF-0002TN-3B
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 16:42:21 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lD8rF-0004md-0Z
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 16:42:21 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lD8rE-0002U1-V0
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 16:42:20 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lD8rB-0001Of-Mf; Fri, 19 Feb 2021 16:42:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=hWjHV8b6g6GxCZgpsU4NfV7RHhGd7hhRUk3swvyHJiM=; b=1GJaFAhJQNaoJib5nttyNsxj4a
	UirwyaQsb1ijyx2U3c4za5lOXzTN5vNSO0EXmTrb1kuTeXZ2hxY/BQqLDlDtb9y7jQy1FDTXFqTAF
	e68SSGf6JqZCKhOE2WLm14Y/tRDtrv4D/9+qdM5x7jkz/Zu9lykz0bNQofJWPMQmOaec=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24623.60009.486528.814166@mariner.uk.xensource.com>
Date: Fri, 19 Feb 2021 16:42:17 +0000
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel\@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
    Andrew Cooper <andrew.cooper3@citrix.com>,
    George Dunlap <george.dunlap@citrix.com>,
    Julien Grall <julien@xen.org>,
    Stefano Stabellini <sstabellini@kernel.org>,
    Wei Liu <wl@xen.org>
Subject: Re: [for-4.15 PATCH] build: remove more absolute paths from dependency tracking
 files
In-Reply-To: <0a68efee-9595-b272-fc8b-8ceb284d3163@suse.com>
References: <0a68efee-9595-b272-fc8b-8ceb284d3163@suse.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Jan Beulich writes ("[PATCH] build: remove more absolute paths from dependency tracking files"):
> d6b12add90da ("DEPS handling: Remove absolute paths from references to
> cwd") took care of massaging the dependencies of the output file, but
> for our passing of -MP to the compiler to take effect the same needs to
> be done on the "phony" rules that the compiler emits.

Thanks.

I think this is a bugfix.  As discussed in d6b12add90da, these absolute
paths can cause build races.  So:

Release-Acked-by: Ian Jackson <iwj@xenproject.org>

> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/Config.mk
> +++ b/Config.mk
> @@ -63,7 +63,7 @@ DEPS_INCLUDE = $(addsuffix .d2, $(basena
>  DEPS_RM = $(DEPS) $(DEPS_INCLUDE)
>  
>  %.d2: %.d
> -	sed "s! $$PWD/! !" $^ >$@.tmp && mv -f $@.tmp $@
> +	sed "s!\(^\| \)$$PWD/! !" $^ >$@.tmp && mv -f $@.tmp $@
>  
>  include $(XEN_ROOT)/config/$(XEN_OS).mk
>  include $(XEN_ROOT)/config/$(XEN_TARGET_ARCH).mk
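The difference between the two sed expressions can be demonstrated on a
toy dependency file (illustrative paths; `PWD_DEMO` stands in for the
`$$PWD` of the real rule):

```shell
# -MP makes the compiler emit a "phony" rule for each header, and that
# rule's path starts at the beginning of the line, so the old,
# space-anchored pattern never matches it.
PWD_DEMO=/build/xen
cat > demo.d <<EOF
foo.o: $PWD_DEMO/foo.c
$PWD_DEMO/foo.h:
EOF

sed "s! $PWD_DEMO/! !" demo.d        > old.d2  # misses the phony rule
sed "s!\(^\| \)$PWD_DEMO/! !" demo.d > new.d2  # catches both
```

After the change, `new.d2` no longer mentions the absolute build
directory at all, which is what the dependency-tracking cleanup relies
on.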


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 16:45:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 16:45:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86991.163808 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD8uD-0002hw-EF; Fri, 19 Feb 2021 16:45:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86991.163808; Fri, 19 Feb 2021 16:45:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD8uD-0002hp-Av; Fri, 19 Feb 2021 16:45:25 +0000
Received: by outflank-mailman (input) for mailman id 86991;
 Fri, 19 Feb 2021 16:45:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TOKe=HV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lD8uC-0002hi-Gq
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 16:45:24 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a972d635-e97e-4273-b171-35f8953d4037;
 Fri, 19 Feb 2021 16:45:23 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 69988ABAE;
 Fri, 19 Feb 2021 16:45:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a972d635-e97e-4273-b171-35f8953d4037
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613753122; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=v0w1XgiqNd2VbKokW7SYvXmycUfv8Dv82aGvuA9IRTw=;
	b=QAZ/xy4ByUgYExjwd5G882/QUN1EjXdGY+0jj9lwPRLAlB/pQK/Gnc69+Ebscl3StaaMNx
	b42BaqSL+H1AuhuAXbu6xK7TGWKNMIivdk/xYfyCvtQ55znyMHZycMIrWBANhaCMs6cCpG
	9jTbK7ZTuNXXOiYo8g/kndboM0oMffk=
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2] VMX: use a single, global APIC access page
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Kevin Tian <kevin.tian@intel.com>, Julien Grall <julien@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Ian Jackson <iwj@xenproject.org>
Message-ID: <a895386d-db14-2743-d8f9-09f9509a510a@suse.com>
Date: Fri, 19 Feb 2021 17:45:22 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

The address of this page is used by the CPU only to recognize when to
access the virtual APIC page instead. No accesses ever actually go to
this page. It only needs to be present in the (CPU) page tables so that
address translation produces its address as the result for the
respective accesses.

By making this page global, we also eliminate the need to refcount it,
or to assign it to any domain in the first place.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Avoid insertion when !has_vlapic(). Split off change to
    p2m_get_iommu_flags().
---
Hooking p2m insertion onto arch_domain_creation_finished() isn't very
nice, but I couldn't find any better hook (nor a good place to
introduce a new one). In particular there appear to be no hvm_funcs
hooks being used on a domain-wide basis (except for init/destroy of
course). I did consider connecting this to the setting of
HVM_PARAM_IDENT_PT, but considered this no better, all the more so
because the tool stack could be smarter and avoid setting that param
when not needed.

I did further consider not allocating any real page at all, and instead
using the address of some unpopulated space (which would require
announcing this page as reserved to Dom0, so that it wouldn't put any
PCI MMIO BARs there). But I thought this would be too controversial,
because of the possible risks associated with such an approach.

--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1007,6 +1007,8 @@ int arch_domain_soft_reset(struct domain
 
 void arch_domain_creation_finished(struct domain *d)
 {
+    if ( is_hvm_domain(d) )
+        hvm_domain_creation_finished(d);
 }
 
 #define xen_vcpu_guest_context vcpu_guest_context
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -66,8 +66,7 @@ boolean_param("force-ept", opt_force_ept
 static void vmx_ctxt_switch_from(struct vcpu *v);
 static void vmx_ctxt_switch_to(struct vcpu *v);
 
-static int  vmx_alloc_vlapic_mapping(struct domain *d);
-static void vmx_free_vlapic_mapping(struct domain *d);
+static int alloc_vlapic_mapping(void);
 static void vmx_install_vlapic_mapping(struct vcpu *v);
 static void vmx_update_guest_cr(struct vcpu *v, unsigned int cr,
                                 unsigned int flags);
@@ -78,6 +77,8 @@ static int vmx_msr_read_intercept(unsign
 static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content);
 static void vmx_invlpg(struct vcpu *v, unsigned long linear);
 
+static mfn_t __read_mostly apic_access_mfn;
+
 /* Values for domain's ->arch.hvm_domain.pi_ops.flags. */
 #define PI_CSW_FROM (1u << 0)
 #define PI_CSW_TO   (1u << 1)
@@ -401,7 +402,6 @@ static int vmx_domain_initialise(struct
         .to   = vmx_ctxt_switch_to,
         .tail = vmx_do_resume,
     };
-    int rc;
 
     d->arch.ctxt_switch = &csw;
 
@@ -411,21 +411,16 @@ static int vmx_domain_initialise(struct
      */
     d->arch.hvm.vmx.exec_sp = is_hardware_domain(d) || opt_ept_exec_sp;
 
-    if ( !has_vlapic(d) )
-        return 0;
-
-    if ( (rc = vmx_alloc_vlapic_mapping(d)) != 0 )
-        return rc;
-
     return 0;
 }
 
-static void vmx_domain_relinquish_resources(struct domain *d)
+static void domain_creation_finished(struct domain *d)
 {
-    if ( !has_vlapic(d) )
-        return;
 
-    vmx_free_vlapic_mapping(d);
+    if ( has_vlapic(d) && !mfn_eq(apic_access_mfn, _mfn(0)) &&
+         set_mmio_p2m_entry(d, gaddr_to_gfn(APIC_DEFAULT_PHYS_BASE),
+                            apic_access_mfn, PAGE_ORDER_4K) )
+        domain_crash(d);
 }
 
 static void vmx_init_ipt(struct vcpu *v)
@@ -2407,7 +2402,7 @@ static struct hvm_function_table __initd
     .cpu_up_prepare       = vmx_cpu_up_prepare,
     .cpu_dead             = vmx_cpu_dead,
     .domain_initialise    = vmx_domain_initialise,
-    .domain_relinquish_resources = vmx_domain_relinquish_resources,
+    .domain_creation_finished = domain_creation_finished,
     .vcpu_initialise      = vmx_vcpu_initialise,
     .vcpu_destroy         = vmx_vcpu_destroy,
     .save_cpu_ctxt        = vmx_save_vmcs_ctxt,
@@ -2653,7 +2548,7 @@ const struct hvm_function_table * __init
 {
     set_in_cr4(X86_CR4_VMXE);
 
-    if ( vmx_vmcs_init() )
+    if ( vmx_vmcs_init() || alloc_vlapic_mapping() )
     {
         printk("VMX: failed to initialise.\n");
         return NULL;
@@ -3208,7 +3203,7 @@ gp_fault:
     return X86EMUL_EXCEPTION;
 }
 
-static int vmx_alloc_vlapic_mapping(struct domain *d)
+static int __init alloc_vlapic_mapping(void)
 {
     struct page_info *pg;
     mfn_t mfn;
@@ -3216,53 +3211,28 @@ static int vmx_alloc_vlapic_mapping(stru
     if ( !cpu_has_vmx_virtualize_apic_accesses )
         return 0;
 
-    pg = alloc_domheap_page(d, MEMF_no_refcount);
+    pg = alloc_domheap_page(NULL, 0);
     if ( !pg )
         return -ENOMEM;
 
-    if ( !get_page_and_type(pg, d, PGT_writable_page) )
-    {
-        /*
-         * The domain can't possibly know about this page yet, so failure
-         * here is a clear indication of something fishy going on.
-         */
-        domain_crash(d);
-        return -ENODATA;
-    }
-
     mfn = page_to_mfn(pg);
     clear_domain_page(mfn);
-    d->arch.hvm.vmx.apic_access_mfn = mfn;
+    apic_access_mfn = mfn;
 
-    return set_mmio_p2m_entry(d, gaddr_to_gfn(APIC_DEFAULT_PHYS_BASE), mfn,
-                              PAGE_ORDER_4K);
-}
-
-static void vmx_free_vlapic_mapping(struct domain *d)
-{
-    mfn_t mfn = d->arch.hvm.vmx.apic_access_mfn;
-
-    d->arch.hvm.vmx.apic_access_mfn = _mfn(0);
-    if ( !mfn_eq(mfn, _mfn(0)) )
-    {
-        struct page_info *pg = mfn_to_page(mfn);
-
-        put_page_alloc_ref(pg);
-        put_page_and_type(pg);
-    }
+    return 0;
 }
 
 static void vmx_install_vlapic_mapping(struct vcpu *v)
 {
     paddr_t virt_page_ma, apic_page_ma;
 
-    if ( mfn_eq(v->domain->arch.hvm.vmx.apic_access_mfn, _mfn(0)) )
+    if ( !has_vlapic(v->domain) || mfn_eq(apic_access_mfn, _mfn(0)) )
         return;
 
     ASSERT(cpu_has_vmx_virtualize_apic_accesses);
 
     virt_page_ma = page_to_maddr(vcpu_vlapic(v)->regs_page);
-    apic_page_ma = mfn_to_maddr(v->domain->arch.hvm.vmx.apic_access_mfn);
+    apic_page_ma = mfn_to_maddr(apic_access_mfn);
 
     vmx_vmcs_enter(v);
     __vmwrite(VIRTUAL_APIC_PAGE_ADDR, virt_page_ma);
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -106,6 +106,7 @@ struct hvm_function_table {
      * Initialise/destroy HVM domain/vcpu resources
      */
     int  (*domain_initialise)(struct domain *d);
+    void (*domain_creation_finished)(struct domain *d);
     void (*domain_relinquish_resources)(struct domain *d);
     void (*domain_destroy)(struct domain *d);
     int  (*vcpu_initialise)(struct vcpu *v);
@@ -390,6 +391,12 @@ static inline bool hvm_has_set_descripto
     return hvm_funcs.set_descriptor_access_exiting;
 }
 
+static inline void hvm_domain_creation_finished(struct domain *d)
+{
+    if ( hvm_funcs.domain_creation_finished )
+        alternative_vcall(hvm_funcs.domain_creation_finished, d);
+}
+
 static inline int
 hvm_guest_x86_mode(struct vcpu *v)
 {
@@ -765,6 +772,11 @@ static inline void hvm_invlpg(const stru
 {
     ASSERT_UNREACHABLE();
 }
+
+static inline void hvm_domain_creation_finished(struct domain *d)
+{
+    ASSERT_UNREACHABLE();
+}
 
 /*
  * Shadow code needs further cleanup to eliminate some HVM-only paths. For
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -58,7 +58,6 @@ struct ept_data {
 #define _VMX_DOMAIN_PML_ENABLED    0
 #define VMX_DOMAIN_PML_ENABLED     (1ul << _VMX_DOMAIN_PML_ENABLED)
 struct vmx_domain {
-    mfn_t apic_access_mfn;
     /* VMX_DOMAIN_* */
     unsigned int status;
 


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 16:47:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 16:47:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86993.163820 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD8wF-0002vA-RM; Fri, 19 Feb 2021 16:47:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86993.163820; Fri, 19 Feb 2021 16:47:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD8wF-0002v3-O6; Fri, 19 Feb 2021 16:47:31 +0000
Received: by outflank-mailman (input) for mailman id 86993;
 Fri, 19 Feb 2021 16:47:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lD8wE-0002uw-A6
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 16:47:30 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lD8wE-0004sa-5O
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 16:47:30 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lD8wE-0002yV-3m
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 16:47:30 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lD8wA-0001QN-Ur; Fri, 19 Feb 2021 16:47:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:CC:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=1MkExnMamECWYQx76dG5d7NGinCeUMdPZgwxTkBhnYI=; b=3QEKuczr8RsRvHYQmhBcF37bpZ
	UGxe9ZYkEfUBnKOH7dgEtuC2u70HXRr/0u+RAakLhbxBgEyd8JkKBU+bHUyB0xDaUUSvZz5738DrZ
	de5cvRnAF8YLueib2RgA+ftYsnWswU10bmeSFardA6q8w9EM85cpcoSz4uxe6plHIM04=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 8bit
Message-ID: <24623.60318.719046.601197@mariner.uk.xensource.com>
Date: Fri, 19 Feb 2021 16:47:26 +0000
To: committers@xenproject.org,
CC: xen-devel@lists.xenproject.org,
    community.manager@xenproject.org
Subject: [ANNOUNCE] Xen 4.15 - hard codefreeze today
In-Reply-To: <24612.676.586372.372903@mariner.uk.xensource.com>
References: <24612.676.586372.372903@mariner.uk.xensource.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Today is the last day for committing anything to 4.15 without an
explicit release-ack.

Today, still:

    You may continue to commit straightforward bugfixes, docs changes, and
    new tests, without a release-ack.  Anything involving reorganisation
    or refactoring should get a release-ack.

>From the end of today, *all* changes must have a Release-Ack.

I intend to grant release-acks for bugfixes.  By and large I will try
to adopt a risk/benefit analysis.  As the freeze goes on I will grow
stricter.  I will be much more liberal with docs and tests.

So we are now here in the release schedule:

> * Friday 19th February   Code freeze
> 
>        Bugfixes only, all changes to be approved by the Release Manager.
> 
> * Week of 19th March **tentative**    Release
>        (probably Tuesday or Wednesday)
...
>   The release date is provisional and will be adjusted in the light
>   of apparent code quality etc.

My current list of issues I am tracking for the release is below.  If
you know anything more about any of these issues, please do let me know.

I'm slightly concerned that for some of the issues on that list I'm
not aware of any progress being made.  But maybe I haven't been CC'd
on all the relevant mail, in which case I'd appreciate a summary update.

Also, please try to make sure that any patches targeted for 4.15
contain `4.15` in the Subject line.  Typically, something like this:
   [PATCH for-4.15 v2] re-invert the neutron polarisation

Thanks,
Ian.


OPEN ISSUES AND BLOCKERS
------------------------

C. Fallout from MSR handling behavioral change.

Information from
  Jan Beulich <jbeulich@suse.com>
  Andrew Cooper <andrew.cooper3@citrix.com>

Andrew writes:
| Bugs are "VMs which boot on earlier releases don't boot on
| 4.15 at the moment".
| 
| Still WIP and on my TODO list.

I think this
  [PATCH v2 4/4] tools/libs: Apply MSR policy to a guest
is probably part of the answer but it hasn't been committed yet.


D. Use-after-free in the IOMMU code

Information from
  Julien Grall <julien@xen.org>
References
 [PATCH for-4.15 0/4] xen/iommu: Collection of bug fixes for IOMMU teardown
 <20201222154338.9459-1-julien@xen.org>

Quoting the 0/:
| This series is a collection of bug fixes for the IOMMU teardown code.
| All of them are candidate for 4.15 as they can either leak memory or
| lead to host crash/host corruption.

These patches are still being discussed.  One went in, so now we are
talking about
  [PATCH v3 0/3] xen/iommu: Collection of bug fixes for IOMMU teardown


F. BUG: credit=sched2 machine hang when using DRAKVUF

Information from
  Dario Faggioli <dfaggioli@suse.com>
References
  https://lists.xen.org/archives/html/xen-devel/2020-05/msg01985.html
  https://lists.xenproject.org/archives/html/xen-devel/2020-10/msg01561.html
  https://bugzilla.opensuse.org/show_bug.cgi?id=1179246

Quoting Dario:
| Manifests only with certain combination of hardware and workload. 
| I'm not reproducing, but there are multiple reports of it (see 
| above). I'm investigating and trying to come up at least with 
| debug patches that one of the reporter should be able and willing to 
| test.

Dario is working on this.  Last update was on 29.1.21, I believe.


G. Null scheduler and vwfi native problem

Information from
  Dario Faggioli <dfaggioli@suse.com>

References
  https://lists.xenproject.org/archives/html/xen-devel/2021-01/msg01634.html

Quoting Dario:
| RCU issues, but manifests due to scheduler behavior (especially   
| NULL scheduler, especially on ARM).
|
| Patches that should solve the issue for ARM posted already. They
| will need to be slightly adjusted to cover x86 as well.

As of last update from Dario 29.1.21:
waiting for test report from submitter.


H. Ryzen 4000 (Mobile) Softlocks/Micro-stutters

Information from
  Dario Faggioli <dfaggioli@suse.com>

As of last update from Dario 29.1.21:
Discussions currently ongoing about the severity of this issue.


I. "x86/PV: avoid speculation abuse through guest accessors"

Information from
  Jan Beulich <jbeulich@suse.com>

| F. The almost-XSA "x86/PV: avoid speculation abuse through guest
| accessors" - the first 4 patches are needed to address the actual
| issue. The next 3 patches are needed to get the tree into
| consistent state again, identifier-wise. The remaining patches
| can probably wait.

The primary fixes for this have reviews and R-A and will be going in
shortly.  There is some followup work which needs to be reviewed and
acked.


J. x86/time: calibration rendezvous adjustments

Information from
  Jan Beulich <jbeulich@suse.com>

Not entirely a regression.  But 3 out of the 4 patches have reviews
and R-A and should be going in shortly.

Patch 4/ is RFC and it's not clear to me whether it's targeted at 4.15.


K. Problems with xl save / cancel

Information from Jürgen Groß:
  xl daemon won't kill the domain after it has gone through a
  suspend-cancel cycle.

Investigation is ongoing.  Not clear at this stage how big a blocker
this is.


L. ABI stability checking

   [PATCH for-4.15 00/10] tools: Support to use abi-dumper on libraries
   [PATCH v2 for-4.15] tools/libxl: Work around unintialised variable libxl__domain_get_device_model_uid()

This is testing/build work and will enable ABI checking of future
changes to 4.15 after its release.  I don't think it's a blocker but
it would be nice to have.  It has R-A and I think most acks now.


NEWLY CLOSED ISSUES
===================


A. HPET/PIT issue on newer Intel systems

Information from
  Andrew Cooper <andrew.cooper3@citrix.com>

| This has had literally tens of reports across the devel and users
| mailing lists, and prevents Xen from booting at all on the past two
| generations of Intel laptop.  I've finally got a repro and posted a
| fix to the list, but still in progress.

Fixed.  c/s e1de4c196a.


B. "scheduler broken" bugs.

Information from
  Andrew Cooper <andrew.cooper3@citrix.com>
  Dario Faggioli <dfaggioli@suse.com>

Quoting Andrew Cooper
| We've had 4 or 5 reports of Xen not working, and very little
| investigation on whats going on.  Suspicion is that there might be
| two bugs, one with smt=0 on recent AMD hardware, and one more
| general "some workloads cause negative credit" and might or might
| not be specific to credit2 (debugging feedback differs - also might
| be 3 underlying issue).

Dario has expanded on this and I am closing this one out in favour of
F, G and H.


PREVIOUS CLOSED ISSUES
======================

E. zstd support


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 16:54:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 16:54:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86995.163831 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD92z-00047E-MR; Fri, 19 Feb 2021 16:54:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86995.163831; Fri, 19 Feb 2021 16:54:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD92z-000477-JS; Fri, 19 Feb 2021 16:54:29 +0000
Received: by outflank-mailman (input) for mailman id 86995;
 Fri, 19 Feb 2021 16:54:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TOKe=HV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lD92y-000472-Dc
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 16:54:28 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 94e6f463-90e6-4d11-a102-23012299aaf7;
 Fri, 19 Feb 2021 16:54:27 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EAF9FAF30;
 Fri, 19 Feb 2021 16:54:26 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 94e6f463-90e6-4d11-a102-23012299aaf7
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613753667; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=HkcX+wckKjMLRGbBnQDECS84T554CBI5qL1tK459O6Y=;
	b=nyXX6VtymjZ1+J2uhgChDK0Q3qSsY5sRZMZum2dy9wShYMTUKvF4vMHfvCe+gHqpcXBBbf
	rCxqIvKahevkHi6zRG22WwEUAWc3COJT2jpUNeFfhWl6kzqS3QuOBDNChKcLuWEaSn/hZS
	m9TBmLnukOrCbmRzphCkK/8pCCKaajU=
Subject: Re: [ANNOUNCE] Xen 4.15 - hard codefreeze today
To: Ian Jackson <iwj@xenproject.org>
Cc: xen-devel@lists.xenproject.org, community.manager@xenproject.org,
 committers@xenproject.org
References: <24612.676.586372.372903@mariner.uk.xensource.com>
 <24623.60318.719046.601197@mariner.uk.xensource.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4f04b710-bc3e-1b0d-f13c-becebc2b017f@suse.com>
Date: Fri, 19 Feb 2021 17:54:27 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <24623.60318.719046.601197@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 19.02.2021 17:47, Ian Jackson wrote:
> J. x86/time: calibration rendezvous adjustments
> 
> Information from
>   Jan Beulich <jbeulich@suse.com>
> 
> Not entirely a regression.  But 3 out of the 4 patches have reviews
> and R-A and should be going in shortly.
> 
> Patch 4/ is RFC and it's not clear to me whether it's targeted at 4.15.

No, that's an intended optimization which can wait.  I'm still
trying to find a way to demonstrate that it actually makes a
difference.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 16:55:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 16:55:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.86997.163844 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD93b-0004Ci-VL; Fri, 19 Feb 2021 16:55:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 86997.163844; Fri, 19 Feb 2021 16:55:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD93b-0004Cb-SM; Fri, 19 Feb 2021 16:55:07 +0000
Received: by outflank-mailman (input) for mailman id 86997;
 Fri, 19 Feb 2021 16:55:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lD93a-0004CU-Mk
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 16:55:06 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lD93a-000502-Ky
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 16:55:06 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lD93a-0007Tv-IV
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 16:55:06 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lD93X-0001S9-4c; Fri, 19 Feb 2021 16:55:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=MZHIVyOQN/1/fIErbtWNkeK/kPFL3SkkUFk13SXq5rM=; b=a9Kd1KLgA6zujPxFVq5WTODPNr
	YoQEcRFdbIhJFtTe4ho1V5rd1fKdOn4eSy/Mqmp2nYQ8kUiUayIhNG7TC9DvdKsXO3afwXlncHal6
	S4T7aCH1vNAU6/PocRjELu+1qlCgWK9xsYY6HgzIVjRmHoWBLvlvBAmZKHDR4r1Fw/7s=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24623.60774.894817.439652@mariner.uk.xensource.com>
Date: Fri, 19 Feb 2021 16:55:02 +0000
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel\@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
    Andrew Cooper <andrew.cooper3@citrix.com>,
    George Dunlap <george.dunlap@citrix.com>,
    Julien Grall <julien@xen.org>,
    Stefano Stabellini <sstabellini@kernel.org>,
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH] build: remove more absolute paths from dependency tracking
 files
In-Reply-To: <0a68efee-9595-b272-fc8b-8ceb284d3163@suse.com>
References: <0a68efee-9595-b272-fc8b-8ceb284d3163@suse.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Jan Beulich writes ("[PATCH] build: remove more absolute paths from dependency tracking files"):
> d6b12add90da ("DEPS handling: Remove absolute paths from references to
> cwd") took care of massaging the dependencies of the output file, but
> for our passing of -MP to the compiler to take effect the same needs to
> be done on the "phony" rules that the compiler emits.

Reviewed-by: Ian Jackson <iwj@xenproject.org>

> --- a/Config.mk
> +++ b/Config.mk
> @@ -63,7 +63,7 @@ DEPS_INCLUDE = $(addsuffix .d2, $(basena
>  DEPS_RM = $(DEPS) $(DEPS_INCLUDE)
>  
>  %.d2: %.d
> -	sed "s! $$PWD/! !" $^ >$@.tmp && mv -f $@.tmp $@
> +	sed "s!\(^\| \)$$PWD/! !" $^ >$@.tmp && mv -f $@.tmp $@

Urgh I hate having to remember the crazy \ rules for all these
different kinds of regexps.  I have to test it every time...
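
For the record, a minimal sketch of the behavioural difference (file
name and path invented for illustration; note that `\|` alternation in
a basic regexp is a GNU sed extension):

```shell
# -MP makes the compiler emit phony rules whose absolute paths start at
# column 0, while ordinary prerequisites are preceded by a space.
D=/home/user/xen                       # stands in for $PWD in Config.mk
printf '%s\n' "a.o: $D/a.c" "$D/a.h:" > demo.d

# Old pattern: only strips "$D/" after a space; the phony rule is missed.
sed "s! $D/! !" demo.d
# -> a.o: a.c
# -> /home/user/xen/a.h:

# New pattern: also matches at line start, so the phony rule is fixed too.
sed "s!\(^\| \)$D/! !" demo.d
# -> a.o: a.c
# ->  a.h:
```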

Thanks for fixing this, anyway.

Ian.


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 16:59:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 16:59:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87001.163856 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD97W-0004X9-Fg; Fri, 19 Feb 2021 16:59:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87001.163856; Fri, 19 Feb 2021 16:59:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD97W-0004X2-Cb; Fri, 19 Feb 2021 16:59:10 +0000
Received: by outflank-mailman (input) for mailman id 87001;
 Fri, 19 Feb 2021 16:59:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TOKe=HV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lD97U-0004Wx-B0
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 16:59:08 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9c6d8442-b99e-46c0-b691-6267cd3f6abc;
 Fri, 19 Feb 2021 16:59:07 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C9219ABAE;
 Fri, 19 Feb 2021 16:59:06 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9c6d8442-b99e-46c0-b691-6267cd3f6abc
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613753946; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=/41fLRFKZgMwwDxZ8eIofzB3e4gTmyzT+VBjdn1TFZg=;
	b=S2XvL8JLpeQxjJVY+ZuY5HAl6XDfgAVlGLYRdTgVAebhlr67uTLeqVcBJGYc7SU6UkNMKc
	J1m/ecfAHz2nTLGP5vp/ojLMt4N80LGuutUFsc/Ld7SjzMIpExFff7+WCxOomVKPwIAHaX
	7TpZ9eIPCS0njFlrRsiOt5dOUIn68qo=
Subject: Re: [PATCH v2] VMX: use a single, global APIC access page
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Kevin Tian <kevin.tian@intel.com>, Julien Grall <julien@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Ian Jackson <iwj@xenproject.org>
References: <a895386d-db14-2743-d8f9-09f9509a510a@suse.com>
Message-ID: <dcada8be-a91d-87f0-c579-30f3c7e3607e@suse.com>
Date: Fri, 19 Feb 2021 17:59:07 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <a895386d-db14-2743-d8f9-09f9509a510a@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 19.02.2021 17:45, Jan Beulich wrote:
> The address of this page is used by the CPU only to recognize when to
> access the virtual APIC page instead. No accesses would ever go to this
> page. It only needs to be present in the (CPU) page tables so that
> address translation will produce its address as result for respective
> accesses.
> 
> By making this page global, we also eliminate the need to refcount it,
> or to assign it to any domain in the first place.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v2: Avoid insertion when !has_vlapic(). Split off change to
>     p2m_get_iommu_flags().
> ---
> Hooking p2m insertion onto arch_domain_creation_finished() isn't very
> nice, but I couldn't find any better hook (nor a good place where to
> introduce a new one). In particular there look to be no hvm_funcs hooks
> being used on a domain-wide basis (except for init/destroy of course).
> I did consider connecting this to the setting of HVM_PARAM_IDENT_PT, but
> considered this no better, the more that the tool stack could be smarter
> and avoid setting that param when not needed.

While this patch was triggered not just by Julien's observation that
the early p2m insertion is a problem, but also by having run into
this odd code many times before, it is - especially at this stage -
perhaps an option to split the change into just the movement of the
set_mmio_p2m_entry() invocation on one side and all the rest on the
other, deferring that rest until after 4.15.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 17:05:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 17:05:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87006.163868 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD9Dk-0005QP-6W; Fri, 19 Feb 2021 17:05:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87006.163868; Fri, 19 Feb 2021 17:05:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD9Dk-0005QI-1t; Fri, 19 Feb 2021 17:05:36 +0000
Received: by outflank-mailman (input) for mailman id 87006;
 Fri, 19 Feb 2021 17:05:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lD9Di-0005QB-UO
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 17:05:34 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lD9Di-0005DG-Rv
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 17:05:34 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lD9Di-0008Jp-QZ
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 17:05:34 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lD9Df-0001Ty-Ly; Fri, 19 Feb 2021 17:05:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=+EICzCcB8txsTdiA7EvP5djlrzhJs+nXTRVs3jeK/94=; b=0PWmZ8c1AcEk3iU3/sBumcfmwD
	sYik+Q/bfsPuUjGl4K4VolYKtfnZfzLIy08Qnts2sbNcX+lO2Yu9Wyz0/wcjvvREFjEnowo1c5Isy
	iTJE5rllXLFvvQwoSZDAce8dVEQCnxkYTNgLtxTU40/qSaKU7m5K441kj0cgRPyf8bzk=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24623.61403.440917.434@mariner.uk.xensource.com>
Date: Fri, 19 Feb 2021 17:05:31 +0000
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel\@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
    Kevin Tian <kevin.tian@intel.com>,
    Julien Grall <julien@xen.org>,
    Andrew Cooper <andrew.cooper3@citrix.com>,
    Wei Liu <wl@xen.org>,
    Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
    George Dunlap <George.Dunlap@citrix.com>
Subject: Re: [PATCH for-4.15 v2] VMX: use a single, global APIC access page [and 1 more messages]
In-Reply-To: <a895386d-db14-2743-d8f9-09f9509a510a@suse.com>,
	<dcada8be-a91d-87f0-c579-30f3c7e3607e@suse.com>
References: <a895386d-db14-2743-d8f9-09f9509a510a@suse.com>
	<dcada8be-a91d-87f0-c579-30f3c7e3607e@suse.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Jan Beulich writes ("[PATCH v2] VMX: use a single, global APIC access page"):
> The address of this page is used by the CPU only to recognize when to
> access the virtual APIC page instead. No accesses would ever go to this
> page. It only needs to be present in the (CPU) page tables so that
> address translation will produce its address as result for respective
> accesses.
> 
> By making this page global, we also eliminate the need to refcount it,
> or to assign it to any domain in the first place.

Thanks for this.

>From this:

Jan Beulich writes ("Re: [PATCH v2] VMX: use a single, global APIC access page"):
> While this patch was triggered not just by Julien's observation that
> the early p2m insertion is a problem, but also by having run into
> this odd code many times before, it is - especially at this stage -
> perhaps an option to split the change into just the movement of the
> set_mmio_p2m_entry() invocation on one side and all the rest on the
> other, deferring that rest until after 4.15.

I infer that this contains a bugfix, but perhaps other
changes/improvements too.

George, I think you're our expert on this refcounting stuff - what do
you think of this ?

I guess my key question is whether this change will introduce risk by
messing with the complex refcounting machinery - or remove risk by
removing an interaction with the refcounting.

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 17:26:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 17:26:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87013.163900 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD9Xi-0007Zx-4g; Fri, 19 Feb 2021 17:26:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87013.163900; Fri, 19 Feb 2021 17:26:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD9Xi-0007Zp-0t; Fri, 19 Feb 2021 17:26:14 +0000
Received: by outflank-mailman (input) for mailman id 87013;
 Fri, 19 Feb 2021 17:26:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZyFk=HV=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1lD9Xh-0007Yz-3Z
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 17:26:13 +0000
Received: from mail-wm1-f52.google.com (unknown [209.85.128.52])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 14f4da07-ec77-40ce-b2c9-f2fea1d6109a;
 Fri, 19 Feb 2021 17:26:12 +0000 (UTC)
Received: by mail-wm1-f52.google.com with SMTP id o24so8306566wmh.5
 for <xen-devel@lists.xenproject.org>; Fri, 19 Feb 2021 09:26:12 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id l38sm11699513wmp.19.2021.02.19.09.26.10
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 19 Feb 2021 09:26:11 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 14f4da07-ec77-40ce-b2c9-f2fea1d6109a
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:content-transfer-encoding
         :in-reply-to;
        bh=TuBoLkcy6c7lcYki0OpGyZY2qotADkjln3ECovhSUvA=;
        b=r3zvaXhay9MUw5a210J5OY4zkeASS/FIJ7yVXtW/04ixi4JBmpbs07Z4rp1ZGpnX0L
         Qagaq9DyMFnylx7L2U2N1S2qbk9Xg2BigkcQTRmMoYk69eRhYGjDNs1/4SGOq6rW7Qly
         n435zla+Mbk0jAfhsfMO28OLi+I9CXZ5wemcS6ejNgAKDpIIzyh9h/No/S6SvaKlT+Lp
         Q3hfNag+8jGWpBTH8q9wkT6H+LEQbXG3exwbQbx4FyAo22ZAPCwajcC26C24uPoWJlBy
         NHlZeGsxCENdZafuJfbCzVj7kOBdtp5aaE/ToPpJqTwtmpksdUlEOx9KlHlgTEeasTKX
         /+jQ==
X-Gm-Message-State: AOAM533nf0ZufP606GyGW0O9KRg1oROeeIBKj1xkO8+TKeiZUtPcxsqL
	UOimhEP7yVkz6ahdlAzID18=
X-Google-Smtp-Source: ABdhPJzbX57P44obtewQ/Q9zlPmY3YFqZUmV2JvJKG6QWp/n2Lh1UezWMkqEh41v4uohPPS05Mg1xg==
X-Received: by 2002:a1c:5f84:: with SMTP id t126mr7347886wmb.52.1613755571721;
        Fri, 19 Feb 2021 09:26:11 -0800 (PST)
Date: Fri, 19 Feb 2021 17:26:09 +0000
From: Wei Liu <wl@xen.org>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
	Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
Subject: Re: [PATCH-for-4.15] tools/libs/light: fix xl save -c handling
Message-ID: <20210219172609.xxrmhkcafhwgoa6w@liuwe-devbox-debian-v2>
References: <20210219141337.6934-1-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20210219141337.6934-1-jgross@suse.com>

On Fri, Feb 19, 2021 at 03:13:37PM +0100, Juergen Gross wrote:
> libxl_domain_resume() won't work correctly for the case it was called
> due to a "xl save -c" command, i.e. to continue the suspended domain.
> 
> The information to do that is not saved in libxl__dm_resume_state for
> non-HVM domains.
> 
> Fixes: 6298f0eb8f443 ("libxl: Re-introduce libxl__domain_resume")
> Reported-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> Signed-off-by: Juergen Gross <jgross@suse.com>

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 17:30:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 17:30:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87015.163911 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD9by-00005h-Ld; Fri, 19 Feb 2021 17:30:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87015.163911; Fri, 19 Feb 2021 17:30:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD9by-00005a-Ig; Fri, 19 Feb 2021 17:30:38 +0000
Received: by outflank-mailman (input) for mailman id 87015;
 Fri, 19 Feb 2021 17:30:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fNr7=HV=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1lD9bx-00005U-EG
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 17:30:37 +0000
Received: from mail-lj1-x234.google.com (unknown [2a00:1450:4864:20::234])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b652378e-a0e8-4c78-bbc8-aec455d1bad4;
 Fri, 19 Feb 2021 17:30:36 +0000 (UTC)
Received: by mail-lj1-x234.google.com with SMTP id r23so24008668ljh.1
 for <xen-devel@lists.xenproject.org>; Fri, 19 Feb 2021 09:30:36 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
MIME-Version: 1.0
References: <CAKf6xpv-LRCuo-qHHWMuukYtvJiR-i+-YhLUOZeqoAFd-=swEQ@mail.gmail.com>
 <1a3b90f4-564e-84d3-fd6a-3454e8753579@citrix.com> <20201015113109.GA68032@Air-de-Roger>
 <CAKf6xpsJYT7VCeaf6TxPNK1QD+3U9E8ST7E+mWtfDjw0k9L9dA@mail.gmail.com>
 <CAKf6xps1q9zMBeFg7C7ZhD-JcwQ6EG6+bYvvA9QT8PzzxKqMNg@mail.gmail.com>
 <20201021095809.o53b6hpvjl2lbqsi@Air-de-Roger> <CAKf6xpuTE4gBNe4YXPYh_hAMLaJduDuKL5_6aC4H=y6DRxaxvw@mail.gmail.com>
 <a4dd7778-9bd4-00c1-3056-96d435b70d70@suse.com> <CAKf6xpvKiWiU5Wsv2C1EiEFr77nMZTd+VHgkdk7qcKw1OFD8Vg@mail.gmail.com>
 <9bbf6768-a39e-2b3c-c4de-fd883cc9ef85@suse.com>
In-Reply-To: <9bbf6768-a39e-2b3c-c4de-fd883cc9ef85@suse.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Fri, 19 Feb 2021 12:30:23 -0500
Message-ID: <CAKf6xpuTbvGtTRHPK9Ock7rxJk4DfCumgTW7-2_PADm9cSaUBg@mail.gmail.com>
Subject: Re: i915 dma faults on Xen
To: Jan Beulich <jbeulich@suse.com>
Cc: Roger Pau Monné <roger.pau@citrix.com>, 
	Andrew Cooper <andrew.cooper3@citrix.com>, intel-gfx@lists.freedesktop.org, 
	xen-devel <xen-devel@lists.xenproject.org>, eric chanudet <eric.chanudet@gmail.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, Oct 21, 2020 at 9:59 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 21.10.2020 15:36, Jason Andryuk wrote:
> > On Wed, Oct 21, 2020 at 8:53 AM Jan Beulich <jbeulich@suse.com> wrote:
> >>
> >> On 21.10.2020 14:45, Jason Andryuk wrote:
> >>> On Wed, Oct 21, 2020 at 5:58 AM Roger Pau Monné <roger.pau@citrix.com> wrote:
> >>>> Hm, it's hard to tell what's going on. In my limited experience with
> >>>> IOMMU faults on broken systems, there's a small range that initially
> >>>> triggers those, and then the device goes wonky and starts accessing a
> >>>> whole load of invalid addresses.
> >>>>
> >>>> You could try adding those manually using the rmrr Xen command line
> >>>> option [0], maybe you can figure out which range(s) are missing?
> >>>
> >>> They seem to change, so it's hard to know.  Would there be harm in
> >>> adding one to cover the end of RAM ( 0x04,7c80,0000 ) to (
> >>> 0xff,ffff,ffff )?  Maybe that would just quiet the pointless faults
> >>> while leaving the IOMMU enabled?
> >>
> >> While they may quieten the faults, I don't think those faults are
> >> pointless. They indicate some problem with the software (less
> >> likely the hardware, possibly the firmware) that you're using.
> >> Also there's the question of what the overall behavior is going
> >> to be when devices are permitted to access unpopulated address
> >> ranges. I assume you did check already that no devices have their
> >> BARs placed in that range?
> >
> > Isn't no-igfx already letting them try to read those unpopulated addresses?
>
> Yes, and it is for that reason that the documentation for the
> option says "If specifying `no-igfx` fixes anything, please
> report the problem." I infer from this in particular that one
> had better not use it for non-development purposes of any
> kind.
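For context, the Xen `rmrr` command-line option referenced earlier takes page-frame ranges plus the device(s) allowed to access them. The range and BDF below are made-up placeholders for illustration only, not a recommendation:

```text
# General form (see Xen's command-line documentation):
#   rmrr=start<-end>=[s1]bdf1[,[s1]bdf2[,...]]
# start/end are page frame numbers (physical address >> 12).
# Hypothetical example: allow device 00:02.0 to access the frames
# starting at address 0x4_7c80_0000:
rmrr=0x47c800-0x47c8ff=0:02.0
```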

I stopped seeing these DMA faults, but I didn't know what made them go
away.  Then, when working with an older 5.4.64 kernel, I saw them
again.  Eric bisected it down to the 5.4.y backport of mainline Linux
commit:

commit 8195400f7ea95399f721ad21f4d663a62c65036f
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date:   Mon Oct 19 11:15:23 2020 +0100

    drm/i915: Force VT'd workarounds when running as a guest OS

    If i915.ko is being used as a passthrough device, it does not know if
    the host is using intel_iommu. Mixing the iommu and gfx causes a few
    issues (such as scanout overfetch) which we need to workaround inside
    the driver, so if we detect we are running under a hypervisor, also
    assume the device access is being virtualised.

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 17:33:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 17:33:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87018.163924 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD9eq-0000Fc-37; Fri, 19 Feb 2021 17:33:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87018.163924; Fri, 19 Feb 2021 17:33:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD9ep-0000FV-Vn; Fri, 19 Feb 2021 17:33:35 +0000
Received: by outflank-mailman (input) for mailman id 87018;
 Fri, 19 Feb 2021 17:33:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fNr7=HV=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1lD9eo-0000FQ-19
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 17:33:34 +0000
Received: from mail-lj1-x230.google.com (unknown [2a00:1450:4864:20::230])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 408c43cb-f1d0-4303-b329-377b3d8e1312;
 Fri, 19 Feb 2021 17:33:33 +0000 (UTC)
Received: by mail-lj1-x230.google.com with SMTP id r23so24075641ljh.1
 for <xen-devel@lists.xenproject.org>; Fri, 19 Feb 2021 09:33:33 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
MIME-Version: 1.0
References: <CAKf6xpv-LRCuo-qHHWMuukYtvJiR-i+-YhLUOZeqoAFd-=swEQ@mail.gmail.com>
 <1a3b90f4-564e-84d3-fd6a-3454e8753579@citrix.com> <20201015113109.GA68032@Air-de-Roger>
 <CAKf6xpsJYT7VCeaf6TxPNK1QD+3U9E8ST7E+mWtfDjw0k9L9dA@mail.gmail.com>
 <CABfawhnwdkB01LKYbcNhyyhFXF2LbLFFmeN5kqh7VaYPevjzuw@mail.gmail.com> <CAKf6xpuACuY63f+m6U55EVoSBL+RR04OStGPytb-Aeacou32gg@mail.gmail.com>
In-Reply-To: <CAKf6xpuACuY63f+m6U55EVoSBL+RR04OStGPytb-Aeacou32gg@mail.gmail.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Fri, 19 Feb 2021 12:33:21 -0500
Message-ID: <CAKf6xpueuBS4mN+JfQPVqK-LQ4eEx=AdnPF0p-qRbRYpHtk6Gg@mail.gmail.com>
Subject: tboot UEFI and Xen (was Re: i915 dma faults on Xen)
To: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"

On Thu, Oct 15, 2020 at 1:13 PM Jason Andryuk <jandryuk@gmail.com> wrote:
>
> On Thu, Oct 15, 2020 at 12:39 PM Tamas K Lengyel
> <tamas.k.lengyel@gmail.com> wrote:
> >
> > > > Can you paste the memory map as printed by Xen when booting, and what
> > > > command line you are using to boot Xen?
> > >
> > > So this is OpenXT, and it's booting EFI -> xen -> tboot -> xen
> >
> > Unrelated comment: since tboot now has a PE build
> > (http://hg.code.sf.net/p/tboot/code/rev/5c68f0963a78) I think it would
> > be time for OpenXT to drop the weird efi->xen->tboot->xen flow and
> > just do efi->tboot->xen. Only reason we did efi->xen->tboot was
> > because tboot didn't have a PE build at the time. It's a very hackish
> > solution that's no longer needed.
>
> Thanks for the pointer, Tamas.  If I recall correctly, there was also
> an issue with ExitBootServices.  Do you know if that has been
> addressed?

I tested tboot's UEFI build, but it didn't boot Xen:

Fedora UEFI shim -> grub2 -> tboot.mb2 -> xen
didn't work: it hung at a black screen. The power button powered off
promptly, so it didn't get far enough for anything to enable ACPI.

Fedora UEFI shim -> grub2 -> tboot.mb2 -> linux
booted and showed EFI entries in /sys.
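For reference, the first chain above corresponds roughly to a GRUB2 multiboot2 entry of this shape. Paths, options, and the SINIT module name are illustrative guesses, not a tested configuration:

```text
menuentry 'tboot -> Xen' {
    # tboot is the MB2 kernel; Xen, dom0 kernel, and initrd follow as modules
    multiboot2 /tboot.mb2 logging=serial,memory
    module2    /xen.gz console=vga dom0_mem=4096M
    module2    /vmlinuz root=/dev/mapper/root ro
    module2    /initrd.img
    # Platform SINIT ACM (placeholder filename)
    module2    /sinit.bin
}
```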

Naturally this is on a laptop without a serial port. I haven't looked
into this further as it's a low priority for me at this time.

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 17:39:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 17:39:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87023.163936 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD9k1-0000Ru-Pt; Fri, 19 Feb 2021 17:38:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87023.163936; Fri, 19 Feb 2021 17:38:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD9k1-0000Rn-N5; Fri, 19 Feb 2021 17:38:57 +0000
Received: by outflank-mailman (input) for mailman id 87023;
 Fri, 19 Feb 2021 17:38:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ml8x=HV=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1lD9k0-0000Ri-9A
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 17:38:56 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 0fd09e82-fd04-4ca5-9e4a-7b9c9199383d;
 Fri, 19 Feb 2021 17:38:54 +0000 (UTC)
Received: from mail-wr1-f69.google.com (mail-wr1-f69.google.com
 [209.85.221.69]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-492-RFoionfWOYKhv2EUOAg6gg-1; Fri, 19 Feb 2021 12:38:52 -0500
Received: by mail-wr1-f69.google.com with SMTP id d10so2747603wrq.17
 for <xen-devel@lists.xenproject.org>; Fri, 19 Feb 2021 09:38:52 -0800 (PST)
Received: from localhost.localdomain (68.red-83-57-175.dynamicip.rima-tde.net.
 [83.57.175.68])
 by smtp.gmail.com with ESMTPSA id l2sm13785629wrm.6.2021.02.19.09.38.48
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 19 Feb 2021 09:38:50 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Philippe Mathieu-Daudé <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: Aurelien Jarno <aurelien@aurel32.net>,
	Peter Maydell <peter.maydell@linaro.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	qemu-ppc@nongnu.org,
	qemu-s390x@nongnu.org,
	Halil Pasic <pasic@linux.ibm.com>,
	Huacai Chen <chenhuacai@kernel.org>,
	xen-devel@lists.xenproject.org,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	David Gibson <david@gibson.dropbear.id.au>,
	qemu-arm@nongnu.org,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	kvm@vger.kernel.org,
	BALATON Zoltan <balaton@eik.bme.hu>,
	Leif Lindholm <leif@nuviainc.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Radoslaw Biernacki <rad@semihalf.com>,
	Alistair Francis <alistair@alistair23.me>,
	Paul Durrant <paul@xen.org>,
	Eduardo Habkost <ehabkost@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Thomas Huth <thuth@redhat.com>,
	Jiaxun Yang <jiaxun.yang@flygoat.com>,
	=?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
	Greg Kurz <groug@kaod.org>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Cornelia Huck <cohuck@redhat.com>,
	"Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
	David Hildenbrand <david@redhat.com>,
	Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
	Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Subject: [PATCH v2 00/11] hw/accel: Exit gracefully when accelerator is
 invalid
Date: Fri, 19 Feb 2021 18:38:36 +0100
Message-Id: <20210219173847.2054123-1-philmd@redhat.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable

Hi,

This series aims to improve user experience by providing
a better error message when the user tries to enable KVM
on machines not supporting it.

Since v1:
- added missing x86 arch (Peter)
- consider all accelerators (Daniel and Peter)
- do not enable KVM on sbsa-ref (Leif)
- updated 'query-machines' (Daniel)
- new patch for XenPV

Supersedes: <20210219114428.1936109-1-philmd@redhat.com>

Philippe Mathieu-Daudé (11):
  accel/kvm: Check MachineClass kvm_type() return value
  hw/boards: Introduce machine_class_valid_for_accelerator()
  hw/core: Restrict 'query-machines' to those supported by current accel
  hw/arm: Restrict KVM to the virt & versal machines
  hw/mips: Restrict KVM to the malta & virt machines
  hw/ppc: Restrict KVM to various PPC machines
  hw/s390x: Explicit the s390-ccw-virtio machines support TCG and KVM
  hw/i386: Explicit x86 machines support all current accelerators
  hw/xenpv: Restrict Xen Para-virtualized machine to Xen accelerator
  hw/board: Only allow TCG accelerator by default
  softmmu/vl: Exit gracefully when accelerator is not supported

 include/hw/boards.h        | 27 ++++++++++++++++++++++++++-
 accel/kvm/kvm-all.c        |  6 ++++++
 hw/arm/virt.c              |  5 +++++
 hw/arm/xlnx-versal-virt.c  |  5 +++++
 hw/core/machine-qmp-cmds.c |  4 ++++
 hw/core/machine.c          | 26 ++++++++++++++++++++++++++
 hw/i386/x86.c              |  5 +++++
 hw/mips/loongson3_virt.c   |  5 +++++
 hw/mips/malta.c            |  5 +++++
 hw/ppc/e500plat.c          |  5 +++++
 hw/ppc/mac_newworld.c      |  6 ++++++
 hw/ppc/mac_oldworld.c      |  5 +++++
 hw/ppc/mpc8544ds.c         |  5 +++++
 hw/ppc/ppc440_bamboo.c     |  5 +++++
 hw/ppc/prep.c              |  5 +++++
 hw/ppc/sam460ex.c          |  5 +++++
 hw/ppc/spapr.c             |  5 +++++
 hw/s390x/s390-virtio-ccw.c |  5 +++++
 hw/xenpv/xen_machine_pv.c  |  5 +++++
 softmmu/vl.c               |  7 +++++++
 20 files changed, 145 insertions(+), 1 deletion(-)

-- 
2.26.2




From xen-devel-bounces@lists.xenproject.org Fri Feb 19 17:39:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 17:39:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87024.163948 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD9k7-0000au-2z; Fri, 19 Feb 2021 17:39:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87024.163948; Fri, 19 Feb 2021 17:39:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD9k6-0000an-Vw; Fri, 19 Feb 2021 17:39:02 +0000
Received: by outflank-mailman (input) for mailman id 87024;
 Fri, 19 Feb 2021 17:39:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ml8x=HV=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1lD9k5-0000XE-B1
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 17:39:01 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id d2238aa9-c85f-4066-8cda-567a796caf2f;
 Fri, 19 Feb 2021 17:38:59 +0000 (UTC)
Received: from mail-wm1-f69.google.com (mail-wm1-f69.google.com
 [209.85.128.69]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-567-YWgzFbeeOcuLgbEabV4mPA-1; Fri, 19 Feb 2021 12:38:57 -0500
Received: by mail-wm1-f69.google.com with SMTP id z67so2789276wme.3
 for <xen-devel@lists.xenproject.org>; Fri, 19 Feb 2021 09:38:57 -0800 (PST)
Received: from localhost.localdomain (68.red-83-57-175.dynamicip.rima-tde.net.
 [83.57.175.68])
 by smtp.gmail.com with ESMTPSA id i7sm23525949wmq.2.2021.02.19.09.38.54
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 19 Feb 2021 09:38:56 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Philippe Mathieu-Daudé <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: Aurelien Jarno <aurelien@aurel32.net>,
	Peter Maydell <peter.maydell@linaro.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	qemu-ppc@nongnu.org,
	qemu-s390x@nongnu.org,
	Halil Pasic <pasic@linux.ibm.com>,
	Huacai Chen <chenhuacai@kernel.org>,
	xen-devel@lists.xenproject.org,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	David Gibson <david@gibson.dropbear.id.au>,
	qemu-arm@nongnu.org,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	kvm@vger.kernel.org,
	BALATON Zoltan <balaton@eik.bme.hu>,
	Leif Lindholm <leif@nuviainc.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Radoslaw Biernacki <rad@semihalf.com>,
	Alistair Francis <alistair@alistair23.me>,
	Paul Durrant <paul@xen.org>,
	Eduardo Habkost <ehabkost@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Thomas Huth <thuth@redhat.com>,
	Jiaxun Yang <jiaxun.yang@flygoat.com>,
	=?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
	Greg Kurz <groug@kaod.org>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Cornelia Huck <cohuck@redhat.com>,
	"Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
	David Hildenbrand <david@redhat.com>,
	Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
	Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Subject: [PATCH v2 01/11] accel/kvm: Check MachineClass kvm_type() return value
Date: Fri, 19 Feb 2021 18:38:37 +0100
Message-Id: <20210219173847.2054123-2-philmd@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210219173847.2054123-1-philmd@redhat.com>
References: <20210219173847.2054123-1-philmd@redhat.com>
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

MachineClass::kvm_type() can return -1 on failure.
Document it, and add a check in kvm_init().

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 include/hw/boards.h | 3 ++-
 accel/kvm/kvm-all.c | 6 ++++++
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/include/hw/boards.h b/include/hw/boards.h
index a46dfe5d1a6..68d3d10f6b0 100644
--- a/include/hw/boards.h
+++ b/include/hw/boards.h
@@ -127,7 +127,8 @@ typedef struct {
  *    implement and a stub device is required.
  * @kvm_type:
  *    Return the type of KVM corresponding to the kvm-type string option or
- *    computed based on other criteria such as the host kernel capabilities.
+ *    computed based on other criteria such as the host kernel capabilities
+ *    (which can't be negative), or -1 on error.
  * @numa_mem_supported:
  *    true if '--numa node.mem' option is supported and false otherwise
  * @smp_parse:
diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index 84c943fcdb2..b069938d881 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -2057,6 +2057,12 @@ static int kvm_init(MachineState *ms)
                                                             "kvm-type",
                                                             &error_abort);
         type = mc->kvm_type(ms, kvm_type);
+        if (type < 0) {
+            ret = -EINVAL;
+            fprintf(stderr, "Failed to detect kvm-type for machine '%s'\n",
+                    mc->name);
+            goto err;
+        }
     }
 
     do {
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Feb 19 17:39:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 17:39:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87025.163960 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD9kC-0000eN-CA; Fri, 19 Feb 2021 17:39:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87025.163960; Fri, 19 Feb 2021 17:39:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD9kC-0000eF-8Z; Fri, 19 Feb 2021 17:39:08 +0000
Received: by outflank-mailman (input) for mailman id 87025;
 Fri, 19 Feb 2021 17:39:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ml8x=HV=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1lD9kA-0000XE-EA
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 17:39:06 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 257d245b-316b-4a44-8beb-86ed9a86aa32;
 Fri, 19 Feb 2021 17:39:05 +0000 (UTC)
Received: from mail-wr1-f71.google.com (mail-wr1-f71.google.com
 [209.85.221.71]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-190-3Q6zqInTO2egoMIp4B7lXQ-1; Fri, 19 Feb 2021 12:39:03 -0500
Received: by mail-wr1-f71.google.com with SMTP id o10so2762328wru.11
 for <xen-devel@lists.xenproject.org>; Fri, 19 Feb 2021 09:39:03 -0800 (PST)
Received: from localhost.localdomain (68.red-83-57-175.dynamicip.rima-tde.net.
 [83.57.175.68])
 by smtp.gmail.com with ESMTPSA id u7sm13826375wrt.67.2021.02.19.09.39.00
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 19 Feb 2021 09:39:01 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: Aurelien Jarno <aurelien@aurel32.net>,
	Peter Maydell <peter.maydell@linaro.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	qemu-ppc@nongnu.org,
	qemu-s390x@nongnu.org,
	Halil Pasic <pasic@linux.ibm.com>,
	Huacai Chen <chenhuacai@kernel.org>,
	xen-devel@lists.xenproject.org,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	David Gibson <david@gibson.dropbear.id.au>,
	qemu-arm@nongnu.org,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	kvm@vger.kernel.org,
	BALATON Zoltan <balaton@eik.bme.hu>,
	Leif Lindholm <leif@nuviainc.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Radoslaw Biernacki <rad@semihalf.com>,
	Alistair Francis <alistair@alistair23.me>,
	Paul Durrant <paul@xen.org>,
	Eduardo Habkost <ehabkost@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Thomas Huth <thuth@redhat.com>,
	Jiaxun Yang <jiaxun.yang@flygoat.com>,
	=?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
	Greg Kurz <groug@kaod.org>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Cornelia Huck <cohuck@redhat.com>,
	"Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
	David Hildenbrand <david@redhat.com>,
	Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
	Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Subject: [PATCH v2 02/11] hw/boards: Introduce machine_class_valid_for_accelerator()
Date: Fri, 19 Feb 2021 18:38:38 +0100
Message-Id: <20210219173847.2054123-3-philmd@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210219173847.2054123-1-philmd@redhat.com>
References: <20210219173847.2054123-1-philmd@redhat.com>
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Introduce the valid_accelerators[] field to express the list
of valid accelerators a machine can use, and add the
machine_class_valid_for_current_accelerator() and
machine_class_valid_for_accelerator() methods.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 include/hw/boards.h | 24 ++++++++++++++++++++++++
 hw/core/machine.c   | 26 ++++++++++++++++++++++++++
 2 files changed, 50 insertions(+)

diff --git a/include/hw/boards.h b/include/hw/boards.h
index 68d3d10f6b0..4d08bc12093 100644
--- a/include/hw/boards.h
+++ b/include/hw/boards.h
@@ -36,6 +36,24 @@ void machine_set_cpu_numa_node(MachineState *machine,
                                const CpuInstanceProperties *props,
                                Error **errp);
 
+/**
+ * machine_class_valid_for_accelerator:
+ * @mc: the machine class
+ * @acc_name: accelerator name
+ *
+ * Returns %true if the accelerator is valid for the machine, %false
+ * otherwise. See #MachineClass.valid_accelerators.
+ */
+bool machine_class_valid_for_accelerator(MachineClass *mc, const char *acc_name);
+/**
+ * machine_class_valid_for_current_accelerator:
+ * @mc: the machine class
+ *
+ * Returns %true if the accelerator is valid for the current machine,
+ * %false otherwise. See #MachineClass.valid_accelerators.
+ */
+bool machine_class_valid_for_current_accelerator(MachineClass *mc);
+
 void machine_class_allow_dynamic_sysbus_dev(MachineClass *mc, const char *type);
 /*
  * Checks that backend isn't used, preps it for exclusive usage and
@@ -125,6 +143,11 @@ typedef struct {
  *    should instead use "unimplemented-device" for all memory ranges where
  *    the guest will attempt to probe for a device that QEMU doesn't
  *    implement and a stub device is required.
+ * @valid_accelerators:
+ *    If this machine supports a specific set of virtualization accelerators,
+ *    this contains a NULL-terminated list of the accelerators that can be
+ *    used. If this field is not set, any accelerator is valid. The QTest
+ *    accelerator is always valid.
  * @kvm_type:
  *    Return the type of KVM corresponding to the kvm-type string option or
  *    computed based on other criteria such as the host kernel capabilities
@@ -166,6 +189,7 @@ struct MachineClass {
     const char *alias;
     const char *desc;
     const char *deprecation_reason;
+    const char *const *valid_accelerators;
 
     void (*init)(MachineState *state);
     void (*reset)(MachineState *state);
diff --git a/hw/core/machine.c b/hw/core/machine.c
index 970046f4388..c42d8e382b1 100644
--- a/hw/core/machine.c
+++ b/hw/core/machine.c
@@ -518,6 +518,32 @@ static void machine_set_nvdimm_persistence(Object *obj, const char *value,
     nvdimms_state->persistence_string = g_strdup(value);
 }
 
+bool machine_class_valid_for_accelerator(MachineClass *mc, const char *acc_name)
+{
+    const char *const *name = mc->valid_accelerators;
+
+    if (!name) {
+        return true;
+    }
+    if (strcmp(acc_name, "qtest") == 0) {
+        return true;
+    }
+
+    for (unsigned i = 0; name[i]; i++) {
+        if (strcasecmp(acc_name, name[i]) == 0) {
+            return true;
+        }
+    }
+    return false;
+}
+
+bool machine_class_valid_for_current_accelerator(MachineClass *mc)
+{
+    AccelClass *ac = ACCEL_GET_CLASS(current_accel());
+
+    return machine_class_valid_for_accelerator(mc, ac->name);
+}
+
 void machine_class_allow_dynamic_sysbus_dev(MachineClass *mc, const char *type)
 {
     QAPI_LIST_PREPEND(mc->allowed_dynamic_sysbus_devices, g_strdup(type));
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Feb 19 17:39:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 17:39:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87026.163972 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD9kH-0000iv-MN; Fri, 19 Feb 2021 17:39:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87026.163972; Fri, 19 Feb 2021 17:39:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD9kH-0000in-Ho; Fri, 19 Feb 2021 17:39:13 +0000
Received: by outflank-mailman (input) for mailman id 87026;
 Fri, 19 Feb 2021 17:39:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ml8x=HV=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1lD9kF-0000hU-MR
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 17:39:11 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 1f7240a8-73a9-4cca-85ff-387540f20d63;
 Fri, 19 Feb 2021 17:39:11 +0000 (UTC)
Received: from mail-wr1-f70.google.com (mail-wr1-f70.google.com
 [209.85.221.70]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-162-ekoA_y90MHWIh4Fuavl0tw-1; Fri, 19 Feb 2021 12:39:09 -0500
Received: by mail-wr1-f70.google.com with SMTP id x14so2174132wrr.13
 for <xen-devel@lists.xenproject.org>; Fri, 19 Feb 2021 09:39:09 -0800 (PST)
Received: from localhost.localdomain (68.red-83-57-175.dynamicip.rima-tde.net.
 [83.57.175.68])
 by smtp.gmail.com with ESMTPSA id y62sm15299048wmy.9.2021.02.19.09.39.06
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 19 Feb 2021 09:39:07 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1f7240a8-73a9-4cca-85ff-387540f20d63
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1613756351;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=HQ6bR9q2nTx8UZfcCwDp8YyWJ5IdPylW4EMLv5ePnSc=;
	b=aiIBXvVyVz8UZbKJ1VRMWUICNBGe/Ged0aq2gyJIxKbrGg/y/4qW6C+GOeGtIqz8jtKocz
	5JzrOWvPQ7Ac/lyOQZIzNKA0PophRMhi2pvyfybwaAlU4CSSoR35XxHQdA1Qhusxog216A
	8gIlUEBGhNHKT/IFLkIGd8rknRMOg3c=
X-MC-Unique: ekoA_y90MHWIh4Fuavl0tw-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=HQ6bR9q2nTx8UZfcCwDp8YyWJ5IdPylW4EMLv5ePnSc=;
        b=CLxTCie4VOtz5W6EPAKmJABfLK0Vs8Tl9DU4erhE6tX+WxrprBzBPbYv08qdhRv1P/
         J/Lxx/lncNu7G43K4oIT0P+xV0JqffjaFcGxlhw7MfUtnMblm5o8aswPJY0UsaoA8eNX
         vBi2bFW/5QT3Bj4Kk3AqODwy+Hwnd1fG52qiQ1hrA2nrBJ3xSkXOLMl3DBQjinS8t2c2
         Ie2A18KpSid+S5LObd4UiWHx/bR5GC+J6bdUDDVsjjySWtRuHahvT1nCnf1snxEWbV/d
         EKc+/P039U8Azm8+Te3pepeTer6kxc1PAH7irhqEoluCHRYnPSz2afcLAW1DISpbVDdT
         mdyg==
X-Gm-Message-State: AOAM531E3m0Stii7/DF7ZVrfYJ92ReHtEmrwYITslbdp65loHJzKzrut
	ebyDiD1eZG3zIKcuJ7dEGa21pIdPOg16+pUpuVnsQTQCmluKdURZkxKVueRWcsb7wO2EZx9m46O
	w7dEsz4cbz83/SdO/Ol+pTE0rjKA=
X-Received: by 2002:adf:b342:: with SMTP id k2mr10246398wrd.264.1613756347931;
        Fri, 19 Feb 2021 09:39:07 -0800 (PST)
X-Google-Smtp-Source: ABdhPJzRIsqLMFpNjZYF3WUU2UHyYdzuctCSQdLydicZwE25C3zYE3MQJaOoIUNZctaN8YvHWyznKA==
X-Received: by 2002:adf:b342:: with SMTP id k2mr10246368wrd.264.1613756347790;
        Fri, 19 Feb 2021 09:39:07 -0800 (PST)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: Aurelien Jarno <aurelien@aurel32.net>,
	Peter Maydell <peter.maydell@linaro.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	qemu-ppc@nongnu.org,
	qemu-s390x@nongnu.org,
	Halil Pasic <pasic@linux.ibm.com>,
	Huacai Chen <chenhuacai@kernel.org>,
	xen-devel@lists.xenproject.org,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	David Gibson <david@gibson.dropbear.id.au>,
	qemu-arm@nongnu.org,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	kvm@vger.kernel.org,
	BALATON Zoltan <balaton@eik.bme.hu>,
	Leif Lindholm <leif@nuviainc.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Radoslaw Biernacki <rad@semihalf.com>,
	Alistair Francis <alistair@alistair23.me>,
	Paul Durrant <paul@xen.org>,
	Eduardo Habkost <ehabkost@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Thomas Huth <thuth@redhat.com>,
	Jiaxun Yang <jiaxun.yang@flygoat.com>,
	=?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
	Greg Kurz <groug@kaod.org>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Cornelia Huck <cohuck@redhat.com>,
	"Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
	David Hildenbrand <david@redhat.com>,
	Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
	Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
	=?UTF-8?q?Daniel=20Berrang=C3=A9?= <berrange@redhat.com>
Subject: [PATCH v2 03/11] hw/core: Restrict 'query-machines' to those supported by current accel
Date: Fri, 19 Feb 2021 18:38:39 +0100
Message-Id: <20210219173847.2054123-4-philmd@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210219173847.2054123-1-philmd@redhat.com>
References: <20210219173847.2054123-1-philmd@redhat.com>
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Do not let 'query-machines' return machines that are not
valid for the current accelerator.
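
For example, a QMP client querying a binary started with -accel kvm
would see only KVM-capable machines (a hedged sketch; the exact set of
MachineInfo fields returned depends on the QEMU version):

```json
{ "execute": "query-machines" }
{ "return": [
    { "name": "virt-6.0", "alias": "virt", "cpu-max": 512,
      "hotpluggable-cpus": true }
] }
```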

Suggested-by: Daniel Berrangé <berrange@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 hw/core/machine-qmp-cmds.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/hw/core/machine-qmp-cmds.c b/hw/core/machine-qmp-cmds.c
index 44e979e503b..c8630bc2ddc 100644
--- a/hw/core/machine-qmp-cmds.c
+++ b/hw/core/machine-qmp-cmds.c
@@ -204,6 +204,10 @@ MachineInfoList *qmp_query_machines(Error **errp)
         MachineClass *mc = el->data;
         MachineInfo *info;
 
+        if (!machine_class_valid_for_current_accelerator(mc)) {
+            continue;
+        }
+
         info = g_malloc0(sizeof(*info));
         if (mc->is_default) {
             info->has_is_default = true;
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Feb 19 17:39:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 17:39:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87027.163984 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD9kP-0000qB-5q; Fri, 19 Feb 2021 17:39:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87027.163984; Fri, 19 Feb 2021 17:39:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD9kP-0000q3-0t; Fri, 19 Feb 2021 17:39:21 +0000
Received: by outflank-mailman (input) for mailman id 87027;
 Fri, 19 Feb 2021 17:39:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ml8x=HV=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1lD9kN-0000on-Fo
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 17:39:19 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 295cf8c6-0241-4282-877b-e177f7d11b51;
 Fri, 19 Feb 2021 17:39:19 +0000 (UTC)
Received: from mail-wr1-f72.google.com (mail-wr1-f72.google.com
 [209.85.221.72]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-177-30Ky9Va3MzKNFxGAlZqNjw-1; Fri, 19 Feb 2021 12:39:14 -0500
Received: by mail-wr1-f72.google.com with SMTP id e29so2574518wra.12
 for <xen-devel@lists.xenproject.org>; Fri, 19 Feb 2021 09:39:14 -0800 (PST)
Received: from localhost.localdomain (68.red-83-57-175.dynamicip.rima-tde.net.
 [83.57.175.68])
 by smtp.gmail.com with ESMTPSA id q20sm12010000wmc.14.2021.02.19.09.39.11
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 19 Feb 2021 09:39:12 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 295cf8c6-0241-4282-877b-e177f7d11b51
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1613756358;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=CHXCPuc7uWQdwucv7FWJOUQ8ceg9IO3ZY73rXU3+wF0=;
	b=iPXbZj8rGrxPbKEdYl1qsu46ZyVLdqKSX3k6meK0fIWrUdN96qqjWIm6X3GC06TGyjCFTf
	4SH9ilSw3RmowoOA//iBhlkTm096ZjbZEGpIHjNToEKZTWNfDzPhster1Cwo2AistWhzPg
	m61FY8TKA5WoqsvaK5J27fvfbhsi2q8=
X-MC-Unique: 30Ky9Va3MzKNFxGAlZqNjw-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=CHXCPuc7uWQdwucv7FWJOUQ8ceg9IO3ZY73rXU3+wF0=;
        b=AFU3fIPAfa+WFqiLg6PhKFePChMHil/HA2hNO9iE5vCK/A64Dym91x8/aWzQhrYv8Q
         ktSs3hkcUZG6ki8m9hXd42dMlDq7BFdIGgrG75YRNCbMFA/BmgclQEjaiKTGh1wbFCqj
         t6MTBnoFgJQzZf/r/y3X85nUzrwIxqn15ZUuwg46aenBzatjFUXTBjHKavW/hMxmQuHJ
         WqGDqsPGRTk8cSt2WwteYw6VljOQL7GDnnAk8NgVicVrGgvurPgc8TizeTy3vfu/nciv
         Dtw76vfi7nuBEcvjE3cstmCNJv0oBpSSAPm1uOIfsp8euFkqGLoFLs1aD39NIlLFhKBK
         vOuw==
X-Gm-Message-State: AOAM532qCdg1iC/4WyPbT4KMCqVnEy+xJAp4vbH77Y9xftOsN9nOPkZR
	G4dEjjIgub1oayCcJPbCnkljr3UeZ9aZWrgp3zqeYzM2WpxjOmpCAfSf40tkw8h2V1LnCtezf+j
	3Cewnq9czTj7q6D1I1JiH3PAU+MI=
X-Received: by 2002:a05:6000:188c:: with SMTP id a12mr10610944wri.105.1613756353508;
        Fri, 19 Feb 2021 09:39:13 -0800 (PST)
X-Google-Smtp-Source: ABdhPJwrzSzq1o4kGieINb1Q744LtUd4TVffu+lfi/YVrFtocFVQ1Ale5KqgmXk7Upte5EwmyzFbHA==
X-Received: by 2002:a05:6000:188c:: with SMTP id a12mr10610915wri.105.1613756353383;
        Fri, 19 Feb 2021 09:39:13 -0800 (PST)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: Aurelien Jarno <aurelien@aurel32.net>,
	Peter Maydell <peter.maydell@linaro.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	qemu-ppc@nongnu.org,
	qemu-s390x@nongnu.org,
	Halil Pasic <pasic@linux.ibm.com>,
	Huacai Chen <chenhuacai@kernel.org>,
	xen-devel@lists.xenproject.org,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	David Gibson <david@gibson.dropbear.id.au>,
	qemu-arm@nongnu.org,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	kvm@vger.kernel.org,
	BALATON Zoltan <balaton@eik.bme.hu>,
	Leif Lindholm <leif@nuviainc.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Radoslaw Biernacki <rad@semihalf.com>,
	Alistair Francis <alistair@alistair23.me>,
	Paul Durrant <paul@xen.org>,
	Eduardo Habkost <ehabkost@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Thomas Huth <thuth@redhat.com>,
	Jiaxun Yang <jiaxun.yang@flygoat.com>,
	=?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
	Greg Kurz <groug@kaod.org>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Cornelia Huck <cohuck@redhat.com>,
	"Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
	David Hildenbrand <david@redhat.com>,
	Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
	Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Subject: [PATCH v2 04/11] hw/arm: Restrict KVM to the virt & versal machines
Date: Fri, 19 Feb 2021 18:38:40 +0100
Message-Id: <20210219173847.2054123-5-philmd@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210219173847.2054123-1-philmd@redhat.com>
References: <20210219173847.2054123-1-philmd@redhat.com>
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Restrict KVM to the following ARM machines:
- virt
- xlnx-versal-virt

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 hw/arm/virt.c             | 5 +++++
 hw/arm/xlnx-versal-virt.c | 5 +++++
 2 files changed, 10 insertions(+)

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index 371147f3ae9..8e9861b61a9 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -2527,6 +2527,10 @@ static HotplugHandler *virt_machine_get_hotplug_handler(MachineState *machine,
     return NULL;
 }
 
+static const char *const valid_accels[] = {
+    "tcg", "kvm", "hvf", NULL
+};
+
 /*
  * for arm64 kvm_type [7-0] encodes the requested number of bits
  * in the IPA address space
@@ -2582,6 +2586,7 @@ static void virt_machine_class_init(ObjectClass *oc, void *data)
     mc->cpu_index_to_instance_props = virt_cpu_index_to_props;
     mc->default_cpu_type = ARM_CPU_TYPE_NAME("cortex-a15");
     mc->get_default_cpu_node_id = virt_get_default_cpu_node_id;
+    mc->valid_accelerators = valid_accels;
     mc->kvm_type = virt_kvm_type;
     assert(!mc->get_hotplug_handler);
     mc->get_hotplug_handler = virt_machine_get_hotplug_handler;
diff --git a/hw/arm/xlnx-versal-virt.c b/hw/arm/xlnx-versal-virt.c
index 8482cd61960..d424813cae1 100644
--- a/hw/arm/xlnx-versal-virt.c
+++ b/hw/arm/xlnx-versal-virt.c
@@ -610,6 +610,10 @@ static void versal_virt_machine_instance_init(Object *obj)
 {
 }
 
+static const char *const valid_accels[] = {
+    "tcg", "kvm", NULL
+};
+
 static void versal_virt_machine_class_init(ObjectClass *oc, void *data)
 {
     MachineClass *mc = MACHINE_CLASS(oc);
@@ -621,6 +625,7 @@ static void versal_virt_machine_class_init(ObjectClass *oc, void *data)
     mc->default_cpus = XLNX_VERSAL_NR_ACPUS;
     mc->no_cdrom = true;
     mc->default_ram_id = "ddr";
+    mc->valid_accelerators = valid_accels;
 }
 
 static const TypeInfo versal_virt_machine_init_typeinfo = {
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Feb 19 17:39:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 17:39:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87029.163996 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD9kS-0000u5-HT; Fri, 19 Feb 2021 17:39:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87029.163996; Fri, 19 Feb 2021 17:39:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD9kS-0000tx-Ck; Fri, 19 Feb 2021 17:39:24 +0000
Received: by outflank-mailman (input) for mailman id 87029;
 Fri, 19 Feb 2021 17:39:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ml8x=HV=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1lD9kR-0000on-C1
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 17:39:23 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id ea10117a-8f96-4a8d-9c72-aed11805b978;
 Fri, 19 Feb 2021 17:39:22 +0000 (UTC)
Received: from mail-wr1-f70.google.com (mail-wr1-f70.google.com
 [209.85.221.70]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-378-qcXp7kdzPcWsoqxwdjqA5A-1; Fri, 19 Feb 2021 12:39:20 -0500
Received: by mail-wr1-f70.google.com with SMTP id o10so2762608wru.11
 for <xen-devel@lists.xenproject.org>; Fri, 19 Feb 2021 09:39:20 -0800 (PST)
Received: from localhost.localdomain (68.red-83-57-175.dynamicip.rima-tde.net.
 [83.57.175.68])
 by smtp.gmail.com with ESMTPSA id f7sm14534967wre.78.2021.02.19.09.39.17
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 19 Feb 2021 09:39:18 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ea10117a-8f96-4a8d-9c72-aed11805b978
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1613756362;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ltjnESlJDd3H5QR2j8eiWOyKx9h8jJk2C0jWkjeNUew=;
	b=Fh9LIVlARdJYfeUwUHUZLMXi8rJsunU93ahUTuZOupqHRamCCnV+eYrVyBIf7NGp1FgDvJ
	pJrgTl8kTZZKh4pKKPrc5TE/bk+0jD/PI2v10roER5nSde0DAWdQXB8Q3cbPxUW8PxXEBQ
	BIHbAJK2gXKEE7wgSyxVanPSLZfb0sA=
X-MC-Unique: qcXp7kdzPcWsoqxwdjqA5A-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=ltjnESlJDd3H5QR2j8eiWOyKx9h8jJk2C0jWkjeNUew=;
        b=fIdlgrpRDVPIMPFMa4WX4fO34hOnWqf2QDxtsTMorNR0/dRSywiqO1HIJJAaTCA2lp
         bH3a1z1wa0HqMUfsh9WpWxyBiwKBpQyiyd6+2W6fYdOv2tAFb7EEJfzqg8NCml3/bTCK
         uMUKNTzUkK5eg2ZefeY4cf+gHxVx5z1UA8lSj3z+wAoQnV/wCt69qRoMPujreRzZiIK2
         fML9T5X6JP0k6mI5yCXxCfJ1KAW/lcj4SL98PiETSas3kHBHWYr23Gx6SoDBLuI0BMCA
         NS9dbqewqQGbKf5H7D6/f25sr/UmW8kBvhpxF/ryIhPBvdTBhDrWl5wYRM2mRDmLXsXv
         jRNQ==
X-Gm-Message-State: AOAM530CPq7nar3XxRiWaX6y35sDvlBXA3i4osP6Uv1deSk/c2yislQP
	Mj4Kh1qS/6Ij4JlULpAfwtrFXBIAPV7sOPa/B72ANVqpfCp5q4juGrqlX12imBGypeJ83H1Fmww
	VwPrsUTYi/gGo5CJ9Ba3gR9JKnOs=
X-Received: by 2002:a5d:558b:: with SMTP id i11mr10341323wrv.125.1613756359007;
        Fri, 19 Feb 2021 09:39:19 -0800 (PST)
X-Google-Smtp-Source: ABdhPJyksADAECUrKIFfE8NXfSBq0p9Yv0QU1jPKKdp6ensUX0SFo+MEZzOE/tMgXW9KlqBOatRsLw==
X-Received: by 2002:a5d:558b:: with SMTP id i11mr10341310wrv.125.1613756358861;
        Fri, 19 Feb 2021 09:39:18 -0800 (PST)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: Aurelien Jarno <aurelien@aurel32.net>,
	Peter Maydell <peter.maydell@linaro.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	qemu-ppc@nongnu.org,
	qemu-s390x@nongnu.org,
	Halil Pasic <pasic@linux.ibm.com>,
	Huacai Chen <chenhuacai@kernel.org>,
	xen-devel@lists.xenproject.org,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	David Gibson <david@gibson.dropbear.id.au>,
	qemu-arm@nongnu.org,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	kvm@vger.kernel.org,
	BALATON Zoltan <balaton@eik.bme.hu>,
	Leif Lindholm <leif@nuviainc.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Radoslaw Biernacki <rad@semihalf.com>,
	Alistair Francis <alistair@alistair23.me>,
	Paul Durrant <paul@xen.org>,
	Eduardo Habkost <ehabkost@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Thomas Huth <thuth@redhat.com>,
	Jiaxun Yang <jiaxun.yang@flygoat.com>,
	=?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
	Greg Kurz <groug@kaod.org>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Cornelia Huck <cohuck@redhat.com>,
	"Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
	David Hildenbrand <david@redhat.com>,
	Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
	Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Subject: [PATCH v2 05/11] hw/mips: Restrict KVM to the malta & virt machines
Date: Fri, 19 Feb 2021 18:38:41 +0100
Message-Id: <20210219173847.2054123-6-philmd@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210219173847.2054123-1-philmd@redhat.com>
References: <20210219173847.2054123-1-philmd@redhat.com>
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Restrict KVM to the following MIPS machines:
- malta
- loongson3-virt

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 hw/mips/loongson3_virt.c | 5 +++++
 hw/mips/malta.c          | 5 +++++
 2 files changed, 10 insertions(+)

diff --git a/hw/mips/loongson3_virt.c b/hw/mips/loongson3_virt.c
index d4a82fa5367..c3679dff043 100644
--- a/hw/mips/loongson3_virt.c
+++ b/hw/mips/loongson3_virt.c
@@ -612,6 +612,10 @@ static void mips_loongson3_virt_init(MachineState *machine)
     loongson3_virt_devices_init(machine, liointc);
 }
 
+static const char *const valid_accels[] = {
+    "tcg", "kvm", NULL
+};
+
 static void loongson3v_machine_class_init(ObjectClass *oc, void *data)
 {
     MachineClass *mc = MACHINE_CLASS(oc);
@@ -622,6 +626,7 @@ static void loongson3v_machine_class_init(ObjectClass *oc, void *data)
     mc->max_cpus = LOONGSON_MAX_VCPUS;
     mc->default_ram_id = "loongson3.highram";
     mc->default_ram_size = 1600 * MiB;
+    mc->valid_accelerators = valid_accels;
     mc->kvm_type = mips_kvm_type;
     mc->minimum_page_bits = 14;
 }
diff --git a/hw/mips/malta.c b/hw/mips/malta.c
index 9afc0b427bf..0212048dc63 100644
--- a/hw/mips/malta.c
+++ b/hw/mips/malta.c
@@ -1443,6 +1443,10 @@ static const TypeInfo mips_malta_device = {
     .instance_init = mips_malta_instance_init,
 };
 
+static const char *const valid_accels[] = {
+    "tcg", "kvm", NULL
+};
+
 static void mips_malta_machine_init(MachineClass *mc)
 {
     mc->desc = "MIPS Malta Core LV";
@@ -1456,6 +1460,7 @@ static void mips_malta_machine_init(MachineClass *mc)
     mc->default_cpu_type = MIPS_CPU_TYPE_NAME("24Kf");
 #endif
     mc->default_ram_id = "mips_malta.ram";
+    mc->valid_accelerators = valid_accels;
 }
 
 DEFINE_MACHINE("malta", mips_malta_machine_init)
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Feb 19 17:39:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 17:39:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87032.164008 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD9ka-000119-T1; Fri, 19 Feb 2021 17:39:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87032.164008; Fri, 19 Feb 2021 17:39:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD9ka-000111-P4; Fri, 19 Feb 2021 17:39:32 +0000
Received: by outflank-mailman (input) for mailman id 87032;
 Fri, 19 Feb 2021 17:39:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ml8x=HV=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1lD9kZ-0000on-7m
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 17:39:31 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id da884f6e-5daa-4e73-a9f0-9709690b1f1a;
 Fri, 19 Feb 2021 17:39:29 +0000 (UTC)
Received: from mail-wr1-f69.google.com (mail-wr1-f69.google.com
 [209.85.221.69]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-211-z5WyorMzPXecboWrYOT3og-1; Fri, 19 Feb 2021 12:39:25 -0500
Received: by mail-wr1-f69.google.com with SMTP id u15so2773674wrn.3
 for <xen-devel@lists.xenproject.org>; Fri, 19 Feb 2021 09:39:25 -0800 (PST)
Received: from localhost.localdomain (68.red-83-57-175.dynamicip.rima-tde.net.
 [83.57.175.68])
 by smtp.gmail.com with ESMTPSA id v66sm12701902wme.33.2021.02.19.09.39.22
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 19 Feb 2021 09:39:24 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: da884f6e-5daa-4e73-a9f0-9709690b1f1a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1613756369;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Lw4bfUfCEZBd00BTMjyws6AoTKQiuLTiPv1ON4ccv74=;
	b=bC8+1d15noKWKx8Hh745rCDU4HZoLgZ63FobWXUUsj7+SAcwubcsmypKDkqTgOylNdtPtm
	P4hFStTViFqfv7hCZ+qC+blMi3ehAFZTISC81djndPrZbuY468grpX78lSqs76nuRdC4Yj
	D1XKjd/TqWb1c51yphRt2OgTNG0YVA8=
X-MC-Unique: z5WyorMzPXecboWrYOT3og-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=Lw4bfUfCEZBd00BTMjyws6AoTKQiuLTiPv1ON4ccv74=;
        b=XqoA/EdPeRtMOT1swM/+eNzpbOxo4ZT2366PsaoMO/1bwgSq/h8wHCq7Uhm7V4EPnd
         CmcMjr+fFC/y1BZW5z1eTnSr3E9x+/pLL3A9asuNyyIGsplorquIgDAst6p9RBYqHuyq
         5rSjQIh0TUSMcamwoZCn/WIBPEjMv+RENM03ODWVYMiX5hiNujX4nfcpDk2lXFh4HK9f
         gMCAechAmiqEmAO9kZiHSHRQZD9hIRW2+PBUdCryE+ahmETKJF694N8ZSr9f3oShZouX
         pvMZbv4sqynWx2M6j0G2RqMR9CLEpf1VShlMekno+lagAs080/zXaLbkGSf4IH8WHehR
         WhoA==
X-Gm-Message-State: AOAM5313HFYRRxVv1vLuAwFAUWDaJYnx43XZ4qvmDZYJsJw0HQ9MIn9n
	JU3HlP9CmyDn4ZGpRUph9aewj0av5aGb5JolcAZb888qrKao/AJgc2HOuklAN9j33eJSzH8yodZ
	NXj2aTxOu16hjxE1bQx0zLQATvl0=
X-Received: by 2002:a1c:bb44:: with SMTP id l65mr8964746wmf.86.1613756364613;
        Fri, 19 Feb 2021 09:39:24 -0800 (PST)
X-Google-Smtp-Source: ABdhPJxjqRkJ5Ff39xPwSCRnl3K5tBZjqMMFJ/FERHr0iybiVEhoAVRLgd8349suwi9XW5Y8/zboYw==
X-Received: by 2002:a1c:bb44:: with SMTP id l65mr8964730wmf.86.1613756364447;
        Fri, 19 Feb 2021 09:39:24 -0800 (PST)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: Aurelien Jarno <aurelien@aurel32.net>,
	Peter Maydell <peter.maydell@linaro.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	qemu-ppc@nongnu.org,
	qemu-s390x@nongnu.org,
	Halil Pasic <pasic@linux.ibm.com>,
	Huacai Chen <chenhuacai@kernel.org>,
	xen-devel@lists.xenproject.org,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	David Gibson <david@gibson.dropbear.id.au>,
	qemu-arm@nongnu.org,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	kvm@vger.kernel.org,
	BALATON Zoltan <balaton@eik.bme.hu>,
	Leif Lindholm <leif@nuviainc.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Radoslaw Biernacki <rad@semihalf.com>,
	Alistair Francis <alistair@alistair23.me>,
	Paul Durrant <paul@xen.org>,
	Eduardo Habkost <ehabkost@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Thomas Huth <thuth@redhat.com>,
	Jiaxun Yang <jiaxun.yang@flygoat.com>,
	=?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
	Greg Kurz <groug@kaod.org>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Cornelia Huck <cohuck@redhat.com>,
	"Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
	David Hildenbrand <david@redhat.com>,
	Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
	Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Subject: [RFC PATCH v2 06/11] hw/ppc: Restrict KVM to various PPC machines
Date: Fri, 19 Feb 2021 18:38:42 +0100
Message-Id: <20210219173847.2054123-7-philmd@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210219173847.2054123-1-philmd@redhat.com>
References: <20210219173847.2054123-1-philmd@redhat.com>
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Restrict KVM to the following PPC machines:
- 40p
- bamboo
- g3beige
- mac99
- mpc8544ds
- ppce500
- pseries
- sam460ex
- virtex-ml507

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
RFC: I'm surprised by this list, but it is the result of
     auditing the calls to kvm_enabled().

 hw/ppc/e500plat.c      | 5 +++++
 hw/ppc/mac_newworld.c  | 6 ++++++
 hw/ppc/mac_oldworld.c  | 5 +++++
 hw/ppc/mpc8544ds.c     | 5 +++++
 hw/ppc/ppc440_bamboo.c | 5 +++++
 hw/ppc/prep.c          | 5 +++++
 hw/ppc/sam460ex.c      | 5 +++++
 hw/ppc/spapr.c         | 5 +++++
 8 files changed, 41 insertions(+)

diff --git a/hw/ppc/e500plat.c b/hw/ppc/e500plat.c
index bddd5e7c48f..9701dbc2231 100644
--- a/hw/ppc/e500plat.c
+++ b/hw/ppc/e500plat.c
@@ -67,6 +67,10 @@ HotplugHandler *e500plat_machine_get_hotpug_handler(MachineState *machine,
 
 #define TYPE_E500PLAT_MACHINE  MACHINE_TYPE_NAME("ppce500")
 
+static const char *const valid_accels[] = {
+    "tcg", "kvm", NULL
+};
+
 static void e500plat_machine_class_init(ObjectClass *oc, void *data)
 {
     PPCE500MachineClass *pmc = PPCE500_MACHINE_CLASS(oc);
@@ -98,6 +102,7 @@ static void e500plat_machine_class_init(ObjectClass *oc, void *data)
     mc->max_cpus = 32;
     mc->default_cpu_type = POWERPC_CPU_TYPE_NAME("e500v2_v30");
     mc->default_ram_id = "mpc8544ds.ram";
+    mc->valid_accelerators = valid_accels;
     machine_class_allow_dynamic_sysbus_dev(mc, TYPE_ETSEC_COMMON);
  }
 
diff --git a/hw/ppc/mac_newworld.c b/hw/ppc/mac_newworld.c
index e991db4addb..634f5ad19a0 100644
--- a/hw/ppc/mac_newworld.c
+++ b/hw/ppc/mac_newworld.c
@@ -578,6 +578,11 @@ static char *core99_fw_dev_path(FWPathProvider *p, BusState *bus,
 
     return NULL;
 }
+
+static const char *const valid_accels[] = {
+    "tcg", "kvm", NULL
+};
+
 static int core99_kvm_type(MachineState *machine, const char *arg)
 {
     /* Always force PR KVM */
@@ -595,6 +600,7 @@ static void core99_machine_class_init(ObjectClass *oc, void *data)
     mc->max_cpus = MAX_CPUS;
     mc->default_boot_order = "cd";
     mc->default_display = "std";
+    mc->valid_accelerators = valid_accels;
     mc->kvm_type = core99_kvm_type;
 #ifdef TARGET_PPC64
     mc->default_cpu_type = POWERPC_CPU_TYPE_NAME("970fx_v3.1");
diff --git a/hw/ppc/mac_oldworld.c b/hw/ppc/mac_oldworld.c
index 44ee99be886..2c58f73b589 100644
--- a/hw/ppc/mac_oldworld.c
+++ b/hw/ppc/mac_oldworld.c
@@ -424,6 +424,10 @@ static char *heathrow_fw_dev_path(FWPathProvider *p, BusState *bus,
     return NULL;
 }
 
+static const char *const valid_accels[] = {
+    "tcg", "kvm", NULL
+};
+
 static int heathrow_kvm_type(MachineState *machine, const char *arg)
 {
     /* Always force PR KVM */
@@ -444,6 +448,7 @@ static void heathrow_class_init(ObjectClass *oc, void *data)
 #endif
     /* TOFIX "cad" when Mac floppy is implemented */
     mc->default_boot_order = "cd";
+    mc->valid_accelerators = valid_accels;
     mc->kvm_type = heathrow_kvm_type;
     mc->default_cpu_type = POWERPC_CPU_TYPE_NAME("750_v3.1");
     mc->default_display = "std";
diff --git a/hw/ppc/mpc8544ds.c b/hw/ppc/mpc8544ds.c
index 81177505f02..92b0e926c1b 100644
--- a/hw/ppc/mpc8544ds.c
+++ b/hw/ppc/mpc8544ds.c
@@ -36,6 +36,10 @@ static void mpc8544ds_init(MachineState *machine)
     ppce500_init(machine);
 }
 
+static const char *const valid_accels[] = {
+    "tcg", "kvm", NULL
+};
+
 static void e500plat_machine_class_init(ObjectClass *oc, void *data)
 {
     MachineClass *mc = MACHINE_CLASS(oc);
@@ -56,6 +60,7 @@ static void e500plat_machine_class_init(ObjectClass *oc, void *data)
     mc->max_cpus = 15;
     mc->default_cpu_type = POWERPC_CPU_TYPE_NAME("e500v2_v30");
     mc->default_ram_id = "mpc8544ds.ram";
+    mc->valid_accelerators = valid_accels;
 }
 
 #define TYPE_MPC8544DS_MACHINE  MACHINE_TYPE_NAME("mpc8544ds")
diff --git a/hw/ppc/ppc440_bamboo.c b/hw/ppc/ppc440_bamboo.c
index b156bcb9990..02501f489e4 100644
--- a/hw/ppc/ppc440_bamboo.c
+++ b/hw/ppc/ppc440_bamboo.c
@@ -298,12 +298,17 @@ static void bamboo_init(MachineState *machine)
     }
 }
 
+static const char *const valid_accels[] = {
+    "tcg", "kvm", NULL
+};
+
 static void bamboo_machine_init(MachineClass *mc)
 {
     mc->desc = "bamboo";
     mc->init = bamboo_init;
     mc->default_cpu_type = POWERPC_CPU_TYPE_NAME("440epb");
     mc->default_ram_id = "ppc4xx.sdram";
+    mc->valid_accelerators = valid_accels;
 }
 
 DEFINE_MACHINE("bamboo", bamboo_machine_init)
diff --git a/hw/ppc/prep.c b/hw/ppc/prep.c
index 7e72f6e4a9b..90d884b0883 100644
--- a/hw/ppc/prep.c
+++ b/hw/ppc/prep.c
@@ -431,6 +431,10 @@ static void ibm_40p_init(MachineState *machine)
     }
 }
 
+static const char *const valid_accels[] = {
+    "tcg", "kvm", NULL
+};
+
 static void ibm_40p_machine_init(MachineClass *mc)
 {
     mc->desc = "IBM RS/6000 7020 (40p)",
@@ -441,6 +445,7 @@ static void ibm_40p_machine_init(MachineClass *mc)
     mc->default_boot_order = "c";
     mc->default_cpu_type = POWERPC_CPU_TYPE_NAME("604");
     mc->default_display = "std";
+    mc->valid_accelerators = valid_accels;
 }
 
 DEFINE_MACHINE("40p", ibm_40p_machine_init)
diff --git a/hw/ppc/sam460ex.c b/hw/ppc/sam460ex.c
index e459b43065b..79adb3352f0 100644
--- a/hw/ppc/sam460ex.c
+++ b/hw/ppc/sam460ex.c
@@ -506,6 +506,10 @@ static void sam460ex_init(MachineState *machine)
     boot_info->entry = entry;
 }
 
+static const char *const valid_accels[] = {
+    "tcg", "kvm", NULL
+};
+
 static void sam460ex_machine_init(MachineClass *mc)
 {
     mc->desc = "aCube Sam460ex";
@@ -513,6 +517,7 @@ static void sam460ex_machine_init(MachineClass *mc)
     mc->default_cpu_type = POWERPC_CPU_TYPE_NAME("460exb");
     mc->default_ram_size = 512 * MiB;
     mc->default_ram_id = "ppc4xx.sdram";
+    mc->valid_accelerators = valid_accels;
 }
 
 DEFINE_MACHINE("sam460ex", sam460ex_machine_init)
diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index 85fe65f8947..c5f985f0187 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -4397,6 +4397,10 @@ static void spapr_cpu_exec_exit(PPCVirtualHypervisor *vhyp, PowerPCCPU *cpu)
     }
 }
 
+static const char *const valid_accels[] = {
+    "tcg", "kvm", NULL
+};
+
 static void spapr_machine_class_init(ObjectClass *oc, void *data)
 {
     MachineClass *mc = MACHINE_CLASS(oc);
@@ -4426,6 +4430,7 @@ static void spapr_machine_class_init(ObjectClass *oc, void *data)
     mc->default_ram_size = 512 * MiB;
     mc->default_ram_id = "ppc_spapr.ram";
     mc->default_display = "std";
+    mc->valid_accelerators = valid_accels;
     mc->kvm_type = spapr_kvm_type;
     machine_class_allow_dynamic_sysbus_dev(mc, TYPE_SPAPR_PCI_HOST_BRIDGE);
     mc->pci_allow_0_address = true;
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Feb 19 17:39:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 17:39:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87034.164020 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD9kg-00018U-8U; Fri, 19 Feb 2021 17:39:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87034.164020; Fri, 19 Feb 2021 17:39:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD9kg-00018L-3w; Fri, 19 Feb 2021 17:39:38 +0000
Received: by outflank-mailman (input) for mailman id 87034;
 Fri, 19 Feb 2021 17:39:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ml8x=HV=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1lD9ke-00015B-HZ
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 17:39:36 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 5d3d0e88-38f6-4dbb-b21f-040f82cc7201;
 Fri, 19 Feb 2021 17:39:35 +0000 (UTC)
Received: from mail-wr1-f69.google.com (mail-wr1-f69.google.com
 [209.85.221.69]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-463-bvpW0CxdNBuwgHTvOAM7jw-1; Fri, 19 Feb 2021 12:39:31 -0500
Received: by mail-wr1-f69.google.com with SMTP id l10so2764903wry.16
 for <xen-devel@lists.xenproject.org>; Fri, 19 Feb 2021 09:39:31 -0800 (PST)
Received: from localhost.localdomain (68.red-83-57-175.dynamicip.rima-tde.net.
 [83.57.175.68])
 by smtp.gmail.com with ESMTPSA id r12sm3052972wrt.69.2021.02.19.09.39.28
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 19 Feb 2021 09:39:30 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5d3d0e88-38f6-4dbb-b21f-040f82cc7201
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1613756375;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Ueqj+n2dHsbjBHuZ0Hoy+mZlGObPSK9klpKLOoZpu4Y=;
	b=MMF33b6NnBdSSOiWvC0VxRqETouFen+FQ7chlscpyNGwpGoPnOVyXD0KOSt6ZNjzrcGLlx
	e7K3+LMK21YQac6V7nxO0KMpjaaqcNB7cndfaLMYUwlu9nlt+FCYDeVH2LS/Y8rF/bjZu1
	N7OsP/RVseweu135mRvasXNezJWyqF4=
X-MC-Unique: bvpW0CxdNBuwgHTvOAM7jw-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=Ueqj+n2dHsbjBHuZ0Hoy+mZlGObPSK9klpKLOoZpu4Y=;
        b=gvUjCHKHU0o7U/QuNaPovTACBBYo2wXSv5/NdFyA2U0hbNUtt0iIevYBFGzk1kF38l
         vPyaYsrCRZj/NkV6lec4bsPX1AdEXVWBvcjaU6LKOcI8m07P7ADqpSam6kLMPLBN4LrX
         FUrfjpePBKkEHrmeXlqqvHpWRdZsJfOm1cy1stSSfLRWhXprQPEBaZMWoH5yywKkDprr
         PIjX75HTMCStYq2cDziQfKTb7C74uV6KiqxrYrSI+/WnFGnIdsT9+OOJNTwWlAmHPKg1
         ZRmlN0RpWHmWM+4pryZNoezVCklvGvWwV+Yxr+tZvVDLdIyD4sVpGHwMoiUpRqZyIdpX
         wlxA==
X-Gm-Message-State: AOAM533Q0ZrM0dkvk6wLFwGYJKk3gDdOg3id8yhifDmb1Vs+WvVRRySv
	InPqEPmZbZ4dVWr1PXhu2+2lkHJQVfVR3bDCXioBSDkfyU13MoTcg4zerTzUl+ZZd20VCatr9XN
	JA2dxka/wpfZsFE1ZNiaeTD3sL0I=
X-Received: by 2002:a1c:c90c:: with SMTP id f12mr9312000wmb.98.1613756370707;
        Fri, 19 Feb 2021 09:39:30 -0800 (PST)
X-Google-Smtp-Source: ABdhPJyQVxSvgdKIkMYLk6jZPql9ZCAN0WXriT+5Nc8pY51dY+mHxFK5IkbcVQRBV0rvadWM1WYfTA==
X-Received: by 2002:a1c:c90c:: with SMTP id f12mr9311979wmb.98.1613756370490;
        Fri, 19 Feb 2021 09:39:30 -0800 (PST)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: Aurelien Jarno <aurelien@aurel32.net>,
	Peter Maydell <peter.maydell@linaro.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	qemu-ppc@nongnu.org,
	qemu-s390x@nongnu.org,
	Halil Pasic <pasic@linux.ibm.com>,
	Huacai Chen <chenhuacai@kernel.org>,
	xen-devel@lists.xenproject.org,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	David Gibson <david@gibson.dropbear.id.au>,
	qemu-arm@nongnu.org,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	kvm@vger.kernel.org,
	BALATON Zoltan <balaton@eik.bme.hu>,
	Leif Lindholm <leif@nuviainc.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Radoslaw Biernacki <rad@semihalf.com>,
	Alistair Francis <alistair@alistair23.me>,
	Paul Durrant <paul@xen.org>,
	Eduardo Habkost <ehabkost@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Thomas Huth <thuth@redhat.com>,
	Jiaxun Yang <jiaxun.yang@flygoat.com>,
	=?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
	Greg Kurz <groug@kaod.org>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Cornelia Huck <cohuck@redhat.com>,
	"Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
	David Hildenbrand <david@redhat.com>,
	Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
	Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Subject: [PATCH v2 07/11] hw/s390x: Make explicit that s390-ccw-virtio machines support TCG and KVM
Date: Fri, 19 Feb 2021 18:38:43 +0100
Message-Id: <20210219173847.2054123-8-philmd@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210219173847.2054123-1-philmd@redhat.com>
References: <20210219173847.2054123-1-philmd@redhat.com>
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

All s390-ccw-virtio machines support TCG and KVM.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 hw/s390x/s390-virtio-ccw.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/hw/s390x/s390-virtio-ccw.c b/hw/s390x/s390-virtio-ccw.c
index 2972b607f36..1f168485066 100644
--- a/hw/s390x/s390-virtio-ccw.c
+++ b/hw/s390x/s390-virtio-ccw.c
@@ -586,6 +586,10 @@ static ram_addr_t s390_fixup_ram_size(ram_addr_t sz)
     return newsz;
 }
 
+static const char *const valid_accels[] = {
+    "tcg", "kvm", NULL
+};
+
 static void ccw_machine_class_init(ObjectClass *oc, void *data)
 {
     MachineClass *mc = MACHINE_CLASS(oc);
@@ -612,6 +616,7 @@ static void ccw_machine_class_init(ObjectClass *oc, void *data)
     mc->possible_cpu_arch_ids = s390_possible_cpu_arch_ids;
     /* it is overridden with 'host' cpu *in kvm_arch_init* */
     mc->default_cpu_type = S390_CPU_TYPE_NAME("qemu");
+    mc->valid_accelerators = valid_accels;
     hc->plug = s390_machine_device_plug;
     hc->unplug_request = s390_machine_device_unplug_request;
     nc->nmi_monitor_handler = s390_nmi;
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Feb 19 17:39:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 17:39:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87039.164032 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD9kt-0001Iy-Nu; Fri, 19 Feb 2021 17:39:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87039.164032; Fri, 19 Feb 2021 17:39:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD9kt-0001Iq-Jr; Fri, 19 Feb 2021 17:39:51 +0000
Received: by outflank-mailman (input) for mailman id 87039;
 Fri, 19 Feb 2021 17:39:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ml8x=HV=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1lD9kr-0001FU-WF
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 17:39:50 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id d9a537ac-eb31-4856-9d00-58a73a5b2635;
 Fri, 19 Feb 2021 17:39:47 +0000 (UTC)
Received: from mail-wr1-f69.google.com (mail-wr1-f69.google.com
 [209.85.221.69]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-120-ZN5_YoBkMcSIkws2sM-PQA-1; Fri, 19 Feb 2021 12:39:43 -0500
Received: by mail-wr1-f69.google.com with SMTP id e11so2715711wro.19
 for <xen-devel@lists.xenproject.org>; Fri, 19 Feb 2021 09:39:43 -0800 (PST)
Received: from localhost.localdomain (68.red-83-57-175.dynamicip.rima-tde.net.
 [83.57.175.68])
 by smtp.gmail.com with ESMTPSA id b72sm13082236wmd.4.2021.02.19.09.39.40
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 19 Feb 2021 09:39:41 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d9a537ac-eb31-4856-9d00-58a73a5b2635
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1613756387;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=m89dmrSplgkgoyBA9XFoMMYsKZX69TbxEanq9oDkcW0=;
	b=DXjY4RQV0itR6RsRPvdf9Coo8i0V0+IoVp9d/6gRb7bfQePt3Yzm6637eeXjUGmuNAlYBA
	EqZzWirw3ji/ywUXv9kvQgAd2BQEu3qmBXjcKGpoid3831Ls92EMrq3JyxCQlu3e6GQuu5
	XYBiTOgCp0JRk6zwPItYpnh87nwQMMQ=
X-MC-Unique: ZN5_YoBkMcSIkws2sM-PQA-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=m89dmrSplgkgoyBA9XFoMMYsKZX69TbxEanq9oDkcW0=;
        b=aZuifwT8xzpDif3bTSgPxNV+2XmG3ykdqtuUJQxBsPpbsngkH/r8z3owwkr5gE4vSa
         hcp1+nVvHeQzpD+8F3FQvmiqs/48ftEvQjtToR8Tx+bjpL93irK+RJwCYRwOArUdv83b
         GPBZS1ySGR3sZOI8iSOYaRk2sxNzBRu0V1cGMfMUEgVxeq9gKbleogCwJUSdJrtMi/d3
         ovIgwV3vp6jwQHOn+nY+81VubN01kI/tJsek3GYwn1qxId560dbZ3C9sRIKbYMTwatsv
         5fdorqUjIWelfMyyHg7xTDu8Zl5/kXjQ97j2ig0hedQzsWNmd029xjkbQE8lhG2p00b5
         k+JA==
X-Gm-Message-State: AOAM532xYGn1WMTSfbbibGlt+EtFqaCJrnquirjG6TFdR7NuULw7vTLZ
	NvZiYbaRxuhe1Uvg+mSOC7nB1+GKZGsI5Dqaf1bfh1S8/85n2V+uWd5uxrmxADTPwIX/iZv/7CH
	779u0L7r0Vl0Q49j9DqiZJXLy0RI=
X-Received: by 2002:a1c:c6:: with SMTP id 189mr9272330wma.128.1613756382375;
        Fri, 19 Feb 2021 09:39:42 -0800 (PST)
X-Google-Smtp-Source: ABdhPJyfL/zeX1Twd/41yRwfBzbYwswnUhtDRMBecaJ3a8NHeinaw0+wzScyZWOOrq01liX3KOGGMQ==
X-Received: by 2002:a1c:c6:: with SMTP id 189mr9272291wma.128.1613756382214;
        Fri, 19 Feb 2021 09:39:42 -0800 (PST)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: Aurelien Jarno <aurelien@aurel32.net>,
	Peter Maydell <peter.maydell@linaro.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	qemu-ppc@nongnu.org,
	qemu-s390x@nongnu.org,
	Halil Pasic <pasic@linux.ibm.com>,
	Huacai Chen <chenhuacai@kernel.org>,
	xen-devel@lists.xenproject.org,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	David Gibson <david@gibson.dropbear.id.au>,
	qemu-arm@nongnu.org,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	kvm@vger.kernel.org,
	BALATON Zoltan <balaton@eik.bme.hu>,
	Leif Lindholm <leif@nuviainc.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Radoslaw Biernacki <rad@semihalf.com>,
	Alistair Francis <alistair@alistair23.me>,
	Paul Durrant <paul@xen.org>,
	Eduardo Habkost <ehabkost@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Thomas Huth <thuth@redhat.com>,
	Jiaxun Yang <jiaxun.yang@flygoat.com>,
	=?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
	Greg Kurz <groug@kaod.org>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Cornelia Huck <cohuck@redhat.com>,
	"Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
	David Hildenbrand <david@redhat.com>,
	Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
	Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Subject: [PATCH v2 09/11] hw/xenpv: Restrict Xen Para-virtualized machine to Xen accelerator
Date: Fri, 19 Feb 2021 18:38:45 +0100
Message-Id: <20210219173847.2054123-10-philmd@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210219173847.2054123-1-philmd@redhat.com>
References: <20210219173847.2054123-1-philmd@redhat.com>
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

When started with an accelerator other than Xen, the XenPV machine
fails with a cryptic message:

  $ qemu-system-x86_64 -M xenpv,accel=kvm
  xen be core: can't connect to xenstored
  qemu-system-x86_64: xen_init_pv: xen backend core setup failed

By restricting it to Xen, we display a clearer error message:

  $ qemu-system-x86_64 -M xenpv,accel=kvm
  qemu-system-x86_64: invalid accelerator 'kvm' for machine xenpv

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 hw/xenpv/xen_machine_pv.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/hw/xenpv/xen_machine_pv.c b/hw/xenpv/xen_machine_pv.c
index 8df575a457c..d7747bcec98 100644
--- a/hw/xenpv/xen_machine_pv.c
+++ b/hw/xenpv/xen_machine_pv.c
@@ -86,12 +86,17 @@ static void xen_init_pv(MachineState *machine)
     atexit(xen_config_cleanup);
 }
 
+static const char *const valid_accels[] = {
+    "xen", NULL
+};
+
 static void xenpv_machine_init(MachineClass *mc)
 {
     mc->desc = "Xen Para-virtualized PC";
     mc->init = xen_init_pv;
     mc->max_cpus = 1;
     mc->default_machine_opts = "accel=xen";
+    mc->valid_accelerators = valid_accels;
 }
 
 DEFINE_MACHINE("xenpv", xenpv_machine_init)
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Feb 19 17:39:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 17:39:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87040.164044 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD9ku-0001Ku-WF; Fri, 19 Feb 2021 17:39:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87040.164044; Fri, 19 Feb 2021 17:39:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD9ku-0001Kl-Sl; Fri, 19 Feb 2021 17:39:52 +0000
Received: by outflank-mailman (input) for mailman id 87040;
 Fri, 19 Feb 2021 17:39:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ml8x=HV=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1lD9kt-0000on-8S
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 17:39:51 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 7b0e2b8c-9c17-456a-bb6c-1467ccc2a51b;
 Fri, 19 Feb 2021 17:39:39 +0000 (UTC)
Received: from mail-wr1-f71.google.com (mail-wr1-f71.google.com
 [209.85.221.71]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-519-L0b7UppANLaDziCdFdlqKA-1; Fri, 19 Feb 2021 12:39:38 -0500
Received: by mail-wr1-f71.google.com with SMTP id y6so2750267wrl.9
 for <xen-devel@lists.xenproject.org>; Fri, 19 Feb 2021 09:39:37 -0800 (PST)
Received: from localhost.localdomain (68.red-83-57-175.dynamicip.rima-tde.net.
 [83.57.175.68])
 by smtp.gmail.com with ESMTPSA id c133sm2365046wme.46.2021.02.19.09.39.34
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 19 Feb 2021 09:39:36 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7b0e2b8c-9c17-456a-bb6c-1467ccc2a51b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1613756379;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=0XNpw7EsZwRfqH5jj+d5q0K26B/2H8ocJXujjf2y0q0=;
	b=ZHMfCRdN/bdY9ziUhL8FHMTvt/ET0FvOdowXpYp4YyGeLfmKK3gCq1Q5yUpR/vx4sBp6od
	KsswFI/FdlK0RsYDWmNDnarQHauInHORqA5+n6X3gIMFFYUUgQP2bjAX1Jrl/INaeb3oQr
	zmZDk09RjVLsR0/Ze6jNpbgjNMWyFsU=
X-MC-Unique: L0b7UppANLaDziCdFdlqKA-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=0XNpw7EsZwRfqH5jj+d5q0K26B/2H8ocJXujjf2y0q0=;
        b=t5HtcEcIp3i2D4AqEzp27oCkuCyQ55/7gxNko08hANUWLmcEReWpQlC31HcjlugNMU
         LNAoozvrOpYAwi1rel8VaAJp2v7kkUC5bs7cX+n3TEUp3VUZQDxBZHW4OsUaGmK0NrPh
         UrAgLBLkH3Wb2Kr8XnhPZppZTdsqOoP/0mq/mH8s1vc5wi36T6SC/njs9edowzoRmjAo
         NdnuxWtA+nG7UzLqkI21zBOiu0VsZ/A6UA4WogFR62Vtdd3PVHUONsva0HyDQo/nUY48
         A/kcZuOuHlf02mo4rhLIQbRhE2s4yqJ//YEJN8wnv9gOSzYGkKtzBiES70xUvbBdn0tj
         DN4g==
X-Gm-Message-State: AOAM532NCiLuF9sT29p62M54kaUtI8keiyVutJa8upGfgtnLU6bXb2fr
	8f2lZcQoTX2P4YHGEoHN+HI6yDrN26OvwXaJwp3QEJQ0OE8RVfekWlFPmIcicjN1gIBgDLx9kGS
	FQQIoeVGmE0yPvPJDa5p6SFkHBtk=
X-Received: by 2002:adf:fb91:: with SMTP id a17mr10124627wrr.93.1613756376926;
        Fri, 19 Feb 2021 09:39:36 -0800 (PST)
X-Google-Smtp-Source: ABdhPJwUWYk16cfFnjfBZLMWl2Vy5aAjSJDcFd3Zrj54+kp+HHEcqph2ioitLZ00EdqyQhCQF2e9eA==
X-Received: by 2002:adf:fb91:: with SMTP id a17mr10124607wrr.93.1613756376757;
        Fri, 19 Feb 2021 09:39:36 -0800 (PST)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: Aurelien Jarno <aurelien@aurel32.net>,
	Peter Maydell <peter.maydell@linaro.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	qemu-ppc@nongnu.org,
	qemu-s390x@nongnu.org,
	Halil Pasic <pasic@linux.ibm.com>,
	Huacai Chen <chenhuacai@kernel.org>,
	xen-devel@lists.xenproject.org,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	David Gibson <david@gibson.dropbear.id.au>,
	qemu-arm@nongnu.org,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	kvm@vger.kernel.org,
	BALATON Zoltan <balaton@eik.bme.hu>,
	Leif Lindholm <leif@nuviainc.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Radoslaw Biernacki <rad@semihalf.com>,
	Alistair Francis <alistair@alistair23.me>,
	Paul Durrant <paul@xen.org>,
	Eduardo Habkost <ehabkost@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Thomas Huth <thuth@redhat.com>,
	Jiaxun Yang <jiaxun.yang@flygoat.com>,
	=?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
	Greg Kurz <groug@kaod.org>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Cornelia Huck <cohuck@redhat.com>,
	"Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
	David Hildenbrand <david@redhat.com>,
	Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
	Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Subject: [RFC PATCH v2 08/11] hw/i386: Explicit x86 machines support all current accelerators
Date: Fri, 19 Feb 2021 18:38:44 +0100
Message-Id: <20210219173847.2054123-9-philmd@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210219173847.2054123-1-philmd@redhat.com>
References: <20210219173847.2054123-1-philmd@redhat.com>
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

x86 machines currently support all accelerators.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
RFC: not sure about this, x86 is not my cup of tea

 hw/i386/x86.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/hw/i386/x86.c b/hw/i386/x86.c
index 6329f90ef90..2dc10e7d386 100644
--- a/hw/i386/x86.c
+++ b/hw/i386/x86.c
@@ -1209,6 +1209,10 @@ static void x86_machine_initfn(Object *obj)
     x86ms->pci_irq_mask = ACPI_BUILD_PCI_IRQS;
 }
 
+static const char *const valid_accels[] = {
+    "tcg", "kvm", "xen", "hax", "hvf", "whpx", NULL
+};
+
 static void x86_machine_class_init(ObjectClass *oc, void *data)
 {
     MachineClass *mc = MACHINE_CLASS(oc);
@@ -1218,6 +1222,7 @@ static void x86_machine_class_init(ObjectClass *oc, void *data)
     mc->cpu_index_to_instance_props = x86_cpu_index_to_props;
     mc->get_default_cpu_node_id = x86_get_default_cpu_node_id;
     mc->possible_cpu_arch_ids = x86_possible_cpu_arch_ids;
+    mc->valid_accelerators = valid_accels;
     x86mc->compat_apic_id_mode = false;
     x86mc->save_tsc_khz = true;
     nc->nmi_monitor_handler = x86_nmi;
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Feb 19 17:40:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 17:40:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87043.164056 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD9l3-0001T1-9t; Fri, 19 Feb 2021 17:40:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87043.164056; Fri, 19 Feb 2021 17:40:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD9l3-0001Sn-5F; Fri, 19 Feb 2021 17:40:01 +0000
Received: by outflank-mailman (input) for mailman id 87043;
 Fri, 19 Feb 2021 17:40:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ml8x=HV=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1lD9l2-0001FU-0R
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 17:40:00 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 5c1ec71a-6762-4892-b815-30b495f60090;
 Fri, 19 Feb 2021 17:39:50 +0000 (UTC)
Received: from mail-wr1-f71.google.com (mail-wr1-f71.google.com
 [209.85.221.71]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-125-eeWd5OYkPF2IJSbFAA9Xsg-1; Fri, 19 Feb 2021 12:39:49 -0500
Received: by mail-wr1-f71.google.com with SMTP id w11so2794419wrp.6
 for <xen-devel@lists.xenproject.org>; Fri, 19 Feb 2021 09:39:48 -0800 (PST)
Received: from localhost.localdomain (68.red-83-57-175.dynamicip.rima-tde.net.
 [83.57.175.68])
 by smtp.gmail.com with ESMTPSA id v9sm9098392wrn.86.2021.02.19.09.39.46
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 19 Feb 2021 09:39:47 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5c1ec71a-6762-4892-b815-30b495f60090
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1613756390;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=23DXGFBg27YE0s/hCFn43oTEX31jOtnU+ISUArv5+bE=;
	b=FcHUbTFI2+D1UE2Q92T8b2JIM9nFTt/JKbFLOyM4TygNx1swzrCgcRrxf64R6wJTlRH7mT
	xFkytyvafaZHjWWHPzMiSJfkdUh85Nq25J787tsuIwr0Y2iWW8mjTF6L1w0bVwb/YBo2KA
	tixYyBWDLkU5lTbRlL8ZeHKf4LGWttk=
X-MC-Unique: eeWd5OYkPF2IJSbFAA9Xsg-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=23DXGFBg27YE0s/hCFn43oTEX31jOtnU+ISUArv5+bE=;
        b=nT5UCHdQMCgrZtOk9VSin1CqQ4hbk2W5E/Pbks/rr6nq3jlzwAHHVN87Xyl1Sv/qbw
         puSV/zz8loKbOwuA4irabTO3Q2P/XTw3v9Cb204JbBSDOBWnxxs3G1sTHzduPe9njvEB
         sDlGllblw9jRDa8mRoO1xr1eTU1Aws4G+pCq9fMO8KyJIenpfSBJL80AkGSzj3XVCBr8
         HQNPvvQg/4FDtRDLPlsT9q4vdUlkmfFQa80L1bQGOfZeR45poLmXlNiLPmbnZ4mX4tjZ
         GWQpHLTnCzoFs+nvLgtWht4c2Rg0mYm9GEAqVMEsumcQomagLi911Uh8iOp+AXh3Mz5O
         5aXg==
X-Gm-Message-State: AOAM532NU0QB5PN1TKRyj3EjRpVQkqKBuBno4PPJRXw7fRSsncDMKBF5
	q84QSYMdKKoTqMb4TjFkx6hKzw3eY+TTf2wGnR7TIV9bczdvpRYdoz1HpAI67jALOujdImsrA03
	ZwIkSVMqlylW11FRAwXMCkWL0AoI=
X-Received: by 2002:a7b:c095:: with SMTP id r21mr3049566wmh.48.1613756387887;
        Fri, 19 Feb 2021 09:39:47 -0800 (PST)
X-Google-Smtp-Source: ABdhPJxiAWfgHRpHyAW3kKe8U38bgrZikJVMduVGw3sP0TrSilsr1/3Kg29a0ZDcfjt7MheizWAw2g==
X-Received: by 2002:a7b:c095:: with SMTP id r21mr3049522wmh.48.1613756387730;
        Fri, 19 Feb 2021 09:39:47 -0800 (PST)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: Aurelien Jarno <aurelien@aurel32.net>,
	Peter Maydell <peter.maydell@linaro.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	qemu-ppc@nongnu.org,
	qemu-s390x@nongnu.org,
	Halil Pasic <pasic@linux.ibm.com>,
	Huacai Chen <chenhuacai@kernel.org>,
	xen-devel@lists.xenproject.org,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	David Gibson <david@gibson.dropbear.id.au>,
	qemu-arm@nongnu.org,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	kvm@vger.kernel.org,
	BALATON Zoltan <balaton@eik.bme.hu>,
	Leif Lindholm <leif@nuviainc.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Radoslaw Biernacki <rad@semihalf.com>,
	Alistair Francis <alistair@alistair23.me>,
	Paul Durrant <paul@xen.org>,
	Eduardo Habkost <ehabkost@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Thomas Huth <thuth@redhat.com>,
	Jiaxun Yang <jiaxun.yang@flygoat.com>,
	=?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
	Greg Kurz <groug@kaod.org>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Cornelia Huck <cohuck@redhat.com>,
	"Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
	David Hildenbrand <david@redhat.com>,
	Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
	Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Subject: [PATCH v2 10/11] hw/board: Only allow TCG accelerator by default
Date: Fri, 19 Feb 2021 18:38:46 +0100
Message-Id: <20210219173847.2054123-11-philmd@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210219173847.2054123-1-philmd@redhat.com>
References: <20210219173847.2054123-1-philmd@redhat.com>
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

By default machines can only use the TCG and QTest accelerators.

If a machine can use another accelerator, it has to explicitly
list it in its MachineClass valid_accelerators[].

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 include/hw/boards.h | 4 ++--
 hw/core/machine.c   | 8 ++++----
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/include/hw/boards.h b/include/hw/boards.h
index 4d08bc12093..b93d290b348 100644
--- a/include/hw/boards.h
+++ b/include/hw/boards.h
@@ -146,8 +146,8 @@ typedef struct {
  * @valid_accelerators:
  *    If this machine supports a specific set of virtualization accelerators,
  *    this contains a NULL-terminated list of the accelerators that can be
- *    used. If this field is not set, any accelerator is valid. The QTest
- *    accelerator is always valid.
+ *    used. If this field is not set, a default list containing only the TCG
+ *    accelerator is used. The QTest accelerator is always valid.
  * @kvm_type:
  *    Return the type of KVM corresponding to the kvm-type string option or
  *    computed based on other criteria such as the host kernel capabilities
diff --git a/hw/core/machine.c b/hw/core/machine.c
index c42d8e382b1..ca7c9ee2a0c 100644
--- a/hw/core/machine.c
+++ b/hw/core/machine.c
@@ -520,11 +520,11 @@ static void machine_set_nvdimm_persistence(Object *obj, const char *value,
 
 bool machine_class_valid_for_accelerator(MachineClass *mc, const char *acc_name)
 {
-    const char *const *name = mc->valid_accelerators;
+    static const char *const default_accels[] = {
+        "tcg", NULL
+    };
+    const char *const *name = mc->valid_accelerators ? : default_accels;
 
-    if (!name) {
-        return true;
-    }
     if (strcmp(acc_name, "qtest") == 0) {
         return true;
     }
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Feb 19 17:40:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 17:40:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87045.164068 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD9l8-000214-L6; Fri, 19 Feb 2021 17:40:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87045.164068; Fri, 19 Feb 2021 17:40:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD9l8-00020S-Gd; Fri, 19 Feb 2021 17:40:06 +0000
Received: by outflank-mailman (input) for mailman id 87045;
 Fri, 19 Feb 2021 17:40:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ml8x=HV=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1lD9l7-0001FU-0e
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 17:40:05 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 93aca1b8-4db5-4802-8025-a8201b57b270;
 Fri, 19 Feb 2021 17:39:58 +0000 (UTC)
Received: from mail-wr1-f69.google.com (mail-wr1-f69.google.com
 [209.85.221.69]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-17-d_bDoTa2MTOYcktTrTGPgA-1; Fri, 19 Feb 2021 12:39:54 -0500
Received: by mail-wr1-f69.google.com with SMTP id o10so2763211wru.11
 for <xen-devel@lists.xenproject.org>; Fri, 19 Feb 2021 09:39:54 -0800 (PST)
Received: from localhost.localdomain (68.red-83-57-175.dynamicip.rima-tde.net.
 [83.57.175.68])
 by smtp.gmail.com with ESMTPSA id r7sm15304999wre.25.2021.02.19.09.39.51
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 19 Feb 2021 09:39:52 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 93aca1b8-4db5-4802-8025-a8201b57b270
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1613756398;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=LKRfXZiJfNSqbVRL/SaOx1v+Z64xt5w41cSycQqYFFE=;
	b=Bjmh6fywSPPU/REtaHEaRlu05bWrrONLBE99Z0WDU6KkBh5uB5z5oE3Y47iRFrTXGr0SYs
	RFMVwi+f/B1Sayp+X7BK3Ev2Xfu6SQWa4npECu6Uv0w9KmosKUGsRpi6IzY/+TmyC9x82W
	1Hr8PPe5yTIFCaSl849mrAO6sSos6JA=
X-MC-Unique: d_bDoTa2MTOYcktTrTGPgA-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=LKRfXZiJfNSqbVRL/SaOx1v+Z64xt5w41cSycQqYFFE=;
        b=AAL8ZTRHOPlZN0QiZUIc+8QYrfFuC6rdnT8QjHxsoEaapKmH7G7bjGYvTch1DDj1uO
         y8VenlKSel4HewBVKkLDuoTcxacVvRLp9cLVHaArbxl/Sj36fNHAulvQSFkO1yFztyF/
         +7/XQnEuw/U8EmLm7/xUeRyw5xKM+A3Y2igRKJ/r6fvacOmYPUH3xw1VV7u1CTvKTfnj
         RmfjWmI5QBGjeskq/fx46ADaFF3mBDW0hQ0A20yxN7cFiN/ybpNddfSG4jXlA6hyg7Ik
         J0xXH7Srvv9KGVZyi0LZP22lPRyxX+tZzxgru9bLTBoOLYP9Z39AzKKBLtOZE8qPAV1p
         wgbA==
X-Gm-Message-State: AOAM532s7D8XqURdoAUH5lGSMUSBXmC+142X9KyObcxA2I7s409alMKH
	kekEhooBvrzDemK6QJJdaGH+24X/SA9ieZIH3DEcM2lxE/e7KdpUADGi+g/jqF3rEjBsLjQ91Bp
	MHzQM6zyVy+llyBZwRFwxuWisxFg=
X-Received: by 2002:a1c:4c03:: with SMTP id z3mr9342124wmf.82.1613756393453;
        Fri, 19 Feb 2021 09:39:53 -0800 (PST)
X-Google-Smtp-Source: ABdhPJx2yrJUDqwa1ymG/2RErXjJWx3WIzkQzNQF90+7ijmcN7yAwa2G2hjcHhwa/T8ilbMl1bPCyw==
X-Received: by 2002:a1c:4c03:: with SMTP id z3mr9342087wmf.82.1613756393314;
        Fri, 19 Feb 2021 09:39:53 -0800 (PST)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: Aurelien Jarno <aurelien@aurel32.net>,
	Peter Maydell <peter.maydell@linaro.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	qemu-ppc@nongnu.org,
	qemu-s390x@nongnu.org,
	Halil Pasic <pasic@linux.ibm.com>,
	Huacai Chen <chenhuacai@kernel.org>,
	xen-devel@lists.xenproject.org,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	David Gibson <david@gibson.dropbear.id.au>,
	qemu-arm@nongnu.org,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	kvm@vger.kernel.org,
	BALATON Zoltan <balaton@eik.bme.hu>,
	Leif Lindholm <leif@nuviainc.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Radoslaw Biernacki <rad@semihalf.com>,
	Alistair Francis <alistair@alistair23.me>,
	Paul Durrant <paul@xen.org>,
	Eduardo Habkost <ehabkost@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Thomas Huth <thuth@redhat.com>,
	Jiaxun Yang <jiaxun.yang@flygoat.com>,
	=?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
	Greg Kurz <groug@kaod.org>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Cornelia Huck <cohuck@redhat.com>,
	"Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
	David Hildenbrand <david@redhat.com>,
	Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
	Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Subject: [PATCH v2 11/11] softmmu/vl: Exit gracefully when accelerator is not supported
Date: Fri, 19 Feb 2021 18:38:47 +0100
Message-Id: <20210219173847.2054123-12-philmd@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210219173847.2054123-1-philmd@redhat.com>
References: <20210219173847.2054123-1-philmd@redhat.com>
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Before configuring an accelerator, check that it is valid for
the current machine. This lets us report a simple error
message instead of a cryptic one.

Before:

  $ qemu-system-arm -M raspi2b -enable-kvm
  qemu-system-arm: /build/qemu-ETIdrs/qemu-4.2/exec.c:865: cpu_address_space_init: Assertion `asidx == 0 || !kvm_enabled()' failed.
  Aborted

  $ qemu-system-aarch64 -M xlnx-zcu102 -enable-kvm -smp 6
  qemu-system-aarch64: kvm_init_vcpu: kvm_arch_init_vcpu failed (0): Invalid argument

After:

  $ qemu-system-arm -M raspi2b -enable-kvm
  qemu-system-arm: invalid accelerator 'kvm' for machine raspi2b

  $ qemu-system-aarch64 -M xlnx-zcu102 -enable-kvm -smp 6
  qemu-system-aarch64: -accel kvm: invalid accelerator 'kvm' for machine xlnx-zcu102

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 softmmu/vl.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/softmmu/vl.c b/softmmu/vl.c
index b219ce1f357..f2c4074310b 100644
--- a/softmmu/vl.c
+++ b/softmmu/vl.c
@@ -2133,6 +2133,7 @@ static int do_configure_accelerator(void *opaque, QemuOpts *opts, Error **errp)
     const char *acc = qemu_opt_get(opts, "accel");
     AccelClass *ac = accel_find(acc);
     AccelState *accel;
+    MachineClass *mc;
     int ret;
     bool qtest_with_kvm;
 
@@ -2145,6 +2146,12 @@ static int do_configure_accelerator(void *opaque, QemuOpts *opts, Error **errp)
         }
         return 0;
     }
+    mc = MACHINE_GET_CLASS(current_machine);
+    if (!qtest_chrdev && !machine_class_valid_for_accelerator(mc, ac->name)) {
+        *p_init_failed = true;
+        error_report("invalid accelerator '%s' for machine %s", acc, mc->name);
+        return 0;
+    }
     accel = ACCEL(object_new_with_class(OBJECT_CLASS(ac)));
     object_apply_compat_props(OBJECT(accel));
     qemu_opt_foreach(opts, accelerator_set_property,
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Feb 19 17:53:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 17:53:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87055.164079 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD9xl-0003bL-Up; Fri, 19 Feb 2021 17:53:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87055.164079; Fri, 19 Feb 2021 17:53:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lD9xl-0003bE-RV; Fri, 19 Feb 2021 17:53:09 +0000
Received: by outflank-mailman (input) for mailman id 87055;
 Fri, 19 Feb 2021 17:53:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lD9xk-0003b6-7p; Fri, 19 Feb 2021 17:53:08 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lD9xk-00060o-2x; Fri, 19 Feb 2021 17:53:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lD9xj-0004aU-Py; Fri, 19 Feb 2021 17:53:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lD9xj-0006wq-PU; Fri, 19 Feb 2021 17:53:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=gen4NKszOcitTrMOoLNIe6YbBFjPWmXjR8795Hhe70c=; b=5Nm0WztNaVgel8F+knq4zlMksT
	ZOxyuIANv0enhb92dPrH65Cvk4Hx5Df5yO9lbDVuejyLDdIb2SBXeK04Ur/LqEdfpqH0OtPaCUJPX
	v9OtTDH73IbFIVVwbgQpxsIgEd15CTmTxOmjyhfYxlyKvhJdNc8p9om+zSVy53LJCNYA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159480-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 159480: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=6794cdd08ea8b3512c53b8f162cb3f88fef54d0d
X-Osstest-Versions-That:
    xen=e8185c5f01c68f7d29d23a4a91bc1be1ff2cc1ca
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 19 Feb 2021 17:53:07 +0000

flight 159480 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159480/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  6794cdd08ea8b3512c53b8f162cb3f88fef54d0d
baseline version:
 xen                  e8185c5f01c68f7d29d23a4a91bc1be1ff2cc1ca

Last test of basis   159461  2021-02-18 13:01:27 Z    1 days
Testing same since   159480  2021-02-19 15:01:24 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Rahul Singh <rahul.singh@arm.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   e8185c5f01..6794cdd08e  6794cdd08ea8b3512c53b8f162cb3f88fef54d0d -> smoke


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 18:06:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 18:06:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87064.164098 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDAAh-0004pi-Ds; Fri, 19 Feb 2021 18:06:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87064.164098; Fri, 19 Feb 2021 18:06:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDAAh-0004pb-AW; Fri, 19 Feb 2021 18:06:31 +0000
Received: by outflank-mailman (input) for mailman id 87064;
 Fri, 19 Feb 2021 18:06:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDAAf-0004pT-Vp; Fri, 19 Feb 2021 18:06:29 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDAAf-0006Kt-NA; Fri, 19 Feb 2021 18:06:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDAAf-0005Kr-CL; Fri, 19 Feb 2021 18:06:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lDAAf-0007cV-Bq; Fri, 19 Feb 2021 18:06:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=YDaRIz+piGSsI9chs3Ifh7jmyk2+dqzGK4RXD/PFYOw=; b=xjfrHP/L1UTeFTQmKxNpGX6WWL
	nGSJCDQNgnFkmOKjGY8caRABQnO8cFwbtV34YVs9zYWtI4/VEv92FQsb8hCDTvXMpEy0SnoxTV48Y
	nPpAaLh92ojWB0VaRzZsbX39k5B0PcuZ7GDCIYpbSTSaqjfxGpQ8g2HgBkM18o9zPwxg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159471-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 159471: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=634516262aa1fca541fca0e2d06404559de14a43
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 19 Feb 2021 18:06:29 +0000

flight 159471 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159471/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              634516262aa1fca541fca0e2d06404559de14a43
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  224 days
Failing since        151818  2020-07-11 04:18:52 Z  223 days  216 attempts
Testing same since   159471  2021-02-19 04:18:54 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 43291 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 18:20:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 18:20:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87070.164114 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDAO6-0006mO-Nx; Fri, 19 Feb 2021 18:20:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87070.164114; Fri, 19 Feb 2021 18:20:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDAO6-0006mH-JE; Fri, 19 Feb 2021 18:20:22 +0000
Received: by outflank-mailman (input) for mailman id 87070;
 Fri, 19 Feb 2021 18:20:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7G3N=HV=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1lDAO5-0006mC-4B
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 18:20:21 +0000
Received: from mail-wm1-x333.google.com (unknown [2a00:1450:4864:20::333])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 654b75d0-70c0-47c1-8bff-46fc3c430567;
 Fri, 19 Feb 2021 18:20:20 +0000 (UTC)
Received: by mail-wm1-x333.google.com with SMTP id i7so4776526wmb.0
 for <xen-devel@lists.xenproject.org>; Fri, 19 Feb 2021 10:20:20 -0800 (PST)
Received: from ?IPv6:2a00:23c5:5785:9a01:101f:7370:9e02:844f?
 ([2a00:23c5:5785:9a01:101f:7370:9e02:844f])
 by smtp.gmail.com with ESMTPSA id y16sm14031151wrw.46.2021.02.19.10.20.17
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 19 Feb 2021 10:20:19 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 654b75d0-70c0-47c1-8bff-46fc3c430567
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:subject:to:cc:references:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=QWLOIiitmqBo5+3UaA0iJ43f4RC209g18TM/vosn6KU=;
        b=POVBAu9h6VXtheGXvk2NfpN7VuXcoirzXDBHZeSRMcMGwnLtXNDmDnbRloswSncCiu
         UUBmOonAiKAxfUdgfpFKvVHNlcCFmDWcmG8TCbHlREKDdpiW5tij6aSeDioQohoQDmeC
         l2FziYTXDZmhyOPKjnAEbFWKjYxdlG6G3js9cUC1ajMTGuLS3fEZNPbSMl5onWT1CxdM
         Q2YUQQV7bJTmWqgCLcKw5KDw4nOFUtWukP3ASDJiAqkYmBSvJTP2EuipIaJIBK23iSfg
         JQjIFR1ep2dPw3zIHeetdBiKkyAERCkBqJuP8d2NkTJ+YjJg4vONwd4TsJwMnLvHRhe0
         xrxg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:subject:to:cc:references
         :message-id:date:user-agent:mime-version:in-reply-to
         :content-language:content-transfer-encoding;
        bh=QWLOIiitmqBo5+3UaA0iJ43f4RC209g18TM/vosn6KU=;
        b=DDpQp9BgjmFf50C2qdsCjnIaZkT0DGhxh4JQi0Rr3KJhIa0q2nZzvwyGknarZDK1gH
         AIdixrLtYLe9o40IyandX07w6rI1UHfrCHJuY2dLeQF5lQZp2hFIYxIpgxsWtZyPUPqb
         brd7kvx4rF7uupQZCyK43Ia0egOAVPECwSMqTkXJ/KIFEXMOhMHA34XvEdpG2RYOrfSu
         2pntNhl90QE4zRHzMKRub/8QqQwrlAVlMviJw0+wk3sT0Uugi/ldArBg7lzn/51jBx/p
         Cbvxi+lfBQVHUd1gqbuRg6Ic0ydE9MfEsIaePRBGX+v0ZXPmGZneZN7CMyB49t+N25nL
         6nLw==
X-Gm-Message-State: AOAM5317PL8D2oGu8fPVhFAKm30yGhEuvXf7tbJuN+9kl54Oibfo9vVU
	bmncmFoP2jHMZWqMznImljA=
X-Google-Smtp-Source: ABdhPJyPeyLs0ufJRIjEcmZzmn8XKM8ztr4zDFZnUaEnnHiY3eTI/16kdMKEeG8iwgyT/uZdHskVEw==
X-Received: by 2002:a1c:a795:: with SMTP id q143mr9362251wme.113.1613758819494;
        Fri, 19 Feb 2021 10:20:19 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: Paul Durrant <paul@xen.org>
Reply-To: paul@xen.org
Subject: Re: [PATCH v2 09/11] hw/xenpv: Restrict Xen Para-virtualized machine
 to Xen accelerator
To: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>,
 qemu-devel@nongnu.org
Cc: Aurelien Jarno <aurelien@aurel32.net>,
 Peter Maydell <peter.maydell@linaro.org>,
 Anthony Perard <anthony.perard@citrix.com>, qemu-ppc@nongnu.org,
 qemu-s390x@nongnu.org, Halil Pasic <pasic@linux.ibm.com>,
 Huacai Chen <chenhuacai@kernel.org>, xen-devel@lists.xenproject.org,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 David Gibson <david@gibson.dropbear.id.au>, qemu-arm@nongnu.org,
 Stefano Stabellini <sstabellini@kernel.org>,
 Paolo Bonzini <pbonzini@redhat.com>, kvm@vger.kernel.org,
 BALATON Zoltan <balaton@eik.bme.hu>, Leif Lindholm <leif@nuviainc.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Radoslaw Biernacki <rad@semihalf.com>,
 Alistair Francis <alistair@alistair23.me>,
 Eduardo Habkost <ehabkost@redhat.com>, "Michael S. Tsirkin"
 <mst@redhat.com>, Thomas Huth <thuth@redhat.com>,
 Jiaxun Yang <jiaxun.yang@flygoat.com>,
 =?UTF-8?Q?Herv=c3=a9_Poussineau?= <hpoussin@reactos.org>,
 Greg Kurz <groug@kaod.org>, Christian Borntraeger <borntraeger@de.ibm.com>,
 Cornelia Huck <cohuck@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 David Hildenbrand <david@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <f4bug@amsat.org>
References: <20210219173847.2054123-1-philmd@redhat.com>
 <20210219173847.2054123-10-philmd@redhat.com>
Message-ID: <f386d7c4-f139-4f17-4e5b-5a3c5288b238@xen.org>
Date: Fri, 19 Feb 2021 18:20:17 +0000
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210219173847.2054123-10-philmd@redhat.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 19/02/2021 17:38, Philippe Mathieu-Daudé wrote:
> When started with an accelerator other than Xen, the XenPV machine
> fails with a cryptic message:
> 
>    $ qemu-system-x86_64 -M xenpv,accel=kvm
>    xen be core: can't connect to xenstored
>    qemu-system-x86_64: xen_init_pv: xen backend core setup failed
> 
> By restricting it to Xen, we display a clearer error message:
> 
>    $ qemu-system-x86_64 -M xenpv,accel=kvm
>    qemu-system-x86_64: invalid accelerator 'kvm' for machine xenpv
> 
> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>

Acked-by: Paul Durrant <paul@xen.org>


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 20:14:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 20:14:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87076.164126 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDCAa-0000gz-Qt; Fri, 19 Feb 2021 20:14:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87076.164126; Fri, 19 Feb 2021 20:14:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDCAa-0000gs-Ml; Fri, 19 Feb 2021 20:14:32 +0000
Received: by outflank-mailman (input) for mailman id 87076;
 Fri, 19 Feb 2021 20:14:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDCAZ-0000gk-Bx; Fri, 19 Feb 2021 20:14:31 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDCAZ-0008RX-6p; Fri, 19 Feb 2021 20:14:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDCAY-00035j-Rz; Fri, 19 Feb 2021 20:14:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lDCAY-000481-RX; Fri, 19 Feb 2021 20:14:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=yDbMkQK+TLxkZUrbsmW6xbXaojFKoKd4ajUynXnpABQ=; b=3cmNqh5krRUV+68mfTujxkv7Cs
	tCapqpc+C1gHSZEr8es2IXk0HRED8yinEg7tF5OzSwBzEudsr0KlsjfBRVUuzR2dLRCoJCL6t/G1t
	AozfD2BUqZ2mbiDfoSMMLfXlA6MJY+NQujQEReDaySIN3d0wpR8/Qx0knqBUQpAbYtyM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159481-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 159481: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=87a067fd8f4d4f7c6be02c3d38145115ac542017
X-Osstest-Versions-That:
    xen=6794cdd08ea8b3512c53b8f162cb3f88fef54d0d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 19 Feb 2021 20:14:30 +0000

flight 159481 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159481/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  87a067fd8f4d4f7c6be02c3d38145115ac542017
baseline version:
 xen                  6794cdd08ea8b3512c53b8f162cb3f88fef54d0d

Last test of basis   159480  2021-02-19 15:01:24 Z    0 days
Testing same since   159481  2021-02-19 18:02:31 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Norbert Manthey <nmanthey@amazon.de>
  Roger Pau Monné <roger.pau@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   6794cdd08e..87a067fd8f  87a067fd8f4d4f7c6be02c3d38145115ac542017 -> smoke


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 20:17:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 20:17:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87080.164141 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDCDW-0000qU-A9; Fri, 19 Feb 2021 20:17:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87080.164141; Fri, 19 Feb 2021 20:17:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDCDW-0000qN-6I; Fri, 19 Feb 2021 20:17:34 +0000
Received: by outflank-mailman (input) for mailman id 87080;
 Fri, 19 Feb 2021 20:17:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6Nbw=HV=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lDCDU-0000qI-Nc
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 20:17:32 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ceb98854-9864-40c1-9f31-728f45de155a;
 Fri, 19 Feb 2021 20:17:31 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ceb98854-9864-40c1-9f31-728f45de155a
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613765851;
  h=subject:to:references:from:message-id:date:in-reply-to:
   content-transfer-encoding:mime-version;
  bh=3nb53cQ4PWXD69FuwHnuTzl6UylkUtvoW95r9GrLf4Q=;
  b=AMAM7kzpttHiJ37y7ytpN8aHUlfaHLE8FDvXaPRfPBbZu+z/nF55r1tc
   OHPTjub/gI7zJjvDbH4LJOVP4TLizvCybv4XmN7BAZFS42AzXux3SWv/K
   xEFfWo6Ipr2W5OMYtJJuFC0whkshyzshiHfjZfSs2VRRF7PqtUTiki3ve
   c=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: 92RGrAAT3xB2WPHfPxUWVcYOQ3Tf6nDdVSClala0xhY14knOm86SUbMY4DWyZxW4FfxXLI8cst
 eLivrXFRp/+mqArP9p5l/YHcg07P5COwuKuOcn6ZPCyCfcepna+riH5+fAyyH5pu7vh/LSIHhn
 eoVgdXfME1oJzc+bW/LrTZTWE4OxGKO4c3f32e9k+EGWVSdqJUY6bkPW4eX5uleGl/LUNptRHH
 EW+1C5zd4H47WJ6D5fNDqDRaynTHSVMmo+HgVn3i2rO25I2vHfwk21/sLXHWlSjcaGgQytbj+x
 BHE=
X-SBRS: 5.2
X-MesageID: 37661278
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,191,1610427600"; 
   d="scan'208";a="37661278"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Jr83w5uYvuF8Q68kD+WyD5T2q0KXJ8Jsbrwdxm1x5D/8uO66ZokcMsIEcAdtSiqzS83M5XzpnSVmdjWkQt1xiBDhNlNsqAdPUvHrtg7DA9IiR203aPXKdKkWgUAMYPm1WGC+8M9UhDurEl9/lDptspP4SA6i+NQIVqNPc/eiXut/kP8qRGeHnOqnfcHfW6wrn90+7H9dTtTUkKscDqzBY+k/u7Ew/9Q5GpoHSjSfXbB3MIJe5Vl7J7TDElAzCXQzSbso550Qlu78/SllPYnrMFuQp74Fp3kcWTPtQynZLjcA55a0rIPeviDr0VHpmJnbVQI2cjTb+H3OXuZKFR73iQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3nb53cQ4PWXD69FuwHnuTzl6UylkUtvoW95r9GrLf4Q=;
 b=AxL1IgA0DvEb51WSFmU1Whe3oAgZlU5mmINBKs9ILJa3EMYnvyUG3XQ6kvfGLPse9+49ye+OVq8pgTpedRGNtCAEdSwBQR6ibmUegbrj0Oq9cS73kpXrVvg2NvvEpfb8B6IjNeTwQAxqsnX0NxXq0+z3Luu0vjURuYGybdpExO1iNucsxv5rR8/+4pj0uLxCTCAM2QDQp2xtx1xkQWEHY2hTKpouPTL0Os+T5y3hj2Jet56VT2eoEMMJYemCfDw1jNetNGSCqLtIFnPXwFq5TAPjE6KDylgTwiBMgm0jgk2hxmCruDWswpyi+RnibujPLratffoSPvjNRVbe5ZByNw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3nb53cQ4PWXD69FuwHnuTzl6UylkUtvoW95r9GrLf4Q=;
 b=Cwb6CNrPm6kvPBmQLXMSAjbX7TRkTLQ8uwfQLuChoyP0e2vc4rx2koYIHIx8sPm0yW2MNOZMesYhKpzRe63Kxvel/3+oRX/g5OKzD+TYURXqMzkUHyRX/TKQlOBfavFQrTr+cH0URC3AEvjpdJIcNxav/rX94DStar6PPHIAc0s=
Subject: Re: How does shadow page table work during migration?
To: Kevin Negy <kevinnegy@gmail.com>, <xen-devel@lists.xenproject.org>
References: <CACZWC-qK_biKgyi+ZiXnsHRscAbK9pz=kncdBA25QYWY129HCQ@mail.gmail.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <38cf0d39-da1d-5375-89dc-1668e26323a2@citrix.com>
Date: Fri, 19 Feb 2021 20:17:18 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
In-Reply-To: <CACZWC-qK_biKgyi+ZiXnsHRscAbK9pz=kncdBA25QYWY129HCQ@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0202.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1a5::9) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: bdbf2e09-2d05-444e-c27f-08d8d5135efd
X-MS-TrafficTypeDiagnostic: BYAPR03MB4872:
X-Microsoft-Antispam-PRVS: <BYAPR03MB4872FF2D0A281F3AA1917C02BA849@BYAPR03MB4872.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: pFGN29dc1xkTz2rtbPGBeeU4WNzqdGGmsQW1uQKwppMRlETWKfxNqSYyzeSdNf5YolXWMGV14gZ/cymOmQ4aosNZmA3lpKjdeSCvTGtOfeOEfJrqtvYLu181D0OpmJl4vw6WSzYP37R9AGKn4h8NzmxAl4Ch2X9+pmInwovuExWxdm5F4us4Flgfz5ixoqs/jF8v4l3Rn5X0L2INv3DMmwsuPA/cOHV1kKxT/wxblrJZWD3xqPgo2CzoJc5A/gLq13eGVLKOyyhIH+XnQOBI/oK1xz4q6PwrCfNWtyUHrXe0QpwPeMVhTF1SKMERo06QT073JBZaV/r18TZWlfOXcTo2krdtUq34cRkLsxDmjMagqxpahR+U5KUF7zfM3L3tUP5vugHlp8mOf44y3eGEbhI3oENS0FVDAmgY3gjp9jfqESHy2f6KRTiJX06BPJWJpbnrIESqQchN2DqfuZayeQDMKOHuPiJOH/iEg8biOtOi8I7Is7SkpcHZPBIW4vNU/ykDnzWRjrihA6okiPjk3v4BsyTqQWd2ebhSzDRAfpO38qsaiNKp6JddLPRZlqDNMGwMr3vHBnETYAdx0FJzNZh30C8mQz6j2/WohsyRNeA=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(39860400002)(396003)(346002)(376002)(366004)(478600001)(31686004)(26005)(2906002)(6486002)(8676002)(8936002)(5660300002)(316002)(31696002)(186003)(36756003)(16576012)(83380400001)(2616005)(956004)(66946007)(66476007)(16526019)(86362001)(66556008)(6666004)(53546011)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Feb 2021 20:17:25.1166
 (UTC)
X-OriginatorOrg: citrix.com

On 19/02/2021 16:10, Kevin Negy wrote:
> Hello,
>
> I'm trying to understand how the shadow page table works in Xen,
> specifically during live migration. My understanding is that after
> shadow paging is enabled (sh_enable_log_dirty() in
> xen/arch/x86/mm/shadow/common.c), a shadow page table is created,
> which is a complete copy of the current guest page table. Then the CR3
> register is switched to use this shadow page table as the active table
> while the guest page table is stored elsewhere. The guest page table
> itself (and not the individual entries in the page table) is marked as
> read only so that any guest memory access that requires the page table
> will result in a page fault. These page faults happen and are trapped
> to the Xen hypervisor. Xen will then update the shadow page table to
> match what the guest sees on its page tables.
>
> Is this understanding correct?
>
> If so, here is where I get confused. During the migration pre-copy
> phase, each pre-copy iteration reads the dirty bitmap
> (paging_log_dirty_op() in xen/arch/x86/mm/paging.c) and cleans it.
> This process seems to destroy all the shadow page tables of the domain
> with the call to shadow_blow_tables() in sh_clean_dirty_bitmap().
>
> How is the dirty bitmap related to shadow page tables? Why destroy the
> entire shadow page table if it is the only legitimate page table in
> CR3 for the domain?

Hello,

Different types of domains use shadow pagetables in different ways, and
the interaction with migration is also type-dependent.

HVM guests use shadow (or HAP) as a fixed property from when they are
created. Migrating an HVM domain does not dynamically affect whether
shadow is active. PV guests use no shadow by default, but do turn it
on dynamically for migration purposes.

Whenever shadow is active, guests do not have write access to their
pagetables. All updates are emulated if necessary, and "the shadow
pagetables" are managed entirely by Xen behind the scenes.


Next is the shadow memory pool. Guests can have an unbounded quantity
of pagetables, and certain pagetable structures take more memory
allocations to shadow correctly than the quantity of RAM the guest
expended constructing the structure in the first place.

Obviously, Xen can't be in a position where it is forced to expend more
memory on shadow pagetables than the RAM allocated to the guest in the
first place. What we do instead is keep a fixed-size memory pool
(chosen when you create the domain - see the shadow_memory VM
parameter) and recycle shadows on a least-recently-used basis.

In practice, this means that Xen never has all of the guest's pagetables
shadowed at once. When the guest moves off the pagetables which are
currently shadowed, a pagefault occurs and Xen shadows the new address
by recycling a shadow which hasn't been used for a while. The
shadow_blow_tables() call means "please recycle everything": it throws
away all shadow pagetables, which in turn causes the shadows to be
recreated from scratch as the guest continues to run.
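The recycling behaviour can be modelled with a small fixed pool managed
on a least-recently-used basis. This is a toy userspace sketch, not
Xen's actual implementation (which lives in xen/arch/x86/mm/shadow/);
all names and the pool size here are illustrative:

```c
#include <assert.h>
#include <string.h>

#define POOL_SIZE 4           /* fixed pool: analogue of shadow_memory */

struct shadow {
    unsigned long gfn;        /* which guest table this shadows */
    unsigned long last_used;  /* LRU timestamp */
    int in_use;
};

static struct shadow pool[POOL_SIZE];
static unsigned long tick;

/* Find an existing shadow, or recycle the least-recently-used slot. */
static struct shadow *get_shadow(unsigned long gfn)
{
    struct shadow *victim = &pool[0];

    for (int i = 0; i < POOL_SIZE; i++) {
        if (pool[i].in_use && pool[i].gfn == gfn) {
            pool[i].last_used = ++tick;   /* hit: refresh its age */
            return &pool[i];
        }
        if (!pool[i].in_use || pool[i].last_used < victim->last_used)
            victim = &pool[i];            /* track the oldest slot */
    }

    /* Miss: "recycle" the victim and rebuild it for the new gfn. */
    victim->gfn = gfn;
    victim->in_use = 1;
    victim->last_used = ++tick;
    return victim;
}

/* Analogue of shadow_blow_tables(): recycle everything at once. */
static void blow_tables(void)
{
    memset(pool, 0, sizeof(pool));
}
```

A guest touching more pagetables than the pool holds simply causes the
oldest shadows to be evicted and rebuilt on the next fault, which is
exactly why the pool can stay bounded.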


Next, the logdirty bitmap. The bitmap itself is fairly simple - one
bit per 4k page of guest physical address space, indicating whether
that page has been written to since the last time we checked.
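Concretely, that is just bit-per-frame bookkeeping keyed by guest
physical frame number. A minimal sketch (not Xen's actual code, which
lives in xen/arch/x86/mm/paging.c; sizes are illustrative):

```c
#include <assert.h>
#include <limits.h>

#define PAGE_SHIFT   12        /* 4k pages */
#define GUEST_FRAMES 1024      /* example: 4 MiB of guest physical space */
#define BITS_PER_WORD (sizeof(unsigned long) * CHAR_BIT)

static unsigned long dirty_bitmap[GUEST_FRAMES / BITS_PER_WORD];

/* Mark the frame containing guest physical address gpa as dirty. */
static void mark_dirty(unsigned long gpa)
{
    unsigned long gfn = gpa >> PAGE_SHIFT;

    dirty_bitmap[gfn / BITS_PER_WORD] |= 1UL << (gfn % BITS_PER_WORD);
}

/* Has the frame containing gpa been written since the last clean? */
static int test_dirty(unsigned long gpa)
{
    unsigned long gfn = gpa >> PAGE_SHIFT;

    return !!(dirty_bitmap[gfn / BITS_PER_WORD] &
              (1UL << (gfn % BITS_PER_WORD)));
}
```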

What is complicated is tracking the writes, and to understand why, it
is easier to consider the HVM HAP (i.e. non-shadow) case first. Here,
Xen maintains a single set of EPT or NPT pagetables, which map the
guest physical address space.

When we turn on logdirty, we pause the VM temporarily and mark all
guest RAM as read-only. (In reality, we have a lazy-propagation
mechanism for this read-only-ness, so we don't spend seconds of
wallclock time with large VMs paused while making the change.) Then,
as the guest continues to execute, it exits to Xen whenever a write
hits a read-only mapping. Xen responds by marking the frame in the
logdirty bitmap, remapping it read-write, and letting the guest
continue.
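That fault-and-remap cycle can be sketched as follows (hypothetical
names; the real handlers sit in Xen's p2m/HAP code). Note how each
frame faults at most once per logdirty round:

```c
#include <assert.h>

enum perm { P2M_RO, P2M_RW };

#define NFRAMES 16

static enum perm p2m[NFRAMES];  /* toy gfn -> permission map */
static int dirty[NFRAMES];      /* toy logdirty bitmap, one int per frame */

/* Turning on logdirty: mark every guest frame read-only. */
static void logdirty_enable(void)
{
    for (int i = 0; i < NFRAMES; i++) {
        p2m[i] = P2M_RO;
        dirty[i] = 0;
    }
}

/* A guest write hit a read-only mapping: record it, re-grant write. */
static void handle_write_fault(unsigned long gfn)
{
    dirty[gfn] = 1;      /* note the frame in the logdirty bitmap */
    p2m[gfn] = P2M_RW;   /* remap read-write so the guest can continue */
}

/* Returns 1 if the write took a fault/VM exit, 0 if it went straight through. */
static int guest_write(unsigned long gfn)
{
    if (p2m[gfn] == P2M_RO) {
        handle_write_fault(gfn);
        return 1;
    }
    return 0;
}
```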

Shadow pagetables are more complicated. With HAP, hardware helps us
maintain the guest virtual and guest physical address spaces in
logically separate ways, which are eventually combined in the TLBs.
With shadow, Xen has to combine the address spaces itself - the shadow
pagetables map guest virtual to host physical addresses.
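In other words, where HAP hardware composes the two translations at
runtime, shadow code must compute the composition gva -> gpa -> hpa
itself when it builds each shadow entry. A toy sketch of that
composition (flat illustrative arrays standing in for the guest's
pagetables and Xen's p2m):

```c
#include <assert.h>

#define NFRAMES 8

/* Guest pagetable: guest virtual frame -> guest physical frame. */
static const unsigned long guest_pt[NFRAMES] = { 3, 1, 4, 1, 5, 0, 2, 6 };

/* Xen's p2m: guest physical frame -> host physical frame. */
static const unsigned long p2m[NFRAMES]      = { 7, 0, 6, 2, 5, 1, 3, 4 };

/* Build one shadow entry: the combination hardware does for us with HAP. */
static unsigned long shadow_entry(unsigned long gvfn)
{
    unsigned long gpfn = guest_pt[gvfn];  /* stage 1: guest's own mapping */

    return p2m[gpfn];                     /* stage 2: Xen's mapping */
}
```

Uncombining - recovering which shadow entries came from which guest
physical frame - means inverting this composition, which is why it is
so much cheaper to discard the shadows and rebuild.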

Suddenly, "mark all guest RAM as read-only" isn't trivial. The logical
operation you need is: for the shadows we have, uncombine the two
logical address spaces, and for the subset which maps guest RAM, change
read-write mappings to read-only, then recombine. The uncombine part
is racy, and involves reversing a one-way mapping, so is exceedingly
expensive.

It is *far* easier to just throw everything away and re-shadow from
scratch, when we want to start tracking writes.


Anyway - I hope this is informative. It is accurate to the best of my
knowledge, but it was also written off the top of my head. In some
copious free time, I should see about putting some Sphinx docs together
for it.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 20:33:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 20:33:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Date: Fri, 19 Feb 2021 15:32:15 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Christoph Hellwig <hch@lst.de>,
        Boris Ostrovsky <boris.ostrovsky@oracle.com>, jgross@suse.com
Cc: Dongli Zhang <dongli.zhang@oracle.com>, dri-devel@lists.freedesktop.org,
        intel-gfx@lists.freedesktop.org, iommu@lists.linux-foundation.org,
        linux-mips@vger.kernel.org, linux-mmc@vger.kernel.org,
        linux-pci@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
        nouveau@lists.freedesktop.org, x86@kernel.org,
        xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
        adrian.hunter@intel.com, akpm@linux-foundation.org,
        benh@kernel.crashing.org, bskeggs@redhat.com, bhelgaas@google.com,
        bp@alien8.de, boris.ostrovsky@oracle.com, chris@chris-wilson.co.uk,
        daniel@ffwll.ch, airlied@linux.ie, hpa@zytor.com, mingo@kernel.org,
        mingo@redhat.com, jani.nikula@linux.intel.com,
        joonas.lahtinen@linux.intel.com, jgross@suse.com,
        m.szyprowski@samsung.com, matthew.auld@intel.com, mpe@ellerman.id.au,
        rppt@kernel.org, paulus@samba.org, peterz@infradead.org,
        robin.murphy@arm.com, rodrigo.vivi@intel.com, sstabellini@kernel.org,
        bauerman@linux.ibm.com, tsbogend@alpha.franken.de, tglx@linutronix.de,
        ulf.hansson@linaro.org, joe.jin@oracle.com, thomas.lendacky@amd.com
Subject: Re: [PATCH RFC v1 5/6] xen-swiotlb: convert variables to arrays
Message-ID: <YDAgT2ZIdncNwNlf@Konrads-MacBook-Pro.local>
References: <20210203233709.19819-1-dongli.zhang@oracle.com>
 <20210203233709.19819-6-dongli.zhang@oracle.com>
 <20210204084023.GA32328@lst.de>
 <20210207155601.GA25111@lst.de>
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210207155601.GA25111@lst.de>
MIME-Version: 1.0

On Sun, Feb 07, 2021 at 04:56:01PM +0100, Christoph Hellwig wrote:
> On Thu, Feb 04, 2021 at 09:40:23AM +0100, Christoph Hellwig wrote:
> > So one thing that has been on my mind for a while:  I'd really like
> > to kill the separate dma ops in Xen swiotlb.  If we compare xen-swiotlb
> > to swiotlb the main difference seems to be:
> > 
> >  - additional reasons to bounce I/O vs the plain DMA capable
> >  - the possibility to do a hypercall on arm/arm64
> >  - an extra translation layer before doing the phys_to_dma and vice
> >    versa
> >  - an special memory allocator
> > 
> > I wonder if inbetween a few jump labels or other no overhead enablement
> > options and possibly better use of the dma_range_map we could kill
> > off most of swiotlb-xen instead of maintaining all this code duplication?
> 
> So I looked at this a bit more.
> 
> For x86 with XENFEAT_auto_translated_physmap (how common is that?)

Juergen, Boris, please correct me if I am wrong, but doesn't
XENFEAT_auto_translated_physmap only work for PVH guests?

> pfn_to_gfn is a nop, so plain phys_to_dma/dma_to_phys do work as-is.
> 
> xen_arch_need_swiotlb always returns true for x86, and
> range_straddles_page_boundary should never be true for the
> XENFEAT_auto_translated_physmap case.

Correct. The kernel should have no clue what the real MFNs are for
its PFNs.
> 
> So as far as I can tell the mapping fast path for the
> XENFEAT_auto_translated_physmap can be trivially reused from swiotlb.
> 
> That leaves us with the next more complicated case, x86 or fully cache
> coherent arm{,64} without XENFEAT_auto_translated_physmap.  In that case
> we need to patch in a phys_to_dma/dma_to_phys that performs the MFN
> lookup, which could be done using alternatives or jump labels.
> I think if that is done right we should also be able to let that cover
> the foreign pages in is_xen_swiotlb_buffer/is_swiotlb_buffer, but
> in that worst case that would need another alternative / jump label.
> 
> For non-coherent arm{,64} we'd also need to use alternatives or jump
> labels for the cache maintenance ops, but that isn't a hard problem
> either.
> 
> 
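The jump-label idea Christoph describes can be modelled roughly as
below. This is a userspace sketch only: in the kernel one would use a
static key (static_branch_unlikely()) so that the disabled path costs a
patched NOP rather than a real branch, and every name here, including
the toy "translation", is hypothetical:

```c
#include <assert.h>

typedef unsigned long long phys_addr_t;
typedef unsigned long long dma_addr_t;

/* Stand-in for a static key: set once at boot, never changed after. */
static int xen_mfn_translation_needed;

/* Hypothetical slow path: the pfn -> mfn lookup, modelled as a toy
 * reversible transform rather than a real grant/p2m lookup. */
static dma_addr_t xen_phys_to_dma(phys_addr_t paddr)
{
    return paddr ^ 0x100000ULL;
}

static dma_addr_t phys_to_dma(phys_addr_t paddr)
{
    if (xen_mfn_translation_needed)   /* static_branch_unlikely() in-kernel */
        return xen_phys_to_dma(paddr);
    return paddr;                     /* identity for auto-translated guests */
}
```

The point is that the common (auto-translated or non-Xen) case pays
nothing for the Xen hook once the key is patched out at boot.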


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 20:43:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 20:43:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Date: Fri, 19 Feb 2021 15:43:00 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Christoph Hellwig <hch@lst.de>
Cc: Michael Ellerman <mpe@ellerman.id.au>,
        Dongli Zhang <dongli.zhang@oracle.com>,
        Claire Chang <tientzu@chromium.org>, xen-devel@lists.xenproject.org,
        linuxppc-dev@lists.ozlabs.org, iommu@lists.linux-foundation.org
Subject: Re: [PATCH 2/8] xen-swiotlb: use is_swiotlb_buffer in
 is_xen_swiotlb_buffer
Message-ID: <YDAi1FXOHVWd1DcI@Konrads-MacBook-Pro.local>
References: <20210207160934.2955931-1-hch@lst.de>
 <20210207160934.2955931-3-hch@lst.de>
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210207160934.2955931-3-hch@lst.de>
MIME-Version: 1.0

On Sun, Feb 07, 2021 at 05:09:28PM +0100, Christoph Hellwig wrote:
> Use the is_swiotlb_buffer helper to check whether a physical address
> is a swiotlb buffer.  This works because xen-swiotlb uses the
> same buffer as the main swiotlb code, and xen_io_tlb_{start,end}
> are simply those addresses after translation through phys_to_virt.
> 

Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  drivers/xen/swiotlb-xen.c | 6 ++----
>  1 file changed, 2 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> index 2b385c1b4a99cb..a4026822a889f7 100644
> --- a/drivers/xen/swiotlb-xen.c
> +++ b/drivers/xen/swiotlb-xen.c
> @@ -111,10 +111,8 @@ static int is_xen_swiotlb_buffer(struct device *dev, dma_addr_t dma_addr)
>  	 * have the same virtual address as another address
>  	 * in our domain. Therefore _only_ check address within our domain.
>  	 */
> -	if (pfn_valid(PFN_DOWN(paddr))) {
> -		return paddr >= virt_to_phys(xen_io_tlb_start) &&
> -		       paddr < virt_to_phys(xen_io_tlb_end);
> -	}
> +	if (pfn_valid(PFN_DOWN(paddr)))
> +		return is_swiotlb_buffer(paddr);
>  	return 0;
>  }
>  
> -- 
> 2.29.2
> 


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 20:56:38 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159462-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 159462: regressions - FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 19 Feb 2021 20:56:29 +0000

flight 159462 qemu-mainline real [real]
flight 159482 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/159462/
http://logs.test-lab.xenproject.org/osstest/logs/159482/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

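The three labels in the lists above ("fail REGR. vs. 152631", "fail like 152631", "never pass") encode how each failing step compares to the baseline flight. A minimal sketch of that classification, assuming a simplified pass/fail model (this is illustrative only, not osstest's actual code):

```c
#include <assert.h>
#include <string.h>

typedef enum { STEP_PASS, STEP_FAIL } result_t;

/* Hypothetical sketch: label a test step's current result against the
 * baseline flight. Only new failures ("REGR.") block a push; failures
 * already present in the baseline, or steps that have never passed,
 * are reported but non-blocking. */
static const char *classify(result_t current, result_t baseline, int ever_passed)
{
	if (current == STEP_PASS)
		return "pass";
	if (!ever_passed)
		return "never pass";              /* step has never succeeded */
	if (baseline == STEP_FAIL)
		return "fail like baseline";      /* known failure, non-blocking */
	return "fail REGR. vs. baseline";         /* new failure: blocks the push */
}

int main(void)
{
	assert(strcmp(classify(STEP_FAIL, STEP_PASS, 1), "fail REGR. vs. baseline") == 0);
	assert(strcmp(classify(STEP_FAIL, STEP_FAIL, 1), "fail like baseline") == 0);
	assert(strcmp(classify(STEP_FAIL, STEP_FAIL, 0), "never pass") == 0);
	assert(strcmp(classify(STEP_PASS, STEP_FAIL, 1), "pass") == 0);
	return 0;
}
```

This is why only the three guest-start/debian.repeat failures above block the push, while the migrate-support and saverestore-support checks do not.
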
version targeted for testing:
 qemuu                1af5629673bb5c1592d993f9fb6119a62845f576
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  183 days
Failing since        152659  2020-08-21 14:07:39 Z  182 days  352 attempts
Testing same since   159462  2021-02-18 14:30:20 Z    1 days    1 attempts

------------------------------------------------------------
416 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 113706 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 21:00:41 2021
Date: Fri, 19 Feb 2021 16:00:20 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Christoph Hellwig <hch@lst.de>
Cc: Michael Ellerman <mpe@ellerman.id.au>,
        Dongli Zhang <dongli.zhang@oracle.com>,
        Claire Chang <tientzu@chromium.org>, xen-devel@lists.xenproject.org,
        linuxppc-dev@lists.ozlabs.org, iommu@lists.linux-foundation.org
Subject: Re: [PATCH 3/8] xen-swiotlb: use io_tlb_end in
 xen_swiotlb_dma_supported
Message-ID: <YDAm5Mfd7lILBrl6@Konrads-MacBook-Pro.local>
References: <20210207160934.2955931-1-hch@lst.de>
 <20210207160934.2955931-4-hch@lst.de>
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210207160934.2955931-4-hch@lst.de>
MIME-Version: 1.0

On Sun, Feb 07, 2021 at 05:09:29PM +0100, Christoph Hellwig wrote:
> Use the existing variable that holds the physical address for
> xen_io_tlb_end to simplify xen_swiotlb_dma_supported a bit, and remove
> the otherwise unused xen_io_tlb_end variable and the xen_virt_to_bus
> helper.
> 
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  drivers/xen/swiotlb-xen.c | 10 ++--------
>  1 file changed, 2 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> index a4026822a889f7..4298f74a083985 100644
> --- a/drivers/xen/swiotlb-xen.c
> +++ b/drivers/xen/swiotlb-xen.c
> @@ -46,7 +46,7 @@
>   * API.
>   */
>  
> -static char *xen_io_tlb_start, *xen_io_tlb_end;
> +static char *xen_io_tlb_start;
>  static unsigned long xen_io_tlb_nslabs;
>  /*
>   * Quick lookup value of the bus address of the IOTLB.
> @@ -82,11 +82,6 @@ static inline phys_addr_t xen_dma_to_phys(struct device *dev,
>  	return xen_bus_to_phys(dev, dma_to_phys(dev, dma_addr));
>  }
>  
> -static inline dma_addr_t xen_virt_to_bus(struct device *dev, void *address)
> -{
> -	return xen_phys_to_dma(dev, virt_to_phys(address));
> -}
> -
>  static inline int range_straddles_page_boundary(phys_addr_t p, size_t size)
>  {
>  	unsigned long next_bfn, xen_pfn = XEN_PFN_DOWN(p);
> @@ -250,7 +245,6 @@ int __ref xen_swiotlb_init(int verbose, bool early)
>  		rc = swiotlb_late_init_with_tbl(xen_io_tlb_start, xen_io_tlb_nslabs);
>  
>  end:
> -	xen_io_tlb_end = xen_io_tlb_start + bytes;
>  	if (!rc)
>  		swiotlb_set_max_segment(PAGE_SIZE);
>  
> @@ -558,7 +552,7 @@ xen_swiotlb_sync_sg_for_device(struct device *dev, struct scatterlist *sgl,
>  static int
>  xen_swiotlb_dma_supported(struct device *hwdev, u64 mask)
>  {
> -	return xen_virt_to_bus(hwdev, xen_io_tlb_end - 1) <= mask;
> +	return xen_phys_to_dma(hwdev, io_tlb_end - 1) <= mask;
>  }
>  
>  const struct dma_map_ops xen_swiotlb_dma_ops = {
> -- 
> 2.29.2
> 


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 22:19:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 22:19:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87109.164204 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDE70-00061U-KB; Fri, 19 Feb 2021 22:18:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87109.164204; Fri, 19 Feb 2021 22:18:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDE70-00061N-Gj; Fri, 19 Feb 2021 22:18:58 +0000
Received: by outflank-mailman (input) for mailman id 87109;
 Fri, 19 Feb 2021 22:18:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDE6z-0005td-5I; Fri, 19 Feb 2021 22:18:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDE6y-00022V-VG; Fri, 19 Feb 2021 22:18:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDE6y-00082W-Ju; Fri, 19 Feb 2021 22:18:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lDE6y-00073v-JG; Fri, 19 Feb 2021 22:18:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=SU/2Mw7IA8hv2915czYhWJKW1vLtJABzLvPQu+o0tO8=; b=gaW0abtJf+qmmK3oAa8SeqdnLh
	TwttRPhsbEKFnMdAlvwIFCi+VoqWS/UO9Qa+ry9mvFBwIAm81aSi24Q++AJL1pnUPhvSKmJhu2rwN
	Odt7aqjEXiLyzKNttpXoSlagO5XwKLg7IIvKqNdNBYQE7RFkZEw1sDnaqxTeqJhbAe7w=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159463-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 159463: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-install:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-install:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-install:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-localmigrate/x10:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-linus:test-arm64-arm64-examine:reboot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:leak-check/basis(11):fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=f40ddce88593482919761f74910f42f4b84c004b
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 19 Feb 2021 22:18:56 +0000

flight 159463 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159463/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-xl          12 debian-install           fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu  fail in 159367 REGR. vs. 152332
 test-arm64-arm64-xl-xsm      12 debian-install fail in 159367 REGR. vs. 152332
 test-arm64-arm64-xl-credit2  12 debian-install fail in 159440 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-libvirt-xsm 10 host-ping-check-xen fail in 159367 pass in 159440
 test-arm64-arm64-xl-credit1   8 xen-boot         fail in 159367 pass in 159463
 test-amd64-amd64-i386-pvgrub 19 guest-localmigrate/x10 fail in 159367 pass in 159463
 test-arm64-arm64-xl-seattle   8 xen-boot         fail in 159367 pass in 159463
 test-arm64-arm64-xl-xsm   10 host-ping-check-xen fail in 159413 pass in 159367
 test-arm64-arm64-xl       10 host-ping-check-xen fail in 159413 pass in 159463
 test-arm64-arm64-xl           8 xen-boot         fail in 159440 pass in 159463
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail in 159440 pass in 159463
 test-arm64-arm64-examine      8 reboot                     fail pass in 159367
 test-arm64-arm64-xl-xsm       8 xen-boot                   fail pass in 159413
 test-arm64-arm64-libvirt-xsm  8 xen-boot                   fail pass in 159440
 test-arm64-arm64-xl-credit2   8 xen-boot                   fail pass in 159440

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-seattle  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl   11 leak-check/basis(11) fail in 159367 blocked in 152332
 test-arm64-arm64-xl-credit2 11 leak-check/basis(11) fail in 159367 blocked in 152332
 test-armhf-armhf-xl-rtds 18 guest-start/debian.repeat fail in 159413 like 152332
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11) fail in 159440 blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                f40ddce88593482919761f74910f42f4b84c004b
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  203 days
Failing since        152366  2020-08-01 20:49:34 Z  202 days  348 attempts
Testing same since   159367  2021-02-15 05:07:31 Z    4 days    5 attempts

------------------------------------------------------------
4578 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1033140 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Feb 19 23:12:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 Feb 2021 23:12:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87118.164219 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDExB-0003KE-T5; Fri, 19 Feb 2021 23:12:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87118.164219; Fri, 19 Feb 2021 23:12:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDExB-0003K7-Po; Fri, 19 Feb 2021 23:12:53 +0000
Received: by outflank-mailman (input) for mailman id 87118;
 Fri, 19 Feb 2021 23:12:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UiAT=HV=gmail.com=julien.grall.oss@srs-us1.protection.inumbo.net>)
 id 1lDExA-0003K2-LQ
 for xen-devel@lists.xenproject.org; Fri, 19 Feb 2021 23:12:52 +0000
Received: from mail-ej1-x62f.google.com (unknown [2a00:1450:4864:20::62f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8806fea4-79bf-41f2-97fe-8bc5ea13e82c;
 Fri, 19 Feb 2021 23:12:51 +0000 (UTC)
Received: by mail-ej1-x62f.google.com with SMTP id n13so16525594ejx.12
 for <xen-devel@lists.xenproject.org>; Fri, 19 Feb 2021 15:12:51 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8806fea4-79bf-41f2-97fe-8bc5ea13e82c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:from:date:message-id:subject:to:cc;
        bh=/OGhTTpzZyxgJI2DnqG489Zc4ZGYVreMMujCNnLjP0k=;
        b=fDBb6+JwnaBVMspwPgonOOXeDao65TPrPjSWlDhu9vFfLrzFb3e8XiMUWDlJCXYgev
         BR991ro84Gz3Ny7HH+u91pQASpeygB0ONNsrseULy88dS4oEXWJhttZC2vQMNPxLwJRc
         KaDjOlurvlcmfuQrEBErGmduPhGEb8yp8nCBw6EuXzsHc+u+gmckgS9+kevs1iyP2Ys8
         /zqD4xTrtUNAp8g2VwSf7yRV0uDOkL8Pk/IkMadTq9/kNsGUJsF3sEk6IzcIi17AKLn1
         Y3odwawgxCqg0KFb+ZCUX2LQegpIOFhX49iUHImR/pVPCXhNnlMY5/RhIatOvZUqr3/Q
         MoQg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:from:date:message-id:subject:to:cc;
        bh=/OGhTTpzZyxgJI2DnqG489Zc4ZGYVreMMujCNnLjP0k=;
        b=swzUrOqGQF1apaTupbAOUvzoHYbhn0VvUWF3+xkPSdSYrAcHeJ39522WuMCxplaZ4M
         +AqmfOjw0bR4JohNGmU5Qn4Sbsjkvyw7mn4VddWsb2bO+JkMyaBCe6xvTuFA46f57IrS
         mdPYGmmmgwrUbnOUQ5PXtFVodG19iWYwbbuYF9eqsErZ+B0lQSevkOnDMpy3K984wzks
         SlXmde8UwG7KMUTVlxg9KSMzcfYoIgionAuvEfQK8mQqbYsL8PEMr6I9es+WvZkYR6mA
         y113eRkRDXexyqL6g9rURUKvzJY+j8FZq8dG6KAyCLvNktzrPWL/LUlAfhGcBA9gsz74
         PYEw==
X-Gm-Message-State: AOAM532+/J7ph1RMdH4y74dzOr1DVivCPO/Io+V6qAzqJsey9am91pGI
	aVC/Ei3aRMUEr5dzbtaLqXRDhds9NsawwCca1X8=
X-Google-Smtp-Source: ABdhPJzsfKxK5E9dWZ39p0jar1FBU13tw9u7DbwX9uNfK/IwohdH4AE8ep/pD65aOg/QzlYjFAOraRl7Bjo0BgeQyz4=
X-Received: by 2002:a17:906:6096:: with SMTP id t22mr11101785ejj.34.1613776371022;
 Fri, 19 Feb 2021 15:12:51 -0800 (PST)
MIME-Version: 1.0
From: Julien Grall <julien.grall.oss@gmail.com>
Date: Fri, 19 Feb 2021 23:12:40 +0000
Message-ID: <CAJ=z9a0bi2fAcaTMwez5AyQbqP1u1P1r0hzeXb2SK2vRd8O37Q@mail.gmail.com>
Subject: ISPENDR implementation (WAS Re: [linux-linus test] 159463:
 regressions - FAIL)
To: osstest service owner <osstest-admin@xenproject.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>, 
	Andre Przywara <andre.przywara@arm.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"

Hi all,

On Fri, 19 Feb 2021 at 22:19, osstest service owner
<osstest-admin@xenproject.org> wrote:
>
> flight 159463 linux-linus real [real]
> http://logs.test-lab.xenproject.org/osstest/logs/159463/

[...]

>  test-arm64-arm64-xl-seattle                                  fail

[...]

>  test-arm64-arm64-xl-thunderx                                 pass

While looking at the logs to check whether we had fixed the Arm bug, I
noticed that Linux boots on ThunderX but not on Seattle.

From the log:

(XEN) d0v3: vGICD: unhandled word write 0x00000020000000 to ISPENDR44
Feb 18 17:01:19.426532 (XEN) traps.c:2013:d0v3 HSR=0x93820047
pc=0xffff8000104aec2c gva=0xffff80001000522c gpa=0x000000e111022c

[...]

Feb 18 17:01:19.618568 [   27.097702] Call trace:
Feb 18 17:01:19.618612 [   27.100215]  gic_retrigger+0x2c/0x38
Feb 18 17:01:19.630516 [   27.103861]  irq_startup+0x78/0x138
Feb 18 17:01:19.630575 [   27.107419]  __enable_irq+0x70/0x80
Feb 18 17:01:19.630622 [   27.110978]  enable_irq+0x50/0xa0
Feb 18 17:01:19.642499 [   27.114363]  xgbe_one_poll+0xc8/0xd8
Feb 18 17:01:19.642558 [   27.118009]  net_rx_action+0x110/0x3a8
Feb 18 17:01:19.642605 [   27.121828]  __do_softirq+0x124/0x288
Feb 18 17:01:19.654496 [   27.125560]  irq_exit+0xe0/0xf0
Feb 18 17:01:19.654555 [   27.128772]  __handle_domain_irq+0x68/0xc0
Feb 18 17:01:19.654603 [   27.132939]  gic_handle_irq+0xa8/0xe0
Feb 18 17:01:19.654647 [   27.136671]  el1_irq+0xb0/0x180
Feb 18 17:01:19.666482 [   27.139883]  arch_cpu_idle+0x18/0x28
Feb 18 17:01:19.666540 [   27.143528]  default_idle_call+0x24/0x5c
Feb 18 17:01:19.666587 [   27.147524]  do_idle+0x204/0x278
Feb 18 17:01:19.678517 [   27.150819]  cpu_startup_entry+0x24/0x68
Feb 18 17:01:19.678577 [   27.154812]  secondary_start_kernel+0x174/0x188
Feb 18 17:01:19.678625 [   27.159415] Code: f9409063 d37e6821 91080021 8b010061 (b9000022)
Feb 18 17:01:19.690480 [   27.165582] ---[ end trace a7aadb3ae629b57f ]---

It looks like Linux will now try to set an interrupt pending by
writing to ISPENDR when the interrupt is re-enabled.

I think the ISPENDR write emulation is easier to implement compared to
the other missing registers (IS{PENDR,ACTIVER}).

It should be possible to emulate it as follows:
  1) For virtual interrupts, just call vgic_inject_irq().
  2) For physical interrupts, set the interrupt pending at the HW
level. This will raise an interrupt that will in turn call
vgic_inject_irq().
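The two cases above could be sketched roughly as follows. This is a toy model for illustration only: the names (vgic_inject_irq, irq_set_pending_hw, irq_is_hw, the state arrays) are stand-ins, not the actual Xen vGIC internals.

```c
#include <stdbool.h>

#define NR_IRQS 1024

static bool irq_is_hw[NR_IRQS];     /* interrupt backed by a physical one? */
static bool virq_pending[NR_IRQS];  /* virtual pending state */
static bool hw_pending[NR_IRQS];    /* physical pending latch */

/* Case 1: purely virtual interrupt -- queue it on the vCPU directly. */
static void vgic_inject_irq(unsigned int irq)
{
    virq_pending[irq] = true;
}

/*
 * Case 2: physical interrupt -- set it pending at the HW level; the
 * interrupt that fires as a result would then call vgic_inject_irq().
 * The round trip through the GIC is modelled by the direct call below.
 */
static void irq_set_pending_hw(unsigned int irq)
{
    hw_pending[irq] = true;
    vgic_inject_irq(irq);
}

/* Emulated ISPENDR write: each set bit requests one interrupt pending. */
static void vgic_ispendr_write(unsigned int base_irq, unsigned int mask)
{
    for (unsigned int i = 0; i < 32; i++) {
        if (!(mask & (1u << i)))
            continue;
        if (irq_is_hw[base_irq + i])
            irq_set_pending_hw(base_irq + i);
        else
            vgic_inject_irq(base_irq + i);
    }
}
```

Either way the guest-visible effect is the same: the written bits end up pending in the virtual distributor, which is what the faulting ISPENDR44 write in the log expects.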

The vGIC in KVM will directly set the physical interrupt active to
avoid the round trip. But I am not sure we can do that safely in our
current vGIC without risking the guest de-activating the interrupt too
early (the virtual interrupt may already be pending/active).

Any thoughts?

Cheers,


From xen-devel-bounces@lists.xenproject.org Sat Feb 20 00:01:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 20 Feb 2021 00:01:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87127.164231 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDFi3-0000PA-7k; Sat, 20 Feb 2021 00:01:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87127.164231; Sat, 20 Feb 2021 00:01:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDFi3-0000P3-4b; Sat, 20 Feb 2021 00:01:19 +0000
Received: by outflank-mailman (input) for mailman id 87127;
 Sat, 20 Feb 2021 00:01:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4ysU=HW=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1lDFi1-0000Oy-BO
 for xen-devel@lists.xenproject.org; Sat, 20 Feb 2021 00:01:17 +0000
Received: from userp2130.oracle.com (unknown [156.151.31.86])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 80e143ac-e4d6-417b-bc83-f76d81ad6e0b;
 Sat, 20 Feb 2021 00:01:15 +0000 (UTC)
Received: from pps.filterd (userp2130.oracle.com [127.0.0.1])
 by userp2130.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 11K007Vd108102;
 Sat, 20 Feb 2021 00:00:07 GMT
Received: from aserp3030.oracle.com (aserp3030.oracle.com [141.146.126.71])
 by userp2130.oracle.com with ESMTP id 36p66rb3t9-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Sat, 20 Feb 2021 00:00:07 +0000
Received: from pps.filterd (aserp3030.oracle.com [127.0.0.1])
 by aserp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 11JNuVkv131595;
 Sat, 20 Feb 2021 00:00:06 GMT
Received: from nam02-bl2-obe.outbound.protection.outlook.com
 (mail-bl2nam02lp2055.outbound.protection.outlook.com [104.47.38.55])
 by aserp3030.oracle.com with ESMTP id 36prbsqydk-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Sat, 20 Feb 2021 00:00:06 +0000
Received: from BYAPR10MB3288.namprd10.prod.outlook.com (2603:10b6:a03:156::21)
 by BYAPR10MB3478.namprd10.prod.outlook.com (2603:10b6:a03:124::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3846.31; Sat, 20 Feb
 2021 00:00:02 +0000
Received: from BYAPR10MB3288.namprd10.prod.outlook.com
 ([fe80::f489:4e25:63e0:c721]) by BYAPR10MB3288.namprd10.prod.outlook.com
 ([fe80::f489:4e25:63e0:c721%7]) with mapi id 15.20.3868.029; Sat, 20 Feb 2021
 00:00:02 +0000
Received: from [10.74.102.113] (138.3.200.49) by
 SA9PR13CA0183.namprd13.prod.outlook.com (2603:10b6:806:26::8) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3890.12 via Frontend Transport; Fri, 19 Feb 2021 23:59:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 80e143ac-e4d6-417b-bc83-f76d81ad6e0b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : in-reply-to : content-type :
 content-transfer-encoding : mime-version; s=corp-2020-01-29;
 bh=Y/1Rbl3M9BfZ0Z0hndSyr6AhqYtgqVV5G/OyW4hBz7I=;
 b=RrPwkanvaetiVvaN/34DZC6QPNCVDceabm0+2pUq/x8j27lA/hYON9sGlsnme9KZwQm4
 9M6Zo4gWxR9/V3Yhm1WF+VqN8GL+U06yv953pVgJNEXaop5XRzeLSYm+W/uExG0yWcMf
 im9eGs7SeCgkyWsQEm6UqQZTzkfQFdRgNY/zz6qfEXJujTbqFkGKk+bUrZVvY5u3UkI3
 GIBmYEkbqp5MTFUGaVRjund5P6hc/V0lsIOIxjqw7K67C0A6KgDM+P8Cdy+dtPn+z32v
 YfU0xSIg9HDQH+buDTtlxDIHm2fzIXTOk1+Y3+5NFG6Jh7XvTvQ3Qnf8D9cP7SEYko1T mg== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=QCmOyuJrc/ZVzb9DLnW6nhkAaVU54Rqn9LIvbKIFHDF7ivmxxZ1s0hAhAyiLjKlXU6Q00XC677BD7Xb1OZRhIj45KabeDDJg4AH7A/4cmLTKhM0Z++B9J8zOUo7lfZU/jOW9kyrt9NUrczL4CXcxO+5QUWK17gdR0xb4hNU9YLPwpKijFoEjB5E1BGnUytLHZMWvMVw8Zl/wePx08Amx3DGO+KlaV/pJqnCHysY4PX+pJ302jONXXA1C6B2zedP7oJdQnjtqQ/AJKblgvFkaOtljA+9Rc3+FLNzRwfWALMI2/g5ylz/lLLMfvWirIJfRShJksgS0Ga+NztqIWUSsVQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Y/1Rbl3M9BfZ0Z0hndSyr6AhqYtgqVV5G/OyW4hBz7I=;
 b=FGv/cD/onuD6rn36VacLKq0wRegFJxsfzAFFlpZzPAuO432Qb59OPk+jVyUUcjpsOv4fsvpnKI8EvIT60VHY89dhyea7uHBdbRs1NubjCGveWCJBB69lJMoKdNgj+brGkcQEs+V8YEMBwt0Qngea2gpQGJcg8C1mAILEdo5HWVqZgOwWdxlQ7uZHhy28Q7dFMK2npDyIx3KjLsm7Hu2JB46TCw+7U+vAsOgs4pOsb+Jsm2ELh/qv0tgyPOy8Eu1xxxsSZBzkePORQR972k0en3s0Db4CR1IPsaTqktjmv11HUoERoYILX9FkOqZOlWBLoZQ3ASSkZI3A+p06ZnKSMw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Y/1Rbl3M9BfZ0Z0hndSyr6AhqYtgqVV5G/OyW4hBz7I=;
 b=rn3tuoDTZG8LxnbDl5WtBXHVX+/Kw9sUIf7++eP1+3d29N6LX8iyox2HZZimXwwtHwJdsY7Q5Q7z6xh+CPdgt3/Z1I4oShP8dU9DOTcaoL3zB3CWTeA9islOlBkWcjCZ5v5MPE/3VrtmNKkbdt/Yd8XxdEVokQyBkVYfYW3sx4Y=
Authentication-Results: amd.com; dkim=none (message not signed)
 header.d=none;amd.com; dmarc=none action=none header.from=oracle.com;
Subject: Re: [PATCH RFC v1 5/6] xen-swiotlb: convert variables to arrays
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
        Christoph Hellwig <hch@lst.de>, jgross@suse.com
Cc: Dongli Zhang <dongli.zhang@oracle.com>, dri-devel@lists.freedesktop.org,
        intel-gfx@lists.freedesktop.org, iommu@lists.linux-foundation.org,
        linux-mips@vger.kernel.org, linux-mmc@vger.kernel.org,
        linux-pci@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
        nouveau@lists.freedesktop.org, x86@kernel.org,
        xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
        adrian.hunter@intel.com, akpm@linux-foundation.org,
        benh@kernel.crashing.org, bskeggs@redhat.com, bhelgaas@google.com,
        bp@alien8.de, chris@chris-wilson.co.uk, daniel@ffwll.ch,
        airlied@linux.ie, hpa@zytor.com, mingo@kernel.org, mingo@redhat.com,
        jani.nikula@linux.intel.com, joonas.lahtinen@linux.intel.com,
        m.szyprowski@samsung.com, matthew.auld@intel.com, mpe@ellerman.id.au,
        rppt@kernel.org, paulus@samba.org, peterz@infradead.org,
        robin.murphy@arm.com, rodrigo.vivi@intel.com, sstabellini@kernel.org,
        bauerman@linux.ibm.com, tsbogend@alpha.franken.de, tglx@linutronix.de,
        ulf.hansson@linaro.org, joe.jin@oracle.com, thomas.lendacky@amd.com
References: <20210203233709.19819-1-dongli.zhang@oracle.com>
 <20210203233709.19819-6-dongli.zhang@oracle.com>
 <20210204084023.GA32328@lst.de> <20210207155601.GA25111@lst.de>
 <YDAgT2ZIdncNwNlf@Konrads-MacBook-Pro.local>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <e0baa2fa-0ca4-ef21-aeb0-319d9648e830@oracle.com>
Date: Fri, 19 Feb 2021 18:59:50 -0500
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
In-Reply-To: <YDAgT2ZIdncNwNlf@Konrads-MacBook-Pro.local>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Originating-IP: [138.3.200.49]
X-ClientProxiedBy: SA9PR13CA0183.namprd13.prod.outlook.com
 (2603:10b6:806:26::8) To BYAPR10MB3288.namprd10.prod.outlook.com
 (2603:10b6:a03:156::21)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: bb977870-6db9-4847-4298-08d8d5327889
X-MS-TrafficTypeDiagnostic: BYAPR10MB3478:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: 
	<BYAPR10MB347819496D28F67CB66D38608A839@BYAPR10MB3478.namprd10.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: bb977870-6db9-4847-4298-08d8d5327889
X-MS-Exchange-CrossTenant-AuthSource: BYAPR10MB3288.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 Feb 2021 00:00:02.3805
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Des4+MdYQUm56M8MWeOPJBdTKtHtZDWQxPVWdvf+t3tq7lw1L/455wkojdxiIP+TV14ml/NWhpHwXjt94wgy9iCuufxjtuBu/+w3PAZeb30=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR10MB3478
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9900 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 malwarescore=0 spamscore=0 mlxscore=0
 phishscore=0 adultscore=0 bulkscore=0 mlxlogscore=999 suspectscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2102190196
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9900 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 lowpriorityscore=0 suspectscore=0
 impostorscore=0 priorityscore=1501 clxscore=1011 spamscore=0 mlxscore=0
 phishscore=0 malwarescore=0 bulkscore=0 adultscore=0 mlxlogscore=999
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2102190196


On 2/19/21 3:32 PM, Konrad Rzeszutek Wilk wrote:
> On Sun, Feb 07, 2021 at 04:56:01PM +0100, Christoph Hellwig wrote:
>> On Thu, Feb 04, 2021 at 09:40:23AM +0100, Christoph Hellwig wrote:
>>> So one thing that has been on my mind for a while:  I'd really like
>>> to kill the separate dma ops in Xen swiotlb.  If we compare xen-swiotlb
>>> to swiotlb the main difference seems to be:
>>>
>>>  - additional reasons to bounce I/O vs the plain DMA capable
>>>  - the possibility to do a hypercall on arm/arm64
>>>  - an extra translation layer before doing the phys_to_dma and vice
>>>    versa
>>>  - a special memory allocator
>>>
>>> I wonder if, with a few jump labels or other no-overhead enablement
>>> options and possibly better use of the dma_range_map, we could kill
>>> off most of swiotlb-xen instead of maintaining all this code duplication?
>> So I looked at this a bit more.
>>
>> For x86 with XENFEAT_auto_translated_physmap (how common is that?)
> Juergen, Boris, please correct me if I am wrong, but XENFEAT_auto_translated_physmap
> only works for PVH guests?


That's both HVM and PVH (for dom0 it's only PVH).

-boris

>
>> pfn_to_gfn is a nop, so plain phys_to_dma/dma_to_phys do work as-is.
>>
>> xen_arch_need_swiotlb always returns true for x86, and
>> range_straddles_page_boundary should never be true for the
>> XENFEAT_auto_translated_physmap case.
> Correct. The kernel should have no clue of what the real MFNs are
> for PFNs.
>> So as far as I can tell the mapping fast path for the
>> XENFEAT_auto_translated_physmap can be trivially reused from swiotlb.
>>
>> That leaves us with the next more complicated case, x86 or fully cache
>> coherent arm{,64} without XENFEAT_auto_translated_physmap.  In that case
>> we need to patch in a phys_to_dma/dma_to_phys that performs the MFN
>> lookup, which could be done using alternatives or jump labels.
>> I think if that is done right we should also be able to let that cover
>> the foreign pages in is_xen_swiotlb_buffer/is_swiotlb_buffer, but
>> in the worst case that would need another alternative / jump label.
>>
>> For non-coherent arm{,64} we'd also need to use alternatives or jump
>> labels for the cache maintenance ops, but that isn't a hard problem
>> either.
>>
>>
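The dispatch Christoph describes above could be sketched as follows. This is a rough model under stated assumptions: a plain bool stands in for the jump label / alternative, and the lookup table is a stand-in for the real p2m, not Linux's actual data structures or helper names.

```c
#include <stdint.h>
#include <stdbool.h>

#define XEN_PAGE_SHIFT 12
#define XEN_PAGE_MASK  ((1ULL << XEN_PAGE_SHIFT) - 1)
#define MAX_PFN        256

static bool xen_auto_translated;  /* would be a static key / alternative */
static uint64_t p2m[MAX_PFN];     /* toy pfn -> mfn lookup table */

/*
 * With XENFEAT_auto_translated_physmap the pfn-to-gfn translation is a
 * nop, so phys_to_dma is the identity and the plain swiotlb fast path
 * applies.  Without it, the pfn must first be translated to the machine
 * frame (mfn) before forming the DMA address.
 */
static uint64_t xen_phys_to_dma(uint64_t phys)
{
    if (xen_auto_translated)
        return phys;              /* identity: pfn == gfn */

    uint64_t pfn = phys >> XEN_PAGE_SHIFT;
    return (p2m[pfn] << XEN_PAGE_SHIFT) | (phys & XEN_PAGE_MASK);
}
```

The point of patching the branch in (rather than testing a flag at runtime) is that the auto-translated case then costs nothing over plain swiotlb, which is what makes folding swiotlb-xen into swiotlb plausible.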


From xen-devel-bounces@lists.xenproject.org Sat Feb 20 04:27:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 20 Feb 2021 04:27:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87132.164243 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDJrj-0002qK-C3; Sat, 20 Feb 2021 04:27:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87132.164243; Sat, 20 Feb 2021 04:27:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDJrj-0002qD-7U; Sat, 20 Feb 2021 04:27:35 +0000
Received: by outflank-mailman (input) for mailman id 87132;
 Sat, 20 Feb 2021 04:27:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDJri-0002q5-D7; Sat, 20 Feb 2021 04:27:34 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDJri-00025T-3N; Sat, 20 Feb 2021 04:27:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDJrh-0000MV-NW; Sat, 20 Feb 2021 04:27:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lDJrh-0001yD-M7; Sat, 20 Feb 2021 04:27:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=zOJGBNJIVPbDL+8ltt4Lc/bQA6/UTrMHMirF79OLJFs=; b=jQYgxOk64ARjRmb8qMetPVEQK4
	MoUfdPiuc8bzWsm+NTu2fLimGV/9TwaqgpS5T0FqSH518nBMxvEsymsoYCqp3bUI2fwjQZ8Kb3prv
	ipr7cPRlgDEVfn5hHi+8sNwPjYfWuXHGrOcY1UySZl47hdtDcnGqhJuIrBlZfCg21g/s=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159475-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 159475: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-xl-vhd:leak-check/check:fail:heisenbug
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=e8185c5f01c68f7d29d23a4a91bc1be1ff2cc1ca
X-Osstest-Versions-That:
    xen=7a4133feaf42000923eb9d84badb6b171625f137
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 20 Feb 2021 04:27:33 +0000

flight 159475 xen-unstable real [real]
flight 159485 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/159475/
http://logs.test-lab.xenproject.org/osstest/logs/159485/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-vhd      20 leak-check/check    fail pass in 159485-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 159453
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 159453
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 159453
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 159453
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 159453
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 159453
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 159453
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 159453
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 159453
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 159453
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 159453
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  e8185c5f01c68f7d29d23a4a91bc1be1ff2cc1ca
baseline version:
 xen                  7a4133feaf42000923eb9d84badb6b171625f137

Last test of basis   159453  2021-02-18 03:39:33 Z    2 days
Testing same since   159475  2021-02-19 06:26:37 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/xen.git
   7a4133feaf..e8185c5f01  e8185c5f01c68f7d29d23a4a91bc1be1ff2cc1ca -> master


From xen-devel-bounces@lists.xenproject.org Sat Feb 20 06:09:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 20 Feb 2021 06:09:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87144.164270 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDLS3-0004oF-8B; Sat, 20 Feb 2021 06:09:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87144.164270; Sat, 20 Feb 2021 06:09:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDLS3-0004o8-4j; Sat, 20 Feb 2021 06:09:11 +0000
Received: by outflank-mailman (input) for mailman id 87144;
 Sat, 20 Feb 2021 06:02:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aogJ=HW=kernel.org=chenhuacai@srs-us1.protection.inumbo.net>)
 id 1lDLLU-0004Zm-2c
 for xen-devel@lists.xenproject.org; Sat, 20 Feb 2021 06:02:24 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2df76584-4671-43de-a100-eef7380d51aa;
 Sat, 20 Feb 2021 06:02:22 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id C337E64EDE
 for <xen-devel@lists.xenproject.org>; Sat, 20 Feb 2021 06:02:21 +0000 (UTC)
Received: by mail-il1-f180.google.com with SMTP id o15so6357199ilt.6
 for <xen-devel@lists.xenproject.org>; Fri, 19 Feb 2021 22:02:21 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2df76584-4671-43de-a100-eef7380d51aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1613800941;
	bh=ShcGVisZXSeMg1DM1Ug/ltlAeszRZetvwkkEgFVlNas=;
	h=References:In-Reply-To:From:Date:Subject:To:Cc:From;
	b=YSQAOBZjx77Ns4trHoOsngxkQa38xVjltXBABkfbX425KuGX5umsxwyesIcw3M9m0
	 E1CBeP+KT/7ktq3qJsK+8ztsAyUo1Y+szBDMXVBUy0LssKWNWODnxjYIHb7kwI0p/U
	 OMtv1pX83FzJa09GTTmNeee7rrVKV5Rx57K2tgRQUkoBozsoihRJj5tAd3g/guoNFq
	 6dYliAq0igKoimsq0gG8LbZNbMgmQ/dhvUaUHgPv0ypgHnSvTgIePgQyXuaLXydf1z
	 grZ/lAFZ3Vje1rb1fjKjkE0NPbbhKxxJZc4dBaGoIP5s/mLVrydINYb/OD9SVtUKPj
	 TNTNGsBRQIZHA==
X-Gm-Message-State: AOAM533jBZOZBXQpQvLhjhi4V5aeTpQohu+FDooe0R2ai7RbuCIdZj7Z
	VSBwB0wN+BYeFOPZX8STih2+TcyOssPgBGSyvHU=
X-Google-Smtp-Source: ABdhPJyCLmvlLlMWilIXNLMyuqYrduB90HoBtfugKMuSdQ5Su7DwyI7kL8H7jWSAUPeMHbmo5K5tjSLHJRZ+MPDkFwE=
X-Received: by 2002:a92:6907:: with SMTP id e7mr6713495ilc.134.1613800941002;
 Fri, 19 Feb 2021 22:02:21 -0800 (PST)
MIME-Version: 1.0
References: <20210219173847.2054123-1-philmd@redhat.com> <20210219173847.2054123-6-philmd@redhat.com>
 <31a32613-2a61-7cd2-582a-4e6d10949436@flygoat.com>
In-Reply-To: <31a32613-2a61-7cd2-582a-4e6d10949436@flygoat.com>
From: Huacai Chen <chenhuacai@kernel.org>
Date: Sat, 20 Feb 2021 14:02:08 +0800
X-Gmail-Original-Message-ID: <CAAhV-H6TJyP8diBUu4EsSWSNrVP7YxxPaMNnm2uuZJfdGY40Jg@mail.gmail.com>
Message-ID: <CAAhV-H6TJyP8diBUu4EsSWSNrVP7YxxPaMNnm2uuZJfdGY40Jg@mail.gmail.com>
Subject: Re: [PATCH v2 05/11] hw/mips: Restrict KVM to the malta & virt machines
To: Jiaxun Yang <jiaxun.yang@flygoat.com>
Cc: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@redhat.com>, 
	QEMU Developers <qemu-devel@nongnu.org>, Aurelien Jarno <aurelien@aurel32.net>, 
	Peter Maydell <peter.maydell@linaro.org>, Anthony Perard <anthony.perard@citrix.com>, 
	qemu-ppc@nongnu.org, qemu-s390x@nongnu.org, Halil Pasic <pasic@linux.ibm.com>, 
	xen-devel@lists.xenproject.org, Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, 
	David Gibson <david@gibson.dropbear.id.au>, qemu-arm@nongnu.org, 
	Stefano Stabellini <sstabellini@kernel.org>, Paolo Bonzini <pbonzini@redhat.com>, kvm <kvm@vger.kernel.org>, 
	BALATON Zoltan <balaton@eik.bme.hu>, Leif Lindholm <leif@nuviainc.com>, 
	Richard Henderson <richard.henderson@linaro.org>, Radoslaw Biernacki <rad@semihalf.com>, 
	Alistair Francis <alistair@alistair23.me>, Paul Durrant <paul@xen.org>, 
	Eduardo Habkost <ehabkost@redhat.com>, "Michael S. Tsirkin" <mst@redhat.com>, Thomas Huth <thuth@redhat.com>, 
	=?UTF-8?Q?Herv=C3=A9_Poussineau?= <hpoussin@reactos.org>, 
	Greg Kurz <groug@kaod.org>, Christian Borntraeger <borntraeger@de.ibm.com>, 
	Cornelia Huck <cohuck@redhat.com>, "Edgar E. Iglesias" <edgar.iglesias@gmail.com>, 
	David Hildenbrand <david@redhat.com>, Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>, 
	Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>, =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <f4bug@amsat.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Reviewed-by: Huacai Chen <chenhuacai@kernel.org>

On Sat, Feb 20, 2021 at 12:56 PM Jiaxun Yang <jiaxun.yang@flygoat.com> wrote:
>
> On 2021/2/20 at 1:38 AM, Philippe Mathieu-Daudé wrote:
> > Restrict KVM to the following MIPS machines:
> > - malta
> > - loongson3-virt
> >
> > Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
>
> Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
>
> > ---
> >   hw/mips/loongson3_virt.c | 5 +++++
> >   hw/mips/malta.c          | 5 +++++
> >   2 files changed, 10 insertions(+)
> >
> > diff --git a/hw/mips/loongson3_virt.c b/hw/mips/loongson3_virt.c
> > index d4a82fa5367..c3679dff043 100644
> > --- a/hw/mips/loongson3_virt.c
> > +++ b/hw/mips/loongson3_virt.c
> > @@ -612,6 +612,10 @@ static void mips_loongson3_virt_init(MachineState *machine)
> >       loongson3_virt_devices_init(machine, liointc);
> >   }
> >
> > +static const char *const valid_accels[] = {
> > +    "tcg", "kvm", NULL
> > +};
> > +
> >   static void loongson3v_machine_class_init(ObjectClass *oc, void *data)
> >   {
> >       MachineClass *mc = MACHINE_CLASS(oc);
> > @@ -622,6 +626,7 @@ static void loongson3v_machine_class_init(ObjectClass *oc, void *data)
> >       mc->max_cpus = LOONGSON_MAX_VCPUS;
> >       mc->default_ram_id = "loongson3.highram";
> >       mc->default_ram_size = 1600 * MiB;
> > +    mc->valid_accelerators = valid_accels;
> >       mc->kvm_type = mips_kvm_type;
> >       mc->minimum_page_bits = 14;
> >   }
> > diff --git a/hw/mips/malta.c b/hw/mips/malta.c
> > index 9afc0b427bf..0212048dc63 100644
> > --- a/hw/mips/malta.c
> > +++ b/hw/mips/malta.c
> > @@ -1443,6 +1443,10 @@ static const TypeInfo mips_malta_device = {
> >       .instance_init = mips_malta_instance_init,
> >   };
> >
> > +static const char *const valid_accels[] = {
> > +    "tcg", "kvm", NULL
> > +};
> > +
> >   static void mips_malta_machine_init(MachineClass *mc)
> >   {
> >       mc->desc = "MIPS Malta Core LV";
> > @@ -1456,6 +1460,7 @@ static void mips_malta_machine_init(MachineClass *mc)
> >       mc->default_cpu_type = MIPS_CPU_TYPE_NAME("24Kf");
> >   #endif
> >       mc->default_ram_id = "mips_malta.ram";
> > +    mc->valid_accelerators = valid_accels;
> >   }
> >
> >   DEFINE_MACHINE("malta", mips_malta_machine_init)
>
>


From xen-devel-bounces@lists.xenproject.org Sat Feb 20 06:18:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 20 Feb 2021 06:18:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87147.164282 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDLbD-0005ih-5l; Sat, 20 Feb 2021 06:18:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87147.164282; Sat, 20 Feb 2021 06:18:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDLbD-0005ia-2n; Sat, 20 Feb 2021 06:18:39 +0000
Received: by outflank-mailman (input) for mailman id 87147;
 Sat, 20 Feb 2021 06:18:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDLbB-0005iS-Q1; Sat, 20 Feb 2021 06:18:37 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDLbB-0004MS-Fn; Sat, 20 Feb 2021 06:18:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDLbB-0005bi-85; Sat, 20 Feb 2021 06:18:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lDLbB-0004rE-7c; Sat, 20 Feb 2021 06:18:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=0dkwKhqckU29l7RwOmpIBTp060zNr1yFtp8HccwmTUo=; b=u8/LqUXQkXR5LKWP2l0N9uGW9s
	ilP9LMMIRH6gov84U0Qo+b7+oQCy/kQ9D4YtNRVgfuQ5rCxvFegEFVNlCsTPYvBaurMoDY3wHpOuH
	yJLADw3bqWI6sl23AqZEmwHEzVM5t7Ei1HyJJlayObDvFmxHySVaMlaWaN8JR8GzHhNc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159486-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 159486: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=f30aa2ec74a79f34700915b677f46ec476d2362d
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 20 Feb 2021 06:18:37 +0000

flight 159486 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159486/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              f30aa2ec74a79f34700915b677f46ec476d2362d
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  225 days
Failing since        151818  2020-07-11 04:18:52 Z  224 days  217 attempts
Testing same since   159486  2021-02-20 04:18:50 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 43367 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Feb 20 09:09:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 20 Feb 2021 09:09:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87162.164297 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDOG9-0006Cl-UT; Sat, 20 Feb 2021 09:09:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87162.164297; Sat, 20 Feb 2021 09:09:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDOG9-0006Ce-QH; Sat, 20 Feb 2021 09:09:05 +0000
Received: by outflank-mailman (input) for mailman id 87162;
 Sat, 20 Feb 2021 09:09:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDOG7-0006CW-Rm; Sat, 20 Feb 2021 09:09:03 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDOG7-0007b4-LM; Sat, 20 Feb 2021 09:09:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDOG7-0003r1-B2; Sat, 20 Feb 2021 09:09:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lDOG7-0006Ne-AV; Sat, 20 Feb 2021 09:09:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=JFB+R84gKPd/wuM9wlMwg9+FwpzSIppHRAYqFh/5p2Y=; b=HYBxx4EdV4RNzlj7d8mr9rtQ3y
	V08KUltgCGjz3dV2GslBQphVpqxk0WUNXh0Ydc3TdWpEd1PRN7z9hjyCj/qNgtOcsnV218Z2S+3bW
	vhhmCkVmfu781IB7ZBOr/qdMfqZ6kN0UkBDgeFHTlBii0dndaCsTo9mCYJ8BtcefWmlQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159483-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 159483: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-arm64-arm64-xl-seattle:xen-boot:fail:heisenbug
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=ce42fe17ad2d2459436fdacbc207df3212a58428
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 20 Feb 2021 09:09:03 +0000

flight 159483 qemu-mainline real [real]
flight 159488 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/159483/
http://logs.test-lab.xenproject.org/osstest/logs/159488/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-seattle   8 xen-boot            fail pass in 159488-retest

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-seattle 15 migrate-support-check fail in 159488 never pass
 test-arm64-arm64-xl-seattle 16 saverestore-support-check fail in 159488 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                ce42fe17ad2d2459436fdacbc207df3212a58428
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  183 days
Failing since        152659  2020-08-21 14:07:39 Z  182 days  353 attempts
Testing same since   159483  2021-02-19 20:59:28 Z    0 days    1 attempts

------------------------------------------------------------
420 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 115180 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Feb 20 10:43:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 20 Feb 2021 10:43:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87177.164312 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDPjR-0007KO-Ky; Sat, 20 Feb 2021 10:43:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87177.164312; Sat, 20 Feb 2021 10:43:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDPjR-0007KH-Hb; Sat, 20 Feb 2021 10:43:25 +0000
Received: by outflank-mailman (input) for mailman id 87177;
 Sat, 20 Feb 2021 10:43:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDPjQ-0007K9-Vx; Sat, 20 Feb 2021 10:43:25 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDPjQ-0000iC-Lw; Sat, 20 Feb 2021 10:43:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDPjQ-0007d2-9a; Sat, 20 Feb 2021 10:43:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lDPjQ-00045j-97; Sat, 20 Feb 2021 10:43:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=MW/ZZ3rRKWj0OaMAzdOeGO5Vijjnx6lp1bmQEQPAB2c=; b=fGMjdHXizVpkRI5sy8oOIi/n+u
	/7vp9EIWczbFNwheSpqvtK+IyKwo0jKeDQFan+Gu97Xu6fy5JItB/e4nh080yJcm19j5VJYs+LFzt
	aPXUIZYsSwnSXHCYPq16ukLVk8j9uwSX6uEIZuXq0D5rMUvkR5+/lhrRU7TJfeRrV0Nk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159484-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 159484: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-xl-credit1:<job status>:broken:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-install:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-install:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-install:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-localmigrate/x10:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-examine:reboot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-seattle:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit1:host-install(5):broken:nonblocking
    linux-linus:test-arm64-arm64-xl:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:leak-check/basis(11):fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=f40ddce88593482919761f74910f42f4b84c004b
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 20 Feb 2021 10:43:24 +0000

flight 159484 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159484/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-credit1     <job status>                 broken
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      12 debian-install fail in 159367 REGR. vs. 152332
 test-arm64-arm64-xl-credit2  12 debian-install fail in 159440 REGR. vs. 152332
 test-arm64-arm64-xl-credit1 10 host-ping-check-xen fail in 159463 REGR. vs. 152332
 test-arm64-arm64-xl          12 debian-install fail in 159463 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-i386-pvgrub 19 guest-localmigrate/x10 fail in 159367 pass in 159484
 test-arm64-arm64-xl-xsm   10 host-ping-check-xen fail in 159413 pass in 159367
 test-arm64-arm64-libvirt-xsm 10 host-ping-check-xen fail in 159413 pass in 159440
 test-arm64-arm64-xl       10 host-ping-check-xen fail in 159413 pass in 159463
 test-arm64-arm64-xl-seattle   8 xen-boot         fail in 159413 pass in 159484
 test-arm64-arm64-xl-credit1   8 xen-boot         fail in 159440 pass in 159463
 test-arm64-arm64-examine      8 reboot           fail in 159440 pass in 159484
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail in 159440 pass in 159484
 test-arm64-arm64-xl-xsm       8 xen-boot                   fail pass in 159413
 test-arm64-arm64-libvirt-xsm  8 xen-boot                   fail pass in 159440
 test-arm64-arm64-xl-credit2   8 xen-boot                   fail pass in 159440
 test-arm64-arm64-xl           8 xen-boot                   fail pass in 159463
 test-arm64-arm64-xl-seattle  10 host-ping-check-xen        fail pass in 159463

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit1   5 host-install(5)       broken blocked in 152332
 test-arm64-arm64-xl   11 leak-check/basis(11) fail in 159367 blocked in 152332
 test-arm64-arm64-xl-credit2 11 leak-check/basis(11) fail in 159367 blocked in 152332
 test-armhf-armhf-xl-rtds 18 guest-start/debian.repeat fail in 159413 like 152332
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11) fail in 159440 blocked in 152332
 test-arm64-arm64-xl-seattle 11 leak-check/basis(11) fail in 159440 blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                f40ddce88593482919761f74910f42f4b84c004b
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  203 days
Failing since        152366  2020-08-01 20:49:34 Z  202 days  349 attempts
Testing same since   159367  2021-02-15 05:07:31 Z    5 days    6 attempts

------------------------------------------------------------
4578 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  broken  
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-arm64-arm64-xl-credit1 broken
broken-step test-arm64-arm64-xl-credit1 host-install(5)

Not pushing.

(No revision log; it would be 1033140 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Feb 20 12:10:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 20 Feb 2021 12:10:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87200.164326 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDR5q-0007dq-RA; Sat, 20 Feb 2021 12:10:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87200.164326; Sat, 20 Feb 2021 12:10:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDR5q-0007dj-O7; Sat, 20 Feb 2021 12:10:38 +0000
Received: by outflank-mailman (input) for mailman id 87200;
 Sat, 20 Feb 2021 12:10:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lDR5p-0007de-IJ
 for xen-devel@lists.xenproject.org; Sat, 20 Feb 2021 12:10:37 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lDR5i-00027l-BX; Sat, 20 Feb 2021 12:10:30 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lDR5i-0007sz-5L; Sat, 20 Feb 2021 12:10:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=2wKg2oPwA86fP+LMCj1ry7W6JKJh/kUVG52o2wSkk9U=; b=0zUvoAnxXwCs5/WSzAS4HbJ6Vg
	qbIdaz1Pv62BuLly8EM8mQDsehYy8ooMBulLBtaNWKSKnySha+MVjtJDbKRT2FMTgQB2hbt1xaWB0
	f2Q3UxvDqFwjBsVawxrG3vBbNEB54St5T0tZofK969NdFaJmpd4ZCmGVSg64W+5KmQmA=;
Subject: Re: [PATCH v3 2/8] xen/events: don't unmask an event channel when an
 eoi is pending
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, stable@vger.kernel.org
References: <20210219154030.10892-1-jgross@suse.com>
 <20210219154030.10892-3-jgross@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <66b1d440-2500-99fc-70a3-3d24d27f0334@xen.org>
Date: Sat, 20 Feb 2021 12:10:28 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210219154030.10892-3-jgross@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 19/02/2021 15:40, Juergen Gross wrote:
> An event channel should be kept masked when an eoi is pending for it.
> When being migrated to another cpu it might be unmasked, though.
> 
> In order to avoid this keep three different flags for each event channel
> to be able to distinguish "normal" masking/unmasking from eoi related
> masking/unmasking and temporary masking. The event channel should only
> be able to generate an interrupt if all flags are cleared.
> 
> Cc: stable@vger.kernel.org
> Fixes: 54c9de89895e0a36047 ("xen/events: add a new late EOI evtchn framework")
> Reported-by: Julien Grall <julien@xen.org>
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat Feb 20 12:12:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 20 Feb 2021 12:12:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87203.164339 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDR7T-0007la-6H; Sat, 20 Feb 2021 12:12:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87203.164339; Sat, 20 Feb 2021 12:12:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDR7T-0007lT-2Z; Sat, 20 Feb 2021 12:12:19 +0000
Received: by outflank-mailman (input) for mailman id 87203;
 Sat, 20 Feb 2021 12:12:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lDR7S-0007lO-F4
 for xen-devel@lists.xenproject.org; Sat, 20 Feb 2021 12:12:18 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lDR7Q-0002Ad-Ue; Sat, 20 Feb 2021 12:12:16 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lDR7Q-0008An-I2; Sat, 20 Feb 2021 12:12:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=bQI3Ccup6yTIKmHa56nvcajDRXzkAB6HThNzesI2iXU=; b=lB2ygTAnJokH5Xh2dCOi/z2iZo
	gJEn36BjiNPpV/WPhRsJ1hbLzvpOxpIgdFONvx94OiQG5YIh8DqilRSb/6q3f2OzzHcyOycYs8AXm
	W6OQIhbdXauznCSyDbuErPjDa6qK7iSxYdS3yTFhyu9OSKi3zQFE85Qc9FEG5lvUmFWw=;
Subject: Re: [PATCH v3 3/8] xen/events: avoid handling the same event on two
 cpus at the same time
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, stable@vger.kernel.org
References: <20210219154030.10892-1-jgross@suse.com>
 <20210219154030.10892-4-jgross@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <495a12e4-d54b-0841-07b2-5b0a0cea5d10@xen.org>
Date: Sat, 20 Feb 2021 12:12:14 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210219154030.10892-4-jgross@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 19/02/2021 15:40, Juergen Gross wrote:
> When changing the cpu affinity of an event it can happen today that
> (with some unlucky timing) the same event will be handled on the old
> and the new cpu at the same time.
> 
> Avoid that by adding an "event active" flag to the per-event data and
> call the handler only if this flag isn't set.
> 
> Cc: stable@vger.kernel.org
> Reported-by: Julien Grall <julien@xen.org>
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat Feb 20 14:04:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 20 Feb 2021 14:04:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87232.164350 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDSrr-0001be-It; Sat, 20 Feb 2021 14:04:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87232.164350; Sat, 20 Feb 2021 14:04:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDSrr-0001bX-Fp; Sat, 20 Feb 2021 14:04:19 +0000
Received: by outflank-mailman (input) for mailman id 87232;
 Sat, 20 Feb 2021 14:04:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lDSrp-0001bS-KQ
 for xen-devel@lists.xenproject.org; Sat, 20 Feb 2021 14:04:17 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lDSrn-00041Q-Uw; Sat, 20 Feb 2021 14:04:15 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lDSrn-0007Yh-LC; Sat, 20 Feb 2021 14:04:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From;
	bh=DHsWNJWyNVapDyEP7btR9oRnZl4HrHvqZ66xkvNY/So=; b=VP2ziViRgYOdBZv4XsM2EVyrth
	T+DJJNVItILG+CPonIja75oSyqltE4waI00KAbS1rbR4i9tio8+48pT+A51vBafJ62bdCyqRe3brW
	qpvuIsbv1GqHlEQGsw1DFxD1acF4ePqg7i5HdMdRJwkK+NXD2/eTPb5kerLRdA/vAfko=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	iwj@xenproject.org,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH for-4.15] xen/vgic: Implement write to ISPENDR in vGICv{2, 3}
Date: Sat, 20 Feb 2021 14:04:12 +0000
Message-Id: <20210220140412.31610-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1

From: Julien Grall <jgrall@amazon.com>

Currently, Xen will send a data abort to a guest trying to write to the
ISPENDR.

Unfortunately, recent versions of Linux (at least 5.9+) will write
to the register when an interrupt needs to be re-triggered
(see the irq_retrigger callback). This can happen when a driver (such as
the xgbe network driver on AMD Seattle) re-enables an interrupt:

(XEN) d0v0: vGICD: unhandled word write 0x00000004000000 to ISPENDR44
[...]
[   25.635837] Unhandled fault at 0xffff80001000522c
[...]
[   25.818716]  gic_retrigger+0x2c/0x38
[   25.822361]  irq_startup+0x78/0x138
[   25.825920]  __enable_irq+0x70/0x80
[   25.829478]  enable_irq+0x50/0xa0
[   25.832864]  xgbe_one_poll+0xc8/0xd8
[   25.836509]  net_rx_action+0x110/0x3a8
[   25.840328]  __do_softirq+0x124/0x288
[   25.844061]  irq_exit+0xe0/0xf0
[   25.847272]  __handle_domain_irq+0x68/0xc0
[   25.851442]  gic_handle_irq+0xa8/0xe0
[   25.855171]  el1_irq+0xb0/0x180
[   25.858383]  arch_cpu_idle+0x18/0x28
[   25.862028]  default_idle_call+0x24/0x5c
[   25.866021]  do_idle+0x204/0x278
[   25.869319]  cpu_startup_entry+0x24/0x68
[   25.873313]  rest_init+0xd4/0xe4
[   25.876611]  arch_call_rest_init+0x10/0x1c
[   25.880777]  start_kernel+0x5b8/0x5ec

As a consequence, the OS may become unusable.

Implementing the write part of ISPENDR is fairly easy. For virtual
interrupts, we only need to inject the interrupt again.

For physical interrupts, we need to be more careful, as the deactivation
of the virtual interrupt will be propagated to the physical distributor.
For simplicity, the physical interrupt is set pending, so the
workflow does not differ from a "real" interrupt.

Longer term, we could possibly activate the physical interrupt directly
and avoid taking an exception to inject the interrupt into the domain.
(This is the approach taken by the new vGIC based on KVM.)

Signed-off-by: Julien Grall <jgrall@amazon.com>

---

Note that this doesn't touch the read path of I{S,C}PENDR nor the write
path of ICPENDR, because they are more complex to implement.

For physical interrupts, I didn't implement the same solution as KVM,
because I couldn't convince myself it could be done race-free.

This was tested using the IRQ debugfs interface
(CONFIG_GENERIC_IRQ_DEBUGFS=y), which allows retriggering an interrupt:

42sh> echo trigger > /sys/kernel/debug/irq/irqs/<irq>

This patch is a candidate for 4.15 and also for backporting to older
trees. Without it, recent Linux versions may not boot on Xen on some
platforms (such as the AMD Seattle used in OssTest).

The patch is self-contained to implementing a single set of registers.
So this would not introduce any risk on platforms where OSes don't use
those registers.

For the other setups (e.g. AMD Seattle + Linux 5.9+), it cannot be worse
than today.

Therefore, I would consider the risk limited.
---
 xen/arch/arm/vgic-v2.c     | 10 ++++----
 xen/arch/arm/vgic-v3.c     | 18 ++++++---------
 xen/arch/arm/vgic.c        | 47 ++++++++++++++++++++++++++++++++++++++
 xen/include/asm-arm/vgic.h |  2 ++
 4 files changed, 62 insertions(+), 15 deletions(-)

diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
index 64b141fea586..b2da886adc18 100644
--- a/xen/arch/arm/vgic-v2.c
+++ b/xen/arch/arm/vgic-v2.c
@@ -472,10 +472,12 @@ static int vgic_v2_distr_mmio_write(struct vcpu *v, mmio_info_t *info,
 
     case VRANGE32(GICD_ISPENDR, GICD_ISPENDRN):
         if ( dabt.size != DABT_WORD ) goto bad_width;
-        printk(XENLOG_G_ERR
-               "%pv: vGICD: unhandled word write %#"PRIregister" to ISPENDR%d\n",
-               v, r, gicd_reg - GICD_ISPENDR);
-        return 0;
+        rank = vgic_rank_offset(v, 1, gicd_reg - GICD_ISPENDR, DABT_WORD);
+        if ( rank == NULL ) goto write_ignore;
+
+        vgic_set_irqs_pending(v, r, rank->index);
+
+        return 1;
 
     case VRANGE32(GICD_ICPENDR, GICD_ICPENDRN):
         if ( dabt.size != DABT_WORD ) goto bad_width;
diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
index fd8cfc156d0c..613f37abab5e 100644
--- a/xen/arch/arm/vgic-v3.c
+++ b/xen/arch/arm/vgic-v3.c
@@ -808,10 +808,12 @@ static int __vgic_v3_distr_common_mmio_write(const char *name, struct vcpu *v,
 
     case VRANGE32(GICD_ISPENDR, GICD_ISPENDRN):
         if ( dabt.size != DABT_WORD ) goto bad_width;
-        printk(XENLOG_G_ERR
-               "%pv: %s: unhandled word write %#"PRIregister" to ISPENDR%d\n",
-               v, name, r, reg - GICD_ISPENDR);
-        return 0;
+        rank = vgic_rank_offset(v, 1, reg - GICD_ISPENDR, DABT_WORD);
+        if ( rank == NULL ) goto write_ignore;
+
+        vgic_set_irqs_pending(v, r, rank->index);
+
+        return 1;
 
     case VRANGE32(GICD_ICPENDR, GICD_ICPENDRN):
         if ( dabt.size != DABT_WORD ) goto bad_width;
@@ -975,6 +977,7 @@ static int vgic_v3_rdistr_sgi_mmio_write(struct vcpu *v, mmio_info_t *info,
     case VREG32(GICR_ICACTIVER0):
     case VREG32(GICR_ICFGR1):
     case VRANGE32(GICR_IPRIORITYR0, GICR_IPRIORITYR7):
+    case VREG32(GICR_ISPENDR0):
          /*
           * Above registers offset are common with GICD.
           * So handle common with GICD handling
@@ -982,13 +985,6 @@ static int vgic_v3_rdistr_sgi_mmio_write(struct vcpu *v, mmio_info_t *info,
         return __vgic_v3_distr_common_mmio_write("vGICR: SGI", v,
                                                  info, gicr_reg, r);
 
-    case VREG32(GICR_ISPENDR0):
-        if ( dabt.size != DABT_WORD ) goto bad_width;
-        printk(XENLOG_G_ERR
-               "%pv: vGICR: SGI: unhandled word write %#"PRIregister" to ISPENDR0\n",
-               v, r);
-        return 0;
-
     case VREG32(GICR_ICPENDR0):
         if ( dabt.size != DABT_WORD ) goto bad_width;
         printk(XENLOG_G_ERR
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 82f524a35c9e..8f9400a51960 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -423,6 +423,53 @@ void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
     }
 }
 
+void vgic_set_irqs_pending(struct vcpu *v, uint32_t r, unsigned int rank)
+{
+    const unsigned long mask = r;
+    unsigned int i;
+    /* The first rank is always per-vCPU */
+    bool private = rank == 0;
+
+    /* LPIs will never be set pending via this function */
+    ASSERT(!is_lpi(32 * rank + 31));
+
+    for_each_set_bit( i, &mask, 32 )
+    {
+        unsigned int irq = i + 32 * rank;
+
+        if ( !private )
+        {
+            struct pending_irq *p = spi_to_pending(v->domain, irq);
+
+            /*
+             * When the domain sets the pending state for a HW interrupt on
+             * the virtual distributor, we set the pending state on the
+             * physical distributor.
+             *
+             * XXX: Investigate whether we would be able to set the
+             * physical interrupt active and save an interruption. (This
+             * is what the new vGIC does).
+             */
+            if ( p->desc != NULL )
+            {
+                unsigned long flags;
+
+                spin_lock_irqsave(&p->desc->lock, flags);
+                gic_set_pending_state(p->desc, true);
+                spin_unlock_irqrestore(&p->desc->lock, flags);
+                continue;
+            }
+        }
+
+        /*
+         * If the interrupt is per-vCPU, then we want to inject the vIRQ
+         * to v, otherwise we let vgic_inject_irq() figure out the
+         * correct vCPU.
+         */
+        vgic_inject_irq(v->domain, private ? v : NULL, irq, true);
+    }
+}
+
 bool vgic_to_sgi(struct vcpu *v, register_t sgir, enum gic_sgi_mode irqmode,
                  int virq, const struct sgi_target *target)
 {
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index ce1e3c4bbdac..62c2ae538db2 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -288,6 +288,8 @@ extern struct vgic_irq_rank *vgic_rank_offset(struct vcpu *v, int b, int n, int
 extern struct vgic_irq_rank *vgic_rank_irq(struct vcpu *v, unsigned int irq);
 extern void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n);
 extern void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n);
+extern void vgic_set_irqs_pending(struct vcpu *v, uint32_t r,
+                                  unsigned int rank);
 extern void register_vgic_ops(struct domain *d, const struct vgic_ops *ops);
 int vgic_v2_init(struct domain *d, int *mmio_count);
 int vgic_v3_init(struct domain *d, int *mmio_count);
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Sat Feb 20 14:07:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 20 Feb 2021 14:07:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87235.164362 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDSut-0001kt-2a; Sat, 20 Feb 2021 14:07:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87235.164362; Sat, 20 Feb 2021 14:07:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDSus-0001km-Uy; Sat, 20 Feb 2021 14:07:26 +0000
Received: by outflank-mailman (input) for mailman id 87235;
 Sat, 20 Feb 2021 14:07:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oPX6=HW=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1lDSur-0001kh-SL
 for xen-devel@lists.xenproject.org; Sat, 20 Feb 2021 14:07:25 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c0737046-ddbf-4640-bb69-b00800279e07;
 Sat, 20 Feb 2021 14:07:25 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 47CD3ACE5;
 Sat, 20 Feb 2021 14:07:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c0737046-ddbf-4640-bb69-b00800279e07
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613830044; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=MMZfLI3uSSTyMFq/1E6/fns1dfniD+o5WsiH6Zsxu8c=;
	b=UeBW8f7u1ZVbP5sTmbC8YGvgPh1TDFjTJ78ZLbynUweGo/HaZc/CW9Ccdr+pvsXp61a445
	gbprASPvOG/kB36Y0BlXVAvtYpEkBwh89v1MIgAJ0Tgzjm+3htizYvADJyUW1S4RnEvMc0
	h1SBmObTQa7Ap2G+lRjKn+lRKC7V1TY=
Message-ID: <7d4d108c183f9eea160c14547d59c1dc4b22158c.camel@suse.com>
Subject: Re: [ANNOUNCE] Xen 4.15 - hard codefreeze today
From: Dario Faggioli <dfaggioli@suse.com>
To: Ian Jackson <iwj@xenproject.org>, committers@xenproject.org
Cc: xen-devel@lists.xenproject.org, community.manager@xenproject.org
Date: Sat, 20 Feb 2021 15:07:23 +0100
In-Reply-To: <24623.60318.719046.601197@mariner.uk.xensource.com>
References: <24612.676.586372.372903@mariner.uk.xensource.com>
	 <24623.60318.719046.601197@mariner.uk.xensource.com>
Content-Type: multipart/signed; micalg="pgp-sha256";
	protocol="application/pgp-signature"; boundary="=-bxxQY4/QTmO061hEvqbG"
User-Agent: Evolution 3.38.4 (by Flathub.org) 
MIME-Version: 1.0


--=-bxxQY4/QTmO061hEvqbG
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, 2021-02-19 at 16:47 +0000, Ian Jackson wrote:
>
> OPEN ISSUES AND BLOCKERS
> ------------------------
>
> F. BUG: credit=sched2 machine hang when using DRAKVUF
>
> Information from
>   Dario Faggioli <dfaggioli@suse.com>
> References
>   https://lists.xen.org/archives/html/xen-devel/2020-05/msg01985.html
>   https://lists.xenproject.org/archives/html/xen-devel/2020-10/msg01561.html
>   https://bugzilla.opensuse.org/show_bug.cgi?id=1179246
>
> Quoting Dario:
> > Manifests only with certain combinations of hardware and workload.
> > I'm not reproducing, but there are multiple reports of it (see
> > above). I'm investigating and trying to come up at least with
> > debug patches that one of the reporters should be able and willing
> > to test.
>
> Dario is working on this.  Last update 29.1.21?
>
Yep. I have a few more insights about it, but am still not sure about a
few things. I'll try to give a more detailed update on Mon or Tue.

> G. Null scheduler and vwfi native problem
>
> Information from
>   Dario Faggioli <dfaggioli@suse.com>
>
> References
>   https://lists.xenproject.org/archives/html/xen-devel/2021-01/msg01634.html
>
> Quoting Dario:
> > RCU issues, but manifests due to scheduler behavior (especially the
> > NULL scheduler, especially on ARM).
> >
> > Patches that should solve the issue for ARM posted already. They
> > will need to be slightly adjusted to cover x86 as well.
>
> As of last update from Dario 29.1.21:
> waiting for test report from submitter.
>
The report recently arrived and was positive: the issue, on ARM, is
solved by the patches sent to him. I've done the x86 bits of those
patches, but am still debugging an error I hit with them applied.

It probably makes sense for me to properly submit the two patches that
fix the problem on ARM right away (the x86 part would be in its own
patch anyway).

Thanks and Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)

--=-bxxQY4/QTmO061hEvqbG
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAmAxF5sACgkQFkJ4iaW4
c+50hQ/9F7mlWOPo8CpEh24+Jtl6HZAvYL1eLb/WmyGFsJ5QrMYIhd2PC2tMQEvF
G+jRMUbIYvd3rq+lMZzwBAtNZM3AIQCMQGS4H7EUnPKy7Zu0zeERK9PEVehCzhC5
rJr4AbPK2AFSDvlwjFBMtvioIZXFNsCgbKwpvv1uYJpAnRi1QV5c3yIHNvDVZZOS
bcVN07Z/2bGpW/hdCzvLn5JSc0vinN2zvhTq38NWo5QB3Z5Mh2/nyQ3tF0C4YKnB
IV7sqlluWzjAgFetSaavx5yq+zscslh0byIm2Vd0rifiNcECe57aT2N0muOqYvnZ
2UQEjA/DSNU7r+d013XEDppIXnTkN7EE3b6qMQJjfunCSpuupDKz1u6DOaW89NAZ
IZ4d1ByAjmzZS8lbnnE3mxwLXhVtQnBQ/SfdWEpuqsV1SfF9bMkOgyKRhLYsDo7L
XPRUZ8WyOnYJhWFJewWBXTOvc8EGYZ/3KrEF9MmFTINaa67n8AWACnFQSInJMHDH
f4kz/3VKCyVMhOeaFiR8goCNr4eBgCHs9vXUjJwCEqaD0/hPSMxP+WKY6ARH7d0X
NJPGJncmXz137AL0a6OpQshpZYixmh42l8XIJ2RnZoyit39DpDgNzu+jdpSUeDKN
SWWwnZzpnUJ8fB1OQN7Dt7ViKt0MSiTqxDPugdwb2l3Juh6nrsE=
=noxg
-----END PGP SIGNATURE-----

--=-bxxQY4/QTmO061hEvqbG--



From xen-devel-bounces@lists.xenproject.org Sat Feb 20 14:08:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 20 Feb 2021 14:08:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87237.164375 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDSvi-0001qx-C6; Sat, 20 Feb 2021 14:08:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87237.164375; Sat, 20 Feb 2021 14:08:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDSvi-0001qq-8A; Sat, 20 Feb 2021 14:08:18 +0000
Received: by outflank-mailman (input) for mailman id 87237;
 Sat, 20 Feb 2021 14:08:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lDSvg-0001qj-RE
 for xen-devel@lists.xenproject.org; Sat, 20 Feb 2021 14:08:16 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lDSvg-000463-KT; Sat, 20 Feb 2021 14:08:16 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lDSvg-0007sM-EA; Sat, 20 Feb 2021 14:08:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=Y/L+TsqwEzk77RBuVx+nbuFwNE4gXiVh8QgeJ6Fe0K0=; b=hYeIcxlXtrhXIIrztk8doRIWbc
	34AnuqoO8eaj7q5VKv7OrnJlwsQrczuCnZ9C6BatZCBdy4egR2NEhaebrA3n6u6ili457McpiEaEM
	rM+02y7Uex/hQxeP3nEibVtGeAoJpgzGgE90mcqamsVhV0iJTulA3Ov8mgL08PFvg4Mw=;
Subject: Re: ISPENDR implementation (WAS Re: [linux-linus test] 159463:
 regressions - FAIL)
To: Julien Grall <julien.grall.oss@gmail.com>,
 osstest service owner <osstest-admin@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Andre Przywara <andre.przywara@arm.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>
References: <CAJ=z9a0bi2fAcaTMwez5AyQbqP1u1P1r0hzeXb2SK2vRd8O37Q@mail.gmail.com>
From: Julien Grall <julien@xen.org>
Message-ID: <b0e78be6-f914-fa0b-dc8a-23b21f434afa@xen.org>
Date: Sat, 20 Feb 2021 14:08:14 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <CAJ=z9a0bi2fAcaTMwez5AyQbqP1u1P1r0hzeXb2SK2vRd8O37Q@mail.gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 19/02/2021 23:12, Julien Grall wrote:
> Hi all,
> 
> On Fri, 19 Feb 2021 at 22:19, osstest service owner
> <osstest-admin@xenproject.org> wrote:
>>
>> flight 159463 linux-linus real [real]
>> http://logs.test-lab.xenproject.org/osstest/logs/159463/
> 
> [...]
> 
>>   test-arm64-arm64-xl-seattle                                  fail
> 
> [...]
> 
>>   test-arm64-arm64-xl-thunderx                                 pass
> 
> While looking at the log to check whether we fixed the Arm bug, I
> noticed that Linux will boot on Thunder-X but not Seattle.
> 
>  From the log:
> 
> (XEN) d0v3: vGICD: unhandled word write 0x00000020000000 to ISPENDR44
> Feb 18 17:01:19.426532 (XEN) traps.c:2013:d0v3 HSR=0x93820047
> pc=0xffff8000104aec2c gva=0xffff80001000522c gpa=0x000000e111022c
> 
> [...]
> 
> Feb 18 17:01:19.618568 [   27.097702] Call trace:
> 
> Feb 18 17:01:19.618612 [   27.100215]  gic_retrigger+0x2c/0x38
> 
> Feb 18 17:01:19.630516 [   27.103861]  irq_startup+0x78/0x138
> 
> Feb 18 17:01:19.630575 [   27.107419]  __enable_irq+0x70/0x80
> 
> Feb 18 17:01:19.630622 [   27.110978]  enable_irq+0x50/0xa0
> 
> Feb 18 17:01:19.642499 [   27.114363]  xgbe_one_poll+0xc8/0xd8
> 
> Feb 18 17:01:19.642558 [   27.118009]  net_rx_action+0x110/0x3a8
> 
> Feb 18 17:01:19.642605 [   27.121828]  __do_softirq+0x124/0x288
> 
> Feb 18 17:01:19.654496 [   27.125560]  irq_exit+0xe0/0xf0
> 
> Feb 18 17:01:19.654555 [   27.128772]  __handle_domain_irq+0x68/0xc0
> 
> Feb 18 17:01:19.654603 [   27.132939]  gic_handle_irq+0xa8/0xe0
> 
> Feb 18 17:01:19.654647 [   27.136671]  el1_irq+0xb0/0x180
> 
> Feb 18 17:01:19.666482 [   27.139883]  arch_cpu_idle+0x18/0x28
> 
> Feb 18 17:01:19.666540 [   27.143528]  default_idle_call+0x24/0x5c
> 
> Feb 18 17:01:19.666587 [   27.147524]  do_idle+0x204/0x278
> 
> Feb 18 17:01:19.678517 [   27.150819]  cpu_startup_entry+0x24/0x68
> 
> Feb 18 17:01:19.678577 [   27.154812]  secondary_start_kernel+0x174/0x188
> 
> Feb 18 17:01:19.678625 [   27.159415] Code: f9409063 d37e6821 91080021
> 8b010061 (b9000022)
> 
> Feb 18 17:01:19.690480 [   27.165582] ---[ end trace a7aadb3ae629b57f ]---
> 
> It looks like Linux will now try to set the interrupt pending by
> writing to ISPENDR when the interrupt is re-enabled.
> 
> I think the ISPENDR write emulation is easier to implement compared to
> the other missing IS{PENDR, ACTIVER} registers.
> 
> It should be possible to emulate as follows:
>    1) For virtual interrupts, just call vgic_inject_irq()
>    2) For physical interrupts, set pending at the HW level. This will
> raise an interrupt that will call vgic_inject_irq().
> 
> The vGIC in KVM will directly set the physical interrupt active to
> avoid the round trip. But I am not sure we can do it safely in our
> current vGIC without risking the guest de-activating the interrupt
> too early (the virtual interrupt may already be pending/active).
> 
> Any thoughts?

I have posted a patch [1]. This should help the discussion about the
approach taken.

Cheers,

[1] 
https://lore.kernel.org/xen-devel/20210220140412.31610-1-julien@xen.org/T/#u

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat Feb 20 15:27:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 20 Feb 2021 15:27:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87241.164387 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDUAS-00019W-84; Sat, 20 Feb 2021 15:27:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87241.164387; Sat, 20 Feb 2021 15:27:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDUAS-00019P-4r; Sat, 20 Feb 2021 15:27:36 +0000
Received: by outflank-mailman (input) for mailman id 87241;
 Sat, 20 Feb 2021 15:27:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDUAQ-00019H-QH; Sat, 20 Feb 2021 15:27:34 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDUAQ-0005LP-Ii; Sat, 20 Feb 2021 15:27:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDUAQ-0004km-9K; Sat, 20 Feb 2021 15:27:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lDUAQ-0002wa-8R; Sat, 20 Feb 2021 15:27:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Yy1fhnozDP0A7rIAs4AjETMHPIM7MR2vzN8swdMnRKg=; b=s+bq8r4qRLJ2yOiefbJtG7hZXr
	uevnnaZin44vcXzu79YTT9pWUbye66xwISplhbU6EduKu9yOK35BaoV+8ENOr1MExwBZwxawexH5d
	NhzmhQO4P1/iNLV/VV2aU4sosRpwDW8UgeW99tOWSiYum0AUyr0efYtaRToGphpFz8/s=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159487-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 159487: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-xtf-amd64-amd64-5:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    xen-unstable:test-xtf-amd64-amd64-1:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-xtf-amd64-amd64-2:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-amd64-i386-libvirt-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-intel:xen-boot:fail:regression
    xen-unstable:test-xtf-amd64-amd64-3:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-amd64-coresched-i386-xl:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-raw:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    xen-unstable:test-xtf-amd64-amd64-4:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-amd64-i386-xl-shadow:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-libvirt-pair:xen-boot/src_host:fail:regression
    xen-unstable:test-amd64-i386-libvirt-pair:xen-boot/dst_host:fail:regression
    xen-unstable:test-amd64-i386-migrupgrade:xen-boot/dst_host:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-examine:reboot:fail:regression
    xen-unstable:test-amd64-i386-livepatch:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-libvirt:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-freebsd10-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-freebsd10-i386:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-pvshim:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-pair:xen-boot/src_host:fail:regression
    xen-unstable:test-amd64-i386-pair:xen-boot/dst_host:fail:regression
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:leak-check/check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=87a067fd8f4d4f7c6be02c3d38145115ac542017
X-Osstest-Versions-That:
    xen=e8185c5f01c68f7d29d23a4a91bc1be1ff2cc1ca
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 20 Feb 2021 15:27:34 +0000

flight 159487 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159487/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-xtf-amd64-amd64-5      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-amd64-i386-xl-qemut-ws16-amd64  8 xen-boot          fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-debianhvm-amd64  8 xen-boot     fail REGR. vs. 159475
 test-xtf-amd64-amd64-1      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-xtf-amd64-amd64-2      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-amd64-i386-libvirt-xsm   8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-qemut-rhel6hvm-intel  8 xen-boot         fail REGR. vs. 159475
 test-xtf-amd64-amd64-3      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-amd64-coresched-i386-xl  8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-xl-raw        8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-xl-qemut-debianhvm-amd64  8 xen-boot     fail REGR. vs. 159475
 test-xtf-amd64-amd64-4      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-amd64-i386-xl-shadow     8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-qemut-rhel6hvm-amd  8 xen-boot           fail REGR. vs. 159475
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  8 xen-boot  fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-ovmf-amd64  8 xen-boot          fail REGR. vs. 159475
 test-amd64-i386-libvirt-pair 12 xen-boot/src_host        fail REGR. vs. 159475
 test-amd64-i386-libvirt-pair 13 xen-boot/dst_host        fail REGR. vs. 159475
 test-amd64-i386-migrupgrade  13 xen-boot/dst_host        fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 159475
 test-amd64-i386-xl-xsm        8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 159475
 test-amd64-i386-xl            8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-qemuu-rhel6hvm-amd  8 xen-boot           fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  8 xen-boot  fail REGR. vs. 159475
 test-amd64-i386-examine       8 reboot                   fail REGR. vs. 159475
 test-amd64-i386-livepatch     8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-xl-qemut-win7-amd64  8 xen-boot          fail REGR. vs. 159475
 test-amd64-i386-qemuu-rhel6hvm-intel  8 xen-boot         fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 159475
 test-amd64-i386-libvirt       8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-freebsd10-amd64  8 xen-boot              fail REGR. vs. 159475
 test-amd64-i386-freebsd10-i386  8 xen-boot               fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-win7-amd64  8 xen-boot          fail REGR. vs. 159475
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 159475
 test-amd64-i386-xl-pvshim     8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-ws16-amd64  8 xen-boot          fail REGR. vs. 159475
 test-amd64-i386-pair         12 xen-boot/src_host        fail REGR. vs. 159475
 test-amd64-i386-pair         13 xen-boot/dst_host        fail REGR. vs. 159475

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 159475
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 159475
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 159475
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 159475
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 159475
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 159475
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 159475
 test-armhf-armhf-xl-vhd      20 leak-check/check             fail  like 159475
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  87a067fd8f4d4f7c6be02c3d38145115ac542017
baseline version:
 xen                  e8185c5f01c68f7d29d23a4a91bc1be1ff2cc1ca

Last test of basis   159475  2021-02-19 06:26:37 Z    1 days
Testing same since   159487  2021-02-20 04:29:29 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Norbert Manthey <nmanthey@amazon.de>
  Rahul Singh <rahul.singh@arm.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    fail    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 305 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Feb 20 17:54:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 20 Feb 2021 17:54:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87299.164408 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDWSQ-0007Yo-F2; Sat, 20 Feb 2021 17:54:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87299.164408; Sat, 20 Feb 2021 17:54:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDWSQ-0007Yh-Ag; Sat, 20 Feb 2021 17:54:18 +0000
Received: by outflank-mailman (input) for mailman id 87299;
 Sat, 20 Feb 2021 17:54:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lDWSO-0007Yc-VP
 for xen-devel@lists.xenproject.org; Sat, 20 Feb 2021 17:54:17 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lDWSO-0008Bj-Nm; Sat, 20 Feb 2021 17:54:16 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lDWSO-0004eO-Ct; Sat, 20 Feb 2021 17:54:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From;
	bh=RJigxHaZepnMNBSs3OPcIQq+qRZ5q9ZXmLPqClu0lEs=; b=Hfvu0tiQOQjiWWryFJ6AV6vJs3
	ywpi4Eu8T/Nbtp8aqVNwRHDxNhk50HhnEIj2nw2wBCaXh9kuMT3miO2W+WjbNflfXoGwkaAgl0j52
	xWy5NmvSxgdZ1O8vHIuj0iJXnobj/23q+rzJ2H1FEPZsVrOAP/sWROPeltwAf/UxWLJ0=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH for-next] xen/arm: mm: flush_page_to_ram() only need to clean to PoC
Date: Sat, 20 Feb 2021 17:54:13 +0000
Message-Id: <20210220175413.14640-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1

From: Julien Grall <jgrall@amazon.com>

At the moment, flush_page_to_ram() both cleans and invalidates the page
to PoC. However, as the page is part of the direct map, the cache line
can be speculatively fetched and pulled back into the cache right after.

So it is pointless to try to invalidate the line in the data cache.
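
The before/after behaviour can be pictured with a small model (not Xen
code): the two cache helpers named in the diff are stubbed out, and a
hypothetical recording variable tracks which maintenance operation a
flush_page_to_ram() implementation would issue.

```c
#include <assert.h>

/* Illustrative model only. The helper names mirror the diff; the
 * last_op recording and the *_model() wrapper are hypothetical. */

enum cache_op { OP_NONE, OP_CLEAN, OP_CLEAN_AND_INVALIDATE };
static enum cache_op last_op = OP_NONE;

static void clean_dcache_va_range(void *va, unsigned long size)
{
    (void)va; (void)size;
    last_op = OP_CLEAN;                   /* clean to PoC only */
}

static void clean_and_invalidate_dcache_va_range(void *va, unsigned long size)
{
    (void)va; (void)size;
    last_op = OP_CLEAN_AND_INVALIDATE;    /* the pre-patch behaviour */
}

/* After the patch: cleaning to PoC is enough. Invalidating a direct-map
 * line buys nothing, since speculation can refetch it immediately. */
static void flush_page_to_ram_model(void *page, unsigned long page_size)
{
    clean_dcache_va_range(page, page_size);
}
```

The clean alone still guarantees the data has reached the Point of
Coherency before a non-coherent observer (e.g. a DMA engine) reads it;
only the futile invalidate is dropped.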

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/arch/arm/mm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 59f8a3f15fd1..2f11d214e184 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -529,7 +529,7 @@ void flush_page_to_ram(unsigned long mfn, bool sync_icache)
 {
     void *v = map_domain_page(_mfn(mfn));
 
-    clean_and_invalidate_dcache_va_range(v, PAGE_SIZE);
+    clean_dcache_va_range(v, PAGE_SIZE);
     unmap_domain_page(v);
 
     /*
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Sat Feb 20 19:47:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 20 Feb 2021 19:47:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87325.164429 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDYDd-0001HB-Hf; Sat, 20 Feb 2021 19:47:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87325.164429; Sat, 20 Feb 2021 19:47:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDYDd-0001H4-DR; Sat, 20 Feb 2021 19:47:09 +0000
Received: by outflank-mailman (input) for mailman id 87325;
 Sat, 20 Feb 2021 19:47:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lDYDc-0001Gz-8b
 for xen-devel@lists.xenproject.org; Sat, 20 Feb 2021 19:47:08 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lDYDb-0001dk-Lf; Sat, 20 Feb 2021 19:47:07 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lDYDb-000628-9G; Sat, 20 Feb 2021 19:47:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	Message-Id:Date:Subject:Cc:To:From;
	bh=IA7DXmK7YoBXYurby+DI9MwqkK0veMWF6uOFJdzIdaE=; b=B9kRmUlmd5hXjIwNo0MQpwtl9J
	Rbc6ffQUVY/P0xM1ORErJ+hK+vwpyTGu8ajEyRUE9gUkpsY5jgFvyGm0fG3qun+KFTPL5sxfMImjW
	huKbKMPxyHGFoIukpw4sU4WZ2fn2a1SBiQeVL3hKjyIkvW5g0GQlsIdkYrtGwQdtK2fU=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: iwj@xenproject.org,
	jbeulich@suse.com,
	sstabellini@kernel.org,
	ash.j.wilding@gmail.com,
	Julien Grall <jgrall@amazon.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Dario Faggioli <dfaggioli@suse.com>
Subject: [PATCH for-4.15] xen/sched: Add missing memory barrier in vcpu_block()
Date: Sat, 20 Feb 2021 19:47:01 +0000
Message-Id: <20210220194701.24202-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

The comment in vcpu_block() states that the events should be checked
/after/ blocking to avoid a wakeup-waiting race. However, from a generic
perspective, set_bit() doesn't prevent re-ordering, so the following
could happen:

CPU0 (blocking vCPU A)          |   CPU1 (unblocking vCPU A)
                                |
A <- read local events          |
                                |   set local events
                                |   test_and_clear_bit(_VPF_blocked)
                                |       -> bail out, as the bit is not set
                                |
set_bit(_VPF_blocked)           |
                                |
check A                         |

The variable A will be 0, and therefore the vCPU will be blocked when it
should continue running.

vcpu_block() now gains an smp_mb__after_atomic() to prevent the CPU
from reading any information about local events before the flag
_VPF_blocked is set.
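
The protocol the barrier enforces can be sketched in portable C11
atomics (illustrative only: pause_flags, events_pending, try_block() and
wake() are stand-ins, not Xen's exact fields, and atomic_thread_fence()
plays the role of smp_mb__after_atomic()):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

static atomic_uint pause_flags;     /* bit 0 stands in for _VPF_blocked */
static atomic_bool events_pending;  /* stands in for local event state */

/* Blocking side: set the flag, fence, then check events. Without the
 * fence, the event load could be hoisted above the flag update,
 * recreating the race in the diagram above. */
static bool try_block(void)
{
    atomic_fetch_or_explicit(&pause_flags, 1u, memory_order_relaxed);
    atomic_thread_fence(memory_order_seq_cst);  /* smp_mb__after_atomic() */
    if (atomic_load_explicit(&events_pending, memory_order_relaxed))
    {
        /* Events already pending: undo the flag and keep running. */
        atomic_fetch_and_explicit(&pause_flags, ~1u, memory_order_relaxed);
        return false;   /* not blocked */
    }
    return true;        /* blocked */
}

/* Unblocking side: publish the event, fence, then clear the flag,
 * mirroring vcpu_unblock()'s test_and_clear_bit(). */
static void wake(void)
{
    atomic_store_explicit(&events_pending, true, memory_order_relaxed);
    atomic_thread_fence(memory_order_seq_cst);
    atomic_fetch_and_explicit(&pause_flags, ~1u, memory_order_relaxed);
}
```

With both fences in place, either the blocker observes the event and
refuses to block, or the waker observes _VPF_blocked set and clears it;
the lost-wakeup interleaving is excluded.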

Signed-off-by: Julien Grall <jgrall@amazon.com>

---

This is a follow-up of the discussion that started in 2019 (see [1])
regarding a possible race between do_poll()/vcpu_unblock() and the wake
up path.

I haven't yet fully thought about the potential race in do_poll(). If
there is, then this would likely want to be fixed in a separate patch.

On x86, the current code is safe because set_bit() is fully ordered. So
the problem is Arm (and potentially any new architectures).

I couldn't convince myself whether the Arm implementation of
local_events_need_delivery() contains enough barriers to prevent the
re-ordering. However, I don't think we want to play with the devil here,
as the function may be optimized in the future.

This patch is a candidate for 4.15. The risk is really small, as the
memory ordering will be stricter on Arm and therefore less racy.

[1] <3ce4998b-a8a8-37bd-bb26-9550571709b6@suse.com>
---
 xen/common/sched/core.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index 9745a77eee23..2b974fd6f8ba 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -1418,6 +1418,8 @@ void vcpu_block(void)
 
     set_bit(_VPF_blocked, &v->pause_flags);
 
+    smp_mb__after_atomic();
+
     arch_vcpu_block(v);
 
     /* Check for events /after/ blocking: avoids wakeup waiting race. */
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Sat Feb 20 20:26:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 20 Feb 2021 20:26:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87329.164447 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDYpF-00059p-KN; Sat, 20 Feb 2021 20:26:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87329.164447; Sat, 20 Feb 2021 20:26:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDYpF-00059i-H2; Sat, 20 Feb 2021 20:26:01 +0000
Received: by outflank-mailman (input) for mailman id 87329;
 Sat, 20 Feb 2021 20:25:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDYpD-00059a-Av; Sat, 20 Feb 2021 20:25:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDYpD-0002KY-3G; Sat, 20 Feb 2021 20:25:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDYpC-0001Bp-Pi; Sat, 20 Feb 2021 20:25:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lDYpC-0005xv-PG; Sat, 20 Feb 2021 20:25:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=USWvJ7nuyg714137uZ1RnGv0bKRdJ+bpfiGdGvs5Aqs=; b=gDV+Tg40oyUa60A3Y23W+G0PgX
	iEnXJi4x2WUj0vaOwQFrScnrNFDpT+2Y8el6il8QCONmh981o+FAlS/YUEohcIWGp741MnuihWBq4
	DIINSoAucW8jVtxG0sstTjprKW9cCNr6MI9hSPvD9fhcfHvxbCVH9NcSbYTZl7ZW6/6U=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159489-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 159489: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=e90ef02389dc8b57eaea22b290244609d720a8bf
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 20 Feb 2021 20:25:58 +0000

flight 159489 qemu-mainline real [real]
flight 159495 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/159489/
http://logs.test-lab.xenproject.org/osstest/logs/159495/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                e90ef02389dc8b57eaea22b290244609d720a8bf
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  184 days
Failing since        152659  2020-08-21 14:07:39 Z  183 days  354 attempts
Testing same since   159489  2021-02-20 09:11:13 Z    0 days    1 attempts

------------------------------------------------------------
420 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 115521 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Feb 20 20:36:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 20 Feb 2021 20:36:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87335.164462 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDYz0-0006CR-Kv; Sat, 20 Feb 2021 20:36:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87335.164462; Sat, 20 Feb 2021 20:36:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDYz0-0006CK-Hl; Sat, 20 Feb 2021 20:36:06 +0000
Received: by outflank-mailman (input) for mailman id 87335;
 Sat, 20 Feb 2021 20:36:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4ysU=HW=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1lDYyz-0006CF-GR
 for xen-devel@lists.xenproject.org; Sat, 20 Feb 2021 20:36:05 +0000
Received: from aserp2130.oracle.com (unknown [141.146.126.79])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b0c53bce-7601-4708-a125-51667abf3dab;
 Sat, 20 Feb 2021 20:36:03 +0000 (UTC)
Received: from pps.filterd (aserp2130.oracle.com [127.0.0.1])
 by aserp2130.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 11KKYvIL099809;
 Sat, 20 Feb 2021 20:36:02 GMT
Received: from aserp3030.oracle.com (aserp3030.oracle.com [141.146.126.71])
 by aserp2130.oracle.com with ESMTP id 36tqxb8ydu-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Sat, 20 Feb 2021 20:36:01 +0000
Received: from pps.filterd (aserp3030.oracle.com [127.0.0.1])
 by aserp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 11KKYZGL001838;
 Sat, 20 Feb 2021 20:36:01 GMT
Received: from nam04-dm6-obe.outbound.protection.outlook.com
 (mail-dm6nam08lp2049.outbound.protection.outlook.com [104.47.73.49])
 by aserp3030.oracle.com with ESMTP id 36trf9rxwg-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Sat, 20 Feb 2021 20:36:01 +0000
Received: from BYAPR10MB3288.namprd10.prod.outlook.com (2603:10b6:a03:156::21)
 by BYAPR10MB3064.namprd10.prod.outlook.com (2603:10b6:a03:8c::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3846.38; Sat, 20 Feb
 2021 20:35:58 +0000
Received: from BYAPR10MB3288.namprd10.prod.outlook.com
 ([fe80::f489:4e25:63e0:c721]) by BYAPR10MB3288.namprd10.prod.outlook.com
 ([fe80::f489:4e25:63e0:c721%7]) with mapi id 15.20.3868.031; Sat, 20 Feb 2021
 20:35:58 +0000
Received: from [10.74.103.244] (138.3.200.52) by
 SN2PR01CA0064.prod.exchangelabs.com (2603:10b6:800::32) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3868.27 via Frontend Transport; Sat, 20 Feb 2021 20:35:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b0c53bce-7601-4708-a125-51667abf3dab
Subject: Re: [PATCH v3 8/8] xen/evtchn: use READ/WRITE_ONCE() for accessing
 ring indices
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
        linux-kernel@vger.kernel.org
Cc: Stefano Stabellini <sstabellini@kernel.org>
References: <20210219154030.10892-1-jgross@suse.com>
 <20210219154030.10892-9-jgross@suse.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <3d3016a1-d1c8-32fe-40e5-e86e9c39b8cd@oracle.com>
Date: Sat, 20 Feb 2021 15:35:53 -0500
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
In-Reply-To: <20210219154030.10892-9-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Originating-IP: [138.3.200.52]
X-ClientProxiedBy: SN2PR01CA0064.prod.exchangelabs.com (2603:10b6:800::32) To
 BYAPR10MB3288.namprd10.prod.outlook.com (2603:10b6:a03:156::21)
MIME-Version: 1.0


On 2/19/21 10:40 AM, Juergen Gross wrote:
> For avoiding read- and write-tearing by the compiler use READ_ONCE()
> and WRITE_ONCE() for accessing the ring indices in evtchn.c.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>


Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>




From xen-devel-bounces@lists.xenproject.org Sat Feb 20 21:07:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 20 Feb 2021 21:07:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87345.164479 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDZTc-0000mM-AO; Sat, 20 Feb 2021 21:07:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87345.164479; Sat, 20 Feb 2021 21:07:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDZTc-0000mF-7K; Sat, 20 Feb 2021 21:07:44 +0000
Received: by outflank-mailman (input) for mailman id 87345;
 Sat, 20 Feb 2021 21:07:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=H6fi=HW=tklengyel.com=tamas@srs-us1.protection.inumbo.net>)
 id 1lDZTb-0000mA-5W
 for xen-devel@lists.xenproject.org; Sat, 20 Feb 2021 21:07:43 +0000
Received: from MTA-10-1.privateemail.com (unknown [68.65.122.30])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6b5e563f-c885-4b5f-a9f4-a64d8f235e01;
 Sat, 20 Feb 2021 21:07:42 +0000 (UTC)
Received: from MTA-10.privateemail.com (localhost [127.0.0.1])
 by MTA-10.privateemail.com (Postfix) with ESMTP id 1F45660033;
 Sat, 20 Feb 2021 16:07:41 -0500 (EST)
Received: from drt-xps-ubuntu.lan (unknown [10.20.151.239])
 by MTA-10.privateemail.com (Postfix) with ESMTPA id 7C24660045;
 Sat, 20 Feb 2021 21:07:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6b5e563f-c885-4b5f-a9f4-a64d8f235e01
From: Tamas K Lengyel <tamas@tklengyel.com>
To: xen-devel@lists.xenproject.org
Cc: Tamas K Lengyel <tamas@tklengyel.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH for-4.15] tools/misc/xen-vmtrace: use reset and enable
Date: Sat, 20 Feb 2021 16:07:38 -0500
Message-Id: <d63d274c46f964d89f791d5e5166971387c0e2e8.1613855006.git.tamas@tklengyel.com>
X-Mailer: git-send-email 2.27.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Virus-Scanned: ClamAV using ClamSMTP

The expected behavior when using xen-vmtrace is to get a clean start, even if
the tool was previously used on the same VM.

Signed-off-by: Tamas K Lengyel <tamas@tklengyel.com>
---
 tools/misc/xen-vmtrace.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/misc/xen-vmtrace.c b/tools/misc/xen-vmtrace.c
index 7572e880c5..35d14c6a9b 100644
--- a/tools/misc/xen-vmtrace.c
+++ b/tools/misc/xen-vmtrace.c
@@ -119,7 +119,7 @@ int main(int argc, char **argv)
         goto out;
     }
 
-    if ( xc_vmtrace_enable(xch, domid, vcpu) )
+    if ( xc_vmtrace_reset_and_enable(xch, domid, vcpu) )
     {
         perror("xc_vmtrace_enable()");
         goto out;
-- 
2.27.0
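The "clean start" rationale above can be illustrated with a toy tracer in plain C. The struct and function names below are hypothetical stand-ins, not the libxenctrl vmtrace API: a plain enable resumes on top of whatever a previous run left in the buffer, while reset-and-enable discards the stale data first.

```c
#include <assert.h>
#include <stddef.h>

/* Toy per-vCPU trace buffer; purely illustrative. */
struct tracer {
    int enabled;
    size_t used;    /* bytes of trace data currently buffered */
};

/* Plain enable: keeps whatever a previous run left behind. */
static int trace_enable(struct tracer *t)
{
    t->enabled = 1;
    return 0;
}

/* Reset-and-enable: discard stale data first, so every run starts
 * from an empty buffer even if the tool ran on this VM before. */
static int trace_reset_and_enable(struct tracer *t)
{
    t->used = 0;
    return trace_enable(t);
}
```

Doing both steps in one call also avoids a window where new trace data could land between a separate reset and the subsequent enable.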



From xen-devel-bounces@lists.xenproject.org Sat Feb 20 21:36:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 20 Feb 2021 21:36:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87348.164492 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDZvV-0003et-Ig; Sat, 20 Feb 2021 21:36:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87348.164492; Sat, 20 Feb 2021 21:36:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDZvV-0003em-F8; Sat, 20 Feb 2021 21:36:33 +0000
Received: by outflank-mailman (input) for mailman id 87348;
 Sat, 20 Feb 2021 21:36:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDZvU-0003ee-5w; Sat, 20 Feb 2021 21:36:32 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDZvT-0003Sc-Ty; Sat, 20 Feb 2021 21:36:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDZvT-00044k-KD; Sat, 20 Feb 2021 21:36:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lDZvT-0002wn-Jg; Sat, 20 Feb 2021 21:36:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159493-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 159493: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=44ae214591e58af468eacb7b873eaa0bc187c4fa
X-Osstest-Versions-That:
    ovmf=4f4d862c1c7232a18347616d94c343c929657fdb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 20 Feb 2021 21:36:31 +0000

flight 159493 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159493/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 44ae214591e58af468eacb7b873eaa0bc187c4fa
baseline version:
 ovmf                 4f4d862c1c7232a18347616d94c343c929657fdb

Last test of basis   159394  2021-02-15 23:39:45 Z    4 days
Testing same since   159493  2021-02-20 17:09:42 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Samer El-Haj-Mahmoud <Samer.El-Haj-Mahmoud@arm.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   4f4d862c1c..44ae214591  44ae214591e58af468eacb7b873eaa0bc187c4fa -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat Feb 20 22:52:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 20 Feb 2021 22:52:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87358.164513 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDb6W-0002lA-9v; Sat, 20 Feb 2021 22:52:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87358.164513; Sat, 20 Feb 2021 22:52:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDb6W-0002l3-6n; Sat, 20 Feb 2021 22:52:00 +0000
Received: by outflank-mailman (input) for mailman id 87358;
 Sat, 20 Feb 2021 22:51:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDb6V-0002kv-6T; Sat, 20 Feb 2021 22:51:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDb6U-0004gR-TP; Sat, 20 Feb 2021 22:51:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDb6U-0007D4-Ez; Sat, 20 Feb 2021 22:51:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lDb6U-000407-EW; Sat, 20 Feb 2021 22:51:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159490-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 159490: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-xl-xsm:<job status>:broken:regression
    linux-linus:test-arm64-arm64-xl-credit1:<job status>:broken:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-install:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-install:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-install:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-localmigrate/x10:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-examine:reboot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit1:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-seattle:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:host-install(5):broken:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:host-install(5):broken:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl:leak-check/basis(11):fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=f40ddce88593482919761f74910f42f4b84c004b
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 20 Feb 2021 22:51:58 +0000

flight 159490 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159490/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm         <job status>                 broken
 test-arm64-arm64-xl-credit1     <job status>                 broken  in 159484
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      12 debian-install fail in 159367 REGR. vs. 152332
 test-arm64-arm64-xl-credit2  12 debian-install fail in 159440 REGR. vs. 152332
 test-arm64-arm64-xl          12 debian-install fail in 159463 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-i386-pvgrub 19 guest-localmigrate/x10 fail in 159367 pass in 159490
 test-arm64-arm64-xl-xsm   10 host-ping-check-xen fail in 159413 pass in 159367
 test-arm64-arm64-libvirt-xsm 10 host-ping-check-xen fail in 159413 pass in 159440
 test-arm64-arm64-xl       10 host-ping-check-xen fail in 159413 pass in 159463
 test-arm64-arm64-xl-credit2   8 xen-boot         fail in 159413 pass in 159490
 test-arm64-arm64-xl-xsm       8 xen-boot         fail in 159440 pass in 159413
 test-arm64-arm64-examine      8 reboot           fail in 159440 pass in 159490
 test-arm64-arm64-xl-credit1   8 xen-boot         fail in 159440 pass in 159490
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail in 159440 pass in 159490
 test-arm64-arm64-xl-credit1 10 host-ping-check-xen fail in 159463 pass in 159490
 test-arm64-arm64-xl-seattle 10 host-ping-check-xen fail in 159484 pass in 159463
 test-arm64-arm64-libvirt-xsm  8 xen-boot                   fail pass in 159440
 test-arm64-arm64-xl           8 xen-boot                   fail pass in 159463
 test-arm64-arm64-xl-seattle   8 xen-boot                   fail pass in 159484

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm       5 host-install(5)       broken blocked in 152332
 test-arm64-arm64-xl-credit1 5 host-install(5) broken in 159484 blocked in 152332
 test-arm64-arm64-xl-credit1  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-credit2  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl   11 leak-check/basis(11) fail in 159367 blocked in 152332
 test-armhf-armhf-xl-rtds 18 guest-start/debian.repeat fail in 159413 like 152332
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11) fail in 159440 blocked in 152332
 test-arm64-arm64-xl-seattle 11 leak-check/basis(11) fail in 159440 blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                f40ddce88593482919761f74910f42f4b84c004b
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  204 days
Failing since        152366  2020-08-01 20:49:34 Z  203 days  350 attempts
Testing same since   159367  2021-02-15 05:07:31 Z    5 days    7 attempts

------------------------------------------------------------
4578 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      broken  
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-arm64-arm64-xl-xsm broken
broken-step test-arm64-arm64-xl-xsm host-install(5)
broken-job test-arm64-arm64-xl-credit1 broken

Not pushing.

(No revision log; it would be 1033140 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Feb 20 23:22:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 20 Feb 2021 23:22:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87365.164528 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDbZy-0005hW-47; Sat, 20 Feb 2021 23:22:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87365.164528; Sat, 20 Feb 2021 23:22:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDbZy-0005hP-0T; Sat, 20 Feb 2021 23:22:26 +0000
Received: by outflank-mailman (input) for mailman id 87365;
 Sat, 20 Feb 2021 23:22:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) id 1lDbZw-0005hK-91
 for xen-devel@lists.xenproject.org; Sat, 20 Feb 2021 23:22:24 +0000
Received: from mail-pg1-x532.google.com (unknown [2607:f8b0:4864:20::532])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e67292e3-7505-40b0-a2bb-4595ec173f73;
 Sat, 20 Feb 2021 23:22:23 +0000 (UTC)
Received: by mail-pg1-x532.google.com with SMTP id m2so7780988pgq.5
 for <xen-devel@lists.xenproject.org>; Sat, 20 Feb 2021 15:22:23 -0800 (PST)
Received: from sc2-haas01-esx0118.eng.vmware.com ([66.170.99.1])
 by smtp.gmail.com with ESMTPSA id 4sm13171538pjc.23.2021.02.20.15.22.20
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sat, 20 Feb 2021 15:22:21 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e67292e3-7505-40b0-a2bb-4595ec173f73
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=Ep1/hKudadrnwSmy7i9Qe1zqNMv0dz/lqPo/OEzQBLY=;
        b=Y5+H14gHGZgi0CNqllBjsnLBOhgoLpbYiHYC+nT+ktigN+4Fl8meG6ozXsst06ueyg
         FvbGHjyqEVyTHPB3Zp/v5zJBgju0J3MTawqo26iPtMIljoyQw5J1tDi3JCN/LxouS/67
         klSfhyjA0WlAiqToMlFX0eaeULuKmwtYr35j93DM4SArIGeRTKUvjtZCTmu8pQtz95mc
         zKlmH9HYQzFlX11KOMI08PJmn+exU7ZbXRN94aN7hiSJOXTCbgGEPd/DA5A9MX4Ou19E
         nctzTNhjKFOM2GpCZEVJiLc58qMaRY5BfZIfIevPc9WwHjjM/vaTFeQkkKNfQuiwZ549
         HuoQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=Ep1/hKudadrnwSmy7i9Qe1zqNMv0dz/lqPo/OEzQBLY=;
        b=CjJTn8wNBhUk68nIj4j8Y3SGAf/mXrI2CsEIWTnXv0OKdsO3LEQH2rP9TLXYTf3xqN
         jF3LsbekDuWV26oId5sxGdscBfUHE7AN++qOzNRAnzl2hRSSj71NtGKMEht1qV/tcNA5
         hk3d2F06yGS0h17CnygChHGwgNZV8OxjKashIFsfUfgYEotUphiE4LP5lwGU2m7WXgLi
         G0eh1pus3mdk6eMCC8ZjzpWzuelyhEBnNtVCP1o6/MjhOEHpaVk6b2Wta0tEhfYHM/lD
         H9wDr/6lrPUsAgjwg/Td5raoSAuE7iHkH3HBcDWoWzaTpuJ14bG0UWruP8lxAZvEgeHj
         NHuQ==
X-Gm-Message-State: AOAM532FMvxRrj5eLcGkr9V5vT37K/6KQMx9T8+4Fuygzu7xvXgjKz/W
	7v+6rxay7zuK/t9lnCJ6t54=
X-Google-Smtp-Source: ABdhPJxpOmWYOapxv+6DBYdb/cvr1/vWkG8h2RXX9Og2ITBxnK+Xono/2zD2+h9CTxu3Wx35RZTQdA==
X-Received: by 2002:a63:1648:: with SMTP id 8mr14414133pgw.392.1613863342154;
        Sat, 20 Feb 2021 15:22:22 -0800 (PST)
From: Nadav Amit <nadav.amit@gmail.com>
X-Google-Original-From: Nadav Amit
To: linux-kernel@vger.kernel.org
Cc: Peter Zijlstra <peterz@infradead.org>,
	Andy Lutomirski <luto@kernel.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Nadav Amit <namit@vmware.com>,
	Borislav Petkov <bp@alien8.de>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Ingo Molnar <mingo@redhat.com>,
	Josh Poimboeuf <jpoimboe@redhat.com>,
	Juergen Gross <jgross@suse.com>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Rik van Riel <riel@surriel.com>,
	Sasha Levin <sashal@kernel.org>,
	Stephen Hemminger <sthemmin@microsoft.com>,
	kvm@vger.kernel.org,
	linux-hyperv@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	x86@kernel.org,
	xen-devel@lists.xenproject.org
Subject: [PATCH v6 0/9] x86/tlb: Concurrent TLB flushes
Date: Sat, 20 Feb 2021 15:17:03 -0800
Message-Id: <20210220231712.2475218-1-namit@vmware.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Nadav Amit <namit@vmware.com>

The series improves TLB shootdown by flushing the local TLB concurrently
with remote TLBs, overlapping the IPI delivery time with the local
flush. Performance numbers can be found in the previous version [1].

v5 was rebased on 5.11 (long time after v4), and had some bugs and
embarrassing build errors. Peter Zijlstra and Christoph Hellwig had some
comments as well. These issues were addressed (excluding one 82-character
line that I left). Based on their feedback, an additional patch was also
added to reuse on_each_cpu_cond_mask() code and avoid unnecessary calls
by inlining.

KernelCI showed RCU stalls on arm64, which I could not figure out from
the kernel splat. If this issue persists, I would appreciate it if
someone could assist in debugging, or at least provide the output from
running the kernel with CONFIG_CSD_LOCK_WAIT_DEBUG=Y.

[1] https://lore.kernel.org/lkml/20190823224153.15223-1-namit@vmware.com/

v5 -> v6:
* Address build warnings due to rebase mistakes
* Reuse code from on_each_cpu_cond_mask() and inline [PeterZ]
* Fix some style issues [Hellwig]

v4 -> v5:
* Rebase on 5.11
* Move concurrent smp logic to smp_call_function_many_cond() 
* Remove SGI-UV patch which is not needed anymore

v3 -> v4:
* Merge flush_tlb_func_local and flush_tlb_func_remote() [Peter]
* Prevent preemption in on_each_cpu(). It is not needed, but it avoids
  concerns. [Peter/tglx]
* Adding Acked-by and Reviewed-by tags

v2 -> v3:
* Open-code the remote/local-flush decision code [Andy]
* Fix hyper-v, Xen implementations [Andrew]
* Fix redundant TLB flushes.

v1 -> v2:
* Removing the patches that Thomas took [tglx]
* Adding hyper-v, Xen compile-tested implementations [Dave]
* Removing UV [Andy]
* Adding lazy optimization, removing inline keyword [Dave]
* Restructuring patch-set

RFCv2 -> v1:
* Fix comment on flush_tlb_multi [Juergen]
* Removing async invalidation optimizations [Andy]
* Adding KVM support [Paolo]

Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Haiyang Zhang <haiyangz@microsoft.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: "K. Y. Srinivasan" <kys@microsoft.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: Sasha Levin <sashal@kernel.org>
Cc: Stephen Hemminger <sthemmin@microsoft.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: kvm@vger.kernel.org
Cc: linux-hyperv@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: virtualization@lists.linux-foundation.org
Cc: x86@kernel.org
Cc: xen-devel@lists.xenproject.org

Nadav Amit (9):
  smp: Run functions concurrently in smp_call_function_many_cond()
  x86/mm/tlb: Unify flush_tlb_func_local() and flush_tlb_func_remote()
  x86/mm/tlb: Open-code on_each_cpu_cond_mask() for tlb_is_not_lazy()
  x86/mm/tlb: Flush remote and local TLBs concurrently
  x86/mm/tlb: Privatize cpu_tlbstate
  x86/mm/tlb: Do not make is_lazy dirty for no reason
  cpumask: Mark functions as pure
  x86/mm/tlb: Remove unnecessary uses of the inline keyword
  smp: inline on_each_cpu_cond() and on_each_cpu()

 arch/x86/hyperv/mmu.c                 |  10 +-
 arch/x86/include/asm/paravirt.h       |   6 +-
 arch/x86/include/asm/paravirt_types.h |   4 +-
 arch/x86/include/asm/tlbflush.h       |  48 ++++---
 arch/x86/include/asm/trace/hyperv.h   |   2 +-
 arch/x86/kernel/alternative.c         |   2 +-
 arch/x86/kernel/kvm.c                 |  11 +-
 arch/x86/kernel/paravirt.c            |   2 +-
 arch/x86/mm/init.c                    |   2 +-
 arch/x86/mm/tlb.c                     | 176 +++++++++++++----------
 arch/x86/xen/mmu_pv.c                 |  11 +-
 include/linux/cpumask.h               |   6 +-
 include/linux/smp.h                   |  50 +++++--
 include/trace/events/xen.h            |   2 +-
 kernel/smp.c                          | 196 +++++++++++---------------
 15 files changed, 278 insertions(+), 250 deletions(-)

-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Sat Feb 20 23:22:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 20 Feb 2021 23:22:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87366.164540 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDba3-0005j6-Cx; Sat, 20 Feb 2021 23:22:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87366.164540; Sat, 20 Feb 2021 23:22:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDba3-0005iy-9G; Sat, 20 Feb 2021 23:22:31 +0000
Received: by outflank-mailman (input) for mailman id 87366;
 Sat, 20 Feb 2021 23:22:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) id 1lDba2-0005ii-Aa
 for xen-devel@lists.xenproject.org; Sat, 20 Feb 2021 23:22:30 +0000
Received: from mail-pj1-x1029.google.com (unknown [2607:f8b0:4864:20::1029])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dc2163dc-c21b-4c3a-9a02-ab1ef60ebde7;
 Sat, 20 Feb 2021 23:22:28 +0000 (UTC)
Received: by mail-pj1-x1029.google.com with SMTP id c19so5893369pjq.3
 for <xen-devel@lists.xenproject.org>; Sat, 20 Feb 2021 15:22:28 -0800 (PST)
Received: from sc2-haas01-esx0118.eng.vmware.com ([66.170.99.1])
 by smtp.gmail.com with ESMTPSA id 4sm13171538pjc.23.2021.02.20.15.22.26
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sat, 20 Feb 2021 15:22:27 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dc2163dc-c21b-4c3a-9a02-ab1ef60ebde7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=TD2ogWuZxu1nh3sbkljzgokEMLMfmz1SRAU0dNVuiq4=;
        b=KkjrXvF2EMBhf0LbFW72lo9HmRaa+ILywBixPXDQd4dpCPL2oc6Eiep6d9lOnE7zpk
         4B7qL/utLy9vjpphJA9v3/um7xbRyW2cEIyEqFKYua/wPI5qyGUHCxVz7o6g7I8ibCbL
         +uJ0SUe43Y2IlS8oKwqCaTfkScSCv2IIyFSvLvvTVyVVtbwJtye6YZADwr1izAB1OP0f
         q3HOqpHWkgGb+RX/k7TMg91KR5Z6naG+ETWxdIt7b3juBva1ocdmBCJnEUq9TsGVmqVN
         LT9+C4gS7U4y7KNiXEa0/TRhwcV6B6QlIUmoAeGkI2GmvyTJ5UM0qlYc+jtbXQe0E7mw
         MtyQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=TD2ogWuZxu1nh3sbkljzgokEMLMfmz1SRAU0dNVuiq4=;
        b=tVsnZ1s2kzPWZCA7J2/FTZqALua4vIIJzVe+2CorBFInC2Zie/1ZAyxEqMSfaxZy3+
         pTpZNldb8b803MVpAqYYEo9vHyiZpvCmBr6al8tamhpvdA8xkgjPB0oFVxnOwcPbUsop
         pt/9HvhGwNriV1X9u6GQmEtW3wNjdyQslJSHXuGQi0t2GxXCmIrE+H1mvsaPu7aOgLvD
         Mn5uRMBSsICfjefRxjzG5KVbO9dNqqWcqIoOUAz/In3thCqreN/tgc7Bmz6dNVXIPNo0
         y2ePJsprJq9P51ucftNxP4n27wBIUw3ORjqmfPLeeo0CzmGqojCYK4dPA8HuafVt4INO
         z7Zw==
X-Gm-Message-State: AOAM532SYPFj4o89HX9X03DZugCaPEHttQtj27P+xihRgbvLu4XzQscG
	DuGBXYbZ4CSQl32dDiToHF8=
X-Google-Smtp-Source: ABdhPJyhIA8sW3A3FXOeBX8uv8g3j/9mj0CBUmIlxopHAejrFvCbEXuUQil/FuxdgaurHBXiEhiqqg==
X-Received: by 2002:a17:903:1d0:b029:df:d098:f1cb with SMTP id e16-20020a17090301d0b02900dfd098f1cbmr15565103plh.49.1613863347887;
        Sat, 20 Feb 2021 15:22:27 -0800 (PST)
From: Nadav Amit <nadav.amit@gmail.com>
X-Google-Original-From: Nadav Amit
To: linux-kernel@vger.kernel.org
Cc: Peter Zijlstra <peterz@infradead.org>,
	Andy Lutomirski <luto@kernel.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Nadav Amit <namit@vmware.com>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Stephen Hemminger <sthemmin@microsoft.com>,
	Sasha Levin <sashal@kernel.org>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	x86@kernel.org,
	Juergen Gross <jgross@suse.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	linux-hyperv@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	kvm@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	Michael Kelley <mikelley@microsoft.com>,
	Dave Hansen <dave.hansen@linux.intel.com>
Subject: [PATCH v6 4/9] x86/mm/tlb: Flush remote and local TLBs concurrently
Date: Sat, 20 Feb 2021 15:17:07 -0800
Message-Id: <20210220231712.2475218-5-namit@vmware.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210220231712.2475218-1-namit@vmware.com>
References: <20210220231712.2475218-1-namit@vmware.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Nadav Amit <namit@vmware.com>

To improve TLB shootdown performance, flush the remote and local TLBs
concurrently. Introduce flush_tlb_multi() that does so. Introduce
paravirtual versions of flush_tlb_multi() for KVM, Xen and hyper-v (Xen
and hyper-v are only compile-tested).

While the updated smp infrastructure is capable of running a function on
a single local core, it is not optimized for this case. The multiple
function calls and the indirect branch introduce some overhead, and
might make local TLB flushes slower than they were before the recent
changes.

Before calling the SMP infrastructure, check if only a local TLB flush
is needed to restore the lost performance in this common case. This
requires checking mm_cpumask() one more time, but unless this mask is
updated very frequently, this should not impact performance negatively.
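The resulting dispatch in flush_tlb_mm_range() can be sketched in
userspace C. This is a simplified model, not kernel code: the cpumask is
reduced to a 64-bit word, and the "mm is loaded on this CPU" check is
modeled as a plain mask-bit test. The function and enum names here are
illustrative only:

```c
/*
 * Sketch of the flush-path decision: if any CPU other than the current
 * one is set in the mm's cpumask, take the concurrent flush_tlb_multi()
 * path (which flushes local and remote TLBs together); otherwise, if
 * only the local CPU needs a flush, call the flush function directly.
 */
#include <stdint.h>
#include <stdbool.h>

enum flush_path { FLUSH_NONE, FLUSH_LOCAL, FLUSH_MULTI };

/* Models cpumask_any_but(mask, cpu) < nr_cpu_ids: any bit but 'cpu' set? */
static bool mask_has_other(uint64_t mask, int cpu)
{
	return (mask & ~(UINT64_C(1) << cpu)) != 0;
}

static enum flush_path choose_flush_path(uint64_t mm_mask, int cpu)
{
	if (mask_has_other(mm_mask, cpu))
		return FLUSH_MULTI;	/* remote CPUs involved: concurrent path */
	if (mm_mask & (UINT64_C(1) << cpu))
		return FLUSH_LOCAL;	/* only us: flush directly, IRQs disabled */
	return FLUSH_NONE;		/* mm not loaded anywhere */
}
```

Note that on the FLUSH_MULTI path the patch passes the full mm_cpumask(),
current CPU included, so the local flush happens concurrently with the
remote ones rather than as a separate step.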

Cc: "K. Y. Srinivasan" <kys@microsoft.com>
Cc: Haiyang Zhang <haiyangz@microsoft.com>
Cc: Stephen Hemminger <sthemmin@microsoft.com>
Cc: Sasha Levin <sashal@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: x86@kernel.org
Cc: Juergen Gross <jgross@suse.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: linux-hyperv@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: virtualization@lists.linux-foundation.org
Cc: kvm@vger.kernel.org
Cc: xen-devel@lists.xenproject.org
Reviewed-by: Michael Kelley <mikelley@microsoft.com> # Hyper-v parts
Reviewed-by: Juergen Gross <jgross@suse.com> # Xen and paravirt parts
Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Nadav Amit <namit@vmware.com>

---
v5->v6:
* Use on_each_cpu_mask() instead of on_each_cpu_cond_mask() [PeterZ]
* Use cond_cpumask when needed instead of cpumask
* Rename remaining instance of native_flush_tlb_others()
---
 arch/x86/hyperv/mmu.c                 | 10 +++---
 arch/x86/include/asm/paravirt.h       |  6 ++--
 arch/x86/include/asm/paravirt_types.h |  4 +--
 arch/x86/include/asm/tlbflush.h       |  4 +--
 arch/x86/include/asm/trace/hyperv.h   |  2 +-
 arch/x86/kernel/kvm.c                 | 11 +++++--
 arch/x86/kernel/paravirt.c            |  2 +-
 arch/x86/mm/tlb.c                     | 46 +++++++++++++++++----------
 arch/x86/xen/mmu_pv.c                 | 11 +++----
 include/trace/events/xen.h            |  2 +-
 10 files changed, 57 insertions(+), 41 deletions(-)

diff --git a/arch/x86/hyperv/mmu.c b/arch/x86/hyperv/mmu.c
index 2c87350c1fb0..681dba8de4f2 100644
--- a/arch/x86/hyperv/mmu.c
+++ b/arch/x86/hyperv/mmu.c
@@ -52,8 +52,8 @@ static inline int fill_gva_list(u64 gva_list[], int offset,
 	return gva_n - offset;
 }
 
-static void hyperv_flush_tlb_others(const struct cpumask *cpus,
-				    const struct flush_tlb_info *info)
+static void hyperv_flush_tlb_multi(const struct cpumask *cpus,
+				   const struct flush_tlb_info *info)
 {
 	int cpu, vcpu, gva_n, max_gvas;
 	struct hv_tlb_flush **flush_pcpu;
@@ -61,7 +61,7 @@ static void hyperv_flush_tlb_others(const struct cpumask *cpus,
 	u64 status = U64_MAX;
 	unsigned long flags;
 
-	trace_hyperv_mmu_flush_tlb_others(cpus, info);
+	trace_hyperv_mmu_flush_tlb_multi(cpus, info);
 
 	if (!hv_hypercall_pg)
 		goto do_native;
@@ -164,7 +164,7 @@ static void hyperv_flush_tlb_others(const struct cpumask *cpus,
 	if (!(status & HV_HYPERCALL_RESULT_MASK))
 		return;
 do_native:
-	native_flush_tlb_others(cpus, info);
+	native_flush_tlb_multi(cpus, info);
 }
 
 static u64 hyperv_flush_tlb_others_ex(const struct cpumask *cpus,
@@ -239,6 +239,6 @@ void hyperv_setup_mmu_ops(void)
 		return;
 
 	pr_info("Using hypercall for remote TLB flush\n");
-	pv_ops.mmu.flush_tlb_others = hyperv_flush_tlb_others;
+	pv_ops.mmu.flush_tlb_multi = hyperv_flush_tlb_multi;
 	pv_ops.mmu.tlb_remove_table = tlb_remove_table;
 }
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 4abf110e2243..45b55e3e0630 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -50,7 +50,7 @@ static inline void slow_down_io(void)
 void native_flush_tlb_local(void);
 void native_flush_tlb_global(void);
 void native_flush_tlb_one_user(unsigned long addr);
-void native_flush_tlb_others(const struct cpumask *cpumask,
+void native_flush_tlb_multi(const struct cpumask *cpumask,
 			     const struct flush_tlb_info *info);
 
 static inline void __flush_tlb_local(void)
@@ -68,10 +68,10 @@ static inline void __flush_tlb_one_user(unsigned long addr)
 	PVOP_VCALL1(mmu.flush_tlb_one_user, addr);
 }
 
-static inline void __flush_tlb_others(const struct cpumask *cpumask,
+static inline void __flush_tlb_multi(const struct cpumask *cpumask,
 				      const struct flush_tlb_info *info)
 {
-	PVOP_VCALL2(mmu.flush_tlb_others, cpumask, info);
+	PVOP_VCALL2(mmu.flush_tlb_multi, cpumask, info);
 }
 
 static inline void paravirt_tlb_remove_table(struct mmu_gather *tlb, void *table)
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index de87087d3bde..b7b35d5d58e7 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -188,8 +188,8 @@ struct pv_mmu_ops {
 	void (*flush_tlb_user)(void);
 	void (*flush_tlb_kernel)(void);
 	void (*flush_tlb_one_user)(unsigned long addr);
-	void (*flush_tlb_others)(const struct cpumask *cpus,
-				 const struct flush_tlb_info *info);
+	void (*flush_tlb_multi)(const struct cpumask *cpus,
+				const struct flush_tlb_info *info);
 
 	void (*tlb_remove_table)(struct mmu_gather *tlb, void *table);
 
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index a7a598af116d..3c6681def912 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -175,7 +175,7 @@ extern void initialize_tlbstate_and_flush(void);
  *  - flush_tlb_page(vma, vmaddr) flushes one page
  *  - flush_tlb_range(vma, start, end) flushes a range of pages
  *  - flush_tlb_kernel_range(start, end) flushes a range of kernel pages
- *  - flush_tlb_others(cpumask, info) flushes TLBs on other cpus
+ *  - flush_tlb_multi(cpumask, info) flushes TLBs on multiple cpus
  *
  * ..but the i386 has somewhat limited tlb flushing capabilities,
  * and page-granular flushes are available only on i486 and up.
@@ -209,7 +209,7 @@ struct flush_tlb_info {
 void flush_tlb_local(void);
 void flush_tlb_one_user(unsigned long addr);
 void flush_tlb_one_kernel(unsigned long addr);
-void flush_tlb_others(const struct cpumask *cpumask,
+void flush_tlb_multi(const struct cpumask *cpumask,
 		      const struct flush_tlb_info *info);
 
 #ifdef CONFIG_PARAVIRT
diff --git a/arch/x86/include/asm/trace/hyperv.h b/arch/x86/include/asm/trace/hyperv.h
index 4d705cb4d63b..a8e5a7a2b460 100644
--- a/arch/x86/include/asm/trace/hyperv.h
+++ b/arch/x86/include/asm/trace/hyperv.h
@@ -8,7 +8,7 @@
 
 #if IS_ENABLED(CONFIG_HYPERV)
 
-TRACE_EVENT(hyperv_mmu_flush_tlb_others,
+TRACE_EVENT(hyperv_mmu_flush_tlb_multi,
 	    TP_PROTO(const struct cpumask *cpus,
 		     const struct flush_tlb_info *info),
 	    TP_ARGS(cpus, info),
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 5e78e01ca3b4..38ea9dee2456 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -613,7 +613,7 @@ static int kvm_cpu_down_prepare(unsigned int cpu)
 }
 #endif
 
-static void kvm_flush_tlb_others(const struct cpumask *cpumask,
+static void kvm_flush_tlb_multi(const struct cpumask *cpumask,
 			const struct flush_tlb_info *info)
 {
 	u8 state;
@@ -627,6 +627,11 @@ static void kvm_flush_tlb_others(const struct cpumask *cpumask,
 	 * queue flush_on_enter for pre-empted vCPUs
 	 */
 	for_each_cpu(cpu, flushmask) {
+		/*
+		 * The local vCPU is never preempted, so we do not explicitly
+		 * skip check for local vCPU - it will never be cleared from
+		 * flushmask.
+		 */
 		src = &per_cpu(steal_time, cpu);
 		state = READ_ONCE(src->preempted);
 		if ((state & KVM_VCPU_PREEMPTED)) {
@@ -636,7 +641,7 @@ static void kvm_flush_tlb_others(const struct cpumask *cpumask,
 		}
 	}
 
-	native_flush_tlb_others(flushmask, info);
+	native_flush_tlb_multi(flushmask, info);
 }
 
 static void __init kvm_guest_init(void)
@@ -654,7 +659,7 @@ static void __init kvm_guest_init(void)
 	}
 
 	if (pv_tlb_flush_supported()) {
-		pv_ops.mmu.flush_tlb_others = kvm_flush_tlb_others;
+		pv_ops.mmu.flush_tlb_multi = kvm_flush_tlb_multi;
 		pv_ops.mmu.tlb_remove_table = tlb_remove_table;
 		pr_info("KVM setup pv remote TLB flush\n");
 	}
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index c60222ab8ab9..197a12662155 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -330,7 +330,7 @@ struct paravirt_patch_template pv_ops = {
 	.mmu.flush_tlb_user	= native_flush_tlb_local,
 	.mmu.flush_tlb_kernel	= native_flush_tlb_global,
 	.mmu.flush_tlb_one_user	= native_flush_tlb_one_user,
-	.mmu.flush_tlb_others	= native_flush_tlb_others,
+	.mmu.flush_tlb_multi	= native_flush_tlb_multi,
 	.mmu.tlb_remove_table	=
 			(void (*)(struct mmu_gather *, void *))tlb_remove_page,
 
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 07b6701a540a..8db87cd92e6b 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -24,7 +24,7 @@
 # define __flush_tlb_local		native_flush_tlb_local
 # define __flush_tlb_global		native_flush_tlb_global
 # define __flush_tlb_one_user(addr)	native_flush_tlb_one_user(addr)
-# define __flush_tlb_others(msk, info)	native_flush_tlb_others(msk, info)
+# define __flush_tlb_multi(msk, info)	native_flush_tlb_multi(msk, info)
 #endif
 
 /*
@@ -490,7 +490,7 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 		/*
 		 * Even in lazy TLB mode, the CPU should stay set in the
 		 * mm_cpumask. The TLB shootdown code can figure out from
-		 * from cpu_tlbstate.is_lazy whether or not to send an IPI.
+		 * cpu_tlbstate.is_lazy whether or not to send an IPI.
 		 */
 		if (WARN_ON_ONCE(real_prev != &init_mm &&
 				 !cpumask_test_cpu(cpu, mm_cpumask(next))))
@@ -697,7 +697,7 @@ static void flush_tlb_func(void *info)
 		 * garbage into our TLB.  Since switching to init_mm is barely
 		 * slower than a minimal flush, just switch to init_mm.
 		 *
-		 * This should be rare, with native_flush_tlb_others skipping
+		 * This should be rare, with native_flush_tlb_multi() skipping
 		 * IPIs to lazy TLB mode CPUs.
 		 */
 		switch_mm_irqs_off(NULL, &init_mm, NULL);
@@ -795,9 +795,14 @@ static bool tlb_is_not_lazy(int cpu)
 
 static DEFINE_PER_CPU(cpumask_t, flush_tlb_mask);
 
-STATIC_NOPV void native_flush_tlb_others(const struct cpumask *cpumask,
+STATIC_NOPV void native_flush_tlb_multi(const struct cpumask *cpumask,
 					 const struct flush_tlb_info *info)
 {
+	/*
+	 * Do accounting and tracing. Note that there are (and have always been)
+	 * cases in which a remote TLB flush will be traced, but eventually
+	 * would not happen.
+	 */
 	count_vm_tlb_event(NR_TLB_REMOTE_FLUSH);
 	if (info->end == TLB_FLUSH_ALL)
 		trace_tlb_flush(TLB_REMOTE_SEND_IPI, TLB_FLUSH_ALL);
@@ -816,8 +821,7 @@ STATIC_NOPV void native_flush_tlb_others(const struct cpumask *cpumask,
 	 * doing a speculative memory access.
 	 */
 	if (info->freed_tables) {
-		smp_call_function_many(cpumask, flush_tlb_func,
-			       (void *)info, 1);
+		on_each_cpu_mask(cpumask, flush_tlb_func, (void *)info, true);
 	} else {
 		/*
 		 * Although we could have used on_each_cpu_cond_mask(),
@@ -844,14 +848,14 @@ STATIC_NOPV void native_flush_tlb_others(const struct cpumask *cpumask,
 			if (tlb_is_not_lazy(cpu))
 				__cpumask_set_cpu(cpu, cond_cpumask);
 		}
-		smp_call_function_many(cond_cpumask, flush_tlb_func, (void *)info, 1);
+		on_each_cpu_mask(cond_cpumask, flush_tlb_func, (void *)info, true);
 	}
 }
 
-void flush_tlb_others(const struct cpumask *cpumask,
+void flush_tlb_multi(const struct cpumask *cpumask,
 		      const struct flush_tlb_info *info)
 {
-	__flush_tlb_others(cpumask, info);
+	__flush_tlb_multi(cpumask, info);
 }
 
 /*
@@ -931,16 +935,20 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 	info = get_flush_tlb_info(mm, start, end, stride_shift, freed_tables,
 				  new_tlb_gen);
 
-	if (mm == this_cpu_read(cpu_tlbstate.loaded_mm)) {
+	/*
+	 * flush_tlb_multi() is not optimized for the common case in which only
+	 * a local TLB flush is needed. Optimize this use-case by calling
+	 * flush_tlb_func_local() directly in this case.
+	 */
+	if (cpumask_any_but(mm_cpumask(mm), cpu) < nr_cpu_ids) {
+		flush_tlb_multi(mm_cpumask(mm), info);
+	} else if (mm == this_cpu_read(cpu_tlbstate.loaded_mm)) {
 		lockdep_assert_irqs_enabled();
 		local_irq_disable();
 		flush_tlb_func(info);
 		local_irq_enable();
 	}
 
-	if (cpumask_any_but(mm_cpumask(mm), cpu) < nr_cpu_ids)
-		flush_tlb_others(mm_cpumask(mm), info);
-
 	put_flush_tlb_info();
 	put_cpu();
 }
@@ -1152,16 +1160,20 @@ void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
 	int cpu = get_cpu();
 
 	info = get_flush_tlb_info(NULL, 0, TLB_FLUSH_ALL, 0, false, 0);
-	if (cpumask_test_cpu(cpu, &batch->cpumask)) {
+	/*
+	 * flush_tlb_multi() is not optimized for the common case in which only
+	 * a local TLB flush is needed. Optimize this use-case by calling
+	 * flush_tlb_func_local() directly in this case.
+	 */
+	if (cpumask_any_but(&batch->cpumask, cpu) < nr_cpu_ids) {
+		flush_tlb_multi(&batch->cpumask, info);
+	} else if (cpumask_test_cpu(cpu, &batch->cpumask)) {
 		lockdep_assert_irqs_enabled();
 		local_irq_disable();
 		flush_tlb_func(info);
 		local_irq_enable();
 	}
 
-	if (cpumask_any_but(&batch->cpumask, cpu) < nr_cpu_ids)
-		flush_tlb_others(&batch->cpumask, info);
-
 	cpumask_clear(&batch->cpumask);
 
 	put_flush_tlb_info();
diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
index cf2ade864c30..09b95c0e876e 100644
--- a/arch/x86/xen/mmu_pv.c
+++ b/arch/x86/xen/mmu_pv.c
@@ -1247,8 +1247,8 @@ static void xen_flush_tlb_one_user(unsigned long addr)
 	preempt_enable();
 }
 
-static void xen_flush_tlb_others(const struct cpumask *cpus,
-				 const struct flush_tlb_info *info)
+static void xen_flush_tlb_multi(const struct cpumask *cpus,
+				const struct flush_tlb_info *info)
 {
 	struct {
 		struct mmuext_op op;
@@ -1258,7 +1258,7 @@ static void xen_flush_tlb_others(const struct cpumask *cpus,
 	const size_t mc_entry_size = sizeof(args->op) +
 		sizeof(args->mask[0]) * BITS_TO_LONGS(num_possible_cpus());
 
-	trace_xen_mmu_flush_tlb_others(cpus, info->mm, info->start, info->end);
+	trace_xen_mmu_flush_tlb_multi(cpus, info->mm, info->start, info->end);
 
 	if (cpumask_empty(cpus))
 		return;		/* nothing to do */
@@ -1267,9 +1267,8 @@ static void xen_flush_tlb_others(const struct cpumask *cpus,
 	args = mcs.args;
 	args->op.arg2.vcpumask = to_cpumask(args->mask);
 
-	/* Remove us, and any offline CPUS. */
+	/* Remove any offline CPUs */
 	cpumask_and(to_cpumask(args->mask), cpus, cpu_online_mask);
-	cpumask_clear_cpu(smp_processor_id(), to_cpumask(args->mask));
 
 	args->op.cmd = MMUEXT_TLB_FLUSH_MULTI;
 	if (info->end != TLB_FLUSH_ALL &&
@@ -2086,7 +2085,7 @@ static const struct pv_mmu_ops xen_mmu_ops __initconst = {
 	.flush_tlb_user = xen_flush_tlb,
 	.flush_tlb_kernel = xen_flush_tlb,
 	.flush_tlb_one_user = xen_flush_tlb_one_user,
-	.flush_tlb_others = xen_flush_tlb_others,
+	.flush_tlb_multi = xen_flush_tlb_multi,
 	.tlb_remove_table = tlb_remove_table,
 
 	.pgd_alloc = xen_pgd_alloc,
diff --git a/include/trace/events/xen.h b/include/trace/events/xen.h
index 3b61b587e137..44a3f565264d 100644
--- a/include/trace/events/xen.h
+++ b/include/trace/events/xen.h
@@ -346,7 +346,7 @@ TRACE_EVENT(xen_mmu_flush_tlb_one_user,
 	    TP_printk("addr %lx", __entry->addr)
 	);
 
-TRACE_EVENT(xen_mmu_flush_tlb_others,
+TRACE_EVENT(xen_mmu_flush_tlb_multi,
 	    TP_PROTO(const struct cpumask *cpus, struct mm_struct *mm,
 		     unsigned long addr, unsigned long end),
 	    TP_ARGS(cpus, mm, addr, end),
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Sun Feb 21 04:28:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 21 Feb 2021 04:28:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87388.164582 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDgLS-0003Nm-LS; Sun, 21 Feb 2021 04:27:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87388.164582; Sun, 21 Feb 2021 04:27:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDgLS-0003Ne-F1; Sun, 21 Feb 2021 04:27:46 +0000
Received: by outflank-mailman (input) for mailman id 87388;
 Sun, 21 Feb 2021 04:27:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDgLR-0003NW-VB; Sun, 21 Feb 2021 04:27:45 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDgLR-0004U5-QT; Sun, 21 Feb 2021 04:27:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDgLR-0004jv-Fz; Sun, 21 Feb 2021 04:27:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lDgLR-0004IF-FW; Sun, 21 Feb 2021 04:27:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=O6oGaqz5/2QgFJWaecvZVdeyQMjVb7ZfK8U++Lvs1+g=; b=UYYNg/RtVXK1R72rg3yZ/lX/c6
	xwsjoUZp0E9DugLW80MB/0Ux4oxNhXjOFTwihHz5gWoG39jRObICzDNOgmr4knQxdpLOFIQbRAOGX
	kn62V5ja4GaNF+WglaI5Q5G/sSiYE2aP02lmT44eUhouLP8RIZ+GaQsggvNnU8pwKI8U=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159491-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 159491: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-xtf-amd64-amd64-5:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:xen-boot:fail:regression
    xen-unstable:test-xtf-amd64-amd64-1:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    xen-unstable:test-xtf-amd64-amd64-2:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-intel:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-libvirt-xsm:xen-boot:fail:regression
    xen-unstable:test-xtf-amd64-amd64-3:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-amd64-i386-xl-raw:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-coresched-i386-xl:xen-boot:fail:regression
    xen-unstable:test-xtf-amd64-amd64-4:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-amd64-i386-xl-shadow:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-libvirt-pair:xen-boot/src_host:fail:regression
    xen-unstable:test-amd64-i386-libvirt-pair:xen-boot/dst_host:fail:regression
    xen-unstable:test-amd64-i386-migrupgrade:xen-boot/dst_host:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-examine:reboot:fail:regression
    xen-unstable:test-amd64-i386-freebsd10-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-freebsd10-i386:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-livepatch:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-libvirt:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-pvshim:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-pair:xen-boot/src_host:fail:regression
    xen-unstable:test-amd64-i386-pair:xen-boot/dst_host:fail:regression
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:xen-boot:fail:regression
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=87a067fd8f4d4f7c6be02c3d38145115ac542017
X-Osstest-Versions-That:
    xen=e8185c5f01c68f7d29d23a4a91bc1be1ff2cc1ca
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 21 Feb 2021 04:27:45 +0000

flight 159491 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159491/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-xtf-amd64-amd64-5      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-amd64-i386-xl-qemut-ws16-amd64  8 xen-boot          fail REGR. vs. 159475
 test-xtf-amd64-amd64-1      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-debianhvm-amd64  8 xen-boot     fail REGR. vs. 159475
 test-xtf-amd64-amd64-2      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-amd64-i386-qemut-rhel6hvm-intel  8 xen-boot         fail REGR. vs. 159475
 test-amd64-i386-libvirt-xsm   8 xen-boot                 fail REGR. vs. 159475
 test-xtf-amd64-amd64-3      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-amd64-i386-xl-raw        8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-xl-qemut-debianhvm-amd64  8 xen-boot     fail REGR. vs. 159475
 test-amd64-coresched-i386-xl  8 xen-boot                 fail REGR. vs. 159475
 test-xtf-amd64-amd64-4      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-amd64-i386-xl-shadow     8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-libvirt-pair 12 xen-boot/src_host        fail REGR. vs. 159475
 test-amd64-i386-libvirt-pair 13 xen-boot/dst_host        fail REGR. vs. 159475
 test-amd64-i386-migrupgrade  13 xen-boot/dst_host        fail REGR. vs. 159475
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  8 xen-boot  fail REGR. vs. 159475
 test-amd64-i386-xl-xsm        8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-ovmf-amd64  8 xen-boot          fail REGR. vs. 159475
 test-amd64-i386-qemut-rhel6hvm-amd  8 xen-boot           fail REGR. vs. 159475
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 159475
 test-amd64-i386-xl            8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-qemuu-rhel6hvm-amd  8 xen-boot           fail REGR. vs. 159475
 test-amd64-i386-xl-qemut-win7-amd64  8 xen-boot          fail REGR. vs. 159475
 test-amd64-i386-examine       8 reboot                   fail REGR. vs. 159475
 test-amd64-i386-freebsd10-amd64  8 xen-boot              fail REGR. vs. 159475
 test-amd64-i386-freebsd10-i386  8 xen-boot               fail REGR. vs. 159475
 test-amd64-i386-livepatch     8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-libvirt       8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  8 xen-boot  fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 159475
 test-amd64-i386-qemuu-rhel6hvm-intel  8 xen-boot         fail REGR. vs. 159475
 test-amd64-i386-xl-pvshim     8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-ws16-amd64  8 xen-boot          fail REGR. vs. 159475
 test-amd64-i386-pair         12 xen-boot/src_host        fail REGR. vs. 159475
 test-amd64-i386-pair         13 xen-boot/dst_host        fail REGR. vs. 159475
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-win7-amd64  8 xen-boot          fail REGR. vs. 159475

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 159475
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 159475
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 159475
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 159475
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 159475
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 159475
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 159475
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  87a067fd8f4d4f7c6be02c3d38145115ac542017
baseline version:
 xen                  e8185c5f01c68f7d29d23a4a91bc1be1ff2cc1ca

Last test of basis   159475  2021-02-19 06:26:37 Z    1 days
Testing same since   159487  2021-02-20 04:29:29 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Norbert Manthey <nmanthey@amazon.de>
  Rahul Singh <rahul.singh@arm.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    fail    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 305 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Feb 21 06:19:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 21 Feb 2021 06:19:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87403.164602 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDi5F-0006CR-Na; Sun, 21 Feb 2021 06:19:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87403.164602; Sun, 21 Feb 2021 06:19:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDi5F-0006CK-Kj; Sun, 21 Feb 2021 06:19:09 +0000
Received: by outflank-mailman (input) for mailman id 87403;
 Sun, 21 Feb 2021 06:19:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDi5E-0006CC-9e; Sun, 21 Feb 2021 06:19:08 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDi5D-0006ei-W6; Sun, 21 Feb 2021 06:19:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDi5D-0000B5-Nw; Sun, 21 Feb 2021 06:19:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lDi5D-00036m-NO; Sun, 21 Feb 2021 06:19:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=u4ZU0sKG4fd0H2S8QoF+WstCxldpy9G1WQOSH8CCnjY=; b=HP/j3rLQXKi/W04Z9EHGHP7QgL
	T6JT9M8GDbGaCFf1WPIlYpW90vpJzIyuSIm3eG4j+YshE+zzLVWT1toG5KSWE2AE+Qsf5wY5yFsof
	F8UMjQi1SV3lrPTNNhO+Pju2EROcZ5LzgxmUzom4vs35QoRIZHJCgY1Ycj0kSf7y7UfM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159507-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 159507: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=f30aa2ec74a79f34700915b677f46ec476d2362d
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 21 Feb 2021 06:19:07 +0000

flight 159507 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159507/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              f30aa2ec74a79f34700915b677f46ec476d2362d
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  226 days
Failing since        151818  2020-07-11 04:18:52 Z  225 days  218 attempts
Testing same since   159486  2021-02-20 04:18:50 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 43367 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Feb 21 09:09:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 21 Feb 2021 09:09:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87460.164635 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDkjY-0005yO-Dz; Sun, 21 Feb 2021 09:08:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87460.164635; Sun, 21 Feb 2021 09:08:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDkjY-0005yH-Az; Sun, 21 Feb 2021 09:08:56 +0000
Received: by outflank-mailman (input) for mailman id 87460;
 Sun, 21 Feb 2021 09:08:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDkjX-0005y9-KH; Sun, 21 Feb 2021 09:08:55 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDkjX-0001TY-Db; Sun, 21 Feb 2021 09:08:55 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDkjX-0008Aa-0S; Sun, 21 Feb 2021 09:08:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lDkjW-00088F-WD; Sun, 21 Feb 2021 09:08:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=P7AR6MNROjb41XfhB1IvBLZOY0GUC3pBRn3O8+RgIK8=; b=NfAjLmYcQu6CZeG2TXhM92uxMW
	cLqDnIw4xz3BdCwbsjnXY97CSBZqe0F2ErgQP9Y1o+m5Au362we0oVnR6bz8aJknzjBE/gNaNTV+B
	xqtVCjgQg7dkszIFTDYbt1ltFf3s3JByVyPaScx+ulc2Ivc0YP1X73vosE1u8hyWIUD8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159498-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 159498: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=d6798cc01d6edabaa4e326359b69f08d022bf4c7
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 21 Feb 2021 09:08:54 +0000

flight 159498 qemu-mainline real [real]
flight 159512 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/159498/
http://logs.test-lab.xenproject.org/osstest/logs/159512/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                d6798cc01d6edabaa4e326359b69f08d022bf4c7
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  184 days
Failing since        152659  2020-08-21 14:07:39 Z  183 days  355 attempts
Testing same since   159498  2021-02-20 20:38:00 Z    0 days    1 attempts

------------------------------------------------------------
421 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 115685 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Feb 21 09:59:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 21 Feb 2021 09:59:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87473.164651 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDlWX-0002P0-Ej; Sun, 21 Feb 2021 09:59:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87473.164651; Sun, 21 Feb 2021 09:59:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDlWX-0002Ot-Aj; Sun, 21 Feb 2021 09:59:33 +0000
Received: by outflank-mailman (input) for mailman id 87473;
 Sun, 21 Feb 2021 09:59:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDlWV-0002Ol-OZ; Sun, 21 Feb 2021 09:59:31 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDlWV-0002He-GD; Sun, 21 Feb 2021 09:59:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDlWV-0001ZQ-87; Sun, 21 Feb 2021 09:59:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lDlWV-0007eJ-7e; Sun, 21 Feb 2021 09:59:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=c2Ab0zsxEEG6oqtRNLRgbtEJ0yomv6yEF4Ltd2ghaXU=; b=BKTTd0JlxFZy96fUKyyuzo20MD
	cWih0dx6iBDOw9T3jHoZQOBIUc3tjMHvF1pWNvjTnUu1P4S96lhqd31m7p6CCkOZ+xfivTUhifXn4
	mH/Xhi5q3yxPukhgFJVilmHyPa52N+HTUoykePl9KASLRXI1KAgUOW9sbnXc4A4hOjUQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159515-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 159515: all pass - PUSHED
X-Osstest-Versions-This:
    xen=87a067fd8f4d4f7c6be02c3d38145115ac542017
X-Osstest-Versions-That:
    xen=3b1cc15f1931ba56d0ee256fe9bfe65509733b27
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 21 Feb 2021 09:59:31 +0000

flight 159515 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159515/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  87a067fd8f4d4f7c6be02c3d38145115ac542017
baseline version:
 xen                  3b1cc15f1931ba56d0ee256fe9bfe65509733b27

Last test of basis   159441  2021-02-17 09:20:56 Z    4 days
Testing same since   159515  2021-02-21 09:18:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Norbert Manthey <nmanthey@amazon.de>
  Rahul Singh <rahul.singh@arm.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   3b1cc15f19..87a067fd8f  87a067fd8f4d4f7c6be02c3d38145115ac542017 -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Sun Feb 21 10:51:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 21 Feb 2021 10:51:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87494.164671 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDmKD-0007uL-E3; Sun, 21 Feb 2021 10:50:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87494.164671; Sun, 21 Feb 2021 10:50:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDmKD-0007uE-BB; Sun, 21 Feb 2021 10:50:53 +0000
Received: by outflank-mailman (input) for mailman id 87494;
 Sun, 21 Feb 2021 10:50:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDmKB-0007u5-Sk; Sun, 21 Feb 2021 10:50:51 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDmKB-0003Aj-JK; Sun, 21 Feb 2021 10:50:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDmKB-0003hg-9E; Sun, 21 Feb 2021 10:50:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lDmKB-00016l-8l; Sun, 21 Feb 2021 10:50:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=hO/Whg46ZgmS0Qxi1bOAdbCbTa2PtYy8bOECWyv9X48=; b=4Oog/jZJMX84grDtk0k3VU591k
	9M28e5rG7kVu8Gs66uat6qdnMbw5UXuKxu2z50mhTxo6xmzpH6hp5JouPjdDuZ5ps6bMK++J3UsV8
	mFwIqfbAHeudZuhsgdZPNOuwuViVWcfzZkS/x2Z+4CoGodIt6q8jbl/KzMhTy5PTF5eg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159501-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 159501: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-xl-xsm:<job status>:broken:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-install:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-install:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-install:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-localmigrate/x10:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-seattle:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit1:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-examine:reboot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:host-install(5):broken:nonblocking
    linux-linus:test-arm64-arm64-xl:leak-check/basis(11):fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=f40ddce88593482919761f74910f42f4b84c004b
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 21 Feb 2021 10:50:51 +0000

flight 159501 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159501/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm         <job status>                 broken  in 159490
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      12 debian-install fail in 159367 REGR. vs. 152332
 test-arm64-arm64-xl-credit2  12 debian-install fail in 159440 REGR. vs. 152332
 test-arm64-arm64-xl          12 debian-install fail in 159463 REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu  fail in 159490 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-i386-pvgrub 19 guest-localmigrate/x10 fail in 159367 pass in 159501
 test-arm64-arm64-xl-xsm   10 host-ping-check-xen fail in 159413 pass in 159367
 test-arm64-arm64-libvirt-xsm 10 host-ping-check-xen fail in 159413 pass in 159440
 test-arm64-arm64-xl-credit1   8 xen-boot         fail in 159440 pass in 159501
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail in 159440 pass in 159501
 test-arm64-arm64-xl           8 xen-boot         fail in 159490 pass in 159501
 test-arm64-arm64-xl-seattle   8 xen-boot         fail in 159490 pass in 159501
 test-arm64-arm64-xl-xsm       8 xen-boot                   fail pass in 159413
 test-arm64-arm64-libvirt-xsm  8 xen-boot                   fail pass in 159440
 test-arm64-arm64-xl          10 host-ping-check-xen        fail pass in 159463
 test-arm64-arm64-xl-seattle  10 host-ping-check-xen        fail pass in 159463
 test-arm64-arm64-xl-credit1  10 host-ping-check-xen        fail pass in 159490
 test-arm64-arm64-examine      8 reboot                     fail pass in 159490
 test-arm64-arm64-xl-credit2   8 xen-boot                   fail pass in 159490
 test-armhf-armhf-xl-vhd      17 guest-start/debian.repeat  fail pass in 159490

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm   5 host-install(5) broken in 159490 blocked in 152332
 test-arm64-arm64-xl   11 leak-check/basis(11) fail in 159367 blocked in 152332
 test-armhf-armhf-xl-rtds 18 guest-start/debian.repeat fail in 159413 like 152332
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11) fail in 159440 blocked in 152332
 test-arm64-arm64-xl-seattle 11 leak-check/basis(11) fail in 159463 blocked in 152332
 test-arm64-arm64-xl-credit1 11 leak-check/basis(11) fail in 159490 blocked in 152332
 test-arm64-arm64-xl-credit2 11 leak-check/basis(11) fail in 159490 blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                f40ddce88593482919761f74910f42f4b84c004b
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  204 days
Failing since        152366  2020-08-01 20:49:34 Z  203 days  351 attempts
Testing same since   159367  2021-02-15 05:07:31 Z    6 days    8 attempts

------------------------------------------------------------
4578 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-arm64-arm64-xl-xsm broken

Not pushing.

(No revision log; it would be 1033140 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Feb 21 14:33:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 21 Feb 2021 14:33:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87534.164716 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDpnT-00038O-OM; Sun, 21 Feb 2021 14:33:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87534.164716; Sun, 21 Feb 2021 14:33:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDpnT-00038H-Kl; Sun, 21 Feb 2021 14:33:19 +0000
Received: by outflank-mailman (input) for mailman id 87534;
 Sun, 21 Feb 2021 13:34:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zznf=HX=gmail.com=akihiko.odaki@srs-us1.protection.inumbo.net>)
 id 1lDosU-0005yJ-JQ
 for xen-devel@lists.xenproject.org; Sun, 21 Feb 2021 13:34:26 +0000
Received: from mail-pf1-x430.google.com (unknown [2607:f8b0:4864:20::430])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6675a642-83a4-4196-be61-cc09662cd435;
 Sun, 21 Feb 2021 13:34:25 +0000 (UTC)
Received: by mail-pf1-x430.google.com with SMTP id 201so1811508pfw.5
 for <xen-devel@lists.xenproject.org>; Sun, 21 Feb 2021 05:34:25 -0800 (PST)
Received: from localhost.localdomain ([2400:4050:c360:8200:b418:f77:22b4:17c9])
 by smtp.gmail.com with ESMTPSA id 134sm16204899pfc.113.2021.02.21.05.34.22
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Sun, 21 Feb 2021 05:34:24 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6675a642-83a4-4196-be61-cc09662cd435
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=gZAKh0kV5+3TZoFtekzuyQxJDLVOXyNYG3XsThFGRQQ=;
        b=bOyT01PkAVMcGws32dlX2ZJpbbsVNvYuuvmQG7V0VuruvU+EE+YxxxXlApjQgXgyRp
         B3UtLnEKSSXtuekbzfSHaf0jA+YfyXvx/xVR7wXujWYp0JJDAJq6owLJF9YsMY2D6S5s
         ZmsT/u35BB42HK5QzUd7G/jWlsJkxtADVjYY5cC8RwsqbBbaKBkrdGK3qiFUiaWj9sQE
         Kyy2c0DqqoBUBzqY3jUedj+0XeFKi1+k6/f75j7mYev85gzx5sk83XEqvI3lPen3iwUc
         sZHxJFGKdO8v6L7zL19XBy3JSO7dLmIXPCak6JXwMdoL9Vo7sCqPDywvkGRbs4It9TpB
         bDOg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=gZAKh0kV5+3TZoFtekzuyQxJDLVOXyNYG3XsThFGRQQ=;
        b=ZtNgR1mH13tn8HWiPdAprznQxzTG49Ht83+VH1Meftb+nRNhjwUhw7urIihT0ByzBw
         NEoa+ZmkOsPhGALahFLWFitd4q9IZpZQKlwG1D6wCuJ2sn5wWUWduZh5fJ1Rl/65tz2i
         gtIqs9DrKZYVEWqLQ5jw4B8LQvYEKveex5Iy+4hw3TiaLwDK83FaoqoXFiKy/qJjevMH
         SelHYik//7YPARjjaBacleEpBbwHnDyoSEPYtubrkplMZSWzrfjHGjMdiKubPfG9jrOL
         TclRfwZoummUC9f9dwaCOUqi958lgf9X+vNAz1GcASHXIcvvdawP7I0l1IyYPsW7vLFY
         GiBQ==
X-Gm-Message-State: AOAM530v4+YTXOkUFcSU3gGvLYlKry2FuJy13znzDrQnYfI0BpFWtOPm
	u+0ofpIBLi+GBnX9MkrnQws=
X-Google-Smtp-Source: ABdhPJz50uxcbgxnB7g1S2MaX9c71eoHuxrr1egAiaqVjAm93b+9QTP+nhRjiWB5zbBhw5kkMGYGvQ==
X-Received: by 2002:aa7:8742:0:b029:1ed:4d14:7513 with SMTP id g2-20020aa787420000b02901ed4d147513mr10165627pfo.66.1613914464692;
        Sun, 21 Feb 2021 05:34:24 -0800 (PST)
From: Akihiko Odaki <akihiko.odaki@gmail.com>
To: 
Cc: qemu-devel@nongnu.org,
	xen-devel@lists.xenproject.org,
	Gerd Hoffmann <kraxel@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Akihiko Odaki <akihiko.odaki@gmail.com>
Subject: [PATCH] virtio-gpu: Respect graphics update interval for EDID
Date: Sun, 21 Feb 2021 22:34:14 +0900
Message-Id: <20210221133414.7262-1-akihiko.odaki@gmail.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This change introduces an additional member, refresh_rate, to
qemu_edid_info in include/hw/display/edid.h.

This change also isolates the graphics update interval from the
display update interval. The guest will update the frame buffer
at the graphics update interval, but displays can be updated at a
dynamic interval, for example to save update costs aggressively
(VNC) or to respond to user-generated events (SDL).
It stabilizes the graphics update interval and prevents the guest
from being confused.

Signed-off-by: Akihiko Odaki <akihiko.odaki@gmail.com>
---
 hw/display/edid-generate.c     |  7 ++++---
 hw/display/virtio-gpu-base.c   | 11 +++++++++++
 hw/display/virtio-gpu.c        |  1 +
 hw/display/xenfb.c             |  2 +-
 include/hw/display/edid.h      |  1 +
 include/hw/virtio/virtio-gpu.h |  2 ++
 include/ui/console.h           |  3 ++-
 ui/console.c                   | 21 ++++++++++++++++-----
 ui/gtk.c                       |  2 +-
 9 files changed, 39 insertions(+), 11 deletions(-)

diff --git a/hw/display/edid-generate.c b/hw/display/edid-generate.c
index 1665b7cbb29..7317e68506b 100644
--- a/hw/display/edid-generate.c
+++ b/hw/display/edid-generate.c
@@ -203,7 +203,7 @@ static void edid_desc_dummy(uint8_t *desc)
     edid_desc_type(desc, 0x10);
 }
 
-static void edid_desc_timing(uint8_t *desc,
+static void edid_desc_timing(uint8_t *desc, uint32_t refresh_rate,
                              uint32_t xres, uint32_t yres,
                              uint32_t xmm, uint32_t ymm)
 {
@@ -216,7 +216,7 @@ static void edid_desc_timing(uint8_t *desc,
     uint32_t ysync  = yres *  5 / 1000;
     uint32_t yblank = yres * 35 / 1000;
 
-    uint32_t clock  = 75 * (xres + xblank) * (yres + yblank);
+    uint32_t clock  = refresh_rate * (xres + xblank) * (yres + yblank);
 
     stl_le_p(desc, clock / 10000);
 
@@ -303,6 +303,7 @@ void qemu_edid_generate(uint8_t *edid, size_t size,
     uint8_t *xtra3 = NULL;
     uint8_t *dta = NULL;
     uint32_t width_mm, height_mm;
+    uint32_t refresh_rate = info->refresh_rate ? info->refresh_rate : 75;
     uint32_t dpi = 100; /* if no width_mm/height_mm */
 
     /* =============== set defaults  =============== */
@@ -400,7 +401,7 @@ void qemu_edid_generate(uint8_t *edid, size_t size,
 
     /* =============== descriptor blocks =============== */
 
-    edid_desc_timing(edid + desc, info->prefx, info->prefy,
+    edid_desc_timing(edid + desc, refresh_rate, info->prefx, info->prefy,
                      width_mm, height_mm);
     desc += 18;
 
diff --git a/hw/display/virtio-gpu-base.c b/hw/display/virtio-gpu-base.c
index 4a57350917c..41b08b2e944 100644
--- a/hw/display/virtio-gpu-base.c
+++ b/hw/display/virtio-gpu-base.c
@@ -96,6 +96,16 @@ static int virtio_gpu_ui_info(void *opaque, uint32_t idx, QemuUIInfo *info)
     return 0;
 }
 
+static void virtio_gpu_update_display_interval(void *opaque, uint64_t interval)
+{
+    VirtIOGPUBase *g = opaque;
+
+    g->refresh_rate = 1000 / interval;
+
+    /* send event to guest */
+    virtio_gpu_notify_event(g, VIRTIO_GPU_EVENT_DISPLAY);
+}
+
 static void
 virtio_gpu_gl_flushed(void *opaque)
 {
@@ -142,6 +152,7 @@ static const GraphicHwOps virtio_gpu_ops = {
     .invalidate = virtio_gpu_invalidate_display,
     .gfx_update = virtio_gpu_update_display,
     .text_update = virtio_gpu_text_update,
+    .gfx_update_interval = virtio_gpu_update_display_interval,
     .ui_info = virtio_gpu_ui_info,
     .gl_block = virtio_gpu_gl_block,
     .gl_flushed = virtio_gpu_gl_flushed,
diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index 2e4a9822b6a..64fdc5a6e89 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -216,6 +216,7 @@ virtio_gpu_generate_edid(VirtIOGPU *g, int scanout,
         .height_mm = b->req_state[scanout].height_mm,
         .prefx = b->req_state[scanout].width,
         .prefy = b->req_state[scanout].height,
+        .refresh_rate = b->refresh_rate,
     };
 
     edid->size = cpu_to_le32(sizeof(edid->edid));
diff --git a/hw/display/xenfb.c b/hw/display/xenfb.c
index 838260b6ad1..4229f9a42df 100644
--- a/hw/display/xenfb.c
+++ b/hw/display/xenfb.c
@@ -983,5 +983,5 @@ struct XenDevOps xen_framebuffer_ops = {
 static const GraphicHwOps xenfb_ops = {
     .invalidate  = xenfb_invalidate,
     .gfx_update  = xenfb_update,
-    .update_interval = xenfb_update_interval,
+    .gfx_update_interval = xenfb_update_interval,
 };
diff --git a/include/hw/display/edid.h b/include/hw/display/edid.h
index 1f8fc9b3750..9617705cc0a 100644
--- a/include/hw/display/edid.h
+++ b/include/hw/display/edid.h
@@ -11,6 +11,7 @@ typedef struct qemu_edid_info {
     uint32_t    prefy;
     uint32_t    maxx;
     uint32_t    maxy;
+    uint32_t    refresh_rate;
 } qemu_edid_info;
 
 void qemu_edid_generate(uint8_t *edid, size_t size,
diff --git a/include/hw/virtio/virtio-gpu.h b/include/hw/virtio/virtio-gpu.h
index fae149235c5..9d1a547ba10 100644
--- a/include/hw/virtio/virtio-gpu.h
+++ b/include/hw/virtio/virtio-gpu.h
@@ -116,6 +116,8 @@ struct VirtIOGPUBase {
 
     int enabled_output_bitmask;
     struct virtio_gpu_requested_state req_state[VIRTIO_GPU_MAX_SCANOUTS];
+
+    uint32_t refresh_rate;
 };
 
 struct VirtIOGPUBaseClass {
diff --git a/include/ui/console.h b/include/ui/console.h
index d30e972d0b5..0bcb610b80a 100644
--- a/include/ui/console.h
+++ b/include/ui/console.h
@@ -246,6 +246,7 @@ typedef struct DisplayChangeListenerOps {
 } DisplayChangeListenerOps;
 
 struct DisplayChangeListener {
+    uint64_t gfx_update_interval;
     uint64_t update_interval;
     const DisplayChangeListenerOps *ops;
     DisplayState *ds;
@@ -384,7 +385,7 @@ typedef struct GraphicHwOps {
     void (*gfx_update)(void *opaque);
     bool gfx_update_async; /* if true, calls graphic_hw_update_done() */
     void (*text_update)(void *opaque, console_ch_t *text);
-    void (*update_interval)(void *opaque, uint64_t interval);
+    void (*gfx_update_interval)(void *opaque, uint64_t interval);
     int (*ui_info)(void *opaque, uint32_t head, QemuUIInfo *info);
     void (*gl_block)(void *opaque, bool block);
     void (*gl_flushed)(void *opaque);
diff --git a/ui/console.c b/ui/console.c
index c5d11bc7017..928fb009db1 100644
--- a/ui/console.c
+++ b/ui/console.c
@@ -176,6 +176,7 @@ struct QemuConsole {
 struct DisplayState {
     QEMUTimer *gui_timer;
     uint64_t last_update;
+    uint64_t gfx_update_interval;
     uint64_t update_interval;
     bool refreshing;
     bool have_gfx;
@@ -200,6 +201,7 @@ static void text_console_update_cursor(void *opaque);
 static void gui_update(void *opaque)
 {
     uint64_t interval = GUI_REFRESH_INTERVAL_IDLE;
+    uint64_t gfx_interval = GUI_REFRESH_INTERVAL_DEFAULT;
     uint64_t dcl_interval;
     DisplayState *ds = opaque;
     DisplayChangeListener *dcl;
@@ -209,20 +211,29 @@ static void gui_update(void *opaque)
     dpy_refresh(ds);
     ds->refreshing = false;
 
+    QLIST_FOREACH(dcl, &ds->listeners, next) {
+        if (dcl->gfx_update_interval &&
+            gfx_interval > dcl->gfx_update_interval) {
+            gfx_interval = dcl->gfx_update_interval;
+        }
+    }
     QLIST_FOREACH(dcl, &ds->listeners, next) {
         dcl_interval = dcl->update_interval ?
-            dcl->update_interval : GUI_REFRESH_INTERVAL_DEFAULT;
+            dcl->update_interval : gfx_interval;
         if (interval > dcl_interval) {
             interval = dcl_interval;
         }
     }
-    if (ds->update_interval != interval) {
-        ds->update_interval = interval;
+    if (ds->gfx_update_interval != gfx_interval) {
+        ds->gfx_update_interval = gfx_interval;
         QTAILQ_FOREACH(con, &consoles, next) {
-            if (con->hw_ops->update_interval) {
-                con->hw_ops->update_interval(con->hw, interval);
+            if (con->hw_ops->gfx_update_interval) {
+                con->hw_ops->gfx_update_interval(con->hw, gfx_interval);
             }
         }
+    }
+    if (ds->update_interval != interval) {
+        ds->update_interval = interval;
         trace_console_refresh(interval);
     }
     ds->last_update = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
diff --git a/ui/gtk.c b/ui/gtk.c
index 79dc2401203..53f68b0bdaf 100644
--- a/ui/gtk.c
+++ b/ui/gtk.c
@@ -810,7 +810,7 @@ static gboolean gd_draw_event(GtkWidget *widget, cairo_t *cr, void *opaque)
         return FALSE;
     }
 
-    vc->gfx.dcl.update_interval =
+    vc->gfx.dcl.gfx_update_interval =
         gd_monitor_update_interval(vc->window ? vc->window : s->window);
 
     fbw = surface_width(vc->gfx.ds);
-- 
2.24.3 (Apple Git-128)
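The two-pass interval selection the ui/console.c hunk introduces can be sketched as a standalone function. This is only an illustration of the patch's logic: the `Listener` struct, the `pick_interval` name, and the constant values are made-up stand-ins, not QEMU's actual types or defaults.

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative values; QEMU defines its own GUI_REFRESH_INTERVAL_* constants. */
#define GUI_REFRESH_INTERVAL_DEFAULT 30    /* ms */
#define GUI_REFRESH_INTERVAL_IDLE    3000  /* ms */

typedef struct Listener {
    uint64_t gfx_update_interval; /* 0 = no graphics-rate request */
    uint64_t update_interval;     /* 0 = no per-listener request */
} Listener;

/* Mirrors the reworked gui_update(): pass 1 finds the smallest requested
 * graphics interval (falling back to the default); pass 2 derives the
 * overall timer interval, using the pass-1 result as the per-listener
 * fallback instead of the fixed default. */
static uint64_t pick_interval(const Listener *l, size_t n, uint64_t *gfx_out)
{
    uint64_t gfx = GUI_REFRESH_INTERVAL_DEFAULT;
    uint64_t interval = GUI_REFRESH_INTERVAL_IDLE;
    size_t i;

    for (i = 0; i < n; i++) {
        if (l[i].gfx_update_interval && gfx > l[i].gfx_update_interval) {
            gfx = l[i].gfx_update_interval;
        }
    }
    for (i = 0; i < n; i++) {
        uint64_t d = l[i].update_interval ? l[i].update_interval : gfx;
        if (interval > d) {
            interval = d;
        }
    }
    *gfx_out = gfx;
    return interval;
}
```

The point of the split is visible here: a listener that only sets `gfx_update_interval` (as gtk.c now does) lowers the fallback used for every listener's timer computation, while `update_interval` keeps its old per-listener meaning.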



From xen-devel-bounces@lists.xenproject.org Sun Feb 21 16:54:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 21 Feb 2021 16:54:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87630.164763 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDrzu-0008MJ-G6; Sun, 21 Feb 2021 16:54:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87630.164763; Sun, 21 Feb 2021 16:54:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDrzu-0008MC-DA; Sun, 21 Feb 2021 16:54:18 +0000
Received: by outflank-mailman (input) for mailman id 87630;
 Sun, 21 Feb 2021 16:54:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDrzs-0008M4-Qd; Sun, 21 Feb 2021 16:54:16 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDrzs-00017N-Fs; Sun, 21 Feb 2021 16:54:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDrzs-00030t-9p; Sun, 21 Feb 2021 16:54:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lDrzs-0005YN-9I; Sun, 21 Feb 2021 16:54:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=iNBXypuHgI9SL+AChZ59/RvtbTeZGJLxKkEZt1HOC6M=; b=g4+YpQ8kZ8R07LlfeYdMW4Nx5J
	gK5ZrInzxi1oXokoLzdHYBzpz8wS9Xz3PwcgyKwBySiqXuBfItoienyPi63kiOEZN3V5qdg58+iFG
	bFYc49zTv7aBAmFtoWiqJCj+e3cfJMIL+yxBN3SsB3Y/nhskaqfVnKN9YwpP4AqVchzk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [xen-unstable bisection] complete test-xtf-amd64-amd64-5
Message-Id: <E1lDrzs-0005YN-9I@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 21 Feb 2021 16:54:16 +0000

branch xen-unstable
xenbranch xen-unstable
job test-xtf-amd64-amd64-5
testid xtf/test-pv32pae-selftest

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Tree: xtf git://xenbits.xen.org/xtf.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  4dc1815991420b809ce18dddfdf9c0af48944204
  Bug not present: 2d824791504f4119f04f95bafffec2e37d319c25
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/159524/


  commit 4dc1815991420b809ce18dddfdf9c0af48944204
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Fri Feb 19 17:19:56 2021 +0100
  
      x86/PV: harden guest memory accesses against speculative abuse
      
      Inspired by
      https://lore.kernel.org/lkml/f12e7d3cecf41b2c29734ea45a393be21d4a8058.1597848273.git.jpoimboe@redhat.com/
      and prior work in that area of x86 Linux, suppress speculation with
      guest specified pointer values by suitably masking the addresses to
      non-canonical space in case they fall into Xen's virtual address range.
      
      Introduce a new Kconfig control.
      
      Note that it is necessary in such code to avoid using "m" kind operands:
      If we didn't, there would be no guarantee that the register passed to
      guest_access_mask_ptr is also the (base) one used for the memory access.
      
      As a minor unrelated change in get_unsafe_asm() the unnecessary "itype"
      parameter gets dropped and the XOR on the fixup path gets changed to be
      a 32-bit one in all cases: This way we avoid pointless REX.W or operand
      size overrides, or writes to partial registers.
      
      Requested-by: Andrew Cooper <andrew.cooper3@citrix.com>
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
      Release-Acked-by: Ian Jackson <iwj@xenproject.org>
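
The masking technique the commit message describes can be illustrated in portable C. Everything below is a sketch: the address constants are placeholders, not Xen's real virtual-address layout, and Xen implements this in assembly (the guest_access_mask_ptr macro) precisely so the compiler cannot reintroduce a branch or use a different register for the access.

```c
#include <stdint.h>

/* Illustrative placeholders for Xen's virtual address window. */
#define HV_VIRT_START 0xffff800000000000ULL
#define HV_VIRT_END   0xffff880000000000ULL

/* If a guest-supplied pointer falls inside the hypervisor's range,
 * clear bit 63 so the address becomes non-canonical: any dereference,
 * architectural or speculative, then faults instead of reading
 * hypervisor memory. Addresses outside the window pass through
 * unchanged. */
static uint64_t mask_guest_ptr(uint64_t p)
{
    /* 1 when p is inside the hypervisor window, else 0 */
    uint64_t in_hv = (uint64_t)((p >= HV_VIRT_START) & (p < HV_VIRT_END));
    /* all-ones when in_hv, else zero -- computed arithmetically,
     * with no data-dependent branch */
    uint64_t mask = 0 - in_hv;
    /* a high-half address with bit 63 cleared is non-canonical */
    return p & ~(mask & (1ULL << 63));
}
```

This also shows why the commit forbids "m" operand constraints in the real code: the masked value must be the very register used as the base of the subsequent memory access, a guarantee C cannot express.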


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable/test-xtf-amd64-amd64-5.xtf--test-pv32pae-selftest.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-unstable/test-xtf-amd64-amd64-5.xtf--test-pv32pae-selftest --summary-out=tmp/159524.bisection-summary --basis-template=159475 --blessings=real,real-bisect,real-retry xen-unstable test-xtf-amd64-amd64-5 xtf/test-pv32pae-selftest
Searching for failure / basis pass:
 159491 fail [host=albana0] / 159475 [host=godello0] 159453 [host=fiano1] 159424 [host=albana1] 159396 [host=chardonnay1] 159362 [host=albana1] 159335 [host=chardonnay1] 159315 [host=godello1] 159202 [host=huxelrebe1] 159134 [host=fiano1] 159036 [host=elbling1] 159013 [host=pinot0] 158957 [host=godello1] 158922 ok.
Failure / basis pass flights: 159491 / 158922
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Tree: xtf git://xenbits.xen.org/xtf.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 87a067fd8f4d4f7c6be02c3d38145115ac542017 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 9dc687f155a57216b83b17f9cde55dd43e06b0cd 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://xenbits.xen.org/qemu-xen.git#7ea428895af2840d85c524f0bd11a38aac308308-7ea428895af2840d85c524f0bd11a38aac308308 git://xenbits.xen.org/xen.git#9dc687f155a57216b83b17f9cde55dd43e06b0cd-87a067fd8f4d4f7c6be02c3d38145115ac542017 git://xenbits.xen.org/xtf.git#8ab15139728a8efd3ebbb60beb16a958a6a93fa1-8ab15139728a8efd3ebbb60beb16a958a6a93fa1
Loaded 5001 nodes in revision graph
Searching for test results:
 158811 [host=pinot1]
 158835 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 9dc687f155a57216b83b17f9cde55dd43e06b0cd 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 158873 [host=chardonnay1]
 158922 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 9dc687f155a57216b83b17f9cde55dd43e06b0cd 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 158957 [host=godello1]
 159013 [host=pinot0]
 159036 [host=elbling1]
 159134 [host=fiano1]
 159202 [host=huxelrebe1]
 159315 [host=godello1]
 159335 [host=chardonnay1]
 159362 [host=albana1]
 159396 [host=chardonnay1]
 159424 [host=albana1]
 159453 [host=fiano1]
 159475 [host=godello0]
 159496 [host=godello1]
 159487 [host=godello1]
 159492 [host=godello1]
 159494 [host=godello1]
 159497 [host=godello1]
 159499 [host=godello1]
 159500 [host=godello1]
 159502 [host=godello1]
 159503 [host=godello1]
 159504 [host=godello1]
 159505 [host=godello1]
 159491 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 87a067fd8f4d4f7c6be02c3d38145115ac542017 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159506 [host=godello1]
 159509 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 9dc687f155a57216b83b17f9cde55dd43e06b0cd 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159510 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 87a067fd8f4d4f7c6be02c3d38145115ac542017 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159511 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 52280d9492ee486be735859ef496220534c71905 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159513 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 d670ef3401b91d04c58d72cd8ce5579b4fa900d8 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159516 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 2d824791504f4119f04f95bafffec2e37d319c25 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159518 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 336fbbdf61562e5ae1112f24bc90c1164adf2144 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159519 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 f954a1bf5f74ad6edce361d1bf1a29137ff374e8 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159520 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 4dc1815991420b809ce18dddfdf9c0af48944204 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159521 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 2d824791504f4119f04f95bafffec2e37d319c25 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159522 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 4dc1815991420b809ce18dddfdf9c0af48944204 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159523 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 2d824791504f4119f04f95bafffec2e37d319c25 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159524 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 4dc1815991420b809ce18dddfdf9c0af48944204 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
Searching for interesting versions
 Result found: flight 158835 (pass), for basis pass
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 2d824791504f4119f04f95bafffec2e37d319c25 8ab15139728a8efd3ebbb60beb16a958a6a93fa1, results HASH(0x56076a7b42f8) HASH(0x56076a7ad3b8) HASH(0x56076a7a9380) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 d670ef3401b91d04c58d72cd8ce5579b4fa900d8 8ab15139728a8efd3ebbb60beb16a958a6a93fa1, results HASH(0x56076a7a8f00) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 52280d9492ee486be735859ef496220534c71905 8ab15139728a8efd3ebbb60beb16a958a6a93fa1, results HASH(0x56076a7a9080)
 Result found: flight 159491 (fail), for basis failure (at ancestor ~75)
 Repro found: flight 159509 (pass), for basis pass
 Repro found: flight 159510 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 2d824791504f4119f04f95bafffec2e37d319c25 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
No revisions left to test, checking graph state.
 Result found: flight 159516 (pass), for last pass
 Result found: flight 159520 (fail), for first failure
 Repro found: flight 159521 (pass), for last pass
 Repro found: flight 159522 (fail), for first failure
 Repro found: flight 159523 (pass), for last pass
 Repro found: flight 159524 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  4dc1815991420b809ce18dddfdf9c0af48944204
  Bug not present: 2d824791504f4119f04f95bafffec2e37d319c25
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/159524/


  commit 4dc1815991420b809ce18dddfdf9c0af48944204
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Fri Feb 19 17:19:56 2021 +0100
  
      x86/PV: harden guest memory accesses against speculative abuse
      
      Inspired by
      https://lore.kernel.org/lkml/f12e7d3cecf41b2c29734ea45a393be21d4a8058.1597848273.git.jpoimboe@redhat.com/
      and prior work in that area of x86 Linux, suppress speculation with
      guest specified pointer values by suitably masking the addresses to
      non-canonical space in case they fall into Xen's virtual address range.
      
      Introduce a new Kconfig control.
      
      Note that it is necessary in such code to avoid using "m" kind operands:
      If we didn't, there would be no guarantee that the register passed to
      guest_access_mask_ptr is also the (base) one used for the memory access.
      
      As a minor unrelated change in get_unsafe_asm() the unnecessary "itype"
      parameter gets dropped and the XOR on the fixup path gets changed to be
      a 32-bit one in all cases: This way we avoid pointless REX.W or operand
      size overrides, or writes to partial registers.
      
      Requested-by: Andrew Cooper <andrew.cooper3@citrix.com>
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
      Release-Acked-by: Ian Jackson <iwj@xenproject.org>

Revision graph left in /home/logs/results/bisect/xen-unstable/test-xtf-amd64-amd64-5.xtf--test-pv32pae-selftest.{dot,ps,png,html,svg}.
----------------------------------------
159524: tolerable all pass

flight 159524 xen-unstable real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/159524/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-xtf-amd64-amd64-5     19 xtf/test-pv32pae-selftest fail baseline untested


jobs:
 test-xtf-amd64-amd64-5                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Sun Feb 21 18:04:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 21 Feb 2021 18:04:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87645.164782 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDt5a-0006Sy-VP; Sun, 21 Feb 2021 18:04:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87645.164782; Sun, 21 Feb 2021 18:04:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDt5a-0006Sr-SE; Sun, 21 Feb 2021 18:04:14 +0000
Received: by outflank-mailman (input) for mailman id 87645;
 Sun, 21 Feb 2021 18:04:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDt5Z-0006Sj-Bd; Sun, 21 Feb 2021 18:04:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDt5Z-0002K3-3h; Sun, 21 Feb 2021 18:04:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDt5Y-0005Lq-Qd; Sun, 21 Feb 2021 18:04:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lDt5Y-0002jP-Q3; Sun, 21 Feb 2021 18:04:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=GK+PlrvgjCOrKOYsFy7Q5w6J+nD/JBD2PJoU5Y6FEGI=; b=oaVDSOGCDZydwNIXQFMqxDAEbj
	NCGMJ1o2PVEfRae5gj4ezCYRUKXAvjbq+ihuukm9BJ2jdKkMmaAHI6qQtsw1Q+R8ETIBHv3HvjJoq
	15SVeVcpsM/HqhSzMWEJxpBY4RgPae0B1KGk9NGLnDxQHzeW8PakbPCsMYrAsSkrhNZw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159508-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 159508: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-xtf-amd64-amd64-5:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    xen-unstable:test-xtf-amd64-amd64-1:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-xtf-amd64-amd64-2:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-intel:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-libvirt-xsm:xen-boot:fail:regression
    xen-unstable:test-xtf-amd64-amd64-3:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-coresched-i386-xl:xen-boot:fail:regression
    xen-unstable:test-xtf-amd64-amd64-4:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-amd64-i386-xl-shadow:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-raw:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-libvirt-pair:xen-boot/src_host:fail:regression
    xen-unstable:test-amd64-i386-libvirt-pair:xen-boot/dst_host:fail:regression
    xen-unstable:test-amd64-i386-migrupgrade:xen-boot/dst_host:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-examine:reboot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-freebsd10-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-freebsd10-i386:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-libvirt:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-livepatch:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-pvshim:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-pair:xen-boot/src_host:fail:regression
    xen-unstable:test-amd64-i386-pair:xen-boot/dst_host:fail:regression
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:xen-boot:fail:regression
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=87a067fd8f4d4f7c6be02c3d38145115ac542017
X-Osstest-Versions-That:
    xen=e8185c5f01c68f7d29d23a4a91bc1be1ff2cc1ca
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 21 Feb 2021 18:04:12 +0000

flight 159508 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159508/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-xtf-amd64-amd64-5      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-amd64-i386-xl-qemut-ws16-amd64  8 xen-boot          fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-debianhvm-amd64  8 xen-boot     fail REGR. vs. 159475
 test-xtf-amd64-amd64-1      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-xtf-amd64-amd64-2      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-amd64-i386-qemut-rhel6hvm-intel  8 xen-boot         fail REGR. vs. 159475
 test-amd64-i386-libvirt-xsm   8 xen-boot                 fail REGR. vs. 159475
 test-xtf-amd64-amd64-3      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-amd64-i386-xl-qemut-debianhvm-amd64  8 xen-boot     fail REGR. vs. 159475
 test-amd64-coresched-i386-xl  8 xen-boot                 fail REGR. vs. 159475
 test-xtf-amd64-amd64-4      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-amd64-i386-xl-shadow     8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-xl-raw        8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-libvirt-pair 12 xen-boot/src_host        fail REGR. vs. 159475
 test-amd64-i386-libvirt-pair 13 xen-boot/dst_host        fail REGR. vs. 159475
 test-amd64-i386-migrupgrade  13 xen-boot/dst_host        fail REGR. vs. 159475
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  8 xen-boot  fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-ovmf-amd64  8 xen-boot          fail REGR. vs. 159475
 test-amd64-i386-xl-xsm        8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-qemut-rhel6hvm-amd  8 xen-boot           fail REGR. vs. 159475
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 159475
 test-amd64-i386-xl            8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 159475
 test-amd64-i386-qemuu-rhel6hvm-amd  8 xen-boot           fail REGR. vs. 159475
 test-amd64-i386-examine       8 reboot                   fail REGR. vs. 159475
 test-amd64-i386-xl-qemut-win7-amd64  8 xen-boot          fail REGR. vs. 159475
 test-amd64-i386-freebsd10-amd64  8 xen-boot              fail REGR. vs. 159475
 test-amd64-i386-freebsd10-i386  8 xen-boot               fail REGR. vs. 159475
 test-amd64-i386-libvirt       8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 159475
 test-amd64-i386-livepatch     8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  8 xen-boot  fail REGR. vs. 159475
 test-amd64-i386-qemuu-rhel6hvm-intel  8 xen-boot         fail REGR. vs. 159475
 test-amd64-i386-xl-pvshim     8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-ws16-amd64  8 xen-boot          fail REGR. vs. 159475
 test-amd64-i386-pair         12 xen-boot/src_host        fail REGR. vs. 159475
 test-amd64-i386-pair         13 xen-boot/dst_host        fail REGR. vs. 159475
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-win7-amd64  8 xen-boot          fail REGR. vs. 159475

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 159475
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 159475
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 159475
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 159475
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 159475
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 159475
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 159475
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  87a067fd8f4d4f7c6be02c3d38145115ac542017
baseline version:
 xen                  e8185c5f01c68f7d29d23a4a91bc1be1ff2cc1ca

Last test of basis   159475  2021-02-19 06:26:37 Z    2 days
Testing same since   159487  2021-02-20 04:29:29 Z    1 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Norbert Manthey <nmanthey@amazon.de>
  Rahul Singh <rahul.singh@arm.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    fail    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 305 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Feb 21 21:19:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 21 Feb 2021 21:19:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87686.164827 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDw8F-0007IJ-UN; Sun, 21 Feb 2021 21:19:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87686.164827; Sun, 21 Feb 2021 21:19:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDw8F-0007IC-RH; Sun, 21 Feb 2021 21:19:11 +0000
Received: by outflank-mailman (input) for mailman id 87686;
 Sun, 21 Feb 2021 21:19:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDw8F-0007I4-0j; Sun, 21 Feb 2021 21:19:11 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDw8E-0005XC-Oa; Sun, 21 Feb 2021 21:19:10 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDw8E-0004Ja-Dl; Sun, 21 Feb 2021 21:19:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lDw8E-0001Sm-DD; Sun, 21 Feb 2021 21:19:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Hf14++COWam+0lsYjvRpDQGts5sWag3cxUp7hZLooCY=; b=XLzWeqx9no9XqxD59icyXGqlFe
	+7jAursWv71W+ScSJVhhDpPwzFI7dy65/U+0uUjN6+nfZowpKYW1f8q5hndjIvvFVP6MV/cn7waT4
	OB20TXdDqmXlGZjzrPpZnaAxPXBgqEKeCT2nLHDoSizVoxEeK/cGxNJ6B1oVTNjreDrc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159514-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 159514: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=d6798cc01d6edabaa4e326359b69f08d022bf4c7
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 21 Feb 2021 21:19:10 +0000

flight 159514 qemu-mainline real [real]
flight 159529 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/159514/
http://logs.test-lab.xenproject.org/osstest/logs/159529/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                d6798cc01d6edabaa4e326359b69f08d022bf4c7
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  185 days
Failing since        152659  2020-08-21 14:07:39 Z  184 days  356 attempts
Testing same since   159498  2021-02-20 20:38:00 Z    1 days    2 attempts

------------------------------------------------------------
421 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 115685 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Feb 21 22:02:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 21 Feb 2021 22:02:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87695.164844 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDwni-0003Kq-3y; Sun, 21 Feb 2021 22:02:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87695.164844; Sun, 21 Feb 2021 22:02:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDwni-0003Kj-16; Sun, 21 Feb 2021 22:02:02 +0000
Received: by outflank-mailman (input) for mailman id 87695;
 Sun, 21 Feb 2021 22:02:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=A4AH=HX=kernel.org=pr-tracker-bot@srs-us1.protection.inumbo.net>)
 id 1lDwng-0003Ke-VS
 for xen-devel@lists.xenproject.org; Sun, 21 Feb 2021 22:02:01 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d68e9947-57e3-45b7-aea5-19a6dbbe5def;
 Sun, 21 Feb 2021 22:01:58 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPS id AE06364ED7;
 Sun, 21 Feb 2021 22:01:57 +0000 (UTC)
Received: from pdx-korg-docbuild-2.ci.codeaurora.org (localhost.localdomain
 [127.0.0.1])
 by pdx-korg-docbuild-2.ci.codeaurora.org (Postfix) with ESMTP id AA07C60191;
 Sun, 21 Feb 2021 22:01:57 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d68e9947-57e3-45b7-aea5-19a6dbbe5def
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1613944917;
	bh=mK3C99UKZixS/nnfE5aagPIFw7iVd/1vCpSt8FauGUc=;
	h=Subject:From:In-Reply-To:References:Date:To:Cc:From;
	b=VXG4inXsdwoOIuUytoDO6GsuKxYBT2e20sD4yHWdXpya6idATdol9dAqJ4oLIvn4R
	 arezUGuQ86km5mHbF6sWtmM5jT4EnJdRm4o2OppoFUyt/Fx1ILenikNVEWOVn/xj+R
	 Q351H+xwReoQZEArvwmof2fcBDs6aTcL1+CirYJ+J0DXZqc5rZvj4N/+O2vZvXG3Bp
	 YJSPUkXxvL1AbsMWuZEzX1mgAFLtgK1ZgigLEDgE18nA2ce6aUpADIu5pQ+WI26uE0
	 pyQhzO73bajhD3fLP1+lEfXLMz33trXw9wihp8G7Ruv+bWC4owYc0FVYC2XvIC0ijZ
	 mNKOHPX+DfpeQ==
Subject: Re: [GIT PULL] xen: branch for v5.12-rc1
From: pr-tracker-bot@kernel.org
In-Reply-To: <20210216124015.28923-1-jgross@suse.com>
References: <20210216124015.28923-1-jgross@suse.com>
X-PR-Tracked-List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
X-PR-Tracked-Message-Id: <20210216124015.28923-1-jgross@suse.com>
X-PR-Tracked-Remote: git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.12-rc1-tag
X-PR-Tracked-Commit-Id: 871997bc9e423f05c7da7c9178e62dde5df2a7f8
X-PR-Merge-Tree: torvalds/linux.git
X-PR-Merge-Refname: refs/heads/master
X-PR-Merge-Commit-Id: 4a037ad5d115b2cc79a5071a7854475f365476fa
Message-Id: <161394491769.8676.1427666132870005356.pr-tracker-bot@kernel.org>
Date: Sun, 21 Feb 2021 22:01:57 +0000
To: Juergen Gross <jgross@suse.com>
Cc: torvalds@linux-foundation.org, linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com

The pull request you sent on Tue, 16 Feb 2021 13:40:15 +0100:

> git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.12-rc1-tag

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/4a037ad5d115b2cc79a5071a7854475f365476fa

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/prtracker.html


From xen-devel-bounces@lists.xenproject.org Sun Feb 21 23:51:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 21 Feb 2021 23:51:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87710.164866 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDyVA-0004rz-GQ; Sun, 21 Feb 2021 23:51:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87710.164866; Sun, 21 Feb 2021 23:51:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lDyVA-0004rs-DK; Sun, 21 Feb 2021 23:51:00 +0000
Received: by outflank-mailman (input) for mailman id 87710;
 Sun, 21 Feb 2021 23:50:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDyV8-0004rk-Jp; Sun, 21 Feb 2021 23:50:58 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDyV8-0007wc-9r; Sun, 21 Feb 2021 23:50:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lDyV7-0003JF-WA; Sun, 21 Feb 2021 23:50:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lDyV7-0007OJ-VT; Sun, 21 Feb 2021 23:50:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=8f7Yr1zKUkaQcoz1LTuCjDQdwS9GGREFJEKLlTaf6E8=; b=xnpXbIn4HeXTJh8iLWGF45L1VT
	chSwU012F9lGBZCg3pkTBz63243UWpEO2Asg7oVpisWRRlgwLgau0mh8UylU9ftXzS7qDizKusukN
	FVuKsvFFtTLCjP6B8cIuBMSHcIZjjKCiL8K6LPyMWyIXp4UuL7vcQg43KxM/+Q1vZOh8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159517-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 159517: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=55f62bc873477dae2c45bbbc30b86cf3e0982f3b
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 21 Feb 2021 23:50:57 +0000

flight 159517 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159517/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm      11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                55f62bc873477dae2c45bbbc30b86cf3e0982f3b
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  205 days
Failing since        152366  2020-08-01 20:49:34 Z  204 days  352 attempts
Testing same since   159517  2021-02-21 10:54:47 Z    0 days    1 attempts

------------------------------------------------------------
4766 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1105425 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 05:54:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 05:54:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87760.164929 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lE4AO-0007zI-H8; Mon, 22 Feb 2021 05:53:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87760.164929; Mon, 22 Feb 2021 05:53:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lE4AO-0007zB-D2; Mon, 22 Feb 2021 05:53:56 +0000
Received: by outflank-mailman (input) for mailman id 87760;
 Mon, 22 Feb 2021 05:53:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lE4AM-0007z3-Un; Mon, 22 Feb 2021 05:53:54 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lE4AM-0000kU-KS; Mon, 22 Feb 2021 05:53:54 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lE4AM-0001qa-AT; Mon, 22 Feb 2021 05:53:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lE4AM-0000jE-9s; Mon, 22 Feb 2021 05:53:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=fL+9qdSTZ/rQAeHd+x9qFtZfaUWcOKVxjbuvxGAbj0A=; b=Y/HIf6xYdJDKVjO6JRzf/OZmHJ
	jJs+UJnDvMwZ/FyAfkkrTaIeZzNj+VR01slbZZ67+Gz+JpwW13b3ksUnbWrKTlHXzYVeb3dESuq2m
	yqCy9lyvgcYOUCqICkyzxr1KXr4JYFtlo7xoWJvSdhkmRdDIdSgEA8hVZj+2+j7LspiM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159526-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 159526: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    xen-unstable:test-xtf-amd64-amd64-1:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-xtf-amd64-amd64-5:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-xtf-amd64-amd64-2:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-intel:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-libvirt-xsm:xen-boot:fail:regression
    xen-unstable:test-xtf-amd64-amd64-3:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-coresched-i386-xl:xen-boot:fail:regression
    xen-unstable:test-xtf-amd64-amd64-4:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-amd64-i386-xl-shadow:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-raw:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-libvirt-pair:xen-boot/src_host:fail:regression
    xen-unstable:test-amd64-i386-libvirt-pair:xen-boot/dst_host:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-migrupgrade:xen-boot/dst_host:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-freebsd10-i386:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-freebsd10-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-examine:reboot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-livepatch:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-libvirt:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-pvshim:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-pair:xen-boot/src_host:fail:regression
    xen-unstable:test-amd64-i386-pair:xen-boot/dst_host:fail:regression
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:heisenbug
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=87a067fd8f4d4f7c6be02c3d38145115ac542017
X-Osstest-Versions-That:
    xen=e8185c5f01c68f7d29d23a4a91bc1be1ff2cc1ca
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 22 Feb 2021 05:53:54 +0000

flight 159526 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159526/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-ws16-amd64  8 xen-boot          fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-debianhvm-amd64  8 xen-boot     fail REGR. vs. 159475
 test-xtf-amd64-amd64-1      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-xtf-amd64-amd64-5      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-xtf-amd64-amd64-2      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-amd64-i386-qemut-rhel6hvm-intel  8 xen-boot         fail REGR. vs. 159475
 test-amd64-i386-libvirt-xsm   8 xen-boot                 fail REGR. vs. 159475
 test-xtf-amd64-amd64-3      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-amd64-i386-xl-qemut-debianhvm-amd64  8 xen-boot     fail REGR. vs. 159475
 test-amd64-coresched-i386-xl  8 xen-boot                 fail REGR. vs. 159475
 test-xtf-amd64-amd64-4      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-amd64-i386-xl-shadow     8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-xl-raw        8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-libvirt-pair 12 xen-boot/src_host        fail REGR. vs. 159475
 test-amd64-i386-libvirt-pair 13 xen-boot/dst_host        fail REGR. vs. 159475
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  8 xen-boot  fail REGR. vs. 159475
 test-amd64-i386-xl-xsm        8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-qemut-rhel6hvm-amd  8 xen-boot           fail REGR. vs. 159475
 test-amd64-i386-migrupgrade  13 xen-boot/dst_host        fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-ovmf-amd64  8 xen-boot          fail REGR. vs. 159475
 test-amd64-i386-xl            8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 159475
 test-amd64-i386-qemuu-rhel6hvm-amd  8 xen-boot           fail REGR. vs. 159475
 test-amd64-i386-xl-qemut-win7-amd64  8 xen-boot          fail REGR. vs. 159475
 test-amd64-i386-freebsd10-i386  8 xen-boot               fail REGR. vs. 159475
 test-amd64-i386-freebsd10-amd64  8 xen-boot              fail REGR. vs. 159475
 test-amd64-i386-examine       8 reboot                   fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 159475
 test-amd64-i386-livepatch     8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-libvirt       8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  8 xen-boot  fail REGR. vs. 159475
 test-amd64-i386-qemuu-rhel6hvm-intel  8 xen-boot         fail REGR. vs. 159475
 test-amd64-i386-xl-pvshim     8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-ws16-amd64  8 xen-boot          fail REGR. vs. 159475
 test-amd64-i386-pair         12 xen-boot/src_host        fail REGR. vs. 159475
 test-amd64-i386-pair         13 xen-boot/dst_host        fail REGR. vs. 159475
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-win7-amd64  8 xen-boot          fail REGR. vs. 159475

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10     fail pass in 159508

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 159475
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 159475
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 159475
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 159475
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 159475
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 159475
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 159475
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  87a067fd8f4d4f7c6be02c3d38145115ac542017
baseline version:
 xen                  e8185c5f01c68f7d29d23a4a91bc1be1ff2cc1ca

Last test of basis   159475  2021-02-19 06:26:37 Z    2 days
Testing same since   159487  2021-02-20 04:29:29 Z    2 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Norbert Manthey <nmanthey@amazon.de>
  Rahul Singh <rahul.singh@arm.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    fail    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 305 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 06:00:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 06:00:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87766.164947 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lE4GT-0008Rl-TN; Mon, 22 Feb 2021 06:00:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87766.164947; Mon, 22 Feb 2021 06:00:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lE4GI-0008R3-4A; Mon, 22 Feb 2021 06:00:02 +0000
Received: by outflank-mailman (input) for mailman id 87766;
 Mon, 22 Feb 2021 06:00:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4jN7=HY=ozlabs.org=dgibson@srs-us1.protection.inumbo.net>)
 id 1lE4GG-0008GL-9x
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 06:00:00 +0000
Received: from ozlabs.org (unknown [203.11.71.1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 79116c2f-6471-4807-9599-b751c0675167;
 Mon, 22 Feb 2021 05:59:57 +0000 (UTC)
Received: by ozlabs.org (Postfix, from userid 1007)
 id 4DkWjj1HJCz9sVV; Mon, 22 Feb 2021 16:59:53 +1100 (AEDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 79116c2f-6471-4807-9599-b751c0675167
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
	d=gibson.dropbear.id.au; s=201602; t=1613973593;
	bh=AY6wvTF45hDdtk/1CpIGS2xL+8YeDQJOtv+n0NJmFCQ=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=pkel7UI59661cPEq42sFwtAV/SW8gIGlw3cy8clzRcKqMZD3D76DkRZALGDvH+8T1
	 ucjxSwyzImOVTBraSk+bhxS1gZoSaCLUo6erCzTbY8KW5tyjfr0YLFUow8U6xq9lTD
	 KFkZAMzgAJg9jmzoQESAqP2yYaujBccgSFqP8TkA=
Date: Mon, 22 Feb 2021 16:59:30 +1100
From: David Gibson <david@gibson.dropbear.id.au>
To: Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@redhat.com>
Cc: qemu-devel@nongnu.org, Aurelien Jarno <aurelien@aurel32.net>,
	Peter Maydell <peter.maydell@linaro.org>,
	Anthony Perard <anthony.perard@citrix.com>, qemu-ppc@nongnu.org,
	qemu-s390x@nongnu.org, Halil Pasic <pasic@linux.ibm.com>,
	Huacai Chen <chenhuacai@kernel.org>, xen-devel@lists.xenproject.org,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, qemu-arm@nongnu.org,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paolo Bonzini <pbonzini@redhat.com>, kvm@vger.kernel.org,
	BALATON Zoltan <balaton@eik.bme.hu>,
	Leif Lindholm <leif@nuviainc.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Radoslaw Biernacki <rad@semihalf.com>,
	Alistair Francis <alistair@alistair23.me>,
	Paul Durrant <paul@xen.org>, Eduardo Habkost <ehabkost@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Thomas Huth <thuth@redhat.com>,
	Jiaxun Yang <jiaxun.yang@flygoat.com>,
	=?iso-8859-1?Q?Herv=E9?= Poussineau <hpoussin@reactos.org>,
	Greg Kurz <groug@kaod.org>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Cornelia Huck <cohuck@redhat.com>,
	"Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
	David Hildenbrand <david@redhat.com>,
	Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
	Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <f4bug@amsat.org>
Subject: Re: [RFC PATCH v2 06/11] hw/ppc: Restrict KVM to various PPC machines
Message-ID: <YDNIQiHG0nfKXNR8@yekko.fritz.box>
References: <20210219173847.2054123-1-philmd@redhat.com>
 <20210219173847.2054123-7-philmd@redhat.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="ItpTtSJa605u8HWx"
Content-Disposition: inline
In-Reply-To: <20210219173847.2054123-7-philmd@redhat.com>


--ItpTtSJa605u8HWx
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit

On Fri, Feb 19, 2021 at 06:38:42PM +0100, Philippe Mathieu-Daudé wrote:
> Restrict KVM to the following PPC machines:
> - 40p
> - bamboo
> - g3beige
> - mac99
> - mpc8544ds
> - ppce500
> - pseries
> - sam460ex
> - virtex-ml507

Hrm.

The reason this list is kind of surprising is that there are three
different "flavours" of KVM on ppc: KVM HV ("pseries" only), KVM PR
(almost any combination, theoretically, but kind of buggy in
practice), and the Book E specific KVM (Book E systems with HV
extensions only).

But basically, qemu explicitly managing what accelerators are
available for each machine seems the wrong way around to me.  The
approach we've generally taken is that qemu requests the specific
features it needs of KVM, and KVM tells us whether it can supply those
or not (which may involve selecting between one of the several
flavours).

That way we can extend KVM to cover more situations without needing
corresponding changes in qemu every time.


> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> ---
> RFC: I'm surprised by this list, but this is the result of
>      auditing calls to kvm_enabled() checks.
>
>  hw/ppc/e500plat.c      | 5 +++++
>  hw/ppc/mac_newworld.c  | 6 ++++++
>  hw/ppc/mac_oldworld.c  | 5 +++++
>  hw/ppc/mpc8544ds.c     | 5 +++++
>  hw/ppc/ppc440_bamboo.c | 5 +++++
>  hw/ppc/prep.c          | 5 +++++
>  hw/ppc/sam460ex.c      | 5 +++++
>  hw/ppc/spapr.c         | 5 +++++
>  8 files changed, 41 insertions(+)
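[Editor's illustration, not part of the original mail: the mc->valid_accelerators
field this patch adds is a NULL-terminated array of accelerator names, so the
machine-side check is a plain string walk.  machine_accel_allowed() is a
hypothetical helper for illustration, not code from the patch.]

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Shape of the per-machine list the patch adds to each board. */
static const char *const valid_accels[] = {
    "tcg", "kvm", NULL
};

/* Hypothetical helper: accept a requested accelerator only if it
 * appears in the NULL-terminated list; treat a NULL list as
 * "no restriction". */
static bool machine_accel_allowed(const char *const *valid, const char *accel)
{
    if (valid == NULL) {
        return true;
    }
    for (size_t i = 0; valid[i] != NULL; i++) {
        if (strcmp(valid[i], accel) == 0) {
            return true;
        }
    }
    return false;
}
```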
>
> diff --git a/hw/ppc/e500plat.c b/hw/ppc/e500plat.c
> index bddd5e7c48f..9701dbc2231 100644
> --- a/hw/ppc/e500plat.c
> +++ b/hw/ppc/e500plat.c
> @@ -67,6 +67,10 @@ HotplugHandler *e500plat_machine_get_hotpug_handler(MachineState *machine,
>
>  #define TYPE_E500PLAT_MACHINE  MACHINE_TYPE_NAME("ppce500")
>
> +static const char *const valid_accels[] = {
> +    "tcg", "kvm", NULL
> +};
> +
>  static void e500plat_machine_class_init(ObjectClass *oc, void *data)
>  {
>      PPCE500MachineClass *pmc = PPCE500_MACHINE_CLASS(oc);
> @@ -98,6 +102,7 @@ static void e500plat_machine_class_init(ObjectClass *oc, void *data)
>      mc->max_cpus = 32;
>      mc->default_cpu_type = POWERPC_CPU_TYPE_NAME("e500v2_v30");
>      mc->default_ram_id = "mpc8544ds.ram";
> +    mc->valid_accelerators = valid_accels;
>      machine_class_allow_dynamic_sysbus_dev(mc, TYPE_ETSEC_COMMON);
>   }
>
> diff --git a/hw/ppc/mac_newworld.c b/hw/ppc/mac_newworld.c
> index e991db4addb..634f5ad19a0 100644
> --- a/hw/ppc/mac_newworld.c
> +++ b/hw/ppc/mac_newworld.c
> @@ -578,6 +578,11 @@ static char *core99_fw_dev_path(FWPathProvider *p, BusState *bus,
>
>      return NULL;
>  }
> +
> +static const char *const valid_accels[] = {
> +    "tcg", "kvm", NULL
> +};
> +
>  static int core99_kvm_type(MachineState *machine, const char *arg)
>  {
>      /* Always force PR KVM */
> @@ -595,6 +600,7 @@ static void core99_machine_class_init(ObjectClass *oc, void *data)
>      mc->max_cpus = MAX_CPUS;
>      mc->default_boot_order = "cd";
>      mc->default_display = "std";
> +    mc->valid_accelerators = valid_accels;
>      mc->kvm_type = core99_kvm_type;
>  #ifdef TARGET_PPC64
>      mc->default_cpu_type = POWERPC_CPU_TYPE_NAME("970fx_v3.1");
> diff --git a/hw/ppc/mac_oldworld.c b/hw/ppc/mac_oldworld.c
> index 44ee99be886..2c58f73b589 100644
> --- a/hw/ppc/mac_oldworld.c
> +++ b/hw/ppc/mac_oldworld.c
> @@ -424,6 +424,10 @@ static char *heathrow_fw_dev_path(FWPathProvider *p, BusState *bus,
>      return NULL;
>  }
>
> +static const char *const valid_accels[] = {
> +    "tcg", "kvm", NULL
> +};
> +
>  static int heathrow_kvm_type(MachineState *machine, const char *arg)
>  {
>      /* Always force PR KVM */
> @@ -444,6 +448,7 @@ static void heathrow_class_init(ObjectClass *oc, void *data)
>  #endif
>      /* TOFIX "cad" when Mac floppy is implemented */
>      mc->default_boot_order = "cd";
> +    mc->valid_accelerators = valid_accels;
>      mc->kvm_type = heathrow_kvm_type;
>      mc->default_cpu_type = POWERPC_CPU_TYPE_NAME("750_v3.1");
>      mc->default_display = "std";
> diff --git a/hw/ppc/mpc8544ds.c b/hw/ppc/mpc8544ds.c
> index 81177505f02..92b0e926c1b 100644
> --- a/hw/ppc/mpc8544ds.c
> +++ b/hw/ppc/mpc8544ds.c
> @@ -36,6 +36,10 @@ static void mpc8544ds_init(MachineState *machine)
>      ppce500_init(machine);
>  }
>
> +static const char *const valid_accels[] = {
> +    "tcg", "kvm", NULL
> +};
> +
>  static void e500plat_machine_class_init(ObjectClass *oc, void *data)
>  {
>      MachineClass *mc = MACHINE_CLASS(oc);
> @@ -56,6 +60,7 @@ static void e500plat_machine_class_init(ObjectClass *oc, void *data)
>      mc->max_cpus = 15;
>      mc->default_cpu_type = POWERPC_CPU_TYPE_NAME("e500v2_v30");
>      mc->default_ram_id = "mpc8544ds.ram";
> +    mc->valid_accelerators = valid_accels;
>  }
>
>  #define TYPE_MPC8544DS_MACHINE  MACHINE_TYPE_NAME("mpc8544ds")
> diff --git a/hw/ppc/ppc440_bamboo.c b/hw/ppc/ppc440_bamboo.c
> index b156bcb9990..02501f489e4 100644
> --- a/hw/ppc/ppc440_bamboo.c
> +++ b/hw/ppc/ppc440_bamboo.c
> @@ -298,12 +298,17 @@ static void bamboo_init(MachineState *machine)
>      }
>  }
>
> +static const char *const valid_accels[] = {
> +    "tcg", "kvm", NULL
> +};
> +
>  static void bamboo_machine_init(MachineClass *mc)
>  {
>      mc->desc = "bamboo";
>      mc->init = bamboo_init;
>      mc->default_cpu_type = POWERPC_CPU_TYPE_NAME("440epb");
>      mc->default_ram_id = "ppc4xx.sdram";
> +    mc->valid_accelerators = valid_accels;
>  }
>
>  DEFINE_MACHINE("bamboo", bamboo_machine_init)
> diff --git a/hw/ppc/prep.c b/hw/ppc/prep.c
> index 7e72f6e4a9b..90d884b0883 100644
> --- a/hw/ppc/prep.c
> +++ b/hw/ppc/prep.c
> @@ -431,6 +431,10 @@ static void ibm_40p_init(MachineState *machine)
>      }
>  }
>
> +static const char *const valid_accels[] = {
> +    "tcg", "kvm", NULL
> +};
> +
>  static void ibm_40p_machine_init(MachineClass *mc)
>  {
>      mc->desc = "IBM RS/6000 7020 (40p)",
> @@ -441,6 +445,7 @@ static void ibm_40p_machine_init(MachineClass *mc)
>      mc->default_boot_order = "c";
>      mc->default_cpu_type =3D POWERPC_CPU_TYPE_NAME("604");
>      mc->default_display =3D "std";
> +    mc->valid_accelerators =3D valid_accels;
>  }
> =20
>  DEFINE_MACHINE("40p", ibm_40p_machine_init)
> diff --git a/hw/ppc/sam460ex.c b/hw/ppc/sam460ex.c
> index e459b43065b..79adb3352f0 100644
> --- a/hw/ppc/sam460ex.c
> +++ b/hw/ppc/sam460ex.c
> @@ -506,6 +506,10 @@ static void sam460ex_init(MachineState *machine)
>      boot_info->entry =3D entry;
>  }
> =20
> +static const char *const valid_accels[] =3D {
> +    "tcg", "kvm", NULL
> +};
> +
>  static void sam460ex_machine_init(MachineClass *mc)
>  {
>      mc->desc =3D "aCube Sam460ex";
> @@ -513,6 +517,7 @@ static void sam460ex_machine_init(MachineClass *mc)
>      mc->default_cpu_type =3D POWERPC_CPU_TYPE_NAME("460exb");
>      mc->default_ram_size =3D 512 * MiB;
>      mc->default_ram_id =3D "ppc4xx.sdram";
> +    mc->valid_accelerators =3D valid_accels;
>  }
> =20
>  DEFINE_MACHINE("sam460ex", sam460ex_machine_init)
> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> index 85fe65f8947..c5f985f0187 100644
> --- a/hw/ppc/spapr.c
> +++ b/hw/ppc/spapr.c
> @@ -4397,6 +4397,10 @@ static void spapr_cpu_exec_exit(PPCVirtualHypervis=
or *vhyp, PowerPCCPU *cpu)
>      }
>  }
> =20
> +static const char *const valid_accels[] =3D {
> +    "tcg", "kvm", NULL
> +};
> +
>  static void spapr_machine_class_init(ObjectClass *oc, void *data)
>  {
>      MachineClass *mc =3D MACHINE_CLASS(oc);
> @@ -4426,6 +4430,7 @@ static void spapr_machine_class_init(ObjectClass *o=
c, void *data)
>      mc->default_ram_size =3D 512 * MiB;
>      mc->default_ram_id =3D "ppc_spapr.ram";
>      mc->default_display =3D "std";
> +    mc->valid_accelerators =3D valid_accels;
>      mc->kvm_type =3D spapr_kvm_type;
>      machine_class_allow_dynamic_sysbus_dev(mc, TYPE_SPAPR_PCI_HOST_BRIDG=
E);
>      mc->pci_allow_0_address =3D true;

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson



From xen-devel-bounces@lists.xenproject.org Mon Feb 22 06:18:38 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [xen-unstable bisection] complete test-amd64-i386-xl-qemut-ws16-amd64
Message-Id: <E1lE4Y2-00011M-W0@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 22 Feb 2021 06:18:22 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-i386-xl-qemut-ws16-amd64
testid xen-boot

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  4dc1815991420b809ce18dddfdf9c0af48944204
  Bug not present: 2d824791504f4119f04f95bafffec2e37d319c25
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/159539/


  commit 4dc1815991420b809ce18dddfdf9c0af48944204
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Fri Feb 19 17:19:56 2021 +0100
  
      x86/PV: harden guest memory accesses against speculative abuse
      
      Inspired by
      https://lore.kernel.org/lkml/f12e7d3cecf41b2c29734ea45a393be21d4a8058.1597848273.git.jpoimboe@redhat.com/
      and prior work in that area of x86 Linux, suppress speculation with
      guest specified pointer values by suitably masking the addresses to
      non-canonical space in case they fall into Xen's virtual address range.
      
      Introduce a new Kconfig control.
      
      Note that it is necessary in such code to avoid using "m" kind operands:
      If we didn't, there would be no guarantee that the register passed to
      guest_access_mask_ptr is also the (base) one used for the memory access.
      
      As a minor unrelated change in get_unsafe_asm() the unnecessary "itype"
      parameter gets dropped and the XOR on the fixup path gets changed to be
      a 32-bit one in all cases: This way we avoid pointless REX.W or operand
      size overrides, or writes to partial registers.
      
      Requested-by: Andrew Cooper <andrew.cooper3@citrix.com>
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
      Release-Acked-by: Ian Jackson <iwj@xenproject.org>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable/test-amd64-i386-xl-qemut-ws16-amd64.xen-boot.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-unstable/test-amd64-i386-xl-qemut-ws16-amd64.xen-boot --summary-out=tmp/159539.bisection-summary --basis-template=159475 --blessings=real,real-bisect,real-retry xen-unstable test-amd64-i386-xl-qemut-ws16-amd64 xen-boot
Searching for failure / basis pass:
 159526 fail [host=huxelrebe1] / 159475 ok.
Failure / basis pass flights: 159526 / 159475
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 87a067fd8f4d4f7c6be02c3d38145115ac542017
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 e8185c5f01c68f7d29d23a4a91bc1be1ff2cc1ca
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://xenbits.xen.org/qemu-xen.git#7ea428895af2840d85c524f0bd11a38\
 aac308308-7ea428895af2840d85c524f0bd11a38aac308308 git://xenbits.xen.org/xen.git#e8185c5f01c68f7d29d23a4a91bc1be1ff2cc1ca-87a067fd8f4d4f7c6be02c3d38145115ac542017
Loaded 5001 nodes in revision graph
Searching for test results:
 159475 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 e8185c5f01c68f7d29d23a4a91bc1be1ff2cc1ca
 159487 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 87a067fd8f4d4f7c6be02c3d38145115ac542017
 159491 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 87a067fd8f4d4f7c6be02c3d38145115ac542017
 159508 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 87a067fd8f4d4f7c6be02c3d38145115ac542017
 159525 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 e8185c5f01c68f7d29d23a4a91bc1be1ff2cc1ca
 159527 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 87a067fd8f4d4f7c6be02c3d38145115ac542017
 159528 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 76aff7f6336b0ce19559700717537449972531be
 159531 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 2d824791504f4119f04f95bafffec2e37d319c25
 159532 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 f954a1bf5f74ad6edce361d1bf1a29137ff374e8
 159534 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 4dc1815991420b809ce18dddfdf9c0af48944204
 159535 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 2d824791504f4119f04f95bafffec2e37d319c25
 159536 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 4dc1815991420b809ce18dddfdf9c0af48944204
 159538 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 2d824791504f4119f04f95bafffec2e37d319c25
 159526 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 87a067fd8f4d4f7c6be02c3d38145115ac542017
 159539 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 4dc1815991420b809ce18dddfdf9c0af48944204
Searching for interesting versions
 Result found: flight 159475 (pass), for basis pass
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 2d824791504f4119f04f95bafffec2e37d319c25, results HASH(0x56365ad9db60) HASH(0x56365ad7e7e0) HASH(0x56365ada2918) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895\
 af2840d85c524f0bd11a38aac308308 e8185c5f01c68f7d29d23a4a91bc1be1ff2cc1ca, results HASH(0x56365ad86890) HASH(0x56365ad81890) Result found: flight 159487 (fail), for basis failure (at ancestor ~75)
 Repro found: flight 159525 (pass), for basis pass
 Repro found: flight 159526 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 2d824791504f4119f04f95bafffec2e37d319c25
No revisions left to test, checking graph state.
 Result found: flight 159531 (pass), for last pass
 Result found: flight 159534 (fail), for first failure
 Repro found: flight 159535 (pass), for last pass
 Repro found: flight 159536 (fail), for first failure
 Repro found: flight 159538 (pass), for last pass
 Repro found: flight 159539 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  4dc1815991420b809ce18dddfdf9c0af48944204
  Bug not present: 2d824791504f4119f04f95bafffec2e37d319c25
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/159539/


  commit 4dc1815991420b809ce18dddfdf9c0af48944204
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Fri Feb 19 17:19:56 2021 +0100
  
      x86/PV: harden guest memory accesses against speculative abuse
      
      Inspired by
      https://lore.kernel.org/lkml/f12e7d3cecf41b2c29734ea45a393be21d4a8058.1597848273.git.jpoimboe@redhat.com/
      and prior work in that area of x86 Linux, suppress speculation with
      guest specified pointer values by suitably masking the addresses to
      non-canonical space in case they fall into Xen's virtual address range.
      
      Introduce a new Kconfig control.
      
      Note that it is necessary in such code to avoid using "m" kind operands:
      If we didn't, there would be no guarantee that the register passed to
      guest_access_mask_ptr is also the (base) one used for the memory access.
      
      As a minor unrelated change in get_unsafe_asm() the unnecessary "itype"
      parameter gets dropped and the XOR on the fixup path gets changed to be
      a 32-bit one in all cases: This way we avoid pointless REX.W or operand
      size overrides, or writes to partial registers.
      
      Requested-by: Andrew Cooper <andrew.cooper3@citrix.com>
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
      Release-Acked-by: Ian Jackson <iwj@xenproject.org>

Revision graph left in /home/logs/results/bisect/xen-unstable/test-amd64-i386-xl-qemut-ws16-amd64.xen-boot.{dot,ps,png,html,svg}.
----------------------------------------
159539: tolerable ALL FAIL

flight 159539 xen-unstable real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/159539/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-i386-xl-qemut-ws16-amd64  8 xen-boot         fail baseline untested


jobs:
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Mon Feb 22 07:41:27 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159537-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 159537: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=4865dd673dd5cab616433b2335af783e584fef69
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 22 Feb 2021 07:41:07 +0000

flight 159537 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159537/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              4865dd673dd5cab616433b2335af783e584fef69
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  227 days
Failing since        151818  2020-07-11 04:18:52 Z  226 days  219 attempts
Testing same since   159537  2021-02-22 04:24:21 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 43946 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 07:52:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 07:52:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87786.164988 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lE60f-0002bD-0n; Mon, 22 Feb 2021 07:52:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87786.164988; Mon, 22 Feb 2021 07:52:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lE60e-0002b6-UD; Mon, 22 Feb 2021 07:52:00 +0000
Received: by outflank-mailman (input) for mailman id 87786;
 Mon, 22 Feb 2021 07:51:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6bXc=HY=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lE60d-0002az-GN
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 07:51:59 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5efe0135-04ed-480c-bb50-0f53d8d77186;
 Mon, 22 Feb 2021 07:51:58 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 59536AE36;
 Mon, 22 Feb 2021 07:51:57 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5efe0135-04ed-480c-bb50-0f53d8d77186
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613980317; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=dlT6ej+2/kuWXutbS9Mejo2sW8ib9b0ySrgZatwo2fA=;
	b=poNN9bKPZq8LspINsWpFmjpthVz7C0iQApWn5Vi/YYbyL57lP4gQvTT5ewVdtSxIfU0G9l
	3+KIStwlkkt+rrXrABCkOrHjQsTaa6M2UkczR3RtyC2DKg31fGv9eX8dNQLnuJUcGvfrBc
	FZE0/vy0bEf/fHRTXEVlty91IZsp9h8=
Subject: Re: [PATCH for-4.15 v2] VMX: use a single, global APIC access page
To: Ian Jackson <iwj@xenproject.org>, George Dunlap <George.Dunlap@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Kevin Tian <kevin.tian@intel.com>, Julien Grall <julien@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <a895386d-db14-2743-d8f9-09f9509a510a@suse.com>
 <dcada8be-a91d-87f0-c579-30f3c7e3607e@suse.com>
 <24623.61403.440917.434@mariner.uk.xensource.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <dfdd4440-3c37-8cb5-b7d3-a86b7c697b2e@suse.com>
Date: Mon, 22 Feb 2021 08:51:59 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <24623.61403.440917.434@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 19.02.2021 18:05, Ian Jackson wrote:
> Jan Beulich writes ("Re: [PATCH v2] VMX: use a single, global APIC access page"):
>> While this patch was triggered not just by Julien's observation of
>> the early p2m insertion being a problem, but also by many earlier
>> encounters with this odd code, it is - especially at this stage -
>> perhaps an option to split the change into just the movement of the
>> set_mmio_p2m_entry() invocation on one hand and all the rest on the
>> other, in order to defer that rest until after 4.15.
> 
> I infer that this contains a bugfix, but perhaps other
> changes/improvements too.
> 
> George, I think you're our expert on this refcounting stuff - what do
> you think of this ?
> 
> I guess my key question is whether this change will introduce risk by
> messing with the complex refcounting machinery - or remove it by
> removing an interaction with the refcounting.

If anything, the latter, but largely neither, afaict - there is no
change here at all in how the guest could affect behavior, because the
page gets inserted as p2m_mmio_direct, and guest_remove_page() has

    if ( p2mt == p2m_mmio_direct )
    {
        rc = clear_mmio_p2m_entry(d, gmfn, mfn, PAGE_ORDER_4K);
        goto out_put_gfn;
    }

before any refcounting logic is reached. The removal of interaction
is because now the page doesn't get associated with a domain (and
hence doesn't become subject to refcounting) at all.

The risk of the change stems from going from a per-domain page to a
single, system-wide one, which indeed was the subject of the v1
discussion. In any event, the splitting considered above would address
either concern. Perhaps I should really do so and submit it as v3 ...

Jan


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 10:03:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 10:03:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87820.165017 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lE83y-0006dx-0o; Mon, 22 Feb 2021 10:03:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87820.165017; Mon, 22 Feb 2021 10:03:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lE83x-0006dq-Tp; Mon, 22 Feb 2021 10:03:33 +0000
Received: by outflank-mailman (input) for mailman id 87820;
 Mon, 22 Feb 2021 10:03:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lE83x-0006di-30; Mon, 22 Feb 2021 10:03:33 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lE83w-0005XW-Uc; Mon, 22 Feb 2021 10:03:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lE83w-0004XD-Kv; Mon, 22 Feb 2021 10:03:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lE83w-0000yD-KS; Mon, 22 Feb 2021 10:03:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=IW60CXEXbmti17b/TFf0xIEQIVx5e1r0NQ6RXYbiHMQ=; b=1uluHypQshw2Ylx11Oxg4ryrl3
	IxLR+MOMoSZArgMkh1fncRBOt6IFCSO6gsSpN2oZ5j41QTvlowJHaINbVAjF2iI48cBJtFyTeUTK8
	4r21SvENImgMW/feNyL0YZezyJsJS35ZcHUkJQKsdY+FKm1/Q7cCxYJPKf3lLWHo8pWQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159530-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 159530: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=00d8ba9e0d62ea1c7459c25aeabf9c8bb7659462
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 22 Feb 2021 10:03:32 +0000

flight 159530 qemu-mainline real [real]
flight 159543 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/159530/
http://logs.test-lab.xenproject.org/osstest/logs/159543/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                00d8ba9e0d62ea1c7459c25aeabf9c8bb7659462
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  186 days
Failing since        152659  2020-08-21 14:07:39 Z  184 days  357 attempts
Testing same since   159530  2021-02-21 21:37:59 Z    0 days    1 attempts

------------------------------------------------------------
423 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 117284 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 10:10:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 10:10:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87827.165031 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lE8Ad-0007bZ-QG; Mon, 22 Feb 2021 10:10:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87827.165031; Mon, 22 Feb 2021 10:10:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lE8Ad-0007bS-N5; Mon, 22 Feb 2021 10:10:27 +0000
Received: by outflank-mailman (input) for mailman id 87827;
 Mon, 22 Feb 2021 10:10:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=w45/=HY=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lE8Ab-0007bN-Ow
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 10:10:26 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c657e947-f3b1-4ee2-a534-3b8f909e21cb;
 Mon, 22 Feb 2021 10:10:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c657e947-f3b1-4ee2-a534-3b8f909e21cb
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613988624;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=SkmEDHbhEloiKXa0cVdsvcT+4iZ7AddyOCLXloeWakI=;
  b=YNNK6oqMPHWjG7audNTfTmCnnpYcBSE6tg/DSQFYYuKCC4QSccIFpihL
   nu8HM/5oGrChe451r8YBGdGoOBDjyLrWR+ATK0mm8H9AqIlq7UhMkExRX
   OQOmizmPHXwEfWmRBoYkZHcoutlyqNMdU+t6zF6epa43kIXhdII56r8F4
   A=;
Date: Mon, 22 Feb 2021 11:09:28 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Ian Jackson <iwj@xenproject.org>, George Dunlap
	<George.Dunlap@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Kevin Tian <kevin.tian@intel.com>, "Julien
 Grall" <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu
	<wl@xen.org>
Subject: Re: [PATCH for-4.15 v2] VMX: use a single, global APIC access page
Message-ID: <YDOC2ACTf0bmryG1@Air-de-Roger>
References: <a895386d-db14-2743-d8f9-09f9509a510a@suse.com>
 <dcada8be-a91d-87f0-c579-30f3c7e3607e@suse.com>
 <24623.61403.440917.434@mariner.uk.xensource.com>
 <dfdd4440-3c37-8cb5-b7d3-a86b7c697b2e@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <dfdd4440-3c37-8cb5-b7d3-a86b7c697b2e@suse.com>
MIME-Version: 1.0

On Mon, Feb 22, 2021 at 08:51:59AM +0100, Jan Beulich wrote:
> On 19.02.2021 18:05, Ian Jackson wrote:
> > Jan Beulich writes ("Re: [PATCH v2] VMX: use a single, global APIC access page"):
> >> While this patch was triggered not just by Julien's observation of
> >> the early p2m insertion being a problem, but also by having run
> >> into this odd code many times before, it is - especially at this
> >> stage - perhaps an option to split the change into just the
> >> movement of the set_mmio_p2m_entry() invocation and all the rest,
> >> in order to defer that rest until after 4.15.
> > 
> > I infer that this contains a bugfix, but perhaps other
> > changes/improvements too.
> > 
> > George, I think you're our expert on this refcounting stuff - what do
> > you think of this ?
> > 
> > I guess my key question is whether this change will introduce risk by
> > messing with the complex refcounting machinery - or remove it by
> > removing an interaction with the refcounting.
> 
> If anything, then the latter, but largely neither afaict - there's no
> change in this regard here at all as far as the guest could affect
> behavior, due to the page getting inserted as p2m_mmio_direct, and
> guest_remove_page() having
> 
>     if ( p2mt == p2m_mmio_direct )
>     {
>         rc = clear_mmio_p2m_entry(d, gmfn, mfn, PAGE_ORDER_4K);
>         goto out_put_gfn;
>     }
> 
> before any refcounting logic is reached. The removal of interaction
> is because now the page doesn't get associated with a domain (and
> hence doesn't become subject to refcounting) at all.
> 
> The risk of the change stems from going from using a per-domain
> page to using a single, system-wide one, which indeed was the subject
> of v1 discussion. In any event the consideration towards splitting
> the change would cover either concern. Perhaps I should really do so
> and submit as v3 ...

I agree it would be less risky to keep using a per-domain page, and
switch to a global one after the release. From the discussion in v1 I
don't think we were able to spot any specific issues apart from
guests possibly being able to access shared data in this page from
passthrough devices. I would at least feel more comfortable with
that approach given the point we are at in the release process.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 10:18:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 10:18:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87841.165051 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lE8IT-0007sV-OG; Mon, 22 Feb 2021 10:18:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87841.165051; Mon, 22 Feb 2021 10:18:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lE8IT-0007sO-LG; Mon, 22 Feb 2021 10:18:33 +0000
Received: by outflank-mailman (input) for mailman id 87841;
 Mon, 22 Feb 2021 10:18:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=w45/=HY=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lE8IR-0007sH-PP
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 10:18:31 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 36b7d7e1-a17f-4415-8e75-13762b48c787;
 Mon, 22 Feb 2021 10:18:30 +0000 (UTC)
Date: Mon, 22 Feb 2021 11:18:07 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jason Andryuk <jandryuk@gmail.com>
CC: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, <intel-gfx@lists.freedesktop.org>, xen-devel
	<xen-devel@lists.xenproject.org>, eric chanudet <eric.chanudet@gmail.com>
Subject: Re: i915 dma faults on Xen
Message-ID: <YDOE35zhQYwgaxke@Air-de-Roger>
References: <1a3b90f4-564e-84d3-fd6a-3454e8753579@citrix.com>
 <20201015113109.GA68032@Air-de-Roger>
 <CAKf6xpsJYT7VCeaf6TxPNK1QD+3U9E8ST7E+mWtfDjw0k9L9dA@mail.gmail.com>
 <CAKf6xps1q9zMBeFg7C7ZhD-JcwQ6EG6+bYvvA9QT8PzzxKqMNg@mail.gmail.com>
 <20201021095809.o53b6hpvjl2lbqsi@Air-de-Roger>
 <CAKf6xpuTE4gBNe4YXPYh_hAMLaJduDuKL5_6aC4H=y6DRxaxvw@mail.gmail.com>
 <a4dd7778-9bd4-00c1-3056-96d435b70d70@suse.com>
 <CAKf6xpvKiWiU5Wsv2C1EiEFr77nMZTd+VHgkdk7qcKw1OFD8Vg@mail.gmail.com>
 <9bbf6768-a39e-2b3c-c4de-fd883cc9ef85@suse.com>
 <CAKf6xpuTbvGtTRHPK9Ock7rxJk4DfCumgTW7-2_PADm9cSaUBg@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <CAKf6xpuTbvGtTRHPK9Ock7rxJk4DfCumgTW7-2_PADm9cSaUBg@mail.gmail.com>
MIME-Version: 1.0

On Fri, Feb 19, 2021 at 12:30:23PM -0500, Jason Andryuk wrote:
> On Wed, Oct 21, 2020 at 9:59 AM Jan Beulich <jbeulich@suse.com> wrote:
> >
> > On 21.10.2020 15:36, Jason Andryuk wrote:
> > > On Wed, Oct 21, 2020 at 8:53 AM Jan Beulich <jbeulich@suse.com> wrote:
> > >>
> > >> On 21.10.2020 14:45, Jason Andryuk wrote:
> > >>> On Wed, Oct 21, 2020 at 5:58 AM Roger Pau Monné <roger.pau@citrix.com> wrote:
> > >>>> Hm, it's hard to tell what's going on. In my limited experience with
> > >>>> IOMMU faults on broken systems, there's a small range that initially
> > >>>> triggers those, and then the device goes wonky and starts accessing a
> > >>>> whole load of invalid addresses.
> > >>>>
> > >>>> You could try adding those manually using the rmrr Xen command line
> > >>>> option [0], maybe you can figure out which range(s) are missing?
> > >>>
> > >>> They seem to change, so it's hard to know.  Would there be harm in
> > >>> adding one to cover the end of RAM ( 0x04,7c80,0000 ) to (
> > >>> 0xff,ffff,ffff )?  Maybe that would just quiet the pointless faults
> > >>> while leaving the IOMMU enabled?
> > >>
> > >> While they may quieten the faults, I don't think those faults are
> > >> pointless. They indicate some problem with the software (less
> > >> likely the hardware, possibly the firmware) that you're using.
> > >> Also there's the question of what the overall behavior is going
> > >> to be when devices are permitted to access unpopulated address
> > >> ranges. I assume you did check already that no devices have their
> > >> BARs placed in that range?
> > >
> > > Isn't no-igfx already letting them try to read those unpopulated addresses?
> >
> > Yes, and it is for that reason that the documentation for the
> > option says "If specifying `no-igfx` fixes anything, please
> > report the problem." I infer from this in particular that one
> > had better not use it for non-development purposes of whatever
> > kind.
> 
> I stopped seeing these DMA faults, but I didn't know what made them go
> away.  Then when working with an older 5.4.64 kernel, I saw them
> again.  Eric bisected down to the 5.4.y version of mainline linux
> commit:
> 
> commit 8195400f7ea95399f721ad21f4d663a62c65036f
> Author: Chris Wilson <chris@chris-wilson.co.uk>
> Date:   Mon Oct 19 11:15:23 2020 +0100
> 
>     drm/i915: Force VT'd workarounds when running as a guest OS
> 
>     If i915.ko is being used as a passthrough device, it does not know if
>     the host is using intel_iommu. Mixing the iommu and gfx causes a few
>     issues (such as scanout overfetch) which we need to workaround inside
>     the driver, so if we detect we are running under a hypervisor, also
>     assume the device access is being virtualised.

So the commit above fixes the DMA faults seen on Linux when using an
i915 gfx card?

Thanks for digging into this.

Roger.


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 10:24:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 10:24:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87858.165068 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lE8OI-0000Ry-JV; Mon, 22 Feb 2021 10:24:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87858.165068; Mon, 22 Feb 2021 10:24:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lE8OI-0000Rr-GN; Mon, 22 Feb 2021 10:24:34 +0000
Received: by outflank-mailman (input) for mailman id 87858;
 Mon, 22 Feb 2021 10:24:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=w45/=HY=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lE8OH-0000Rm-3E
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 10:24:33 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6bc05b2f-c776-41eb-a506-b3920b73644c;
 Mon, 22 Feb 2021 10:24:32 +0000 (UTC)
Date: Mon, 22 Feb 2021 11:24:24 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
CC: Jan Beulich <jbeulich@suse.com>, <xen-devel@lists.xenproject.org>,
	<iwj@xenproject.org>, <wl@xen.org>, <anthony.perard@citrix.com>,
	<andrew.cooper3@citrix.com>, <jun.nakajima@intel.com>, <kevin.tian@intel.com>
Subject: Re: [PATCH v2 1/4] xl: Add support for ignore_msrs option
Message-ID: <YDOGWC/VK9eOtgLw@Air-de-Roger>
References: <1611182952-9941-1-git-send-email-boris.ostrovsky@oracle.com>
 <1611182952-9941-2-git-send-email-boris.ostrovsky@oracle.com>
 <YC5EitRCZB+VCeCC@Air-de-Roger>
 <a78a4b94-47cc-64c0-1b1f-8429665822b2@suse.com>
 <YC6NPcym62a0Nu0M@Air-de-Roger>
 <8ffd4f51-5fc6-349b-146f-e52c35c59b4d@suse.com>
 <5b286dfd-278b-8675-cd88-3ee2706c06e1@oracle.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <5b286dfd-278b-8675-cd88-3ee2706c06e1@oracle.com>
MIME-Version: 1.0

On Fri, Feb 19, 2021 at 09:50:12AM -0500, Boris Ostrovsky wrote:
> 
> On 2/18/21 10:57 AM, Jan Beulich wrote:
> > On 18.02.2021 16:52, Roger Pau Monné wrote:
> >> On Thu, Feb 18, 2021 at 12:54:13PM +0100, Jan Beulich wrote:
> >>> On 18.02.2021 11:42, Roger Pau Monné wrote:
> >>>> Not that you need to implement the full thing now, but maybe we could
> >>>> have something like:
> >>>>
> >>>> "
> >>>> =item B<ignore_msrs=[ "MSR_RANGE", "MSR_RANGE", ... ]>
> >>>>
> >>>> Specify a list of MSR ranges that will be ignored by the hypervisor:
> >>>> reads will return zeros and writes will be discarded without raising a
> >>>> #GP.
> >>>>
> >>>> Each MSR_RANGE is given in hexadecimal format and may be a range, e.g.
> >>>> c00102f0-c00102f1 (inclusive), or a single MSR, e.g. c00102f1.
> >>>> "
> >>>>
> >>>> Then you can print the messages in the hypervisor using a guest log
> >>>> level and modify it on demand in order to get more verbose output?
> >>> "Modify on demand"? Irrespective of what you mean with this, ...
> >>>
> >>>> I don't think selecting whether the messages are printed or not from
> >>>> xl is that helpful as the same could be achieved using guest_loglvl.
> >>> ... controlling this via guest_loglvl would affect various other
> >>> log messages' visibility.
> >> Right, but do we really need this level of per-guest log control,
> >> implemented in this way exclusively for MSRs?
> 
> 
> In a multi-tenant environment we may need to figure out why a particular guest is failing to boot, without affecting behavior of other guests.
> 
> 
> If we had per-guest log level in general then what you are suggesting would be the right thing to do IMO. Maybe that's what we should add?

Yes, that would seem better IMO, but I don't think it's fair to ask
you to do that work.

Do you think it would be acceptable to untangle both, and try to get
the MSR stuff without any logging changes?

I know we would be addressing only one part of what the series
originally tried to achieve, but I would prefer to have a
generic way to set a per-guest log level rather than something
specific to MSR accesses.

Thanks, Roger.
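[Editor's note: the MSR_RANGE syntax proposed earlier in this thread is simple to parse. The sketch below is purely hypothetical illustration — not actual xl/libxl code, and all names are made up:]

```c
/* Hypothetical sketch: parse the proposed MSR_RANGE syntax,
 * either "c00102f0-c00102f1" (inclusive range) or "c00102f1"
 * (single MSR), both in hexadecimal. */
#include <stdint.h>
#include <stdlib.h>

struct msr_range {
    uint32_t first, last;   /* inclusive bounds */
};

/* Returns 0 on success, -1 on malformed input. */
static int parse_msr_range(const char *s, struct msr_range *r)
{
    char *end;
    unsigned long v = strtoul(s, &end, 16);

    if ( end == s || v > UINT32_MAX )
        return -1;
    r->first = r->last = (uint32_t)v;

    if ( *end == '-' )
    {
        const char *p = end + 1;

        v = strtoul(p, &end, 16);
        /* Reject empty/garbage second half and inverted ranges. */
        if ( end == p || *end != '\0' || v > UINT32_MAX || v < r->first )
            return -1;
        r->last = (uint32_t)v;
    }
    else if ( *end != '\0' )
        return -1;

    return 0;
}
```

A list-valued option like the proposed B<ignore_msrs=[...]> would then call this once per list element.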


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 10:26:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 10:26:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87871.165098 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lE8QA-0000hP-Be; Mon, 22 Feb 2021 10:26:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87871.165098; Mon, 22 Feb 2021 10:26:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lE8QA-0000hI-8Z; Mon, 22 Feb 2021 10:26:30 +0000
Received: by outflank-mailman (input) for mailman id 87871;
 Mon, 22 Feb 2021 10:26:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lhyD=HY=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1lE8Q8-0000cu-DW
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 10:26:28 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b01b657c-0ba2-48e9-bf9d-c7d7201ad1ef;
 Mon, 22 Feb 2021 10:26:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b01b657c-0ba2-48e9-bf9d-c7d7201ad1ef
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613989582;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=9l6X9hjC3QSh7jnLSXYPmGofXv0ZNx5xjhXcPkar+vg=;
  b=aqIekZEXcqXusnHxTi9OUSxVuhTO/67vEL0/JjtxotdTAMQQEd/LJY6r
   KZY+ymGa92GRtDJ8g11lTQ6pWlY/7G36A9aKfXzbfWZ9a8D0L9NZMuItm
   5YdjW/9U+InvqtIM+A5TUvx+cAcksUwGm8WuNdQCfwbdbqysbMFc2bV1c
   g=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: 5iGxB753N93ImTWnqi/8yCjPpXLY2KCzUrJJfgCkkVEiSIttW286/yoKNxDzAJhzg0tFMuoedZ
 ggwrdJa4D8DAWkKdU2zXit4GkSFWioJRpJCKeFMAD0oPPRVwk8x57PCT/NmyfbjuORc62KxmkI
 643++tH0WsBQIVcs5O6Cyl3z7nSA5K9mexfiFEBACkgutKHMAiFM8UOipQJ9/n22F7IO0ekhWs
 X/n429h89SkJw64QbGNBVKN2Lyhx7OjQSatny3icl17Fikl3ihusquHwdlxmA2+4kzcsbho2Hq
 +JI=
X-SBRS: 5.2
X-MesageID: 37914844
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,196,1610427600"; 
   d="scan'208";a="37914844"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=N3sd9fRCJ92rfaHwXZlewEVWPVozZtdB3HynYZ7JteC8h0pDw6DBwo9KNC3+R7ob/siUbu1ZsUBGy5gubE9TTiX2xNhiXTHZOIFDMhz3D3aTt42FXlGqI5qv1xv5YZbXykbSn4Hv0EUXvtInn6ZjyhtHQkWfr0j/WU2HgaQtQW8LPLpQm9h0WN7jSjUZXuWkGiDvYggZs9Iy+rPvT2C/n4O9E1t9I3puTWFJOJ5vgsaSBooWlGa5N5man1xIWoD38cHYWBzvBb7GlGXiaj9dyGqGDmPoPwpFE9F9bgKgzA7cGfbz5T9R1SBZ/k6NSxsalGee6GPRTju6nvTRUAv9kA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=9l6X9hjC3QSh7jnLSXYPmGofXv0ZNx5xjhXcPkar+vg=;
 b=nRmGmfsBdLarIB/BBTCaPNn3APBBEwlrVRQkehjLDmYttCGSvtpyJ2ypge10rumdCw6ERmmJwDzBbP6h2Srxkz5eW+lOHXsnGXhfiMxaT+csbA0K9ZLs4Ok1YDtTeJ3TZHO8goOtwj8rNhyN94k/bVkKZaGQJyWF8qlnxe8LCjT0Vs7KHbHl+tuoNcAgdMvZDf55QdkAA7hCp0nL1zzBPdDHEUxTa2Y2gUSyqeOKwoOQdNFk+LRcXxsPBiZv0KINv0i0OBSsekYzgNVwFzI5TBpIUKWzNSrpdSzojZH3jzYHpqnmU8YZBtsEG49JjplrLbyo3yoxXW5WcWRyXx7/Ug==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=9l6X9hjC3QSh7jnLSXYPmGofXv0ZNx5xjhXcPkar+vg=;
 b=fiXV+0UMBON9t//GCfVb4SyTaQlWWpkwJCltR6BMldVzbrBkqj3iOvgLKYwh8B5Cd/gYluVp92TYt/nBSaNqli+MVXJrwflkYDpX9qOwFCx9M8/ck9qeEzgxwvqxrqB+uIWdb5CwEgZOtMJ87DQQdRnFCmYOq0gh+R5xwTesBXg=
From: George Dunlap <George.Dunlap@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
CC: Ian Jackson <iwj@xenproject.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Kevin Tian <kevin.tian@intel.com>, "Julien
 Grall" <julien@xen.org>, Andrew Cooper <Andrew.Cooper3@citrix.com>, Wei Liu
	<wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [PATCH for-4.15 v2] VMX: use a single, global APIC access page
Thread-Topic: [PATCH for-4.15 v2] VMX: use a single, global APIC access page
Thread-Index: AQHXCO+iqgVfJXeEMEKij3zVyzL30apj+G+A
Date: Mon, 22 Feb 2021 10:26:19 +0000
Message-ID: <8B51B3E9-901A-491D-A54E-1F67641D03F0@citrix.com>
References: <a895386d-db14-2743-d8f9-09f9509a510a@suse.com>
 <dcada8be-a91d-87f0-c579-30f3c7e3607e@suse.com>
 <24623.61403.440917.434@mariner.uk.xensource.com>
 <dfdd4440-3c37-8cb5-b7d3-a86b7c697b2e@suse.com>
In-Reply-To: <dfdd4440-3c37-8cb5-b7d3-a86b7c697b2e@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3654.20.0.2.21)
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 08a34af9-8bd8-41fe-3fc4-08d8d71c4b47
x-ms-traffictypediagnostic: PH0PR03MB5798:
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: <PH0PR03MB5798902BCE2FA9AA6AA097E799819@PH0PR03MB5798.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:7691;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <A49D2832AA0F274C8736843B1F8EBC56@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: PH0PR03MB5669.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 08a34af9-8bd8-41fe-3fc4-08d8d71c4b47
X-MS-Exchange-CrossTenant-originalarrivaltime: 22 Feb 2021 10:26:19.0720
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: dgvVxHRF1o3bFJJ/hs5gac7m3cFPCSe+Ys4GaiMckvrxB0ywvGQSUi6lNk9KzIhqZAGBZM8GcRPua4VIlTLa832sdOlt16heM6V4MPRstyY=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB5798
X-OriginatorOrg: citrix.com



> On Feb 22, 2021, at 7:51 AM, Jan Beulich <JBeulich@suse.com> wrote:
>
> On 19.02.2021 18:05, Ian Jackson wrote:
>> Jan Beulich writes ("Re: [PATCH v2] VMX: use a single, global APIC access page"):
>>> While this patch was triggered not just by Julien's observation of
>>> the early p2m insertion being a problem, but also many earlier
>>> times of running into this odd code, it is - especially at this
>>> stage - perhaps a possible option to split the change into just
>>> the movement of the set_mmio_p2m_entry() invocation and all the
>>> rest, in order to defer that rest until after 4.15.
>>
>> I infer that this contains a bugfix, but perhaps other
>> changes/improvements too.
>>
>> George, I think you're our expert on this refcounting stuff - what do
>> you think of this ?
>>
>> I guess my key question is whether this change will introduce risk by
>> messing with the complex refcounting machinery - or remove it by
>> removing an interaction with the refcounting.
>
> If anything, then the latter, but largely neither afaict

Does it actually contain a bugfix?  It’s not at all clear to me from reading the description that it’s anything other than a clean-up.  If there’s something else, it needs to be called out explicitly.

It should indeed theoretically make things safer long-term; the current vlapic_page allocation is using a special case of the refcounting rules, making it much more prone to being the subject of an “oversight”.  But at this point in the release we don’t have much time at all to shake out any potential bugs in the new implementation; as such I’d consider anything other than the minimum necessary to fix a bug to be not worth it.

 -George


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 10:27:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 10:27:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87875.165109 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lE8Qr-0000pf-L8; Mon, 22 Feb 2021 10:27:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87875.165109; Mon, 22 Feb 2021 10:27:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lE8Qr-0000pX-I5; Mon, 22 Feb 2021 10:27:13 +0000
Received: by outflank-mailman (input) for mailman id 87875;
 Mon, 22 Feb 2021 10:27:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6bXc=HY=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lE8Qq-0000ou-Lz
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 10:27:12 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 05a3259d-4d45-4e23-8f82-424dc12342b6;
 Mon, 22 Feb 2021 10:27:08 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0106CB147;
 Mon, 22 Feb 2021 10:27:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 05a3259d-4d45-4e23-8f82-424dc12342b6
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613989628; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=nMp/VPdudPdDiFUQUULeKoVRWd0Od4PCLUKJPHYG6fk=;
	b=hPYXqNvBBrOAQ48+OrAeAVdVa866aqD63kHfRuwyPnoqOd3x8Hw6P5ZZJp9kLiP+vLz/ti
	S+PYeqgA8w6gJHwzHEtS8TyaYKt/IxpTTFAct6Vkozm/Rvh4OGr4cIbPPBVtsA1V+EyODa
	NoOlCuN79EXbqMI1fFvVAEAmdZxyCmg=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Ian Jackson <iwj@xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH][4.15] x86: mirror compat argument translation area for 32-bit
 PV
Message-ID: <bdedf018-b6c4-d0da-fb4b-8cf2d048c3b1@suse.com>
Date: Mon, 22 Feb 2021 11:27:07 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Now that we guard the entire Xen VA space against speculative abuse
through hypervisor accesses to guest memory, the argument translation
area's VA also needs to live outside this range, at least for 32-bit PV
guests. To avoid extra is_hvm_*() conditionals, use the alternative VA
uniformly.

While this could be conditionalized upon CONFIG_PV32 &&
CONFIG_SPECULATIVE_HARDEN_GUEST_ACCESS, omitting such extra conditionals
keeps the code more legible imo.

Fixes: 4dc181599142 ("x86/PV: harden guest memory accesses against speculative abuse")
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -1727,6 +1727,11 @@ void init_xen_l4_slots(l4_pgentry_t *l4t
                (ROOT_PAGETABLE_FIRST_XEN_SLOT + slots -
                 l4_table_offset(XEN_VIRT_START)) * sizeof(*l4t));
     }
+
+    /* Slot 511: Per-domain mappings mirror. */
+    if ( !is_pv_64bit_domain(d) )
+        l4t[l4_table_offset(PERDOMAIN2_VIRT_START)] =
+            l4e_from_page(d->arch.perdomain_l3_pg, __PAGE_HYPERVISOR_RW);
 }
 
 bool fill_ro_mpt(mfn_t mfn)
--- a/xen/include/asm-x86/config.h
+++ b/xen/include/asm-x86/config.h
@@ -159,11 +159,11 @@ extern unsigned char boot_edid_info[128]
  *    1:1 direct mapping of all physical memory.
 #endif
  *  0xffff880000000000 - 0xffffffffffffffff [120TB,             PML4:272-511]
- *    PV: Guest-defined use.
+ *    PV (64-bit): Guest-defined use.
  *  0xffff880000000000 - 0xffffff7fffffffff [119.5TB,           PML4:272-510]
  *    HVM/idle: continuation of 1:1 mapping
  *  0xffffff8000000000 - 0xffffffffffffffff [512GB, 2^39 bytes  PML4:511]
- *    HVM/idle: unused
+ *    HVM / 32-bit PV: Secondary per-domain mappings.
  *
  * Compatibility guest area layout:
  *  0x0000000000000000 - 0x00000000f57fffff [3928MB,            PML4:0]
@@ -242,6 +242,9 @@ extern unsigned char boot_edid_info[128]
 #endif
 #define DIRECTMAP_VIRT_END      (DIRECTMAP_VIRT_START + DIRECTMAP_SIZE)
 
+/* Slot 511: secondary per-domain mappings (for compat xlat area accesses). */
+#define PERDOMAIN2_VIRT_START   (PML4_ADDR(511))
+
 #ifndef __ASSEMBLY__
 
 #ifdef CONFIG_PV32
--- a/xen/include/asm-x86/x86_64/uaccess.h
+++ b/xen/include/asm-x86/x86_64/uaccess.h
@@ -1,7 +1,17 @@
 #ifndef __X86_64_UACCESS_H
 #define __X86_64_UACCESS_H
 
-#define COMPAT_ARG_XLAT_VIRT_BASE ((void *)ARG_XLAT_START(current))
+/*
+ * With CONFIG_SPECULATIVE_HARDEN_GUEST_ACCESS, (apparent) PV guest accesses
+ * are prohibited from touching the Xen private VA range.  The compat argument
+ * translation area, therefore, can't live within this range.  Domains
+ * (potentially) in need of argument translation (32-bit PV, possibly HVM) get
+ * a secondary mapping installed, which needs to be used for such accesses in
+ * the PV case, and will also be used for HVM to avoid extra conditionals.
+ */
+#define COMPAT_ARG_XLAT_VIRT_BASE ((void *)ARG_XLAT_START(current) + \
+                                   (PERDOMAIN2_VIRT_START - \
+                                    PERDOMAIN_VIRT_START))
 #define COMPAT_ARG_XLAT_SIZE      (2*PAGE_SIZE)
 struct vcpu;
 int setup_compat_arg_xlat(struct vcpu *v);
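[Editor's note: the VA arithmetic this patch relies on can be sanity-checked numerically. The standalone sketch below is not Xen code; in particular, slot 260 for PERDOMAIN_VIRT_START is an assumption taken from Xen's x86-64 memory-map layout rather than from the patch itself:]

```c
#include <stdint.h>

/* Canonical address of PML4 slot s: each slot covers 2^39 bytes (512GiB),
 * and slots >= 256 live in the sign-extended upper half of the VA space. */
#define PML4_ADDR(s) \
    (((uint64_t)(s) >= 256 ? 0xffff000000000000ULL : 0) | \
     ((uint64_t)(s) << 39))

#define PERDOMAIN_VIRT_START   PML4_ADDR(260)   /* assumed slot number */
#define PERDOMAIN2_VIRT_START  PML4_ADDR(511)   /* slot 511, as in the patch */

/* Offset added to a per-domain VA to obtain its slot-511 mirror,
 * matching the COMPAT_ARG_XLAT_VIRT_BASE expression in the patch. */
#define PERDOMAIN2_OFFSET  (PERDOMAIN2_VIRT_START - PERDOMAIN_VIRT_START)

/* Translate a VA inside the per-domain area to the mirrored range. */
static inline uint64_t perdomain_mirror(uint64_t va)
{
    return va + PERDOMAIN2_OFFSET;
}
```

The point of the check: any address in the regular per-domain area, shifted by the fixed offset, lands at the same sub-slot offset inside PML4 slot 511, i.e. outside the range guarded against speculative abuse.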


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 10:33:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 10:33:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87888.165125 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lE8XB-0001uh-DF; Mon, 22 Feb 2021 10:33:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87888.165125; Mon, 22 Feb 2021 10:33:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lE8XB-0001ua-AK; Mon, 22 Feb 2021 10:33:45 +0000
Received: by outflank-mailman (input) for mailman id 87888;
 Mon, 22 Feb 2021 10:33:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6bXc=HY=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lE8XA-0001uR-Hm
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 10:33:44 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d6b05476-8d51-4bd7-93fe-f45c4a10e9ad;
 Mon, 22 Feb 2021 10:33:43 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id AC41DACCF;
 Mon, 22 Feb 2021 10:33:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d6b05476-8d51-4bd7-93fe-f45c4a10e9ad
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613990022; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=XP6ExC202EgsdIHhSjjZkbCWtVSsVnCYDjnEzu3A4dA=;
	b=jk9HdvMe8h6xnyiVpcIVst5T5z8ivntHIg4nGI0GuFYv9paG9ZDpfTWM/xfsG7LcolaNs1
	8AmLEMCsObu3K1KJuyKnHsUdSKRT+kwpoWDVuypjZ18VPHBAQfZewXsZw5zF1YAkEVLI+u
	0bHdqPAlcx7DA7oF72ifu8JGNcRk4e4=
Subject: Re: [PATCH v2 1/4] xl: Add support for ignore_msrs option
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, iwj@xenproject.org, wl@xen.org,
 anthony.perard@citrix.com, andrew.cooper3@citrix.com,
 jun.nakajima@intel.com, kevin.tian@intel.com,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
References: <1611182952-9941-1-git-send-email-boris.ostrovsky@oracle.com>
 <1611182952-9941-2-git-send-email-boris.ostrovsky@oracle.com>
 <YC5EitRCZB+VCeCC@Air-de-Roger>
 <a78a4b94-47cc-64c0-1b1f-8429665822b2@suse.com>
 <YC6NPcym62a0Nu0M@Air-de-Roger>
 <8ffd4f51-5fc6-349b-146f-e52c35c59b4d@suse.com>
 <5b286dfd-278b-8675-cd88-3ee2706c06e1@oracle.com>
 <YDOGWC/VK9eOtgLw@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <99dace05-576d-bd53-898b-74130ffc59fe@suse.com>
Date: Mon, 22 Feb 2021 11:33:42 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <YDOGWC/VK9eOtgLw@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 22.02.2021 11:24, Roger Pau Monné wrote:
> On Fri, Feb 19, 2021 at 09:50:12AM -0500, Boris Ostrovsky wrote:
>>
>> On 2/18/21 10:57 AM, Jan Beulich wrote:
>>> On 18.02.2021 16:52, Roger Pau Monné wrote:
>>>> On Thu, Feb 18, 2021 at 12:54:13PM +0100, Jan Beulich wrote:
>>>>> On 18.02.2021 11:42, Roger Pau Monné wrote:
>>>>>> Not that you need to implement the full thing now, but maybe we could
>>>>>> have something like:
>>>>>>
>>>>>> "
>>>>>> =item B<ignore_msrs=[ "MSR_RANGE", "MSR_RANGE", ... ]>
>>>>>>
>>>>>> Specify a list of MSR ranges that will be ignored by the hypervisor:
>>>>>> reads will return zeros and writes will be discarded without raising a
>>>>>> #GP.
>>>>>>
>>>>>> Each MSR_RANGE is given in hexadecimal format and may be a range, e.g.
>>>>>> c00102f0-c00102f1 (inclusive), or a single MSR, e.g. c00102f1.
>>>>>> "
>>>>>>
>>>>>> Then you can print the messages in the hypervisor using a guest log
>>>>>> level and modify it on demand in order to get more verbose output?
>>>>> "Modify on demand"? Irrespective of what you mean with this, ...
>>>>>
>>>>>> I don't think selecting whether the messages are printed or not from
>>>>>> xl is that helpful as the same could be achieved using guest_loglvl.
>>>>> ... controlling this via guest_loglvl would affect various other
>>>>> log messages' visibility.
>>>> Right, but do we really need this level of per-guest log control,
>>>> implemented in this way exclusively for MSRs?
>>
>>
>> In a multi-tenant environment we may need to figure out why a particular guest is failing to boot, without affecting behavior of other guests.
>>
>>
>> If we had per-guest log level in general then what you are suggesting would be the right thing to do IMO. Maybe that's what we should add?
> 
> Yes, that would seem better IMO, but I don't think it's fair to ask
> you to do that work.
> 
> Do you think it would be acceptable to untangle both, and try to get
> the MSR stuff without any logging changes?
> 
> I know we would be addressing only one part of what the series
> originally tried to achieve, but I would prefer to have a
> generic way to set a per-guest log level rather than something
> specific to MSR accesses.

TBH I'd see us go the other route: Follow Boris's approach for
4.15, and switch the logging control to per-guest once that
ability is there, _and_ if we're really convinced we don't want
to have this extra level of control. The latter because I think
a domain could end up pretty chatty just because of MSR accesses,
and it might therefore be undesirable to also hide all other
potentially relevant output. Perhaps the per-domain log level
control needs to be finer grained than what "guest_loglvl="
currently permits, more like what "hvm_debug=" has.

Jan
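[Editor's note: a per-domain log level of the kind discussed here might amount to little more than an extra per-domain threshold consulted before the global one. The sketch below is purely hypothetical — none of these names exist in Xen:]

```c
#include <stdbool.h>

/* Hypothetical log levels; lower value = more severe. */
enum loglvl { LOG_ERR = 0, LOG_WARNING, LOG_INFO, LOG_DEBUG };

/* Global default, standing in for the "guest_loglvl=" command-line option. */
static enum loglvl guest_loglvl = LOG_WARNING;

struct domain {
    bool has_loglvl;       /* per-domain override configured? */
    enum loglvl loglvl;    /* per-domain threshold, if so */
};

/* A message is emitted if its level is at or below the effective
 * threshold: the domain's own level when set, the global one otherwise.
 * This lets one guest be made verbose without affecting the others. */
static bool domain_would_log(const struct domain *d, enum loglvl lvl)
{
    enum loglvl thresh = (d && d->has_loglvl) ? d->loglvl : guest_loglvl;

    return lvl <= thresh;
}
```

A finer-grained variant, per Jan's point about chatty MSR output, could replace the single enum with a category bitmask along the lines of "hvm_debug=".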


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 10:50:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 10:50:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87916.165146 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lE8nE-0003rW-38; Mon, 22 Feb 2021 10:50:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87916.165146; Mon, 22 Feb 2021 10:50:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lE8nD-0003rP-Vv; Mon, 22 Feb 2021 10:50:19 +0000
Received: by outflank-mailman (input) for mailman id 87916;
 Mon, 22 Feb 2021 10:50:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6bXc=HY=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lE8nC-0003rK-7K
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 10:50:18 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6e2e892f-94f6-4a4b-b68f-d951737117e3;
 Mon, 22 Feb 2021 10:50:16 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id DFD22ACBF;
 Mon, 22 Feb 2021 10:50:15 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6e2e892f-94f6-4a4b-b68f-d951737117e3
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613991016; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=sSH6tyFf1SH97IvQoE4h8V3voyOjxH+M7HbTvsiMaJo=;
	b=K0hwPjY0ZIVQqG1L9OaxcafrMUSCvqHXyge0v/FiPDKmVbQ/lw0L3UmzpYX7czqs/9tYnP
	TA1wzFMM/W1z9mCtH4crwBkaRLwVsizg3D+W3LsxSJCoGKfSZuiXOsoDe/mUjiN0i4KnDL
	tcmsuvLQXArwRo/UcA881suLcl3bVLU=
Subject: Re: [PATCH for-4.15 v2] VMX: use a single, global APIC access page
To: George Dunlap <George.Dunlap@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Kevin Tian <kevin.tian@intel.com>, Julien Grall <julien@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>
References: <a895386d-db14-2743-d8f9-09f9509a510a@suse.com>
 <dcada8be-a91d-87f0-c579-30f3c7e3607e@suse.com>
 <24623.61403.440917.434@mariner.uk.xensource.com>
 <dfdd4440-3c37-8cb5-b7d3-a86b7c697b2e@suse.com>
 <8B51B3E9-901A-491D-A54E-1F67641D03F0@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b5759150-e028-ac68-b8b5-8abcea02b5d9@suse.com>
Date: Mon, 22 Feb 2021 11:50:15 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <8B51B3E9-901A-491D-A54E-1F67641D03F0@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 22.02.2021 11:26, George Dunlap wrote:
> 
> 
>> On Feb 22, 2021, at 7:51 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>
>> On 19.02.2021 18:05, Ian Jackson wrote:
>>> Jan Beulich writes ("Re: [PATCH v2] VMX: use a single, global APIC access page"):
>>>> While this patch was triggered not just by Julien's observation of
>>>> the early p2m insertion being a problem, but also many earlier
>>>> times of running into this odd code, it is - especially at this
>>>> stage - perhaps a possible option to split the change into just
>>>> the movement of the set_mmio_p2m_entry() invocation and all the
>>>> rest, in order to defer that rest until after 4.15.
>>>
>>> I infer that this contains a bugfix, but perhaps other
>>> changes/improvements too.
>>>
>>> George, I think you're our expert on this refcounting stuff - what do
>>> you think of this ?
>>>
>>> I guess my key question is whether this change will introduce risk by
>>> messing with the complex refcounting machinery - or remove it by
>>> removing an interaction with the refcounting.
>>
>> If anything, then the latter, but largely neither afaict
> 
> Does it actually contain a bugfix?  It's not at all clear to me from reading the description that it's anything other than a clean-up.  If there's something else, it needs to be called out explicitly.

Hmm, yes. The change has wanted making for a long time, imo. Hence,
when putting together the patch, I forgot to call out that, as a side
effect, it addresses a memory leak, as reported by Julien. With the
splitting of the two changes that will necessarily be mentioned. I'm
about to submit v3.

Jan

> It should indeed theoretically make things safer long-term; the current vlapic_page allocation is using a special case of the refcounting rules, making it much more prone to being the subject of an "oversight".  But at this point in the release we don't have much time at all to shake out any potential bugs in the new implementation; as such I'd consider anything other than the minimum necessary to fix a bug to be not worth it.
> 
>  -George
> 



From xen-devel-bounces@lists.xenproject.org Mon Feb 22 10:56:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 10:56:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87920.165158 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lE8si-00042T-ND; Mon, 22 Feb 2021 10:56:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87920.165158; Mon, 22 Feb 2021 10:56:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lE8si-00042M-K3; Mon, 22 Feb 2021 10:56:00 +0000
Received: by outflank-mailman (input) for mailman id 87920;
 Mon, 22 Feb 2021 10:55:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6bXc=HY=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lE8sg-00042H-Kq
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 10:55:58 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 00da6c20-e4f0-4600-b025-ae603449b97f;
 Mon, 22 Feb 2021 10:55:57 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EB0CCACCF;
 Mon, 22 Feb 2021 10:55:56 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 00da6c20-e4f0-4600-b025-ae603449b97f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613991357; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=8JXC8hn+A4ki/3rMcvz1kCsidmRm0ApTLUBjTGwtAHg=;
	b=ixiyzA2nnDfiI5KzAcDG44A6D9sqE6q/KFOdxyozncflSud0ssLYtAA7lM8VRAuBP/yJ0+
	UhKYF+iEcywOd6/Sf9fVd6FT4DFFjSW56KvqpY+fqF2vzC3xKPxtoiKaZTXUcpnFK3b4Ih
	W+zmWZR/CFsbXAxV+kUe7SzSCapgz28=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, George Dunlap <george.dunlap@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>, Jun Nakajima <jun.nakajima@intel.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v3 0/2] VMX: apic access page handling adjustments
Message-ID: <4731a3a3-906a-98ac-11ba-6a0723903391@suse.com>
Date: Mon, 22 Feb 2021 11:55:56 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

The latter of the two changes was on my mental todo list for a
very long time. With Julien reporting a problem with the handling
of this page, I finally felt urged to make a patch. As it turns
out, addressing this problem requires only the first of the now
split patches; the second can be further discussed and considered
for 4.16.

1: delay p2m insertion of APIC access page
2: use a single, global APIC access page

Jan


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 10:57:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 10:57:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87924.165174 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lE8ti-0004Ax-2s; Mon, 22 Feb 2021 10:57:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87924.165174; Mon, 22 Feb 2021 10:57:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lE8th-0004Aq-Vr; Mon, 22 Feb 2021 10:57:01 +0000
Received: by outflank-mailman (input) for mailman id 87924;
 Mon, 22 Feb 2021 10:57:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6bXc=HY=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lE8tg-0004AA-V7
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 10:57:00 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 16605c37-2f38-4771-b03e-73c7aa7b963f;
 Mon, 22 Feb 2021 10:56:59 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E2F7FAF23;
 Mon, 22 Feb 2021 10:56:58 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 16605c37-2f38-4771-b03e-73c7aa7b963f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613991419; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=8uj5lky+beBmzi1shgy61QnM9D1z2TNAJ5ARxpmCwuk=;
	b=dETcHXuZtaAQA4Kh8SPgm2s3OPELVTc6arhwtd9UXJ227s7dLkKVrvFg9oHknfWLuUF4b0
	TDtPNu47G9x3kHzcdiAoczKUMvc+Lwkq6JZMm2PyyRmBJykVY1PtHNMy6dKnkZwHPlOMyX
	ydoibrdOJhoaxkXxNhjYCOY9ppJcKIo=
Subject: [PATCH v3 1/2][4.15] VMX: delay p2m insertion of APIC access page
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, George Dunlap <george.dunlap@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>, Jun Nakajima <jun.nakajima@intel.com>,
 Julien Grall <julien@xen.org>
References: <4731a3a3-906a-98ac-11ba-6a0723903391@suse.com>
Message-ID: <90271e69-c07e-a32c-5531-a79b10ef03dd@suse.com>
Date: Mon, 22 Feb 2021 11:56:58 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <4731a3a3-906a-98ac-11ba-6a0723903391@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Inserting the mapping at domain creation time leads to a memory leak
when the creation fails later on and the domain uses separate CPU and
IOMMU page tables - the latter requires intermediate page tables to be
allocated, but there's no freeing of them at present in this case. Since
we don't need the p2m insertion to happen this early, avoid the problem
altogether by deferring it until the last possible point. This comes at
the price of not being able to handle an error other than by crashing
the domain.

Reported-by: Julien Grall <julien@xen.org>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v3: New (split out).
---
Hooking p2m insertion onto arch_domain_creation_finished() isn't very
nice, but I couldn't find any better hook (nor a good place to
introduce a new one). In particular there appear to be no hvm_funcs
hooks being used on a domain-wide basis (except for init/destroy of
course). I did consider connecting this to the setting of
HVM_PARAM_IDENT_PT, but considered this no better, especially since
the tool stack could be smarter and avoid setting that param when not
needed.

--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1007,6 +1007,8 @@ int arch_domain_soft_reset(struct domain
 
 void arch_domain_creation_finished(struct domain *d)
 {
+    if ( is_hvm_domain(d) )
+        hvm_domain_creation_finished(d);
 }
 
 #define xen_vcpu_guest_context vcpu_guest_context
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -428,6 +428,14 @@ static void vmx_domain_relinquish_resour
     vmx_free_vlapic_mapping(d);
 }
 
+static void domain_creation_finished(struct domain *d)
+{
+    if ( has_vlapic(d) && !mfn_eq(d->arch.hvm.vmx.apic_access_mfn, _mfn(0)) &&
+         set_mmio_p2m_entry(d, gaddr_to_gfn(APIC_DEFAULT_PHYS_BASE),
+                            d->arch.hvm.vmx.apic_access_mfn, PAGE_ORDER_4K) )
+        domain_crash(d);
+}
+
 static void vmx_init_ipt(struct vcpu *v)
 {
     unsigned int size = v->domain->vmtrace_size;
@@ -2408,6 +2416,7 @@ static struct hvm_function_table __initd
     .cpu_dead             = vmx_cpu_dead,
     .domain_initialise    = vmx_domain_initialise,
     .domain_relinquish_resources = vmx_domain_relinquish_resources,
+    .domain_creation_finished = domain_creation_finished,
     .vcpu_initialise      = vmx_vcpu_initialise,
     .vcpu_destroy         = vmx_vcpu_destroy,
     .save_cpu_ctxt        = vmx_save_vmcs_ctxt,
@@ -3234,8 +3243,7 @@ static int vmx_alloc_vlapic_mapping(stru
     clear_domain_page(mfn);
     d->arch.hvm.vmx.apic_access_mfn = mfn;
 
-    return set_mmio_p2m_entry(d, gaddr_to_gfn(APIC_DEFAULT_PHYS_BASE), mfn,
-                              PAGE_ORDER_4K);
+    return 0;
 }
 
 static void vmx_free_vlapic_mapping(struct domain *d)
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -106,6 +106,7 @@ struct hvm_function_table {
      * Initialise/destroy HVM domain/vcpu resources
      */
     int  (*domain_initialise)(struct domain *d);
+    void (*domain_creation_finished)(struct domain *d);
     void (*domain_relinquish_resources)(struct domain *d);
     void (*domain_destroy)(struct domain *d);
     int  (*vcpu_initialise)(struct vcpu *v);
@@ -390,6 +391,12 @@ static inline bool hvm_has_set_descripto
     return hvm_funcs.set_descriptor_access_exiting;
 }
 
+static inline void hvm_domain_creation_finished(struct domain *d)
+{
+    if ( hvm_funcs.domain_creation_finished )
+        alternative_vcall(hvm_funcs.domain_creation_finished, d);
+}
+
 static inline int
 hvm_guest_x86_mode(struct vcpu *v)
 {
@@ -765,6 +772,11 @@ static inline void hvm_invlpg(const stru
 {
     ASSERT_UNREACHABLE();
 }
+
+static inline void hvm_domain_creation_finished(struct domain *d)
+{
+    ASSERT_UNREACHABLE();
+}
 
 /*
  * Shadow code needs further cleanup to eliminate some HVM-only paths. For



From xen-devel-bounces@lists.xenproject.org Mon Feb 22 10:57:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 10:57:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87930.165185 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lE8u3-0004HM-Ai; Mon, 22 Feb 2021 10:57:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87930.165185; Mon, 22 Feb 2021 10:57:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lE8u3-0004HF-7p; Mon, 22 Feb 2021 10:57:23 +0000
Received: by outflank-mailman (input) for mailman id 87930;
 Mon, 22 Feb 2021 10:57:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6bXc=HY=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lE8u1-0004GV-Uy
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 10:57:21 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8a651254-2653-4f72-b939-40e592837b4c;
 Mon, 22 Feb 2021 10:57:20 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id AEEADAF21;
 Mon, 22 Feb 2021 10:57:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8a651254-2653-4f72-b939-40e592837b4c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1613991439; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Qwwk9cXc8T/+fcdIQ8thR6iikFAxZizICApHFhBhDOw=;
	b=Aa8Rn3JV3/A++QN/tQ0BfOq7HlP0wKM43rw/+19LnVLknS9984ew0c13s+XKbMpmkPU+Gc
	lRIxWegqDVd8xxURcpwSCAY5vyrHMqAq2QbhC54iSPnyi4OKWlNLnHMxVbMzLlhLeQyDSH
	F6DRACguw8IiYKB7peSMPkuvicZUF1w=
Subject: [PATCH v3 2/2] VMX: use a single, global APIC access page
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, George Dunlap <george.dunlap@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>, Jun Nakajima <jun.nakajima@intel.com>
References: <4731a3a3-906a-98ac-11ba-6a0723903391@suse.com>
Message-ID: <774a0bf2-d2a4-7dba-bf15-fec8b0ec8c5f@suse.com>
Date: Mon, 22 Feb 2021 11:57:19 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <4731a3a3-906a-98ac-11ba-6a0723903391@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

The address of this page is used by the CPU only to recognize when to
access the virtual APIC page instead. No accesses would ever go to this
page. It only needs to be present in the (CPU) page tables so that
address translation will produce its address as the result for the
respective accesses.

By making this page global, we also eliminate the need to refcount it,
or to assign it to any domain in the first place.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v3: Split p2m insertion change to a separate patch.
v2: Avoid insertion when !has_vlapic(). Split off change to
    p2m_get_iommu_flags().
---
I did further consider not allocating any real page at all, but just
using the address of some unpopulated space (which would require
announcing this page as reserved to Dom0, so it wouldn't put any PCI
MMIO BARs there). But I thought this would be too controversial, because
of the possible risks associated with this.

--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -66,8 +66,7 @@ boolean_param("force-ept", opt_force_ept
 static void vmx_ctxt_switch_from(struct vcpu *v);
 static void vmx_ctxt_switch_to(struct vcpu *v);
 
-static int  vmx_alloc_vlapic_mapping(struct domain *d);
-static void vmx_free_vlapic_mapping(struct domain *d);
+static int alloc_vlapic_mapping(void);
 static void vmx_install_vlapic_mapping(struct vcpu *v);
 static void vmx_update_guest_cr(struct vcpu *v, unsigned int cr,
                                 unsigned int flags);
@@ -78,6 +77,8 @@ static int vmx_msr_read_intercept(unsign
 static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content);
 static void vmx_invlpg(struct vcpu *v, unsigned long linear);
 
+static mfn_t __read_mostly apic_access_mfn;
+
 /* Values for domain's ->arch.hvm_domain.pi_ops.flags. */
 #define PI_CSW_FROM (1u << 0)
 #define PI_CSW_TO   (1u << 1)
@@ -401,7 +402,6 @@ static int vmx_domain_initialise(struct
         .to   = vmx_ctxt_switch_to,
         .tail = vmx_do_resume,
     };
-    int rc;
 
     d->arch.ctxt_switch = &csw;
 
@@ -411,28 +411,14 @@ static int vmx_domain_initialise(struct
      */
     d->arch.hvm.vmx.exec_sp = is_hardware_domain(d) || opt_ept_exec_sp;
 
-    if ( !has_vlapic(d) )
-        return 0;
-
-    if ( (rc = vmx_alloc_vlapic_mapping(d)) != 0 )
-        return rc;
-
     return 0;
 }
 
-static void vmx_domain_relinquish_resources(struct domain *d)
-{
-    if ( !has_vlapic(d) )
-        return;
-
-    vmx_free_vlapic_mapping(d);
-}
-
 static void domain_creation_finished(struct domain *d)
 {
-    if ( has_vlapic(d) && !mfn_eq(d->arch.hvm.vmx.apic_access_mfn, _mfn(0)) &&
+    if ( has_vlapic(d) && !mfn_eq(apic_access_mfn, _mfn(0)) &&
          set_mmio_p2m_entry(d, gaddr_to_gfn(APIC_DEFAULT_PHYS_BASE),
-                            d->arch.hvm.vmx.apic_access_mfn, PAGE_ORDER_4K) )
+                            apic_access_mfn, PAGE_ORDER_4K) )
         domain_crash(d);
 }
 
@@ -2415,7 +2401,6 @@ static struct hvm_function_table __initd
     .cpu_up_prepare       = vmx_cpu_up_prepare,
     .cpu_dead             = vmx_cpu_dead,
     .domain_initialise    = vmx_domain_initialise,
-    .domain_relinquish_resources = vmx_domain_relinquish_resources,
     .domain_creation_finished = domain_creation_finished,
     .vcpu_initialise      = vmx_vcpu_initialise,
     .vcpu_destroy         = vmx_vcpu_destroy,
@@ -2662,7 +2647,7 @@ const struct hvm_function_table * __init
 {
     set_in_cr4(X86_CR4_VMXE);
 
-    if ( vmx_vmcs_init() )
+    if ( vmx_vmcs_init() || alloc_vlapic_mapping() )
     {
         printk("VMX: failed to initialise.\n");
         return NULL;
@@ -3217,7 +3202,7 @@ gp_fault:
     return X86EMUL_EXCEPTION;
 }
 
-static int vmx_alloc_vlapic_mapping(struct domain *d)
+static int __init alloc_vlapic_mapping(void)
 {
     struct page_info *pg;
     mfn_t mfn;
@@ -3225,52 +3210,28 @@ static int vmx_alloc_vlapic_mapping(stru
     if ( !cpu_has_vmx_virtualize_apic_accesses )
         return 0;
 
-    pg = alloc_domheap_page(d, MEMF_no_refcount);
+    pg = alloc_domheap_page(NULL, 0);
     if ( !pg )
         return -ENOMEM;
 
-    if ( !get_page_and_type(pg, d, PGT_writable_page) )
-    {
-        /*
-         * The domain can't possibly know about this page yet, so failure
-         * here is a clear indication of something fishy going on.
-         */
-        domain_crash(d);
-        return -ENODATA;
-    }
-
     mfn = page_to_mfn(pg);
     clear_domain_page(mfn);
-    d->arch.hvm.vmx.apic_access_mfn = mfn;
+    apic_access_mfn = mfn;
 
     return 0;
 }
 
-static void vmx_free_vlapic_mapping(struct domain *d)
-{
-    mfn_t mfn = d->arch.hvm.vmx.apic_access_mfn;
-
-    d->arch.hvm.vmx.apic_access_mfn = _mfn(0);
-    if ( !mfn_eq(mfn, _mfn(0)) )
-    {
-        struct page_info *pg = mfn_to_page(mfn);
-
-        put_page_alloc_ref(pg);
-        put_page_and_type(pg);
-    }
-}
-
 static void vmx_install_vlapic_mapping(struct vcpu *v)
 {
     paddr_t virt_page_ma, apic_page_ma;
 
-    if ( mfn_eq(v->domain->arch.hvm.vmx.apic_access_mfn, _mfn(0)) )
+    if ( !has_vlapic(v->domain) || mfn_eq(apic_access_mfn, _mfn(0)) )
         return;
 
     ASSERT(cpu_has_vmx_virtualize_apic_accesses);
 
     virt_page_ma = page_to_maddr(vcpu_vlapic(v)->regs_page);
-    apic_page_ma = mfn_to_maddr(v->domain->arch.hvm.vmx.apic_access_mfn);
+    apic_page_ma = mfn_to_maddr(apic_access_mfn);
 
     vmx_vmcs_enter(v);
     __vmwrite(VIRTUAL_APIC_PAGE_ADDR, virt_page_ma);
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -58,7 +58,6 @@ struct ept_data {
 #define _VMX_DOMAIN_PML_ENABLED    0
 #define VMX_DOMAIN_PML_ENABLED     (1ul << _VMX_DOMAIN_PML_ENABLED)
 struct vmx_domain {
-    mfn_t apic_access_mfn;
     /* VMX_DOMAIN_* */
     unsigned int status;
 



From xen-devel-bounces@lists.xenproject.org Mon Feb 22 10:57:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 10:57:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87932.165198 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lE8uT-0004O6-LK; Mon, 22 Feb 2021 10:57:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87932.165198; Mon, 22 Feb 2021 10:57:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lE8uT-0004Nz-GC; Mon, 22 Feb 2021 10:57:49 +0000
Received: by outflank-mailman (input) for mailman id 87932;
 Mon, 22 Feb 2021 10:57:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=OBBz=HY=redhat.com=kraxel@srs-us1.protection.inumbo.net>)
 id 1lE8uS-0004Nn-J5
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 10:57:48 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 21a8ba2c-6837-49f6-a2b8-5a8f722049e3;
 Mon, 22 Feb 2021 10:57:47 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-457-O0U_erKjNFiOb3Hd-kM1Ng-1; Mon, 22 Feb 2021 05:57:44 -0500
Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.phx2.redhat.com
 [10.5.11.12])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 4BDF41020C20;
 Mon, 22 Feb 2021 10:57:43 +0000 (UTC)
Received: from sirius.home.kraxel.org (ovpn-114-184.ams2.redhat.com
 [10.36.114.184])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id B4B2460C04;
 Mon, 22 Feb 2021 10:57:39 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 3EF8F1800399; Mon, 22 Feb 2021 11:57:38 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 21a8ba2c-6837-49f6-a2b8-5a8f722049e3
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1613991466;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=nvlkKULpwYNVPS6/mh0XmFuo5ZYky+bE+E6dQG6ewlY=;
	b=cjq8NnG/SHqrk+mc3q8m9VwYKPx/3se55HAIkgdHDl/dLe6QENtKkwHe1AQbIg7GL85TFI
	v1hjntuRrAWy8jsXY8IO0EMGigow2up5G29c7jy7mhe5orh/2jGgXZE9BHnRLd2M6di2Hz
	Fa7ccqNT35nvtZZbDICBI3BznDnZ7RY=
X-MC-Unique: O0U_erKjNFiOb3Hd-kM1Ng-1
Date: Mon, 22 Feb 2021 11:57:38 +0100
From: Gerd Hoffmann <kraxel@redhat.com>
To: Akihiko Odaki <akihiko.odaki@gmail.com>
Cc: qemu-devel@nongnu.org, xen-devel@lists.xenproject.org,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>
Subject: Re: [PATCH] virtio-gpu: Respect graphics update interval for EDID
Message-ID: <20210222105738.w2q6vp5pi4p6bx5m@sirius.home.kraxel.org>
References: <20210221133414.7262-1-akihiko.odaki@gmail.com>
MIME-Version: 1.0
In-Reply-To: <20210221133414.7262-1-akihiko.odaki@gmail.com>
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.12
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=kraxel@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

On Sun, Feb 21, 2021 at 10:34:14PM +0900, Akihiko Odaki wrote:
> This change introduces an additional member, refresh_rate to
> qemu_edid_info in include/hw/display/edid.h.
> 
> This change also isolates the graphics update interval from the
> display update interval. The guest will update the frame buffer
> in the graphics update interval, but displays can be updated in a
> dynamic interval, for example to save update costs aggressively
> (vnc) or to respond to user-generated events (sdl).
> It stabilizes the graphics update interval and prevents the guest
> from being confused.

Hmm.  What problem are you trying to solve here?

The update throttle being visible to the guest was done intentionally,
so the guest can throttle the display updates too in case nobody is
watching those display updates anyway.

take care,
  Gerd



From xen-devel-bounces@lists.xenproject.org Mon Feb 22 11:09:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 11:09:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87959.165222 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lE95I-0005aG-QY; Mon, 22 Feb 2021 11:09:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87959.165222; Mon, 22 Feb 2021 11:09:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lE95I-0005a9-Mc; Mon, 22 Feb 2021 11:09:00 +0000
Received: by outflank-mailman (input) for mailman id 87959;
 Mon, 22 Feb 2021 11:08:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=w45/=HY=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lE95G-0005YQ-Lw
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 11:08:58 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9fea87cd-b6a4-4089-b657-075a8424dbab;
 Mon, 22 Feb 2021 11:08:57 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9fea87cd-b6a4-4089-b657-075a8424dbab
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1613992137;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=A2EUgiYFsnqe8RM/X0IvcyhSocV64autWSihTfkL35A=;
  b=LDWC8C/v9WATOo7B0dtmGooPHxEr4EQyYElkgQsFmNBdaVEZz7EoARWG
   ixN9CBffR/zCkp3OzTxDJnXRRJTegD9NFdhPVKaGLQqvfN9re4+W8EBfF
   bEHWe/iRfDHbmO2avJCjSxkkcJFCtSSwSeDK2puxo5vdJfiFKuxJd+RdW
   E=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: CuUk7IhEUHwL+qJqZxmF4N5PFOn5+rkvzAysosndqZC5gkk+Fa5BoBSvoaEzZZWhZ5SDfBr34w
 1G9DX5L2wThCHFZgwNYx0dI1oDg1oznjt8z3GKwfgLWoJIAct3lt4TR6xpwllGgMgD2XbXOdgW
 /VI9gRL9DMygviE4jIE6/LZY+puDHx+L/lMKCVVX85skeVww+7QXr6kM8rKRCTW7J+R5Ksbn7M
 zj2ZkF0AF2XoJUWzIeMa2fxG9vt44qTbd4yOGIiiCroG1RVpibqecBYaCUEfkqM/Y5FbvUFo0u
 pO4=
X-SBRS: 5.2
X-MesageID: 37744783
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,197,1610427600"; 
   d="scan'208";a="37744783"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ZvQYY4KOH8lL75WEUmYjowiYS7ZA9LVTkSG5qn0uycA=;
 b=xlUs0uywNCDX1hA9PFXHKrnCMHs1XXEjzBLgM8p5kQlKTTAHdVzJ3bqtKuAREHI1rDOZqOpTPnTr1mgFFDX0OeoP8/FRWUCTQXZ/yV95apazZOvL8BlB3alrhLb2tzw1+DZNbdRCjQnVeWCQV3xcr1MooqnM+bT/wvqGmyy+XQI=
Date: Mon, 22 Feb 2021 12:08:45 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
CC: <xen-devel@lists.xenproject.org>, <iwj@xenproject.org>, <wl@xen.org>,
	<anthony.perard@citrix.com>, <jbeulich@suse.com>,
	<andrew.cooper3@citrix.com>, <jun.nakajima@intel.com>, <kevin.tian@intel.com>
Subject: Re: [PATCH v2 2/4] x86: Introduce MSR_UNHANDLED
Message-ID: <YDOQvU1h8zpOv5PH@Air-de-Roger>
References: <1611182952-9941-1-git-send-email-boris.ostrovsky@oracle.com>
 <1611182952-9941-3-git-send-email-boris.ostrovsky@oracle.com>
 <YC5GrgqwsR/eBwpy@Air-de-Roger>
 <4e585daa-f255-fbff-d1cf-38ef49f146f5@oracle.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <4e585daa-f255-fbff-d1cf-38ef49f146f5@oracle.com>
X-ClientProxiedBy: AS8PR04CA0156.eurprd04.prod.outlook.com
 (2603:10a6:20b:331::11) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 14b95bd8-c8ce-4047-e744-08d8d7223c6f
X-MS-TrafficTypeDiagnostic: DM6PR03MB5324:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB5324B3EE7FD6697B2CA8A8828F819@DM6PR03MB5324.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 14b95bd8-c8ce-4047-e744-08d8d7223c6f
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Feb 2021 11:08:52.0112
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: EdeBVKElkeGW8vf48ywLyZzl5beseLPEpGZYYSvBy4jpa2VkjHPRef26AkjExFNCXZzhJTngk3zDNNSUvYSxNg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB5324
X-OriginatorOrg: citrix.com

On Fri, Feb 19, 2021 at 09:56:32AM -0500, Boris Ostrovsky wrote:
> 
> > On 2/18/21 5:51 AM, Roger Pau Monné wrote:
> > On Wed, Jan 20, 2021 at 05:49:10PM -0500, Boris Ostrovsky wrote:
> >> When toolstack updates MSR policy, this MSR offset (which is the last
> >> index in the hypervisor MSR range) is used to indicate hypervisor
> >> behavior when guest accesses an MSR which is not explicitly emulated.
> > It's kind of weird to use an MSR to store this. I assume this is done
> > for migration reasons?
> 
> 
> Not really. It just seemed to me that MSR policy is the logical place to do that. Because it *is* a policy of how to deal with such accesses, isn't it?

I agree that using the msr_policy seems like the most suitable place
to convey this information between the toolstack and Xen. I wonder if
it would be fine to have fields in msr_policy that don't directly
translate into an MSR value?

But having such a list of ignored MSRs in msr_policy makes the whole
get/set logic a bit weird, as the user would have to provide a buffer
in order to get the list of ignored MSRs.

> 
> > Isn't is possible to convey this data in the xl migration stream
> > instead of having to pack it with MSRs?
> 
> 
> I haven't looked at this but again --- the feature itself has nothing to do with migration. The fact that folding it into policy makes migration of this information "just work" is just a nice side benefit of this implementation.

IMO it feels slightly weird that we have to use an MSR (MSR_UNHANDLED)
to store this option; it seems like wasting an MSR index when there's
really no need for it to be stored in an MSR, as we don't expose it to
guests.

It would seem more natural for such an option to live in arch_domain as
a rangeset, for example.

Maybe introduce a new DOMCTL to set it?

#define XEN_DOMCTL_msr_set_ignore ...
struct xen_domctl_msr_set_ignore {
    uint32_t index;
    uint32_t size;
};
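To make the rangeset suggestion concrete, here is a minimal self-contained C model of what the proposed XEN_DOMCTL_msr_set_ignore interface could do: the toolstack registers [index, index + size) ranges of MSR indices to ignore, and the MSR emulation path later asks whether a faulting index is covered. This is a hypothetical sketch with illustrative names (a flat interval list), not Xen's actual rangeset API or the patch under discussion:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define MAX_RANGES 8

/* One inclusive range of MSR indices the hypervisor should ignore. */
struct msr_ignore_range {
    uint32_t start, end;
};

static struct msr_ignore_range ranges[MAX_RANGES];
static unsigned int nr_ranges;

/* Model of the domctl handler: record [index, index + size - 1]. */
static bool msr_set_ignore(uint32_t index, uint32_t size)
{
    if (size == 0 || nr_ranges == MAX_RANGES)
        return false;
    ranges[nr_ranges].start = index;
    ranges[nr_ranges].end = index + size - 1;
    nr_ranges++;
    return true;
}

/* Model of the check on the MSR emulation path. */
static bool msr_is_ignored(uint32_t index)
{
    for (unsigned int i = 0; i < nr_ranges; i++)
        if (index >= ranges[i].start && index <= ranges[i].end)
            return true;
    return false;
}
```

With a rangeset held in arch_domain like this, no MSR index is consumed and nothing extra needs to appear in the get/set msr_policy buffers; migration of the setting would instead have to be handled explicitly in the stream.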

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 11:10:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 11:10:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87968.165234 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lE96d-0006P5-9K; Mon, 22 Feb 2021 11:10:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87968.165234; Mon, 22 Feb 2021 11:10:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lE96d-0006Oy-68; Mon, 22 Feb 2021 11:10:23 +0000
Received: by outflank-mailman (input) for mailman id 87968;
 Mon, 22 Feb 2021 11:10:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lE96c-0006Ot-Ic
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 11:10:22 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lE96c-0006fj-GD
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 11:10:22 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lE96c-0005Z3-EV
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 11:10:22 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lE96Z-0000iu-6C; Mon, 22 Feb 2021 11:10:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=Gn3ncHkmdy8iMJV2ipgldzvFX6Ln7JH5odnom5ookYc=; b=gFwa+jkUZIseff/QPpcFeo3M8k
	4hl8LsRcL0zfdihSzQ6iGmW3SE5itszcdQi2fJEN4MliHwUhVfnmHKk36ev9Ysh5Kw2CQYD0YOl9D
	Lv99ISAJX8RmSHxDzjV9jexT5JchQ6WBDRl9ouvjshCYmAPjweRZ5X1M4WAB2wrC2BVc=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24627.37145.647801.601857@mariner.uk.xensource.com>
Date: Mon, 22 Feb 2021 11:10:17 +0000
To: Julien Grall <julien@xen.org>
Cc: xen-devel@lists.xenproject.org,
    bertrand.marquis@arm.com,
    Julien Grall <jgrall@amazon.com>,
    Stefano Stabellini <sstabellini@kernel.org>,
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH for-4.15] xen/vgic: Implement write to ISPENDR in vGICv{2, 3}
In-Reply-To: <20210220140412.31610-1-julien@xen.org>
References: <20210220140412.31610-1-julien@xen.org>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Julien Grall writes ("[PATCH for-4.15] xen/vgic: Implement write to ISPENDR in vGICv{2, 3}"):
> From: Julien Grall <jgrall@amazon.com>
> 
> Currently, Xen will send a data abort to a guest trying to write to the
> ISPENDR.
> 
> Unfortunately, recent versions of Linux (at least 5.9+) will start
> writing to the register if the interrupt needs to be re-triggered
> (see the callback irq_retrigger). This can happen when a driver (such as
> the xgbe network driver on AMD Seattle) re-enables an interrupt:

Release-Acked-by: Ian Jackson <iwj@xenproject.org>


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 11:11:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 11:11:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87970.165246 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lE97e-0006Xt-M1; Mon, 22 Feb 2021 11:11:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87970.165246; Mon, 22 Feb 2021 11:11:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lE97e-0006Xm-H6; Mon, 22 Feb 2021 11:11:26 +0000
Received: by outflank-mailman (input) for mailman id 87970;
 Mon, 22 Feb 2021 11:11:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lE97d-0006Xe-2U
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 11:11:25 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lE97d-0006gn-1l
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 11:11:25 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lE97d-0005pJ-0w
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 11:11:25 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lE97Z-0000jK-Ry; Mon, 22 Feb 2021 11:11:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=BVRupZ+BKF0UTFHVG43HCdjtn0mdRZqluv+w+RTUfEg=; b=E92PbpzkupX5oWQ0MghIif0oh3
	oAMXH1coIfl5Os/d901oN6nb2/exR1WvczhxDspVYHntw6Eylc89hkcDy4MggoDgrGwRsCAXP0PFP
	3qzlhfN7FSxNAUNwhxwJ+Pol9GLLYfUQ1H6saRBaypF9juFo9ST385atIe9qlTPjG/eo=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24627.37209.655443.911873@mariner.uk.xensource.com>
Date: Mon, 22 Feb 2021 11:11:21 +0000
To: Julien Grall <julien@xen.org>
Cc: xen-devel@lists.xenproject.org,
    jbeulich@suse.com,
    sstabellini@kernel.org,
    ash.j.wilding@gmail.com,
    Julien Grall <jgrall@amazon.com>,
    George Dunlap <george.dunlap@citrix.com>,
    Dario Faggioli <dfaggioli@suse.com>
Subject: Re: [PATCH for-4.15] xen/sched: Add missing memory barrier in vcpu_block()
In-Reply-To: <20210220194701.24202-1-julien@xen.org>
References: <20210220194701.24202-1-julien@xen.org>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Julien Grall writes ("[PATCH for-4.15] xen/sched: Add missing memory barrier in vcpu_block()"):
> From: Julien Grall <jgrall@amazon.com>
> 
> The comment in vcpu_block() states that the events should be checked
> /after/ blocking to avoid the wakeup-waiting race. However, from a
> generic perspective, set_bit() doesn't prevent re-ordering. So the
> following could happen:
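The ordering requirement described above can be modelled in plain C11. This is a hedged sketch with illustrative names (vcpu_block_model, blocked, event_pending), not the actual Xen code: the store marking the vCPU as blocked must be ordered before the re-check of pending events, which is exactly what the missing barrier provides:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool blocked;
static atomic_bool event_pending;

/* Model of the blocking path: mark ourselves blocked, then re-check
 * for events that may have been delivered concurrently. Returns true
 * if the vCPU would actually go to sleep. */
static bool vcpu_block_model(void)
{
    atomic_store_explicit(&blocked, true, memory_order_relaxed);

    /* The barrier the patch adds: without it, the load of
     * event_pending below could be reordered before the store to
     * blocked above, and a concurrent wakeup could be lost. */
    atomic_thread_fence(memory_order_seq_cst);

    if (atomic_load_explicit(&event_pending, memory_order_relaxed)) {
        /* An event raced in: cancel the block instead of sleeping. */
        atomic_store_explicit(&blocked, false, memory_order_relaxed);
        return false;
    }
    return true;
}
```

A waker doing the mirror-image sequence (store event_pending, fence, load blocked) is then guaranteed to observe blocked set whenever the blocking side misses the event, so one side always notices the other.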

Release-Acked-by: Ian Jackson <iwj@xenproject.org>


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 11:16:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 11:16:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87973.165258 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lE9C3-0006mM-6Y; Mon, 22 Feb 2021 11:15:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87973.165258; Mon, 22 Feb 2021 11:15:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lE9C3-0006mF-3S; Mon, 22 Feb 2021 11:15:59 +0000
Received: by outflank-mailman (input) for mailman id 87973;
 Mon, 22 Feb 2021 11:15:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lE9C1-0006mA-O7
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 11:15:57 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lE9C1-0006mB-KM
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 11:15:57 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lE9C1-00067i-IW
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 11:15:57 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lE9By-0000ko-C3; Mon, 22 Feb 2021 11:15:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=tvmEtbpqs6ufp1O+ExbeT6WdYpgoKsN6Fz0xeN6YX60=; b=uCBlGTzjF1mijs0mnnWx+87IcI
	gonjUmIAkzBKH06biccndL5qbOyTk7Z4WoINv6BcoCsW15kMTs3GZRtTOpPzv7luEAHNZ4MVEKIht
	4DE/bn/nQbZva1e60Stf48tu9HqvVYiwhmZiOsUTrsxMWQvdzL5c6NBFDqIRdX9Y7SVc=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24627.37482.140028.358793@mariner.uk.xensource.com>
Date: Mon, 22 Feb 2021 11:15:54 +0000
To: Tamas K Lengyel <tamas@tklengyel.com>
Cc: xen-devel@lists.xenproject.org,
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH for-4.15] tools/misc/xen-vmtrace: use reset and enable
In-Reply-To: <d63d274c46f964d89f791d5e5166971387c0e2e8.1613855006.git.tamas@tklengyel.com>
References: <d63d274c46f964d89f791d5e5166971387c0e2e8.1613855006.git.tamas@tklengyel.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Tamas K Lengyel writes ("[PATCH for-4.15] tools/misc/xen-vmtrace: use reset and enable"):
> The expected behavior while using xen-vmtrace is to get a clean start, even if
> the tool was used previously on the same VM.

Release-Acked-by: Ian Jackson <iwj@xenproject.org>
Reviewed-by: Ian Jackson <iwj@xenproject.org>

and pushed to staging.

Ian.


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 11:23:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 11:23:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87978.165277 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lE9Ip-0007nQ-09; Mon, 22 Feb 2021 11:22:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87978.165277; Mon, 22 Feb 2021 11:22:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lE9Io-0007nJ-TI; Mon, 22 Feb 2021 11:22:58 +0000
Received: by outflank-mailman (input) for mailman id 87978;
 Mon, 22 Feb 2021 11:22:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lE9In-0007nE-L0
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 11:22:57 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lE9In-0006uF-KC
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 11:22:57 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lE9In-0006nr-IU
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 11:22:57 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lE9Ih-0000nd-BY; Mon, 22 Feb 2021 11:22:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=h3LCZfk4f4WOJlw4VQ/w8mDw1s5Kve0vRma/C/jPGVU=; b=hwpfDDO6d9ztoNTCEWdwz6kNXy
	ayMDbw+h/ckOR9/BE1RKa+cD09EEE3DodeSPs9q7FrWeluDfRSLXmML+DW/t/MVmlHjkjU54N3ihd
	KM4bXUbzJgsOWYN//qh/QU8Pp6pnhrPD3UmUMYRfQB3LZG2fvemsHxpUWI8DRp2ijQfM=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 8bit
Message-ID: <24627.37898.442930.913809@mariner.uk.xensource.com>
Date: Mon, 22 Feb 2021 11:22:50 +0000
To: Jan Beulich <jbeulich@suse.com>,
    Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Cc: George Dunlap <George.Dunlap@citrix.com>,
    "xen-devel\@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
    Kevin Tian <kevin.tian@intel.com>,
    "Julien  Grall" <julien@xen.org>,
    Andrew Cooper <andrew.cooper3@citrix.com>,
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH for-4.15 v2] VMX: use a single, global APIC access page [and 1 more messages]
In-Reply-To: <YDOC2ACTf0bmryG1@Air-de-Roger>,
	<b5759150-e028-ac68-b8b5-8abcea02b5d9@suse.com>
References: <a895386d-db14-2743-d8f9-09f9509a510a@suse.com>
	<dcada8be-a91d-87f0-c579-30f3c7e3607e@suse.com>
	<24623.61403.440917.434@mariner.uk.xensource.com>
	<dfdd4440-3c37-8cb5-b7d3-a86b7c697b2e@suse.com>
	<8B51B3E9-901A-491D-A54E-1F67641D03F0@citrix.com>
	<b5759150-e028-ac68-b8b5-8abcea02b5d9@suse.com>
	<YDOC2ACTf0bmryG1@Air-de-Roger>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Roger Pau Monné writes ("Re: [PATCH for-4.15 v2] VMX: use a single, global APIC access page"):
> On Mon, Feb 22, 2021 at 08:51:59AM +0100, Jan Beulich wrote:
> > If anything, then the latter, but largely neither afaict - there's no
> > change in this regard here at all as far as the guest could affect
> > behavior, due to the page getting inserted as p2m_mmio_direct, and
> > guest_remove_page() having
> > 
> >     if ( p2mt == p2m_mmio_direct )
> >     {
> >         rc = clear_mmio_p2m_entry(d, gmfn, mfn, PAGE_ORDER_4K);
> >         goto out_put_gfn;
> >     }
> > 
> > before any refcounting logic is reached. The removal of interaction
> > is because now the page doesn't get associated with a domain (and
> > hence doesn't become subject to refcounting) at all.
> > 
> > The risk of the change stems from going from using a per-domain
> > page to using a single, system-wide one, which indeed was the subject
> > of v1 discussion. In any event the consideration towards splitting
> > the change would cover either concern. Perhaps I should really do so
> > and submit as v3 ...
> 
> I agree it would be less risky to keep using a per-domain page, and
> switch to a global one after the release. From the discussion in v1 I
> don't think we were able to spot any specific issues, apart from
> guests possibly being able to access shared data in this page from
> passthrough devices. I would at least feel more comfortable with
> that approach given the point we are at in the release process.

Thanks to Roger and Jan for these comments which were very helpful to
me.

Jan Beulich writes ("Re: [PATCH for-4.15 v2] VMX: use a single, global APIC access page"):
> Hmm, yes. The change wanted making anyway, for a long time imo. Hence
> when putting together the patch I forgot to call out that as a side
> effect it addresses a memory leak, as reported by Julien. With the
> splitting of the two changes that'll be necessarily mentioned. I'm
> about to submit v3.

Great, I see it now.

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 11:25:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 11:25:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87983.165292 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lE9Kt-0007vO-FH; Mon, 22 Feb 2021 11:25:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87983.165292; Mon, 22 Feb 2021 11:25:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lE9Kt-0007vH-Bb; Mon, 22 Feb 2021 11:25:07 +0000
Received: by outflank-mailman (input) for mailman id 87983;
 Mon, 22 Feb 2021 11:25:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lE9Ks-0007vC-Js
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 11:25:06 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lE9Ks-0006wH-IL
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 11:25:06 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lE9Ks-0006x6-HN
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 11:25:06 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lE9Kp-0000oO-C0; Mon, 22 Feb 2021 11:25:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=qyTCpkoizKoayMH2qHnn7AaZQLQeUl7oU96NRKebpI4=; b=z9OGhHTc8sDlAn3eyJKwJW0IB2
	t415WjLF0+VZpYSMp3lfJI2ouhcj/onJZua9GwtoaEQHKdIZraTv/NXMvVysHQS4lF6XrTelM/B2I
	7ObUueIep7PVgfMFPZm2gncGc8Z+BbETjhxtvEUyLA5P5S1WVDvl26x6d/swA+j20bZ8=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24627.38031.77928.536108@mariner.uk.xensource.com>
Date: Mon, 22 Feb 2021 11:25:03 +0000
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel\@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
    Andrew Cooper <andrew.cooper3@citrix.com>,
    Wei Liu <wl@xen.org>,
    Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
    George Dunlap <george.dunlap@citrix.com>,
    Kevin Tian <kevin.tian@intel.com>,
    Jun Nakajima <jun.nakajima@intel.com>,
    Julien Grall <julien@xen.org>
Subject: Re: [PATCH v3 1/2][4.15] VMX: delay p2m insertion of APIC access page
In-Reply-To: <90271e69-c07e-a32c-5531-a79b10ef03dd@suse.com>
References: <4731a3a3-906a-98ac-11ba-6a0723903391@suse.com>
	<90271e69-c07e-a32c-5531-a79b10ef03dd@suse.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Jan Beulich writes ("[PATCH v3 1/2][4.15] VMX: delay p2m insertion of APIC access page"):
> Inserting the mapping at domain creation time leads to a memory leak
> when the creation fails later on and the domain uses separate CPU and
> IOMMU page tables - the latter requires intermediate page tables to be
> allocated, but there's no freeing of them at present in this case. Since
> we don't need the p2m insertion to happen this early, avoid the problem
> altogether by deferring it until the last possible point.

Thanks.

>   This comes at
> the price of not being able to handle an error other than by crashing
> the domain.

How worried should I be about this ?

Ian.


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 11:26:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 11:26:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87986.165303 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lE9MY-00084O-QU; Mon, 22 Feb 2021 11:26:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87986.165303; Mon, 22 Feb 2021 11:26:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lE9MY-00084H-NR; Mon, 22 Feb 2021 11:26:50 +0000
Received: by outflank-mailman (input) for mailman id 87986;
 Mon, 22 Feb 2021 11:26:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lE9MX-00084C-SN
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 11:26:49 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lE9MX-0006zP-Rf
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 11:26:49 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lE9MX-00075a-Qu
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 11:26:49 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lE9MU-0000p2-LY; Mon, 22 Feb 2021 11:26:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24627.38134.451961.515628@mariner.uk.xensource.com>
Date: Mon, 22 Feb 2021 11:26:46 +0000
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
    Andrew Cooper <andrew.cooper3@citrix.com>,
    Wei Liu <wl@xen.org>,
    Roger Pau Monné <roger.pau@citrix.com>
Subject: Re: [PATCH][4.15] x86: mirror compat argument translation area for 32-bit
 PV
In-Reply-To: <bdedf018-b6c4-d0da-fb4b-8cf2d048c3b1@suse.com>
References: <bdedf018-b6c4-d0da-fb4b-8cf2d048c3b1@suse.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Jan Beulich writes ("[PATCH][4.15] x86: mirror compat argument translation area for 32-bit PV"):
> Now that we guard the entire Xen VA space against speculative abuse
> through hypervisor accesses to guest memory, the argument translation
> area's VA also needs to live outside this range, at least for 32-bit PV
> guests. To avoid extra is_hvm_*() conditionals, use the alternative VA
> uniformly.
> 
> While this could be conditionalized upon CONFIG_PV32 &&
> CONFIG_SPECULATIVE_HARDEN_GUEST_ACCESS, omitting such extra conditionals
> keeps the code more legible imo.
> 
> Fixes: 4dc181599142 ("x86/PV: harden guest memory accesses against speculative abuse")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Release-Acked-by: Ian Jackson <iwj@xenproject.org>

Although this is trying to fix the current breakage, I would still
want to see a full maintainer review.

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 11:35:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 11:35:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.87991.165315 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lE9Uz-0000cy-Ms; Mon, 22 Feb 2021 11:35:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 87991.165315; Mon, 22 Feb 2021 11:35:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lE9Uz-0000cr-Jc; Mon, 22 Feb 2021 11:35:33 +0000
Received: by outflank-mailman (input) for mailman id 87991;
 Mon, 22 Feb 2021 11:35:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=w45/=HY=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lE9Uy-0000cm-3g
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 11:35:32 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 360fc7fc-307c-49d6-b5ae-1eca8937401c;
 Mon, 22 Feb 2021 11:35:30 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 360fc7fc-307c-49d6-b5ae-1eca8937401c
Date: Mon, 22 Feb 2021 12:35:21 +0100
From: Roger Pau Monné <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Ian Jackson
	<iwj@xenproject.org>
Subject: Re: [PATCH][4.15] x86: mirror compat argument translation area for
 32-bit PV
Message-ID: <YDOW+ftkNsG2RH3C@Air-de-Roger>
References: <bdedf018-b6c4-d0da-fb4b-8cf2d048c3b1@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <bdedf018-b6c4-d0da-fb4b-8cf2d048c3b1@suse.com>
MIME-Version: 1.0

On Mon, Feb 22, 2021 at 11:27:07AM +0100, Jan Beulich wrote:
> Now that we guard the entire Xen VA space against speculative abuse
> through hypervisor accesses to guest memory, the argument translation
> area's VA also needs to live outside this range, at least for 32-bit PV
> guests. To avoid extra is_hvm_*() conditionals, use the alternative VA
> uniformly.

Since you are double-mapping the per-domain virtual area, wouldn't it
make more sense to map it just once, outside of the Xen virtual
address space (so that it's always at PML4_ADDR(511))?

Is there anything in the per-domain area that needs to be protected
against speculative accesses?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 11:58:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 11:58:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88007.165328 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lE9rX-0002d0-Ot; Mon, 22 Feb 2021 11:58:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88007.165328; Mon, 22 Feb 2021 11:58:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lE9rX-0002ct-Km; Mon, 22 Feb 2021 11:58:51 +0000
Received: by outflank-mailman (input) for mailman id 88007;
 Mon, 22 Feb 2021 11:58:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ieRt=HY=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1lE9rW-0002co-El
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 11:58:50 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com (unknown
 [40.107.20.43]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ef804066-a81a-4f0e-80d3-870d6f1d9f50;
 Mon, 22 Feb 2021 11:58:48 +0000 (UTC)
Received: from AM6P194CA0062.EURP194.PROD.OUTLOOK.COM (2603:10a6:209:84::39)
 by HE1PR0802MB2281.eurprd08.prod.outlook.com (2603:10a6:3:c0::15) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3868.30; Mon, 22 Feb
 2021 11:58:42 +0000
Received: from VE1EUR03FT051.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:84:cafe::29) by AM6P194CA0062.outlook.office365.com
 (2603:10a6:209:84::39) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3868.27 via Frontend
 Transport; Mon, 22 Feb 2021 11:58:41 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT051.mail.protection.outlook.com (10.152.19.75) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3868.27 via Frontend Transport; Mon, 22 Feb 2021 11:58:41 +0000
Received: ("Tessian outbound 46f6cf9da5e8:v71");
 Mon, 22 Feb 2021 11:58:41 +0000
Received: from a20266b377a2.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 0EE379F5-91BF-4013-8F49-FA06ED5BB9D8.1; 
 Mon, 22 Feb 2021 11:58:35 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id a20266b377a2.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 22 Feb 2021 11:58:35 +0000
Received: from VE1PR08MB5696.eurprd08.prod.outlook.com (2603:10a6:800:1ae::15)
 by VI1PR08MB5422.eurprd08.prod.outlook.com (2603:10a6:803:12e::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3868.28; Mon, 22 Feb
 2021 11:58:33 +0000
Received: from VE1PR08MB5696.eurprd08.prod.outlook.com
 ([fe80::5c93:6e79:8f1e:a839]) by VE1PR08MB5696.eurprd08.prod.outlook.com
 ([fe80::5c93:6e79:8f1e:a839%6]) with mapi id 15.20.3868.032; Mon, 22 Feb 2021
 11:58:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ef804066-a81a-4f0e-80d3-870d6f1d9f50
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Julien
 Grall <jgrall@amazon.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH for-next] xen/arm: mm: flush_page_to_ram() only need to
 clean to PoC
Thread-Topic: [PATCH for-next] xen/arm: mm: flush_page_to_ram() only need to
 clean to PoC
Thread-Index: AQHXB7GCcUtPYwKMVk+6z4A6IyPFF6pkFLEA
Date: Mon, 22 Feb 2021 11:58:33 +0000
Message-ID: <FC521246-BD88-4D8C-82B7-6C3EFC8B00D0@arm.com>
References: <20210220175413.14640-1-julien@xen.org>
In-Reply-To: <20210220175413.14640-1-julien@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
x-mailer: Apple Mail (2.3608.120.23.2.4)
Content-Type: text/plain; charset="us-ascii"
Content-ID: <7A93A220501F2542AB64B79A445C8A1D@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0

Hi Julien,

> On 20 Feb 2021, at 17:54, Julien Grall <julien@xen.org> wrote:
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> At the moment, flush_page_to_ram() both cleans and invalidates the
> page to PoC. However, the cache line can be speculatively pulled back
> into the cache right afterwards, as it is part of the direct map.

If we take this logic further, maybe all calls to
clean_and_invalidate_dcache_va_range could be turned into
clean_dcache_va_range calls.

> 
> So it is pointless to try to invalidate the line in the data cache.
> 

But what about processors that do not speculate?

Do you expect any performance optimization here?

If so, it might be good to explain it, as I am not quite sure I get it.

Cheers
Bertrand

> Signed-off-by: Julien Grall <jgrall@amazon.com>
> ---
> xen/arch/arm/mm.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 59f8a3f15fd1..2f11d214e184 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -529,7 +529,7 @@ void flush_page_to_ram(unsigned long mfn, bool sync_icache)
> {
>     void *v = map_domain_page(_mfn(mfn));
> 
> -    clean_and_invalidate_dcache_va_range(v, PAGE_SIZE);
> +    clean_dcache_va_range(v, PAGE_SIZE);
>     unmap_domain_page(v);
> 
>     /*
> -- 
> 2.17.1
> 
> 



From xen-devel-bounces@lists.xenproject.org Mon Feb 22 12:15:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 12:15:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88025.165346 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEA7O-0004dC-JQ; Mon, 22 Feb 2021 12:15:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88025.165346; Mon, 22 Feb 2021 12:15:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEA7O-0004d5-Fx; Mon, 22 Feb 2021 12:15:14 +0000
Received: by outflank-mailman (input) for mailman id 88025;
 Mon, 22 Feb 2021 12:15:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=w45/=HY=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lEA7M-0004d0-IR
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 12:15:12 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e7f909cc-ad91-4d6e-941f-4bc55c792288;
 Mon, 22 Feb 2021 12:15:10 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e7f909cc-ad91-4d6e-941f-4bc55c792288
Date: Mon, 22 Feb 2021 13:15:01 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Ian Jackson
	<iwj@xenproject.org>, George Dunlap <george.dunlap@citrix.com>, Kevin Tian
	<kevin.tian@intel.com>, Jun Nakajima <jun.nakajima@intel.com>, Julien Grall
	<julien@xen.org>
Subject: Re: [PATCH v3 1/2][4.15] VMX: delay p2m insertion of APIC access page
Message-ID: <YDOgRZoD46xrMlRP@Air-de-Roger>
References: <4731a3a3-906a-98ac-11ba-6a0723903391@suse.com>
 <90271e69-c07e-a32c-5531-a79b10ef03dd@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <90271e69-c07e-a32c-5531-a79b10ef03dd@suse.com>
X-ClientProxiedBy: PR2P264CA0008.FRAP264.PROD.OUTLOOK.COM (2603:10a6:101::20)
 To DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 028031e4-460b-4d2c-da81-08d8d72b7dc7
X-MS-TrafficTypeDiagnostic: DM6PR03MB3740:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB3740B4F965F1C7076AB7B4C98F819@DM6PR03MB3740.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: +91O/DjU+ytCLct8hpjJ8CY4U7T5twlbYype7uvQHdSbq65Sx2Z7VUu6Nllgv8IiLN7HIgtY3eDkiOlbVjCMhcanFkmfnfQwatEBmH/ThbP03SU4twHKd2jq9jxrN2HhTYTazFvI/3DIffPMfei+zNzohhI7xZASCr7VG+29Hp3DfFRhJ/qvYqQAkUELf9A7SwBwHUi6ItKrFLPH9s6Qcsz90a8RJSAJI3tSK4plN0/gnJ4gfjJhzFJiXVHOVKfBdEySCJvvkuwYH+DpzgoIAZEOOnU6Ukovg9QyiqSH721qTAwhcBBxyP2ZXg3nIrDZAAjAcPdjPbzjB/yeO+BGh7bqOr437pbDTFoWQCJFEGb0FGX5ey2cjwMNi5XSsS8czXFJIz75tiRq78hJwnPxpB/EozrWiXSORjrUBfIVKSpjVOIKrMALKvurEMPexAd13NelJnpIPqtwkWXK5pvRxYwmXWGfqqEqXsh8nB9JMMRma8++1POhYF4csouiAwaEU3a4nbPRWMvyU3zLdYAKnQ==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(366004)(346002)(136003)(39860400002)(396003)(376002)(6666004)(54906003)(66946007)(478600001)(16526019)(9686003)(85182001)(66556008)(8936002)(66476007)(316002)(86362001)(4326008)(8676002)(6916009)(6496006)(26005)(2906002)(6486002)(186003)(33716001)(5660300002)(956004);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?dTFIcWdGbXhTcnNSbldVVFZLSS9tYzc1SXk0dmpoRDIwU1JVV0JWb3A2RDVU?=
 =?utf-8?B?OFh4cHJYRXkyYnBzV25YeXZKelpKTHQrd2syR1VUSi9BaHYzaTVzN2FwMWE3?=
 =?utf-8?B?dWhiM0ZHaVlKV0ZpakwxaDM5WTFpZ29waUdwcnpSTTFhMGRrZjZTRytuZ2Ev?=
 =?utf-8?B?UXdWL2dEZkZuRHRCQkhwSXpMNHpYR1YveUlSVUgwVStSM0RHd1czV1ZFZE1V?=
 =?utf-8?B?NERidm1Od1hNL3FPS1NLbTVyWFRyQ3dqUlV0cHBYZmJOekRjdUVRSmpTY0NR?=
 =?utf-8?B?U3dFZFpyT1lmOFRIOXd3R3dDWVNuOEJ0TlN0Ny9mdzNtY3AxeFc1SHJ0Smpx?=
 =?utf-8?B?WXgvMVVCV0gvd1U5REpSK1BPV0EyeU9BRUJ1V1BHYnFtS2hxdE45WVFSemU4?=
 =?utf-8?B?NlNNMDdjSWY5V2hkNldjanFZVlJ5SkJkamZyNjEzUGtJSklEMmYvYjJIZ3lv?=
 =?utf-8?B?WmR2OERvSDRVWUU3MmJYcE9xbCtBUFJVOVBmbGtXWXRxYlJSSWdIb052bzUr?=
 =?utf-8?B?V3dmaXRRdjJGTncwSlF4anpwV0FNbkVqK21UMi8zMnRXeUlWc3ZJNDUySEUw?=
 =?utf-8?B?cVFkVkRua1hqOHR2M2MrYkg0c3pId0M5ODlKaVVxVW0vZUJZeFdVZlBvV29Y?=
 =?utf-8?B?bkhUOTV3VHpneHFsNFBYTjRMdzMzeGNYakVDSW9IN2pjTG8yU21iUktJbEl6?=
 =?utf-8?B?M0pOY3lkWlE2Sjh2VmZTYzNvODNUWFovU3JYbkRDNjgvelRGZUo4VDFVOGYz?=
 =?utf-8?B?bFZBb1VZcEkyN1VQdHVxODk1UHNjZW5sSitjajJjSXBLNjFBQjZ3Q3ZaaGNC?=
 =?utf-8?B?WURIcjYreGI3N0N5amZoazNydGg4czNrdnBOK3ptV2RVdFBnQkdyU1FGcGlv?=
 =?utf-8?B?dDRISDNlNnBJeWxka0hiTnFzME1wUGE4YWFNM05xdHdQUnJQOVFUTjFVRW9y?=
 =?utf-8?B?elpaT2VSUjJJVkZ0bjdsSnZVVDV6aG16Ti8reHN5ZTJRMG5BTVRxTlBqMGt5?=
 =?utf-8?B?SG5CQVhlL0pwK1NUajNJMm91OC9yN3F6azM3bk5yVVZGMG90WTRRMllIRy9w?=
 =?utf-8?B?UXNLQm9WSEpIbHViNFM4LzFhN0wwOE1NSDZqSWlkamRxTEM0VUlOYUZta29Y?=
 =?utf-8?B?eVJidHJ6dlBReWUrbUFYQmNVWHB4aVR4Wlk4UnVaOHVEVVpDQ3p5YjNxQlZu?=
 =?utf-8?B?djZsNzgxTTYrenIrbms1ZUlHNFE0M3pVVWg2VWcyRktMSWE2Skt2RXpVNmJM?=
 =?utf-8?B?Ni9NSEVLeGx1TnlBYjUvMmRZZVZ4VDdkU1JNdE1ZTHU0bGxvU3F3bHE5MG5w?=
 =?utf-8?B?d3hVR3ZRMlRLUXc3RktDZlZhbTRidWRJTEx1OEhqNmprM3dMd0U4NFUyM3Nw?=
 =?utf-8?B?QUd4R3NaWmxoUm1oZlcxVzAzOG5qUG9iazRkQ3RDVjl2enRrS2dKcnpEK2dC?=
 =?utf-8?B?NEpUaTNXMDl1b05OOHo5RDdTUkttdXF3ZXptdkRZV2hFRDd4bzFOYTZnV0VJ?=
 =?utf-8?B?RkV3enlsQkxNWmprdVFucnF4d2cxa05DR2NNbW85M0xDNXNrL25VakE5bVhM?=
 =?utf-8?B?cVJhNU1PVURDaTYwMmxsUmJBaFVZMVBBVTBidGRNU1hoMzNCVmZTbmNWYTli?=
 =?utf-8?B?cTEwTnlvQ0pocThxRHlYOG9NK3FvVFZCeDZ2b1RKL3AwV3I4eHJKTDloZUhx?=
 =?utf-8?B?Vm41T0l5RmRQeUt0NGtJMlV0cEZlVzNqMXJMMWZLWVlwY2ZMelBrM1F5M0Rh?=
 =?utf-8?Q?bSnPLOUrraWEb9CmWGQp252gc0Z7sJnG755CNOJ?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 028031e4-460b-4d2c-da81-08d8d72b7dc7
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Feb 2021 12:15:06.8681
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: yLKWV8d6FqBkUYa2DTuvqdFaD5sNGIGx2+xF779ZH+NdBxgitVm919rM3B24wReJqhpAGYR6Qlr5aFz24yQSBA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB3740
X-OriginatorOrg: citrix.com

On Mon, Feb 22, 2021 at 11:56:58AM +0100, Jan Beulich wrote:
> Inserting the mapping at domain creation time leads to a memory leak
> when the creation fails later on and the domain uses separate CPU and
> IOMMU page tables - the latter requires intermediate page tables to be
> allocated, but there's no freeing of them at present in this case. Since
> we don't need the p2m insertion to happen this early, avoid the problem
> altogether by deferring it until the last possible point. This comes at
> the price of not being able to handle an error other than by crashing
> the domain.
> 
> Reported-by: Julien Grall <julien@xen.org>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

> ---
> v3: New (split out).
> ---
> Hooking p2m insertion onto arch_domain_creation_finished() isn't very
> nice, but I couldn't find any better hook (nor a good place where to
> introduce a new one). In particular there appear to be no hvm_funcs hooks
> used on a domain-wide basis (except for init/destroy of course).
> I did consider connecting this to the setting of HVM_PARAM_IDENT_PT, but
> considered it no better, especially since the tool stack could be smarter
> and avoid setting that param when not needed.

I'm not especially fond of allocating the page in one hook, mapping it
into the p2m in another, and finally setting up the VMCS fields in yet
another hook.

I would prefer a single place where, for the BSP, the page is allocated
and mapped into the p2m, while for APs only the VMCS fields are set.
IMO the setup is slightly difficult to follow when it's split across so
many different places.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 12:16:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 12:16:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88028.165358 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEA8q-0004lK-UP; Mon, 22 Feb 2021 12:16:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88028.165358; Mon, 22 Feb 2021 12:16:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEA8q-0004lD-RL; Mon, 22 Feb 2021 12:16:44 +0000
Received: by outflank-mailman (input) for mailman id 88028;
 Mon, 22 Feb 2021 12:16:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEA8p-0004l4-C5; Mon, 22 Feb 2021 12:16:43 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEA8p-0007nG-44; Mon, 22 Feb 2021 12:16:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEA8o-0001Sq-RR; Mon, 22 Feb 2021 12:16:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lEA8o-0005q2-Qw; Mon, 22 Feb 2021 12:16:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=gOFWX3sUb/7vMkFnK1dMqW65I9dAIQNw6T666hKFsA0=; b=HDGTKhCHWVFoPyeG8Z/2wUou/D
	VjeSUADsq5uEAcwFMNmbfLq27zWOv5t4QpbhlqZfc4QNZx+A9ZtYa6vmHBq07EWclxmWMr0+FHp36
	Tytda9j71DzqvHEOho5FA16Hr4if8eROwfmbBTMGGR1sWdMwB5lz1nr0UrWAsqohEJsc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159533-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 159533: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-xl-credit2:<job status>:broken:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:host-install(5):broken:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=d99676af540c2dc829999928fb81c58c80a1dce4
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 22 Feb 2021 12:16:42 +0000

flight 159533 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159533/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-credit2     <job status>                 broken
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit2   5 host-install(5)       broken blocked in 152332
 test-arm64-arm64-xl-seattle  11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                d99676af540c2dc829999928fb81c58c80a1dce4
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  205 days
Failing since        152366  2020-08-01 20:49:34 Z  204 days  353 attempts
Testing same since   159533  2021-02-22 00:10:10 Z    0 days    1 attempts

------------------------------------------------------------
4912 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  broken  
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-arm64-arm64-xl-credit2 broken
broken-step test-arm64-arm64-xl-credit2 host-install(5)

Not pushing.

(No revision log; it would be 1178887 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 12:49:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 12:49:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88037.165373 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEAei-0007ll-N0; Mon, 22 Feb 2021 12:49:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88037.165373; Mon, 22 Feb 2021 12:49:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEAei-0007le-K1; Mon, 22 Feb 2021 12:49:40 +0000
Received: by outflank-mailman (input) for mailman id 88037;
 Mon, 22 Feb 2021 12:49:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+pG5=HY=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1lEAeh-0007lZ-6d
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 12:49:39 +0000
Received: from mail-lj1-x22b.google.com (unknown [2a00:1450:4864:20::22b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 15126312-c21f-45db-8972-33c8c130575b;
 Mon, 22 Feb 2021 12:49:38 +0000 (UTC)
Received: by mail-lj1-x22b.google.com with SMTP id y7so55435342lji.7
 for <xen-devel@lists.xenproject.org>; Mon, 22 Feb 2021 04:49:38 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 15126312-c21f-45db-8972-33c8c130575b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc:content-transfer-encoding;
        bh=q1DxJj8swHEeWTeo4fAZR8P2RH2Uf8viG+/8tvr60tA=;
        b=sQWPyYDh5UPyg28k4b98VvetcT/u5cqyMYfGY8M56nJqb4fl0gZwWb6k4BuAr15i4w
         +iICfEgHtFxHmiWMxRsgLNrnaaw6qACh6s3T+huC9wzVxHo+um3T6yAx6ZS6qEvKYQhb
         WrOV9iI+hPsOjjQT8VunpIZZ0DHdHY/NAhmb9NNaXfKgEEPJpoxP1nrslY9rqxnAbTqZ
         olv8uJUjARfNRjd0inEyf35SrOXXs9osa91zqleVARHz7TmzRAzMuTuYWSKLaaxCWiFI
         sEfj/bU+O5FjztLIRCgk81nipAaPqMLQiMWwxGlKtP/YBxEWq1bc3Dk771YmOY6DsW7w
         ozqg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc:content-transfer-encoding;
        bh=q1DxJj8swHEeWTeo4fAZR8P2RH2Uf8viG+/8tvr60tA=;
        b=FWeNkpujNDrzJXJxjaYi0ApMJ9Z4qEMAm7jr9yiKT7Tc0DRYxmIITJrTxToiZCy9Mb
         NDtvsMYOE+FeuW8ChyODsre/3V5QLueFDE15Og5vnL30hylsqM59UJ9S3TebbjLc0UVW
         hMyn92z3JvZou/dIYTd+Jp4gKpk1YZ23gGAuLbAE31CC6C3b2CcB4mG5/2hz7wTGB+ww
         7ddJmgooIMZByHeHjH9aTNssHJ3VUr8IjOxfNaUIMKt7cIn5T7Mv4mTAX7SlTpE/35J0
         Ne/GzE2esgQE/3y50OFTy1sv2vXX3OXmri+Xn2IR68ZDY5QkgLcjSMR8HXzmUNJow35D
         Ny5w==
X-Gm-Message-State: AOAM530TxRg1Lsdeis/waOsaD7VeNyJgsNLbsrIvVrIRIyni7RtAr+1L
	oz+rsHFX0m07U9DO0yoXATwin5CVur64DfX7cPs=
X-Google-Smtp-Source: ABdhPJy9vJjFJ/Jr5h1qILzD/nDIRxWypyGDc05UhEIQd3n0S1GjaBpSMNlADzg96SFwGF6RGQ/miGcKYbNsoiSApik=
X-Received: by 2002:a2e:9ec6:: with SMTP id h6mr6095597ljk.12.1613998177145;
 Mon, 22 Feb 2021 04:49:37 -0800 (PST)
MIME-Version: 1.0
References: <1a3b90f4-564e-84d3-fd6a-3454e8753579@citrix.com>
 <20201015113109.GA68032@Air-de-Roger> <CAKf6xpsJYT7VCeaf6TxPNK1QD+3U9E8ST7E+mWtfDjw0k9L9dA@mail.gmail.com>
 <CAKf6xps1q9zMBeFg7C7ZhD-JcwQ6EG6+bYvvA9QT8PzzxKqMNg@mail.gmail.com>
 <20201021095809.o53b6hpvjl2lbqsi@Air-de-Roger> <CAKf6xpuTE4gBNe4YXPYh_hAMLaJduDuKL5_6aC4H=y6DRxaxvw@mail.gmail.com>
 <a4dd7778-9bd4-00c1-3056-96d435b70d70@suse.com> <CAKf6xpvKiWiU5Wsv2C1EiEFr77nMZTd+VHgkdk7qcKw1OFD8Vg@mail.gmail.com>
 <9bbf6768-a39e-2b3c-c4de-fd883cc9ef85@suse.com> <CAKf6xpuTbvGtTRHPK9Ock7rxJk4DfCumgTW7-2_PADm9cSaUBg@mail.gmail.com>
 <YDOE35zhQYwgaxke@Air-de-Roger>
In-Reply-To: <YDOE35zhQYwgaxke@Air-de-Roger>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Mon, 22 Feb 2021 07:49:25 -0500
Message-ID: <CAKf6xpvoKmFVp2HtsTVZS8w+GntpAEKXan8fB72JEy1rrWgC1A@mail.gmail.com>
Subject: Re: i915 dma faults on Xen
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	intel-gfx@lists.freedesktop.org, xen-devel <xen-devel@lists.xenproject.org>, 
	eric chanudet <eric.chanudet@gmail.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, Feb 22, 2021 at 5:18 AM Roger Pau Monné <roger.pau@citrix.com> wrote:
>
> On Fri, Feb 19, 2021 at 12:30:23PM -0500, Jason Andryuk wrote:
> > On Wed, Oct 21, 2020 at 9:59 AM Jan Beulich <jbeulich@suse.com> wrote:
> > >
> > > On 21.10.2020 15:36, Jason Andryuk wrote:
> > > > On Wed, Oct 21, 2020 at 8:53 AM Jan Beulich <jbeulich@suse.com> wrote:
> > > >>
> > > >> On 21.10.2020 14:45, Jason Andryuk wrote:
> > > >>> On Wed, Oct 21, 2020 at 5:58 AM Roger Pau Monné <roger.pau@citrix.com> wrote:
> > > >>>> Hm, it's hard to tell what's going on. In my limited experience with
> > > >>>> IOMMU faults on broken systems, there's a small range that initially
> > > >>>> triggers those, and then the device goes wonky and starts accessing a
> > > >>>> whole load of invalid addresses.
> > > >>>>
> > > >>>> You could try adding those manually using the rmrr Xen command line
> > > >>>> option [0], maybe you can figure out which range(s) are missing?
> > > >>>
> > > >>> They seem to change, so it's hard to know.  Would there be harm in
> > > >>> adding one to cover the end of RAM ( 0x04,7c80,0000 ) to (
> > > >>> 0xff,ffff,ffff )?  Maybe that would just quiet the pointless faults
> > > >>> while leaving the IOMMU enabled?
> > > >>
> > > >> While they may quieten the faults, I don't think those faults are
> > > >> pointless. They indicate some problem with the software (less
> > > >> likely the hardware, possibly the firmware) that you're using.
> > > >> Also there's the question of what the overall behavior is going
> > > >> to be when devices are permitted to access unpopulated address
> > > >> ranges. I assume you did check already that no devices have their
> > > >> BARs placed in that range?
> > > >
> > > > Isn't no-igfx already letting them try to read those unpopulated addresses?
> > >
> > > Yes, and it is for that reason that the documentation for the
> > > option says "If specifying `no-igfx` fixes anything, please
> > > report the problem." I imply from it in particular that one
> > > had better not use it for non-development purposes of whatever
> > > kind.
> >
> > I stopped seeing these DMA faults, but I didn't know what made them go
> > away.  Then when working with an older 5.4.64 kernel, I saw them
> > again.  Eric bisected down to the 5.4.y version of mainline linux
> > commit:
> >
> > commit 8195400f7ea95399f721ad21f4d663a62c65036f
> > Author: Chris Wilson <chris@chris-wilson.co.uk>
> > Date:   Mon Oct 19 11:15:23 2020 +0100
> >
> >     drm/i915: Force VT'd workarounds when running as a guest OS
> >
> >     If i915.ko is being used as a passthrough device, it does not know if
> >     the host is using intel_iommu. Mixing the iommu and gfx causes a few
> >     issues (such as scanout overfetch) which we need to workaround inside
> >     the driver, so if we detect we are running under a hypervisor, also
> >     assume the device access is being virtualised.
>
> So the commit above fixes the DMA faults seen on Linux when using a
> i915 gfx card?

Yes, DMA faults are not seen with this commit.  i915 behaves
differently when it detects VT-d active, and this commit sets the VT-d
behavior when running under any hypervisor.

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 13:19:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 13:19:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88046.165391 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEB7T-0002DZ-2B; Mon, 22 Feb 2021 13:19:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88046.165391; Mon, 22 Feb 2021 13:19:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEB7S-0002DS-VH; Mon, 22 Feb 2021 13:19:22 +0000
Received: by outflank-mailman (input) for mailman id 88046;
 Mon, 22 Feb 2021 13:19:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=V+0I=HY=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1lEB7R-0002DN-8r
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 13:19:21 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 247dde78-34ac-4b7a-85a6-69aaca5549ff;
 Mon, 22 Feb 2021 13:19:20 +0000 (UTC)
Received: from mail-ej1-f72.google.com (mail-ej1-f72.google.com
 [209.85.218.72]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-261-7SJOOPFgMHu-GualZedBhQ-1; Mon, 22 Feb 2021 08:19:17 -0500
Received: by mail-ej1-f72.google.com with SMTP id yh28so3985705ejb.11
 for <xen-devel@lists.xenproject.org>; Mon, 22 Feb 2021 05:19:17 -0800 (PST)
Received: from [192.168.1.36] (68.red-83-57-175.dynamicip.rima-tde.net.
 [83.57.175.68])
 by smtp.gmail.com with ESMTPSA id kd13sm6734645ejc.106.2021.02.22.05.19.13
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 22 Feb 2021 05:19:15 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 247dde78-34ac-4b7a-85a6-69aaca5549ff
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1613999959;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Qug2RTu/T1uSQ745n8lC2c0QqoReD0ieGgc66jNB2QI=;
	b=DP5MsoxPqZr6RjnVFpMb5qSgPTT6REsm0tdYzTax3iVGEq4Zg4xCcDtXb8Z0+YfMHgQJYn
	OOKkgW92WwYuoPy0vaIbPn59IZT9PgMdwF+rohSJgqZlU8W0GwXvWeYtkZbu8EoPH4aIww
	Cwh77KpCGW8WNcxi7aNwmYcCW/pU16I=
X-MC-Unique: 7SJOOPFgMHu-GualZedBhQ-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=Qug2RTu/T1uSQ745n8lC2c0QqoReD0ieGgc66jNB2QI=;
        b=d8xUNjHWwdEpFzNoD3YY+rWrWRkQry9JuDashrhYkop8bz+e7ue4mr9RjeXCUsm9nj
         Wyok9dvdmOP4/SuPTFT6WAHINN8BpZovDUP/AoxfC8+3mObYMrfGIsNDGmHRE88ifag0
         shTImiEWArFAhUsA3oqSM9O9xO3bcLkvK5oFsmynUrlviSz0IQv+/B+SwC4E0K5v8n5j
         gEONyFgBD7oefqaJis8UozXQyFk0vatzPCpNaK5oHxI8PcJDQ+FlYCP30qYsniN5mIAt
         3FRe1Y4b0TKHeYhaPjnoYt569d0IEIn92S03BKxt/S++gwTEMtbMGfU1M5D9GtfvVG/l
         d7fw==
X-Gm-Message-State: AOAM531dd0QThIxub6WUAbqR90FrW3JFjrQA1af8euDYLc9gzhdY7b7f
	aESwfm4ld2H4CpBJguYdT77WAQ9R5LM+CiC8MmUg3Xv5OSV2i3gMIR5hUFzJ4yd8X3y0d9MrUuR
	Dol7Mo/nwjIoWLV/g99IPHP/VFaU=
X-Received: by 2002:a17:906:cf8f:: with SMTP id um15mr5937389ejb.455.1613999956676;
        Mon, 22 Feb 2021 05:19:16 -0800 (PST)
X-Google-Smtp-Source: ABdhPJwvbw/PgfA0F/25nHKxn/HTOW5jX2GdrY67Z+Yez2DT22kLFPNtBmQLRjUdkVjUL1EidwLbmg==
X-Received: by 2002:a17:906:cf8f:: with SMTP id um15mr5937362ejb.455.1613999956555;
        Mon, 22 Feb 2021 05:19:16 -0800 (PST)
Subject: Re: [RFC PATCH v2 06/11] hw/ppc: Restrict KVM to various PPC machines
To: David Gibson <david@gibson.dropbear.id.au>
Cc: qemu-devel@nongnu.org, Aurelien Jarno <aurelien@aurel32.net>,
 Peter Maydell <peter.maydell@linaro.org>,
 Anthony Perard <anthony.perard@citrix.com>, qemu-ppc@nongnu.org,
 qemu-s390x@nongnu.org, Halil Pasic <pasic@linux.ibm.com>,
 Huacai Chen <chenhuacai@kernel.org>, xen-devel@lists.xenproject.org,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, qemu-arm@nongnu.org,
 Stefano Stabellini <sstabellini@kernel.org>,
 Paolo Bonzini <pbonzini@redhat.com>, kvm@vger.kernel.org,
 BALATON Zoltan <balaton@eik.bme.hu>, Leif Lindholm <leif@nuviainc.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Radoslaw Biernacki <rad@semihalf.com>,
 Alistair Francis <alistair@alistair23.me>, Paul Durrant <paul@xen.org>,
 Eduardo Habkost <ehabkost@redhat.com>, "Michael S. Tsirkin"
 <mst@redhat.com>, Thomas Huth <thuth@redhat.com>,
 Jiaxun Yang <jiaxun.yang@flygoat.com>,
 =?UTF-8?Q?Herv=c3=a9_Poussineau?= <hpoussin@reactos.org>,
 Greg Kurz <groug@kaod.org>, Christian Borntraeger <borntraeger@de.ibm.com>,
 Cornelia Huck <cohuck@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 David Hildenbrand <david@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <f4bug@amsat.org>
References: <20210219173847.2054123-1-philmd@redhat.com>
 <20210219173847.2054123-7-philmd@redhat.com>
 <YDNIQiHG0nfKXNR8@yekko.fritz.box>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>
Message-ID: <e28dc7fe-3a78-6b24-0034-830909f71f8e@redhat.com>
Date: Mon, 22 Feb 2021 14:19:13 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <YDNIQiHG0nfKXNR8@yekko.fritz.box>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=WINDOWS-1252
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 2/22/21 6:59 AM, David Gibson wrote:
> On Fri, Feb 19, 2021 at 06:38:42PM +0100, Philippe Mathieu-Daudé wrote:
>> Restrict KVM to the following PPC machines:
>> - 40p
>> - bamboo
>> - g3beige
>> - mac99
>> - mpc8544ds
>> - ppce500
>> - pseries
>> - sam460ex
>> - virtex-ml507
> 
> Hrm.
> 
> The reason this list is kind of surprising is because there are 3
> different "flavours" of KVM on ppc: KVM HV ("pseries" only), KVM PR
> (almost any combination, theoretically, but kind of buggy in
> practice), and the Book E specific KVM (Book-E systems with HV
> extensions only).
> 
> But basically, qemu explicitly managing what accelerators are
> available for each machine seems the wrong way around to me.  The
> approach we've generally taken is that qemu requests the specific
> features it needs of KVM, and KVM tells us whether it can supply those
> or not (which may involve selecting between one of the several
> flavours).
> 
> That way we can extend KVM to cover more situations without needing
> corresponding changes in qemu every time.

OK, thanks for the information. I'll wait until the other patches
get reviewed (in particular the most important ones, 2 and 10)
before respinning with this information included.

Regards,

Phil.



From xen-devel-bounces@lists.xenproject.org Mon Feb 22 13:37:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 13:37:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88050.165402 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEBOm-0003w3-Hj; Mon, 22 Feb 2021 13:37:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88050.165402; Mon, 22 Feb 2021 13:37:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEBOm-0003vw-Ek; Mon, 22 Feb 2021 13:37:16 +0000
Received: by outflank-mailman (input) for mailman id 88050;
 Mon, 22 Feb 2021 13:37:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lEBOl-0003vr-Bx
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 13:37:15 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lEBOj-0000gx-IT; Mon, 22 Feb 2021 13:37:13 +0000
Received: from [54.239.6.190] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lEBOj-0001yN-9W; Mon, 22 Feb 2021 13:37:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=n/6qo3FbQGuHJxi+92hPDUzmYPEfFgLFqs9z12dsc3g=; b=1iAx2yWkM7luovKAPJxO89KN46
	gR/EXvpMNyVOi76xLuEhhaWKyrhIDtcnBCSIYd3dMQdqOFfmh9gBDTGP/Tk6AVY0D5NyYtvtHAJnj
	1vz1xK6lCWB7Sf8QXiSJ6oAnLOue/AOrgkfc/SoTI1JaJxUO4mKxAQfKKrZewOmF1T/I=;
Subject: Re: [PATCH for-next] xen/arm: mm: flush_page_to_ram() only need to
 clean to PoC
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Julien Grall <jgrall@amazon.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20210220175413.14640-1-julien@xen.org>
 <FC521246-BD88-4D8C-82B7-6C3EFC8B00D0@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <45cd6455-3ad0-f052-65d8-37adb658f003@xen.org>
Date: Mon, 22 Feb 2021 13:37:11 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <FC521246-BD88-4D8C-82B7-6C3EFC8B00D0@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 22/02/2021 11:58, Bertrand Marquis wrote:
> Hi Julien,
> 
>> On 20 Feb 2021, at 17:54, Julien Grall <julien@xen.org> wrote:
>>
>> From: Julien Grall <jgrall@amazon.com>
>>
>> At the moment, flush_page_to_ram() is both cleaning and invalidating
>> the page to PoC. However, the cache line can be speculatively pulled
>> back into the cache right after, as it is part of the direct map.
> 
> If we go further through this logic maybe all calls to
> clean_and_invalidate_dcache_va_range could be transformed in a
> clean_dcache_va_range.

Likely yes. But I need to go through them one by one to confirm it is 
fine to do (it also depends on the caching attributes used). I have 
sent this one in advance because it was discussed as part of XSA-364.

> 
>>
>> So it is pointless to try to invalidate the line in the data cache.
>>
> 
> But what about processors which would not speculate ?
> 
> Do you expect any performance optimization here ?

When invalidating a line, you effectively remove it from the cache. If 
the page is going to be accessed shortly after, then you will have to 
load it from memory (or another cache).

With this change, you would only need to re-fetch the line if it was 
evicted by the time it is accessed.

The line would be clean, so I would expect the eviction to have less of 
an impact than re-fetching from memory.

> 
> If so it might be good to explain it as I am not quite sure I get it.

This change is less about performance and more about unnecessary work.

The processor is likely going to be more clever than the developer, and 
the exact numbers will vary depending on how the processor decides to 
manage the cache.

In general, we should avoid interfering too much with the cache without 
a good reason to do so.

How about the following commit message:

"
At the moment, flush_page_to_ram() is both cleaning and invalidating 
the page to PoC.

The goal of flush_page_to_ram() is to prevent corruption when the guest 
has disabled the cache (the cache line may be dirty) and could 
otherwise read the page's previous content.

Per this definition, invalidating the line is not necessary. In fact, 
it may be counter-productive, as the line may be (speculatively) 
accessed again shortly after, which would incur an expensive access to 
memory.

More generally, we should avoid interfering too much with the cache. 
Therefore, flush_page_to_ram() is updated to only clean the page to 
PoC.

The performance impact of this change will depend on your 
workload/processor.
"

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 13:45:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 13:45:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88075.165433 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEBWa-0005EF-Pz; Mon, 22 Feb 2021 13:45:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88075.165433; Mon, 22 Feb 2021 13:45:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEBWa-0005E8-MJ; Mon, 22 Feb 2021 13:45:20 +0000
Received: by outflank-mailman (input) for mailman id 88075;
 Mon, 22 Feb 2021 13:45:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ieRt=HY=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1lEBWY-0005E3-GE
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 13:45:18 +0000
Received: from EUR02-AM5-obe.outbound.protection.outlook.com (unknown
 [40.107.0.59]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2c4db3e3-c542-4f97-b96f-a89c2b8bb81c;
 Mon, 22 Feb 2021 13:45:16 +0000 (UTC)
Received: from AM6PR0202CA0044.eurprd02.prod.outlook.com
 (2603:10a6:20b:3a::21) by VI1PR0801MB1679.eurprd08.prod.outlook.com
 (2603:10a6:800:59::22) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3868.29; Mon, 22 Feb
 2021 13:45:14 +0000
Received: from AM5EUR03FT018.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:3a:cafe::38) by AM6PR0202CA0044.outlook.office365.com
 (2603:10a6:20b:3a::21) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3868.27 via Frontend
 Transport; Mon, 22 Feb 2021 13:45:14 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT018.mail.protection.outlook.com (10.152.16.114) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3868.27 via Frontend Transport; Mon, 22 Feb 2021 13:45:13 +0000
Received: ("Tessian outbound f8d85101260a:v71");
 Mon, 22 Feb 2021 13:45:13 +0000
Received: from 0a4daea76858.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 8E71B778-7DD9-4127-A522-7B8E3DC2D7D7.1; 
 Mon, 22 Feb 2021 13:45:07 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 0a4daea76858.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 22 Feb 2021 13:45:07 +0000
Received: from VE1PR08MB5696.eurprd08.prod.outlook.com (2603:10a6:800:1ae::15)
 by VE1PR08MB5758.eurprd08.prod.outlook.com (2603:10a6:800:1a0::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3868.29; Mon, 22 Feb
 2021 13:45:06 +0000
Received: from VE1PR08MB5696.eurprd08.prod.outlook.com
 ([fe80::5c93:6e79:8f1e:a839]) by VE1PR08MB5696.eurprd08.prod.outlook.com
 ([fe80::5c93:6e79:8f1e:a839%6]) with mapi id 15.20.3868.032; Mon, 22 Feb 2021
 13:45:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2c4db3e3-c542-4f97-b96f-a89c2b8bb81c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=B512dvbpOnEYWD1i4RnQHZAQXptOjs6RivQrMPNyXkc=;
 b=rcZNCYjfc5XqrT+N6e7OFayE8bSXAjCPZTnr1iFq9NpHyODn0L1XuCBpqFJWSogwtDSXvyePA6xVcvqBcWNzPrEDKvLTshOp/mfIYnJdCaXUusNfPQLJ4kT4z8MpeW0/eFaBIcZMqtRZgtvoKInl/CFFSC1s2ut04aX4aW61PKU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: e18be2a99fcfbbfe
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=b/26/6d5cxdTWk+R5cOP4ZhcLYfR5H+lPnl+DPFi77IDMIQzjHh0lBlpGAk1OJWiIeg8Eg1awTvo80oo70fcK7/DLYnWalhPfTUNnpz0U8JuxJFM1annoiNBC+hQgnV+4NTNV8bexkHnKugbk3unnSRGtdc3vVIWaDpnCw2uSXLys3mk/ZPsHVQVLyLsGaqUc+dsu/kOBDwUZDO94CDUGsjMU4u9AN9Qcx/ML+ay8emshd8Sbwi6yaOMRsg3z/FR51vXOoiALl6RJxO9Ojx4jETNRKzY+h6UPfyAocuie6EIe7DOH/glIJqOVaFkWOai9f1mKo4JpfKoqd6d6xSCaQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=B512dvbpOnEYWD1i4RnQHZAQXptOjs6RivQrMPNyXkc=;
 b=Xu1oLZGVbKEBkLvT/W9noBdP0WWbD1nVoxSjW1PpiLo9HYmylfisC3sk+OrAzwSQq+QKFJaL4LJ1jIYOvUN6ALy96n7rL/19LEosjMVeOB/Vig0H0Wn5WQ8v83uxNfUd90pBQ2HemwKcuBDyJKX/2prBpJywv2GAxxd2qWhzbmfLxQuBgk8XwAts1nw28W54VC9uOhb9SzvxmgyUoPtM2PotybgtbHc34oNOLktGgFPFHuLK0G9XshqAcKUZbUAKeORxmI5HwARh2raY4kFEd4UEY8ix7LvTVmEk0qfyaFAsaxhEIulJHeu3JPph1zIlZX9j/Z0X8w3DJXNljSbtLA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=B512dvbpOnEYWD1i4RnQHZAQXptOjs6RivQrMPNyXkc=;
 b=rcZNCYjfc5XqrT+N6e7OFayE8bSXAjCPZTnr1iFq9NpHyODn0L1XuCBpqFJWSogwtDSXvyePA6xVcvqBcWNzPrEDKvLTshOp/mfIYnJdCaXUusNfPQLJ4kT4z8MpeW0/eFaBIcZMqtRZgtvoKInl/CFFSC1s2ut04aX4aW61PKU=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"iwj@xenproject.org" <iwj@xenproject.org>, Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH for-4.15] xen/vgic: Implement write to ISPENDR in vGICv{2,
 3}
Thread-Topic: [PATCH for-4.15] xen/vgic: Implement write to ISPENDR in
 vGICv{2, 3}
Thread-Index: AQHXB5FR40tx26Grzkm8HRIE12rerqpkMreA
Date: Mon, 22 Feb 2021 13:45:05 +0000
Message-ID: <F86904EB-91E9-475C-B60B-E08C5C9E76C3@arm.com>
References: <20210220140412.31610-1-julien@xen.org>
In-Reply-To: <20210220140412.31610-1-julien@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [86.26.33.241]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 01ba3c79-59d5-4af0-33ef-08d8d7381498
x-ms-traffictypediagnostic: VE1PR08MB5758:|VI1PR0801MB1679:
X-Microsoft-Antispam-PRVS:
	<VI1PR0801MB1679BF7D3F88D72B4076293C9D819@VI1PR0801MB1679.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <BCDA3449C65DDA48B38A5A65A2E5905C@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR08MB5758
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT018.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	37037d5e-2a82-4708-b542-08d8d7380fee
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Feb 2021 13:45:13.5835
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 01ba3c79-59d5-4af0-33ef-08d8d7381498
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT018.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0801MB1679

Hi Julien,

> On 20 Feb 2021, at 14:04, Julien Grall <julien@xen.org> wrote:
>
> From: Julien Grall <jgrall@amazon.com>
>
> Currently, Xen will send a data abort to a guest trying to write to the
> ISPENDR.
>
> Unfortunately, recent versions of Linux (at least 5.9+) will start
> writing to the register if an interrupt needs to be re-triggered
> (see the callback irq_retrigger). This can happen when a driver (such as
> the xgbe network driver on AMD Seattle) re-enables an interrupt:
>
> (XEN) d0v0: vGICD: unhandled word write 0x00000004000000 to ISPENDR44
> [...]
> [   25.635837] Unhandled fault at 0xffff80001000522c
> [...]
> [   25.818716]  gic_retrigger+0x2c/0x38
> [   25.822361]  irq_startup+0x78/0x138
> [   25.825920]  __enable_irq+0x70/0x80
> [   25.829478]  enable_irq+0x50/0xa0
> [   25.832864]  xgbe_one_poll+0xc8/0xd8
> [   25.836509]  net_rx_action+0x110/0x3a8
> [   25.840328]  __do_softirq+0x124/0x288
> [   25.844061]  irq_exit+0xe0/0xf0
> [   25.847272]  __handle_domain_irq+0x68/0xc0
> [   25.851442]  gic_handle_irq+0xa8/0xe0
> [   25.855171]  el1_irq+0xb0/0x180
> [   25.858383]  arch_cpu_idle+0x18/0x28
> [   25.862028]  default_idle_call+0x24/0x5c
> [   25.866021]  do_idle+0x204/0x278
> [   25.869319]  cpu_startup_entry+0x24/0x68
> [   25.873313]  rest_init+0xd4/0xe4
> [   25.876611]  arch_call_rest_init+0x10/0x1c
> [   25.880777]  start_kernel+0x5b8/0x5ec
>
> As a consequence, the OS may become unusable.
>
> Implementing the write part of ISPENDR is somewhat easy. For
> virtual interrupts, we only need to inject the interrupt again.
>
> For physical interrupts, we need to be more careful, as the de-activation
> of the virtual interrupt will be propagated to the physical distributor.
> For simplicity, the physical interrupt will be set pending, so the
> workflow will not differ from a "real" interrupt.
>
> Longer term, we could possibly activate the physical interrupt directly
> and avoid taking an exception to inject the interrupt into the domain.
> (This is the approach taken by the new vGIC based on KVM.)
>
> Signed-off-by: Julien Grall <jgrall@amazon.com>

This is something which will not be done by a guest very often, so I think
your implementation actually makes it simpler and reduces the possibility
of race conditions; I am not even sure the XXX comment is needed.
But I am OK with it being in or not, so:

Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

I did some tests by manually generating interrupts and I can confirm that
this works as expected.

Cheers
Bertrand

>
> ---
>
> Note that this doesn't touch the read part of I{S,C}PENDR nor the write
> part of ICPENDR, because they are more complex to implement.
>
> For physical interrupts, I didn't implement the same solution as KVM
> because I couldn't convince myself this could be done race-free for
> physical interrupts.
>
> This was tested using the IRQ debugfs (CONFIG_GENERIC_IRQ_DEBUGFS=y),
> which allows retriggering an interrupt:
>
> 42sh> echo trigger > /sys/kernel/debug/irq/irqs/<irq>
>
> This patch is a candidate for 4.15 and also for backporting to older
> trees. Without this patch, recent Linux versions may not boot on Xen on
> some platforms (such as the AMD Seattle used in OssTest).
>
> The patch is self-contained to implementing a single set of registers,
> so it should not introduce any risk on platforms where OSes don't use
> those registers.
>
> For other setups (e.g. AMD Seattle + Linux 5.9+), it cannot be worse
> than today.
>
> Therefore, I would consider the risk limited.
> ---
> xen/arch/arm/vgic-v2.c     | 10 ++++----
> xen/arch/arm/vgic-v3.c     | 18 ++++++---------
> xen/arch/arm/vgic.c        | 47 ++++++++++++++++++++++++++++++++++++++
> xen/include/asm-arm/vgic.h |  2 ++
> 4 files changed, 62 insertions(+), 15 deletions(-)
>
> diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
> index 64b141fea586..b2da886adc18 100644
> --- a/xen/arch/arm/vgic-v2.c
> +++ b/xen/arch/arm/vgic-v2.c
> @@ -472,10 +472,12 @@ static int vgic_v2_distr_mmio_write(struct vcpu *v, mmio_info_t *info,
>
>     case VRANGE32(GICD_ISPENDR, GICD_ISPENDRN):
>         if ( dabt.size != DABT_WORD ) goto bad_width;
> -        printk(XENLOG_G_ERR
> -               "%pv: vGICD: unhandled word write %#"PRIregister" to ISPENDR%d\n",
> -               v, r, gicd_reg - GICD_ISPENDR);
> -        return 0;
> +        rank = vgic_rank_offset(v, 1, gicd_reg - GICD_ISPENDR, DABT_WORD);
> +        if ( rank == NULL ) goto write_ignore;
> +
> +        vgic_set_irqs_pending(v, r, rank->index);
> +
> +        return 1;
>
>     case VRANGE32(GICD_ICPENDR, GICD_ICPENDRN):
>         if ( dabt.size != DABT_WORD ) goto bad_width;
> diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
> index fd8cfc156d0c..613f37abab5e 100644
> --- a/xen/arch/arm/vgic-v3.c
> +++ b/xen/arch/arm/vgic-v3.c
> @@ -808,10 +808,12 @@ static int __vgic_v3_distr_common_mmio_write(const char *name, struct vcpu *v,
>
>     case VRANGE32(GICD_ISPENDR, GICD_ISPENDRN):
>         if ( dabt.size != DABT_WORD ) goto bad_width;
> -        printk(XENLOG_G_ERR
> -               "%pv: %s: unhandled word write %#"PRIregister" to ISPENDR%d\n",
> -               v, name, r, reg - GICD_ISPENDR);
> -        return 0;
> +        rank = vgic_rank_offset(v, 1, reg - GICD_ISPENDR, DABT_WORD);
> +        if ( rank == NULL ) goto write_ignore;
> +
> +        vgic_set_irqs_pending(v, r, rank->index);
> +
> +        return 1;
>
>     case VRANGE32(GICD_ICPENDR, GICD_ICPENDRN):
>         if ( dabt.size != DABT_WORD ) goto bad_width;
> @@ -975,6 +977,7 @@ static int vgic_v3_rdistr_sgi_mmio_write(struct vcpu *v, mmio_info_t *info,
>     case VREG32(GICR_ICACTIVER0):
>     case VREG32(GICR_ICFGR1):
>     case VRANGE32(GICR_IPRIORITYR0, GICR_IPRIORITYR7):
> +    case VREG32(GICR_ISPENDR0):
>          /*
>           * Above registers offset are common with GICD.
>           * So handle common with GICD handling
> @@ -982,13 +985,6 @@ static int vgic_v3_rdistr_sgi_mmio_write(struct vcpu *v, mmio_info_t *info,
>         return __vgic_v3_distr_common_mmio_write("vGICR: SGI", v,
>                                                  info, gicr_reg, r);
>
> -    case VREG32(GICR_ISPENDR0):
> -        if ( dabt.size != DABT_WORD ) goto bad_width;
> -        printk(XENLOG_G_ERR
> -               "%pv: vGICR: SGI: unhandled word write %#"PRIregister" to ISPENDR0\n",
> -               v, r);
> -        return 0;
> -
>     case VREG32(GICR_ICPENDR0):
>         if ( dabt.size != DABT_WORD ) goto bad_width;
>         printk(XENLOG_G_ERR
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index 82f524a35c9e..8f9400a51960 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -423,6 +423,53 @@ void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
>     }
> }
>
> +void vgic_set_irqs_pending(struct vcpu *v, uint32_t r, unsigned int rank)
> +{
> +    const unsigned long mask = r;
> +    unsigned int i;
> +    /* The first rank is always per-vCPU */
> +    bool private = rank == 0;
> +
> +    /* LPIs will never be set pending via this function */
> +    ASSERT(!is_lpi(32 * rank + 31));
> +
> +    for_each_set_bit( i, &mask, 32 )
> +    {
> +        unsigned int irq = i + 32 * rank;
> +
> +        if ( !private )
> +        {
> +            struct pending_irq *p = spi_to_pending(v->domain, irq);
> +
> +            /*
> +             * When the domain sets the pending state for a HW interrupt on
> +             * the virtual distributor, we set the pending state on the
> +             * physical distributor.
> +             *
> +             * XXX: Investigate whether we would be able to set the
> +             * physical interrupt active and save an interruption. (This
> +             * is what the new vGIC does).
> +             */
> +            if ( p->desc != NULL )
> +            {
> +                unsigned long flags;
> +
> +                spin_lock_irqsave(&p->desc->lock, flags);
> +                gic_set_pending_state(p->desc, true);
> +                spin_unlock_irqrestore(&p->desc->lock, flags);
> +                continue;
> +            }
> +        }
> +
> +        /*
> +         * If the interrupt is per-vCPU, then we want to inject the vIRQ
> +         * to v, otherwise we should let the function figuring out the
> +         * correct vCPU.
> +         */
> +        vgic_inject_irq(v->domain, private ? v : NULL, irq, true);
> +    }
> +}
> +
> bool vgic_to_sgi(struct vcpu *v, register_t sgir, enum gic_sgi_mode irqmode,
>                  int virq, const struct sgi_target *target)
> {
> diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
> index ce1e3c4bbdac..62c2ae538db2 100644
> --- a/xen/include/asm-arm/vgic.h
> +++ b/xen/include/asm-arm/vgic.h
> @@ -288,6 +288,8 @@ extern struct vgic_irq_rank *vgic_rank_offset(struct vcpu *v, int b, int n, int
> extern struct vgic_irq_rank *vgic_rank_irq(struct vcpu *v, unsigned int irq);
> extern void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n);
> extern void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n);
> +extern void vgic_set_irqs_pending(struct vcpu *v, uint32_t r,
> +                                  unsigned int rank);
> extern void register_vgic_ops(struct domain *d, const struct vgic_ops *ops);
> int vgic_v2_init(struct domain *d, int *mmio_count);
> int vgic_v3_init(struct domain *d, int *mmio_count);
> --
> 2.17.1
>



From xen-devel-bounces@lists.xenproject.org Mon Feb 22 13:49:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 13:49:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88081.165445 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEBaS-0005VU-E5; Mon, 22 Feb 2021 13:49:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88081.165445; Mon, 22 Feb 2021 13:49:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEBaS-0005VN-Az; Mon, 22 Feb 2021 13:49:20 +0000
Received: by outflank-mailman (input) for mailman id 88081;
 Mon, 22 Feb 2021 13:49:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ieRt=HY=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1lEBaR-0005VI-Bk
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 13:49:19 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:fe0c::624])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 707b8b15-775d-4791-ab09-5d7b0f2bbfe5;
 Mon, 22 Feb 2021 13:49:17 +0000 (UTC)
Received: from DB6PR0801CA0048.eurprd08.prod.outlook.com (2603:10a6:4:2b::16)
 by AM9PR08MB6305.eurprd08.prod.outlook.com (2603:10a6:20b:284::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3868.29; Mon, 22 Feb
 2021 13:49:16 +0000
Received: from DB5EUR03FT060.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:2b:cafe::62) by DB6PR0801CA0048.outlook.office365.com
 (2603:10a6:4:2b::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3868.27 via Frontend
 Transport; Mon, 22 Feb 2021 13:49:16 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT060.mail.protection.outlook.com (10.152.21.231) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3868.27 via Frontend Transport; Mon, 22 Feb 2021 13:49:16 +0000
Received: ("Tessian outbound 2db1bbc8a1d2:v71");
 Mon, 22 Feb 2021 13:49:16 +0000
Received: from c58ce23b36f1.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 7BED9DF4-0D4D-4AAE-9B58-32D923B10E61.1; 
 Mon, 22 Feb 2021 13:49:10 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id c58ce23b36f1.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 22 Feb 2021 13:49:10 +0000
Received: from VE1PR08MB5696.eurprd08.prod.outlook.com (2603:10a6:800:1ae::15)
 by VI1PR08MB2861.eurprd08.prod.outlook.com (2603:10a6:802:19::33)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3868.27; Mon, 22 Feb
 2021 13:48:58 +0000
Received: from VE1PR08MB5696.eurprd08.prod.outlook.com
 ([fe80::5c93:6e79:8f1e:a839]) by VE1PR08MB5696.eurprd08.prod.outlook.com
 ([fe80::5c93:6e79:8f1e:a839%6]) with mapi id 15.20.3868.032; Mon, 22 Feb 2021
 13:48:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 707b8b15-775d-4791-ab09-5d7b0f2bbfe5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=h8m3exn3bRMvrS0WFhcJORaBO+3ZwcCWXcVBfe7b/LY=;
 b=qlDmVQxZBHxa+rolFXsHoLvA9cyRkTrFwBE8cfvQhl58aS4b6/Oz+rlZMnoqgVy9v9XNOPsWZtBwIpC2KLcWUKB3CIxtWiL4NlG08WCakEccG6zAqv+SdUqvm3FDUMD8z8HamihJS+VAgI8G12SWqEJeQF0KXMExzvfCk/A537Y=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 5bf8335f4cbe4fd4
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Ez+k5sFTWSftMQ7kmsOB9HTKSqegx+O9KeqkhW71kQtfxAgLQCs1PMAN1nbA79g1nnWSNPoYF36Pt0IH1tbVTQa8985ZsePbyD1tU3Q7XtiiKJvr6EpAqrmkLTYnYup0+CU6v7PmafEIbMETgGP9YBQoWviZaHQBBXEfYMhbJSd1KwB7Z8+jGEukMaW9+M76BPmYGHBB64Q5Cm73kXXEF6DiaHu2yJpXSJu0LaQ1i8f4i5ycM9H4wvhfp78ZUj8Fyt+k5ARyHbX42pkhH2OClyVE+AoMl6mbATUiGXvhVMSMN+XfOpGw/zzh5yepz0wT+QfQ00cNNWPkptkxYV89jA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=h8m3exn3bRMvrS0WFhcJORaBO+3ZwcCWXcVBfe7b/LY=;
 b=M2IJ0cSmBudjC2HlfjVgrr7PUBFRFlWAtdwCzNdyCaP9I8QD0PFNmcBkT3yTQfJhyJDfVrc1y5zIBX+8S84yaTOwGoWk9zQjzkn8piQfpjy77snBQCW3bfF2bCzdX1LU7Xcsu4nI6zzdDHbAZuGhXkUbPAN4EIWa6YNwy2i2TuFRxsQr/wUXcZB0jDoMAkSe0lplanw0ZueIpYD9N42WH25dLZIjF7nxaxnlVE9vJutVgEmWl5V5gVgDZlBRYFPzoEMVaTgN3X1FPh+FWdxGvhSTyKmDQWBZbbaKhAH8/+7LlX0VQD14HjAsYAL4YJmcLyDmP7rHVLCJ8XsCmrTNBA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=h8m3exn3bRMvrS0WFhcJORaBO+3ZwcCWXcVBfe7b/LY=;
 b=qlDmVQxZBHxa+rolFXsHoLvA9cyRkTrFwBE8cfvQhl58aS4b6/Oz+rlZMnoqgVy9v9XNOPsWZtBwIpC2KLcWUKB3CIxtWiL4NlG08WCakEccG6zAqv+SdUqvm3FDUMD8z8HamihJS+VAgI8G12SWqEJeQF0KXMExzvfCk/A537Y=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Julien
 Grall <jgrall@amazon.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH for-next] xen/arm: mm: flush_page_to_ram() only need to
 clean to PoC
Thread-Topic: [PATCH for-next] xen/arm: mm: flush_page_to_ram() only need to
 clean to PoC
Thread-Index: AQHXB7GCcUtPYwKMVk+6z4A6IyPFF6pkFLEAgAAbkICAAANLAA==
Date: Mon, 22 Feb 2021 13:48:58 +0000
Message-ID: <7ADEDCE0-4C50-47B5-AA15-C5DF841FA330@arm.com>
References: <20210220175413.14640-1-julien@xen.org>
 <FC521246-BD88-4D8C-82B7-6C3EFC8B00D0@arm.com>
 <45cd6455-3ad0-f052-65d8-37adb658f003@xen.org>
In-Reply-To: <45cd6455-3ad0-f052-65d8-37adb658f003@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [86.26.33.241]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 1eb19402-80b1-41dc-7940-08d8d738a510
x-ms-traffictypediagnostic: VI1PR08MB2861:|AM9PR08MB6305:
X-Microsoft-Antispam-PRVS:
	<AM9PR08MB630591DA183AA622C48E2BAA9D819@AM9PR08MB6305.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <33F0E59B582856408D3941AAB6C886F6@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB2861
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT060.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	d693e6b7-65bd-4428-4176-08d8d7389ad1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	1T89p4YaBifA/1AIzx2B1T66pMj1eXCLuSze4OYFTS2RkYm+HLrcuMYmsp98/ABYV8bkwnT+IpKKwksOa+/93uGHuFlLL12CloUPIyz/JUvnJ2tldoK3Cxu6dT7cM0qigCRXuG+l7+oCGao+h/EhPygwY5J8+7LPsxnn0ZtLfjQF6+HNnXqgghS0X8YK3yBIXw652CtaE+qGKSG9sdbFE9amHtUxl+x+cMQol+vcBn0IvhB2GsycRw1TS6OgiltovOgP2o6TJoXrHf4TDfvXR/c2F+cv6KefO55O+XV25NMVfwGHA1pBwp+MC15h6nQ1cZQurFGCa/49/qojeBtqvCuoVdr3qsFkR3C0XIpBOO1wtwYOpMEu38VhydNESsXQzY6/D6DXFYI4t4y31nslmsAFo6Wp+OF7nTGlCiRdIi+p5FViyf7INHz6z9Vw5yfRiIl1BW8GJsqWS+HRuSI9+WRFbyIX+GZinxEEHwIpxlgD4vfyHpJrJ8dgZBb0RYbzOQNfakQ5tbEQ0fI8KwFEIPLgCjS81XatfyTcfD0vOJHVt5cTfN0vt7rYFb+1NS6iDjWP3++8ONdJwTrO9XvAnCPu6jVdluIqOWG6Q8BzgRlugjtYbkloSOS42/7NSwrsa1eCAfAa6c0ZoSrguNn2FBDPBkjY7h3vBqM3CmU+wJs=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(376002)(346002)(396003)(39850400004)(136003)(36840700001)(46966006)(478600001)(186003)(5660300002)(83380400001)(356005)(4326008)(2616005)(36860700001)(316002)(6486002)(6862004)(33656002)(54906003)(336012)(82740400003)(8936002)(55236004)(86362001)(8676002)(36756003)(6512007)(2906002)(81166007)(70586007)(47076005)(70206006)(6506007)(53546011)(82310400003)(26005)(107886003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Feb 2021 13:49:16.0325
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 1eb19402-80b1-41dc-7940-08d8d738a510
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT060.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB6305

Hi Julien,

> On 22 Feb 2021, at 13:37, Julien Grall <julien@xen.org> wrote:
>
> On 22/02/2021 11:58, Bertrand Marquis wrote:
>> Hi Julien,
>>> On 20 Feb 2021, at 17:54, Julien Grall <julien@xen.org> wrote:
>>>
>>> From: Julien Grall <jgrall@amazon.com>
>>>
>>> At the moment, flush_page_to_ram() both cleans and invalidates the
>>> page to PoC. However, the cache line can be speculated and pulled into
>>> the cache right after, as it is part of the direct map.
>> If we go further with this logic, maybe all calls to
>> clean_and_invalidate_dcache_va_range could be turned into
>> clean_dcache_va_range.
>
> Likely yes. But I need to go through them one by one to confirm this is
> fine to do (it also depends on the caching attributes used). I have sent
> this one in advance because this was discussed as part of XSA-364.

Ok.

>
>>>
>>> So it is pointless to try to invalidate the line in the data cache.
>>>
>> But what about processors which do not speculate?
>> Do you expect any performance optimization here?
>
> When invalidating a line, you effectively remove it from the cache. If
> the page is going to be accessed shortly after, then you will have to
> load it from memory (or another cache).
>
> With this change, you would only need to re-fetch the line if it was
> evicted by the time it is accessed.
>
> The line would be clean, so I would expect the eviction to have less of
> an impact than re-fetching from memory.

Makes sense.

>
>> If so, it might be good to explain it, as I am not quite sure I get it.
>
> This change is less about performance and more about avoiding
> unnecessary work.
>
> The processor is likely going to be more clever than the developer, and
> the exact numbers will vary depending on how the processor decides to
> manage the cache.
>
> In general, we should avoid interfering too much with the cache without
> a good reason to do so.
>
> How about the following commit message:
>
> "
> At the moment, flush_page_to_ram() both cleans and invalidates the page
> to PoC.
>
> The goal of flush_page_to_ram() is to prevent corruption when the guest
> has disabled the cache (the cache line may be dirty) and might otherwise
> read stale content.
>
> Per this definition, invalidating the line is not necessary. In fact, it
> may be counter-productive, as the line may be (speculatively) accessed
> shortly after; this would incur an expensive access to memory.
>
> More generally, we should avoid interfering too much with the cache.
> Therefore, flush_page_to_ram() is updated to only clean the page to PoC.
>
> The performance impact of this change will depend on your
> workload/processor.
> "
>

With this as your commit message you can add my:

Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Thanks for the details.

Cheers
Bertrand

> --
> Julien Grall



From xen-devel-bounces@lists.xenproject.org Mon Feb 22 14:04:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 14:04:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88084.165457 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEBoq-0007Fo-Q5; Mon, 22 Feb 2021 14:04:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88084.165457; Mon, 22 Feb 2021 14:04:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEBoq-0007Fh-MB; Mon, 22 Feb 2021 14:04:12 +0000
Received: by outflank-mailman (input) for mailman id 88084;
 Mon, 22 Feb 2021 14:04:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3rhT=HY=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lEBop-0007Fc-Sx
 for xen-devel@lists.xen.org; Mon, 22 Feb 2021 14:04:12 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fe4616f5-5361-48db-803e-34e202b22944;
 Mon, 22 Feb 2021 14:04:10 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fe4616f5-5361-48db-803e-34e202b22944
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614002650;
  h=to:from:subject:cc:message-id:date:
   content-transfer-encoding:mime-version;
  bh=PicZB/wZknwts8R5utTU6NvVnJ3tHmNsSXCjPN56PPs=;
  b=ZS5jIIYA7rlQqVWcKnfB8RGo1YmWxVVWgIj0NE/NFIHvOkD/uzLejEej
   Sf0UVDcL89Grt2wnNSmmN5OF9PaUsBEKtUQ1uE2vQahbGOIYU+YcbdS8P
   fRxSde/2YDnUxMIFYW9a+2A4+MWy4A/0Gsp780aVKM2rYcZMxbo5rGti+
   E=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: rj74Scy0TG1GdU3wEYD8upGeuCUJLTPQX8+hYED5I0X01rduDn1a0ScuMImui0R5DHBO0oK5OU
 nWN07Scp7Mcc2Beo+3VDB4SOLlOr4wzxI+GTEJeuzywmZB7tVINc/Vi/MQvQGAMxOiiW8Zxo5X
 2V+3goCMSEwdxw2aPwJAKWxVOiiOl7x8uJ2s3Yuig+dXmeo039RUrQXX8m3L8WgH7OXHWK/3hL
 VKF9ITvp4xclaLcUROFNGDU9dr0OI3JOPVvOK39NKQZoGxqRYJhhwcnalYsoRIHcraz+VVkXz+
 b0k=
X-SBRS: 5.2
X-MesageID: 37734600
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,197,1610427600"; 
   d="scan'208";a="37734600"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=c3bePMqM/3odbsDgzL+NZirLbrzPrSiuhYRgUfadz0rpGM8a4VLG6QXiD33oaPTJrqGX1WS+g1BX4d8taVQQ3LzUnoekPs3FjNF6Yk8IwhXJ6xCICrJWrvgNhBeRwO4sm2qOorrDRQjjsoMNUYFBHH/oJNp+omgmB5N6vIYCQYvOxxAsPuDMnxL5kJwY85vkA5g1QvUOm4s/kgY2RZxk3tVP3PhaOLLBIB5ZTNr6CKiob9xvZeVk1h7rQDjHynAK9j/8pqh8H69NMtv4+y2bHU5KmdnmwwZxZ5GPJmnMmJnSaLQOg0/6xS4v7DMkCLPkDAk6fNAIyxIBh5V1JvSADA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=PicZB/wZknwts8R5utTU6NvVnJ3tHmNsSXCjPN56PPs=;
 b=CDxog2zaks3ljgXt9UaMqRQqjQVRg21lqe/oI47ns2MJ3SrHw1IM1HLFoqr3+Y4L26CQ5nDyMTWGE7S/B1xX89eYwplgd8XISid04gqWXX9cOsD9owsIwfSs2Gwc7070gwbc5c7NE174qmPTLC9IxTpr6ThXAmJqv1i3PIyNH81dJaLjH3baTXc1vedhUEYYpRrKDLoLMCa5YPGtWqGoaL/orSh40+8hrgZS0w+UMwTn+4ovr628Tn8dQ2goW2dGMFF4Qp1jARpW9SgwkqWt6twGeWpMDtrkr0SCdZmK2cUKrBXluOhPkJscgsmkDjfmiU6ccLAR98qGJ97l2km4Kg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=PicZB/wZknwts8R5utTU6NvVnJ3tHmNsSXCjPN56PPs=;
 b=Xot7mxo6CXhGtFb7Yurk9YFj7eg4exLVcner/WUr0AVHOJHe7Yk/CBaiz0WrrbjKcFDxar8c1nKzn9h99JjB84+bIfh/ZsmGGOHX38Fl0ksoD4C6FkaYFeWZHOrKUeEMG1z0KgppcSJuG5PkX1FNrubWqy9p9j+N+P6e7C1nqxY=
To: Xen-devel <xen-devel@lists.xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Stable ABI checking (take 2)
CC: Ian Jackson <iwj@xenproject.org>, George Dunlap
	<george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>, Juergen Gross <jgross@suse.com>
Message-ID: <68c93553-7db5-f43b-b3cd-b9112a8a57dc@citrix.com>
Date: Mon, 22 Feb 2021 14:03:58 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P123CA0035.GBRP123.PROD.OUTLOOK.COM (2603:10a6:600::23)
 To BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 01a9ad84-d35f-4821-d60e-08d8d73ab7aa
X-MS-TrafficTypeDiagnostic: SJ0PR03MB5776:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <SJ0PR03MB577676F7E364B03733374CBFBA819@SJ0PR03MB5776.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: bSDchbsi2ColxT841lNRC45p3R6NcruidAfqzkw2O2YgKDTlbBnHs1UnNj0hXAm0LWlw1CTqrcnV8+vtnqXge+nD80wKLyg5bzBIdKVQP5Mf39B8/YomUJIBZ6cV3STuIF2WOPCei+fbLfC9yS5U/KP2l6C2W31DWP5r069RIHMHDX0hLuhYvHtCB35if4aVebGT371TqlKxA8hCFMnvMTGVZNjJFNnL27Mg3Vu+JEdNbVVCYRvU7LFtyznzHIcIeV/2C+0eZq+g2XHxdckUOm6b4CG7+eJlKVzgIXdR0aECC5XpighVBjzX3RL/AJlLbY1fS54qIJaR6s0a7ejUg1NfiLMj9gx+epzxDJ6AZy//yMqGYTuX4v3TiEqYroccS4LzZBo7OLbdDxVJ8hi9krf060k+OQys4gOP/6m3uAxIm8zHolCtE6D6Ggf6xubxZuQxNw+bHcIDNzihY+lJvx1uE3lUY2XctXuB2xJk8bV3juTefrvnbT3GVAcbWNSgqK2az0vl858ZBT0PKIaWbX4OUuQsdY3Vr3HuC9oz5n1HMIuPHgftoeEWzO7OPiNHt9FGQK01boLYo/5HvOWHT4XpaEGBw7/jqE0AhJt3DeY=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(366004)(136003)(346002)(376002)(396003)(39860400002)(6666004)(16576012)(316002)(2616005)(26005)(956004)(4326008)(478600001)(8936002)(6916009)(86362001)(54906003)(6486002)(83380400001)(8676002)(36756003)(31686004)(5660300002)(31696002)(66946007)(66476007)(186003)(66556008)(16526019)(2906002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?bWltMzhHcTFmdDBPc1NQT3NHVThndVowK3NEZkVrOGVQZkkyejFlRTd3aXky?=
 =?utf-8?B?dlkrWkM0SnRPYTFFemh6cHQwOGdDUThjZVFvSkpiZ09NaTJjRVRtWUtBc3lR?=
 =?utf-8?B?QTFPeTkrKzI3NEZSNC9LTnVEZzZLMW9ORTc1Q3J2MFpGTE9CcjdLRDU1UGJY?=
 =?utf-8?B?bHBoZ1RKZi8zSjdxamNjTUs5S1k3OWtzS2FFZFNZQnlnOEEvcmVqRFZ6Mmh0?=
 =?utf-8?B?S1JhTlFUcitHSzRTc0pYWEV4dzVCcWhFblM5Mk9qTHRmTk5BRHdhenBvbzF6?=
 =?utf-8?B?Zmg2Q003RnZaay8xWXNCbUlTaGl6eGovZXpCOVQ3Z01mblVTUFRlMlpiaS9Z?=
 =?utf-8?B?NkFXZ0hWWmIzcXVZTzJvNU9Od01LcVA1VFg1eHNlVGdpYnRwNUZrR1ZiUkk0?=
 =?utf-8?B?N1ArTFgrQWMrblJDeVYrZTgwQU1pS1p0M3FpYnBLaFVIZWlrYlZXUDVGaDM4?=
 =?utf-8?B?bUhKR3BKd2lQM2x0ekE0VFBWeHpSQlBxcURreDhQTFhsWFVJOStHNldQamdP?=
 =?utf-8?B?enFSZm5ldFZHendCSWZxY2YrUEplRzlBMnlrTTRuV2ZXUXVzQk9ERnpEUEFN?=
 =?utf-8?B?dkYxaDIycVlHSEJTTHY0cVlnQzVhM2VVTWI5MFppVTBPVFgxTml6dGtLVllY?=
 =?utf-8?B?ckE1UG9qZER5Q0VFdjBkVzNocXlUN3gvdG03ajZEcXZlYlpyNmNOeU1CVDBK?=
 =?utf-8?B?WmppM1lPQW5qK281SkpuZU0yUE5wR1lRcDNGKzhLQ0Fyc0k4b2ZhTE1HZUFy?=
 =?utf-8?B?NFowQUx2aGFqMTFwT2pJTUhIZXFTeS9NdENuSVNHTHlDS1Q1aHhuNFJJQ3I1?=
 =?utf-8?B?VU5SazRjTHloVzdLUm4rbnVMTHVkYm9ob3hxekNnd3hQR1U0Y3J4VkJPUktX?=
 =?utf-8?B?MW9WMzdsS2dKdHBlOHRJMEhnQ1MrdlM1LzdPU3FJNDZleENkNDU1Z2NjNk8v?=
 =?utf-8?B?aytlaDVlYklVT2ZacG90MysxYkxaY3NicFRNR2pybDFac3JuOXlZTEFiR0pw?=
 =?utf-8?B?aW8yd1VadDIzYnVaRUNxckRka2Zya2tZMndvYWgxSkFzbEExS3lrMWZSR29i?=
 =?utf-8?B?Q0NjRkpXeDlFb2ZGc0paTWl3VTdqQzZZZEJYZWM3UHk5KzJWY3RjRzl0Y3Bn?=
 =?utf-8?B?aGRITHBReUprUU1rNEVGeUR5RTUyM3ZXeHVER0luUDE0QUlOZk9mZkZ0U243?=
 =?utf-8?B?aWs4RWxpRDM4Q240d1hCUHIzMll4RWpKcGUxWFkvdUtBRVU3TkpKWll1Q3Iz?=
 =?utf-8?B?UDAwZUV6UGU0bXJLVTdsbXJDZ3Q4VkdBQ252Nm85c0pTMmtlVnZtQk1ITWpz?=
 =?utf-8?B?ZzlUb3M5eXpGQ3EzSVlveENCZTVsalJhYmFSMGRJa1Z1MmlEM1RYcE1Yc25B?=
 =?utf-8?B?TFlvMGNvUUhkcjUvRzJYN3BDUkpUY0lVSHhnVU82WEkraTk1QTZKNTBwM0NY?=
 =?utf-8?B?RHlBNmpzOWhYVDlMckE3a1Q1Q1FEZlVubVNaRjE0cXNMb2pMQ3R2S1dyRzZt?=
 =?utf-8?B?cHdkVjJyK1NlVEdWSURxSmgvVTFLbUNEdVBuSE9HMG9xV1ZyZlZyaE9VYlE1?=
 =?utf-8?B?cVQvNm82NDA5eVU0NmEzczhmbnlMUVdpYmsvK0ptTnF1bnVXNnZrdHlrSTVy?=
 =?utf-8?B?YmoxTmoyamlldmdYd1BvdGQ2ZFNjaXRSeVNCNzVHN1gvRTNzY2RNTGZISVFC?=
 =?utf-8?B?bS9vME0veDEzZHZINkpRWkVheHJZTzBUeEdhQWRlOFdYclVjbEFPanFPa3hj?=
 =?utf-8?Q?WOIXK8fQ0JPETN0NyBzNqyaftkL9AVQt8JwoIDK?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 01a9ad84-d35f-4821-d60e-08d8d73ab7aa
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Feb 2021 14:04:06.3765
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: q7ogtZTzwyGAdp6bS6F7N3bkuiq4BR3TEDhhchN4KjOQKfXq3YSwc8nwSelCppqk3yW8v+w4icQmWfGngOkuLYeSteuGsFyKIuZ/iepdpaY=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5776
X-OriginatorOrg: citrix.com

Hello,

Staging is now capable of writing out an ABI description when the
appropriate tool (abi-dumper) is available.

We now have two possible courses of action for ABI checking in builds.

1) Publish the ABI descriptions on xenbits, update all downstream test
systems to invoke abi-compliance-checker manually.

2) Commit/update the ABI descriptions when RELEASE-$X.$Y.0 is tagged,
update the main build to use abi-compliance-checker when available.


Pros/Cons:

The ABI descriptions claim to be sensitive to the toolchain in use.  I
don't know how true this is in practice.

Publishing on xenbits involves obtaining even more misc artefacts during
the build, which is going to be a firm -2 from downstreams.

Committing the ABI descriptions lets ABI checking work in developer
builds (with suitable tools installed).  It also means we get checking
"for free" in Gitlab CI and OSSTest without custom logic.


Thoughts on which approach is better?  I'm leaning in favour of option 2
because it allows for consumption by developers and test systems.

If we do go with route 2, I was thinking of adding a `make check`
hierarchy.  Longer term, this can be used to queue up other unit tests
which can be run from within the build tree.
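For concreteness, under option 2 the workflow would look roughly like
the following sketch (library name, version numbers, and file names are
purely illustrative, and the libraries must be built with debug info
for abi-dumper to work):

```shell
# 1) When RELEASE-X.Y.0 is tagged, dump the stable ABI of each library
#    and commit the result:
abi-dumper libxencall.so -o libxencall-4.15.abi -lver 4.15

# 2) In later builds (developer trees, Gitlab CI, OSSTest), dump the
#    freshly built library and compare it against the committed dump:
abi-dumper libxencall.so -o libxencall-next.abi -lver next
abi-compliance-checker -l xencall \
    -old libxencall-4.15.abi -new libxencall-next.abi
```

abi-compliance-checker exits non-zero on incompatible changes, which is
what would let a `make check` target fail the build without any custom
downstream logic.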

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 14:05:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 14:05:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88087.165469 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEBqT-0007Mn-5H; Mon, 22 Feb 2021 14:05:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88087.165469; Mon, 22 Feb 2021 14:05:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEBqT-0007Mg-1R; Mon, 22 Feb 2021 14:05:53 +0000
Received: by outflank-mailman (input) for mailman id 88087;
 Mon, 22 Feb 2021 14:05:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6bXc=HY=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lEBqR-0007MZ-BY
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 14:05:51 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8e29f0f0-5302-40d2-bde1-4fefbdc427cf;
 Mon, 22 Feb 2021 14:05:50 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 89F0AADDB;
 Mon, 22 Feb 2021 14:05:49 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8e29f0f0-5302-40d2-bde1-4fefbdc427cf
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614002749; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=OZv5MLMWp4gbXkOtW95MCRMHdyhv7FEZHTNn45SU7hQ=;
	b=cZd4df3m2UNfpnV+hs5RPZXAisLsrpG1WDS1qZbRUwM1aCFcino3YBB3AOxJtY1ay/YstT
	OQSuL349mYTg6/uKti5qeT0ZqSZ8wzN+Io7Bw+AxYJlr3PKnnn5npW8+kS8kAQ1PLMlfgp
	9nNxv9tym5A6tVMOru0MERcOV3V7aSo=
Subject: Re: [PATCH v3 1/2][4.15] VMX: delay p2m insertion of APIC access page
To: Ian Jackson <iwj@xenproject.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Kevin Tian <kevin.tian@intel.com>,
 Jun Nakajima <jun.nakajima@intel.com>, Julien Grall <julien@xen.org>
References: <4731a3a3-906a-98ac-11ba-6a0723903391@suse.com>
 <90271e69-c07e-a32c-5531-a79b10ef03dd@suse.com>
 <24627.38031.77928.536108@mariner.uk.xensource.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <04a2869a-282f-783a-6c03-8a2d7209411a@suse.com>
Date: Mon, 22 Feb 2021 15:05:48 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <24627.38031.77928.536108@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 22.02.2021 12:25, Ian Jackson wrote:
> Jan Beulich writes ("[PATCH v3 1/2][4.15] VMX: delay p2m insertion of APIC access page"):
>> Inserting the mapping at domain creation time leads to a memory leak
>> when the creation fails later on and the domain uses separate CPU and
>> IOMMU page tables - the latter requires intermediate page tables to be
>> allocated, but there's no freeing of them at present in this case. Since
>> we don't need the p2m insertion to happen this early, avoid the problem
>> altogether by deferring it until the last possible point.
> 
> Thanks.
> 
>>   This comes at
>> the price of not being able to handle an error other than by crashing
>> the domain.
> 
> How worried should I be about this ?

Not overly much, I would say. The difference is between a failure
(-ENOMEM) during domain creation vs the domain getting crashed before
it first gets scheduled. The latter is certainly less friendly to the
user, but lack of memory shouldn't typically happen when creating
domains. Plus the memory in question is provided explicitly to the
domain (the p2m pool), rather than coming from a system-wide pool.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 14:12:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 14:12:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88092.165481 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEBwr-0008Ta-Vg; Mon, 22 Feb 2021 14:12:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88092.165481; Mon, 22 Feb 2021 14:12:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEBwr-0008TT-SG; Mon, 22 Feb 2021 14:12:29 +0000
Received: by outflank-mailman (input) for mailman id 88092;
 Mon, 22 Feb 2021 14:12:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6bXc=HY=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lEBwp-0008TO-Qt
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 14:12:27 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8e637493-f65c-4c81-85e4-290dd2968932;
 Mon, 22 Feb 2021 14:12:27 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2AE18ACBF;
 Mon, 22 Feb 2021 14:12:26 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8e637493-f65c-4c81-85e4-290dd2968932
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614003146; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=2/u0KYvjfvVBPZ+ouLLVqw7o5ez4RuTOPoh1vjJdpMc=;
	b=GU96mxdwLML8CRLKpH3NZNbNuGRRqqzhbkXem5xuXIbdDGrPFIZ0yqkqfVXeD+3yXHB6wU
	yiy+6duwjtHkq3XmJW7HvIQUylH8wKZjKNFke3ufoyCc6M3pp6fZgJeojlqxAvYSynGQbl
	lwl8G3Ydz/DrawhKGSXmUnerzfVK3Do=
Subject: Re: [PATCH][4.15] x86: mirror compat argument translation area for
 32-bit PV
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Ian Jackson <iwj@xenproject.org>
References: <bdedf018-b6c4-d0da-fb4b-8cf2d048c3b1@suse.com>
 <YDOW+ftkNsG2RH3C@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <de119412-7a38-1446-55a0-806bddeda06c@suse.com>
Date: Mon, 22 Feb 2021 15:12:25 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <YDOW+ftkNsG2RH3C@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 22.02.2021 12:35, Roger Pau Monné wrote:
> On Mon, Feb 22, 2021 at 11:27:07AM +0100, Jan Beulich wrote:
>> Now that we guard the entire Xen VA space against speculative abuse
>> through hypervisor accesses to guest memory, the argument translation
>> area's VA also needs to live outside this range, at least for 32-bit PV
>> guests. To avoid extra is_hvm_*() conditionals, use the alternative VA
>> uniformly.
> 
> Since you are double mapping the per-domain virtual area, won't it
> make more sense to map it just once outside of the Xen virtual space
> area? (so it's always using PML4_ADDR(511))

This would then require conditionals in paths using other parts of
the per-domain mappings for 64-bit PV, as the same range is under
guest control there.

> Is there anything concerning in the per-domain area that should be
> protected against speculative accesses?

First of all, this is an unrelated question - I'm not changing what
gets accessed there, only through which addresses these accesses
happen. What lives there are the GDT/LDT mappings, the map cache, and
the argument translation area. The guest has no control (or only very
limited control, in the GDT/LDT case) over the accesses made to this
space.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 14:13:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 14:13:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88094.165493 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEBy0-00008i-9v; Mon, 22 Feb 2021 14:13:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88094.165493; Mon, 22 Feb 2021 14:13:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEBy0-00008b-6f; Mon, 22 Feb 2021 14:13:40 +0000
Received: by outflank-mailman (input) for mailman id 88094;
 Mon, 22 Feb 2021 14:13:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=w45/=HY=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lEBxz-00008V-NH
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 14:13:39 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7adf0c51-5224-4d5a-93f2-72f8fea52bc3;
 Mon, 22 Feb 2021 14:13:38 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7adf0c51-5224-4d5a-93f2-72f8fea52bc3
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614003218;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=BDDrQ2qM6+vGCjkv3tgDQZXRKOhGXSpXwrQthCeuY20=;
  b=WcKFO+a3d5bTUIIKB1b5Jbr8nq3/nv+VObVDEdh9u9KBu97V6iw61yQR
   evriniGAbfGqqAixlVFxWVaTl/yG6JIOpnO4cpT8XEpBHdsF7/Q23foVZ
   YoqhGypOdh1eg5O8uk1IFr6OKcb4JwPWtXSELhZR+HXGC82Yam8bQ3nAr
   s=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: qNgK37lxZkDuE3C/o8EEhgu4C1QMTIa420ZqWh2bwlOSgzTLp65rmPl9gQnkC1Oylrdn0iJ/It
 k0BE3birGgyOTaatnZtougZ8tt4DBANXLAh1eIpJqTXmKSApPZOWEGAEoUR3oEysaD95CrNdTc
 nN0a3umeLRpB11fTkX43oWf4atv9Wqh1yOBToi8iRdJuWihzR8CQXh4gZKHhDbMWlz4OqptdgF
 V6742VOsItjPqC/Qx4qiJNEb07vzH+e8qOTj+ewzj0+w2t/mU3GhnOGg4RUrmo6YYllUT766Cl
 oco=
X-SBRS: 5.2
X-MesageID: 37735681
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,197,1610427600"; 
   d="scan'208";a="37735681"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=imfBha3A90v/uCd5fLejUu4bOjjO1k7qP25yq8eI8G79/uXWcVHWNugcp2EabA3m7vH49fGd+yz1w0nK5XY3IkRdHWn2pQaOxvRET+3eaao7h+I7043NA5EIOdjSNNMPykXN3aeVUQzIA4PRC8SLZWcj1+yfSF1RNwAXi7cBqoWJKBx6M1DIt8+g3VctCbQAhbzs3xljlH7BCFSthLdg+xuhiaG+CIK+Emw74JUYQBxsB9zB+Bs5dbA8Gpn/8ePKN9CDk1ToyYBx2NMup6v6YCqBt+VsIizMjQegg3731EqLqWeQqAeiAcXI9PxQDFjyDFbUDJ3D/0lPh0hLI0hpkA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fXNSbAxKetyjknVoleTu+e4FO8g8d0PVDRXqx/8BaAI=;
 b=BzYylmt57NTSMH8orP1y5aapEfgEowO7pfcgfsRvB3ZvVKww+tdW1ZpCAnbPmzHTCVTK61+o4YNPsYGZ23VTg1qTtaC64NxjkTu5WIGfGypOR6rT+FRHDZzILPJlN9g9nfDC5L2cVZraBRQTOcpyWGCQ601QRX2ebIDi/dMpBVCvtvUZtIwzDAacOAw2ibk7POCSiZ8FXKGZuNZQ14NpP3jpvVsm51tYE6rLZDcrfKmsny0KBXvz13X6bVSCWWz1cR2N5TLVr01ZjuF9AOIA+QwC0VMeTRDnTDRoyEgD2omhmV77rhF2oiy1Su1SiuXMZa8fcIlRyPSWQz047+1cJg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fXNSbAxKetyjknVoleTu+e4FO8g8d0PVDRXqx/8BaAI=;
 b=GbnV61EBOg3TRCP3drZGTSfSLrqtHBgKWv/tvUMm8TUqeTSq12rfqIcW3LMl6HHS4Vuno0xRiTS00m+rkTStv/DzHMHDmQTWVf+OU95JBDfHcTO8iyMBpd5cZCVuU2iLSmN8O+B4JRunmAwTFOsznrNZ7BOHTnMx0lBkZe1Cdvg=
Date: Mon, 22 Feb 2021 15:13:28 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
CC: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
	Wei Liu <wl@xen.org>, Ian Jackson <iwj@xenproject.org>
Subject: Re: [PATCH][4.15] x86: mirror compat argument translation area for
 32-bit PV
Message-ID: <YDO8CM0prPjoo/X1@Air-de-Roger>
References: <bdedf018-b6c4-d0da-fb4b-8cf2d048c3b1@suse.com>
 <YDOW+ftkNsG2RH3C@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <YDOW+ftkNsG2RH3C@Air-de-Roger>
X-ClientProxiedBy: MR2P264CA0141.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:30::33) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: fff865ed-d647-4e11-08c4-08d8d73c0a1c
X-MS-TrafficTypeDiagnostic: DM6PR03MB3674:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB36747AB073AF3E28126977C38F819@DM6PR03MB3674.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: Y16tFAtrDdPoRcTJ8EUmFzzhhbHRbdBmXqNwpoj3GYGUTK2RgiCutBFEjlubdnYrXcuQvm3EZ2KdJLTlTDfSTQsTCOYwmso1lUBTfjpLzYpR9JtnOwBGgipIrfzjJRQsX2VipFWe35KOjd7LZkuigmSYAWVR8zEX9leFgQ+YxXJgV/KEdWb6CW0b9Qs5jTXwrxHp65smWT2HG9OjqbVC35cViTRvSGMQ0vacXrFg9gXM6mTYvsRbj6RHiZIdETGFg08EP7i1ubHjwC9SxS5ixanrkdJrJtGeg0eVwhavLMyZXbB2DK1MEtuK+cnda0Nz92AgGnkmX69+QFvFbhPv0pPWoAMQat8uKHMlXw+sPMZ1UAxA/KGhKb5TqpoYJL1LBo62WsXdXlSVD4jo4kbxzWG8fgaBj40nhoMh93cQjq1k2JSIOFwQNDIYEbgXAOKQPwTzJt+GoGEidylYhbmI8eAvRuxCKKeYONcg284dNAJBvQQt89ffcv1LDryzQmfmT/zaGwbWKjpMqi+p2B6hEQ==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(376002)(396003)(39860400002)(136003)(346002)(366004)(8676002)(4326008)(6486002)(83380400001)(26005)(86362001)(8936002)(316002)(85182001)(54906003)(6666004)(956004)(66946007)(9686003)(6200100001)(478600001)(66556008)(5660300002)(6862004)(33716001)(186003)(2906002)(66476007)(6496006)(16526019);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?d1RlRlBsa2RLd0x2bEJFbmp1T3dVbmpQM0ZxcnI2eVFrZXhUOHBpRUNCU2N4?=
 =?utf-8?B?cjk5bTM3MUdlRnA4bU04ellkaWI4cmhhWVhtRDhsc0RwRE5xSUZCZnJ4RE5s?=
 =?utf-8?B?bUhCR3E2M0hCeXBwZ1RvTnFsSlczakE4MkN6WHQ4Qk1RbDFYbVZJWDJ1eUlw?=
 =?utf-8?B?RDZMVXJ4dHlFRmlHY01JTExaVDcwTUdyaFlSd2hYSUQ3dGdUNzU4UzJhRTN6?=
 =?utf-8?B?Vm51NEZJazZMbmFQTHlJMjBEMDBNNjkrZmVDekJDRDVvL3NZQUN2MXMrVzRp?=
 =?utf-8?B?MC9rcEVsbmNsMU54OHkwLzNFeUxVeHlMZUpNMHRLNVJrejk3RHpZanBxclpK?=
 =?utf-8?B?elcrdkpHTGlEakg3U205Q2I4SGh3K1p1dVJTeUlBV0VxK2NqMHVYWXR0TDBE?=
 =?utf-8?B?TVdkWGFnbWNBbGVlYnZiWmJpZGFsaWZNNGplVGNkU0Nhd1BZaGFRUDRZV3Qw?=
 =?utf-8?B?WCs0aWJuMXRsM3JFRCtxdTlDMnAvL01ZMlZXa1B1aXkwN2FaenROb0tpRHJO?=
 =?utf-8?B?YVF6OVdMTUJIdzdaUFRObGQreGxPbGUxYld4TC9raXVuYUl6aXZiQkIvaU0z?=
 =?utf-8?B?WmRyZlUrV1dNcTR6Vk9RNlNlU05FMzNRZW5FZHBlUDM2djNrN2pRSTlRRG1a?=
 =?utf-8?B?di9XNEJlZzVKZ2hzb3hUcWttTGFkREtUN1JrTWUrZmZYYzZaRWk4STVQc2lD?=
 =?utf-8?B?QXcvK0pYZk4xM1ZmMWJxQlMxN2VId1ZPakxFRmtudU5yVUNLc0JJWjBETFlx?=
 =?utf-8?B?c3RoTzd2eFJkYnRvNm9scW5CSUxkWWZlc2I3cjY2cUp2S1hiK3c0R1RQREE4?=
 =?utf-8?B?TEJDRGFrZm5TK0JtbDBKclYzYUpZRVZ1TDQyRXNXMWZ3ajZobms4M3FpVkRq?=
 =?utf-8?B?L1VuY3NyVWpvWUJOM3RGcWxHVHlWZkl1ZnVNT1VJdlFnV3FsS2lEamgxY2Rl?=
 =?utf-8?B?OFV2NEw3ZzlZYmtJd3JTNXdPdXk4Y3EzT1lSWm1uM1NsVHhUa0VLTTY4Vzly?=
 =?utf-8?B?a1dZZ2RHbTNZRklRblN5M01STkM5ZUV2cnQzdzhSZHFUWDZzUzdhWHdveE1S?=
 =?utf-8?B?VGU2L2xPWGw1a2ljMWl0elhCVFRncStNNHBTY2xRNXpvSDJDaHh5ODJsaFc4?=
 =?utf-8?B?K2ZDbjBuMUdVNzFzcW9TaFB6U2tWL0J3OEhCNGlXRGszL3pFSUJrdlpsczhK?=
 =?utf-8?B?Rkh0NnNXT0xjYzRRVE5yV2ZjT0NBZm5hYmNNeDFVR1dTamRFZTR5NkM3ZTNV?=
 =?utf-8?B?dUwrRXZPRW1kRGJ4MnJrMlJBeTFNV3NuTlpEYWxTUk5YWmZMbmRKT3lkTEpZ?=
 =?utf-8?B?L2lHbE5VbE4vUW9YbEQvczcyVC9LV0t0YkR0aUU2VTFvTk85UFhWbVhFaVVn?=
 =?utf-8?B?ZE1HVVlLd3pSbG5NN3h4Vm9RVkdLQS9paXRQOTlXTDBIN0pTanlvUm80bE5n?=
 =?utf-8?B?dDQvQWZKQWNGdWF0Y25FN3ZCSmVuaFBmL3Q5S0tEQ0V1dWwvOHFTcFNqNGlX?=
 =?utf-8?B?eGdEMU1UKzBpb3Zsc2VsQW5iaVlJa0NoYXd3cmcyS1ZMdGVqeXRzVHYyd0lG?=
 =?utf-8?B?bXgzU2g3SThwZ1RJRFdEQXBNbUVBTlhIcmx5YWw3aU40WlFhOUgrWWV3S2Ez?=
 =?utf-8?B?cWZNeXhDcVRJK2pkKzk2TWJSejkyTGxOVE5aeGVqUmIxeWFWY3JpdVJhVXVX?=
 =?utf-8?B?MDhJSStENFkydHdLRGVlR1FKM0pKOGFqeUpsMGFmY29pbDdka2ljQ01jeUdj?=
 =?utf-8?Q?ZIj+Z54LB5CBAbobb312OCWxm1M/6cr0g06o56Y?=
X-MS-Exchange-CrossTenant-Network-Message-Id: fff865ed-d647-4e11-08c4-08d8d73c0a1c
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Feb 2021 14:13:34.3541
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: xupR/ArkH1PgQZPjFxRdCNFOn0ybhBNuRFHakNxa/9oYPXxOyOuBKmA7kAFLZXl120t0pPCivlnR1pGIqQqw3w==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB3674
X-OriginatorOrg: citrix.com

On Mon, Feb 22, 2021 at 12:35:21PM +0100, Roger Pau Monné wrote:
> On Mon, Feb 22, 2021 at 11:27:07AM +0100, Jan Beulich wrote:
> > Now that we guard the entire Xen VA space against speculative abuse
> > through hypervisor accesses to guest memory, the argument translation
> > area's VA also needs to live outside this range, at least for 32-bit PV
> > guests. To avoid extra is_hvm_*() conditionals, use the alternative VA
> > uniformly.
> 
> Since you are double mapping the per-domain virtual area, won't it
> make more sense to map it just once outside of the Xen virtual space
> area? (so it's always using PML4_ADDR(511))

Right, that's not possible for 64-bit PV domains, because in that case
the range is guest-owned linear address space.

It seems like paravirt_ctxt_switch_to will modify the root_pgt to set
the PERDOMAIN_VIRT_START entry; does the same need to be done for
PERDOMAIN2_VIRT_START?

I would also consider giving the slot a more meaningful name, as
PERDOMAIN2_VIRT_START makes it seem like a new per-domain scratch
space, when it's just a different mapping of the existing physical
memory.

Maybe PERDOMAIN_MIRROR_VIRT_START? Or PERDOMAIN_XLAT_VIRT_START?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 14:14:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 14:14:58 +0000
Subject: Re: [PATCH][4.15] x86: mirror compat argument translation area for
 32-bit PV
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, Roger Pau Monné <roger.pau@citrix.com>,
	Ian Jackson <iwj@xenproject.org>
References: <bdedf018-b6c4-d0da-fb4b-8cf2d048c3b1@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <9eade40b-bd95-b850-2dec-f7def66c3c7b@citrix.com>
Date: Mon, 22 Feb 2021 14:14:43 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
In-Reply-To: <bdedf018-b6c4-d0da-fb4b-8cf2d048c3b1@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
MIME-Version: 1.0

On 22/02/2021 10:27, Jan Beulich wrote:
> Now that we guard the entire Xen VA space against speculative abuse
> through hypervisor accesses to guest memory, the argument translation
> area's VA also needs to live outside this range, at least for 32-bit PV
> guests. To avoid extra is_hvm_*() conditionals, use the alternative VA
> uniformly.
>
> While this could be conditionalized upon CONFIG_PV32 &&
> CONFIG_SPECULATIVE_HARDEN_GUEST_ACCESS, omitting such extra conditionals
> keeps the code more legible imo.
>
> Fixes: 4dc181599142 ("x86/PV: harden guest memory accesses against speculative abuse")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -1727,6 +1727,11 @@ void init_xen_l4_slots(l4_pgentry_t *l4t
>                 (ROOT_PAGETABLE_FIRST_XEN_SLOT + slots -
>                  l4_table_offset(XEN_VIRT_START)) * sizeof(*l4t));
>      }
> +
> +    /* Slot 511: Per-domain mappings mirror. */
> +    if ( !is_pv_64bit_domain(d) )
> +        l4t[l4_table_offset(PERDOMAIN2_VIRT_START)] =
> +            l4e_from_page(d->arch.perdomain_l3_pg, __PAGE_HYPERVISOR_RW);

This virtual address is inside the extended directmap. You're going to
need to rearrange more things than just this to make it safe.

While largely a theoretical risk as far as the directmap goes, there is
now a rather higher risk of colliding with the ERR_PTR() range. It's bad
enough that this infrastructure is inherently unsafe with 64-bit PV guests.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 14:18:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 14:18:12 +0000
From: Thomas Zimmermann <tzimmermann@suse.de>
To: airlied@linux.ie,
	daniel@ffwll.ch,
	maarten.lankhorst@linux.intel.com,
	mripard@kernel.org
Cc: dri-devel@lists.freedesktop.org,
	linux-aspeed@lists.ozlabs.org,
	linux-arm-kernel@lists.infradead.org,
	linux-mips@vger.kernel.org,
	linux-mediatek@lists.infradead.org,
	linux-amlogic@lists.infradead.org,
	linux-arm-msm@vger.kernel.org,
	freedreno@lists.freedesktop.org,
	linux-renesas-soc@vger.kernel.org,
	linux-rockchip@lists.infradead.org,
	linux-stm32@st-md-mailman.stormreply.com,
	linux-tegra@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>
Subject: [PATCH v3] drm/gem: Move drm_gem_fb_prepare_fb() to GEM atomic helpers
Date: Mon, 22 Feb 2021 15:17:56 +0100
Message-Id: <20210222141756.7864-1-tzimmermann@suse.de>
X-Mailer: git-send-email 2.30.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The function drm_gem_fb_prepare_fb() is a helper for atomic modesetting,
but is currently located next to the framebuffer helpers. Move it to the
GEM atomic helpers, rename it slightly, and adapt the drivers. Same for
the respective simple-pipe helper.

Compile-tested with x86-64, aarch64 and arm. The patch is fairly large,
but there are no functional changes.

v3:
	* remove commented-out line in drm_gem_framebuffer_helper.h
	  (Maxime)
v2:
	* rename to drm_gem_plane_helper_prepare_fb() (Daniel)
	* add tutorial-style documentation

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Acked-by: Maxime Ripard <mripard@kernel.org>
---
 drivers/gpu/drm/aspeed/aspeed_gfx_crtc.c     |  4 +-
 drivers/gpu/drm/drm_gem_atomic_helper.c      | 96 +++++++++++++++++++-
 drivers/gpu/drm/drm_gem_framebuffer_helper.c | 63 -------------
 drivers/gpu/drm/drm_gem_vram_helper.c        |  4 +-
 drivers/gpu/drm/imx/dcss/dcss-plane.c        |  4 +-
 drivers/gpu/drm/imx/ipuv3-plane.c            |  4 +-
 drivers/gpu/drm/ingenic/ingenic-drm-drv.c    |  3 +-
 drivers/gpu/drm/ingenic/ingenic-ipu.c        |  4 +-
 drivers/gpu/drm/mcde/mcde_display.c          |  4 +-
 drivers/gpu/drm/mediatek/mtk_drm_plane.c     |  6 +-
 drivers/gpu/drm/meson/meson_overlay.c        |  8 +-
 drivers/gpu/drm/meson/meson_plane.c          |  4 +-
 drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c    |  4 +-
 drivers/gpu/drm/msm/msm_atomic.c             |  4 +-
 drivers/gpu/drm/mxsfb/mxsfb_kms.c            |  6 +-
 drivers/gpu/drm/pl111/pl111_display.c        |  4 +-
 drivers/gpu/drm/rcar-du/rcar_du_vsp.c        |  4 +-
 drivers/gpu/drm/rockchip/rockchip_drm_vop.c  |  3 +-
 drivers/gpu/drm/stm/ltdc.c                   |  4 +-
 drivers/gpu/drm/sun4i/sun4i_layer.c          |  4 +-
 drivers/gpu/drm/sun4i/sun8i_ui_layer.c       |  4 +-
 drivers/gpu/drm/sun4i/sun8i_vi_layer.c       |  4 +-
 drivers/gpu/drm/tegra/plane.c                |  4 +-
 drivers/gpu/drm/tidss/tidss_plane.c          |  4 +-
 drivers/gpu/drm/tiny/hx8357d.c               |  4 +-
 drivers/gpu/drm/tiny/ili9225.c               |  4 +-
 drivers/gpu/drm/tiny/ili9341.c               |  4 +-
 drivers/gpu/drm/tiny/ili9486.c               |  4 +-
 drivers/gpu/drm/tiny/mi0283qt.c              |  4 +-
 drivers/gpu/drm/tiny/repaper.c               |  3 +-
 drivers/gpu/drm/tiny/st7586.c                |  4 +-
 drivers/gpu/drm/tiny/st7735r.c               |  4 +-
 drivers/gpu/drm/tve200/tve200_display.c      |  4 +-
 drivers/gpu/drm/vc4/vc4_plane.c              |  4 +-
 drivers/gpu/drm/vkms/vkms_plane.c            |  3 +-
 drivers/gpu/drm/xen/xen_drm_front_kms.c      |  3 +-
 include/drm/drm_gem_atomic_helper.h          |  8 ++
 include/drm/drm_gem_framebuffer_helper.h     |  7 --
 include/drm/drm_modeset_helper_vtables.h     |  2 +-
 include/drm/drm_plane.h                      |  4 +-
 include/drm/drm_simple_kms_helper.h          |  2 +-
 41 files changed, 176 insertions(+), 145 deletions(-)

diff --git a/drivers/gpu/drm/aspeed/aspeed_gfx_crtc.c b/drivers/gpu/drm/aspeed/aspeed_gfx_crtc.c
index 20c2197b270f..098f96d4d50d 100644
--- a/drivers/gpu/drm/aspeed/aspeed_gfx_crtc.c
+++ b/drivers/gpu/drm/aspeed/aspeed_gfx_crtc.c
@@ -9,8 +9,8 @@
 #include <drm/drm_device.h>
 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fourcc.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_panel.h>
 #include <drm/drm_simple_kms_helper.h>
 #include <drm/drm_vblank.h>
@@ -220,7 +220,7 @@ static const struct drm_simple_display_pipe_funcs aspeed_gfx_funcs = {
 	.enable		= aspeed_gfx_pipe_enable,
 	.disable	= aspeed_gfx_pipe_disable,
 	.update		= aspeed_gfx_pipe_update,
-	.prepare_fb	= drm_gem_fb_simple_display_pipe_prepare_fb,
+	.prepare_fb	= drm_gem_simple_display_pipe_prepare_fb,
 	.enable_vblank	= aspeed_gfx_enable_vblank,
 	.disable_vblank	= aspeed_gfx_disable_vblank,
 };
diff --git a/drivers/gpu/drm/drm_gem_atomic_helper.c b/drivers/gpu/drm/drm_gem_atomic_helper.c
index fa4eae492b81..a005c5a0ba46 100644
--- a/drivers/gpu/drm/drm_gem_atomic_helper.c
+++ b/drivers/gpu/drm/drm_gem_atomic_helper.c
@@ -1,6 +1,10 @@
 // SPDX-License-Identifier: GPL-2.0-or-later

+#include <linux/dma-resv.h>
+
 #include <drm/drm_atomic_state_helper.h>
+#include <drm/drm_atomic_uapi.h>
+#include <drm/drm_gem.h>
 #include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_simple_kms_helper.h>
@@ -12,8 +16,33 @@
  *
  * The GEM atomic helpers library implements generic atomic-commit
  * functions for drivers that use GEM objects. Currently, it provides
- * plane state and framebuffer BO mappings for planes with shadow
- * buffers.
+ * synchronization helpers, and plane state and framebuffer BO mappings
+ * for planes with shadow buffers.
+ *
+ * Before scanout, a plane's framebuffer needs to be synchronized with
+ * possible writers that draw into the framebuffer. All drivers should
+ * call drm_gem_plane_helper_prepare_fb() from their implementation of
+ * &drm_plane_helper_funcs.prepare_fb. It sets the plane's fence from
+ * the framebuffer so that the DRM core can synchronize access automatically.
+ *
+ * drm_gem_plane_helper_prepare_fb() can also be used directly as
+ * implementation of prepare_fb. For drivers based on
+ * struct drm_simple_display_pipe, drm_gem_simple_display_pipe_prepare_fb()
+ * provides equivalent functionality.
+ *
+ * .. code-block:: c
+ *
+ *	#include <drm/drm_gem_atomic_helper.h>
+ *
+ *	struct drm_plane_helper_funcs driver_plane_helper_funcs = {
+ *		...,
+ *		.prepare_fb = drm_gem_plane_helper_prepare_fb,
+ *	};
+ *
+ *	struct drm_simple_display_pipe_funcs driver_pipe_funcs = {
+ *		...,
+ *		.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
+ *	};
  *
  * A driver using a shadow buffer copies the content of the shadow buffers
  * into the HW's framebuffer memory during an atomic update. This requires
@@ -32,7 +61,7 @@
  *
  * .. code-block:: c
  *
- *	#include <drm/drm/gem_atomic_helper.h>
+ *	#include <drm/drm_gem_atomic_helper.h>
  *
  *	struct drm_plane_funcs driver_plane_funcs = {
  *		...,
@@ -87,6 +116,65 @@
  *	}
  */

+/*
+ * Plane Helpers
+ */
+
+/**
+ * drm_gem_plane_helper_prepare_fb() - Prepare a GEM backed framebuffer
+ * @plane: Plane
+ * @state: Plane state the fence will be attached to
+ *
+ * This function extracts the exclusive fence from &drm_gem_object.resv and
+ * attaches it to plane state for the atomic helper to wait on. This is
+ * necessary to correctly implement implicit synchronization for any buffers
+ * shared as a struct &dma_buf. This function can be used as the
+ * &drm_plane_helper_funcs.prepare_fb callback.
+ *
+ * There is no need for &drm_plane_helper_funcs.cleanup_fb hook for simple
+ * GEM based framebuffer drivers which have their buffers always pinned in
+ * memory.
+ *
+ * See drm_atomic_set_fence_for_plane() for a discussion of implicit and
+ * explicit fencing in atomic modeset updates.
+ */
+int drm_gem_plane_helper_prepare_fb(struct drm_plane *plane, struct drm_plane_state *state)
+{
+	struct drm_gem_object *obj;
+	struct dma_fence *fence;
+
+	if (!state->fb)
+		return 0;
+
+	obj = drm_gem_fb_get_obj(state->fb, 0);
+	fence = dma_resv_get_excl_rcu(obj->resv);
+	drm_atomic_set_fence_for_plane(state, fence);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(drm_gem_plane_helper_prepare_fb);
+
+/**
+ * drm_gem_simple_display_pipe_prepare_fb - prepare_fb helper for &drm_simple_display_pipe
+ * @pipe: Simple display pipe
+ * @plane_state: Plane state
+ *
+ * This function uses drm_gem_plane_helper_prepare_fb() to extract the exclusive fence
+ * from &drm_gem_object.resv and attaches it to plane state for the atomic
+ * helper to wait on. This is necessary to correctly implement implicit
+ * synchronization for any buffers shared as a struct &dma_buf. Drivers can use
+ * this as their &drm_simple_display_pipe_funcs.prepare_fb callback.
+ *
+ * See drm_atomic_set_fence_for_plane() for a discussion of implicit and
+ * explicit fencing in atomic modeset updates.
+ */
+int drm_gem_simple_display_pipe_prepare_fb(struct drm_simple_display_pipe *pipe,
+					   struct drm_plane_state *plane_state)
+{
+	return drm_gem_plane_helper_prepare_fb(&pipe->plane, plane_state);
+}
+EXPORT_SYMBOL(drm_gem_simple_display_pipe_prepare_fb);
+
 /*
  * Shadow-buffered Planes
  */
@@ -198,7 +286,7 @@ int drm_gem_prepare_shadow_fb(struct drm_plane *plane, struct drm_plane_state *p
 	if (!fb)
 		return 0;

-	ret = drm_gem_fb_prepare_fb(plane, plane_state);
+	ret = drm_gem_plane_helper_prepare_fb(plane, plane_state);
 	if (ret)
 		return ret;

diff --git a/drivers/gpu/drm/drm_gem_framebuffer_helper.c b/drivers/gpu/drm/drm_gem_framebuffer_helper.c
index 109d11fb4cd4..5ed2067cebb6 100644
--- a/drivers/gpu/drm/drm_gem_framebuffer_helper.c
+++ b/drivers/gpu/drm/drm_gem_framebuffer_helper.c
@@ -5,13 +5,8 @@
  * Copyright (C) 2017 Noralf Trønnes
  */

-#include <linux/dma-buf.h>
-#include <linux/dma-fence.h>
-#include <linux/dma-resv.h>
 #include <linux/slab.h>

-#include <drm/drm_atomic.h>
-#include <drm/drm_atomic_uapi.h>
 #include <drm/drm_damage_helper.h>
 #include <drm/drm_fb_helper.h>
 #include <drm/drm_fourcc.h>
@@ -19,7 +14,6 @@
 #include <drm/drm_gem.h>
 #include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_modeset_helper.h>
-#include <drm/drm_simple_kms_helper.h>

 #define AFBC_HEADER_SIZE		16
 #define AFBC_TH_LAYOUT_ALIGNMENT	8
@@ -432,60 +426,3 @@ int drm_gem_fb_afbc_init(struct drm_device *dev,
 	return 0;
 }
 EXPORT_SYMBOL_GPL(drm_gem_fb_afbc_init);
-
-/**
- * drm_gem_fb_prepare_fb() - Prepare a GEM backed framebuffer
- * @plane: Plane
- * @state: Plane state the fence will be attached to
- *
- * This function extracts the exclusive fence from &drm_gem_object.resv and
- * attaches it to plane state for the atomic helper to wait on. This is
- * necessary to correctly implement implicit synchronization for any buffers
- * shared as a struct &dma_buf. This function can be used as the
- * &drm_plane_helper_funcs.prepare_fb callback.
- *
- * There is no need for &drm_plane_helper_funcs.cleanup_fb hook for simple
- * gem based framebuffer drivers which have their buffers always pinned in
- * memory.
- *
- * See drm_atomic_set_fence_for_plane() for a discussion of implicit and
- * explicit fencing in atomic modeset updates.
- */
-int drm_gem_fb_prepare_fb(struct drm_plane *plane,
-			  struct drm_plane_state *state)
-{
-	struct drm_gem_object *obj;
-	struct dma_fence *fence;
-
-	if (!state->fb)
-		return 0;
-
-	obj = drm_gem_fb_get_obj(state->fb, 0);
-	fence = dma_resv_get_excl_rcu(obj->resv);
-	drm_atomic_set_fence_for_plane(state, fence);
-
-	return 0;
-}
-EXPORT_SYMBOL_GPL(drm_gem_fb_prepare_fb);
-
-/**
- * drm_gem_fb_simple_display_pipe_prepare_fb - prepare_fb helper for
- *     &drm_simple_display_pipe
- * @pipe: Simple display pipe
- * @plane_state: Plane state
- *
- * This function uses drm_gem_fb_prepare_fb() to extract the exclusive fence
- * from &drm_gem_object.resv and attaches it to plane state for the atomic
- * helper to wait on. This is necessary to correctly implement implicit
- * synchronization for any buffers shared as a struct &dma_buf. Drivers can use
- * this as their &drm_simple_display_pipe_funcs.prepare_fb callback.
- *
- * See drm_atomic_set_fence_for_plane() for a discussion of implicit and
- * explicit fencing in atomic modeset updates.
- */
-int drm_gem_fb_simple_display_pipe_prepare_fb(struct drm_simple_display_pipe *pipe,
-					      struct drm_plane_state *plane_state)
-{
-	return drm_gem_fb_prepare_fb(&pipe->plane, plane_state);
-}
-EXPORT_SYMBOL(drm_gem_fb_simple_display_pipe_prepare_fb);
diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
index 2226ef5ba6dc..2b7c3a07956d 100644
--- a/drivers/gpu/drm/drm_gem_vram_helper.c
+++ b/drivers/gpu/drm/drm_gem_vram_helper.c
@@ -8,7 +8,7 @@
 #include <drm/drm_drv.h>
 #include <drm/drm_file.h>
 #include <drm/drm_framebuffer.h>
-#include <drm/drm_gem_framebuffer_helper.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_ttm_helper.h>
 #include <drm/drm_gem_vram_helper.h>
 #include <drm/drm_managed.h>
@@ -708,7 +708,7 @@ drm_gem_vram_plane_helper_prepare_fb(struct drm_plane *plane,
 			goto err_drm_gem_vram_unpin;
 	}

-	ret = drm_gem_fb_prepare_fb(plane, new_state);
+	ret = drm_gem_plane_helper_prepare_fb(plane, new_state);
 	if (ret)
 		goto err_drm_gem_vram_unpin;

diff --git a/drivers/gpu/drm/imx/dcss/dcss-plane.c b/drivers/gpu/drm/imx/dcss/dcss-plane.c
index 03ba88f7f995..4723da457bad 100644
--- a/drivers/gpu/drm/imx/dcss/dcss-plane.c
+++ b/drivers/gpu/drm/imx/dcss/dcss-plane.c
@@ -6,7 +6,7 @@
 #include <drm/drm_atomic.h>
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_fb_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>

 #include "dcss-dev.h"
@@ -355,7 +355,7 @@ static void dcss_plane_atomic_disable(struct drm_plane *plane,
 }

 static const struct drm_plane_helper_funcs dcss_plane_helper_funcs = {
-	.prepare_fb = drm_gem_fb_prepare_fb,
+	.prepare_fb = drm_gem_plane_helper_prepare_fb,
 	.atomic_check = dcss_plane_atomic_check,
 	.atomic_update = dcss_plane_atomic_update,
 	.atomic_disable = dcss_plane_atomic_disable,
diff --git a/drivers/gpu/drm/imx/ipuv3-plane.c b/drivers/gpu/drm/imx/ipuv3-plane.c
index 075508051b5f..cff783a37162 100644
--- a/drivers/gpu/drm/imx/ipuv3-plane.c
+++ b/drivers/gpu/drm/imx/ipuv3-plane.c
@@ -9,8 +9,8 @@
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fourcc.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_managed.h>
 #include <drm/drm_plane_helper.h>

@@ -704,7 +704,7 @@ static void ipu_plane_atomic_update(struct drm_plane *plane,
 }

 static const struct drm_plane_helper_funcs ipu_plane_helper_funcs = {
-	.prepare_fb = drm_gem_fb_prepare_fb,
+	.prepare_fb = drm_gem_plane_helper_prepare_fb,
 	.atomic_check = ipu_plane_atomic_check,
 	.atomic_disable = ipu_plane_atomic_disable,
 	.atomic_update = ipu_plane_atomic_update,
diff --git a/drivers/gpu/drm/ingenic/ingenic-drm-drv.c b/drivers/gpu/drm/ingenic/ingenic-drm-drv.c
index 7bb31fbee29d..c00961907b10 100644
--- a/drivers/gpu/drm/ingenic/ingenic-drm-drv.c
+++ b/drivers/gpu/drm/ingenic/ingenic-drm-drv.c
@@ -28,6 +28,7 @@
 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fb_helper.h>
 #include <drm/drm_fourcc.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_irq.h>
 #include <drm/drm_managed.h>
@@ -780,7 +781,7 @@ static const struct drm_plane_helper_funcs ingenic_drm_plane_helper_funcs = {
 	.atomic_update		= ingenic_drm_plane_atomic_update,
 	.atomic_check		= ingenic_drm_plane_atomic_check,
 	.atomic_disable		= ingenic_drm_plane_atomic_disable,
-	.prepare_fb		= drm_gem_fb_prepare_fb,
+	.prepare_fb		= drm_gem_plane_helper_prepare_fb,
 };

 static const struct drm_crtc_helper_funcs ingenic_drm_crtc_helper_funcs = {
diff --git a/drivers/gpu/drm/ingenic/ingenic-ipu.c b/drivers/gpu/drm/ingenic/ingenic-ipu.c
index e52777ef85fd..91457263a3ce 100644
--- a/drivers/gpu/drm/ingenic/ingenic-ipu.c
+++ b/drivers/gpu/drm/ingenic/ingenic-ipu.c
@@ -23,7 +23,7 @@
 #include <drm/drm_drv.h>
 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fourcc.h>
-#include <drm/drm_gem_framebuffer_helper.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_plane.h>
 #include <drm/drm_plane_helper.h>
 #include <drm/drm_property.h>
@@ -608,7 +608,7 @@ static const struct drm_plane_helper_funcs ingenic_ipu_plane_helper_funcs = {
 	.atomic_update		= ingenic_ipu_plane_atomic_update,
 	.atomic_check		= ingenic_ipu_plane_atomic_check,
 	.atomic_disable		= ingenic_ipu_plane_atomic_disable,
-	.prepare_fb		= drm_gem_fb_prepare_fb,
+	.prepare_fb		= drm_gem_plane_helper_prepare_fb,
 };

 static int
diff --git a/drivers/gpu/drm/mcde/mcde_display.c b/drivers/gpu/drm/mcde/mcde_display.c
index 83ac7493e751..4ddc55d58f38 100644
--- a/drivers/gpu/drm/mcde/mcde_display.c
+++ b/drivers/gpu/drm/mcde/mcde_display.c
@@ -13,8 +13,8 @@
 #include <drm/drm_device.h>
 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fourcc.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_mipi_dsi.h>
 #include <drm/drm_simple_kms_helper.h>
 #include <drm/drm_bridge.h>
@@ -1479,7 +1479,7 @@ static struct drm_simple_display_pipe_funcs mcde_display_funcs = {
 	.update = mcde_display_update,
 	.enable_vblank = mcde_display_enable_vblank,
 	.disable_vblank = mcde_display_disable_vblank,
-	.prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb,
+	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
 };

 int mcde_display_init(struct drm_device *drm)
diff --git a/drivers/gpu/drm/mediatek/mtk_drm_plane.c b/drivers/gpu/drm/mediatek/mtk_drm_plane.c
index 92141a19681b..c95ceb400b07 100644
--- a/drivers/gpu/drm/mediatek/mtk_drm_plane.c
+++ b/drivers/gpu/drm/mediatek/mtk_drm_plane.c
@@ -6,10 +6,10 @@

 #include <drm/drm_atomic.h>
 #include <drm/drm_atomic_helper.h>
-#include <drm/drm_fourcc.h>
 #include <drm/drm_atomic_uapi.h>
+#include <drm/drm_fourcc.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_plane_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>

 #include "mtk_drm_crtc.h"
 #include "mtk_drm_ddp_comp.h"
@@ -216,7 +216,7 @@ static void mtk_plane_atomic_update(struct drm_plane *plane,
 }

 static const struct drm_plane_helper_funcs mtk_plane_helper_funcs = {
-	.prepare_fb = drm_gem_fb_prepare_fb,
+	.prepare_fb = drm_gem_plane_helper_prepare_fb,
 	.atomic_check = mtk_plane_atomic_check,
 	.atomic_update = mtk_plane_atomic_update,
 	.atomic_disable = mtk_plane_atomic_disable,
diff --git a/drivers/gpu/drm/meson/meson_overlay.c b/drivers/gpu/drm/meson/meson_overlay.c
index 1ffbbecafa22..be6ca49e20b0 100644
--- a/drivers/gpu/drm/meson/meson_overlay.c
+++ b/drivers/gpu/drm/meson/meson_overlay.c
@@ -10,11 +10,11 @@
 #include <drm/drm_atomic.h>
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_device.h>
+#include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fourcc.h>
-#include <drm/drm_plane_helper.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_fb_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
+#include <drm/drm_plane_helper.h>

 #include "meson_overlay.h"
 #include "meson_registers.h"
@@ -742,7 +742,7 @@ static const struct drm_plane_helper_funcs meson_overlay_helper_funcs = {
 	.atomic_check	= meson_overlay_atomic_check,
 	.atomic_disable	= meson_overlay_atomic_disable,
 	.atomic_update	= meson_overlay_atomic_update,
-	.prepare_fb	= drm_gem_fb_prepare_fb,
+	.prepare_fb	= drm_gem_plane_helper_prepare_fb,
 };

 static bool meson_overlay_format_mod_supported(struct drm_plane *plane,
diff --git a/drivers/gpu/drm/meson/meson_plane.c b/drivers/gpu/drm/meson/meson_plane.c
index 35338ed18209..b8309d8fc277 100644
--- a/drivers/gpu/drm/meson/meson_plane.c
+++ b/drivers/gpu/drm/meson/meson_plane.c
@@ -16,8 +16,8 @@
 #include <drm/drm_device.h>
 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fourcc.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_plane_helper.h>

 #include "meson_plane.h"
@@ -417,7 +417,7 @@ static const struct drm_plane_helper_funcs meson_plane_helper_funcs = {
 	.atomic_check	= meson_plane_atomic_check,
 	.atomic_disable	= meson_plane_atomic_disable,
 	.atomic_update	= meson_plane_atomic_update,
-	.prepare_fb	= drm_gem_fb_prepare_fb,
+	.prepare_fb	= drm_gem_plane_helper_prepare_fb,
 };

 static bool meson_plane_format_mod_supported(struct drm_plane *plane,
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
index bc0231a50132..40eb5c911e3c 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
@@ -13,7 +13,7 @@
 #include <drm/drm_atomic_uapi.h>
 #include <drm/drm_damage_helper.h>
 #include <drm/drm_file.h>
-#include <drm/drm_gem_framebuffer_helper.h>
+#include <drm/drm_gem_atomic_helper.h>

 #include "msm_drv.h"
 #include "dpu_kms.h"
@@ -892,7 +892,7 @@ static int dpu_plane_prepare_fb(struct drm_plane *plane,
 	 *       we can use msm_atomic_prepare_fb() instead of doing the
 	 *       implicit fence and fb prepare by hand here.
 	 */
-	drm_gem_fb_prepare_fb(plane, new_state);
+	drm_gem_plane_helper_prepare_fb(plane, new_state);

 	if (pstate->aspace) {
 		ret = msm_framebuffer_prepare(new_state->fb,
diff --git a/drivers/gpu/drm/msm/msm_atomic.c b/drivers/gpu/drm/msm/msm_atomic.c
index 6a326761dc4a..e9c6544b6a01 100644
--- a/drivers/gpu/drm/msm/msm_atomic.c
+++ b/drivers/gpu/drm/msm/msm_atomic.c
@@ -5,7 +5,7 @@
  */

 #include <drm/drm_atomic_uapi.h>
-#include <drm/drm_gem_framebuffer_helper.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_vblank.h>

 #include "msm_atomic_trace.h"
@@ -22,7 +22,7 @@ int msm_atomic_prepare_fb(struct drm_plane *plane,
 	if (!new_state->fb)
 		return 0;

-	drm_gem_fb_prepare_fb(plane, new_state);
+	drm_gem_plane_helper_prepare_fb(plane, new_state);

 	return msm_framebuffer_prepare(new_state->fb, kms->aspace);
 }
diff --git a/drivers/gpu/drm/mxsfb/mxsfb_kms.c b/drivers/gpu/drm/mxsfb/mxsfb_kms.c
index 3e1bb0aefb87..7c19ec5384d4 100644
--- a/drivers/gpu/drm/mxsfb/mxsfb_kms.c
+++ b/drivers/gpu/drm/mxsfb/mxsfb_kms.c
@@ -21,8 +21,8 @@
 #include <drm/drm_encoder.h>
 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fourcc.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_plane.h>
 #include <drm/drm_plane_helper.h>
 #include <drm/drm_vblank.h>
@@ -495,13 +495,13 @@ static bool mxsfb_format_mod_supported(struct drm_plane *plane,
 }

 static const struct drm_plane_helper_funcs mxsfb_plane_primary_helper_funcs = {
-	.prepare_fb = drm_gem_fb_prepare_fb,
+	.prepare_fb = drm_gem_plane_helper_prepare_fb,
 	.atomic_check = mxsfb_plane_atomic_check,
 	.atomic_update = mxsfb_plane_primary_atomic_update,
 };

 static const struct drm_plane_helper_funcs mxsfb_plane_overlay_helper_funcs = {
-	.prepare_fb = drm_gem_fb_prepare_fb,
+	.prepare_fb = drm_gem_plane_helper_prepare_fb,
 	.atomic_check = mxsfb_plane_atomic_check,
 	.atomic_update = mxsfb_plane_overlay_atomic_update,
 };
diff --git a/drivers/gpu/drm/pl111/pl111_display.c b/drivers/gpu/drm/pl111/pl111_display.c
index 69c02e7c82b7..6fd7f13f1aca 100644
--- a/drivers/gpu/drm/pl111/pl111_display.c
+++ b/drivers/gpu/drm/pl111/pl111_display.c
@@ -17,8 +17,8 @@

 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fourcc.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_vblank.h>

 #include "pl111_drm.h"
@@ -440,7 +440,7 @@ static struct drm_simple_display_pipe_funcs pl111_display_funcs = {
 	.enable = pl111_display_enable,
 	.disable = pl111_display_disable,
 	.update = pl111_display_update,
-	.prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb,
+	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
 };

 static int pl111_clk_div_choose_div(struct clk_hw *hw, unsigned long rate,
diff --git a/drivers/gpu/drm/rcar-du/rcar_du_vsp.c b/drivers/gpu/drm/rcar-du/rcar_du_vsp.c
index 53221d8473c1..336ba0648a79 100644
--- a/drivers/gpu/drm/rcar-du/rcar_du_vsp.c
+++ b/drivers/gpu/drm/rcar-du/rcar_du_vsp.c
@@ -11,8 +11,8 @@
 #include <drm/drm_crtc.h>
 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fourcc.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_managed.h>
 #include <drm/drm_plane_helper.h>
 #include <drm/drm_vblank.h>
@@ -236,7 +236,7 @@ static int rcar_du_vsp_plane_prepare_fb(struct drm_plane *plane,
 	if (ret < 0)
 		return ret;

-	return drm_gem_fb_prepare_fb(plane, state);
+	return drm_gem_plane_helper_prepare_fb(plane, state);
 }

 void rcar_du_vsp_unmap_fb(struct rcar_du_vsp *vsp, struct drm_framebuffer *fb,
diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
index 8d15cabdcb02..daea2493bfb8 100644
--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
+++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
@@ -23,6 +23,7 @@
 #include <drm/drm_crtc.h>
 #include <drm/drm_flip_work.h>
 #include <drm/drm_fourcc.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_plane_helper.h>
 #include <drm/drm_probe_helper.h>
@@ -1096,7 +1097,7 @@ static const struct drm_plane_helper_funcs plane_helper_funcs = {
 	.atomic_disable = vop_plane_atomic_disable,
 	.atomic_async_check = vop_plane_atomic_async_check,
 	.atomic_async_update = vop_plane_atomic_async_update,
-	.prepare_fb = drm_gem_fb_prepare_fb,
+	.prepare_fb = drm_gem_plane_helper_prepare_fb,
 };

 static const struct drm_plane_funcs vop_plane_funcs = {
diff --git a/drivers/gpu/drm/stm/ltdc.c b/drivers/gpu/drm/stm/ltdc.c
index 6f3b523e16e8..0967cd9190a2 100644
--- a/drivers/gpu/drm/stm/ltdc.c
+++ b/drivers/gpu/drm/stm/ltdc.c
@@ -26,8 +26,8 @@
 #include <drm/drm_device.h>
 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fourcc.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_of.h>
 #include <drm/drm_plane_helper.h>
 #include <drm/drm_probe_helper.h>
@@ -940,7 +940,7 @@ static const struct drm_plane_funcs ltdc_plane_funcs = {
 };

 static const struct drm_plane_helper_funcs ltdc_plane_helper_funcs = {
-	.prepare_fb = drm_gem_fb_prepare_fb,
+	.prepare_fb = drm_gem_plane_helper_prepare_fb,
 	.atomic_check = ltdc_plane_atomic_check,
 	.atomic_update = ltdc_plane_atomic_update,
 	.atomic_disable = ltdc_plane_atomic_disable,
diff --git a/drivers/gpu/drm/sun4i/sun4i_layer.c b/drivers/gpu/drm/sun4i/sun4i_layer.c
index acfbfd4463a1..259c10b85ee7 100644
--- a/drivers/gpu/drm/sun4i/sun4i_layer.c
+++ b/drivers/gpu/drm/sun4i/sun4i_layer.c
@@ -7,7 +7,7 @@
  */

 #include <drm/drm_atomic_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_plane_helper.h>

 #include "sun4i_backend.h"
@@ -122,7 +122,7 @@ static bool sun4i_layer_format_mod_supported(struct drm_plane *plane,
 }

 static const struct drm_plane_helper_funcs sun4i_backend_layer_helper_funcs = {
-	.prepare_fb	= drm_gem_fb_prepare_fb,
+	.prepare_fb	= drm_gem_plane_helper_prepare_fb,
 	.atomic_disable	= sun4i_backend_layer_atomic_disable,
 	.atomic_update	= sun4i_backend_layer_atomic_update,
 };
diff --git a/drivers/gpu/drm/sun4i/sun8i_ui_layer.c b/drivers/gpu/drm/sun4i/sun8i_ui_layer.c
index e64f30595f64..a78d2cf00012 100644
--- a/drivers/gpu/drm/sun4i/sun8i_ui_layer.c
+++ b/drivers/gpu/drm/sun4i/sun8i_ui_layer.c
@@ -14,8 +14,8 @@
 #include <drm/drm_crtc.h>
 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fourcc.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_plane_helper.h>
 #include <drm/drm_probe_helper.h>

@@ -322,7 +322,7 @@ static void sun8i_ui_layer_atomic_update(struct drm_plane *plane,
 }

 static const struct drm_plane_helper_funcs sun8i_ui_layer_helper_funcs = {
-	.prepare_fb	= drm_gem_fb_prepare_fb,
+	.prepare_fb	= drm_gem_plane_helper_prepare_fb,
 	.atomic_check	= sun8i_ui_layer_atomic_check,
 	.atomic_disable	= sun8i_ui_layer_atomic_disable,
 	.atomic_update	= sun8i_ui_layer_atomic_update,
diff --git a/drivers/gpu/drm/sun4i/sun8i_vi_layer.c b/drivers/gpu/drm/sun4i/sun8i_vi_layer.c
index 8abb59e2f0c0..f4bc8320fa13 100644
--- a/drivers/gpu/drm/sun4i/sun8i_vi_layer.c
+++ b/drivers/gpu/drm/sun4i/sun8i_vi_layer.c
@@ -7,8 +7,8 @@
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_crtc.h>
 #include <drm/drm_fb_cma_helper.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_plane_helper.h>
 #include <drm/drm_probe_helper.h>

@@ -426,7 +426,7 @@ static void sun8i_vi_layer_atomic_update(struct drm_plane *plane,
 }

 static const struct drm_plane_helper_funcs sun8i_vi_layer_helper_funcs = {
-	.prepare_fb	= drm_gem_fb_prepare_fb,
+	.prepare_fb	= drm_gem_plane_helper_prepare_fb,
 	.atomic_check	= sun8i_vi_layer_atomic_check,
 	.atomic_disable	= sun8i_vi_layer_atomic_disable,
 	.atomic_update	= sun8i_vi_layer_atomic_update,
diff --git a/drivers/gpu/drm/tegra/plane.c b/drivers/gpu/drm/tegra/plane.c
index 539d14935728..19e8847a164b 100644
--- a/drivers/gpu/drm/tegra/plane.c
+++ b/drivers/gpu/drm/tegra/plane.c
@@ -8,7 +8,7 @@
 #include <drm/drm_atomic.h>
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_fourcc.h>
-#include <drm/drm_gem_framebuffer_helper.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_plane_helper.h>

 #include "dc.h"
@@ -198,7 +198,7 @@ int tegra_plane_prepare_fb(struct drm_plane *plane,
 	if (!state->fb)
 		return 0;

-	drm_gem_fb_prepare_fb(plane, state);
+	drm_gem_plane_helper_prepare_fb(plane, state);

 	return tegra_dc_pin(dc, to_tegra_plane_state(state));
 }
diff --git a/drivers/gpu/drm/tidss/tidss_plane.c b/drivers/gpu/drm/tidss/tidss_plane.c
index 35067ae674ea..795d24b44091 100644
--- a/drivers/gpu/drm/tidss/tidss_plane.c
+++ b/drivers/gpu/drm/tidss/tidss_plane.c
@@ -10,7 +10,7 @@
 #include <drm/drm_crtc_helper.h>
 #include <drm/drm_fourcc.h>
 #include <drm/drm_fb_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
+#include <drm/drm_gem_atomic_helper.h>

 #include "tidss_crtc.h"
 #include "tidss_dispc.h"
@@ -151,7 +151,7 @@ static void drm_plane_destroy(struct drm_plane *plane)
 }

 static const struct drm_plane_helper_funcs tidss_plane_helper_funcs = {
-	.prepare_fb = drm_gem_fb_prepare_fb,
+	.prepare_fb = drm_gem_plane_helper_prepare_fb,
 	.atomic_check = tidss_plane_atomic_check,
 	.atomic_update = tidss_plane_atomic_update,
 	.atomic_disable = tidss_plane_atomic_disable,
diff --git a/drivers/gpu/drm/tiny/hx8357d.c b/drivers/gpu/drm/tiny/hx8357d.c
index c6525cd02bc2..3e2c2868a363 100644
--- a/drivers/gpu/drm/tiny/hx8357d.c
+++ b/drivers/gpu/drm/tiny/hx8357d.c
@@ -19,8 +19,8 @@
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_drv.h>
 #include <drm/drm_fb_helper.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_managed.h>
 #include <drm/drm_mipi_dbi.h>
 #include <drm/drm_modeset_helper.h>
@@ -184,7 +184,7 @@ static const struct drm_simple_display_pipe_funcs hx8357d_pipe_funcs = {
 	.enable = yx240qv29_enable,
 	.disable = mipi_dbi_pipe_disable,
 	.update = mipi_dbi_pipe_update,
-	.prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb,
+	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
 };

 static const struct drm_display_mode yx350hv15_mode = {
diff --git a/drivers/gpu/drm/tiny/ili9225.c b/drivers/gpu/drm/tiny/ili9225.c
index 8e98962db5a2..6b87df19eec1 100644
--- a/drivers/gpu/drm/tiny/ili9225.c
+++ b/drivers/gpu/drm/tiny/ili9225.c
@@ -22,8 +22,8 @@
 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fb_helper.h>
 #include <drm/drm_fourcc.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_managed.h>
 #include <drm/drm_mipi_dbi.h>
 #include <drm/drm_rect.h>
@@ -328,7 +328,7 @@ static const struct drm_simple_display_pipe_funcs ili9225_pipe_funcs = {
 	.enable		= ili9225_pipe_enable,
 	.disable	= ili9225_pipe_disable,
 	.update		= ili9225_pipe_update,
-	.prepare_fb	= drm_gem_fb_simple_display_pipe_prepare_fb,
+	.prepare_fb	= drm_gem_simple_display_pipe_prepare_fb,
 };

 static const struct drm_display_mode ili9225_mode = {
diff --git a/drivers/gpu/drm/tiny/ili9341.c b/drivers/gpu/drm/tiny/ili9341.c
index 6ce97f0698eb..a97f3f70e4a6 100644
--- a/drivers/gpu/drm/tiny/ili9341.c
+++ b/drivers/gpu/drm/tiny/ili9341.c
@@ -18,8 +18,8 @@
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_drv.h>
 #include <drm/drm_fb_helper.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_managed.h>
 #include <drm/drm_mipi_dbi.h>
 #include <drm/drm_modeset_helper.h>
@@ -140,7 +140,7 @@ static const struct drm_simple_display_pipe_funcs ili9341_pipe_funcs = {
 	.enable = yx240qv29_enable,
 	.disable = mipi_dbi_pipe_disable,
 	.update = mipi_dbi_pipe_update,
-	.prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb,
+	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
 };

 static const struct drm_display_mode yx240qv29_mode = {
diff --git a/drivers/gpu/drm/tiny/ili9486.c b/drivers/gpu/drm/tiny/ili9486.c
index d7ce40eb166a..6422a7f67079 100644
--- a/drivers/gpu/drm/tiny/ili9486.c
+++ b/drivers/gpu/drm/tiny/ili9486.c
@@ -17,8 +17,8 @@
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_drv.h>
 #include <drm/drm_fb_helper.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_managed.h>
 #include <drm/drm_mipi_dbi.h>
 #include <drm/drm_modeset_helper.h>
@@ -153,7 +153,7 @@ static const struct drm_simple_display_pipe_funcs waveshare_pipe_funcs = {
 	.enable = waveshare_enable,
 	.disable = mipi_dbi_pipe_disable,
 	.update = mipi_dbi_pipe_update,
-	.prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb,
+	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
 };

 static const struct drm_display_mode waveshare_mode = {
diff --git a/drivers/gpu/drm/tiny/mi0283qt.c b/drivers/gpu/drm/tiny/mi0283qt.c
index ff77f983f803..dc76fe53aa72 100644
--- a/drivers/gpu/drm/tiny/mi0283qt.c
+++ b/drivers/gpu/drm/tiny/mi0283qt.c
@@ -16,8 +16,8 @@
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_drv.h>
 #include <drm/drm_fb_helper.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_managed.h>
 #include <drm/drm_mipi_dbi.h>
 #include <drm/drm_modeset_helper.h>
@@ -144,7 +144,7 @@ static const struct drm_simple_display_pipe_funcs mi0283qt_pipe_funcs = {
 	.enable = mi0283qt_enable,
 	.disable = mipi_dbi_pipe_disable,
 	.update = mipi_dbi_pipe_update,
-	.prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb,
+	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
 };

 static const struct drm_display_mode mi0283qt_mode = {
diff --git a/drivers/gpu/drm/tiny/repaper.c b/drivers/gpu/drm/tiny/repaper.c
index 11c602fc9897..2cee07a2e00b 100644
--- a/drivers/gpu/drm/tiny/repaper.c
+++ b/drivers/gpu/drm/tiny/repaper.c
@@ -29,6 +29,7 @@
 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fb_helper.h>
 #include <drm/drm_format_helper.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
 #include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_managed.h>
@@ -860,7 +861,7 @@ static const struct drm_simple_display_pipe_funcs repaper_pipe_funcs = {
 	.enable = repaper_pipe_enable,
 	.disable = repaper_pipe_disable,
 	.update = repaper_pipe_update,
-	.prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb,
+	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
 };

 static int repaper_connector_get_modes(struct drm_connector *connector)
diff --git a/drivers/gpu/drm/tiny/st7586.c b/drivers/gpu/drm/tiny/st7586.c
index ff5cf60f4bd7..7d216fe9267f 100644
--- a/drivers/gpu/drm/tiny/st7586.c
+++ b/drivers/gpu/drm/tiny/st7586.c
@@ -19,8 +19,8 @@
 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fb_helper.h>
 #include <drm/drm_format_helper.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_managed.h>
 #include <drm/drm_mipi_dbi.h>
 #include <drm/drm_rect.h>
@@ -268,7 +268,7 @@ static const struct drm_simple_display_pipe_funcs st7586_pipe_funcs = {
 	.enable		= st7586_pipe_enable,
 	.disable	= st7586_pipe_disable,
 	.update		= st7586_pipe_update,
-	.prepare_fb	= drm_gem_fb_simple_display_pipe_prepare_fb,
+	.prepare_fb	= drm_gem_simple_display_pipe_prepare_fb,
 };

 static const struct drm_display_mode st7586_mode = {
diff --git a/drivers/gpu/drm/tiny/st7735r.c b/drivers/gpu/drm/tiny/st7735r.c
index faaba0a033ea..df8872d62cdd 100644
--- a/drivers/gpu/drm/tiny/st7735r.c
+++ b/drivers/gpu/drm/tiny/st7735r.c
@@ -19,8 +19,8 @@
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_drv.h>
 #include <drm/drm_fb_helper.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_managed.h>
 #include <drm/drm_mipi_dbi.h>

@@ -136,7 +136,7 @@ static const struct drm_simple_display_pipe_funcs st7735r_pipe_funcs = {
 	.enable		= st7735r_pipe_enable,
 	.disable	= mipi_dbi_pipe_disable,
 	.update		= mipi_dbi_pipe_update,
-	.prepare_fb	= drm_gem_fb_simple_display_pipe_prepare_fb,
+	.prepare_fb	= drm_gem_simple_display_pipe_prepare_fb,
 };

 static const struct st7735r_cfg jd_t18003_t01_cfg = {
diff --git a/drivers/gpu/drm/tve200/tve200_display.c b/drivers/gpu/drm/tve200/tve200_display.c
index cb0e837d3dba..50e1fb71869f 100644
--- a/drivers/gpu/drm/tve200/tve200_display.c
+++ b/drivers/gpu/drm/tve200/tve200_display.c
@@ -17,8 +17,8 @@

 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fourcc.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_panel.h>
 #include <drm/drm_vblank.h>

@@ -316,7 +316,7 @@ static const struct drm_simple_display_pipe_funcs tve200_display_funcs = {
 	.enable = tve200_display_enable,
 	.disable = tve200_display_disable,
 	.update = tve200_display_update,
-	.prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb,
+	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
 	.enable_vblank = tve200_display_enable_vblank,
 	.disable_vblank = tve200_display_disable_vblank,
 };
diff --git a/drivers/gpu/drm/vc4/vc4_plane.c b/drivers/gpu/drm/vc4/vc4_plane.c
index 7322169c0682..1a1d6609b80f 100644
--- a/drivers/gpu/drm/vc4/vc4_plane.c
+++ b/drivers/gpu/drm/vc4/vc4_plane.c
@@ -20,7 +20,7 @@
 #include <drm/drm_atomic_uapi.h>
 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fourcc.h>
-#include <drm/drm_gem_framebuffer_helper.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_plane_helper.h>

 #include "uapi/drm/vc4_drm.h"
@@ -1250,7 +1250,7 @@ static int vc4_prepare_fb(struct drm_plane *plane,

 	bo = to_vc4_bo(&drm_fb_cma_get_gem_obj(state->fb, 0)->base);

-	drm_gem_fb_prepare_fb(plane, state);
+	drm_gem_plane_helper_prepare_fb(plane, state);

 	if (plane->state->fb == state->fb)
 		return 0;
diff --git a/drivers/gpu/drm/vkms/vkms_plane.c b/drivers/gpu/drm/vkms/vkms_plane.c
index 0824327cc860..2a02334b72ac 100644
--- a/drivers/gpu/drm/vkms/vkms_plane.c
+++ b/drivers/gpu/drm/vkms/vkms_plane.c
@@ -5,6 +5,7 @@
 #include <drm/drm_atomic.h>
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_fourcc.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_plane_helper.h>
 #include <drm/drm_gem_shmem_helper.h>
@@ -159,7 +160,7 @@ static int vkms_prepare_fb(struct drm_plane *plane,
 	if (ret)
 		DRM_ERROR("vmap failed: %d\n", ret);

-	return drm_gem_fb_prepare_fb(plane, state);
+	return drm_gem_plane_helper_prepare_fb(plane, state);
 }

 static void vkms_cleanup_fb(struct drm_plane *plane,
diff --git a/drivers/gpu/drm/xen/xen_drm_front_kms.c b/drivers/gpu/drm/xen/xen_drm_front_kms.c
index ef11b1e4de39..371202ebe900 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_kms.c
+++ b/drivers/gpu/drm/xen/xen_drm_front_kms.c
@@ -13,6 +13,7 @@
 #include <drm/drm_drv.h>
 #include <drm/drm_fourcc.h>
 #include <drm/drm_gem.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_probe_helper.h>
 #include <drm/drm_vblank.h>
@@ -301,7 +302,7 @@ static const struct drm_simple_display_pipe_funcs display_funcs = {
 	.mode_valid = display_mode_valid,
 	.enable = display_enable,
 	.disable = display_disable,
-	.prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb,
+	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
 	.check = display_check,
 	.update = display_update,
 };
diff --git a/include/drm/drm_gem_atomic_helper.h b/include/drm/drm_gem_atomic_helper.h
index 7abf40bdab3d..cfc5adee3d13 100644
--- a/include/drm/drm_gem_atomic_helper.h
+++ b/include/drm/drm_gem_atomic_helper.h
@@ -9,6 +9,14 @@

 struct drm_simple_display_pipe;

+/*
+ * Plane Helpers
+ */
+
+int drm_gem_plane_helper_prepare_fb(struct drm_plane *plane, struct drm_plane_state *state);
+int drm_gem_simple_display_pipe_prepare_fb(struct drm_simple_display_pipe *pipe,
+					   struct drm_plane_state *plane_state);
+
 /*
  * Helpers for planes with shadow buffers
  */
diff --git a/include/drm/drm_gem_framebuffer_helper.h b/include/drm/drm_gem_framebuffer_helper.h
index 6b013154911d..6bdffc7aa124 100644
--- a/include/drm/drm_gem_framebuffer_helper.h
+++ b/include/drm/drm_gem_framebuffer_helper.h
@@ -9,9 +9,6 @@ struct drm_framebuffer;
 struct drm_framebuffer_funcs;
 struct drm_gem_object;
 struct drm_mode_fb_cmd2;
-struct drm_plane;
-struct drm_plane_state;
-struct drm_simple_display_pipe;

 #define AFBC_VENDOR_AND_TYPE_MASK	GENMASK_ULL(63, 52)

@@ -44,8 +41,4 @@ int drm_gem_fb_afbc_init(struct drm_device *dev,
 			 const struct drm_mode_fb_cmd2 *mode_cmd,
 			 struct drm_afbc_framebuffer *afbc_fb);

-int drm_gem_fb_prepare_fb(struct drm_plane *plane,
-			  struct drm_plane_state *state);
-int drm_gem_fb_simple_display_pipe_prepare_fb(struct drm_simple_display_pipe *pipe,
-					      struct drm_plane_state *plane_state);
 #endif
diff --git a/include/drm/drm_modeset_helper_vtables.h b/include/drm/drm_modeset_helper_vtables.h
index eb706342861d..df77ed843dd6 100644
--- a/include/drm/drm_modeset_helper_vtables.h
+++ b/include/drm/drm_modeset_helper_vtables.h
@@ -1179,7 +1179,7 @@ struct drm_plane_helper_funcs {
 	 * members in the plane structure.
 	 *
 	 * Drivers which always have their buffers pinned should use
-	 * drm_gem_fb_prepare_fb() for this hook.
+	 * drm_gem_plane_helper_prepare_fb() for this hook.
 	 *
 	 * The helpers will call @cleanup_fb with matching arguments for every
 	 * successful call to this hook.
diff --git a/include/drm/drm_plane.h b/include/drm/drm_plane.h
index 95ab14a4336a..1294610e84f4 100644
--- a/include/drm/drm_plane.h
+++ b/include/drm/drm_plane.h
@@ -79,8 +79,8 @@ struct drm_plane_state {
 	 * preserved.
 	 *
 	 * Drivers should store any implicit fence in this from their
-	 * &drm_plane_helper_funcs.prepare_fb callback. See drm_gem_fb_prepare_fb()
-	 * and drm_gem_fb_simple_display_pipe_prepare_fb() for suitable helpers.
+	 * &drm_plane_helper_funcs.prepare_fb callback. See drm_gem_plane_helper_prepare_fb()
+	 * and drm_gem_simple_display_pipe_prepare_fb() for suitable helpers.
 	 */
 	struct dma_fence *fence;

diff --git a/include/drm/drm_simple_kms_helper.h b/include/drm/drm_simple_kms_helper.h
index 40b34573249f..ef9944e9c5fc 100644
--- a/include/drm/drm_simple_kms_helper.h
+++ b/include/drm/drm_simple_kms_helper.h
@@ -117,7 +117,7 @@ struct drm_simple_display_pipe_funcs {
 	 * more details.
 	 *
 	 * Drivers which always have their buffers pinned should use
-	 * drm_gem_fb_simple_display_pipe_prepare_fb() for this hook.
+	 * drm_gem_simple_display_pipe_prepare_fb() for this hook.
 	 */
 	int (*prepare_fb)(struct drm_simple_display_pipe *pipe,
 			  struct drm_plane_state *plane_state);
--
2.30.1



From xen-devel-bounces@lists.xenproject.org Mon Feb 22 14:20:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 14:20:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88105.165529 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEC4Z-0001MX-Mh; Mon, 22 Feb 2021 14:20:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88105.165529; Mon, 22 Feb 2021 14:20:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEC4Z-0001MQ-JF; Mon, 22 Feb 2021 14:20:27 +0000
Received: by outflank-mailman (input) for mailman id 88105;
 Mon, 22 Feb 2021 14:20:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6bXc=HY=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lEC4Y-0001MK-DL
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 14:20:26 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b9c3f9c7-c2d7-44d5-854d-fa3691615cd1;
 Mon, 22 Feb 2021 14:20:25 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id DDC39AE04;
 Mon, 22 Feb 2021 14:20:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b9c3f9c7-c2d7-44d5-854d-fa3691615cd1
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614003625; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=b0Jl9Bcg7pqGHTTJc5Uzuuj9YDW7llrK2+vjY5FAdrE=;
	b=YeDaQco3Mod0WzCZw8DIG6oTDohnVjzSDVqdZIM8+dWYCf1l8Rs3TPK2hlhFDwSRBDA7r+
	jt7z1brz01gjyMCp2eMJdxo0kilKh4C6EdMB8Di8cn1YbUCYnc7t6udZ7ZMFbnYkCE/jf7
	fPPAwP4TObF24O5VRvzwYeAsShi/CkE=
Subject: Re: [PATCH][4.15] x86: mirror compat argument translation area for
 32-bit PV
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Ian Jackson <iwj@xenproject.org>
References: <bdedf018-b6c4-d0da-fb4b-8cf2d048c3b1@suse.com>
 <YDOW+ftkNsG2RH3C@Air-de-Roger> <YDO8CM0prPjoo/X1@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d66da403-8054-0313-cf1e-cf3c539ce33a@suse.com>
Date: Mon, 22 Feb 2021 15:20:24 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <YDO8CM0prPjoo/X1@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 22.02.2021 15:13, Roger Pau Monné wrote:
> On Mon, Feb 22, 2021 at 12:35:21PM +0100, Roger Pau Monné wrote:
>> On Mon, Feb 22, 2021 at 11:27:07AM +0100, Jan Beulich wrote:
>>> Now that we guard the entire Xen VA space against speculative abuse
>>> through hypervisor accesses to guest memory, the argument translation
>>> area's VA also needs to live outside this range, at least for 32-bit PV
>>> guests. To avoid extra is_hvm_*() conditionals, use the alternative VA
>>> uniformly.
>>
>> Since you are double mapping the per-domain virtual area, won't it
>> make more sense to map it just once outside of the Xen virtual space
>> area? (so it's always using PML4_ADDR(511))
> 
> Right, that's not possible for PV 64bit domains because it's guest
> owned linear address space in that case.
> 
> It seems like paravirt_ctxt_switch_to will modify the root_pgt to set
> the PERDOMAIN_VIRT_START entry, does the same need to be done for
> PERDOMAIN2_VIRT_START?

I don't think so, no. Argument translation doesn't happen when
the restricted page tables are in use, and all other uses of
the per-domain area continue to use the "normal" VA.

> I would also consider giving the slot a more meaningful name, as
> PERDOMAIN2_VIRT_START makes it seem like a new per-domain scratch
> space, when it's just a different mapping of the existing physical
> memory.
> 
> Maybe PERDOMAIN_MIRROR_VIRT_START? Or PERDOMAIN_XLAT_VIRT_START?

XLAT would be too specific - while we use it for xlat only, it's
still all of the mappings that appear at the alternate addresses.
I did consider using MIRROR, but it got too long for my taste.
Now that I think about it maybe PERDOMAIN_ALT_VIRT_START would do?

Jan


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 14:22:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 14:22:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88107.165540 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEC6b-0001Va-37; Mon, 22 Feb 2021 14:22:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88107.165540; Mon, 22 Feb 2021 14:22:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEC6b-0001VT-01; Mon, 22 Feb 2021 14:22:33 +0000
Received: by outflank-mailman (input) for mailman id 88107;
 Mon, 22 Feb 2021 14:22:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEC6a-0001VL-JY; Mon, 22 Feb 2021 14:22:32 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEC6a-0001XS-EF; Mon, 22 Feb 2021 14:22:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEC6a-0006Fr-6h; Mon, 22 Feb 2021 14:22:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lEC6a-0002fb-6B; Mon, 22 Feb 2021 14:22:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=NHJHPkU9woSHy3fxrzLWQqr+/EnY2JZsIyHwpfocLJg=; b=4Ondn3NIGNv9s+sMNgvN6Lrvur
	AocYeZcXq54zoMiKZ4ZLWY+ZRk1iNJlozKpybuUeeFqVitWP3ZGX02E3oKULfhsoCpbEF1qrwjzRL
	vNOv6+cGISJHv5qhfBB08reqToX7Mlj7K1ileCP/mFqH1/ChiPZiEGLS59sb5vLuX1pE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159548-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 159548: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f894c3d8e705fea9cb3244fa61684bfd8bdd1b2a
X-Osstest-Versions-That:
    xen=87a067fd8f4d4f7c6be02c3d38145115ac542017
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 22 Feb 2021 14:22:32 +0000

flight 159548 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159548/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f894c3d8e705fea9cb3244fa61684bfd8bdd1b2a
baseline version:
 xen                  87a067fd8f4d4f7c6be02c3d38145115ac542017

Last test of basis   159481  2021-02-19 18:02:31 Z    2 days
Testing same since   159548  2021-02-22 12:00:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Tamas K Lengyel <tamas@tklengyel.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   87a067fd8f..f894c3d8e7  f894c3d8e705fea9cb3244fa61684bfd8bdd1b2a -> smoke


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 14:22:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 14:22:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88109.165556 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEC6f-0001YI-CN; Mon, 22 Feb 2021 14:22:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88109.165556; Mon, 22 Feb 2021 14:22:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEC6f-0001YA-98; Mon, 22 Feb 2021 14:22:37 +0000
Received: by outflank-mailman (input) for mailman id 88109;
 Mon, 22 Feb 2021 14:22:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6bXc=HY=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lEC6e-0001Xi-8w
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 14:22:36 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 891291a7-b0cb-4b34-9f74-99a3651c5535;
 Mon, 22 Feb 2021 14:22:34 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0422FACBF;
 Mon, 22 Feb 2021 14:22:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 891291a7-b0cb-4b34-9f74-99a3651c5535
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614003754; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ib9VfTPPSxo8nkl5zZBa2u0I49i3UgZnQHQiTXZfQbg=;
	b=emEbzEV0u0dr1jl9bUc3l0lBMjTGDhSryhOm32YbeFIbaK2AvyKydvOb6cbqr+Ky3xcGis
	j5TQ8fisz7SRytI0iY9aguv0rqtnEwuRfanbmc6NbRynPTGudwz/iR01S6CPX3VlGrNoGY
	2+ddxLI0cmxxJEnFLksBn6D0E02QnYA=
Subject: Re: [PATCH][4.15] x86: mirror compat argument translation area for
 32-bit PV
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <bdedf018-b6c4-d0da-fb4b-8cf2d048c3b1@suse.com>
 <9eade40b-bd95-b850-2dec-f7def66c3c7b@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <77a36366-9157-c3d3-b1f0-211f4fc39a93@suse.com>
Date: Mon, 22 Feb 2021 15:22:33 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <9eade40b-bd95-b850-2dec-f7def66c3c7b@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 22.02.2021 15:14, Andrew Cooper wrote:
> On 22/02/2021 10:27, Jan Beulich wrote:
>> Now that we guard the entire Xen VA space against speculative abuse
>> through hypervisor accesses to guest memory, the argument translation
>> area's VA also needs to live outside this range, at least for 32-bit PV
>> guests. To avoid extra is_hvm_*() conditionals, use the alternative VA
>> uniformly.
>>
>> While this could be conditionalized upon CONFIG_PV32 &&
>> CONFIG_SPECULATIVE_HARDEN_GUEST_ACCESS, omitting such extra conditionals
>> keeps the code more legible imo.
>>
>> Fixes: 4dc181599142 ("x86/PV: harden guest memory accesses against speculative abuse")
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>
>> --- a/xen/arch/x86/mm.c
>> +++ b/xen/arch/x86/mm.c
>> @@ -1727,6 +1727,11 @@ void init_xen_l4_slots(l4_pgentry_t *l4t
>>                 (ROOT_PAGETABLE_FIRST_XEN_SLOT + slots -
>>                  l4_table_offset(XEN_VIRT_START)) * sizeof(*l4t));
>>      }
>> +
>> +    /* Slot 511: Per-domain mappings mirror. */
>> +    if ( !is_pv_64bit_domain(d) )
>> +        l4t[l4_table_offset(PERDOMAIN2_VIRT_START)] =
>> +            l4e_from_page(d->arch.perdomain_l3_pg, __PAGE_HYPERVISOR_RW);
> 
> This virtual address is inside the extended directmap.

No. That one covers only the range excluding the last L4 slot.

> You're going to
> need to rearrange more things than just this, to make it safe.

I specifically picked that entry because I don't think further
arrangements are needed.

> While largely a theoretical risk as far as the directmap goes, there is
> now a rather higher risk of colliding with the ERR_PTR() range.  It's bad
> enough that this infrastructure is inherently unsafe with 64-bit PV guests,

The ERR_PTR() range is still _far_ away from the sub-ranges we
use in the per-domain area.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 14:26:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 14:26:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88117.165567 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lECAc-0001nW-TY; Mon, 22 Feb 2021 14:26:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88117.165567; Mon, 22 Feb 2021 14:26:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lECAc-0001nP-Qa; Mon, 22 Feb 2021 14:26:42 +0000
Received: by outflank-mailman (input) for mailman id 88117;
 Mon, 22 Feb 2021 14:26:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6bXc=HY=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lECAb-0001nK-Tp
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 14:26:41 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b337e91c-c9f5-4eef-b7aa-41368687a4fd;
 Mon, 22 Feb 2021 14:26:41 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 35528ADD6;
 Mon, 22 Feb 2021 14:26:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b337e91c-c9f5-4eef-b7aa-41368687a4fd
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614004000; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=jRGBB4KGzgS4JNFjfofwW8YfWFRgIdjPGdABGwpvCpo=;
	b=WdXP63MUu+D82OBrObimk3fbF0Lrln71mD1WP2arojFg06Bj+MvxQScwA/hf73q41hXFLs
	tj55nFCd9SVbC1yfkSqMzu2N9KUjEOoRsgsgoUL40S0ickz7VFlbaxZ6E1VPOyyttLE7SE
	1LOcahD7r7FVG/p4BrQ3Jz0b8oMR4p8=
Subject: Re: [PATCH for-4.15] xen/sched: Add missing memory barrier in
 vcpu_block()
To: Julien Grall <julien@xen.org>
Cc: iwj@xenproject.org, sstabellini@kernel.org, ash.j.wilding@gmail.com,
 Julien Grall <jgrall@amazon.com>, George Dunlap <george.dunlap@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>, xen-devel@lists.xenproject.org
References: <20210220194701.24202-1-julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <744ca7e5-328d-0c5f-bc52-e4c0e78dad97@suse.com>
Date: Mon, 22 Feb 2021 15:26:39 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210220194701.24202-1-julien@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 20.02.2021 20:47, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> The comment in vcpu_block() states that the events should be checked
> /after/ blocking, to avoid a wakeup-waiting race. However, from a generic
> perspective, set_bit() doesn't prevent re-ordering. So the following
> could happen:
> 
> CPU0  (blocking vCPU A)         |    CPU1 (unblocking vCPU A)
>                                 |
> A <- read local events          |
>                                 |   set local events
>                                 |   test_and_clear_bit(_VPF_blocked)
>                                 |       -> Bail out as the bit is not set
>                                 |
> set_bit(_VPF_blocked)           |
>                                 |
> check A                         |
> 
> The variable A will be 0 and therefore the vCPU will be blocked when it
> should continue running.
> 
> vcpu_block() now gains an smp_mb__after_atomic() to prevent the CPU
> from reading any information about local events before the flag
> _VPF_blocked is set.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

> ---
> 
> This is a follow-up of the discussion that started in 2019 (see [1])
> regarding a possible race between do_poll()/vcpu_unblock() and the wake
> up path.
> 
> I haven't yet fully thought about the potential race in do_poll(). If
> there is, then this would likely want to be fixed in a separate patch.
> 
> On x86, the current code is safe because set_bit() is fully ordered. So
> the problem is Arm (and potentially any new architectures).
> 
> I couldn't convince myself whether the Arm implementation of
> local_events_need_delivery() contains enough barriers to prevent the
> re-ordering. However, I don't think we want to play with the devil here,
> as the function may be optimized in the future.

In fact I think this ...

> --- a/xen/common/sched/core.c
> +++ b/xen/common/sched/core.c
> @@ -1418,6 +1418,8 @@ void vcpu_block(void)
>  
>      set_bit(_VPF_blocked, &v->pause_flags);
>  
> +    smp_mb__after_atomic();
> +

... pattern should be looked for throughout the codebase, and barriers
be added unless it can be proven none is needed.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 14:35:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 14:35:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88121.165582 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lECIp-0002gp-Qn; Mon, 22 Feb 2021 14:35:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88121.165582; Mon, 22 Feb 2021 14:35:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lECIp-0002gi-Nr; Mon, 22 Feb 2021 14:35:11 +0000
Received: by outflank-mailman (input) for mailman id 88121;
 Mon, 22 Feb 2021 14:35:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=V+0I=HY=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1lECIn-0002gd-T1
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 14:35:10 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id e5453185-93de-419d-af51-947d8447ed01;
 Mon, 22 Feb 2021 14:35:08 +0000 (UTC)
Received: from mail-ed1-f70.google.com (mail-ed1-f70.google.com
 [209.85.208.70]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-81-wnzsbwKhPbe49Di5rv67lw-1; Mon, 22 Feb 2021 09:35:04 -0500
Received: by mail-ed1-f70.google.com with SMTP id g20so4523121edy.7
 for <xen-devel@lists.xenproject.org>; Mon, 22 Feb 2021 06:35:04 -0800 (PST)
Received: from [192.168.1.36] (68.red-83-57-175.dynamicip.rima-tde.net.
 [83.57.175.68])
 by smtp.gmail.com with ESMTPSA id s14sm10457606ejf.47.2021.02.22.06.35.00
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 22 Feb 2021 06:35:02 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e5453185-93de-419d-af51-947d8447ed01
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1614004508;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ROXc4pPESl2hnVPAmTj87jG8Y89YLot+XFk/YVIBdHE=;
	b=Utm4jjZcClpf2C5IqxE6o3+p8aSCVx1b9QUseQuVCq6/QEbz/4ar8Dsr4E6aOxo81KX64N
	27wyaTn7rQkbqdzWzN8HVr1t0ITomq5YnFcbWKAeiyTrSUjSiEEZgBTtX5LBWaMy9V7UAK
	8NzZqEh3L+USeKZjsfv7OPmWuf2AXt0=
X-MC-Unique: wnzsbwKhPbe49Di5rv67lw-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=ROXc4pPESl2hnVPAmTj87jG8Y89YLot+XFk/YVIBdHE=;
        b=qQQk5ClsEyYn86XNGOHlD4MLvONCGeYD/CvpXTButzbXdbtqIQZ8iHNUUqp4wh9/K2
         8yP/EmEQh0MfJ1YlefoA0qOb4/CFhhlSMV++O5wA3laXT+fdebNrKHU87xUFpDRTPX3U
         6ayPSqs7jn5kfW0BJ/VRu7R/sD1FkqA0j/7HfqrBz/ngBpEPIkIYuxPVUU2UG4XF95Zk
         iNiloHoyGaXBlF+3EfpPtximD8xtm8XCpo80O2Xdi8m+BVyv38tTFrc3fO6xHjPmta75
         exjfGa346wep9+ggYImnyKspRCF+PerG1vZELsMQgQVTXRn5NyjU+l4J+202zxBPaG4z
         1EaA==
X-Gm-Message-State: AOAM533wpXQ8caEuvXJ8gmyqdu2HePa2w6gw4x1A/1Bu15UgTCEgFHNo
	DTWSUnVvjTOZG/xw6EDPO0ZdaPSrCu0+DhUEjHed+QTNyPKhxG12/kdWAWSd+OshD5nc1hc3TpJ
	KRLG3FirjAOsHjpPcrkl9s7T5tE0=
X-Received: by 2002:a50:9dc9:: with SMTP id l9mr22865682edk.377.1614004503647;
        Mon, 22 Feb 2021 06:35:03 -0800 (PST)
X-Google-Smtp-Source: ABdhPJyumDD7RCrt3aWNrG+469eebI8wUrsStbqcQO2s6zYWdcqMS9H6mt2qeXwl0MrBf/vG1e+wdw==
X-Received: by 2002:a50:9dc9:: with SMTP id l9mr22865651edk.377.1614004503527;
        Mon, 22 Feb 2021 06:35:03 -0800 (PST)
Subject: Re: [PATCH 0/2] sysemu: Let VMChangeStateHandler take boolean
 'running' argument
To: qemu-devel@nongnu.org, Paolo Bonzini <pbonzini@redhat.com>
Cc: Huacai Chen <chenhuacai@kernel.org>, Greg Kurz <groug@kaod.org>,
 "Michael S. Tsirkin" <mst@redhat.com>, qemu-trivial@nongnu.org,
 Amit Shah <amit@kernel.org>, Dmitry Fleytman <dmitry.fleytman@gmail.com>,
 qemu-arm@nongnu.org, John Snow <jsnow@redhat.com>, qemu-s390x@nongnu.org,
 Paul Durrant <paul@xen.org>, Anthony Perard <anthony.perard@citrix.com>,
 Eduardo Habkost <ehabkost@redhat.com>, Gerd Hoffmann <kraxel@redhat.com>,
 Kevin Wolf <kwolf@redhat.com>, Marcelo Tosatti <mtosatti@redhat.com>,
 Max Reitz <mreitz@redhat.com>, Alex Williamson <alex.williamson@redhat.com>,
 Aurelien Jarno <aurelien@aurel32.net>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Jason Wang <jasowang@redhat.com>, Peter Maydell <peter.maydell@linaro.org>,
 =?UTF-8?Q?C=c3=a9dric_Le_Goater?= <clg@kaod.org>,
 Halil Pasic <pasic@linux.ibm.com>, Fam Zheng <fam@euphon.net>,
 qemu-ppc@nongnu.org, kvm@vger.kernel.org,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 Cornelia Huck <cohuck@redhat.com>, David Hildenbrand <david@redhat.com>,
 qemu-block@nongnu.org, Christian Borntraeger <borntraeger@de.ibm.com>,
 Sunil Muthuswamy <sunilmut@microsoft.com>,
 David Gibson <david@gibson.dropbear.id.au>,
 Richard Henderson <richard.henderson@linaro.org>,
 =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>,
 Laurent Vivier <laurent@vivier.eu>, Thomas Huth <thuth@redhat.com>,
 Stefan Hajnoczi <stefanha@redhat.com>, Jiaxun Yang <jiaxun.yang@flygoat.com>
References: <20210111152020.1422021-1-philmd@redhat.com>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>
Message-ID: <84048681-32d3-7217-e94c-461501cf550b@redhat.com>
Date: Mon, 22 Feb 2021 15:34:59 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <20210111152020.1422021-1-philmd@redhat.com>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

Paolo, this series is fully reviewed, can it go via your
misc tree?

On 1/11/21 4:20 PM, Philippe Mathieu-Daudé wrote:
> Trivial prototype change to clarify the use of the 'running'
> argument of VMChangeStateHandler.
> 
> Green CI:
> https://gitlab.com/philmd/qemu/-/pipelines/239497352
> 
> Philippe Mathieu-Daudé (2):
>   sysemu/runstate: Let runstate_is_running() return bool
>   sysemu: Let VMChangeStateHandler take boolean 'running' argument
> 
>  include/sysemu/runstate.h   | 12 +++++++++---
>  target/arm/kvm_arm.h        |  2 +-
>  target/ppc/cpu-qom.h        |  2 +-
>  accel/xen/xen-all.c         |  2 +-
>  audio/audio.c               |  2 +-
>  block/block-backend.c       |  2 +-
>  gdbstub.c                   |  2 +-
>  hw/block/pflash_cfi01.c     |  2 +-
>  hw/block/virtio-blk.c       |  2 +-
>  hw/display/qxl.c            |  2 +-
>  hw/i386/kvm/clock.c         |  2 +-
>  hw/i386/kvm/i8254.c         |  2 +-
>  hw/i386/kvmvapic.c          |  2 +-
>  hw/i386/xen/xen-hvm.c       |  2 +-
>  hw/ide/core.c               |  2 +-
>  hw/intc/arm_gicv3_its_kvm.c |  2 +-
>  hw/intc/arm_gicv3_kvm.c     |  2 +-
>  hw/intc/spapr_xive_kvm.c    |  2 +-
>  hw/misc/mac_via.c           |  2 +-
>  hw/net/e1000e_core.c        |  2 +-
>  hw/nvram/spapr_nvram.c      |  2 +-
>  hw/ppc/ppc.c                |  2 +-
>  hw/ppc/ppc_booke.c          |  2 +-
>  hw/s390x/tod-kvm.c          |  2 +-
>  hw/scsi/scsi-bus.c          |  2 +-
>  hw/usb/hcd-ehci.c           |  2 +-
>  hw/usb/host-libusb.c        |  2 +-
>  hw/usb/redirect.c           |  2 +-
>  hw/vfio/migration.c         |  2 +-
>  hw/virtio/virtio-rng.c      |  2 +-
>  hw/virtio/virtio.c          |  2 +-
>  net/net.c                   |  2 +-
>  softmmu/memory.c            |  2 +-
>  softmmu/runstate.c          |  4 ++--
>  target/arm/kvm.c            |  2 +-
>  target/i386/kvm/kvm.c       |  2 +-
>  target/i386/sev.c           |  2 +-
>  target/i386/whpx/whpx-all.c |  2 +-
>  target/mips/kvm.c           |  4 ++--
>  ui/gtk.c                    |  2 +-
>  ui/spice-core.c             |  2 +-
>  41 files changed, 51 insertions(+), 45 deletions(-)
> 



From xen-devel-bounces@lists.xenproject.org Mon Feb 22 14:37:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 14:37:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88126.165598 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lECL1-0002qt-9n; Mon, 22 Feb 2021 14:37:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88126.165598; Mon, 22 Feb 2021 14:37:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lECL1-0002qm-4o; Mon, 22 Feb 2021 14:37:27 +0000
Received: by outflank-mailman (input) for mailman id 88126;
 Mon, 22 Feb 2021 14:37:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6bXc=HY=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lECL0-0002pk-4g
 for xen-devel@lists.xen.org; Mon, 22 Feb 2021 14:37:26 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7d287ab9-0d28-45a1-ba7d-daf0d830c199;
 Mon, 22 Feb 2021 14:37:25 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 35BD4ACBF;
 Mon, 22 Feb 2021 14:37:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7d287ab9-0d28-45a1-ba7d-daf0d830c199
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614004644; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=tCWkG02dPXSfkiAcHJic0pFvK1DmDmjDroIUFKTrOuc=;
	b=NWxEDaohl7i3l77dmFQrWAGlMNQ6FhUYr1Z8jvC+EI1Gul8MsNKrpzwGzuAyvLbjpk8neG
	IhUOB3YR5Ra8wzowBoAYKPqrrC8F4ltSEjiig02/vO8XSNvKF6OIUVseErSPzNcc35BaGE
	OeU80FmIfj4bbsnA2cCAz/cLT1mql+g=
Subject: Re: Stable ABI checking (take 2)
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>, George Dunlap
 <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Juergen Gross <jgross@suse.com>, Xen-devel <xen-devel@lists.xen.org>
References: <68c93553-7db5-f43b-b3cd-b9112a8a57dc@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <78eec55c-ac2c-467e-0a2c-9acb44eba850@suse.com>
Date: Mon, 22 Feb 2021 15:37:23 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <68c93553-7db5-f43b-b3cd-b9112a8a57dc@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 22.02.2021 15:03, Andrew Cooper wrote:
> Hello,
> 
> Staging is now capable of writing out an ABI description when the
> appropriate tool (abi-dumper) is available.
> 
> We now have two possible courses of action for ABI checking in builds.
> 
> 1) Publish the ABI descriptions on xenbits, update all downstream test
> systems to invoke abi-compliance-checker manually.
> 
> 2) Commit/update the ABI descriptions when RELEASE-$X.$Y.0 is tagged,
> update the main build to use abi-compliance-checker when available.
> 
> 
> Pros/Cons:
> 
> The ABI descriptions claim to be sensitive to the toolchain in use.  I
> don't know how true this is in practice.
> 
> Publishing on xenbits involves obtaining even more misc artefacts during
> the build, which is going to be a firm -2 from downstreams.
> 
> Committing the ABI descriptions lets abi checking work in developer
> builds (with suitable tools installed).  It also means we get checking
> "for free" in Gitlab CI and OSSTest without custom logic.
> 
> 
> Thoughts on which approach is better?  I'm leaning in favour of option 2
> because it allows for consumption by developers and test systems.

+1 for option 2, fwiw.

> If we do go with route 2, I was thinking of adding a `make check`
> hierarchy.  Longer term, this can be used to queue up other unit tests
> which can be run from within the build tree.

Is there a reason the normal build process can't be made to fail
when verification fails? Besides "make check" typically meaning the
invocation of a functional testsuite rather than (just) some
compatibility checking, I'd also be worried about no-one (likely
including me) remembering to separately run "make check" at
appropriate times.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 14:46:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 14:46:27 +0000
Date: Mon, 22 Feb 2021 15:46:11 +0100
From: Roger Pau Monné <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Ian Jackson
	<iwj@xenproject.org>
Subject: Re: [PATCH][4.15] x86: mirror compat argument translation area for
 32-bit PV
Message-ID: <YDPDsxgSJMPUk1DW@Air-de-Roger>
References: <bdedf018-b6c4-d0da-fb4b-8cf2d048c3b1@suse.com>
 <YDOW+ftkNsG2RH3C@Air-de-Roger> <YDO8CM0prPjoo/X1@Air-de-Roger>
 <d66da403-8054-0313-cf1e-cf3c539ce33a@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <d66da403-8054-0313-cf1e-cf3c539ce33a@suse.com>

On Mon, Feb 22, 2021 at 03:20:24PM +0100, Jan Beulich wrote:
> On 22.02.2021 15:13, Roger Pau Monné wrote:
> > On Mon, Feb 22, 2021 at 12:35:21PM +0100, Roger Pau Monné wrote:
> >> On Mon, Feb 22, 2021 at 11:27:07AM +0100, Jan Beulich wrote:
> >>> Now that we guard the entire Xen VA space against speculative abuse
> >>> through hypervisor accesses to guest memory, the argument translation
> >>> area's VA also needs to live outside this range, at least for 32-bit PV
> >>> guests. To avoid extra is_hvm_*() conditionals, use the alternative VA
> >>> uniformly.
> >>
> >> Since you are double mapping the per-domain virtual area, wouldn't it
> >> make more sense to map it just once, outside of the Xen virtual address
> >> space? (so it's always using PML4_ADDR(511))
> > 
> > Right, that's not possible for 64-bit PV domains because the linear
> > address space is guest-owned in that case.
> > 
> > It seems like paravirt_ctxt_switch_to will modify the root_pgt to set
> > the PERDOMAIN_VIRT_START entry; does the same need to be done for
> > PERDOMAIN2_VIRT_START?
> 
> I don't think so, no. Argument translation doesn't happen when
> the restricted page tables are in use, and all other uses of
> the per-domain area continue to use the "normal" VA.

Oh, OK, thanks for the clarification. AFAICT the PERDOMAIN2_VIRT_START
slot won't get populated on the restricted page tables, and hence will
always trigger a page fault if access is attempted with those tables
loaded.

> > I would also consider giving the slot a more meaningful name, as
> > PERDOMAIN2_VIRT_START makes it seem like a new per-domain scratch
> > space, when it's just a different mapping of the existing physical
> > memory.
> > 
> > Maybe PERDOMAIN_MIRROR_VIRT_START? Or PERDOMAIN_XLAT_VIRT_START?
> 
> XLAT would be too specific - while we use it for xlat only, it's
> still all of the mappings that appear at the alternate addresses.

Well, given that such mappings won't be available when running 64-bit
PV guests, I still think the area is unlikely to be used for anything
that's not XLAT-specific.

> I did consider using MIRROR, but it got too long for my taste.
> Now that I think about it maybe PERDOMAIN_ALT_VIRT_START would do?

Indeed, I would prefer that over PERDOMAIN2_VIRT_START if you still
consider XLAT to be too specific.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 14:50:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 14:50:16 +0000
Date: Mon, 22 Feb 2021 15:50:00 +0100
From: Roger Pau Monné <roger.pau@citrix.com>
To: Juergen Gross <jgross@suse.com>
CC: <xen-devel@lists.xenproject.org>, <linux-block@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <netdev@vger.kernel.org>,
	<linux-scsi@vger.kernel.org>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Jens Axboe <axboe@kernel.dk>, Wei Liu <wei.liu@kernel.org>, Paul Durrant
	<paul@xen.org>, "David S. Miller" <davem@davemloft.net>, Jakub Kicinski
	<kuba@kernel.org>, Boris Ostrovsky <boris.ostrovsky@oracle.com>, Stefano
 Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH v3 5/8] xen/events: link interdomain events to associated
 xenbus device
Message-ID: <YDPEmFaWQsBhvmb0@Air-de-Roger>
References: <20210219154030.10892-1-jgross@suse.com>
 <20210219154030.10892-6-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20210219154030.10892-6-jgross@suse.com>

On Fri, Feb 19, 2021 at 04:40:27PM +0100, Juergen Gross wrote:
> In order to support the possibility of per-device event channel
> settings (e.g. lateeoi spurious event thresholds) add a xenbus device
> pointer to struct irq_info and modify the related event channel
> binding interfaces to take the pointer to the xenbus device as a
> parameter instead of the domain id of the other side.
> 
> While at it remove the stale prototype of bind_evtchn_to_irq_lateeoi().
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Reviewed-by: Wei Liu <wei.liu@kernel.org>
> Reviewed-by: Paul Durrant <paul@xen.org>
> ---
>  drivers/block/xen-blkback/xenbus.c  |  2 +-

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 14:57:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 14:57:05 +0000
Date: Mon, 22 Feb 2021 15:56:48 +0100
From: Roger Pau Monné <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH] x86/irq: simplify loop in unmap_domain_pirq
Message-ID: <YDPGMOgV1sLCczpT@Air-de-Roger>
References: <20210210092211.53359-1-roger.pau@citrix.com>
 <f80e0026-9a0b-d25d-a0a4-81774da8cba8@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <f80e0026-9a0b-d25d-a0a4-81774da8cba8@suse.com>

On Fri, Feb 19, 2021 at 03:39:14PM +0100, Jan Beulich wrote:
> On 10.02.2021 10:22, Roger Pau Monne wrote:
> > The for loop in unmap_domain_pirq is unnecessarily complicated, with
> > several places where the index is incremented, and also different
> > exit conditions spread across the loop body.
> > 
> > Simplify it by looping over each possible PIRQ using the for loop
> > syntax, and remove all possible in-loop exit points.
> > 
> > No functional change intended.
> > 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> 
> Quite a bit better indeed. Just one nit below (can be taken care of
> while committing, once the tree will re-open).

Sure, if you want to queue this already, please fix the format
string.

> > @@ -2356,11 +2355,23 @@ int unmap_domain_pirq(struct domain *d, int pirq)
> >      if ( msi_desc != NULL )
> >          pci_disable_msi(msi_desc);
> >  
> > -    spin_lock_irqsave(&desc->lock, flags);
> > -
> > -    for ( i = 0; ; )
> > +    for ( i = 0; i < nr; i++, info = pirq_info(d, pirq + i) )
> >      {
> > +        unsigned long flags;
> > +
> > +        if ( !info || info->arch.irq <= 0 )
> > +        {
> > +            printk(XENLOG_G_ERR "dom%d: MSI pirq %d not mapped\n",
> > +                   d->domain_id, pirq + i);
> 
> %pd please as you touch/move this anyway.
> 
> > @@ -2378,45 +2389,6 @@ int unmap_domain_pirq(struct domain *d, int pirq)
> >              desc->msi_desc = NULL;
> >          }
> >  
> > -        if ( ++i == nr )
> > -            break;
> > -
> > -        spin_unlock_irqrestore(&desc->lock, flags);
> > -
> > -        if ( !forced_unbind )
> > -           cleanup_domain_irq_pirq(d, irq, info);
> > -
> > -        rc = irq_deny_access(d, irq);
> > -        if ( rc )
> > -        {
> > -            printk(XENLOG_G_ERR
> > -                   "dom%d: could not deny access to IRQ%d (pirq %d)\n",
> > -                   d->domain_id, irq, pirq + i);
> 
> Looks like the pirq number logged here also was off by one, which
> the re-arrangement takes care of.

Indeed. I don't think it's worth fixing this now.

Thanks, Roger.
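[Editor's note: the restructuring described in the commit message can be modelled outside Xen. The sketch below is plain C with a hypothetical lookup() standing in for pirq_info(); it shows the same move of all increment and exit logic into the for header, with unmapped slots skipped rather than breaking out mid-loop.]

```c
/* Hypothetical stand-in for pirq_info(): returns 0 when a slot is
 * "not mapped", a nonzero token otherwise. */
static int lookup(int base, int i)
{
    return (base + i) % 3 ? base + i : 0;
}

/* All iteration state lives in the for header, as in the patch: the
 * comma operator increments i first, then refreshes the cached lookup
 * for the new index. */
static int count_mapped(int base, int nr)
{
    int n = 0;
    int v = lookup(base, 0);

    for ( int i = 0; i < nr; i++, v = lookup(base, i) )
        if ( v )
            n++;

    return n;
}
```

The arithmetic is only there to give lookup() some unmapped slots; the point is the loop shape, not the values.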


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 15:22:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 15:22:49 +0000
Date: Mon, 22 Feb 2021 16:22:14 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>
Subject: Re: [PATCH v2 4/8] x86: rename {get,put}_user() to {get,put}_guest()
Message-ID: <YDPMJp731Zt+Vx0J@Air-de-Roger>
References: <b466a19e-e547-3c7c-e39b-1a4c848a053a@suse.com>
 <369ae5ec-ee2a-78d4-438f-b18d04c81c4c@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <369ae5ec-ee2a-78d4-438f-b18d04c81c4c@suse.com>

On Wed, Feb 17, 2021 at 09:21:05AM +0100, Jan Beulich wrote:
> Bring them (back) in line with __{get,put}_guest().
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

> 
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -1649,19 +1649,19 @@ static void load_segments(struct vcpu *n
>  
>              if ( !ring_1(regs) )
>              {
> -                ret  = put_user(regs->ss,       esp-1);
> -                ret |= put_user(regs->esp,      esp-2);
> +                ret  = put_guest(regs->ss,  esp - 1);
> +                ret |= put_guest(regs->esp, esp - 2);
>                  esp -= 2;
>              }
>  
>              if ( ret |
> -                 put_user(rflags,              esp-1) |
> -                 put_user(cs_and_mask,         esp-2) |
> -                 put_user(regs->eip,           esp-3) |
> -                 put_user(uregs->gs,           esp-4) |
> -                 put_user(uregs->fs,           esp-5) |
> -                 put_user(uregs->es,           esp-6) |
> -                 put_user(uregs->ds,           esp-7) )
> +                 put_guest(rflags,      esp - 1) |
> +                 put_guest(cs_and_mask, esp - 2) |
> +                 put_guest(regs->eip,   esp - 3) |
> +                 put_guest(uregs->gs,   esp - 4) |
> +                 put_guest(uregs->fs,   esp - 5) |
> +                 put_guest(uregs->es,   esp - 6) |
> +                 put_guest(uregs->ds,   esp - 7) )

I wonder whether we could use put_unsafe here, but I assume there's
some kind of speculation attack also against stores?

Thanks, Roger.
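[Editor's note: Roger's question is about the '|'-accumulation idiom visible in the hunk: every store is attempted unconditionally and the return values are OR'd, so a single branch catches any fault. A minimal userspace model, where store() is a stand-in for put_guest() and the fail mask is purely illustrative:]

```c
/* Stand-in for a guest store: 0 on success, nonzero on a (simulated)
 * fault. */
static int store(int *slot, int val, int fail)
{
    if ( fail )
        return -1;
    *slot = val;
    return 0;
}

/* Push a three-word frame; bit n of fail_mask makes store n fault.
 * Using | rather than || keeps every store unconditional, matching
 * the original code, which attempts the whole frame before failing. */
static int push_frame(int stk[3], unsigned int fail_mask)
{
    int ret;

    ret  = store(&stk[0], 10, fail_mask & 1);
    ret |= store(&stk[1], 20, fail_mask & 2);
    ret |= store(&stk[2], 30, fail_mask & 4);

    return ret ? -1 : 0;
}
```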


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 15:26:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 15:26:44 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Ian Jackson <iwj@xenproject.org>
Subject: Domain reference counting breakage
Date: Mon, 22 Feb 2021 15:26:17 +0000
Message-ID: <20210222152617.16382-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain

At the moment, attempting to create an HVM guest with max_gnttab_frames of 0
causes Xen to explode on the:

  BUG_ON(atomic_read(&d->refcnt) != DOMAIN_DESTROYED);

in _domain_destroy().  Instrumenting Xen a little more to highlight where the
modifications to d->refcnt occur:

  (d6) --- Xen Test Framework ---
  (d6) Environment: PV 64bit (Long mode 4 levels)
  (d6) Testing domain create:
  (d6) Testing x86 PVH Shadow
  (d6) (XEN) d0v0 Hit #DB in Xen context: e008:ffff82d0402046b5 [domain_create+0x1c3/0x7f1], stk e010:ffff83003fea7d58, dr6 ffff0ff1
  (d6) (XEN) d0v0 Hit #DB in Xen context: e008:ffff82d040321b11 [share_xen_page_with_guest+0x175/0x190], stk e010:ffff83003fea7ce8, dr6 ffff0ff1
  (d6) (XEN) d0v0 Hit #DB in Xen context: e008:ffff82d04022595b [assign_pages+0x223/0x2b7], stk e010:ffff83003fea7c68, dr6 ffff0ff1
  (d6) (XEN) grant_table.c:1934: Bad grant table sizes: grant 0, maptrack 0
  (d6) (XEN) *** d1 ref 3
  (d6) (XEN) d0v0 Hit #DB in Xen context: e008:ffff82d0402048bc [domain_create+0x3ca/0x7f1], stk e010:ffff83003fea7d58, dr6 ffff0ff1
  (d6) (XEN) d0v0 Hit #DB in Xen context: e008:ffff82d040225e11 [free_domheap_pages+0x422/0x44a], stk e010:ffff83003fea7c38, dr6 ffff0ff1
  (d6) (XEN) Xen BUG at domain.c:450
  (d6) (XEN) ----[ Xen-4.15-unstable  x86_64  debug=y  Not tainted ]----
  (d6) (XEN) CPU:    0
  (d6) (XEN) RIP:    e008:[<ffff82d040204366>] common/domain.c#_domain_destroy+0x69/0x6b

the problem becomes apparent.

First of all, there is a reference count leak - share_xen_page_with_guest()'s
reference isn't freed anywhere.

However, the main problem is that the 4th #DB above is this atomic_set()

  d->is_dying = DOMDYING_dead;
  if ( hardware_domain == d )
      hardware_domain = old_hwdom;
  printk("*** %pd ref %d\n", d, atomic_read(&d->refcnt));
  atomic_set(&d->refcnt, DOMAIN_DESTROYED);

in the domain_create() error path, which happens before free_domheap_pages()
drops the ref acquired by assign_pages(), and destroys still-relevant information
pertaining to the guest.

The best option is probably to use atomic_sub() to subtract (DOMAIN_DESTROYED
+ 1) from the current refcount, which preserves the extra refs taken by
share_xen_page_with_guest() and assign_pages() until they can be freed
appropriately.

~Andrew
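[Editor's note: the suggested fix can be sanity-checked with a userspace model. C11 atomics stand in for Xen's atomic_t, and the DOMAIN_DESTROYED value used here (1u << 31, i.e. its own negation in 32-bit two's-complement arithmetic, which is what makes the subtraction land on the sentinel) is an assumption for illustration.]

```c
#include <stdatomic.h>
#include <stdint.h>

/* Illustrative sentinel; assumed to be self-negating modulo 2^32. */
#define DOMAIN_DESTROYED (UINT32_C(1) << 31)

/* Model the error path with `extra` refs still held (e.g. the ones
 * from share_xen_page_with_guest() and assign_pages()), then drop
 * those refs one by one and return the final count. */
static uint32_t destroy_with_extra_refs(uint32_t extra)
{
    _Atomic uint32_t refcnt = 1;              /* creation reference */

    atomic_fetch_add(&refcnt, extra);

    /* Suggested fix: one subtraction drops the creation ref and sets
     * the destroyed marker, but preserves the extra refs -- unlike
     * atomic_set(&d->refcnt, DOMAIN_DESTROYED), which wipes them. */
    atomic_fetch_sub(&refcnt, DOMAIN_DESTROYED + 1);

    while ( extra-- )
        atomic_fetch_sub(&refcnt, 1);         /* later put-style drops */

    return atomic_load(&refcnt);
}
```

Whatever the number of outstanding refs, the count lands exactly on the sentinel once they have all been dropped, so the BUG_ON() in _domain_destroy() would pass.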


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 15:31:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 15:31:50 +0000
Date: Mon, 22 Feb 2021 16:31:06 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>
Subject: Re: [PATCH v2 5/8] x86/gdbsx: convert "user" to "guest" accesses
Message-ID: <YDPOOpC6/wGZaAkA@Air-de-Roger>
References: <b466a19e-e547-3c7c-e39b-1a4c848a053a@suse.com>
 <d1a1b9eb-33b4-4d07-9465-189699f88323@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <d1a1b9eb-33b4-4d07-9465-189699f88323@suse.com>

On Wed, Feb 17, 2021 at 09:21:36AM +0100, Jan Beulich wrote:
> Using copy_{from,to}_user(), this code assumed it would only be called
> by PV guests. Use copy_{from,to}_guest() instead, transforming the incoming
> structure field into a guest handle (the field should really have been
> one in the first place). Also do not transform the debuggee address into
> a pointer.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

One minor comment below that can be taken care of when committing, I
think.
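[Editor's note: the conversion keeps the loop shape of dbg_rw_guest_mem() while swapping raw pointer arithmetic on buf for guest_handle_add_offset(). A rough userspace model of that shape, with stand-in names, a tiny page size, and no Xen APIs:]

```c
#include <string.h>

#define MODEL_PAGE_SIZE 16      /* tiny "page" for illustration */

/* Copy len bytes out of a flat "guest memory" array in page-bounded
 * chunks, advancing the guest address and the destination cursor in
 * lock step (the real code advances a guest handle rather than a raw
 * pointer).  Returns the number of bytes remaining (0 on success). */
static unsigned int chunked_copy(const char *gmem, unsigned long addr,
                                 char *buf, unsigned int len)
{
    while ( len > 0 )
    {
        /* Bytes left in the current page, capped at the request. */
        unsigned int pagecnt =
            MODEL_PAGE_SIZE - (addr & (MODEL_PAGE_SIZE - 1));

        if ( pagecnt > len )
            pagecnt = len;

        memcpy(buf, gmem + addr, pagecnt);

        addr += pagecnt;
        buf  += pagecnt;        /* guest_handle_add_offset() in the patch */
        len  -= pagecnt;
    }

    return len;
}
```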

> ---
> v2: Re-base (bug fix side effect was taken care of already).
> 
> --- a/xen/arch/x86/debug.c
> +++ b/xen/arch/x86/debug.c
> @@ -108,12 +108,11 @@ dbg_pv_va2mfn(dbgva_t vaddr, struct doma
>  }
>  
>  /* Returns: number of bytes remaining to be copied */
> -static unsigned int dbg_rw_guest_mem(struct domain *dp, void * __user gaddr,
> -                                     void * __user buf, unsigned int len,
> -                                     bool toaddr, uint64_t pgd3)
> +static unsigned int dbg_rw_guest_mem(struct domain *dp, unsigned long addr,
> +                                     XEN_GUEST_HANDLE_PARAM(void) buf,
> +                                     unsigned int len, bool toaddr,
> +                                     uint64_t pgd3)
>  {
> -    unsigned long addr = (unsigned long)gaddr;
> -
>      while ( len > 0 )
>      {
>          char *va;
> @@ -134,20 +133,18 @@ static unsigned int dbg_rw_guest_mem(str
>  
>          if ( toaddr )
>          {
> -            copy_from_user(va, buf, pagecnt);    /* va = buf */
> +            copy_from_guest(va, buf, pagecnt);
>              paging_mark_dirty(dp, mfn);
>          }
>          else
> -        {
> -            copy_to_user(buf, va, pagecnt);    /* buf = va */
> -        }
> +            copy_to_guest(buf, va, pagecnt);
>  
>          unmap_domain_page(va);
>          if ( !gfn_eq(gfn, INVALID_GFN) )
>              put_gfn(dp, gfn_x(gfn));
>  
>          addr += pagecnt;
> -        buf += pagecnt;
> +        guest_handle_add_offset(buf, pagecnt);
>          len -= pagecnt;
>      }
>  
> @@ -161,7 +158,7 @@ static unsigned int dbg_rw_guest_mem(str
>   * pgd3: value of init_mm.pgd[3] in guest. see above.
>   * Returns: number of bytes remaining to be copied.
>   */
> -unsigned int dbg_rw_mem(void * __user addr, void * __user buf,
> +unsigned int dbg_rw_mem(unsigned long gva, XEN_GUEST_HANDLE_PARAM(void) buf,
>                          unsigned int len, domid_t domid, bool toaddr,
>                          uint64_t pgd3)

You change the prototype below to make pgd3 unsigned long, so you
should change the type here also? (and likely in dbg_rw_guest_mem?)

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 15:50:24 2021
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2][4.15] x86: mirror compat argument translation area for
 32-bit PV
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monné <roger.pau@citrix.com>,
 Ian Jackson <iwj@xenproject.org>
Message-ID: <2357b6ef-a452-13c8-8656-e42642e80d99@suse.com>
Date: Mon, 22 Feb 2021 16:50:14 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Now that we guard the entire Xen VA space against speculative abuse
through hypervisor accesses to guest memory, the argument translation
area's VA also needs to live outside this range, at least for 32-bit PV
guests. To avoid extra is_hvm_*() conditionals, use the alternative VA
uniformly.

While this could be conditionalized upon CONFIG_PV32 &&
CONFIG_SPECULATIVE_HARDEN_GUEST_ACCESS, omitting such extra conditionals
keeps the code more legible imo.

Fixes: 4dc181599142 ("x86/PV: harden guest memory accesses against speculative abuse")
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Release-Acked-by: Ian Jackson <iwj@xenproject.org>
---
v2: Rename PERDOMAIN2_VIRT_START to PERDOMAIN_ALT_VIRT_START.

--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -1727,6 +1727,11 @@ void init_xen_l4_slots(l4_pgentry_t *l4t
                (ROOT_PAGETABLE_FIRST_XEN_SLOT + slots -
                 l4_table_offset(XEN_VIRT_START)) * sizeof(*l4t));
     }
+
+    /* Slot 511: Per-domain mappings mirror. */
+    if ( !is_pv_64bit_domain(d) )
+        l4t[l4_table_offset(PERDOMAIN_ALT_VIRT_START)] =
+            l4e_from_page(d->arch.perdomain_l3_pg, __PAGE_HYPERVISOR_RW);
 }
 
 bool fill_ro_mpt(mfn_t mfn)
--- a/xen/include/asm-x86/config.h
+++ b/xen/include/asm-x86/config.h
@@ -159,11 +159,11 @@ extern unsigned char boot_edid_info[128]
  *    1:1 direct mapping of all physical memory.
 #endif
  *  0xffff880000000000 - 0xffffffffffffffff [120TB,             PML4:272-511]
- *    PV: Guest-defined use.
+ *    PV (64-bit): Guest-defined use.
  *  0xffff880000000000 - 0xffffff7fffffffff [119.5TB,           PML4:272-510]
  *    HVM/idle: continuation of 1:1 mapping
  *  0xffffff8000000000 - 0xffffffffffffffff [512GB, 2^39 bytes  PML4:511]
- *    HVM/idle: unused
+ *    HVM / 32-bit PV: Secondary per-domain mappings.
  *
  * Compatibility guest area layout:
  *  0x0000000000000000 - 0x00000000f57fffff [3928MB,            PML4:0]
@@ -242,6 +242,9 @@ extern unsigned char boot_edid_info[128]
 #endif
 #define DIRECTMAP_VIRT_END      (DIRECTMAP_VIRT_START + DIRECTMAP_SIZE)
 
+/* Slot 511: secondary per-domain mappings (for compat xlat area accesses). */
+#define PERDOMAIN_ALT_VIRT_START PML4_ADDR(511)
+
 #ifndef __ASSEMBLY__
 
 #ifdef CONFIG_PV32
--- a/xen/include/asm-x86/x86_64/uaccess.h
+++ b/xen/include/asm-x86/x86_64/uaccess.h
@@ -1,7 +1,17 @@
 #ifndef __X86_64_UACCESS_H
 #define __X86_64_UACCESS_H
 
-#define COMPAT_ARG_XLAT_VIRT_BASE ((void *)ARG_XLAT_START(current))
+/*
+ * With CONFIG_SPECULATIVE_HARDEN_GUEST_ACCESS (apparent) PV guest accesses
+ * are prohibited to touch the Xen private VA range.  The compat argument
+ * translation area, therefore, can't live within this range.  Domains
+ * (potentially) in need of argument translation (32-bit PV, possibly HVM) get
+ * a secondary mapping installed, which needs to be used for such accesses in
+ * the PV case, and will also be used for HVM to avoid extra conditionals.
+ */
+#define COMPAT_ARG_XLAT_VIRT_BASE ((void *)ARG_XLAT_START(current) + \
+                                   (PERDOMAIN_ALT_VIRT_START - \
+                                    PERDOMAIN_VIRT_START))
 #define COMPAT_ARG_XLAT_SIZE      (2*PAGE_SIZE)
 struct vcpu;
 int setup_compat_arg_xlat(struct vcpu *v);


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 15:55:07 2021
Subject: Re: [PATCH v2 5/8] x86/gdbsx: convert "user" to "guest" accesses
To: Roger Pau Monné <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>
References: <b466a19e-e547-3c7c-e39b-1a4c848a053a@suse.com>
 <d1a1b9eb-33b4-4d07-9465-189699f88323@suse.com>
 <YDPOOpC6/wGZaAkA@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <aa96faaf-478a-bfb0-1def-e79efb399668@suse.com>
Date: Mon, 22 Feb 2021 16:55:03 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <YDPOOpC6/wGZaAkA@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 22.02.2021 16:31, Roger Pau Monné wrote:
> On Wed, Feb 17, 2021 at 09:21:36AM +0100, Jan Beulich wrote:
>> Using copy_{from,to}_user(), this code was assuming to be called only by
>> PV guests. Use copy_{from,to}_guest() instead, transforming the incoming
>> structure field into a guest handle (the field should really have been
>> one in the first place). Also do not transform the debuggee address into
>> a pointer.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.

> One minor comment below that can be taken care of when committing, I
> think.
> 
>> ---
>> v2: Re-base (bug fix side effect was taken care of already).
>>
>> --- a/xen/arch/x86/debug.c
>> +++ b/xen/arch/x86/debug.c
>> @@ -108,12 +108,11 @@ dbg_pv_va2mfn(dbgva_t vaddr, struct doma
>>  }
>>  
>>  /* Returns: number of bytes remaining to be copied */
>> -static unsigned int dbg_rw_guest_mem(struct domain *dp, void * __user gaddr,
>> -                                     void * __user buf, unsigned int len,
>> -                                     bool toaddr, uint64_t pgd3)
>> +static unsigned int dbg_rw_guest_mem(struct domain *dp, unsigned long addr,
>> +                                     XEN_GUEST_HANDLE_PARAM(void) buf,
>> +                                     unsigned int len, bool toaddr,
>> +                                     uint64_t pgd3)
>>  {
>> -    unsigned long addr = (unsigned long)gaddr;
>> -
>>      while ( len > 0 )
>>      {
>>          char *va;
>> @@ -134,20 +133,18 @@ static unsigned int dbg_rw_guest_mem(str
>>  
>>          if ( toaddr )
>>          {
>> -            copy_from_user(va, buf, pagecnt);    /* va = buf */
>> +            copy_from_guest(va, buf, pagecnt);
>>              paging_mark_dirty(dp, mfn);
>>          }
>>          else
>> -        {
>> -            copy_to_user(buf, va, pagecnt);    /* buf = va */
>> -        }
>> +            copy_to_guest(buf, va, pagecnt);
>>  
>>          unmap_domain_page(va);
>>          if ( !gfn_eq(gfn, INVALID_GFN) )
>>              put_gfn(dp, gfn_x(gfn));
>>  
>>          addr += pagecnt;
>> -        buf += pagecnt;
>> +        guest_handle_add_offset(buf, pagecnt);
>>          len -= pagecnt;
>>      }
>>  
>> @@ -161,7 +158,7 @@ static unsigned int dbg_rw_guest_mem(str
>>   * pgd3: value of init_mm.pgd[3] in guest. see above.
>>   * Returns: number of bytes remaining to be copied.
>>   */
>> -unsigned int dbg_rw_mem(void * __user addr, void * __user buf,
>> +unsigned int dbg_rw_mem(unsigned long gva, XEN_GUEST_HANDLE_PARAM(void) buf,
>>                          unsigned int len, domid_t domid, bool toaddr,
>>                          uint64_t pgd3)
> 
> You change the prototype below to make pgd3 unsigned long, so you
> should change the type here also? (and likely in dbg_rw_guest_mem?)

I'd rather undo the change to the prototype, or else further
changes would be needed for consistency. I'll take it that
you're fine either way, and hence your ack stands.

Thanks for noticing, Jan


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 16:01:01 2021
Subject: Re: Stable ABI checking (take 2)
To: Jan Beulich <jbeulich@suse.com>
CC: Ian Jackson <iwj@xenproject.org>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Juergen Gross
	<jgross@suse.com>, Xen-devel <xen-devel@lists.xen.org>
References: <68c93553-7db5-f43b-b3cd-b9112a8a57dc@citrix.com>
 <78eec55c-ac2c-467e-0a2c-9acb44eba850@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <a2acb45e-244a-2786-391d-c6ee7d267cfd@citrix.com>
Date: Mon, 22 Feb 2021 16:00:43 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
In-Reply-To: <78eec55c-ac2c-467e-0a2c-9acb44eba850@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB

On 22/02/2021 14:37, Jan Beulich wrote:
> On 22.02.2021 15:03, Andrew Cooper wrote:
>> Hello,
>>
>> Staging is now capable of writing out an ABI description when the
>> appropriate tool (abi-dumper) is available.
>>
>> We now have two possible courses of action for ABI checking in builds.
>>
>> 1) Publish the ABI descriptions on xenbits, update all downstream test
>> systems to invoke abi-compliance-checker manually.
>>
>> 2) Commit/update the ABI descriptions when RELEASE-$X.$Y.0 is tagged,
>> update the main build to use abi-compliance-checker when available.
>>
>>
>> Pros/Cons:
>>
>> The ABI descriptions claim to be sensitive to the toolchain in use.  I don't
>> know how true this is in practice.
>>
>> Publishing on xenbits involves obtaining even more misc artefacts during
>> the build, which is going to be firm -2 from downstreams.
>>
>> Committing the ABI descriptions lets abi checking work in developer
>> builds (with suitable tools installed).  It also means we get checking
>> "for free" in Gitlab CI and OSSTest without custom logic.
>>
>>
>> Thoughts on which approach is better?  I'm leaning in favour of option 2
>> because it allows for consumption by developers and test systems.
> +1 for option 2, fwiw.
>
>> If we do go with route 2, I was thinking of adding a `make check`
>> hierarchy.Â  Longer term, this can be used to queue up other unit tests
>> which can be run from within the build tree.
> Is there a reason the normal build process can't be made to fail
> when verification fails? Besides "make check" typically meaning
> invoking a functional testsuite rather than (just) some compatibility
> checking, I'd also be worried that no-one (likely including me) would
> remember to separately run "make check" at appropriate times.

As far as RPM is concerned, splitting the two is important, as %build
and %check are explicitly separate steps.  I have no idea what the deb
policy/organisation is here.

Merging some of check into build would be a layering violation, and even
if we did so, where do you draw the line?

~Andrew
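[Editor's note: a minimal sketch of what option 2's `make check` step
might look like as a script. The tool names (abi-dumper,
abi-compliance-checker) and their options are real; the library and
baseline paths are hypothetical placeholders, and the script skips
gracefully when the optional tools are absent, matching "when available"
in the proposal.]

```shell
#!/bin/sh
# Compare a committed ABI baseline against a freshly built library.
check_abi()
{
    lib=$1 baseline=$2
    abi-dumper "$lib" -o "$lib.dump" -lver devel &&
        abi-compliance-checker -l "${lib##*/}" -old "$baseline" -new "$lib.dump"
}

if command -v abi-dumper >/dev/null 2>&1 &&
   command -v abi-compliance-checker >/dev/null 2>&1; then
    # Hypothetical in-tree paths, for illustration only.
    check_abi tools/libs/call/libxencall.so.1 docs/abi/libxencall.dump ||
        echo "ABI difference (or missing input) detected"
else
    echo "abi-dumper/abi-compliance-checker not installed; ABI check skipped"
fi
```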


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 16:08:28 2021
Date: Mon, 22 Feb 2021 17:08:13 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>
Subject: Re: [PATCH v2 5/8] x86/gdbsx: convert "user" to "guest" accesses
Message-ID: <YDPW7TJh+IctgNIq@Air-de-Roger>
References: <b466a19e-e547-3c7c-e39b-1a4c848a053a@suse.com>
 <d1a1b9eb-33b4-4d07-9465-189699f88323@suse.com>
 <YDPOOpC6/wGZaAkA@Air-de-Roger>
 <aa96faaf-478a-bfb0-1def-e79efb399668@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <aa96faaf-478a-bfb0-1def-e79efb399668@suse.com>
X-ClientProxiedBy: MR2P264CA0144.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:30::36) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 7f40fe13-5d5d-4625-92eb-08d8d74c11a3
X-MS-TrafficTypeDiagnostic: DM5PR03MB2713:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM5PR03MB27134E969C9633D77F79E9D88F819@DM5PR03MB2713.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7219;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 7f40fe13-5d5d-4625-92eb-08d8d74c11a3
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Feb 2021 16:08:18.8743
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: uQz9U2uSN4oL39A7yuwgxT/MsUYNUjUwTV9Feu5BQ0cn+tkLoPN3QaoNX/I2pvKoqCMeIjWACXoN80xSXndnbA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB2713
X-OriginatorOrg: citrix.com

On Mon, Feb 22, 2021 at 04:55:03PM +0100, Jan Beulich wrote:
> On 22.02.2021 16:31, Roger Pau Monné wrote:
> > On Wed, Feb 17, 2021 at 09:21:36AM +0100, Jan Beulich wrote:
> >> Using copy_{from,to}_user(), this code was assuming to be called only by
> >> PV guests. Use copy_{from,to}_guest() instead, transforming the incoming
> >> structure field into a guest handle (the field should really have been
> >> one in the first place). Also do not transform the debuggee address into
> >> a pointer.
> >>
> >> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> > 
> > Acked-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Thanks.
> 
> > One minor comment below that can be taken care of when committing I
> > think.
> > 
> >> ---
> >> v2: Re-base (bug fix side effect was taken care of already).
> >>
> >> --- a/xen/arch/x86/debug.c
> >> +++ b/xen/arch/x86/debug.c
> >> @@ -108,12 +108,11 @@ dbg_pv_va2mfn(dbgva_t vaddr, struct doma
> >>  }
> >>  
> >>  /* Returns: number of bytes remaining to be copied */
> >> -static unsigned int dbg_rw_guest_mem(struct domain *dp, void * __user gaddr,
> >> -                                     void * __user buf, unsigned int len,
> >> -                                     bool toaddr, uint64_t pgd3)
> >> +static unsigned int dbg_rw_guest_mem(struct domain *dp, unsigned long addr,
> >> +                                     XEN_GUEST_HANDLE_PARAM(void) buf,
> >> +                                     unsigned int len, bool toaddr,
> >> +                                     uint64_t pgd3)
> >>  {
> >> -    unsigned long addr = (unsigned long)gaddr;
> >> -
> >>      while ( len > 0 )
> >>      {
> >>          char *va;
> >> @@ -134,20 +133,18 @@ static unsigned int dbg_rw_guest_mem(str
> >>  
> >>          if ( toaddr )
> >>          {
> >> -            copy_from_user(va, buf, pagecnt);    /* va = buf */
> >> +            copy_from_guest(va, buf, pagecnt);
> >>              paging_mark_dirty(dp, mfn);
> >>          }
> >>          else
> >> -        {
> >> -            copy_to_user(buf, va, pagecnt);    /* buf = va */
> >> -        }
> >> +            copy_to_guest(buf, va, pagecnt);
> >>  
> >>          unmap_domain_page(va);
> >>          if ( !gfn_eq(gfn, INVALID_GFN) )
> >>              put_gfn(dp, gfn_x(gfn));
> >>  
> >>          addr += pagecnt;
> >> -        buf += pagecnt;
> >> +        guest_handle_add_offset(buf, pagecnt);
> >>          len -= pagecnt;
> >>      }
> >>  
> >> @@ -161,7 +158,7 @@ static unsigned int dbg_rw_guest_mem(str
> >>   * pgd3: value of init_mm.pgd[3] in guest. see above.
> >>   * Returns: number of bytes remaining to be copied.
> >>   */
> >> -unsigned int dbg_rw_mem(void * __user addr, void * __user buf,
> >> +unsigned int dbg_rw_mem(unsigned long gva, XEN_GUEST_HANDLE_PARAM(void) buf,
> >>                          unsigned int len, domid_t domid, bool toaddr,
> >>                          uint64_t pgd3)
> > 
> > You change the prototype below to make pgd3 unsigned long, so you
> > should change the type here also? (and likely in dbg_rw_guest_mem?)
> 
> I'd rather undo the change to the prototype, or else further
> changes would be needed for consistency. I'll take it that
> you're fine either way, and hence your ack stands.

Yes, that's also fine as long as it's consistent.

Roger.


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 16:11:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 16:11:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88241.165785 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEDnz-00062C-4j; Mon, 22 Feb 2021 16:11:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88241.165785; Mon, 22 Feb 2021 16:11:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEDnz-000625-1A; Mon, 22 Feb 2021 16:11:27 +0000
Received: by outflank-mailman (input) for mailman id 88241;
 Mon, 22 Feb 2021 16:11:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=w45/=HY=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lEDnx-00061u-V3
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 16:11:25 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2a77b65f-1577-49f2-ab87-26b019530023;
 Mon, 22 Feb 2021 16:11:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2a77b65f-1577-49f2-ab87-26b019530023
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614010284;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=XSOgJbWp7keu4hnHmMl3c8h51USvAUMex177l66Cujw=;
  b=Tp68u0BYSxqTUIkUUkueYfLCiRotp21KLjOQkxyvDrWgUT+PPiS/Fzf7
   xyCng3a79w0F0McpCeO81mortTp9rP2Fs3QdhpHJ5xd3tMwznGDadovr9
   nyDobZpVpy6gW0pTfSXQ4vGt3vyXdR9G76+v1r2+lWppCIBvlzeYI2AaG
   U=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: bTQJDP0O7BbF0wV/ZqN8XrfTYJj7Z4/rwhgnB4pG50DVvcP6NB1mnwFwhxZpE03Xnq/gBxGMsu
 AwhvWZV72rToOb/2L5+dcLUZ6LjQoYhcj7ZVF57mIAZOSDCnMV/qNKAe0/hfnE9GrTcdwKXVxq
 OZ8cL+Vio5Ekw4UbdJ3WSU9gncc0hs+NBrC0iXmayKHkGBgU01YZf3gEes0rc/RI6/elFYuXRi
 pyzFQ6YjL/R5/T7ajkaRzV278xiAT3ypreobVGLLHBcA2cmE8FUCllpMwlOcZGLucBKMFa5+Om
 KMI=
X-SBRS: 5.2
X-MesageID: 38121101
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,197,1610427600"; 
   d="scan'208";a="38121101"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Ajc8TUQxyynmuOpQnGnMadylbSlDjU729F/hhxrY+YSwsM/uhoJopuirGen7sD7qsiWSUz0MDG4FUd8WIij9MhsZxEtsGQyf9dVY1/y4GavVMzjUO5YIPiyyji+Q8B0KC0lwvU9Xig5bkfXEZbBAxpFkg+rstpJax7f+0kGhPZAOTdyDfQaksgEvzFUXlxuCAFyvoCkduzpen+aPT3tvUFrggm+AJFUv/nMDfaRwM4GZWK41lqCugfvwWbvIEJaTzgReCVo57YOam3F5T9EgYZ7wTK0Y/W+3Ohi6Aw4UR7i4R5AF3JYxswpBkYG1Ov/VxP6djTP+KrVoiC/ohWJtfA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=jbkpyKsGP2H59wfl58+O1sN6CTAlszEnW1TPSoDA6PI=;
 b=CEUBHOoyGVQxXKcHkz1Mo/YV0SnyWUQQtoLT6yiMZ822cubjYOKLZl/JR7LkBIpvusXlUeDb38OmE6iU9KaantfCTIPFC7M9RTexUe20dB1tbW4qdHHWVWmlNi8BlH9RgggUTYRVZuTf93/sRpQ2Q+hTUbx6qyV+V8ucVyXq4M9tgJCavOgzqIfOVtjtlrjjIJfvbdszF2Hi1cWa0yYVx1KEqTwEtYgYtTX/uf8+ValHQNgpN5OuGugQVAmp6EFLcDfv8KaJx1vhqabwngMAKMnK/iorb6Uy1EN41t+r4V1tVyJxOYfeDIyqM7HQ/GeFC7v2qIEUHEwEfB8lT29pwA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=jbkpyKsGP2H59wfl58+O1sN6CTAlszEnW1TPSoDA6PI=;
 b=GPngRZ/GX8I6hHF5uAyeahEg73UBgAhhMD11Q9eML8YYyP+Na3jZ8X5eIWiFh/X41ZJyXu+0LXEpOAVoH66LOzSbpsHa/sslepw5ztle6E5phC2WWWNDT7+vPwDWIR07yn9z/s0XeR5hZQ7gOlQ0oibvE1P1rKnv9g8RM4XJThY=
Date: Mon, 22 Feb 2021 17:11:15 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Ian Jackson
	<iwj@xenproject.org>
Subject: Re: [PATCH v2][4.15] x86: mirror compat argument translation area
 for 32-bit PV
Message-ID: <YDPXoxc6WZUYAQZW@Air-de-Roger>
References: <2357b6ef-a452-13c8-8656-e42642e80d99@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <2357b6ef-a452-13c8-8656-e42642e80d99@suse.com>
X-ClientProxiedBy: MR2P264CA0110.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:33::26) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 6a177ca1-9952-4160-1abf-08d8d74c7d6a
X-MS-TrafficTypeDiagnostic: DM6PR03MB4842:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB484239548E8CF0B2B12964E38F819@DM6PR03MB4842.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 6a177ca1-9952-4160-1abf-08d8d74c7d6a
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Feb 2021 16:11:19.7105
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: rL0dD9kHrJFUE/HfR0BFuiBK82plbYFfWG6IfB5AsUm/B/lPvD0dBCRChbH8314LxonagWvTXEA4PcBFv/diCw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4842
X-OriginatorOrg: citrix.com

On Mon, Feb 22, 2021 at 04:50:14PM +0100, Jan Beulich wrote:
> Now that we guard the entire Xen VA space against speculative abuse
> through hypervisor accesses to guest memory, the argument translation
> area's VA also needs to live outside this range, at least for 32-bit PV
> guests. To avoid extra is_hvm_*() conditionals, use the alternative VA
> uniformly.
> 
> While this could be conditionalized upon CONFIG_PV32 &&
> CONFIG_SPECULATIVE_HARDEN_GUEST_ACCESS, omitting such extra conditionals
> keeps the code more legible imo.
> 
> Fixes: 4dc181599142 ("x86/PV: harden guest memory accesses against speculative abuse")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Release-Acked-by: Ian Jackson <iwj@xenproject.org>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

One comment nit below.

> ---
> v2: Rename PERDOMAIN2_VIRT_START to PERDOMAIN_ALT_VIRT_START.
> 
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -1727,6 +1727,11 @@ void init_xen_l4_slots(l4_pgentry_t *l4t
>                 (ROOT_PAGETABLE_FIRST_XEN_SLOT + slots -
>                  l4_table_offset(XEN_VIRT_START)) * sizeof(*l4t));
>      }
> +
> +    /* Slot 511: Per-domain mappings mirror. */
> +    if ( !is_pv_64bit_domain(d) )
> +        l4t[l4_table_offset(PERDOMAIN_ALT_VIRT_START)] =
> +            l4e_from_page(d->arch.perdomain_l3_pg, __PAGE_HYPERVISOR_RW);
>  }
>  
>  bool fill_ro_mpt(mfn_t mfn)
> --- a/xen/include/asm-x86/config.h
> +++ b/xen/include/asm-x86/config.h
> @@ -159,11 +159,11 @@ extern unsigned char boot_edid_info[128]
>   *    1:1 direct mapping of all physical memory.
>  #endif
>   *  0xffff880000000000 - 0xffffffffffffffff [120TB,             PML4:272-511]
> - *    PV: Guest-defined use.
> + *    PV (64-bit): Guest-defined use.
>   *  0xffff880000000000 - 0xffffff7fffffffff [119.5TB,           PML4:272-510]
>   *    HVM/idle: continuation of 1:1 mapping
>   *  0xffffff8000000000 - 0xffffffffffffffff [512GB, 2^39 bytes  PML4:511]
> - *    HVM/idle: unused
> + *    HVM / 32-bit PV: Secondary per-domain mappings.

I would use "Mirrored per-domain mappings." to make it clear this is
intended to be backed by the same physical range as the original
per-domain mappings.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 16:25:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 16:25:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88256.165800 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEE1X-00074l-Kq; Mon, 22 Feb 2021 16:25:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88256.165800; Mon, 22 Feb 2021 16:25:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEE1X-00074e-Ht; Mon, 22 Feb 2021 16:25:27 +0000
Received: by outflank-mailman (input) for mailman id 88256;
 Mon, 22 Feb 2021 16:25:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6bXc=HY=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lEE1W-00074Z-IQ
 for xen-devel@lists.xen.org; Mon, 22 Feb 2021 16:25:26 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id df841af0-1fb3-48df-83cd-a605bcd6ee26;
 Mon, 22 Feb 2021 16:25:25 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A0EB0AC69;
 Mon, 22 Feb 2021 16:25:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: df841af0-1fb3-48df-83cd-a605bcd6ee26
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614011124; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=osJvbx1L06taSNZkiSDQJ0xWcSCB+zp9cpcXFxUyEMo=;
	b=bJ+Gi09CWniYmaYBkYCSiHjbmE/zF8iPt9XwDLMwipvAzcmB7sbva4Dilg0/qY6NAacs39
	8uwrZiKO7ZpLeLIRcbWgsGBH6igBwVqH8DM7LFkZ7HWRFG7ItjQd19tzaQ9/KDayPFQ1Tu
	C3yJYd2Mn7kXIs5cnk7/MyiHM47vfnY=
Subject: Re: Stable ABI checking (take 2)
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>, George Dunlap
 <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Juergen Gross <jgross@suse.com>, Xen-devel <xen-devel@lists.xen.org>
References: <68c93553-7db5-f43b-b3cd-b9112a8a57dc@citrix.com>
 <78eec55c-ac2c-467e-0a2c-9acb44eba850@suse.com>
 <a2acb45e-244a-2786-391d-c6ee7d267cfd@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <5a38fe0a-0aee-f3f3-f9ea-43300499c350@suse.com>
Date: Mon, 22 Feb 2021 17:25:24 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <a2acb45e-244a-2786-391d-c6ee7d267cfd@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 22.02.2021 17:00, Andrew Cooper wrote:
> On 22/02/2021 14:37, Jan Beulich wrote:
>> On 22.02.2021 15:03, Andrew Cooper wrote:
>>> Hello,
>>>
>>> Staging is now capable of writing out an ABI description when the
>>> appropriate tool (abi-dumper) is available.
>>>
>>> We now have two possible courses of action for ABI checking in builds.
>>>
>>> 1) Publish the ABI descriptions on xenbits, update all downstream test
>>> systems to invoke abi-compliance-checker manually.
>>>
>>> 2) Commit/update the ABI descriptions when RELEASE-$X.$Y.0 is tagged,
>>> update the main build to use abi-compliance-checker when available.
>>>
>>>
>>> Pros/Cons:
>>>
>>> The ABI descriptions claim to be sensitive to the toolchain in use.  I
>>> don't know how true this is in practice.
>>>
>>> Publishing on xenbits involves obtaining even more misc artefacts during
>>> the build, which is going to be firm -2 from downstreams.
>>>
>>> Committing the ABI descriptions lets abi checking work in developer
>>> builds (with suitable tools installed).  It also means we get checking
>>> "for free" in Gitlab CI and OSSTest without custom logic.
>>>
>>>
>>> Thoughts on which approach is better?  I'm leaning in favour of option 2
>>> because it allows for consumption by developers and test systems.
>> +1 for option 2, fwiw.
>>
>>> If we do go with route 2, I was thinking of adding a `make check`
>>> hierarchy.  Longer term, this can be used to queue up other unit tests
>>> which can be run from within the build tree.
>> Is there a reason the normal build process can't be made to fail
>> when verification fails? Besides "make check" typically meaning
>> invocation of a functional testsuite rather than (just) some
>> compatibility checking, I'd also be worried that no-one (likely
>> including me) would remember to separately run "make check" at
>> appropriate times.
> 
> As far as RPM is concerned, splitting the two is important, as %build
> and %check are explicitly separate steps.  I have no idea what the deb
> policy/organisation is here.
> 
> Merging some of check into build would be a layering violation, and even
> if we did so, where do you draw the line?

Well, building a shared object that won't load is as bad as building
a shared object that won't work because of violating expected
guarantees. The closest similarity I can think of right away would be
the linker error you ought to get when a to-be-exported symbol can't
be resolved. The line imo would be drawn between things detectable at
build time vs those only detectable by actually using the generated
binaries.

Jan
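For concreteness, the developer-side flow option 2 implies might look like the sketch below. The tool names and their flags (abi-dumper, abi-compliance-checker) are the ABI Laboratory utilities the thread refers to, but the paths, library names, and version strings are hypothetical; the script only prints the commands rather than executing them, so it works without the tools installed:

```shell
#!/bin/sh
# Hypothetical inputs: a freshly built library and a dump committed at the
# last RELEASE-$X.$Y.0 tag (per option 2 in the thread).
lib=tools/libs/ctrl/libxenctrl.so.4.15.0
old=docs/abi/libxenctrl-4.14.0.dump

# Print the commands instead of running them, so this sketch is inspectable
# even where abi-dumper / abi-compliance-checker are not installed.
run() { echo "+ $*"; }

run abi-dumper "$lib" -o libxenctrl-new.dump -lver 4.15.0
run abi-compliance-checker -l libxenctrl -old "$old" -new libxenctrl-new.dump
```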


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 16:37:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 16:37:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88260.165812 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEECq-0008Ep-Q3; Mon, 22 Feb 2021 16:37:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88260.165812; Mon, 22 Feb 2021 16:37:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEECq-0008Ei-Li; Mon, 22 Feb 2021 16:37:08 +0000
Received: by outflank-mailman (input) for mailman id 88260;
 Mon, 22 Feb 2021 16:37:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3rhT=HY=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lEECp-0008Ed-Fi
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 16:37:08 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f1952d20-1d2a-4830-a891-4ae2ee793bff;
 Mon, 22 Feb 2021 16:37:06 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f1952d20-1d2a-4830-a891-4ae2ee793bff
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614011826;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=Cq4RYPbzBecZqbvkdQqZfeatq2NE9ia0qzEPltOPq/A=;
  b=OsYXsPj4pWx5hakwsrC7v7+QKlIIJKUZ0Bujc3K85he92v2F4w3oQD9P
   sif5ZUXRDRXZMl46u/yf+D1ZmvXnVqn6H32NgTJRngYE+AWEX5BjwJq+o
   qjUNdAhTDipNTaXVs3KGkq+SAWLzrxSZ3tlUUNTuGYsLWe5HqmNKS5tn+
   0=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: Jggs+ESgMP3fPHTF3L32ZHyCfkkkZ6CdBchiNHwNJiy10eeFxaC4316lFVyJLBXkWLXCpKBBwM
 uT17aEQXtY5SdLuHVdv/pfEDMdtXK3MeFG85tHhfur3UbJgyprfuyAANdPzod4be0Mu5flvW6c
 Ui+TcGqriSsMfpaQL4QsOsMTHsdRFC+hkHpuoJ2YiRpN4oPk9mbbxy5lMCiqHK1KTv/h33A5Q6
 hjqK22mc1BY4vPs9Ywr6V/X0Ch7O6EEdCVmK9g4ySTSVZz73Iac+RxanOUuTVXJB4qqwJMSi1P
 ISU=
X-SBRS: 5.2
X-MesageID: 39133682
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,197,1610427600"; 
   d="scan'208";a="39133682"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=GfxlvWRwlQHr6IgDGhGjvYlomkzFXpRrgq4ganAoN61LASLKtdU5AjdypBFRgIIYRMMV//4nyJzN9ZSuC//aXE7TdBUviSYk2hbWx533wW/JDcOCRj68wGUBcd24RJNBvL7hxs/5mQuJ7OVEsqVGMegZyRb3teig6conQMrVEXGWOOzWRuI/IAruUGZqpKyF5o6dHy0XSKDHngVHahoAo5wYnvM5bzx4Z5GJFNIOjU8vkfP4HCgfHaWyPc29irYMRHPUalEkDzLl+ZBI4CBHjwLhrEcZFzsnOd9DDx/XCPQh/g8hyg/Kq7DhGoncFkFNqjTZY5NdfjxulZRUIW5orQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=igvHt88+y0PxeuZPTsv7W3aUySGlin14FYCjRGWRSMQ=;
 b=Gcxn2hOVoFkANg5BczBtG+19OwFLfNchJzgRJ2mUjskcg+njK3Fp+MISo15vZRNzvzB0r9M7y0hljWd2F7Q3LgUcMvl/0qrIoInA7aUlTUDWJKM1s76eznUMLnuoikQkvUQj5/Pm3v1MaLiwzyOn1vRPXax1UZVo9YA6VXaBy8znNRejRxB4Qc5R6m54bz1WzIwG6TCkCN2w3JmqD4Hr12U4zCSCvyF4atBDwwtLrqDhoeAAOY9aA9sFvU8l+PXRfrlcI1t09DXWOZsilJA1ZYWYFDtpBE8MYP1eQhdF+sH9sidE39lwBPlaDA253KNuRL0yfB2sr4W4Rg1NoDvCNA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=igvHt88+y0PxeuZPTsv7W3aUySGlin14FYCjRGWRSMQ=;
 b=fczfR6RpEb1xda5jQcH8eimGn7DL8g8jA7lPPdljcJ1shV1+WaStCeYNCa1F3G3EcMu9/ygQVl4o8IBq7sWWcI3zf8rqAOXUujPVnl/+xiq8tU3wAMMXoXz9x3z6u7EH30itkkzEII3hYAw5FqNuEijEhDiR8SijzfZhk4wGFDo=
Subject: Re: [PATCH] x86/EFI: suppress GNU ld 2.36'es creation of base relocs
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>
References: <6ce5b1a7-d7c2-c30c-ad78-233379ea130b@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <53c7a708-1664-0186-1fd6-1056f8e7839c@citrix.com>
Date: Mon, 22 Feb 2021 16:36:56 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
In-Reply-To: <6ce5b1a7-d7c2-c30c-ad78-233379ea130b@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0264.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:194::17) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: aa5cc219-7b92-433b-052a-08d8d75014d9
X-MS-TrafficTypeDiagnostic: BY5PR03MB5000:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BY5PR03MB500032B73F4464F7D6AC0275BA819@BY5PR03MB5000.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:1091;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: hHQSvmOZ35R13B6wWS01zR+KJVzAjsYhZQFiExin8m40uFP8j+8XwKbuaGDr1ZcftPexM7O4UZV2JpSkHFqfiHzM7sZbVjpdmSdIZwUF5a9rPZwpOVXkbA7eXPdkfVYNs1zTa7vJr2LY52sv98PF68O9BRUDB/nEb1vFDcMV0IeSHKiuGtsBnZcMwy/MRyBT4I8BZ7GiL++2AiLLMEUiLX8oP3jDsFsLMJT8jnZAZ5Vz/ItlyuxA6qaN0S+JjTiPcPaTyk7kpU89cdSeRjTUqdB7Cw9NB3Jfuchw186pp85iAILzsbkrn5PgAzR4a4HJwfdPVQhEMUKMzA39V2v9YzEVwQwWHvFPbmSaqVXbq0bEWZvKawquKg2GxyymJ+KcWudDGT2yaswCaFRr8QsWK3yfzSFxWTnIZcBb6hpMJfxPlVeMTbdrQMbq1Tspa0bVv/cLo+YDRzBmh9hqiZrAanYTYGHju3KDXsHsd8knPtPSA69LsLNjXiLwGOt4+LSgzaDcGmH1Mt7qDsqt2KsUUnMFrB0DY9KNKkyB44eDF2wVy5wB4/OcaRmZAuYrfAUjcDI5GFWzDs5zuB1LBZCI9UQLLEuAj9Wx7g/mXilLeRY=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(346002)(136003)(39860400002)(366004)(376002)(396003)(4326008)(5660300002)(83380400001)(6486002)(8936002)(186003)(8676002)(16526019)(956004)(31686004)(26005)(2616005)(86362001)(6666004)(107886003)(2906002)(54906003)(31696002)(36756003)(53546011)(66946007)(66476007)(66556008)(478600001)(16576012)(316002)(110136005)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?ZVdPeWJ4am02ZHR3YU5KWi9xWm9kYy96ZXFURms4UlVpNWFDQXVtTGpJU0Ni?=
 =?utf-8?B?TlNlbjRXaUpDZFNyYmtVL1ZUWHBnaEpNUHBGTklkVFozbW53cGtWL0RidUtq?=
 =?utf-8?B?ai9EeGd5WWE4SGREdy93RUtSajBWa0ptcCt4d0FHQTkrZXY3QjNNNVBLUkRQ?=
 =?utf-8?B?eStLTDV3UlR5RUdkbEl5UlRQbTBJdVplL3NOYjNuUnU1eXU1Y0xDWnNqS2gr?=
 =?utf-8?B?dUs3ZThkTEtXNlRXbTdwcDFYSzY3VjJ4TTF5b2IvYkJ2Zlg1K0JlWUx1YVQ3?=
 =?utf-8?B?Vzh1b3djNm5yUnQwcnRtdHMyS0JZanpGN3Zjd21MQVlmNFkxZW8yRC9UaWJq?=
 =?utf-8?B?TVMyU0Fyd0xpbnBqOWJUNjVsYUdtOHkzVThpSDNxbHZ3N3Q2RDBzYWVwK3Rs?=
 =?utf-8?B?SG1jNHhadFcweXVpSWdzQ1ZlV1dmNytNQ2dqcEFBSGZTMGpaRmJjUXhXYys3?=
 =?utf-8?B?dk0vR0w3eFU0OE04VU1YYUNpc2U0dWJSSGhkbVZmd1ZkL2tMY3A3R3IvZVVJ?=
 =?utf-8?B?YWtORUg1dVYrZEZqVFFZbkM2Uzd2bi85NEVDbngrRGxLWUd0a2RUc2pnN2cx?=
 =?utf-8?B?YXFTR0dEekNJVk93ZDhXTWVlMmRHejVSL1RWcEVjeUZjUExLK2Q4aXFRM3cx?=
 =?utf-8?B?OEtNejlwS3JuaDJPMTBYMU1ndjFjQkNURi9qTkxKWG1lOFNyTzByc1Q5MTJs?=
 =?utf-8?B?elNQVDBtdkYwK1RkRzQ0bi9EYU1OeGJZQ1Z6VHJCR3FhK21EWnBNODVBRXhw?=
 =?utf-8?B?bU5HYVg3bXAvRXUzTjR2TXZRMEw4aWI1WXcwR28wcExreEpydEdFMHhXb05y?=
 =?utf-8?B?TlFjUzV3ZGRMUGZjM3c2aHlMUCt1QkVteEFXalJXMnVYRVozY0NvWmdzSVlD?=
 =?utf-8?B?Z2pQUEV6bzRYS0h4Vk5yS1BoWHFUT3l6c2RJN2ptZ2QwNWZ1aStQLzdhODRY?=
 =?utf-8?B?a3lybWJGaDJ3MDZlUFdxNEl2YlpSd1FhNmJPS1hzSm53NFhSdGdBM1JzRkEz?=
 =?utf-8?B?OGdFK0pHa0h3R2JSMktxKzNJSlJQcHJNbUMrNDJjZnpvN1E5Nmk3dDFCbXFx?=
 =?utf-8?B?alo2MDdoQjl1QXU5Z0RPeHl0VE43bWIxWlJ5eHNyR1hFWjZUQXRhNzZLSXU1?=
 =?utf-8?B?WFBBUFdIQnYwc01XYitmSDM1Nm54SC9mWXBiaC8ycGY4K05URkRYaWNoL0w1?=
 =?utf-8?B?NjZ4WGFlQ0VrOUNjU2RiQTRaa0VLZitDUURYWVM1YTg0YTdTR0JQNGg5Vm8x?=
 =?utf-8?B?bThkdDVEU2lsNHJGVTBSZnZMTUdwb1dML2NkZm5Zbm1sVHkrQjcvZXVRcG03?=
 =?utf-8?B?bEZyYjFqN0FBRDJRalh0d2dFeW4zaUhEWkU2NDcveEl3OHJzS25zV3ZiSlJq?=
 =?utf-8?B?RFBGL2xXTUlTVWlBQWFrSGZGS0hJL3dVS01kcGxhWGVic0cwTDI3MVc0NUxt?=
 =?utf-8?B?eGZMN2VOZUx4Y0ExbW9tcmVlZkRUOHNjNzBVeHJsMU1pUVFxRzJwZXBSOUdL?=
 =?utf-8?B?UDBTcThPU1dISVU3WGh5b0xDb2RHOGpRVTQ2NkswOTllY0JhWU1rcndLRE43?=
 =?utf-8?B?TjB1WnRmS0o0V2tySUNWOHhySDRyMDZFc1FsWFVIYjQ4K05RMVpQeWVRUlBD?=
 =?utf-8?B?Q3h3YnhDL05pUW0xVlBDZjJHcnNwRUpZbm4yRFF2RnFaNUFZcWlwWWcweG9o?=
 =?utf-8?B?bnc1M2dsMCtFOEZQcFA5RkJ0Um5pRmtyc1J3c290REFjNVdYY3RUbnlzT1VO?=
 =?utf-8?Q?TO746G9DpUu7fHc2IDQj/md4+16PClT43Bys7na?=
X-MS-Exchange-CrossTenant-Network-Message-Id: aa5cc219-7b92-433b-052a-08d8d75014d9
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Feb 2021 16:37:02.3235
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Y59wtwKNED8HCFSw1TVxieDQPWtppGFnLPH2NUYicyA1fEIsQvijnaAAivpoTkcUuhYjrPhW4YVlF+h2QsbhanRjZpsIjJmBqPLqYuzceXg=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR03MB5000
X-OriginatorOrg: citrix.com

On 19/02/2021 08:09, Jan Beulich wrote:
> All of a sudden ld creates base relocations itself, for PE
> executables - as a result we now have two of them for every entity to
> be relocated. While we will likely want to use this down the road, it
> doesn't work quite right yet in corner cases, so rather than suppressing
> our own way of creating the relocations we need to tell ld to avoid
> doing so.
>
> Probe whether --disable-reloc-section (which was introduced by the same
> commit making relocation generation the default) is recognized by ld's PE
> emulation, and use the option if so. (To limit redundancy, move the first
> part of setting EFI_LDFLAGS earlier, and use it already while probing.)
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
> --- a/xen/arch/x86/Makefile
> +++ b/xen/arch/x86/Makefile
> @@ -123,8 +123,13 @@ ifneq ($(efi-y),)
>  # Check if the compiler supports the MS ABI.
>  export XEN_BUILD_EFI := $(shell $(CC) $(XEN_CFLAGS) -c efi/check.c -o efi/check.o 2>/dev/null && echo y)
>  # Check if the linker supports PE.
> -XEN_BUILD_PE := $(if $(XEN_BUILD_EFI),$(shell $(LD) -mi386pep --subsystem=10 -S -o efi/check.efi efi/check.o 2>/dev/null && echo y))
> +EFI_LDFLAGS = $(patsubst -m%,-mi386pep,$(XEN_LDFLAGS)) --subsystem=10 --strip-debug
> +XEN_BUILD_PE := $(if $(XEN_BUILD_EFI),$(shell $(LD) $(EFI_LDFLAGS) -o efi/check.efi efi/check.o 2>/dev/null && echo y))
>  CFLAGS-$(XEN_BUILD_EFI) += -DXEN_BUILD_EFI
> +# Check if the linker produces fixups in PE by default (we need to disable it doing so for now).
> +XEN_NO_PE_FIXUPS := $(if $(XEN_BUILD_EFI), \
> +                         $(shell $(LD) $(EFI_LDFLAGS) --disable-reloc-section -o efi/check.efi efi/check.o 2>/dev/null && \
> +                                 echo --disable-reloc-section))

Why does --strip-debug move?

What's wrong with $(call ld-option ...)?  Actually, lots of this block
of code looks to be open-coding of standard constructs.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 16:48:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 16:48:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88262.165823 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEENE-0000q1-Pw; Mon, 22 Feb 2021 16:47:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88262.165823; Mon, 22 Feb 2021 16:47:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEENE-0000pu-N1; Mon, 22 Feb 2021 16:47:52 +0000
Received: by outflank-mailman (input) for mailman id 88262;
 Mon, 22 Feb 2021 16:47:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3rhT=HY=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lEENC-0000pp-Vh
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 16:47:51 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id be596848-b707-41a8-a15a-87d03f989a18;
 Mon, 22 Feb 2021 16:47:49 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: be596848-b707-41a8-a15a-87d03f989a18
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614012469;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=70Sst0AB2Da/emwY1ASdEevkGhbzuWUaowz6K55cHKk=;
  b=f5OkSOeIlGPAJzV9SjI/DWTuWm2dd7QTKgmR40gXQdW1N4uz0UEKcz0Y
   dIIe5EZirhInhUxVCvi84sz00rCPvnR5FtxbYr22IMqEHxRG7Q2AmPTq4
   vIVIqAup7iZgc/GJtRAgaINlp8mYo4IsjHFq4A1nu636anShp/D3h5ry2
   s=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: dU2ipOSJKeeWvGzD4D/f5b3G7ZNwAuB75RxitbE2hWi8YRfUKekA4Rkyg5s0xnmh22cUF1JKNS
 SQDNanqkr1Y7FLPImNH+aHL+OtXqIFZTJtL7XovMNUleeZcRLsuKjroHcgQ8hKikbDTWAAlMim
 z8O0RO4BpWEggdu02XuZa1E00agX6SGrlHURPPYYysqyXftjnGATkH9nchYLXp1xAfTNxjY8Q7
 hpAXqf6yfeqeGIrAnvOAD7UiIJ8SZHpMUtvXnmzxxNqv7dGrllvX8zkmk8fJqnxUWoYksqE3Cc
 LDU=
X-SBRS: 5.2
X-MesageID: 37752078
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,197,1610427600"; 
   d="scan'208";a="37752078"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Fwi+1G9YCfb4x5t+I8mjYX2IPzdx1xg7NoOViSEErhZxAb0V1gJzd3E8Fe+qF45mG44VWdEXD1bqpttqIJo43mO2fXx0SjYZPiVO5XsRUFy+iv3Dg0hWHdBIDlraiLqWZh99s3OXAUJ3CyjZJivVcPo3afuPd15FC/kqFDLvRsA8lYeshA1yrekNSaCj9dxXvmpymqhOkfXk0FzNmJGnfhx5PGWBoWaM+FbXC3R5B3kK19ytemCFnJZAfrbnpqW/HGqNL6hkxjKvPqULtT+TqJsgzh5MzNJLHswGNwZ0wTQS5pbCHFC1PTsg+DDlCTUfbAOWdP6DDl6jXl+pUJjaYA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=CgvN37ACd1a4UDwjqHLdcvk+kqIiDrpjX4DXrjugPZA=;
 b=HG6hxGPsXaG8KBAdPAo6aIG/u3Ir65j7YC1GbOnKNG2B5Pqw6074jLxPOflmBsP62xLwg1TVLl6zxwetWNtF9toelZyS46/qndwkEyEb5HNZk9HyAyCGq2bpaheWJB0uFFJQre8v654fg/zYglgOlxyOOFJNGYA7RBSlzmAmDRorwJSLOLxoI31kup9ahzBeGa5t8ybyegnRArtfryFG0z0F1WuVm0fS0WIBz/Cgd+KHkOUzdS87bwYUwPuDl0oNCW6CG6va0oT5c2elxjOIcbsoq4Mr2jRyQpG95oNxI2RTFUlGTEKfgvS1XWp7/QdNjQI887hub6RUP3U5qw3Qow==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=CgvN37ACd1a4UDwjqHLdcvk+kqIiDrpjX4DXrjugPZA=;
 b=flbujg3KzogSg2CPY6hWJTG2ry51337oUvCrtsK15QjkJBnhBaf0dI864Sky8J62swCj56IJIKLQ5XjRwmHpmV5ouHYfdXWYVKRSOeBdX5flsYhGM1JNn+ev8FbeAprBhEZXRfV3pBDNnf6DyqUeBIWud6AKEYQz21I8h1VJzSQ=
Subject: Re: [PATCH][4.15] x86: mirror compat argument translation area for
 32-bit PV
To: Jan Beulich <jbeulich@suse.com>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, Ian Jackson <iwj@xenproject.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <bdedf018-b6c4-d0da-fb4b-8cf2d048c3b1@suse.com>
 <9eade40b-bd95-b850-2dec-f7def66c3c7b@citrix.com>
 <77a36366-9157-c3d3-b1f0-211f4fc39a93@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <60a31ec8-6844-2149-1a04-7e757d1d2dd3@citrix.com>
Date: Mon, 22 Feb 2021 16:47:38 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
In-Reply-To: <77a36366-9157-c3d3-b1f0-211f4fc39a93@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0179.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:18a::22) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 85f9cd8e-0c40-4053-d371-08d8d7519445
X-MS-TrafficTypeDiagnostic: BYAPR03MB4485:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB4485B7239CBCA50455E643A4BA819@BYAPR03MB4485.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: PfPevVfnss+qLa0uRnZZC28eXh+I5ZCbU03JJ9yGl7WQvoZ4xV3BkveV9JDoN7sWiwsxLYkicLJmtT2EJbveJkijpkBSVu7K/PRTN5H4XFxRn0byZxkxb6LfBJ2tV/+TtrXCWSLj5ZpblZxsjM5zxQcS+VS9Ohqg2QPNJ20SQja5EqJQu/6ecku3nnuk5vZhzLVcMRpbjooOGwoR32+05BKUQ/wJRayb/x+j63s4oazYZ35L1yXUuLqgMKXwF7wfhADMwlqL5qZT1+9r89/T+D1Kz6UXxf04+ns+8b0tOGSCf5UaE5y/3v0N0+38zIytANkiFG5Nm+7X6gSi1hY3aLo4p9ae29uodQrez4VqSOw8ujWk30w7y4i4NcNPbmAIgtwMUHU/2Oxp6gQhxzEveXETyCAKDyIsHvEtb8XpS8swfiLQJxpp8g0RU4V8ptarmpNzS7JQpB2YHTSA+c5lCX8e0ua8kO7PyjsrcyVSMwUXspZS6+1lJSUxqu/ALH5rl3cWBthxno08hLu/9SFPYflp/2Y+InS8ll9x6nG9bONYGnzlieiVVLyv8LmVxgvaWi+6bB0DXoufBpw6zhJzojNKS9Uaj/JR6KSGu+uxLCE=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(366004)(346002)(376002)(396003)(136003)(39860400002)(6916009)(16576012)(2616005)(36756003)(6666004)(8676002)(66946007)(186003)(4326008)(2906002)(16526019)(83380400001)(478600001)(53546011)(316002)(31696002)(54906003)(26005)(6486002)(31686004)(86362001)(66476007)(66556008)(956004)(5660300002)(8936002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?Q0lUNGNIaUJ5M0lZTG53eFlpNURCend2U0h3YXFKZEFtZm1Fb2xKNXU1MUl4?=
 =?utf-8?B?WUNZSnQvYTVyc09MMVo0RXlRMnhRaDY2cWFUMnVGU2YyUk0veityTjU4T2Nk?=
 =?utf-8?B?d29BVVRvZWRwVy9reWpIbnFYU0lWNXduZVU1ZlpNTllDSWNVTnJOWDlqYnIx?=
 =?utf-8?B?dU1HMnIxbUcyd0g3Z3BlaURqeEFSTHRMeEtLQzBoTmUza25Sek1QdENEMWpR?=
 =?utf-8?B?bWFkMUVRakJackN6ZFZxdXJoQnM5b0tlY1VSU3FDY09hdGhDVmRQeGJDczRG?=
 =?utf-8?B?RFZ0NklqMEhGcHFNd0UxNmVqdGVhTXRpVHQ4SXNnVW5YaXZubTlOSlhvalg2?=
 =?utf-8?B?Wk9HVVdwOUs2RnhxT2J0ZFR6MHBSRitxeGhxcnd5VnRwTUlIY3RtL0ptRXRZ?=
 =?utf-8?B?VG43YlVLY0NIdFhiQlhkc0d3N2ZRRStRbHBuWUlqb3hhZW1ydjA2VGU0K3Zm?=
 =?utf-8?B?RjZjV3dEVEVNeFAyeVBsYjRsL2R6M0dObjRTMWVuMjJYNG1ybU1JNFI3YnJ4?=
 =?utf-8?B?QkxYa21tcVNMTUhWR1NGVkpWTFZKWFIvVWJkQUxqS1ZsazFvSWRoOVNnRmNj?=
 =?utf-8?B?NDRRb0o5UXZJWE1KZFArS0lmVUdpcjJhRkM4UmhNZ242cHB3OFlQdXFHUlZ5?=
 =?utf-8?B?M3hmSmt4QTVCMUs4bWFYOHVIZklTaGsramFRRmROTDVlWUpQZEQ3K0RIdDZR?=
 =?utf-8?B?bGVEZjFMM1U0R3U1QzZOcGZkNURUTzBYbXFZM1M5ZlJVS1M3bkJWUzlrRDZS?=
 =?utf-8?B?Z1JENklGbVdnRG9udkRUQVRKK3RKNmhLaE1DR3pYQWFkRjV1MXNadDFUUlNu?=
 =?utf-8?B?KzJyRUtaSHFKbjEyd25tT3FKRmhveXVodVg3QnIvUll1d0pnN0tmUkN4bml0?=
 =?utf-8?B?VTZKbVdSbWZaOVlVbXVPZ2VSZHVNZEZLMlRyb2oyajBqWXFLSm1VZFZ3N2NR?=
 =?utf-8?B?T21BSDhibW0vWTF5KzFSUUJYb1ptSk5kLzZ0WElJTlZRSG9mRU1peDEvU0Fi?=
 =?utf-8?B?QkZqV2pyTkRkekk2SUZ0VS90eFVYVmtubjgxdisvVTdkVHlDdFhVcGt5Tlhk?=
 =?utf-8?B?ZkpMRjU3ZmtoNVZVYzRvbHhMaUN3WC9oU1RSNVR4aG1zS014VUtuN0Fjdlcx?=
 =?utf-8?B?a2Vod2JOMWd0amRoYWthNjhwZTVudkU0SlRMa1JvQjlGMDdkNFBGT2NxQUs3?=
 =?utf-8?B?OGt4ZXYxdnZ3V1NTUXBYSmFZdk0vWWNnVTlXYUFwbjVpUzNCcDVzaXpoVWRQ?=
 =?utf-8?B?TjJjV2kvanBwK2lkeENvOENkOVdrVWQ2R1pjc1lXTGpPeldYZmF6bkI2cVdn?=
 =?utf-8?B?TWpjNS9ud1Nnak1WK2MxN3RLbFZPc1ZMMGQxWHZVWEpYRkxuSHAvNnErMGZw?=
 =?utf-8?B?ZlRJZFYvS1pBcTNpZmZFU1kvdU13OGZUajdHUDNNemg1MnFzcnN2Und0Zmti?=
 =?utf-8?B?Vm5LRjljQStUUjAvVWhPem4rM2M1N3ZnMVFtUzU2U2VpTlVhRDQrcE45Nzg5?=
 =?utf-8?B?MDVpR1RGQ3FVL0Q2L2Q3c3VSUXlObzBZTVloczN3cFBna0lLZ3FTajFDVXph?=
 =?utf-8?B?VWpqc3ZITEwzdFJCeVZVTGJVM2NpVDBLZG84cmVMVU5XcTNXaHJiTTFwaHlv?=
 =?utf-8?B?dzdZYWxBZXVhMXQ3cENpZlpYeWlkaTNGUlZ6a3BIZlRwck9jM1FPREZ3NW96?=
 =?utf-8?B?dE5mb1hWZTZ4UWlUbVlmZTBzTWloNlU1cCs4Z000ZC9UVWJTNDgycDBla1Ji?=
 =?utf-8?Q?b9ntEMHS5SMdM/axX13vtCIUNygDAtEqAtZvjaH?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 85f9cd8e-0c40-4053-d371-08d8d7519445
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Feb 2021 16:47:45.4171
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: mbwCZZhMlRfslZb7YqVoTL8KjJZi85ZzDp2FN/iymvIHiPM/lfotgJbtZWw/vXZpS0OoKFIPTofe68pWcpAq0nX6MZDC0phkM8EkLywh4UU=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4485
X-OriginatorOrg: citrix.com

On 22/02/2021 14:22, Jan Beulich wrote:
> On 22.02.2021 15:14, Andrew Cooper wrote:
>> On 22/02/2021 10:27, Jan Beulich wrote:
>>> Now that we guard the entire Xen VA space against speculative abuse
>>> through hypervisor accesses to guest memory, the argument translation
>>> area's VA also needs to live outside this range, at least for 32-bit PV
>>> guests. To avoid extra is_hvm_*() conditionals, use the alternative VA
>>> uniformly.
>>>
>>> While this could be conditionalized upon CONFIG_PV32 &&
>>> CONFIG_SPECULATIVE_HARDEN_GUEST_ACCESS, omitting such extra conditionals
>>> keeps the code more legible imo.
>>>
>>> Fixes: 4dc181599142 ("x86/PV: harden guest memory accesses against speculative abuse")
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>
>>> --- a/xen/arch/x86/mm.c
>>> +++ b/xen/arch/x86/mm.c
>>> @@ -1727,6 +1727,11 @@ void init_xen_l4_slots(l4_pgentry_t *l4t
>>>                 (ROOT_PAGETABLE_FIRST_XEN_SLOT + slots -
>>>                  l4_table_offset(XEN_VIRT_START)) * sizeof(*l4t));
>>>      }
>>> +
>>> +    /* Slot 511: Per-domain mappings mirror. */
>>> +    if ( !is_pv_64bit_domain(d) )
>>> +        l4t[l4_table_offset(PERDOMAIN2_VIRT_START)] =
>>> +            l4e_from_page(d->arch.perdomain_l3_pg, __PAGE_HYPERVISOR_RW);
>> This virtual address is inside the extended directmap.
> No. That one covers only the range excluding the last L4 slot.
>
>>  You're going to
>> need to rearrange more things than just this, to make it safe.
> I specifically picked that entry because I don't think further
> arrangements are needed.

map_domain_page() will blindly hand out this virtual address if an
appropriate mfn is passed, because there are no suitability checks.

The error handling isn't great, but at least any attempt to use that
pointer would fault, which is now no longer the case.

LA57 machines can have RAM or NVDIMMs in a range which will tickle this
bug.  In fact, they can have MFNs which would wrap around 0 into guest
space.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 16:49:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 16:49:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88265.165836 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEEOu-0000yQ-7g; Mon, 22 Feb 2021 16:49:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88265.165836; Mon, 22 Feb 2021 16:49:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEEOu-0000yJ-2r; Mon, 22 Feb 2021 16:49:36 +0000
Received: by outflank-mailman (input) for mailman id 88265;
 Mon, 22 Feb 2021 16:49:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6bXc=HY=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lEEOs-0000yE-Ug
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 16:49:34 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f6a807ee-033e-4e34-b1cc-7c92ab1dc21f;
 Mon, 22 Feb 2021 16:49:33 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9A221B121;
 Mon, 22 Feb 2021 16:49:32 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f6a807ee-033e-4e34-b1cc-7c92ab1dc21f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614012572; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=2PSFRd8POhtHOTw+wsgDyhdDxjvzvtWFzAWOIYusJDY=;
	b=XNU0twYTsC7Dbeq1yFhRQYT2BkCCHNjgJGLiFtElyo5pRO4pWrrJ0WCQoLPWaKX86GkYwP
	2qJaDBZ66l4/XcpftJyiVTHWshNiXURJuotrVVsQejNNu3qIQJig9+neqAPJB5TYcdFUNy
	9NN1pvtKJcHVH/6mk8QZokRtqu34a6Q=
Subject: Re: Domain reference counting breakage
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Volodymyr Babchuk
 <Volodymyr_Babchuk@epam.com>, Ian Jackson <iwj@xenproject.org>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20210222152617.16382-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <90be630d-dccf-f63f-8419-dc583204b3a9@suse.com>
Date: Mon, 22 Feb 2021 17:49:31 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210222152617.16382-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 22.02.2021 16:26, Andrew Cooper wrote:
> At the moment, attempting to create an HVM guest with max_gnttab_frames of 0
> causes Xen to explode on the:
> 
>   BUG_ON(atomic_read(&d->refcnt) != DOMAIN_DESTROYED);
> 
> in _domain_destroy().  Instrumenting Xen a little more to highlight where the
> modifications to d->refcnt occur:
> 
>   (d6) --- Xen Test Framework ---
>   (d6) Environment: PV 64bit (Long mode 4 levels)
>   (d6) Testing domain create:
>   (d6) Testing x86 PVH Shadow
>   (d6) (XEN) d0v0 Hit #DB in Xen context: e008:ffff82d0402046b5 [domain_create+0x1c3/0x7f1], stk e010:ffff83003fea7d58, dr6 ffff0ff1
>   (d6) (XEN) d0v0 Hit #DB in Xen context: e008:ffff82d040321b11 [share_xen_page_with_guest+0x175/0x190], stk e010:ffff83003fea7ce8, dr6 ffff0ff1
>   (d6) (XEN) d0v0 Hit #DB in Xen context: e008:ffff82d04022595b [assign_pages+0x223/0x2b7], stk e010:ffff83003fea7c68, dr6 ffff0ff1
>   (d6) (XEN) grant_table.c:1934: Bad grant table sizes: grant 0, maptrack 0
>   (d6) (XEN) *** d1 ref 3
>   (d6) (XEN) d0v0 Hit #DB in Xen context: e008:ffff82d0402048bc [domain_create+0x3ca/0x7f1], stk e010:ffff83003fea7d58, dr6 ffff0ff1
>   (d6) (XEN) d0v0 Hit #DB in Xen context: e008:ffff82d040225e11 [free_domheap_pages+0x422/0x44a], stk e010:ffff83003fea7c38, dr6 ffff0ff1
>   (d6) (XEN) Xen BUG at domain.c:450
>   (d6) (XEN) ----[ Xen-4.15-unstable  x86_64  debug=y  Not tainted ]----
>   (d6) (XEN) CPU:    0
>   (d6) (XEN) RIP:    e008:[<ffff82d040204366>] common/domain.c#_domain_destroy+0x69/0x6b
> 
> the problem becomes apparent.
> 
> First of all, there is a reference count leak - share_xen_page_with_guest()'s
> reference isn't freed anywhere.
> 
> However, the main problem is the 4th #DB above is this atomic_set()
> 
>   d->is_dying = DOMDYING_dead;
>   if ( hardware_domain == d )
>       hardware_domain = old_hwdom;
>   printk("*** %pd ref %d\n", d, atomic_read(&d->refcnt));
>   atomic_set(&d->refcnt, DOMAIN_DESTROYED);
> 
> in the domain_create() error path, which happens before free_domheap_pages()
> drops the ref acquired by assign_pages(), and destroys still-relevant information
> pertaining to the guest.

I think the original idea was that by the time of the atomic_set()
all operations potentially altering the refcount are done. This
then allowed calling free_xenheap_pages() on e.g. the shared info
page without regard to whether the page's reference (installed by
share_xen_page_with_guest()) was actually dropped (i.e.
regardless of whether it's the domain create error path or proper
domain cleanup). With this assumption, no actual leak of anything
would occur, but of course this isn't the "natural" way of how
things should get cleaned up.

According to this original model, free_domheap_pages() may not be
called anymore past that point (for domain owned pages, which
really means put_page() then; anonymous pages are of course fine
to be freed late).

> The best option is probably to use atomic_sub() to subtract (DOMAIN_DESTROYED
> + 1) from the current refcount, which preserves the extra refs taken by
> share_xen_page_with_guest() and assign_pages() until they can be freed
> appropriately.

First of all - why DOMAIN_DESTROYED+1? There's no extra reference
you ought to be dropping here. Or else what's the counterpart of
acquiring the respective reference?

And then of course this means Xen heap pages cannot be cleaned up
anymore by merely calling free_xenheap_pages() - to get rid of
the associated reference, all of them would need to undergo
put_page_alloc_ref(), which in turn requires obtaining an extra
reference, which in turn introduces another of these ugly
theoretical error cases (because get_page() can in principle fail).

Therefore I wouldn't outright discard the option of sticking to
the original model. It would then be better to describe it properly
somewhere, and we would likely want to put some check in place to
make sure such a put_page() sitting too late on the cleanup path
can't go unnoticed anymore (which may be difficult to arrange for).

Jan


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 16:58:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 16:58:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88272.165848 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEEX5-0001vi-6e; Mon, 22 Feb 2021 16:58:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88272.165848; Mon, 22 Feb 2021 16:58:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEEX5-0001vb-1t; Mon, 22 Feb 2021 16:58:03 +0000
Received: by outflank-mailman (input) for mailman id 88272;
 Mon, 22 Feb 2021 16:58:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEEX4-0001vT-A3; Mon, 22 Feb 2021 16:58:02 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEEX3-0004em-WC; Mon, 22 Feb 2021 16:58:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEEX3-0004Sv-ND; Mon, 22 Feb 2021 16:58:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lEEX3-0003Th-Mj; Mon, 22 Feb 2021 16:58:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=vrqXNND1zYmUoqdjbWG9xxDCvrkBvLh40XRvy7u6HC0=; b=UPAZdTJIPPC727nUV8tglXu4Qf
	YJx1XoTglIqF8x4F74kBXANRQtBh/+FVAZlOZcmL6n5+WGtiWogsBaZjIovSogUsz6kaUj8YgCIqE
	FY50N+BNGMWxZp3CJYvsdsL6NKshe79j9Re64sovdnHXlI65/oN5BsXb0hbmfUGYAGKs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159546-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 159546: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=078400ee15e7b250e4dfafd840c2e0c19835e16b
X-Osstest-Versions-That:
    ovmf=44ae214591e58af468eacb7b873eaa0bc187c4fa
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 22 Feb 2021 16:58:01 +0000

flight 159546 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159546/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 078400ee15e7b250e4dfafd840c2e0c19835e16b
baseline version:
 ovmf                 44ae214591e58af468eacb7b873eaa0bc187c4fa

Last test of basis   159493  2021-02-20 17:09:42 Z    1 days
Testing same since   159546  2021-02-22 10:09:51 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Yunhua Feng <fengyunhua@byosoft.com.cn>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   44ae214591..078400ee15  078400ee15e7b250e4dfafd840c2e0c19835e16b -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 17:12:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 17:12:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88283.165863 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEEkk-0003tF-F5; Mon, 22 Feb 2021 17:12:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88283.165863; Mon, 22 Feb 2021 17:12:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEEkk-0003t8-Bc; Mon, 22 Feb 2021 17:12:10 +0000
Received: by outflank-mailman (input) for mailman id 88283;
 Mon, 22 Feb 2021 17:12:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3rhT=HY=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lEEkj-0003t3-M8
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 17:12:09 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3dc3b59e-bf3d-4eb5-b76c-828cbd628a9b;
 Mon, 22 Feb 2021 17:12:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3dc3b59e-bf3d-4eb5-b76c-828cbd628a9b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614013928;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=ds1cMi/eMDqkf5hlS1IB0OeIlq9uralr44dx+zsfPQE=;
  b=KYHrkKS8EhwtFa05NJW2DZWxXj/HVkcISqNwPbagyrNUDjwH/6dnEbbn
   drUiVosFYW4iKxZRjrNK3Urrl59n3oZvoITc9plFXLZhyDCauTfCxIr44
   TpzUPhXtrPoGnpa8TVQr+8utdxhZQcqjM4ENH3lZ9ZL/FA8kMEyBfxtVN
   w=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 37688133
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,197,1610427600"; 
   d="scan'208";a="37688133"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ialnecX72XViOO3FJWCixyflgi2q24tygirmwPzTX48wzqpipdIw/CVH8DQ2igUCsEJtrtr8cf4uUAXhCseFebKiMRoVd4C/EgcxVtk5qHyGmSnjEUMdkqEMAuYY0POs2/q4weyOeVLeRNJV3SZ+H5sn7iTIuouSyB6nbfmeV4khV7XVqatJrv0NgZUyeBhda1H+QpkQlp7ALnMBc+MxqGSTYuNsCXymjevyHzZ3WYz+xfcgu3fN92TjCg4GnqzCVYQCI88rFo/ptkUGXeoH7LOmLMZboNx6Knzwey6IWdJlnvx8jcPzJU3mWXazxWstjC2u8nU4Ref9qHxgd41yqA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ekn/jOu948u5aKc9artCXz4sJqW/I5i8Z8FdNCAhuwA=;
 b=LzD1t1isVsUzgyyeuXQAej/h1tv+/DNMR11LwAlsW8lZQXKKD0Tpwr6zbQEG53aEFI71CVJR/Gin4garvdizY1x6sBhmfPzGMuzFT4/D40BhB8UyBZZStSje+gz+9mbWdP+RFw3YKL4evRcM3pno/0pgIcO2DM+KeApOid2yY8yS328JHcpHqt3WtJhZ3CguxQRbOKXXYCrW8Se35CnPuXZXQnRFhI7NQL0ww8Nc2XOCu91UQN2eQ75gtBSNEhW14Z9lxsRTD0q8i8Iv1Ayj+uTjNH3ognhkFaFz5qlCjjMwZ3t5kYjwjedqlyHLLldX3UR9p05MKQiIf69NMgt8Qg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ekn/jOu948u5aKc9artCXz4sJqW/I5i8Z8FdNCAhuwA=;
 b=tVii/anEc87Bp1ECSOBgX3UM4C19f4BGby6HZtizN1rwJqc//wX1YC5w11JeSY5BfoRJOXA+LTgmGhmmtgsPkwgqX4NwkSptZxXk4EoPV1StHpaRUvuYYkqMx5N+0bfHKMnO75PwJmihZRymN/F9H6uM9R5iKpOFOlOmy7Ni3tU=
Subject: Re: Domain reference counting breakage
To: Jan Beulich <jbeulich@suse.com>
CC: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Ian Jackson
	<iwj@xenproject.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210222152617.16382-1-andrew.cooper3@citrix.com>
 <90be630d-dccf-f63f-8419-dc583204b3a9@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <f32e70fc-ad67-836e-5181-5d9d2dd9cd7a@citrix.com>
Date: Mon, 22 Feb 2021 17:11:57 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
In-Reply-To: <90be630d-dccf-f63f-8419-dc583204b3a9@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0389.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:18f::16) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 9a901f65-5f9c-4763-fa69-08d8d754f9ed
X-MS-TrafficTypeDiagnostic: BY5PR03MB4965:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BY5PR03MB49658BCD636EF6271C30BA76BA819@BY5PR03MB4965.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:6430;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 9a901f65-5f9c-4763-fa69-08d8d754f9ed
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Feb 2021 17:12:04.4881
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: BuU42QyKatDgIeDie2usS2etKhk9N3tg6o/umL14NA7oyrn98XwKyqQexBFRu5CRBcl6BF3ou45WIYwDOhemanMhyhNBR1HmHzHWERasEWw=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR03MB4965
X-OriginatorOrg: citrix.com

On 22/02/2021 16:49, Jan Beulich wrote:
> On 22.02.2021 16:26, Andrew Cooper wrote:
>> At the moment, attempting to create an HVM guest with max_gnttab_frames of 0
>> causes Xen to explode on the:
>>
>>   BUG_ON(atomic_read(&d->refcnt) != DOMAIN_DESTROYED);
>>
>> in _domain_destroy().  Instrumenting Xen a little more to highlight where the
>> modifications to d->refcnt occur:
>>
>>   (d6) --- Xen Test Framework ---
>>   (d6) Environment: PV 64bit (Long mode 4 levels)
>>   (d6) Testing domain create:
>>   (d6) Testing x86 PVH Shadow
>>   (d6) (XEN) d0v0 Hit #DB in Xen context: e008:ffff82d0402046b5 [domain_create+0x1c3/0x7f1], stk e010:ffff83003fea7d58, dr6 ffff0ff1
>>   (d6) (XEN) d0v0 Hit #DB in Xen context: e008:ffff82d040321b11 [share_xen_page_with_guest+0x175/0x190], stk e010:ffff83003fea7ce8, dr6 ffff0ff1
>>   (d6) (XEN) d0v0 Hit #DB in Xen context: e008:ffff82d04022595b [assign_pages+0x223/0x2b7], stk e010:ffff83003fea7c68, dr6 ffff0ff1
>>   (d6) (XEN) grant_table.c:1934: Bad grant table sizes: grant 0, maptrack 0
>>   (d6) (XEN) *** d1 ref 3
>>   (d6) (XEN) d0v0 Hit #DB in Xen context: e008:ffff82d0402048bc [domain_create+0x3ca/0x7f1], stk e010:ffff83003fea7d58, dr6 ffff0ff1
>>   (d6) (XEN) d0v0 Hit #DB in Xen context: e008:ffff82d040225e11 [free_domheap_pages+0x422/0x44a], stk e010:ffff83003fea7c38, dr6 ffff0ff1
>>   (d6) (XEN) Xen BUG at domain.c:450
>>   (d6) (XEN) ----[ Xen-4.15-unstable  x86_64  debug=y  Not tainted ]----
>>   (d6) (XEN) CPU:    0
>>   (d6) (XEN) RIP:    e008:[<ffff82d040204366>] common/domain.c#_domain_destroy+0x69/0x6b
>>
>> the problem becomes apparent.
>>
>> First of all, there is a reference count leak - share_xen_page_with_guest()'s
>> reference isn't freed anywhere.
>>
>> However, the main problem is that the 4th #DB above is this atomic_set()
>>
>>   d->is_dying = DOMDYING_dead;
>>   if ( hardware_domain == d )
>>       hardware_domain = old_hwdom;
>>   printk("*** %pd ref %d\n", d, atomic_read(&d->refcnt));
>>   atomic_set(&d->refcnt, DOMAIN_DESTROYED);
>>
>> in the domain_create() error path, which happens before free_domheap_pages()
>> drops the ref acquired by assign_pages(), and destroys still-relevant information
>> pertaining to the guest.
> I think the original idea was that by the time of the atomic_set()
> all operations potentially altering the refcount are done. This
> then allowed calling free_xenheap_pages() on e.g. the shared info
> page without regard to whether the page's reference (installed by
> share_xen_page_with_guest()) was actually dropped (i.e.
> regardless of whether it's the domain create error path or proper
> domain cleanup). With this assumption, no actual leak of anything
> would occur, but of course this isn't the "natural" way of how
> things should get cleaned up.
>
> According to this original model, free_domheap_pages() may not be
> called anymore past that point (for domain owned pages, which
> really means put_page() then; anonymous pages are of course fine
> to be freed late).

And I presume this is written down in the usual place?  (I.e. nowhere)

>> The best option is probably to use atomic_sub() to subtract (DOMAIN_DESTROYED
>> + 1) from the current refcount, which preserves the extra refs taken by
>> share_xen_page_with_guest() and assign_pages() until they can be freed
>> appropriately.
> First of all - why DOMAIN_DESTROYED+1? There's no extra reference
> you ought to be dropping here. Or else what's the counterpart of
> acquiring the respective reference?

The original atomic_set(1) needs dropping (somewhere around) here.

> And then of course this means Xen heap pages cannot be cleaned up
> anymore by merely calling free_xenheap_pages() - to get rid of
> the associated reference, all of them would need to undergo
> put_page_alloc_ref(), which in turn requires obtaining an extra
> reference, which in turn introduces another of these ugly
> theoretical error cases (because get_page() can in principle fail).
>
> Therefore I wouldn't outright discard the option of sticking to
> the original model. It would then better be properly described
> somewhere, and we would likely want to put some check in place to
> make sure such put_page() can't go unnoticed anymore when sitting
> too late on the cleanup path (which may be difficult to arrange
> for).

I agree that some rules are in desperate need of writing down.

However, given the catastrophic mess that is our reference counting and
cleanup paths, and how easy it is to get things wrong, I'm very
disinclined to permit a rule which forces divergent cleanup logic.

Making the cleanup paths non-divergent is the key to removing swathes of
dubious/buggy logic, and to stemming a steady stream of memory leaks, etc.

In particular, I don't think it's acceptable to special-case the cleanup
rules in the domain_create() error path simply because we blindly reset
the reference count when it still contains real information.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 17:17:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 17:17:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88286.165878 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEEpf-00043y-3Z; Mon, 22 Feb 2021 17:17:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88286.165878; Mon, 22 Feb 2021 17:17:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEEpf-00043r-0E; Mon, 22 Feb 2021 17:17:15 +0000
Received: by outflank-mailman (input) for mailman id 88286;
 Mon, 22 Feb 2021 17:17:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lEEpd-00043m-9x
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 17:17:13 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lEEpd-000500-6L
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 17:17:13 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lEEpd-0006ii-5C
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 17:17:13 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lEEpZ-0001YR-My; Mon, 22 Feb 2021 17:17:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=A5jwKEr7w9q1MNq9LDQtTavMi2AaamSI7SEYo/eV58w=; b=GQzuoA18pq4LiQHBk3klWJl59y
	TSL3mImcg6aMKj5B2yRsDdPgAx8MR4b6dEBDbOLzgB6ikpKEfXrU5PI1HSQ0gJMY8Un9C8Sd29ckE
	T6P8ZWFQfrfaW/Rp/BuugYz3iAhbKDyQw+mySvEl5lizoxDTeSVc/B1haIGf9CMVm4ng=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24627.59157.450971.787744@mariner.uk.xensource.com>
Date: Mon, 22 Feb 2021 17:17:09 +0000
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel\@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
    Andrew Cooper <andrew.cooper3@citrix.com>,
    Wei Liu <wl@xen.org>,
    Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
    George Dunlap <george.dunlap@citrix.com>,
    Kevin Tian <kevin.tian@intel.com>,
    Jun Nakajima <jun.nakajima@intel.com>,
    Julien Grall <julien@xen.org>
Subject: Re: [PATCH v3 1/2][4.15] VMX: delay p2m insertion of APIC access page
In-Reply-To: <04a2869a-282f-783a-6c03-8a2d7209411a@suse.com>
References: <4731a3a3-906a-98ac-11ba-6a0723903391@suse.com>
	<90271e69-c07e-a32c-5531-a79b10ef03dd@suse.com>
	<24627.38031.77928.536108@mariner.uk.xensource.com>
	<04a2869a-282f-783a-6c03-8a2d7209411a@suse.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Jan Beulich writes ("Re: [PATCH v3 1/2][4.15] VMX: delay p2m insertion of APIC access page"):
> On 22.02.2021 12:25, Ian Jackson wrote:
> > Jan Beulich writes ("[PATCH v3 1/2][4.15] VMX: delay p2m insertion of APIC access page"):
> >> Inserting the mapping at domain creation time leads to a memory leak
> >> when the creation fails later on and the domain uses separate CPU and
> >> IOMMU page tables - the latter requires intermediate page tables to be
> >> allocated, but there's no freeing of them at present in this case. Since
> >> we don't need the p2m insertion to happen this early, avoid the problem
> >> altogether by deferring it until the last possible point.
> > 
> > Thanks.
> > 
> >>   This comes at
> >> the price of not being able to handle an error other than by crashing
> >> the domain.
> > 
> > How worried should I be about this ?
> 
> Not overly much I would say. The difference is between a failure
> (-ENOMEM) during domain creation vs the domain getting crashed
> before it gets first scheduled. This is certainly less friendly
> to the user, but lack of memory shouldn't typically happen when
> creating domains. Plus the memory talked about here is such that
> gets provided explicitly to the domain (the p2m pool), rather
> than a system wide pool.

OK, thanks.

Release-Acked-by: Ian Jackson <iwj@xenproject.org>


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 17:21:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 17:21:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88290.165893 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEEtq-0004yH-MS; Mon, 22 Feb 2021 17:21:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88290.165893; Mon, 22 Feb 2021 17:21:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEEtq-0004yA-J5; Mon, 22 Feb 2021 17:21:34 +0000
Received: by outflank-mailman (input) for mailman id 88290;
 Mon, 22 Feb 2021 17:21:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ease=HY=xenproject.org=iwj@srs-us1.protection.inumbo.net>)
 id 1lEEto-0004y5-Lq
 for xen-devel@lists.xen.org; Mon, 22 Feb 2021 17:21:32 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1a67abc4-6e7d-4ed6-86a9-2d24e4fd4e74;
 Mon, 22 Feb 2021 17:21:31 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lEEtm-00053X-UK
 for xen-devel@lists.xen.org; Mon, 22 Feb 2021 17:21:30 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lEEtm-0006zj-TX
 for xen-devel@lists.xen.org; Mon, 22 Feb 2021 17:21:30 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lEEtZ-0001Zq-Bp; Mon, 22 Feb 2021 17:21:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1a67abc4-6e7d-4ed6-86a9-2d24e4fd4e74
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=a/9qX5moC3Rq8S3aUWOAoq37RgEc4xE6+ZRw7DRDZlQ=; b=6th44pN9krjVTBoZSv0zruvT1T
	wiu9rDTZKD4cNrY/XZTQTvSWP5vLB4lVM+UJ7w0MyrAz8KKuEVhdoYMs/2X05KMmkoKZkNHYbODwT
	XL/Qud00erH3YqsN/mtf1iZVBSLIKqbBL+4Kp59XADMDBAnAVypRQywAFYuzI3knTDx4=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 8bit
Message-ID: <24627.59405.114762.685265@mariner.uk.xensource.com>
Date: Mon, 22 Feb 2021 17:21:17 +0000
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>,
    Ian Jackson <iwj@xenproject.org>,
    George Dunlap <george.dunlap@citrix.com>,
    Julien Grall <julien@xen.org>,
    Stefano Stabellini <sstabellini@kernel.org>,
    Wei Liu <wl@xen.org>,
    Juergen Gross <jgross@suse.com>,
    Xen-devel <xen-devel@lists.xen.org>
Subject: Re: Stable ABI checking (take 2)
In-Reply-To: <a2acb45e-244a-2786-391d-c6ee7d267cfd@citrix.com>
References: <68c93553-7db5-f43b-b3cd-b9112a8a57dc@citrix.com>
	<78eec55c-ac2c-467e-0a2c-9acb44eba850@suse.com>
	<a2acb45e-244a-2786-391d-c6ee7d267cfd@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Jan Beulich writes ("Re: Stable ABI checking (take 2)"):
> On 22.02.2021 15:03, Andrew Cooper wrote:
> +1 for option 2, fwiw.

I'm in favour of option 2.

Andrew Cooper writes ("Re: Stable ABI checking (take 2)"):
> As far as RPM is concerned, splitting the two is important, as %build
> and %check are explicitly separate steps.  I have no idea what the deb
> policy/organisation is here.

The reason why distro build systems like to distinguish "build" from
"check" (run tests) is that often the tests are time-consuming (or
have intrusive dependencies or other practical problems).

IMO if the ABI check is very fast there is no reason not to run it by
default.  (We have configure to deal with the dependency issue.)

Ian.


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 17:24:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 17:24:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88293.165904 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEEwm-00057g-4H; Mon, 22 Feb 2021 17:24:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88293.165904; Mon, 22 Feb 2021 17:24:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEEwm-00057Z-1B; Mon, 22 Feb 2021 17:24:36 +0000
Received: by outflank-mailman (input) for mailman id 88293;
 Mon, 22 Feb 2021 17:24:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cC4D=HY=redhat.com=cohuck@srs-us1.protection.inumbo.net>)
 id 1lEEwk-00057U-R9
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 17:24:35 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id a1baa6dd-cac0-4b3a-a573-358b91181ca3;
 Mon, 22 Feb 2021 17:24:32 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-234-1b80s02bMCOR2lkI1fzehA-1; Mon, 22 Feb 2021 12:24:29 -0500
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com
 [10.5.11.14])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 498CB1936B68;
 Mon, 22 Feb 2021 17:24:26 +0000 (UTC)
Received: from gondolin (ovpn-113-115.ams2.redhat.com [10.36.113.115])
 by smtp.corp.redhat.com (Postfix) with ESMTP id F1AD85D9D3;
 Mon, 22 Feb 2021 17:24:07 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a1baa6dd-cac0-4b3a-a573-358b91181ca3
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1614014672;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=H5mrm3DZDcM5fraUv4fndEkj6st2YNZQP0B5Q36J9Uo=;
	b=eA2hPWtNFSBChFkO1/svU5YW0B/7ijE61JlzagtOiKJapbOMVXecxssihddYHsR2e0XcVo
	ufgAUMf6r3XFMvfHYKa3zY2CkQSi0rSX7jDp56bXut8qoOGt43K1w4WzGp42MTaXV71poE
	6oMYOqYqgZRuPm22tG1Fwa5NrqWF4Es=
X-MC-Unique: 1b80s02bMCOR2lkI1fzehA-1
Date: Mon, 22 Feb 2021 18:24:05 +0100
From: Cornelia Huck <cohuck@redhat.com>
To: Philippe =?UTF-8?B?TWF0aGlldS1EYXVkw6k=?= <philmd@redhat.com>
Cc: qemu-devel@nongnu.org, Aurelien Jarno <aurelien@aurel32.net>, Peter
 Maydell <peter.maydell@linaro.org>, Anthony Perard
 <anthony.perard@citrix.com>, qemu-ppc@nongnu.org, qemu-s390x@nongnu.org,
 Halil Pasic <pasic@linux.ibm.com>, Huacai Chen <chenhuacai@kernel.org>,
 xen-devel@lists.xenproject.org, Marcel Apfelbaum
 <marcel.apfelbaum@gmail.com>, David Gibson <david@gibson.dropbear.id.au>,
 qemu-arm@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>, Paolo
 Bonzini <pbonzini@redhat.com>, kvm@vger.kernel.org, BALATON Zoltan
 <balaton@eik.bme.hu>, Leif Lindholm <leif@nuviainc.com>, Richard Henderson
 <richard.henderson@linaro.org>, Radoslaw Biernacki <rad@semihalf.com>,
 Alistair Francis <alistair@alistair23.me>, Paul Durrant <paul@xen.org>,
 Eduardo Habkost <ehabkost@redhat.com>, "Michael S. Tsirkin"
 <mst@redhat.com>, Thomas Huth <thuth@redhat.com>, Jiaxun Yang
 <jiaxun.yang@flygoat.com>, =?UTF-8?B?SGVydsOp?= Poussineau
 <hpoussin@reactos.org>, Greg Kurz <groug@kaod.org>, Christian Borntraeger
 <borntraeger@de.ibm.com>, "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 David Hildenbrand <david@redhat.com>, Mark Cave-Ayland
 <mark.cave-ayland@ilande.co.uk>, Aleksandar Rikalo
 <aleksandar.rikalo@syrmia.com>, Philippe =?UTF-8?B?TWF0aGlldS1EYXVkw6k=?=
 <f4bug@amsat.org>
Subject: Re: [PATCH v2 01/11] accel/kvm: Check MachineClass kvm_type()
 return value
Message-ID: <20210222182405.3e6e9a6f.cohuck@redhat.com>
In-Reply-To: <20210219173847.2054123-2-philmd@redhat.com>
References: <20210219173847.2054123-1-philmd@redhat.com>
	<20210219173847.2054123-2-philmd@redhat.com>
Organization: Red Hat GmbH
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14

On Fri, 19 Feb 2021 18:38:37 +0100
Philippe Mathieu-Daudé <philmd@redhat.com> wrote:

> MachineClass::kvm_type() can return -1 on failure.
> Document it, and add a check in kvm_init().
> 
> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> ---
>  include/hw/boards.h | 3 ++-
>  accel/kvm/kvm-all.c | 6 ++++++
>  2 files changed, 8 insertions(+), 1 deletion(-)
> 
> diff --git a/include/hw/boards.h b/include/hw/boards.h
> index a46dfe5d1a6..68d3d10f6b0 100644
> --- a/include/hw/boards.h
> +++ b/include/hw/boards.h
> @@ -127,7 +127,8 @@ typedef struct {
>   *    implement and a stub device is required.
>   * @kvm_type:
>   *    Return the type of KVM corresponding to the kvm-type string option or
> - *    computed based on other criteria such as the host kernel capabilities.
> + *    computed based on other criteria such as the host kernel capabilities
> + *    (which can't be negative), or -1 on error.
>   * @numa_mem_supported:
>   *    true if '--numa node.mem' option is supported and false otherwise
>   * @smp_parse:
> diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
> index 84c943fcdb2..b069938d881 100644
> --- a/accel/kvm/kvm-all.c
> +++ b/accel/kvm/kvm-all.c
> @@ -2057,6 +2057,12 @@ static int kvm_init(MachineState *ms)
>                                                              "kvm-type",
>                                                              &error_abort);
>          type = mc->kvm_type(ms, kvm_type);
> +        if (type < 0) {
> +            ret = -EINVAL;
> +            fprintf(stderr, "Failed to detect kvm-type for machine '%s'\n",
> +                    mc->name);
> +            goto err;
> +        }
>      }
>  
>      do {

No objection to this patch; but I'm wondering why some non-pseries
machines implement the kvm_type callback, when I see the kvm-type
property only for pseries? Am I holding my git grep wrong?
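
For context, the contract the patch documents — a kvm_type() hook returns a
non-negative type on success and -1 on failure, and kvm_init() must check
before using it — can be sketched standalone. The names below
(demo_kvm_type, init_check, and the trimmed-down structs) are illustrative
stand-ins for this sketch, not QEMU symbols:

```c
#include <assert.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>

/* Simplified stand-ins for QEMU's MachineState/MachineClass; the real
 * structures live in include/hw/boards.h. */
typedef struct MachineState MachineState;
typedef struct {
    const char *name;
    int (*kvm_type)(MachineState *ms, const char *type_str);
} MachineClass;

/* Hypothetical per-machine hook: valid types are non-negative,
 * -1 signals failure, mirroring the documented contract. */
static int demo_kvm_type(MachineState *ms, const char *type_str)
{
    (void)ms;
    if (type_str == NULL) {
        return 0;   /* default type */
    }
    return -1;      /* unknown kvm-type string */
}

/* The check the patch adds to kvm_init(): reject a negative type
 * instead of passing it on to KVM_CREATE_VM. */
static int init_check(MachineClass *mc, MachineState *ms, const char *type_str)
{
    int type = mc->kvm_type(ms, type_str);
    if (type < 0) {
        fprintf(stderr, "Failed to detect kvm-type for machine '%s'\n",
                mc->name);
        return -EINVAL;
    }
    return type;
}
```

Without the check, a -1 from the hook would flow into the VM-creation ioctl
and fail there with a less helpful diagnostic.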



From xen-devel-bounces@lists.xenproject.org Mon Feb 22 17:34:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 17:34:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88299.165917 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEF6W-0006D0-6g; Mon, 22 Feb 2021 17:34:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88299.165917; Mon, 22 Feb 2021 17:34:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEF6W-0006Ct-3E; Mon, 22 Feb 2021 17:34:40 +0000
Received: by outflank-mailman (input) for mailman id 88299;
 Mon, 22 Feb 2021 17:34:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cC4D=HY=redhat.com=cohuck@srs-us1.protection.inumbo.net>)
 id 1lEF6V-0006Co-13
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 17:34:39 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 7f8e8e55-6765-41c4-90ae-d4664e6b01b0;
 Mon, 22 Feb 2021 17:34:38 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-454-vQd1LyseM5qTcpsZxvKmjQ-1; Mon, 22 Feb 2021 12:34:33 -0500
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com
 [10.5.11.23])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 2E7AB1850233;
 Mon, 22 Feb 2021 17:34:10 +0000 (UTC)
Received: from gondolin (ovpn-113-115.ams2.redhat.com [10.36.113.115])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 30F26E2CE;
 Mon, 22 Feb 2021 17:34:03 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7f8e8e55-6765-41c4-90ae-d4664e6b01b0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1614015278;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=XWsN624cEMx0QkfAoPhColDm3XkV4SngFq4vEfyfRY4=;
	b=MRd/l2kJEH2fk0oqWs5VEoOg+Z8kLRs1I7qlcUCSvaXMdmRnmoZcsvxbPyu8jDe1i59nge
	syWoEWc+f+3nTsafBDe+iqHmv0wnC+eH4PaCg70ElYHY4qsljt0Eba4IyGuzkCj8dDd3nI
	b/NEFHCqkJHTjwjbE3V51LGUYnERWWg=
X-MC-Unique: vQd1LyseM5qTcpsZxvKmjQ-1
Date: Mon, 22 Feb 2021 18:34:00 +0100
From: Cornelia Huck <cohuck@redhat.com>
To: Philippe =?UTF-8?B?TWF0aGlldS1EYXVkw6k=?= <philmd@redhat.com>
Cc: qemu-devel@nongnu.org, Aurelien Jarno <aurelien@aurel32.net>, Peter
 Maydell <peter.maydell@linaro.org>, Anthony Perard
 <anthony.perard@citrix.com>, qemu-ppc@nongnu.org, qemu-s390x@nongnu.org,
 Halil Pasic <pasic@linux.ibm.com>, Huacai Chen <chenhuacai@kernel.org>,
 xen-devel@lists.xenproject.org, Marcel Apfelbaum
 <marcel.apfelbaum@gmail.com>, David Gibson <david@gibson.dropbear.id.au>,
 qemu-arm@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>, Paolo
 Bonzini <pbonzini@redhat.com>, kvm@vger.kernel.org, BALATON Zoltan
 <balaton@eik.bme.hu>, Leif Lindholm <leif@nuviainc.com>, Richard Henderson
 <richard.henderson@linaro.org>, Radoslaw Biernacki <rad@semihalf.com>,
 Alistair Francis <alistair@alistair23.me>, Paul Durrant <paul@xen.org>,
 Eduardo Habkost <ehabkost@redhat.com>, "Michael S. Tsirkin"
 <mst@redhat.com>, Thomas Huth <thuth@redhat.com>, Jiaxun Yang
 <jiaxun.yang@flygoat.com>, =?UTF-8?B?SGVydsOp?= Poussineau
 <hpoussin@reactos.org>, Greg Kurz <groug@kaod.org>, Christian Borntraeger
 <borntraeger@de.ibm.com>, "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 David Hildenbrand <david@redhat.com>, Mark Cave-Ayland
 <mark.cave-ayland@ilande.co.uk>, Aleksandar Rikalo
 <aleksandar.rikalo@syrmia.com>, Philippe =?UTF-8?B?TWF0aGlldS1EYXVkw6k=?=
 <f4bug@amsat.org>
Subject: Re: [PATCH v2 02/11] hw/boards: Introduce
 machine_class_valid_for_accelerator()
Message-ID: <20210222183400.0c151d46.cohuck@redhat.com>
In-Reply-To: <20210219173847.2054123-3-philmd@redhat.com>
References: <20210219173847.2054123-1-philmd@redhat.com>
	<20210219173847.2054123-3-philmd@redhat.com>
Organization: Red Hat GmbH
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23

On Fri, 19 Feb 2021 18:38:38 +0100
Philippe Mathieu-Daudé <philmd@redhat.com> wrote:

> Introduce the valid_accelerators[] field to express the list
> of valid accelerators a machine can use, and add the
> machine_class_valid_for_current_accelerator() and
> machine_class_valid_for_accelerator() methods.
> 
> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> ---
>  include/hw/boards.h | 24 ++++++++++++++++++++++++
>  hw/core/machine.c   | 26 ++++++++++++++++++++++++++
>  2 files changed, 50 insertions(+)
> 
> diff --git a/include/hw/boards.h b/include/hw/boards.h
> index 68d3d10f6b0..4d08bc12093 100644
> --- a/include/hw/boards.h
> +++ b/include/hw/boards.h
> @@ -36,6 +36,24 @@ void machine_set_cpu_numa_node(MachineState *machine,
>                                 const CpuInstanceProperties *props,
>                                 Error **errp);
> 
> +/**
> + * machine_class_valid_for_accelerator:
> + * @mc: the machine class
> + * @acc_name: accelerator name
> + *
> + * Returns %true if the accelerator is valid for the machine, %false
> + * otherwise. See #MachineClass.valid_accelerators.

Naming confusion: is the machine class valid for the accelerator, or
the accelerator valid for the machine class? Or either? :)

> + */
> +bool machine_class_valid_for_accelerator(MachineClass *mc, const char *acc_name);
> +/**
> + * machine_class_valid_for_current_accelerator:
> + * @mc: the machine class
> + *
> + * Returns %true if the accelerator is valid for the current machine,
> + * %false otherwise. See #MachineClass.valid_accelerators.

Same here: current accelerator vs. current machine.

> + */
> +bool machine_class_valid_for_current_accelerator(MachineClass *mc);
> +
>  void machine_class_allow_dynamic_sysbus_dev(MachineClass *mc, const char *type);
>  /*
>   * Checks that backend isn't used, preps it for exclusive usage and
> @@ -125,6 +143,11 @@ typedef struct {
>   *    should instead use "unimplemented-device" for all memory ranges where
>   *    the guest will attempt to probe for a device that QEMU doesn't
>   *    implement and a stub device is required.
> + * @valid_accelerators:
> + *    If this machine supports a specific set of virtualization accelerators,
> + *    this contains a NULL-terminated list of the accelerators that can be
> + *    used. If this field is not set, any accelerator is valid. The QTest
> + *    accelerator is always valid.
>   * @kvm_type:
>   *    Return the type of KVM corresponding to the kvm-type string option or
>   *    computed based on other criteria such as the host kernel capabilities
> @@ -166,6 +189,7 @@ struct MachineClass {
>      const char *alias;
>      const char *desc;
>      const char *deprecation_reason;
> +    const char *const *valid_accelerators;
> 
>      void (*init)(MachineState *state);
>      void (*reset)(MachineState *state);
> diff --git a/hw/core/machine.c b/hw/core/machine.c
> index 970046f4388..c42d8e382b1 100644
> --- a/hw/core/machine.c
> +++ b/hw/core/machine.c
> @@ -518,6 +518,32 @@ static void machine_set_nvdimm_persistence(Object *obj, const char *value,
>      nvdimms_state->persistence_string = g_strdup(value);
>  }
> 
> +bool machine_class_valid_for_accelerator(MachineClass *mc, const char *acc_name)
> +{
> +    const char *const *name = mc->valid_accelerators;
> +
> +    if (!name) {
> +        return true;
> +    }
> +    if (strcmp(acc_name, "qtest") == 0) {
> +        return true;
> +    }
> +
> +    for (unsigned i = 0; name[i]; i++) {
> +        if (strcasecmp(acc_name, name[i]) == 0) {
> +            return true;
> +        }
> +    }
> +    return false;
> +}
> +
> +bool machine_class_valid_for_current_accelerator(MachineClass *mc)
> +{
> +    AccelClass *ac = ACCEL_GET_CLASS(current_accel());
> +
> +    return machine_class_valid_for_accelerator(mc, ac->name);
> +}

The implementation of the function tests for the current accelerator,
so I think you need to tweak the description above?

> +
>  void machine_class_allow_dynamic_sysbus_dev(MachineClass *mc, const char *type)
>  {
>      QAPI_LIST_PREPEND(mc->allowed_dynamic_sysbus_devices, g_strdup(type));
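
The lookup the patch adds has three cases: no list means everything is
allowed, "qtest" is always allowed, and otherwise the name must appear in
the NULL-terminated list (compared case-insensitively). A self-contained
sketch of that logic, with `accel_is_valid` as an invented name and a plain
array standing in for the MachineClass field:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>
#include <strings.h>   /* strcasecmp */

/* Standalone sketch of the validity check: `valid` is a NULL-terminated
 * array of accelerator names, or NULL if the machine accepts any. */
static bool accel_is_valid(const char *const *valid, const char *acc_name)
{
    /* No list means the machine places no restriction. */
    if (!valid) {
        return true;
    }
    /* qtest is always allowed, so tests can run on any machine. */
    if (strcmp(acc_name, "qtest") == 0) {
        return true;
    }
    /* Otherwise the name must be in the list (case-insensitive). */
    for (unsigned i = 0; valid[i]; i++) {
        if (strcasecmp(acc_name, valid[i]) == 0) {
            return true;
        }
    }
    return false;
}
```

Note the asymmetry Cornelia points at: "qtest" is matched case-sensitively
while listed names are matched case-insensitively.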



From xen-devel-bounces@lists.xenproject.org Mon Feb 22 17:38:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 17:38:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88302.165929 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEFA1-0006Mf-NX; Mon, 22 Feb 2021 17:38:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88302.165929; Mon, 22 Feb 2021 17:38:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEFA1-0006MY-KE; Mon, 22 Feb 2021 17:38:17 +0000
Received: by outflank-mailman (input) for mailman id 88302;
 Mon, 22 Feb 2021 17:38:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cC4D=HY=redhat.com=cohuck@srs-us1.protection.inumbo.net>)
 id 1lEFA0-0006MT-1g
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 17:38:16 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id d88c079e-5b5f-4c27-af19-e6b89c5bcf92;
 Mon, 22 Feb 2021 17:38:15 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-287-011n0yg1NUOpvcUU3B4QYw-1; Mon, 22 Feb 2021 12:38:10 -0500
Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.phx2.redhat.com
 [10.5.11.16])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 92829195D560;
 Mon, 22 Feb 2021 17:38:06 +0000 (UTC)
Received: from gondolin (ovpn-113-115.ams2.redhat.com [10.36.113.115])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 11BEF5C1BD;
 Mon, 22 Feb 2021 17:37:53 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d88c079e-5b5f-4c27-af19-e6b89c5bcf92
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1614015495;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=4lA4izgL6BmBHFfoBO0go7fPCtxFMbj7pq95MLNGllc=;
	b=LnFc6ewXYM4QD9GMDLV9UCERGgaDu5QETE3r3gLZ4aaK1JFNJqKlYdzUURtKeFq9Hatv+T
	L4qz7jClJmYYDyfgmlrdqruyINFRGll7k5pkm+pBJGSp47K960S6J9buQHB6dWnnDEUz5q
	GdW6wRyh4rGPv6MvZX/u6nIcTr9tjM8=
X-MC-Unique: 011n0yg1NUOpvcUU3B4QYw-1
Date: Mon, 22 Feb 2021 18:37:51 +0100
From: Cornelia Huck <cohuck@redhat.com>
To: Philippe =?UTF-8?B?TWF0aGlldS1EYXVkw6k=?= <philmd@redhat.com>
Cc: qemu-devel@nongnu.org, Aurelien Jarno <aurelien@aurel32.net>, Peter
 Maydell <peter.maydell@linaro.org>, Anthony Perard
 <anthony.perard@citrix.com>, qemu-ppc@nongnu.org, qemu-s390x@nongnu.org,
 Halil Pasic <pasic@linux.ibm.com>, Huacai Chen <chenhuacai@kernel.org>,
 xen-devel@lists.xenproject.org, Marcel Apfelbaum
 <marcel.apfelbaum@gmail.com>, David Gibson <david@gibson.dropbear.id.au>,
 qemu-arm@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>, Paolo
 Bonzini <pbonzini@redhat.com>, kvm@vger.kernel.org, BALATON Zoltan
 <balaton@eik.bme.hu>, Leif Lindholm <leif@nuviainc.com>, Richard Henderson
 <richard.henderson@linaro.org>, Radoslaw Biernacki <rad@semihalf.com>,
 Alistair Francis <alistair@alistair23.me>, Paul Durrant <paul@xen.org>,
 Eduardo Habkost <ehabkost@redhat.com>, "Michael S. Tsirkin"
 <mst@redhat.com>, Thomas Huth <thuth@redhat.com>, Jiaxun Yang
 <jiaxun.yang@flygoat.com>, =?UTF-8?B?SGVydsOp?= Poussineau
 <hpoussin@reactos.org>, Greg Kurz <groug@kaod.org>, Christian Borntraeger
 <borntraeger@de.ibm.com>, "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 David Hildenbrand <david@redhat.com>, Mark Cave-Ayland
 <mark.cave-ayland@ilande.co.uk>, Aleksandar Rikalo
 <aleksandar.rikalo@syrmia.com>, Philippe =?UTF-8?B?TWF0aGlldS1EYXVkw6k=?=
 <f4bug@amsat.org>
Subject: Re: [PATCH v2 07/11] hw/s390x: Explicit the s390-ccw-virtio
 machines support TCG and KVM
Message-ID: <20210222183751.0a8f2d2d.cohuck@redhat.com>
In-Reply-To: <20210219173847.2054123-8-philmd@redhat.com>
References: <20210219173847.2054123-1-philmd@redhat.com>
	<20210219173847.2054123-8-philmd@redhat.com>
Organization: Red Hat GmbH
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16

On Fri, 19 Feb 2021 18:38:43 +0100
Philippe Mathieu-Daudé <philmd@redhat.com> wrote:

I'd lose the 'Explicit' in $SUBJECT.


> All s390-ccw-virtio machines support TCG and KVM.
> 
> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> ---
>  hw/s390x/s390-virtio-ccw.c | 5 +++++
>  1 file changed, 5 insertions(+)
> 
> diff --git a/hw/s390x/s390-virtio-ccw.c b/hw/s390x/s390-virtio-ccw.c
> index 2972b607f36..1f168485066 100644
> --- a/hw/s390x/s390-virtio-ccw.c
> +++ b/hw/s390x/s390-virtio-ccw.c
> @@ -586,6 +586,10 @@ static ram_addr_t s390_fixup_ram_size(ram_addr_t sz)
>      return newsz;
>  }
> 
> +static const char *const valid_accels[] = {
> +    "tcg", "kvm", NULL
> +};
> +
>  static void ccw_machine_class_init(ObjectClass *oc, void *data)
>  {
>      MachineClass *mc = MACHINE_CLASS(oc);
> @@ -612,6 +616,7 @@ static void ccw_machine_class_init(ObjectClass *oc, void *data)
>      mc->possible_cpu_arch_ids = s390_possible_cpu_arch_ids;
>      /* it is overridden with 'host' cpu *in kvm_arch_init* */
>      mc->default_cpu_type = S390_CPU_TYPE_NAME("qemu");
> +    mc->valid_accelerators = valid_accels;
>      hc->plug = s390_machine_device_plug;
>      hc->unplug_request = s390_machine_device_unplug_request;
>      nc->nmi_monitor_handler = s390_nmi;

Reviewed-by: Cornelia Huck <cohuck@redhat.com>



From xen-devel-bounces@lists.xenproject.org Mon Feb 22 17:41:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 17:41:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88305.165941 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEFCw-0007J2-6p; Mon, 22 Feb 2021 17:41:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88305.165941; Mon, 22 Feb 2021 17:41:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEFCw-0007Iv-3G; Mon, 22 Feb 2021 17:41:18 +0000
Received: by outflank-mailman (input) for mailman id 88305;
 Mon, 22 Feb 2021 17:41:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=V+0I=HY=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1lEFCv-0007Iq-0w
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 17:41:17 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 945e0222-ce00-4f3c-a36d-0c98ecd82a24;
 Mon, 22 Feb 2021 17:41:16 +0000 (UTC)
Received: from mail-ed1-f71.google.com (mail-ed1-f71.google.com
 [209.85.208.71]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-211-CxJApGiAP1Or60oY4lYrXg-1; Mon, 22 Feb 2021 12:41:12 -0500
Received: by mail-ed1-f71.google.com with SMTP id l23so7454930edt.23
 for <xen-devel@lists.xenproject.org>; Mon, 22 Feb 2021 09:41:12 -0800 (PST)
Received: from [192.168.1.36] (68.red-83-57-175.dynamicip.rima-tde.net.
 [83.57.175.68])
 by smtp.gmail.com with ESMTPSA id 35sm12703670edp.85.2021.02.22.09.41.08
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 22 Feb 2021 09:41:10 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 945e0222-ce00-4f3c-a36d-0c98ecd82a24
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1614015676;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=77VYX8qyvFFdVxjrkL962xLS4W9z5d8KO6lHL7cyqdE=;
	b=XqYfmYj8tLkJMIg5nvMVX8eRh3zUdX9mr+ycheUob06W01kE5psM5baLZ9qNWB9tQO/ctb
	lqA3STj+TP37KAiL51w1PbNAa7mWjdOhIP2QL+ez4ccLhNe31bdqWVpx9dJmM+flr3lorO
	7Jp+Uoy+jlj/do5GCiWUj6NMabnyVYA=
X-MC-Unique: CxJApGiAP1Or60oY4lYrXg-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=77VYX8qyvFFdVxjrkL962xLS4W9z5d8KO6lHL7cyqdE=;
        b=KBHrCPtjnzktsSxjqmSiKBKgs5Xn6ElYRBoXVo3/uUF28ax6V96vp5OU8S9Zq77CQz
         07jqYp1Fir05Q5UmEYI486aFa088K8GfW4g38UGwmfCcEt++cRuqP+WqUZ2JRE4a6jky
         53qZIN8uDXdycMggZy5kbDhs7NxvdNiH1ZpyxVLBxBNd39eKUMedgsBHkYXCea2IOMTO
         lMWnnBXXGxgUz/yt7pGMyDRPyWt78FRIrsUalakBOh3TtLvBrBapETA+XuLWcGPQOx0s
         JtEuLJ29EzLb7v5wOI9GZJlTikv7JF6F1w+BRVZpapFkyJW4GuJizxTMqBwudcPo3rmY
         FDdw==
X-Gm-Message-State: AOAM5314pRvgQLNfKViMb/92ui9yrEVNe3tK719NN3t5CKXjCGzMAflm
	I9AyNrUJy675ZC8oLxWfj6BRK2l/xqtZBfj6GBdPvoPVoNQX7hjiieh8IaEQDfxRya4qXQlpH0o
	t+wuIpRuoHDNQqIHK59gIMRwOAro=
X-Received: by 2002:a17:906:3916:: with SMTP id f22mr21981079eje.328.1614015671135;
        Mon, 22 Feb 2021 09:41:11 -0800 (PST)
X-Google-Smtp-Source: ABdhPJyHW8/h+A0Q0TqQKsXXcDdlUdoA+aX8vZ+NMqUGfMGQKgJyne7vdUySnR2jDa7GViPMBbw/Eg==
X-Received: by 2002:a17:906:3916:: with SMTP id f22mr21981048eje.328.1614015670962;
        Mon, 22 Feb 2021 09:41:10 -0800 (PST)
Subject: Re: [PATCH v2 01/11] accel/kvm: Check MachineClass kvm_type() return
 value
To: Cornelia Huck <cohuck@redhat.com>
Cc: qemu-devel@nongnu.org, Aurelien Jarno <aurelien@aurel32.net>,
 Peter Maydell <peter.maydell@linaro.org>,
 Anthony Perard <anthony.perard@citrix.com>, qemu-ppc@nongnu.org,
 qemu-s390x@nongnu.org, Halil Pasic <pasic@linux.ibm.com>,
 Huacai Chen <chenhuacai@kernel.org>, xen-devel@lists.xenproject.org,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 David Gibson <david@gibson.dropbear.id.au>, qemu-arm@nongnu.org,
 Stefano Stabellini <sstabellini@kernel.org>,
 Paolo Bonzini <pbonzini@redhat.com>, kvm@vger.kernel.org,
 BALATON Zoltan <balaton@eik.bme.hu>, Leif Lindholm <leif@nuviainc.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Radoslaw Biernacki <rad@semihalf.com>,
 Alistair Francis <alistair@alistair23.me>, Paul Durrant <paul@xen.org>,
 Eduardo Habkost <ehabkost@redhat.com>, "Michael S. Tsirkin"
 <mst@redhat.com>, Thomas Huth <thuth@redhat.com>,
 Jiaxun Yang <jiaxun.yang@flygoat.com>,
 =?UTF-8?Q?Herv=c3=a9_Poussineau?= <hpoussin@reactos.org>,
 Greg Kurz <groug@kaod.org>, Christian Borntraeger <borntraeger@de.ibm.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 David Hildenbrand <david@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <f4bug@amsat.org>
References: <20210219173847.2054123-1-philmd@redhat.com>
 <20210219173847.2054123-2-philmd@redhat.com>
 <20210222182405.3e6e9a6f.cohuck@redhat.com>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>
Message-ID: <bc37276d-74cc-22f0-fcc0-4ee5e62cf1df@redhat.com>
Date: Mon, 22 Feb 2021 18:41:07 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <20210222182405.3e6e9a6f.cohuck@redhat.com>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 2/22/21 6:24 PM, Cornelia Huck wrote:
> On Fri, 19 Feb 2021 18:38:37 +0100
> Philippe Mathieu-Daudé <philmd@redhat.com> wrote:
> 
>> MachineClass::kvm_type() can return -1 on failure.
>> Document it, and add a check in kvm_init().
>>
>> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
>> ---
>>  include/hw/boards.h | 3 ++-
>>  accel/kvm/kvm-all.c | 6 ++++++
>>  2 files changed, 8 insertions(+), 1 deletion(-)
>>
>> diff --git a/include/hw/boards.h b/include/hw/boards.h
>> index a46dfe5d1a6..68d3d10f6b0 100644
>> --- a/include/hw/boards.h
>> +++ b/include/hw/boards.h
>> @@ -127,7 +127,8 @@ typedef struct {
>>   *    implement and a stub device is required.
>>   * @kvm_type:
>>   *    Return the type of KVM corresponding to the kvm-type string option or
>> - *    computed based on other criteria such as the host kernel capabilities.
>> + *    computed based on other criteria such as the host kernel capabilities
>> + *    (which can't be negative), or -1 on error.
>>   * @numa_mem_supported:
>>   *    true if '--numa node.mem' option is supported and false otherwise
>>   * @smp_parse:
>> diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
>> index 84c943fcdb2..b069938d881 100644
>> --- a/accel/kvm/kvm-all.c
>> +++ b/accel/kvm/kvm-all.c
>> @@ -2057,6 +2057,12 @@ static int kvm_init(MachineState *ms)
>>                                                              "kvm-type",
>>                                                              &error_abort);
>>          type = mc->kvm_type(ms, kvm_type);
>> +        if (type < 0) {
>> +            ret = -EINVAL;
>> +            fprintf(stderr, "Failed to detect kvm-type for machine '%s'\n",
>> +                    mc->name);
>> +            goto err;
>> +        }
>>      }
>>  
>>      do {
> 
> No objection to this patch; but I'm wondering why some non-pseries
> machines implement the kvm_type callback, when I see the kvm-type
> property only for pseries? Am I holding my git grep wrong?

Can it be what David commented here?
https://www.mail-archive.com/qemu-devel@nongnu.org/msg784508.html



From xen-devel-bounces@lists.xenproject.org Mon Feb 22 17:43:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 17:43:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88309.165956 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEFEk-0007QV-Hq; Mon, 22 Feb 2021 17:43:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88309.165956; Mon, 22 Feb 2021 17:43:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEFEk-0007QO-Dg; Mon, 22 Feb 2021 17:43:10 +0000
Received: by outflank-mailman (input) for mailman id 88309;
 Mon, 22 Feb 2021 17:43:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cC4D=HY=redhat.com=cohuck@srs-us1.protection.inumbo.net>)
 id 1lEFEi-0007QG-9A
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 17:43:08 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id a972bc14-6ecf-4921-9681-1defaecdd559;
 Mon, 22 Feb 2021 17:43:07 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-182-0CE6DRn-Pp6-w9VgG4m5yg-1; Mon, 22 Feb 2021 12:43:03 -0500
Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.phx2.redhat.com
 [10.5.11.22])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 7F908195D560;
 Mon, 22 Feb 2021 17:43:00 +0000 (UTC)
Received: from gondolin (ovpn-113-115.ams2.redhat.com [10.36.113.115])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 931011002382;
 Mon, 22 Feb 2021 17:42:47 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a972bc14-6ecf-4921-9681-1defaecdd559
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1614015787;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=iZK1AHq84V95gWDJ19gXOB4yY4ebnGHnrWYQW1lx22Y=;
	b=IsXEY80aIDhHGew0TYz61iIKps7KqoxQiLBRRMu+2Q5c8MiiVTDidhKAjXLUnVzzRTAEd0
	JBh1AsAvc/gN17YPjfZt0cqPWMmvhfcidDDDy7+TC9j3vAKO9SSiVablyEyZ2YmuQqzjGi
	paDCFpbBlQdgt2CTs3dPgk2sJTA2hQw=
X-MC-Unique: 0CE6DRn-Pp6-w9VgG4m5yg-1
Date: Mon, 22 Feb 2021 18:42:45 +0100
From: Cornelia Huck <cohuck@redhat.com>
To: Philippe =?UTF-8?B?TWF0aGlldS1EYXVkw6k=?= <philmd@redhat.com>
Cc: qemu-devel@nongnu.org, Aurelien Jarno <aurelien@aurel32.net>, Peter
 Maydell <peter.maydell@linaro.org>, Anthony Perard
 <anthony.perard@citrix.com>, qemu-ppc@nongnu.org, qemu-s390x@nongnu.org,
 Halil Pasic <pasic@linux.ibm.com>, Huacai Chen <chenhuacai@kernel.org>,
 xen-devel@lists.xenproject.org, Marcel Apfelbaum
 <marcel.apfelbaum@gmail.com>, David Gibson <david@gibson.dropbear.id.au>,
 qemu-arm@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>, Paolo
 Bonzini <pbonzini@redhat.com>, kvm@vger.kernel.org, BALATON Zoltan
 <balaton@eik.bme.hu>, Leif Lindholm <leif@nuviainc.com>, Richard Henderson
 <richard.henderson@linaro.org>, Radoslaw Biernacki <rad@semihalf.com>,
 Alistair Francis <alistair@alistair23.me>, Paul Durrant <paul@xen.org>,
 Eduardo Habkost <ehabkost@redhat.com>, "Michael S. Tsirkin"
 <mst@redhat.com>, Thomas Huth <thuth@redhat.com>, Jiaxun Yang
 <jiaxun.yang@flygoat.com>, =?UTF-8?B?SGVydsOp?= Poussineau
 <hpoussin@reactos.org>, Greg Kurz <groug@kaod.org>, Christian Borntraeger
 <borntraeger@de.ibm.com>, "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 David Hildenbrand <david@redhat.com>, Mark Cave-Ayland
 <mark.cave-ayland@ilande.co.uk>, Aleksandar Rikalo
 <aleksandar.rikalo@syrmia.com>, Philippe =?UTF-8?B?TWF0aGlldS1EYXVkw6k=?=
 <f4bug@amsat.org>, Daniel =?UTF-8?B?QmVycmFuZ8Op?= <berrange@redhat.com>
Subject: Re: [PATCH v2 03/11] hw/core: Restrict 'query-machines' to those
 supported by current accel
Message-ID: <20210222184245.1e0d0315.cohuck@redhat.com>
In-Reply-To: <20210219173847.2054123-4-philmd@redhat.com>
References: <20210219173847.2054123-1-philmd@redhat.com>
	<20210219173847.2054123-4-philmd@redhat.com>
Organization: Red Hat GmbH
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.22

On Fri, 19 Feb 2021 18:38:39 +0100
Philippe Mathieu-Daudé <philmd@redhat.com> wrote:

> Do not let 'query-machines' return machines not valid with
> the current accelerator.
>
> Suggested-by: Daniel Berrangé <berrange@redhat.com>
> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> ---
>  hw/core/machine-qmp-cmds.c | 4 ++++
>  1 file changed, 4 insertions(+)
>
> diff --git a/hw/core/machine-qmp-cmds.c b/hw/core/machine-qmp-cmds.c
> index 44e979e503b..c8630bc2ddc 100644
> --- a/hw/core/machine-qmp-cmds.c
> +++ b/hw/core/machine-qmp-cmds.c
> @@ -204,6 +204,10 @@ MachineInfoList *qmp_query_machines(Error **errp)
>          MachineClass *mc = el->data;
>          MachineInfo *info;
>
> +        if (!machine_class_valid_for_current_accelerator(mc)) {
> +            continue;
> +        }
> +
>          info = g_malloc0(sizeof(*info));
>          if (mc->is_default) {
>              info->has_is_default = true;

Reviewed-by: Cornelia Huck <cohuck@redhat.com>
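
For context, the filtering this patch adds can be modelled in isolation: build the machine list but skip any machine class the current accelerator cannot drive. A minimal standalone sketch — the `MachineClassModel` type and the data below are hypothetical stand-ins for QEMU's `MachineClass`, not the real API:

```c
#include <stddef.h>
#include <string.h>
#include <strings.h>

/* Hypothetical stand-in for QEMU's MachineClass: just a name plus an
 * optional NULL-terminated list of accelerators the machine accepts. */
typedef struct {
    const char *name;
    const char *const *valid_accelerators;  /* NULL => unrestricted */
} MachineClassModel;

/* Mirrors the check the patch relies on: an unset list means any
 * accelerator is fine, and "qtest" is always accepted. */
static int valid_for_accelerator(const MachineClassModel *mc,
                                 const char *acc_name)
{
    const char *const *name = mc->valid_accelerators;

    if (!name || strcmp(acc_name, "qtest") == 0) {
        return 1;
    }
    for (unsigned i = 0; name[i]; i++) {
        if (strcasecmp(acc_name, name[i]) == 0) {
            return 1;
        }
    }
    return 0;
}

/* The query loop: copy only the machines valid for acc into out[],
 * the same effect as the `continue` added to qmp_query_machines(). */
static size_t query_machines(const MachineClassModel *mcs, size_t n,
                             const char *acc,
                             const MachineClassModel **out)
{
    size_t count = 0;

    for (size_t i = 0; i < n; i++) {
        if (!valid_for_accelerator(&mcs[i], acc)) {
            continue;   /* hidden from the QMP reply */
        }
        out[count++] = &mcs[i];
    }
    return count;
}
```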



From xen-devel-bounces@lists.xenproject.org Mon Feb 22 17:46:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 17:46:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88312.165968 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEFHv-0007ac-1I; Mon, 22 Feb 2021 17:46:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88312.165968; Mon, 22 Feb 2021 17:46:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEFHu-0007aV-Tm; Mon, 22 Feb 2021 17:46:26 +0000
Received: by outflank-mailman (input) for mailman id 88312;
 Mon, 22 Feb 2021 17:46:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=V+0I=HY=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1lEFHt-0007aP-GI
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 17:46:25 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 63f9116a-6948-45f5-965d-1fbaf19a8a6a;
 Mon, 22 Feb 2021 17:46:24 +0000 (UTC)
Received: from mail-ed1-f72.google.com (mail-ed1-f72.google.com
 [209.85.208.72]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-595-W2kS_EfeOFe0_XbI0Xr19A-1; Mon, 22 Feb 2021 12:46:21 -0500
Received: by mail-ed1-f72.google.com with SMTP id ch30so3589727edb.14
 for <xen-devel@lists.xenproject.org>; Mon, 22 Feb 2021 09:46:19 -0800 (PST)
Received: from [192.168.1.36] (68.red-83-57-175.dynamicip.rima-tde.net.
 [83.57.175.68])
 by smtp.gmail.com with ESMTPSA id s2sm8446265edt.35.2021.02.22.09.46.16
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 22 Feb 2021 09:46:18 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 63f9116a-6948-45f5-965d-1fbaf19a8a6a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1614015984;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=a3xauJaR66hfM3jitPOYh7RFO07zXJKYLcWPgDvmEWs=;
	b=abfOmxhQRXxloIPo1wlXbGWMGcPQ4MzBmK5SyQIwNnQ6BL1oYwj5pzD072tQTIRsg5TnSr
	cj9Yx69C4NWmzyGjbwjC6FKUFMUYd+vDSIZJwYZ9O3aFzLBtnJcNa7C86WK9tAf/pGOC8o
	w/b42ObNg6l6y0Fx+M70U4IQPqdn8m0=
X-MC-Unique: W2kS_EfeOFe0_XbI0Xr19A-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=a3xauJaR66hfM3jitPOYh7RFO07zXJKYLcWPgDvmEWs=;
        b=sAWDWygKiPRi+qRs6UbRDAzrBDOVq39vKZj6FErNQuFYmdYYQSgW5mtMuLdO9tJJ7z
         9tWV+Rj2ciunUPgLJIHsSoTLjy2GUJSwtJeviR1Ro8Yx4ZWXmEEKzjaGt6HBWhsyla2d
         H9oZmSnGIeEZeaJ2gzGM6p61QcfD0D+4f6aStzRFg39OpsR6MwSDYOQoWLrNRvr1OOHg
         JvJSc+qHA6TgpaUO6tZ9W+/+49ExrzTxiUV5hfgj+WNX5pZBwPXfL0urgzalCcAIxuH6
         cwuZysVW/Ffn9QgWlp5We8J+ViYMeT4vrXSEdd47h7Luaj+M3AKy2BGEQohXvvyfw/ek
         X6Zg==
X-Gm-Message-State: AOAM531q5fsb7I3ilsTIGYTQQCvEmffep0xX3l3cORx3iF5cyulsuSCC
	yA32U5q4BfvstufFJyd93A/zAGCdaEaJyvGWyNMhYIXmI2K/iep0kcqZ0SeazjKsKEhbVuX9Zan
	sAwbWD5o7br9E8grhTWV8xCNCmLU=
X-Received: by 2002:a05:6402:17b6:: with SMTP id j22mr23112097edy.325.1614015978954;
        Mon, 22 Feb 2021 09:46:18 -0800 (PST)
X-Google-Smtp-Source: ABdhPJwDAve+ys1o3Ix/SlRK4csT4W1S2Ma59p+Xf1UMouPtJ1ZRfjAz9b8cjfXQB2d67f6WzKBREw==
X-Received: by 2002:a05:6402:17b6:: with SMTP id j22mr23112063edy.325.1614015978791;
        Mon, 22 Feb 2021 09:46:18 -0800 (PST)
Subject: Re: [PATCH v2 02/11] hw/boards: Introduce
 machine_class_valid_for_accelerator()
To: Cornelia Huck <cohuck@redhat.com>
Cc: qemu-devel@nongnu.org, Aurelien Jarno <aurelien@aurel32.net>,
 Peter Maydell <peter.maydell@linaro.org>,
 Anthony Perard <anthony.perard@citrix.com>, qemu-ppc@nongnu.org,
 qemu-s390x@nongnu.org, Halil Pasic <pasic@linux.ibm.com>,
 Huacai Chen <chenhuacai@kernel.org>, xen-devel@lists.xenproject.org,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 David Gibson <david@gibson.dropbear.id.au>, qemu-arm@nongnu.org,
 Stefano Stabellini <sstabellini@kernel.org>,
 Paolo Bonzini <pbonzini@redhat.com>, kvm@vger.kernel.org,
 BALATON Zoltan <balaton@eik.bme.hu>, Leif Lindholm <leif@nuviainc.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Radoslaw Biernacki <rad@semihalf.com>,
 Alistair Francis <alistair@alistair23.me>, Paul Durrant <paul@xen.org>,
 Eduardo Habkost <ehabkost@redhat.com>, "Michael S. Tsirkin"
 <mst@redhat.com>, Thomas Huth <thuth@redhat.com>,
 Jiaxun Yang <jiaxun.yang@flygoat.com>,
 =?UTF-8?Q?Herv=c3=a9_Poussineau?= <hpoussin@reactos.org>,
 Greg Kurz <groug@kaod.org>, Christian Borntraeger <borntraeger@de.ibm.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 David Hildenbrand <david@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <f4bug@amsat.org>
References: <20210219173847.2054123-1-philmd@redhat.com>
 <20210219173847.2054123-3-philmd@redhat.com>
 <20210222183400.0c151d46.cohuck@redhat.com>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>
Message-ID: <6ceff55c-6da4-e773-7809-de3be2f566ab@redhat.com>
Date: Mon, 22 Feb 2021 18:46:15 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <20210222183400.0c151d46.cohuck@redhat.com>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 2/22/21 6:34 PM, Cornelia Huck wrote:
> On Fri, 19 Feb 2021 18:38:38 +0100
> Philippe Mathieu-Daudé <philmd@redhat.com> wrote:
> 
>> Introduce the valid_accelerators[] field to express the list
>> of valid accelerators a machine can use, and add the
>> machine_class_valid_for_current_accelerator() and
>> machine_class_valid_for_accelerator() methods.
>>
>> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
>> ---
>>  include/hw/boards.h | 24 ++++++++++++++++++++++++
>>  hw/core/machine.c   | 26 ++++++++++++++++++++++++++
>>  2 files changed, 50 insertions(+)
>>
>> diff --git a/include/hw/boards.h b/include/hw/boards.h
>> index 68d3d10f6b0..4d08bc12093 100644
>> --- a/include/hw/boards.h
>> +++ b/include/hw/boards.h
>> @@ -36,6 +36,24 @@ void machine_set_cpu_numa_node(MachineState *machine,
>>                                 const CpuInstanceProperties *props,
>>                                 Error **errp);
>>  
>> +/**
>> + * machine_class_valid_for_accelerator:
>> + * @mc: the machine class
>> + * @acc_name: accelerator name
>> + *
>> + * Returns %true if the accelerator is valid for the machine, %false
>> + * otherwise. See #MachineClass.valid_accelerators.
> 
> Naming confusion: is the machine class valid for the accelerator, or
> the accelerator valid for the machine class? Or either? :)

"the accelerator valid for the machine class".

Is this clearer?

"Returns %true if the current accelerator is valid for the
 selected machine, %false otherwise."

Or...

"Returns %true if the selected accelerator is valid for the
 current machine, %false otherwise."

How would "either" look?

The machine is already selected, and the accelerator too...
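
For what it's worth, the predicate itself is direction-neutral: it only asks whether the machine's list contains the accelerator's name. A standalone model of the lookup in the patch — the types and data here are illustrative, not QEMU's real `MachineClass`:

```c
#include <stddef.h>
#include <string.h>
#include <strings.h>

/* Illustrative stand-in for MachineClass.valid_accelerators. */
typedef struct {
    const char *name;
    const char *const *valid_accelerators;  /* NULL => unrestricted */
} MachineClassModel;

/* "Is accelerator acc_name valid for machine mc?" -- matching the
 * rules in the patch: an unset list accepts everything, "qtest" is
 * always accepted, and the comparison is case-insensitive. */
static int valid_for_accelerator(const MachineClassModel *mc,
                                 const char *acc_name)
{
    const char *const *name = mc->valid_accelerators;

    if (!name) {
        return 1;
    }
    if (strcmp(acc_name, "qtest") == 0) {
        return 1;
    }
    for (unsigned i = 0; name[i]; i++) {
        if (strcasecmp(acc_name, name[i]) == 0) {
            return 1;
        }
    }
    return 0;
}
```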

> 
>> + */
>> +bool machine_class_valid_for_accelerator(MachineClass *mc, const char *acc_name);
>> +/**
>> + * machine_class_valid_for_current_accelerator:
>> + * @mc: the machine class
>> + *
>> + * Returns %true if the accelerator is valid for the current machine,
>> + * %false otherwise. See #MachineClass.valid_accelerators.
> 
> Same here: current accelerator vs. current machine.
> 
>> + */
>> +bool machine_class_valid_for_current_accelerator(MachineClass *mc);
>> +
>>  void machine_class_allow_dynamic_sysbus_dev(MachineClass *mc, const char *type);
>>  /*
>>   * Checks that backend isn't used, preps it for exclusive usage and
>> @@ -125,6 +143,11 @@ typedef struct {
>>   *    should instead use "unimplemented-device" for all memory ranges where
>>   *    the guest will attempt to probe for a device that QEMU doesn't
>>   *    implement and a stub device is required.
>> + * @valid_accelerators:
>> + *    If this machine supports a specific set of virtualization accelerators,
>> + *    this contains a NULL-terminated list of the accelerators that can be
>> + *    used. If this field is not set, any accelerator is valid. The QTest
>> + *    accelerator is always valid.
>>   * @kvm_type:
>>   *    Return the type of KVM corresponding to the kvm-type string option or
>>   *    computed based on other criteria such as the host kernel capabilities
>> @@ -166,6 +189,7 @@ struct MachineClass {
>>      const char *alias;
>>      const char *desc;
>>      const char *deprecation_reason;
>> +    const char *const *valid_accelerators;
>>  
>>      void (*init)(MachineState *state);
>>      void (*reset)(MachineState *state);
>> diff --git a/hw/core/machine.c b/hw/core/machine.c
>> index 970046f4388..c42d8e382b1 100644
>> --- a/hw/core/machine.c
>> +++ b/hw/core/machine.c
>> @@ -518,6 +518,32 @@ static void machine_set_nvdimm_persistence(Object *obj, const char *value,
>>      nvdimms_state->persistence_string = g_strdup(value);
>>  }
>>  
>> +bool machine_class_valid_for_accelerator(MachineClass *mc, const char *acc_name)
>> +{
>> +    const char *const *name = mc->valid_accelerators;
>> +
>> +    if (!name) {
>> +        return true;
>> +    }
>> +    if (strcmp(acc_name, "qtest") == 0) {
>> +        return true;
>> +    }
>> +
>> +    for (unsigned i = 0; name[i]; i++) {
>> +        if (strcasecmp(acc_name, name[i]) == 0) {
>> +            return true;
>> +        }
>> +    }
>> +    return false;
>> +}
>> +
>> +bool machine_class_valid_for_current_accelerator(MachineClass *mc)
>> +{
>> +    AccelClass *ac = ACCEL_GET_CLASS(current_accel());
>> +
>> +    return machine_class_valid_for_accelerator(mc, ac->name);
>> +}
> 
> The implementation of the function tests for the current accelerator,
> so I think you need to tweak the description above?
> 
>> +
>>  void machine_class_allow_dynamic_sysbus_dev(MachineClass *mc, const char *type)
>>  {
>>      QAPI_LIST_PREPEND(mc->allowed_dynamic_sysbus_devices, g_strdup(type));
> 



From xen-devel-bounces@lists.xenproject.org Mon Feb 22 17:46:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 17:46:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88313.165980 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEFID-0007fm-DM; Mon, 22 Feb 2021 17:46:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88313.165980; Mon, 22 Feb 2021 17:46:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEFID-0007fe-9U; Mon, 22 Feb 2021 17:46:45 +0000
Received: by outflank-mailman (input) for mailman id 88313;
 Mon, 22 Feb 2021 17:46:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cC4D=HY=redhat.com=cohuck@srs-us1.protection.inumbo.net>)
 id 1lEFIB-0007f2-V8
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 17:46:43 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id b3979496-2e51-40c7-899d-9e13db34dcc0;
 Mon, 22 Feb 2021 17:46:43 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-126-MITzUUf3Oqye6SI3VLeV4w-1; Mon, 22 Feb 2021 12:46:41 -0500
Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.phx2.redhat.com
 [10.5.11.15])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id BFE62C291;
 Mon, 22 Feb 2021 17:46:37 +0000 (UTC)
Received: from gondolin (ovpn-113-115.ams2.redhat.com [10.36.113.115])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 7CA0B5D6B1;
 Mon, 22 Feb 2021 17:46:23 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b3979496-2e51-40c7-899d-9e13db34dcc0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1614016003;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ap6wVunjnHQO5CinJJgI7vpNWx2oOiKw2tFNPevDbBQ=;
	b=hktoStxsYtrUntRoNr2igTcCuInFkkkhEbKLWbzp1SAclnmEoiZnGvDKNq6MHDLf7xC+TB
	DDrTspxmCwGlb5ecER1nUOR3Wp0GquQBq8UPSPX5dbTFdN3C3ctjuK9OIpHrSQyMx28ZY2
	KzBqZ8gnCQziAaW6MtGOFBvviyMPFyY=
X-MC-Unique: MITzUUf3Oqye6SI3VLeV4w-1
Date: Mon, 22 Feb 2021 18:46:20 +0100
From: Cornelia Huck <cohuck@redhat.com>
To: Philippe =?UTF-8?B?TWF0aGlldS1EYXVkw6k=?= <philmd@redhat.com>
Cc: qemu-devel@nongnu.org, Aurelien Jarno <aurelien@aurel32.net>, Peter
 Maydell <peter.maydell@linaro.org>, Anthony Perard
 <anthony.perard@citrix.com>, qemu-ppc@nongnu.org, qemu-s390x@nongnu.org,
 Halil Pasic <pasic@linux.ibm.com>, Huacai Chen <chenhuacai@kernel.org>,
 xen-devel@lists.xenproject.org, Marcel Apfelbaum
 <marcel.apfelbaum@gmail.com>, David Gibson <david@gibson.dropbear.id.au>,
 qemu-arm@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>, Paolo
 Bonzini <pbonzini@redhat.com>, kvm@vger.kernel.org, BALATON Zoltan
 <balaton@eik.bme.hu>, Leif Lindholm <leif@nuviainc.com>, Richard Henderson
 <richard.henderson@linaro.org>, Radoslaw Biernacki <rad@semihalf.com>,
 Alistair Francis <alistair@alistair23.me>, Paul Durrant <paul@xen.org>,
 Eduardo Habkost <ehabkost@redhat.com>, "Michael S. Tsirkin"
 <mst@redhat.com>, Thomas Huth <thuth@redhat.com>, Jiaxun Yang
 <jiaxun.yang@flygoat.com>, =?UTF-8?B?SGVydsOp?= Poussineau
 <hpoussin@reactos.org>, Greg Kurz <groug@kaod.org>, Christian Borntraeger
 <borntraeger@de.ibm.com>, "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 David Hildenbrand <david@redhat.com>, Mark Cave-Ayland
 <mark.cave-ayland@ilande.co.uk>, Aleksandar Rikalo
 <aleksandar.rikalo@syrmia.com>, Philippe =?UTF-8?B?TWF0aGlldS1EYXVkw6k=?=
 <f4bug@amsat.org>
Subject: Re: [PATCH v2 11/11] softmmu/vl: Exit gracefully when accelerator
 is not supported
Message-ID: <20210222184620.57119057.cohuck@redhat.com>
In-Reply-To: <20210219173847.2054123-12-philmd@redhat.com>
References: <20210219173847.2054123-1-philmd@redhat.com>
	<20210219173847.2054123-12-philmd@redhat.com>
Organization: Red Hat GmbH
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.15

On Fri, 19 Feb 2021 18:38:47 +0100
Philippe Mathieu-Daudé <philmd@redhat.com> wrote:

> Before configuring an accelerator, check it is valid for
> the current machine. Doing so we can return a simple error
> message instead of criptic one.

s/criptic/cryptic/

>
> Before:
>
>   $ qemu-system-arm -M raspi2b -enable-kvm
>   qemu-system-arm: /build/qemu-ETIdrs/qemu-4.2/exec.c:865: cpu_address_space_init: Assertion `asidx == 0 || !kvm_enabled()' failed.
>   Aborted
>
>   $ qemu-system-aarch64 -M xlnx-zcu102 -enable-kvm -smp 6
>   qemu-system-aarch64: kvm_init_vcpu: kvm_arch_init_vcpu failed (0): Invalid argument
>
> After:
>
>   $ qemu-system-arm -M raspi2b -enable-kvm
>   qemu-system-aarch64: invalid accelerator 'kvm' for machine raspi2b
>
>   $ qemu-system-aarch64 -M xlnx-zcu102 -enable-kvm -smp 6
>   qemu-system-aarch64: -accel kvm: invalid accelerator 'kvm' for machine xlnx-zcu102
>
> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> ---
>  softmmu/vl.c | 7 +++++++
>  1 file changed, 7 insertions(+)
>
> diff --git a/softmmu/vl.c b/softmmu/vl.c
> index b219ce1f357..f2c4074310b 100644
> --- a/softmmu/vl.c
> +++ b/softmmu/vl.c
> @@ -2133,6 +2133,7 @@ static int do_configure_accelerator(void *opaque, QemuOpts *opts, Error **errp)
>      const char *acc = qemu_opt_get(opts, "accel");
>      AccelClass *ac = accel_find(acc);
>      AccelState *accel;
> +    MachineClass *mc;
>      int ret;
>      bool qtest_with_kvm;
>
> @@ -2145,6 +2146,12 @@ static int do_configure_accelerator(void *opaque, QemuOpts *opts, Error **errp)
>          }
>          return 0;
>      }
> +    mc = MACHINE_GET_CLASS(current_machine);
> +    if (!qtest_chrdev && !machine_class_valid_for_accelerator(mc, ac->name)) {

Shouldn't qtest be already allowed in any case in the checking function?

> +        *p_init_failed = true;
> +        error_report("invalid accelerator '%s' for machine %s", acc, mc->name);
> +        return 0;
> +    }
>      accel = ACCEL(object_new_with_class(OBJECT_CLASS(ac)));
>      object_apply_compat_props(OBJECT(accel));
>      qemu_opt_foreach(opts, accelerator_set_property,



From xen-devel-bounces@lists.xenproject.org Mon Feb 22 17:51:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 17:51:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88322.165994 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEFMU-0000Dn-0A; Mon, 22 Feb 2021 17:51:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88322.165994; Mon, 22 Feb 2021 17:51:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEFMT-0000Dg-TO; Mon, 22 Feb 2021 17:51:09 +0000
Received: by outflank-mailman (input) for mailman id 88322;
 Mon, 22 Feb 2021 17:51:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cC4D=HY=redhat.com=cohuck@srs-us1.protection.inumbo.net>)
 id 1lEFMS-0000Db-As
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 17:51:08 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 694c1d3e-c040-4445-a358-f4141dc3c2f5;
 Mon, 22 Feb 2021 17:51:07 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-441-BUheIuS7NaiFmeoxrS7BDQ-1; Mon, 22 Feb 2021 12:51:05 -0500
Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.phx2.redhat.com
 [10.5.11.22])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 9D0F2801965;
 Mon, 22 Feb 2021 17:51:01 +0000 (UTC)
Received: from gondolin (ovpn-113-115.ams2.redhat.com [10.36.113.115])
 by smtp.corp.redhat.com (Postfix) with ESMTP id DE0141001281;
 Mon, 22 Feb 2021 17:50:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 694c1d3e-c040-4445-a358-f4141dc3c2f5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1614016267;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=O+4avbp3iRC9wzL9kQsBbVb6oEaX3FYxbdrqhd4FEnE=;
	b=U3s/yYlTK3/NzWSID/Irp/EeEw6YuWOdsdJTXTLCJ8nsgHSo4pvlixJo4etvdVPHOUqFrL
	zGEEF+XtvTJJOk9ixReD7FiSM3dA1dmBW35ngo3OaynS3uXB5PTgT/rfvgd0DlhVix8ylT
	1AeoFI6ke7YmaHq9oMB66DHMKqKUFB0=
X-MC-Unique: BUheIuS7NaiFmeoxrS7BDQ-1
Date: Mon, 22 Feb 2021 18:50:44 +0100
From: Cornelia Huck <cohuck@redhat.com>
To: Philippe =?UTF-8?B?TWF0aGlldS1EYXVkw6k=?= <philmd@redhat.com>
Cc: qemu-devel@nongnu.org, Aurelien Jarno <aurelien@aurel32.net>, Peter
 Maydell <peter.maydell@linaro.org>, Anthony Perard
 <anthony.perard@citrix.com>, qemu-ppc@nongnu.org, qemu-s390x@nongnu.org,
 Halil Pasic <pasic@linux.ibm.com>, Huacai Chen <chenhuacai@kernel.org>,
 xen-devel@lists.xenproject.org, Marcel Apfelbaum
 <marcel.apfelbaum@gmail.com>, David Gibson <david@gibson.dropbear.id.au>,
 qemu-arm@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>, Paolo
 Bonzini <pbonzini@redhat.com>, kvm@vger.kernel.org, BALATON Zoltan
 <balaton@eik.bme.hu>, Leif Lindholm <leif@nuviainc.com>, Richard Henderson
 <richard.henderson@linaro.org>, Radoslaw Biernacki <rad@semihalf.com>,
 Alistair Francis <alistair@alistair23.me>, Paul Durrant <paul@xen.org>,
 Eduardo Habkost <ehabkost@redhat.com>, "Michael S. Tsirkin"
 <mst@redhat.com>, Thomas Huth <thuth@redhat.com>, Jiaxun Yang
 <jiaxun.yang@flygoat.com>, =?UTF-8?B?SGVydsOp?= Poussineau
 <hpoussin@reactos.org>, Greg Kurz <groug@kaod.org>, Christian Borntraeger
 <borntraeger@de.ibm.com>, "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 David Hildenbrand <david@redhat.com>, Mark Cave-Ayland
 <mark.cave-ayland@ilande.co.uk>, Aleksandar Rikalo
 <aleksandar.rikalo@syrmia.com>, Philippe =?UTF-8?B?TWF0aGlldS1EYXVkw6k=?=
 <f4bug@amsat.org>
Subject: Re: [PATCH v2 01/11] accel/kvm: Check MachineClass kvm_type()
 return value
Message-ID: <20210222185044.23fccecc.cohuck@redhat.com>
In-Reply-To: <bc37276d-74cc-22f0-fcc0-4ee5e62cf1df@redhat.com>
References: <20210219173847.2054123-1-philmd@redhat.com>
	<20210219173847.2054123-2-philmd@redhat.com>
	<20210222182405.3e6e9a6f.cohuck@redhat.com>
	<bc37276d-74cc-22f0-fcc0-4ee5e62cf1df@redhat.com>
Organization: Red Hat GmbH
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.22

On Mon, 22 Feb 2021 18:41:07 +0100
Philippe Mathieu-Daudé <philmd@redhat.com> wrote:

> On 2/22/21 6:24 PM, Cornelia Huck wrote:
> > On Fri, 19 Feb 2021 18:38:37 +0100
> > Philippe Mathieu-Daudé <philmd@redhat.com> wrote:
> >
> >> MachineClass::kvm_type() can return -1 on failure.
> >> Document it, and add a check in kvm_init().
> >>
> >> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> >> ---
> >>  include/hw/boards.h | 3 ++-
> >>  accel/kvm/kvm-all.c | 6 ++++++
> >>  2 files changed, 8 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/include/hw/boards.h b/include/hw/boards.h
> >> index a46dfe5d1a6..68d3d10f6b0 100644
> >> --- a/include/hw/boards.h
> >> +++ b/include/hw/boards.h
> >> @@ -127,7 +127,8 @@ typedef struct {
> >>   *    implement and a stub device is required.
> >>   * @kvm_type:
> >>   *    Return the type of KVM corresponding to the kvm-type string option or
> >> - *    computed based on other criteria such as the host kernel capabilities.
> >> + *    computed based on other criteria such as the host kernel capabilities
> >> + *    (which can't be negative), or -1 on error.
> >>   * @numa_mem_supported:
> >>   *    true if '--numa node.mem' option is supported and false otherwise
> >>   * @smp_parse:
> >> diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
> >> index 84c943fcdb2..b069938d881 100644
> >> --- a/accel/kvm/kvm-all.c
> >> +++ b/accel/kvm/kvm-all.c
> >> @@ -2057,6 +2057,12 @@ static int kvm_init(MachineState *ms)
> >>                                                              "kvm-type",
> >>                                                              &error_abort);
> >>          type = mc->kvm_type(ms, kvm_type);
> >> +        if (type < 0) {
> >> +            ret = -EINVAL;
> >> +            fprintf(stderr, "Failed to detect kvm-type for machine '%s'\n",
> >> +                    mc->name);
> >> +            goto err;
> >> +        }
> >>      }
> >>
> >>      do {
> >
> > No objection to this patch; but I'm wondering why some non-pseries
> > machines implement the kvm_type callback, when I see the kvm-type
> > property only for pseries? Am I holding my git grep wrong?
>
> Can it be what David commented here?
> https://www.mail-archive.com/qemu-devel@nongnu.org/msg784508.html
>

Ok, I might be confused about the other ppc machines; but I'm wondering
about the kvm_type callback for mips and arm/virt. Maybe I'm just
confused by the whole mechanism?
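
Whatever the per-board motivations, the contract the patch documents is simply "non-negative VM type on success, -1 on failure, and the caller must check". A toy model of that contract — the string-to-type mapping below is invented for illustration and does not reproduce any real board's kvm-type values:

```c
#include <errno.h>
#include <stddef.h>
#include <string.h>

/* Invented mapping for illustration: resolve a kvm-type option string
 * to a VM type number, or -1 when the string is unknown -- the error
 * convention documented for MachineClass::kvm_type(). */
static int kvm_type_from_option(const char *kvm_type)
{
    if (!kvm_type || strcmp(kvm_type, "auto") == 0) {
        return 0;
    }
    if (strcmp(kvm_type, "hv") == 0) {
        return 1;
    }
    if (strcmp(kvm_type, "pr") == 0) {
        return 2;
    }
    return -1;  /* unknown kvm-type string */
}

/* Caller-side check mirroring the hunk added to kvm_init(): fail
 * early with -EINVAL instead of passing -1 on to the kernel. */
static int init_vm_type(const char *kvm_type_opt)
{
    int type = kvm_type_from_option(kvm_type_opt);

    if (type < 0) {
        return -EINVAL;
    }
    return type;
}
```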



From xen-devel-bounces@lists.xenproject.org Mon Feb 22 18:00:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 18:00:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88330.166010 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEFV7-0001Aw-V1; Mon, 22 Feb 2021 18:00:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88330.166010; Mon, 22 Feb 2021 18:00:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEFV7-0001AL-Rz; Mon, 22 Feb 2021 18:00:05 +0000
Received: by outflank-mailman (input) for mailman id 88330;
 Mon, 22 Feb 2021 18:00:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cC4D=HY=redhat.com=cohuck@srs-us1.protection.inumbo.net>)
 id 1lEFV6-00011V-S0
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 18:00:04 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id f41cb476-ea6e-482d-9015-37ffb16e0d44;
 Mon, 22 Feb 2021 18:00:04 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-113-x6AyWPnMMK6ocNz1yohAhA-1; Mon, 22 Feb 2021 13:00:02 -0500
Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.phx2.redhat.com
 [10.5.11.13])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id CAB31192CC57;
 Mon, 22 Feb 2021 17:59:56 +0000 (UTC)
Received: from gondolin (ovpn-113-115.ams2.redhat.com [10.36.113.115])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 90F5F60DA0;
 Mon, 22 Feb 2021 17:59:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f41cb476-ea6e-482d-9015-37ffb16e0d44
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1614016804;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=g0daaZIiL7zB1dNQnlGOdmrK5Ct4nJWAQKVeDV4S+9U=;
	b=SxAvLQEgIQg4AF43XkIExc4cZPw1W0kFjn25idJ41GmCzCFGeNIKelKJtCCKHnHXT8l8fM
	raDcsdZXnT3ihpQ+y0m5xdVmdGqpQdxpf9vv8yJHoKnd52CjRlog/vOomjy6wOoOdpqTEZ
	C7C2mLthoiCOD1Ddm9yxOwe2VJTujzY=
X-MC-Unique: x6AyWPnMMK6ocNz1yohAhA-1
Date: Mon, 22 Feb 2021 18:59:30 +0100
From: Cornelia Huck <cohuck@redhat.com>
To: Philippe =?UTF-8?B?TWF0aGlldS1EYXVkw6k=?= <philmd@redhat.com>
Cc: qemu-devel@nongnu.org, Aurelien Jarno <aurelien@aurel32.net>, Peter
 Maydell <peter.maydell@linaro.org>, Anthony Perard
 <anthony.perard@citrix.com>, qemu-ppc@nongnu.org, qemu-s390x@nongnu.org,
 Halil Pasic <pasic@linux.ibm.com>, Huacai Chen <chenhuacai@kernel.org>,
 xen-devel@lists.xenproject.org, Marcel Apfelbaum
 <marcel.apfelbaum@gmail.com>, David Gibson <david@gibson.dropbear.id.au>,
 qemu-arm@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>, Paolo
 Bonzini <pbonzini@redhat.com>, kvm@vger.kernel.org, BALATON Zoltan
 <balaton@eik.bme.hu>, Leif Lindholm <leif@nuviainc.com>, Richard Henderson
 <richard.henderson@linaro.org>, Radoslaw Biernacki <rad@semihalf.com>,
 Alistair Francis <alistair@alistair23.me>, Paul Durrant <paul@xen.org>,
 Eduardo Habkost <ehabkost@redhat.com>, "Michael S. Tsirkin"
 <mst@redhat.com>, Thomas Huth <thuth@redhat.com>, Jiaxun Yang
 <jiaxun.yang@flygoat.com>, =?UTF-8?B?SGVydsOp?= Poussineau
 <hpoussin@reactos.org>, Greg Kurz <groug@kaod.org>, Christian Borntraeger
 <borntraeger@de.ibm.com>, "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 David Hildenbrand <david@redhat.com>, Mark Cave-Ayland
 <mark.cave-ayland@ilande.co.uk>, Aleksandar Rikalo
 <aleksandar.rikalo@syrmia.com>, Philippe =?UTF-8?B?TWF0aGlldS1EYXVkw6k=?=
 <f4bug@amsat.org>
Subject: Re: [PATCH v2 02/11] hw/boards: Introduce
 machine_class_valid_for_accelerator()
Message-ID: <20210222185930.4c08cb69.cohuck@redhat.com>
In-Reply-To: <6ceff55c-6da4-e773-7809-de3be2f566ab@redhat.com>
References: <20210219173847.2054123-1-philmd@redhat.com>
	<20210219173847.2054123-3-philmd@redhat.com>
	<20210222183400.0c151d46.cohuck@redhat.com>
	<6ceff55c-6da4-e773-7809-de3be2f566ab@redhat.com>
Organization: Red Hat GmbH
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.13

On Mon, 22 Feb 2021 18:46:15 +0100
Philippe Mathieu-Daudé <philmd@redhat.com> wrote:

> On 2/22/21 6:34 PM, Cornelia Huck wrote:
> > On Fri, 19 Feb 2021 18:38:38 +0100
> > Philippe Mathieu-Daudé <philmd@redhat.com> wrote:
> >
> >> Introduce the valid_accelerators[] field to express the list
> >> of valid accelerators a machine can use, and add the
> >> machine_class_valid_for_current_accelerator() and
> >> machine_class_valid_for_accelerator() methods.
> >>
> >> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> >> ---
> >>  include/hw/boards.h | 24 ++++++++++++++++++++++++
> >>  hw/core/machine.c   | 26 ++++++++++++++++++++++++++
> >>  2 files changed, 50 insertions(+)
> >>
> >> diff --git a/include/hw/boards.h b/include/hw/boards.h
> >> index 68d3d10f6b0..4d08bc12093 100644
> >> --- a/include/hw/boards.h
> >> +++ b/include/hw/boards.h
> >> @@ -36,6 +36,24 @@ void machine_set_cpu_numa_node(MachineState *machine,
> >>                                 const CpuInstanceProperties *props,
> >>                                 Error **errp);
> >>
> >> +/**
> >> + * machine_class_valid_for_accelerator:
> >> + * @mc: the machine class
> >> + * @acc_name: accelerator name
> >> + *
> >> + * Returns %true if the accelerator is valid for the machine, %false
> >> + * otherwise. See #MachineClass.valid_accelerators.
> >
> > Naming confusion: is the machine class valid for the accelerator, or
> > the accelerator valid for the machine class? Or either? :)
>
> "the accelerator valid for the machine class".
>
> Is this clearer?
>
> "Returns %true if the current accelerator is valid for the
>  selected machine, %false otherwise.
>
> Or...
>
> "Returns %true if the selected accelerator is valid for the
>  current machine, %false otherwise.

Maybe that one, given how it ends up being called? Or "specified
machine"?
>
> How would "either" look?
>
> The machine is already selected, and the accelerator too...

Yes, so this is basically testing the (machine,accelerator) tuple,
which is what I meant with 'either'.
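
For illustration, the (machine, accelerator) check being discussed could be sketched as a lookup of the accelerator name in a NULL-terminated valid_accelerators[] list. This is a simplified stand-in built from the field name in the patch, not QEMU's actual MachineClass or implementation:

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Simplified stand-in for MachineClass: only the field under discussion. */
typedef struct {
    const char *name;
    /* NULL-terminated list of accelerator names; NULL pointer means "any". */
    const char *const *valid_accelerators;
} FakeMachineClass;

/* Returns true if acc_name is valid for mc (hypothetical logic). */
bool machine_class_valid_for_accelerator(const FakeMachineClass *mc,
                                         const char *acc_name)
{
    if (!mc->valid_accelerators) {
        return true; /* no restriction declared: any accelerator is valid */
    }
    for (const char *const *acc = mc->valid_accelerators; *acc; acc++) {
        if (strcmp(*acc, acc_name) == 0) {
            return true;
        }
    }
    return false;
}
```

In this reading, the function tests the (machine, accelerator) tuple, matching the "either" interpretation above.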

>
> >
> >> + */
> >> +bool machine_class_valid_for_accelerator(MachineClass *mc, const char *acc_name);
> >> +/**
> >> + * machine_class_valid_for_current_accelerator:
> >> + * @mc: the machine class
> >> + *
> >> + * Returns %true if the accelerator is valid for the current machine,
> >> + * %false otherwise. See #MachineClass.valid_accelerators. =20
> >=20
> > Same here: current accelerator vs. current machine.

So maybe

"Returns %true if the current accelerator is valid for the specified
machine class..." ?

> >  =20
> >> + */
> >> +bool machine_class_valid_for_current_accelerator(MachineClass *mc);



From xen-devel-bounces@lists.xenproject.org Mon Feb 22 18:04:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 18:04:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88334.166023 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEFZa-0001XH-Gv; Mon, 22 Feb 2021 18:04:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88334.166023; Mon, 22 Feb 2021 18:04:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEFZa-0001XA-Dz; Mon, 22 Feb 2021 18:04:42 +0000
Received: by outflank-mailman (input) for mailman id 88334;
 Mon, 22 Feb 2021 18:04:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=V+0I=HY=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1lEFZZ-0001X5-UN
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 18:04:41 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 2dcd7357-e522-4cae-b78d-1df92cad2614;
 Mon, 22 Feb 2021 18:04:40 +0000 (UTC)
Received: from mail-ed1-f72.google.com (mail-ed1-f72.google.com
 [209.85.208.72]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-108-az0L5RqbNS6dy3SwhF9Kng-1; Mon, 22 Feb 2021 13:04:37 -0500
Received: by mail-ed1-f72.google.com with SMTP id g20so4874722edy.7
 for <xen-devel@lists.xenproject.org>; Mon, 22 Feb 2021 10:04:37 -0800 (PST)
Received: from [192.168.1.36] (68.red-83-57-175.dynamicip.rima-tde.net.
 [83.57.175.68])
 by smtp.gmail.com with ESMTPSA id b27sm7825403eja.64.2021.02.22.10.04.33
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 22 Feb 2021 10:04:35 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2dcd7357-e522-4cae-b78d-1df92cad2614
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1614017080;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=SSsnX5SjnKwxu3rr90T4nBZzi1MZ/dtXAKPK6GY/CHA=;
	b=JYQXspFD0XNylrg8ANJu0NJ9V/mNSHFBXTZuEEA6VnMco8/dKpvb0jREOo7vXGeEVfkDCB
	ZsifBcfUwnq8TVyaKkhOeW3GC3H2AMxlW2vMHLoL+nqGxdex99ATP7Q6RW+1ntOEIyG3wV
	nEcF6SeVNbcZGaE8D2wXd0s3IU2gkvg=
X-MC-Unique: az0L5RqbNS6dy3SwhF9Kng-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=SSsnX5SjnKwxu3rr90T4nBZzi1MZ/dtXAKPK6GY/CHA=;
        b=mbmIpU48MGd/1pAIl4CODRFPKe+SjVzzaC+yjf0tyXZKriZUP6nWwEOBNa7w2z3eDI
         z3CeWxP/6GDr8OP+r3Ziz2AA6AT9nmmrwiNF89ltINzfxV0b+vEfVQAuY94zXoxYZQ5g
         9k1ZDUG+aLgJ9NsOPSY6Vdt21dnN5YKl0X8MI7RyZ1YBH3gvdxXUT7th9HFPeUeSSsuP
         qaROsowGq+MQmRVnJtK6H5pqhjqfZR+N61mrdfhYRZnQ3DISLodr6weO8k9NfGbRbLuv
         j3c26+M8jEIW5aXDV+dFxtj+8SkLN1Yrvg+P+01tufCE09x46EUgAFGpXyq3tREjKuWK
         +V/g==
X-Gm-Message-State: AOAM532AndEranoKInmATOsc5kAhIts/Hx8WAtRFxiUtx9vC3xRkKbF0
	o2BK+FTSRyqFsAgn20qjQCthpWAlI4LXaWwoiISA9uacrG9scyv7McoMy/IdfMbT3y0+G9zMPNj
	DAACBv74hAyx+TQB6Sd8nvM7Ak3k=
X-Received: by 2002:a17:906:7e42:: with SMTP id z2mr21995947ejr.177.1614017076008;
        Mon, 22 Feb 2021 10:04:36 -0800 (PST)
X-Google-Smtp-Source: ABdhPJz/JC++82lk2ObRrXbPMXgvjOy+aii8aMRtQqY2wAzax0ZKP5G6jLvuYpnN+D+qmV72Wu/wKA==
X-Received: by 2002:a17:906:7e42:: with SMTP id z2mr21995909ejr.177.1614017075776;
        Mon, 22 Feb 2021 10:04:35 -0800 (PST)
Subject: Re: [PATCH v2 01/11] accel/kvm: Check MachineClass kvm_type() return
 value
To: Cornelia Huck <cohuck@redhat.com>
Cc: qemu-devel@nongnu.org, Aurelien Jarno <aurelien@aurel32.net>,
 Peter Maydell <peter.maydell@linaro.org>,
 Anthony Perard <anthony.perard@citrix.com>, qemu-ppc@nongnu.org,
 qemu-s390x@nongnu.org, Halil Pasic <pasic@linux.ibm.com>,
 Huacai Chen <chenhuacai@kernel.org>, xen-devel@lists.xenproject.org,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 David Gibson <david@gibson.dropbear.id.au>, qemu-arm@nongnu.org,
 Stefano Stabellini <sstabellini@kernel.org>,
 Paolo Bonzini <pbonzini@redhat.com>, kvm@vger.kernel.org,
 BALATON Zoltan <balaton@eik.bme.hu>, Leif Lindholm <leif@nuviainc.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Radoslaw Biernacki <rad@semihalf.com>,
 Alistair Francis <alistair@alistair23.me>, Paul Durrant <paul@xen.org>,
 Eduardo Habkost <ehabkost@redhat.com>, "Michael S. Tsirkin"
 <mst@redhat.com>, Thomas Huth <thuth@redhat.com>,
 Jiaxun Yang <jiaxun.yang@flygoat.com>,
 =?UTF-8?Q?Herv=c3=a9_Poussineau?= <hpoussin@reactos.org>,
 Greg Kurz <groug@kaod.org>, Christian Borntraeger <borntraeger@de.ibm.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 David Hildenbrand <david@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <f4bug@amsat.org>
References: <20210219173847.2054123-1-philmd@redhat.com>
 <20210219173847.2054123-2-philmd@redhat.com>
 <20210222182405.3e6e9a6f.cohuck@redhat.com>
 <bc37276d-74cc-22f0-fcc0-4ee5e62cf1df@redhat.com>
 <20210222185044.23fccecc.cohuck@redhat.com>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>
Message-ID: <0f2a252e-34d8-6714-d1fd-d4e3764feef7@redhat.com>
Date: Mon, 22 Feb 2021 19:04:32 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <20210222185044.23fccecc.cohuck@redhat.com>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 2/22/21 6:50 PM, Cornelia Huck wrote:
> On Mon, 22 Feb 2021 18:41:07 +0100
> Philippe Mathieu-Daudé <philmd@redhat.com> wrote:
> 
>> On 2/22/21 6:24 PM, Cornelia Huck wrote:
>>> On Fri, 19 Feb 2021 18:38:37 +0100
>>> Philippe Mathieu-Daudé <philmd@redhat.com> wrote:
>>>   
>>>> MachineClass::kvm_type() can return -1 on failure.
>>>> Document it, and add a check in kvm_init().
>>>>
>>>> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
>>>> ---
>>>>  include/hw/boards.h | 3 ++-
>>>>  accel/kvm/kvm-all.c | 6 ++++++
>>>>  2 files changed, 8 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/include/hw/boards.h b/include/hw/boards.h
>>>> index a46dfe5d1a6..68d3d10f6b0 100644
>>>> --- a/include/hw/boards.h
>>>> +++ b/include/hw/boards.h
>>>> @@ -127,7 +127,8 @@ typedef struct {
>>>>   *    implement and a stub device is required.
>>>>   * @kvm_type:
>>>>   *    Return the type of KVM corresponding to the kvm-type string option or
>>>> - *    computed based on other criteria such as the host kernel capabilities.
>>>> + *    computed based on other criteria such as the host kernel capabilities
>>>> + *    (which can't be negative), or -1 on error.
>>>>   * @numa_mem_supported:
>>>>   *    true if '--numa node.mem' option is supported and false otherwise
>>>>   * @smp_parse:
>>>> diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
>>>> index 84c943fcdb2..b069938d881 100644
>>>> --- a/accel/kvm/kvm-all.c
>>>> +++ b/accel/kvm/kvm-all.c
>>>> @@ -2057,6 +2057,12 @@ static int kvm_init(MachineState *ms)
>>>>                                                              "kvm-type",
>>>>                                                              &error_abort);
>>>>          type = mc->kvm_type(ms, kvm_type);
>>>> +        if (type < 0) {
>>>> +            ret = -EINVAL;
>>>> +            fprintf(stderr, "Failed to detect kvm-type for machine '%s'\n",
>>>> +                    mc->name);
>>>> +            goto err;
>>>> +        }
>>>>      }
>>>>  
>>>>      do {  
>>>
>>> No objection to this patch; but I'm wondering why some non-pseries
>>> machines implement the kvm_type callback, when I see the kvm-type
>>> property only for pseries? Am I holding my git grep wrong?  
>>
>> Can it be what David commented here?
>> https://www.mail-archive.com/qemu-devel@nongnu.org/msg784508.html
>>
> 
> Ok, I might be confused about the other ppc machines; but I'm wondering
> about the kvm_type callback for mips and arm/virt. Maybe I'm just
> confused by the whole mechanism?

For MIPS see https://www.linux-kvm.org/images/f/f2/01x08a-MIPS.pdf
and Jiaxun comment here:
https://lore.kernel.org/linux-mips/a2a2cfe3-5618-43b1-a6a4-cc768fc1b9fb@www.fastmail.com/

TE KVM: Trap-and-Emulate guest kernel
VZ KVM: HW virtualized

For "the whole mechanism" I'll defer to Paolo =)
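
The failure mode the patch guards against can be sketched with a toy kvm_type hook that maps the "kvm-type" option string to a non-negative VM type, returning -1 for an unknown value (toy names and mapping, not QEMU's real per-machine callbacks):

```c
#include <string.h>

/* Toy stand-in for a machine's kvm_type callback: maps the "kvm-type"
 * option string to a non-negative KVM VM type, or -1 on error. The
 * TE/VZ names follow the MIPS discussion above. */
int toy_kvm_type(const char *kvm_type)
{
    if (!kvm_type || strcmp(kvm_type, "TE") == 0) {
        return 0; /* trap-and-emulate (default when no option given) */
    }
    if (strcmp(kvm_type, "VZ") == 0) {
        return 1; /* hardware virtualization */
    }
    return -1; /* unknown type: kvm_init() must check for this */
}
```

With the check added in the patch, a negative return here makes kvm_init() fail with -EINVAL instead of passing a bogus type to KVM_CREATE_VM.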



From xen-devel-bounces@lists.xenproject.org Mon Feb 22 18:07:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 18:07:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88337.166035 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEFc9-0001gb-Vg; Mon, 22 Feb 2021 18:07:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88337.166035; Mon, 22 Feb 2021 18:07:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEFc9-0001gU-ST; Mon, 22 Feb 2021 18:07:21 +0000
Received: by outflank-mailman (input) for mailman id 88337;
 Mon, 22 Feb 2021 18:07:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DTf3=HY=gmail.com=bobbyeshleman@srs-us1.protection.inumbo.net>)
 id 1lEFc8-0001gP-6t
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 18:07:20 +0000
Received: from mail-pj1-x1031.google.com (unknown [2607:f8b0:4864:20::1031])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9c56a066-11b3-4420-a818-37e6be53eb9b;
 Mon, 22 Feb 2021 18:07:19 +0000 (UTC)
Received: by mail-pj1-x1031.google.com with SMTP id d2so74145pjs.4
 for <xen-devel@lists.xenproject.org>; Mon, 22 Feb 2021 10:07:19 -0800 (PST)
Received: from ?IPv6:2601:1c2:4f80:d230::5? ([2601:1c2:4f80:d230::5])
 by smtp.gmail.com with ESMTPSA id ga17sm81844pjb.7.2021.02.22.10.07.11
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 22 Feb 2021 10:07:18 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9c56a066-11b3-4420-a818-37e6be53eb9b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=33j1yZ2ahtKC/KNwknKm+XfvSDIpc6r4Koeq0CEeQqY=;
        b=EXkDaw/IP2+37adqcu0nnIfJjt1XnNKSu9friC7LZnR3rO5KAPRwSuEUe+8Rs0d2CG
         4ZzoCnlFCpJvGfpZ7eCS4p788sPXS2fafzejytEkQATJCEIGKi9Wep1u4BEEXQ3QgMh1
         A90yrxZG98vrUKsA2X4+vFJAEGZPcxLuDvywzdHMheYRNrkPngrJ/5pgptmp4XHXYSx1
         sFKox9RbmYkGn2fInBT2zapB5loc1A+j6A1bynOSS5rCuyz5KPyGZ6AAEHfpws75u/iq
         LIjGD7L6lxr725OwU/KdK/x4S3T7dWQowrYatCngaEb/owJC99Q0b+B6zEYUFWdmbGou
         WmpA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=33j1yZ2ahtKC/KNwknKm+XfvSDIpc6r4Koeq0CEeQqY=;
        b=DkEOh562BaAlmWID1dqd2GJCcEzqR7/QX81VmLdKQZv5RoeMoevkZK2zaZQ9xLqkZX
         xNZ1dlpZZdsFY9z8rf3hrhzQ1VZRPQTVpipv6EkLJv3bGJuhN5wly80tR/ZoAT/HDm6P
         1vhgeMRitiyPujudYDKKSlzUEwjpekgB4rPHhe3xE3axZ+jkPO+g+6EDYX1fXn51vkxd
         nR9PGxbl+WPOFbKkT25KSM0By0PqnPFHozXOTWH6lg72JYsUZH1D5/iuiTz84AXiBNr2
         wUCZmUj6vhIGSZ6e+/cFQcKBO8McEOBSf6p4cRqnolFkRorazONDygTadwIb4LgJ/1v1
         WKew==
X-Gm-Message-State: AOAM530bSFOll6N+9Ml2ijyMVO/eCKvf1G5gD8OZtYYhgmZKUza5Q4VG
	hGE52KsHU1w/N5xtk3Xc2js=
X-Google-Smtp-Source: ABdhPJwflyeD5qZuFjKo+t39goJJO/FOLErRnI52zJ4do+cWqYMU3VPKFDtK0qRdmT56Fav8AFjYFQ==
X-Received: by 2002:a17:902:8204:b029:e3:b425:762e with SMTP id x4-20020a1709028204b02900e3b425762emr20548485pln.13.1614017238855;
        Mon, 22 Feb 2021 10:07:18 -0800 (PST)
Subject: Re: [PATCH v3 0/5] Support Secure Boot for multiboot2 Xen
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Daniel Kiper <daniel.kiper@oracle.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Olivier Lambert <olivier.lambert@vates.fr>
References: <cover.1611273359.git.bobbyeshleman@gmail.com>
From: Bobby Eshleman <bobbyeshleman@gmail.com>
Message-ID: <9a58bdf7-3a34-1b81-aec9-b14da463d75e@gmail.com>
Date: Mon, 22 Feb 2021 10:04:17 -0800
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
MIME-Version: 1.0
In-Reply-To: <cover.1611273359.git.bobbyeshleman@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Hey all,

I just wanted to request more feedback on this series and keep it on the
radar, while acknowledging that, given the recent code freeze, it is
surely a busy time for everybody.

Best,
Bob


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 18:09:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 18:09:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88341.166047 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEFeE-0001tq-Fb; Mon, 22 Feb 2021 18:09:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88341.166047; Mon, 22 Feb 2021 18:09:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEFeE-0001tj-CS; Mon, 22 Feb 2021 18:09:30 +0000
Received: by outflank-mailman (input) for mailman id 88341;
 Mon, 22 Feb 2021 18:09:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3rhT=HY=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lEFeC-0001te-AF
 for xen-devel@lists.xen.org; Mon, 22 Feb 2021 18:09:28 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ee3fca0e-a983-443e-b871-7049ab4b7c16;
 Mon, 22 Feb 2021 18:09:27 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ee3fca0e-a983-443e-b871-7049ab4b7c16
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614017367;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=68GMU/d8w61sL2gVWlk7/rEqldzOfeH8vue3EcjpiH8=;
  b=dEW9/2utrNVjA15GxLe6Kb/wGhoxzchgLt/SGY8WEKCoS+RYRiHt64JF
   ofcnQby7g+FZK57XZp/5JZ5Bdt6lsz04xa+yW63eOvXXPv3W+EMQnb9kZ
   ZVTsR2Pkhtrbmca0TbjYeROL/dDMbnBmGtXaJA6+iAKmshhw9vYiot/a2
   0=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: yDsdRUbJ7KM4I+QvzUp1Gqnn/xx8ICl5qIYivn4mBhHKsh+yuZ2/y4iz+hHUbmQU/cjFIh8eG0
 lt/i338d5LtnN63rkIwyDqMyGisGgjdxl74BfvJv2t8pwGiVrlRXlTMkZGEyrK6QNf+wWi98O5
 u2gMLDtWxLHZCaQ52G0vfc6VbgmZbbTy1BG6Ds9DOtjrH/mDDKdLITbO7IQaTHfkOwuxLBk/JB
 Qy0KVAts0K/4Zspwe2Kr4wcPD1wkuqbwUk9FhSb+OJa/HT+mo+v6MI3GLRWxLM0EFmhdg2tFqw
 dRw=
X-SBRS: 5.2
X-MesageID: 38132271
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,197,1610427600"; 
   d="scan'208";a="38132271"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=IFkYcmfDm4HnsQlTcdc2AG/3W7WO9z4hZBnRunwAIx3eqqBbTyY+P9yj+ilN8pieU31CJJfxH9+iJUJojEXsGLY8NGZMvDaYcu/99mLC3V2vR7A7+HDdbSfUTL+pnTg/xQL1cwjtxjceiR4G9K96mi5u7p5Jicl/wThhHLX847cQVo0UiWaFAKwe/PPcvfDFvxBCUIQJYaqZkpQ3t1nNeFI37hOlYn1Pi9oddrJ4ZxrHPVn9xUJv0M8QB6XMVdHnuz6WSKYFsKBtqqRLHa1CWmxRyyaXW06vHG9AA7ANPitzzCBxA0yVHmfFJMlI/miuLO0i3GgRpveRpn8SRz0H0A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=pHZUAWCWjBNgta8jX/wgNCI+u2JFcJkmEeK4YD5rPPI=;
 b=ZzRKYQY2Y9mV0ZoYaEyZKYC3YJ1mo2Vx/6U45sRpxH2IHK6F4nt4WaZ9maR001rAgsv8CN63F/soo/13fLNii9E/gw/yze0wfOBZWgPstNCOyeBDaKsnhsN0jxJtXfRW9aU0kjJOeKwO0CjkNXKHwXatAvJ9rTIO2Kqf0keIJTv+cz2B+dxn03VxuaFO6Nqw2mDzfqqLLyG0gYXD1J4FFMfTsS8CTZZWpxGwjQD2he+a0j1RMfA9D7nwoApUNSOGqyryBCVVCBNEgpSEnVFnlP8hfmtS7oWSVJzpZhNVvgc0tY4ItWMzFwb15k5MJPnUCc3qkFdGhR0qXxMSSGnmsA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=pHZUAWCWjBNgta8jX/wgNCI+u2JFcJkmEeK4YD5rPPI=;
 b=ahbxScop+R65N9O9jIK0ZicbUb6GHGHbQtH4kST1g/R67+uATXl3r4BtHs4xbO91meIlHuKnaSSr+Gllu2hSKc/l2LVllYlOaeokaTah3/j1pe2BWexb4/Zgtnsi6PHeL4lTCfhNWJ4xr1XP4CXtXqzV084BBry7bgqsTFFbGQg=
Subject: Re: Stable ABI checking (take 2)
To: Ian Jackson <iwj@xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>, Juergen Gross <jgross@suse.com>, Xen-devel
	<xen-devel@lists.xen.org>
References: <68c93553-7db5-f43b-b3cd-b9112a8a57dc@citrix.com>
 <78eec55c-ac2c-467e-0a2c-9acb44eba850@suse.com>
 <a2acb45e-244a-2786-391d-c6ee7d267cfd@citrix.com>
 <24627.59405.114762.685265@mariner.uk.xensource.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <c28f25ed-c9e2-429e-f65e-34b9ecda487d@citrix.com>
Date: Mon, 22 Feb 2021 18:09:13 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
In-Reply-To: <24627.59405.114762.685265@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0408.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:f::36) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 6a9ea7ba-0d64-498c-a519-08d8d75cfa23
X-MS-TrafficTypeDiagnostic: BYAPR03MB4165:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB4165A067DBBFA2876F1EE08ABA819@BYAPR03MB4165.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: qTROc70/rKvPDzvedOGJVgTfBTv4z+eTpX8PvqJcbbaqYI9SSfXdHkgFdROWb90n0RzEzWqUs+kuWRTwP/9KsnQ9TxV31leX1bA/W4bvyg8gfQyXPWA3iETL4XKsT/dcfHgQjHO0WNtFN+XHWcViRC8MtAj0ifSoATH6I8+OJd2msF7OQ0fSfK/PHrVJvxCWihAKk+FYK/szdEJJFZ9uq8B/h+rBEcSUktPcX5EPgTG3BPcnyzknbHkTtXNMMLdMNJ7BcOFsX2yRIJ49Ny0RkdDxbHt3t4Z6n4PECTHQD3uXPh5leX01l5T2eyELVZE9dPASC4foAua5IiJI/a6Rp8PxCgCaRtNsz4G2Ue8Nu6utyDckbLrVJpIw67Tokd4JJxOUBABxqovMm4SIF+X1miJJ8zojfgdDe3F3kNQd44iAibQpJbmHzGlQnOKO9DtjXpdzsOO6/a22RhRL5kM5oF9jEQdSXCrN/yYgxogNMg0DGwLXH6PDbsIq+mpe0hREyBPGRt5IphdorfqaQXgsB1+AgqqoRjPjj644A4KqbjKFUjLicAg4W16FRYT5Arg6Dg/0x9zspijIagJTnxdmPOKeg95UQoT5GJ65vVXjwL4=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(39860400002)(366004)(346002)(396003)(376002)(31696002)(186003)(16576012)(31686004)(26005)(54906003)(8676002)(8936002)(36756003)(316002)(2906002)(16526019)(86362001)(66556008)(956004)(478600001)(53546011)(6916009)(6666004)(4326008)(66946007)(6486002)(5660300002)(66476007)(2616005)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?TkZKeDR3Z2lKanFaTTNiSWxZRVpSK0hZRDIySWZhOVdmRmpBT0FFNVA3Y29T?=
 =?utf-8?B?K1BVOXJxQWJSNWlKTTlXeTBXZTl4NnNHRDFJTkFJQllnZmN5WDgxMlk1aDJG?=
 =?utf-8?B?KzFydVU0dDJKMTA3akhFOGpIYUxRZCtHdjdqakdScnZub2ZQM2RDTVNFMkdu?=
 =?utf-8?B?NE1wQXhvT1U5Ym0rY243RVd2NjJYbzQrWUhsY3kxeGFEY0tob3lSWUlpbTVx?=
 =?utf-8?B?eFNwL1BGQXE0ZGpOOG9vSDdCdTVhdFlwekMxNUJneGs2WjBuaUdCcHBjVmk4?=
 =?utf-8?B?WTc5MkNJQ1dXcDMyMjI4SWlhNUdiRnI4dWkxUDVWelBOWUFhTExFMVBZRnl4?=
 =?utf-8?B?ZDZiaEx6SDZwb2oxSnk2bzhmRlhreTRvVTFQVlNXZmJvbjd5WVhuZnlHQXFQ?=
 =?utf-8?B?c1daOVlYa2VMSUNhbElUa0hVT2lHdjl4MExPQXA1eG9Sdy9DUzJKdXZaUHJv?=
 =?utf-8?B?dDgvRnFQcXJMMEQ1ZjRsS3RvWWpVMXBtc25nd1VRQVVzQXZ5RFJ4bWY5dTly?=
 =?utf-8?B?VWRFNERDMm13WDVEcXlNcjFYMi82aFdjbi9nQUxNYm9GalRlMGdvNThuSEU5?=
 =?utf-8?B?Tm5ZVXIwbldDYUhTbGRCbThjUzZ4OGZ3VHp4MlFuWDdLWnVzRExtNVJpZEd6?=
 =?utf-8?B?R1l3WllBdFNUV0tvWFJqTG55cWUvdnlZZ2pvWjF0YVgxaFJXWnNrb080S1Rv?=
 =?utf-8?B?cmdYdmx0c0g1ellSWGVhdXFkOU1ud21qcnJleGxmUDh6b3lLays2TUJySHAw?=
 =?utf-8?B?Wll3K2xxMS8rdFBFbVdEVTJQUVowSG9aK0lsSDc3Tk5OS0ZwMnBXT1E5MXl4?=
 =?utf-8?B?OUs5K0hmWXFnekp5U2haRW01bWM4K21pRUtjSWk1U2dCMGc4bWZ2bUsvSEdv?=
 =?utf-8?B?VHg0a3hZQjlFS0xlZnNHaDlVZ3V6cW5KYTNPT2htb1BXVjJ4NFBTOVlpWE9s?=
 =?utf-8?B?UlBNMCt0TWlnd1dXYTB2YjZPYlUrMzU5Si8yWFUrNmN1SkRPb0dWcC8xUGRY?=
 =?utf-8?B?T3M0b1NYeGpkaXdkYmJPNGZTdnJIQmRBM2xsRm0rRFVmc0lTNndxUG45SjUw?=
 =?utf-8?B?N1J3Z2NBMGtJVytmZDVXRzFSejlRZ3VVVnA1Q0U3YU80TVlsQ2xDSi81NGNy?=
 =?utf-8?B?a2cyemhDZVdTUEVEcUp1YlAzV1Q0MmNyL0tMeUZtKzZYaVlnOEJxc3Y4TzNv?=
 =?utf-8?B?WVY1cGhieG0wdXpXbjhFcllGNGN6V1RqU2tMUk5uWnpSMENyOG5POXBTcTBr?=
 =?utf-8?B?VjJLSGttK2V1WWxLRGU0TUhHUFFrd0pKOE8vNWxYNzBZWkRzOEFRd2o5L1oy?=
 =?utf-8?B?VS9VRTUvM3ZHRWVQbU92MEFFTjNqT01sZytxYzZVSUQxWm14TzNsdTlueTNh?=
 =?utf-8?B?RlFvTnR6cU5LQytOVStrYlB0ZWlsUW1NUkZEckVxOFl5Z2FjazZFQzhGTmIx?=
 =?utf-8?B?ZFhrKzBwSzFzMDVSMTFGbmdhTHJxajVDWVczUStzckhWN2llenI4MjVEQW41?=
 =?utf-8?B?dCtJZWtsbG04Q1o4SU10cWFqVTY0TThROG5UVmdDNGFZVmZxMFlzelVsRVNX?=
 =?utf-8?B?Umg2SkkrMXZHSGdFUzBrVENrNGJPRTZVWVVTb1RjdzRjMUU0T0taMytWR1Ay?=
 =?utf-8?B?b3YxNkdPdkg5cFVBSkxDT3ZzcStvaVhQQUJYekQ3bURtbkphMDA2MHRrZSto?=
 =?utf-8?B?MDRJR2VpNU5uZDV4cEtEbmFvZlhOUjg0dk1UczZhZEJ3TnZvWkwySWxXSVZl?=
 =?utf-8?Q?6S6ep/PEzJMTOyLdblrJBS28Tj8tr//gB/zaP4U?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 6a9ea7ba-0d64-498c-a519-08d8d75cfa23
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Feb 2021 18:09:20.7867
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: +lu3vdXUSbztLJqgCbNfmm4mGsDdjB/sXAYqXzXGSdeNNGi9mcZWWyyAlTjiklUP+hzcysMG93p+6BsH2ldHggHsqtwJ7lM4+p0A84XaUN4=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4165
X-OriginatorOrg: citrix.com

On 22/02/2021 17:21, Ian Jackson wrote:
> Jan Beulich writes ("Re: Stable ABI checking (take 2)"):
>> On 22.02.2021 15:03, Andrew Cooper wrote:
>> +1 for option 2, fwiw.
> I'm in favour of option 2.

Option 2 it is then.

>
> Andrew Cooper writes ("Re: Stable ABI checking (take 2)"):
>> As far as RPM is concerned, splitting the two is important, as %build
>> and %check are explicitly separate steps.  I have no idea what the deb
>> policy/organisation is here.
> The reason why distro build systems like to distinguish "build" from
> "check" (run tests) is that often the tests are time-consuming (or
> have intrusive dependencies or other practical problems).
>
> IMO if the ABI check is very fast there is no reason not to run it by
> default.  (We have configure to deal with the dependency issue.)

It turns out that libxl causes abi-dumper to churn for ~4s or so, which
isn't ideal.  All the other libraries are in the noise.

If we are going for a check-by-default policy, then we obviously need to
exclude the non-stable libraries by default.

However, to fix problems pertaining to the unstable libraries,
downstreams do need to be able to invoke checking on the other libraries
as well.

I'll have a think as to how best to make that happen.
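
As a minimal illustration of what such ABI checking catches: inserting a field into the middle of a public struct shifts the offset of every later field, breaking binaries compiled against the old layout. These are hypothetical structs, not real libxl types:

```c
#include <stddef.h>

/* Old public layout, as compiled into existing consumers. */
struct old_info {
    long domid;
    long max_memkb;
};

/* New layout with a field inserted in the middle: an ABI break,
 * because max_memkb now lives at a different offset. */
struct new_info {
    long domid;
    long vcpus;      /* newly inserted field */
    long max_memkb;
};

/* Returns nonzero if the shared field moved between the two layouts,
 * i.e. the kind of incompatibility abi-dumper output lets a checker flag. */
int abi_field_moved(void)
{
    return offsetof(struct old_info, max_memkb)
        != offsetof(struct new_info, max_memkb);
}
```

Appending the field at the end (or reserving padding up front) would keep existing offsets stable, which is the sort of rule the check enforces mechanically.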

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 18:26:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 18:26:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88344.166059 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEFuK-0003hr-Ta; Mon, 22 Feb 2021 18:26:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88344.166059; Mon, 22 Feb 2021 18:26:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEFuK-0003hk-Qg; Mon, 22 Feb 2021 18:26:08 +0000
Received: by outflank-mailman (input) for mailman id 88344;
 Mon, 22 Feb 2021 18:26:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ease=HY=xenproject.org=iwj@srs-us1.protection.inumbo.net>)
 id 1lEFuJ-0003hf-Ie
 for xen-devel@lists.xen.org; Mon, 22 Feb 2021 18:26:07 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d5ff804f-9d4a-4a68-a81c-65de56b0694d;
 Mon, 22 Feb 2021 18:26:06 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lEFuI-0006F5-JP
 for xen-devel@lists.xen.org; Mon, 22 Feb 2021 18:26:06 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lEFuI-0003Ev-I6
 for xen-devel@lists.xen.org; Mon, 22 Feb 2021 18:26:06 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lEFu6-0001id-Bn; Mon, 22 Feb 2021 18:25:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d5ff804f-9d4a-4a68-a81c-65de56b0694d
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=rtOrzhmDPw0+SR4biTVjGyqD3zRb1dDnjZahR5FuLLo=; b=UsH3btlXzVEdqLUNNZn9QObCDR
	NzqdLI7nZ1RZbqi7nHzUYeihhwBPRGqn+t50FrzIIIx36usKcup64o3HTuueCkZ+jGCRzJ/q8TR0x
	J/fibvs+nStjWrJcIYTTGX3HjsyGdTiEpXhDVbbWyeAMTwsk35VDNpAboM92xV/2Wx5w=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 8bit
Message-ID: <24627.63282.67533.350056@mariner.uk.xensource.com>
Date: Mon, 22 Feb 2021 18:25:54 +0000
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>,
    George Dunlap <george.dunlap@citrix.com>,
    Julien Grall <julien@xen.org>,
    Stefano Stabellini <sstabellini@kernel.org>,
    Wei Liu <wl@xen.org>,
    Juergen Gross <jgross@suse.com>,
    Xen-devel <xen-devel@lists.xen.org>
Subject: Re: Stable ABI checking (take 2)
In-Reply-To: <c28f25ed-c9e2-429e-f65e-34b9ecda487d@citrix.com>
References: <68c93553-7db5-f43b-b3cd-b9112a8a57dc@citrix.com>
	<78eec55c-ac2c-467e-0a2c-9acb44eba850@suse.com>
	<a2acb45e-244a-2786-391d-c6ee7d267cfd@citrix.com>
	<24627.59405.114762.685265@mariner.uk.xensource.com>
	<c28f25ed-c9e2-429e-f65e-34b9ecda487d@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Andrew Cooper writes ("Re: Stable ABI checking (take 2)"):
> It turns out that libxl causes abi-dumper to churn for ~4s or so, which
> isn't ideal.  All the other libraries are in the noise.

I think that means making it part of "make check" or something.

> However, to fix problems pertaining to the unstable libraries,
> downstreams do need to be able to invoke checking on the other libraries
> as well.

Fine by me.

Ian


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 18:53:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 18:53:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88348.166074 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEGKU-0006X8-58; Mon, 22 Feb 2021 18:53:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88348.166074; Mon, 22 Feb 2021 18:53:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEGKU-0006X1-22; Mon, 22 Feb 2021 18:53:10 +0000
Received: by outflank-mailman (input) for mailman id 88348;
 Mon, 22 Feb 2021 18:53:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oys7=HY=gmail.com=kevinnegy@srs-us1.protection.inumbo.net>)
 id 1lEGKT-0006Ww-At
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 18:53:09 +0000
Received: from mail-oi1-x233.google.com (unknown [2607:f8b0:4864:20::233])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0d77ed82-a3aa-4b5b-9e2f-e3ad4407d926;
 Mon, 22 Feb 2021 18:53:08 +0000 (UTC)
Received: by mail-oi1-x233.google.com with SMTP id f3so15004157oiw.13
 for <xen-devel@lists.xenproject.org>; Mon, 22 Feb 2021 10:53:08 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0d77ed82-a3aa-4b5b-9e2f-e3ad4407d926
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=f1puqYGuQ0MULhYXk50f1Ja3v9r3HCUU4QrM/Mkm1gI=;
        b=f7q9AE0VqTXmXXz461I2PttTbNs3JNA/Jf4uQEHRhhjx0fZq23GF1nOT4PySEM81Bh
         1JHgqnw8x9Jzrr6aThel9//jFsTG+KgJEGeuIg9wCYE83pgi0fP1ca1zh5uaMmbMnTZW
         M1CcgQP5Ls+b5h1DxH03SIeJPOFjfkGRa7HMx3JaNujHs9WSk5lIPOYwaHUBIjT9wjrH
         heXpSV5fnwGk8jt/iiea2fKNdePWCF3EmPCGmgUIaBTalmNErgqF6vY4oWtwa5y/j1Cx
         KtTtCK0qWKI6IIai1Te7n3HEPjJxHTR9u5vOG6/UIgO8e3WBRRJy3oVkbr20V1stMzHo
         ojFA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=f1puqYGuQ0MULhYXk50f1Ja3v9r3HCUU4QrM/Mkm1gI=;
        b=sIk2kFRyJJcXujUWC7ZObbjV5OQQ5TDpN0KI+KDmYF0ADf6eAszkP4y9b9GAvRWfj1
         dXYXYyMwvc+l7FOw6yeAgIvjafrj7vcOiHqaEgdiKfaq3pYvuYjnGR6l0KM4DhmNxoul
         0rB2G24hcfaES7DYFGzWo68ZtLsyS3lDgqNRxxgURZcaP34Jl7j+yUx74+hN6ltkH5xC
         lLRqz89+8BOMExxsAGCEa1H0Xv8FpU0xtuguZj1vKoiQSdCtT74uXsuYIVk6IHhGETvU
         rSXjiskKr/UXvnpixHesabNZYrE7SOnykmGWkHGgQygX1kXDBsIFhNd3bpbvfvYbS1po
         VLnQ==
X-Gm-Message-State: AOAM531ZQunZ1kNy9AKlsZU6iMvUwCxOYFOIEWEtK12kjqqO4Evh9sbP
	CX1jQTcfWkMmXDE854eGomNSDuR7ixFKvGXhsWIv9aioKTNZ
X-Google-Smtp-Source: ABdhPJwZWWWvlTzS20caJnqhNysg3K9xiCXBWFkYWpvpdOUsDn5312T+Xf+kHOa0kIr+OoLW4JkEtDT1h2AK+fpyx+U=
X-Received: by 2002:aca:5954:: with SMTP id n81mr16469325oib.25.1614019987627;
 Mon, 22 Feb 2021 10:53:07 -0800 (PST)
MIME-Version: 1.0
References: <CACZWC-qK_biKgyi+ZiXnsHRscAbK9pz=kncdBA25QYWY129HCQ@mail.gmail.com>
 <38cf0d39-da1d-5375-89dc-1668e26323a2@citrix.com>
In-Reply-To: <38cf0d39-da1d-5375-89dc-1668e26323a2@citrix.com>
From: Kevin Negy <kevinnegy@gmail.com>
Date: Mon, 22 Feb 2021 13:52:31 -0500
Message-ID: <CACZWC-r7fS2AztaAgGdVPv5NcJiAxZ5mvC4FQTkorPDGwOfn9g@mail.gmail.com>
Subject: Re: How does shadow page table work during migration?
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: xen-devel@lists.xenproject.org
Content-Type: multipart/alternative; boundary="00000000000005e3f605bbf150f0"

--00000000000005e3f605bbf150f0
Content-Type: text/plain; charset="UTF-8"

Hello again,

Thank you for the helpful responses. I have several follow-up questions.

1)

> With Shadow, Xen has to do the combination of address spaces itself -
> the shadow pagetables map guest virtual to host physical address.


The shadow_blow_tables() call is "please recycle everything" which is used
> to throw away all shadow pagetables, which in turn will cause the
> shadows to be recreated from scratch as the guest continues to run.


With shadowing enabled, given a guest virtual address, how does the
hypervisor recreate the mapping to the host physical address (mfn) from the
virtual address if the shadow page tables are empty (after a call to
shadow_blow_tables, for instance)? I had been thinking of shadow page
tables as the definitive mapping between guest pages and machine pages, but
should I think of them as more of a TLB, which implies there's another way
to get/recreate the mappings if there's no entry in the shadow table?


2) I'm trying to grasp the general steps of enabling shadowing and handling
page faults. Is this correct?
    a) Normal PV - default shadowing is disabled, guest has its page tables
in r/w mode or whatever mode is considered normal for guest page tables
    b) Shadowing is enabled - shadow memory pool allocated, all memory
accesses must now go through shadow pages in CR3. Since no entries are in
shadow tables, initial reads and writes from the guest will result in page
faults.
    c) As soon as the first guest memory access occurs, a mandatory page
fault occurs because there is no mapping in the shadows. Xen does a guest
page table walk for the address that caused the fault (va) and then marks
all the guest page table pages along the walk as read only.
    d) Xen finds out the mfn of the guest va somehow (my first question)
and adds the mapping of the va to the shadow page table.
    e) If the page fault was a write, the va is now marked as read/write
and logged as dirty in the logdirty map.
    f) Now the next page fault to any of the page tables marked read-only
in c) must have been caused by the guest writing to its tables, which can
be reflected in the shadow page tables.


3) How do Xen/shadow page tables distinguish between two equivalent guest
virtual addresses from different guest processes? I suppose when a guest OS
tries to change page tables from one process to another, this will cause a
page fault that Xen will trap and be able to infer that the current shadow
page table should be swapped to a different one corresponding to the new
guest process?

Thank you so much,
Kevin

--00000000000005e3f605bbf150f0--


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 19:00:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 19:00:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88352.166088 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEGRV-0007XX-UU; Mon, 22 Feb 2021 19:00:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88352.166088; Mon, 22 Feb 2021 19:00:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEGRV-0007XQ-Qx; Mon, 22 Feb 2021 19:00:25 +0000
Received: by outflank-mailman (input) for mailman id 88352;
 Mon, 22 Feb 2021 19:00:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NRjd=HY=apertussolutions.com=dpsmith@srs-us1.protection.inumbo.net>)
 id 1lEGRU-0007XL-4w
 for xen-devel@lists.xen.org; Mon, 22 Feb 2021 19:00:24 +0000
Received: from sender4-of-o51.zoho.com (unknown [136.143.188.51])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4c139c3c-4d09-4f0d-89a8-f90678af1187;
 Mon, 22 Feb 2021 19:00:23 +0000 (UTC)
Received: from [10.10.1.24] (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 161402040932016.64447211319191;
 Mon, 22 Feb 2021 11:00:09 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4c139c3c-4d09-4f0d-89a8-f90678af1187
ARC-Seal: i=1; a=rsa-sha256; t=1614020415; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=mRB2FRsysLflKMTYK8Z6+7ex5pijr4ZEshrObNKPaAynQ61TL/WwMHyqlP2zzH1RPewaRAYrxBJncYqji+nr3sh6BVYrQzjiOw6T8/ODjIj9ol0oINh1M+zqfemVS4ARuh0i53HgbgJHTldHPiysuDx9C0q4qkXP5MihZcrhqHw=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1614020415; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=WaT7MlNB652JgJQRnrZ0dkEP9j85MD51klliJzSmi0w=; 
	b=EAlF9CsqQkybHVp4Z9d7EDR+qT5lG4z4nEDhk1rALGTENyHg3bzBgtv0E4T7/9lusFdDnonpzrLpsTyvHXvw9RRFF+eVGEkJZS74mnzbLQgbn1IZb0iHGzqcnyfuoxUqmb8HzceUXM5wxCruH/RnaImsGQKohQs88bWDWlWuPLE=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com> header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1614020415;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=Subject:From:To:Cc:References:Message-ID:Date:MIME-Version:In-Reply-To:Content-Type:Content-Transfer-Encoding;
	bh=WaT7MlNB652JgJQRnrZ0dkEP9j85MD51klliJzSmi0w=;
	b=qN8i2MTmRc5creVnav+dME6b+d8QkuzIowpX1Jl3WHmX6zXhYl+6setoxWui/VvP
	oWKiiqVGjv0byjGzz9oxlf3/BqWaRWNe/31ZGhSQDh5hqNpc9EeA+UMIgzF1eds8V5+
	oprLvnfKz45p1PDFyL77+92hiQl/GYdU1ssiQGLU=
Subject: Re: DomB Working Group
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
To: Xen-devel <xen-devel@lists.xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
 <JBeulich@suse.com>, bertrand.marquis@arm.com, roger.pau@citrix.com,
 julien@xen.org, Stefano Stabellini <sstabellini@kernel.org>,
 Christopher Clark <christopher.w.clark@gmail.com>,
 Rich Persaud <persaur@gmail.com>, adam.schwalm@starlab.io
References: <d0b1a7d1-2260-567b-fd8d-04e32a3504f2@apertussolutions.com>
Message-ID: <5d6bef74-8030-0058-85cd-a764bf31e196@apertussolutions.com>
Date: Mon, 22 Feb 2021 14:00:07 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <d0b1a7d1-2260-567b-fd8d-04e32a3504f2@apertussolutions.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-ZohoMailClient: External

On 2/5/21 2:12 PM, Daniel P. Smith wrote:
> Greetings,
> 
> Per the community call on Feb. 4 I would like to get the working group
> started that will be reviewing the major design decisions for the DomB
> implementation. A summary of the discussion around the two primary
> decisions we are seeking to get resolved are as follows,
> 
> 
> Topic: DomB: Adoption of Device Tree as the format for the Launch
> Control Module
> * Consensus approval from x86 and Arm maintainers and members of the Xen
> community on the call to proceed with Device Tree as the format for the
> DomB LCM (described in the previous mailing list posts).
> 
> - a working group will follow up on migrating the device tree handling
> code within the hypervisor (previously imported from outside the
> project) from the Arm hypervisor code into common code.
> 
> Topic: DomB: Surfacing configuration data to guests: ACPI tables, Device
> Tree
> * Recommendation was that both will be needed, and OK to proceed with
> just implementing one but plan and design for the second to be added.
> First is likely to be ACPI; to be determined as development progress is
> made.
> 
> 
> To continue the discussion from there, I would like to propose a call on
> Thursday February 11th at 1700UTC, 0900PST/1200EST/1800CEST. I have
> provided call details below for those who are able to attend. The agenda
> is available on CryptPad. If you are not able to attend, please reach
> out directly. Thanks and hope to see everyone on the call!
> 
> 
> Agenda
> ======
> https://cryptpad.fr/pad/#/2/pad/edit/iVEku8zImQg320a3D4IBAKQh/

In case anyone who was not able to attend would like to see what was
discussed, you can find the meeting minutes at the link below.

https://cryptpad.fr/pad/#/2/pad/view/SVV9D9eA90bRT9Lwb-nycaw4ugKcpLJhN+odyFspd-0/

V/r,
dps


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 19:36:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 19:36:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88355.166100 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEH0Y-00022F-RI; Mon, 22 Feb 2021 19:36:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88355.166100; Mon, 22 Feb 2021 19:36:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEH0Y-000228-O7; Mon, 22 Feb 2021 19:36:38 +0000
Received: by outflank-mailman (input) for mailman id 88355;
 Mon, 22 Feb 2021 19:36:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=w45/=HY=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lEH0X-000223-A9
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 19:36:37 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 20ca9ad8-fba3-488e-aa94-9e8a4153e421;
 Mon, 22 Feb 2021 19:36:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 20ca9ad8-fba3-488e-aa94-9e8a4153e421
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614022595;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=yIIDdTs4M753HnCo4p+NLv4EdJWLKKVNJwGdJSOVFSM=;
  b=eHiJ7Zvo4vf2oBd6OagZLt/+12pfh+loiq4pBjoDS6SpaDrX/jFYkKB5
   Uh/mIjipc0k2rdN8MXNsj5TKhQIiaqjKwJir+qzTxwTD5FQY3NjUIZ53k
   sLiPVhISlb40+OCa5AmAMj4lRbb7bfCUwQ7DO2LvV6PEsNEg0aEcc0npK
   E=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: jUvTIvZWIdATa/pn29vU+7/MGA94YGdjfMotrAJIYmrEBpoSQqVIhCKRaKKSVfnd+wGETA4R1M
 II3kAn5kuk0QkyA3tGHBa7t2kIKeFhFmp38VPepmjoRPWVoEB75QJ+ytUb/g/czL1v5piiPml/
 r2Hn0rFt3P6iIdp6IukB8e7SZcvmYKXt/30E/wcADnhqWuJdJ62TYbmHH9dg+lDz3mpH/poz00
 ZCTDvGNkz8c/qXRuE0gx5I0xvMP9bRZXyirUgpFANirxWA/0eZDp1n1AWLDbax//UwyLyV6Ckl
 8u0=
X-SBRS: 5.2
X-MesageID: 38138546
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,198,1610427600"; 
   d="scan'208";a="38138546"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=XKfzNeEIbuyTj1FINJDtvSsVqfww5QEqe5m0ZOoFtL3RUpPm1u4VXB+TyftBbZBcl0uoZu5Qjd/6n4ZF5EpZumY3apkCE2ldgodU6YjQOwdv544FQT1Rh7lkYB7u8ffYdP75PwTwheIuNvmIVHYRJ1yl+ETsrnCadWb+4VZO/Xub6vI49hcd/v1HWPBiKKDscm1SFlkkNNpKSI+ahBie8rlZ8Vl70a5/E6fcYnEcnQ8MO8urL4rxEmwA6+mYR1kjukf2zNQ8ggTHhdLbCuIuTJgO+JkLcXwezlnMYxIcJvpcg7BRfjaaOEfplEkQNjPgHknmujvCdkxrtMmint+QXg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3MEpMaYpx1LTAi9LKxCvjT5JulrsbKJ6fs0Biqt+3zM=;
 b=eJWKIQ+YU1sSAxISRR0lbN9iscF5WytJn9owYp8zURYBhsrFPfTZM+pIrU1f88hx3usBnxlp5foefh/+5196j7Fr8s7OnDDHo1S+wVLrqVIflf3XYO9GgF/BeY5Fhjk0CtbYeNufUwCbs6jTawM5A18ETgTQRIW7RN0ljYqMQGxrIi4LCFPuJGVGSNDpp+oR2Nt06jXVGtARz3iRKkiQ1dKyHR0r/KCSGNAUURBMrzOXuo5qwlDPYRivsdMiuvKfFxAtl5yog3mXtqHnXKKrTW0zL+R5CQvzELQweNBaUvs1XR693tAe5ETDNQVbMUeh+Y/ER+eldxQPmtikFJV+pQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3MEpMaYpx1LTAi9LKxCvjT5JulrsbKJ6fs0Biqt+3zM=;
 b=p384+T/cKxMRwwtjhCww9NheQ7wPqvpvGW/mP8xru6UD01WuS+seKQVnrQ3SZXj0xOGzws7Ulk28gb7NjyE6XCj9UsCe8uC4cHP5QJSkFzy6eBl01BMtz1aFOtmNpjVGjefBwZ4IybPp3LQNPXP94UNdHhAkfwngBH/+OlUuYss=
Date: Mon, 22 Feb 2021 20:36:24 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>, Ian Jackson
	<iwj@xenproject.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH][4.15] x86: mirror compat argument translation area for
 32-bit PV
Message-ID: <YDQHuHRJqhY4BruZ@Air-de-Roger>
References: <bdedf018-b6c4-d0da-fb4b-8cf2d048c3b1@suse.com>
 <9eade40b-bd95-b850-2dec-f7def66c3c7b@citrix.com>
 <77a36366-9157-c3d3-b1f0-211f4fc39a93@suse.com>
 <60a31ec8-6844-2149-1a04-7e757d1d2dd3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <60a31ec8-6844-2149-1a04-7e757d1d2dd3@citrix.com>
X-ClientProxiedBy: MR2P264CA0052.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:31::16) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 9e520f8e-d8ef-48d2-4c43-08d8d7692758
X-MS-TrafficTypeDiagnostic: DM6PR03MB4683:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB4683AD73542C08C9CE57C9728F819@DM6PR03MB4683.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: JTdueBf5a/7gB6e5QdpHPWw/dv7qxj2Pw6FJqcJooaBErjRX5ClbyQ41ukFpoDVRQSAHu0PM0RXJI07IPl5N4do3GEUfIl3pATbOEwPOvit/z7FQdKPf/2XPMl5UxtLoFlKMy8llGTfLsVq1PJT8jeH3C//RbHWAqXxnooo/RQWxAuqZC4mPYfbBWg3OQTU2TZpcXzIRhEZVvOj2efXcOffO8CvDor+d+CFEc72fHs0Wqm/V2/AGLfcAIHgX7iIlZHnU4cSbmNTgIvhyJkUwwqcE+tCVLTICcTgF8KHN/BM1x1P3PjDjgCN96Y9UucUSpyPwzvuZlvHg/5tRegjYVSLSvRsbWTKzZUPJGd+bKwJ63A0DxtDTTgsyxWUvpLEuTTZWsOjAWo+sHg+xFIXZBtjQK3PnujMcNeJl0llEq2epiW+WAoQtZ66Yj65eqw/gQcTsp1XMmsp0JZjfNDhJhK6i+jy6IrXC/tRuQE3Hv2XpQogXSgjjlO2tfWCIGXMtBmWNlY3ddOecM8I2CTPFVQ==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(366004)(346002)(136003)(39860400002)(376002)(396003)(33716001)(83380400001)(6496006)(66556008)(26005)(16526019)(956004)(54906003)(6486002)(66476007)(9686003)(86362001)(316002)(2906002)(478600001)(186003)(8676002)(66946007)(8936002)(5660300002)(6862004)(85182001)(6636002)(53546011)(4326008)(6666004);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?SzllOFZKQ1QzQzh6OHR1VmtVUUZxeENzY0wvMnVYR2RFVUhSZ2RoWGFTaG1W?=
 =?utf-8?B?c0hxMHF6U1orSTlSK1Q0MndWZjJkcWNvelhoQk5KL2l1VHVOVlNrQWVzTTZz?=
 =?utf-8?B?N0hXTVRkSEpiODJFTWdoaUpQMExRRmlGZ2VoWWttbWE3SzRGbFZmL2NOd2lC?=
 =?utf-8?B?eFdVNUpDSy9xRjZjR2gvUjRCWng1TXR3VDJtYzg0ZzdkV3pUNDArY3Vnb0Ji?=
 =?utf-8?B?M0VxRHQ0QnJ5bnZnSlVGTm5wdHJxTGRrNTRvRVdaVmIxR3NNbW84MzhNYW1Z?=
 =?utf-8?B?M3BidHdJcFl1em8xa21LS3BRcGlRRk1TTjhPTkcvNmRWMlJ0M21Na2FpOWNG?=
 =?utf-8?B?Nlh1SHBneGpnYW9sWUlRZmFrN2sxN3V0T1o4eW83ZkpzTnZ6YWxoc0Rsaktw?=
 =?utf-8?B?MDN3U2ZyL2xkdHdxYkpTelNFc0FlV3haMC9kRjg1bk1GY2gvSDFFZFRQTjZE?=
 =?utf-8?B?Zzh6SGNqK1k0MHRUSUUvOXc3Zklid0RLYlVra0FSMjZJTlRobDlrNmlicTNJ?=
 =?utf-8?B?bm9iaFlIODE3Kzg2bjdMblBFNWVaNVYyaHlpclYySW52cGNvSS9Mb2s5UEky?=
 =?utf-8?B?TXE1UjBmSzE3Qk0wTjh2TUlWUTI3ZDVJaVIrN0pBUnIybUlYT1NBZzVwVThn?=
 =?utf-8?B?OEZodHZRdDRyWThHQmhsOGR0SjRxWVlqOGlGeDR0V1hraytsQnk5RGxPTk04?=
 =?utf-8?B?ck52SUJ5ajlucWVmaHBSQ29mdS9CTDBUeS93Q2RoamhSWm1ZTnp4N1dxNTdY?=
 =?utf-8?B?djZxK1pzaG5BKzlTOGNmL3Y3ZlJVZFpGOUYzZHpDZ2hqQkpadWFXUlVKQ3ZL?=
 =?utf-8?B?YzllSmtvSXdUWWNscnJYdjRZbEt3NVp0T2E1bnZnaG1HcmxtS3NnMzBDcDZz?=
 =?utf-8?B?Vy9uQnA4aXVaSnhrZ2lmc25oT1kyeVZvZGcycG5HTllnclptcmhQK2VpaElh?=
 =?utf-8?B?bXZJWmhZMU9YWUlYUERtRzJNUWZ4NXBEUEtNcksyRUFVUlVleVBEcGxsbVBZ?=
 =?utf-8?B?NkJabGJEU2ZUWlNFZUVMUmI4UFR6aEExTXg4TWF1TVhkTjVRaEp4TjFDY0h6?=
 =?utf-8?B?YlVtYmhzRzBJblE0NHBWTmVCc1VOSUtmM1V0ejJEN2crVlMzcUp0YTNGYXJW?=
 =?utf-8?B?R0ZuMXB1QkJGNk9uU3B2TWVxdjVQYk91aHBqVWJud3dXN001c1o0dlJPa01V?=
 =?utf-8?B?S1RESXdxMG5sbVZNNzlaTmRWZVZqb283ajE0bzFMRlMvYjRQbmhidXJkYlZK?=
 =?utf-8?B?MDdmU01uL2o3S3BONmpDM25KSEIvdFYzeDBIY0lXM2lZdmVSOHM1elhnNlAx?=
 =?utf-8?B?ZFRMRFhLU2FKT2VnNTFSTno0WUplMEtyc01xNTdlTHhQSElGbVRjUFh6N3Ny?=
 =?utf-8?B?WTFrQUc5MHFsd3ZIVzFpNERidzU1TEltL2tQOTNmUnZWMXUyelQwTkNGb01X?=
 =?utf-8?B?blNFelE4Ymxha3dYYTJrdTZNSW56T0cxSjIvL1BYZDVtMG5OSXBpdEZ5NUZp?=
 =?utf-8?B?THd0TitPTWdCWDdNb2Y5bkovYXFHWXQvbVRaS3JtOGRwQitHdDNWTE5zWGJn?=
 =?utf-8?B?M2w5N0lKVFF3R2hYK3ZiWjhkRkgxRW50aFN5NzVuL0hXTmZ6b0pnT2duWGI3?=
 =?utf-8?B?N3JEeE05QldDN3hDaUJWdk4xYzVVcU83eG1Lb1JCOXRML3c0TVZaN1RId2x5?=
 =?utf-8?B?QmlteHlzZGVVQ1lJWFhJVXFFS0hINlFKbVVLa1lBc0Rxd3ZDc3BIWHhTRkly?=
 =?utf-8?Q?jgXuwEFj1aoXa+pZxIw/7m77S0f4TI3dcJ7gbtJ?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 9e520f8e-d8ef-48d2-4c43-08d8d7692758
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Feb 2021 19:36:30.7587
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: KQDZSfdh9ERA4kA6okkN9NY/yik8ZU3LmY/K1UylZv07PJNcoE1K1Yv10c+9kdy0gSScdePL2xJ5U4dsxQAfAg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4683
X-OriginatorOrg: citrix.com

On Mon, Feb 22, 2021 at 04:47:38PM +0000, Andrew Cooper wrote:
> On 22/02/2021 14:22, Jan Beulich wrote:
> > On 22.02.2021 15:14, Andrew Cooper wrote:
> >> On 22/02/2021 10:27, Jan Beulich wrote:
> >>> Now that we guard the entire Xen VA space against speculative abuse
> >>> through hypervisor accesses to guest memory, the argument translation
> >>> area's VA also needs to live outside this range, at least for 32-bit PV
> >>> guests. To avoid extra is_hvm_*() conditionals, use the alternative VA
> >>> uniformly.
> >>>
> >>> While this could be conditionalized upon CONFIG_PV32 &&
> >>> CONFIG_SPECULATIVE_HARDEN_GUEST_ACCESS, omitting such extra conditionals
> >>> keeps the code more legible imo.
> >>>
> >>> Fixes: 4dc181599142 ("x86/PV: harden guest memory accesses against speculative abuse")
> >>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> >>>
> >>> --- a/xen/arch/x86/mm.c
> >>> +++ b/xen/arch/x86/mm.c
> >>> @@ -1727,6 +1727,11 @@ void init_xen_l4_slots(l4_pgentry_t *l4t
> >>>                 (ROOT_PAGETABLE_FIRST_XEN_SLOT + slots -
> >>>                  l4_table_offset(XEN_VIRT_START)) * sizeof(*l4t));
> >>>      }
> >>> +
> >>> +    /* Slot 511: Per-domain mappings mirror. */
> >>> +    if ( !is_pv_64bit_domain(d) )
> >>> +        l4t[l4_table_offset(PERDOMAIN2_VIRT_START)] =
> >>> +            l4e_from_page(d->arch.perdomain_l3_pg, __PAGE_HYPERVISOR_RW);
> >> This virtual address is inside the extended directmap.
> > No. That one covers only the range excluding the last L4 slot.
> >
> >>   You're going to
> >> need to rearrange more things than just this, to make it safe.
> > I specifically picked that entry because I don't think further
> > arrangements are needed.
> 
> map_domain_page() will blindly hand back this virtual address if an
> appropriate mfn is passed, because there are no suitability checks.
> 
> The error handling isn't great, but at least any attempt to use that
> pointer would fault, which is now no longer the case.

AFAICT map_domain_page will never populate the error page virtual
address, as the slot end (MAPCACHE_VIRT_END) is way lower than
-MAX_ERRNO?

We could add:

BUILD_BUG_ON((PERDOMAIN_VIRT_SLOT(PERDOMAIN_SLOTS) - 1) >= (unsigned long)-MAX_ERRNO);

for safety, somewhere.

Roger.


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 20:04:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 20:04:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88367.166123 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEHR3-0004xq-C2; Mon, 22 Feb 2021 20:04:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88367.166123; Mon, 22 Feb 2021 20:04:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEHR3-0004xj-84; Mon, 22 Feb 2021 20:04:01 +0000
Received: by outflank-mailman (input) for mailman id 88367;
 Mon, 22 Feb 2021 20:04:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OgTk=HY=eik.bme.hu=balaton@srs-us1.protection.inumbo.net>)
 id 1lEHR2-0004xe-AH
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 20:04:00 +0000
Received: from zero.eik.bme.hu (unknown [2001:738:2001:2001::2001])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9ba3970f-ed74-4372-a251-2a29ae5b9e50;
 Mon, 22 Feb 2021 20:03:57 +0000 (UTC)
Received: from zero.eik.bme.hu (blah.eik.bme.hu [152.66.115.182])
 by localhost (Postfix) with SMTP id 02F4C7462D3;
 Mon, 22 Feb 2021 21:03:56 +0100 (CET)
Received: by zero.eik.bme.hu (Postfix, from userid 432)
 id B583A7462BD; Mon, 22 Feb 2021 21:03:55 +0100 (CET)
Received: from localhost (localhost [127.0.0.1])
 by zero.eik.bme.hu (Postfix) with ESMTP id B37E474581E;
 Mon, 22 Feb 2021 21:03:55 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9ba3970f-ed74-4372-a251-2a29ae5b9e50
Date: Mon, 22 Feb 2021 21:03:55 +0100 (CET)
From: BALATON Zoltan <balaton@eik.bme.hu>
To: =?ISO-8859-15?Q?Philippe_Mathieu-Daud=E9?= <philmd@redhat.com>
cc: qemu-devel@nongnu.org, Peter Maydell <peter.maydell@linaro.org>, 
    Huacai Chen <chenhuacai@kernel.org>, kvm@vger.kernel.org, 
    Paul Durrant <paul@xen.org>, David Hildenbrand <david@redhat.com>, 
    Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>, 
    Jiaxun Yang <jiaxun.yang@flygoat.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    "Michael S. Tsirkin" <mst@redhat.com>, Halil Pasic <pasic@linux.ibm.com>, 
    Christian Borntraeger <borntraeger@de.ibm.com>, 
    =?ISO-8859-15?Q?Herv=E9_Poussineau?= <hpoussin@reactos.org>, 
    Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, 
    Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org, 
    Leif Lindholm <leif@nuviainc.com>, Thomas Huth <thuth@redhat.com>, 
    Eduardo Habkost <ehabkost@redhat.com>, 
    Alistair Francis <alistair@alistair23.me>, 
    Richard Henderson <richard.henderson@linaro.org>, 
    Greg Kurz <groug@kaod.org>, qemu-s390x@nongnu.org, qemu-arm@nongnu.org, 
    David Gibson <david@gibson.dropbear.id.au>, 
    Radoslaw Biernacki <rad@semihalf.com>, 
    =?ISO-8859-15?Q?Philippe_Mathieu-Daud=E9?= <f4bug@amsat.org>, 
    qemu-ppc@nongnu.org, Cornelia Huck <cohuck@redhat.com>, 
    Paolo Bonzini <pbonzini@redhat.com>, Aurelien Jarno <aurelien@aurel32.net>
Subject: Re: [PATCH v2 04/11] hw/arm: Restrit KVM to the virt & versal
 machines
In-Reply-To: <20210219173847.2054123-5-philmd@redhat.com>
Message-ID: <36692cea-e747-b054-51ff-bbcfbbdd4151@eik.bme.hu>
References: <20210219173847.2054123-1-philmd@redhat.com> <20210219173847.2054123-5-philmd@redhat.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="3866299591-317435051-1614024235=:60531"
X-Spam-Checker-Version: Sophos PMX: 6.4.8.2820816, Antispam-Engine: 2.7.2.2107409, Antispam-Data: 2021.2.22.191817, AntiVirus-Engine: 5.79.0, AntiVirus-Data: 2020.12.21.5790000
X-Spam-Flag: NO
X-Spam-Probability: 9%
X-Spam-Level: 
X-Spam-Status: No, score=9% required=50%

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--3866299591-317435051-1614024235=:60531
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8BIT

On Fri, 19 Feb 2021, Philippe Mathieu-Daudé wrote:
> Restrit KVM to the following ARM machines:

Typo: "Restrict" (also in patch title).

Regards,
BALATON Zoltan

> - virt
> - xlnx-versal-virt
>
> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> ---
> hw/arm/virt.c             | 5 +++++
> hw/arm/xlnx-versal-virt.c | 5 +++++
> 2 files changed, 10 insertions(+)
>
> diff --git a/hw/arm/virt.c b/hw/arm/virt.c
> index 371147f3ae9..8e9861b61a9 100644
> --- a/hw/arm/virt.c
> +++ b/hw/arm/virt.c
> @@ -2527,6 +2527,10 @@ static HotplugHandler *virt_machine_get_hotplug_handler(MachineState *machine,
>     return NULL;
> }
>
> +static const char *const valid_accels[] = {
> +    "tcg", "kvm", "hvf", NULL
> +};
> +
> /*
>  * for arm64 kvm_type [7-0] encodes the requested number of bits
>  * in the IPA address space
> @@ -2582,6 +2586,7 @@ static void virt_machine_class_init(ObjectClass *oc, void *data)
>     mc->cpu_index_to_instance_props = virt_cpu_index_to_props;
>     mc->default_cpu_type = ARM_CPU_TYPE_NAME("cortex-a15");
>     mc->get_default_cpu_node_id = virt_get_default_cpu_node_id;
> +    mc->valid_accelerators = valid_accels;
>     mc->kvm_type = virt_kvm_type;
>     assert(!mc->get_hotplug_handler);
>     mc->get_hotplug_handler = virt_machine_get_hotplug_handler;
> diff --git a/hw/arm/xlnx-versal-virt.c b/hw/arm/xlnx-versal-virt.c
> index 8482cd61960..d424813cae1 100644
> --- a/hw/arm/xlnx-versal-virt.c
> +++ b/hw/arm/xlnx-versal-virt.c
> @@ -610,6 +610,10 @@ static void versal_virt_machine_instance_init(Object *obj)
> {
> }
>
> +static const char *const valid_accels[] = {
> +    "tcg", "kvm", NULL
> +};
> +
> static void versal_virt_machine_class_init(ObjectClass *oc, void *data)
> {
>     MachineClass *mc = MACHINE_CLASS(oc);
> @@ -621,6 +625,7 @@ static void versal_virt_machine_class_init(ObjectClass *oc, void *data)
>     mc->default_cpus = XLNX_VERSAL_NR_ACPUS;
>     mc->no_cdrom = true;
>     mc->default_ram_id = "ddr";
> +    mc->valid_accelerators = valid_accels;
> }
>
> static const TypeInfo versal_virt_machine_init_typeinfo = {
>
--3866299591-317435051-1614024235=:60531--


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 20:09:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 20:09:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88370.166134 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEHW1-0005Ex-VX; Mon, 22 Feb 2021 20:09:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88370.166134; Mon, 22 Feb 2021 20:09:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEHW1-0005Eq-SY; Mon, 22 Feb 2021 20:09:09 +0000
Received: by outflank-mailman (input) for mailman id 88370;
 Mon, 22 Feb 2021 20:09:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zptr=HY=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lEHW0-0005El-E1
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 20:09:08 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 421d96f0-4ecb-4103-9c0a-bcebd22fd0b9;
 Mon, 22 Feb 2021 20:09:07 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 666B164DF2;
 Mon, 22 Feb 2021 20:09:06 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 421d96f0-4ecb-4103-9c0a-bcebd22fd0b9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1614024546;
	bh=xin9vc9mBGli/0hU6z+Gj/4wKbJmYC+lxAce91Trq14=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=IR8CXf9aGGSILZYntATZyKIlTYx56VHSTaNmtr58uefn8uvjs9+H3mv4o5zjlTaBE
	 vj7BngMt2RwZ4R/06YkGxZmITU1m1++4SFJmTkAegGueZhFJSksvhenr9BzUz0RzWz
	 ffdGdGhRXE2QajGiJ0FWKiUaJUU6QUMr+AT+tvjG0irynnKm68qzDTVL05nU+KO/ia
	 lNLC17vfnhOY7Uz7zjow+Sn8lP+aq0SU6IRrAvzS7nUBVifZzC9lpKCc6WpOwceZ9n
	 JsxBi5dPEsk+IVLbABxm5QpnYi27/nwWd4nma1Hgb/KXUdUyUY7uG2kkN1/KeHdNni
	 cOAVtt9IoU3rQ==
Date: Mon, 22 Feb 2021 12:09:05 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Jan Beulich <jbeulich@suse.com>
cc: Julien Grall <julien@xen.org>, iwj@xenproject.org, sstabellini@kernel.org, 
    ash.j.wilding@gmail.com, Julien Grall <jgrall@amazon.com>, 
    George Dunlap <george.dunlap@citrix.com>, 
    Dario Faggioli <dfaggioli@suse.com>, xen-devel@lists.xenproject.org
Subject: Re: [PATCH for-4.15] xen/sched: Add missing memory barrier in
 vcpu_block()
In-Reply-To: <744ca7e5-328d-0c5f-bc52-e4c0e78dad97@suse.com>
Message-ID: <alpine.DEB.2.21.2102221208050.3234@sstabellini-ThinkPad-T480s>
References: <20210220194701.24202-1-julien@xen.org> <744ca7e5-328d-0c5f-bc52-e4c0e78dad97@suse.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-1133757694-1614024546=:3234"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1133757694-1614024546=:3234
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Mon, 22 Feb 2021, Jan Beulich wrote:
> On 20.02.2021 20:47, Julien Grall wrote:
> > From: Julien Grall <jgrall@amazon.com>
> > 
> > The comment in vcpu_block() states that the events should be checked
> > /after/ blocking to avoid a wakeup-waiting race. However, from a generic
> > perspective, set_bit() doesn't prevent re-ordering. So the following
> > could happen:
> > 
> > CPU0  (blocking vCPU A)         |    CPU1 (unblock vCPU A)
> >                                 |
> > A <- read local events          |
> >                                 |   set local events
> >                                 |   test_and_clear_bit(_VPF_blocked)
> >                                 |       -> Bail out as the bit is not set
> >                                 |
> > set_bit(_VPF_blocked)           |
> >                                 |
> > check A                         |
> > 
> > The variable A will be 0 and therefore the vCPU will be blocked when it
> > should continue running.
> > 
> > vcpu_block() is now gaining an smp_mb__after_atomic() to prevent the CPU
> > from reading any information about local events before the flag
> > _VPF_blocked is set.
> > 
> > Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>



> > This is a follow-up of the discussion that started in 2019 (see [1])
> > regarding a possible race between do_poll()/vcpu_unblock() and the wake
> > up path.
> > 
> > I haven't yet fully thought about the potential race in do_poll(). If
> > there is, then this would likely want to be fixed in a separate patch.
> > 
> > On x86, the current code is safe because set_bit() is fully ordered. So
> > the problem is Arm (and potentially any new architectures).
> > 
> > I couldn't convince myself whether the Arm implementation of
> > local_events_need_delivery() contains enough barrier to prevent the
> > re-ordering. However, I don't think we want to play with the devil here as
> > the function may be optimized in the future.
> 
> In fact I think this ...
> 
> > --- a/xen/common/sched/core.c
> > +++ b/xen/common/sched/core.c
> > @@ -1418,6 +1418,8 @@ void vcpu_block(void)
> >  
> >      set_bit(_VPF_blocked, &v->pause_flags);
> >  
> > +    smp_mb__after_atomic();
> > +
> 
> ... pattern should be looked for throughout the codebase, and barriers
> be added unless it can be proven none is needed.

And in that case it would be best to add an in-code comment to explain
why the barrier is not needed.
--8323329-1133757694-1614024546=:3234--


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 20:12:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 20:12:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88372.166147 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEHYl-00065c-Da; Mon, 22 Feb 2021 20:11:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88372.166147; Mon, 22 Feb 2021 20:11:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEHYl-00065V-AZ; Mon, 22 Feb 2021 20:11:59 +0000
Received: by outflank-mailman (input) for mailman id 88372;
 Mon, 22 Feb 2021 20:11:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEHYk-00065M-5i; Mon, 22 Feb 2021 20:11:58 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEHYj-00083l-Sw; Mon, 22 Feb 2021 20:11:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEHYj-0004M4-GT; Mon, 22 Feb 2021 20:11:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lEHYj-0004SP-Fy; Mon, 22 Feb 2021 20:11:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=N0AE4QWDycx3AKdugXn1ofL+YurRgGkb5QVbSp9CN+g=; b=mMIZ4pKFvdxlUAGUdl5K3fZgTy
	S5fjCBC5gLTZkS3hM4XeTUj2hzhAbOusMrl8InCbqPrN23q3YJp7S7p/5YtHKymQzDzN20Cd/abW8
	Z+0gt34FQwnP0/bJJWEz2vRXEa4ybmJ2rqw9mjwoERGhpSoxKdRGR/y36Kk+6/Vu18DM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159540-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 159540: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-shadow:xen-boot:fail:regression
    xen-unstable:test-xtf-amd64-amd64-1:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-xtf-amd64-amd64-5:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-xtf-amd64-amd64-2:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-intel:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-libvirt-xsm:xen-boot:fail:regression
    xen-unstable:test-xtf-amd64-amd64-3:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-coresched-i386-xl:xen-boot:fail:regression
    xen-unstable:test-xtf-amd64-amd64-4:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-amd64-i386-xl-raw:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-libvirt-pair:xen-boot/src_host:fail:regression
    xen-unstable:test-amd64-i386-libvirt-pair:xen-boot/dst_host:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-migrupgrade:xen-boot/dst_host:fail:regression
    xen-unstable:test-amd64-i386-xl:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-freebsd10-i386:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-freebsd10-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-examine:reboot:fail:regression
    xen-unstable:test-amd64-i386-libvirt:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-livepatch:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-pvshim:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-pair:xen-boot/src_host:fail:regression
    xen-unstable:test-amd64-i386-pair:xen-boot/dst_host:fail:regression
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=87a067fd8f4d4f7c6be02c3d38145115ac542017
X-Osstest-Versions-That:
    xen=e8185c5f01c68f7d29d23a4a91bc1be1ff2cc1ca
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 22 Feb 2021 20:11:57 +0000

flight 159540 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159540/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-ws16-amd64  8 xen-boot          fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-debianhvm-amd64  8 xen-boot     fail REGR. vs. 159475
 test-amd64-i386-xl-shadow     8 xen-boot                 fail REGR. vs. 159475
 test-xtf-amd64-amd64-1      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-xtf-amd64-amd64-5      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-xtf-amd64-amd64-2      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-amd64-i386-qemut-rhel6hvm-intel  8 xen-boot         fail REGR. vs. 159475
 test-amd64-i386-libvirt-xsm   8 xen-boot                 fail REGR. vs. 159475
 test-xtf-amd64-amd64-3      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-amd64-i386-xl-qemut-debianhvm-amd64  8 xen-boot     fail REGR. vs. 159475
 test-amd64-coresched-i386-xl  8 xen-boot                 fail REGR. vs. 159475
 test-xtf-amd64-amd64-4      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-amd64-i386-xl-raw        8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-libvirt-pair 12 xen-boot/src_host        fail REGR. vs. 159475
 test-amd64-i386-libvirt-pair 13 xen-boot/dst_host        fail REGR. vs. 159475
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  8 xen-boot  fail REGR. vs. 159475
 test-amd64-i386-xl-xsm        8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-qemut-rhel6hvm-amd  8 xen-boot           fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-ovmf-amd64  8 xen-boot          fail REGR. vs. 159475
 test-amd64-i386-migrupgrade  13 xen-boot/dst_host        fail REGR. vs. 159475
 test-amd64-i386-xl            8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 159475
 test-amd64-i386-qemuu-rhel6hvm-amd  8 xen-boot           fail REGR. vs. 159475
 test-amd64-i386-xl-qemut-win7-amd64  8 xen-boot          fail REGR. vs. 159475
 test-amd64-i386-freebsd10-i386  8 xen-boot               fail REGR. vs. 159475
 test-amd64-i386-freebsd10-amd64  8 xen-boot              fail REGR. vs. 159475
 test-amd64-i386-examine       8 reboot                   fail REGR. vs. 159475
 test-amd64-i386-libvirt       8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  8 xen-boot  fail REGR. vs. 159475
 test-amd64-i386-livepatch     8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 159475
 test-amd64-i386-qemuu-rhel6hvm-intel  8 xen-boot         fail REGR. vs. 159475
 test-amd64-i386-xl-pvshim     8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-ws16-amd64  8 xen-boot          fail REGR. vs. 159475
 test-amd64-i386-pair         12 xen-boot/src_host        fail REGR. vs. 159475
 test-amd64-i386-pair         13 xen-boot/dst_host        fail REGR. vs. 159475
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-win7-amd64  8 xen-boot          fail REGR. vs. 159475

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 159475
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 159475
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 159475
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 159475
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 159475
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 159475
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 159475
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  87a067fd8f4d4f7c6be02c3d38145115ac542017
baseline version:
 xen                  e8185c5f01c68f7d29d23a4a91bc1be1ff2cc1ca

Last test of basis   159475  2021-02-19 06:26:37 Z    3 days
Testing same since   159487  2021-02-20 04:29:29 Z    2 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Norbert Manthey <nmanthey@amazon.de>
  Rahul Singh <rahul.singh@arm.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    fail    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 305 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 20:12:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 20:12:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88376.166161 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEHZE-0006Bs-RG; Mon, 22 Feb 2021 20:12:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88376.166161; Mon, 22 Feb 2021 20:12:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEHZE-0006Bl-OG; Mon, 22 Feb 2021 20:12:28 +0000
Received: by outflank-mailman (input) for mailman id 88376;
 Mon, 22 Feb 2021 20:12:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lEHZD-0006BZ-5V
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 20:12:27 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lEHZC-000849-Kk; Mon, 22 Feb 2021 20:12:26 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lEHZC-0004dL-ER; Mon, 22 Feb 2021 20:12:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=VgtNM/SFmOz7fuSArWQTz0TUKyFubVv15hRl0Q67t+k=; b=E2qB/FrcPXKJRHELO2eADK0qUh
	sDasfoVDUARRb2dLKRySDqWyVej1RqXxnEOUKZEqkHvxRLOlPd28KbzdMnAAtu+jCrxzlnTBFt/gZ
	RlPxcNb21yMugzOzeH7+WLGWTSSO97pasoSPXiH6Bv/q6e3O7+tMUwE7vEDeUU79XMRA=;
Subject: Re: [PATCH for-4.15] xen/sched: Add missing memory barrier in
 vcpu_block()
To: Stefano Stabellini <sstabellini@kernel.org>,
 Jan Beulich <jbeulich@suse.com>
Cc: iwj@xenproject.org, ash.j.wilding@gmail.com,
 Julien Grall <jgrall@amazon.com>, George Dunlap <george.dunlap@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>, xen-devel@lists.xenproject.org
References: <20210220194701.24202-1-julien@xen.org>
 <744ca7e5-328d-0c5f-bc52-e4c0e78dad97@suse.com>
 <alpine.DEB.2.21.2102221208050.3234@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <b68a644f-8b9c-3e1d-49c6-4058d276228b@xen.org>
Date: Mon, 22 Feb 2021 20:12:24 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2102221208050.3234@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit



On 22/02/2021 20:09, Stefano Stabellini wrote:
> On Mon, 22 Feb 2021, Jan Beulich wrote:
>> On 20.02.2021 20:47, Julien Grall wrote:
>>> From: Julien Grall <jgrall@amazon.com>
>>>
>>> The comment in vcpu_block() states that the events should be checked
>>> /after/ blocking to avoid a wakeup-waiting race. However, from a generic
>>> perspective, set_bit() doesn't prevent re-ordering. So the following
>>> could happen:
>>>
>>> CPU0  (blocking vCPU A)         |   CPU1 (unblocking vCPU A)
>>>                                  |
>>> A <- read local events          |
>>>                                  |   set local events
>>>                                  |   test_and_clear_bit(_VPF_blocked)
>>>                                  |       -> Bail out as the bit is not set
>>>                                  |
>>> set_bit(_VPF_blocked)           |
>>>                                  |
>>> check A                         |
>>>
>>> The variable A will be 0 and therefore the vCPU will be blocked when it
>>> should continue running.
>>>
>>> vcpu_block() is now gaining an smp_mb__after_atomic() to prevent the CPU
>>> from reading any information about local events before the flag _VPF_blocked
>>> is set.
>>>
>>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>
>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> 
> Acked-by: Stefano Stabellini <sstabellini@kernel.org>
> 
> 
> 
>>> This is a follow-up of the discussion that started in 2019 (see [1])
>>> regarding a possible race between do_poll()/vcpu_unblock() and the wake
>>> up path.
>>>
>>> I haven't yet fully thought about the potential race in do_poll(). If
>>> there is one, then it would likely want to be fixed in a separate patch.
>>>
>>> On x86, the current code is safe because set_bit() is fully ordered. So
>>> the problem is Arm (and potentially any new architectures).
>>>
>>> I couldn't convince myself whether the Arm implementation of
>>> local_events_need_delivery() contains enough barriers to prevent the
>>> re-ordering. However, I don't think we want to play with the devil here,
>>> as the function may be optimized in the future.
>>
>> In fact I think this ...
>>
>>> --- a/xen/common/sched/core.c
>>> +++ b/xen/common/sched/core.c
>>> @@ -1418,6 +1418,8 @@ void vcpu_block(void)
>>>   
>>>       set_bit(_VPF_blocked, &v->pause_flags);
>>>   
>>> +    smp_mb__after_atomic();
>>> +
>>
>> ... pattern should be looked for throughout the codebase, and barriers
>> be added unless it can be proven none is needed.
> 
> And in that case it would be best to add an in-code comment to explain
> why the barrier is not needed.
I would rather not add a comment for every *_bit() call. It should be 
pretty obvious for most of them that the barrier is not necessary.

We should only add comments where the barrier is necessary or it is not 
clear why it is not necessary.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 20:35:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 20:35:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88385.166177 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEHvA-0008Bn-QB; Mon, 22 Feb 2021 20:35:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88385.166177; Mon, 22 Feb 2021 20:35:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEHvA-0008Bg-Mc; Mon, 22 Feb 2021 20:35:08 +0000
Received: by outflank-mailman (input) for mailman id 88385;
 Mon, 22 Feb 2021 20:35:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zptr=HY=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lEHv8-0008Bb-TP
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 20:35:06 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9f1ff0bb-c2c7-4c11-87df-b747571a6764;
 Mon, 22 Feb 2021 20:35:06 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 231F864E05;
 Mon, 22 Feb 2021 20:35:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9f1ff0bb-c2c7-4c11-87df-b747571a6764
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1614026105;
	bh=tnL3z2looJ3vk2lLI/WaSNlC0mRPUEjmHywP8Ef6kS0=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=aCBmEjtYlXlkuVdxsN0tsad3334No51kAm/EqkWggf9nML/roBnXQz9i+/q5wCgwf
	 hINTCRrl0JQCKXzonXvFyrDuLRKOk3r+noqAHgQlFKe64DJxNCCbsjiRAWXXtmoTyo
	 +7T3WNrExANqGImOAs5BaDtO5rwIWvXyEOL0SA2Xg1+9gzkGSxA3H2uuiWUhhCOmUn
	 1CcbQVl7OY2xzod3yRanE8VoOKOZyhCRbdgQzrSfPFp64nj403/PxTirOh4xJ2X0At
	 r/zO5WQTDj9s7ztmyvRIhjowzl3vM1dtA+e8zH3KijpAIEZuSPY+e51LLk+xLF+72E
	 qrh7xsCflXYpw==
Date: Mon, 22 Feb 2021 12:35:04 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Bertrand Marquis <Bertrand.Marquis@arm.com>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Julien Grall <jgrall@amazon.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH for-next] xen/arm: mm: flush_page_to_ram() only need to
 clean to PoC
In-Reply-To: <45cd6455-3ad0-f052-65d8-37adb658f003@xen.org>
Message-ID: <alpine.DEB.2.21.2102221220000.3234@sstabellini-ThinkPad-T480s>
References: <20210220175413.14640-1-julien@xen.org> <FC521246-BD88-4D8C-82B7-6C3EFC8B00D0@arm.com> <45cd6455-3ad0-f052-65d8-37adb658f003@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 22 Feb 2021, Julien Grall wrote:
> On 22/02/2021 11:58, Bertrand Marquis wrote:
> > Hi Julien,
> > 
> > > On 20 Feb 2021, at 17:54, Julien Grall <julien@xen.org> wrote:
> > > 
> > > From: Julien Grall <jgrall@amazon.com>
> > > 
> > > At the moment, flush_page_to_ram() is both cleaning and invalidating the
> > > page to PoC. However, the cache line can be speculatively pulled back into
> > > the cache right after, as the page is part of the direct map.
> > 
> > If we go further through this logic, maybe all calls to
> > clean_and_invalidate_dcache_va_range could be transformed into a
> > clean_dcache_va_range.
> 
> Likely yes. But I need to go through them one by one to confirm it is fine
> to do (it also depends on the caching attributes used). I have sent this
> one in advance because this was discussed as part of XSA-364.
> 
> > 
> > > 
> > > So it is pointless to try to invalidate the line in the data cache.
> > > 
> > 
> > But what about processors which would not speculate ?
> > 
> > Do you expect any performance optimization here ?
> 
> When invalidating a line, you effectively remove it from the cache. If the
> page is going to be accessed a bit later, then you will have to load it from
> memory (or another cache).
> 
> With this change, you would only need to re-fetch the line if it wasn't
> evicted by the time it is accessed.
> 
> The line would be clean, so I would expect the eviction to have less of an
> impact than re-fetching from memory.
> 
> > 
> > If so it might be good to explain it as I am not quite sure I get it.
> 
> This change is less about performance and more about unnecessary work.
> 
> The processor is likely going to be more clever than the developer, and the
> exact numbers will vary depending on how the processor decides to manage the
> cache.
> 
> In general, we should avoid interfering too much with the cache without a
> good reason to do it.
> 
> How about the following commit message:
> 
> "
> At the moment, flush_page_to_ram() is both cleaning and invalidating the
> page to PoC.
> 
> The goal of flush_page_to_ram() is to prevent corruption when the guest has
> disabled the cache (the cache line may be dirty) and would otherwise read
> the previous content.
> 
> Per this definition, invalidating the line is not necessary. In fact, it
> may be counter-productive, as the line may be (speculatively) accessed again
> a bit later; this would incur an expensive access to memory.
> 
> More generally, we should avoid interfering too much with the cache.
> Therefore, flush_page_to_ram() is updated to only clean the page to PoC.
> 
> The performance impact of this change will depend on your workload/processor.
> "
 
From a correctness and functionality perspective, we don't need the
invalidate. If the line is dirty we are writing it back to memory (point
of coherence) thanks to the clean operations anyway. If somebody writes
to that location, the processor should evict the old line anyway.

The only reason I can think of for doing a "clean and invalidate" rather
than just a "clean" would be that we are trying to give a hint to the
processor that the cacheline is soon to be evicted. Assuming that the
hint even leads to some sort of performance optimization.

In practice, aside from being CPU specific, we don't know if it is even
an optimization or a pessimization.
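
The change under discussion can be sketched as follows, with Xen's Arm
cache-maintenance helpers stubbed out for illustration. The real
clean_dcache_va_range() and clean_and_invalidate_dcache_va_range() walk the
range with DC CVAC / DC CIVAC instructions; the recording variable and the
before/after function names here are purely illustrative.

```c
#define PAGE_SIZE 4096

/* Records whether the last operation also invalidated (illustrative only;
 * the real helpers issue DC CVAC / DC CIVAC over each line of the range). */
static int last_op_invalidated = -1;

/* Stub for Xen's Arm helper: clean each line in the range to PoC. */
static void clean_dcache_va_range(const void *va, unsigned long size)
{
    (void)va; (void)size;
    last_op_invalidated = 0;
}

/* Stub for Xen's Arm helper: clean and invalidate each line to PoC. */
static void clean_and_invalidate_dcache_va_range(const void *va,
                                                 unsigned long size)
{
    (void)va; (void)size;
    last_op_invalidated = 1;
}

/* Before the patch: the invalidate is extra work, as the line can be
 * speculatively pulled back in right after (the page is in the direct map). */
static void flush_page_to_ram_before(void *va)
{
    clean_and_invalidate_dcache_va_range(va, PAGE_SIZE);
}

/* After the patch: a clean to PoC is sufficient to make dirty lines
 * visible to a guest running with its cache disabled. */
static void flush_page_to_ram_after(void *va)
{
    clean_dcache_va_range(va, PAGE_SIZE);
}
```

The observable difference is only which maintenance operation is issued;
correctness relies on the clean alone, as argued above.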

In any case, on the grounds that it is unnecessary, I am OK with this.
I agree with Julien's proposal of applying this patch "for-next".

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 20:51:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 20:51:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88389.166192 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEIAT-0001eb-75; Mon, 22 Feb 2021 20:50:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88389.166192; Mon, 22 Feb 2021 20:50:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEIAT-0001eU-44; Mon, 22 Feb 2021 20:50:57 +0000
Received: by outflank-mailman (input) for mailman id 88389;
 Mon, 22 Feb 2021 20:50:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lEIAR-0001eP-Bu
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 20:50:55 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lEIAP-0000HJ-Pd; Mon, 22 Feb 2021 20:50:53 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lEIAP-0008Gh-IF; Mon, 22 Feb 2021 20:50:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=id1+ESmZla9PF9KGl/R6EdoleGz1ZPJKW8zcIwvjDhc=; b=IpIdKmez3LE2ZSJk7TKP79L2Zr
	JQx/yJAUT//5yI/6lez8M/AR20RF1UmB9jvtC/Uf25QNvoEjntYegr+30P/InRq7grlsTMgpZqbQH
	ZptBhEtRPmhOjDBWNC6+1obkRugSiieTrqGoml2XfOrZTJLav05h/KXCZxBc8up2/JkE=;
Subject: Re: [PATCH for-next] xen/arm: mm: flush_page_to_ram() only need to
 clean to PoC
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Julien Grall <jgrall@amazon.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20210220175413.14640-1-julien@xen.org>
 <FC521246-BD88-4D8C-82B7-6C3EFC8B00D0@arm.com>
 <45cd6455-3ad0-f052-65d8-37adb658f003@xen.org>
 <alpine.DEB.2.21.2102221220000.3234@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <ec608001-7663-961b-667c-bcf6397f1864@xen.org>
Date: Mon, 22 Feb 2021 20:50:51 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2102221220000.3234@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 22/02/2021 20:35, Stefano Stabellini wrote:
> On Mon, 22 Feb 2021, Julien Grall wrote:
>> On 22/02/2021 11:58, Bertrand Marquis wrote:
>>> Hi Julien,
>>>
>>>> On 20 Feb 2021, at 17:54, Julien Grall <julien@xen.org> wrote:
>>>>
>>>> From: Julien Grall <jgrall@amazon.com>
>>>>
>>>> At the moment, flush_page_to_ram() is both cleaning and invalidating the
>>>> page to PoC. However, the cache line can be speculatively pulled back into
>>>> the cache right after, as the page is part of the direct map.
>>>
>>> If we go further through this logic, maybe all calls to
>>> clean_and_invalidate_dcache_va_range could be transformed into a
>>> clean_dcache_va_range.
>>
>> Likely yes. But I need to go through them one by one to confirm it is fine
>> to do (it also depends on the caching attributes used). I have sent this
>> one in advance because this was discussed as part of XSA-364.
>>
>>>
>>>>
>>>> So it is pointless to try to invalidate the line in the data cache.
>>>>
>>>
>>> But what about processors which would not speculate ?
>>>
>>> Do you expect any performance optimization here ?
>>
>> When invalidating a line, you effectively remove it from the cache. If the
>> page is going to be accessed a bit later, then you will have to load it from
>> memory (or another cache).
>>
>> With this change, you would only need to re-fetch the line if it wasn't
>> evicted by the time it is accessed.
>>
>> The line would be clean, so I would expect the eviction to have less of an
>> impact than re-fetching from memory.
>>
>>>
>>> If so it might be good to explain it as I am not quite sure I get it.
>>
>> This change is less about performance and more about unnecessary work.
>>
>> The processor is likely going to be more clever than the developer, and the
>> exact numbers will vary depending on how the processor decides to manage the
>> cache.
>>
>> In general, we should avoid interfering too much with the cache without a
>> good reason to do it.
>>
>> How about the following commit message:
>>
>> "
>> At the moment, flush_page_to_ram() is both cleaning and invalidating the
>> page to PoC.
>>
>> The goal of flush_page_to_ram() is to prevent corruption when the guest has
>> disabled the cache (the cache line may be dirty) and would otherwise read
>> the previous content.
>>
>> Per this definition, invalidating the line is not necessary. In fact, it
>> may be counter-productive, as the line may be (speculatively) accessed again
>> a bit later; this would incur an expensive access to memory.
>>
>> More generally, we should avoid interfering too much with the cache.
>> Therefore, flush_page_to_ram() is updated to only clean the page to PoC.
>>
>> The performance impact of this change will depend on your workload/processor.
>> "
>   
>  From a correctness and functionality perspective, we don't need the
> invalidate. If the line is dirty we are writing it back to memory (point
> of coherence) thanks to the clean operations anyway. If somebody writes
> to that location, the processor should evict the old line anyway.

Location as in same physical address or the same set?

For the former, the line is usually bigger than any write. So it is 
unlikely to get evicted.

For the latter, it will depend on the contents of the other ways in the set.

> The only reason I can think of for doing a "clean and invalidate" rather
> than just a "clean" would be that we are trying to give a hint to the
> processor that the cacheline is soon to be evicted. Assuming that the
> hint even leads to some sort of performance optimization.

This may change which lines get evicted, as there will be an unused way. 
But we are now in micro-optimization territory.

If that's a problem for someone, then that user would be better off 
switching to cache coloring, because the impact of flush_page_to_ram() 
will be pretty small compared to the damage that another domain can do 
if it shares the same set.

> In any case, on the grounds that it is unnecessary, I am OK with this.
> I agree with Julien's proposal of applying this patch "for-next".
> 
> Acked-by: Stefano Stabellini <sstabellini@kernel.org>

Thanks! I am thinking of creating a "next" branch again for queuing 4.15+ 
patches. Would that be fine with you?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 21:19:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 21:19:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88395.166210 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEIbt-0003r4-Ik; Mon, 22 Feb 2021 21:19:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88395.166210; Mon, 22 Feb 2021 21:19:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEIbt-0003qx-Fa; Mon, 22 Feb 2021 21:19:17 +0000
Received: by outflank-mailman (input) for mailman id 88395;
 Mon, 22 Feb 2021 21:19:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tmM1=HY=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1lEIbs-0003qr-K2
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 21:19:16 +0000
Received: from aserp2120.oracle.com (unknown [141.146.126.78])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e53f988b-14fa-4098-9c60-255ff6ed8701;
 Mon, 22 Feb 2021 21:19:15 +0000 (UTC)
Received: from pps.filterd (aserp2120.oracle.com [127.0.0.1])
 by aserp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 11MLAftA195557;
 Mon, 22 Feb 2021 21:19:11 GMT
Received: from userp3030.oracle.com (userp3030.oracle.com [156.151.31.80])
 by aserp2120.oracle.com with ESMTP id 36ttcm5669-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Mon, 22 Feb 2021 21:19:10 +0000
Received: from pps.filterd (userp3030.oracle.com [127.0.0.1])
 by userp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 11MLARWv016618;
 Mon, 22 Feb 2021 21:19:10 GMT
Received: from nam10-mw2-obe.outbound.protection.outlook.com
 (mail-mw2nam10lp2101.outbound.protection.outlook.com [104.47.55.101])
 by userp3030.oracle.com with ESMTP id 36ucbwmfvc-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Mon, 22 Feb 2021 21:19:09 +0000
Received: from BYAPR10MB3288.namprd10.prod.outlook.com (2603:10b6:a03:156::21)
 by BYAPR10MB2678.namprd10.prod.outlook.com (2603:10b6:a02:a9::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3846.30; Mon, 22 Feb
 2021 21:19:08 +0000
Received: from BYAPR10MB3288.namprd10.prod.outlook.com
 ([fe80::f489:4e25:63e0:c721]) by BYAPR10MB3288.namprd10.prod.outlook.com
 ([fe80::f489:4e25:63e0:c721%7]) with mapi id 15.20.3868.031; Mon, 22 Feb 2021
 21:19:08 +0000
Received: from [10.74.102.77] (138.3.200.13) by
 SJ0PR05CA0092.namprd05.prod.outlook.com (2603:10b6:a03:334::7) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3890.8 via Frontend Transport; Mon, 22 Feb 2021 21:19:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e53f988b-14fa-4098-9c60-255ff6ed8701
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : in-reply-to : content-type :
 content-transfer-encoding : mime-version; s=corp-2020-01-29;
 bh=FnSLpPpx6znBKYOeSTHZG3qtMo96LSccbYN8BJYyelU=;
 b=gLJMf9Jhk5UsG/bgP172QLJfDZjVM/PYP+ScNsGaNAJLzeX99SYKNO3sgS665+4Q0iw8
 N5rGkTFCs/+KGv772knK3kc5F0nZyQ8qvM9oWQ7d0LOQdTSk/owc+gml7dpTKb2oME8M
 v/42NNlsylG9dqhushyvuP6zJ/Iram1wyQfGKqKpXQiUh4YHkU+U30q6kcnmd5Uioen4
 UEnTg5Zc/ZeGaAI73wpwfK8fdHoiKYmynjvpLv5hqb+OzyBBl/giA0xiUG+jb/lnmyug
 3fiGcx3BdFAvA9XRCetR18iKo5uF3RgY8gommg2TU9t42N7PxONRogTiIag56M0QyvDv 4w== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=daxE374AKZ1UOri811Iq/KVgE7kKlLl24HunBtx9pfUrQbqu3Dc/7/2KvDI4n0DzgDaT3q5qnO0UFgHRK1bhWzF0qOGIUy7DeVPRO2VIIRPbjKPSO+G+3KeLONsjVlBjfuQp8/kFs2pbltg4N5fI80x7NQnyPfDHBaTsvfkf50n7PX9iQq2NWqQTxgzXdFwRJz+l0nUgG3e67ppjs3a5dPJSkJ7k1HE+hZvbMBTwkNwUDv/d8cBCg7OXMjgEz2p/HuI6Jvhf7SxINZpH9jRBOExj4EBxjtrcrpj/JSToRVmy0iNnNMFOw0ZgCYhdtgIXy1UHIj8NZJGIO8on149HmQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=FnSLpPpx6znBKYOeSTHZG3qtMo96LSccbYN8BJYyelU=;
 b=TkycPZ7uJfGIEZHBR6edQyKDpqQZy3jDcPsQJOogxXFPjQvQeWK1jZ0inuJA9Q1r1g/uxkg49XIBVgDQOlaIcIXw/XA76s/fBj5MqYYsEHgcELN7B3nLNifQshnUDQSLErnsLe9UVzyl3pkC/L3zYf56bf+b9ll6ibuSPbVHWmZ8SgibD219JuEv5guv/+WQHuq1ZQ4uzdtehNxLXEJktt4iL4Yv954QXJixBExAqa5EFhpzgVjcX8YZGnVD9NKQQRoytZorlmA7NnDkWeuxUelCz5JAlYY0YdPxBaMtHiMG3PyhUKeckcS/lKZGvMuSKC6zIbp//0fU2XDKKE38rw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=FnSLpPpx6znBKYOeSTHZG3qtMo96LSccbYN8BJYyelU=;
 b=tAIQA7Izq/1xyuI096wJKjxyLYhr09mswN2ndwC/nJU7EV2VNDqVkg+0NSyOi7HqClg3q3BHK6hO/zIQ2+BIoHBDrCMABdgvi1frYtAUbfoyiskUe3r+MU+d+M2m336CKGckEqzktXcDv/8P8pKcGiivLUgXJd2DD3oem7H7LBg=
Authentication-Results: intel.com; dkim=none (message not signed)
 header.d=none;intel.com; dmarc=none action=none header.from=oracle.com;
Subject: Re: [PATCH v2 2/4] x86: Introduce MSR_UNHANDLED
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, iwj@xenproject.org, wl@xen.org,
        anthony.perard@citrix.com, jbeulich@suse.com,
        andrew.cooper3@citrix.com, jun.nakajima@intel.com,
        kevin.tian@intel.com
References: <1611182952-9941-1-git-send-email-boris.ostrovsky@oracle.com>
 <1611182952-9941-3-git-send-email-boris.ostrovsky@oracle.com>
 <YC5GrgqwsR/eBwpy@Air-de-Roger>
 <4e585daa-f255-fbff-d1cf-38ef49f146f5@oracle.com>
 <YDOQvU1h8zpOv5PH@Air-de-Roger>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <ce2ef7a3-0583-ffff-182a-0ab078f45b82@oracle.com>
Date: Mon, 22 Feb 2021 16:19:04 -0500
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
In-Reply-To: <YDOQvU1h8zpOv5PH@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-US
X-Originating-IP: [138.3.200.13]
X-ClientProxiedBy: SJ0PR05CA0092.namprd05.prod.outlook.com
 (2603:10b6:a03:334::7) To BYAPR10MB3288.namprd10.prod.outlook.com
 (2603:10b6:a03:156::21)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 87a97751-b4ea-4af1-eff5-08d8d7777d51
X-MS-TrafficTypeDiagnostic: BYAPR10MB2678:
X-Microsoft-Antispam-PRVS: 
	<BYAPR10MB2678AFC7F07AA6C03CCB88FB8A819@BYAPR10MB2678.namprd10.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 
	rasqxYmP6RhVylYeqq8HIpPLzMCozS5pLLJ235ngRJLYncZY6Cjvz+7ZkXtYeQHSjHSjASynA1du7xH0fOrXaOeLsEMz4d9IowfCoSRbLx3K3RHAPgA+Sbuy7FiIi1yK/3JZslAEnK9FHpP5mkIB7BGbf+jaJtvLy2y2PGQ0nF+O413g0y8lSqhiM78aM7nYyjiZsFFI1pJj6PpqUXgBjWCkrBctpbfZs18Ky5OfPngY4d0uPhEwQYMZGMIQeU3J+2TGNu+sULMXhnSGYNSloZ8jQSOeIHGdrPD84VFsCBrQAMkI/CozVaQScKVt7QP5cQvlW599Ija8e/RZfTHYLpMKGAk3uXMlNKx5CBKOdHrKqru4Pk/JS6fZ0+sFHTkUqMdIo5ekOUwYcO+/xxePvxB0Xae1tkXuPr5qXrqItEuksolnjLhmZmF1n60bVzn7r1MMkkReIVfZJkEqSLkqxGiwBFXaMkjKgNhEhm94Qi+VPy2qXqGQWwmvKF2hapr38DblvH7iwrc7nM0/IfpNYmi1ARHozyQU6XI4/OzyR2iMGCK/SXImDsbdvjgkK/YX1leMhiUUaqHkWFm/zDBXzcfvm34F6bi4l+ilp/Pes7M=
X-Forefront-Antispam-Report: 
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR10MB3288.namprd10.prod.outlook.com;PTR:;CAT:NONE;SFS:(136003)(396003)(376002)(39860400002)(346002)(366004)(478600001)(36756003)(6916009)(316002)(86362001)(956004)(53546011)(186003)(16526019)(5660300002)(8936002)(4326008)(31696002)(26005)(2616005)(83380400001)(44832011)(6486002)(16576012)(66946007)(31686004)(66556008)(66476007)(2906002)(8676002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: 
	=?utf-8?B?NEIyalJValNLNXFLOFFzWjFkNlY5UFcxZS8wYmEza1FFekZMWVV6dzZPeWNH?=
 =?utf-8?B?eHFhMnlKbVd0Q1lNV0IrQ2VlUmhPZE03bUhnajhsazUrMmcveGdjV0ZaMm10?=
 =?utf-8?B?ZXdFalRIUE52alB0ZWZsTVRqY1BPeEdhK2FzWFlGMW1aM0FxOGJESkxBNmtv?=
 =?utf-8?B?NTVOdWdjTzgvZ1F4Q1NEUzM1R1lqM3pkendqOEtkbGxwQWs1cUZEU2pOb0ZB?=
 =?utf-8?B?SEZaZitVUFZPY2QwVCt1NVVKaXJwRWVHUUpNMmdidURacFg2SlVtL3J2Qldq?=
 =?utf-8?B?RzB0VWhoMnJNSmU0MGxMMlUrQ21TeENTeTFxS08wV21jckUvUkNEaFQ1b3Rn?=
 =?utf-8?B?MXJCMk05KzE3clIxV0Y3dDJITVYzdmUzMnVoMk8xM2hHSFFZTmRnM28zQ3Fn?=
 =?utf-8?B?UjZQV3VDOTA1YlZRK3pXK0I1RkFrb1JyMjk1VjRvR2tsUDJMT1hUMlJ5aEt4?=
 =?utf-8?B?a2xqUUkwMjkvOHhFNWNUejVDK25EQ0k0VHpzWExaa1c5aTRQYnM2RmJZU0FK?=
 =?utf-8?B?M3NnSFFLcVdDZ0lPb2tZL2RsRCtBNTRPdTlIZ214aCtZOW1nZjlGUzVwcU9u?=
 =?utf-8?B?YmhqbjE1Y1p2dVJWT2ROU2tuRElOTnkvUDQ3NUVwTlpscFBGL0xXNzBmeXBW?=
 =?utf-8?B?blZiUjYyNCsyV1RhM0dsTkl5d3gvNzIycjR4MTBad1NLcG5BT0hmSXhkaDBn?=
 =?utf-8?B?QlYwUDV6QzA5UlFMaGtWY3hjdk9PRVVSYk1aSWRGUDk4ckw4cHJkTW44cHlt?=
 =?utf-8?B?L1c2VS9SZjNFWGhlMHB1SlRSZ3pSd2hVb3U4em16MXdxeEE1UW9leTB1TnpP?=
 =?utf-8?B?VXdTL3U0TFh4Z25zd3F5OUxvenh1ai9veDN1Z24wQ3gwT3BhTjZHay9mR1Ey?=
 =?utf-8?B?clcrWmVIdXlwZ3gxNVovTklrajFIOVljVVZ2Q3lwU05GZDZHRXZzVEZCQmpV?=
 =?utf-8?B?MHM4OXRqMW83UGR3bzkxaDAzOC9CZENGN2ZKck5FaUhsbXllZmNMVHozbzlT?=
 =?utf-8?B?bmJEeE4zc1dRSlBuNGU5dXRlbkg5YUlCWGF0dHhRelJ5S2lTVW0vQkFMY3dP?=
 =?utf-8?B?K1JpWEpkc0RteXQ5b2JYYVBZVW5SNHkzRDZOWG4rek1oZkJiQ2t1R3ZVcFcv?=
 =?utf-8?B?SmdVZWlYcmZOMSs0ZURsVnBENG43dGo3a1JHcmlCeVV2cE9ja2EwOSthWjdo?=
 =?utf-8?B?V3czVGxyZFJmZVNTa3lybUFQWWxjeDZ6aUdGMi91ZHd4Z1JPV21mWktyUzk0?=
 =?utf-8?B?WXkzZ1BESTlvZVo0L0oybTFyZTNqZTRrRkNwMnVhdmdsRjkzSzhycCtiL0o3?=
 =?utf-8?B?SElQaTQ2NGJxS2dCdVBITjFIZ3FDMzNSTFY5VDgxK25lbTdHTkM1b1lEek9y?=
 =?utf-8?B?UnJJeGQrSEtBSVlFK0VwazRYMHVqODU0ZU1LR2RVVWQxNnlBRTNoY2lWbWJ5?=
 =?utf-8?B?aXBJQnFuRjQxTFhQTVlSUmhTOTg5ZXBETGpHZnhCbnFLamxUTDB6Nys4aS95?=
 =?utf-8?B?YmFZeERUbzN1M1UrOVR1aFMxajI4eXRJTVpRNng4Ui9Ocm1MZHNBbTlkTzlh?=
 =?utf-8?B?ZEl4Rk1KNFUvOU5JUEZFV0JVUHoySUN0WGJhM0dnVnVINEErODdDNVY2dnMw?=
 =?utf-8?B?dVR2SFd2OFJpV0JMWEpOVStYckpHdWl4SVpXYmhMOWZTK0ovTW4zbkVqMHNh?=
 =?utf-8?B?S1dBYUpVUHNOeHhIc0JLLzhZRHl0Y08yei9LTnNDQVZjZ3RQRW1WZHRZSmEw?=
 =?utf-8?Q?JEU6xURdEtNVw/w8BwyWm5nKN+pfSsvSBTUGofX?=
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 87a97751-b4ea-4af1-eff5-08d8d7777d51
X-MS-Exchange-CrossTenant-AuthSource: BYAPR10MB3288.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Feb 2021 21:19:07.8805
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 157n12+H0JGzC0el3/8G8HZm/tJnyRo0zGZe2EYZJqGkWzppTFOTAyuteBBbNR+4kVBB/Qf75V98SdfD45xL79i8mx2l0CTo4fxf1yGAwnM=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR10MB2678
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9903 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxlogscore=999 adultscore=0
 phishscore=0 spamscore=0 suspectscore=0 bulkscore=0 malwarescore=0
 mlxscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2102220186
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9903 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxlogscore=999 adultscore=0
 lowpriorityscore=0 spamscore=0 mlxscore=0 bulkscore=0 clxscore=1015
 priorityscore=1501 malwarescore=0 impostorscore=0 suspectscore=0
 phishscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2102220186


On 2/22/21 6:08 AM, Roger Pau Monné wrote:
> On Fri, Feb 19, 2021 at 09:56:32AM -0500, Boris Ostrovsky wrote:
>> On 2/18/21 5:51 AM, Roger Pau Monné wrote:
>>> On Wed, Jan 20, 2021 at 05:49:10PM -0500, Boris Ostrovsky wrote:
>>>> When toolstack updates MSR policy, this MSR offset (which is the last
>>>> index in the hypervisor MSR range) is used to indicate hypervisor
>>>> behavior when guest accesses an MSR which is not explicitly emulated.
>>> It's kind of weird to use an MSR to store this. I assume this is done
>>> for migration reasons?
>>
>> Not really. It just seemed to me that MSR policy is the logical place to do that. Because it *is* a policy of how to deal with such accesses, isn't it?
> I agree that using the msr_policy seems like the most suitable place
> to convey this information between the toolstack and Xen. I wonder if
> it would be fine to have fields in msr_policy that don't directly
> translate into an MSR value?


We have xen_msr_entry_t.flags that we can use when passing the policy array back and forth. Then we can ignore xen_msr_entry_t.idx for that entry (although in an earlier version of this series Jan preferred to use idx and leave flags alone).


>
> But having such a list of ignored MSRs in msr_policy makes the whole
> get/set logic a bit weird, as the user would have to provide a buffer
> in order to get the list of ignored MSRs.


If we go with ranges/lists of ignored MSRs then we will need to have ignore_msrs as a rangeset in msr_policy, not as the (current) uint8. And xen_msr_entry_t will need to carry a range as opposed to a single index. Or maybe I don't understand what you are referring to as the get/set logic.


But I would like to make sure we really want to support such ranges; I am a little concerned about over-engineering this (especially wrt migration, where I think it will need marshalling/unmarshalling).


>>> Isn't is possible to convey this data in the xl migration stream
>>> instead of having to pack it with MSRs?
>>
>> I haven't looked at this but again --- the feature itself has nothing to do with migration. The fact that folding it into policy makes migration of this information "just work" is just a nice side benefit of this implementation.
> IMO it feels slightly weird that we have to use a MSR (MSR_UNHANDLED)
> to store this option, seems like wasting an MSR index when there's
> really no need for it to be stored in an MSR, as we don't expose it to
> guests.
>
> It would seem more natural for such an option to live in arch_domain
> as a rangeset, for example.
>
> Maybe introduce a new DOMCTL to set it?
>
> #define XEN_DOMCTL_msr_set_ignore ...
> struct xen_domctl_msr_set_ignore {
>     uint32_t index;
>     uint32_t size;
> };


That would work too, but it means adding 2 new domctls (I think we will need a "get" too), whereas with the policy we use an existing interface.


-boris



From xen-devel-bounces@lists.xenproject.org Mon Feb 22 21:42:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 21:42:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88399.166222 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEIyY-0006Y1-HH; Mon, 22 Feb 2021 21:42:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88399.166222; Mon, 22 Feb 2021 21:42:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEIyY-0006Xu-Dr; Mon, 22 Feb 2021 21:42:42 +0000
Received: by outflank-mailman (input) for mailman id 88399;
 Mon, 22 Feb 2021 21:42:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEIyW-0006Xm-U7; Mon, 22 Feb 2021 21:42:40 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEIyW-00018M-JI; Mon, 22 Feb 2021 21:42:40 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEIyW-0007jy-9r; Mon, 22 Feb 2021 21:42:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lEIyW-0001ZO-9K; Mon, 22 Feb 2021 21:42:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=ckwN8Y5KwWIbJUa/wZCcUMK8uTerAu0hzLYc7ZNNJjg=; b=YhNmrFqQeZNZ2YrRmeOA33yzoB
	FU9QeEbuqsqqWPRIDrikqW3TkdgAI+sgovhiwHK9ZFYaZp8DidBcCBmFlzYVaJvmIICRxuxKWjJHw
	E/FF6S3e5SR/4G/WjQQL+b9Bg5IOsDglvV/EdBAq3vjXwMRupjDFN1bWXkKfqqB8xGLA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [xen-unstable bisection] complete test-amd64-i386-xl-qemuu-debianhvm-amd64
Message-Id: <E1lEIyW-0001ZO-9K@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 22 Feb 2021 21:42:40 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-i386-xl-qemuu-debianhvm-amd64
testid xen-boot

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  4dc1815991420b809ce18dddfdf9c0af48944204
  Bug not present: 2d824791504f4119f04f95bafffec2e37d319c25
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/159560/


  commit 4dc1815991420b809ce18dddfdf9c0af48944204
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Fri Feb 19 17:19:56 2021 +0100
  
      x86/PV: harden guest memory accesses against speculative abuse
      
      Inspired by
      https://lore.kernel.org/lkml/f12e7d3cecf41b2c29734ea45a393be21d4a8058.1597848273.git.jpoimboe@redhat.com/
      and prior work in that area of x86 Linux, suppress speculation with
      guest specified pointer values by suitably masking the addresses to
      non-canonical space in case they fall into Xen's virtual address range.
      
      Introduce a new Kconfig control.
      
      Note that it is necessary in such code to avoid using "m" kind operands:
      If we didn't, there would be no guarantee that the register passed to
      guest_access_mask_ptr is also the (base) one used for the memory access.
      
      As a minor unrelated change in get_unsafe_asm() the unnecessary "itype"
      parameter gets dropped and the XOR on the fixup path gets changed to be
      a 32-bit one in all cases: This way we avoid pointless REX.W or operand
      size overrides, or writes to partial registers.
      
      Requested-by: Andrew Cooper <andrew.cooper3@citrix.com>
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
      Release-Acked-by: Ian Jackson <iwj@xenproject.org>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable/test-amd64-i386-xl-qemuu-debianhvm-amd64.xen-boot.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-unstable/test-amd64-i386-xl-qemuu-debianhvm-amd64.xen-boot --summary-out=tmp/159560.bisection-summary --basis-template=159475 --blessings=real,real-bisect,real-retry xen-unstable test-amd64-i386-xl-qemuu-debianhvm-amd64 xen-boot
Searching for failure / basis pass:
 159540 fail [host=huxelrebe1] / 159475 [host=chardonnay0] 159453 [host=fiano0] 159424 [host=chardonnay1] 159396 [host=albana1] 159362 [host=albana0] 159335 [host=pinot1] 159315 [host=elbling1] 159202 [host=pinot0] 159134 [host=albana0] 159036 [host=fiano1] 159013 [host=albana1] 158957 ok.
Failure / basis pass flights: 159540 / 158957
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 87a067fd8f4d4f7c6be02c3d38145115ac542017
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 5e7aa904405fa2f268c3af213516bae271de3265
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://xenbits.xen.org/qemu-xen.git#7ea428895af2840d85c524f0bd11a38\
 aac308308-7ea428895af2840d85c524f0bd11a38aac308308 git://xenbits.xen.org/xen.git#5e7aa904405fa2f268c3af213516bae271de3265-87a067fd8f4d4f7c6be02c3d38145115ac542017
Loaded 5001 nodes in revision graph
Searching for test results:
 158957 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 5e7aa904405fa2f268c3af213516bae271de3265
 159013 [host=albana1]
 159036 [host=fiano1]
 159134 [host=albana0]
 159202 [host=pinot0]
 159315 [host=elbling1]
 159335 [host=pinot1]
 159362 [host=albana0]
 159396 [host=albana1]
 159424 [host=chardonnay1]
 159453 [host=fiano0]
 159475 [host=chardonnay0]
 159487 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 87a067fd8f4d4f7c6be02c3d38145115ac542017
 159491 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 87a067fd8f4d4f7c6be02c3d38145115ac542017
 159508 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 87a067fd8f4d4f7c6be02c3d38145115ac542017
 159526 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 87a067fd8f4d4f7c6be02c3d38145115ac542017
 159541 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 5e7aa904405fa2f268c3af213516bae271de3265
 159542 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 87a067fd8f4d4f7c6be02c3d38145115ac542017
 159544 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 7a321c3676250aac5bacb1ae8d7dd22bfe8b1448
 159547 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 928bffb6dd3f1b8a2355418f4c763a6fff714aa7
 159549 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 4dc1815991420b809ce18dddfdf9c0af48944204
 159551 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 b339e3a976b1680f57051adabcb98281198f7eac
 159552 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 6794cdd08ea8b3512c53b8f162cb3f88fef54d0d
 159553 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 6a1d72d3739e330caf728ea07d656d7bf568824b
 159554 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 2d824791504f4119f04f95bafffec2e37d319c25
 159555 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 4dc1815991420b809ce18dddfdf9c0af48944204
 159556 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 2d824791504f4119f04f95bafffec2e37d319c25
 159540 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 87a067fd8f4d4f7c6be02c3d38145115ac542017
 159557 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 4dc1815991420b809ce18dddfdf9c0af48944204
 159558 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 2d824791504f4119f04f95bafffec2e37d319c25
 159560 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 4dc1815991420b809ce18dddfdf9c0af48944204
Searching for interesting versions
 Result found: flight 158957 (pass), for basis pass
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 2d824791504f4119f04f95bafffec2e37d319c25, results HASH(0x5621a0e51190) HASH(0x5621a0e5a880) HASH(0x5621a0909e40) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895\
 af2840d85c524f0bd11a38aac308308 6a1d72d3739e330caf728ea07d656d7bf568824b, results HASH(0x5621a0ef5e18) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 6794cdd08ea8b3512c53b8f162cb3f88fef54d0d, results HASH(0x5621a0ef4710) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f\
 0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 b339e3a976b1680f57051adabcb98281198f7eac, results HASH(0x5621a0ef34e8) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 928bffb6dd3f1b8a2355418f4c763a6fff714aa7, results HASH(0x5621a0eeb318) For basis failure, parent search stopping at c3038e718a19\
 fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 7a321c3676250aac5bacb1ae8d7dd22bfe8b1448, results HASH(0x5621a0ecf600) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 5e7aa904405fa2f268c3af213516bae271de3265, results HASH(0x5621a0e6134\
 0) HASH(0x5621a0e583d0) Result found: flight 159487 (fail), for basis failure (at ancestor ~75)
 Repro found: flight 159541 (pass), for basis pass
 Repro found: flight 159542 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 2d824791504f4119f04f95bafffec2e37d319c25
No revisions left to test, checking graph state.
 Result found: flight 159554 (pass), for last pass
 Result found: flight 159555 (fail), for first failure
 Repro found: flight 159556 (pass), for last pass
 Repro found: flight 159557 (fail), for first failure
 Repro found: flight 159558 (pass), for last pass
 Repro found: flight 159560 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  4dc1815991420b809ce18dddfdf9c0af48944204
  Bug not present: 2d824791504f4119f04f95bafffec2e37d319c25
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/159560/


  commit 4dc1815991420b809ce18dddfdf9c0af48944204
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Fri Feb 19 17:19:56 2021 +0100
  
      x86/PV: harden guest memory accesses against speculative abuse
      
      Inspired by
      https://lore.kernel.org/lkml/f12e7d3cecf41b2c29734ea45a393be21d4a8058.1597848273.git.jpoimboe@redhat.com/
      and prior work in that area of x86 Linux, suppress speculation with
      guest specified pointer values by suitably masking the addresses to
      non-canonical space in case they fall into Xen's virtual address range.
      
      Introduce a new Kconfig control.
      
      Note that it is necessary in such code to avoid using "m" kind operands:
      If we didn't, there would be no guarantee that the register passed to
      guest_access_mask_ptr is also the (base) one used for the memory access.
      
      As a minor unrelated change in get_unsafe_asm() the unnecessary "itype"
      parameter gets dropped and the XOR on the fixup path gets changed to be
      a 32-bit one in all cases: This way we avoid pointless REX.W or operand
      size overrides, or writes to partial registers.
      
      Requested-by: Andrew Cooper <andrew.cooper3@citrix.com>
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
      Release-Acked-by: Ian Jackson <iwj@xenproject.org>

Revision graph left in /home/logs/results/bisect/xen-unstable/test-amd64-i386-xl-qemuu-debianhvm-amd64.xen-boot.{dot,ps,png,html,svg}.
----------------------------------------
159560: tolerable ALL FAIL

flight 159560 xen-unstable real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/159560/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-debianhvm-amd64  8 xen-boot    fail baseline untested


jobs:
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Mon Feb 22 23:05:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 23:05:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88422.166244 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEKGV-0005hA-2a; Mon, 22 Feb 2021 23:05:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88422.166244; Mon, 22 Feb 2021 23:05:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEKGU-0005h3-Vi; Mon, 22 Feb 2021 23:05:18 +0000
Received: by outflank-mailman (input) for mailman id 88422;
 Mon, 22 Feb 2021 23:05:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zptr=HY=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lEKGU-0005gy-AQ
 for xen-devel@lists.xenproject.org; Mon, 22 Feb 2021 23:05:18 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 860e5a30-5a20-4b0c-abf6-7ae81031fdef;
 Mon, 22 Feb 2021 23:05:17 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 4A3BD64E20;
 Mon, 22 Feb 2021 23:05:16 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 860e5a30-5a20-4b0c-abf6-7ae81031fdef
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1614035116;
	bh=OlkGaVDCTD0em0cJ90yqyCtlDfX6TULh+vYlXVNqQIM=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=bvgw7au8Yb7HEyIM556kPTMiwK6lED2IstBP1d51kI+LANoiLXvJ76SgtTWOr2w9Q
	 WGU7RZbfwu2+3mZiwxc1kcY2I6s7qbRhMyq/kcshRZLbhxUDuDv8CsuZqxWgtfapPt
	 0uiLecqrzr6fhZHWxk664ggpMW5Xf02qV4swFYlXUGfs+gX3uUlmPSQfFGJhW9Nb/2
	 hIPBNqSfGYv6STpjr2gZwksTGQGQbC7i0d5NL8zoC7XLjM0uhTTmRKBBQJdSyJFtf8
	 Q1n/+ZaOh/eILvcM1hJFhCVx08S8XiIRRutYe2CDtPFuN5jALduOszEGJeAOqP8HZU
	 ChiGNv2WxS9Ow==
Date: Mon, 22 Feb 2021 15:05:15 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Jan Beulich <jbeulich@suse.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>, 
    xen-devel@lists.xenproject.org, cardoe@cardoe.com, 
    andrew.cooper3@citrix.com, wl@xen.org, iwj@xenproject.org, 
    anthony.perard@citrix.com, 
    Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: Re: [PATCH] firmware: don't build hvmloader if it is not needed
In-Reply-To: <3723a430-e7de-017a-294f-4c3fdb35da51@suse.com>
Message-ID: <alpine.DEB.2.21.2102221453080.3234@sstabellini-ThinkPad-T480s>
References: <20210213020540.27894-1-sstabellini@kernel.org> <20210213135056.GA6191@mail-itl> <4d9200cd-bd4b-e429-5c96-7a4399bb00b4@suse.com> <alpine.DEB.2.21.2102161016000.3234@sstabellini-ThinkPad-T480s> <5a574326-9560-e771-b84f-9d4f348b7f5f@suse.com>
 <alpine.DEB.2.21.2102171529460.3234@sstabellini-ThinkPad-T480s> <416e26b7-0e24-a9ee-6f9a-732f77f7e0cc@suse.com> <alpine.DEB.2.21.2102181737310.3234@sstabellini-ThinkPad-T480s> <3723a430-e7de-017a-294f-4c3fdb35da51@suse.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 19 Feb 2021, Jan Beulich wrote:
> On 19.02.2021 02:42, Stefano Stabellini wrote:
> > OK it took me a lot longer than expected (I have never had the dubious
> > pleasure of working with autoconf before) but the following seems to
> > work, tested on both Alpine Linux and Debian Unstable. Of course I had
> > to run autoreconf first.
> > 
> > 
> > diff --git a/config/Tools.mk.in b/config/Tools.mk.in
> > index 48bd9ab731..d5e4f1679f 100644
> > --- a/config/Tools.mk.in
> > +++ b/config/Tools.mk.in
> > @@ -50,6 +50,7 @@ CONFIG_OVMF         := @ovmf@
> >  CONFIG_ROMBIOS      := @rombios@
> >  CONFIG_SEABIOS      := @seabios@
> >  CONFIG_IPXE         := @ipxe@
> > +CONFIG_HVMLOADER    := @hvmloader@
> >  CONFIG_QEMU_TRAD    := @qemu_traditional@
> >  CONFIG_QEMU_XEN     := @qemu_xen@
> >  CONFIG_QEMUU_EXTRA_ARGS:= @EXTRA_QEMUU_CONFIGURE_ARGS@
> > diff --git a/tools/Makefile b/tools/Makefile
> > index 757a560be0..6cff5766f3 100644
> > --- a/tools/Makefile
> > +++ b/tools/Makefile
> > @@ -14,7 +14,7 @@ SUBDIRS-y += examples
> >  SUBDIRS-y += hotplug
> >  SUBDIRS-y += xentrace
> >  SUBDIRS-$(CONFIG_XCUTILS) += xcutils
> > -SUBDIRS-$(CONFIG_X86) += firmware
> > +SUBDIRS-$(CONFIG_HVMLOADER) += firmware
> 
> But there are more subdirs under firmware/ than just hvmloader.
> In particular you'd now also skip building the shim if hvmloader
> was disabled.
> 
> > --- a/tools/configure.ac
> > +++ b/tools/configure.ac
> > @@ -307,6 +307,10 @@ AC_ARG_VAR([AWK], [Path to awk tool])
> >  
> >  # Checks for programs.
> >  AC_PROG_CC
> > +AC_LANG(C)
> > +AC_LANG_CONFTEST([AC_LANG_SOURCE([[int main() { return 0;}]])])
> > +AS_IF([gcc -m32 conftest.c -o - 2>/dev/null], [hvmloader=y], [AC_MSG_WARN(hvmloader build disabled as the compiler cannot build 32bit binaries)])
> > +AC_SUBST(hvmloader)
> 
> I'm puzzled: "gcc -m32" looked to work fine on its own. I suppose
> the above fails at the linking stage, but that's not what we care
> about (we don't link with any system libraries). Instead, as said,
> you want to check "gcc -m32 -c" produces correct code, in
> particular with sizeof(uint64_t) being 8. Of course all of this
> would be easier if their headers at least caused some form of
> error, instead of silently causing bad code to be generated.
> 
> The way you do it, someone simply not having 32-bit C libraries
> installed would then also have hvmloader build disabled, even if
> their compiler and headers are fine to use.

I realize that technically this test is probing for something different:
the ability to build and link a trivial 32-bit userspace program, rather
than specifically checking that sizeof(uint64_t) is 8. However, I thought
that if this test failed we didn't want to continue anyway.

If you say that hvmloader doesn't link against any system libraries,
then in theory the hvmloader build could succeed even if this test
failed. Hence, we need to change strategy.

What do you think of something like this?

AC_LANG_CONFTEST([AC_LANG_SOURCE([[#include <assert.h>
#include <stdint.h>
int main() { assert(sizeof(uint64_t) == 8); return 0;}]])])
AS_IF([gcc -m32 conftest.c -o conftest 2>/dev/null], [hvmloader=y], [AC_MSG_WARN(XXX)])
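(For illustration only, a compile-only variant of this probe, sketched as plain shell rather than the final configure.ac fragment: it skips linking with -c, so a missing 32-bit libc doesn't disable the hvmloader build, and it replaces the runtime assert() with a compile-time size check, since configure probes of this kind are compiled but never executed. The variable name and cleanup follow the conventions discussed below; the exact warning text is elided.)

```shell
# Hypothetical sketch of a compile-only -m32 probe.
cat > conftest.c <<'EOF'
#include <stdint.h>
/* Fails to compile unless sizeof(uint64_t) == 8 under -m32. */
typedef char uint64_is_8_bytes[sizeof(uint64_t) == 8 ? 1 : -1];
int main(void) { return 0; }
EOF
if gcc -m32 -c conftest.c -o conftest.o 2>/dev/null; then
    hvmloader=y
else
    hvmloader=n    # a real configure.ac would emit AC_MSG_WARN(...) here
fi
rm -f conftest*    # the customary cleanup for configure probes
echo "hvmloader=$hvmloader"
```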


Do you have any better ideas?


> Also I don't think "-o -" does what you want - it'll produce a
> binary named "-" (if compilation and linking succeed), while I
> suppose what you want is to discard the output (i.e. probably
> "-o /dev/null"). Albeit even that doesn't look to be the commonly
> used approach - a binary named "conftest" would normally be
> specified as the output, according to other configure.ac I've
> looked at. Such tests then have a final "rm -f conftest*".

OK


From xen-devel-bounces@lists.xenproject.org Mon Feb 22 23:27:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 Feb 2021 23:27:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88424.166256 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEKbs-0007f8-V0; Mon, 22 Feb 2021 23:27:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88424.166256; Mon, 22 Feb 2021 23:27:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEKbs-0007f1-Rt; Mon, 22 Feb 2021 23:27:24 +0000
Received: by outflank-mailman (input) for mailman id 88424;
 Mon, 22 Feb 2021 23:27:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEKbr-0007et-A3; Mon, 22 Feb 2021 23:27:23 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEKbr-0002on-2E; Mon, 22 Feb 2021 23:27:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEKbq-0004N3-Pr; Mon, 22 Feb 2021 23:27:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lEKbq-0005B7-PN; Mon, 22 Feb 2021 23:27:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=aSUsISdV+8Ykb0ik8TLK+d4snQaFoIbqoE3DIiofvNs=; b=34sckytV/rx1G64mKIOX56n+0h
	yTlEQzWLWBhI3lsoY065iZiKHs0vppVBdtIZ3uQRe1U2IaHLYa9jqE8QYnGLJObgSJsK1pXy0p2oD
	kWKkX0ChWsU0Wlmo5c0DVsoSluljSNiFYWOOD6DNFqvzoEHcaDFfkCZ6UoQsD1JClUhE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159545-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 159545: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=00d8ba9e0d62ea1c7459c25aeabf9c8bb7659462
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 22 Feb 2021 23:27:22 +0000

flight 159545 qemu-mainline real [real]
flight 159562 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/159545/
http://logs.test-lab.xenproject.org/osstest/logs/159562/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                00d8ba9e0d62ea1c7459c25aeabf9c8bb7659462
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  186 days
Failing since        152659  2020-08-21 14:07:39 Z  185 days  358 attempts
Testing same since   159530  2021-02-21 21:37:59 Z    1 days    2 attempts

------------------------------------------------------------
423 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 117284 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 00:23:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 00:23:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88434.166277 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lELUE-0005Qd-St; Tue, 23 Feb 2021 00:23:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88434.166277; Tue, 23 Feb 2021 00:23:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lELUE-0005QW-Pi; Tue, 23 Feb 2021 00:23:34 +0000
Received: by outflank-mailman (input) for mailman id 88434;
 Tue, 23 Feb 2021 00:23:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ey4c=HZ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lELUC-0005QR-D0
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 00:23:32 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 89b0eb42-aa7d-4e8e-a347-0f7ff6f64863;
 Tue, 23 Feb 2021 00:23:29 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 89b0eb42-aa7d-4e8e-a347-0f7ff6f64863
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614039809;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:mime-version;
  bh=FFMBN4EAxLyGJpP5wo+0pA5d3p/cwKO5mZty9s5ZNJ0=;
  b=WfQzpvl/sUV0i0HB1ta+lxd7rKN52sJJtuw3OxaN15XX1JxgsiCJFDMM
   yHVdhIYubZOSB3JKNufAvrGKB8boQL1fTQAoQgFQ2wr1Fa98MikNDVCus
   cy0zz8fb2hy2AteA7fQJoCHSBzVTWB3GdPDT/3Apcb0qEtMDFPm737vwe
   Y=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: LEeKwLO55xb4cYPsZ+2BwopSFMyFTTscJphwOKcH4MCRwQyKw6fpnNmbKpm0XFhytjWZgNEHHk
 rT9/zsG25si8BQ4nHgsfE8Vy/s/0WfETAFZOGn4eO1BYRRyKmXcYJQiexqQa1jj9Bzx923AN8G
 Am9UoaAt2IABPgKPr8Dw2ufyU5N2m7vXEf12DsGVFCpt5T2U9XS3fYnbusCaRLahUq3aUCLUXw
 94cmLWKhkNeXjW1gBVs37k8Fj6/72GyHfo42+ulALnIzSSg1McLUBmPP3xlhaEmUKKSuTB9Qq7
 3zo=
X-SBRS: 5.2
X-MesageID: 39167301
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,198,1610427600"; 
   d="scan'208,217";a="39167301"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=kR95zZ8reLlRly71wuhPwBBvkme3n90RWu8LNoX9ae/kcF1O+5i1uUk1Xd8C+nNCl85T8Gxx0fnQ4hcB8dQ7gxVQNZF/pRKqWVx5Y7GWxliCBzoC/xyNOicKhF3nhpHJ+rtn75Wa9KVsQweFrPLuX22+byh8rpZwnDA9zLOrj6dicDX+0Nzk59ay2iZKSJzFa7V7e0jHovWXhnYcbpU/XSYx0lgVcQ0J7tyjZ2O7ScheYgoIC9XHXAE1xUDhtv+BYK3lKdRA/IxOnx5N7bsPbT+TBWaEx0WK7L5pk/bSsbyAFL21Z8Y3F8qc2ruCpphQFXRiw7kzNrdMGCT1LeyE3g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=qVEvmFdZeXnGLsnBpItiVwn29ne//YKxeaNpD1P+CeE=;
 b=lREULVmX7Lgn0DXfXNiiHmFPwPBeELIAP9aa/yNtJq+dkq/+sl6HW+plEJOofePQm9Ci1gtm4r0zvgJUYKZ0QH18COufYlrTVZb9kiUdX2PDNEqlnE9lE96wSPh+1nddZZtd0I0efA8zrsWFLKLzxcHP4sZtYZPwztqwkDqdydeHJiMVLWo/1PAHN6da1A91BepOIt+EaTnSRkJptaQLdH1/owR34hTEj5M64JM6RqRwPaIBL3gVO/GljhsdTQ0+k5Jmdz0+V/6ER1W690DiC/fmHdPrlgIkwSLaOxkDvMHPe/l0tbLYKtqjD4pJmo/9AZ2yfH7NMiCzC3Z8/PTSSw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=qVEvmFdZeXnGLsnBpItiVwn29ne//YKxeaNpD1P+CeE=;
 b=VSbnqvu2lD2dxC2t5E05bgMJVAicy2xmtPDuvQhJ3JusH1wATv7XzS1uR8pvQsHX0ck9D9riUyotMi0WljL1k7IsXiFzhlaCApDVad1hPIABfKdHEylY3DJv9PBnLzlj4A0SynH+daxSb7JtNSw1+yxzwVz+aiqOgk0GncKlpXg=
Subject: Re: How does shadow page table work during migration?
To: Kevin Negy <kevinnegy@gmail.com>
CC: <xen-devel@lists.xenproject.org>
References: <CACZWC-qK_biKgyi+ZiXnsHRscAbK9pz=kncdBA25QYWY129HCQ@mail.gmail.com>
 <38cf0d39-da1d-5375-89dc-1668e26323a2@citrix.com>
 <CACZWC-r7fS2AztaAgGdVPv5NcJiAxZ5mvC4FQTkorPDGwOfn9g@mail.gmail.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <6a91e41c-b7f9-f856-bc55-fd92b8188adc@citrix.com>
Date: Tue, 23 Feb 2021 00:23:18 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
In-Reply-To: <CACZWC-r7fS2AztaAgGdVPv5NcJiAxZ5mvC4FQTkorPDGwOfn9g@mail.gmail.com>
Content-Type: multipart/alternative;
 boundary="------------0AC75CF46D3FFA1913A692BE"
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0489.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1ab::8) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: dc105c76-0278-43f1-5c5d-08d8d7913c25
X-MS-TrafficTypeDiagnostic: SJ0PR03MB5885:
X-Microsoft-Antispam-PRVS: <SJ0PR03MB58851AC314C150A41CA2E82DBA809@SJ0PR03MB5885.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: dc105c76-0278-43f1-5c5d-08d8d7913c25
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Feb 2021 00:23:25.8742
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: bf8X56otAnQ9oJrxO2eCnQNf+EqM/Ze7XEmbL3lNPwdnDZUezp6Hz4JXQvhVAaZXdy9nYNZdCDntomHPOpeZNIkAptC3K43IeWzx+zdId14=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5885
X-OriginatorOrg: citrix.com

--------------0AC75CF46D3FFA1913A692BE
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit

On 22/02/2021 18:52, Kevin Negy wrote:
> Hello again,
>
> Thank you for the helpful responses. I have several follow up questions.
>
> 1)
>
>     With Shadow, Xen has to do the combination of address spaces itself -
>     the shadow pagetables map guest virtual to host physical address.
>
>
>     The shadow_blow_tables() call is "please recycle everything" which
>     is used
>     to throw away all shadow pagetables, which in turn will cause the
>     shadows to be recreated from scratch as the guest continues to run.
>
>
> With shadowing enabled, given a guest virtual address, how does the
> hypervisor recreate the mapping to the host physical address (mfn)
> from the virtual address if the shadow page tables are empty (after a
> call to shadow_blow_tables, for instance)? I had been thinking of
> shadow page tables as the definitive mapping between guest pages and
> machine pages, but should I think of them as more of a TLB, which
> implies there's another way to get/recreate the mappings if there's no
> entry in the shadow table?

Your observation about "being like a TLB" is correct.

Let's take the simplest case, of 4-on-4 shadows, i.e. Xen and the
guest are both in 64bit mode, using 4-level paging.

Each domain also has a structure which Xen calls a P2M, for the guest
physical => host physical mappings.  (For PV guests, it's actually the
identity transform, and for HVM, it is a set of EPT or (N)PT pagetables,
but the exact structure isn't important here.)

The other primitive required is an emulated pagewalk, i.e. we start at
the guest's %cr3 value, and walk through the guest's pagetables as
hardware would.  Each step involves a lookup in the P2M, as the guest
PTEs are programmed with guest physical addresses, not host physical.
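As a toy model (Python, with every name and structure invented purely
for illustration; the real walker is far more involved), the emulated
walk looks something like:

```python
# Toy model of the emulated guest pagewalk described above.  Guest PTEs
# hold guest-physical frame numbers (gfns); each step of the walk goes
# through the P2M to find the host frame (mfn) backing that level.
PRESENT = 0x1

def emulated_pagewalk(guest_cr3_gfn, linear, p2m, host_frames):
    """Return the gfn that `linear` maps to, or None on a guest fault."""
    table_gfn = guest_cr3_gfn
    for level in (4, 3, 2, 1):
        mfn = p2m[table_gfn]                  # P2M lookup at every level
        index = (linear >> (12 + 9 * (level - 1))) & 0x1FF
        pte = host_frames[mfn].get(index, 0)
        if not (pte & PRESENT):
            return None                       # hand #PF back to the guest
        table_gfn = pte >> 12                 # next table, guest-physical
    return table_gfn                          # gfn of the data page

# A one-entry "guest": four present levels mapping linear address 0.
p2m = {10: 110, 11: 111, 12: 112, 13: 113}
host_frames = {
    110: {0: (11 << 12) | PRESENT},           # guest L4
    111: {0: (12 << 12) | PRESENT},           # guest L3
    112: {0: (13 << 12) | PRESENT},           # guest L2
    113: {0: (99 << 12) | PRESENT},           # guest L1 -> data gfn 99
}
```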


In reality, we always have a "top level shadow" per vcpu.  In this
example, it is a level-4 pagetable, which starts out clear (i.e. no
guest entries present).  We need *something* to point hardware at when
we start running the guest.

Once we run the guest, we immediately take a pagefault.  We look at %cr2
to find the linear address accessed, and perform a pagewalk.  In the
common case, we find that the linear address is valid in the guest, so
we allocate a level 3 pagetable, again clear, then point the appropriate
L4e at it, then re-enter the guest.

This takes an immediate pagefault again, and we allocate an L2
pagetable, re-enter, then allocate an L1 pagetable, and finally point an
L1e at the host physical page.  Now we can successfully fetch the
instruction (if it doesn't cross a page boundary), then repeat the
process for every subsequent memory access.

This example is simplified specifically to demonstrate the point.
Everything is driven from pagefaults.
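That one-level-per-fault loop can be sketched as follows (Python toy
model, all names invented; it deliberately mirrors the simplified
description, not what Xen actually does):

```python
# Toy model of fault-driven shadow filling: each "pagefault" allocates
# exactly one missing shadow level, then the guest re-runs and faults
# again, until the final L1e points at the host page.
def shadow_fault(shadow_top, linear, data_mfn):
    """Fill in one missing level; return True once the access can succeed."""
    table = shadow_top
    for level in (4, 3, 2):
        index = (linear >> (12 + 9 * (level - 1))) & 0x1FF
        if index not in table:
            table[index] = {}          # allocate a clear lower shadow
            return False               # re-enter the guest; it faults again
        table = table[index]
    l1_index = (linear >> 12) & 0x1FF
    if l1_index not in table:
        table[l1_index] = data_mfn     # point the L1e at the host page
        return False
    return True                        # mapping complete

shadow_top = {}                        # per-vcpu top level, starts clear
faults = 0
while not shadow_fault(shadow_top, 0x1000, 42):
    faults += 1                        # L3 alloc, L2 alloc, L1 alloc, L1e write
```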

There is of course far more complexity.  We typically populate all the
way down to an L1e in one go, because this is far more efficient than
taking 4 real pagefaults.  If we walk the guest pagetables and find a
violation, we have to hand #PF back to the guest kernel rather than
change the shadows.  To emulate dirty bits correctly, we need to leave
the shadow read-only even if the guest PTE was read/write, so we can
spot when hardware tries to set the D bit in the shadows, and copy it
back into the guest's view.  Superpages are complicated to deal with (we
have to splinter them to 4k pages), and 2-on-3 (legacy 32bit OS with
non-PAE paging) is a total nightmare because of the different format of
pagetable entries.

Notice that a guest TLB flush is also implemented as "drop all
shadows under this virtual cr3".

> 2) I'm trying to grasp the general steps of enabling shadowing and
> handling page faults. Is this correct?
>     a) Normal PV - default shadowing is disabled, guest has its page
> tables in r/w mode or whatever mode is considered normal for guest
> page tables

It would be a massive security vulnerability to let PV guests write to
their own pagetables.

PV guest pagetables are read-only, and all updates are made via
hypercall, so they can be audited for safety.  (We do actually have
pagetable emulation for PV guests which do write to their own
pagetables; it feeds into the same logic as the hypercall, but is less
efficient overall.)
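The auditing idea can be caricatured like this (Python, purely
illustrative; the real checks behind Xen's mmu_update hypercall are
far more extensive, covering frame types, reference counts, etc.):

```python
# Caricature of auditing a PV pagetable update: reject any new PTE that
# maps a frame the guest doesn't own, or that would give the guest a
# writable mapping of one of its own pagetables.
WRITABLE = 0x2

def audit_and_update(pagetable, index, new_pte, owned_mfns, pagetable_mfns):
    mfn = new_pte >> 12
    if mfn not in owned_mfns:
        return False               # frame belongs to someone else
    if (new_pte & WRITABLE) and mfn in pagetable_mfns:
        return False               # would allow direct pagetable writes
    pagetable[index] = new_pte     # safe: install the entry
    return True

owned = {5, 6}
pts = {6}                          # frame 6 is itself a pagetable
l1 = {}
```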

>     b) Shadowing is enabled - shadow memory pool allocated, all memory
> accesses must now go through shadow pages in CR3. Since no entries are
> in shadow tables, initial read and writes from the guest will result
> in page faults.

PV guests share an address space with Xen.  So actually the top level
shadow for a PV guest is pre-populated with Xen's mappings, but all
guest entries are faulted in on demand.

>     c) As soon as the first guest memory access occurs, a mandatory
> page fault occurs because there is no mapping in the shadows. Xen does
> a guest page table walk for the address that caused the fault (va) and
> then marks all the guest page table pages along the walk as read only.

The first guest memory access is actually the instruction fetch at
%cs:%rip.  Once that address is shadowed, you further have to shadow any
memory operands (which can be more than one, e.g. `PUSH ptr` has a
regular memory operand, and an implicit stack operand which needs
shadowing.  With the AVX scatter/gather instructions, you can have an
almost-arbitrary number of memory operands.)

Also, be very careful with terminology.  Linear and virtual addresses
are different (by the segment selector base, which is commonly but not
always 0).  Lots of Xen code uses va/vaddr when it means linear addresses.

>     d) Xen finds out the mfn of the guest va somehow (my first
> question) and adds the mapping of the va to the shadow page table.

Yes.  This is a combination of the pagewalk and P2M to identify the mfn
in question for the linear address, along with suitable
allocations/modifications to the shadow pagetables.

>     e) If the page fault was a write, the va is now marked as
> read/write but logged as dirty in the logdirty map.

Actually, what we do when the VM is in global logdirty mode is always
start by writing all shadow L1es as read-only, even if the guest has
them read-write.  This causes all writes to trap with #PF, which lets us
see which frame is being written to, and lets us set the appropriate bit
in the logdirty bitmap.
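A minimal sketch of that logdirty trap (Python toy model, names
invented):

```python
# Toy model of logdirty: shadow L1es start read-only regardless of the
# guest PTE, so the first write to each frame traps; the trap records
# the frame in the dirty bitmap and only then makes the shadow writable.
WRITABLE = 0x2

def write_fault(shadow_l1, index, gfn, dirty_frames, guest_pte_writable):
    pte = shadow_l1[index]
    if pte & WRITABLE:
        return True                    # already writable: no trap needed
    if not guest_pte_writable:
        return False                   # genuine guest fault: hand back #PF
    dirty_frames.add(gfn)              # log the frame as dirty
    shadow_l1[index] = pte | WRITABLE  # subsequent writes won't trap
    return True

shadow_l1 = {0: (42 << 12)}            # installed read-only
dirty = set()
```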

>     e) Now the next page fault to any of the page tables marked
> read-only in c) must have been caused by the guest writing to its
> tables, which can be reflected in the shadow page tables.

Writeability of the guest's actual pagetables is complicated and
guest-dependent.  Under a strict TLB-like model, it's not actually
required to restrict writeability.

In real hardware, the TLB is an explicitly non-coherent cache, and
software is required to issue a TLB flush to ensure that changes to the
PTEs in memory get propagated subsequently into the TLB.

> 3) How do Xen/shadow page tables distinguish between two equivalent
> guest virtual addresses from different guest processes? I suppose when
> a guest OS tries to change page tables from one process to another,
> this will cause a page fault that Xen will trap and be able to infer
> that the current shadow page table should be swapped to a different
> one corresponding to the new guest process?

Changing processes involves writing to %cr3, which is a TLB flush, so in
a strict TLB-like model, all shadows must be dropped.

In reality, this is where we start using restricted writeability to our
advantage.  If we know that no writes to pagetables happened, we know
"the TLB" (== the currently established shadows) aren't actually stale,
so may be retained and reused.

We do maintain hash lists of types of pagetable, so we can locate
preexisting shadows of a specific type.  This is how we can switch
between already-established shadows when the guest changes %cr3.
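That lookup can be modelled as a simple map keyed by guest frame and
shadow type (Python sketch; the real hash in Xen is keyed and
maintained rather differently):

```python
# Toy model of the shadow hash: shadows are found by (guest frame,
# shadow type), so a write to %cr3 can reuse an already-established
# top-level shadow instead of rebuilding it from scratch.
shadow_hash = {}

def get_shadow(gmfn, shadow_type):
    key = (gmfn, shadow_type)
    if key not in shadow_hash:
        shadow_hash[key] = {}      # no existing shadow: allocate a clear one
    return shadow_hash[key]

first = get_shadow(0x100, "l4")    # process A's cr3: new shadow
other = get_shadow(0x200, "l4")    # process B's cr3: another shadow
again = get_shadow(0x100, "l4")    # switch back to A: reuse, not rebuild
```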

In reality, the kernel half of the virtual address space doesn't change
much after boot, so there is a substantial performance win from not
dropping and reshadowing these entries.  There are loads and loads of L4
pagetables (one per process), all pointing to common L3s which form the
kernel half of the address space.

If I'm being honest, this is where my knowledge of exactly what Xen
does breaks down.  I'm not the author of the shadow code; I've merely
debugged it a few times.

I hope this is still informative.

~Andrew

--------------0AC75CF46D3FFA1913A692BE--


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 00:53:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 00:53:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88439.166292 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lELwk-0008HS-Cw; Tue, 23 Feb 2021 00:53:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88439.166292; Tue, 23 Feb 2021 00:53:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lELwk-0008HL-8J; Tue, 23 Feb 2021 00:53:02 +0000
Received: by outflank-mailman (input) for mailman id 88439;
 Tue, 23 Feb 2021 00:53:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lELwj-0008HD-E1; Tue, 23 Feb 2021 00:53:01 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lELwj-0004ly-5W; Tue, 23 Feb 2021 00:53:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lELwi-0007Kd-M2; Tue, 23 Feb 2021 00:53:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lELwi-0001MD-LX; Tue, 23 Feb 2021 00:53:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=SSSzt8pE7/rSY+PbaWxuDoN8e9kuC6ZGFljMrebRzfs=; b=M9+Ls51SfkGr8bAa/jGumYQwd/
	HlFiEwgpND3a3tUESE588nFKolAjBZ4iz1BEbvF/4+DElWN7AVd7F9EdFSOVAVRhUlln0Opntmmt4
	imQz+iyB+YyEHT/HiCWzx9Ck/D+0VXHnmIOMUvFvqTHnvbzThQkfuChvTkWOE0F438eU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159550-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 159550: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-xl-seattle:<job status>:broken:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=31caf8b2a847214be856f843e251fc2ed2cd1075
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 23 Feb 2021 00:53:00 +0000

flight 159550 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159550/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-seattle     <job status>                 broken
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-seattle   5 host-install(5)       broken blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                31caf8b2a847214be856f843e251fc2ed2cd1075
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  206 days
Failing since        152366  2020-08-01 20:49:34 Z  205 days  354 attempts
Testing same since   159550  2021-02-22 12:19:17 Z    0 days    1 attempts

------------------------------------------------------------
4923 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  broken  
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-arm64-arm64-xl-seattle broken
broken-step test-arm64-arm64-xl-seattle host-install(5)

Not pushing.

(No revision log; it would be 1182833 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 01:22:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 01:22:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88449.166309 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEMPH-0004iu-VC; Tue, 23 Feb 2021 01:22:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88449.166309; Tue, 23 Feb 2021 01:22:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEMPH-0004in-SC; Tue, 23 Feb 2021 01:22:31 +0000
Received: by outflank-mailman (input) for mailman id 88449;
 Tue, 23 Feb 2021 01:22:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fjLX=HZ=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lEMPG-0004ii-8z
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 01:22:30 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e96a0301-58f8-41bb-8b37-1325b4637893;
 Tue, 23 Feb 2021 01:22:29 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id A8BC86023B;
 Tue, 23 Feb 2021 01:22:25 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e96a0301-58f8-41bb-8b37-1325b4637893
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1614043347;
	bh=QyNsljerVpDiDO+K9/ltiFTkhVqcxc/0Ay2CXZ1z8vI=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=qT9uEWGJgqYzpURFhCR6u+Wbw6bbsJzfvEpXt+qOvRpZBeHHsRUnipN3HQHDDB3E3
	 gCjXrJz0oOmy8eLfkAtbzisTgVdrdP8aPT65rPgULTDiPO+qGgU3kXayVSCi7szBX/
	 MfxWKKRGZmJ1Y0ya3KFks5fDwYLPA+LP36j6egOWcx+zZZlUCQVI7We8WcL7pgLfTc
	 aFyNR7YKXv/dZO9DLIrqsrZIWF99Xag6MDO0N1HuYflfFdk0rqJHeOrj3VhyML8MHz
	 tVChYYBBryvSqBDuB2Mo5J/NvyYzT12d/wTuwUUZX0GTHVvNU/omP5eYzJ827lYJXw
	 ABjqJ4JGM8RZA==
Date: Mon, 22 Feb 2021 17:22:24 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
cc: Christoph Hellwig <hch@lst.de>, 
    Boris Ostrovsky <boris.ostrovsky@oracle.com>, jgross@suse.com, 
    Dongli Zhang <dongli.zhang@oracle.com>, dri-devel@lists.freedesktop.org, 
    intel-gfx@lists.freedesktop.org, iommu@lists.linux-foundation.org, 
    linux-mips@vger.kernel.org, linux-mmc@vger.kernel.org, 
    linux-pci@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, 
    nouveau@lists.freedesktop.org, x86@kernel.org, 
    xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org, 
    adrian.hunter@intel.com, akpm@linux-foundation.org, 
    benh@kernel.crashing.org, bskeggs@redhat.com, bhelgaas@google.com, 
    bp@alien8.de, chris@chris-wilson.co.uk, daniel@ffwll.ch, airlied@linux.ie, 
    hpa@zytor.com, mingo@kernel.org, mingo@redhat.com, 
    jani.nikula@linux.intel.com, joonas.lahtinen@linux.intel.com, 
    m.szyprowski@samsung.com, matthew.auld@intel.com, mpe@ellerman.id.au, 
    rppt@kernel.org, paulus@samba.org, peterz@infradead.org, 
    robin.murphy@arm.com, rodrigo.vivi@intel.com, sstabellini@kernel.org, 
    bauerman@linux.ibm.com, tsbogend@alpha.franken.de, tglx@linutronix.de, 
    ulf.hansson@linaro.org, joe.jin@oracle.com, thomas.lendacky@amd.com
Subject: Re: [PATCH RFC v1 5/6] xen-swiotlb: convert variables to arrays
In-Reply-To: <YDAgT2ZIdncNwNlf@Konrads-MacBook-Pro.local>
Message-ID: <alpine.DEB.2.21.2102221511360.3234@sstabellini-ThinkPad-T480s>
References: <20210203233709.19819-1-dongli.zhang@oracle.com> <20210203233709.19819-6-dongli.zhang@oracle.com> <20210204084023.GA32328@lst.de> <20210207155601.GA25111@lst.de> <YDAgT2ZIdncNwNlf@Konrads-MacBook-Pro.local>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 19 Feb 2021, Konrad Rzeszutek Wilk wrote:
> On Sun, Feb 07, 2021 at 04:56:01PM +0100, Christoph Hellwig wrote:
> > On Thu, Feb 04, 2021 at 09:40:23AM +0100, Christoph Hellwig wrote:
> > > So one thing that has been on my mind for a while:  I'd really like
> > > to kill the separate dma ops in Xen swiotlb.  If we compare xen-swiotlb
> > > to swiotlb the main difference seems to be:
> > > 
> > >  - additional reasons to bounce I/O vs the plain DMA capable
> > >  - the possibility to do a hypercall on arm/arm64
> > >  - an extra translation layer before doing the phys_to_dma and vice
> > >    versa
> > > >  - a special memory allocator
> > > 
> > > I wonder if inbetween a few jump labels or other no overhead enablement
> > > options and possibly better use of the dma_range_map we could kill
> > > off most of swiotlb-xen instead of maintaining all this code duplication?
> > 
> > So I looked at this a bit more.
> > 
> > For x86 with XENFEAT_auto_translated_physmap (how common is that?)
> 
> Juergen, Boris please correct me if I am wrong, but that XENFEAT_auto_translated_physmap
> only works for PVH guests?

ARM is always XENFEAT_auto_translated_physmap


> > pfn_to_gfn is a nop, so plain phys_to_dma/dma_to_phys do work as-is.
> > 
> > xen_arch_need_swiotlb always returns true for x86, and
> > range_straddles_page_boundary should never be true for the
> > XENFEAT_auto_translated_physmap case.
> 
> Correct. The kernel should have no clue of what the real MFNs are
> for PFNs.

On ARM, Linux knows the MFNs because for local pages MFN == PFN, and for
foreign pages it keeps track of them in arch/arm/xen/p2m.c. More on this below.

xen_arch_need_swiotlb only returns true on ARM in rare situations where
bouncing on swiotlb buffers is required. Today it only happens on old
versions of Xen that don't support the cache flushing hypercall but
there could be more cases in the future.


> > 
> > So as far as I can tell the mapping fast path for the
> > XENFEAT_auto_translated_physmap can be trivially reused from swiotlb.
> > 
> > That leaves us with the next more complicated case, x86 or fully cache
> > coherent arm{,64} without XENFEAT_auto_translated_physmap.  In that case
> > we need to patch in a phys_to_dma/dma_to_phys that performs the MFN
> > lookup, which could be done using alternatives or jump labels.
> > I think if that is done right we should also be able to let that cover
> > the foreign pages in is_xen_swiotlb_buffer/is_swiotlb_buffer, but
> > > > in the worst case that would need another alternative / jump label.
> > 
> > For non-coherent arm{,64} we'd also need to use alternatives or jump
> > > > labels for the cache maintenance ops, but that isn't a hard problem
> > either.

With the caveat that ARM is always XENFEAT_auto_translated_physmap, what
you wrote looks correct. I am writing down a brief explanation of how
swiotlb-xen is used on ARM.


pfn: address as seen by the guest, pseudo-physical address in ARM terminology
mfn (or bfn): real address, physical address in ARM terminology


On ARM dom0 is auto_translated (so Xen sets up the stage2 translation
in the MMU) and the translation is 1:1. So pfn == mfn for Dom0.

However, when another domain shares a page with Dom0, that page is not
1:1. Swiotlb-xen is used to retrieve the mfn for the foreign page at
xen_swiotlb_map_page. It does that with xen_phys_to_bus -> pfn_to_bfn.
It is implemented with an rbtree in arch/arm/xen/p2m.c.

In addition, swiotlb-xen is also used to cache-flush the page via
hypercall at xen_swiotlb_unmap_page. That is done because dev_addr is
really the mfn at unmap_page and we don't know the pfn for it. We can do
pfn-to-mfn but we cannot do mfn-to-pfn (there are good reasons for it
unfortunately). The only way to cache-flush by mfn is by issuing a
hypercall. The hypercall is implemented in arch/arm/xen/mm.c.

The pfn != bfn and pfn_valid() checks are used to detect whether the page is
local (belonging to dom0) or foreign; they work thanks to the fact that
Dom0 is 1:1 mapped.


Getting back to what you wrote, yes if we had a way to do MFN lookups in
phys_to_dma, and a way to call the hypercall at unmap_page if the page
is foreign (e.g. if it fails a pfn_valid check) then I think we would be
good from an ARM perspective. The only exception is when
xen_arch_need_swiotlb returns true, in which case we need to actually
bounce on swiotlb buffers.


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 01:22:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 01:22:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88450.166321 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEMPU-0004mG-7N; Tue, 23 Feb 2021 01:22:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88450.166321; Tue, 23 Feb 2021 01:22:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEMPU-0004m8-4J; Tue, 23 Feb 2021 01:22:44 +0000
Received: by outflank-mailman (input) for mailman id 88450;
 Tue, 23 Feb 2021 01:22:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fjLX=HZ=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lEMPT-0004lo-0m
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 01:22:43 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a6415042-eeff-434c-9f04-58e5e503bc6e;
 Tue, 23 Feb 2021 01:22:42 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 37FE860232;
 Tue, 23 Feb 2021 01:22:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a6415042-eeff-434c-9f04-58e5e503bc6e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1614043360;
	bh=oWm6YpCZFokyXFPg4w/nT1ETGWtxM4rWjT0X04AB81s=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=LE0/8+/pevloD1jA0tvpooneXesJD7l65fVSZIgUi+AkkmrOjn+CFX2eNzibVDw4U
	 GMxeHFFSHkEm/WZId2Z7V8in5o2GqVfmG0iGhaHbwmYcQriT7lHptDA9eQ8s2uJJn4
	 Isl03t2dgqS7P/R6Yjsz6t2MeLBVfoZk9oKsCq15ivW+dgOygur/PW86yq4BpHlG4a
	 QQLylMZijyKDNkI+JVyY8OVlDEwZgb1cFXuCMpqR+CO7kg7wxiuLH1+93b184T7mkd
	 ZMwbprqiYf3BesEasDp8kbH4NpZcFY/r/4Ef+fSLNKadWlLgV+oGPuzSuK8U1lzFNX
	 dNQhvI/kEOh0A==
Date: Mon, 22 Feb 2021 17:22:39 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Jan Beulich <jbeulich@suse.com>, iwj@xenproject.org, 
    ash.j.wilding@gmail.com, Julien Grall <jgrall@amazon.com>, 
    George Dunlap <george.dunlap@citrix.com>, 
    Dario Faggioli <dfaggioli@suse.com>, xen-devel@lists.xenproject.org
Subject: Re: [PATCH for-4.15] xen/sched: Add missing memory barrier in
 vcpu_block()
In-Reply-To: <b68a644f-8b9c-3e1d-49c6-4058d276228b@xen.org>
Message-ID: <alpine.DEB.2.21.2102221217480.3234@sstabellini-ThinkPad-T480s>
References: <20210220194701.24202-1-julien@xen.org> <744ca7e5-328d-0c5f-bc52-e4c0e78dad97@suse.com> <alpine.DEB.2.21.2102221208050.3234@sstabellini-ThinkPad-T480s> <b68a644f-8b9c-3e1d-49c6-4058d276228b@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-1073832744-1614025194=:3234"
Content-ID: <alpine.DEB.2.21.2102221722300.3234@sstabellini-ThinkPad-T480s>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1073832744-1614025194=:3234
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.21.2102221722301.3234@sstabellini-ThinkPad-T480s>

On Mon, 22 Feb 2021, Julien Grall wrote:
> On 22/02/2021 20:09, Stefano Stabellini wrote:
> > On Mon, 22 Feb 2021, Jan Beulich wrote:
> > > On 20.02.2021 20:47, Julien Grall wrote:
> > > > From: Julien Grall <jgrall@amazon.com>
> > > > 
> > > > The comment in vcpu_block() states that the events should be checked
> > > > /after/ blocking to avoid a wakeup-waiting race. However, from a generic
> > > > perspective, set_bit() doesn't prevent re-ordering. So the following
> > > > could happen:
> > > > 
> > > > CPU0 (blocking vCPU A)          |   CPU1 (unblocking vCPU A)
> > > >                                 |
> > > > A <- read local events          |
> > > >                                 |   set local events
> > > >                                 |   test_and_clear_bit(_VPF_blocked)
> > > >                                 |     -> bail out as the bit is not set
> > > >                                 |
> > > > set_bit(_VPF_blocked)           |
> > > >                                 |
> > > > check A                         |
> > > > 
> > > > The variable A will be 0 and therefore the vCPU will be blocked when it
> > > > should continue running.
> > > > 
> > > > vcpu_block() now gains an smp_mb__after_atomic() to prevent the CPU
> > > > from reading any information about local events before the flag
> > > > _VPF_blocked is set.
> > > > 
> > > > Signed-off-by: Julien Grall <jgrall@amazon.com>
> > > 
> > > Reviewed-by: Jan Beulich <jbeulich@suse.com>
> > 
> > Acked-by: Stefano Stabellini <sstabellini@kernel.org>
> > 
> > 
> > 
> > > > This is a follow-up of the discussion that started in 2019 (see [1])
> > > > regarding a possible race between do_poll()/vcpu_unblock() and the wake
> > > > up path.
> > > > 
> > > > I haven't yet fully thought about the potential race in do_poll(). If
> > > > there is, then this would likely want to be fixed in a separate patch.
> > > > 
> > > > On x86, the current code is safe because set_bit() is fully ordered. So
> > > > the problem is Arm (and potentially any new architectures).
> > > > 
> > > > I couldn't convince myself whether the Arm implementation of
> > > > local_events_need_delivery() contains enough barriers to prevent the
> > > > re-ordering. However, I don't think we want to play with the devil
> > > > here, as the function may be optimized in the future.
> > > 
> > > In fact I think this ...
> > > 
> > > > --- a/xen/common/sched/core.c
> > > > +++ b/xen/common/sched/core.c
> > > > @@ -1418,6 +1418,8 @@ void vcpu_block(void)
> > > >         set_bit(_VPF_blocked, &v->pause_flags);
> > > >   +    smp_mb__after_atomic();
> > > > +
> > > 
> > > ... pattern should be looked for throughout the codebase, and barriers
> > > be added unless it can be proven none is needed. >
> > And in that case it would be best to add an in-code comment to explain
> > why the barrier is not needed.
> 
> I would rather not add a comment for every *_bit() call. It should be pretty
> obvious for most of them that the barrier is not necessary.
> 
> We should only add comments where the barrier is necessary or it is not clear
> why it is not necessary.

Either way is fine, as long as it is consistent. Yeah, we don't want to
add too many comments everywhere, so maybe adding them only when the
barrier is required would be better.
--8323329-1073832744-1614025194=:3234--


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 01:22:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 01:22:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88452.166334 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEMPg-0004ri-HU; Tue, 23 Feb 2021 01:22:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88452.166334; Tue, 23 Feb 2021 01:22:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEMPg-0004rb-E2; Tue, 23 Feb 2021 01:22:56 +0000
Received: by outflank-mailman (input) for mailman id 88452;
 Tue, 23 Feb 2021 01:22:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fjLX=HZ=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lEMPf-0004pt-5Q
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 01:22:55 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ee9dcbaf-0da7-473b-9b72-ecbc66cb8e8e;
 Tue, 23 Feb 2021 01:22:50 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 0C01764E57;
 Tue, 23 Feb 2021 01:22:49 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ee9dcbaf-0da7-473b-9b72-ecbc66cb8e8e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1614043369;
	bh=nepRcKS9LxIOYl82o+Qvw+H790U1zk0VxTswYbcPC/s=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=ja3ltW8qHGcHkizFaHa85IPNwJ2/s1lfiDcHKpeluidLkjqCGpgG2AxMW3j7tQKPK
	 9URC5TR3NlLsgrUnUor72ogtq+SbTLedBSS5lgqI402w6sKl6D+jahKIrP/ak8J9/k
	 VELr1XPTObOc1NS1btmle2quIGqkIDlQsTCirhAGqqLX6jMWsMySTs2Ss9PgPGxP8/
	 4V7nIgcRb/EGbl32tuRnxs4rLfOLjmaxfHskllLL/Qhta3c1jnKrzc3mYUcXx5ilwp
	 3lIytZ1+2p3B6JJ8G89nl9wnSFz+NqFHLZgedBiNnBZCFx+y45eo1DOJnl40KK0XS3
	 GlTTQeMy7eTSQ==
Date: Mon, 22 Feb 2021 17:22:48 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Bertrand Marquis <Bertrand.Marquis@arm.com>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Julien Grall <jgrall@amazon.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH for-next] xen/arm: mm: flush_page_to_ram() only need to
 clean to PoC
In-Reply-To: <ec608001-7663-961b-667c-bcf6397f1864@xen.org>
Message-ID: <alpine.DEB.2.21.2102221344200.3234@sstabellini-ThinkPad-T480s>
References: <20210220175413.14640-1-julien@xen.org> <FC521246-BD88-4D8C-82B7-6C3EFC8B00D0@arm.com> <45cd6455-3ad0-f052-65d8-37adb658f003@xen.org> <alpine.DEB.2.21.2102221220000.3234@sstabellini-ThinkPad-T480s> <ec608001-7663-961b-667c-bcf6397f1864@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 22 Feb 2021, Julien Grall wrote:
> On 22/02/2021 20:35, Stefano Stabellini wrote:
> > On Mon, 22 Feb 2021, Julien Grall wrote:
> > > On 22/02/2021 11:58, Bertrand Marquis wrote:
> > > > Hi Julien,
> > > > 
> > > > > On 20 Feb 2021, at 17:54, Julien Grall <julien@xen.org> wrote:
> > > > > 
> > > > > From: Julien Grall <jgrall@amazon.com>
> > > > > 
> > > > > At the moment, flush_page_to_ram() both cleans and invalidates the
> > > > > page to PoC. However, the cache line can be speculatively pulled back
> > > > > into the cache right after, as the page is part of the direct map.
> > > > 
> > > > If we go further through this logic, maybe all calls to
> > > > clean_and_invalidate_dcache_va_range could be transformed into a
> > > > clean_dcache_va_range.
> > > 
> > > Likely yes. But I need to go through them one by one to confirm it is
> > > fine to do (it also depends on the caching attributes used). I have sent
> > > this one in advance because it was discussed as part of XSA-364.
> > > 
> > > > 
> > > > > 
> > > > > So it is pointless to try to invalidate the line in the data cache.
> > > > > 
> > > > 
> > > > But what about processors which would not speculate ?
> > > > 
> > > > Do you expect any performance optimization here ?
> > > 
> > > When invalidating a line, you effectively remove it from the cache. If
> > > the page is going to be accessed a bit after, then you will have to load
> > > it from memory (or another cache).
> > > 
> > > With this change, you would only need to re-fetch the line if it wasn't
> > > evicted by the time it is accessed.
> > > 
> > > The line would be clean, so I would expect the eviction to have less of
> > > an impact than re-fetching from memory.
> > > 
> > > > 
> > > > If so it might be good to explain it as I am not quite sure I get it.
> > > 
> > > This change is less about performance and more about unnecessary work.
> > > 
> > > The processor is likely going to be more clever than the developer, and
> > > the exact numbers will vary depending on how the processor decides to
> > > manage the cache.
> > > 
> > > In general, we should avoid interfering too much with the cache without
> > > a good reason to do it.
> > > 
> > > How about the following commit message:
> > > 
> > > "
> > > At the moment, flush_page_to_ram() both cleans and invalidates the page
> > > to PoC.
> > > 
> > > The goal of flush_page_to_ram() is to prevent corruption when the guest
> > > has disabled the cache (the cache line may be dirty) and would otherwise
> > > read the previous content.
> > > 
> > > Per this definition, invalidating the line is not necessary. In fact, it
> > > may be counter-productive, as the line may be (speculatively) accessed a
> > > bit after, which would incur an expensive access to memory.
> > > 
> > > More generally, we should avoid interfering too much with the cache.
> > > Therefore, flush_page_to_ram() is updated to only clean the page to PoC.
> > > 
> > > The performance impact of this change will depend on your
> > > workload/processor.
> > > "
> >    From a correctness and functionality perspective, we don't need the
> > invalidate. If the line is dirty, we are writing it back to memory (point
> > of coherence) thanks to the clean operation anyway. If somebody writes
> > to that location, the processor should evict the old line anyway.
> 
> Location as in same physical address or the same set?
> 
> For the former, the line is usually bigger than any write. So it is unlikely
> to get evicted.
> 
> For the latter, it will depend on the content of the other ways in the set.
> 
> > The only reason I can think of for doing a "clean and invalidate" rather
> > than just a "clean" would be that we are trying to give a hint to the
> > processor that the cacheline is soon to be evicted. Assuming that the
> > hint even leads to some sort of performance optimization.
> 
> This may change which lines get evicted, as there will be an unused way. But
> we are now down in the territory of micro-optimization.
> 
> If that's a problem for someone, then that user would be better off switching
> to cache coloring, because the impact of flush_page_to_ram() will be pretty
> small compared to the damage that another domain can do if it shares the
> same set.
> 
> > In any case, on the grounds that it is unnecessary, I am OK with this.
> > I agree with Julien's proposal of applying this patch "for-next".
> > 
> > Acked-by: Stefano Stabellini <sstabellini@kernel.org>
> 
> Thanks! I am thinking of creating a "next" branch again for queuing 4.15+
> patches. Would that be fine with you?

Yes, good idea.


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 01:24:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 01:24:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88462.166350 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEMQs-00053v-Ts; Tue, 23 Feb 2021 01:24:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88462.166350; Tue, 23 Feb 2021 01:24:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEMQs-00053o-Qj; Tue, 23 Feb 2021 01:24:10 +0000
Received: by outflank-mailman (input) for mailman id 88462;
 Tue, 23 Feb 2021 01:24:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fjLX=HZ=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lEMQr-00053i-3a
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 01:24:09 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ed33250b-c8fa-4049-a486-2df52977b981;
 Tue, 23 Feb 2021 01:24:08 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 78B0D64E5C;
 Tue, 23 Feb 2021 01:24:07 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ed33250b-c8fa-4049-a486-2df52977b981
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1614043447;
	bh=6PNWPCOEa99vUhFegDYqJfYobxHhLaUuhFb7duvrvRY=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=SG2deOa1OGRPUnkYc6zlfuUCfn0cL/QDppHjz3no6EbAjL/30okf7k0sCs0UQYhQE
	 twdpDXPF0/H/90ppQyPp2I5FBs79TTvzlvNlATTOBsNQ5fJoJSX+KKsfYnz5O7a2Iq
	 OilkRUaWENNOlVwzQYsW1zla/9S/xNoNz34iKxWgcye1NbblK+Lasat+pirpYUlsA0
	 I3nHCCz+fZeL/mMzA9vv7CUgF16pqXV8+KytNMfAi/Gi7oUD/R2DJKJcHiAESO4pxu
	 DVBFVVUU+KbkBxxN2qfyQp/jlcDUhu31sIVImnrohx7en3qn9xkrBLo0+WoUw94/PE
	 zUh5Ye1y1i/pQ==
Date: Mon, 22 Feb 2021 17:24:06 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
cc: Julien Grall <julien@xen.org>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    "iwj@xenproject.org" <iwj@xenproject.org>, 
    Julien Grall <jgrall@amazon.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, dfaggioli@suse.com, 
    George.Dunlap@citrix.com
Subject: Re: [PATCH for-4.15] xen/vgic: Implement write to ISPENDR in vGICv{2,
 3}
In-Reply-To: <F86904EB-91E9-475C-B60B-E08C5C9E76C3@arm.com>
Message-ID: <alpine.DEB.2.21.2102221253390.3234@sstabellini-ThinkPad-T480s>
References: <20210220140412.31610-1-julien@xen.org> <F86904EB-91E9-475C-B60B-E08C5C9E76C3@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 22 Feb 2021, Bertrand Marquis wrote:
> Hi Julien,
> 
> > On 20 Feb 2021, at 14:04, Julien Grall <julien@xen.org> wrote:
> > 
> > From: Julien Grall <jgrall@amazon.com>
> > 
> > Currently, Xen will send a data abort to a guest trying to write to the
> > ISPENDR.
> > 
> > Unfortunately, recent versions of Linux (at least 5.9+) will start
> > writing to the register if the interrupt needs to be re-triggered
> > (see the callback irq_retrigger). This can happen when a driver (such as
> > the xgbe network driver on AMD Seattle) re-enables an interrupt:
> > 
> > (XEN) d0v0: vGICD: unhandled word write 0x00000004000000 to ISPENDR44
> > [...]
> > [   25.635837] Unhandled fault at 0xffff80001000522c
> > [...]
> > [   25.818716]  gic_retrigger+0x2c/0x38
> > [   25.822361]  irq_startup+0x78/0x138
> > [   25.825920]  __enable_irq+0x70/0x80
> > [   25.829478]  enable_irq+0x50/0xa0
> > [   25.832864]  xgbe_one_poll+0xc8/0xd8
> > [   25.836509]  net_rx_action+0x110/0x3a8
> > [   25.840328]  __do_softirq+0x124/0x288
> > [   25.844061]  irq_exit+0xe0/0xf0
> > [   25.847272]  __handle_domain_irq+0x68/0xc0
> > [   25.851442]  gic_handle_irq+0xa8/0xe0
> > [   25.855171]  el1_irq+0xb0/0x180
> > [   25.858383]  arch_cpu_idle+0x18/0x28
> > [   25.862028]  default_idle_call+0x24/0x5c
> > [   25.866021]  do_idle+0x204/0x278
> > [   25.869319]  cpu_startup_entry+0x24/0x68
> > [   25.873313]  rest_init+0xd4/0xe4
> > [   25.876611]  arch_call_rest_init+0x10/0x1c
> > [   25.880777]  start_kernel+0x5b8/0x5ec
> > 
> > As a consequence, the OS may become unusable.
> > 
> > Implementing the write part of ISPENDR is somewhat easy. For a
> > virtual interrupt, we only need to inject the interrupt again.
> > 
> > For a physical interrupt, we need to be more careful, as the de-activation
> > of the virtual interrupt will be propagated to the physical distributor.
> > For simplicity, the physical interrupt will be set pending, so the
> > workflow will not differ from a "real" interrupt.
> > 
> > Longer term, we could possibly directly activate the physical interrupt
> > and avoid taking an exception to inject the interrupt into the domain.
> > (This is the approach taken by the new vGIC based on KVM.)
> > 
> > Signed-off-by: Julien Grall <jgrall@amazon.com>

Thanks for the patch!


> This is something which will not be done by a guest very often, so I think
> your implementation actually keeps things simpler and reduces the possibility
> of race conditions; I am not even sure that the XXX comment is needed.
> But I am OK with it being in or not, so:
> 
> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
> 
> I did some tests by manually generating interrupts and I can confirm that this
> works as expected.

And thank you for testing.

Yes, it is definitely true that it shouldn't be done by the guest often,
so it is not worth optimizing. For example, if somebody is trying to
measure latency using this, they are really barking up the wrong tree.
And I like the simplicity of this implementation; the patch looks good.


So I am not too worried about the cost of the roundtrip. But should we
be worried about the guest continuously creating events for itself in a
loop and potentially "kidnapping" a pcpu by looping around ISPENDR? I am
talking about cases where 1 pcpu is shared between 2 vcpus of different
guests. I thought we should have a brief conversation about it before
allowing it.

The consequence of this patch is that a guest can cause vcpu_unblock(v),
hence vcpu_wake(v), to be called for its own vcpu, which basically
always changes v to RUNSTATE_runnable. However, that alone shouldn't
allow v to always come up ahead of any other vcpus in the queue, right?
It should be safe. I just wanted a second opinion on this :-)

It was possible to trigger interrupts for your own vcpus even before, but
now the code path is going to be direct for virtual interrupts.



> > ---
> > 
> > Note that this doesn't touch the read part for I{S,C}PENDR nor the write
> > part of ICPENDR because they are more complex to implement.
> > 
> > For physical interrupts, I didn't implement the same solution as KVM
> > because I couldn't convince myself it could be done race-free for
> > physical interrupts.
> > 
> > This was tested using the IRQ debugfs interface
> > (CONFIG_GENERIC_IRQ_DEBUGFS=y), which allows retriggering an interrupt:
> > 
> > 42sh> echo trigger > /sys/kernel/debug/irq/irqs/<irq>
> > 
> > This patch is a candidate for 4.15 and also for backporting to older
> > trees. Without this patch, recent Linux versions may not boot on Xen on
> > some platforms (such as AMD Seattle, used in OssTest).
> > 
> > The patch is self-contained to implementing a single set of registers.
> > So this would not introduce any risk on platforms where OSes don't use
> > those registers.
> > 
> > For the other setup (e.g. AMD Seattle + Linux 5.9+), it cannot be worse
> > than today.
> > 
> > Therefore, I would consider the risk limited.
> > ---
> > xen/arch/arm/vgic-v2.c     | 10 ++++----
> > xen/arch/arm/vgic-v3.c     | 18 ++++++---------
> > xen/arch/arm/vgic.c        | 47 ++++++++++++++++++++++++++++++++++++++
> > xen/include/asm-arm/vgic.h |  2 ++
> > 4 files changed, 62 insertions(+), 15 deletions(-)
> > 
> > diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
> > index 64b141fea586..b2da886adc18 100644
> > --- a/xen/arch/arm/vgic-v2.c
> > +++ b/xen/arch/arm/vgic-v2.c
> > @@ -472,10 +472,12 @@ static int vgic_v2_distr_mmio_write(struct vcpu *v, mmio_info_t *info,
> > 
> >     case VRANGE32(GICD_ISPENDR, GICD_ISPENDRN):
> >         if ( dabt.size != DABT_WORD ) goto bad_width;
> > -        printk(XENLOG_G_ERR
> > -               "%pv: vGICD: unhandled word write %#"PRIregister" to ISPENDR%d\n",
> > -               v, r, gicd_reg - GICD_ISPENDR);
> > -        return 0;
> > +        rank = vgic_rank_offset(v, 1, gicd_reg - GICD_ISPENDR, DABT_WORD);
> > +        if ( rank == NULL ) goto write_ignore;
> > +
> > +        vgic_set_irqs_pending(v, r, rank->index);
> > +
> > +        return 1;
> > 
> >     case VRANGE32(GICD_ICPENDR, GICD_ICPENDRN):
> >         if ( dabt.size != DABT_WORD ) goto bad_width;
> > diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
> > index fd8cfc156d0c..613f37abab5e 100644
> > --- a/xen/arch/arm/vgic-v3.c
> > +++ b/xen/arch/arm/vgic-v3.c
> > @@ -808,10 +808,12 @@ static int __vgic_v3_distr_common_mmio_write(const char *name, struct vcpu *v,
> > 
> >     case VRANGE32(GICD_ISPENDR, GICD_ISPENDRN):
> >         if ( dabt.size != DABT_WORD ) goto bad_width;
> > -        printk(XENLOG_G_ERR
> > -               "%pv: %s: unhandled word write %#"PRIregister" to ISPENDR%d\n",
> > -               v, name, r, reg - GICD_ISPENDR);
> > -        return 0;
> > +        rank = vgic_rank_offset(v, 1, reg - GICD_ISPENDR, DABT_WORD);
> > +        if ( rank == NULL ) goto write_ignore;
> > +
> > +        vgic_set_irqs_pending(v, r, rank->index);
> > +
> > +        return 1;
> > 
> >     case VRANGE32(GICD_ICPENDR, GICD_ICPENDRN):
> >         if ( dabt.size != DABT_WORD ) goto bad_width;
> > @@ -975,6 +977,7 @@ static int vgic_v3_rdistr_sgi_mmio_write(struct vcpu *v, mmio_info_t *info,
> >     case VREG32(GICR_ICACTIVER0):
> >     case VREG32(GICR_ICFGR1):
> >     case VRANGE32(GICR_IPRIORITYR0, GICR_IPRIORITYR7):
> > +    case VREG32(GICR_ISPENDR0):
> >          /*
> >           * Above registers offset are common with GICD.
> >           * So handle common with GICD handling
> > @@ -982,13 +985,6 @@ static int vgic_v3_rdistr_sgi_mmio_write(struct vcpu *v, mmio_info_t *info,
> >         return __vgic_v3_distr_common_mmio_write("vGICR: SGI", v,
> >                                                  info, gicr_reg, r);
> > 
> > -    case VREG32(GICR_ISPENDR0):
> > -        if ( dabt.size != DABT_WORD ) goto bad_width;
> > -        printk(XENLOG_G_ERR
> > -               "%pv: vGICR: SGI: unhandled word write %#"PRIregister" to ISPENDR0\n",
> > -               v, r);
> > -        return 0;
> > -
> >     case VREG32(GICR_ICPENDR0):
> >         if ( dabt.size != DABT_WORD ) goto bad_width;
> >         printk(XENLOG_G_ERR
> > diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> > index 82f524a35c9e..8f9400a51960 100644
> > --- a/xen/arch/arm/vgic.c
> > +++ b/xen/arch/arm/vgic.c
> > @@ -423,6 +423,53 @@ void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
> >     }
> > }
> > 
> > +void vgic_set_irqs_pending(struct vcpu *v, uint32_t r, unsigned int rank)
> > +{
> > +    const unsigned long mask = r;
> > +    unsigned int i;
> > +    /* The first rank is always per-vCPU */
> > +    bool private = rank == 0;
> > +
> > +    /* LPIs will never be set pending via this function */
> > +    ASSERT(!is_lpi(32 * rank + 31));
> > +
> > +    for_each_set_bit( i, &mask, 32 )
> > +    {
> > +        unsigned int irq = i + 32 * rank;
> > +
> > +        if ( !private )
> > +        {
> > +            struct pending_irq *p = spi_to_pending(v->domain, irq);
> > +
> > +            /*
> > +             * When the domain sets the pending state for a HW interrupt on
> > +             * the virtual distributor, we set the pending state on the
> > +             * physical distributor.
> > +             *
> > +             * XXX: Investigate whether we would be able to set the
> > +             * physical interrupt active and save an interruption. (This
> > +             * is what the new vGIC does).
> > +             */
> > +            if ( p->desc != NULL )
> > +            {
> > +                unsigned long flags;
> > +
> > +                spin_lock_irqsave(&p->desc->lock, flags);
> > +                gic_set_pending_state(p->desc, true);
> > +                spin_unlock_irqrestore(&p->desc->lock, flags);
> > +                continue;
> > +            }
> > +        }
> > +
> > +        /*
> > +         * If the interrupt is per-vCPU, then we want to inject the vIRQ
> > +         * to v; otherwise we should let the function figure out the
> > +         * correct vCPU.
> > +         */
> > +        vgic_inject_irq(v->domain, private ? v : NULL, irq, true);
> > +    }
> > +}
> > +
> > bool vgic_to_sgi(struct vcpu *v, register_t sgir, enum gic_sgi_mode irqmode,
> >                  int virq, const struct sgi_target *target)
> > {
> > diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
> > index ce1e3c4bbdac..62c2ae538db2 100644
> > --- a/xen/include/asm-arm/vgic.h
> > +++ b/xen/include/asm-arm/vgic.h
> > @@ -288,6 +288,8 @@ extern struct vgic_irq_rank *vgic_rank_offset(struct vcpu *v, int b, int n, int
> > extern struct vgic_irq_rank *vgic_rank_irq(struct vcpu *v, unsigned int irq);
> > extern void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n);
> > extern void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n);
> > +extern void vgic_set_irqs_pending(struct vcpu *v, uint32_t r,
> > +                                  unsigned int rank);
> > extern void register_vgic_ops(struct domain *d, const struct vgic_ops *ops);
> > int vgic_v2_init(struct domain *d, int *mmio_count);
> > int vgic_v3_init(struct domain *d, int *mmio_count);
> > -- 
> > 2.17.1
> > 
> 


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 02:35:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 02:35:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88519.166467 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lENXf-0004IF-75; Tue, 23 Feb 2021 02:35:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88519.166467; Tue, 23 Feb 2021 02:35:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lENXf-0004I4-1Y; Tue, 23 Feb 2021 02:35:15 +0000
Received: by outflank-mailman (input) for mailman id 88519;
 Tue, 23 Feb 2021 02:35:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=p6em=HZ=epam.com=prvs=268883478e=volodymyr_babchuk@srs-us1.protection.inumbo.net>)
 id 1lENXd-00046u-V9
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 02:35:13 +0000
Received: from mx0b-0039f301.pphosted.com (unknown [148.163.137.242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 721913e0-99e5-4f4c-9933-dbdb10ca54f0;
 Tue, 23 Feb 2021 02:35:04 +0000 (UTC)
Received: from pps.filterd (m0174680.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 11N2QQxk004083; Tue, 23 Feb 2021 02:35:01 GMT
Received: from eur01-db5-obe.outbound.protection.outlook.com
 (mail-db5eur01lp2052.outbound.protection.outlook.com [104.47.2.52])
 by mx0b-0039f301.pphosted.com with ESMTP id 36vqte83qr-2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Tue, 23 Feb 2021 02:35:01 +0000
Received: from AM0PR03MB3508.eurprd03.prod.outlook.com (2603:10a6:208:4f::23)
 by AM0PR0302MB3235.eurprd03.prod.outlook.com (2603:10a6:208:a::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3868.32; Tue, 23 Feb
 2021 02:34:57 +0000
Received: from AM0PR03MB3508.eurprd03.prod.outlook.com
 ([fe80::a9a4:6122:8de2:64cb]) by AM0PR03MB3508.eurprd03.prod.outlook.com
 ([fe80::a9a4:6122:8de2:64cb%6]) with mapi id 15.20.3846.042; Tue, 23 Feb 2021
 02:34:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 721913e0-99e5-4f4c-9933-dbdb10ca54f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=n8UsQ/zNRlGvj/CMCAtMQlV8xoeGe/qY82ASPb8bsv1EJdfCD5A166Brx+0DHQCkbAvOD1j0OJ7Rury/pO+pMG69Q/YnB0J4EimaaZTI77s1IgEpb5JMteMdHZxmAd3/fcXj8hcFvvZRcIGP8pKEw691wnOqPbCHPzi1y+rFYFCe7dafaFJOy1ewuh04VM4e5PmlkjjRuWqsmOCYtuXQ7XdgDKEKawldJ28zOD/M2QkCa4xC9uFKERuBr1193ecVyrBJ36iqOouoStzvs5ZUhysNObYjnBrSzNwAQx6pXyFMnQxtpn8PsAXILqS3kEDPb/L3LGD3JgM9JKJMgVl+ZA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Z7bkZJeAVBuSp9Gyc797wEvVdAIwSUhCtPUTvuKcVEE=;
 b=CYpIuVSxRvIsnDd2YCEd7pN6Q2wWhZkEbUaFx1JQjeRvPZMICDtYtdkrltjVN2DVPhhiVnZIi8XIZgD66iqoAQxRa0OIevf6+98UwhOkgJnMKEcR4M0DeP8JqnTaBJ5ZCCmRqtj7aWvozqUrzlFzy2/jHz7mLGdtotJE8iDlNmSL6lVsTVrotp7CO7IYSlocaAJGuXH+DDRX35mzl3tWxTPICziYo273iHLH94nocdeZE6XarCAN/VlClsVDF3kMAN/ej0AsGgp/xrtN4z1WIo3pC7vcR7KsBLcap9yYpc/UwOpHxpm4ZkcPflDVVejpyNHP4hlAZArV8Zs+Jr8xZQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Z7bkZJeAVBuSp9Gyc797wEvVdAIwSUhCtPUTvuKcVEE=;
 b=fugEaoUmhgNcWbKy3Jbn87JBECOIM+DBEd6OK4KY7VQk/TriVRWxJk6qYa7HtezQvBznJSNn+ZTLSAgIcJnaIkFaLpNavDiz0NF1XaIVdryoRNv9EQ5AY0OVPVLQB+eEQOR6BuZR6rf6aTnoNr7qVFa3c5HON/7uS1hZ2Ymc57BwP1WOwD7rRZur3oh/ld2uDMH6fYdLCuQv+7p+3am8SWVwVLfv1D/gcWzaN77wVi5MYadAMGYDzavi2Wn74YFXwUmiGvt52nM2qpcWuE7snspokLwC5qfXGGxIg06VXhiJJkmXXSLjw8pMS05qZtKGrs8nqu1JhtoSUKdjhjYPgQ==
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
        Andrew Cooper
	<andrew.cooper3@citrix.com>,
        George Dunlap <george.dunlap@citrix.com>,
        Ian
 Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>,
        Julien Grall
	<julien@xen.org>,
        Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>
Subject: [RFC PATCH 05/10] preempt: add try_preempt() function
Thread-Topic: [RFC PATCH 05/10] preempt: add try_preempt() function
Thread-Index: AQHXCYx5ySuVthNLHUKZmgWg9dU+LA==
Date: Tue, 23 Feb 2021 02:34:57 +0000
Message-ID: <20210223023428.757694-6-volodymyr_babchuk@epam.com>
References: <20210223023428.757694-1-volodymyr_babchuk@epam.com>
In-Reply-To: <20210223023428.757694-1-volodymyr_babchuk@epam.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: git-send-email 2.30.1
authentication-results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=epam.com;
x-originating-ip: [176.36.48.175]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: cd8759b3-5f5f-4e1d-6730-08d8d7a39c3a
x-ms-traffictypediagnostic: AM0PR0302MB3235:
x-ms-exchange-transport-forked: True
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB3508.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: cd8759b3-5f5f-4e1d-6730-08d8d7a39c3a
X-MS-Exchange-CrossTenant-originalarrivaltime: 23 Feb 2021 02:34:57.0931
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: qcl/SXjF/ndaYdz1kJRPCFAz+NiQgEIuok3cm7vSoPoHTV2w2VNmvDV06qtTrEUkCfmcB2esxfpMFdKF46OwRDcnBE1H6bpwUqNf/cj+3bo=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR0302MB3235

This function can be used to preempt code running in hypervisor mode.
Generally, there are two reasons to preempt while in HYP mode:

1. An IRQ arrived. It may have woken a vCPU with a higher scheduling
   priority.
2. We are leaving an atomic context. While we were in the atomic
   context, the state of the system may have changed and we may need
   to reschedule.

It would be very inefficient to call the scheduler every time we
leave an atomic context, so a very simple optimization is used
instead. In some cases we *know* that there might be a reason for
preemption; an IRQ is one such case. There we call try_preempt(true),
which forces a reschedule if we are outside an atomic context, or
otherwise makes sure that the scheduler is invoked right after the
atomic context is left. The latter works because try_preempt(false)
is called whenever an atomic context is left: it checks whether
try_preempt(true) was called while inside the atomic context and
invokes the scheduler only in that case.

Also, the macro preempt_enable_no_sched() is introduced. It is meant
to be used by the scheduler itself, because we don't want to initiate
rescheduling from inside the scheduler code.

Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
---
 xen/common/preempt.c      | 32 +++++++++++++++++++++++++++++++-
 xen/include/xen/preempt.h |  8 ++++++++
 2 files changed, 39 insertions(+), 1 deletion(-)

diff --git a/xen/common/preempt.c b/xen/common/preempt.c
index ad61c8419a..98699aaa1f 100644
--- a/xen/common/preempt.c
+++ b/xen/common/preempt.c
@@ -4,6 +4,7 @@
  * Track atomic regions in the hypervisor which disallow sleeping.
  *
  * Copyright (c) 2010, Keir Fraser <keir@xen.org>
+ * Copyright (c) 2021, EPAM Systems
  *
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of the GNU General Public License as published by
@@ -21,13 +22,42 @@
 
 #include <xen/preempt.h>
 #include <xen/irq.h>
+#include <xen/sched.h>
+#include <xen/wait.h>
 #include <asm/system.h>
 
 DEFINE_PER_CPU(atomic_t, __preempt_count);
+DEFINE_PER_CPU(unsigned int, need_reschedule);
 
 bool_t in_atomic(void)
 {
-    return atomic_read(&preempt_count()) || in_irq() || local_irq_is_enabled();
+    return atomic_read(&preempt_count()) || in_irq();
+}
+
+void try_preempt(bool force)
+{
+    /*
+     * If caller wants us to call the scheduler, but we are in atomic
+     * context - update the flag. We will try preemption upon exit
+     * from atomic context.
+     */
+    if ( force && in_atomic() )
+    {
+        this_cpu(need_reschedule) = 1;
+        return;
+    }
+
+    /* idle vCPU schedules via soft IRQs */
+    if ( unlikely(system_state != SYS_STATE_active) ||
+         in_atomic() ||
+         is_idle_vcpu(current) )
+        return;
+
+    if ( force || this_cpu(need_reschedule) )
+    {
+        this_cpu(need_reschedule) = 0;
+        wait();
+    }
 }
 
 #ifndef NDEBUG
diff --git a/xen/include/xen/preempt.h b/xen/include/xen/preempt.h
index e217900d6e..df7352a75e 100644
--- a/xen/include/xen/preempt.h
+++ b/xen/include/xen/preempt.h
@@ -4,6 +4,7 @@
  * Track atomic regions in the hypervisor which disallow sleeping.
  *
  * Copyright (c) 2010, Keir Fraser <keir@xen.org>
+ * Copyright (c) 2021, EPAM Systems
  */
 
 #ifndef __XEN_PREEMPT_H__
@@ -15,6 +16,8 @@
 
 DECLARE_PER_CPU(atomic_t, __preempt_count);
 
+void try_preempt(bool force);
+
 #define preempt_count() (this_cpu(__preempt_count))
 
 #define preempt_disable() do {                  \
@@ -23,6 +26,11 @@ DECLARE_PER_CPU(atomic_t, __preempt_count);
 
 #define preempt_enable() do {                   \
     atomic_dec(&preempt_count());               \
+    try_preempt(false);                         \
+} while (0)
+
+#define preempt_enable_no_sched() do {          \
+    atomic_dec(&preempt_count());               \
 } while (0)
 
 bool_t in_atomic(void);
-- 
2.29.2


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 02:35:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 02:35:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88517.166443 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lENXa-0004Bt-E4; Tue, 23 Feb 2021 02:35:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88517.166443; Tue, 23 Feb 2021 02:35:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lENXa-0004Bi-9r; Tue, 23 Feb 2021 02:35:10 +0000
Received: by outflank-mailman (input) for mailman id 88517;
 Tue, 23 Feb 2021 02:35:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=p6em=HZ=epam.com=prvs=268883478e=volodymyr_babchuk@srs-us1.protection.inumbo.net>)
 id 1lENXY-00046u-V5
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 02:35:08 +0000
Received: from mx0b-0039f301.pphosted.com (unknown [148.163.137.242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8a683b22-87a2-4aba-b212-f23bca93a2d3;
 Tue, 23 Feb 2021 02:35:04 +0000 (UTC)
Received: from pps.filterd (m0174680.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 11N2QQxj004083; Tue, 23 Feb 2021 02:35:01 GMT
Received: from eur01-db5-obe.outbound.protection.outlook.com
 (mail-db5eur01lp2052.outbound.protection.outlook.com [104.47.2.52])
 by mx0b-0039f301.pphosted.com with ESMTP id 36vqte83qr-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Tue, 23 Feb 2021 02:35:01 +0000
Received: from AM0PR03MB3508.eurprd03.prod.outlook.com (2603:10a6:208:4f::23)
 by AM0PR0302MB3235.eurprd03.prod.outlook.com (2603:10a6:208:a::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3868.32; Tue, 23 Feb
 2021 02:34:56 +0000
Received: from AM0PR03MB3508.eurprd03.prod.outlook.com
 ([fe80::a9a4:6122:8de2:64cb]) by AM0PR03MB3508.eurprd03.prod.outlook.com
 ([fe80::a9a4:6122:8de2:64cb%6]) with mapi id 15.20.3846.042; Tue, 23 Feb 2021
 02:34:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8a683b22-87a2-4aba-b212-f23bca93a2d3
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=SVd+3NStcB6wNxCQ0IH0EwlhcAgzILiv20hUtyCk2+R9ofOqYwZrGr9SU8d4fQhVSnZS3Kz3xuQfa2b7cgQEGWZmioNsBkW3ZXT0g3Alk+FO+yO3FB+2+jfcAoZZ/KPv8mC4zIx4dL0rCzUjqiQ2WinKheKOHzlQIeEX/8NDCobTBL+RT0juOXHHdD3AKr9N011+WWyXX3Y987QJHTMZxnNWwERliYQjk0NOCDmYuwdLEmz38mw3kVEB0ixl5hkXHvopXvkyXcnRlShMIamFVmOsVYDZUVpfEi0zjmP4LFN5/cDT/V/jwKqwICvx8toVQvPcznKi4WwrkR3Fd6iUwg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=kphJiUy2kLhF2NKPvP/qP8YU2nqCVKOjptsGkk/vL84=;
 b=XW+boCnlFJzSuQpKzcrNOYYLys/jCIuqGUgaZy4WW+zx1AdojBFEenV5kiYsP0N5RbBbbWKWL40inH5ycgSMLVE/k/lnj+rLC5Y8BzBj1cOsj/a7zPkmG8mAATOX/bvN0LpRT1h2f03R167ZBcNns29KTzCUWJLh7p1So4KmqAfsq/Uu6euJcWU2Mzw5NHYPs9vEztkf6pKpM6LJuXjbv2lTKtMQ0VqvwidW917BRbCb3K7aQ5jCvbYv58525gh8JKvG4rQsAepjfDwRtkjGDfNyOh3//VMaX6h1QnwKumH3dEoJtGuVgNoTrJDO0L5+7b8U5y94OL3+daqFzpvjPg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=kphJiUy2kLhF2NKPvP/qP8YU2nqCVKOjptsGkk/vL84=;
 b=cS7iAANsZBsM5KfGWi9tBf0WaghKY1l69+bd9AwzqOFZC8/mjxI4eLb6qAAlCAtQOl9cUDqtXcK/jIHSlAEyOraBA4CTdD9UY3dUqOgTaZkvojaI5luFl5Fu8XAN0q8EaHbVSPNyffQbNe1++j5pz9pwNoxFKNWxqItigtbgbMOwpTdPRiESHL1KZCd3joBGe9BuhlDhKfXCdm/NS9vfyxqKZrVsN/2o4ECqj7y8cJu0nkN0ifqLbfaBPS26VmZbBjoX/f/R19/aWso1tqXTLncU826Vj+cjjC3E1ni92f0LLHL5533RzJAa7s3twZ1ttLvNx+R7ErhL8k/SrJOvUA==
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
        Andrew Cooper
	<andrew.cooper3@citrix.com>,
        George Dunlap <george.dunlap@citrix.com>,
        Ian
 Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>,
        Julien Grall
	<julien@xen.org>,
        Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>
Subject: [RFC PATCH 04/10] preempt: use atomic_t for preempt_count
Thread-Topic: [RFC PATCH 04/10] preempt: use atomic_t for preempt_count
Thread-Index: AQHXCYx5c7ga2XxajUiE7SdBv/dfFA==
Date: Tue, 23 Feb 2021 02:34:56 +0000
Message-ID: <20210223023428.757694-5-volodymyr_babchuk@epam.com>
References: <20210223023428.757694-1-volodymyr_babchuk@epam.com>
In-Reply-To: <20210223023428.757694-1-volodymyr_babchuk@epam.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: git-send-email 2.30.1
authentication-results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=epam.com;
x-originating-ip: [176.36.48.175]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: cf007190-437a-4c26-e6ff-08d8d7a39bde
x-ms-traffictypediagnostic: AM0PR0302MB3235:
x-ms-exchange-transport-forked: True
x-ms-oob-tlc-oobclassifiers: OLM:1775;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB3508.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: cf007190-437a-4c26-e6ff-08d8d7a39bde
X-MS-Exchange-CrossTenant-originalarrivaltime: 23 Feb 2021 02:34:56.6783
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: EorqOHZR/msedpiNMbRu3jdJd0YOAMW6NjB1dbgqrSDqWRJrnlgzFlxdRSh3FmOslK1Tozvs/KQsQaiuoyv3X8Cz6RMih9HsSVZpVvMyk3g=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR0302MB3235

This ensures that preempt_count is accounted correctly across
in-hypervisor context switches.

Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
---
 xen/common/preempt.c      | 6 +++---
 xen/include/xen/preempt.h | 9 ++++-----
 2 files changed, 7 insertions(+), 8 deletions(-)

diff --git a/xen/common/preempt.c b/xen/common/preempt.c
index 3b4178fd44..ad61c8419a 100644
--- a/xen/common/preempt.c
+++ b/xen/common/preempt.c
@@ -23,17 +23,17 @@
 #include <xen/irq.h>
 #include <asm/system.h>
 
-DEFINE_PER_CPU(unsigned int, __preempt_count);
+DEFINE_PER_CPU(atomic_t, __preempt_count);
 
 bool_t in_atomic(void)
 {
-    return preempt_count() || in_irq() || !local_irq_is_enabled();
+    return atomic_read(&preempt_count()) || in_irq() || local_irq_is_enabled();
 }
 
 #ifndef NDEBUG
 void ASSERT_NOT_IN_ATOMIC(void)
 {
-    ASSERT(!preempt_count());
+    ASSERT(!atomic_read(&preempt_count()));
     ASSERT(!in_irq());
     ASSERT(local_irq_is_enabled());
 }
diff --git a/xen/include/xen/preempt.h b/xen/include/xen/preempt.h
index bef83135a1..e217900d6e 100644
--- a/xen/include/xen/preempt.h
+++ b/xen/include/xen/preempt.h
@@ -9,21 +9,20 @@
 #ifndef __XEN_PREEMPT_H__
 #define __XEN_PREEMPT_H__
 
+#include <asm/atomic.h>
 #include <xen/types.h>
 #include <xen/percpu.h>
 
-DECLARE_PER_CPU(unsigned int, __preempt_count);
+DECLARE_PER_CPU(atomic_t, __preempt_count);
 
 #define preempt_count() (this_cpu(__preempt_count))
 
 #define preempt_disable() do {                  \
-    preempt_count()++;                          \
-    barrier();                                  \
+    atomic_inc(&preempt_count());               \
 } while (0)
 
 #define preempt_enable() do {                   \
-    barrier();                                  \
-    preempt_count()--;                          \
+    atomic_dec(&preempt_count());               \
 } while (0)
 
 bool_t in_atomic(void);
-- 
2.29.2


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 02:35:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 02:35:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88518.166455 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lENXc-0004EZ-QF; Tue, 23 Feb 2021 02:35:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88518.166455; Tue, 23 Feb 2021 02:35:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lENXc-0004ER-Kz; Tue, 23 Feb 2021 02:35:12 +0000
Received: by outflank-mailman (input) for mailman id 88518;
 Tue, 23 Feb 2021 02:35:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=p6em=HZ=epam.com=prvs=268883478e=volodymyr_babchuk@srs-us1.protection.inumbo.net>)
 id 1lENXc-00046p-7F
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 02:35:12 +0000
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 25fbc6db-e3bc-4661-b022-f6e721c1303b;
 Tue, 23 Feb 2021 02:35:03 +0000 (UTC)
Received: from pps.filterd (m0174677.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 11N2PMaS026304; Tue, 23 Feb 2021 02:35:01 GMT
Received: from eur01-ve1-obe.outbound.protection.outlook.com
 (mail-ve1eur01lp2050.outbound.protection.outlook.com [104.47.1.50])
 by mx0a-0039f301.pphosted.com with ESMTP id 36vq3kr64x-3
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Tue, 23 Feb 2021 02:35:00 +0000
Received: from AM0PR03MB3508.eurprd03.prod.outlook.com (2603:10a6:208:4f::23)
 by AM0PR03MB4082.eurprd03.prod.outlook.com (2603:10a6:208:70::26)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3868.27; Tue, 23 Feb
 2021 02:34:56 +0000
Received: from AM0PR03MB3508.eurprd03.prod.outlook.com
 ([fe80::a9a4:6122:8de2:64cb]) by AM0PR03MB3508.eurprd03.prod.outlook.com
 ([fe80::a9a4:6122:8de2:64cb%6]) with mapi id 15.20.3846.042; Tue, 23 Feb 2021
 02:34:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 25fbc6db-e3bc-4661-b022-f6e721c1303b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=CvtP5KMBplTBWpgA6/CvU5jOwv9Af/k5Xbh57TfCHRdCgHI8E0UbUmr9CrnxaK3Vm5zPJUhSzf1WsRdp5+x+Q/8M56T5KO1hUVjrHzYbFSdruSevpde4Z8SbZEQhFh4NE7xA4UDKfVIQvaC84rfpzRDdEUJE++lJ3MP5Sm7L7HfNvz2H5b3npX7v/yXz58XloyfWeQsWiGHQzkjSBnmJINcWo8S8g4m6uBjKrjpV7yaSpReKYWRZ1uvkgmcJlMQY3YPRVGdc9dNpRjMhxDUPr80sKDWnbHfiDtfwvokcRAvakJILu+lx5PKt00tNNGu+okwqHbOE9/EQ5DW5QGfJ9w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=QJTEpv4AaTo8JBp7LsUk0AWc8Ja4nv4Eo3TfeWXSlUU=;
 b=T5AYYPqAqrpYKj8JVdeZoHb3cgrDqP0j6dtnaxW0pM6j81cpXvL9vKEM531IeaQoQVQaPGFG83wunAZKoDbQEI0mAR/2xOkr1FIiK3BrmS+v0NtvUsvl1RwMRUMr6HNd/kSGwwmRgShpAHx7U3Cbib84liEb51d7qhe6nfgdktyT2XwQVkDwfut5qClAgtFl3IVVGzB71FFQX4QKO+Nx6kkFNoDqk2hT5UmAAvJmiXLaT/PORgHwWQVjvxMUMHGSyiaWNa4nVwhq9m04Jopnu4vADva4Lts2oomT8QaEpFjup238DNMwvMKFVNtGzpeDxD71YEYfkXSsC+DIAgoAMg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=QJTEpv4AaTo8JBp7LsUk0AWc8Ja4nv4Eo3TfeWXSlUU=;
 b=JyVQxcrqLsMJ6BVmrwNGH+0zeVQtsEOWxM4y6sYfk++saiBG4DU4sBFgZWubhtjum/ay8gNIoOyqMCunfkswUdsHDbm5HrMtqzKiZX0yyuCEfMnzF163u+D1bC5NBb3l/BwwPJ6L1yhngPN5Pm5kJC1nfrfS0S0Yss7NMlNk5DvctbQxo9s+5MVSanbVNEeNovHq7WBtBbHnfn2tHVTmvjIVRg2Eqc+ORpg+9j5fUokiaILF6UUheh0fF6PRxcW0SvY7KJQL7bKK0krYVOghHRY9ppV3gXYgOStaVv8e3Qzdf/Xt9Da13XskLf1gqrGDNxsA9rSQla4ZAqQyxPvXGQ==
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
        Dario Faggioli
	<dfaggioli@suse.com>, Meng Xu <mengxu@cis.upenn.edu>,
        George Dunlap
	<george.dunlap@citrix.com>
Subject: [RFC PATCH 02/10] sched: rt: save IRQ state during locking
Thread-Topic: [RFC PATCH 02/10] sched: rt: save IRQ state during locking
Thread-Index: AQHXCYx4gKfh+6izfkCEXHj4/bqk4w==
Date: Tue, 23 Feb 2021 02:34:56 +0000
Message-ID: <20210223023428.757694-3-volodymyr_babchuk@epam.com>
References: <20210223023428.757694-1-volodymyr_babchuk@epam.com>
In-Reply-To: <20210223023428.757694-1-volodymyr_babchuk@epam.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: git-send-email 2.30.1
authentication-results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=epam.com;
x-originating-ip: [176.36.48.175]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 530c420b-0734-458a-0fc7-08d8d7a39b87
x-ms-traffictypediagnostic: AM0PR03MB4082:
x-ms-exchange-transport-forked: True
x-ms-oob-tlc-oobclassifiers: OLM:4941;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB3508.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 530c420b-0734-458a-0fc7-08d8d7a39b87
X-MS-Exchange-CrossTenant-originalarrivaltime: 23 Feb 2021 02:34:56.0786
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 6inaaIkdhXiQ3tVQfPw5qfIT4X4kMnQDQ3aLZFiFjYnD3Tjo0rj/VnJwOMAdf+xjoGSuaHxTenKyeYv1vSsC+fh0vlVWQD6KnJsbJKKMGLo=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR03MB4082

With Xen preemption enabled, scheduler functions can be called with
IRQs disabled (for example, at the end of an IRQ handler), so the
scheduler code should save and restore the IRQ state rather than
unconditionally re-enable IRQs on unlock.

Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
---
 xen/common/sched/rt.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/xen/common/sched/rt.c b/xen/common/sched/rt.c
index c24cd2ac32..e1711a8edc 100644
--- a/xen/common/sched/rt.c
+++ b/xen/common/sched/rt.c
@@ -1318,7 +1318,8 @@ static void
 rt_context_saved(const struct scheduler *ops, struct sched_unit *unit)
 {
     struct rt_unit *svc = rt_unit(unit);
-    spinlock_t *lock = unit_schedule_lock_irq(unit);
+    unsigned long flags;
+    spinlock_t *lock = unit_schedule_lock_irqsave(unit, &flags);
 
     __clear_bit(__RTDS_scheduled, &svc->flags);
     /* not insert idle unit to runq */
@@ -1335,7 +1336,7 @@ rt_context_saved(const struct scheduler *ops, struct sched_unit *unit)
         replq_remove(ops, svc);
 
 out:
-    unit_schedule_unlock_irq(lock, unit);
+    unit_schedule_unlock_irqrestore(lock, flags, unit);
 }
 
 /*
@@ -1460,9 +1461,10 @@ static void repl_timer_handler(void *data){
     struct list_head *runq = rt_runq(ops);
     struct list_head *iter, *tmp;
     struct rt_unit *svc;
+    unsigned long flags;
     LIST_HEAD(tmp_replq);
 
-    spin_lock_irq(&prv->lock);
+    spin_lock_irqsave(&prv->lock, flags);
 
     now = NOW();
 
@@ -1525,7 +1527,7 @@ static void repl_timer_handler(void *data){
     if ( !list_empty(replq) )
         set_timer(&prv->repl_timer, replq_elem(replq->next)->cur_deadline);
 
-    spin_unlock_irq(&prv->lock);
+    spin_unlock_irqrestore(&prv->lock, flags);
 }
 
 static const struct scheduler sched_rtds_def = {
-- 
2.29.2


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 02:35:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 02:35:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88514.166407 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lENXU-000476-Ck; Tue, 23 Feb 2021 02:35:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88514.166407; Tue, 23 Feb 2021 02:35:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lENXU-00046z-7i; Tue, 23 Feb 2021 02:35:04 +0000
Received: by outflank-mailman (input) for mailman id 88514;
 Tue, 23 Feb 2021 02:35:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=p6em=HZ=epam.com=prvs=268883478e=volodymyr_babchuk@srs-us1.protection.inumbo.net>)
 id 1lENXS-00046p-Bv
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 02:35:02 +0000
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f3dbc9a8-3a03-4c02-b139-eb8fd8326a0b;
 Tue, 23 Feb 2021 02:35:01 +0000 (UTC)
Received: from pps.filterd (m0174677.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 11N2PMaR026304; Tue, 23 Feb 2021 02:35:00 GMT
Received: from eur01-ve1-obe.outbound.protection.outlook.com
 (mail-ve1eur01lp2050.outbound.protection.outlook.com [104.47.1.50])
 by mx0a-0039f301.pphosted.com with ESMTP id 36vq3kr64x-2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Tue, 23 Feb 2021 02:34:59 +0000
Received: from AM0PR03MB3508.eurprd03.prod.outlook.com (2603:10a6:208:4f::23)
 by AM0PR03MB4082.eurprd03.prod.outlook.com (2603:10a6:208:70::26)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3868.27; Tue, 23 Feb
 2021 02:34:55 +0000
Received: from AM0PR03MB3508.eurprd03.prod.outlook.com
 ([fe80::a9a4:6122:8de2:64cb]) by AM0PR03MB3508.eurprd03.prod.outlook.com
 ([fe80::a9a4:6122:8de2:64cb%6]) with mapi id 15.20.3846.042; Tue, 23 Feb 2021
 02:34:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f3dbc9a8-3a03-4c02-b139-eb8fd8326a0b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ILj3bBHfEiUMCREsS+BKq7fCjQJP3qdoZCtWoK1RvOke0nKZsj9S77E0THWkfH7X/rl8OAtEY0UtqaiuLZyJF6gFWqOg5U7w9Ta5vl4Fkq2B3uWsgJvq1XxEkHuOXladrGRvlvZB3+DTou36EuqCgkk8uy1Un9ceEsT42TYgw51vWUDJlQqYG2/Ke7J5Be24tQanuna8XhcDHk25geHyMTkCJwv2DhS51Yc0QCO9v3j+IaCnJXG4ob4C8Q9yf1t9O/+//HzPpRwxlPvs5IjDdbF98QiBjAsTv3gWoNHxo7LkJynfkc2TNLVYYLvyiTjr10FNnT7ib6Sxc5WFJZZqCw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=tpKHysdqPazW6fGzt3svNddiygNyG3avYaAeAhXrZRY=;
 b=lgf4la7IBw82a7bcvFzWNDtkjOTkd5BDBUGnCfEP8P0a5Mov8a1HfC+NiLd4yAQMqy+Ghvq9jzUobb450pwI35r2EfSUz5dJW0KvlLe72Tg+yfAkT+pZAT1xxnirQ7sKXfleocq3X3mmsG0r4MZHsCTVKVHYL7xgBNIQyj+FHjP0F1ke9Dd5HsA+CCemmrc9wEDVc50o0Ijqgp4c8k+H+3+lQ7SQIWYVcQCO63tOXtEsH7efNbFPA3XJAOZmRcYMjqO6fblYMGxwXAQP8OUU68hl1Qp6e6NX/FNUG+Vvpg5RSnu71hNB24TwvkXXeZ0Gmg5Tez+GeI0k3FAUt2kLPg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=tpKHysdqPazW6fGzt3svNddiygNyG3avYaAeAhXrZRY=;
 b=zUOwQBT3y4kV3dyNYL+NWdf5VDL3lvxv4B0e82dnHHq+5hNA+acXLBt8DaaQvhiNlYeTWaLzZ59zOBA5qx9ZkPl4aSksTCxneujqolLDDoqgH931PDwO0wmrfw+Cf2P0l1HZBYB/5B0N+rgTbWs7z/OuAMIzAPpHkjVol5raHaFartj4JJzQ/35JdpAwghmfsOVzigabqN//nthmzJ5ktsoipTInVe4e6+DugypuWvRYBPzJkHmJ9rl2jganMSU/n5BXRvYm2Fn01nncdmTcKL1gjY07wJuKfgtM3YmysDcImMvLfBqQMqIY1W3oXLqMN1VcvJ6OvlaB9Hc6nB2zSw==
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
        George Dunlap
	<george.dunlap@citrix.com>,
        Dario Faggioli <dfaggioli@suse.com>
Subject: [RFC PATCH 01/10] sched: core: save IRQ state during locking
Thread-Topic: [RFC PATCH 01/10] sched: core: save IRQ state during locking
Thread-Index: AQHXCYx4NBzFu5keaEOeI3LkFrTKiA==
Date: Tue, 23 Feb 2021 02:34:55 +0000
Message-ID: <20210223023428.757694-2-volodymyr_babchuk@epam.com>
References: <20210223023428.757694-1-volodymyr_babchuk@epam.com>
In-Reply-To: <20210223023428.757694-1-volodymyr_babchuk@epam.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: git-send-email 2.30.1
authentication-results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=epam.com;
x-originating-ip: [176.36.48.175]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: cc0fc91f-ed67-49c5-104e-08d8d7a39b41
x-ms-traffictypediagnostic: AM0PR03MB4082:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: 
 <AM0PR03MB40823B23C82B5E01B0A73528E6809@AM0PR03MB4082.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:469;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB3508.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: cc0fc91f-ed67-49c5-104e-08d8d7a39b41
X-MS-Exchange-CrossTenant-originalarrivaltime: 23 Feb 2021 02:34:55.6839
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: bQW1NGyCh4FDQzBPXQe46aJBBSs9pjiArjAypXldUPsocKb2tl+v9zw5GYlLgTqRMyNOqrqm0zeUFHc1d8YBhJHPb0+r6tcmHc/cYB9a5wY=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR03MB4082
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 priorityscore=1501
 phishscore=0 adultscore=0 impostorscore=0 spamscore=0 bulkscore=0
 suspectscore=0 mlxlogscore=891 clxscore=1015 mlxscore=0 malwarescore=0
 lowpriorityscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2102230018

With Xen preemption enabled, scheduler functions can be called with
IRQs disabled (for example, at the end of an IRQ handler), so we should
save and restore the IRQ state in the scheduler code.

Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
---
 xen/common/sched/core.c | 33 ++++++++++++++++++---------------
 1 file changed, 18 insertions(+), 15 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index 9745a77eee..7e075613d5 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -2470,7 +2470,8 @@ static struct vcpu *sched_force_context_switch(struct vcpu *vprev,
  * sched_res_rculock has been dropped.
  */
 static struct sched_unit *sched_wait_rendezvous_in(struct sched_unit *prev,
-                                                   spinlock_t **lock, int cpu,
+                                                   spinlock_t **lock,
+                                                   unsigned long *flags, int cpu,
                                                    s_time_t now)
 {
     struct sched_unit *next;
@@ -2500,7 +2501,7 @@ static struct sched_unit *sched_wait_rendezvous_in(struct sched_unit *prev,
                 prev->rendezvous_in_cnt++;
                 atomic_set(&prev->rendezvous_out_cnt, 0);

-                pcpu_schedule_unlock_irq(*lock, cpu);
+                pcpu_schedule_unlock_irqrestore(*lock, *flags, cpu);

                 sched_context_switch(vprev, v, false, now);

@@ -2530,7 +2531,7 @@ static struct sched_unit *sched_wait_rendezvous_in(struct sched_unit *prev,
             prev->rendezvous_in_cnt++;
             atomic_set(&prev->rendezvous_out_cnt, 0);

-            pcpu_schedule_unlock_irq(*lock, cpu);
+            pcpu_schedule_unlock_irqrestore(*lock, *flags, cpu);

             raise_softirq(SCHED_SLAVE_SOFTIRQ);
             sched_context_switch(vprev, vprev, false, now);
@@ -2538,11 +2539,11 @@ static struct sched_unit *sched_wait_rendezvous_in(struct sched_unit *prev,
             return NULL;         /* ARM only. */
         }

-        pcpu_schedule_unlock_irq(*lock, cpu);
+        pcpu_schedule_unlock_irqrestore(*lock, *flags, cpu);

         cpu_relax();

-        *lock = pcpu_schedule_lock_irq(cpu);
+        *lock = pcpu_schedule_lock_irqsave(cpu, flags);

         /*
          * Check for scheduling resource switched. This happens when we are
@@ -2557,7 +2558,7 @@ static struct sched_unit *sched_wait_rendezvous_in(struct sched_unit *prev,
             ASSERT(is_idle_unit(prev));
             atomic_set(&prev->next_task->rendezvous_out_cnt, 0);
             prev->rendezvous_in_cnt = 0;
-            pcpu_schedule_unlock_irq(*lock, cpu);
+            pcpu_schedule_unlock_irqrestore(*lock, *flags, cpu);
             rcu_read_unlock(&sched_res_rculock);
             return NULL;
         }
@@ -2574,12 +2575,13 @@ static void sched_slave(void)
     spinlock_t           *lock;
     bool                  do_softirq = false;
     unsigned int          cpu = smp_processor_id();
+    unsigned long         flags;

     ASSERT_NOT_IN_ATOMIC();

     rcu_read_lock(&sched_res_rculock);

-    lock = pcpu_schedule_lock_irq(cpu);
+    lock = pcpu_schedule_lock_irqsave(cpu, &flags);

     now = NOW();

@@ -2590,7 +2592,7 @@ static void sched_slave(void)

         if ( v )
         {
-            pcpu_schedule_unlock_irq(lock, cpu);
+            pcpu_schedule_unlock_irqrestore(lock, flags, cpu);

             sched_context_switch(vprev, v, false, now);

@@ -2602,7 +2604,7 @@ static void sched_slave(void)

     if ( !prev->rendezvous_in_cnt )
     {
-        pcpu_schedule_unlock_irq(lock, cpu);
+        pcpu_schedule_unlock_irqrestore(lock, flags, cpu);

         rcu_read_unlock(&sched_res_rculock);

@@ -2615,11 +2617,11 @@ static void sched_slave(void)

     stop_timer(&get_sched_res(cpu)->s_timer);

-    next = sched_wait_rendezvous_in(prev, &lock, cpu, now);
+    next = sched_wait_rendezvous_in(prev, &lock, &flags, cpu, now);
     if ( !next )
         return;

-    pcpu_schedule_unlock_irq(lock, cpu);
+    pcpu_schedule_unlock_irqrestore(lock, flags, cpu);

     sched_context_switch(vprev, sched_unit2vcpu_cpu(next, cpu),
                          is_idle_unit(next) && !is_idle_unit(prev), now);
@@ -2637,6 +2639,7 @@ static void schedule(void)
     s_time_t              now;
     struct sched_resource *sr;
     spinlock_t           *lock;
+    unsigned long         flags;
     int cpu = smp_processor_id();
     unsigned int          gran;

@@ -2646,7 +2649,7 @@ static void schedule(void)

     rcu_read_lock(&sched_res_rculock);

-    lock = pcpu_schedule_lock_irq(cpu);
+    lock = pcpu_schedule_lock_irqsave(cpu, &flags);

     sr = get_sched_res(cpu);
     gran = sr->granularity;
@@ -2657,7 +2660,7 @@ static void schedule(void)
          * We have a race: sched_slave() should be called, so raise a softirq
          * in order to re-enter schedule() later and call sched_slave() now.
          */
-        pcpu_schedule_unlock_irq(lock, cpu);
+        pcpu_schedule_unlock_irqrestore(lock, flags, cpu);

         rcu_read_unlock(&sched_res_rculock);

@@ -2676,7 +2679,7 @@ static void schedule(void)
         prev->rendezvous_in_cnt = gran;
         cpumask_andnot(mask, sr->cpus, cpumask_of(cpu));
         cpumask_raise_softirq(mask, SCHED_SLAVE_SOFTIRQ);
-        next = sched_wait_rendezvous_in(prev, &lock, cpu, now);
+        next = sched_wait_rendezvous_in(prev, &lock, &flags, cpu, now);
         if ( !next )
             return;
     }
@@ -2687,7 +2690,7 @@ static void schedule(void)
         atomic_set(&next->rendezvous_out_cnt, 0);
     }

-    pcpu_schedule_unlock_irq(lock, cpu);
+    pcpu_schedule_unlock_irqrestore(lock, flags, cpu);

     vnext = sched_unit2vcpu_cpu(next, cpu);
     sched_context_switch(vprev, vnext,
-- 
2.29.2


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 02:35:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 02:35:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88515.166412 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lENXU-00047X-MB; Tue, 23 Feb 2021 02:35:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88515.166412; Tue, 23 Feb 2021 02:35:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lENXU-00047L-HE; Tue, 23 Feb 2021 02:35:04 +0000
Received: by outflank-mailman (input) for mailman id 88515;
 Tue, 23 Feb 2021 02:35:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=p6em=HZ=epam.com=prvs=268883478e=volodymyr_babchuk@srs-us1.protection.inumbo.net>)
 id 1lENXT-00046u-W7
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 02:35:04 +0000
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 68b0f640-32de-4aa0-85b5-e53c587577d8;
 Tue, 23 Feb 2021 02:35:02 +0000 (UTC)
Received: from pps.filterd (m0174677.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 11N2PMaT026304; Tue, 23 Feb 2021 02:35:01 GMT
Received: from eur01-ve1-obe.outbound.protection.outlook.com
 (mail-ve1eur01lp2050.outbound.protection.outlook.com [104.47.1.50])
 by mx0a-0039f301.pphosted.com with ESMTP id 36vq3kr64x-4
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Tue, 23 Feb 2021 02:35:01 +0000
Received: from AM0PR03MB3508.eurprd03.prod.outlook.com (2603:10a6:208:4f::23)
 by AM0PR03MB4082.eurprd03.prod.outlook.com (2603:10a6:208:70::26)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3868.27; Tue, 23 Feb
 2021 02:34:56 +0000
Received: from AM0PR03MB3508.eurprd03.prod.outlook.com
 ([fe80::a9a4:6122:8de2:64cb]) by AM0PR03MB3508.eurprd03.prod.outlook.com
 ([fe80::a9a4:6122:8de2:64cb%6]) with mapi id 15.20.3846.042; Tue, 23 Feb 2021
 02:34:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 68b0f640-32de-4aa0-85b5-e53c587577d8
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=a/0LdzXELduml+jUaLJtNvT+Wx2FNmyZTe49zUVTc283TNgbc+Kc7DBmx1uV9R3sJ2YlVjl7duoCoOQIPlvwvoDssF2o1HrlfcTgoHelS/A1QyupTfCHM4ZlMMmuICNNWxNYHZx4WM+bO5cmUkM0bCVRJ56Kp6uE4cHCjOoFbMAJBwxyiOFaT8mjHt74l1zAvjNiRCN4HgF6/BMxDHyZ4MWlSbwjcJWnusAVRgPREN1oa7lf9kMkH0ddXnL9Ccziz3btUcS4WKi/rNLaek0i0uWMVP2AhckZ8g/HzeOc7rOJW4Vfz0C+AD/nxk3uKzc4UF7cLm+zqDOK3M2gT7B3Bg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Fn4xI64B+KozaWUtPjUMMalIz9nsigT141zdrSuovfQ=;
 b=dnJ6JWrECpTgYrkflvlaqJwEjXebtzyNyQxPPl356g+Vr1vNrGG0UKHj8uUtQ5CXSVrAG9BidHev7KHyfwFWxkm1ZlZ65XLSTMht1U//8j8B+G+LjcKdLg5ax9oZADCD+pIEmc2/WnEg8IB8OiTR1vPY8bSnnuFGAorTMPbQ2Gj54R3ilLAvVbtlKyfI/4915iDLTKK2bl207KLt6Kyc3OCM96eHgAmjHAK6z8o68l1AsO/hj54RTn0zcMHqMnn6vIUzJdad9LTT+vUtF4LhhwzQ2KWsJfOcEVOiZq9pN4MULnJbWGKfTksncT1R+/tc4k3MqQfvpKrRRNxJDWrQxQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Fn4xI64B+KozaWUtPjUMMalIz9nsigT141zdrSuovfQ=;
 b=Xe0hc7GWVavvNZAN0q4rZt8nrTovi7xQOS19ZeaO7Cv/xqHhmWAtLdy9osZzGdjfaPBh1sO9WjpdZTFVVbOUGiKwkXqtbFZQKnqqd70SY7xJj3fVwZUBHnuE6fj+dQieEbEYmUBMH5NWm/BTMVEmfB2u8qfCCicTqGxshNRVT+dnDgbe6Alp+fMPNaNFTkqK3oGR6FIkqhWHWu53Qn9mIEMizqFpTJMiIg/f5BSy+oAOo0H5dTP88CsZfrrDBkcA8TtyQRflQlVX+KFclJ7vxM5CduR3LrkfgufBgWENFIEPERoXWcV4VBDtKqwOfVkUINzoKfKVFlmPBjvzImKHYA==
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
        George Dunlap
	<george.dunlap@citrix.com>,
        Dario Faggioli <dfaggioli@suse.com>
Subject: [RFC PATCH 03/10] sched: credit2: save IRQ state during locking
Thread-Topic: [RFC PATCH 03/10] sched: credit2: save IRQ state during locking
Thread-Index: AQHXCYx5L/8BTjhE1kmjLesetNwagA==
Date: Tue, 23 Feb 2021 02:34:56 +0000
Message-ID: <20210223023428.757694-4-volodymyr_babchuk@epam.com>
References: <20210223023428.757694-1-volodymyr_babchuk@epam.com>
In-Reply-To: <20210223023428.757694-1-volodymyr_babchuk@epam.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: git-send-email 2.30.1
authentication-results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=epam.com;
x-originating-ip: [176.36.48.175]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: eb228de5-cf0a-4d52-a8d7-08d8d7a39bb4
x-ms-traffictypediagnostic: AM0PR03MB4082:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: 
 <AM0PR03MB40820ECFE21B530892450458E6809@AM0PR03MB4082.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:4303;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB3508.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: eb228de5-cf0a-4d52-a8d7-08d8d7a39bb4
X-MS-Exchange-CrossTenant-originalarrivaltime: 23 Feb 2021 02:34:56.3895
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: +adPgn6nAGWGBXgAkinbxzNqAC9hVqx4ES6xcjdxn+lFnh2aCCO5sCosH4IqQUjxLp0GAvyEldl6K3e5+4fscElAIySHnRcVgto14N9q8i8=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR03MB4082
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 priorityscore=1501
 phishscore=0 adultscore=0 impostorscore=0 spamscore=0 bulkscore=0
 suspectscore=0 mlxlogscore=912 clxscore=1015 mlxscore=0 malwarescore=0
 lowpriorityscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2102230018

With Xen preemption enabled, scheduler functions can be called with
IRQs disabled (for example, at the end of an IRQ handler), so we should
save and restore the IRQ state in the scheduler code.

Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
---
 xen/common/sched/credit2.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/xen/common/sched/credit2.c b/xen/common/sched/credit2.c
index eb5e5a78c5..b3a9786425 100644
--- a/xen/common/sched/credit2.c
+++ b/xen/common/sched/credit2.c
@@ -2297,7 +2297,8 @@ static void
 csched2_context_saved(const struct scheduler *ops, struct sched_unit *unit)
 {
     struct csched2_unit * const svc = csched2_unit(unit);
-    spinlock_t *lock = unit_schedule_lock_irq(unit);
+    unsigned long flags;
+    spinlock_t *lock = unit_schedule_lock_irqsave(unit, &flags);
     s_time_t now = NOW();
     LIST_HEAD(were_parked);

@@ -2329,7 +2330,7 @@ csched2_context_saved(const struct scheduler *ops, struct sched_unit *unit)
     else if ( !is_idle_unit(unit) )
         update_load(ops, svc->rqd, svc, -1, now);

-    unit_schedule_unlock_irq(lock, unit);
+    unit_schedule_unlock_irqrestore(lock, flags, unit);

     unpark_parked_units(ops, &were_parked);
 }
-- 
2.29.2


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 02:35:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 02:35:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88516.166431 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lENXX-00049L-Vs; Tue, 23 Feb 2021 02:35:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88516.166431; Tue, 23 Feb 2021 02:35:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lENXX-00049D-RN; Tue, 23 Feb 2021 02:35:07 +0000
Received: by outflank-mailman (input) for mailman id 88516;
 Tue, 23 Feb 2021 02:35:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=p6em=HZ=epam.com=prvs=268883478e=volodymyr_babchuk@srs-us1.protection.inumbo.net>)
 id 1lENXX-00046p-77
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 02:35:07 +0000
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d1488b3b-51ee-46c6-a8c5-5287ac50cb63;
 Tue, 23 Feb 2021 02:35:02 +0000 (UTC)
Received: from pps.filterd (m0174677.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 11N2PMaQ026304; Tue, 23 Feb 2021 02:34:59 GMT
Received: from eur01-ve1-obe.outbound.protection.outlook.com
 (mail-ve1eur01lp2050.outbound.protection.outlook.com [104.47.1.50])
 by mx0a-0039f301.pphosted.com with ESMTP id 36vq3kr64x-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Tue, 23 Feb 2021 02:34:58 +0000
Received: from AM0PR03MB3508.eurprd03.prod.outlook.com (2603:10a6:208:4f::23)
 by AM0PR03MB4082.eurprd03.prod.outlook.com (2603:10a6:208:70::26)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3868.27; Tue, 23 Feb
 2021 02:34:55 +0000
Received: from AM0PR03MB3508.eurprd03.prod.outlook.com
 ([fe80::a9a4:6122:8de2:64cb]) by AM0PR03MB3508.eurprd03.prod.outlook.com
 ([fe80::a9a4:6122:8de2:64cb%6]) with mapi id 15.20.3846.042; Tue, 23 Feb 2021
 02:34:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d1488b3b-51ee-46c6-a8c5-5287ac50cb63
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Z6/2exD15NZLAtaQWJ1UtLZVGYajg/Kl6edODKKwKdFrp/DAtuB6vHSObpH6ebXoeCVIJAfGT6Yc6QVCx6q1F/39CbABUrp52yFQlOFxVreHZeKyGDLC6NWPXTbrgHUrqLpDhsWg/ipk71KNNGkAviJdTR4cRn6auDcscOUZYGIcqMrEZU871LNfeh1iXZWp90QZ3MZabhy5eeb9JTlBMDLiiLm6uWURtvXjS8qzh1Vzo0LlyXi7L3mBnQPC+jETuIet0o6kLsAFmASNn3XDeKHgjpg/dC8KUxjuxmJ5Bu8atcaBaV3O0zdKtZk9qOwr+KD8UWWfFJlkNuMJ8EfS3Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=aMv9AIA+SlsYD7jStw5GaXVCfP6GBuKwDkIf84cklY4=;
 b=ZXyKfVT9e/p/CqjatqPuxvaP1OiMR8HV6aesT6VgG/AEperbizj9Cqt7GVcOSapYNdPTZG12sCBrV4Gb4yAIivkxlWpJUIR5aMfT08+bFpJYT2G8ehO7LaF2jZHjJ5b6SuORs9y6erSn35dfjBVRlRlp2CK9tl+BetfRQ92iWsOHSKiK1z+GRvQfL8v9uYkokygDZpYNif6MisM0CtW6+2lsaEFVMYQMGNRf9Y/81iCdDgf3gYkhH92HPUBPjn0vsn0UOYugseQRtGstwFRo5+IxKtXhdhr2WOHtEbhoUR4LBmXXzzxaS53wAPuok4r6kTUHYaw67ZwLyTSKNPiCNA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=aMv9AIA+SlsYD7jStw5GaXVCfP6GBuKwDkIf84cklY4=;
 b=oAu21bq3mnoGFwBcYOWJ51FSQO8IBs0UlbrJFVsziVAEUf3orQCrX1LSOEnSzI//QTQDpwrSqbnZtXBYE8IXaspMW1VBdRk3dlzL717q3WxHyzS7kYym80X+fM/KCu5OBzOULlnc4+P1T+A7CF77w2vdqW9qpqWi/T9gxCHQjym0Pj57IRbuzH27E7fIUx4jybG5klPx+kkxexQwy1xff5StBeMWQY2gAm/lcKNTQ/9NaiBv0xnEd0ZYQTnMOoZtTxkipqf3kyMExoty12NznvTuqPHDmXS7XIEQQka4jSDGJ31VeKPJsYiX5paSagRZFeKWZQcba/vGnLLPqVAerw==
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
        George Dunlap
	<george.dunlap@citrix.com>,
        Dario Faggioli <dfaggioli@suse.com>, Meng Xu
	<mengxu@cis.upenn.edu>,
        Andrew Cooper <andrew.cooper3@citrix.com>,
        Ian
 Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>,
        Julien Grall
	<julien@xen.org>,
        Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>,
        Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [RFC PATCH 00/10] Preemption in hypervisor (ARM only)
Thread-Topic: [RFC PATCH 00/10] Preemption in hypervisor (ARM only)
Thread-Index: AQHXCYx4A6OUUHr1gkqxWv1TEOLkug==
Date: Tue, 23 Feb 2021 02:34:55 +0000
Message-ID: <20210223023428.757694-1-volodymyr_babchuk@epam.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: git-send-email 2.30.1
authentication-results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=epam.com;
x-originating-ip: [176.36.48.175]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 5c4a50d1-743b-4028-37ce-08d8d7a39afb
x-ms-traffictypediagnostic: AM0PR03MB4082:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: 
 <AM0PR03MB4082DCD7F48383493C01BE58E6809@AM0PR03MB4082.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 a7jiGZYwmPZbNATGF7FVtNSC4IVJtdfENjbON89YpUUF7z0QLqcy+CCSJkQDZ8+J2KR10nrMAVxCX5deyZAT28t3g93iEThgOb4SwOL9+OBwLJ2/7mlwKtnrAqBdr9ezclzZNSDYKOy1CnvEPMzppobRoujYiUT+NPpVQOD37ONuYnxGAjxJ1vMe69Fdm/f2tcIYIaHPrr2Ixg3GKMa0cmBBKi3Rc/odzc6VqCmG5+jUAsY5FpmsLsBVy7hIuOWGcA8N65G9K0JDpsmqF1csdKSIxAimLinF9XKJZzz8mYbsInF6lQqSYDd7LJUfZ3tED44d2jo+SZQiUR6Lezl1pRAUib+vW1KoL8LtaZL0oXjWyauKAoPUGSPzv59YLdHK2ilSKmhp8wH0sortK/4htnthJM2/3SGv5I+61UdnlF5qAikH9qVEUR9PjLo3Mocr0oIFpgXPUTxIH9WSSpbAScKNCVI67qNhK1v1vBgo7IzZwL3GHfqE6cznn/H678K5zV6EVbc5JaU40W4S5hoyCwXn/SPozdbHZTX5b/FQISdiIVs+EgbQ1wmP8AAZbbUvnVvuwr5iAMgZIMeUghqzdqw+aTvjEUeiSfcBbO9aExBZ5ODv+Xr0hVkB2FaWfYVBXLmkPyeox2NJOh1yj0LmAw==
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR03MB3508.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(39860400002)(346002)(366004)(136003)(376002)(1076003)(8676002)(186003)(66446008)(76116006)(2906002)(71200400001)(55236004)(4326008)(6486002)(478600001)(36756003)(26005)(66556008)(54906003)(7416002)(64756008)(83380400001)(6916009)(2616005)(66476007)(107886003)(966005)(86362001)(6512007)(316002)(6506007)(66946007)(5660300002)(8936002)(6606295002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 
 =?iso-8859-1?Q?rrn0lVB4DD0EGsZKt0ksNqB1BcwlRB++Pa+mMrgHQogsBi6ymhqE+GeEzu?=
 =?iso-8859-1?Q?IT8nBl9Jk4+XU5td3qD2jO6Rv8POV+0ph9oMbfqXeKrZd2WWhe2raZUNTV?=
 =?iso-8859-1?Q?cKVWZAK4raOkFF4rs9A4/iNfIhrDTqMFSftDwofsNA2WEmaLsvCOf4f3Zf?=
 =?iso-8859-1?Q?XaWHud48Xeu+Aj9Ny/uFk4mfmMRwOKVPnKmJbXJor0uHHWYvUu1BC0aC4d?=
 =?iso-8859-1?Q?o5z5uL86LMdLdeq5aygFIAx02dl1Fs70/jvda+9IAKHjZcVA7ZCLfCl91o?=
 =?iso-8859-1?Q?wXAA8uzludSLFjApbelFsBYuqwZ7xYA4c6AIA4xZW1tk7g1cYptGODSAEq?=
 =?iso-8859-1?Q?sPFmfa+DEg8DdACcTYteTGqGW8PlJG8FlpFuyNwHgLd5icaur1c+q4yBOy?=
 =?iso-8859-1?Q?0TK4nh7XwmV+QlQzlmiKc/b5eLihdmBdhg9RLZGuLWiDV0iDVLQjM4+N1+?=
 =?iso-8859-1?Q?OWf/ir53USeKMditKwH3WmGuvUXsWHW5g6xvevvwvfHOYBnixRuBN5BC21?=
 =?iso-8859-1?Q?0VpIShOrKXUU9XlBCZwMRUAsvECSI9rcRNCJq9TnLEVHGkE4ccHALmt5/c?=
 =?iso-8859-1?Q?q+A86AIoVrbXsHNSGFkadd6s9EKIg3Fb9NZpGAeAJsljFoFLTZTK99YDGx?=
 =?iso-8859-1?Q?PSj4A3M2rH6f43TzBbgEoCooAkU1XPpqph3SMJ2vQFl7IEebgvfH34rKQn?=
 =?iso-8859-1?Q?LNk9vkXWpjaO38xWkcT/Oa+cui7ngUz76c9JjPVVbSuScRDc7ZGGtrTEwt?=
 =?iso-8859-1?Q?ou+X3rwnLPJL5ie81CkTSYl9tpH9rgeLD49kcbV+32yuTHBCvFtsa9Kc9c?=
 =?iso-8859-1?Q?Rb2k4AdUsN5nfpxoJswZhz8o4AmwBT6hfVwk9cz2C6Z0kboqpt+sOCBL/P?=
 =?iso-8859-1?Q?RTZ56oZ+zcUgYt3FHdiHml4JAtUISqqs6SHGBVh1te/y5h8Hhr0jnb3XwI?=
 =?iso-8859-1?Q?bfXRp2p4bMkc1YMnOkuH4fq4ENyWcvO8h00h4UQ7wOqEvQRLXRwIehsbJd?=
 =?iso-8859-1?Q?hamuWedLtPj9nugeubxXnLKKmqZ0BJzyKLslEDDXoHXetcySCby8mhHV5y?=
 =?iso-8859-1?Q?gnSsokBR1rEw9UsIPRqSXXyjKdTlJNZsPXbiIydl5wVkc0PsRhoGWJqW3I?=
 =?iso-8859-1?Q?UcjKTib7CdqGO+oeTrauiKuACG6cEs1dHeuPONSdTCJfrUNjt43DD88Pbj?=
 =?iso-8859-1?Q?3aBl0lyVp5WsO7U8LGQJUBAvnGFXRf+3UmEwje5oiJxG66wth/BNpFAOgo?=
 =?iso-8859-1?Q?mFa54dN7UqV1fjWHjE9DltcXW10shELstoducurfY0/QCE3Vkkkt0pSnXG?=
 =?iso-8859-1?Q?j4t9vXHZ4FKp2YgpEEqhAOE/+5s65CwV+sl4rjRhvinjCfGSIF/Kvt+hK9?=
 =?iso-8859-1?Q?WYwXXLlULr?=
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB3508.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5c4a50d1-743b-4028-37ce-08d8d7a39afb
X-MS-Exchange-CrossTenant-originalarrivaltime: 23 Feb 2021 02:34:55.1991
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: pNTgLZzbchkiEKl6NCAGLwQc+xNaf/WHTXlaR8h+/5+pVoHzVuz9Wf4pyzN234NXU4vEYJrGXf90mjWMXIhsCb5vq0B5tXiLQMG6zjeacho=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR03MB4082
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 priorityscore=1501
 phishscore=0 adultscore=0 impostorscore=0 spamscore=0 bulkscore=0
 suspectscore=0 mlxlogscore=999 clxscore=1011 mlxscore=0 malwarescore=0
 lowpriorityscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2102230018

Hello community,

The subject of this cover letter is quite self-explanatory: this patch
series implements a PoC for preemption in hypervisor mode.

It is a follow-up of sorts to the recent discussion about latency
([1]).

Motivation
==========

It is well known that Xen is not preemptible. In other words, it is
impossible to switch vCPU contexts while running in hypervisor
mode. The only place where a scheduling decision can be made and one
vCPU replaced with another is the exit path from hypervisor mode. The
one exception is idle vCPUs, which never leave hypervisor mode for
obvious reasons.

This leads to a number of problems. The list below is not
comprehensive; it covers only things that I or my colleagues
encountered personally.

Long-running hypercalls. By their nature, some hypercalls can execute
for an arbitrarily long time. Mostly these are calls that deal with
long lists of similar actions, like processing memory pages. To deal
with this issue Xen employs a rather horrific technique called
"hypercall continuation": when the code handling a hypercall decides
that it should be preempted, it updates the hypercall parameters and
moves the guest PC back one instruction. This causes the guest to
re-execute the hypercall with the altered parameters, which allows the
hypervisor to continue the hypercall execution later. This approach
has obvious problems: the code executing the hypercall is responsible
for preemption; preemption checks are infrequent (because they are
costly by themselves); the hypercall execution state is stored in a
guest-controlled area; and we rely on the guest's good will to
continue the hypercall. All this imposes restrictions on which
hypercalls can be preempted, when they can be preempted, and how
hypercall handlers must be written. It also requires very careful
coding and has already led to at least one vulnerability, XSA-318.
Some hypercalls cannot be preempted at all, like the one mentioned
in [1].

Absence of hypervisor threads/vCPUs. The hypervisor owns only idle
vCPUs, which are supposed to run when the system is idle. If the
hypervisor needs to execute its own tasks right now, it has no choice
but to execute them on the current vCPU. But the scheduler does not
know that the hypervisor is executing a hypervisor task, and accounts
the time spent to a domain. This can lead to domain starvation.

Also, the absence of hypervisor threads means there are no high-level
synchronization primitives like mutexes, condition variables,
completions, etc. This causes two problems: we need to use spinlocks
everywhere, and we have trouble porting device drivers from the Linux
kernel.

Proposed solution
=================

It is quite obvious that to fix the problems above we need to allow
preemption in hypervisor mode. I am not familiar with the x86 side,
but on ARM it was surprisingly easy to implement. Basically, a vCPU's
context in hypervisor mode is determined by its stack and general
purpose registers, and the __context_switch() function already
switches them correctly while running in hypervisor mode. So there is
no hard reason why it should be called only on the leave_hypervisor()
path.

The obvious question is: when should we try to preempt a running
vCPU? The answer is: when there was an external event. This means
that we should try to preempt only when an interrupt request arrived
while we were running in hypervisor mode. On ARM, the function
do_trap_irq() is called in this case. The problem is that the IRQ
handler can be invoked while the vCPU is in an atomic state (holding a
spinlock, for example). In that case we should try to preempt right
after leaving the atomic state. This is basically the whole idea
behind this PoC.

Now, about the series composition.
Patches

  sched: core: save IRQ state during locking
  sched: rt: save IRQ state during locking
  sched: credit2: save IRQ state during locking
  preempt: use atomic_t to for preempt_count
  arm: setup: disable preemption during startup
  arm: context_switch: allow to run with IRQs already disabled

prepare the groundwork for the rest of the PoC. It turned out that not
all code is ready to be executed in IRQ state, and schedule() can now
be called at the end of do_trap_irq(), which is technically still
considered IRQ handler state. Also, it is unwise to try preempting
things while we are still booting, so we need to enable atomic context
during the boot process.

Patches
  preempt: add try_preempt() function
  sched: core: remove ASSERT_NOT_IN_ATOMIC and disable preemption[!]
  arm: traps: try to preempt before leaving IRQ handler

are basically the core of this PoC. The try_preempt() function tries
to preempt the vCPU both when called from the IRQ handler and when
leaving an atomic state. The scheduler now enters atomic state itself
to ensure that it will not preempt itself. do_trap_irq() calls
try_preempt() to initiate preemption.

Patch
  [HACK] alloc pages: enable preemption early

is exactly what it says. I wanted to see whether this PoC is capable
of fixing the aforementioned issue with long-running
alloc_heap_pages(), so this is just a hack that disables atomic
context early. As mentioned in the patch description, the right
solution would be to use mutexes.

Results
=======

I used the same testing setup that I described in [1]. The results are
quite promising:

1. Stefano noted that the very first batch of measurements resulted in
higher-than-usual latency:

 *** Booting Zephyr OS build zephyr-v2.4.0-2750-g0f2c858a39fc  ***
RT Eval app

Counter freq is 33280000 Hz. Period is 30 ns
Set alarm in 0 sec (332800 ticks)
Mean: 600 (18000 ns) stddev: 3737 (112110 ns) above thr: 0% [265 (7950 ns) - 66955 (2008650 ns)]
Mean: 388 (11640 ns) stddev: 2059 (61770 ns) above thr: 0% [266 (7980 ns) - 58830 (1764900 ns)]

Note that the maximum latency is about 2 ms.

With these patches applied, things are much better:

 *** Booting Zephyr OS build zephyr-v2.4.0-3614-g0e2689f8edc3  ***
RT Eval app

Counter freq is 33280000 Hz. Period is 30 ns
Set alarm in 0 sec (332800 ticks)
Mean: 335 (10050 ns) stddev: 52 (1560 ns) above thr: 0% [296 (8880 ns) - 1256 (37680 ns)]
Mean: 332 (9960 ns) stddev: 11 (330 ns) above thr: 0% [293 (8790 ns) - 501 (15030 ns)]

As you can see, the maximum latency is ~38 us, which is way lower than 2 ms.

The second test observes the influence of a call to alloc_heap_pages()
with order 18. Without the last patch:

Mean: 756 (22680 ns) stddev: 7328 (219840 ns) above thr: 4% [326 (9780 ns) - 234405 (7032150 ns)]

A huge spike of 7 ms can be observed.

Now, with the HACK patch:

Mean: 488 (14640 ns) stddev: 1656 (49680 ns) above thr: 6% [324 (9720 ns) - 52756 (1582680 ns)]
Mean: 458 (13740 ns) stddev: 227 (6810 ns) above thr: 3% [324 (9720 ns) - 3936 (118080 ns)]
Mean: 333 (9990 ns) stddev: 12 (360 ns) above thr: 0% [320 (9600 ns) - 512 (15360 ns)]

Two things can be observed: the mean latency is lower and the maximum
latencies are lower too, but the overall runtime is higher.

The downside of these patches is that the mean latency is a bit
higher. Here are the results for the current xen master branch:

Mean: 288 (8640 ns) stddev: 20 (600 ns) above thr: 0% [269 (8070 ns) - 766 (22980 ns)]
Mean: 287 (8610 ns) stddev: 20 (600 ns) above thr: 0% [266 (7980 ns) - 793 (23790 ns)]

8.6us versus ~10us with the patches.

Of course, this is a crude approach and certain things could be done
more optimally.

Known issues
============

0. Right now it is ARM only. x86 switches vCPU contexts in a different
way, and I don't know how much change would be needed to make this
work on x86.

1. The RTDS scheduler goes crazy when running on an SMP system
(i.e. with more than one pCPU) and tries to schedule an
already-running vCPU on multiple pCPUs at a time. This leads to some
hard-to-debug crashes.

2. As I mentioned, the mean latency becomes a bit higher.

Conclusion
==========

My main intention is to begin a discussion of hypervisor
preemption. As I showed, it is doable right away and provides some
immediate benefits. I do understand that a proper implementation
requires much more effort. But we are ready to do this work if the
community is interested in it.

Just to reiterate main benefits:

1. More controllable latency. On embedded systems, customers care
about such things.

2. We can get rid of hypercall continuations, which will result in
simpler and more secure code.

3. We can implement proper hypervisor threads, mutexes, completions
and so on. This will make scheduling more accurate, ease porting of
Linux drivers, and allow implementing more complex features in the
hypervisor.



[1] https://marc.info/?l=xen-devel&m=161049529916656&w=2

Volodymyr Babchuk (10):
  sched: core: save IRQ state during locking
  sched: rt: save IRQ state during locking
  sched: credit2: save IRQ state during locking
  preempt: use atomic_t to for preempt_count
  preempt: add try_preempt() function
  arm: setup: disable preemption during startup
  sched: core: remove ASSERT_NOT_IN_ATOMIC and disable preemption[!]
  arm: context_switch: allow to run with IRQs already disabled
  arm: traps: try to preempt before leaving IRQ handler
  [HACK] alloc pages: enable preemption early

 xen/arch/arm/domain.c      | 18 ++++++++++-----
 xen/arch/arm/setup.c       |  4 ++++
 xen/arch/arm/traps.c       |  7 ++++++
 xen/common/memory.c        |  4 ++--
 xen/common/page_alloc.c    | 21 ++---------------
 xen/common/preempt.c       | 36 ++++++++++++++++++++++++++---
 xen/common/sched/core.c    | 46 +++++++++++++++++++++++---------------
 xen/common/sched/credit2.c |  5 +++--
 xen/common/sched/rt.c      | 10 +++++----
 xen/include/xen/preempt.h  | 17 +++++++++-----
 10 files changed, 109 insertions(+), 59 deletions(-)

-- 
2.29.2


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 02:35:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 02:35:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88520.166479 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lENXi-0004Oa-Qv; Tue, 23 Feb 2021 02:35:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88520.166479; Tue, 23 Feb 2021 02:35:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lENXi-0004OL-Kq; Tue, 23 Feb 2021 02:35:18 +0000
Received: by outflank-mailman (input) for mailman id 88520;
 Tue, 23 Feb 2021 02:35:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=p6em=HZ=epam.com=prvs=268883478e=volodymyr_babchuk@srs-us1.protection.inumbo.net>)
 id 1lENXh-00046p-7O
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 02:35:17 +0000
Received: from mx0b-0039f301.pphosted.com (unknown [148.163.137.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d276e0e6-dacc-4b09-8b8b-b647f20e8e58;
 Tue, 23 Feb 2021 02:35:04 +0000 (UTC)
Received: from pps.filterd (m0174680.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 11N2QQxn004083; Tue, 23 Feb 2021 02:35:03 GMT
Received: from eur01-db5-obe.outbound.protection.outlook.com
 (mail-db5eur01lp2052.outbound.protection.outlook.com [104.47.2.52])
 by mx0b-0039f301.pphosted.com with ESMTP id 36vqte83qr-5
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Tue, 23 Feb 2021 02:35:03 +0000
Received: from AM0PR03MB3508.eurprd03.prod.outlook.com (2603:10a6:208:4f::23)
 by AM0PR0302MB3235.eurprd03.prod.outlook.com (2603:10a6:208:a::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3868.32; Tue, 23 Feb
 2021 02:34:58 +0000
Received: from AM0PR03MB3508.eurprd03.prod.outlook.com
 ([fe80::a9a4:6122:8de2:64cb]) by AM0PR03MB3508.eurprd03.prod.outlook.com
 ([fe80::a9a4:6122:8de2:64cb%6]) with mapi id 15.20.3846.042; Tue, 23 Feb 2021
 02:34:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d276e0e6-dacc-4b09-8b8b-b647f20e8e58
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fnsoOYeaZ5ap723wZNcX5IBljYGd+jOoU5tg7D29IJwkoifiWlQ29yEkQiI8det8eYG/walXrTRJaZunRIUKKe0Bd/7fH4yHvsMgJRkL3DVPW4If/ciKu5l60yOozxyg+BzaePt+nhASfCNSynePIhSdxXXvBLDPisGHfiJDx8+suA9ZzYWcFnnfm3zXx+Sp9PDGjHlN+NHGp5XEQgO6kPFwoJm1dQVDmYOCCjNLce5cPG6JgsgzJQpctQtr6pBOEGHJw5erGcc0FG6H2pOg9qq1UPpFjaRZSMbky0z4E+zjtkT4Bw8DZ1rDLvfoeucJr7QBhzqTvjzJR/baKYaxZw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hLZbFCdGG+G9vwfGEiWEMpY27lBaZZM8a/Rntb3OeZ8=;
 b=XqiDlF5qgZCsTMuVSdUW8Hdg2NKXBhCKIyUORsD7cXbl6M67tEHTLL3zeCHiRTppAhFCAZFM8R1VWgAmlo5Q6hMX8CQt9HdGAoucy6LOaa9+z7XShl2z2tmWdLOiQxSjtjvNRM73OxJYmH2Q3DMf5q6ae2hcakNiU+Mud+4mRFAAvCbNWaA7q/CHzIF/ExtnMKATPuE6OLH2kEM8BDXWjF9TY+wZ1zDiuXntkBQ4rOTVEZ2TASIaxObDlVSMkdm/rLqoTnfd82Pl5CjoE/LIkl5LkdYz0xyIxcWOWqL5BzMTxEfVKtwTD/0xVKW2QPjgKBU8ThTfFwYHoVGQmmb5DQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hLZbFCdGG+G9vwfGEiWEMpY27lBaZZM8a/Rntb3OeZ8=;
 b=Do3kY2ktfT4pYTe7de5Zbtc88uic2L7hP81FVFySkF7qxPtAh29/ZPU1KdwcDTJjFKwnztvJgCyqK5oD+3eAXyEOsN5OQBGmQgTYOtsTqF8nPwf/D8ZyBJefnzLDFZ3rU94UjLooLPg544Dc+mFWifGqzo0gaSSX6AZmdImWfIIPbcePrIRmIybL7qKzSCmF0La6Y8+b2k3ABye/a/3uRetkNl0t67+4T3mryyXPzcwh1G3e84qDSTTZ84uGBfv1/RFngz0JvfKe0ewOT/R5ATI8sqwy8A2W0Te9TdxSKneqwaUG0DwMPnfkgVWaLHiuDveKZeXbP14J24hPlgzJoQ==
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
        Stefano Stabellini
	<sstabellini@kernel.org>,
        Julien Grall <julien@xen.org>,
        Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: [RFC PATCH 08/10] arm: context_switch: allow to run with IRQs already
 disabled
Thread-Topic: [RFC PATCH 08/10] arm: context_switch: allow to run with IRQs
 already disabled
Thread-Index: AQHXCYx6+wvu0Xd3nESKdamktRGE6w==
Date: Tue, 23 Feb 2021 02:34:58 +0000
Message-ID: <20210223023428.757694-9-volodymyr_babchuk@epam.com>
References: <20210223023428.757694-1-volodymyr_babchuk@epam.com>
In-Reply-To: <20210223023428.757694-1-volodymyr_babchuk@epam.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: git-send-email 2.30.1
authentication-results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=epam.com;
x-originating-ip: [176.36.48.175]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: b928eb8a-9ac4-4d70-8ac4-08d8d7a39cea
x-ms-traffictypediagnostic: AM0PR0302MB3235:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: 
 <AM0PR0302MB3235B18F52228B33ECCA2B66E6809@AM0PR0302MB3235.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:1265;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 2mKXp8dQBrHgSaT64CcaGyXnuiyAm1iqTb/jAO/x28VCqUJabnDfdicEbaIGm2yAYr+wMt4rQ/mUWFGfpmXUVgl6n+hqDssf7RGZD5fgWGK2CmY2UCT4HkDtJSZfcZq05z2cB/suy9ZVqOAYa2t9viH1mWMUEGBdECs/DlZOWCK2m8CZsVuQBkZRptSclPjeAyOt9cC6c4+FqBKbROSIv9gtwQ6DWlHr0eXYjiSYeVWtKpLghbjchuEuOQcFcYa/5UXwow9Tfd6EMYqo7s7tQlOhD1IY0BM3/Wlut1j4IjH6FNP/Dl91ZVLhC/3IaJUva45zbBY917I5rWbCquAkWMZBZEGHykZfcIkXSih5NWlh/LsQBW5bn1x5PM+qseBj4N+V+9pvWo430m8SxvpqQPAblNEEZNjDlAqerBOIL8IUHCBbcRdrjHEkuWzh8wQcAOiXfWo0TEfaai018P545DAzg66y2ng/FIGtGJvfYSuJBvllXMmtojLAKtTuBhxpxef86+1X0QCKCCgLJV2g7A==
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR03MB3508.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(366004)(136003)(39860400002)(346002)(376002)(316002)(6512007)(8936002)(8676002)(86362001)(2906002)(4326008)(478600001)(6486002)(186003)(107886003)(54906003)(66556008)(55236004)(83380400001)(66446008)(66946007)(66476007)(64756008)(71200400001)(1076003)(2616005)(26005)(76116006)(6916009)(36756003)(5660300002)(6506007);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 
 =?iso-8859-1?Q?XIwzGB8T8dRu9iJXV/cfOciAGeAWAE9JZvdCkymrYp8/zX4ustw16rwWAL?=
 =?iso-8859-1?Q?VHu2XSFShArGlMJuDaAtQImueOJke+l94pzfJGmwLEC3pFde+0oPrv0uZ/?=
 =?iso-8859-1?Q?4F4WKg169Y4vVz/PhxQ0zYRvVj6jerCKQ3GUCaGweU5HbxDQGYxNDiBAZw?=
 =?iso-8859-1?Q?djP2pzlROgcEcIImZur4XKVWCUsc+MK7atMXi7TrgR488XlXyke5FIyb6x?=
 =?iso-8859-1?Q?iUTQSx29yZSiBljFmvUUMrHsxJDq0z6AyiL3Lc41ooAiFOSPoVOZSUo/nk?=
 =?iso-8859-1?Q?05Iug5NXiJLfrM5Fz8hRLS0fh7XYbHNYg1SGTHvWkHNyz1EUSfPuLm3Y9D?=
 =?iso-8859-1?Q?Y4LoJtfhGxl+wJKQS7upuLVvCjzEg8vN1aU97Hjkh5p9w8JuA7bpRjXvOC?=
 =?iso-8859-1?Q?CQy+rr4ln8nsgGTgRDVvaL5hrcEuAq9kVgsglnBDOdZVgQxom5NThpQ3Bz?=
 =?iso-8859-1?Q?ufCKwFIOxxDw2rQQk0/Wn7gVrk1Kw1o7eSKRdKdIp4Hiv9q0pkuPrBHo0i?=
 =?iso-8859-1?Q?bcKIS+O0f8QlNJ/PxurHKXTgaVp07Jez3ZEWOJ+qBdy00NZGcXeLpXMhpu?=
 =?iso-8859-1?Q?H/lbxMQeiMo0mdEefTfNXYwQs8xVFmAZ7dWoqjnV2ZkJLosRKxRG3AIPYI?=
 =?iso-8859-1?Q?dRC+Gwt7J5ylEPJDQWfdYB4FbSgsRupRlNkBlUBKopLVYPS8QePIuj0G8n?=
 =?iso-8859-1?Q?o5mRaAlr+ho78ruuyldDsvGRUxcZNSh/tvhBshC5fm/YEBqQmq7BtzSUvI?=
 =?iso-8859-1?Q?0IGMF0oUwD/x2GXxNH5gdQ5OrgulHjMjUzsFdR1wFFo51000WSGuR4vWQX?=
 =?iso-8859-1?Q?t4nSRG6CzsnxSbaWDXgchzqV5tF5ObiadspAUzpU7eaSsCtfpCyYr83YJo?=
 =?iso-8859-1?Q?yvGRiSzkFQmyFT+erJhCs0CgL+1ASB2txyR9SOGZICoXcC0xfqXPFTb/HQ?=
 =?iso-8859-1?Q?wLt/PrtfXGPmnqxcEtx8YolRS2t8mBaT3lCoV8wP9rcv0dF9pRIjkc/264?=
 =?iso-8859-1?Q?f9AjWSbszdVOPeO60YVlpG/Vu9f1od85fO2H4oFDpVTpnDccVqfyRdH3ig?=
 =?iso-8859-1?Q?nsUmWAYHP3mopCCS1f8Xm4KNv0ovAySU3TmM7QjUYbLsZV1BoYOjRIwCob?=
 =?iso-8859-1?Q?49U8HThF5VdWVORMZzD5A7uPC8Nl4N+fIuFkmESwKg+huZD5P07q5WeMIQ?=
 =?iso-8859-1?Q?ePlJ1FT01eQQAUCvowAJdf+zLQs8wPpk2+nLd8QsO1vYPftQn0UYVvgT1P?=
 =?iso-8859-1?Q?aN7k+WtMRLCVubJvqXgebnks1dXWb6op3hynDdtEMfEl75Fwxpu+XO9G7i?=
 =?iso-8859-1?Q?+aAb9uMarawMXkYNRZqnmFLLx3WG3Hpl5RpKir/EhQOn8yUBWA/0AIG0LC?=
 =?iso-8859-1?Q?zT2CNOVJLC?=
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB3508.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: b928eb8a-9ac4-4d70-8ac4-08d8d7a39cea
X-MS-Exchange-CrossTenant-originalarrivaltime: 23 Feb 2021 02:34:58.1025
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: F6FQv6X463hVt9hhQ8WuBquVF9RFWSVcl0plmS2r0D1U37EcJ7elOmoegGYxCNV11Ah3X16KLsT6ujYxzD+cFgO4AV+FRRMka6PkmkDty2o=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR0302MB3235
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 mlxscore=0 spamscore=0
 lowpriorityscore=0 malwarescore=0 phishscore=0 impostorscore=0
 suspectscore=0 priorityscore=1501 adultscore=0 bulkscore=0 clxscore=1015
 mlxlogscore=999 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2102230018

With upcoming full hypervisor preemption it is possible that the
context switching code will be called with IRQs already disabled. In
this case we don't want to re-enable them, so we need to add logic
that tracks whether IRQs were already disabled.

Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
---
 xen/arch/arm/domain.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 2ccf4449ea..3d4a1df4a4 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -310,7 +310,7 @@ static void update_runstate_area(struct vcpu *v)
     }
 }
 
-static void schedule_tail(struct vcpu *prev)
+static void schedule_tail(struct vcpu *prev, bool enable_irqs)
 {
     ASSERT(prev != current);
 
@@ -318,7 +318,8 @@ static void schedule_tail(struct vcpu *prev)
 
     ctxt_switch_to(current);
 
-    local_irq_enable();
+    if (enable_irqs)
+        local_irq_enable();
 
     sched_context_switched(prev, current);
 
@@ -333,7 +334,7 @@ static void continue_new_vcpu(struct vcpu *prev)
     current->arch.actlr = READ_SYSREG32(ACTLR_EL1);
     processor_vcpu_initialise(current);
 
-    schedule_tail(prev);
+    schedule_tail(prev, true);
 
     /* This matches preempt_disable() in schedule() */
     preempt_enable_no_sched();
@@ -350,19 +351,21 @@ static void continue_new_vcpu(struct vcpu *prev)
 
 void context_switch(struct vcpu *prev, struct vcpu *next)
 {
-    ASSERT(local_irq_is_enabled());
+    bool need_to_disable_irqs = local_irq_is_enabled();
+
     ASSERT(prev != next);
     ASSERT(!vcpu_cpu_dirty(next));
 
     update_runstate_area(prev);
 
-    local_irq_disable();
+    if (need_to_disable_irqs)
+        local_irq_disable();
 
     set_current(next);
 
     prev = __context_switch(prev, next);
 
-    schedule_tail(prev);
+    schedule_tail(prev, need_to_disable_irqs);
 }
 void continue_running(struct vcpu *same)
-- 
2.29.2


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 02:35:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 02:35:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88521.166491 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lENXk-0004S6-E3; Tue, 23 Feb 2021 02:35:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88521.166491; Tue, 23 Feb 2021 02:35:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lENXk-0004Rr-9P; Tue, 23 Feb 2021 02:35:20 +0000
Received: by outflank-mailman (input) for mailman id 88521;
 Tue, 23 Feb 2021 02:35:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=p6em=HZ=epam.com=prvs=268883478e=volodymyr_babchuk@srs-us1.protection.inumbo.net>)
 id 1lENXi-00046u-VJ
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 02:35:18 +0000
Received: from mx0b-0039f301.pphosted.com (unknown [148.163.137.242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6ea283e0-94ff-4ea6-bb64-eeaed80d9b46;
 Tue, 23 Feb 2021 02:35:05 +0000 (UTC)
Received: from pps.filterd (m0174680.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 11N2QQxm004083; Tue, 23 Feb 2021 02:35:02 GMT
Received: from eur01-db5-obe.outbound.protection.outlook.com
 (mail-db5eur01lp2052.outbound.protection.outlook.com [104.47.2.52])
 by mx0b-0039f301.pphosted.com with ESMTP id 36vqte83qr-4
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Tue, 23 Feb 2021 02:35:02 +0000
Received: from AM0PR03MB3508.eurprd03.prod.outlook.com (2603:10a6:208:4f::23)
 by AM0PR0302MB3235.eurprd03.prod.outlook.com (2603:10a6:208:a::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3868.32; Tue, 23 Feb
 2021 02:34:57 +0000
Received: from AM0PR03MB3508.eurprd03.prod.outlook.com
 ([fe80::a9a4:6122:8de2:64cb]) by AM0PR03MB3508.eurprd03.prod.outlook.com
 ([fe80::a9a4:6122:8de2:64cb%6]) with mapi id 15.20.3846.042; Tue, 23 Feb 2021
 02:34:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6ea283e0-94ff-4ea6-bb64-eeaed80d9b46
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=YJ/RULMdgalRbvZotrK5nfc7rXQK1j/F446yQM56m7iTrFl7wQDqOqnUfVP4cRNqd6CU9f6OQ4ueYtxBXEgHYA03VueY2wPKHHQ2Mqlz7Z4rvD01DUgmy0sAtPGKFIs4pPS1ABJSWwWo1YKi96vTuDT8s1eRFELE8IQ0V36Owz9PdFHpFCAvs5Bl4Ra07pvMw54phxWQz94k7oQhY9EMy0TBUE41Pxvjt9GZbR4XGrF6a1QCF7T4CYI1/zm6Ks+rPrelTZSJY3IBFo7H3NsmTP/Kk1qFppfY4HjuAaRjYZEXCL5+i43R3BTN5Dw4lFUvEVsHaUEQ410RC/Pl/lGd9g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=RJ3Oo6oW3QM/2RenEoHxu7guFzqOYawmzOzsPjipV9A=;
 b=CRQMvXLU9l5QR74h3fPdBG7DkRd7VfhVo271EAwyVa/yOlqv3n8uGjPDf1JqKXmYs/PxCbj6D+Khigar0oE80/K2ZuGDg7iz0ur4NdE48tJQU6NM0d+rbusjKQQ/DJlzPp42VwMyTuRN8roeOlXCqJasIzwFuMdvr4i29DaZ5F7dW90R3/rKKWUXZy5PhXGjbFGPor1EDdgyogyuDszaYALhLHX/paM6ZOxOXAJ7O4VyJgxvt4ye9btYGO6wjFuDfZCINKKc6/4zCi9Mt9XQbOfLd92pIjJawHiLP2RZjymZVzs5ntmuoiY1YiIGp2l8bAXdU/iadoLaARxV3Bqzpw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=RJ3Oo6oW3QM/2RenEoHxu7guFzqOYawmzOzsPjipV9A=;
 b=PA/NgIOnELMKF1eicJyP4XIVSkYfMYol+d1JCZBpxdzRl5SPwCuq6Wm/o+hio/FKiZDzGD+0e33m0oQ+jks5R5eEdfv/DPUJ9YuxqYugXehSLxBc/pmpdvnaJ0sh+zU/jG7h0Z+KGTeDpLxEK6xx6wIG8f6Cop+niXl5CDC2nq9ETn6SXceO8S2FdPEv37AeUdibcQmDt8KUbn+372+4RU3JGULaOQE58GU5vdw0iei+ko/FbIkt3LJ7NXIydEMExr89Ep1Q/NVtkg94bXYIAbrBtMCxCSDAaWty250QO1MTGm+HuqOUsh6mGtdKpCklDuYbth/4tsV8PNHY8GxGkg==
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
        Stefano Stabellini
	<sstabellini@kernel.org>,
        Julien Grall <julien@xen.org>,
        Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>,
        George Dunlap <george.dunlap@citrix.com>,
        Dario
 Faggioli <dfaggioli@suse.com>
Subject: [RFC PATCH 07/10] sched: core: remove ASSERT_NOT_IN_ATOMIC and
 disable preemption[!]
Thread-Topic: [RFC PATCH 07/10] sched: core: remove ASSERT_NOT_IN_ATOMIC and
 disable preemption[!]
Thread-Index: AQHXCYx5ZDmFTcXHSEWdjEjbMIYFyA==
Date: Tue, 23 Feb 2021 02:34:57 +0000
Message-ID: <20210223023428.757694-8-volodymyr_babchuk@epam.com>
References: <20210223023428.757694-1-volodymyr_babchuk@epam.com>
In-Reply-To: <20210223023428.757694-1-volodymyr_babchuk@epam.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: git-send-email 2.30.1
authentication-results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=epam.com;
x-originating-ip: [176.36.48.175]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: d6422711-5b25-4811-b388-08d8d7a39c92
x-ms-traffictypediagnostic: AM0PR0302MB3235:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: 
 <AM0PR0302MB3235DB80597FBA5B44215105E6809@AM0PR0302MB3235.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:6430;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 a+1bqIrVGzfSkrkhvCp/EBx4rw4UYEFp2oFrCpBgOfRdi3eA5DxBXJz4pt6InB7WTvDyWIwcTn+vfkK8ZupqvtooNii1+G2GMmpGCSEcZ5qSkzvFPUmATauxtUAZrCV27KRJxK4M2VHxw5ZnQJO0qC5shS6J42V3m5yTy0jCoIUBA8PNPJSXBuTj27ve7bnFjTXQLsXbR4uRg0KptKd/AX3tmpKJU9LY3LVl5XRY2ekgQTklDqSnsw9LVhD4UnkCoNoEWr6h7cs1PX5NecOpwDz0Q+dIxNE2ib/jMmC1A+JNwQwwSdnzTkzVjxXUonkpg6JI+LF+8V3tVyVZQwW+VhuEO2Ar+PCIgq3cwfb2TRMFpUxCzuxSiZnO05lJfLLHzymeQ7KDLU7SD0iG3B9rqPkpMgFRJg7zGWIVtfC5lmI3m6NqDuC8CV5HXbhbIpmksooHkCcC7hK1/LhvaYg3r0gAIls3aA9AsCFQdK9+f5HOBwYX4Xxdesrw1zXq3yujeHwpNHBUX/NxvDAdzORZ5muDIQtoXC6No+yZg0SvfUUIkcOqElNlKLF0wNBSL0Sc
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR03MB3508.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(366004)(136003)(39860400002)(346002)(376002)(316002)(6512007)(8936002)(8676002)(86362001)(2906002)(4326008)(478600001)(6486002)(186003)(54906003)(66556008)(55236004)(83380400001)(66446008)(66946007)(66476007)(64756008)(71200400001)(1076003)(2616005)(26005)(76116006)(6916009)(36756003)(5660300002)(6506007)(21314003);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 
 =?iso-8859-1?Q?LyL77pT6OA7J4vHWTKqlSDgwKjVqk7XTV4KAB10ixS6OyDeB+/zRELBkhD?=
 =?iso-8859-1?Q?ul1zKCzUpfL8ctBQ9VTV+gMT1cELsFmBTYS7haQax1gwJypSUwXLo5s8sf?=
 =?iso-8859-1?Q?djn5nbWUQEeRiQ87lgJ24m8QiMye0y3Zi/EHWnF8dIoEWkeQpC9GnZu2uj?=
 =?iso-8859-1?Q?S/bJnwkJqmk+LGp0Lv7s0XSEYJqsrqfE7NO4SW69oBhk7suwfJ36qEOiVf?=
 =?iso-8859-1?Q?2MD8FMjuxUBA03DcUYgyB3ju3bVVILHvLP1yJK/XApQlpy1emHm0sCCzJy?=
 =?iso-8859-1?Q?z7XHXtBCG2h+Rzvpyt1E+irql4T52gMtPtXhb4Ui1uPLTRfOV2eRFBzd4m?=
 =?iso-8859-1?Q?ILoszlHFAf8LPQvWGSuNnn8uncUFuhuqah27xWHeeh4W8pD+HnPcH18dgC?=
 =?iso-8859-1?Q?gUfLu2slE+G2PCgr6IEmz4NZdz9FLPZKFvhhWLG5samGTDUSK1lL8LQTSr?=
 =?iso-8859-1?Q?yVsbKEL+mTkjQKvxaMiGsSAfbu3uEWmxg5NYZiCSCjbSpDizMcfpRVc2zs?=
 =?iso-8859-1?Q?w0zC3UB3eczDUxb2chAM9KpxmV0/Sk2n4e0DWCaSz+AESKJBAMbrdSTsB8?=
 =?iso-8859-1?Q?+GQFzZQXaJLeM64g5Y1jCvaHi5PqR08TB++gzNJ53m9aQ9bp0EUHKZ6tar?=
 =?iso-8859-1?Q?CRIfbcgmoZsNBrILp7CkUj3ytBD9XNrdpLwPQ3yczv4NWPFtcWz/N8HDYQ?=
 =?iso-8859-1?Q?oLt56N9MSfYknJQUWdh3hg3EOUFnNrkVuA7pZ7wL5CCio56VbhGHXh9ETA?=
 =?iso-8859-1?Q?9zeObfjvi3BP28INpJwveVhd02yjDnixWFDh4TROIWXNBEfjGRtv00NwMG?=
 =?iso-8859-1?Q?Tl9XN2bkOkJesd4KkCRIGG4iYNZ4jtJ3Dae3ckUpDIEHYHcu4J08r/dp7N?=
 =?iso-8859-1?Q?Oj1Rmc0eb+UnoYZSOct8ZmbATE5R+w3Djbqvn+tJ9tWc3Vw9iueGT8HLQ8?=
 =?iso-8859-1?Q?ZvuOrQW6MiqYOVSAeE25bHmm5Xd29q7fgY3VYCTQXxbgZoa+KFl8qd/WtM?=
 =?iso-8859-1?Q?IGwDBNnXiMYSq9/2IkszYv2R3Q6DZtnLkLPHYOVC7RkuL9IDBxqY6nZqdL?=
 =?iso-8859-1?Q?dxXdB15DV2EeryUjPTwkoWdyukS6sfuKa4RHhG0ZmLgqhAyJTEUE1DdMuR?=
 =?iso-8859-1?Q?6ydVNObkHJr7/gbYMkbyROwvbfycNOjxBwPDtLxqt20tAaFWemusAfHGrF?=
 =?iso-8859-1?Q?WqdmRDkLIhK83o7BPfJyhp3mz6MEp1rORLpnZJx2GJxv+HNUyTmcFAnpas?=
 =?iso-8859-1?Q?tZ0nHXaNyZS+8As2AIeq1qFyNqc/PM2HJe78pDK20yjmmWQxl0yNda60N+?=
 =?iso-8859-1?Q?uxJD4+SZnsNFFiPM06KzbsW850sWaKMLvJc6aDvVQg29hZN5BPMPkcuRuX?=
 =?iso-8859-1?Q?cLZ+3qhpy6?=
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB3508.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d6422711-5b25-4811-b388-08d8d7a39c92
X-MS-Exchange-CrossTenant-originalarrivaltime: 23 Feb 2021 02:34:57.6927
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: m9QhMp75F3EOHtlT/s8/uzUSCd6VqiDIiNrIDY6FOKZKMWYdtpA3VhLKl7zjCkGrG8GqJ1hqUC9r8mrbAfT3ztaXBputB9GiqfXZr+XgzdE=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR0302MB3235
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 mlxscore=0 spamscore=0
 lowpriorityscore=0 malwarescore=0 phishscore=0 impostorscore=0
 suspectscore=0 priorityscore=1501 adultscore=0 bulkscore=0 clxscore=1015
 mlxlogscore=999 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2102230018

ASSERT_NOT_IN_ATOMIC() is very strict: among other things, it checks
that local IRQs are enabled. But there is a case where calling the
scheduler with IRQs disabled is fine: when we have finished handling
an IRQ in hypervisor mode, we might want to preempt the current vCPU,
and in that case the scheduler is entered with local IRQs disabled.

On the other hand, we want to ensure that the scheduler code itself is
not preempted, so we need to disable preemption while scheduling.

WARNING! This patch works only for the Arm code, because on Arm the
call to sched_context_switch() returns and we are able to re-enable
preemption afterwards. The x86 case requires further investigation.

Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
---
 xen/arch/arm/domain.c   |  3 +++
 xen/common/sched/core.c | 13 ++++++++++---
 2 files changed, 13 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index bdd3d3e5b5..2ccf4449ea 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -335,6 +335,9 @@ static void continue_new_vcpu(struct vcpu *prev)
=20
     schedule_tail(prev);
=20
+    /* This matches preempt_disable() in schedule() */
+    preempt_enable_no_sched();
+
     if ( is_idle_vcpu(current) )
         reset_stack_and_jump(idle_loop);
     else if ( is_32bit_domain(current->domain) )
diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index 7e075613d5..057b558367 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -2577,8 +2577,6 @@ static void sched_slave(void)
     unsigned int          cpu =3D smp_processor_id();
     unsigned long         flags;
=20
-    ASSERT_NOT_IN_ATOMIC();
-
     rcu_read_lock(&sched_res_rculock);
=20
     lock =3D pcpu_schedule_lock_irqsave(cpu, &flags);
@@ -2643,7 +2641,7 @@ static void schedule(void)
     int cpu =3D smp_processor_id();
     unsigned int          gran;
=20
-    ASSERT_NOT_IN_ATOMIC();
+    preempt_disable();
=20
     SCHED_STAT_CRANK(sched_run);
=20
@@ -2665,6 +2663,9 @@ static void schedule(void)
         rcu_read_unlock(&sched_res_rculock);
=20
         raise_softirq(SCHEDULE_SOFTIRQ);
+
+        preempt_enable_no_sched();
+
         return sched_slave();
     }
=20
@@ -2681,7 +2682,10 @@ static void schedule(void)
         cpumask_raise_softirq(mask, SCHED_SLAVE_SOFTIRQ);
         next =3D sched_wait_rendezvous_in(prev, &lock, &flags, cpu, now);
         if ( !next )
+        {
+            preempt_enable_no_sched();
             return;
+        }
     }
     else
     {
@@ -2695,6 +2699,9 @@ static void schedule(void)
     vnext =3D sched_unit2vcpu_cpu(next, cpu);
     sched_context_switch(vprev, vnext,
                          !is_idle_unit(prev) && is_idle_unit(next), now);
+
+    /* XXX: Move me */
+    preempt_enable_no_sched();
 }
=20
 /* The scheduler timer: force a run through the scheduler */
--=20
2.29.2


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 02:35:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 02:35:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88522.166503 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lENXn-0004YE-T0; Tue, 23 Feb 2021 02:35:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88522.166503; Tue, 23 Feb 2021 02:35:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lENXn-0004Y4-O7; Tue, 23 Feb 2021 02:35:23 +0000
Received: by outflank-mailman (input) for mailman id 88522;
 Tue, 23 Feb 2021 02:35:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=p6em=HZ=epam.com=prvs=268883478e=volodymyr_babchuk@srs-us1.protection.inumbo.net>)
 id 1lENXm-00046p-7Z
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 02:35:22 +0000
Received: from mx0b-0039f301.pphosted.com (unknown [148.163.137.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f8d010ed-a512-4352-a189-d6758d431a56;
 Tue, 23 Feb 2021 02:35:03 +0000 (UTC)
Received: from pps.filterd (m0174680.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 11N2QQxl004083; Tue, 23 Feb 2021 02:35:02 GMT
Received: from eur01-db5-obe.outbound.protection.outlook.com
 (mail-db5eur01lp2052.outbound.protection.outlook.com [104.47.2.52])
 by mx0b-0039f301.pphosted.com with ESMTP id 36vqte83qr-3
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Tue, 23 Feb 2021 02:35:02 +0000
Received: from AM0PR03MB3508.eurprd03.prod.outlook.com (2603:10a6:208:4f::23)
 by AM0PR0302MB3235.eurprd03.prod.outlook.com (2603:10a6:208:a::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3868.32; Tue, 23 Feb
 2021 02:34:57 +0000
Received: from AM0PR03MB3508.eurprd03.prod.outlook.com
 ([fe80::a9a4:6122:8de2:64cb]) by AM0PR03MB3508.eurprd03.prod.outlook.com
 ([fe80::a9a4:6122:8de2:64cb%6]) with mapi id 15.20.3846.042; Tue, 23 Feb 2021
 02:34:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f8d010ed-a512-4352-a189-d6758d431a56
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=LTYi1KQbJDEeKaNrH+9ZtzE8sQo9fUaORjGHgdOBdARb2sqabg1zjHLer1dQcSoG3XRHIKE78LuNikZfsJROfhzRpToPesYTfDXQFX+drh7OXUGViT09G5HFGQa18UhGQAc7FYvsGfj+NB58B9WdTyIuwYMj+XFRVhyrv6ICTz60AmZcfTDHqO/zdez3sCzNP2M+y5ODBO+JlYDIiMrZoIFTBoCPtMV2teHLn82l4a6bzumgFgBkbF0zqEGV7sASrNYQnUPLcu6wV7YwokkYY56x5g4n3NoVfh9vPbzc1i7FvjdiGxrYZRQdkIBOTdlfQr4iqOhKjexy/B5FL23Bfg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ZXx2B0ALYb183P5HVU3xyAVbUZAXMNspxc3XMEA+DN0=;
 b=MWpOqoNz4KU/wu/bP6KfsIS0TEsUSMSClO3p4gBuITAXlXsZw0YgreYE51mefXNtx/vrv+Ex/XywPqTCEJhXF6pe/LuMC6hz/peNpkFw7vIKTzelCCo0czVUbSn1Xzr14NZ4Q7HAqWmqlWYswaSvhnsfZ7lx6/GqLr9wugtdMUpwHLtD8JdmSdA9Hwt0QO3Lb6W0J64nKOpVVBhjzLwQk+9MH+ncBLdGLieDRUiOTflKLHwO+VemP7MQW8skIv+tdV7LH9YYY9tprp89M4P91UMu3d5uLBvya4A6PiRfL4EOM7tmOSQx/g1yC/uxPczmXc8AscFs00O1Q4s43L+zfw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ZXx2B0ALYb183P5HVU3xyAVbUZAXMNspxc3XMEA+DN0=;
 b=Oa1Kttm1BZZLoZhU2dHDs6moFOM8D7ZPERcj6v8svvPn0H2TmmLcMt6gTS9MgttWiKCCd5BOL3EP1EWFjW2TAb9bpG/5M4C95wWDg0HJVqpdUGLCG3RF/tvxDzOYWMQVxigEN+R4cQvOCUp4FdSMzyHQdtj9dF7XWCfNHKB1X8353T6iUA643LlVG8qre6azxFd8stdTEk/WuXO39EefurxWx/IdzQkzhXrxeYP+0od9aO8eNWRcJYktou7rbAmFZzFuTIUpcU6MXt0AP08JVH8F1iqkZ37tDk8cb/rriqqaGeLT0jrl7fs/Ci6O43Ftv92rv3HDiJf7u/EZs5itOg==
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
        Stefano Stabellini
	<sstabellini@kernel.org>,
        Julien Grall <julien@xen.org>,
        Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: [RFC PATCH 06/10] arm: setup: disable preemption during startup
Thread-Topic: [RFC PATCH 06/10] arm: setup: disable preemption during startup
Thread-Index: AQHXCYx5K2kzQwNs80ySE93i7jtdRg==
Date: Tue, 23 Feb 2021 02:34:57 +0000
Message-ID: <20210223023428.757694-7-volodymyr_babchuk@epam.com>
References: <20210223023428.757694-1-volodymyr_babchuk@epam.com>
In-Reply-To: <20210223023428.757694-1-volodymyr_babchuk@epam.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: git-send-email 2.30.1
authentication-results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=epam.com;
x-originating-ip: [176.36.48.175]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: ce111a5f-bbea-45ee-3d0e-08d8d7a39c67
x-ms-traffictypediagnostic: AM0PR0302MB3235:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: 
 <AM0PR0302MB3235A936D4E52685F52AB7C7E6809@AM0PR0302MB3235.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:4714;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 cSdecW6WPoe9LB+WlAGvJQ84X3CaS7/RSWKbEWrTpW7SSRqU9UVo79qjl9b7oJTvmeK5cV8LdlqLN/l0JuDR4TvZugxgSmU57Nzn+cDyoBrpkAsC1py4+k28u5qx5I9S84eqE2RF769ZcCVaPYP+sDqisx2RnUxYLWEfwtCCnvbAASB4FokEd/wvZoKm6WNcIWxUiuOt3fSj4PE5NnfxYnxbpNl7DC6XpIwxsRoQgb38xlpSjJ+NdlVQWYRXdbnewBCXa7yXhTBPW3+d/PfhE/9K1XxXdjCcSk8T9cyPI8/YeypJxj4RzjrrWz4nUstpIYheTzHxr+qyCbw0CGDW8SWHZvfG6dKrf1heJYlTLgMb+lDLfELEDap5Gf35QqNz4ECHFDcfHSmVHI03SsDRECgbC2yULM2r5MnyybHouFbVWBYpl3LhGeQGeglofHg154Sj8mArdNEDxnLxnmMj0fTz7XDXSpVY5m32dTJAgl9dDQrREOQQq5isWOslmgg7+koL8phO5pORJAkzCRiT0w==
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR03MB3508.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(366004)(136003)(39860400002)(346002)(376002)(316002)(6512007)(4744005)(8936002)(8676002)(86362001)(2906002)(4326008)(478600001)(6486002)(186003)(107886003)(54906003)(66556008)(55236004)(66446008)(66946007)(66476007)(64756008)(71200400001)(1076003)(2616005)(26005)(76116006)(6916009)(36756003)(5660300002)(6506007);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 
 =?iso-8859-1?Q?ppOJi/8P8oq+XZHJ+oFQMSc+h/swovuz7JkKwn0FF7Jsll5+ZY7hfsg/0q?=
 =?iso-8859-1?Q?Hf392GJdsE+U2asyKtQpzj6LE5KsqKcfQYwAw57dZx9z+3SNAvgHHwKeBC?=
 =?iso-8859-1?Q?sSHrg9ChWmPDFr8QlmWT7GeKvXnpLr6T8LJABevFh83RTyBcBfEqsLmjKn?=
 =?iso-8859-1?Q?qzax37jMUuM4oUwzdrKUHLNlJafDRSL6CGOk8tjAeA0ZFXyDvztiAoCzNh?=
 =?iso-8859-1?Q?o8yz4yApKhdpvspiqAO1LvGqQUTTPnRjHDoH1YmzcS4dGnBYtfDP5y69G2?=
 =?iso-8859-1?Q?CCxrmpAzdJwihstsvR3e28HrnbGUREjZKbzf8D6ZBnIgRvACYdJecyMP9T?=
 =?iso-8859-1?Q?IBSrSLU20VSnmfpm8/6yIeIWIUX3yPiWepp/ZcRVKdmM/Xy3I+tkk/aCt/?=
 =?iso-8859-1?Q?neEDPmoeSHfWuOwVubykgmSUe/kzQ7Yo1mNUZ5Fz2v5516n5dbGIuKznat?=
 =?iso-8859-1?Q?La5tsLmGZzsZefrh9XTGKt/eBuK7ERpQQ6ATxgOnYu241wkuAJYzrFrSHT?=
 =?iso-8859-1?Q?WQ8LFcK1yii/rO4OI/xve/Z91yN3hndezWjtv7Aeax69wepxYy+v+Ww53T?=
 =?iso-8859-1?Q?dUu92PQzVmiAsIiIK1b+RcX3JKZFe4YQuBFErdyxwmBEmM/YNtlH424Biz?=
 =?iso-8859-1?Q?HPA9mvMuHkt+y6hYn0EFaCwGc5Yw0nK2J/0GWrcXMT+oo4Bm0RwLa/SMtR?=
 =?iso-8859-1?Q?MAZfGQ5opzvJTIsQ0exx2qy5NZqeeW23eYvFBL7ImbprFvSG4b6MYDo8Py?=
 =?iso-8859-1?Q?HFUiI/91fV8C8CxFhlh6t7pkCVsiX3Z2dCWowpe0nizDCzEeJMFcZYM/ez?=
 =?iso-8859-1?Q?xEGedH1O0iPmx0jXIj7ft6M8lMbprpJ3R26vuOxajr3WymRRrx63u9lJv5?=
 =?iso-8859-1?Q?RRDAc9c+QxxzEC4zd9sD1/DlJOxjJLYWXHmJV2HUEQcQ3VBPrj2t12qyn2?=
 =?iso-8859-1?Q?R9tqwWodb0eYLNJYZvgxm5Z1hTi8o6PAvrByi7TPzEo/wQO6My5Is++Ha8?=
 =?iso-8859-1?Q?Az88KfE2Ix820Z/xTn5RKwSJ9UHifArbjykASAxQ6NgDUiD0ySA4ev+Lpg?=
 =?iso-8859-1?Q?LYiOJZXDypoP5XA5YbCYK8asnQqQct+92CuYsWSsIVoOkoKwvi3RZQqzFI?=
 =?iso-8859-1?Q?l/UbCjM20JIv4V+V4xEhcY89nrSqbU/qrBpuBRTbG/wacVXwhihFdaeoYz?=
 =?iso-8859-1?Q?FXj0/SrGkup0/ae/RO55/UrqGAXThDSWR8KhwTeYEkfnEyom533fyjez3Z?=
 =?iso-8859-1?Q?pEAxDQc8VZuOyumyLeZL49hmkmVpkFMXVOv6kVZ/CPi1bnitWJWZTSe692?=
 =?iso-8859-1?Q?FsKiot3njZ/+4U+IITnm+BW/saswZ6ypRlOmU4ARAhrtdEiGTilYsLChEg?=
 =?iso-8859-1?Q?VHjFqNX+9P?=
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB3508.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: ce111a5f-bbea-45ee-3d0e-08d8d7a39c67
X-MS-Exchange-CrossTenant-originalarrivaltime: 23 Feb 2021 02:34:57.4389
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: OkQcUQgbc8b2ynvWj4QyAsST9R3nPGDvp11O5xsLzt51L6BvKTjzPyMVejIzbWkmvppAB00my1Dt+KU3NcY87TcpaHPRUXJ1HTF2BGSwLas=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR0302MB3235
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 mlxscore=0 spamscore=0
 lowpriorityscore=0 malwarescore=0 phishscore=0 impostorscore=0
 suspectscore=0 priorityscore=1501 adultscore=0 bulkscore=0 clxscore=1015
 mlxlogscore=697 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2102230018

The subsequent patches will enable preemption in hypervisor mode. But
we don't want any preemption attempts while the system is not yet
ready to call the scheduler, so we disable preemption during the early
boot stages and enable it only once we have switched to the idle vCPU
stack.

Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
---
 xen/arch/arm/setup.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 2532ec9739..15a618b87c 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -78,6 +78,9 @@ static __used void init_done(void)
     unregister_init_virtual_region();
=20
     free_init_memory();
+
+    preempt_enable();
+
     startup_cpu_idle_loop();
 }
=20
@@ -920,6 +923,7 @@ void __init start_xen(unsigned long boot_phys_offset,
=20
     setup_system_domains();
=20
+    preempt_disable();
     local_irq_enable();
     local_abort_enable();
=20
--=20
2.29.2


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 02:35:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 02:35:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88523.166515 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lENXp-0004c4-Mk; Tue, 23 Feb 2021 02:35:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88523.166515; Tue, 23 Feb 2021 02:35:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lENXp-0004bu-GO; Tue, 23 Feb 2021 02:35:25 +0000
Received: by outflank-mailman (input) for mailman id 88523;
 Tue, 23 Feb 2021 02:35:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=p6em=HZ=epam.com=prvs=268883478e=volodymyr_babchuk@srs-us1.protection.inumbo.net>)
 id 1lENXn-00046u-VU
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 02:35:24 +0000
Received: from mx0b-0039f301.pphosted.com (unknown [148.163.137.242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 62d32df6-b2f3-471e-a2c2-450356ae1247;
 Tue, 23 Feb 2021 02:35:07 +0000 (UTC)
Received: from pps.filterd (m0174680.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 11N2QQxp004083; Tue, 23 Feb 2021 02:35:04 GMT
Received: from eur01-db5-obe.outbound.protection.outlook.com
 (mail-db5eur01lp2052.outbound.protection.outlook.com [104.47.2.52])
 by mx0b-0039f301.pphosted.com with ESMTP id 36vqte83qr-7
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Tue, 23 Feb 2021 02:35:04 +0000
Received: from AM0PR03MB3508.eurprd03.prod.outlook.com (2603:10a6:208:4f::23)
 by AM0PR0302MB3235.eurprd03.prod.outlook.com (2603:10a6:208:a::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3868.32; Tue, 23 Feb
 2021 02:34:59 +0000
Received: from AM0PR03MB3508.eurprd03.prod.outlook.com
 ([fe80::a9a4:6122:8de2:64cb]) by AM0PR03MB3508.eurprd03.prod.outlook.com
 ([fe80::a9a4:6122:8de2:64cb%6]) with mapi id 15.20.3846.042; Tue, 23 Feb 2021
 02:34:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 62d32df6-b2f3-471e-a2c2-450356ae1247
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=BebwjZqHmfzIbjnBu/31UGt8bl9qKaqCFPPRLnE8/VERJllMVuTwKTV4XKObOhpubLOxfb1fi2ytVMhqpAjrEdyjEIpAG4MT8/fb0Aoe+p5NxhycZHm9Ucrg2j6YxnIAnww71q333AhjB1Ky+wyqdNXzSnoEr9QV5v9+hFr6pbjPKbT1BaXSb1qh/uZcfHx4HGKX9HeBDdqyJpLcnRHyF04yYPw3UOr7/y1S3/g1zyhG9V8NWsIdZYNtvpjsoLH+Q8gptxkK3FA4x0Uc7CdI5cjPnEve19hk6ZaNosFTqjevM1lgg4SYH+N3nvbZ/sYXJQoBOsyBFAXypfsgjVIsqw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5zq9Qig5UFOKzlvYCb9mxH6RosiS9ZEQ9G2xfCdQj3A=;
 b=oXlgQq/rGZu34TqCZD2aVCFucXcRx68BoF3MKTdWtQUXSbKkZNJunVk1y9Sz8rSSRiX0wvF1KolC2FJ6tmJ5s9p+Jm5Co+o0szwht6JXkuY5/C0VjSPvQQHcRlztQNvx3lMnltDRsOaDU3jGyuWvGwPACWrVpW8TtvI88pAKdTJ94NWgEDjnNil4NObohnSfGG7WFqfMRXsgh2YAOR8Ej2PN1OOvycBY32hZns+YZnSrD++lZkewZcAD3eil+r3hJfn7Yb4bhX8BzD/hPEgr29kvCQhuu6m5Ny02DXFi4Mg8exkeYjIOSc51m1Ydro6Q4DQcehMVP08S5AY6SplDKw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5zq9Qig5UFOKzlvYCb9mxH6RosiS9ZEQ9G2xfCdQj3A=;
 b=k0PFlVZZHn1Uii+1WoQpsR69DztX+xPyT7GEDmiuZ1ALJl/P21p1Cs7eLm7hru1+hDyblCt/EHOnMXclpmzhgwl7mDXQYjvjv5PEadsnb6n2W04f5iE0O5IVSGYQVqRr6Pwo0KEHqPyLf2UuUFnAxwrg8qo2sbw5yEpKeAyV1/Uy20MOxzQ13Sc0Z0d6Ym7nDTQcYshw16+lyzA87ZK/JySXRyIgeHaNI7gh+UgSL6BFCupHqQIYoyb2PI8QlB0RMVkxr+L63iCEzUiMIpNFPcN2kdOj8MxilTiHrFcw9DShDgy2Q/5OZYbpmR4Qv+xaduHGdUCKDtV4+VgoqmCcmA==
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
        Andrew Cooper
	<andrew.cooper3@citrix.com>,
        George Dunlap <george.dunlap@citrix.com>,
        Ian
 Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>,
        Julien Grall
	<julien@xen.org>,
        Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>
Subject: [RFC PATCH 10/10] [HACK] alloc pages: enable preemption early
Thread-Topic: [RFC PATCH 10/10] [HACK] alloc pages: enable preemption early
Thread-Index: AQHXCYx6WMhDzv7reECPf5h9r1HBJg==
Date: Tue, 23 Feb 2021 02:34:58 +0000
Message-ID: <20210223023428.757694-11-volodymyr_babchuk@epam.com>
References: <20210223023428.757694-1-volodymyr_babchuk@epam.com>
In-Reply-To: <20210223023428.757694-1-volodymyr_babchuk@epam.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: git-send-email 2.30.1
authentication-results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=epam.com;
x-originating-ip: [176.36.48.175]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 24b9de1c-03b7-41d5-6ed2-08d8d7a39d45
x-ms-traffictypediagnostic: AM0PR0302MB3235:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: 
 <AM0PR0302MB3235BEFE2996F1DA5C1131B5E6809@AM0PR0302MB3235.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:568;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 SgiB155XE+2+GFV59/hdiPfQHHXoeFTE3Dp7amd0ORH4nxiFYgMM0HWXqMKMbujRWOpUwSu3Y9f3OApJQicuGA9tMUHBCdC3kIUGYm68rRwSiG/j5UYezcWSPer1l97kaClq8MvndkENHDIfYYA1baWagxtNN47W6hKQxf2a0vJSLqb5RLenpAb6ovj608cCvSb+YdZGtQRvESI1zMcavZmz573FM9nqRwCsO4kJUKFFoyC1AIn8XolKFlFJJ+ko8NjLjw2jvBq7F0r5OpiRCoesS42l5ofM3hr4YAZPhz6KeHf/mtthPjI1+Vu3P1oTWlnlgBmALruxwNgY50Rs1wXAWKOqh/k7MGzzloFoqcsj+BZw30mrQ7mrEHidI+uTfPYfkyF/6OSfB+g8WJGdkrYJnMrmeN3FwPjc/wwbRXpnkQrRub6pzohNB+oyLtXankUckuyB9HitMJpO3UTX+eUVxVT4CfiGGcwwdaQbJVtxB8r49pxHOWrp54ugpv75cEYhZPOCk8Mr3dixeuFlpQ==
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR03MB3508.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(366004)(136003)(39860400002)(346002)(376002)(316002)(6512007)(8936002)(8676002)(86362001)(2906002)(4326008)(478600001)(6486002)(186003)(54906003)(66556008)(55236004)(83380400001)(66446008)(66946007)(66476007)(64756008)(71200400001)(1076003)(2616005)(26005)(76116006)(6916009)(36756003)(5660300002)(6506007);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 
 =?iso-8859-1?Q?yo7jjSK2uVnzGBGQhMbXOJ0Gx5rTzA3Vs6KqGP/LT52jRShjBzbePqMWFo?=
 =?iso-8859-1?Q?OY+nrECjzwE8uXt4WBqFOgXdIh4O3GlpcR7Y6WFhbntYpIlrFWz7DdrDlt?=
 =?iso-8859-1?Q?ZMnZ+AcUdI1oq5B6EHeT+pybyoLocZkA4dDEABGDV12GqqH4SAsM0YnQvf?=
 =?iso-8859-1?Q?tReMxDAfvP6OxaPxjA70kU4w72GKWinKFH9GrVg5or9Tcx5R5YSLQqpBqs?=
 =?iso-8859-1?Q?LWEz5wpJG+IEIu+pXevVKf5XG+6mSg86BdpkTvrwrAH3dv/qLYhnlEnQy0?=
 =?iso-8859-1?Q?HZu477CYP+LTQ5IrJlEjMId0fCG/8akU40fXeWigSkFzqNTzLuQcvGC/+8?=
 =?iso-8859-1?Q?MUTO432+GAmtLDBAcL6qMVDNUppq51UGd46flKzGtBYjmYsV+VZgYjyjVF?=
 =?iso-8859-1?Q?i1yXKNI11/cc45f+1SZ7d9W1kRSkX6vP0caGBzoXGcdIAw17EbcogPXCWk?=
 =?iso-8859-1?Q?sf08pnxp2O2ztIC9rJwmommU5sdEN9ctORTDyrvI3mgc6pOQNSxHVycJtP?=
 =?iso-8859-1?Q?/pMG6/x31/aFSUZ0qclbu26HeDE+x1pX+WKVSmm2Ij3ySv5Nd2Eu6EP++v?=
 =?iso-8859-1?Q?GL7Bcef/Msyv/Gu+haPJQcB99YYzgcfGgvNKIX55N1O2Q9/m4IzAafk2B0?=
 =?iso-8859-1?Q?tr9WQkw1SUTl91VsktzWo/zzixd2HUl63GbiidFecWrmif6RmLfxnEiOQX?=
 =?iso-8859-1?Q?wbsJB2FKVnNcUGGJA1NsmF4vyH6H4KgtoEZeSrdp1ys7U3Xi7ou3gGoxQh?=
 =?iso-8859-1?Q?mtm2jYfl4/xYDz9rGN03KNX4tPNzDC89GKn8FpvkFmsr+xaOahJBfyIbsE?=
 =?iso-8859-1?Q?6kPpquP6mJarSvG0SQSOFm5cIVnTrSpaz3/c8NNX9FRvfYWgqXovYZsoJ3?=
 =?iso-8859-1?Q?4qfVkuqf/k3UH1VIVyWxZWpIslKM5VfhloCXEMHz7LDqaIRc0sr0/C0AIk?=
 =?iso-8859-1?Q?9Z1MNVVZltHjUWIxqKkNcuSoKFs1NMt+eXzotw5PrSf4s4GuCccK9qslOU?=
 =?iso-8859-1?Q?3y0eGHhObWfuXnOvkiIaO0EVCEZg2KHSXpLdUWTKqKSB3hlULSK8nBxRAZ?=
 =?iso-8859-1?Q?EJftv/6jhp2yNQNVGl/kQKkFYvwju108ft2A+ELIWeLSUP13XfaNMwXwzo?=
 =?iso-8859-1?Q?E1mrzFqAWu4gOw8DLcrbD5JImW4kWEgVMYiTBIwTqiOToE+4BeZEJLYesO?=
 =?iso-8859-1?Q?VbzIUIZ2tG9FIyDoEWS8pLG2H2HX/peBj3XDp6xMsXputCJMY9NEQO2C4r?=
 =?iso-8859-1?Q?UraDRK8fWd6o233Uo9k2SC5Fns/mi1W3W4wkoAJvKWO9LX/yLAgTK03USC?=
 =?iso-8859-1?Q?Wlyo1fDE0uuNcS/89tutCWpgRUAVF+wWqwbb6E1xgkHpdSS6Vho3E8ik3Y?=
 =?iso-8859-1?Q?su2oRLBZgW0eJEoJZxytFOIbK6LGZXZQ=3D=3D?=
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB3508.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 24b9de1c-03b7-41d5-6ed2-08d8d7a39d45
X-MS-Exchange-CrossTenant-originalarrivaltime: 23 Feb 2021 02:34:58.8171
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: lyxjbXsVOQ6j4vN5L4qb6JwXqqAtjkbTr3+tHbqzwLN+xLNh2hv1CMKie3ntlzuAkW0+2JQLiVlCbl/MBvY4N5gHDjaRlo8PdsCUOvhnkL0=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR0302MB3235
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 mlxscore=0 spamscore=0
 lowpriorityscore=0 malwarescore=0 phishscore=0 impostorscore=0
 suspectscore=0 priorityscore=1501 adultscore=0 bulkscore=0 clxscore=1015
 mlxlogscore=822 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2102230018

This patch moves spin_unlock() and rcu_unlock_domain() earlier in the
code to decrease the time spent with preemption disabled. The proper
fix would be to replace the spinlocks with mutexes, but mutexes are
not implemented yet.

With this patch applied, allocating a huge number of pages (e.g. 1 GB
of RAM) no longer leads to latency problems in time-critical domains.

Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
---
 xen/common/memory.c     |  4 ++--
 xen/common/page_alloc.c | 21 ++-------------------
 2 files changed, 4 insertions(+), 21 deletions(-)

diff --git a/xen/common/memory.c b/xen/common/memory.c
index 76b9f58478..73c175f64e 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -1390,6 +1390,8 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
             pv_shim_online_memory(args.nr_extents, args.extent_order);
 #endif
 
+        rcu_unlock_domain(d);
+
         switch ( op )
         {
         case XENMEM_increase_reservation:
@@ -1403,8 +1405,6 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
             break;
         }
 
-        rcu_unlock_domain(d);
-
         rc = args.nr_done;
 
         if ( args.preempted )
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 1744e6faa5..43c2f5d6e0 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -996,6 +996,8 @@ static struct page_info *alloc_heap_pages(
     if ( d != NULL )
         d->last_alloc_node = node;
 
+    spin_unlock(&heap_lock);
+
     for ( i = 0; i < (1 << order); i++ )
     {
         /* Reference count must continuously be zero for free pages. */
@@ -1025,8 +1027,6 @@ static struct page_info *alloc_heap_pages(
 
     }
 
-    spin_unlock(&heap_lock);
-
     if ( first_dirty != INVALID_DIRTY_IDX ||
          (scrub_debug && !(memflags & MEMF_no_scrub)) )
     {
@@ -2274,23 +2274,6 @@ int assign_pages(
         goto out;
     }
 
-#ifndef NDEBUG
-    {
-        unsigned int extra_pages = 0;
-
-        for ( i = 0; i < (1ul << order); i++ )
-        {
-            ASSERT(!(pg[i].count_info & ~PGC_extra));
-            if ( pg[i].count_info & PGC_extra )
-                extra_pages++;
-        }
-
-        ASSERT(!extra_pages ||
-               ((memflags & MEMF_no_refcount) &&
-                extra_pages == 1u << order));
-    }
-#endif
-
     if ( pg[0].count_info & PGC_extra )
     {
         d->extra_pages += 1u << order;
-- 
2.29.2
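The locking change above follows a general pattern: update the shared allocator state under the lock, then do the O(2^order) per-page work with the lock already dropped. Below is a minimal userspace sketch of that pattern (not Xen code; a pthread mutex stands in for heap_lock, and all names and sizes are illustrative):

```c
#include <assert.h>
#include <pthread.h>
#include <string.h>

#define NPAGES 64

static pthread_mutex_t heap_lock = PTHREAD_MUTEX_INITIALIZER;
static int free_top = NPAGES;         /* shared allocator state */
static char pages[NPAGES][16];        /* fake page contents */

/*
 * Allocate 'count' pages. Only the shared-state update happens under
 * the lock; the per-page initialisation loop runs with the lock
 * already released, so other callers are not held up by it.
 */
static int alloc_pages(int count)
{
    pthread_mutex_lock(&heap_lock);
    if (free_top < count) {
        pthread_mutex_unlock(&heap_lock);
        return -1;                    /* out of pages */
    }
    free_top -= count;
    int first = free_top;
    pthread_mutex_unlock(&heap_lock); /* drop the lock early */

    for (int i = 0; i < count; i++)   /* O(count) work, lock-free */
        memset(pages[first + i], 0, sizeof pages[first + i]);
    return first;
}
```

This is safe because the loop only touches pages that were removed from the free pool while the lock was held, so no other caller can reach them; that is the same property alloc_heap_pages() relies on when it unlocks before its per-page loop.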


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 02:35:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 02:35:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88524.166527 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lENXt-0004il-5z; Tue, 23 Feb 2021 02:35:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88524.166527; Tue, 23 Feb 2021 02:35:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lENXs-0004iV-VK; Tue, 23 Feb 2021 02:35:28 +0000
Received: by outflank-mailman (input) for mailman id 88524;
 Tue, 23 Feb 2021 02:35:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=p6em=HZ=epam.com=prvs=268883478e=volodymyr_babchuk@srs-us1.protection.inumbo.net>)
 id 1lENXr-00046p-7i
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 02:35:27 +0000
Received: from mx0b-0039f301.pphosted.com (unknown [148.163.137.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f95cf823-5a6f-4cae-aced-a78d80284f96;
 Tue, 23 Feb 2021 02:35:05 +0000 (UTC)
Received: from pps.filterd (m0174680.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 11N2QQxo004083; Tue, 23 Feb 2021 02:35:03 GMT
Received: from eur01-db5-obe.outbound.protection.outlook.com
 (mail-db5eur01lp2052.outbound.protection.outlook.com [104.47.2.52])
 by mx0b-0039f301.pphosted.com with ESMTP id 36vqte83qr-6
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Tue, 23 Feb 2021 02:35:03 +0000
Received: from AM0PR03MB3508.eurprd03.prod.outlook.com (2603:10a6:208:4f::23)
 by AM0PR0302MB3235.eurprd03.prod.outlook.com (2603:10a6:208:a::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3868.32; Tue, 23 Feb
 2021 02:34:58 +0000
Received: from AM0PR03MB3508.eurprd03.prod.outlook.com
 ([fe80::a9a4:6122:8de2:64cb]) by AM0PR03MB3508.eurprd03.prod.outlook.com
 ([fe80::a9a4:6122:8de2:64cb%6]) with mapi id 15.20.3846.042; Tue, 23 Feb 2021
 02:34:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f95cf823-5a6f-4cae-aced-a78d80284f96
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Box0ZW6cJts97vL+MA7FHGhTExkrOv+H0xYvOoLWoSAex+3v4NDWEvoaOGDmzmYtNlOmg/NUnpdBiDqKPBYeOUWcpQ/I0c0VlqMmtg3jg2MPTaIYi/26drW6h3/7UEcP+Vyvapdji3kNNbUty4rrcVgsm36UZ8gKvy4Lx2ZFLyj/gr7+4TAANZ5szhH6DKG1e3ciDXUN6zmh8paJrWBFSjM26CChOarY2WeE4ehbr21Uh0RJMjf8dIbjDGq6kOtc3+Iv288cRGxAyGsVprLYDGqk4sTTJ8O70yKAnXrNdXiHjEJIZbJCK0c1o0QRW/Zgo9LcZDA97nUn6rCtHcpH5w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=77HrcIok9Ev7108K/qUYq6X0RM0POs+crZXjM8Ysf2c=;
 b=JXX58xLnl8fV4rPTTgIAgoZIAfRLCD6wBHiETxq5Ox6dRdwRpFg2xSEGSTKE01Ot9kjtMlCxBqiI0sy8r54/RCLEIO5rr+iYfbU0gRJwVdU+rKAQ8OZwrknDOXRmR8WKv3VyOntez6MQiGdGzZC+mTKYnyFQngeEl5nJHLf3lNc3zMXa+cyjDeR1EgalQvKLfk9cqsIQak4ZV2sIRLgvjn9rPna1J9dXyza2n3b0v3ElMoJkv74O+7+0nbsl/qxioORoR3UknCVUsftCjwv4sjYvZvPJdKb/tXSECe/9thod3op/Xhkh1uxZP0AkCVvLVasqHu4AN3qiNmbq2BEG2w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=77HrcIok9Ev7108K/qUYq6X0RM0POs+crZXjM8Ysf2c=;
 b=IF0hKvbI5CNPPsygUhpYpltnYRwGQbrvAa+erWdPtZvlZ6t1GkdVVBW8eO1T++ZtSWT3a4SwwYPWsyPamc71Da7iscM4C4+FZU/ofs6hHy1IiQU3TlYxHK6r97GL9Aoq64YQ+cEXtId9Y9/O/FtbnV6LpQfdsxMLPkDwQu2pf1g7Oorzm7y+n1uo+puZ+YLmFkCqoEJXw1N55t/F4mn6AS+e2xHu9L+XKBuLvO/M/OdJXM0zM5vf4vHCtxUIXIDxT1+/BGqwCvfnDO8LrcSdkWAQ7Uo8K3pzYUzXILciz6sFWhVWNkNjxGW71Z6kYUb4f3qMnZipnn2YY71lHf5RaA==
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
        Stefano Stabellini
	<sstabellini@kernel.org>,
        Julien Grall <julien@xen.org>,
        Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: [RFC PATCH 09/10] arm: traps: try to preempt before leaving IRQ
 handler
Thread-Topic: [RFC PATCH 09/10] arm: traps: try to preempt before leaving IRQ
 handler
Thread-Index: AQHXCYx6aQhBTjfpeUOOmxfCufx61A==
Date: Tue, 23 Feb 2021 02:34:58 +0000
Message-ID: <20210223023428.757694-10-volodymyr_babchuk@epam.com>
References: <20210223023428.757694-1-volodymyr_babchuk@epam.com>
In-Reply-To: <20210223023428.757694-1-volodymyr_babchuk@epam.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: git-send-email 2.30.1
authentication-results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=epam.com;
x-originating-ip: [176.36.48.175]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 5acda12d-2f42-4779-aa3c-08d8d7a39d19
x-ms-traffictypediagnostic: AM0PR0302MB3235:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: 
 <AM0PR0302MB3235D1A9677897B3C93C0900E6809@AM0PR0302MB3235.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:6108;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB3508.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5acda12d-2f42-4779-aa3c-08d8d7a39d19
X-MS-Exchange-CrossTenant-originalarrivaltime: 23 Feb 2021 02:34:58.4833
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: hjdn6W8pYOxYaazx2wU5MY23yUziY37gihP4izXveGMMtMPo2hMl1+haK9f7i/gIValS3T1h4oayqGOsUMHiYRk36P8hdD4+AOw3Jtkh9Eg=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR0302MB3235
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 mlxscore=0 spamscore=0
 lowpriorityscore=0 malwarescore=0 phishscore=0 impostorscore=0
 suspectscore=0 priorityscore=1501 adultscore=0 bulkscore=0 clxscore=1015
 mlxlogscore=730 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2102230018

Now we can try to preempt code that is running in the hypervisor. To
do so, we invoke the scheduler by calling try_preempt() when we are
not in a nested IRQ handler.

Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
---
 xen/arch/arm/traps.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 6fa135050b..98a4fb4904 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2260,7 +2260,14 @@ void do_trap_guest_serror(struct cpu_user_regs *regs)
 
 void do_trap_irq(struct cpu_user_regs *regs)
 {
+    static DEFINE_PER_CPU(unsigned int, irq_entry_cnt);
+
+    this_cpu(irq_entry_cnt)++;
     gic_interrupt(regs, 0);
+    this_cpu(irq_entry_cnt)--;
+
+    if (this_cpu(irq_entry_cnt) == 0)
+        try_preempt(true);
 }
 
 void do_trap_fiq(struct cpu_user_regs *regs)
-- 
2.29.2
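The per-CPU nesting counter above is a standard way to make sure only the outermost IRQ invocation attempts to reschedule. The same logic can be sketched as a single-CPU toy model in plain C (no Xen headers; try_preempt() here is a hypothetical stub that just counts invocations):

```c
#include <assert.h>

static unsigned int irq_entry_cnt;    /* per-CPU in the real patch */
static unsigned int preempt_attempts;

static void try_preempt(void)
{
    preempt_attempts++;               /* stand-in for invoking the scheduler */
}

/*
 * Re-entrant IRQ handler: the counter tracks nesting depth, and only
 * the outermost invocation (counter back at zero) tries to preempt.
 */
static void do_irq(int deliver_nested)
{
    irq_entry_cnt++;
    if (deliver_nested)
        do_irq(0);                    /* simulate a nested interrupt */
    irq_entry_cnt--;

    if (irq_entry_cnt == 0)
        try_preempt();
}
```

A nested invocation decrements the counter back to a non-zero value and returns without preempting; only when the outermost handler finishes does the counter reach zero and the scheduler get a chance to run.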


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 03:07:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 03:07:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88554.166551 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEO3A-0008G7-2I; Tue, 23 Feb 2021 03:07:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88554.166551; Tue, 23 Feb 2021 03:07:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEO39-0008Fv-Sd; Tue, 23 Feb 2021 03:07:47 +0000
Received: by outflank-mailman (input) for mailman id 88554;
 Tue, 23 Feb 2021 03:07:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2+Io=HZ=ozlabs.org=dgibson@srs-us1.protection.inumbo.net>)
 id 1lEO38-0008FS-WC
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 03:07:47 +0000
Received: from ozlabs.org (unknown [203.11.71.1])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c0ba089a-7c14-43ae-bf12-77ddea2a4cb9;
 Tue, 23 Feb 2021 03:07:43 +0000 (UTC)
Received: by ozlabs.org (Postfix, from userid 1007)
 id 4Dl3rT035vz9sVS; Tue, 23 Feb 2021 14:07:36 +1100 (AEDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c0ba089a-7c14-43ae-bf12-77ddea2a4cb9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
	d=gibson.dropbear.id.au; s=201602; t=1614049657;
	bh=1gsrnS8Yqt+OMMNcD8PjrztykA2+tSkcf0+GKbiU8J4=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=OaRH20fHuFVRaWzIJ0EKlAlwQzA+spug1Y4CvlwH/1BYnvoBmPyrKN9ZCnfZ6dGH4
	 uERupAmFnK6R3GPbAYmEtGTzB5U/4LqeJb4KS+9xzi+6s28WmcwSneWRiBHGBy0pkl
	 JZGPIZzI8QcPFt7g813NNMI+x6yC4rJk2ZS6ZCJM=
Date: Tue, 23 Feb 2021 10:33:55 +1100
From: David Gibson <david@gibson.dropbear.id.au>
To: Cornelia Huck <cohuck@redhat.com>
Cc: Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@redhat.com>,
	qemu-devel@nongnu.org, Aurelien Jarno <aurelien@aurel32.net>,
	Peter Maydell <peter.maydell@linaro.org>,
	Anthony Perard <anthony.perard@citrix.com>, qemu-ppc@nongnu.org,
	qemu-s390x@nongnu.org, Halil Pasic <pasic@linux.ibm.com>,
	Huacai Chen <chenhuacai@kernel.org>, xen-devel@lists.xenproject.org,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, qemu-arm@nongnu.org,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paolo Bonzini <pbonzini@redhat.com>, kvm@vger.kernel.org,
	BALATON Zoltan <balaton@eik.bme.hu>,
	Leif Lindholm <leif@nuviainc.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Radoslaw Biernacki <rad@semihalf.com>,
	Alistair Francis <alistair@alistair23.me>,
	Paul Durrant <paul@xen.org>, Eduardo Habkost <ehabkost@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Thomas Huth <thuth@redhat.com>,
	Jiaxun Yang <jiaxun.yang@flygoat.com>,
	=?iso-8859-1?Q?Herv=E9?= Poussineau <hpoussin@reactos.org>,
	Greg Kurz <groug@kaod.org>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	"Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
	David Hildenbrand <david@redhat.com>,
	Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
	Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <f4bug@amsat.org>
Subject: Re: [PATCH v2 01/11] accel/kvm: Check MachineClass kvm_type() return
 value
Message-ID: <YDQ/Y1KozPSyNGjo@yekko.fritz.box>
References: <20210219173847.2054123-1-philmd@redhat.com>
 <20210219173847.2054123-2-philmd@redhat.com>
 <20210222182405.3e6e9a6f.cohuck@redhat.com>
 <bc37276d-74cc-22f0-fcc0-4ee5e62cf1df@redhat.com>
 <20210222185044.23fccecc.cohuck@redhat.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="10TahfdR2kVaNh/t"
Content-Disposition: inline
In-Reply-To: <20210222185044.23fccecc.cohuck@redhat.com>


--10TahfdR2kVaNh/t
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Mon, Feb 22, 2021 at 06:50:44PM +0100, Cornelia Huck wrote:
> On Mon, 22 Feb 2021 18:41:07 +0100
> Philippe Mathieu-Daudé <philmd@redhat.com> wrote:
> 
> > On 2/22/21 6:24 PM, Cornelia Huck wrote:
> > > On Fri, 19 Feb 2021 18:38:37 +0100
> > > Philippe Mathieu-Daudé <philmd@redhat.com> wrote:
> > > 
> > >> MachineClass::kvm_type() can return -1 on failure.
> > >> Document it, and add a check in kvm_init().
> > >>
> > >> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> > >> ---
> > >>  include/hw/boards.h | 3 ++-
> > >>  accel/kvm/kvm-all.c | 6 ++++++
> > >>  2 files changed, 8 insertions(+), 1 deletion(-)
> > >>
> > >> diff --git a/include/hw/boards.h b/include/hw/boards.h
> > >> index a46dfe5d1a6..68d3d10f6b0 100644
> > >> --- a/include/hw/boards.h
> > >> +++ b/include/hw/boards.h
> > >> @@ -127,7 +127,8 @@ typedef struct {
> > >>   *    implement and a stub device is required.
> > >>   * @kvm_type:
> > >>   *    Return the type of KVM corresponding to the kvm-type string option or
> > >> - *    computed based on other criteria such as the host kernel capabilities.
> > >> + *    computed based on other criteria such as the host kernel capabilities
> > >> + *    (which can't be negative), or -1 on error.
> > >>   * @numa_mem_supported:
> > >>   *    true if '--numa node.mem' option is supported and false otherwise
> > >>   * @smp_parse:
> > >> diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
> > >> index 84c943fcdb2..b069938d881 100644
> > >> --- a/accel/kvm/kvm-all.c
> > >> +++ b/accel/kvm/kvm-all.c
> > >> @@ -2057,6 +2057,12 @@ static int kvm_init(MachineState *ms)
> > >>                                                              "kvm-type",
> > >>                                                              &error_abort);
> > >>          type = mc->kvm_type(ms, kvm_type);
> > >> +        if (type < 0) {
> > >> +            ret = -EINVAL;
> > >> +            fprintf(stderr, "Failed to detect kvm-type for machine '%s'\n",
> > >> +                    mc->name);
> > >> +            goto err;
> > >> +        }
> > >>      }
> > >> 
> > >>      do {
> > >
> > > No objection to this patch; but I'm wondering why some non-pseries
> > > machines implement the kvm_type callback, when I see the kvm-type
> > > property only for pseries? Am I holding my git grep wrong?
> > 
> > Can it be what David commented here?
> > https://www.mail-archive.com/qemu-devel@nongnu.org/msg784508.html
> > 
> 
> Ok, I might be confused about the other ppc machines; but I'm wondering
> about the kvm_type callback for mips and arm/virt. Maybe I'm just
> confused by the whole mechanism?

For ppc at least, not sure about in general, pseries is the only
machine type that can possibly work under more than one KVM flavour
(HV or PR).  So, it's the only one where it's actually useful to be
able to configure this.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

--10TahfdR2kVaNh/t
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEdfRlhq5hpmzETofcbDjKyiDZs5IFAmA0P2MACgkQbDjKyiDZ
s5IPEw/9Gpfn9x9Uti+grFwBeMIAyCQKsB6yi7YPWIht5HgE4QAKDTgqk/WE1TLr
ZvHKBDOP1v+s0qWO2itu5zE4Sqk1sHgjWurqWMGF1vDq1VT6OKrJ61CSVRhUBS4p
a61Psz/2lnHubEnHTLyo6I8i5PKgvnSxicB5MeoUNOGR+TPZKCF5gVvFLJfrCxF3
tRIhxfFHfP97ykZyO1koej1Dyqt18TW0aDisTC+ID9MoE0hejalyAeBCfX9ooFMM
c7TCcGKwau5nZVm6/Oph8DJftpP4/H2GhlEHYm/BFCHBPTjGiTN26bmxljFojud9
JUNhdDvqeNBmuc1aKrKKblr947BMu5tW3SDfgffU8jrdEK9YBcYuTnM8u2EexX65
SeOmEk8AnZwvBs/fvTIlaPEjLPucNI7YuFATay5Wr0BV8QzPVE8A61bqSPFNE+zG
1hQmfuoXGsUJ4xGazBpJV1P6GAY4TQ8hTfvqDa41+QGkFQjhlpAZ7WFuTAXPLrBL
/7Pnvvlt218Tw02YBkqslu0zFDBn3WVFvphIhIJf5rcvoUeVsZ+ArB593v8YVNEg
cKi9iVZnZH+G8ni6IGkujisqBKea0cvdKmCDu+Poo/ilEGCGKmKxhwgKZaNN7TQe
o1VTLUyFi22ySs0dEdVD3bDFPozkNpYY53nyZNafAVFfsqBmz+M=
=IT+B
-----END PGP SIGNATURE-----

--10TahfdR2kVaNh/t--


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 03:07:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 03:07:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88553.166545 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEO39-0008Fe-O8; Tue, 23 Feb 2021 03:07:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88553.166545; Tue, 23 Feb 2021 03:07:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEO39-0008FX-Km; Tue, 23 Feb 2021 03:07:47 +0000
Received: by outflank-mailman (input) for mailman id 88553;
 Tue, 23 Feb 2021 03:07:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2+Io=HZ=ozlabs.org=dgibson@srs-us1.protection.inumbo.net>)
 id 1lEO37-0008FN-OK
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 03:07:46 +0000
Received: from ozlabs.org (unknown [203.11.71.1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b3df86e6-ed50-437b-9eca-e4f2df1545a8;
 Tue, 23 Feb 2021 03:07:43 +0000 (UTC)
Received: by ozlabs.org (Postfix, from userid 1007)
 id 4Dl3rS5NJnz9sTD; Tue, 23 Feb 2021 14:07:36 +1100 (AEDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b3df86e6-ed50-437b-9eca-e4f2df1545a8
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
	d=gibson.dropbear.id.au; s=201602; t=1614049656;
	bh=lB2qhVhVvCNwKnLEP9cyVRt4kgQq0RQXnIPlpk7dq2g=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=KFOWhcID0dtdiNAomEpDaPts1jSJ82Ytxndpt+d2goQHThz6ePo193JIsAGJiwYUo
	 P86LbDcu1xNkh2QLXqPt/+rGLma2RlyYfZGg3c2dZ4Rw4FNWTs8HtM/WMOo8D/0hW2
	 WkZKKs7NJTNCZUbJTm/cdhZcKb3yIEEd5MCTIJEI=
Date: Tue, 23 Feb 2021 10:37:01 +1100
From: David Gibson <david@gibson.dropbear.id.au>
To: Cornelia Huck <cohuck@redhat.com>
Cc: Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@redhat.com>,
	qemu-devel@nongnu.org, Aurelien Jarno <aurelien@aurel32.net>,
	Peter Maydell <peter.maydell@linaro.org>,
	Anthony Perard <anthony.perard@citrix.com>, qemu-ppc@nongnu.org,
	qemu-s390x@nongnu.org, Halil Pasic <pasic@linux.ibm.com>,
	Huacai Chen <chenhuacai@kernel.org>, xen-devel@lists.xenproject.org,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, qemu-arm@nongnu.org,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paolo Bonzini <pbonzini@redhat.com>, kvm@vger.kernel.org,
	BALATON Zoltan <balaton@eik.bme.hu>,
	Leif Lindholm <leif@nuviainc.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Radoslaw Biernacki <rad@semihalf.com>,
	Alistair Francis <alistair@alistair23.me>,
	Paul Durrant <paul@xen.org>, Eduardo Habkost <ehabkost@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Thomas Huth <thuth@redhat.com>,
	Jiaxun Yang <jiaxun.yang@flygoat.com>,
	=?iso-8859-1?Q?Herv=E9?= Poussineau <hpoussin@reactos.org>,
	Greg Kurz <groug@kaod.org>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	"Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
	David Hildenbrand <david@redhat.com>,
	Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
	Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <f4bug@amsat.org>
Subject: Re: [PATCH v2 01/11] accel/kvm: Check MachineClass kvm_type() return
 value
Message-ID: <YDRAHW1ds1eh0Lav@yekko.fritz.box>
References: <20210219173847.2054123-1-philmd@redhat.com>
 <20210219173847.2054123-2-philmd@redhat.com>
 <20210222182405.3e6e9a6f.cohuck@redhat.com>
 <bc37276d-74cc-22f0-fcc0-4ee5e62cf1df@redhat.com>
 <20210222185044.23fccecc.cohuck@redhat.com>
 <YDQ/Y1KozPSyNGjo@yekko.fritz.box>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="iHciYgIJJ+RNzZ6R"
Content-Disposition: inline
In-Reply-To: <YDQ/Y1KozPSyNGjo@yekko.fritz.box>


--iHciYgIJJ+RNzZ6R
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Tue, Feb 23, 2021 at 10:33:55AM +1100, David Gibson wrote:
> On Mon, Feb 22, 2021 at 06:50:44PM +0100, Cornelia Huck wrote:
> > On Mon, 22 Feb 2021 18:41:07 +0100
> > Philippe Mathieu-Daudé <philmd@redhat.com> wrote:
> > 
> > > On 2/22/21 6:24 PM, Cornelia Huck wrote:
> > > > On Fri, 19 Feb 2021 18:38:37 +0100
> > > > Philippe Mathieu-Daudé <philmd@redhat.com> wrote:
> > > > 
> > > >> MachineClass::kvm_type() can return -1 on failure.
> > > >> Document it, and add a check in kvm_init().
> > > >>
> > > >> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> > > >> ---
> > > >>  include/hw/boards.h | 3 ++-
> > > >>  accel/kvm/kvm-all.c | 6 ++++++
> > > >>  2 files changed, 8 insertions(+), 1 deletion(-)
> > > >>
> > > >> diff --git a/include/hw/boards.h b/include/hw/boards.h
> > > >> index a46dfe5d1a6..68d3d10f6b0 100644
> > > >> --- a/include/hw/boards.h
> > > >> +++ b/include/hw/boards.h
> > > >> @@ -127,7 +127,8 @@ typedef struct {
> > > >>   *    implement and a stub device is required.
> > > >>   * @kvm_type:
> > > >>   *    Return the type of KVM corresponding to the kvm-type string option or
> > > >> - *    computed based on other criteria such as the host kernel capabilities.
> > > >> + *    computed based on other criteria such as the host kernel capabilities
> > > >> + *    (which can't be negative), or -1 on error.
> > > >>   * @numa_mem_supported:
> > > >>   *    true if '--numa node.mem' option is supported and false otherwise
> > > >>   * @smp_parse:
> > > >> diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
> > > >> index 84c943fcdb2..b069938d881 100644
> > > >> --- a/accel/kvm/kvm-all.c
> > > >> +++ b/accel/kvm/kvm-all.c
> > > >> @@ -2057,6 +2057,12 @@ static int kvm_init(MachineState *ms)
> > > >>                                                              "kvm-type",
> > > >>                                                              &error_abort);
> > > >>          type = mc->kvm_type(ms, kvm_type);
> > > >> +        if (type < 0) {
> > > >> +            ret = -EINVAL;
> > > >> +            fprintf(stderr, "Failed to detect kvm-type for machine '%s'\n",
> > > >> +                    mc->name);
> > > >> +            goto err;
> > > >> +        }
> > > >>      }
> > > >> 
> > > >>      do {
> > > >
> > > > No objection to this patch; but I'm wondering why some non-pseries
> > > > machines implement the kvm_type callback, when I see the kvm-type
> > > > property only for pseries? Am I holding my git grep wrong?
> > > 
> > > Can it be what David commented here?
> > > https://www.mail-archive.com/qemu-devel@nongnu.org/msg784508.html
> > > 
> > 
> > Ok, I might be confused about the other ppc machines; but I'm wondering
> > about the kvm_type callback for mips and arm/virt. Maybe I'm just
> > confused by the whole mechanism?
> 
> For ppc at least, not sure about in general, pseries is the only
> machine type that can possibly work under more than one KVM flavour
> (HV or PR).  So, it's the only one where it's actually useful to be
> able to configure this.

Wait... I'm not sure that's true.  At least theoretically, some of the
Book3E platforms could work with either PR or the Book3E specific
KVM.  Not sure if KVM PR supports all the BookE instructions it would
need to in practice.

Possibly pseries is just the platform where there's been enough people
interested in setting the KVM flavour so far.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson



From xen-devel-bounces@lists.xenproject.org Tue Feb 23 04:51:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 04:51:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88566.166577 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEPf6-00024R-MR; Tue, 23 Feb 2021 04:51:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88566.166577; Tue, 23 Feb 2021 04:51:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEPf6-00024K-JT; Tue, 23 Feb 2021 04:51:04 +0000
Received: by outflank-mailman (input) for mailman id 88566;
 Tue, 23 Feb 2021 04:51:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5Sev=HZ=gmail.com=akihiko.odaki@srs-us1.protection.inumbo.net>)
 id 1lEPf5-00024F-AI
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 04:51:03 +0000
Received: from mail-ej1-x630.google.com (unknown [2a00:1450:4864:20::630])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c4fe7bf9-a9ed-46c1-8b0b-9ee3bfec0e83;
 Tue, 23 Feb 2021 04:51:02 +0000 (UTC)
Received: by mail-ej1-x630.google.com with SMTP id g5so32657138ejt.2
 for <xen-devel@lists.xenproject.org>; Mon, 22 Feb 2021 20:51:02 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c4fe7bf9-a9ed-46c1-8b0b-9ee3bfec0e83
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc:content-transfer-encoding;
        bh=eyV0i+C+/h3drfKQYSgn5N2vYzhLCJd5VmWoT2VkEoU=;
        b=pzVLXKEKzvs2zueS4b8VZnt0B2A1VZQiMUTosGz2+LxDz2sOSMm+4do8amPtPwMInW
         RTNppoTFhwerWxvd4vhBVP+eccc8gKnbIsswMJkxxtgERcS3nh5YFFdYuC7ppTAghvUI
         GHspGWPEUc0tGE4oLfW4Oxd4aiB4rw9ySmsHad/Yf3U+TrO7ezR2NHXuZVcz1IM0gJQb
         XuITgLxiNTVC8HZ/14qhgD5HqwSjUDKAiPErn2GV2VZZKpWxSC/B4hDsNoI3R7dQqEyT
         HcSxTeU6R2eZL+FKcL8uuWWDx0cp67PYw2T1lAsCaz48ianIB3zX7ptpgkpadRjKovCS
         jERw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc:content-transfer-encoding;
        bh=eyV0i+C+/h3drfKQYSgn5N2vYzhLCJd5VmWoT2VkEoU=;
        b=a9QT7lADcTvoZDotF4lPtr1NC3X9i9x2DsKI3iwQ6nXerO51CiH9RRjYFy04oY5jlj
         YJjOALKASeCiRrwOZ2YgAgq1Sbd3BhopiB35mD230CdR98MjcRTSyEW/kHqomZ4akBYV
         ixwsZQGGkqEED+i2XBaqK5+XzLb30Ow9NleNKNMVFw2EnidVYC5Pkh17eUoC5KUigexn
         /UBnWf240GIltVLNe5/V0FJfhY8E5AP63nBlRPzuFNS39DEaMcp8bIudSwWt1uSDABcE
         etePUgitNHAHLzjjvd/SSNYYpvr8Drs3hNcUst54UpIGzVHYRnQP1N2o6OO4cIkyf3nK
         rq5g==
X-Gm-Message-State: AOAM530AbTlsSf5ps9TiQZdk//PoIPnOeunId+0MFYr0SN0JWpk1E5v3
	d2oMNubYN7dratH6kMy9IXBGI/cuPgALOdjeTsg=
X-Google-Smtp-Source: ABdhPJyEsVXOt5ffezeXVPCwJHnmQ5QdmAVNmzijrNpdiNHtBYNLyYXnrDbjRFz4rLHU+fbg3MyG2RTIHwjDpj372Ck=
X-Received: by 2002:a17:906:3856:: with SMTP id w22mr23918943ejc.77.1614055861541;
 Mon, 22 Feb 2021 20:51:01 -0800 (PST)
MIME-Version: 1.0
References: <20210221133414.7262-1-akihiko.odaki@gmail.com> <20210222105738.w2q6vp5pi4p6bx5m@sirius.home.kraxel.org>
In-Reply-To: <20210222105738.w2q6vp5pi4p6bx5m@sirius.home.kraxel.org>
From: Akihiko Odaki <akihiko.odaki@gmail.com>
Date: Tue, 23 Feb 2021 13:50:51 +0900
Message-ID: <CAMVc7JVo_XJcGcxW0Wmqje3Y40fRZDY6T8dnQTc2=Ehasz4UHw@mail.gmail.com>
Subject: Re: [PATCH] virtio-gpu: Respect graphics update interval for EDID
To: Gerd Hoffmann <kraxel@redhat.com>
Cc: qemu Developers <qemu-devel@nongnu.org>, xen-devel@lists.xenproject.org, 
	"Michael S. Tsirkin" <mst@redhat.com>, Stefano Stabellini <sstabellini@kernel.org>, 
	Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, Feb 22, 2021 at 19:57, Gerd Hoffmann <kraxel@redhat.com> wrote:
>
> On Sun, Feb 21, 2021 at 10:34:14PM +0900, Akihiko Odaki wrote:
> > This change introduces an additional member, refresh_rate to
> > qemu_edid_info in include/hw/display/edid.h.
> >
> > This change also isolates the graphics update interval from the
> > display update interval. The guest will update the frame buffer
> > in the graphics update interval, but displays can be updated at a
> > dynamic interval, for example to save update costs aggressively
> > (vnc) or to respond to user-generated events (sdl).
> > It stabilizes the graphics update interval and prevents the guest
> > from being confused.
>
> Hmm.  What problem are you trying to solve here?
>
> The update throttling being visible to the guest was done intentionally,
> so the guest can throttle the display updates too in case nobody is
> watching those display updates anyway.

Indeed, we are throttling the updates for vnc to avoid some worthless
work. But typically a guest cannot respond to update interval changes
that often, because the real display devices the guest is designed for
do not change the update interval in that way. That is why we have to
tell the guest a stable update interval even if it results in wasted
frames.

Regards,
Akihiko Odaki

>
> take care,
>   Gerd
>


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 07:01:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 07:01:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88576.166605 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lERgg-0006NE-LM; Tue, 23 Feb 2021 07:00:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88576.166605; Tue, 23 Feb 2021 07:00:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lERgg-0006N7-IT; Tue, 23 Feb 2021 07:00:50 +0000
Received: by outflank-mailman (input) for mailman id 88576;
 Tue, 23 Feb 2021 07:00:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xmUX=HZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lERgf-0006Mv-Ix
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 07:00:49 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 972bbf44-fed1-4716-8c5a-05a374bba8a0;
 Tue, 23 Feb 2021 07:00:41 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0196FAC6E;
 Tue, 23 Feb 2021 07:00:41 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 972bbf44-fed1-4716-8c5a-05a374bba8a0
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614063641; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=qXuzaG7HYJNIXv1W1/ysQ5FoBF8f7kN3bLKzCHwi4OE=;
	b=MSJczit3151vtPKUb108eSVKXBDmD+PBlZrOxHpnkC4fReYTkqmEJxQNLVH8b6vJn1GuCu
	gjBC7unIcDHcFov+kuy9Zanhr/j9MEJhSL2Mn2V2539KoOR8U95e7VO7JpTIr/OJNZ6DJj
	0vZ6XI+BACqHwIbIa+FVfY0bOA7AKcc=
Subject: Re: [PATCH for-4.15] xen/sched: Add missing memory barrier in
 vcpu_block()
To: Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
Cc: iwj@xenproject.org, ash.j.wilding@gmail.com,
 Julien Grall <jgrall@amazon.com>, George Dunlap <george.dunlap@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>, xen-devel@lists.xenproject.org
References: <20210220194701.24202-1-julien@xen.org>
 <744ca7e5-328d-0c5f-bc52-e4c0e78dad97@suse.com>
 <alpine.DEB.2.21.2102221208050.3234@sstabellini-ThinkPad-T480s>
 <b68a644f-8b9c-3e1d-49c6-4058d276228b@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <dd2ce0b0-4bd4-15e5-c4b2-2540799ed493@suse.com>
Date: Tue, 23 Feb 2021 08:00:37 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <b68a644f-8b9c-3e1d-49c6-4058d276228b@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 22.02.2021 21:12, Julien Grall wrote:
> On 22/02/2021 20:09, Stefano Stabellini wrote:
>> On Mon, 22 Feb 2021, Jan Beulich wrote:
>>> On 20.02.2021 20:47, Julien Grall wrote:
>>>> This is a follow-up of the discussion that started in 2019 (see [1])
>>>> regarding a possible race between do_poll()/vcpu_unblock() and the wake
>>>> up path.
>>>>
>>>> I haven't yet fully thought about the potential race in do_poll(). If
>>>> there is, then this would likely want to be fixed in a separate patch.
>>>>
>>>> On x86, the current code is safe because set_bit() is fully ordered. So
>>>> the problem is Arm (and potentially any new architectures).
>>>>
>>>> I couldn't convince myself whether the Arm implementation of
>>>> local_events_need_delivery() contains enough barrier to prevent the
>>>> re-ordering. However, I don't think we want to play with devil here as
>>>> the function may be optimized in the future.
>>>
>>> In fact I think this ...
>>>
>>>> --- a/xen/common/sched/core.c
>>>> +++ b/xen/common/sched/core.c
>>>> @@ -1418,6 +1418,8 @@ void vcpu_block(void)
>>>>   
>>>>       set_bit(_VPF_blocked, &v->pause_flags);
>>>>   
>>>> +    smp_mb__after_atomic();
>>>> +
>>>
>>> ... pattern should be looked for throughout the codebase, and barriers
>>> be added unless it can be proven none is needed.
>> And in that case it would be best to add an in-code comment to explain
>> why the barrier is not needed.
>
> I would rather not add a comment for every *_bit() call. It should be
> pretty obvious for most of them that the barrier is not necessary.
> 
> We should only add comments where the barrier is necessary or it is not 
> clear why it is not necessary.

I guess by "pattern" I didn't necessarily mean _all_ *_bit()
calls - indeed there are many where it's clear that no barrier
would be needed. I was rather meaning modifications like this
of v->pause_flags (I'm sure there are further such fields).

Jan


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 07:13:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 07:13:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88581.166622 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lERsw-0007VI-TK; Tue, 23 Feb 2021 07:13:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88581.166622; Tue, 23 Feb 2021 07:13:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lERsw-0007VB-QE; Tue, 23 Feb 2021 07:13:30 +0000
Received: by outflank-mailman (input) for mailman id 88581;
 Tue, 23 Feb 2021 07:13:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xmUX=HZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lERsv-0007V6-T5
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 07:13:29 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2831ddb7-3f68-43aa-8d38-2c5a9f4f91d2;
 Tue, 23 Feb 2021 07:13:28 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id F0D47ACBF;
 Tue, 23 Feb 2021 07:13:27 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2831ddb7-3f68-43aa-8d38-2c5a9f4f91d2
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614064408; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=jrUHoyJwvbbMr4+3pK/5jLU2vTyFp02UgSPe3FxTo5E=;
	b=IB3tBAttR5Iqu9j+PEoBvgLiqdcD6CMvFExjLstJTW8MsVhF+JBztM8+rqYZcaXJb1AI2W
	IlGFK9G5aY9F2uh4eS+nabsj1ChCaBZhmkV0bny6hPAtHjAmho1NI+m5spY42+jW8FpWLg
	MKUTf/KvElHuW+83h1T/rtDs+21233o=
Subject: Re: [PATCH][4.15] x86: mirror compat argument translation area for
 32-bit PV
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <bdedf018-b6c4-d0da-fb4b-8cf2d048c3b1@suse.com>
 <9eade40b-bd95-b850-2dec-f7def66c3c7b@citrix.com>
 <77a36366-9157-c3d3-b1f0-211f4fc39a93@suse.com>
 <60a31ec8-6844-2149-1a04-7e757d1d2dd3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <42c86cc7-c417-6089-4e44-90a96ebaede1@suse.com>
Date: Tue, 23 Feb 2021 08:13:27 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <60a31ec8-6844-2149-1a04-7e757d1d2dd3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 22.02.2021 17:47, Andrew Cooper wrote:
> On 22/02/2021 14:22, Jan Beulich wrote:
>> On 22.02.2021 15:14, Andrew Cooper wrote:
>>> On 22/02/2021 10:27, Jan Beulich wrote:
>>>> Now that we guard the entire Xen VA space against speculative abuse
>>>> through hypervisor accesses to guest memory, the argument translation
>>>> area's VA also needs to live outside this range, at least for 32-bit PV
>>>> guests. To avoid extra is_hvm_*() conditionals, use the alternative VA
>>>> uniformly.
>>>>
>>>> While this could be conditionalized upon CONFIG_PV32 &&
>>>> CONFIG_SPECULATIVE_HARDEN_GUEST_ACCESS, omitting such extra conditionals
>>>> keeps the code more legible imo.
>>>>
>>>> Fixes: 4dc181599142 ("x86/PV: harden guest memory accesses against speculative abuse")
>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>>
>>>> --- a/xen/arch/x86/mm.c
>>>> +++ b/xen/arch/x86/mm.c
>>>> @@ -1727,6 +1727,11 @@ void init_xen_l4_slots(l4_pgentry_t *l4t
>>>>                 (ROOT_PAGETABLE_FIRST_XEN_SLOT + slots -
>>>>                  l4_table_offset(XEN_VIRT_START)) * sizeof(*l4t));
>>>>      }
>>>> +
>>>> +    /* Slot 511: Per-domain mappings mirror. */
>>>> +    if ( !is_pv_64bit_domain(d) )
>>>> +        l4t[l4_table_offset(PERDOMAIN2_VIRT_START)] =
>>>> +            l4e_from_page(d->arch.perdomain_l3_pg, __PAGE_HYPERVISOR_RW);
>>> This virtual address is inside the extended directmap.
>> No. That one covers only the range excluding the last L4 slot.
>>
>>> You're going to
>>> need to rearrange more things than just this, to make it safe.
>> I specifically picked that entry because I don't think further
>> arrangements are needed.
> 
> map_domain_page() will blindly hand this virtual address if an
> appropriate mfn is passed, because there are no suitability checks.
> 
> The error handling isn't great, but at least any attempt to use that
> pointer would fault, which is now no longer the case.
> 
> LA57 machines can have RAM or NVDIMMs in a range which will tickle this
> bug.  In fact, they can have MFNs which would wrap around 0 into guest
> space.

This latter fact would be a far worse problem than accesses through
the last L4 entry, populated or not. However, I don't really follow
your concern: There are ample cases where functions assume to be
passed sane arguments. A pretty good (imo) comparison here is with
mfn_to_page(), which will also assume a sane MFN (i.e. one with a
representable (in frame_table[]) value). If there was a bug, it
would be either the caller taking an MFN out of thin air, or us
introducing MFNs we can't cover in any of direct map, frame table,
or M2P. But afaict there is guarding against the latter (look for
the "Ignoring inaccessible memory range" log messages in setup.c).

In any event - imo any such bug would need fixing there, rather
than being an argument against the change here.

Also, besides your objection going quite a bit too far for my taste,
I miss any form of alternative suggestion. Do you want the mirror
range to be put below the canonical boundary? Taking into account your
wrapping consideration, just about _any_ VA would be unsuitable.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 07:16:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 07:16:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88583.166635 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lERw8-0007ex-Cf; Tue, 23 Feb 2021 07:16:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88583.166635; Tue, 23 Feb 2021 07:16:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lERw8-0007eq-9W; Tue, 23 Feb 2021 07:16:48 +0000
Received: by outflank-mailman (input) for mailman id 88583;
 Tue, 23 Feb 2021 07:16:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xmUX=HZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lERw6-0007el-Tt
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 07:16:46 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 75c20548-ce79-4e9e-94d2-fba73181b91d;
 Tue, 23 Feb 2021 07:16:46 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6632EAC6E;
 Tue, 23 Feb 2021 07:16:45 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 75c20548-ce79-4e9e-94d2-fba73181b91d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614064605; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=2rZNl2J+TuuNqaHSlsFWCxq5LosMSejDP9+klGAcWHc=;
	b=iPwHmf31DvMhgFCYtea+ROBZ5MrzKgNGLNPQzC+w4aHaL+7f5B1e2gBCdjsNnRajwSQ782
	BBFi1tqcKmO6r7SctCNruPU8RaA5BJy97od8VCTnFRn/rsbnYyd+yWQ3x0O9mbf/7EZLym
	obIAy1t2yvK7CisayNkpgAyHOOEI7w4=
Subject: Re: [PATCH v3 0/5] Support Secure Boot for multiboot2 Xen
To: Bobby Eshleman <bobbyeshleman@gmail.com>
Cc: Daniel Kiper <daniel.kiper@oracle.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Olivier Lambert <olivier.lambert@vates.fr>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <cover.1611273359.git.bobbyeshleman@gmail.com>
 <9a58bdf7-3a34-1b81-aec9-b14da463d75e@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <f24e9e8d-9d55-f301-9a33-4398b463013d@suse.com>
Date: Tue, 23 Feb 2021 08:16:45 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <9a58bdf7-3a34-1b81-aec9-b14da463d75e@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 22.02.2021 19:04, Bobby Eshleman wrote:
> I just wanted to request more feedback on this series and put it on the radar, while acknowledging
> that I'm sure given the recent code freeze it is a busy time for everybody.

It is on my list of things to look at. While probably not a good excuse,
my looking at previous versions of this makes me somewhat hesitant to
open any of these patch mails ... But I mean to get to it.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 07:23:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 07:23:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88587.166646 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lES2n-0000Dc-4a; Tue, 23 Feb 2021 07:23:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88587.166646; Tue, 23 Feb 2021 07:23:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lES2n-0000DV-19; Tue, 23 Feb 2021 07:23:41 +0000
Received: by outflank-mailman (input) for mailman id 88587;
 Tue, 23 Feb 2021 07:23:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xmUX=HZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lES2l-0000DQ-Ay
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 07:23:39 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 079eb4b4-059e-4578-8041-ef481f304d37;
 Tue, 23 Feb 2021 07:23:38 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 87569ACBF;
 Tue, 23 Feb 2021 07:23:37 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 079eb4b4-059e-4578-8041-ef481f304d37
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614065017; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=6cbhbTlOQ2V1rz/BldkIurA7ktZyiVuzBwIbKMcbcF0=;
	b=m+GeUblM9DPYo/7OGZBv25bfiX6kBjO1jh/ddaNBLvRgwQY0RH7gV6E67Ws1jez+SA1Oxm
	sOkHBtUDlx+Am6QbIduLf2RPuNPviCl6SOB5ThMHDH4g1YCRiAdc66CUfF0fvz/MUnUkrK
	/3aGjMGMyt2vvauwcOH1/6q0Z8mz/Js=
Subject: Re: [PATCH] firmware: don't build hvmloader if it is not needed
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>, xen-devel@lists.xenproject.org,
 cardoe@cardoe.com, andrew.cooper3@citrix.com, wl@xen.org,
 iwj@xenproject.org, anthony.perard@citrix.com,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
References: <20210213020540.27894-1-sstabellini@kernel.org>
 <20210213135056.GA6191@mail-itl>
 <4d9200cd-bd4b-e429-5c96-7a4399bb00b4@suse.com>
 <alpine.DEB.2.21.2102161016000.3234@sstabellini-ThinkPad-T480s>
 <5a574326-9560-e771-b84f-9d4f348b7f5f@suse.com>
 <alpine.DEB.2.21.2102171529460.3234@sstabellini-ThinkPad-T480s>
 <416e26b7-0e24-a9ee-6f9a-732f77f7e0cc@suse.com>
 <alpine.DEB.2.21.2102181737310.3234@sstabellini-ThinkPad-T480s>
 <3723a430-e7de-017a-294f-4c3fdb35da51@suse.com>
 <alpine.DEB.2.21.2102221453080.3234@sstabellini-ThinkPad-T480s>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4654a547-0d07-6a90-7e92-7a5403ec6a63@suse.com>
Date: Tue, 23 Feb 2021 08:23:37 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2102221453080.3234@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 23.02.2021 00:05, Stefano Stabellini wrote:
> On Fri, 19 Feb 2021, Jan Beulich wrote:
>> On 19.02.2021 02:42, Stefano Stabellini wrote:
>>> --- a/tools/configure.ac
>>> +++ b/tools/configure.ac
>>> @@ -307,6 +307,10 @@ AC_ARG_VAR([AWK], [Path to awk tool])
>>>  
>>>  # Checks for programs.
>>>  AC_PROG_CC
>>> +AC_LANG(C)
>>> +AC_LANG_CONFTEST([AC_LANG_SOURCE([[int main() { return 0;}]])])
>>> +AS_IF([gcc -m32 conftest.c -o - 2>/dev/null], [hvmloader=y], [AC_MSG_WARN(hvmloader build disabled as the compiler cannot build 32bit binaries)])
>>> +AC_SUBST(hvmloader)
>>
>> I'm puzzled: "gcc -m32" looked to work fine on its own. I suppose
>> the above fails at the linking stage, but that's not what we care
>> about (we don't link with any system libraries). Instead, as said,
>> you want to check "gcc -m32 -c" produces correct code, in
>> particular with sizeof(uint64_t) being 8. Of course all of this
>> would be easier if their headers at least caused some form of
>> error, instead of silently causing bad code to be generated.
>>
>> The way you do it, someone simply not having 32-bit C libraries
>> installed would then also have hvmloader build disabled, even if
>> their compiler and headers are fine to use.
> 
> I realize that technically this test is probing for something different:
> the ability to build and link a trivial 32-bit userspace program, rather
> than a specific check about sizeof(uint64_t). However, I thought that if
> this test failed we didn't want to continue anyway.
> 
> If you say that hvmloader doesn't link against any system libraries,
> then in theory the hvmloader build could succeed even if this test
> failed. Hence, we need to change strategy.
> 
> What do you think of something like this?
> 
> AC_LANG_CONFTEST([AC_LANG_SOURCE([[#include <assert.h>
> #include <stdint.h>
> int main() { assert(sizeof(uint64_t) == 8); return 0;}]])])
> AS_IF([gcc -m32 conftest.c -o conftest 2>/dev/null], [hvmloader=y], [AC_MSG_WARN(XXX)])

The assert() would trigger at runtime only, so you'd also need to
invoke the program, wouldn't you?

> Do you have any better ideas?

An open-coded BUILD_BUG_ON()-like test would allow noticing the issue
already at compile time.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 07:52:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 07:52:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88594.166665 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lESUh-0003F9-Ly; Tue, 23 Feb 2021 07:52:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88594.166665; Tue, 23 Feb 2021 07:52:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lESUh-0003F2-Iw; Tue, 23 Feb 2021 07:52:31 +0000
Received: by outflank-mailman (input) for mailman id 88594;
 Tue, 23 Feb 2021 07:52:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lESUg-0003Eu-D5; Tue, 23 Feb 2021 07:52:30 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lESUg-0005oR-6H; Tue, 23 Feb 2021 07:52:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lESUf-0000Ij-Qz; Tue, 23 Feb 2021 07:52:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lESUf-00075B-QU; Tue, 23 Feb 2021 07:52:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=UM7wp9lYupF08uAXFcq9BU4im3aN87DsP2xuK2ZpH9M=; b=A8bNLfLLR/50GSgRO40kTrz3YT
	x0eRV8z0L8L52fVX1XfMUbNssf69F9JzML0n1LzpPBorJTXX/+wlSXzI+4hIpgN0HvRJiwLleG2he
	6kcwD7rrqQhXtuZ7LGk8DLvoIfBb/VeagqyY73ndlsERXcU18XnxHkFVdBWIQ/rzqg20=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159559-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 159559: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-shadow:xen-boot:fail:regression
    xen-unstable:test-xtf-amd64-amd64-1:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-xtf-amd64-amd64-5:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-xtf-amd64-amd64-2:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-intel:xen-boot:fail:regression
    xen-unstable:test-xtf-amd64-amd64-3:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-amd64-i386-libvirt-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-coresched-i386-xl:xen-boot:fail:regression
    xen-unstable:test-xtf-amd64-amd64-4:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-amd64-i386-xl-raw:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-libvirt-pair:xen-boot/src_host:fail:regression
    xen-unstable:test-amd64-i386-libvirt-pair:xen-boot/dst_host:fail:regression
    xen-unstable:test-amd64-i386-xl-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-migrupgrade:xen-boot/dst_host:fail:regression
    xen-unstable:test-amd64-i386-xl:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-freebsd10-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-freebsd10-i386:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-examine:reboot:fail:regression
    xen-unstable:test-amd64-i386-libvirt:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-livepatch:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-pair:xen-boot/src_host:fail:regression
    xen-unstable:test-amd64-i386-pair:xen-boot/dst_host:fail:regression
    xen-unstable:test-amd64-i386-xl-pvshim:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f894c3d8e705fea9cb3244fa61684bfd8bdd1b2a
X-Osstest-Versions-That:
    xen=e8185c5f01c68f7d29d23a4a91bc1be1ff2cc1ca
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 23 Feb 2021 07:52:29 +0000

flight 159559 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159559/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-ws16-amd64  8 xen-boot          fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-debianhvm-amd64  8 xen-boot     fail REGR. vs. 159475
 test-amd64-i386-xl-shadow     8 xen-boot                 fail REGR. vs. 159475
 test-xtf-amd64-amd64-1      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-xtf-amd64-amd64-5      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-xtf-amd64-amd64-2      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-amd64-i386-qemut-rhel6hvm-intel  8 xen-boot         fail REGR. vs. 159475
 test-xtf-amd64-amd64-3      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-amd64-i386-libvirt-xsm   8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-xl-qemut-debianhvm-amd64  8 xen-boot     fail REGR. vs. 159475
 test-amd64-coresched-i386-xl  8 xen-boot                 fail REGR. vs. 159475
 test-xtf-amd64-amd64-4      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-amd64-i386-xl-raw        8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  8 xen-boot  fail REGR. vs. 159475
 test-amd64-i386-libvirt-pair 12 xen-boot/src_host        fail REGR. vs. 159475
 test-amd64-i386-libvirt-pair 13 xen-boot/dst_host        fail REGR. vs. 159475
 test-amd64-i386-xl-xsm        8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-ovmf-amd64  8 xen-boot          fail REGR. vs. 159475
 test-amd64-i386-migrupgrade  13 xen-boot/dst_host        fail REGR. vs. 159475
 test-amd64-i386-xl            8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 159475
 test-amd64-i386-qemut-rhel6hvm-amd  8 xen-boot           fail REGR. vs. 159475
 test-amd64-i386-qemuu-rhel6hvm-amd  8 xen-boot           fail REGR. vs. 159475
 test-amd64-i386-xl-qemut-win7-amd64  8 xen-boot          fail REGR. vs. 159475
 test-amd64-i386-freebsd10-amd64  8 xen-boot              fail REGR. vs. 159475
 test-amd64-i386-freebsd10-i386  8 xen-boot               fail REGR. vs. 159475
 test-amd64-i386-examine       8 reboot                   fail REGR. vs. 159475
 test-amd64-i386-libvirt       8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  8 xen-boot  fail REGR. vs. 159475
 test-amd64-i386-livepatch     8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 159475
 test-amd64-i386-qemuu-rhel6hvm-intel  8 xen-boot         fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-ws16-amd64  8 xen-boot          fail REGR. vs. 159475
 test-amd64-i386-pair         12 xen-boot/src_host        fail REGR. vs. 159475
 test-amd64-i386-pair         13 xen-boot/dst_host        fail REGR. vs. 159475
 test-amd64-i386-xl-pvshim     8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-win7-amd64  8 xen-boot          fail REGR. vs. 159475

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 159475
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 159475
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 159475
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 159475
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 159475
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 159475
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 159475
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f894c3d8e705fea9cb3244fa61684bfd8bdd1b2a
baseline version:
 xen                  e8185c5f01c68f7d29d23a4a91bc1be1ff2cc1ca

Last test of basis   159475  2021-02-19 06:26:37 Z    4 days
Failing since        159487  2021-02-20 04:29:29 Z    3 days    6 attempts
Testing same since   159559  2021-02-22 20:38:35 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Norbert Manthey <nmanthey@amazon.de>
  Rahul Singh <rahul.singh@arm.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Tamas K Lengyel <tamas@tklengyel.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    fail    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 318 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 07:53:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 07:53:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88598.166680 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lESW6-0003MN-4S; Tue, 23 Feb 2021 07:53:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88598.166680; Tue, 23 Feb 2021 07:53:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lESW5-0003MG-WA; Tue, 23 Feb 2021 07:53:57 +0000
Received: by outflank-mailman (input) for mailman id 88598;
 Tue, 23 Feb 2021 07:53:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xmUX=HZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lESW5-0003MB-7E
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 07:53:57 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c20295c3-83c5-4152-9afe-b1bfdf137795;
 Tue, 23 Feb 2021 07:53:56 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8CEC5AC6E;
 Tue, 23 Feb 2021 07:53:55 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c20295c3-83c5-4152-9afe-b1bfdf137795
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614066835; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=RkIVdJe47ovNbnukGvcz6nDHzSpEn60BEyXAtYbaTos=;
	b=gvQBHtdXQNdHRfdWXkga0TPFOISzvucDmWABpjsK+vr0f3dtX6fQnQXYvmAcYAWin5YDM4
	7NuEGVwpdqrYFzAtbUMWG+eyQbPMYAs/GlWifsn1EVCfYoYdK+yjuj24udZzQJZBfEjhrj
	4ZHJ9mHbO5QEvpzhX81ECsZm2KkXjSs=
Subject: Re: [PATCH] x86/EFI: suppress GNU ld 2.36'es creation of base relocs
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <6ce5b1a7-d7c2-c30c-ad78-233379ea130b@suse.com>
 <53c7a708-1664-0186-1fd6-1056f8e7839c@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <f8e56c90-f51c-01f7-0987-4c0697a17bb0@suse.com>
Date: Tue, 23 Feb 2021 08:53:55 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <53c7a708-1664-0186-1fd6-1056f8e7839c@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 22.02.2021 17:36, Andrew Cooper wrote:
> On 19/02/2021 08:09, Jan Beulich wrote:
>> --- a/xen/arch/x86/Makefile
>> +++ b/xen/arch/x86/Makefile
>> @@ -123,8 +123,13 @@ ifneq ($(efi-y),)
>>  # Check if the compiler supports the MS ABI.
>>  export XEN_BUILD_EFI := $(shell $(CC) $(XEN_CFLAGS) -c efi/check.c -o efi/check.o 2>/dev/null && echo y)
>>  # Check if the linker supports PE.
>> -XEN_BUILD_PE := $(if $(XEN_BUILD_EFI),$(shell $(LD) -mi386pep --subsystem=10 -S -o efi/check.efi efi/check.o 2>/dev/null && echo y))
>> +EFI_LDFLAGS = $(patsubst -m%,-mi386pep,$(XEN_LDFLAGS)) --subsystem=10 --strip-debug
>> +XEN_BUILD_PE := $(if $(XEN_BUILD_EFI),$(shell $(LD) $(EFI_LDFLAGS) -o efi/check.efi efi/check.o 2>/dev/null && echo y))
>>  CFLAGS-$(XEN_BUILD_EFI) += -DXEN_BUILD_EFI
>> +# Check if the linker produces fixups in PE by default (we need to disable it doing so for now).
>> +XEN_NO_PE_FIXUPS := $(if $(XEN_BUILD_EFI), \
>> +                         $(shell $(LD) $(EFI_LDFLAGS) --disable-reloc-section -o efi/check.efi efi/check.o 2>/dev/null && \
>> +                                 echo --disable-reloc-section))
> 
> Why does --strip-debug move?

-S and --strip-debug are the same. I'm simply accumulating in
EFI_LDFLAGS everything that's needed for use in the probing construct.

Also I meanwhile have a patch to retain debug info, for which this
movement turns out to be a prereq. (I've yet to test that the
produced binary actually works, and what's more I first need to get
a couple of changes accepted into binutils for the linker to actually
cope.)

> What's wrong with $(call ld-option ...)?  Actually, lots of this block
> of code looks to be open-coding of standard constructs.

It looks like ld-option could indeed be used here (there are marginal
differences which are likely acceptable), despite its brief comment
speaking of just a "flag" (singular, and not really covering e.g. input
files).

But:
- Its working differently from cc-option makes it inconsistent to
  use (the setting of XEN_BUILD_EFI can't very well be switched to
  use cc-option); because of this I'm not surprised that we have
  exactly one use in the tree right now.
- While XEN_BUILD_PE wants to be set to "y", for XEN_NO_PE_FIXUPS
  another transformation would then be necessary to translate "y"
  into "--disable-reloc-section".
- Do you really suggest re-doing this at this point in the release
  cycle?
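For context, the ld-option helper under discussion follows the kbuild pattern of probing the linker with a throwaway invocation and expanding to the flag only if it is accepted; a rough sketch of that idiom (simplified and illustrative, not the tree's exact definition, which also handles input objects and output files):

```make
# Kbuild-style probe sketch: expand to the given flag if $(LD) accepts
# it, or to nothing otherwise.  Illustrative only.
ld-option = $(shell if $(LD) $(1) --version >/dev/null 2>&1; then echo "$(1)"; fi)

# With such a helper, a flag-valued probe could read, hypothetically:
# XEN_NO_PE_FIXUPS := $(call ld-option,--disable-reloc-section)
```

This also illustrates Jan's second bullet: a helper that yields "y"/"n" (like the XEN_BUILD_PE probe wants) needs an extra transformation before it can supply the flag itself, whereas a flag-valued helper avoids that step.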

Jan


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 07:57:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 07:57:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88603.166692 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lESZ5-0003Xv-M5; Tue, 23 Feb 2021 07:57:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88603.166692; Tue, 23 Feb 2021 07:57:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lESZ5-0003Xo-J5; Tue, 23 Feb 2021 07:57:03 +0000
Received: by outflank-mailman (input) for mailman id 88603;
 Tue, 23 Feb 2021 07:57:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xmUX=HZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lESZ3-0003Xi-PW
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 07:57:01 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c111462b-dbd3-42be-b84f-691372253489;
 Tue, 23 Feb 2021 07:57:01 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 53E9AAC6E;
 Tue, 23 Feb 2021 07:57:00 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c111462b-dbd3-42be-b84f-691372253489
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614067020; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=3clWBHAJhZAGn0wAyDOQs8az+Z/tr8dKyeXJxsN4p4g=;
	b=LzPF2GA/yV2b2UBz+AbM9psI2EiPXUptuHR7mX5J7+i/OGLwpbKVt4Rmzf+qBplGU8/wKO
	+G12kRs/O3iPiSSrC0W/WWRMQEBSH00I+12MXpbRB9oakr2eWpfp90P+D/vo9PHE3vJqm4
	5cBk8fk2q0gkTBrknl0zu9Fi/8H7/no=
Subject: Re: [PATCH v2 2/4] x86: Introduce MSR_UNHANDLED
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, iwj@xenproject.org, wl@xen.org,
 anthony.perard@citrix.com, andrew.cooper3@citrix.com,
 jun.nakajima@intel.com, kevin.tian@intel.com
References: <1611182952-9941-1-git-send-email-boris.ostrovsky@oracle.com>
 <1611182952-9941-3-git-send-email-boris.ostrovsky@oracle.com>
 <YC5GrgqwsR/eBwpy@Air-de-Roger>
 <4e585daa-f255-fbff-d1cf-38ef49f146f5@oracle.com>
 <YDOQvU1h8zpOv5PH@Air-de-Roger>
 <ce2ef7a3-0583-ffff-182a-0ab078f45b82@oracle.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <307a4765-1c54-63fd-b3fb-ecb47ba3dbca@suse.com>
Date: Tue, 23 Feb 2021 08:57:00 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <ce2ef7a3-0583-ffff-182a-0ab078f45b82@oracle.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 22.02.2021 22:19, Boris Ostrovsky wrote:
> 
> On 2/22/21 6:08 AM, Roger Pau Monné wrote:
>> On Fri, Feb 19, 2021 at 09:56:32AM -0500, Boris Ostrovsky wrote:
>>> On 2/18/21 5:51 AM, Roger Pau Monné wrote:
>>>> On Wed, Jan 20, 2021 at 05:49:10PM -0500, Boris Ostrovsky wrote:
>>>>> When toolstack updates MSR policy, this MSR offset (which is the last
>>>>> index in the hypervisor MSR range) is used to indicate hypervisor
>>>>> behavior when guest accesses an MSR which is not explicitly emulated.
>>>> It's kind of weird to use an MSR to store this. I assume this is done
>>>> for migration reasons?
>>>
>>> Not really. It just seemed to me that MSR policy is the logical place to do that. Because it *is* a policy of how to deal with such accesses, isn't it?
>> I agree that using the msr_policy seems like the most suitable place
>> to convey this information between the toolstack and Xen. I wonder if
>> it would be fine to have fields in msr_policy that don't directly
>> translate into an MSR value?
> 
> 
> We have xen_msr_entry_t.flags that we can use when passing policy array back and forth. Then we can ignore xen_msr_entry_t.idx for that entry (although in earlier version of this series Jan preferred to use idx and leave flags alone).

Which, just to clarify, was not the least because of the flags
field being per-entry, i.e. per MSR, while here we want a global
indicator.
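The distinction being made here can be sketched in C: a per-entry flags field describes one MSR at a time, so a domain-global "what to do with unhandled MSRs" policy would instead ride on a dedicated index. This is a hedged illustration only: the struct layout mirrors Xen's public xen_msr_entry_t, but the MSR_UNHANDLED value and the helper function are invented for the example, not taken from the patch under discussion.

```c
#include <stdint.h>
#include <stddef.h>

/* Layout mirrors Xen's public xen_msr_entry_t (domctl interface). */
typedef struct xen_msr_entry {
    uint32_t idx;
    uint32_t flags;
    uint64_t val;
} xen_msr_entry_t;

#define MSR_UNHANDLED 0x400000ffu /* hypothetical sentinel index */

/*
 * Per-entry 'flags' are per MSR, so a domain-global policy has to be
 * conveyed via a dedicated index instead: scan the policy array for
 * the sentinel entry and return its 'val' as the global behaviour.
 * Returns 0 (keep current behaviour) if the sentinel is absent.
 */
uint64_t unhandled_msr_policy(const xen_msr_entry_t *policy,
                              size_t nr_entries)
{
    for ( size_t i = 0; i < nr_entries; i++ )
        if ( policy[i].idx == MSR_UNHANDLED )
            return policy[i].val;
    return 0;
}
```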

Jan


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 08:21:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 08:21:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88613.166706 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lESwA-0006wO-Ms; Tue, 23 Feb 2021 08:20:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88613.166706; Tue, 23 Feb 2021 08:20:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lESwA-0006wH-Jd; Tue, 23 Feb 2021 08:20:54 +0000
Received: by outflank-mailman (input) for mailman id 88613;
 Tue, 23 Feb 2021 08:20:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xmUX=HZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lESw9-0006vt-Iu
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 08:20:53 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fe5af15b-25fd-49cd-b8ad-a0316ebbc278;
 Tue, 23 Feb 2021 08:20:52 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7F29AAF49;
 Tue, 23 Feb 2021 08:20:51 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fe5af15b-25fd-49cd-b8ad-a0316ebbc278
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614068451; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=EpgiD216cNRYhBm73/WD4DKnrgw4M/eJ0KV1yvWs/HM=;
	b=Bit8Al5XEHdY9fB4hRvf44Skthd7NypK4rqP9joRovawY2jbDelhSHrDeNrBdxwS9lcW1P
	bb09a/6AduDv0YSx6qcaCKUq+bTafrAr2KXi08kkqX2qw7zIXUd8NDxJKp6ElgVjSo/7KD
	THzYSotQ/6M/QVfGAaXawhRJgOyYFL0=
Subject: Re: Domain reference counting breakage
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Volodymyr Babchuk
 <Volodymyr_Babchuk@epam.com>, Ian Jackson <iwj@xenproject.org>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20210222152617.16382-1-andrew.cooper3@citrix.com>
 <90be630d-dccf-f63f-8419-dc583204b3a9@suse.com>
 <f32e70fc-ad67-836e-5181-5d9d2dd9cd7a@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <9558e92d-644d-7a80-c530-5a45d451a48c@suse.com>
Date: Tue, 23 Feb 2021 09:20:51 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <f32e70fc-ad67-836e-5181-5d9d2dd9cd7a@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 22.02.2021 18:11, Andrew Cooper wrote:
> On 22/02/2021 16:49, Jan Beulich wrote:
>> On 22.02.2021 16:26, Andrew Cooper wrote:
>>> At the moment, attempting to create an HVM guest with max_gnttab_frames of 0
>>> causes Xen to explode on the:
>>>
>>>   BUG_ON(atomic_read(&d->refcnt) != DOMAIN_DESTROYED);
>>>
>>> in _domain_destroy().  Instrumenting Xen a little more to highlight where the
>>> modifications to d->refcnt occur:
>>>
>>>   (d6) --- Xen Test Framework ---
>>>   (d6) Environment: PV 64bit (Long mode 4 levels)
>>>   (d6) Testing domain create:
>>>   (d6) Testing x86 PVH Shadow
>>>   (d6) (XEN) d0v0 Hit #DB in Xen context: e008:ffff82d0402046b5 [domain_create+0x1c3/0x7f1], stk e010:ffff83003fea7d58, dr6 ffff0ff1
>>>   (d6) (XEN) d0v0 Hit #DB in Xen context: e008:ffff82d040321b11 [share_xen_page_with_guest+0x175/0x190], stk e010:ffff83003fea7ce8, dr6 ffff0ff1
>>>   (d6) (XEN) d0v0 Hit #DB in Xen context: e008:ffff82d04022595b [assign_pages+0x223/0x2b7], stk e010:ffff83003fea7c68, dr6 ffff0ff1
>>>   (d6) (XEN) grant_table.c:1934: Bad grant table sizes: grant 0, maptrack 0
>>>   (d6) (XEN) *** d1 ref 3
>>>   (d6) (XEN) d0v0 Hit #DB in Xen context: e008:ffff82d0402048bc [domain_create+0x3ca/0x7f1], stk e010:ffff83003fea7d58, dr6 ffff0ff1
>>>   (d6) (XEN) d0v0 Hit #DB in Xen context: e008:ffff82d040225e11 [free_domheap_pages+0x422/0x44a], stk e010:ffff83003fea7c38, dr6 ffff0ff1
>>>   (d6) (XEN) Xen BUG at domain.c:450
>>>   (d6) (XEN) ----[ Xen-4.15-unstable  x86_64  debug=y  Not tainted ]----
>>>   (d6) (XEN) CPU:    0
>>>   (d6) (XEN) RIP:    e008:[<ffff82d040204366>] common/domain.c#_domain_destroy+0x69/0x6b
>>>
>>> the problem becomes apparent.
>>>
>>> First of all, there is a reference count leak - share_xen_page_with_guest()'s
>>> reference isn't freed anywhere.
>>>
>>> However, the main problem is the 4th #DB above is this atomic_set()
>>>
>>>   d->is_dying = DOMDYING_dead;
>>>   if ( hardware_domain == d )
>>>       hardware_domain = old_hwdom;
>>>   printk("*** %pd ref %d\n", d, atomic_read(&d->refcnt));
>>>   atomic_set(&d->refcnt, DOMAIN_DESTROYED);
>>>
>>> in the domain_create() error path, which happens before free_domheap_pages()
>>> drops the ref acquired by assign_pages(), and destroys still-relevant information
>>> pertaining to the guest.
>> I think the original idea was that by the time of the atomic_set()
>> all operations potentially altering the refcount are done. This
>> then allowed calling free_xenheap_pages() on e.g. the shared info
>> page without regard to whether the page's reference (installed by
>> share_xen_page_with_guest()) was actually dropped (i.e.
>> regardless of whether it's the domain create error path or proper
>> domain cleanup). With this assumption, no actual leak of anything
>> would occur, but of course this isn't the "natural" way of how
>> things should get cleaned up.
>>
>> According to this original model, free_domheap_pages() may not be
>> called anymore past that point (for domain owned pages, which
>> really means put_page() then; anonymous pages are of course fine
>> to be freed late).
> 
> And I presume this is written down in the usual place?  (I.e. nowhere)

I'm afraid so, hence me starting the explanation with "I think ...".

>>> The best option is probably to use atomic_sub() to subtract (DOMAIN_DESTROYED
>>> + 1) from the current refcount, which preserves the extra refs taken by
>>> share_xen_page_with_guest() and assign_pages() until they can be freed
>>> appropriately.
>> First of all - why DOMAIN_DESTROYED+1? There's no extra reference
>> you ought to be dropping here. Or else what's the counterpart of
>> acquiring the respective reference?
> 
> The original atomic_set(1) needs dropping (somewhere around) here.

Ah, right.
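The arithmetic being agreed on can be modelled outside Xen. This is a hedged sketch, not the actual Xen code: it assumes DOMAIN_DESTROYED is the sign-bit constant and uses a plain uint32_t in place of atomic_t. Subtracting (DOMAIN_DESTROYED + 1) folds in the destroyed marker while dropping only domain_create()'s initial reference, so the refs still held by share_xen_page_with_guest() and assign_pages() survive until their put_page()s land.

```c
#include <stdint.h>

#define DOMAIN_DESTROYED (1u << 31) /* assumed value, for illustration */

/* Error path today: atomic_set() clobbers the count, losing the refs
 * still held on the domain's pages. */
uint32_t mark_dying_set(uint32_t refcnt)
{
    (void)refcnt;
    return DOMAIN_DESTROYED;
}

/* Proposed fix: fold in DOMAIN_DESTROYED while dropping only the
 * initial reference, preserving outstanding refs.  Unsigned
 * arithmetic wraps modulo 2^32, matching the intended behaviour. */
uint32_t mark_dying_sub(uint32_t refcnt)
{
    return refcnt - (DOMAIN_DESTROYED + 1u);
}
```

With refcnt == 3 (the initial ref plus two page refs), mark_dying_sub() yields DOMAIN_DESTROYED + 2, and the two later put_page()s bring it exactly to DOMAIN_DESTROYED, satisfying the BUG_ON in _domain_destroy().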

>> And then of course this means Xen heap pages cannot be cleaned up
>> anymore by merely calling free_xenheap_pages() - to get rid of
>> the associated reference, all of them would need to undergo
>> put_page_alloc_ref(), which in turn requires obtaining an extra
>> reference, which in turn introduces another of these ugly
>> theoretical error cases (because get_page() can in principle fail).
>>
>> Therefore I wouldn't outright discard the option of sticking to
>> the original model. It would then better be properly described
>> somewhere, and we would likely want to put some check in place to
>> make sure such put_page() can't go unnoticed anymore when sitting
>> too late on the cleanup path (which may be difficult to arrange
>> for).
> 
> I agree that some rules are in desperate need of writing down.
> 
> However, given the catastrophic mess that is our reference counting and
> cleanup paths, and how easy it is to get things wrong, I'm very
> disinclined to permit a rule which forces divergent cleanup logic.
> 
> Making the cleanup paths non-divergent is the fix to removing swathes of
> dubious/buggy logic, and removing a steady stream of memory leaks, etc.
> 
> In particular, I don't think it's acceptable to special case the cleanup
> rules in the domain_create() error path simply because we blindly reset
> the reference count when it still contains real information.

Of course I agree in principle. The question is whether this is a
good time for such an extensive rework. IOW I wonder if there's
an immediate bug to be fixed now and then some rework to be done
for 4.16.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 08:45:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 08:45:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88620.166722 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lETJt-0000Z0-MO; Tue, 23 Feb 2021 08:45:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88620.166722; Tue, 23 Feb 2021 08:45:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lETJt-0000Yt-Ib; Tue, 23 Feb 2021 08:45:25 +0000
Received: by outflank-mailman (input) for mailman id 88620;
 Tue, 23 Feb 2021 08:45:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vqnB=HZ=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1lETJs-0000Yo-AC
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 08:45:24 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 225e3f63-6940-4283-b7fb-485a034bff6e;
 Tue, 23 Feb 2021 08:45:23 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 82D49ACBF;
 Tue, 23 Feb 2021 08:45:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 225e3f63-6940-4283-b7fb-485a034bff6e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614069922; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=ZkNXdyMDx8NTUOj1TzHP+7IReNsG+wxkpdKOKxULGc0=;
	b=Xn+E33kTlgqw5+xs5M7AyxSost7CV3qpLJF9XxDtdgsRtAfNnlintw0NWQ2ORrA0ALy4C0
	oziQv6IUmfpHpNK7gjl02HzU0kDqXNRKyeCzmgWvbHoRHJxct4yLimcOFLImkRfwL4wtOc
	twvArflXXUeMcP+cjeXO17SA3yBlGQs=
Message-ID: <ba0522858e4a16336ddfb3c5ecd1f791ad7634e7.camel@suse.com>
Subject: Re: [PATCH for-4.15] xen/sched: Add missing memory barrier in
 vcpu_block()
From: Dario Faggioli <dfaggioli@suse.com>
To: Stefano Stabellini <sstabellini@kernel.org>, Jan Beulich
 <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>, iwj@xenproject.org,
 ash.j.wilding@gmail.com,  Julien Grall <jgrall@amazon.com>, George Dunlap
 <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Date: Tue, 23 Feb 2021 09:45:20 +0100
In-Reply-To: <alpine.DEB.2.21.2102221208050.3234@sstabellini-ThinkPad-T480s>
References: <20210220194701.24202-1-julien@xen.org>
	 <744ca7e5-328d-0c5f-bc52-e4c0e78dad97@suse.com>
	 <alpine.DEB.2.21.2102221208050.3234@sstabellini-ThinkPad-T480s>
Content-Type: multipart/signed; micalg="pgp-sha256";
	protocol="application/pgp-signature"; boundary="=-gpDumVTxRxxZLZxvWZVR"
User-Agent: Evolution 3.38.4 (by Flathub.org) 
MIME-Version: 1.0


--=-gpDumVTxRxxZLZxvWZVR
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, 2021-02-22 at 12:09 -0800, Stefano Stabellini wrote:
> On Mon, 22 Feb 2021, Jan Beulich wrote:
> > On 20.02.2021 20:47, Julien Grall wrote:
> > >
> > > vcpu_block() is now gaining an smp_mb__after_atomic() to prevent
> > > the CPU
> > > from reading any information about local events before the flag
> > > _VPF_blocked
> > > is set.
> > >
> > > Signed-off-by: Julien Grall <jgrall@amazon.com>
> >
> > Reviewed-by: Jan Beulich <jbeulich@suse.com>
>
> Acked-by: Stefano Stabellini <sstabellini@kernel.org>
>
Acked-by: Dario Faggioli <dfaggioli@suse.com>

Thanks and Regards
--
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)

--=-gpDumVTxRxxZLZxvWZVR
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAmA0wKEACgkQFkJ4iaW4
c+7dIw/9F6mygFJS5J8GnnflnJcMkS6PkxYdGPD5k6s732AwHnTCW/jZhciNqXIC
cSN0lC+9oNxnaLuR2ieQ7oguUhhbtcnu//QxrbYaO5qfYIbZV3xJ96F5vZBr8u7x
dGKVKxnmXJld5Jikk/dZ5m1kRSEQH1rUxl7JwsEF5y6MAzF+s5XvS2hPfBQlItM9
a5qOdf2RPXMd1P9vwSO18paaoiEfEH1G09JTVJnueBvcZmcQgcGPSt2pdYonxZxm
ZY6RszC+0Jk5PNdBIkj4INXuFMQB65w0tk88/iDsce8QD/qLHvnD72wDwbU43nGs
qEhLFQ8rD/WkT2ygLViniJM6LHylRaynNt4/qID5D8joZ9jf91eKvGJx5J7Mj/ar
2T/G46Ps8sYzjk4SYGo40GyZek7+ivAFXWAQZ6CL+wWofoKvMZj3x/o+Xi3H5ukX
KWcV663LXbeu23eTOEidMBoR/lsX7+78sAl6MJNcXiBAMDZMBRNv8/CsOV4O6e6J
L7IioP9fD8sI3qpzkjme+zBmFAXiluzyZs1AgsZZJoppGB5qEKEvwtbiuArLpjpb
NtbUw+rsJ5aLyvx+aJ6vkoh1EjltYkWSyQqsPEVPqmwjbvi6X0hvOZXsu+uY02NM
NqTc/9otskHZAlT3h8J53S4c2adRQGwu9JsgTqIFWogQs/4SGXg=
=a6nR
-----END PGP SIGNATURE-----

--=-gpDumVTxRxxZLZxvWZVR--



From xen-devel-bounces@lists.xenproject.org Tue Feb 23 08:49:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 08:49:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88625.166734 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lETO5-0000rs-7n; Tue, 23 Feb 2021 08:49:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88625.166734; Tue, 23 Feb 2021 08:49:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lETO5-0000rl-4M; Tue, 23 Feb 2021 08:49:45 +0000
Received: by outflank-mailman (input) for mailman id 88625;
 Tue, 23 Feb 2021 08:49:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=H6Tz=HZ=gmail.com=christopher.w.clark@srs-us1.protection.inumbo.net>)
 id 1lETO3-0000rg-RK
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 08:49:43 +0000
Received: from mail-ot1-x330.google.com (unknown [2607:f8b0:4864:20::330])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c2c28ac9-975f-47a6-9af6-51dca549d4af;
 Tue, 23 Feb 2021 08:49:42 +0000 (UTC)
Received: by mail-ot1-x330.google.com with SMTP id r19so7539039otk.2
 for <xen-devel@lists.xenproject.org>; Tue, 23 Feb 2021 00:49:42 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c2c28ac9-975f-47a6-9af6-51dca549d4af
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:from:date:message-id:subject:to:cc;
        bh=Q77yWRQ2CQBDEy11eUFP1uQgha0pUWsJHxX+Y3O9jfc=;
        b=SFZL7wuk81EHa8js+LYHEmcCjZCBFXp0JSINnX6BVLnKRPXdkD0w8m4jX5IccO6eqQ
         t+SV/j1wTjM8kvsgIGWBX8QETWhO3bJa/pUxOQhwgDHADr6XS4eorEIOhoPsQltlk6B8
         PxBymLLH/3o2BebyYPhulymJidSuzdtvWk4w7bvFsj4x58aUTG081bwQ585qGZvuXHvz
         54Jdh1iZMFlOBYPZwzG8PGFmIBaJFMNEU2Xqv0NJUpayGsomjqGzIQmDKaj4Pp7/17pO
         mwI1k6ew2Cv/LYxPRh9yzW/9zv6ax+Qy/8UFQNFdlDdA8SIo+lhPxXhXaKDGoYxTBzaz
         m46Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:from:date:message-id:subject:to:cc;
        bh=Q77yWRQ2CQBDEy11eUFP1uQgha0pUWsJHxX+Y3O9jfc=;
        b=F3f4bwXdft76MKIWp7rW1PXvNUuNgmvDHLWo/KJNM0lZSKlPBN3eg41u6PUr8ZClhs
         azGrh2p272qJZoU453y2nLMs44cEfMAHxkTXDZIsQMEUpiBn6b7TqxjbpbquyY68yyjI
         27q+fP/8uOYAprvrjQGbwhYdKAnoez7kIW96hOkW7PswaLC65oBiSxrT+lrR97Q4s/H0
         JCgOoOm1pMY+o2Ac3mqXJV4+KvMiNak6QMWmtFhRXXRtsRPU0z+5PluC9TtFk1YJhKqE
         o+QSnW9fTUfea34qnIe0AqniymvWSxN9P5//BkavYtrgoU7Hph/9mCKICjS1IFZICS2Z
         Zj6g==
X-Gm-Message-State: AOAM532k0vFxV0DoGPB//rx2vlx7XdDL3bdcG202E6TYybgmL4bTxzAS
	nFCmvCPVdGk1UvWcwtzvS1rHjq4n2UlwCxxVXac=
X-Google-Smtp-Source: ABdhPJwxqr4V1R8HPv3pMJzpaemL00BfgmvRgXgfkPv4vh9spctvAZcEDuWYp0XYUIYTsR28ENxHeNmzE+ZyHRCb7Gw=
X-Received: by 2002:a9d:6c6:: with SMTP id 64mr11461601otx.78.1614070181313;
 Tue, 23 Feb 2021 00:49:41 -0800 (PST)
MIME-Version: 1.0
From: Christopher Clark <christopher.w.clark@gmail.com>
Date: Tue, 23 Feb 2021 00:49:28 -0800
Message-ID: <CACMJ4GaM7JXgVNWkL0W3QmDcbWTMY8a4ZdMEcP_RDsYzkpeMkg@mail.gmail.com>
Subject: Argo HMX Transport for VirtIO meeting minutes
To: Rich Persaud <persaur@gmail.com>, xen-devel <xen-devel@lists.xenproject.org>, 
	openxt <openxt@googlegroups.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Julien Grall <jgrall@amazon.com>, Bertrand Marquis <Bertrand.Marquis@arm.com>, 
	Daniel Smith <dpsmith@apertussolutions.com>, Oleksandr Tyshchenko <olekstysh@gmail.com>, 
	Julien Grall <julien.grall.oss@gmail.com>, James McKenzie <james@bromium.com>, 
	Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <pdurrant@amazon.co.uk>, 
	Jean-Philippe Ouellet <jpo@vt.edu>, 
	=?UTF-8?Q?Marek_Marczykowski=2DG=C3=B3recki?= <marmarek@invisiblethingslab.com>, 
	Juergen Gross <jgross@suse.com>, Jason Andryuk <jandryuk@gmail.com>, 
	eric chanudet <eric.chanudet@gmail.com>, Chris Rogers <rogersc@ainfosec.com>, 
	Rich Turner <turnerr@ainfosec.com>
Content-Type: multipart/mixed; boundary="000000000000cd049805bbfcffb0"

--000000000000cd049805bbfcffb0
Content-Type: text/plain; charset="UTF-8"

Minutes from the HMX Argo-VirtIO transport topic call held on the 14th
of January, 2021.

Thanks to Rich Persaud for organizing and hosting the call, to the
call attendees for the highly productive discussion, and to Daniel
Smith for early assistance with the minutes; apologies for my delay in
completing and posting these.

The VirtIO-Argo Development Wiki page has been updated for items discussed:
    https://openxt.atlassian.net/wiki/spaces/DC/pages/1696169985/VirtIO-Argo+Development+Phase+1
and a PDF copy of the page is attached.

thanks,

Christopher

--------------------------------------------------------------------------------
## Argo: Hypervisor-agnostic Guest interface for x86

Discussed: an interface for invoking Argo via an alternative mechanism to
hypercalls. MSRs suggested.
Objective: a single interface to guests supported by multiple hypervisors,
since a cross-hypervisor solution is a stronger proposal to the VirtIO
Community.
-- was introduced in a reply on the mailing list thread prior to the call:
"Re: [openxt-dev] VirtIO-Argo initial development proposal"
https://lists.xenproject.org/archives/html/xen-devel/2020-12/msg01802.html

Summary notes on the proposal are on the VirtIO-Argo development wiki:
https://openxt.atlassian.net/wiki/spaces/DC/pages/1696169985/VirtIO-Argo+Development+Phase+1#Project:-Hypervisor-agnostic-Hypervisor-Interface

Discussion:
- hypercalls: difficult to make portable across hypervisors
- Intel has an MSR range that is always invalid: VMware, Hyper-V and
  others use it for virtualization MSRs
- concern: some hypervisors do not intercept MSRs at all
    - so nested hypervisors encounter unexpected behaviour
- perf sensitive to whichever mechanism selected
- alt options exist:
    - HP/Bromium AX uses CPUIDs
    - Microsoft Hyper-V uses EPT faults
- Arm context: hypercalls may be acceptable on Arm hypervisors
    - standard way to do it; can implement Argo in either firmware
      or hypervisor; differences in access instruction
    - on no-hypercall, PV-only hypervisors: may not work at all
  https://lists.xenproject.org/archives/html/xen-devel/2020-12/msg01843.html

Proposal: unlikely that a single mechanism will ever work for all hypervisors,
so plan instead to allow multiple mechanisms and enable the guest device driver
to probe
- a hypervisor can implement as many mechanisms as feasible for it
- guest can select among those presented as available
- preference for mechanisms close to platform architecture
- ensure scheme is forward-extensible for new mechanisms later
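The probing scheme sketched above could look something like the following. All identifiers here are invented for illustration; this is not proposed code, just a sketch of "guest walks its preference list, binds to the first mechanism the hypervisor offers":

```c
#include <stddef.h>

/* Hypothetical invocation mechanisms a hypervisor might advertise. */
enum argo_mech {
    ARGO_MECH_HYPERCALL, /* classic; acceptable on Arm hypervisors */
    ARGO_MECH_MSR,       /* always-invalid MSR range on x86 */
    ARGO_MECH_CPUID,     /* as used by HP/Bromium AX */
    ARGO_MECH_MAX,
};

/*
 * Guest-side selection: 'offered' is a bitmap of ARGO_MECH_* values
 * the hypervisor advertises; 'prefs' is the guest driver's own
 * preference order.  Returns the first mechanism both sides support,
 * or -1 if there is no common mechanism.  Forward-extensible: new
 * mechanisms just become new bits and new preference entries.
 */
int argo_select_mech(unsigned int offered,
                     const enum argo_mech *prefs, size_t nr_prefs)
{
    for ( size_t i = 0; i < nr_prefs; i++ )
        if ( offered & (1u << prefs[i]) )
            return prefs[i];
    return -1;
}
```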

--------------------------------------------------------------------------------
## Hypervisor-to-guest interrupt delivery: alternative to Xen event channels

Proposed: Argo interrupts delivered via a native mechanism, like MSI delivery,
with destination APIC ID, vector, delivery mode and trigger mode.
Ref: https://lists.xenproject.org/archives/html/xen-devel/2020-12/msg01802.html

- MSIs: OK for guests that support local APIC
- Hypervisors post-Xen learned from Xen: register a vector callback
    - sometimes hardware sets bits
    - MSI not necessary
    - likely arch-specific; could be hypervisor-agnostic on same arch
- Vector approach is right; some OSes may need help though since alloc can
  be hard
    - so an ACPI-type thing or device can assist communicating the vector to the OS
    - want: OS to register a vector, and driver => hypervisor the vector to use

Context: Xen event channel implementation is not available in all guests;
don't want to require it as a dependency for VirtIO-Argo transport.
- Want: Avoid extra muxing with Argo rings on top of event channels
- Vector-per-ring or vector-per-CPU? Vector-per-CPU is preferable.
    - aim: avoid building muxing policy into the vector allocation logic
- Scalability, interface design consideration/requirement:
    Allow expansion: one vector per CPU => multiple vectors per CPU
    - eg. different priority for different rings:
      will need different vectors to make notifications work correctly
    - to investigate: specify the vector for every ring when registered
      and allow same vector for multiple rings (fwds-compatible)

--------------------------------------------------------------------------------
## Virtual device discovery

Reference: uXen v4v storage driver: uses a bitmap retrieved via ioport
access to enumerate devices available
    - advantages:
        - simple logic in the driver
        - assists on Windows in allocating
    - negatives:
        - very x86-arch-specific; not a cross-architecture design
        - not a great interface across multiple hypervisors

Alternative proposal: allocate a range of well-known Argo port addresses

Context: planned extensions to Argo, documented in minutes from the Cambridge
F2F meeting
    - meeting minutes:
https://lists.archive.carbon60.com/xen/devel/577800#577800
    - section with notes on the VirtIO-Argo development wiki page:
https://openxt.atlassian.net/wiki/spaces/DC/pages/1696169985/VirtIO-Argo+Development+Phase+1#Project:-VirtIO-Argo-with-uXen-and-PX-hypervisors
=> concept 1: "implicit destinations" table used when dest unspecified
=> concept 2: toolstack allowed to program the table to connect VMs
              to services

Plan: a VM talks to its storage service via well-known Argo port ID
used for that purpose. Likewise for networking, other services.

- Access to services via a well-known address: consensus OK
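The "implicit destinations" concept could be sketched as a toolstack-programmed lookup table. Everything below is a hypothetical illustration: the port numbers and function are invented, while DOMID_INVALID matches Xen's public ABI constant.

```c
#include <stdint.h>
#include <stddef.h>

#define ARGO_PORT_STORAGE 0x10000u /* invented well-known port */
#define ARGO_PORT_NETWORK 0x10001u
#define DOMID_INVALID     0x7ff4u  /* as in Xen's public headers */

/* One toolstack-programmed wiring: well-known service port -> the
 * domid providing that service for this VM. */
struct implicit_dest {
    uint32_t port;
    uint16_t domid;
};

/*
 * Resolve a send with an unspecified destination: a VM talks to "its
 * storage" via the well-known port, and the table (programmed by the
 * toolstack, not the guest) decides which VM that actually is.
 * Returns DOMID_INVALID if no service is wired up for the port.
 */
uint16_t argo_implicit_dest(const struct implicit_dest *tbl, size_t n,
                            uint32_t port)
{
    for ( size_t i = 0; i < n; i++ )
        if ( tbl[i].port == port )
            return tbl[i].domid;
    return DOMID_INVALID;
}
```

This keeps routing policy out of the guest entirely, which is what enables the out-of-guest reasoning and request routing noted below.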

Discussion covered:
- communicating endpoint identifiers from source to destination,
  with effects of nesting
- interest expressed in design allowing for capability-based systems
- labels conveyed along the transport are to support the hypervisor doing
  enforcement; provided to receiver for own reasoning if meaningful there
- access to services via well-known identifiers supports out-of-guest
  reasoning and request routing

Notes added to the VirtIO-Argo development wiki page:
https://openxt.atlassian.net/wiki/spaces/DC/pages/1696169985/VirtIO-Argo+Development+Phase+1#Project:-VirtIO-Argo-transport:-Virtual-Device-Discovery

--------------------------------------------------------------------------------
## VirtIO-MMIO driver and Xen; IOREQ

A VirtIO-MMIO transport driver is under development by EPAM for Arm arch,
for an automotive production customer.

Xen needs work to forward guest memory accesses to an emulator, so: porting the
existing Xen-on-x86 feature 'IOREQ' to Arm. Work being reviewed for Xen.
https://lists.xenproject.org/archives/html/xen-devel/2021-01/msg02403.html

- working demonstration: VirtIO block device instead of Xen PV block
- no modifications to the guest OS
- approach viable for x86

Relating IOREQ to VirtIO-Argo transport: not a natural fit, due to IOREQ's
architectural use of a device emulator, shared memory mappings and event
channels.

Discussion: could Argo perform the DMA transfers between a guest and the
privileged guest doing emulation for it? Aim for system to work more like
hardware.
Response: consider a new DMA Device Model Operation (DMOP):
has permission model as per foreign map, but enable a guest VM to request bytes
fetched on its behalf. Alternative to foreign mapping - note: design needs to
align with new vIOMMU development, affects paths involved in I/O emulation.

DMOP's ABI is designed to be safe for use from userspace. Xen also has a
longstanding capability for guests to transfer data via the grant copy op.
A new op could enable some perf improvements for introspection: eg. one
hypercall vs. 4-5 hypercalls + complicated invalidation: helpful for eg.
5-byte accesses.

[ related contexts, adding here post-call:
Vates XCP-NG post on IOREQ:
"Device Emulation in the Xen Hypervisor" by Bobby Eschleman:
https://xcp-ng.org/blog/2020/06/03/device-emulation-in-the-xen-hypervisor/

Intel: External monitoring of VMs via Intel Processor Trace in Xen 4.15:
https://twitter.com/tklengyel/status/1357769674696630274?s=21 ]

--------------------------------------------------------------------------------
## Argo Linux guest-to-userspace driver interface

- Guests that use standard VirtIO drivers with the VirtIO-Argo transport
  don't need another Argo Linux driver; but:
- Host platform VMs (e.g. Dom0, driver domains, stub domains) run userspace
  software, e.g. a device-model software emulator such as QEMU, to implement
  the backend of split device drivers, and do need an interface to Argo via
  the kernel that is separate from the VirtIO-Argo transport driver.

The Argo Linux driver also has a separate function: providing non-VirtIO
guest-to-guest communication via Argo to Argo-enlightened VMs.

VSock: explored as an existing Linux interface for Argo to sit underneath;
this assists application compatibility: the socket header and syscalls are
standard, and the transport is abstracted. Hyper-V implemented a transport
protocol under the VSock address family, so Argo could follow the same
approach.

Question: how to determine the destination Argo endpoint (domid) from the
address provided by a guest initiating comms,
e.g. with an abstract scheme: "I want to talk to my storage"
    - not simple to insert into VSock; predetermined identifiers could be used
    - a guest is not expected to know its own domid (self identifier)
    - other hypervisor implementations on VSock use pre-known IDs

i.e. this raises the question: should addressing be based on knowing the
remote domid?
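As a sketch of the addressing problem: AF_VSOCK addresses are purely numeric
(CID, port) pairs, so an abstract name like "my storage" has to be mapped onto
pre-known identifiers. The port number below is a hypothetical out-of-band
agreement, purely for illustration; the CID constant is the conventional
host address.

```python
import socket

# Hypothetical well-known service port, agreed out-of-band; it stands in for
# a predetermined identifier ("my storage"), since AF_VSOCK addresses carry
# no names, only a (CID, port) pair.
STORAGE_PORT = 5000

def storage_endpoint(peer_cid):
    """Build an AF_VSOCK-style (CID, port) address for the storage service."""
    return (peer_cid, STORAGE_PORT)

# CID 2 conventionally addresses the host/hypervisor side; Python exposes it
# as socket.VMADDR_CID_HOST on platforms with AF_VSOCK support (Linux).
host_cid = getattr(socket, "VMADDR_CID_HOST", 2)
addr = storage_endpoint(host_cid)
print(addr)  # (2, 5000)
```

Note that a guest never needs its own CID to build this address, which
matches the constraint above that a guest is not expected to know its own
domid.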

VSock is likely not the interface to use for communicating from userspace to
the domain kernel in support of the VirtIO-Argo transport backend.

Forward direction: the Argo Linux driver is to be built modularly, similar to
the uXen v4v driver, with a library core (a misc driver with ring and
interrupt-handling logic, etc.) plus separate drivers that export different
interfaces to userspace for access.

### Available Linux Argo/v4v device drivers:
- uXen source code is available, including an implementation of the v4v Linux
  driver:
https://github.com/uxen-virt/uxen/tree/ascara/vm-support/linux
- current OpenXT Argo Linux driver and exploration of a VSock Argo driver:
https://github.com/OpenXT/linux-xen-argo

#### Projects on the VirtIO-Argo development wiki page:
* Project: unification of the v4v and Argo interfaces
https://openxt.atlassian.net/wiki/spaces/DC/pages/1696169985/VirtIO-Argo+Development+Phase+1#Project:-unification-of-the-v4v-and-Argo-interfaces
* Project: Port the v4v Windows device driver to Argo
https://openxt.atlassian.net/wiki/spaces/DC/pages/1696169985/VirtIO-Argo+Development+Phase+1#Project:-Port-the-v4v-Windows-device-driver-to-Argo
* Comparison of VM/guest Argo interface options
https://openxt.atlassian.net/wiki/spaces/DC/pages/1696169985/VirtIO-Argo+Development+Phase+1#Comparison-of-VM/guest-Argo-interface-options

--------------------------------------------------------------------------------
## Reference: "VirtIO-Argo initial development" thread:
  https://groups.google.com/g/openxt/c/yKR5JFOSmTc?pli=1

dwLPkXUpCz4I9qjbNShKeWSnZqxK1hTF2F1wkg+adE8K9Um1y7JpRccH2dRX7LSXxZ6dZFWxTrwf
ZSeUsBtevINXWdkcuFQq2POjgFd2sgdtCcSOwuMK8mTL0NI8Vr8CtKH5+uuqlIbGvD+Io6ia9gCG
BQBFc4CnEhDA0RvBQJRt0x2APslhz34Dg/Ea4W14LwtQ7sCvj2C6kVfyDyU0e/UELD3xFDWsNzU+
XzHQy155A2iMs+0IelnGmoXGl161ov7tnm1GWZVXjPesbhj9dt23opBbgLQd6wJxAKDhzPq9su9G
GP2Xy+dBK6Td6V70AyvA0HCSEForR+BcohehJX+7e8HappLFmRVNPXRNxRp0XeUOqMyxRpcDPAsm
9LHm+sa7XrVKlXCagONGOO2WVRwk26NXoBVg9ecnL3690nYBGKLmmwpdB+yzGwF5ryCBgylPsqIs
KyDKjCu9FCdw0mJE63EDCZTpBC9F9UYsBSoKCCOA9rp70B04z74pGfgXAt3K3Uhhw5qt1a3S4xbs
duJVdcV6TBZ8YP0Z4uRgF8gEEsxUgGNN+C6AFCTGuX6YyTk5Tj/KATWrgPbj5iD7XmtCoQQ1PCZD
D+cryDEozFvthGSdzwqzFNMIDpp67Yny0FbCotNb1ficqvHZ83M/ZNgjrdMI/o38AEo3zNOafY7l
BdiH7L6Dt6M0xssAxSD7CoNfnllAMVbnJINmWgH6qWveimK4hcA4kWD/kPX4genUjU449qLrW14I
hOvjwQGcaUQOGRTP1BWZToAG1SSmfpC1krd3BV7P7gu+7MYJ9EJJrFg9B9tIXt3qZLEYAQm0Ijlt
vRs3PWQOLHvK1Fruv+hA2DZV1Zwg6pr6xLuyN15ByngnulpUECJwbIlROLbgEzOPYS3vBvDd9hPD
5hcY9gtKjKEX80mLv3BKhsJhHfuXqBBvcDLoOiLF51E9s7I4ykIYY0tMjwXW2e6sokRVe6pt3VhT
/oGf2ooPWE10sb+mmtlsB1CdoMC2ft6rHbq0mPN03eyvFoIHoUOTFBB0kzSQofVJ1nYNQv7ssboZ
0ades+HcQhavqjN7Lw4j/IBpRtbH5h1VAW1n1WJA/ebFQIFPAY/9Dv6sNbIMHSbzJFTQZxVkSiad
ODSDmKlx6Hjdt003WKPU6lVVCbQxvOWj8ZIotBkamqIOa7iuu191gGaAim5zZdGUYoMdEkq/HTtV
9p22AFsjaHQ46XvEkCDGx/ioufYMvWIvd/sLVIZ3Q4T7/UiVHzNQr+zaXRHh5nHUlG6wlFTUqEFP
h0LIg6y4NtoWfQWrfny8QH1BaOIF1TdWgzxgp1BOqWyKj5kGp3Kj9NH2AyjnMEsjV9D3DEaHiJ3b
zSCZoMyDtXEAaSEXgQ00y2XcfmSCpR27fsQuQmlCQHZCAcR2i66lk5ukdOra03PTS/DtmSuCxjTM
qMJU3M2HcXhh0orSwMuDP+vD5CPzFFHJTYcpbZ6iRmwHJXgweH0pcDP0MF1zUJubTu7A0hUm+ceV
xIPRo+nARTtEUPyiQCctOtNj9xgoeBj0FTAM9GNvYluLVNhaQrlsUM1qrytUJyDtFqaB56pez7sJ
Ev3r4BIc1XMXIHagwx5guTaKyPJBmIMT4GCawbDpe1kWJsnXrAYmiyKXeyn7YqQ+CzKLcBuAOL7U
EeIQ7wGQnRqvNgKyDI44XakKtqvDk3wnl+M7gkk6mFxLpa39GWako+wbm9+A+ebMfvyJ3bDvwS/k
eHDrC2eHpkOzFMY5FAcsL8oOatKesrkalWaeBcYn0JfYDUEHMLQSaCMwu2vGrjBdCpWIfmzVgcbx
gPuRQx7EjKmL59ePiqA3hCTiHEcAJY56f8gW0jmML0fkdrvkbUni+TPk3szNkm9zszCL0HwO23bc
wOBAhtxBfIwbjJeml0PTnRc0myrvcphtyNgexOMNeEHdKyW7uNNLPdePnRRG0c/RZWRL5UtNwLqt
Oph08LlhvOJn0d3OlJd9o/LWAQrqQNsPQ9vf3tyA4rxzUwxNS72x13S7mwJ+VX958NfNZ6DdQIUT
NyRTfy0+DOraYdGbwzwxOeJm/CDqv+6/K8d6K6rKlfGiGQG5gc9mHwW0Wxx624ug1jFYp0KN0UI3
Aqqm6MZe1U3MsdtmhDJsS757JaKHI5xzzVY6lHh4yzEdppmbiPgG+gWuJs4eGluBt3QEbSpmm7Ny
EoT2cY+l2ySTaxqCNnyheOqm439M7Dx86QzdpWr/llEnmYkVmxDVMf+hdue/WOJgkqCLBsiKcAg0
7wXA2oEp1DWac5Nj/H6ScSrUTbOczcNo7TYKpqvpi0ZHmdGXtrTT7PKug6mvm7Kz+CB71VDONDVv
MKjH1HkW3WQnahgWqj83HYRhbgJgqhJvHqAx1A0Qgvjh+1/vQFPDCSseMe7fPLxVSLcNFFYrnhFa
cTEKxjnG2uekegp3wsNLSV0slqsCIoZpWgfZfJ6xhjRocKCRzfWnylPhgrdXF1gVvzTQaS+bQTij
H4bn5Ppw+rVq5XDtzYN5y/3mIXt91xTv2KOnt9hQubkl/7b8iYkzSF145rrXZIweDhTDdVFxGEvH
Hg356Onvr+9ePf4704MSL0uI7v56y2FEObuwkotmepjng5wFeTCVaBwq5DCNcCT3x+OkakM/USHO
Sy6G4Bvbv7ULSZbgdnIrxcKFfOSHXuxuhHlIbrdCJSxndsMQJaFoJITQg8ysv0lMCibNzsS5aOLI
Mmxdg7UdOlBV+H3D1AnONhVatlNuzuuzHe/QwnqyUvlW9mSOAz+rjUUh2mlktdHN2U/NY5OhP38H
4ErxrT352hVKO5wsXc6XNuJBZvt60yKw5ySw8aQTdjnogJBc6NuNuld8+vujX569uprSEclPkcIo
UmjM5cdGgnWbA9YHUK/U1VC3+R979WIaCVI7PADmoSmaql9OPjCc2Lqspi9ZH3H238EQXVL+dbv/
H7Hzf/3ROIaWJbhvHrx+8fj5m4emdNBlI2WClg/g4faufJpCpi8hxGQZczTNB59cIs2RmYyFVlDj
5BEt8TtMVMJgRN/lCJO8fboeJjQfMVSXQPSp68BrvhN0NySG4oImKAhtKwqOBGFeVkKPG/biQsuh
ikAnbL4nMIjbY1OVWMg3aYyXqs6pUFoFpU01X3J90Svq4ox+tzDjRBCTscvOqNDdF33ah+ehF7rb
Hr96+fT5M0pjT+7vvr47xvuV4DPbXz5ZuBZHhcOsk9LlzIG/A9FJjS0osMYuAhLE2Kt8hR9Q1Cca
fb/KdXTX0JM03Tuw97X+XLUcU7n6P3f6DppX6PXgPEd94SV7e+dqr0gJVoP16agv+RAvGKgwV5Ju
xaVW0+brWfdD72nY857kAg9dp16soU88dMLDXsu0UXoYUElW8s5R07y1xwJmO03nJWh3q9758PhR
6+exO1mr9vhwQbsGI9GaQM9MiHdKbuunDkLgdDvpGNp8zNN3Ts3o2p1dMykWVRcm6KUTEtiFX4zL
UtLXjPndLBbKqXR/wfMjGJnwv41anpPjf2fuGfF/yv68+j/FhrkLCmVuZHN0cmVhbQplbmRvYmoK
MTYgMCBvYmoKPDwvQ29udGVudHMgMjQzIDAgUi9UeXBlL1BhZ2UvUmVzb3VyY2VzPDwvUHJvY1Nl
dCBbL1BERiAvVGV4dCAvSW1hZ2VCIC9JbWFnZUMgL0ltYWdlSV0vRXh0R1N0YXRlPDwvR1MxNDAg
MjE4IDAgUi9HUzE1MSAyMjkgMCBSL0dTMTQxIDIxOSAwIFIvR1MxNTIgMjMwIDAgUi9HUzE1MCAy
MjggMCBSL0dTMTQ0IDIyMiAwIFIvR1MxMzQgMjEyIDAgUi9HUzE0NSAyMjMgMCBSL0dTMTQyIDIy
MCAwIFIvR1MxNTMgMjMxIDAgUi9HUzE0MyAyMjEgMCBSL0dTMTU0IDIzMiAwIFIvR1MxMzcgMjE1
IDAgUi9HUzE0OCAyMjYgMCBSL0dTMTM4IDIxNiAwIFIvR1MxNDkgMjI3IDAgUi9HUzEzNSAyMTMg
MCBSL0dTMTQ2IDIyNCAwIFIvR1MxMzYgMjE0IDAgUi9HUzE0NyAyMjUgMCBSL0dTMTM5IDIxNyAw
IFI+Pi9Gb250PDwvRjEgNSAwIFIvRjIgMTk2IDAgUi9GMyAyMzMgMCBSPj4+Pi9Bbm5vdHNbMjM0
IDAgUiAyMzUgMCBSIDIzNiAwIFIgMjM3IDAgUiAyMzggMCBSIDIzOSAwIFIgMjQwIDAgUiAyNDEg
MCBSIDI0MiAwIFJdL1BhcmVudCAxNzIgMCBSL01lZGlhQm94WzAgMCA2MTIgNzkyXT4+CmVuZG9i
agoyNTUgMCBvYmoKPDwvRmlsdGVyL0ZsYXRlRGVjb2RlL0xlbmd0aCAzNTU5Pj5zdHJlYW0KeJyd
Wsuy2zYS3esrsEyqrhm+KXpWTuIkTk0etu8kqRrPAqIgibFEynzo+v7GfPH0EyTt2NKkvDCPwG4c
NLob3eB9t/r6fhWaoozN/Xb1/H71cvVuFQZhUmTmYRWbH2Hwz1UUmp9W//5PaLardG2KNDWnVZan
9HSkp3QdhPgMw7NHHj+sfl81q6++fx1lmdn3q8jgv24/6ZpJzZVNUxxWO1aQo4LQ4D9QUETBOjN5
mgUlaJlQmBkAUQ4gDZLcFGGQRAqqVV4GqR/L10GxVqkJlPieoDwAa6hQEQAjVUigWulkPCY0SGrO
sKJlRCH/lORBAqQ9zII0M4hQU5IGRQ6Ip2FUrUqmwLBkciynAHXCe4KKIIIhESqCdTppJFSt/Hwy
KlxIcEF0yT1OgmLGPY7xP9UFKJq4M/LcGQpBllOAOj33OA0yzz3OgnjizmjirqPChQQXRJm77EQE
QpOvRGEQe18JyyDzvsLA+wpDcQiWUoAKva8AKr2vRHGQe19h4H1FxjypMjNzhkt7w3/ZzN5giNzb
OwMjld7egtTeAsWoLKcAdXp7h3GQeHuHCcaf2pvRZG8dFS4kuCAq3BP6KSsjCs4ZhDhDRNxDDMEo
Ee4hx2cUC3kejYmiSk6opFcFxhhtXjDBQPRqE45SP6mMCiGWnLNdLqAoOVAVrilQRVdRUKDKTIym
BciocBRJRSWFlcJ1SDEngoDW0wIYTQvQUSYkknO2ywXkOUerwoyiVXQBiqYFMJoWIKPCUSQV5RRb
HhYUeCq4pqBUtWsJWZ1URoUQS87ZLheQpRSzHiYYfqorizEydSZG0wJkVDiKpKKUAm0Gy2kBWYbx
6dVmHLx+Uhn1/MrMLNgu4jdLF/GbpYv4TRbxmyzjN5nFr8gpmMdvls7iN0vn8SvIx68fFS7z+FWl
c+5gnXLGHWA4cY8LilCZiZHnzlAIstwEyok7WLzw3MHg6cSd0cRdR4ULCS6IznN9FrLjKyLv5fSL
INJcL0BzvUBO6CKlgH1TEfmtCpFLq0Lxd52Mx4QGSc0ZLuydlgU7u8KcXJbXn5YZuTNbR5DaWyAb
VeQUFOSSEyrV3in8lqeTxrX4uc4no55amZkF0bm90wj2eCrE0qiQZaOmKKdl8zQM1OAC2aoipYA0
qsURlqlRqTLIE6+RgFpcxzytMjULjnPaCdhtPdFGGCrtpEwxPHgSAUpbIFMTqQmsJ9oJ7EWhtBMw
Wxp7jQSUto4JD5JacFzQLpIgn9EuYjwURBGAyNNm4GkzFGospYA0etoFlFKedgFVlqfNwNOWMeFB
UguOC9pZFKQz2lkodRAqSkuqg3gSBp42Q6HGUgpIo6cNKT/xtCHjh542A09bxoQHSS04zmnHGUXC
yUM42UuhjSCSOfhZSTNiXiKiQAJNYYkTilAOB3qh+hgoZx0TEiS1ILjgDC1IOOMMTUjkOUPzo5bm
Z8+ZkPBiEQU5HyQTXHvO0Ppka6+PgOcsY57TujQLggvOYRqsk4kzFJvlWjkDiEudhIFnzVCosZQC
0uhpQ0LPC6UN+de3hwI8bRkTHiS14DinHUEVCSFw8jAKEqUdQRmbK20BSlsgUxMpBTHHmEJwVqWN
oMi9xoSLTp2Nx4QHSS04Mm3o+r/6Dpw2KEtzD4cPddSQR4E09nvw42n1xbNu35rd2FRD3Tb2WA+P
prKNGQ6uMRtnzl17qbduay61NdZUB9uZrbvUlQvM/cGZ+nQ+upNrBovyZtd2KNo7cxqPQw1jpm4G
1+1s5XoztGbsXdefAZn+0I7HremHbqyGsXMoZ768/xMvKD7DPM4xwpB51W5BSwtydjBVezrh/POF
3Jl+rA7G9qarm7052cbuias5tvu6ujN1DyxsB6vbuOHBOVq26d3ZdnZwZtvVF2DLE4gh6A2/iCfX
+cJRFzFfW4EN+nqzsAlbcfwD5r6kF53yzk/Xj/VgUaRzO9e5Bs3IRkbyarvgKo8cTvmMedy3uLF1
c3H9UO9hodun5uHgYGG0d+bFL6+ev5ztS00bByJ2257hbd1k81vdDS9+eUIetLHVW+Oa7Z2xzXam
Dix3crYRI067f53wek0/AmGwBVpOVs7eB4Y5n9tuMNvWNO1gGgfEgObbpn0wdtOOg3nrusYd2daW
vMI81MPBIN/A/CwCYO5698j0fnr2DahvansUvhaX32zrCq1kNo830MbeTPb7DMTPXY2u5LoOqKPH
3plNCyRwJY17IGu59zXsBHho1TaNI6bgATj3AYaPMHPVdh0MHB+BzWTEG3YdjhKJFnAzUGgpzjrn
UOXHfmfanTm1/QDednQXC9729PNz0NVZrB4+vncNaDvWm89LRSHeXMXqkOhLEr8g2tnuEXlMtHxU
9zeohapQ3MbtAw78zu3BwB1lqDv2h248D2xdeAHzRH8G56Xd6Fw/nijxUbg2++sWgHJf9hxIU4p8
SgGCj+B5zjutj3vyvPRyfT1ZWegO8jJwl+xHmu/YX302hlCpIWm0Z8fL7q/sY5Tg9cRaN/Khqwd3
gwQUhbKH5/Z4vEEgj3V3dsexP9wgAVWg2HbXPzbVDRZLvMUOFm012P6teTe60VHUTRsL2wFuhw9b
O1ixIGTV6uC2I4YdpLHGoQNa8sYXr16atqrG7gY/zOJMjTmlL4hhBw69hZTY93AWwS517Qm8jnIS
Oyu4SAvzyRvCG/NWzaeThAS8pBmwd/BSM0DWmq/tarIilpHfQJ9Vrrp7FvpNBKtc+hYyP/s7P///
Tp5CCSObDLnc8U6gwhMcI+0Ws4E6Py7tr6Yi68DmzbIGaXmoj0djdzvIniSHh/AZTtSaKh1QTFIi
Qba/ge46UQ8bz5BYnD2Zf9bN+B5LjRpqkRGydYPZuh8go/RyJiITPaY2eKD37fFCtceu7WTRZwtn
g1RGWH2Br41uezXPp7n3tmfzHDrAEU+nt3vvuqruuXzBc0cTPrGCmdBByYfu8PW9BXvCabSDEqRR
67q604OogddpUVyesDExuMCL8OVfIJv+cU9vXOeeeR90x9494DmlJxt9DirocxB09NDIQWeUr6Hr
h6QQRmWO34dIc2zyIM4XmkvocFL8jJKS7m/rvhohNOCQQR+T0Ltvz3VlvrFHWPePthnRbnEYR/P5
14vPUZ+OizRKsbTHyX7H6nmkOPztNToqJaJ5QUJOixYjtG1PaHE4o5oeY/oO3Qj8kisbDGnb1ehP
lOBxh0727JOH3W473BDcrKHe1Zwe7BXTI2NocopCDpcTRh4TeWpaKt8Oj3B+XOoeaS7qfCgUGlkZ
7vi5c0+w8GpmFK7kSLoyibHPnW2OmypHa7A0EbuAJdz7M/idDx4qibXcq8HHcXJ9G+rZ/VhvsXwx
D2D5Pbh6I+mdzAoR20OPIDpsTwf+db5rqFiYL9cJ78Ya68PhSh2GotDTlSzKVqOwO9ZvHWwqbvKG
I3OROtGyO6oaT6exwRoU/Yk2faqj31CZ8/L5T/8y8Cpvm4Peyw5t9+ZL9gOxi3e/G5YKdaO4sp4y
lCqXNb93V6r+oX76hzm0D3hgXq8ak7RQ1/uu7R5stzXbuuPilw8TmoLTqqRn6NU4WW3G+gh9RQun
tO2O2OfVpxoecbnDx+3UHdf91mdGqn3ffAGnLPidaqd36Hy5Tj5Zq+eiJ3xcUWqD6YYKNuEMpc4n
ukpwa2pi6h11d8Mnu2V0BG4gzRNTQ/PNfoStOhikbdwNyRaZx2vvwxOlhdttqL/lLk82Xw9dPL2o
R5h4gXOeYXaqOEFg5h6LHF7ekMMjUwZRhIRjc9+t8Ltgin+ggAry8DMKXn2/zPpxWeI9ES7y1679
E5zqqYEA2lEIQeISV0YHQaORo012R9ohEohgbjXmJ86XGG9FeaZvsZtuaILZwvPoxsMjxu9LMWl6
0dR4ED+VU/Tqnsb4eYdFX48bqQafmh9+/epryBX1eOJ4wKXisvFZzHLnf+Q44wwxpx///bM3jqMg
ZVd7ZflOxs1VJ7daBj9TRKTnOeds6RX3I5jc9O1ueMC2Vq9wpkOrl8BHHwUGs5rPcb+teUVVgFJn
qwMErB3Am09XTR9By5R9xE2SNWizVddCxDYyH/TUUOJgBu4foZg99czPX5XNqEOU944PhR+eQSXV
VQdoyehC5mpFFeFpE023ezAfbDpvNjo93iGgFxzsxWEvyeZsu3pfw1sbqLSbhlsRmp8HoLu4xBd1
GLkp8+7G2RaXBy0n5w6Y4ZtjDWM8MVgAkvcN3OnCk0uSAY8t67P7dN3FMYtn+3JOaOgg16LlqZT/
wN1n9gVO24N0E/yeHOfcPFOGkFtF2E28Sryplo2gWy2YPVr4v9yPfHBHirQxYUsa6tuxo0S6XVzy
cbnSoBs6jlFZKl1gguyHOTZP/36s4gfxdc4Z82gXCSy7JUxx6fjXM5K/+PKDuyDoYTZHqLExmZnX
vNYPi0k4isnm30Kpe9pAREIJHr758o70bGwP4ucOnGCAcO3VLXFjyQ1oVyssDaZzTPMa+fnVbcM/
oNF+9q/VQXJQL3RNO+4Pcj+oBQpf70g+qah9sxu8c5K2Uw943Fnkpte7VpYnMbg4iK/Tht1d9u53
/j6cPMtCWctZ0p7PR01KPt0xz4e2e4uZz0pS8je1UHTXVE+iVSC4+N50cV098cWrlhOWVVdZryFE
iPV0OYVT8mZKrmhP8wvqRTxjVVzZXiMEupb6Qrez2tlOaj+8i2573kkxhpT/uBZY3Q03+QFHtjS3
lTbpYEsohIGCRiP+eeXL1f8AD1DQcwplbmRzdHJlYW0KZW5kb2JqCjI1IDAgb2JqCjw8L0NvbnRl
bnRzIDI1NSAwIFIvVHlwZS9QYWdlL1Jlc291cmNlczw8L1Byb2NTZXQgWy9QREYgL1RleHQgL0lt
YWdlQiAvSW1hZ2VDIC9JbWFnZUldL0V4dEdTdGF0ZTw8L0dTMTYyIDI1MSAwIFIvR1MxNjMgMjUy
IDAgUi9HUzE2MCAyNDkgMCBSL0dTMTYxIDI1MCAwIFIvR1MxNTUgMjQ0IDAgUi9HUzE1NiAyNDUg
MCBSL0dTMTY0IDI1MyAwIFIvR1MxNjUgMjU0IDAgUi9HUzE1OSAyNDggMCBSL0dTMTU3IDI0NiAw
IFIvR1MxNTggMjQ3IDAgUj4+L0ZvbnQ8PC9GMSA1IDAgUi9GMiAxOTYgMCBSPj4+Pi9QYXJlbnQg
MTcyIDAgUi9NZWRpYUJveFswIDAgNjEyIDc5Ml0+PgplbmRvYmoKMjcyIDAgb2JqCjw8L0JTPDwv
Uy9TL1cgMD4+L0E8PC9TL1VSSS9VUkkoaHR0cHM6Ly9saXN0cy5hcmNoaXZlLmNhcmJvbjYwLmNv
bS94ZW4vZGV2ZWwvNTc3ODAwKT4+L1N1YnR5cGUvTGluay9DWzAgMCAxXS9Cb3JkZXJbMCAwIDBd
L1JlY3RbNDggNDE4LjA5IDIyOC42OCA0MjcuMzVdPj4KZW5kb2JqCjI3MyAwIG9iago8PC9GaWx0
ZXIvRmxhdGVEZWNvZGUvTGVuZ3RoIDMwMDc+PnN0cmVhbQp4nKVaa4/bNhb97l9BzH7YFphR9X4M
igJp0scWSJu2s22B7X6QJdpWI0uOKNmdv7G76O/dc3lJPSYT291Fgli0yMvD+zy8zrvV5w8rVySZ
Lx7K1RcPq+9X71au4wZJJE4rX3yDl7+tPFe8Xv3jn64oV2EqkjAU+1UUh/qp1k9h6rj0jNezR36/
W/28alaffPWjF8diq1aeoD/ddpI1WzUXNm2xW21YQEICXEF/ICDxnDQSUeo5fgQxdug6WOc5Xiyi
JHMSTySuE3h2UKzizAnHd3HqJKlZZJ+1PMwzQ9/BNF6D5ySz8nhQrMxe5p0BQYsW+Ap9CvMVRMcz
yHHipBYynr0RMg9GyDw0uPQi+6zljZAxPbKQIckbIfNghGzeGRC0aIFvDjlIIycNR8hBGjpZZjDT
IDBb8LNFzCNGZZbYgRZnEQdp7MSZsIsSJ0hHeXpgIdt3BoRetQC4wBziI5swY+ha1whCiI/sJjwY
UfPQQONV0yDNJtghbB5a2GHghP4oUQ9G2OadwaFXLTDOYXsJJk+q9hLfiayqPXxYa/KzBc0jxmWW
2IEWZzF7Sej4VtU0yJJRnh5YzPadAaFXLQAuMMfu3D28KJvcgwbWPfh5xKxHBhcvMQMWN2KOvck9
vNif3MMMRszmncHkTe4xSpwwQ3bqjZARTllsEOPZt1rhZ4uYR4yKV5hnLcvixVzkOl6RzPwsMW7G
u5g3vLteMcc1RxqGzgQUruRHBig8KYqNcH62QHnEcHiFeSZRFicGmU0UYeRQvmVR+tniNG8skiwS
M1QME1Xkky9xPrLgwwYpnjI05MAUPgIWX+5XHz3sZCdFpUTeiKrpZdNXbZPXQjabtitkKQ5tXRWP
oq72VZ/TS3Gq+l3ViOEX2Yh+l/eiaJu+y6tGiWN4xGi/H5qq4Mlr2Z8kJh6k7ETZ7mnaPZZJMSgp
ihz/YCP+ggR+/PAblbwz2L2EUjph3z0eZHesFNZXjRbx9ZtPPu/afTXsxaFry6HosadUoml70cl3
Q4Wz5nXdnqpmq7HfCtWKqicFlJXK17UsHfGi27Y4ZV1DrOplXuIUYtseZddAIetHvdUvP77miRsI
PUGocwV2N6XcTth/bPfyiaqMpqHNTbUdOv5yL4td3lRqz4AApJGyBAzSGpug5SOJXOxbnK+u1rKD
BckWRiTNfd4sFyHHWUpZiyAb693Sjms54pxUouFsB6l6sc+brdzDm6DfTX/KO+lc3irRyYa2+qol
A22GDmLJOeCcsOexKunrXJ8Fh6AT5k3JZiDn7TZ5IQkefKtTBxrY7W/1zGp/qDUslqPwUUsrbuFO
VtgVqF0bSzoYqqZou0ML88HvSCuboSlI50q0G7Fu+90tiW9hrLLaVNAeaY3AafVNIBQ8Vh0k1h5l
/XgrTu1Ql0YN8GI4ZIthnfew7l6buJSY2R602rHXZehIvyaUGhgNUPS52c5PHAZ/oa9HzNqLbmga
UiAdZoHeET8jN+hT/1R1/d++u9OmAa4Kqiw7nKQj/yE9qZ0+D7suoR/FX8YdeTaMVN8hyOGERuhw
gOp7JXat0jY+AsaAYDAQRvvnbJF5KtOOq/igNGY01m01000103XczPUFWFqcxgE+A9dDyQH1NYgz
4oNADPreYXYQh8TbtYDsjIAfvprOqKmiT9yBDvmma3+DH9wvlErQGThp/80vc88hzC7t7mFjq0pf
xI4fL1RJ20RZYrZ5JUlnWjOzQyfugt5/wCpEcJPEiUItaZaGCeF5g2pujAoWmhiiDHdaHoi8njQw
h+VdYYsPHRlF0jWqrfPFYf1rDgsXjPzAwWRTQReGQSlsFHkhaigVFDVWO8QkVwybYZ6NNJuaaRUn
U572L/XEm5W2PE2Tv0MYVe2LoRN54AeBxj2mDpsiOYRMgmHUz4bN5UweZpETsXZ0Qsif5jbeahSr
tF5u30sckyo5d9h4nVRT6CTC1aiBhyCHIbWOxcjgR6qVNe95GTuuLBljH7PJuJ+ufTYF07dn8wuY
RK0NY1N4NQN8qIf9muEa7LqSN09k2CPnoDBUCy/jT2InYPyzevzT61vOGGoAjYNXmvJTmEQ4V1rf
gjU02yvMHIGKsTN9NysPJzb5IoL12bmC7fIj667V2oXj94/66IdD/Whk3J1Iwa/h33nfdo/iRQF3
V+IlEc22Zp0tqFPe79QVuglT6/02yqx6dHUe0+nTSjwQUzBOetpVxc6chUkhFbH3YpOXfNCjr9Cu
n9kgohSzr5qBKMUGiZWonpS6wME1Xub7dVeVW0mDV7KQ8KtO+MiE4kTk/jCs60rt2Nd+l82d5gn3
87QXcDL1/Cz0KIt6SRzhM0n8KDqfCEMvs+HybTviW6IgbX7pf7nAOt89vDLphp5rXRsaUXTeoh40
GVJScyzx72/zPanltex3oEeonIdWwSp/8e/FFyZL1o93U7bgALfcsCrgpuVUB9V/OGvgq6IDr9b2
zMuyYk60Ef2p1eXqsjVd1xYMbF7IQ6+Djp2itcRFYOtcW4sO8Xxd0EtkUx7AkXt1f7G4BtT2Yqd/
yTsL754OcQG072ZOEMyXw/XJn/YIxXwLUxc5oYLuwTd//ehY5Vo9GJZHBPevH3MmaBuENSwEKst8
9/y2ES7GQTjf9naxInxvhReEzmLBzbOmvJlLCd63EZKZG83FfKraAVdgVjhZ57PLuk5044hW1zI/
khGHRsmeqTJp54LOwU/CbC5Gu+b5RQERL2++yBGUMXjnmQrEMa8Hog261sOVNrhSatZPN7jz6vGi
lJQ87fHpXDAYSlV+dl6CT22a+IMi/oyWqYcUjXRxRitwDDwTq9EXPG5paGZUt+3b4cDZ6dePLoRr
4BKTnW0DX0aQtuue6FoOvf0OtS3FPH/gKJ6LsR7F6roV5z3sGR/NYiq4c4kl80Mt7vypPPgWFB/g
8oQkb05ley8mpufe4pzHQnaYRGkcf8aEASpxskhKyNB8GSQ8xEGQCou3utZ3ErdpHQn7vCGYmvSC
C1TwZlikohvf6AV/VeIDGUBo6kPRUakFO5+2u4zcT52IkRMXZG8TN2Agxe6uIJE3d3X1dmSkplBw
nVmwDr5+PgHPfQqTWcu2mZF7w9HAVS6ThwDJO0smBq7zD9Urql9rsBZSG24ABTVGIFLvqHMCpe8p
xfOtWqfuR87kJKGoK8r5dKbb8RIyC0PdqkL1ApHtpblITJF+BXiXGsYEHshI/nw576qtx10czFkQ
Qqn1enEXPyOCo3fB2W1BxVZQN9LH1NYaneMZxzHXupOs67u3TXtq5roRDVF80Mc/RAWvy7edJKnG
B8zRyPx1W2j7XueEhD31bPhoC3FYKCLZf+iYVmQKoqFy64ibPXg1eDSm3Fx2HT/2rYO/YNMD2817
2f7G9lyfCzUONMwotUEQkMi9xn8bOPTMWW5uDeBGN/dGzxsOuJOWijhHIxotentFi9WPfOv4D+3S
B42QZ64OFAMz96UOq9IcW5NI2LcDn62OC3y6kpDxDiinMDhnp8PBsP49HZdeX0ZMhIcRc1q4Fy8o
r9y9fPN3s7EU62GzofvvkoDZljBut3Q95gZeO29ZJdH/0bJaNE18b2qaPNePGm82/DVdhV7xpfhV
pQq6Jz3+mbaUl7l2u1lfSs2PFl/ZmPLizFLvc82Fi5nfA0P0WI69J6phreS7AfZHZuQLOsm+4xRR
Pu3JWeTJ/9678oKYfuwiED/k/FvN4i6VXnmX8rzIybh2f5HjPsuamStl3rSl30eoGw0LVBTYutts
zMpMi3RBHXN76X16HdZpil6MbaddrhAwdHs75lWt84XJEBdDxnNj+kVsDBkk5hZZSSjTQ3qvbzRF
dM59eM0Lu3ab9xbwiAsg7NFmPTC1iKpzfdwLBkwj+mFP+zV2GZSiJNRMV+eH9lAVuCfX9a34Jm+G
vHvEVcz3Ztun17Zk48jxTCLv6AeZsQNpbr2dNL/lwLbwUzgsGbKT20r1Op3ARLPShjsjJipCSv16
7slAAPdQbMkxQOk/rXy/+i9GDLu8CmVuZHN0cmVhbQplbmRvYmoKMzYgMCBvYmoKPDwvQ29udGVu
dHMgMjczIDAgUi9UeXBlL1BhZ2UvUmVzb3VyY2VzPDwvUHJvY1NldCBbL1BERiAvVGV4dCAvSW1h
Z2VCIC9JbWFnZUMgL0ltYWdlSV0vRXh0R1N0YXRlPDwvR1MxODAgMjcxIDAgUi9HUzE3MCAyNjAg
MCBSL0dTMTczIDI2MyAwIFIvR1MxNzQgMjY0IDAgUi9HUzE3MSAyNjEgMCBSL0dTMTcyIDI2MiAw
IFIvR1MxNjYgMjU2IDAgUi9HUzE3NyAyNjggMCBSL0dTMTY3IDI1NyAwIFIvR1MxNzggMjY5IDAg
Ui9HUzE3NSAyNjYgMCBSL0dTMTc2IDI2NyAwIFIvR1MxNjggMjU4IDAgUi9HUzE3OSAyNzAgMCBS
L0dTMTY5IDI1OSAwIFI+Pi9Gb250PDwvRjEgNSAwIFIvRjIgMTk2IDAgUi9GMyAyMzMgMCBSL0Y0
IDI2NSAwIFI+Pj4+L0Fubm90c1syNzIgMCBSXS9QYXJlbnQgMTcyIDAgUi9NZWRpYUJveFswIDAg
NjEyIDc5Ml0+PgplbmRvYmoKMjk2IDAgb2JqCjw8L0JTPDwvUy9TL1cgMD4+L0E8PC9TL1VSSS9V
UkkoaHR0cHM6Ly9saXN0cy5hcmNoaXZlLmNhcmJvbjYwLmNvbS94ZW4vZGV2ZWwvNTkyMzUxIzU5
MjM1MSk+Pi9TdWJ0eXBlL0xpbmsvQ1swIDAgMV0vQm9yZGVyWzAgMCAwXS9SZWN0WzQ4IDQ0NS40
NiAyNjIuOTEgNDU0LjczXT4+CmVuZG9iagoyOTcgMCBvYmoKPDwvQlM8PC9TL1MvVyAwPj4vQTw8
L1MvVVJJL1VSSShodHRwczovL2xpc3RzLnhlbnByb2plY3Qub3JnL2FyY2hpdmVzL2h0bWwveGVu
LWRldmVsLzIwMjAtMTEvbXNnMDIxNTkuaHRtbCk+Pi9TdWJ0eXBlL0xpbmsvQ1swIDAgMV0vQm9y
ZGVyWzAgMCAwXS9SZWN0WzQ4IDM5OC42NiAzMTMuMDUgNDA3LjkyXT4+CmVuZG9iagoyOTggMCBv
YmoKPDwvQlM8PC9TL1MvVyAwPj4vQTw8L1MvVVJJL1VSSShodHRwczovL2xpc3RzLnhlbnByb2pl
Y3Qub3JnL2FyY2hpdmVzL2h0bWwveGVuLWRldmVsLzIwMjEtMDEvbXNnMDI0MDMuaHRtbCk+Pi9T
dWJ0eXBlL0xpbmsvQ1swIDAgMV0vQm9yZGVyWzAgMCAwXS9SZWN0WzQ4IDM3MC40NiAzMTMuMDUg
Mzc5LjczXT4+CmVuZG9iagoyOTkgMCBvYmoKPDwvRmlsdGVyL0ZsYXRlRGVjb2RlL0xlbmd0aCAy
NzE5Pj5zdHJlYW0KeJylWduO20YSfddXNLAvNjDD4Z2i98kbJ1kHmPg26xiI80CRLZE2b+kmpZn9
jN394D1V3aSocWIJDsYG1SJZfep2qqr1++ofdytXJKkv7orV93erN6vfV67jBkkkDitf/ISbn1ae
K25Xv/7mimIVrkUShqJZRXHIn2r+FK4dlz7j9uKjuV+uflm1q5sf33lrT+z0yhP0p3ZHWYu3lsKO
W5SrrRHgkwBX0B8EJJ6zjvCI63iQYldB6vi4eI4XY7F2IlxcJ/CmRb6KUyec78VrJ1lPb9kFC8Rz
8yrFLfNS6DmEygg0i3xlN7P3ZlBpJJYIc1bDfuMlTnrEjJU7Y/ZiJ5gxm8WM2SwtMPPWcZEeMXu4
zJg9XGbMZjFjtvcsDH5riXCJOUoDh901LX0nSi3oKPXwtNnCfJ4gm5WBZV+ZFoFBMi1Dx0/F9FLo
pMksjxcT5OmeBcFvnQA8wbx2nfUCc5I66YwZi2Dagz/PmHllcZlX7MKImzGv4fEZ89p3gvUkzyxm
zPaexcRvnQA0mJGLNz/AR7Th3RaJQnGONcUnHI8vm9WT56Kvs2HbqUYMXVfrIcs/i06JTOixGrJN
/SCycSg7Vf1bFmI3Sj2IomuyqhV51opedTuV4d1SigL3qjYbqq4V9KYUeIhulA+9VPtKQ+zQiaJS
Mh+Epq9yKZT8nYRq8fTuE1HGV1D7sZMY1BADwRVg7rOq5r0mebIt+q5qB31enhc63mQFo9n7W9Zq
yOrPwuwh3ldqePnq+rnaAbrkLYpK591eqoeFErtKD1LBRDClqAaxeRB51zRjW+WwSLubxB1kXV9/
brsDbNepwT7uWLDMSgGzkuOmri+Q6fE6DnANXC+NiaZYIV/Ejh8vFQpTh1I0cIKQVXpdZ+1SanjC
dX9iFVBnnPiUECTinZSMOdtA22fiv69V9wmue3Zik0M1lGL8IFuRtYV4/WHhbf0/cShhFBaCMGtb
2EfeD7LViBFNJiER/9EiKwoltSY7ZYojKVfVRhZXEFDlJTvl6/4k5HFAacHxUQLPRopRY0e44rEj
B5W1mu1fqAqeZOz00JwDQFdmiNKx58ceuSi6wEWeSB3PI4yohgpPB3FIZZAFxF8R8PbHR04NQjxo
nDp54OWrt9+/4eBZqEUIXdrLwzaTqf4sUjx3EvrimLd6qWRyScQwb6VO4rOol201VFn9TLzqZfvh
7mwKRknieIEJtnGjiQpaaEfRZFX9O9IEAUNSQUVvZDMuIa6/PVWiCF+baHnLume1XIpOL8yXKIic
1GM5dyVRHtAWiLolG5KfHofgoVOfRcVJMPZ6UBIsahljqKSeIzIflYL+9l2R5UO1r4aHiVsXpiJp
4Gk8jO0z1Y2QcDZpIpBqYODLFkRKKWi3AnCSjstz1Vxhw7weC05RQ13ddkaAZ+4R01O2kL4mQokg
pbKp3jjirgRIo/WcWk1fywaoja0g9TxoNC6JAY0MV7rPwMG62w4HYg9YJpvy2pYq/Ju2g3zD9Xto
OWa1JXW9zO/U/fa4CtPIgSn+gIJT78KQCpMIgqaIIvt+ZwPjgdDKuuvJXgKqsb9q1B5RZogZFKJa
7jPcG0pEVLFgCOvOK0PYFKaKrLBVXbN0I7w0h+HDlYBQqlh0BRuedUwYIxkYeS0zSO9aI5iCgxwx
4L9UM6uDcKqCwmNk6seTqAUcOCcRYhUosiE7MvezpWVNI+94fhp65CgviSNck8SPojO2DhPHEFA5
DL1+dnND1tROpvISEeTkmdp0bew6MMrNvWxv2AE36D2DyPubuSyRBJf62EfPN/uYyQAaow5OiWN1
N4lDKdNkn8lI3HGhVpI/yG9NR7Fg3Uq5RkkUTMmJ7CAy6TtNpIA0QLMnxc+o580G6eG7vnsldC/z
aotGhSj2vI/R5Jvc23b5qKnCYsdH5HF9ewuP/bGzwm93luvaxDh1FvzSGw50OrW7sb7TN+XQ1OS0
a+M00vba824avXN9L0odur9EFl3ovGDt2TD/dewRlmiO9jH5CH3dZGtqfJ593ZiBF9DIFaDjMiEo
fluiib/ZTkeJ32on79q1dgrd4As7XdQZEI4oWAT5nzRgC0LjNGilLDj9h+wztY4wq6mVpuB9USyp
FFLq5DkqHvoFuXNmZjFdJ4UlEeHZ0A7C0IY28SMGG65IhjAH5kBwZG+aBZHVoFAq8HtKwkJyyk1q
XZECeXct74mbWUIH9GqhuBGkLS8758H5U1X4GR2RaU/JILwFqWiI41E5RX3UAiaGdfOSOvBaM6sX
Hb5nV9gGowbbY84bdyVIQ1ry6DNFfGGDhqCyP/psx1PBeEFJCLypJHDNUWNPPq8rnp+2PGnOHR5D
3MlpMBDZIXuYKlQ2nDeRn05kftdR+1+1e+rCdtChQFf57hZb1VVOo1k7KHT6gsY4a7jFuNaZYaZE
NrMbYSDmTRhFS+NNEnYqxcR1BybUo1r45zzqdWKzRD+AOpoJJAzNky3JvO+Z80neFvPzAVTNQY5w
oll3bjtPupi/0B37cUznCQTqX8xwsK14IXOuGCdsfmmf7IexE5pA+AEZvqFzhpPmgyxuxoa53aHS
RX3NRsrWGNQ2B4cK+rfdQE6mA4Rqmrw3Y1WbHrUVj4jm2ApRG3qJXwLYwCC2qdVuFboaNeYDeZj5
HrQwt+jbaphb/dPNbXmmG0YUU+8gjSDyLc27fExQ8tRKZIJZpzbBeLrxFW0iq90FI7HvJ3SgRSo0
Wd+b/gHzQy13EPOYFuSQL+LHd/9CF+y7MR2e8XhZaWoTSA+YCx5lr991fZWL7xDHV+KnrB0zsAHV
neX+l/bLGKcdN7EDKDIIfa9tf6wlJ3tZG+gj580nBLNZinkwaDL4qJX2JKCXik/Jjl7p6IihzOot
bWHYfWeOsmypOuseVHQnTBYRtgwL0DRq4QFZx0dMtygxNSWIrUofn7y4ffX641Nz3pIbQhjrYga6
hTvBVwROEavq4+jTyKaDvU8UuAww0YIBbO2AiGKZXPBQJ/You5R8y+rIjXufIQEoO2xnS29Oln5x
+3zR3W/RxNpJUWeNFKSn1S2rdQdG2PMWVdPIoiJuOo87Qh4Y3L2iETqnAt739UT3iH9pzqr4QK5h
TWw6yi16Y4zVoGTY6f0tka3qqGmmV69E3R3QZduH25Ebazw4+8XM8oXM0bbwnINKgxp9T63LRW2J
hxklNuiVHNDQ74EeO+iGvG52ZO9uHoYv3Xy+bKLDnPLnubG2iXjjTW1q/pH3uGjnGJn1gtYU9hqs
qzeStLRQ6SyCjjKsZ9XxQG5xLIyY5uaopMhh0fKCswuPfm4wuM2hMvJ5X2XImznhTaDbvL/iknoa
m0e1+ID2qDbDZBYnfY9Yr23QFUZXeW97lo9P/nn74eNTR7y8gJc9N53yiJsImpWnpseE4enhcS9h
HW5CqGUsq14by9O5KEognE3Tly6Bir9XxgF0KK1mspIVG5ka4ytzJKs6DCt4h2mG/H4BcHdKpEaS
5pXm4dT+WGCDg5JkiXZxDAw3yYpTi3DQqwvmpUAx5melpx8m7Cm7fnSMQ+JMoLPfqgua0dSdEslG
mT4peP5fONZ9dLo6/b4yn9i+5mkekPfhXvxStUV30PMPCuawykbA42Nc+hH1zer/dx5tvAplbmRz
dHJlYW0KZW5kb2JqCjUxIDAgb2JqCjw8L0NvbnRlbnRzIDI5OSAwIFIvVHlwZS9QYWdlL1Jlc291
cmNlczw8L1Byb2NTZXQgWy9QREYgL1RleHQgL0ltYWdlQiAvSW1hZ2VDIC9JbWFnZUldL0V4dEdT
dGF0ZTw8L0dTMTkxIDI4NCAwIFIvR1MxODEgMjc0IDAgUi9HUzE5MiAyODUgMCBSL0dTMTkwIDI4
MyAwIFIvR1MxODQgMjc3IDAgUi9HUzE5NSAyODggMCBSL0dTMTg1IDI3OCAwIFIvR1MxOTYgMjg5
IDAgUi9HUzE4MiAyNzUgMCBSL0dTMTkzIDI4NiAwIFIvR1MxODMgMjc2IDAgUi9HUzE5NCAyODcg
MCBSL0dTMTg4IDI4MSAwIFIvR1MxOTkgMjkyIDAgUi9HUzE4OSAyODIgMCBSL0dTMjAwIDI5MyAw
IFIvR1MxODYgMjc5IDAgUi9HUzE5NyAyOTAgMCBSL0dTMTg3IDI4MCAwIFIvR1MxOTggMjkxIDAg
Ui9HUzIwMSAyOTQgMCBSL0dTMjAyIDI5NSAwIFI+Pi9Gb250PDwvRjEgNSAwIFIvRjIgMTk2IDAg
Uj4+Pj4vQW5ub3RzWzI5NiAwIFIgMjk3IDAgUiAyOTggMCBSXS9QYXJlbnQgMTcyIDAgUi9NZWRp
YUJveFswIDAgNjEyIDc5Ml0+PgplbmRvYmoKMzE4IDAgb2JqCjw8L0JTPDwvUy9TL1cgMD4+L0E8
PC9TL1VSSS9VUkkoaHR0cHM6Ly9naXN0LmdpdGh1Yi5jb20vamFuZHJ5dWsvNjg1YjE1MDU1MDQx
ZTcwYTc4OWVkZDIyYTA4ZTQzZTgpPj4vU3VidHlwZS9MaW5rL0NbMCAwIDFdL0JvcmRlclswIDAg
MF0vUmVjdFsxMDggNTI3Ljc4IDM2Mi40NCA1MzcuMDRdPj4KZW5kb2JqCjMxOSAwIG9iago8PC9C
Uzw8L1MvUy9XIDA+Pi9BPDwvUy9VUkkvVVJJKGh0dHBzOi8vd3d3Lm9hc2lzLW9wZW4ub3JnL2Nv
bW1pdHRlZXMvdGNfaG9tZS5waHA/d2dfYWJicmV2PXZpcnRpbyk+Pi9TdWJ0eXBlL0xpbmsvQ1sw
IDAgMV0vQm9yZGVyWzAgMCAwXS9SZWN0WzQ2NC40NCAyMTMuMyA0OTEuMSAyMjIuNTZdPj4KZW5k
b2JqCjMyMCAwIG9iago8PC9CUzw8L1MvUy9XIDA+Pi9BPDwvUy9VUkkvVVJJKGh0dHBzOi8vZGV2
ZWxvcGVyLmFybS5jb20vYXJjaGl0ZWN0dXJlcy9zeXN0ZW0tYXJjaGl0ZWN0dXJlcy9zb2Z0d2Fy
ZS1zdGFuZGFyZHMvc21jY2MpPj4vU3VidHlwZS9MaW5rL0NbMCAwIDFdL0JvcmRlclswIDAgMF0v
UmVjdFs3Ny43NCAxMDAuNSAxNjUuNjQgMTA5Ljc2XT4+CmVuZG9iagozMjEgMCBvYmoKPDwvRmls
dGVyL0ZsYXRlRGVjb2RlL0xlbmd0aCAzNjEwPj5zdHJlYW0KeJylWtuS00gSffdX1NtAhBG6S+aN
ZZiBje1hhu5liFj2QZarbYEsCV3c9H7NfOrmrUqlbsC9bExMtLPLmXUq8+Slqvm8+tvVylfZJlRX
u9XLq9Ufq88r3/OjLFE3q1D9HRY/rgJfXaz+9W9f7VZxrrI4VsdVksb0qaZPce75+BmWnY+8flj9
uWpWT3+9DP1I7YdVoPC/fj/bcrRcY/MWh9U1G4jRgK/wPzCQBV6eqCxIvQzNGDHx8g388IKUhFBl
vhcF8rlcpRsvNitp7mW5UTECmYPviZh5CayJUu6FubVHQrkyW/GagCCtBcCSjsG/SoONF0QWc2p1
wVAqurSJCAa1iAxNtIxAFg1sFDeZwE5D30tTY5EFA9usWVigtcDIsAOffpds4Ash4HZkP1coga1k
k3vRBiTaSKRytSEQIm4InujNwgYjw2IK6lmmWCsFy/BDTIpUrmRDu8qAWXEJdQk/y7zMhZ+lXj7D
BymY4bNk4bMoiFnPCGTUwE+y3EsM/CTbeGE6myTJwrerAoYUl1Bd0iRJ4iUzaZIk9lJDmiSJvI0h
jQiGNCIyMUTLCGTRkCZJUi8ypEmSzPNTa5EEQxqzJjhIa4FxATsOmTOz6FvYcYCkkE1YsLBZFGis
NQsUWiNGGHejFVOmisWY+WJ24zXBQVoLjEuyRJEXuWSJQi+eyRIFXjaThSVLFhaFEKxnBDJqyRLF
XmDJAkI+k4WlmSxmVcCQ4hKq6/U4haAH1usocsTAVJyGXmTKmAjG6yKyZ0VrFsCi8Xqcxl6WKqOV
eHFiLZJgvG7WBAdpLTAuvB5DjLJg9noMYZJjoy2Q6Ni8EUvG6yKyZ0XPCGTUeD2OgaSpMlqpFyaz
SZKs1+2qgCHFJdQlfAh24MIHJoQz/MjnFOe9WLLwWRSIrGcEMmrhg7ix8CH06QyfpRm+WbXgNqla
Ql2QBkp+4pAGGkxqSQOtZ2NJw4IlDYtCDNYyAlm0pIFuE1nShIHnW9KwYEkja4yDtRYYF7D9dMF1
EGeu+4nDdRYsbBYFGmvNgst1P3O47ucO11mwsGVNcGQO143FGXaUC9WNSIxlQygAl3kTEQxsERma
aBlBGGlEoqvRIiYbi0JzsxuvCQ7SWmBccDyChE4cjkeQ06nleIRJbTkukuG4iMxj0TNCwmE1Yooh
N1oZksGazJgpdkNZFTCkuIS68HoSumRB0ZIlSoKZLCJYr7MonmWtWXDIEkG7tWSJoN1asohgvS5r
giOayWItul6Hgha5XoeWFc9eh2aZzV5nyXqdRfEs6xmBjFqvQ0ULrNdByGevszR73awKGFJcQhX4
Ef8OKhHXdSOHVNcjtoa1COBHvBdLsFfISGQ1ZJCiaaSICWrlmEq00UyofBu7iRR3s6usCiLWXOB1
mROmPgbdMCdMYHoLhDlhAsObuWmIYJgjIrNDtERgi4Y5KELYRSsNPDPH8GfDG1mxmDaBWgAs5/tS
Qvclz9/4ofJBN08j+Bn5AbQBuEDB5e/pL4AMW80VhIouVlDUoMKqDGeKjbo6rh79rIexaoqxapvH
Vx/xmkjG08VljGwBEhw1HVt0hYIpIiJL73Wjfu/bj7ocxdI31GBgSjeQHiGpXR20+rNqdu3NoE7x
Se366qR7NRzaqd6prVZd2496p8ZWPe/37VrdHKryoG6qula6Kba1VtWopkGrtlHHdqf7RiGUotmp
N51u3l/Rx6Ku2xt13fb4bbAFhqduGHtdHNn4CDjO44aSmc/HLdvjcWqq8dZTr0dVFo26nnow1JP5
gQ0LyJ0+6brtjroZVXuthqnDgxGid1U/vn7zBI+nTvB5Kmr8elVqtS3KT7rZDapqVNM2T/5RNdMX
tWuPRdUMazoZrJzHnUE2sL/NxoChWOw89kUz0JKEAKFJZNbOQVjnDtABvW/DWBUUK89lVPbjdE2h
mEJ5RvQvjMfVy2Zf7DW6090lfwhv0SFQEPyATL4e9XFQ1317JA781p70cQvHF/LMO74ADj37vqvp
dg4jMVt+ceirQb1t97ofnqmbtv9UNXt0FO4zdb0+YRSQSOBdKLLxGoJZ1tMOv+byZXa+4Sx+o1oC
F7x1tf1Sn2EE3cQTL2eYXTGWh8+TnrT68Kjtqz3Ug7q+NQiAxVs4PJ7lw+Pv2w18vPGnWATRMMAW
G6rRN+r95YU6tO2nAc9TtkgxIt6HR8znNaX6h8eqK6p+QF/At5qPU1NidYKMHw90UP62qoutromY
nNu97uqi1CbBzuczoYXpKGW011Wvb8AUFwmIeAXsNv7mAnO9oPX3zGaWXlSoihJL2NTBMYrRHKAE
w3SyD4/eXahhLOjwmNIS7F4f2xOAgH0XwYWvjlodi/JQNVAfMXyQgOMBTJdT34MDIHomzx8AFmY1
YSw67VUxfNKAeWzbGnYqP/0EJDO+gYBcV/upZ+CwoUF+xin0AhEYxj3f91obZndTPwD1xgPkCuYI
Rq5RHdC6Q6ftqqGchoG227bTCDX+hOwXj4ABDA1nlMQKMwrJ1rV1Vd6eBxZtDGP/Xgywz/Nm199O
n54BtG1dDQdAUaih7KuO63Wne/hBOQgdoCsgM9qGKiDu+ksNDuS9Kz24xWnDJRDmF3A3Nv8M7m/4
rhUmyXfKFcUIZh7h6WEcu+HZ06f7ahi9PSTFtPUAxtOPBcN+mubJNkj8JPHjQGd+keUbvduFYeHn
Oo507kAK/B+vygnch2Cq5SFirlUQuK4ditrdJXhgVY5zuGKRxZdfimMHKbDv26mjqnl9t4MWQwej
xmByvePRYy3NHWgz9tV2QgoVxxYs4Jdsw4aMLNt+59Tj4lRUUFKqGlfBptg731vjLPdyKaT9CEHv
imYczjeKGCY6bsm/SXFkvmKOjT3knppLpOAuTb2Y2wTmONRlPLUUQLcZkLJNXiwtuPi1KnIebuJ7
HO263Vcl13AuBrzlvM0ovoZSBA7s2gr8gdUcfwv1oq6udXlb1vp8aYqlLlVIBgw8O0BCbvKcTkmd
ZQ1lrxqpeGKiuhWBh6U9+rRBx3296zwAUmg4igUSLWEN69vdBK4v3DhSwyq+UTvP+xv/cEH73OE9
tlIeA+ncMijyzDZwMUZUNBcUysx6VCDRYfOMPZiZ+jwWPxKq/vHy4p9mKAUXftbHaa30WIL/YPau
roWiA/MRS5ShcA+gBuSGjZAQAwNF/OmB92hWBsx78yh2+hFmno7axlnQ0SYSwg5odTx/zEja0xax
Th0VmcaOAgJr4W8sHtJymWPTgL4HLf2lGigO907z4ZHeezQ+V+2T47GCK00xgLOuNZgpedoyNkX5
PHK4JDIpYdKoTljCJN+cW9lc/T485jI5VTW2NpdgFoeCYjJimzufEhFcWvPv5Kmcwgb4XmTvUQcu
NxqWiTgwgXLKnElOeuPYCE9/bstpxuHWAsnYu74RqM03SvHolNH7FfkBLoIZg8n4uukmvitClxpo
SsAeRel6pJvH4Na3+fpB5QR8QW0P3EYDB2HZ6ttWSvvdS8sZZBEi8wkXkhKy8z00wObLGgwf1+rV
72v1pi/mUk3dPHzAzBCoDT5Aw14Qj36Fb7wx/tmZDETfMfD21+WUEeEf/jgtJVTP1KtbcMMJhq7+
SbFvWsiz0vmdeg0s66+hEyJoH7cPYGfjhW9MM2GWmn2+/iQSxA98Egnxb2aJeSNYm66LLMR7Cic4
8qill4KDBe7OisH/8boTRonH+7+lQxTL8D3oaQfmGsAAV1KyIwmMA3q103C5KHYtDedF2bfDAENE
c+sehFvtBH402QtJN9wOeGH1TDmATtXsCpjB/sNJWpEduNOTg1zE2Y/Nzfj3qDh2z/Hm+eXrS/dV
InjoewG43UtieS+A5Je3KV0MlbyUyKnmCRPbVVlq6Fbojfm0OCVgmZmrYGUIq6prfJ66gQqsmhba
XMVPSNSFzHcw/7Fa0oLimXOqi/78pBrAZXfDh5iDtYY2AGW/IDIWThTn3Jr3pjsmBnsPBahxviwu
0TXWUizgphF46sp2gG+8M2EtPI89y7woNpdUKp7Yvka+VNPl0cLknDsWn3gwHYqjdnddNPCt5jvc
WOFIQjX2ONVjhZcPh9FnrrcIMMnxX4KYJ03TwXk2nf00+7LrNQwmI9PdlHs3LnCuLdQI9SVP6UBQ
ktUBKHRT9HJE5N0rmKjx8+/vOOPw4WR4gD+hVEhW2FhhQg+GAyWWKIq3vCQMBPJOS36AX/BPn7zR
n4dq4VWFewxwphMHqtid4OoEFQAb4FZj0djrBraq5/cMmhy396vNwC2di47E9s40Ij0Xn1GLvjw8
wEehb/LFJN0a8xP86ybh/dipHT5o8MpAOSHT0OzZZqJHRpjS8PnC3MyKfk9ji0MTMx7/7473Q5Mx
bxrq52eGgTTx0thVMxRFD8PW6H5b2nClP2Etg8TiiNnDyVuUeZf56XAqf8KRbuyn0s5j/JhPfHML
8g++keC/qFliv7x48eLF3Sbj7BT6Dyz9m9Dk9eVcw3fuwHGJHyBU+EBMgzWVHgqYvCoAZ/FR3E0t
ec4EP0FLbUYgE18nXGLT8v2aChw8H/4stl2vwRKyprmo/mlQp2NpXgOeX/xMv+DfOCGCsPTaQQ8R
+8uJL02rWHk4395dDNxG8FD4Rodhhwaxw0NI4mI2n/87RWxqRfHX4Hq7hGNDHZA7ZU8vNE4yqOeK
JpEnX620hm5yOwNwwMzFYW1Dbxv4v7MkrZoTDoN72sY7jz+JTcF43sCMBwBwjjxpB4zdStoXQbq4
fDsQIl3s1tJc0Yf6Cw77XL6YDFDXbqkjOCc9ThADqqJDC63Ovs+AxzoKs+DGfwD6x+q/2E47NQpl
bmRzdHJlYW0KZW5kb2JqCjcwIDAgb2JqCjw8L0NvbnRlbnRzIDMyMSAwIFIvVHlwZS9QYWdlL1Jl
c291cmNlczw8L1Byb2NTZXQgWy9QREYgL1RleHQgL0ltYWdlQiAvSW1hZ2VDIC9JbWFnZUldL0V4
dEdTdGF0ZTw8L0dTMjA5IDMwNiAwIFIvR1MyMTAgMzA3IDAgUi9HUzIxMSAzMDggMCBSL0dTMjIw
IDMxNyAwIFIvR1MyMDMgMzAwIDAgUi9HUzIxNCAzMTEgMCBSL0dTMjA0IDMwMSAwIFIvR1MyMTUg
MzEyIDAgUi9HUzIxMiAzMDkgMCBSL0dTMjEzIDMxMCAwIFIvR1MyMDcgMzA0IDAgUi9HUzIxOCAz
MTUgMCBSL0dTMjA4IDMwNSAwIFIvR1MyMTkgMzE2IDAgUi9HUzIwNSAzMDIgMCBSL0dTMjE2IDMx
MyAwIFIvR1MyMDYgMzAzIDAgUi9HUzIxNyAzMTQgMCBSPj4vRm9udDw8L0YxIDUgMCBSL0YyIDE5
NiAwIFI+Pj4+L0Fubm90c1szMTggMCBSIDMxOSAwIFIgMzIwIDAgUl0vUGFyZW50IDE3MiAwIFIv
TWVkaWFCb3hbMCAwIDYxMiA3OTJdPj4KZW5kb2JqCjMzMCAwIG9iago8PC9CUzw8L1MvUy9XIDA+
Pi9BPDwvUy9VUkkvVVJJKGh0dHBzOi8vbGlzdHMuYXJjaGl2ZS5jYXJib242MC5jb20veGVuL2Rl
dmVsLzYwNzMzMiM2MDczMzIpPj4vU3VidHlwZS9MaW5rL0NbMCAwIDFdL0JvcmRlclswIDAgMF0v
UmVjdFs3OCAyMTMuOTQgNDQ2LjA2IDIyMy4yXT4+CmVuZG9iagozMzEgMCBvYmoKPDwvQlM8PC9T
L1MvVyAwPj4vQTw8L1MvVVJJL1VSSShodHRwczovL2xpc3RzLmFyY2hpdmUuY2FyYm9uNjAuY29t
L3hlbi9kZXZlbC82MDczODAjNjA3MzgwKT4+L1N1YnR5cGUvTGluay9DWzAgMCAxXS9Cb3JkZXJb
MCAwIDBdL1JlY3RbNzggMTc2LjE0IDQ4MS41NCAxODUuNF0+PgplbmRvYmoKMzMyIDAgb2JqCjw8
L0ZpbHRlci9GbGF0ZURlY29kZS9MZW5ndGggNDAxMz4+c3RyZWFtCnicnVrbctvItX3nV/TjTEqG
iTvh8+TxjDOaisqOrTiuOj41BZFNCWMS4ACgZOVD8jy/l7/I2pduNCQfU4n9IG6Ce/fq3Wvfmvx9
8cPlYmnKKjGXm8VPl4u/Ln5fLKNlWubmbpGYX/Dwt0W8NBeL//2/pdksspUps8zsF3mR8asdv8pW
0ZJe43HwUp7fLP6+aBfP//w+SWJzPSxiQ//768lWoBUam5a4WWzFQEIGlob+w0AZR6vc5EkSVbAy
ScvcQIgLCHGUFqZcRmnshPWiqKLMPytWUblyWpNQ0edUSiN4wyllERA5gyysF24xeaYwWCtEuOZt
xEt5K06jFKC9mERZbkgiS3EclQUkWUak9aISCCJWAk70nEA28TmVsijGI1XKolU2WWRpvfDr6VPF
woozoDPsWQWtCXtWLaPEY89WVZR77Co57CoKQNVzAtl02EmqHPasSqLCY1fJY/dPPbQqNzOggl1O
IsOZTlwhyXMlK8qJKyo4rqgohFCtSZi4kuHjnisZTHmuqOC44p4pjGriijMYYM6KqJwwZzn9UTMQ
Yo9ZBI9ZRAUmWk4ggx5zVka5x4zYSzxmETxmfaYwWCtEOMOMWMgnzAiGwmOmYPCYRfCYRVRgouUE
MugxJ3mUesxJQfnCGWTBY9ZnCoO1QoQh5jQWWjuJ2Slm0iXzVpZQwWFWUYCplhOEfpNUOcxpzKxV
g7FSWhfTZx5UlZsQYYg5QbxMfk6qcvJzUhWTn1VwmFUUYKrlhFXg5wQR6P2cIgC9n1XwmPWZwqgm
PzuDAeZVHsQgST4Gk1U2xaAKHrOICky0JmGKwWRVTDGYrMopBlVwmN0zhVFMMegMBphLSddO4qyr
ZkrOx7pEqblalxdRgYmWEyStOolTrlPibOwMaqp2i8kzhcFaIcIZ5mIZ5I0kr6a8QYLPGyp4zCIq
MNFSgQ16zEU85Y2kSKa8oYLHrM8UVDzlDWcwwBzDXECOeKWhQHbikkNB1hDBgxZRkYmWE9iiR51A
PXOo0QIsE2dRBI9anwkO0ZphnMFeFtEqgL10WYcMLSXryCIieNgiKjTRmoRVAHtZRqWHvVxFmYct
aNUuX74MWUhPT8/NzUUwPnnyhEgjMp2cnPCI6OBPCBlOkPRmDHRBt2OT4yDjsLVo0SIWHjihZVWl
jrBRYxd348aNRNrGqE0g/9Njk6qviV9ORCFx/fp1AkOMwbVUs3ZqaioBJrdPS0vD4JrINcibnEhE
n4yhsnKZ7kJS397eTv7zkYPRChLmj8ZxJ+TFxsYGnE9wmEDF7SfSVSQvQT1vMNRgzJgxrCCQxSlT
piCPo9KAGuCT9UBCePz4cbzIgsULoIHqYAwjUJ59Wfv19PRgCBmQrBM2+L9MJquqqrp9+7Yq5tbW
1pKSkpaWFpIRcrO0vcziwiWS03czsVn6v5PG7DUU0uy8HCwBii5aQdY6fAzj4Aq2QF1iYiJhpiW+
ePEiYLjNcGdqMxkHUvYRY8CGWnrt+u9hBRhOnjwJYBSSCDSSdDRsRQEMkEAd0hYbEQ4ODlASSgfX
rl3T1dXF4Mqwbz9xfRnZV0hzCg5raWu5uLiwGYcYGz16NKGlyh9lhipj586dY3Lr7JlqZKyxcnGg
P6ygRkUGJ7ukBoAE3uLi4igEELBz587FOCrtpKSkkJAQXKP2ZpOps7MzRlD9chURpsNNR4wYQXUF
G5tIiOPHj9dQtuXLl7e1tZGGEIyamhqeBi9wSaCaohKMQai9ZnvhTb28vMj/YXrSpElgKTw8nJa1
ubkZ8kvLBxrhVzSuUCi6urpYlQsICMAN091dP73wpvSN4MKE8HxCxRpavHgxOXlFRQUrI/S5efNm
jG+O3oLHOSkwejOGROnkPA1Wli5dSjzcv3+f6ufMzExoFFSdxqFdWFZUiXfv3sU4K3cs2lWrVuGp
SZMnfbr4I6BQum+K2oQJg4KCWLWkHE2ujiYWi0EmhSTa8+fPsZPSHaJ7KDcb5Zw6qgvMySRKVyZR
+vv70+s/ffp05MiRIGfv3r3kdTSOKogp7/X0KBGoNmJsxYoVyqpsamkNZ7UQcAJJdXU15kd4ko/N
nz9fU9ny8/NpnPiMjIxkXkQYwFVp0SdjUH7feXOYaHKdzrr9rFmzmIzj5vbo0SMwBlcPDg4+c+YM
yg/gjImJwT3YYM6bNy8+Pp4txfEieMpj5gxOIoLKnpiEGCREU1NTxCZ5GlYHAkI+lpOTw1a2KXtS
eMpDjMOFYjouUwdjQAWFXBa8nNlQm5k9ePCANPzo0aOUiQDV2tqa2QQZG4OinTt3ElSIP53lCoVC
klxs7pA7mFrIfyFXqovYhLesigijfC2WiMmlUUI0NTVBXRGG+PPhw4fkXajfUkQpdGrHCV19+hjW
cfvftxMPZWVlrGKANLgQBA3KEBoaeuvWLUpVIpEIOXH48OEoLRISEsgtMX7ixAmectfww1/Xw2+5
wsykgHpFROQaDSVCDw8P7GTBGLh6/Pjxr//4FbtyaAi+svzTyBTRnvI6LunqzRgdzEpKci1GWEAu
fH193/92RwnPoYh7/9uCn44LVEUsLCyMVjnjSAaHQUGkYQl2pe2axp/G66tp62j7f+2P+k0dx3S9
GaM975LlS8i6XC4nN1PdKKlyyBJI10TX2bNnBw0axJQo3rMUnB5PfSBNll9WWy6rLk3emxwUEuzq
7uow0WHy1Mnec7y/3xB5IOcgjGKZ1HEUrMqYuFgCT8DbYfBwwWFjE2O4GWLwxo0brOeoFtuqjQaJ
PQQI9nqgC6Sl/yLCtgtz0vk/hx2wMS3qlop6BUKvtLqM+SWiTo5eLC/JLclTh1F0UASiQJetnS2W
DEkN1SbeERi2RG/9cGgzdSqddrJpqE/GSO5evHjBVkfBocHQHKw15lRTR9yBKHRc/Oe6rkJ9FkER
iAJdKKucnJ0E0wUQh7+sX6v85UgRFBJE7z5u3LhTp04ROVQisic87CkB2pUrVwQCAT3iNsONcYA6
+Q+b12NOwVcCvoDPbXd2cabOR1f+SRd8F5VrzruLM4iikwq2DR48eF/WfrwvXD1AGECDBgYGKMMo
RfZu2BfExsai9qCbZ3h5FlYU0bm3nv5vJv9/alQ2U91lP2E8/ZSJsFq9NoJkHM3ExGThwoUpKSmF
hYUNDQ3YKKWnpwuXCGkbxUyioYlyDrKMxxHd2CLxlHtPTc6blqaurq6BspFplIg8ZaLEp+6QIfr6
+sxdyqMzzltv6vDp6uEqq8Y+swj6sCstGZsdDZ7GwJwjYcUk7ISMgGqQNtPbi52Nw0aHwwiE1tbW
q1evXrp0CfU/dCMjIwMryCB3dcXGzd7eXh3W+2v06yTEB9kBQgdvQQcbc+b52djaGBoakisCPJbS
2tbax88neud2+BViGR080zkDPIFzbMSYjo4OknJ4eLiRkVF7ezt2alZWVtAH1JBtbW3YYLJv8dka
vSzSaHxKAhIEGMOGmn5QQ8GTui8tdldcUmrSz1n7MEJ5CvfgTtSWdmOY36bVtL6sj3V0dLS0tEgk
ElT7dBYXFxeHNHTz5k3QSCdm6gAwQKNXxqe3n0/6L+lQcuQ+MAPqmDyuktBpXJTxk998P1pZ9YUD
8QBp7ezsxMbW3d0dZbZUKiWH7+rqWrduHU8pnmoC8N/A4ymjwNHJMXztarjQobxs+lEAn7hOTk/G
Rs9J4MQmCA1NNS4uy1h3d3diYiJU69ixY8w2VrlMPT09ISEhuOf3YozaRw5jaGRoOdLS2tYGn/D/
Ae5UR2MZE4vFiErQBdWi/1GBe8tkMh8fH95nF7E+cVJO7/NbTWUO//y68UdpJKpE0e/IElsdUb3H
Dn4eSP8CLE+sqwplbmRzdHJlYW0KZW5kb2JqCjQzOSAwIG9iago8PC9CUzw8L1MvUy9XIDA+Pi9B
PDwvUy9VUkkvVVJJKGh0dHBzOi8vbGlzdHMuYXJjaGl2ZS5jYXJib242MC5jb20veGVuL2RldmVs
LzYwNzMzMiM2MDczMzIpPj4vU3VidHlwZS9MaW5rL0NbMCAwIDFdL0JvcmRlclswIDAgMF0vUmVj
dFsxOTAuOCA2MzAuMTUgMzA4LjQ4IDYzOS40MV0+PgplbmRvYmoKNDQwIDAgb2JqCjw8L0JTPDwv
Uy9TL1cgMD4+L0E8PC9TL1VSSS9VUkkoaHR0cHM6Ly9saXN0cy5hcmNoaXZlLmNhcmJvbjYwLmNv
bS94ZW4vZGV2ZWwvNjA3MzgwIzYwNzM4MCk+Pi9TdWJ0eXBlL0xpbmsvQ1swIDAgMV0vQm9yZGVy
WzAgMCAwXS9SZWN0WzMyNi4yNSA2MzAuMTUgMzY2LjIzIDYzOS40MV0+PgplbmRvYmoKNDQxIDAg
b2JqCjw8L0JTPDwvUy9TL1cgMD4+L0E8PC9TL1VSSS9VUkkoaHR0cHM6Ly9saXN0cy5hcmNoaXZl
LmNhcmJvbjYwLmNvbS94ZW4vZGV2ZWwvNjA3MjQzIzYwNzI0Myk+Pi9TdWJ0eXBlL0xpbmsvQ1sw
IDAgMV0vQm9yZGVyWzAgMCAwXS9SZWN0Wzc4IDU3Ni4xNSAyNDQuNDYgNTg1LjQxXT4+CmVuZG9i
ago0NDIgMCBvYmoKPDwvQlM8PC9TL1MvVyAwPj4vQTw8L1MvVVJJL1VSSShodHRwczovL3lvdXR1
YmUuY29tL3dhdGNoP3Y9V3QtU0JoRm5EWlkmdD0zbTQ4cyk+Pi9TdWJ0eXBlL0xpbmsvQ1swIDAg
MV0vQm9yZGVyWzAgMCAwXS9SZWN0WzI1Ny4xOCA1NTYuOTUgNDUyLjA2IDU2Ni4yMV0+PgplbmRv
YmoKNDQzIDAgb2JqCjw8L0JTPDwvUy9TL1cgMD4+L0E8PC9TL1VSSS9VUkkoaHR0cHM6Ly93d3cu
cGxhdGZvcm1zZWN1cml0eXN1bW1pdC5jb20vMjAxOS9zcGVha2VyL2h1bnQvKT4+L1N1YnR5cGUv
TGluay9DWzAgMCAxXS9Cb3JkZXJbMCAwIDBdL1JlY3RbMzA3LjE2IDU0Ny4zNSAzNTcuMzQgNTU2
LjYxXT4+CmVuZG9iago0NDQgMCBvYmoKPDwvQlM8PC9TL1MvVyAwPj4vQTw8L1MvVVJJL1VSSSho
dHRwczovL3d3dy5wbGF0Zm9ybXNlY3VyaXR5c3VtbWl0LmNvbS8yMDE4L3NwZWFrZXIvcHJhdHQv
KT4+L1N1YnR5cGUvTGluay9DWzAgMCAxXS9Cb3JkZXJbMCAwIDBdL1JlY3RbMzYxLjggNTQ3LjM1
IDQxOS45NiA1NTYuNjFdPj4KZW5kb2JqCjQ0NSAwIG9iago8PC9CUzw8L1MvUy9XIDA+Pi9BPDwv
Uy9VUkkvVVJJKGh0dHBzOi8veW91dHViZS5jb20vY2hhbm5lbC9VQ0gtN1B3OTZLNVYxUkhBUG41
LWNtWUEpPj4vU3VidHlwZS9MaW5rL0NbMCAwIDFdL0JvcmRlclswIDAgMF0vUmVjdFs0MzcuNzQg
NTQ3LjM1IDQ4Ni42IDU1Ni42MV0+PgplbmRvYmoKNDQ2IDAgb2JqCjw8L0JTPDwvUy9TL1cgMD4+
L0E8PC9TL1VSSS9VUkkoaHR0cHM6Ly9saXN0cy5hcmNoaXZlLmNhcmJvbjYwLmNvbS94ZW4vZGV2
ZWwvNTc3ODAwKT4+L1N1YnR5cGUvTGluay9DWzAgMCAxXS9Cb3JkZXJbMCAwIDBdL1JlY3RbMTY4
IDUzNy43NSA1MjQuOTMgNTQ3LjAxXT4+CmVuZG9iago0NDcgMCBvYmoKPDwvQlM8PC9TL1MvVyAw
Pj4vQTw8L1MvVVJJL1VSSShodHRwczovL2xpc3RzLmFyY2hpdmUuY2FyYm9uNjAuY29tL3hlbi9k
ZXZlbC81Nzc4MDApPj4vU3VidHlwZS9MaW5rL0NbMCAwIDFdL0JvcmRlclswIDAgMF0vUmVjdFsx
NjggNTI4LjE1IDM4NS4wOSA1MzcuNDFdPj4KZW5kb2JqCjQ0OCAwIG9iago8PC9CUzw8L1MvUy9X
IDA+Pi9BPDwvUy9VUkkvVVJJKGh0dHBzOi8vbGlzdHMuYXJjaGl2ZS5jYXJib242MC5jb20veGVu
L2RldmVsLzU5MTUwOSk+Pi9TdWJ0eXBlL0xpbmsvQ1swIDAgMV0vQm9yZGVyWzAgMCAwXS9SZWN0
WzE2OCA1MTUuNTUgMzQ0LjI5IDUyNC44MV0+PgplbmRvYmoKNDQ5IDAgb2JqCjw8L0JTPDwvUy9T
L1cgMD4+L0E8PC9TL1VSSS9VUkkoaHR0cHM6Ly9zb2Z0d2FyZS5pbnRlbC5jb20vY29udGVudC93
d3cvdXMvZW4vZGV2ZWxvcC9hcnRpY2xlcy9pbnRlbC10cnVzdC1kb21haW4tZXh0ZW5zaW9ucy5o
dG1sKT4+L1N1YnR5cGUvTGluay9DWzAgMCAxXS9Cb3JkZXJbMCAwIDBdL1JlY3RbMTM4IDQ4My43
NSA0NzUuODQgNDkzLjAxXT4+CmVuZG9iago0NTAgMCBvYmoKPDwvQlM8PC9TL1MvVyAwPj4vQTw8
L1MvVVJJL1VSSShodHRwczovL3d3dy5icmlnaHR0YWxrLmNvbS93ZWJjYXN0LzE4MjA2LzQ1MzYw
MCk+Pi9TdWJ0eXBlL0xpbmsvQ1swIDAgMV0vQm9yZGVyWzAgMCAwXS9SZWN0WzEzOCA0NzEuMTUg
MzE2LjA1IDQ4MC40MV0+PgplbmRvYmoKNDUxIDAgb2JqCjw8L0JTPDwvUy9TL1cgMD4+L0E8PC9T
L1VSSS9VUkkoaHR0cHM6Ly93aWtpLnhlbnByb2plY3Qub3JnL3dpa2kvQXJnbzpfSHlwZXJ2aXNv
ci1NZWRpYXRlZF9FeGNoYW5nZV9cKEhNWFwpX2Zvcl9YZW4pPj4vU3VidHlwZS9MaW5rL0NbMCAw
IDFdL0JvcmRlclswIDAgMF0vUmVjdFs0OTguMjMgNDU4LjU1IDUxNiA0NjcuODFdPj4KZW5kb2Jq
CjQ1MiAwIG9iago8PC9CUzw8L1MvUy9XIDA+Pi9BPDwvUy9VUkkvVVJJKGh0dHBzOi8vaXBhZHMu
c2Uuc2p0dS5lZHUuY24vX21lZGlhL3B1YmxpY2F0aW9ucy9za3licmlkZ2UtZXVyb3N5czE5LnBk
Zik+Pi9TdWJ0eXBlL0xpbmsvQ1swIDAgMV0vQm9yZGVyWzAgMCAwXS9SZWN0WzEzOCA0MzkuMzUg
Mzk0LjYxIDQ0OC42MV0+PgplbmRvYmoKNDUzIDAgb2JqCjw8L0JTPDwvUy9TL1cgMD4+L0E8PC9T
L1VSSS9VUkkoaHR0cHM6Ly9pcGFkcy5zZS5zanR1LmVkdS5jbi9fbWVkaWEvcHVibGljYXRpb25z
L2d1YXRjMjAucGRmKT4+L1N1YnR5cGUvTGluay9DWzAgMCAxXS9Cb3JkZXJbMCAwIDBdL1JlY3Rb
MTM4IDQwNy41NSAzNDkuMzUgNDE2LjgxXT4+CmVuZG9iago0NTQgMCBvYmoKPDwvQlM8PC9TL1Mv
VyAwPj4vQTw8L1MvVVJJL1VSSShodHRwczovL29wZW54dC5hdGxhc3NpYW4ubmV0L3dpa2kvc3Bh
Y2VzL0RDL3BhZ2VzLzEzNDg3NjM2OTgpPj4vU3VidHlwZS9MaW5rL0NbMCAwIDFdL0JvcmRlclsw
IDAgMF0vUmVjdFs3OCAzNTYuNTEgMTI1LjA2IDM2NS43N10+PgplbmRvYmoKNDU1IDAgb2JqCjw8
L0JTPDwvUy9TL1cgMD4+L0E8PC9TL1VSSS9VUkkoaHR0cHM6Ly9vcGVueHQuYXRsYXNzaWFuLm5l
dC93aWtpL3NwYWNlcy9EQy9wYWdlcy8xMzMzNDI4MjI1L0FuYWx5c2lzK29mK0FyZ28rYXMrYSt0
cmFuc3BvcnQrbWVkaXVtK2ZvcitWaXJ0SU8pPj4vU3VidHlwZS9MaW5rL0NbMCAwIDFdL0JvcmRl
clswIDAgMF0vUmVjdFs3OCAzNDMuOTEgMjUxLjU5IDM1My4xN10+PgplbmRvYmoKNDU2IDAgb2Jq
Cjw8L0JTPDwvUy9TL1cgMD4+L0E8PC9TL1VSSS9VUkkoaHR0cHM6Ly9vcGVueHQuYXRsYXNzaWFu
Lm5ldC93aWtpL3NwYWNlcy9EQy9wYWdlcy83NzUzODkxOTcvTmV3K0xpbnV4K0RyaXZlcitmb3Ir
QXJnbyk+Pi9TdWJ0eXBlL0xpbmsvQ1swIDAgMV0vQm9yZGVyWzAgMCAwXS9SZWN0Wzc4IDMzMS4z
MSAxNjkuNDYgMzQwLjU4XT4+CmVuZG9iago0NTcgMCBvYmoKPDwvQlM8PC9TL1MvVyAwPj4vQTw8
L1MvVVJJL1VSSShodHRwczovL29wZW54dC5hdGxhc3NpYW4ubmV0L3dpa2kvc3BhY2VzL0RDL3Bh
Z2VzLzczNzM0NTUzOC9BcmdvKyUzQStIeXBlcnZpc29yLU1lZGlhdGVkK2RhdGErZVhjaGFuZ2Ur
JTNBK0RldmVsb3BtZW50KT4+L1N1YnR5cGUvTGluay9DWzAgMCAxXS9Cb3JkZXJbMCAwIDBdL1Jl
Y3RbNzggMzE4LjcxIDI4NC44OSAzMjcuOThdPj4KZW5kb2JqCjQ1OCAwIG9iago8PC9CUzw8L1Mv
Uy9XIDA+Pi9BPDwvUy9VUkkvVVJJKGh0dHBzOi8vbGlzdHMuYXV0b21vdGl2ZWxpbnV4Lm9yZy9n
L2FnbC1kZXYtY29tbXVuaXR5L2F0dGFjaG1lbnQvODU5NS8wL0FyZ28lMjBhbmQlMjBWaXJ0SU8u
cGRmKT4+L1N1YnR5cGUvTGluay9DWzAgMCAxXS9Cb3JkZXJbMCAwIDBdL1JlY3RbMTA4IDI5Ni41
MSAxMjkuNzUgMzA1Ljc3XT4+CmVuZG9iago0NTkgMCBvYmoKPDwvQlM8PC9TL1MvVyAwPj4vQTw8
L1MvVVJJL1VSSShodHRwczovL2xpc3RzLmF1dG9tb3RpdmVsaW51eC5vcmcvZy9hZ2wtZGV2LWNv
bW11bml0eS9tZXNzYWdlLzg1OTUpPj4vU3VidHlwZS9MaW5rL0NbMCAwIDFdL0JvcmRlclswIDAg
MF0vUmVjdFsxMDggMjgzLjkxIDE2Mi42IDI5My4xN10+PgplbmRvYmoKNDYwIDAgb2JqCjw8L0JT
PDwvUy9TL1cgMD4+L0E8PC9TL1VSSS9VUkkoaHR0cHM6Ly94Y3Atbmcub3JnL2Jsb2cvMjAyMC8w
Ni8wMy9kZXZpY2UtZW11bGF0aW9uLWluLXRoZS14ZW4taHlwZXJ2aXNvci8pPj4vU3VidHlwZS9M
aW5rL0NbMCAwIDFdL0JvcmRlclswIDAgMF0vUmVjdFs3OCAyNzEuMzEgMzI3LjQ5IDI4MC41OF0+
PgplbmRvYmoKNDYxIDAgb2JqCjw8L0JTPDwvUy9TL1cgMD4+L0E8PC9TL1VSSS9VUkkoaHR0cHM6
Ly9hbHlzc2EuaXMvdXNpbmctdmlydGlvLXdsLyk+Pi9TdWJ0eXBlL0xpbmsvQ1swIDAgMV0vQm9y
ZGVyWzAgMCAwXS9SZWN0Wzc4IDI1OC43MSAyMTUuMTggMjY3Ljk3XT4+CmVuZG9iago0NjIgMCBv
YmoKPDwvQlM8PC9TL1MvVyAwPj4vQTw8L1MvR29Uby9EWzEgMCBSL1hZWiAwIDc0NCAwXT4+L1N1
YnR5cGUvTGluay9DWzAgMCAxXS9Cb3JkZXJbMCAwIDBdL1JlY3RbNzggMTg5LjA4IDIwMSAxOTgu
MzRdPj4KZW5kb2JqCjQ2MyAwIG9iago8PC9CUzw8L1MvUy9XIDA+Pi9BPDwvUy9VUkkvVVJJKGh0
dHBzOi8vY3JlYXRpdmVjb21tb25zLm9yZy9saWNlbnNlcy9ieS80LjAvKT4+L1N1YnR5cGUvTGlu
ay9DWzAgMCAxXS9Cb3JkZXJbMCAwIDBdL1JlY3RbMTcyLjMxIDc5Ljg0IDMyOS45MiA4OS4xXT4+
CmVuZG9iago0NjQgMCBvYmoKPDwvRmlsdGVyL0ZsYXRlRGVjb2RlL0xlbmd0aCA0NDkxPj5zdHJl
YW0KeJytW9lyG8mxfcdX1JNjFCYbvS90OBwUKUucO5zhDKkxbeuGo9FoAj3sBdMLKfj9foH0Af49
/4UzK7OWHtsCTN9QhISD6sw6VXUyK6sa+nnx+m7hiiTzxd168eZu8f3i54XruEESieeFL76Gxp8W
niuuF3/+X1esF2EqkjAUzSKKQ/mplp/C1HHxMzRbH6l9u/jDol0s3976mSc2w8IT+KffGF+Wle3M
dLFdPJADHx24Av+Ag8Rz0kikvudINwq6TpQJAF4sUi9z4LPrBB5/LhZx5oSqJU6dJFUmCkh38BxD
3/GhjYwAZInyR6BYcFfcxiSk1YxgIYfhufK7JHQdDzgrGGSOHwlE4CoJUieKAcl+GBWLTHJgmEl2
bMdA+oTnNMqgiYxCz4HZVB4JFQvVn2rV1LJIzIgy94DJwnQhd4Z+5sTwfEC+/NTJgHtAPRGCnnzi
wa0+UWRLRtItPsrQcwJs5FH7KArlllCxUJ2qVuZHljZbHkDMjGGEOAADXXg+Jl8gjAAoxtQTIegp
Ih7cGvEAyNKgTD7K0HMSbOSR+ygC5ZZQsVCdqlYmRJY22/kA3NQJrAG4iROaAbixk5gBEDID4Fbm
yJYKoVszADdzPDMAQKkZACEzANXKhMjSZjsbQJxFTmIGEGch/sO+EHl6AIz0AFQrcVSWCqFbPYA4
i51IDyDOEscPjVuJ9AB0KxMiS5vtfABpQPGroC/jl32lnoxf7omQGQC3Mke2VCiQ0WbBzAwgDWWY
KrchB7HqlFs1vywSM7bzASQcxAxjCkX2FVOYck+xCmLmwa3MkS0ZJRzEClIosmFCYcpuExXE3Klq
ZX5kabOdZdA4SiiIDXR1Bo2jWIYp5TtGKoMypDTJdgZkOoPGOAaVQWMcQmg8Zhy/qj9uZS7ScEZ0
lkHjkONXQYpCSmZxSBFKqY6RzqCqlZKkslSI41dBikJlSBGq3Kr4VZ1yKxMiS5stDYB2tDig8FVI
xiBtgwhkPpC9BBy7tH0ypI2VrRSgCFNIRp8ykoGpHHLUqs6ojWlIK5vhXDB+bG+5sR9ZW27sh9aW
y0gLhiCrguwUiK0tF5HecmM/sbZcRkYwqlVTM1uucmpzd4OZ2AFaYnd9W+yENHeCTJDsDLDE7oaW
2N3IFjshw121MpfQFjs7NVqJIN0ZrUSpa7SCQGuFgdIKQxIEWyngWVqJIH9qrUSQ57RWGCitqDam
4RutKIfWfEeQiyytRPCg0UqEyUjPNyM13wxpUtlOAd/SCiKtlSgJLK0w0vOtWzU1oxXl1EouUeST
Vgx0dXKJIk9qhbIAI51cVCvlD2VpUGaSSxQFctmVYSglodyGLBjVKbcyIbK02c62pigM7OomCn2r
uomgVjXVDSO9NalW2n2UpUKBXd1EYWhVN4hMdcNIb026lQmFVnWj3NoDgPrNKg4iKOBMcRBBBWmK
A0ZmANzKHNlSIc8uDhCa4iCCCtIUB4zMAFSr5meKA+XWlj9UbJElf6gSYyN/qCAzI39CWv4EWeNk
p0Aqt3CFMtze2chz5c7PHgkZ+atW5iINZ0Rn8g/TzN5bwzS19tYwTay9lZGWv2olhStLhTJ7bw0z
19pbEZm9lZGWv24lQmxps50PIImpNlYwkhUu+wLkmQEQMgPgVubIlgrFVBsrmMgKVxmmsvpVblOu
jVWn3MqEyNJmO1NPGId28gzjwEqeYexbyZORUg9DkgjbKRBayRORTp5hHFnJk5FWj27V1EzyVE7t
yYdUZZ1tQ8hV5mwbYq4yk0/ITD638vyypUKhfbYNo8g624ZQj5qzLSMz+aqVCUXW2Va5tQcAR2Fb
/vCPJX/owpI/ITMAbmWObKmQP5N/ENjyB2TJn5AZgGplQoEtf3ZryoUAJODGul4IIEf5HtcLASS3
yOftnYGqFxhSUcBWCkiPqmBAmHlCWcEqaYf4WZUL3KI5ZZ6YEZxxDqUzzRnq5Vhzhgo905wJaM4E
mRdZKSAf0JwjPrNKq8hT7iLP8I00wYhvLixqM7ZQJ6cWW4CuZgtlc6DZEtBsCTIjsjIgtdhCVZ5o
tgFkUe0QP2vG1MIkpMmM4IwzVOC+xdkPnFBzhr0t0ZwJaM4EmRdZKSA9as5Q1XuaM4BUO8TPmjO1
MAlpMiM44+x5Tmxxhm0s1ZwBeIHqgoDmTJB5kZUC0qPm7EG21Jw9yJbaIX7WnKmFSUiTGcFZ7g5c
Dj+NZSRxMg1cGWScahmp5M2QMjTbKcAhYyCEk7LCONMeKQR1d9SmmWWemPOccfdTDkONZUSxMz+V
wcY9MVLcGRI/tlOAA4hhJs3ZSvIhf5lnePuGJz8/Z2irxE9m0YhQR6MPG3igVMJAqYQhKYGtDLCi
0U+saPQTE430WamEW5iEFY3ancU5nkWjH1vR6MdWNDLQnGMrGtlKgVk0+rEVjQhS7dCORm5hElY0
aneGs5eFThxozl4WyCtO6QeBq14TMFCcGRIvtlJAelScvQxOxYlQVrE52TJQrFUb85BWM45E+/Xd
Yvl7GJCTZeIONC5feqCS8Go8dgL4tll89W1ZrsXYifIpr6d8LMW4LcU3VTt9FOvyqSpKse6rp7IX
VTuW/UMOX8DT01D2ww7BQ9eLh7r8WK2quhr3Z6/ufsLXQP+u8wCvtaHgo853ffkArj8VeSue+wp6
/1as9mM5fBbg9tO2exZN3u7pO4FPrUr54Fi2v4NnkNensd8jJbY/EV1b7wF1gK7Fr8Wb87fnV99+
PsArxqVMHI941Tn00KsOq2ZXl00J419TlzDmBv5ZTaNou1HgLJ3C90N+RCduiG+KsJNGdjLIGZzP
5yovHmFqhmHq5WzvytJ+5Lkatx30vZoeYPaqdnO42zjz5Le/6LZsoQtwIMZ+GreiexDrqc9pJRHJ
OR2OcJ9kSk85cCpGWKy+G4bTddfkVSve3FzdvBGrbmrXOaxW0fU9PNTCCI/wHWu5gFrOxPfTCrSw
6/oRiT8VW1ikh75rxH3Z4mT9z4/X4mnoiscDrl28nw3Vit92TSmGatNWDxUs+yh+rPrx6jsW/yDK
j7u6KqoRpAULIYZt3oMamrLpYDzlxqE46POmpFU5NLAAb1e1FJ6gs6o7fRjEh68uz+9P6+oRlDBI
Icjl6ahHscs3Jciy2JYfXv1GsNlmN52oz8/5vs7btcZP1brsvsxF3pdqedz03U9yAfP1GkaYr0Dx
MLi67p6RyZddBXCaDmfuBJA5ZAQL7Ae2kUoi8o1tIN/YOnBSgGSP6TCBUxGcMxIfzpD95t/OMJ6Z
bacPkOhwPkksP3QbiKmbfBLXXdv+/QBDyFh+ZDv7eqorkNvbPq9rm2w4e738JelBXckR826/K/un
auj603zTdsNYFeL69od5wi3bfFWXUnsQlyTO0/N+00Ho5u2A4aASNaYH0UEa78VWux5O6HuZcMuW
nGKqPyJKoOTjABx2ZYEBMiAJtP4IGzuu8HnfiLwvtpAuihGy1iH5o1uoezj45DiKCZJCiwG2LmsK
OkiuMhxByO2AhDdTOYA0n6pcRnv5BAYCM0Bb1sNvRDXCdrHHlP1pVcoMXj3QNPbTDiUNEcTOQdrT
gILORQv+n46ZiChzde4usddqaE7EAJlj3KIrGbXXt1eqD9y3uJc18K5aORBxfnN1Ia4uIURhrrr+
xDzedOtSTufYVxvUJ37hfLYFFr0sGuR9caLU+0MJGxymL/i7r/LaqGLg6KiKrbjBHW1a273Hx8jb
h7wmr6STX8bzl+YWaiCOh5v7M5gE8d3trfjGFZ9ucCcv5MyZSPkMyyoF+C7v12ULYzkvgP0g7sq+
gYmuIZG+O7/78GqmSoGbKu3jsDKXsIBlLW4bjIt8PDIaIrw7JaK+67tSh5cl7hxy5S5Bk3UHNMXt
1IDnM2HPX/Ky1fMjqExmXW/HcTecLZd7qANgP3SKrlk+52Ox/d3Tb/8wnt6+3v6+vfzTH381/jZo
wnSwOaRHpagA77czFaA396IaZCS1ayoXV1NVQzXUihpmHaOzLvMe10Hq50A6heEEoe3/5IBFiL+1
iW2LI/aVMI2duQ3IycqIs10me9nKBG5C1/u6k6vX1+J9DUlZdnJo54Oq3TZ+d7N8DfNXTY04vz8w
OrzwmE3i+dWteA0p7gEKgEPFD9RVUZCpbPZtN6rIvywL4bvgsSlLWWBBnF3kzaqv1pvyBGJmL3yx
roZiGgYMyaot6gk1AXPbyiS3seb4RLx/f3V5KKqQjJ+pZDGUUPLDHgg7Fu4LJ1AtNs3U8jYAyX18
LiHmsDPo1lrPI3qBCp+TDIYtRaiQYawnDlM1BvNQ0gDbzlTAoJTAdY/d4iN8r8mioGRVd+3mFFIu
7BrlA2xsFW5fZgR6Ap+rGnPyz1MFOauoO9j0iw6TCk2B3MgHKNKLjpLOQ9U3z7i5wX64hplwBK4o
dAlZ7QqCtsboPZzaQjhKspYg1PtuPRXI5u7yHrLpXT8No7jkev4jZAKcnuHDqzN7crwXFmsBvh+J
lB5Vdhu6hxHH5WDmqWWSgyFD1+Py+fl5OQ3Lsl2uKeMuc6h3C8hGS/nw6Yh8+fxxWmq+znZs6sO1
eYjvSOdkoEcHwmADOK8fKeOWqyIfxqWX+m68DKMAahp7NvxjpRLiSze1Nxeoil2+k4c0jMlbKDU2
27wSX1d5J+5AROJ9KyskOKbheZeLDDghrKUMqnaA4Re6bMphb6xRPSVVQ+pY9uV5iOTLD01sts4v
LMrDLMVS3/L67vre9ntc/YwLFLkquK5xi5dJ60x8un3cv+ZcBZVd+fyL7AHlNB1uKcrxgAPx0+3G
qqn+irsXROGwb4tt37XdNIirmwvMgE0Fx9nHsscy87PN91A5JqmG+IVoFgHezngK1zYOYsDq4RmQ
T37pOkc+56nAVWqtdvl6cIbSGX4aJ6dcT07RLv/SlOsqX+6mVa2q6uXwuKfkflpOcGTfD7DV7tYP
9iCPqvokDz9VIQwHpL3MrLAm76Fo6GlVPov/E59yyG2cZfEY0efrCrlA0WbNM9R/pRQvqLrpWlgd
mUI5jvU20ECazjeHT6aSnpeqoIbgwkucHA9CqABIv7WckNnqvrBck325+kbkP16SzQSVnO/+chmo
cHPcDPIK/rQljQMkBbqPLTIZXmICGV/c9QvUUog/xJYOsi84+OGtFaROJAL89cy/PCwgJRede+CX
uXnui48lARS1QSR74iuXX8l9/+CdRRBG+CsQWfmAdvZDJU+l8iyZw1nPOhjjHENJhcFNfRx2Ti8Z
6YL0me9DL+l8jW6OY4i/Co/MCffMOsCcXuO647yu8zEX5T2eJzclPMMnCLxvtFbf844JQvluJMEX
1NjpJ+z1jIf82Tr88NH/fBq7ppOH37cQhOraF5+f8rr6az52uNUDZTg7v+27aQcl2bTBOgAj22bn
vzBSXHwhogVwW1frY64O/FQv/XUO+VyevYHVrhvGg6viJ3pp7y9uTsF2VXcbOe942/2mmSgTqBMm
Vopm3eTqv/vxWryVVxGHe4uNBuBwWmxxK4Lk9oT+QFmg2PnNnT2rwX8R8bNo9gMPX2fQRefU8zXz
uIWQueyKSWntn8L6qO04hFFCtosoXdyh0zU7xbqT6zPeahvUWf4Eqybvszqa4+92ZXt/B4XtY6VO
4ubKse4oMdoFiPfyexC8GMw8K+HQNZoVdmfiZptDze3ZHcb/X2uBP5qKfdn/NyC4lm70Dq9FcuRa
4PvENJH+L7rdXlasUL8XH17JqD2Fvzzx+vyNuN3DCaqBw8JFX+Z8J3Kx7SGQuh1eHl7Uef8oF82+
DnIOHbpFBlpLjBaeO/AC/9Y02LWYsByQayw7xvRzAUUa1qrn49hXq0mGX+i48vDS07UZBA1P12EG
KcwwM+iEDLMc6sDdXk80c8Fbcqih/3bo9hfOpoHl1LF1kb4w88EBQ/6XIuVUVQkFz0lBU+J0/WbJ
dIflar+EaVly/z+LJAZBoUOY8JB+Mu5HeC1QNGJZNRsPJCW+l/8n6/vFPwB5BLHLCmVuZHN0cmVh
bQplbmRvYmoKMTg4IDAgb2JqCjw8L0NvbnRlbnRzIDQ2NCAwIFIvVHlwZS9QYWdlL1Jlc291cmNl
czw8L1Byb2NTZXQgWy9QREYgL1RleHQgL0ltYWdlQiAvSW1hZ2VDIC9JbWFnZUldL0V4dEdTdGF0
ZTw8L0dTMzA4IDQyNiAwIFIvR1MzMDkgNDI3IDAgUi9HUzI5MSA0MDkgMCBSL0dTMjk0IDQxMiAw
IFIvR1MyOTUgNDEzIDAgUi9HUzI5MiA0MTAgMCBSL0dTMjkzIDQxMSAwIFIvR1MyOTggNDE2IDAg
Ui9HUzI5OSA0MTcgMCBSL0dTMzEwIDQyOCAwIFIvR1MyOTYgNDE0IDAgUi9HUzI5NyA0MTUgMCBS
L0dTMzEzIDQzMSAwIFIvR1MzMTQgNDMyIDAgUi9HUzMxMSA0MjkgMCBSL0dTMzEyIDQzMCAwIFIv
R1MzMTcgNDM1IDAgUi9HUzMxOCA0MzYgMCBSL0dTMzE1IDQzMyAwIFIvR1MzMTYgNDM0IDAgUi9H
UzMwMiA0MjAgMCBSL0dTMzAzIDQyMSAwIFIvR1MzMDAgNDE4IDAgUi9HUzMwMSA0MTkgMCBSL0dT
MzA2IDQyNCAwIFIvR1MzMDcgNDI1IDAgUi9HUzMwNCA0MjIgMCBSL0dTMzA1IDQyMyAwIFI+Pi9G
b250PDwvRjEgNSAwIFI+Pi9YT2JqZWN0PDwvaW1nMSA0MzggMCBSL2ltZzAgNDM3IDAgUj4+Pj4v
QW5ub3RzWzQzOSAwIFIgNDQwIDAgUiA0NDEgMCBSIDQ0MiAwIFIgNDQzIDAgUiA0NDQgMCBSIDQ0
NSAwIFIgNDQ2IDAgUiA0NDcgMCBSIDQ0OCAwIFIgNDQ5IDAgUiA0NTAgMCBSIDQ1MSAwIFIgNDUy
IDAgUiA0NTMgMCBSIDQ1NCAwIFIgNDU1IDAgUiA0NTYgMCBSIDQ1NyAwIFIgNDU4IDAgUiA0NTkg
MCBSIDQ2MCAwIFIgNDYxIDAgUiA0NjIgMCBSIDQ2MyAwIFJdL1BhcmVudCAzODUgMCBSL01lZGlh
Qm94WzAgMCA2MTIgNzkyXT4+CmVuZG9iago0NjYgMCBvYmoKPDwvRGVzdFsxIDAgUi9YWVogMCA3
NDQgMF0vVGl0bGUoVmlydElPLUFyZ28gRGV2ZWxvcG1lbnQ6IFBoYXNlIDEpL1BhcmVudCA0NjUg
MCBSPj4KZW5kb2JqCjQ2NSAwIG9iago8PC9UeXBlL091dGxpbmVzL0NvdW50IDEvRmlyc3QgNDY2
IDAgUi9MYXN0IDQ2NiAwIFI+PgplbmRvYmoKNSAwIG9iago8PC9TdWJ0eXBlL1R5cGUxL1R5cGUv
Rm9udC9CYXNlRm9udC9IZWx2ZXRpY2EvRW5jb2RpbmcvV2luQW5zaUVuY29kaW5nPj4KZW5kb2Jq
CjE5NiAwIG9iago8PC9TdWJ0eXBlL1R5cGUxL1R5cGUvRm9udC9CYXNlRm9udC9IZWx2ZXRpY2Et
Qm9sZC9FbmNvZGluZy9XaW5BbnNpRW5jb2Rpbmc+PgplbmRvYmoKMjMzIDAgb2JqCjw8L1N1YnR5
cGUvVHlwZTEvVHlwZS9Gb250L0Jhc2VGb250L0NvdXJpZXIvRW5jb2RpbmcvV2luQW5zaUVuY29k
aW5nPj4KZW5kb2JqCjI2NSAwIG9iago8PC9TdWJ0eXBlL1R5cGUxL1R5cGUvRm9udC9CYXNlRm9u
dC9IZWx2ZXRpY2EtT2JsaXF1ZS9FbmNvZGluZy9XaW5BbnNpRW5jb2Rpbmc+PgplbmRvYmoKMiAw
IG9iago8PC9jYSAxPj4KZW5kb2JqCjMgMCBvYmoKPDwvY2EgMT4+CmVuZG9iago0IDAgb2JqCjw8
L2NhIDE+PgplbmRvYmoKNiAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjggMCBvYmoKPDwvY2EgMT4+
CmVuZG9iago5IDAgb2JqCjw8L2NhIDE+PgplbmRvYmoKMTAgMCBvYmoKPDwvY2EgMT4+CmVuZG9i
agoxMSAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjEyIDAgb2JqCjw8L2NhIDE+PgplbmRvYmoKMTMg
MCBvYmoKPDwvY2EgMT4+CmVuZG9iagoxNCAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjE1IDAgb2Jq
Cjw8L2NhIDE+PgplbmRvYmoKMTcgMCBvYmoKPDwvY2EgMT4+CmVuZG9iagoxOCAwIG9iago8PC9j
YSAxPj4KZW5kb2JqCjE5IDAgb2JqCjw8L2NhIDE+PgplbmRvYmoKMjAgMCBvYmoKPDwvY2EgMT4+
CmVuZG9iagoyMSAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjIyIDAgb2JqCjw8L2NhIDE+PgplbmRv
YmoKMjMgMCBvYmoKPDwvY2EgMT4+CmVuZG9iagoyNCAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjI2
IDAgb2JqCjw8L2NhIDE+PgplbmRvYmoKMjcgMCBvYmoKPDwvY2EgMT4+CmVuZG9iagoyOCAwIG9i
ago8PC9jYSAxPj4KZW5kb2JqCjI5IDAgb2JqCjw8L2NhIDE+PgplbmRvYmoKMzAgMCBvYmoKPDwv
Y2EgMT4+CmVuZG9iagozMSAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjMyIDAgb2JqCjw8L2NhIDE+
PgplbmRvYmoKMzMgMCBvYmoKPDwvY2EgMT4+CmVuZG9iagozNCAwIG9iago8PC9jYSAxPj4KZW5k
b2JqCjM1IDAgb2JqCjw8L2NhIDE+PgplbmRvYmoKMzcgMCBvYmoKPDwvY2EgMT4+CmVuZG9iagoz
OCAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjM5IDAgb2JqCjw8L2NhIDE+PgplbmRvYmoKNDAgMCBv
YmoKPDwvY2EgMT4+CmVuZG9iago0MSAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjQyIDAgb2JqCjw8
L2NhIDE+PgplbmRvYmoKNDMgMCBvYmoKPDwvY2EgMT4+CmVuZG9iago0NCAwIG9iago8PC9jYSAx
Pj4KZW5kb2JqCjQ1IDAgb2JqCjw8L2NhIDE+PgplbmRvYmoKNDYgMCBvYmoKPDwvY2EgMT4+CmVu
ZG9iago0NyAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjQ4IDAgb2JqCjw8L2NhIDE+PgplbmRvYmoK
NDkgMCBvYmoKPDwvY2EgMT4+CmVuZG9iago1MCAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjUyIDAg
b2JqCjw8L2NhIDE+PgplbmRvYmoKNTMgMCBvYmoKPDwvY2EgMT4+CmVuZG9iago1NCAwIG9iago8
PC9jYSAxPj4KZW5kb2JqCjU1IDAgb2JqCjw8L2NhIDE+PgplbmRvYmoKNTYgMCBvYmoKPDwvY2Eg
MT4+CmVuZG9iago1NyAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjU4IDAgb2JqCjw8L2NhIDE+Pgpl
bmRvYmoKNTkgMCBvYmoKPDwvY2EgMT4+CmVuZG9iago2MCAwIG9iago8PC9jYSAxPj4KZW5kb2Jq
CjYxIDAgb2JqCjw8L2NhIDE+PgplbmRvYmoKNjIgMCBvYmoKPDwvY2EgMT4+CmVuZG9iago2MyAw
IG9iago8PC9jYSAxPj4KZW5kb2JqCjY0IDAgb2JqCjw8L2NhIDE+PgplbmRvYmoKNjUgMCBvYmoK
PDwvY2EgMT4+CmVuZG9iago2NiAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjY3IDAgb2JqCjw8L2Nh
IDE+PgplbmRvYmoKNjggMCBvYmoKPDwvY2EgMT4+CmVuZG9iago2OSAwIG9iago8PC9jYSAxPj4K
ZW5kb2JqCjcxIDAgb2JqCjw8L2NhIDE+PgplbmRvYmoKNzIgMCBvYmoKPDwvY2EgMT4+CmVuZG9i
ago3MyAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjc0IDAgb2JqCjw8L2NhIDE+PgplbmRvYmoKNzUg
MCBvYmoKPDwvY2EgMT4+CmVuZG9iago3NiAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjc3IDAgb2Jq
Cjw8L2NhIDE+PgplbmRvYmoKNzggMCBvYmoKPDwvY2EgMT4+CmVuZG9iago3OSAwIG9iago8PC9j
YSAxPj4KZW5kb2JqCjgwIDAgb2JqCjw8L2NhIDE+PgplbmRvYmoKODIgMCBvYmoKPDwvY2EgMT4+
CmVuZG9iago4MyAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjg0IDAgb2JqCjw8L2NhIDE+PgplbmRv
YmoKODUgMCBvYmoKPDwvY2EgMT4+CmVuZG9iago4NyAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjg4
IDAgb2JqCjw8L2NhIDE+PgplbmRvYmoKODkgMCBvYmoKPDwvY2EgMT4+CmVuZG9iago5MCAwIG9i
ago8PC9jYSAxPj4KZW5kb2JqCjkxIDAgb2JqCjw8L2NhIDE+PgplbmRvYmoKOTIgMCBvYmoKPDwv
Y2EgMT4+CmVuZG9iago5MyAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjk0IDAgb2JqCjw8L2NhIDE+
PgplbmRvYmoKOTUgMCBvYmoKPDwvY2EgMT4+CmVuZG9iago5NiAwIG9iago8PC9jYSAxPj4KZW5k
b2JqCjk3IDAgb2JqCjw8L2NhIDE+PgplbmRvYmoKOTggMCBvYmoKPDwvY2EgMT4+CmVuZG9iago5
OSAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjEwMCAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjEwMSAw
IG9iago8PC9jYSAxPj4KZW5kb2JqCjEwMiAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjEwMyAwIG9i
ago8PC9jYSAxPj4KZW5kb2JqCjEwNCAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjEwNSAwIG9iago8
PC9jYSAxPj4KZW5kb2JqCjEwNiAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjEwOCAwIG9iago8PC9j
YSAxPj4KZW5kb2JqCjEwOSAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjExMCAwIG9iago8PC9jYSAx
Pj4KZW5kb2JqCjExMSAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjExMiAwIG9iago8PC9jYSAxPj4K
ZW5kb2JqCjExMyAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjExNCAwIG9iago8PC9jYSAxPj4KZW5k
b2JqCjExNSAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjExNiAwIG9iago8PC9jYSAxPj4KZW5kb2Jq
CjExNyAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjExOCAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjE3
MyAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjE3NCAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjE3NSAw
IG9iago8PC9jYSAxPj4KZW5kb2JqCjE3NiAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjE3NyAwIG9i
ago8PC9jYSAxPj4KZW5kb2JqCjE3OCAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjE3OSAwIG9iago8
PC9jYSAxPj4KZW5kb2JqCjE4MCAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjE4MiAwIG9iago8PC9j
YSAxPj4KZW5kb2JqCjE4MyAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjE4NCAwIG9iago8PC9jYSAx
Pj4KZW5kb2JqCjE4NSAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjE4NiAwIG9iago8PC9jYSAxPj4K
ZW5kb2JqCjE4NyAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjE4OSAwIG9iago8PC9jYSAxPj4KZW5k
b2JqCjE5MCAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjE5MSAwIG9iago8PC9jYSAxPj4KZW5kb2Jq
CjE5MiAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjE5MyAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjE5
NCAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjE5NSAwIG9iago8PC9DQSAxPj4KZW5kb2JqCjE5NyAw
IG9iago8PC9jYSAxPj4KZW5kb2JqCjE5OCAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjE5OSAwIG9i
ago8PC9jYSAxPj4KZW5kb2JqCjIwMCAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjIwMSAwIG9iago8
PC9jYSAxPj4KZW5kb2JqCjIxMiAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjIxMyAwIG9iago8PC9j
YSAxPj4KZW5kb2JqCjIxNCAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjIxNSAwIG9iago8PC9DQSAx
Pj4KZW5kb2JqCjIxNiAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjIxNyAwIG9iago8PC9jYSAxPj4K
ZW5kb2JqCjIxOCAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjIxOSAwIG9iago8PC9jYSAxPj4KZW5k
b2JqCjIyMCAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjIyMSAwIG9iago8PC9jYSAxPj4KZW5kb2Jq
CjIyMiAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjIyMyAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjIy
NCAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjIyNSAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjIyNiAw
IG9iago8PC9jYSAxPj4KZW5kb2JqCjIyNyAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjIyOCAwIG9i
ago8PC9jYSAxPj4KZW5kb2JqCjIyOSAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjIzMCAwIG9iago8
PC9jYSAxPj4KZW5kb2JqCjIzMSAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjIzMiAwIG9iago8PC9j
YSAxPj4KZW5kb2JqCjI0NCAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjI0NSAwIG9iago8PC9jYSAx
Pj4KZW5kb2JqCjI0NiAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjI0NyAwIG9iago8PC9jYSAxPj4K
ZW5kb2JqCjI0OCAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjI0OSAwIG9iago8PC9DQSAxPj4KZW5k
b2JqCjI1MCAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjI1MSAwIG9iago8PC9jYSAxPj4KZW5kb2Jq
CjI1MiAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjI1MyAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjI1
NCAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjI1NiAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjI1NyAw
IG9iago8PC9jYSAxPj4KZW5kb2JqCjI1OCAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjI1OSAwIG9i
ago8PC9DQSAxPj4KZW5kb2JqCjI2MCAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjI2MSAwIG9iago8
PC9jYSAxPj4KZW5kb2JqCjI2MiAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjI2MyAwIG9iago8PC9j
YSAxPj4KZW5kb2JqCjI2NCAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjI2NiAwIG9iago8PC9jYSAx
Pj4KZW5kb2JqCjI2NyAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjI2OCAwIG9iago8PC9jYSAxPj4K
ZW5kb2JqCjI2OSAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjI3MCAwIG9iago8PC9jYSAxPj4KZW5k
b2JqCjI3MSAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjI3NCAwIG9iago8PC9jYSAxPj4KZW5kb2Jq
CjI3NSAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjI3NiAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjI3
NyAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjI3OCAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjI3OSAw
IG9iago8PC9DQSAxPj4KZW5kb2JqCjI4MCAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjI4MSAwIG9i
ago8PC9jYSAxPj4KZW5kb2JqCjI4MiAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjI4MyAwIG9iago8
PC9jYSAxPj4KZW5kb2JqCjI4NCAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjI4NSAwIG9iago8PC9j
YSAxPj4KZW5kb2JqCjI4NiAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjI4NyAwIG9iago8PC9jYSAx
Pj4KZW5kb2JqCjI4OCAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjI4OSAwIG9iago8PC9jYSAxPj4K
ZW5kb2JqCjI5MCAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjI5MSAwIG9iago8PC9jYSAxPj4KZW5k
b2JqCjI5MiAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjI5MyAwIG9iago8PC9jYSAxPj4KZW5kb2Jq
CjI5NCAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjI5NSAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjMw
MCAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjMwMSAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjMwMiAw
IG9iago8PC9jYSAxPj4KZW5kb2JqCjMwMyAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjMwNCAwIG9i
ago8PC9jYSAxPj4KZW5kb2JqCjMwNSAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjMwNiAwIG9iago8
PC9jYSAxPj4KZW5kb2JqCjMwNyAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjMwOCAwIG9iago8PC9j
YSAxPj4KZW5kb2JqCjMwOSAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjMxMCAwIG9iago8PC9DQSAx
Pj4KZW5kb2JqCjMxMSAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjMxMiAwIG9iago8PC9jYSAxPj4K
ZW5kb2JqCjMxMyAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjMxNCAwIG9iago8PC9jYSAxPj4KZW5k
b2JqCjMxNSAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjMxNiAwIG9iago8PC9jYSAxPj4KZW5kb2Jq
CjMxNyAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjMyMiAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjMy
MyAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjMyNCAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjMyNSAw
IG9iago8PC9jYSAxPj4KZW5kb2JqCjMyNiAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjMyNyAwIG9i
ago8PC9jYSAxPj4KZW5kb2JqCjMyOCAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjMyOSAwIG9iago8
PC9jYSAxPj4KZW5kb2JqCjMzMyAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjMzNCAwIG9iago8PC9j
YSAxPj4KZW5kb2JqCjMzNSAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjMzNiAwIG9iago8PC9jYSAx
Pj4KZW5kb2JqCjMzNyAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjMzOCAwIG9iago8PC9jYSAxPj4K
ZW5kb2JqCjMzOSAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjM0MCAwIG9iago8PC9DQSAxPj4KZW5k
b2JqCjM0MSAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjM0MiAwIG9iago8PC9jYSAxPj4KZW5kb2Jq
CjM0MyAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjM0NCAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjM0
NSAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjM0NiAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjM0NyAw
IG9iago8PC9jYSAxPj4KZW5kb2JqCjM0OCAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjM0OSAwIG9i
ago8PC9jYSAxPj4KZW5kb2JqCjM1MCAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjM1MSAwIG9iago8
PC9jYSAxPj4KZW5kb2JqCjM1NSAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjM1NiAwIG9iago8PC9j
YSAxPj4KZW5kb2JqCjM1NyAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjM1OCAwIG9iago8PC9DQSAx
Pj4KZW5kb2JqCjM1OSAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjM2MCAwIG9iago8PC9jYSAxPj4K
ZW5kb2JqCjM2MSAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjM2MiAwIG9iago8PC9jYSAxPj4KZW5k
b2JqCjM2MyAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjM2NCAwIG9iago8PC9jYSAxPj4KZW5kb2Jq
CjM2NSAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjM2NiAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjM2
NyAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjM2OCAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjM2OSAw
IG9iago8PC9jYSAxPj4KZW5kb2JqCjM3MCAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjM3MSAwIG9i
ago8PC9jYSAxPj4KZW5kb2JqCjM3MiAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjM3MyAwIG9iago8
PC9jYSAxPj4KZW5kb2JqCjM3NCAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjM3NSAwIG9iago8PC9j
YSAxPj4KZW5kb2JqCjM3NiAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjM3NyAwIG9iago8PC9jYSAx
Pj4KZW5kb2JqCjM4MCAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjM4MSAwIG9iago8PC9jYSAxPj4K
ZW5kb2JqCjM4MiAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjM4MyAwIG9iago8PC9jYSAxPj4KZW5k
b2JqCjM4NyAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjM4OCAwIG9iago8PC9jYSAxPj4KZW5kb2Jq
CjM4OSAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjM5MCAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjM5
MSAwIG9iago8PC9DQSAxPj4KZW5kb2JqCjM5MiAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjM5MyAw
IG9iago8PC9jYSAxPj4KZW5kb2JqCjM5NCAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjM5NSAwIG9i
ago8PC9jYSAxPj4KZW5kb2JqCjM5NiAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjM5NyAwIG9iago8
PC9jYSAxPj4KZW5kb2JqCjM5OCAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjM5OSAwIG9iago8PC9j
YSAxPj4KZW5kb2JqCjQwMCAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjQwMSAwIG9iago8PC9jYSAx
Pj4KZW5kb2JqCjQwMiAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjQwOSAwIG9iago8PC9jYSAxPj4K
ZW5kb2JqCjQxMCAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjQxMSAwIG9iago8PC9jYSAxPj4KZW5k
b2JqCjQxMiAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjQxMyAwIG9iago8PC9jYSAxPj4KZW5kb2Jq
CjQxNCAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjQxNSAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjQx
NiAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjQxNyAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjQxOCAw
IG9iago8PC9jYSAxPj4KZW5kb2JqCjQxOSAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjQyMCAwIG9i
ago8PC9jYSAxPj4KZW5kb2JqCjQyMSAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjQyMiAwIG9iago8
PC9jYSAxPj4KZW5kb2JqCjQyMyAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjQyNCAwIG9iago8PC9j
YSAxPj4KZW5kb2JqCjQyNSAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjQyNiAwIG9iago8PC9jYSAx
Pj4KZW5kb2JqCjQyNyAwIG9iago8PC9DQSAxPj4KZW5kb2JqCjQyOCAwIG9iago8PC9jYSAxPj4K
ZW5kb2JqCjQyOSAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjQzMCAwIG9iago8PC9jYSAxPj4KZW5k
b2JqCjQzMSAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjQzMiAwIG9iago8PC9jYSAxPj4KZW5kb2Jq
CjQzMyAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjQzNCAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjQz
NSAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjQzNiAwIG9iago8PC9jYSAxPj4KZW5kb2JqCjE3MiAw
IG9iago8PC9LaWRzWzEgMCBSIDcgMCBSIDE2IDAgUiAyNSAwIFIgMzYgMCBSIDUxIDAgUiA3MCAw
IFIgODEgMCBSIDg2IDAgUiAxMDcgMCBSXS9UeXBlL1BhZ2VzL0NvdW50IDEwL1BhcmVudCA0Njcg
MCBSPj4KZW5kb2JqCjM4NSAwIG9iago8PC9LaWRzWzM4NiAwIFIgMTgxIDAgUiAxODggMCBSXS9U
eXBlL1BhZ2VzL0NvdW50IDMvUGFyZW50IDQ2NyAwIFI+PgplbmRvYmoKNDY3IDAgb2JqCjw8L0tp
ZHNbMTcyIDAgUiAzODUgMCBSXS9UeXBlL1BhZ2VzL0NvdW50IDEzL0lUWFQoMS4zLjE5KT4+CmVu
ZG9iago0NjggMCBvYmoKPDwvUGFnZU1vZGUvVXNlT3V0bGluZXMvVHlwZS9DYXRhbG9nL091dGxp
bmVzIDQ2NSAwIFIvUGFnZXMgNDY3IDAgUj4+CmVuZG9iago0NjkgMCBvYmoKPDw+PgplbmRvYmoK
eHJlZgowIDQ3MAowMDAwMDAwMDAwIDY1NTM1IGYgCjAwMDAwMTE4NjUgMDAwMDAgbiAKMDAwMDA4
MzA0NiAwMDAwMCBuIAowMDAwMDgzMDcxIDAwMDAwIG4gCjAwMDAwODMwOTYgMDAwMDAgbiAKMDAw
MDA4MjY3NyAwMDAwMCBuIAowMDAwMDgzMTIxIDAwMDAwIG4gCjAwMDAwMTkzOTcgMDAwMDAgbiAK
MDAwMDA4MzE0NiAwMDAwMCBuIAowMDAwMDgzMTcxIDAwMDAwIG4gCjAwMDAwODMxOTYgMDAwMDAg
biAKMDAwMDA4MzIyMiAwMDAwMCBuIAowMDAwMDgzMjQ4IDAwMDAwIG4gCjAwMDAwODMyNzQgMDAw
MDAgbiAKMDAwMDA4MzMwMCAwMDAwMCBuIAowMDAwMDgzMzI2IDAwMDAwIG4gCjAwMDAwMjU2NTAg
MDAwMDAgbiAKMDAwMDA4MzM1MiAwMDAwMCBuIAowMDAwMDgzMzc4IDAwMDAwIG4gCjAwMDAwODM0
MDQgMDAwMDAgbiAKMDAwMDA4MzQzMCAwMDAwMCBuIAowMDAwMDgzNDU2IDAwMDAwIG4gCjAwMDAw
ODM0ODIgMDAwMDAgbiAKMDAwMDA4MzUwOCAwMDAwMCBuIAowMDAwMDgzNTM0IDAwMDAwIG4gCjAw
MDAwMjk4NTEgMDAwMDAgbiAKMDAwMDA4MzU2MCAwMDAwMCBuIAowMDAwMDgzNTg2IDAwMDAwIG4g
CjAwMDAwODM2MTIgMDAwMDAgbiAKMDAwMDA4MzYzOCAwMDAwMCBuIAowMDAwMDgzNjY0IDAwMDAw
IG4gCjAwMDAwODM2OTAgMDAwMDAgbiAKMDAwMDA4MzcxNiAwMDAwMCBuIAowMDAwMDgzNzQyIDAw
MDAwIG4gCjAwMDAwODM3NjggMDAwMDAgbiAKMDAwMDA4Mzc5NCAwMDAwMCBuIAowMDAwMDMzNDQx
IDAwMDAwIG4gCjAwMDAwODM4MjAgMDAwMDAgbiAKMDAwMDA4Mzg0NiAwMDAwMCBuIAowMDAwMDgz
ODcyIDAwMDAwIG4gCjAwMDAwODM4OTggMDAwMDAgbiAKMDAwMDA4MzkyNCAwMDAwMCBuIAowMDAw
MDgzOTUwIDAwMDAwIG4gCjAwMDAwODM5NzYgMDAwMDAgbiAKMDAwMDA4NDAwMiAwMDAwMCBuIAow
MDAwMDg0MDI4IDAwMDAwIG4gCjAwMDAwODQwNTQgMDAwMDAgbiAKMDAwMDA4NDA4MCAwMDAwMCBu
IAowMDAwMDg0MTA2IDAwMDAwIG4gCjAwMDAwODQxMzIgMDAwMDAgbiAKMDAwMDA4NDE1OCAwMDAw
MCBuIAowMDAwMDM3MjM0IDAwMDAwIG4gCjAwMDAwODQxODQgMDAwMDAgbiAKMDAwMDA4NDIxMCAw
MDAwMCBuIAowMDAwMDg0MjM2IDAwMDAwIG4gCjAwMDAwODQyNjIgMDAwMDAgbiAKMDAwMDA4NDI4
OCAwMDAwMCBuIAowMDAwMDg0MzE0IDAwMDAwIG4gCjAwMDAwODQzNDAgMDAwMDAgbiAKMDAwMDA4
NDM2NiAwMDAwMCBuIAowMDAwMDg0MzkyIDAwMDAwIG4gCjAwMDAwODQ0MTggMDAwMDAgbiAKMDAw
MDA4NDQ0NCAwMDAwMCBuIAowMDAwMDg0NDcwIDAwMDAwIG4gCjAwMDAwODQ0OTYgMDAwMDAgbiAK
MDAwMDA4NDUyMiAwMDAwMCBuIAowMDAwMDg0NTQ4IDAwMDAwIG4gCjAwMDAwODQ1NzQgMDAwMDAg
biAKMDAwMDA4NDYwMCAwMDAwMCBuIAowMDAwMDg0NjI2IDAwMDAwIG4gCjAwMDAwNDIwMjUgMDAw
MDAgbiAKMDAwMDA4NDY1MiAwMDAwMCBuIAowMDAwMDg0Njc4IDAwMDAwIG4gCjAwMDAwODQ3MDQg
MDAwMDAgbiAKMDAwMDA4NDczMCAwMDAwMCBuIAowMDAwMDg0NzU2IDAwMDAwIG4gCjAwMDAwODQ3
ODIgMDAwMDAgbiAKMDAwMDA4NDgwOCAwMDAwMCBuIAowMDAwMDg0ODM0IDAwMDAwIG4gCjAwMDAw
ODQ4NjAgMDAwMDAgbiAKMDAwMDA4NDg4NiAwMDAwMCBuIAowMDAwMDQ2OTM1IDAwMDAwIG4gCjAw
MDAwODQ5MTIgMDAwMDAgbiAKMDAwMDA4NDkzOCAwMDAwMCBuIAowMDAwMDg0OTY0IDAwMDAwIG4g
CjAwMDAwODQ5OTAgMDAwMDAgbiAKMDAwMDA1MTA1OSAwMDAwMCBuIAowMDAwMDg1MDE2IDAwMDAw
IG4gCjAwMDAwODUwNDIgMDAwMDAgbiAKMDAwMDA4NTA2OCAwMDAwMCBuIAowMDAwMDg1MDk0IDAw
MDAwIG4gCjAwMDAwODUxMjAgMDAwMDAgbiAKMDAwMDA4NTE0NiAwMDAwMCBuIAowMDAwMDg1MTcy
IDAwMDAwIG4gCjAwMDAwODUxOTggMDAwMDAgbiAKMDAwMDA4NTIyNCAwMDAwMCBuIAowMDAwMDg1
MjUwIDAwMDAwIG4gCjAwMDAwODUyNzYgMDAwMDAgbiAKMDAwMDA4NTMwMiAwMDAwMCBuIAowMDAw
MDg1MzI4IDAwMDAwIG4gCjAwMDAwODUzNTQgMDAwMDAgbiAKMDAwMDA4NTM4MSAwMDAwMCBuIAow
MDAwMDg1NDA4IDAwMDAwIG4gCjAwMDAwODU0MzUgMDAwMDAgbiAKMDAwMDA4NTQ2MiAwMDAwMCBu
IAowMDAwMDg1NDg5IDAwMDAwIG4gCjAwMDAwODU1MTYgMDAwMDAgbiAKMDAwMDA1NDQ4OSAwMDAw
MCBuIAowMDAwMDg1NTQzIDAwMDAwIG4gCjAwMDAwODU1NzAgMDAwMDAgbiAKMDAwMDA4NTU5NyAw
MDAwMCBuIAowMDAwMDg1NjI0IDAwMDAwIG4gCjAwMDAwODU2NTEgMDAwMDAgbiAKMDAwMDA4NTY3
OCAwMDAwMCBuIAowMDAwMDg1NzA1IDAwMDAwIG4gCjAwMDAwODU3MzIgMDAwMDAgbiAKMDAwMDA4
NTc1OSAwMDAwMCBuIAowMDAwMDg1Nzg2IDAwMDAwIG4gCjAwMDAwODU4MTMgMDAwMDAgbiAKMDAw
MDAwMDAxNSAwMDAwMCBuIAowMDAwMDAwMTU4IDAwMDAwIG4gCjAwMDAwMDAzMDIgMDAwMDAgbiAK
MDAwMDAwMDQ0NiAwMDAwMCBuIAowMDAwMDAwNTg5IDAwMDAwIG4gCjAwMDAwMDA3MzMgMDAwMDAg
biAKMDAwMDAwMDg3OCAwMDAwMCBuIAowMDAwMDAxMDIzIDAwMDAwIG4gCjAwMDAwMDExNjcgMDAw
MDAgbiAKMDAwMDAwMTMxMSAwMDAwMCBuIAowMDAwMDAxNDU1IDAwMDAwIG4gCjAwMDAwMDE2MDAg
MDAwMDAgbiAKMDAwMDAwMTc0NSAwMDAwMCBuIAowMDAwMDAxODg5IDAwMDAwIG4gCjAwMDAwMDIw
MzMgMDAwMDAgbiAKMDAwMDAwMjE3OCAwMDAwMCBuIAowMDAwMDAyMzIyIDAwMDAwIG4gCjAwMDAw
MDI0NjUgMDAwMDAgbiAKMDAwMDAwMjYxMCAwMDAwMCBuIAowMDAwMDAyNzU1IDAwMDAwIG4gCjAw
MDAwMDI4OTkgMDAwMDAgbiAKMDAwMDAwMzA0MiAwMDAwMCBuIAowMDAwMDAzMTg2IDAwMDAwIG4g
CjAwMDAwMDMzMzEgMDAwMDAgbiAKMDAwMDAwMzQ3NiAwMDAwMCBuIAowMDAwMDAzNjIwIDAwMDAw
IG4gCjAwMDAwMDM3NjUgMDAwMDAgbiAKMDAwMDAwMzkwOSAwMDAwMCBuIAowMDAwMDA0MDUyIDAw
MDAwIG4gCjAwMDAwMDQxOTYgMDAwMDAgbiAKMDAwMDAwNDM0MSAwMDAwMCBuIAowMDAwMDA0NDg1
IDAwMDAwIG4gCjAwMDAwMDQ2MjggMDAwMDAgbiAKMDAwMDAwNDc3MyAwMDAwMCBuIAowMDAwMDA0
OTE4IDAwMDAwIG4gCjAwMDAwMDUwNjEgMDAwMDAgbiAKMDAwMDAwNTIwNiAwMDAwMCBuIAowMDAw
MDA1MzUwIDAwMDAwIG4gCjAwMDAwMDU0OTQgMDAwMDAgbiAKMDAwMDAwNTYzOCAwMDAwMCBuIAow
MDAwMDA1NzgzIDAwMDAwIG4gCjAwMDAwMDU5MjggMDAwMDAgbiAKMDAwMDAwNjA3MiAwMDAwMCBu
IAowMDAwMDA2MjE2IDAwMDAwIG4gCjAwMDAwMDYzNjEgMDAwMDAgbiAKMDAwMDAwNjUwNSAwMDAw
MCBuIAowMDAwMDA2NjQ4IDAwMDAwIG4gCjAwMDAwMDY3OTQgMDAwMDAgbiAKMDAwMDAwNjkzOSAw
MDAwMCBuIAowMDAwMDA3MDg0IDAwMDAwIG4gCjAwMDAwMDcyMjggMDAwMDAgbiAKMDAwMDAwNzM3
MSAwMDAwMCBuIAowMDAwMDA3NTE1IDAwMDAwIG4gCjAwMDAwOTE1MzcgMDAwMDAgbiAKMDAwMDA4
NTg0MCAwMDAwMCBuIAowMDAwMDg1ODY3IDAwMDAwIG4gCjAwMDAwODU4OTQgMDAwMDAgbiAKMDAw
MDA4NTkyMSAwMDAwMCBuIAowMDAwMDg1OTQ4IDAwMDAwIG4gCjAwMDAwODU5NzUgMDAwMDAgbiAK
MDAwMDA4NjAwMiAwMDAwMCBuIAowMDAwMDg2MDI5IDAwMDAwIG4gCjAwMDAwNjg3MDAgMDAwMDAg
biAKMDAwMDA4NjA1NiAwMDAwMCBuIAowMDAwMDg2MDgzIDAwMDAwIG4gCjAwMDAwODYxMTAgMDAw
MDAgbiAKMDAwMDA4NjEzNyAwMDAwMCBuIAowMDAwMDg2MTY0IDAwMDAwIG4gCjAwMDAwODYxOTEg
MDAwMDAgbiAKMDAwMDA4MTY5MCAwMDAwMCBuIAowMDAwMDg2MjE4IDAwMDAwIG4gCjAwMDAwODYy
NDUgMDAwMDAgbiAKMDAwMDA4NjI3MiAwMDAwMCBuIAowMDAwMDg2Mjk5IDAwMDAwIG4gCjAwMDAw
ODYzMjYgMDAwMDAgbiAKMDAwMDA4NjM1MyAwMDAwMCBuIAowMDAwMDg2MzgwIDAwMDAwIG4gCjAw
MDAwODI3NjUgMDAwMDAgbiAKMDAwMDA4NjQwNyAwMDAwMCBuIAowMDAwMDg2NDM0IDAwMDAwIG4g
CjAwMDAwODY0NjEgMDAwMDAgbiAKMDAwMDA4NjQ4OCAwMDAwMCBuIAowMDAwMDg2NTE1IDAwMDAw
IG4gCjAwMDAwMTM3NTkgMDAwMDAgbiAKMDAwMDAxMzkwNSAwMDAwMCBuIAowMDAwMDE0MDUwIDAw
MDAwIG4gCjAwMDAwMTQxOTUgMDAwMDAgbiAKMDAwMDAxNDMzOCAwMDAwMCBuIAowMDAwMDE0NDgz
IDAwMDAwIG4gCjAwMDAwMTQ2MjggMDAwMDAgbiAKMDAwMDAxNDc3MiAwMDAwMCBuIAowMDAwMDE0
OTE3IDAwMDAwIG4gCjAwMDAwMTUwNjIgMDAwMDAgbiAKMDAwMDA4NjU0MiAwMDAwMCBuIAowMDAw
MDg2NTY5IDAwMDAwIG4gCjAwMDAwODY1OTYgMDAwMDAgbiAKMDAwMDA4NjYyMyAwMDAwMCBuIAow
MDAwMDg2NjUwIDAwMDAwIG4gCjAwMDAwODY2NzcgMDAwMDAgbiAKMDAwMDA4NjcwNCAwMDAwMCBu
IAowMDAwMDg2NzMxIDAwMDAwIG4gCjAwMDAwODY3NTggMDAwMDAgbiAKMDAwMDA4Njc4NSAwMDAw
MCBuIAowMDAwMDg2ODEyIDAwMDAwIG4gCjAwMDAwODY4MzkgMDAwMDAgbiAKMDAwMDA4Njg2NiAw
MDAwMCBuIAowMDAwMDg2ODkzIDAwMDAwIG4gCjAwMDAwODY5MjAgMDAwMDAgbiAKMDAwMDA4Njk0
NyAwMDAwMCBuIAowMDAwMDg2OTc0IDAwMDAwIG4gCjAwMDAwODcwMDEgMDAwMDAgbiAKMDAwMDA4
NzAyOCAwMDAwMCBuIAowMDAwMDg3MDU1IDAwMDAwIG4gCjAwMDAwODcwODIgMDAwMDAgbiAKMDAw
MDA4Mjg2MCAwMDAwMCBuIAowMDAwMDIwMDI3IDAwMDAwIG4gCjAwMDAwMjAyMzcgMDAwMDAgbiAK
MDAwMDAyMDQ1MSAwMDAwMCBuIAowMDAwMDIwNjExIDAwMDAwIG4gCjAwMDAwMjA4MzUgMDAwMDAg
biAKMDAwMDAyMTA1NSAwMDAwMCBuIAowMDAwMDIxMjY4IDAwMDAwIG4gCjAwMDAwMjE0NzYgMDAw
MDAgbiAKMDAwMDAyMTY5MiAwMDAwMCBuIAowMDAwMDIxOTA0IDAwMDAwIG4gCjAwMDAwODcxMDkg
MDAwMDAgbiAKMDAwMDA4NzEzNiAwMDAwMCBuIAowMDAwMDg3MTYzIDAwMDAwIG4gCjAwMDAwODcx
OTAgMDAwMDAgbiAKMDAwMDA4NzIxNyAwMDAwMCBuIAowMDAwMDg3MjQ0IDAwMDAwIG4gCjAwMDAw
ODcyNzEgMDAwMDAgbiAKMDAwMDA4NzI5OCAwMDAwMCBuIAowMDAwMDg3MzI1IDAwMDAwIG4gCjAw
MDAwODczNTIgMDAwMDAgbiAKMDAwMDA4NzM3OSAwMDAwMCBuIAowMDAwMDI2MjIyIDAwMDAwIG4g
CjAwMDAwODc0MDYgMDAwMDAgbiAKMDAwMDA4NzQzMyAwMDAwMCBuIAowMDAwMDg3NDYwIDAwMDAw
IG4gCjAwMDAwODc0ODcgMDAwMDAgbiAKMDAwMDA4NzUxNCAwMDAwMCBuIAowMDAwMDg3NTQxIDAw
MDAwIG4gCjAwMDAwODc1NjggMDAwMDAgbiAKMDAwMDA4NzU5NSAwMDAwMCBuIAowMDAwMDg3NjIy
IDAwMDAwIG4gCjAwMDAwODI5NDggMDAwMDAgbiAKMDAwMDA4NzY0OSAwMDAwMCBuIAowMDAwMDg3
Njc2IDAwMDAwIG4gCjAwMDAwODc3MDMgMDAwMDAgbiAKMDAwMDA4NzczMCAwMDAwMCBuIAowMDAw
MDg3NzU3IDAwMDAwIG4gCjAwMDAwODc3ODQgMDAwMDAgbiAKMDAwMDAzMDE5MiAwMDAwMCBuIAow
MDAwMDMwMzY0IDAwMDAwIG4gCjAwMDAwODc4MTEgMDAwMDAgbiAKMDAwMDA4NzgzOCAwMDAwMCBu
IAowMDAwMDg3ODY1IDAwMDAwIG4gCjAwMDAwODc4OTIgMDAwMDAgbiAKMDAwMDA4NzkxOSAwMDAw
MCBuIAowMDAwMDg3OTQ2IDAwMDAwIG4gCjAwMDAwODc5NzMgMDAwMDAgbiAKMDAwMDA4ODAwMCAw
MDAwMCBuIAowMDAwMDg4MDI3IDAwMDAwIG4gCjAwMDAwODgwNTQgMDAwMDAgbiAKMDAwMDA4ODA4
MSAwMDAwMCBuIAowMDAwMDg4MTA4IDAwMDAwIG4gCjAwMDAwODgxMzUgMDAwMDAgbiAKMDAwMDA4
ODE2MiAwMDAwMCBuIAowMDAwMDg4MTg5IDAwMDAwIG4gCjAwMDAwODgyMTYgMDAwMDAgbiAKMDAw
MDA4ODI0MyAwMDAwMCBuIAowMDAwMDg4MjcwIDAwMDAwIG4gCjAwMDAwODgyOTcgMDAwMDAgbiAK
MDAwMDA4ODMyNCAwMDAwMCBuIAowMDAwMDg4MzUxIDAwMDAwIG4gCjAwMDAwODgzNzggMDAwMDAg
biAKMDAwMDAzMzg3NiAwMDAwMCBuIAowMDAwMDM0MDU1IDAwMDAwIG4gCjAwMDAwMzQyNTAgMDAw
MDAgbiAKMDAwMDAzNDQ0NSAwMDAwMCBuIAowMDAwMDg4NDA1IDAwMDAwIG4gCjAwMDAwODg0MzIg
MDAwMDAgbiAKMDAwMDA4ODQ1OSAwMDAwMCBuIAowMDAwMDg4NDg2IDAwMDAwIG4gCjAwMDAwODg1
MTMgMDAwMDAgbiAKMDAwMDA4ODU0MCAwMDAwMCBuIAowMDAwMDg4NTY3IDAwMDAwIG4gCjAwMDAw
ODg1OTQgMDAwMDAgbiAKMDAwMDA4ODYyMSAwMDAwMCBuIAowMDAwMDg4NjQ4IDAwMDAwIG4gCjAw
MDAwODg2NzUgMDAwMDAgbiAKMDAwMDA4ODcwMiAwMDAwMCBuIAowMDAwMDg4NzI5IDAwMDAwIG4g
CjAwMDAwODg3NTYgMDAwMDAgbiAKMDAwMDA4ODc4MyAwMDAwMCBuIAowMDAwMDg4ODEwIDAwMDAw
IG4gCjAwMDAwODg4MzcgMDAwMDAgbiAKMDAwMDA4ODg2NCAwMDAwMCBuIAowMDAwMDM3NzYxIDAw
MDAwIG4gCjAwMDAwMzc5NDggMDAwMDAgbiAKMDAwMDAzODEzNyAwMDAwMCBuIAowMDAwMDM4MzQ1
IDAwMDAwIG4gCjAwMDAwODg4OTEgMDAwMDAgbiAKMDAwMDA4ODkxOCAwMDAwMCBuIAowMDAwMDg4
OTQ1IDAwMDAwIG4gCjAwMDAwODg5NzIgMDAwMDAgbiAKMDAwMDA4ODk5OSAwMDAwMCBuIAowMDAw
MDg5MDI2IDAwMDAwIG4gCjAwMDAwODkwNTMgMDAwMDAgbiAKMDAwMDA4OTA4MCAwMDAwMCBuIAow
MDAwMDQyNDk2IDAwMDAwIG4gCjAwMDAwNDI2NzQgMDAwMDAgbiAKMDAwMDA0Mjg1MiAwMDAwMCBu
IAowMDAwMDg5MTA3IDAwMDAwIG4gCjAwMDAwODkxMzQgMDAwMDAgbiAKMDAwMDA4OTE2MSAwMDAw
MCBuIAowMDAwMDg5MTg4IDAwMDAwIG4gCjAwMDAwODkyMTUgMDAwMDAgbiAKMDAwMDA4OTI0MiAw
MDAwMCBuIAowMDAwMDg5MjY5IDAwMDAwIG4gCjAwMDAwODkyOTYgMDAwMDAgbiAKMDAwMDA4OTMy
MyAwMDAwMCBuIAowMDAwMDg5MzUwIDAwMDAwIG4gCjAwMDAwODkzNzcgMDAwMDAgbiAKMDAwMDA4
OTQwNCAwMDAwMCBuIAowMDAwMDg5NDMxIDAwMDAwIG4gCjAwMDAwODk0NTggMDAwMDAgbiAKMDAw
MDA4OTQ4NSAwMDAwMCBuIAowMDAwMDg5NTEyIDAwMDAwIG4gCjAwMDAwODk1MzkgMDAwMDAgbiAK
MDAwMDA4OTU2NiAwMDAwMCBuIAowMDAwMDg5NTkzIDAwMDAwIG4gCjAwMDAwNDcyNTggMDAwMDAg
biAKMDAwMDA0NzQzNyAwMDAwMCBuIAowMDAwMDQ3NjQ2IDAwMDAwIG4gCjAwMDAwODk2MjAgMDAw
MDAgbiAKMDAwMDA4OTY0NyAwMDAwMCBuIAowMDAwMDg5Njc0IDAwMDAwIG4gCjAwMDAwODk3MDEg
MDAwMDAgbiAKMDAwMDA4OTcyOCAwMDAwMCBuIAowMDAwMDg5NzU1IDAwMDAwIG4gCjAwMDAwODk3
ODIgMDAwMDAgbiAKMDAwMDA4OTgwOSAwMDAwMCBuIAowMDAwMDg5ODM2IDAwMDAwIG4gCjAwMDAw
ODk4NjMgMDAwMDAgbiAKMDAwMDA4OTg5MCAwMDAwMCBuIAowMDAwMDg5OTE3IDAwMDAwIG4gCjAw
MDAwODk5NDQgMDAwMDAgbiAKMDAwMDA4OTk3MSAwMDAwMCBuIAowMDAwMDg5OTk4IDAwMDAwIG4g
CjAwMDAwOTAwMjUgMDAwMDAgbiAKMDAwMDA5MDA1MiAwMDAwMCBuIAowMDAwMDkwMDc5IDAwMDAw
IG4gCjAwMDAwOTAxMDYgMDAwMDAgbiAKMDAwMDA5MDEzMyAwMDAwMCBuIAowMDAwMDkwMTYwIDAw
MDAwIG4gCjAwMDAwOTAxODcgMDAwMDAgbiAKMDAwMDA5MDIxNCAwMDAwMCBuIAowMDAwMDUxNTQ3
IDAwMDAwIG4gCjAwMDAwNTE2ODUgMDAwMDAgbiAKMDAwMDA5MDI0MSAwMDAwMCBuIAowMDAwMDkw
MjY4IDAwMDAwIG4gCjAwMDAwOTAyOTUgMDAwMDAgbiAKMDAwMDA5MDMyMiAwMDAwMCBuIAowMDAw
MDU1MDE1IDAwMDAwIG4gCjAwMDAwOTE2NjkgMDAwMDAgbiAKMDAwMDA2MjI0MCAwMDAwMCBuIAow
MDAwMDkwMzQ5IDAwMDAwIG4gCjAwMDAwOTAzNzYgMDAwMDAgbiAKMDAwMDA5MDQwMyAwMDAwMCBu
IAowMDAwMDkwNDMwIDAwMDAwIG4gCjAwMDAwOTA0NTcgMDAwMDAgbiAKMDAwMDA5MDQ4NCAwMDAw
MCBuIAowMDAwMDkwNTExIDAwMDAwIG4gCjAwMDAwOTA1MzggMDAwMDAgbiAKMDAwMDA5MDU2NSAw
MDAwMCBuIAowMDAwMDkwNTkyIDAwMDAwIG4gCjAwMDAwOTA2MTkgMDAwMDAgbiAKMDAwMDA5MDY0
NiAwMDAwMCBuIAowMDAwMDkwNjczIDAwMDAwIG4gCjAwMDAwOTA3MDAgMDAwMDAgbiAKMDAwMDA5
MDcyNyAwMDAwMCBuIAowMDAwMDkwNzU0IDAwMDAwIG4gCjAwMDAwNjI0ODQgMDAwMDAgbiAKMDAw
MDA2Mjc0MSAwMDAwMCBuIAowMDAwMDYyODk4IDAwMDAwIG4gCjAwMDAwNjMwODMgMDAwMDAgbiAK
MDAwMDA2MzI0MiAwMDAwMCBuIAowMDAwMDYzNDI2IDAwMDAwIG4gCjAwMDAwOTA3ODEgMDAwMDAg
biAKMDAwMDA5MDgwOCAwMDAwMCBuIAowMDAwMDkwODM1IDAwMDAwIG4gCjAwMDAwOTA4NjIgMDAw
MDAgbiAKMDAwMDA5MDg4OSAwMDAwMCBuIAowMDAwMDkwOTE2IDAwMDAwIG4gCjAwMDAwOTA5NDMg
MDAwMDAgbiAKMDAwMDA5MDk3MCAwMDAwMCBuIAowMDAwMDkwOTk3IDAwMDAwIG4gCjAwMDAwOTEw
MjQgMDAwMDAgbiAKMDAwMDA5MTA1MSAwMDAwMCBuIAowMDAwMDkxMDc4IDAwMDAwIG4gCjAwMDAw
OTExMDUgMDAwMDAgbiAKMDAwMDA5MTEzMiAwMDAwMCBuIAowMDAwMDkxMTU5IDAwMDAwIG4gCjAw
MDAwOTExODYgMDAwMDAgbiAKMDAwMDA5MTIxMyAwMDAwMCBuIAowMDAwMDkxMjQwIDAwMDAwIG4g
CjAwMDAwOTEyNjcgMDAwMDAgbiAKMDAwMDA5MTI5NCAwMDAwMCBuIAowMDAwMDkxMzIxIDAwMDAw
IG4gCjAwMDAwOTEzNDggMDAwMDAgbiAKMDAwMDA5MTM3NSAwMDAwMCBuIAowMDAwMDkxNDAyIDAw
MDAwIG4gCjAwMDAwOTE0MjkgMDAwMDAgbiAKMDAwMDA5MTQ1NiAwMDAwMCBuIAowMDAwMDkxNDgz
IDAwMDAwIG4gCjAwMDAwOTE1MTAgMDAwMDAgbiAKMDAwMDA2OTE0OSAwMDAwMCBuIAowMDAwMDY5
MzU4IDAwMDAwIG4gCjAwMDAwNzI0NzUgMDAwMDAgbiAKMDAwMDA3MjY1NyAwMDAwMCBuIAowMDAw
MDcyODQwIDAwMDAwIG4gCjAwMDAwNzMwMTkgMDAwMDAgbiAKMDAwMDA3MzE5MSAwMDAwMCBuIAow
MDAwMDczMzczIDAwMDAwIG4gCjAwMDAwNzM1NTUgMDAwMDAgbiAKMDAwMDA3MzczMSAwMDAwMCBu
IAowMDAwMDczOTA0IDAwMDAwIG4gCjAwMDAwNzQwNzcgMDAwMDAgbiAKMDAwMDA3NDI1MCAwMDAw
MCBuIAowMDAwMDc0NDY4IDAwMDAwIG4gCjAwMDAwNzQ2MzcgMDAwMDAgbiAKMDAwMDA3NDg0MiAw
MDAwMCBuIAowMDAwMDc1MDM2IDAwMDAwIG4gCjAwMDAwNzUyMTggMDAwMDAgbiAKMDAwMDA3NTM5
OSAwMDAwMCBuIAowMDAwMDc1NjMwIDAwMDAwIG4gCjAwMDAwNzU4MzYgMDAwMDAgbiAKMDAwMDA3
NjA3NSAwMDAwMCBuIAowMDAwMDc2MjkyIDAwMDAwIG4gCjAwMDAwNzY0NzkgMDAwMDAgbiAKMDAw
MDA3NjY3NCAwMDAwMCBuIAowMDAwMDc2ODI5IDAwMDAwIG4gCjAwMDAwNzY5NjMgMDAwMDAgbiAK
MDAwMDA3NzEyOSAwMDAwMCBuIAowMDAwMDgyNjA2IDAwMDAwIG4gCjAwMDAwODI1MDUgMDAwMDAg
biAKMDAwMDA5MTc1NSAwMDAwMCBuIAowMDAwMDkxODMyIDAwMDAwIG4gCjAwMDAwOTE5MTkgMDAw
MDAgbiAKdHJhaWxlcgo8PC9JbmZvIDQ2OSAwIFIvSUQgWzxhMzZhZDRhZmE4ZWZjYTdlOWRjNGNi
OWJjNTc3MWNiNT48OWQwYjFmYjkyMGFlMmI4NGMxNjAwZWE4MmYxNzIwZTk+XS9Sb290IDQ2OCAw
IFIvU2l6ZSA0NzA+PgpzdGFydHhyZWYKOTE5NDEKJSVFT0YK
--000000000000cd049805bbfcffb0--


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 08:53:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 08:53:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88628.166746 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lETRI-0001it-SX; Tue, 23 Feb 2021 08:53:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88628.166746; Tue, 23 Feb 2021 08:53:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lETRI-0001im-PS; Tue, 23 Feb 2021 08:53:04 +0000
Received: by outflank-mailman (input) for mailman id 88628;
 Tue, 23 Feb 2021 08:53:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YqHm=HZ=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lETRH-0001ig-7e
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 08:53:03 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f1246c04-3ba0-47cb-a34b-3ff73b12a8c2;
 Tue, 23 Feb 2021 08:53:01 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A1BA4ACBF;
 Tue, 23 Feb 2021 08:53:00 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f1246c04-3ba0-47cb-a34b-3ff73b12a8c2
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614070380; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=HzZGoE20qkrk+cOtWLJdne7w3nvphGvj6RPYFs7gYSA=;
	b=gYZaL0jSdHiHuCLuPXf0dPoBZveZlYn+jBU9JFfXWh7dW7MQl7al0Wc3yqDhBFbwmSLHzU
	DpMB5X0PwVm3pOZveqLWQAeDdLxowgnepodJct6p+PYTuCGuVWD0ZcdtqYPzTb0AF8q8Zi
	gIWO57rn0Eng2+n1QprdCw0fRf7iyvg=
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: George Dunlap <george.dunlap@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>
References: <20210223023428.757694-1-volodymyr_babchuk@epam.com>
 <20210223023428.757694-2-volodymyr_babchuk@epam.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [RFC PATCH 01/10] sched: core: save IRQ state during locking
Message-ID: <4f7b4788-3b2e-8501-6aec-948b70320af2@suse.com>
Date: Tue, 23 Feb 2021 09:52:59 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210223023428.757694-2-volodymyr_babchuk@epam.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="pCfBWb9QPFsr4MRSolY7kw1TRfAHKnqoQ"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--pCfBWb9QPFsr4MRSolY7kw1TRfAHKnqoQ
Content-Type: multipart/mixed; boundary="90dSW4K7AGrBTbuCGBLPvpLH1tLm7uEIH";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: George Dunlap <george.dunlap@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>
Message-ID: <4f7b4788-3b2e-8501-6aec-948b70320af2@suse.com>
Subject: Re: [RFC PATCH 01/10] sched: core: save IRQ state during locking
References: <20210223023428.757694-1-volodymyr_babchuk@epam.com>
 <20210223023428.757694-2-volodymyr_babchuk@epam.com>
In-Reply-To: <20210223023428.757694-2-volodymyr_babchuk@epam.com>

--90dSW4K7AGrBTbuCGBLPvpLH1tLm7uEIH
Content-Type: multipart/mixed;
 boundary="------------9E3D4271ACE632562E1CB367"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------9E3D4271ACE632562E1CB367
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 23.02.21 03:34, Volodymyr Babchuk wrote:
> With XEN preemption enabled, scheduler functions can be called with
> IRQs disabled (for example, at the end of an IRQ handler), so we should
> save and restore the IRQ state in the scheduler code.

This breaks core scheduling.

Waiting for another sibling with interrupts disabled is an absolute
no-go, as deadlocks would be the consequence.

You could (in theory) make preemption and core scheduling mutually
exclusive, but this would break the forward path to mutexes etc.


Juergen

>=20
> Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
> ---
>   xen/common/sched/core.c | 33 ++++++++++++++++++---------------
>   1 file changed, 18 insertions(+), 15 deletions(-)
>=20
> diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
> index 9745a77eee..7e075613d5 100644
> --- a/xen/common/sched/core.c
> +++ b/xen/common/sched/core.c
> @@ -2470,7 +2470,8 @@ static struct vcpu *sched_force_context_switch(st=
ruct vcpu *vprev,
>    * sched_res_rculock has been dropped.
>    */
>   static struct sched_unit *sched_wait_rendezvous_in(struct sched_unit =
*prev,
> -                                                   spinlock_t **lock, =
int cpu,
> +                                                   spinlock_t **lock,
> +                                                   unsigned long *flag=
s, int cpu,
>                                                      s_time_t now)
>   {
>       struct sched_unit *next;
> @@ -2500,7 +2501,7 @@ static struct sched_unit *sched_wait_rendezvous_i=
n(struct sched_unit *prev,
>                   prev->rendezvous_in_cnt++;
>                   atomic_set(&prev->rendezvous_out_cnt, 0);
>  =20
> -                pcpu_schedule_unlock_irq(*lock, cpu);
> +                pcpu_schedule_unlock_irqrestore(*lock, *flags, cpu);
>  =20
>                   sched_context_switch(vprev, v, false, now);
>  =20
> @@ -2530,7 +2531,7 @@ static struct sched_unit *sched_wait_rendezvous_i=
n(struct sched_unit *prev,
>               prev->rendezvous_in_cnt++;
>               atomic_set(&prev->rendezvous_out_cnt, 0);
>  =20
> -            pcpu_schedule_unlock_irq(*lock, cpu);
> +            pcpu_schedule_unlock_irqrestore(*lock, *flags, cpu);
>  =20
>               raise_softirq(SCHED_SLAVE_SOFTIRQ);
>               sched_context_switch(vprev, vprev, false, now);
> @@ -2538,11 +2539,11 @@ static struct sched_unit *sched_wait_rendezvous=
_in(struct sched_unit *prev,
>               return NULL;         /* ARM only. */
>           }
>  =20
> -        pcpu_schedule_unlock_irq(*lock, cpu);
> +        pcpu_schedule_unlock_irqrestore(*lock, *flags, cpu);
>  =20
>           cpu_relax();
>  =20
> -        *lock =3D pcpu_schedule_lock_irq(cpu);
> +        *lock =3D pcpu_schedule_lock_irqsave(cpu, flags);
>  =20
>           /*
>            * Check for scheduling resource switched. This happens when =
we are
> @@ -2557,7 +2558,7 @@ static struct sched_unit *sched_wait_rendezvous_i=
n(struct sched_unit *prev,
>               ASSERT(is_idle_unit(prev));
>               atomic_set(&prev->next_task->rendezvous_out_cnt, 0);
>               prev->rendezvous_in_cnt =3D 0;
> -            pcpu_schedule_unlock_irq(*lock, cpu);
> +            pcpu_schedule_unlock_irqrestore(*lock, *flags, cpu);
>               rcu_read_unlock(&sched_res_rculock);
>               return NULL;
>           }
> @@ -2574,12 +2575,13 @@ static void sched_slave(void)
>       spinlock_t           *lock;
>       bool                  do_softirq =3D false;
>       unsigned int          cpu =3D smp_processor_id();
> +    unsigned long         flags;
>  =20
>       ASSERT_NOT_IN_ATOMIC();
>  =20
>       rcu_read_lock(&sched_res_rculock);
>  =20
> -    lock =3D pcpu_schedule_lock_irq(cpu);
> +    lock =3D pcpu_schedule_lock_irqsave(cpu, &flags);
>  =20
>       now =3D NOW();
>  =20
> @@ -2590,7 +2592,7 @@ static void sched_slave(void)
>  =20
>           if ( v )
>           {
> -            pcpu_schedule_unlock_irq(lock, cpu);
> +            pcpu_schedule_unlock_irqrestore(lock, flags, cpu);
>  =20
>               sched_context_switch(vprev, v, false, now);
>  =20
> @@ -2602,7 +2604,7 @@ static void sched_slave(void)
>  =20
>       if ( !prev->rendezvous_in_cnt )
>       {
> -        pcpu_schedule_unlock_irq(lock, cpu);
> +        pcpu_schedule_unlock_irqrestore(lock, flags, cpu);
>  =20
>           rcu_read_unlock(&sched_res_rculock);
>  =20
> @@ -2615,11 +2617,11 @@ static void sched_slave(void)
>  =20
>       stop_timer(&get_sched_res(cpu)->s_timer);
>  =20
> -    next =3D sched_wait_rendezvous_in(prev, &lock, cpu, now);
> +    next =3D sched_wait_rendezvous_in(prev, &lock, &flags, cpu, now);
>       if ( !next )
>           return;
>  =20
> -    pcpu_schedule_unlock_irq(lock, cpu);
> +    pcpu_schedule_unlock_irqrestore(lock, flags, cpu);
>  =20
>       sched_context_switch(vprev, sched_unit2vcpu_cpu(next, cpu),
>                            is_idle_unit(next) && !is_idle_unit(prev), n=
ow);
> @@ -2637,6 +2639,7 @@ static void schedule(void)
>       s_time_t              now;
>       struct sched_resource *sr;
>       spinlock_t           *lock;
> +    unsigned long         flags;
>       int cpu =3D smp_processor_id();
>       unsigned int          gran;
>  =20
> @@ -2646,7 +2649,7 @@ static void schedule(void)
>  =20
>       rcu_read_lock(&sched_res_rculock);
>  =20
> -    lock =3D pcpu_schedule_lock_irq(cpu);
> +    lock =3D pcpu_schedule_lock_irqsave(cpu, &flags);
>  =20
>       sr =3D get_sched_res(cpu);
>       gran =3D sr->granularity;
> @@ -2657,7 +2660,7 @@ static void schedule(void)
>            * We have a race: sched_slave() should be called, so raise a=
 softirq
>            * in order to re-enter schedule() later and call sched_slave=
() now.
>            */
> -        pcpu_schedule_unlock_irq(lock, cpu);
> +        pcpu_schedule_unlock_irqrestore(lock, flags, cpu);
>  =20
>           rcu_read_unlock(&sched_res_rculock);
>  =20
> @@ -2676,7 +2679,7 @@ static void schedule(void)
>           prev->rendezvous_in_cnt =3D gran;
>           cpumask_andnot(mask, sr->cpus, cpumask_of(cpu));
>           cpumask_raise_softirq(mask, SCHED_SLAVE_SOFTIRQ);
> -        next =3D sched_wait_rendezvous_in(prev, &lock, cpu, now);
> +        next =3D sched_wait_rendezvous_in(prev, &lock, &flags, cpu, no=
w);
>           if ( !next )
>               return;
>       }
> @@ -2687,7 +2690,7 @@ static void schedule(void)
>           atomic_set(&next->rendezvous_out_cnt, 0);
>       }
>  =20
> -    pcpu_schedule_unlock_irq(lock, cpu);
> +    pcpu_schedule_unlock_irqrestore(lock, flags, cpu);
>  =20
>       vnext =3D sched_unit2vcpu_cpu(next, cpu);
>       sched_context_switch(vprev, vnext,
>=20


--------------9E3D4271ACE632562E1CB367
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------9E3D4271ACE632562E1CB367--

--90dSW4K7AGrBTbuCGBLPvpLH1tLm7uEIH--

--pCfBWb9QPFsr4MRSolY7kw1TRfAHKnqoQ
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB4BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmA0wmsFAwAAAAAACgkQsN6d1ii/Ey9P
Igf3SnsyG0rDRCsIe6iKgPryLM6LTNlH4Xy54m8ksV5+5DCXRQOms6Casc66APflJj426Y/G445J
FivS2JuAgPT0DmeloHMKjAiSUBJIRK/YN2VqA2avMaGpTX6dnXzsfyDy8vRpf3RhBqaqXeG9lmvF
VlWT8GvCM4Z+uvAAbif19CHuuKpmr7skWSAk9o58SEOg9sTiODdyt10d6zKTFqChrNlQkrool9qi
bYEogwwWs34JTVBvXoYM9iAlfg2QwTtFI6xLHOs9zZlh+wuRB3N+/QVtLFlHXtG4tB2jI3+BdOxB
I6TVYFm9jW30sss0b01QvzqU8MpSdrIggVBoscxp
=1TRy
-----END PGP SIGNATURE-----

--pCfBWb9QPFsr4MRSolY7kw1TRfAHKnqoQ--


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 09:02:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 09:02:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88637.166757 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lETaN-0002oj-Q5; Tue, 23 Feb 2021 09:02:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88637.166757; Tue, 23 Feb 2021 09:02:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lETaN-0002oc-My; Tue, 23 Feb 2021 09:02:27 +0000
Received: by outflank-mailman (input) for mailman id 88637;
 Tue, 23 Feb 2021 09:02:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lETaM-0002oX-S8
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 09:02:26 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lETaI-0007Vh-6n; Tue, 23 Feb 2021 09:02:22 +0000
Received: from [54.239.6.190] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lETaH-0000hi-Pt; Tue, 23 Feb 2021 09:02:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=V+qic+6jyXXaqGoQ+mw274Kr/RppVX/x0MgjnLPN3Eo=; b=MS0ShGycXYLzbefrg0ftQIUVVT
	qDYmYcTNvix1Oxy2r0XhFM9x2Bu/zZcuohv0IMfkfpjjPiCLwDY0DQUbXhWA9zHBUFdiTQcDbkIXS
	EYLWQGlfnoh6Jhl0/csUMiYGfhJHsANbkWvKQkSibW6mKX0HzkvyOcc+GOeDAsJyiFks=;
Subject: Re: [RFC PATCH 00/10] Preemption in hypervisor (ARM only)
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: George Dunlap <george.dunlap@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>, Meng Xu <mengxu@cis.upenn.edu>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20210223023428.757694-1-volodymyr_babchuk@epam.com>
From: Julien Grall <julien@xen.org>
Message-ID: <e6d8726c-4074-fe4c-dbbe-e879da2bb7f6@xen.org>
Date: Tue, 23 Feb 2021 09:02:19 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210223023428.757694-1-volodymyr_babchuk@epam.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 23/02/2021 02:34, Volodymyr Babchuk wrote:
> Hello community,

Hi Volodymyr,

Thank you for the proposal, I like the idea of being able to preempt the 
vCPU thread. This would make it easier to implement some of the device 
emulation in Xen (e.g. vGIC, SMMU).

> 
> Subject of this cover letter is quite self-explanatory. This patch
> series implements PoC for preemption in hypervisor mode.
> 
> This is the sort of follow-up to recent discussion about latency
> ([1]).
> 
> Motivation
> ==========
> 
> It is well known that Xen is not preemptible. In other words, it is
> impossible to switch vCPU contexts while running in hypervisor
> mode. The only place where a scheduling decision can be made and one
> vCPU can be replaced with another is the exit path from the hypervisor
> mode. The one exception is idle vCPUs, which never leave the
> hypervisor mode for obvious reasons.
> 
> This leads to a number of problems. The following list is not
> comprehensive; it covers only things that I or my colleagues
> encountered personally.
> 
> Long-running hypercalls. Due to the nature of some hypercalls, they can
> execute for an arbitrarily long time. Mostly these are calls that deal
> with long lists of similar actions, like memory page processing. To
> deal with this issue Xen employs a most horrific technique called
> "hypercall continuation". 

I agree the code is not nice. However, it does serve another purpose 
than ...

> When the code that handles a hypercall decides that it should
> be preempted, it basically updates the hypercall parameters and moves
> the guest PC one instruction back. This causes the guest to re-execute
> the hypercall with altered parameters, which allows the hypervisor to
> continue hypercall execution later.

... just rescheduling the vCPU. It will also give the opportunity for 
the guest to handle interrupts.

If you don't return to the guest, then you risk getting an RCU sched 
stall on that vCPU (some hypercalls can take really, really long).
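For reference, the continuation flow described above can be modelled with a short, self-contained C sketch. This is purely illustrative: `do_hypercall`, `run_guest`, `BATCH`, and `pages_processed` are made-up names, not Xen's actual API, and in Xen the re-invocation is performed by the guest after the hypervisor rewinds the guest PC.

```c
#include <assert.h>

/* Illustrative model of a hypercall continuation. The handler processes
 * a bounded batch of work; if work remains, it returns the updated start
 * parameter, modelling Xen rewinding the guest PC so that the guest
 * re-issues the same hypercall with altered parameters. */

#define BATCH 16                /* work units between preemption checks */

static int pages_processed;     /* total work done across all invocations */

/* Returns -1 when the hypercall completes, otherwise the index to
 * continue from on the next invocation. */
static long do_hypercall(long start, long nr_pages)
{
    for ( long i = start; i < nr_pages; i++ )
    {
        pages_processed++;
        /* Infrequent preemption check, as the checks are costly. */
        if ( (i + 1) % BATCH == 0 && i + 1 < nr_pages )
            return i + 1;       /* "continuation": altered parameter */
    }
    return -1;
}

/* The "guest" side: re-execute the hypercall until it completes. */
static int run_guest(long nr_pages)
{
    long arg = 0;
    int invocations = 0;

    do {
        invocations++;
        arg = do_hypercall(arg, nr_pages);
    } while ( arg != -1 );

    return invocations;
}
```

Note that the progress (`arg`) lives on the "guest" side, which mirrors the guest-controlled-state concern raised in the cover letter.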


> This approach itself has obvious
> problems: the code that executes a hypercall is responsible for
> preemption, preemption checks are infrequent (because they are costly
> by themselves), hypercall execution state is stored in a
> guest-controlled area, and we rely on the guest's good will to
> continue the hypercall. 

Why is it a problem to rely on the guest's good will? The hypercalls 
should be preempted at a boundary that is safe to continue from.

> All this
> imposes restrictions on which hypercalls can be preempted, when they
> can be preempted, and how to write hypercall handlers. It also
> requires very accurate coding and has already led to at least one
> vulnerability - XSA-318. Some hypercalls cannot be preempted at all,
> like the one mentioned in [1].
> 
> Absence of hypervisor threads/vCPUs. The hypervisor owns only idle
> vCPUs, which are supposed to run when the system is idle. If the
> hypervisor needs to execute its own tasks that must run right now, it
> has no other way than to execute them on the current vCPU. But the
> scheduler does not know that the hypervisor is executing a hypervisor
> task, and accounts the time spent to a domain. This can lead to domain
> starvation.
> 
> Also, the absence of hypervisor threads leads to the absence of
> high-level synchronization primitives like mutexes, condition
> variables, completions, etc. This causes two problems: we need to use
> spinlocks everywhere, and we have problems when porting device drivers
> from the Linux kernel.
> 
> Proposed solution
> =================
> 
> It is quite obvious that to fix the problems above we need to allow
> preemption in hypervisor mode. I am not familiar with the x86 side,
> but for ARM it was surprisingly easy to implement. Basically, vCPU
> context in hypervisor mode is determined by its stack and general
> purpose registers. And the __context_switch() function perfectly
> switches them when running in hypervisor mode. So there is no hard
> restriction why it should be called only in the leave_hypervisor()
> path.
> 
> The obvious question is: when should we try to preempt a running
> vCPU?  And the answer is: when there was an external event. This means
> that we should try to preempt only when there was an interrupt request
> while we are running in hypervisor mode. On ARM, in this case the
> function do_trap_irq() is called. The problem is that the IRQ handler
> can be called when the vCPU is already in an atomic state (holding a
> spinlock, for example). In that case we should try to preempt right
> after leaving the atomic state. This is basically the whole idea
> behind this PoC.
> 
> Now, about the series composition.
> Patches
> 
>    sched: core: save IRQ state during locking
>    sched: rt: save IRQ state during locking
>    sched: credit2: save IRQ state during locking
>    preempt: use atomic_t to for preempt_count
>    arm: setup: disable preemption during startup
>    arm: context_switch: allow to run with IRQs already disabled
> 
> prepare the groundwork for the rest of the PoC. It appears that not
> all code is ready to be executed in IRQ state, and schedule() can now
> be called at the end of do_trap_irq(), which technically is considered
> IRQ handler state. Also, it is unwise to try to preempt things while
> we are still booting, so we need to enable an atomic context during
> the boot process.

I am really surprised that these are the only changes necessary in Xen. 
For a first approach, we may want to be conservative about when 
preemption happens, as I am not convinced that all the places are safe 
to preempt.

> 
> Patches
>    preempt: add try_preempt() function
>    sched: core: remove ASSERT_NOT_IN_ATOMIC and disable preemption[!]
>    arm: traps: try to preempt before leaving IRQ handler
> 
> are basically the core of this PoC. The try_preempt() function tries
> to preempt the vCPU either when called by the IRQ handler or when
> leaving an atomic state. The scheduler now enters an atomic state to
> ensure that it will not preempt itself. do_trap_irq() calls
> try_preempt() to initiate preemption.

AFAICT, try_preempt() will deal with the rescheduling. But how about 
softirqs? Don't we want to handle them in try_preempt() as well?
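As a reference point, the "preempt on IRQ, or defer until we leave the atomic state" logic can be sketched in a few lines of self-contained C. This is an illustrative model only: these are not the series' actual functions, and real scheduling and softirq handling are omitted, as noted above.

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal model of deferred preemption: an IRQ may request a reschedule
 * at any time, but the switch only happens once the vCPU leaves its
 * atomic (e.g. spinlock-holding) section. All names are illustrative. */

static int preempt_count;      /* > 0 while in an atomic section */
static bool preempt_pending;   /* set by the IRQ path */
static int reschedules;        /* how many times we actually switched */

static void do_schedule(void) { reschedules++; }

static void try_preempt(void)
{
    if ( preempt_count == 0 && preempt_pending )
    {
        preempt_pending = false;
        do_schedule();
    }
}

static void preempt_disable(void) { preempt_count++; }

static void preempt_enable(void)
{
    assert(preempt_count > 0);
    if ( --preempt_count == 0 )
        try_preempt();         /* retry once we leave the atomic state */
}

/* Called from the IRQ handler (do_trap_irq() in the series). */
static void irq_handler(void)
{
    preempt_pending = true;
    try_preempt();             /* preempt now if we are not atomic */
}
```

An IRQ arriving outside an atomic section reschedules immediately; one arriving inside merely sets the pending flag, and the reschedule happens at the matching preempt_enable().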

[...]

> Conclusion
> ==========
> 
> My main intention is to begin a discussion of hypervisor preemption.
> As I showed, it is doable right away and provides some immediate
> benefits. I do understand that a proper implementation requires much
> more effort. But we are ready to do this work if the community is
> interested in it.
> 
> Just to reiterate main benefits:
> 
> 1. More controllable latency. On embedded systems customers care about
> such things.

Is the plan to only offer preemptible Xen?

> 
> 2. We can get rid of hypercall continuations, which will result in
> simpler and more secure code.

I don't think you can get rid of them completely without risking the OS 
receiving an RCU sched stall. So you would need to handle those 
hypercalls differently.

> 
> 3. We can implement proper hypervisor threads, mutexes, completions,
> and so on. This will make scheduling more accurate, and ease both the
> porting of Linux drivers and the implementation of more complex
> features in the hypervisor.
> 
> 
> 
> [1] https://marc.info/?l=xen-devel&m=161049529916656&w=2
> 
> Volodymyr Babchuk (10):
>    sched: core: save IRQ state during locking
>    sched: rt: save IRQ state during locking
>    sched: credit2: save IRQ state during locking
>    preempt: use atomic_t to for preempt_count
>    preempt: add try_preempt() function
>    arm: setup: disable preemption during startup
>    sched: core: remove ASSERT_NOT_IN_ATOMIC and disable preemption[!]
>    arm: context_switch: allow to run with IRQs already disabled
>    arm: traps: try to preempt before leaving IRQ handler
>    [HACK] alloc pages: enable preemption early
> 
>   xen/arch/arm/domain.c      | 18 ++++++++++-----
>   xen/arch/arm/setup.c       |  4 ++++
>   xen/arch/arm/traps.c       |  7 ++++++
>   xen/common/memory.c        |  4 ++--
>   xen/common/page_alloc.c    | 21 ++---------------
>   xen/common/preempt.c       | 36 ++++++++++++++++++++++++++---
>   xen/common/sched/core.c    | 46 +++++++++++++++++++++++---------------
>   xen/common/sched/credit2.c |  5 +++--
>   xen/common/sched/rt.c      | 10 +++++----
>   xen/include/xen/preempt.h  | 17 +++++++++-----
>   10 files changed, 109 insertions(+), 59 deletions(-)
> 

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 09:04:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 09:04:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88640.166770 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lETc1-0002ve-5O; Tue, 23 Feb 2021 09:04:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88640.166770; Tue, 23 Feb 2021 09:04:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lETc1-0002vX-2N; Tue, 23 Feb 2021 09:04:09 +0000
Received: by outflank-mailman (input) for mailman id 88640;
 Tue, 23 Feb 2021 09:04:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lETbz-0002vR-Vw
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 09:04:07 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lETbz-0007X1-H3; Tue, 23 Feb 2021 09:04:07 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lETbz-0000lp-90; Tue, 23 Feb 2021 09:04:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=D0/+6qeBKZ7NdRi3TvHYcLe/GaCdj7Z4YKoUAwfw7es=; b=LSHWvaYQQTtMbvXWDl3sNJSwC4
	3raxNp2wSfxzX+pMMyimMWfw6LgbjdnrExLItpUUdYUgIwle+6lDg53hbdCFeFOqbe4oxNroUzUBR
	ty6/cU+dct5YIsJ7KlPIii8pCq9rPYewvv9lXWgi3QQPwNToEhlCNTZgX4fUcbU/dNjs=;
Subject: Re: [PATCH for-4.15] xen/sched: Add missing memory barrier in
 vcpu_block()
To: Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: iwj@xenproject.org, ash.j.wilding@gmail.com,
 Julien Grall <jgrall@amazon.com>, George Dunlap <george.dunlap@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>, xen-devel@lists.xenproject.org
References: <20210220194701.24202-1-julien@xen.org>
 <744ca7e5-328d-0c5f-bc52-e4c0e78dad97@suse.com>
 <alpine.DEB.2.21.2102221208050.3234@sstabellini-ThinkPad-T480s>
 <b68a644f-8b9c-3e1d-49c6-4058d276228b@xen.org>
 <dd2ce0b0-4bd4-15e5-c4b2-2540799ed493@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <7f246e9f-dc5b-f730-2cb3-1920309bdb3a@xen.org>
Date: Tue, 23 Feb 2021 09:04:04 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <dd2ce0b0-4bd4-15e5-c4b2-2540799ed493@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 23/02/2021 07:00, Jan Beulich wrote:
> On 22.02.2021 21:12, Julien Grall wrote:
>> On 22/02/2021 20:09, Stefano Stabellini wrote:
>>> On Mon, 22 Feb 2021, Jan Beulich wrote:
>>>> On 20.02.2021 20:47, Julien Grall wrote:
>>>>> This is a follow-up of the discussion that started in 2019 (see [1])
>>>>> regarding a possible race between do_poll()/vcpu_unblock() and the wake
>>>>> up path.
>>>>>
>>>>> I haven't yet fully thought about the potential race in do_poll(). If
>>>>> there is, then this would likely want to be fixed in a separate patch.
>>>>>
>>>>> On x86, the current code is safe because set_bit() is fully ordered. So
>>>>> the problem is Arm (and potentially any new architectures).
>>>>>
>>>>> I couldn't convince myself whether the Arm implementation of
>>>>> local_events_need_delivery() contains enough barriers to prevent the
>>>>> re-ordering. However, I don't think we want to play with the devil here
>>>>> as the function may be optimized in the future.
>>>>
>>>> In fact I think this ...
>>>>
>>>>> --- a/xen/common/sched/core.c
>>>>> +++ b/xen/common/sched/core.c
>>>>> @@ -1418,6 +1418,8 @@ void vcpu_block(void)
>>>>>    
>>>>>        set_bit(_VPF_blocked, &v->pause_flags);
>>>>>    
>>>>> +    smp_mb__after_atomic();
>>>>> +
>>>>
>>>> ... pattern should be looked for throughout the codebase, and barriers
>>>> be added unless it can be proven none is needed. >
>>> And in that case it would be best to add an in-code comment to explain
>>> why the barrier is not needed
>> .
>> I would rather not add a comment for every *_bit() call. It should be
>> pretty obvious for most of them that the barrier is not necessary.
>>
>> We should only add comments where the barrier is necessary or it is not
>> clear why it is not necessary.
> 
> I guess by "pattern" I didn't necessarily mean _all_ *_bit()
> calls - indeed there are many where it's clear that no barrier
> would be needed. I was rather meaning modifications like this
> of v->pause_flags (I'm sure there are further such fields).

Agreed. This work is mostly a side project for now, so I will continue 
to go through the pattern when I find time.
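For readers following along, the ordering requirement under discussion can be sketched with C11 atomics. This is a single-threaded model with hypothetical helper names; it shows where the barrier sits relative to the store and the load, not the actual cross-CPU race or the real Xen code.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Sketch of the pattern being fixed: the _VPF_blocked store must be
 * ordered before the "do I have pending events?" load, otherwise the
 * wake-up path on another CPU can miss the flag and the vCPU goes to
 * sleep with events pending. */

static atomic_ulong pause_flags;
#define VPF_blocked 1UL

static bool vcpu_block(bool local_events_need_delivery)
{
    /* set_bit(_VPF_blocked, &v->pause_flags); */
    atomic_fetch_or_explicit(&pause_flags, VPF_blocked,
                             memory_order_relaxed);

    /* smp_mb__after_atomic(): full barrier between store and load. */
    atomic_thread_fence(memory_order_seq_cst);

    if ( local_events_need_delivery )
    {
        /* Events arrived: clear the flag instead of going to sleep. */
        atomic_fetch_and_explicit(&pause_flags, ~VPF_blocked,
                                  memory_order_relaxed);
        return false;          /* did not block */
    }
    return true;               /* vCPU is blocked */
}
```

On x86 a fully ordered set_bit() already provides this fence implicitly, which is why the missing barrier only bites on architectures with relaxed atomics.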

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 09:05:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 09:05:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88644.166782 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lETcz-00033R-Ij; Tue, 23 Feb 2021 09:05:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88644.166782; Tue, 23 Feb 2021 09:05:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lETcz-00033K-Fd; Tue, 23 Feb 2021 09:05:09 +0000
Received: by outflank-mailman (input) for mailman id 88644;
 Tue, 23 Feb 2021 09:05:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UeLE=HZ=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lETcy-00033D-AM
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 09:05:08 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 981a266b-80c1-4f16-9973-5262a013fcef;
 Tue, 23 Feb 2021 09:05:06 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 981a266b-80c1-4f16-9973-5262a013fcef
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614071106;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=cU1FUb/Uw0Edow3JzmDgNyE+SR+cQtbnmvgWCEyYwFg=;
  b=hklGtv8k7qWckC8C6QCwpHVyFjYF2qn6TuhyS8/21DS5xzjilR+l3jen
   0WXupkU4FIzxu2EYIDrE+KFIB77kdoBPJb91rSs3UFBmuANNNuSNsViAt
   Iwma+yi85CdsweRQQI29fha71UYyFnnYYCjxK/wQo53MCXEM2ZOwh+KYj
   U=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: M/I3PTqTXbnsMlA8ezvsCFtjpmQH85JW5dnmNQgkH6rTlMCB3qNQF3IcNp6QNqyQRERTAQieM9
 h0+s9ik0kTbyk+OumtIsgQv/alufkHp4UL974mxwpPHnPrr7wEgt8tJ69ONZmCjcyd8z2nTZdV
 0Or5PM3pvluROWX0jeaMWsy+kXT2ZEyD0rbhKdvubGt51U1HJhN0xJ4SImGtSPlAfcd0IDFuVI
 haP3ohV2KzL9YwotInHdTdea6y0wkQcQelxkMaiTwr0rCw1+rnm8HzoSlqrflX+kmWtslrYhxs
 42g=
X-SBRS: 5.2
X-MesageID: 37999006
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,199,1610427600"; 
   d="scan'208";a="37999006"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=VZ78wjoxlwjq4LVSgD3JDXqLM6Vh9+Ix3GF1Ozue7iYjZqUmLHAR1aO8vRLvG17cT32Vj2F0mCcVKpvKyDv3r5aNEEx+tnIdW5XgnAPLo77rWCE4BFgS2TL0J+52VydapOZsE5NeUYnk7H4vVYuxE+wAoDmgH8ZFfQ8hW3ko0HCH8gq/q9J5EjyaESTZQIAIgL2i9CNPhNzCOnAejxiarHBlnX4/UcZjIl+QUnb9qfblD5Z9EPWt3PwT4hjwtNYAzQ9FvLWyXuLbWMkP9I21pG2sTcqp8c4riSkAr+g9PFmDrv4JmCIWS1Ctt7uHrBJalJ8cRZw5dJM4PtvhO/DEXA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=b08gw1GSwvJDwx5bQPJTl/PIeuA89t3a/v8ZhfhOjow=;
 b=jWx8IFLC+szq8Cmn6a5OEy/koO0G1+hrFr4aPHC5WNlINKcoVwQ0irKyUxNdNy9nKCnphAi0BGLlvRuSYNTX7GYIb2UsU60s/UIlkkZPPpEiYCSrk4gKmNJBjsqDE63JmVWgao2ivxLjWQamkMzmGQp/sxNZeaISonWrx0JdTChwk10vk6sUDE0JxXjUPX1Jbi/1F72AROiV/W7hBp1Y6mpo2rhJ3nMOSucLLvQHhHp3k5lvbJnXCuaK1gd3/v78lXgkMu1bvkQldkli97RBh+b2IMDtiorN1Gi0mP8DzY2kSPGmjscLY0Q529PlgSrIgF7nhyGYAU6eRJ6toh4aOw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=b08gw1GSwvJDwx5bQPJTl/PIeuA89t3a/v8ZhfhOjow=;
 b=w99kfavP0oZYDjIWdhCgOE9dnKU93yjQVW40J4Vv+NerzBLWFlmtV0dOxEpLiIYZMBTdv7jR823BYty+wT0xFRaEN/tY0m7grp3KLntXqSZZOuwKmwWz2uYP0kWJXDhrCWfgyHgv+dOyYiS8R99RLHQ/01dd7p0C6UTaXYLn+IA=
Date: Tue, 23 Feb 2021 10:04:56 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Bobby Eshleman <bobbyeshleman@gmail.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Daniel Kiper
	<daniel.kiper@oracle.com>, Andrew Cooper <andrew.cooper3@citrix.com>, "Jan
 Beulich" <jbeulich@suse.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v3 4/5] xen/x86: add some addresses to the Multiboot2
 header
Message-ID: <YDTFOD4jdE90fZ0/@Air-de-Roger>
References: <cover.1611273359.git.bobbyeshleman@gmail.com>
 <35ad940a3da56fc39c9f24e15c9f09ef74ad3448.1611273359.git.bobbyeshleman@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <35ad940a3da56fc39c9f24e15c9f09ef74ad3448.1611273359.git.bobbyeshleman@gmail.com>
X-ClientProxiedBy: MR2P264CA0124.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:30::16) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 75906212-4390-46cf-b1af-08d8d7da1b05
X-MS-TrafficTypeDiagnostic: DM5PR03MB2923:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM5PR03MB29236BDA5BF4ACA463B243CA8F809@DM5PR03MB2923.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7219;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: mXxDyhBdyf/v6xsWjs9THhGyGDCvq+kA5BUe3XOVUeRWaicwI0UwrTRiozUwKp+bnupPoT4ciMtmFnpD+evXPtNthfAP8HOhMoSSCT0hEAAQhunTgDU7wwAkFZOorxofKLlht0qjQ6XGZuizfGd3f9nQsw5Y3YNCpwtySjakqh4O4Pee4JiDQ6NCPxkXu/sW3op/wo72oo81T283eKz35tmAh+IF+teMuzgnDY12veIYTOTu2Md1JAIdYb8XZOWkDcVyIcRBZD16xYHHz4ZznDk1CzKtBUWpz2JHrsDiGocQiHiIK0vFUV4lBDyN2m9sSAHAL2JGGb77HhvgJ4UBidHCDCNxBnRKtYmZlOX2M3CSkU0Tphkmg5QG0Nir0BMZXHjk4nYzeQCK95rU0IMCVDbkI7LlxZBnbCdbNJID5TGKBhDqrp5kEowVl0z37XC/Vp4ZIb50XP67BiswgYRdbYVYmbWGFYLdAwAb7wLwy5sjMlk48r/jFuLPURv0ATl9cBGgmprgB/9Laybgd4Gp1xJ4jciiS+1LE0dYMTKEa/UtNxHLuwCuCa4H0A79rBjvCe3JS0W5j1N2Vxr5ZS+k0VG1oarxe4xAusjZxTZziSk=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(366004)(396003)(136003)(376002)(39860400002)(346002)(16526019)(4326008)(186003)(26005)(9686003)(86362001)(66946007)(6666004)(66476007)(33716001)(6916009)(316002)(6496006)(54906003)(6486002)(8676002)(66556008)(2906002)(478600001)(8936002)(5660300002)(85182001)(956004)(83380400001)(966005);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?SnVCai9KSVFyMFhqWVljTWF0ZXBpSzI3L0t3eFNCQ2lXVVhybFBTbmZ1UkEz?=
 =?utf-8?B?Uyt3aGo1NUl1QUdFSWZiSm9ZdFpuTHlVRElyNmlRMWM4WGlEakhRaUxyRFhw?=
 =?utf-8?B?OStKMUdLWWdOTzZFY3dPUUlna01Cb1VZZDFUNFpQbmhGWktaT3FmWnhwZTNh?=
 =?utf-8?B?SWZlRUxIS1pOMmxQRXc4dkdpVzZwWko1Z1RVK3NVZzhaeHUycEpLUVFwdmxB?=
 =?utf-8?B?NDBrSzVKblN1TUJKdFUxK1lqenRPelF0QUVRUmhnZ2srT3NHSnBEVW5DRVB1?=
 =?utf-8?B?SVBpMm5GQncwOURkeU9Ca2pvT0taTFNSVXc1bGMxU3RTTzRwd1RraHJTZmdn?=
 =?utf-8?B?QjJPcU5xVHE3RXdGMmN1b3B2emdVbTE1dURoSEgydkxWaTAyY1lXYW9aa1Rl?=
 =?utf-8?B?WTl3QlAwTXhkMEt4bUwyT3pqSzRZTCtZYitGRjVaTm41YUg0UVU0S0tEQkU5?=
 =?utf-8?B?UGNDZk4vWVVBcjJGd2MycVoveDJhSmZSOUJCdFAvV29hQWUrV1dacHhtaFNr?=
 =?utf-8?B?V3NpMGllQllKTDMxUmpILzdZK0QveXp1OXNHWnUxZTZlcm9DKzlLYVo1SE9U?=
 =?utf-8?B?UUkxcEFyY3EzOW1QcWxMa1VYM0RaR2Y1aXJNdElsVW5vVGZEUnFCTXpCK2hV?=
 =?utf-8?B?L3cwU053Y2RpV1QzblhpdThKRFdTbUNGMFN5WjVnVHBTRjNkdUh5b0FiK3JJ?=
 =?utf-8?B?Wjlzdkd3eXkvMUJ2OXRvSjhNOFBKN0JiNkJLbHRMU2J4NzdNMW9jZC9UR0Zl?=
 =?utf-8?B?WFJla1h6d010ZWZ1KzdxTmRCVnhIYk1vVlFNdXJRZy9ybStpV21oc2dYK1ZO?=
 =?utf-8?B?TThmSWFyRDM5OWhmNVFxSlQ1bUc4SFAvdzJXMXluaTRnM1BsWE1jV2w5VDNQ?=
 =?utf-8?B?dm1NZk0veSt2VWRzUzNEWnEvWFEzQlFJN3NUc1c5cUZhelBNUDIzeU1CRXFz?=
 =?utf-8?B?MmdWSFhMMDZsRkVUODBCbEdXakdHOFFtNGdPbzJFU1FIOGwwY3RHaXlmdjBh?=
 =?utf-8?B?d1RiejgrOWdkK3RUeml6UjFENDNESVppNXVVSDVKVWFWNGo0a2c2Y1VJTTc2?=
 =?utf-8?B?Z3RTVENad2lCajNjOU5LcFZDSWRXdHhyS3lNUkEwbWlkQ0lTdTNsVWx6NmlN?=
 =?utf-8?B?QzVpSkhZUzZwZU5iUitmd0EycTcwM3UybStTazYvT0NJZGY3QXluQnlja1NX?=
 =?utf-8?B?cFg1NVZUaC81NWhKcldlQ054M0pxS2t4TkgyMDR2VWZyRHhaMVVGTVNIbnFh?=
 =?utf-8?B?SkZ6T0dOelNaN3J2VWVqSEpZeTdnS3FVL0picitJK21ETm8zWHdHVUpqVEVM?=
 =?utf-8?B?M3BpZXNjck11MitVMUtSd2VYS24vbVJLMUpwR0UwNkc4OU4rZUFoQ1dBQklT?=
 =?utf-8?B?SktqYldmeEFoV2RjbTRiL0dVMTJsSSt1S3FDVXV1anBJQlQzSFQ1bTBWRXFU?=
 =?utf-8?B?ajh0ekVDaXhjWWcrb1N2K0NTWDFTVmp2OVRFc1o3eVBHNGJoUkw4UTR4NEVB?=
 =?utf-8?B?RG5KVGx3Y09FSURIYU1CRThjYnl5a21nZzRTYjRublVuTXRrMnhqNFBhWjFm?=
 =?utf-8?B?dFdySytHNDR5bGFCK0M0OUoxVllPaXlxV29PekVhV2wxazIvWEtYT2NUQU9W?=
 =?utf-8?B?V0piZ0Y3aDhjanBLMEdFVFFCMVd4U3NYSWNaVXlzNlZVazQyaWlwZ3IzYUx4?=
 =?utf-8?B?cWljRHpDNjI1MHNMY0dUdi81YVpoWCtkYlhJSGZmeXlGUjZyZVVnelNqU3Q1?=
 =?utf-8?B?YnZheC91cExGZ1pKaFFHQk9rUlk0eEV0d0NKY1AwZE9pSjJ6UFZXMUpGY0lX?=
 =?utf-8?B?OVZ1S3FZdnZUTzJldk1LUT09?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 75906212-4390-46cf-b1af-08d8d7da1b05
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Feb 2021 09:05:03.1684
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: t1vYa+uDhufouumOFutXRRr2LTwjaDLfZaVidrSR2Uqgrg7snv6dtjlQAUVufrga49X4YJctx1sv9GY66XIjMg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB2923
X-OriginatorOrg: citrix.com

On Thu, Jan 21, 2021 at 04:51:43PM -0800, Bobby Eshleman wrote:
> From: Daniel Kiper <daniel.kiper@oracle.com>
> 
> In comparison to ELF the PE format is not supported by the Multiboot2
> protocol. So, if we wish to load xen.mb.efi using this protocol we have
> to add MULTIBOOT2_HEADER_TAG_ADDRESS and MULTIBOOT2_HEADER_TAG_ENTRY_ADDRESS
> tags into Multiboot2 header.
> 
> Additionally, put MULTIBOOT2_HEADER_TAG_ENTRY_ADDRESS and
> MULTIBOOT2_HEADER_TAG_ENTRY_ADDRESS_EFI64 tags close to each
> other to make the header more readable.
> 
> The Multiboot2 protocol spec can be found at
>   https://www.gnu.org/software/grub/manual/multiboot2/
> 
> Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com>
> Signed-off-by: Bobby Eshleman <bobbyeshleman@gmail.com>
> ---
>  xen/arch/x86/boot/head.S | 19 +++++++++++++++----
>  1 file changed, 15 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/arch/x86/boot/head.S b/xen/arch/x86/boot/head.S
> index 189d91a872..f2edd182a5 100644
> --- a/xen/arch/x86/boot/head.S
> +++ b/xen/arch/x86/boot/head.S
> @@ -94,6 +94,13 @@ multiboot2_header:
>          /* Align modules at page boundry. */
>          mb2ht_init MB2_HT(MODULE_ALIGN), MB2_HT(REQUIRED)
>  
> +        /* The address tag. */
> +        mb2ht_init MB2_HT(ADDRESS), MB2_HT(REQUIRED), \
> +                   sym_offs(multiboot2_header), /* header_addr */ \
> +                   sym_offs(start),             /* load_addr */ \
> +                   sym_offs(__bss_start),       /* load_end_addr */ \
> +                   sym_offs(__2M_rwdata_end)    /* bss_end_addr */

Shouldn't this only be present when a PE binary is built?

You seem to unconditionally add this to the header, even when the
resulting binary will be in ELF format?

According to the spec: "This information does not need to be provided
if the kernel image is in ELF format", and hence Xen shouldn't require
the loader to understand this tag unless it's strictly required, as
the presence of the tag forces the bootloader to use the presented
information in order to load the kernel, regardless of the underlying
binary format.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 09:26:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 09:26:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88653.166793 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lETxG-00054y-8d; Tue, 23 Feb 2021 09:26:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88653.166793; Tue, 23 Feb 2021 09:26:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lETxG-00054r-5a; Tue, 23 Feb 2021 09:26:06 +0000
Received: by outflank-mailman (input) for mailman id 88653;
 Tue, 23 Feb 2021 09:26:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lETxE-00054j-I4; Tue, 23 Feb 2021 09:26:04 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lETxE-0007sU-8B; Tue, 23 Feb 2021 09:26:04 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lETxE-00057E-0P; Tue, 23 Feb 2021 09:26:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lETxD-0007rD-W7; Tue, 23 Feb 2021 09:26:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=5TwNqf1YbBzEhvYcRGpA2giXbm5KE9fX756eTFVy+Z8=; b=3cc/DDabSBheuaXzZuakwruD3A
	QiPdfDS9SWh6+VYUHG/x1T2NP4PReQMrVN0aABJzT3FOwZqbsP2q6d5aVauIMew94+hnCE6vpVqOL
	vD6BBsC3hYKd28bJD/DGkX6sqHKUKtbmTEjySdgzPH/WqRm2SvStSRIfX1wgQF26dJQo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159570-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 159570: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=87a9d3a6b01baebdca33d95ad0e79781b6a46ca8
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 23 Feb 2021 09:26:03 +0000

flight 159570 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159570/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              87a9d3a6b01baebdca33d95ad0e79781b6a46ca8
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  228 days
Failing since        151818  2020-07-11 04:18:52 Z  227 days  220 attempts
Testing same since   159570  2021-02-23 04:18:53 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 44042 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 09:27:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 09:27:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88658.166813 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lETyY-0005Dw-OT; Tue, 23 Feb 2021 09:27:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88658.166813; Tue, 23 Feb 2021 09:27:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lETyY-0005Dp-KE; Tue, 23 Feb 2021 09:27:26 +0000
Received: by outflank-mailman (input) for mailman id 88658;
 Tue, 23 Feb 2021 09:27:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wmaw=HZ=citrix.com=ross.lagerwall@srs-us1.protection.inumbo.net>)
 id 1lETyY-0005Di-1L
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 09:27:26 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b7be0412-0d7f-41ab-904c-9595b6b1c5e7;
 Tue, 23 Feb 2021 09:27:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b7be0412-0d7f-41ab-904c-9595b6b1c5e7
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614072444;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=elkiEyw/x+p4Mzp82r1PU6gv61RdX2VzYrI0EcXBgiE=;
  b=As6UcGJK6bypwN1OEdqFVkppT+JLta/yNR3b8tIx/9GSesWdEfA5AD4W
   uQm/dwo/4rwPoGqnrQE0/8ZFpKcHCGD09eyB5yNpL+MpXGka4dLgX34l5
   qc/lDpvFo8Ep8sb/UQY1IQ7426bp4SzyRIdWZIm45QELvLh9jupohnyJ2
   k=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: r+v2a+pAcLkDXkvtq+LmVooxIXl9Yi1OFFOUqXQVl7OPoBQ/F1wR39/aL26u9L65xhinLzNmzD
 IkM5sCEPjCfpmf9zzrGZh3QpaL3V+Q5QWq+TyAXm2mwA4eOOjMRr16+CUyDyewQEuJfODY5MbN
 wFE9A627oR/uIUpbMpS8JESAt4rQ0ZHYTeHTVyqMz0CtZl4/Eu4i5+crISP29PhiJDm9wpiP9E
 8gWCC28x4GMlXuCPvXKvUr/Ak6638Jo/O1FCh/AH4ug72oJzWRID7AuaDHkXLapWfmKY7SqObb
 oGk=
X-SBRS: 5.1
X-MesageID: 37826829
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,199,1610427600"; 
   d="scan'208";a="37826829"
Subject: Re: [PATCH v3 2/8] xen/events: don't unmask an event channel when an
 eoi is pending
To: Juergen Gross <jgross@suse.com>, <xen-devel@lists.xenproject.org>,
	<linux-kernel@vger.kernel.org>
CC: Boris Ostrovsky <boris.ostrovsky@oracle.com>, Stefano Stabellini
	<sstabellini@kernel.org>, <stable@vger.kernel.org>, Julien Grall
	<julien@xen.org>
References: <20210219154030.10892-1-jgross@suse.com>
 <20210219154030.10892-3-jgross@suse.com>
From: Ross Lagerwall <ross.lagerwall@citrix.com>
Message-ID: <d368a948-17d6-4e64-110e-bede3158f49f@citrix.com>
Date: Tue, 23 Feb 2021 09:26:49 +0000
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210219154030.10892-3-jgross@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 2021-02-19 15:40, Juergen Gross wrote:
> An event channel should be kept masked when an eoi is pending for it.
> When being migrated to another cpu it might be unmasked, though.
> 
> In order to avoid this, keep three different flags for each event channel
> to be able to distinguish "normal" masking/unmasking from eoi related
> masking/unmasking and temporary masking. The event channel should only
> be able to generate an interrupt if all flags are cleared.
> 
> Cc: stable@vger.kernel.org
> Fixes: 54c9de89895e0a36047 ("xen/events: add a new late EOI evtchn framework")
> Reported-by: Julien Grall <julien@xen.org>
> Signed-off-by: Juergen Gross <jgross@suse.com>

I tested this patch series backported to a 4.19 kernel and found that,
when doing a reboot loop of Windows guests with PV drivers, it
occasionally ends up in a state with some event channels pending and
masked in dom0, which breaks networking in the guest.

The issue seems to have been introduced with this patch, though at first
glance the patch appears correct. I haven't yet looked into why it is
happening. Have you seen anything like this?

Thanks,
Ross


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 09:34:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 09:34:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88664.166827 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEU5V-0006GR-Gd; Tue, 23 Feb 2021 09:34:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88664.166827; Tue, 23 Feb 2021 09:34:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEU5V-0006GK-Da; Tue, 23 Feb 2021 09:34:37 +0000
Received: by outflank-mailman (input) for mailman id 88664;
 Tue, 23 Feb 2021 09:34:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UeLE=HZ=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lEU5U-0006GE-Ig
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 09:34:36 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 471dd317-2404-45d0-a7b4-149f9eb8661b;
 Tue, 23 Feb 2021 09:34:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 471dd317-2404-45d0-a7b4-149f9eb8661b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614072875;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=QnwEhqK3nf/JJvD0AKuux9RAyrH6udqqWb22TaRYJ5A=;
  b=Z/hu9Y+TrFTqvlrXyEkU8Jwq+j6zcB/5oRIsNfhCCSPji630IqeJIuqW
   zd2NS63yL2FCCw69rwFcKshvFo8bsiXKFB0i8DlWpK3aWEWPB66W/Bb3N
   ULrqCt0oU9WIsgkae35ROUcR0BwMI9HwAsK8E4Pwrtr3nLDrYdb+BNAz6
   g=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: 3IGz5ZaZZzreJYT8gcBgo0KqgcsOggKGYJVPHI8BJ63nMU+Bup50SVrf/T2PGg4SMMSn4FNQpV
 NNIFztnTp/r5nU0DYVJIGg5Td/TUUp8byvExfZ+Z30Pvf+Wr3RGgdMxqcStIy2AIWGjFBfqIRv
 aP/SqKPAHr3lSatL6qYoECsl1jKoO5jW5bsp+vFaUCtDHuwu6in+qm+0dw0uGaUXLFnahvYqeU
 NeJiur5gXNdvKPAwKwRwLIHLix3xoABMnbU+SdKWQ0466oCdZ3eeYlaIt6y31pg8KaIJukwZ/P
 isY=
X-SBRS: 5.2
X-MesageID: 37805208
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,199,1610427600"; 
   d="scan'208";a="37805208"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=eZcVOjLibKuqskGHYwceMldM9ELA9RVocPjMj8Txzq4acUDcvFhB+CFvWAHV/4CR5afbSouqjOP08FP5qRyk+FPA4HTSwGIifH9tUPaeX+MwTXXuLChRY7XEl1jItgUcjLnKZi0X035GYpOfnQjmdIkjYmGuYbKyW8j5Jc+KvjTYWAFIY5RuzSa/HB7FNG2ixlajnd0eiLgGFunhXaFFTh5reQ012R1YGi0z6GNF2ejk4YVCsrpMD9YEqmNxpmg1xtds2qbywfIFf1Zr7LJw8kKB8NAmgqm6daxDr8EHzjibUgGYcVBPWkLPUJaQ3rS9oaB9b92zy1LzI129FA/YkA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ylQB6UdonLTXrzMiypbu+eZrZZWQq2NInzj2fKjWhaw=;
 b=ZuOMi75qzDGu+nVPTJma//E8+UVmgLG/T/hMsaz+SbZ7sh25PfP4rQRSAu3ZPhT7OlxXALYvGbaGdj8KxaNlu9Zy2SddbbscozCrXqeN0efdzNBRNfM2ZDjPM4tMEmp3ikuAUJ8HI3IxVWF0qcp1x+o3pdVycSPkqM8QT9gABOax8U7yN6lZUpN5JhYCmgehe68OolB6UuTvMEoJauZbxp6JktNjWVd9xNJoehNngNTFQLLKr5oUoa40TkCGtTXmdAyMeZx3XzKJ7KhiECRMMzoH4IqZc/iDB+KgxW9PTxjzk5mQH6XjxpGAW/p3cWhCqO4fr4HX65aJlNkvyt4r7g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ylQB6UdonLTXrzMiypbu+eZrZZWQq2NInzj2fKjWhaw=;
 b=OZxk6cd+iSgjDzyxP3cRW1+UyF5Xdj2N3mX/1vyS4UunqLQGG0GWjimy5WxLGDx4Uy5Ivn5O9yi9KmWmukli7whVp5xiKmMsc9N9O6Sds6n3tczlITDMVDmNcBGwBjJh5Zfpfzu12UvTQaM7MJ6a+N+wSMHDz/ugi8eymgRSw60=
Date: Tue, 23 Feb 2021 10:34:25 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	<xen-devel@lists.xenproject.org>, <iwj@xenproject.org>, <wl@xen.org>,
	<anthony.perard@citrix.com>, <andrew.cooper3@citrix.com>,
	<jun.nakajima@intel.com>, <kevin.tian@intel.com>
Subject: Re: [PATCH v2 2/4] x86: Introduce MSR_UNHANDLED
Message-ID: <YDTMIW5vBe0IncVR@Air-de-Roger>
References: <1611182952-9941-1-git-send-email-boris.ostrovsky@oracle.com>
 <1611182952-9941-3-git-send-email-boris.ostrovsky@oracle.com>
 <YC5GrgqwsR/eBwpy@Air-de-Roger>
 <4e585daa-f255-fbff-d1cf-38ef49f146f5@oracle.com>
 <YDOQvU1h8zpOv5PH@Air-de-Roger>
 <ce2ef7a3-0583-ffff-182a-0ab078f45b82@oracle.com>
 <307a4765-1c54-63fd-b3fb-ecb47ba3dbca@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <307a4765-1c54-63fd-b3fb-ecb47ba3dbca@suse.com>
X-ClientProxiedBy: AS8PR04CA0126.eurprd04.prod.outlook.com
 (2603:10a6:20b:127::11) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: b7a4ce81-f4e2-4ccc-c6e0-08d8d7de3922
X-MS-TrafficTypeDiagnostic: DS7PR03MB5608:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DS7PR03MB56088A919674474F3A7D7F4B8F809@DS7PR03MB5608.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: PNEOSzSj/JQ+waKZOEqHS1PN6dEZFq6QGnK9HFA4q3FShOs/RdeaaxXhyKZycuTwo+Zii6376rN0FvhwN6rfmjUj+/60qiaoAH0KdyTVtIEEbdbh5ZEX2K5Qi77A1TJG5JK80pVG7qdK9EeKuCH+xFi0Gg63B9xaEg6h/+PIB/Ngi9kqvcKARE+uJDbRNBa5hUiX0OGG78qxK8U1tdKpkCzvZ9Dzpy60XNq6cH32kojrxy1gIL/ObC6chsBwQZP9Syijz7w6Oy5aiOA6IxFq0MlJtRzZaJkPtptbuRVYbYZfCyofd12t9Ldk6bwrEKl8HWN2Tag2Vg0EK8H/yRj7JWNvPe/djLXiwtAoxF+Yq1drJOIC0ihh/u8iFAQqiMafg2XA4WgO5m1EWoLaMIxAGLm2NDzz34iJoN3lIUdgS3wvoXWXj9LoU90jlGC/i8ySiq/sR8txF4M3bPk4RlQh+q5SAPH0KGF2ACaKjiAfaHHGl+uAkBwDlngTX7QB+G17Mi7S4RDB0JrHRlgnIjfHaQ==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(136003)(366004)(346002)(396003)(376002)(39860400002)(9686003)(316002)(478600001)(4326008)(8936002)(8676002)(66556008)(66476007)(33716001)(6486002)(186003)(83380400001)(66946007)(26005)(2906002)(86362001)(6916009)(5660300002)(16526019)(85182001)(6496006)(6666004)(956004)(53546011);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?ZTdUUzFZaFV1OUZGRmhWWWwxcG1JZlkvMnlZSFpzSTZ0NFkyeVhOVEFJejJj?=
 =?utf-8?B?Q3k4SkNSNFNuckVMby9VSExXNGJ6QWtGMjRRc014R00wTERWNFlQS2xHdkJB?=
 =?utf-8?B?cm02MzI4QjZteHloTTlKNG5obStCSjRoK1ZPcUhjUEtscm9uRVN0MUFvRFJN?=
 =?utf-8?B?aHgwR2lnYkxJd0wrSXZ3TUNaenMyYnVhZDVsQ3dlbTZNb1pBZU5yOEtpMmJL?=
 =?utf-8?B?cmJyRTR4ais0S0d2Ymo0R2hyKzhzeURST2JhSEZPKy91dmhyRWZTREdwaEpL?=
 =?utf-8?B?aGNqRW5YSVpndzMySVJROGFlSi9VYmZHRXBsdFRYTG9hMGtkUDNhVkVhaGEw?=
 =?utf-8?B?RW1XNm1vWjZPby9lU21DOERoR0dEZlZqZTBHTW5DdllnbENudUZHcmtza1JX?=
 =?utf-8?B?aTZCTnlpcHlxZEpBVUZ0d3UxeG9NeHZQMklDZ0pPWG1jMlhYSHNob1NLS0hn?=
 =?utf-8?B?eFlERm5wSGwvR1VJVmphRUdWS3NxUEh3VjlHVDNrc0hlL094cklhYW9NWmpz?=
 =?utf-8?B?NUNJSzZEUGVueFJxZWtxOVVsNnVFR25pUjhTenZMc2FaRDdnUEQrREhNSzdQ?=
 =?utf-8?B?aW5yVlFxS09FcTNLTGVkTERGcitlZmFJK0pkWm4yWEprRlgveU9VVEFyZThB?=
 =?utf-8?B?aTljWFEzOTJPdUM2bm1JZFdnVk1DQnR5cXNzZG1zb25TMmtzMFdUQnlhNU9N?=
 =?utf-8?B?NjJsT0JQQUw2aFcyaGhsTkZ5TTVZdWtKQ3daZmE5SnlXTFVHSDU2UVRzbDBk?=
 =?utf-8?B?ZUZZV203SmJHcWVsMlV2R0pPNEg5YmNiRVROSXE0VFFpeEhPd01COTVQVjFF?=
 =?utf-8?B?b3NQMDJ6VmswaG1uS2Q5UkR5VDBSZDRLakVEcW9sdzJSU3NQcFFkR2EzTytC?=
 =?utf-8?B?RVNmSlhmZUl6Njd5ZWd5cWx3dlk3V1hnL1BUaC9iLzAvclJVQWppSHVFQlJY?=
 =?utf-8?B?b3JhcVRTS2gwTkRwMEk0eEJFVnVjSFJQc3N3SWlTWklXMEhIK3lZQmU0WDNR?=
 =?utf-8?B?TXNrZ1hrVE9OTXRRREJ4S3p2QlpxcVhKbTRHR2VidlJlM3gzb0NFOEFVT3Mw?=
 =?utf-8?B?d2JHdFY5cU9wTXpNNE0wdHVHeDZtT2k4Z0JMYTRwMlhmVnlXN09iZmNpS3JN?=
 =?utf-8?B?SmE4NFQ2QktYdHhWUzVDbnlCamxiN0I1Wm0zaGVudGExMkdVMkdMQ3ZGc3Nh?=
 =?utf-8?B?czh1M2JXUEFGQm5ISDlpczU0WTJDbGdPTVZyRWYvcGwzUUVDWE9Rb3JHcnl5?=
 =?utf-8?B?Z0tJL2N5ZTh1THhKdnl3WGFGL1NpQU9TVWNLLzBjQi80WVlsNHAxYmF2dEp5?=
 =?utf-8?B?U3dIMEFYa0lEMmI3UzBacittMllMRVRWU00zcjV6MitmZlJDQnR5aXFqMk5m?=
 =?utf-8?B?QlhVR25yamlJSTVndmdFQWxlam5JRVZ5UGRaWW1vbUZFYkRoemRxZmNOTEpp?=
 =?utf-8?B?cmQ2REo2MGlJbXNZVFJmZVByYnVLZjJWNkhGL21CeXErTmxTTTBmTVNVN1N0?=
 =?utf-8?B?cExNdGpJWUY3T2pDTXlPWDZiMkdWQWJTTWY3MHNqMTUrU3J1YVRvRjBIdkhJ?=
 =?utf-8?B?T2xvVnl3T05vWE5iMGwxdU1EWm5vbWdsVUxFZithREJhalRpQjd6SCtHVkpI?=
 =?utf-8?B?Vy9hbjdGRXBlbi9tUitjQmhSL2NKQmg3SWJkaTJ2bW1haWJBQXNEVm1HZzI1?=
 =?utf-8?B?eW9rR0FYUGxKMzVtVFN3REFGUTRxd1hQZUlsaStjL3M5a3ZDZGp4eTJhdVdN?=
 =?utf-8?B?VDluZERJSVdkMFh3TzZNSWg0a05HY0JQSHNHaUx2TlpBK2N1M3lNQnBCSlRt?=
 =?utf-8?B?UGN6TExONHlVcFNhVnBrZz09?=
X-MS-Exchange-CrossTenant-Network-Message-Id: b7a4ce81-f4e2-4ccc-c6e0-08d8d7de3922
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Feb 2021 09:34:31.5770
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: IyH3kg3KABvM2jKHsnpb2jvNMEG8+IowpUEW0C0fJzGyJe+1lJ1kvoMBdElPjon4cagUmbS2ocX18Yk69s7VXA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR03MB5608
X-OriginatorOrg: citrix.com

On Tue, Feb 23, 2021 at 08:57:00AM +0100, Jan Beulich wrote:
> On 22.02.2021 22:19, Boris Ostrovsky wrote:
> > 
> > On 2/22/21 6:08 AM, Roger Pau Monné wrote:
> >> On Fri, Feb 19, 2021 at 09:56:32AM -0500, Boris Ostrovsky wrote:
> >>> On 2/18/21 5:51 AM, Roger Pau Monné wrote:
> >>>> On Wed, Jan 20, 2021 at 05:49:10PM -0500, Boris Ostrovsky wrote:
> >>>>> When toolstack updates MSR policy, this MSR offset (which is the last
> >>>>> index in the hypervisor MSR range) is used to indicate hypervisor
> >>>>> behavior when guest accesses an MSR which is not explicitly emulated.
> >>>> It's kind of weird to use an MSR to store this. I assume this is done
> >>>> for migration reasons?
> >>>
> >>> Not really. It just seemed to me that MSR policy is the logical place to do that. Because it *is* a policy of how to deal with such accesses, isn't it?
> >> I agree that using the msr_policy seems like the most suitable place
> >> to convey this information between the toolstack and Xen. I wonder if
> >> it would be fine to have fields in msr_policy that don't directly
> >> translate into an MSR value?
> > 
> > 
> > We have xen_msr_entry_t.flags that we can use when passing policy array back and forth. Then we can ignore xen_msr_entry_t.idx for that entry (although in earlier version of this series Jan preferred to use idx and leave flags alone).
> 
> Which, just to clarify, was not the least because of the flags
> field being per-entry, i.e. per MSR, while here we want a global
> indicator.

We could exploit the xen_msr_entry_t structure a bit and use it
like:

typedef struct xen_msr_entry {
    uint32_t idx;
#define XEN_MSR_IGNORE (1u << 0)
    uint32_t flags;
    uint64_t val;
} xen_msr_entry_t;

Then use the idx and val fields to signal the start and size of the
range to ignore accesses to when XEN_MSR_IGNORE is set in the flags
field?

xen_msr_entry_t = {
    .idx = 0,
    .val = 0xffffffff,
    .flags = XEN_MSR_IGNORE,
};

Would be equivalent to ignoring accesses to the whole MSR range.

This would allow selecting which MSRs to ignore, while not wasting
an MSR itself to convey the information?

It would still need to be stored somewhere in the Xen internal domain
structure using a rangeset I think, which could be translated back and
forth into this xen_msr_entry_t format.

Roger.


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 10:15:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 10:15:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88691.166867 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEUjA-00025p-7G; Tue, 23 Feb 2021 10:15:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88691.166867; Tue, 23 Feb 2021 10:15:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEUjA-00025i-47; Tue, 23 Feb 2021 10:15:36 +0000
Received: by outflank-mailman (input) for mailman id 88691;
 Tue, 23 Feb 2021 10:15:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xmUX=HZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lEUj8-00025d-9d
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 10:15:34 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fb1f2c5c-0bc4-4e6a-b7c5-035a6ce2059e;
 Tue, 23 Feb 2021 10:15:32 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E6B75AC1D;
 Tue, 23 Feb 2021 10:15:31 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fb1f2c5c-0bc4-4e6a-b7c5-035a6ce2059e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614075332; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=He4BN6kF4wmmsZgRktxBgPSAErM07xavK09Fd+mw2/Y=;
	b=FevqeJBX+sAkqDcedBuXKqLtcWzMz6wOZk7Ro2zHipE+MUOR7ZD+Q8975uFkYmNVqlD3IS
	dlYz8e3BJaSUZrogv4sJ2nR8yVEQr8faOVvfsS9sGgfSnrkNJvFo3a3iZPFHqqsgKNV3Sg
	ycMNTeecNcxOcmsXRmnfcB0RSWMRgCw=
Subject: Re: [PATCH v2 2/4] x86: Introduce MSR_UNHANDLED
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 andrew.cooper3@citrix.com
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org, iwj@xenproject.org, wl@xen.org,
 anthony.perard@citrix.com, jun.nakajima@intel.com, kevin.tian@intel.com
References: <1611182952-9941-1-git-send-email-boris.ostrovsky@oracle.com>
 <1611182952-9941-3-git-send-email-boris.ostrovsky@oracle.com>
 <YC5GrgqwsR/eBwpy@Air-de-Roger>
 <4e585daa-f255-fbff-d1cf-38ef49f146f5@oracle.com>
 <YDOQvU1h8zpOv5PH@Air-de-Roger>
 <ce2ef7a3-0583-ffff-182a-0ab078f45b82@oracle.com>
 <307a4765-1c54-63fd-b3fb-ecb47ba3dbca@suse.com>
 <YDTMIW5vBe0IncVR@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <2744f277-06fb-e49f-2023-0ec6175259dd@suse.com>
Date: Tue, 23 Feb 2021 11:15:31 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <YDTMIW5vBe0IncVR@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 23.02.2021 10:34, Roger Pau Monné wrote:
> On Tue, Feb 23, 2021 at 08:57:00AM +0100, Jan Beulich wrote:
>> On 22.02.2021 22:19, Boris Ostrovsky wrote:
>>>
>>>> On 2/22/21 6:08 AM, Roger Pau Monné wrote:
>>>> On Fri, Feb 19, 2021 at 09:56:32AM -0500, Boris Ostrovsky wrote:
>>>>>> On 2/18/21 5:51 AM, Roger Pau Monné wrote:
>>>>>> On Wed, Jan 20, 2021 at 05:49:10PM -0500, Boris Ostrovsky wrote:
>>>>>>> When toolstack updates MSR policy, this MSR offset (which is the last
>>>>>>> index in the hypervisor MSR range) is used to indicate hypervisor
>>>>>>> behavior when guest accesses an MSR which is not explicitly emulated.
>>>>>> It's kind of weird to use an MSR to store this. I assume this is done
>>>>>> for migration reasons?
>>>>>
>>>>> Not really. It just seemed to me that MSR policy is the logical place to do that. Because it *is* a policy of how to deal with such accesses, isn't it?
>>>> I agree that using the msr_policy seems like the most suitable place
>>>> to convey this information between the toolstack and Xen. I wonder if
>>>> it would be fine to have fields in msr_policy that don't directly
>>>> translate into an MSR value?
>>>
>>>
>>> We have xen_msr_entry_t.flags that we can use when passing policy array back and forth. Then we can ignore xen_msr_entry_t.idx for that entry (although in earlier version of this series Jan preferred to use idx and leave flags alone).
>>
>> Which, just to clarify, was not the least because of the flags
>> field being per-entry, i.e. per MSR, while here we want a global
>> indicator.
> 
> We could exploit the xen_msr_entry_t structure a bit and use it
> like:
> 
> typedef struct xen_msr_entry {
>     uint32_t idx;
> #define XEN_MSR_IGNORE (1u << 0)
>     uint32_t flags;
>     uint64_t val;
> } xen_msr_entry_t;
> 
> Then use the idx and val fields to signal the start and size of the
> range to ignore accesses to when XEN_MSR_IGNORE is set in the flags
> field?
> 
> xen_msr_entry_t = {
>     .idx = 0,
>     .val = 0xffffffff,
>     .flags = XEN_MSR_IGNORE,
> };
> 
> Would be equivalent to ignoring accesses to the whole MSR range.
> 
> This would allow selecting which MSRs to ignore, while not wasting
> an MSR itself to convey the information?
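A minimal sketch of how the hypervisor side might consume such range entries (hypothetical helper name, not actual Xen code; treating idx as the range start and val as its size, per the proposal above):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Mirrors the proposed xen_msr_entry_t layout. */
typedef struct xen_msr_entry {
    uint32_t idx;
#define XEN_MSR_IGNORE (1u << 0)
    uint32_t flags;
    uint64_t val;
} xen_msr_entry_t;

/*
 * Hypothetical helper: true if @msr falls inside any entry flagged
 * XEN_MSR_IGNORE, with idx the start of the range and val its size.
 */
static bool msr_access_ignored(const xen_msr_entry_t *policy,
                               unsigned int nr_entries, uint32_t msr)
{
    for ( unsigned int i = 0; i < nr_entries; i++ )
        if ( (policy[i].flags & XEN_MSR_IGNORE) &&
             msr >= policy[i].idx &&
             msr - policy[i].idx < policy[i].val )
            return true;

    return false;
}
```

With this reading, an entry of { .idx = 0, .val = 0xffffffff, .flags = XEN_MSR_IGNORE } covers essentially the whole MSR index space, matching the example above.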

Hmm, yes, the added flexibility would be nice from an abstract pov
(not sure how relevant it would be to Solaris's issue). But my
dislike of using a flag which is meaningless in ordinary entries
remains, as was voiced against Boris's original version.

Andrew - afaict you've been completely silent on this thread so
far. What's your view?

Jan


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 10:31:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 10:31:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88694.166879 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEUyT-00045W-Kc; Tue, 23 Feb 2021 10:31:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88694.166879; Tue, 23 Feb 2021 10:31:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEUyT-00045P-HT; Tue, 23 Feb 2021 10:31:25 +0000
Received: by outflank-mailman (input) for mailman id 88694;
 Tue, 23 Feb 2021 10:31:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lEUyS-00045K-QV
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 10:31:24 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lEUyO-0000aF-HD; Tue, 23 Feb 2021 10:31:20 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lEUyO-0006iy-88; Tue, 23 Feb 2021 10:31:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=E9x98BvpQuPzE0ncy7ZaRY/7oUBcrD2szpENYMNrbK0=; b=EuP0yrFUq98LhuO61s8SKWIOv1
	O5MmLNM5KiTU7L8ADF3M3jQkfxsNCnt6jXiL7zRQC3WDBJUhjxoPpBxrWqDtxzgacAtkFMYjKEPUb
	CVkvNhhqyWBnCFX26KvMWPzEui6GM6qC0B7ZvUvRostVuV8cmYGQhtPxpw3blautRuwU=;
Subject: Re: [PATCH for-4.15] xen/vgic: Implement write to ISPENDR in vGICv{2,
 3}
To: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "iwj@xenproject.org" <iwj@xenproject.org>, Julien Grall <jgrall@amazon.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, dfaggioli@suse.com,
 George.Dunlap@citrix.com
References: <20210220140412.31610-1-julien@xen.org>
 <F86904EB-91E9-475C-B60B-E08C5C9E76C3@arm.com>
 <alpine.DEB.2.21.2102221253390.3234@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <767e2028-ca86-bd0f-e936-c386066c11c8@xen.org>
Date: Tue, 23 Feb 2021 10:31:18 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2102221253390.3234@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 23/02/2021 01:24, Stefano Stabellini wrote:
> On Mon, 22 Feb 2021, Bertrand Marquis wrote:
>>> On 20 Feb 2021, at 14:04, Julien Grall <julien@xen.org> wrote:
> The consequence of this patch is that a guest can cause vcpu_unblock(v),
> hence vcpu_wake(v), to be called for its own vcpu, which basically
> always changes v to RUNSTATE_runnable. However, that alone shouldn't
> allow v to always come up ahead of any other vcpus in the queue, right?
> It should be safe. I just wanted a second opinion on this :-)

vcpu_wake() only tells the scheduler that the vCPU can be run; it is 
then up to the scheduler to decide what to do. AFAIU, for credit{1, 2}, 
each vCPU will have some credit. If your vCPU runs out of credit, it 
will be descheduled if there is other work to do.

> 
> It was possible to trigger interrupts for your own vcpus even before, but
> now the code path is going to be direct for virtual interrupts.

You can already trigger virtual interrupts "directly" using the event 
channels.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 10:37:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 10:37:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88697.166891 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEV47-0004G8-9Y; Tue, 23 Feb 2021 10:37:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88697.166891; Tue, 23 Feb 2021 10:37:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEV47-0004G1-6W; Tue, 23 Feb 2021 10:37:15 +0000
Received: by outflank-mailman (input) for mailman id 88697;
 Tue, 23 Feb 2021 10:37:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CEda=HZ=redhat.com=cohuck@srs-us1.protection.inumbo.net>)
 id 1lEV46-0004Fw-7S
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 10:37:14 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 807fc8fe-83dd-4a70-a909-e61e690507fd;
 Tue, 23 Feb 2021 10:37:12 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-127-QvMczwhUMS6WpidU3ya6Pg-1; Tue, 23 Feb 2021 05:37:06 -0500
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com
 [10.5.11.14])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id D6226801986;
 Tue, 23 Feb 2021 10:37:01 +0000 (UTC)
Received: from gondolin (ovpn-113-126.ams2.redhat.com [10.36.113.126])
 by smtp.corp.redhat.com (Postfix) with ESMTP id D59045D9D0;
 Tue, 23 Feb 2021 10:36:47 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 807fc8fe-83dd-4a70-a909-e61e690507fd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1614076632;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=bIGzZAHT2IrR97nIWaOZh13hqXqdEEn4ZaBsZNS2Jg0=;
	b=KHxX8DKO752n5FbuHjwofIlpdWW7Z5N7UkycR4nJhVsuREEQGngoK+BXa+jsagZPSTsUTJ
	IddAdh2Fipago6Gfz2d6VIqDKVQ4b6ztkdtN27Xi0bUyVgMvy5oTiPLLyi6SaaTC6qCP3a
	J+ZZ4mBMcV4VvzXXr15YCuNQWCsqWl8=
X-MC-Unique: QvMczwhUMS6WpidU3ya6Pg-1
Date: Tue, 23 Feb 2021 11:36:34 +0100
From: Cornelia Huck <cohuck@redhat.com>
To: David Gibson <david@gibson.dropbear.id.au>
Cc: Philippe =?UTF-8?B?TWF0aGlldS1EYXVkw6k=?= <philmd@redhat.com>,
 qemu-devel@nongnu.org, Aurelien Jarno <aurelien@aurel32.net>, Peter Maydell
 <peter.maydell@linaro.org>, Anthony Perard <anthony.perard@citrix.com>,
 qemu-ppc@nongnu.org, qemu-s390x@nongnu.org, Halil Pasic
 <pasic@linux.ibm.com>, Huacai Chen <chenhuacai@kernel.org>,
 xen-devel@lists.xenproject.org, Marcel Apfelbaum
 <marcel.apfelbaum@gmail.com>, qemu-arm@nongnu.org, Stefano Stabellini
 <sstabellini@kernel.org>, Paolo Bonzini <pbonzini@redhat.com>,
 kvm@vger.kernel.org, BALATON Zoltan <balaton@eik.bme.hu>, Leif Lindholm
 <leif@nuviainc.com>, Richard Henderson <richard.henderson@linaro.org>,
 Radoslaw Biernacki <rad@semihalf.com>, Alistair Francis
 <alistair@alistair23.me>, Paul Durrant <paul@xen.org>, Eduardo Habkost
 <ehabkost@redhat.com>, "Michael S. Tsirkin" <mst@redhat.com>, Thomas Huth
 <thuth@redhat.com>, Jiaxun Yang <jiaxun.yang@flygoat.com>, =?UTF-8?B?SGVy?=
 =?UTF-8?B?dsOp?= Poussineau <hpoussin@reactos.org>, Greg Kurz
 <groug@kaod.org>, Christian Borntraeger <borntraeger@de.ibm.com>, "Edgar E.
 Iglesias" <edgar.iglesias@gmail.com>, David Hildenbrand <david@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>, Aleksandar Rikalo
 <aleksandar.rikalo@syrmia.com>, Philippe =?UTF-8?B?TWF0aGlldS1EYXVkw6k=?=
 <f4bug@amsat.org>
Subject: Re: [PATCH v2 01/11] accel/kvm: Check MachineClass kvm_type()
 return value
Message-ID: <20210223113634.6626c8f8.cohuck@redhat.com>
In-Reply-To: <YDRAHW1ds1eh0Lav@yekko.fritz.box>
References: <20210219173847.2054123-1-philmd@redhat.com>
	<20210219173847.2054123-2-philmd@redhat.com>
	<20210222182405.3e6e9a6f.cohuck@redhat.com>
	<bc37276d-74cc-22f0-fcc0-4ee5e62cf1df@redhat.com>
	<20210222185044.23fccecc.cohuck@redhat.com>
	<YDQ/Y1KozPSyNGjo@yekko.fritz.box>
	<YDRAHW1ds1eh0Lav@yekko.fritz.box>
Organization: Red Hat GmbH
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/nudo1.j=xV83/2pyII4Lt07";
 protocol="application/pgp-signature"; micalg=pgp-sha256
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14

--Sig_/nudo1.j=xV83/2pyII4Lt07
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On Tue, 23 Feb 2021 10:37:01 +1100
David Gibson <david@gibson.dropbear.id.au> wrote:

> On Tue, Feb 23, 2021 at 10:33:55AM +1100, David Gibson wrote:
> > On Mon, Feb 22, 2021 at 06:50:44PM +0100, Cornelia Huck wrote:
> > > On Mon, 22 Feb 2021 18:41:07 +0100
> > > Philippe Mathieu-Daudé <philmd@redhat.com> wrote:
> > >
> > > > On 2/22/21 6:24 PM, Cornelia Huck wrote:
> > > > > On Fri, 19 Feb 2021 18:38:37 +0100
> > > > > Philippe Mathieu-Daudé <philmd@redhat.com> wrote:
> > > > >
> > > > >> MachineClass::kvm_type() can return -1 on failure.
> > > > >> Document it, and add a check in kvm_init().
> > > > >>
> > > > >> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> > > > >> ---
> > > > >>  include/hw/boards.h | 3 ++-
> > > > >>  accel/kvm/kvm-all.c | 6 ++++++
> > > > >>  2 files changed, 8 insertions(+), 1 deletion(-)
> > > > >>
> > > > >> diff --git a/include/hw/boards.h b/include/hw/boards.h
> > > > >> index a46dfe5d1a6..68d3d10f6b0 100644
> > > > >> --- a/include/hw/boards.h
> > > > >> +++ b/include/hw/boards.h
> > > > >> @@ -127,7 +127,8 @@ typedef struct {
> > > > >>   *    implement and a stub device is required.
> > > > >>   * @kvm_type:
> > > > >>   *    Return the type of KVM corresponding to the kvm-type string option or
> > > > >> - *    computed based on other criteria such as the host kernel capabilities.
> > > > >> + *    computed based on other criteria such as the host kernel capabilities
> > > > >> + *    (which can't be negative), or -1 on error.
> > > > >>   * @numa_mem_supported:
> > > > >>   *    true if '--numa node.mem' option is supported and false otherwise
> > > > >>   * @smp_parse:
> > > > >> diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
> > > > >> index 84c943fcdb2..b069938d881 100644
> > > > >> --- a/accel/kvm/kvm-all.c
> > > > >> +++ b/accel/kvm/kvm-all.c
> > > > >> @@ -2057,6 +2057,12 @@ static int kvm_init(MachineState *ms)
> > > > >>                                                              "kvm-type",
> > > > >>                                                              &error_abort);
> > > > >>          type = mc->kvm_type(ms, kvm_type);
> > > > >> +        if (type < 0) {
> > > > >> +            ret = -EINVAL;
> > > > >> +            fprintf(stderr, "Failed to detect kvm-type for machine '%s'\n",
> > > > >> +                    mc->name);
> > > > >> +            goto err;
> > > > >> +        }
> > > > >>      }
> > > > >>
> > > > >>      do {
> > > > >
> > > > > No objection to this patch; but I'm wondering why some non-pseries
> > > > > machines implement the kvm_type callback, when I see the kvm-type
> > > > > property only for pseries? Am I holding my git grep wrong?
> > > >
> > > > Can it be what David commented here?
> > > > https://www.mail-archive.com/qemu-devel@nongnu.org/msg784508.html
> > > >
> > >
> > > Ok, I might be confused about the other ppc machines; but I'm wondering
> > > about the kvm_type callback for mips and arm/virt. Maybe I'm just
> > > confused by the whole mechanism?
> >
> > For ppc at least, not sure about in general, pseries is the only
> > machine type that can possibly work under more than one KVM flavour
> > (HV or PR).  So, it's the only one where it's actually useful to be
> > able to configure this.
>
> Wait... I'm not sure that's true.  At least theoretically, some of the
> Book3E platforms could work with either PR or the Book3E specific
> KVM.  Not sure if KVM PR supports all the BookE instructions it would
> need to in practice.
>
> Possibly pseries is just the platform where there's been enough people
> interested in setting the KVM flavour so far.

If I'm not utterly confused by the code, it seems the pseries machines
are the only ones where you can actually get to an invocation of
->kvm_type(): You need to have a 'kvm-type' machine property, and
AFAICS only the pseries machine has that.

(Or is something hiding behind some macro magic?)
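For illustration, the contract under discussion — a non-negative KVM type on success, -1 on failure, which is what the patch's kvm_init() check relies on — can be sketched with a stand-alone stand-in for the callback (hypothetical function and flavour values, not an actual QEMU board hook):

```c
#include <stddef.h>
#include <string.h>

/*
 * Simplified stand-in for MachineClass::kvm_type(): map the "kvm-type"
 * machine-property string to a non-negative flavour identifier, or
 * return -1 so the caller (kvm_init) can fail with -EINVAL.
 */
static int example_kvm_type(const char *kvm_type_str)
{
    if (kvm_type_str == NULL) {
        return 0;               /* property absent: default flavour */
    }
    if (strcmp(kvm_type_str, "HV") == 0) {
        return 1;
    }
    if (strcmp(kvm_type_str, "PR") == 0) {
        return 2;
    }
    return -1;                  /* unknown string: caller bails out */
}
```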

--Sig_/nudo1.j=xV83/2pyII4Lt07
Content-Type: application/pgp-signature
Content-Description: OpenPGP digital signature

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEw9DWbcNiT/aowBjO3s9rk8bwL68FAmA02rMACgkQ3s9rk8bw
L69Sng//SPiU5hi/9Db125/S0xZG5O8UzQoag2vh8Q68aGY9pmkB5pUsF5xCYvq4
v3GT9vtpCT+urKHCNhQcPD0nLLumzQxaz3GKTHvqOWkOwGI3HJhCg9HAutC4d77k
pAQpFCiDaxRw98uRREJDiG1tM9xzhU/qb1Ujs90aYALeZ3B4wmQQTRVXTiZjto++
PqJyNULu02yA4sFyZy+iCvv8dT8Ex2uyxV0JzeNS9RV4xsOGH8jMqElRPJiioJhf
20o5RAL+tpkM71Z1OMj3mBfrdui2K6ordXZKs7OoIkrjb01l/oZXSvVSjxzbKOTn
LKQYKIZ2/0SHH1IIxovfDJYm/1iV0JHmmW7klM2U1OSmMlZx0TsRmZ6ArWAE6/7z
CJhC/PpeE8bX9fRuXzAwuBRbT3Cgp6XurESExT1BDWMF3Gym3FaiIz2FHyVnvlPR
yFcVjR7pgAKWSRI1/EddICKWb2paYhSpzZ9QjbhOISelEslzJU57WQIAUjVPnSho
lrgY/XuKSJA+ZnRQdY3LX5IADVpA0rn7W2nW0JkJN0nJn4dw3P6Ikp14W1qUC8UR
AcsnC9Xqbj9D+xRgf1yoCBez7D9kthUXY226A3DYJJcp0qfsquXw0+cxN71CqPhb
dcvq0J1IEjbx4ir/qN2R8hxfCm3vwXG+w/3ZhKx69rUI5w4PA8Q=
=u9hO
-----END PGP SIGNATURE-----

--Sig_/nudo1.j=xV83/2pyII4Lt07--



From xen-devel-bounces@lists.xenproject.org Tue Feb 23 11:04:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 11:04:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88703.166903 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEVUC-0007Cg-9p; Tue, 23 Feb 2021 11:04:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88703.166903; Tue, 23 Feb 2021 11:04:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEVUC-0007CZ-4s; Tue, 23 Feb 2021 11:04:12 +0000
Received: by outflank-mailman (input) for mailman id 88703;
 Tue, 23 Feb 2021 11:04:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UeLE=HZ=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lEVUA-0007CU-UO
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 11:04:11 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7e528acd-0b0a-4e6f-a602-122798e2dffc;
 Tue, 23 Feb 2021 11:04:10 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7e528acd-0b0a-4e6f-a602-122798e2dffc
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614078250;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=Bq3Ao2qIDInLxkkZCE4fOBHBMwcSdwBNahWQ/0rvNTw=;
  b=BKJO2zrawzy024Mn0lcOij5BDnjzvLQcsDJ75VHbi+BK5hW/EiwBtltp
   0gTPgv/DoNBz7q87iawgnkQXU/O6umsA6Qp/i/Di2tDfcV3Chmn7yQ/aK
   L+obqdSi7dx4+8iE3wojP5t7KKo6SC6J5Cwq2GnNQfFTEktbgPOdlxSUB
   A=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: 9/if3uqo7pnGBFBsCsjltZqpF6kyA+WybhEyp8BACLOpVsPJZXt55ZrFEkLrYhkSV/D1A8YBWS
 kwxuU1RbE28s+ht0sIBoMG9XlXvH77MPgRB+ecGo1nOFJ48qvU/WIGsZqC1bpXpYWlvLVeVsbu
 b+Alk1EynbbSHlWp8hpQvgtnuYqstrOhCl/IJoZde2vCO1C6QI754V9t7VFNQQwLOirLHxmwnF
 eWvFI62XukDmHDXpzbeRar6a3uiWb5MkjMHfdnUQofokRDkerg106mbSkHXa55ZsGZjNxXVk+j
 edA=
X-SBRS: 5.2
X-MesageID: 39194740
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,199,1610427600"; 
   d="scan'208";a="39194740"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=DR0ZWBCzPAPYtFigMEceYjnwRR5mXYMOMtA5DjDOqSGbw7XDqSGiOkPD/zj5S2SrZhzTS1H6Qxt/kUvE2QYj0OicL65diOKO5MJIQSCiyDe6vTCme5a50czImJQFC5NEp48WyarWLGDs6A0kk+fLbPeiqXKXlTam17FsJGcp251AvDpSkV6XIrjYTiXpAlgSwL2UreC5CBB0Q5/I2p+Yi+yzhhooIe28Pg737L8/XfjJMfP8VVCiojed3ctGV/jgrLDsigDEV/7qnzRXP+1KE2RLX76K8CnL8BpCHzxZLnM9MnvLNdxFguiQzVzWiS3eYeU33ODSbelr2MApm6fNCA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ResH4lssBS4NZCOdLD0/JgmxVqGf48AtG3FpQUPa89Y=;
 b=UqOnM5o70TZ/xgIZwTss+ohuGntJJEtICWoC+G6YCheb3QmzUeF9iIJQJeyOoj2cYHiBInTUuHIyw5/RZ/Vr/XRSxSZ80Av5/bRkR0bPMKMsUSLOkyP+wOMunikNnOXfEGWT1S1ctVatnyOGS4VG3KfP7m9m+PR1hVYzLotl57o+Hm6941BnyR0kA8NWpPM1qWDsycVU1z34EllLcj3q62YvLtXQIdax714cZVvzai3euvHC2HhVvwneD6XWJY37HQpY2V1wZPNNM3eewY+MV1FcEGPSg1RE1KEy9o1VRHqm9QZs0zjUtzjTMrKUfVPlrHMBvR3wXHpMlUVqcOkhQw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ResH4lssBS4NZCOdLD0/JgmxVqGf48AtG3FpQUPa89Y=;
 b=UC1arfyxYlsonH0rNw6iKVhvE1+YOQhUJTrbNmJJ2EZ4Q9H0WKpVFzNuOkFFK7otNlY3hF8bE2mSgV3m9H345UuyHiZWDUev5sUBw8YmDMg0RTrvEBRS8+rIpv0/afsFEoNv9esV8tO2mwA5VgVRB5tWTlobrYncPa+940mEyok=
Date: Tue, 23 Feb 2021 12:04:00 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>
Subject: Re: [PATCH v2 6/8] x86: rename copy_{from,to}_user() to
 copy_{from,to}_guest_pv()
Message-ID: <YDThIFB7ox6qdfFE@Air-de-Roger>
References: <b466a19e-e547-3c7c-e39b-1a4c848a053a@suse.com>
 <5104a32f-e2a1-06a5-a637-9702e4562b81@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <5104a32f-e2a1-06a5-a637-9702e4562b81@suse.com>
X-ClientProxiedBy: MR2P264CA0040.FRAP264.PROD.OUTLOOK.COM (2603:10a6:500::28)
 To DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 6a8fff3b-888d-4e48-8203-08d8d7eabcbf
X-MS-TrafficTypeDiagnostic: DM6PR03MB4475:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB4475775C9E6F0D115D24D6088F809@DM6PR03MB4475.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:5797;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: wFg/cq+9/5+61dChThUKdeE4r/FStzYxR7z+aeTJrrw8kn+P/kZF1KQV/KPdYPLvG1QCd9mdSNwmbP77o59SwFttBF+S3rmQXLO0kNVMSW6Y6wptHhhABS+f5ey8qVwkEAYT1SvyrLU5XqIN0NwE4rl+/qyyc1WHKai8qs8WVw/i+6Z6GqmV95Y8whZ+gMfiXMSCNGI5CE5GM6GNQw2CK7KhsTx1Lkdd45wO3FMVSL8F30itpSBQOxFeErJSvztxJii/rQH3VpLEr6WDtWAAw/kVPlpLfahqeg7t1qXCJwvapWdhx7f7lAj7g4kbIMpvD3fTvw2h+ssfZLtbZdvtLlTM6Jj4vjJCdKD/5H4a6G9Hv0+cbtkNgsD210z17NFFsymmsCbcLLgJONHG+02OSKJ/u11ELJN8dzcTn8IWGAgjr2XspXUo0qXhBl9iDACQqD1E0iQJbHeYmjIlFBNRRxsX0iegYt/0cZ4D6rRYjCx+++HyABs2u+AuRqvmiMl/ioyRAu8wUT1qLyEqBJpM2g==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(396003)(39860400002)(376002)(366004)(346002)(136003)(5660300002)(6916009)(6496006)(2906002)(186003)(4326008)(85182001)(66946007)(8936002)(83380400001)(66556008)(8676002)(16526019)(66476007)(26005)(478600001)(6666004)(956004)(316002)(9686003)(54906003)(6486002)(86362001)(33716001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?ZWRnNS9HSjdHNk5aaE1EZTAzSTVvRUZQTEdPNE5hT1VQL1lYLyt0ZjU3Y01w?=
 =?utf-8?B?Qnpic3UyLzA0cXdmMTJ0N1lPdWwwNDlCdWhBa05iQmp0SzhHSFMveDQ3ME5H?=
 =?utf-8?B?ZEFwWU5yT3ZOOVBrekdnYmdtZW16cmlwelo5Sm1NNGRjbFY2QkI0cm0zVWJS?=
 =?utf-8?B?SGhpNzEraGExUStUODY0SUxmSktrZ2hFaS9VL280My9BRG10SEMxVWl6Sml1?=
 =?utf-8?B?ME5EWVo1eURvaTlmZC9VWnY1Yjh5bDFQVnpsNFJrME1MSDRUTzRDS1kwdWF1?=
 =?utf-8?B?WDFVT3FzOWtmYkNIdzJxMktxbTgrNXNwM3J2NythRk55N2RDMkFoWFJZK0JK?=
 =?utf-8?B?T1JPYTBuNEV4UW1BZC8vanJWOFFtWVVqaStUUStuVHl0NEdzQ0RlYlozNjBj?=
 =?utf-8?B?bVk4NzRsZ1R0ZEEra2ZGQUk3YjF1UXdDc2NqdnlRWW1ncnZRQ2kxQlhZRmZB?=
 =?utf-8?B?eGM3dCtrcU00QWpOempKamNJSmNLQTdzWHJVdnNoVDF2cVBpcUMvZzBzUFN1?=
 =?utf-8?B?Yk5mUy8yU1l0eGlVMGFDcE1PeU9qZVp4TGgyVzlPRFJ6Z0p2N0xhdWFWdndn?=
 =?utf-8?B?cm8zN3NxYXFtR2M5Q2s3emdSMnVVUlgxY0ZwMUsvWkJwNmRrcEpsaTllTmxQ?=
 =?utf-8?B?bTlvWG8rQ3pYZFY1WTYwd0dCQTE5ckJEdXVoVkJMVjFtWnhvRU8xOFBTUnNw?=
 =?utf-8?B?RGdBMlRXc2RralNlT2RacnQxbEdSNXI0aTZZVEdwRXZhQjhPV3A4dUk0UzRP?=
 =?utf-8?B?eTV4U3ZiSEcrS2tBa0pZeVVDR1VTL1RhcmtGeG1nU2lIZjFES0VMUzk3K1hs?=
 =?utf-8?B?YWtvUUFXa3hQZ2xTcVA0RFA0dGhpb09aaWVQSUNsTDEzbWZuVU4rL29HRVlV?=
 =?utf-8?B?ckpuQVRyZEV2WUdndm0xUFNRMVdFNlpCZXNhMStnMDEyRTloVHAzeU9WTHFW?=
 =?utf-8?B?aWwrbkJEU1RMVURrMStoL0JpRlU3endzbnlZSHpvbWV5d2p4eDIwU3E4MUFL?=
 =?utf-8?B?T3BIUXVvZ0d5MW01bUxmMlF2bFBac1RUVmMxQXBBYTJoSEtPV1MrUWtLUmNs?=
 =?utf-8?B?YnFaQmNuRTFUM29UQ3YzL0JwcGhOZWJ1dDhlK2Ric0huMnFwdTBWNDNDeHBl?=
 =?utf-8?B?N0ZqZktkN2dQSHdmYkx6TWNteFU1Qk5OQ3ZGSlhjcmxISVhHWDA1Z2g3eUh3?=
 =?utf-8?B?cjBDcTZuMmhpb1MwZ3Z2dThVOXBMWVdZL0xNTktocVFlc1lPc2p0MEI2OUd4?=
 =?utf-8?B?VVY3a21uY2s2bVVaREhrODdEeU9kVjBsQ0d1aXBsN3Z5Q3MrOWFCdkNsckhz?=
 =?utf-8?B?ZjRFNWFVNy9HWEhkNG1xVkEyRjgzbXcvS0xJQ29veDQrVmhKY3NyejMwQ0w0?=
 =?utf-8?B?SGx4MjB2a0dMMzJrd2NreFdOK0pjUC9ScS84NytZeXpEMTdXeXFJRGtkdis4?=
 =?utf-8?B?Mnoyb0NneGZPU1RRSTJDSm0wSVp4Z3pSWXhvTEVLcEdjSWtaRlpma1dNOHk5?=
 =?utf-8?B?NDJLRjNMYno4R3MzODAwcE5rV2JvS3crUU5jdG95Mmh0L2dFSnU5NjJMOVdX?=
 =?utf-8?B?Tks1YllIRVNCK3RPdWZLVEZLZVk1R2lEV0R4Nm1tYmdET2lXS2Vja1FQZUVE?=
 =?utf-8?B?SlFqd0VNZlY4ZlFzVGpxaXFtM3dEWkhucEFWakZmdzNuRldLdTBWTUhOYldB?=
 =?utf-8?B?Ylk4a2NiQk5GN0JjOWxFNERiNFJBbzJNRjZaampnYWdkV0hkMDdTdFVwZGky?=
 =?utf-8?B?bitWMURWVG90cU1NRm4rNFFZWlo4aGJOWFVhMzFLUkxSOVZaLzJFakdndFJX?=
 =?utf-8?B?Z0E3elViNlVHaU1aZmc5QT09?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 6a8fff3b-888d-4e48-8203-08d8d7eabcbf
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Feb 2021 11:04:06.6780
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Vv+rGeS/NaTuiCqZ0LQF8ozCjg1ebhQP5cCLj+fiLL67RdiRK9MdpMNTCWUcN0tvJFDXqEkkihX7tyRUi9MVog==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4475
X-OriginatorOrg: citrix.com

On Wed, Feb 17, 2021 at 09:22:32AM +0100, Jan Beulich wrote:
> Bring them (back) in line with __copy_{from,to}_guest_pv(). Since it
> falls in the same group, also convert clear_user(). Instead of adjusting
> __raw_clear_guest(), drop it - it's unused and would require a non-
> checking __clear_guest_pv() which we don't have.
> 
> Add previously missing __user at some call sites and in the function
> declarations.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

> 
> --- a/xen/arch/x86/pv/emul-inv-op.c
> +++ b/xen/arch/x86/pv/emul-inv-op.c
> @@ -33,7 +33,7 @@ static int emulate_forced_invalid_op(str
>      eip = regs->rip;
>  
>      /* Check for forced emulation signature: ud2 ; .ascii "xen". */
> -    if ( (rc = copy_from_user(sig, (char *)eip, sizeof(sig))) != 0 )
> +    if ( (rc = copy_from_guest_pv(sig, (char __user *)eip, sizeof(sig))) != 0 )
>      {
>          pv_inject_page_fault(0, eip + sizeof(sig) - rc);
>          return EXCRET_fault_fixed;
> @@ -43,7 +43,8 @@ static int emulate_forced_invalid_op(str
>      eip += sizeof(sig);
>  
>      /* We only emulate CPUID. */
> -    if ( ( rc = copy_from_user(instr, (char *)eip, sizeof(instr))) != 0 )
> +    if ( (rc = copy_from_guest_pv(instr, (char __user *)eip,
> +                                  sizeof(instr))) != 0 )
>      {
>          pv_inject_page_fault(0, eip + sizeof(instr) - rc);
>          return EXCRET_fault_fixed;
> --- a/xen/arch/x86/pv/iret.c
> +++ b/xen/arch/x86/pv/iret.c
> @@ -54,8 +54,8 @@ unsigned long do_iret(void)
>      struct iret_context iret_saved;
>      struct vcpu *v = current;
>  
> -    if ( unlikely(copy_from_user(&iret_saved, (void *)regs->rsp,
> -                                 sizeof(iret_saved))) )
> +    if ( unlikely(copy_from_guest_pv(&iret_saved, (void __user *)regs->rsp,
> +                                     sizeof(iret_saved))) )
>      {
>          gprintk(XENLOG_ERR,
>                  "Fault while reading IRET context from guest stack\n");
> --- a/xen/arch/x86/pv/ro-page-fault.c
> +++ b/xen/arch/x86/pv/ro-page-fault.c
> @@ -90,7 +90,8 @@ static int ptwr_emulated_update(unsigned
>  
>          /* Align address; read full word. */
>          addr &= ~(sizeof(full) - 1);
> -        if ( (rc = copy_from_user(&full, (void *)addr, sizeof(full))) != 0 )
> +        if ( (rc = copy_from_guest_pv(&full, (void __user *)addr,
> +                                      sizeof(full))) != 0 )
>          {
>              x86_emul_pagefault(0, /* Read fault. */
>                                 addr + sizeof(full) - rc,
> --- a/xen/arch/x86/usercopy.c
> +++ b/xen/arch/x86/usercopy.c
> @@ -109,19 +109,17 @@ unsigned int copy_from_guest_ll(void *to
>  #if GUARD(1) + 0
>  
>  /**
> - * copy_to_user: - Copy a block of data into user space.
> - * @to:   Destination address, in user space.
> - * @from: Source address, in kernel space.
> + * copy_to_guest_pv: - Copy a block of data into guest space.

I would expand to 'PV guest' here and below, FAOD.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 11:12:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 11:12:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88709.166915 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEVc9-0008Gn-9d; Tue, 23 Feb 2021 11:12:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88709.166915; Tue, 23 Feb 2021 11:12:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEVc9-0008Gg-6E; Tue, 23 Feb 2021 11:12:25 +0000
Received: by outflank-mailman (input) for mailman id 88709;
 Tue, 23 Feb 2021 11:12:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEVc8-0008GY-2B; Tue, 23 Feb 2021 11:12:24 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEVc7-0001Hp-Pl; Tue, 23 Feb 2021 11:12:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEVc7-0000GT-FL; Tue, 23 Feb 2021 11:12:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lEVc7-0006hM-Et; Tue, 23 Feb 2021 11:12:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=5QmFQSHNMHct+qcdfqqz8gbtQJkfs0evAYlnJGBGn4U=; b=uhQFpA50JBmzVcks9gn8rxn1jx
	9YIswpcgk7+sBOFEkcZmb4LJVRq+yXgvixSkELrk/O9m/CsPRjQNw+CTis9vF6MfknUg6bFJTL0IJ
	umU6oAzLk7hZNTqGCKn+aOtWle4h68Slb79JKrncBxnU612WXveoWDqU1kPiQxNQ9JgY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [xen-unstable bisection] complete test-amd64-i386-xl-shadow
Message-Id: <E1lEVc7-0006hM-Et@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 23 Feb 2021 11:12:23 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-i386-xl-shadow
testid xen-boot

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  4dc1815991420b809ce18dddfdf9c0af48944204
  Bug not present: 2d824791504f4119f04f95bafffec2e37d319c25
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/159579/


  commit 4dc1815991420b809ce18dddfdf9c0af48944204
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Fri Feb 19 17:19:56 2021 +0100
  
      x86/PV: harden guest memory accesses against speculative abuse
      
      Inspired by
      https://lore.kernel.org/lkml/f12e7d3cecf41b2c29734ea45a393be21d4a8058.1597848273.git.jpoimboe@redhat.com/
      and prior work in that area of x86 Linux, suppress speculation with
      guest specified pointer values by suitably masking the addresses to
      non-canonical space in case they fall into Xen's virtual address range.
      
      Introduce a new Kconfig control.
      
      Note that it is necessary in such code to avoid using "m" kind operands:
      If we didn't, there would be no guarantee that the register passed to
      guest_access_mask_ptr is also the (base) one used for the memory access.
      
      As a minor unrelated change in get_unsafe_asm() the unnecessary "itype"
      parameter gets dropped and the XOR on the fixup path gets changed to be
      a 32-bit one in all cases: This way we avoid pointless REX.W or operand
      size overrides, or writes to partial registers.
      
      Requested-by: Andrew Cooper <andrew.cooper3@citrix.com>
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
      Release-Acked-by: Ian Jackson <iwj@xenproject.org>
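
The masking idea this commit message describes can be illustrated roughly as follows: if a guest-supplied pointer falls into the hypervisor's half of the 48-bit address space, it is branchlessly redirected into the non-canonical hole, so that even a speculatively-executed dereference faults rather than reading hypervisor memory. The constant and helper name below are illustrative stand-ins only, not Xen's actual guest_access_mask_ptr implementation:

```c
#include <stdint.h>

/* Illustrative boundary: start of the canonical upper half on 48-bit
 * x86-64, used here as a stand-in for the hypervisor's address range. */
#define HV_VIRT_START 0xffff800000000000ULL

uint64_t mask_guest_ptr(uint64_t ptr)
{
    /* All-ones when ptr lies in the hypervisor range, zero otherwise
     * (branchless, so there is no conditional jump to mis-speculate). */
    uint64_t in_hv = (uint64_t)0 - (ptr >= HV_VIRT_START);

    /* Clearing bit 63 of an address >= HV_VIRT_START leaves bit 62 set
     * with bit 63 clear, i.e. a non-canonical address that faults on
     * dereference; pointers below the boundary pass through unchanged. */
    return ptr & ~(in_hv & (1ULL << 63));
}
```

A guest pointer such as 0x1000 is returned unmodified, while one aliasing the hypervisor range, e.g. 0xffff830000000000, becomes the non-canonical 0x7fff830000000000.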


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable/test-amd64-i386-xl-shadow.xen-boot.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-unstable/test-amd64-i386-xl-shadow.xen-boot --summary-out=tmp/159579.bisection-summary --basis-template=159475 --blessings=real,real-bisect,real-retry xen-unstable test-amd64-i386-xl-shadow xen-boot
Searching for failure / basis pass:
 159559 fail [host=huxelrebe1] / 159475 [host=chardonnay1] 159453 [host=elbling0] 159424 [host=albana1] 159396 [host=fiano1] 159362 [host=fiano0] 159335 [host=albana0] 159315 [host=pinot1] 159202 [host=chardonnay0] 159134 [host=elbling1] 159036 [host=chardonnay1] 159013 ok.
Failure / basis pass flights: 159559 / 159013
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 f894c3d8e705fea9cb3244fa61684bfd8bdd1b2a
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 d203dbd69f1a02577dd6fe571d72beb980c548a6
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://xenbits.xen.org/qemu-xen.git#7ea428895af2840d85c524f0bd11a38\
 aac308308-7ea428895af2840d85c524f0bd11a38aac308308 git://xenbits.xen.org/xen.git#d203dbd69f1a02577dd6fe571d72beb980c548a6-f894c3d8e705fea9cb3244fa61684bfd8bdd1b2a
Loaded 5001 nodes in revision graph
Searching for test results:
 159013 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 d203dbd69f1a02577dd6fe571d72beb980c548a6
 159036 [host=chardonnay1]
 159134 [host=elbling1]
 159202 [host=chardonnay0]
 159315 [host=pinot1]
 159335 [host=albana0]
 159362 [host=fiano0]
 159396 [host=fiano1]
 159424 [host=albana1]
 159453 [host=elbling0]
 159475 [host=chardonnay1]
 159487 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 87a067fd8f4d4f7c6be02c3d38145115ac542017
 159491 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 87a067fd8f4d4f7c6be02c3d38145115ac542017
 159508 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 87a067fd8f4d4f7c6be02c3d38145115ac542017
 159526 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 87a067fd8f4d4f7c6be02c3d38145115ac542017
 159540 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 87a067fd8f4d4f7c6be02c3d38145115ac542017
 159561 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 d203dbd69f1a02577dd6fe571d72beb980c548a6
 159564 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 87a067fd8f4d4f7c6be02c3d38145115ac542017
 159565 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 687121f8a0e7c1ea1c4fa3d056637e5819342f14
 159567 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 b4159d2de0153eb8ce6aced1978e1917c07cf39d
 159568 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 f954a1bf5f74ad6edce361d1bf1a29137ff374e8
 159569 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 e8185c5f01c68f7d29d23a4a91bc1be1ff2cc1ca
 159571 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 6a1d72d3739e330caf728ea07d656d7bf568824b
 159572 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 2d824791504f4119f04f95bafffec2e37d319c25
 159573 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 4dc1815991420b809ce18dddfdf9c0af48944204
 159574 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 2d824791504f4119f04f95bafffec2e37d319c25
 159559 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 f894c3d8e705fea9cb3244fa61684bfd8bdd1b2a
 159575 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 4dc1815991420b809ce18dddfdf9c0af48944204
 159577 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 f894c3d8e705fea9cb3244fa61684bfd8bdd1b2a
 159578 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 2d824791504f4119f04f95bafffec2e37d319c25
 159579 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 4dc1815991420b809ce18dddfdf9c0af48944204
Searching for interesting versions
 Result found: flight 159013 (pass), for basis pass
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 2d824791504f4119f04f95bafffec2e37d319c25, results HASH(0x558c0d373a58) HASH(0x558c0d3e7de8) HASH(0x558c0d40d108) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895\
 af2840d85c524f0bd11a38aac308308 6a1d72d3739e330caf728ea07d656d7bf568824b, results HASH(0x558c0d373ed8) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 e8185c5f01c68f7d29d23a4a91bc1be1ff2cc1ca, results HASH(0x558c0d40bbe0) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f\
 0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 b4159d2de0153eb8ce6aced1978e1917c07cf39d, results HASH(0x558c0d382e60) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 687121f8a0e7c1ea1c4fa3d056637e5819342f14, results HASH(0x558c0d3863f0) For basis failure, parent search stopping at c3038e718a19\
 fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 d203dbd69f1a02577dd6fe571d72beb980c548a6, results HASH(0x558c0d378ff0) HASH(0x558c0d379a70) Result found: flight 159487 (fail), for basis failure (at ancestor ~76)
 Repro found: flight 159561 (pass), for basis pass
 Repro found: flight 159577 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 2d824791504f4119f04f95bafffec2e37d319c25
No revisions left to test, checking graph state.
 Result found: flight 159572 (pass), for last pass
 Result found: flight 159573 (fail), for first failure
 Repro found: flight 159574 (pass), for last pass
 Repro found: flight 159575 (fail), for first failure
 Repro found: flight 159578 (pass), for last pass
 Repro found: flight 159579 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  4dc1815991420b809ce18dddfdf9c0af48944204
  Bug not present: 2d824791504f4119f04f95bafffec2e37d319c25
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/159579/


  commit 4dc1815991420b809ce18dddfdf9c0af48944204
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Fri Feb 19 17:19:56 2021 +0100
  
      x86/PV: harden guest memory accesses against speculative abuse
      
      Inspired by
      https://lore.kernel.org/lkml/f12e7d3cecf41b2c29734ea45a393be21d4a8058.1597848273.git.jpoimboe@redhat.com/
      and prior work in that area of x86 Linux, suppress speculation with
      guest specified pointer values by suitably masking the addresses to
      non-canonical space in case they fall into Xen's virtual address range.
      
      Introduce a new Kconfig control.
      
      Note that it is necessary in such code to avoid using "m" kind operands:
      If we didn't, there would be no guarantee that the register passed to
      guest_access_mask_ptr is also the (base) one used for the memory access.
      
      As a minor unrelated change in get_unsafe_asm() the unnecessary "itype"
      parameter gets dropped and the XOR on the fixup path gets changed to be
      a 32-bit one in all cases: This way we avoid pointless REX.W or operand
      size overrides, or writes to partial registers.
      
      Requested-by: Andrew Cooper <andrew.cooper3@citrix.com>
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
      Release-Acked-by: Ian Jackson <iwj@xenproject.org>

Revision graph left in /home/logs/results/bisect/xen-unstable/test-amd64-i386-xl-shadow.xen-boot.{dot,ps,png,html,svg}.
----------------------------------------
159579: tolerable ALL FAIL

flight 159579 xen-unstable real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/159579/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-i386-xl-shadow     8 xen-boot                fail baseline untested


jobs:
 test-amd64-i386-xl-shadow                                    fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Tue Feb 23 11:15:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 11:15:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88712.166929 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEVep-0008P7-QX; Tue, 23 Feb 2021 11:15:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88712.166929; Tue, 23 Feb 2021 11:15:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEVep-0008P0-Na; Tue, 23 Feb 2021 11:15:11 +0000
Received: by outflank-mailman (input) for mailman id 88712;
 Tue, 23 Feb 2021 11:15:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=p6em=HZ=epam.com=prvs=268883478e=volodymyr_babchuk@srs-us1.protection.inumbo.net>)
 id 1lEVen-0008Ov-C7
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 11:15:09 +0000
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 908aba94-1a6b-4d1a-9bfb-bc224baf8f87;
 Tue, 23 Feb 2021 11:15:07 +0000 (UTC)
Received: from pps.filterd (m0174679.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 11NBBraW001552; Tue, 23 Feb 2021 11:15:06 GMT
Received: from eur05-am6-obe.outbound.protection.outlook.com
 (mail-am6eur05lp2105.outbound.protection.outlook.com [104.47.18.105])
 by mx0a-0039f301.pphosted.com with ESMTP id 36vyuyr8gw-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Tue, 23 Feb 2021 11:15:05 +0000
Received: from AM0PR03MB3508.eurprd03.prod.outlook.com (2603:10a6:208:4f::23)
 by VI1PR0302MB2656.eurprd03.prod.outlook.com (2603:10a6:800:de::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3868.27; Tue, 23 Feb
 2021 11:15:01 +0000
Received: from AM0PR03MB3508.eurprd03.prod.outlook.com
 ([fe80::a9a4:6122:8de2:64cb]) by AM0PR03MB3508.eurprd03.prod.outlook.com
 ([fe80::a9a4:6122:8de2:64cb%6]) with mapi id 15.20.3846.042; Tue, 23 Feb 2021
 11:15:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 908aba94-1a6b-4d1a-9bfb-bc224baf8f87
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=lT+oemMdgS4N/vE7x1U0FclQfxbGuwd9dZZ3MCNtrGNs44UIEe8lFZC0APf/DF04huH4KVkVqwrkvXxqVcIZCYT6FpuchEVGhM+HGIH2Uktd32DHypakdkcPTsxwdQo3c5gicpKTREG9AUQ8IOymmev8HvN0JDM0NuvMPtpzKJbTHJAuldvv39MkWmaOQ5AI7cRHGQoqCxII+B0vT6RPN2R13GE605qgqYFp01Gc4tq3kLbRBVMTSxPTyY8kfG7PjyDERjsx45GWxmdIrxi0cY4kFR7OJ0sKzLpaGfBQVGorKUJK7OeEAazFXgR+CdKX9CU+JJoP3Y+Soh18ZlUBbw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=PfewKRC2yYhTMHG5MnDpna+lxbBIa+KZm1m7ThfTP48=;
 b=SU2kmH1zrbwWpNNvH5q+GvRPybgzY/yFjBoEUHdFN39oCnFMvBOWKXKzeCk1ueac/kFySDkzHCEPuk8z+wApcMqT7q/q1EKMpIdmJI678jYUKEGIHkiFINAceS7khCBBIY6Vw/tLTeuFzJSFbDXyRnNfVGZYC6INg7Qevar5kk7jtl8xGRhHeWkrjbYXC4VCXJo5zF7lfrME4EbakVEzrQPPrdC0I4lTWfSdG65XnSFo51DU2L9EbMeGxEIMPv6+Gf9JK9rsVoY+D3IbDo4J+JVL9eq/zkeBgMUKT9ZhjB8Mlc4DPc52TepvXFLNeMelS+M1b0dfchjM10svnUe9wg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=PfewKRC2yYhTMHG5MnDpna+lxbBIa+KZm1m7ThfTP48=;
 b=wd+kAKIym5Jkp0b58X6QbnnHPEcPzQo7F2tppauaIJryrKgq07F2ni/f38nDuC4YVj3Wb+qu6d5wswCR/F6HSPmVvx78+p0xw1Es4mlEYolX1bGt4fNgvypK9OFADEm31AknTxG/vos/FP84aOAt6HgijtZBvGZXdkOxKr2Fdd/x/AjJQ4F8309IlVs3LNS0fFyH8GP7iV8kw/iERyhoXRmToH1/SUut62q+KnL3lyN9pE0mqSFnYHajJgiVYQHem0CAxyvepRyRPrSLzqavCcJtVlCJXXUJ+Dqc1ooMN9+G1eIwvy86yLemlqrHXfgHr4p9d1eRnikn/ZJOLOrDVA==
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: Jürgen Groß <jgross@suse.com>
CC: George Dunlap <george.dunlap@citrix.com>,
        Dario Faggioli
	<dfaggioli@suse.com>,
        "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: Re: [RFC PATCH 01/10] sched: core: save IRQ state during locking
Thread-Topic: [RFC PATCH 01/10] sched: core: save IRQ state during locking
Thread-Index: AQHXCYx4NBzFu5keaEOeI3LkFrTKiKplb3mAgAAnrgA=
Date: Tue, 23 Feb 2021 11:15:00 +0000
Message-ID: <877dmz6nnv.fsf@epam.com>
References: <20210223023428.757694-1-volodymyr_babchuk@epam.com>
 <20210223023428.757694-2-volodymyr_babchuk@epam.com>
 <4f7b4788-3b2e-8501-6aec-948b70320af2@suse.com>
In-Reply-To: <4f7b4788-3b2e-8501-6aec-948b70320af2@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
user-agent: mu4e 1.4.10; emacs 27.1
authentication-results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [176.36.48.175]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 4df97325-8a04-47b5-f3cc-08d8d7ec4322
x-ms-traffictypediagnostic: VI1PR0302MB2656:
x-microsoft-antispam-prvs: 
 <VI1PR0302MB265675DF9804B74786DCE978E6809@VI1PR0302MB2656.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:6430;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <28EDA844F069D04CA1A1C75AD8A5F5C9@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB3508.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 4df97325-8a04-47b5-f3cc-08d8d7ec4322
X-MS-Exchange-CrossTenant-originalarrivaltime: 23 Feb 2021 11:15:01.0203
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: wvXfoUG67ZZvTq0OLp8POL/hlr22kh486LRRo6ROGJdaaLxkaTHa2yc5eknvjFou+z68pckDWyyksDOsF2kIozC+0MiJ9YeAQGJVT8xL3mc=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0302MB2656

Hi Jurgen,

Jürgen Groß writes:

> On 23.02.21 03:34, Volodymyr Babchuk wrote:
>> With XEN preemption enabled, scheduler functions can be called with
>> IRQs disabled (for example, at end of IRQ handler), so we should
>> save and restore IRQ state in schedulers code.
>
> This breaks core scheduling.

Yes, thank you. I forgot to mention that this PoC is not compatible with
core scheduling. It is not used on ARM, so I could not test it anyways.

> Waiting for another sibling with interrupts disabled is an absolute
> no go, as deadlocks are the consequence.
>
> You could (in theory) make preemption and core scheduling mutually
> exclusive, but this would break the forward path to mutexes etc.
>

Well, I implemented the most naive way to enable hypervisor
preemption. I'm sure that with a bit more careful approach I can make it
compatible with core scheduling. There is no strict requirement to run
scheduler with IRQs disabled.

>
> Juergen
>
>> Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
>> ---
>>   xen/common/sched/core.c | 33 ++++++++++++++++++---------------
>>   1 file changed, 18 insertions(+), 15 deletions(-)
>> diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
>> index 9745a77eee..7e075613d5 100644
>> --- a/xen/common/sched/core.c
>> +++ b/xen/common/sched/core.c
>> @@ -2470,7 +2470,8 @@ static struct vcpu *sched_force_context_switch(struct vcpu *vprev,
>>    * sched_res_rculock has been dropped.
>>    */
>>   static struct sched_unit *sched_wait_rendezvous_in(struct sched_unit *prev,
>> -                                                   spinlock_t **lock, int cpu,
>> +                                                   spinlock_t **lock,
>> +                                                   unsigned long *flags, int cpu,
>>                                                     s_time_t now)
>>   {
>>       struct sched_unit *next;
>> @@ -2500,7 +2501,7 @@ static struct sched_unit *sched_wait_rendezvous_in(struct sched_unit *prev,
>>                   prev->rendezvous_in_cnt++;
>>                   atomic_set(&prev->rendezvous_out_cnt, 0);
>>  -                pcpu_schedule_unlock_irq(*lock, cpu);
>> +                pcpu_schedule_unlock_irqrestore(*lock, *flags, cpu);
>>                     sched_context_switch(vprev, v, false, now);
>>   @@ -2530,7 +2531,7 @@ static struct sched_unit
>> *sched_wait_rendezvous_in(struct sched_unit *prev,
>>              prev->rendezvous_in_cnt++;
>>               atomic_set(&prev->rendezvous_out_cnt, 0);
>>   -            pcpu_schedule_unlock_irq(*lock, cpu);
>> +            pcpu_schedule_unlock_irqrestore(*lock, *flags, cpu);
>>                 raise_softirq(SCHED_SLAVE_SOFTIRQ);
>>               sched_context_switch(vprev, vprev, false, now);
>> @@ -2538,11 +2539,11 @@ static struct sched_unit *sched_wait_rendezvous_in(struct sched_unit *prev,
>>              return NULL;         /* ARM only. */
>>          }
>>   -        pcpu_schedule_unlock_irq(*lock, cpu);
>> +        pcpu_schedule_unlock_irqrestore(*lock, *flags, cpu);
>>             cpu_relax();
>>   -        *lock = pcpu_schedule_lock_irq(cpu);
>> +        *lock = pcpu_schedule_lock_irqsave(cpu, flags);
>>             /*
>>           * Check for scheduling resource switched. This happens when we are
>> @@ -2557,7 +2558,7 @@ static struct sched_unit *sched_wait_rendezvous_in(struct sched_unit *prev,
>>               ASSERT(is_idle_unit(prev));
>>               atomic_set(&prev->next_task->rendezvous_out_cnt, 0);
>>               prev->rendezvous_in_cnt = 0;
>> -            pcpu_schedule_unlock_irq(*lock, cpu);
>> +            pcpu_schedule_unlock_irqrestore(*lock, *flags, cpu);
>>               rcu_read_unlock(&sched_res_rculock);
>>               return NULL;
>>          }
>> @@ -2574,12 +2575,13 @@ static void sched_slave(void)
>>       spinlock_t           *lock;
>>       bool                  do_softirq = false;
>>       unsigned int          cpu = smp_processor_id();
>> +    unsigned long         flags;
>>         ASSERT_NOT_IN_ATOMIC();
>>         rcu_read_lock(&sched_res_rculock);
>>   -    lock = pcpu_schedule_lock_irq(cpu);
>> +    lock = pcpu_schedule_lock_irqsave(cpu, &flags);
>>         now = NOW();
>>   @@ -2590,7 +2592,7 @@ static void sched_slave(void)
>>             if ( v )
>>          {
>> -            pcpu_schedule_unlock_irq(lock, cpu);
>> +            pcpu_schedule_unlock_irqrestore(lock, flags, cpu);
>>                 sched_context_switch(vprev, v, false, now);
>>   @@ -2602,7 +2604,7 @@ static void sched_slave(void)
>>         if ( !prev->rendezvous_in_cnt )
>>       {
>> -        pcpu_schedule_unlock_irq(lock, cpu);
>> +        pcpu_schedule_unlock_irqrestore(lock, flags, cpu);
>>            rcu_read_unlock(&sched_res_rculock);
>>   @@ -2615,11 +2617,11 @@ static void sched_slave(void)
>>         stop_timer(&get_sched_res(cpu)->s_timer);
>>   -    next = sched_wait_rendezvous_in(prev, &lock, cpu, now);
>> +    next = sched_wait_rendezvous_in(prev, &lock, &flags, cpu, now);
>>       if ( !next )
>>          return;
>>   -    pcpu_schedule_unlock_irq(lock, cpu);
>> +    pcpu_schedule_unlock_irqrestore(lock, flags, cpu);
>>        sched_context_switch(vprev, sched_unit2vcpu_cpu(next, cpu),
>>                           is_idle_unit(next) && !is_idle_unit(prev), now);
>> @@ -2637,6 +2639,7 @@ static void schedule(void)
>>       s_time_t              now;
>>       struct sched_resource *sr;
>>       spinlock_t            *lock;
>> +    unsigned long         flags;
>>       int cpu = smp_processor_id();
>>       unsigned int          gran;
>>   @@ -2646,7 +2649,7 @@ static void schedule(void)
>>         rcu_read_lock(&sched_res_rculock);
>>   -    lock = pcpu_schedule_lock_irq(cpu);
>> +    lock = pcpu_schedule_lock_irqsave(cpu, &flags);
>>
ICAgICAgICAgc3IgPSBnZXRfc2NoZWRfcmVzKGNwdSk7DQo+PiAgICAgICBncmFuID0gc3ItPmdy
YW51bGFyaXR5Ow0KPj4gQEAgLTI2NTcsNyArMjY2MCw3IEBAIHN0YXRpYyB2b2lkIHNjaGVkdWxl
KHZvaWQpDQo+PiAgICAgICAgICAgICogV2UgaGF2ZSBhIHJhY2U6IHNjaGVkX3NsYXZlKCkgc2hv
dWxkIGJlIGNhbGxlZCwgc28gcmFpc2UgYSBzb2Z0aXJxDQo+PiAgICAgICAgICAgICogaW4gb3Jk
ZXIgdG8gcmUtZW50ZXIgc2NoZWR1bGUoKSBsYXRlciBhbmQgY2FsbCBzY2hlZF9zbGF2ZSgpIG5v
dy4NCj4+ICAgICAgICAgICAgKi8NCj4+IC0gICAgICAgIHBjcHVfc2NoZWR1bGVfdW5sb2NrX2ly
cShsb2NrLCBjcHUpOw0KPj4gKyAgICAgICAgcGNwdV9zY2hlZHVsZV91bmxvY2tfaXJxcmVzdG9y
ZShsb2NrLCBmbGFncywgY3B1KTsNCj4+ICAgICAgICAgICAgIHJjdV9yZWFkX3VubG9jaygmc2No
ZWRfcmVzX3JjdWxvY2spOw0KPj4gICBAQCAtMjY3Niw3ICsyNjc5LDcgQEAgc3RhdGljIHZvaWQg
c2NoZWR1bGUodm9pZCkNCj4+ICAgICAgICAgICBwcmV2LT5yZW5kZXp2b3VzX2luX2NudCA9IGdy
YW47DQo+PiAgICAgICAgICAgY3B1bWFza19hbmRub3QobWFzaywgc3ItPmNwdXMsIGNwdW1hc2tf
b2YoY3B1KSk7DQo+PiAgICAgICAgICAgY3B1bWFza19yYWlzZV9zb2Z0aXJxKG1hc2ssIFNDSEVE
X1NMQVZFX1NPRlRJUlEpOw0KPj4gLSAgICAgICAgbmV4dCA9IHNjaGVkX3dhaXRfcmVuZGV6dm91
c19pbihwcmV2LCAmbG9jaywgY3B1LCBub3cpOw0KPj4gKyAgICAgICAgbmV4dCA9IHNjaGVkX3dh
aXRfcmVuZGV6dm91c19pbihwcmV2LCAmbG9jaywgJmZsYWdzLCBjcHUsIG5vdyk7DQo+PiAgICAg
ICAgICAgaWYgKCAhbmV4dCApDQo+PiAgICAgICAgICAgICAgIHJldHVybjsNCj4+ICAgICAgIH0N
Cj4+IEBAIC0yNjg3LDcgKzI2OTAsNyBAQCBzdGF0aWMgdm9pZCBzY2hlZHVsZSh2b2lkKQ0KPj4g
ICAgICAgICAgIGF0b21pY19zZXQoJm5leHQtPnJlbmRlenZvdXNfb3V0X2NudCwgMCk7DQo+PiAg
ICAgICB9DQo+PiAgIC0gICAgcGNwdV9zY2hlZHVsZV91bmxvY2tfaXJxKGxvY2ssIGNwdSk7DQo+
PiArICAgIHBjcHVfc2NoZWR1bGVfdW5sb2NrX2lycXJlc3RvcmUobG9jaywgZmxhZ3MsIGNwdSk7
DQo+PiAgICAgICAgIHZuZXh0ID0gc2NoZWRfdW5pdDJ2Y3B1X2NwdShuZXh0LCBjcHUpOw0KPj4g
ICAgICAgc2NoZWRfY29udGV4dF9zd2l0Y2godnByZXYsIHZuZXh0LA0KPj4gDQoNCg0KLS0gDQpW
b2xvZHlteXIgQmFiY2h1ayBhdCBFUEFN


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 11:24:00 2021
Subject: Re: [PATCH v2 01/11] accel/kvm: Check MachineClass kvm_type() return
 value
To: Cornelia Huck <cohuck@redhat.com>,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Cc: qemu-devel@nongnu.org, Aurelien Jarno <aurelien@aurel32.net>,
 Peter Maydell <peter.maydell@linaro.org>,
 Anthony Perard <anthony.perard@citrix.com>, qemu-ppc@nongnu.org,
 qemu-s390x@nongnu.org, Halil Pasic <pasic@linux.ibm.com>,
 Huacai Chen <chenhuacai@kernel.org>, xen-devel@lists.xenproject.org,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, qemu-arm@nongnu.org,
 Stefano Stabellini <sstabellini@kernel.org>, kvm@vger.kernel.org,
 BALATON Zoltan <balaton@eik.bme.hu>, Leif Lindholm <leif@nuviainc.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Radoslaw Biernacki <rad@semihalf.com>,
 Alistair Francis <alistair@alistair23.me>, Paul Durrant <paul@xen.org>,
 Eduardo Habkost <ehabkost@redhat.com>, "Michael S. Tsirkin"
 <mst@redhat.com>, Thomas Huth <thuth@redhat.com>,
 Jiaxun Yang <jiaxun.yang@flygoat.com>,
 =?UTF-8?Q?Herv=c3=a9_Poussineau?= <hpoussin@reactos.org>,
 Greg Kurz <groug@kaod.org>, Christian Borntraeger <borntraeger@de.ibm.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 David Hildenbrand <david@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <f4bug@amsat.org>
References: <20210219173847.2054123-1-philmd@redhat.com>
 <20210219173847.2054123-2-philmd@redhat.com>
 <20210222182405.3e6e9a6f.cohuck@redhat.com>
 <bc37276d-74cc-22f0-fcc0-4ee5e62cf1df@redhat.com>
 <20210222185044.23fccecc.cohuck@redhat.com>
 <YDQ/Y1KozPSyNGjo@yekko.fritz.box> <YDRAHW1ds1eh0Lav@yekko.fritz.box>
 <20210223113634.6626c8f8.cohuck@redhat.com>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>
Message-ID: <98ed9478-240d-cd20-ac84-82c540bd3e21@redhat.com>
Date: Tue, 23 Feb 2021 12:23:46 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <20210223113634.6626c8f8.cohuck@redhat.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 2/23/21 11:36 AM, Cornelia Huck wrote:
> On Tue, 23 Feb 2021 10:37:01 +1100
> David Gibson <david@gibson.dropbear.id.au> wrote:
> 
>> On Tue, Feb 23, 2021 at 10:33:55AM +1100, David Gibson wrote:
>>> On Mon, Feb 22, 2021 at 06:50:44PM +0100, Cornelia Huck wrote:  
>>>> On Mon, 22 Feb 2021 18:41:07 +0100
>>>> Philippe Mathieu-Daudé <philmd@redhat.com> wrote:
>>>>   
>>>>> On 2/22/21 6:24 PM, Cornelia Huck wrote:  
>>>>>> On Fri, 19 Feb 2021 18:38:37 +0100
>>>>>> Philippe Mathieu-Daudé <philmd@redhat.com> wrote:
>>>>>>     
>>>>>>> MachineClass::kvm_type() can return -1 on failure.
>>>>>>> Document it, and add a check in kvm_init().
>>>>>>>
>>>>>>> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
>>>>>>> ---
>>>>>>>  include/hw/boards.h | 3 ++-
>>>>>>>  accel/kvm/kvm-all.c | 6 ++++++
>>>>>>>  2 files changed, 8 insertions(+), 1 deletion(-)
>>>>>>>
>>>>>>> diff --git a/include/hw/boards.h b/include/hw/boards.h
>>>>>>> index a46dfe5d1a6..68d3d10f6b0 100644
>>>>>>> --- a/include/hw/boards.h
>>>>>>> +++ b/include/hw/boards.h
>>>>>>> @@ -127,7 +127,8 @@ typedef struct {
>>>>>>>   *    implement and a stub device is required.
>>>>>>>   * @kvm_type:
>>>>>>>   *    Return the type of KVM corresponding to the kvm-type string option or
>>>>>>> - *    computed based on other criteria such as the host kernel capabilities.
>>>>>>> + *    computed based on other criteria such as the host kernel capabilities
>>>>>>> + *    (which can't be negative), or -1 on error.
>>>>>>>   * @numa_mem_supported:
>>>>>>>   *    true if '--numa node.mem' option is supported and false otherwise
>>>>>>>   * @smp_parse:
>>>>>>> diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
>>>>>>> index 84c943fcdb2..b069938d881 100644
>>>>>>> --- a/accel/kvm/kvm-all.c
>>>>>>> +++ b/accel/kvm/kvm-all.c
>>>>>>> @@ -2057,6 +2057,12 @@ static int kvm_init(MachineState *ms)
>>>>>>>                                                              "kvm-type",
>>>>>>>                                                              &error_abort);
>>>>>>>          type = mc->kvm_type(ms, kvm_type);
>>>>>>> +        if (type < 0) {
>>>>>>> +            ret = -EINVAL;
>>>>>>> +            fprintf(stderr, "Failed to detect kvm-type for machine '%s'\n",
>>>>>>> +                    mc->name);
>>>>>>> +            goto err;
>>>>>>> +        }
>>>>>>>      }
>>>>>>>  
>>>>>>>      do {    
>>>>>>
>>>>>> No objection to this patch; but I'm wondering why some non-pseries
>>>>>> machines implement the kvm_type callback, when I see the kvm-type
>>>>>> property only for pseries? Am I holding my git grep wrong?    
>>>>>
>>>>> Can it be what David commented here?
>>>>> https://www.mail-archive.com/qemu-devel@nongnu.org/msg784508.html
>>>>>   
>>>>
>>>> Ok, I might be confused about the other ppc machines; but I'm wondering
>>>> about the kvm_type callback for mips and arm/virt. Maybe I'm just
>>>> confused by the whole mechanism?  
>>>
>>> For ppc at least, not sure about in general, pseries is the only
>>> machine type that can possibly work under more than one KVM flavour
>>> (HV or PR).  So, it's the only one where it's actually useful to be
>>> able to configure this.  
>>
>> Wait... I'm not sure that's true.  At least theoretically, some of the
>> Book3E platforms could work with either PR or the Book3E specific
>> KVM.  Not sure if KVM PR supports all the BookE instructions it would
>> need to in practice.
>>
>> Possibly pseries is just the platform where there's been enough people
>> interested in setting the KVM flavour so far.
> 
> If I'm not utterly confused by the code, it seems the pseries machines
> are the only ones where you can actually get to an invocation of
> ->kvm_type(): You need to have a 'kvm-type' machine property, and
> AFAICS only the pseries machine has that.

OMG you are right... This changed in commit f2ce39b4f06
("vl: make qemu_get_machine_opts static"):

@@ -2069,13 +2068,11 @@ static int kvm_init(MachineState *ms)
     }
     s->as = g_new0(struct KVMAs, s->nr_as);

-    kvm_type = qemu_opt_get(qemu_get_machine_opts(), "kvm-type");
-    if (mc->kvm_type) {
+    if (object_property_find(OBJECT(current_machine), "kvm-type")) {
+        g_autofree char *kvm_type = object_property_get_str(OBJECT(current_machine),
+                                                            "kvm-type",
+                                                            &error_abort);
         type = mc->kvm_type(ms, kvm_type);
-    } else if (kvm_type) {
-        ret = -EINVAL;
-        fprintf(stderr, "Invalid argument kvm-type=%s\n", kvm_type);
-        goto err;
     }

Paolo, is that expected?

So these callbacks are dead code:
hw/arm/virt.c:2585:    mc->kvm_type = virt_kvm_type;
hw/mips/loongson3_virt.c:625:    mc->kvm_type = mips_kvm_type;
hw/ppc/mac_newworld.c:598:    mc->kvm_type = core99_kvm_type;
hw/ppc/mac_oldworld.c:447:    mc->kvm_type = heathrow_kvm_type;

> 
> (Or is something hiding behind some macro magic?)
> 



From xen-devel-bounces@lists.xenproject.org Tue Feb 23 11:37:18 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159563-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 159563: regressions - FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 23 Feb 2021 11:37:05 +0000

flight 159563 qemu-mainline real [real]
flight 159580 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/159563/
http://logs.test-lab.xenproject.org/osstest/logs/159580/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-libvirt-raw  8 xen-boot            fail pass in 159580-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail in 159580 like 152631
 test-armhf-armhf-libvirt-raw 14 migrate-support-check fail in 159580 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                7ef8134565dccf9186d5eabd7dbb4ecae6dead87
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  187 days
Failing since        152659  2020-08-21 14:07:39 Z  185 days  359 attempts
Testing same since   159563  2021-02-22 23:37:57 Z    0 days    1 attempts

------------------------------------------------------------
425 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 117355 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 11:40:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 11:40:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88726.166972 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEW3X-00032E-K1; Tue, 23 Feb 2021 11:40:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88726.166972; Tue, 23 Feb 2021 11:40:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEW3X-000327-Gl; Tue, 23 Feb 2021 11:40:43 +0000
Received: by outflank-mailman (input) for mailman id 88726;
 Tue, 23 Feb 2021 11:40:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UeLE=HZ=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lEW3V-00031u-D1
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 11:40:41 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7cb0b1c1-4968-46bb-8a96-df299eee05c8;
 Tue, 23 Feb 2021 11:40:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7cb0b1c1-4968-46bb-8a96-df299eee05c8
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614080439;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=/3fSsGYrIlfHef56cXyRjqj6xMTtwGPHkVEJpcIC2bI=;
  b=LO2K24HhKknBlhHKG/GIk4aFlTivmcuCdqWSF1qQoI5rlc+MLj4w1eDc
   RfcURip/rCC20ZKGOUSsD9WAMparxbSdv0hWmLQREcDNs2nbnEuRVdVzb
   Sy6XAcfhB+kKxAEbmFUG9C7OvsSy1lQfD61trVJqTqjjXWuCva7sl55Rd
   w=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: mJh3Pfo86OHdPlvaKnU/wwpmeg0AlGiI7Lcqi1xrzzXr8rbuhu+VkMaxlko5MpsIj/oVA3P/cY
 qrUPo6QrOfkPCQremMUtv2wNplgZqrfYSTRUk1+5KR1e9HKOj2a0W8Yzgaf4c6JXbz3qJ4wLE8
 WjQr73uo2UCIkiTW5sP2yq7k7mhvrfbXfRtZrE75BXm8/5TzEOYSGI5oNng0n1rRVQOVeZbqRl
 x7CI0qUyDCLiCf8V/yq+6vtaLrXZMhr1uj0F66SlIS3QJOfZWmr0k4ndKUxdF0hfW8OaBMgBqO
 LjI=
X-SBRS: 5.2
X-MesageID: 38184446
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,199,1610427600"; 
   d="scan'208";a="38184446"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=n4DdZoAQ4XFtq0obPctenxLLVuGCgFS0yM2GEnW4SSnyUST6eFidVDDx6nfpKvAk49q6e/nW+SilywaKHK7EzMYDKj5Y+S1QOVkzwr/LDYdDqCU8zqRHSrUcqqP20u45G93KncPnFnST12tk3LfsUVB//1Lu5tu97JPBZS9+9zOz9/C4lz4bSpsrGrv/fwuStXNd4GbXp/5fNsaPI6TrbgoqFO27IqkC9qcrPY1IfrY8aBAC0m0nGQki4TIOUAlwFcUNwSX9mxGMv0pDbNv349SgAeYyXXjm4H5CCnTbgfkvtmm/URO+ZNCsNwMPqXOkuzQfTKWN8FmS1HvhsgJrzg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=587w1u7AFBGdrC/Uas+c5jv2J/eZIqKYuWOEsK6J6eo=;
 b=nQF54DM/7dSUuaVymqCkrs8ecfP+eQp66PczBoHURk/ARRwtQ/fg93d3DNnUfXFrwyovbT5XKW/BT1tyXisOgrggGfZqzUDU3ly+MUTJWf+HZbhHWJmFBl/zmtVBlkkBoelc6qn4poDLYYmps3K4uCZJ00ifDlMtysZ8lhggjy6DATA+67FCgYtENc/Sx1ibAYOPIZkxjm9p2uHv+s+g2aCHMNK9X4mMzc8RkV0z8kKYjaIOUfR2EsDgQBgJOuUn0egm3TBCHqJ797NB8qRBCrq+05t2G56l12/cPpU9D9hKDyfvsqKG/IEsV4f3cWfTqjvlKTz2GZtDbczYfBN38Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=587w1u7AFBGdrC/Uas+c5jv2J/eZIqKYuWOEsK6J6eo=;
 b=G5fryW2zqsXpqm1GAUFp6uFNh6xpaxJUvCdeRxEy4Kqm9H+nPg1iDNlcIVDgdVq8jfh1wgmmS1N2FYOvk2n6KQRABXnFag50DPXEiIP5Nf/8ziSJODtfaMP9XKZT0HcA3ZIPTHMFzDdJT/uf8Yh05DIGqCyp9LJZUzZC3ocCQWU=
Date: Tue, 23 Feb 2021 12:40:30 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>
Subject: Re: [PATCH v2 7/8] x86: move stac()/clac() from
 {get,put}_unsafe_asm() ...
Message-ID: <YDTprpV4QfNLJpav@Air-de-Roger>
References: <b466a19e-e547-3c7c-e39b-1a4c848a053a@suse.com>
 <c817e9be-bcdf-3c38-2298-0a3a58773964@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <c817e9be-bcdf-3c38-2298-0a3a58773964@suse.com>
X-ClientProxiedBy: PR2P264CA0001.FRAP264.PROD.OUTLOOK.COM (2603:10a6:101::13)
 To DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: f3311547-6c14-4e38-2f90-08d8d7efd55e
X-MS-TrafficTypeDiagnostic: DM6PR03MB3578:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB3578AC05D586AB5143FD40618F809@DM6PR03MB3578.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:3826;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: BaHndryeLrXFIMaVlJylAMw2DzXaaYtgX+4qYStLQy1KrmruKNVGHRykCJfmJERksyJZhRJ5ci82Z0FgSZNDtTSV3k7RUug9KuZeUPsnDfBnz5vNTBz5NU4ovjQu5UI/v5dSrP5f0xLEsxvcMmxqt9CIPI4bW9sCmrCOaHtGmKiCsfcHJAfMWyAGv5hPW8OLLcAF06VT0LMrwI4m6gpWY6VT5z/dc1kBwc63MOYnKjX14toqkBM17nEbb/xYA+fOgzTT8oKFlBWITtJNuhxQODfRbkaEB9a++1Bxv6C5K5WL5G1yUxc8H/jz9jKWrRUdkT+IJBiGo3Z69wRL6+BCWX96e+g3MmzKdCmCj7G3afIji5Jlp6iivOsfvh4yddkXrVvHBS+lgkEDlojcwgslk8+COHhcPt+5MuxDcrI6Qxr2Ry1qfrmj/atioMYvBhIcIREa1WPlpiE8rGLyecNQfKFU1OPHlTsNXFHzD6OpZEeIUrzKIcq2XQNmhw6+RlLnwVGxSCjS9e21h0DdytOpIQ==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(136003)(346002)(39860400002)(396003)(366004)(376002)(478600001)(2906002)(186003)(16526019)(26005)(4326008)(66476007)(5660300002)(66556008)(4744005)(6666004)(66946007)(33716001)(316002)(85182001)(86362001)(8936002)(6496006)(8676002)(54906003)(6486002)(9686003)(6916009)(956004);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?aTd2NDQ0bjJVMnB2cUVNekI0dGhZMzBTaWdxTWp5QkdDUGF1bFZ4ZXJZekp5?=
 =?utf-8?B?MU5EZHVVVEsyWjk2SHFmaDZnOGdSenJ3K3hUelZHSCtBYWkyZXJFbUVaMlps?=
 =?utf-8?B?VWxTNEhaV2Y3SmRoek5qMUpXK3YrblBzK2hXVmViWXhpWWE5VVZVaWdSaUlt?=
 =?utf-8?B?cWJiYVNBM3IwVVR5b3NmL0ZHSnNlOHZ0VXFNMHhwd05TVGV3OVNHOW9CZ2VV?=
 =?utf-8?B?a0RsWWlaN0hGcEV5MktCRG1GUk5pVGh0MC9hQ2JzeXdxQlFuL3ptWHhNd0Qz?=
 =?utf-8?B?TzNpYVFPeE1obG1BTm1xVENLR2xFYnFmUjFmc0s2ekFkLzJDYjZqUXlqNXI3?=
 =?utf-8?B?Qi9mWmYySlR6a3cyQnVaSFBGYnducUVXTTFkV2lnN0xLeGVHb3RGbU5KZVdZ?=
 =?utf-8?B?eDF6ZGo5VTJ3UVZaRlRvSXY5clBoZnVNdTRnR3BPaHBjalQ0d1lJTnQzY2Jz?=
 =?utf-8?B?TjZ5YjhBbzFCV1BLNGN6UXRzMXFwTlpuZGlEV3VKSm9rZ3huYVRRVGRHcHhB?=
 =?utf-8?B?MktBWm1lbDlrMVRXMll2aHhkVnY1eUM2R3VDU3M4SUUwRDR0SHl5NERqc3hR?=
 =?utf-8?B?VkRDUU1BVVVRRG1iMjRMRHNlS3Zhc3hvUXFNMzRSdG5RdDMvRTlGV3RGVGtz?=
 =?utf-8?B?WGE5WnllQXNESFlocDdJMWFLdk9EYWpRQmp3SjNOaUw4cDV4WEVPK1pkSU5G?=
 =?utf-8?B?QkpwYmx1T1RMbnU3Mm1hTVNYTlc0cU5hZ3g1K0JLV1o0bXJIbDlZNnM5RG43?=
 =?utf-8?B?bEdueFpDSThFYjFCSlcxTU5FdEI3ajFqNTJRU0drWEVQS2ZGOG5XYnVqRnZ6?=
 =?utf-8?B?bTZqM0VGWE9aRmI3YkU5S003ZnBVUnJFaXQwN29SMWlsaUpqZzNpRHJjbXY3?=
 =?utf-8?B?Zy84RWlUM3E4ZzFlMFRqVU9DTG9ZdmE4ZUM4b3ozajZXTG5Oa0p4RjJLQ0E2?=
 =?utf-8?B?V0ZQK09HWTZRSUpMWmZkeHhJWDk3MCtjWkVKd2dLdldHYnNKSDhYNXlLMExy?=
 =?utf-8?B?Yy9TdGMydWxlTUw0MzZuQ2VFdFJkRlY1cnNtbnMzY0d3S0ZrazJIMjQyL0lG?=
 =?utf-8?B?dFRvVzhpU1BYVlhDeHhzZktUbXNwNkEzU2pMcmNpelVYOHJlVXlBeno4Vkl2?=
 =?utf-8?B?bzZqR0IxOS9LNFZVTlBxL1hQTzA0dVdUWnRQd1JZc243TkwzVmZjcjR3UzYz?=
 =?utf-8?B?MWZnNDB0d0hsemFBSGFNUFhlSmlRQndGbUFGa0lBOVJiNHlzNm9vUWQrN3Mx?=
 =?utf-8?B?MCszUmhKY29CUEFPcnZSZWNqSFRKb2UzUXhVVjJsU0hnSEV6U3EyLzdrbGhS?=
 =?utf-8?B?elBqODlBZ0FBTG9vOEUyN1pKSktKbnU3bzRtaHBLSUtzSFQyNjVxU3hKd3Q5?=
 =?utf-8?B?ZUU2UkxLak1Oam0rc1EybkNOTkVNMk04NHozbis2WFFUaTNQQldINEtXYnVh?=
 =?utf-8?B?b3RYbnJNaUhiRGdiQXczZk1XWE12aGVDVHhVZWJ4WmNwOFA0akthcGdjWndU?=
 =?utf-8?B?RVI4WWYwSjlyVjVhWDdtV1RpTDlTUElFZlNOOVM4QXVnYUJoWnBuSkV2WDI5?=
 =?utf-8?B?K1BKemhLV1JWWm05RkdiaFJ5N3ZWODBtZ0wrbENvMDgyS3J0V01ycHZxeWJD?=
 =?utf-8?B?VUlHUktQREJLNnRvMFFsRGgwQkZjOTNIaWdHWGdTRkRab2xSWWpHNm01RDU3?=
 =?utf-8?B?RndCanhCa3BVMjFybURWODJIVHg3azVOM1JEbWNFYWYrMTJLaUpoVVNGaDRY?=
 =?utf-8?B?ZTFOajZzanNqT2FmZ3ZyL1RKemRMWk5INlB1ZUQwR29vUElzeTZkandLQ0tj?=
 =?utf-8?B?RGpHZU96dXJ5aFBVUlVGZz09?=
X-MS-Exchange-CrossTenant-Network-Message-Id: f3311547-6c14-4e38-2f90-08d8d7efd55e
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Feb 2021 11:40:35.0686
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: uOMbfxrOKSvMShT2wzkq33/TeNC1oR9YfQxzaq5vb78er+fdCcwVYY9R8od/kv4PWbWiioqt7DB/VrS3nbX67g==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB3578
X-OriginatorOrg: citrix.com

On Wed, Feb 17, 2021 at 09:22:59AM +0100, Jan Beulich wrote:
> ... to {get,put}_unsafe_size(). There's no need to have the macros
> expanded once per case label in the latter. This also makes the former
> well-formed single statements again. No change in generated code.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 11:59:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 11:59:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88732.166987 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEWLn-0004RS-Bo; Tue, 23 Feb 2021 11:59:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88732.166987; Tue, 23 Feb 2021 11:59:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEWLn-0004RL-8j; Tue, 23 Feb 2021 11:59:35 +0000
Received: by outflank-mailman (input) for mailman id 88732;
 Tue, 23 Feb 2021 11:59:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UeLE=HZ=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lEWLm-0004RG-Mp
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 11:59:34 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 463ea0e6-b92e-4de0-8ace-99c949fea1c5;
 Tue, 23 Feb 2021 11:59:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 463ea0e6-b92e-4de0-8ace-99c949fea1c5
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614081573;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=YbC7vBfeYp7nhA7rfcbAxrSgOD11tHtXxR1jIsRoxiU=;
  b=R90DnHiLrO0b/1ww1XXwcTP+ENxgyQVcf3BLDpCMdeC3GrzBZdOJyESN
   NAmV7uvfr8pEFNufT3mdncS57zl/EjO+bh1GPHDFwHH8VyXZbsEplc+4U
   IMO52Y8bunVg8qfnspElolWTlU9HueHFKwVzI/h0w26+QfawxCDALyefr
   c=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: GJXlDHAIaSiqFMJuEh+pJtz4EFwKOXOp/isyF0Usi8Vd4JTbdMIQ8vLKCObr2ts9uy8MCqsChZ
 PozHbRL8+3W2VSSWP+Sycap0e+iGe1PXW+gpHtN9oDb4XtTU53M1xorxz/YOrnBNLnLDzvUQp5
 0rnu/5kCpz8kvS3WFGzhZwa5yIzFOXCF3+HvPCgRA1F65QZDs1y7M01lHYsXEPjuRMfc5zgEqd
 SYa9cxuQb4PBHYi9s9pk5zKAVNJdlr/j5Lczy6qT0iX+BUS60NhDTfPvGzRp3FixDm7535rIHQ
 DSs=
X-SBRS: 5.2
X-MesageID: 38185196
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,200,1610427600"; 
   d="scan'208";a="38185196"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fiqMMfv/tcU+mp7ZOjqHqiYX+o7Z6bqZJxdeTlNH3gQhTd0ViarmAz1uXoPgVuplamXPinKNRVpbnzUjVyJv0xP5uKKDEb3Hyl9rNuhy6mfPqXAg5BL8whilIkDzFm9CPtloYXozDHjaxRg6WdherQbjCqYYoS1bGsqckqQ13GChW++swF+uvPjwoWXzTq9HANRI1XtBlSAcSMgqee0S0s9g3vbdaln3gKylfYzUB5uH06lA4aA+YHuCrVjzb3WmCjwwgCSnHky0vj+4pYwoGdtHZfujNuNoBP9MI/NHpED7NcHTFqErqAh5sfIdn0rruwDb3TL7tTsj5z8hfVX3zQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=HgO7JImBHK/SXOfsGbqi/TqQyoKGxJPlNWajPWBAOuQ=;
 b=HDiFvPb9ciEHNRUhhgHzcCLq31mbKEatXbn4sRho3nfFH6a5rmNOXIwNX+BJEjKWkk03b1LOhVGGEoAvy/bCGA165oSxuJSP2R9cb2OxzpCxZNCVsqwwLIcLwjSpDRHO05EX/bZ1B05xv6UF7TMgdN9dGJvHbZeu5dX8d9ULz+6iILDi+4kWfRvdMRDoJeTLbhUYA+hnL3Bi17NunG6ti9mi+Z9g97DJEyxxGrXsdNTvJLfdtnAKV9s9kBVrcUqaqrqc8YZc2RQhXX7qY2ho0s3l34Yp6HYMOb3dYZyvzwiDkj27fFAr/Kaw1VrUF5IYmI11poLs+e615U7I1TwpRw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=HgO7JImBHK/SXOfsGbqi/TqQyoKGxJPlNWajPWBAOuQ=;
 b=LQFkq5fELO10g4MdHfRD6UExidi3GvUeN54CkV3MMKR4LAn5eJzgTugKGIBmmYhh6Me30GOJpUqYLXuXmg3z6X8ghAcTphwxuMuYcLe3OZ7mgC69+LzAaTwjO2ZDCw2N7rqxcPvsRCcmco7CmVmli/duolPFWZU3jZMdsQxzqXE=
Date: Tue, 23 Feb 2021 12:59:22 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>
Subject: Re: [PATCH v2 8/8] x86/PV: use get_unsafe() instead of
 copy_from_unsafe()
Message-ID: <YDTuGn8YWRrWlbS9@Air-de-Roger>
References: <b466a19e-e547-3c7c-e39b-1a4c848a053a@suse.com>
 <0a59ae2f-448e-610d-e8a2-a7c3f9f3918f@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <0a59ae2f-448e-610d-e8a2-a7c3f9f3918f@suse.com>
X-ClientProxiedBy: PR1PR01CA0028.eurprd01.prod.exchangelabs.com
 (2603:10a6:102::41) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: e6ee5426-577e-4787-1db2-08d8d7f278b9
X-MS-TrafficTypeDiagnostic: DM6PR03MB3738:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB3738802ECDA2AF8B7A87F8DD8F809@DM6PR03MB3738.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:6790;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: D7HcXHOKXegJTc27cjQcO7cBQ/I3TCxb7qCKfQyyeiGONDHdUrdreTEZOfi85/8I+oZ4HOOOMgD+cvS2/MPn3Kw9788b4eSRlNMtU4BiZLw1EGkaMHsdQDW7ZNlGzQ+vhN469JalVF6dg4Hki3Lprcdw6Cgr94dupfMMXJQxJD78F9aNqNVK/E3cUj4IJJk2HikwFDmKn4xqLs8T1u2uhmKntgxZH+3Qg7OXVQbK/XQw296T5GVgac0b7vb3TWw2f3G9ClQ30ym+ZK5JRwVSsVaAgzM5ebWTB5pVaM7fw2q3BCwPhkWa3u7pJnntoBmFO0LLgI2XOOifFbrBlQenhBXrDbJj2sctZ1CbZBsSzmkiMC+rsZHOV9y8jCOs+ksUukhYoNpve2X7usVu02XYAYfSS8x1w5pYOQXIquntROdakoq44z+YPJUXb5tXpAwBN94uKH7aJMl8VTmAnRWLRn6kUcLpUKrjUPkkg2BBuB2qoiDmNTHaa2Z3L0bGy6Hqln1NaJepf+Keb9gGU0r6IA==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(376002)(39860400002)(396003)(346002)(136003)(366004)(5660300002)(316002)(85182001)(16526019)(8676002)(54906003)(956004)(6916009)(66476007)(8936002)(186003)(2906002)(6486002)(26005)(66946007)(66556008)(6666004)(9686003)(478600001)(86362001)(33716001)(6496006)(83380400001)(4744005)(4326008);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?eVlMdEZuNlFDN2dhdmxuaURmaTdvLzNGVW1DY3R0eU45M0M4SHRwekt0RFJw?=
 =?utf-8?B?bXVtUHJiUnlJNGVSQkhwVTlwZTFCWUtKQjRkV3Btak1BblFaWGQ5aHcrMUVD?=
 =?utf-8?B?SkRsNHFOMW1RK1RVVitSc2g0bFNmUHUzSW1HKzZJd3ppeU9kQlpJUmRCS3Bk?=
 =?utf-8?B?Q25IYThhNXRZUTBDWHVFTWV2MFVULzkzdVgvQTRQU0dDcjc1RVZoNVRmQ0M5?=
 =?utf-8?B?a2V1Zk1GbzUrZTNZMjdNaE9LcFpQbEdWNkFsYWVJQ1BZR1loaUw5WnhYNHU2?=
 =?utf-8?B?K2VqTy9ST3pPMVFGdG9WTUtWcGE2QU83RlpKNTkycnJhWlVkUDQ5dWtMb2pZ?=
 =?utf-8?B?SkY1blpCQnlqQ09tVmUrNkNScU1jS2NrZ285QjU1ZjVoNlpCL21CZkdOQllB?=
 =?utf-8?B?SmMvOTF1K2V2a2txaFp6VGVZejZnWkxKSUpOc1BFeEhlNnhGRDQvS2hBU1Mx?=
 =?utf-8?B?MzRpR041VDNTR05EdmJrdVQ1cXIweCtDS2V5dUIveklMWkNsLzVjUWM4dCth?=
 =?utf-8?B?cHg4QW11TjFMaG9VT3F5SE9zUzI1akRSaU1YeFhHRi8vdGJsSG8xUUxocGRZ?=
 =?utf-8?B?b1FRY2dOTHhBU2FZYitFME0wWjF6TEczZmoyemRjNUh1Zkp3M2FqZ3phbXlx?=
 =?utf-8?B?VWcyVDNtajhKdllveHZYbzJOa3BQekJJeUFzZUVyWWVlUWlOZks5QjBibFA4?=
 =?utf-8?B?UnBzc1p1Y0UyVU1HdlRTelZRNzJCdXRUa0JnL0w4aVJmNm5iTnF0TGk2a2xP?=
 =?utf-8?B?aTJRSEtvRHgrd2t1Z1NNVEtsUitieHFodC9iU3N4by9zVEZQQUlsNU9UOXlr?=
 =?utf-8?B?V0d0enNuQ1pZTUdJbVhBdDlrMTlyeW9wcFFEc0haRGVJa29hRVhOakFnM3E1?=
 =?utf-8?B?SkVPV3plU2VEUWlkYzlYQ0I1YXNubWFBMStuRmkxZmN6dnhvZXNNSWRITFJO?=
 =?utf-8?B?dnI2U2JlMW5LWFpDTnJmMG1nV3prbTJxeTgzc095M0U3dkZMenNFSnB3OGZz?=
 =?utf-8?B?eksxY2lpTlJsdEpwM0VnMG5Qb054b0JMck5Ic0szQ2libmZET1ptTE0reURx?=
 =?utf-8?B?SDFYL21sYlM3U1pMUFlOZk9CUEdyamU5ZmlSKzBJQ1NuUWU3WkdqdnMrRUdj?=
 =?utf-8?B?eXlISXErSkxINEk5c3pSQzN3Q2FZMkRGdWVvMzZFb0Nsd3lydjBlQzFZL1dD?=
 =?utf-8?B?RXJpcktnaUJDbFZXdkRLTXd3cnFKSEJxTU9XRlpFTzVTMU5pZkRieVpZSUxW?=
 =?utf-8?B?dlNSbFZEbHJzU1RJZEg3SFAwN3k0bjdZSHdkVVIrZjRqZDFadFdwL1RuN1l6?=
 =?utf-8?B?dFZwMW9KYkNCUUhqN1prN3NEVDhiVzN1bU5vNmxJN0JPejFsTk5EMHp3WEdo?=
 =?utf-8?B?bEM4ellvbDEvUlh1K1pkR2dWRjBVRzc3Znd6OTVvNlNKeGo0aXVGTnhEdVhs?=
 =?utf-8?B?L0FrMkNTZ0FQaGxFYmNVc2ZnLzZZNTQ4NkkwbzVMK3kyb3VURTExamdqcC9C?=
 =?utf-8?B?Q3J2R1Qyd0dzU09NMDhLdEQ3RkJkNU1FWDQ4bDlmVDRHZTdjeVU5Q0lGU2Ns?=
 =?utf-8?B?Q0ZEWjNDdlFmL1lodHBvR3ZYQkZxRW5qVlhTclVxTy9ZblNta1BRcTBWbnN2?=
 =?utf-8?B?Tk1oVVJMU2ZDaEU4L2hZTTB6Z2haSUpaYTJwSWhpWFJVbFMyaDVLeHJ4aFRO?=
 =?utf-8?B?ZVp5TWhLWnB3SWRlTGRiM0JiVWJUOGVYMFBXRXlvdzd1bHlETlpiaHhPSFhP?=
 =?utf-8?B?ekFvZUFvR3BTTHB5emRrd2d6NkJ1YTBlNU1kbFRSWlhUVW42STJuWFJOMVNl?=
 =?utf-8?B?dks3UGdvdlFmb0EzQUVndz09?=
X-MS-Exchange-CrossTenant-Network-Message-Id: e6ee5426-577e-4787-1db2-08d8d7f278b9
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Feb 2021 11:59:28.1565
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 7rqiceotEpL+QIaHU9zRTslnf1GFiWm6F1OFS/WRJB4rVtF6wH1aUeDI88wtWkHXae8UVURL9VI758CtmGZxbA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB3738
X-OriginatorOrg: citrix.com

On Wed, Feb 17, 2021 at 09:23:33AM +0100, Jan Beulich wrote:
> The former expands to a single (memory accessing) insn, which the latter
> does not guarantee. Yet we'd prefer to read consistent PTEs rather than
> risking a split read racing with an update done elsewhere.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Although I wonder why the __builtin_constant_p check done in
copy_from_unsafe is not enough to take the get_unsafe_size branch in
there. Doesn't sizeof(l{1,2}_pgentry_t) qualify as a build-time
constant?

Or does the fact that n is a parameter to an inline function hide
this, in which case the __builtin_constant_p is pointless?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 12:17:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 12:17:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88746.167002 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEWd5-0006OE-6D; Tue, 23 Feb 2021 12:17:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88746.167002; Tue, 23 Feb 2021 12:17:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEWd5-0006O7-31; Tue, 23 Feb 2021 12:17:27 +0000
Received: by outflank-mailman (input) for mailman id 88746;
 Tue, 23 Feb 2021 12:17:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UeLE=HZ=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lEWd2-0006O0-P1
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 12:17:25 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 46f47052-6738-4e75-b35d-737404c18497;
 Tue, 23 Feb 2021 12:17:23 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 46f47052-6738-4e75-b35d-737404c18497
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614082643;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=6B5lLyzjoNzPFmwuDmERxt8fR91d9DWd3QrM6/1ZnqI=;
  b=NF5Oy20g3xZ+4wfNvXQnZ3ffadO6i5Z9Y3kBvcx+liGcdTW6LuOnA1uc
   aAwwCr3ARWR195RJCFq0KJuinXAieVvdVgMAyWJQ66eIobpc055o6HEyl
   id6wJe7Ztv7iRIh/jERt8P0J7UUvGh+KgkWyKPXcu0dI7XIiJ8G4FAwos
   c=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: I+E74cw3AHxp7YawhHMtesxL62lXw4T/VTYIOSn2yKkd+0pNngsaEmqJbaidNSy+7tOUjfMyYY
 Z0N8N1Nxof3k79jcScTGKLjD9WMV56JedJgyRbQNYR8AMfX6/Y6nM5jSguaZpWZfZKdrnfHRXu
 /YgK38wscWefONQSzxsAnPar+r1PUdCNl3zHNbzU7gwuBlxF0IeYDMkiIqlMxfuOSqaFaO5Lq1
 O32ilnM4wqgMRuoZZygfewpqjhV1QQHB7547EPwsriL96ys2pQXFlun8OHB2w3cb0+Ge8mZOIv
 YLc=
X-SBRS: 5.2
X-MesageID: 37836393
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,200,1610427600"; 
   d="scan'208";a="37836393"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=aMuLUlW1fscXoVtU17HHFHtAEW8cH22EslQLCEQQzh7OA9INBtpRoYgadpUeqMuVwG0woPp77FBBlHy39Mjfe0+e9Vvtl/Igr56gf9paJLagkqhtcP48nvny+j+0+AZ2A1McrHTY2TNDeZgFmgJhp2K83dGJjJLcfe0ZRqjCBbVJaJmmUE19T+7VVQazLLI+MGW/6bz1eGh+74tBjRGf1DIaP3WFv2HQFddaf4bgd3UMC/kITG2r1P3P6VT7ZN4pXhmljhj1qIh5ZnaxW/nxAOGawTooX2S7b4asxxmm5UJLBFcEGyaz03FQYVsIdhN9Dj0bOZWY1wNiHaXsC06M1w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=9NubZbynDjJjRzoU5i+z36kQ7herjb7QFxkiBN4s8hE=;
 b=HrPn5lTSDTAMPF38DWJCAaYnfjjrTaE2iuTqevVx33LxAaPn04rF23ZSOjIvYCekHVxntWQ+VR99hXZRs8EHKCAWg5L37jiwXTUVcipVoae1dDPu0eyaCg8LMtjfhwWfs3uM0mdG2u4h0AV4bievrLhbM0eLrlFqc923fVr+oE0caUwRYxrFZn7SVBMVTLgOkuP4cxV7Iwo3PbNCj5W/krqRMNvbdFJXu/8WcNGNCnYFSYxBTCSKzR9xVaX4ryFgCmPBPKw1e1A4zJozOKx99piyuATwVqxciFbg53j4PqCwhN2yqlcDTeh1fcVN+lgYLkoUT37XGssJaBQGMQ6M6g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=9NubZbynDjJjRzoU5i+z36kQ7herjb7QFxkiBN4s8hE=;
 b=dK3M+Hl+BBzBJ6SO3O16zTP6nxD0bDMS/zclmdY8jK7Sv+2w36BQpI9EQHyC70AJ/u28dN5xvV4sfaM8VME/5KDEN3ECPNwRuBZzvs87NKNRCEzFQYrvchThUkOXrMsTM4sJNdlWyfICTwVwIkCNCB6HUZItKlHLgk6Q+fFfWFg=
Date: Tue, 23 Feb 2021 13:17:13 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: <andrew.cooper3@citrix.com>, Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	<xen-devel@lists.xenproject.org>, <iwj@xenproject.org>, <wl@xen.org>,
	<anthony.perard@citrix.com>, <jun.nakajima@intel.com>, <kevin.tian@intel.com>
Subject: Re: [PATCH v2 2/4] x86: Introduce MSR_UNHANDLED
Message-ID: <YDTyScmud26aiaMi@Air-de-Roger>
References: <1611182952-9941-1-git-send-email-boris.ostrovsky@oracle.com>
 <1611182952-9941-3-git-send-email-boris.ostrovsky@oracle.com>
 <YC5GrgqwsR/eBwpy@Air-de-Roger>
 <4e585daa-f255-fbff-d1cf-38ef49f146f5@oracle.com>
 <YDOQvU1h8zpOv5PH@Air-de-Roger>
 <ce2ef7a3-0583-ffff-182a-0ab078f45b82@oracle.com>
 <307a4765-1c54-63fd-b3fb-ecb47ba3dbca@suse.com>
 <YDTMIW5vBe0IncVR@Air-de-Roger>
 <2744f277-06fb-e49f-2023-0ec6175259dd@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <2744f277-06fb-e49f-2023-0ec6175259dd@suse.com>
X-ClientProxiedBy: PR3P195CA0029.EURP195.PROD.OUTLOOK.COM
 (2603:10a6:102:b6::34) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: fb05d81e-82d1-41fd-deda-08d8d7f4f786
X-MS-TrafficTypeDiagnostic: DS7PR03MB5445:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DS7PR03MB5445BB2528197853F74E271F8F809@DS7PR03MB5445.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: fb05d81e-82d1-41fd-deda-08d8d7f4f786
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Feb 2021 12:17:20.0476
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: P+3csdu/manMx+HfFbRyxAKZ56Sml5cAiaJwX9aZejAWs9VmfGY2A3fBuePmIZkIJvETwPP2YDN8bsHiABNEHA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR03MB5445
X-OriginatorOrg: citrix.com

On Tue, Feb 23, 2021 at 11:15:31AM +0100, Jan Beulich wrote:
> On 23.02.2021 10:34, Roger Pau Monné wrote:
> > On Tue, Feb 23, 2021 at 08:57:00AM +0100, Jan Beulich wrote:
> >> On 22.02.2021 22:19, Boris Ostrovsky wrote:
> >>>
> >>> On 2/22/21 6:08 AM, Roger Pau Monné wrote:
> >>>> On Fri, Feb 19, 2021 at 09:56:32AM -0500, Boris Ostrovsky wrote:
> >>>>> On 2/18/21 5:51 AM, Roger Pau Monné wrote:
> >>>>>> On Wed, Jan 20, 2021 at 05:49:10PM -0500, Boris Ostrovsky wrote:
> >>>>>>> When toolstack updates MSR policy, this MSR offset (which is the last
> >>>>>>> index in the hypervisor MSR range) is used to indicate hypervisor
> >>>>>>> behavior when guest accesses an MSR which is not explicitly emulated.
> >>>>>> It's kind of weird to use an MSR to store this. I assume this is done
> >>>>>> for migration reasons?
> >>>>>
> >>>>> Not really. It just seemed to me that MSR policy is the logical place to do that. Because it *is* a policy of how to deal with such accesses, isn't it?
> >>>> I agree that using the msr_policy seems like the most suitable place
> >>>> to convey this information between the toolstack and Xen. I wonder if
> >>>> it would be fine to have fields in msr_policy that don't directly
> >>>> translate into an MSR value?
> >>>
> >>>
> >>> We have xen_msr_entry_t.flags that we can use when passing policy array back and forth. Then we can ignore xen_msr_entry_t.idx for that entry (although in earlier version of this series Jan preferred to use idx and leave flags alone).
> >>
> >> Which, just to clarify, was not the least because of the flags
> >> field being per-entry, i.e. per MSR, while here we want a global
> >> indicator.
> > 
> > We could exploit the xen_msr_entry_t structure a bit and use it
> > like:
> > 
> > typedef struct xen_msr_entry {
> >     uint32_t idx;
> > #define XEN_MSR_IGNORE (1u << 0)
> >     uint32_t flags;
> >     uint64_t val;
> > } xen_msr_entry_t;
> > 
> > Then use the idx and val fields to signal the start and size of the
> > range to ignore accesses to when XEN_MSR_IGNORE is set in the flags
> > field?
> > 
> > xen_msr_entry_t = {
> >     .idx = 0,
> >     .val = 0xffffffff,
> >     .flags = XEN_MSR_IGNORE,
> > };
> > 
> > Would be equivalent to ignoring accesses to the whole MSR range.
> > 
> > This would allow selecting which MSRs to ignore, while not
> > wasting an MSR itself to convey the information?
> 
> Hmm, yes, the added flexibility would be nice from an abstract pov
> (not sure how relevant it would be to Solaris's issue). But my
> dislike of using a flag which is meaningless in ordinary entries
> remains, as was voiced against Boris's original version.

I understand the flags field is meaningless for regular MSRs, but I
don't see why it would be an issue to start using it for this specific
case of registering ranges of ignored MSRs. It certainly seems better
than hijacking an MSR index (MSR_UNHANDLED).
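
For illustration, a range lookup over such entries could look like the
sketch below. The structure mirrors the xen_msr_entry_t layout proposed
above; the helper name msr_access_ignored and the reading of val as the
range length are my own assumptions for this sketch, not an existing
Xen interface.

```c
#include <stdbool.h>
#include <stdint.h>

/* Mirrors the xen_msr_entry_t layout from the proposal above.
 * XEN_MSR_IGNORE and the (idx = start, val = length) encoding are part
 * of the proposal under discussion, not an existing Xen interface. */
typedef struct xen_msr_entry {
    uint32_t idx;
#define XEN_MSR_IGNORE (1u << 0)
    uint32_t flags;
    uint64_t val;
} xen_msr_entry_t;

/* Hypothetical helper: return true if a guest access to 'msr' falls in
 * any range the toolstack marked as "ignore accesses". */
static bool msr_access_ignored(const xen_msr_entry_t *policy,
                               unsigned int nr, uint32_t msr)
{
    for ( unsigned int i = 0; i < nr; i++ )
    {
        if ( !(policy[i].flags & XEN_MSR_IGNORE) )
            continue;
        if ( msr >= policy[i].idx &&
             (uint64_t)msr <= (uint64_t)policy[i].idx + policy[i].val )
            return true;
    }
    return false;
}
```

With this reading, the { .idx = 0, .val = 0xffffffff } example quoted
above covers the whole 32-bit MSR index space.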

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 12:25:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 12:25:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88752.167019 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEWl2-0007PV-2O; Tue, 23 Feb 2021 12:25:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88752.167019; Tue, 23 Feb 2021 12:25:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEWl1-0007PO-Vf; Tue, 23 Feb 2021 12:25:39 +0000
Received: by outflank-mailman (input) for mailman id 88752;
 Tue, 23 Feb 2021 12:25:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=p6em=HZ=epam.com=prvs=268883478e=volodymyr_babchuk@srs-us1.protection.inumbo.net>)
 id 1lEWl0-0007PJ-Bx
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 12:25:38 +0000
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6b9d024f-9e52-4ba1-b442-82e471665634;
 Tue, 23 Feb 2021 12:25:36 +0000 (UTC)
Received: from pps.filterd (m0174677.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 11NCPWdE031886; Tue, 23 Feb 2021 12:25:34 GMT
Received: from eur04-db3-obe.outbound.protection.outlook.com
 (mail-db3eur04lp2057.outbound.protection.outlook.com [104.47.12.57])
 by mx0a-0039f301.pphosted.com with ESMTP id 36vyne0m6h-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Tue, 23 Feb 2021 12:25:33 +0000
Received: from AM0PR03MB3508.eurprd03.prod.outlook.com (2603:10a6:208:4f::23)
 by AM9PR03MB7490.eurprd03.prod.outlook.com (2603:10a6:20b:265::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3868.27; Tue, 23 Feb
 2021 12:10:11 +0000
Received: from AM0PR03MB3508.eurprd03.prod.outlook.com
 ([fe80::a9a4:6122:8de2:64cb]) by AM0PR03MB3508.eurprd03.prod.outlook.com
 ([fe80::a9a4:6122:8de2:64cb%6]) with mapi id 15.20.3846.042; Tue, 23 Feb 2021
 12:06:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6b9d024f-9e52-4ba1-b442-82e471665634
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=V4X21lOEtU0Z0u+dZzpzGaFSJgR4n4sEDNNv4o3ziWwhLgXgdoObo6uAZF+l0ZPzhbA2oN3ZeKnLch4QLu9GKKZ67fFYM5ddXhGR/9uzOJ15walmwM+7tBDawtN3nDnvGSp+UudJoWEmw76IM44n6+Sd2JWjkIWfiNiHbTHo7jGvnz3QClNiP5RZITVruPy/l8Zd+yGAczakpIfq0eKV6oGJyoN+wlw5+pelXtvDWjROkF3cSzWJvrcP4u3RUafBGkqnb7q3sG33GZvsDV+7TjTVEvcN8qrYtlcN1W5/+oJvpgsJjGOnCfhmC35s79dupcpEfTcVSmq5YNjrMXBY5A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=N7NxZlCispGWLgeqciZXU66peGcDQMUmqNJA6ReggVg=;
 b=faOJInOAyUnP9JcAENm197cX4+Z56sFxQWY52CVCGGd82fOfvG++AzfcmSUxrxr0z1f+sdD1xl3/BAhJqIuVXE4zuQtVzMJiZaxN5NQ4xsBTlVlQmLi+bwZmrhxgF45VGXl+ooQiEJnlTLacRchyXdvW+GORMLXZfXu/xt3jnOA7qJmtKzSAt6Qks2uESNNuLt+SM0u3qJGzD82qR+NX3kBfsuuuCj84BkS+LL9mg3UMumk4Pyde45yWHcqQX4aV27sz0whbeFILFOaw0Y9rT3nOXPKzMMaDzPflh0yDuhICyCh8e8k9UcAO9j6VQX8fsmuW9swJumSMadbKLVIFxg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=N7NxZlCispGWLgeqciZXU66peGcDQMUmqNJA6ReggVg=;
 b=549XpSc2xC6+iOg8ANdyOCB7vIrfzuwAh7xzMOqzjuF1zMTO6n0z8d3sLLCup+TIY9m5ArE6D+d0DEJuXhDA3y61liJrFUqRmwWnnTdB6WzUDN1gq03/YNnWjN1KpYSAZK+MDaAUHGdT74915GA6m4u/ROz1A9QBdytEtRbPo19V2BM9ca9yzLL/ijoFYZs3TbuIWkOsMiEpL4tO+nKfixQgZuUfaooXAUVVWcT3wxKIm2LvEZ4aCDn7Gk17bIuN1CYGKv9G7eh9Qt0MwDJnzogmuXQJ/D4q1nkzJL4lREFNFJvB8DhYEPoAosPcjDYaiWWPY+XuMbqdoBelWLgVCA==
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: Julien Grall <julien@xen.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        George
 Dunlap <george.dunlap@citrix.com>,
        Dario Faggioli <dfaggioli@suse.com>, Meng
 Xu <mengxu@cis.upenn.edu>,
        Andrew Cooper <andrew.cooper3@citrix.com>,
        Ian
 Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>,
        Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [RFC PATCH 00/10] Preemption in hypervisor (ARM only)
Thread-Topic: [RFC PATCH 00/10] Preemption in hypervisor (ARM only)
Thread-Index: AQHXCYx4A6OUUHr1gkqxWv1TEOLkuqplchSAgAAzdYA=
Date: Tue, 23 Feb 2021 12:06:32 +0000
Message-ID: <87zgzv56pm.fsf@epam.com>
References: <20210223023428.757694-1-volodymyr_babchuk@epam.com>
 <e6d8726c-4074-fe4c-dbbe-e879da2bb7f6@xen.org>
In-Reply-To: <e6d8726c-4074-fe4c-dbbe-e879da2bb7f6@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
user-agent: mu4e 1.4.10; emacs 27.1
authentication-results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=epam.com;
x-originating-ip: [176.36.48.175]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: eeab5274-7a96-4bab-6de9-08d8d7f375e0
x-ms-traffictypediagnostic: AM9PR03MB7490:
x-microsoft-antispam-prvs: 
 <AM9PR03MB74909888EA9A799EF07D4B4DE6809@AM9PR03MB7490.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB3508.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: eeab5274-7a96-4bab-6de9-08d8d7f375e0
X-MS-Exchange-CrossTenant-originalarrivaltime: 23 Feb 2021 12:06:32.6503
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: H/Z72nOOML0DHfxL+rFuAY++2mbULDa/UUHSVH1afoKlQESHVtLmVEXsVi3wNJd7ZSUW3p1HLFCFfJf8f/3a6UJffoKJRd5K8rlw8Niiqmg=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR03MB7490


Hi Julien,

Julien Grall writes:

> On 23/02/2021 02:34, Volodymyr Babchuk wrote:
>> Hello community,
>
> Hi Volodymyr,
>
> Thank you for the proposal, I like the idea of being able to preempt
> the vCPU thread. This would make it easier to implement some of the
> device emulation in Xen (e.g. vGIC, SMMU).

Yes, emulation is the other topic that I didn't mention. Also, it could
lift some restrictions in the OP-TEE mediator code as well.

>> The subject of this cover letter is quite self-explanatory. This patch
>> series implements a PoC for preemption in hypervisor mode.
>> This is a sort of follow-up to the recent discussion about latency
>> ([1]).
>> Motivation
>> ==========
>> It is well known that Xen is not preemptible. In other words, it is
>> impossible to switch vCPU contexts while running in hypervisor
>> mode. The only place where a scheduling decision can be made and one
>> vCPU can be replaced with another is the exit path from hypervisor
>> mode. The one exception is idle vCPUs, which never leave the
>> hypervisor mode for obvious reasons.
>> This leads to a number of problems. This list is not comprehensive;
>> it lists only things that I or my colleagues encountered personally.
>> Long-running hypercalls. Due to the nature of some hypercalls, they
>> can execute for an arbitrarily long time. Mostly those are calls that
>> deal with long lists of similar actions, like memory page processing.
>> To deal with this issue Xen employs a most horrific technique called
>> "hypercall continuation".
>
> I agree the code is not nice. However, it does serve another purpose
> than ...
>
>> When the code that handles a hypercall decides that it should
>> be preempted, it basically updates the hypercall parameters and moves
>> the guest PC one instruction back. This causes the guest to
>> re-execute the hypercall with altered parameters, which allows the
>> hypervisor to continue hypercall execution later.
>
> ... just rescheduling the vCPU. It will also give the opportunity for
> the guest to handle interrupts.
>
> If you don't return to the guest, then you risk getting an RCU sched
> stall on that vCPU (some hypercalls can take really, really long).

Ah yes, you are right. I only wish that the hypervisor saved the
context of the hypercall on its side...

I have the example of OP-TEE before my eyes. It has a special return
code, "task was interrupted", and a separate call, "continue execution
of interrupted task", which takes an opaque context handle as a
parameter. With this approach the state of an interrupted call never
leaks to the rest of the system.
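
To make that model concrete, here is a small sketch of a
hypervisor-side continuation table in the spirit of the OP-TEE scheme:
the guest only ever sees an opaque handle, while the call state stays
inside the hypervisor. All names (RET_INTERRUPTED, long_call,
resume_call) are invented for this sketch and match neither Xen's nor
OP-TEE's real APIs.

```c
#include <stdint.h>

/* Illustrative only: hypervisor-side continuation state in the spirit
 * of OP-TEE's "interrupted task" model described above. */
#define RET_OK          0
#define RET_INTERRUPTED 1
#define RET_NO_SLOT     (-1)

struct call_state {
    int in_use;
    unsigned long done;   /* progress through a long-running operation */
    unsigned long total;
};

static struct call_state states[8];

/* Process up to 'budget' items of a 'total'-item operation.  When
 * interrupted, stash the progress on the hypervisor side and hand the
 * guest only an opaque handle. */
static int long_call(unsigned long total, unsigned long budget,
                     uint32_t *handle)
{
    if ( total <= budget )
        return RET_OK;

    for ( uint32_t i = 0; i < 8; i++ )
    {
        if ( !states[i].in_use )
        {
            states[i].in_use = 1;
            states[i].done = budget;
            states[i].total = total;
            *handle = i;
            return RET_INTERRUPTED;
        }
    }
    return RET_NO_SLOT; /* table full: guest must retry later */
}

/* "Continue execution of interrupted task": the guest passes back the
 * opaque handle, never the call state itself. */
static int resume_call(uint32_t handle, unsigned long budget)
{
    struct call_state *s = &states[handle];

    s->done += budget;
    if ( s->done >= s->total )
    {
        s->in_use = 0;
        return RET_OK;
    }
    return RET_INTERRUPTED;
}
```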

>
>> This approach itself has obvious problems: the code that executes a
>> hypercall is responsible for preemption, preemption checks are
>> infrequent (because they are costly by themselves), hypercall
>> execution state is stored in a guest-controlled area, and we rely on
>> the guest's good will to continue the hypercall.
>
> Why is it a problem to rely on the guest's good will? The hypercalls
> should be preempted at a boundary where it is safe to continue.

Yes, and it imposes restrictions on how to write a hypercall handler.
In other words, there are many more places in hypercall handler code
where it can be preempted than where a hypercall continuation can be
used. For example, you can preempt a hypercall that holds a mutex, but
of course you can't create a continuation point in such a place.

>> All this
>> imposes restrictions on which hypercalls can be preempted, when they
>> can be preempted, and how to write hypercall handlers. Also, it
>> requires very careful coding and has already led to at least one
>> vulnerability, XSA-318. Some hypercalls cannot be preempted at all,
>> like the one mentioned in [1].
>> Absence of hypervisor threads/vCPUs. The hypervisor owns only idle
>> vCPUs, which are supposed to run when the system is idle. If the
>> hypervisor needs to execute its own tasks that are required to run
>> right now, it has no other way than to execute them on the current
>> vCPU. But the scheduler does not know that the hypervisor is
>> executing a hypervisor task, and accounts the time spent to a domain.
>> This can lead to domain starvation.
>> Also, the absence of hypervisor threads leads to the absence of
>> high-level synchronization primitives like mutexes, condition
>> variables, completions, etc. This leads to two problems: we need to
>> use spinlocks everywhere, and we have problems when porting device
>> drivers from the Linux kernel.
>> Proposed solution
>> =================
>> It is quite obvious that to fix the problems above we need to allow
>> preemption in hypervisor mode. I am not familiar with the x86 side,
>> but for ARM it was surprisingly easy to implement. Basically, vCPU
>> context in hypervisor mode is determined by its stack and general
>> purpose registers. And the __context_switch() function perfectly
>> switches them when running in hypervisor mode. So there are no hard
>> restrictions on why it should be called only in the
>> leave_hypervisor() path.
>> The obvious question is: when should we try to preempt a running
>> vCPU? And the answer is: when there was an external event. This means
>> that we should try to preempt only when there was an interrupt
>> request while we are running in hypervisor mode. On ARM, in this case
>> the function do_trap_irq() is called. The problem is that the IRQ
>> handler can be called when the vCPU is already in an atomic state
>> (holding a spinlock, for example). In that case we should try to
>> preempt right after leaving the atomic state. This is basically the
>> whole idea behind this PoC.
>> Now, about the series composition.
>> Patches
>>    sched: core: save IRQ state during locking
>>    sched: rt: save IRQ state during locking
>>    sched: credit2: save IRQ state during locking
>>    preempt: use atomic_t to for preempt_count
>>    arm: setup: disable preemption during startup
>>    arm: context_switch: allow to run with IRQs already disabled
>> prepare the groundwork for the rest of the PoC. It appears that not
>> all code is ready to be executed in IRQ state, and schedule() can now
>> be called at the end of do_trap_irq(), which is technically
>> considered IRQ handler state. Also, it is unwise to try to preempt
>> things while we are still booting, so we need to enable atomic
>> context during the boot process.
>
> I am really surprised that this is the only changes necessary in
> Xen. For a first approach, we may want to be conservative when the
> preemption is happening as I am not convinced that all the places are
> safe to preempt.
>

Well, I can't say that I ran extensive tests, but I played with this for
some time and it seemed quite stable. Of course, I had some problems
with RTDS...

As I see it, Xen already supports SMP, so all places where races are
possible should already be covered by spinlocks or taken into account by
some other means.

Places which may not be safe to preempt are clustered around the task
management code itself: schedulers, Xen entry/exit points, vCPU
creation/destruction, and such.

For example, we surely do not want to destroy a vCPU which was preempted
in hypervisor mode. I didn't cover this case, by the way.

>> Patches
>>    preempt: add try_preempt() function
>>    sched: core: remove ASSERT_NOT_IN_ATOMIC and disable preemption[!]
>>    arm: traps: try to preempt before leaving IRQ handler
>> are basically the core of this PoC. The try_preempt() function tries
>> to preempt the vCPU both when called by the IRQ handler and when
>> leaving an atomic state. The scheduler now enters an atomic state to
>> ensure that it will not preempt itself. do_trap_irq() calls
>> try_preempt() to initiate preemption.
>
> AFAICT, try_preempt() will deal with the rescheduling. But how about
> softirqs? Don't we want to handle them in try_preempt() as well?

Well, yes and no. We have the following softirqs:

 TIMER_SOFTIRQ - should be called, I believe
 RCU_SOFTIRQ - I'm not sure about this, but probably no
 SCHED_SLAVE_SOFTIRQ - no
 SCHEDULE_SOFTIRQ - no
 NEW_TLBFLUSH_CLOCK_PERIOD_SOFTIRQ - should be moved to a separate
 thread, maybe?
 TASKLET_SOFTIRQ - should be moved to a separate thread

So, looks like only timers should be handled for sure.
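
As a side note, the deferral rule described in the cover letter
(preempt from the IRQ handler unless we are in an atomic section,
otherwise right after leaving it) can be sketched roughly as below. The
function names follow the series description, but the bodies are my
guess at the mechanism, not the actual patches, and the sketch ignores
SMP and IRQ masking (the real preempt_count would be a per-CPU
atomic_t).

```c
#include <stdbool.h>

/* Rough model of the preemption gate: preempt_count guards atomic
 * sections, try_preempt() is called from the IRQ path, and a deferred
 * request fires when the outermost atomic section is left. */
static int preempt_count;
static bool resched_pending;
static int context_switches;   /* stand-in for a real __context_switch() */

static void do_schedule(void)
{
    context_switches++;
    resched_pending = false;
}

static void preempt_disable(void)
{
    preempt_count++;
}

static void preempt_enable(void)
{
    /* Leaving the outermost atomic section: honour a deferred request. */
    if ( --preempt_count == 0 && resched_pending )
        do_schedule();
}

/* Called from do_trap_irq(): preempt immediately if we are not in an
 * atomic section, otherwise defer until preempt_enable(). */
static void try_preempt(void)
{
    if ( preempt_count == 0 )
        do_schedule();
    else
        resched_pending = true;
}
```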

>
> [...]
>
>> Conclusion
>> =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D
>> My main intention is to begin a discussion of hypervisor preemption.
>> As I showed, it is doable right away and provides some immediate
>> benefits. I do understand that a proper implementation requires much
>> more effort. But we are ready to do this work if the community is
>> interested in it.
>> Just to reiterate the main benefits:
>> 1. More controllable latency. In embedded systems, customers care
>> about such things.
>
> Is the plan to only offer preemptible Xen?
>

Sorry, didn't get the question.

>> 2. We can get rid of hypercall continuations, which will result in
>> simpler and more secure code.
>
> I don't think you can get rid of it completely without risking the OS
> receiving an RCU sched stall. So you would need to handle those
> hypercalls differently.

Agreed. I believe that the continuation context should reside in the
hypervisor. Those changes are not connected to preemption per se and can
be implemented separately. But we can discuss them here, of course.

[...]

-- 
Volodymyr Babchuk at EPAM


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 12:34:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 12:34:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88756.167032 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEWti-0008Sb-16; Tue, 23 Feb 2021 12:34:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88756.167032; Tue, 23 Feb 2021 12:34:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEWth-0008SU-TL; Tue, 23 Feb 2021 12:34:37 +0000
Received: by outflank-mailman (input) for mailman id 88756;
 Tue, 23 Feb 2021 12:34:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lEWtg-0008SK-JG
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 12:34:36 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lEWtg-0002ds-2m; Tue, 23 Feb 2021 12:34:36 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lEWtf-0000cJ-L5; Tue, 23 Feb 2021 12:34:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From;
	bh=qOQNybg5DTKGdQKuYPQV7s9nH2sOE4WF8M2drVNX3Xs=; b=ebwc4kLeSRiYVQ3DfWm/K1blnj
	iWdaXzpZnt7/Z5tbcNTB2eiWUGigmJ9zE/cJd1SsTLeEFq7yDrNqKRY9UnkJKzzfROx1l+a/sM3lE
	ojsk6amQCpRy44P95BxQ0ZBUteS6VWpeEhIcqh8hOxCMBy+N0xa+5h1KtiEhcSHumoPM=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: hongyxia@amazon.co.uk,
	iwj@xenproject.org,
	Julien Grall <jgrall@amazon.com>
Subject: [for-4.15][PATCH v4 0/2] xen/iommu: Collection of bug fixes for IOMMU teardown
Date: Tue, 23 Feb 2021 12:34:31 +0000
Message-Id: <20210223123433.19645-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1

From: Julien Grall <jgrall@amazon.com>

Hi all,

This series is a collection of bug fixes for the IOMMU teardown code.
All of them are candidates for 4.15, as they can either leak memory or
lead to host crash/corruption.

This is sent directly on xen-devel because all the issues were either
introduced in 4.15 or happen in the domain creation code.

Major changes since v3:
    - Remove patch #3 "xen/iommu: x86: Harden the IOMMU page-table
    allocator" as it is not strictly necessary for 4.15.
    - Re-order the patches to avoid relying on a follow-up patch to
    completely fix the issue.

Major changes since v2:
    - patch #1 "xen/x86: p2m: Don't map the special pages in the IOMMU
    page-tables" has been removed. This requires Jan's patch [1] to
    fully mitigate memory leaks.

Cheers,

[1] <90271e69-c07e-a32c-5531-a79b10ef03dd@suse.com>

Julien Grall (2):
  xen/x86: iommu: Ignore IOMMU mapping requests when a domain is dying
  xen/iommu: x86: Clear the root page-table before freeing the
    page-tables

 xen/drivers/passthrough/amd/iommu_map.c     | 12 +++++++++++
 xen/drivers/passthrough/amd/pci_amd_iommu.c | 12 ++++++++++-
 xen/drivers/passthrough/vtd/iommu.c         | 24 ++++++++++++++++++++-
 xen/drivers/passthrough/x86/iommu.c         | 19 ++++++++++++++++
 xen/include/xen/iommu.h                     |  1 +
 5 files changed, 66 insertions(+), 2 deletions(-)

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Feb 23 12:34:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 12:34:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88758.167056 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEWtj-0008UY-HQ; Tue, 23 Feb 2021 12:34:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88758.167056; Tue, 23 Feb 2021 12:34:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEWtj-0008UP-Dl; Tue, 23 Feb 2021 12:34:39 +0000
Received: by outflank-mailman (input) for mailman id 88758;
 Tue, 23 Feb 2021 12:34:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lEWti-0008Sw-4l
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 12:34:38 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lEWth-0002e2-NV; Tue, 23 Feb 2021 12:34:37 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lEWth-0000cJ-F6; Tue, 23 Feb 2021 12:34:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=rHfC/wahEJYSijbiVGgEzgUQtxG8SVxu1zXJPzObAQE=; b=SphyqBbfEuOUuRq48A581W8Xk
	RZ2tIxprE1m2FJrKTMYslahhcwruXd8/SBSyo5UpyPQt7vd4kXoD0Vs+FQYeyt1OwPvWMTqruce2q
	Zfwl5TAhnWL20YNkvgFipjNY6zawL+xjAApo98egh85rpBJW0WlmNxZYmBT1l+nsjg7l4=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: hongyxia@amazon.co.uk,
	iwj@xenproject.org,
	Julien Grall <jgrall@amazon.com>
Subject: [for-4.15][PATCH v4 2/2] xen/iommu: x86: Clear the root page-table before freeing the page-tables
Date: Tue, 23 Feb 2021 12:34:33 +0000
Message-Id: <20210223123433.19645-3-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210223123433.19645-1-julien@xen.org>
References: <20210223123433.19645-1-julien@xen.org>

From: Julien Grall <jgrall@amazon.com>

The new per-domain IOMMU page-table allocator will now free the
page-tables when the domain's resources are relinquished. However, the
per-domain IOMMU structure will still contain a dangling pointer to
the root page-table.

Xen may access the IOMMU page-tables afterwards, at least in the case of
PV domains:

(XEN) Xen call trace:
(XEN)    [<ffff82d04025b4b2>] R iommu.c#addr_to_dma_page_maddr+0x12e/0x1d8
(XEN)    [<ffff82d04025b695>] F iommu.c#intel_iommu_unmap_page+0x5d/0xf8
(XEN)    [<ffff82d0402695f3>] F iommu_unmap+0x9c/0x129
(XEN)    [<ffff82d0402696a6>] F iommu_legacy_unmap+0x26/0x63
(XEN)    [<ffff82d04033c5c7>] F mm.c#cleanup_page_mappings+0x139/0x144
(XEN)    [<ffff82d04033c61d>] F put_page+0x4b/0xb3
(XEN)    [<ffff82d04033c87f>] F put_page_from_l1e+0x136/0x13b
(XEN)    [<ffff82d04033cada>] F devalidate_page+0x256/0x8dc
(XEN)    [<ffff82d04033d396>] F mm.c#_put_page_type+0x236/0x47e
(XEN)    [<ffff82d04033d64d>] F mm.c#put_pt_page+0x6f/0x80
(XEN)    [<ffff82d04033d8d6>] F mm.c#put_page_from_l2e+0x8a/0xcf
(XEN)    [<ffff82d04033cc27>] F devalidate_page+0x3a3/0x8dc
(XEN)    [<ffff82d04033d396>] F mm.c#_put_page_type+0x236/0x47e
(XEN)    [<ffff82d04033d64d>] F mm.c#put_pt_page+0x6f/0x80
(XEN)    [<ffff82d04033d807>] F mm.c#put_page_from_l3e+0x8a/0xcf
(XEN)    [<ffff82d04033cdf0>] F devalidate_page+0x56c/0x8dc
(XEN)    [<ffff82d04033d396>] F mm.c#_put_page_type+0x236/0x47e
(XEN)    [<ffff82d04033d64d>] F mm.c#put_pt_page+0x6f/0x80
(XEN)    [<ffff82d04033d6c7>] F mm.c#put_page_from_l4e+0x69/0x6d
(XEN)    [<ffff82d04033cf24>] F devalidate_page+0x6a0/0x8dc
(XEN)    [<ffff82d04033d396>] F mm.c#_put_page_type+0x236/0x47e
(XEN)    [<ffff82d04033d92e>] F put_page_type_preemptible+0x13/0x15
(XEN)    [<ffff82d04032598a>] F domain.c#relinquish_memory+0x1ff/0x4e9
(XEN)    [<ffff82d0403295f2>] F domain_relinquish_resources+0x2b6/0x36a
(XEN)    [<ffff82d040205cdf>] F domain_kill+0xb8/0x141
(XEN)    [<ffff82d040236cac>] F do_domctl+0xb6f/0x18e5
(XEN)    [<ffff82d04031d098>] F pv_hypercall+0x2f0/0x55f
(XEN)    [<ffff82d04039b432>] F lstar_enter+0x112/0x120

This will result in a use-after-free and possibly a host crash or
memory corruption.

It would not be possible to free the page-tables further down in
domain_relinquish_resources() because cleanup_page_mappings() will only
be called when the last reference on the page is dropped. This may
happen much later if another domain still holds a reference.
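The timing argument above can be illustrated with a minimal refcount sketch (illustrative C, not Xen code; all names are made up for the example):

```c
#include <assert.h>

/*
 * Illustrative sketch (not Xen code): cleanup only runs when the last
 * reference is dropped, and a foreign domain holding a reference can
 * delay that arbitrarily -- so teardown cannot wait for it.
 */
struct page {
    int refcount;
    int cleaned;   /* set when cleanup_page_mappings() would have run */
};

static void put_page(struct page *pg)
{
    if ( --pg->refcount == 0 )
        pg->cleaned = 1;   /* cleanup happens only on the last put */
}
```

Dropping the owning domain's reference alone does not trigger the cleanup; only the final put does, whenever that happens.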

After all the PCI devices have been de-assigned, nobody should use the
IOMMU page-tables and it is therefore pointless to try to modify them.

So we can simply clear any reference to the root page-table in the
per-domain IOMMU structure. This requires introducing a new callback;
the exact method will depend on the IOMMU driver used.

Take the opportunity to add an ASSERT() in arch_iommu_domain_destroy()
to check that all the IOMMU page tables have been freed.
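The pattern the patch applies, clearing the root pointer under the mapping lock so that later walkers observe NULL rather than freed memory, can be sketched outside of Xen roughly as follows (a pthread mutex stands in for Xen's spinlock; all names are hypothetical):

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>
#include <stdlib.h>

/*
 * Illustrative sketch (not Xen code): the per-domain IOMMU state keeps a
 * pointer to the root page-table. Clearing it under the same lock that
 * lookups take turns a use-after-free into a clean failure.
 */
struct dom_iommu {
    pthread_mutex_t mapping_lock;
    void *root_table;            /* NULL once the tables are torn down */
};

static void clear_root_pgtable(struct dom_iommu *hd)
{
    pthread_mutex_lock(&hd->mapping_lock);
    hd->root_table = NULL;       /* no dangling pointer survives the free */
    pthread_mutex_unlock(&hd->mapping_lock);
}

/* A lookup that tolerates teardown: fails instead of walking freed tables. */
static int lookup(struct dom_iommu *hd)
{
    int ret;

    pthread_mutex_lock(&hd->mapping_lock);
    ret = hd->root_table ? 0 : -1;
    pthread_mutex_unlock(&hd->mapping_lock);
    return ret;
}
```

The real patch frees the tables afterwards via the page list; the essential point is that the pointer is gone before the memory is.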

Fixes: 3eef6d07d722 ("x86/iommu: convert VT-d code to use new page table allocator")
Signed-off-by: Julien Grall <jgrall@amazon.com>

---
    Changes in v4:
        - Move the patch later in the series as we need to prevent
        iommu_map() from allocating memory first.
        - Add an ASSERT() in arch_iommu_domain_destroy().

    Changes in v3:
        - Move the patch earlier in the series
        - Reword the commit message

    Changes in v2:
        - Introduce clear_root_pgtable()
        - Move the patch later in the series
---
 xen/drivers/passthrough/amd/pci_amd_iommu.c | 12 +++++++++++-
 xen/drivers/passthrough/vtd/iommu.c         | 12 +++++++++++-
 xen/drivers/passthrough/x86/iommu.c         | 13 +++++++++++++
 xen/include/xen/iommu.h                     |  1 +
 4 files changed, 36 insertions(+), 2 deletions(-)

diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
index 42b5a5a9bec4..085fe2f5771e 100644
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -381,9 +381,18 @@ static int amd_iommu_assign_device(struct domain *d, u8 devfn,
     return reassign_device(pdev->domain, d, devfn, pdev);
 }
 
+static void amd_iommu_clear_root_pgtable(struct domain *d)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+
+    spin_lock(&hd->arch.mapping_lock);
+    hd->arch.amd.root_table = NULL;
+    spin_unlock(&hd->arch.mapping_lock);
+}
+
 static void amd_iommu_domain_destroy(struct domain *d)
 {
-    dom_iommu(d)->arch.amd.root_table = NULL;
+    ASSERT(!dom_iommu(d)->arch.amd.root_table);
 }
 
 static int amd_iommu_add_device(u8 devfn, struct pci_dev *pdev)
@@ -565,6 +574,7 @@ static const struct iommu_ops __initconstrel _iommu_ops = {
     .remove_device = amd_iommu_remove_device,
     .assign_device  = amd_iommu_assign_device,
     .teardown = amd_iommu_domain_destroy,
+    .clear_root_pgtable = amd_iommu_clear_root_pgtable,
     .map_page = amd_iommu_map_page,
     .unmap_page = amd_iommu_unmap_page,
     .iotlb_flush = amd_iommu_flush_iotlb_pages,
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index b549a71530d5..475efb3be3bd 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1726,6 +1726,15 @@ out:
     return ret;
 }
 
+static void iommu_clear_root_pgtable(struct domain *d)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+
+    spin_lock(&hd->arch.mapping_lock);
+    hd->arch.vtd.pgd_maddr = 0;
+    spin_unlock(&hd->arch.mapping_lock);
+}
+
 static void iommu_domain_teardown(struct domain *d)
 {
     struct domain_iommu *hd = dom_iommu(d);
@@ -1740,7 +1749,7 @@ static void iommu_domain_teardown(struct domain *d)
         xfree(mrmrr);
     }
 
-    hd->arch.vtd.pgd_maddr = 0;
+    ASSERT(!hd->arch.vtd.pgd_maddr);
 }
 
 static int __must_check intel_iommu_map_page(struct domain *d, dfn_t dfn,
@@ -2731,6 +2740,7 @@ static struct iommu_ops __initdata vtd_ops = {
     .remove_device = intel_iommu_remove_device,
     .assign_device  = intel_iommu_assign_device,
     .teardown = iommu_domain_teardown,
+    .clear_root_pgtable = iommu_clear_root_pgtable,
     .map_page = intel_iommu_map_page,
     .unmap_page = intel_iommu_unmap_page,
     .lookup_page = intel_iommu_lookup_page,
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index c6b03624fe28..faeb549591d8 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -149,6 +149,13 @@ int arch_iommu_domain_init(struct domain *d)
 
 void arch_iommu_domain_destroy(struct domain *d)
 {
+    /*
+     * There should be no page-tables left allocated by the time the
+     * domain is destroyed. Note that arch_iommu_domain_destroy() is
+     * called unconditionally, so pgtables may be uninitialized.
+     */
+    ASSERT(dom_iommu(d)->platform_ops == NULL ||
+           page_list_empty(&dom_iommu(d)->arch.pgtables.list));
 }
 
 static bool __hwdom_init hwdom_iommu_map(const struct domain *d,
@@ -273,6 +280,12 @@ int iommu_free_pgtables(struct domain *d)
     /* After this barrier, no more IOMMU mapping can happen */
     spin_barrier(&hd->arch.mapping_lock);
 
+    /*
+     * Pages will be moved to the free list below. So we want to
+     * clear the root page-table to avoid any potential use after-free.
+     */
+    hd->platform_ops->clear_root_pgtable(d);
+
     while ( (pg = page_list_remove_head(&hd->arch.pgtables.list)) )
     {
         free_domheap_page(pg);
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 863a68fe1622..d59ed7cbad43 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -272,6 +272,7 @@ struct iommu_ops {
 
     int (*adjust_irq_affinities)(void);
     void (*sync_cache)(const void *addr, unsigned int size);
+    void (*clear_root_pgtable)(struct domain *d);
 #endif /* CONFIG_X86 */
 
     int __must_check (*suspend)(void);
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Feb 23 12:34:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 12:34:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88757.167038 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEWti-0008T5-A8; Tue, 23 Feb 2021 12:34:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88757.167038; Tue, 23 Feb 2021 12:34:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEWti-0008Sq-4Z; Tue, 23 Feb 2021 12:34:38 +0000
Received: by outflank-mailman (input) for mailman id 88757;
 Tue, 23 Feb 2021 12:34:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lEWth-0008SP-8a
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 12:34:37 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lEWtg-0002dx-Sb; Tue, 23 Feb 2021 12:34:36 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lEWtg-0000cJ-I5; Tue, 23 Feb 2021 12:34:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=F3pV8CFzjIR41kWk/X0MEyzvkJbWUgUJEtOR4PpRZWU=; b=q3+ReLHEwtnUCAwIgcDRLjifb
	PpBJB30Kbw3Y92arepez1Rw812d92MAo/ToFT0QIjKkb6eFPm3bGEHXV9RbiUr6eylG32+Nj7HEXQ
	V30rwbVG4fhDTf0uMQB1z7hP98fHwa4Ev3M5XJ0DPGtXv4gg+ugcQi6gLc46X3UdpUHIU=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: hongyxia@amazon.co.uk,
	iwj@xenproject.org,
	Julien Grall <jgrall@amazon.com>
Subject: [for-4.15][PATCH v4 1/2] xen/x86: iommu: Ignore IOMMU mapping requests when a domain is dying
Date: Tue, 23 Feb 2021 12:34:32 +0000
Message-Id: <20210223123433.19645-2-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210223123433.19645-1-julien@xen.org>
References: <20210223123433.19645-1-julien@xen.org>

From: Julien Grall <jgrall@amazon.com>

The new x86 IOMMU page-table allocator will release the pages when the
domain's resources are relinquished. However, this is not sufficient
when the domain is dying because nothing prevents page-tables from
being allocated.

As the domain is dying, it is not necessary to continue to modify the
IOMMU page-tables as they are going to be destroyed soon.

At the moment, page-table allocation only happens in iommu_map(). So
after this change there will be no more page-table allocations for a
dying domain.

In order to observe d->is_dying correctly, we need to rely on per-arch
locking, so the check to ignore IOMMU mapping is added on the per-driver
map_page() callback.
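The locking argument above can be sketched outside of Xen roughly as follows (a pthread mutex stands in for Xen's spinlock and its spin_barrier(); all names are hypothetical):

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>
#include <stdlib.h>

/*
 * Illustrative sketch (not Xen code) of the ordering argument:
 * map_page() samples the dying flag under the mapping lock, and
 * teardown takes and drops the same lock once (what Xen calls a
 * spin_barrier()) before freeing. Any mapper that missed the flag
 * held the lock before the barrier, so it finished before the
 * tables go away.
 */
struct dom {
    pthread_mutex_t mapping_lock;
    bool is_dying;
    void *pgtable;               /* stands in for the page-table pool */
};

static int map_page(struct dom *d)
{
    int ret;

    pthread_mutex_lock(&d->mapping_lock);
    if ( d->is_dying )
        ret = 0;                 /* dying: silently ignore the request */
    else
        ret = d->pgtable ? 0 : -1;  /* would allocate/walk tables here */
    pthread_mutex_unlock(&d->mapping_lock);
    return ret;
}

static void free_pgtables(struct dom *d)
{
    /* d->is_dying was already set by the kill path */
    pthread_mutex_lock(&d->mapping_lock);    /* barrier: flush mappers */
    pthread_mutex_unlock(&d->mapping_lock);
    free(d->pgtable);
    d->pgtable = NULL;
}
```

A mapper racing with teardown either completes before the barrier or observes is_dying and bails out; either way it never touches freed tables.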

Signed-off-by: Julien Grall <jgrall@amazon.com>

---

As discussed in v3, this only covers 4.15. We can discuss post-4.15 how
to catch page-table allocations if another caller (e.g. iommu_unmap(),
if we ever decide to support superpages) starts to use the page-table
allocator.

Changes in v4:
    - Move the patch to the top of the queue
    - Reword the commit message

Changes in v3:
    - Patch added. This is a replacement of "xen/iommu: iommu_map: Don't
    crash the domain if it is dying"
---
 xen/drivers/passthrough/amd/iommu_map.c | 12 ++++++++++++
 xen/drivers/passthrough/vtd/iommu.c     | 12 ++++++++++++
 xen/drivers/passthrough/x86/iommu.c     |  6 ++++++
 3 files changed, 30 insertions(+)

diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
index d3a8b1aec766..560af54b765b 100644
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -285,6 +285,18 @@ int amd_iommu_map_page(struct domain *d, dfn_t dfn, mfn_t mfn,
 
     spin_lock(&hd->arch.mapping_lock);
 
+    /*
+     * IOMMU mapping request can be safely ignored when the domain is dying.
+     *
+     * hd->arch.mapping_lock guarantees that d->is_dying will be observed
+     * before any page tables are freed (see iommu_free_pgtables()).
+     */
+    if ( d->is_dying )
+    {
+        spin_unlock(&hd->arch.mapping_lock);
+        return 0;
+    }
+
     rc = amd_iommu_alloc_root(d);
     if ( rc )
     {
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index d136fe36883b..b549a71530d5 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1762,6 +1762,18 @@ static int __must_check intel_iommu_map_page(struct domain *d, dfn_t dfn,
 
     spin_lock(&hd->arch.mapping_lock);
 
+    /*
+     * IOMMU mapping request can be safely ignored when the domain is dying.
+     *
+     * hd->arch.mapping_lock guarantees that d->is_dying will be observed
+     * before any page tables are freed (see iommu_free_pgtables())
+     */
+    if ( d->is_dying )
+    {
+        spin_unlock(&hd->arch.mapping_lock);
+        return 0;
+    }
+
     pg_maddr = addr_to_dma_page_maddr(d, dfn_to_daddr(dfn), 1);
     if ( !pg_maddr )
     {
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index cea1032b3d02..c6b03624fe28 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -267,6 +267,12 @@ int iommu_free_pgtables(struct domain *d)
     struct page_info *pg;
     unsigned int done = 0;
 
+    if ( !is_iommu_enabled(d) )
+        return 0;
+
+    /* After this barrier, no more IOMMU mapping can happen */
+    spin_barrier(&hd->arch.mapping_lock);
+
     while ( (pg = page_list_remove_head(&hd->arch.pgtables.list)) )
     {
         free_domheap_page(pg);
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Feb 23 13:17:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 13:17:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88770.167080 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEXYo-0004Cj-Vl; Tue, 23 Feb 2021 13:17:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88770.167080; Tue, 23 Feb 2021 13:17:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEXYo-0004Cc-Qw; Tue, 23 Feb 2021 13:17:06 +0000
Received: by outflank-mailman (input) for mailman id 88770;
 Tue, 23 Feb 2021 13:17:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YqHm=HZ=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lEXYm-0004CX-NF
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 13:17:04 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7bb4cb0a-575c-4282-ba87-36ab84c555a9;
 Tue, 23 Feb 2021 13:17:03 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6EC73AC1D;
 Tue, 23 Feb 2021 13:17:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7bb4cb0a-575c-4282-ba87-36ab84c555a9
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614086222; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=Z+J582orrVnsRLIqlpO3tJmmbKvU/Cbzk7PWqQR3xW4=;
	b=ud5J3HS1ScZjLvxLA8FllI448k+t7T0k1udgUce48ll/Xzq/LtmlqeqUQ3usk/WXmqwEP4
	fki/aKthdk6+fTycJJL/Wkz9zORQpYQx1Fmec3AXbApe3T6ewfqxlFnXIMGDJUu8ftwqW6
	dXFOm974eAKDchsmd4oOySrTN9Em3f0=
To: Roman Shaposhnik <roman@zededa.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich <jbeulich@suse.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>
References: <CAMmSBy-_UOK6DTrwGNOw8Y59Muv8H8wxmsc4-BXcv3N_u5USBA@mail.gmail.com>
 <alpine.DEB.2.21.2102161232310.3234@sstabellini-ThinkPad-T480s>
 <45b8ef4c-6d36-e91b-ca1a-a82eeca5aaf5@suse.com>
 <CAMmSBy8k0Y50Xkq9Kq+oES27gsoG==T++Hz9SiR0gDgAKnpvRA@mail.gmail.com>
 <49344e8d-5518-68c6-a417-68522a915e72@suse.com>
 <CAMmSBy-3y+Y3nhyf1uGN6KB_wNLVAqYRfc0hpkdKHtvdGSM5wg@mail.gmail.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: Linux DomU freezes and dies under heavy memory shuffling
Message-ID: <b6b694f6-61ed-c0b7-5980-88ddb5e1616c@suse.com>
Date: Tue, 23 Feb 2021 14:17:01 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <CAMmSBy-3y+Y3nhyf1uGN6KB_wNLVAqYRfc0hpkdKHtvdGSM5wg@mail.gmail.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="EfbhKDrrZMU0yIsOqJCQ9lW9F0ukvuujF"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--EfbhKDrrZMU0yIsOqJCQ9lW9F0ukvuujF
Content-Type: multipart/mixed; boundary="cGb9T0xPG40UEkZk4U8nke1dy7oVuq52N";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Roman Shaposhnik <roman@zededa.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich <jbeulich@suse.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>
Message-ID: <b6b694f6-61ed-c0b7-5980-88ddb5e1616c@suse.com>
Subject: Re: Linux DomU freezes and dies under heavy memory shuffling
References: <CAMmSBy-_UOK6DTrwGNOw8Y59Muv8H8wxmsc4-BXcv3N_u5USBA@mail.gmail.com>
 <alpine.DEB.2.21.2102161232310.3234@sstabellini-ThinkPad-T480s>
 <45b8ef4c-6d36-e91b-ca1a-a82eeca5aaf5@suse.com>
 <CAMmSBy8k0Y50Xkq9Kq+oES27gsoG==T++Hz9SiR0gDgAKnpvRA@mail.gmail.com>
 <49344e8d-5518-68c6-a417-68522a915e72@suse.com>
 <CAMmSBy-3y+Y3nhyf1uGN6KB_wNLVAqYRfc0hpkdKHtvdGSM5wg@mail.gmail.com>
In-Reply-To: <CAMmSBy-3y+Y3nhyf1uGN6KB_wNLVAqYRfc0hpkdKHtvdGSM5wg@mail.gmail.com>

--cGb9T0xPG40UEkZk4U8nke1dy7oVuq52N
Content-Type: multipart/mixed;
 boundary="------------048EBAE9B46709486A8C459F"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------048EBAE9B46709486A8C459F
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 18.02.21 06:21, Roman Shaposhnik wrote:
> On Wed, Feb 17, 2021 at 12:29 AM Jürgen Groß <jgross@suse.com
> <mailto:jgross@suse.com>> wrote:
>
>     On 17.02.21 09:12, Roman Shaposhnik wrote:
>      > Hi Jürgen, thanks for taking a look at this. A few comments below:
>      >
>      > On Tue, Feb 16, 2021 at 10:47 PM Jürgen Groß <jgross@suse.com
>     <mailto:jgross@suse.com>> wrote:
>      >>
>      >> On 16.02.21 21:34, Stefano Stabellini wrote:
>      >>> + x86 maintainers
>      >>>
>      >>> It looks like the tlbflush is getting stuck?
>      >>
>      >> I have seen this case multiple times on customer systems now, but
>      >> reproducing it reliably seems to be very hard.
>      >
>      > It is reliably reproducible under my workload but it take a long time
>      > (~3 days of the workload running in the lab).
>
>     This is by far the best reproduction rate I have seen up to now.
>
>     The next best reproducer seems to be a huge installation with several
>     hundred hosts and thousands of VMs with about 1 crash each week.
>
>      >
>      >> I suspected fifo events to be blamed, but just yesterday I've been
>      >> informed of another case with fifo events disabled in the guest.
>      >>
>      >> One common pattern seems to be that up to now I have seen this
>     effect
>      >> only on systems with Intel Gold cpus. Can it be confirmed to be true
>      >> in this case, too?
>      >
>      > I am pretty sure mine isn't -- I can get you full CPU specs if
>     that's useful.
>
>     Just the output of "grep model /proc/cpuinfo" should be enough.
>
>
> processor: 3
> vendor_id: GenuineIntel
> cpu family: 6
> model: 77
> model name: Intel(R) Atom(TM) CPU C2550 @ 2.40GHz
> stepping: 8
> microcode: 0x12d
> cpu MHz: 1200.070
> cache size: 1024 KB
> physical id: 0
> siblings: 4
> core id: 3
> cpu cores: 4
> apicid: 6
> initial apicid: 6
> fpu: yes
> fpu_exception: yes
> cpuid level: 11
> wp: yes
> flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
> pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp
> lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology
> nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est
> tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 movbe popcnt tsc_deadline_timer
> aes rdrand lahf_lm 3dnowprefetch cpuid_fault epb pti ibrs ibpb stibp
> tpr_shadow vnmi flexpriority ept vpid tsc_adjust smep erms dtherm ida
> arat md_clear
> vmx flags: vnmi preemption_timer invvpid ept_x_only flexpriority
> tsc_offset vtpr mtf vapic ept vpid unrestricted_guest
> bugs: cpu_meltdown spectre_v1 spectre_v2 mds msbds_only
> bogomips: 4800.19
> clflush size: 64
> cache_alignment: 64
> address sizes: 36 bits physical, 48 bits virtual
> power management:
>
>      >
>      >> In case anybody has a reproducer (either in a guest or dom0) with a
>      >> setup where a diagnostic kernel can be used, I'd be _very_
>     interested!
>      >
>      > I can easily add things to Dom0 and DomU. Whether that will
>     disrupt the
>      > experiment is, of course, another matter. Still please let me
>     know what
>      > would be helpful to do.
>
>     Is there a chance to switch to an upstream kernel in the guest? I'd like
>     to add some diagnostic code to the kernel and creating the patches will
>     be easier this way.
>
>
> That's a bit tough -- the VM is based on stock Ubuntu and if I upgrade
> the kernel I'll have fiddle with a lot things to make workload
> functional again.
>
> However, I can install debug kernel (from Ubuntu, etc. etc.)
>
> Of course, if patching the kernel is the only way to make progress --
> lets try that -- please let me know.

I have been able to gather some more data.

I have contacted the author of the upstream kernel patch I've been using
for our customer (and that helped, by the way).

It seems the problem occurs when running as a guest under at least Xen,
KVM, and VMware, and there have been reports of bare-metal cases, too.
This bug has been hunted for several years now; the patch author has
been at it for eight months.

So we can rule out a Xen problem.

Finding the root cause is still important, of course, and your setup
seems to have the best reproduction rate up to now.

So any help would really be appreciated.

Is the VM self-contained? Would it be possible to start it, e.g., on a
test system on my side? If so, would you be allowed to pass it on to
me?


Juergen

--------------048EBAE9B46709486A8C459F
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------048EBAE9B46709486A8C459F--

--cGb9T0xPG40UEkZk4U8nke1dy7oVuq52N--

--EfbhKDrrZMU0yIsOqJCQ9lW9F0ukvuujF
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmA1AE0FAwAAAAAACgkQsN6d1ii/Ey9W
9wf9HGMuph0lLE6x9JVzEtP/Obz5jVV6MO4AWC1v8DVXKHOmSIy/IPGAB3HJzLAFyoejLFFcgs2i
wtaTh19J7OFYXkjibxK1N8WI6j7yhpyBVVRdg3mSV1RSoTRwSizd3Si/1Ir2TZJCVtzEEgWk3Cad
HSgF3WejZQK79IFQe25SHvTo7Ns6H9IRawr5oHOCek4bKpWRt/UfkBgP4f/Pb6PIXr5Bn4ADmKx/
0ELH+pbnyEHYU16qoUBPGIqnH1E/z+LWl7ox/nRYiAvyhbt8AUnAddOJiB/+nozt5J/dWNYNn08G
+yxf9aIJCT0amOe7DE2yPc3phKaZ+g0+JoayFeOKOg==
=p/OO
-----END PGP SIGNATURE-----

--EfbhKDrrZMU0yIsOqJCQ9lW9F0ukvuujF--


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 13:23:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 13:23:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88778.167092 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEXep-0005DC-Ol; Tue, 23 Feb 2021 13:23:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88778.167092; Tue, 23 Feb 2021 13:23:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEXep-0005D5-L3; Tue, 23 Feb 2021 13:23:19 +0000
Received: by outflank-mailman (input) for mailman id 88778;
 Tue, 23 Feb 2021 13:23:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xmUX=HZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lEXeo-0005D0-Hw
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 13:23:18 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 188030e4-0816-438b-aa0b-b3788625ce06;
 Tue, 23 Feb 2021 13:23:17 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 89F1FACD4;
 Tue, 23 Feb 2021 13:23:16 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 188030e4-0816-438b-aa0b-b3788625ce06
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614086596; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=QgvTXMKQTA4gTB8ZG6AKpieUSw+sGJoebQ8OHtxNdCk=;
	b=Z4IO8CKw3DfrqPSOYoEtP46QT9zHxcLZRKgGCP97u2jbzxkQgC0cr47qG5CMXc/FN0Lory
	hhSCq4h4rY158hCBuzRz722e9/y4pl7iQatMUvoVtAS/1AC3b1o5cAonwUNtJnhY2PYVqK
	uB/qCl0dUlvG1xfDWSJpZhYNc+twsYI=
Subject: Re: [PATCH v2 2/4] x86: Introduce MSR_UNHANDLED
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: andrew.cooper3@citrix.com, Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org, iwj@xenproject.org, wl@xen.org,
 anthony.perard@citrix.com, jun.nakajima@intel.com, kevin.tian@intel.com
References: <1611182952-9941-1-git-send-email-boris.ostrovsky@oracle.com>
 <1611182952-9941-3-git-send-email-boris.ostrovsky@oracle.com>
 <YC5GrgqwsR/eBwpy@Air-de-Roger>
 <4e585daa-f255-fbff-d1cf-38ef49f146f5@oracle.com>
 <YDOQvU1h8zpOv5PH@Air-de-Roger>
 <ce2ef7a3-0583-ffff-182a-0ab078f45b82@oracle.com>
 <307a4765-1c54-63fd-b3fb-ecb47ba3dbca@suse.com>
 <YDTMIW5vBe0IncVR@Air-de-Roger>
 <2744f277-06fb-e49f-2023-0ec6175259dd@suse.com>
 <YDTyScmud26aiaMi@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <172dfcab-9366-47f0-9c56-2202a8b7a7db@suse.com>
Date: Tue, 23 Feb 2021 14:23:16 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <YDTyScmud26aiaMi@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 23.02.2021 13:17, Roger Pau Monné wrote:
> On Tue, Feb 23, 2021 at 11:15:31AM +0100, Jan Beulich wrote:
>> On 23.02.2021 10:34, Roger Pau Monné wrote:
>>> On Tue, Feb 23, 2021 at 08:57:00AM +0100, Jan Beulich wrote:
>>>> On 22.02.2021 22:19, Boris Ostrovsky wrote:
>>>>>
>>>>>>> On 2/22/21 6:08 AM, Roger Pau Monné wrote:
>>>>>> On Fri, Feb 19, 2021 at 09:56:32AM -0500, Boris Ostrovsky wrote:
>>>>>>>>> On 2/18/21 5:51 AM, Roger Pau Monné wrote:
>>>>>>>> On Wed, Jan 20, 2021 at 05:49:10PM -0500, Boris Ostrovsky wrote:
>>>>>>>>> When toolstack updates MSR policy, this MSR offset (which is the last
>>>>>>>>> index in the hypervisor MSR range) is used to indicate hypervisor
>>>>>>>>> behavior when guest accesses an MSR which is not explicitly emulated.
>>>>>>>> It's kind of weird to use an MSR to store this. I assume this is done
>>>>>>>> for migration reasons?
>>>>>>>
>>>>>>> Not really. It just seemed to me that MSR policy is the logical place to do that. Because it *is* a policy of how to deal with such accesses, isn't it?
>>>>>> I agree that using the msr_policy seems like the most suitable place
>>>>>> to convey this information between the toolstack and Xen. I wonder if
>>>>>> it would be fine to have fields in msr_policy that don't directly
>>>>>> translate into an MSR value?
>>>>>
>>>>>
>>>>> We have xen_msr_entry_t.flags that we can use when passing the policy array back and forth. Then we can ignore xen_msr_entry_t.idx for that entry (although in an earlier version of this series Jan preferred to use idx and leave flags alone).
>>>>
>>>> Which, just to clarify, was not the least because of the flags
>>>> field being per-entry, i.e. per MSR, while here we want a global
>>>> indicator.
>>>
>>> We could exploit a bit the xen_msr_entry_t structure and use it
>>> like:
>>>
>>> typedef struct xen_msr_entry {
>>>     uint32_t idx;
>>> #define XEN_MSR_IGNORE (1u << 0)
>>>     uint32_t flags;
>>>     uint64_t val;
>>> } xen_msr_entry_t;
>>>
>>> Then use the idx and val fields to signal the start and size of the
>>> range to ignore accesses to when XEN_MSR_IGNORE is set in the flags
>>> field?
>>>
>>> xen_msr_entry_t = {
>>>     .idx = 0,
>>>     .val = 0xffffffff,
>>>     .flags = XEN_MSR_IGNORE,
>>> };
>>>
>>> Would be equivalent to ignoring accesses to the whole MSR range.
>>>
>>> This would allow selecting which MSRs to ignore, while not
>>> wasting an MSR itself to convey the information?
>>
>> Hmm, yes, the added flexibility would be nice from an abstract pov
>> (not sure how relevant it would be to Solaris's issue). But my
>> dislike of using a flag which is meaningless in ordinary entries
>> remains, as was voiced against Boris's original version.
> 
> I understand the flags field is meaningless for regular MSRs, but I
> don't see why it would be an issue to start using it for this specific
> case of registering ranges of ignored MSRs.

It's not an "issue", it is - as said - my dislike. However, the way
it is in your proposal is perhaps indeed not as bad as in Boris's
original one: The flag now designates the purpose of the entry, and
the other two fields still have a meaning. Hence I was wrong to state
that it's "meaningless" - it now is required to be clear for
"ordinary" entries.

In principle there could then also be multiple such entries / ranges.

> It certainly seems better than hijacking an MSR index (MSR_UNHANDLED).

Not sure.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 13:24:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 13:24:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88781.167104 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEXfi-0005JM-1y; Tue, 23 Feb 2021 13:24:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88781.167104; Tue, 23 Feb 2021 13:24:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEXfh-0005JE-Uh; Tue, 23 Feb 2021 13:24:13 +0000
Received: by outflank-mailman (input) for mailman id 88781;
 Tue, 23 Feb 2021 13:24:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=10EA=HZ=gmail.com=ash.j.wilding@srs-us1.protection.inumbo.net>)
 id 1lEXfg-0005J5-Cn
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 13:24:12 +0000
Received: from mail-wr1-x436.google.com (unknown [2a00:1450:4864:20::436])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5a863917-e589-431b-8253-77e72fbb7cd1;
 Tue, 23 Feb 2021 13:24:11 +0000 (UTC)
Received: by mail-wr1-x436.google.com with SMTP id u14so22588865wri.3
 for <xen-devel@lists.xenproject.org>; Tue, 23 Feb 2021 05:24:11 -0800 (PST)
Received: from ud64051762ce75c.ant.amazon.com ([90.205.254.186])
 by smtp.googlemail.com with ESMTPSA id z2sm2701318wml.30.2021.02.23.05.24.09
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 23 Feb 2021 05:24:10 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5a863917-e589-431b-8253-77e72fbb7cd1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=1qCgwT6+fcYuuNIjgXIClkAwKS8Ev4UEDiigD0oQpOs=;
        b=T55UF9K8Yu9uPckE5MDIruu+aCoc5nUirVBi3iLfHQ+JbLpD1mPKu3EqXRB9ZxQ1Cb
         ufUve26zFIP1rmFHrD0tnqQd6lyzjt1ILONZIlWQvxNHv8lWeHoSXjhchljysiThxURm
         YYXD/XoAs3Ok4IWW4fytHOLNAlaWK47XTLi4ic4D1HRhgVDDUHLc5WnRlPO6BGzmbv8Z
         hPBxEE171ePgnoZTiZxRtUAKnnLTo6PWROaysMAPwRTMnHaxbRZ001J+Sg8aTx4nA/cu
         exwGhz1Y9+24FZcJMGXjNzOM0YZjqoB+iUzzhQj8Qyblj8IWU9OhDf4MXtytmiSOkG/v
         7y0w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=1qCgwT6+fcYuuNIjgXIClkAwKS8Ev4UEDiigD0oQpOs=;
        b=OZGVaCL2byd7AL0xc9TZIvRzhYfAMZ5XOiHIUZvj7kP3CwQcBLDZM/TC+3nt1wRvl/
         Pq/407k0QVU6NHsjfTl0sK3vrsB4uSdPG/r769qk88ZyQvp+25j0zWSW6dE6mwMbyGyo
         w+njWprFv1uq3a3xpToFlX8eTQ982B+XqddWbM1jYmcqWAmqHugCH0tD2FfiEXSoeed7
         BXJtCqjYIF2Lo0aErcGRzP/rBE97OgFqC8whu+MhxD11/ucE4YWPcPIG698qslXl1FaG
         1Y1VVZ5ZWTejgXbsHpr41/yJpHW5BkpxG2ualBHlUxOKByQU9nl1wuGy/fHEs67Q2LTX
         ajSA==
X-Gm-Message-State: AOAM530hCymKUf/aMcL3iyqOzd1Ogm8SV6yC2ieoYsfztBtfzB7vGQSM
	IZbINJOyuI8t6IgS7OWPcy8=
X-Google-Smtp-Source: ABdhPJweDBz9dctw2JqZqTOEBaZUMCVq3uDO6rdCypzWN6KU4VUQArYD+Fej3qSVBi360615r8KhDw==
X-Received: by 2002:a5d:5910:: with SMTP id v16mr26393884wrd.304.1614086650936;
        Tue, 23 Feb 2021 05:24:10 -0800 (PST)
From: Ash Wilding <ash.j.wilding@gmail.com>
To: julien@xen.org
Cc: ash.j.wilding@gmail.com,
	dfaggioli@suse.com,
	george.dunlap@citrix.com,
	iwj@xenproject.org,
	jbeulich@suse.com,
	jgrall@amazon.com,
	sstabellini@kernel.org,
	xen-devel@lists.xenproject.org
Subject: RE: [PATCH for-4.15] xen/sched: Add missing memory barrier in vcpu_block()
Date: Tue, 23 Feb 2021 13:24:08 +0000
Message-Id: <20210223132408.10283-1-ash.j.wilding@gmail.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210220194701.24202-1-julien@xen.org>
References: <20210220194701.24202-1-julien@xen.org>

Hi Julien,

Thanks for looking at this,

> vcpu_block() is now gaining an smp_mb__after_atomic() to prevent the
> CPU from reading any information about local events before the flag
> _VPF_blocked is set.

Reviewed-by: Ash Wilding <ash.j.wilding@gmail.com>


As an aside,

> I couldn't convince myself whether the Arm implementation of
> local_events_need_delivery() contains enough barriers to prevent the
> re-ordering. However, I don't think we want to play with the devil
> here as the function may be optimized in the future.

Agreed.

The vgic_vcpu_pending_irq() and vgic_evtchn_irq_pending() in the call
path of local_events_need_delivery() both call spin_lock_irqsave(),
which has an arch_lock_acquire_barrier() in its call path.

That just happens to map to a heavier smp_mb() on Arm right now, but
relying on this behaviour would be shaky; I can imagine a future update
to arch_lock_acquire_barrier() that relaxes it down to just acquire
semantics like its name implies (for example an LSE-based lock_acquire()
using LDUMAXA), in which case any code incorrectly relying on that full
barrier behaviour may break. I'm guessing this is what you meant by the
function possibly being optimized in future?

Do we know whether there is an expectation for previous loads/stores
to have been observed before local_events_need_delivery()? I'm wondering
whether it would make sense to have an smp_mb() at the start of the
*_nomask() variant in local_events_need_delivery()'s call path.

Doing so would obviate the need for this particular patch, though would
not obviate the need to identify and fix similar set_bit() patterns.


Cheers,
Ash.



From xen-devel-bounces@lists.xenproject.org Tue Feb 23 13:38:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 13:38:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88785.167116 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEXt8-0006Q2-Ae; Tue, 23 Feb 2021 13:38:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88785.167116; Tue, 23 Feb 2021 13:38:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEXt8-0006Pv-7S; Tue, 23 Feb 2021 13:38:06 +0000
Received: by outflank-mailman (input) for mailman id 88785;
 Tue, 23 Feb 2021 13:38:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEXt6-0006Pn-JI; Tue, 23 Feb 2021 13:38:04 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEXt6-0003iX-89; Tue, 23 Feb 2021 13:38:04 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEXt5-0005n3-SP; Tue, 23 Feb 2021 13:38:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lEXt5-0006tO-Rv; Tue, 23 Feb 2021 13:38:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=WichMQYGun57XkrWONZKKp8q17PM8hovpsVOQMIvsz0=; b=nvYppOwkffL03AlHO6TJr8psYK
	E9oZhtjRYOMOKTMJuRmuVK86iXpeVMv1ZkyG+JCttlQ89MJtS1a+inn8ndOpgQhrtyklZbEtaLpiY
	7IMFPxLhPs5O1Je5bz8+1GkRmmz7EhvJfnmSOdeALnB+Aw/FnVgeKVGZ7tHa2x/yAYWk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159566-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 159566: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=b12b47249688915e987a9a2a393b522f86f6b7ab
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 23 Feb 2021 13:38:03 +0000

flight 159566 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159566/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl          10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                b12b47249688915e987a9a2a393b522f86f6b7ab
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  206 days
Failing since        152366  2020-08-01 20:49:34 Z  205 days  355 attempts
Testing same since   159566  2021-02-23 01:09:57 Z    0 days    1 attempts

------------------------------------------------------------
5019 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1226585 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 15:15:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 15:15:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88823.167152 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEZPD-0007hv-5X; Tue, 23 Feb 2021 15:15:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88823.167152; Tue, 23 Feb 2021 15:15:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEZPD-0007ho-2d; Tue, 23 Feb 2021 15:15:19 +0000
Received: by outflank-mailman (input) for mailman id 88823;
 Tue, 23 Feb 2021 15:15:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xmUX=HZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lEZPB-0007hi-No
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 15:15:17 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7010f2bb-2de3-4491-a5bb-116fb016ab13;
 Tue, 23 Feb 2021 15:15:16 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7F685AC1D;
 Tue, 23 Feb 2021 15:15:15 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7010f2bb-2de3-4491-a5bb-116fb016ab13
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614093315; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=t2UA1j/RmVE55nOLziUQOk9xwYCY+H9PjhIvbvQ/JsU=;
	b=TIw0EvIylOOPOOpLu6ZwHp6IHezYq14ATWkdcs9FxQugQFGBrDfbiWqCA1otCoKnzpMcmE
	IV1vF2SKsuQ5o90MdI13KEwyxLdTMK0lkij7JmSIBu1fkQGQ/fjWV2BMHJBFGFe3WpfVU/
	Jir9yIeHsfk/TlVshmm2P7Xwj6Fumuw=
Subject: Re: [PATCH v2 6/8] x86: rename copy_{from,to}_user() to
 copy_{from,to}_guest_pv()
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>
References: <b466a19e-e547-3c7c-e39b-1a4c848a053a@suse.com>
 <5104a32f-e2a1-06a5-a637-9702e4562b81@suse.com>
 <YDThIFB7ox6qdfFE@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1e4d803c-ab4f-b7f4-c8e7-e7ea450c7921@suse.com>
Date: Tue, 23 Feb 2021 16:15:15 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <YDThIFB7ox6qdfFE@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 23.02.2021 12:04, Roger Pau Monné wrote:
> On Wed, Feb 17, 2021 at 09:22:32AM +0100, Jan Beulich wrote:
>> Bring them (back) in line with __copy_{from,to}_guest_pv(). Since it
>> falls in the same group, also convert clear_user(). Instead of adjusting
>> __raw_clear_guest(), drop it - it's unused and would require a non-
>> checking __clear_guest_pv() which we don't have.
>>
>> Add previously missing __user at some call sites and in the function
>> declarations.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.

>> --- a/xen/arch/x86/usercopy.c
>> +++ b/xen/arch/x86/usercopy.c
>> @@ -109,19 +109,17 @@ unsigned int copy_from_guest_ll(void *to
>>  #if GUARD(1) + 0
>>  
>>  /**
>> - * copy_to_user: - Copy a block of data into user space.
>> - * @to:   Destination address, in user space.
>> - * @from: Source address, in kernel space.
>> + * copy_to_guest_pv: - Copy a block of data into guest space.
> 
> I would expand to 'PV guest' here and below, FAOD.

Can do, albeit in the header (in particular also in the two
patches that have gone in already) we also say just "guest".
But since this is a different file, I can make the change
without introducing too much of an inconsistency.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 15:25:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 15:25:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88827.167167 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEZYe-0000Jv-6X; Tue, 23 Feb 2021 15:25:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88827.167167; Tue, 23 Feb 2021 15:25:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEZYe-0000Jo-34; Tue, 23 Feb 2021 15:25:04 +0000
Received: by outflank-mailman (input) for mailman id 88827;
 Tue, 23 Feb 2021 15:25:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xmUX=HZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lEZYc-0000Jj-20
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 15:25:02 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ab752354-589e-4a4e-a9f4-311f26672c21;
 Tue, 23 Feb 2021 15:25:01 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 55E06AD57;
 Tue, 23 Feb 2021 15:25:00 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ab752354-589e-4a4e-a9f4-311f26672c21
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614093900; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Gw6AzAsZH+6lNjHhY0dDF8PbEU4hQdZguOfaumbBcLY=;
	b=hjByv/ZFEhM+RNo5QnjhzBFdsG4xw9zElnK8Njq4yWpCWQ/pzmae3gRFF6sdxPpjuddS7q
	OaZKKMWJPEAOt8sqJMmI09HTA/ko+y//9HykLEzeskmL5BA/fRBbOjooadezH17DyzMuHI
	eocQD8M7k7p5e38QLPLu9SdJqgx3kZ8=
Subject: Re: [PATCH v2 8/8] x86/PV: use get_unsafe() instead of
 copy_from_unsafe()
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>
References: <b466a19e-e547-3c7c-e39b-1a4c848a053a@suse.com>
 <0a59ae2f-448e-610d-e8a2-a7c3f9f3918f@suse.com>
 <YDTuGn8YWRrWlbS9@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <76207250-1372-e7ab-2d03-b46020a7906b@suse.com>
Date: Tue, 23 Feb 2021 16:25:00 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <YDTuGn8YWRrWlbS9@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 23.02.2021 12:59, Roger Pau Monné wrote:
> On Wed, Feb 17, 2021 at 09:23:33AM +0100, Jan Beulich wrote:
>> The former expands to a single (memory accessing) insn, which the latter
>> does not guarantee. Yet we'd prefer to read consistent PTEs rather than
>> risking a split read racing with an update done elsewhere.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Albeit I wonder why the __builtin_constant_p check done in
> copy_from_unsafe is not enough to take the get_unsafe_size branch in
> there. Doesn't sizeof(l{1,2}_pgentry_t) qualify as a build-time
> constant?
> 
> Or does the fact that n is a parameter to an inline function hide this,
> in which case the __builtin_constant_p check is pointless?

Without (enough) optimization, __builtin_constant_p() may indeed
yield false in such cases. But that wasn't actually what I had
in mind when making this change (and the original similar one in
shadow code). Instead, at the time I made the shadow side change,
I had removed this optimization from the new function flavors.
With that removal, things are supposed to still be correct - it's
an optimization only, after all. Meanwhile the optimizations are
back, so there's no immediate problem as long as the optimizer
doesn't decide to out-of-line the function invocations (we
shouldn't forget that even always_inline is not a guarantee for
inlining to actually occur).

Jan


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 15:37:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 15:37:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88832.167179 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEZkf-0001Od-C9; Tue, 23 Feb 2021 15:37:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88832.167179; Tue, 23 Feb 2021 15:37:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEZkf-0001OW-9H; Tue, 23 Feb 2021 15:37:29 +0000
Received: by outflank-mailman (input) for mailman id 88832;
 Tue, 23 Feb 2021 15:37:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UeLE=HZ=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lEZkd-0001OR-Oy
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 15:37:27 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f137e573-794f-40e9-9b67-96b375fccc80;
 Tue, 23 Feb 2021 15:37:25 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f137e573-794f-40e9-9b67-96b375fccc80
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614094645;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=sUDcpx6dLOhRBP43cnFz/f8uIz9kVVFXggG9jGn0e4w=;
  b=UvBFBl1ig/BaN0620QbADwe32Cz9uUENLW3oX/UR0MIQ2FEp/1F7Ct5W
   +KVQdCrWfUXN53+rk/oOvVh5YFvB5/EMtyc06VldKkq/I/ehfKFibv/je
   XrVtG8b3PgvrLzsOGBKo6xOq8y7Gbqef0oT5tRUsICeE6YS1vth+2gcDJ
   E=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 37765267
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,200,1610427600"; 
   d="scan'208";a="37765267"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=dk0QKkx8KOD+aEZhcQ5FynwWqr2Jayi3cggBH2hmmCM=;
 b=glInX5RN0Nb1orRft62IEf8d7/b6KOXzV97WviHhtMG7dSspvr0WqsE8NUSqixH8Sapr1jvlmlPVFlsZ6gYENoyvqIjW+0ZZFW+968AlzFPeso15PW8APTyGa1qEBEQ3c95CzemvPrhL2qDlhGrlMpaw3zih9wAloTRyKCo5c9M=
Date: Tue, 23 Feb 2021 16:37:15 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>
Subject: Re: [PATCH v2 8/8] x86/PV: use get_unsafe() instead of
 copy_from_unsafe()
Message-ID: <YDUhKw+19ITgVmml@Air-de-Roger>
References: <b466a19e-e547-3c7c-e39b-1a4c848a053a@suse.com>
 <0a59ae2f-448e-610d-e8a2-a7c3f9f3918f@suse.com>
 <YDTuGn8YWRrWlbS9@Air-de-Roger>
 <76207250-1372-e7ab-2d03-b46020a7906b@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <76207250-1372-e7ab-2d03-b46020a7906b@suse.com>
X-ClientProxiedBy: MR2P264CA0111.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:33::27) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 7294105f-1963-4b62-578d-08d8d810e93a
X-MS-TrafficTypeDiagnostic: DM6PR03MB4969:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB49697B36EBDBF49313D2E2858F809@DM6PR03MB4969.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:2331;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 7294105f-1963-4b62-578d-08d8d810e93a
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Feb 2021 15:37:21.9495
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: uuh8zY6wl47UQ48aQVUXtJtbe6f0s3FFHj/3UR1PPagr3ZEUvrPCHTQnY+5ooF0W2aeNgLy5f/ebM67o4d/H4Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4969
X-OriginatorOrg: citrix.com

On Tue, Feb 23, 2021 at 04:25:00PM +0100, Jan Beulich wrote:
> On 23.02.2021 12:59, Roger Pau Monné wrote:
> > On Wed, Feb 17, 2021 at 09:23:33AM +0100, Jan Beulich wrote:
> >> The former expands to a single (memory accessing) insn, which the latter
> >> does not guarantee. Yet we'd prefer to read consistent PTEs rather than
> >> risking a split read racing with an update done elsewhere.
> >>
> >> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> > 
> > Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
> > 
> > Albeit I wonder why the __builtin_constant_p check done in
> > copy_from_unsafe is not enough to take the get_unsafe_size branch in
> > there. Doesn't sizeof(l{1,2}_pgentry_t) qualify as a build-time
> > constant?
> > 
> > Or does the fact that n is a parameter to an inline function hide this,
> > in which case the __builtin_constant_p check is pointless?
> 
> Without (enough) optimization, __builtin_constant_p() may indeed
> yield false in such cases. But that wasn't actually what I had
> in mind when making this change (and the original similar on in
> shadow code). Instead, at the time I made the shadow side change,
> I had removed this optimization from the new function flavors.
> With that removal, things are supposed to still be correct - it's
> an optimization only, after all. Meanwhile the optimizations are
> back, so there's no immediate problem as long as the optimizer
> doesn't decide to out-of-line the function invocations (we
> shouldn't forget that even always_inline is not a guarantee for
> inlining to actually occur).

I'm fine with you switching those use cases to get_unsafe, but I think
the commit message should be slightly adjusted to note that
copy_from_unsafe will likely do the right thing, but that it's simply
clearer to call get_unsafe directly, also in case copy_from_unsafe
gets changed in the future to drop the get_unsafe paths.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 15:40:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 15:40:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88834.167191 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEZn7-0001e3-R7; Tue, 23 Feb 2021 15:40:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88834.167191; Tue, 23 Feb 2021 15:40:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEZn7-0001dw-Nq; Tue, 23 Feb 2021 15:40:01 +0000
Received: by outflank-mailman (input) for mailman id 88834;
 Tue, 23 Feb 2021 15:40:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hUnP=HZ=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1lEZn6-0001dq-Ut
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 15:40:01 +0000
Received: from aserp2120.oracle.com (unknown [141.146.126.78])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ba446f7c-3ba6-41e3-9319-b234cd5b4846;
 Tue, 23 Feb 2021 15:39:59 +0000 (UTC)
Received: from pps.filterd (aserp2120.oracle.com [127.0.0.1])
 by aserp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 11NFOF3U010867;
 Tue, 23 Feb 2021 15:39:56 GMT
Received: from userp3030.oracle.com (userp3030.oracle.com [156.151.31.80])
 by aserp2120.oracle.com with ESMTP id 36ttcm7scx-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Tue, 23 Feb 2021 15:39:56 +0000
Received: from pps.filterd (userp3030.oracle.com [127.0.0.1])
 by userp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 11NFQiAb127810;
 Tue, 23 Feb 2021 15:39:55 GMT
Received: from nam02-sn1-obe.outbound.protection.outlook.com
 (mail-sn1nam02lp2054.outbound.protection.outlook.com [104.47.36.54])
 by userp3030.oracle.com with ESMTP id 36ucbxm8f1-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Tue, 23 Feb 2021 15:39:55 +0000
Received: from BYAPR10MB3288.namprd10.prod.outlook.com (2603:10b6:a03:156::21)
 by BYAPR10MB3526.namprd10.prod.outlook.com (2603:10b6:a03:11c::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3825.20; Tue, 23 Feb
 2021 15:39:53 +0000
Received: from BYAPR10MB3288.namprd10.prod.outlook.com
 ([fe80::f489:4e25:63e0:c721]) by BYAPR10MB3288.namprd10.prod.outlook.com
 ([fe80::f489:4e25:63e0:c721%7]) with mapi id 15.20.3868.031; Tue, 23 Feb 2021
 15:39:53 +0000
Received: from [10.74.102.180] (138.3.200.52) by
 CY1PR03CA0017.namprd03.prod.outlook.com (2603:10b6:600::27) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3868.28 via Frontend Transport; Tue, 23 Feb 2021 15:39:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ba446f7c-3ba6-41e3-9319-b234cd5b4846
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : in-reply-to : content-type :
 content-transfer-encoding : mime-version; s=corp-2020-01-29;
 bh=XQMt+fQEixjcF+7rhvfkQDyu9DhXQesKMvPb844wxpo=;
 b=SlZyYu0NrlWShd05kX3wWkvgNBhMkPvj52DU8NjlcPWXZjocgjLobZCPiQn+4q50ModC
 zH5Kg3QLCH1HnQkVlGdE01kgXRg4eb+nc3BjFhRzl9HYoXh3ROsYf7iG2I3V5bk++KGp
 XkZfZvZncu85XXeKS6JGayKieJG2eXCWNOSTjZWDXmZnrCx96mIUGMB4MBp8kPPdyY+d
 O4is4r3dY4rpvg/qyn26Hbe8if1KNysVjhdn77ppqs4KPrTILUMhRorO/rrwz3SUKlm+
 JnMCbNCqK8HC1uMVU3NJemg5fcRH+U0ORJk6RPiy0V1UOlklNtwRRkyH6KzEh/ujNRaz Jw== 
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=XQMt+fQEixjcF+7rhvfkQDyu9DhXQesKMvPb844wxpo=;
 b=H4iOPYINgDVlwoIh/NsmJLQ6tOh1QzdCww7+ZLBCXZscAUmCELTzBTwQOySb4FiAAY5sZwzxVYFT0P5ets31MSpjZOEN4JfKaKDSfArSlP/67AGOTQnJRB/LpFgNa2So1LzH2D9Ipoo9V8vmctlJXR/baTuBNfvJPAD0ahDVlLw=
Authentication-Results: intel.com; dkim=none (message not signed)
 header.d=none;intel.com; dmarc=none action=none header.from=oracle.com;
Subject: Re: [PATCH v2 2/4] x86: Introduce MSR_UNHANDLED
To: Jan Beulich <jbeulich@suse.com>,
        =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
Cc: andrew.cooper3@citrix.com, xen-devel@lists.xenproject.org,
        iwj@xenproject.org, wl@xen.org, anthony.perard@citrix.com,
        jun.nakajima@intel.com, kevin.tian@intel.com
References: <1611182952-9941-1-git-send-email-boris.ostrovsky@oracle.com>
 <1611182952-9941-3-git-send-email-boris.ostrovsky@oracle.com>
 <YC5GrgqwsR/eBwpy@Air-de-Roger>
 <4e585daa-f255-fbff-d1cf-38ef49f146f5@oracle.com>
 <YDOQvU1h8zpOv5PH@Air-de-Roger>
 <ce2ef7a3-0583-ffff-182a-0ab078f45b82@oracle.com>
 <307a4765-1c54-63fd-b3fb-ecb47ba3dbca@suse.com>
 <YDTMIW5vBe0IncVR@Air-de-Roger>
 <2744f277-06fb-e49f-2023-0ec6175259dd@suse.com>
 <YDTyScmud26aiaMi@Air-de-Roger>
 <172dfcab-9366-47f0-9c56-2202a8b7a7db@suse.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <3ae19e76-2543-28f4-9c7f-697ccf9ed202@oracle.com>
Date: Tue, 23 Feb 2021 10:39:48 -0500
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
In-Reply-To: <172dfcab-9366-47f0-9c56-2202a8b7a7db@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-US
MIME-Version: 1.0


On 2/23/21 8:23 AM, Jan Beulich wrote:
> On 23.02.2021 13:17, Roger Pau Monné wrote:
>> On Tue, Feb 23, 2021 at 11:15:31AM +0100, Jan Beulich wrote:
>>> On 23.02.2021 10:34, Roger Pau Monné wrote:
>>>> On Tue, Feb 23, 2021 at 08:57:00AM +0100, Jan Beulich wrote:
>>>>> On 22.02.2021 22:19, Boris Ostrovsky wrote:
>>>>>> On 2/22/21 6:08 AM, Roger Pau Monné wrote:
>>>>>>> On Fri, Feb 19, 2021 at 09:56:32AM -0500, Boris Ostrovsky wrote:
>>>>>>>> On 2/18/21 5:51 AM, Roger Pau Monné wrote:
>>>>>>>>> On Wed, Jan 20, 2021 at 05:49:10PM -0500, Boris Ostrovsky wrote:
>>>>>>>>>> When toolstack updates MSR policy, this MSR offset (which is the last
>>>>>>>>>> index in the hypervisor MSR range) is used to indicate hypervisor
>>>>>>>>>> behavior when guest accesses an MSR which is not explicitly emulated.
>>>>>>>>> It's kind of weird to use an MSR to store this. I assume this is done
>>>>>>>>> for migration reasons?
>>>>>>>> Not really. It just seemed to me that MSR policy is the logical place to do that. Because it *is* a policy of how to deal with such accesses, isn't it?
>>>>>>> I agree that using the msr_policy seems like the most suitable place
>>>>>>> to convey this information between the toolstack and Xen. I wonder if
>>>>>>> it would be fine to have fields in msr_policy that don't directly
>>>>>>> translate into an MSR value?
>>>>>>
>>>>>> We have xen_msr_entry_t.flags that we can use when passing policy array back and forth. Then we can ignore xen_msr_entry_t.idx for that entry (although in earlier version of this series Jan preferred to use idx and leave flags alone).
>>>>> Which, just to clarify, was not the least because of the flags
>>>>> field being per-entry, i.e. per MSR, while here we want a global
>>>>> indicator.
>>>> We could exploit a bit the xen_msr_entry_t structure and use it
>>>> like:
>>>>
>>>> typedef struct xen_msr_entry {
>>>>     uint32_t idx;
>>>> #define XEN_MSR_IGNORE (1u << 0)
>>>>     uint32_t flags;
>>>>     uint64_t val;
>>>> } xen_msr_entry_t;
>>>>
>>>> Then use the idx and val fields to signal the start and size of the
>>>> range to ignore accesses to when XEN_MSR_IGNORE is set in the flags
>>>> field?
>>>>
>>>> xen_msr_entry_t = {
>>>>     .idx = 0,
>>>>     .val = 0xffffffff,
>>>>     .flags = XEN_MSR_IGNORE,
>>>> };
>>>>
>>>> Would be equivalent to ignoring accesses to the whole MSR range.
>>>>
>>>> This would allow selectively selecting which MSRs to ignore, while
>>>> not wasting a MSR itself to convey the information?
>>> Hmm, yes, the added flexibility would be nice from an abstract pov
>>> (not sure how relevant it would be to Solaris'es issue). But my
>>> dislike of using a flag which is meaningless in ordinary entries
>>> remains, as was voiced against Boris'es original version.
>> I understand the flags field is meaningless for regular MSRs, but I
>> don't see why it would be an issue to start using it for this specific
>> case of registering ranges of ignored MSRs.
> It's not an "issue", it is - as said - my dislike. However, the way
> it is in your proposal it is perhaps indeed not as bad as in Boris'es
> original one: The flag now designates the purpose of the entry, and
> the other two fields still have a meaning. Hence I was wrong to state
> that it's "meaningless" - it now is required to be clear for
> "ordinary" entries.
>
> In principle there could then also be multiple such entries / ranges.


TBH I am not sold on the usefulness of multiple ranges, but if both of you feel it can be handy I'll do that, using Roger's proposal above. (Would it make sense to make val a union with, say, size, or should the context be clear from the flag's value?)


Before I do that though --- what was the conclusion on verbosity control?


-boris


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 15:45:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 15:45:30 +0000
Subject: Re: [PATCH v3 0/8] xen/events: bug fixes and some diagnostic aids
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
        linux-kernel@vger.kernel.org, netdev@vger.kernel.org,
        linux-block@vger.kernel.org, linux-scsi@vger.kernel.org
Cc: Stefano Stabellini <sstabellini@kernel.org>, stable@vger.kernel.org,
        Wei Liu <wei.liu@kernel.org>, Paul Durrant <paul@xen.org>,
        "David S. Miller" <davem@davemloft.net>,
        Jakub Kicinski <kuba@kernel.org>,
        Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
        =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
        Jens Axboe <axboe@kernel.dk>
References: <20210219154030.10892-1-jgross@suse.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <67b7a01c-9de5-d520-d465-e4e3954302e2@oracle.com>
Date: Tue, 23 Feb 2021 10:44:58 -0500
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
In-Reply-To: <20210219154030.10892-1-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
MIME-Version: 1.0


On 2/19/21 10:40 AM, Juergen Gross wrote:
> The first four patches are fixes for XSA-332. They avoid WARN splats
> and a performance issue with interdomain events.
>
> Patches 5 and 6 are some additions to event handling in order to add
> some per pv-device statistics to sysfs and the ability to have a per
> backend device spurious event delay control.
>
> Patches 7 and 8 are minor fixes I had lying around.
>
> Juergen Gross (8):
>   xen/events: reset affinity of 2-level event when tearing it down
>   xen/events: don't unmask an event channel when an eoi is pending
>   xen/events: avoid handling the same event on two cpus at the same time
>   xen/netback: fix spurious event detection for common event case
>   xen/events: link interdomain events to associated xenbus device
>   xen/events: add per-xenbus device event statistics and settings
>   xen/evtchn: use smp barriers for user event ring
>   xen/evtchn: use READ/WRITE_ONCE() for accessing ring indices
>

I am going to pick up the last 3 patches, since Ross appears to be having some issues with #2 (and 4 and 5 went in via the netdev tree).


-boris



From xen-devel-bounces@lists.xenproject.org Tue Feb 23 15:46:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 15:46:52 +0000
Subject: Re: [PATCH] xen: Replace lkml.org links with lore
To: Kees Cook <keescook@chromium.org>, linux-kernel@vger.kernel.org
Cc: Joe Perches <joe@perches.com>, Juergen Gross <jgross@suse.com>,
        Stefano Stabellini <sstabellini@kernel.org>,
        xen-devel@lists.xenproject.org
References: <20210210234618.2734785-1-keescook@chromium.org>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <6248360f-4be7-cca7-4906-e7919eacaf05@oracle.com>
Date: Tue, 23 Feb 2021 10:46:38 -0500
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
In-Reply-To: <20210210234618.2734785-1-keescook@chromium.org>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 3bd561f4-48c5-4bcf-84fd-08d8d8123799
X-MS-TrafficTypeDiagnostic: BY5PR10MB4340:
X-Microsoft-Antispam-PRVS: 
	<BY5PR10MB434084203223E2CE2F79ABA18A809@BY5PR10MB4340.namprd10.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:188;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 3bd561f4-48c5-4bcf-84fd-08d8d8123799
X-MS-Exchange-CrossTenant-AuthSource: BYAPR10MB3288.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Feb 2021 15:46:42.8453
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: vnJ6puiuxC/n/weRPgtdZcYKMJ/6V0uKcEC1qavzAk2aaMSZ08R1MG1Xq8IF+URq1uw7rLP7BeYaeHfObOkbVjaOdrNfxnQK3Ru8X3I7bJI=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR10MB4340
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9903 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 spamscore=0 suspectscore=0
 malwarescore=0 mlxlogscore=999 adultscore=0 bulkscore=0 mlxscore=0
 phishscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2102230132
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9903 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 suspectscore=0 spamscore=0
 priorityscore=1501 impostorscore=0 bulkscore=0 mlxscore=0 malwarescore=0
 clxscore=1011 phishscore=0 mlxlogscore=999 lowpriorityscore=0 adultscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2102230132


On 2/10/21 6:46 PM, Kees Cook wrote:
> As started by commit 05a5f51ca566 ("Documentation: Replace lkml.org
> links with lore"), replace lkml.org links with lore to better use a
> single source that's more likely to stay available long-term.
>
> Signed-off-by: Kees Cook <keescook@chromium.org>


Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>


Applied to for-linux-12b





From xen-devel-bounces@lists.xenproject.org Tue Feb 23 15:56:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 15:56:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88863.167229 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEa2c-0003iT-0S; Tue, 23 Feb 2021 15:56:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88863.167229; Tue, 23 Feb 2021 15:56:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEa2b-0003iM-Tc; Tue, 23 Feb 2021 15:56:01 +0000
Received: by outflank-mailman (input) for mailman id 88863;
 Tue, 23 Feb 2021 15:56:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UeLE=HZ=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lEa2a-0003iH-Pq
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 15:56:00 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 34ac5835-f75f-44da-9504-601ebf1cc5b7;
 Tue, 23 Feb 2021 15:55:59 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 34ac5835-f75f-44da-9504-601ebf1cc5b7
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614095759;
  h=from:to:cc:subject:date:message-id:
   content-transfer-encoding:mime-version;
  bh=AVEoWrMOQdPnHblXGpkJgOocPMFc324aAoNNOclG6AE=;
  b=VFimJ2a6NRXwycBcTCmz5SItpSHDpRTjBXLvuGMLBFQU4a9a+45YipKF
   V0/2LCYjhAYKwem+uJVwGWB4MlQRWsFsvOQPnKiLx1Chn4zm7W+P75fDe
   55qrFeEanUtvN15NSiZUFfxooY9q8LlonrMxr32+uZ/UO8fDYbG+Mo5dP
   o=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: ZOnFqxAnwSdtLTSdQfJEwHxxE955P+YsnosY2l0VUgqZtKWaWqCt02GkMlNiWaCMOefWxkg1vl
 Vsl8acIgrbi8z3Yd2VneHf25kOhq2WP8ZP5we4grFAlI6o13ZMl89cDHsiAypPu2N7yRCzeY9X
 GYSMi9oWpC1SxiMnvLp4PPO42pgRy8ucirKbUSnofzF10m8611oNmhpTbEE9/AulXv+866JNfs
 Vc8mFeVfhimIGgIiZI4ZfyU8B2NVZVrFy0x+XY0A+Sa4yWHwuJfHCxXX6CYzBqNe4rubBnRUic
 iEE=
X-SBRS: 5.2
X-MesageID: 37833856
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,200,1610427600"; 
   d="scan'208";a="37833856"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=HLkplyHFbAFcNcJzutSoZ5DZ2dXe7GtFvHFW3m4b662DBL8y4lVHUaxYxaMSYWgso+cQtDUiEGMKPVZozPOu1B0724pczbyuZiVpehu7CwtnAmhRSzMWx3DTNCKAaY7EX1vbMJtWFuly1qqbh4vxGvdtnErtUEogJjcZ9pjbe83ENjDGj7dzlECcyFz0PN66gp+4uzkg9QQAQCC0pZANDNT90AirexVEVnPM1g9V8vhlsRx1XOaRy9FNECT78LKC1XF6NhT8dGVnEKqF4O091NPT4HbubF3VxK7ZsUJMDnRgE/9FRFYpJlfAG3bXZJirN14YuBRmdI2jLXRtQeQ9iQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=TX5TkCMVopvOD+/peD2pbZJMxf38hUZzN7+U/hRv6Fc=;
 b=EwG6SOpe15/AzBZgmUHU9vcXC4u8mJAyq9ZK8G7NNC6aKXMynDP9LHQAT9FYzXp/gwkAQDtCIhANSdGqhAYu1QrQmZjMQKJxa7PIjzY8lPCYPkmhGPO7V+1GOVoLWZhZpQ4WALc7Vtq4TJrwuE3miIcWjIoFZAg/nv+6y73VT0xAj4B7eXmZokxovNjiRXlSy6qb4Hh20G1L60F1005eicyMqUalGXIH3Kcfbqx56txx13zL5h6Li9XX0YQAsSR/5ZMPhwzfaj38BFVa5zXiR9jSk47nT8H4bxc3WT4xzWbOtK8fNUnxlhHEwBS7WdImhwcCNzqTcQfSbrNTf/YZNg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=TX5TkCMVopvOD+/peD2pbZJMxf38hUZzN7+U/hRv6Fc=;
 b=uzsCy8YaYh2C6G8f1iOJEVpIoE2weTbOH75s+quM0cajFD9Bs8wpyIqzuXwtJyXZXbda3H8HoAnxjIlmgaqrwsLlG8qrP00hHzcQjXF20JrcwZJThBq6tiySzB9cBYricTuxdqSUwP1KB0MAHXvCF12luFDpsbZjzb/ado+shSo=
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Roger Pau Monne <roger.pau@citrix.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Ian
 Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>
Subject: [PATCH for-4.15] cirrus-ci: introduce some basic FreeBSD testing
Date: Tue, 23 Feb 2021 16:53:53 +0100
Message-ID: <20210223155353.77191-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.30.1
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: MR2P264CA0014.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:1::26) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: b62067bf-041f-4370-0e79-08d8d8133ef6
X-MS-TrafficTypeDiagnostic: DM6PR03MB4603:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB46030B68E23B591891AD39F58F809@DM6PR03MB4603.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: b62067bf-041f-4370-0e79-08d8d8133ef6
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Feb 2021 15:54:04.7894
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: kOi0E9y3Jv1OcUJF0wsCgLSEPL27++EbPEgiRY9isT7EX3DMk3oTA8zeIOPVvxdX04U4Cz8nSGSIv7gN4sZ5sw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4603
X-OriginatorOrg: citrix.com

Cirrus-CI supports FreeBSD natively, so introduce some build tests on
FreeBSD.

Cirrus-CI requires a GitHub repository in order to trigger the
tests.

A sample run output can be seen at:

https://github.com/royger/xen/runs/1962451343

Note that the FreeBSD 11 task fails to build QEMU, so it is not part of
this patch.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
I think this is fine to go in right now, as it's just a meta file for
a test framework, and has zero chance of breaking anything existing in
Xen.
---
 .cirrus.yml | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)
 create mode 100644 .cirrus.yml

diff --git a/.cirrus.yml b/.cirrus.yml
new file mode 100644
index 0000000000..5e3e46368e
--- /dev/null
+++ b/.cirrus.yml
@@ -0,0 +1,26 @@
+# https://cirrus-ci.org/guide/tips-and-tricks/#sharing-configuration-between-tasks
+freebsd_template: &FREEBSD_TEMPLATE
+  environment:
+    APPEND_LIB: /usr/local/lib
+    APPEND_INCLUDES: /usr/local/include
+
+  install_script: pkg install -y seabios markdown gettext-tools gmake
+                                 pkgconf python libiconv bison perl5
+                                 yajl lzo2 pixman argp-standalone
+                                 libxml2 glib git
+
+  build_script:
+    - ./configure --with-system-seabios=/usr/local/share/seabios/bios.bin
+    - gmake -j`sysctl -n hw.ncpu` clang=y
+
+task:
+  name: 'FreeBSD 12'
+  freebsd_instance:
+    image_family: freebsd-12-2
+  << : *FREEBSD_TEMPLATE
+
+task:
+  name: 'FreeBSD 13'
+  freebsd_instance:
+    image_family: freebsd-13-0
+  << : *FREEBSD_TEMPLATE
-- 
2.30.1



From xen-devel-bounces@lists.xenproject.org Tue Feb 23 16:07:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 16:07:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88873.167245 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEaDF-0005LQ-6I; Tue, 23 Feb 2021 16:07:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88873.167245; Tue, 23 Feb 2021 16:07:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEaDF-0005LJ-35; Tue, 23 Feb 2021 16:07:01 +0000
Received: by outflank-mailman (input) for mailman id 88873;
 Tue, 23 Feb 2021 16:06:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ey4c=HZ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lEaDD-0005Kv-E3
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 16:06:59 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 55cce69c-b148-4287-9480-06d76ddc19a5;
 Tue, 23 Feb 2021 16:06:58 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 55cce69c-b148-4287-9480-06d76ddc19a5
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614096418;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=1Jy9kfn1Rg/Q8aoBegPupCuZ0Rut8f+LX2PojXWt8Lg=;
  b=MgWdpszA0ISTv/50nKTyP6kCyocGQitMu/gfXvnRGanN+KCLtvX7qEtX
   KD3C2F9ygd8Vnw6QTPjKKzSplvzBeVu7sZqfc6mNn32KxgerRWWwXQopo
   dWuJTs3lZdAZ1kGsRaHR9pfcrXE+fxC62FLPJ93TCH3zjQ6Hs3x+btOVb
   Y=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: ZyQJvQ0pjDaBfo65i57ep1ONyjyVZxl8hN2LofPdP+13vudXYEF58KmD2w3U1EK8xRHnGUiBrm
 FOJIT2uNgHAk315mMuQ8ghXbBcMfH1UEmo+dcfZAwsclAXlWJXEFTH+zGePKS3GFXiQYkRtOSn
 GoU31gYEI5+MS5vuuDLp0TeHRPx2yyhYAl66Hnqo9ps+8bAS9Qv0RUvM84VCHKfmA3oqUmGg8W
 4gC2R+cTSC1/WkMgR7otq+U2/Ebp/OHQkCYJy+/AxC0cy9F1ax1XZyCSOX40HVONcm21hb+2sd
 Uac=
X-SBRS: None
X-MesageID: 39226058
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,200,1610427600"; 
   d="scan'208";a="39226058"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=HfNQPDgeLgNDk43/8Ml+edXiwbivpnds0R/aSeuUxVS3ggev/2ZyEBnguNL0GgXIoIRRU8/hL+jFgNI1oCV6ZTBE6sqACUEVVt4DYYqTrcLWThdS9s8D1hCgauiPAczWwywpkddFd1+8joNK1VC1U3Vx/IIqm0KimURZoWnxFstXDOlR8g0Vmn2HR7jF5nhQRiZSR2/jlzDeVG/kWRoBjuZof/oOTbZzup6l+lIZCleIM+nO7W27WsmhzN2EZzFGV3r1wudNHHuP/YnKFULN60LEZJ+5MLtHusus7CgDJHWEpxmMcm4yKKct3eGUncZbOkKYLSWXnQt6XE9k5D1E1Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=1Jy9kfn1Rg/Q8aoBegPupCuZ0Rut8f+LX2PojXWt8Lg=;
 b=fAFrIeSEl1oZCHyjvwrNOWdFjNrtYYkDM6OuRBTXthf+rsCm9beywtYMPFeq07Gx/mt9RAFBK7dopT541vz+62jCj8c1EMwLU4pRq17TT95ZSfEfepxpqCYKh4vYa8EqlE0jcH3Ql+1Zh2ZaUo6ExGfWbpgYYNXMspgsxDKSDK0L4hguSus4VmSemcyAnEUca+DGq+vZt9NxGFIbgrj33JKXP7syCm/w3stMTQ67HLxwPutU7Lt1n7H14JdPHfWwwMSEmooaKKuKCsx7kQCueQsWgWdsfzTUrM733zjhuLwTt9AL37gYJh9nOOBOS0zz1ngR4VsQKkHjKW1+KLyiGQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=1Jy9kfn1Rg/Q8aoBegPupCuZ0Rut8f+LX2PojXWt8Lg=;
 b=nkM7czuOE0Adl23DDN27FEoqMi4nec5AtzJGn6lzGOeoqhgoilJRNi32bnUyVfQpe4P0kAy0aLkdFGhRDzMVwe59TRwHxiaeImoCIkIDf0QVge6fXe7tURQ4oFFez282tt3vk4wb7bjudnjznYqvE0gLksjtTJcBlnB42qlJKqI=
Subject: Re: [PATCH for-4.15] cirrus-ci: introduce some basic FreeBSD testing
To: Roger Pau Monne <roger.pau@citrix.com>, <xen-devel@lists.xenproject.org>
CC: George Dunlap <george.dunlap@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>
References: <20210223155353.77191-1-roger.pau@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <9a63c657-81be-9ddc-b822-64a790136ddf@citrix.com>
Date: Tue, 23 Feb 2021 16:06:46 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
In-Reply-To: <20210223155353.77191-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0254.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:8a::26) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: d7defa13-5e8d-407c-ad1a-08d8d81509b4
X-MS-TrafficTypeDiagnostic: BYAPR03MB4679:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB4679DFF41BDCE18FDFC35DF6BA809@BYAPR03MB4679.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: d7defa13-5e8d-407c-ad1a-08d8d81509b4
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Feb 2021 16:06:54.3442
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: w0GpacDc+CENrACJ8ALSAwYdU3zW2C26Z+lu1uv5V94mFnbAzc4aT1zV0lQStUBq6RkX4Au3f9C+qzNLoUuNbqKzj7LSwH6e9hpksULPovo=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4679
X-OriginatorOrg: citrix.com

On 23/02/2021 15:53, Roger Pau Monne wrote:
> Cirrus-CI supports FreeBSD natively, so introduce some build tests on
> FreeBSD.
>
> Cirrus-CI requires a GitHub repository in order to trigger the
> tests.
>
> A sample run output can be seen at:
>
> https://github.com/royger/xen/runs/1962451343
>
> Note that the FreeBSD 11 task fails to build QEMU, so it is not part of
> this patch.
>
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Fantastic to get some FreeBSD CI available.
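For anyone skimming the patch who hasn't met the `<< : *FREEBSD_TEMPLATE`
lines before: that's YAML's merge-key syntax, which pulls the anchored
mapping into each task while letting the task's own keys take precedence.
The following is only a Python analogy of that merge behaviour (not how
Cirrus-CI is implemented, and the dictionary contents are abbreviated):

```python
# YAML's merge key ("<<: *anchor"), as used in .cirrus.yml, behaves like
# a dictionary merge where the task's explicitly written keys win.
# Python analogy only; contents abbreviated from the patch.
freebsd_template = {
    "environment": {"APPEND_LIB": "/usr/local/lib",
                    "APPEND_INCLUDES": "/usr/local/include"},
    "build_script": ["./configure", "gmake"],
}

# Equivalent of "task: { name: 'FreeBSD 12', <<: *FREEBSD_TEMPLATE }":
task = {**freebsd_template, "name": "FreeBSD 12"}

print(task["name"])          # FreeBSD 12
print(task["build_script"])  # ['./configure', 'gmake']
```

This is why the two tasks only need to spell out `name` and
`freebsd_instance`; everything else is inherited from the template.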


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 16:10:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 16:10:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88879.167256 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEaGR-0006Hn-Lf; Tue, 23 Feb 2021 16:10:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88879.167256; Tue, 23 Feb 2021 16:10:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEaGR-0006Hg-In; Tue, 23 Feb 2021 16:10:19 +0000
Received: by outflank-mailman (input) for mailman id 88879;
 Tue, 23 Feb 2021 16:10:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xmUX=HZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lEaGP-0006Hb-S2
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 16:10:17 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e9866f7a-3bc8-4e18-8135-afd17714f83a;
 Tue, 23 Feb 2021 16:10:16 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EE637AC6E;
 Tue, 23 Feb 2021 16:10:15 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e9866f7a-3bc8-4e18-8135-afd17714f83a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614096616; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=KK5re5HbElwgFFnYL7SM8dj8dYR5cXtxL91gYBgmhq8=;
	b=P08BzRkPZ4fBg2WpeM4ekuAWRKaZW937NsyC8M9DrRJ95mngSda3zl/RytqVi32OttEGzC
	31fHEwpI4uWgMoI4PF3XW58a8Qu2n2Un32FY5hgFgbDQ/JfsANraFLEizQMpepPEpHxNw0
	u9DmTZFNTX/vmofy9b+Gafrzv2UtJ/c=
Subject: Re: [PATCH v2 2/4] x86: Introduce MSR_UNHANDLED
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: andrew.cooper3@citrix.com, xen-devel@lists.xenproject.org,
 iwj@xenproject.org, wl@xen.org, anthony.perard@citrix.com,
 jun.nakajima@intel.com, kevin.tian@intel.com
References: <1611182952-9941-1-git-send-email-boris.ostrovsky@oracle.com>
 <1611182952-9941-3-git-send-email-boris.ostrovsky@oracle.com>
 <YC5GrgqwsR/eBwpy@Air-de-Roger>
 <4e585daa-f255-fbff-d1cf-38ef49f146f5@oracle.com>
 <YDOQvU1h8zpOv5PH@Air-de-Roger>
 <ce2ef7a3-0583-ffff-182a-0ab078f45b82@oracle.com>
 <307a4765-1c54-63fd-b3fb-ecb47ba3dbca@suse.com>
 <YDTMIW5vBe0IncVR@Air-de-Roger>
 <2744f277-06fb-e49f-2023-0ec6175259dd@suse.com>
 <YDTyScmud26aiaMi@Air-de-Roger>
 <172dfcab-9366-47f0-9c56-2202a8b7a7db@suse.com>
 <3ae19e76-2543-28f4-9c7f-697ccf9ed202@oracle.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a4d2f7f5-a8b0-eeb5-dfaa-539784c67a87@suse.com>
Date: Tue, 23 Feb 2021 17:10:16 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <3ae19e76-2543-28f4-9c7f-697ccf9ed202@oracle.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 23.02.2021 16:39, Boris Ostrovsky wrote:
> 
> On 2/23/21 8:23 AM, Jan Beulich wrote:
>> On 23.02.2021 13:17, Roger Pau Monné wrote:
>>> On Tue, Feb 23, 2021 at 11:15:31AM +0100, Jan Beulich wrote:
>>>> On 23.02.2021 10:34, Roger Pau Monné wrote:
>>>>> On Tue, Feb 23, 2021 at 08:57:00AM +0100, Jan Beulich wrote:
>>>>>> On 22.02.2021 22:19, Boris Ostrovsky wrote:
>>>>>>> On 2/22/21 6:08 AM, Roger Pau Monné wrote:
>>>>>>>> On Fri, Feb 19, 2021 at 09:56:32AM -0500, Boris Ostrovsky wrote:
>>>>>>>>> On 2/18/21 5:51 AM, Roger Pau Monné wrote:
>>>>>>>>>> On Wed, Jan 20, 2021 at 05:49:10PM -0500, Boris Ostrovsky wrote:
>>>>>>>>>>> When toolstack updates MSR policy, this MSR offset (which is the last
>>>>>>>>>>> index in the hypervisor MSR range) is used to indicate hypervisor
>>>>>>>>>>> behavior when guest accesses an MSR which is not explicitly emulated.
>>>>>>>>>> It's kind of weird to use an MSR to store this. I assume this is done
>>>>>>>>>> for migration reasons?
>>>>>>>>> Not really. It just seemed to me that MSR policy is the logical place to do that. Because it *is* a policy of how to deal with such accesses, isn't it?
>>>>>>>> I agree that using the msr_policy seems like the most suitable place
>>>>>>>> to convey this information between the toolstack and Xen. I wonder if
>>>>>>>> it would be fine to have fields in msr_policy that don't directly
>>>>>>>> translate into an MSR value?
>>>>>>>
>>>>>>> We have xen_msr_entry_t.flags that we can use when passing policy array back and forth. Then we can ignore xen_msr_entry_t.idx for that entry (although in earlier version of this series Jan preferred to use idx and leave flags alone).
>>>>>> Which, just to clarify, was not the least because of the flags
>>>>>> field being per-entry, i.e. per MSR, while here we want a global
>>>>>> indicator.
>>>>> We could exploit the xen_msr_entry_t structure a bit and use it
>>>>> like:
>>>>>
>>>>> typedef struct xen_msr_entry {
>>>>>     uint32_t idx;
>>>>> #define XEN_MSR_IGNORE (1u << 0)
>>>>>     uint32_t flags;
>>>>>     uint64_t val;
>>>>> } xen_msr_entry_t;
>>>>>
>>>>> Then use the idx and val fields to signal the start and size of the
>>>>> range to ignore accesses to when XEN_MSR_IGNORE is set in the flags
>>>>> field?
>>>>>
>>>>> xen_msr_entry_t = {
>>>>>     .idx = 0,
>>>>>     .val = 0xffffffff,
>>>>>     .flags = XEN_MSR_IGNORE,
>>>>> };
>>>>>
>>>>> Would be equivalent to ignoring accesses to the whole MSR range.
>>>>>
>>>>> This would allow selecting exactly which MSRs to ignore, while
>>>>> not wasting an MSR itself to convey the information?
>>>> Hmm, yes, the added flexibility would be nice from an abstract pov
>>>> (not sure how relevant it would be to Solaris'es issue). But my
>>>> dislike of using a flag which is meaningless in ordinary entries
>>>> remains, as was voiced against Boris'es original version.
>>> I understand the flags field is meaningless for regular MSRs, but I
>>> don't see why it would be an issue to start using it for this specific
>>> case of registering ranges of ignored MSRs.
>> It's not an "issue", it is - as said - my dislike. However, the way
>> it is in your proposal it is perhaps indeed not as bad as in Boris'es
>> original one: The flag now designates the purpose of the entry, and
>> the other two fields still have a meaning. Hence I was wrong to state
>> that it's "meaningless" - it now is required to be clear for
>> "ordinary" entries.
>>
>> In principle there could then also be multiple such entries / ranges.
> 
> 
> TBH I am not sold on the usefulness of multiple ranges but if both of
> you feel it can be handy I'll do that, using Roger's proposal above.

As indicated I really think an opinion from Andrew would be helpful.

> (Would it make sense to make val a union with, say, size, or should
> the context be clear from the flag's value?)

I'd recommend against this, unless you were to also name the upper
half of the 64-bit field so you can check that part to be zero.
Plus unions are generally not nice to introduce into public
headers for already existing types.
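
As a purely illustrative sketch of the named-upper-half alternative
mentioned above (all names are invented here; none come from the actual
Xen public headers):

```c
#include <stdint.h>

/* Illustrative only: a variant of the entry where the 64-bit value field
 * is split into two named 32-bit halves, so validation code can insist
 * that the upper half is zero for range entries. */
typedef struct sketch_halves_entry {
    uint32_t idx;    /* start of the ignored MSR range */
    uint32_t flags;  /* e.g. an IGNORE flag */
    uint32_t size;   /* lower half of the former 'val': range size */
    uint32_t rsvd;   /* upper half: must be zero */
} sketch_halves_entry_t;

/* Reject entries whose reserved upper half is non-zero. */
static inline int sketch_halves_ok(const sketch_halves_entry_t *e)
{
    return e->rsvd == 0;
}
```

The layout stays 16 bytes, the same as today's xen_msr_entry_t, while
making the must-be-zero part checkable by name.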

> Before I do that though --- what was the conclusion on verbosity
> control?

Not sure - afaict the conclusion was that we still don't really
agree. Roger?

Jan


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 16:11:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 16:11:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88881.167269 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEaHM-0006PO-Vt; Tue, 23 Feb 2021 16:11:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88881.167269; Tue, 23 Feb 2021 16:11:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEaHM-0006PH-So; Tue, 23 Feb 2021 16:11:16 +0000
Received: by outflank-mailman (input) for mailman id 88881;
 Tue, 23 Feb 2021 16:11:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UeLE=HZ=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lEaHL-0006PA-EP
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 16:11:15 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a5c6574e-51da-466a-8be0-f9538bb9b015;
 Tue, 23 Feb 2021 16:11:14 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a5c6574e-51da-466a-8be0-f9538bb9b015
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614096674;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=GMAvHhBZ4kHJNR00wvWm1wUpfp5Z1cl6G21K5K7JKPU=;
  b=Gf9WiEQcVOkhvpaHgjwK3SVeCEso/ZfrfQ6F+HcHeElF6xuKI/kCuJSB
   wdvlkqBBtpjaYZQ+qjQ0MGYpPL5TQHPHL7Kn8rOWGQ16k5yrT2j9BrFxh
   /m599abEoRVwYn/UXdiBYa0T/REgaQpxg7Fa+unsbf21GUnSw86r0yRY0
   0=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: None
X-MesageID: 37769854
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,200,1610427600"; 
   d="scan'208";a="37769854"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=MUoPU8HNRedOJFDKOBXUeNmTknObLPHewQJtrjt/x6MaW31JJhwYmz5tXVS4NbadywdSf0W4wEXJMb13xotjiwr36iK8envUbUkj8MVjjIZWy5qGzAizDo5N5gM/yKajSuo0EPemJVnYl7mM6ArCjTmfwd2Q3C38OM+QIZi633SZ4C8vTdc5uSu7GpC5+HoPHvxP/hXbiM4mGXhRVyygOWdEn6p1IcaN379tlcfq6SZRlcH+Wygafrubz0I5WO9mUbU+Cq5NoXx9Di3w/y4h9PLEruyvG8iZkHovByQ/C8R+ajXhBjq1hpMhioRdpXEDetqBWPKoUpfAzq6JLAfREw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hv04ppCqndpvRQCyHm3bZB5wiK7zxXlt/61PCaJpAXc=;
 b=S+x8U1ZxioBO/Hhmq5OdTWJZ/6Mq/AERpLlwqTTb2N4cm13sVKzkHA2XGbLJpKYt34u/L6JrVujIUKERDwNdu/9IDU51MmZoslrLne5yweVv7yoxTIa5qXbrXYddgYlgsW9doTABp4shoU6FHTEyF/FI9tzqyg0ZTnmJvupSYAMEnVDGO0Porqv0ohAU5EafxfRcmen0nSSleircJ1e8Kf4tZLWclQVdj+zdcclgigX5KXfUT6o3KgByCx+FQNSY63sKwNJefz0F3NARSvhZ/Ra2+5JOhXlzUa6B0rjZ1U+r+CcrfmBZicSUtaZmSkx3R5w8d+qdSL6au+gurxcyHA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hv04ppCqndpvRQCyHm3bZB5wiK7zxXlt/61PCaJpAXc=;
 b=MbG4yQv+/e6FZMPpXYlrJOPO62XjMPdKyvP7j03xd8TakTKJTyWsDvi1CtlXezmofAQIwm/snM/MGRQea2CLjdAQgAe6qMCJvUpyG3xS0PO2COFab8yGMGlfceXPY0xibs/lPlMpl6yOHR2iV5tX7bc6lCtkSB8pmQF68j5e0Gg=
Date: Tue, 23 Feb 2021 17:11:03 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
CC: Jan Beulich <jbeulich@suse.com>, <andrew.cooper3@citrix.com>,
	<xen-devel@lists.xenproject.org>, <iwj@xenproject.org>, <wl@xen.org>,
	<anthony.perard@citrix.com>, <jun.nakajima@intel.com>, <kevin.tian@intel.com>
Subject: Re: [PATCH v2 2/4] x86: Introduce MSR_UNHANDLED
Message-ID: <YDUpF8gf6fbm4ouQ@Air-de-Roger>
References: <YC5GrgqwsR/eBwpy@Air-de-Roger>
 <4e585daa-f255-fbff-d1cf-38ef49f146f5@oracle.com>
 <YDOQvU1h8zpOv5PH@Air-de-Roger>
 <ce2ef7a3-0583-ffff-182a-0ab078f45b82@oracle.com>
 <307a4765-1c54-63fd-b3fb-ecb47ba3dbca@suse.com>
 <YDTMIW5vBe0IncVR@Air-de-Roger>
 <2744f277-06fb-e49f-2023-0ec6175259dd@suse.com>
 <YDTyScmud26aiaMi@Air-de-Roger>
 <172dfcab-9366-47f0-9c56-2202a8b7a7db@suse.com>
 <3ae19e76-2543-28f4-9c7f-697ccf9ed202@oracle.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <3ae19e76-2543-28f4-9c7f-697ccf9ed202@oracle.com>
X-ClientProxiedBy: PR1P264CA0029.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:102:19f::16) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: c9dc5841-66ac-4431-3fa9-08d8d815a126
X-MS-TrafficTypeDiagnostic: DM5PR03MB3065:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM5PR03MB3065031B654B7A0B3493EC638F809@DM5PR03MB3065.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: c9dc5841-66ac-4431-3fa9-08d8d815a126
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Feb 2021 16:11:08.4019
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: dscUJWrfmCYVyQI+9RFBqGjACMdMXLhVEpu+1Kh3KhJqeWmsxZ83icWrlFrU5YrkK+lVn9EzH9SMAZi4gYvKmA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB3065
X-OriginatorOrg: citrix.com

On Tue, Feb 23, 2021 at 10:39:48AM -0500, Boris Ostrovsky wrote:
> 
> On 2/23/21 8:23 AM, Jan Beulich wrote:
> > On 23.02.2021 13:17, Roger Pau Monné wrote:
> >> On Tue, Feb 23, 2021 at 11:15:31AM +0100, Jan Beulich wrote:
> >>> On 23.02.2021 10:34, Roger Pau Monné wrote:
> >>>> On Tue, Feb 23, 2021 at 08:57:00AM +0100, Jan Beulich wrote:
> >>>>> On 22.02.2021 22:19, Boris Ostrovsky wrote:
> >>>>>> On 2/22/21 6:08 AM, Roger Pau Monné wrote:
> >>>>>>> On Fri, Feb 19, 2021 at 09:56:32AM -0500, Boris Ostrovsky wrote:
> >>>>>>>> On 2/18/21 5:51 AM, Roger Pau Monné wrote:
> >>>>>>>>> On Wed, Jan 20, 2021 at 05:49:10PM -0500, Boris Ostrovsky wrote:
> >>>>>>>>>> When toolstack updates MSR policy, this MSR offset (which is the last
> >>>>>>>>>> index in the hypervisor MSR range) is used to indicate hypervisor
> >>>>>>>>>> behavior when guest accesses an MSR which is not explicitly emulated.
> >>>>>>>>> It's kind of weird to use an MSR to store this. I assume this is done
> >>>>>>>>> for migration reasons?
> >>>>>>>> Not really. It just seemed to me that MSR policy is the logical place to do that. Because it *is* a policy of how to deal with such accesses, isn't it?
> >>>>>>> I agree that using the msr_policy seems like the most suitable place
> >>>>>>> to convey this information between the toolstack and Xen. I wonder if
> >>>>>>> it would be fine to have fields in msr_policy that don't directly
> >>>>>>> translate into an MSR value?
> >>>>>>
> >>>>>> We have xen_msr_entry_t.flags that we can use when passing policy array back and forth. Then we can ignore xen_msr_entry_t.idx for that entry (although in earlier version of this series Jan preferred to use idx and leave flags alone).
> >>>>> Which, just to clarify, was not the least because of the flags
> >>>>> field being per-entry, i.e. per MSR, while here we want a global
> >>>>> indicator.
> >>>> We could exploit the xen_msr_entry_t structure a bit and use it
> >>>> like:
> >>>>
> >>>> typedef struct xen_msr_entry {
> >>>>     uint32_t idx;
> >>>> #define XEN_MSR_IGNORE (1u << 0)
> >>>>     uint32_t flags;
> >>>>     uint64_t val;
> >>>> } xen_msr_entry_t;
> >>>>
> >>>> Then use the idx and val fields to signal the start and size of the
> >>>> range to ignore accesses to when XEN_MSR_IGNORE is set in the flags
> >>>> field?
> >>>>
> >>>> xen_msr_entry_t = {
> >>>>     .idx = 0,
> >>>>     .val = 0xffffffff,
> >>>>     .flags = XEN_MSR_IGNORE,
> >>>> };
> >>>>
> >>>> Would be equivalent to ignoring accesses to the whole MSR range.
> >>>>
> >>>> This would allow selecting exactly which MSRs to ignore, while
> >>>> not wasting an MSR itself to convey the information?
> >>> Hmm, yes, the added flexibility would be nice from an abstract pov
> >>> (not sure how relevant it would be to Solaris'es issue). But my
> >>> dislike of using a flag which is meaningless in ordinary entries
> >>> remains, as was voiced against Boris'es original version.
> >> I understand the flags field is meaningless for regular MSRs, but I
> >> don't see why it would be an issue to start using it for this specific
> >> case of registering ranges of ignored MSRs.
> > It's not an "issue", it is - as said - my dislike. However, the way
> > it is in your proposal it is perhaps indeed not as bad as in Boris'es
> > original one: The flag now designates the purpose of the entry, and
> > the other two fields still have a meaning. Hence I was wrong to state
> > that it's "meaningless" - it now is required to be clear for
> > "ordinary" entries.
> >
> > In principle there could then also be multiple such entries / ranges.
> 
> 
> TBH I am not sold on the usefulness of multiple ranges but if both of you feel it can be handy I'll do that, using Roger's proposal above. (Would it make sense to make val a union with, say, size, or should the context be clear from the flag's value?)

Doing a union with { val, size } would be fine for me; I'm also fine
with not doing it, though.
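
A minimal sketch of the { val, size } union idea, together with the
range check it enables (names and the flag value are invented for
illustration; this is not the actual public-header change):

```c
#include <stdint.h>

/* Illustrative sketch: for ordinary entries 'val' holds the MSR value;
 * for ignore-range entries the same storage is read as 'size', the
 * number of MSRs in the range starting at 'idx'. */
#define SKETCH_MSR_IGNORE (1u << 0)

typedef struct sketch_range_entry {
    uint32_t idx;
    uint32_t flags;
    union {
        uint64_t val;   /* ordinary entry: MSR value */
        uint64_t size;  /* ignore entry: length of the range */
    } u;
} sketch_range_entry_t;

/* Does MSR 'msr' fall within the ignored range described by 'e'?
 * The unsigned subtraction wraps for msr < idx, making the single
 * comparison cover both bounds. */
static inline int sketch_msr_ignored(const sketch_range_entry_t *e,
                                     uint32_t msr)
{
    return (e->flags & SKETCH_MSR_IGNORE) &&
           (uint64_t)(msr - e->idx) < e->u.size;
}
```

With this shape, Roger's { .idx = 0, .val = 0xffffffff } example becomes
an ignore entry whose size spans (almost) the whole 32-bit index space.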

> 
> Before I do that though --- what was the conclusion on verbosity control?

Ideally I would like to find a way to have a more generic interface to
change the verbosity level on a per-guest basis, but I haven't looked
into how to do that at all, nor do I think it would be acceptable to
put that burden on you.

Maybe we could introduce another flag to set whether ignored MSRs
should be logged, as that would be easier to implement?

I think in that case we could enforce that all ranges have the flag
either set or clear, in order to avoid ending up tracking two different
ranges of ignored MSRs, one that triggers a message and another one
that doesn't.
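
One hypothetical way such a flag could look, with the all-or-nothing
consistency check enforced at policy-load time (flag names and values
are invented, not part of any posted patch):

```c
#include <stddef.h>
#include <stdint.h>

#define SKETCH_IGNORE (1u << 0)
#define SKETCH_SILENT (1u << 1)  /* hypothetical: suppress log messages */

typedef struct sketch_policy_entry {
    uint32_t idx;
    uint32_t flags;
    uint64_t val;
} sketch_policy_entry_t;

/* Enforce that every ignore-range entry carries the same verbosity
 * setting, so the hypervisor never has to track both a logged and an
 * unlogged set of ignored MSRs at the same time. */
static int sketch_ranges_consistent(const sketch_policy_entry_t *e, size_t n)
{
    int silent = -1; /* not yet determined */
    size_t i;

    for (i = 0; i < n; i++) {
        int s;

        if (!(e[i].flags & SKETCH_IGNORE))
            continue; /* ordinary MSR entry: no constraint */
        s = !!(e[i].flags & SKETCH_SILENT);
        if (silent < 0)
            silent = s;       /* first range sets the expectation */
        else if (silent != s)
            return 0;         /* mixed settings: reject the policy */
    }
    return 1;
}
```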

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 16:13:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 16:13:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88885.167280 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEaJQ-0006Yd-HQ; Tue, 23 Feb 2021 16:13:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88885.167280; Tue, 23 Feb 2021 16:13:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEaJQ-0006YW-Ec; Tue, 23 Feb 2021 16:13:24 +0000
Received: by outflank-mailman (input) for mailman id 88885;
 Tue, 23 Feb 2021 16:13:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xmUX=HZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lEaJP-0006YR-3M
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 16:13:23 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4632d9a5-f33d-417d-bfff-3da4782474cc;
 Tue, 23 Feb 2021 16:13:22 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5F5ABAB95;
 Tue, 23 Feb 2021 16:13:21 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4632d9a5-f33d-417d-bfff-3da4782474cc
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614096801; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Q1DLj/+4Nqge5qsuEp8WvxJqf7aK5Y4qbufSexgLYQE=;
	b=AXdNyMaVqxaiRkJ5YzPlYElPK9J5ze8OfJNMRgvqLNput3TLiD2zzbl5RgWq6oFGtkzHyl
	kd/6Lfo7UqXbeaVLHaYEo8cQyf6h2srlaPxC4DaGLr92F8DO2iSMBSSm1BhCQ8QuevoQg5
	8s3bkkGhpIHXXOJUYq1xCXf+5KxKtgY=
Subject: Re: [PATCH v2 8/8] x86/PV: use get_unsafe() instead of
 copy_from_unsafe()
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>
References: <b466a19e-e547-3c7c-e39b-1a4c848a053a@suse.com>
 <0a59ae2f-448e-610d-e8a2-a7c3f9f3918f@suse.com>
 <YDTuGn8YWRrWlbS9@Air-de-Roger>
 <76207250-1372-e7ab-2d03-b46020a7906b@suse.com>
 <YDUhKw+19ITgVmml@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4fdb5952-6196-3a79-1306-e65d75e495d2@suse.com>
Date: Tue, 23 Feb 2021 17:13:21 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <YDUhKw+19ITgVmml@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 23.02.2021 16:37, Roger Pau Monné wrote:
> On Tue, Feb 23, 2021 at 04:25:00PM +0100, Jan Beulich wrote:
>> On 23.02.2021 12:59, Roger Pau Monné wrote:
>>> On Wed, Feb 17, 2021 at 09:23:33AM +0100, Jan Beulich wrote:
>>>> The former expands to a single (memory accessing) insn, which the latter
>>>> does not guarantee. Yet we'd prefer to read consistent PTEs rather than
>>>> risking a split read racing with an update done elsewhere.
>>>>
>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>
>>> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
>>>
>>> Albeit I wonder why the __builtin_constant_p check done in
>>> copy_from_unsafe is not enough to take the get_unsafe_size branch in
>>> there. Doesn't sizeof(l{1,2}_pgentry_t) qualify as a build-time
>>> constant?
>>>
>>> Or does the fact that n is a parameter to an inline function hide
>>> this, in which case the __builtin_constant_p is pointless?
>>
>> Without (enough) optimization, __builtin_constant_p() may indeed
>> yield false in such cases. But that wasn't actually what I had
>> in mind when making this change (and the original similar one in
>> shadow code). Instead, at the time I made the shadow side change,
>> I had removed this optimization from the new function flavors.
>> With that removal, things are supposed to still be correct - it's
>> an optimization only, after all. Meanwhile the optimizations are
>> back, so there's no immediate problem as long as the optimizer
>> doesn't decide to out-of-line the function invocations (we
>> shouldn't forget that even always_inline is not a guarantee for
>> inlining to actually occur).
> 
> I'm fine with you switching those use cases to get_unsafe, but I think
> the commit message should be slightly adjusted to notice that
> copy_from_unsafe will likely do the right thing, but that it's simply
> clearer to call get_unsafe directly, also in case copy_from_unsafe
> gets changed in the future to drop the get_unsafe paths.

How about this then?

"The former expands to a single (memory accessing) insn, which the latter
 does not guarantee (the __builtin_constant_p() based switch() statement
 there is just an optimization). Yet we'd prefer to read consistent PTEs
 rather than risking a split read racing with an update done elsewhere."

Jan
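
Roughly, the shape being discussed can be modelled as follows (heavily
simplified: the real get_unsafe()/copy_from_unsafe() use inline assembly
with exception fixup, which plain loads stand in for here):

```c
#include <stdint.h>
#include <string.h>

/* Fixed-size access: one load of the requested width, standing in for
 * the single-insn asm-based accessor. */
#define model_get_unsafe_size(x, ptr, size) do {                 \
    switch (size) {                                              \
    case 1: (x) = *(const volatile uint8_t  *)(ptr); break;      \
    case 2: (x) = *(const volatile uint16_t *)(ptr); break;      \
    case 4: (x) = *(const volatile uint32_t *)(ptr); break;      \
    case 8: (x) = *(const volatile uint64_t *)(ptr); break;      \
    }                                                            \
} while (0)

/* Model of copy_from_unsafe(): when the compiler can prove 'n' is a
 * constant of a handled width, take the fixed-size path; otherwise fall
 * back to a generic copy, which carries no single-insn guarantee. */
static inline void model_copy_from_unsafe(void *to, const void *from,
                                          unsigned int n)
{
    if (__builtin_constant_p(n)) {
        switch (n) {
        case 1: model_get_unsafe_size(*(uint8_t  *)to, from, 1); return;
        case 2: model_get_unsafe_size(*(uint16_t *)to, from, 2); return;
        case 4: model_get_unsafe_size(*(uint32_t *)to, from, 4); return;
        case 8: model_get_unsafe_size(*(uint64_t *)to, from, 8); return;
        }
    }
    memcpy(to, from, n);
}
```

This also illustrates the point made earlier in the thread: with little
or no optimization, __builtin_constant_p() may return false even for a
sizeof() argument, so calling the fixed-size accessor directly is the
only way to be sure of a single access.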


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 16:26:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 16:26:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88898.167298 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEaVz-0007fZ-PW; Tue, 23 Feb 2021 16:26:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88898.167298; Tue, 23 Feb 2021 16:26:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEaVz-0007fS-Ma; Tue, 23 Feb 2021 16:26:23 +0000
Received: by outflank-mailman (input) for mailman id 88898;
 Tue, 23 Feb 2021 16:26:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xmUX=HZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lEaVy-0007fN-HY
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 16:26:22 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fa8f8c93-3b77-4624-979d-c1926e38649e;
 Tue, 23 Feb 2021 16:26:21 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BD749AC1D;
 Tue, 23 Feb 2021 16:26:20 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fa8f8c93-3b77-4624-979d-c1926e38649e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614097580; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=x8GTAvdmsyH+Pp0li/Vl3nOFzMKqCxBpQieMBY2N7/I=;
	b=A3SeXkLY1ANJcd40RWOUEDSk5mVdQ/bqXEvEmj8fAiYJcjYb/+L1HHL5J0WIczye/ltXIE
	WLSdk3KA4BqILQ31VrDh0VEDeEd8U5tiqvX1OSqGA40EzolPcB9ml6sOwf6fPkVJCo0GVo
	+Iu9NQqEJj3Aj0NwxicDKckdYmiKj60=
To: Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "oleksandr_andrushchenko@epam.com" <oleksandr_andrushchenko@epam.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] xen-front-pgdir-shbuf: don't record wrong grant handle upon
 error
Message-ID: <82414b0f-1b63-5509-7c1d-5bcc8239a3de@suse.com>
Date: Tue, 23 Feb 2021 17:26:21 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

In order for subsequent unmapping to not mistakenly unmap handle 0,
record a perceived always-invalid one instead.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
---
v2: Use INVALID_GRANT_HANDLE.

--- a/drivers/xen/xen-front-pgdir-shbuf.c
+++ b/drivers/xen/xen-front-pgdir-shbuf.c
@@ -305,11 +305,18 @@ static int backend_map(struct xen_front_
 
 	/* Save handles even if error, so we can unmap. */
 	for (cur_page = 0; cur_page < buf->num_pages; cur_page++) {
-		buf->backend_map_handles[cur_page] = map_ops[cur_page].handle;
-		if (unlikely(map_ops[cur_page].status != GNTST_okay))
+		if (likely(map_ops[cur_page].status == GNTST_okay)) {
+			buf->backend_map_handles[cur_page] =
+				map_ops[cur_page].handle;
+		} else {
+			buf->backend_map_handles[cur_page] =
+				INVALID_GRANT_HANDLE;
+			if (!ret)
+				ret = -ENXIO;
 			dev_err(&buf->xb_dev->dev,
 				"Failed to map page %d: %d\n",
 				cur_page, map_ops[cur_page].status);
+		}
 	}
 
 	if (ret) {
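
The underlying pattern in the patch above - initialise per-page handles
to a sentinel the unmap path recognises, instead of copying a failed
operation's handle field (which may be 0, itself a valid handle) - can
be sketched as follows (names invented; SKETCH_INVALID_HANDLE stands in
for INVALID_GRANT_HANDLE, and status 0 for GNTST_okay):

```c
#include <stddef.h>
#include <stdint.h>

#define SKETCH_INVALID_HANDLE UINT32_MAX

struct sketch_map_op {
    int status;       /* 0 on success, non-zero on failure */
    uint32_t handle;  /* only meaningful when status == 0 */
};

/* Record handles for later unmapping; on failure store the sentinel so
 * cleanup never tries to unmap a garbage (or zero) handle. All entries
 * are scanned even after the first error, so every valid mapping is
 * still recorded for cleanup. */
static int sketch_save_handles(uint32_t *handles,
                               const struct sketch_map_op *ops, size_t n)
{
    int ret = 0;
    size_t i;

    for (i = 0; i < n; i++) {
        if (ops[i].status == 0) {
            handles[i] = ops[i].handle;
        } else {
            handles[i] = SKETCH_INVALID_HANDLE;
            if (!ret)
                ret = -1; /* report the first failure */
        }
    }
    return ret;
}
```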


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 16:29:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 16:29:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88900.167311 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEaYy-0007ui-8b; Tue, 23 Feb 2021 16:29:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88900.167311; Tue, 23 Feb 2021 16:29:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEaYy-0007ub-59; Tue, 23 Feb 2021 16:29:28 +0000
Received: by outflank-mailman (input) for mailman id 88900;
 Tue, 23 Feb 2021 16:29:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xmUX=HZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lEaYw-0007uW-Eu
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 16:29:26 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 78f96d78-34a1-4204-846a-25d16c4ea25c;
 Tue, 23 Feb 2021 16:29:25 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E6070ACBF;
 Tue, 23 Feb 2021 16:29:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 78f96d78-34a1-4204-846a-25d16c4ea25c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614097765; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=1fcBBVi2VQJZ1nf/JdbxVgiBTyHyQF9V+0dM+Pib8sw=;
	b=AIsb4Pg1kS6LzyrZ1kIU6GCBQu5bS5bkoOedIG3Nw+T9eBFfq99Ao4VwWMueQmuco2vIjY
	Ic3soYYBWVMq3BTbOblmODIv+41R+gt4ZwjNpzlcBwXmp+D2rTGKlpOAF3SBvncPRo5TlM
	gdvQxneUDFJeKhdpBdnIo3AHxIoYMyc=
To: Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "netdev@vger.kernel.org" <netdev@vger.kernel.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] xen-netback: correct success/error reporting for the
 SKB-with-fraglist case
Message-ID: <4dd5b8ec-a255-7ab1-6dbf-52705acd6d62@suse.com>
Date: Tue, 23 Feb 2021 17:29:25 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

When re-entering the main loop of xenvif_tx_check_gop() a second time,
the special considerations for the head of the SKB no longer apply.
Don't mistakenly report ERROR to the frontend for the first entry in
the list, even if - from all I can tell - this shouldn't matter much,
as the overall transmit will need to be considered failed anyway.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -499,7 +499,7 @@ check_frags:
 				 * the header's copy failed, and they are
 				 * sharing a slot, send an error
 				 */
-				if (i == 0 && sharedslot)
+				if (i == 0 && !first_shinfo && sharedslot)
 					xenvif_idx_release(queue, pending_idx,
 							   XEN_NETIF_RSP_ERROR);
 				else


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 16:37:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 16:37:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88904.167326 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEaga-0000O4-3j; Tue, 23 Feb 2021 16:37:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88904.167326; Tue, 23 Feb 2021 16:37:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEaga-0000Nx-06; Tue, 23 Feb 2021 16:37:20 +0000
Received: by outflank-mailman (input) for mailman id 88904;
 Tue, 23 Feb 2021 16:37:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MteW=HZ=xenbits.xen.org=iwj@srs-us1.protection.inumbo.net>)
 id 1lEagY-0000NY-Rf
 for xen-devel@lists.xen.org; Tue, 23 Feb 2021 16:37:18 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b1d8a33d-122f-4862-9164-9e8b412781fa;
 Tue, 23 Feb 2021 16:37:14 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1lEagN-0007HJ-Gl; Tue, 23 Feb 2021 16:37:07 +0000
Received: from iwj by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1lEagN-0005TY-Eq; Tue, 23 Feb 2021 16:37:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b1d8a33d-122f-4862-9164-9e8b412781fa
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
	Content-Transfer-Encoding:Content-Type;
	bh=ZVKCACDsa88A+J2oOVB0FQlmNUn9GbX8Js0ggX9rhno=; b=4xAMLjnvN7z2GsUYt2gylMU/d3
	t0UDA9v/XLyQ/WnKNPWWUBppja+J7Ci2nu4Mc+WZ7ozb9Z+PPcjPfIDlisZwVqIT9v1MrsyFhrN3F
	WHFqe3f5ZCx+6peklvNXnB/DEU1MDoc8ca5nNZrg+MzvbSmtwWVpACK2+T1T4PYaXeN8=;
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 366 v2 (CVE-2021-27379) - missed flush in
 XSA-321 backport
Message-Id: <E1lEagN-0005TY-Eq@xenbits.xenproject.org>
Date: Tue, 23 Feb 2021 16:37:07 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

            Xen Security Advisory CVE-2021-27379 / XSA-366
                              version 2

                   missed flush in XSA-321 backport

UPDATES IN VERSION 2
====================

CVE assigned.

Fixed erroneous reference to XSA-320; should have read XSA-321.

ISSUE DESCRIPTION
=================

An oversight was made when backporting XSA-321, leading to entries in
the IOMMU not being properly updated under certain circumstances.

IMPACT
======

A malicious guest may be able to retain read/write DMA access to
frames returned to Xen's free pool, and later reused for another
purpose.  Host crashes (leading to a Denial of Service) and privilege
escalation cannot be ruled out.

VULNERABLE SYSTEMS
==================

Xen versions up to 4.11, from at least 3.2 onwards, are affected.  Xen
versions 4.12 and newer are not affected.

Only x86 Intel systems are affected.  x86 AMD as well as Arm systems are
not affected.

Only x86 HVM guests using hardware assisted paging (HAP), having a
passed through PCI device assigned, and having page table sharing
enabled can leverage the vulnerability.  Note that page table
sharing will be enabled (by default) only if Xen considers IOMMU and
CPU large page size support compatible.

MITIGATION
==========

Suppressing the use of page table sharing will avoid the vulnerability
(command line option "iommu=no-sharept").

Suppressing the use of large HAP pages will avoid the vulnerability
(command line options "hap_2mb=no hap_1gb=no").

Not passing through PCI devices to HVM guests will avoid the
vulnerability.
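As an illustrative sketch of applying the command-line mitigations (a
GRUB2 setup is assumed; the variable name and the config-regeneration
command vary by distribution):

```
# /etc/default/grub -- the options go on the Xen (hypervisor) command
# line, not the dom0 kernel one; regenerate grub.cfg afterwards
# (e.g. update-grub, or grub2-mkconfig -o /boot/grub2/grub.cfg).
GRUB_CMDLINE_XEN_DEFAULT="iommu=no-sharept"
# Alternatively, suppress large HAP pages instead:
# GRUB_CMDLINE_XEN_DEFAULT="hap_2mb=no hap_1gb=no"
```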

CREDITS
=======

This issue was reported as a bug by M. Vefa Bicakci, and recognized as
a security issue by Roger Pau Monne of Citrix.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.

Note that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball.  Downstreams are encouraged to update to the
tip of the stable branch before applying these patches.

xsa366-4.11.patch      Xen 4.11.x

$ sha256sum xsa366*
3131c9487b9446655e2e21df4ccf1e003bec471881396d7b2b1a0939f5cbae96  xsa366.meta
8c8c18ca8425e6167535c3cf774ffeb9dcb4572e81c8d2ff4a73fefede2d4d94  xsa366-4.11.patch
$
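Illustratively, updating to the tip of the stable branch before
applying the patch might look like this (a sketch, assuming a clone of
the xenbits xen.git tree and the usual stable-X.Y branch naming):

```
git clone https://xenbits.xen.org/git-http/xen.git
cd xen
git checkout stable-4.11
patch -p1 < ../xsa366-4.11.patch
```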

NOTE REGARDING LACK OF EMBARGO
==============================

This was reported and debugged publicly, before the security
implications were apparent.
-----BEGIN PGP SIGNATURE-----

iQE/BAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAmA1Lx4MHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZXRkH+MsCFrh/HOCaqzbdlT46sZBSS3B7wMjaCt4WtB8z
MKxRY013/MMi7xbOhMvLE/qEtT8cdkOykxac9WjMnAPk2NQE3L3uRvoWsS8cYLa6
39RklCw0o/0YTsiY4bB5X1jI+8dBZxt4QPYl1YQqsLOHTlSJFix2Vm6w/K8+BZt9
ceS58GEoAawwlkVXdSH2115rSVRoBUZqgHCkPIc6eOjAmXCPL++8uUToWWhiROWD
Ic0STLsf/Rt44G71rPh8GoFdncIBULcPlp1LbxCUEzRVhdmeb1/shs79vsIk0Z3l
c2oHzypyS15p/kdQbulGTXDFq933C4ELtjrY/HwPumJSdg==
=er6n
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa366.meta"
Content-Disposition: attachment; filename="xsa366.meta"
Content-Transfer-Encoding: base64

ewogICJYU0EiOiAzNjYsCiAgIlN1cHBvcnRlZFZlcnNpb25zIjogWwogICAg
IjQuMTEiCiAgXSwKICAiVHJlZXMiOiBbCiAgICAieGVuIgogIF0sCiAgIlJl
Y2lwZXMiOiB7CiAgICAiNC4xMSI6IHsKICAgICAgIlJlY2lwZXMiOiB7CiAg
ICAgICAgInhlbiI6IHsKICAgICAgICAgICJTdGFibGVSZWYiOiAiODBjYWQ1
ODRmYjRjMjU5OWFlMTc0MjI2ZTJjOTEzYmIyM2RmM2JmYSIsCiAgICAgICAg
ICAiUHJlcmVxcyI6IFtdLAogICAgICAgICAgIlBhdGNoZXMiOiBbCiAgICAg
ICAgICAgICJ4c2EzNjYtNC4xMS5wYXRjaCIKICAgICAgICAgIF0KICAgICAg
ICB9CiAgICAgIH0KICAgIH0KICB9Cn0=

--=separator
Content-Type: application/octet-stream; name="xsa366-4.11.patch"
Content-Disposition: attachment; filename="xsa366-4.11.patch"
Content-Transfer-Encoding: base64

RnJvbTogUm9nZXIgUGF1IE1vbm5lIDxyb2dlci5wYXVAY2l0cml4LmNvbT4K
U3ViamVjdDogeDg2L2VwdDogZml4IG1pc3NpbmcgSU9NTVUgZmx1c2ggaW4g
YXRvbWljX3dyaXRlX2VwdF9lbnRyeQoKQmFja3BvcnQgb2YgWFNBLTMyMSBt
aXNzZWQgYSBmbHVzaCBpbiBhdG9taWNfd3JpdGVfZXB0X2VudHJ5IHdoZW4K
bGV2ZWwgd2FzIGRpZmZlcmVudCB0aGFuIDAuIFN1Y2ggb21pc3Npb24gd2ls
bCB1bmRlcm1pbmUgdGhlIGZpeCBmb3IKWFNBLTMyMSwgYmVjYXVzZSBwYWdl
IHRhYmxlIGVudHJpZXMgY2FjaGVkIGluIHRoZSBJT01NVSBjYW4gZ2V0IG91
dApvZiBzeW5jIGFuZCBjb250YWluIHN0YWxlIGVudHJpZXMuCgpGaXggdGhp
cyBieSBzbGlnaHRseSByZS1hcnJhbmdpbmcgdGhlIGNvZGUgdG8gcHJldmVu
dCB0aGUgZWFybHkgcmV0dXJuCndoZW4gbGV2ZWwgaXMgZGlmZmVyZW50IHRo
YXQgMC4gTm90ZSB0aGF0IHRoZSBlYXJseSByZXR1cm4gaXMganVzdCBhbgpv
cHRpbWl6YXRpb24gYmVjYXVzZSBmb3JlaWduIGVudHJpZXMgY2Fubm90IGhh
dmUgbGV2ZWwgPiAwLgoKVGhpcyBpcyBYU0EtMzY2LgoKUmVwb3J0ZWQtYnk6
IE0uIFZlZmEgQmljYWtjaSA8bS52LmJAcnVuYm94LmNvbT4KU2lnbmVkLW9m
Zi1ieTogUm9nZXIgUGF1IE1vbm7DqSA8cm9nZXIucGF1QGNpdHJpeC5jb20+
ClJldmlld2VkLWJ5OiBKYW4gQmV1bGljaCA8amJldWxpY2hAc3VzZS5jb20+
Ci0tLQogeGVuL2FyY2gveDg2L21tL3AybS1lcHQuYyB8IDcgKy0tLS0tLQog
MSBmaWxlIGNoYW5nZWQsIDEgaW5zZXJ0aW9uKCspLCA2IGRlbGV0aW9ucygt
KQoKZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni9tbS9wMm0tZXB0LmMgYi94
ZW4vYXJjaC94ODYvbW0vcDJtLWVwdC5jCmluZGV4IDAzNjc3MWY0M2MuLmZk
ZTJmNWY3ZTMgMTAwNjQ0Ci0tLSBhL3hlbi9hcmNoL3g4Ni9tbS9wMm0tZXB0
LmMKKysrIGIveGVuL2FyY2gveDg2L21tL3AybS1lcHQuYwpAQCAtNTMsMTIg
KzUzLDcgQEAgc3RhdGljIGludCBhdG9taWNfd3JpdGVfZXB0X2VudHJ5KGVw
dF9lbnRyeV90ICplbnRyeXB0ciwgZXB0X2VudHJ5X3QgbmV3LAogICAgIGJv
b2xfdCBjaGVja19mb3JlaWduID0gKG5ldy5tZm4gIT0gZW50cnlwdHItPm1m
biB8fAogICAgICAgICAgICAgICAgICAgICAgICAgICAgIG5ldy5zYV9wMm10
ICE9IGVudHJ5cHRyLT5zYV9wMm10KTsKIAotICAgIGlmICggbGV2ZWwgKQot
ICAgIHsKLSAgICAgICAgQVNTRVJUKCFpc19lcHRlX3N1cGVycGFnZSgmbmV3
KSB8fCAhcDJtX2lzX2ZvcmVpZ24obmV3LnNhX3AybXQpKTsKLSAgICAgICAg
d3JpdGVfYXRvbWljKCZlbnRyeXB0ci0+ZXB0ZSwgbmV3LmVwdGUpOwotICAg
ICAgICByZXR1cm4gMDsKLSAgICB9CisgICAgQVNTRVJUKCFsZXZlbCB8fCAh
aXNfZXB0ZV9zdXBlcnBhZ2UoJm5ldykgfHwgIXAybV9pc19mb3JlaWduKG5l
dy5zYV9wMm10KSk7CiAKICAgICBpZiAoIHVubGlrZWx5KHAybV9pc19mb3Jl
aWduKG5ldy5zYV9wMm10KSkgKQogICAgIHsK

--=separator--


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 16:40:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 16:40:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88961.167371 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEajX-0001nT-9k; Tue, 23 Feb 2021 16:40:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88961.167371; Tue, 23 Feb 2021 16:40:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEajX-0001nL-6c; Tue, 23 Feb 2021 16:40:23 +0000
Received: by outflank-mailman (input) for mailman id 88961;
 Tue, 23 Feb 2021 16:40:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hUnP=HZ=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1lEajW-0001mi-En
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 16:40:22 +0000
Received: from aserp2130.oracle.com (unknown [141.146.126.79])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 318447d9-dfcd-43f6-a6db-dff6599f3f2c;
 Tue, 23 Feb 2021 16:40:18 +0000 (UTC)
Received: from pps.filterd (aserp2130.oracle.com [127.0.0.1])
 by aserp2130.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 11NGSebe161267;
 Tue, 23 Feb 2021 16:40:14 GMT
Received: from aserp3020.oracle.com (aserp3020.oracle.com [141.146.126.70])
 by aserp2130.oracle.com with ESMTP id 36vr622cgh-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Tue, 23 Feb 2021 16:40:14 +0000
Received: from pps.filterd (aserp3020.oracle.com [127.0.0.1])
 by aserp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 11NGUa3l070492;
 Tue, 23 Feb 2021 16:40:14 GMT
Received: from nam12-dm6-obe.outbound.protection.outlook.com
 (mail-dm6nam12lp2174.outbound.protection.outlook.com [104.47.59.174])
 by aserp3020.oracle.com with ESMTP id 36ucaykyrw-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Tue, 23 Feb 2021 16:40:13 +0000
Received: from BYAPR10MB3288.namprd10.prod.outlook.com (2603:10b6:a03:156::21)
 by BYAPR10MB3654.namprd10.prod.outlook.com (2603:10b6:a03:123::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3868.32; Tue, 23 Feb
 2021 16:40:12 +0000
Received: from BYAPR10MB3288.namprd10.prod.outlook.com
 ([fe80::f489:4e25:63e0:c721]) by BYAPR10MB3288.namprd10.prod.outlook.com
 ([fe80::f489:4e25:63e0:c721%7]) with mapi id 15.20.3868.031; Tue, 23 Feb 2021
 16:40:12 +0000
Received: from [10.74.102.180] (138.3.200.52) by
 SA9PR10CA0019.namprd10.prod.outlook.com (2603:10b6:806:a7::24) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3890.19 via Frontend Transport; Tue, 23 Feb 2021 16:40:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 318447d9-dfcd-43f6-a6db-dff6599f3f2c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : in-reply-to : content-type :
 content-transfer-encoding : mime-version; s=corp-2020-01-29;
 bh=qsgfKHA7SbpMIqs/sjy9zD05l1oV79scyX26D+kZzhM=;
 b=ceIxA7rB5NwfhGsmtQAVUxN/BKHcERHZZDcRsqhzGTw1lI3SboFIiESSyxAPw1ve61wc
 Nvx0lgaXhN9B0VSByIaX9cgevwSYP6cYxGvZvYDzyFA8YM2qk37+NzkvfjfFUuEoin/y
 ZWYoDpOKpMSE+RvTkaBCL16Myltujz2aDZb+rJg8KfqdVJ6pk86aO19y3qMLDTuQkI9I
 hUsi5rCiFwRORMHj/+T1EkN6qm7LIbo9/E3mR10r78t66u9aFIAVGoROjOeeuag0/566
 MSE0YOnSdliZ/hBPD8w9CjSlr2ESTeDf2e7QtNbxFbixOu0ITQqsnGoTNY7thCre9m5B PA== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=n81Hz2ILaO21q6k9MQ65Zy047Vg8h7d1gaQ6XNZ1D/EUuaFA+79A6UpLKZJVdn0XzEdtv7jwXEVxY4X7y+HREA53rYEIfLLlji+hl5CPqog/+5hufyaxsU47NYGL3ocgSB+EVxw0kG9v5177pPk0MYpSxO6/Zzo/ZJDcpm0BGr7Tb8xscKrpSMOCFbXhpz5Fsu8BeyHhRvRB17YRA960LfG5o59ZCzwUYU27tQxRF5D9gTaXfxXtlatKnsLFbpr3cxQaEKcMYbQoBq7ic3utsRBL2USnL7g85E2Y0d/ZjZKez1hmts3M7Op/hj2r7+5+ftUZecwyPqrIrVdAXjaCpg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=qsgfKHA7SbpMIqs/sjy9zD05l1oV79scyX26D+kZzhM=;
 b=AVpVKa78secgxHASsQbqTGcCbG0P+WleUY1a2TL0FsWHn1yiQp7I4i+B+Nt5mE1eNPmoVLzY56gsBC7JkNPu87FB3+GcvsIPmg0zfZhCq7ILqi0bGXSFf551CfLMJ/wuWpe8WZpBDRm1oFDrbRL0PEyYMwXSBUbZ67X8HYVBGvYHsPXlGBagDgMgRK8SngT9B0KwN7uDIImR0SIQVukGMl6Hgnxogp43hI9EfbA1y2sFco3dz0wVP/xyhF9oa9m0w/JL0TAMXNK8WM34eRhN/Yi2Fijvg0IIY3dpYBvmrwXFW4E905SWrRvRUojBkBEbY2+MwtXlGQ34YHlYfkcvfg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=qsgfKHA7SbpMIqs/sjy9zD05l1oV79scyX26D+kZzhM=;
 b=UE5qIeHIPYNrRLBWEOFuVzHnD0pfhxuKmWSKQcZkB6OTS8VZYtHbAZCGGke/WcIJOYOD5UFRyns9HVHVMcuoBMXK47guLkW6JDRyHbmCHaZRwQ9ZYKBS4uJJocDzCGyqYE5L9G50cGHSAohzBIVvN8TfHVJU4aqf3UwX/W0JWYc=
Authentication-Results: intel.com; dkim=none (message not signed)
 header.d=none;intel.com; dmarc=none action=none header.from=oracle.com;
Subject: Re: [PATCH v2 2/4] x86: Introduce MSR_UNHANDLED
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>, andrew.cooper3@citrix.com,
        xen-devel@lists.xenproject.org, iwj@xenproject.org, wl@xen.org,
        anthony.perard@citrix.com, jun.nakajima@intel.com,
        kevin.tian@intel.com
References: <YC5GrgqwsR/eBwpy@Air-de-Roger>
 <4e585daa-f255-fbff-d1cf-38ef49f146f5@oracle.com>
 <YDOQvU1h8zpOv5PH@Air-de-Roger>
 <ce2ef7a3-0583-ffff-182a-0ab078f45b82@oracle.com>
 <307a4765-1c54-63fd-b3fb-ecb47ba3dbca@suse.com>
 <YDTMIW5vBe0IncVR@Air-de-Roger>
 <2744f277-06fb-e49f-2023-0ec6175259dd@suse.com>
 <YDTyScmud26aiaMi@Air-de-Roger>
 <172dfcab-9366-47f0-9c56-2202a8b7a7db@suse.com>
 <3ae19e76-2543-28f4-9c7f-697ccf9ed202@oracle.com>
 <YDUpF8gf6fbm4ouQ@Air-de-Roger>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <a49b03dd-19c5-5df1-81a0-0d8d9e84156b@oracle.com>
Date: Tue, 23 Feb 2021 11:40:07 -0500
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
In-Reply-To: <YDUpF8gf6fbm4ouQ@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-US
X-Originating-IP: [138.3.200.52]
X-ClientProxiedBy: SA9PR10CA0019.namprd10.prod.outlook.com
 (2603:10b6:806:a7::24) To BYAPR10MB3288.namprd10.prod.outlook.com
 (2603:10b6:a03:156::21)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 27aec047-c484-41a5-d1c8-08d8d819b082
X-MS-TrafficTypeDiagnostic: BYAPR10MB3654:
X-Microsoft-Antispam-PRVS: 
	<BYAPR10MB36549BCAB2F2A9BDA2A505388A809@BYAPR10MB3654.namprd10.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 
	gdhW4HHhCijPfdoau5vt7O0XYlUfdz9C3fInGkmb0iFx7SGa+GnbPIwnaGB280i0PzMdP8k5+BI4+Sh+n+SVyWaUMUBoqOVp7rdLnhR/v5MmMrwUv1A0gUoxrhCsbmxrQ15/T7/So1MzorcDNFnNNe0aUChjzUxolIEtCKAkOwIOMnfoMec8CqOVfVLfoVLhxLiOjfnye4lld95KJ4dbepcn5IK//DZnu+rAYfx4quTIsWnK7A3ngzirCTCfY7/53m8wqVmrDrX6Og7jpidzTyRwNsGpug9Nkrs7QYurLlltrvrLdPXnaWyJVIDJxCXK4LWO7PGhof47ZZ02MOdHAgcaJBNLvH0PRuV2KKrxUdK2rfatUV2v7InDYP/dTKPsewH+NE/VMXUz+DtkOSXRhNeE/lJwJaX0ZX4STXyjUatZxhRg87McO91+0t8iXlVbbbLqSf99RrxCbjIbfj9rb047zjHvBpyD/owofu0mbLkQ2Gvppl2mr1zOfAsYNr8t0A2nahSjBz/l6aGoTg66udGq0yOuOmrmYNxU6G2JcZzJao/b+ABhnibmfComgW0cCMesLfsDWhq3qNxDpZrqfqQAzNYpqYa+8qwHh6UGfjU=
X-Forefront-Antispam-Report: 
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR10MB3288.namprd10.prod.outlook.com;PTR:;CAT:NONE;SFS:(39860400002)(366004)(396003)(136003)(376002)(346002)(2616005)(83380400001)(6666004)(6486002)(8676002)(53546011)(26005)(4326008)(8936002)(31696002)(16526019)(66556008)(186003)(16576012)(31686004)(5660300002)(6916009)(66946007)(86362001)(316002)(478600001)(36756003)(2906002)(66476007)(44832011)(956004)(4744005)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: 
	=?utf-8?B?Q045dlh2bSttRzVxT0p4alhhUklJRlJROGxZMEQ3ZS9BSEdHNnM3M3U0RFUz?=
 =?utf-8?B?M1lmekw4SXd5NEp6YlRPQU9Gb0NVMFVBZms0eTVaWjBnQUZGcVlNdEErZ0cr?=
 =?utf-8?B?dlNQWTFSbVVsdE5hUUttNUt5Qm1ZY0pWazB5UVNtbUpGbXVQSmM2Um1NTW5Q?=
 =?utf-8?B?aDFMeGVhcEFFdmhveUc2NW0wQzhIMGV4ZXovUHh0ZWdqSXBpU2dWZ1RkeXdu?=
 =?utf-8?B?OGcxNnltZUhzbWV5aXUvOHlvVWJpNVN1N0REYm9QMHFneGh6Y0dtNmJYMkdB?=
 =?utf-8?B?RWZKazYvenY1aElUSEZHTS9TTjQyWlorK293V0tvSjNUMjRldWFBSDJ0Vmow?=
 =?utf-8?B?eWRTckhoQ1VzcFFxUktkT1RtaEJFQkRJclREMitDRUtHcGR1cUJUZFBCUFkz?=
 =?utf-8?B?RnhNY1BjeW9LOTlMazJzYkhKWUpnb29uU1hrSXVubG10RCtIN2ZSSXRrZm9W?=
 =?utf-8?B?UlhyTjB6Q2FVUTBucUpXU2hjaStNNE5MQlpOZHFGTnhHTSt1NjBIVDh1WjF0?=
 =?utf-8?B?NFpXUUg5NHByK0pDVTFSOEsxOHliK1BSdWlRZGFad3hVbTRjVkdVOWVIL0ZV?=
 =?utf-8?B?emJiWmxjOHNjUnFaR1NhUWNWeVNpUndqcEp3QlN2TGJWTU1rdGdPWTZmVndr?=
 =?utf-8?B?S1J2QUtiTmJoRVhrbUhnZ0ZpOUlYK2VSdDdxTFZ0OHlZZkJaeW5HMWMvV2hm?=
 =?utf-8?B?Y1FIbVpMdlBubDIyTjU0TjlhcEtYWkdTOURtUy92d0FaOGpTQ0h5aDZRanU3?=
 =?utf-8?B?UUJXQTlNeEptUXJmeTJZcGRmMUxTSGFQamNPZHN3b29yTTJRQjZKUU9XY3lT?=
 =?utf-8?B?WjZGaU9Fd0lIVjkzNmVvM2V6dk00Y3JKc25LeHhIUXNKbE5SMWp0c0g2aEh5?=
 =?utf-8?B?UEMvSG4vZE9UdTJJUlNTdnNwQmhGOEludVNoL2s4Y3YxYUpXK090UjkvMlFD?=
 =?utf-8?B?ZWxkQS9ScFh0Q2YvSUJUNWZZNVpUL1ZHay9GQkRoSXhTTmNuUnhYamxBeDla?=
 =?utf-8?B?UWpZMjlsS21UZDhZWXk1RVRiZE1hZjQxSnlTWEpNN2gxckNRQXpUYzNodGVP?=
 =?utf-8?B?dEJvZjdlZVlLN3pnRmNic3F6RTdwTEZMcE1mNGkzYlo1YVJQb0cvU2FXcU5u?=
 =?utf-8?B?TDNTMy9yY0VHRHhDUFkrYXhIa0tyZjE2VEpsczhFUlE5d25vbGFnVDF3aGJw?=
 =?utf-8?B?ZmRpNzl6OWYrUmhFaU5HbGZwZDdSNi9RK2pVdXEveWNLcEl1cEQ1TXAzUEhv?=
 =?utf-8?B?a3NCcWlZdDhyQTdoYTNVcEJ5RjM4b3c0ZHcwRllvb1dNa0p1YUl3WmYyY3BS?=
 =?utf-8?B?ZmNoT3ROSktHOTdSNnQ5UHhwOWtOcjF6YlNWR0NzMXlrNFBsOTNWYjh0dGVa?=
 =?utf-8?B?MTNYZzRLNkF2V0xpQTdUSGNMdlIzbWIyY0xoYkdUbHE1K05KdFJNckpBazJF?=
 =?utf-8?B?QkdoM0pmQkQ5Kzl6TmpnRTVRclFpdEVLVG5paDdJOUdvMUUvb1lGYklDN1Vu?=
 =?utf-8?B?ZUJFZkN1b0E4QWUra3BLVjhUaE5Jd2ljeXZQdWdkYXUyd3gvNjM2VndOOFZY?=
 =?utf-8?B?TmlONFRwalBkMlNUOGo5bC9CY2o5UE94WXN5cVpGRGFoRUNEWHh2N2M5SW9Q?=
 =?utf-8?B?Z2phU1QwdVlWOFJEVWM0Q2JGbzM4ajdkYjVOdXRRZWRQVzJCM3VHYXg5UVE3?=
 =?utf-8?B?WHJNWkxHR0hOOWFVb0EzOHgzMkt0dTIvcjJrVXBuTnEyVlFhc3MrMHd6N2k1?=
 =?utf-8?Q?16abPgHitArTyY3EmZqaXNXYmWFX90yR6QJf9bH?=
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 27aec047-c484-41a5-d1c8-08d8d819b082
X-MS-Exchange-CrossTenant-AuthSource: BYAPR10MB3288.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Feb 2021 16:40:12.1419
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Rw7iSirlfsgoOnRPzVpPH8NMmCgNScGSFGuSQP8kHNCCe2TdIOpzcsuY72a4QZz0HmIhsEVAEzqcrVhYZt0QehPJJgA+Xu5utd0CPjQAlXs=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR10MB3654
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9904 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 spamscore=0 suspectscore=0
 malwarescore=0 mlxlogscore=999 adultscore=0 bulkscore=0 mlxscore=0
 phishscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2102230138
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9903 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 suspectscore=0 bulkscore=0
 clxscore=1015 mlxlogscore=999 lowpriorityscore=0 phishscore=0
 impostorscore=0 adultscore=0 mlxscore=0 priorityscore=1501 malwarescore=0
 spamscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2102230138


On 2/23/21 11:11 AM, Roger Pau Monné wrote:
> On Tue, Feb 23, 2021 at 10:39:48AM -0500, Boris Ostrovsky wrote:
>
>> Before I do that though --- what was the conclusion on verbosity control?
> Ideally I would like to find a way to have a more generic interface to
> change the verbosity level on a per-guest basis, but I haven't looked at
> all into how to do that, nor do I think it would be acceptable to
> put that burden on you.
>
> Maybe we could introduce another flag to set whether ignored MSRs
> should be logged, as that would be easier to implement?


Probably.


    msr_ignore=["verbose=<bool>", "<range>", "<range>", ...]


-boris


>
> I think in that case we could enforce that either all ranges have the
> flag set or not, in order to prevent ending up tracking two different
> ranges of ignored MSRs, one that triggers a message and another one
> that doesn't.
>
> Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 16:41:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 16:41:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88965.167383 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEakD-0001wd-Nq; Tue, 23 Feb 2021 16:41:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88965.167383; Tue, 23 Feb 2021 16:41:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEakD-0001wW-Km; Tue, 23 Feb 2021 16:41:05 +0000
Received: by outflank-mailman (input) for mailman id 88965;
 Tue, 23 Feb 2021 16:41:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xmUX=HZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lEakC-0001wK-CE
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 16:41:04 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c44e8d80-86a2-4c9a-b623-4b257548d2f4;
 Tue, 23 Feb 2021 16:41:03 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 99159ADE3;
 Tue, 23 Feb 2021 16:41:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c44e8d80-86a2-4c9a-b623-4b257548d2f4
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614098462; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=NMSoGkZ7tOncwXKdvkLdYenM0RdMgXI+RBMm6qRER6I=;
	b=W+kyOdBWY8kJxh7KzhKbLljqa3O1QsVa3t762qMv69AocuhZs6IOnP+e5VvrqhegOb3kys
	VVNcbsfPvfG5Z2GQE+DCm6fhmjqpWJCh5TtlbyofVTO2mZ8HFPZwElIWc83iMXSttb2ES/
	yGh9982qAKeM23RgcYvnFxykbPBs5Vs=
To: "oleksandr_andrushchenko@epam.com" <oleksandr_andrushchenko@epam.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "dri-devel@lists.freedesktop.org" <dri-devel@lists.freedesktop.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] drm/xen: adjust Kconfig
Message-ID: <54ae54f9-1ba9-900b-a56f-f48e2c9a82b0@suse.com>
Date: Tue, 23 Feb 2021 17:41:03 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Having selected DRM_XEN, I assumed I would be building the frontend
driver. As it turns out this is a dummy option, and I had really not
been building it (because I had DRM disabled). Make it a promptless
one, moving the "depends on" to the other, real option, and "select"ing
the dummy one.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/drivers/gpu/drm/xen/Kconfig
+++ b/drivers/gpu/drm/xen/Kconfig
@@ -1,15 +1,11 @@
 # SPDX-License-Identifier: GPL-2.0-only
 config DRM_XEN
-	bool "DRM Support for Xen guest OS"
-	depends on XEN
-	help
-	  Choose this option if you want to enable DRM support
-	  for Xen.
+	bool
 
 config DRM_XEN_FRONTEND
 	tristate "Para-virtualized frontend driver for Xen guest OS"
-	depends on DRM_XEN
-	depends on DRM
+	depends on XEN && DRM
+	select DRM_XEN
 	select DRM_KMS_HELPER
 	select VIDEOMODE_HELPERS
 	select XEN_XENBUS_FRONTEND


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 17:04:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 17:04:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.88996.167395 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEb6y-000456-OH; Tue, 23 Feb 2021 17:04:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 88996.167395; Tue, 23 Feb 2021 17:04:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEb6y-00044z-IX; Tue, 23 Feb 2021 17:04:36 +0000
Received: by outflank-mailman (input) for mailman id 88996;
 Tue, 23 Feb 2021 17:04:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lEb6x-00044o-L1
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 17:04:35 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lEb6x-0007kb-IM
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 17:04:35 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lEb6x-000551-HO
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 17:04:35 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lEb6t-0004Su-UR; Tue, 23 Feb 2021 17:04:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=NgX2UxQvb9zaMK85kD3n/hDVPGirJsk/0wKJ9pvrzZM=; b=3L4FphfemeMZsDRUdP80kp3glP
	HvbuCPx9RZtQXzh/LLWm3nEPnaVtX27FR81XSzI/9Z5yAnRpE97wIcG7OsuiiYklnjpuqjqsHgb0R
	pS8FABdnv9u2NV+Ru4gbvBONIKVPc/rp1pni/ka9VqBUWhR6iMTSqdiDeNBTVAkuOz5E=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24629.13727.682287.671087@mariner.uk.xensource.com>
Date: Tue, 23 Feb 2021 17:04:31 +0000
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: <xen-devel@lists.xenproject.org>,
    Andrew Cooper <andrew.cooper3@citrix.com>,
    George Dunlap <george.dunlap@citrix.com>,
    Jan Beulich <jbeulich@suse.com>,
    Julien Grall <julien@xen.org>,
    Stefano Stabellini <sstabellini@kernel.org>,
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH for-4.15] cirrus-ci: introduce some basic FreeBSD testing
In-Reply-To: <20210223155353.77191-1-roger.pau@citrix.com>
References: <20210223155353.77191-1-roger.pau@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Roger Pau Monne writes ("[PATCH for-4.15] cirrus-ci: introduce some basic FreeBSD testing"):
> Cirrus-CI supports FreeBSD natively, so introduce some build tests on
> FreeBSD.
> 
> Cirrus-CI requires a GitHub repository in order to trigger the
> tests.
> 
> A sample run output can be seen at:
> 
> https://github.com/royger/xen/runs/1962451343
> 
> Note that the FreeBSD 11 task fails to build QEMU and is therefore
> not part of this patch.

Release-Acked-by: Ian Jackson <iwj@xenproject.org>


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 18:00:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 18:00:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89036.167419 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEbyz-0001Pv-4j; Tue, 23 Feb 2021 18:00:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89036.167419; Tue, 23 Feb 2021 18:00:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEbyz-0001Po-1P; Tue, 23 Feb 2021 18:00:25 +0000
Received: by outflank-mailman (input) for mailman id 89036;
 Tue, 23 Feb 2021 18:00:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UeLE=HZ=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lEbyw-0001Pj-V0
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 18:00:23 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6c6e1fab-a222-4865-85f8-74b7c9e2bf7a;
 Tue, 23 Feb 2021 18:00:21 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6c6e1fab-a222-4865-85f8-74b7c9e2bf7a
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614103221;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=HL+rCgnhWWVwH2c8OglKjw4FmFXDlZ5mZgFx2gb4vBw=;
  b=I4toG335c1pcxCxtp6r5afHzXeuM347Iwe7dDc10giElWAWg2C5VWrGE
   CCwcFTYp4UcRDcKG0eEwEu3KNb10g16p/zZipwAShgz/Ag+kggxzoCKeS
   COzRPCwfmwGP+ba3KlYVITUNVy50jRN9Xa0FOEbrNxaKr7wsYDPTK027v
   o=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: PoaUFeIxALenvVms2RSoolzkrsbSf/qJQ7OsrsChX4BlNJVXwk7axCSCKK1rBRPWvCiK8QW8LK
 oM93PIhzilDKgIHPAK7W3auVKdVO80hVYrqiNDXlonyH5n5M1GTBEdORQpPF+6hWS6NHOZZjrH
 puEmmNsnDPlpfHClyXIxfz6AS4BHXlII/g8OJb+/6hlavAfag59Ed+CRb+V1yhZuoUGimfrErP
 /54niccLcWymwufyxq/M42x6wN635B+i5qhBYPtX3ZB9MVvA0YfDFkPAWfOPXyKPYw0kGWdqNh
 NNM=
X-SBRS: 5.2
X-MesageID: 37871554
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,200,1610427600"; 
   d="scan'208";a="37871554"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=eOhDpiiPiMPH6C2Aseu3uJzl+ZuRQUMixrTL+SBYZxo3Q0I2uEAvEybJzwDxm61JKuZDS5KOjgnC7bR0TOR8UHQ3gItA46Kt8NuPsiyJPx6MnmGgzXQu2CJkERfW2RcSOxhArOAxs4tr4rf+WIYdJrmOq46Ik+v3/IktcdRETv5LlQAG716A22WaHBKsCYg+wchZ+dvezIxeivfu3z9yJM7i1eC53R25WVLbtlKOjZqrW17YlR+KVsjtkaGaBe6sQnyu8CLwECjhM8gBBnGRFHRwW8uwpOSbYGsoNauasPJ2jO8CpeYdeftZDXQWJNQyniZ0Blam8hExh3uoStaz3Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=6L3TgQfo2UIqdfUdhFuk2vIahm8UXlonLYTKGfKEFRU=;
 b=O1jzwtJ4wV8jJwJ0/kQuhnNd+O+xxIRLQwwYkp7/FgNSJpNWPj5syXEW73StLD7kJV7XaVqSYwkE9aYEARrAE3sOVLa7YLXwlGoxh9htNC9DUG7Looyg2+MIsUFmLLFlDB1CKqFOy33hafOuIV6AQ1OS86cXWafm2lZhDEbay7b3/2TI539DihcSIQbmskO7E61gmROF+/cSI7uGBurfvzDdVEXRUgoHdBjVDYh2ViQq9wd/mneq+bOSOIKV+sQzFCB7jkulUCcrnwKbktn7TTH2GzxpaDDGCbQ2L70vSXI9kBPOWMRh+18ImDm+6oujdKWZL73yPIGhBQ1sjLq1yQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=6L3TgQfo2UIqdfUdhFuk2vIahm8UXlonLYTKGfKEFRU=;
 b=icgCTw3eHLIcBQOQSddvsnXfgOgQvktJ7hvHR5CL29sg9n9mo2KuVRald5WYp0VZA7JvY5wOJ8DRE41cOSD3potKJjEYzk23kS7Hn2dTOQqHqRyJEhVN80MpOJkrZC3SLqX4RGFKgu2RRXQsrf7VSeGddbQxRYxkrBGJEI7hKRk=
Date: Tue, 23 Feb 2021 19:00:06 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Boris Ostrovsky <boris.ostrovsky@oracle.com>, <andrew.cooper3@citrix.com>,
	<xen-devel@lists.xenproject.org>, <iwj@xenproject.org>, <wl@xen.org>,
	<anthony.perard@citrix.com>, <jun.nakajima@intel.com>, <kevin.tian@intel.com>
Subject: Re: [PATCH v2 2/4] x86: Introduce MSR_UNHANDLED
Message-ID: <YDVCpq5girn1Klh3@Air-de-Roger>
References: <4e585daa-f255-fbff-d1cf-38ef49f146f5@oracle.com>
 <YDOQvU1h8zpOv5PH@Air-de-Roger>
 <ce2ef7a3-0583-ffff-182a-0ab078f45b82@oracle.com>
 <307a4765-1c54-63fd-b3fb-ecb47ba3dbca@suse.com>
 <YDTMIW5vBe0IncVR@Air-de-Roger>
 <2744f277-06fb-e49f-2023-0ec6175259dd@suse.com>
 <YDTyScmud26aiaMi@Air-de-Roger>
 <172dfcab-9366-47f0-9c56-2202a8b7a7db@suse.com>
 <3ae19e76-2543-28f4-9c7f-697ccf9ed202@oracle.com>
 <a4d2f7f5-a8b0-eeb5-dfaa-539784c67a87@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <a4d2f7f5-a8b0-eeb5-dfaa-539784c67a87@suse.com>
X-ClientProxiedBy: PR0P264CA0282.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:100:1::30) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 681ff6bf-6ca1-4065-6d14-08d8d824e165
X-MS-TrafficTypeDiagnostic: DM6PR03MB3579:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB357929609585518A683173FA8F809@DM6PR03MB3579.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: v92NNcoVXX9Hxpd1tNiJ0OZWeMVooJB7YjTYiTpDXPwShCOvyZ7+3EUNNhUEEiDBbIS+Oi1l61WeEcl7Lr9e16rbYYiAyDgUsi/bM/TKCZxvan85EWMw+XnATzte4nOFvRzwYrk8n5DWd4+kXNe4/v9DU1h9ryL1maACPU0H0lfZey0GBQP9zeTwSttQbbOswzbUnR8Adp5b80A7DyeKShoqgzEunw+KMJrlzCqfevke8Jx5J/zul73aekDirp13mX6VgGaku5bio4RE/fIdXaq3BmN1cLJpyFlrADCycBaVBRyWM7Xs3LO30dXV1AoflSzMk94b6Kd+eSbeLkskPOwLcebUfb70X5uh1MyHIAdB5KfkLXHBVxDfsI53IMFvy5niDlxzI/EVpOjijDn+7Zg84BGP36qoC9iojkYqoBVOOukc05ejfgHc1WQeJyj/2NdMudRvt9UOcRQS/U0pvAGz1ouj3+LjDkhEYN41dPpSweHWs75MGJDJIFwSUREMXPDNsJlPH99JjTtYrCIvKQ==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(376002)(366004)(396003)(136003)(39860400002)(346002)(6496006)(956004)(86362001)(2906002)(53546011)(9686003)(26005)(85182001)(16526019)(83380400001)(5660300002)(8936002)(66556008)(66476007)(186003)(66946007)(8676002)(316002)(6916009)(33716001)(6666004)(478600001)(6486002)(4744005)(4326008);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?SXNMcC9PeDJIbzI4NG51MjVMMWxSTS80S0RGcURFQVc1UTlHNUQ4SWxzQW5Z?=
 =?utf-8?B?SU9pbHg0ZVQ0L2Yvem55Q2JtN1dYRm9MVWhnWVBKN3JFZHhWcmUxWkZzRVR5?=
 =?utf-8?B?ZDVrVEQ4SWpUMDNOSHk1ZHpEWTVGWDBkMlVaMUc0T0xQQ2RlQldHMmtuMUVE?=
 =?utf-8?B?TEgyZUFWcUZhRnFRcS9ocXpyWkxRRU0rVXFXSWFuQjZxVTVCUFhQbThZS0tZ?=
 =?utf-8?B?VnY3VVJlVTZ0Uk9MbzNKenFBYVJ3NWFndkJPU0NQZ2I0eXVnUCsxb0EwdlJv?=
 =?utf-8?B?bFVoRkQ5WDhwMDlkTVo5YzVnV2xFQmtUbHBkZWFuQVplNmhmRlVqY1o5MWMr?=
 =?utf-8?B?TG44cVNmVUIzSjAzTjUzdjNoMUZpVzRGMlVscVMxMWI5QmVBU2loc3dLUTVa?=
 =?utf-8?B?ZHRNbjFneVh3RkFDRi9TaVlHcmRGMEhvL0dSWUNLajBlTEQ1THJ3cjFKektQ?=
 =?utf-8?B?K3lHOFV2bm41M3ZIWVErT0M0dktvL21raUNTYUFLalgyQW1naVpMNU9sdWNt?=
 =?utf-8?B?NnIwTDRyRGxJa3RQY2hJc3IvWVg1TW1vSDdSRHlrTlQ5L2Jjd3gzcFN4NFMx?=
 =?utf-8?B?OWpSMnZ2TXV0UnByZ3cwZ3JlcUtCZUc0Mk4xMFNKTFZsRW5XbFVvODR4QWRX?=
 =?utf-8?B?RkE0M2RLQXlWdUVzb1dLcHpPb0E5UVU0eW5hbWNUdmtmMyt0V0x0REJIdm13?=
 =?utf-8?B?QTVZRkc1MTA1OFlXcmxTWmttVDdIMjJ2SU40VHM0VnNaQmNyZFVEL28ycms0?=
 =?utf-8?B?WU83WGgvNEZhN1N4bTJWUUpoa1VoM0dsSHYvYnIvb0lZQUowS09qbkcydzNx?=
 =?utf-8?B?R1o4bjVWUlVMY0JPdm1zUk9hQTF5Nm42dVEyVXFKczVTZjRSSWFvVXRhZ2lt?=
 =?utf-8?B?MWE1NFhIa2RyZHBrQ3l3Q1pON0x1WTI1Rk0vdkk4TkVTR2tqM0ZKQmNEUWVl?=
 =?utf-8?B?VXphdGZpQ3h3c1hFSVFrUnFneFc4SVJ1NlJoNHE5VFlLU3NOMlJhZFEzRjVX?=
 =?utf-8?B?aFozZ1hhYWtwRThBOWdPcFp0bThnQ2o4aDFPUjJKWXBTQkRUMDU4SFl1cVgx?=
 =?utf-8?B?R2s1MjQ3QWZtK2trZnZUdS9paW5mRnVlRkhUQk9OVWxoTWtkMTUrVDYxZkln?=
 =?utf-8?B?L0Z6ekcyMXdqZGNKM0JWT2NKeVFrV3lBY0h1YStQNUxhY0dnVHhZdWVNSXhm?=
 =?utf-8?B?S0tKeXBWV3Vyek55d1JYMzg3cDI1dFAveU02djVseFA3bXBjQWZVb280bXdL?=
 =?utf-8?B?NzlzWTVPM3M4NFYzcEZQOHBLQmwrRER3OFhJZHVyZFlTY0RibXJoQ2YxM3dZ?=
 =?utf-8?B?eWRWcjd3MlVqVUY4emsrZTFOWXNkb3lpZWlwcmhYZGhHbFU5Nk1VcTg1RVhs?=
 =?utf-8?B?ZHlhdjdHTHdWK1VBdFdKbTZQK2RheWxTQ3N3ZWFmaWdzUVYwcnVWeTBES2JX?=
 =?utf-8?B?VmE1bmZsWkFJMFprNis5dDdyc3JwM21uc3FpcG8wQWQrMGxSTmc0Yk9VSzFn?=
 =?utf-8?B?c2preXBBc2JyQnRKR0NxS01jMC9sZXcxeEdTZmpsVmlod2ZkRWI3SjZGMjBl?=
 =?utf-8?B?VmwxM1MrdjlhR2pKZzJzN3RqdklwV0NMTzlIYzQ4aTVQaUliTEFTOFV6Z1do?=
 =?utf-8?B?TWZvUEQveEZRSjJkUFdNQm9yOHMyNVREQ1J6L1ZQZkpiYVRLbjRQMzJZM1hj?=
 =?utf-8?B?WVZLU2Q1YmdoOHFreWdCSU96ejAxSWYxbGZZT2ozbENXWlpSOE96bVdJcnow?=
 =?utf-8?B?RjVza3FqMlNBdHVLRFFIWXV3OHY0UzE3dUFQNnJwSWQwVmpSNUtyMjhxcVJM?=
 =?utf-8?B?WWI5cDk0RG9oeG5FZTJpQT09?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 681ff6bf-6ca1-4065-6d14-08d8d824e165
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Feb 2021 18:00:18.6162
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: K+o3stInkRRBCyRs19VAWZz/18/s6fjDTKgVuXfRb2Guh9sWLzggjn9htyJiXAXNegMzdbGMETTajelaAmEHtg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB3579
X-OriginatorOrg: citrix.com

On Tue, Feb 23, 2021 at 05:10:16PM +0100, Jan Beulich wrote:
> On 23.02.2021 16:39, Boris Ostrovsky wrote:
> > Before I do that though --- what was the conclusion on verbosity
> > control?
> 
> Not sure - afaict the conclusion was that we still don't really
> agree. Roger?

As I said in my reply to Boris, I would really like to have a more
generic way of doing this kind of per-domain verbosity control, but I
don't think it would be fair to ask Boris to implement that. Xen
likely needs a way to report issues with MSR accesses on a per-domain
basis in 4.15, so going for an MSR-specific solution would be
acceptable given the time frame. I think Boris made a proposal for a
solution in another email; let me go look at that.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 18:02:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 18:02:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89039.167431 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEc0e-0001Y4-Hm; Tue, 23 Feb 2021 18:02:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89039.167431; Tue, 23 Feb 2021 18:02:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEc0e-0001Xx-EW; Tue, 23 Feb 2021 18:02:08 +0000
Received: by outflank-mailman (input) for mailman id 89039;
 Tue, 23 Feb 2021 18:02:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEc0d-0001Xp-TN; Tue, 23 Feb 2021 18:02:07 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEc0d-0000LC-Lz; Tue, 23 Feb 2021 18:02:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEc0d-0001ry-Dp; Tue, 23 Feb 2021 18:02:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lEc0d-0002H2-DJ; Tue, 23 Feb 2021 18:02:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=FHKZ23TR0hVCNN67OYVtvISUjl429fpIpsP39URzQM0=; b=pKRMqnmaRQ5GMMFQQljah/OQx5
	COpZLTNYhewWIvHCyQJ8sftlpAeyXVgtJU99CVGonG3gterYVRI5UtozR4pxnbATmmi6GfoPpJ4Vo
	QsfofRueABETejScMEHf2vE82KilaXk6PsX0JwRYfa3egGEKIOtBEmV2SO1WmjYYRHKg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [xen-unstable bisection] complete test-xtf-amd64-amd64-1
Message-Id: <E1lEc0d-0002H2-DJ@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 23 Feb 2021 18:02:07 +0000

branch xen-unstable
xenbranch xen-unstable
job test-xtf-amd64-amd64-1
testid xtf/test-pv32pae-selftest

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Tree: xtf git://xenbits.xen.org/xtf.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  4dc1815991420b809ce18dddfdf9c0af48944204
  Bug not present: 2d824791504f4119f04f95bafffec2e37d319c25
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/159596/


  commit 4dc1815991420b809ce18dddfdf9c0af48944204
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Fri Feb 19 17:19:56 2021 +0100
  
      x86/PV: harden guest memory accesses against speculative abuse
      
      Inspired by
      https://lore.kernel.org/lkml/f12e7d3cecf41b2c29734ea45a393be21d4a8058.1597848273.git.jpoimboe@redhat.com/
      and prior work in that area of x86 Linux, suppress speculation with
      guest specified pointer values by suitably masking the addresses to
      non-canonical space in case they fall into Xen's virtual address range.
      
      Introduce a new Kconfig control.
      
      Note that it is necessary in such code to avoid using "m" kind operands:
      If we didn't, there would be no guarantee that the register passed to
      guest_access_mask_ptr is also the (base) one used for the memory access.
      
      As a minor unrelated change in get_unsafe_asm() the unnecessary "itype"
      parameter gets dropped and the XOR on the fixup path gets changed to be
      a 32-bit one in all cases: This way we avoid pointless REX.W or operand
      size overrides, or writes to partial registers.
      
      Requested-by: Andrew Cooper <andrew.cooper3@citrix.com>
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
      Release-Acked-by: Ian Jackson <iwj@xenproject.org>
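The masking described in the commit message above can be sketched, very loosely, as the following branchless helper. The constant and function name are illustrative placeholders; the real implementation lives in assembly (guest_access_mask_ptr) precisely so that the masked register is guaranteed to be the one used for the access:

```c
#include <stdint.h>

/* Loose illustration of the commit's idea: a guest-supplied address that
 * falls into the hypervisor's virtual range is made non-canonical, so a
 * (speculative) dereference faults instead of reading hypervisor memory.
 * HYPERVISOR_VIRT_START is a placeholder, not Xen's real layout. */
#define HYPERVISOR_VIRT_START 0xffff800000000000UL

static inline uintptr_t mask_guest_addr(uintptr_t addr)
{
    /* all-ones when addr is in the hypervisor range, zero otherwise */
    uintptr_t in_xen = -(uintptr_t)(addr >= HYPERVISOR_VIRT_START);

    /* Clearing bit 63 of such an address leaves bits 62..47 set, which
     * is non-canonical on x86-64; lower addresses pass through intact. */
    return addr & ~(in_xen & (1UL << 63));
}
```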


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable/test-xtf-amd64-amd64-1.xtf--test-pv32pae-selftest.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-unstable/test-xtf-amd64-amd64-1.xtf--test-pv32pae-selftest --summary-out=tmp/159596.bisection-summary --basis-template=159475 --blessings=real,real-bisect,real-retry xen-unstable test-xtf-amd64-amd64-1 xtf/test-pv32pae-selftest
Searching for failure / basis pass:
 159559 fail [host=godello0] / 159475 [host=huxelrebe1] 159453 [host=fiano0] 159424 [host=chardonnay0] 159396 ok.
Failure / basis pass flights: 159559 / 159396
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Tree: xtf git://xenbits.xen.org/xtf.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 f894c3d8e705fea9cb3244fa61684bfd8bdd1b2a 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 04085ec1ac05a362812e9b0c6b5a8713d7dc88ad 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://xenbits.xen.org/qemu-xen.git#7ea428895af2840d85c524f0bd11a38\
 aac308308-7ea428895af2840d85c524f0bd11a38aac308308 git://xenbits.xen.org/xen.git#04085ec1ac05a362812e9b0c6b5a8713d7dc88ad-f894c3d8e705fea9cb3244fa61684bfd8bdd1b2a git://xenbits.xen.org/xtf.git#8ab15139728a8efd3ebbb60beb16a958a6a93fa1-8ab15139728a8efd3ebbb60beb16a958a6a93fa1
Loaded 5001 nodes in revision graph
Searching for test results:
 159315 [host=huxelrebe1]
 159335 [host=godello1]
 159362 [host=fiano1]
 159396 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 04085ec1ac05a362812e9b0c6b5a8713d7dc88ad 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159424 [host=chardonnay0]
 159453 [host=fiano0]
 159475 [host=huxelrebe1]
 159487 [host=chardonnay1]
 159491 [host=albana1]
 159508 [host=elbling0]
 159526 [host=albana0]
 159540 [host=albana1]
 159559 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 f894c3d8e705fea9cb3244fa61684bfd8bdd1b2a 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159581 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 04085ec1ac05a362812e9b0c6b5a8713d7dc88ad 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159583 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 f894c3d8e705fea9cb3244fa61684bfd8bdd1b2a 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159584 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 6a1d72d3739e330caf728ea07d656d7bf568824b 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159586 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 336fbbdf61562e5ae1112f24bc90c1164adf2144 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159587 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 f954a1bf5f74ad6edce361d1bf1a29137ff374e8 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159589 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 2d824791504f4119f04f95bafffec2e37d319c25 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159591 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 4dc1815991420b809ce18dddfdf9c0af48944204 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159593 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 2d824791504f4119f04f95bafffec2e37d319c25 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159594 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 4dc1815991420b809ce18dddfdf9c0af48944204 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159595 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 2d824791504f4119f04f95bafffec2e37d319c25 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159596 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 4dc1815991420b809ce18dddfdf9c0af48944204 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
Searching for interesting versions
 Result found: flight 159396 (pass), for basis pass
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 2d824791504f4119f04f95bafffec2e37d319c25 8ab15139728a8efd3ebbb60beb16a958a6a93fa1, results HASH(0x5555cd8fe6a8) HASH(0x5555cd97c438) HASH(0x5555cd97f3c8) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05\
 e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 6a1d72d3739e330caf728ea07d656d7bf568824b 8ab15139728a8efd3ebbb60beb16a958a6a93fa1, results HASH(0x5555cd8e70d8) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 04085ec1ac05a362812e9b0c6b5a8713d7dc88ad 8ab15139728a8efd3ebbb60beb16a958a6a93fa1, results HASH(0x5555cd8e\
 bb90) HASH(0x5555cd8f4658) Result found: flight 159559 (fail), for basis failure (at ancestor ~76)
 Repro found: flight 159581 (pass), for basis pass
 Repro found: flight 159583 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 2d824791504f4119f04f95bafffec2e37d319c25 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
No revisions left to test, checking graph state.
 Result found: flight 159589 (pass), for last pass
 Result found: flight 159591 (fail), for first failure
 Repro found: flight 159593 (pass), for last pass
 Repro found: flight 159594 (fail), for first failure
 Repro found: flight 159595 (pass), for last pass
 Repro found: flight 159596 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  4dc1815991420b809ce18dddfdf9c0af48944204
  Bug not present: 2d824791504f4119f04f95bafffec2e37d319c25
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/159596/


  commit 4dc1815991420b809ce18dddfdf9c0af48944204
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Fri Feb 19 17:19:56 2021 +0100
  
      x86/PV: harden guest memory accesses against speculative abuse
      
      Inspired by
      https://lore.kernel.org/lkml/f12e7d3cecf41b2c29734ea45a393be21d4a8058.1597848273.git.jpoimboe@redhat.com/
      and prior work in that area of x86 Linux, suppress speculation with
      guest specified pointer values by suitably masking the addresses to
      non-canonical space in case they fall into Xen's virtual address range.
      
      Introduce a new Kconfig control.
      
      Note that it is necessary in such code to avoid using "m" kind operands:
      If we didn't, there would be no guarantee that the register passed to
      guest_access_mask_ptr is also the (base) one used for the memory access.
      
      As a minor unrelated change in get_unsafe_asm() the unnecessary "itype"
      parameter gets dropped and the XOR on the fixup path gets changed to be
      a 32-bit one in all cases: This way we avoid pointless REX.W or operand
      size overrides, or writes to partial registers.
      
      Requested-by: Andrew Cooper <andrew.cooper3@citrix.com>
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
      Release-Acked-by: Ian Jackson <iwj@xenproject.org>

Revision graph left in /home/logs/results/bisect/xen-unstable/test-xtf-amd64-amd64-1.xtf--test-pv32pae-selftest.{dot,ps,png,html,svg}.
----------------------------------------
159596: tolerable all pass

flight 159596 xen-unstable real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/159596/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-xtf-amd64-amd64-1     19 xtf/test-pv32pae-selftest fail baseline untested


jobs:
 test-xtf-amd64-amd64-1                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Tue Feb 23 18:02:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 18:02:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89044.167448 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEc1H-0001fn-0c; Tue, 23 Feb 2021 18:02:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89044.167448; Tue, 23 Feb 2021 18:02:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEc1G-0001fg-Te; Tue, 23 Feb 2021 18:02:46 +0000
Received: by outflank-mailman (input) for mailman id 89044;
 Tue, 23 Feb 2021 18:02:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UeLE=HZ=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lEc1G-0001fO-0R
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 18:02:46 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 459a4017-5390-4df5-bcde-c6c7e715c885;
 Tue, 23 Feb 2021 18:02:45 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 459a4017-5390-4df5-bcde-c6c7e715c885
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614103365;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=GpJR0TkEJWV1cw5g+bnSVakPaumVAn7676LSPUruwrE=;
  b=aFPoePWDfFux1IA+WsXYd3bXQgdOBkwh9vrSUHvKvbM7PpPmPYVJXyI1
   NBqDAtqwKsgy+jarOmjWMraJr7/YAMLCX9dYYsm/63B08Y42idoWcMwip
   t6NtFOqrkHiI4akq7FEF++iEmV+dU8CDDYKAZwQoazFhkb/MgZooLdhpB
   4=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: ghJ0LMW93T0ifyFrWdGQRyUzJ7uhp4XFvy+wIFGUJumOtySwv0PMW6qvzkLp3yxMW18JDWz/EY
 freL9mkA9aeWXa8zJbuqC1YY8mmp7+86AsMFlDoxUADwHoAMMHAJ1RYp/hc/UakD+6RK0uks8Q
 bWlNz368a2KuMlEMIp9Hu2Bg7LtvByu35t7OgUBOjXSSjIoWHa4t8YcHPFwYxASItCUAbriiUV
 hzkYuZdFvNAXM/aW2dTqhdlKLQQroq6W45r7BSPR+zK6WF5NUwN6gn6J4NMRi+tDpZvgwYMzL7
 mrw=
X-SBRS: 5.2
X-MesageID: 39241038
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,200,1610427600"; 
   d="scan'208";a="39241038"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Diqc2y/9XiiJ4n2zEokzojHCY8Hfm1u3kavDqsE5UjYJwaOHuL5QI6LNgkbd6khizf92HPLfU5AL6TC7WbYE+3pq/RPb/w2M+12i2XZVQfN0ZcN8n3/9ASZu8thB/ONVaompceWEUBzGYnC6hlk5K5ABGaM+8hD1Q100VcVf8ric5cs1vsbDowGLR+3PtA5FnVnUcVCBY356poj4AR35Qv4eM3usebVh27ZHH1QtE5xtoAIpyMTiUnZzVQZgd5qaIrDScouZPe4Pui7l7C23L0+DjIW2j6u5EGPSBIRLsMyR2PoD82w0b3LvO/7jAt7sad0VqDD+QXvyvpCRU0bCqQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=P0/Z9e88+p8B4E8ib4HiG4eVCKTuEgjgJnd6RaqIKfA=;
 b=UZ1QoB5VfHSjoYXEqHYVr988+C2O8PNUj5Lpur3gTI+Y/O5YXYecDOhb0LADgolVF5X7cubkvcncpo7MWm+7/veLmeOM6QzqKpWA5vM6TczI3XxhqTvyHQh+jHw8uhbZ8NasNqUmukAVLnm62eNhQMCX4xoFi9ugDU/PR0J4U9VxYj7wlYY5HBZLnYgTTiUbNcU1YT4P/97QOLlc4Wm2OHPJ3c6clYcuNW2Hg50RvXnserN0WsVQFVw+6PlJvNGd+yWeeZZ54KtdCgaz8QZVx25Oad6aAMbJRXX16nA1OOtFQHUI7vQBgyG0LYHmogJoF45Ia/iM3G0+E9+q2DnjeQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=P0/Z9e88+p8B4E8ib4HiG4eVCKTuEgjgJnd6RaqIKfA=;
 b=fyN/LSBJGOsR+YW+YkxOmsZceb+PeUii8wnaSMD4k8yfNb6hTfOA1DlxbT24aIxIlRMFsYgcKA6Oll/QUOeaMhSYpVy1kTXzCAJEyJx/ikISj21HxP+/aEyVVrRpBuA985rtTHGgnA72vTlQXuWQugYZLxcRee1kk9fLUjHILsk=
Date: Tue, 23 Feb 2021 19:02:30 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
CC: Jan Beulich <jbeulich@suse.com>, <andrew.cooper3@citrix.com>,
	<xen-devel@lists.xenproject.org>, <iwj@xenproject.org>, <wl@xen.org>,
	<anthony.perard@citrix.com>, <jun.nakajima@intel.com>, <kevin.tian@intel.com>
Subject: Re: [PATCH v2 2/4] x86: Introduce MSR_UNHANDLED
Message-ID: <YDVDNtd4n8zV7T6J@Air-de-Roger>
References: <YDOQvU1h8zpOv5PH@Air-de-Roger>
 <ce2ef7a3-0583-ffff-182a-0ab078f45b82@oracle.com>
 <307a4765-1c54-63fd-b3fb-ecb47ba3dbca@suse.com>
 <YDTMIW5vBe0IncVR@Air-de-Roger>
 <2744f277-06fb-e49f-2023-0ec6175259dd@suse.com>
 <YDTyScmud26aiaMi@Air-de-Roger>
 <172dfcab-9366-47f0-9c56-2202a8b7a7db@suse.com>
 <3ae19e76-2543-28f4-9c7f-697ccf9ed202@oracle.com>
 <YDUpF8gf6fbm4ouQ@Air-de-Roger>
 <a49b03dd-19c5-5df1-81a0-0d8d9e84156b@oracle.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <a49b03dd-19c5-5df1-81a0-0d8d9e84156b@oracle.com>
X-ClientProxiedBy: MRXP264CA0027.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:14::15) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 6bcbed88-34e1-42a0-3334-08d8d825335e
X-MS-TrafficTypeDiagnostic: DM6PR03MB5354:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB535437C1C349F35D98D399A48F809@DM6PR03MB5354.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: ZyLLIOuDUHVS0Z9tN+eKWpBYHOf8waZHd1IfSlgXsZceYdd7kX7z+UliPcAjqCGDq0MtoapidewQ+BdNJZ8cynaZtBmyat/zA1e7hJBYorvuUZEFtxvQ58+1VDPjPHrnsYAGBJAIEZS4c5sIDGJ1BSw2lUYHhlNmk84Gb8ipywHF3zYUUQylrOGmHTNcKgFuNz6biGoxuyirSFC6RX8CSQtIiz9P+6D2eaaFGfOwBUyJEOT79+COKtZshkjVaK4eNLpcJpozvK3ZlkU1Ge5Hp3KS0c/yQwMQiDgLLgXCn4QdZ3y7VfH++p2HctuSVYwwJYX3i81TnJME1h/QdDmacEMg1lsy+sWOEmAzwF7qCFVaZ0K+QYHDNy2y/bGi4OLDmn+sYk4/ibdV20R7gOsPFyp1F4lzbqWG7rNQDQcUXn0uLY46FbYjtPqBWNsyn/XDDJeIpheO0IN7gHlhEzL+VYdw+Koe877UIV3FEAqMXkf8msFQaFVynzRV5dFtPBM3midxN9WR12nx3afrF+X6AA==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(376002)(346002)(366004)(39860400002)(396003)(136003)(53546011)(6496006)(316002)(478600001)(8936002)(66556008)(4326008)(956004)(85182001)(66946007)(6486002)(5660300002)(16526019)(6666004)(4744005)(26005)(8676002)(186003)(33716001)(6916009)(66476007)(9686003)(86362001)(2906002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?clhFMS9yRTkvUUIxSkZBWXFsaXc4aWtia1I5MUhPdHM5S3VPcE9rYTNNOXhl?=
 =?utf-8?B?cCtIRlAwV1dmVitESkZ5QXJEUElZRi9pamk1a0NtRE5pRnp4Uzlsb0FYK28x?=
 =?utf-8?B?RTRaYi9PNnVUS3ZXYzVwOXNSU2NBNDVTV1Y1U0FsU3lPSVJ5aEpFY0ptaEJQ?=
 =?utf-8?B?amVvVUhpSkpRL3ZRYjZNWDVjMFlGQjlwMnhmVUoyWmFocTBHZ2xidlpEdkkw?=
 =?utf-8?B?SWdNUDZoczdHbW5WcU5NWm8wQ1hKNUFSTzRJR01QUzQzWXdZeERvZE9VZzVh?=
 =?utf-8?B?ZkxKRVFNWTdHbWJha1kwQmp5QWtwZmRkMFNxaUhocVRXeEV5NlJJWjdFNThF?=
 =?utf-8?B?ellIR3dianBnYlkzUk91RW56MW5UZyt3TmNBb3VZekpzR2ZQUjIzYXdZeEY0?=
 =?utf-8?B?KzZBSXB3SUVlb0pSWkdvNzI3VlRJSmlVV05ISStRa0p1bFlqWlJOeUtzenJL?=
 =?utf-8?B?QlNqcUlLU1lNdXEzTmRSdlM4dDZOUThsc2lSZG1ldVpublVUV2JyMFFBRG1J?=
 =?utf-8?B?czhXRDlwTi84TDlJclNCMlByQkorMGpkL2FqNzh2RENPZm1Nb1FZNEQrUDFL?=
 =?utf-8?B?RHRYR1REVzNORVhubll0ZWZCRnRmcVUvLzgxdldmUks1Smk2REtIMFBBZTY3?=
 =?utf-8?B?SWFGT1pYSE1DbmhLRXFhNk1sdFFCWTdGTFpWYkQ4bUlkalN0aU01d2tnamtw?=
 =?utf-8?B?Uy9yRTR5MWl5eHFwRzY2YVRKRGkrWExwUWplL2l1UGk2MjNlNThvdll2dzNG?=
 =?utf-8?B?RlBIM0ZGaFlPOXF4bEpFWkVFdW9YNjF2Mm9FWm5jejJGWHBCVXloSE5hOXJz?=
 =?utf-8?B?NmtJMUVxWmRMU01zZlhWcXIzK29HRGJUdmVuNExRNG92U0dQaFY5emJBVG1u?=
 =?utf-8?B?TDhvSlhvcmdUVkViWmIrT1k5ZjkySll5VXVpMGMrQVVOU0ZNWTBuVlpubi95?=
 =?utf-8?B?Z0R4QTZnRk8yZXBFZFZDeWFnM2hsVCtVNC9zdGYrOFA3djNXc010RVRpMmta?=
 =?utf-8?B?SUJtSDVrUUFQUlpmaFFwcXFrd1pESVRSU055VC9QNjE3blRoZHNwMWRQT2o3?=
 =?utf-8?B?Y1kvanhtUGF0eVFhN2ZncjNxdkRPaUhzZkdROWR1dzhlRTNXenl2b3Iwb0xI?=
 =?utf-8?B?K2F5ZU9Yd1ZBblhFempvTlpQZ2VMS3N5bFNPdjFpRVpKV0s5QVdJNTJhekww?=
 =?utf-8?B?ZWxDSTUwZWlsSENUTGNDREphNE9YYXJoQUI3Y3VGOXNZbjdhU3JEZXdKTlBi?=
 =?utf-8?B?QTFtbG1sZEp6cVFBTG5Gdm51Z3JCZVozcVZpNzkyaTJqeEh3RjRvK2dqZ3VQ?=
 =?utf-8?B?emxiek5JRFF4cXVMTDllb2c0ZVlEcVhvUWtGSXRVRUdIcC9DR0Fucm1mWHh1?=
 =?utf-8?B?bmN3VWEvMnJyQ0ZwQXVWMGRiOHQwSnhFdWpVejlWaHVjSmlFU280M0d1eVdT?=
 =?utf-8?B?QzQwN1czc0UwOWwzbXNGUUtMZkVVNnkzS2lqVVpNeXQ5Ykhrb0JVaDhYancr?=
 =?utf-8?B?UmRYNXhId2xUWG9RWjRGTkdXY3REZk9oQys2YkN6S2dBZW81QWFzS2xDQ3E3?=
 =?utf-8?B?YzFvM001aGhicjFQeHZzc01MdlVlSUtJM05oZTl5QlVWYWx3ZGpaWG5QNUFu?=
 =?utf-8?B?ZG02WVFHamo5a3hHelgzZ2x2WXVaYWxyOFk5NFZ0S1dNYkpOSkowMU4waUw5?=
 =?utf-8?B?QUVtS1o1QmtCS3h1UjZBNFRVbEhQN2NGQnNyMHFpQW95b0FDcElOV1E3emR6?=
 =?utf-8?B?ektBMEVGZEN6NXREV2c5a2lGaUlRVWZVRFJmNnFiTzJvdnBMa2t6eG1yUXBC?=
 =?utf-8?B?SGZTcGtHMjhhb0luR2w0QT09?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 6bcbed88-34e1-42a0-3334-08d8d825335e
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Feb 2021 18:02:36.2485
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: UykzLdiaWEhpIY230uZYaEvF8QiE5HQ6bF+Fhze4IvD1iVFma0stpCYruhEkdWlAO9cZO9Mdk+t4+KzAFOH2yA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB5354
X-OriginatorOrg: citrix.com

On Tue, Feb 23, 2021 at 11:40:07AM -0500, Boris Ostrovsky wrote:
> 
> > On 2/23/21 11:11 AM, Roger Pau Monné wrote:
> > On Tue, Feb 23, 2021 at 10:39:48AM -0500, Boris Ostrovsky wrote:
> >
> >> Before I do that though --- what was the conclusion on verbosity control?
> > Ideally I would like to find a way to have a more generic interface to
> > change verbosity level on a per-guest basis, but I haven't looked at
> > all into how to do that, nor do I think it would be acceptable to
> > put that burden on you.
> >
> > Maybe we could introduce another flag to set whether ignored MSRs
> > should be logged, as that would be easier to implement?
> 
> 
> Probably.
> 
> 
>     msr_ignore=["verbose=<bool>", "<range>", "<range>", ...]

I think just adding a new option will be easier to parse in xl rather
than placing it in the list of MSR ranges:

msr_ignore=["<range>", "<range>", ...]
msr_ignore_verbose=<boolean>

Seems easier to implement IMO.

Thanks, Roger.
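For illustration, a guest config using the separate-option shape proposed above might look like the following (the option names follow the proposal in this thread, but the syntax is not final and the MSR range values are made up for the example):

```
# Ignore guest accesses to these MSR ranges (hypothetical ranges),
# and log each ignored access:
msr_ignore = [ "0x199", "0xc0010000-0xc0010007" ]
msr_ignore_verbose = true
```

Keeping the boolean out of the list means xl can parse msr_ignore as a plain list of ranges, without special-casing the first element.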


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 18:03:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 18:03:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89047.167461 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEc2D-0001nV-CC; Tue, 23 Feb 2021 18:03:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89047.167461; Tue, 23 Feb 2021 18:03:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEc2D-0001nO-8G; Tue, 23 Feb 2021 18:03:45 +0000
Received: by outflank-mailman (input) for mailman id 89047;
 Tue, 23 Feb 2021 18:03:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UeLE=HZ=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lEc2C-0001nE-0E
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 18:03:44 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dfaaa91e-9940-4dc3-ba40-21c8f24bf469;
 Tue, 23 Feb 2021 18:03:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dfaaa91e-9940-4dc3-ba40-21c8f24bf469
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614103422;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=PCgjtMgIflsC6PSiHLI9Ys5v7Nm5z7vXeFYHLR6W0IA=;
  b=MgGM4Cr/LNdw7ToZeHgcQBQOs+67KPWaVHGd0ZArdnRMDkPRrXiejCFx
   4sZcDY07hmgEDpiHKzUHvKOzzku5leohIe25L5spu7kzfkT8I7By/KqYn
   XZ5jSo+wGedLXBqhfYu199U3P6HhfFVcKS2E6zbTdORpVpGVeIKqlpOFv
   g=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: I/ga5D+mcVxz/gH5YPjhN8K4X1frv3S6d6GOTHkgwZhdWl/Nx9DIb/OpPGCbUVwBvadfEXhMpS
 iHC1yOnVB3r65vo7YX84jaKDL3p28PL6FISKDj1iweLiQCm7sELkFYitU5e5+OuA1seMXS+U9A
 LdSIhQHZwy5C7jRa8zzdxWyTvlcO5odXTR6OQBxfqQjMIcpp9EWHVRChOTL/iRhSxbxBtrQCzx
 BZWonwt9IG2YX1O77z3ngYQ4t6KhEAsQwv4hPf2TMKzsSJX1knreYaYLHHo3gZHch8tXRLRyS5
 fMw=
X-SBRS: 5.2
X-MesageID: 37850584
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,200,1610427600"; 
   d="scan'208";a="37850584"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Oun0jR8m1ZDozQMz0Qus/4tpoK4hcxls1G5qf/W3klaz4EXsOCkXeHGOcoTwXeqC5gw12y3X9Ys1aRlFISECKwt8a8tUaOurW11Ze9AcEQ/ZjHMFW5o/CWgTLKLm1QEe0mgsCAleIjWNz0iN2sapUEwoFgZKMKiKDGDoFfErFuK1MWNCnKzzEhHjE2PWIWxuwJBucbg7gAagAe0r+J6FzWvBZTV2GI4+GqJlikMFtyaF7F64H0S7x3tnw9bj8rz7gekgooZor5t1+PKuBZGE5HHkfX8DT+tMGB9/ijUbSwudbBtqngxZZWhmcu6N/rc2xcavz5pVGd1pJ7zng4DWBw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=HN5eKrFtSWC3HPsDBv8QE0Sl+p3T/8jC3D6N82xv25s=;
 b=fqLjaYEkAWME09e75hNxzwsZ5Kh+a5HXD4TTcyJLTm5eaSlt4tL6m+dNMZksJt1dmUWwb+4yXvPFGgOhPPHYuCX6/QPcGrqVLA/Ev/UZcmkITOiyb5O4Q9XTgzpoDZwJBtLZdqiB0VJxHRD3GJTQID83DUketM9sR14uk4Ft5SdF6hQZntcH76V+tP/Zeqf4uVAOEBtDRbzHhW7b9zf9Hj2BBfaqiWbLVRkRK/CpxXPsWyal5gkvhKzdcHdS6iPNrZMkzVZYTjtwakxoNBEjfEmzmvNG3xLw3kWscVSyC58grtakaaMpulnImVs+a38W5GUcL1/qDOlgSxLkWIeEpQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=HN5eKrFtSWC3HPsDBv8QE0Sl+p3T/8jC3D6N82xv25s=;
 b=GH4E1zr7Yk9UffecGsaDD22qIIR5k5jk7Lk5dWmqDhoaK0YY3tfBUw/KNKQE41RY69lYfli3gxc8zLqxUHOEajpQBkzDGqxuOwnCH7NPysm1X6I5BOYeYzolpNnTymYeOLk0VWldMadAZ6YvQw4q99iRHeD/8Z93qOufJgZi/eo=
Date: Tue, 23 Feb 2021 19:03:34 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>
Subject: Re: [PATCH v2 8/8] x86/PV: use get_unsafe() instead of
 copy_from_unsafe()
Message-ID: <YDVDdozqBnoZjD/H@Air-de-Roger>
References: <b466a19e-e547-3c7c-e39b-1a4c848a053a@suse.com>
 <0a59ae2f-448e-610d-e8a2-a7c3f9f3918f@suse.com>
 <YDTuGn8YWRrWlbS9@Air-de-Roger>
 <76207250-1372-e7ab-2d03-b46020a7906b@suse.com>
 <YDUhKw+19ITgVmml@Air-de-Roger>
 <4fdb5952-6196-3a79-1306-e65d75e495d2@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <4fdb5952-6196-3a79-1306-e65d75e495d2@suse.com>
X-ClientProxiedBy: MRXP264CA0043.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:14::31) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 9465a5fc-c45e-4b8c-9412-08d8d82558d9
X-MS-TrafficTypeDiagnostic: DM6PR03MB4602:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB4602FF7BB41C92F99283C1BA8F809@DM6PR03MB4602.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:1388;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: EL3kLv1t1Thb4ZR2j8oLfPYrZhxchwLXx7aZwEXJEe4ypPnUdc+/+2LklYp2Pc1e6x8x9flts8WxAs56DCzbzPLJTVb2Vcx+iC0OJAv8G8e0qGb6g+pxNyC2ZNHk+dZtTtl7PysxjeaDqUD3vpuERdYQxyaUURu3EiYwQXm5MS8vPAs3icI6nMzOz0A5xQMRkADutAwq/VYwu9q1vsTihdlLDnB5zmby4k/3N0oZ8riRQpLySOJxL9JWgNMj7A2YGJfVGo4kckM6hg77yvqyKzpU34QMzgjCohhdpbPqMUK/0TPTFLrkkGyQHQXGL2e6A/GR3x03h0+iH0OD2HL9e1bKppc5QB7SREPj7wwnaStFQusF0bymdldxHNBopLFmZyH+u0j3RYNmQmn+ccKhaRAEIXnnoU/i84SPsQ/+qZEhD8RQXMff0NpZuPm1qe+C/9dy9dfNfF4ngwzlrTv5+kLlwglHchNNXyeQLgz5GpXLW/jOEzz+cck7gzTQ73BdVnB5dYA5ZG2ix0pm1TT6Xw==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(366004)(346002)(376002)(136003)(396003)(39860400002)(6496006)(956004)(86362001)(2906002)(53546011)(9686003)(26005)(85182001)(16526019)(83380400001)(5660300002)(8936002)(66556008)(66476007)(186003)(66946007)(8676002)(6916009)(316002)(33716001)(6666004)(478600001)(6486002)(4326008)(54906003);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?RlpBZVU1RHNGL2l5VWRTb2RDVXlMMVVvR2E1NkI2d2pZVC9CeWVGcWtEb1Zx?=
 =?utf-8?B?N1lNcGVOTWdzRFhSNlRONjM5ZEFadnlyeG02ZGs3bzMyNVl3QURGSkozdVhn?=
 =?utf-8?B?bGVJSkszNTBVMVpGUTVhOU82bEpwdklGbXBtUkNsREc3WVRzK1I2S1FIeExw?=
 =?utf-8?B?OFhtMkZVU0hLeUlkWlltOXptRVpIbFdvTGNMT3dESm5yV1g3QWpEQWZMUzhF?=
 =?utf-8?B?QnRQR240S3R3TkJyU0J1S3p0VlFXM2FnTFV5dnlQSmRybWd5UkNsMWhWUGQ1?=
 =?utf-8?B?SHp4Y0p4WG0xbjdvMDJQakJzS1hJbGZUVGR4SWpjYlp4eUZUUmIrN0xZdkYv?=
 =?utf-8?B?c2VMV1RHOHk5QjRHZ0ZaelVaUGhPVHhOb0l2aHZXOW51SnFqaUI1RzAxaC9t?=
 =?utf-8?B?UElqZ2MwM01RUU1uaTB1SWNWazZVVy9KRTI1U1JZU1k2OVluUTRqc2JXUlo4?=
 =?utf-8?B?R0xaLy9US09vWFl3R1dyblpNQ3ZPVG9OL3BFYXJBUUdzUTRVZ1h4RHFUVDlY?=
 =?utf-8?B?ZlhkTkhkeEdsbFpCRG9kZjNyQVQ2R2REV0tBNVgvYnlldVlldExCeW53ZFNG?=
 =?utf-8?B?bm5jaVk2K3B3NHlWNEhDWXVkYlgyQWZ2REluOVJOMW5VMVg4MktDcFBJeVB5?=
 =?utf-8?B?K3FiRUIzaGJlZS8zeHo3QUdhMnJUNjFLby9tNW9aNWRwa1FWQ01MejU2K1VZ?=
 =?utf-8?B?NnZFY2lZVThwZXNUcWsvS1FReVpub1h3REFqN3gxUmRBSnJQdGRoUUt1elZa?=
 =?utf-8?B?cmtvTi9jMDlPWlBUZ0VqUDBTbnRxRDlRQnRQaW12NkxHRE9uWjhMQ2NVZFUy?=
 =?utf-8?B?aGpiaDd6eUlTSDhJcW83d3hCRDBMSWo2Nis5YXRweEhNckVjYWdlc3MxRjhq?=
 =?utf-8?B?bU9hUzR0YWVkUjl2V0UrWkVyVEZkQ3BERld2dW1CNzByUkhXTUpDWGxTa0xl?=
 =?utf-8?B?SE1qOElGb01aTThUa3krSU5xMlZvYS8vbDRQNW4xdDdqNThtaE9BL2JjdzZk?=
 =?utf-8?B?Nk0rV3d6TzF6ZTFBQlBBWTZnVnM5clJkWnAxZWdueEFoQUZrSGorQzlBUmNj?=
 =?utf-8?B?SXAvN05oV1czMDJEQnljczB1dzMxNFU4NEVLNU5NOUZWdy9BODVHQ2dJZjRE?=
 =?utf-8?B?NG00czhzeGZvMlNMcGdhbUs3Sk1HTzRMenFHb1pVckpHV0I4dks5d01peVIw?=
 =?utf-8?B?SXFoZ1p6NmtEVjhDNEsxSURRd0RiRldkdjJwVXl6SDJBeWR5R1grVHUwYWgr?=
 =?utf-8?B?eTFNeGN4Sy9xVDRCUFJXWnM5K3pXeGNLSW9GTVdlWEt3T25xUmhPOHZkNkRz?=
 =?utf-8?B?bnd5NzVUekRabVlpSnRXOVJxZG5mc2tBbnVFMythdXR0d25BNUJDL2h6SGI0?=
 =?utf-8?B?UnBobm9raERzUUswSDJBbjRkOUxGU1Q0NXQ5K0ZXdHpBUEU0VmdSckJyaHk0?=
 =?utf-8?B?YUFNVHk0bGdMR1NmTHZ1bWhaMXZBSUpyNHdwYm82ZHF0QkNXYkNlMUE3Q1lL?=
 =?utf-8?B?NlhZTGVZSXIyaktiekppVy81TFA1eTNHbkdaQUxDTjlJa3NpUjV1NmtrcDBI?=
 =?utf-8?B?T2FNV0xrdURqS21sZ2xDME9vS1dpYktJaHJxRTNhRE5jOWVJMXhQaVVxZEwr?=
 =?utf-8?B?RmRBYnB1aVdxbnVoQlhjK2srMFRvczZGREhOdzFpMFZ0dFN4MjAvaE1pMGdW?=
 =?utf-8?B?dldMUlFKUWE2VW5tcDJBc2lVV0FCamRmV1h5UkdaUXRUdlF3REY4dG5sdHlV?=
 =?utf-8?B?OUJvcVI1ajEwMWJSNUxxL3lxcGtQT3hueHZaVzJ3N1ZlYk4rc1YxSEN5QnBq?=
 =?utf-8?B?V1pQMmNyVVlYNlE2NWptdz09?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 9465a5fc-c45e-4b8c-9412-08d8d82558d9
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Feb 2021 18:03:39.0761
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: zeVAIGWXmCMAhtCOnjyTcVos30wFWjRNU3VKeAGzgsDK0YefcRVLrJLzFNqVFQ8V1mT1cVY2J3Gl0Fq94O7a7g==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4602
X-OriginatorOrg: citrix.com

On Tue, Feb 23, 2021 at 05:13:21PM +0100, Jan Beulich wrote:
> On 23.02.2021 16:37, Roger Pau Monné wrote:
> > On Tue, Feb 23, 2021 at 04:25:00PM +0100, Jan Beulich wrote:
> >> On 23.02.2021 12:59, Roger Pau Monné wrote:
> >>> On Wed, Feb 17, 2021 at 09:23:33AM +0100, Jan Beulich wrote:
> >>>> The former expands to a single (memory accessing) insn, which the latter
> >>>> does not guarantee. Yet we'd prefer to read consistent PTEs rather than
> >>>> risking a split read racing with an update done elsewhere.
> >>>>
> >>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> >>>
> >>> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
> >>>
> >>> Albeit I wonder why the __builtin_constant_p check done in
> >>> copy_from_unsafe is not enough to take the get_unsafe_size branch in
> >>> there. Doesn't sizeof(l{1,2}_pgentry_t) qualify as a compile-time
> >>> constant?
> >>>
> >>> Or does the fact that n is a parameter to an inline function hide
> >>> this, in which case the __builtin_constant_p() is pointless?
> >>
> >> Without (enough) optimization, __builtin_constant_p() may indeed
> >> yield false in such cases. But that wasn't actually what I had
> >> in mind when making this change (and the original similar one in
> >> shadow code). Instead, at the time I made the shadow side change,
> >> I had removed this optimization from the new function flavors.
> >> With that removal, things are supposed to still be correct - it's
> >> an optimization only, after all. Meanwhile the optimizations are
> >> back, so there's no immediate problem as long as the optimizer
> >> doesn't decide to out-of-line the function invocations (we
> >> shouldn't forget that even always_inline is not a guarantee for
> >> inlining to actually occur).
> > 
> > I'm fine with you switching those use cases to get_unsafe, but I think
> > the commit message should be slightly adjusted to notice that
> > copy_from_unsafe will likely do the right thing, but that it's simply
> > clearer to call get_unsafe directly, also in case copy_from_unsafe
> > gets changed in the future to drop the get_unsafe paths.
> 
> How about this then?
> 
> "The former expands to a single (memory accessing) insn, which the latter
>  does not guarantee (the __builtin_constant_p() based switch() statement
>  there is just an optimization). Yet we'd prefer to read consistent PTEs
>  rather than risking a split read racing with an update done elsewhere."

LGTM, thanks.

Roger.


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 18:03:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 18:03:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89048.167473 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEc2N-0001r1-LA; Tue, 23 Feb 2021 18:03:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89048.167473; Tue, 23 Feb 2021 18:03:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEc2N-0001qt-H1; Tue, 23 Feb 2021 18:03:55 +0000
Received: by outflank-mailman (input) for mailman id 89048;
 Tue, 23 Feb 2021 18:03:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pXyS=HZ=gmail.com=bobbyeshleman@srs-us1.protection.inumbo.net>)
 id 1lEc2M-0001qb-IJ
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 18:03:54 +0000
Received: from mail-pj1-x1032.google.com (unknown [2607:f8b0:4864:20::1032])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id af6da4fb-1eae-4e2d-97b6-7c724af0887c;
 Tue, 23 Feb 2021 18:03:53 +0000 (UTC)
Received: by mail-pj1-x1032.google.com with SMTP id o22so1946556pjs.1
 for <xen-devel@lists.xenproject.org>; Tue, 23 Feb 2021 10:03:53 -0800 (PST)
Received: from ?IPv6:2601:1c2:4f80:d230::5? ([2601:1c2:4f80:d230::5])
 by smtp.gmail.com with ESMTPSA id c6sm7830377pfc.94.2021.02.23.10.03.47
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 23 Feb 2021 10:03:51 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: af6da4fb-1eae-4e2d-97b6-7c724af0887c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:organization:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=Ge4IIFdlQFH1Xek33XcsWVlIeV6+yCVvViifgE9+p+s=;
        b=gILul5JaCqaLqKRU5o6y/z57kUxYVVmiTpx4ZHYx+UxfDhPbPuCVtLByCLstm7ikb9
         8bxFAL3e5Mj0bp/DGJNAPzOFzbnZzzDyhA1xGT45EMtXDF4rjc6mPDKGVf0dbjmoANRC
         7+A6eq3yJDMv/3MsGBsbI6w4b7z0jsiq4eML45T57TTpwi4rssI7XMwjNvGFi+vSq2qt
         kzBpvCh9gOmGgzl8uNw0NYEGXM8AlFtQxTjwewDiGwhvdBeUy2Xxdlu/uMVkBLAxcs7u
         /7Oj54FMxycbOmZBilDRb9boPSQtNaewCBc7wmF0kiDe3F0MNGXgNU4CKrxk0WnaikjF
         Sohg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:organization
         :message-id:date:user-agent:mime-version:in-reply-to
         :content-language:content-transfer-encoding;
        bh=Ge4IIFdlQFH1Xek33XcsWVlIeV6+yCVvViifgE9+p+s=;
        b=qcHJ+zDUjb8vZ9kczqGATMdprTAQvkeMpQBxBhC2I79AVUxmdMiPq0JwsGDqbbnzL4
         rwi/4x0+CORVJ9gJ0oteBtn6iZcu1e5SH01QGHvaq0WMYa9FOFTCUiAkYxBQraqVEODJ
         NjR3/yDj7EOsDwX8lfH5QB9cxy2ggi8a5ZkBPHmOMCU2yr2SI6dtGWiLZSjPVBt7Zf2A
         mZ6lwPls+ExQzWpdsPz9h38xppNcVTuqoAGiVtUFZz/6b5PCAS8Q+SE/x/GZeRd5iiKA
         wUaC5S7CxTpPMcdIO1ePhUfz0nQ+WmgMyuZBn2deh6X3FujiE/CnD2Jaie0UQU5an+kM
         lCpg==
X-Gm-Message-State: AOAM531RMGHqhzxILL6y8zV/qGGnanPJWMrA8eBqPnHrR7197XPbRiPc
	F37F7WYKoXaFW4vwUCSKOTtsIdlYnzoWp4Nj
X-Google-Smtp-Source: ABdhPJyDQIw1Yye2kqr+UbCidXk5v7BCIMn0tQE+kmYc7qrn/SuSeQbinuw/+DYujMy6mw+Bhjkd3w==
X-Received: by 2002:a17:90a:f493:: with SMTP id bx19mr17266282pjb.213.1614103432627;
        Tue, 23 Feb 2021 10:03:52 -0800 (PST)
Subject: Re: [PATCH v3 0/5] Support Secure Boot for multiboot2 Xen
To: Jan Beulich <jbeulich@suse.com>
Cc: Daniel Kiper <daniel.kiper@oracle.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Olivier Lambert <olivier.lambert@vates.fr>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <cover.1611273359.git.bobbyeshleman@gmail.com>
 <9a58bdf7-3a34-1b81-aec9-b14da463d75e@gmail.com>
 <f24e9e8d-9d55-f301-9a33-4398b463013d@suse.com>
From: Bob Eshleman <bobbyeshleman@gmail.com>
Organization: Vates SAS
Message-ID: <bb7dd7ee-9df1-47d2-c4ae-08e051ea16b1@gmail.com>
Date: Tue, 23 Feb 2021 10:00:53 -0800
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
MIME-Version: 1.0
In-Reply-To: <f24e9e8d-9d55-f301-9a33-4398b463013d@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 2/22/21 11:16 PM, Jan Beulich wrote:
> It is on my list of things to look at. While probably not a good excuse,
> my looking at previous versions of this makes me somewhat hesitant to
> open any of these patch mails ... But I mean to get to it.
> 
> Jan
> 

Thanks for this response.  I did comb through your v2 feedback
point-by-point and incorporate it into the code, so I do hope
that ends up helping.


Best,
Bob


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 18:10:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 18:10:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89056.167484 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEc97-0002vN-DF; Tue, 23 Feb 2021 18:10:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89056.167484; Tue, 23 Feb 2021 18:10:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEc97-0002vG-AJ; Tue, 23 Feb 2021 18:10:53 +0000
Received: by outflank-mailman (input) for mailman id 89056;
 Tue, 23 Feb 2021 18:10:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pXyS=HZ=gmail.com=bobbyeshleman@srs-us1.protection.inumbo.net>)
 id 1lEc96-0002vB-0Y
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 18:10:52 +0000
Received: from mail-pj1-x1034.google.com (unknown [2607:f8b0:4864:20::1034])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c62899d2-6773-4d9e-96d4-ce2582f2dd9b;
 Tue, 23 Feb 2021 18:10:51 +0000 (UTC)
Received: by mail-pj1-x1034.google.com with SMTP id e9so2515950pjj.0
 for <xen-devel@lists.xenproject.org>; Tue, 23 Feb 2021 10:10:51 -0800 (PST)
Received: from ?IPv6:2601:1c2:4f80:d230::5? ([2601:1c2:4f80:d230::5])
 by smtp.gmail.com with ESMTPSA id i10sm15767282pfq.95.2021.02.23.10.10.46
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 23 Feb 2021 10:10:49 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c62899d2-6773-4d9e-96d4-ce2582f2dd9b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:organization:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=Ryf0krA1u79pVJENjelYCbZuFAvQJ85HPmI2Z5SonOg=;
        b=NaU6a9d/zvFpR6cKQKM0Me+30l1VTKt99yxliuIJL9bmZqO05VF7NJJUn6/UeMRtVY
         9/b8zM5S1Iv+mi4mDNyZbLpu+5YsTCJVjvPYRRfEX7MQ8jNLmIcAz/0XRUceDneU7GTj
         TvR7c4haXRM9AecdyTeRF2bKrLZcF+G90ZfP0zRQZ0t7q8YDAERUZTvUBDhxXchzwUML
         MHObNemGRmRQkUJ9KRoPdY9NPa9E31lKpdzGJ1QiVa81QMKIChuEV6vubvxMEZ9nk5NN
         dQu/hpBBvX4p4hMkWjYVyMlu6YOKCaS2A3vXlCmiWxqvxwiJ7A/Kea3K5nwcLP5OPkWa
         uPKA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:organization
         :message-id:date:user-agent:mime-version:in-reply-to
         :content-language:content-transfer-encoding;
        bh=Ryf0krA1u79pVJENjelYCbZuFAvQJ85HPmI2Z5SonOg=;
        b=Wd9Y7CHCv8GqdTOOIHUQ6XtIn42GwLEUmvAj/FOooY1FhOpkHqcEdP+CO4+YSiKSh9
         tVSbWQXuf6I7svkeUI/iblc0XT4zeDJQe99bm5B33ioaeA4Kic7dwg3cSemUVg1/aEH+
         lS9SXMh/5eMRTJxMnafJdX4a3Ias8nDnDx2IdY5LPgzn7p5N3TSlo+nLW2Etx0Bqf2sd
         amxP1xp+Auyth8a7bvwLFaAGqdem49DdhkSFvfD6dLveR+D5RV94kIw33enE4tmyb4C+
         913gpmgLqqcx0bpowRL/a7AJBY8TBn8ujvSw4LPqkKgHoCShtbIb73SCZ4idAoSr+/eo
         jH2w==
X-Gm-Message-State: AOAM530ZPJnSxeMn6kDUGICGq6qNwlY7bt+KGZZTklSTb5qY/z2t8los
	PsrJHu50hBT4ZAEOG8J3TbQ=
X-Google-Smtp-Source: ABdhPJyPWTY2ojVAxQL6iH2Y7r85BFjGkIh/0PiY9uLNrjDmTtPTUx2DqMF6GwZjmPXqJsW9XeVEfQ==
X-Received: by 2002:a17:902:6b87:b029:dc:3402:18af with SMTP id p7-20020a1709026b87b02900dc340218afmr29120302plk.29.1614103850197;
        Tue, 23 Feb 2021 10:10:50 -0800 (PST)
Subject: Re: [PATCH v3 4/5] xen/x86: add some addresses to the Multiboot2
 header
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Daniel Kiper <daniel.kiper@oracle.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>
References: <cover.1611273359.git.bobbyeshleman@gmail.com>
 <35ad940a3da56fc39c9f24e15c9f09ef74ad3448.1611273359.git.bobbyeshleman@gmail.com>
 <YDTFOD4jdE90fZ0/@Air-de-Roger>
From: Bob Eshleman <bobbyeshleman@gmail.com>
Organization: Vates SAS
Message-ID: <b17a562e-90d1-9704-d3e8-2be1b0c215cb@gmail.com>
Date: Tue, 23 Feb 2021 10:07:52 -0800
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
MIME-Version: 1.0
In-Reply-To: <YDTFOD4jdE90fZ0/@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 2/23/21 1:04 AM, Roger Pau Monné wrote:
> On Thu, Jan 21, 2021 at 04:51:43PM -0800, Bobby Eshleman wrote:
>> From: Daniel Kiper <daniel.kiper@oracle.com>
>>
>> In comparison to ELF the PE format is not supported by the Multiboot2
>> protocol. So, if we wish to load xen.mb.efi using this protocol we have
>> to add MULTIBOOT2_HEADER_TAG_ADDRESS and MULTIBOOT2_HEADER_TAG_ENTRY_ADDRESS
>> tags into Multiboot2 header.
>>
>> Additionally, put MULTIBOOT2_HEADER_TAG_ENTRY_ADDRESS and
>> MULTIBOOT2_HEADER_TAG_ENTRY_ADDRESS_EFI64 tags close to each
>> other to make the header more readable.
>>
>> The Multiboot2 protocol spec can be found at
>>   https://www.gnu.org/software/grub/manual/multiboot2/
>>
>> Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com>
>> Signed-off-by: Bobby Eshleman <bobbyeshleman@gmail.com>
>> ---
>>  xen/arch/x86/boot/head.S | 19 +++++++++++++++----
>>  1 file changed, 15 insertions(+), 4 deletions(-)
>>
>> diff --git a/xen/arch/x86/boot/head.S b/xen/arch/x86/boot/head.S
>> index 189d91a872..f2edd182a5 100644
>> --- a/xen/arch/x86/boot/head.S
>> +++ b/xen/arch/x86/boot/head.S
>> @@ -94,6 +94,13 @@ multiboot2_header:
>>          /* Align modules at page boundry. */
>>          mb2ht_init MB2_HT(MODULE_ALIGN), MB2_HT(REQUIRED)
>>  
>> +        /* The address tag. */
>> +        mb2ht_init MB2_HT(ADDRESS), MB2_HT(REQUIRED), \
>> +                   sym_offs(multiboot2_header), /* header_addr */ \
>> +                   sym_offs(start),             /* load_addr */ \
>> +                   sym_offs(__bss_start),       /* load_end_addr */ \
>> +                   sym_offs(__2M_rwdata_end)    /* bss_end_addr */
> 
> Shouldn't this only be present when a PE binary is built?
> 
> You seem to unconditionally add this to the header, even when the
> resulting binary will be in ELF format?
> 
> According to the spec: "This information does not need to be provided
> if the kernel image is in ELF format", and hence Xen shouldn't require
> the loader to understand this tag unless it's strictly required, as
> the presence of the tag forces the bootloader to use the presented
> information in order to load the kernel, regardless of the underlying
> binary format.
> 
> Thanks, Roger.
> 

Ah yes, this is true.  It may have made more sense to do this in v2, which was
trying to step us toward a single unified binary, but it certainly isn't
required with v3.

Thanks,
Bob
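
For illustration, the conditional form Roger describes could look roughly like
the following in head.S (which is run through the C preprocessor); the
XEN_BUILD_PE_BINARY guard is a hypothetical name, not an existing build flag:

```asm
        /* Hypothetical sketch: emit the address tag only for the PE build,
         * since Multiboot2 loaders can derive this information from the ELF
         * program headers when the image is in ELF format. */
#ifdef XEN_BUILD_PE_BINARY
        /* The address tag. */
        mb2ht_init MB2_HT(ADDRESS), MB2_HT(REQUIRED), \
                   sym_offs(multiboot2_header), /* header_addr */ \
                   sym_offs(start),             /* load_addr */ \
                   sym_offs(__bss_start),       /* load_end_addr */ \
                   sym_offs(__2M_rwdata_end)    /* bss_end_addr */
#endif /* XEN_BUILD_PE_BINARY */
```

This keeps the ELF binary's header free of the tag, so bootloaders are not
forced to honour addresses they could otherwise take from the ELF headers.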


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 18:28:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 18:28:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89061.167496 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEcQ0-000454-1i; Tue, 23 Feb 2021 18:28:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89061.167496; Tue, 23 Feb 2021 18:28:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEcPz-00044x-V0; Tue, 23 Feb 2021 18:28:19 +0000
Received: by outflank-mailman (input) for mailman id 89061;
 Tue, 23 Feb 2021 18:28:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oQ2t=HZ=epam.com=prvs=2688306719=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1lEcPy-00044s-A0
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 18:28:18 +0000
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7607a04c-8cca-4efc-8de6-fdeb4f9a10fb;
 Tue, 23 Feb 2021 18:28:17 +0000 (UTC)
Received: from pps.filterd (m0174678.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 11NIPTqI003841; Tue, 23 Feb 2021 18:28:16 GMT
Received: from eur04-he1-obe.outbound.protection.outlook.com
 (mail-he1eur04lp2051.outbound.protection.outlook.com [104.47.13.51])
 by mx0a-0039f301.pphosted.com with ESMTP id 36w1dahe8k-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Tue, 23 Feb 2021 18:28:16 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM9PR03MB6770.eurprd03.prod.outlook.com (2603:10a6:20b:283::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3868.27; Tue, 23 Feb
 2021 18:28:13 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::6156:8f40:92f3:de55]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::6156:8f40:92f3:de55%8]) with mapi id 15.20.3868.033; Tue, 23 Feb 2021
 18:28:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7607a04c-8cca-4efc-8de6-fdeb4f9a10fb
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=e01GgD9JiBHAfiY38eTuBe+lADk+5BHvRCLuDdyxGutesVuV3mAhTcXTK+bmk/TVKkjQ2w4kxGtXihOY7vHWrvunKSIgUdEoq/lROQ5ZbKzW6t+IhODK6oOIDlvbGy4dK74wQEgXY21o6if1C2YojvbuuJE0Rm/6ZNFQe4Gpoc6khD4ccKTAUlbkoesGZvsQO8Wx6Uz4usIVL4qSuJ+DK2KqN5ea9wycGegLbkHyMDbb1PlhpgYGHWs/7XvofvhShpY6EDQaGgSWnEBJCsS/nckcVrY2/iUjzOVP6fSgz8ebq39qtYgHiVouNQOWhFpqvQg8JsJ2HfahzSxohq4P0g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nd2K5q7SKiHvqYeyLcJ9WLE9Sw2vdyG4fRh7sv0bBgo=;
 b=irBPftfn/j/RzMSb3lNuxf6mE34DzEE+U2sxMJTYJJEmK+RCQiQ3Ap514OVhQsxTm+7dbBWbTIbwwYWrL7z3UtwWOJ4SYPdq7K7CPgo+51nDxwPwKbJ67ntBJ7hAuTeFWaq0lJqQ2F+EInorfN/YeVJGXzArOmB2U1uxrO8BzT2D+E1W5n4AHQPYUN3p+w0WCqfWvOpQVBtM4kJ6l9GF3glBdjpcN66tZKSHi16Fr7j+jZtBHndOxwED487pV5dncs8RrL2/Ml7Lq5sXMdZQhjHYgWCvMjpLfa1Sq5eb6HKjb7DKaKbN+vXix7u3b5qiHwIjfDSMfe/jSu/uZymuIw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nd2K5q7SKiHvqYeyLcJ9WLE9Sw2vdyG4fRh7sv0bBgo=;
 b=AjCH5cM8oazod6exaG1RuhbOifik0/swMkT33zAPSysCP3Jj3lYxjOKvDtyyx6LHIwW9Xb83a7UWDrZ/WGpbx6pU7xgHC9t4xsZeYkhN3DXGVgG8cvTt0uk48/+te+ztNU2ctnL2ycpAIhF6nYs6kj8UfeaD1qqDOCe3bnH+mY/xWl1bsDH+vtEZNS+ABoblR/GLdfNfJEhzUYSbB0R8w/acdnqA2n5EELVAJGJPwWFWOr99DlUg/YjJn5y+kp1+FRghhoDhHMrDDEIns0hjU9TTdON+Wi/DWnq0KbYZrHlt8O9Xm4i6BH7CjZJeyeS2v3Ng9CPFeaCmrVuOnKp01Q==
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Jan Beulich <jbeulich@suse.com>, Juergen Gross <jgross@suse.com>,
        Boris
 Ostrovsky <boris.ostrovsky@oracle.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH] xen-front-pgdir-shbuf: don't record wrong grant handle
 upon error
Thread-Topic: [PATCH] xen-front-pgdir-shbuf: don't record wrong grant handle
 upon error
Thread-Index: AQHXCgCim25QM03ANUywDJtpwJ6UVqpmD0iA
Date: Tue, 23 Feb 2021 18:28:13 +0000
Message-ID: <75c64c79-7458-19ca-1346-4c0e090cf0f7@epam.com>
References: <82414b0f-1b63-5509-7c1d-5bcc8239a3de@suse.com>
In-Reply-To: <82414b0f-1b63-5509-7c1d-5bcc8239a3de@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: af7594ee-f3d7-4c51-84d3-08d8d828c802
x-ms-traffictypediagnostic: AM9PR03MB6770:
x-microsoft-antispam-prvs: 
 <AM9PR03MB6770322F938C3E585A75E684E7809@AM9PR03MB6770.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:5516;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 48gUV7jOsnfGHqJPLR8jjdmCWQc9VBwbMDAKX2zTHMbc+dBHXeOFyOswDIG4SFto6/QcvblS9kOicNQwoqLIC+9QnbmCxGlBpQRMijSDlVrYXOUt2xNG2PR9VrlLRlk1Jw2uaRmYM8rZ3QsqaHxfz8DtfKZHtvnwMx2Km/prKQ0wTUWY7iA1mwxexqe2Fma/PdBQWXdwDQmy4E1R23ARowwexBdT/Ov0TS/nH+7cJhxs79ew3vg6VvIsBbOY1r0eNDjzKt3O2SE2PW7k9a9QJZu/sERMbcUFwA4fY1JQUBide3Qz0DJtRZWzoItrppOcZqEKUU3DEdU7H5V8XMauYIPbbbdR7rvhv2EBSFWyF8wDXDqYLm6wqYAMlshzyEKC3rXrabK/namuFS1NupPgJpd5yQSWFKgdJKwa4l6cdeMNZis2Lt5f7uZpXB0VoaZrnZh19aJ0q5APnafbs8p8cnnpV16m4fwrMswobcTVfq9m413p+WxrDPZ1VPmgR/Hx90+wH6sG0idM1kPlouX2+gh5FfZOSlQmHBY+vZZ4EC6s7Q8QsEuY1a2AWla5t1riR3F68ZWDUI77weLlYXHvow==
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR03MB6324.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(346002)(39850400004)(136003)(366004)(396003)(376002)(76116006)(8936002)(66476007)(6512007)(66556008)(83380400001)(8676002)(66446008)(64756008)(66946007)(86362001)(6506007)(31696002)(110136005)(2616005)(316002)(4326008)(478600001)(6486002)(53546011)(71200400001)(36756003)(2906002)(5660300002)(186003)(31686004)(26005)(45980500001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 
 =?utf-8?B?NzZqSE5OcEJiQ0VDL0prVlpPTys5bGhkaHR2aXJiMERHOEhVcytvVEExWWYw?=
 =?utf-8?B?K3pxOWtuZ25tam5vakwxTHdOWTNEbW4xWG1IT0R4MGJ2blEyQlJjLzRhMEhQ?=
 =?utf-8?B?dlZRdmdoVXluRkhZV08xQnRzclZWYkdSc2liUFBCY3RpZHRmN2VIenFYeGZs?=
 =?utf-8?B?RUl5cVdseEtNVWVwTEpNQkZIeklUeEtQeXFPdDcyallCdVRzZFZUUkZGVFVt?=
 =?utf-8?B?ZDcvVmpnR05uU0ZTbU94T2w2a3NYUTF1QlMzTkIxRFNyMFBnVWl3ZHlldm5B?=
 =?utf-8?B?TUdWb0dhdEdGMWphTTZMbVF4QjRuUkxvOWIwNUVXNUdMRFJhNmFkYnFxR3c2?=
 =?utf-8?B?WlAvbEdNTDBDMEcwa290Z2ozaFdMT2R6QnQ1dEFkYnRUSnFKaXVxM0Z3NVdz?=
 =?utf-8?B?Z09KUVFFNUlWOG8vcWNMekZtbStBeGJ3T1ltZHRReUhiU0R6aVNDYlloZnY3?=
 =?utf-8?B?bFJaN2QzUHltUStlbWlNSVFCbzR1ZW5aT0ZWNy85ZEx6YU1JaFEzSDg5Zm0x?=
 =?utf-8?B?OUN5Q1k3SjZxbDhIUlMxekxzZzkxUlJyQ3hIVENiWEs4cmFwcFZKbWtLRWs0?=
 =?utf-8?B?dk1FK1MrRUJmSEVtMHBsMGZDazA2RXFWdWRVWmREd2FEZys1aSsrU09IMlBr?=
 =?utf-8?B?ODdENXhGczAvbTFzcDMxSGU1YkhMNWJUMHp3SE00cU9FVEFkTHVsbGxGeTB5?=
 =?utf-8?B?K05kdlpFN2VjbHU5WWxsTjU1UlhNdDVtVWd0MjRRQjVJNFQ1ZzNPNXgxU0Qr?=
 =?utf-8?B?U3h6aTc2RW9abzF6dkJQU1JHVW0xT1lybEhXTEM0cm1tVWhXVkIyVVdMTUVI?=
 =?utf-8?B?MXV4R0tCaTZsTlVxQXdWT1FkWTdIZ0o5NXlzemFqMUJjY3BNOGJpZFU5eG5Y?=
 =?utf-8?B?dGU0OXVjeGFrSEd1cW1IS3dFY2tLaS90S2o2dVZoaE1IcEJLT3BjZDdleVJO?=
 =?utf-8?B?WTRIb2w3RkRNVjh4OFFoeFFKWFR0S2dlQVNra0tmUVEwSEdteFNGcXo4Z0x3?=
 =?utf-8?B?S25vSDRnblFYY29LRTA1S09iRTFJZzdyaDFCYXdnSmVsV2UxRWR0VVZoZ0Z0?=
 =?utf-8?B?Uy8rZ050VW5IK1RVV0VGUW1nOWhONE9CVmNCNXpHMS9LcUdEeUxETUpmUjVi?=
 =?utf-8?B?SVhxUVFEZjhoK1FOM0FiRERrdGFGR0FrdktxY3pIakdPc3B1b25Pd251dmVs?=
 =?utf-8?B?Z1FsTGFJMFpQai93ZUVoQzZ5KzhWU1djZWNnQThwcjJvRHVJejY3emRMZ2NO?=
 =?utf-8?B?ajVtRFBFVzI3ZkRuOWFFcVV3Nnl0Y1k0NTZVaHp3c00vYktTbkNITUZNd2wx?=
 =?utf-8?B?dVRZTGEvdVZuMTk4QlV6eVhnY3RrcW9xS0huRk96U24xdSswN2FkNnlocFh3?=
 =?utf-8?B?c1Y5c1JaMFFJejljVlhNZG1nVDVZUjV2WGxYMTV4WERnei9aYzNIQm5HL0NI?=
 =?utf-8?B?ZFJmK0Rvc21ld1MwNExxMG5PSkF5SUhpRXBtQmJUdEVSMUozZlppNXRrY2lC?=
 =?utf-8?B?THpwZll5OHcyQjFsY1BkdUtUZFpFU3FBL25CTVBkbjRpY1V2L2JFY0lMQk5F?=
 =?utf-8?B?NjcrTXVQNURqeTFHakpLM1VjeVhJOGlSeng2ZEdGc2g1MG1EMmR0MzR5U0ZW?=
 =?utf-8?B?VjYrdnl6L0QrRlprNFJ5TXpuUCs3ZXVIV05BVnlxR0lvY3NEb2Vpa3ZXbXJ1?=
 =?utf-8?B?N1FwL2JHWTN2ZmFvS3g1Ui8rT3lRZG16bGlCSStSNTRwWmdTREFLR2YxYzlJ?=
 =?utf-8?Q?2HsIQlwVPDoHPZyOXZkCwwBfXsS5TeYOon939SM?=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <6A99F573C88B1C488C092203746DC84A@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: af7594ee-f3d7-4c51-84d3-08d8d828c802
X-MS-Exchange-CrossTenant-originalarrivaltime: 23 Feb 2021 18:28:13.8028
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: lGPd/sjA1UJurCwnLniwA4rq0g2WrsSKtFEN4POOVVPbhS/csZAl0qXq3BcrS0VARNMT5qa7bBGWyfiSn4Gmr6lgBfeZTbnt6rFDPf3DGDPGu/b0O3X1iDV4DQ7JOhqp
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR03MB6770
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 suspectscore=0 bulkscore=0
 impostorscore=0 priorityscore=1501 spamscore=0 mlxscore=0 phishscore=0
 lowpriorityscore=0 adultscore=0 mlxlogscore=999 malwarescore=0
 clxscore=1011 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2102230154

Hello, Jan!

On 2/23/21 6:26 PM, Jan Beulich wrote:
> In order for subsequent unmapping to not mistakenly unmap handle 0,
> record a perceived always-invalid one instead.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Reviewed-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
> ---
> v2: Use INVALID_GRANT_HANDLE.
>
> --- a/drivers/xen/xen-front-pgdir-shbuf.c
> +++ b/drivers/xen/xen-front-pgdir-shbuf.c
> @@ -305,11 +305,18 @@ static int backend_map(struct xen_front_
>   
>   	/* Save handles even if error, so we can unmap. */
>   	for (cur_page = 0; cur_page < buf->num_pages; cur_page++) {
> -		buf->backend_map_handles[cur_page] = map_ops[cur_page].handle;
> -		if (unlikely(map_ops[cur_page].status != GNTST_okay))
> +		if (likely(map_ops[cur_page].status == GNTST_okay)) {
> +			buf->backend_map_handles[cur_page] =
> +				map_ops[cur_page].handle;
> +		} else {
> +			buf->backend_map_handles[cur_page] =
> +				INVALID_GRANT_HANDLE;
> +			if (!ret)
> +				ret = -ENXIO;
>   			dev_err(&buf->xb_dev->dev,
>   				"Failed to map page %d: %d\n",
>   				cur_page, map_ops[cur_page].status);
> +		}
>   	}
>   
>   	if (ret) {


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 18:46:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 18:46:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89066.167509 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEchJ-00060H-Kc; Tue, 23 Feb 2021 18:46:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89066.167509; Tue, 23 Feb 2021 18:46:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEchJ-000605-HN; Tue, 23 Feb 2021 18:46:13 +0000
Received: by outflank-mailman (input) for mailman id 89066;
 Tue, 23 Feb 2021 18:46:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hUnP=HZ=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1lEchI-0005zF-BJ
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 18:46:12 +0000
Received: from userp2120.oracle.com (unknown [156.151.31.85])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1800fa8f-bde4-46af-bec9-e24b124f7dce;
 Tue, 23 Feb 2021 18:46:11 +0000 (UTC)
Received: from pps.filterd (userp2120.oracle.com [127.0.0.1])
 by userp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 11NIiidF093054;
 Tue, 23 Feb 2021 18:46:07 GMT
Received: from aserp3030.oracle.com (aserp3030.oracle.com [141.146.126.71])
 by userp2120.oracle.com with ESMTP id 36ugq3fak3-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Tue, 23 Feb 2021 18:46:07 +0000
Received: from pps.filterd (aserp3030.oracle.com [127.0.0.1])
 by aserp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 11NIQPu9115117;
 Tue, 23 Feb 2021 18:46:06 GMT
Received: from nam12-mw2-obe.outbound.protection.outlook.com
 (mail-mw2nam12lp2046.outbound.protection.outlook.com [104.47.66.46])
 by aserp3030.oracle.com with ESMTP id 36v9m4ytcq-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Tue, 23 Feb 2021 18:46:06 +0000
Received: from BYAPR10MB3288.namprd10.prod.outlook.com (2603:10b6:a03:156::21)
 by BY5PR10MB4195.namprd10.prod.outlook.com (2603:10b6:a03:201::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3868.27; Tue, 23 Feb
 2021 18:46:04 +0000
Received: from BYAPR10MB3288.namprd10.prod.outlook.com
 ([fe80::f489:4e25:63e0:c721]) by BYAPR10MB3288.namprd10.prod.outlook.com
 ([fe80::f489:4e25:63e0:c721%7]) with mapi id 15.20.3868.031; Tue, 23 Feb 2021
 18:46:04 +0000
Received: from [10.74.102.180] (138.3.200.52) by
 CY4PR13CA0010.namprd13.prod.outlook.com (2603:10b6:903:32::20) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3890.9 via Frontend Transport; Tue, 23 Feb 2021 18:46:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1800fa8f-bde4-46af-bec9-e24b124f7dce
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : in-reply-to : content-type :
 content-transfer-encoding : mime-version; s=corp-2020-01-29;
 bh=uVUiwQj9+9nxoo2rnj1fO9AMn0rm4aGzjApXlJYHX90=;
 b=glyOixaNXaGr+CvbkuANnB6LlyDwwF3BVRwHysyq7Dy55d1Vbl8OBXLjDZHcHeZcI3i/
 SEnczdSXZKu4C6V5/adjtu7Bh3yXq0QgPhE5+eX4VlFZi/eNwpRpvtJHYuP3F8FuwCZf
 CFr3/sjS458fhXRLxUjEOResA7q0+g/HZHPet4YJ1Xm6veHGo5vf1kf4LK2P/0IpYW1O
 uwiYKAZIid9BC2T1ZQo5cZTwvHh407mkOt5+Vx0wafwlTEXq0RjYbqfPTYPZctN1LjK0
 Ilue0e5A3lx3vc0rvjv8e5JHyKSt9MoYg+ZvyqOpAVyO4GkeNca3vGFzFjPFG9aUCVeE ow== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=VnEuNZwGfQrypvxNhE8XGluCWFPy+oxzf0bVY20kq1Fe5twDX7UJdSktaYl44L7bDqTWDD/hzoM80OMs1k4ToMIRh8yQ1wmpOSywvUyOSXUJqz8w8qTfcega2/Kp7npWplj6PaqjCUkowLYpCAgaTNkzcg2RsSagK1QdoIdbLrJK4uFTiP2oYiM+VcqYLo/4yrAXYfTIApfXO02aogKsyV+KHXXn/lYm1zZzPOT/ONn5GPl+wOwSuJagu47pXYRbwrWcBRWYoMEyvDOZPZIkdHDB+uYPaU8EYcczIPRLAThx+OaDmyuQror+gAVF0ZgVIQDgbySCZ+v0Pdf6qExeYg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=uVUiwQj9+9nxoo2rnj1fO9AMn0rm4aGzjApXlJYHX90=;
 b=XBcVpuXsL2TnK50S9H4LJEwJQ0Hfv8ttygdaK3hX6OMi5AdE3vXolLGNR60v0231YnVMoasUciEGF/Ws83GYFqY8/DqPF5dHlIrqJWZ5IbUFgC4VOIn3yiVIZ4nUh9X8U2hEnDotIBVWekNQpt+/0OAtr9RBWpBfVPrXbPN3FJ0lt+Mep3faABek2u0NSPc2jWvMHT6OR+5woUpOOBLVMQyEaUNwCkFEpBQSeZFO4B2g0daBmrMvZbQQDJd/4IxtDpgaVXYJTmTVujQLMJm8sTneVodf5qAiQ/dVDt2+3EFM4u9+XfIsfC+HqKAJMDe9qOUfAOH2SeupaWghykbJrQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=uVUiwQj9+9nxoo2rnj1fO9AMn0rm4aGzjApXlJYHX90=;
 b=kETts1EEBaC7IFZuVNQl87odVTNOf0i+FV3wVEDARo8eYVshmiyvTOf1QZxjfkT2Mt519b59hD5b1DoWrtiAdqq8G9RzFBXEY2beqO4iSuN3NGxWe0IWxORIXxhTzvQCBIFSl5HEyEdI7rMKqhE9Z3QaFaI+FtElUYvDOya2w9w=
Authentication-Results: intel.com; dkim=none (message not signed)
 header.d=none;intel.com; dmarc=none action=none header.from=oracle.com;
Subject: Re: [PATCH v2 2/4] x86: Introduce MSR_UNHANDLED
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>, andrew.cooper3@citrix.com,
        xen-devel@lists.xenproject.org, iwj@xenproject.org, wl@xen.org,
        anthony.perard@citrix.com, jun.nakajima@intel.com,
        kevin.tian@intel.com
References: <YDOQvU1h8zpOv5PH@Air-de-Roger>
 <ce2ef7a3-0583-ffff-182a-0ab078f45b82@oracle.com>
 <307a4765-1c54-63fd-b3fb-ecb47ba3dbca@suse.com>
 <YDTMIW5vBe0IncVR@Air-de-Roger>
 <2744f277-06fb-e49f-2023-0ec6175259dd@suse.com>
 <YDTyScmud26aiaMi@Air-de-Roger>
 <172dfcab-9366-47f0-9c56-2202a8b7a7db@suse.com>
 <3ae19e76-2543-28f4-9c7f-697ccf9ed202@oracle.com>
 <YDUpF8gf6fbm4ouQ@Air-de-Roger>
 <a49b03dd-19c5-5df1-81a0-0d8d9e84156b@oracle.com>
 <YDVDNtd4n8zV7T6J@Air-de-Roger>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <e8b6c86a-339d-c89e-e087-ebe4ffd92e46@oracle.com>
Date: Tue, 23 Feb 2021 13:45:59 -0500
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
In-Reply-To: <YDVDNtd4n8zV7T6J@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-US
X-Originating-IP: [138.3.200.52]
X-ClientProxiedBy: CY4PR13CA0010.namprd13.prod.outlook.com
 (2603:10b6:903:32::20) To BYAPR10MB3288.namprd10.prod.outlook.com
 (2603:10b6:a03:156::21)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 2f10ecc7-7160-4247-f0fc-08d8d82b45d6
X-MS-TrafficTypeDiagnostic: BY5PR10MB4195:
X-Microsoft-Antispam-PRVS: 
	<BY5PR10MB41951F51BAABD81AD3622A578A809@BY5PR10MB4195.namprd10.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 
	Yjo732jSadERdTi6Ob+ZsWVLresr5qRgX2xSY4W7MYoykajV6Ztltl2m1UTYAJWWtZs20rX0Q5mI+hzuLfNHnVLpi602tVBGk8kt/SsexTHjVwMmzBCGKKglQLC1w6pNs5GCZ7AdSZzxtYK3Z+Wr76XApRKcnFM6Ae1fbY9TpLoBQq4ln1Y3rjJf9Ly0/vC/jAgIzSYYB2j+5qtgz3Ssl1zhWy7aRLZ2nYhgOZsxtwsLx2pPOjPh4aXpBcdRUQ0QTKHoiTd+7K2r32J4PfxRdP6fUeEGy9bd4gUA5z92KtdJmCEQ/PIesxa9WyU/j+x+108vZ70biuLCAE4GLxnryQ6Asz4Z9LE6k6QP5yAoewcWgAey0JBRLUJM7NZfTwqVHPavpg8IPAW5H35GMMs/hBXiaZfn37F63aC+dY6UnFHX9+w+zT+fKRErQyBd6PSqVGbe0NqY2ZEAjlColmAC6zCiZwE4AjL+yWczQNlIxou/89tllY77Ir1yFxSSMTKUTNcds3hAJzqOnO7/cTB8OqhwEETwZNlHRIQ7nEN2VmQ0VxE9QXo1sG4yyv+pG0XK6KbI2imF247vmIBBDAx90dM4qJMneCvvP9O0yDkewx8=
X-Forefront-Antispam-Report: 
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR10MB3288.namprd10.prod.outlook.com;PTR:;CAT:NONE;SFS:(396003)(136003)(39860400002)(376002)(366004)(346002)(5660300002)(66476007)(8936002)(956004)(66946007)(316002)(478600001)(36756003)(26005)(6666004)(31686004)(53546011)(8676002)(86362001)(6916009)(4326008)(16526019)(6486002)(16576012)(31696002)(2616005)(186003)(44832011)(66556008)(2906002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: 
	=?utf-8?B?RS9ibGJxQ0R3Y1RJSGFEVEdpVFRIU1p5dzNvTDdINTNWZ2ZaZU9QVFhldWJt?=
 =?utf-8?B?RU44bHFwWTk4SHBITlpkaTFTZEdYcWpQZEdUclp4QkJGVzBSRHZGVXdPbUU2?=
 =?utf-8?B?VFhEZ2x2UUdXaDJEWjVPU3BrMWNyeHo3MjBXNkV0d1BrS2FtMHY1VEM3Nlkx?=
 =?utf-8?B?L0hCUG8rNTczUUdRR01PRDlkM1RMNXZPNnZTV2kyZDZmM0ZMUkRnL05KWDVv?=
 =?utf-8?B?aU4rZ0hnV1p1MUppdCtzS1l5cUNrTUFwZWJFZWZBU1hCRFhTRU1GYUtDdnhD?=
 =?utf-8?B?QU84ZS90L1BBUkx1Mk5UNFUyMWlwK0tUOWFhZVpjUjR4YTYzSGpvZitvcXZ3?=
 =?utf-8?B?UHhONDJDbDNLV2VDNkRWZFpHTXV3Ymg3TXdEbFd2Y08vTDhHbGlHQytUbHFx?=
 =?utf-8?B?REhGT0gvZmcrMzN3eVJ3UUU4eW9sMkU1ZGN6blJsQW1GekpMUk5PMXRsUWNP?=
 =?utf-8?B?UlBzc2V4TGcrUDFXUEZ3eUt4cFB1TWQ2UWtDZGV0L2tKMjVHRno5Y1RVYlVV?=
 =?utf-8?B?STVkU2Z1N2VNSm0yQzYxSmg5NlhhRmw5UmdtNURIM0NhM0pkUENvMlZsZ0Jq?=
 =?utf-8?B?QWtsOUVhWlppT1FaYmhaV2czeWtVVGZNVE5IZkhqL3J1VkZldU5qa04wd0Mw?=
 =?utf-8?B?a1FYd05SbG56NGZaN1dDVERoWFozVFJUNENEek9qZS96UEZFZmt2VEZkZ2Fo?=
 =?utf-8?B?NzJOcHRzL2RXV1lUQ3hLbVFaL2hiRnk1S3p0SkdodExnenhDNHl3VHlabmdU?=
 =?utf-8?B?c0YxRUJoY1RBVjZZTVdKak5SRFNQUklHWTFlbjEzQVI2Q1pIZnlMY3lRVWhk?=
 =?utf-8?B?SlZCVGhIQ1NOSzNROVdDTmYwQXM2RnNiUHp4QVZ2RC9ac0FFVSt1WUxqZytw?=
 =?utf-8?B?NHVoMW9FaWh2MlkweGJwSk5rbjRMRE1yUi9uR21BRi83blB6VzY5b284a1VB?=
 =?utf-8?B?emlGNXpwcE0zSVNGRFRBTDd6WmtGYmZTN1NOM01jQS9jV3RKZk9rTy9lQSt1?=
 =?utf-8?B?cnRnbUdEOEZRSXk2YUYwUWtwMXFPQUJxeFpoT3JJcEQ0UmNPRXVFeVpFZ2lS?=
 =?utf-8?B?Z1haU0Q5bXp6ZU5iSHovMmhXUFNlZGZ6bDlWdS8reDRUUVp3cG00dUNmYis4?=
 =?utf-8?B?NFhwQ0IvVVlwcThMdVYvMVQzb3k2WjU0YzJZM0RvWTZWTFZGUmdqK1BGTFFp?=
 =?utf-8?B?ZHAvYk93eUZSYXdZNU1BK2J2T2JoSUFPSVJPbXM1YkVsQzFqSDlGUVhZOVgx?=
 =?utf-8?B?cGU4VG9IVDNCUloxNFFlU2FiZ0pNM2VqR0VJVExqQmNkTGt6RjYwcHlDRlRz?=
 =?utf-8?B?YjFSVTNIcFd4WXNXMDJnTkhlKzFFcEFMRGYyNjZKbTVlUXNTRWZDN0dKNGps?=
 =?utf-8?B?S25DdFp3SGtFNzg4MkFBOVloNnVobEJwellDdHI4R20rMlRNelJmeUkwSXlo?=
 =?utf-8?B?WFlLMGJmT2dGQzRxNnBuWHc2c3ErN3U2SEd3NGNsNlNzVTBNcy9Bbm0rbE9t?=
 =?utf-8?B?bGdxYVYvNmtaOFkwVjRWVlMrY1RrNWVuVmlIQU5VZXJFNXNyQVN3NUtVTXl3?=
 =?utf-8?B?Qkw1dDk3ZGFwczIyakFvMTZsY1J5YkhzQU9MN2VRUWtxSmJSL0psQ2xReS9j?=
 =?utf-8?B?VHdZZ2dYekVZSkM3Ni82bzNiT1h4UmJwWTB3RDZLUDRoL0NqWDFCMUg2N0dp?=
 =?utf-8?B?NTRIR0QwZkx6RThZZC9nS0NrZ1RsSUcxbi85SW14WmtRTHB1T2VzRDkvUy9y?=
 =?utf-8?Q?dPZY0IqWgRnUO+AWh4lUeU1gMns8KQ0JdFstKrW?=
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2f10ecc7-7160-4247-f0fc-08d8d82b45d6
X-MS-Exchange-CrossTenant-AuthSource: BYAPR10MB3288.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Feb 2021 18:46:04.2381
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: iURhayGs6GlNU/x6SIywp8Zid4C6DAO9icLELO5UWUOXlFsxIzP1XOIXTJteVsQWy/CviqwhENjBF3IuER73ryNUbXdlFprRLelgA/8RLVo=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR10MB4195
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9904 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 malwarescore=0 adultscore=0
 suspectscore=0 mlxlogscore=999 mlxscore=0 spamscore=0 bulkscore=0
 phishscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2102230154
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9904 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 adultscore=0 phishscore=0
 malwarescore=0 spamscore=0 mlxscore=0 suspectscore=0 priorityscore=1501
 clxscore=1015 impostorscore=0 lowpriorityscore=0 mlxlogscore=999
 bulkscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2102230155


On 2/23/21 1:02 PM, Roger Pau Monné wrote:
> On Tue, Feb 23, 2021 at 11:40:07AM -0500, Boris Ostrovsky wrote:
>>> On 2/23/21 11:11 AM, Roger Pau Monné wrote:
>>> On Tue, Feb 23, 2021 at 10:39:48AM -0500, Boris Ostrovsky wrote:
>>>
>>>> Before I do that though --- what was the conclusion on verbosity control?
>>> Ideally I would like to find a way to have a more generic interface to
>>> change verbosity level on a per-guest basis, but I haven't looked at
>>> all about how to do that, neither I think it would be acceptable to
>>> put that burden on you.
>>>
>>> Maybe we could introduce another flag to set whether ignored MSRs
>>> should be logged, as that would be easier to implement?
>>
>> Probably.
>>
>>
>>     msr_ignore=["verbose=<bool>", "<range>", "<range>", ...]
> I think just adding a new option will be easier to parse in xl rather
> than placing it in the list of MSR ranges:
>
> msr_ignore=["<range>", "<range>", ...]
> msr_ignore_verbose=<boolean>
>
> Seems easier to implement IMO.


I haven't looked at what parsing support is available in xl. If I don't find anything useful (and I give up quickly ;-) ) I can use separate options.



-boris



From xen-devel-bounces@lists.xenproject.org Tue Feb 23 18:50:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 18:50:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89069.167521 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEclu-0006x0-96; Tue, 23 Feb 2021 18:50:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89069.167521; Tue, 23 Feb 2021 18:50:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEclu-0006wt-4B; Tue, 23 Feb 2021 18:50:58 +0000
Received: by outflank-mailman (input) for mailman id 89069;
 Tue, 23 Feb 2021 18:50:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEclt-0006wl-7X; Tue, 23 Feb 2021 18:50:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEclt-00016X-1a; Tue, 23 Feb 2021 18:50:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEcls-0003m4-PA; Tue, 23 Feb 2021 18:50:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lEcls-0001SR-Od; Tue, 23 Feb 2021 18:50:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=vgK01ubZ+ssTqbtWfy9L1yMBHJ1BKMUQukIa+sN6S6Q=; b=iwXItaoYC+V0nSRUxRMuEKAZLH
	r/jbUQxSZniGsU4EkyhLnuCpXc9VWooEenq/Lq9yH41L0Wgcp2TNPiM6Ig4nIieTV0AP2wJ0ukwBs
	9UYzI9bEUnZmU8J02lFnUBlTk0JWIKnl6FCHQHY5zD0Xrl/aiNoqVjLK02F6HdoDBU3o=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159585-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 159585: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=a2b5ea38a6fbbcf1a8f4c2128be35121236437a7
X-Osstest-Versions-That:
    ovmf=078400ee15e7b250e4dfafd840c2e0c19835e16b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 23 Feb 2021 18:50:56 +0000

flight 159585 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159585/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 a2b5ea38a6fbbcf1a8f4c2128be35121236437a7
baseline version:
 ovmf                 078400ee15e7b250e4dfafd840c2e0c19835e16b

Last test of basis   159546  2021-02-22 10:09:51 Z    1 days
Testing same since   159585  2021-02-23 12:39:43 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Rebecca Cran <rebecca@nuviainc.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   078400ee15..a2b5ea38a6  a2b5ea38a6fbbcf1a8f4c2128be35121236437a7 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 20:29:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 20:29:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89084.167549 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEeJM-0007Jn-Uf; Tue, 23 Feb 2021 20:29:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89084.167549; Tue, 23 Feb 2021 20:29:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEeJM-0007Jg-Pt; Tue, 23 Feb 2021 20:29:36 +0000
Received: by outflank-mailman (input) for mailman id 89084;
 Tue, 23 Feb 2021 20:29:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fjLX=HZ=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lEeJK-0007JV-W4
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 20:29:35 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e0146f00-5eaf-492d-b3e2-6100393b9360;
 Tue, 23 Feb 2021 20:29:34 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 3499C64E6B;
 Tue, 23 Feb 2021 20:29:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e0146f00-5eaf-492d-b3e2-6100393b9360
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1614112173;
	bh=Msc8Jyit6uwbkwcqci0Of2uto8QfqsbsB2hqrjLdtX4=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=kylLrgZMEdOyJJ3VD9w2xM07vz4USwSKC59m8raSn7YnNnk8YynuF8qZkJ7MLz0x/
	 XiThQj4+RnW5fbHf3Fm2xE7ofVWfx69giSPNo2y3n8K8kRqpdm9ggCus62Oe1oxpgN
	 By6gRw2voDNihJmF6Kpis17yZVmmS3934XZiQ3L4z4r9uNBzfE/1Q+EB/e3x2YhrZF
	 apQudMAReTLct8JAtllpxlfo8Tj+soKJzRuY7JOs/tbRBtacnO8GFmjM9g/E2aGgQY
	 2cVAUNBlVOSyGH4gUR59w3PP3xcUPKv1qOiCtAxrpSTky2q0jKXce0G9v57OokH1SI
	 GNudjw0Ob3VLA==
Date: Tue, 23 Feb 2021 12:29:32 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: xen-devel@lists.xenproject.org, Volodymyr_Babchuk@epam.com
cc: famzheng@amazon.com, sstabellini@kernel.org, cardoe@cardoe.com, wl@xen.org, 
    Bertrand.Marquis@arm.com, julien@xen.org, andrew.cooper3@citrix.com
Subject: Re: [RFC PATCH 00/10] Preemption in hypervisor (ARM only)
In-Reply-To: <161405394665.5977.17427402181939884734@c667a6b167f6>
Message-ID: <alpine.DEB.2.21.2102231228060.3234@sstabellini-ThinkPad-T480s>
References: <161405394665.5977.17427402181939884734@c667a6b167f6>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

Hi Volodymyr,

This looks like a genuine failure:

https://gitlab.com/xen-project/patchew/xen/-/jobs/1048475444


(XEN) Data Abort Trap. Syndrome=0x1930046
(XEN) Walking Hypervisor VA 0xf0008 on CPU0 via TTBR 0x0000000040545000
(XEN) 0TH[0x0] = 0x0000000040544f7f
(XEN) 1ST[0x0] = 0x0000000040541f7f
(XEN) 2ND[0x0] = 0x0000000000000000
(XEN) CPU0: Unexpected Trap: Data Abort
(XEN) ----[ Xen-4.15-unstable  arm64  debug=y  Tainted: U     ]----
(XEN) CPU:    0
(XEN) PC:     00000000002273b8 timer.c#remove_from_heap+0x2c/0x114
(XEN) LR:     0000000000227530
(XEN) SP:     000080003ff7f9a0
(XEN) CPSR:   800002c9 MODE:64-bit EL2h (Hypervisor, handler)
(XEN)      X0: 000080000234e6a0  X1: 0000000000000001  X2: 0000000000000000
(XEN)      X3: 00000000000f0000  X4: 0000000000000000  X5: 00000000014d014d
(XEN)      X6: 0000000000000080  X7: fefefefefefeff09  X8: 7f7f7f7f7f7f7f7f
(XEN)      X9: 717164616f726051 X10: 7f7f7f7f7f7f7f7f X11: 0101010101010101
(XEN)     X12: 0000000000000008 X13: 0000000000000001 X14: 000080003ff7fa78
(XEN)     X15: 0000000000000020 X16: 000000000028e558 X17: 0000000000000000
(XEN)     X18: 00000000fffffffe X19: 0000000000000001 X20: 0000000000310180
(XEN)     X21: 00000000000002c0 X22: 0000000000000000 X23: 0000000000346008
(XEN)     X24: 0000000000310180 X25: 0000000000000000 X26: 00008000044e91b8
(XEN)     X27: 000000000000ffff X28: 0000000041570018  FP: 000080003ff7f9a0
(XEN) 
(XEN)   VTCR_EL2: 80043594
(XEN)  VTTBR_EL2: 000200007ffe3000
(XEN) 
(XEN)  SCTLR_EL2: 30cd183d
(XEN)    HCR_EL2: 00000000807c663f
(XEN)  TTBR0_EL2: 0000000040545000
(XEN) 
(XEN)    ESR_EL2: 97930046
(XEN)  HPFAR_EL2: 0000000000030010
(XEN)    FAR_EL2: 00000000000f0008
(XEN) 
(XEN) Xen stack trace from sp=000080003ff7f9a0:
(XEN)    000080003ff7f9c0 0000000000227530 00008000044e9190 00000000002280dc
(XEN)    000080003ff7f9e0 0000000000228234 00008000044e9190 000000000024dd04
(XEN)    000080003ff7fa40 000000000024a414 0000000000311390 000080000234e430
(XEN)    0000800002345000 0000000000000000 0000000000346008 00008000044e9150
(XEN)    0000000000000001 0000000000000000 0000000000000240 0000000000270474
(XEN)    000080003ff7faa0 000000000024b91c 0000000000000001 0000000000310238
(XEN)    000080003ff7fbf8 0000000080000249 0000000093860047 00000000002a1de0
(XEN)    000080003ff7fc88 00000000002a1de0 00000000000002c0 00008000044e9470
(XEN)    000080003ff7fab0 00000000002217b4 000080003ff7fad0 000000000027a8c0
(XEN)    0000000000311324 00000000002a1de0 000080003ff7fc00 0000000000265310
(XEN)    0000000000000000 00000000002263d8 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000020
(XEN)    0000000000000080 fefefefefefeff09 7f7f7f7f7f7f7f7f 717164616f726051
(XEN)    7f7f7f7f7f7f7f7f 0101010101010101 0000000000000008 0000000000000001
(XEN)    000080003ff7fa78 0000000000000020 000000000028e558 0000000000000000
(XEN)    00000000fffffffe 0000000000000000 0000000000310238 000000000000000a
(XEN)    0000000000310238 00000000002a64b0 00000000002a1de0 000080003ff7fc88
(XEN)    0000000000000000 0000000000000240 0000000041570018 000080003ff7fc00
(XEN)    000000000024c8c0 000080003ff7fc00 000000000024c8c4 9386004780000249
(XEN)    000080003ff7fc90 000000000024c974 0000000000000384 0000000000000002
(XEN)    0000800002345000 00000000ffffffff 0000000000000006 000080003ff7fe20
(XEN)    0000000000000001 000080003ff7fe00 000080003ffe4a60 000080000234e430
(XEN)    000080003ff7fd20 000080003ff7fd20 000080003ff7fce0 00000000ffffffc8
(XEN)    000080003ff7fce0 000000000031a147 000080003ff7fd20 000000000027f7b8
(XEN)    000080003ff7fd20 000080003ff7fd20 000080003ff7fce0 00000000ffffffc8
(XEN)    000080003ff7fd20 000080003ff7fd20 000080003ff7fce0 00000000ffffffc8
(XEN)    0000000000000240 0000800002345000 00000000ffffffff 0000000000000004
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000022
(XEN)    000080003ff7fda0 000000000026ff2c 000000000027f608 0000000000000000
(XEN)    0000000000000093 0000800002345000 0000000000000000 000080003ffe4a60
(XEN)    0000000000000001 000080003ff7fe00 000080003ffe4a60 0000000041570018
(XEN)    000080003ff7fda0 000000000026fee0 000080003ff7fda0 000000000026ff18
(XEN)    000080003ff7fe30 0000000000279b2c 0000000093860047 0000000000000090
(XEN)    0000000003001384 000080003ff7feb0 ffff800011dc1384 ffff8000104b06a0
(XEN)    ffff8000104b0240 ffff00000df806e8 0000000000000000 ffff800011b0ca88
(XEN)    0000000003001384 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000093860047 0000000003001384 000080003ff7fe70 000000000027a180
(XEN)    000080003ff7feb0 0000000093860047 0000000093860047 0000000060000085
(XEN)    0000000093860047 ffff800011b0ca88 ffff800011b03d90 0000000000265458
(XEN)    0000000000000000 ffff800011b0ca88 000080003ff7ffb8 000000000026545c
(XEN) Xen call trace:
(XEN)    [<00000000002273b8>] timer.c#remove_from_heap+0x2c/0x114 (PC)
(XEN)    [<0000000000227530>] timer.c#remove_entry+0x90/0xa0 (LR)
(XEN)    [<0000000000227530>] timer.c#remove_entry+0x90/0xa0
(XEN)    [<0000000000228234>] stop_timer+0x1fc/0x254
(XEN)    [<000000000024a414>] core.c#schedule+0xf4/0x380
(XEN)    [<000000000024b91c>] wait+0xc/0x14
(XEN)    [<00000000002217b4>] try_preempt+0x88/0xbc
(XEN)    [<000000000027a8c0>] do_trap_irq+0x5c/0x60
(XEN)    [<0000000000265310>] entry.o#hyp_irq+0x7c/0x80
(XEN)    [<000000000024c974>] printk+0x68/0x70
(XEN)    [<000000000027f7b8>] vgic-v2.c#vgic_v2_distr_mmio_write+0x1b0/0x7ac
(XEN)    [<000000000026ff2c>] try_handle_mmio+0x1ac/0x27c
(XEN)    [<0000000000279b2c>] traps.c#do_trap_stage2_abort_guest+0x18c/0x2d8
(XEN)    [<000000000027a180>] do_trap_guest_sync+0x10c/0x63c
(XEN)    [<0000000000265458>] entry.o#guest_sync_slowpath+0xa4/0xd4
(XEN) 
(XEN) 
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) CPU0: Unexpected Trap: Data Abort
(XEN) ****************************************


On Mon, 22 Feb 2021, no-reply@patchew.org wrote:
> Hi,
> 
> Patchew automatically ran gitlab-ci pipeline with this patch (series) applied, but the job failed. Maybe there's a bug in the patches?
> 
> You can find the link to the pipeline near the end of the report below:
> 
> Type: series
> Message-id: 20210223023428.757694-1-volodymyr_babchuk@epam.com
> Subject: [RFC PATCH 00/10] Preemption in hypervisor (ARM only)
> 
> === TEST SCRIPT BEGIN ===
> #!/bin/bash
> sleep 10
> patchew gitlab-pipeline-check -p xen-project/patchew/xen
> === TEST SCRIPT END ===
> 
> warning: redirecting to https://gitlab.com/xen-project/patchew/xen.git/
> warning: redirecting to https://gitlab.com/xen-project/patchew/xen.git/
> From https://gitlab.com/xen-project/patchew/xen
>  * [new tag]               patchew/20210223023428.757694-1-volodymyr_babchuk@epam.com -> patchew/20210223023428.757694-1-volodymyr_babchuk@epam.com
> Switched to a new branch 'test'
> a569959cc0 alloc pages: enable preemption early
> c943c35519 arm: traps: try to preempt before leaving IRQ handler
> 4b634d1924 arm: context_switch: allow to run with IRQs already disabled
> 7d78d6e861 sched: core: remove ASSERT_NOT_IN_ATOMIC and disable preemption[!]
> d56302eb03 arm: setup: disable preemption during startup
> 18a52ab80a preempt: add try_preempt() function
> 9c4a07d0fa preempt: use atomic_t to for preempt_count
> 904e59f28e sched: credit2: save IRQ state during locking
> 3e3726692c sched: rt: save IRQ state during locking
> c552842efc sched: core: save IRQ state during locking
> 
> === OUTPUT BEGIN ===
> [2021-02-23 02:38:00] Looking up pipeline...
> [2021-02-23 02:38:01] Found pipeline 260183774:
> 
> https://gitlab.com/xen-project/patchew/xen/-/pipelines/260183774
> 
> [2021-02-23 02:38:01] Waiting for pipeline to finish...
> [2021-02-23 02:53:10] Still waiting...
> [2021-02-23 03:08:19] Still waiting...
> [2021-02-23 03:23:29] Still waiting...
> [2021-02-23 03:38:38] Still waiting...
> [2021-02-23 03:53:48] Still waiting...
> [2021-02-23 04:08:57] Still waiting...
> [2021-02-23 04:19:05] Pipeline failed
> [2021-02-23 04:19:06] Job 'qemu-smoke-x86-64-clang-pvh' in stage 'test' is failed
> [2021-02-23 04:19:06] Job 'qemu-smoke-x86-64-gcc-pvh' in stage 'test' is failed
> [2021-02-23 04:19:06] Job 'qemu-smoke-x86-64-clang' in stage 'test' is failed
> [2021-02-23 04:19:06] Job 'qemu-smoke-x86-64-gcc' in stage 'test' is failed
> [2021-02-23 04:19:06] Job 'qemu-smoke-arm64-gcc' in stage 'test' is failed
> [2021-02-23 04:19:06] Job 'qemu-alpine-arm64-gcc' in stage 'test' is failed
> [2021-02-23 04:19:06] Job 'alpine-3.12-clang-debug' in stage 'build' is failed
> [2021-02-23 04:19:06] Job 'alpine-3.12-clang' in stage 'build' is failed
> [2021-02-23 04:19:06] Job 'alpine-3.12-gcc-debug' in stage 'build' is failed
> [2021-02-23 04:19:06] Job 'alpine-3.12-gcc' in stage 'build' is failed
> === OUTPUT END ===
> 
> Test command exited with code: 1


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 20:40:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 20:40:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89091.167563 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEeTv-0000dB-3C; Tue, 23 Feb 2021 20:40:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89091.167563; Tue, 23 Feb 2021 20:40:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEeTv-0000d4-0D; Tue, 23 Feb 2021 20:40:31 +0000
Received: by outflank-mailman (input) for mailman id 89091;
 Tue, 23 Feb 2021 20:40:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fjLX=HZ=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lEeTt-0000cz-VK
 for xen-devel@lists.xenproject.org; Tue, 23 Feb 2021 20:40:29 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 878ec573-2ba2-4b0c-8bf7-c2d49312d6dd;
 Tue, 23 Feb 2021 20:40:29 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 2DFDC601FF;
 Tue, 23 Feb 2021 20:40:28 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 878ec573-2ba2-4b0c-8bf7-c2d49312d6dd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1614112828;
	bh=GQUobiaItm0t2cmOOoyfX4zeMrB5aEvhU2hMF0b1duk=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=B0+Yysz1Lm1d2Kf1NfePaJhMx2Qej0kYt9qYL/HM9h1Ek9hOb1MlvnkusGMog7D/6
	 q63eCxlsRHCcHsIfMVyd3Heh8Q/jEmx9ZyTZyUeLb9H/4V9mh5avZVLvpVzPAil61S
	 tNPv+iRfPVCwV8OOIj6IPBzju33q9meL+e3RhHGpyYnCrrriA5wg8P4X3mOnP5xaZ+
	 UsJBgZhl4c8nsc3SVoB0HoM9qYrneeOGtyb3M5HCBhFtjsXBPlqvcxluh0n9DC4J8/
	 dQg5FGE6m+SkIFdXb4AU2925HCvV6O26VJ+sRcTg0OlAFgxVr+e5o9s15cQOx4f2bt
	 JUmgZSqYwHAsA==
Date: Tue, 23 Feb 2021 12:40:27 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Bertrand Marquis <Bertrand.Marquis@arm.com>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    "iwj@xenproject.org" <iwj@xenproject.org>, 
    Julien Grall <jgrall@amazon.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, dfaggioli@suse.com, 
    George.Dunlap@citrix.com
Subject: Re: [PATCH for-4.15] xen/vgic: Implement write to ISPENDR in vGICv{2,
 3}
In-Reply-To: <767e2028-ca86-bd0f-e936-c386066c11c8@xen.org>
Message-ID: <alpine.DEB.2.21.2102231237410.3234@sstabellini-ThinkPad-T480s>
References: <20210220140412.31610-1-julien@xen.org> <F86904EB-91E9-475C-B60B-E08C5C9E76C3@arm.com> <alpine.DEB.2.21.2102221253390.3234@sstabellini-ThinkPad-T480s> <767e2028-ca86-bd0f-e936-c386066c11c8@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 23 Feb 2021, Julien Grall wrote:
> Hi Stefano,
> 
> On 23/02/2021 01:24, Stefano Stabellini wrote:
> > On Mon, 22 Feb 2021, Bertrand Marquis wrote:
> > > > On 20 Feb 2021, at 14:04, Julien Grall <julien@xen.org> wrote:
> > The consequence of this patch is that a guest can cause vcpu_unblock(v),
> > hence vcpu_wake(v), to be called for its own vcpu, which basically
> > always changes v to RUNSTATE_runnable. However, that alone shouldn't
> > allow v to always come up ahead of any other vcpus in the queue, right?
> > It should be safe. I just wanted a second opinion on this :-)
> 
> vcpu_wake() only tells the scheduler that the vCPU can be run; it is then up
> to the scheduler to decide what to do. AFAIU, for credit{1, 2}, each vCPU will
> have some credit. If you run out of credit, then your vCPU will be
> descheduled if there is other work to do.

OK, great, it matches my understanding. Thanks for checking.

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 20:41:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 20:41:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89095.167576 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEeUY-0000j5-DK; Tue, 23 Feb 2021 20:41:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89095.167576; Tue, 23 Feb 2021 20:41:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEeUY-0000iy-9l; Tue, 23 Feb 2021 20:41:10 +0000
Received: by outflank-mailman (input) for mailman id 89095;
 Tue, 23 Feb 2021 20:41:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEeUX-0000io-6Q; Tue, 23 Feb 2021 20:41:09 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEeUW-0002y9-SM; Tue, 23 Feb 2021 20:41:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEeUW-00084C-Hs; Tue, 23 Feb 2021 20:41:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lEeUW-0007Up-HK; Tue, 23 Feb 2021 20:41:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Ho2jbA9tbu4P+nJhf82giSvtDTw3XG4UDYg70oLnkxY=; b=g8BWUOadWKVM9R70OK5bJSk5mE
	nTnmCkgkTdCtORjXSKLWQpdRyly3pnbh7+hLVmMb1t5WNGaEs3P3eE35grBbEqrn5/MCOcdjA1WhU
	XJBdpWvFcDI8YHHRiqwNXJieHKRIZDQizwKyehzjjmLeiLhzzngPBMT4WTyjCckTND1Y=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159576-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 159576: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-shadow:xen-boot:fail:regression
    xen-unstable:test-xtf-amd64-amd64-1:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-xtf-amd64-amd64-2:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-intel:xen-boot:fail:regression
    xen-unstable:test-xtf-amd64-amd64-3:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-amd64-i386-libvirt-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-coresched-i386-xl:xen-boot:fail:regression
    xen-unstable:test-xtf-amd64-amd64-4:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-xtf-amd64-amd64-5:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-amd64-i386-xl-raw:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-libvirt-pair:xen-boot/src_host:fail:regression
    xen-unstable:test-amd64-i386-libvirt-pair:xen-boot/dst_host:fail:regression
    xen-unstable:test-amd64-i386-xl-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-migrupgrade:xen-boot/dst_host:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-freebsd10-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-freebsd10-i386:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-examine:reboot:fail:regression
    xen-unstable:test-amd64-i386-libvirt:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-livepatch:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-pair:xen-boot/src_host:fail:regression
    xen-unstable:test-amd64-i386-pair:xen-boot/dst_host:fail:regression
    xen-unstable:test-amd64-i386-xl-pvshim:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-start/debianhvm.repeat:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f894c3d8e705fea9cb3244fa61684bfd8bdd1b2a
X-Osstest-Versions-That:
    xen=e8185c5f01c68f7d29d23a4a91bc1be1ff2cc1ca
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 23 Feb 2021 20:41:08 +0000

flight 159576 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159576/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-ws16-amd64  8 xen-boot          fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-debianhvm-amd64  8 xen-boot     fail REGR. vs. 159475
 test-amd64-i386-xl-shadow     8 xen-boot                 fail REGR. vs. 159475
 test-xtf-amd64-amd64-1      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-xtf-amd64-amd64-2      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-amd64-i386-qemut-rhel6hvm-intel  8 xen-boot         fail REGR. vs. 159475
 test-xtf-amd64-amd64-3      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-amd64-i386-libvirt-xsm   8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-xl-qemut-debianhvm-amd64  8 xen-boot     fail REGR. vs. 159475
 test-amd64-coresched-i386-xl  8 xen-boot                 fail REGR. vs. 159475
 test-xtf-amd64-amd64-4      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-xtf-amd64-amd64-5      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-amd64-i386-xl-raw        8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-libvirt-pair 12 xen-boot/src_host        fail REGR. vs. 159475
 test-amd64-i386-libvirt-pair 13 xen-boot/dst_host        fail REGR. vs. 159475
 test-amd64-i386-xl-xsm        8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  8 xen-boot  fail REGR. vs. 159475
 test-amd64-i386-migrupgrade  13 xen-boot/dst_host        fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-ovmf-amd64  8 xen-boot          fail REGR. vs. 159475
 test-amd64-i386-xl            8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 159475
 test-amd64-i386-qemut-rhel6hvm-amd  8 xen-boot           fail REGR. vs. 159475
 test-amd64-i386-qemuu-rhel6hvm-amd  8 xen-boot           fail REGR. vs. 159475
 test-amd64-i386-xl-qemut-win7-amd64  8 xen-boot          fail REGR. vs. 159475
 test-amd64-i386-freebsd10-amd64  8 xen-boot              fail REGR. vs. 159475
 test-amd64-i386-freebsd10-i386  8 xen-boot               fail REGR. vs. 159475
 test-amd64-i386-examine       8 reboot                   fail REGR. vs. 159475
 test-amd64-i386-libvirt       8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  8 xen-boot  fail REGR. vs. 159475
 test-amd64-i386-livepatch     8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 159475
 test-amd64-i386-qemuu-rhel6hvm-intel  8 xen-boot         fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-ws16-amd64  8 xen-boot          fail REGR. vs. 159475
 test-amd64-i386-pair         12 xen-boot/src_host        fail REGR. vs. 159475
 test-amd64-i386-pair         13 xen-boot/dst_host        fail REGR. vs. 159475
 test-amd64-i386-xl-pvshim     8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-win7-amd64  8 xen-boot          fail REGR. vs. 159475

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 20 guest-start/debianhvm.repeat fail pass in 159559
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat  fail pass in 159559

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 159475
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 159475
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 159475
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 159475
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 159475
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 159475
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 159475
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f894c3d8e705fea9cb3244fa61684bfd8bdd1b2a
baseline version:
 xen                  e8185c5f01c68f7d29d23a4a91bc1be1ff2cc1ca

Last test of basis   159475  2021-02-19 06:26:37 Z    4 days
Failing since        159487  2021-02-20 04:29:29 Z    3 days    7 attempts
Testing same since   159559  2021-02-22 20:38:35 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Norbert Manthey <nmanthey@amazon.de>
  Rahul Singh <rahul.singh@arm.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Tamas K Lengyel <tamas@tklengyel.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    fail    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 318 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 22:53:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 22:53:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89115.167611 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEgYc-0004Y0-BK; Tue, 23 Feb 2021 22:53:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89115.167611; Tue, 23 Feb 2021 22:53:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEgYc-0004Xt-8G; Tue, 23 Feb 2021 22:53:30 +0000
Received: by outflank-mailman (input) for mailman id 89115;
 Tue, 23 Feb 2021 22:53:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEgYb-0004Xl-Ek; Tue, 23 Feb 2021 22:53:29 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEgYa-00058r-Qu; Tue, 23 Feb 2021 22:53:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEgYa-00066F-H9; Tue, 23 Feb 2021 22:53:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lEgYa-0006sF-Gh; Tue, 23 Feb 2021 22:53:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=3BFrlyfzlmhqv+BKNooRuI8rJ/kvnT4fkcD3PvSKBtg=; b=U5EXu+ZU2NbQ0Bn72dtCk6o+kY
	59E6evYL1shxIficmFBet+ofkrNE2E8itGoKyinyoWuOvVkZl/8ecMOX/30V0VE3aSBjCk4MHpDK8
	X0kmHk9iuE30d70/PYfibZXkEVP+tBI0Ar3g7tiUGQP/aGd/rHORInerFQIuY+3j7bmg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159582-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 159582: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:xen-boot:fail:heisenbug
    qemu-mainline:test-arm64-arm64-xl-xsm:xen-boot:fail:heisenbug
    qemu-mainline:test-amd64-i386-libvirt-xsm:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=7ef8134565dccf9186d5eabd7dbb4ecae6dead87
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 23 Feb 2021 22:53:28 +0000

flight 159582 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159582/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-libvirt-raw  8 xen-boot         fail in 159563 pass in 159582
 test-arm64-arm64-xl-xsm       8 xen-boot                   fail pass in 159563
 test-amd64-i386-libvirt-xsm  20 guest-start/debian.repeat  fail pass in 159563

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm     15 migrate-support-check fail in 159563 never pass
 test-arm64-arm64-xl-xsm 16 saverestore-support-check fail in 159563 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                7ef8134565dccf9186d5eabd7dbb4ecae6dead87
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  187 days
Failing since        152659  2020-08-21 14:07:39 Z  186 days  360 attempts
Testing same since   159563  2021-02-22 23:37:57 Z    0 days    2 attempts

------------------------------------------------------------
425 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 117355 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Feb 23 22:55:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 Feb 2021 22:55:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89120.167627 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEgas-0004gX-UU; Tue, 23 Feb 2021 22:55:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89120.167627; Tue, 23 Feb 2021 22:55:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEgas-0004gQ-QX; Tue, 23 Feb 2021 22:55:50 +0000
Received: by outflank-mailman (input) for mailman id 89120;
 Tue, 23 Feb 2021 22:55:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEgar-0004gE-3Z; Tue, 23 Feb 2021 22:55:49 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEgaq-0005AM-QM; Tue, 23 Feb 2021 22:55:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEgaq-0006Cn-JY; Tue, 23 Feb 2021 22:55:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lEgaq-0000Nc-J6; Tue, 23 Feb 2021 22:55:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Q/U4b0NzbBuQVlwD5P8M7TfLstVhJNA5aPtK2vqM+qY=; b=Qq1AjKhEkO1Rg9IQ07KB0VIF27
	E6vDaBGk3zywy0q2qAG/2lSqWx8cBa2ZkxQj6cwPKGotlq1987bA0ZRqqy1pggv3q8rXJ8qmYT5E4
	lPP+V8pJAAx/bG3BcYVCqplyZJoc+3hRBRKTEFhjGepxZkI4ZVto2lZICHgX78vab2Rk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159600-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 159600: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=5d94433a66df29ce314696a13bdd324ec0e342fe
X-Osstest-Versions-That:
    xen=f894c3d8e705fea9cb3244fa61684bfd8bdd1b2a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 23 Feb 2021 22:55:48 +0000

flight 159600 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159600/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  5d94433a66df29ce314696a13bdd324ec0e342fe
baseline version:
 xen                  f894c3d8e705fea9cb3244fa61684bfd8bdd1b2a

Last test of basis   159548  2021-02-22 12:00:26 Z    1 days
Testing same since   159600  2021-02-23 20:01:30 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   f894c3d8e7..5d94433a66  5d94433a66df29ce314696a13bdd324ec0e342fe -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 00:20:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 00:20:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89136.167666 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEhuf-0005IX-PC; Wed, 24 Feb 2021 00:20:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89136.167666; Wed, 24 Feb 2021 00:20:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEhuf-0005IQ-LH; Wed, 24 Feb 2021 00:20:21 +0000
Received: by outflank-mailman (input) for mailman id 89136;
 Wed, 24 Feb 2021 00:20:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Stqq=H2=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lEhue-0005IG-OF
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 00:20:20 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 70a8238c-a04a-4ffa-8619-79a4178e0301;
 Wed, 24 Feb 2021 00:20:19 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id A5E8464E89;
 Wed, 24 Feb 2021 00:20:18 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 70a8238c-a04a-4ffa-8619-79a4178e0301
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1614126018;
	bh=Z+5yeHvLg7b690ETZIHtVhwsU2cPjYuOULSm4Lixplw=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=AQKCf+sqhZJgio3h1+YwN+m3Z9fweF/VBYpbBDw/VpeSiXyMgepxF1ANJWH6z3a4b
	 RjCD00wPd8dKRH7sWkc9bRiVioxQzvmXduSpRpU6cjT7PW96pplm3a0oyxxWh7dtsC
	 O5NJG4eaTlecXavNRHgvJYVVkdnERUpFusm1p/Hmdd+m6Qqb+rVZYj9izs/YqEU9Te
	 yF/Ewc91PtLXBIWjCAOnr0dR6QsgKu+LRXVNKpssh5UKxds9mU8MDTZLC7Nh7sxkGn
	 aOyeYbxtOIbjDlVtuRK2GG62ng973hzFdvSCQvge+HANDW5CUPyYSqLXyELaJFEOSD
	 1N5FLDYKz8P0A==
Date: Tue, 23 Feb 2021 16:20:18 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, Bertrand.Marquis@arm.com, 
    Volodymyr_Babchuk@epam.com, rahul.singh@arm.com
Subject: Re: [RFC] xen/arm: introduce XENFEAT_ARM_dom0_iommu
In-Reply-To: <98a15b6d-7460-31a0-0b4a-acf035571a17@xen.org>
Message-ID: <alpine.DEB.2.21.2102231520510.3234@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2102161333090.3234@sstabellini-ThinkPad-T480s> <2d22d5b8-6cda-f27b-e938-4806b65794a5@xen.org> <alpine.DEB.2.21.2102171233270.3234@sstabellini-ThinkPad-T480s> <0be0196f-5b3f-73f9-5ab7-7a54faabec5c@xen.org>
 <alpine.DEB.2.21.2102180920570.3234@sstabellini-ThinkPad-T480s> <98a15b6d-7460-31a0-0b4a-acf035571a17@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 19 Feb 2021, Julien Grall wrote:
> On 19/02/2021 01:42, Stefano Stabellini wrote:
> > On Thu, 18 Feb 2021, Julien Grall wrote:
> > > On 17/02/2021 23:54, Stefano Stabellini wrote:
> > > > On Wed, 17 Feb 2021, Julien Grall wrote:
> > > > > On 17/02/2021 02:00, Stefano Stabellini wrote:
> > > > But actually it was always wrong for Linux to enable swiotlb-xen without
> > > > checking whether it is 1:1 mapped or not. Today we enable swiotlb-xen in
> > > > dom0 and disable it in domU, while we should have enabled swiotlb-xen
> > > > whenever the domain is 1:1 mapped, regardless of dom0/domU. (swiotlb-xen
> > > > could be useful in a 1:1 mapped domU driver domain.)
> > > > 
> > > > 
> > > > There is an argument (Andy was making on IRC) that being 1:1 mapped or
> > > > not is an important information that Xen should provide to the domain
> > > > regardless of anything else.
> > > > 
> > > > So maybe we should add two flags:
> > > > 
> > > > - XENFEAT_direct_mapped
> > > > - XENFEAT_not_direct_mapped
> > > 
> > > I am guessing the two flags are to allow Linux to fall back to the default
> > > behavior (depending on dom0 vs domU) on older hypervisors. On newer
> > > hypervisors, one of these flags would always be set. Is that correct?
> > 
> > Yes. On a newer hypervisor one of the two would be present and Linux can
> > make an informed decision. On an older hypervisor, neither flag would be
> > present, so Linux will have to keep doing what it is currently doing.
> > 
> >   
> > > > To all domains. This is not even ARM specific. Today dom0 would get
> > > > XENFEAT_direct_mapped and domUs XENFEAT_not_direct_mapped. With cache
> > > > coloring, all domains will get XENFEAT_not_direct_mapped. With Bertrand's
> > > > team's work on 1:1 mapping domUs, some domUs might also start to get
> > > > XENFEAT_direct_mapped one day soon.
> > > > 
> > > > Now I think this is the best option because it is descriptive, doesn't
> > > > imply anything about what Linux should or should not do, and doesn't
> > > > depend on unreliable IOMMU information.
> > > 
> > > That's a good first step, but this still doesn't solve the problem of
> > > whether the swiotlb can be disabled per-device, or of disabling the
> > > expensive 1:1 mapping in the IOMMU page-tables.
> > > 
> > > It feels to me we need a more complete solution (not necessarily
> > > implemented yet) so we don't paint ourselves into a corner again.
> > 
> > Yeah, XENFEAT_{not_,}direct_mapped help clean things up, but don't
> > solve the issues you described. Those are difficult to solve; it would
> > be nice to have some ideas.
> > 
> > One issue is that we only have limited information passed via the device
> > tree, essentially just the "iommus" property. If that's all we have, there
> > isn't much we can do.
> 
> We can actually do a lot with that :). See more below.
> 
> > The device tree list is maybe the only option,
> > although intuitively it feels a bit complex. We could maybe replace the
> > real iommu node with a fake iommu node, using it only to "tag" devices
> > protected by the real iommu.
> > 
> > I like the idea of rewarding well-designed boards: boards that have an
> > IOMMU that works for all DMA-mastering devices. It would be great to be
> > able to optimize for those in a simple way without breaking the others. But
> > unfortunately, due to the limited info in the device tree, I cannot think
> > of a way to do it automatically. And it is not great to rely on platform
> > files.
> 
> We would not be able to automate this in Xen alone; however, we can ask for
> Linux's help.
> 
> Xen is able to tell whether it has protected a device with an IOMMU or not.
> When creating the domain device-tree, it could replace the IOMMU node with a
> Xen-specific one.
> 
> With the Xen IOMMU nodes, Linux could find out whether a device needs to use
> the swiotlb ops or not.

That might work.

Another similar idea is to use "dma-ranges" in the device tree.  dma-ranges
can only be used for a bus, and allows us to specify special DMA address
mappings between the nodes under the bus and the parent address space.

If we created a new bus node called "amba-nodma" with a "dma-ranges" that
translates child addresses into an invalid address or a size of zero, and
moved devices without "iommus" under it, the consequence would be that all
those devices remain accessible and usable by Linux, but no DMA
transactions could originate from them. Or rather, the transactions would
fail explicitly.

Thus, IOMMU-protected devices would continue to work as normal.
Non-DMA-mastering, non-IOMMU-protected devices would also continue to
work as normal.
DMA-mastering, non-IOMMU-protected devices would not work, but the
failure would be controlled and explicit.

At that point, swiotlb-xen could be enabled only for devices that have
this special dma-ranges address translation.
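Purely as an illustration of the shape such a node could take -- the "amba-nodma" name, the "simple-bus" compatible, the addresses, and the zero-sized dma-ranges entry are all hypothetical here; this is not an existing binding:

```dts
/* Hypothetical container bus for DMA-mastering devices that are NOT
 * protected by the IOMMU.  CPU/MMIO accesses pass through unchanged
 * (empty "ranges"), but the dma-ranges entry maps the child DMA space
 * to a zero-sized window, so any DMA mapping attempt fails explicitly. */
amba-nodma {
    compatible = "simple-bus";
    #address-cells = <2>;
    #size-cells = <2>;
    ranges;
    dma-ranges = <0x0 0x0  0x0 0x0  0x0 0x0>;   /* zero size: no valid DMA */

    ethernet@ff0e0000 {
        /* moved under this bus because it has no "iommus" property */
        reg = <0x0 0xff0e0000 0x0 0x1000>;
    };
};
```

A guest kernel walking its DT would then see the restricted DMA parent for exactly the set of devices that need bouncing.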

On the one hand, this technique would require fewer changes in Linux -- it
might even work with almost no changes on the Linux side, thanks to the
automatic fallback to the swiotlb when dma_mask checks fail.

On the other hand, I really dislike the magic invalid-address translation.
If an address is invalid, the device's DMA should not work at all; it
should not trigger swiotlb-xen. I can't think of a way to make this clean
from a device tree spec perspective, but I thought I would mention it in
case it gives you any good ideas.
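Coming back to the XENFEAT_direct_mapped / XENFEAT_not_direct_mapped proposal above, the guest-side fallback logic could be sketched as below. This is a standalone mock: the flag values and the simulated feature bitmap are invented for illustration (the real numbers would come from Xen's public headers if the proposal lands), and xen_feature() stands in for the existing Linux helper of the same name.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical flag numbers -- placeholders only. */
enum { XENFEAT_direct_mapped = 1, XENFEAT_not_direct_mapped = 2 };

/* Simulated answer from the hypervisor's feature query. */
static unsigned long simulated_features;

static bool xen_feature(int nr)
{
    return simulated_features & (1UL << nr);
}

/* Decide whether this domain should use swiotlb-xen. */
static bool want_swiotlb_xen(bool is_hardware_domain)
{
    if (xen_feature(XENFEAT_direct_mapped))
        return true;               /* 1:1 mapped: bounce buffers needed */
    if (xen_feature(XENFEAT_not_direct_mapped))
        return false;              /* definitely not 1:1 mapped */
    /* Older hypervisor: neither flag set, keep today's heuristic. */
    return is_hardware_domain;
}
```

The point of the two-flag scheme is visible in the last branch: an old hypervisor advertises neither flag, so the guest silently keeps its current dom0/domU behavior.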


> Skipping extra mapping in the IOMMU is a bit trickier. I can see two
> solutions:
>   - A per-domain toggle to skip the IOMMU mapping. This is assuming that Linux
> is able to know that all DMA-capable devices are protected. The problem is that
> a driver may be loaded later. Such drivers are unlikely to use existing grants,
> so the toggle could be used to say "all the grants after this point will
> require a mapping (or not)"

I cannot think of a way for Linux to figure out that all DMA-capable
devices are protected. Is the idea that Linux would base the decision on
the Linux drivers rather than on the capabilities of the hardware? Even
using the drivers, I don't know if it would be possible to implement.


>   - A per-grant flag to indicate whether an IOMMU mapping is necessary. This
> is assuming we are able to know whether a grant will be used for DMA.

That might be easier because the caller of gnttab_map_refs in Linux
should be able to make a decent guess.
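The per-grant variant could be pictured as below. The GNTMAP_no_iommu name and its bit position are invented for this sketch (the real GNTMAP_* flags live in Xen's public grant-table header, and only GNTMAP_device_map's value is shown as in the real interface); the function is a hypervisor-side mock of the decision, not actual Xen code.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define GNTMAP_device_map (1u << 0)  /* existing flag: map for device (DMA) use */
#define GNTMAP_no_iommu   (1u << 7)  /* HYPOTHETICAL flag proposed in the thread */

/* Hypervisor-side sketch: should mapping this grant also install an
 * IOMMU mapping for the mapping domain? */
static bool grant_needs_iommu_map(uint32_t map_flags, bool domain_uses_iommu)
{
    if (!domain_uses_iommu)
        return false;               /* no IOMMU in use at all */
    if (map_flags & GNTMAP_no_iommu)
        return false;               /* caller promised no DMA on this grant */
    return true;                    /* default: keep the expensive mapping */
}
```

The backend caller (e.g. around gnttab_map_refs) would set the flag only for grants it knows will never be handed to a device, which is the "decent guess" mentioned above.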


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 00:20:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 00:20:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89135.167653 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEhuV-0005GW-G2; Wed, 24 Feb 2021 00:20:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89135.167653; Wed, 24 Feb 2021 00:20:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEhuV-0005GP-Cw; Wed, 24 Feb 2021 00:20:11 +0000
Received: by outflank-mailman (input) for mailman id 89135;
 Wed, 24 Feb 2021 00:20:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FJMv=H2=epam.com=prvs=26891aedce=volodymyr_babchuk@srs-us1.protection.inumbo.net>)
 id 1lEhuU-0005GK-N8
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 00:20:10 +0000
Received: from mx0b-0039f301.pphosted.com (unknown [148.163.137.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7a8849a5-def2-4bc3-9991-99c71466fea4;
 Wed, 24 Feb 2021 00:20:08 +0000 (UTC)
Received: from pps.filterd (m0174682.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 11O0BRwU001376; Wed, 24 Feb 2021 00:20:01 GMT
Received: from eur04-he1-obe.outbound.protection.outlook.com
 (mail-he1eur04lp2051.outbound.protection.outlook.com [104.47.13.51])
 by mx0b-0039f301.pphosted.com with ESMTP id 36w0ymta87-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Wed, 24 Feb 2021 00:20:01 +0000
Received: from AM0PR03MB3508.eurprd03.prod.outlook.com (2603:10a6:208:4f::23)
 by AM9PR03MB6977.eurprd03.prod.outlook.com (2603:10a6:20b:2d8::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3890.19; Wed, 24 Feb
 2021 00:19:58 +0000
Received: from AM0PR03MB3508.eurprd03.prod.outlook.com
 ([fe80::a9a4:6122:8de2:64cb]) by AM0PR03MB3508.eurprd03.prod.outlook.com
 ([fe80::a9a4:6122:8de2:64cb%6]) with mapi id 15.20.3846.048; Wed, 24 Feb 2021
 00:19:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7a8849a5-def2-4bc3-9991-99c71466fea4
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=S/w5Wp6ZrXceZmuHnQnbemItW19soW8JCdf1Uc2SP3Ov3pfmZad7nP1O9sF/VVZJgeUFRhjqYAs4nG99k85N1fvWvmWbmFob/pm7r3+0Eo9vu/Z82TyaedXkHw92v/8+/ADdNLFHwWEPEgw+qey0VslD7OjgNLSHyTSZRLH9rTeoGQUvNjIEzXqNagpG/qEJMFK/yYMvs9mY42Kw7RQdfrPVPC1kEpGhqy4NXLKtBPNspBfgiwB9dDHLbWF2tb9miwS8aG4EmZ/+r2j6nsFf0yX+1a/Lb0yJ2uRtS+D17TtVtMBkE4WfQ87cQAqb1ngDZegzHLeVMcnNhV5NHDNoaw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3/pA7clmf8L2PrYGDWbFd1HANlAQeQJl6RNhm3tdmYQ=;
 b=T0cy9T5T7jM19lTBYHrE8rISYUFhp/LOyZ7l0KfHJQCJsPqXFQVT2lWVkUQLvYDY5dbc1nlE6shyS7snny1xcQHVjMLW2vqdjWF/2kYGF6uKDOj3aj+HbYnDXJ27SckFGJYEkfxjR1efM4/ZT0tV4dLhREhVzNZGBSKOhSJt4tGJDN6uluakTPY7y/blkn1PiayIhzXk3NZckutQusW2NNu2OZpDbRLj5Uq/IyQmvRiufLLhiFEiymFW8HURs83/KCeaK0xfWOZSVtNeS60anwwCwqqAEDj23Exg4GR/w8nCRYE4lKclHWstqoljISwrADR6YzdxyVQ+WsIWDlz4vw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3/pA7clmf8L2PrYGDWbFd1HANlAQeQJl6RNhm3tdmYQ=;
 b=kE6H3O+49MHAyaG0ToA8rzhBHeeftglkvCj09pVymM1fiZQkiK1y2cA33ElNMIk7/X2hZ422bdrAM51O/VHuH2VfNw9c9dO17WhNJiwu/dQworPZfLK52FGzgj+yecf/cWxbZc8NU9RGbwcB3/CFtDpo3ahAieIkJxlPuVevx0ZeDuEdDNPYcbxMVRgyZXFDYN4g8EcGF93h1A7z+qp+TmvA16sOjmI9Bxi+fwy+FYJYHtuoEckQRkBOzg7gn+rwTrs93vBdykouWFRLZrQ+PDg0DpGiFKFgYKADHYlx1JAyHcMa8oH9Xm7hYU59D1bbxSMGdUaZZee3p6jZBo45Zg==
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        "famzheng@amazon.com" <famzheng@amazon.com>,
        "cardoe@cardoe.com"
	<cardoe@cardoe.com>, "wl@xen.org" <wl@xen.org>,
        "Bertrand.Marquis@arm.com"
	<Bertrand.Marquis@arm.com>,
        "julien@xen.org" <julien@xen.org>,
        "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>
Subject: Re: [RFC PATCH 00/10] Preemption in hypervisor (ARM only)
Thread-Topic: [RFC PATCH 00/10] Preemption in hypervisor (ARM only)
Thread-Index: AQHXCiKc1pmgaaPjJEuqIKIpTRVLeqpmcUqA
Date: Wed, 24 Feb 2021 00:19:57 +0000
Message-ID: <87v9ai5nbm.fsf@epam.com>
References: <161405394665.5977.17427402181939884734@c667a6b167f6>
 <alpine.DEB.2.21.2102231228060.3234@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2102231228060.3234@sstabellini-ThinkPad-T480s>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
user-agent: mu4e 1.4.10; emacs 27.1
authentication-results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=epam.com;
x-originating-ip: [176.36.48.175]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 49b52ac9-9329-425e-757f-08d8d859eb1c
x-ms-traffictypediagnostic: AM9PR03MB6977:
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB3508.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 49b52ac9-9329-425e-757f-08d8d859eb1c
X-MS-Exchange-CrossTenant-originalarrivaltime: 24 Feb 2021 00:19:57.9795
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: +4M8hirs1hzCsdZpCU6QM4UnjPivJJPvuEfwuR0vavtkGld+GagRJzJ/+qSvdQ6iq4mkpMn+xfvukSp0K4sO0PIHqaEfKN2oHUmyuPdR/oM=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR03MB6977


Hi Stefano,

Stefano Stabellini writes:

> Hi Volodymyr,
>
> This looks like a genuine failure:

Thank you for the report. I have just been debugging similar issues, which
seem to happen randomly, and found a flaw in the this_cpu() implementation.
It is currently not compatible with preemption in hypervisor mode.

The CPU id may be read while running on one pCPU, but the code can then be
preempted and continue running on another pCPU while still accessing the
data of the previous pCPU.

In my case this mostly happens with the __preempt_count variable, but other
per_cpu variables are affected too. Linux uses the get_cpu_var/put_cpu_var
pair of functions, which temporarily disable and re-enable preemption.
Something like that should be implemented in my patches as well, although
__preempt_count itself will of course need a completely different approach.
I am looking for a solution right now.
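The pattern being described can be sketched with a standalone mock: the preemption counter, CPU id, and per-CPU array below are simulated (the real Linux get_cpu_var/put_cpu_var are macros built on preempt_disable/preempt_enable), but the ordering requirement is the same -- disable preemption *before* reading the CPU id, so the task cannot migrate while it holds a pointer into another pCPU's data.

```c
#include <assert.h>

#define NR_CPUS 4

static int preempt_count_sim;       /* simulated preemption nesting counter */
static int current_cpu;             /* simulated smp_processor_id() result  */
static int per_cpu_data[NR_CPUS];   /* one data slot per pCPU               */

static void preempt_disable(void) { preempt_count_sim++; }
static void preempt_enable(void)  { preempt_count_sim--; }

/* Disable preemption FIRST, then read the CPU id.  The returned pointer
 * stays valid because the task cannot migrate until put_cpu_var_slot(). */
static int *get_cpu_var_slot(void)
{
    preempt_disable();
    return &per_cpu_data[current_cpu];
}

static void put_cpu_var_slot(void)
{
    preempt_enable();
}
```

Reading the CPU id before the preempt_disable() is exactly the race described above: a migration between the two steps leaves the code updating the old pCPU's slot.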

> https://gitlab.com/xen-project/patchew/xen/-/jobs/1048475444
>
>
> (XEN) Data Abort Trap. Syndrome=3D0x1930046
> (XEN) Walking Hypervisor VA 0xf0008 on CPU0 via TTBR 0x0000000040545000
> (XEN) 0TH[0x0] =3D 0x0000000040544f7f
> (XEN) 1ST[0x0] =3D 0x0000000040541f7f
> (XEN) 2ND[0x0] =3D 0x0000000000000000
> (XEN) CPU0: Unexpected Trap: Data Abort
> (XEN) ----[ Xen-4.15-unstable  arm64  debug=3Dy  Tainted: U     ]----
> (XEN) CPU:    0
> (XEN) PC:     00000000002273b8 timer.c#remove_from_heap+0x2c/0x114
> (XEN) LR:     0000000000227530
> (XEN) SP:     000080003ff7f9a0
> (XEN) CPSR:   800002c9 MODE:64-bit EL2h (Hypervisor, handler)
> (XEN)      X0: 000080000234e6a0  X1: 0000000000000001  X2: 0000000000000000
> (XEN)      X3: 00000000000f0000  X4: 0000000000000000  X5: 00000000014d014d
> (XEN)      X6: 0000000000000080  X7: fefefefefefeff09  X8: 7f7f7f7f7f7f7f7f
> (XEN)      X9: 717164616f726051 X10: 7f7f7f7f7f7f7f7f X11: 0101010101010101
> (XEN)     X12: 0000000000000008 X13: 0000000000000001 X14: 000080003ff7fa78
> (XEN)     X15: 0000000000000020 X16: 000000000028e558 X17: 0000000000000000
> (XEN)     X18: 00000000fffffffe X19: 0000000000000001 X20: 0000000000310180
> (XEN)     X21: 00000000000002c0 X22: 0000000000000000 X23: 0000000000346008
> (XEN)     X24: 0000000000310180 X25: 0000000000000000 X26: 00008000044e91b8
> (XEN)     X27: 000000000000ffff X28: 0000000041570018  FP: 000080003ff7f9a0
> (XEN)
> (XEN)   VTCR_EL2: 80043594
> (XEN)  VTTBR_EL2: 000200007ffe3000
> (XEN)
> (XEN)  SCTLR_EL2: 30cd183d
> (XEN)    HCR_EL2: 00000000807c663f
> (XEN)  TTBR0_EL2: 0000000040545000
> (XEN)
> (XEN)    ESR_EL2: 97930046
> (XEN)  HPFAR_EL2: 0000000000030010
> (XEN)    FAR_EL2: 00000000000f0008
> (XEN)
> (XEN) Xen stack trace from sp=000080003ff7f9a0:
> (XEN)    000080003ff7f9c0 0000000000227530 00008000044e9190 00000000002280dc
> (XEN)    000080003ff7f9e0 0000000000228234 00008000044e9190 000000000024dd04
> (XEN)    000080003ff7fa40 000000000024a414 0000000000311390 000080000234e430
> (XEN)    0000800002345000 0000000000000000 0000000000346008 00008000044e9150
> (XEN)    0000000000000001 0000000000000000 0000000000000240 0000000000270474
> (XEN)    000080003ff7faa0 000000000024b91c 0000000000000001 0000000000310238
> (XEN)    000080003ff7fbf8 0000000080000249 0000000093860047 00000000002a1de0
> (XEN)    000080003ff7fc88 00000000002a1de0 00000000000002c0 00008000044e9470
> (XEN)    000080003ff7fab0 00000000002217b4 000080003ff7fad0 000000000027a8c0
> (XEN)    0000000000311324 00000000002a1de0 000080003ff7fc00 0000000000265310
> (XEN)    0000000000000000 00000000002263d8 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000020
> (XEN)    0000000000000080 fefefefefefeff09 7f7f7f7f7f7f7f7f 717164616f726051
> (XEN)    7f7f7f7f7f7f7f7f 0101010101010101 0000000000000008 0000000000000001
> (XEN)    000080003ff7fa78 0000000000000020 000000000028e558 0000000000000000
> (XEN)    00000000fffffffe 0000000000000000 0000000000310238 000000000000000a
> (XEN)    0000000000310238 00000000002a64b0 00000000002a1de0 000080003ff7fc88
> (XEN)    0000000000000000 0000000000000240 0000000041570018 000080003ff7fc00
> (XEN)    000000000024c8c0 000080003ff7fc00 000000000024c8c4 9386004780000249
> (XEN)    000080003ff7fc90 000000000024c974 0000000000000384 0000000000000002
> (XEN)    0000800002345000 00000000ffffffff 0000000000000006 000080003ff7fe20
> (XEN)    0000000000000001 000080003ff7fe00 000080003ffe4a60 000080000234e430
> (XEN)    000080003ff7fd20 000080003ff7fd20 000080003ff7fce0 00000000ffffffc8
> (XEN)    000080003ff7fce0 000000000031a147 000080003ff7fd20 000000000027f7b8
> (XEN)    000080003ff7fd20 000080003ff7fd20 000080003ff7fce0 00000000ffffffc8
> (XEN)    000080003ff7fd20 000080003ff7fd20 000080003ff7fce0 00000000ffffffc8
> (XEN)    0000000000000240 0000800002345000 00000000ffffffff 0000000000000004
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000022
> (XEN)    000080003ff7fda0 000000000026ff2c 000000000027f608 0000000000000000
> (XEN)    0000000000000093 0000800002345000 0000000000000000 000080003ffe4a60
> (XEN)    0000000000000001 000080003ff7fe00 000080003ffe4a60 0000000041570018
> (XEN)    000080003ff7fda0 000000000026fee0 000080003ff7fda0 000000000026ff18
> (XEN)    000080003ff7fe30 0000000000279b2c 0000000093860047 0000000000000090
> (XEN)    0000000003001384 000080003ff7feb0 ffff800011dc1384 ffff8000104b06a0
> (XEN)    ffff8000104b0240 ffff00000df806e8 0000000000000000 ffff800011b0ca88
> (XEN)    0000000003001384 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000093860047 0000000003001384 000080003ff7fe70 000000000027a180
> (XEN)    000080003ff7feb0 0000000093860047 0000000093860047 0000000060000085
> (XEN)    0000000093860047 ffff800011b0ca88 ffff800011b03d90 0000000000265458
> (XEN)    0000000000000000 ffff800011b0ca88 000080003ff7ffb8 000000000026545c
> (XEN) Xen call trace:
> (XEN)    [<00000000002273b8>] timer.c#remove_from_heap+0x2c/0x114 (PC)
> (XEN)    [<0000000000227530>] timer.c#remove_entry+0x90/0xa0 (LR)
> (XEN)    [<0000000000227530>] timer.c#remove_entry+0x90/0xa0
> (XEN)    [<0000000000228234>] stop_timer+0x1fc/0x254
> (XEN)    [<000000000024a414>] core.c#schedule+0xf4/0x380
> (XEN)    [<000000000024b91c>] wait+0xc/0x14
> (XEN)    [<00000000002217b4>] try_preempt+0x88/0xbc
> (XEN)    [<000000000027a8c0>] do_trap_irq+0x5c/0x60
> (XEN)    [<0000000000265310>] entry.o#hyp_irq+0x7c/0x80
> (XEN)    [<000000000024c974>] printk+0x68/0x70
> (XEN)    [<000000000027f7b8>] vgic-v2.c#vgic_v2_distr_mmio_write+0x1b0/0x7ac
> (XEN)    [<000000000026ff2c>] try_handle_mmio+0x1ac/0x27c
> (XEN)    [<0000000000279b2c>] traps.c#do_trap_stage2_abort_guest+0x18c/0x2d8
> (XEN)    [<000000000027a180>] do_trap_guest_sync+0x10c/0x63c
> (XEN)    [<0000000000265458>] entry.o#guest_sync_slowpath+0xa4/0xd4
> (XEN)
> (XEN)
> (XEN) ****************************************
> (XEN) Panic on CPU 0:
> (XEN) CPU0: Unexpected Trap: Data Abort
> (XEN) ****************************************
>
>
> On Mon, 22 Feb 2021, no-reply@patchew.org wrote:
>> Hi,
>>
>> Patchew automatically ran gitlab-ci pipeline with this patch (series) applied, but the job failed. Maybe there's a bug in the patches?
>>
>> You can find the link to the pipeline near the end of the report below:
>>
>> Type: series
>> Message-id: 20210223023428.757694-1-volodymyr_babchuk@epam.com
>> Subject: [RFC PATCH 00/10] Preemption in hypervisor (ARM only)
>>
>> === TEST SCRIPT BEGIN ===
>> #!/bin/bash
>> sleep 10
>> patchew gitlab-pipeline-check -p xen-project/patchew/xen
>> === TEST SCRIPT END ===
>>
>> warning: redirecting to https://gitlab.com/xen-project/patchew/xen.git/
>> warning: redirecting to https://gitlab.com/xen-project/patchew/xen.git/
>> From https://gitlab.com/xen-project/patchew/xen
>>  * [new tag]               patchew/20210223023428.757694-1-volodymyr_babchuk@epam.com -> patchew/20210223023428.757694-1-volodymyr_babchuk@epam.com
>> Switched to a new branch 'test'
>> a569959cc0 alloc pages: enable preemption early
>> c943c35519 arm: traps: try to preempt before leaving IRQ handler
>> 4b634d1924 arm: context_switch: allow to run with IRQs already disabled
>> 7d78d6e861 sched: core: remove ASSERT_NOT_IN_ATOMIC and disable preemption[!]
>> d56302eb03 arm: setup: disable preemption during startup
>> 18a52ab80a preempt: add try_preempt() function
>> 9c4a07d0fa preempt: use atomic_t to for preempt_count
>> 904e59f28e sched: credit2: save IRQ state during locking
>> 3e3726692c sched: rt: save IRQ state during locking
>> c552842efc sched: core: save IRQ state during locking
>>
>> === OUTPUT BEGIN ===
>> [2021-02-23 02:38:00] Looking up pipeline...
>> [2021-02-23 02:38:01] Found pipeline 260183774:
>>
>> https://gitlab.com/xen-project/patchew/xen/-/pipelines/260183774
>>
>> [2021-02-23 02:38:01] Waiting for pipeline to finish...
>> [2021-02-23 02:53:10] Still waiting...
>> [2021-02-23 03:08:19] Still waiting...
>> [2021-02-23 03:23:29] Still waiting...
>> [2021-02-23 03:38:38] Still waiting...
>> [2021-02-23 03:53:48] Still waiting...
>> [2021-02-23 04:08:57] Still waiting...
>> [2021-02-23 04:19:05] Pipeline failed
>> [2021-02-23 04:19:06] Job 'qemu-smoke-x86-64-clang-pvh' in stage 'test' is failed
>> [2021-02-23 04:19:06] Job 'qemu-smoke-x86-64-gcc-pvh' in stage 'test' is failed
>> [2021-02-23 04:19:06] Job 'qemu-smoke-x86-64-clang' in stage 'test' is failed
>> [2021-02-23 04:19:06] Job 'qemu-smoke-x86-64-gcc' in stage 'test' is failed
>> [2021-02-23 04:19:06] Job 'qemu-smoke-arm64-gcc' in stage 'test' is failed
>> [2021-02-23 04:19:06] Job 'qemu-alpine-arm64-gcc' in stage 'test' is failed
>> [2021-02-23 04:19:06] Job 'alpine-3.12-clang-debug' in stage 'build' is failed
>> [2021-02-23 04:19:06] Job 'alpine-3.12-clang' in stage 'build' is failed
>> [2021-02-23 04:19:06] Job 'alpine-3.12-gcc-debug' in stage 'build' is failed
>> [2021-02-23 04:19:06] Job 'alpine-3.12-gcc' in stage 'build' is failed
>> === OUTPUT END ===
>>
>> Test command exited with code: 1


-- 
Volodymyr Babchuk at EPAM


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 00:32:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 00:32:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Subject: Re: [PATCH] xen-front-pgdir-shbuf: don't record wrong grant handle
 upon error
To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>,
        Jan Beulich <jbeulich@suse.com>, Juergen Gross <jgross@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <82414b0f-1b63-5509-7c1d-5bcc8239a3de@suse.com>
 <75c64c79-7458-19ca-1346-4c0e090cf0f7@epam.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <42f5f00d-f06c-935e-13eb-5facd0b4ed30@oracle.com>
Date: Tue, 23 Feb 2021 19:31:49 -0500
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
In-Reply-To: <75c64c79-7458-19ca-1346-4c0e090cf0f7@epam.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
MIME-Version: 1.0


On 2/23/21 1:28 PM, Oleksandr Andrushchenko wrote:
> Hello, Jan!
>
> On 2/23/21 6:26 PM, Jan Beulich wrote:
>> In order for subsequent unmapping to not mistakenly unmap handle 0,
>> record a perceived always-invalid one instead.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> Reviewed-by: Juergen Gross <jgross@suse.com>
> Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
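The pattern Jan's fix describes — never leaving a valid-looking handle (0 is a valid grant handle) in a slot whose map failed, so the later unmap pass can tell mapped from unmapped entries apart — can be sketched like this. All names below are illustrative; the real driver uses Xen's grant-table types and INVALID_GRANT_HANDLE:

```c
/* Sketch of the "record an always-invalid handle on error" pattern.
 * Handle 0 is a legitimate handle, so it must not be used as the
 * "never mapped" marker; a dedicated invalid value is used instead. */
#include <stdint.h>

typedef uint32_t grant_handle_t;                    /* hypothetical */
#define INVALID_GRANT_HANDLE ((grant_handle_t)~0U)  /* hypothetical */

static grant_handle_t handles[4];

/* On a failed map, record the invalid handle rather than leaving 0,
 * which unmap would otherwise mistakenly tear down. */
static void record_map_result(unsigned int i, int err, grant_handle_t h)
{
    handles[i] = err ? INVALID_GRANT_HANDLE : h;
}

/* The unmap path skips entries that were never successfully mapped. */
static int should_unmap(unsigned int i)
{
    return handles[i] != INVALID_GRANT_HANDLE;
}
```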



Applied to for-linus-5.12b



From xen-devel-bounces@lists.xenproject.org Wed Feb 24 01:08:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 01:08:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Date: Tue, 23 Feb 2021 17:08:43 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: xen-devel@lists.xenproject.org
cc: sstabellini@kernel.org, jbeulich@suse.com, andrew.cooper3@citrix.com, 
    wl@xen.org, iwj@xenproject.org, anthony.perard@citrix.com
Subject: [PATCH for-next] configure: probe for gcc -m32 integer sizes
Message-ID: <alpine.DEB.2.21.2102231648580.3234@sstabellini-ThinkPad-T480s>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

The hvmloader build on Alpine Linux x86_64 currently fails:


hvmloader.c: In function 'init_vm86_tss':
hvmloader.c:202:39: error: left shift count >= width of type [-Werror=shift-count-overflow]
  202 |                   ((uint64_t)TSS_SIZE << 32) | virt_to_phys(tss));

util.c: In function 'get_cpu_mhz':
util.c:824:15: error: conversion from 'long long unsigned int' to 'uint64_t' {aka 'long unsigned int'} changes value from '4294967296000000' to '0' [-Werror=overflow]
  824 |     cpu_khz = 1000000ull << 32;


The root cause of the issue is that gcc -m32 picks up headers meant for
64-bit builds.

The failures are currently causing problems to the xen-project
gitlab-ci pipeline.

This patch introduces code to detect this kind of error in the
configure script, and disables the compilation of hvmloader with a
warning if problems are detected.

This patch also updates tools/configure. It has been done by calling
autoreconf -fi.
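The probe the patch adds relies on a compile-time assertion: if the
condition is true, the array inside sizeof gets a negative length and
compilation fails, so a headers mismatch is caught without running
anything. Here is that conftest in isolation (configure compiles it
with "gcc -m32"; compiled natively it only demonstrates the mechanism):

```c
/* Compile-time size probe: BUILD_BUG_ON(p) yields a negative array
 * size when p is true, so the build fails.  When gcc -m32 picks up
 * 64-bit headers, sizeof(uint64_t) comes out wrong and this conftest
 * does not compile, which the configure check turns into a warning
 * plus CONFIG_HVMLOADER being left unset. */
#include <stdint.h>

#define BUILD_BUG_ON(p) ((void)sizeof(char[1 - 2 * !!(p)]))

static void probe(void)
{
    BUILD_BUG_ON(sizeof(uint64_t) != 8);  /* passes on matching headers */
    BUILD_BUG_ON(sizeof(uint32_t) != 4);
}
```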

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>


diff --git a/config/Tools.mk.in b/config/Tools.mk.in
index d47936686b..395ed2a6d2 100644
--- a/config/Tools.mk.in
+++ b/config/Tools.mk.in
@@ -51,6 +51,7 @@ CONFIG_OVMF         := @ovmf@
 CONFIG_ROMBIOS      := @rombios@
 CONFIG_SEABIOS      := @seabios@
 CONFIG_IPXE         := @ipxe@
+CONFIG_HVMLOADER    := @hvmloader@
 CONFIG_QEMU_TRAD    := @qemu_traditional@
 CONFIG_QEMU_XEN     := @qemu_xen@
 CONFIG_QEMUU_EXTRA_ARGS:= @EXTRA_QEMUU_CONFIGURE_ARGS@
diff --git a/tools/configure b/tools/configure
index bb5acf9d43..f23a3bb8aa 100755
--- a/tools/configure
+++ b/tools/configure
@@ -687,6 +687,7 @@ INSTALL_DATA
 INSTALL_SCRIPT
 INSTALL_PROGRAM
 SET_MAKE
+hvmloader
 AWK
 IASL
 XGETTEXT
@@ -5279,6 +5280,25 @@ ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'
 ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'
 ac_compiler_gnu=$ac_cv_c_compiler_gnu
 
+ac_ext=c
+ac_cpp='$CPP $CPPFLAGS'
+ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'
+ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'
+ac_compiler_gnu=$ac_cv_c_compiler_gnu
+
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+#include <stdint.h>
+#define BUILD_BUG_ON(p) ((void)sizeof(char[1 - 2 * !!(p)]))
+int main() { BUILD_BUG_ON(sizeof(uint64_t) != 8); }
+_ACEOF
+if gcc -m32 -c conftest.c -o /dev/null 2>/dev/null; then :
+  hvmloader=y
+else
+  { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: hvmloader build disabled as the compiler cannot build 32bit binaries" >&5
+$as_echo "$as_me: WARNING: hvmloader build disabled as the compiler cannot build 32bit binaries" >&2;}
+fi
+
 { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether ${MAKE-make} sets \$(MAKE)" >&5
 $as_echo_n "checking whether ${MAKE-make} sets \$(MAKE)... " >&6; }
 set x ${MAKE-make}
diff --git a/tools/configure.ac b/tools/configure.ac
index 636e7077be..706c162322 100644
--- a/tools/configure.ac
+++ b/tools/configure.ac
@@ -307,6 +307,12 @@ AC_ARG_VAR([AWK], [Path to awk tool])
 
 # Checks for programs.
 AC_PROG_CC
+AC_LANG(C)
+AC_LANG_CONFTEST([AC_LANG_SOURCE([[#include <stdint.h>
+#define BUILD_BUG_ON(p) ((void)sizeof(char[1 - 2 * !!(p)]))
+int main() { BUILD_BUG_ON(sizeof(uint64_t) != 8); }]])])
+AS_IF([gcc -m32 -c conftest.c -o /dev/null 2>/dev/null], [hvmloader=y], [AC_MSG_WARN(hvmloader build disabled due to headers mismatch)])
+AC_SUBST(hvmloader)
 AC_PROG_MAKE_SET
 AC_PROG_INSTALL
 AC_PATH_PROG([FLEX], [flex])
diff --git a/tools/firmware/Makefile b/tools/firmware/Makefile
index 1f27117794..5c395ad738 100644
--- a/tools/firmware/Makefile
+++ b/tools/firmware/Makefile
@@ -13,7 +13,7 @@ SUBDIRS-$(CONFIG_ROMBIOS) += rombios
 SUBDIRS-$(CONFIG_ROMBIOS) += vgabios
 SUBDIRS-$(CONFIG_IPXE) += etherboot
 SUBDIRS-$(CONFIG_PV_SHIM) += xen-dir
-SUBDIRS-y += hvmloader
+SUBDIRS-$(CONFIG_HVMLOADER) += hvmloader
 
 SEABIOSCC ?= $(CC)
 SEABIOSLD ?= $(LD)


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 01:51:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 01:51:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159588-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 159588: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=3b9cdafb5358eb9f3790de2f728f765fef100731
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 24 Feb 2021 01:50:51 +0000

flight 159588 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159588/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  10 host-ping-check-xen      fail REGR. vs. 152332
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                3b9cdafb5358eb9f3790de2f728f765fef100731
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  207 days
Failing since        152366  2020-08-01 20:49:34 Z  206 days  356 attempts
Testing same since   159588  2021-02-23 13:40:46 Z    0 days    1 attempts

------------------------------------------------------------
5027 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1228637 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 04:37:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 04:37:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89163.167735 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEluz-0006NM-N9; Wed, 24 Feb 2021 04:36:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89163.167735; Wed, 24 Feb 2021 04:36:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEluz-0006NF-Jz; Wed, 24 Feb 2021 04:36:57 +0000
Received: by outflank-mailman (input) for mailman id 89163;
 Wed, 24 Feb 2021 04:36:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lElux-0006N7-Qk; Wed, 24 Feb 2021 04:36:55 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lElux-0004ra-LF; Wed, 24 Feb 2021 04:36:55 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lElux-0005Vi-Bl; Wed, 24 Feb 2021 04:36:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lElux-0007XV-BE; Wed, 24 Feb 2021 04:36:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=0/3k44Y5ERqxt1cB60AmAB57RKdjryWosn5Mokvdx7I=; b=AVUafvfzE9s3eBz9XIhFZPkYkd
	wm1c2kcXM7DqEodWjtHElZpVj+q9xPdNQnZ3CXcJNTIEThMAjOVHFJn7ntQNh1HrfkFGwDJOPhV5g
	6qmtZaP+4uEz4mn0bJ3lxThTcGHhjwaLwcpNyfgcYv8b7mOz+/NxQ1JrUrpf41W35XWg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159598-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 159598: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=68e5ecc4d208fe900530fdbe6b44dfc85bef60a8
X-Osstest-Versions-That:
    ovmf=a2b5ea38a6fbbcf1a8f4c2128be35121236437a7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 24 Feb 2021 04:36:55 +0000

flight 159598 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159598/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 68e5ecc4d208fe900530fdbe6b44dfc85bef60a8
baseline version:
 ovmf                 a2b5ea38a6fbbcf1a8f4c2128be35121236437a7

Last test of basis   159585  2021-02-23 12:39:43 Z    0 days
Testing same since   159598  2021-02-23 19:09:45 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Achin Gupta <achin.gupta@arm.com>
  Ard Biesheuvel <ardb@kernel.org>
  Ilias Apalodimas <ilias.apalodimas@linaro.org>
  Sughosh Ganu <sughosh.ganu@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   a2b5ea38a6..68e5ecc4d2  68e5ecc4d208fe900530fdbe6b44dfc85bef60a8 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 06:48:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 06:48:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89177.167765 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEnyT-0002B3-7F; Wed, 24 Feb 2021 06:48:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89177.167765; Wed, 24 Feb 2021 06:48:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEnyT-0002Aw-3G; Wed, 24 Feb 2021 06:48:41 +0000
Received: by outflank-mailman (input) for mailman id 89177;
 Wed, 24 Feb 2021 06:48:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tmjC=H2=epam.com=prvs=268986b58a=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1lEnyR-0002Aq-7O
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 06:48:39 +0000
Received: from mx0b-0039f301.pphosted.com (unknown [148.163.137.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 60d5a160-565d-4da3-8b38-51e9cdb542bc;
 Wed, 24 Feb 2021 06:48:38 +0000 (UTC)
Received: from pps.filterd (m0174682.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 11O6jYkN015264; Wed, 24 Feb 2021 06:48:36 GMT
Received: from eur02-ve1-obe.outbound.protection.outlook.com
 (mail-ve1eur02lp2054.outbound.protection.outlook.com [104.47.6.54])
 by mx0b-0039f301.pphosted.com with ESMTP id 36w0ymturc-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Wed, 24 Feb 2021 06:48:36 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM0PR03MB3588.eurprd03.prod.outlook.com (2603:10a6:208:50::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3868.28; Wed, 24 Feb
 2021 06:48:30 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::6156:8f40:92f3:de55]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::6156:8f40:92f3:de55%8]) with mapi id 15.20.3868.033; Wed, 24 Feb 2021
 06:48:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 60d5a160-565d-4da3-8b38-51e9cdb542bc
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=PlRE1MUzd9854R8VT8Wbfkwlp5SrQY54nSCqIhFID6bM8p2gbPJJkNAxQaU+nSRfXABol3tNxgx/WIIvNumm75WnN2yIJCCxwNyD1FYyoXSvYMgCMM/B8wcUB8evQV1AApPnUil6kXEdMZpHthMT8wLhxjvexUeUaNjO2ej0q7rHj7MRM6r16QzkQ1GnherrLA7iJ/9nv771SwjKFfMNG9WhF0fD/sr1gPOaeSCRFzW9SqZjhK4m/zUAnarydj7hKe4twQjhiTrzPEQ1JpFQWMY25XNabvvAeHi4kn6VYYpXYImid2LAxywKfKiq/drspTIi2bKNStDIm4chBn0wsQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=N37LfyqeB/Y7Hov9jr3gImj5owT3IzbTNE6qWyTXww4=;
 b=E664vYgd02Fte/UoAZsMysx5Z2CvsI9eSnNCN462ughM2Cs0VVvZIeaKaJK3sekYfccebw/JDezVO+PJxaTa+NAk3kj5p0lXSRSOpCMZE4L1StFFNjLXHqKyoHml5vO+NmG7cG05W13E799azFfPEDmcWiwKqkL0K5TtDo7aPlnrYmcku9UkjhdozrTRPYexAxLTazff7hDv3q+Gc4qWg5266DHQVTXhsqpFyP2HtXYuygKqz5jaIznHBYX9ELcs3SBnfpZBonJWju8OqQ9wWQ7a6gxsTzWb/pxYb9CkYKKo80kzkBtYK81dVaKWk7rE88rcdM3rEMbgznbFf/tMug==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=N37LfyqeB/Y7Hov9jr3gImj5owT3IzbTNE6qWyTXww4=;
 b=uJl8u93Cp0ebGS0legVVEAaU2ec4K6n9RnR39UtMM5AsJETjUaqlZbzTa79fV7VIL4pmpRTx3v58HBn0BxDu0lnoyzQcXdapj8q1k+5m0LGtLxVO9FmJ90eBMzohwdQVaizK4AnSw7NKxvhoMKxqtQMwD4fxdAfAuUEacrLRFTy1+L2jtrCzRfK1IydBi1pA1a3eOrLQeoKEYMHedOTKSLIjlLTO00Zyo8Uu7tMN0iUa4LXbxldyCw7x6U80tNM+RxHRP/TQN8zptH766xIsH6nqp2hRAGLWDZBdDaakrz42zd8OIFz33TxuZ5HZM5YosGd2jBQO+2ICLF/WN17lCA==
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        "dri-devel@lists.freedesktop.org" <dri-devel@lists.freedesktop.org>
Subject: Re: [PATCH] drm/xen: adjust Kconfig
Thread-Topic: [PATCH] drm/xen: adjust Kconfig
Thread-Index: AQHXCgKuZuE2hAvl4Eu58Fslz5uDM6pm3hiA
Date: Wed, 24 Feb 2021 06:48:30 +0000
Message-ID: <a9597f1a-39a6-3664-8617-90338e7943d0@epam.com>
References: <54ae54f9-1ba9-900b-a56f-f48e2c9a82b0@suse.com>
In-Reply-To: <54ae54f9-1ba9-900b-a56f-f48e2c9a82b0@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 32ca0dee-2568-4c6d-6088-08d8d8903248
x-ms-traffictypediagnostic: AM0PR03MB3588:
x-microsoft-antispam-prvs: 
 <AM0PR03MB3588CDE7EBA6038AD0B0D461E79F9@AM0PR03MB3588.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:8273;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <4CE83E0FC65F6A4C9D0C37EA96107146@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 32ca0dee-2568-4c6d-6088-08d8d8903248
X-MS-Exchange-CrossTenant-originalarrivaltime: 24 Feb 2021 06:48:30.1975
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR03MB3588
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 priorityscore=1501
 bulkscore=0 adultscore=0 impostorscore=0 mlxscore=0 malwarescore=0
 lowpriorityscore=0 spamscore=0 suspectscore=0 mlxlogscore=999 phishscore=0
 clxscore=1011 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2102240053

Hello, Jan!

On 2/23/21 6:41 PM, Jan Beulich wrote:
> By having selected DRM_XEN, I was assuming I would build the frontend
> driver. As it turns out this is a dummy option, and I have really not
> been building this (because I had DRM disabled). Make it a promptless
> one, moving the "depends on" to the other, real option, and "select"ing
> the dummy one.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
> --- a/drivers/gpu/drm/xen/Kconfig
> +++ b/drivers/gpu/drm/xen/Kconfig
> @@ -1,15 +1,11 @@
>   # SPDX-License-Identifier: GPL-2.0-only
>   config DRM_XEN
> -	bool "DRM Support for Xen guest OS"
> -	depends on XEN
> -	help
> -	  Choose this option if you want to enable DRM support
> -	  for Xen.
> +	bool
>   
>   config DRM_XEN_FRONTEND
>   	tristate "Para-virtualized frontend driver for Xen guest OS"
> -	depends on DRM_XEN
> -	depends on DRM
> +	depends on XEN && DRM
> +	select DRM_XEN
>   	select DRM_KMS_HELPER
>   	select VIDEOMODE_HELPERS
>   	select XEN_XENBUS_FRONTEND
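For readability, here is the post-patch drivers/gpu/drm/xen/Kconfig reconstructed from the hunks in the quoted diff (a sketch, not verified against any particular tree). With no prompt string, DRM_XEN can no longer be toggled by users directly; it is only set when DRM_XEN_FRONTEND select-s it, and the user-visible dependencies now live on the real option:

```kconfig
# SPDX-License-Identifier: GPL-2.0-only
config DRM_XEN
	bool

config DRM_XEN_FRONTEND
	tristate "Para-virtualized frontend driver for Xen guest OS"
	depends on XEN && DRM
	select DRM_XEN
	select DRM_KMS_HELPER
	select VIDEOMODE_HELPERS
	select XEN_XENBUS_FRONTEND
```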


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 07:15:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 07:15:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89181.167780 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEoOn-000598-7C; Wed, 24 Feb 2021 07:15:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89181.167780; Wed, 24 Feb 2021 07:15:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEoOn-000591-4D; Wed, 24 Feb 2021 07:15:53 +0000
Received: by outflank-mailman (input) for mailman id 89181;
 Wed, 24 Feb 2021 07:15:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEoOm-00058t-9b; Wed, 24 Feb 2021 07:15:52 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEoOm-0007uF-0E; Wed, 24 Feb 2021 07:15:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEoOl-0004No-ND; Wed, 24 Feb 2021 07:15:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lEoOl-0002kF-Mg; Wed, 24 Feb 2021 07:15:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=tFnnw6GLpwCUZT160zsQuiqOjTbmqR2GRLD+RCRI/C0=; b=B1Kca1ROfGN21HxlgoNjGbHMPt
	D6jjJIFtmPaw4ncIkSiv1PQjcA1K5h5qPkBLzknqUjqC/UTg1yvZkpK7HtlY/+itDVd1safe6VVfW
	qZiLkoPGM5X03fitVM/gY3zmfDHsd1xdPS4AAdA8vezmnUPmGf2tVQ1f/2b/cf52xmig=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159590-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 159590: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-5.4:test-armhf-armhf-libvirt:xen-boot:fail:heisenbug
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=fc944ddc0b4a019d4ece166909e65fa2a11c7e0e
X-Osstest-Versions-That:
    linux=850e6a95deb5a9e6e922ace64bf2dd0ed290ecb7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 24 Feb 2021 07:15:51 +0000

flight 159590 linux-5.4 real [real]
flight 159615 linux-5.4 real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/159590/
http://logs.test-lab.xenproject.org/osstest/logs/159615/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-libvirt      8 xen-boot            fail pass in 159615-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt 16 saverestore-support-check fail in 159615 like 159457
 test-armhf-armhf-libvirt    15 migrate-support-check fail in 159615 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 159457
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 159457
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 159457
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 159457
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 159457
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 159457
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 159457
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 159457
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 159457
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 159457
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 159457
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                fc944ddc0b4a019d4ece166909e65fa2a11c7e0e
baseline version:
 linux                850e6a95deb5a9e6e922ace64bf2dd0ed290ecb7

Last test of basis   159457  2021-02-18 08:59:55 Z    5 days
Testing same since   159590  2021-02-23 14:12:33 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  David Sterba <dsterba@suse.com>
  David Sterba <dsterba@suse.cz>
  Florian Fainelli <f.fainelli@gmail.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Hans Verkuil <hverkuil-cisco@xs4all.nl>
  Igor Matheus Andrade Torrente <igormtorrente@gmail.com>
  Jakub Kicinski <kuba@kernel.org>
  Jan Beulich <jbeulich@suse.com>
  Jason Self <jason@bluehome.net>
  Juergen Gross <jgross@suse.com>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Loic Poulain <loic.poulain@linaro.org>
  Matwey V. Kornilov <matwey@sai.msu.ru>
  Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
  Nikolay Aleksandrov <nikolay@nvidia.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Sasha Levin <sashal@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Wang Hai <wanghai38@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   850e6a95deb5..fc944ddc0b4a  fc944ddc0b4a019d4ece166909e65fa2a11c7e0e -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 07:26:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 07:26:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89187.167794 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEoZD-0006GT-C7; Wed, 24 Feb 2021 07:26:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89187.167794; Wed, 24 Feb 2021 07:26:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEoZD-0006GM-8v; Wed, 24 Feb 2021 07:26:39 +0000
Received: by outflank-mailman (input) for mailman id 89187;
 Wed, 24 Feb 2021 07:26:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=F+xl=H2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lEoZB-0006GH-V4
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 07:26:37 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0fa980ba-ef95-4a71-b3d3-c66f84bb3cfe;
 Wed, 24 Feb 2021 07:26:37 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 465ADAD2B;
 Wed, 24 Feb 2021 07:26:36 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0fa980ba-ef95-4a71-b3d3-c66f84bb3cfe
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614151596; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=y+hTg0wQmFtQ+vVLAIpOFsCYKuz3H3w3GIYg8JoR9qA=;
	b=OE/8cbnvfLIme65otmPurDyNqeUVkItNOh7pn6/S9i5kbGJo4UEVZKkFZ+rQx21TJKs5/9
	L1xRZHuqiOnDDxX+JfkjMbIzWP7Ui6Drxg8fBaaWG38fzj/AhOvhfPqAOxLVwTRhtfrnfL
	oYjAmSqIPKuQn+9NgXmKzybBudGsIgg=
Subject: Re: [PATCH for-next] configure: probe for gcc -m32 integer sizes
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: andrew.cooper3@citrix.com, wl@xen.org, iwj@xenproject.org,
 anthony.perard@citrix.com, xen-devel@lists.xenproject.org
References: <alpine.DEB.2.21.2102231648580.3234@sstabellini-ThinkPad-T480s>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b221e588-c21c-fcd8-6c89-70b424d10620@suse.com>
Date: Wed, 24 Feb 2021 08:26:31 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2102231648580.3234@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 24.02.2021 02:08, Stefano Stabellini wrote:
> --- a/tools/configure.ac
> +++ b/tools/configure.ac
> @@ -307,6 +307,12 @@ AC_ARG_VAR([AWK], [Path to awk tool])
>  
>  # Checks for programs.
>  AC_PROG_CC
> +AC_LANG(C)
> +AC_LANG_CONFTEST([AC_LANG_SOURCE([[#include <stdint.h>
> +#define BUILD_BUG_ON(p) ((void)sizeof(char[1 - 2 * !!(p)]))
> +int main() { BUILD_BUG_ON(sizeof(uint64_t) != 8); }]])])
> +AS_IF([gcc -m32 -c conftest.c -o /dev/null 2>/dev/null], [hvmloader=y], [AC_MSG_WARN(hvmloader build disabled due to headers mismatch)])
> +AC_SUBST(hvmloader)
>  AC_PROG_MAKE_SET
>  AC_PROG_INSTALL
>  AC_PATH_PROG([FLEX], [flex])

I'm fine with the approach now, but I'm rather uncertain about
the insertion point you've selected (in the middle of the
"Checks for programs" section). It'll need to be the tools
maintainers who judge this.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 07:47:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 07:47:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89190.167807 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEotI-0008GS-52; Wed, 24 Feb 2021 07:47:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89190.167807; Wed, 24 Feb 2021 07:47:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEotI-0008GL-16; Wed, 24 Feb 2021 07:47:24 +0000
Received: by outflank-mailman (input) for mailman id 89190;
 Wed, 24 Feb 2021 07:47:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEotH-0008GD-Ki; Wed, 24 Feb 2021 07:47:23 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEotH-0008QC-9f; Wed, 24 Feb 2021 07:47:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEotH-0005Ee-2W; Wed, 24 Feb 2021 07:47:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lEotH-0007A3-1z; Wed, 24 Feb 2021 07:47:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=fZpqKsfbGnO9cJeKsYrlfe87v2vKrt4mbmwmv8ZoI1c=; b=WnMZT+0VUR/J02XH1q4m2pavJM
	BoeZfw4n/d5Y1y0sTGuAtbVTod3JbZvQS2jqUjFaehgzFFBeyyDCSDa1Bu/nBYTece2rWFT3DTJao
	hB1hTuDI831BE6qNIAmzfUal+wR8WP5/3OKR103aDw5ezZ9StgEncvK9/ci4zErVBlzo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [xen-unstable bisection] complete test-xtf-amd64-amd64-2
Message-Id: <E1lEotH-0007A3-1z@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 24 Feb 2021 07:47:23 +0000

branch xen-unstable
xenbranch xen-unstable
job test-xtf-amd64-amd64-2
testid xtf/test-pv32pae-selftest

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Tree: xtf git://xenbits.xen.org/xtf.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  4dc1815991420b809ce18dddfdf9c0af48944204
  Bug not present: 2d824791504f4119f04f95bafffec2e37d319c25
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/159617/


  commit 4dc1815991420b809ce18dddfdf9c0af48944204
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Fri Feb 19 17:19:56 2021 +0100
  
      x86/PV: harden guest memory accesses against speculative abuse
      
      Inspired by
      https://lore.kernel.org/lkml/f12e7d3cecf41b2c29734ea45a393be21d4a8058.1597848273.git.jpoimboe@redhat.com/
      and prior work in that area of x86 Linux, suppress speculation with
      guest specified pointer values by suitably masking the addresses to
      non-canonical space in case they fall into Xen's virtual address range.
      
      Introduce a new Kconfig control.
      
      Note that it is necessary in such code to avoid using "m" kind operands:
      If we didn't, there would be no guarantee that the register passed to
      guest_access_mask_ptr is also the (base) one used for the memory access.
      
      As a minor unrelated change in get_unsafe_asm() the unnecessary "itype"
      parameter gets dropped and the XOR on the fixup path gets changed to be
      a 32-bit one in all cases: This way we avoid pointless REX.W or operand
      size overrides, or writes to partial registers.
      
      Requested-by: Andrew Cooper <andrew.cooper3@citrix.com>
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
      Release-Acked-by: Ian Jackson <iwj@xenproject.org>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable/test-xtf-amd64-amd64-2.xtf--test-pv32pae-selftest.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-unstable/test-xtf-amd64-amd64-2.xtf--test-pv32pae-selftest --summary-out=tmp/159617.bisection-summary --basis-template=159475 --blessings=real,real-bisect,real-retry xen-unstable test-xtf-amd64-amd64-2 xtf/test-pv32pae-selftest
Searching for failure / basis pass:
 159576 fail [host=chardonnay1] / 159475 [host=godello0] 159453 [host=godello0] 159424 [host=huxelrebe1] 159396 [host=albana1] 159362 [host=godello1] 159335 ok.
Failure / basis pass flights: 159576 / 159335
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Tree: xtf git://xenbits.xen.org/xtf.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 f894c3d8e705fea9cb3244fa61684bfd8bdd1b2a 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 04085ec1ac05a362812e9b0c6b5a8713d7dc88ad 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://xenbits.xen.org/qemu-xen.git#7ea428895af2840d85c524f0bd11a38\
 aac308308-7ea428895af2840d85c524f0bd11a38aac308308 git://xenbits.xen.org/xen.git#04085ec1ac05a362812e9b0c6b5a8713d7dc88ad-f894c3d8e705fea9cb3244fa61684bfd8bdd1b2a git://xenbits.xen.org/xtf.git#8ab15139728a8efd3ebbb60beb16a958a6a93fa1-8ab15139728a8efd3ebbb60beb16a958a6a93fa1
Loaded 5001 nodes in revision graph
Searching for test results:
 159315 [host=chardonnay0]
 159335 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 04085ec1ac05a362812e9b0c6b5a8713d7dc88ad 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159362 [host=godello1]
 159396 [host=albana1]
 159424 [host=huxelrebe1]
 159453 [host=godello0]
 159475 [host=godello0]
 159487 [host=godello1]
 159491 [host=albana0]
 159508 [host=fiano0]
 159526 [host=huxelrebe1]
 159540 [host=elbling0]
 159559 [host=fiano1]
 159597 [host=fiano1]
 159599 [host=fiano1]
 159576 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 f894c3d8e705fea9cb3244fa61684bfd8bdd1b2a 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159601 [host=fiano1]
 159603 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 04085ec1ac05a362812e9b0c6b5a8713d7dc88ad 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159604 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 f894c3d8e705fea9cb3244fa61684bfd8bdd1b2a 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159605 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 6a1d72d3739e330caf728ea07d656d7bf568824b 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159607 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 336fbbdf61562e5ae1112f24bc90c1164adf2144 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159608 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 f954a1bf5f74ad6edce361d1bf1a29137ff374e8 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159609 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 2d824791504f4119f04f95bafffec2e37d319c25 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159611 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 4dc1815991420b809ce18dddfdf9c0af48944204 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159612 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 2d824791504f4119f04f95bafffec2e37d319c25 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159614 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 4dc1815991420b809ce18dddfdf9c0af48944204 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159616 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 2d824791504f4119f04f95bafffec2e37d319c25 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159617 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 4dc1815991420b809ce18dddfdf9c0af48944204 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
Searching for interesting versions
 Result found: flight 159335 (pass), for basis pass
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 2d824791504f4119f04f95bafffec2e37d319c25 8ab15139728a8efd3ebbb60beb16a958a6a93fa1, results HASH(0x55e3362f7750) HASH(0x55e336376e78) HASH(0x55e336379e08) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05\
 e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 6a1d72d3739e330caf728ea07d656d7bf568824b 8ab15139728a8efd3ebbb60beb16a958a6a93fa1, results HASH(0x55e3362ddb50) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 04085ec1ac05a362812e9b0c6b5a8713d7dc88ad 8ab15139728a8efd3ebbb60beb16a958a6a93fa1, results HASH(0x55e3362e\
 6198) HASH(0x55e3362efb88) Result found: flight 159576 (fail), for basis failure (at ancestor ~76)
 Repro found: flight 159603 (pass), for basis pass
 Repro found: flight 159604 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 2d824791504f4119f04f95bafffec2e37d319c25 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
No revisions left to test, checking graph state.
 Result found: flight 159609 (pass), for last pass
 Result found: flight 159611 (fail), for first failure
 Repro found: flight 159612 (pass), for last pass
 Repro found: flight 159614 (fail), for first failure
 Repro found: flight 159616 (pass), for last pass
 Repro found: flight 159617 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  4dc1815991420b809ce18dddfdf9c0af48944204
  Bug not present: 2d824791504f4119f04f95bafffec2e37d319c25
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/159617/


  commit 4dc1815991420b809ce18dddfdf9c0af48944204
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Fri Feb 19 17:19:56 2021 +0100
  
      x86/PV: harden guest memory accesses against speculative abuse
      
      Inspired by
      https://lore.kernel.org/lkml/f12e7d3cecf41b2c29734ea45a393be21d4a8058.1597848273.git.jpoimboe@redhat.com/
      and prior work in that area of x86 Linux, suppress speculation with
      guest specified pointer values by suitably masking the addresses to
      non-canonical space in case they fall into Xen's virtual address range.
      
      Introduce a new Kconfig control.
      
      Note that it is necessary in such code to avoid using "m" kind operands:
      If we didn't, there would be no guarantee that the register passed to
      guest_access_mask_ptr is also the (base) one used for the memory access.
      
      As a minor unrelated change in get_unsafe_asm() the unnecessary "itype"
      parameter gets dropped and the XOR on the fixup path gets changed to be
      a 32-bit one in all cases: This way we avoid pointless REX.W or operand
      size overrides, or writes to partial registers.
      
      Requested-by: Andrew Cooper <andrew.cooper3@citrix.com>
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
      Release-Acked-by: Ian Jackson <iwj@xenproject.org>

Revision graph left in /home/logs/results/bisect/xen-unstable/test-xtf-amd64-amd64-2.xtf--test-pv32pae-selftest.{dot,ps,png,html,svg}.
----------------------------------------
159617: tolerable all pass

flight 159617 xen-unstable real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/159617/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-xtf-amd64-amd64-2     19 xtf/test-pv32pae-selftest fail baseline untested


jobs:
 test-xtf-amd64-amd64-2                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Wed Feb 24 09:29:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 09:29:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89211.167832 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEqTw-0001Vs-3h; Wed, 24 Feb 2021 09:29:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89211.167832; Wed, 24 Feb 2021 09:29:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEqTw-0001Vl-0Y; Wed, 24 Feb 2021 09:29:20 +0000
Received: by outflank-mailman (input) for mailman id 89211;
 Wed, 24 Feb 2021 09:29:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=F+xl=H2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lEqTu-0001Vg-Fg
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 09:29:18 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 26f1dd9c-ace1-422c-88c5-3af2c86823c6;
 Wed, 24 Feb 2021 09:29:17 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 50FC7AE47;
 Wed, 24 Feb 2021 09:29:16 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 26f1dd9c-ace1-422c-88c5-3af2c86823c6
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614158956; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=dilbBM9wYrMX1TQWeEKnZc47GsGs9mibBqzlujSceRE=;
	b=aBtVqNzWOXhjI9ok88r7LV/MryktKP1Cc6ubCxXlW5QazvSpb7KVbFrv30VcHUM7Zc2uef
	de47OcZ9UL9uOeDJv5U/+aMNUPodgHIO6p+dP7d5bx1GU20T79iV/LWgkWoKvYJPnfRPcT
	AiS3mvOjG0nw5NxF4ZeZ/evXz2vWIR8=
Subject: Re: [PATCH RFC v3 4/4] x86/time: re-arrange struct
 calibration_rendezvous
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <bb7494b9-f4d1-f0c0-2fb2-5201559c1962@suse.com>
 <56d70757-a887-6824-18f4-93b1f244e44b@suse.com>
Message-ID: <eaafcf3d-5920-6a29-b479-1901ee52a85f@suse.com>
Date: Wed, 24 Feb 2021 10:29:15 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <56d70757-a887-6824-18f4-93b1f244e44b@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 09.02.2021 13:57, Jan Beulich wrote:
> To reduce latency on time_calibration_tsc_rendezvous()'s last loop
> iteration, separate fields written on the last iteration enough from the
> crucial field read by all CPUs on the last iteration such that they end
> up in distinct cache lines. Prefetch this field on an earlier iteration.

I've now measured the effects of this, at least to some degree. On my
single-socket 18-core Skylake system this reduces the average loop
exit time on CPU0 (from the TSC write on the last iteration to until
after the main loop) from around 32k cycles to around 28k (albeit the
values measured on separate runs vary quite significantly).

About the same effect (maybe a little less, but within error bounds)
can be had without any re-arrangement of the struct's layout, by
simply reading r->master_tsc_stamp into a local variable at the end
of each loop iteration. I'll make v4 use this less convoluted model
instead.

Jan

> --- a/xen/arch/x86/time.c
> +++ b/xen/arch/x86/time.c
> @@ -1655,10 +1655,20 @@ static void tsc_check_reliability(void)
>   * All CPUS snapshot their local TSC and extrapolation of system time.
>   */
>  struct calibration_rendezvous {
> -    cpumask_t cpu_calibration_map;
>      atomic_t semaphore;
>      s_time_t master_stime;
> -    uint64_t master_tsc_stamp, max_tsc_stamp;
> +    cpumask_t cpu_calibration_map;
> +    /*
> +     * All CPUs want to read master_tsc_stamp on the last iteration.  If
> +     * cpu_calibration_map isn't large enough to push the field into a cache
> +     * line different from the one used by semaphore (written by all CPUs on
> +     * every iteration) and master_stime (written by CPU0 on the last
> +     * iteration), align the field suitably.
> +     */
> +    uint64_t __aligned(BITS_TO_LONGS(NR_CPUS) * sizeof(long) +
> +                       sizeof(atomic_t) + sizeof(s_time_t) < SMP_CACHE_BYTES
> +                       ? SMP_CACHE_BYTES : 1) master_tsc_stamp;
> +    uint64_t max_tsc_stamp;
>  };
>  
>  static void
> @@ -1709,6 +1719,8 @@ static void time_calibration_tsc_rendezv
>  
>              if ( i == 0 )
>                  write_tsc(r->master_tsc_stamp);
> +            else
> +                prefetch(&r->master_tsc_stamp);
>  
>              while ( atomic_read(&r->semaphore) != (2*total_cpus - 1) )
>                  cpu_relax();
> @@ -1731,6 +1743,8 @@ static void time_calibration_tsc_rendezv
>  
>              if ( i == 0 )
>                  write_tsc(r->master_tsc_stamp);
> +            else
> +                prefetch(&r->master_tsc_stamp);
>  
>              atomic_inc(&r->semaphore);
>              while ( atomic_read(&r->semaphore) > total_cpus )
> 
> 



From xen-devel-bounces@lists.xenproject.org Wed Feb 24 09:43:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 09:43:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89215.167847 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEqi3-0003J1-G4; Wed, 24 Feb 2021 09:43:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89215.167847; Wed, 24 Feb 2021 09:43:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEqi3-0003Iu-CN; Wed, 24 Feb 2021 09:43:55 +0000
Received: by outflank-mailman (input) for mailman id 89215;
 Wed, 24 Feb 2021 09:43:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lEqi2-0003Ip-4H
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 09:43:54 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lEqi0-0002Mr-Vx; Wed, 24 Feb 2021 09:43:52 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lEqi0-0000g2-PZ; Wed, 24 Feb 2021 09:43:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:References:Cc:To:From:Subject;
	bh=thp4GQi99PzZhi5ioyWpqANj/epwPHY8B6PmRxQuMD8=; b=Uerka5GejnWeADmdqjn0Gf5kAM
	/iiSOy5ncqzF1NPWp+lMzpMs2y0JmSLLap7oJxSCmk6WSXJPZ/yMLJDmVqFEbMabdVuyIfZZ3CRqB
	yRBHDKzzoVJlLBTIccRnycQYj6G0pp80KvxgBmTveU+Xnx5C+qaTwalKV7mVvsQqzrAw=;
Subject: Re: [for-4.15][PATCH v4 0/2] xen/iommu: Collection of bug fixes for
 IOMMU teardown
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>
References: <20210223123433.19645-1-julien@xen.org>
Message-ID: <d0b4fe72-cd9f-7cb4-16ee-f5872f9aff7c@xen.org>
Date: Wed, 24 Feb 2021 09:43:51 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210223123433.19645-1-julien@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi all,

Please ignore this version. I forgot to CC the maintainers on it. I will 
resend a new series.

Cheers,

On 23/02/2021 12:34, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Hi all,
> 
> This series is a collection of bug fixes for the IOMMU teardown code.
> All of them are candidate for 4.15 as they can either leak memory or
> lead to host crash/host corruption.
> 
> This is sent directly on xen-devel because all the issues were either
> introduced in 4.15 or happen in the domain creation code.
> 
> Major changes since v3:
>      - Remove patch #3 "xen/iommu: x86: Harden the IOMMU page-table
>      allocator" as it is not strictly necessary for 4.15.
>      - Re-order the patches to avoid relying on a follow-up patch to
>      completely fix the issue.
> 
> Major changes since v2:
>      - patch #1 "xen/x86: p2m: Don't map the special pages in the IOMMU
>      page-tables" has been removed. This requires Jan's patch [1] to
>      fully mitigate memory leaks.
> 
> Cheers,
> 
> [1] <90271e69-c07e-a32c-5531-a79b10ef03dd@suse.com>
> 
> Julien Grall (2):
>    xen/x86: iommu: Ignore IOMMU mapping requests when a domain is dying
>    xen/iommu: x86: Clear the root page-table before freeing the
>      page-tables
> 
>   xen/drivers/passthrough/amd/iommu_map.c     | 12 +++++++++++
>   xen/drivers/passthrough/amd/pci_amd_iommu.c | 12 ++++++++++-
>   xen/drivers/passthrough/vtd/iommu.c         | 24 ++++++++++++++++++++-
>   xen/drivers/passthrough/x86/iommu.c         | 19 ++++++++++++++++
>   xen/include/xen/iommu.h                     |  1 +
>   5 files changed, 66 insertions(+), 2 deletions(-)
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 09:44:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 09:44:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89216.167860 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEqiF-0003MH-RZ; Wed, 24 Feb 2021 09:44:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89216.167860; Wed, 24 Feb 2021 09:44:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEqiF-0003M9-LO; Wed, 24 Feb 2021 09:44:07 +0000
Received: by outflank-mailman (input) for mailman id 89216;
 Wed, 24 Feb 2021 09:44:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lEqiE-0003Lp-EZ
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 09:44:06 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lEqiB-0002NS-6o; Wed, 24 Feb 2021 09:44:03 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lEqiA-0000gz-N8; Wed, 24 Feb 2021 09:44:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From;
	bh=LkstIODlCyPA+UDcuOwnCofjWZbTFL2vHYqXk3GFmjg=; b=WYSp8H5bpRSoTb+ex7WBG24QYt
	Lno4z7vrPxCjTVSrqmtXv0+8Jvm9so/YLxWw0uJcX9LHZ9tISEsWVUbcTM277AoYARCIn3W6xWTjU
	zWWJEtvJYyH5V9a9G6rT2+pQzTHhXODEP582/nnYArZbDXEBamTCe4rCFoCxADYO7C3o=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: hongyxia@amazon.co.uk,
	iwj@xenproject.org,
	Julien Grall <jgrall@amazon.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Kevin Tian <kevin.tian@intel.com>,
	Paul Durrant <paul@xen.org>
Subject: [for-4.15][RESEND PATCH v4 0/2] xen/iommu: Collection of bug fixes for IOMMU teardown
Date: Wed, 24 Feb 2021 09:43:54 +0000
Message-Id: <20210224094356.7606-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1

From: Julien Grall <jgrall@amazon.com>

Hi all,

This series is a collection of bug fixes for the IOMMU teardown code.
All of them are candidate for 4.15 as they can either leak memory or
lead to host crash/host corruption.

This is sent directly on xen-devel because all the issues were either
introduced in 4.15 or happen in the domain creation code.

Major changes since v3:
    - Remove patch #3 "xen/iommu: x86: Harden the IOMMU page-table
    allocator" as it is not strictly necessary for 4.15.
    - Re-order the patches so that a follow-up patch is not needed to
    completely fix the issue.

Major changes since v2:
    - patch #1 "xen/x86: p2m: Don't map the special pages in the IOMMU
    page-tables" has been removed. This requires Jan's patch [1] to
    fully mitigate memory leaks.

Cheers,

[1] <90271e69-c07e-a32c-5531-a79b10ef03dd@suse.com>



Julien Grall (2):
  xen/x86: iommu: Ignore IOMMU mapping requests when a domain is dying
  xen/iommu: x86: Clear the root page-table before freeing the
    page-tables

 xen/drivers/passthrough/amd/iommu_map.c     | 12 +++++++++++
 xen/drivers/passthrough/amd/pci_amd_iommu.c | 12 ++++++++++-
 xen/drivers/passthrough/vtd/iommu.c         | 24 ++++++++++++++++++++-
 xen/drivers/passthrough/x86/iommu.c         | 19 ++++++++++++++++
 xen/include/xen/iommu.h                     |  1 +
 5 files changed, 66 insertions(+), 2 deletions(-)

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Feb 24 09:44:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 09:44:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89217.167864 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEqiG-0003My-4C; Wed, 24 Feb 2021 09:44:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89217.167864; Wed, 24 Feb 2021 09:44:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEqiF-0003Mf-UO; Wed, 24 Feb 2021 09:44:07 +0000
Received: by outflank-mailman (input) for mailman id 89217;
 Wed, 24 Feb 2021 09:44:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lEqiE-0003Ly-OI
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 09:44:06 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lEqiC-0002NU-BH; Wed, 24 Feb 2021 09:44:04 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lEqiC-0000gz-2Q; Wed, 24 Feb 2021 09:44:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=F3pV8CFzjIR41kWk/X0MEyzvkJbWUgUJEtOR4PpRZWU=; b=eZG7nvMrer8uwX99SXEsH1jm+
	OXnaUoii+JE82exb2YLB9QGB9igVhFIF+ZVs3GfL2eiYh1sG5tsaIc93MsANIb0+6R6MeZWyINPdl
	9Mkka/pgWj/2pGTao6Tpz4updx1JSjgIWt+WIHe3nC2c6wOCkjA52lHFuHxDRjcBVLGc0=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: hongyxia@amazon.co.uk,
	iwj@xenproject.org,
	Julien Grall <jgrall@amazon.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Kevin Tian <kevin.tian@intel.com>,
	Paul Durrant <paul@xen.org>
Subject: [for-4.15][RESEND PATCH v4 1/2] xen/x86: iommu: Ignore IOMMU mapping requests when a domain is dying
Date: Wed, 24 Feb 2021 09:43:55 +0000
Message-Id: <20210224094356.7606-2-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210224094356.7606-1-julien@xen.org>
References: <20210224094356.7606-1-julien@xen.org>

From: Julien Grall <jgrall@amazon.com>

The new x86 IOMMU page-tables allocator will release the pages when
relinquishing the domain resources. However, this is not sufficient
when the domain is dying because nothing prevents page-tables from
being allocated.

As the domain is dying, it is not necessary to continue to modify the
IOMMU page-tables as they are going to be destroyed soon.

At the moment, page-table allocation will only happen in iommu_map().
So after this change there will be no more page-table allocations
for a dying domain.

In order to observe d->is_dying correctly, we need to rely on per-arch
locking, so the check to ignore IOMMU mappings is added in the
per-driver map_page() callback.

Signed-off-by: Julien Grall <jgrall@amazon.com>

---

As discussed in v3, this is only covering 4.15. We can discuss
post-4.15 how to catch page-table allocations if another caller (e.g.
iommu_unmap() if we ever decide to support superpages) starts to use
the page-table allocator.

Changes in v4:
    - Move the patch to the top of the queue
    - Reword the commit message

Changes in v3:
    - Patch added. This is a replacement of "xen/iommu: iommu_map: Don't
    crash the domain if it is dying"
---
 xen/drivers/passthrough/amd/iommu_map.c | 12 ++++++++++++
 xen/drivers/passthrough/vtd/iommu.c     | 12 ++++++++++++
 xen/drivers/passthrough/x86/iommu.c     |  6 ++++++
 3 files changed, 30 insertions(+)

diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
index d3a8b1aec766..560af54b765b 100644
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -285,6 +285,18 @@ int amd_iommu_map_page(struct domain *d, dfn_t dfn, mfn_t mfn,
 
     spin_lock(&hd->arch.mapping_lock);
 
+    /*
+     * IOMMU mapping request can be safely ignored when the domain is dying.
+     *
+     * hd->arch.mapping_lock guarantees that d->is_dying will be observed
+     * before any page tables are freed (see iommu_free_pgtables()).
+     */
+    if ( d->is_dying )
+    {
+        spin_unlock(&hd->arch.mapping_lock);
+        return 0;
+    }
+
     rc = amd_iommu_alloc_root(d);
     if ( rc )
     {
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index d136fe36883b..b549a71530d5 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1762,6 +1762,18 @@ static int __must_check intel_iommu_map_page(struct domain *d, dfn_t dfn,
 
     spin_lock(&hd->arch.mapping_lock);
 
+    /*
+     * IOMMU mapping request can be safely ignored when the domain is dying.
+     *
+     * hd->arch.mapping_lock guarantees that d->is_dying will be observed
+     * before any page tables are freed (see iommu_free_pgtables())
+     */
+    if ( d->is_dying )
+    {
+        spin_unlock(&hd->arch.mapping_lock);
+        return 0;
+    }
+
     pg_maddr = addr_to_dma_page_maddr(d, dfn_to_daddr(dfn), 1);
     if ( !pg_maddr )
     {
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index cea1032b3d02..c6b03624fe28 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -267,6 +267,12 @@ int iommu_free_pgtables(struct domain *d)
     struct page_info *pg;
     unsigned int done = 0;
 
+    if ( !is_iommu_enabled(d) )
+        return 0;
+
+    /* After this barrier, no more IOMMU mapping can happen */
+    spin_barrier(&hd->arch.mapping_lock);
+
     while ( (pg = page_list_remove_head(&hd->arch.pgtables.list)) )
     {
         free_domheap_page(pg);
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Feb 24 09:44:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 09:44:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89218.167883 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEqiH-0003QD-Lg; Wed, 24 Feb 2021 09:44:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89218.167883; Wed, 24 Feb 2021 09:44:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEqiH-0003Pz-Hc; Wed, 24 Feb 2021 09:44:09 +0000
Received: by outflank-mailman (input) for mailman id 89218;
 Wed, 24 Feb 2021 09:44:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lEqiG-0003Oj-LG
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 09:44:08 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lEqiD-0002NW-NB; Wed, 24 Feb 2021 09:44:05 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lEqiD-0000gz-E8; Wed, 24 Feb 2021 09:44:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=rHfC/wahEJYSijbiVGgEzgUQtxG8SVxu1zXJPzObAQE=; b=x2htRuk5dxgH1esh77aZtn5At
	DjXhqhuCUiPguIbbdcDu6IvkLw+JUNo+KiTUqscQM9u3AIPcj10utO+F8UjIWMGspUXRUQxVCVXRK
	g2Wc1oyFnfFYoTZDk4H8/UY3uNh7HT8gNWZBN8mBN3XBd4H3DJ24RyX5RNmQLDaeVdYtk=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: hongyxia@amazon.co.uk,
	iwj@xenproject.org,
	Julien Grall <jgrall@amazon.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Kevin Tian <kevin.tian@intel.com>,
	Paul Durrant <paul@xen.org>
Subject: [for-4.15][RESEND PATCH v4 2/2] xen/iommu: x86: Clear the root page-table before freeing the page-tables
Date: Wed, 24 Feb 2021 09:43:56 +0000
Message-Id: <20210224094356.7606-3-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210224094356.7606-1-julien@xen.org>
References: <20210224094356.7606-1-julien@xen.org>

From: Julien Grall <jgrall@amazon.com>

The new per-domain IOMMU page-table allocator will now free the
page-tables when the domain's resources are relinquished. However, the
per-domain IOMMU structure will still contain a dangling pointer to
the root page-table.

Xen may access the IOMMU page-tables afterwards, at least in the case
of a PV domain:

(XEN) Xen call trace:
(XEN)    [<ffff82d04025b4b2>] R iommu.c#addr_to_dma_page_maddr+0x12e/0x1d8
(XEN)    [<ffff82d04025b695>] F iommu.c#intel_iommu_unmap_page+0x5d/0xf8
(XEN)    [<ffff82d0402695f3>] F iommu_unmap+0x9c/0x129
(XEN)    [<ffff82d0402696a6>] F iommu_legacy_unmap+0x26/0x63
(XEN)    [<ffff82d04033c5c7>] F mm.c#cleanup_page_mappings+0x139/0x144
(XEN)    [<ffff82d04033c61d>] F put_page+0x4b/0xb3
(XEN)    [<ffff82d04033c87f>] F put_page_from_l1e+0x136/0x13b
(XEN)    [<ffff82d04033cada>] F devalidate_page+0x256/0x8dc
(XEN)    [<ffff82d04033d396>] F mm.c#_put_page_type+0x236/0x47e
(XEN)    [<ffff82d04033d64d>] F mm.c#put_pt_page+0x6f/0x80
(XEN)    [<ffff82d04033d8d6>] F mm.c#put_page_from_l2e+0x8a/0xcf
(XEN)    [<ffff82d04033cc27>] F devalidate_page+0x3a3/0x8dc
(XEN)    [<ffff82d04033d396>] F mm.c#_put_page_type+0x236/0x47e
(XEN)    [<ffff82d04033d64d>] F mm.c#put_pt_page+0x6f/0x80
(XEN)    [<ffff82d04033d807>] F mm.c#put_page_from_l3e+0x8a/0xcf
(XEN)    [<ffff82d04033cdf0>] F devalidate_page+0x56c/0x8dc
(XEN)    [<ffff82d04033d396>] F mm.c#_put_page_type+0x236/0x47e
(XEN)    [<ffff82d04033d64d>] F mm.c#put_pt_page+0x6f/0x80
(XEN)    [<ffff82d04033d6c7>] F mm.c#put_page_from_l4e+0x69/0x6d
(XEN)    [<ffff82d04033cf24>] F devalidate_page+0x6a0/0x8dc
(XEN)    [<ffff82d04033d396>] F mm.c#_put_page_type+0x236/0x47e
(XEN)    [<ffff82d04033d92e>] F put_page_type_preemptible+0x13/0x15
(XEN)    [<ffff82d04032598a>] F domain.c#relinquish_memory+0x1ff/0x4e9
(XEN)    [<ffff82d0403295f2>] F domain_relinquish_resources+0x2b6/0x36a
(XEN)    [<ffff82d040205cdf>] F domain_kill+0xb8/0x141
(XEN)    [<ffff82d040236cac>] F do_domctl+0xb6f/0x18e5
(XEN)    [<ffff82d04031d098>] F pv_hypercall+0x2f0/0x55f
(XEN)    [<ffff82d04039b432>] F lstar_enter+0x112/0x120

This will result in a use-after-free and possibly a host crash or
memory corruption.

It would not be possible to free the page-tables further down in
domain_relinquish_resources() because cleanup_page_mappings() will only
be called when the last reference on the page is dropped. This may
happen much later if another domain still holds a reference.

After all the PCI devices have been de-assigned, nobody should use the
IOMMU page-tables and it is therefore pointless to try to modify them.

So we can simply clear any reference to the root page-table in the
per-domain IOMMU structure. This requires introducing a new callback;
its implementation will depend on the IOMMU driver used.

Take the opportunity to add an ASSERT() in arch_iommu_domain_destroy()
to check that all the IOMMU page-tables have been freed.

Fixes: 3eef6d07d722 ("x86/iommu: convert VT-d code to use new page table allocator")
Signed-off-by: Julien Grall <jgrall@amazon.com>

---
    Changes in v4:
        - Move the patch later in the series as we need to prevent
        iommu_map() from allocating memory first.
        - Add an ASSERT() in arch_iommu_domain_destroy().

    Changes in v3:
        - Move the patch earlier in the series
        - Reword the commit message

    Changes in v2:
        - Introduce clear_root_pgtable()
        - Move the patch later in the series
---
 xen/drivers/passthrough/amd/pci_amd_iommu.c | 12 +++++++++++-
 xen/drivers/passthrough/vtd/iommu.c         | 12 +++++++++++-
 xen/drivers/passthrough/x86/iommu.c         | 13 +++++++++++++
 xen/include/xen/iommu.h                     |  1 +
 4 files changed, 36 insertions(+), 2 deletions(-)

diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
index 42b5a5a9bec4..085fe2f5771e 100644
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -381,9 +381,18 @@ static int amd_iommu_assign_device(struct domain *d, u8 devfn,
     return reassign_device(pdev->domain, d, devfn, pdev);
 }
 
+static void amd_iommu_clear_root_pgtable(struct domain *d)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+
+    spin_lock(&hd->arch.mapping_lock);
+    hd->arch.amd.root_table = NULL;
+    spin_unlock(&hd->arch.mapping_lock);
+}
+
 static void amd_iommu_domain_destroy(struct domain *d)
 {
-    dom_iommu(d)->arch.amd.root_table = NULL;
+    ASSERT(!dom_iommu(d)->arch.amd.root_table);
 }
 
 static int amd_iommu_add_device(u8 devfn, struct pci_dev *pdev)
@@ -565,6 +574,7 @@ static const struct iommu_ops __initconstrel _iommu_ops = {
     .remove_device = amd_iommu_remove_device,
     .assign_device  = amd_iommu_assign_device,
     .teardown = amd_iommu_domain_destroy,
+    .clear_root_pgtable = amd_iommu_clear_root_pgtable,
     .map_page = amd_iommu_map_page,
     .unmap_page = amd_iommu_unmap_page,
     .iotlb_flush = amd_iommu_flush_iotlb_pages,
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index b549a71530d5..475efb3be3bd 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1726,6 +1726,15 @@ out:
     return ret;
 }
 
+static void iommu_clear_root_pgtable(struct domain *d)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+
+    spin_lock(&hd->arch.mapping_lock);
+    hd->arch.vtd.pgd_maddr = 0;
+    spin_unlock(&hd->arch.mapping_lock);
+}
+
 static void iommu_domain_teardown(struct domain *d)
 {
     struct domain_iommu *hd = dom_iommu(d);
@@ -1740,7 +1749,7 @@ static void iommu_domain_teardown(struct domain *d)
         xfree(mrmrr);
     }
 
-    hd->arch.vtd.pgd_maddr = 0;
+    ASSERT(!hd->arch.vtd.pgd_maddr);
 }
 
 static int __must_check intel_iommu_map_page(struct domain *d, dfn_t dfn,
@@ -2731,6 +2740,7 @@ static struct iommu_ops __initdata vtd_ops = {
     .remove_device = intel_iommu_remove_device,
     .assign_device  = intel_iommu_assign_device,
     .teardown = iommu_domain_teardown,
+    .clear_root_pgtable = iommu_clear_root_pgtable,
     .map_page = intel_iommu_map_page,
     .unmap_page = intel_iommu_unmap_page,
     .lookup_page = intel_iommu_lookup_page,
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index c6b03624fe28..faeb549591d8 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -149,6 +149,13 @@ int arch_iommu_domain_init(struct domain *d)
 
 void arch_iommu_domain_destroy(struct domain *d)
 {
+    /*
+     * There should be no page-tables left allocated by the time the
+     * domain is destroyed. Note that arch_iommu_domain_destroy() is
+     * called unconditionally, so pgtables may be uninitialized.
+     */
+    ASSERT(dom_iommu(d)->platform_ops == NULL ||
+           page_list_empty(&dom_iommu(d)->arch.pgtables.list));
 }
 
 static bool __hwdom_init hwdom_iommu_map(const struct domain *d,
@@ -273,6 +280,12 @@ int iommu_free_pgtables(struct domain *d)
     /* After this barrier, no more IOMMU mapping can happen */
     spin_barrier(&hd->arch.mapping_lock);
 
+    /*
+     * Pages will be moved to the free list below. So we want to
+     * clear the root page-table to avoid any potential use-after-free.
+     */
+    hd->platform_ops->clear_root_pgtable(d);
+
     while ( (pg = page_list_remove_head(&hd->arch.pgtables.list)) )
     {
         free_domheap_page(pg);
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 863a68fe1622..d59ed7cbad43 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -272,6 +272,7 @@ struct iommu_ops {
 
     int (*adjust_irq_affinities)(void);
     void (*sync_cache)(const void *addr, unsigned int size);
+    void (*clear_root_pgtable)(struct domain *d);
 #endif /* CONFIG_X86 */
 
     int __must_check (*suspend)(void);
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Feb 24 09:48:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 09:48:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89227.167895 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEqma-0003oP-9a; Wed, 24 Feb 2021 09:48:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89227.167895; Wed, 24 Feb 2021 09:48:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEqma-0003oI-6C; Wed, 24 Feb 2021 09:48:36 +0000
Received: by outflank-mailman (input) for mailman id 89227;
 Wed, 24 Feb 2021 09:48:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEqmZ-0003oA-0e; Wed, 24 Feb 2021 09:48:35 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEqmY-0002TB-Uz; Wed, 24 Feb 2021 09:48:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEqmY-0001no-O2; Wed, 24 Feb 2021 09:48:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lEqmY-0001gs-NV; Wed, 24 Feb 2021 09:48:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Qw4iIDdkjmKLiGNeBIkvnOTBKEq6PsnJxK6jQEG9W1o=; b=1JU+fqonYa2K3YJ12pfJl4HlBj
	XRsiESmBhpIMZqu1rHpYDa/BnGMCktIeiUv/GnS63TG7FnEae4OE4RPb9pSethflwJCWpnTK3Qc1H
	zCIxdwrOgE2q2dLQlTWXFLcdm28+o1xo4zEVM7anVpI5etNs+su5ZQbjmjmgDAvqCpGc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159620-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 159620: all pass - PUSHED
X-Osstest-Versions-This:
    xen=5d94433a66df29ce314696a13bdd324ec0e342fe
X-Osstest-Versions-That:
    xen=87a067fd8f4d4f7c6be02c3d38145115ac542017
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 24 Feb 2021 09:48:34 +0000

flight 159620 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159620/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  5d94433a66df29ce314696a13bdd324ec0e342fe
baseline version:
 xen                  87a067fd8f4d4f7c6be02c3d38145115ac542017

Last test of basis   159515  2021-02-21 09:18:27 Z    3 days
Testing same since   159620  2021-02-24 09:19:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Tamas K Lengyel <tamas@tklengyel.com>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   87a067fd8f..5d94433a66  5d94433a66df29ce314696a13bdd324ec0e342fe -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 10:08:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 10:08:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89233.167910 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEr5X-0005qn-0z; Wed, 24 Feb 2021 10:08:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89233.167910; Wed, 24 Feb 2021 10:08:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEr5W-0005qg-UB; Wed, 24 Feb 2021 10:08:10 +0000
Received: by outflank-mailman (input) for mailman id 89233;
 Wed, 24 Feb 2021 10:08:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lEr5W-0005qb-Hd
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 10:08:10 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lEr5S-0002sJ-NW; Wed, 24 Feb 2021 10:08:06 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lEr5S-0002gl-8T; Wed, 24 Feb 2021 10:08:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=P8UPJzC3UVnf2vnPo+e/z/VoBKXYTUsIKLcYt3UxOws=; b=S+tjCUX2+lMsgcde0UJSc5xqfC
	SOYWAddDIDu6cRN4uQowvGNNPglvZK/OABG0tRiaj8JZEPb2uaOTHYLtYv5o6+gngwkFyB5mnhbFS
	DzcZdDZeAk/XaStL1pVjp9pBjQLesgc6EDN0ac44p5+ZA1tZkLTdJfSziKkIYzSP6uzw=;
Subject: Re: [RFC PATCH 00/10] Preemption in hypervisor (ARM only)
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 George Dunlap <george.dunlap@citrix.com>, Dario Faggioli
 <dfaggioli@suse.com>, Meng Xu <mengxu@cis.upenn.edu>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20210223023428.757694-1-volodymyr_babchuk@epam.com>
 <e6d8726c-4074-fe4c-dbbe-e879da2bb7f6@xen.org> <87zgzv56pm.fsf@epam.com>
From: Julien Grall <julien@xen.org>
Message-ID: <c1c55bcb-dfd4-a552-836a-985268655cf1@xen.org>
Date: Wed, 24 Feb 2021 10:08:04 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <87zgzv56pm.fsf@epam.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 23/02/2021 12:06, Volodymyr Babchuk wrote:
> 
> Hi Julien,

Hi Volodymyr,

> Julien Grall writes:
>> On 23/02/2021 02:34, Volodymyr Babchuk wrote:
>> ... just rescheduling the vCPU. It will also give the opportunity for
>> the guest to handle interrupts.
>>
>> If you don't return to the guest, then you risk getting an RCU sched
>> stall on that vCPU (some hypercalls can take really, really long).
> 
> Ah yes, you are right. I'd only wish that the hypervisor saved the
> context of a hypercall on its side...
> 
> I have example of OP-TEE before my eyes. They have special return code
> "task was interrupted" and they use separate call "continue execution of
> interrupted task", which takes opaque context handle as a
> parameter. With this approach, the state of an interrupted call never
> leaks to the rest of the system.

Feel free to suggest a new approach for the hypercalls.
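For illustration, the OP-TEE-style "interrupted task" scheme described above could be sketched roughly as follows. This is a minimal sketch: `long_call_start`, `long_call_continue`, the return codes, and the context table are all invented for the example and are not Xen or OP-TEE interfaces.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of an "interrupted task" style call: the hypervisor keeps the
 * call state on its own side and only hands the guest an opaque handle.
 * All names here are illustrative, not real Xen or OP-TEE APIs.
 */
#define CALL_DONE        0
#define CALL_INTERRUPTED 1

struct call_ctx {
    unsigned long next;    /* next work item to process */
    unsigned long total;   /* total number of work items */
    unsigned long budget;  /* items processed per time slice */
};

/* Hypervisor-owned context table; guests never see its contents. */
static struct call_ctx ctx_table[4];

/* Resume (or run) a call identified only by its opaque handle. */
static int long_call_continue(uint32_t handle)
{
    struct call_ctx *c = &ctx_table[handle];
    unsigned long done = 0;

    while ( c->next < c->total )
    {
        c->next++;                       /* one unit of work */
        if ( ++done == c->budget && c->next < c->total )
            return CALL_INTERRUPTED;     /* state stays hypervisor-side */
    }

    return CALL_DONE;
}

/* Start a long call; may return CALL_INTERRUPTED with a handle. */
static int long_call_start(unsigned long total, unsigned long budget,
                           uint32_t *handle)
{
    struct call_ctx *c = &ctx_table[0];  /* slot allocation elided */

    c->next = 0;
    c->total = total;
    c->budget = budget;
    *handle = 0;

    return long_call_continue(*handle);
}
```

With this shape, the guest only ever re-issues the continue call with the handle; it cannot forge or corrupt the saved state, which is the point of the "state never leaks to the rest of the system" remark quoted above.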

>>
>>> This approach itself has obvious
>>> problems: code that executes hypercall is responsible for preemption,
>>> preemption checks are infrequent (because they are costly by
>>> themselves), hypercall execution state is stored in guest-controlled
>>> area, we rely on guest's good will to continue the hypercall.
>>
>> Why is it a problem to rely on guest's good will? The hypercalls
>> should be preempted at a boundary that is safe to continue.
> 
> Yes, and it imposes restrictions on how to write hypercall
> handlers.
> In other words, there are many more places in hypercall handler code
> where it can be preempted than where hypercall continuation can be
> used. For example, you can preempt a hypercall that holds a mutex, but
> of course you can't create a continuation point in such a place.

I disagree: you can create a continuation point in such a place,
although it will be more complex because you have to make sure you
break the work at a restartable point.

I would also like to point out that preemption also has some drawbacks.
With RT in mind, you have to deal with priority inversion (e.g. a lower 
priority vCPU holds a mutex that is required by a higher priority one).

Outside of RT, you have to be careful about where mutexes are held. In 
your earlier answer, you suggested holding a mutex for the memory 
allocation. If you do that, then a domain A can block allocation for 
domain B while it holds the mutex.

This can lead to quite a serious problem if domain A cannot run (because 
it exhausted its credit) for a long time.

> 
>>> All this
>>> imposes restrictions on which hypercalls can be preempted, when they
>>> can be preempted and how to write hypercall handlers. Also, it
>>> requires very accurate coding and already led to at least one
>>> vulnerability - XSA-318. Some hypercalls can not be preempted at all,
>>> like the one mentioned in [1].
>>> Absence of hypervisor threads/vCPUs. Hypervisor owns only idle
>>> vCPUs,
>>> which are supposed to run when the system is idle. If hypervisor needs
>>> to execute own tasks that are required to run right now, it have no
>>> other way than to execute them on current vCPU. But scheduler does not
>>> know that hypervisor executes hypervisor task and accounts spent time
>>> to a domain. This can lead to domain starvation.
>>> Also, absence of hypervisor threads leads to absence of high-level
>>> synchronization primitives like mutexes, conditional variables,
>>> completions, etc. This leads to two problems: we need to use spinlocks
>>> everywhere and we have problems when porting device drivers from linux
>>> kernel.
>>> Proposed solution
>>> =================
>>> It is quite obvious that to fix the problems above we need to allow
>>> preemption in hypervisor mode. I am not familiar with the x86 side,
>>> but for ARM it was surprisingly easy to implement. Basically, vCPU
>>> context in hypervisor mode is determined by its stack and general
>>> purpose registers. And the __context_switch() function perfectly
>>> switches them when running in hypervisor mode. So there is no hard
>>> reason why it should be called only in the leave_hypervisor() path.
>>> The obvious question is: when should we try to preempt a running
>>> vCPU? And the answer is: when there was an external event. This means
>>> that we should try to preempt only when an interrupt request arrived
>>> while we are running in hypervisor mode. On ARM, in this case the
>>> function do_trap_irq() is called. The problem is that the IRQ handler
>>> can be called when the vCPU is already in an atomic state (holding a
>>> spinlock, for example). In that case we should try to preempt right
>>> after leaving the atomic state. This is basically the whole idea
>>> behind this PoC.
>>> Now, about the series composition.
>>> Patches
>>>     sched: core: save IRQ state during locking
>>>     sched: rt: save IRQ state during locking
>>>     sched: credit2: save IRQ state during locking
>>>     preempt: use atomic_t to for preempt_count
>>>     arm: setup: disable preemption during startup
>>>     arm: context_switch: allow to run with IRQs already disabled
>>> prepare the groundwork for the rest of the PoC. It appears that not
>>> all code is ready to be executed in IRQ state, and schedule() can now
>>> be called at the end of do_trap_irq(), which technically is considered
>>> IRQ handler state. Also, it is unwise to try to preempt things while
>>> we are still booting, so we need to enable atomic context during the
>>> boot process.
>>
>> I am really surprised that these are the only changes necessary in
>> Xen. For a first approach, we may want to be conservative about when
>> the preemption happens, as I am not convinced that all the places are
>> safe to preempt.
>>
> 
> Well, I can't say that I ran extensive tests, but I played with this for
> some time and it seemed quite stable. Of course, I had some problems
> with RTDS...
> 
> As I see it, Xen already supports SMP, so all places where races are
> possible should already be covered by spinlocks or taken into account
> by some other means.
That's correct for shared resources. I am more worried about any 
hypercalls that are expected to run more or less continuously (not 
counting interrupts) on the same pCPU.

> 
> Places which may not be safe to preempt are clustered around the task
> management code itself: schedulers, Xen entry/exit points, vCPU
> creation/destruction and such.
> 
> For example, we certainly do not want to destroy a vCPU that was
> preempted in hypervisor mode. I haven't covered this case, by the way.
> 
>>> Patches
>>>     preempt: add try_preempt() function
>>>     sched: core: remove ASSERT_NOT_IN_ATOMIC and disable preemption[!]
>>>     arm: traps: try to preempt before leaving IRQ handler
>>> are basically the core of this PoC. The try_preempt() function tries
>>> to preempt the vCPU when called either from the IRQ handler or when
>>> leaving an atomic state. The scheduler now enters an atomic state to
>>> ensure that it will not preempt itself. do_trap_irq() calls
>>> try_preempt() to initiate preemption.
>>
>> AFAICT, try_preempt() will deal with the rescheduling. But how about
>> softirqs? Don't we want to handle them in try_preempt() as well?
> 
> Well, yes and no. We have the following softirqs:
> 
>   TIMER_SOFTIRQ - should be called, I believe
>   RCU_SOFTIRQ - I'm not sure about this, but probably no

When would you call the RCU callbacks then?

>   SCHED_SLAVE_SOFTIRQ - no
>   SCHEDULE_SOFTIRQ - no
>   NEW_TLBFLUSH_CLOCK_PERIOD_SOFTIRQ - should be moved to a separate
>   thread, maybe?
>   TASKLET_SOFTIRQ - should be moved to a separate thread
> 
> So it looks like only timers should be handled for sure.
> 
>>
>> [...]
>>
>>> Conclusion
>>> ==========
>>> My main intention is to begin a discussion of hypervisor
>>> preemption. As I showed, it is doable right away and provides some
>>> immediate benefits. I do understand that a proper implementation
>>> requires much more effort. But we are ready to do this work if the
>>> community is interested in it.
>>> Just to reiterate the main benefits:
>>> 1. More controllable latency. On embedded systems, customers care
>>> about such things.
>>
>> Is the plan to only offer preemptible Xen?
>>
> 
> Sorry, didn't get the question.

What's your plan for the preemption support? Will an admin be able to 
configure Xen to be either preemptible or not?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 10:21:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 10:21:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89237.167921 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lErHx-0007id-9w; Wed, 24 Feb 2021 10:21:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89237.167921; Wed, 24 Feb 2021 10:21:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lErHx-0007iW-6g; Wed, 24 Feb 2021 10:21:01 +0000
Received: by outflank-mailman (input) for mailman id 89237;
 Wed, 24 Feb 2021 10:21:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Qs49=H2=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lErHv-0007iR-Oc
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 10:21:00 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cbffd52b-bc7a-46aa-bb63-14bf472bce2b;
 Wed, 24 Feb 2021 10:20:58 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cbffd52b-bc7a-46aa-bb63-14bf472bce2b
Date: Wed, 24 Feb 2021 11:20:47 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: <xen-devel@lists.xenproject.org>, <jbeulich@suse.com>,
	<andrew.cooper3@citrix.com>, <wl@xen.org>, <iwj@xenproject.org>,
	<anthony.perard@citrix.com>
Subject: Re: [PATCH for-next] configure: probe for gcc -m32 integer sizes
Message-ID: <YDYof8YTAViDlDz/@Air-de-Roger>
References: <alpine.DEB.2.21.2102231648580.3234@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.21.2102231648580.3234@sstabellini-ThinkPad-T480s>
X-ClientProxiedBy: MR2P264CA0178.FRAP264.PROD.OUTLOOK.COM (2603:10a6:501::17)
 To DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
MIME-Version: 1.0

On Tue, Feb 23, 2021 at 05:08:43PM -0800, Stefano Stabellini wrote:
> The hvmloader build on Alpine Linux x86_64 currently fails:
> 
> 
> hvmloader.c: In function 'init_vm86_tss':
> hvmloader.c:202:39: error: left shift count >= width of type [-Werror=shift-count-overflow]
>   202 |                   ((uint64_t)TSS_SIZE << 32) | virt_to_phys(tss));
> 
> util.c: In function 'get_cpu_mhz':
> util.c:824:15: error: conversion from 'long long unsigned int' to 'uint64_t' {aka 'long unsigned int'} changes value from
> '4294967296000000' to '0' [-Werror=overflow]
>   824 |     cpu_khz = 1000000ull << 32;
> 
> 
> The root cause of the issue is that gcc -m32 picks up headers meant for
> 64-bit builds.

I'm working on getting hvmloader to build standalone without using any
system headers, which I think is a worthwhile change to make rather
than this configure bodge. Will post the series now.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 10:27:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 10:27:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89240.167933 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lErO1-0007vY-0a; Wed, 24 Feb 2021 10:27:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89240.167933; Wed, 24 Feb 2021 10:27:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lErO0-0007vR-Tr; Wed, 24 Feb 2021 10:27:16 +0000
Received: by outflank-mailman (input) for mailman id 89240;
 Wed, 24 Feb 2021 10:27:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Qs49=H2=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lErNz-0007vM-GT
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 10:27:15 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c58d1fbf-852f-4aed-99c3-6be0f2649571;
 Wed, 24 Feb 2021 10:27:14 +0000 (UTC)
X-Inumbo-ID: c58d1fbf-852f-4aed-99c3-6be0f2649571
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Roger Pau Monne <roger.pau@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Ian Jackson
	<iwj@xenproject.org>
Subject: [PATCH 0/2] hvmloader: drop usage of system headers
Date: Wed, 24 Feb 2021 11:26:39 +0100
Message-ID: <20210224102641.89455-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.30.1
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: MR2P264CA0178.FRAP264.PROD.OUTLOOK.COM (2603:10a6:501::17)
 To DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
MIME-Version: 1.0

Hello,

The following two patches aim to make hvmloader standalone, so that it
doesn't try to use system headers. They shouldn't result in any
functional change.

Thanks, Roger.

Roger Pau Monne (2):
  hvmloader: use Xen private header for elf structs
  hvmloader: do not include system headers for type declarations

 tools/firmware/hvmloader/32bitbios_support.c |  4 +-
 tools/firmware/hvmloader/config.h            |  3 +-
 tools/firmware/hvmloader/hypercall.h         |  2 +-
 tools/firmware/hvmloader/mp_tables.c         |  2 +-
 tools/firmware/hvmloader/option_rom.h        |  2 +-
 tools/firmware/hvmloader/pir_types.h         |  2 +-
 tools/firmware/hvmloader/smbios.c            |  2 +-
 tools/firmware/hvmloader/smbios_types.h      |  2 +-
 tools/firmware/hvmloader/types.h             | 47 ++++++++++++++++++++
 tools/firmware/hvmloader/util.c              |  1 -
 tools/firmware/hvmloader/util.h              |  5 +--
 11 files changed, 57 insertions(+), 15 deletions(-)
 create mode 100644 tools/firmware/hvmloader/types.h

-- 
2.30.1



From xen-devel-bounces@lists.xenproject.org Wed Feb 24 10:27:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 10:27:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89241.167946 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lErO8-0007y3-AL; Wed, 24 Feb 2021 10:27:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89241.167946; Wed, 24 Feb 2021 10:27:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lErO8-0007xw-6D; Wed, 24 Feb 2021 10:27:24 +0000
Received: by outflank-mailman (input) for mailman id 89241;
 Wed, 24 Feb 2021 10:27:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Qs49=H2=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lErO6-0007xN-Bb
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 10:27:22 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1c976331-fc60-4e30-b204-3cd549bf1bf8;
 Wed, 24 Feb 2021 10:27:21 +0000 (UTC)
X-Inumbo-ID: 1c976331-fc60-4e30-b204-3cd549bf1bf8
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Roger Pau Monne <roger.pau@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Ian Jackson
	<iwj@xenproject.org>
Subject: [PATCH 1/2] hvmloader: use Xen private header for elf structs
Date: Wed, 24 Feb 2021 11:26:40 +0100
Message-ID: <20210224102641.89455-2-roger.pau@citrix.com>
X-Mailer: git-send-email 2.30.1
In-Reply-To: <20210224102641.89455-1-roger.pau@citrix.com>
References: <20210224102641.89455-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: MR2P264CA0143.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:30::35) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 1a70d12e-6fba-4841-2b56-08d8d8aec2ec
X-MS-TrafficTypeDiagnostic: DM6PR03MB3481:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB3481BC1C6AD0DDC31C8043DE8F9F9@DM6PR03MB3481.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:913;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 1a70d12e-6fba-4841-2b56-08d8d8aec2ec
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Feb 2021 10:27:18.0236
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: oMLkbyDIu417uHg6z13uLiHKdXagB+B/tbuQPvGy8XIkpckMmyT06ZBm1wHVeH5zpib32rKOj+2CrnVO6ZxVZw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB3481
X-OriginatorOrg: citrix.com

Do not use the system-provided elf.h; instead use elfstructs.h
from libelf.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 tools/firmware/hvmloader/32bitbios_support.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/firmware/hvmloader/32bitbios_support.c b/tools/firmware/hvmloader/32bitbios_support.c
index 114135022e..e726946a7b 100644
--- a/tools/firmware/hvmloader/32bitbios_support.c
+++ b/tools/firmware/hvmloader/32bitbios_support.c
@@ -21,7 +21,7 @@
  */
 
 #include <inttypes.h>
-#include <elf.h>
+#include <xen/libelf/elfstructs.h>
 #ifdef __sun__
 #include <sys/machelf.h>
 #endif
-- 
2.30.1



From xen-devel-bounces@lists.xenproject.org Wed Feb 24 10:27:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 10:27:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89242.167958 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lErOE-00081u-JU; Wed, 24 Feb 2021 10:27:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89242.167958; Wed, 24 Feb 2021 10:27:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lErOE-00081l-Fd; Wed, 24 Feb 2021 10:27:30 +0000
Received: by outflank-mailman (input) for mailman id 89242;
 Wed, 24 Feb 2021 10:27:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Qs49=H2=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lErOD-000815-Bg
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 10:27:29 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a452f7d3-735c-48d1-8ee4-73ff64d6ad5e;
 Wed, 24 Feb 2021 10:27:28 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a452f7d3-735c-48d1-8ee4-73ff64d6ad5e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614162448;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:content-transfer-encoding:mime-version;
  bh=gt/9qJPpfd+Nj2qzhIwzlRauIGrZSQbZDuCtyyUly+w=;
  b=Wg5ATzqEkQcprUt4sgm4O7mmCSjoAoTXa3RNHtWFON57uciq1N4IiwPv
   6vUdWcPvM6wusyXbIbvCipgQfY517OfyGpTUAmaV8kc2a0bjkQaChKQe0
   BMzITPmIu+dlDOFw6Blqzj8CE+4H6O7p3OV7/zYHnnFwjMBor41CKQuzu
   0=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: /vncOgBxjZJ3mxkJF5wDc4oCoDxhBISWg1v6BFsZQmxhpFEJH1t2iXWr47uWrI7k/n37riAj9y
 62MzIG4mK5jEddphgpUEw7ddwH2Zu0oUSgoBLibYv+ygrCLNtX0jR45K7wdr9SI5rwoBCRD8TD
 W6JBwLZM+wed8RvOxcMHcVaT7A7VrsESgG4bZfnz2G2Hf5BLfuQbXS8coSzFMkTrCJvOeIH51N
 iIQjwk9CVLdHnADv48Mhg4+/LG7TcgybO43Mt0F2DhKAWVLqjY3f1hUiKdfnQo16+UcioxrhgQ
 e+s=
X-SBRS: 5.2
X-MesageID: 37836438
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,202,1610427600"; 
   d="scan'208";a="37836438"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=i9dPDgWZz3qu/FCOZZpRUK6DbY/kshW2QyJgJW6XfKHudiN70NKtydNAPfG/fzkH86v1tewj0SbW4UdNusVfjIY3LoCkYgBasR0aIzH9EGadcOMNAmURxr1vBDPRKcCI9llEqI+jpiJ8jEJVUyK38nNc8yCahyHhiYdZPkv0IsW7r1FZf7rBECE1r5RjQY9H+GqSCM0ybwtUjgnc7HnwlObN5OjYH8RG21l08XgLJykPEGn0xQjpLCG3L86DIgPxBmvUZit5/HHHuYoAVKWRLLw2vYD0bBhEx0ywawHxfJQ0JdGuZJe4v5k8v6hJwMk9BjcWbMETr5ssUxgoJ86WtQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=VKK48ldDFyTZ4nizItvT0SdssDedMC6C9cFvsgN2uNY=;
 b=VqGpN+qr58ZV8+odDlR3/NjFuJuq7A80Rg/TVaGDcekBSQJnVykZQwogLoa3g0PR5zg8iPG7K1IfvMiOUPckdotZ6bOmoI5BjTWLzVLYlPjlAvxVonT98nzFHvJVZOuwpgdzkJ78x9FYjbb+MdSAwMSCYhaQ28VEymy+rUoFexTJJTPaOQLb3iheRwBqurebYLyeGldey/ZVu/blhlD35wQ8SAua1v1r3DeiUO3lBnw4uit7QfdDdZxLjFTlExnd3ow8QUytiiuw/v++Rvp4mdL/C0LpjWS0aeaSGRRIOS6KFoV9BD+DXfMTp9NuKXtt1l90xGVo6tTcXsvSmbD5iw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=VKK48ldDFyTZ4nizItvT0SdssDedMC6C9cFvsgN2uNY=;
 b=hs7lzedNjIJqX+DH8a5D1cEuu7lcH7Ziitip04d1ckMCE7vCjg3FJI+aGVZY36f+fsL9b8mU5dudYmpukGOUUZddZb/KFj3/cbnjiBE9enhQB23IgGiAp40aMYVX+EVatQEECceududNRZXgLMQETrDstVur5Y/t/Hbx7giNwso=
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Roger Pau Monne <roger.pau@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Ian Jackson
	<iwj@xenproject.org>
Subject: [PATCH 2/2] hvmloader: do not include system headers for type declarations
Date: Wed, 24 Feb 2021 11:26:41 +0100
Message-ID: <20210224102641.89455-3-roger.pau@citrix.com>
X-Mailer: git-send-email 2.30.1
In-Reply-To: <20210224102641.89455-1-roger.pau@citrix.com>
References: <20210224102641.89455-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: MR2P264CA0086.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:32::26) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 70b3d478-8a79-4c21-2925-08d8d8aec64a
X-MS-TrafficTypeDiagnostic: DM6PR03MB3481:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB34819566CCBA25EA4C31ED488F9F9@DM6PR03MB3481.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:6430;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 70b3d478-8a79-4c21-2925-08d8d8aec64a
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Feb 2021 10:27:23.7016
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: A6qO8/NbA323Zzf+AjsVS9D/vu4YbO1Efv935iYtubOTDP+9ifuNaLEtsf1FWgVoUZHaEpM/NujVstd0wYemFw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB3481
X-OriginatorOrg: citrix.com

Instead create a private types.h header that contains the set of types
that are required by hvmloader. Replace include occurrences of std*
headers with types.h. Note that including types.h directly is not
required in util.c because it already includes util.h which in turn
includes the newly created types.h.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 tools/firmware/hvmloader/32bitbios_support.c |  2 +-
 tools/firmware/hvmloader/config.h            |  3 +-
 tools/firmware/hvmloader/hypercall.h         |  2 +-
 tools/firmware/hvmloader/mp_tables.c         |  2 +-
 tools/firmware/hvmloader/option_rom.h        |  2 +-
 tools/firmware/hvmloader/pir_types.h         |  2 +-
 tools/firmware/hvmloader/smbios.c            |  2 +-
 tools/firmware/hvmloader/smbios_types.h      |  2 +-
 tools/firmware/hvmloader/types.h             | 47 ++++++++++++++++++++
 tools/firmware/hvmloader/util.c              |  1 -
 tools/firmware/hvmloader/util.h              |  5 +--
 11 files changed, 56 insertions(+), 14 deletions(-)
 create mode 100644 tools/firmware/hvmloader/types.h

diff --git a/tools/firmware/hvmloader/32bitbios_support.c b/tools/firmware/hvmloader/32bitbios_support.c
index e726946a7b..32b5c4c4ad 100644
--- a/tools/firmware/hvmloader/32bitbios_support.c
+++ b/tools/firmware/hvmloader/32bitbios_support.c
@@ -20,7 +20,7 @@
  * this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
-#include <inttypes.h>
+#include "types.h"
 #include <xen/libelf/elfstructs.h>
 #ifdef __sun__
 #include <sys/machelf.h>
diff --git a/tools/firmware/hvmloader/config.h b/tools/firmware/hvmloader/config.h
index 844120bc87..510d5b5c79 100644
--- a/tools/firmware/hvmloader/config.h
+++ b/tools/firmware/hvmloader/config.h
@@ -1,8 +1,7 @@
 #ifndef __HVMLOADER_CONFIG_H__
 #define __HVMLOADER_CONFIG_H__
 
-#include <stdint.h>
-#include <stdbool.h>
+#include "types.h"
 
 enum virtual_vga { VGA_none, VGA_std, VGA_cirrus, VGA_pt };
 extern enum virtual_vga virtual_vga;
diff --git a/tools/firmware/hvmloader/hypercall.h b/tools/firmware/hvmloader/hypercall.h
index 5368c30720..788f699565 100644
--- a/tools/firmware/hvmloader/hypercall.h
+++ b/tools/firmware/hvmloader/hypercall.h
@@ -31,7 +31,7 @@
 #ifndef __HVMLOADER_HYPERCALL_H__
 #define __HVMLOADER_HYPERCALL_H__
 
-#include <stdint.h>
+#include "types.h"
 #include <xen/xen.h>
 #include "config.h"
 
diff --git a/tools/firmware/hvmloader/mp_tables.c b/tools/firmware/hvmloader/mp_tables.c
index d207ecbf00..76790a9a1e 100644
--- a/tools/firmware/hvmloader/mp_tables.c
+++ b/tools/firmware/hvmloader/mp_tables.c
@@ -27,7 +27,7 @@
  * this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
-#include <stdint.h>
+#include "types.h"
 #include "config.h"
 
 /* number of non-processor MP table entries */
diff --git a/tools/firmware/hvmloader/option_rom.h b/tools/firmware/hvmloader/option_rom.h
index 0fefe0812a..7988aa29ec 100644
--- a/tools/firmware/hvmloader/option_rom.h
+++ b/tools/firmware/hvmloader/option_rom.h
@@ -1,7 +1,7 @@
 #ifndef __HVMLOADER_OPTION_ROM_H__
 #define __HVMLOADER_OPTION_ROM_H__
 
-#include <stdint.h>
+#include "types.h"
 
 struct option_rom_header {
     uint8_t signature[2]; /* "\x55\xaa" */
diff --git a/tools/firmware/hvmloader/pir_types.h b/tools/firmware/hvmloader/pir_types.h
index 9f9259c2e1..9efcdcf94b 100644
--- a/tools/firmware/hvmloader/pir_types.h
+++ b/tools/firmware/hvmloader/pir_types.h
@@ -23,7 +23,7 @@
 #ifndef PIR_TYPES_H
 #define PIR_TYPES_H
 
-#include <stdint.h>
+#include "types.h"
 
 #define NR_PIR_SLOTS 6
 
diff --git a/tools/firmware/hvmloader/smbios.c b/tools/firmware/hvmloader/smbios.c
index 97a054e9e3..5821c85c30 100644
--- a/tools/firmware/hvmloader/smbios.c
+++ b/tools/firmware/hvmloader/smbios.c
@@ -19,7 +19,7 @@
  * Authors: Andrew D. Ball <aball@us.ibm.com>
  */
 
-#include <stdint.h>
+#include "types.h"
 #include <xen/xen.h>
 #include <xen/version.h>
 #include "smbios_types.h"
diff --git a/tools/firmware/hvmloader/smbios_types.h b/tools/firmware/hvmloader/smbios_types.h
index 7c648ece71..439c3fb247 100644
--- a/tools/firmware/hvmloader/smbios_types.h
+++ b/tools/firmware/hvmloader/smbios_types.h
@@ -25,7 +25,7 @@
 #ifndef SMBIOS_TYPES_H
 #define SMBIOS_TYPES_H
 
-#include <stdint.h>
+#include "types.h"
 
 /* SMBIOS entry point -- must be written to a 16-bit aligned address
    between 0xf0000 and 0xfffff. 
diff --git a/tools/firmware/hvmloader/types.h b/tools/firmware/hvmloader/types.h
new file mode 100644
index 0000000000..3d765f2c60
--- /dev/null
+++ b/tools/firmware/hvmloader/types.h
@@ -0,0 +1,47 @@
+#ifndef _HVMLOADER_TYPES_H_
+#define _HVMLOADER_TYPES_H_
+
+typedef unsigned char uint8_t;
+typedef signed char int8_t;
+
+typedef unsigned short uint16_t;
+typedef signed short int16_t;
+
+typedef unsigned int uint32_t;
+typedef signed int int32_t;
+
+typedef unsigned long long uint64_t;
+typedef signed long long int64_t;
+
+#define INT8_MIN        (-0x7f-1)
+#define INT16_MIN       (-0x7fff-1)
+#define INT32_MIN       (-0x7fffffff-1)
+#define INT64_MIN       (-0x7fffffffffffffffll-1)
+
+#define INT8_MAX        0x7f
+#define INT16_MAX       0x7fff
+#define INT32_MAX       0x7fffffff
+#define INT64_MAX       0x7fffffffffffffffll
+
+#define UINT8_MAX       0xff
+#define UINT16_MAX      0xffff
+#define UINT32_MAX      0xffffffffu
+#define UINT64_MAX      0xffffffffffffffffull
+
+typedef uint32_t size_t;
+typedef uint32_t uintptr_t;
+
+#define UINTPTR_MAX UINT32_MAX
+
+#define bool _Bool
+#define true 1
+#define false 0
+#define __bool_true_false_are_defined   1
+
+typedef __builtin_va_list va_list;
+#define va_copy(dest, src)    __builtin_va_copy((dest), (src))
+#define va_start(ap, last)    __builtin_va_start((ap), (last))
+#define va_end(ap)            __builtin_va_end(ap)
+#define va_arg                __builtin_va_arg
+
+#endif
diff --git a/tools/firmware/hvmloader/util.c b/tools/firmware/hvmloader/util.c
index 7da144b0bb..2df84482ab 100644
--- a/tools/firmware/hvmloader/util.c
+++ b/tools/firmware/hvmloader/util.c
@@ -24,7 +24,6 @@
 #include "vnuma.h"
 #include <acpi2_0.h>
 #include <libacpi.h>
-#include <stdint.h>
 #include <xen/xen.h>
 #include <xen/memory.h>
 #include <xen/sched.h>
diff --git a/tools/firmware/hvmloader/util.h b/tools/firmware/hvmloader/util.h
index 4f0baade0e..285a1d23c4 100644
--- a/tools/firmware/hvmloader/util.h
+++ b/tools/firmware/hvmloader/util.h
@@ -1,10 +1,7 @@
 #ifndef __HVMLOADER_UTIL_H__
 #define __HVMLOADER_UTIL_H__
 
-#include <stdarg.h>
-#include <stdint.h>
-#include <stddef.h>
-#include <stdbool.h>
+#include "types.h"
 #include <xen/xen.h>
 #include <xen/hvm/hvm_info_table.h>
 #include "e820.h"
-- 
2.30.1



From xen-devel-bounces@lists.xenproject.org Wed Feb 24 10:40:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 10:40:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89252.167976 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEraZ-0001bf-1y; Wed, 24 Feb 2021 10:40:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89252.167976; Wed, 24 Feb 2021 10:40:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEraY-0001bY-VC; Wed, 24 Feb 2021 10:40:14 +0000
Received: by outflank-mailman (input) for mailman id 89252;
 Wed, 24 Feb 2021 10:40:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=F+xl=H2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lEraX-0001bT-QE
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 10:40:13 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 24d2cc7c-1fbb-492f-bd0e-5d14d4c0037c;
 Wed, 24 Feb 2021 10:40:12 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B53A6ADE5;
 Wed, 24 Feb 2021 10:40:11 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 24d2cc7c-1fbb-492f-bd0e-5d14d4c0037c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614163211; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=cTuUl2xtB7P8DiJLTPQATjG5gYNCqVCkgtBitZgCg4Q=;
	b=J9wF47lwAd/6Uxv8/LdoQd8RDfv0/rV2n1YFnO6NWksaOC5liujjU4MjMv2pRzDylJVZ51
	zuGksERZdhVn1s5zlqdtIfBL/BetaDhv0+LcOQeuDcDpvV27baIfFcZfCJOXd9cqFfGyZS
	f+ma7dh+eow/AL1E/2hD5DF77y4Ja3o=
Subject: Re: [PATCH 1/2] hvmloader: use Xen private header for elf structs
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Ian Jackson <iwj@xenproject.org>, xen-devel@lists.xenproject.org
References: <20210224102641.89455-1-roger.pau@citrix.com>
 <20210224102641.89455-2-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <f8531c7e-abdd-67da-f8a1-583126c265f5@suse.com>
Date: Wed, 24 Feb 2021 11:40:11 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210224102641.89455-2-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 24.02.2021 11:26, Roger Pau Monne wrote:
> Do not use the system-provided elf.h; instead use elfstructs.h
> from libelf.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>



From xen-devel-bounces@lists.xenproject.org Wed Feb 24 10:51:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 10:51:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89256.167988 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lErlg-0002ge-6X; Wed, 24 Feb 2021 10:51:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89256.167988; Wed, 24 Feb 2021 10:51:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lErlg-0002gX-1q; Wed, 24 Feb 2021 10:51:44 +0000
Received: by outflank-mailman (input) for mailman id 89256;
 Wed, 24 Feb 2021 10:51:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=F+xl=H2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lErle-0002gS-9M
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 10:51:42 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 24af6dfb-e8e0-41ea-83b0-6506239e35a5;
 Wed, 24 Feb 2021 10:51:41 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 43812ADE5;
 Wed, 24 Feb 2021 10:51:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 24af6dfb-e8e0-41ea-83b0-6506239e35a5
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614163900; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=BaPCh8nL9HBojeUkSfe+yQ4ioJVg4Tz4ByDCmiXQWhA=;
	b=iJTHSeKuCbZNndJiSSyglDGjkQZmqsecJDQ6GVlN875mK2sqZiyBf/SgxkN44dCwojsKsZ
	p+kE9ncjU7VG8XkkeWFONW1JHR6mS2XA2pGSgtsbtnSrpEflHzmAF7gJ6TTZZ6tas1Fm/c
	gswTU71JHk8TPHedGLLjF7BM0+H0iKQ=
Subject: Re: [PATCH 2/2] hvmloader: do not include system headers for type
 declarations
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Ian Jackson <iwj@xenproject.org>, xen-devel@lists.xenproject.org
References: <20210224102641.89455-1-roger.pau@citrix.com>
 <20210224102641.89455-3-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <fb677f29-2b21-aaf1-1127-6774fb8e91e9@suse.com>
Date: Wed, 24 Feb 2021 11:51:39 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210224102641.89455-3-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 24.02.2021 11:26, Roger Pau Monne wrote:
> --- /dev/null
> +++ b/tools/firmware/hvmloader/types.h
> @@ -0,0 +1,47 @@
> +#ifndef _HVMLOADER_TYPES_H_
> +#define _HVMLOADER_TYPES_H_
> +
> +typedef unsigned char uint8_t;
> +typedef signed char int8_t;
> +
> +typedef unsigned short uint16_t;
> +typedef signed short int16_t;
> +
> +typedef unsigned int uint32_t;
> +typedef signed int int32_t;
> +
> +typedef unsigned long long uint64_t;
> +typedef signed long long int64_t;

I wonder if we wouldn't be better off not making assumptions on
short / int / long long, and instead using
__attribute__((__mode__(...))) or, if available, the compiler-provided
__{,U}INT*_TYPE__ macros.

> +#define INT8_MIN        (-0x7f-1)
> +#define INT16_MIN       (-0x7fff-1)
> +#define INT32_MIN       (-0x7fffffff-1)
> +#define INT64_MIN       (-0x7fffffffffffffffll-1)
> +
> +#define INT8_MAX        0x7f
> +#define INT16_MAX       0x7fff
> +#define INT32_MAX       0x7fffffff
> +#define INT64_MAX       0x7fffffffffffffffll
> +
> +#define UINT8_MAX       0xff
> +#define UINT16_MAX      0xffff
> +#define UINT32_MAX      0xffffffffu
> +#define UINT64_MAX      0xffffffffffffffffull

At least if going the route outlined above, I think we'd then
also be better off not #define-ing any of these which we don't
really use. Afaics it's really only UINTPTR_MAX which needs
providing.

> +typedef uint32_t size_t;

Like the hypervisor, we should prefer using __SIZE_TYPE__
when available.

> +typedef uint32_t uintptr_t;

Again - use __UINTPTR_TYPE__ or, like Xen,
__attribute__((__mode__(__pointer__))).

> +#define bool _Bool
> +#define true 1
> +#define false 0
> +#define __bool_true_false_are_defined   1
> +
> +typedef __builtin_va_list va_list;
> +#define va_copy(dest, src)    __builtin_va_copy((dest), (src))
> +#define va_start(ap, last)    __builtin_va_start((ap), (last))

Nit: Perhaps better omit the unnecessary inner parentheses?

Jan


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 10:54:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 10:54:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89259.168000 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEro8-0002oE-Jm; Wed, 24 Feb 2021 10:54:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89259.168000; Wed, 24 Feb 2021 10:54:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEro8-0002o7-GJ; Wed, 24 Feb 2021 10:54:16 +0000
Received: by outflank-mailman (input) for mailman id 89259;
 Wed, 24 Feb 2021 10:54:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lEro7-0002o2-SI
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 10:54:15 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lEro7-0003cg-QG
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 10:54:15 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lEro7-0006AS-Ox
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 10:54:15 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lEro4-0006bf-Bl; Wed, 24 Feb 2021 10:54:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=9JN2optW0yIzKfa1RJQdHleQKv352qScGmZsEaZTUng=; b=zeaMmBIDmhmrwjhtIEz/bNcOvb
	c5R9Hpp3qzH12mU2/v4JcWWe7o79XmyqujmvBk0eP9amJSs691pWsAa27oaDOFuYXulD+Uog3zmU/
	BnFIL9Sx8FMyoqBX9ytoeY8TQ1hljYZY9Oa/Sn1Gv9pphSRQnxNMIPqeYljvD6vtcN3U=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24630.12372.139290.270183@mariner.uk.xensource.com>
Date: Wed, 24 Feb 2021 10:54:12 +0000
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: <xen-devel@lists.xenproject.org>,
    Jan Beulich <jbeulich@suse.com>,
    Andrew Cooper <andrew.cooper3@citrix.com>,
    Wei Liu <wl@xen.org>,
Subject: Re: [PATCH 0/2] hvmloader: drop usage of system headers
In-Reply-To: <20210224102641.89455-1-roger.pau@citrix.com>
References: <20210224102641.89455-1-roger.pau@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Roger Pau Monne writes ("[PATCH 0/2] hvmloader: drop usage of system headers"):
> The following two patches aim to make hvmloader standalone, so that it
> doesn't try to use system headers. It shouldn't result in any functional
> change.

Both patches:

Reviewed-by: Ian Jackson <iwj@xenproject.org>

Given its status as a build fix,

Release-Acked-by: Ian Jackson <iwj@xenproject.org>

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 11:01:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 11:01:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89263.168012 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lErvW-0003rH-Cz; Wed, 24 Feb 2021 11:01:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89263.168012; Wed, 24 Feb 2021 11:01:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lErvW-0003rA-9W; Wed, 24 Feb 2021 11:01:54 +0000
Received: by outflank-mailman (input) for mailman id 89263;
 Wed, 24 Feb 2021 11:01:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lErvV-0003r5-7W
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 11:01:53 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lErvV-0003ml-0n
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 11:01:53 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lErvU-0006yH-SR
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 11:01:52 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lErvR-0006d7-Jm; Wed, 24 Feb 2021 11:01:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=KGUKAx6F4QFDhjlJfQFEI6wrsIO6N4p5EZcQDnzLuXs=; b=skGDHhJF3gRaXopY704B2AOe0K
	oyMZLnFCT8PdK0bCEBkh3GglrKJMSqYZgcEpEGDuUTm7wG0sOn9ibfx/jN+36icB7tgr/n/D+Bp1J
	WNaFWk0LQVufj0LlM8jQd7MgVeTLTklr24CKG/ApXtfnDxHdzOeYsEX6Yq1/qcdfoA9U=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24630.12829.379628.348559@mariner.uk.xensource.com>
Date: Wed, 24 Feb 2021 11:01:49 +0000
To: Julien Grall <julien@xen.org>
Cc: xen-devel@lists.xenproject.org,
    hongyxia@amazon.co.uk,
    Julien Grall <jgrall@amazon.com>,
    Jan Beulich <jbeulich@suse.com>,
    Andrew Cooper <andrew.cooper3@citrix.com>,
    Kevin Tian <kevin.tian@intel.com>,
    Paul Durrant <paul@xen.org>
Subject: Re: [for-4.15][RESEND PATCH v4 0/2] xen/iommu: Collection of bug fixes for IOMMU teardown
In-Reply-To: <20210224094356.7606-1-julien@xen.org>
References: <20210224094356.7606-1-julien@xen.org>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Julien Grall writes ("[for-4.15][RESEND PATCH v4 0/2] xen/iommu: Collection of bug fixes for IOMMU teardown"):
> This series is a collection of bug fixes for the IOMMU teardown code.
> All of them are candidate for 4.15 as they can either leak memory or
> lead to host crash/host corruption.
> 
> This is sent directly on xen-devel because all the issues were either
> introduced in 4.15 or happen in the domain creation code.

Thanks.

Release-Acked-by: Ian Jackson <iwj@xenproject.org>

I'd appreciate it if reviewers would double-check the comments and
commit messages which Julien has helpfully provided, to check that the
assertions made are true.  It seems this is quite complex...

Ian.


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 11:08:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 11:08:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89267.168024 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEs1S-00042A-3L; Wed, 24 Feb 2021 11:08:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89267.168024; Wed, 24 Feb 2021 11:08:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEs1R-000423-W7; Wed, 24 Feb 2021 11:08:01 +0000
Received: by outflank-mailman (input) for mailman id 89267;
 Wed, 24 Feb 2021 11:08:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lEs1R-00041y-2g
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 11:08:01 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lEs1Q-0003sp-Ua
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 11:08:00 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lEs1Q-0007Ot-TZ
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 11:08:00 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lEs1J-0006eV-6V; Wed, 24 Feb 2021 11:07:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=uCGM9csyWng0qVf0sasPUOoV8YU3y+OZGgPoJq1JhTg=; b=qGOFdoffgzRCBS7XGFYmGhf+od
	KHd+EU/HTG0Vu8J/SOe/law93lWvw/piXhgA6eN0Lou/JoYCQ4t7XYzAEnatflNArTL3PdRvdPtt1
	Z8V7Wa74RXR9g4S+WzYUaJm/SlwbPKOXEUvvdqdn8upOg1ahTJ9ujXKJI3ZhR2gnhb0k=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24630.13192.874503.894268@mariner.uk.xensource.com>
Date: Wed, 24 Feb 2021 11:07:52 +0000
To: Jan Beulich <jbeulich@suse.com>
Cc: Roger Pau Monne <roger.pau@citrix.com>,
    Andrew Cooper <andrew.cooper3@citrix.com>,
    Wei Liu <wl@xen.org>,
    xen-devel@lists.xenproject.org
Subject: Re: [PATCH 2/2] hvmloader: do not include system headers for type
 declarations
In-Reply-To: <fb677f29-2b21-aaf1-1127-6774fb8e91e9@suse.com>
References: <20210224102641.89455-1-roger.pau@citrix.com>
	<20210224102641.89455-3-roger.pau@citrix.com>
	<fb677f29-2b21-aaf1-1127-6774fb8e91e9@suse.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Jan Beulich writes ("Re: [PATCH 2/2] hvmloader: do not include system headers for type declarations"):
> On 24.02.2021 11:26, Roger Pau Monne wrote:
> > --- /dev/null
> > +++ b/tools/firmware/hvmloader/types.h
> > @@ -0,0 +1,47 @@
> > +#ifndef _HVMLOADER_TYPES_H_
> > +#define _HVMLOADER_TYPES_H_
> > +
> > +typedef unsigned char uint8_t;
> > +typedef signed char int8_t;
> > +
> > +typedef unsigned short uint16_t;
> > +typedef signed short int16_t;
> > +
> > +typedef unsigned int uint32_t;
> > +typedef signed int int32_t;
> > +
> > +typedef unsigned long long uint64_t;
> > +typedef signed long long int64_t;
> 
> I wonder if we weren't better off not making assumptions on
> short / int / long long, and instead use
> __attribute__((__mode__(...))) or, if available, the compiler
> provided __{,U}INT*_TYPE__.

This code is only ever going to be for 32-bit x86, so I think the way
Roger did it is fine.

Doing it the other way, to cope with this file being used with
compiler settings where the above set of types is wrong, would also
imply more complex definitions of INT32_MIN et al.
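
To illustrate the extra complexity (a hypothetical sketch, not from
either patch): with the compiler-provided types, the limit macros
would need the matching compiler-provided constant macros, because the
correct literal suffix depends on what __INT64_TYPE__ expands to.

```c
/* Hypothetical sketch (GCC/clang only): limits built from the compiler's
 * own predefined macros.  __INT64_C() appends whatever suffix matches
 * __INT64_TYPE__, so no "ll" can be hard-coded. */
typedef __INT64_TYPE__ int64_t;

#define INT64_MAX __INT64_C(0x7fffffffffffffff)
#define INT64_MIN (-INT64_MAX - 1)
```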

> > +#define INT8_MIN        (-0x7f-1)
> > +#define INT16_MIN       (-0x7fff-1)
> > +#define INT32_MIN       (-0x7fffffff-1)
> > +#define INT64_MIN       (-0x7fffffffffffffffll-1)
> > +
> > +#define INT8_MAX        0x7f
> > +#define INT16_MAX       0x7fff
> > +#define INT32_MAX       0x7fffffff
> > +#define INT64_MAX       0x7fffffffffffffffll
> > +
> > +#define UINT8_MAX       0xff
> > +#define UINT16_MAX      0xffff
> > +#define UINT32_MAX      0xffffffffu
> > +#define UINT64_MAX      0xffffffffffffffffull
> 
> At least if going the above outlined route, I think we'd then
> also be better off not #define-ing any of these which we don't
> really use. Afaics it's really only UINTPTR_MAX which needs
> providing.

I disagree.  Providing the full set now gets them all properly
reviewed and reduces the burden on future work.

> > +typedef uint32_t size_t;

I would be inclined to provide ssize_t too but maybe hvmloader will
never need it.

> Like the hypervisor, we should prefer using __SIZE_TYPE__
> when available.

I disagree.

> > +typedef uint32_t uintptr_t;
> 
> Again - use __UINTPTR_TYPE__ or, like Xen,
> __attribute__((__mode__(__pointer__))).

I disagree.

> > +#define bool _Bool
> > +#define true 1
> > +#define false 0
> > +#define __bool_true_false_are_defined   1
> > +
> > +typedef __builtin_va_list va_list;
> > +#define va_copy(dest, src)    __builtin_va_copy((dest), (src))
> > +#define va_start(ap, last)    __builtin_va_start((ap), (last))
> 
> Nit: Perhaps better omit the unnecessary inner parentheses?

We should definitely keep the inner parentheses.  I don't want to
start carefully reasoning about precisely which inner parentheses are
necessary for macro argument parsing correctness.
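
The hazard the blanket rule guards against looks like this (a generic
illustration, not about these va_* macros specifically):

```c
/* Classic macro-argument hazard: without inner parentheses, operators
 * inside the argument bind to the surrounding expression. */
#define SQUARE_BAD(x)  (x * x)
#define SQUARE_GOOD(x) ((x) * (x))

/* SQUARE_BAD(1 + 2)  expands to (1 + 2 * 1 + 2)     == 5,
 * SQUARE_GOOD(1 + 2) expands to ((1 + 2) * (1 + 2)) == 9. */
```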

Ian.


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 11:13:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 11:13:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89270.168036 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEs6v-000513-On; Wed, 24 Feb 2021 11:13:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89270.168036; Wed, 24 Feb 2021 11:13:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEs6v-00050w-Lb; Wed, 24 Feb 2021 11:13:41 +0000
Received: by outflank-mailman (input) for mailman id 89270;
 Wed, 24 Feb 2021 11:13:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=F+xl=H2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lEs6t-00050r-QA
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 11:13:39 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5b5dbc4d-3514-436c-978b-5f2e3512683d;
 Wed, 24 Feb 2021 11:13:38 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 017D7ADF0;
 Wed, 24 Feb 2021 11:13:38 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5b5dbc4d-3514-436c-978b-5f2e3512683d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614165218; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=KVtRyWlNUTJ6sD30gcEEEJyG/aGwIaj9/tz5k+zVB4o=;
	b=bKP2ap80OjqOSblZVfzc7nX+BO6zQbNEWsreU0QPE0xngWDnvW2/BxHbm4vqJ0x/D7XDBG
	ITwzPS4JuCu78dK7jnOAL0fzOGmqFbOFoc73iy/exhAPShRfVNSCuLiZ5s6HNh831A0Qz8
	HdVu1q/sDbCOaiWmAsBg7/3CGFrgnqg=
Subject: Re: [PATCH v2 0/8] x86/PV: avoid speculation abuse through guest
 accessors
To: Ian Jackson <iwj@xenproject.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>
References: <b466a19e-e547-3c7c-e39b-1a4c848a053a@suse.com>
 <24623.56913.290437.499946@mariner.uk.xensource.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ce93bd48-7ef3-cdb1-9429-ccd894895e9e@suse.com>
Date: Wed, 24 Feb 2021 12:13:37 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <24623.56913.290437.499946@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 19.02.2021 16:50, Ian Jackson wrote:
> Jan Beulich writes ("[PATCH v2 0/8] x86/PV: avoid speculation abuse through guest accessors"):
>> Re-sending primarily for the purpose of getting a release ack, an
>> explicit release nak, or an indication of there not being a need,
>> all for at least the first three patches here (which are otherwise
>> ready to go in). I've dropped the shadow part of the series from
>> this re-submission, because it has all got reviewed by Tim already
>> and is intended for 4.16 only anyway. I'm re-including the follow
>> up patches getting the code base in consistent shape again, as I
>> continue to think this consistency goal is at least worth a
>> consideration towards a freeze exception.
>>
>> 1: split __{get,put}_user() into "guest" and "unsafe" variants
>> 2: split __copy_{from,to}_user() into "guest" and "unsafe" variants
>> 3: PV: harden guest memory accesses against speculative abuse
> 
> These three:
> 
> Release-Acked-by: Ian Jackson <iwj@xenproject.org>
> 
> On the grounds that this is probably severe enough to be a blocking
> issue for 4.15.
> 
>> 4: rename {get,put}_user() to {get,put}_guest()
>> 5: gdbsx: convert "user" to "guest" accesses
>> 6: rename copy_{from,to}_user() to copy_{from,to}_guest_pv()
>> 7: move stac()/clac() from {get,put}_unsafe_asm() ...
>> 8: PV: use get_unsafe() instead of copy_from_unsafe()
> 
> These have not got a maintainer review yet.  To grant a release-ack
> I'd like an explanation of the downsides and upsides of taking this
> series in 4.15 ?
> 
> You say "consistency" but in practical terms, what will happen if the
> code is not "consistent" in this sense ?
> 
> I'd also like to hear from another hypervisor maintainer.

Meanwhile they have been reviewed by Roger. Are you willing to
give them, perhaps with the exception of 7, a release ack as
well?

Jan


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 11:17:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 11:17:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89273.168047 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEsAf-0005Ar-8j; Wed, 24 Feb 2021 11:17:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89273.168047; Wed, 24 Feb 2021 11:17:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEsAf-0005Ak-5g; Wed, 24 Feb 2021 11:17:33 +0000
Received: by outflank-mailman (input) for mailman id 89273;
 Wed, 24 Feb 2021 11:17:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=eUeI=H2=redhat.com=kraxel@srs-us1.protection.inumbo.net>)
 id 1lEsAd-0005Af-9y
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 11:17:31 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 1ca72629-abb2-4b43-be2d-6be073c2ab40;
 Wed, 24 Feb 2021 11:17:30 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-141-JG18An3NMO2ncTIcRy6BFQ-1; Wed, 24 Feb 2021 06:16:42 -0500
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com
 [10.5.11.14])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 19DD28049C6;
 Wed, 24 Feb 2021 11:15:47 +0000 (UTC)
Received: from sirius.home.kraxel.org (ovpn-114-4.ams2.redhat.com
 [10.36.114.4])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id AE8DB5D9D0;
 Wed, 24 Feb 2021 11:15:43 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 82C5718000AE; Wed, 24 Feb 2021 12:15:40 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1ca72629-abb2-4b43-be2d-6be073c2ab40
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1614165450;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=EZdth/jUbehtJCYZwrly7yhxVtc9NrH42+8crogjt7Y=;
	b=GM4I8taqKVwQDVmyT3rUDtT9jX63Mtu8Nfg8oVdbyoYkmW5jcCLwgyv4mEeCzwfv5qArpy
	Z0wEY9SX1f/5eq+SohnRvytGicCUhXGFsJCprQqnVVRPsIlolGSDa964nfZX8CqnGQBvG/
	5EA6bBhTX0Xe5v+zyc0E0kg5givj4ws=
X-MC-Unique: JG18An3NMO2ncTIcRy6BFQ-1
Date: Wed, 24 Feb 2021 12:15:40 +0100
From: Gerd Hoffmann <kraxel@redhat.com>
To: Akihiko Odaki <akihiko.odaki@gmail.com>
Cc: qemu Developers <qemu-devel@nongnu.org>, xen-devel@lists.xenproject.org,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>
Subject: Re: [PATCH] virtio-gpu: Respect graphics update interval for EDID
Message-ID: <20210224111540.xd5a6yszql6wln7m@sirius.home.kraxel.org>
References: <20210221133414.7262-1-akihiko.odaki@gmail.com>
 <20210222105738.w2q6vp5pi4p6bx5m@sirius.home.kraxel.org>
 <CAMVc7JVo_XJcGcxW0Wmqje3Y40fRZDY6T8dnQTc2=Ehasz4UHw@mail.gmail.com>
MIME-Version: 1.0
In-Reply-To: <CAMVc7JVo_XJcGcxW0Wmqje3Y40fRZDY6T8dnQTc2=Ehasz4UHw@mail.gmail.com>
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=kraxel@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit

On Tue, Feb 23, 2021 at 01:50:51PM +0900, Akihiko Odaki wrote:
> On Mon, Feb 22, 2021 at 19:57, Gerd Hoffmann <kraxel@redhat.com>:
> >
> > On Sun, Feb 21, 2021 at 10:34:14PM +0900, Akihiko Odaki wrote:
> > > This change introduces an additional member, refresh_rate to
> > > qemu_edid_info in include/hw/display/edid.h.
> > >
> > > This change also isolates the graphics update interval from the
> > > display update interval. The guest will update the frame buffer
> > > in the graphics update interval, but displays can be updated in a
> > > dynamic interval, for example to save update costs aggressively
> > > (vnc) or to respond to user-generated events (sdl).
> > > It stabilizes the graphics update interval and prevents the guest
> > > from being confused.
> >
> > Hmm.  What problem you are trying to solve here?
> >
> > The update throttle being visible to the guest was done intentionally,
> > so the guest can throttle the display updates too in case nobody is
> > watching those display updates anyway.
> 
> Indeed, we are throttling the update for vnc to avoid some worthless
> work. But typically a guest cannot respond to update interval changes
> so often because the real display devices the guest is designed for do
> not change the update interval in that way.

What is the problem you are seeing?

Some guest software raising timeout errors when they see only
one vblank irq every 3 seconds?  If so: which software is this?
Any chance we can fix this on the guest side?

> That is why we have to
> tell the guest a stable update interval even if it results in wasted
> frames.

Because of the wasted frames I'd like this to be an option you can
enable when needed.  For the majority of use cases this seems to be
no problem ...

Also: the EDID changes should go to a separate patch.

take care,
  Gerd



From xen-devel-bounces@lists.xenproject.org Wed Feb 24 11:39:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 11:39:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89279.168065 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEsVf-0007Hr-52; Wed, 24 Feb 2021 11:39:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89279.168065; Wed, 24 Feb 2021 11:39:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEsVf-0007Hk-1m; Wed, 24 Feb 2021 11:39:15 +0000
Received: by outflank-mailman (input) for mailman id 89279;
 Wed, 24 Feb 2021 11:39:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=F+xl=H2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lEsVe-0007Hf-Cn
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 11:39:14 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cc812aea-8584-4c65-b787-fee87fed8138;
 Wed, 24 Feb 2021 11:39:13 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4E6D7ADDD;
 Wed, 24 Feb 2021 11:39:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cc812aea-8584-4c65-b787-fee87fed8138
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614166752; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=0siXn6TJ90QOL1JmkIRuuoNWfa9xZ4Kmuu52ceU3t6A=;
	b=BsO9sKGbtW5yLfdjeuFPZr9icn3Fvbr2ci9WEQuoZPsS6KxETHSSmv+LbQI51C4Hhlx4uk
	MTFXvwUezDyB+AEzBwPHDqybCABa1CIQiRBUF8MfizdMA3D9lb+YyTfcZEo6WD6oDVxWQU
	22P+QyMDi+4C/JO9R/sT+2cw4ytCp+0=
Subject: Re: [PATCH 2/2] hvmloader: do not include system headers for type
 declarations
To: Ian Jackson <iwj@xenproject.org>
Cc: Roger Pau Monne <roger.pau@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210224102641.89455-1-roger.pau@citrix.com>
 <20210224102641.89455-3-roger.pau@citrix.com>
 <fb677f29-2b21-aaf1-1127-6774fb8e91e9@suse.com>
 <24630.13192.874503.894268@mariner.uk.xensource.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <9e64f1b3-fbb3-6561-cf7b-498ed3839020@suse.com>
Date: Wed, 24 Feb 2021 12:39:10 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <24630.13192.874503.894268@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 24.02.2021 12:07, Ian Jackson wrote:
> Jan Beulich writes ("Re: [PATCH 2/2] hvmloader: do not include system headers for type declarations"):
>> On 24.02.2021 11:26, Roger Pau Monne wrote:
>>> --- /dev/null
>>> +++ b/tools/firmware/hvmloader/types.h
>>> @@ -0,0 +1,47 @@
>>> +#ifndef _HVMLOADER_TYPES_H_
>>> +#define _HVMLOADER_TYPES_H_
>>> +
>>> +typedef unsigned char uint8_t;
>>> +typedef signed char int8_t;
>>> +
>>> +typedef unsigned short uint16_t;
>>> +typedef signed short int16_t;
>>> +
>>> +typedef unsigned int uint32_t;
>>> +typedef signed int int32_t;
>>> +
>>> +typedef unsigned long long uint64_t;
>>> +typedef signed long long int64_t;
>>
>> I wonder if we weren't better off not making assumptions on
>> short / int / long long, and instead use
>> __attribute__((__mode__(...))) or, if available, the compiler
>> provided __{,U}INT*_TYPE__.
> 
> This code is only ever going to be for 32-bit x86, so I think the way
> Roger did it is fine.

It is technically correct at this point in time, from all we can
tell. I can't see any reason though why a compiler might not
support wider int or, in particular, long long. hvmloader, unlike
most of the rest of the tools, is a freestanding binary and hence
not tied to any particular ABI the compiler used may have been
built for.
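
For reference, the mode-attribute approach might look like this (GCC
attribute syntax; a hypothetical sketch, not from the patch, with
names chosen here to avoid clashing with stdint.h):

```c
/* ABI-independent fixed-width types via GCC's mode attribute:
 * QI/HI/SI/DI request 8/16/32/64-bit integer modes directly,
 * regardless of what "short", "int" or "long long" happen to mean
 * on the target. */
typedef unsigned int u8  __attribute__((__mode__(__QI__)));
typedef unsigned int u16 __attribute__((__mode__(__HI__)));
typedef unsigned int u32 __attribute__((__mode__(__SI__)));
typedef unsigned int u64 __attribute__((__mode__(__DI__)));
```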

> Doing it the other way, to cope with this file being used with
> compiler settings where the above set of types is wrong, would also
> imply more complex definitions of INT32_MIN et al.

Well, that's only as far as the use of number suffixes goes. The
values used won't change, as these constants describe fixed width
types.

>>> +typedef uint32_t size_t;
> 
> I would be inclined to provide ssize_t too but maybe hvmloader will
> never need it.
> 
>> Like the hypervisor, we should prefer using __SIZE_TYPE__
>> when available.
> 
> I disagree.

May I ask why? There is a reason providing of these types did get
added to (at least) gcc.

One argument against this would be the above-mentioned independence
from any ABI the compiler was built for, but I'd buy that only
if above we indeed used __attribute__((__mode__())), as that's
the only way to achieve such independence.

IOW imo if we stick to what is there now for {,u}int<N>_t, we
should use __SIZE_TYPE__ here. If we used the mode attribute
approach there, using uint32_t here would indeed be better.
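
A sketch of what the __SIZE_TYPE__ variant might look like (the
fallback branch is the 32-bit x86 assumption Roger's patch makes; the
hv_ prefix is only for this illustration, to avoid redefining the
host's size_t):

```c
/* Prefer the compiler's own notion of size_t when it is advertised,
 * falling back to the 32-bit x86 assumption otherwise. */
#ifdef __SIZE_TYPE__
typedef __SIZE_TYPE__ hv_size_t;
#else
typedef unsigned int hv_size_t;   /* uint32_t, as in the patch */
#endif
```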

>>> +typedef uint32_t uintptr_t;
>>
>> Again - use __UINTPTR_TYPE__ or, like Xen,
>> __attribute__((__mode__(__pointer__))).
> 
> I disagree.

The same question / considerations apply here then.

>>> +#define bool _Bool
>>> +#define true 1
>>> +#define false 0
>>> +#define __bool_true_false_are_defined   1
>>> +
>>> +typedef __builtin_va_list va_list;
>>> +#define va_copy(dest, src)    __builtin_va_copy((dest), (src))
>>> +#define va_start(ap, last)    __builtin_va_start((ap), (last))
>>
>> Nit: Perhaps better omit the unnecessary inner parentheses?
> 
> We should definitely keep the inner parentheses.  I don't want to
> start carefully reasoning about precisely which inner parentheses are
> necesary for macro argument parsing correctness.

Can you give me an example of when the inner parentheses would be
needed? I don't think they're needed regardless of whether (taking the
example here) __builtin_va_...() are functions or macros. They
would of course be needed if the identifiers were part of
expressions beyond the mere function invocation. We've been trying
to eliminate such cases in the hypervisor part of the tree, and since
hvmloader is more closely related to the hypervisor than to the tools
(see also its maintainership), I think we would want to do so
here, too. But of course if there are cases where such
parentheses really are needed, we'd want (need) to change our
approach in hypervisor code as well.

The primary reason why I've been advocating avoiding them is that,
as long as they're not needed for anything, they harm readability
and increase the risk of mistakes like the one that led to
XSA-316.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 12:01:46 2021
Date: Wed, 24 Feb 2021 13:01:19 +0100
From: Roger Pau Monné <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Ian Jackson <iwj@xenproject.org>, <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 2/2] hvmloader: do not include system headers for type
 declarations
Message-ID: <YDY+tvs9Llf5K8Da@Air-de-Roger>
References: <20210224102641.89455-1-roger.pau@citrix.com>
 <20210224102641.89455-3-roger.pau@citrix.com>
 <fb677f29-2b21-aaf1-1127-6774fb8e91e9@suse.com>
In-Reply-To: <fb677f29-2b21-aaf1-1127-6774fb8e91e9@suse.com>

On Wed, Feb 24, 2021 at 11:51:39AM +0100, Jan Beulich wrote:
> On 24.02.2021 11:26, Roger Pau Monne wrote:
> > --- /dev/null
> > +++ b/tools/firmware/hvmloader/types.h
> > @@ -0,0 +1,47 @@
> > +#ifndef _HVMLOADER_TYPES_H_
> > +#define _HVMLOADER_TYPES_H_
> > +
> > +typedef unsigned char uint8_t;
> > +typedef signed char int8_t;
> > +
> > +typedef unsigned short uint16_t;
> > +typedef signed short int16_t;
> > +
> > +typedef unsigned int uint32_t;
> > +typedef signed int int32_t;
> > +
> > +typedef unsigned long long uint64_t;
> > +typedef signed long long int64_t;
> 
> I wonder if we wouldn't be better off not making assumptions on
> short / int / long long, and instead using
> __attribute__((__mode__(...))) or, if available, the compiler
> provided __{,U}INT*_TYPE__.

Oh, didn't know about all this fancy stuff.

Clang doesn't seem to support the same mode attributes; for example
QImode is unknown, and that's just with the one version of clang that
I happened to test.

Using __{,U}INT*_TYPE__ does seem to be supported on the clang version
I've tested with, so that might be an option if it's supported
everywhere we care about. If we still need to keep the current typedef
chunk for fallback purposes, then I see little real benefit in using
__{,U}INT*_TYPE__.

> > +#define INT8_MIN        (-0x7f-1)
> > +#define INT16_MIN       (-0x7fff-1)
> > +#define INT32_MIN       (-0x7fffffff-1)
> > +#define INT64_MIN       (-0x7fffffffffffffffll-1)
> > +
> > +#define INT8_MAX        0x7f
> > +#define INT16_MAX       0x7fff
> > +#define INT32_MAX       0x7fffffff
> > +#define INT64_MAX       0x7fffffffffffffffll
> > +
> > +#define UINT8_MAX       0xff
> > +#define UINT16_MAX      0xffff
> > +#define UINT32_MAX      0xffffffffu
> > +#define UINT64_MAX      0xffffffffffffffffull
> 
> At least if going the above outlined route, I think we'd then
> also be better off not #define-ing any of these which we don't
> really use. Afaics it's really only UINTPTR_MAX which needs
> providing.

I assumed that for consistency we would want to provide those
already. I can switch them to using __{U}INT*_{MAX,MIN}__ if we agree
that it's supported on all compilers we care about, but I would rather
not drop them. I think those might be useful in the future, and having
them already does no harm.

> > +typedef uint32_t size_t;
> 
> Like the hypervisor, we should prefer using __SIZE_TYPE__
> when available.
> 
> > +typedef uint32_t uintptr_t;
> 
> Again - use __UINTPTR_TYPE__ or, like Xen,
> __attribute__((__mode__(__pointer__))).

Let me run a gitlab test using the __{,U}INT*_TYPE__ approach; if
that's fine everywhere we test, then I think we can go for that, if
you prefer it over the current proposal?

I still think that coding them like I've done above should be fine,
as we don't expect hvmloader to ever be built in a mode other than
i386?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 12:09:19 2021
Date: Wed, 24 Feb 2021 12:09:14 +0000
From: osstest service owner <osstest-admin@xenproject.org>
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [libvirt test] 159613: regressions - FAIL
Message-ID: <osstest-159613-mainreport@xen.org>

flight 159613 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159613/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              fa58f571ee0b2ab72f8a5a19d35298d6de6469b3
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  229 days
Failing since        151818  2020-07-11 04:18:52 Z  228 days  221 attempts
Testing same since   159613  2021-02-24 04:18:55 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 44090 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 12:12:00 2021
Date: Wed, 24 Feb 2021 12:11:45 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Roger Pau Monne <roger.pau@citrix.com>, <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>, Ian Jackson
	<iwj@xenproject.org>
Subject: Re: [PATCH 2/2] hvmloader: do not include system headers for type
 declarations
Message-ID: <ad25d5b7-2f95-91f1-ad46-eb686cd4f397@citrix.com>
References: <20210224102641.89455-1-roger.pau@citrix.com>
 <20210224102641.89455-3-roger.pau@citrix.com>
In-Reply-To: <20210224102641.89455-3-roger.pau@citrix.com>
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39850400004)(376002)(136003)(346002)(366004)(396003)(4744005)(186003)(16526019)(2906002)(2616005)(36756003)(6666004)(4326008)(86362001)(26005)(316002)(8676002)(6486002)(53546011)(956004)(54906003)(8936002)(478600001)(31686004)(66946007)(66556008)(66476007)(16576012)(31696002)(5660300002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?TEtMOGYrTjhzMGZXVG55Qmc1encrTlZJeVJIaHVLQm5yNHNHc3B6bWY3bWFt?=
 =?utf-8?B?VWd2Rnhid1l0MEJ5Q3c5Umcyb3pUc0ltUlkxcFNUSUpGMjE1NDM2M295UzVM?=
 =?utf-8?B?RDlmamUxekY1QkhWdXFEbTRPMUxYTVlDbk10ZnA0dTNQVnNRTHJFOWR4QXd0?=
 =?utf-8?B?RStIdHlvTGNnUzJYWlo4dzI5bld2VmQ0VFZkZmlSTEZjeHZUcjhqOVhvWUdp?=
 =?utf-8?B?MnlBWUVic29uZ0FiZDZ1NmNJWFUrRlNCOFNUUmZ2ZmpJQVhkT2FkeHo5YTRQ?=
 =?utf-8?B?RHJsRy9BazU1NGZGOFBXQWxET3BuNjhvVi9qaFQzK3I1ek4wemV3aE1Id1lC?=
 =?utf-8?B?Z1pEejB3SWhkZEJ0aE5WSXVpTGRPZDBYdmlta011Vk96d2NlS1JMQlZLcHJI?=
 =?utf-8?B?ZEtjWklRK25ZaGhpcHZQYTdxa01YTXJZcUFhdWc3ZHJpNnJ0ZTFkN3k0eVp0?=
 =?utf-8?B?UHJZK2pWemhPUWNvWXBJcUlKcVF6dk1QY2FJSmxDcUUva1poUWl0T2VXek1i?=
 =?utf-8?B?c1hUQzRYUnZnMU1GaTIzbDRmUG4yK3RvVUpJR25QRyswOXZNcU4vNW5uS3N6?=
 =?utf-8?B?TXliL1AxQ3Zuck5yeE84Vzl3OFBEVHI2eHBMbVlSdzN2azhqZEVZRDB0SzVF?=
 =?utf-8?B?UkU5d1A1QUtBZ0lkVlpsOWhUM05FdVkvR1dTTjIrZDZWcWl5RXlLZDZZbHlm?=
 =?utf-8?B?VGduYnVrMFQ5eDhIQWlFQ04rVEpXMGpsWlBOTE1qVHFXRFZlZENZVE5YSTBl?=
 =?utf-8?B?NW9JTjNqWHhXaXl6R21JVXV2d0ZVMHNXVC9qRStibmlyNWExS1d4a2JrTmFF?=
 =?utf-8?B?V3FLUGF1RVFhc2gxd3NIbE0xaGN1encyczdSZDk2WlVTR2Npa2V6bnVFNUxS?=
 =?utf-8?B?Nll3SjdQYWEweDVmQVFObzNOclFYSUMwL1cyVFRTWS83Q25QT2YvaHdKWjEx?=
 =?utf-8?B?N0Z6dkU5UlVlV2tNRk5YM3BsWEg3ZUdhQUQ4N0REWGJRbkFrcTVqNWtTNWJN?=
 =?utf-8?B?aGExWWcwRlhJRDV2SjJ6OEJqalpQMFJFOHdpQVk5U3FkNjF3Q2h4REVXNDda?=
 =?utf-8?B?UnZxdlRCYjlxN1h1VGs4QzNHc2NqSXFkMndwWnB3L0d5MDUrZkR1d1NoYmhi?=
 =?utf-8?B?UWtiOUpKMTcrQkJjVFY5TWprcmJ6MGpOWUgwS3FDS0xSMWJ5dTBmYUpVUWpO?=
 =?utf-8?B?YktWNTJMSzhBdGFEQndBZmkvVXVJQUJxRjl1N3B0M3IyVERVdWZRcmZKY1pF?=
 =?utf-8?B?WXBZYTliNEpMZGgvNm4yY0FJMzhLSTBmVU1CYUZNSGU0NkcwNmpVeTRSQWpF?=
 =?utf-8?B?ZUtXbndWVUFoRUo2dGlZS0w1TlBnSEdLUzBhM0w1VnA2VTdCVUR0QjZGYytm?=
 =?utf-8?B?V3V5SVh0OUhOaWUrNE0rSjFmYjVHQUMyS2tDVmFGSG9FSGxVSTJXWjNWa1lB?=
 =?utf-8?B?SlhwLzdxMGtnVlBRMHROYTlDeXNGUjlBRVFRb2JVYXp6amZydDl4NUJQZHAy?=
 =?utf-8?B?SWs5ZDQzc0ZFWTluNFR5c0ZuRFdxZE1mUE9sZXJFVGNVZUl4M25qelNQOElp?=
 =?utf-8?B?cFRhd1U4dnZ2Q043ZjM1dUFtd3FlWjY5d1RFcitUQlh1bVQyWDhTRTdQQVYx?=
 =?utf-8?B?MlVTa0djVEk3MDdsc0Z3THdPdHpCTnZqU2RnZnQySkVxNGJPeFZLZko4bThE?=
 =?utf-8?B?M2ZnYkNWR3ZlOVgwQ0w4amtDL0NjZ1krQXNQNVg0UlVuSlpMcTRNckZROG9L?=
 =?utf-8?Q?LOvttnfdQdkF30PBckYdGxwm8q2KMwVnL3frqcM?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 71947626-6ea4-4254-0862-08d8d8bd5e59
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Feb 2021 12:11:51.9205
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: oZswYrpIlGgzNZ2NejjqqitiLN0QtwZ9hEeKzwgIhE6AL9oC9FHPiOJeyLHDZvMCjTs2v7wJBiV1VBsYlS+6UfFE+fRZ3dtQsZAqT0kedyM=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4118
X-OriginatorOrg: citrix.com

On 24/02/2021 10:26, Roger Pau Monne wrote:
> Instead create a private types.h header that contains the set of types
> that are required by hvmloader. Replace include occurrences of std*
> headers with type.h. Note that including types.h directly is not
> required in util.c because it already includes util.h which in turn
> includes the newly created types.h.
>
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

At what point do we just declare Alpine broken?  These are all
freestanding headers, and explicitly available for use, in the way we use
them.

There are substantial portability costs for making changes like this,
which takes us from standards compliant C to GCC-isms-only.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 12:20:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 12:20:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89326.168125 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEt9S-0003yt-5c; Wed, 24 Feb 2021 12:20:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89326.168125; Wed, 24 Feb 2021 12:20:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEt9S-0003ym-2R; Wed, 24 Feb 2021 12:20:22 +0000
Received: by outflank-mailman (input) for mailman id 89326;
 Wed, 24 Feb 2021 12:20:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Qs49=H2=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lEt9R-0003yh-49
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 12:20:21 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b4efec8c-fa26-42fc-a931-9caae9d0c5f8;
 Wed, 24 Feb 2021 12:20:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b4efec8c-fa26-42fc-a931-9caae9d0c5f8
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614169219;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=XrvU96T7nHlvZ7qKNbyDITn0nT2TzC21rEcOWPP9Oao=;
  b=Nqzdh8csc33eWOzEoiwR4w0d4Jrdu00PHGXYJ27HQ14gR4k0Zu4LmY/t
   svBpyRRXu6Q+rGEM0R0VfEpLgbWP0sWXEyxn6LJbZa7vLgwoLmzRABQWH
   qTDBjQ6eg7th7Oiuc19wfJfaW323GxQdiL2lIviz0SzBLpKgxDUtu2h14
   U=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: Qlx1dVVS2khDOSVX6Fl2AuMOzuSYdTyq8Ymeg7zCWkJNpUiUR2sChmW6L6UfAXNvGfTqDvi0jD
 RybzboO33BPU2WxGH8ZLxvKsyK131Hd9qMzmTeFfYTVOH7klJTCwM6qAm4U2c5wPurVFlAACvm
 Zt3ECG3yajEJZef0Ipltl4tCxMaQrh13Joo+wyaOYZkoRzeKz2hxp1TFnuMGysoEdmXUSw/Sxp
 7F2z/CiaEHwybbq4bi0HrF0VzGlcuxeUJ/7CmFofv7jYQgNizH0rPZq3dFZOgPPbKID7twuJt6
 s64=
X-SBRS: 5.2
X-MesageID: 37843278
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,202,1610427600"; 
   d="scan'208";a="37843278"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=GnhuUWrFmO+wMOiTtmqRFMes7++2U7GM3aiqvqeLxflZeXS/34oXl67wqLso485qwmQrKntqG5f6OT+c9PMxuy7SGjjaMLvdvqg1yiVLU/qvMs2b5THQx5oD/eT1H9swInHS5Dus+uLT2d4/bG6kip/yoIynkaRKPGaoAI1MMlbvxJNcld0TxvsMJC18IXW0LiYu17/2QYR2alsKRzaRpoEyijfRMTBQ3/ZkvT1fNweAIPH27z9ct8uF2J7CgFOtCvQsWbdaK4mvU7JU7cdoB6xSRcczfFku3+9YsBfVDNdqYCfjK1KZiW0zXFJx4QIImq/ujmo3Y0uKFiy9sCT/Mw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ok12U9rxEGFwWYAsm8WBJ89Xf072quc4MNURba2/bAs=;
 b=BcFykGl8OnVmAqjXyuy7Anc+EwObfDAJ/JwKX5AsC5ic0rLDoHgk35h88mqHDx7qPR++sQ2z2NnC/JVnMvGCaniUjXv/dfqRFb4kQGcSTFU2+OelgOjl6WrnZ5nOuIv9s2clr9InzVIeyNvRMyFeYGxf//1TeRyplbHUbG87jix/t3A/z3771O3sDDM1Izj4UyBncaNvcOVqAQEW0XPAbsx0kb5pM9ICZ1qR7fZFD9BMdoz8FzKUpPoc5HUkcv4+GoQoCBa4a1PasaIczbc4AldQzMT/eB5Xbxp/TSVrrt0utoR/0uC/4lyWacMCcfuwX2TatsK7vMlHuHgFEbJmHg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ok12U9rxEGFwWYAsm8WBJ89Xf072quc4MNURba2/bAs=;
 b=kSAxdNa0yY84wqHrLWFRdDTHWHy581+EvH3RzxJ+CHcWmtEwJeHaAvrAViFuyVKKVqm+Cm7UYjRu6X+eYAmF0Pe4dRhlE4/p0Y1pqb7qqO97kJ+FudVtJnL3NsL+HMx1feVqOmUIQo7FojY+4eEfysPa/Hj91+S2WUO2hxrhiXw=
Date: Wed, 24 Feb 2021 13:20:09 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: <xen-devel@lists.xenproject.org>, Jan Beulich <jbeulich@suse.com>, Wei Liu
	<wl@xen.org>, Ian Jackson <iwj@xenproject.org>
Subject: Re: [PATCH 2/2] hvmloader: do not include system headers for type
 declarations
Message-ID: <YDZEee8loZJeAKZ9@Air-de-Roger>
References: <20210224102641.89455-1-roger.pau@citrix.com>
 <20210224102641.89455-3-roger.pau@citrix.com>
 <ad25d5b7-2f95-91f1-ad46-eb686cd4f397@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <ad25d5b7-2f95-91f1-ad46-eb686cd4f397@citrix.com>
X-ClientProxiedBy: PR3P192CA0020.EURP192.PROD.OUTLOOK.COM
 (2603:10a6:102:56::25) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 67717d43-c7c2-49b9-4aef-08d8d8be8a9f
X-MS-TrafficTypeDiagnostic: DM5PR03MB3371:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM5PR03MB3371A2CAE2360ED2FDCD30FB8F9F9@DM5PR03MB3371.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: AzuBx12BqKgni2xtn93wwXakVHiP/r+hA+BsGkI+o+acK0o9TgcMFHFRtvxnrjhME2HA1jxP6vxEYlTiIwyl3GRMnOuZ/pvl9p8J0c+Mc1hU2x3d6jBVVStOS7v/4atkcAHCrQ+IfAqcjFsfpetw1RNqfWB2Xp5U2yYhmADDBfuC6/9PyIkzzEkB6MuPS2spMNW1OePEuDaEffn/BEV2+p5KHTaQLX0BaYyYdXzp9LndxGkvdLU2tk6sALdBXLYG0Qp1wRnGgLO0njfI6vd5GbCAgvTIe6vV5qBGSBnxn2IaqsbezIiu52DqS1ssZosfqFrCvLKmOgVrTIle6pQKjsPRIzLE1wGGni7T+NMwOQGaz2vo3qDNN4ry5CQNFUg3LJbCEhjC00NM2i4kXw4y6HcTgKOeptMhD+ftE1IByveuOs19o8tiXCUbflgjOsCLCXjH7tSp2kvfTzKCqkHoWaAET+Wdv1TcDgP9os7lSR/UxPYzAPpXGsKHIwQNOYhLelMN8tccAyA9yw0Sr+ea9Q==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(396003)(39860400002)(136003)(346002)(376002)(366004)(4326008)(53546011)(9686003)(16526019)(26005)(5660300002)(186003)(956004)(6862004)(478600001)(6666004)(2906002)(8936002)(4744005)(316002)(6496006)(54906003)(86362001)(6636002)(33716001)(85182001)(6486002)(66946007)(8676002)(66476007)(66556008);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?elJ0Y1d6UE1TRUdSODJJazFLZUREZ3JEWGZ5TjQxQXFrbEdWU25qdDdpSVNz?=
 =?utf-8?B?VXUwaDRScmFLbUFJdTZiU0VIeXNUSFdHYmtaLzE2Ly9nWmExRHNUTUYzMlVv?=
 =?utf-8?B?dnpUVWkvbmh3ZmxtTTBtbUVHMGNaRlFad3VGMEJ6T0pOd3hNeWpobEtRVDdQ?=
 =?utf-8?B?Q1N0TFdrUEt2eW85aUt2d0FQeFcxZ1NBZHRPWnlDWDNXRG1lSzFvc21PUmpJ?=
 =?utf-8?B?eWVRN0FlZVBSenBETDUveVAzMUNhUTZXNjA1UzNEVzhyQXRlU3lXR3Z0ZTV3?=
 =?utf-8?B?c0tyd3BCdXFXVTUrV1p2ZVc4SjFlZksrREZpbXQxemdrYlVueW1aZGp3aTha?=
 =?utf-8?B?aVZnWlpvZ291NTJnMDNIUlZVaEtmR1huMGV5TnY3WXlQOGw2eUZISTBOWE0y?=
 =?utf-8?B?UU14cWFwaUJ2U0hlNlFZS0IzeTNYamxLYXdDRUtPVzBXd0U3dlBEZFI2dXpw?=
 =?utf-8?B?bkY5NFhRVjZGM2ZQNzR1UVJlTWRsQzJRVEhGenZTSk1Ia2hjeTFwMjd0M0lU?=
 =?utf-8?B?dXVEL0wvN0pHMWdHRkVoYUwvM2xPRS9HZ2dEMmtrNkZSNmlCcWhiVFRUMHdz?=
 =?utf-8?B?aVd6SXJtZndKdkNRUWtNTmEweGxVbDRKZDN3QkdMTlo2UDREcTNhSGUxb2o4?=
 =?utf-8?B?dWJudHdZcVFhdzRNM1h5TklodlgvS3lua1RMcy9vTWI2RGRPZUVkK0lFUWJJ?=
 =?utf-8?B?QldiMTRsZVkyK3ZVTnJuNjhjQkIybDlIVnJ4aXhpTWFlOHRPVjRqeDV1dGE0?=
 =?utf-8?B?Y1VuMk1GSlhHRWh3ZnVEbGhML1VxdXdNclZFemhhcVRkYy9VWnpUZkt3OXVw?=
 =?utf-8?B?ZlFnWHBWaUhGbnlMc2pyRjVvRy8zVGtWUDVoK0ZoeGxTTTR1L01TM1FTRDUr?=
 =?utf-8?B?UXJkTFJrZ3JsKzZUZWRiS1hDcDJ6SVM3Yk83eVFZL0M1OXNabkVsY3JNYXZ3?=
 =?utf-8?B?TUl5REhHWmFURGh3MTFSeVlrM3N0Z01rNndBN2NNTFhCeE5uaFpJM085VURi?=
 =?utf-8?B?RW1yT1d5VDUvTjR2QUpDVXJETG96Sy90UVY0ZUJ3QXpEdTdLOTRNTG9IYy9u?=
 =?utf-8?B?YkhXUTlnT3NmV2NyQnFwdmwrTTZlcXRWQlVZaVZLZmNhSXc5ZGZSTlNMZmJH?=
 =?utf-8?B?RStxS2owMElTWS9RQmwxQVZWaHh6TjhkS2ZFYzhwK1MvV3N5dG1qYnZlcHdo?=
 =?utf-8?B?eFE3dm1EOHZVdmdRUDZKSUJLQUlvZFJ0c0J0YXdLNnB5OWtwYkw0Mmw2dmlK?=
 =?utf-8?B?UUdubURCYXgxM1B4cEdiNll2N3YrL2tZZmFqRS9kaWpqODAzMVFHMnI0aWVC?=
 =?utf-8?B?eW01V2VBNTNBYmlvWnU2S1FpU04rd0NGTGV0L2QyV0I3QUJNNkF4WVlTYVNu?=
 =?utf-8?B?T1pabVBUKzNIWXNaN2tGVzZ2WFVNSDVmRVhFMWZTcmVjcjJaZzVOd09uL3hr?=
 =?utf-8?B?Tm9LR21ESTZGZUF0dVJsSXJUWFU1Tm9aSmt0QW5MYkFDbVhYVGFFbFBRaHdG?=
 =?utf-8?B?TlU4V01HMTJmZGxMRjlieC9LRTJBaStjTlloVmNBWUlkcjhkS3lia2UrZDNE?=
 =?utf-8?B?YXpvL2FCM040WGpJOUpOQUVoaFJSM09HM2wwZ1lxdHdZNVFCdnQrMFdHenRO?=
 =?utf-8?B?OS9TazIyaG9KYUhHS05KZ2pkQkVaTk5CRnI3bEJDcURaSGJVZXZRazdMU0Mr?=
 =?utf-8?B?M0w3Lzcrd2w4c0dZelRYRzRvOVZjN0pkdzVDZkdnRGdlZnNtNmJWQ1lNOGkv?=
 =?utf-8?B?eWlUMkRRSDRmMENERjIxaXJWMHNlVEhkVGpsTXA0Um40MUdDdzVwVENVT2Nm?=
 =?utf-8?B?MmtHejVCOXRYdjRXSGo3Zz09?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 67717d43-c7c2-49b9-4aef-08d8d8be8a9f
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Feb 2021 12:20:15.5171
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 6JkzcORD15gsrdwBG6e3MBseHmJ+GpaOFTK9khY/g2hHEhEutW5s93nTX4bVTDzHP7ks9DCNYV4zym1lJwuakA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB3371
X-OriginatorOrg: citrix.com

On Wed, Feb 24, 2021 at 12:11:45PM +0000, Andrew Cooper wrote:
> On 24/02/2021 10:26, Roger Pau Monne wrote:
> > Instead create a private types.h header that contains the set of types
> > that are required by hvmloader. Replace include occurrences of std*
> > headers with type.h. Note that including types.h directly is not
> > required in util.c because it already includes util.h which in turn
> > includes the newly created types.h.
> >
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> At what point do we just declare Alpine broken?  These are all
> freestanding headers, and explicitly available for use, in the way we use
> them.

The headers are available in Alpine, but they seem to be specifically
tied to the native bitness of the OS, which is what is causing us
issues.

So AFAICT they are freestanding; it's just that the bitness they use
is not portable.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 12:51:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 12:51:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89331.168137 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEtdw-0006ub-Og; Wed, 24 Feb 2021 12:51:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89331.168137; Wed, 24 Feb 2021 12:51:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEtdw-0006uU-KQ; Wed, 24 Feb 2021 12:51:52 +0000
Received: by outflank-mailman (input) for mailman id 89331;
 Wed, 24 Feb 2021 12:51:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6hyu=H2=kernel.org=sashal@srs-us1.protection.inumbo.net>)
 id 1lEtdv-0006uP-9p
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 12:51:51 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9e9161a8-5b32-4243-ac65-e07a74ccb58c;
 Wed, 24 Feb 2021 12:51:50 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 0A48C64F35;
 Wed, 24 Feb 2021 12:51:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9e9161a8-5b32-4243-ac65-e07a74ccb58c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1614171110;
	bh=1voqjsAoBFLxX7rDbdvLrvHACJsPGL3qdmFT+MVPv9g=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=spjwNbA3ZtlvtPipEn6gfBOlftIW6w7odr2XfL2181T9bK8cCwsM2LBQNoGUtghVL
	 +buNAx7SgQMw6ZIpQxTknwVpx5NTpass/l7ybj6RtHiqw9UifteXfGfnkOwSCWkU4i
	 +eBluSCs+cPiUN6qhMjkc3eoReO4D1oa9TLyuyE4tq8CN6fadJcqZBZAI1aZZ7k9cD
	 E3yRxJH9Z7tX130usP78uATGssnbZnpngxHbuZmw7O9YU3ORHsxEqmWUN5YE2qR+8x
	 2J/Vt5H8IzJBZzsLr2Ld6nXRjj1t9XqhG1tLoj/1TkTDZVWtIxCDwIiZOwDXvLRHs2
	 +1WGDif58wmhw==
From: Sasha Levin <sashal@kernel.org>
To: linux-kernel@vger.kernel.org,
	stable@vger.kernel.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Juergen Gross <jgross@suse.com>,
	Julien Grall <julien@xen.org>,
	Sasha Levin <sashal@kernel.org>,
	xen-devel@lists.xenproject.org,
	linux-block@vger.kernel.org
Subject: [PATCH AUTOSEL 5.11 62/67] xen-blkback: fix error handling in xen_blkbk_map()
Date: Wed, 24 Feb 2021 07:50:20 -0500
Message-Id: <20210224125026.481804-62-sashal@kernel.org>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20210224125026.481804-1-sashal@kernel.org>
References: <20210224125026.481804-1-sashal@kernel.org>
MIME-Version: 1.0
X-stable: review
X-Patchwork-Hint: Ignore
Content-Transfer-Encoding: 8bit

From: Jan Beulich <jbeulich@suse.com>

[ Upstream commit 871997bc9e423f05c7da7c9178e62dde5df2a7f8 ]

The function uses a goto-based loop, which may lead to an earlier error
getting discarded by a later iteration. Exit this ad-hoc loop when an
error was encountered.

The out-of-memory error path additionally fails to fill a structure
field looked at by xen_blkbk_unmap_prepare() before inspecting the
handle which does get properly set (to BLKBACK_INVALID_HANDLE).

Since the earlier exiting from the ad-hoc loop requires the same field
filling (invalidation) as that on the out-of-memory path, fold both
paths. While doing so, drop the pr_alert(), as extra log messages aren't
going to help the situation (the kernel will log oom conditions already
anyway).

This is XSA-365.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Julien Grall <julien@xen.org>
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/block/xen-blkback/blkback.c | 26 ++++++++++++++++----------
 1 file changed, 16 insertions(+), 10 deletions(-)

diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index 9ebf53903d7bf..9301de1386436 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -794,8 +794,13 @@ static int xen_blkbk_map(struct xen_blkif_ring *ring,
 			pages[i]->persistent_gnt = persistent_gnt;
 		} else {
 			if (gnttab_page_cache_get(&ring->free_pages,
-						  &pages[i]->page))
-				goto out_of_memory;
+						  &pages[i]->page)) {
+				gnttab_page_cache_put(&ring->free_pages,
+						      pages_to_gnt,
+						      segs_to_map);
+				ret = -ENOMEM;
+				goto out;
+			}
 			addr = vaddr(pages[i]->page);
 			pages_to_gnt[segs_to_map] = pages[i]->page;
 			pages[i]->persistent_gnt = NULL;
@@ -882,17 +887,18 @@ static int xen_blkbk_map(struct xen_blkif_ring *ring,
 	}
 	segs_to_map = 0;
 	last_map = map_until;
-	if (map_until != num)
+	if (!ret && map_until != num)
 		goto again;
 
-	return ret;
-
-out_of_memory:
-	pr_alert("%s: out of memory\n", __func__);
-	gnttab_page_cache_put(&ring->free_pages, pages_to_gnt, segs_to_map);
-	for (i = last_map; i < num; i++)
+out:
+	for (i = last_map; i < num; i++) {
+		/* Don't zap current batch's valid persistent grants. */
+		if(i >= last_map + segs_to_map)
+			pages[i]->persistent_gnt = NULL;
 		pages[i]->handle = BLKBACK_INVALID_HANDLE;
-	return -ENOMEM;
+	}
+
+	return ret;
 }
 
 static int xen_blkbk_map_seg(struct pending_req *pending_req)
-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Wed Feb 24 12:53:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 12:53:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89333.168149 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEtfP-00071L-3q; Wed, 24 Feb 2021 12:53:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89333.168149; Wed, 24 Feb 2021 12:53:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEtfP-00071E-08; Wed, 24 Feb 2021 12:53:23 +0000
Received: by outflank-mailman (input) for mailman id 89333;
 Wed, 24 Feb 2021 12:53:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6hyu=H2=kernel.org=sashal@srs-us1.protection.inumbo.net>)
 id 1lEtfN-000719-Vw
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 12:53:22 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 84f17e29-46cb-4e7a-9e23-48d1409cd8c9;
 Wed, 24 Feb 2021 12:53:21 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 952A264F13;
 Wed, 24 Feb 2021 12:53:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 84f17e29-46cb-4e7a-9e23-48d1409cd8c9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1614171200;
	bh=1voqjsAoBFLxX7rDbdvLrvHACJsPGL3qdmFT+MVPv9g=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=n4CaV2DsapOHyAY2CTejx635bsmN1PT/GI+bXaSP41g0MUkn97yPgHWDfUXDGusrK
	 hJYCVndOTt3ljWNDEpNMfC+OwFIWm1Y3O7RlCa8ajND8LdyjkXHFjBnJKfAhempKb+
	 FrCFJyhjQLdQFEOJhAPQ+ezCqlszTuMIAwjc4wmZUsH3UYNbNpXF/D5N3qzaZx2fMn
	 vs88qYTVw4CanXIbaallaqKw5ZJCkNUESoOl5oc5VYtTk3LrgntI4V5abPeY1m+lJB
	 C02lj9za7z7k5bu7x2r24d7iynMZK1mZhdTxXkTUkxvwsBF7jQ3duMS6nw0kc2NV49
	 WWwh3F9SmVCuA==
From: Sasha Levin <sashal@kernel.org>
To: linux-kernel@vger.kernel.org,
	stable@vger.kernel.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Juergen Gross <jgross@suse.com>,
	Julien Grall <julien@xen.org>,
	Sasha Levin <sashal@kernel.org>,
	xen-devel@lists.xenproject.org,
	linux-block@vger.kernel.org
Subject: [PATCH AUTOSEL 5.10 51/56] xen-blkback: fix error handling in xen_blkbk_map()
Date: Wed, 24 Feb 2021 07:52:07 -0500
Message-Id: <20210224125212.482485-51-sashal@kernel.org>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20210224125212.482485-1-sashal@kernel.org>
References: <20210224125212.482485-1-sashal@kernel.org>
MIME-Version: 1.0
X-stable: review
X-Patchwork-Hint: Ignore
Content-Transfer-Encoding: 8bit

From: Jan Beulich <jbeulich@suse.com>

[ Upstream commit 871997bc9e423f05c7da7c9178e62dde5df2a7f8 ]

The function uses a goto-based loop, which may lead to an earlier error
getting discarded by a later iteration. Exit this ad-hoc loop when an
error was encountered.

The out-of-memory error path additionally fails to fill a structure
field looked at by xen_blkbk_unmap_prepare() before inspecting the
handle which does get properly set (to BLKBACK_INVALID_HANDLE).

Since the earlier exiting from the ad-hoc loop requires the same field
filling (invalidation) as that on the out-of-memory path, fold both
paths. While doing so, drop the pr_alert(), as extra log messages aren't
going to help the situation (the kernel will log oom conditions already
anyway).

This is XSA-365.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Julien Grall <julien@xen.org>
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/block/xen-blkback/blkback.c | 26 ++++++++++++++++----------
 1 file changed, 16 insertions(+), 10 deletions(-)

diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index 9ebf53903d7bf..9301de1386436 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -794,8 +794,13 @@ static int xen_blkbk_map(struct xen_blkif_ring *ring,
 			pages[i]->persistent_gnt = persistent_gnt;
 		} else {
 			if (gnttab_page_cache_get(&ring->free_pages,
-						  &pages[i]->page))
-				goto out_of_memory;
+						  &pages[i]->page)) {
+				gnttab_page_cache_put(&ring->free_pages,
+						      pages_to_gnt,
+						      segs_to_map);
+				ret = -ENOMEM;
+				goto out;
+			}
 			addr = vaddr(pages[i]->page);
 			pages_to_gnt[segs_to_map] = pages[i]->page;
 			pages[i]->persistent_gnt = NULL;
@@ -882,17 +887,18 @@ static int xen_blkbk_map(struct xen_blkif_ring *ring,
 	}
 	segs_to_map = 0;
 	last_map = map_until;
-	if (map_until != num)
+	if (!ret && map_until != num)
 		goto again;
 
-	return ret;
-
-out_of_memory:
-	pr_alert("%s: out of memory\n", __func__);
-	gnttab_page_cache_put(&ring->free_pages, pages_to_gnt, segs_to_map);
-	for (i = last_map; i < num; i++)
+out:
+	for (i = last_map; i < num; i++) {
+		/* Don't zap current batch's valid persistent grants. */
+		if(i >= last_map + segs_to_map)
+			pages[i]->persistent_gnt = NULL;
 		pages[i]->handle = BLKBACK_INVALID_HANDLE;
-	return -ENOMEM;
+	}
+
+	return ret;
 }
 
 static int xen_blkbk_map_seg(struct pending_req *pending_req)
-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Wed Feb 24 13:08:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 13:08:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89338.168166 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEtu5-0008B4-GR; Wed, 24 Feb 2021 13:08:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89338.168166; Wed, 24 Feb 2021 13:08:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEtu5-0008Ax-DT; Wed, 24 Feb 2021 13:08:33 +0000
Received: by outflank-mailman (input) for mailman id 89338;
 Wed, 24 Feb 2021 13:08:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lEtu3-0008As-Kr
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 13:08:31 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lEtu3-0005qg-IW
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 13:08:31 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lEtu3-0008Kr-Eq
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 13:08:31 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lEtu0-0006tE-7O; Wed, 24 Feb 2021 13:08:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=t9oyT3HlfOULSjznKaqRaJXa222QqsLAnEmpdcIGz8k=; b=CtDkP9sen4ugan+JYeYqJqWacA
	UXVZzgv25BOYnq2G8T9vO+8eSosiS3f/h21l7yFgrRRnOiUUWL7jrvDqQuG9IeUXUoPcOwAHeNM84
	bfYml255hA/wr030YsxcyVlbPl+htkBlUzZgAQOdvhw0uyW8keFpa0+CAC8Acm7iNywI=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24630.20427.917602.787877@mariner.uk.xensource.com>
Date: Wed, 24 Feb 2021 13:08:27 +0000
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel\@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
    Andrew Cooper <andrew.cooper3@citrix.com>,
    Wei Liu <wl@xen.org>,
    Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
    George Dunlap <george.dunlap@citrix.com>
Subject: Re: [PATCH v2 0/8] x86/PV: avoid speculation abuse through guest
 accessors
In-Reply-To: <ce93bd48-7ef3-cdb1-9429-ccd894895e9e@suse.com>
References: <b466a19e-e547-3c7c-e39b-1a4c848a053a@suse.com>
	<24623.56913.290437.499946@mariner.uk.xensource.com>
	<ce93bd48-7ef3-cdb1-9429-ccd894895e9e@suse.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Jan Beulich writes ("Re: [PATCH v2 0/8] x86/PV: avoid speculation abuse through guest accessors"):
> On 19.02.2021 16:50, Ian Jackson wrote:
> > Jan Beulich writes ("[PATCH v2 0/8] x86/PV: avoid speculation abuse through guest accessors"):
> >> 4: rename {get,put}_user() to {get,put}_guest()
> >> 5: gdbsx: convert "user" to "guest" accesses
> >> 6: rename copy_{from,to}_user() to copy_{from,to}_guest_pv()
> >> 7: move stac()/clac() from {get,put}_unsafe_asm() ...
> >> 8: PV: use get_unsafe() instead of copy_from_unsafe()
> > 
> > These have not got a maintainer review yet.  To grant a release-ack
> > I'd like an explanation of the downsides and upsides of taking this
> > series in 4.15?
> > 
> > You say "consistency" but in practical terms, what will happen if the
> > code is not "consistent" in this sense?
> > 
> > I'd also like to hear from another hypervisor maintainer.
> 
> Meanwhile they have been reviewed by Roger. Are you willing to
> give them, perhaps with the exception of 7, a release ack as
> well?

Sorry, yes.

I found these explanations convincing.  Thank you.

For all except 7,
Release-Acked-by: Ian Jackson <iwj@xenproject.org>

For 7, I remember what I think was an IRC conversation where someone
(you, I think) said you had examined the generated asm and it was
unchanged.

If I have remembered that correctly, then for 7 as well:
Release-Acked-by: Ian Jackson <iwj@xenproject.org>

If I have misremembered please do say.

Ian.


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 13:13:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 13:13:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89341.168178 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEtyZ-0000j2-3o; Wed, 24 Feb 2021 13:13:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89341.168178; Wed, 24 Feb 2021 13:13:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEtyZ-0000iv-0f; Wed, 24 Feb 2021 13:13:11 +0000
Received: by outflank-mailman (input) for mailman id 89341;
 Wed, 24 Feb 2021 13:13:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=F+xl=H2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lEtyY-0000iq-48
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 13:13:10 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1077222e-611f-430c-9288-03a61bfc8036;
 Wed, 24 Feb 2021 13:13:09 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 87C7FAD5C;
 Wed, 24 Feb 2021 13:13:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1077222e-611f-430c-9288-03a61bfc8036
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614172388; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=uzb7zSTkowCxI+A5AcIjZbqI8LQRoZv21PHhkyOy+/M=;
	b=deHDvbcZE1DNpf5pi8HxCDboxPF+Gdj/1ZZunK0juw//Ohs57+Rvyg8Ud4dV7DYbZoJpwq
	4MbAOnvi4tncxtWDBhyCMHbjjBOOVbqrlDDSdMBKmxhMQ/xT7doLk7qSBo5mYibye4kIIv
	W/tEPQl6AAm38O5pm233Gk/tAj4i3B4=
Subject: Re: [PATCH 2/2] hvmloader: do not include system headers for type
 declarations
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Ian Jackson <iwj@xenproject.org>, xen-devel@lists.xenproject.org
References: <20210224102641.89455-1-roger.pau@citrix.com>
 <20210224102641.89455-3-roger.pau@citrix.com>
 <fb677f29-2b21-aaf1-1127-6774fb8e91e9@suse.com>
 <YDY+tvs9Llf5K8Da@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <5f9f7fc3-110b-f73e-20b5-c0ef311c458c@suse.com>
Date: Wed, 24 Feb 2021 14:13:07 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <YDY+tvs9Llf5K8Da@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 24.02.2021 13:01, Roger Pau Monné wrote:
> On Wed, Feb 24, 2021 at 11:51:39AM +0100, Jan Beulich wrote:
>> On 24.02.2021 11:26, Roger Pau Monne wrote:
>>> --- /dev/null
>>> +++ b/tools/firmware/hvmloader/types.h
>>> @@ -0,0 +1,47 @@
>>> +#ifndef _HVMLOADER_TYPES_H_
>>> +#define _HVMLOADER_TYPES_H_
>>> +
>>> +typedef unsigned char uint8_t;
>>> +typedef signed char int8_t;
>>> +
>>> +typedef unsigned short uint16_t;
>>> +typedef signed short int16_t;
>>> +
>>> +typedef unsigned int uint32_t;
>>> +typedef signed int int32_t;
>>> +
>>> +typedef unsigned long long uint64_t;
>>> +typedef signed long long int64_t;
>>
>> I wonder if we weren't better off not making assumptions on
>> short / int / long long, and instead using
>> __attribute__((__mode__(...))) or, if available, the compiler
>> provided __{,U}INT*_TYPE__.
> 
> Oh, didn't know about all this fancy stuff.
> 
> Clang doesn't seem to support the same mode attributes; for example,
> QImode is unknown, and that's just on one version of clang that I
> happened to test on.

Oh, these modes have been available even in gcc 3.x. I thought
Clang was claiming to be 4.4(?) compatible.
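[For reference, the GCC mode attributes under discussion look roughly like the sketch below. The mode names select a machine mode of a given width rather than a particular base type; the `*_x` type names are made up for this illustration, and as noted above some Clang versions may reject the mode names.]

```c
/* GCC __mode__ attribute: request a type of a specific machine-mode width.
 * QI = quarter-integer (8 bits), HI = 16 bits, SI = 32, DI = 64. */
typedef int          int8_x   __attribute__((__mode__(__QI__)));
typedef unsigned int uint8_x  __attribute__((__mode__(__QI__)));
typedef int          int32_x  __attribute__((__mode__(__SI__)));
typedef unsigned int uint64_x __attribute__((__mode__(__DI__)));

/* C11 compile-time checks that the requested widths were honoured. */
_Static_assert(sizeof(int8_x)   == 1, "QImode should be 1 byte");
_Static_assert(sizeof(uint8_x)  == 1, "QImode should be 1 byte");
_Static_assert(sizeof(int32_x)  == 4, "SImode should be 4 bytes");
_Static_assert(sizeof(uint64_x) == 8, "DImode should be 8 bytes");
```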

> Using __{,U}INT*_TYPE__ does seem to be supported on the clang version
> I've tested with, so that might be an option if it's supported
> everywhere we care about. If we still need to keep the current typedef
> chunk for fallback purposes then I see no real usefulness of using
> __{,U}INT*_TYPE__.

Fair point. And they're available from 4.5 onwards only. So
just __SIZE_TYPE__ has been available for long enough. As said
in reply to Ian I think we at least want to use that one (and
I guess in the hypervisor we may want to drop the fallback).
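[A minimal sketch of what the suggestion might look like for hvmloader's types.h, assuming the compiler predefines the __{,U}INT*_TYPE__ macros (GCC >= 4.5 and recent Clang do), with the typedefs from the patch kept as the fallback:]

```c
/* Prefer the compiler-provided exact-width types; fall back to the
 * conventional typedefs, which assume the usual ILP32/LP64 ABIs.
 * Freestanding code such as hvmloader cannot include <stdint.h>. */
#ifdef __UINT8_TYPE__
typedef __INT8_TYPE__   int8_t;
typedef __UINT8_TYPE__  uint8_t;
typedef __INT16_TYPE__  int16_t;
typedef __UINT16_TYPE__ uint16_t;
typedef __INT32_TYPE__  int32_t;
typedef __UINT32_TYPE__ uint32_t;
typedef __INT64_TYPE__  int64_t;
typedef __UINT64_TYPE__ uint64_t;
#else
typedef signed char        int8_t;
typedef unsigned char      uint8_t;
typedef signed short       int16_t;
typedef unsigned short     uint16_t;
typedef signed int         int32_t;
typedef unsigned int       uint32_t;
typedef signed long long   int64_t;
typedef unsigned long long uint64_t;
#endif

/* Whichever branch was taken, verify the widths at compile time. */
_Static_assert(sizeof(uint8_t)  == 1, "uint8_t must be 1 byte");
_Static_assert(sizeof(uint16_t) == 2, "uint16_t must be 2 bytes");
_Static_assert(sizeof(uint32_t) == 4, "uint32_t must be 4 bytes");
_Static_assert(sizeof(uint64_t) == 8, "uint64_t must be 8 bytes");
```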

Jan


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 13:18:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 13:18:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89344.168190 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEu3h-0000tk-On; Wed, 24 Feb 2021 13:18:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89344.168190; Wed, 24 Feb 2021 13:18:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEu3h-0000td-Le; Wed, 24 Feb 2021 13:18:29 +0000
Received: by outflank-mailman (input) for mailman id 89344;
 Wed, 24 Feb 2021 13:18:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=F+xl=H2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lEu3g-0000tY-8b
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 13:18:28 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9adaf885-4109-4620-b347-ead4e3489e27;
 Wed, 24 Feb 2021 13:18:27 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 73DBDAD5C;
 Wed, 24 Feb 2021 13:18:26 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9adaf885-4109-4620-b347-ead4e3489e27
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614172706; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=5LadqBTx+fAQ0EzrCO+T9ATM2OUQsGoXznpJ3/ie3Qo=;
	b=CvsvUe6X6cy/nJcp/qBEl8CpJUzGN0wE+eaDzTaBA/enr+olVRfLlKnszYAmeHrB9NKUiu
	k2PT5xm+jetGruhCLcMnqVCqD8RKWYDl+RnuCQtYV72guQgMep9bpkzz49pgJI6BLI3YZs
	KASoajUZffZLwQPUGL9UrOVB6IGBjVs=
Subject: Re: [PATCH v2 0/8] x86/PV: avoid speculation abuse through guest
 accessors
To: Ian Jackson <iwj@xenproject.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>
References: <b466a19e-e547-3c7c-e39b-1a4c848a053a@suse.com>
 <24623.56913.290437.499946@mariner.uk.xensource.com>
 <ce93bd48-7ef3-cdb1-9429-ccd894895e9e@suse.com>
 <24630.20427.917602.787877@mariner.uk.xensource.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c7fe2872-626c-1bd9-02f2-572ce81eabbe@suse.com>
Date: Wed, 24 Feb 2021 14:18:25 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <24630.20427.917602.787877@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 24.02.2021 14:08, Ian Jackson wrote:
> Jan Beulich writes ("Re: [PATCH v2 0/8] x86/PV: avoid speculation abuse through guest accessors"):
>> On 19.02.2021 16:50, Ian Jackson wrote:
>>> Jan Beulich writes ("[PATCH v2 0/8] x86/PV: avoid speculation abuse through guest accessors"):
>>>> 4: rename {get,put}_user() to {get,put}_guest()
>>>> 5: gdbsx: convert "user" to "guest" accesses
>>>> 6: rename copy_{from,to}_user() to copy_{from,to}_guest_pv()
>>>> 7: move stac()/clac() from {get,put}_unsafe_asm() ...
>>>> 8: PV: use get_unsafe() instead of copy_from_unsafe()
>>>
>>> These have not got a maintainer review yet.  To grant a release-ack
>>> I'd like an explanation of the downsides and upsides of taking this
>>> series in 4.15?
>>>
>>> You say "consistency" but in practical terms, what will happen if the
>>> code is not "consistent" in this sense?
>>>
>>> I'd also like to hear from another hypervisor maintainer.
>>
>> Meanwhile they have been reviewed by Roger. Are you willing to
>> give them, perhaps with the exception of 7, a release ack as
>> well?
> 
> Sorry, yes.
> 
> I found these explanations convincing.  Thank you.
> 
> For all except 7,
> Release-Acked-by: Ian Jackson <iwj@xenproject.org>

Thanks.

> For 7, I remember what I think was an IRC conversation where someone
> (you, I think) said you had examined the generated asm and it was
> unchanged.

It was in email, and I've inspected only some examples of the
generated asm, not all instances. I would hope that was
sufficient, but since I'm not entirely certain ...

> If I have remembered that correctly, then for 7 as well:
> Release-Acked-by: Ian Jackson <iwj@xenproject.org>

... I'll better wait for explicit confirmation of this.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 13:19:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 13:19:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89348.168203 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEu4z-00012e-9F; Wed, 24 Feb 2021 13:19:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89348.168203; Wed, 24 Feb 2021 13:19:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEu4z-00012X-5F; Wed, 24 Feb 2021 13:19:49 +0000
Received: by outflank-mailman (input) for mailman id 89348;
 Wed, 24 Feb 2021 13:19:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mT50=H2=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lEu4x-00012O-IJ
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 13:19:47 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id aa6c802f-4a2b-433a-903c-f682eefc88da;
 Wed, 24 Feb 2021 13:19:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aa6c802f-4a2b-433a-903c-f682eefc88da
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614172786;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=U5tDomzb85iVlXrYubYyW0d9mDUTX4a2KHOHlXo7OJU=;
  b=PuecgNBXgx52MY4TncxAr+vxANR63hEcrhHxArmx1e1DlvczWQqyJkwH
   sOP/RGJyKM7u3KtapHWFLsggzpDLUkzqfUMnUbs1JaBD+6MRWx56tZ6sH
   UxSdi2eKBPVgcODL/sglwLBcaytg3s4KpTRqCRRUVz4G4YFBYa0ciRQW0
   s=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: 8SjJ09VYnBnjhFlRMP2dXrPB/MfQ9AhT/eUewo+tB6w5lzsh6sYzzYjKNtMgM7K1kCXVb868sB
 cpebFqRTYGE1tq/jvChDxamXoRlHhrIQyHehYz2sYMNEKup4SMpNUdr4R8RxXQauCNB3Qd8pvm
 nVjO9tgJZjhcFDg4l7lrxwcA+zD2KCkPeM0a5lgKKC/2J7yH05hopgo7V+PklaRjP2bk4RG0Gs
 mW49VYq/xy1603IPZxiq+25ktm0UVnQTU9QLG5Ug6WdNVfnS+qXx3aIvzq/ycbO38YPmPXTWDK
 03E=
X-SBRS: 5.2
X-MesageID: 38286341
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,203,1610427600"; 
   d="scan'208";a="38286341"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=NOp3XA7R+yLaaLssaHVR26Y+KK6+Xe603MDGipN225uKKCr5V6Cq62lI0HsDU5z2Db6YLAXicgyNCfhEUfm55yWM9VoUclwuMbgvQXYgWAPBrDxi9LBam8t1mQwSe47iWg2bsHOCt6Oks0iJk1w6eSVu+foA1QuQv9wtb0wDDHw9V0h1ALJTv/a53dQUC6yyPU7KygmfQexzApbQajGyhiGTWDovdgLHZepxDkwsRjcolm9K1ajavNPv+Y4ZYSZL/xe+FiQGwbsyn4IJP7AbOlJER8h/yc6M69MgR3bFlKGWBcdQK9py3c8hz/v552a9NHIwVgpDAmUgCIZBNfdddw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=U5tDomzb85iVlXrYubYyW0d9mDUTX4a2KHOHlXo7OJU=;
 b=dB9nXTUIRDcRXKtuANYFMPRMUjYJZULRpabCiJ/yMYZbGM0DQaioXUsvHVUHn01wxHQAJX5fJwW581iM0GsSJDr9YvBVmYD27x0zNRu9oqTglj4DSEiFDrmoAO9RLtHaIWMZBAtm2Q5pS4N1qzyz3ZiqisYacm9avmecuqkg93otdVJNQiEITqpjroBlOfLFKUMDztN4W3I3k2VFp1kpMHqtPNR4hkt8vi3BCKRIIjFTokxxDt0VcP77W+HC74c6rtTyPixWWEgzxNiHzHHLC0+KAQiV3uFruelEmSbu4VHMLuxvJLip4/93pGaVxEyL8YVzzLQQox6i0XuFH2j8+w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=U5tDomzb85iVlXrYubYyW0d9mDUTX4a2KHOHlXo7OJU=;
 b=xSqpQTRzlE2kD50MS8ROIHUEv+9yampM4OZP8K4BNm0dN6RN0l9NiOsGzTNdRVr4xamZiEB9oQXEbxCwa58r3vol4vBxS+rf89obmGU8R9hopav7bWsoo+uMg5S90XmKHfDCKPCCPVeuTBwpickkMK9k9xTFCXC4FuqNuMcztMs=
Subject: Re: [PATCH 2/2] hvmloader: do not include system headers for type
 declarations
To: Ian Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>
CC: Roger Pau Monne <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
	<xen-devel@lists.xenproject.org>
References: <20210224102641.89455-1-roger.pau@citrix.com>
 <20210224102641.89455-3-roger.pau@citrix.com>
 <fb677f29-2b21-aaf1-1127-6774fb8e91e9@suse.com>
 <24630.13192.874503.894268@mariner.uk.xensource.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <1b251bb0-fecf-e5ed-c6e5-a2cb7c612cf1@citrix.com>
Date: Wed, 24 Feb 2021 13:19:23 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
In-Reply-To: <24630.13192.874503.894268@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0257.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:194::10) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 964933cf-18f8-460c-d1e5-08d8d8c6d1d8
X-MS-TrafficTypeDiagnostic: BYAPR03MB4421:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB44217FE3079C1CF4CAC5DED0BA9F9@BYAPR03MB4421.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: mpMA3mf1wpiS6SFhIXFIMBcx4aMfQXjBrafXwBvbgacA4lNH1kirOiLal0KDl+RbWwCc7irtm200Xm5DpxPx8AV83FjcHguq137xWPpHbju4rBiXAcAgPSHs59bh1wYqq/fS3IUIlmxT5+r9CziZqbnkeUJDBhWIff/h+4g1YLPDU0WIK0UigxjvnT83ai7No/T9ej9fXBVK3wFmCkWAxZRdM4FoItsYXAItTerPq0P5yl2L2rAmRr+79ddoxme+pM0RgH8tNXVKHLMTEG02p+DRYQ5RXuYxNpmYVOKBtGxycGzUBy94GEYQVCBzeCj0ifmnew3GRySFE708NLk8duaL3tECUgtSCBZYw5uqw3a4X7dnlCPe2nwNvZuKDqGctJMYslCdswwcmQcyGHLAvWnh6yeQp1IhxqvKlq8vsVCH/X+mlLs8DeujyYqVkDqoOKkmeWiDUKQmql6FD2pMR0xPDQ1Vj89X8jcObC4Jba99lreH3eRj545yV0Ad71FYaJRP6YbSc9p8IkhiGsxJwo8vCsfOHIHVfxnIMx5H2FX9Z4aJLKk+7RWW2PqIFhZ3g8yJ4f9I5etT/aZqPw9mM5GLH5Txb0iYF9kusqUJBa0=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39840400004)(396003)(366004)(376002)(346002)(136003)(36756003)(316002)(53546011)(66476007)(6666004)(16576012)(31686004)(956004)(6486002)(2616005)(186003)(5660300002)(478600001)(16526019)(4744005)(8676002)(66556008)(2906002)(66946007)(31696002)(86362001)(26005)(8936002)(110136005)(4326008)(54906003)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?bFo4SXlRUjVzSG9ZYW9YaTVOeDgwMTUwQkZwSUxDN01EUFAxVW13dndZYXkv?=
 =?utf-8?B?bURUSDN5REk1VHppaW9nNWR4dHBEQkZYY2RvZ0gvbEJlM2U1SnFmUmpuWEJR?=
 =?utf-8?B?TE13U0dQeGY5TXJaWGVRWi9lT1JYNmFrVnZjbzVFbGZRS081MVVybnJIM0x5?=
 =?utf-8?B?b1FpUVBZMnVtYjgra3ZlaUVVcnRvbWttcVl0dXpCaGN5Q1Fvb2RDNEsrOUtK?=
 =?utf-8?B?WWkvWUlmZ29DR2ZYRDZDamljTGhkRzhmTllkU1E5MlBaN29GaFhSQ1RKRDd3?=
 =?utf-8?B?aEZXN0F3alE3ZEJPSUt5UWg3ZkZBQVdKUis0MlhsZDdUYncvSUNKTkowTWhN?=
 =?utf-8?B?bW9ObHJFMGg0UmcrRUxLaW1BcDRRb3NjdWZ4TjNodG9qUXk4MFB4SDZMTmVX?=
 =?utf-8?B?MlFsYUdWTktQN3pONUN6SGYwSDRpdkVkdnY2T1BkTlNibDY1UjR4YUtpTWdX?=
 =?utf-8?B?OEg2bXIzOEluL3I5YjdxVVNvak5aVnpIZHo3bXNnRHhvdVA2a05IRDJWY2Vo?=
 =?utf-8?B?R0Z3d2xnRGhPbUNac0tiN0N6RHM5VlNXVkFxMEtPZnREYnBIcFB0SUloNGpi?=
 =?utf-8?B?VkdTYUVFQ2pISzlXaGdwWllSRnpCemdGUjI4bGVWVnF5YWE5UmQ1cWtwYTdt?=
 =?utf-8?B?UENVTStYQVZhdC9ZdVVKOC9BUWRuYXFnY2hwOVdidU14akVVaTNUY3VtZnZz?=
 =?utf-8?B?aGdtb2NwTVB6bW8rOEtLUzM3RzZPcncxZ21ia3FwTWtJcWxqYi9SRjBoMFVY?=
 =?utf-8?B?WExYMnZObE95OFNvY3dWdUxTNGhSMXFEQUtLOXV0ODBOdGNCSmRnL3h1YXo3?=
 =?utf-8?B?d3ZzWFh5N0JBSE8vL0hGUDd5cjkvV1VhNzM2Zm91Q2RxY0wrb0c2cjc3ZUF2?=
 =?utf-8?B?MDY2OEZ2YmZ2SmRGZlVsdmwxR0xEYi85ZWJoTWpzaGJWeWR2NGNaSGFTMDJ1?=
 =?utf-8?B?TStsZm5vMXV1KzJGRjBYZVVHT3JGeWZBbjRPems1Y0cvQUJWQTB0Z0ZIMDlq?=
 =?utf-8?B?QnFhWkpiVnJlbkk4eUUyaU9JYzhTaTgweTlHeWJZL2dmeGNHYkhSQ3FybUpO?=
 =?utf-8?B?SW05S3hwOHdHdEoybElYL3g1VVoyRWZNRDJLYk9nNm9IdXh6Y0Q3ZUsyQitj?=
 =?utf-8?B?MFhGaHRHM2pmL2lVeFdHR0Nxcy9kVGhxRk84NSs0NWdBaTBETGVGb2plVCtL?=
 =?utf-8?B?Q2x2akc2Y0hjYVpUalJzR3R6VS9ob1hKTi9CWW9JTklWSTgyYTVwSytBNWI0?=
 =?utf-8?B?YnIydlI1MFZDSVpOeGxsNUttM0RpQXRpVmdFMDVBZDNyN1d5NDV0N09vT0d2?=
 =?utf-8?B?YXNJbkg1dnJEKzNudjdSRlZ2dm5RN1FNZ29KamVJUG9BY3cwWGNmV3c5Ulp2?=
 =?utf-8?B?T0NKQlJ5dk5YQU1oWVpsZU9BeFdzTlZRNUM4QVU5RmRaQmdzQ1BmMXEyMmxM?=
 =?utf-8?B?Q3lIaHdtKzJlWVIwM0J6aVJpcG1zWkVWVkc5ektNSTNUeGljVm9aak9Eamtv?=
 =?utf-8?B?bmxOTVV3TmttcDgydFpwb1NFSHY4NzE2QytkL2hEL09xVWhrMWlNR2NTaGQr?=
 =?utf-8?B?b0FqQ2RzeXRBTFlaNi81elZnTFNteVhqeGkvK3E5dkZFR1h2dkpnRFB4ZUhK?=
 =?utf-8?B?c2FaMTdBY0d3Y3JCdnkwVlptUFRpNjMxb3BscEQ5SXlMVGhPME1OMmsyeC9Q?=
 =?utf-8?B?U3JTRDNhT3NOajZ6K1hwZWpvVmc1RS9hcTYveENJQklIczVUVU5JVzJLblJB?=
 =?utf-8?Q?nUAOn9tU22xYGri3+EVKyzBGGp5Its0sLc0r152?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 964933cf-18f8-460c-d1e5-08d8d8c6d1d8
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Feb 2021 13:19:31.0176
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: uO9PArvS7uguG++y3AQnporAq58YPCx8eAwGCjnmbyRbDPslqq/DqJBlOtJD/hkAPQNI8wbqCcqIBIT3KKjuOKqLkNXw3NrmsNvJX8XjBjo=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4421
X-OriginatorOrg: citrix.com

On 24/02/2021 11:07, Ian Jackson wrote:
> Jan Beulich writes ("Re: [PATCH 2/2] hvmloader: do not include system headers for type declarations"):
>> Like the hypervisor, we should prefer using __SIZE_TYPE__
>> when available.
> I disagree.

size_t is obnoxious in the C spec.  It might not be the largest native
word size on the processor, and in some 64-bit environments, it really is
32 bits.

It cannot be correctly derived from a standard type, and must come from
the compiler, because it is critical that it matches what the compiler
generates for the sizeof operator.

POSIX, being fairly sane, prohibits environments where the maximum object
size is smaller than the address size, which is why aliasing it to
unsigned long works in the common case.
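[The point about size_t having to match the sizeof operator can be checked mechanically; a sketch assuming GCC or Clang, whose __builtin_types_compatible_p and __typeof__ extensions are used for the check:]

```c
/* Derive size_t from the compiler so it is guaranteed to match the result
 * type of sizeof, rather than assuming it aliases unsigned long. */
#ifdef __SIZE_TYPE__
typedef __SIZE_TYPE__ size_t;
#else
typedef unsigned long size_t;  /* common-case fallback, per the POSIX argument */
#endif

/* GCC/Clang-only check: sizeof(0) has type size_t, so the two must agree. */
_Static_assert(__builtin_types_compatible_p(size_t, __typeof__(sizeof(0))),
               "size_t must be the type the compiler uses for sizeof");
```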

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 13:26:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 13:26:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89351.168215 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEuBp-000232-1I; Wed, 24 Feb 2021 13:26:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89351.168215; Wed, 24 Feb 2021 13:26:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEuBo-00022v-Tu; Wed, 24 Feb 2021 13:26:52 +0000
Received: by outflank-mailman (input) for mailman id 89351;
 Wed, 24 Feb 2021 13:26:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lEuBn-00022q-KZ
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 13:26:51 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lEuBn-00069e-HA
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 13:26:51 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lEuBn-0001M1-Eb
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 13:26:51 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lEuBk-000705-7E; Wed, 24 Feb 2021 13:26:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=zsfChPZ6eRshvqQ362xtptkiwsoOuYeftxuBv6ENnuE=; b=Nw2Vr2er6gI+NQ1R/nFRpwHXD7
	Z0KTvLcv/Nh3R2y0AGA3SWY1NWIDnmOauUlBpCoBz6jPnqpsjT1inyTBKvdnfLWtELU1FPVRPp+gh
	qvkvyzCnbvi7sXkQDH4SS1o0tjuf690tPnj+nxQsX2KzZVVP9Q1o3n47/RYxtCBYsL84=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24630.21527.991285.555074@mariner.uk.xensource.com>
Date: Wed, 24 Feb 2021 13:26:47 +0000
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel\@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
    Andrew Cooper <andrew.cooper3@citrix.com>,
    Wei Liu <wl@xen.org>,
    Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
    George Dunlap <george.dunlap@citrix.com>
Subject: Re: [PATCH v2 0/8] x86/PV: avoid speculation abuse through guest
 accessors
In-Reply-To: <c7fe2872-626c-1bd9-02f2-572ce81eabbe@suse.com>
References: <b466a19e-e547-3c7c-e39b-1a4c848a053a@suse.com>
	<24623.56913.290437.499946@mariner.uk.xensource.com>
	<ce93bd48-7ef3-cdb1-9429-ccd894895e9e@suse.com>
	<24630.20427.917602.787877@mariner.uk.xensource.com>
	<c7fe2872-626c-1bd9-02f2-572ce81eabbe@suse.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Jan Beulich writes ("Re: [PATCH v2 0/8] x86/PV: avoid speculation abuse through guest accessors"):
> On 24.02.2021 14:08, Ian Jackson wrote:
> > For 7, I remember what I think was an IRC conversation where someone
> > (you, I think) said you had examined the generated asm and it was
> > unchanged.
> 
> It was in email, and I've inspected only some examples of the
> generated asm, not all instances. I would hope that was
> sufficient, but since I'm not entirely certain ...
> 
> > If I have remembered that correctly, then for 7 as well:
> > Release-Acked-by: Ian Jackson <iwj@xenproject.org>
> 
> ... I'll better wait for explicit confirmation of this.

I think that's convincing enough.  Thank you.

Release-Acked-by: Ian Jackson <iwj@xenproject.org>


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 13:34:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 13:34:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89354.168227 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEuJQ-000345-Qv; Wed, 24 Feb 2021 13:34:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89354.168227; Wed, 24 Feb 2021 13:34:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEuJQ-00033y-N5; Wed, 24 Feb 2021 13:34:44 +0000
Received: by outflank-mailman (input) for mailman id 89354;
 Wed, 24 Feb 2021 13:34:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEuJP-00033q-Op; Wed, 24 Feb 2021 13:34:43 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEuJP-0006Gr-IG; Wed, 24 Feb 2021 13:34:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEuJP-0002Xz-7q; Wed, 24 Feb 2021 13:34:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lEuJP-0001Fy-7J; Wed, 24 Feb 2021 13:34:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=dGfUlnnrenD2538YTZKhtS/M4aMsaYGIclFJ07XKDBQ=; b=QHogbsj0fnKMQexTkQsuoiDBzC
	ydYp0+pkADW3B8z8SMVcwEdN/VbjnZ1Ct6ZTC4a/DvUQc6BUFE9vH8RLHUQD85j9KNddF14+/LSjH
	iQ28CdhTcumttASeiRWo0x6Qhdr3MBZHy1rAtheijnTTIs7yKMzF7qpmaehOM2+DsoYQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159602-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 159602: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-shadow:xen-boot:fail:regression
    xen-unstable:test-xtf-amd64-amd64-1:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-intel:xen-boot:fail:regression
    xen-unstable:test-xtf-amd64-amd64-3:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    xen-unstable:test-xtf-amd64-amd64-4:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-xtf-amd64-amd64-2:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-xtf-amd64-amd64-5:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-amd64-i386-libvirt-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-coresched-i386-xl:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-raw:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-migrupgrade:xen-boot/dst_host:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-libvirt-pair:xen-boot/src_host:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-libvirt-pair:xen-boot/dst_host:fail:regression
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-freebsd10-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-freebsd10-i386:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-examine:reboot:fail:regression
    xen-unstable:test-amd64-i386-libvirt:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-livepatch:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-pair:xen-boot/src_host:fail:regression
    xen-unstable:test-amd64-i386-pair:xen-boot/dst_host:fail:regression
    xen-unstable:test-amd64-i386-xl-pvshim:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:xen-boot:fail:regression
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f894c3d8e705fea9cb3244fa61684bfd8bdd1b2a
X-Osstest-Versions-That:
    xen=e8185c5f01c68f7d29d23a4a91bc1be1ff2cc1ca
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 24 Feb 2021 13:34:43 +0000

flight 159602 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159602/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-ws16-amd64  8 xen-boot          fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-debianhvm-amd64  8 xen-boot     fail REGR. vs. 159475
 test-amd64-i386-xl-shadow     8 xen-boot                 fail REGR. vs. 159475
 test-xtf-amd64-amd64-1      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-amd64-i386-qemut-rhel6hvm-intel  8 xen-boot         fail REGR. vs. 159475
 test-xtf-amd64-amd64-3      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-amd64-i386-xl-qemut-debianhvm-amd64  8 xen-boot     fail REGR. vs. 159475
 test-xtf-amd64-amd64-4      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-xtf-amd64-amd64-2      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-xtf-amd64-amd64-5      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-amd64-i386-libvirt-xsm   8 xen-boot                 fail REGR. vs. 159475
 test-amd64-coresched-i386-xl  8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-xl-raw        8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-xl-xsm        8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  8 xen-boot  fail REGR. vs. 159475
 test-amd64-i386-migrupgrade  13 xen-boot/dst_host        fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-ovmf-amd64  8 xen-boot          fail REGR. vs. 159475
 test-amd64-i386-xl            8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 159475
 test-amd64-i386-qemut-rhel6hvm-amd  8 xen-boot           fail REGR. vs. 159475
 test-amd64-i386-libvirt-pair 12 xen-boot/src_host        fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 159475
 test-amd64-i386-libvirt-pair 13 xen-boot/dst_host        fail REGR. vs. 159475
 test-amd64-i386-qemuu-rhel6hvm-amd  8 xen-boot           fail REGR. vs. 159475
 test-amd64-i386-xl-qemut-win7-amd64  8 xen-boot          fail REGR. vs. 159475
 test-amd64-i386-freebsd10-amd64  8 xen-boot              fail REGR. vs. 159475
 test-amd64-i386-freebsd10-i386  8 xen-boot               fail REGR. vs. 159475
 test-amd64-i386-examine       8 reboot                   fail REGR. vs. 159475
 test-amd64-i386-libvirt       8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  8 xen-boot  fail REGR. vs. 159475
 test-amd64-i386-livepatch     8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 159475
 test-amd64-i386-qemuu-rhel6hvm-intel  8 xen-boot         fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-ws16-amd64  8 xen-boot          fail REGR. vs. 159475
 test-amd64-i386-pair         12 xen-boot/src_host        fail REGR. vs. 159475
 test-amd64-i386-pair         13 xen-boot/dst_host        fail REGR. vs. 159475
 test-amd64-i386-xl-pvshim     8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-win7-amd64  8 xen-boot          fail REGR. vs. 159475

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat  fail pass in 159559

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 159475
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 159475
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 159475
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 159475
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 159475
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 159475
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 159475
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f894c3d8e705fea9cb3244fa61684bfd8bdd1b2a
baseline version:
 xen                  e8185c5f01c68f7d29d23a4a91bc1be1ff2cc1ca

Last test of basis   159475  2021-02-19 06:26:37 Z    5 days
Failing since        159487  2021-02-20 04:29:29 Z    4 days    8 attempts
Testing same since   159559  2021-02-22 20:38:35 Z    1 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Norbert Manthey <nmanthey@amazon.de>
  Rahul Singh <rahul.singh@arm.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Tamas K Lengyel <tamas@tklengyel.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    fail    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 318 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 14:07:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 14:07:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89360.168242 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEuor-00066p-NJ; Wed, 24 Feb 2021 14:07:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89360.168242; Wed, 24 Feb 2021 14:07:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEuor-00066i-K6; Wed, 24 Feb 2021 14:07:13 +0000
Received: by outflank-mailman (input) for mailman id 89360;
 Wed, 24 Feb 2021 14:07:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=F+xl=H2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lEuoq-00066d-NZ
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 14:07:12 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 03393f60-9fba-422f-86a4-d19239253c23;
 Wed, 24 Feb 2021 14:07:10 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1281AAD5C;
 Wed, 24 Feb 2021 14:07:10 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 03393f60-9fba-422f-86a4-d19239253c23
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614175630; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=AaZrvd0hlbCE7Mcy+OTHJieEox1to3Lvp/Fc03YRH3k=;
	b=IuluAClreHDvA3iq4Un+HAw0bK7YlfvnnVKiSwf9aQcovOz4q5raO97J3Vg3jAotHm4J1D
	j1vbVwPKv2IVAguXBIXK0SW7izWQEoTIt3BZ4qb8Yxg8yy4O9Ry2d/iNqEr4vlLEVIcXOY
	s1QSbBm+T7uvQDX1mxBktYUxiyB7uTc=
Subject: Re: [for-4.15][RESEND PATCH v4 1/2] xen/x86: iommu: Ignore IOMMU
 mapping requests when a domain is dying
To: Julien Grall <julien@xen.org>
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>, Paul Durrant <paul@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210224094356.7606-1-julien@xen.org>
 <20210224094356.7606-2-julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d5a09319-614d-398b-b911-bc2533bec587@suse.com>
Date: Wed, 24 Feb 2021 15:07:09 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210224094356.7606-2-julien@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 24.02.2021 10:43, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> The new x86 IOMMU page-tables allocator will release the pages when
> relinquishing the domain resources. However, this is not sufficient
> when the domain is dying because nothing prevents page-tables from
> being allocated.
> 
> As the domain is dying, it is not necessary to continue to modify the
> IOMMU page-tables as they are going to be destroyed soon.
> 
> At the moment, page-table allocations will only happen in iommu_map().
> So after this change there will be no more page-table allocations
> happening.

While I'm still not happy about this asymmetry, I'm willing to accept
it in the interest of getting the underlying issue addressed. May I
ask though that you add something like "... because we don't use
superpage mappings yet when not sharing page tables"?

But there are two more minor things:

> --- a/xen/drivers/passthrough/x86/iommu.c
> +++ b/xen/drivers/passthrough/x86/iommu.c
> @@ -267,6 +267,12 @@ int iommu_free_pgtables(struct domain *d)
>      struct page_info *pg;
>      unsigned int done = 0;
>  
> +    if ( !is_iommu_enabled(d) )
> +        return 0;

Why is this addition needed? Hitting a not yet initialized spin lock
is - afaict - no worse than a not yet initialized list, so it would
seem to me that this can't be the reason. No other reason is called
out by the description.

> +    /* After this barrier, no more IOMMU mapping can happen */
> +    spin_barrier(&hd->arch.mapping_lock);

On the v3 discussion I thought you did agree to change the wording
of the comment to something like "no new IOMMU mappings can be
inserted"?

Jan


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 14:13:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 14:13:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89369.168253 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEuuG-00076i-B2; Wed, 24 Feb 2021 14:12:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89369.168253; Wed, 24 Feb 2021 14:12:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEuuG-00076b-7v; Wed, 24 Feb 2021 14:12:48 +0000
Received: by outflank-mailman (input) for mailman id 89369;
 Wed, 24 Feb 2021 14:12:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEuuE-00076T-Ib; Wed, 24 Feb 2021 14:12:46 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEuuE-0006yq-DD; Wed, 24 Feb 2021 14:12:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEuuE-0004Ze-5b; Wed, 24 Feb 2021 14:12:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lEuuE-00061S-56; Wed, 24 Feb 2021 14:12:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=3iTksSRhvUWznluwgLcMWea0T6QdqJzPiHKFIOgJcuA=; b=jtgR7iCmqa6guU6K+ILkTbA3Cd
	43y43NmR/Ep7o1m5tWBk6WYl8wpoicwUTnvfrbywvOEIaDkxE26tjgvDURm1iBm0QRm4JRqCzMEM0
	TwR+BQg5RU76FUz2pD4WbL4V/pdQvPe3LVk34lfRiHhX5TiG1/6sS3NtHhLRBFPgxMoA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159624-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 159624: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=81b2b328a26c1b89c275898d12e8ab26c0673dad
X-Osstest-Versions-That:
    xen=5d94433a66df29ce314696a13bdd324ec0e342fe
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 24 Feb 2021 14:12:46 +0000

flight 159624 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159624/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 159600

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  81b2b328a26c1b89c275898d12e8ab26c0673dad
baseline version:
 xen                  5d94433a66df29ce314696a13bdd324ec0e342fe

Last test of basis   159600  2021-02-23 20:01:30 Z    0 days
Testing same since   159624  2021-02-24 12:01:29 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  fail    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 81b2b328a26c1b89c275898d12e8ab26c0673dad
Author: Roger Pau Monné <roger.pau@citrix.com>
Date:   Wed Feb 24 12:48:13 2021 +0100

    hvmloader: use Xen private header for elf structs
    
    Do not use the system provided elf.h, and instead use elfstructs.h
    from libelf.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Ian Jackson <iwj@xenproject.org>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit 10bb8aa0d5d029bd56da4a2a92e1e42bef880210
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Feb 24 12:47:34 2021 +0100

    build: remove more absolute paths from dependency tracking files
    
    d6b12add90da ("DEPS handling: Remove absolute paths from references to
    cwd") took care of massaging the dependencies of the output file, but
    for our passing of -MP to the compiler to take effect the same needs to
    be done on the "phony" rules that the compiler emits.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Ian Jackson <iwj@xenproject.org>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 14:33:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 14:33:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89376.168275 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEvEA-0000gA-6C; Wed, 24 Feb 2021 14:33:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89376.168275; Wed, 24 Feb 2021 14:33:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEvEA-0000g3-2Q; Wed, 24 Feb 2021 14:33:22 +0000
Received: by outflank-mailman (input) for mailman id 89376;
 Wed, 24 Feb 2021 14:33:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lEvE9-0000fy-8c
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 14:33:21 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lEvE9-0007Jz-6o
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 14:33:21 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lEvE9-0006fc-4l
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 14:33:21 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lEvE0-00078h-Ug; Wed, 24 Feb 2021 14:33:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=syTlKFg7dYVBP+vpaduTzpE6UI63HkH0IDlCIHzIKiM=; b=Lp7eXWjxMUGioWb8UISLKREM8e
	qxHHnyTUBoxx6/n+nWbsiJAEw6ITRVbaW2wcT8m3KaJ7vJGY3jqBdzMp1cwxkWWjR66973Yzi4foJ
	NDKFf7unUeCiyuuaW1333OZsNZX0wWgzyDOwAHrD9JQLZ/dYcRa6S0/R+lEIZwKI1o/c=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 8bit
Message-ID: <24630.25512.617137.512212@mariner.uk.xensource.com>
Date: Wed, 24 Feb 2021 14:33:12 +0000
To: Andrew Cooper <andrew.cooper3@citrix.com>,
    Jan Beulich <jbeulich@suse.com>
Cc: Roger Pau Monne <roger.pau@citrix.com>,
    Wei Liu <wl@xen.org>,
    xen-devel@lists.xenproject.org
Subject: Re: [PATCH 2/2] hvmloader: do not include system headers for type
 declarations [and 1 more messages]
In-Reply-To: <9e64f1b3-fbb3-6561-cf7b-498ed3839020@suse.com>,
	<1b251bb0-fecf-e5ed-c6e5-a2cb7c612cf1@citrix.com>
References: <20210224102641.89455-1-roger.pau@citrix.com>
	<20210224102641.89455-3-roger.pau@citrix.com>
	<fb677f29-2b21-aaf1-1127-6774fb8e91e9@suse.com>
	<24630.13192.874503.894268@mariner.uk.xensource.com>
	<1b251bb0-fecf-e5ed-c6e5-a2cb7c612cf1@citrix.com>
	<9e64f1b3-fbb3-6561-cf7b-498ed3839020@suse.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Andrew Cooper writes ("Re: [PATCH 2/2] hvmloader: do not include system headers for type declarations"):
> At what point do we just declare Alpine broken?  These are all
> freestanding headers, and explicitly available for use, in the way we use
> them.

There is IMO nothing wrong with Alpine here.  Alpine amd64 simply does
not support compilation of 32-bit x86 userland binaries.

But that's OK for us.  Xen does not require the execution of any 32-bit
userland binaries.  hvmloader is not a userland binary.

As Roger said on irc

13:35 <royger> but requiring a compiler that supports generating
               i386 code doesn't imply that we also have a libc for it?
               
> There are substantial portability costs for making changes like this,
> which takes us from standards compliant C to GCC-isms-only.

Since we are defining our own standalone environment for hvmloader, we
are in the position of the C *implementor*.  Compilers have features
(like __builtin_va*) that are helpful for implementing standard C
features like stdarg.h and indeed stdint.h.
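
The stdarg.h case can be made concrete.  A minimal sketch of a
freestanding <stdarg.h> built on GCC's builtins (hypothetical; the
names match the usual spelling but hvmloader's actual header may
differ), plus a small consumer:

```c
/* Sketch of a freestanding <stdarg.h> on top of GCC builtins.
 * Hypothetical illustration, not hvmloader's actual header. */
typedef __builtin_va_list va_list;
#define va_start(ap, last) __builtin_va_start((ap), (last))
#define va_arg(ap, type)   __builtin_va_arg(ap, type)
#define va_end(ap)         __builtin_va_end(ap)

/* Small demonstration: sum n trailing int arguments. */
static int sum(int n, ...)
{
    va_list ap;
    int total = 0;

    va_start(ap, n);
    while (n--)
        total += va_arg(ap, int);
    va_end(ap);

    return total;
}
```

The compiler supplies the mechanism (__builtin_va_*); the environment,
here us, supplies the standard-named wrappers.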

Or to put it another way, GCC does not, by itself, provide (in
Standard C terms) a "freestanding implementation".  Arguably GCC ought
to provide stdint.h et al, but in practice its doing so causes more
trouble, as it gets in the way of the implementors of hosted
implementations.

The conclusion is simply that we must provide for ourselves any
aspects of a "freestanding implementation" that we care about.  (And
then we get to decide for ourselves how much the internal API should
look like Standard C.  Defining the Standard C type names is
definitely IMO advisable as it makes the bulk of the code sensible to
read.)

Jan Beulich writes ("Re: [PATCH 2/2] hvmloader: do not include system headers for type declarations"):
> On 24.02.2021 12:07, Ian Jackson wrote:
> > This code is only ever going to be for 32-bit x86, so I think the way
> > Roger did it is fine.
> 
> It is technically correct at this point in time, from all we can
> tell. I can't see any reason though why a compiler might not
> support wider int or, in particular, long long.

Our requirement for hvmloader is that we have an ILP32 LL64 compiler
which generates 32-bit x86 machine code.  That is what "gcc -m32"
means.  Whether future compiler(s) might exist which can provide ILP32
LLP64 (and what type uint64_t is on such a compiler) is not relevant.

> > Doing it the other way, to cope with this file being used with
> > compiler settings where the above set of types is wrong, would also
> > imply more complex definitions of INT32_MIN et al.
> 
> Well, that's only as far as the use of number suffixes goes. The
> values used won't change, as these constants describe fixed width
> types.

So the definitions would need to contain casts.

> >> Like the hypervisor, we should prefer using __SIZE_TYPE__
> >> when available.
> > 
> > I disagree.
> 
> May I ask why? There is a reason providing of these types did get
> added to (at least) gcc.

__SIZE_TYPE__ is provided by the compiler to the libc implementor.  It
is one of those facilities like __builtin_va*.  The bulk of the code
in hvmloader should not use this kind of thing.  It should use plain
size_t.
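
To make the division of labour concrete, a sketch (hypothetical) in
which __SIZE_TYPE__ appears exactly once, in the header that defines
size_t, and ordinary code then uses plain size_t:

```c
/* Sketch: __SIZE_TYPE__ is for the implementor of the size_t header;
 * the rest of the code never mentions it.  Hypothetical illustration;
 * the fallback branch is an assumption for non-GCC compilers. */
#ifdef __SIZE_TYPE__
typedef __SIZE_TYPE__ size_t;
#else
typedef unsigned long size_t;   /* assumed fallback */
#endif

/* Ordinary code: uses plain size_t only. */
static size_t count_bytes(const char *s)
{
    size_t n = 0;

    while (*s++)
        n++;
    return n;
}
```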

As for the new header in hvmloader, it does not matter whether it uses
__SIZE_TYPE__ or some other type which is known to be 32-bit, since
this code is definitely only ever for 32-bit x86.

> One argument against this would be above mentioned independence
> on any ABI the compiler would be built for, but I'd buy that only
> if above we indeed used __attribute__((__mode__())), as that's
> the only way to achieve such independence.

We don't want or need to support building hvmloader with a different
ABI.

> >> Nit: Perhaps better omit the unnecessary inner parentheses?
> > 
> > We should definitely keep the inner parentheses.  I don't want to
> > start carefully reasoning about precisely which inner parentheses are
> > necessary for macro argument parsing correctness.
> 
> Can you give me an example of when the inner parentheses would be
> needed? I don't think they're needed no matter whether (taking the
> example here) __builtin_va_...() were functions or macros. They
> would of course be needed if the identifiers were part of
> expressions beyond the mere function invocation.

You mention the situation where the parentheses would be needed
yourself.
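
For concreteness, a hypothetical pair of macros showing the classic
case where inner parentheses do matter: when the expansion places an
operator adjacent to the argument, precedence can silently change the
result.

```c
/* Hypothetical macros for illustration of macro-hygiene parentheses. */
#define TWICE_BAD(x)  (x * 2)     /* no inner parentheses */
#define TWICE(x)      ((x) * 2)   /* argument parenthesised */

/* TWICE_BAD(1 + 2) expands to (1 + 2 * 2), i.e. 5, not 6. */
```

A pure function-call wrapper does not hit this, which is Jan's point;
the defensive style avoids having to re-check that property on every
edit.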

> We've been trying to eliminate such in the hypervisor part of the
> tree,

Really?  Well, I think that is in exceedingly poor taste.  I can't
claim that it's objectively wrong, and this is a question of style, so
I won't insist on it.

> The primary reason why I've been advocating to avoid them is that,
> as long as they're not needed for anything, they harm readability
> and increase the risk of mistakes like the one that had led to
> XSA-316.

I looked again at XSA-316.  I don't want to open another can of worms.
It will suffice to say that I don't share your view on the root cause.

Ian.


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 14:56:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 14:56:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89380.168290 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEvaj-0002id-7m; Wed, 24 Feb 2021 14:56:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89380.168290; Wed, 24 Feb 2021 14:56:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEvaj-0002iW-3Z; Wed, 24 Feb 2021 14:56:41 +0000
Received: by outflank-mailman (input) for mailman id 89380;
 Wed, 24 Feb 2021 14:56:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=F+xl=H2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lEvai-0002iP-8U
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 14:56:40 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f80b112f-878c-45aa-860e-f50b9af381de;
 Wed, 24 Feb 2021 14:56:39 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2B5B6AE6E;
 Wed, 24 Feb 2021 14:56:38 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f80b112f-878c-45aa-860e-f50b9af381de
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614178598; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=4uPqCEpP5t4ePNbFKNrDI7L2s3PL0fx65MATHmLjIN4=;
	b=XGzMkdYIWHL7UJ8rni00dziTSBNgQq6L3PyymioNTzl1KRO22UWQr8z6nI+4OtrbQsPJte
	OpFD5rVZNvc9Ry3hWsvYOjfbaL/LrUwlAdPgo8P4FluP6Hddwk8AK5nQ2wmbj5vGAepmsJ
	cYyHX5uki5530KFdqhz6q83BNiBVito=
Subject: Re: [PATCH 2/2] hvmloader: do not include system headers for type
 declarations [and 1 more messages]
To: Ian Jackson <iwj@xenproject.org>
Cc: Roger Pau Monne <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>
References: <20210224102641.89455-1-roger.pau@citrix.com>
 <20210224102641.89455-3-roger.pau@citrix.com>
 <fb677f29-2b21-aaf1-1127-6774fb8e91e9@suse.com>
 <24630.13192.874503.894268@mariner.uk.xensource.com>
 <1b251bb0-fecf-e5ed-c6e5-a2cb7c612cf1@citrix.com>
 <9e64f1b3-fbb3-6561-cf7b-498ed3839020@suse.com>
 <24630.25512.617137.512212@mariner.uk.xensource.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <07a0c050-85c6-ab42-2d28-0bd25a06986f@suse.com>
Date: Wed, 24 Feb 2021 15:56:37 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <24630.25512.617137.512212@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 24.02.2021 15:33, Ian Jackson wrote:
> Andrew Cooper writes ("Re: [PATCH 2/2] hvmloader: do not include system headers for type declarations"):
>> At what point do we just declare Alpine broken?  These are all
>> freestanding headers, and explicitly available for use, in the way we use
>> them.
> 
> There is IMO nothing wrong with Alpine here.  Alpine amd64 simply does
> not support compilation of 32-bit x86 userland binaries.
> 
> But that's OK for us.  Xen does not require the execution of any 32-bit
> userland binaries.  hvmloader is not a userland binary.
> 
> As Roger said on irc
> 
> 13:35 <royger> but requiring a compiler that supports generating
>                i386 code doesn't imply that we also have a libc for it?
>                
>> There are substantial portability costs for making changes like this,
>> which takes us from standards compliant C to GCC-isms-only.
> 
> Since we are defining our own standalone environment for hvmloader, we
> are in the position of the C *implementor*.  Compilers have features
> (like __builtin_va*) that are helpful for implementing standard C
> features like stdarg.h and indeed stdint.h.
> 
> Or to put it another way, GCC does not, by itself, provide (in
> Standard C terms) a "freestanding implementation".  Arguably GCC ought
> to provide stdint.h et al, but in practice its doing so causes more
> trouble, as it gets in the way of the implementors of hosted
> implementations.

But gcc _does_ provide a stdint.h.

> Jan Beulich writes ("Re: [PATCH 2/2] hvmloader: do not include system headers for type declarations"):
>> On 24.02.2021 12:07, Ian Jackson wrote:
>>> This code is only ever going to be for 32-bit x86, so I think the way
>>> Roger did it is fine.
>>
>> It is technically correct at this point in time, from all we can
>> tell. I can't see any reason though why a compiler might not
>> support wider int or, in particular, long long.
> 
> Our requirement for hvmloader is that we have an ILP32 LL64 compiler
> which generates 32-bit x86 machine code.  That is what "gcc -m32"
> means.

I'm not sure about the last statement; I'm pretty sure we don't
check that we have such a compiler (in tools/configure).

>  Whether future compiler(s) might exist which can provide ILP32
> LLP64 (and what type uint64_t is on such a compiler) is not relevant.
> 
>>> Doing it the other way, to cope with this file being used with
>>> compiler settings where the above set of types is wrong, would also
>>> imply more complex definitions of INT32_MIN et al.
>>
>> Well, that's only as far as the use of number suffixes goes. The
>> values used won't change, as these constants describe fixed width
>> types.
> 
> So the definitions would need to contain casts.

Which they can't, as that would make them unusable in preprocessor
directives.
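
Jan's constraint can be illustrated (hypothetical macro name): a
cast-free spelling works both as a C expression and in preprocessor
arithmetic, whereas a cast cannot appear in a #if directive at all.

```c
/* Hypothetical illustration of why limit macros avoid casts.
 *
 * A cast version is valid C:
 *     #define INT32_MIN_CAST ((int32_t)-2147483647 - 1)
 * but "#if INT32_MIN_CAST < 0" would be a preprocessor error,
 * because "(int32_t)" is not preprocessor syntax. */

#define MY_INT32_MIN (-2147483647 - 1)   /* cast-free spelling */

/* Usable in preprocessor arithmetic: */
#if MY_INT32_MIN >= 0
#error "MY_INT32_MIN should be negative"
#endif
```

The "- 1" dance also avoids writing the literal 2147483648, which
would not fit in a 32-bit int before negation.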

>>>> Like the hypervisor, we should prefer using __SIZE_TYPE__
>>>> when available.
>>>
>>> I disagree.
>>
>> May I ask why? There is a reason providing of these types did get
>> added to (at least) gcc.
> 
> __SIZE_TYPE__ is provided by the compiler to the libc implementor.  It
> is one of those facilities like __builtin_va*.  The bulk of the code
> in hvmloader should not use this kind of thing.  It should use plain
> size_t.
> 
> As for the new header in hvmloader, it does not matter whether it uses
> __SIZE_TYPE__ or some other type which is known to be 32-bit, since
> this code is definitely only ever for 32-bit x86.

From a compiler perspective, "32-bit" and "x86" need further pairing
with an OS, as it's the OS which defines the ABI. This is why further
up I did say "It is technically correct at this point in time, from
all we can tell" - we imply that all OSes we want to be able to build
on provide a suitable ABI, so we can use their compilers.

>> One argument against this would be above mentioned independence
>> on any ABI the compiler would be built for, but I'd buy that only
>> if above we indeed used __attribute__((__mode__())), as that's
>> the only way to achieve such independence.
> 
> We don't want or need to support building hvmloader with a differnet
> ABI.
> 
>>>> Nit: Perhaps better omit the unnecessary inner parentheses?
>>>
>>> We should definitely keep the inner parentheses.  I don't want to
>>> start carefully reasoning about precisely which inner parentheses are
>>> necessary for macro argument parsing correctness.
>>
>> Can you give me an example of when the inner parentheses would be
>> needed? I don't think they're needed no matter whether (taking the
>> example here) __builtin_va_...() were functions or macros. They
>> would of course be needed if the identifiers were part of
>> expressions beyond the mere function invocation.
> 
> You mention the situation where the parentheses would be needed
> yourself.

Okay, if that was your example: since there are no further
expressions involved here, do you agree the parentheses aren't
needed here?

Jan


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 14:59:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 14:59:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89382.168302 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEvdF-0002yK-LZ; Wed, 24 Feb 2021 14:59:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89382.168302; Wed, 24 Feb 2021 14:59:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEvdF-0002yD-Ho; Wed, 24 Feb 2021 14:59:17 +0000
Received: by outflank-mailman (input) for mailman id 89382;
 Wed, 24 Feb 2021 14:59:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Qs49=H2=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lEvdD-0002y6-Vh
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 14:59:16 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7b2d3c2f-00d8-41b1-bd31-4e0502369d1f;
 Wed, 24 Feb 2021 14:59:14 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7b2d3c2f-00d8-41b1-bd31-4e0502369d1f
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614178754;
  h=from:to:cc:subject:date:message-id:
   content-transfer-encoding:mime-version;
  bh=Rpf1znQVFPhTHcaRs7o+3m6cDCMZV93qNeHGwUR0YhE=;
  b=SrT/dV9edx9wjzv+h0aNX8QG6AN7TSVGr9uWiJ34AQeuu0wU0CAvdvkX
   27d8Ui0QodsI/vMTM0xKVJ978zC7cRYZnYynjMe8ARcNgHLyBxHmyaoEl
   OLzdF46aI09JFpz7IlCH0zAmOfvXATWlcnzOIQLykaaYC9ag8e8k4rjfM
   A=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 39319449
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,203,1610427600"; 
   d="scan'208";a="39319449"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=PhrWSopI9TxhxOdquol5Ln6Zs9ecjCSmlxmeeCzldOL0Qi/ImSOry/dCp7exD91C0WkQkz1w8c/oPsmEVSRAJKngEf8zHcj8bFqah753ozaKMjl3Ji4Tbn41qY5PAPQoTCWl52nNFgnowT4BYQJgAt2p9gk8kNmfJBKvrCz17eWv4jNHb+SanzVhN99y0knozp6mBxvvjCCpJ0WOymJbboChahwIykfTfSJK4waRl05ebJWGzZpq6sTFx0tg8em//cVRDEUNNNR41d4LAtY4Z3pjBp4yOrBOZOnqipSXFmvZY21Apknw7zK08hY7SBIhjPPItM84yASyITK7Q9S+pA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=HzE9BQN1GctUAEq5FuaqiEyjrg9KDf9htA4nHd5lHYs=;
 b=CkAo9os4Lr1Q0w/ZdHbWX02gk4W5st4Rv4Te1xg+TnPe51WFDve9V4flTx9tlCMUGuVLcF1VIP1A6xuKyLsSj0YexeL4hZ4TSFFLfZqTJEeXzLchBa2t6d+5c49wzhx9k9P+N3Qekhd8v/QTNm9wyY/ZFExFVAJt6Q5tnHtiwW3lKlxNBXS42YQJEauuwBBQ3ArTDCIHNz6Wb9FNmYs8yOGizIfs98oJzQgkWZG05LESbSwnHCa60SJoc9fmzk4rdOnG8vAuzaLrPAuNGxFotbqHcHPb5k+QFgWQ3UfFzQ31nYUzX1VDqIph38GVJ0ClHMqYHCd/c+yluMW3CZbDqQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=HzE9BQN1GctUAEq5FuaqiEyjrg9KDf9htA4nHd5lHYs=;
 b=Ptz2JIiM9fb6Yp6lL7a0UMsVaB4fwd3YaeZNVJ5A+0E+XlWH3cq1M7hP9mxPIWExZFN+A9B61dXtgyLl8pPctfClKbVeFileUQAFRAOzfXbI11jPglU1cZxikdv3RR9g1fsaomQMp62CTp2SuoxLZqKn28q19QdfUnajNJi1Kfk=
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Roger Pau Monne <roger.pau@citrix.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Ian
 Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>
Subject: [PATCH for-4.15] elfstructs: add relocation defines for i386
Date: Wed, 24 Feb 2021 15:58:56 +0100
Message-ID: <20210224145856.94465-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.30.1
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: MR2P264CA0145.FRAP264.PROD.OUTLOOK.COM (2603:10a6:501:1::8)
 To DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 045db8d1-ec33-4d4c-27c4-08d8d8d4bec7
X-MS-TrafficTypeDiagnostic: DM6PR03MB5337:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB5337D860610253A4773AF58F8F9F9@DM6PR03MB5337.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:2582;
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-CrossTenant-Network-Message-Id: 045db8d1-ec33-4d4c-27c4-08d8d8d4bec7
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Feb 2021 14:59:12.0644
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: GAk/dqxPTpIaPYWxIqBnhzNpVkOKQEab4USmAzqJ/o7uYZAD1xkMIYNyqNd25UCCkYEP+VMqPKQiZprPwvddpg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB5337
X-OriginatorOrg: citrix.com

Those are needed by the rombios relocation code in hvmloader. Fixes the
following build error:

32bitbios_support.c: In function 'relocate_32bitbios':
32bitbios_support.c:130:18: error: 'R_386_PC32' undeclared (first use in this function); did you mean 'R_X86_64_PC32'?
             case R_386_PC32:
                  ^~~~~~~~~~
                  R_X86_64_PC32
32bitbios_support.c:130:18: note: each undeclared identifier is reported only once for each function it appears in
32bitbios_support.c:134:18: error: 'R_386_32' undeclared (first use in this function)
             case R_386_32:
                  ^~~~~~~~

Only add the two defines that are actually used, which seems to match
what we do for amd64.

Fixes: 81b2b328a26c1b ('hvmloader: use Xen private header for elf structs')
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/include/xen/elfstructs.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/xen/include/xen/elfstructs.h b/xen/include/xen/elfstructs.h
index 726ca8f60d..d1054ae380 100644
--- a/xen/include/xen/elfstructs.h
+++ b/xen/include/xen/elfstructs.h
@@ -348,6 +348,9 @@ typedef struct {
 #define ELF32_R_TYPE(i)		((unsigned char) (i))
 #define ELF32_R_INFO(s,t) 	(((s) << 8) + (unsigned char)(t))
 
+#define R_386_32           1            /* Direct 32 bit  */
+#define R_386_PC32         2            /* PC relative 32 bit */
+
 typedef struct {
 	Elf64_Addr	r_offset;	/* where to do it */
 	Elf64_Xword	r_info;		/* index & type of relocation */
-- 
2.30.1



From xen-devel-bounces@lists.xenproject.org Wed Feb 24 15:01:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 15:01:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89386.168314 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEvfB-0003qd-62; Wed, 24 Feb 2021 15:01:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89386.168314; Wed, 24 Feb 2021 15:01:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEvfB-0003qW-2s; Wed, 24 Feb 2021 15:01:17 +0000
Received: by outflank-mailman (input) for mailman id 89386;
 Wed, 24 Feb 2021 15:01:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=F+xl=H2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lEvf9-0003qR-Ik
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 15:01:15 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a15809ff-b903-4fca-8a58-6ae4c3fee40c;
 Wed, 24 Feb 2021 15:01:14 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0871AAEA8;
 Wed, 24 Feb 2021 15:01:14 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a15809ff-b903-4fca-8a58-6ae4c3fee40c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614178874; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=JcIBiokRdyE2v4AGcNU13cFGO7R1zCaWWwMormpJuAU=;
	b=pi7YA88vhuNXZwQtRU96wN5bPE/SUGTiG6xtbAjUckXJZRVFzkgOlKWgpYjDnOaexCV25/
	09ZoF4GwUava3+kr0XfrWf0MFGJ7R7KQaewKLYjFszI0El6WuQlhg/YTpxXrkpU+WN/shr
	RAJ+RHDjf1/EsSlIOs/8kW/x5RLDyXM=
Subject: Re: [PATCH for-4.15] elfstructs: add relocation defines for i386
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20210224145856.94465-1-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <2db1f1db-6a99-c0a1-98c8-2af2ffc7ca75@suse.com>
Date: Wed, 24 Feb 2021 16:01:13 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210224145856.94465-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 24.02.2021 15:58, Roger Pau Monne wrote:
> Those are needed by the rombios relocation code in hvmloader. Fixes the
> following build error:
> 
> 32bitbios_support.c: In function 'relocate_32bitbios':
> 32bitbios_support.c:130:18: error: 'R_386_PC32' undeclared (first use in this function); did you mean 'R_X86_64_PC32'?
>              case R_386_PC32:
>                   ^~~~~~~~~~
>                   R_X86_64_PC32
> 32bitbios_support.c:130:18: note: each undeclared identifier is reported only once for each function it appears in
> 32bitbios_support.c:134:18: error: 'R_386_32' undeclared (first use in this function)
>              case R_386_32:
>                   ^~~~~~~~
> 
> Only add the two defines that are actually used, which seems to match
> what we do for amd64.
> 
> Fixes: 81b2b328a26c1b ('hvmloader: use Xen private header for elf structs')
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

In principle
Reviewed-by: Jan Beulich <jbeulich@suse.com>

But ...

> --- a/xen/include/xen/elfstructs.h
> +++ b/xen/include/xen/elfstructs.h
> @@ -348,6 +348,9 @@ typedef struct {
>  #define ELF32_R_TYPE(i)		((unsigned char) (i))
>  #define ELF32_R_INFO(s,t) 	(((s) << 8) + (unsigned char)(t))
>  
> +#define R_386_32           1            /* Direct 32 bit  */
> +#define R_386_PC32         2            /* PC relative 32 bit */
> +
>  typedef struct {
>  	Elf64_Addr	r_offset;	/* where to do it */
>  	Elf64_Xword	r_info;		/* index & type of relocation */

... I'm heavily inclined to move this a few lines down to where
the other relocation types get defined, and add a respective
comment.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 15:26:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 15:26:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89395.168337 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEw2w-0005z8-BA; Wed, 24 Feb 2021 15:25:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89395.168337; Wed, 24 Feb 2021 15:25:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEw2w-0005z1-7r; Wed, 24 Feb 2021 15:25:50 +0000
Received: by outflank-mailman (input) for mailman id 89395;
 Wed, 24 Feb 2021 15:25:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lEw2u-0005yt-OH
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 15:25:48 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lEw2u-0008VU-L5
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 15:25:48 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lEw2u-0002KD-II
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 15:25:48 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lEw2r-0007FN-DC; Wed, 24 Feb 2021 15:25:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=vT0V/nBbFysqrIXy7cxIeUMkTZqRkdZEUb07V9dNfbA=; b=y9Cxd9Wm863VTIvOMtaGunzOMn
	rmk9Ohrwiwr1I1KxwxWAmy5hgeRetU0c1PiMDzj2p3YcGr8S2RhiwpZOB4gjBMQkVEseoS+Owy9Ov
	q1sZZ6BFVJmk03kfY6meUPMNs+nLvyJRJWqpY/T8dAngCzfCBbMU7N9ti6WQe0/XEwog=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 8bit
Message-ID: <24630.28665.162235.719938@mariner.uk.xensource.com>
Date: Wed, 24 Feb 2021 15:25:45 +0000
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: <xen-devel@lists.xenproject.org>,
    Andrew Cooper <andrew.cooper3@citrix.com>,
    George Dunlap <george.dunlap@citrix.com>,
    Jan Beulich <jbeulich@suse.com>,
    Julien Grall <julien@xen.org>,
    Stefano Stabellini <sstabellini@kernel.org>,
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH for-4.15] elfstructs: add relocation defines for i386
In-Reply-To: <20210224145856.94465-1-roger.pau@citrix.com>
References: <20210224145856.94465-1-roger.pau@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Roger Pau Monne writes ("[PATCH for-4.15] elfstructs: add relocation defines for i386"):
> Those are needed by the rombios relocation code in hvmloader. Fixes the
> following build error:
> 
> 32bitbios_support.c: In function 'relocate_32bitbios':
> 32bitbios_support.c:130:18: error: 'R_386_PC32' undeclared (first use in this function); did you mean 'R_X86_64_PC32'?
>              case R_386_PC32:
>                   ^~~~~~~~~~
>                   R_X86_64_PC32
> 32bitbios_support.c:130:18: note: each undeclared identifier is reported only once for each function it appears in
> 32bitbios_support.c:134:18: error: 'R_386_32' undeclared (first use in this function)
>              case R_386_32:
>                   ^~~~~~~~
> 
> Only add the two defines that are actually used, which seems to match
> what we do for amd64.
> 
> Fixes: 81b2b328a26c1b ('hvmloader: use Xen private header for elf structs')
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Release-Acked-by: Ian Jackson <iwj@xenproject.org>


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 15:26:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 15:26:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89396.168350 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEw3H-00062s-Kd; Wed, 24 Feb 2021 15:26:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89396.168350; Wed, 24 Feb 2021 15:26:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEw3H-00062l-Gq; Wed, 24 Feb 2021 15:26:11 +0000
Received: by outflank-mailman (input) for mailman id 89396;
 Wed, 24 Feb 2021 15:26:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Qs49=H2=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lEw3G-00062Y-CU
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 15:26:10 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e531d45c-1892-46b8-b285-54d448d8093c;
 Wed, 24 Feb 2021 15:26:07 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e531d45c-1892-46b8-b285-54d448d8093c
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614180367;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=FC0LM8ssa0n+KskI2PcVyMEibRpH6LqagzACBObBRWQ=;
  b=aQBeIZtb88tZ4K5UZ3cTRC4JNsh9F1ubBPUgoiD37aE/tfv8ARID6crJ
   4CLrRkWzRI64ZhEh66qEWR6QonnAxDI6cxstdjt0z+GKfvEyFvmUWGs5I
   bj/t444o+xzUnlBaSMrAohKD87Wzveytpdu6OUYBSYDirrt4XOXu8/+pS
   s=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 37950516
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,203,1610427600"; 
   d="scan'208";a="37950516"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=H2opRxzn8zggQHq6cm1wa1v1wCp00XX1bP5bN2WTIf/qb8tu864lA7mFzkMup4fab237OHXjq6xEVdiM0cmZhASKVEFMJ2/2zJYzVM9OEYxjM65mVGblMUrRKHyQg7nrRPfO2eZjW59K86VA7zYLGJoy8dPcGHJLecYM7N+nefmPxBQr20ldrSBUlwLLRX6caxB4RUOYNXuedNRy8u2vUAasG0VZSVBe486KJAfJS5hqyy6BrCs/ICgmm/IqKZ5OaqhYJR0lKp880y1yrngE2F9m5jZQYGlYTpCmohMaBOOXfL3A2Si4Fb/yV86yLV35id2FKPfSkPG7sADAmyG7jg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=7rf4THjaV32Kqcxxb+3mvPFGJP/HPmVlHwfGg8rvX54=;
 b=A+CV5soXthExcXgPu45AtQYlRRtoqgMnMKdWHGSuZwNgSfdodJrAxIRv1c+HXWXKrjj10aSkXrtVSqPwdQ63ePMmeC9BiEMtw9dJK/bTws+FdukgAF1Mh3pxMzypyvLcPksVR5bWCNtf1pcn9usor+9N2PjiI7Qq2sKobO/UZ6oUI3clqPqgc96uqEC89t3pII+nsmCgmp25a1dNe+6Yd3TJxkwVbRfesGNQ7LUguIR4mHDM7Yz9IRYjbCTDIZvbYfk6R6EoTUOk+yoesmiU0Xk2b3AlPKRcPB7mBD1Cnn8VfU4bYFoYcCBQK4oHsuaJaV6PPI6GgKE9UjapsxGAtA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=7rf4THjaV32Kqcxxb+3mvPFGJP/HPmVlHwfGg8rvX54=;
 b=CaL5+Eh3Mtx1ZfNUWJIW3seCMJqk7vIC1r0o4GzWcwy72Tg60FDQIzr/zCMsgiMAUrRbQVMQmAqL5nOKnDyb4YBYLARh8MAQEGT0SmH6zcT2RZe1wiJqx+HUvUjNv8XFduyJJqzSCxjzVo/wPg/IGIkZXUqqBpswp0nu0a8V2yo=
Date: Wed, 24 Feb 2021 16:25:58 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>, <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH for-4.15] elfstructs: add relocation defines for i386
Message-ID: <YDZwBsNBY+uyLbcB@Air-de-Roger>
References: <20210224145856.94465-1-roger.pau@citrix.com>
 <2db1f1db-6a99-c0a1-98c8-2af2ffc7ca75@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <2db1f1db-6a99-c0a1-98c8-2af2ffc7ca75@suse.com>
X-ClientProxiedBy: MRXP264CA0022.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:15::34) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: bbbe0c93-4c81-4d5c-53a5-08d8d8d87ff6
X-MS-TrafficTypeDiagnostic: DM6PR03MB5323:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB5323964AFFDA7FFBC4C3B0378F9F9@DM6PR03MB5323.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:6108;
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-CrossTenant-Network-Message-Id: bbbe0c93-4c81-4d5c-53a5-08d8d8d87ff6
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Feb 2021 15:26:04.6311
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Lrd1aLZvAX//OHMz6kP5Fh2+13NxK9rshPRXtQH17fpuPwFAkbYVvzZI2jichq1dNmbLJhCX88MXVd5OBV79GA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB5323
X-OriginatorOrg: citrix.com

On Wed, Feb 24, 2021 at 04:01:13PM +0100, Jan Beulich wrote:
> On 24.02.2021 15:58, Roger Pau Monne wrote:
> > Those are needed by the rombios relocation code in hvmloader. Fixes the
> > following build error:
> > 
> > 32bitbios_support.c: In function 'relocate_32bitbios':
> > 32bitbios_support.c:130:18: error: 'R_386_PC32' undeclared (first use in this function); did you mean 'R_X86_64_PC32'?
> >              case R_386_PC32:
> >                   ^~~~~~~~~~
> >                   R_X86_64_PC32
> > 32bitbios_support.c:130:18: note: each undeclared identifier is reported only once for each function it appears in
> > 32bitbios_support.c:134:18: error: 'R_386_32' undeclared (first use in this function)
> >              case R_386_32:
> >                   ^~~~~~~~
> > 
> > Only add the two defines that are actually used, which seems to match
> > what we do for amd64.
> > 
> > Fixes: 81b2b328a26c1b ('hvmloader: use Xen private header for elf structs')
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> In principle
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> 
> But ...
> 
> > --- a/xen/include/xen/elfstructs.h
> > +++ b/xen/include/xen/elfstructs.h
> > @@ -348,6 +348,9 @@ typedef struct {
> >  #define ELF32_R_TYPE(i)		((unsigned char) (i))
> >  #define ELF32_R_INFO(s,t) 	(((s) << 8) + (unsigned char)(t))
> >  
> > +#define R_386_32           1            /* Direct 32 bit  */
> > +#define R_386_PC32         2            /* PC relative 32 bit */
> > +
> >  typedef struct {
> >  	Elf64_Addr	r_offset;	/* where to do it */
> >  	Elf64_Xword	r_info;		/* index & type of relocation */
> 
> ... I'm heavily inclined to move this a few lines down to where
> the other relocation types get defined, and add a respective
> comment.

I've placed it together with the other 32-bit ELF relocation structures
and macros, but I see the rest of the relocation defines are a bit
further down, so feel free to move it. For a comment:

/*
 * Relocation definitions required by the rombios hvmloader relocation
 * code.
 */

Does that seem fine?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 16:00:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 16:00:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89411.168384 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEwac-000239-Ko; Wed, 24 Feb 2021 16:00:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89411.168384; Wed, 24 Feb 2021 16:00:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEwac-000232-Hk; Wed, 24 Feb 2021 16:00:38 +0000
Received: by outflank-mailman (input) for mailman id 89411;
 Wed, 24 Feb 2021 16:00:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=F+xl=H2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lEwab-00022x-74
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 16:00:37 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bee6f5cc-e43c-456d-88aa-dab923ecd10f;
 Wed, 24 Feb 2021 16:00:36 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4AFFDAFC1;
 Wed, 24 Feb 2021 16:00:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bee6f5cc-e43c-456d-88aa-dab923ecd10f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614182435; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=8HR035oIw42/hVgYurHuseBmclxN8kLLPkiQCO6LBgg=;
	b=eQDjL/WQtad5E2u6fr+LbW+CPaogaNpj1jeUWPvMbjcC3NYfmWhnbRM2/PmnFDa+zAL7L1
	i/Z0wkslez51+8bue6JPAealU3UdCOBI08l5IC04KISR1vtg1jroSt5/WHyfY9O6JrAXO/
	XsWEVGfny7KabP1su0ndcvSLVJS+lX4=
Subject: Re: [for-4.15][RESEND PATCH v4 2/2] xen/iommu: x86: Clear the root
 page-table before freeing the page-tables
From: Jan Beulich <jbeulich@suse.com>
To: Julien Grall <julien@xen.org>
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>, Paul Durrant <paul@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210224094356.7606-1-julien@xen.org>
 <20210224094356.7606-3-julien@xen.org>
 <c666bf75-451d-fbc1-7fb1-600c4f014f05@suse.com>
Message-ID: <64de5c8f-83ed-23af-b24f-3c8dde50e226@suse.com>
Date: Wed, 24 Feb 2021 17:00:34 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <c666bf75-451d-fbc1-7fb1-600c4f014f05@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 24.02.2021 16:58, Jan Beulich wrote:
> On 24.02.2021 10:43, Julien Grall wrote:
>> --- a/xen/drivers/passthrough/x86/iommu.c
>> +++ b/xen/drivers/passthrough/x86/iommu.c
>> @@ -149,6 +149,13 @@ int arch_iommu_domain_init(struct domain *d)
>>  
>>  void arch_iommu_domain_destroy(struct domain *d)
>>  {
>> +    /*
>> +     * There should be not page-tables left allocated by the time the
>> +     * domain is destroyed. Note that arch_iommu_domain_destroy() is
>> +     * called unconditionally, so pgtables may be unitialized.
>> +     */
>> +    ASSERT(dom_iommu(d)->platform_ops == NULL ||
> 
> Since you've used the preferred ! instead of "== 0" /
> "== NULL" in the ASSERT()s you add further up, may I ask that
> you do so here as well?

Oh, and I meant to provide
Reviewed-by: Jan Beulich <jbeulich@suse.com>
preferably with that cosmetic adjustment (and ideally also with
"uninitialized" in the comment, as I notice only now).

Jan


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 16:10:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 16:10:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89416.168405 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEwkR-00038H-Nw; Wed, 24 Feb 2021 16:10:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89416.168405; Wed, 24 Feb 2021 16:10:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEwkR-00038A-KL; Wed, 24 Feb 2021 16:10:47 +0000
Received: by outflank-mailman (input) for mailman id 89416;
 Wed, 24 Feb 2021 16:10:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Stqq=H2=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lEwkQ-00037z-8b
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 16:10:46 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 80206928-ca3c-4b9d-9c0e-0ad097b89912;
 Wed, 24 Feb 2021 16:10:45 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 99A4464E85;
 Wed, 24 Feb 2021 16:10:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 80206928-ca3c-4b9d-9c0e-0ad097b89912
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1614183044;
	bh=RydWjR9rmx1rTgF2ZPUiw0PUBYV8+Yvncu4LGjo6n5Q=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=EucmxhATepma7+AGRkbW5eATbt8F5hMJjCSUVAwW67d9z5TRF2lSBguOuCn6d1GPq
	 0wJkrNC3tVWoSY+67v7mgf9hWiXMQRthEcx2OmRFilM+u+cDl8DJU3d729/MPAJnBD
	 de/xFjF5Hha5S2fx98b5tf+vzfmtvs2ATTLVgF4exKqugYH64xqCh/fjn9OVk8Kt7e
	 iqFadb6LLzEzfBQpItBOJw0X76XFA/nNUyzCc1WNdvUT9X3P0VN+Bq7OkfpZi/IlQ/
	 CklWyOPNA9C5SkwWTCbD52W7aTxLKv3CMgqU/HJUhCRiEuFS4+SfbFSMrCDtmS8xxx
	 BxP5EVES4YUPg==
Date: Wed, 24 Feb 2021 08:10:42 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: xen-devel@lists.xenproject.org
cc: famzheng@amazon.com, sstabellini@kernel.org, cardoe@cardoe.com, wl@xen.org, 
    Bertrand.Marquis@arm.com, julien@xen.org, andrew.cooper3@citrix.com
Subject: Re: [PATCH 0/2] hvmloader: drop usage of system headers
In-Reply-To: <161416925933.13232.13004550022767558137@c667a6b167f6>
Message-ID: <alpine.DEB.2.21.2102240809270.3234@sstabellini-ThinkPad-T480s>
References: <161416925933.13232.13004550022767558137@c667a6b167f6>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

Hi Roger,

These seem to be genuine breakages:

https://gitlab.com/xen-project/patchew/xen/-/pipelines/260986219

FYI, keep an eye on the patchew gitlab-ci: you should be able to see
the Alpine Linux tests pass once your series is fixed.

Cheers,

Stefano


On Wed, 24 Feb 2021, no-reply@patchew.org wrote:
> Hi,
> 
> Patchew automatically ran gitlab-ci pipeline with this patch (series) applied, but the job failed. Maybe there's a bug in the patches?
> 
> You can find the link to the pipeline near the end of the report below:
> 
> Type: series
> Message-id: 20210224102641.89455-1-roger.pau@citrix.com
> Subject: [PATCH 0/2] hvmloader: drop usage of system headers
> 
> === TEST SCRIPT BEGIN ===
> #!/bin/bash
> sleep 10
> patchew gitlab-pipeline-check -p xen-project/patchew/xen
> === TEST SCRIPT END ===
> 
> warning: redirecting to https://gitlab.com/xen-project/patchew/xen.git/
> warning: redirecting to https://gitlab.com/xen-project/patchew/xen.git/
> From https://gitlab.com/xen-project/patchew/xen
>  * [new tag]               patchew/20210224102641.89455-1-roger.pau@citrix.com -> patchew/20210224102641.89455-1-roger.pau@citrix.com
> Switched to a new branch 'test'
> 471a8f9278 hvmloader: do not include system headers for type declarations
> f826f3a198 hvmloader: use Xen private header for elf structs
> 
> === OUTPUT BEGIN ===
> [2021-02-24 11:27:25] Looking up pipeline...
> [2021-02-24 11:27:27] Found pipeline 260986219:
> 
> https://gitlab.com/xen-project/patchew/xen/-/pipelines/260986219
> 
> [2021-02-24 11:27:27] Waiting for pipeline to finish...
> [2021-02-24 11:42:41] Still waiting...
> [2021-02-24 11:57:46] Still waiting...
> [2021-02-24 12:12:51] Still waiting...
> [2021-02-24 12:20:56] Pipeline failed
> [2021-02-24 12:20:58] Job 'qemu-smoke-x86-64-clang-pvh' in stage 'test' is skipped
> [2021-02-24 12:20:58] Job 'qemu-smoke-x86-64-gcc-pvh' in stage 'test' is skipped
> [2021-02-24 12:20:58] Job 'qemu-smoke-x86-64-clang' in stage 'test' is skipped
> [2021-02-24 12:20:58] Job 'qemu-smoke-x86-64-gcc' in stage 'test' is skipped
> [2021-02-24 12:20:58] Job 'qemu-smoke-arm64-gcc' in stage 'test' is skipped
> [2021-02-24 12:20:58] Job 'qemu-alpine-arm64-gcc' in stage 'test' is skipped
> [2021-02-24 12:20:58] Job 'build-each-commit-gcc' in stage 'test' is skipped
> [2021-02-24 12:20:58] Job 'alpine-3.12-clang-debug' in stage 'build' is failed
> [2021-02-24 12:20:58] Job 'alpine-3.12-clang' in stage 'build' is failed
> [2021-02-24 12:20:58] Job 'alpine-3.12-gcc-debug' in stage 'build' is failed
> [2021-02-24 12:20:58] Job 'alpine-3.12-gcc' in stage 'build' is failed
> === OUTPUT END ===
> 
> Test command exited with code: 1


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 16:17:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 16:17:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89420.168420 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEwqu-0003LK-Fo; Wed, 24 Feb 2021 16:17:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89420.168420; Wed, 24 Feb 2021 16:17:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEwqu-0003LD-Bx; Wed, 24 Feb 2021 16:17:28 +0000
Received: by outflank-mailman (input) for mailman id 89420;
 Wed, 24 Feb 2021 16:17:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f+Xn=H2=ffwll.ch=daniel.vetter@srs-us1.protection.inumbo.net>)
 id 1lEwqs-0003L8-8O
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 16:17:27 +0000
Received: from mail-ot1-x332.google.com (unknown [2607:f8b0:4864:20::332])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c4207b38-3a37-4950-9d44-a8cfd7764962;
 Wed, 24 Feb 2021 16:17:24 +0000 (UTC)
Received: by mail-ot1-x332.google.com with SMTP id s6so2661640otk.4
 for <xen-devel@lists.xenproject.org>; Wed, 24 Feb 2021 08:17:24 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c4207b38-3a37-4950-9d44-a8cfd7764962
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=ffwll.ch; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=WoFEQ1/bfKoFI9xvg8DWul0UZStnDpuh33pXiEHG16U=;
        b=N4/rUSMsYrYBLT2mWYW11Tj9i5sFyWaQXEkIWwmk5faDg80d4hXBplnYGQb9fvPTVg
         hjP97F8wrC7M83DOiTXQ1mubWe8HtNkh9bTaT4tfOOL2gcASsySWCWKbJJpuDagAyl2R
         wJMirrr073YqLP+Rq4+WnrHyHvuoa6PNBGeL0=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=WoFEQ1/bfKoFI9xvg8DWul0UZStnDpuh33pXiEHG16U=;
        b=HOYD93mfJbOnbLERVUPAidGlsSxoyAdQCa8nk1WAwb+74hKmobwwljmUYoGjzohBUP
         otebEJ8lyIouM8vO2lkFL7IMBatmCpu8/4t44gxHCa/AsmCPvhhW5Kg2hb5m9P6sIGrJ
         JREi5f/7HTq+S0NgojXYV+JU9dHg8IKfdYzq8oXag97PhU/D6LpvqViKzvRcdP1T2Fxb
         8YHDrmd2bmkRYFtOivcy5Tqa/YytUdl6GBbx3fJnXEW2En37GGk93qoq0MzJKx13V1mU
         mStpbcp6pQCC4BtCqpCYsCG4WKRrOHy5GwW1gfNMg1dtvP8k9f364Ubd+I3B/yFwpg6c
         FeMw==
X-Gm-Message-State: AOAM532mBVSX7Z4ahPJJBWDrrH2YGSSUb5fbuM43NgsMXiYPrSTmc7qO
	O+yKL6CVy1zCSImBcGrxcG+iwkzZ5S5vDKoeUPOzCA==
X-Google-Smtp-Source: ABdhPJy5Y98h9GUiqrOFjUQF2+PwNwCz5OXAmFFcvCNRB9jriSwRzPjd90bBKiziOEProUm0GiPXnDfVdnnviyzzWx4=
X-Received: by 2002:a9d:2265:: with SMTP id o92mr24849267ota.188.1614183444301;
 Wed, 24 Feb 2021 08:17:24 -0800 (PST)
MIME-Version: 1.0
References: <54ae54f9-1ba9-900b-a56f-f48e2c9a82b0@suse.com> <a9597f1a-39a6-3664-8617-90338e7943d0@epam.com>
In-Reply-To: <a9597f1a-39a6-3664-8617-90338e7943d0@epam.com>
From: Daniel Vetter <daniel@ffwll.ch>
Date: Wed, 24 Feb 2021 17:17:13 +0100
Message-ID: <CAKMK7uGV25ERN0wy1pJvZqvC0QXBh=oQ_RfpRy7+ViQbEdBNPQ@mail.gmail.com>
Subject: Re: [PATCH] drm/xen: adjust Kconfig
To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
Cc: Jan Beulich <jbeulich@suse.com>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	"dri-devel@lists.freedesktop.org" <dri-devel@lists.freedesktop.org>
Content-Type: text/plain; charset="UTF-8"

On Wed, Feb 24, 2021 at 8:55 AM Oleksandr Andrushchenko
<Oleksandr_Andrushchenko@epam.com> wrote:
>
> Hello, Jan!
>
> On 2/23/21 6:41 PM, Jan Beulich wrote:
> > By having selected DRM_XEN, I was assuming I would build the frontend
> > driver. As it turns out this is a dummy option, and I have really not
> > been building this (because I had DRM disabled). Make it a promptless
> > one, moving the "depends on" to the other, real option, and "select"ing
> > the dummy one.
> >
> > Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

Since you're maintainer/committer, I'm assuming you'll also merge
this? It's always confusing when there's an r-b but no indication of
whether the patch will get merged.
-Daniel

> > --- a/drivers/gpu/drm/xen/Kconfig
> > +++ b/drivers/gpu/drm/xen/Kconfig
> > @@ -1,15 +1,11 @@
> >   # SPDX-License-Identifier: GPL-2.0-only
> >   config DRM_XEN
> > -     bool "DRM Support for Xen guest OS"
> > -     depends on XEN
> > -     help
> > -       Choose this option if you want to enable DRM support
> > -       for Xen.
> > +     bool
> >
> >   config DRM_XEN_FRONTEND
> >       tristate "Para-virtualized frontend driver for Xen guest OS"
> > -     depends on DRM_XEN
> > -     depends on DRM
> > +     depends on XEN && DRM
> > +     select DRM_XEN
> >       select DRM_KMS_HELPER
> >       select VIDEOMODE_HELPERS
> >       select XEN_XENBUS_FRONTEND
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
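
For reference, the Kconfig end state implied by Jan's diff above would be the following (reconstructed from the hunk itself, not copied from the tree, so take it as a sketch):

```kconfig
# SPDX-License-Identifier: GPL-2.0-only
config DRM_XEN
	bool

config DRM_XEN_FRONTEND
	tristate "Para-virtualized frontend driver for Xen guest OS"
	depends on XEN && DRM
	select DRM_XEN
	select DRM_KMS_HELPER
	select VIDEOMODE_HELPERS
	select XEN_XENBUS_FRONTEND
```

With DRM_XEN promptless and selected by the real option, enabling DRM_XEN_FRONTEND is sufficient; DRM_XEN can no longer be set on its own without building anything.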



-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 16:39:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 16:39:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89428.168438 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lExC2-0005VG-EL; Wed, 24 Feb 2021 16:39:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89428.168438; Wed, 24 Feb 2021 16:39:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lExC2-0005V9-BN; Wed, 24 Feb 2021 16:39:18 +0000
Received: by outflank-mailman (input) for mailman id 89428;
 Wed, 24 Feb 2021 16:39:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AZg/=H2=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1lExC1-0005V4-HL
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 16:39:17 +0000
Received: from mail-wm1-x32b.google.com (unknown [2a00:1450:4864:20::32b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 26ff7042-90a1-42f5-b87d-db081084cc33;
 Wed, 24 Feb 2021 16:39:16 +0000 (UTC)
Received: by mail-wm1-x32b.google.com with SMTP id l13so2398153wmg.5
 for <xen-devel@lists.xenproject.org>; Wed, 24 Feb 2021 08:39:16 -0800 (PST)
Received: from ?IPv6:2a00:23c5:5785:9a01:5df2:fdab:9690:bbff?
 ([2a00:23c5:5785:9a01:5df2:fdab:9690:bbff])
 by smtp.gmail.com with ESMTPSA id c11sm4217607wrs.28.2021.02.24.08.39.15
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 24 Feb 2021 08:39:15 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 26ff7042-90a1-42f5-b87d-db081084cc33
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:subject:to:cc:references:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=8/7WIkygkwXpMQwOKTWFMy/O50s2dwSmAwW3okTy7SU=;
        b=LOrFmN5sWtCCD85DZj9Q7wVk/yGY65GzHtnsqHpOkM6ZRLAPyp9GiMmMRecuCsGzyY
         rZT6u09IFlRc2LPrgttIEBF71ScZbL+aYBybAf6BDdsZD9Wb77qKYcdKp+mvNelq1L0j
         YvjulcryYuSNcXyIq2c4JbxeurukJd1xrxRKTkT+T3gyf/WQcunICYU9CDVUGxa1XtL9
         WuBBZ8ZasSzWQJag/ToTMCenroT0VAU1c82/HhiwFeyCF+2fZ/t7wJew122TkB5rYgWb
         V/2/CvUOyXv1dRprJXYPFK4LBLRpQwTbDleq3GsH6o2konedqnN8xpGdeEDfVrfzF3bH
         Xx4Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:subject:to:cc:references
         :message-id:date:user-agent:mime-version:in-reply-to
         :content-language:content-transfer-encoding;
        bh=8/7WIkygkwXpMQwOKTWFMy/O50s2dwSmAwW3okTy7SU=;
        b=Lfa1e1RgvegwslPHT7bDGM1ORc0q5BbkgkR8Obv+a1evzfwNq345yR8/QEbTPIbc4L
         p5qBCDAFpMkDDAmi3s7F0fUP0lWhk44uU+IyUO86Qw/FDm4P/lL26AY4NR4y1upvulch
         1D/1vAkseiqrrngCLpoMvFQ/6j9BqHHgJJo7dP+NKpkItyC6GnCSvQVSbxr2BnNedNqZ
         7A+tnjmdKBu4aqISvBXBP8gQJG5Sgvjo0pBjBUMSFXswPl1uC9xv2rLi/6OtYA9GPRVr
         BQk7YlIyxt+AFmFRZ+SoX4KXYZNShanTJO6hXHZcrBCGEJX/d+Dq/M7HNe3s4SFtYHVt
         WfoQ==
X-Gm-Message-State: AOAM530mcAYZbAFQB0pfKxrbHygvJimvIOAnizg6hluklR5kurxIiOPj
	xUNUnevWbgKQa9b4F6NbY34=
X-Google-Smtp-Source: ABdhPJxC2n4EKm35CLzWvdVGI9ph2B7GSWM9XEQYrail0xCcUGf0ffT5AqPxohOw+P230kc/FiHCoQ==
X-Received: by 2002:a05:600c:2184:: with SMTP id e4mr4435865wme.107.1614184755828;
        Wed, 24 Feb 2021 08:39:15 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: Paul Durrant <paul@xen.org>
Reply-To: paul@xen.org
Subject: Re: [PATCH] xen-netback: correct success/error reporting for the
 SKB-with-fraglist case
To: Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "netdev@vger.kernel.org" <netdev@vger.kernel.org>
References: <4dd5b8ec-a255-7ab1-6dbf-52705acd6d62@suse.com>
Message-ID: <67bc0728-761b-c3dd-bdd5-1a850ff79fbb@xen.org>
Date: Wed, 24 Feb 2021 16:39:14 +0000
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <4dd5b8ec-a255-7ab1-6dbf-52705acd6d62@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 23/02/2021 16:29, Jan Beulich wrote:
> When re-entering the main loop of xenvif_tx_check_gop() a 2nd time, the
> special considerations for the head of the SKB no longer apply. Don't
> mistakenly report ERROR to the frontend for the first entry in the list,
> even if - from all I can tell - this shouldn't matter much as the overall
> transmit will need to be considered failed anyway.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/drivers/net/xen-netback/netback.c
> +++ b/drivers/net/xen-netback/netback.c
> @@ -499,7 +499,7 @@ check_frags:
>   				 * the header's copy failed, and they are
>   				 * sharing a slot, send an error
>   				 */
> -				if (i == 0 && sharedslot)
> +				if (i == 0 && !first_shinfo && sharedslot)
>   					xenvif_idx_release(queue, pending_idx,
>   							   XEN_NETIF_RSP_ERROR);
>   				else
> 

I think this will DTRT, but to my mind it would make more sense to clear 
'sharedslot' before the 'goto check_frags' at the bottom of the function.

   Paul
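
To make the two alternatives concrete, here is a standalone model of the xenvif_tx_check_gop() re-entry flow being discussed. It is a sketch of the control flow only, not the real driver code: `model_check_gop`, `responses`, and `first_pass` are illustrative names invented here, while `sharedslot`, `check_frags`, and the `first_shinfo` idea mirror the patch.

```c
#include <stdbool.h>

/* 'sharedslot' applies only to the SKB head, so it must not affect the
 * second pass over the fraglist. Jan's fix adds a first-pass check to
 * the error condition; Paul's alternative clears the flag before the
 * goto instead. */
static int responses[8];

static void model_check_gop(bool sharedslot, int nr_frags, int nr_fraglist)
{
    bool first_pass = true;   /* models 'first_shinfo == NULL' */
    int slot = 0, i, n = nr_frags;

check_frags:
    for (i = 0; i < n; i++) {
        /* With the fix, entry 0 only gets ERROR on the first pass. */
        if (i == 0 && first_pass && sharedslot)
            responses[slot++] = -1;   /* XEN_NETIF_RSP_ERROR */
        else
            responses[slot++] = 0;    /* XEN_NETIF_RSP_OKAY */
    }

    if (first_pass && nr_fraglist) {
        first_pass = false;
        n = nr_fraglist;
        /* Paul's alternative: 'sharedslot = false;' here would have the
         * same effect without the extra condition above. */
        goto check_frags;
    }
}
```

Either variant keeps the first fraglist entry from being mistakenly reported as ERROR when the loop is re-entered.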




From xen-devel-bounces@lists.xenproject.org Wed Feb 24 16:46:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 16:46:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89439.168460 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lExIW-0006PR-9c; Wed, 24 Feb 2021 16:46:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89439.168460; Wed, 24 Feb 2021 16:46:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lExIW-0006PK-5K; Wed, 24 Feb 2021 16:46:00 +0000
Received: by outflank-mailman (input) for mailman id 89439;
 Wed, 24 Feb 2021 16:45:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uJIr=H2=qubes-os.org=frederic.pierret@srs-us1.protection.inumbo.net>)
 id 1lExIU-0006PF-Ev
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 16:45:58 +0000
Received: from sender4-of-o53.zoho.com (unknown [136.143.188.53])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e2b47e9f-6b30-4f7c-83cc-f27fbf2b3e88;
 Wed, 24 Feb 2021 16:45:57 +0000 (UTC)
Received: from [10.137.0.21] (92.188.110.153 [92.188.110.153]) by
 mx.zohomail.com with SMTPS id 16141851508431001.740919718517;
 Wed, 24 Feb 2021 08:45:50 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e2b47e9f-6b30-4f7c-83cc-f27fbf2b3e88
ARC-Seal: i=1; a=rsa-sha256; t=1614185152; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=iEdYVXzb+3X81H6Jy1PbvV3ftMsCCULRcPojEtrB6FqlcmGGRy0o+aOu29Me5bSq8SnAtZTpiMBoTaRaG9yafGlI9JrgHd+0v14pvIaiFAH5yjalCXUPm6tvKhVmLScztAWdY3NqWSFxbCd2wvjm0TFJcFCExsFTjAoNFId4hfE=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1614185152; h=Content-Type:Date:From:MIME-Version:Message-ID:Subject:To; 
	bh=3aO4vYChz/tBuPBqKRr8E+jZ80t6iAFdBIR/Tj2Xje8=; 
	b=GgBocbMZGVuVJOE6dI0lEsbTRKEE4hR9GL9adZY1JMPdBQAyh/d4kgTXJwv8KjSS2KRYefxM3y7iLTzrZJ2plbiN6V0vja1BRkU0HV/8qwzWNPFxPBRPJpVBI8ARJmlZxTBEAdqCaCH3IF4ndvp+GX9CRM+1dCt5XGCjTbocy9Y=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=qubes-os.org;
	spf=pass  smtp.mailfrom=frederic.pierret@qubes-os.org;
	dmarc=pass header.from=<frederic.pierret@qubes-os.org> header.from=<frederic.pierret@qubes-os.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1614185152;
	s=s; d=qubes-os.org; i=frederic.pierret@qubes-os.org;
	h=To:From:Subject:Message-ID:Date:MIME-Version:Content-Type;
	bh=3aO4vYChz/tBuPBqKRr8E+jZ80t6iAFdBIR/Tj2Xje8=;
	b=gWNJ5Np/Oe1Zo7jX+rAVItScGC/NmXWjabGm+c2W9d7Lfumz64FXAUa29Qa1103t
	syvzvIjZ+yo+UJ3i+VwE/eY2v7yM1HKLxaIpRytAwWIpT4ReqAsBa0qnxsQ01mT9/rc
	sofbrjxMA6cvEHMIjnkEUS+dS+/4xC4B2Sydy8SM=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
From: =?UTF-8?B?RnLDqWTDqXJpYyBQaWVycmV0?= <frederic.pierret@qubes-os.org>
Subject: io-apic issue on Ryzen 1800X
Message-ID: <ee7670ff-b140-a518-2094-df0e3e5f2575@qubes-os.org>
Date: Wed, 24 Feb 2021 17:45:47 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="AJupDhSlSMsYN3rOx4fvcA6t7S8yeZkXk"
X-Zoho-Virus-Status: 1
X-ZohoMailClient: External

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--AJupDhSlSMsYN3rOx4fvcA6t7S8yeZkXk
Content-Type: multipart/mixed; boundary="YW6kU6UFNsdYsJFX3VFue4w0OY3J4K7PP";
 protected-headers="v1"
From: =?UTF-8?B?RnLDqWTDqXJpYyBQaWVycmV0?= <frederic.pierret@qubes-os.org>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Message-ID: <ee7670ff-b140-a518-2094-df0e3e5f2575@qubes-os.org>
Subject: io-apic issue on Ryzen 1800X

--YW6kU6UFNsdYsJFX3VFue4w0OY3J4K7PP
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable

Hi,
Just to inform you that after talking with Andrew, we identified that the
latest workaround for recent Intel laptops (this series of patches:
https://github.com/QubesOS/qubes-vmm-xen/commit/075e6b1) is failing on my
system, which has an AMD Ryzen 1800X CPU and an Asrock X370 Gaming K4
motherboard.

Related Qubes issue tracking this:
https://github.com/QubesOS/qubes-issues/issues/6423

Best regards,
Frédéric


--YW6kU6UFNsdYsJFX3VFue4w0OY3J4K7PP--

--AJupDhSlSMsYN3rOx4fvcA6t7S8yeZkXk
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEn6ZLkvlecGvyjiymSEAQtc3FduIFAmA2kMsACgkQSEAQtc3F
duLwWxAAvg5be62m0lvL7MNv9MPMGo5iKLIgt6VivOMGvU9j09AywRr6deBuiVLl
u8VffSElO6OKjEvhN4Wjzw5oqPNk8Z8iy+m/lDcemCKjBFGy56uR6V5DBmkkDuPg
cCt9mQ3bz93NzHqjhNrD1KSeALlBIfgmC3u229BhasHKuq/jRyVP7UoSloqFU+8r
JausEqMze5//uhk3BsqgIFrHRFvssuoK9ghA7DyP4yPohL4wnBDbYlWqszz+2Y1C
MEfhi8UyB5Kg95qTkW5iGeTGe6Xe7NDZmOiyQIvq1n/GBE+hfPWDTlSQXJSZ+omf
SugLo5q2pNvTN4H2reFN1yM0WI/WC0zuuIZ50Q2r/WZRXNDBK/ET4VJfI+8sBZpD
p7slP0Q960LVOQ5b0ZiJnjaX1XeGthVXlwumr6Li326Y/Pt+F6pqQWsk+Iw0yEdA
xYQdot+gQ4vF19n/OAFF6rMf89H4HTRjP+l2JwCgMgyHSuRM6SbxeiDz4HW6JNJK
+Tz7t+ZqxgiHezNunsRjA8TfMa73Uzj/WQo+9J5+htfT2tj3sqJ3ZPoIqCKbL5ci
UtlqqVkmkLOSMH/RuOypg9GNDVd5J5r+am4GH3hwYtsPpQPrumj+w8afag9B45QT
6wmdpasZPomKpZDulDU/mk7FRnFznm7zs4PPVCfh+1B+ilBRHtk=
=MkC9
-----END PGP SIGNATURE-----

--AJupDhSlSMsYN3rOx4fvcA6t7S8yeZkXk--


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 16:48:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 16:48:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89442.168471 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lExL5-0006Yn-MH; Wed, 24 Feb 2021 16:48:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89442.168471; Wed, 24 Feb 2021 16:48:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lExL5-0006Yg-JO; Wed, 24 Feb 2021 16:48:39 +0000
Received: by outflank-mailman (input) for mailman id 89442;
 Wed, 24 Feb 2021 16:48:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lExL4-0006YX-7K; Wed, 24 Feb 2021 16:48:38 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lExL3-0001y9-Vz; Wed, 24 Feb 2021 16:48:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lExL3-0003hM-Pb; Wed, 24 Feb 2021 16:48:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lExL3-0003RM-P8; Wed, 24 Feb 2021 16:48:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=MueLEIlKg/cESsUxTR/Z4XyeRP+lIpU4YRSTDEnXn+4=; b=mBA2N1iOgHew0E+dGAe9VIuLyr
	J9lG0O04bU2Zd+jY/HbnG9bijBQIDaCzGXwIfpRkQGlZzdHkmY5viW3062D2AiQ+T1JwIHHw4pnUQ
	PKdNMeVtazBNTJhJ7FrvhGGELYUDES6MTrxJ9+5aMNTwaCdWRV5UWhH+YWJq/W7l1InY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159619-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 159619: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=739a506b18c4f694b8d5d2f3424a329c45d737ba
X-Osstest-Versions-That:
    ovmf=68e5ecc4d208fe900530fdbe6b44dfc85bef60a8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 24 Feb 2021 16:48:37 +0000

flight 159619 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159619/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 739a506b18c4f694b8d5d2f3424a329c45d737ba
baseline version:
 ovmf                 68e5ecc4d208fe900530fdbe6b44dfc85bef60a8

Last test of basis   159598  2021-02-23 19:09:45 Z    0 days
Testing same since   159619  2021-02-24 08:09:48 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Leif Lindholm <leif@nuviainc.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   68e5ecc4d2..739a506b18  739a506b18c4f694b8d5d2f3424a329c45d737ba -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 16:57:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 16:57:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89447.168487 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lExTU-0007bp-Jf; Wed, 24 Feb 2021 16:57:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89447.168487; Wed, 24 Feb 2021 16:57:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lExTU-0007bi-Fo; Wed, 24 Feb 2021 16:57:20 +0000
Received: by outflank-mailman (input) for mailman id 89447;
 Wed, 24 Feb 2021 16:57:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lExTT-0007ba-7e; Wed, 24 Feb 2021 16:57:19 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lExTT-000277-1o; Wed, 24 Feb 2021 16:57:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lExTS-0004HK-Ru; Wed, 24 Feb 2021 16:57:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lExTS-0000zN-RP; Wed, 24 Feb 2021 16:57:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=swkNP52MfO6Gs0FiPaMtE6zGQH5T6dzXpGtBi6gX7BE=; b=iXzTLMymc72hW6+UAjLbQEZPTI
	gsGeKl9a2j/J6ba7aTwn+Ki0b5KicMmCsZwkwW2cr33ytohm7/USDImmIFQI0FauhwBhBK9+V/2xv
	i8AOvv0Cb03fztaVMvsIsZpCvg+EjCOsgkv8K9RqB56V5UTfHoyw/WnYDB8lzeO8c/LY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [xen-unstable-smoke bisection] complete build-amd64
Message-Id: <E1lExTS-0000zN-RP@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 24 Feb 2021 16:57:18 +0000

branch xen-unstable-smoke
xenbranch xen-unstable-smoke
job build-amd64
testid xen-build

Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  81b2b328a26c1b89c275898d12e8ab26c0673dad
  Bug not present: 10bb8aa0d5d029bd56da4a2a92e1e42bef880210
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/159638/


  commit 81b2b328a26c1b89c275898d12e8ab26c0673dad
  Author: Roger Pau Monné <roger.pau@citrix.com>
  Date:   Wed Feb 24 12:48:13 2021 +0100
  
      hvmloader: use Xen private header for elf structs
      
      Do not use the system provided elf.h, and instead use elfstructs.h
      from libelf.
      
      Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
      Acked-by: Jan Beulich <jbeulich@suse.com>
      Reviewed-by: Ian Jackson <iwj@xenproject.org>
      Release-Acked-by: Ian Jackson <iwj@xenproject.org>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable-smoke/build-amd64.xen-build.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-unstable-smoke/build-amd64.xen-build --summary-out=tmp/159638.bisection-summary --basis-template=159600 --blessings=real,real-bisect,real-retry xen-unstable-smoke build-amd64 xen-build
Searching for failure / basis pass:
 159624 fail [host=himrod2] / 159600 ok.
Failure / basis pass flights: 159624 / 159600
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 81b2b328a26c1b89c275898d12e8ab26c0673dad
Basis pass 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 5d94433a66df29ce314696a13bdd324ec0e342fe
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://xenbits.xen.org/qemu-xen.git#7ea428895af2840d85c524f0bd11a38aac308308-7ea428895af2840d85c524f0bd11a38aac308308 git://xenbits.xen.org/xen.git#5d94433a66df29ce314696a13bdd324ec0e342fe-81b2b328a26c1b89c275898d12e8ab26c0673dad
Loaded 5001 nodes in revision graph
Searching for test results:
 159600 pass 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 5d94433a66df29ce314696a13bdd324ec0e342fe
 159624 fail 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 81b2b328a26c1b89c275898d12e8ab26c0673dad
 159627 pass 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 5d94433a66df29ce314696a13bdd324ec0e342fe
 159630 fail 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 81b2b328a26c1b89c275898d12e8ab26c0673dad
 159631 pass 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 10bb8aa0d5d029bd56da4a2a92e1e42bef880210
 159632 fail 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 81b2b328a26c1b89c275898d12e8ab26c0673dad
 159634 pass 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 10bb8aa0d5d029bd56da4a2a92e1e42bef880210
 159636 fail 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 81b2b328a26c1b89c275898d12e8ab26c0673dad
 159637 pass 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 10bb8aa0d5d029bd56da4a2a92e1e42bef880210
 159638 fail 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 81b2b328a26c1b89c275898d12e8ab26c0673dad
Searching for interesting versions
 Result found: flight 159600 (pass), for basis pass
 For basis failure, parent search stopping at 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 10bb8aa0d5d029bd56da4a2a92e1e42bef880210, results HASH(0x55aef3ccb2a8) HASH(0x55aef3cc4f68) HASH(0x55aef3cceb38) For basis failure, parent search stopping at 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 5d94433a66df29ce314696a13bdd324ec0e342fe, results HASH(0x55aef3cc24e0) HASH(0x55aef3cc2ae0) Result found: flight 159624 (fail), for \
 basis failure (at ancestor ~79)
 Repro found: flight 159627 (pass), for basis pass
 Repro found: flight 159630 (fail), for basis failure
 0 revisions at 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 10bb8aa0d5d029bd56da4a2a92e1e42bef880210
No revisions left to test, checking graph state.
 Result found: flight 159631 (pass), for last pass
 Result found: flight 159632 (fail), for first failure
 Repro found: flight 159634 (pass), for last pass
 Repro found: flight 159636 (fail), for first failure
 Repro found: flight 159637 (pass), for last pass
 Repro found: flight 159638 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  81b2b328a26c1b89c275898d12e8ab26c0673dad
  Bug not present: 10bb8aa0d5d029bd56da4a2a92e1e42bef880210
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/159638/


  commit 81b2b328a26c1b89c275898d12e8ab26c0673dad
  Author: Roger Pau Monné <roger.pau@citrix.com>
  Date:   Wed Feb 24 12:48:13 2021 +0100
  
      hvmloader: use Xen private header for elf structs
      
      Do not use the system provided elf.h, and instead use elfstructs.h
      from libelf.
      
      Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
      Acked-by: Jan Beulich <jbeulich@suse.com>
      Reviewed-by: Ian Jackson <iwj@xenproject.org>
      Release-Acked-by: Ian Jackson <iwj@xenproject.org>

Revision graph left in /home/logs/results/bisect/xen-unstable-smoke/build-amd64.xen-build.{dot,ps,png,html,svg}.
----------------------------------------
159638: tolerable ALL FAIL

flight 159638 xen-unstable-smoke real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/159638/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 build-amd64                   6 xen-build               fail baseline untested


jobs:
 build-amd64                                                  fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Wed Feb 24 17:17:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 17:17:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89454.168502 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lExnI-0001D9-C6; Wed, 24 Feb 2021 17:17:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89454.168502; Wed, 24 Feb 2021 17:17:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lExnI-0001D2-97; Wed, 24 Feb 2021 17:17:48 +0000
Received: by outflank-mailman (input) for mailman id 89454;
 Wed, 24 Feb 2021 17:17:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mT50=H2=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lExnH-0001Cx-08
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 17:17:47 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 50c91e98-ec7d-4daa-9627-2bac5b37c343;
 Wed, 24 Feb 2021 17:17:45 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 50c91e98-ec7d-4daa-9627-2bac5b37c343
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614187065;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=Bi6/WIwFlM2dqentlGOGq4wIu4dVsJLaiWfe7NRehqY=;
  b=FmNmNOpwV0M52g7y7OksvET6RFwO+7jqQD95ZIH7SnbJZgNSX3e+8SJA
   wHNL0TlYwnecIhxK1RNxUE4x0nH34s9uHSEHN4+vP9Jj2RgjErLY2fnag
   4Wb1xgStRd0Pfxjep/627BxxaZ5DGBule/mQ8IHY9V2gv04bz+R5U6n0R
   0=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: 3DrsMFT3EncQfCKnrP4NXFbiKnzgfoJNu3K4zqLl5DB4KIEwgULSG6VKW6QDpTVwne/yuJvvDS
 P/3+QAo9iQcnfpycNnM3RPWiASxeWZqHNsi7XAYzOGzoqZ4+JgyDKj6xbstt0CsTdeh9IgaX03
 xb+gTW7FZCi3+goeNS/9pz2X6okn/6EILB1SrK9Jr74BufzOPIDOqqRyy0CpQQr6qyVDe/ZGqj
 nTENxzWu4Eex8T9XF63DAJnUqlWdD9prlCKAD5rlzzgMSg5vzGrmJpmk+HWP3xLBudQOUEiVPl
 W1g=
X-SBRS: 5.2
X-MesageID: 38312626
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,203,1610427600"; 
   d="scan'208";a="38312626"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Y+/m3fhtZwah0ewvkzgDRzNcOeo8R/pa0CFwReu6fCBoYv/Vo7qLC9EqoiuLMq2PDdtKSEGRfgWnTYEKDfi7+cg0dk4eoeveE1OFqZBpnIFuuEdeCgXbnbfOzW0mYTtKHwlIptdJQUpOlUBMqVx/73QHNsN+nf56tFkSoBdl1MOyVypv20ePIZ2ccofvdfa3b+uhJ1eYVwdpRDvxcZBSKvCDnK8TisqKwitNnDsFb+gtMRdGrCYNYAsLKhfR+TPhbdcNpXAkkZdBxOXZvrrv76wq5METg3yiPQ6HtStxMh3TDYujOb2xBTRi0P6UHRB759DwkXRGYbubUPqQlhUvWQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ACRe1/lP/czR45NXkhQIJMh21DLYFMN5IrpyGguX+ck=;
 b=N2EsGR1ivCwFjr0bVh26dLy33LdefT5rhjKH3vQfcepyU/CBYus047gFTgGt0uDDFWrx6UVpmutexln8c1yWixN4iW7IzrLVH7QPg9yABlZpfWBCrPOB5ZZ1r4fi8L4NUDXPw8Qn4o1Y9UaxmmD89geI237tD1Ysak6f1wDHXBBJEQuNzzzzFl8DcgvdC/cTJZIIEsKjF4LBTM6BTidCfpLKMzRnp8m9qxfs4Op33fYMjoCwJ+4NzLqfpYouHTDT7MK1EqkrvSA4gSoOsx44SRMvMffto4ip/bCHxiKKlKF0U3OzDnvF9XuZrUI53pblumpQfaClgx7G0uFKOzlK9A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ACRe1/lP/czR45NXkhQIJMh21DLYFMN5IrpyGguX+ck=;
 b=Is8x3gOWvC+aR3BBLQMOKhAQMcKs7LmM2bT6y+RIwDmRt6Tf58Y/EDZabMoIQla0UmXXNo3BMnIcm7M53FOgpYg6IpuDrEFouCusB9TFn1gaEqhR3zVEAkGOHUHa0FvuCfDUzcK0IzkgLhU0FhnycZ2D+vtQQt4bwWmf6vVQtZk=
Subject: Re: [PATCH] x86/EFI: suppress GNU ld 2.36'es creation of base relocs
To: Jan Beulich <jbeulich@suse.com>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
References: <6ce5b1a7-d7c2-c30c-ad78-233379ea130b@suse.com>
 <53c7a708-1664-0186-1fd6-1056f8e7839c@citrix.com>
 <f8e56c90-f51c-01f7-0987-4c0697a17bb0@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <a35dd0b7-b804-9c75-b93c-e764345df46b@citrix.com>
Date: Wed, 24 Feb 2021 17:17:34 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
In-Reply-To: <f8e56c90-f51c-01f7-0987-4c0697a17bb0@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0123.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:9f::15) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 42bd2478-b8e7-4317-2b7c-08d8d8e817b6
X-MS-TrafficTypeDiagnostic: BYAPR03MB4549:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB454944243182E99D4DA1E8ECBA9F9@BYAPR03MB4549.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: +tljV57t7OJzBVUWSMYuOB/YiUGY8yGctoOKef25oAvducfwQTbXVql3lcf8gYMjEyguAXVmmraxcdJrfqprhGu+HJ45RuO3+tO3jQVPKWwwWqQ/u1coIXu1RMwKStys937I+eBFvElLZl/OO+O8eWZXtp0TpBI5xNcE7n4tQjcZvfzDG1st+xnB1bu33zR7o6GPvbg3LpzEBYMDfRafkJRhp/Tb2J2nt5lNZgMW/88/dXf5+KEEkcLHYevN6BQtB9rl6/lAkPT56MhUv2nFa5vf0QpZIitoXFwXi6ly7SR7wWD9zlykFJ03728mTHOdJq9N2VT/vz5wx1Ivd7kTlr6Mw7+GuKXuGiHKNUJogkey3MM/6k5jw1jHiD3TPU2IgMbLhg53AI9hsOWz9kfD1JnO/q/zxsfoEpZBjfSx0rs/PD3fhJu2owtvAVfVcSEtgj1DJCnkLI4ssustnlZ3Ud5+9fbBaU08b9Yu9lQ2oW/VsipYj8HDOpXf+jKfDDvNVmJIxjVMZEaEr21Orh5c8rFLn06KIhBO9WAzZP7/R5eBEbCfOFZq6xrJwtPWwHbaUKO8+y6ErgFjxDD5ah8VZCZE+8WGEavi3A4Vb4bdbAM=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(346002)(39860400002)(376002)(366004)(136003)(396003)(83380400001)(478600001)(54906003)(6916009)(5660300002)(31686004)(6666004)(956004)(2616005)(36756003)(2906002)(316002)(16576012)(31696002)(8676002)(186003)(26005)(66946007)(66476007)(86362001)(6486002)(66556008)(16526019)(53546011)(8936002)(4326008)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?RGdUYkVNUUlFV1FnTnE5Rk92amhqbk03QmJlZUZWMVhpdUVBUTZuelMxa2Ex?=
 =?utf-8?B?dnEwQnFIM1NuUlRaaTNab2c4VkdzR3dGTzBJYW9DbmQvZU9OTS9sTVJaRVVm?=
 =?utf-8?B?ejNCSk13WlBrdkd4UGY5WEdWOEwwQkZBd2dPOWsyeHBhN3FrU1ZSNWFEL3Y4?=
 =?utf-8?B?eE5aOEpJYnNWamxBSUVQWlkzdWI2Ymx1ajBKbDhjN2laa1BuMG1Ud1M2L1kx?=
 =?utf-8?B?Q2Z1YWc4NngxNGVremxsclk0R2d5YWdGdVY1ZkxvejZHZmVLaUdnTHdXUlc5?=
 =?utf-8?B?VG1YU1hXSDd4dmJwZTBhSi9PMDdOd2MyWGhaSll4VmJURS94Mmx2a1BzMlBx?=
 =?utf-8?B?UER4Yy9nRzFtTThPRUNWNE4rejBqaUFPTFJ2S2RVRHdJaGxIa0R1dzRnUkQ1?=
 =?utf-8?B?enV0MEwyTmQvZG9TTys0MEk3VDBpeDBucmo4QkNkdlNlUGt5NkFQUWdKbW5S?=
 =?utf-8?B?dzFmK0dSU0kyU21SR21EeWJQbzZaUTJBUGdSdkVMQVZDK2VOaUhFU1FoSWlS?=
 =?utf-8?B?ekpxM3pJNmRGcFA3MUd5K0E3bTV5VmNUSEZwMWVUSXlvNXRpUVBtVEt3NkJr?=
 =?utf-8?B?dWlxczJORG80WlBQaWZ4d1hVcTZLdmpEZHp2cXJOV0NJVkRhRnFTUkZ4UEJp?=
 =?utf-8?B?bkd1eGlrdVprUnpTOVlOaWVUNStpQktxTmVIMUpGTVE1Zm1oK292Ky85TUh4?=
 =?utf-8?B?RjNsMGxNb3BiaDdseFlpZi8rR0VqblhPQUhtVDVUODlJaTZhVjdGSkRmRWtX?=
 =?utf-8?B?TjJvd1BCOFdlRGdDTXJEbnpQSFR5SlJmOVNCdHVpOWpVRUloTVhrNS8zVmlj?=
 =?utf-8?B?dWNyUnRoNFd3eWdKZXN4MDZZQytTaVE3VjZzMkRWTDhBbWNQMUVlWmN6RDJl?=
 =?utf-8?B?M1EzZklJbjdoUS9SQkFjbS9EZnpLbGN4WlE2alhJSGZ6dFdmVVlGaFRRR3hn?=
 =?utf-8?B?dVdGT0IyR2NsWWFBY0NBMlhwSzBGNU8wVC9mQ1BMZUxDcG5vbTUwTmhFOGp0?=
 =?utf-8?B?NVozc3lkRHBvcFJLTUxWcmMveTgwWE9CNVc1TmdjeUoyN21uMURKcUVqSkNE?=
 =?utf-8?B?akxDQ1gybVBuN3FwNitIZWFGeUUzK2l4Y0NzTVZsbWZTTG4vSTk1VVVwQUxC?=
 =?utf-8?B?VDAvcThzUll5c3ZITUd1WGJNMHJobVVGQlVWek5WRjB1czlnTlNsU0E4bTB3?=
 =?utf-8?B?QW1UK0g5RHBEc0xSU3RhZ2lXVENBRmFGSXh0WHlEdFBBSkx4ejg1ZG1KM3BO?=
 =?utf-8?B?dmNWMy9FRytRMk1sUEZ1ekxIcmUrdktoL3k1Mi9PUjFvK2Q0c1gwdXBQSjBD?=
 =?utf-8?B?anVaYUVLdTVITGkxb0orOU5DZTczNWpKUExQakZ0R0dhVjIvbTYrdTVVQUl0?=
 =?utf-8?B?TkdYdHB5Z3pxa0Z4MGVMYzNnREVvSmErZ2svMXl0bDhDTzR2djgzcnFXeWx4?=
 =?utf-8?B?dGpyeUJFMzcwemNFNUE4NTZJL2Z4cWdiTFU0U3h1aGdOcnRlalJyZU9CZ01P?=
 =?utf-8?B?MjR0T2FMRCtUTzh0STJHblkzcU1WNHNqeXpFM05JaUVqNXRqQnYxWFlUQXE0?=
 =?utf-8?B?T1JPRkNOYUExZEl1NGlTOHphQUU2TVZDR2ZHS0ZiOFNWZncvRjhpQlc0TEdS?=
 =?utf-8?B?V0MrQ3diTFNINGsrZlZoQ3ZTSk5qV1p4bjBFSEFDUExyd1BrWjJIV2FkRzZj?=
 =?utf-8?B?WEhUazVtWVV4Y3dzR1VUR25Pb3NVWnZkNDNKMzlLMUlDWC9wL2FyZmhhWm42?=
 =?utf-8?Q?46K1bXYakmO1JvwhHUgETADLQouaPBtTQ7GfHIh?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 42bd2478-b8e7-4317-2b7c-08d8d8e817b6
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Feb 2021 17:17:41.7960
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: QjnAoKTyT8zmqz8mesCsNXK2EkAax4hcXUcgryAobQ6mzO1+HwPrMEmhWaAzqkgRU0oR3hGlMyHe3Q/wUEiIAF7y1iF8jx75CU5QTiSg/iw=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4549
X-OriginatorOrg: citrix.com

On 23/02/2021 07:53, Jan Beulich wrote:
> On 22.02.2021 17:36, Andrew Cooper wrote:
>> On 19/02/2021 08:09, Jan Beulich wrote:
>>> --- a/xen/arch/x86/Makefile
>>> +++ b/xen/arch/x86/Makefile
>>> @@ -123,8 +123,13 @@ ifneq ($(efi-y),)
>>>  # Check if the compiler supports the MS ABI.
>>>  export XEN_BUILD_EFI := $(shell $(CC) $(XEN_CFLAGS) -c efi/check.c -o efi/check.o 2>/dev/null && echo y)
>>>  # Check if the linker supports PE.
>>> -XEN_BUILD_PE := $(if $(XEN_BUILD_EFI),$(shell $(LD) -mi386pep --subsystem=10 -S -o efi/check.efi efi/check.o 2>/dev/null && echo y))
>>> +EFI_LDFLAGS = $(patsubst -m%,-mi386pep,$(XEN_LDFLAGS)) --subsystem=10 --strip-debug
>>> +XEN_BUILD_PE := $(if $(XEN_BUILD_EFI),$(shell $(LD) $(EFI_LDFLAGS) -o efi/check.efi efi/check.o 2>/dev/null && echo y))
>>>  CFLAGS-$(XEN_BUILD_EFI) += -DXEN_BUILD_EFI
>>> +# Check if the linker produces fixups in PE by default (we need to disable it doing so for now).
>>> +XEN_NO_PE_FIXUPS := $(if $(XEN_BUILD_EFI), \
>>> +                         $(shell $(LD) $(EFI_LDFLAGS) --disable-reloc-section -o efi/check.efi efi/check.o 2>/dev/null && \
>>> +                                 echo --disable-reloc-section))
>> Why does --strip-debug move?
> -S and --strip-debug are the same. I'm simply accumulating in
> EFI_LDFLAGS all that's needed for the use in the probing construct.

Oh ok.

It occurs to me that EFI_LDFLAGS now only gets started in an ifneq
block, but appended to later on while unprotected.  That said, I'm
fairly sure it is only consumed inside a different ifeq section, so I
think there is a reasonable quantity of tidying which ought to be done here.

> Also I meanwhile have a patch to retain debug info, for which this
> movement turns out to be a prereq. (I've yet to test that the
> produced binary actually works, and what's more I first need to get
> a couple of changes accepted into binutils for the linker to actually
> cope.)
>
>> What's wrong with $(call ld-option ...)?  Actually, lots of this block
>> of code looks to be opencoding of standard constructs.
> It looks like ld-option could indeed be used here (there are marginal
> differences which are likely acceptable), despite its brief comment
> talking of just "flag" (singular, plus not really covering e.g. input
> files).
>
> But:
> - It working differently than cc-option makes it inconsistent to
>   use (the setting of XEN_BUILD_EFI can't very well be switched to
>   use cc-option); because of this I'm not surprised that we have
>   only exactly one use right now in the tree.
> - While XEN_BUILD_PE wants to be set to "y", for XEN_NO_PE_FIXUPS
>   another transformation would then be necessary to translate "y"
>   into "--disable-reloc-section".
> - Do you really suggest to re-do this at this point in the release
>   cycle?

I'm looking to prevent this almost-incomprehensible mess from getting worse.

But I suppose you want this to backport, so I suppose it ought to be
minimally invasive.

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

This logic all actually needs moving into Kconfig so we can also go
about fixing the other bugs we have such as having Multiboot headers in
xen.efi pointing at unusable entrypoints.
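For readers following along, the probing idiom being debated (try a trial
link with the candidate flag, and on success emit the flag itself so it can
be spliced into the link command) can be sketched in plain shell.  The
probe_flag helper and the trial commands below are illustrative stand-ins,
not the actual Makefile code from the patch:

```shell
# Generic sketch of the "probe, then emit the flag" idiom from the quoted
# Makefile hunk: run a trial command; print the candidate flag only if the
# trial succeeds, so the output can be appended straight to the link flags.
probe_flag() {
    candidate="$1"
    shift
    if "$@" >/dev/null 2>&1; then
        printf '%s\n' "$candidate"
    fi
}

# The real check is "$(LD) $(EFI_LDFLAGS) --disable-reloc-section -o ...";
# `true` and `false` stand in for a succeeding/failing trial link here.
PE_FIXUPS_FLAG=$(probe_flag --disable-reloc-section true)
MISSING_FLAG=$(probe_flag --no-such-option false)
```

The empty-on-failure convention is what lets the result be used unquoted in
a flags variable without further transformation.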

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 18:07:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 18:07:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89462.168517 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEyZX-00068X-Ei; Wed, 24 Feb 2021 18:07:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89462.168517; Wed, 24 Feb 2021 18:07:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEyZX-00068Q-BW; Wed, 24 Feb 2021 18:07:39 +0000
Received: by outflank-mailman (input) for mailman id 89462;
 Wed, 24 Feb 2021 18:07:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mT50=H2=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lEyZV-00068L-WD
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 18:07:38 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 62f1bb50-5003-4228-9474-070c1a13cc7f;
 Wed, 24 Feb 2021 18:07:36 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 62f1bb50-5003-4228-9474-070c1a13cc7f
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614190056;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=/oLhVa52hUxUKtSbAID7LLgsTNFnwK7ci+HdKGlL7TI=;
  b=ZlWPwHddeqdQhkGnRIH3dLEziLH2GFV7tOfpCACNg2cltLJ/rNuutAA3
   8UJLraf90hBjxVdMaCWNSxgi6XI9wdKKYKnAKuPzweb8yg11YsuYF08T7
   PXViQkt/COiV0Le1HUhKjse9Rs7nB8jcK8NxfXmqHp3is/OJBWtAIX4nl
   k=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: N8ZGRH5ZyhImsvOovHH7UxUlcq9Yv4KDNEA8DAs+qNzQivhpr9UcJQce6AMqXX4ZLlpKCaPiQF
 ZlPTeLc1K+TQrdv/OEOxlUzR6EyNsKbQ5Qw+l9lpKnJ8F57xarU0a0E42KZoUQqYTSBIWsJ4gq
 ALpGnFH6yFFInlR1pANdHTZ5D3DLyF22BGeu2GwdpIp0QfnPdWWtkehPMcblpLp2HTWfcsVPRF
 3GC+SSwnszMx0Y5Me+o98TxNzjo8LqW+/aEtNTaQR8RvtLwepBI50pEjN6P/YR+hzJ00ILTY5Q
 OKQ=
X-SBRS: 5.2
X-MesageID: 37969292
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,203,1610427600"; 
   d="scan'208";a="37969292"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=DoUjZmvhqZLYRf7XZSA5FMWM4nG+//G8Dk14UgVaL4twapMjbDc7GIW8VqGRhB0XS3eHMhEw5dZM4CC7mGoCQHB37NpvSFpeaG9ZzejQw52q8cr+UQ7MiRyFDz3tkkpSe+3VhLZkFY90A7lLKNEljkJADvIVDQoMv2OccPt1N2b23Zum2OtaMIkTjZFaHpZJhO2ozmZU865A+Chx+tz8JjRHsCDd23VJf2JS/CwyY7RYt+RqGZONN8zs7CRZHawqg2uJMuIgvEE0dk3CIu2A1ZLUhdie2iLyIAi8QUA3cpQOK+btDffWG2AgCNyNexZ3uW0W3ArIstWaoSQ6lv8bZw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/oLhVa52hUxUKtSbAID7LLgsTNFnwK7ci+HdKGlL7TI=;
 b=TFx8zotsgNfyHCmuQH8br+A/mMzSKtvPjMnJa0prmvTUTYTbLYVVXlmuveZgP4sApaDrQywgLp0fgTumM2FetbaqKcjtU5P0QOuQGp6O3H0Th06tiisQqqwA+/yUpLghjJn1T/wX3lVvJaPclstlySYxXSGqRAUppQS6PTLQi0hhWe2NtWKQLR2PSkXTvBRYDx/lHoJZepzHSFpBoVhekMeq/juWE4eITWrBiK6oochK7VuiHW8+MyIjaW+hU//Ncgi/+vf0quJgmEgcWjwaucDs5an4AiuEn+nJsg5rPhAjyMEBRqB1un4/F98wPsutxPt4k+ZHptr//3sdk4eISQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/oLhVa52hUxUKtSbAID7LLgsTNFnwK7ci+HdKGlL7TI=;
 b=IofMJq1shN/S3kkgH5gPIC1KTfOkQIzMbV5JG8WogS+9IDUFMmCXmFmlmeLOpECHQKrTOnw7fjd71EV1oJIqoK+N71HOIE1MakKyJqPj+YXicpQu/wQDNqER2LgHRHaEV28qgzlSliZYX6vg/V3tpPobxSUmDf3JaGFmWKiRZNI=
Subject: Re: [RFC PATCH 00/10] Preemption in hypervisor (ARM only)
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: George Dunlap <george.dunlap@citrix.com>, Dario Faggioli
	<dfaggioli@suse.com>, Meng Xu <mengxu@cis.upenn.edu>, Ian Jackson
	<iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>
References: <20210223023428.757694-1-volodymyr_babchuk@epam.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <25034a7a-83ed-0848-8d23-67ed9d02c61c@citrix.com>
Date: Wed, 24 Feb 2021 18:07:25 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
In-Reply-To: <20210223023428.757694-1-volodymyr_babchuk@epam.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0233.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1a6::22) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: a4a499a0-3fc2-4c68-6dbd-08d8d8ef0ecc
X-MS-TrafficTypeDiagnostic: BYAPR03MB4294:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB429429ED3C739CF583106274BA9F9@BYAPR03MB4294.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: pbEb7ZzYVPFtIc/4lxv6VOXLugXh5wvRay9WWa/rTu4OMyUe6LHwkJaaFyjun7TGgUZQIwfIvy9NRQkxEOnAT8fK2Tubz3/Z0/qEapaMRde3xtS5Qa6R0CprLbfTtsnRdQcgIn7zQ8LNKpxKHCd09WTnNMsHGeQjbO7Glxrz/4of6aiCeGb2tqWV/H0xVQkd6B2uI0Dwt4kWO08mCH1b+T3xK166YvmpH12jBpxIIC6J36XgRzMfDVawknk8y3N8iWBjvPndp7ZjDpu7oBW5eZ5kMasrMvxT5I5PQ7R9TD8+hgSX8ggxGBcuqX9WIaDvtOfYOiEADSdMDxLRYYCzDV0SgVI47vOBjg9AjL6DMet3k2byftKTgAX7F7TvYfxAJC6CEWmD1iCOTSlVJ5431vGv8j23BhJyB1v4I7624BT3X7oZ/2+W1poqWO//IWVcunAOr0r9WAWFWfd1H7tiP8oIDggDQdlVteVIKtqEm0eRJfS0SEO3kUyHuCOWQzdEqdXdSgVN2AwuEQ75vVHXkyKBKrR6tw9aNn++BTEGTSWdhEJJTezK2bOaHLkKEVWaTpJuTmmIyFmbWGWIGgEl9WVZ+CmgaIRpLQE9YOuoLTM=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39860400002)(396003)(136003)(346002)(366004)(376002)(66556008)(186003)(83380400001)(66476007)(53546011)(478600001)(66946007)(2616005)(6666004)(956004)(26005)(8676002)(5660300002)(36756003)(31686004)(31696002)(54906003)(2906002)(16576012)(110136005)(4326008)(6486002)(86362001)(8936002)(16526019)(316002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?SDFpaExVUFg3Vm9vWHRHLzhBbXZrNk0xdC9CcXpmMm1MbmFyWk90bFJUZ3ky?=
 =?utf-8?B?TzdFSEtSeFJuSnVya2I5eEVmamtLTjRJUmJmNkY1eHZjalUzT0hVSkhiUTg3?=
 =?utf-8?B?anB2N3RWWXptYXBabUN2UjhkMEpXRERPTFBqbXB0cmxiLy9rUm1uelZYeXgy?=
 =?utf-8?B?SkQySXJidjBjZGNLSWdmdVNyaVZwZmxReHJ6OVByZ09sRmk2bmpYak1mM3A1?=
 =?utf-8?B?L1hxV2MzWVVuOWFqZlBQSzVRN2JDdUZha0t1K0dZMUxpb2ZCZ1lUYW9tYU9h?=
 =?utf-8?B?MFhzVWpBOUlxM3F6TXd5ZkNPRU80Sm5GU3Urd1FvWkd0U1VEc0IwbzZBTkYw?=
 =?utf-8?B?aUlIbUJ6TXdJU0ZxRDZEVml3Tld4ckZnY3F2Rkc5T0NXTmpkTmVVejNkUm9E?=
 =?utf-8?B?ZFVZNGZEc2JmN0N4TGo3SVZtZWsvbVhSUWVLTmdaQ1p1UHdaeEE0Z3h4M2Zt?=
 =?utf-8?B?ZWF4a3p6S2t3QU5uenlveVF4N2NsTnIxYWdaY0RhNEZDamJRRjZYNVA4clZ3?=
 =?utf-8?B?YzB1TGN1ekl3bjV3dkZoQVB6eXplUDdPc3RZbHRldllpV084Vlh2NWQyT3Np?=
 =?utf-8?B?a2N1aE81bnIzNUg1T29ZbTlGZFp5VE9ONnNIOExJNDNwVUtXdFNMY0VpVzl2?=
 =?utf-8?B?ZDRVZDhoQzU1cFhSQU16K1VqYU1RSFpyVEhOdGx4a1RQTzdQb1pHVVdtLytU?=
 =?utf-8?B?OGFuMWp5ZmdQS3cvNzFlbEdrNUJSdFNDN0hQNmVNeGxud0l0bXdLcUw4L2N3?=
 =?utf-8?B?alRzZXByc0wxeEZIK2pLS0FTN1pHNTlKUWcvWmNGRjFXNkxZcGE5bSszS0Vt?=
 =?utf-8?B?VEwyYlp2RXY0Q2p0TEgxR1NKSTB2ZkhONDNpajNySWQ3Y2krNG5QTWdjQzBG?=
 =?utf-8?B?RnZuK0FSSFJta3IzZ0Q3R2ROQXBzK0Q4UFFUK0hXOThIQ3Nmck1FdGxSb1l4?=
 =?utf-8?B?cUNhai9GZzNYNDN2NXF0MVFqNDkxSEl2UlhETGJjZWZ1TXRuY2FsNzhjU09k?=
 =?utf-8?B?aWJ5N3dJTDFxSC9zeVBuRGFsUHExMHJTQzdrclYxQXJzK0NjdElTaTVhNU0z?=
 =?utf-8?B?YlFWd0k1bCtnYU1VZlJWWHdBTDZoUklnM2JvUVZGclZnMTdrSStZOFg2Ym52?=
 =?utf-8?B?SjgrUnNHVXlJU2JGNDkwM01nSmVsTjhDSURZTVZua01qcmtHcTl3Yy9BNXY1?=
 =?utf-8?B?UWR0QU9YcEcwNER0T1FEdWVlazFvMUxUNUdKNmY3RS94K24yS2ZLamkvWE01?=
 =?utf-8?B?R0ZOOU9vS0dBSzFsMWFrSlpkTC9kN00yZnRzUUVTZXlFbXRLZGNZYzBjMGlF?=
 =?utf-8?B?NXQwYlh3YkR3R29HblV0Q2ZCYXpkNTdZT1BlaElnTThYU2d5V3d3YXhrSDJU?=
 =?utf-8?B?KzF2ekFCbWFEVndKY1JvWU95MmxpQnpLOXpmL3htRy9xYjNaaFJCZ3Q4Z0tQ?=
 =?utf-8?B?RmFvYitXV1dtbTRqaTh1Mys0Z1BWTnFRYU1QUHZTeEx6dGNrNUpVK3U0RUx2?=
 =?utf-8?B?OUZkS1BVbG44UnJ3V1dCVTAwQW9LaGNRVGE0WFRYcDBxV3RzNSs4RVM5VzBa?=
 =?utf-8?B?N1lrc3FxS1RwNzJvYXJ5U1NwajFYWE9yNXJPNVdlbFpyN3RtMUhkdnhka1Vk?=
 =?utf-8?B?amFubHNpWEF4cCtkMVFtbk91OVVxbjBtamNLeHZRZVhKeU1IWUQwR05OMnBx?=
 =?utf-8?B?VDRWSnQ5enlBekhNOUo1TDNVNTF1eXVyV2lna1JnckpIS0pRcCtyVGl1QVJ5?=
 =?utf-8?Q?B4REjp2xfG4LvZ/ZJ68yznaG49FyOvy6YynlgKx?=
X-MS-Exchange-CrossTenant-Network-Message-Id: a4a499a0-3fc2-4c68-6dbd-08d8d8ef0ecc
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Feb 2021 18:07:33.1676
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: sM9bID0edIZ6/9xSU36/2K0ED81Cat26XSPghvrR2RkZYNdbQyhOyXD/p67OYbLs0oJzVyJHt3elp2dEU3C7+CArbsk6Y5LiX/uzv2N3H6c=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4294
X-OriginatorOrg: citrix.com

On 23/02/2021 02:34, Volodymyr Babchuk wrote:
> Hello community,
>
> Subject of this cover letter is quite self-explanatory. This patch
> series implements PoC for preemption in hypervisor mode.
>
> This is the sort of follow-up to recent discussion about latency
> ([1]).
>
> Motivation
> ==========
>
> It is well known that Xen is not preemptable. In other words, it is
> impossible to switch vCPU contexts while running in hypervisor
> mode. The only place where a scheduling decision can be made and one
> vCPU replaced with another is the exit path from hypervisor
> mode. The one exception is idle vCPUs, which never leave
> hypervisor mode for obvious reasons.
>
> This leads to a number of problems. This list is not comprehensive. It
> lists only things that I or my colleagues encountered personally.
>
> Long-running hypercalls. Due to the nature of some hypercalls, they can
> execute for an arbitrarily long time. Mostly these are calls that deal
> with long lists of similar actions, like memory page processing. To deal
> with this issue Xen employs a most horrific technique called "hypercall
> continuation". When the code that handles a hypercall decides that it
> should be preempted, it basically updates the hypercall parameters and
> moves the guest PC one instruction back. This causes the guest to
> re-execute the hypercall with altered parameters, which allows the
> hypervisor to continue hypercall execution later. This approach itself
> has obvious problems: the code that executes the hypercall is responsible
> for preemption, preemption checks are infrequent (because they are costly
> by themselves), hypercall execution state is stored in a guest-controlled
> area, and we rely on the guest's good will to continue the hypercall. All
> this imposes restrictions on which hypercalls can be preempted, when they
> can be preempted, and how to write hypercall handlers. It also requires
> very accurate coding and has already led to at least one
> vulnerability - XSA-318. Some hypercalls cannot be preempted at all,
> like the one mentioned in [1].
>
> Absence of hypervisor threads/vCPUs. The hypervisor owns only idle
> vCPUs, which are supposed to run when the system is idle. If the
> hypervisor needs to execute its own tasks that must run right now,
> it has no way other than to execute them on the current vCPU. But
> the scheduler does not know that the hypervisor is executing a
> hypervisor task, and accounts the time spent to a domain. This can
> lead to domain starvation.
>
> Also, the absence of hypervisor threads leads to the absence of
> high-level synchronization primitives like mutexes, condition
> variables, completions, etc. This causes two problems: we need to
> use spinlocks everywhere, and we have problems when porting device
> drivers from the Linux kernel.

You cannot reenter a guest, even to deliver interrupts, if pre-empted at
an arbitrary point in a hypercall.  State needs unwinding suitably.

Xen's non-preemptible-ness is designed to specifically force you to not
implement long-running hypercalls which would interfere with timely
interrupt handling in the general case.

Hypervisor/virt properties are different to both a kernel-only-RTOS, and
regular userspace.  This was why I gave you some specific extra scenarios
to do latency testing with, so you could make a fair comparison of
"extra overhead caused by Xen" separate from "overhead due to
fundamental design constraints of using virt".


Preemption like this will make some benchmarks look better, but it also
introduces the ability to create fundamental problems, like preventing
any interrupt delivery into a VM for seconds of wallclock time while
each vcpu happens to be in a long-running hypercall.

If you want timely interrupt handling, you either need to partition your
workloads by the long-running-ness of their hypercalls, or not have
long-running hypercalls.

I remain unconvinced that preemption is a sensible fix to the problem
you're trying to solve.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 18:20:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 18:20:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89467.168532 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEymF-00082K-M1; Wed, 24 Feb 2021 18:20:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89467.168532; Wed, 24 Feb 2021 18:20:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEymF-00082D-Io; Wed, 24 Feb 2021 18:20:47 +0000
Received: by outflank-mailman (input) for mailman id 89467;
 Wed, 24 Feb 2021 18:20:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEymE-000821-A5; Wed, 24 Feb 2021 18:20:46 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEymE-0003Vz-0k; Wed, 24 Feb 2021 18:20:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEymD-00078X-Nw; Wed, 24 Feb 2021 18:20:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lEymD-00020K-NT; Wed, 24 Feb 2021 18:20:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=6uahFKvCPCRVhBnGAwOOb/BivNxaZ34XwQawRZO1f0k=; b=3wyq/7u0dYNcHFZQPhk+72qCh1
	NMOdZ3ZUyoa5eoS/uxeIOoBXyeZKBV8yckYuF/wpPCdMtY+Wgt+xX7bbPbDdxaeoqtvzq0xjGiX2H
	TQxIMeHQTRLtIdS6jf83YM9MMasYxW/yQRCEfW8nSg7czxBsNXQc9Pv83sUNOQaLuCHQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159606-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 159606: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=7ef8134565dccf9186d5eabd7dbb4ecae6dead87
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 24 Feb 2021 18:20:45 +0000

flight 159606 qemu-mainline real [real]
flight 159639 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/159606/
http://logs.test-lab.xenproject.org/osstest/logs/159639/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                7ef8134565dccf9186d5eabd7dbb4ecae6dead87
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  188 days
Failing since        152659  2020-08-21 14:07:39 Z  187 days  361 attempts
Testing same since   159563  2021-02-22 23:37:57 Z    1 days    3 attempts

------------------------------------------------------------
425 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 117355 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 18:28:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 18:28:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89472.168547 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEytp-0008KP-Jv; Wed, 24 Feb 2021 18:28:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89472.168547; Wed, 24 Feb 2021 18:28:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEytp-0008KI-Gq; Wed, 24 Feb 2021 18:28:37 +0000
Received: by outflank-mailman (input) for mailman id 89472;
 Wed, 24 Feb 2021 18:28:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEyto-0008KA-ME; Wed, 24 Feb 2021 18:28:36 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEyto-0003fG-Fb; Wed, 24 Feb 2021 18:28:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lEyto-0007c9-7z; Wed, 24 Feb 2021 18:28:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lEyto-000846-7Q; Wed, 24 Feb 2021 18:28:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=/z5jlB97542re3xdJgqI0toXOUOy6sJsoQmbZQudLLM=; b=tzgEviJCjIuz2dIaH6QHxgubsI
	f+JC/0MIGjbbRVCJyxT6T5ha1t+76tnZdiIqQDB88aBrbnQeJ014p9cMTOGPUJsRXSe7FkiINXeXV
	qgLJaf1ugXsubuQJPy2XVgXXIANca8NPDbLodAC+iWRoyo3ohhR+Iyh2cJc8OgONSaI8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159629-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 159629: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=81b2b328a26c1b89c275898d12e8ab26c0673dad
X-Osstest-Versions-That:
    xen=5d94433a66df29ce314696a13bdd324ec0e342fe
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 24 Feb 2021 18:28:36 +0000

flight 159629 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159629/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 159600

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  81b2b328a26c1b89c275898d12e8ab26c0673dad
baseline version:
 xen                  5d94433a66df29ce314696a13bdd324ec0e342fe

Last test of basis   159600  2021-02-23 20:01:30 Z    0 days
Testing same since   159624  2021-02-24 12:01:29 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  fail    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 81b2b328a26c1b89c275898d12e8ab26c0673dad
Author: Roger Pau Monné <roger.pau@citrix.com>
Date:   Wed Feb 24 12:48:13 2021 +0100

    hvmloader: use Xen private header for elf structs
    
    Do not use the system provided elf.h, and instead use elfstructs.h
    from libelf.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Ian Jackson <iwj@xenproject.org>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit 10bb8aa0d5d029bd56da4a2a92e1e42bef880210
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Feb 24 12:47:34 2021 +0100

    build: remove more absolute paths from dependency tracking files
    
    d6b12add90da ("DEPS handling: Remove absolute paths from references to
    cwd") took care of massaging the dependencies of the output file, but
    for our passing of -MP to the compiler to take effect the same needs to
    be done on the "phony" rules that the compiler emits.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Ian Jackson <iwj@xenproject.org>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 18:29:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 18:29:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89476.168561 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEyuu-00008t-VH; Wed, 24 Feb 2021 18:29:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89476.168561; Wed, 24 Feb 2021 18:29:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEyuu-00008m-SC; Wed, 24 Feb 2021 18:29:44 +0000
Received: by outflank-mailman (input) for mailman id 89476;
 Wed, 24 Feb 2021 18:29:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mT50=H2=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lEyut-00008c-6P
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 18:29:43 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 17d4a9c2-1ed5-4aab-b2bf-3f7249d12b4e;
 Wed, 24 Feb 2021 18:29:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 17d4a9c2-1ed5-4aab-b2bf-3f7249d12b4e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614191382;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=4D45fhMbVNY7HwSrUfby8l6DNxZ+oy/vDwl1OVkHISI=;
  b=IVeIVzj2yEOVyovTzJN7jHHX2FyRYIikrcFbMKKXfuI4un8dG47ckRrN
   XQRuqYTPt3r64BBTorww6MM19Av7sTsEbKEHxIa2qsGdS+Iy9xFQacrlk
   3tCRB40Akh1IyZgYsMmCrlnGaw83DAYNxTgIMZxVqijcFw9f0nzRxnKz5
   8=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: U5ruABiQsx1p2QeAfF9JDHhx+JzXj1pXIKcgQuAaIGXEx/1AxJ0SI+/2ZPFG33KCAHgk1XimIT
 Vj+P4Qe/RKZU+bs7JOKJHtL3hpVVX6YrCdndjn7PVrNV9+BpDJNWunBmez7vfLwQbGsmiAPxeZ
 N8BD5AN7QZeIREhvVtkqacGoPG1wdV8ER2YdI+rCVHwKmyLwLXYZKpmsHHVfWsAQSxuXre9kI6
 SobAxsc5b4PpsBUOA9nLf+xxSjPu4x3Z5echvvFGJsP/5sATWy8juqnd5FKbx4R6ckqaXwLMo0
 Vr0=
X-SBRS: 5.2
X-MesageID: 37948377
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,203,1610427600"; 
   d="scan'208";a="37948377"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=h2ZqP/kOYZoZ2W92GiDICCHF+A/EtXaV338lIUgAGkRn+meC4u7zbVRh7XNgd9cK0zPHmtLgntPV8m8Z6/l1dSFgxGf6dHowfHwGLNrzDquSaBbEy9xQHAvwaBmFwF/BnE0XB5Ia0t2fJJ32CKKNIMpXa69CGNwWhKM2vwDGVI09ATgWC75Nj8KKGdF6ogaIy5L3wxADuZbrQ9MLHhT5q0jB9wuOrZTBE4ptFMSzHiev3zKDVzA+C9r0VHCcVncJBgvsREokhgVClKqDWEd9IVFk3Ax/Pyif2SPTBVSp/zC78Qa5mAOfs6FSCcys14bf/OWCrfDEx1zR9th02xSmYg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=4D45fhMbVNY7HwSrUfby8l6DNxZ+oy/vDwl1OVkHISI=;
 b=btNKMKJrY+ReLjsT7jakWGpuroeL/gYyM4wtk7oliqX1ciXho/fhPvRbYq/pC8kwGgMzGNn79MSBC0f2R8Ynl4E4zaeherrOe9mpItk8c4nBJapzbiv3HL0LeBUqB+hd+oSmMpNpuCpPFSlXC7t2ljWDJVl35NptYW52X5KNmEaZVZfSx9rsSe+40gTYPLRBAwW9yGOTJRREwf+aUgkktzvN7PEjWmX81xlDSQ6CLJ19naXTBQLipMGAyI8Sabb7cabzzpigzjr8+U+hayKfpv5tX95u8OEyi+7fgWBpq1AQbJXrPKeAr8UPxcL8o2wgJCKgVpsCcbgBYDhTMvcLvg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=4D45fhMbVNY7HwSrUfby8l6DNxZ+oy/vDwl1OVkHISI=;
 b=uRhNX0zjoiLE/7Qf6fP5rn66tSdPxFcpyEIRzzeq9jpz0M1FUXfW/M6NSNlYcdarJ0stfvpb46j442qUn0lWsmVTv/X9FbQu3uqerPXjVFmd1LQQwedYHvSc3VUKQnRkpXDtnOgzMTclXVzZIhSEW+ZhgZg20h7ja8gvpUA7Xsc=
Subject: Re: [RFC PATCH 01/10] sched: core: save IRQ state during locking
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: George Dunlap <george.dunlap@citrix.com>, Dario Faggioli
	<dfaggioli@suse.com>
References: <20210223023428.757694-1-volodymyr_babchuk@epam.com>
 <20210223023428.757694-2-volodymyr_babchuk@epam.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <c8475a6a-5a33-cc42-a909-b6d8dce61fab@citrix.com>
Date: Wed, 24 Feb 2021 18:29:30 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
In-Reply-To: <20210223023428.757694-2-volodymyr_babchuk@epam.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0172.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a::16) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 65d989ed-5b4c-4a91-97b4-08d8d8f22491
X-MS-TrafficTypeDiagnostic: BYAPR03MB4120:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB41208BC4117D38F81F32FB3FBA9F9@BYAPR03MB4120.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: QUpqg5SCZPBi27d4LxzXlMXDK/qJTpfj9fKfYVDBJLk9iHwxH9SRBCGnvHd0qK6aLYgc/yw9lEtjBPoA4+v1hnk7saPZ36ouMIVwhBlRdsgpoO/ld7C4rO+Xx8WASMMIp7rtYrxvGeMCeeRuUB23J8+th2HJ59hvx/KONPmnS14v6kgBAD9q03qKzULYXgLop1+QDsLx8F/Pyf33pYQGeJJXEQchbT0TBpnwVtAYA8REY/UxrsG2DbqztP0Uu1WosL/Tgm2hRR3nBlFX11UkSs5QNFMWprHATeGtow5M9x+vfnlAbBuw1ERt17IAd4v1qFpK0jhxCgqkyWAryhSN1CWepP24heApw9nNedJcIuTpDgqKznCUOWS6xQPrxFnJXokebc2U8EHb+GhZHSiUN6eLG+I+5W1ZSfBZakA7gbEPZ4q9DG2TJ337VnzNNyLWC0DDc/E90ucAbjR7aOC1h1A5YInioQEuTtnWNrahCqOYjnxVoo1p0G6QC/A0K/6XuZeYv7SDwK6xg/QItaZFIOCitdDTWb1CTqyC5HR9K8K9srSOW1OXWCFHgHil7Vfsa0cSF7VJ9TcvyTVxMNTFZj95+/9mvWa6AMcJumMdcJU=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(366004)(346002)(136003)(396003)(39860400002)(66556008)(26005)(53546011)(478600001)(8936002)(66476007)(86362001)(186003)(5660300002)(6486002)(31696002)(36756003)(4326008)(16576012)(31686004)(316002)(6666004)(54906003)(956004)(4744005)(8676002)(16526019)(110136005)(2616005)(2906002)(66946007)(83380400001)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?QWVCdmsvQXdZUU4zYnlmTERYWEt5Zy9ZYThqYnZSaGlKaHpwQ3c5bml4YzV3?=
 =?utf-8?B?bGtPS0pBY1NRWWRvWGZESG5Xb2JZblhNNGpqUHVzdkJuQzdkenN4LzJKVXhX?=
 =?utf-8?B?OUJVamMxbC9qaFlSbVluengzbkxmTGxvYllDMTk3VkJQUVkxbjdObld6SGNC?=
 =?utf-8?B?VDNhUE9DNUJ6Vk1kbGxEYk5Ic1lvc1JyWDRCd2huWE5yeVhJMFdYN3ZCdUcx?=
 =?utf-8?B?bk5aMHhJMzlHNDBuVFllWTQ0RDlXaUtBbldqdmdJQlE0MkpGaWZHcy81MW8z?=
 =?utf-8?B?VlluR2k0TTdERTl1WkppT2xGQXl1WVBYczdRRGE1MUhxSmtVMFN4MjY0Qnlt?=
 =?utf-8?B?bDA4Y2JGamZ5Z3BDSStkK2F1ekQvczdTYllQTVF2U1Q5d3VRdlNrczljMHdP?=
 =?utf-8?B?dVBNNVBETEs5ZE52QjdPdVJWeXdnNEtuRC9ybm1NdEhCc2hVcWU0aW9mN0ZX?=
 =?utf-8?B?N3djZUFCZFlVQ2tUUW1DYmdFUHBZTTNibVd4bzRUaEpSOVdsK2t1bE1rVkxi?=
 =?utf-8?B?b3pWOVlCQjJsbGRoRGtVMkx0YmhTODA4UTRJMEM0QTF0WnpFLzJBa1k1TDVF?=
 =?utf-8?B?bnI1QkF3MEJXeWhuc0cvSmVPWDl0THR2dEFhNHJMVm53b2dkbmpSK09zVVl0?=
 =?utf-8?B?WkZlajdPWjZvTEQ5Q1hWeGk5bWZaeGRTNytYR0FlSXNIZDkrSFo2NGFQNktv?=
 =?utf-8?B?UWhUN2VidDd2WFQ2Y2dlc2JnTkoxRVhsSVd0Wm9XZFZkY3pqTlNKdkNUcFBI?=
 =?utf-8?B?VlBiTEJNRDFGbUwrKzVYV0svTGJUamZDWncyQ0JFOU1xUUFRN2gzWm92dGsy?=
 =?utf-8?B?akl5MEVPNWVtaHlWZ04xVHdwL2JVQWJKSVBLQWxkRm1heGNUN3Jib1c3Sk9G?=
 =?utf-8?B?aDJjNE5jVGJ6M0hyTGVOUlNQNUNVTkpBV2lZSHdWTFNBWGhoK3NldmFXTHl1?=
 =?utf-8?B?QWVtRG42aCtVN1o2ZDRmeDFscGRtSDAzVHJTclBtYkhRdm96Y2VhRkFEL3BP?=
 =?utf-8?B?eXo3T0N4TlZJdFJxc1F0NlBLRjZLZEJCRkdacExZbkZ2SmxrejEreThUK0FV?=
 =?utf-8?B?OExVQXVsZTM3VlJhY3FLSkFKVkNSSmFiMjNHWjB2RzA1WitpcnI1VDJ0ekxy?=
 =?utf-8?B?anJLM2prUEVhUHVaUEhqQkNJeitzRDQxTG5IZjA1QXZrSU91WjltQ05OblVP?=
 =?utf-8?B?dkxoSW5UOWwrcDU5MzZnZE1VcTFOZHZQdGJqL3hoZ0krTTZpOWU2d0xidWx5?=
 =?utf-8?B?SEVwVU9uZTYwRW1aTmM2M0NDdkprMm5WVC9rQjFCdVo3QXBxUFREM1diS2Ev?=
 =?utf-8?B?WDVKSFJRYWQyZlFqdnZCbzhrQktJZjM5ZFVvdi9PWWtPNGtBVHUrejZmUWtt?=
 =?utf-8?B?Vlhzc1hSL280ajRUc0kybTFLMnVIM1liTGttZERUN2QyNS8zZnpDN1VPRFlT?=
 =?utf-8?B?Mi9mSG41V3hrc2hRMVVuMzhzL3dBMEFsM2RLUTdMamY2ZFd1MHZacWplNjE5?=
 =?utf-8?B?a2IrWFhkbzU2WktoUlg5cmZkcEpEVUVYTVB0MkdVV1Ara3NEOVFKaVJUM29n?=
 =?utf-8?B?VVd4L2tiNmNjTk1vVytVNnhqRkZ2czllamhYajhuSGIwS3YxUjZwZWdkNVZJ?=
 =?utf-8?B?Q0dEczRyKytUQzYxVzZLL2pWNUJNVFR1Z3RaNkpMUUtlNTROQXhMZlhlMVRN?=
 =?utf-8?B?QU1xeTRBbDJxV2xySVJxMWR4dTNkWWQzd1AzYStHb3BMSUNoN0oyK0Z0c3Y4?=
 =?utf-8?Q?kZKbVhzVDEsfDn1vCF1KduS4KSZJ4oGiApc+jAS?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 65d989ed-5b4c-4a91-97b4-08d8d8f22491
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Feb 2021 18:29:38.3359
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: yJ9ygkdnrznorPY7ucnAXRgx0nonu6Ez0v2QxKkJk/IDHWMazal1xcRuM4GzUmSXiwS7SZ/RXXTc2F26uyXS2fo7QdPH4ewXAk0ffsvTtYo=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4120
X-OriginatorOrg: citrix.com

On 23/02/2021 02:34, Volodymyr Babchuk wrote:
> With XEN preemption enabled, scheduler functions can be called with
> IRQs disabled (for example, at end of IRQ handler), so we should
> save and restore IRQ state in schedulers code.
>
> Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>

These functions need to be fixed not to be _irq() variants in the first place.

The only reason they're like that is so the schedule softirq/irq can
edit the runqueues.  I seem to recall Dario having a plan to switch the
runqueues to using a lockless update mechanism, which IIRC removes any
need for any of the scheduler locks to be irqs-off.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 18:47:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 18:47:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89479.168573 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEzBz-000267-EC; Wed, 24 Feb 2021 18:47:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89479.168573; Wed, 24 Feb 2021 18:47:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEzBz-000260-Ar; Wed, 24 Feb 2021 18:47:23 +0000
Received: by outflank-mailman (input) for mailman id 89479;
 Wed, 24 Feb 2021 18:47:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=AaNv=H2=rjwysocki.net=rjw@srs-us1.protection.inumbo.net>)
 id 1lEzBx-00025v-Iv
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 18:47:21 +0000
Received: from cloudserver094114.home.pl (unknown [79.96.170.134])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2c1c6582-fdf3-432f-aec2-0efcf701e3b8;
 Wed, 24 Feb 2021 18:47:19 +0000 (UTC)
Received: from localhost (127.0.0.1) (HELO v370.home.net.pl)
 by /usr/run/smtp (/usr/run/postfix/private/idea_smtp) via UNIX with SMTP
 (IdeaSmtpServer 0.83.537)
 id f4fe5d91c4c6e91b; Wed, 24 Feb 2021 19:47:17 +0100
Received: from kreacher.localnet (89-64-80-80.dynamic.chello.pl [89.64.80.80])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by v370.home.net.pl (Postfix) with ESMTPSA id BF1AE661E2A;
 Wed, 24 Feb 2021 19:47:16 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2c1c6582-fdf3-432f-aec2-0efcf701e3b8
From: "Rafael J. Wysocki" <rjw@rjwysocki.net>
To: Linux ACPI <linux-acpi@vger.kernel.org>
Cc: LKML <linux-kernel@vger.kernel.org>, Boris Ostrovsky <boris.ostrovsky@oracle.com>, Stefano Stabellini <sstabellini@kernel.org>, Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Subject: [PATCH v1] xen: ACPI: Get rid of ACPICA message printing
Date: Wed, 24 Feb 2021 19:47:15 +0100
Message-ID: <1709720.Zl72FGBfpD@kreacher>
MIME-Version: 1.0
Content-Transfer-Encoding: 7Bit
Content-Type: text/plain; charset="us-ascii"
X-VADE-SPAMSTATE: clean
X-VADE-SPAMCAUSE: gggruggvucftvghtrhhoucdtuddrgeduledrkeejgdduudehucetufdoteggodetrfdotffvucfrrhhofhhilhgvmecujffqoffgrffnpdggtffipffknecuuegrihhlohhuthemucduhedtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmdenucfjughrpefhvffufffkggfgtgesthfuredttddtvdenucfhrhhomhepfdftrghfrggvlhculfdrucghhihsohgtkhhifdcuoehrjhifsehrjhifhihsohgtkhhirdhnvghtqeenucggtffrrghtthgvrhhnpeevudefgfeguedtjedvhfetveegleduveeuvedvjeekleefhfduhfefheekffefveenucfkphepkeelrdeigedrkedtrdektdenucevlhhushhtvghrufhiiigvpedtnecurfgrrhgrmhepihhnvghtpeekledrieegrdektddrkedtpdhhvghlohepkhhrvggrtghhvghrrdhlohgtrghlnhgvthdpmhgrihhlfhhrohhmpedftfgrfhgrvghlucflrdcuhgihshhotghkihdfuceorhhjfiesrhhjfiihshhotghkihdrnhgvtheqpdhrtghpthhtoheplhhinhhugidqrggtphhisehvghgvrhdrkhgvrhhnvghlrdhorhhgpdhrtghpthhtoheplhhinhhugidqkhgvrhhnvghlsehvghgvrhdrkhgvrhhnvghlrdhorhhgpdhrtghpthhtohepsghorhhishdrohhsthhrohhvshhkhiesohhrrggtlhgvrdgtohhmpdhrtghpthhtohepshhsthgrsggvlhhlihhniheskhgvrhhnvghlrdhorhhgpdhrtghpthhtohepjhhgrhhoshhssehsuhhsvgdrtghomhdp
 rhgtphhtthhopeigvghnqdguvghvvghlsehlihhsthhsrdigvghnphhrohhjvggtthdrohhrgh
X-DCC--Metrics: v370.home.net.pl 1024; Body=6 Fuz1=6 Fuz2=6

From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

The ACPI_DEBUG_PRINT() macro is used in a few places in
xen-acpi-cpuhotplug.c and xen-acpi-memhotplug.c for printing debug
messages, but that is questionable, because that macro belongs to
ACPICA and it should not be used elsewhere.  In addition,
ACPI_DEBUG_PRINT() requires special enabling to allow it to actually
print the message and the _COMPONENT symbol generally needed for
that is not defined in any of the files in question.

For this reason, replace all of the ACPI_DEBUG_PRINT() instances in
the Xen code with acpi_handle_debug() (with the additional benefit
that the source object can be identified more easily after this
change) and drop the ACPI_MODULE_NAME() definitions that are only
used by the ACPICA message printing macros from that code.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 drivers/xen/xen-acpi-cpuhotplug.c |   12 +++++-------
 drivers/xen/xen-acpi-memhotplug.c |   16 +++++++---------
 2 files changed, 12 insertions(+), 16 deletions(-)

Index: linux-pm/drivers/xen/xen-acpi-cpuhotplug.c
===================================================================
--- linux-pm.orig/drivers/xen/xen-acpi-cpuhotplug.c
+++ linux-pm/drivers/xen/xen-acpi-cpuhotplug.c
@@ -242,10 +242,10 @@ static void acpi_processor_hotplug_notif
 	switch (event) {
 	case ACPI_NOTIFY_BUS_CHECK:
 	case ACPI_NOTIFY_DEVICE_CHECK:
-		ACPI_DEBUG_PRINT((ACPI_DB_INFO,
+		acpi_handle_debug(handle,
 			"Processor driver received %s event\n",
 			(event == ACPI_NOTIFY_BUS_CHECK) ?
-			"ACPI_NOTIFY_BUS_CHECK" : "ACPI_NOTIFY_DEVICE_CHECK"));
+			"ACPI_NOTIFY_BUS_CHECK" : "ACPI_NOTIFY_DEVICE_CHECK");
 
 		if (!is_processor_present(handle))
 			break;
@@ -269,8 +269,8 @@ static void acpi_processor_hotplug_notif
 		break;
 
 	case ACPI_NOTIFY_EJECT_REQUEST:
-		ACPI_DEBUG_PRINT((ACPI_DB_INFO,
-				  "received ACPI_NOTIFY_EJECT_REQUEST\n"));
+		acpi_handle_debug(handle,
+				  "received ACPI_NOTIFY_EJECT_REQUEST\n");
 
 		if (acpi_bus_get_device(handle, &device)) {
 			pr_err(PREFIX "Device don't exist, dropping EJECT\n");
@@ -290,8 +290,7 @@ static void acpi_processor_hotplug_notif
 		break;
 
 	default:
-		ACPI_DEBUG_PRINT((ACPI_DB_INFO,
-				  "Unsupported event [0x%x]\n", event));
+		acpi_handle_debug(handle, "Unsupported event [0x%x]\n", event);
 
 		/* non-hotplug event; possibly handled by other handler */
 		goto out;
@@ -440,7 +439,6 @@ static void __exit xen_acpi_processor_ex
 
 module_init(xen_acpi_processor_init);
 module_exit(xen_acpi_processor_exit);
-ACPI_MODULE_NAME("xen-acpi-cpuhotplug");
 MODULE_AUTHOR("Liu Jinsong <jinsong.liu@intel.com>");
 MODULE_DESCRIPTION("Xen Hotplug CPU Driver");
 MODULE_LICENSE("GPL");
Index: linux-pm/drivers/xen/xen-acpi-memhotplug.c
===================================================================
--- linux-pm.orig/drivers/xen/xen-acpi-memhotplug.c
+++ linux-pm/drivers/xen/xen-acpi-memhotplug.c
@@ -227,13 +227,13 @@ static void acpi_memory_device_notify(ac
 
 	switch (event) {
 	case ACPI_NOTIFY_BUS_CHECK:
-		ACPI_DEBUG_PRINT((ACPI_DB_INFO,
-			"\nReceived BUS CHECK notification for device\n"));
+		acpi_handle_debug(handle,
+			"Received BUS CHECK notification for device\n");
 		fallthrough;
 	case ACPI_NOTIFY_DEVICE_CHECK:
 		if (event == ACPI_NOTIFY_DEVICE_CHECK)
-			ACPI_DEBUG_PRINT((ACPI_DB_INFO,
-			"\nReceived DEVICE CHECK notification for device\n"));
+			acpi_handle_debug(handle,
+			"Received DEVICE CHECK notification for device\n");
 
 		if (acpi_memory_get_device(handle, &mem_device)) {
 			pr_err(PREFIX "Cannot find driver data\n");
@@ -244,8 +244,8 @@ static void acpi_memory_device_notify(ac
 		break;
 
 	case ACPI_NOTIFY_EJECT_REQUEST:
-		ACPI_DEBUG_PRINT((ACPI_DB_INFO,
-			"\nReceived EJECT REQUEST notification for device\n"));
+		acpi_handle_debug(handle,
+			"Received EJECT REQUEST notification for device\n");
 
 		acpi_scan_lock_acquire();
 		if (acpi_bus_get_device(handle, &device)) {
@@ -269,8 +269,7 @@ static void acpi_memory_device_notify(ac
 		break;
 
 	default:
-		ACPI_DEBUG_PRINT((ACPI_DB_INFO,
-				  "Unsupported event [0x%x]\n", event));
+		acpi_handle_debug(handle, "Unsupported event [0x%x]\n", event);
 		/* non-hotplug event; possibly handled by other handler */
 		return;
 	}
@@ -469,7 +468,6 @@ static void __exit xen_acpi_memory_devic
 
 module_init(xen_acpi_memory_device_init);
 module_exit(xen_acpi_memory_device_exit);
-ACPI_MODULE_NAME("xen-acpi-memhotplug");
 MODULE_AUTHOR("Liu Jinsong <jinsong.liu@intel.com>");
 MODULE_DESCRIPTION("Xen Hotplug Mem Driver");
 MODULE_LICENSE("GPL");





From xen-devel-bounces@lists.xenproject.org Wed Feb 24 19:04:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 19:04:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89484.168592 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEzSZ-00046D-Ua; Wed, 24 Feb 2021 19:04:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89484.168592; Wed, 24 Feb 2021 19:04:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lEzSZ-000466-Q0; Wed, 24 Feb 2021 19:04:31 +0000
Received: by outflank-mailman (input) for mailman id 89484;
 Wed, 24 Feb 2021 19:04:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mT50=H2=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lEzSY-000461-MA
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 19:04:30 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d27e5efc-513b-423a-bde3-c520946b2cfc;
 Wed, 24 Feb 2021 19:04:29 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d27e5efc-513b-423a-bde3-c520946b2cfc
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614193469;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=uXEUfonQ+vvfFh715vKTHwnatQmAstNt3V2Mpb+O8uw=;
  b=c9mSTp8Lp46QZLtG1HgbtHAw1h14YuhTbTLg7mx7Wkjh/lIc3V2nhmV2
   dGDyd6C+iwNor2bXzrmeMywzpc8IzBHYuODVS0ccWxdeqz2qK1iKwSTyb
   Omjypn6JEdalCYRCWUEJ/FL729g2p9m1RlJmUPvb4f+sDSY8t1K857nSU
   4=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: xLylNo1rfKRRGvIiFgG6mAtzCUGpPlTFerFdRTdbOHusavtsDwkEooBbklIIBbBznC/GbNUpPc
 20///Z3eja9/CuhA5Wj4gHGevn/QJztixP5MYTVz/lUGQD+vA83K+a/fZjVn83HQCZIhMbv0YH
 Nxn2lAN3GeCf/pjS383/ERzS4AJANBSN7fG/sCg+b1sBwIaQVwW03u5D3LWsrIVe42y2b0T0lz
 RhanjGyyg24Jnpp5TXJ+oNDQf4mYXPqK1EqzqpTZ7TwpTLq5hEfHM0mEYSsF+Zg8mIK2zR3ICy
 568=
X-SBRS: 5.2
X-MesageID: 38147358
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,203,1610427600"; 
   d="scan'208";a="38147358"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fB3bwUapHfEHx1Sus/NHHhu/UiHMe0n1WpdFfeFnyys8PPb/dalEPO32lZ5Kc0pSV6oEOIbF4c6hLLgwwOL8DoCjCDa8+KY+FP+bxye/7KBeLVj4oZHXrsU2+Bfx1dHoXnKi+FMNOawn0XHbdd3mlce1JST4OLj//iAqbNZluuNpooFbFux4TNNqCpsOwKesZNg+MWYfOLrAg1yNGZQdfi+Bqssl9801nl2xYJ9tSfGIzKkS8XKhK0D/O35UXZtAt5A0PfYHEmiFkvRWkr7uFkye5yV1JiBqgYDtd6GA2XhMm6TGHA/je4BvT+3bRctLPZWwbTNX/Zr9fvpd/OGcGQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+S08kt1XYIigAJjpZxrc427W49akKuhZHOZEMUi8LqY=;
 b=WmRxcy9JEnSbtfVmioayV+pxMYdPciotw6UfXriBjN0D7BRfHSHFihpEMHPn9mbn+w0mbpTvXcsO5hHOnAZvO3YQLsabelEQMQlB4/XlKWoVRzzMRn6NgrQ1PsuySTppCNtSIFP9S00AvVQe/H2G4T8kIhqsl0PAwlikptlgBWqbs8dxJgJIjtk0cehRw7jAYR+WidHUouvjKfvpn6u5oC0dJ5z7vold2jpDfvZYMOX1uXUG75PF2wojsr2/bGP/5aKXOt2OFQeEbW90geNQlQqMYO8DI7BMqKrbWZKGYb7M+YmX53Lirp8nZTPSqQuTlWWoruk1bU7qBj5NeNtd4Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+S08kt1XYIigAJjpZxrc427W49akKuhZHOZEMUi8LqY=;
 b=eoz+8PzWDYMYRb/p/s0ydDRHeimEPcPhFL/wJW8XRr8AtRYPIE8R8QJ1oXNA/d7gMeFOYX5riikwJ8ojcF5bjZPKruYVU6EjpbsUfmTGjeTLzv9cmv2Y2Wy28S55F3L6Uk/EINS+TkJKjV2+ehzOUzjcaByrdEWD1S98aVogH8o=
Subject: Re: [PATCH][4.15] x86: mirror compat argument translation area for
 32-bit PV
To: Jan Beulich <jbeulich@suse.com>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, Ian Jackson <iwj@xenproject.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <bdedf018-b6c4-d0da-fb4b-8cf2d048c3b1@suse.com>
 <9eade40b-bd95-b850-2dec-f7def66c3c7b@citrix.com>
 <77a36366-9157-c3d3-b1f0-211f4fc39a93@suse.com>
 <60a31ec8-6844-2149-1a04-7e757d1d2dd3@citrix.com>
 <42c86cc7-c417-6089-4e44-90a96ebaede1@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <589cb7c9-4538-5018-85e1-6698c815477c@citrix.com>
Date: Wed, 24 Feb 2021 19:04:17 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
In-Reply-To: <42c86cc7-c417-6089-4e44-90a96ebaede1@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0224.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1a6::13) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: f9066505-cab3-4322-6196-08d8d8f7002d
X-MS-TrafficTypeDiagnostic: BYAPR03MB4869:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB486933A99DAA9210418CDB61BA9F9@BYAPR03MB4869.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: MBAc9LqRhyscGqiqTl0mAmAZDGBG0UzZgNXw+2OGEC6ujZ5hbTp6O42yMcBMfU9iaKFcLTvsvqzzNDyFrYu8/WETNIb83gLd8929w190YuuguTbkyf5cXfWRggTNLoLYkvpeRxpfmEf7iSEIctI1EcDyFJGofCLY42SIYICv7f8LxBXGczpQek1ViYGdIn6JRI7hg+sSDJ223v/+Fr9XDXdRXHlFFXTsrn+9dugcNzGWCutRXs4cPD4AAIhzRL0DjV3uVj+LanrbgxhT89/KTY1/171SdErSdj09DB6z1iJmSXeNV5GzzA9ulz2nE8lumZZQ5HiejfGgJBNNxrJuhgbSwN72SNtD70pSNSd4A6jA+lWkXFimeL5RTc3+anD4KcbnUKGW4nvcuapoVPDXxHMPL3kzamRxnoV25Dbr6i1Tn1lAa+hNC20OJJo7QjD35Ek+6F+o9nYywSn+DQ7sepKjBVNZjoK88nh4kJOT6marU9aosmQiJ1qMv6+LJum2IPu3HWn0+PxdPwtrUkrJ9Rlx4HiLUS+OKC8jZmXsm/IRibBak/9q5QJxrkQLfdWonkranbL7zRM+8sNLSCR1SuoTi8vVfC1n13hntes33MM=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(366004)(39860400002)(376002)(136003)(346002)(83380400001)(478600001)(54906003)(31686004)(6916009)(5660300002)(6666004)(956004)(2616005)(36756003)(2906002)(316002)(16576012)(26005)(66476007)(31696002)(186003)(16526019)(8676002)(66556008)(66946007)(86362001)(6486002)(53546011)(8936002)(4326008)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?MGRVQUVLTkNvYVN5czJIYkZGbVVtMjFLSlpVNGFvSDNMQXBCOHUvb3o3YUFL?=
 =?utf-8?B?Ry8wQWNYZmhPNEFEcmZSVjdacGpwRjhhRHdSWXZLZlJvNjM0eEVvMjJLL05a?=
 =?utf-8?B?OTFSeENhTm1OMXArc21TY0cweGVPczgvZjBnZVVwM1RSaHhid0dHUGxqRkZ2?=
 =?utf-8?B?dVZnRFJ3WElhUDkvL3VSVnFic2Y4azN6QkVBdUJhYkJXMUdqTXBXVHN6RE9X?=
 =?utf-8?B?SGJSRGk1eHhJZUVWZ2ZYWXNpUG9OSGlQdFF6WDZvbkw1TUthMjZDNloyMk4y?=
 =?utf-8?B?czh1TnJ0RmMzZDluMTVURkN2YS9sb1ZmQlF6ZDQwbTVNNjcrdDVhcGp1QUFT?=
 =?utf-8?B?dUxkMk9pSUNrNFJnTElORTZUejlRMkdIRTkrS2p6ZUFhNTlVZ3ZweiszenQ1?=
 =?utf-8?B?aGpLdWV4K2VadkFVajZqV1FDUjNiL0c5dTIzd2dYaXdCdGl4TDZqVjcyU2tx?=
 =?utf-8?B?U2pPOGFNbmM4ZlBHYkJ6bXJ4bVZxdnVKY3hORzdJRnpsMXZXSU1ESkhaOEhN?=
 =?utf-8?B?QkNGZkk2NytRQnBJWjduY3hEb2xpV1RDR2o4QStRYnpQS1YrdUp6TWhzNW4y?=
 =?utf-8?B?dW9iVkZkQzBSYWJYL3Q5Y01NRWg3Und4aUl2R09mMUF0SFdqc0JwV0FzYWhm?=
 =?utf-8?B?YlpZSVk3VWZvYnAzQmFUaGRyMHN1L014SlNsVkMva01kSE9tcnkxYjZyS0Rh?=
 =?utf-8?B?VHltTVI3bU1qMjA4d1FSM20rRWxEeWdEbUV0cDFBakhTZFdwNUlvdFVObThP?=
 =?utf-8?B?YlZPeVR4RHZldFhCWDZmM3lFMzh2K2w1a1FLYXlGdERncFRLa3FSRFhMbHpj?=
 =?utf-8?B?SllvclhKSHpmbnlRWVQxVzRaRXI4VVhoNkdmaVBlb3IzblBWb1NHdVNPMS9n?=
 =?utf-8?B?WStwZkFHaFVtcVp0S205NTNmMFNXOWdVa1ZacERFbGxBaEZiMjJra2VXcGtS?=
 =?utf-8?B?T3UyS29oNjFyZi9LVnRiWklSbXVKUmpCS2Z2eWNmaTFYNFBTUTVaVGEzTFR3?=
 =?utf-8?B?UEI1N1QvQ0xEUEkydEE5UVpydVJ1YXFLdXVzRngwRS9CQ29FVUFOZWJpdHFl?=
 =?utf-8?B?VjRiNU1HU25uV3pGOURwanVMc1JNWE5zb1BGMG0xS096MThYbE9hTlRLemtx?=
 =?utf-8?B?NUpFejZJaHAyNVB2eVBRcXNtZnJlR2FMRFZPUnRxMzZqd2F2OG16Y0QzTDFT?=
 =?utf-8?B?R3FDRTJWcll1ZFFCWmFtQ0ZXZnFJNmZxQnRIUFFzbGZ6WXQ2UmF1MzRsV3A3?=
 =?utf-8?B?SXRzODhkSDVLWEFWaWxxSzBZWEJaWUFOSDB2VzFCb2RlTktWMUxNR01sa3NY?=
 =?utf-8?B?bkVKTUVqSmZBQ1lwYWJYTDI1QUtiTEQ1ZnJHVDJFRjJKT0FNTkFOYkdQK0dr?=
 =?utf-8?B?S0ZZOFRzRXpFM200M3A3aWRpNjNJNnhqMytmc213bi91aEdJTTBRUGpBamFt?=
 =?utf-8?B?cWhtUEEwYUdZSkY0V1NReFIydHJxdEV1Z0dKNGZFZkhIdW1yclNjSjBCbGFw?=
 =?utf-8?B?MHl5cWdHdTlXQ1F6RHdvUnE2VmszaHZPRktualU5aUd2T25JcjdzbDNZaGhp?=
 =?utf-8?B?cUNwRlQzOHdzL1hZTVZhL1hYUUhrUXY0d1d0N2lJZEdLOHZtS2pFZFdsT2hV?=
 =?utf-8?B?SEo4S0F1Y0NSditIbllWam5ZaXU5ZGxJNlBoWWYvdDd1cVdPbVFkQ2t1REJH?=
 =?utf-8?B?TUFaSy9pNkgzUGVXTDJaSDhjU0NQWGVxL3pIc0g5RmQ3V0d4blFVYUVEUFFJ?=
 =?utf-8?Q?FCN8prO4sdeRWCj9dx+gfcsTmePMz5uDL3H+JDN?=
X-MS-Exchange-CrossTenant-Network-Message-Id: f9066505-cab3-4322-6196-08d8d8f7002d
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Feb 2021 19:04:24.5970
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: XQCu3Sq7mfj80JhvUw5gdDfuTsZQ1B4nLo2p/8I5xruQ9T6nQIZ2SjLAdNPS1bWehGFRuyxTpdc7oXVMvTXXoRT/dJxzRGLzjDZ8gCYQj9A=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4869
X-OriginatorOrg: citrix.com

On 23/02/2021 07:13, Jan Beulich wrote:
> On 22.02.2021 17:47, Andrew Cooper wrote:
>> On 22/02/2021 14:22, Jan Beulich wrote:
>>> On 22.02.2021 15:14, Andrew Cooper wrote:
>>>> On 22/02/2021 10:27, Jan Beulich wrote:
>>>>> Now that we guard the entire Xen VA space against speculative abuse
>>>>> through hypervisor accesses to guest memory, the argument translation
>>>>> area's VA also needs to live outside this range, at least for 32-bit PV
>>>>> guests. To avoid extra is_hvm_*() conditionals, use the alternative VA
>>>>> uniformly.
>>>>>
>>>>> While this could be conditionalized upon CONFIG_PV32 &&
>>>>> CONFIG_SPECULATIVE_HARDEN_GUEST_ACCESS, omitting such extra conditionals
>>>>> keeps the code more legible imo.
>>>>>
>>>>> Fixes: 4dc181599142 ("x86/PV: harden guest memory accesses against speculative abuse")
>>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>>>
>>>>> --- a/xen/arch/x86/mm.c
>>>>> +++ b/xen/arch/x86/mm.c
>>>>> @@ -1727,6 +1727,11 @@ void init_xen_l4_slots(l4_pgentry_t *l4t
>>>>>                 (ROOT_PAGETABLE_FIRST_XEN_SLOT + slots -
>>>>>                  l4_table_offset(XEN_VIRT_START)) * sizeof(*l4t));
>>>>>      }
>>>>> +
>>>>> +    /* Slot 511: Per-domain mappings mirror. */
>>>>> +    if ( !is_pv_64bit_domain(d) )
>>>>> +        l4t[l4_table_offset(PERDOMAIN2_VIRT_START)] =
>>>>> +            l4e_from_page(d->arch.perdomain_l3_pg, __PAGE_HYPERVISOR_RW);
>>>> This virtual address is inside the extended directmap.
>>> No. That one covers only the range excluding the last L4 slot.
>>>
>>>> You're going to
>>>> need to rearrange more things than just this, to make it safe.
>>> I specifically picked that entry because I don't think further
>>> arrangements are needed.
>> map_domain_page() will blindly hand this virtual address if an
>> appropriate mfn is passed, because there are no suitability checks.
>>
>> The error handling isn't great, but at least any attempt to use that
>> pointer would fault, which is now no longer the case.
>>
>> LA57 machines can have RAM or NVDIMMs in a range which will tickle this
>> bug.  In fact, they can have MFNs which would wrap around 0 into guest
>> space.
> This latter fact would be a far worse problem than accesses through
> the last L4 entry, populated or not. However, I don't really follow
> your concern: There are ample cases where functions assume to be
> passed sane arguments. A pretty good (imo) comparison here is with
> mfn_to_page(), which also will assume a sane MFN (i.e. one with a
> representable (in frame_table[]) value). If there was a bug, it
> would be either the caller taking an MFN out of thin air, or us
> introducing MFNs we can't cover in any of direct map, frame table,
> or M2P. But afaict there is guarding against the latter (look for
> the "Ignoring inaccessible memory range" log messages in setup.c).

I'm not trying to say that this patch has introduced the bug.

But we should absolutely have defence in depth where appropriate.  I
don't mind if it is an unrelated change, but 4.15 does start trying to
introduce support for IceLake and I think this qualifies as a reasonable
precaution to add.

> In any event - imo any such bug would need fixing there, rather
> than being an argument against the change here.
>
> Also, besides your objection going quite a bit too far for my taste,
> I miss any form of an alternative suggestion. Do you want the mirror
> range to be put below the canonical boundary? Taking in mind your
> wrapping consideration, about _any_ VA would be unsuitable.

Honestly, I want the XLAT area to disappear entirely.  This is partly
PTSD from the acquire_resource fixes, but was an opinion held from
before that series as well, and I'm convinced that the result without
XLAT will be clearer and faster code.

Obviously, that's not an option at this point in 4.15.


I'd forgotten that the lower half of the address space was available,
and I do prefer that idea.  We don't need to put everything adjacent
together in the upper half.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 19:46:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 19:46:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89490.168609 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lF06X-00082P-C9; Wed, 24 Feb 2021 19:45:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89490.168609; Wed, 24 Feb 2021 19:45:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lF06X-00082I-93; Wed, 24 Feb 2021 19:45:49 +0000
Received: by outflank-mailman (input) for mailman id 89490;
 Wed, 24 Feb 2021 19:45:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lF06W-000827-4J; Wed, 24 Feb 2021 19:45:48 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lF06V-00051T-Rl; Wed, 24 Feb 2021 19:45:47 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lF06V-0001Zl-Jw; Wed, 24 Feb 2021 19:45:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lF06V-0006wd-JU; Wed, 24 Feb 2021 19:45:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=WwLDfcvqUc0RTZnUrZoqfAWtP7gVLPDFG7zB5iq36YU=; b=wJ1UPw7aiQXSYIGo/KfVqRasYW
	KiSq07yb7Pf6QYb7784lotPPc0Yl8aAlogCHpRnZQ1thTrQGZFJIaiIoe2Ot0RRzH1HR+CH2wU+Qo
	k+YWfOmeG0Xw1dmWfPVmTxKbq7diJWGo9VQrUFUiZIfIz+eSBMAW4ZcuedFqP0hrBjeY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159610-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 159610: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=c03c21ba6f4e95e406a1a7b4c34ef334b977c194
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 24 Feb 2021 19:45:47 +0000

flight 159610 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159610/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 linux                c03c21ba6f4e95e406a1a7b4c34ef334b977c194
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  208 days
Failing since        152366  2020-08-01 20:49:34 Z  206 days  357 attempts
Testing same since   159610  2021-02-24 01:55:03 Z    0 days    1 attempts

------------------------------------------------------------
5046 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1237315 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 20:08:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 20:08:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89497.168625 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lF0Se-0001jX-9W; Wed, 24 Feb 2021 20:08:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89497.168625; Wed, 24 Feb 2021 20:08:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lF0Se-0001jQ-6O; Wed, 24 Feb 2021 20:08:40 +0000
Received: by outflank-mailman (input) for mailman id 89497;
 Wed, 24 Feb 2021 20:08:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mT50=H2=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lF0Sd-0001jK-20
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 20:08:39 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 627a97bb-5085-4dbe-9364-6cdc75d7adab;
 Wed, 24 Feb 2021 20:08:37 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 627a97bb-5085-4dbe-9364-6cdc75d7adab
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614197317;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=Jpy5KTCJ9VntIKIhkGd/IlAG+YImCBxAoiDRp24skIo=;
  b=X+ndSAq6SZt54NnnLdg9kHnMzuJRI4oqU4ofWI26XldaSbEG2DQ5u42b
   NRL1VtVI5lj3q3BbISd/iMKjMIzJ24l8XMaoNQ9puAqd+6KN4nAxxSo7L
   ZdzRPnhQyV8c0e45eQpHKxUNu4fz93XRWR/NBtyQwAZ1NJDXziXMEo+8E
   Y=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 37956586
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,203,1610427600"; 
   d="scan'208";a="37956586"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=EUooNNbCyrCIgydRzrZTAOwSyPNbcE8D3lKnQLVo7rF/3XTmZMPIYWivjKvbkpm3LjzafS37K8A1Snx2nO1KMIx2OwUR/65LL84AKzFW4D1gcdJbdVR9MJ8Fw19WuAiz9nHDrcTv7zGrsgaDmDZRjVqY+DatAvuL+4bK9R8H7WC+LNXKVcTeZp+lQoPxrzZ01Sne04F8XFhWLDGXZAWTSJ3V0b8Uh5XM8g86ckF2Rgpn7n2VI6jAbYxCaSu7hi/27xMkOPGbwvM4WDT7YiRiAmrlaWpalRLdgQU2zw0iE97T88CJs0wZneLMsLN5SLyCplqxhw4Qq9p8+7lu9U6kbQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Jpy5KTCJ9VntIKIhkGd/IlAG+YImCBxAoiDRp24skIo=;
 b=Q2uIbybWtJUYUZZEvMaLnfLIujddsWeLABfC+B+J4CliCUfOff7IA33XNVxkBen4LfLgBTXxvUlRIM/Dvu9SVCs+0Fr2DxbjNQm+q4MwrdNvTGjmcLtUoRSYSKvZiovc1W2gQNdL6HqyE3KRGHxgPnVJ7Z2ovNh58I/nc4Pr76G7BADu60rzGpUFf2LsrZGPRNBhnckB6N1Nw9yHNpxBZWYJtM2BOgb0XnQM4iFfSHWzkGVVF8gQ5ozmbnPQ01bpfqSHqTx04qX101v8wmgDAaxGPPT1i1VLeVVQ45qxBarT/VZk/OiFoe4UF8fT8PGqC6/bEZQWLy7o5KQ4JB2BMQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Jpy5KTCJ9VntIKIhkGd/IlAG+YImCBxAoiDRp24skIo=;
 b=UDnj9jbXsTIl3ko2A2mkwYmtUVL+GbWZjViJFNWDp+7keRsTpXNeiLZCJuyDK8sL245fmsdK60Lvfjifi0X1HGaRp4hfGH12uGQkmBwYRRszqIXQZr7D8mLr22zTV/Lh08D8Ko6srelJvuZ4cyVPRU6xgvVs098F0Zq5kX/OYC0=
Subject: Re: [PATCH 0/2] hvmloader: drop usage of system headers
To: Roger Pau Monne <roger.pau@citrix.com>, <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>, Ian Jackson
	<iwj@xenproject.org>
References: <20210224102641.89455-1-roger.pau@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <35864c33-b375-a3c6-13bc-ad1e7d0773eb@citrix.com>
Date: Wed, 24 Feb 2021 20:08:25 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
In-Reply-To: <20210224102641.89455-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0109.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:c::25) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 02a4b95a-30d2-48f3-f087-08d8d8fff635
X-MS-TrafficTypeDiagnostic: SJ0PR03MB5949:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <SJ0PR03MB59490831C7ED57D9C2EF9804BA9F9@SJ0PR03MB5949.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 02a4b95a-30d2-48f3-f087-08d8d8fff635
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Feb 2021 20:08:33.3944
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: RZQJczy81FXFDLkDmHNxHhzPHg8K8xNFZTd/SaVyFpL8uZer2e+wf8y/SEx6TwhoH/iFxGpX2sRU0FGbFHvXJLVi9qLhM11iMc5dcinQnUk=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5949
X-OriginatorOrg: citrix.com

On 24/02/2021 10:26, Roger Pau Monne wrote:
> Hello,
>
> The following two patches aim to make hvmloader standalone, so that it
> doesn't try to use system headers. It shouldn't result in any
> functional change.
>
> Thanks, Roger.

After some experimentation in the Arch container:

Given foo.c as:

#include <stdint.h>

extern uint64_t bar;
uint64_t foo(void)
{
    return bar;
}

int main(void)
{
    return 0;
}

The preprocessed form with `gcc -m32 -E` is:

# 1 "foo.c"
# 1 "<built-in>"
# 1 "<command-line>"
# 31 "<command-line>"
# 1 "/usr/include/stdc-predef.h" 1 3 4
# 32 "<command-line>" 2
# 1 "foo.c"
# 1 "/usr/lib/gcc/x86_64-pc-linux-gnu/8.3.0/include/stdint.h" 1 3 4
# 9 "/usr/lib/gcc/x86_64-pc-linux-gnu/8.3.0/include/stdint.h" 3 4
# 1 "/usr/include/stdint.h" 1 3 4
# 26 "/usr/include/stdint.h" 3 4
# 1 "/usr/include/bits/libc-header-start.h" 1 3 4
# 33 "/usr/include/bits/libc-header-start.h" 3 4
# 1 "/usr/include/features.h" 1 3 4
# 450 "/usr/include/features.h" 3 4
# 1 "/usr/include/sys/cdefs.h" 1 3 4
# 452 "/usr/include/sys/cdefs.h" 3 4
# 1 "/usr/include/bits/wordsize.h" 1 3 4
# 453 "/usr/include/sys/cdefs.h" 2 3 4
# 1 "/usr/include/bits/long-double.h" 1 3 4
# 454 "/usr/include/sys/cdefs.h" 2 3 4
# 451 "/usr/include/features.h" 2 3 4
# 474 "/usr/include/features.h" 3 4
# 1 "/usr/include/gnu/stubs.h" 1 3 4

# 1 "/usr/include/gnu/stubs-32.h" 1 3 4
# 8 "/usr/include/gnu/stubs.h" 2 3 4
# 475 "/usr/include/features.h" 2 3 4
# 34 "/usr/include/bits/libc-header-start.h" 2 3 4
# 27 "/usr/include/stdint.h" 2 3 4
# 1 "/usr/include/bits/types.h" 1 3 4
# 27 "/usr/include/bits/types.h" 3 4
# 1 "/usr/include/bits/wordsize.h" 1 3 4
# 28 "/usr/include/bits/types.h" 2 3 4
# 1 "/usr/include/bits/timesize.h" 1 3 4
# 29 "/usr/include/bits/types.h" 2 3 4

# 31 "/usr/include/bits/types.h" 3 4
typedef unsigned char __u_char;
...

while the freestanding form with `gcc -ffreestanding -m32 -E` is:

# 1 "foo.c"
# 1 "<built-in>"
# 1 "<command-line>"
# 1 "foo.c"
# 1 "/usr/lib/gcc/x86_64-pc-linux-gnu/8.3.0/include/stdint.h" 1 3 4
# 11 "/usr/lib/gcc/x86_64-pc-linux-gnu/8.3.0/include/stdint.h" 3 4
# 1 "/usr/lib/gcc/x86_64-pc-linux-gnu/8.3.0/include/stdint-gcc.h" 1 3 4
# 34 "/usr/lib/gcc/x86_64-pc-linux-gnu/8.3.0/include/stdint-gcc.h" 3 4

# 34 "/usr/lib/gcc/x86_64-pc-linux-gnu/8.3.0/include/stdint-gcc.h" 3 4
typedef signed char int8_t;
...


I can compile and link trivial programs using -m32 and stdint.h without
any issue.

Clearly something more subtle is going on with our choice of options
when compiling hvmloader, but it certainly looks like stdint.h is fine
to use in the way we want to use it.
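For completeness, here is a self-contained variant of foo.c (a sketch of mine, not from the thread: `bar` is given a definition so the file links without extra objects, and the static assertions are added). Whichever copy of <stdint.h> gets picked up, glibc's hosted one or the compiler's freestanding stdint-gcc.h, the exact-width types must have these sizes:

```c
#include <stdint.h>

/* Self-contained variant of foo.c: define the symbol that was
   previously extern, so the file links on its own. */
uint64_t bar = 42;

uint64_t foo(void)
{
    return bar;
}

/* Whichever stdint.h is included (glibc's hosted copy or the
   compiler's stdint-gcc.h), these widths are guaranteed. */
_Static_assert(sizeof(int32_t) == 4, "int32_t must be 4 bytes");
_Static_assert(sizeof(uint64_t) == 8, "uint64_t must be 8 bytes");
```

Both `gcc -m32` and `gcc -ffreestanding -m32` should compile this cleanly, matching the observation above.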

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 20:17:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 20:17:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89501.168637 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lF0b9-0002oE-8J; Wed, 24 Feb 2021 20:17:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89501.168637; Wed, 24 Feb 2021 20:17:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lF0b9-0002o7-4y; Wed, 24 Feb 2021 20:17:27 +0000
Received: by outflank-mailman (input) for mailman id 89501;
 Wed, 24 Feb 2021 20:17:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lF0b7-0002nz-Il; Wed, 24 Feb 2021 20:17:25 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lF0b7-0005dN-8A; Wed, 24 Feb 2021 20:17:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lF0b7-00039C-0F; Wed, 24 Feb 2021 20:17:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lF0b6-0003py-W2; Wed, 24 Feb 2021 20:17:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=X5Qkz5en2msTIF9l44TpCLZ3RWAjRJGFLTVFuA70oJg=; b=KZ+1wgOrmApcWgq0HsehrmEqC6
	zYu8zN+qhV5l4758KLiKn5WbfhAySCU5TaX2Osetc+DGom/1sCyoCyiRroxYh0yPP9BV56B6YASZZ
	RokB5Sfmu2e2jnLpdWoFlYGNuL/mQj7nfHVGpRgsjUZmuCcMuDFbJfh01rChCWGAR2gI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159640-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 159640: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=35f87da8a2debd443ac842db0a3794b17914a8f4
X-Osstest-Versions-That:
    ovmf=739a506b18c4f694b8d5d2f3424a329c45d737ba
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 24 Feb 2021 20:17:24 +0000

flight 159640 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159640/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 35f87da8a2debd443ac842db0a3794b17914a8f4
baseline version:
 ovmf                 739a506b18c4f694b8d5d2f3424a329c45d737ba

Last test of basis   159619  2021-02-24 08:09:48 Z    0 days
Testing same since   159640  2021-02-24 17:10:46 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Fan Wang <fan.wang@intel.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Siyuan Fu <siyuan.fu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   739a506b18..35f87da8a2  35f87da8a2debd443ac842db0a3794b17914a8f4 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 20:58:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 20:58:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89513.168658 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lF1EE-0006ix-II; Wed, 24 Feb 2021 20:57:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89513.168658; Wed, 24 Feb 2021 20:57:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lF1EE-0006iq-Es; Wed, 24 Feb 2021 20:57:50 +0000
Received: by outflank-mailman (input) for mailman id 89513;
 Wed, 24 Feb 2021 20:57:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FJMv=H2=epam.com=prvs=26891aedce=volodymyr_babchuk@srs-us1.protection.inumbo.net>)
 id 1lF1ED-0006ii-4f
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 20:57:49 +0000
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 88f112f4-2d7a-4970-9d11-ea1af0b2a654;
 Wed, 24 Feb 2021 20:57:47 +0000 (UTC)
Received: from pps.filterd (m0174679.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 11OKtdIO024731; Wed, 24 Feb 2021 20:57:44 GMT
Received: from eur05-db8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2108.outbound.protection.outlook.com [104.47.17.108])
 by mx0a-0039f301.pphosted.com with ESMTP id 36vyuywmpy-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Wed, 24 Feb 2021 20:57:44 +0000
Received: from AM0PR03MB3508.eurprd03.prod.outlook.com (2603:10a6:208:4f::23)
 by AM0PR03MB5603.eurprd03.prod.outlook.com (2603:10a6:208:174::31)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3805.24; Wed, 24 Feb
 2021 20:57:39 +0000
Received: from AM0PR03MB3508.eurprd03.prod.outlook.com
 ([fe80::a9a4:6122:8de2:64cb]) by AM0PR03MB3508.eurprd03.prod.outlook.com
 ([fe80::a9a4:6122:8de2:64cb%6]) with mapi id 15.20.3846.048; Wed, 24 Feb 2021
 20:57:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 88f112f4-2d7a-4970-9d11-ea1af0b2a654
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=IFRFkL72iBYUKqZO950N0cjWciICggOPsf3aMDMAHWa6E5PSJNP3oGCCuc/SIzhX0T4aC0jZUmFL0c/gXyZq0PxC4UOe50d8SvvDNz4QDoz6zf70SzD3KmG9pV1REA//lGEY4F51/V37Pib49YDm0pSyjJkszQaGwnt9f1CzlbLYGKnGar58G18g67qmHPgolMQz9u5vihHUbJO394sPsYxvWlBE//XOFAVzyp4F2mKuLBsxMfETMaWfXvEvuz3FdyNkRhxLjw37/ZpnIXgyB19QFrH4RQqcNy/Gvg8sy17rR8g+7INLXwwMtW5TJ8rf1ZoacBXcgt36c6o1KJdxlQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=lrqVEyAHN62wb+g3QOCdMW1S4p9d6yIwI0LuCXq8sQw=;
 b=IVlBbVZAVXBLXftjmQ47R0iFScA5uGylTiJPW3vMBHShDdAMcmk5KJamTK8OQheTNDwm6eTySCZvFO2dGUouyDneophadVfDIh9A+W/r2cjQ+sEZdJZ36mELX14sjx4rH8QNWU7YnrxMPyTEHanAs4I7sDfVde8B6fBRni7Rd/MCh88TviCLsRmMXLLYgfIGLw4lse1ZQxUED9BvwQx9F/OZ4gDKUwfIV1PZEN9iaSFWyBP/QPLikz9Hbhjln0Ppbmz074E5c/A8Sdj8QA4GZohz9qpnBZAiqXRzrzG/ksDY9uJ5sPT1ID89aezI78TO1+IV7RJyyDmWqWuE66/0HA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=lrqVEyAHN62wb+g3QOCdMW1S4p9d6yIwI0LuCXq8sQw=;
 b=Tr+6e5sYQh6KC1GQiRo1pX19oj4pVzy2Bp9qMN+u04ZHBEJI1qJ02I78nof3ziq3DqhJHvcWVF8bWA/mh6IxAiALNyr6DFdvHXBwEt7Hf9TvLfT1vUoKpvegzZ9Oo1fzPda07VWuUJscM/HfGKMUTl08r4PjTMmhRcXmV0b0ULXeDpc9uI6qBHwKGNQZJoJ4Y4MNicaNmO7p2n1izg33i3OzIg2hdwXOXmxBtoPA6YPzQ9meNVs5G9f9evYY+wIhZ94+QFyzp0G8JYeRFEP3jZRZRS2LXDJleS+HypSTf8LJOlVecb9OR98+WA5BhLOSOo0x5qSq/aIvDIL/IsWdkg==
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: Julien Grall <julien@xen.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        George
 Dunlap <george.dunlap@citrix.com>,
        Dario Faggioli <dfaggioli@suse.com>, Meng
 Xu <mengxu@cis.upenn.edu>,
        Andrew Cooper <andrew.cooper3@citrix.com>,
        Ian
 Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>,
        Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [RFC PATCH 00/10] Preemption in hypervisor (ARM only)
Thread-Topic: [RFC PATCH 00/10] Preemption in hypervisor (ARM only)
Thread-Index: AQHXCYx4A6OUUHr1gkqxWv1TEOLkuqplchSAgAAzdYCAAXE/AIAAtX0A
Date: Wed, 24 Feb 2021 20:57:39 +0000
Message-ID: <87o8g99oal.fsf@epam.com>
References: <20210223023428.757694-1-volodymyr_babchuk@epam.com>
 <e6d8726c-4074-fe4c-dbbe-e879da2bb7f6@xen.org> <87zgzv56pm.fsf@epam.com>
 <c1c55bcb-dfd4-a552-836a-985268655cf1@xen.org>
In-Reply-To: <c1c55bcb-dfd4-a552-836a-985268655cf1@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
user-agent: mu4e 1.4.15; emacs 27.1
authentication-results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=epam.com;
x-originating-ip: [176.36.48.175]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 9e1923a1-095b-4f72-1b71-08d8d906d27a
x-ms-traffictypediagnostic: AM0PR03MB5603:
x-microsoft-antispam-prvs: 
 <AM0PR03MB5603DDDBAB713D4272BF5989E69F9@AM0PR03MB5603.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB3508.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 9e1923a1-095b-4f72-1b71-08d8d906d27a
X-MS-Exchange-CrossTenant-originalarrivaltime: 24 Feb 2021 20:57:39.5914
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: prLLnA59xLrlASnYjXmbeuBXfUOVEjowB+yM1dywhqvFh/4gGmlEAKLNvewmVw/L1LiCUVKjki61I01tmb2XcV3jvV3eAea/CmtVXWAUx3E=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR03MB5603
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 clxscore=1015 adultscore=0
 spamscore=0 mlxlogscore=925 phishscore=0 bulkscore=0 mlxscore=0
 suspectscore=0 impostorscore=0 malwarescore=0 lowpriorityscore=0
 priorityscore=1501 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2102240163


Hi Julien,

Julien Grall writes:

> On 23/02/2021 12:06, Volodymyr Babchuk wrote:
>> Hi Julien,
>
> Hi Volodymyr,
>
>> Julien Grall writes:
>>> On 23/02/2021 02:34, Volodymyr Babchuk wrote:
>>> ... just rescheduling the vCPU. It will also give the opportunity for
>>> the guest to handle interrupts.
>>>
>>> If you don't return to the guest, then you risk getting an RCU sched
>>> stall on that vCPU (some hypercalls can take really, really long).
>> Ah yes, you are right. I only wish that the hypervisor saved the
>> context of the hypercall on its side...
>> I have the example of OP-TEE before my eyes. It has a special return
>> code, "task was interrupted", and a separate call, "continue execution
>> of interrupted task", which takes an opaque context handle as a
>> parameter. With this approach the state of an interrupted call never
>> leaks to the rest of the system.
>
> Feel free to suggest a new approach for the hypercals.
>

I believe I suggested it right above. There are some corner cases that
should be addressed, of course.

>>>
>>>> This approach itself has obvious
>>>> problems: code that executes a hypercall is responsible for preemption,
>>>> preemption checks are infrequent (because they are costly by
>>>> themselves), hypercall execution state is stored in a guest-controlled
>>>> area, and we rely on the guest's good will to continue the hypercall.
>>>
>>> Why is it a problem to rely on the guest's good will? The hypercalls
>>> should be preempted at a boundary that is safe to continue from.
>> Yes, and it imposes restrictions on how to write a hypercall handler.
>> In other words, there are many more places in hypercall handler code
>> where it can be preempted than where a hypercall continuation can be
>> used. For example, you can preempt a hypercall that holds a mutex, but
>> of course you can't create a continuation point in such a place.
>
> I disagree, you can create a continuation point in such a place.
> Although it will be more complex, because you have to make sure you
> break the work at a restartable place.

Maybe there is some misunderstanding. You can't create a hypercall
continuation point in a place where you are holding a mutex, because
there is absolutely no guarantee that the guest will restart the
hypercall.

But you can preempt a vCPU while it holds a mutex, because Xen owns the
scheduler and can guarantee that the vCPU will eventually be scheduled
to continue the work and release the mutex.
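The "preempt, but defer while in atomic context" idea can be sketched in plain C (hypothetical names, loosely inspired by the PoC's try_preempt()/preempt_count; this is not the actual Xen code): a preemption request arriving inside an atomic section is recorded and honoured only when the outermost section is left.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical sketch, not Xen code: a deferred-preemption scheme. */
atomic_int preempt_count;          /* depth of nested atomic sections */
bool resched_pending;              /* a preemption request was deferred */
int  resched_taken;                /* how many times we "scheduled" */

static void do_schedule(void) { resched_taken++; }  /* stand-in for schedule() */

void preempt_disable(void) { atomic_fetch_add(&preempt_count, 1); }

void preempt_enable(void)
{
    /* Leaving the outermost atomic section: honour a deferred request. */
    if (atomic_fetch_sub(&preempt_count, 1) == 1 && resched_pending) {
        resched_pending = false;
        do_schedule();
    }
}

/* Called from the IRQ path (the role do_trap_irq() plays in the PoC). */
void try_preempt(void)
{
    if (atomic_load(&preempt_count) == 0)
        do_schedule();
    else
        resched_pending = true;    /* retry after leaving atomic state */
}
```

A request that arrives while the count is non-zero is acted on only once preempt_enable() drops the count back to zero, which is how a mutex-holding section can still be preempted at the vCPU level without ever becoming a continuation point.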

> I would also like to point out that preemption also has some drawbacks.
> With RT in mind, you have to deal with priority inversion (e.g. a
> lower-priority vCPU holds a mutex that is required by a higher-priority
> one).

Of course. This is not as simple as "just call the scheduler whenever we
want to".

> Outside of RT, you have to be careful about where mutexes are held. In
> your earlier answer, you suggested holding a mutex for the memory
> allocation. If you do that, then it means a domain A can block
> allocation for domain B while it holds the mutex.

As long as we do not exit to EL1 with a mutex held, domain A can't block
anything. Of course, we have to deal with priority inversion, but a
malicious domain can't cause a DoS.

> This can lead to quite serious problem if domain A cannot run (because
> it exhausted its credit) for a long time.
>

I believe this problem is related to the priority inversion problem, and
they should be addressed together.

>>
>>>> All this
>>>> imposes restrictions on which hypercalls can be preempted, when they
>>>> can be preempted, and how to write hypercall handlers. Also, it
>>>> requires very accurate coding and has already led to at least one
>>>> vulnerability - XSA-318. Some hypercalls cannot be preempted at all,
>>>> like the one mentioned in [1].
>>>> Absence of hypervisor threads/vCPUs. The hypervisor owns only idle
>>>> vCPUs, which are supposed to run when the system is idle. If the
>>>> hypervisor needs to execute its own tasks right now, it has no other
>>>> way than to execute them on the current vCPU. But the scheduler does
>>>> not know that the hypervisor is executing a hypervisor task, and it
>>>> accounts the spent time to a domain. This can lead to domain
>>>> starvation.
>>>> Also, the absence of hypervisor threads leads to the absence of
>>>> high-level synchronization primitives like mutexes, condition
>>>> variables, completions, etc. This leads to two problems: we need to
>>>> use spinlocks everywhere, and we have problems when porting device
>>>> drivers from the Linux kernel.
>>>> Proposed solution
>>>> =================
>>>> It is quite obvious that to fix the problems above we need to allow
>>>> preemption in hypervisor mode. I am not familiar with the x86 side,
>>>> but for ARM it was surprisingly easy to implement. Basically, vCPU
>>>> context in hypervisor mode is determined by its stack and general
>>>> purpose registers. And the __context_switch() function perfectly
>>>> switches them when running in hypervisor mode. So there are no hard
>>>> restrictions on why it should be called only in the leave_hypervisor()
>>>> path.
>>>> The obvious question is: when should we try to preempt a running
>>>> vCPU? And the answer is: when there was an external event. This means
>>>> that we should try to preempt only when there was an interrupt request
>>>> while we are running in hypervisor mode. On ARM, in this case the
>>>> function do_trap_irq() is called. The problem is that the IRQ handler
>>>> can be called when the vCPU is already in an atomic state (holding a
>>>> spinlock, for example). In this case we should try to preempt right
>>>> after leaving the atomic state. This is basically the whole idea
>>>> behind this PoC.
>>>> Now, about the series composition.
>>>> Patches
>>>>     sched: core: save IRQ state during locking
>>>>     sched: rt: save IRQ state during locking
>>>>     sched: credit2: save IRQ state during locking
>>>>     preempt: use atomic_t to for preempt_count
>>>>     arm: setup: disable preemption during startup
>>>>     arm: context_switch: allow to run with IRQs already disabled
>>>> prepare the groundwork for the rest of the PoC. It appears that not
>>>> all code is ready to be executed in IRQ state, and schedule() can now
>>>> be called at the end of do_trap_irq(), which technically is considered
>>>> IRQ handler state. Also, it is unwise to try to preempt things while
>>>> we are still booting, so we need to enable atomic context during the
>>>> boot process.
>>>
>>> I am really surprised that these are the only changes necessary in
>>> Xen. For a first approach, we may want to be conservative about when
>>> the preemption happens, as I am not convinced that all the places are
>>> safe to preempt.
>>>
>> Well, I can't say that I ran extensive tests, but I played with this
>> for some time and it seemed quite stable. Of course, I had some
>> problems with RTDS...
>> As I see it, Xen already supports SMP, so all places where races are
>> possible should already be covered by spinlocks or taken into account
>> by some other means.
> That's correct for shared resources. I am more worried about any
> hypercalls that are expected to run more or less continuously (e.g. not
> taking interrupts into account) on the same pCPU.

Are there many such hypercalls? They can disable preemption if they
really need to run on the same pCPU. As I understand it, they should be
relatively fast, because they can't create continuations anyway.

>> Places which may not be safe to preempt are clustered around the task
>> management code itself: schedulers, Xen entry/exit points, vCPU
>> creation/destruction and such.
>> For example, we surely do not want to destroy a vCPU which was
>> preempted in hypervisor mode. I didn't cover this case, by the way.
>>
>>>> Patches
>>>>     preempt: add try_preempt() function
>>>>     sched: core: remove ASSERT_NOT_IN_ATOMIC and disable preemption[!]
>>>>     arm: traps: try to preempt before leaving IRQ handler
>>>> are basically the core of this PoC. The try_preempt() function tries
>>>> to preempt the vCPU both when called by the IRQ handler and when
>>>> leaving atomic state. The scheduler now enters atomic state to ensure
>>>> that it will not preempt itself. do_trap_irq() calls try_preempt() to
>>>> initiate preemption.
>>>
>>> AFAICT, try_preempt() will deal with the rescheduling. But how about
>>> softirqs? Don't we want to handle them in try_preempt() as well?
>> Well, yes and no. We have the following softirqs:
>>   TIMER_SOFTIRQ - should be called, I believe
>>   RCU_SOFTIRQ - I'm not sure about this, but probably no
>
> When would you call RCU callback then?
>

I'm not sure. But I think they should be called in the same place as
always: while leaving the hypervisor. I'm not very familiar with RCU,
though, so I may be talking nonsense.

>>   SCHED_SLAVE_SOFTIRQ - no
>>   SCHEDULE_SOFTIRQ - no
>>   NEW_TLBFLUSH_CLOCK_PERIOD_SOFTIRQ - should be moved to a separate
>>   thread, maybe?
>>   TASKLET_SOFTIRQ - should be moved to a separate thread
>>
>> So, it looks like only timers should be handled for sure.
>>
>>>
>>> [...]
>>>
>>>> Conclusion
>>>> ==========
>>>> My main intention is to begin a discussion of hypervisor preemption.
>>>> As I showed, it is doable right away and provides some immediate
>>>> benefits. I do understand that a proper implementation requires much
>>>> more effort. But we are ready to do this work if the community is
>>>> interested in it.
>>>> Just to reiterate the main benefits:
>>>> 1. More controllable latency. On embedded systems, customers care
>>>> about such things.
>>>
>>> Is the plan to only offer preemptible Xen?
>>>
>> Sorry, didn't get the question.
>
> What's your plan for the preemption support? Will an admin be able to
> configure Xen to be either preemptible or not?

Honestly, it would be much easier to enable it unconditionally. But I
understand that this is not feasible, so I'm looking at a build-time
option.

-- 
Volodymyr Babchuk at EPAM


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 22:32:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 22:32:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89524.168676 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lF2h3-0007zB-Hd; Wed, 24 Feb 2021 22:31:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89524.168676; Wed, 24 Feb 2021 22:31:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lF2h3-0007z4-E3; Wed, 24 Feb 2021 22:31:41 +0000
Received: by outflank-mailman (input) for mailman id 89524;
 Wed, 24 Feb 2021 22:31:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=icB/=H2=gmail.com=julien.grall.oss@srs-us1.protection.inumbo.net>)
 id 1lF2h1-0007yx-R6
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 22:31:39 +0000
Received: from mail-ej1-x62e.google.com (unknown [2a00:1450:4864:20::62e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6d2da650-9233-4ec8-8fa9-33c6eb1148d7;
 Wed, 24 Feb 2021 22:31:38 +0000 (UTC)
Received: by mail-ej1-x62e.google.com with SMTP id r17so5579689ejy.13
 for <xen-devel@lists.xenproject.org>; Wed, 24 Feb 2021 14:31:38 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6d2da650-9233-4ec8-8fa9-33c6eb1148d7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=1N+NfVeW0w/E2tmCkNoCPMbM+AAGCUx8W2J3DLeqO8A=;
        b=JtLQ0t0TOgEU6tD1Nhj43sO7Wlh75TJOa4cEGngj7ycrQKQKv+yF/rle2e9mzTreVT
         SyjSyDjDY+dYKB+CrN3U/2er0oTroOxhUph9XeLQ7lsf7Q0NyikTCxQhce9jX3qZM42P
         n7Zz3JN+srPVB2C6Oe4UwJuytN+e42dyddlyIN2z9OhxKVPtof7anFsb9RMkfqm6ITMC
         /fm4nSVXgzan0iik84DlvBOq7QzEwMp4qCRsKPo8nQJEVltBVFwtkS/qR1XCiUMFOYAi
         f1d81UlcDttmTqUgWcwz1SNesuuQA/aSCHK2JgCfR3AwlWXRvWLx2h+x/v4xH/dq4aZO
         sFPg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=1N+NfVeW0w/E2tmCkNoCPMbM+AAGCUx8W2J3DLeqO8A=;
        b=KYlpJ5S7Jq0dQ9cejeVkIxaXhe0gPurY2k1jfl8CAiTaZpToh0yARz05qtujqWvqCg
         LA3U5H+kNxQAXzXoS1IimO4GOTt9KgAaSRwbgLFo4FEBsMRvdYTfrUQrtN/Dkt0JcVAh
         HbnZ6ihFbj9H2/9DIG94m769vywUo5/LBakyEGbU/86C1/BraLPqdqibd2d1c0Eq+Cz+
         kF9BcuYXk9DUJwgqICprksHjMahHHhuiqCLZo4vNWx46XTvUDpK6YeNqHPmegkUICMLp
         TujyuK0K+3OLwx/7kT9ZHNdJ2/rfhJf/1v+LVr3uDVNoy4RJDJYApsea+nKmTgoSJmUd
         S/Mw==
X-Gm-Message-State: AOAM5333/NAFKknEqtLFSAavRyHodIH0bw61DTu4h3JKet4koxI9joW1
	mMwE6e2jsy8y87c0QVdsujlu6RC70KBH+apQBjU=
X-Google-Smtp-Source: ABdhPJw4OhlXWUwqBA0MeBP/F3FT8oZjDTvZaS0O+o9T0MIGdgf3ktQ+VXDcBbpwDBaiHn/KOSC2YC3Egp0TISruCxI=
X-Received: by 2002:a17:906:a101:: with SMTP id t1mr31081517ejy.182.1614205897384;
 Wed, 24 Feb 2021 14:31:37 -0800 (PST)
MIME-Version: 1.0
References: <20210223023428.757694-1-volodymyr_babchuk@epam.com>
 <e6d8726c-4074-fe4c-dbbe-e879da2bb7f6@xen.org> <87zgzv56pm.fsf@epam.com>
 <c1c55bcb-dfd4-a552-836a-985268655cf1@xen.org> <87o8g99oal.fsf@epam.com>
In-Reply-To: <87o8g99oal.fsf@epam.com>
From: Julien Grall <julien.grall.oss@gmail.com>
Date: Wed, 24 Feb 2021 22:31:26 +0000
Message-ID: <CAJ=z9a0v37rc_B7xVdQECAYd52PJ0UajGzvX1DYP56Q2RXQ2Tw@mail.gmail.com>
Subject: Re: [RFC PATCH 00/10] Preemption in hypervisor (ARM only)
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	George Dunlap <george.dunlap@citrix.com>, Dario Faggioli <dfaggioli@suse.com>, 
	Meng Xu <mengxu@cis.upenn.edu>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	Ian Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Content-Type: text/plain; charset="UTF-8"

On Wed, 24 Feb 2021 at 20:58, Volodymyr Babchuk
<Volodymyr_Babchuk@epam.com> wrote:
>
>
> Hi Julien,
>
> Julien Grall writes:
>
> > On 23/02/2021 12:06, Volodymyr Babchuk wrote:
> >> Hi Julien,
> >
> > Hi Volodymyr,
> >
> >> Julien Grall writes:
> >>> On 23/02/2021 02:34, Volodymyr Babchuk wrote:
> >>> ... just rescheduling the vCPU. It will also give the opportunity for
> >>> the guest to handle interrupts.
> >>>
> >>> If you don't return to the guest, then you risk getting an RCU sched
> >>> stall on that vCPU (some hypercalls can take really, really long).
> >> Ah yes, you are right. I only wish that the hypervisor saved the
> >> context of the hypercall on its side...
> >> I have the example of OP-TEE before my eyes. They have a special
> >> return code "task was interrupted" and they use a separate call
> >> "continue execution of interrupted task", which takes an opaque
> >> context handle as a parameter. With this approach, the state of the
> >> interrupted call never leaks to the rest of the system.
> >
> > Feel free to suggest a new approach for the hypercalls.
> >
>
> I believe I suggested one right above. There are some corner cases that
> should be addressed, of course.

If we wanted a clean break, then possibly yes.  But I meant one that doesn't
break all the existing users and doesn't put Xen at risk.

I don't believe your approach fulfills that.

>
> >>>
> >>>> This approach itself have obvious
> >>>> problems: code that executes hypercall is responsible for preemption,
> >>>> preemption checks are infrequent (because they are costly by
> >>>> themselves), hypercall execution state is stored in guest-controlled
> >>>> area, we rely on guest's good will to continue the hypercall.
> >>>
> >>> Why is it a problem to rely on guest's good will? The hypercalls
> >>> should be preempted at a boundary that is safe to continue.
> >> Yes, and it imposes restrictions on how to write hypercall
> >> handlers.
> >> In other words, there are many more places in hypercall handler code
> >> where it can be preempted than places where a hypercall continuation
> >> can be used. For example, you can preempt a hypercall that holds a
> >> mutex, but of course you can't create a continuation point in such a
> >> place.
> >
> > I disagree: you can create a continuation point in such a place,
> > although it will be more complex because you have to make sure you
> > break the work at a restartable point.
>
> Maybe there is some misunderstanding. You can't create a hypercall
> continuation point in a place where you are holding a mutex, because
> there is absolutely no guarantee that the guest will restart the
> hypercall.

I don't think we are disagreeing here. My point is that you should rarely
need to hold a mutex for a long period, so you can break your work into
smaller chunks. In those cases, you can use hypercall continuations.

>
> But you can preempt a vCPU while holding a mutex, because Xen owns the
> scheduler and can guarantee that the vCPU will eventually be scheduled
> to continue the work and release the mutex.

The problem is the "eventually". If you are accounting the time spent
in the hypervisor to the vCPU A, then there is a possibility that it
has exhausted its time slice. In which case, your vCPU A may be
sleeping for a while with a mutex held.

If another vCPU B needs the mutex, it will either have to wait
potentially for a long time or we need to force vCPU A to run on
borrowed time.

>
> > I would also like to point out that preemption also has some drawbacks.
> > With RT in mind, you have to deal with priority inversion (e.g. a
> > lower-priority vCPU holds a mutex that is required by a higher-priority
> > one).
>
> Of course. This is not as simple as "just call scheduler when we want
> to".

Your e-mail made it sound like it was easy to add preemption in Xen. ;)

>
> > Outside of RT, you have to be careful where mutexes are held. In your
> > earlier answer, you suggested holding a mutex for the memory
> > allocation. If you do that, then it means domain A can block
> > allocation for domain B while it holds the mutex.
>
> As long as we do not exit to EL1 with a mutex held, domain A can't
> block anything. Of course, we have to deal with priority inversion, but
> a malicious domain can't cause a DoS.

It is not really a priority inversion problem outside of RT because
all the tasks will have the same priority. It is more a time
accounting problem because each vCPU may have a different number of
credits.

> >>> I am really surprised that this is the only changes necessary in
> >>> Xen. For a first approach, we may want to be conservative when the
> >>> preemption is happening as I am not convinced that all the places are
> >>> safe to preempt.
> >>>
> >> Well, I can't say that I ran extensive tests, but I played with this
> >> for some time and it seemed quite stable. Of course, I had some
> >> problems with RTDS...
> >> As I see it, Xen already supports SMP, so all places where races are
> >> possible should already be covered by spinlocks or taken into account
> >> by some other means.
> > That's correct for shared resources. I am more worried about any
> > hypercalls that are expected to run more or less continuously (e.g. not
> > taking interrupts into account) on the same pCPU.
>
> Are there many such hypercalls? They can disable preemption if they
> really need to run on the same pCPU. As I understand, they should be
> relatively fast, because they can't create continuations anyway.

Well, I never tried to make Xen preemptible... My comment is based on
the fact that the use of preempt_{enable, disable}() was mostly done on a
best-effort basis.

The usual suspects are anything using this_cpu() or interacting with
the per-CPU HW registers.

From a quick look, here are a few things (only looked at Arm):
  * map_domain_page() in particular on arm32 because this is using
per-CPU page-tables
  * guest_atomics_* as this uses this_cpu()
  * virt_to_mfn() in particular the failure path
  * Incorrect (or missing) use of RCU locking. (Hopefully Juergen's
recent work on RCU mitigates the risk.)

I can provide guidance, but you will have to go through the code and
check what's happening.

Cheers,


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 22:43:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 22:43:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89531.168693 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lF2sm-0000ek-QJ; Wed, 24 Feb 2021 22:43:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89531.168693; Wed, 24 Feb 2021 22:43:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lF2sm-0000ed-NQ; Wed, 24 Feb 2021 22:43:48 +0000
Received: by outflank-mailman (input) for mailman id 89531;
 Wed, 24 Feb 2021 22:43:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lF2sl-0000eV-Os; Wed, 24 Feb 2021 22:43:47 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lF2sl-0007z8-Kt; Wed, 24 Feb 2021 22:43:47 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lF2sl-0001Pc-FK; Wed, 24 Feb 2021 22:43:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lF2sl-0001YI-Eq; Wed, 24 Feb 2021 22:43:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=M4UWhrY5z6CdYzVtv9g4gF+7ZsJA1ki/BXRjUyFBURM=; b=c1UXWhyNNwOkB8l6sjNJ+o+AMt
	5fhJXgyeB0sB9o9lLgTXHbFLqmcqAtsfemMKqkaeXV8UrW0EM1x9szQjIhZmtBXNbm73MWLAfXCNC
	BGOwEILYF/lJBB+/ht+xSYCOjk1RvXwIsjxjcBtqOnTf2tTc7tL8pcVe0WSpQZbuUqA4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159644-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 159644: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=60390ccb8b9b2dbf85010f8b47779bb231aa2533
X-Osstest-Versions-That:
    xen=5d94433a66df29ce314696a13bdd324ec0e342fe
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 24 Feb 2021 22:43:47 +0000

flight 159644 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159644/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  60390ccb8b9b2dbf85010f8b47779bb231aa2533
baseline version:
 xen                  5d94433a66df29ce314696a13bdd324ec0e342fe

Last test of basis   159600  2021-02-23 20:01:30 Z    1 days
Failing since        159624  2021-02-24 12:01:29 Z    0 days    3 attempts
Testing same since   159644  2021-02-24 19:01:47 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   5d94433a66..60390ccb8b  60390ccb8b9b2dbf85010f8b47779bb231aa2533 -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 23:38:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 Feb 2021 23:38:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89544.168715 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lF3j0-0005cA-TY; Wed, 24 Feb 2021 23:37:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89544.168715; Wed, 24 Feb 2021 23:37:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lF3j0-0005c3-QF; Wed, 24 Feb 2021 23:37:46 +0000
Received: by outflank-mailman (input) for mailman id 89544;
 Wed, 24 Feb 2021 23:37:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FJMv=H2=epam.com=prvs=26891aedce=volodymyr_babchuk@srs-us1.protection.inumbo.net>)
 id 1lF3iy-0005bx-RJ
 for xen-devel@lists.xenproject.org; Wed, 24 Feb 2021 23:37:45 +0000
Received: from mx0b-0039f301.pphosted.com (unknown [148.163.137.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 71b2c484-fd50-4f84-8830-f4195bb5fe64;
 Wed, 24 Feb 2021 23:37:43 +0000 (UTC)
Received: from pps.filterd (m0174681.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 11ONaRVv029276; Wed, 24 Feb 2021 23:37:39 GMT
Received: from eur05-am6-obe.outbound.protection.outlook.com
 (mail-am6eur05lp2106.outbound.protection.outlook.com [104.47.18.106])
 by mx0b-0039f301.pphosted.com with ESMTP id 36w16admar-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Wed, 24 Feb 2021 23:37:39 +0000
Received: from AM0PR03MB3508.eurprd03.prod.outlook.com (2603:10a6:208:4f::23)
 by AM0PR03MB4500.eurprd03.prod.outlook.com (2603:10a6:208:d0::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3868.28; Wed, 24 Feb
 2021 23:37:35 +0000
Received: from AM0PR03MB3508.eurprd03.prod.outlook.com
 ([fe80::a9a4:6122:8de2:64cb]) by AM0PR03MB3508.eurprd03.prod.outlook.com
 ([fe80::a9a4:6122:8de2:64cb%6]) with mapi id 15.20.3846.048; Wed, 24 Feb 2021
 23:37:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 71b2c484-fd50-4f84-8830-f4195bb5fe64
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=V4eswuERnQRkymqscH0vNRL+22D08R0TmFWLsk7Cqoae2+wBIB9ga1cXmMZyXK1iUB8T+s406rbJ/xi6tqwsARnzSZXmS0U34eqJ50/Uru19G8zX+JW5Ac6CwjnrlB4q4BCXBcut9JtTb3sycRsgOkhW+7DCEidyij83lh4t4NH1W4omwaYQqvRBK3vAJNo/Yqks8yqUUmKwpCmCMvNvZSZnr/6CqFf58Pr/9cXkBg3hvGGbLxiWxWI+EjN9loA69+/FrjinAQAxB2IrSAX5OwtGAZYzitSdLLk85sMv1UIWL3VjcComdtu1HEdG4rLWtUgG+MHD0Rc6a7rdnPK8dg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=TaL74+wSv6ihcnPfmovGf+NX1Bf0YN6d4/H2CJgZAKk=;
 b=aaD1bS2/JBZW7wiUT+iRhs8U1736AehZqD4GMaFney622b9SOmgCrXYv4Mh8UZJcT7x9oDPeszJ6gY8zgzU+p0mU57RJTHVelpOimP26NyOktXi9JELNauqqgjkebpRy+qweNWf39klD7SwsPlb87mKZ1KIlr4xCY1pHzrvx9RuT/Ph6pZN4y3cfsuRJNttNtwsgmQ1g7gJ9+jVtd4Z6mmcZ/9y3f0DLr833fG2DmCEO02WeonkO0jeXsRDWwyr8JacSMazxyTxbVzetkdnsb35BhTk7rcLjZ8CRKXvBLlRQGOMDiT88ZoNtaP6q2VjVe+lW4GECGmKE8uleUIc33g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=TaL74+wSv6ihcnPfmovGf+NX1Bf0YN6d4/H2CJgZAKk=;
 b=5UsJKN0tSwdppuJz/MdC5t9hVzLapxhQxN+GhY2+pQEnINiEiEzsxRaBjC0D77PWKF9crjSl2pdDU9FqZo0ek7497xN1UXWueDNQq7KCfgL5IgWyz3N8k0PD7y35m6b1AWDIAhuEMsf51q2+PQQ7ztPTs++a/PsYnHdtPcIuZF79CarerTM0Js4b6sKCw476866p8tFnyoUwx/2lmorVReqKgtUyx30mQFbwlDJ4sBaPykKKAaHr3aM2ziXzPd/i0HTROcmRcYvjQ9/eNQ7Hhv4UILy8JjSA+rawgFxUJuQA/ut9C9pKnlUXZQX0JNO6ieWE/0ehGVSBlVQd3/e5tw==
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        George
 Dunlap <george.dunlap@citrix.com>,
        Dario Faggioli <dfaggioli@suse.com>, Meng
 Xu <mengxu@cis.upenn.edu>,
        Ian Jackson <iwj@xenproject.org>, Jan Beulich
	<jbeulich@suse.com>,
        Julien Grall <julien@xen.org>,
        Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [RFC PATCH 00/10] Preemption in hypervisor (ARM only)
Thread-Topic: [RFC PATCH 00/10] Preemption in hypervisor (ARM only)
Thread-Index: AQHXCYx4A6OUUHr1gkqxWv1TEOLkuqpnnLaAgABcP4A=
Date: Wed, 24 Feb 2021 23:37:35 +0000
Message-ID: <87k0qx9gw0.fsf@epam.com>
References: <20210223023428.757694-1-volodymyr_babchuk@epam.com>
 <25034a7a-83ed-0848-8d23-67ed9d02c61c@citrix.com>
In-Reply-To: <25034a7a-83ed-0848-8d23-67ed9d02c61c@citrix.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
user-agent: mu4e 1.4.15; emacs 27.1
authentication-results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [176.36.48.175]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: d4357e30-529a-4eaf-b802-08d8d91d2a36
x-ms-traffictypediagnostic: AM0PR03MB4500:
x-microsoft-antispam-prvs: 
 <AM0PR03MB4500DBEAC9756C66DB605D46E69F9@AM0PR03MB4500.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 eJibFc3HM2C/p1O1vGdKIaq0gySQy3OUiI+AlW2wkTV81tXKL6UYOOkFSlXLilHOqGudrhdZBjNd5Gwjd0jgrtdArIbgy5SLrxhzputeH4ggdkL++cOwDIav8h/Gjd4Y06O9xM5FAjPdHZEKPSESpnFfGtzJDtxX2JDOmTH2E5KaWCpml4Iq5jIdJ54SuwqXo1gMVswLqBrh6JwBSBFHOT122s3Z0k3PsmuCXMPq/EMh+JNqzxX66ERrK8eDdxVJWTkbJMlLJ6KDwpevSWG0istkrnybclM/8G4+os+/9cnEAT3zlTAhvcBDBdHBVrsGK1DflQgQvRMI/H/KCNM1I2TEnP6zf97taOgwXOQqEVbU9YSM6ygFOAN4ur9k4hBwDVTjzwbpoeo1zSvZf3Oi5e7WZ67kc190u8Sed2mifJNT+BrJEHvB7HoY0xg0pU/n+6EUC2/1GH1B2wY9Rx4k8ghs5Rg8pg4zBd4optR9huMzwpS6Dguo5yY6q6zuyFMXFQwUhQNb5MVthfmFYKfKhdOT9pH47rVEYbHLAQHHHcaR2hB1AvsBcPIqTTqptJVCMic6AIk65UCVnaDNIjcREQ==
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR03MB3508.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(346002)(376002)(136003)(396003)(366004)(39860400002)(5660300002)(66446008)(76116006)(8936002)(2616005)(6506007)(186003)(53546011)(66476007)(6486002)(54906003)(26005)(66946007)(55236004)(64756008)(66556008)(6916009)(6512007)(36756003)(4326008)(8676002)(2906002)(478600001)(71200400001)(7416002)(86362001)(83380400001)(316002)(557034005);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 
 =?iso-8859-1?Q?ShtrSPxK9gQg8w9DEiXqAsL2DhSpL1udkj5frVLg4RinbiSYihoJph45WV?=
 =?iso-8859-1?Q?UTVWTgPX7s1PLppH6laSczSZ8/zTo2qq7E+IsHLma+6ORMQ/AllOQlssci?=
 =?iso-8859-1?Q?cmserFJtVpmEpxCEFO4k9qU5f6RKkcrbhXoGjxFNvBbMClgA5fM6kiiMOV?=
 =?iso-8859-1?Q?fQ85nWZ29ZicNpFgcaY1g29gS8urtg1iXfRKpnLd5kqwhrfiQOPZGurNfM?=
 =?iso-8859-1?Q?w0g1rCu85YEliaiAFwPKNf3UE8hqcRs0TanhzB2pSAAzybGCLlwJNvb/bJ?=
 =?iso-8859-1?Q?8HKdGfujwBi8RFsIUFQI+ZIxInkO48XlOvEg9rA80Pb3cAn9kIPKyhu1wd?=
 =?iso-8859-1?Q?iKi2V6skqfKMeNFhgoZD5rBTQ6vYZFLiK3TLW8r5iqg7YWH5rbm2bWrfJv?=
 =?iso-8859-1?Q?MhFPTDo8hmAMIrVJwMroa39Rd5y7NeHgb3LzF08Iz/wHnTW112UYvmuijo?=
 =?iso-8859-1?Q?wY2E8YdxGkjS664CSDKMgotrFgQZ2IPOaZKifdoL62/872viIe3pVynn1H?=
 =?iso-8859-1?Q?xfFYjqqhCvdzCBI3byvgwydN5HHkAQZfNPlikaMZBAQJy72IJ8VvLAxpCC?=
 =?iso-8859-1?Q?TlVG/2AUxiF8jd1BZoEewnboPzexvws4ykBJO5iWuiXJoniDbTzOIpJxxb?=
 =?iso-8859-1?Q?ZVg2uy/+hgDxeXCpJGBGiMAGZM0jPUw92YXYECxfcXzQpizSHtXgPd0llc?=
 =?iso-8859-1?Q?xLBql6F1i3JfxUq3AInk3Dwxjf50fb/SE/M9ygtiuEq+27HyhLGztQlsuP?=
 =?iso-8859-1?Q?Ff3mJHzsTRa8kWwA2g8WGwQC0qbBuNPTjTJXxrkc5psE+O/H7YuuTnUDd8?=
 =?iso-8859-1?Q?zO0lNE1Kd3GIRgvSq2aKBaOgi/M9WeyzZv8FX6E2wmUfuMZJ0RuFm4Us22?=
 =?iso-8859-1?Q?DAuiIwW2HRRt1jKFagC8ZiioIQf9evXDJ+hocG078VvlIjIevuXLty/UBr?=
 =?iso-8859-1?Q?rqWinmsHscXmoa59o4aK4NHWQKyYQSdYMlvh157eLW1fHqzoRmE5cCeAo8?=
 =?iso-8859-1?Q?JEgsuL9byQvhPFVjrYTF+SjBTf0IoAFflhywyqffhZYVS9+3aM1gtwmbyw?=
 =?iso-8859-1?Q?4nE9tZ5K62lv6SFTtohMEzQC/t/2RME7fIZClmesOEJS7XJ9YZJenifqJb?=
 =?iso-8859-1?Q?iVRhiUlR+Kxojl3o+o29etFfU/A0uYwjgGVvd2ZL1u3cT9RhH41KWdzfeF?=
 =?iso-8859-1?Q?ta+ICowBe9ckn7BLZwCQSOaJkXq0pE+wUtOBAyehDpSbwXmg1ga+f/hlCK?=
 =?iso-8859-1?Q?94HpA7VfwTjfZIjCGwPeLadWpjIGTTBVv9h6uqnNVJ5SjXtCklzDg3Akt4?=
 =?iso-8859-1?Q?c8HbGyI+7xak7nxsBeg2jm9yZYed3HmVSx9qJlvSAAajD2C56Kkqq6uS1k?=
 =?iso-8859-1?Q?5iwPoJgt/y?=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB3508.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d4357e30-529a-4eaf-b802-08d8d91d2a36
X-MS-Exchange-CrossTenant-originalarrivaltime: 24 Feb 2021 23:37:35.7253
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: TZp14E/BZInWqTev4i2mkOjmmF0f2bnk92EsVeObmCzDqmav2z9H9SgJdPladJR0+s9RJlvgIZdpMTYbD/+EokNSaO8Yxz7Dp9lhVoKqWBw=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR03MB4500
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 spamscore=0 suspectscore=0
 lowpriorityscore=0 bulkscore=0 mlxscore=0 priorityscore=1501 clxscore=1015
 impostorscore=0 malwarescore=0 phishscore=0 adultscore=0 mlxlogscore=477
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2102240184


Hi Andrew,

Andrew Cooper writes:

> On 23/02/2021 02:34, Volodymyr Babchuk wrote:
>> Hello community,
>>
>> Subject of this cover letter is quite self-explanatory. This patch
>> series implements PoC for preemption in hypervisor mode.
>>
>> This is the sort of follow-up to recent discussion about latency
>> ([1]).
>>
>> Motivation
>> ==========
>>
>> It is well known that Xen is not preemptable. In other words, it is
>> impossible to switch vCPU contexts while running in hypervisor
>> mode. The only place where a scheduling decision can be made and one
>> vCPU can be replaced with another is the exit path from hypervisor
>> mode. The one exception is idle vCPUs, which never leave
>> hypervisor mode for obvious reasons.
>>
>> This leads to a number of problems. This list is not comprehensive. It
>> lists only things that I or my colleagues encountered personally.
>>
>> Long-running hypercalls. Due to the nature of some hypercalls, they can
>> execute for an arbitrarily long time. Mostly those are calls that deal
>> with a long list of similar actions, like memory page processing. To deal
>> with this issue Xen employs a most horrific technique called "hypercall
>> continuation". When the code that handles a hypercall decides that it
>> should be preempted, it basically updates the hypercall parameters and
>> moves the guest PC one instruction back. This causes the guest to
>> re-execute the hypercall with altered parameters, which allows the
>> hypervisor to continue hypercall execution later. This approach has
>> obvious problems: the code that executes the hypercall is responsible for
>> preemption, preemption checks are infrequent (because they are costly by
>> themselves), hypercall execution state is stored in a guest-controlled
>> area, and we rely on the guest's good will to continue the hypercall. All
>> this imposes restrictions on which hypercalls can be preempted, when they
>> can be preempted and how to write hypercall handlers. It also requires
>> very careful coding and has already led to at least one
>> vulnerability - XSA-318. Some hypercalls cannot be preempted at all,
>> like the one mentioned in [1].
>>
>> Absence of hypervisor threads/vCPUs. The hypervisor owns only idle vCPUs,
>> which are supposed to run when the system is idle. If the hypervisor
>> needs to execute its own tasks that are required to run right now, it has
>> no other way than to execute them on the current vCPU. But the scheduler
>> does not know that the hypervisor is executing a hypervisor task and
>> accounts the time spent to a domain. This can lead to domain starvation.
>>
>> Also, the absence of hypervisor threads leads to the absence of
>> high-level synchronization primitives like mutexes, condition variables,
>> completions, etc. This leads to two problems: we need to use spinlocks
>> everywhere, and we have problems when porting device drivers from the
>> Linux kernel.
>
> You cannot reenter a guest, even to deliver interrupts, if pre-empted at
> an arbitrary point in a hypercall.  State needs unwinding suitably.
>

Yes, Julien already pointed this out to me. So it looks like hypercall
continuations are still needed.

> Xen's non-preemptible-ness is designed to specifically force you to not
> implement long-running hypercalls which would interfere with timely
> interrupt handling in the general case.

What if long-running hypercalls are still required? There are other
options, like async calls, for example.

> Hypervisor/virt properties are different to both a kernel-only-RTOS, and
> regular usespace.  This was why I gave you some specific extra scenarios
> to do latency testing with, so you could make a fair comparison of
> "extra overhead caused by Xen" separate from "overhead due to
> fundamental design constraints of using virt".

I can't see any fundamental constraints there. I see how the
virtualization architecture can influence context switch time: how many
actions you need to switch one vCPU to another. I have low-level things
in mind there: reprogramming the MMU to use another set of tables,
reprogramming the interrupt controller, timer, etc. Of course, you can't
get latency lower than the context switch time. This is the only
fundamental constraint I can see.

But all other things are debatable.

As for latency testing, I'm not interested in absolute times per se. I
already determined that the time needed to switch vCPU context on my
machine is about 9us. That is fine for me. I am interested in a
(semi-)guaranteed reaction time. And Xen is doing quite well in most
cases. But there are other cases, where long-lasting hypercalls cause
spikes in reaction time.

> Preemption like this will make some benchmarks look better, but it also
> introduces the ability to create fundamental problems, like preventing
> any interrupt delivery into a VM for seconds of wallclock time while
> each vcpu happens to be in a long-running hypercall.
>
> If you want timely interrupt handling, you either need to partition your
> workloads by the long-running-ness of their hypercalls, or not have
> long-running hypercalls.

... or do long-running tasks asynchronously. I believe that for most
domctls and sysctls there is no need to hold the calling vCPU in
hypervisor mode at all.

> I remain unconvinced that preemption is an sensible fix to the problem
> you're trying to solve.

Well, this is the purpose of this little experiment. I want to discuss
different approaches and to estimate the amount of effort required. By
the way, from the x86 point of view, how hard is it to switch vCPU
context while it is running in hypervisor mode?


--
Volodymyr Babchuk at EPAM


From xen-devel-bounces@lists.xenproject.org Wed Feb 24 23:59:12 2021
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: Julien Grall <julien.grall.oss@gmail.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 George Dunlap <george.dunlap@citrix.com>, Dario Faggioli <dfaggioli@suse.com>,
 Meng Xu <mengxu@cis.upenn.edu>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [RFC PATCH 00/10] Preemption in hypervisor (ARM only)
Date: Wed, 24 Feb 2021 23:58:54 +0000
Message-ID: <87eeh59fwi.fsf@epam.com>
References: <20210223023428.757694-1-volodymyr_babchuk@epam.com>
 <e6d8726c-4074-fe4c-dbbe-e879da2bb7f6@xen.org> <87zgzv56pm.fsf@epam.com>
 <c1c55bcb-dfd4-a552-836a-985268655cf1@xen.org> <87o8g99oal.fsf@epam.com>
 <CAJ=z9a0v37rc_B7xVdQECAYd52PJ0UajGzvX1DYP56Q2RXQ2Tw@mail.gmail.com>
In-Reply-To: 
 <CAJ=z9a0v37rc_B7xVdQECAYd52PJ0UajGzvX1DYP56Q2RXQ2Tw@mail.gmail.com>



Julien Grall writes:

> On Wed, 24 Feb 2021 at 20:58, Volodymyr Babchuk
> <Volodymyr_Babchuk@epam.com> wrote:
>>
>>
>> Hi Julien,
>>
>> Julien Grall writes:
>>
>>> On 23/02/2021 12:06, Volodymyr Babchuk wrote:
>>>> Hi Julien,
>>>
>>> Hi Volodymyr,
>>>
>>>> Julien Grall writes:
>>>>> On 23/02/2021 02:34, Volodymyr Babchuk wrote:
>>>>> ... just rescheduling the vCPU. It will also give the opportunity for
>>>>> the guest to handle interrupts.
>>>>>
>>>>> If you don't return to the guest, then you risk getting an RCU sched
>>>>> stall on that vCPU (some hypercalls can take really, really long).
>>>> Ah yes, you are right. I'd only wish that the hypervisor saved the
>>>> context of the hypercall on its side...
>>>> I have the example of OP-TEE before my eyes. They have a special
>>>> return code "task was interrupted" and a separate call "continue
>>>> execution of interrupted task", which takes an opaque context handle
>>>> as a parameter. With this approach, the state of an interrupted call
>>>> never leaks to the rest of the system.
>>>
>>> Feel free to suggest a new approach for the hypercalls.
>>>
>>
>> I believe, I suggested it right above. There are some corner cases, that
>> should be addressed, of course.
>
> If we wanted a clean break, then possibly yes. But I meant one that
> doesn't break all the existing users and doesn't put Xen at risk.
>
> I don't believe your approach fulfills that.

Of course, we can't touch any hypercalls that are part of the stable
ABI. But if I got this right, domctls and sysctls are not stable, so one
can change their behavior quite drastically in major releases.

>>
>>>>>
>>>>>> This approach itself has obvious
>>>>>> problems: the code that executes a hypercall is responsible for
>>>>>> preemption, preemption checks are infrequent (because they are costly
>>>>>> by themselves), hypercall execution state is stored in a
>>>>>> guest-controlled area, and we rely on the guest's good will to
>>>>>> continue the hypercall.
>>>>>
>>>>> Why is it a problem to rely on guest's good will? The hypercalls
>>>>> should be preempted at a boundary that is safe to continue.
>>>> Yes, and it imposes restrictions on how to write a hypercall
>>>> handler. In other words, there are many more places in hypercall
>>>> handler code where it can be preempted than where a hypercall
>>>> continuation can be used. For example, you can preempt a hypercall
>>>> that holds a mutex, but of course you can't create a continuation
>>>> point in such a place.
>>>
>>> I disagree, you can create a continuation point in such a place.
>>> Although it will be more complex, because you have to make sure you
>>> break the work at a restartable place.
>>
>> Maybe there is some misunderstanding. You can't create a hypercall
>> continuation point in a place where you are holding a mutex, because
>> there is absolutely no guarantee that the guest will restart the
>> hypercall.
>
> I don't think we are disagreeing here. My point is you should rarely
> need to hold a mutex for a long period, so you could break your work
> into smaller chunks. In that case, you can use hypercall continuation.

Let's put it this way: generally you can hold a mutex much longer than
you can hold a spinlock, and nothing catastrophic will happen if you are
preempted while holding a mutex. Better to avoid this, of course.
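To make the continuation pattern concrete, here is a toy model in plain C. It is not Xen code: do_work, CHUNK, and the return codes are invented for illustration (in real Xen the analogous mechanism is hypercall_create_continuation(), which re-encodes progress into the hypercall's own arguments). The point is that the work is broken at a restartable boundary with no locks held and no hypervisor-side state:

```c
#include <assert.h>

/* Toy model of a preemptible "hypercall": process [done, total) items,
 * but return early every CHUNK items so the caller can be rescheduled.
 * Progress lives entirely in the caller-visible 'done' argument, so
 * nothing is held in "hypervisor" state across the preemption point. */
#define CHUNK 16

enum hc_ret { HC_DONE, HC_CONTINUE };

static enum hc_ret do_work(unsigned int *done, unsigned int total)
{
    unsigned int batch = 0;

    while (*done < total) {
        /* ... process item *done (no locks held here) ... */
        ++*done;

        /* Restartable boundary: progress is already encoded in the
         * argument, so we can simply bail out and be re-invoked. */
        if (++batch == CHUNK && *done < total)
            return HC_CONTINUE;
    }
    return HC_DONE;
}

/* Guest side: keep reissuing the call until it completes.  Returns the
 * total number of invocations needed. */
static unsigned int run_to_completion(unsigned int total)
{
    unsigned int done = 0, calls = 0;

    while (do_work(&done, total) == HC_CONTINUE)
        ++calls;
    return calls + 1;
}
```

The important property is that all progress lives in the caller-visible argument: a guest that never restarts the call only hurts itself, since the hypervisor kept no state on its behalf.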

>
>>
>> But you can preempt a vCPU while holding a mutex, because Xen owns the
>> scheduler and it can guarantee that the vCPU will eventually be
>> scheduled to continue the work and release the mutex.
>
> The problem is the "eventually". If you are accounting the time spent
> in the hypervisor to the vCPU A, then there is a possibility that it
> has exhausted its time slice. In which case, your vCPU A may be
> sleeping for a while with a mutex held.
>
> If another vCPU B needs the mutex, it will either have to wait
> potentially for a long time or we need to force vCPU A to run on
> borrowed time.

Yes, of course.

>>
>>> I would also like to point out that preemption also has some
>>> drawbacks. With RT in mind, you have to deal with priority inversion
>>> (e.g. a lower priority vCPU holds a mutex that is required by a
>>> higher priority one).
>>
>> Of course. This is not as simple as "just call the scheduler when we
>> want to".
>
> Your e-mail made it sound like it was easy to add preemption in
> Xen. ;)

I'm sorry for that :)
Actually, there is a lot of work to do. It appears that "current" needs
to be reworked, preempt_enable/disable needs to be reworked, and per-cpu
variables also need to be reworked. And this is just to ensure
consistency of the already existing code.

And I am not mentioning x86 support there...

>>> Outside of RT, you have to be careful where mutexes are held. In your
>>> earlier answer, you suggested holding a mutex for the memory
>>> allocation. If you do that, then it means a domain A can block
>>> allocation for domain B as it holds the mutex.
>>
>> As long as we do not exit to EL1 with a mutex held, domain A can't
>> block anything. Of course, we have to deal with priority inversion,
>> but a malicious domain can't cause a DoS.
>
> It is not really a priority inversion problem outside of RT because
> all the tasks will have the same priority. It is more a time
> accounting problem because each vCPU may have a different number of
> credits.

Speaking of that, RTDS does not use the concept of priority. And
neither, of course, does ARINC.


>>>>> I am really surprised that this is the only changes necessary in
>>>>> Xen. For a first approach, we may want to be conservative when the
>>>>> preemption is happening as I am not convinced that all the places are
>>>>> safe to preempt.
>>>>>
>>>> Well, I can't say that I ran extensive tests, but I played with this
>>>> for some time and it seemed quite stable. Of course, I had some
>>>> problems with RTDS...
>>>> As I see it, Xen already supports SMP, so all places where races are
>>>> possible should already be covered by spinlocks or taken into account
>>>> by some other means.
>>> That's correct for shared resources. I am more worried about any
>>> hypercalls that are expected to run more or less continuously (e.g.
>>> not taking interrupts into account) on the same pCPU.
>>
>> Are there many such hypercalls? They can disable preemption if they
>> really need to run on the same pCPU. As I understand it, they should
>> be relatively fast, because they can't create continuations anyway.
>
> Well, I never tried to make Xen preemptible... My comment is based on
> the fact that the use of preempt_{enable, disable}() was mostly done on
> a best-effort basis.
>
> The usual suspects are anything using this_cpu() or interacting with
> the per-CPU HW registers.
>
> From a quick look, here are a few things (I only looked at Arm):
>   * map_domain_page() in particular on arm32 because this is using
> per-CPU page-tables
>   * guest_atomics_* as this uses this_cpu()
>   * virt_to_mfn() in particular the failure path
>   * Incorrect use of (or missing) RCU locking. (Hopefully Juergen's
> recent work on the RCU code mitigates the risk)
>
> I can provide guidance, but you will have to go through the code and
> check what's happening.

Thank you for the list. Of course, I need to go through all the code. I
already had a bunch of problems with per_cpu variables...
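To illustrate the kind of problem per-CPU variables cause here, a toy model in plain C. All names (percpu_counter, current_cpu, migrate, this_cpu_counter) are invented for this sketch and are not Xen code; it only mimics how caching a this_cpu()-style pointer across a preemption point goes wrong:

```c
#include <assert.h>

/* Toy model: the task caches a pointer to "its" per-CPU slot, gets
 * migrated to another CPU at a preemption point, and keeps writing
 * through the now-stale pointer. */
#define NR_CPUS 2

static unsigned long percpu_counter[NR_CPUS];
static int current_cpu; /* stand-in for the current processor id */

static unsigned long *this_cpu_counter(void)
{
    return &percpu_counter[current_cpu];
}

static void migrate(int cpu) /* stands in for a preemption point */
{
    current_cpu = cpu;
}

static void buggy_increment(void)
{
    unsigned long *cnt = this_cpu_counter(); /* cached before preemption */
    migrate(1);  /* vCPU preempted and moved to pCPU 1 ... */
    ++*cnt;      /* ... but this still updates pCPU 0's slot */
}

static void safe_increment(void)
{
    migrate(1);
    ++*this_cpu_counter(); /* re-evaluated after the preemption point */
}
```

This is why code touching per-CPU state would need a preempt_disable/enable region (or a rework) before preemption points could be allowed inside it.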

-- 
Volodymyr Babchuk at EPAM


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 00:40:45 2021
Subject: Re: [RFC PATCH 00/10] Preemption in hypervisor (ARM only)
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Julien Grall
	<julien.grall.oss@gmail.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, George
 Dunlap <george.dunlap@citrix.com>, Dario Faggioli <dfaggioli@suse.com>, Meng
 Xu <mengxu@cis.upenn.edu>, Ian Jackson <iwj@xenproject.org>, Jan Beulich
	<jbeulich@suse.com>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>
References: <20210223023428.757694-1-volodymyr_babchuk@epam.com>
 <e6d8726c-4074-fe4c-dbbe-e879da2bb7f6@xen.org> <87zgzv56pm.fsf@epam.com>
 <c1c55bcb-dfd4-a552-836a-985268655cf1@xen.org> <87o8g99oal.fsf@epam.com>
 <CAJ=z9a0v37rc_B7xVdQECAYd52PJ0UajGzvX1DYP56Q2RXQ2Tw@mail.gmail.com>
 <87eeh59fwi.fsf@epam.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <28f8ffcc-d2df-438c-4fa8-a8174d897109@citrix.com>
Date: Thu, 25 Feb 2021 00:39:56 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
In-Reply-To: <87eeh59fwi.fsf@epam.com>

On 24/02/2021 23:58, Volodymyr Babchuk wrote:
> And I am not mentioning x86 support there...

x86 uses per-pCPU stacks, not per-vCPU stacks.

Transcribing from an old thread which happened in private as part of an
XSA discussion, concerning the implications of trying to change this.

~Andrew

-----8<-----

Here is a partial list off the top of my head of the practical problems
you're going to have to solve.

Introduction of new SpectreRSB vulnerable gadgets.  I'm really close to
being able to drop RSB stuffing and recover some performance in Xen.

CPL0 entrypoints need updating across schedule.  SYSCALL entry would
need to become a stub per vcpu, rather than the current stub per pcpu.
This requires reintroducing a writeable mapping to the TSS (doable) and
a shadow stack switch of active stacks (This corner case is so broken it
looks to be a blocker for CET-SS support in Linux, and is resulting in
some conversation about tweaking Shstk's in future processors).

All per-cpu variables stop working.  You'd need to rewrite Xen to use
%gs for TLS which will have churn in the PV logic, and introduce the x86
architectural corner cases of running with an invalid %gs.  Xen has been
saved from a large number of privilege escalation vulnerabilities in
common with Linux and Windows by the fact that we don't use %gs, so
anyone trying to do this is going to have to come up with some concrete
way of proving that the corner cases are covered.



From xen-devel-bounces@lists.xenproject.org Thu Feb 25 00:42:10 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [xen-unstable bisection] complete test-amd64-i386-qemut-rhel6hvm-intel
Message-Id: <E1lF4jI-0007cW-1R@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 25 Feb 2021 00:42:08 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-i386-qemut-rhel6hvm-intel
testid xen-boot

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  4dc1815991420b809ce18dddfdf9c0af48944204
  Bug not present: 2d824791504f4119f04f95bafffec2e37d319c25
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/159650/


  commit 4dc1815991420b809ce18dddfdf9c0af48944204
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Fri Feb 19 17:19:56 2021 +0100
  
      x86/PV: harden guest memory accesses against speculative abuse
      
      Inspired by
      https://lore.kernel.org/lkml/f12e7d3cecf41b2c29734ea45a393be21d4a8058.1597848273.git.jpoimboe@redhat.com/
      and prior work in that area of x86 Linux, suppress speculation with
      guest specified pointer values by suitably masking the addresses to
      non-canonical space in case they fall into Xen's virtual address range.
      
      Introduce a new Kconfig control.
      
      Note that it is necessary in such code to avoid using "m" kind operands:
      If we didn't, there would be no guarantee that the register passed to
      guest_access_mask_ptr is also the (base) one used for the memory access.
      
      As a minor unrelated change in get_unsafe_asm() the unnecessary "itype"
      parameter gets dropped and the XOR on the fixup path gets changed to be
      a 32-bit one in all cases: This way we avoid pointless REX.W or operand
      size overrides, or writes to partial registers.
      
      Requested-by: Andrew Cooper <andrew.cooper3@citrix.com>
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
      Release-Acked-by: Ian Jackson <iwj@xenproject.org>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable/test-amd64-i386-qemut-rhel6hvm-intel.xen-boot.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-unstable/test-amd64-i386-qemut-rhel6hvm-intel.xen-boot --summary-out=tmp/159650.bisection-summary --basis-template=159475 --blessings=real,real-bisect,real-retry xen-unstable test-amd64-i386-qemut-rhel6hvm-intel xen-boot
Searching for failure / basis pass:
 159602 fail [host=huxelrebe1] / 159475 [host=chardonnay0] 159453 [host=fiano1] 159424 [host=elbling0] 159396 [host=fiano0] 159362 [host=chardonnay1] 159335 [host=elbling1] 159315 [host=albana0] 159202 [host=albana1] 159134 [host=fiano1] 159036 [host=chardonnay0] 159013 [host=fiano0] 158957 [host=elbling0] 158922 [host=elbling1] 158873 [host=chardonnay1] 158835 [host=chardonnay0] 158811 [host=fiano1] 158787 [host=albana1] 158755 [host=elbling1] 158719 [host=chardonnay1] 158711 [host=fiano0] 158699 [host=chardonnay1] 158628 [host=elbling1] 158617 [host=chardonnay1] 158607 [host=chardonnay1] 158601 [host=albana0] 158591 [host=elbling1] 158581 ok.
Failure / basis pass flights: 159602 / 158581
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 f894c3d8e705fea9cb3244fa61684bfd8bdd1b2a
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 5e317896342d553f0b55f72948bbf93a0f1147d3
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://xenbits.xen.org/qemu-xen.git#7ea428895af2840d85c524f0bd11a38aac308308-7ea428895af2840d85c524f0bd11a38aac308308 git://xenbits.xen.org/xen.git#5e317896342d553f0b55f72948bbf93a0f1147d3-f894c3d8e705fea9cb3244fa61684bfd8bdd1b2a
Loaded 5001 nodes in revision graph
Searching for test results:
 158581 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 5e317896342d553f0b55f72948bbf93a0f1147d3
 158591 [host=elbling1]
 158601 [host=albana0]
 158607 [host=chardonnay1]
 158617 [host=chardonnay1]
 158628 [host=elbling1]
 158699 [host=chardonnay1]
 158711 [host=fiano0]
 158719 [host=chardonnay1]
 158755 [host=elbling1]
 158787 [host=albana1]
 158811 [host=fiano1]
 158835 [host=chardonnay0]
 158873 [host=chardonnay1]
 158922 [host=elbling1]
 158957 [host=elbling0]
 159013 [host=fiano0]
 159036 [host=chardonnay0]
 159134 [host=fiano1]
 159202 [host=albana1]
 159315 [host=albana0]
 159335 [host=elbling1]
 159362 [host=chardonnay1]
 159396 [host=fiano0]
 159424 [host=elbling0]
 159453 [host=fiano1]
 159475 [host=chardonnay0]
 159487 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 87a067fd8f4d4f7c6be02c3d38145115ac542017
 159491 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 87a067fd8f4d4f7c6be02c3d38145115ac542017
 159508 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 87a067fd8f4d4f7c6be02c3d38145115ac542017
 159526 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 87a067fd8f4d4f7c6be02c3d38145115ac542017
 159540 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 87a067fd8f4d4f7c6be02c3d38145115ac542017
 159559 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 f894c3d8e705fea9cb3244fa61684bfd8bdd1b2a
 159576 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 f894c3d8e705fea9cb3244fa61684bfd8bdd1b2a
 159618 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 5e317896342d553f0b55f72948bbf93a0f1147d3
 159621 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 f894c3d8e705fea9cb3244fa61684bfd8bdd1b2a
 159622 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 119c98972c4e737508df827bbbc8453cc93292c7
 159623 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 82ed3155fe72a7e15ab28f86a3c1eb970a92d2f6
 159602 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 f894c3d8e705fea9cb3244fa61684bfd8bdd1b2a
 159625 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ca82d3fecc93745ee17850a609ac7772bd7c8bf7
 159628 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 2dd2cd0f16aae2cad2227f9aae9469fd6cdd8cd3
 159633 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 6a1d72d3739e330caf728ea07d656d7bf568824b
 159635 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 336fbbdf61562e5ae1112f24bc90c1164adf2144
 159641 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 f954a1bf5f74ad6edce361d1bf1a29137ff374e8
 159643 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 2d824791504f4119f04f95bafffec2e37d319c25
 159645 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 4dc1815991420b809ce18dddfdf9c0af48944204
 159647 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 2d824791504f4119f04f95bafffec2e37d319c25
 159648 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 4dc1815991420b809ce18dddfdf9c0af48944204
 159649 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 2d824791504f4119f04f95bafffec2e37d319c25
 159650 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 4dc1815991420b809ce18dddfdf9c0af48944204
Searching for interesting versions
 Result found: flight 158581 (pass), for basis pass
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 2d824791504f4119f04f95bafffec2e37d319c25, results HASH(0x55cd07744898) HASH(0x55cd0775c348) HASH(0x55cd07783648) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 6a1d72d3739e330caf728ea07d656d7bf568824b, results HASH(0x55cd076d81e8) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 ca82d3fecc93745ee17850a609ac7772bd7c8bf7, results HASH(0x55cd07781ee0) Result found: flight 159487 (fail), for basis failure (at ancestor ~76)
 Repro found: flight 159618 (pass), for basis pass
 Repro found: flight 159621 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 2d824791504f4119f04f95bafffec2e37d319c25
No revisions left to test, checking graph state.
 Result found: flight 159643 (pass), for last pass
 Result found: flight 159645 (fail), for first failure
 Repro found: flight 159647 (pass), for last pass
 Repro found: flight 159648 (fail), for first failure
 Repro found: flight 159649 (pass), for last pass
 Repro found: flight 159650 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  4dc1815991420b809ce18dddfdf9c0af48944204
  Bug not present: 2d824791504f4119f04f95bafffec2e37d319c25
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/159650/


  commit 4dc1815991420b809ce18dddfdf9c0af48944204
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Fri Feb 19 17:19:56 2021 +0100
  
      x86/PV: harden guest memory accesses against speculative abuse
      
      Inspired by
      https://lore.kernel.org/lkml/f12e7d3cecf41b2c29734ea45a393be21d4a8058.1597848273.git.jpoimboe@redhat.com/
      and prior work in that area of x86 Linux, suppress speculation with
      guest specified pointer values by suitably masking the addresses to
      non-canonical space in case they fall into Xen's virtual address range.
      
      Introduce a new Kconfig control.
      
      Note that it is necessary in such code to avoid using "m" kind operands:
      If we didn't, there would be no guarantee that the register passed to
      guest_access_mask_ptr is also the (base) one used for the memory access.
      
      As a minor unrelated change in get_unsafe_asm() the unnecessary "itype"
      parameter gets dropped and the XOR on the fixup path gets changed to be
      a 32-bit one in all cases: This way we avoid pointless REX.W or operand
      size overrides, or writes to partial registers.
      
      Requested-by: Andrew Cooper <andrew.cooper3@citrix.com>
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
      Release-Acked-by: Ian Jackson <iwj@xenproject.org>

Revision graph left in /home/logs/results/bisect/xen-unstable/test-amd64-i386-qemut-rhel6hvm-intel.xen-boot.{dot,ps,png,html,svg}.
----------------------------------------
159650: tolerable ALL FAIL

flight 159650 xen-unstable real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/159650/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel  8 xen-boot        fail baseline untested


jobs:
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Thu Feb 25 01:03:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 01:03:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89561.168769 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lF53o-0000FV-B5; Thu, 25 Feb 2021 01:03:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89561.168769; Thu, 25 Feb 2021 01:03:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lF53o-0000FN-7n; Thu, 25 Feb 2021 01:03:20 +0000
Received: by outflank-mailman (input) for mailman id 89561;
 Thu, 25 Feb 2021 01:03:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lF53m-0000FF-S4; Thu, 25 Feb 2021 01:03:18 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lF53m-0003zt-Ll; Thu, 25 Feb 2021 01:03:18 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lF53m-0008FD-AG; Thu, 25 Feb 2021 01:03:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lF53m-0000Ia-9i; Thu, 25 Feb 2021 01:03:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Vheine0pIrXXvGZLh1grQlAwuy49Wl8ceVzHP9tC9sM=; b=Jn1f5vWCrz1QlqTkfTlt4Utzdm
	uaZUKhfv/Gmw+4TkYqnQCTVnies7knydwwztaPzEjcw+ISjE/7TcnUHl5MbDoK1UXEQArD5GIGaRO
	KevId/u80LLOV6OdlMKgwyxWkHpbR6T1mA4GB+nwTkxV9LSjXdmn2iGnGRcvn+jqxUis=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159626-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 159626: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-xtf-amd64-amd64-3:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-xtf-amd64-amd64-1:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-xtf-amd64-amd64-4:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-xtf-amd64-amd64-2:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-xtf-amd64-amd64-5:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    xen-unstable:build-i386:xen-build:fail:regression
    xen-unstable:test-amd64-amd64-examine:memdisk-try-append:fail:regression
    xen-unstable:build-i386-prev:xen-build:fail:regression
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:build-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=5d94433a66df29ce314696a13bdd324ec0e342fe
X-Osstest-Versions-That:
    xen=e8185c5f01c68f7d29d23a4a91bc1be1ff2cc1ca
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 25 Feb 2021 01:03:18 +0000

flight 159626 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159626/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-xtf-amd64-amd64-3      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-xtf-amd64-amd64-1      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-xtf-amd64-amd64-4      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-xtf-amd64-amd64-2      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-xtf-amd64-amd64-5      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  8 xen-boot  fail REGR. vs. 159475
 test-amd64-i386-xl-xsm        8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  8 xen-boot  fail REGR. vs. 159475
 build-i386                    6 xen-build                fail REGR. vs. 159475
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 159475
 build-i386-prev               6 xen-build                fail REGR. vs. 159475

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-livepatch     1 build-check(1)               blocked  n/a
 test-amd64-i386-migrupgrade   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 159475
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 159475
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 159475
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 159475
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 159475
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 159475
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 159475
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  5d94433a66df29ce314696a13bdd324ec0e342fe
baseline version:
 xen                  e8185c5f01c68f7d29d23a4a91bc1be1ff2cc1ca

Last test of basis   159475  2021-02-19 06:26:37 Z    5 days
Failing since        159487  2021-02-20 04:29:29 Z    4 days    9 attempts
Testing same since   159626  2021-02-24 13:35:56 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Norbert Manthey <nmanthey@amazon.de>
  Rahul Singh <rahul.singh@arm.com>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Tamas K Lengyel <tamas@tklengyel.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   fail    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-prev                                             pass    
 build-i386-prev                                              fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    blocked 
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 341 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 01:22:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 01:22:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89569.168790 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lF5Mi-0002IB-8H; Thu, 25 Feb 2021 01:22:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89569.168790; Thu, 25 Feb 2021 01:22:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lF5Mi-0002I4-4X; Thu, 25 Feb 2021 01:22:52 +0000
Received: by outflank-mailman (input) for mailman id 89569;
 Thu, 25 Feb 2021 01:22:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yjlP=H3=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lF5Mf-0002Hw-VW
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 01:22:50 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 03c6920c-95bd-4d31-bfbc-eb946e01c793;
 Thu, 25 Feb 2021 01:22:49 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 3062264EF1;
 Thu, 25 Feb 2021 01:22:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 03c6920c-95bd-4d31-bfbc-eb946e01c793
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1614216168;
	bh=KxKbDRzEO0RDzP5UuGlUriPzcgRyS5OFHJrlkUAw18U=;
	h=From:To:Cc:Subject:Date:From;
	b=Zy2EICcfE1/inUdayOX2DGd692xkZhmL8iztQ3/ovYDB+2GuMNWQOLiV4xrgVi7hl
	 k6uXCjkoqaw+jgr/0hqkXs281mM2VWMFF0jAIw6TD1WhQjAsYjXGdc6rrnjZOzyTpu
	 v8XdHcjMRR+IFz6giRHoPvovUWcKGPSWXLjtHze9WwtOfxSqoAF55BD8Q+8c21KhRs
	 LKgI93JeiK9ukDbbsE8An22h0JY0f9HYYdJ+m1HLSzw86ymDfgH3pkq/2zQo79CR5Z
	 InXVK/P/Q3+3KinqY/DYjqPd++uHAZ1Ca2siLpJAzQuCRJ6Fu22bTQ6gV2zujQ6gDv
	 MOyFlILUScppA==
From: Stefano Stabellini <sstabellini@kernel.org>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	Stefano Stabellini <stefano.stabellini@xilinx.com>,
	jbeulich@suse.com,
	andrew.cooper3@citrix.com,
	julien@xen.org
Subject: [PATCH] xen: introduce XENFEAT_direct_mapped and XENFEAT_not_direct_mapped
Date: Wed, 24 Feb 2021 17:22:43 -0800
Message-Id: <20210225012243.28530-1-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1

Introduce two feature flags to tell the domain whether it is
direct-mapped or not. This allows the guest kernel to make informed
decisions about things such as swiotlb-xen enablement.

Currently, only Dom0 on ARM is direct-mapped, see:
xen/include/asm-arm/domain.h:is_domain_direct_mapped
xen/include/asm-x86/domain.h:is_domain_direct_mapped

However, given that it is theoretically possible to have direct-mapped
domains on x86 too, the two new feature flags are arch-neutral.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
CC: jbeulich@suse.com
CC: andrew.cooper3@citrix.com
CC: julien@xen.org
---
 xen/common/kernel.c           | 4 ++++
 xen/include/public/features.h | 7 +++++++
 2 files changed, 11 insertions(+)

diff --git a/xen/common/kernel.c b/xen/common/kernel.c
index 7a345ae45e..6ca1377dec 100644
--- a/xen/common/kernel.c
+++ b/xen/common/kernel.c
@@ -560,6 +560,10 @@ DO(xen_version)(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
                              (1U << XENFEAT_hvm_callback_vector) |
                              (has_pirq(d) ? (1U << XENFEAT_hvm_pirqs) : 0);
 #endif
+            if ( is_domain_direct_mapped(d) )
+                fi.submap |= (1U << XENFEAT_direct_mapped);
+            else
+                fi.submap |= (1U << XENFEAT_not_direct_mapped);
             break;
         default:
             return -EINVAL;
diff --git a/xen/include/public/features.h b/xen/include/public/features.h
index 1613b2aab8..18e3e61df0 100644
--- a/xen/include/public/features.h
+++ b/xen/include/public/features.h
@@ -114,6 +114,13 @@
  */
 #define XENFEAT_linux_rsdp_unrestricted   15
 
+/*
+ * A direct-mapped (or 1:1 mapped) domain is a domain whose local
+ * pages have gfn == mfn.
+ */
+#define XENFEAT_not_direct_mapped       16
+#define XENFEAT_direct_mapped           17
+
 #define XENFEAT_NR_SUBMAPS 1
 
 #endif /* __XEN_PUBLIC_FEATURES_H__ */
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Feb 25 01:52:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 01:52:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89573.168802 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lF5pL-0005ET-Cg; Thu, 25 Feb 2021 01:52:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89573.168802; Thu, 25 Feb 2021 01:52:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lF5pL-0005EM-89; Thu, 25 Feb 2021 01:52:27 +0000
Received: by outflank-mailman (input) for mailman id 89573;
 Thu, 25 Feb 2021 01:52:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9UWr=H3=gmail.com=akihiko.odaki@srs-us1.protection.inumbo.net>)
 id 1lF5pK-0005EH-2U
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 01:52:26 +0000
Received: from mail-ed1-x52e.google.com (unknown [2a00:1450:4864:20::52e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9f81563d-3900-4cb4-aa14-f9ee2f567d68;
 Thu, 25 Feb 2021 01:52:25 +0000 (UTC)
Received: by mail-ed1-x52e.google.com with SMTP id w21so4913023edc.7
 for <xen-devel@lists.xenproject.org>; Wed, 24 Feb 2021 17:52:25 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9f81563d-3900-4cb4-aa14-f9ee2f567d68
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc:content-transfer-encoding;
        bh=dGiiq1qTA8h7U4ksveba+s3Z2W3HyPoZ51+8t/LGa6k=;
        b=AiZocE9+2KmDjXFF1Otxg1ESyS8qMdcP3ccaR6kbasOh2o+a3It/yOWBaGRRdoO/xR
         ogXC6jHXb80ZN4JsJjydJeTIoOMXTp1UpM5xM59RjYREeDUbThOKbgDx63i9BaQrX5h5
         lSJU3tcSP3rc4BP2WOUivjh1O6On6XAIW0b5tvrSVrRWfFx1KlwMfSWu20oEGkUHFYqZ
         3yLo+LTpDIk4BwkvZdbcLusWvJ3jM/be0kx7Ys+KL6qmi1fUtOwGYzd0+1UnDhJC9G3o
         sehZD3o8AHj6L9f4HIa3j6i/vjEva1JcEBpVQ40NUGZLzF2ehrpE1kXzTZ/kEk6cAk3e
         ZSvQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc:content-transfer-encoding;
        bh=dGiiq1qTA8h7U4ksveba+s3Z2W3HyPoZ51+8t/LGa6k=;
        b=lRMsTtzJaMWirUOQz56g/qfThDcSU5EFrEOiZpVN6/0rBv433b8DI0pzJMmrI1V8Ui
         Vq+k/OCw+xO9O1jJPY+oY7yFNwJGtLl7PJnBKnpmJp3ShFeZgq0jvaRRJ18RVv/0E60P
         xcmSeVwT7WffyYhs5CVhffSvUWXONbEkVLxGBHtr1HMHYCloQ9GxNR3ptXBEaCSe7lI1
         HD+aj5sHAxMTPV5nCgzGgI5578V37IN1wGhyVYsy6fZKuDghvJM+7C/GLxSQ2/hSRoYI
         +pWwQASpiZumkDCvtCRyOS4s64/f24Xw/mIQrQv9lfvmdgphI6GoJ0NY4JlNUCL38UbG
         XGQg==
X-Gm-Message-State: AOAM533WbPEvxPhyehV9VCHOHkho0lN8MaVkLPOnMMNqOjOvrAYDcqpG
	+8SFMVQi6p49M2VIKXgKeh5lf2duVg2VoCWFUuQ=
X-Google-Smtp-Source: ABdhPJwXsXozY2MjY+nQylemcT8qXc8qQfaT+5U0yTvgiHcDlMSIOMfof4stMgqCSfEnmzp/Bfq+WphWjCM1nPDycD4=
X-Received: by 2002:a50:8a90:: with SMTP id j16mr641510edj.334.1614217944282;
 Wed, 24 Feb 2021 17:52:24 -0800 (PST)
MIME-Version: 1.0
References: <20210221133414.7262-1-akihiko.odaki@gmail.com>
 <20210222105738.w2q6vp5pi4p6bx5m@sirius.home.kraxel.org> <CAMVc7JVo_XJcGcxW0Wmqje3Y40fRZDY6T8dnQTc2=Ehasz4UHw@mail.gmail.com>
 <20210224111540.xd5a6yszql6wln7m@sirius.home.kraxel.org>
In-Reply-To: <20210224111540.xd5a6yszql6wln7m@sirius.home.kraxel.org>
From: Akihiko Odaki <akihiko.odaki@gmail.com>
Date: Thu, 25 Feb 2021 10:52:13 +0900
Message-ID: <CAMVc7JXUXnrK_amhQsy=paMeqjMU_8r86Hj4UF5haZ+Oq15JkA@mail.gmail.com>
Subject: Re: [PATCH] virtio-gpu: Respect graphics update interval for EDID
To: Gerd Hoffmann <kraxel@redhat.com>
Cc: qemu Developers <qemu-devel@nongnu.org>, xen-devel@lists.xenproject.org, 
	"Michael S. Tsirkin" <mst@redhat.com>, Stefano Stabellini <sstabellini@kernel.org>, 
	Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, Feb 24, 2021 at 20:17, Gerd Hoffmann <kraxel@redhat.com> wrote:
>
> On Tue, Feb 23, 2021 at 01:50:51PM +0900, Akihiko Odaki wrote:
> > On Mon, Feb 22, 2021 at 19:57, Gerd Hoffmann <kraxel@redhat.com> wrote:
> > >
> > > On Sun, Feb 21, 2021 at 10:34:14PM +0900, Akihiko Odaki wrote:
> > > > This change introduces an additional member, refresh_rate to
> > > > qemu_edid_info in include/hw/display/edid.h.
> > > >
> > > > This change also isolates the graphics update interval from the
> > > > display update interval. The guest will update the frame buffer
> > > > in the graphics update interval, but displays can be updated in a
> > > > dynamic interval, for example to save update costs aggressively
> > > > (vnc) or to respond to user-generated events (sdl).
> > > > It stabilizes the graphics update interval and prevents the guest
> > > > from being confused.
> > >
> > > Hmm.  What problem you are trying to solve here?
> > >
> > > The update throttle being visible by the guest was done intentionally,
> > > so the guest can throttle the display updates too in case nobody is
> > > watching those display updated anyway.
> >
> > Indeed, we are throttling the update for vnc to avoid some worthless
> > work. But typically a guest cannot respond to update interval changes
> > so often because the real display devices the guest is designed for do
> > not change the update interval in that way.
>
> What is the problem you are seeing?
>
> Some guest software raising timeout errors when they see only
> one vblank irq every 3 seconds?  If so: which software is this?
> Any chance we can fix this on the guest side?
>
> > That is why we have to
> > tell the guest a stable update interval even if it results in wasted
> > frames.
>
> Because of the wasted frames I'd like this to be an option you can
> enable when needed.  For the majority of use cases this seems to be
> no problem ...

I see blinking with GNOME on Wayland on Ubuntu 20.04 and virtio-gpu with
the EDID change included in this patch. The only devices that inspect
the variable, xenfb and the modified virtio-gpu, do not deliver vblank
IRQs, but they do report the refresh rate to the guest, and the guest
proactively requests them to switch the surface.

I suspect Linux's kernel mode setting causes the blinking, and that
other guests have similar problems.

>
> Also: the EDID changes should go to a separate patch.

That makes sense. I'll isolate it into a separate patch in a series.

Regards,
Akihiko Odaki

>
> take care,
>   Gerd
>


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 03:06:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 03:06:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89578.168820 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lF6z3-000416-8M; Thu, 25 Feb 2021 03:06:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89578.168820; Thu, 25 Feb 2021 03:06:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lF6z3-00040z-5F; Thu, 25 Feb 2021 03:06:33 +0000
Received: by outflank-mailman (input) for mailman id 89578;
 Thu, 25 Feb 2021 03:06:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oSAw=H3=zededa.com=roman@srs-us1.protection.inumbo.net>)
 id 1lF6z1-00040u-ME
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 03:06:31 +0000
Received: from mail-qk1-x72b.google.com (unknown [2607:f8b0:4864:20::72b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a2e68272-1894-4762-a4be-1e13466ee263;
 Thu, 25 Feb 2021 03:06:29 +0000 (UTC)
Received: by mail-qk1-x72b.google.com with SMTP id 204so4391038qke.11
 for <xen-devel@lists.xenproject.org>; Wed, 24 Feb 2021 19:06:29 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a2e68272-1894-4762-a4be-1e13466ee263
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=zededa.com; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc:content-transfer-encoding;
        bh=ai1cUfVdonYiS+yKckY6wyi52bQ7eo+jWsOtCYDe1hY=;
        b=BwaTq2psWKWHEGCiWWdHJzqA8HIcJbXLCAh3yIBNOn1tQIMhDAYMidqZPisookO/1u
         pBmPSEXEtoB5AxX9foCM+nLGa//usa+mpA7N7QZm4/QaPNp7Et9jT2zABWdYi5aS0oEU
         Va4O95H7QYIX6XdgPmUtMAt5aswyCQqgTziItvDX6dMXmqCZ6wEw8J33Ai4QjVRvARfN
         8dAn2naQfHkCFWmJ4E46vgS8g8pkJ6q9w7dd3nwUHqwrOHK+6lB/2KcalUOMsx3n3oDq
         71Q8jZsZp9cPjTH62MGMql+YWkxIDPbCwCk0ahOR9n04J1pAj5d5F5kEVv4roVddunei
         9WnA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc:content-transfer-encoding;
        bh=ai1cUfVdonYiS+yKckY6wyi52bQ7eo+jWsOtCYDe1hY=;
        b=pefxBuY7GWn7fJjhn2vArY948GGA4V3rujxfKCk8zsywdFrZz7/YLuIrjSr+BOThF4
         oKyTAR6P7TYfIB7fbhjLE/+tyIfhluiCpQpZ+hvKPJthJp7EECG3OMKi+hKPzR5BVCe1
         6WIK73poqAW1Xze1eyHr5MxAg0/nU3FvOZ2FjjKFrT/KMtL8AUFQjDIBACjd0y1tsXmV
         8OyjhTvrrXZnY3aUhpY6BhBjn/jyYNFIloyDR75Q4FZ3rRg5TQpSw9LDdsA4P/4SRYy/
         upO+Mz8NALfstY0wvaX/DY6wuuMElNkSfd5eE4ShfY1UNC/LPSXbWc+A1bE/xMx0o/pW
         6vpg==
X-Gm-Message-State: AOAM532n1FXeKC/2hUe+mHW6q9tCw1HQ6ogA3DoZWcY0XQscg9nxfymV
	JKtOlDbNqSzlH8og+ObE/jIW1UZJFTm5t1COxIUD+g==
X-Google-Smtp-Source: ABdhPJyTRfFjefFavK3uGJQ1vHwPumrW4MminednwFrKaNwrlTS1n6t1ULZCj1iO4z3RDkAO5+tjp1QR/RSjkrqNs+A=
X-Received: by 2002:a05:620a:22f:: with SMTP id u15mr1008834qkm.22.1614222389433;
 Wed, 24 Feb 2021 19:06:29 -0800 (PST)
MIME-Version: 1.0
References: <CAMmSBy-_UOK6DTrwGNOw8Y59Muv8H8wxmsc4-BXcv3N_u5USBA@mail.gmail.com>
 <alpine.DEB.2.21.2102161232310.3234@sstabellini-ThinkPad-T480s>
 <45b8ef4c-6d36-e91b-ca1a-a82eeca5aaf5@suse.com> <CAMmSBy8k0Y50Xkq9Kq+oES27gsoG==T++Hz9SiR0gDgAKnpvRA@mail.gmail.com>
 <49344e8d-5518-68c6-a417-68522a915e72@suse.com> <CAMmSBy-3y+Y3nhyf1uGN6KB_wNLVAqYRfc0hpkdKHtvdGSM5wg@mail.gmail.com>
 <b6b694f6-61ed-c0b7-5980-88ddb5e1616c@suse.com>
In-Reply-To: <b6b694f6-61ed-c0b7-5980-88ddb5e1616c@suse.com>
From: Roman Shaposhnik <roman@zededa.com>
Date: Wed, 24 Feb 2021 19:06:25 -0800
Message-ID: <CAMmSBy8pSZROdPo+gee8oxrU9EL=k+QTJj0UxZTi3Bh+S_g2_w@mail.gmail.com>
Subject: Re: Linux DomU freezes and dies under heavy memory shuffling
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Xen-devel <xen-devel@lists.xenproject.org>, 
	Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>, 
	George Dunlap <george.dunlap@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi Jürgen!

sorry for the belated reply -- I wanted to externalize the VM before
replying -- but let me at least answer you now:

On Tue, Feb 23, 2021 at 5:17 AM Jürgen Groß <jgross@suse.com> wrote:
>
> On 18.02.21 06:21, Roman Shaposhnik wrote:
> > On Wed, Feb 17, 2021 at 12:29 AM Jürgen Groß <jgross@suse.com
> > <mailto:jgross@suse.com>> wrote:
> >
> >     On 17.02.21 09:12, Roman Shaposhnik wrote:
> >      > Hi Jürgen, thanks for taking a look at this. A few comments below:
> >      >
> >      > On Tue, Feb 16, 2021 at 10:47 PM Jürgen Groß <jgross@suse.com
> >     <mailto:jgross@suse.com>> wrote:
> >      >>
> >      >> On 16.02.21 21:34, Stefano Stabellini wrote:
> >      >>> + x86 maintainers
> >      >>>
> >      >>> It looks like the tlbflush is getting stuck?
> >      >>
> >      >> I have seen this case multiple times on customer systems now, but
> >      >> reproducing it reliably seems to be very hard.
> >      >
> >      > It is reliably reproducible under my workload but it takes a long time
> >      > (~3 days of the workload running in the lab).
> >
> >     This is by far the best reproduction rate I have seen up to now.
> >
> >     The next best reproducer seems to be a huge installation with several
> >     hundred hosts and thousands of VMs with about 1 crash each week.
> >
> >      >
> >      >> I suspected fifo events to be blamed, but just yesterday I've been
> >      >> informed of another case with fifo events disabled in the guest.
> >      >>
> >      >> One common pattern seems to be that up to now I have seen this effect
> >      >> only on systems with Intel Gold cpus. Can it be confirmed to be true
> >      >> in this case, too?
> >      >
> >      > I am pretty sure mine isn't -- I can get you full CPU specs if
> >     that's useful.
> >
> >     Just the output of "grep model /proc/cpuinfo" should be enough.
> >
> >
> > processor: 3
> > vendor_id: GenuineIntel
> > cpu family: 6
> > model: 77
> > model name: Intel(R) Atom(TM) CPU  C2550  @ 2.40GHz
> > stepping: 8
> > microcode: 0x12d
> > cpu MHz: 1200.070
> > cache size: 1024 KB
> > physical id: 0
> > siblings: 4
> > core id: 3
> > cpu cores: 4
> > apicid: 6
> > initial apicid: 6
> > fpu: yes
> > fpu_exception: yes
> > cpuid level: 11
> > wp: yes
> > flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
> > pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp
> > lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology
> > nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est
> > tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 movbe popcnt tsc_deadline_timer
> > aes rdrand lahf_lm 3dnowprefetch cpuid_fault epb pti ibrs ibpb stibp
> > tpr_shadow vnmi flexpriority ept vpid tsc_adjust smep erms dtherm ida
> > arat md_clear
> > vmx flags: vnmi preemption_timer invvpid ept_x_only flexpriority
> > tsc_offset vtpr mtf vapic ept vpid unrestricted_guest
> > bugs: cpu_meltdown spectre_v1 spectre_v2 mds msbds_only
> > bogomips: 4800.19
> > clflush size: 64
> > cache_alignment: 64
> > address sizes: 36 bits physical, 48 bits virtual
> > power management:
> >
> >      >
> >      >> In case anybody has a reproducer (either in a guest or dom0) with a
> >      >> setup where a diagnostic kernel can be used, I'd be _very_
> >     interested!
> >      >
> >      > I can easily add things to Dom0 and DomU. Whether that will
> >     disrupt the
> >      > experiment is, of course, another matter. Still please let me
> >     know what
> >      > would be helpful to do.
> >
> >     Is there a chance to switch to an upstream kernel in the guest? I'd like
> >     to add some diagnostic code to the kernel and creating the patches will
> >     be easier this way.
> >
> >
> > That's a bit tough -- the VM is based on stock Ubuntu and if I upgrade
> > the kernel I'll have to fiddle with a lot of things to make the workload
> > functional again.
> >
> > However, I can install a debug kernel (from Ubuntu, etc.)
> >
> > Of course, if patching the kernel is the only way to make progress --
> > let's try that -- please let me know.
>
> I have found a nice upstream patch, which - with some modifications - I
> plan to give our customer as a workaround.
>
> The patch is for kernel 4.12, but chances are good it will apply to a
> 4.15 kernel, too.

I'm slightly confused about this patch -- it seems to me that it needs
to be applied to the guest kernel, correct?

If that's the case -- the challenge I have is that I need to re-build
the Canonical (Ubuntu) distro kernel with this patch -- this seems
a bit daunting at first (I mean -- I'm pretty good at rebuilding kernels,
I just never do it with the vendor ones ;-)).

So... if there's anyone here who has any suggestions on how to do that
-- I'd appreciate pointers.

> I have been able to gather some more data.
>
> I have contacted the author of the upstream kernel patch I've been using
> for our customer (and that helped, by the way).
>
> It seems the problem occurs when running as a guest under at least Xen,
> KVM, and VMware, and there have been reports of bare-metal cases, too.
> Hunting this bug has been going on for several years now; the patch
> author has been at it for 8 months.
>
> So we can rule out a Xen problem.
>
> Finding the root cause is still important, of course, and your setup
> seems to have the best reproduction rate up to now.
>
> So any help would really be appreciated.
>
> Is the VM self-contained? Would it be possible to start it e.g. on a
> test system on my side? If yes, would you be allowed to pass it on to
> me?

I'm working on externalizing the VM in a way that doesn't disclose anything
about the customer workload. I'm almost there -- sans my question about
the vendor kernel rebuild. I plan to make that VM available this week.

Goes without saying, but I would really appreciate your help in chasing this.

Thanks,
Roman.


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 03:44:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 03:44:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89581.168832 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lF7a5-0007rf-An; Thu, 25 Feb 2021 03:44:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89581.168832; Thu, 25 Feb 2021 03:44:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lF7a5-0007rY-70; Thu, 25 Feb 2021 03:44:49 +0000
Received: by outflank-mailman (input) for mailman id 89581;
 Thu, 25 Feb 2021 03:44:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=pmh/=H3=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1lF7a3-0007rT-Rb
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 03:44:47 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a5535efb-b9c4-46f0-99e8-79e54643f33f;
 Thu, 25 Feb 2021 03:44:46 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.16.1/8.15.2) with ESMTPS id 11P3iU8M056842
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Wed, 24 Feb 2021 22:44:36 -0500 (EST) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.16.1/8.15.2/Submit) id 11P3iT24056841;
 Wed, 24 Feb 2021 19:44:29 -0800 (PST) (envelope-from ehem)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a5535efb-b9c4-46f0-99e8-79e54643f33f
Date: Wed, 24 Feb 2021 19:44:29 -0800
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Roman Shaposhnik <roman@zededa.com>
Cc: Jürgen Groß <jgross@suse.com>,
        Stefano Stabellini <sstabellini@kernel.org>,
        Xen-devel <xen-devel@lists.xenproject.org>,
        Jan Beulich <jbeulich@suse.com>,
        Andrew Cooper <andrew.cooper3@citrix.com>,
        Roger Pau Monné <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
        George Dunlap <george.dunlap@citrix.com>
Subject: Re: Linux DomU freezes and dies under heavy memory shuffling
Message-ID: <YDcdHcpN+GywAUKv@mattapan.m5p.com>
References: <CAMmSBy-_UOK6DTrwGNOw8Y59Muv8H8wxmsc4-BXcv3N_u5USBA@mail.gmail.com>
 <alpine.DEB.2.21.2102161232310.3234@sstabellini-ThinkPad-T480s>
 <45b8ef4c-6d36-e91b-ca1a-a82eeca5aaf5@suse.com>
 <CAMmSBy8k0Y50Xkq9Kq+oES27gsoG==T++Hz9SiR0gDgAKnpvRA@mail.gmail.com>
 <49344e8d-5518-68c6-a417-68522a915e72@suse.com>
 <CAMmSBy-3y+Y3nhyf1uGN6KB_wNLVAqYRfc0hpkdKHtvdGSM5wg@mail.gmail.com>
 <b6b694f6-61ed-c0b7-5980-88ddb5e1616c@suse.com>
 <CAMmSBy8pSZROdPo+gee8oxrU9EL=k+QTJj0UxZTi3Bh+S_g2_w@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CAMmSBy8pSZROdPo+gee8oxrU9EL=k+QTJj0UxZTi3Bh+S_g2_w@mail.gmail.com>
X-Spam-Status: No, score=0.4 required=10.0 tests=KHOP_HELO_FCRDNS autolearn=no
	autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

On Wed, Feb 24, 2021 at 07:06:25PM -0800, Roman Shaposhnik wrote:
> I'm slightly confused about this patch -- it seems to me that it needs
> to be applied to the guest kernel, correct?
> 
> If that's the case -- the challenge I have is that I need to re-build
> the Canonical (Ubuntu) distro kernel with this patch -- this seems
> a bit daunting at first (I mean -- I'm pretty good at rebuilding kernels
> I just never do it with the vendor ones ;-)).
> 
> So... if there's anyone here who has any suggestions on how to do that
> -- I'd appreciate pointers.

Generally Debian-derivatives ship the kernel source they use as packages
named "linux-source-<major>.<minor>" (guessing you need
linux-source-5.4?).  They ship their configurations as packages
"linux-config-<major>.<minor>", but they also ship their configuration
with their kernels as /boot/config-<version>.

If you're trying to create a proper packaged kernel, the Linux kernel
Make target "bindeb-pkg" will create an appropriate .deb file.
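As a sketch of that build step (assuming an already unpacked kernel tree on
the target machine; the patch file name below is invented for illustration):

```shell
# Inside an unpacked linux-source tree:
cp /boot/config-"$(uname -r)" .config   # start from the distro configuration
make olddefconfig                       # accept defaults for any new options
patch -p1 < ~/fix-freeze.patch          # placeholder for the diagnostic patch
make -j"$(nproc)" bindeb-pkg            # emits linux-image-*.deb one level up
```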

If you wish to extract a Debian package: it is a few tarballs and a
marker file wrapped in an ar archive.  You're likely interested in
control.tar.?z* and data.tar.?z*.  Older packages used gzip format
(.tar.gz); newer packages use xz format (.tar.xz).

If you want to extract current Ubuntu kernel source on a different
distribution (or even an unrelated flavor of Unix), likely you would
want `ar p linux-source-5.4.deb data.tar.xz | unxz -c | tar -xf -`.
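For a quick sanity check of that recipe without downloading a real package,
the same ar/tar structure can be built and unpacked locally (all file names
below are made up for the demo):

```shell
# A .deb is an ar archive containing debian-binary, control.tar.*, data.tar.*.
# Build a minimal synthetic stand-in:
mkdir -p demo/pkgroot/usr/share/doc
echo "hello" > demo/pkgroot/usr/share/doc/README
tar -C demo/pkgroot -cJf demo/data.tar.xz .
echo "2.0" > demo/debian-binary
( cd demo && ar rc fake.deb debian-binary data.tar.xz )
# The extraction recipe from above, applied to the synthetic archive;
# this lists ./usr/share/doc/README among the members:
ar p demo/fake.deb data.tar.xz | unxz -c | tar -tf -
```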


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Thu Feb 25 04:31:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 04:31:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89588.168849 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lF8Ik-0004Kk-Vi; Thu, 25 Feb 2021 04:30:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89588.168849; Thu, 25 Feb 2021 04:30:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lF8Ik-0004Kd-SY; Thu, 25 Feb 2021 04:30:58 +0000
Received: by outflank-mailman (input) for mailman id 89588;
 Thu, 25 Feb 2021 04:30:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oSAw=H3=zededa.com=roman@srs-us1.protection.inumbo.net>)
 id 1lF8Ik-0004KY-3k
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 04:30:58 +0000
Received: from mail-qt1-x833.google.com (unknown [2607:f8b0:4864:20::833])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f0172af6-da65-4b8f-b10b-9e7878abc6f5;
 Thu, 25 Feb 2021 04:30:57 +0000 (UTC)
Received: by mail-qt1-x833.google.com with SMTP id r24so3240687qtt.8
 for <xen-devel@lists.xenproject.org>; Wed, 24 Feb 2021 20:30:57 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f0172af6-da65-4b8f-b10b-9e7878abc6f5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=zededa.com; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=0T3uLeLBGXJgEJu/REl6vggmqakFoedkqICiEkaRKKs=;
        b=KLyn6JoEnv7bHpi7AR1d95rtwWCZykuYY6zA8Ivc6BF0EKP1G9jtiVRwj8EIkubcf4
         mLfkF+jPNXJWcDpnZPJxKsrp5ndmzMmp725Q3f3DeOS+19ecKy7rgaIelF1ueYlE8X+m
         /bvkgUVKsJK4QZQmjGUqj5ES0ev+2OX/uDHktZjThTb6EdTS7Jv8dPqKL1xxgYcjxOkk
         nlkACDvj0nEQA7FI5dPHZxfXv14D8K5BfdF/wZdzK4MdU54UusW88tVxpy1n82eiVr8T
         8aD05891/u/aaD/+2YEaKI8Q5te2X5Mmsyju1gotJIUGAjMGRfG59MvEPtAS7PXHM6Mm
         10AQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=0T3uLeLBGXJgEJu/REl6vggmqakFoedkqICiEkaRKKs=;
        b=eVZyiSpeyaRG0DD5KwS6XSPa9Ina/Ro9IwoUug8otSDUOiYOrw8LExsGrp+FqyQ2tr
         aAAl9bDN25Cd+EdV4SnCQrytIduislbo6nXkrKZJNOpBsGNliEyPCCwb4y0VWFIfSwZ6
         2aYSzQvWqGPU8zcQBVKCnQJQD3tqNuR16MjqycBvMek0lvdZGxNKnBNjg0eosRTKyBuH
         XGdbIQJg63qVTrWAy2EuWW6c4Fq/MS3YVPyzkktfHL0KogSV1FE+j9HhOcmPT+iESLO+
         TxiVuZwK1lV3taTdvwBQSXH7bu3HGu9SFm3589yNwewMMN06ZY5dWdgCHEHbwRxgZQwI
         QV0w==
X-Gm-Message-State: AOAM533a2t6VXGQjav+lrXoTSV+y286M55yBIuVrqs0c7vF1cycYj5sZ
	TQIKXF+kbYIRjtPkPOFi/4haPyAPkMfxga21jSkdjg==
X-Google-Smtp-Source: ABdhPJyJ/e31rV2RbhMb59nEfb+fxaEdYPWb6EVKePAyCZo+6V+oU92vbv0X3hDnkHPFH4XaTs+zBiQI62t8etKkqNs=
X-Received: by 2002:a05:622a:81:: with SMTP id o1mr915447qtw.63.1614227456700;
 Wed, 24 Feb 2021 20:30:56 -0800 (PST)
MIME-Version: 1.0
References: <CAMmSBy-_UOK6DTrwGNOw8Y59Muv8H8wxmsc4-BXcv3N_u5USBA@mail.gmail.com>
 <alpine.DEB.2.21.2102161232310.3234@sstabellini-ThinkPad-T480s>
 <45b8ef4c-6d36-e91b-ca1a-a82eeca5aaf5@suse.com> <CAMmSBy8k0Y50Xkq9Kq+oES27gsoG==T++Hz9SiR0gDgAKnpvRA@mail.gmail.com>
 <49344e8d-5518-68c6-a417-68522a915e72@suse.com> <CAMmSBy-3y+Y3nhyf1uGN6KB_wNLVAqYRfc0hpkdKHtvdGSM5wg@mail.gmail.com>
 <b6b694f6-61ed-c0b7-5980-88ddb5e1616c@suse.com> <CAMmSBy8pSZROdPo+gee8oxrU9EL=k+QTJj0UxZTi3Bh+S_g2_w@mail.gmail.com>
 <YDcdHcpN+GywAUKv@mattapan.m5p.com>
In-Reply-To: <YDcdHcpN+GywAUKv@mattapan.m5p.com>
From: Roman Shaposhnik <roman@zededa.com>
Date: Wed, 24 Feb 2021 20:30:45 -0800
Message-ID: <CAMmSBy91csJ3MGrV8CPYX-fNdkFu6P12zEr2LjCbchvAeEsTKA@mail.gmail.com>
Subject: Re: Linux DomU freezes and dies under heavy memory shuffling
To: Elliott Mitchell <ehem+xen@m5p.com>
Cc: "Jürgen Groß" <jgross@suse.com>, Stefano Stabellini <sstabellini@kernel.org>, 
	Xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich <jbeulich@suse.com>, 
	Andrew Cooper <andrew.cooper3@citrix.com>, "Roger Pau Monné" <roger.pau@citrix.com>, Wei Liu <wl@xen.org>, 
	George Dunlap <george.dunlap@citrix.com>
Content-Type: text/plain; charset="UTF-8"

On Wed, Feb 24, 2021 at 7:44 PM Elliott Mitchell <ehem+xen@m5p.com> wrote:
>
> On Wed, Feb 24, 2021 at 07:06:25PM -0800, Roman Shaposhnik wrote:
> > I'm slightly confused about this patch -- it seems to me that it needs
> > to be applied to the guest kernel, correct?
> >
> > If that's the case -- the challenge I have is that I need to re-build
> > the Canonical (Ubuntu) distro kernel with this patch -- this seems
> > a bit daunting at first (I mean -- I'm pretty good at rebuilding kernels
> > I just never do it with the vendor ones ;-)).
> >
> > So... if there's anyone here who has any suggestions on how to do that
> > -- I'd appreciate pointers.
>
> Generally Debian-derivatives ship the kernel source they use as packages
> named "linux-source-<major>.<minor>" (guessing you need
> linux-source-5.4?).  They ship their configurations as packages
> "linux-config-<major>.<minor>", but they also ship their configuration
> with their kernels as /boot/config-<version>.
>
> If you're trying to create a proper packaged kernel, the Linux kernel
> Make target "bindeb-pkg" will create an appropriate .deb file.

Right -- but that's not what distro builders use, right? I mean they do
the whole sdeb -> deb business.

In fact, to stay as faithful as possible -- I'd love to:
   1. unpack SDEB
   2. add a single patch to the set of sources
   3. repack SDEB back
   4. do whatever it is they do to go SDEB -> DEB

Thanks,
Roman.


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 04:48:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 04:48:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89592.168861 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lF8ZP-0005Uv-Hy; Thu, 25 Feb 2021 04:48:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89592.168861; Thu, 25 Feb 2021 04:48:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lF8ZP-0005Uo-Eu; Thu, 25 Feb 2021 04:48:11 +0000
Received: by outflank-mailman (input) for mailman id 89592;
 Thu, 25 Feb 2021 04:48:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=pmh/=H3=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1lF8ZO-0005Uj-Ah
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 04:48:10 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0fac5ea6-4131-472a-afc8-6a6492936374;
 Thu, 25 Feb 2021 04:48:08 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.16.1/8.15.2) with ESMTPS id 11P4lrGB057101
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Wed, 24 Feb 2021 23:47:58 -0500 (EST) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.16.1/8.15.2/Submit) id 11P4lqcN057100;
 Wed, 24 Feb 2021 20:47:52 -0800 (PST) (envelope-from ehem)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0fac5ea6-4131-472a-afc8-6a6492936374
Date: Wed, 24 Feb 2021 20:47:52 -0800
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Roman Shaposhnik <roman@zededa.com>
Cc: Jürgen Groß <jgross@suse.com>,
        Stefano Stabellini <sstabellini@kernel.org>,
        Xen-devel <xen-devel@lists.xenproject.org>,
        Jan Beulich <jbeulich@suse.com>,
        Andrew Cooper <andrew.cooper3@citrix.com>,
        Roger Pau Monné <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
        George Dunlap <george.dunlap@citrix.com>
Subject: Re: Linux DomU freezes and dies under heavy memory shuffling
Message-ID: <YDcr+Dd2hz9IK6BT@mattapan.m5p.com>
References: <CAMmSBy-_UOK6DTrwGNOw8Y59Muv8H8wxmsc4-BXcv3N_u5USBA@mail.gmail.com>
 <alpine.DEB.2.21.2102161232310.3234@sstabellini-ThinkPad-T480s>
 <45b8ef4c-6d36-e91b-ca1a-a82eeca5aaf5@suse.com>
 <CAMmSBy8k0Y50Xkq9Kq+oES27gsoG==T++Hz9SiR0gDgAKnpvRA@mail.gmail.com>
 <49344e8d-5518-68c6-a417-68522a915e72@suse.com>
 <CAMmSBy-3y+Y3nhyf1uGN6KB_wNLVAqYRfc0hpkdKHtvdGSM5wg@mail.gmail.com>
 <b6b694f6-61ed-c0b7-5980-88ddb5e1616c@suse.com>
 <CAMmSBy8pSZROdPo+gee8oxrU9EL=k+QTJj0UxZTi3Bh+S_g2_w@mail.gmail.com>
 <YDcdHcpN+GywAUKv@mattapan.m5p.com>
 <CAMmSBy91csJ3MGrV8CPYX-fNdkFu6P12zEr2LjCbchvAeEsTKA@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CAMmSBy91csJ3MGrV8CPYX-fNdkFu6P12zEr2LjCbchvAeEsTKA@mail.gmail.com>
X-Spam-Status: No, score=0.4 required=10.0 tests=KHOP_HELO_FCRDNS autolearn=no
	autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

On Wed, Feb 24, 2021 at 08:30:45PM -0800, Roman Shaposhnik wrote:
> Right -- but that's not what distro builders use, right? I mean they do
> the whole sdeb -> deb business.
> 
> In fact, to stay as faithful as possible -- I'd love to:
>    1. unpack SDEB
>    2. add a single patch to the set of sources
>    3. repack SDEB back
>    4. do whatever it is they do to go SDEB -> DEB

Oh, you want to stay that close to the original distribution package.  For
Debian-derivatives, install the package "dpkg-dev".

Generally the distribution will have a page somewhere where you can get
the files, but often it is handiest to run `apt-get source <package>` (I
believe `apt source <package>` also works, but I'm used to `apt-get`).
This will grab the tarballs for the source and unpack them.

Go into the unpacked directory and run `dpkg-buildpackage -b`
(optionally, patch first).  This creates the package in the starting
directory.

The tarballs left behind in the starting directory can be nuked or saved.
If saved, the build directory can be recreated by running
`dpkg-source -x <src-package-name>_<ver>.dsc`.  This lets you reset the
build directory to original state.
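Put together, the flow described above looks roughly like this; it needs a
Debian/Ubuntu host with deb-src entries enabled, and the patch file name is
a placeholder:

```shell
sudo apt-get install dpkg-dev build-essential   # build prerequisites
apt-get source linux-image-"$(uname -r)"        # fetch and unpack the source
cd linux-*/                                     # the unpacked build directory
patch -p1 < ~/fix-freeze.patch                  # placeholder: the diagnostic patch
dpkg-buildpackage -b -uc                        # binary build, unsigned
# The resulting .deb files land one directory up; to reset the tree later:
#   dpkg-source -x ../<src-package-name>_<ver>.dsc
```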


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Thu Feb 25 06:40:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 06:40:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89600.168886 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFAJW-0008Lk-Ue; Thu, 25 Feb 2021 06:39:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89600.168886; Thu, 25 Feb 2021 06:39:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFAJW-0008Ld-Rb; Thu, 25 Feb 2021 06:39:54 +0000
Received: by outflank-mailman (input) for mailman id 89600;
 Thu, 25 Feb 2021 06:39:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFAJW-0008LV-0R; Thu, 25 Feb 2021 06:39:54 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFAJV-00024Z-Og; Thu, 25 Feb 2021 06:39:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFAJV-0005Ub-EY; Thu, 25 Feb 2021 06:39:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lFAJV-0006Z1-E2; Thu, 25 Feb 2021 06:39:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Srm45ODoK9rYRN9FoNgA9IvZWwtcwnW1KNAi1PE58kQ=; b=vglpfVRzROfnHQ8fQYQ7epJD9q
	Yf40bveJ1l/hMANw1uO6rw8H62GAxD1OGCVasOUJRf+j5+aFgOMpOWOZ5dIbhJbrOJpIo1SlVhD6W
	M/EvJBP0Cs3fueeolWEFQld5yaoMTSgFitpI8Sojg2gbdV3i9EMgNxZeAwTzGNvj5Q+k=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159642-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 159642: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=7ef8134565dccf9186d5eabd7dbb4ecae6dead87
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 25 Feb 2021 06:39:53 +0000

flight 159642 qemu-mainline real [real]
flight 159658 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/159642/
http://logs.test-lab.xenproject.org/osstest/logs/159658/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                7ef8134565dccf9186d5eabd7dbb4ecae6dead87
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  188 days
Failing since        152659  2020-08-21 14:07:39 Z  187 days  362 attempts
Testing same since   159563  2021-02-22 23:37:57 Z    2 days    4 attempts

------------------------------------------------------------
425 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 117355 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 07:18:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 07:18:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89612.168904 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFAum-0003mI-0f; Thu, 25 Feb 2021 07:18:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89612.168904; Thu, 25 Feb 2021 07:18:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFAul-0003mB-U0; Thu, 25 Feb 2021 07:18:23 +0000
Received: by outflank-mailman (input) for mailman id 89612;
 Thu, 25 Feb 2021 07:18:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qWJX=H3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lFAuk-0003m6-IZ
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 07:18:22 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ada113ca-d2f3-4a45-afc9-e97d099d4ca0;
 Thu, 25 Feb 2021 07:18:20 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 51FB3ACD4;
 Thu, 25 Feb 2021 07:18:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ada113ca-d2f3-4a45-afc9-e97d099d4ca0
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614237499; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=qPNwihq1WLSYmzXrFBK3xP0691vPEul5dzWQctBxqDo=;
	b=QJKNneQjty3xyaVPL2apB8rNaCGeP9zFViymXj+43ETQrkTbiw/zYg6k7dLgY7TkE03mbC
	OpQiQ5XFkCH2x57q2yVHglBYuwV+0kr10HHsZTV8/+z16uFMZIu6WZ054yc6zYWNWjFen2
	9+t0EjCCU9cLnJhoSKOzOqIYNVHCCkE=
Subject: Re: [PATCH] x86/EFI: suppress GNU ld 2.36'es creation of base relocs
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <6ce5b1a7-d7c2-c30c-ad78-233379ea130b@suse.com>
 <53c7a708-1664-0186-1fd6-1056f8e7839c@citrix.com>
 <f8e56c90-f51c-01f7-0987-4c0697a17bb0@suse.com>
 <a35dd0b7-b804-9c75-b93c-e764345df46b@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <f07c0fef-2ff8-8331-678a-289cde89d36c@suse.com>
Date: Thu, 25 Feb 2021 08:18:15 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <a35dd0b7-b804-9c75-b93c-e764345df46b@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 24.02.2021 18:17, Andrew Cooper wrote:
> On 23/02/2021 07:53, Jan Beulich wrote:
>> On 22.02.2021 17:36, Andrew Cooper wrote:
>>> On 19/02/2021 08:09, Jan Beulich wrote:
>>>> --- a/xen/arch/x86/Makefile
>>>> +++ b/xen/arch/x86/Makefile
>>>> @@ -123,8 +123,13 @@ ifneq ($(efi-y),)
>>>>  # Check if the compiler supports the MS ABI.
>>>>  export XEN_BUILD_EFI := $(shell $(CC) $(XEN_CFLAGS) -c efi/check.c -o efi/check.o 2>/dev/null && echo y)
>>>>  # Check if the linker supports PE.
>>>> -XEN_BUILD_PE := $(if $(XEN_BUILD_EFI),$(shell $(LD) -mi386pep --subsystem=10 -S -o efi/check.efi efi/check.o 2>/dev/null && echo y))
>>>> +EFI_LDFLAGS = $(patsubst -m%,-mi386pep,$(XEN_LDFLAGS)) --subsystem=10 --strip-debug
>>>> +XEN_BUILD_PE := $(if $(XEN_BUILD_EFI),$(shell $(LD) $(EFI_LDFLAGS) -o efi/check.efi efi/check.o 2>/dev/null && echo y))
>>>>  CFLAGS-$(XEN_BUILD_EFI) += -DXEN_BUILD_EFI
>>>> +# Check if the linker produces fixups in PE by default (we need to disable it doing so for now).
>>>> +XEN_NO_PE_FIXUPS := $(if $(XEN_BUILD_EFI), \
>>>> +                         $(shell $(LD) $(EFI_LDFLAGS) --disable-reloc-section -o efi/check.efi efi/check.o 2>/dev/null && \
>>>> +                                 echo --disable-reloc-section))
>>> Why does --strip-debug move?
>> -S and --strip-debug are the same. I'm simply accumulating in
>> EFI_LDFLAGS all that's needed for the use in the probing construct.
> 
> Oh ok.
> 
> It occurs to me that EFI_LDFLAGS now only gets started in an ifneq
> block, but appended to later on while unprotected.  That said, I'm
> fairly sure it is only consumed inside a different ifeq section, so I
> think there is a reasonable quantity of tidying which ought to be done here.

Yes, in particular it wants to move out of Makefile (so it
won't get executed multiple times).

>> Also I meanwhile have a patch to retain debug info, for which this
>> movement turns out to be a prereq. (I've yet to test that the
>> produced binary actually works, and what's more I first need to get
>> a couple of changes accepted into binutils for the linker to actually
>> cope.)
>>
>>> What's wrong with $(call ld-option ...) ?  Actually, lots of this block
>>> of code looks to be opencoding of standard constructs.
>> It looks like ld-option could indeed be used here (there are marginal
>> differences which are likely acceptable), despite its brief comment
>> talking of just "flag" (singular, plus not really covering e.g. input
>> files).
>>
>> But:
>> - It working differently than cc-option makes it inconsistent to
>>   use (the setting of XEN_BUILD_EFI can't very well be switched to
>>   use cc-option); because of this I'm not surprised that we have
>>   only exactly one use right now in the tree.
>> - While XEN_BUILD_PE wants to be set to "y", for XEN_NO_PE_FIXUPS
>>   another transformation would then be necessary to translate "y"
>>   into "--disable-reloc-section".
>> - Do you really suggest to re-do this at this point in the release
>>   cycle?
> 
> I'm looking to prevent this almost-incomprehensible mess from getting worse.
> 
> But I suppose you want this to backport, so I suppose it ought to be
> minimally invasive.

Backporting - yes, definitely. And hence minimally invasive would
indeed be helpful.

> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Thanks.

> This logic all actually needs moving into Kconfig so we can also go
> about fixing the other bugs we have such as having Multiboot headers in
> xen.efi pointing at unusable entrypoints.

My objections to doing such stuff in Kconfig have remained
unresponded to, iirc. Plus doing this in Kconfig wouldn't help on its
own - we'd also need to further split which object files get linked
into which binary. (In fact, in a patch in the 4.16 series I now have,
to use linker-produced base relocations and to retain debug info, I
do away with prelink-efi.o, as it becomes identical to prelink.o.)

Jan
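
P.S.: Stripped of the surrounding conditionals, the probing pattern
being discussed reduces to roughly the following (an illustrative
sketch only, not the actual xen/arch/x86/Makefile; $(LD),
$(XEN_LDFLAGS), and efi/check.o are assumed to exist as in the real
build):

```make
# Collect everything the probe and the final link both need.
EFI_LDFLAGS = $(patsubst -m%,-mi386pep,$(XEN_LDFLAGS)) --subsystem=10 --strip-debug

# "y" if the linker can produce a PE binary at all.
XEN_BUILD_PE := $(if $(XEN_BUILD_EFI),$(shell $(LD) $(EFI_LDFLAGS) \
                     -o efi/check.efi efi/check.o 2>/dev/null && echo y))

# Empty, or the literal option: on success the probe echoes the flag
# itself, so the result can be appended verbatim to the link command,
# with no "y" -> "--disable-reloc-section" translation step.
XEN_NO_PE_FIXUPS := $(if $(XEN_BUILD_EFI), \
                         $(shell $(LD) $(EFI_LDFLAGS) --disable-reloc-section \
                                 -o efi/check.efi efi/check.o 2>/dev/null && \
                                 echo --disable-reloc-section))
```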


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 07:21:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 07:21:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89615.168917 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFAxI-0004jG-GV; Thu, 25 Feb 2021 07:21:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89615.168917; Thu, 25 Feb 2021 07:21:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFAxI-0004j9-C9; Thu, 25 Feb 2021 07:21:00 +0000
Received: by outflank-mailman (input) for mailman id 89615;
 Thu, 25 Feb 2021 07:20:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qWJX=H3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lFAxH-0004j4-5o
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 07:20:59 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c9a62473-23e2-44ac-8d82-1e71ed1fa690;
 Thu, 25 Feb 2021 07:20:58 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5D6C5ACD4;
 Thu, 25 Feb 2021 07:20:57 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c9a62473-23e2-44ac-8d82-1e71ed1fa690
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614237657; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=4XBXe1Cf9nkyNUF7o/vTMFDBr3U8RRUw+NuVPfTSB2g=;
	b=jTl303D87a4Nxt92oIlhvnJTFGW53xVN8sPgxWPgU2nkH0NUeT39WSVyJAHQdEg63OOwMi
	Xx38BlV3K0CXt/pMj17i+QDL2hgicDxlgzx4kRqzcx3JSIbfBhbF7ILBFG2/U1PhIPRp/J
	yMeEqDIKv8mNtTPpbxaok6dwKjVMMdk=
Subject: [4.15] Re: [PATCH] x86/EFI: suppress GNU ld 2.36'es creation of base
 relocs
To: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
References: <6ce5b1a7-d7c2-c30c-ad78-233379ea130b@suse.com>
 <53c7a708-1664-0186-1fd6-1056f8e7839c@citrix.com>
 <f8e56c90-f51c-01f7-0987-4c0697a17bb0@suse.com>
 <a35dd0b7-b804-9c75-b93c-e764345df46b@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <fff8e64c-f724-11e8-daa5-80147c6925dd@suse.com>
Date: Thu, 25 Feb 2021 08:20:57 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <a35dd0b7-b804-9c75-b93c-e764345df46b@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 24.02.2021 18:17, Andrew Cooper wrote:
> On 23/02/2021 07:53, Jan Beulich wrote:
>> On 22.02.2021 17:36, Andrew Cooper wrote:
>>> On 19/02/2021 08:09, Jan Beulich wrote:
>>>> --- a/xen/arch/x86/Makefile
>>>> +++ b/xen/arch/x86/Makefile
>>>> @@ -123,8 +123,13 @@ ifneq ($(efi-y),)
>>>>  # Check if the compiler supports the MS ABI.
>>>>  export XEN_BUILD_EFI := $(shell $(CC) $(XEN_CFLAGS) -c efi/check.c -o efi/check.o 2>/dev/null && echo y)
>>>>  # Check if the linker supports PE.
>>>> -XEN_BUILD_PE := $(if $(XEN_BUILD_EFI),$(shell $(LD) -mi386pep --subsystem=10 -S -o efi/check.efi efi/check.o 2>/dev/null && echo y))
>>>> +EFI_LDFLAGS = $(patsubst -m%,-mi386pep,$(XEN_LDFLAGS)) --subsystem=10 --strip-debug
>>>> +XEN_BUILD_PE := $(if $(XEN_BUILD_EFI),$(shell $(LD) $(EFI_LDFLAGS) -o efi/check.efi efi/check.o 2>/dev/null && echo y))
>>>>  CFLAGS-$(XEN_BUILD_EFI) += -DXEN_BUILD_EFI
>>>> +# Check if the linker produces fixups in PE by default (we need to disable it doing so for now).
>>>> +XEN_NO_PE_FIXUPS := $(if $(XEN_BUILD_EFI), \
>>>> +                         $(shell $(LD) $(EFI_LDFLAGS) --disable-reloc-section -o efi/check.efi efi/check.o 2>/dev/null && \
>>>> +                                 echo --disable-reloc-section))
>>> Why does --strip-debug move?
>> -S and --strip-debug are the same. I'm simply accumulating in
>> EFI_LDFLAGS all that's needed for the use in the probing construct.
> 
> Oh ok.
> 
> It occurs to me that EFI_LDFLAGS now only gets started in an ifneq
> block, but appended to later on while unprotected.  That said, I'm
> fairly sure it is only consumed inside a different ifeq section, so I
> think there is a reasonable quantity of tidying which ought to be done here.
> 
>> Also I meanwhile have a patch to retain debug info, for which this
>> movement turns out to be a prereq. (I've yet to test that the
>> produced binary actually works, and what's more I first need to get
>> a couple of changes accepted into binutils for the linker to actually
>> cope.)
>>
>>> What's wrong with $(call ld-option ...) ?  Actually, lots of this block
>>> of code looks to be opencoding of standard constructs.
>> It looks like ld-option could indeed be used here (there are marginal
>> differences which are likely acceptable), despite its brief comment
>> talking of just "flag" (singular, plus not really covering e.g. input
>> files).
>>
>> But:
>> - It working differently than cc-option makes it inconsistent to
>>   use (the setting of XEN_BUILD_EFI can't very well be switched to
>>   use cc-option); because of this I'm not surprised that we have
>>   only exactly one use right now in the tree.
>> - While XEN_BUILD_PE wants to be set to "y", for XEN_NO_PE_FIXUPS
>>   another transformation would then be necessary to translate "y"
>>   into "--disable-reloc-section".
>> - Do you really suggest to re-do this at this point in the release
>>   cycle?
> 
> I'm looking to prevent this almost-incomprehensible mess from getting worse.
> 
> But I suppose you want this to backport, so I suppose it ought to be
> minimally invasive.
> 
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Since getting Andrew's ack has taken us across the firm-freeze boundary,
may I ask for a release-ack here? As noted, this change (alongside
the earlier one) will want backporting, perhaps even to
security-support-only branches.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 07:30:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 07:30:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89621.168935 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFB5y-00058H-Hl; Thu, 25 Feb 2021 07:29:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89621.168935; Thu, 25 Feb 2021 07:29:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFB5y-00058A-Eg; Thu, 25 Feb 2021 07:29:58 +0000
Received: by outflank-mailman (input) for mailman id 89621;
 Thu, 25 Feb 2021 07:29:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFB5w-000582-P3; Thu, 25 Feb 2021 07:29:56 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFB5w-0002sh-FE; Thu, 25 Feb 2021 07:29:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFB5w-0007Nu-7A; Thu, 25 Feb 2021 07:29:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lFB5w-0002Gh-6c; Thu, 25 Feb 2021 07:29:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=D+QK7lEpOSOdYV6de6u8ZMJGSnzvFZmMNcptrmZlXb4=; b=3C/PWpZQABtEh6FIOmJ/sQ599W
	9Au9HeZ2Qy/U7ymVcjrPR/5wgHUDAfnQUNE7X1QU1HZE6hrWo8drwvrgA6WQfhSx8hBHzX2b8PkUu
	Llf4zht8AiDAnCqqc6bMjcuzFZocdSrV+N7E0F9HvfVvaLBw9ThVSXRoYDHQc2h0sxHo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159656-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 159656: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=a0cef16787930c810263f1edd057e038cb6406e3
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 25 Feb 2021 07:29:56 +0000

flight 159656 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159656/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              a0cef16787930c810263f1edd057e038cb6406e3
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  230 days
Failing since        151818  2020-07-11 04:18:52 Z  229 days  222 attempts
Testing same since   159656  2021-02-25 04:18:57 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 44188 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 07:33:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 07:33:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89626.168949 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFB9d-0005z1-2q; Thu, 25 Feb 2021 07:33:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89626.168949; Thu, 25 Feb 2021 07:33:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFB9c-0005yu-W2; Thu, 25 Feb 2021 07:33:44 +0000
Received: by outflank-mailman (input) for mailman id 89626;
 Thu, 25 Feb 2021 07:33:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qWJX=H3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lFB9b-0005yp-LE
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 07:33:43 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 45572949-0d11-4900-9fab-1f20a214a0e6;
 Thu, 25 Feb 2021 07:33:41 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 22A84AD6B;
 Thu, 25 Feb 2021 07:33:41 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 45572949-0d11-4900-9fab-1f20a214a0e6
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614238421; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=a3BRihie3lifIDroDr4++Mfc0UHcnu/N2fnqw+5haSM=;
	b=GpU8DlBsevf+eVkPZ94CbGcv1uB6uRZI+tW6SJTwFPOhzG3UUXUI5dxuPLatvLOelAIVDc
	XHr8Htmb6stblsr5FBv+Ne2JFlTnn3tyndhJVoHYEP9avlnkyhJmZYer9l6FnNH2mS1SX4
	JcM6RrDa50deZWu7GY4nKXEnQVDSiqU=
Subject: Re: [PATCH] xen-netback: correct success/error reporting for the
 SKB-with-fraglist case
To: paul@xen.org
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "netdev@vger.kernel.org" <netdev@vger.kernel.org>, Wei Liu <wl@xen.org>
References: <4dd5b8ec-a255-7ab1-6dbf-52705acd6d62@suse.com>
 <67bc0728-761b-c3dd-bdd5-1a850ff79fbb@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <76c94541-21a8-7ae5-c4c4-48552f16c3fd@suse.com>
Date: Thu, 25 Feb 2021 08:33:41 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <67bc0728-761b-c3dd-bdd5-1a850ff79fbb@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 24.02.2021 17:39, Paul Durrant wrote:
> On 23/02/2021 16:29, Jan Beulich wrote:
>> When re-entering the main loop of xenvif_tx_check_gop() a 2nd time, the
>> special considerations for the head of the SKB no longer apply. Don't
>> mistakenly report ERROR to the frontend for the first entry in the list,
>> even if - from all I can tell - this shouldn't matter much as the overall
>> transmit will need to be considered failed anyway.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>
>> --- a/drivers/net/xen-netback/netback.c
>> +++ b/drivers/net/xen-netback/netback.c
>> @@ -499,7 +499,7 @@ check_frags:
>>   				 * the header's copy failed, and they are
>>   				 * sharing a slot, send an error
>>   				 */
>> -				if (i == 0 && sharedslot)
>> +				if (i == 0 && !first_shinfo && sharedslot)
>>   					xenvif_idx_release(queue, pending_idx,
>>   							   XEN_NETIF_RSP_ERROR);
>>   				else
>>
> 
> I think this will DTRT, but to my mind it would make more sense to clear 
> 'sharedslot' before the 'goto check_frags' at the bottom of the function.

That was my initial idea as well, but
- I think it is for a reason that the variable is "const".
- There is another use of it which would then instead need further
  amending (and which I believe is at least part of the reason for
  the variable to be "const").

Jan


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 07:42:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 07:42:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89630.168977 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFBHd-00075B-Dt; Thu, 25 Feb 2021 07:42:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89630.168977; Thu, 25 Feb 2021 07:42:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFBHd-000752-Am; Thu, 25 Feb 2021 07:42:01 +0000
Received: by outflank-mailman (input) for mailman id 89630;
 Thu, 25 Feb 2021 07:41:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qWJX=H3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lFBHb-00074m-Or
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 07:41:59 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d55fe98f-7ef8-498c-88f6-ba5d95124160;
 Thu, 25 Feb 2021 07:41:58 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A04C8ACD9;
 Thu, 25 Feb 2021 07:41:57 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d55fe98f-7ef8-498c-88f6-ba5d95124160
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614238917; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=T9J3EkjHcnq+Seqkk/bd0FqqRXY4OdossGLUOe//c+8=;
	b=JdSn1OcKPfNuxc8lkQxPQ0UlbcITwRh+jV0HE5XfjBFdabyZCRLqn3LcXnnYinkX6OEOnV
	9aZTny6c9RlqCtECCwq4/1zHpfH2zvRjYtgpN8vxYNYCYrcVi3lt3Iz1Ifjpg9xHq0TENN
	k1rycT+0yCjM3idZr0Wv8tI5Zt2gmEk=
Subject: Re: [PATCH 0/2] hvmloader: drop usage of system headers
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, Ian Jackson <iwj@xenproject.org>,
 xen-devel@lists.xenproject.org, Roger Pau Monne <roger.pau@citrix.com>
References: <20210224102641.89455-1-roger.pau@citrix.com>
 <35864c33-b375-a3c6-13bc-ad1e7d0773eb@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <61932477-d44c-5592-da3f-b0b5ff5c6321@suse.com>
Date: Thu, 25 Feb 2021 08:41:57 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <35864c33-b375-a3c6-13bc-ad1e7d0773eb@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 24.02.2021 21:08, Andrew Cooper wrote:
> On 24/02/2021 10:26, Roger Pau Monne wrote:
>> Hello,
>>
>> The following two patches aim to make hvmloader standalone, so that it
>> doesn't try to use system headers. This shouldn't result in any
>> functional change.
>>
>> Thanks, Roger.
> 
> After some experimentation in the arch container
> 
> Given foo.c as:
> 
> #include <stdint.h>
> 
> extern uint64_t bar;
> uint64_t foo(void)
> {
>     return bar;
> }
> 
> int main(void)
> {
>     return 0;
> }
> 
> The preprocessed form with `gcc -m32 -E` is:
> 
> # 1 "foo.c"
> # 1 "<built-in>"
> # 1 "<command-line>"
> # 31 "<command-line>"
> # 1 "/usr/include/stdc-predef.h" 1 3 4
> # 32 "<command-line>" 2
> # 1 "foo.c"
> # 1 "/usr/lib/gcc/x86_64-pc-linux-gnu/8.3.0/include/stdint.h" 1 3 4
> # 9 "/usr/lib/gcc/x86_64-pc-linux-gnu/8.3.0/include/stdint.h" 3 4
> # 1 "/usr/include/stdint.h" 1 3 4
> # 26 "/usr/include/stdint.h" 3 4
> # 1 "/usr/include/bits/libc-header-start.h" 1 3 4
> # 33 "/usr/include/bits/libc-header-start.h" 3 4
> # 1 "/usr/include/features.h" 1 3 4
> # 450 "/usr/include/features.h" 3 4
> # 1 "/usr/include/sys/cdefs.h" 1 3 4
> # 452 "/usr/include/sys/cdefs.h" 3 4
> # 1 "/usr/include/bits/wordsize.h" 1 3 4
> # 453 "/usr/include/sys/cdefs.h" 2 3 4
> # 1 "/usr/include/bits/long-double.h" 1 3 4
> # 454 "/usr/include/sys/cdefs.h" 2 3 4
> # 451 "/usr/include/features.h" 2 3 4
> # 474 "/usr/include/features.h" 3 4
> # 1 "/usr/include/gnu/stubs.h" 1 3 4
> 
> # 1 "/usr/include/gnu/stubs-32.h" 1 3 4
> # 8 "/usr/include/gnu/stubs.h" 2 3 4
> # 475 "/usr/include/features.h" 2 3 4
> # 34 "/usr/include/bits/libc-header-start.h" 2 3 4
> # 27 "/usr/include/stdint.h" 2 3 4
> # 1 "/usr/include/bits/types.h" 1 3 4
> # 27 "/usr/include/bits/types.h" 3 4
> # 1 "/usr/include/bits/wordsize.h" 1 3 4
> # 28 "/usr/include/bits/types.h" 2 3 4
> # 1 "/usr/include/bits/timesize.h" 1 3 4
> # 29 "/usr/include/bits/types.h" 2 3 4
> 
> # 31 "/usr/include/bits/types.h" 3 4
> typedef unsigned char __u_char;
> ...
> 
> while the freestanding form with `gcc -ffreestanding -m32 -E` is:
> 
> # 1 "foo.c"
> # 1 "<built-in>"
> # 1 "<command-line>"
> # 1 "foo.c"
> # 1 "/usr/lib/gcc/x86_64-pc-linux-gnu/8.3.0/include/stdint.h" 1 3 4
> # 11 "/usr/lib/gcc/x86_64-pc-linux-gnu/8.3.0/include/stdint.h" 3 4
> # 1 "/usr/lib/gcc/x86_64-pc-linux-gnu/8.3.0/include/stdint-gcc.h" 1 3 4
> # 34 "/usr/lib/gcc/x86_64-pc-linux-gnu/8.3.0/include/stdint-gcc.h" 3 4
> 
> # 34 "/usr/lib/gcc/x86_64-pc-linux-gnu/8.3.0/include/stdint-gcc.h" 3 4
> typedef signed char int8_t;
> ...
> 
> 
> I can compile and link trivial programs using -m32 and stdint.h without
> any issue.
> 
> Clearly something more subtle is going on with our choice of options
> when compiling hvmloader, but it certainly looks like stdint.h is fine
> to use in the way we want to use it.

Why "more subtle"? All we're lacking is -ffreestanding. The
question is whether it is an acceptably risky thing to do at
this point in the release cycle to add the option.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 07:42:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 07:42:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89628.168961 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFBHQ-00071X-TV; Thu, 25 Feb 2021 07:41:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89628.168961; Thu, 25 Feb 2021 07:41:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFBHQ-00071Q-QX; Thu, 25 Feb 2021 07:41:48 +0000
Received: by outflank-mailman (input) for mailman id 89628;
 Thu, 25 Feb 2021 07:41:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFBHP-00071I-Kp; Thu, 25 Feb 2021 07:41:47 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFBHP-00033v-DE; Thu, 25 Feb 2021 07:41:47 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFBHP-0007mm-32; Thu, 25 Feb 2021 07:41:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lFBHP-0005FA-2b; Thu, 25 Feb 2021 07:41:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=wDNhhwheWC8oQs6HhxaaTReOv9ZYmjHyAA/1yy/fRqg=; b=7DHLc8WMvMzf8r3l4m1UrPqih0
	osuVvaMwDtWJGh5jLQDhu3784cVJ9d9XElLPTXIezn2krmN3yoQQAXo+6yqZ1v4iK1smq1yHXBBf8
	zV66/KTmrYevYzyTDWTZ/aAfETF7dBFVkr9W6BjhocUzADPBA3Kt1Sujf+329+nHhjWo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159646-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 159646: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-localmigrate/x10:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=062c84fccc4444805738d76a2699c4d3c95184ec
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 25 Feb 2021 07:41:47 +0000

flight 159646 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159646/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 19 guest-localmigrate/x10   fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle  10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl          11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-credit1  11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 linux                062c84fccc4444805738d76a2699c4d3c95184ec
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  208 days
Failing since        152366  2020-08-01 20:49:34 Z  207 days  358 attempts
Testing same since   159646  2021-02-24 20:10:46 Z    0 days    1 attempts

------------------------------------------------------------
5066 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1246356 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 07:56:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 07:56:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89637.168988 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFBV7-0008Ey-MB; Thu, 25 Feb 2021 07:55:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89637.168988; Thu, 25 Feb 2021 07:55:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFBV7-0008Er-Ib; Thu, 25 Feb 2021 07:55:57 +0000
Received: by outflank-mailman (input) for mailman id 89637;
 Thu, 25 Feb 2021 07:55:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qWJX=H3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lFBV5-0008Em-QO
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 07:55:55 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 318bbade-817a-444c-956d-03ab6ed3a7cf;
 Thu, 25 Feb 2021 07:55:50 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A2627ACD9;
 Thu, 25 Feb 2021 07:55:49 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 318bbade-817a-444c-956d-03ab6ed3a7cf
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614239749; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=jbKpuDf0o7OZ/JEyCs4U3YPWK3Nnhe4HPGSfszlKtV4=;
	b=FjsJb7d44+Ga3LC6+ou1gbnCdLUyOf64M9bv1vk6TJI/ncUhbzdG9kv6+CVtu5qHHngoRW
	Aapl2zxNpbwzxwXgcYed5Uz3CCyH2EBUqq6/Uw3RNoeDgfF5gdh6jSXxSXvd+8BArBfomP
	nzfx36KcQ0hO+EvlENcAGg4qZvdr96w=
Subject: Re: [PATCH] xen: introduce XENFEAT_direct_mapped and
 XENFEAT_not_direct_mapped
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Stefano Stabellini <stefano.stabellini@xilinx.com>,
 andrew.cooper3@citrix.com, julien@xen.org, xen-devel@lists.xenproject.org
References: <20210225012243.28530-1-sstabellini@kernel.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <96d764b6-a719-711c-31ea-235381bfd0ce@suse.com>
Date: Thu, 25 Feb 2021 08:55:49 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210225012243.28530-1-sstabellini@kernel.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 25.02.2021 02:22, Stefano Stabellini wrote:
> --- a/xen/include/public/features.h
> +++ b/xen/include/public/features.h
> @@ -114,6 +114,13 @@
>   */
>  #define XENFEAT_linux_rsdp_unrestricted   15
>  
> +/*
> + * A direct-mapped (or 1:1 mapped) domain is a domain for which its
> + * local pages have gfn == mfn.
> + */
> +#define XENFEAT_not_direct_mapped       16
> +#define XENFEAT_direct_mapped           17

Why two new values? Absence of XENFEAT_direct_mapped requires
implying not-direct-mapped by the consumer anyway, doesn't it?

Further, quoting xen/mm.h: "For a non-translated guest which
is aware of Xen, gfn == mfn." This to me implies that PV would
need to get XENFEAT_direct_mapped set; not sure whether this
simply means x86'es is_domain_direct_mapped() is wrong, but if
it is, uses elsewhere in the code would likely need changing.

Also, nit: Please keep the right sides aligned with #define-s
higher up in the file.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 08:44:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 08:44:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89645.169006 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFCG4-0005Gl-Lu; Thu, 25 Feb 2021 08:44:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89645.169006; Thu, 25 Feb 2021 08:44:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFCG4-0005Ge-It; Thu, 25 Feb 2021 08:44:28 +0000
Received: by outflank-mailman (input) for mailman id 89645;
 Thu, 25 Feb 2021 08:44:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qWJX=H3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lFCG3-0005GZ-ES
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 08:44:27 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2b4e0ab0-18d1-42e3-b3fe-4e8ad6d7d889;
 Thu, 25 Feb 2021 08:44:26 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 209F2AAAE;
 Thu, 25 Feb 2021 08:44:25 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2b4e0ab0-18d1-42e3-b3fe-4e8ad6d7d889
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614242665; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=IjcG979P0LrZO32+Aw+dI+CvT5gBaMVcpWx8utENdVI=;
	b=SzpblRe2678xaohXvjP30pPO3PL1TvD4YBz0YbppEPzT9Bq4d8sx32S1wtjeHHYQxLSuop
	i5MlbBzmAYgJFzaHoazydofU8pIgwqlUnO8fMjpRMj5uwqRz2x8P6XORvR8PPctXZsf2Kw
	UtVyAHFsBO4jDeS1ljq2t1vfYeZa3co=
Subject: Re: [PATCH v3 1/2][4.15] VMX: delay p2m insertion of APIC access page
From: Jan Beulich <jbeulich@suse.com>
To: Kevin Tian <kevin.tian@intel.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Jun Nakajima <jun.nakajima@intel.com>
Cc: Wei Liu <wl@xen.org>, Ian Jackson <iwj@xenproject.org>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <4731a3a3-906a-98ac-11ba-6a0723903391@suse.com>
 <90271e69-c07e-a32c-5531-a79b10ef03dd@suse.com>
Message-ID: <94ac725c-ae3a-5380-aaa0-c8523074b581@suse.com>
Date: Thu, 25 Feb 2021 09:44:25 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <90271e69-c07e-a32c-5531-a79b10ef03dd@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 22.02.2021 11:56, Jan Beulich wrote:
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -428,6 +428,14 @@ static void vmx_domain_relinquish_resour
>      vmx_free_vlapic_mapping(d);
>  }
>  
> +static void domain_creation_finished(struct domain *d)
> +{
> +    if ( has_vlapic(d) && !mfn_eq(d->arch.hvm.vmx.apic_access_mfn, _mfn(0)) &&
> +         set_mmio_p2m_entry(d, gaddr_to_gfn(APIC_DEFAULT_PHYS_BASE),
> +                            d->arch.hvm.vmx.apic_access_mfn, PAGE_ORDER_4K) )
> +        domain_crash(d);
> +}

Having noticed that in patch 2 I also need to arrange for
ept_get_entry_emt() to continue to return WB for this page, I'm
inclined to add a respective assertion here. Would anyone object
to me doing so?

Kevin, Jun - I'd like this to also serve as a ping for an ack
(with or without the suggested ASSERT() addition).

Jan


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 09:30:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 09:30:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89652.169025 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFCyy-0001if-CM; Thu, 25 Feb 2021 09:30:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89652.169025; Thu, 25 Feb 2021 09:30:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFCyy-0001iY-7Q; Thu, 25 Feb 2021 09:30:52 +0000
Received: by outflank-mailman (input) for mailman id 89652;
 Thu, 25 Feb 2021 09:30:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qWJX=H3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lFCyw-0001iT-NS
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 09:30:50 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 668af6fa-de4b-468b-8874-511155fb9473;
 Thu, 25 Feb 2021 09:30:49 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id DD387AC1D;
 Thu, 25 Feb 2021 09:30:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 668af6fa-de4b-468b-8874-511155fb9473
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614245449; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=k5zZu7hjg52cgojELw71rkpoVSlk7KP4+FpJe/naTbE=;
	b=BO1GKhddZRGNlok9epDuUTki4R+GcGUaS5IGt+lGtcvMU3Kd8u4XHtnOBxAajscA4n7eQN
	i07pclDiNBlaGVZqUspJ1uZSNhDiy+hy06xPJk79wvsjh9KMvlzTKQ9/+kuEeYhQ5vqIqk
	w21els91RR3fiacJsf3nSZsoM1/QVDE=
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v3][4.15] x86: mirror compat argument translation area for
 32-bit PV
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Ian Jackson <iwj@xenproject.org>
Message-ID: <dbdb045d-42de-af94-64cc-0be7992b80b6@suse.com>
Date: Thu, 25 Feb 2021 10:30:49 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

Now that we guard the entire Xen VA space against speculative abuse
through hypervisor accesses to guest memory, the argument translation
area's VA also needs to live outside this range, at least for 32-bit PV
guests. To avoid extra is_hvm_*() conditionals, use the alternative VA
uniformly.

While this could be conditionalized upon CONFIG_PV32 &&
CONFIG_SPECULATIVE_HARDEN_GUEST_ACCESS, omitting such extra conditionals
keeps the code more legible imo.

Fixes: 4dc181599142 ("x86/PV: harden guest memory accesses against speculative abuse")
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Roger Pau Monné <roger.pau@citrix.com>
Release-Acked-by: Ian Jackson <iwj@xenproject.org>
---
v3: Use address range in lower half of address space.
v2: Rename PERDOMAIN2_VIRT_START to PERDOMAIN_ALT_VIRT_START.

--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -1691,6 +1691,13 @@ void init_xen_l4_slots(l4_pgentry_t *l4t
     l4t[l4_table_offset(PERDOMAIN_VIRT_START)] =
         l4e_from_page(d->arch.perdomain_l3_pg, __PAGE_HYPERVISOR_RW);
 
+    /* Slot 4: Per-domain mappings mirror. */
+    BUILD_BUG_ON(IS_ENABLED(CONFIG_PV32) &&
+                 !l4_table_offset(PERDOMAIN_ALT_VIRT_START));
+    if ( !is_pv_64bit_domain(d) )
+        l4t[l4_table_offset(PERDOMAIN_ALT_VIRT_START)] =
+            l4t[l4_table_offset(PERDOMAIN_VIRT_START)];
+
     /* Slot 261-: text/data/bss, RW M2P, vmap, frametable, directmap. */
 #ifndef NDEBUG
     if ( short_directmap &&
--- a/xen/include/asm-x86/config.h
+++ b/xen/include/asm-x86/config.h
@@ -170,7 +170,11 @@ extern unsigned char boot_edid_info[128]
  *    Guest-defined use.
  *  0x00000000f5800000 - 0x00000000ffffffff [168MB,             PML4:0]
  *    Read-only machine-to-phys translation table (GUEST ACCESSIBLE).
- *  0x0000000100000000 - 0x00007fffffffffff [128TB-4GB,         PML4:0-255]
+ *  0x0000000100000000 - 0x000001ffffffffff [2TB-4GB,           PML4:0-3]
+ *    Unused / Reserved for future use.
+ *  0x0000020000000000 - 0x0000027fffffffff [512GB, 2^39 bytes, PML4:4]
+ *    Mirror of per-domain mappings (for argument translation area; also HVM).
+ *  0x0000028000000000 - 0x00007fffffffffff [125.5TB,           PML4:5-255]
  *    Unused / Reserved for future use.
  */
 
@@ -207,6 +211,8 @@ extern unsigned char boot_edid_info[128]
 #define PERDOMAIN_SLOTS         3
 #define PERDOMAIN_VIRT_SLOT(s)  (PERDOMAIN_VIRT_START + (s) * \
                                  (PERDOMAIN_SLOT_MBYTES << 20))
+/* Slot 4: mirror of per-domain mappings (for compat xlat area accesses). */
+#define PERDOMAIN_ALT_VIRT_START PML4_ADDR(260 % 256)
 /* Slot 261: machine-to-phys conversion table (256GB). */
 #define RDWR_MPT_VIRT_START     (PML4_ADDR(261))
 #define RDWR_MPT_VIRT_END       (RDWR_MPT_VIRT_START + MPT_VIRT_SIZE)
--- a/xen/include/asm-x86/x86_64/uaccess.h
+++ b/xen/include/asm-x86/x86_64/uaccess.h
@@ -1,7 +1,17 @@
 #ifndef __X86_64_UACCESS_H
 #define __X86_64_UACCESS_H
 
-#define COMPAT_ARG_XLAT_VIRT_BASE ((void *)ARG_XLAT_START(current))
+/*
+ * With CONFIG_SPECULATIVE_HARDEN_GUEST_ACCESS (apparent) PV guest accesses
+ * are prohibited to touch the Xen private VA range.  The compat argument
+ * translation area, therefore, can't live within this range.  Domains
+ * (potentially) in need of argument translation (32-bit PV, possibly HVM) get
+ * a secondary mapping installed, which needs to be used for such accesses in
+ * the PV case, and will also be used for HVM to avoid extra conditionals.
+ */
+#define COMPAT_ARG_XLAT_VIRT_BASE ((void *)ARG_XLAT_START(current) + \
+                                   (PERDOMAIN_ALT_VIRT_START - \
+                                    PERDOMAIN_VIRT_START))
 #define COMPAT_ARG_XLAT_SIZE      (2*PAGE_SIZE)
 struct vcpu;
 int setup_compat_arg_xlat(struct vcpu *v);


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 09:32:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 09:32:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89654.169037 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFD0j-0001r0-Mi; Thu, 25 Feb 2021 09:32:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89654.169037; Thu, 25 Feb 2021 09:32:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFD0j-0001qt-JZ; Thu, 25 Feb 2021 09:32:41 +0000
Received: by outflank-mailman (input) for mailman id 89654;
 Thu, 25 Feb 2021 09:32:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qWJX=H3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lFD0i-0001ql-EC
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 09:32:40 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d2c86f6b-02e2-4c19-a6c9-e8bc63149443;
 Thu, 25 Feb 2021 09:32:39 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2657DACCF;
 Thu, 25 Feb 2021 09:32:38 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d2c86f6b-02e2-4c19-a6c9-e8bc63149443
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614245558; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=AjkJf4O3dzVkjq702sFusBq8xTbdicOubQgnihwYT7Q=;
	b=VpoLdFqHh73ApSmM1sfEUruSG2FRNOLAAh4qhNcvki34p3pOpr1GL8ZY+WQ93QuS+YzsMP
	0wcrQMxVtAxD1iK0y9cdbxF1R+M/WGVR0PcjHtPkZtBX/P7ufGpPoBK+GQ4VPk/0ZVjkyl
	g20K7AugGdVBk61kP19/JU/xhR/gD6k=
Subject: Re: [PATCH v3][4.15] x86: mirror compat argument translation area for
 32-bit PV
From: Jan Beulich <jbeulich@suse.com>
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Ian Jackson <iwj@xenproject.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <dbdb045d-42de-af94-64cc-0be7992b80b6@suse.com>
Message-ID: <c56c115f-10de-35b0-c27c-1930e99d9377@suse.com>
Date: Thu, 25 Feb 2021 10:32:38 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <dbdb045d-42de-af94-64cc-0be7992b80b6@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 25.02.2021 10:30, Jan Beulich wrote:
> Now that we guard the entire Xen VA space against speculative abuse
> through hypervisor accesses to guest memory, the argument translation
> area's VA also needs to live outside this range, at least for 32-bit PV
> guests. To avoid extra is_hvm_*() conditionals, use the alternative VA
> uniformly.
> 
> While this could be conditionalized upon CONFIG_PV32 &&
> CONFIG_SPECULATIVE_HARDEN_GUEST_ACCESS, omitting such extra conditionals
> keeps the code more legible imo.
> 
> Fixes: 4dc181599142 ("x86/PV: harden guest memory accesses against speculative abuse")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Roger - I would have dropped an R-b, but I've assumed keeping an A-b
would be fine. Please let me know if this was wrong.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 09:50:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 09:50:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89658.169048 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFDI9-0003q0-6i; Thu, 25 Feb 2021 09:50:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89658.169048; Thu, 25 Feb 2021 09:50:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFDI9-0003pt-3m; Thu, 25 Feb 2021 09:50:41 +0000
Received: by outflank-mailman (input) for mailman id 89658;
 Thu, 25 Feb 2021 09:50:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lFDI8-0003po-1h
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 09:50:40 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lFDI7-0005ic-Te
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 09:50:39 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lFDI7-0004pW-So
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 09:50:39 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lFDI1-00014S-NG; Thu, 25 Feb 2021 09:50:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=P97ygsKZKkV6O5TmJ/tc84vGWcotGhvJmKuGQroEJS8=; b=ZhrQm/M4KC7RKwELr6dZtUlXnr
	4wXbepyGZYWrMKxxzXwnPe8vum2obECWzBrZpbYwRz/UJpDesb2p9IXYEM0xpLtJQfBqOqhdr8Y2M
	Yl3fjEKi8kGO/ISB8RFLepglPRYkEAXeUJqi2AdNjnH5K1FAB3+nv8A1/o3UR52w/E4g=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24631.29417.501638.284615@mariner.uk.xensource.com>
Date: Thu, 25 Feb 2021 09:50:33 +0000
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
    Wei Liu <wl@xen.org>,
    xen-devel@lists.xenproject.org,
    Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [PATCH 0/2] hvmloader: drop usage of system headers
In-Reply-To: <61932477-d44c-5592-da3f-b0b5ff5c6321@suse.com>
References: <20210224102641.89455-1-roger.pau@citrix.com>
	<35864c33-b375-a3c6-13bc-ad1e7d0773eb@citrix.com>
	<61932477-d44c-5592-da3f-b0b5ff5c6321@suse.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Jan Beulich writes ("Re: [PATCH 0/2] hvmloader: drop usage of system headers"):
> On 24.02.2021 21:08, Andrew Cooper wrote:
> > After some experimentation in the arch container
...
> > while the freestanding form with `gcc -ffreestanding -m32 -E` is:
...
> > # 34 "/usr/lib/gcc/x86_64-pc-linux-gnu/8.3.0/include/stdint-gcc.h" 3 4
> > typedef signed char int8_t;
> > ...

Um, but what size is size_t ?

In particular, note that that path contains nothing to do with 32-bit
so I think it is probably the wrong bitness.

> Why "more subtle"? All we're lacking is -ffreestanding. The
> question is whether it is an acceptably risky thing to do at
> this point in the release cycle to add the option.

If -ffreestanding DTRT then I think it's about as risky as the fix I
already approved and we have merged...

Ian.


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 10:00:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 10:00:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89661.169060 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFDS2-0004ya-6R; Thu, 25 Feb 2021 10:00:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89661.169060; Thu, 25 Feb 2021 10:00:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFDS2-0004yT-3T; Thu, 25 Feb 2021 10:00:54 +0000
Received: by outflank-mailman (input) for mailman id 89661;
 Thu, 25 Feb 2021 10:00:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KbZv=H3=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lFDS0-0004yO-Mf
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 10:00:53 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 29e97645-4e32-46f2-acbd-3773e56d4f40;
 Thu, 25 Feb 2021 10:00:50 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 29e97645-4e32-46f2-acbd-3773e56d4f40
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614247250;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=zpY8Zjt2WgV9N6Dv/X1QD5JpacpvAcOuZa2Xpl0CllI=;
  b=KicKXa+XxDtIjMIVhjIb8cEYQXo0ptv5F4QGvyMCDqvmwP9zc610Yil+
   J/sCj7VSBQ5drpBCnPZZjNL7k0AATjdeiTUoMW2WU4Sc4kvLFvJDb1OKi
   BT1gb7pQd+9wGi+IebReef71TPR9z0JwE8tCocznwCnPC+eqAw84dbjn0
   E=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: s1msz7PjlyvP9zJLGjuMlo+KgZhJ9YBdw5bdPN9id1o0JYUfb4NjkLrLODt4gpA0ZeUOplC1Un
 q29qjk3fPDGKFIsNP2YjUJMKy8xH64RyF9bQ1fnYCud5lAybRV8cu+4Vhb6y3jd0MOyXiyY2Iw
 bV+2YFGRPUZuSGjf4Ti+h3jTrqmihxhbrh6JjEAROAV97cGFIghDgMb4jHApW+rcvhRqLhPeFq
 LwWjamU667Yx7KBgIn9TELI3PWzmO2txbo41Yp5iiQiANvKAESyTV2tQYRq1xXFT36XUDgAWiG
 onM=
X-SBRS: 5.2
X-MesageID: 37924748
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,205,1610427600"; 
   d="scan'208";a="37924748"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=nm15Kn3X1fCop3GK+qtcHtiFDiUM8JCnQ3TcMtQfjHIpHEgPIyWG50RfbuQb1t/I2an7z7cUFcdgfz24uyEuDGtXdx007F8BwIo0CKAaScP2BOT4KP9ZlWfomTZ4rEYYC9MpgjMesRAofBnLFBceQtOi0uw+yZs95yDou8b2wwWvIAXkJ3x3TQ+iTUi1iPvqguaNDfZ0DkgE9zLFpkFmIZn2LttFk3jkAF2QOWwZAlVq2e5qx87NmdSVHpfwVMukvqt1li1ILRvJcPtcuQD8XFCG6CUnz9RnH1IyEkyQB46AiP64AOSU2fOzQ2HR/jXyMI6bWQUDJWNDnaxvQ1KjtA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5PQUbz0moQEjC4YcEwZ6anskzmUEX8uXndME7hdjopc=;
 b=V/Pa7en4a/eaqUz96nzgIzwdmxBnJMBIDj5Cb9VSBzBkoERaVDo42oECBt9WlCz6Z6BS7lTPyBNfvHuURaZYW2aGhAi4Ys7HtWv8mjQD3gHdFMyCR1oqX14/1PMRa/1Iu2eWJfackOnL1UpF6GJpYFTQJ/duIweIvsQX50CRUzBUX9ILRQA7dYgstz3DA8i8vXh2NsHGoo9ifL6btOKOa8yfzcJGjfObhBdDfm3NtQdwI/A6qG7xsMceHpC9sUnP3xNRXLrE/8YI7PaOgP5nEOytaJhk5rpoMEcfDqiGaXXwwye6Kgp+8rzKJv0H8HOf+wetkNfLRIBG/R+mO57Yqw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5PQUbz0moQEjC4YcEwZ6anskzmUEX8uXndME7hdjopc=;
 b=vXd40k6BvkqA1AFJdMxDqncXyGbbNT0Wr6I50vx1YqZfvM6/lejV/W7ti6BNdC5t+FSo/4qQtY7pBAtQL0U6nJXUoQKmEDtFZLfMKjo+rgu72y7UIHXDN1Dl159b/fqAAzGXA93KFmDjPAUmeC2qsKUw918JPINKNzYzNWp2jeQ=
Date: Thu, 25 Feb 2021 11:00:40 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: <xen-devel@lists.xenproject.org>, Jan Beulich <jbeulich@suse.com>, Wei Liu
	<wl@xen.org>, Ian Jackson <iwj@xenproject.org>
Subject: Re: [PATCH 0/2] hvmloader: drop usage of system headers
Message-ID: <YDd1SBQGDWOdQViB@Air-de-Roger>
References: <20210224102641.89455-1-roger.pau@citrix.com>
 <35864c33-b375-a3c6-13bc-ad1e7d0773eb@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
In-Reply-To: <35864c33-b375-a3c6-13bc-ad1e7d0773eb@citrix.com>
X-ClientProxiedBy: MR2P264CA0034.FRAP264.PROD.OUTLOOK.COM (2603:10a6:500::22)
 To DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 35e9bd9a-f0ce-46c2-9986-08d8d974389a
X-MS-TrafficTypeDiagnostic: DM6PR03MB3740:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB37408735E511B56B72C958B98F9E9@DM6PR03MB3740.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: ArmUMhYYMzbhsgc3jU2X826giZsBm+ykfPzPKWGtxf/gJevY0Hz42CSwvyEjShuE3ZTJe9sdM1P15cAs5p3nyT5OhJnX9ObdeCjA869jiC9Wm9OYAPlH2qjCEQ2HXBt3PXgahyP9oQSjzAXnAzZ3gw/y5LIeeLX1gnUaa455b0IIq/gucQYf+x2JI2nw3h/ccjqy5p2GhdB3GbhpmRKDtlK1kuh84VAySt2x8tW8A+p0eQvOWq+A078ZsXd2oCAYwRjo6YhpFdZT3jhtJxz4OTiiZdk9aTpUyfh91S/modmOWxm9+ee3P/p2oGaYOhibosNq8tEGU5QnoNRNQPkTHpKd9kk44wQmOVnYtLL7dGNQzdqOowsgSGlKI+UKAhsSlpIF2vTapcFyYoC7T9xLlTgw2m2Za9M9Uv3qFe5XqkqN74gy4b7/NXcqM5i9Q/iT4Kg2r+pUCgTvHji/9+2sh7lOOaIYHANql9Bumm5KXUVMKBJnOyotmLpksDCb4wvlXmNJNE85BsHV70eDLcKC5Q==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(346002)(136003)(366004)(39860400002)(396003)(376002)(66556008)(54906003)(16526019)(5660300002)(6666004)(6636002)(8936002)(66574015)(9686003)(478600001)(83380400001)(86362001)(8676002)(316002)(4326008)(85182001)(186003)(6496006)(26005)(53546011)(66946007)(6486002)(2906002)(956004)(33716001)(6862004)(66476007);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?WlRHaDA3YVFEb0NoVnBtUlNkOGNTbWFISmFJQjZ4dmwySHpvYyszc1JNU01p?=
 =?utf-8?B?WFoyanhwa3dLZGgxd1QwUFBFckRqaWpqazlkbWthR0QzU0gwQXBvWVhFL0lW?=
 =?utf-8?B?Y0tRSHJxSEdhekswd0Z2Wkl4SzZXaG51N3M3c2QvVjRSL3c2b3dQMmtaM0JZ?=
 =?utf-8?B?MlZGNXVwMmMzNm9IMXZWVVFYbm5GV1h4c0pXNGhvNkhlMlc5SXRhL0ZPMEhZ?=
 =?utf-8?B?YW1td0o3bEYrTkV4am5GOUoyK0ZrMVFSekU3VW1UTWhLelY0UGhDWnpxYU9V?=
 =?utf-8?B?S1BKdUdHdE1QREl5WkRlVW1Fc3FxQVZFOGtXeGdVdXFBM0pkZGd2eHFWaHdN?=
 =?utf-8?B?NU9zdU8wT0N0ZElZOU9TM05Ua0FXQTZSVnozeWxmK0ZYd1B1eU1BUFdNN0hX?=
 =?utf-8?B?N3luR3VpRkZRZFAySFlWZDk2RUZuSjZ5dCtQS3A2UWMyMnp5cGZJZmtBeUpZ?=
 =?utf-8?B?ckNhVmg1cHkzZ3NKTzNkTjJGZ0t4Z0FFTTRZQmtxVkUzb3ROWWtvQ2lyWEJU?=
 =?utf-8?B?YnBtK2IwZDBSeXRkcGhDQ2ZRLzhJR0dsM1pWOEJmUGcxQTBnZHFLanlSWThp?=
 =?utf-8?B?L1dMcFlYSE51TStXeEtrZ1ovb1k3aUlHMG9TcnVmR0w1dzArQ1Y4NUJTWnls?=
 =?utf-8?B?VlpucU1UQVZaTk5XT1NEcjVmVkFqeS85b1RBdERYLy9nZkgzWjdRL1NGTDhS?=
 =?utf-8?B?VXIyUjZIdk1TSkxHVEtDd3NLVW0yOXFHQWpnbXU3RjZkU0ZLNHpQNW5JbUEz?=
 =?utf-8?B?c0k3dVgvZWhEOWVQdnNxWmNFUXNaN1RuQ0dzOHhTUDlsQVd6WUNqUW9mSEdV?=
 =?utf-8?B?WFRUMGM3bFFtUUlXVnNldWlBZ2I2dU01UEgrcG1rVktNa09nZUFNOU4zek5a?=
 =?utf-8?B?eHg4eSt1cHZCbS9aNmJadTFQcjR4THN3Zzl1R1E5aHRlOVJkbEI1S0U3REdW?=
 =?utf-8?B?U1ovdVdhMHhjdmQrSDJWRmlSaUh4bEViWWtXb2lIK21DTmkyMWx2Tll5NTRY?=
 =?utf-8?B?a0Z2REJhTW1sL1RYbE9UNlc3VXhLbEdlRDVCWXhZMDVVdWwxcUt1WGxJUndX?=
 =?utf-8?B?QWhCSEQ2RHFTc2NFb2QvSC84NHRPaVJvdlpobWlmMkRERDQwN0JHSTc4TDNu?=
 =?utf-8?B?T0RERUcybkxmWjVHaGo3MFMrM1ZtaU5oVTZkL2pFeHMvNEJrSDRvaW1LZ0cx?=
 =?utf-8?B?Z2VCUlM3ZTMrWlJGTHdjejlHejYwMDh0QTQrNzNzbEgxUEVkQjFqT1QyK1Fa?=
 =?utf-8?B?UDNGaWdaK3hwWHRXb1IwS3c0SEc5UmFlVEYxSDVYalJMcDB5ai85VjJyUm1i?=
 =?utf-8?B?c3BDMUtvaTZaVk5wMWZHb2VaZ1h3YWVvclZnUFFzWEtQdzhyRlRmS2FwT29Y?=
 =?utf-8?B?anpPMjQxeGptM09BZFVwV3ptZ2pjbkoyWUFld2VUZTZmYUdBMWlMNFJ5dk9P?=
 =?utf-8?B?cXlnQms4U04raERHdWdWMFlIWWlmR3pPeXV5ZFVJV2MwZVYxblFnN3c4dHBF?=
 =?utf-8?B?OU9BK3ZkQ0dwdmViV2xjV1NXSmVHSUNpLzZTQnFlcG13VkZ4Z1pURkxTZFFK?=
 =?utf-8?B?ejVHTUowY1A3Ukl5ZE5zVFVXd2ZsM0VPUDlhaUpZWGNTQkVzMGc1Z1JjRm5a?=
 =?utf-8?B?MXVWQUFtNGdCNWY1UllRMEVVdWV5NDR2M0ZHN0ZMT1Vsb25Na3Z5eFQxYm0y?=
 =?utf-8?B?Q281N0RmUW15YWdmZXVCTEEwZUlFWjJpRTdIVC94WHl0Um1QcUZQaURzc2My?=
 =?utf-8?Q?wbvqXZTODUGUmyARBU80xCViyf6nL+ZxZHugI84?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 35e9bd9a-f0ce-46c2-9986-08d8d974389a
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Feb 2021 10:00:46.3279
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: LMGprE/hRzWOaUbQ7rKtmE3pLcw0NSPF+ReoRuf7RXYin2DCUDm5OkqczvY/okhdDGqexIjSDlrvc+X3dXVUew==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB3740
X-OriginatorOrg: citrix.com

On Wed, Feb 24, 2021 at 08:08:25PM +0000, Andrew Cooper wrote:
> On 24/02/2021 10:26, Roger Pau Monne wrote:
> > Hello,
> >
> > Following two patches aim to make hvmloader standalone, so that it doesn't
> > try to use system headers. It shouldn't result in any functional
> > change.
> >
> > Thanks, Roger.
> 
> After some experimentation in the arch container

I'm afraid it's the alpine container that gives those errors, not the
arch one.

So I've done some testing on alpine and I think there's something
broken. Given the following snippet:

---
#include <stdint.h>

int main(void)
{
        _Static_assert(sizeof(uint64_t) == 8, "");
}
---

This is the output of running `gcc -E -m32 -ffreestanding test.c` on
an alpine chroot that has just the 'gcc' package installed:

---
# 1 "test.c"
# 1 "<built-in>"
# 1 "<command-line>"
# 1 "test.c"

# 1 "/usr/lib/gcc/x86_64-alpine-linux-musl/10.2.1/include/stdint.h" 1 3
# 11 "/usr/lib/gcc/x86_64-alpine-linux-musl/10.2.1/include/stdint.h" 3
# 1 "/usr/lib/gcc/x86_64-alpine-linux-musl/10.2.1/include/stdint-gcc.h" 1 3
# 34 "/usr/lib/gcc/x86_64-alpine-linux-musl/10.2.1/include/stdint-gcc.h" 3

# 34 "/usr/lib/gcc/x86_64-alpine-linux-musl/10.2.1/include/stdint-gcc.h" 3
typedef signed char int8_t;


typedef short int int16_t;


typedef int int32_t;


typedef long long int int64_t;


typedef unsigned char uint8_t;


typedef short unsigned int uint16_t;


typedef unsigned int uint32_t;


typedef long long unsigned int uint64_t;




typedef signed char int_least8_t;
typedef short int int_least16_t;
typedef int int_least32_t;
typedef long long int int_least64_t;
typedef unsigned char uint_least8_t;
typedef short unsigned int uint_least16_t;
typedef unsigned int uint_least32_t;
typedef long long unsigned int uint_least64_t;



typedef signed char int_fast8_t;
typedef int int_fast16_t;
typedef int int_fast32_t;
typedef long long int int_fast64_t;
typedef unsigned char uint_fast8_t;
typedef unsigned int uint_fast16_t;
typedef unsigned int uint_fast32_t;
typedef long long unsigned int uint_fast64_t;




typedef int intptr_t;


typedef unsigned int uintptr_t;




typedef long long int intmax_t;
typedef long long unsigned int uintmax_t;
# 12 "/usr/lib/gcc/x86_64-alpine-linux-musl/10.2.1/include/stdint.h" 2 3
# 3 "test.c" 2


# 4 "test.c"
int main(void)
{
 _Static_assert(sizeof(uint64_t) == 8, "");
}
---

OTOH, this is the output of the same command run on a chroot that has
the full list of packages required to build Xen (from
automation/build/alpine/3.12.dockerfile):

---
# 1 "test.c"
# 1 "<built-in>"
# 1 "<command-line>"
# 1 "test.c"

# 1 "/usr/include/stdint.h" 1 3 4
# 20 "/usr/include/stdint.h" 3 4
# 1 "/usr/include/bits/alltypes.h" 1 3 4
# 55 "/usr/include/bits/alltypes.h" 3 4

# 55 "/usr/include/bits/alltypes.h" 3 4
typedef unsigned long uintptr_t;
# 70 "/usr/include/bits/alltypes.h" 3 4
typedef long intptr_t;
# 96 "/usr/include/bits/alltypes.h" 3 4
typedef signed char int8_t;




typedef signed short int16_t;




typedef signed int int32_t;




typedef signed long int64_t;




typedef signed long intmax_t;




typedef unsigned char uint8_t;




typedef unsigned short uint16_t;




typedef unsigned int uint32_t;




typedef unsigned long uint64_t;
# 146 "/usr/include/bits/alltypes.h" 3 4
typedef unsigned long uintmax_t;
# 21 "/usr/include/stdint.h" 2 3 4

typedef int8_t int_fast8_t;
typedef int64_t int_fast64_t;

typedef int8_t int_least8_t;
typedef int16_t int_least16_t;
typedef int32_t int_least32_t;
typedef int64_t int_least64_t;

typedef uint8_t uint_fast8_t;
typedef uint64_t uint_fast64_t;

typedef uint8_t uint_least8_t;
typedef uint16_t uint_least16_t;
typedef uint32_t uint_least32_t;
typedef uint64_t uint_least64_t;
# 95 "/usr/include/stdint.h" 3 4
# 1 "/usr/include/bits/stdint.h" 1 3 4
typedef int32_t int_fast16_t;
typedef int32_t int_fast32_t;
typedef uint32_t uint_fast16_t;
typedef uint32_t uint_fast32_t;
# 96 "/usr/include/stdint.h" 2 3 4
# 3 "test.c" 2


# 4 "test.c"
int main(void)
{
 _Static_assert(sizeof(uint64_t) == 8, "");
}
---

This is caused by the include path order of gcc on alpine, ie:

---
# cpp -v /dev/null -o /dev/null
Using built-in specs.
COLLECT_GCC=cpp
Target: x86_64-alpine-linux-musl
Configured with: /home/buildozer/aports/main/gcc/src/gcc-10.2.1_pre1/configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --build=x86_64-alpine-linux-musl --host=x86_64-alpine-linux-musl --target=x86_64-alpine-linux-musl --with-pkgversion='Alpine 10.2.1_pre1' --enable-checking=release --disable-fixed-point --disable-libstdcxx-pch --disable-multilib --disable-nls --disable-werror --disable-symvers --enable-__cxa_atexit --enable-default-pie --enable-default-ssp --enable-cloog-backend --enable-languages=c,c++,d,objc,go,fortran,ada --disable-libssp --disable-libmpx --disable-libmudflap --disable-libsanitizer --enable-shared --enable-threads --enable-tls --with-system-zlib --with-linker-hash-style=gnu
Thread model: posix
Supported LTO compression algorithms: zlib
gcc version 10.2.1 20201203 (Alpine 10.2.1_pre1)
COLLECT_GCC_OPTIONS='-E' '-v' '-o' '/dev/null' '-mtune=generic' '-march=x86-64'
 /usr/libexec/gcc/x86_64-alpine-linux-musl/10.2.1/cc1 -E -quiet -v /dev/null -o /dev/null -mtune=generic -march=x86-64
ignoring nonexistent directory "/usr/local/include"
ignoring nonexistent directory "/usr/lib/gcc/x86_64-alpine-linux-musl/10.2.1/../../../../x86_64-alpine-linux-musl/include"
ignoring nonexistent directory "/usr/include/fortify"
#include "..." search starts here:
#include <...> search starts here:
 /usr/include
 /usr/lib/gcc/x86_64-alpine-linux-musl/10.2.1/include
End of search list.
COMPILER_PATH=/usr/libexec/gcc/x86_64-alpine-linux-musl/10.2.1/:/usr/libexec/gcc/x86_64-alpine-linux-musl/10.2.1/:/usr/libexec/gcc/x86_64-alpine-linux-musl/:/usr/lib/gcc/x86_64-alpine-linux-musl/10.2.1/:/usr/lib/gcc/x86_64-alpine-linux-musl/:/usr/lib/gcc/x86_64-alpine-linux-musl/10.2.1/../../../../x86_64-alpine-linux-musl/bin/
LIBRARY_PATH=/usr/lib/gcc/x86_64-alpine-linux-musl/10.2.1/:/usr/lib/gcc/x86_64-alpine-linux-musl/10.2.1/../../../../x86_64-alpine-linux-musl/lib/../lib/:/usr/lib/gcc/x86_64-alpine-linux-musl/10.2.1/../../../../lib/:/lib/../lib/:/usr/lib/../lib/:/usr/lib/gcc/x86_64-alpine-linux-musl/10.2.1/../../../../x86_64-alpine-linux-musl/lib/:/usr/lib/gcc/x86_64-alpine-linux-musl/10.2.1/../../../:/lib/:/usr/lib/
COLLECT_GCC_OPTIONS='-E' '-v' '-o' '/dev/null' '-mtune=generic' '-march=x86-64'
---

/usr/include takes precedence over the gcc private headers, and thus
the usage of the -ffreestanding option is broken.

On my Debian system this is:

---
# cpp -v /dev/null -o /dev/null
Using built-in specs.
COLLECT_GCC=cpp
Target: x86_64-linux-gnu
Configured with: ../src/configure -v --with-pkgversion='Debian 6.3.0-18+deb9u1' --with-bugurl=file:///usr/share/doc/gcc-6/README.Bugs --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --prefix=/usr --program-suffix=-6 --program-prefix=x86_64-linux-gnu- --enable-shared --enable-linker-build-id --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --with-default-libstdcxx-abi=new --enable-gnu-unique-object --disable-vtable-verify --enable-libmpx --enable-plugin --enable-default-pie --with-system-zlib --disable-browser-plugin --enable-java-awt=gtk --enable-gtk-cairo --with-java-home=/usr/lib/jvm/java-1.5.0-gcj-6-amd64/jre --enable-java-home --with-jvm-root-dir=/usr/lib/jvm/java-1.5.0-gcj-6-amd64 --with-jvm-jar-dir=/usr/lib/jvm-exports/java-1.5.0-gcj-6-amd64 --with-arch-directory=amd64 --with-ecj-jar=/usr/share/java/eclipse-ecj.jar --with-target-system-zlib --enable-objc-gc=auto --enable-multiarch --with-arch-32=i686 --with-abi=m64 --with-multilib-list=m32,m64,mx32 --enable-multilib --with-tune=generic --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu
Thread model: posix
gcc version 6.3.0 20170516 (Debian 6.3.0-18+deb9u1)
COLLECT_GCC_OPTIONS='-E' '-v' '-o' '/dev/null' '-mtune=generic' '-march=x86-64'
 /usr/lib/gcc/x86_64-linux-gnu/6/cc1 -E -quiet -v -imultiarch x86_64-linux-gnu /dev/null -o /dev/null -mtune=generic -march=x86-64
ignoring nonexistent directory "/usr/local/include/x86_64-linux-gnu"
ignoring nonexistent directory "/usr/lib/gcc/x86_64-linux-gnu/6/../../../../x86_64-linux-gnu/include"
#include "..." search starts here:
#include <...> search starts here:
 /usr/lib/gcc/x86_64-linux-gnu/6/include
 /usr/local/include
 /usr/lib/gcc/x86_64-linux-gnu/6/include-fixed
 /usr/include/x86_64-linux-gnu
 /usr/include
End of search list.
COMPILER_PATH=/usr/lib/gcc/x86_64-linux-gnu/6/:/usr/lib/gcc/x86_64-linux-gnu/6/:/usr/lib/gcc/x86_64-linux-gnu/:/usr/lib/gcc/x86_64-linux-gnu/6/:/usr/lib/gcc/x86_64-linux-gnu/
LIBRARY_PATH=/usr/lib/gcc/x86_64-linux-gnu/6/:/usr/lib/gcc/x86_64-linux-gnu/6/../../../x86_64-linux-gnu/:/usr/lib/gcc/x86_64-linux-gnu/6/../../../../lib/:/lib/x86_64-linux-gnu/:/lib/../lib/:/usr/lib/x86_64-linux-gnu/:/usr/lib/../lib/:/usr/lib/gcc/x86_64-linux-gnu/6/../../../:/lib/:/usr/lib/
COLLECT_GCC_OPTIONS='-E' '-v' '-o' '/dev/null' '-mtune=generic' '-march=x86-64'
---

Which seems fine as the gcc private include path takes precedence over
/usr/{,local/}include.

Will try to figure out if there's a way to fix or work around this
brokenness.

Roger.


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 10:19:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 10:19:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89666.169073 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFDk6-0006Io-Sf; Thu, 25 Feb 2021 10:19:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89666.169073; Thu, 25 Feb 2021 10:19:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFDk6-0006Ih-P8; Thu, 25 Feb 2021 10:19:34 +0000
Received: by outflank-mailman (input) for mailman id 89666;
 Thu, 25 Feb 2021 10:19:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qWJX=H3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lFDk4-0006Ic-Mf
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 10:19:32 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a3e6e872-b892-4476-8c48-e108d6c3a3e8;
 Thu, 25 Feb 2021 10:19:31 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id F038FAAAE;
 Thu, 25 Feb 2021 10:19:30 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a3e6e872-b892-4476-8c48-e108d6c3a3e8
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614248371; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=m73X+LwRZuFfbm71MWvE8ALc2KkR1M0XHtMc333OiRA=;
	b=aNZnwRmvF75NwyPEf9JKvAL7kBFwlYCZV+kdpBliDxs1GNrPxdL0pCjuhh5RbV1YDj7ytY
	egbD+ldPOn4ecxP7AW9I9d5I73xSLpjCHD87xPRzWvScx/Rye6Fw3LmbqrcvoXBwRKaKNW
	Fy6gmBfZA9ACs0Y/0fGB5Pz2pxhXb9A=
Subject: Re: [PATCH 0/2] hvmloader: drop usage of system headers
To: Ian Jackson <iwj@xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org, Roger Pau Monne <roger.pau@citrix.com>
References: <20210224102641.89455-1-roger.pau@citrix.com>
 <35864c33-b375-a3c6-13bc-ad1e7d0773eb@citrix.com>
 <61932477-d44c-5592-da3f-b0b5ff5c6321@suse.com>
 <24631.29417.501638.284615@mariner.uk.xensource.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b53d52ea-6fea-8f4d-3993-010ea796fbdc@suse.com>
Date: Thu, 25 Feb 2021 11:19:31 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <24631.29417.501638.284615@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 25.02.2021 10:50, Ian Jackson wrote:
> Jan Beulich writes ("Re: [PATCH 0/2] hvmloader: drop usage of system headers"):
>> On 24.02.2021 21:08, Andrew Cooper wrote:
>>> After some experimentation in the arch container
> ...
>>> while the freestanding form with `gcc -ffreestanding -m32 -E` is:
> ...
>>> # 34 "/usr/lib/gcc/x86_64-pc-linux-gnu/8.3.0/include/stdint-gcc.h" 3 4
>>> typedef signed char int8_t;
>>> ...
> 
> Um, but what size is size_t ?
> 
> In particular, note that that path contains nothing to do with 32-bit
> so I think it is probably the wrong bitness.

The path doesn't need to include anything bitness specific, when
the headers can deal with both flavors.

>> Why "more subtle"? All we're lacking is -ffreestanding. The
>> question is whether it is an acceptably risky thing to do at
>> this point in the release cycle to add the option.
> 
> If -ffreestanding DTRT then I think it's about as risky as the fix I
> already approved and we have merged...

It would do the right thing, except Roger found Alpine has another
anomaly undermining this (and breaking -ffreestanding itself).

As an aside I'm not sure what you refer to with "we have merged":
So far only patch 1 of this series (plus its fixup) has gone in.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 11:43:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 11:43:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89676.169097 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFF3L-0006Pw-9n; Thu, 25 Feb 2021 11:43:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89676.169097; Thu, 25 Feb 2021 11:43:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFF3L-0006Pp-6p; Thu, 25 Feb 2021 11:43:31 +0000
Received: by outflank-mailman (input) for mailman id 89676;
 Thu, 25 Feb 2021 11:43:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3Rz4=H3=cert.pl=hubert.jasudowicz@srs-us1.protection.inumbo.net>)
 id 1lFF3J-0006Pk-Pa
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 11:43:29 +0000
Received: from mx.nask.net.pl (unknown [195.187.55.89])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 80125619-04d0-4355-a5be-3c82ee34f9dc;
 Thu, 25 Feb 2021 11:43:28 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 80125619-04d0-4355-a5be-3c82ee34f9dc
From: Hubert Jasudowicz <hubert.jasudowicz@cert.pl>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Micha=C5=82=20Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
Subject: [PATCH][4.15] tools: Fix typo in xc_vmtrace_set_option comment
Date: Thu, 25 Feb 2021 12:43:07 +0100
Message-Id: <3e81757428750eb351ea9d938bf0770026be4c33.1614253079.git.hubert.jasudowicz@cert.pl>
X-Mailer: git-send-email 2.30.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Signed-off-by: Hubert Jasudowicz <hubert.jasudowicz@cert.pl>
---
 tools/include/xenctrl.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
index 0efcdae8b4..318920166c 100644
--- a/tools/include/xenctrl.h
+++ b/tools/include/xenctrl.h
@@ -1644,7 +1644,7 @@ int xc_vmtrace_get_option(xc_interface *xch, uint32_t domid,
                           uint32_t vcpu, uint64_t key, uint64_t *value);
 
 /**
- * Set platform specific vntvmtrace options.
+ * Set platform specific vmtrace options.
  *
  * @parm xch a handle to an open hypervisor interface
  * @parm domid domain identifier
-- 
2.30.1



From xen-devel-bounces@lists.xenproject.org Thu Feb 25 11:46:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 11:46:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89678.169109 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFF6L-0006ZA-P7; Thu, 25 Feb 2021 11:46:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89678.169109; Thu, 25 Feb 2021 11:46:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFF6L-0006Z3-Ly; Thu, 25 Feb 2021 11:46:37 +0000
Received: by outflank-mailman (input) for mailman id 89678;
 Thu, 25 Feb 2021 11:46:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VzdG=H3=redhat.com=kraxel@srs-us1.protection.inumbo.net>)
 id 1lFF6K-0006Yx-7W
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 11:46:36 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 929c509b-f16d-4c6f-a154-d1dee7bbb0c5;
 Thu, 25 Feb 2021 11:46:35 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-226-gTc3-8vAOWS90hJJXyCksg-1; Thu, 25 Feb 2021 06:46:33 -0500
Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.phx2.redhat.com
 [10.5.11.11])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id A7FEC8030C1;
 Thu, 25 Feb 2021 11:46:31 +0000 (UTC)
Received: from sirius.home.kraxel.org (ovpn-114-4.ams2.redhat.com
 [10.36.114.4])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 247E91980D;
 Thu, 25 Feb 2021 11:46:28 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 8821D18000A7; Thu, 25 Feb 2021 12:46:26 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 929c509b-f16d-4c6f-a154-d1dee7bbb0c5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1614253595;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=xKmTEivAGnpXirok/FZMI9IT5+2rG+YO+T2SvKiBEY0=;
	b=b8t+tcl1wmoXvvc6cV1OxgFFxHDjcOYfG9GKAqTE1zrVdjHQL8WrXFNEgjikqJagwiSZov
	6/yVef5lZDxZQO9Y2bCmM9b4k4q+ZLDNGMktCeREwnDbKIeUJ09K/6CteAfqrDX4Mo70tG
	VJ+j22D4EP/r0IkYWHgAo7obfqd+pHM=
X-MC-Unique: gTc3-8vAOWS90hJJXyCksg-1
Date: Thu, 25 Feb 2021 12:46:26 +0100
From: Gerd Hoffmann <kraxel@redhat.com>
To: Akihiko Odaki <akihiko.odaki@gmail.com>
Cc: qemu Developers <qemu-devel@nongnu.org>, xen-devel@lists.xenproject.org,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>
Subject: Re: [PATCH] virtio-gpu: Respect graphics update interval for EDID
Message-ID: <20210225114626.dn7wevr3fozp5rcu@sirius.home.kraxel.org>
References: <20210221133414.7262-1-akihiko.odaki@gmail.com>
 <20210222105738.w2q6vp5pi4p6bx5m@sirius.home.kraxel.org>
 <CAMVc7JVo_XJcGcxW0Wmqje3Y40fRZDY6T8dnQTc2=Ehasz4UHw@mail.gmail.com>
 <20210224111540.xd5a6yszql6wln7m@sirius.home.kraxel.org>
 <CAMVc7JXUXnrK_amhQsy=paMeqjMU_8r86Hj4UF5haZ+Oq15JkA@mail.gmail.com>
MIME-Version: 1.0
In-Reply-To: <CAMVc7JXUXnrK_amhQsy=paMeqjMU_8r86Hj4UF5haZ+Oq15JkA@mail.gmail.com>
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=kraxel@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

  Hi,

> > Because of the wasted frames I'd like this to be an option you can
> > enable when needed.  For the majority of use cases this seems to be
> > no problem ...
> 
> I see blinks with GNOME on Wayland on Ubuntu 20.04 and virtio-gpu with
> the EDID change included in this patch.

/me looks closely at the patch again.

So you update the EDID dynamically, each time the refresh rate changes.
The problem with that approach is that software doesn't expect the EDID
to change dynamically, because on physical hardware it is static
information about the connected monitor.

So what the virtio-gpu guest driver does is emulate a monitor hotplug
event to notify userspace.  If you resize the qemu window on the host,
it will look as if the monitor with the old window size was unplugged
and a new monitor with the new window size was plugged in instead, so
gnome-shell adapts the display resolution to the new virtual monitor size.

The blink you are seeing probably comes from gnome-shell processing the
monitor hotplug event.

We could try to skip generating a monitor hotplug event when only the
refresh rate has changed.  That would fix the blink, but it would also
mean that nobody notices the update.

Bottom line:  I think making the edid refresh rate configurable might be
useful, but changing it dynamically most likely isn't.

take care,
  Gerd



From xen-devel-bounces@lists.xenproject.org Thu Feb 25 11:49:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 11:49:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89681.169121 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFF93-0006qU-6g; Thu, 25 Feb 2021 11:49:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89681.169121; Thu, 25 Feb 2021 11:49:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFF93-0006qN-3c; Thu, 25 Feb 2021 11:49:25 +0000
Received: by outflank-mailman (input) for mailman id 89681;
 Thu, 25 Feb 2021 11:49:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lFF91-0006qE-VU
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 11:49:23 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lFF91-0007gs-T2
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 11:49:23 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lFF91-0005Ou-R3
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 11:49:23 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lFF8y-0001Iv-CQ; Thu, 25 Feb 2021 11:49:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=JKGe0tQNVfYsDa0X9Le36lM2G383Bbq0PEjFEDxMFLc=; b=Yfgg+XikABNqNHt5EpauRBueq3
	NNVZBsi1iTJ7zwHiTQoNQfGreLNur027ajPxZ4mT9apXoHVE9Uid21RBTpYhz8zftCG9nZ9iCBRT7
	T8ehO5N+0Emi1IOOeCRkbvWzK6Du83h74E5TqGNT2HT4xyuQKJEW/ER9rZbTOZ/2iZcA=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24631.36543.997035.838556@mariner.uk.xensource.com>
Date: Thu, 25 Feb 2021 11:49:19 +0000
To: Hubert Jasudowicz <hubert.jasudowicz@cert.pl>
Cc: xen-devel@lists.xenproject.org,
    Wei Liu <wl@xen.org>,
    =?iso-8859-2?Q?Micha=B3_Leszczy=F1ski?= <michal.leszczynski@cert.pl>
Subject: Re: [PATCH][4.15] tools: Fix typo in xc_vmtrace_set_option comment
In-Reply-To: <3e81757428750eb351ea9d938bf0770026be4c33.1614253079.git.hubert.jasudowicz@cert.pl>
References: <3e81757428750eb351ea9d938bf0770026be4c33.1614253079.git.hubert.jasudowicz@cert.pl>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Hubert Jasudowicz writes ("[PATCH][4.15] tools: Fix typo in xc_vmtrace_set_option comment"):
> Signed-off-by: Hubert Jasudowicz <hubert.jasudowicz@cert.pl>

Release-Acked-by: Ian Jackson <iwj@xenproject.org>


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 11:56:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 11:56:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89684.169132 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFFFh-0007jZ-TM; Thu, 25 Feb 2021 11:56:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89684.169132; Thu, 25 Feb 2021 11:56:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFFFh-0007jS-QP; Thu, 25 Feb 2021 11:56:17 +0000
Received: by outflank-mailman (input) for mailman id 89684;
 Thu, 25 Feb 2021 11:56:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lFFFh-0007jN-0c
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 11:56:17 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lFFFe-0007mj-Ok; Thu, 25 Feb 2021 11:56:14 +0000
Received: from 54-240-197-230.amazon.com ([54.240.197.230]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lFFFe-0005u3-GS; Thu, 25 Feb 2021 11:56:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=jfKE6v+97vvm05vcCCHW1xOpycuNa1wG849fCaL3QJM=; b=hizpKIfJSLB4OaAq+rnNW0pq95
	H7rXYuggxgpXR4jvacL65kdK1GHmgLaQLbkAfSfgWHqHoI1y8ae8ot8A3dN7oBBrm5IIpG7Zrb3Fc
	UCSB0Yk47YBwQARvxrmAKQ3c3IO2wyfFikvVH9tZqZVyHFaHIFrQD482490Ujo1865DA=;
Subject: Re: [for-4.15][RESEND PATCH v4 1/2] xen/x86: iommu: Ignore IOMMU
 mapping requests when a domain is dying
To: Jan Beulich <jbeulich@suse.com>
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>, Paul Durrant <paul@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210224094356.7606-1-julien@xen.org>
 <20210224094356.7606-2-julien@xen.org>
 <d5a09319-614d-398b-b911-bc2533bec587@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <7ce1deb9-e362-439c-dd14-a17dbb6fb1c8@xen.org>
Date: Thu, 25 Feb 2021 11:56:12 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <d5a09319-614d-398b-b911-bc2533bec587@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 24/02/2021 14:07, Jan Beulich wrote:
> On 24.02.2021 10:43, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> The new x86 IOMMU page-tables allocator will release the pages when
>> relinquishing the domain resources. However, this is not sufficient
>> when the domain is dying because nothing prevents page-table to be
>> allocated.
>>
>> As the domain is dying, it is not necessary to continue to modify the
>> IOMMU page-tables as they are going to be destroyed soon.
>>
>> At the moment, page-table allocates will only happen when iommu_map().
>> So after this change there will be no more page-table allocation
>> happening.
> 
> While I'm still not happy about this asymmetry, I'm willing to accept
> it in the interest of getting the underlying issue addressed. May I
> ask though that you add something like "... because we don't use
> superpage mappings yet when not sharing page tables"?

Sure.

> But there are two more minor things:
> 
>> --- a/xen/drivers/passthrough/x86/iommu.c
>> +++ b/xen/drivers/passthrough/x86/iommu.c
>> @@ -267,6 +267,12 @@ int iommu_free_pgtables(struct domain *d)
>>       struct page_info *pg;
>>       unsigned int done = 0;
>>   
>> +    if ( !is_iommu_enabled(d) )
>> +        return 0;
> 
> Why is this addition needed? Hitting a not yet initialize spin lock
> is - afaict - no worse than a not yet initialized list, so it would
> seem to me that this can't be the reason. No other reason looks to
> be called out by the description.

struct domain_iommu will be initially zeroed as it is part of struct domain.

For the list, we are fine so far because page_list_remove_head() 
tolerates NULL.  If we were using the normal list operations (e.g. 
list_del), then this code would have crashed on a NULL dereference.

Now, about the spinlock: lock debugging, at least, expects a non-zero 
initial value.  We are lucky here because this path is not called with 
IRQs disabled.  If it were, Xen would crash because it would consider 
the lock to have been used in a non-IRQ-safe environment.

So in the spinlock case we are really playing with fire.  Hence the 
check here.

>> +    /* After this barrier, no more IOMMU mapping can happen */
>> +    spin_barrier(&hd->arch.mapping_lock);
> 
> On the v3 discussion I thought you did agree to change the wording
> of the comment to something like "no new IOMMU mappings can be
> inserted"?

Sorry I missed this comment. I will update it in the next version.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 12:01:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 12:01:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89690.169145 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFFL6-0000Nz-PX; Thu, 25 Feb 2021 12:01:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89690.169145; Thu, 25 Feb 2021 12:01:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFFL6-0000Ns-Kz; Thu, 25 Feb 2021 12:01:52 +0000
Received: by outflank-mailman (input) for mailman id 89690;
 Thu, 25 Feb 2021 12:01:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lFFL5-0000Nn-Qa
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 12:01:51 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lFFL3-0007vn-Uo; Thu, 25 Feb 2021 12:01:49 +0000
Received: from 54-240-197-230.amazon.com ([54.240.197.230]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lFFL3-0006Ma-Gf; Thu, 25 Feb 2021 12:01:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=Mqi/1JbhpxxRrGMz3QOl6MCMfKt2wbruj2tscyWMBPk=; b=MsSvr32ifGC4bhRU9N1/8RpTSO
	VawjC1qIV6bCdPQrDVquSXvVdeqyfy1PL6BdNT3nNHMt6S0vGKeyued454cJVpbDpdk59QGMDNxpj
	LaS32qT4+E/SQGSD/jfTGbHAwaeLZyvLf3nwv0LkQ40vbLHNTEz1HoglCBORRen2FDkM=;
Subject: Re: [for-4.15][RESEND PATCH v4 2/2] xen/iommu: x86: Clear the root
 page-table before freeing the page-tables
To: Jan Beulich <jbeulich@suse.com>
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>, Paul Durrant <paul@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210224094356.7606-1-julien@xen.org>
 <20210224094356.7606-3-julien@xen.org>
 <c666bf75-451d-fbc1-7fb1-600c4f014f05@suse.com>
 <64de5c8f-83ed-23af-b24f-3c8dde50e226@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <95800b2b-1412-ee65-33b9-d3e2702b1c88@xen.org>
Date: Thu, 25 Feb 2021 12:01:47 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <64de5c8f-83ed-23af-b24f-3c8dde50e226@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 24/02/2021 16:00, Jan Beulich wrote:
> On 24.02.2021 16:58, Jan Beulich wrote:
>> On 24.02.2021 10:43, Julien Grall wrote:
>>> --- a/xen/drivers/passthrough/x86/iommu.c
>>> +++ b/xen/drivers/passthrough/x86/iommu.c
>>> @@ -149,6 +149,13 @@ int arch_iommu_domain_init(struct domain *d)
>>>   
>>>   void arch_iommu_domain_destroy(struct domain *d)
>>>   {
>>> +    /*
>>> +     * There should be not page-tables left allocated by the time the
>>> +     * domain is destroyed. Note that arch_iommu_domain_destroy() is
>>> +     * called unconditionally, so pgtables may be unitialized.
>>> +     */
>>> +    ASSERT(dom_iommu(d)->platform_ops == NULL ||
>>
>> Since you've used the preferred ! instead of "== 0" /
>> "== NULL" in the ASSERT()s you add further up, may I ask that
>> you do so here as well?
> 
> Oh, and I meant to provide
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> preferably with that cosmetic adjustment (and ideally also with
> "uninitialized" in the comment, as I notice only now).

I can't seem to find your original answer in my inbox or on lore [1]. 
Could you confirm whether the two comments visible in this thread are 
the only ones you made on this patch?

Thanks for the review!

Cheers,

[1] https://lore.kernel.org/xen-devel/20210224094356.7606-3-julien@xen.org/

> 
> Jan
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 12:11:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 12:11:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89693.169156 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFFUB-0001RU-MK; Thu, 25 Feb 2021 12:11:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89693.169156; Thu, 25 Feb 2021 12:11:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFFUB-0001RN-Iv; Thu, 25 Feb 2021 12:11:15 +0000
Received: by outflank-mailman (input) for mailman id 89693;
 Thu, 25 Feb 2021 12:11:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qDmu=H3=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1lFFUA-0001RI-8W
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 12:11:14 +0000
Received: from mail-wr1-x42e.google.com (unknown [2a00:1450:4864:20::42e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 98dd487b-050b-4144-af35-8bfe5ffdf931;
 Thu, 25 Feb 2021 12:11:13 +0000 (UTC)
Received: by mail-wr1-x42e.google.com with SMTP id b3so5011698wrj.5
 for <xen-devel@lists.xenproject.org>; Thu, 25 Feb 2021 04:11:13 -0800 (PST)
Received: from ?IPv6:2a00:23c5:5785:9a01:ad9a:ab78:5748:a7ec?
 ([2a00:23c5:5785:9a01:ad9a:ab78:5748:a7ec])
 by smtp.gmail.com with ESMTPSA id g1sm7018230wmh.9.2021.02.25.04.11.11
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 25 Feb 2021 04:11:12 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 98dd487b-050b-4144-af35-8bfe5ffdf931
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:subject:to:cc:references:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=Elk6XkroCvy6YDZtlzcazzq5EnmxSahtrCRPixJOSKQ=;
        b=ZtAMC2V0VcR42+wjwA4H3eAxtNacRvwFmrrkpegVmsuoMmvaaVoVzvCW0AEW7M45ZH
         UOCACS+my5DkA0YLxoCJswlhzD70TpW6R6GdJW2JLgvFn5fG7ZttoyMaKdV2E7oGpLYP
         Si2EwEbiZzTSeSUtd5NUUbF8LwUl9vf43KPTKdhIEUX0EIkIXldaoetQ15CFuC+Rc3CC
         aG/3WAc8M73W1wzm4r2gbEImAJiJ9SVCYhZ6uCHfLpXQAfaBxom6SvXapNFDY/fy+9WH
         s/X5mTW1TGBgsnr+UCSNvmSEw+1SZLgvikykKkvpVSECHqALTYiH8G4Z6ky0zNMFHvcI
         0wpQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:subject:to:cc:references
         :message-id:date:user-agent:mime-version:in-reply-to
         :content-language:content-transfer-encoding;
        bh=Elk6XkroCvy6YDZtlzcazzq5EnmxSahtrCRPixJOSKQ=;
        b=gI3bHqS0yiU6LS8eVNmpt2PCp+vQ+2qxF+/fHmSe5hogTzR5SYOd7RMMaKDsAz5ZDp
         MQlFcjjbGTCrxpgMDgP+qCNgfVsx6P7LNmxEM/SJCBe1EAkiF7aedHtlqvPILnnYxdda
         XlOcDU683X9kS1ycM12fJs/VR5BRniWajn8Zkfc4gyE6lXLxaumfMCdZthRsbKDpneVS
         1K7c1gnzIizhzvuWMITCqhVEdRnHv5Om6xNNvUmNZ2s4JGWhjajzuk6wbOiS/bIa6uju
         mt0JW5ZO0lSdmFekEJ80CnQKQmgeoqFa9YoO/1/Ao6g1VsgMA0nvnaZ4l3uuf3B5jTA3
         SLoQ==
X-Gm-Message-State: AOAM531Jmdrly99QL+vAGlKHC9nX8Ko3V4xliRLIqDYnRl/LVeM5e6qi
	Uh62uB5WTDKlZQFRGvBbh88=
X-Google-Smtp-Source: ABdhPJx2i045TU5mnHfbXmvONbyukZsIZCSDA+0N1+KE57sH6FqvEo8nub4xMCR8IevpMvWxOWr/XA==
X-Received: by 2002:adf:f7cc:: with SMTP id a12mr3203944wrq.54.1614255072541;
        Thu, 25 Feb 2021 04:11:12 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: Paul Durrant <paul@xen.org>
Reply-To: paul@xen.org
Subject: Re: [PATCH] xen-netback: correct success/error reporting for the
 SKB-with-fraglist case
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "netdev@vger.kernel.org" <netdev@vger.kernel.org>, Wei Liu <wl@xen.org>
References: <4dd5b8ec-a255-7ab1-6dbf-52705acd6d62@suse.com>
 <67bc0728-761b-c3dd-bdd5-1a850ff79fbb@xen.org>
 <76c94541-21a8-7ae5-c4c4-48552f16c3fd@suse.com>
Message-ID: <17e50fb5-31f7-60a5-1eec-10d18a40ad9a@xen.org>
Date: Thu, 25 Feb 2021 12:11:11 +0000
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <76c94541-21a8-7ae5-c4c4-48552f16c3fd@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 25/02/2021 07:33, Jan Beulich wrote:
> On 24.02.2021 17:39, Paul Durrant wrote:
>> On 23/02/2021 16:29, Jan Beulich wrote:
>>> When re-entering the main loop of xenvif_tx_check_gop() a 2nd time, the
>>> special considerations for the head of the SKB no longer apply. Don't
>>> mistakenly report ERROR to the frontend for the first entry in the list,
>>> even if - from all I can tell - this shouldn't matter much as the overall
>>> transmit will need to be considered failed anyway.
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>
>>> --- a/drivers/net/xen-netback/netback.c
>>> +++ b/drivers/net/xen-netback/netback.c
>>> @@ -499,7 +499,7 @@ check_frags:
>>>    				 * the header's copy failed, and they are
>>>    				 * sharing a slot, send an error
>>>    				 */
>>> -				if (i == 0 && sharedslot)
>>> +				if (i == 0 && !first_shinfo && sharedslot)
>>>    					xenvif_idx_release(queue, pending_idx,
>>>    							   XEN_NETIF_RSP_ERROR);
>>>    				else
>>>
>>
>> I think this will DTRT, but to my mind it would make more sense to clear
>> 'sharedslot' before the 'goto check_frags' at the bottom of the function.
> 
> That was my initial idea as well, but
> - I think it is for a reason that the variable is "const".
> - There is another use of it which would then instead need further
>    amending (and which I believe is at least part of the reason for
>    the variable to be "const").
> 

Oh, yes. But now that I look again, don't you want:

if (i == 0 && first_shinfo && sharedslot)

? (i.e. no '!')

The comment states that the error should be indicated when the first 
frag contains the header, in the case where the map succeeded but the 
prior copy from the same ref failed. That can only possibly be the case 
if this is the 'first_shinfo' (which is why I still think it is safe to 
un-const 'sharedslot' and clear it).

   Paul


> Jan
> 



From xen-devel-bounces@lists.xenproject.org Thu Feb 25 12:22:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 12:22:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89696.169169 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFFfD-0002W2-O7; Thu, 25 Feb 2021 12:22:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89696.169169; Thu, 25 Feb 2021 12:22:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFFfD-0002Vv-Ks; Thu, 25 Feb 2021 12:22:39 +0000
Received: by outflank-mailman (input) for mailman id 89696;
 Thu, 25 Feb 2021 12:22:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lFFfC-0002Vq-Sx
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 12:22:38 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lFFfA-0008Fu-46; Thu, 25 Feb 2021 12:22:36 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lFFf9-0007xv-P8; Thu, 25 Feb 2021 12:22:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=9N8Xs4CmQxxJU+JfkOI4Hg6HJzpF0TFOuHikqHwXSOw=; b=2FFMtVt3PjUHGSvjxohxnz3/eL
	/PtHb0BuGkzB5I3pFMYNhuDCODSMscuB8WUlh34My5By/63GdSDVQpz3DHjc2dfXiUty0+WDqriLo
	FeUA+3mLx86BcI+QJL3Y3twfc1gIiASNlxCzzGlIS/FvJwCgKkL6kvYxL2vDZTTeFW6o=;
Subject: Re: [PATCH for-4.15] xen/vgic: Implement write to ISPENDR in vGICv{2,
 3}
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "iwj@xenproject.org" <iwj@xenproject.org>, Julien Grall <jgrall@amazon.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20210220140412.31610-1-julien@xen.org>
 <F86904EB-91E9-475C-B60B-E08C5C9E76C3@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <0c4e6015-f969-9b6b-91b5-bffa952d47d5@xen.org>
Date: Thu, 25 Feb 2021 12:22:34 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <F86904EB-91E9-475C-B60B-E08C5C9E76C3@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 22/02/2021 13:45, Bertrand Marquis wrote:
> Hi Julien,
> 
>> On 20 Feb 2021, at 14:04, Julien Grall <julien@xen.org> wrote:
>>
>> From: Julien Grall <jgrall@amazon.com>
>>
>> Currently, Xen will send a data abort to a guest trying to write to the
>> ISPENDR.
>>
>> Unfortunately, recent version of Linux (at least 5.9+) will start
>> writing to the register if the interrupt needs to be re-triggered
>> (see the callback irq_retrigger). This can happen when a driver (such as
>> the xgbe network driver on AMD Seattle) re-enable an interrupt:
>>
>> (XEN) d0v0: vGICD: unhandled word write 0x00000004000000 to ISPENDR44
>> [...]
>> [   25.635837] Unhandled fault at 0xffff80001000522c
>> [...]
>> [   25.818716]  gic_retrigger+0x2c/0x38
>> [   25.822361]  irq_startup+0x78/0x138
>> [   25.825920]  __enable_irq+0x70/0x80
>> [   25.829478]  enable_irq+0x50/0xa0
>> [   25.832864]  xgbe_one_poll+0xc8/0xd8
>> [   25.836509]  net_rx_action+0x110/0x3a8
>> [   25.840328]  __do_softirq+0x124/0x288
>> [   25.844061]  irq_exit+0xe0/0xf0
>> [   25.847272]  __handle_domain_irq+0x68/0xc0
>> [   25.851442]  gic_handle_irq+0xa8/0xe0
>> [   25.855171]  el1_irq+0xb0/0x180
>> [   25.858383]  arch_cpu_idle+0x18/0x28
>> [   25.862028]  default_idle_call+0x24/0x5c
>> [   25.866021]  do_idle+0x204/0x278
>> [   25.869319]  cpu_startup_entry+0x24/0x68
>> [   25.873313]  rest_init+0xd4/0xe4
>> [   25.876611]  arch_call_rest_init+0x10/0x1c
>> [   25.880777]  start_kernel+0x5b8/0x5ec
>>
>> As a consequence, the OS may become unusable.
>>
>> Implementing the write part of ISPENDR is somewhat easy. For
>> virtual interrupt, we only need to inject the interrupt again.
>>
>> For physical interrupt, we need to be more careful as the de-activation
>> of the virtual interrupt will be propagated to the physical distributor.
>> For simplicity, the physical interrupt will be set pending so the
>> workflow will not differ from a "real" interrupt.
>>
>> Longer term, we could possible directly activate the physical interrupt
>> and avoid taking an exception to inject the interrupt to the domain.
>> (This is the approach taken by the new vGIC based on KVM).
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> This is something which will not be done by a guest very often so I think your
> implementation actually makes it simpler and reduce possibilities of race conditions
> so I am not even sure that the XXX comment is needed.

I think the XXX is useful: if someone notices an issue with the code, 
then they will know what they could try.

I am open to suggestions on how we could keep track of potential improvements.

> But I am OK with it being in or not, so:
> 
> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Thanks!

Cheers,
-- 
Julien Grall
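As an aside for readers of the archive, the emulation flow described in the
commit message above (re-inject a virtual interrupt; set a physical interrupt
pending at the distributor) can be sketched in user-space C. All names and
data structures below are illustrative stand-ins, not the actual Xen vGIC
code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative stand-ins for the vGIC state; the real Xen structures
 * and helper names differ. */
#define NR_IRQS 64
static bool virq_pending[NR_IRQS];  /* virtual interrupt pending state */
static bool pirq_pending[NR_IRQS];  /* emulated physical distributor state */
static bool is_physical[NR_IRQS];   /* is this vIRQ backed by a pIRQ? */

/* Emulate a guest write to ISPENDR<reg>: each set bit asks for the
 * corresponding interrupt to be made pending again. */
static void vgic_ispendr_write(unsigned int reg, uint32_t mask)
{
    unsigned int base = reg * 32;

    for (unsigned int bit = 0; bit < 32; bit++) {
        unsigned int irq = base + bit;

        if (!(mask & (1u << bit)) || irq >= NR_IRQS)
            continue;

        if (is_physical[irq])
            /* Physical interrupt: set it pending at the distributor so
             * the normal injection workflow runs, exactly as for a
             * "real" interrupt. */
            pirq_pending[irq] = true;
        else
            /* Virtual interrupt: simply inject it again. */
            virq_pending[irq] = true;
    }
}
```

This mirrors the simplification the commit message argues for: both paths
converge on the existing injection workflow rather than short-cutting it.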


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 12:32:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 12:32:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89703.169194 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFFof-0003bu-6v; Thu, 25 Feb 2021 12:32:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89703.169194; Thu, 25 Feb 2021 12:32:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFFof-0003bm-0f; Thu, 25 Feb 2021 12:32:25 +0000
Received: by outflank-mailman (input) for mailman id 89703;
 Thu, 25 Feb 2021 12:31:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jPrM=H3=amazon.com=prvs=68338ebd9=andyhsu@srs-us1.protection.inumbo.net>)
 id 1lFFo3-0003ao-6R
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 12:31:47 +0000
Received: from smtp-fw-6001.amazon.com (unknown [52.95.48.154])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6621d550-1644-445c-bc62-28cfd563fc75;
 Thu, 25 Feb 2021 12:31:46 +0000 (UTC)
Received: from iad12-co-svc-p1-lb1-vlan2.amazon.com (HELO
 email-inbound-relay-1e-57e1d233.us-east-1.amazon.com) ([10.43.8.2])
 by smtp-border-fw-out-6001.iad6.amazon.com with ESMTP;
 25 Feb 2021 12:31:38 +0000
Received: from EX13D12EUA002.ant.amazon.com
 (iad12-ws-svc-p26-lb9-vlan3.iad.amazon.com [10.40.163.38])
 by email-inbound-relay-1e-57e1d233.us-east-1.amazon.com (Postfix) with ESMTPS
 id 502D31416CE
 for <xen-devel@lists.xenproject.org>; Thu, 25 Feb 2021 12:31:38 +0000 (UTC)
Received: from dev-dsk-andyhsu-1c-d6833dcf.eu-west-1.amazon.com
 (10.43.160.146) by EX13D12EUA002.ant.amazon.com (10.43.165.103) with
 Microsoft SMTP Server (TLS) id 15.0.1497.2; Thu, 25 Feb 2021 12:31:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6621d550-1644-445c-bc62-28cfd563fc75
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
  t=1614256306; x=1645792306;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=4JQ5FcEpuWrAdiBSf7ibanfdKfXRZdUYwPvQsSOrgh4=;
  b=biTZdJyW7pYY/Xkt64t9tLEENB9DN8hZ6pc1Me5i61mU7JP5wjb92bNY
   nDpvfmH4o/CxcP1Ah6RcOVkaHZmQTsgBCK7hHL+D6QrXU0lZqiHJ4f4IV
   uAalKZghrHGRKXe/IbkD1G4TxJkQIi54I8QNUZ49+UrV/QJpllpuVlSLq
   g=;
X-IronPort-AV: E=Sophos;i="5.81,205,1610409600"; 
   d="scan'208";a="92179167"
From: ChiaHao Hsu <andyhsu@amazon.com>
To: <xen-devel@lists.xenproject.org>
CC: <andyhsu@amazon.com>
Subject: [PATCH 2/2] xen-netback: add module parameter to disable dynamic multicast control
Date: Thu, 25 Feb 2021 12:31:27 +0000
Message-ID: <20210225123127.9771-1-andyhsu@amazon.com>
X-Mailer: git-send-email 2.23.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-Originating-IP: [10.43.160.146]
X-ClientProxiedBy: EX13D21UWA002.ant.amazon.com (10.43.160.246) To
 EX13D12EUA002.ant.amazon.com (10.43.165.103)
Precedence: Bulk

In order to support live migration of guests between kernels
that do and do not support 'feature-dynamic-multicast-control',
we add a module parameter that allows the feature to be disabled
at run time, instead of being hardcoded.
The feature is enabled by default.

Signed-off-by: ChiaHao Hsu <andyhsu@amazon.com>
---
 drivers/net/xen-netback/common.h  |  1 +
 drivers/net/xen-netback/netback.c |  7 +++++++
 drivers/net/xen-netback/xenbus.c  | 14 ++++++++------
 3 files changed, 16 insertions(+), 6 deletions(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index bfb7a3054917..c166ebb5a81f 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -415,6 +415,7 @@ static inline pending_ring_idx_t nr_pending_reqs(struct xenvif_queue *queue)
 irqreturn_t xenvif_interrupt(int irq, void *dev_id);
 
 extern bool control_ring;
+extern bool dynamic_multicast_control;
 extern bool separate_tx_rx_irq;
 extern bool provides_xdp_headroom;
 
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 20d858f0456a..4c3d92238ae9 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -54,6 +54,13 @@
 bool control_ring = true;
 module_param(control_ring, bool, 0644);
 
+/* Provide an option to extend the multicast control protocol. This allows
+ * request-multicast-control to be set by the frontend at any time; the
+ * backend will watch the value and re-sample it on watch events.
+ */
+bool dynamic_multicast_control = true;
+module_param(dynamic_multicast_control, bool, 0644);
+
 /* Provide an option to disable split event channels at load time as
  * event channels are limited resource. Split event channels are
  * enabled by default.
diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
index 8a9169cff9c5..36c699f99da4 100644
--- a/drivers/net/xen-netback/xenbus.c
+++ b/drivers/net/xen-netback/xenbus.c
@@ -1092,12 +1092,14 @@ static int netback_probe(struct xenbus_device *dev,
 			goto abort_transaction;
 		}
 
-		err = xenbus_printf(xbt, dev->nodename,
-				    "feature-dynamic-multicast-control",
-				    "%d", 1);
-		if (err) {
-			message = "writing feature-dynamic-multicast-control";
-			goto abort_transaction;
+		if (dynamic_multicast_control) {
+			err = xenbus_printf(xbt, dev->nodename,
+					    "feature-dynamic-multicast-control",
+					    "%d", 1);
+			if (err) {
+				message = "writing feature-dynamic-multicast-control";
+				goto abort_transaction;
+			}
 		}
 
 		err = xenbus_transaction_end(xbt, 0);
-- 
2.23.3
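For readers following the patch, the gating it adds can be sketched in
isolation as user-space C. The names mirror the patch, but xenbus_printf()
is replaced by a stub that records the key it was asked to write; this is
purely illustrative, not the actual xen-netback code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

static bool dynamic_multicast_control = true;  /* the module parameter */
static char last_key[64];                      /* key last written, if any */

/* Stub standing in for xenbus_printf(): record the feature key. */
static int xenbus_printf_stub(const char *nodename, const char *key, int val)
{
    (void)nodename;
    (void)val;
    snprintf(last_key, sizeof(last_key), "%s", key);
    return 0;
}

/* Mirrors the patch: only advertise the feature to the frontend when
 * the module parameter is enabled. */
static int advertise_multicast_control(const char *nodename)
{
    last_key[0] = '\0';
    if (dynamic_multicast_control)
        return xenbus_printf_stub(nodename,
                                  "feature-dynamic-multicast-control", 1);
    return 0;
}
```

With the parameter off, the backend simply never writes the xenstore key, so
the frontend behaves as it would against an older kernel, which is what makes
the migration scenario in the commit message work.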



From xen-devel-bounces@lists.xenproject.org Thu Feb 25 12:32:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 12:32:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89701.169186 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFFoe-0003bS-SV; Thu, 25 Feb 2021 12:32:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89701.169186; Thu, 25 Feb 2021 12:32:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFFoe-0003bL-P3; Thu, 25 Feb 2021 12:32:24 +0000
Received: by outflank-mailman (input) for mailman id 89701;
 Thu, 25 Feb 2021 12:29:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jPrM=H3=amazon.com=prvs=68338ebd9=andyhsu@srs-us1.protection.inumbo.net>)
 id 1lFFmI-0002rY-74
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 12:29:58 +0000
Received: from smtp-fw-4101.amazon.com (unknown [72.21.198.25])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 34af81ce-aec0-48a6-8dca-f80a9f307baf;
 Thu, 25 Feb 2021 12:29:57 +0000 (UTC)
Received: from iad12-co-svc-p1-lb1-vlan2.amazon.com (HELO
 email-inbound-relay-1a-821c648d.us-east-1.amazon.com) ([10.43.8.2])
 by smtp-border-fw-out-4101.iad4.amazon.com with ESMTP;
 25 Feb 2021 12:29:51 +0000
Received: from EX13D12EUA002.ant.amazon.com
 (iad12-ws-svc-p26-lb9-vlan3.iad.amazon.com [10.40.163.38])
 by email-inbound-relay-1a-821c648d.us-east-1.amazon.com (Postfix) with ESMTPS
 id F3328A0645
 for <xen-devel@lists.xenproject.org>; Thu, 25 Feb 2021 12:29:49 +0000 (UTC)
Received: from dev-dsk-andyhsu-1c-d6833dcf.eu-west-1.amazon.com (10.43.160.27)
 by EX13D12EUA002.ant.amazon.com (10.43.165.103) with Microsoft SMTP
 Server (TLS) id 15.0.1497.2; Thu, 25 Feb 2021 12:29:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 34af81ce-aec0-48a6-8dca-f80a9f307baf
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
  t=1614256198; x=1645792198;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=CGtR0i81LbIzNkrCKXybCDMaGmp77tg1tDfxh2m6iuE=;
  b=gQNdGANCp1LD5JcgDs1mZQu4JcyvohzBDVcicbJwIvC3iS8AlTjZuh9l
   c+WbGhzeoIOvLKfjr09FwOuuEf02ZyXw3u+nGz3WPRTCjBzZN/rH8mQPd
   l5HNBXFK6LULQAWHhwHvLaqduwIUb4TrFo07ayMPBF7Q5bCd++IOg7RLl
   I=;
X-IronPort-AV: E=Sophos;i="5.81,205,1610409600"; 
   d="scan'208";a="88075059"
From: ChiaHao Hsu <andyhsu@amazon.com>
To: <xen-devel@lists.xenproject.org>
CC: <andyhsu@amazon.com>
Subject: [PATCH 1/2] xen-netback: add module parameter to disable ctrl-ring
Date: Thu, 25 Feb 2021 12:29:40 +0000
Message-ID: <20210225122940.9310-1-andyhsu@amazon.com>
X-Mailer: git-send-email 2.23.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-Originating-IP: [10.43.160.27]
X-ClientProxiedBy: EX13D50UWC001.ant.amazon.com (10.43.162.96) To
 EX13D12EUA002.ant.amazon.com (10.43.165.103)
Precedence: Bulk

In order to support live migration of guests between kernels
that do and do not support 'feature-ctrl-ring', we add a
module parameter that allows the feature to be disabled
at run time, instead of being hardcoded.
The feature is enabled by default.

Signed-off-by: ChiaHao Hsu <andyhsu@amazon.com>
---
 drivers/net/xen-netback/common.h  |  2 ++
 drivers/net/xen-netback/netback.c |  6 ++++++
 drivers/net/xen-netback/xenbus.c  | 13 ++++++++-----
 3 files changed, 16 insertions(+), 5 deletions(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index 4a16d6e33c09..bfb7a3054917 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -276,6 +276,7 @@ struct backend_info {
 	u8 have_hotplug_status_watch:1;
 
 	const char *hotplug_script;
+	bool ctrl_ring_enabled;
 };
 
 struct xenvif {
@@ -413,6 +414,7 @@ static inline pending_ring_idx_t nr_pending_reqs(struct xenvif_queue *queue)
 
 irqreturn_t xenvif_interrupt(int irq, void *dev_id);
 
+extern bool control_ring;
 extern bool separate_tx_rx_irq;
 extern bool provides_xdp_headroom;
 
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index e5c73f819662..20d858f0456a 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -48,6 +48,12 @@
 
 #include <asm/xen/hypercall.h>
 
+/* Provide an option to disable the control ring, which is used to pass
+ * large quantities of data from frontend to backend.
+ */
+bool control_ring = true;
+module_param(control_ring, bool, 0644);
+
 /* Provide an option to disable split event channels at load time as
  * event channels are limited resource. Split event channels are
  * enabled by default.
diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
index a5439c130130..8a9169cff9c5 100644
--- a/drivers/net/xen-netback/xenbus.c
+++ b/drivers/net/xen-netback/xenbus.c
@@ -1123,11 +1123,14 @@ static int netback_probe(struct xenbus_device *dev,
 	if (err)
 		pr_debug("Error writing multi-queue-max-queues\n");
 
-	err = xenbus_printf(XBT_NIL, dev->nodename,
-			    "feature-ctrl-ring",
-			    "%u", true);
-	if (err)
-		pr_debug("Error writing feature-ctrl-ring\n");
+	be->ctrl_ring_enabled = READ_ONCE(control_ring);
+	if (be->ctrl_ring_enabled) {
+		err = xenbus_printf(XBT_NIL, dev->nodename,
+				    "feature-ctrl-ring",
+				    "%u", true);
+		if (err)
+			pr_debug("Error writing feature-ctrl-ring\n");
+	}
 
 	backend_switch_state(be, XenbusStateInitWait);
 
-- 
2.23.3
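One design point worth noting in the patch above: the module parameter is
snapshotted into be->ctrl_ring_enabled at probe time (via READ_ONCE()),
so toggling the parameter later (it is writable at run time, given the 0644
permissions) only affects devices probed afterwards. A minimal user-space
sketch of that semantics, with illustrative names only:

```c
#include <assert.h>
#include <stdbool.h>

static bool control_ring = true;   /* the module parameter, writable later */

struct backend_info {
    bool ctrl_ring_enabled;        /* per-device snapshot taken at probe */
};

/* Sketch of the probe path: each device latches the parameter's value
 * once. (The kernel code uses READ_ONCE(); a plain read is fine here.) */
static void netback_probe_sketch(struct backend_info *be)
{
    be->ctrl_ring_enabled = control_ring;
}
```

Existing frontends therefore keep whichever feature set they negotiated,
while newly probed devices pick up the changed setting.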



From xen-devel-bounces@lists.xenproject.org Thu Feb 25 12:51:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 12:51:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89709.169215 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFG6o-0005it-SZ; Thu, 25 Feb 2021 12:51:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89709.169215; Thu, 25 Feb 2021 12:51:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFG6o-0005im-Nb; Thu, 25 Feb 2021 12:51:10 +0000
Received: by outflank-mailman (input) for mailman id 89709;
 Thu, 25 Feb 2021 12:51:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LMls=H3=epam.com=prvs=269023888c=volodymyr_babchuk@srs-us1.protection.inumbo.net>)
 id 1lFG6n-0005ih-Ay
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 12:51:09 +0000
Received: from mx0b-0039f301.pphosted.com (unknown [148.163.137.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 228437ab-2635-4c76-b0e9-e5d936f2479f;
 Thu, 25 Feb 2021 12:51:08 +0000 (UTC)
Received: from pps.filterd (m0174681.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 11PCoB1A006707; Thu, 25 Feb 2021 12:51:03 GMT
Received: from eur04-db3-obe.outbound.protection.outlook.com
 (mail-db3eur04lp2052.outbound.protection.outlook.com [104.47.12.52])
 by mx0b-0039f301.pphosted.com with ESMTP id 36x21t9ke0-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Thu, 25 Feb 2021 12:51:03 +0000
Received: from AM0PR03MB3508.eurprd03.prod.outlook.com (2603:10a6:208:4f::23)
 by AM9PR03MB7043.eurprd03.prod.outlook.com (2603:10a6:20b:2de::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3868.27; Thu, 25 Feb
 2021 12:51:01 +0000
Received: from AM0PR03MB3508.eurprd03.prod.outlook.com
 ([fe80::a9a4:6122:8de2:64cb]) by AM0PR03MB3508.eurprd03.prod.outlook.com
 ([fe80::a9a4:6122:8de2:64cb%6]) with mapi id 15.20.3846.048; Thu, 25 Feb 2021
 12:51:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 228437ab-2635-4c76-b0e9-e5d936f2479f
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=e4CSd6q+nBVh3qwYKL+Pk8TSuULiLAhx8OZyy2tLZZ3/b2AjiqWqDclsuZDYr5k8oDK/ot6nKBMTVVfAdssOQ0CsB0uVZuH94+tqbAcqZBmzBahgtH3rfrJTbs19Xn3J9WP6PlVDfmImFgu55eZXXj8h9trR6a4tJEsDR70fKTJQlhbzRzmmJ+L06m6FgjVjQva+BQwiG01BaTO88w1siNByUYKKx0HfziJClqh4FGvIH4p1BsrwmGsxnad5I1fLExM009bhBLM2olNKJzFVYJmYb1qVrNHN1qJAp9sgV2iEdJ3cBn5cyhZb4VwQyfjQEDbwCZcPRmg6MiN8aGWzcg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=emP4aFwAMAvIsnQEsPzj4VqUxQYACY1/A2oh3xxnAis=;
 b=ELF9JQoKBpT+/yY+rFz5/lMbeFqi9TrjINogNm/Yo0xj2/bg77AjKSnDXUqtp8qWcvLRp3AmpTTALSfCpJBnjDIF+irxKpRW8tVteX6zPUKQr2i5u+7otdUFhyJWP6wjfKxLyNid7fRbaMFh6U85a1SncFcus3IE4jCCylT2rSVJazfxZcKx1i5lrLrzfb17TTyQfceXrA/jztpT3LXLLfJNymJvrpd8vENDORAm0llNql9FQInyQ+yDFLTVfO86Yq6hGaVHo9PbqWRb/kTvpiw8UF2vau54ck3nLraylPbk1F9kyHBi+dWJiZiuY8AD5QqAIAEFAKWl4/iMQPiofg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=emP4aFwAMAvIsnQEsPzj4VqUxQYACY1/A2oh3xxnAis=;
 b=vGr9Kirx0+wxnnRBmdt2MGih3W41I1d5FdtdsHvH/s7L47FgLmo1T0n3tR6hQ6n5Xdmq+DkCnJvUCMsP/O05wKNkpLAZz+swI+ppRdrItTeONgOsP6yeE7HElrapZgXLxeB1HuMvaTz7nZ4AEaetiB00thyxAKRHi4TaB6r8Rqw6LrRDn+eTAUa26XdHpXnz61vtm6u7gvVGh7p5qkuuUoWO1bGdj7YR7mNJ5xOy55Ww3G9ilmzBqGeeEXxd9LaCicF8ry7b0JeYhSaTHGltbFgD3vTWVW3X0ZIlCMBgvHkQ2ipTBWKEsFHDpBhlPalklNrWSmXTwcXOgNIHE9/6tg==
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Julien Grall <julien.grall.oss@gmail.com>,
        "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        George
 Dunlap <george.dunlap@citrix.com>,
        Dario Faggioli <dfaggioli@suse.com>, Meng
 Xu <mengxu@cis.upenn.edu>,
        Ian Jackson <iwj@xenproject.org>, Jan Beulich
	<jbeulich@suse.com>,
        Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [RFC PATCH 00/10] Preemption in hypervisor (ARM only)
Thread-Topic: [RFC PATCH 00/10] Preemption in hypervisor (ARM only)
Thread-Index: 
 AQHXCYx4A6OUUHr1gkqxWv1TEOLkuqplchSAgAAzdYCAAXE/AIAAtX0AgAAaNQCAABhvgIAAC3gAgADMQgA=
Date: Thu, 25 Feb 2021 12:51:00 +0000
Message-ID: <87a6rs9uq3.fsf@epam.com>
References: <20210223023428.757694-1-volodymyr_babchuk@epam.com>
 <e6d8726c-4074-fe4c-dbbe-e879da2bb7f6@xen.org> <87zgzv56pm.fsf@epam.com>
 <c1c55bcb-dfd4-a552-836a-985268655cf1@xen.org> <87o8g99oal.fsf@epam.com>
 <CAJ=z9a0v37rc_B7xVdQECAYd52PJ0UajGzvX1DYP56Q2RXQ2Tw@mail.gmail.com>
 <87eeh59fwi.fsf@epam.com> <28f8ffcc-d2df-438c-4fa8-a8174d897109@citrix.com>
In-Reply-To: <28f8ffcc-d2df-438c-4fa8-a8174d897109@citrix.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
user-agent: mu4e 1.4.15; emacs 27.1
authentication-results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [176.36.48.175]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: d3ad25d1-1d47-4f62-5f2c-08d8d98c013a
x-ms-traffictypediagnostic: AM9PR03MB7043:
x-microsoft-antispam-prvs: 
 <AM9PR03MB704354D026606DC99DD9AB5DE69E9@AM9PR03MB7043.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB3508.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d3ad25d1-1d47-4f62-5f2c-08d8d98c013a
X-MS-Exchange-CrossTenant-originalarrivaltime: 25 Feb 2021 12:51:01.1107
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: MxF/oEllnTF0xD25MWaEKB0eY1MvFur99/gLR8kUgMpDbOj6ntqXnl1fBotfS20qsevPn06yvb/VAtfW8aL0Pof5vXExUxEvxxDc/e06LxY=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR03MB7043
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 malwarescore=0 mlxscore=0
 impostorscore=0 mlxlogscore=961 clxscore=1015 bulkscore=0
 lowpriorityscore=0 suspectscore=0 adultscore=0 priorityscore=1501
 spamscore=0 phishscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2102250100


Hi Andrew,

Andrew Cooper writes:

> On 24/02/2021 23:58, Volodymyr Babchuk wrote:
>> And I am not mentioning x86 support there...
>
> x86 uses per-pCPU stacks, not per-vCPU stacks.
>
> Transcribing from an old thread which happened in private as part of an
> XSA discussion, concerning the implications of trying to change this.
>
> ~Andrew
>
> -----8<-----
>
> Here is a partial list off the top of my head of the practical problems
> you're going to have to solve.
>
> Introduction of new SpectreRSB vulnerable gadgets.  I'm really close to
> being able to drop RSB stuffing and recover some performance in Xen.
>
> CPL0 entrypoints need updating across schedule.  SYSCALL entry would
> need to become a stub per vcpu, rather than the current stub per pcpu.
> This requires reintroducing a writeable mapping to the TSS (doable) and
> a shadow stack switch of active stacks (This corner case is so broken it
> looks to be a blocker for CET-SS support in Linux, and is resulting in
> some conversation about tweaking Shstk's in future processors).
>
> All per-cpu variables stop working.  You'd need to rewrite Xen to use
> %gs for TLS which will have churn in the PV logic, and introduce the x86
> architectural corner cases of running with an invalid %gs.  Xen has been
> saved from a large number of privilege escalation vulnerabilities in
> common with Linux and Windows by the fact that we don't use %gs, so
> anyone trying to do this is going to have to come up with some concrete
> way of proving that the corner cases are covered.

Thank you. This is exactly what I needed. I am not much of a specialist in
x86, but from what you said, I can see that there is no easy way to switch
contexts while in hypervisor mode.

Then I want to return to the task domain idea, which you mentioned in the
other thread. If I got it right, it would allow us to:

1. Implement asynchronous hypercalls for cases when there is no reason
to hold the calling vCPU in the hypervisor for the whole call duration.

2. Improve time accounting, as tasklets can be scheduled to run in this
task domain.

I skimmed through the ML archives, but didn't find any discussion about it.

As I see it, its implementation would be close to the idle domain
implementation, but a little different.

-- 
Volodymyr Babchuk at EPAM


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 12:52:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 12:52:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89713.169226 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFG8R-0005rr-Bh; Thu, 25 Feb 2021 12:52:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89713.169226; Thu, 25 Feb 2021 12:52:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFG8R-0005rk-8e; Thu, 25 Feb 2021 12:52:51 +0000
Received: by outflank-mailman (input) for mailman id 89713;
 Thu, 25 Feb 2021 12:52:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3W+l=H3=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lFG8P-0005rY-Mf
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 12:52:49 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8762f0ec-6212-4a7c-8d96-0c4f99d072ce;
 Thu, 25 Feb 2021 12:52:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8762f0ec-6212-4a7c-8d96-0c4f99d072ce
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614257568;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=3Hs2asnct3TXgHebZpLXqg1Ns4/LLys/oILo8qHy7pg=;
  b=FubBz7k6AExqZ9RIKUWvtmopu3BZIrZ/u145FQMJRm1pt8crEE4StRLd
   MqSa8Ne7ZGVj8R2YPFHytQdMZwTszWepJCz2Ro8pSsBLrjD/B40Dbo+zm
   SKAiC2jgIvuWdMk2apgavAcycbZbFP5X3jDLXMc6eC6Ye7pDOJAqSCbaf
   Y=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 38199715
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,205,1610427600"; 
   d="scan'208";a="38199715"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=VCweisxKdLCd4LgMKgSWnWI7XuiVN3rHbPJQEoRPUgBW0mcyvN36oOVp8STNtq++XWzx7z/erYJeJX77+BPMD1/CUJHB/xZMl/fQkEv97tJkxK2cRS/+qjHa6yq96sE3PvPae5fikulMETqJR7+ehJ6bDcGWgaMnhmVA9P8mguS4Ac08gwX0JGtRED2lxzW2slFrndtYBQyg7IF0g9yAKDfzkdxu3DgKW08GsA5z2bA1b2/bvS7jEsnZG4rnrFrNGMztBkK/q6QejlO/Mj9bOD3EuMQD4SS8tkzrBNin/pHfDXZLT54w2AmYrV6WKj0Jl6dYFYpNqGubTSTZ7jowJQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=rhPANmBnHkkjRf1uO7PE2gOVdJwy8J+ccpi+jZLnQg4=;
 b=mnjNZgIiUUs2R0jNpqI1kXrqxTkHfqYelawnrhXB0fLGy7/mBc5+reDXFQ4Yd8YyM0XVG5QHXpYmEFmjypGuGg2hTNoc9+i5XLGXLttBAvOmwbfcsYNWIONX9w2o0yPQR60NB8ljIqIlWA1XsjWmOdBOIHDm4nXS8LkevLNo31h9krVNaMhFdcF54cXyr654LNL+c2pppspMMri5ABj9EfhhbS6xA39cQ7I3420ezu8/n8gelDRwHtcBymT1A++/gdK5NWXU03p1TG9KQt5PFg8WKhG8imn/Wp9N4+sfXxm/e5THFIuhmkJqV9vSPlPbRzyS45Hkd+LQRH/KAJo7HA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=rhPANmBnHkkjRf1uO7PE2gOVdJwy8J+ccpi+jZLnQg4=;
 b=BtoKCHYG/QnlSLip+gcG+chN/EAWZhERLoqp+BivLBYHk6Y8VhINZiaYHsbTawyBnsV+7YPShntM4EAjkxIZc+IlI3WeHGbDhcJ3532NBmAejS3ZoiyFLomzmOc8KsogxTq0BzIbmtKZJL+x5UV95qpBU7VsvUTwicSFRF70CRg=
Subject: Re: [PATCH v3][4.15] x86: mirror compat argument translation area for
 32-bit PV
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, Ian Jackson <iwj@xenproject.org>
References: <dbdb045d-42de-af94-64cc-0be7992b80b6@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <0e9b9e12-f3b5-971c-caf6-377c7c7c439a@citrix.com>
Date: Thu, 25 Feb 2021 12:52:37 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
In-Reply-To: <dbdb045d-42de-af94-64cc-0be7992b80b6@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
MIME-Version: 1.0

On 25/02/2021 09:30, Jan Beulich wrote:
> --- a/xen/include/asm-x86/config.h
> +++ b/xen/include/asm-x86/config.h
> @@ -170,7 +170,11 @@ extern unsigned char boot_edid_info[128]
>   *    Guest-defined use.
>   *  0x00000000f5800000 - 0x00000000ffffffff [168MB,             PML4:0]
>   *    Read-only machine-to-phys translation table (GUEST ACCESSIBLE).
> - *  0x0000000100000000 - 0x00007fffffffffff [128TB-4GB,         PML4:0-255]
> + *  0x0000000100000000 - 0x000001ffffffffff [2TB-4GB,           PML4:0-3]
> + *    Unused / Reserved for future use.
> + *  0x0000020000000000 - 0x0000027fffffffff [512GB, 2^39 bytes, PML4:4]
> + *    Mirror of per-domain mappings (for argument translation area; also HVM).
> + *  0x0000028000000000 - 0x00007fffffffffff [125.5TB,           PML4:5-255]
>   *    Unused / Reserved for future use.
>   */
>  
> @@ -207,6 +211,8 @@ extern unsigned char boot_edid_info[128]
>  #define PERDOMAIN_SLOTS         3
>  #define PERDOMAIN_VIRT_SLOT(s)  (PERDOMAIN_VIRT_START + (s) * \
>                                   (PERDOMAIN_SLOT_MBYTES << 20))
> +/* Slot 4: mirror of per-domain mappings (for compat xlat area accesses). */
> +#define PERDOMAIN_ALT_VIRT_START PML4_ADDR(260 % 256)

4.

260 % 256 is pure obfuscation.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 12:58:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 12:58:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89716.169239 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFGDv-000659-1z; Thu, 25 Feb 2021 12:58:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89716.169239; Thu, 25 Feb 2021 12:58:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFGDu-000652-V0; Thu, 25 Feb 2021 12:58:30 +0000
Received: by outflank-mailman (input) for mailman id 89716;
 Thu, 25 Feb 2021 12:58:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qWJX=H3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lFGDt-00064h-Nn
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 12:58:29 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7583606f-1264-4c4f-8fa8-34fe05f6ac94;
 Thu, 25 Feb 2021 12:58:28 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 133C2AF6C;
 Thu, 25 Feb 2021 12:58:28 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7583606f-1264-4c4f-8fa8-34fe05f6ac94
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH v3][4.15] x86: mirror compat argument translation area for
 32-bit PV
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <dbdb045d-42de-af94-64cc-0be7992b80b6@suse.com>
 <0e9b9e12-f3b5-971c-caf6-377c7c7c439a@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <2bbc398d-f389-1155-68ca-348ebfe8dfe7@suse.com>
Date: Thu, 25 Feb 2021 13:58:28 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <0e9b9e12-f3b5-971c-caf6-377c7c7c439a@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 25.02.2021 13:52, Andrew Cooper wrote:
> On 25/02/2021 09:30, Jan Beulich wrote:
>> --- a/xen/include/asm-x86/config.h
>> +++ b/xen/include/asm-x86/config.h
>> @@ -170,7 +170,11 @@ extern unsigned char boot_edid_info[128]
>>   *    Guest-defined use.
>>   *  0x00000000f5800000 - 0x00000000ffffffff [168MB,             PML4:0]
>>   *    Read-only machine-to-phys translation table (GUEST ACCESSIBLE).
>> - *  0x0000000100000000 - 0x00007fffffffffff [128TB-4GB,         PML4:0-255]
>> + *  0x0000000100000000 - 0x000001ffffffffff [2TB-4GB,           PML4:0-3]
>> + *    Unused / Reserved for future use.
>> + *  0x0000020000000000 - 0x0000027fffffffff [512GB, 2^39 bytes, PML4:4]
>> + *    Mirror of per-domain mappings (for argument translation area; also HVM).
>> + *  0x0000028000000000 - 0x00007fffffffffff [125.5TB,           PML4:5-255]
>>   *    Unused / Reserved for future use.
>>   */
>>  
>> @@ -207,6 +211,8 @@ extern unsigned char boot_edid_info[128]
>>  #define PERDOMAIN_SLOTS         3
>>  #define PERDOMAIN_VIRT_SLOT(s)  (PERDOMAIN_VIRT_START + (s) * \
>>                                   (PERDOMAIN_SLOT_MBYTES << 20))
>> +/* Slot 4: mirror of per-domain mappings (for compat xlat area accesses). */
>> +#define PERDOMAIN_ALT_VIRT_START PML4_ADDR(260 % 256)
> 
> 4.
> 
> 260 % 256 is pure obfuscation.

Well, that's why the comment is there: the expression is meant to
show why 4 isn't entirely arbitrary. But if there's no way to get
this in without changing it to 4, I will of course do so. Before I
submit v4 - are there any other concerns?

Jan
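
A minimal sketch of the aliasing the `260 % 256` expression alludes to. `pml4_addr` below is a hypothetical stand-in for Xen's PML4_ADDR macro, written under the assumption of standard 4-level paging: slot 260 holds the per-domain mappings in the top (sign-extended) half of the address space, and 260 % 256 = 4 is the slot at the same offset within the low half, matching the 0x0000020000000000 range in the layout comment above.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical stand-in for Xen's PML4_ADDR(): the virtual address of the
 * 512GB region mapped by the given PML4 slot, sign-extended so top-half
 * slots yield canonical addresses.
 */
static uint64_t pml4_addr(unsigned int slot)
{
    uint64_t va = (uint64_t)slot << 39;   /* each slot covers 2^39 bytes */

    if ( va & (1ULL << 47) )              /* top-half slot: sign-extend */
        va |= 0xffff000000000000ULL;

    return va;
}
```

With this, `pml4_addr(260)` is 0xffff820000000000 (the per-domain mappings) and `pml4_addr(260 % 256)` is 0x0000020000000000 (the proposed mirror, PML4 slot 4).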


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 13:03:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 13:03:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89719.169251 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFGIm-00079G-KU; Thu, 25 Feb 2021 13:03:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89719.169251; Thu, 25 Feb 2021 13:03:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFGIm-000799-HZ; Thu, 25 Feb 2021 13:03:32 +0000
Received: by outflank-mailman (input) for mailman id 89719;
 Thu, 25 Feb 2021 13:03:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qWJX=H3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lFGIk-000794-OR
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 13:03:30 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d318e31e-6c6f-4bdb-81ba-c165889cae62;
 Thu, 25 Feb 2021 13:03:30 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 28FABAF6C;
 Thu, 25 Feb 2021 13:03:29 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d318e31e-6c6f-4bdb-81ba-c165889cae62
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Ian Jackson <iwj@xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH][4.15] x86/shadow: suppress "fast fault path" optimization
 without reserved bits
Message-ID: <aefe5617-9f10-23a4-ee27-6ea66b62cdbe@suse.com>
Date: Thu, 25 Feb 2021 14:03:29 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

When none of the physical address bits in PTEs are reserved, we can't
create any 4k (leaf) PTEs which would trigger reserved bit faults. Hence
the present SHOPT_FAST_FAULT_PATH machinery needs to be suppressed in
this case, which is most easily achieved by never creating any magic
entries.

To compensate a little, eliminate sh_write_p2m_entry_post()'s impact on
such hardware.

While at it, also avoid using an MMIO magic entry when that would
truncate the incoming GFN.

Requested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
I wonder if subsequently we couldn't arrange for SMEP/SMAP faults to be
utilized instead, on capable hardware (which might well be all hardware
with such a large physical address width).

I further wonder whether SH_L1E_MMIO_GFN_MASK couldn't / shouldn't be
widened. I don't see a reason why it would need confining to the low
32 bits of the PTE - using the full space up to bit 50 ought to be fine
(i.e. just one address bit left set in the magic mask), and we wouldn't
even need that many to encode a 40-bit GFN (i.e. the extra guarding
added here wouldn't then be needed in the first place).
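
As a rough illustration of the dependency being removed here (a sketch, not Xen code): a 4k leaf PTE triggers a #PF with the RSVD error-code bit only if it sets a bit in the reserved range [51:paddr_bits], so once the reported physical address width reaches the architectural maximum of 52 bits, the magic values can no longer be told apart from real mappings by the hardware.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define SH_L1E_MAGIC 0xffffffff00000001ULL  /* as in types.h below */

/* Sketch: would this 4k leaf PTE fault with the RSVD error-code bit set? */
static bool pte_faults_rsvd(uint64_t pte, unsigned int paddr_bits)
{
    /* Reserved physical-address bits in a leaf PTE are [51:paddr_bits]. */
    uint64_t rsvd = ((1ULL << 52) - 1) & ~((1ULL << paddr_bits) - 1);

    return (pte & rsvd) != 0;
}
```

On a CPU reporting, say, 46 address bits the magic value sets reserved bits 51:46 and faults as intended; with the full 52 bits there are no reserved bits left, hence the suppression in the patch.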

--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -499,7 +499,8 @@ _sh_propagate(struct vcpu *v,
     {
         /* Guest l1e maps emulated MMIO space */
         *sp = sh_l1e_mmio(target_gfn, gflags);
-        d->arch.paging.shadow.has_fast_mmio_entries = true;
+        if ( sh_l1e_is_magic(*sp) )
+            d->arch.paging.shadow.has_fast_mmio_entries = true;
         goto done;
     }
 
--- a/xen/arch/x86/mm/shadow/types.h
+++ b/xen/arch/x86/mm/shadow/types.h
@@ -281,7 +281,8 @@ shadow_put_page_from_l1e(shadow_l1e_t sl
  * pagetables.
  *
  * This is only feasible for PAE and 64bit Xen: 32-bit non-PAE PTEs don't
- * have reserved bits that we can use for this.
+ * have reserved bits that we can use for this.  And even there it can only
+ * be used if the processor doesn't use all 52 address bits.
  */
 
 #define SH_L1E_MAGIC 0xffffffff00000001ULL
@@ -291,14 +292,24 @@ static inline bool sh_l1e_is_magic(shado
 }
 
 /* Guest not present: a single magic value */
-static inline shadow_l1e_t sh_l1e_gnp(void)
+static inline shadow_l1e_t sh_l1e_gnp_raw(void)
 {
     return (shadow_l1e_t){ -1ULL };
 }
 
+static inline shadow_l1e_t sh_l1e_gnp(void)
+{
+    /*
+     * On systems with no reserved physical address bits we can't engage the
+     * fast fault path.
+     */
+    return paddr_bits < PADDR_BITS ? sh_l1e_gnp_raw()
+                                   : shadow_l1e_empty();
+}
+
 static inline bool sh_l1e_is_gnp(shadow_l1e_t sl1e)
 {
-    return sl1e.l1 == sh_l1e_gnp().l1;
+    return sl1e.l1 == sh_l1e_gnp_raw().l1;
 }
 
 /*
@@ -313,9 +324,14 @@ static inline bool sh_l1e_is_gnp(shadow_
 
 static inline shadow_l1e_t sh_l1e_mmio(gfn_t gfn, u32 gflags)
 {
-    return (shadow_l1e_t) { (SH_L1E_MMIO_MAGIC
-                             | MASK_INSR(gfn_x(gfn), SH_L1E_MMIO_GFN_MASK)
-                             | (gflags & (_PAGE_USER|_PAGE_RW))) };
+    unsigned long gfn_val = MASK_INSR(gfn_x(gfn), SH_L1E_MMIO_GFN_MASK);
+
+    if ( paddr_bits >= PADDR_BITS ||
+         gfn_x(gfn) != MASK_EXTR(gfn_val, SH_L1E_MMIO_GFN_MASK) )
+        return shadow_l1e_empty();
+
+    return (shadow_l1e_t) { (SH_L1E_MMIO_MAGIC | gfn_val |
+                             (gflags & (_PAGE_USER | _PAGE_RW))) };
 }
 
 static inline bool sh_l1e_is_mmio(shadow_l1e_t sl1e)


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 13:05:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 13:05:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89722.169263 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFGKv-0007HU-0i; Thu, 25 Feb 2021 13:05:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89722.169263; Thu, 25 Feb 2021 13:05:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFGKu-0007HN-Tt; Thu, 25 Feb 2021 13:05:44 +0000
Received: by outflank-mailman (input) for mailman id 89722;
 Thu, 25 Feb 2021 13:05:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFGKt-0007HF-4B; Thu, 25 Feb 2021 13:05:43 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFGKs-0000XD-UY; Thu, 25 Feb 2021 13:05:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFGKs-00058W-Nk; Thu, 25 Feb 2021 13:05:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lFGKs-0007BM-NH; Thu, 25 Feb 2021 13:05:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [xen-unstable bisection] complete test-xtf-amd64-amd64-3
Message-Id: <E1lFGKs-0007BM-NH@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 25 Feb 2021 13:05:42 +0000

branch xen-unstable
xenbranch xen-unstable
job test-xtf-amd64-amd64-3
testid xtf/test-pv32pae-selftest

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Tree: xtf git://xenbits.xen.org/xtf.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  4dc1815991420b809ce18dddfdf9c0af48944204
  Bug not present: 2d824791504f4119f04f95bafffec2e37d319c25
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/159667/


  commit 4dc1815991420b809ce18dddfdf9c0af48944204
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Fri Feb 19 17:19:56 2021 +0100
  
      x86/PV: harden guest memory accesses against speculative abuse
      
      Inspired by
      https://lore.kernel.org/lkml/f12e7d3cecf41b2c29734ea45a393be21d4a8058.1597848273.git.jpoimboe@redhat.com/
      and prior work in that area of x86 Linux, suppress speculation with
      guest specified pointer values by suitably masking the addresses to
      non-canonical space in case they fall into Xen's virtual address range.
      
      Introduce a new Kconfig control.
      
      Note that it is necessary in such code to avoid using "m" kind operands:
      If we didn't, there would be no guarantee that the register passed to
      guest_access_mask_ptr is also the (base) one used for the memory access.
      
      As a minor unrelated change in get_unsafe_asm() the unnecessary "itype"
      parameter gets dropped and the XOR on the fixup path gets changed to be
      a 32-bit one in all cases: This way we avoid pointless REX.W or operand
      size overrides, or writes to partial registers.
      
      Requested-by: Andrew Cooper <andrew.cooper3@citrix.com>
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
      Release-Acked-by: Ian Jackson <iwj@xenproject.org>
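
The masking idea in the commit message can be sketched in C as below. This is a simplified illustration, not the patch's actual inline assembly, and XEN_VIRT_START is an assumed hypervisor base address.

```c
#include <assert.h>
#include <stdint.h>

#define XEN_VIRT_START 0xffff800000000000ULL  /* assumed hypervisor base */

/*
 * Sketch of the masking idea: guest addresses below the hypervisor range
 * pass through unchanged, while anything overlapping it has bit 63
 * cleared.  For a top-half address that yields a non-canonical value,
 * so a dereference faults instead of touching hypervisor memory.
 */
static uint64_t guest_access_mask(uint64_t addr)
{
    uint64_t mask = (addr < XEN_VIRT_START) ? ~0ULL        /* keep as-is */
                                            : ~(1ULL << 63); /* de-canonicalise */

    return addr & mask;
}
```

The real code computes the mask branchlessly in assembly, precisely so the comparison itself cannot be speculated around, and keeps the masked pointer in the same register used for the access (hence the note about avoiding "m" operands).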


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable/test-xtf-amd64-amd64-3.xtf--test-pv32pae-selftest.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-unstable/test-xtf-amd64-amd64-3.xtf--test-pv32pae-selftest --summary-out=tmp/159667.bisection-summary --basis-template=159475 --blessings=real,real-bisect,real-retry xen-unstable test-xtf-amd64-amd64-3 xtf/test-pv32pae-selftest
Searching for failure / basis pass:
 159626 fail [host=chardonnay0] / 159475 [host=godello1] 159453 [host=fiano0] 159424 [host=albana0] 159396 [host=fiano1] 159362 [host=godello0] 159335 [host=albana0] 159315 [host=huxelrebe1] 159202 ok.
Failure / basis pass flights: 159626 / 159202
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Tree: xtf git://xenbits.xen.org/xtf.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 5d94433a66df29ce314696a13bdd324ec0e342fe 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 687121f8a0e7c1ea1c4fa3d056637e5819342f14 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://xenbits.xen.org/qemu-xen.git#7ea428895af2840d85c524f0bd11a38\
 aac308308-7ea428895af2840d85c524f0bd11a38aac308308 git://xenbits.xen.org/xen.git#687121f8a0e7c1ea1c4fa3d056637e5819342f14-5d94433a66df29ce314696a13bdd324ec0e342fe git://xenbits.xen.org/xtf.git#8ab15139728a8efd3ebbb60beb16a958a6a93fa1-8ab15139728a8efd3ebbb60beb16a958a6a93fa1
Loaded 5001 nodes in revision graph
Searching for test results:
 159202 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 687121f8a0e7c1ea1c4fa3d056637e5819342f14 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159315 [host=huxelrebe1]
 159335 [host=albana0]
 159362 [host=godello0]
 159396 [host=fiano1]
 159424 [host=albana0]
 159453 [host=fiano0]
 159475 [host=godello1]
 159487 [host=elbling0]
 159491 [host=chardonnay1]
 159508 [host=godello0]
 159526 [host=elbling1]
 159540 [host=albana1]
 159559 [host=albana0]
 159576 [host=fiano1]
 159602 [host=godello0]
 159626 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 5d94433a66df29ce314696a13bdd324ec0e342fe 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159651 [host=godello0]
 159653 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 687121f8a0e7c1ea1c4fa3d056637e5819342f14 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159654 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 5d94433a66df29ce314696a13bdd324ec0e342fe 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159655 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 7a4133feaf42000923eb9d84badb6b171625f137 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159657 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 a2639824e8458a8f3826e184081a13bfa27ea884 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159659 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 6794cdd08ea8b3512c53b8f162cb3f88fef54d0d 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159661 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 2d824791504f4119f04f95bafffec2e37d319c25 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159663 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 4dc1815991420b809ce18dddfdf9c0af48944204 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159664 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 2d824791504f4119f04f95bafffec2e37d319c25 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159665 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 4dc1815991420b809ce18dddfdf9c0af48944204 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159666 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 2d824791504f4119f04f95bafffec2e37d319c25 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159667 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 4dc1815991420b809ce18dddfdf9c0af48944204 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
Searching for interesting versions
 Result found: flight 159202 (pass), for basis pass
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 2d824791504f4119f04f95bafffec2e37d319c25 8ab15139728a8efd3ebbb60beb16a958a6a93fa1, results HASH(0x560b6daca558) HASH(0x560b6daccb88) HASH(0x560b6dac6848) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05\
 e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 6794cdd08ea8b3512c53b8f162cb3f88fef54d0d 8ab15139728a8efd3ebbb60beb16a958a6a93fa1, results HASH(0x560b6daca3d8) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 7a4133feaf42000923eb9d84badb6b171625f137 8ab15139728a8efd3ebbb60beb16a958a6a93fa1, results HASH(0x560b6daa\
 b748) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 687121f8a0e7c1ea1c4fa3d056637e5819342f14 8ab15139728a8efd3ebbb60beb16a958a6a93fa1, results HASH(0x560b6dab8e48) HASH(0x560b6daac1c8) Result found: flight 159626 (fail), for basis failure (at ancestor ~77)
 Repro found: flight 159653 (pass), for basis pass
 Repro found: flight 159654 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 2d824791504f4119f04f95bafffec2e37d319c25 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
No revisions left to test, checking graph state.
 Result found: flight 159661 (pass), for last pass
 Result found: flight 159663 (fail), for first failure
 Repro found: flight 159664 (pass), for last pass
 Repro found: flight 159665 (fail), for first failure
 Repro found: flight 159666 (pass), for last pass
 Repro found: flight 159667 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  4dc1815991420b809ce18dddfdf9c0af48944204
  Bug not present: 2d824791504f4119f04f95bafffec2e37d319c25
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/159667/


  commit 4dc1815991420b809ce18dddfdf9c0af48944204
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Fri Feb 19 17:19:56 2021 +0100
  
      x86/PV: harden guest memory accesses against speculative abuse
      
      Inspired by
      https://lore.kernel.org/lkml/f12e7d3cecf41b2c29734ea45a393be21d4a8058.1597848273.git.jpoimboe@redhat.com/
      and prior work in that area of x86 Linux, suppress speculation with
      guest specified pointer values by suitably masking the addresses to
      non-canonical space in case they fall into Xen's virtual address range.
      
      Introduce a new Kconfig control.
      
      Note that it is necessary in such code to avoid using "m" kind operands:
      If we didn't, there would be no guarantee that the register passed to
      guest_access_mask_ptr is also the (base) one used for the memory access.
      
      As a minor unrelated change in get_unsafe_asm() the unnecessary "itype"
      parameter gets dropped and the XOR on the fixup path gets changed to be
      a 32-bit one in all cases: This way we avoid pointless REX.W or operand
      size overrides, or writes to partial registers.
      
      Requested-by: Andrew Cooper <andrew.cooper3@citrix.com>
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
      Release-Acked-by: Ian Jackson <iwj@xenproject.org>
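
[Editorial illustration] The masking technique the commit message describes can be sketched in C as below. This is a simplified, hypothetical model, not Xen's actual guest_access_mask_ptr: XEN_VIRT_START and mask_guest_ptr are placeholder names, and the sketch assumes guest-accessible addresses live in the lower canonical half of the x86-64 address space.

```c
#include <stdint.h>

/*
 * Sketch of branchless guest-pointer masking (illustrative only).
 * A guest-supplied pointer that falls into the hypervisor's (upper)
 * half of the address space is turned into a non-canonical address,
 * so any dereference - architectural or speculative - faults instead
 * of reading hypervisor data.
 */
#define XEN_VIRT_START 0xffff800000000000ull /* placeholder layout */

static inline uint64_t mask_guest_ptr(uint64_t ptr)
{
    /*
     * All-ones when the sign bit (bit 63) is set, i.e. for any
     * upper-half address; all-zeros for lower-half guest addresses.
     * An arithmetic shift, not a conditional branch, so there is
     * nothing for the CPU to mis-speculate past.
     */
    uint64_t mask = (uint64_t)((int64_t)ptr >> 63);

    /*
     * Clearing bit 62 of an upper-half canonical address makes bits
     * 63..47 unequal, i.e. the address becomes non-canonical.
     * Lower-half (guest) pointers pass through unchanged.
     */
    return ptr & ~(mask & (1ull << 62));
}
```

This also illustrates why the commit avoids "m" operand constraints: the masked register itself must be the one used as the base of the subsequent memory access, or the mitigation is void.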

Revision graph left in /home/logs/results/bisect/xen-unstable/test-xtf-amd64-amd64-3.xtf--test-pv32pae-selftest.{dot,ps,png,html,svg}.
----------------------------------------
159667: tolerable all pass

flight 159667 xen-unstable real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/159667/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-xtf-amd64-amd64-3     19 xtf/test-pv32pae-selftest fail baseline untested


jobs:
 test-xtf-amd64-amd64-3                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Thu Feb 25 13:11:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 13:11:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89744.169298 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFGQ2-000088-8y; Thu, 25 Feb 2021 13:11:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89744.169298; Thu, 25 Feb 2021 13:11:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFGQ2-000081-5y; Thu, 25 Feb 2021 13:11:02 +0000
Received: by outflank-mailman (input) for mailman id 89744;
 Thu, 25 Feb 2021 13:11:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tqz4=H3=linaro.org=alex.bennee@srs-us1.protection.inumbo.net>)
 id 1lFGPz-00007q-U3
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 13:11:00 +0000
Received: from mail-wr1-x42e.google.com (unknown [2a00:1450:4864:20::42e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d449a19c-97cd-4124-a8f9-ed2bf40b5728;
 Thu, 25 Feb 2021 13:10:59 +0000 (UTC)
Received: by mail-wr1-x42e.google.com with SMTP id t15so5165886wrx.13
 for <xen-devel@lists.xenproject.org>; Thu, 25 Feb 2021 05:10:59 -0800 (PST)
Received: from zen.linaroharston ([51.148.130.216])
 by smtp.gmail.com with ESMTPSA id u142sm7679305wmu.3.2021.02.25.05.10.56
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 25 Feb 2021 05:10:56 -0800 (PST)
Received: from zen (localhost [127.0.0.1])
 by zen.linaroharston (Postfix) with ESMTP id A3BDF1FF7E;
 Thu, 25 Feb 2021 13:10:55 +0000 (GMT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d449a19c-97cd-4124-a8f9-ed2bf40b5728
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=references:user-agent:from:to:cc:subject:date:in-reply-to
         :message-id:mime-version:content-transfer-encoding;
        bh=mgnS54OEhG6StWQlfiBayFcuQ+35hFIjW41rG7EyOIg=;
        b=DULLN3DamYaBSd1mz/P66d69FaXKvQCld4aT8OhnoqB/9Zqk2X9jx/zMrHWzZZujrV
         Ot7gwsbN4PNyjfAHo0ToAx5XMnAbHveODLpa75mriUf1hB5SVMcfcu7MFNXO/jVpUloT
         kcyQ7QgMoAiVzKshAn8lYpN+LfRoubKgNmR9y/YHp0wfwYj/B2AUA4ddGMuttIV81mzS
         v04v5hZbeCoj/tSkdbwhn0qz7iCo2WhIz0A47i9EjMYZMmG7o1wBX/2vk2TJPVaHbh7g
         efE0qtqFVmusG/SS1v/sEbsBWOTLmQEaLIC5v+bITGtE+8GQIiL5aVKW00AOIHT6GuaW
         pobA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:references:user-agent:from:to:cc:subject:date
         :in-reply-to:message-id:mime-version:content-transfer-encoding;
        bh=mgnS54OEhG6StWQlfiBayFcuQ+35hFIjW41rG7EyOIg=;
        b=i1hHRpUkEMNncqXYCuiQbUOr7Slllujmbhj9Ot8H2ezpeWU7Xh/vpzRUGp4iVUsxrw
         T32Ee8AARS2PI11Hy0Vs0CYkmtqR6GyMEHg0pSLVddT0q2yFt3dF7LJFBytAnBiUbt2L
         HhSxH6hz29InU492R3C+XCbmIN6sq6RpZ6KkwoxHVvGjwWpvSRXayxpDFdlLRfwZ/RfC
         DK5GGBgNNm8BlR8wH+T9NLTV7K0ViYzxZvAPuqY3ApOnN50IilWqV/9inHTTCv/ki+mJ
         g+mzi2rXk5UljRZ0vDnZVQ0j5k/gjSiHDk9w5X9WFZ+qojuEkNjWCedZ9ZkUbe/T2NdV
         QUDA==
X-Gm-Message-State: AOAM533vWWVHARpF8FVQDShsin8zL+5tyjnAtso2BXEUnqIDTGh4DA1+
	X2eIFYMn/M30nSMRlBo+9k24SQ==
X-Google-Smtp-Source: ABdhPJzbUAyxaW4v8GaYgwhteEyl4EmN5AQfIY3i3VygXCemdi8fb3flI6/NTDdme8q4XH5MGuvTSw==
X-Received: by 2002:adf:8b5c:: with SMTP id v28mr1751149wra.272.1614258658131;
        Thu, 25 Feb 2021 05:10:58 -0800 (PST)
References: <20210211171945.18313-1-alex.bennee@linaro.org>
User-agent: mu4e 1.5.8; emacs 28.0.50
From: Alex =?utf-8?Q?Benn=C3=A9e?= <alex.bennee@linaro.org>
To: qemu-devel@nongnu.org
Cc: julien@xen.org, stefano.stabellini@linaro.org,
 stefano.stabellini@xilinx.com, andre.przywara@arm.com,
 stratos-dev@op-lists.linaro.org, xen-devel@lists.xenproject.org, Alex
 =?utf-8?Q?Benn=C3=A9e?= <alex.bennee@linaro.org>
Subject: Re: [PATCH  v2 0/7] Xen guest loader (to boot Xen+Kernel under TCG)
Date: Thu, 25 Feb 2021 13:09:58 +0000
In-reply-to: <20210211171945.18313-1-alex.bennee@linaro.org>
Message-ID: <87y2fc9tsw.fsf@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable


Alex Bennée <alex.bennee@linaro.org> writes:

> Hi,
>
> These patches have been sitting around as part of a larger series to
> improve the support of Xen on AArch64. The second part of the work is
> currently awaiting other re-factoring and build work to go in to make
> the building of a pure-Xen capable QEMU easier. As that might take
> some time and these patches are essentially ready I thought I'd best
> push for merging them.
>
> There are no fundamental changes since the last revision. I've
> addressed most of the comments although I haven't expanded the use of
> the global *fdt to other device models. I figured that work could be
> done as and when models have support for type-1 hypervisors.
>
> I have added some documentation to describe the feature and added an
> acceptance test which checks that the various versions of Xen can boot.
> The only minor wrinkle is using a custom compiled Linux kernel due to
> missing support in the distro kernels. If anyone can suggest a distro
> which is currently well supported for Xen on AArch64 I can update the
> test.
>
> The following patches still need review:
>
>  - tests/avocado: add boot_xen tests
>  - docs: add some documentation for the guest-loader
>  - docs: move generic-loader documentation into the main manual
>  - hw/core: implement a guest-loader to support static hypervisor guests
>
> Alex Bennée (7):
>   hw/board: promote fdt from ARM VirtMachineState to MachineState
>   hw/riscv: migrate fdt field to generic MachineState
>   device_tree: add qemu_fdt_setprop_string_array helper
>   hw/core: implement a guest-loader to support static hypervisor
> guests

Gentle ping. They all have reviews apart from the core bit of loader
code and I'd like to merge this cycle ;-)

-- 
Alex Bennée


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 13:11:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 13:11:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89748.169311 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFGQn-0000Hj-Id; Thu, 25 Feb 2021 13:11:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89748.169311; Thu, 25 Feb 2021 13:11:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFGQn-0000Ha-FZ; Thu, 25 Feb 2021 13:11:49 +0000
Received: by outflank-mailman (input) for mailman id 89748;
 Thu, 25 Feb 2021 13:11:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qWJX=H3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lFGQl-0000HV-Sj
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 13:11:47 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 57add156-7d44-4ffe-8f96-f2fb5e1ab63b;
 Thu, 25 Feb 2021 13:11:47 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 77448AFB5;
 Thu, 25 Feb 2021 13:11:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 57add156-7d44-4ffe-8f96-f2fb5e1ab63b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614258706; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=TLh5jHnGXF8Rf125Zr9n9gQbtHEr2OnV33y4nNQFXpM=;
	b=DhYxzZGW97qcmn9ftQMfpz5P8ieWQypQsM+2WF+dwMzXbLG2YzqnHFDYxnG+uUWH+2yLGm
	tlgnDeLC6QmjTK1OlQy9Hv2tFSgHWRryxrDK4Zxrr9tb90TTAf74FY5q3yn4BR1cijZ/YB
	2nPLXVcv+afprtcQPmfjjudnbG+E73Q=
Subject: Re: [PATCH][4.15] x86/shadow: suppress "fast fault path" optimization
 without reserved bits
From: Jan Beulich <jbeulich@suse.com>
To: Ian Jackson <iwj@xenproject.org>
Cc: Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <aefe5617-9f10-23a4-ee27-6ea66b62cdbe@suse.com>
Message-ID: <dae5479e-9974-334b-7f4f-e4194e435aaa@suse.com>
Date: Thu, 25 Feb 2021 14:11:46 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <aefe5617-9f10-23a4-ee27-6ea66b62cdbe@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 25.02.2021 14:03, Jan Beulich wrote:
> When none of the physical address bits in PTEs are reserved, we can't
> create any 4k (leaf) PTEs which would trigger reserved bit faults. Hence
> the present SHOPT_FAST_FAULT_PATH machinery needs to be suppressed in
> this case, which is most easily achieved by never creating any magic
> entries.
> 
> To compensate a little, eliminate sh_write_p2m_entry_post()'s impact on
> such hardware.

As to 4.15: Without this, shadow mode simply won't work on such (new)
hardware. Hence something needs to be done anyway. An alternative
would be to limit the change to just the guest-no-present entries (to
at least allow PV guests to be migrated), and refuse to enable shadow
mode for HVM guests on such hardware. (In this case we'd probably
better take care of ...

> While at it, also avoid using an MMIO magic entry when that would
> truncate the incoming GFN.

... this long-standing issue, perhaps as outlined in a post-commit-
message remark.)

The main risk here is (in particular for the MMIO part of the change
I suppose) execution suddenly going a different path, which has been
unused / untested (for this specific case) for years.

Jan
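
[Editorial illustration] The "reserved bits" precondition discussed above can be sketched as follows. This is a simplified model, not Xen's code: with 4-level paging, the physical-address bits of a leaf PTE from MAXPHYADDR (CPUID 0x80000008.EAX[7:0]) up to bit 51 are reserved and fault if set; on hardware where MAXPHYADDR reaches 52, no such bits remain, so no "magic" entry can rely on a reserved-bit fault. pte_reserved_addr_mask is a hypothetical helper name.

```c
#include <stdint.h>

/*
 * Simplified illustration: return the mask of reserved physical-
 * address bits in a leaf PTE for a given MAXPHYADDR.  An empty mask
 * means no PTE value can trigger a reserved-bit fault, which is the
 * case the patch under discussion has to handle.
 */
static uint64_t pte_reserved_addr_mask(unsigned int maxphysaddr)
{
    return maxphysaddr >= 52
           ? 0
           : (((1ull << 52) - 1) & ~((1ull << maxphysaddr) - 1));
}
```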


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 13:17:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 13:17:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89755.169322 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFGW8-0000Wh-6q; Thu, 25 Feb 2021 13:17:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89755.169322; Thu, 25 Feb 2021 13:17:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFGW8-0000Wa-3k; Thu, 25 Feb 2021 13:17:20 +0000
Received: by outflank-mailman (input) for mailman id 89755;
 Thu, 25 Feb 2021 13:17:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lFGW7-0000WV-7C
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 13:17:19 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lFGW7-0000kX-0c
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 13:17:19 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lFGW6-0004Md-Vn
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 13:17:18 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lFGW3-0001UJ-NM; Thu, 25 Feb 2021 13:17:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=/hj4/BTGQFDqHsbTaogqGJRszRoCPs4wNHdfk+St+F0=; b=l1BWMYI6GII665yB6r73VrIFn/
	LWUB6pdAjgbZY0dOqPI1TGL3zCLRanfKBGPDHMqaQIkFY/OPPiRGotgHsmNdjszZDXbYebcfa5q4v
	cToB/GYKpMZOHYzhVSttPx7CjV/qjuY1jJS0BJuOWCCkz5kysul/PnbsIzzCHEfXE1II=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24631.41819.456811.665249@mariner.uk.xensource.com>
Date: Thu, 25 Feb 2021 13:17:15 +0000
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel\@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
    Tim Deegan <tim@xen.org>,
    George Dunlap <george.dunlap@citrix.com>,
    Andrew Cooper <andrew.cooper3@citrix.com>,
    Wei Liu <wl@xen.org>,
    Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Subject: Re: [PATCH][4.15] x86/shadow: suppress "fast fault path" optimization
 without reserved bits
In-Reply-To: <aefe5617-9f10-23a4-ee27-6ea66b62cdbe@suse.com>
References: <aefe5617-9f10-23a4-ee27-6ea66b62cdbe@suse.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Jan Beulich writes ("[PATCH][4.15] x86/shadow: suppress "fast fault path" optimization without reserved bits"):
> When none of the physical address bits in PTEs are reserved, we can't
> create any 4k (leaf) PTEs which would trigger reserved bit faults. Hence
> the present SHOPT_FAST_FAULT_PATH machinery needs to be suppressed in
> this case, which is most easily achieved by never creating any magic
> entries.
> 
> To compensate a little, eliminate sh_write_p2m_entry_post()'s impact on
> such hardware.
> 
> While at it, also avoid using an MMIO magic entry when that would
> truncate the incoming GFN.

Judging by the description I'm not sure whether this is a bugfix, or
a change to make it possible to run Xen on hardware where currently it
doesn't work at all.

I assume "none of the physical address bits in PTEs are reserved" is
a property of certain hardware, but it wasn't clear to me (i) whether
such platforms currently exist, and (ii) what the existing Xen code would
do in this case.

Can you enlighten me ?

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 13:18:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 13:18:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89758.169335 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFGXW-0000eO-I1; Thu, 25 Feb 2021 13:18:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89758.169335; Thu, 25 Feb 2021 13:18:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFGXW-0000eH-Eu; Thu, 25 Feb 2021 13:18:46 +0000
Received: by outflank-mailman (input) for mailman id 89758;
 Thu, 25 Feb 2021 13:18:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qWJX=H3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lFGXV-0000eC-PA
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 13:18:45 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f416b39c-09b8-4d67-9640-8bb6d6e1ad7c;
 Thu, 25 Feb 2021 13:18:45 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 30871AFB5;
 Thu, 25 Feb 2021 13:18:44 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f416b39c-09b8-4d67-9640-8bb6d6e1ad7c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614259124; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=7eGGsrE4JEawlrpABZ7S69xNi+nF/N6oxJsFnUYP1gw=;
	b=twJddPBC1kLInAq8l0CtZbD/hnhDZKNPOekCg6+sPEQAst/dnGCOi9yzBfpNvroMZJvD6n
	PgP4NJJ7l2Z/1HvL049WsQbRVhl9SF5zCnp7VkiIOwTUew2NRpoBuDuvoakWHE9quV+0+R
	WlrVfUR9B98R57LpUd3wiDgrbT/4rRM=
Subject: Re: [for-4.15][RESEND PATCH v4 1/2] xen/x86: iommu: Ignore IOMMU
 mapping requests when a domain is dying
To: Julien Grall <julien@xen.org>
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>, Paul Durrant <paul@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210224094356.7606-1-julien@xen.org>
 <20210224094356.7606-2-julien@xen.org>
 <d5a09319-614d-398b-b911-bc2533bec587@suse.com>
 <7ce1deb9-e362-439c-dd14-a17dbb6fb1c8@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <2c1d2b05-7553-5f15-ad28-47aba5b9c47f@suse.com>
Date: Thu, 25 Feb 2021 14:18:44 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <7ce1deb9-e362-439c-dd14-a17dbb6fb1c8@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 25.02.2021 12:56, Julien Grall wrote:
> On 24/02/2021 14:07, Jan Beulich wrote:
>> On 24.02.2021 10:43, Julien Grall wrote:
>>> --- a/xen/drivers/passthrough/x86/iommu.c
>>> +++ b/xen/drivers/passthrough/x86/iommu.c
>>> @@ -267,6 +267,12 @@ int iommu_free_pgtables(struct domain *d)
>>>       struct page_info *pg;
>>>       unsigned int done = 0;
>>>   
>>> +    if ( !is_iommu_enabled(d) )
>>> +        return 0;
>>
>> Why is this addition needed? Hitting a not yet initialize spin lock
>> is - afaict - no worse than a not yet initialized list, so it would
>> seem to me that this can't be the reason. No other reason looks to
>> be called out by the description.
> 
> struct domain_iommu will be initially zeroed as it is part of struct domain.
> 
> For the list, we are so far fine because page_list_remove_head() 
> tolerates NULL. If we were using the normal list operations (e.g. 
> list_del), then this code would have segfaulted.

And so we do, in the CONFIG_BIGMEM case. May I suggest then to split
this out as a prereq patch, or add wording to the description
mentioning this additional effect?

Jan
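
[Editorial illustration] The distinction discussed above - page_list_remove_head() tolerating a zeroed list while a list_del()-style unlink would fault - can be shown with a minimal sketch. This is not Xen's actual page_list or list_head implementation; the node type and helper name are placeholders.

```c
#include <stddef.h>

/* Minimal singly-linked list standing in for Xen's page lists. */
struct node {
    struct node *next;
};

/*
 * Head removal in the spirit of page_list_remove_head(): a
 * zero-initialized (empty) list is tolerated and simply yields NULL,
 * which is why the zeroed struct domain_iommu happens to be safe here.
 */
static struct node *remove_head_safe(struct node **head)
{
    struct node *n = *head;

    if ( !n )
        return NULL;
    *head = n->next;
    return n;
}

/*
 * By contrast, an unconditional unlink in the style of list_del()
 * assumes the entry's link pointers are valid, and would dereference
 * NULL when handed an entry from an uninitialized list.
 */
```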


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 13:20:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 13:20:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89762.169346 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFGZ7-0001cp-U3; Thu, 25 Feb 2021 13:20:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89762.169346; Thu, 25 Feb 2021 13:20:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFGZ7-0001ci-Qz; Thu, 25 Feb 2021 13:20:25 +0000
Received: by outflank-mailman (input) for mailman id 89762;
 Thu, 25 Feb 2021 13:20:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lFGZ6-0001cd-QU
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 13:20:24 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lFGZ6-0000oR-Ph
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 13:20:24 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lFGZ6-0004g6-Oy
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 13:20:24 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lFGYv-0001VH-U0; Thu, 25 Feb 2021 13:20:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=cGXK98OPChZFyUYflc02QgIrr1x6T2w/4JTgHwg57Xw=; b=glgHR9q5R5le5ckkgiZNSm1R9S
	ynIcPDeRi+ZrhZA0d2vQWUGliSVFYr9Q2/o9UflGS745oCtX46o4dVocJTADkrf8yIO11hxA81kpK
	mM1UZ3YOjKAugaVt9yHJ5OZBIt6hI2URFqHYMy6C6WMKjOaZO3EqW0jML1PZFhtRaaK0=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24631.41997.596809.646522@mariner.uk.xensource.com>
Date: Thu, 25 Feb 2021 13:20:13 +0000
To: Jan Beulich <jbeulich@suse.com>
Cc: Tim Deegan <tim@xen.org>,
    George Dunlap <george.dunlap@citrix.com>,
    Andrew Cooper <andrew.cooper3@citrix.com>,
    Wei Liu <wl@xen.org>,
    Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
    "xen-devel\@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH][4.15] x86/shadow: suppress "fast fault path" optimization
 without reserved bits
In-Reply-To: <dae5479e-9974-334b-7f4f-e4194e435aaa@suse.com>
References: <aefe5617-9f10-23a4-ee27-6ea66b62cdbe@suse.com>
	<dae5479e-9974-334b-7f4f-e4194e435aaa@suse.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Jan Beulich writes ("Re: [PATCH][4.15] x86/shadow: suppress "fast fault path" optimization without reserved bits"):
> As to 4.15: Without this, shadow mode simply won't work on such (new)
> hardware. Hence something needs to be done anyway. An alternative
> would be to limit the change to just the guest-no-present entries (to
> at least allow PV guests to be migrated), and refuse to enable shadow
> mode for HVM guests on such hardware. (In this case we'd probably
> better take care of ...

Thanks for this explanation.

It sounds like the way you have it in this proposed patch is simpler
than the alternative.  And that right now it's not a regression, but
it is needed for running Xen on such newer hardware.

> The main risk here is (in particular for the MMIO part of the change
> I suppose) execution suddenly going a different path, which has been
> unused / untested (for this specific case) for years.

That's somewhat concerning.  But I think this only applies to the new
hardware ?  So it would be risking an XSA but not really risking the
release very much.

I think therefore:

Release-Acked-by: Ian Jackson <iwj@xenproject.org>

Ian.


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 13:20:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 13:20:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89764.169359 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFGZW-0001iP-6d; Thu, 25 Feb 2021 13:20:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89764.169359; Thu, 25 Feb 2021 13:20:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFGZW-0001iI-3Q; Thu, 25 Feb 2021 13:20:50 +0000
Received: by outflank-mailman (input) for mailman id 89764;
 Thu, 25 Feb 2021 13:20:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qWJX=H3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lFGZU-0001hQ-C1
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 13:20:48 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 47cf0385-c67d-4ffb-9f18-3ea188a90d17;
 Thu, 25 Feb 2021 13:20:47 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 53766AFB5;
 Thu, 25 Feb 2021 13:20:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 47cf0385-c67d-4ffb-9f18-3ea188a90d17
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614259246; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=OlQJeW38LJ3RN4gJA/jlMu+eQ6c9R5XnU5CKjyoFUQk=;
	b=baf3xJywOmsyGfDtFJfCQY1yybuxufnhULqK4rQzruryU9BiFUJixnB7lG0+fISJ7lZod9
	bMVz5KScIWREGmgoZKPYys/CPpS0DVF6qoRVKIysMkZP9ojb4RC4CDYpstKFCiTZiIPMWT
	/bMYrXr2irf6EkkfS4Z8Z5IswWljPHE=
Subject: Re: [for-4.15][RESEND PATCH v4 2/2] xen/iommu: x86: Clear the root
 page-table before freeing the page-tables
To: Julien Grall <julien@xen.org>
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>, Paul Durrant <paul@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210224094356.7606-1-julien@xen.org>
 <20210224094356.7606-3-julien@xen.org>
 <c666bf75-451d-fbc1-7fb1-600c4f014f05@suse.com>
 <64de5c8f-83ed-23af-b24f-3c8dde50e226@suse.com>
 <95800b2b-1412-ee65-33b9-d3e2702b1c88@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a0eca9cf-4b8d-3e8a-faad-b2e447345bc1@suse.com>
Date: Thu, 25 Feb 2021 14:20:46 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <95800b2b-1412-ee65-33b9-d3e2702b1c88@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 25.02.2021 13:01, Julien Grall wrote:
> On 24/02/2021 16:00, Jan Beulich wrote:
>> On 24.02.2021 16:58, Jan Beulich wrote:
>>> On 24.02.2021 10:43, Julien Grall wrote:
>>>> --- a/xen/drivers/passthrough/x86/iommu.c
>>>> +++ b/xen/drivers/passthrough/x86/iommu.c
>>>> @@ -149,6 +149,13 @@ int arch_iommu_domain_init(struct domain *d)
>>>>   
>>>>   void arch_iommu_domain_destroy(struct domain *d)
>>>>   {
>>>> +    /*
>>>> +     * There should be not page-tables left allocated by the time the
>>>> +     * domain is destroyed. Note that arch_iommu_domain_destroy() is
>>>> +     * called unconditionally, so pgtables may be unitialized.
>>>> +     */
>>>> +    ASSERT(dom_iommu(d)->platform_ops == NULL ||
>>>
>>> Since you've used the preferred ! instead of "== 0" /
>>> "== NULL" in the ASSERT()s you add further up, may I ask that
>>> you do so here as well?
>>
>> Oh, and I meant to provide
>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>> preferably with that cosmetic adjustment (and ideally also with
>> "uninitialized" in the comment, as I notice only now).
> 
> I don't seem to find your original answer in my inbox and on lore [1]. 
> Could you confirm whether the two comments visible in this thread are 
> the only ones you made on this patch?

Oh, yes - what I appear to have done is reply to a draft that was
never sent. There was indeed just what's visible above - thanks for
double checking.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 13:25:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 13:25:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89768.169370 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFGe9-0001uY-QS; Thu, 25 Feb 2021 13:25:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89768.169370; Thu, 25 Feb 2021 13:25:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFGe9-0001uR-NS; Thu, 25 Feb 2021 13:25:37 +0000
Received: by outflank-mailman (input) for mailman id 89768;
 Thu, 25 Feb 2021 13:25:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qWJX=H3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lFGe8-0001uM-Ny
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 13:25:36 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0e7edcde-e457-4c8f-a28a-f84dc9a0679a;
 Thu, 25 Feb 2021 13:25:35 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E29D0AFEC;
 Thu, 25 Feb 2021 13:25:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0e7edcde-e457-4c8f-a28a-f84dc9a0679a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614259535; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=j7Vacklv3PUp5sL4v4xQbFRLhlG3BXVNu0h33D+R/aA=;
	b=n+zz5iQC8kHA4r4fNUnb+0hkowE7HjdNrRqjgq+jSQgZjOfUBIivdLkj0E4oWBdkcxVvI8
	sNn3cGVtrb+ffkdFzs31ZerLGxtgrHMHVr3PhmegBPULwP6gdtiE7VWzyKGtGPoIHSfw+Q
	9hGm2BdsakFKqd2eNFQKu6jEveplVIQ=
Subject: Re: [PATCH][4.15] x86/shadow: suppress "fast fault path" optimization
 without reserved bits
To: Ian Jackson <iwj@xenproject.org>
Cc: Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <aefe5617-9f10-23a4-ee27-6ea66b62cdbe@suse.com>
 <dae5479e-9974-334b-7f4f-e4194e435aaa@suse.com>
 <24631.41997.596809.646522@mariner.uk.xensource.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e79c8610-6e4e-e57f-a10a-aad1f2cbdfed@suse.com>
Date: Thu, 25 Feb 2021 14:25:35 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <24631.41997.596809.646522@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 25.02.2021 14:20, Ian Jackson wrote:
> Jan Beulich writes ("Re: [PATCH][4.15] x86/shadow: suppress "fast fault path" optimization without reserved bits"):
>> As to 4.15: Without this shadow mode simply won't work on such (new)
>> hardware. Hence something needs to be done anyway. An alternative
>> would be to limit the change to just the guest-no-present entries (to
>> at least allow PV guests to be migrated), and refuse to enable shadow
>> mode for HVM guests on such hardware. (In this case we'd probably
>> better take care of ...
> 
> Thanks for this explanation.
> 
> It sounds like the way you have it in this proposed patch is simpler
> than the alternative.  And that right now it's not a regression, but
> it is needed for running Xen on such newer hardware.

I'm not sure about the "simpler" part.

>> The main risk here is (in particular for the MMIO part of the change
>> I suppose) execution suddenly going a different path, which has been
>> unused / untested (for this specific case) for years.
> 
> That's somewhat concerning.  But I think this only applies to the new
> hardware ?  So it would be risking an XSA but not really risking the
> release very much.

Right - afaict an XSA would also be lurking without us doing anything,
as we'd permit a guest access to pages we didn't mean to hand to it.

> I think therefore:
> 
> Release-Acked-by: Ian Jackson <iwj@xenproject.org>

Thanks.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 13:30:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 13:30:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89772.169383 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFGiW-0002v2-HG; Thu, 25 Feb 2021 13:30:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89772.169383; Thu, 25 Feb 2021 13:30:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFGiW-0002uv-EG; Thu, 25 Feb 2021 13:30:08 +0000
Received: by outflank-mailman (input) for mailman id 89772;
 Thu, 25 Feb 2021 13:30:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qWJX=H3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lFGiU-0002r0-Rv
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 13:30:06 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 218ba7c9-98c4-4338-8d7d-a02680e17ccb;
 Thu, 25 Feb 2021 13:30:06 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7349CAFE5;
 Thu, 25 Feb 2021 13:30:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 218ba7c9-98c4-4338-8d7d-a02680e17ccb
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614259805; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=FX88hhTm8JfiZk5Vqt5V+3Je8GJxn5GAdxqDaUUwqbo=;
	b=kahfCopJ8sBjEoty2zW8i6F8rbBQsf2+GS87QbFcSrDjlgAneQZoy1z87sKS8MQ6aUZUwE
	O0wTb5FRmmOBXGwO/jmjDPxew2F0QAVEMmCIAsyiMEKycfuWtFDEPDHgXkXmsF8Fppuwb2
	CeBN5izX9puxys+GBFioGaklk8cqPKg=
Subject: Re: [PATCH][4.15] x86/shadow: suppress "fast fault path" optimization
 without reserved bits
To: Ian Jackson <iwj@xenproject.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <aefe5617-9f10-23a4-ee27-6ea66b62cdbe@suse.com>
 <24631.41819.456811.665249@mariner.uk.xensource.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1226b860-9027-f0e8-6f90-78ec2b57ca7f@suse.com>
Date: Thu, 25 Feb 2021 14:30:05 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <24631.41819.456811.665249@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 25.02.2021 14:17, Ian Jackson wrote:
> Jan Beulich writes ("[PATCH][4.15] x86/shadow: suppress "fast fault path" optimization without reserved bits"):
>> When none of the physical address bits in PTEs are reserved, we can't
>> create any 4k (leaf) PTEs which would trigger reserved bit faults. Hence
>> the present SHOPT_FAST_FAULT_PATH machinery needs to be suppressed in
>> this case, which is most easily achieved by never creating any magic
>> entries.
>>
>> To compensate a little, eliminate sh_write_p2m_entry_post()'s impact on
>> such hardware.
>>
>> While at it, also avoid using an MMIO magic entry when that would
>> truncate the incoming GFN.
> 
> Judging by the description I'm not sure whether this is a bugfix, or
> a change to make it possible to run Xen on hardware where currently it
> doesn't work at all.

It's still a bug fix imo, even if the flawed assumption was harmless
so far.

> I assume "none of the physical address bits in PTEs are reserved" is
> a property of certain hardware, but it wasn't clear to me (i) whether
> such platforms currently exists

If they don't exist yet, they're soon to become available afaict.

> (ii) what the existing Xen code would do in this case.

If memory is populated at the top 4Gb of physical address space,
guests would gain access to that memory through these page table
entries, as those don't cause the expected faults to be raised
anymore (but instead get used for valid - from the hardware's
perspective - translations).

Jan


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 13:47:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 13:47:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89777.169401 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFGza-00047g-4A; Thu, 25 Feb 2021 13:47:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89777.169401; Thu, 25 Feb 2021 13:47:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFGza-00047Z-0v; Thu, 25 Feb 2021 13:47:46 +0000
Received: by outflank-mailman (input) for mailman id 89777;
 Thu, 25 Feb 2021 13:47:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lFGzY-00047U-Bf
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 13:47:44 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lFGzY-0001IO-7E
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 13:47:44 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lFGzY-0006sY-6D
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 13:47:44 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lFGzR-0001ZW-VP; Thu, 25 Feb 2021 13:47:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=AqxXIXnje+Cpc30fpYjhZLJ2MVLBZvmJuJWGNo8Ce44=; b=6KbY6id2LQO/Q7WcvOYj93qvDK
	NDj2uRAy6Cs3B5LKCHRijVqWL9rBXTkJElF1xv8tGKcOIbaVS9IKq19rJm8CXWOYzrVZL21nmFmK6
	q1Vj/9eoHddoTAtp9xEoTuVkEDeg51k0GTkbi1CtR5lkCB1JCN6kOTWBu3bM0Qvuspbg=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24631.43641.705515.888498@mariner.uk.xensource.com>
Date: Thu, 25 Feb 2021 13:47:37 +0000
To: Jan Beulich <jbeulich@suse.com>
Cc: Wei Liu <wl@xen.org>,
    Roger Pau =?iso-8859-1?Q?Monn=E9?=  <roger.pau@citrix.com>,
    "xen-devel\@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
    Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [4.15] Re: [PATCH] x86/EFI: suppress GNU ld 2.36'es creation of base
 relocs
In-Reply-To: <fff8e64c-f724-11e8-daa5-80147c6925dd@suse.com>
References: <6ce5b1a7-d7c2-c30c-ad78-233379ea130b@suse.com>
	<53c7a708-1664-0186-1fd6-1056f8e7839c@citrix.com>
	<f8e56c90-f51c-01f7-0987-4c0697a17bb0@suse.com>
	<a35dd0b7-b804-9c75-b93c-e764345df46b@citrix.com>
	<fff8e64c-f724-11e8-daa5-80147c6925dd@suse.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Jan Beulich writes ("[4.15] Re: [PATCH] x86/EFI: suppress GNU ld 2.36'es creation of base relocs"):
> Since getting Andrew's ack has taken across the firm-freeze boundary,
> may I ask for a release-ack here?

Indeed.

Release-Acked-by: Ian Jackson <iwj@xenproject.org>

>  As noted this change (alongside
> the earlier one) will want backporting, perhaps even to security-
> support-only branches.

Noted.

Ian.


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 13:53:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 13:53:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89779.169412 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFH4l-00058Q-OE; Thu, 25 Feb 2021 13:53:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89779.169412; Thu, 25 Feb 2021 13:53:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFH4l-00058J-L4; Thu, 25 Feb 2021 13:53:07 +0000
Received: by outflank-mailman (input) for mailman id 89779;
 Thu, 25 Feb 2021 13:53:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFH4j-00058B-QP; Thu, 25 Feb 2021 13:53:05 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFH4j-0001Ne-Il; Thu, 25 Feb 2021 13:53:05 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFH4j-0006vA-9X; Thu, 25 Feb 2021 13:53:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lFH4j-0005jC-91; Thu, 25 Feb 2021 13:53:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=/dd0AkoQBdlnOSx/FjPLplLmBIxDXkuYHV2bZFatp4Q=; b=cVG5e/IEh+dCvQIldYNj2LVxyM
	rFdAoMAvH06pbr9seEduonHtGXxWI1eGx4nmI2HoyMN8YKkgKcl2/5IIhfFmCRnhRrCb5seKqM1Uf
	4Xg+r1QE7G6EAE5JVY14edHsqRbjQwtLWmXC9IcnXOst2zWOdQmsHN0xUVMixPXgPKQI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159652-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 159652: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-shadow:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-intel:xen-boot:fail:regression
    xen-unstable:test-xtf-amd64-amd64-3:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-xtf-amd64-amd64-1:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    xen-unstable:test-xtf-amd64-amd64-4:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-xtf-amd64-amd64-2:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-xtf-amd64-amd64-5:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-amd64-i386-libvirt-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-raw:xen-boot:fail:regression
    xen-unstable:test-amd64-coresched-i386-xl:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-migrupgrade:xen-boot/dst_host:fail:regression
    xen-unstable:test-amd64-i386-xl-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-libvirt-pair:xen-boot/src_host:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-libvirt-pair:xen-boot/dst_host:fail:regression
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-freebsd10-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-freebsd10-i386:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-examine:reboot:fail:regression
    xen-unstable:test-amd64-i386-libvirt:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-livepatch:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-pair:xen-boot/src_host:fail:regression
    xen-unstable:test-amd64-i386-pair:xen-boot/dst_host:fail:regression
    xen-unstable:test-amd64-i386-xl-pvshim:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=60390ccb8b9b2dbf85010f8b47779bb231aa2533
X-Osstest-Versions-That:
    xen=e8185c5f01c68f7d29d23a4a91bc1be1ff2cc1ca
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 25 Feb 2021 13:53:05 +0000

flight 159652 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159652/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-ws16-amd64  8 xen-boot          fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-debianhvm-amd64  8 xen-boot     fail REGR. vs. 159475
 test-amd64-i386-xl-shadow     8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-qemut-rhel6hvm-intel  8 xen-boot         fail REGR. vs. 159475
 test-xtf-amd64-amd64-3      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-xtf-amd64-amd64-1      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-amd64-i386-xl-qemut-debianhvm-amd64  8 xen-boot     fail REGR. vs. 159475
 test-xtf-amd64-amd64-4      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-xtf-amd64-amd64-2      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-xtf-amd64-amd64-5      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-amd64-i386-libvirt-xsm   8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-xl-raw        8 xen-boot                 fail REGR. vs. 159475
 test-amd64-coresched-i386-xl  8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  8 xen-boot  fail REGR. vs. 159475
 test-amd64-i386-migrupgrade  13 xen-boot/dst_host        fail REGR. vs. 159475
 test-amd64-i386-xl-xsm        8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-ovmf-amd64  8 xen-boot          fail REGR. vs. 159475
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 159475
 test-amd64-i386-qemut-rhel6hvm-amd  8 xen-boot           fail REGR. vs. 159475
 test-amd64-i386-xl            8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-libvirt-pair 12 xen-boot/src_host        fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 159475
 test-amd64-i386-libvirt-pair 13 xen-boot/dst_host        fail REGR. vs. 159475
 test-amd64-i386-qemuu-rhel6hvm-amd  8 xen-boot           fail REGR. vs. 159475
 test-amd64-i386-xl-qemut-win7-amd64  8 xen-boot          fail REGR. vs. 159475
 test-amd64-i386-freebsd10-amd64  8 xen-boot              fail REGR. vs. 159475
 test-amd64-i386-freebsd10-i386  8 xen-boot               fail REGR. vs. 159475
 test-amd64-i386-examine       8 reboot                   fail REGR. vs. 159475
 test-amd64-i386-libvirt       8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  8 xen-boot  fail REGR. vs. 159475
 test-amd64-i386-livepatch     8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 159475
 test-amd64-i386-qemuu-rhel6hvm-intel  8 xen-boot         fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-ws16-amd64  8 xen-boot          fail REGR. vs. 159475
 test-amd64-i386-pair         12 xen-boot/src_host        fail REGR. vs. 159475
 test-amd64-i386-pair         13 xen-boot/dst_host        fail REGR. vs. 159475
 test-amd64-i386-xl-pvshim     8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-win7-amd64  8 xen-boot          fail REGR. vs. 159475

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 159475
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 159475
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 159475
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 159475
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 159475
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 159475
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 159475
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  60390ccb8b9b2dbf85010f8b47779bb231aa2533
baseline version:
 xen                  e8185c5f01c68f7d29d23a4a91bc1be1ff2cc1ca

Last test of basis   159475  2021-02-19 06:26:37 Z    6 days
Failing since        159487  2021-02-20 04:29:29 Z    5 days   10 attempts
Testing same since   159652  2021-02-25 01:07:42 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Norbert Manthey <nmanthey@amazon.de>
  Rahul Singh <rahul.singh@arm.com>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Tamas K Lengyel <tamas@tklengyel.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    fail    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 397 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 14:01:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 14:01:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89785.169428 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFHCM-0006Go-Qu; Thu, 25 Feb 2021 14:00:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89785.169428; Thu, 25 Feb 2021 14:00:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFHCM-0006Gh-NT; Thu, 25 Feb 2021 14:00:58 +0000
Received: by outflank-mailman (input) for mailman id 89785;
 Thu, 25 Feb 2021 14:00:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qWJX=H3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lFHCL-0006Gc-O3
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 14:00:57 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bf11aceb-ae94-4dc5-a5f5-cd7b1ba4076c;
 Thu, 25 Feb 2021 14:00:56 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E3CA0AF6F;
 Thu, 25 Feb 2021 14:00:55 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bf11aceb-ae94-4dc5-a5f5-cd7b1ba4076c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614261656; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Nj4L2ku4wGbxFmqRYIJnoV48EkI7LYnigPjL71z+YEU=;
	b=SrwZWq4gdqrXBnJiQH0mEgrKXBH8GFPxyZiA9ix5DuihYgvbtW3ILllqkf23V6EotNuxYl
	pFiPWZ7Hkkj2j5GnCdD5K9WdU9Rk6Jdje2TGwzBx5ncOF3n6fO6a82HlYXsW+OupLDyh5R
	XXOyAGKgHgb42GB3rKiiZ4Y0777f4z4=
Subject: Re: [PATCH] xen-netback: correct success/error reporting for the
 SKB-with-fraglist case
To: paul@xen.org
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "netdev@vger.kernel.org" <netdev@vger.kernel.org>, Wei Liu <wl@xen.org>
References: <4dd5b8ec-a255-7ab1-6dbf-52705acd6d62@suse.com>
 <67bc0728-761b-c3dd-bdd5-1a850ff79fbb@xen.org>
 <76c94541-21a8-7ae5-c4c4-48552f16c3fd@suse.com>
 <17e50fb5-31f7-60a5-1eec-10d18a40ad9a@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <57580966-3880-9e59-5d82-e1de9006aa0c@suse.com>
Date: Thu, 25 Feb 2021 15:00:56 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <17e50fb5-31f7-60a5-1eec-10d18a40ad9a@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 25.02.2021 13:11, Paul Durrant wrote:
> On 25/02/2021 07:33, Jan Beulich wrote:
>> On 24.02.2021 17:39, Paul Durrant wrote:
>>> On 23/02/2021 16:29, Jan Beulich wrote:
>>>> When re-entering the main loop of xenvif_tx_check_gop() a 2nd time, the
>>>> special considerations for the head of the SKB no longer apply. Don't
>>>> mistakenly report ERROR to the frontend for the first entry in the list,
>>>> even if - from all I can tell - this shouldn't matter much as the overall
>>>> transmit will need to be considered failed anyway.
>>>>
>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>>
>>>> --- a/drivers/net/xen-netback/netback.c
>>>> +++ b/drivers/net/xen-netback/netback.c
>>>> @@ -499,7 +499,7 @@ check_frags:
>>>>    				 * the header's copy failed, and they are
>>>>    				 * sharing a slot, send an error
>>>>    				 */
>>>> -				if (i == 0 && sharedslot)
>>>> +				if (i == 0 && !first_shinfo && sharedslot)
>>>>    					xenvif_idx_release(queue, pending_idx,
>>>>    							   XEN_NETIF_RSP_ERROR);
>>>>    				else
>>>>
>>>
>>> I think this will DTRT, but to my mind it would make more sense to clear
>>> 'sharedslot' before the 'goto check_frags' at the bottom of the function.
>>
>> That was my initial idea as well, but
>> - I think it is for a reason that the variable is "const".
>> - There is another use of it which would then instead need further
>>    amending (and which I believe is at least part of the reason for
>>    the variable to be "const").
>>
> 
> Oh, yes. But now that I look again, don't you want:
> 
> if (i == 0 && first_shinfo && sharedslot)
> 
> ? (i.e no '!')
> 
> The comment states that the error should be indicated when the first 
> frag contains the header in the case that the map succeeded but the 
> prior copy from the same ref failed. This can only possibly be the case 
> if this is the 'first_shinfo'

I don't think so, no - there's a difference between "first frag"
(at which point first_shinfo is NULL) and first frag list entry
(at which point first_shinfo is non-NULL).

> (which is why I still think it is safe to unconst 'sharedslot' and
> clear it).

And "no" here as well - this piece of code

		/* First error: if the header haven't shared a slot with the
		 * first frag, release it as well.
		 */
		if (!sharedslot)
			xenvif_idx_release(queue,
					   XENVIF_TX_CB(skb)->pending_idx,
					   XEN_NETIF_RSP_OKAY);

specifically requires sharedslot to have the value that was
assigned to it at the start of the function (this property
doesn't go away when switching from fragments to frag list).
Note also how it uses XENVIF_TX_CB(skb)->pending_idx, i.e. the
value the local variable pending_idx was set from at the start
of the function.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 14:01:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 14:01:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89788.169439 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFHDL-0006OD-4V; Thu, 25 Feb 2021 14:01:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89788.169439; Thu, 25 Feb 2021 14:01:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFHDL-0006O6-1V; Thu, 25 Feb 2021 14:01:59 +0000
Received: by outflank-mailman (input) for mailman id 89788;
 Thu, 25 Feb 2021 14:01:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lFHDJ-0006O1-WD
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 14:01:58 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lFHDJ-0001dL-Hh; Thu, 25 Feb 2021 14:01:57 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lFHDJ-0008AE-7t; Thu, 25 Feb 2021 14:01:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=i8G2WBCHQKbcbqSL0ndl8kl+tO1w8KIN/yH2WppIgQ8=; b=eldvYWEAJ3/IApQZxynoW/FfZz
	CI2CCas3RlVf/0D7e1he87v5ahyDH6mwtJ4wkAtT11rY+hCLuccvQZnJSd8fV2INziB6+wBS0ixVo
	fmJBpNTvg1T1Uu93QsMiVjG1Vqf0Mx+xTY4fhHh9l9rCXkcqEHTHK6tPNu9MdCytK0bE=;
Subject: Re: [PATCH for-4.15] xen/sched: Add missing memory barrier in
 vcpu_block()
To: Ash Wilding <ash.j.wilding@gmail.com>
Cc: dfaggioli@suse.com, george.dunlap@citrix.com, iwj@xenproject.org,
 jbeulich@suse.com, jgrall@amazon.com, sstabellini@kernel.org,
 xen-devel@lists.xenproject.org
References: <20210220194701.24202-1-julien@xen.org>
 <20210223132408.10283-1-ash.j.wilding@gmail.com>
From: Julien Grall <julien@xen.org>
Message-ID: <ee1d43f2-4c2c-66e0-8ad0-c32ca1c7969f@xen.org>
Date: Thu, 25 Feb 2021 14:01:55 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210223132408.10283-1-ash.j.wilding@gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

On 23/02/2021 13:24, Ash Wilding wrote:
> Hi Julien,

Hi Ash,

> Thanks for looking at this,
> 
>> vcpu_block() is now gaining an smp_mb__after_atomic() to prevent the
>> CPU from reading any information about local events before the flag
>> _VPF_blocked is set.
> 
> Reviewed-by: Ash Wilding <ash.j.wilding@gmail.com>

Thanks!

> 
> 
> As an aside,
> 
>> I couldn't convince myself whether the Arm implementation of
>> local_events_need_delivery() contains enough barrier to prevent the
>> re-ordering. However, I don't think we want to play with devil here
>> as the function may be optimized in the future.
> 
> Agreed.
> 
> The vgic_vcpu_pending_irq() and vgic_evtchn_irq_pending() in the call
> path of local_events_need_delivery() both call spin_lock_irqsave(),
> which has an arch_lock_acquire_barrier() in its call path.
> 
> That just happens to map to a heavier smp_mb() on Arm right now, but
> relying on this behaviour would be shaky; I can imagine a future update
> to arch_lock_acquire_barrier() that relaxes it down to just acquire
> semantics like its name implies (for example an LSE-based lock_acquire()
> using LDUMAXA), in which case any code incorrectly relying on that full
> barrier behaviour may break. I'm guessing this is what you meant by the
> function may be optimized in future?

That's one of the optimizations I had in mind. The other one is that we 
may find a way to remove the spinlocks, so the barriers would disappear 
completely.

> 
> Do we know whether there is an expectation for previous loads/stores
> to have been observed before local_events_need_delivery()? I'm wondering
> whether it would make sense to have an smb_mb() at the start of the
> *_nomask() variant in local_events_need_delivery()'s call path.

That's a good question :). For Arm, there are 4 users of 
local_events_need_delivery():
   1) do_poll()
   2) vcpu_block()
   3) hypercall_preempt_check()
   4) general_preempt_check()

3 and 4 are used for breaking down long running operations. I guess we 
would want to have an accurate view of the pending events and therefore 
we would need a memory barrier to prevent the loads happening too early.

In this case, I think the smp_mb() would want to be part of the 
hypercall_preempt_check() and general_preempt_check().

Therefore, I think we want to avoid the extra barrier in 
local_events_need_delivery(). Instead, we should require the caller to 
take care of the ordering if needed.

This would benefit any new architecture, as the common code would 
already contain the appropriate barriers.

@Stefano, what do you think?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 14:37:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 14:37:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89797.169469 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFHlb-00017y-3m; Thu, 25 Feb 2021 14:37:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89797.169469; Thu, 25 Feb 2021 14:37:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFHlb-00017r-0l; Thu, 25 Feb 2021 14:37:23 +0000
Received: by outflank-mailman (input) for mailman id 89797;
 Thu, 25 Feb 2021 14:37:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3W+l=H3=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lFHlZ-00017m-0c
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 14:37:21 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 211ad76f-ff28-4ac5-a925-1ba6a4b313df;
 Thu, 25 Feb 2021 14:37:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 211ad76f-ff28-4ac5-a925-1ba6a4b313df
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614263839;
  h=from:to:cc:subject:date:message-id:mime-version;
  bh=yhNTWm255lLEwEDS0u9GjHof2gWGrlbMR+g2aI2wLBI=;
  b=flOYuPHwmFeim86/qwIAIfFerCcjgKhvVfl9FHH7shm5QqgWxJDQmtTd
   nqIZC+Zz7rh7k6N7S0Rsp2g3BkN2xgFJKpqJvTLbDa9VeGZCNW72vHjWf
   Kzk3u6lKNEtJpabLLD+ZCbeg1fmdk1OLlpVw5wwnZcPXMJVbznxf3J4Kn
   U=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: kUnkce5i2TPbTcmj7ATgf0Pe3FA2brZw5HG65rlz49RQGaOUscwesX5K711dr7aU5pF8bMlZa2
 VB1xHYbN/JNjx6JhGWMyRGAmnXkGAsFdvhc74rkuuodvlwsGg2VkUn7OEhzoE1rL0ja/FHKMDW
 CpQOHVdssmO5HTiZX9TrUYa7wtuB4opxxaqVfVu3dUV5AatUALNb5ayvQkZObL7E9qCMr5O7au
 YUMmgVNEsXEcfXzpOUyOG+k0XI9vMMLiZqFm5+WHBo6jy1HOWWMd7ga/9qHSLKVTbcV3IMDdbK
 MeA=
X-SBRS: 5.2
X-MesageID: 38212095
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,205,1610427600"; 
   d="scan'208";a="38212095"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, "Doug
 Goldstein" <cardoe@cardoe.com>, Ian Jackson <iwj@xenproject.org>
Subject: [PATCH for-4.15] automation: Fix containerize to understand the Alpine container
Date: Thu, 25 Feb 2021 14:37:01 +0000
Message-ID: <20210225143701.8487-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain

This was missing from the work to add the alpine container.

Fixes: a9afe7768bd ("automation: add alpine linux 3.12 x86 build container")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Wei Liu <wl@xen.org>
CC: Doug Goldstein <cardoe@cardoe.com>
CC: Ian Jackson <iwj@xenproject.org>

For 4.15.  This is only developer tooling, with no impact on the build.
---
 automation/scripts/containerize | 1 +
 1 file changed, 1 insertion(+)

diff --git a/automation/scripts/containerize b/automation/scripts/containerize
index da45baed4e..b7c81559fb 100755
--- a/automation/scripts/containerize
+++ b/automation/scripts/containerize
@@ -24,6 +24,7 @@ die() {
 #
 BASE="registry.gitlab.com/xen-project/xen"
 case "_${CONTAINER}" in
+    _alpine) CONTAINER="${BASE}/alpine:3.12" ;;
     _archlinux|_arch) CONTAINER="${BASE}/archlinux:current" ;;
     _centos7) CONTAINER="${BASE}/centos:7" ;;
     _centos72) CONTAINER="${BASE}/centos:7.2" ;;
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Thu Feb 25 15:19:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 15:19:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89808.169503 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFIPh-0005Gq-Hj; Thu, 25 Feb 2021 15:18:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89808.169503; Thu, 25 Feb 2021 15:18:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFIPh-0005Gj-Ev; Thu, 25 Feb 2021 15:18:49 +0000
Received: by outflank-mailman (input) for mailman id 89808;
 Thu, 25 Feb 2021 15:18:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lFIPg-0005Ge-5J
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 15:18:48 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lFIPg-0002sg-19
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 15:18:48 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lFIPf-0005mj-Ua
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 15:18:47 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lFIPc-0001nZ-Oc; Thu, 25 Feb 2021 15:18:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=mHgn79VoU0X+EBcR9Q+xaanqK2e6WMyZ3WvtA8NNHrw=; b=qu7Lh0zyxFCjiUL5lZ8PhyPZqv
	pms7kQmcpvMqeBVrhoDM3JFNv+eTFE/e8Iaxv9ffxlja806YfqiRuqA1fzcciTZ/LeoN2mFlT5vLJ
	DwhGxcN5XZP5MHhxJ0MEwJ7VoUMnKQ/1fmRRidK5BxAN4V2mINtOG64khsrRLo6a5uPs=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24631.49108.545436.134210@mariner.uk.xensource.com>
Date: Thu, 25 Feb 2021 15:18:44 +0000
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
    Wei Liu <wl@xen.org>,
    "Doug  Goldstein" <cardoe@cardoe.com>
Subject: Re: [PATCH for-4.15] automation: Fix containerize to understand the Alpine container
In-Reply-To: <20210225143701.8487-1-andrew.cooper3@citrix.com>
References: <20210225143701.8487-1-andrew.cooper3@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Andrew Cooper writes ("[PATCH for-4.15] automation: Fix containerize to understand the Alpine container"):
> This was missing from the work to add the alpine container.
> 
> Fixes: a9afe7768bd ("automation: add alpine linux 3.12 x86 build container")
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Release-Acked-by: Ian Jackson <iwj@xenproject.org>


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 15:25:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 15:25:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89810.169516 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFIWU-0006Iv-5s; Thu, 25 Feb 2021 15:25:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89810.169516; Thu, 25 Feb 2021 15:25:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFIWU-0006Io-2O; Thu, 25 Feb 2021 15:25:50 +0000
Received: by outflank-mailman (input) for mailman id 89810;
 Thu, 25 Feb 2021 15:25:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yJgK=H3=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lFIWT-0006Ig-2r
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 15:25:49 +0000
Received: from mail-ot1-x32f.google.com (unknown [2607:f8b0:4864:20::32f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id caa3e51a-578f-4676-bf40-d41890cff51d;
 Thu, 25 Feb 2021 15:25:48 +0000 (UTC)
Received: by mail-ot1-x32f.google.com with SMTP id x9so1645930otp.8
 for <xen-devel@lists.xenproject.org>; Thu, 25 Feb 2021 07:25:48 -0800 (PST)
Received: from localhost.localdomain (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id l4sm1047292ooq.33.2021.02.25.07.25.46
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 25 Feb 2021 07:25:47 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: caa3e51a-578f-4676-bf40-d41890cff51d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=cPpdtH7EP/GfkHJoh7H3eTxLsigt9nHP/GO1BtnF+lU=;
        b=Bf14hds7oDs36GvTBWn1wKIYqICk/QqpQeafNMq02oyCb+vENIT+aDszRqm2+bJM01
         AaqCki956flemlJ9yYXFhdufzPXO/1v4Y0uoXdxS6JMC12miRUlohVcEho6BAGl4CLaq
         S3QzCRHgC51c80Cwd89HDijn697vyZQQaj9wokSqULOKtlTlKlt/4OIZ8D3vJc6fzYBL
         309hLkvXyBe7jAEWJ20rT1r8R6/3HxdaP6ZJD+XkWH1eDyR7fTDuMXsJaFKItrowe1lX
         ETNa9e0qsokgB6XHoZvn9qSP6sDDCEqenLg5mHtZUQaBPqlERnGJkKXL0U9bWYV9RZjC
         b0tA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=cPpdtH7EP/GfkHJoh7H3eTxLsigt9nHP/GO1BtnF+lU=;
        b=dOpBL/kmAZaM4yQTPjy9Nx3ZJNZgx8pJh+B5iKVEamtpx9hiVHvGmaUSrElEsnrkEz
         95OmxJmGx9V2mISzOD/ybrfzKDXIvjqD/Wynf1rHve7ula81LpOyOdClFFnT0kJefgH4
         6nyj18Y7bxSZi1GKRJImIyVXzC9098UMjQKk0lVRTXnX3TjisCcFQIgVEmSoqGpvJgtR
         p3ghq8eCHILNUwr8FKg8AYkgrIRk2WOWLGk0QIitfGY44T3yNXp+74kItUwMVgWn53pG
         t03wZhUy3zIUqDw0yFBm9NnjQQWpXz3kG6JgQTucmiaxjtt5Q/dKWFeHs/UEa5uhbRRg
         RK0Q==
X-Gm-Message-State: AOAM532B8E7X/1cnb+TLdlojCXM/cVa+YtISbAp27fRTapZyoFzKAU7Q
	1Uu4Iu98gPTxJBCpGxwILRACukMeZZzp5ct3
X-Google-Smtp-Source: ABdhPJwaZdTJwT2bM4vTgLqMPQUw8O93laRFsy/02o5DbENHrFqspQnaON7fOYWmvSKW7l3RV/E+xA==
X-Received: by 2002:a05:6830:22f6:: with SMTP id t22mr2637474otc.10.1614266747625;
        Thu, 25 Feb 2021 07:25:47 -0800 (PST)
From: Connor Davis <connojdavis@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
	Connor Davis <connojdavis@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH for-next 1/6] xen/char: Default HAS_NS16550 to y only for X86 and ARM
Date: Thu, 25 Feb 2021 08:24:00 -0700
Message-Id: <def4f2a0dc13d486bcfd86601152885acd880dd0.1614265718.git.connojdavis@gmail.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <cover.1614265718.git.connojdavis@gmail.com>
References: <cover.1614265718.git.connojdavis@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Defaulting to yes only for X86 and ARM reduces the requirements
for a minimal build when porting new architectures.

Signed-off-by: Connor Davis <connojdavis@gmail.com>
---
 xen/drivers/char/Kconfig | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/drivers/char/Kconfig b/xen/drivers/char/Kconfig
index b572305657..b15b0c8d6a 100644
--- a/xen/drivers/char/Kconfig
+++ b/xen/drivers/char/Kconfig
@@ -1,6 +1,6 @@
 config HAS_NS16550
 	bool "NS16550 UART driver" if ARM
-	default y
+	default y if (ARM || X86)
 	help
 	  This selects the 16550-series UART support. For most systems, say Y.
 
-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Thu Feb 25 15:25:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 15:25:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89811.169528 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFIWZ-0006L0-Do; Thu, 25 Feb 2021 15:25:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89811.169528; Thu, 25 Feb 2021 15:25:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFIWZ-0006Kt-9u; Thu, 25 Feb 2021 15:25:55 +0000
Received: by outflank-mailman (input) for mailman id 89811;
 Thu, 25 Feb 2021 15:25:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yJgK=H3=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lFIWX-0006Ig-Tq
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 15:25:53 +0000
Received: from mail-ot1-x344.google.com (unknown [2607:f8b0:4864:20::344])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b935c0bf-f453-4cd4-9fb3-258ed13570a3;
 Thu, 25 Feb 2021 15:25:49 +0000 (UTC)
Received: by mail-ot1-x344.google.com with SMTP id b8so5998862oti.7
 for <xen-devel@lists.xenproject.org>; Thu, 25 Feb 2021 07:25:49 -0800 (PST)
Received: from localhost.localdomain (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id l4sm1047292ooq.33.2021.02.25.07.25.47
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 25 Feb 2021 07:25:48 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b935c0bf-f453-4cd4-9fb3-258ed13570a3
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=S/mJiVuOiu+vXGCosCR3Vu2+74XXXoyXUi+nuAJzkY0=;
        b=Pr4hzrTYQEUCle20bYISfAVaTyOqivHe99TIUf6r0gAKtH+arFGI917QdggE/Cl5kU
         NlpgxZzwYQonOqZ5KFq4j3vH0AaxVg/DTlPARD4hHF+LDhRuFqRSHxCveteWb+hiYUtX
         vKGhGMs5ypFhWkxTLmHjVKv+ZWA36KtxIL+SSYjj/+/0QTYPe0Vozrq37ftJuQedqtVG
         Nr+JFfy1DIpdRFMxJ6O+1VCgSEnCNwPLFqVOh2noag4MfbKB0SlQpjRirQKdfVQ05CKZ
         eVziQezSUddqK71HGH69I2BgA/LIbVFlpWp5FEgcedH/jUn9SlFZy5CS7FS3a7rAST+o
         fMpQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=S/mJiVuOiu+vXGCosCR3Vu2+74XXXoyXUi+nuAJzkY0=;
        b=VdG3fw74f/wZ0gR69PrplBF0ggZ7smWgsp6AHKKLfDV9fmHOb6IqrDsu0uvzk2Dkjw
         TOtBJHoB47tEjKdPhVennUMEAEOiHpFgrnTfoBIeTcV7tjsbWS7qXxJDlJxfbaFCAkn7
         LC+64D1rUUaHfFx1yUQ0WstL5HsC4SK5UwjiLcUkdVxRFNEHT+pBT2lN8e5Hdsgg9UjH
         kgdEg2I2xKcO9OMZzFHqZggGOaKbSbOouPJDVNWhG0o2LpM2C/6LvRMXN0MNADyAU/0x
         tiwiUq0sQVLaZ0C436Gck025TF9m6DtNHgWV9iYk05GvpINkqtMQvZW62y9xc9VuNs2o
         cXPQ==
X-Gm-Message-State: AOAM531fGQogkfx4IJaEZKVf6V0nKlF1ij63dbuR3CiD49iTfOevAwrz
	nG1tcA3Vpofid2ZgH1LVgjJ343FpqD+Q4+0z
X-Google-Smtp-Source: ABdhPJxdy3b1mobNcnBRdRRS1EIkalqI25kbgmQE+6fh5/WgH2OS79LPJtpSFmgKcsKjlGur7v+ztw==
X-Received: by 2002:a9d:6356:: with SMTP id y22mr2672025otk.86.1614266748601;
        Thu, 25 Feb 2021 07:25:48 -0800 (PST)
From: Connor Davis <connojdavis@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
	Connor Davis <connojdavis@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH for-next 2/6] xen/common: Guard iommu symbols with CONFIG_HAS_PASSTHROUGH
Date: Thu, 25 Feb 2021 08:24:01 -0700
Message-Id: <444658f690c81b9e93c2c709fa1032c049646763.1614265718.git.connojdavis@gmail.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <cover.1614265718.git.connojdavis@gmail.com>
References: <cover.1614265718.git.connojdavis@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The variables iommu_enabled and iommu_dont_flush_iotlb are defined in
drivers/passthrough/iommu.c and are referenced in common code, which
causes the link to fail when !CONFIG_HAS_PASSTHROUGH.

Guard references to these variables in common code so that xen
builds when !CONFIG_HAS_PASSTHROUGH.

Signed-off-by: Connor Davis <connojdavis@gmail.com>
---
 xen/common/domain.c |  2 ++
 xen/common/memory.c | 10 ++++++++++
 xen/common/sysctl.c |  2 ++
 3 files changed, 14 insertions(+)

diff --git a/xen/common/domain.c b/xen/common/domain.c
index d85984638a..ad66bca325 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -501,7 +501,9 @@ static int sanitise_domain_config(struct xen_domctl_createdomain *config)
             return -EINVAL;
         }
 
+#ifdef CONFIG_HAS_PASSTHROUGH
         if ( !iommu_enabled )
+#endif
         {
             dprintk(XENLOG_INFO, "IOMMU requested but not available\n");
             return -EINVAL;
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 76b9f58478..7135324857 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -294,7 +294,9 @@ int guest_remove_page(struct domain *d, unsigned long gmfn)
     p2m_type_t p2mt;
 #endif
     mfn_t mfn;
+#ifdef CONFIG_HAS_PASSTHROUGH
     bool *dont_flush_p, dont_flush;
+#endif
     int rc;
 
 #ifdef CONFIG_X86
@@ -385,13 +387,17 @@ int guest_remove_page(struct domain *d, unsigned long gmfn)
      * Since we're likely to free the page below, we need to suspend
      * xenmem_add_to_physmap()'s suppressing of IOMMU TLB flushes.
      */
+#ifdef CONFIG_HAS_PASSTHROUGH
     dont_flush_p = &this_cpu(iommu_dont_flush_iotlb);
     dont_flush = *dont_flush_p;
     *dont_flush_p = false;
+#endif
 
     rc = guest_physmap_remove_page(d, _gfn(gmfn), mfn, 0);
 
+#ifdef CONFIG_HAS_PASSTHROUGH
     *dont_flush_p = dont_flush;
+#endif
 
     /*
      * With the lack of an IOMMU on some platforms, domains with DMA-capable
@@ -839,11 +845,13 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
     xatp->gpfn += start;
     xatp->size -= start;
 
+#ifdef CONFIG_HAS_PASSTHROUGH
     if ( is_iommu_enabled(d) )
     {
        this_cpu(iommu_dont_flush_iotlb) = 1;
        extra.ppage = &pages[0];
     }
+#endif
 
     while ( xatp->size > done )
     {
@@ -868,6 +876,7 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
         }
     }
 
+#ifdef CONFIG_HAS_PASSTHROUGH
     if ( is_iommu_enabled(d) )
     {
         int ret;
@@ -894,6 +903,7 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
         if ( unlikely(ret) && rc >= 0 )
             rc = ret;
     }
+#endif
 
     return rc;
 }
diff --git a/xen/common/sysctl.c b/xen/common/sysctl.c
index 3558641cd9..b4dde7bef6 100644
--- a/xen/common/sysctl.c
+++ b/xen/common/sysctl.c
@@ -271,12 +271,14 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
         pi->cpu_khz = cpu_khz;
         pi->max_mfn = get_upper_mfn_bound();
         arch_do_physinfo(pi);
+#ifdef CONFIG_HAS_PASSTHROUGH
         if ( iommu_enabled )
         {
             pi->capabilities |= XEN_SYSCTL_PHYSCAP_directio;
             if ( iommu_hap_pt_share )
                 pi->capabilities |= XEN_SYSCTL_PHYSCAP_iommu_hap_pt_share;
         }
+#endif
         if ( vmtrace_available )
             pi->capabilities |= XEN_SYSCTL_PHYSCAP_vmtrace;
 
-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Thu Feb 25 15:26:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 15:26:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89812.169540 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFIWe-0006Oh-MW; Thu, 25 Feb 2021 15:26:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89812.169540; Thu, 25 Feb 2021 15:26:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFIWe-0006Oa-J1; Thu, 25 Feb 2021 15:26:00 +0000
Received: by outflank-mailman (input) for mailman id 89812;
 Thu, 25 Feb 2021 15:25:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yJgK=H3=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lFIWc-0006Ig-Ts
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 15:25:58 +0000
Received: from mail-ot1-x332.google.com (unknown [2607:f8b0:4864:20::332])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b4b25551-09c9-4f5c-a34e-49f38da048f8;
 Thu, 25 Feb 2021 15:25:50 +0000 (UTC)
Received: by mail-ot1-x332.google.com with SMTP id b8so5998917oti.7
 for <xen-devel@lists.xenproject.org>; Thu, 25 Feb 2021 07:25:50 -0800 (PST)
Received: from localhost.localdomain (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id l4sm1047292ooq.33.2021.02.25.07.25.48
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 25 Feb 2021 07:25:49 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b4b25551-09c9-4f5c-a34e-49f38da048f8
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=0m5dJDerLXkuZbIjHhNT28vY8++JYdZl1jR5h36Ds1M=;
        b=DPnRPwDD4srYuosEWwZF3X8aPV2fQkhNjtgGjLmOSVKFpg6e9ScPHU+CtJrOvlXwjK
         zw6DX/D0nL/q2obdwPHzuDL4I27RSfmWPmW7dsGYwX5TtueyQyyiDOqR1SOQQzL3y762
         2knRf9Kc0H6zNVVcXXk5uW/n8RcEcugtxrAcSUJZA9NGh3SeRVTqof8mt5GVWenAiwUK
         tCC8EIuctNupMgWq/rkvVcbbBDHZopFB/tvVVAlIvK2PxCy8NwV3KNu0BOXJwgZfiT3D
         ScPW8gfhhjeipK9is9hwKEFET8u2n8gSHBqRnQiCjlU5uaiw1YITMfyh01iB41hKM1P8
         G0Ig==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=0m5dJDerLXkuZbIjHhNT28vY8++JYdZl1jR5h36Ds1M=;
        b=Ybu4VCE1qo7FESvlUZZWPzZST6MVoChwUMA+yOve+7yuWiSfgcez3VJDkKh+pMiy/4
         ImK4Ua1kNl6wNKdWER870o3oKFaiJ8SZOaZozM5FHdepxIdkT7VXAVUQm/iZXn3InPoV
         HDXFFySmetX664V7+hb+vuDUA9G3vN+cbKkuefJeThlfcATEnpuse2cTDEN++No1QMEq
         I0UvkQ+gjz2B0h666AXeTe2oRmK1E3kAuUdWuozPZwAcjRZgIgM67BD4PsN4aPmRZfwn
         OW6nI1ZtjoE6V5gY54WAvoXa+xArg1DTmP+suA/UnygLZN1lbt+9rtttcOJKa3voyq8k
         r8Kw==
X-Gm-Message-State: AOAM533X0nSVce2LmHuf1urYYP77fPEW4m8d1RgV/dEptATBBzEXu89E
	1nMS6ce0xzea0qe7Zwql+dOciOOggD0I5iNp
X-Google-Smtp-Source: ABdhPJwe4ud8EAoypaTz0SETf6ZK7rHsSPipfZl1/lJrhmpEksqhqzmEj9wNWFTfFbHKwjYYWna+aA==
X-Received: by 2002:a05:6830:11c7:: with SMTP id v7mr2694660otq.245.1614266749291;
        Thu, 25 Feb 2021 07:25:49 -0800 (PST)
From: Connor Davis <connojdavis@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
	Connor Davis <connojdavis@gmail.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Dario Faggioli <dfaggioli@suse.com>
Subject: [PATCH for-next 3/6] xen/sched: Fix build when NR_CPUS == 1
Date: Thu, 25 Feb 2021 08:24:02 -0700
Message-Id: <d0922adc698ab76223d76a0a7f328a72cedf00ad.1614265718.git.connojdavis@gmail.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <cover.1614265718.git.connojdavis@gmail.com>
References: <cover.1614265718.git.connojdavis@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Return from cpu_schedule_up when either cpu is 0 or
NR_CPUS == 1. This fixes the following:

core.c: In function 'cpu_schedule_up':
core.c:2769:19: error: array subscript 1 is above array bounds
of 'struct vcpu *[1]' [-Werror=array-bounds]
 2769 |     if ( idle_vcpu[cpu] == NULL )
      |

Signed-off-by: Connor Davis <connojdavis@gmail.com>
---
 xen/common/sched/core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index 9745a77eee..f5ec65bf9b 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -2763,7 +2763,7 @@ static int cpu_schedule_up(unsigned int cpu)
     cpumask_set_cpu(cpu, &sched_res_mask);
 
     /* Boot CPU is dealt with later in scheduler_init(). */
-    if ( cpu == 0 )
+    if ( cpu == 0 || NR_CPUS == 1 )
         return 0;
 
     if ( idle_vcpu[cpu] == NULL )
-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Thu Feb 25 15:26:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 15:26:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89813.169552 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFIWj-0006T1-62; Thu, 25 Feb 2021 15:26:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89813.169552; Thu, 25 Feb 2021 15:26:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFIWj-0006Sp-2A; Thu, 25 Feb 2021 15:26:05 +0000
Received: by outflank-mailman (input) for mailman id 89813;
 Thu, 25 Feb 2021 15:26:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yJgK=H3=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lFIWh-0006Ig-U1
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 15:26:03 +0000
Received: from mail-ot1-x330.google.com (unknown [2607:f8b0:4864:20::330])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2d3e7a19-c73b-4d5d-91ec-0ba3d8d4e8c0;
 Thu, 25 Feb 2021 15:25:52 +0000 (UTC)
Received: by mail-ot1-x330.google.com with SMTP id d9so5962992ote.12
 for <xen-devel@lists.xenproject.org>; Thu, 25 Feb 2021 07:25:52 -0800 (PST)
Received: from localhost.localdomain (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id l4sm1047292ooq.33.2021.02.25.07.25.49
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 25 Feb 2021 07:25:49 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2d3e7a19-c73b-4d5d-91ec-0ba3d8d4e8c0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=Do1GcU7RuDyB1+LZXwivBdKq5nGfBzBt9V9qtSs00zM=;
        b=H8ENibihTuhlK3Xw63b8w6JRznQPzGooXd8JHRdERiFZ5AxRjVSjuHI6SHfNOvBtDn
         wybd8pQNY+iQNQz+OpMJZOHghR9rkloZ1g0m0g9QUroMU66Ni5hpja7CLCC2KeoTM9yg
         ACEnCb+fwmztmvzxcLK7YK0xPuha0sa1L3ELQAD7lxg2L0Ct69WnOfXS7+tMzE6KtflG
         /fFFpIJNVvKmjqoHm/A94dIeoHsPzmfKeE5rbjeuwpKGeEr+jvjVU7zfa6ULjMg5QMZn
         xLBvP8FOwACS/6jBG31mCR8r1L/jRixlC4XuzV47AQS85qYAmCqo7yijtR7uEV8HvlIJ
         zH2w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=Do1GcU7RuDyB1+LZXwivBdKq5nGfBzBt9V9qtSs00zM=;
        b=VsMFlnbvXsPfw1Gt3kiNQKBi8fA2V/oJOWGFwkg3mwkAwvXDYlyPxvnNmGBudlbqpy
         /iu+f+RDsntdKhv1A1KteNiZq7aTRTyXjvqbonHBOZRg2zoDswSgG3mLIi4uAVKjTxD5
         86Y/vj/FIFBKgwYdvM/ZsRWxr0Az8u+F7MSfHnY/+ZA7qpe6EGKMQfY0fwUrtMy2J5+d
         kstwNipBhheXeYRxafDSFw+xE7rtcullrpNJMLwEOuNx2jKR5ugpwPY6g81/vn8ikwGY
         b/BeYT3Wxq2dBTz+TcKWpfFdF8HKrGyXWNZfs7nzKc79vB2aGQSItaqTCGogjrW01GRa
         dBSA==
X-Gm-Message-State: AOAM531AdoNo4G7lLgUcxYqeww5+bex+FWdI1GacfTA74cwgRxAZT6Bc
	kG7pLjrAGHOo+h9buamX22sLE9xFoVT1Irbi
X-Google-Smtp-Source: ABdhPJyF8FKJXVVBU51bxVTaQUYzOAGv8map21rtIn+aTAQAQPLP01Vev8hOtXXVuTcM15xvoVJPDQ==
X-Received: by 2002:a9d:798f:: with SMTP id h15mr2704210otm.277.1614266750070;
        Thu, 25 Feb 2021 07:25:50 -0800 (PST)
From: Connor Davis <connojdavis@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
	Connor Davis <connojdavis@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH for-next 4/6] xen: Fix build when !CONFIG_GRANT_TABLE
Date: Thu, 25 Feb 2021 08:24:03 -0700
Message-Id: <eb2d1e911870f1662acfbc073447af2d29455750.1614265718.git.connojdavis@gmail.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <cover.1614265718.git.connojdavis@gmail.com>
References: <cover.1614265718.git.connojdavis@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Declare struct grant_table {}; in grant_table.h when
!CONFIG_GRANT_TABLE. This fixes the following:

/build/xen/include/xen/grant_table.h:84:50: error: 'struct grant_table'
declared inside parameter list will not be visible outside of this
definition or declaration [-Werror]
   84 | static inline int mem_sharing_gref_to_gfn(struct grant_table *gt,
      |

Signed-off-by: Connor Davis <connojdavis@gmail.com>
---
 xen/include/xen/grant_table.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/xen/include/xen/grant_table.h b/xen/include/xen/grant_table.h
index 63b6dc78f4..0e5f6f85c7 100644
--- a/xen/include/xen/grant_table.h
+++ b/xen/include/xen/grant_table.h
@@ -66,6 +66,8 @@ int gnttab_acquire_resource(
 
 #define opt_max_grant_frames 0
 
+struct grant_table {};
+
 static inline int grant_table_init(struct domain *d,
                                    int max_grant_frames,
                                    int max_maptrack_frames)
-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Thu Feb 25 15:26:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 15:26:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89814.169564 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFIWo-0006YL-HZ; Thu, 25 Feb 2021 15:26:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89814.169564; Thu, 25 Feb 2021 15:26:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFIWo-0006Y7-CA; Thu, 25 Feb 2021 15:26:10 +0000
Received: by outflank-mailman (input) for mailman id 89814;
 Thu, 25 Feb 2021 15:26:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yJgK=H3=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lFIWm-0006Ig-UC
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 15:26:08 +0000
Received: from mail-ot1-x335.google.com (unknown [2607:f8b0:4864:20::335])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1364e052-c330-4da9-90bb-5ed76ce4ac61;
 Thu, 25 Feb 2021 15:25:52 +0000 (UTC)
Received: by mail-ot1-x335.google.com with SMTP id e45so5980873ote.9
 for <xen-devel@lists.xenproject.org>; Thu, 25 Feb 2021 07:25:52 -0800 (PST)
Received: from localhost.localdomain (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id l4sm1047292ooq.33.2021.02.25.07.25.51
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 25 Feb 2021 07:25:52 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1364e052-c330-4da9-90bb-5ed76ce4ac61
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=YiyxCQHFug7bXis7dCUf995/U3uEMwgP4ohptzUZnC0=;
        b=XsUF+tEDJHgsFKyjXHcFJ+zEKr+nxIgXngWfdFwdzX8dcQiGbpELJYFinIdlcoRoY7
         zihf1E5D2ZQShv99Q6chMRLIcWAgGMHTFLp44SSkmWD9lYOthFrt2rY6l53LW7Af+n3M
         CuQaxVzp2RaIdFm/KumXPEXpcliU06qQQcvvnrQy/Buii7ywuvaOdSdzaNmcpdWg81Np
         4EkAY5kjOnj7+AUkivbxWmgMDurFU+sA0DPCzewJnnA0aodSxk0RWzScjxmRJJ2bw5Vc
         hckmduIWDQ8DflDPZSDVwDEa4tK8uvW9Tbfqxbqwn0rkY8a4AnhiXiJMbpCQOLO2C11b
         yjPA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=YiyxCQHFug7bXis7dCUf995/U3uEMwgP4ohptzUZnC0=;
        b=VeU89nQ8gsAAoWp9cdngsgSg+1HXB3zxuaqjn3BoKkj/sk5Z2wYFka3KUe9A1NGUjd
         VnBzcTYTJS5/2BZYg0eJ7DoH27/iHw0jbg9sJZ7Esd/bi/hXO2XNj7693PXAnBewxYTs
         JV6fojqhAwUWLVgiPfzldvcB3mKYzYWJ4P0Dov32pLwFictRdzlLG+KOeCpL3DdyvDXj
         Qc6g1zEAkoFVlKJfzu/2XCYR22lJfE9JVSYhmrhLdh0gaLEmiaCzLdtbXhYZQndiRCVw
         UaHnpD4puDBjP5s2kobq9d+s871COLBYxHmn/f4givz1vOXHsFMu1D42AIPDRiZc9zRa
         mCfw==
X-Gm-Message-State: AOAM533/CW0vla7b4/4BiDP6nN3eQH1sqoxi/WdUtY1us6gEC6ZlaFw6
	epovsaa5J3L1mNtO4DPCcsM+NXYKxRXjEdq/
X-Google-Smtp-Source: ABdhPJxNJ8R4E3S6M8cMfhQpDgFRBtQ59DhMPEt4AVCgAyLkG54jWkuYtksHDQZ0ZvTra/++dMm7eg==
X-Received: by 2002:a9d:7512:: with SMTP id r18mr2769605otk.90.1614266752261;
        Thu, 25 Feb 2021 07:25:52 -0800 (PST)
From: Connor Davis <connojdavis@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
	Connor Davis <connojdavis@gmail.com>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: [PATCH for-next 6/6] automation: add container for riscv64 builds
Date: Thu, 25 Feb 2021 08:24:05 -0700
Message-Id: <a7829e62734a73993cd41cdbc18e1d16e4bb06d9.1614265718.git.connojdavis@gmail.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <cover.1614265718.git.connojdavis@gmail.com>
References: <cover.1614265718.git.connojdavis@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a container for cross-compiling xen to riscv64.
This just includes the cross-compiler and necessary packages for
building xen itself (packages for tools, stubdoms, etc., can be
added later).

To build xen in the container run the following:

$ make XEN_TARGET_ARCH=riscv64 SUBSYSTEMS=xen

Signed-off-by: Connor Davis <connojdavis@gmail.com>
---
 automation/build/archlinux/riscv64.dockerfile | 32 +++++++++++++++++++
 automation/scripts/containerize               |  1 +
 2 files changed, 33 insertions(+)
 create mode 100644 automation/build/archlinux/riscv64.dockerfile

diff --git a/automation/build/archlinux/riscv64.dockerfile b/automation/build/archlinux/riscv64.dockerfile
new file mode 100644
index 0000000000..d94048b6c3
--- /dev/null
+++ b/automation/build/archlinux/riscv64.dockerfile
@@ -0,0 +1,32 @@
+FROM archlinux/base
+LABEL maintainer.name="The Xen Project" \
+      maintainer.email="xen-devel@lists.xenproject.org"
+
+# Packages needed for the build
+RUN pacman --noconfirm -Syu \
+    base-devel \
+    gcc \
+    git
+
+# Packages needed for QEMU
+RUN pacman --noconfirm -Syu \
+    pixman \
+    python \
+    sh
+
+# There is a regression in GDB that causes an assertion error
+# when setting breakpoints; pin this commit until it is fixed.
+RUN git clone --recursive -j$(nproc) --progress https://github.com/riscv/riscv-gnu-toolchain && \
+    cd riscv-gnu-toolchain/riscv-gdb && \
+    git checkout 1dd588507782591478882a891f64945af9e2b86c && \
+    cd .. && \
+    ./configure --prefix=/opt/riscv && \
+    make linux -j$(nproc) && \
+    rm -R /riscv-gnu-toolchain
+
+# Add compiler path
+ENV PATH=/opt/riscv/bin/:${PATH}
+
+RUN useradd --create-home user
+USER user
+WORKDIR /build
diff --git a/automation/scripts/containerize b/automation/scripts/containerize
index da45baed4e..1901e8c0ef 100755
--- a/automation/scripts/containerize
+++ b/automation/scripts/containerize
@@ -25,6 +25,7 @@ die() {
 BASE="registry.gitlab.com/xen-project/xen"
 case "_${CONTAINER}" in
     _archlinux|_arch) CONTAINER="${BASE}/archlinux:current" ;;
+    _riscv64) CONTAINER="${BASE}/archlinux:riscv64" ;;
     _centos7) CONTAINER="${BASE}/centos:7" ;;
     _centos72) CONTAINER="${BASE}/centos:7.2" ;;
     _fedora) CONTAINER="${BASE}/fedora:29";;
-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Thu Feb 25 15:26:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 15:26:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89815.169576 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFIWt-0006ew-U1; Thu, 25 Feb 2021 15:26:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89815.169576; Thu, 25 Feb 2021 15:26:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFIWt-0006em-OM; Thu, 25 Feb 2021 15:26:15 +0000
Received: by outflank-mailman (input) for mailman id 89815;
 Thu, 25 Feb 2021 15:26:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yJgK=H3=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lFIWr-0006Ig-UJ
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 15:26:13 +0000
Received: from mail-oi1-x22f.google.com (unknown [2607:f8b0:4864:20::22f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d7244bbf-df56-4aca-b737-d3d6dbf87f08;
 Thu, 25 Feb 2021 15:25:52 +0000 (UTC)
Received: by mail-oi1-x22f.google.com with SMTP id a13so6444675oid.0
 for <xen-devel@lists.xenproject.org>; Thu, 25 Feb 2021 07:25:52 -0800 (PST)
Received: from localhost.localdomain (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id l4sm1047292ooq.33.2021.02.25.07.25.45
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 25 Feb 2021 07:25:46 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d7244bbf-df56-4aca-b737-d3d6dbf87f08
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=jxV1oAouWd/8cftdD9YIWNtxa9BAEzSaS3HYHAZ9n7k=;
        b=FPDMXmJDygUszxoLuOCNlrK28lNTqViFFXeA0ZgzHrRe0vMa9QsIBJF35QYplkl4k0
         aq2hPWx2bYUA/SqjEDYB/fnMT4eBplPKShoJNUDrIJd0ubpi6cTBoGXkgezY4HURd/BW
         XKg6qiz/1aB0hwjwSjAki6A1nUu5fJbRkk4UWMCNCcCleT3AQjU0dXVeIZ0bllAG50oA
         MSGZvyYHewjouj+MJtdAS6DxDp4LQuj+gEyXypER2heqQmsTRUeC/ZAD9U/IUnkUpo2c
         UXD+Tnw5zE4xqPUFic5G/JH3XupQP57xGObgCm390n6wo9snotkhbo8ybzhJmX66jcm4
         3jXg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=jxV1oAouWd/8cftdD9YIWNtxa9BAEzSaS3HYHAZ9n7k=;
        b=C9b50dW/UIYisUrqYCogweEEdgHWofof3EEdl4LaayWI7D/5BKAY3YxYj+8QpS49My
         KS/E5fcgiOM3F1QxJm+0lUsClQwgqIegSatdUkBQ+8f4T8QVuh8q/c86Z3dUDbArOJ/U
         yYVpD7gVxM3ilfEuOckHeGa5TnUXfuAVlGWMqyyQNBPBM1921wPpQia+Refl5tLLELIj
         +Jybcng/zgjgDiVWPxdwxu3dMppWF8UQsaBncvadvzmt7+lXvm15xWK2msLSGtIECHd2
         /JU1Dp3jXOUqjamDHp6+n5IuApWOCW6EdiWwR65T+WqL5CVNhi5qsnhyBilX/SU85zSL
         jvEQ==
X-Gm-Message-State: AOAM532NEUikFT+YfdBjjKkw3nvilW4TbqI3BNRCC20/FMH7Ff+lmZoK
	CwocP3yWNr//1GEZfXg57RT5NTWmPSEgo+zV
X-Google-Smtp-Source: ABdhPJzS3WdZA0j6jUhkkWJ7oLhmSrPSDG0oMoeYLDffemXWgyI+2RnZwhmjmLAZLTPOtddziiimzA==
X-Received: by 2002:aca:3887:: with SMTP id f129mr2244058oia.19.1614266746814;
        Thu, 25 Feb 2021 07:25:46 -0800 (PST)
From: Connor Davis <connojdavis@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
	Connor Davis <connojdavis@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Dario Faggioli <dfaggioli@suse.com>,
	Tamas K Lengyel <tamas@tklengyel.com>,
	Alexandru Isaila <aisaila@bitdefender.com>,
	Petre Pircalabu <ppircalabu@bitdefender.com>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: [PATCH for-next 0/6] Minimal build for RISCV
Date: Thu, 25 Feb 2021 08:23:59 -0700
Message-Id: <cover.1614265718.git.connojdavis@gmail.com>
X-Mailer: git-send-email 2.27.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Hi all,

This series introduces a minimal build for RISC-V. It is based on Bobby's
previous work from last year [0], rebased onto current Xen, with the
various header files borrowed from Linux brought up to date.

This series provides the patches necessary to get a minimal build
working. The build is "minimal" in the sense that 1) it uses a
minimal config and 2) files, functions, and variables are included if
and only if they are required for a successful build based on the
config. It doesn't run at all, as the functions just have stub
implementations.

My hope is that this can serve as a useful example for future ports as
well as inform the community of exactly what is imposed by common code
onto new architectures.

The first four patches modify non-RISC-V code to enable building a
config with:

  !CONFIG_HAS_NS16550
  !CONFIG_HAS_PASSTHROUGH
  NR_CPUS == 1
  !CONFIG_GRANT_TABLE

respectively. The fifth patch adds the RISCV files, and the last patch
adds a docker container for doing the build. To build from the docker
container (after creating it locally), you can run the following:

  $ make XEN_TARGET_ARCH=riscv64 SUBSYSTEMS=xen 
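Expressed as a .config fragment (a sketch; only the option names listed
above are taken from this series), those constraints look like:

```
# CONFIG_HAS_NS16550 is not set
# CONFIG_HAS_PASSTHROUGH is not set
CONFIG_NR_CPUS=1
# CONFIG_GRANT_TABLE is not set
```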

The sources taken from Linux are documented in arch/riscv/README.source.
There were also some files copied from arm:

  asm-arm/softirq.h
  asm-arm/random.h
  asm-arm/nospec.h
  asm-arm/numa.h
  asm-arm/p2m.h
  asm-arm/delay.h
  asm-arm/debugger.h
  asm-arm/desc.h
  asm-arm/guest_access.h
  asm-arm/hardirq.h
  lib/find_next_bit.c

I imagine some of these will want consolidating, but I have put them
under the respective RISC-V directories for now.

[0] https://lore.kernel.org/xen-devel/cover.1579615303.git.bobbyeshleman@gmail.com/

Connor Davis (6):
  xen/char: Default HAS_NS16550 to y only for X86 and ARM
  xen/common: Guard iommu symbols with CONFIG_HAS_PASSTHROUGH
  xen/sched: Fix build when NR_CPUS == 1
  xen: Fix build when !CONFIG_GRANT_TABLE
  xen: Add files needed for minimal riscv build
  automation: add container for riscv64 builds

 automation/build/archlinux/riscv64.dockerfile |  32 ++
 automation/scripts/containerize               |   1 +
 config/riscv64.mk                             |   7 +
 xen/Makefile                                  |   8 +-
 xen/arch/riscv/Kconfig                        |  54 +++
 xen/arch/riscv/Kconfig.debug                  |   0
 xen/arch/riscv/Makefile                       |  57 +++
 xen/arch/riscv/README.source                  |  19 +
 xen/arch/riscv/Rules.mk                       |  13 +
 xen/arch/riscv/arch.mk                        |   7 +
 xen/arch/riscv/configs/riscv64_defconfig      |  12 +
 xen/arch/riscv/delay.c                        |  16 +
 xen/arch/riscv/domain.c                       | 144 +++++++
 xen/arch/riscv/domctl.c                       |  36 ++
 xen/arch/riscv/guestcopy.c                    |  57 +++
 xen/arch/riscv/head.S                         |   6 +
 xen/arch/riscv/irq.c                          |  78 ++++
 xen/arch/riscv/lib/Makefile                   |   1 +
 xen/arch/riscv/lib/find_next_bit.c            | 284 +++++++++++++
 xen/arch/riscv/mm.c                           |  93 +++++
 xen/arch/riscv/p2m.c                          | 150 +++++++
 xen/arch/riscv/percpu.c                       |  17 +
 xen/arch/riscv/platforms/Kconfig              |  31 ++
 xen/arch/riscv/riscv64/asm-offsets.c          |  31 ++
 xen/arch/riscv/setup.c                        |  27 ++
 xen/arch/riscv/shutdown.c                     |  28 ++
 xen/arch/riscv/smp.c                          |  35 ++
 xen/arch/riscv/smpboot.c                      |  34 ++
 xen/arch/riscv/sysctl.c                       |  33 ++
 xen/arch/riscv/time.c                         |  35 ++
 xen/arch/riscv/traps.c                        |  35 ++
 xen/arch/riscv/vm_event.c                     |  39 ++
 xen/arch/riscv/xen.lds.S                      | 113 ++++++
 xen/common/domain.c                           |   2 +
 xen/common/memory.c                           |  10 +
 xen/common/sched/core.c                       |   2 +-
 xen/common/sysctl.c                           |   2 +
 xen/drivers/char/Kconfig                      |   2 +-
 xen/drivers/char/serial.c                     |   1 +
 xen/include/asm-riscv/altp2m.h                |  39 ++
 xen/include/asm-riscv/asm.h                   |  77 ++++
 xen/include/asm-riscv/asm_defns.h             |  24 ++
 xen/include/asm-riscv/atomic.h                | 204 ++++++++++
 xen/include/asm-riscv/bitops.h                | 331 +++++++++++++++
 xen/include/asm-riscv/bug.h                   |  54 +++
 xen/include/asm-riscv/byteorder.h             |  16 +
 xen/include/asm-riscv/cache.h                 |  24 ++
 xen/include/asm-riscv/cmpxchg.h               | 382 ++++++++++++++++++
 xen/include/asm-riscv/compiler_types.h        |  32 ++
 xen/include/asm-riscv/config.h                | 110 +++++
 xen/include/asm-riscv/cpufeature.h            |  17 +
 xen/include/asm-riscv/csr.h                   | 219 ++++++++++
 xen/include/asm-riscv/current.h               |  47 +++
 xen/include/asm-riscv/debugger.h              |  15 +
 xen/include/asm-riscv/delay.h                 |  15 +
 xen/include/asm-riscv/desc.h                  |  12 +
 xen/include/asm-riscv/device.h                |  15 +
 xen/include/asm-riscv/div64.h                 |  23 ++
 xen/include/asm-riscv/domain.h                |  50 +++
 xen/include/asm-riscv/event.h                 |  42 ++
 xen/include/asm-riscv/fence.h                 |  12 +
 xen/include/asm-riscv/flushtlb.h              |  34 ++
 xen/include/asm-riscv/grant_table.h           |  12 +
 xen/include/asm-riscv/guest_access.h          |  41 ++
 xen/include/asm-riscv/guest_atomics.h         |  60 +++
 xen/include/asm-riscv/hardirq.h               |  27 ++
 xen/include/asm-riscv/hypercall.h             |  12 +
 xen/include/asm-riscv/init.h                  |  42 ++
 xen/include/asm-riscv/io.h                    | 283 +++++++++++++
 xen/include/asm-riscv/iocap.h                 |  13 +
 xen/include/asm-riscv/iommu.h                 |  46 +++
 xen/include/asm-riscv/irq.h                   |  58 +++
 xen/include/asm-riscv/mem_access.h            |   4 +
 xen/include/asm-riscv/mm.h                    | 246 +++++++++++
 xen/include/asm-riscv/monitor.h               |  65 +++
 xen/include/asm-riscv/nospec.h                |  25 ++
 xen/include/asm-riscv/numa.h                  |  41 ++
 xen/include/asm-riscv/p2m.h                   | 218 ++++++++++
 xen/include/asm-riscv/page-bits.h             |  11 +
 xen/include/asm-riscv/page.h                  |  73 ++++
 xen/include/asm-riscv/paging.h                |  15 +
 xen/include/asm-riscv/pci.h                   |  31 ++
 xen/include/asm-riscv/percpu.h                |  33 ++
 xen/include/asm-riscv/processor.h             |  59 +++
 xen/include/asm-riscv/random.h                |   9 +
 xen/include/asm-riscv/regs.h                  |  23 ++
 xen/include/asm-riscv/setup.h                 |  14 +
 xen/include/asm-riscv/smp.h                   |  46 +++
 xen/include/asm-riscv/softirq.h               |  16 +
 xen/include/asm-riscv/spinlock.h              |  12 +
 xen/include/asm-riscv/string.h                |  28 ++
 xen/include/asm-riscv/sysregs.h               |  16 +
 xen/include/asm-riscv/system.h                |  99 +++++
 xen/include/asm-riscv/time.h                  |  31 ++
 xen/include/asm-riscv/trace.h                 |  12 +
 xen/include/asm-riscv/types.h                 |  60 +++
 xen/include/asm-riscv/vm_event.h              |  55 +++
 xen/include/asm-riscv/xenoprof.h              |  12 +
 xen/include/public/arch-riscv.h               | 183 +++++++++
 xen/include/public/arch-riscv/hvm/save.h      |  39 ++
 xen/include/public/hvm/save.h                 |   2 +
 xen/include/public/pmu.h                      |   2 +
 xen/include/public/xen.h                      |   2 +
 xen/include/xen/domain.h                      |   1 +
 xen/include/xen/grant_table.h                 |   2 +
 105 files changed, 5421 insertions(+), 4 deletions(-)
 create mode 100644 automation/build/archlinux/riscv64.dockerfile
 create mode 100644 config/riscv64.mk
 create mode 100644 xen/arch/riscv/Kconfig
 create mode 100644 xen/arch/riscv/Kconfig.debug
 create mode 100644 xen/arch/riscv/Makefile
 create mode 100644 xen/arch/riscv/README.source
 create mode 100644 xen/arch/riscv/Rules.mk
 create mode 100644 xen/arch/riscv/arch.mk
 create mode 100644 xen/arch/riscv/configs/riscv64_defconfig
 create mode 100644 xen/arch/riscv/delay.c
 create mode 100644 xen/arch/riscv/domain.c
 create mode 100644 xen/arch/riscv/domctl.c
 create mode 100644 xen/arch/riscv/guestcopy.c
 create mode 100644 xen/arch/riscv/head.S
 create mode 100644 xen/arch/riscv/irq.c
 create mode 100644 xen/arch/riscv/lib/Makefile
 create mode 100644 xen/arch/riscv/lib/find_next_bit.c
 create mode 100644 xen/arch/riscv/mm.c
 create mode 100644 xen/arch/riscv/p2m.c
 create mode 100644 xen/arch/riscv/percpu.c
 create mode 100644 xen/arch/riscv/platforms/Kconfig
 create mode 100644 xen/arch/riscv/riscv64/asm-offsets.c
 create mode 100644 xen/arch/riscv/setup.c
 create mode 100644 xen/arch/riscv/shutdown.c
 create mode 100644 xen/arch/riscv/smp.c
 create mode 100644 xen/arch/riscv/smpboot.c
 create mode 100644 xen/arch/riscv/sysctl.c
 create mode 100644 xen/arch/riscv/time.c
 create mode 100644 xen/arch/riscv/traps.c
 create mode 100644 xen/arch/riscv/vm_event.c
 create mode 100644 xen/arch/riscv/xen.lds.S
 create mode 100644 xen/include/asm-riscv/altp2m.h
 create mode 100644 xen/include/asm-riscv/asm.h
 create mode 100644 xen/include/asm-riscv/asm_defns.h
 create mode 100644 xen/include/asm-riscv/atomic.h
 create mode 100644 xen/include/asm-riscv/bitops.h
 create mode 100644 xen/include/asm-riscv/bug.h
 create mode 100644 xen/include/asm-riscv/byteorder.h
 create mode 100644 xen/include/asm-riscv/cache.h
 create mode 100644 xen/include/asm-riscv/cmpxchg.h
 create mode 100644 xen/include/asm-riscv/compiler_types.h
 create mode 100644 xen/include/asm-riscv/config.h
 create mode 100644 xen/include/asm-riscv/cpufeature.h
 create mode 100644 xen/include/asm-riscv/csr.h
 create mode 100644 xen/include/asm-riscv/current.h
 create mode 100644 xen/include/asm-riscv/debugger.h
 create mode 100644 xen/include/asm-riscv/delay.h
 create mode 100644 xen/include/asm-riscv/desc.h
 create mode 100644 xen/include/asm-riscv/device.h
 create mode 100644 xen/include/asm-riscv/div64.h
 create mode 100644 xen/include/asm-riscv/domain.h
 create mode 100644 xen/include/asm-riscv/event.h
 create mode 100644 xen/include/asm-riscv/fence.h
 create mode 100644 xen/include/asm-riscv/flushtlb.h
 create mode 100644 xen/include/asm-riscv/grant_table.h
 create mode 100644 xen/include/asm-riscv/guest_access.h
 create mode 100644 xen/include/asm-riscv/guest_atomics.h
 create mode 100644 xen/include/asm-riscv/hardirq.h
 create mode 100644 xen/include/asm-riscv/hypercall.h
 create mode 100644 xen/include/asm-riscv/init.h
 create mode 100644 xen/include/asm-riscv/io.h
 create mode 100644 xen/include/asm-riscv/iocap.h
 create mode 100644 xen/include/asm-riscv/iommu.h
 create mode 100644 xen/include/asm-riscv/irq.h
 create mode 100644 xen/include/asm-riscv/mem_access.h
 create mode 100644 xen/include/asm-riscv/mm.h
 create mode 100644 xen/include/asm-riscv/monitor.h
 create mode 100644 xen/include/asm-riscv/nospec.h
 create mode 100644 xen/include/asm-riscv/numa.h
 create mode 100644 xen/include/asm-riscv/p2m.h
 create mode 100644 xen/include/asm-riscv/page-bits.h
 create mode 100644 xen/include/asm-riscv/page.h
 create mode 100644 xen/include/asm-riscv/paging.h
 create mode 100644 xen/include/asm-riscv/pci.h
 create mode 100644 xen/include/asm-riscv/percpu.h
 create mode 100644 xen/include/asm-riscv/processor.h
 create mode 100644 xen/include/asm-riscv/random.h
 create mode 100644 xen/include/asm-riscv/regs.h
 create mode 100644 xen/include/asm-riscv/setup.h
 create mode 100644 xen/include/asm-riscv/smp.h
 create mode 100644 xen/include/asm-riscv/softirq.h
 create mode 100644 xen/include/asm-riscv/spinlock.h
 create mode 100644 xen/include/asm-riscv/string.h
 create mode 100644 xen/include/asm-riscv/sysregs.h
 create mode 100644 xen/include/asm-riscv/system.h
 create mode 100644 xen/include/asm-riscv/time.h
 create mode 100644 xen/include/asm-riscv/trace.h
 create mode 100644 xen/include/asm-riscv/types.h
 create mode 100644 xen/include/asm-riscv/vm_event.h
 create mode 100644 xen/include/asm-riscv/xenoprof.h
 create mode 100644 xen/include/public/arch-riscv.h
 create mode 100644 xen/include/public/arch-riscv/hvm/save.h

-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Thu Feb 25 15:26:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 15:26:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89817.169588 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFIWy-0006jW-8I; Thu, 25 Feb 2021 15:26:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89817.169588; Thu, 25 Feb 2021 15:26:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFIWy-0006jN-3z; Thu, 25 Feb 2021 15:26:20 +0000
Received: by outflank-mailman (input) for mailman id 89817;
 Thu, 25 Feb 2021 15:26:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yJgK=H3=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lFIWw-0006Ig-Ub
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 15:26:19 +0000
Received: from mail-ot1-x32a.google.com (unknown [2607:f8b0:4864:20::32a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ce56bfa3-4db9-4d50-8446-cb3af0b8e5c4;
 Thu, 25 Feb 2021 15:25:54 +0000 (UTC)
Received: by mail-ot1-x32a.google.com with SMTP id f33so5984981otf.11
 for <xen-devel@lists.xenproject.org>; Thu, 25 Feb 2021 07:25:54 -0800 (PST)
Received: from localhost.localdomain (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id l4sm1047292ooq.33.2021.02.25.07.25.50
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 25 Feb 2021 07:25:50 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ce56bfa3-4db9-4d50-8446-cb3af0b8e5c4
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=uJuNTKYme2uVWadhoux7JhdlBqMADzqKGH7vfz3K3Ro=;
        b=veJpzyfuvpr7shtOstvXumbDWoVmyYE1i12HH3jjz7VbGr5FPrQFoh0l5cS4bqrluw
         xwLYDmW+5+sjonxVOmwf6hFSDU3ZL3xt6Sq8AHF84okIP/6y/4LiYo47zUezMVx4oBNV
         DSjjQC72nBphGHuvla4R0MpvKj8K6yRhw07fsNVUXXnnfJ4YDAYybhScdhm9rHoyvcB5
         sAX4Je85pp/PtSP5rdCIrAc2f4neS7YVDuXdUPAd9oBT6KD4o40M4vloendWl9beqVVw
         qmtvC1CMpDkPErIKBcAdrBe3CgS+swZyWO9AsQeUDiirQdDm5OMysrlI5sdBtlSQAMaN
         sLfw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=uJuNTKYme2uVWadhoux7JhdlBqMADzqKGH7vfz3K3Ro=;
        b=P0rU81R3NyfDgwJ/m/L5h09EjnCjBPeRp0d7TidlwbuIFSd/4BSbcuw8SI2vx3OzBg
         fmJjKtmx5fRlj1ni2ynABDFbGFoglFFVlG3/xqTjK9eoQAHQmX191Fr1BP4c7O0knFGw
         cgc7BkXPtYcZKCNsXdkTYsE7EjqVPsHcWRDksJg2GzJ7VwgnfIyldUs/Myrwdm98/uWr
         1mvMb20XNU5rlm7f6AK3Ch6tyOA04ptYOETlG/v7aWmwwrXsngiFbFFnARWBn+QxLsWU
         RpJE1s3CRldBtsFXtgE/yzEBSRf4Z7j5D9mAj8ji9nRsuitLyq5fhWO8R0+IhfCaps/W
         Uygg==
X-Gm-Message-State: AOAM533P3eirY8NrKDlgc4HrCw1LGg/Y12jUEWjQnf07/JokrELmtS3v
	SiCyzLCREq1jAZiL9qtHr2O3xJwQ9EeRWq8Y
X-Google-Smtp-Source: ABdhPJxcK1vRpi9EZ1E4kj3z/dvn9vwFQAytiaNQ0O64eWwQuuEe3ySE5n8OCKZGSWRS+Ba8jdjosQ==
X-Received: by 2002:a05:6830:22fa:: with SMTP id t26mr2654764otc.143.1614266751447;
        Thu, 25 Feb 2021 07:25:51 -0800 (PST)
From: Connor Davis <connojdavis@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
	Connor Davis <connojdavis@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Tamas K Lengyel <tamas@tklengyel.com>,
	Alexandru Isaila <aisaila@bitdefender.com>,
	Petre Pircalabu <ppircalabu@bitdefender.com>
Subject: [PATCH for-next 5/6] xen: Add files needed for minimal riscv build
Date: Thu, 25 Feb 2021 08:24:04 -0700
Message-Id: <7652ce3486c026a3a9f7d850170ea81ba8a18bdb.1614265718.git.connojdavis@gmail.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <cover.1614265718.git.connojdavis@gmail.com>
References: <cover.1614265718.git.connojdavis@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the minimum code required to get xen to build with
XEN_TARGET_ARCH=riscv64. It is minimal in the sense that every file and
function added is required for a successful build, given the .config
generated from riscv64_defconfig. The function implementations are just
stubs; actual implementations will need to be added later.

Signed-off-by: Connor Davis <connojdavis@gmail.com>
---
 config/riscv64.mk                        |   7 +
 xen/Makefile                             |   8 +-
 xen/arch/riscv/Kconfig                   |  54 ++++
 xen/arch/riscv/Kconfig.debug             |   0
 xen/arch/riscv/Makefile                  |  57 ++++
 xen/arch/riscv/README.source             |  19 ++
 xen/arch/riscv/Rules.mk                  |  13 +
 xen/arch/riscv/arch.mk                   |   7 +
 xen/arch/riscv/configs/riscv64_defconfig |  12 +
 xen/arch/riscv/delay.c                   |  16 +
 xen/arch/riscv/domain.c                  | 144 +++++++++
 xen/arch/riscv/domctl.c                  |  36 +++
 xen/arch/riscv/guestcopy.c               |  57 ++++
 xen/arch/riscv/head.S                    |   6 +
 xen/arch/riscv/irq.c                     |  78 +++++
 xen/arch/riscv/lib/Makefile              |   1 +
 xen/arch/riscv/lib/find_next_bit.c       | 284 +++++++++++++++++
 xen/arch/riscv/mm.c                      |  93 ++++++
 xen/arch/riscv/p2m.c                     | 150 +++++++++
 xen/arch/riscv/percpu.c                  |  17 +
 xen/arch/riscv/platforms/Kconfig         |  31 ++
 xen/arch/riscv/riscv64/asm-offsets.c     |  31 ++
 xen/arch/riscv/setup.c                   |  27 ++
 xen/arch/riscv/shutdown.c                |  28 ++
 xen/arch/riscv/smp.c                     |  35 +++
 xen/arch/riscv/smpboot.c                 |  34 ++
 xen/arch/riscv/sysctl.c                  |  33 ++
 xen/arch/riscv/time.c                    |  35 +++
 xen/arch/riscv/traps.c                   |  35 +++
 xen/arch/riscv/vm_event.c                |  39 +++
 xen/arch/riscv/xen.lds.S                 | 113 +++++++
 xen/drivers/char/serial.c                |   1 +
 xen/include/asm-riscv/altp2m.h           |  39 +++
 xen/include/asm-riscv/asm.h              |  77 +++++
 xen/include/asm-riscv/asm_defns.h        |  24 ++
 xen/include/asm-riscv/atomic.h           | 204 ++++++++++++
 xen/include/asm-riscv/bitops.h           | 331 ++++++++++++++++++++
 xen/include/asm-riscv/bug.h              |  54 ++++
 xen/include/asm-riscv/byteorder.h        |  16 +
 xen/include/asm-riscv/cache.h            |  24 ++
 xen/include/asm-riscv/cmpxchg.h          | 382 +++++++++++++++++++++++
 xen/include/asm-riscv/compiler_types.h   |  32 ++
 xen/include/asm-riscv/config.h           | 110 +++++++
 xen/include/asm-riscv/cpufeature.h       |  17 +
 xen/include/asm-riscv/csr.h              | 219 +++++++++++++
 xen/include/asm-riscv/current.h          |  47 +++
 xen/include/asm-riscv/debugger.h         |  15 +
 xen/include/asm-riscv/delay.h            |  15 +
 xen/include/asm-riscv/desc.h             |  12 +
 xen/include/asm-riscv/device.h           |  15 +
 xen/include/asm-riscv/div64.h            |  23 ++
 xen/include/asm-riscv/domain.h           |  50 +++
 xen/include/asm-riscv/event.h            |  42 +++
 xen/include/asm-riscv/fence.h            |  12 +
 xen/include/asm-riscv/flushtlb.h         |  34 ++
 xen/include/asm-riscv/grant_table.h      |  12 +
 xen/include/asm-riscv/guest_access.h     |  41 +++
 xen/include/asm-riscv/guest_atomics.h    |  60 ++++
 xen/include/asm-riscv/hardirq.h          |  27 ++
 xen/include/asm-riscv/hypercall.h        |  12 +
 xen/include/asm-riscv/init.h             |  42 +++
 xen/include/asm-riscv/io.h               | 283 +++++++++++++++++
 xen/include/asm-riscv/iocap.h            |  13 +
 xen/include/asm-riscv/iommu.h            |  46 +++
 xen/include/asm-riscv/irq.h              |  58 ++++
 xen/include/asm-riscv/mem_access.h       |   4 +
 xen/include/asm-riscv/mm.h               | 246 +++++++++++++++
 xen/include/asm-riscv/monitor.h          |  65 ++++
 xen/include/asm-riscv/nospec.h           |  25 ++
 xen/include/asm-riscv/numa.h             |  41 +++
 xen/include/asm-riscv/p2m.h              | 218 +++++++++++++
 xen/include/asm-riscv/page-bits.h        |  11 +
 xen/include/asm-riscv/page.h             |  73 +++++
 xen/include/asm-riscv/paging.h           |  15 +
 xen/include/asm-riscv/pci.h              |  31 ++
 xen/include/asm-riscv/percpu.h           |  33 ++
 xen/include/asm-riscv/processor.h        |  59 ++++
 xen/include/asm-riscv/random.h           |   9 +
 xen/include/asm-riscv/regs.h             |  23 ++
 xen/include/asm-riscv/setup.h            |  14 +
 xen/include/asm-riscv/smp.h              |  46 +++
 xen/include/asm-riscv/softirq.h          |  16 +
 xen/include/asm-riscv/spinlock.h         |  12 +
 xen/include/asm-riscv/string.h           |  28 ++
 xen/include/asm-riscv/sysregs.h          |  16 +
 xen/include/asm-riscv/system.h           |  99 ++++++
 xen/include/asm-riscv/time.h             |  31 ++
 xen/include/asm-riscv/trace.h            |  12 +
 xen/include/asm-riscv/types.h            |  60 ++++
 xen/include/asm-riscv/vm_event.h         |  55 ++++
 xen/include/asm-riscv/xenoprof.h         |  12 +
 xen/include/public/arch-riscv.h          | 183 +++++++++++
 xen/include/public/arch-riscv/hvm/save.h |  39 +++
 xen/include/public/hvm/save.h            |   2 +
 xen/include/public/pmu.h                 |   2 +
 xen/include/public/xen.h                 |   2 +
 xen/include/xen/domain.h                 |   1 +
 97 files changed, 5370 insertions(+), 2 deletions(-)
 create mode 100644 config/riscv64.mk
 create mode 100644 xen/arch/riscv/Kconfig
 create mode 100644 xen/arch/riscv/Kconfig.debug
 create mode 100644 xen/arch/riscv/Makefile
 create mode 100644 xen/arch/riscv/README.source
 create mode 100644 xen/arch/riscv/Rules.mk
 create mode 100644 xen/arch/riscv/arch.mk
 create mode 100644 xen/arch/riscv/configs/riscv64_defconfig
 create mode 100644 xen/arch/riscv/delay.c
 create mode 100644 xen/arch/riscv/domain.c
 create mode 100644 xen/arch/riscv/domctl.c
 create mode 100644 xen/arch/riscv/guestcopy.c
 create mode 100644 xen/arch/riscv/head.S
 create mode 100644 xen/arch/riscv/irq.c
 create mode 100644 xen/arch/riscv/lib/Makefile
 create mode 100644 xen/arch/riscv/lib/find_next_bit.c
 create mode 100644 xen/arch/riscv/mm.c
 create mode 100644 xen/arch/riscv/p2m.c
 create mode 100644 xen/arch/riscv/percpu.c
 create mode 100644 xen/arch/riscv/platforms/Kconfig
 create mode 100644 xen/arch/riscv/riscv64/asm-offsets.c
 create mode 100644 xen/arch/riscv/setup.c
 create mode 100644 xen/arch/riscv/shutdown.c
 create mode 100644 xen/arch/riscv/smp.c
 create mode 100644 xen/arch/riscv/smpboot.c
 create mode 100644 xen/arch/riscv/sysctl.c
 create mode 100644 xen/arch/riscv/time.c
 create mode 100644 xen/arch/riscv/traps.c
 create mode 100644 xen/arch/riscv/vm_event.c
 create mode 100644 xen/arch/riscv/xen.lds.S
 create mode 100644 xen/include/asm-riscv/altp2m.h
 create mode 100644 xen/include/asm-riscv/asm.h
 create mode 100644 xen/include/asm-riscv/asm_defns.h
 create mode 100644 xen/include/asm-riscv/atomic.h
 create mode 100644 xen/include/asm-riscv/bitops.h
 create mode 100644 xen/include/asm-riscv/bug.h
 create mode 100644 xen/include/asm-riscv/byteorder.h
 create mode 100644 xen/include/asm-riscv/cache.h
 create mode 100644 xen/include/asm-riscv/cmpxchg.h
 create mode 100644 xen/include/asm-riscv/compiler_types.h
 create mode 100644 xen/include/asm-riscv/config.h
 create mode 100644 xen/include/asm-riscv/cpufeature.h
 create mode 100644 xen/include/asm-riscv/csr.h
 create mode 100644 xen/include/asm-riscv/current.h
 create mode 100644 xen/include/asm-riscv/debugger.h
 create mode 100644 xen/include/asm-riscv/delay.h
 create mode 100644 xen/include/asm-riscv/desc.h
 create mode 100644 xen/include/asm-riscv/device.h
 create mode 100644 xen/include/asm-riscv/div64.h
 create mode 100644 xen/include/asm-riscv/domain.h
 create mode 100644 xen/include/asm-riscv/event.h
 create mode 100644 xen/include/asm-riscv/fence.h
 create mode 100644 xen/include/asm-riscv/flushtlb.h
 create mode 100644 xen/include/asm-riscv/grant_table.h
 create mode 100644 xen/include/asm-riscv/guest_access.h
 create mode 100644 xen/include/asm-riscv/guest_atomics.h
 create mode 100644 xen/include/asm-riscv/hardirq.h
 create mode 100644 xen/include/asm-riscv/hypercall.h
 create mode 100644 xen/include/asm-riscv/init.h
 create mode 100644 xen/include/asm-riscv/io.h
 create mode 100644 xen/include/asm-riscv/iocap.h
 create mode 100644 xen/include/asm-riscv/iommu.h
 create mode 100644 xen/include/asm-riscv/irq.h
 create mode 100644 xen/include/asm-riscv/mem_access.h
 create mode 100644 xen/include/asm-riscv/mm.h
 create mode 100644 xen/include/asm-riscv/monitor.h
 create mode 100644 xen/include/asm-riscv/nospec.h
 create mode 100644 xen/include/asm-riscv/numa.h
 create mode 100644 xen/include/asm-riscv/p2m.h
 create mode 100644 xen/include/asm-riscv/page-bits.h
 create mode 100644 xen/include/asm-riscv/page.h
 create mode 100644 xen/include/asm-riscv/paging.h
 create mode 100644 xen/include/asm-riscv/pci.h
 create mode 100644 xen/include/asm-riscv/percpu.h
 create mode 100644 xen/include/asm-riscv/processor.h
 create mode 100644 xen/include/asm-riscv/random.h
 create mode 100644 xen/include/asm-riscv/regs.h
 create mode 100644 xen/include/asm-riscv/setup.h
 create mode 100644 xen/include/asm-riscv/smp.h
 create mode 100644 xen/include/asm-riscv/softirq.h
 create mode 100644 xen/include/asm-riscv/spinlock.h
 create mode 100644 xen/include/asm-riscv/string.h
 create mode 100644 xen/include/asm-riscv/sysregs.h
 create mode 100644 xen/include/asm-riscv/system.h
 create mode 100644 xen/include/asm-riscv/time.h
 create mode 100644 xen/include/asm-riscv/trace.h
 create mode 100644 xen/include/asm-riscv/types.h
 create mode 100644 xen/include/asm-riscv/vm_event.h
 create mode 100644 xen/include/asm-riscv/xenoprof.h
 create mode 100644 xen/include/public/arch-riscv.h
 create mode 100644 xen/include/public/arch-riscv/hvm/save.h

diff --git a/config/riscv64.mk b/config/riscv64.mk
new file mode 100644
index 0000000000..0ec97838f9
--- /dev/null
+++ b/config/riscv64.mk
@@ -0,0 +1,7 @@
+CONFIG_RISCV := y
+CONFIG_RISCV_64 := y
+CONFIG_RISCV_$(XEN_OS) := y
+
+CONFIG_XEN_INSTALL_SUFFIX :=
+
+CFLAGS +=
diff --git a/xen/Makefile b/xen/Makefile
index 544cc0995d..2381486c1f 100644
--- a/xen/Makefile
+++ b/xen/Makefile
@@ -26,7 +26,9 @@ MAKEFLAGS += -rR
 EFI_MOUNTPOINT ?= $(BOOT_DIR)/efi
 
 ARCH=$(XEN_TARGET_ARCH)
-SRCARCH=$(shell echo $(ARCH) | sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g')
+SRCARCH=$(shell echo $(ARCH) | \
+	  sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g' \
+	      -e s'/riscv.*/riscv/g')
 
 # Don't break if the build process wasn't called from the top level
 # we need XEN_TARGET_ARCH to generate the proper config
@@ -35,7 +37,8 @@ include $(XEN_ROOT)/Config.mk
 # Set ARCH/SUBARCH appropriately.
 export TARGET_SUBARCH  := $(XEN_TARGET_ARCH)
 export TARGET_ARCH     := $(shell echo $(XEN_TARGET_ARCH) | \
-                            sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g')
+                            sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g' \
+			        -e s'/riscv.*/riscv/g')
 
 # Allow someone to change their config file
 export KCONFIG_CONFIG ?= .config
@@ -335,6 +338,7 @@ _clean: delete-unfresh-files
 	$(MAKE) $(clean) xsm
 	$(MAKE) $(clean) crypto
 	$(MAKE) $(clean) arch/arm
+	$(MAKE) $(clean) arch/riscv
 	$(MAKE) $(clean) arch/x86
 	$(MAKE) $(clean) test
 	$(MAKE) -f $(BASEDIR)/tools/kconfig/Makefile.kconfig ARCH=$(ARCH) SRCARCH=$(SRCARCH) clean
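[The ARCH-to-SRCARCH mapping added above is pure sed text rewriting, so it can be sanity-checked outside the build. A minimal shell sketch — GNU sed assumed, and the helper name is made up:]

```shell
#!/bin/sh
# Mirror the sed expressions added to xen/Makefile: collapse an
# architecture name (XEN_TARGET_ARCH) to its source directory name.
map_srcarch() {
    echo "$1" | sed -e 's/x86.*/x86/' -e 's/arm\(32\|64\)/arm/g' \
                    -e 's/riscv.*/riscv/g'
}

map_srcarch x86_64   # prints: x86
map_srcarch arm64    # prints: arm
map_srcarch riscv64  # prints: riscv
map_srcarch riscv32  # prints: riscv
```

[Note that `\|` alternation is a GNU sed extension, which matches what the existing x86/arm expressions already rely on.]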
diff --git a/xen/arch/riscv/Kconfig b/xen/arch/riscv/Kconfig
new file mode 100644
index 0000000000..1b44564053
--- /dev/null
+++ b/xen/arch/riscv/Kconfig
@@ -0,0 +1,54 @@
+config 64BIT
+	bool
+
+config RISCV_64
+	bool
+	depends on 64BIT
+
+config RISCV
+	def_bool y
+
+config ARCH_DEFCONFIG
+	string
+	default "arch/riscv/configs/riscv64_defconfig" if RISCV_64
+
+menu "Architecture Features"
+
+source "arch/Kconfig"
+
+endmenu
+
+menu "ISA Selection"
+
+choice
+	prompt "Base ISA"
+	default RISCV_ISA_RV64IMA
+	help
+	  This selects the base ISA extensions that Xen will target.
+
+config RISCV_ISA_RV64IMA
+	bool "RV64IMA"
+	select 64BIT
+	select RISCV_64
+	help
+	  Use the RV64I base ISA, plus the "M" and "A" extensions
+	  for integer multiply/divide and atomic instructions, respectively.
+
+endchoice
+
+config RISCV_ISA_C
+	bool "Compressed extension"
+	help
+	  Add "C" to the ISA subsets that the toolchain is allowed
+	  to emit when building Xen, which results in compressed
+	  instructions in the Xen binary.
+
+	  If unsure, say N.
+
+endmenu
+
+source "arch/riscv/platforms/Kconfig"
+
+source "common/Kconfig"
+
+source "drivers/Kconfig"
diff --git a/xen/arch/riscv/Kconfig.debug b/xen/arch/riscv/Kconfig.debug
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
new file mode 100644
index 0000000000..bf67c17d1b
--- /dev/null
+++ b/xen/arch/riscv/Makefile
@@ -0,0 +1,57 @@
+obj-y += lib/
+
+obj-y += domain.o
+obj-y += domctl.o
+obj-y += delay.o
+obj-y += guestcopy.o
+obj-y += irq.o
+obj-y += mm.o
+obj-y += p2m.o
+obj-y += percpu.o
+obj-y += setup.o
+obj-y += shutdown.o
+obj-y += smp.o
+obj-y += smpboot.o
+obj-y += sysctl.o
+obj-y += time.o
+obj-y += traps.o
+obj-y += vm_event.o
+
+ALL_OBJS := head.o $(ALL_OBJS)
+
+$(TARGET): $(TARGET)-syms
+	$(OBJCOPY) -O binary -S $< $@
+
+prelink.o: $(ALL_OBJS) $(ALL_LIBS) FORCE
+	$(call if_changed,ld)
+
+targets += prelink.o
+
+$(TARGET)-syms: prelink.o xen.lds
+	$(LD) $(XEN_LDFLAGS) -T xen.lds -N prelink.o \
+	    $(BASEDIR)/common/symbols-dummy.o -o $(@D)/.$(@F).0
+	$(NM) -pa --format=sysv $(@D)/.$(@F).0 \
+		| $(BASEDIR)/tools/symbols $(all_symbols) --sysv --sort >$(@D)/.$(@F).0.S
+	$(MAKE) -f $(BASEDIR)/Rules.mk $(@D)/.$(@F).0.o
+	$(LD) $(XEN_LDFLAGS) -T xen.lds -N prelink.o \
+	    $(@D)/.$(@F).0.o -o $(@D)/.$(@F).1
+	$(NM) -pa --format=sysv $(@D)/.$(@F).1 \
+		| $(BASEDIR)/tools/symbols $(all_symbols) --sysv --sort >$(@D)/.$(@F).1.S
+	$(MAKE) -f $(BASEDIR)/Rules.mk $(@D)/.$(@F).1.o
+	$(LD) $(XEN_LDFLAGS) -T xen.lds -N prelink.o $(build_id_linker) \
+	    $(@D)/.$(@F).1.o -o $@
+	$(NM) -pa --format=sysv $(@D)/$(@F) \
+		| $(BASEDIR)/tools/symbols --all-symbols --xensyms --sysv --sort \
+		>$(@D)/$(@F).map
+	rm -f $(@D)/.$(@F).[0-9]*
+
+asm-offsets.s: $(TARGET_SUBARCH)/asm-offsets.c
+	$(CC) $(filter-out -flto,$(c_flags)) -S -o $@ $<
+
+xen.lds: xen.lds.S
+	$(CPP) -P $(a_flags) -MQ $@ -o $@ $<
+
+.PHONY: clean
+clean::
+	rm -f asm-offsets.s xen.lds
+	rm -f $(BASEDIR)/.xen-syms.[0-9]*
diff --git a/xen/arch/riscv/README.source b/xen/arch/riscv/README.source
new file mode 100644
index 0000000000..a04e06c5f7
--- /dev/null
+++ b/xen/arch/riscv/README.source
@@ -0,0 +1,19 @@
+External RISC-V Sources
+=======================
+This documents the files copied from other projects for use in the
+RISC-V code of Xen.
+
+Linux (commit f40ddce88593, Feb. 14 2021)
+=========================================
+The following files were copied from arch/riscv/include/asm to
+xen/include/asm-riscv:
+
+asm.h -> asm.h
+atomic.h -> atomic.h
+bitops.h -> bitops.h
+csr.h -> csr.h
+{mmio,io}.h -> io.h
+fence.h -> fence.h
+cmpxchg.h -> cmpxchg.h
+compiler_types.h -> compiler_types.h
+timex.h -> time.h
diff --git a/xen/arch/riscv/Rules.mk b/xen/arch/riscv/Rules.mk
new file mode 100644
index 0000000000..3c368fa05d
--- /dev/null
+++ b/xen/arch/riscv/Rules.mk
@@ -0,0 +1,13 @@
+########################################
+# RISC-V-specific definitions
+
+ifeq ($(CONFIG_RISCV_64),y)
+    c_flags += -mabi=lp64
+    a_flags += -mabi=lp64
+endif
+
+riscv-march-$(CONFIG_RISCV_ISA_RV64IMA) := rv64ima
+riscv-march-$(CONFIG_RISCV_ISA_C)       := $(riscv-march-y)c
+
+c_flags += -march=$(riscv-march-y) -mstrict-align -mcmodel=medany
+a_flags += -march=$(riscv-march-y) -mstrict-align -mcmodel=medany
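[With both RISCV_ISA_RV64IMA and RISCV_ISA_C enabled, riscv-march-y composes to rv64imac. A shell sketch of that accumulation, with the config values hardcoded purely for illustration:]

```shell
#!/bin/sh
# Emulate the riscv-march-y accumulation from Rules.mk: the base ISA
# string first, then a "c" suffix only if the compressed extension is on.
CONFIG_RISCV_ISA_RV64IMA=y
CONFIG_RISCV_ISA_C=y

march=
[ "$CONFIG_RISCV_ISA_RV64IMA" = y ] && march=rv64ima
[ "$CONFIG_RISCV_ISA_C" = y ]       && march="${march}c"

echo "-march=$march -mstrict-align -mcmodel=medany"
# prints: -march=rv64imac -mstrict-align -mcmodel=medany
```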
diff --git a/xen/arch/riscv/arch.mk b/xen/arch/riscv/arch.mk
new file mode 100644
index 0000000000..d5d68c9150
--- /dev/null
+++ b/xen/arch/riscv/arch.mk
@@ -0,0 +1,7 @@
+########################################
+# RISC-V-specific definitions
+
+CFLAGS += -I$(BASEDIR)/include
+
+$(call cc-options-add,CFLAGS,CC,$(EMBEDDED_EXTRA_CFLAGS))
+$(call cc-option-add,CFLAGS,CC,-Wnested-externs)
diff --git a/xen/arch/riscv/configs/riscv64_defconfig b/xen/arch/riscv/configs/riscv64_defconfig
new file mode 100644
index 0000000000..664a5d2378
--- /dev/null
+++ b/xen/arch/riscv/configs/riscv64_defconfig
@@ -0,0 +1,12 @@
+# CONFIG_SCHED_CREDIT is not set
+# CONFIG_SCHED_RTDS is not set
+# CONFIG_SCHED_NULL is not set
+# CONFIG_SCHED_ARINC653 is not set
+# CONFIG_TRACEBUFFER is not set
+# CONFIG_DEBUG is not set
+# CONFIG_DEBUG_INFO is not set
+# CONFIG_HYPFS is not set
+# CONFIG_GRANT_TABLE is not set
+# CONFIG_SPECULATIVE_HARDEN_ARRAY is not set
+
+CONFIG_EXPERT=y
diff --git a/xen/arch/riscv/delay.c b/xen/arch/riscv/delay.c
new file mode 100644
index 0000000000..403b139b96
--- /dev/null
+++ b/xen/arch/riscv/delay.c
@@ -0,0 +1,16 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+void udelay(unsigned long usecs)
+{
+}
+EXPORT_SYMBOL(udelay);
diff --git a/xen/arch/riscv/domain.c b/xen/arch/riscv/domain.c
new file mode 100644
index 0000000000..a9fdb1f94f
--- /dev/null
+++ b/xen/arch/riscv/domain.c
@@ -0,0 +1,144 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <xen/errno.h>
+#include <xen/sched.h>
+#include <xen/domain.h>
+#include <public/domctl.h>
+#include <public/xen.h>
+
+DEFINE_PER_CPU(struct vcpu *, curr_vcpu);
+
+void context_switch(struct vcpu *prev, struct vcpu *next)
+{
+}
+
+void continue_running(struct vcpu *same)
+{
+}
+
+void sync_local_execstate(void)
+{
+}
+
+void sync_vcpu_execstate(struct vcpu *v)
+{
+}
+
+unsigned long hypercall_create_continuation(
+    unsigned int op, const char *format, ...)
+{
+
+    return 0;
+}
+
+struct domain *alloc_domain_struct(void)
+{
+    return NULL;
+}
+
+void free_domain_struct(struct domain *d)
+{
+}
+
+void dump_pageframe_info(struct domain *d)
+{
+}
+
+int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
+{
+    return -EOPNOTSUPP;
+}
+
+
+int arch_domain_create(struct domain *d,
+                       struct xen_domctl_createdomain *config)
+{
+    return -EOPNOTSUPP;
+}
+
+void arch_domain_destroy(struct domain *d)
+{
+}
+
+void arch_domain_shutdown(struct domain *d)
+{
+}
+
+void arch_domain_pause(struct domain *d)
+{
+}
+
+void arch_domain_unpause(struct domain *d)
+{
+}
+
+int arch_domain_soft_reset(struct domain *d)
+{
+    return -EOPNOTSUPP;
+}
+
+void arch_domain_creation_finished(struct domain *d)
+{
+}
+
+int domain_relinquish_resources(struct domain *d)
+{
+    return -EOPNOTSUPP;
+}
+
+void arch_dump_domain_info(struct domain *d)
+{
+}
+
+long arch_do_vcpu_op(int cmd, struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
+{
+    return -EOPNOTSUPP;
+}
+
+void arch_dump_vcpu_info(struct vcpu *v)
+{
+}
+
+int arch_set_info_guest(
+    struct vcpu *v, vcpu_guest_context_u c)
+{
+    return -EOPNOTSUPP;
+}
+
+struct vcpu *alloc_vcpu_struct(const struct domain *d)
+{
+    return NULL;
+}
+
+void free_vcpu_struct(struct vcpu *v)
+{
+}
+
+int arch_initialise_vcpu(struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
+{
+    return -EOPNOTSUPP;
+}
+
+int arch_vcpu_reset(struct vcpu *v)
+{
+    return -EOPNOTSUPP;
+}
+
+int arch_vcpu_create(struct vcpu *v)
+{
+    return -EOPNOTSUPP;
+}
+
+void arch_vcpu_destroy(struct vcpu *v)
+{
+}
diff --git a/xen/arch/riscv/domctl.c b/xen/arch/riscv/domctl.c
new file mode 100644
index 0000000000..f81a13a9c4
--- /dev/null
+++ b/xen/arch/riscv/domctl.c
@@ -0,0 +1,36 @@
+/******************************************************************************
+ * Arch-specific domctl.c
+ *
+ * Copyright (c) 2012, Citrix Systems
+ */
+
+#include <xen/errno.h>
+#include <xen/guest_access.h>
+#include <xen/hypercall.h>
+#include <xen/sched.h>
+#include <public/domctl.h>
+
+void arch_get_domain_info(const struct domain *d,
+                          struct xen_domctl_getdomaininfo *info)
+{
+}
+
+long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
+                    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
+{
+    return -EOPNOTSUPP;
+}
+
+void arch_get_info_guest(struct vcpu *v, vcpu_guest_context_u c)
+{
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/riscv/guestcopy.c b/xen/arch/riscv/guestcopy.c
new file mode 100644
index 0000000000..d8fcf98a0e
--- /dev/null
+++ b/xen/arch/riscv/guestcopy.c
@@ -0,0 +1,58 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <xen/errno.h>
+#include <asm/guest_access.h>
+
+unsigned long raw_copy_to_guest(void *to, const void *from, unsigned len)
+{
+    return -EOPNOTSUPP;
+}
+
+unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
+                                             unsigned len)
+{
+    return -EOPNOTSUPP;
+}
+
+unsigned long raw_clear_guest(void *to, unsigned len)
+{
+    return -EOPNOTSUPP;
+}
+
+unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned len)
+{
+    return -EOPNOTSUPP;
+}
+
+unsigned long copy_to_guest_phys_flush_dcache(struct domain *d,
+                                              paddr_t gpa,
+                                              void *buf,
+                                              unsigned int len)
+{
+    return -EOPNOTSUPP;
+}
+
+int access_guest_memory_by_ipa(struct domain *d, paddr_t gpa, void *buf,
+                               uint32_t size, bool is_write)
+{
+    return -EOPNOTSUPP;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/riscv/head.S b/xen/arch/riscv/head.S
new file mode 100644
index 0000000000..0dbc27ba75
--- /dev/null
+++ b/xen/arch/riscv/head.S
@@ -0,0 +1,6 @@
+#include <asm/config.h>
+
+        .text
+
+ENTRY(start)
+        j  start
diff --git a/xen/arch/riscv/irq.c b/xen/arch/riscv/irq.c
new file mode 100644
index 0000000000..65137e5f11
--- /dev/null
+++ b/xen/arch/riscv/irq.c
@@ -0,0 +1,78 @@
+/*
+ * RISC-V Interrupt support
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <xen/lib.h>
+#include <xen/errno.h>
+#include <xen/sched.h>
+
+const unsigned int nr_irqs = NR_IRQS;
+
+static void ack_none(struct irq_desc *irq)
+{
+}
+
+static void end_none(struct irq_desc *irq)
+{
+}
+
+hw_irq_controller no_irq_type = {
+    .typename = "none",
+    .startup = irq_startup_none,
+    .shutdown = irq_shutdown_none,
+    .enable = irq_enable_none,
+    .disable = irq_disable_none,
+    .ack = ack_none,
+    .end = end_none
+};
+
+int arch_init_one_irq_desc(struct irq_desc *desc)
+{
+    return -EOPNOTSUPP;
+}
+
+struct pirq *alloc_pirq_struct(struct domain *d)
+{
+    return NULL;
+}
+
+irq_desc_t *__irq_to_desc(int irq)
+{
+    return NULL;
+}
+
+int pirq_guest_bind(struct vcpu *v, struct pirq *pirq, int will_share)
+{
+    return -EOPNOTSUPP;
+}
+
+void pirq_guest_unbind(struct domain *d, struct pirq *pirq)
+{
+}
+
+void pirq_set_affinity(struct domain *d, int pirq, const cpumask_t *mask)
+{
+}
+
+void smp_send_state_dump(unsigned int cpu)
+{
+}
+
+void arch_move_irqs(struct vcpu *v)
+{
+}
+
+int setup_irq(unsigned int irq, unsigned int irqflags, struct irqaction *new)
+{
+    return -EOPNOTSUPP;
+}
diff --git a/xen/arch/riscv/lib/Makefile b/xen/arch/riscv/lib/Makefile
new file mode 100644
index 0000000000..6fae6a1f10
--- /dev/null
+++ b/xen/arch/riscv/lib/Makefile
@@ -0,0 +1 @@
+obj-y += find_next_bit.o
diff --git a/xen/arch/riscv/lib/find_next_bit.c b/xen/arch/riscv/lib/find_next_bit.c
new file mode 100644
index 0000000000..adaa25f32b
--- /dev/null
+++ b/xen/arch/riscv/lib/find_next_bit.c
@@ -0,0 +1,284 @@
+/* find_next_bit.c: fallback find next bit implementation
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+#include <xen/bitops.h>
+#include <asm/bitops.h>
+#include <asm/types.h>
+#include <asm/byteorder.h>
+
+#define BITOP_WORD(nr)		((nr) / BITS_PER_LONG)
+
+#ifndef find_next_bit
+/*
+ * Find the next set bit in a memory region.
+ */
+unsigned long find_next_bit(const unsigned long *addr, unsigned long size,
+			    unsigned long offset)
+{
+	const unsigned long *p = addr + BITOP_WORD(offset);
+	unsigned long result = offset & ~(BITS_PER_LONG-1);
+	unsigned long tmp;
+
+	if (offset >= size)
+		return size;
+	size -= result;
+	offset %= BITS_PER_LONG;
+	if (offset) {
+		tmp = *(p++);
+		tmp &= (~0UL << offset);
+		if (size < BITS_PER_LONG)
+			goto found_first;
+		if (tmp)
+			goto found_middle;
+		size -= BITS_PER_LONG;
+		result += BITS_PER_LONG;
+	}
+	while (size & ~(BITS_PER_LONG-1)) {
+		if ((tmp = *(p++)))
+			goto found_middle;
+		result += BITS_PER_LONG;
+		size -= BITS_PER_LONG;
+	}
+	if (!size)
+		return result;
+	tmp = *p;
+
+found_first:
+	tmp &= (~0UL >> (BITS_PER_LONG - size));
+	if (tmp == 0UL)		/* Are any bits set? */
+		return result + size;	/* Nope. */
+found_middle:
+	return result + ffs(tmp);
+}
+EXPORT_SYMBOL(find_next_bit);
+#endif
+
+#ifndef find_next_zero_bit
+/*
+ * This implementation of find_{first,next}_zero_bit was stolen from
+ * Linus' asm-alpha/bitops.h.
+ */
+unsigned long find_next_zero_bit(const unsigned long *addr, unsigned long size,
+				 unsigned long offset)
+{
+	const unsigned long *p = addr + BITOP_WORD(offset);
+	unsigned long result = offset & ~(BITS_PER_LONG-1);
+	unsigned long tmp;
+
+	if (offset >= size)
+		return size;
+	size -= result;
+	offset %= BITS_PER_LONG;
+	if (offset) {
+		tmp = *(p++);
+		tmp |= ~0UL >> (BITS_PER_LONG - offset);
+		if (size < BITS_PER_LONG)
+			goto found_first;
+		if (~tmp)
+			goto found_middle;
+		size -= BITS_PER_LONG;
+		result += BITS_PER_LONG;
+	}
+	while (size & ~(BITS_PER_LONG-1)) {
+		if (~(tmp = *(p++)))
+			goto found_middle;
+		result += BITS_PER_LONG;
+		size -= BITS_PER_LONG;
+	}
+	if (!size)
+		return result;
+	tmp = *p;
+
+found_first:
+	tmp |= ~0UL << size;
+	if (tmp == ~0UL)	/* Are any bits zero? */
+		return result + size;	/* Nope. */
+found_middle:
+	return result + ffz(tmp);
+}
+EXPORT_SYMBOL(find_next_zero_bit);
+#endif
+
+#ifndef find_first_bit
+/*
+ * Find the first set bit in a memory region.
+ */
+unsigned long find_first_bit(const unsigned long *addr, unsigned long size)
+{
+	const unsigned long *p = addr;
+	unsigned long result = 0;
+	unsigned long tmp;
+
+	while (size & ~(BITS_PER_LONG-1)) {
+		if ((tmp = *(p++)))
+			goto found;
+		result += BITS_PER_LONG;
+		size -= BITS_PER_LONG;
+	}
+	if (!size)
+		return result;
+
+	tmp = (*p) & (~0UL >> (BITS_PER_LONG - size));
+	if (tmp == 0UL)		/* Are any bits set? */
+		return result + size;	/* Nope. */
+found:
+	return result + ffs(tmp);
+}
+EXPORT_SYMBOL(find_first_bit);
+#endif
+
+#ifndef find_first_zero_bit
+/*
+ * Find the first cleared bit in a memory region.
+ */
+unsigned long find_first_zero_bit(const unsigned long *addr, unsigned long size)
+{
+	const unsigned long *p = addr;
+	unsigned long result = 0;
+	unsigned long tmp;
+
+	while (size & ~(BITS_PER_LONG-1)) {
+		if (~(tmp = *(p++)))
+			goto found;
+		result += BITS_PER_LONG;
+		size -= BITS_PER_LONG;
+	}
+	if (!size)
+		return result;
+
+	tmp = (*p) | (~0UL << size);
+	if (tmp == ~0UL)	/* Are any bits zero? */
+		return result + size;	/* Nope. */
+found:
+	return result + ffz(tmp);
+}
+EXPORT_SYMBOL(find_first_zero_bit);
+#endif
+
+#ifdef __BIG_ENDIAN
+
+/* include/linux/byteorder does not support "unsigned long" type */
+static inline unsigned long ext2_swabp(const unsigned long * x)
+{
+#if BITS_PER_LONG == 64
+	return (unsigned long) __swab64p((u64 *) x);
+#elif BITS_PER_LONG == 32
+	return (unsigned long) __swab32p((u32 *) x);
+#else
+#error BITS_PER_LONG not defined
+#endif
+}
+
+/* include/linux/byteorder doesn't support "unsigned long" type */
+static inline unsigned long ext2_swab(const unsigned long y)
+{
+#if BITS_PER_LONG == 64
+	return (unsigned long) __swab64((u64) y);
+#elif BITS_PER_LONG == 32
+	return (unsigned long) __swab32((u32) y);
+#else
+#error BITS_PER_LONG not defined
+#endif
+}
+
+#ifndef find_next_zero_bit_le
+unsigned long find_next_zero_bit_le(const void *addr, unsigned
+		long size, unsigned long offset)
+{
+	const unsigned long *p = addr;
+	unsigned long result = offset & ~(BITS_PER_LONG - 1);
+	unsigned long tmp;
+
+	if (offset >= size)
+		return size;
+	p += BITOP_WORD(offset);
+	size -= result;
+	offset &= (BITS_PER_LONG - 1UL);
+	if (offset) {
+		tmp = ext2_swabp(p++);
+		tmp |= (~0UL >> (BITS_PER_LONG - offset));
+		if (size < BITS_PER_LONG)
+			goto found_first;
+		if (~tmp)
+			goto found_middle;
+		size -= BITS_PER_LONG;
+		result += BITS_PER_LONG;
+	}
+
+	while (size & ~(BITS_PER_LONG - 1)) {
+		if (~(tmp = *(p++)))
+			goto found_middle_swap;
+		result += BITS_PER_LONG;
+		size -= BITS_PER_LONG;
+	}
+	if (!size)
+		return result;
+	tmp = ext2_swabp(p);
+found_first:
+	tmp |= ~0UL << size;
+	if (tmp == ~0UL)	/* Are any bits zero? */
+		return result + size; /* Nope. Skip ffz */
+found_middle:
+	return result + ffz(tmp);
+
+found_middle_swap:
+	return result + ffz(ext2_swab(tmp));
+}
+EXPORT_SYMBOL(find_next_zero_bit_le);
+#endif
+
+#ifndef find_next_bit_le
+unsigned long find_next_bit_le(const void *addr, unsigned
+		long size, unsigned long offset)
+{
+	const unsigned long *p = addr;
+	unsigned long result = offset & ~(BITS_PER_LONG - 1);
+	unsigned long tmp;
+
+	if (offset >= size)
+		return size;
+	p += BITOP_WORD(offset);
+	size -= result;
+	offset &= (BITS_PER_LONG - 1UL);
+	if (offset) {
+		tmp = ext2_swabp(p++);
+		tmp &= (~0UL << offset);
+		if (size < BITS_PER_LONG)
+			goto found_first;
+		if (tmp)
+			goto found_middle;
+		size -= BITS_PER_LONG;
+		result += BITS_PER_LONG;
+	}
+
+	while (size & ~(BITS_PER_LONG - 1)) {
+		tmp = *(p++);
+		if (tmp)
+			goto found_middle_swap;
+		result += BITS_PER_LONG;
+		size -= BITS_PER_LONG;
+	}
+	if (!size)
+		return result;
+	tmp = ext2_swabp(p);
+found_first:
+	tmp &= (~0UL >> (BITS_PER_LONG - size));
+	if (tmp == 0UL)		/* Are any bits set? */
+		return result + size; /* Nope. */
+found_middle:
+	return result + ffs(tmp);
+
+found_middle_swap:
+	return result + ffs(ext2_swab(tmp));
+}
+EXPORT_SYMBOL(find_next_bit_le);
+#endif
+
+#endif /* __BIG_ENDIAN */
diff --git a/xen/arch/riscv/mm.c b/xen/arch/riscv/mm.c
new file mode 100644
index 0000000000..72322b9adc
--- /dev/null
+++ b/xen/arch/riscv/mm.c
@@ -0,0 +1,93 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <xen/compile.h>
+#include <xen/types.h>
+#include <xen/init.h>
+#include <xen/mm.h>
+
+unsigned long max_page;
+unsigned long total_pages;
+unsigned long frametable_base_mfn;
+
+void flush_page_to_ram(unsigned long mfn, bool sync_icache)
+{
+}
+
+void arch_dump_shared_mem_info(void)
+{
+}
+
+int steal_page(struct domain *d, struct page_info *page, unsigned int memflags)
+{
+    return 0;
+}
+
+int page_is_ram_type(unsigned long mfn, unsigned long mem_type)
+{
+    return 0;
+}
+
+unsigned long domain_get_maximum_gpfn(struct domain *d)
+{
+    return 0;
+}
+
+int xenmem_add_to_physmap_one(struct domain *d, unsigned int space,
+                              union add_to_physmap_extra extra,
+                              unsigned long idx, gfn_t gfn)
+{
+    return 0;
+}
+
+long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
+{
+    return 0;
+}
+
+struct domain *page_get_owner_and_reference(struct page_info *page)
+{
+    return (void *) 0xdeadbeef;
+}
+
+void put_page(struct page_info *page)
+{
+}
+
+bool get_page(struct page_info *page, const struct domain *domain)
+{
+    return false;
+}
+
+int get_page_type(struct page_info *page, unsigned long type)
+{
+    return 0;
+}
+
+void put_page_type(struct page_info *page)
+{
+    return;
+}
+
+unsigned long get_upper_mfn_bound(void)
+{
+    return -1;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c
new file mode 100644
index 0000000000..84ae5f8a37
--- /dev/null
+++ b/xen/arch/riscv/p2m.c
@@ -0,0 +1,150 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <xen/sched.h>
+
+#define INVALID_VMID 0 /* VMID 0 is reserved */
+
+void p2m_write_unlock(struct p2m_domain *p2m)
+{
+}
+
+void p2m_dump_info(struct domain *d)
+{
+}
+
+void memory_type_changed(struct domain *d)
+{
+}
+
+void dump_p2m_lookup(struct domain *d, paddr_t addr)
+{
+}
+
+void p2m_save_state(struct vcpu *p)
+{
+}
+
+void p2m_restore_state(struct vcpu *n)
+{
+}
+
+mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn)
+{
+    return _mfn(gfn_x(gfn));
+}
+
+int p2m_set_entry(struct p2m_domain *p2m,
+                  gfn_t sgfn,
+                  unsigned long nr,
+                  mfn_t smfn,
+                  p2m_type_t t,
+                  p2m_access_t a)
+{
+    int rc = 0;
+
+
+    return rc;
+}
+
+mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t)
+{
+    return _mfn(gfn_x(gfn));
+}
+
+mfn_t p2m_get_entry(struct p2m_domain *p2m, gfn_t gfn,
+                    p2m_type_t *t, p2m_access_t *a,
+                    unsigned int *page_order,
+                    bool *valid)
+{
+    return _mfn(gfn_x(gfn));
+}
+
+void p2m_tlb_flush_sync(struct p2m_domain *p2m)
+{
+}
+
+int map_regions_p2mt(struct domain *d,
+                     gfn_t gfn,
+                     unsigned long nr,
+                     mfn_t mfn,
+                     p2m_type_t p2mt)
+{
+    return 0;
+}
+
+int unmap_regions_p2mt(struct domain *d,
+                       gfn_t gfn,
+                       unsigned long nr,
+                       mfn_t mfn)
+{
+    return 0;
+}
+
+int map_mmio_regions(struct domain *d,
+                     gfn_t start_gfn,
+                     unsigned long nr,
+                     mfn_t mfn)
+{
+    return 0;
+}
+
+int unmap_mmio_regions(struct domain *d,
+                       gfn_t start_gfn,
+                       unsigned long nr,
+                       mfn_t mfn)
+{
+    return 0;
+}
+
+int map_dev_mmio_region(struct domain *d,
+                        gfn_t gfn,
+                        unsigned long nr,
+                        mfn_t mfn)
+{
+    return 0;
+}
+
+int guest_physmap_add_entry(struct domain *d,
+                            gfn_t gfn,
+                            mfn_t mfn,
+                            unsigned long page_order,
+                            p2m_type_t t)
+{
+    return 0;
+}
+
+int guest_physmap_remove_page(struct domain *d, gfn_t gfn, mfn_t mfn,
+                              unsigned int page_order)
+{
+    return 0;
+}
+
+int set_foreign_p2m_entry(struct domain *d, const struct domain *fd,
+                          unsigned long gfn, mfn_t mfn)
+{
+    return 0;
+}
+
+struct page_info *p2m_get_page_from_gfn(struct domain *d, gfn_t gfn,
+                                        p2m_type_t *t)
+{
+    return NULL;
+}
+
+void vcpu_mark_events_pending(struct vcpu *v)
+{
+}
+
+void vcpu_update_evtchn_irq(struct vcpu *v)
+{
+}
diff --git a/xen/arch/riscv/percpu.c b/xen/arch/riscv/percpu.c
new file mode 100644
index 0000000000..31c0cce606
--- /dev/null
+++ b/xen/arch/riscv/percpu.c
@@ -0,0 +1,17 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <xen/percpu.h>
+#include <xen/cpu.h>
+#include <xen/init.h>
+
+unsigned long __per_cpu_offset[NR_CPUS];
diff --git a/xen/arch/riscv/platforms/Kconfig b/xen/arch/riscv/platforms/Kconfig
new file mode 100644
index 0000000000..6959ec35a2
--- /dev/null
+++ b/xen/arch/riscv/platforms/Kconfig
@@ -0,0 +1,31 @@
+choice
+	prompt "Platform Support"
+	default ALL_PLAT
+	---help---
+	Choose which hardware platform to enable in Xen.
+
+	If unsure, choose ALL_PLAT.
+
+config ALL_PLAT
+	bool "All Platforms"
+	---help---
+	Enable support for all available hardware platforms. It doesn't
+	automatically select any of the related drivers.
+
+config QEMU
+	bool "QEMU RISC-V virt machine support"
+	depends on RISCV
+	select HAS_NS16550
+	---help---
+	Enable all the required drivers for QEMU RISC-V virt emulated
+	machine.
+
+endchoice
+
+config ALL64_PLAT
+	bool
+	default (ALL_PLAT && RISCV_64)
+
+config ALL32_PLAT
+	bool
+	default (ALL_PLAT && RISCV_32)
diff --git a/xen/arch/riscv/riscv64/asm-offsets.c b/xen/arch/riscv/riscv64/asm-offsets.c
new file mode 100644
index 0000000000..994d5f60c9
--- /dev/null
+++ b/xen/arch/riscv/riscv64/asm-offsets.c
@@ -0,0 +1,31 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#define COMPILE_OFFSETS
+
+#include <asm/init.h>
+
+#define DEFINE(_sym, _val)                                                 \
+    asm volatile ("\n.ascii\"==>#define " #_sym " %0 /* " #_val " */<==\"" \
+                  : : "i" (_val) )
+#define BLANK()                                                            \
+    asm volatile ( "\n.ascii\"==><==\"" : : )
+#define OFFSET(_sym, _str, _mem)                                           \
+    DEFINE(_sym, offsetof(_str, _mem));
+
+void asm_offsets(void)
+{
+    BLANK();
+    OFFSET(INITINFO_stack, struct init_info, stack);
+}
diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
new file mode 100644
index 0000000000..129e3db58f
--- /dev/null
+++ b/xen/arch/riscv/setup.c
@@ -0,0 +1,27 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <xen/types.h>
+#include <public/version.h>
+
+void arch_get_xen_caps(xen_capabilities_info_t *info)
+{
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/riscv/shutdown.c b/xen/arch/riscv/shutdown.c
new file mode 100644
index 0000000000..bfa1174366
--- /dev/null
+++ b/xen/arch/riscv/shutdown.c
@@ -0,0 +1,28 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+void machine_halt(void)
+{
+}
+
+void machine_restart(unsigned int delay_millisecs)
+{
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/riscv/smp.c b/xen/arch/riscv/smp.c
new file mode 100644
index 0000000000..66f1012b37
--- /dev/null
+++ b/xen/arch/riscv/smp.c
@@ -0,0 +1,35 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <xen/cpumask.h>
+#include <asm/smp.h>
+
+void arch_flush_tlb_mask(const cpumask_t *mask)
+{
+}
+
+void smp_send_event_check_mask(const cpumask_t *mask)
+{
+}
+
+void smp_send_call_function_mask(const cpumask_t *mask)
+{
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/riscv/smpboot.c b/xen/arch/riscv/smpboot.c
new file mode 100644
index 0000000000..567d12a262
--- /dev/null
+++ b/xen/arch/riscv/smpboot.c
@@ -0,0 +1,34 @@
+/*
+ * Dummy smpboot support
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+#include <xen/cpu.h>
+#include <xen/cpumask.h>
+#include <xen/errno.h>
+#include <xen/init.h>
+#include <xen/sched.h>
+#include <xen/smp.h>
+#include <xen/nodemask.h>
+
+cpumask_t cpu_online_map;
+cpumask_t cpu_present_map;
+cpumask_t cpu_possible_map;
+
+DEFINE_PER_CPU(unsigned int, cpu_id);
+DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_sibling_mask);
+DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_core_mask);
+
+/* Fake one node for now. See also include/asm-arm/numa.h */
+nodemask_t __read_mostly node_online_map = { { [0] = 1UL } };
+
+/* Boot cpu data */
+struct init_info init_data = {};
diff --git a/xen/arch/riscv/sysctl.c b/xen/arch/riscv/sysctl.c
new file mode 100644
index 0000000000..9b4ef27aac
--- /dev/null
+++ b/xen/arch/riscv/sysctl.c
@@ -0,0 +1,33 @@
+/******************************************************************************
+ * Arch-specific sysctl.c
+ *
+ * System management operations. For use by node control stack.
+ *
+ * Copyright (c) 2012, Citrix Systems
+ */
+
+#include <xen/types.h>
+#include <xen/lib.h>
+#include <xen/errno.h>
+#include <xen/hypercall.h>
+#include <public/sysctl.h>
+
+void arch_do_physinfo(struct xen_sysctl_physinfo *pi)
+{
+}
+
+long arch_do_sysctl(struct xen_sysctl *sysctl,
+                    XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
+{
+    return -EOPNOTSUPP;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/riscv/time.c b/xen/arch/riscv/time.c
new file mode 100644
index 0000000000..4d7269195d
--- /dev/null
+++ b/xen/arch/riscv/time.c
@@ -0,0 +1,35 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <xen/sched.h>
+#include <xen/time.h>
+
+unsigned long __read_mostly cpu_khz;  /* CPU clock frequency in kHz. */
+
+s_time_t get_s_time(void)
+{
+    return 0;
+}
+
+/* VCPU PV timers. */
+void send_timer_event(struct vcpu *v)
+{
+}
+
+void domain_set_time_offset(struct domain *d, int64_t time_offset_seconds)
+{
+}
+
+int reprogram_timer(s_time_t timeout)
+{
+    return 0;
+}
diff --git a/xen/arch/riscv/traps.c b/xen/arch/riscv/traps.c
new file mode 100644
index 0000000000..5287894954
--- /dev/null
+++ b/xen/arch/riscv/traps.c
@@ -0,0 +1,35 @@
+/*
+ * RISC-V Trap handlers
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <public/xen.h>
+#include <xen/multicall.h>
+#include <xen/sched.h>
+#include <asm/processor.h>
+
+void show_execution_state(const struct cpu_user_regs *regs)
+{
+}
+
+void vcpu_show_execution_state(struct vcpu *v)
+{
+}
+
+void arch_hypercall_tasklet_result(struct vcpu *v, long res)
+{
+}
+
+enum mc_disposition arch_do_multicall_call(struct mc_state *state)
+{
+    return mc_continue;
+}
diff --git a/xen/arch/riscv/vm_event.c b/xen/arch/riscv/vm_event.c
new file mode 100644
index 0000000000..6c759f85a6
--- /dev/null
+++ b/xen/arch/riscv/vm_event.c
@@ -0,0 +1,39 @@
+/*
+ * Architecture-specific vm_event handling routines
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public
+ * License v2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public
+ * License along with this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <xen/sched.h>
+#include <asm/vm_event.h>
+
+void vm_event_fill_regs(vm_event_request_t *req)
+{
+}
+
+void vm_event_set_registers(struct vcpu *v, vm_event_response_t *rsp)
+{
+}
+
+void vm_event_monitor_next_interrupt(struct vcpu *v)
+{
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/riscv/xen.lds.S b/xen/arch/riscv/xen.lds.S
new file mode 100644
index 0000000000..6b95fc84da
--- /dev/null
+++ b/xen/arch/riscv/xen.lds.S
@@ -0,0 +1,113 @@
+/* Excerpts written by Martin Mares <mj@atrey.karlin.mff.cuni.cz> */
+/* Modified for i386/x86-64 Xen by Keir Fraser */
+/* Modified for ARM Xen by Ian Campbell */
+
+#include <xen/cache.h>
+#include <asm/page.h>
+#undef ENTRY
+#undef ALIGN
+
+ENTRY(start)
+OUTPUT_ARCH(riscv)
+
+PHDRS
+{
+  text PT_LOAD ;
+#if defined(BUILD_ID)
+  note PT_NOTE ;
+#endif
+}
+SECTIONS
+{
+  . = XEN_VIRT_START;
+  _start = .;
+  .text : {
+        _stext = .;            /* Text section */
+       *(.text)
+       *(.text.cold)
+       *(.text.unlikely)
+       *(.fixup)
+       *(.gnu.warning)
+       _etext = .;             /* End of text section */
+  } :text = 0x9090
+
+  . = ALIGN(PAGE_SIZE);
+  .rodata : {
+        _srodata = .;          /* Read-only data */
+        /* Bug frames table */
+       __start_bug_frames = .;
+       *(.bug_frames.0)
+       __stop_bug_frames_0 = .;
+       *(.bug_frames.1)
+       __stop_bug_frames_1 = .;
+       *(.bug_frames.2)
+       __stop_bug_frames_2 = .;
+       *(.bug_frames.3)
+       __stop_bug_frames_3 = .;
+       *(.rodata)
+       *(.rodata.*)
+       *(.data.rel.ro)
+       *(.data.rel.ro.*)
+  } :text
+
+#if defined(BUILD_ID)
+  . = ALIGN(4);
+  .note.gnu.build-id : {
+       __note_gnu_build_id_start = .;
+       *(.note.gnu.build-id)
+       __note_gnu_build_id_end = .;
+  } :note :text
+#endif
+  _erodata = .;                /* End of read-only data */
+
+  .data : {                    /* Data */
+       . = ALIGN(PAGE_SIZE);
+       *(.data.page_aligned)
+       *(.data)
+
+       . = ALIGN(8);
+       __start_schedulers_array = .;
+       *(.data.schedulers)
+       __end_schedulers_array = .;
+
+       *(.data.rel)
+       *(.data.rel.*)
+       CONSTRUCTORS
+  } :text
+
+  . = ALIGN(SMP_CACHE_BYTES);
+  .data.read_mostly : {
+       *(.data.read_mostly)
+  } :text
+
+  . = ALIGN(PAGE_SIZE);        /* Init code and data */
+  __init_begin = .;
+  .init.text : {
+       _sinittext = .;
+       *(.init.text)
+       _einittext = .;
+  } :text
+  . = ALIGN(PAGE_SIZE);
+  .init.data : {
+       *(.init.rodata)
+       *(.init.rodata.*)
+
+       . = ALIGN(POINTER_ALIGN);
+       __setup_start = .;
+       *(.init.setup)
+       __setup_end = .;
+
+       __initcall_start = .;
+       *(.initcallpresmp.init)
+       __presmp_initcall_end = .;
+       *(.initcall1.init)
+       __initcall_end = .;
+
+       *(.init.data)
+       *(.init.data.rel)
+       *(.init.data.rel.*)
+  } :text
+  . = ALIGN(STACK_SIZE);
+  __init_end = .;
+  _end = . ;
+}
diff --git a/xen/drivers/char/serial.c b/xen/drivers/char/serial.c
index 5ecba0af33..b84c316784 100644
--- a/xen/drivers/char/serial.c
+++ b/xen/drivers/char/serial.c
@@ -12,6 +12,7 @@
 #include <xen/param.h>
 #include <xen/serial.h>
 #include <xen/cache.h>
+#include <asm/processor.h>
 
 /* Never drop characters, even if the async transmit buffer fills. */
 /* #define SERIAL_NEVER_DROP_CHARS 1 */
diff --git a/xen/include/asm-riscv/altp2m.h b/xen/include/asm-riscv/altp2m.h
new file mode 100644
index 0000000000..8554495f94
--- /dev/null
+++ b/xen/include/asm-riscv/altp2m.h
@@ -0,0 +1,39 @@
+/*
+ * Alternate p2m
+ *
+ * Copyright (c) 2014, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __ASM_RISCV_ALTP2M_H
+#define __ASM_RISCV_ALTP2M_H
+
+#include <xen/sched.h>
+
+/* Alternate p2m on/off per domain */
+static inline bool altp2m_active(const struct domain *d)
+{
+    /* Not implemented on RISC-V. */
+    return false;
+}
+
+/* Alternate p2m VCPU */
+static inline uint16_t altp2m_vcpu_idx(const struct vcpu *v)
+{
+    /* Not implemented on RISC-V, should not be reached. */
+    BUG();
+    return 0;
+}
+
+#endif /* __ASM_RISCV_ALTP2M_H */
diff --git a/xen/include/asm-riscv/asm.h b/xen/include/asm-riscv/asm.h
new file mode 100644
index 0000000000..2dafac5b35
--- /dev/null
+++ b/xen/include/asm-riscv/asm.h
@@ -0,0 +1,77 @@
+/*
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_ASM_H
+#define _ASM_RISCV_ASM_H
+
+#ifdef __ASSEMBLY__
+#define __ASM_STR(x)	x
+#else
+#define __ASM_STR(x)	#x
+#endif
+
+#if __riscv_xlen == 64
+#define __REG_SEL(a, b)	__ASM_STR(a)
+#elif __riscv_xlen == 32
+#define __REG_SEL(a, b)	__ASM_STR(b)
+#else
+#error "Unexpected __riscv_xlen"
+#endif
+
+#define REG_L		__REG_SEL(ld, lw)
+#define REG_S		__REG_SEL(sd, sw)
+#define REG_SC		__REG_SEL(sc.d, sc.w)
+#define SZREG		__REG_SEL(8, 4)
+#define LGREG		__REG_SEL(3, 2)
+
+#if __SIZEOF_POINTER__ == 8
+#ifdef __ASSEMBLY__
+#define RISCV_PTR		.dword
+#define RISCV_SZPTR		8
+#define RISCV_LGPTR		3
+#else
+#define RISCV_PTR		".dword"
+#define RISCV_SZPTR		"8"
+#define RISCV_LGPTR		"3"
+#endif
+#elif __SIZEOF_POINTER__ == 4
+#ifdef __ASSEMBLY__
+#define RISCV_PTR		.word
+#define RISCV_SZPTR		4
+#define RISCV_LGPTR		2
+#else
+#define RISCV_PTR		".word"
+#define RISCV_SZPTR		"4"
+#define RISCV_LGPTR		"2"
+#endif
+#else
+#error "Unexpected __SIZEOF_POINTER__"
+#endif
+
+#if (__SIZEOF_INT__ == 4)
+#define RISCV_INT		__ASM_STR(.word)
+#define RISCV_SZINT		__ASM_STR(4)
+#define RISCV_LGINT		__ASM_STR(2)
+#else
+#error "Unexpected __SIZEOF_INT__"
+#endif
+
+#if (__SIZEOF_SHORT__ == 2)
+#define RISCV_SHORT		__ASM_STR(.half)
+#define RISCV_SZSHORT		__ASM_STR(2)
+#define RISCV_LGSHORT		__ASM_STR(1)
+#else
+#error "Unexpected __SIZEOF_SHORT__"
+#endif
+
+#endif /* _ASM_RISCV_ASM_H */
diff --git a/xen/include/asm-riscv/asm_defns.h b/xen/include/asm-riscv/asm_defns.h
new file mode 100644
index 0000000000..9145f9cbf1
--- /dev/null
+++ b/xen/include/asm-riscv/asm_defns.h
@@ -0,0 +1,24 @@
+#ifndef __RISCV_ASM_DEFNS_H__
+#define __RISCV_ASM_DEFNS_H__
+
+#ifndef COMPILE_OFFSETS
+/* NB. Auto-generated from arch/.../asm-offsets.c */
+#include <asm/asm-offsets.h>
+#endif
+#include <asm/processor.h>
+
+#define ASM_INT(label, val)                 \
+    .p2align 2;                             \
+label: .long (val);                         \
+    .size label, . - label;                 \
+    .type label, @object
+
+#endif /* __RISCV_ASM_DEFNS_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/atomic.h b/xen/include/asm-riscv/atomic.h
new file mode 100644
index 0000000000..7ffae3bd74
--- /dev/null
+++ b/xen/include/asm-riscv/atomic.h
@@ -0,0 +1,204 @@
+/**
+ * Copyright (c) 2018 Anup Patel.
+ * Copyright (c) 2019 Alistair Francis <alistair.francis@wdc.com>
+ * Copyright (c) 2021 Connor Davis <connojd@pm.me>
+ * All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ *
+ */
+
+#ifndef _ASM_RISCV_ATOMIC_H
+#define _ASM_RISCV_ATOMIC_H
+
+#include <xen/atomic.h>
+#include <asm/compiler_types.h>
+#include <asm/cmpxchg.h>
+#include <asm/system.h>
+
+void __bad_atomic_size(void);
+
+/*
+ * Adapted from {READ,WRITE}_ONCE in linux/include/asm-generic/rwonce.h,
+ * with the exception of only allowing types with size at most sizeof(long).
+ * Linux allows sizes <= sizeof(long long), but long long's will tear on
+ * RV32, so we exclude them.
+ */
+#define read_atomic(p) ({                                        \
+    BUILD_BUG_ON(!__native_word(typeof(*(p))));                  \
+    (*(const volatile __unqual_scalar_typeof(*(p)) *)(p));       \
+})
+
+#define write_atomic(p, x)                                       \
+do {						                 \
+    BUILD_BUG_ON(!__native_word(typeof(*(p))));                  \
+    *(volatile typeof(*(p)) *)(p) = (x);                         \
+} while (0)
+
+#define build_add_sized(name, size, type)                        \
+static inline void name(volatile type *addr, type val)           \
+{                                                                \
+    type t;                                                      \
+    asm volatile("l" size " %1, %0\n"                            \
+                 "add %1, %1, %2\n"                              \
+                 "s" size " %1, %0\n"                            \
+                 : "+m" (*addr), "=&r" (t)                       \
+                 : "r" (val));                                   \
+}
+
+build_add_sized(add_u8_sized, "b", uint8_t)
+build_add_sized(add_u16_sized, "h", uint16_t)
+build_add_sized(add_u32_sized, "w", uint32_t)
+
+#define add_sized(p, x) ({                                \
+    typeof(*(p)) x_ = (x);                                \
+    switch ( sizeof(*(p)) )                               \
+    {                                                     \
+    case 1: add_u8_sized((uint8_t *)(p), x_); break;      \
+    case 2: add_u16_sized((uint16_t *)(p), x_); break;    \
+    case 4: add_u32_sized((uint32_t *)(p), x_); break;    \
+    default: __bad_atomic_size(); break;                  \
+    }                                                     \
+})
+
+/*
+ * Snipped from linux/arch/riscv/include/asm/atomic.h:
+ *
+ * First, the atomic ops that have no ordering constraints and therefore don't
+ * have the AQ or RL bits set.  These don't return anything, so there's only
+ * one version to worry about.
+ */
+#define ATOMIC_OP(op, asm_op, I)                              		\
+static always_inline void atomic_##op(int i, atomic_t *v)               \
+{									\
+	__asm__ __volatile__ (						\
+		"	amo" #asm_op ".w" " zero, %1, %0"	        \
+		: "+A" (v->counter)					\
+		: "r" (I)						\
+		: "memory");						\
+}									\
+
+ATOMIC_OP(add, add,  i)
+ATOMIC_OP(sub, add, -i)
+ATOMIC_OP(and, and,  i)
+
+#undef ATOMIC_OP
+
+/* The *_return variants provide full barriers */
+#define ATOMIC_OP_RETURN(op, asm_op, c_op, I)                   	\
+static always_inline int atomic_fetch_##op(int i, atomic_t *v)	        \
+{									\
+	register int ret;						\
+	__asm__ __volatile__ (						\
+		"	amo" #asm_op ".w.aqrl  %1, %2, %0"      	\
+		: "+A" (v->counter), "=r" (ret)				\
+		: "r" (I)						\
+		: "memory");						\
+	return ret;							\
+}                                                                       \
+static always_inline int atomic_##op##_return(int i, atomic_t *v)	\
+{									\
+        return atomic_fetch_##op(i, v) c_op I;          		\
+}
+
+ATOMIC_OP_RETURN(add, add, +,  i)
+ATOMIC_OP_RETURN(sub, add, +, -i)
+
+#undef ATOMIC_OP_RETURN
+
+static inline int atomic_read(const atomic_t *v)
+{
+    return read_atomic(&v->counter);
+}
+
+static inline int _atomic_read(atomic_t v)
+{
+    return v.counter;
+}
+
+static inline void atomic_set(atomic_t *v, int i)
+{
+    write_atomic(&v->counter, i);
+}
+
+static inline void _atomic_set(atomic_t *v, int i)
+{
+    v->counter = i;
+}
+
+static inline int atomic_sub_and_test(int i, atomic_t *v)
+{
+    return atomic_sub_return(i, v) == 0;
+}
+
+static inline void atomic_inc(atomic_t *v)
+{
+    atomic_add(1, v);
+}
+
+static inline int atomic_inc_return(atomic_t *v)
+{
+    return atomic_add_return(1, v);
+}
+
+static inline int atomic_inc_and_test(atomic_t *v)
+{
+    return atomic_add_return(1, v) == 0;
+}
+
+static inline void atomic_dec(atomic_t *v)
+{
+    atomic_sub(1, v);
+}
+
+static inline int atomic_dec_return(atomic_t *v)
+{
+    return atomic_sub_return(1, v);
+}
+
+static inline int atomic_dec_and_test(atomic_t *v)
+{
+    return atomic_sub_return(1, v) == 0;
+}
+
+static inline int atomic_add_negative(int i, atomic_t *v)
+{
+    return atomic_add_return(i, v) < 0;
+}
+
+static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
+{
+	return cmpxchg(&v->counter, old, new);
+}
+
+static inline int atomic_add_unless(atomic_t *v, int a, int u)
+{
+	int prev, rc;
+
+	__asm__ __volatile__ (
+		"0:	lr.w     %[p],  %[c]\n"
+		"	beq      %[p],  %[u], 1f\n"
+		"	add      %[rc], %[p], %[a]\n"
+		"	sc.w.rl  %[rc], %[rc], %[c]\n"
+		"	bnez     %[rc], 0b\n"
+		"	fence    rw, rw\n"
+		"1:\n"
+		: [p]"=&r" (prev), [rc]"=&r" (rc), [c]"+A" (v->counter)
+		: [a]"r" (a), [u]"r" (u)
+		: "memory");
+	return prev;
+}
+
+#endif /* _ASM_RISCV_ATOMIC_H */
diff --git a/xen/include/asm-riscv/bitops.h b/xen/include/asm-riscv/bitops.h
new file mode 100644
index 0000000000..f2f6f63b03
--- /dev/null
+++ b/xen/include/asm-riscv/bitops.h
@@ -0,0 +1,331 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_BITOPS_H
+#define _ASM_RISCV_BITOPS_H
+
+#include <asm/system.h>
+
+#define BIT_ULL(nr)		(1ULL << (nr))
+#define BIT_MASK(nr)		(1UL << ((nr) % BITS_PER_LONG))
+#define BIT_WORD(nr)		((nr) / BITS_PER_LONG)
+#define BIT_ULL_MASK(nr)	(1ULL << ((nr) % BITS_PER_LONG_LONG))
+#define BIT_ULL_WORD(nr)	((nr) / BITS_PER_LONG_LONG)
+#define BITS_PER_BYTE		8
+
+#define __set_bit(n,p)            set_bit(n,p)
+#define __clear_bit(n,p)          clear_bit(n,p)
+
+#define BITS_PER_WORD           32
+
+#ifndef smp_mb__before_clear_bit
+#define smp_mb__before_clear_bit()  smp_mb()
+#define smp_mb__after_clear_bit()   smp_mb()
+#endif /* smp_mb__before_clear_bit */
+
+#if (BITS_PER_LONG == 64)
+#define __AMO(op)	"amo" #op ".d"
+#elif (BITS_PER_LONG == 32)
+#define __AMO(op)	"amo" #op ".w"
+#else
+#error "Unexpected BITS_PER_LONG"
+#endif
+
+#define __test_and_op_bit_ord(op, mod, nr, addr, ord)		\
+({								\
+	unsigned long __res, __mask;				\
+	__mask = BIT_MASK(nr);					\
+	__asm__ __volatile__ (					\
+		__AMO(op) #ord " %0, %2, %1"			\
+		: "=r" (__res), "+A" (addr[BIT_WORD(nr)])	\
+		: "r" (mod(__mask))				\
+		: "memory");					\
+	((__res & __mask) != 0);				\
+})
+
+#define __op_bit_ord(op, mod, nr, addr, ord)			\
+	__asm__ __volatile__ (					\
+		__AMO(op) #ord " zero, %1, %0"			\
+		: "+A" (addr[BIT_WORD(nr)])			\
+		: "r" (mod(BIT_MASK(nr)))			\
+		: "memory");
+
+#define __test_and_op_bit(op, mod, nr, addr) 			\
+	__test_and_op_bit_ord(op, mod, nr, addr, .aqrl)
+#define __op_bit(op, mod, nr, addr)				\
+	__op_bit_ord(op, mod, nr, addr, )
+
+/* Bitmask modifiers */
+#define __NOP(x)	(x)
+#define __NOT(x)	(~(x))
+
+/**
+ * __test_and_set_bit - Set a bit and return its old value
+ * @nr: Bit to set
+ * @addr: Address to count from
+ *
+ * This operation may be reordered on architectures other than x86.
+ */
+static inline int __test_and_set_bit(int nr, volatile void *p)
+{
+	volatile unsigned long *addr = p;
+
+	return __test_and_op_bit(or, __NOP, nr, addr);
+}
+
+/**
+ * __test_and_clear_bit - Clear a bit and return its old value
+ * @nr: Bit to clear
+ * @addr: Address to count from
+ *
+ * This operation can be reordered on architectures other than x86.
+ */
+static inline int __test_and_clear_bit(int nr, volatile void *p)
+{
+	volatile unsigned long *addr = p;
+
+	return __test_and_op_bit(and, __NOT, nr, addr);
+}
+
+/**
+ * __test_and_change_bit - Change a bit and return its old value
+ * @nr: Bit to change
+ * @addr: Address to count from
+ *
+ * This operation is atomic and cannot be reordered.
+ * It also implies a memory barrier.
+ */
+static inline int __test_and_change_bit(int nr, volatile void *p)
+{
+	volatile unsigned long *addr = p;
+
+	return __test_and_op_bit(xor, __NOP, nr, addr);
+}
+
+/**
+ * set_bit - Atomically set a bit in memory
+ * @nr: the bit to set
+ * @addr: the address to start counting from
+ *
+ * Note: there are no guarantees that this function will not be reordered
+ * on non x86 architectures, so if you are writing portable code,
+ * make sure not to rely on its reordering guarantees.
+ *
+ * Note that @nr may be almost arbitrarily large; this function is not
+ * restricted to acting on a single-word quantity.
+ */
+static inline void set_bit(int nr, volatile void *p)
+{
+	volatile unsigned long *addr = p;
+
+	__op_bit(or, __NOP, nr, addr);
+}
+
+/**
+ * clear_bit - Clears a bit in memory
+ * @nr: Bit to clear
+ * @addr: Address to start counting from
+ *
+ * Note: there are no guarantees that this function will not be reordered
+ * on non x86 architectures, so if you are writing portable code,
+ * make sure not to rely on its reordering guarantees.
+ */
+static inline void clear_bit(int nr, volatile void *p)
+{
+	volatile unsigned long *addr = p;
+
+	__op_bit(and, __NOT, nr, addr);
+}
+
+static inline int test_bit(int nr, const volatile void *p)
+{
+        const volatile unsigned int *addr = (const volatile unsigned int *)p;
+
+        return 1UL & (addr[BIT_WORD(nr)] >> (nr & (BITS_PER_WORD-1)));
+}
+
+/**
+ * change_bit - Toggle a bit in memory
+ * @nr: Bit to change
+ * @addr: Address to start counting from
+ *
+ * change_bit() may be reordered on architectures other than x86.
+ * Note that @nr may be almost arbitrarily large; this function is not
+ * restricted to acting on a single-word quantity.
+ */
+static inline void change_bit(int nr, volatile void *p)
+{
+	volatile unsigned long *addr = p;
+
+	__op_bit(xor, __NOP, nr, addr);
+}
+
+/**
+ * test_and_set_bit_lock - Set a bit and return its old value, for lock
+ * @nr: Bit to set
+ * @addr: Address to count from
+ *
+ * This operation is atomic and provides acquire barrier semantics.
+ * It can be used to implement bit locks.
+ */
+static inline int test_and_set_bit_lock(
+	unsigned long nr, volatile void *p)
+{
+	volatile unsigned long *addr = p;
+
+	return __test_and_op_bit_ord(or, __NOP, nr, addr, .aq);
+}
+
+/**
+ * clear_bit_unlock - Clear a bit in memory, for unlock
+ * @nr: the bit to set
+ * @addr: the address to start counting from
+ *
+ * This operation is atomic and provides release barrier semantics.
+ */
+static inline void clear_bit_unlock(
+	unsigned long nr, volatile void *p)
+{
+	volatile unsigned long *addr = p;
+
+	__op_bit_ord(and, __NOT, nr, addr, .rl);
+}
+
+/**
+ * __clear_bit_unlock - Clear a bit in memory, for unlock
+ * @nr: the bit to set
+ * @addr: the address to start counting from
+ *
+ * This operation is like clear_bit_unlock, however it is not atomic.
+ * It does provide release barrier semantics so it can be used to unlock
+ * a bit lock, however it would only be used if no other CPU can modify
+ * any bits in the memory until the lock is released (a good example is
+ * if the bit lock itself protects access to the other bits in the word).
+ *
+ * On RISC-V systems there seems to be no benefit to taking advantage of the
+ * non-atomic property here: it's a lot more instructions and we still have to
+ * provide release semantics anyway.
+ */
+static inline void __clear_bit_unlock(
+	unsigned long nr, volatile unsigned long *addr)
+{
+	clear_bit_unlock(nr, addr);
+}
+
+#undef __test_and_op_bit
+#undef __op_bit
+#undef __NOP
+#undef __NOT
+#undef __AMO
+
+static inline int fls(unsigned int x)
+{
+    return generic_fls(x);
+}
+
+static inline int flsl(unsigned long x)
+{
+    return generic_flsl(x);
+}
+
+#define test_and_set_bit   __test_and_set_bit
+#define test_and_clear_bit __test_and_clear_bit
+
+/* Based on linux/include/asm-generic/bitops/find.h */
+
+#ifndef find_next_bit
+/**
+ * find_next_bit - find the next set bit in a memory region
+ * @addr: The address to base the search on
+ * @offset: The bitnumber to start searching at
+ * @size: The bitmap size in bits
+ */
+extern unsigned long find_next_bit(const unsigned long *addr, unsigned long
+		size, unsigned long offset);
+#endif
+
+#ifndef find_next_zero_bit
+/**
+ * find_next_zero_bit - find the next cleared bit in a memory region
+ * @addr: The address to base the search on
+ * @offset: The bitnumber to start searching at
+ * @size: The bitmap size in bits
+ */
+extern unsigned long find_next_zero_bit(const unsigned long *addr, unsigned
+		long size, unsigned long offset);
+#endif
+
+#ifdef CONFIG_GENERIC_FIND_FIRST_BIT
+
+/**
+ * find_first_bit - find the first set bit in a memory region
+ * @addr: The address to start the search at
+ * @size: The maximum size to search
+ *
+ * Returns the bit number of the first set bit.
+ */
+extern unsigned long find_first_bit(const unsigned long *addr,
+				    unsigned long size);
+
+/**
+ * find_first_zero_bit - find the first cleared bit in a memory region
+ * @addr: The address to start the search at
+ * @size: The maximum size to search
+ *
+ * Returns the bit number of the first cleared bit.
+ */
+extern unsigned long find_first_zero_bit(const unsigned long *addr,
+					 unsigned long size);
+#else /* CONFIG_GENERIC_FIND_FIRST_BIT */
+
+#define find_first_bit(addr, size) find_next_bit((addr), (size), 0)
+#define find_first_zero_bit(addr, size) find_next_zero_bit((addr), (size), 0)
+
+#endif /* CONFIG_GENERIC_FIND_FIRST_BIT */
+
+#define ffs(x) ({ unsigned int __t = (x); fls(__t & -__t); })
+#define ffsl(x) ({ unsigned long __t = (x); flsl(__t & -__t); })
+
+/*
+ * ffz - find first zero in word.
+ * @x: The word to search
+ *
+ * Undefined if no zero exists, so code should check against ~0UL first.
+ */
+#define ffz(x)  ffs(~(x))
+
+/**
+ * find_first_set_bit - find the first set bit in @word
+ * @word: the word to search
+ *
+ * Returns the bit-number of the first set bit (first bit being 0).
+ * The input must *not* be zero.
+ */
+static inline unsigned int find_first_set_bit(unsigned long word)
+{
+    return ffsl(word) - 1;
+}
+
+/**
+ * hweightN - returns the hamming weight of an N-bit word
+ * @x: the word to weigh
+ *
+ * The Hamming Weight of a number is the total number of bits set in it.
+ */
+#define hweight64(x) generic_hweight64(x)
+#define hweight32(x) generic_hweight32(x)
+#define hweight16(x) generic_hweight16(x)
+#define hweight8(x) generic_hweight8(x)
+
+#endif /* _ASM_RISCV_BITOPS_H */
diff --git a/xen/include/asm-riscv/bug.h b/xen/include/asm-riscv/bug.h
new file mode 100644
index 0000000000..cdf4c0ebd4
--- /dev/null
+++ b/xen/include/asm-riscv/bug.h
@@ -0,0 +1,54 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_BUG_H
+#define _ASM_RISCV_BUG_H
+
+#define BUGFRAME_NR     4
+
+#ifndef __ASSEMBLY__
+
+struct bug_frame {
+    signed int loc_disp;    /* Relative address of the bug location */
+    signed int file_disp;   /* Relative address of the filename */
+    signed int msg_disp;    /* Relative address of the predicate (for ASSERT) */
+    uint16_t line;          /* Line number */
+    uint32_t pad0:16;       /* Padding for 8-byte alignment */
+};
+
+#define BUG()							\
+do {								\
+    __asm__ __volatile__ ("ebreak\n");			        \
+    unreachable();						\
+} while (0)
+
+#define WARN()                                                  \
+do {                                                            \
+    BUG();                                                      \
+} while (0)
+
+#define assert_failed(msg) do {                                 \
+    BUG();                                                      \
+} while (0)
+
+#define run_in_exception_handler(fn) BUG()
+
+extern const struct bug_frame __start_bug_frames[],
+                              __stop_bug_frames_0[],
+                              __stop_bug_frames_1[],
+                              __stop_bug_frames_2[],
+                              __stop_bug_frames_3[];
+
+#endif /* !__ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_BUG_H */
diff --git a/xen/include/asm-riscv/byteorder.h b/xen/include/asm-riscv/byteorder.h
new file mode 100644
index 0000000000..320a03c88f
--- /dev/null
+++ b/xen/include/asm-riscv/byteorder.h
@@ -0,0 +1,16 @@
+#ifndef __ASM_RISCV_BYTEORDER_H__
+#define __ASM_RISCV_BYTEORDER_H__
+
+#define __BYTEORDER_HAS_U64__
+
+#include <xen/byteorder/little_endian.h>
+
+#endif /* __ASM_RISCV_BYTEORDER_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/cache.h b/xen/include/asm-riscv/cache.h
new file mode 100644
index 0000000000..394782ca8e
--- /dev/null
+++ b/xen/include/asm-riscv/cache.h
@@ -0,0 +1,24 @@
+/*
+ * Copyright (C) 2017 Chen Liqin <liqin.chen@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2021 Connor Davis <connojd@pm.me>
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_CACHE_H
+#define _ASM_RISCV_CACHE_H
+
+#define L1_CACHE_SHIFT		CONFIG_RISCV_L1_CACHE_SHIFT
+#define L1_CACHE_BYTES		(1 << L1_CACHE_SHIFT)
+
+#define __read_mostly __section(".data.read_mostly")
+
+#endif /* _ASM_RISCV_CACHE_H */
diff --git a/xen/include/asm-riscv/cmpxchg.h b/xen/include/asm-riscv/cmpxchg.h
new file mode 100644
index 0000000000..b7113fa546
--- /dev/null
+++ b/xen/include/asm-riscv/cmpxchg.h
@@ -0,0 +1,382 @@
+/*
+ * Copyright (C) 2014 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_CMPXCHG_H
+#define _ASM_RISCV_CMPXCHG_H
+
+#include <asm/system.h>
+#include <asm/fence.h>
+#include <xen/lib.h>
+
+#define __xchg_relaxed(ptr, new, size)					\
+({									\
+	__typeof__(ptr) __ptr = (ptr);					\
+	__typeof__(new) __new = (new);					\
+	__typeof__(*(ptr)) __ret;					\
+	switch (size) {							\
+	case 4:								\
+		__asm__ __volatile__ (					\
+			"	amoswap.w %0, %2, %1\n"			\
+			: "=r" (__ret), "+A" (*__ptr)			\
+			: "r" (__new)					\
+			: "memory");					\
+		break;							\
+	case 8:								\
+		__asm__ __volatile__ (					\
+			"	amoswap.d %0, %2, %1\n"			\
+			: "=r" (__ret), "+A" (*__ptr)			\
+			: "r" (__new)					\
+			: "memory");					\
+		break;							\
+	default:							\
+		ASSERT_UNREACHABLE();					\
+	}								\
+	__ret;								\
+})
+
+#define xchg_relaxed(ptr, x)						\
+({									\
+	__typeof__(*(ptr)) _x_ = (x);					\
+	(__typeof__(*(ptr))) __xchg_relaxed((ptr),			\
+					    _x_, sizeof(*(ptr)));	\
+})
+
+#define __xchg_acquire(ptr, new, size)					\
+({									\
+	__typeof__(ptr) __ptr = (ptr);					\
+	__typeof__(new) __new = (new);					\
+	__typeof__(*(ptr)) __ret;					\
+	switch (size) {							\
+	case 4:								\
+		__asm__ __volatile__ (					\
+			"	amoswap.w %0, %2, %1\n"			\
+			RISCV_ACQUIRE_BARRIER				\
+			: "=r" (__ret), "+A" (*__ptr)			\
+			: "r" (__new)					\
+			: "memory");					\
+		break;							\
+	case 8:								\
+		__asm__ __volatile__ (					\
+			"	amoswap.d %0, %2, %1\n"			\
+			RISCV_ACQUIRE_BARRIER				\
+			: "=r" (__ret), "+A" (*__ptr)			\
+			: "r" (__new)					\
+			: "memory");					\
+		break;							\
+	default:							\
+		ASSERT_UNREACHABLE();					\
+	}								\
+	__ret;								\
+})
+
+#define xchg_acquire(ptr, x)						\
+({									\
+	__typeof__(*(ptr)) _x_ = (x);					\
+	(__typeof__(*(ptr))) __xchg_acquire((ptr),			\
+					    _x_, sizeof(*(ptr)));	\
+})
+
+#define __xchg_release(ptr, new, size)					\
+({									\
+	__typeof__(ptr) __ptr = (ptr);					\
+	__typeof__(new) __new = (new);					\
+	__typeof__(*(ptr)) __ret = 0;					\
+	switch (size) {							\
+	case 4:								\
+		__asm__ __volatile__ (					\
+			RISCV_RELEASE_BARRIER				\
+			"	amoswap.w %0, %2, %1\n"			\
+			: "=r" (__ret), "+A" (*__ptr)			\
+			: "r" (__new)					\
+			: "memory");					\
+		break;							\
+	case 8:								\
+		__asm__ __volatile__ (					\
+			RISCV_RELEASE_BARRIER				\
+			"	amoswap.d %0, %2, %1\n"			\
+			: "=r" (__ret), "+A" (*__ptr)			\
+			: "r" (__new)					\
+			: "memory");					\
+		break;							\
+	default:							\
+		ASSERT_UNREACHABLE();					\
+	}								\
+	__ret;								\
+})
+
+#define xchg_release(ptr, x)						\
+({									\
+	__typeof__(*(ptr)) _x_ = (x);					\
+	(__typeof__(*(ptr))) __xchg_release((ptr),			\
+					    _x_, sizeof(*(ptr)));	\
+})
+
+#define __xchg(ptr, new, size)						\
+({									\
+	__typeof__(ptr) __ptr = (ptr);					\
+	__typeof__(new) __new = (new);					\
+	__typeof__(*(ptr)) __ret = 0;					\
+	switch (size) {							\
+	case 4:								\
+		__asm__ __volatile__ (					\
+			"	amoswap.w.aqrl %0, %2, %1\n"		\
+			: "=r" (__ret), "+A" (*__ptr)			\
+			: "r" (__new)					\
+			: "memory");					\
+		break;							\
+	case 8:								\
+		__asm__ __volatile__ (					\
+			"	amoswap.d.aqrl %0, %2, %1\n"		\
+			: "=r" (__ret), "+A" (*__ptr)			\
+			: "r" (__new)					\
+			: "memory");					\
+		break;							\
+	default:							\
+		ASSERT_UNREACHABLE();					\
+	}								\
+	__ret;								\
+})
+
+#define xchg(ptr, x)							\
+({									\
+	__typeof__(*(ptr)) _x_ = (x);					\
+	(__typeof__(*(ptr))) __xchg((ptr), _x_, sizeof(*(ptr)));	\
+})
+
+#define xchg32(ptr, x)							\
+({									\
+	BUILD_BUG_ON(sizeof(*(ptr)) != 4);				\
+	xchg((ptr), (x));						\
+})
+
+#define xchg64(ptr, x)							\
+({									\
+	BUILD_BUG_ON(sizeof(*(ptr)) != 8);				\
+	xchg((ptr), (x));						\
+})
+
+/*
+ * Atomic compare and exchange.  Compare OLD with MEM, if identical,
+ * store NEW in MEM.  Return the initial value in MEM.  Success is
+ * indicated by comparing RETURN with OLD.
+ */
+#define __cmpxchg_relaxed(ptr, old, new, size)				\
+({									\
+	__typeof__(ptr) __ptr = (ptr);					\
+	__typeof__(*(ptr)) __old = (old);				\
+	__typeof__(*(ptr)) __new = (new);				\
+	__typeof__(*(ptr)) __ret;					\
+	register unsigned int __rc;					\
+	switch (size) {							\
+	case 4:								\
+		__asm__ __volatile__ (					\
+			"0:	lr.w %0, %2\n"				\
+			"	bne  %0, %z3, 1f\n"			\
+			"	sc.w %1, %z4, %2\n"			\
+			"	bnez %1, 0b\n"				\
+			"1:\n"						\
+			: "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)	\
+			: "rJ" (__old), "rJ" (__new)			\
+			: "memory");					\
+		break;							\
+	case 8:								\
+		__asm__ __volatile__ (					\
+			"0:	lr.d %0, %2\n"				\
+			"	bne %0, %z3, 1f\n"			\
+			"	sc.d %1, %z4, %2\n"			\
+			"	bnez %1, 0b\n"				\
+			"1:\n"						\
+			: "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)	\
+			: "rJ" (__old), "rJ" (__new)			\
+			: "memory");					\
+		break;							\
+	default:							\
+		ASSERT_UNREACHABLE();					\
+	}								\
+	__ret;								\
+})
+
+#define cmpxchg_relaxed(ptr, o, n)					\
+({									\
+	__typeof__(*(ptr)) _o_ = (o);					\
+	__typeof__(*(ptr)) _n_ = (n);					\
+	(__typeof__(*(ptr))) __cmpxchg_relaxed((ptr),			\
+					_o_, _n_, sizeof(*(ptr)));	\
+})
+
+#define __cmpxchg_acquire(ptr, old, new, size)				\
+({									\
+	__typeof__(ptr) __ptr = (ptr);					\
+	__typeof__(*(ptr)) __old = (old);				\
+	__typeof__(*(ptr)) __new = (new);				\
+	__typeof__(*(ptr)) __ret;					\
+	register unsigned int __rc;					\
+	switch (size) {							\
+	case 4:								\
+		__asm__ __volatile__ (					\
+			"0:	lr.w %0, %2\n"				\
+			"	bne  %0, %z3, 1f\n"			\
+			"	sc.w %1, %z4, %2\n"			\
+			"	bnez %1, 0b\n"				\
+			RISCV_ACQUIRE_BARRIER				\
+			"1:\n"						\
+			: "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)	\
+			: "rJ" (__old), "rJ" (__new)			\
+			: "memory");					\
+		break;							\
+	case 8:								\
+		__asm__ __volatile__ (					\
+			"0:	lr.d %0, %2\n"				\
+			"	bne %0, %z3, 1f\n"			\
+			"	sc.d %1, %z4, %2\n"			\
+			"	bnez %1, 0b\n"				\
+			RISCV_ACQUIRE_BARRIER				\
+			"1:\n"						\
+			: "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)	\
+			: "rJ" (__old), "rJ" (__new)			\
+			: "memory");					\
+		break;							\
+	default:							\
+		ASSERT_UNREACHABLE();					\
+	}								\
+	__ret;								\
+})
+
+#define cmpxchg_acquire(ptr, o, n)					\
+({									\
+	__typeof__(*(ptr)) _o_ = (o);					\
+	__typeof__(*(ptr)) _n_ = (n);					\
+	(__typeof__(*(ptr))) __cmpxchg_acquire((ptr),			\
+					_o_, _n_, sizeof(*(ptr)));	\
+})
+
+#define __cmpxchg_release(ptr, old, new, size)				\
+({									\
+	__typeof__(ptr) __ptr = (ptr);					\
+	__typeof__(*(ptr)) __old = (old);				\
+	__typeof__(*(ptr)) __new = (new);				\
+	__typeof__(*(ptr)) __ret = 0;					\
+	register unsigned int __rc = 0;					\
+	switch (size) {							\
+	case 4:								\
+		__asm__ __volatile__ (					\
+			RISCV_RELEASE_BARRIER				\
+			"0:	lr.w %0, %2\n"				\
+			"	bne  %0, %z3, 1f\n"			\
+			"	sc.w %1, %z4, %2\n"			\
+			"	bnez %1, 0b\n"				\
+			"1:\n"						\
+			: "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)	\
+			: "rJ" (__old), "rJ" (__new)			\
+			: "memory");					\
+		break;							\
+	case 8:								\
+		__asm__ __volatile__ (					\
+			RISCV_RELEASE_BARRIER				\
+			"0:	lr.d %0, %2\n"				\
+			"	bne %0, %z3, 1f\n"			\
+			"	sc.d %1, %z4, %2\n"			\
+			"	bnez %1, 0b\n"				\
+			"1:\n"						\
+			: "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)	\
+			: "rJ" (__old), "rJ" (__new)			\
+			: "memory");					\
+		break;							\
+	default:							\
+		ASSERT_UNREACHABLE();					\
+	}								\
+	__ret;								\
+})
+
+#define cmpxchg_release(ptr, o, n)					\
+({									\
+	__typeof__(*(ptr)) _o_ = (o);					\
+	__typeof__(*(ptr)) _n_ = (n);					\
+	(__typeof__(*(ptr))) __cmpxchg_release((ptr),			\
+					_o_, _n_, sizeof(*(ptr)));	\
+})
+
+#define __cmpxchg(ptr, old, new, size)					\
+({									\
+	__typeof__(ptr) __ptr = (ptr);					\
+	__typeof__(*(ptr)) __old = (__typeof__(*(ptr)))(old);		\
+	__typeof__(*(ptr)) __new = (__typeof__(*(ptr)))(new);	        \
+	__typeof__(*(ptr)) __ret = 0;					\
+	register unsigned int __rc = 0;					\
+	switch (size) {							\
+	case 4:								\
+		__asm__ __volatile__ (					\
+			"0:	lr.w %0, %2\n"				\
+			"	bne  %0, %z3, 1f\n"			\
+			"	sc.w.rl %1, %z4, %2\n"			\
+			"	bnez %1, 0b\n"				\
+			"	fence rw, rw\n"				\
+			"1:\n"						\
+			: "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)	\
+			: "rJ" (__old), "rJ" (__new)			\
+			: "memory");					\
+		break;							\
+	case 8:								\
+		__asm__ __volatile__ (					\
+			"0:	lr.d %0, %2\n"				\
+			"	bne %0, %z3, 1f\n"			\
+			"	sc.d.rl %1, %z4, %2\n"			\
+			"	bnez %1, 0b\n"				\
+			"	fence rw, rw\n"				\
+			"1:\n"						\
+			: "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)	\
+			: "rJ" (__old), "rJ" (__new)			\
+			: "memory");					\
+		break;							\
+	default:							\
+		ASSERT_UNREACHABLE();					\
+	}								\
+	__ret;								\
+})
+
+#define cmpxchg(ptr, o, n)						\
+({									\
+	__typeof__(*(ptr)) _o_ = (o);					\
+	__typeof__(*(ptr)) _n_ = (n);					\
+	(__typeof__(*(ptr))) __cmpxchg((ptr),				\
+				       _o_, _n_, sizeof(*(ptr)));	\
+})
+
+#define cmpxchg_local(ptr, o, n)					\
+	(__cmpxchg_relaxed((ptr), (o), (n), sizeof(*(ptr))))
+
+#define cmpxchg32(ptr, o, n)						\
+({									\
+	BUILD_BUG_ON(sizeof(*(ptr)) != 4);				\
+	cmpxchg((ptr), (o), (n));					\
+})
+
+#define cmpxchg32_local(ptr, o, n)					\
+({									\
+	BUILD_BUG_ON(sizeof(*(ptr)) != 4);				\
+	cmpxchg_relaxed((ptr), (o), (n));				\
+})
+
+#define cmpxchg64(ptr, o, n)						\
+({									\
+	BUILD_BUG_ON(sizeof(*(ptr)) != 8);				\
+	cmpxchg((ptr), (o), (n));					\
+})
+
+#define cmpxchg64_local(ptr, o, n)					\
+({									\
+	BUILD_BUG_ON(sizeof(*(ptr)) != 8);				\
+	cmpxchg_relaxed((ptr), (o), (n));				\
+})
+
+#endif /* _ASM_RISCV_CMPXCHG_H */
diff --git a/xen/include/asm-riscv/compiler_types.h b/xen/include/asm-riscv/compiler_types.h
new file mode 100644
index 0000000000..dbe4a8bbff
--- /dev/null
+++ b/xen/include/asm-riscv/compiler_types.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __LINUX_COMPILER_TYPES_H
+#define __LINUX_COMPILER_TYPES_H
+
+/*
+ * __unqual_scalar_typeof(x) - Declare an unqualified scalar type, leaving
+ *			       non-scalar types unchanged.
+ */
+/*
+ * Prefer C11 _Generic for better compile-times and simpler code. Note: 'char'
+ * is not type-compatible with 'signed char', and we define a separate case.
+ */
+#define __scalar_type_to_expr_cases(type)				\
+		unsigned type:	(unsigned type)0,			\
+		signed type:	(signed type)0
+
+#define __unqual_scalar_typeof(x) typeof(				\
+		_Generic((x),						\
+			 char:	(char)0,				\
+			 __scalar_type_to_expr_cases(char),		\
+			 __scalar_type_to_expr_cases(short),		\
+			 __scalar_type_to_expr_cases(int),		\
+			 __scalar_type_to_expr_cases(long),		\
+			 __scalar_type_to_expr_cases(long long),	\
+			 default: (x)))
+
+/* Is this type a native word size -- useful for atomic operations */
+#define __native_word(t) \
+	(sizeof(t) == sizeof(char) || sizeof(t) == sizeof(short) || \
+	 sizeof(t) == sizeof(int) || sizeof(t) == sizeof(long))
+
+#endif /* __LINUX_COMPILER_TYPES_H */
diff --git a/xen/include/asm-riscv/config.h b/xen/include/asm-riscv/config.h
new file mode 100644
index 0000000000..84cb436dc1
--- /dev/null
+++ b/xen/include/asm-riscv/config.h
@@ -0,0 +1,110 @@
+/******************************************************************************
+ * config.h
+ *
+ * A Linux-style configuration list.
+ */
+
+#ifndef __RISCV_CONFIG_H__
+#define __RISCV_CONFIG_H__
+
+#if defined(CONFIG_RISCV_64)
+# define LONG_BYTEORDER 3
+# define ELFSIZE 64
+#else
+# error "Unsupported RISCV variant"
+#endif
+
+#define BYTES_PER_LONG (1 << LONG_BYTEORDER)
+#define BITS_PER_LONG  (BYTES_PER_LONG << 3)
+#define POINTER_ALIGN  BYTES_PER_LONG
+
+#define BITS_PER_LLONG 64
+
+/* xen_ulong_t is always 64 bits */
+#define BITS_PER_XEN_ULONG 64
+
+#define CONFIG_RISCV 1
+#define CONFIG_RISCV_L1_CACHE_SHIFT 6
+
+#define CONFIG_PAGEALLOC_MAX_ORDER 18
+#define CONFIG_DOMU_MAX_ORDER      9
+#define CONFIG_HWDOM_MAX_ORDER     10
+
+#define OPT_CONSOLE_STR "dtuart"
+
+#ifdef CONFIG_RISCV_64
+#define MAX_VIRT_CPUS 128u
+#else
+#error "Unsupported RISCV variant"
+#endif
+
+#define INVALID_VCPU_ID MAX_VIRT_CPUS
+
+/* Linkage for RISCV */
+#ifdef __ASSEMBLY__
+#define ALIGN .align 2
+
+#define ENTRY(name)                                \
+  .globl name;                                     \
+  ALIGN;                                           \
+  name:
+#endif
+
+#include <xen/const.h>
+
+#ifdef CONFIG_RISCV_64
+
+/*
+ * RISC-V Layout:
+ *   0x0000000000000000 - 0x0000003fffffffff (256GB, L2 slots [0-255])
+ *     Unmapped
+ *   0x0000004000000000 - 0xffffffbfffffffff
+ *     Inaccessible: sv39 only supports 39-bit sign-extended VAs.
+ *   0xffffffc000000000 - 0xffffffc0001fffff (2MB, L2 slot [256])
+ *     Unmapped
+ *   0xffffffc000200000 - 0xffffffc0003fffff (2MB, L2 slot [256])
+ *     Xen text, data, bss
+ *   0xffffffc000400000 - 0xffffffc0005fffff (2MB, L2 slot [256])
+ *     Fixmap: special-purpose 4K mapping slots
+ *   0xffffffc000600000 - 0xffffffc0009fffff (4MB, L2 slot [256])
+ *     Early boot mapping of FDT
+ *   0xffffffc000a00000 - 0xffffffc000bfffff (2MB, L2 slot [256])
+ *     Early relocation address, used when relocating Xen and later
+ *     for livepatch vmap (if compiled in)
+ *   0xffffffc040000000 - 0xffffffc07fffffff (1GB, L2 slot [257])
+ *     VMAP: ioremap and early_ioremap
+ *   0xffffffc080000000 - 0xffffffc13fffffff (3GB, L2 slots [258..260])
+ *     Unmapped
+ *   0xffffffc140000000 - 0xffffffc1bfffffff (2GB, L2 slots [261..262])
+ *     Frametable: 48 bytes per page for 133GB of RAM
+ *   0xffffffc1c0000000 - 0xffffffe1bfffffff (128GB, L2 slots [263..390])
+ *     1:1 direct mapping of RAM
+ *   0xffffffe1c0000000 - 0xffffffffffffffff (121GB, L2 slots [391..511])
+ *     Unmapped
+ */
+
+#define L2_ENTRY_BITS  30
+#define L2_ENTRY_BYTES (_AC(1,UL) << L2_ENTRY_BITS)
+#define L2_ADDR(_slot)                                      \
+    (((_AC(_slot, UL) >> 8) * _AC(0xffffff8000000000,UL)) | \
+     (_AC(_slot, UL) << L2_ENTRY_BITS))
+
+#define XEN_VIRT_START         _AT(vaddr_t, L2_ADDR(256) + MB(2))
+#define HYPERVISOR_VIRT_START  XEN_VIRT_START
+
+#define FRAMETABLE_VIRT_START  _AT(vaddr_t, L2_ADDR(261))
+
+#endif /* CONFIG_RISCV_64 */
+
+#define STACK_ORDER            3
+#define STACK_SIZE             (PAGE_SIZE << STACK_ORDER)
+
+#endif /* __RISCV_CONFIG_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/cpufeature.h b/xen/include/asm-riscv/cpufeature.h
new file mode 100644
index 0000000000..15133ed63e
--- /dev/null
+++ b/xen/include/asm-riscv/cpufeature.h
@@ -0,0 +1,17 @@
+#ifndef __ASM_RISCV_CPUFEATURE_H
+#define __ASM_RISCV_CPUFEATURE_H
+
+static inline int cpu_nr_siblings(unsigned int cpu)
+{
+    return 1;
+}
+
+#endif
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/csr.h b/xen/include/asm-riscv/csr.h
new file mode 100644
index 0000000000..2c84efde99
--- /dev/null
+++ b/xen/include/asm-riscv/csr.h
@@ -0,0 +1,219 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2015 Regents of the University of California
+ */
+
+#ifndef _ASM_RISCV_CSR_H
+#define _ASM_RISCV_CSR_H
+
+#include <asm/asm.h>
+#include <xen/const.h>
+
+/* Status register flags */
+#define SR_SIE		_AC(0x00000002, UL) /* Supervisor Interrupt Enable */
+#define SR_MIE		_AC(0x00000008, UL) /* Machine Interrupt Enable */
+#define SR_SPIE		_AC(0x00000020, UL) /* Previous Supervisor IE */
+#define SR_MPIE		_AC(0x00000080, UL) /* Previous Machine IE */
+#define SR_SPP		_AC(0x00000100, UL) /* Previously Supervisor */
+#define SR_MPP		_AC(0x00001800, UL) /* Previously Machine */
+#define SR_SUM		_AC(0x00040000, UL) /* Supervisor User Memory Access */
+
+#define SR_FS		_AC(0x00006000, UL) /* Floating-point Status */
+#define SR_FS_OFF	_AC(0x00000000, UL)
+#define SR_FS_INITIAL	_AC(0x00002000, UL)
+#define SR_FS_CLEAN	_AC(0x00004000, UL)
+#define SR_FS_DIRTY	_AC(0x00006000, UL)
+
+#define SR_XS		_AC(0x00018000, UL) /* Extension Status */
+#define SR_XS_OFF	_AC(0x00000000, UL)
+#define SR_XS_INITIAL	_AC(0x00008000, UL)
+#define SR_XS_CLEAN	_AC(0x00010000, UL)
+#define SR_XS_DIRTY	_AC(0x00018000, UL)
+
+#ifndef CONFIG_64BIT
+#define SR_SD		_AC(0x80000000, UL) /* FS/XS dirty */
+#else
+#define SR_SD		_AC(0x8000000000000000, UL) /* FS/XS dirty */
+#endif
+
+/* SATP flags */
+#ifndef CONFIG_64BIT
+#define SATP_PPN	_AC(0x003FFFFF, UL)
+#define SATP_MODE_32	_AC(0x80000000, UL)
+#define SATP_MODE	SATP_MODE_32
+#else
+#define SATP_PPN	_AC(0x00000FFFFFFFFFFF, UL)
+#define SATP_MODE_39	_AC(0x8000000000000000, UL)
+#define SATP_MODE	SATP_MODE_39
+#endif
+
+/* Exception cause high bit - is an interrupt if set */
+#define CAUSE_IRQ_FLAG		(_AC(1, UL) << (__riscv_xlen - 1))
+
+/* Interrupt causes (minus the high bit) */
+#define IRQ_S_SOFT		1
+#define IRQ_M_SOFT		3
+#define IRQ_S_TIMER		5
+#define IRQ_M_TIMER		7
+#define IRQ_S_EXT		9
+#define IRQ_M_EXT		11
+
+/* Exception causes */
+#define EXC_INST_MISALIGNED	0
+#define EXC_INST_ACCESS		1
+#define EXC_BREAKPOINT		3
+#define EXC_LOAD_ACCESS		5
+#define EXC_STORE_ACCESS	7
+#define EXC_SYSCALL		8
+#define EXC_INST_PAGE_FAULT	12
+#define EXC_LOAD_PAGE_FAULT	13
+#define EXC_STORE_PAGE_FAULT	15
+
+/* PMP configuration */
+#define PMP_R			0x01
+#define PMP_W			0x02
+#define PMP_X			0x04
+#define PMP_A			0x18
+#define PMP_A_TOR		0x08
+#define PMP_A_NA4		0x10
+#define PMP_A_NAPOT		0x18
+#define PMP_L			0x80
+
+/* symbolic CSR names: */
+#define CSR_CYCLE		0xc00
+#define CSR_TIME		0xc01
+#define CSR_INSTRET		0xc02
+#define CSR_CYCLEH		0xc80
+#define CSR_TIMEH		0xc81
+#define CSR_INSTRETH		0xc82
+
+#define CSR_SSTATUS		0x100
+#define CSR_SIE			0x104
+#define CSR_STVEC		0x105
+#define CSR_SCOUNTEREN		0x106
+#define CSR_SSCRATCH		0x140
+#define CSR_SEPC		0x141
+#define CSR_SCAUSE		0x142
+#define CSR_STVAL		0x143
+#define CSR_SIP			0x144
+#define CSR_SATP		0x180
+
+#define CSR_MSTATUS		0x300
+#define CSR_MISA		0x301
+#define CSR_MIE			0x304
+#define CSR_MTVEC		0x305
+#define CSR_MSCRATCH		0x340
+#define CSR_MEPC		0x341
+#define CSR_MCAUSE		0x342
+#define CSR_MTVAL		0x343
+#define CSR_MIP			0x344
+#define CSR_PMPCFG0		0x3a0
+#define CSR_PMPADDR0		0x3b0
+#define CSR_MHARTID		0xf14
+
+#ifdef CONFIG_RISCV_M_MODE
+# define CSR_STATUS	CSR_MSTATUS
+# define CSR_IE		CSR_MIE
+# define CSR_TVEC	CSR_MTVEC
+# define CSR_SCRATCH	CSR_MSCRATCH
+# define CSR_EPC	CSR_MEPC
+# define CSR_CAUSE	CSR_MCAUSE
+# define CSR_TVAL	CSR_MTVAL
+# define CSR_IP		CSR_MIP
+
+# define SR_IE		SR_MIE
+# define SR_PIE		SR_MPIE
+# define SR_PP		SR_MPP
+
+# define RV_IRQ_SOFT    IRQ_M_SOFT
+# define RV_IRQ_TIMER	IRQ_M_TIMER
+# define RV_IRQ_EXT     IRQ_M_EXT
+#else /* CONFIG_RISCV_M_MODE */
+# define CSR_STATUS	CSR_SSTATUS
+# define CSR_IE		CSR_SIE
+# define CSR_TVEC	CSR_STVEC
+# define CSR_SCRATCH	CSR_SSCRATCH
+# define CSR_EPC	CSR_SEPC
+# define CSR_CAUSE	CSR_SCAUSE
+# define CSR_TVAL	CSR_STVAL
+# define CSR_IP		CSR_SIP
+
+# define SR_IE		SR_SIE
+# define SR_PIE		SR_SPIE
+# define SR_PP		SR_SPP
+
+# define RV_IRQ_SOFT    IRQ_S_SOFT
+# define RV_IRQ_TIMER	IRQ_S_TIMER
+# define RV_IRQ_EXT     IRQ_S_EXT
+#endif /* CONFIG_RISCV_M_MODE */
+
+/* IE/IP (Supervisor/Machine Interrupt Enable/Pending) flags */
+#define IE_SIE		(_AC(0x1, UL) << RV_IRQ_SOFT)
+#define IE_TIE		(_AC(0x1, UL) << RV_IRQ_TIMER)
+#define IE_EIE		(_AC(0x1, UL) << RV_IRQ_EXT)
+
+#ifndef __ASSEMBLY__
+
+#define csr_swap(csr, val)					\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrrw %0, " __ASM_STR(csr) ", %1"\
+			      : "=r" (__v) : "rK" (__v)		\
+			      : "memory");			\
+	__v;							\
+})
+
+#define csr_read(csr)						\
+({								\
+	register unsigned long __v;				\
+	__asm__ __volatile__ ("csrr %0, " __ASM_STR(csr)	\
+			      : "=r" (__v) :			\
+			      : "memory");			\
+	__v;							\
+})
+
+#define csr_write(csr, val)					\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrw " __ASM_STR(csr) ", %0"	\
+			      : : "rK" (__v)			\
+			      : "memory");			\
+})
+
+#define csr_read_set(csr, val)					\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrrs %0, " __ASM_STR(csr) ", %1"\
+			      : "=r" (__v) : "rK" (__v)		\
+			      : "memory");			\
+	__v;							\
+})
+
+#define csr_set(csr, val)					\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrs " __ASM_STR(csr) ", %0"	\
+			      : : "rK" (__v)			\
+			      : "memory");			\
+})
+
+#define csr_read_clear(csr, val)				\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrrc %0, " __ASM_STR(csr) ", %1"\
+			      : "=r" (__v) : "rK" (__v)		\
+			      : "memory");			\
+	__v;							\
+})
+
+#define csr_clear(csr, val)					\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrc " __ASM_STR(csr) ", %0"	\
+			      : : "rK" (__v)			\
+			      : "memory");			\
+})
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_CSR_H */
diff --git a/xen/include/asm-riscv/current.h b/xen/include/asm-riscv/current.h
new file mode 100644
index 0000000000..b9f319e9c4
--- /dev/null
+++ b/xen/include/asm-riscv/current.h
@@ -0,0 +1,47 @@
+#ifndef __ASM_CURRENT_H
+#define __ASM_CURRENT_H
+
+#include <xen/page-size.h>
+#include <xen/percpu.h>
+#include <asm/processor.h>
+
+#ifndef __ASSEMBLY__
+
+struct vcpu;
+
+/* Which VCPU is "current" on this PCPU. */
+DECLARE_PER_CPU(struct vcpu *, curr_vcpu);
+
+#define current              (this_cpu(curr_vcpu))
+#define set_current(vcpu)    do { current = (vcpu); } while (0)
+#define get_cpu_current(cpu) (per_cpu(curr_vcpu, cpu))
+
+/* Per-VCPU state that lives at the top of the stack */
+struct cpu_info {
+    struct cpu_user_regs guest_cpu_user_regs;
+    unsigned long elr;
+    uint32_t flags;
+};
+
+static inline struct cpu_info *get_cpu_info(void)
+{
+    register unsigned long sp asm ("sp");
+
+    return (struct cpu_info *)((sp & ~(STACK_SIZE - 1)) +
+                               STACK_SIZE - sizeof(struct cpu_info));
+}
+
+#define guest_cpu_user_regs() (&get_cpu_info()->guest_cpu_user_regs)
+
+DECLARE_PER_CPU(unsigned int, cpu_id);
+
+#define get_processor_id() (this_cpu(cpu_id))
+
+#define set_processor_id(id)  do {                      \
+    csr_write(CSR_SCRATCH, __per_cpu_offset[id]);       \
+    this_cpu(cpu_id) = (id);                            \
+} while (0)
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* __ASM_CURRENT_H */
diff --git a/xen/include/asm-riscv/debugger.h b/xen/include/asm-riscv/debugger.h
new file mode 100644
index 0000000000..af4fc8a838
--- /dev/null
+++ b/xen/include/asm-riscv/debugger.h
@@ -0,0 +1,15 @@
+#ifndef __RISCV_DEBUGGER_H__
+#define __RISCV_DEBUGGER_H__
+
+#define debugger_trap_fatal(v, r) (0)
+#define debugger_trap_immediate() ((void) 0)
+
+#endif /* __RISCV_DEBUGGER_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/delay.h b/xen/include/asm-riscv/delay.h
new file mode 100644
index 0000000000..181c54844c
--- /dev/null
+++ b/xen/include/asm-riscv/delay.h
@@ -0,0 +1,15 @@
+#ifndef __RISCV_DELAY_H__
+#define __RISCV_DELAY_H__
+
+extern void udelay(unsigned long usecs);
+
+#endif
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/desc.h b/xen/include/asm-riscv/desc.h
new file mode 100644
index 0000000000..a4d02d5eef
--- /dev/null
+++ b/xen/include/asm-riscv/desc.h
@@ -0,0 +1,12 @@
+#ifndef __ARCH_DESC_H
+#define __ARCH_DESC_H
+
+#endif /* __ARCH_DESC_H */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/device.h b/xen/include/asm-riscv/device.h
new file mode 100644
index 0000000000..e38d2a9712
--- /dev/null
+++ b/xen/include/asm-riscv/device.h
@@ -0,0 +1,15 @@
+#ifndef __ASM_RISCV_DEVICE_H
+#define __ASM_RISCV_DEVICE_H
+
+typedef struct device device_t;
+
+#endif /* __ASM_RISCV_DEVICE_H */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/div64.h b/xen/include/asm-riscv/div64.h
new file mode 100644
index 0000000000..0a88dd30ad
--- /dev/null
+++ b/xen/include/asm-riscv/div64.h
@@ -0,0 +1,23 @@
+#ifndef __ASM_RISCV_DIV64
+#define __ASM_RISCV_DIV64
+
+#include <asm/system.h>
+#include <xen/types.h>
+
+# define do_div(n,base) ({                                      \
+        uint32_t __base = (base);                               \
+        uint32_t __rem;                                         \
+        __rem = ((uint64_t)(n)) % __base;                       \
+        (n) = ((uint64_t)(n)) / __base;                         \
+        __rem;                                                  \
+ })
+
+#endif
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/domain.h b/xen/include/asm-riscv/domain.h
new file mode 100644
index 0000000000..ebf2c5bfe1
--- /dev/null
+++ b/xen/include/asm-riscv/domain.h
@@ -0,0 +1,50 @@
+#ifndef __ASM_DOMAIN_H__
+#define __ASM_DOMAIN_H__
+
+#include <xen/cache.h>
+#include <xen/sched.h>
+#include <asm/page.h>
+#include <asm/p2m.h>
+#include <public/hvm/params.h>
+#include <xen/serial.h>
+#include <xen/rbtree.h>
+
+struct hvm_domain {
+    uint64_t params[HVM_NR_PARAMS];
+};
+
+/* The hardware domain always has its memory direct mapped. */
+#define is_domain_direct_mapped(d) ((d) == hardware_domain)
+
+struct arch_domain {
+    struct hvm_domain hvm;
+} __cacheline_aligned;
+
+struct arch_vcpu {
+}  __cacheline_aligned;
+
+void vcpu_show_execution_state(struct vcpu *);
+void vcpu_show_registers(const struct vcpu *);
+
+static inline struct vcpu_guest_context *alloc_vcpu_guest_context(void)
+{
+    return (struct vcpu_guest_context *)0xdeadbeef;
+}
+
+static inline void free_vcpu_guest_context(struct vcpu_guest_context *vgc)
+{
+}
+
+static inline void arch_vcpu_block(struct vcpu *v) {}
+
+#endif /* __ASM_DOMAIN_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/event.h b/xen/include/asm-riscv/event.h
new file mode 100644
index 0000000000..88e10f414b
--- /dev/null
+++ b/xen/include/asm-riscv/event.h
@@ -0,0 +1,42 @@
+#ifndef __ASM_EVENT_H__
+#define __ASM_EVENT_H__
+
+#include <xen/errno.h>
+#include <asm/domain.h>
+#include <asm/bug.h>
+
+void vcpu_kick(struct vcpu *v);
+void vcpu_mark_events_pending(struct vcpu *v);
+void vcpu_update_evtchn_irq(struct vcpu *v);
+void vcpu_block_unless_event_pending(struct vcpu *v);
+
+static inline int vcpu_event_delivery_is_enabled(struct vcpu *v)
+{
+    return 0;
+}
+
+static inline int local_events_need_delivery(void)
+{
+    return 0;
+}
+
+static inline void local_event_delivery_enable(void)
+{
+
+}
+
+/* No arch specific virq definition now. Default to global. */
+static inline bool arch_virq_is_global(unsigned int virq)
+{
+    return true;
+}
+
+#endif
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/fence.h b/xen/include/asm-riscv/fence.h
new file mode 100644
index 0000000000..2b443a3a48
--- /dev/null
+++ b/xen/include/asm-riscv/fence.h
@@ -0,0 +1,12 @@
+#ifndef _ASM_RISCV_FENCE_H
+#define _ASM_RISCV_FENCE_H
+
+#ifdef CONFIG_SMP
+#define RISCV_ACQUIRE_BARRIER		"\tfence r , rw\n"
+#define RISCV_RELEASE_BARRIER		"\tfence rw,  w\n"
+#else
+#define RISCV_ACQUIRE_BARRIER
+#define RISCV_RELEASE_BARRIER
+#endif
+
+#endif	/* _ASM_RISCV_FENCE_H */
diff --git a/xen/include/asm-riscv/flushtlb.h b/xen/include/asm-riscv/flushtlb.h
new file mode 100644
index 0000000000..7a4a4eee23
--- /dev/null
+++ b/xen/include/asm-riscv/flushtlb.h
@@ -0,0 +1,34 @@
+#ifndef __ASM_RISCV_FLUSHTLB_H__
+#define __ASM_RISCV_FLUSHTLB_H__
+
+#include <xen/cpumask.h>
+
+/*
+ * Filter the given set of CPUs, removing those that definitely flushed their
+ * TLB since @page_timestamp.
+ */
+/* XXX lazy implementation just doesn't clear anything.... */
+static inline void tlbflush_filter(cpumask_t *mask, uint32_t page_timestamp) {}
+
+/* Returning 0 from tlbflush_current_time will always force a flush. */
+static inline uint32_t tlbflush_current_time(void)
+{
+    return 0;
+}
+
+static inline void page_set_tlbflush_timestamp(struct page_info *page)
+{
+}
+
+/* Flush specified CPUs' TLBs */
+void arch_flush_tlb_mask(const cpumask_t *mask);
+
+#endif /* __ASM_RISCV_FLUSHTLB_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/grant_table.h b/xen/include/asm-riscv/grant_table.h
new file mode 100644
index 0000000000..8bcc05a60b
--- /dev/null
+++ b/xen/include/asm-riscv/grant_table.h
@@ -0,0 +1,12 @@
+#ifndef __ASM_GRANT_TABLE_H__
+#define __ASM_GRANT_TABLE_H__
+
+#endif /* __ASM_GRANT_TABLE_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/guest_access.h b/xen/include/asm-riscv/guest_access.h
new file mode 100644
index 0000000000..61a16044b2
--- /dev/null
+++ b/xen/include/asm-riscv/guest_access.h
@@ -0,0 +1,41 @@
+#ifndef __ASM_RISCV_GUEST_ACCESS_H__
+#define __ASM_RISCV_GUEST_ACCESS_H__
+
+#include <xen/errno.h>
+#include <xen/sched.h>
+
+unsigned long raw_copy_to_guest(void *to, const void *from, unsigned len);
+unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
+                                             unsigned len);
+unsigned long raw_copy_from_guest(void *to, const void *from, unsigned len);
+unsigned long raw_clear_guest(void *to, unsigned len);
+
+/* Copy data to guest physical address, then clean the region. */
+unsigned long copy_to_guest_phys_flush_dcache(struct domain *d,
+                                              paddr_t phys,
+                                              void *buf,
+                                              unsigned int len);
+
+int access_guest_memory_by_ipa(struct domain *d, paddr_t ipa, void *buf,
+                               uint32_t size, bool is_write);
+
+#define __raw_copy_to_guest raw_copy_to_guest
+#define __raw_copy_from_guest raw_copy_from_guest
+#define __raw_clear_guest raw_clear_guest
+
+/*
+ * Pre-validate a guest handle.
+ * Allows use of faster __copy_* functions.
+ */
+#define guest_handle_okay(hnd, nr) (1)
+#define guest_handle_subrange_okay(hnd, first, last) (1)
+
+#endif /* __ASM_RISCV_GUEST_ACCESS_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/guest_atomics.h b/xen/include/asm-riscv/guest_atomics.h
new file mode 100644
index 0000000000..85e82e8c7c
--- /dev/null
+++ b/xen/include/asm-riscv/guest_atomics.h
@@ -0,0 +1,60 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ */
+
+#ifndef _RISCV_GUEST_ATOMICS_H
+#define _RISCV_GUEST_ATOMICS_H
+
+#define guest_testop(name)                                                  \
+static inline int guest_##name(struct domain *d, int nr, volatile void *p)  \
+{                                                                           \
+    (void) d;                                                               \
+    (void) nr;                                                              \
+    (void) p;                                                               \
+                                                                            \
+    return 0;                                                               \
+}
+
+guest_testop(test_and_set_bit)
+guest_testop(test_and_clear_bit)
+guest_testop(test_and_change_bit)
+
+#undef guest_testop
+
+#define guest_bitop(name)                                                   \
+static inline void guest_##name(struct domain *d, int nr, volatile void *p) \
+{                                                                           \
+    (void) d;                                                               \
+    (void) nr;                                                              \
+    (void) p;                                                               \
+}
+
+guest_bitop(set_bit)
+guest_bitop(clear_bit)
+guest_bitop(change_bit)
+
+#undef guest_bitop
+
+#define guest_test_bit(d, nr, p) ((void)(d), test_bit(nr, p))
+
+#endif /* _RISCV_GUEST_ATOMICS_H */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/hardirq.h b/xen/include/asm-riscv/hardirq.h
new file mode 100644
index 0000000000..67b6a673db
--- /dev/null
+++ b/xen/include/asm-riscv/hardirq.h
@@ -0,0 +1,27 @@
+#ifndef __ASM_HARDIRQ_H
+#define __ASM_HARDIRQ_H
+
+#include <xen/cache.h>
+#include <xen/smp.h>
+
+typedef struct {
+        unsigned long __softirq_pending;
+        unsigned int __local_irq_count;
+} __cacheline_aligned irq_cpustat_t;
+
+#include <xen/irq_cpustat.h>    /* Standard mappings for irq_cpustat_t above */
+
+#define in_irq() (local_irq_count(smp_processor_id()) != 0)
+
+#define irq_enter()     (local_irq_count(smp_processor_id())++)
+#define irq_exit()      (local_irq_count(smp_processor_id())--)
+
+#endif /* __ASM_HARDIRQ_H */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/hypercall.h b/xen/include/asm-riscv/hypercall.h
new file mode 100644
index 0000000000..8af474b5e2
--- /dev/null
+++ b/xen/include/asm-riscv/hypercall.h
@@ -0,0 +1,12 @@
+#ifndef __ASM_RISCV_HYPERCALL_H__
+#define __ASM_RISCV_HYPERCALL_H__
+
+#endif /* __ASM_RISCV_HYPERCALL_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/init.h b/xen/include/asm-riscv/init.h
new file mode 100644
index 0000000000..d72e62f0c9
--- /dev/null
+++ b/xen/include/asm-riscv/init.h
@@ -0,0 +1,42 @@
+#ifndef _XEN_ASM_INIT_H
+#define _XEN_ASM_INIT_H
+
+#ifndef __ASSEMBLY__
+
+struct init_info {
+    /* Pointer to the stack, used by head.S when entering in C */
+    unsigned char *stack;
+
+    /* Logical CPU ID, used by start_secondary */
+    unsigned int cpuid;
+};
+
+#endif /* __ASSEMBLY__ */
+
+/* For assembly routines */
+#define __HEAD		.section	".head.text","ax"
+#define __INIT		.section	".init.text","ax"
+#define __FINIT		.previous
+
+#define __INITDATA	.section	".init.data","aw",%progbits
+#define __INITRODATA	.section	".init.rodata","a",%progbits
+#define __FINITDATA	.previous
+
+#define __MEMINIT        .section	".meminit.text", "ax"
+#define __MEMINITDATA    .section	".meminit.data", "aw"
+#define __MEMINITRODATA  .section	".meminit.rodata", "a"
+
+/* silence warnings when references are OK */
+#define __REF            .section       ".ref.text", "ax"
+#define __REFDATA        .section       ".ref.data", "aw"
+#define __REFCONST       .section       ".ref.rodata", "a"
+
+#endif /* _XEN_ASM_INIT_H */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/io.h b/xen/include/asm-riscv/io.h
new file mode 100644
index 0000000000..92d17ebfa8
--- /dev/null
+++ b/xen/include/asm-riscv/io.h
@@ -0,0 +1,283 @@
+/*
+ * {read,write}{b,w,l,q} based on arch/arm64/include/asm/io.h
+ *   which was based on arch/arm/include/io.h
+ *
+ * Copyright (C) 1996-2000 Russell King
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2014 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_IO_H
+#define _ASM_RISCV_IO_H
+
+#include <asm/byteorder.h>
+
+/*
+ * The RISC-V ISA doesn't yet specify how to query or modify PMAs, so we can't
+ * change the properties of memory regions.  This should be fixed by the
+ * upcoming platform spec.
+ */
+#define ioremap_nocache(addr, size) ioremap((addr), (size))
+#define ioremap_wc(addr, size) ioremap((addr), (size))
+#define ioremap_wt(addr, size) ioremap((addr), (size))
+
+/* Generic IO read/write.  These perform native-endian accesses. */
+#define __raw_writeb __raw_writeb
+static inline void __raw_writeb(u8 val, volatile void __iomem *addr)
+{
+	asm volatile("sb %0, 0(%1)" : : "r" (val), "r" (addr));
+}
+
+#define __raw_writew __raw_writew
+static inline void __raw_writew(u16 val, volatile void __iomem *addr)
+{
+	asm volatile("sh %0, 0(%1)" : : "r" (val), "r" (addr));
+}
+
+#define __raw_writel __raw_writel
+static inline void __raw_writel(u32 val, volatile void __iomem *addr)
+{
+	asm volatile("sw %0, 0(%1)" : : "r" (val), "r" (addr));
+}
+
+#ifdef CONFIG_64BIT
+#define __raw_writeq __raw_writeq
+static inline void __raw_writeq(u64 val, volatile void __iomem *addr)
+{
+	asm volatile("sd %0, 0(%1)" : : "r" (val), "r" (addr));
+}
+#endif
+
+#define __raw_readb __raw_readb
+static inline u8 __raw_readb(const volatile void __iomem *addr)
+{
+	u8 val;
+
+	asm volatile("lb %0, 0(%1)" : "=r" (val) : "r" (addr));
+	return val;
+}
+
+#define __raw_readw __raw_readw
+static inline u16 __raw_readw(const volatile void __iomem *addr)
+{
+	u16 val;
+
+	asm volatile("lh %0, 0(%1)" : "=r" (val) : "r" (addr));
+	return val;
+}
+
+#define __raw_readl __raw_readl
+static inline u32 __raw_readl(const volatile void __iomem *addr)
+{
+	u32 val;
+
+	asm volatile("lw %0, 0(%1)" : "=r" (val) : "r" (addr));
+	return val;
+}
+
+#ifdef CONFIG_64BIT
+#define __raw_readq __raw_readq
+static inline u64 __raw_readq(const volatile void __iomem *addr)
+{
+	u64 val;
+
+	asm volatile("ld %0, 0(%1)" : "=r" (val) : "r" (addr));
+	return val;
+}
+#endif
+
+/*
+ * Unordered I/O memory access primitives.  These are even more relaxed than
+ * the relaxed versions, as they don't even order accesses between successive
+ * operations to the I/O regions.
+ */
+#define readb_cpu(c)		({ u8  __r = __raw_readb(c); __r; })
+#define readw_cpu(c)		({ u16 __r = le16_to_cpu((__force __le16)__raw_readw(c)); __r; })
+#define readl_cpu(c)		({ u32 __r = le32_to_cpu((__force __le32)__raw_readl(c)); __r; })
+
+#define writeb_cpu(v,c)		((void)__raw_writeb((v),(c)))
+#define writew_cpu(v,c)		((void)__raw_writew((__force u16)cpu_to_le16(v),(c)))
+#define writel_cpu(v,c)		((void)__raw_writel((__force u32)cpu_to_le32(v),(c)))
+
+#ifdef CONFIG_64BIT
+#define readq_cpu(c)		({ u64 __r = le64_to_cpu((__force __le64)__raw_readq(c)); __r; })
+#define writeq_cpu(v,c)		((void)__raw_writeq((__force u64)cpu_to_le64(v),(c)))
+#endif
+
+/*
+ * Relaxed I/O memory access primitives. These follow the Device memory
+ * ordering rules but do not guarantee any ordering relative to Normal memory
+ * accesses.  These are defined to order the indicated access (either a read or
+ * write) with all other I/O memory accesses. Since the platform specification
+ * defines that all I/O regions are strongly ordered on channel 2, no explicit
+ * fences are required to enforce this ordering.
+ */
+/* FIXME: These are now the same as asm-generic */
+#define __io_rbr()		do {} while (0)
+#define __io_rar()		do {} while (0)
+#define __io_rbw()		do {} while (0)
+#define __io_raw()		do {} while (0)
+
+#define readb_relaxed(c)	({ u8  __v; __io_rbr(); __v = readb_cpu(c); __io_rar(); __v; })
+#define readw_relaxed(c)	({ u16 __v; __io_rbr(); __v = readw_cpu(c); __io_rar(); __v; })
+#define readl_relaxed(c)	({ u32 __v; __io_rbr(); __v = readl_cpu(c); __io_rar(); __v; })
+
+#define writeb_relaxed(v,c)	({ __io_rbw(); writeb_cpu((v),(c)); __io_raw(); })
+#define writew_relaxed(v,c)	({ __io_rbw(); writew_cpu((v),(c)); __io_raw(); })
+#define writel_relaxed(v,c)	({ __io_rbw(); writel_cpu((v),(c)); __io_raw(); })
+
+#ifdef CONFIG_64BIT
+#define readq_relaxed(c)	({ u64 __v; __io_rbr(); __v = readq_cpu(c); __io_rar(); __v; })
+#define writeq_relaxed(v,c)	({ __io_rbw(); writeq_cpu((v),(c)); __io_raw(); })
+#endif
+
+/*
+ * I/O memory access primitives. Reads are ordered relative to any
+ * following Normal memory access. Writes are ordered relative to any prior
+ * Normal memory access.  The memory barriers here are necessary as RISC-V
+ * doesn't define any ordering between the memory space and the I/O space.
+ */
+#define __io_br()	do {} while (0)
+#define __io_ar(v)	__asm__ __volatile__ ("fence i,r" : : : "memory");
+#define __io_bw()	__asm__ __volatile__ ("fence w,o" : : : "memory");
+#define __io_aw()	do { } while (0)
+
+#define readb(c)	({ u8  __v; __io_br(); __v = readb_cpu(c); __io_ar(__v); __v; })
+#define readw(c)	({ u16 __v; __io_br(); __v = readw_cpu(c); __io_ar(__v); __v; })
+#define readl(c)	({ u32 __v; __io_br(); __v = readl_cpu(c); __io_ar(__v); __v; })
+
+#define writeb(v,c)	({ __io_bw(); writeb_cpu((v),(c)); __io_aw(); })
+#define writew(v,c)	({ __io_bw(); writew_cpu((v),(c)); __io_aw(); })
+#define writel(v,c)	({ __io_bw(); writel_cpu((v),(c)); __io_aw(); })
+
+#ifdef CONFIG_64BIT
+#define readq(c)	({ u64 __v; __io_br(); __v = readq_cpu(c); __io_ar(__v); __v; })
+#define writeq(v,c)	({ __io_bw(); writeq_cpu((v),(c)); __io_aw(); })
+#endif
+
+/*
+ * Emulation routines for the port-mapped IO space used by some PCI drivers.
+ * These are defined as being "fully synchronous", but also "not guaranteed to
+ * be fully ordered with respect to other memory and I/O operations".  We're
+ * going to be on the safe side here and just make them:
+ *  - Fully ordered WRT each other, by bracketing them with two fences.  The
+ *    outer set contains both I/O so inX is ordered with outX, while the inner just
+ *    needs the type of the access (I for inX and O for outX).
+ *  - Ordered in the same manner as readX/writeX WRT memory by subsuming their
+ *    fences.
+ *  - Ordered WRT timer reads, so udelay and friends don't get elided by the
+ *    implementation.
+ * Note that there is no way to actually enforce that outX is a non-posted
+ * operation on RISC-V, but hopefully the timer ordering constraint is
+ * sufficient to ensure this works sanely on controllers that support I/O
+ * writes.
+ */
+#define __io_pbr()	__asm__ __volatile__ ("fence io,i"  : : : "memory");
+#define __io_par(v)	__asm__ __volatile__ ("fence i,ior" : : : "memory");
+#define __io_pbw()	__asm__ __volatile__ ("fence iow,o" : : : "memory");
+#define __io_paw()	__asm__ __volatile__ ("fence o,io"  : : : "memory");
+
+#define inb(c)		({ u8  __v; __io_pbr(); __v = readb_cpu((void*)(PCI_IOBASE + (c))); __io_par(__v); __v; })
+#define inw(c)		({ u16 __v; __io_pbr(); __v = readw_cpu((void*)(PCI_IOBASE + (c))); __io_par(__v); __v; })
+#define inl(c)		({ u32 __v; __io_pbr(); __v = readl_cpu((void*)(PCI_IOBASE + (c))); __io_par(__v); __v; })
+
+#define outb(v,c)	({ __io_pbw(); writeb_cpu((v),(void*)(PCI_IOBASE + (c))); __io_paw(); })
+#define outw(v,c)	({ __io_pbw(); writew_cpu((v),(void*)(PCI_IOBASE + (c))); __io_paw(); })
+#define outl(v,c)	({ __io_pbw(); writel_cpu((v),(void*)(PCI_IOBASE + (c))); __io_paw(); })
+
+#ifdef CONFIG_64BIT
+#define inq(c)		({ u64 __v; __io_pbr(); __v = readq_cpu((void*)(c)); __io_par(__v); __v; })
+#define outq(v,c)	({ __io_pbw(); writeq_cpu((v),(void*)(c)); __io_paw(); })
+#endif
+
+/*
+ * Accesses from a single hart to a single I/O address must be ordered.  This
+ * allows us to use the raw read macros, but we still need to fence before and
+ * after the block to ensure ordering WRT other macros.  These are defined to
+ * perform host-endian accesses so we use __raw instead of __cpu.
+ */
+#define __io_reads_ins(port, ctype, len, bfence, afence)			\
+	static inline void __ ## port ## len(const volatile void __iomem *addr,	\
+					     void *buffer,			\
+					     unsigned int count)		\
+	{									\
+		bfence;								\
+		if (count) {							\
+			ctype *buf = buffer;					\
+										\
+			do {							\
+				ctype x = __raw_read ## len(addr);		\
+				*buf++ = x;					\
+			} while (--count);					\
+		}								\
+		afence;								\
+	}
+
+#define __io_writes_outs(port, ctype, len, bfence, afence)			\
+	static inline void __ ## port ## len(volatile void __iomem *addr,	\
+					     const void *buffer,		\
+					     unsigned int count)		\
+	{									\
+		bfence;								\
+		if (count) {							\
+			const ctype *buf = buffer;				\
+										\
+			do {							\
+				__raw_write ## len(*buf++, addr);		\
+			} while (--count);					\
+		}								\
+		afence;								\
+	}
+
+__io_reads_ins(reads,  u8, b, __io_br(), __io_ar(addr))
+__io_reads_ins(reads, u16, w, __io_br(), __io_ar(addr))
+__io_reads_ins(reads, u32, l, __io_br(), __io_ar(addr))
+#define readsb(addr, buffer, count) __readsb(addr, buffer, count)
+#define readsw(addr, buffer, count) __readsw(addr, buffer, count)
+#define readsl(addr, buffer, count) __readsl(addr, buffer, count)
+
+__io_reads_ins(ins,  u8, b, __io_pbr(), __io_par(addr))
+__io_reads_ins(ins, u16, w, __io_pbr(), __io_par(addr))
+__io_reads_ins(ins, u32, l, __io_pbr(), __io_par(addr))
+#define insb(addr, buffer, count) __insb((void __iomem *)(long)addr, buffer, count)
+#define insw(addr, buffer, count) __insw((void __iomem *)(long)addr, buffer, count)
+#define insl(addr, buffer, count) __insl((void __iomem *)(long)addr, buffer, count)
+
+__io_writes_outs(writes,  u8, b, __io_bw(), __io_aw())
+__io_writes_outs(writes, u16, w, __io_bw(), __io_aw())
+__io_writes_outs(writes, u32, l, __io_bw(), __io_aw())
+#define writesb(addr, buffer, count) __writesb(addr, buffer, count)
+#define writesw(addr, buffer, count) __writesw(addr, buffer, count)
+#define writesl(addr, buffer, count) __writesl(addr, buffer, count)
+
+__io_writes_outs(outs,  u8, b, __io_pbw(), __io_paw())
+__io_writes_outs(outs, u16, w, __io_pbw(), __io_paw())
+__io_writes_outs(outs, u32, l, __io_pbw(), __io_paw())
+#define outsb(addr, buffer, count) __outsb((void __iomem *)(long)addr, buffer, count)
+#define outsw(addr, buffer, count) __outsw((void __iomem *)(long)addr, buffer, count)
+#define outsl(addr, buffer, count) __outsl((void __iomem *)(long)addr, buffer, count)
+
+#ifdef CONFIG_64BIT
+__io_reads_ins(reads, u64, q, __io_br(), __io_ar(addr))
+#define readsq(addr, buffer, count) __readsq(addr, buffer, count)
+
+__io_reads_ins(ins, u64, q, __io_pbr(), __io_par(addr))
+#define insq(addr, buffer, count) __insq((void __iomem *)addr, buffer, count)
+
+__io_writes_outs(writes, u64, q, __io_bw(), __io_aw())
+#define writesq(addr, buffer, count) __writesq(addr, buffer, count)
+
+__io_writes_outs(outs, u64, q, __io_pbw(), __io_paw())
+#define outsq(addr, buffer, count) __outsq((void __iomem *)addr, buffer, count)
+#endif
+
+#endif /* _ASM_RISCV_IO_H */
diff --git a/xen/include/asm-riscv/iocap.h b/xen/include/asm-riscv/iocap.h
new file mode 100644
index 0000000000..e38a7ff3dc
--- /dev/null
+++ b/xen/include/asm-riscv/iocap.h
@@ -0,0 +1,13 @@
+#ifndef __RISCV_IOCAP_H__
+#define __RISCV_IOCAP_H__
+
+#endif
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/iommu.h b/xen/include/asm-riscv/iommu.h
new file mode 100644
index 0000000000..c4f24574ec
--- /dev/null
+++ b/xen/include/asm-riscv/iommu.h
@@ -0,0 +1,46 @@
+/******************************************************************************
+ *
+ * Copyright 2019 (C) Alistair Francis <alistair.francis@wdc.com>
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to
+ * deal in the Software without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __ARCH_RISCV_IOMMU_H__
+#define __ARCH_RISCV_IOMMU_H__
+
+struct arch_iommu
+{
+    /* Private information for the IOMMU drivers */
+    void *priv;
+};
+
+const struct iommu_ops *iommu_get_ops(void);
+void iommu_set_ops(const struct iommu_ops *ops);
+
+#endif /* __ARCH_RISCV_IOMMU_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/irq.h b/xen/include/asm-riscv/irq.h
new file mode 100644
index 0000000000..ae17872d4d
--- /dev/null
+++ b/xen/include/asm-riscv/irq.h
@@ -0,0 +1,58 @@
+#ifndef _ASM_HW_IRQ_H
+#define _ASM_HW_IRQ_H
+
+#include <public/device_tree_defs.h>
+
+/*
+ * These defines correspond to the Xen internal representation of the
+ * IRQ types. We choose to make them the same as the existing device
+ * tree definitions for convenience.
+ */
+#define IRQ_TYPE_NONE           DT_IRQ_TYPE_NONE
+#define IRQ_TYPE_EDGE_RISING    DT_IRQ_TYPE_EDGE_RISING
+#define IRQ_TYPE_EDGE_FALLING   DT_IRQ_TYPE_EDGE_FALLING
+#define IRQ_TYPE_EDGE_BOTH      DT_IRQ_TYPE_EDGE_BOTH
+#define IRQ_TYPE_LEVEL_HIGH     DT_IRQ_TYPE_LEVEL_HIGH
+#define IRQ_TYPE_LEVEL_LOW      DT_IRQ_TYPE_LEVEL_LOW
+#define IRQ_TYPE_LEVEL_MASK     DT_IRQ_TYPE_LEVEL_MASK
+#define IRQ_TYPE_SENSE_MASK     DT_IRQ_TYPE_SENSE_MASK
+#define IRQ_TYPE_INVALID        DT_IRQ_TYPE_INVALID
+
+#define NR_LOCAL_IRQS	32
+#define NR_IRQS		1024
+
+typedef struct {
+} vmask_t;
+
+struct arch_pirq
+{
+};
+
+struct arch_irq_desc {
+};
+
+struct irq_desc;
+
+struct irq_desc *__irq_to_desc(int irq);
+
+#define irq_to_desc(irq)    __irq_to_desc(irq)
+
+void arch_move_irqs(struct vcpu *v);
+
+#define domain_pirq_to_irq(d, pirq) (pirq)
+
+extern const unsigned int nr_irqs;
+#define nr_static_irqs NR_IRQS
+#define arch_hwdom_irqs(domid) NR_IRQS
+
+#define arch_evtchn_bind_pirq(d, pirq) ((void)((d) + (pirq)))
+
+#endif /* _ASM_HW_IRQ_H */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/mem_access.h b/xen/include/asm-riscv/mem_access.h
new file mode 100644
index 0000000000..8348a04a53
--- /dev/null
+++ b/xen/include/asm-riscv/mem_access.h
@@ -0,0 +1,4 @@
+#ifndef __RISCV_MEM_ACCESS_H__
+#define __RISCV_MEM_ACCESS_H__
+
+#endif
diff --git a/xen/include/asm-riscv/mm.h b/xen/include/asm-riscv/mm.h
new file mode 100644
index 0000000000..e1972a8c20
--- /dev/null
+++ b/xen/include/asm-riscv/mm.h
@@ -0,0 +1,246 @@
+#ifndef __ARCH_RISCV_MM__
+#define __ARCH_RISCV_MM__
+
+#include <xen/errno.h>
+#include <asm/page.h>
+#include <public/xen.h>
+
+extern unsigned long max_page;
+extern unsigned long total_pages;
+extern unsigned long frametable_base_mfn;
+extern mfn_t xenheap_mfn_start;
+extern mfn_t xenheap_mfn_end;
+extern vaddr_t xenheap_virt_end;
+extern vaddr_t xenheap_virt_start;
+
+/* Per-page-frame information. */
+struct page_info {
+    /* Each frame can be threaded onto a doubly-linked list. */
+    struct page_list_entry list;
+
+    /* Reference count and various PGC_xxx flags and fields. */
+    unsigned long count_info;
+
+    /* Context-dependent fields follow... */
+    union {
+        /* Page is in use: ((count_info & PGC_count_mask) != 0). */
+        struct {
+            /* Type reference count and various PGT_xxx flags and fields. */
+            unsigned long type_info;
+        } inuse;
+        /* Page is on a free list: ((count_info & PGC_count_mask) == 0). */
+        union {
+            struct {
+                /*
+                 * Index of the first *possibly* unscrubbed page in the buddy.
+                 * One more bit than maximum possible order to accommodate
+                 * INVALID_DIRTY_IDX.
+                 */
+#define INVALID_DIRTY_IDX ((1UL << (MAX_ORDER + 1)) - 1)
+                unsigned long first_dirty:MAX_ORDER + 1;
+
+                /* Do TLBs need flushing for safety before next page use? */
+                bool need_tlbflush:1;
+
+#define BUDDY_NOT_SCRUBBING    0
+#define BUDDY_SCRUBBING        1
+#define BUDDY_SCRUB_ABORT      2
+                unsigned long scrub_state:2;
+            };
+
+            unsigned long val;
+        } free;
+    } u;
+
+    union {
+        /* Page is in use, but not as a shadow. */
+        struct {
+            /* Owner of this page (zero if page is anonymous). */
+            struct domain *domain;
+        } inuse;
+
+        /* Page is on a free list. */
+        struct {
+            /* Order-size of the free chunk this page is the head of. */
+            unsigned int order;
+        } free;
+    } v;
+
+    union {
+        /*
+         * Timestamp from 'TLB clock', used to avoid extra safety flushes.
+         * Only valid for: a) free pages, and b) pages with zero type count
+         */
+        u32 tlbflush_timestamp;
+    };
+};
+
+#define PFN_ORDER(_pfn) ((_pfn)->v.free.order)
+
+#define PG_shift(idx)   (BITS_PER_LONG - (idx))
+#define PG_mask(x, idx) (x ## UL << PG_shift(idx))
+
+#define PGT_writable_page   PG_mask(1, 1)  /* has writable mappings?         */
+
+/* Count of uses of this frame as its current type. */
+#define PGT_count_width     PG_shift(2)
+#define PGT_count_mask      ((1UL<<PGT_count_width)-1)
+
+/* Cleared when the owning guest 'frees' this page. */
+#define _PGC_allocated      PG_shift(1)
+#define PGC_allocated       PG_mask(1, 1)
+
+/* Page is Xen heap? */
+#define _PGC_xen_heap       PG_shift(2)
+#define PGC_xen_heap        PG_mask(1, 2)
+
+/* Page is broken? */
+#define _PGC_broken         PG_shift(7)
+#define PGC_broken          PG_mask(1, 7)
+
+/* Mutually-exclusive page states: { inuse, offlining, offlined, free }. */
+#define PGC_state           PG_mask(3, 9)
+#define PGC_state_inuse     PG_mask(0, 9)
+#define PGC_state_offlining PG_mask(1, 9)
+#define PGC_state_offlined  PG_mask(2, 9)
+#define PGC_state_free      PG_mask(3, 9)
+#define page_state_is(pg, st) (((pg)->count_info & PGC_state) == PGC_state_##st)
+
+/* Page is not reference counted */
+#define _PGC_extra          PG_shift(10)
+#define PGC_extra           PG_mask(1, 10)
+
+/* Count of references to this frame. */
+#define PGC_count_width     PG_shift(9)
+#define PGC_count_mask      ((1UL<<PGC_count_width)-1)
+
+/*
+ * Page needs to be scrubbed. Since this bit can only be set on a page that is
+ * free (i.e. in PGC_state_free) we can reuse PGC_allocated bit.
+ */
+#define _PGC_need_scrub   _PGC_allocated
+#define PGC_need_scrub    PGC_allocated
+
+#define is_xen_heap_page(page) ((page)->count_info & PGC_xen_heap)
+#define is_xen_heap_mfn(mfn) \
+    (mfn_valid(_mfn(mfn)) && is_xen_heap_page(mfn_to_page(_mfn(mfn))))
+
+#define is_xen_fixed_mfn(mfn)                                   \
+    ((mfn_to_maddr(mfn) >= virt_to_maddr(&_start)) &&           \
+     (mfn_to_maddr(mfn) <= virt_to_maddr(&_end)))
+
+#define page_get_owner(_p)    (_p)->v.inuse.domain
+#define page_set_owner(_p,_d) ((_p)->v.inuse.domain = (_d))
+
+#define maddr_get_owner(ma)   (page_get_owner(maddr_to_page((ma))))
+
+#define mfn_valid(mfn) ({                                          \
+    unsigned long mfn_ = mfn_x(mfn);                               \
+    likely(mfn_ >= frametable_base_mfn && mfn_ < max_page);        \
+})
+
+/* Convert between machine frame numbers and page-info structures. */
+#define frame_table ((struct page_info *)FRAMETABLE_VIRT_START)
+#define mfn_to_page(mfn)                                           \
+    (frame_table + (mfn_x(mfn) - frametable_base_mfn))
+#define page_to_mfn(pg)                                            \
+    _mfn(((unsigned long)((pg) - frame_table) + frametable_base_mfn))
+
+/* Convert between machine addresses and page-info structures. */
+#define maddr_to_page(ma) mfn_to_page(maddr_to_mfn(ma))
+#define page_to_maddr(pg) (mfn_to_maddr(page_to_mfn(pg)))
+
+/* Convert between frame number and address formats.  */
+#define pfn_to_paddr(pfn) ((paddr_t)(pfn) << PAGE_SHIFT)
+#define paddr_to_pfn(pa)  ((unsigned long)((pa) >> PAGE_SHIFT))
+#define mfn_to_maddr(mfn) pfn_to_paddr(mfn_x(mfn))
+#define maddr_to_mfn(ma)  _mfn(paddr_to_pfn(ma))
+#define vmap_to_mfn(va)   maddr_to_mfn(virt_to_maddr((vaddr_t)(va)))
+#define vmap_to_page(va)  mfn_to_page(vmap_to_mfn(va))
+
+static inline void *maddr_to_virt(paddr_t ma)
+{
+    return (void *)0xdeadbeef;
+}
+
+static inline paddr_t __virt_to_maddr(vaddr_t va)
+{
+    return 0;
+}
+
+#define virt_to_maddr(va)  __virt_to_maddr((vaddr_t) (va))
+
+/* Convert between Xen-heap virtual addresses and machine addresses. */
+#define __pa(x)            (virt_to_maddr(x))
+#define __va(x)            (maddr_to_virt(x))
+
+/* Convert between Xen-heap virtual addresses and machine frame numbers. */
+#define __virt_to_mfn(va)  (virt_to_maddr(va) >> PAGE_SHIFT)
+#define __mfn_to_virt(mfn) (maddr_to_virt((paddr_t)(mfn) << PAGE_SHIFT))
+
+/*
+ * We define non-underscored wrappers for the above conversion functions.
+ * These are overridden in various source files while the underscored
+ * versions remain intact.
+ */
+#define virt_to_mfn(va)    __virt_to_mfn(va)
+#define mfn_to_virt(mfn)   __mfn_to_virt(mfn)
+
+/* Convert between Xen-heap virtual addresses and page-info structures. */
+static inline struct page_info *virt_to_page(const void *v)
+{
+    return (void *)0xdeadbeef;
+}
+
+static inline void *page_to_virt(const struct page_info *pg)
+{
+    return (void *)0xdeadbeef;
+}
+
+#define domain_set_alloc_bitsize(d) ((void)0)
+#define domain_clamp_alloc_bitsize(d, b) (b)
+
+/*
+ * RISC-V does not have an M2P, but common code expects a handful of
+ * M2P-related defines and functions. Provide dummy versions of these.
+ */
+#define INVALID_M2P_ENTRY        (~0UL)
+#define SHARED_M2P_ENTRY         (~0UL - 1UL)
+#define SHARED_M2P(_e)           ((_e) == SHARED_M2P_ENTRY)
+
+/* Xen always owns P2M on RISC-V */
+#define set_gpfn_from_mfn(mfn, pfn) do { (void)(mfn), (void)(pfn); } while (0)
+#define mfn_to_gmfn(_d, mfn)  (mfn)
+
+/* Arch-specific portion of memory_op hypercall. */
+long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
+
+extern void put_page_type(struct page_info *page);
+
+static inline void put_page_and_type(struct page_info *page)
+{
+}
+
+int guest_physmap_mark_populate_on_demand(struct domain *d, unsigned long gfn,
+                                          unsigned int order);
+
+unsigned long domain_get_maximum_gpfn(struct domain *d);
+
+/*
+ * On RISC-V, all RAM is currently direct mapped in Xen.
+ * Hence always return true.
+ */
+static inline bool arch_mfn_in_directmap(unsigned long mfn)
+{
+    return true;
+}
+
+#endif /*  __ARCH_RISCV_MM__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/monitor.h b/xen/include/asm-riscv/monitor.h
new file mode 100644
index 0000000000..e77d21dba4
--- /dev/null
+++ b/xen/include/asm-riscv/monitor.h
@@ -0,0 +1,65 @@
+/*
+ * include/asm-riscv/monitor.h
+ *
+ * Arch-specific monitor_op domctl handler.
+ *
+ * Copyright (c) 2015 Tamas K Lengyel (tamas@tklengyel.com)
+ * Copyright (c) 2016, Bitdefender S.R.L.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public
+ * License v2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public
+ * License along with this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __ASM_RISCV_MONITOR_H__
+#define __ASM_RISCV_MONITOR_H__
+
+#include <xen/sched.h>
+#include <public/domctl.h>
+
+static inline
+void arch_monitor_allow_userspace(struct domain *d, bool allow_userspace)
+{
+}
+
+static inline
+int arch_monitor_domctl_op(struct domain *d, struct xen_domctl_monitor_op *mop)
+{
+    /* No arch-specific monitor ops on RISCV. */
+    return -EOPNOTSUPP;
+}
+
+int arch_monitor_domctl_event(struct domain *d,
+                              struct xen_domctl_monitor_op *mop);
+
+static inline
+int arch_monitor_init_domain(struct domain *d)
+{
+    /* No arch-specific domain initialization on RISCV. */
+    return 0;
+}
+
+static inline
+void arch_monitor_cleanup_domain(struct domain *d)
+{
+    /* No arch-specific domain cleanup on RISCV. */
+}
+
+static inline uint32_t arch_monitor_get_capabilities(struct domain *d)
+{
+    uint32_t capabilities = 0;
+
+    return capabilities;
+}
+
+int monitor_smc(void);
+
+#endif /* __ASM_RISCV_MONITOR_H__ */
diff --git a/xen/include/asm-riscv/nospec.h b/xen/include/asm-riscv/nospec.h
new file mode 100644
index 0000000000..55087fa831
--- /dev/null
+++ b/xen/include/asm-riscv/nospec.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. */
+
+#ifndef _ASM_RISCV_NOSPEC_H
+#define _ASM_RISCV_NOSPEC_H
+
+static inline bool evaluate_nospec(bool condition)
+{
+    return condition;
+}
+
+static inline void block_speculation(void)
+{
+}
+
+#endif /* _ASM_RISCV_NOSPEC_H */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/numa.h b/xen/include/asm-riscv/numa.h
new file mode 100644
index 0000000000..52bdfbc16b
--- /dev/null
+++ b/xen/include/asm-riscv/numa.h
@@ -0,0 +1,41 @@
+#ifndef __ARCH_RISCV_NUMA_H
+#define __ARCH_RISCV_NUMA_H
+
+#include <xen/mm.h>
+
+typedef u8 nodeid_t;
+
+/* Fake one node for now. See also node_online_map. */
+#define cpu_to_node(cpu) 0
+#define node_to_cpumask(node)   (cpu_online_map)
+
+static inline __attribute__((pure)) nodeid_t phys_to_nid(paddr_t addr)
+{
+    return 0;
+}
+
+/*
+ * TODO: make first_valid_mfn static once NUMA is supported on RISC-V;
+ * it is only exported because the dummy helpers here use it.
+ */
+extern mfn_t first_valid_mfn;
+
+/* XXX: implement NUMA support */
+#define node_spanned_pages(nid) (max_page - mfn_x(first_valid_mfn))
+#define node_start_pfn(nid) (mfn_x(first_valid_mfn))
+#define __node_distance(a, b) (20)
+
+static inline unsigned int arch_get_dma_bitsize(void)
+{
+    return 32;
+}
+
+#endif /* __ARCH_RISCV_NUMA_H */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/p2m.h b/xen/include/asm-riscv/p2m.h
new file mode 100644
index 0000000000..1bb2009d53
--- /dev/null
+++ b/xen/include/asm-riscv/p2m.h
@@ -0,0 +1,218 @@
+#ifndef _XEN_P2M_H
+#define _XEN_P2M_H
+
+#include <xen/mm.h>
+#include <xen/mem_access.h>
+#include <xen/errno.h>
+
+struct domain;
+
+extern void memory_type_changed(struct domain *);
+
+/* Per-p2m-table state */
+struct p2m_domain {
+};
+
+typedef enum {
+    p2m_invalid = 0
+} p2m_type_t;
+
+/* All common type definitions should live ahead of this inclusion. */
+#ifdef _XEN_P2M_COMMON_H
+# error "xen/p2m-common.h should not be included directly"
+#endif
+#include <xen/p2m-common.h>
+
+static inline bool arch_acquire_resource_check(struct domain *d)
+{
+    return true;
+}
+
+static inline
+void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
+{
+}
+
+/* Second stage paging setup, to be called on all CPUs */
+void setup_virt_paging(void);
+
+/* Init the datastructures for later use by the p2m code */
+int p2m_init(struct domain *d);
+
+/* Return all the p2m resources to Xen. */
+void p2m_teardown(struct domain *d);
+
+/* Remove mapping refcount on each mapping page in the p2m */
+int relinquish_p2m_mapping(struct domain *d);
+
+/* Context switch */
+void p2m_save_state(struct vcpu *p);
+void p2m_restore_state(struct vcpu *n);
+
+/* Print debugging/statistical info about a domain's p2m */
+void p2m_dump_info(struct domain *d);
+
+static inline void p2m_write_lock(struct p2m_domain *p2m)
+{
+}
+
+void p2m_write_unlock(struct p2m_domain *p2m);
+
+static inline void p2m_read_lock(struct p2m_domain *p2m)
+{
+}
+
+static inline void p2m_read_unlock(struct p2m_domain *p2m)
+{
+}
+
+static inline int p2m_is_locked(struct p2m_domain *p2m)
+{
+    return 0;
+}
+
+static inline int p2m_is_write_locked(struct p2m_domain *p2m)
+{
+    return 0;
+}
+
+void p2m_tlb_flush_sync(struct p2m_domain *p2m);
+
+/* Look up the MFN corresponding to a domain's GFN. */
+mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t);
+
+/*
+ * Get details of a given gfn.
+ * The P2M lock should be taken by the caller.
+ */
+mfn_t p2m_get_entry(struct p2m_domain *p2m, gfn_t gfn,
+                    p2m_type_t *t, p2m_access_t *a,
+                    unsigned int *page_order,
+                    bool *valid);
+
+/*
+ * Direct set a p2m entry: only for use by the P2M code.
+ * The P2M write lock should be taken.
+ */
+int p2m_set_entry(struct p2m_domain *p2m,
+                  gfn_t sgfn,
+                  unsigned long nr,
+                  mfn_t smfn,
+                  p2m_type_t t,
+                  p2m_access_t a);
+
+bool p2m_resolve_translation_fault(struct domain *d, gfn_t gfn);
+
+void p2m_invalidate_root(struct p2m_domain *p2m);
+
+/*
+ * Clean & invalidate caches corresponding to a region [start,end) of guest
+ * address space.
+ *
+ * start will get updated if the function is preempted.
+ */
+int p2m_cache_flush_range(struct domain *d, gfn_t *pstart, gfn_t end);
+
+void p2m_set_way_flush(struct vcpu *v);
+
+void p2m_toggle_cache(struct vcpu *v, bool was_enabled);
+
+void p2m_flush_vm(struct vcpu *v);
+
+/*
+ * Map a region in the guest p2m with a specific p2m type.
+ * The memory attributes will be derived from the p2m type.
+ */
+int map_regions_p2mt(struct domain *d,
+                     gfn_t gfn,
+                     unsigned long nr,
+                     mfn_t mfn,
+                     p2m_type_t p2mt);
+
+int unmap_regions_p2mt(struct domain *d,
+                       gfn_t gfn,
+                       unsigned long nr,
+                       mfn_t mfn);
+
+int map_dev_mmio_region(struct domain *d,
+                        gfn_t gfn,
+                        unsigned long nr,
+                        mfn_t mfn);
+
+int guest_physmap_add_entry(struct domain *d,
+                            gfn_t gfn,
+                            mfn_t mfn,
+                            unsigned long page_order,
+                            p2m_type_t t);
+
+/* Untyped version for RAM only, for compatibility */
+static inline int guest_physmap_add_page(struct domain *d,
+                                         gfn_t gfn,
+                                         mfn_t mfn,
+                                         unsigned int page_order)
+{
+    return 0;
+}
+
+mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn);
+
+/* Look up a GFN and take a reference count on the backing page. */
+typedef unsigned int p2m_query_t;
+#define P2M_ALLOC    (1u<<0)   /* Populate PoD and paged-out entries */
+#define P2M_UNSHARE  (1u<<1)   /* Break CoW sharing */
+
+struct page_info *p2m_get_page_from_gfn(struct domain *d, gfn_t gfn,
+                                        p2m_type_t *t);
+
+static inline struct page_info *get_page_from_gfn(
+    struct domain *d, unsigned long gfn, p2m_type_t *t, p2m_query_t q)
+{
+    *t = p2m_invalid;
+    return (void *) 0xdeadbeef;
+}
+
+int get_page_type(struct page_info *page, unsigned long type);
+bool is_iomem_page(mfn_t mfn);
+static inline int get_page_and_type(struct page_info *page,
+                                    struct domain *domain,
+                                    unsigned long type)
+{
+    return 0;
+}
+
+/* get host p2m table */
+#define p2m_get_hostp2m(d) (&(d)->arch.p2m)
+
+static inline bool p2m_vm_event_sanity_check(struct domain *d)
+{
+    return true;
+}
+
+/*
+ * Return the start of the next mapping based on the order of the
+ * current one.
+ */
+static inline gfn_t gfn_next_boundary(gfn_t gfn, unsigned int order)
+{
+    return gfn;
+}
+
+/*
+ * A vCPU has cache enabled only when the MMU is enabled and data cache
+ * is enabled.
+ */
+static inline bool vcpu_has_cache_enabled(struct vcpu *v)
+{
+    return false;
+}
+
+#endif /* _XEN_P2M_H */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/page-bits.h b/xen/include/asm-riscv/page-bits.h
new file mode 100644
index 0000000000..5a47701fea
--- /dev/null
+++ b/xen/include/asm-riscv/page-bits.h
@@ -0,0 +1,11 @@
+#ifndef __RISCV_PAGE_SHIFT_H__
+#define __RISCV_PAGE_SHIFT_H__
+
+#define PAGE_SHIFT              12
+
+#ifdef CONFIG_RISCV_64
+#define PADDR_BITS              56
+#define VADDR_BITS              39
+#endif
+
+#endif /* __RISCV_PAGE_SHIFT_H__ */
diff --git a/xen/include/asm-riscv/page.h b/xen/include/asm-riscv/page.h
new file mode 100644
index 0000000000..36c8732efe
--- /dev/null
+++ b/xen/include/asm-riscv/page.h
@@ -0,0 +1,73 @@
+/*
+ * Copyright (C) 2009 Chen Liqin <liqin.chen@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ * Copyright (C) 2017 XiaojingZhu <zhuxiaoj@ict.ac.cn>
+ * Copyright (C) 2019 Bobby Eshleman <bobbyeshleman@gmail.com>
+ * Copyright (C) 2021 Connor Davis <connojd@pm.me>
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_PAGE_H
+#define _ASM_RISCV_PAGE_H
+
+#include <xen/const.h>
+#include <xen/page-size.h>
+
+#define PAGE_ALIGN(x)   (((x) + PAGE_SIZE - 1) & PAGE_MASK)
+#define paddr_bits      PADDR_BITS
+
+#define PTE_VALID       BIT(0, UL)
+#define PTE_READABLE    BIT(1, UL)
+#define PTE_WRITABLE    BIT(2, UL)
+#define PTE_EXECUTABLE  BIT(3, UL)
+#define PTE_USER        BIT(4, UL)
+#define PTE_GLOBAL      BIT(5, UL)
+#define PTE_ACCESSED    BIT(6, UL)
+#define PTE_DIRTY       BIT(7, UL)
+#define PTE_RSW         (BIT(8, UL) | BIT(9, UL))
+
+#ifndef __ASSEMBLY__
+
+#define PAGE_HYPERVISOR_RO  (PTE_VALID|PTE_READABLE|PTE_ACCESSED)
+#define PAGE_HYPERVISOR_RX  (PAGE_HYPERVISOR_RO|PTE_EXECUTABLE)
+#define PAGE_HYPERVISOR_RW  (PAGE_HYPERVISOR_RO|PTE_WRITABLE|PTE_DIRTY)
+
+/*
+ * RISC-V does not encode cacheability attributes in the PTEs;
+ * cacheability is instead determined by the platform's Physical
+ * Memory Attributes (PMAs).
+ */
+#define PAGE_HYPERVISOR         PAGE_HYPERVISOR_RW
+#define PAGE_HYPERVISOR_NOCACHE PAGE_HYPERVISOR
+#define PAGE_HYPERVISOR_WC      PAGE_HYPERVISOR
+
+typedef struct {
+    unsigned long pte;
+} pte_t;
+
+#define clear_page(pgaddr)  memset((pgaddr), 0, PAGE_SIZE)
+#define copy_page(to, from) memcpy((to), (from), PAGE_SIZE)
+
+/*
+ * Ensure that stores to instruction memory are locally visible to
+ * subsequent fetches on this hart.
+ */
+static inline void invalidate_icache(void)
+{
+    asm volatile ("fence.i" ::: "memory");
+}
+
+/* Flush the dcache for an entire page. */
+void flush_page_to_ram(unsigned long mfn, bool sync_icache);
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_PAGE_H */
diff --git a/xen/include/asm-riscv/paging.h b/xen/include/asm-riscv/paging.h
new file mode 100644
index 0000000000..3f9f704273
--- /dev/null
+++ b/xen/include/asm-riscv/paging.h
@@ -0,0 +1,15 @@
+#ifndef _XEN_PAGING_H
+#define _XEN_PAGING_H
+
+#define paging_mode_translate(d)              (1)
+
+#endif /* _XEN_PAGING_H */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/pci.h b/xen/include/asm-riscv/pci.h
new file mode 100644
index 0000000000..0ccf335e34
--- /dev/null
+++ b/xen/include/asm-riscv/pci.h
@@ -0,0 +1,31 @@
+/******************************************************************************
+ *
+ * Copyright 2019 (C) Alistair Francis <alistair.francis@wdc.com>
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to
+ * deal in the Software without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __RISCV_PCI_H__
+#define __RISCV_PCI_H__
+
+struct arch_pci_dev {
+};
+
+#endif /* __RISCV_PCI_H__ */
diff --git a/xen/include/asm-riscv/percpu.h b/xen/include/asm-riscv/percpu.h
new file mode 100644
index 0000000000..0d165d6aa1
--- /dev/null
+++ b/xen/include/asm-riscv/percpu.h
@@ -0,0 +1,33 @@
+#ifndef __RISCV_PERCPU_H__
+#define __RISCV_PERCPU_H__
+
+#ifndef __ASSEMBLY__
+
+#include <xen/types.h>
+#include <asm/sysregs.h>
+
+extern char __per_cpu_start[], __per_cpu_data_end[];
+extern unsigned long __per_cpu_offset[NR_CPUS];
+void percpu_init_areas(void);
+
+#define per_cpu(var, cpu)  \
+    (*RELOC_HIDE(&per_cpu__##var, __per_cpu_offset[cpu]))
+#define this_cpu(var) \
+    (*RELOC_HIDE(&per_cpu__##var, csr_read(CSR_SCRATCH)))
+
+#define per_cpu_ptr(var, cpu)  \
+    (*RELOC_HIDE(var, __per_cpu_offset[cpu]))
+#define this_cpu_ptr(var) \
+    (*RELOC_HIDE(var, csr_read(CSR_SCRATCH)))
+
+#endif
+
+#endif /* __RISCV_PERCPU_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/processor.h b/xen/include/asm-riscv/processor.h
new file mode 100644
index 0000000000..19e681652a
--- /dev/null
+++ b/xen/include/asm-riscv/processor.h
@@ -0,0 +1,59 @@
+/******************************************************************************
+ *
+ * Copyright 2019 (C) Alistair Francis <alistair.francis@wdc.com>
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to
+ * deal in the Software without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef _ASM_RISCV_PROCESSOR_H
+#define _ASM_RISCV_PROCESSOR_H
+
+#ifndef __ASSEMBLY__
+
+/* On stack VCPU state */
+struct cpu_user_regs {
+    unsigned long r0;
+};
+
+void show_execution_state(const struct cpu_user_regs *regs);
+void show_registers(const struct cpu_user_regs *regs);
+
+/* All a bit UP for the moment */
+#define cpu_to_core(_cpu)   (0)
+#define cpu_to_socket(_cpu) (0)
+
+/* Based on Linux: arch/riscv/include/asm/processor.h */
+
+static inline void cpu_relax(void)
+{
+	int dummy;
+	/* In lieu of a halt instruction, induce a long-latency stall. */
+	__asm__ __volatile__ ("div %0, %0, zero" : "=r" (dummy));
+	barrier();
+}
+
+static inline void wait_for_interrupt(void)
+{
+	__asm__ __volatile__ ("wfi");
+}
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_PROCESSOR_H */
diff --git a/xen/include/asm-riscv/random.h b/xen/include/asm-riscv/random.h
new file mode 100644
index 0000000000..b4acee276b
--- /dev/null
+++ b/xen/include/asm-riscv/random.h
@@ -0,0 +1,9 @@
+#ifndef __ASM_RANDOM_H__
+#define __ASM_RANDOM_H__
+
+static inline unsigned int arch_get_random(void)
+{
+    return 0;
+}
+
+#endif /* __ASM_RANDOM_H__ */
diff --git a/xen/include/asm-riscv/regs.h b/xen/include/asm-riscv/regs.h
new file mode 100644
index 0000000000..82e7dd2aee
--- /dev/null
+++ b/xen/include/asm-riscv/regs.h
@@ -0,0 +1,23 @@
+#ifndef __RISCV_REGS_H__
+#define __RISCV_REGS_H__
+
+#ifndef __ASSEMBLY__
+
+#include <asm/current.h>
+
+static inline bool guest_mode(const struct cpu_user_regs *r)
+{
+    return false;
+}
+
+#endif
+
+#endif /* __RISCV_REGS_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/setup.h b/xen/include/asm-riscv/setup.h
new file mode 100644
index 0000000000..d0fc75054e
--- /dev/null
+++ b/xen/include/asm-riscv/setup.h
@@ -0,0 +1,14 @@
+#ifndef __RISCV_SETUP_H_
+#define __RISCV_SETUP_H_
+
+#define max_init_domid (0)
+
+#endif /* __RISCV_SETUP_H_ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/smp.h b/xen/include/asm-riscv/smp.h
new file mode 100644
index 0000000000..f0f0b06501
--- /dev/null
+++ b/xen/include/asm-riscv/smp.h
@@ -0,0 +1,46 @@
+/******************************************************************************
+ *
+ * Copyright 2019 (C) Alistair Francis <alistair.francis@wdc.com>
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to
+ * deal in the Software without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef _ASM_RISCV_SMP_H
+#define _ASM_RISCV_SMP_H
+
+#ifndef __ASSEMBLY__
+#include <xen/cpumask.h>
+#include <asm/current.h>
+#endif
+
+DECLARE_PER_CPU(cpumask_var_t, cpu_sibling_mask);
+DECLARE_PER_CPU(cpumask_var_t, cpu_core_mask);
+
+/*
+ * Do we, for platform reasons, need to actually keep CPUs online when we
+ * would otherwise prefer them to be off?
+ */
+#define park_offline_cpus true
+
+#define cpu_is_offline(cpu) unlikely(!cpu_online(cpu))
+
+#define smp_processor_id() get_processor_id()
+
+#endif /* _ASM_RISCV_SMP_H */
diff --git a/xen/include/asm-riscv/softirq.h b/xen/include/asm-riscv/softirq.h
new file mode 100644
index 0000000000..976e0ebd70
--- /dev/null
+++ b/xen/include/asm-riscv/softirq.h
@@ -0,0 +1,16 @@
+#ifndef __ASM_SOFTIRQ_H__
+#define __ASM_SOFTIRQ_H__
+
+#define NR_ARCH_SOFTIRQS       0
+
+#define arch_skip_send_event_check(cpu) 0
+
+#endif /* __ASM_SOFTIRQ_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/spinlock.h b/xen/include/asm-riscv/spinlock.h
new file mode 100644
index 0000000000..77e6736e71
--- /dev/null
+++ b/xen/include/asm-riscv/spinlock.h
@@ -0,0 +1,12 @@
+#ifndef __ASM_SPINLOCK_H
+#define __ASM_SPINLOCK_H
+
+#define arch_lock_acquire_barrier()
+#define arch_lock_release_barrier()
+
+#define arch_lock_relax()
+#define arch_lock_signal()
+
+#define arch_lock_signal_wmb()
+
+#endif /* __ASM_SPINLOCK_H */
diff --git a/xen/include/asm-riscv/string.h b/xen/include/asm-riscv/string.h
new file mode 100644
index 0000000000..733e9e00d3
--- /dev/null
+++ b/xen/include/asm-riscv/string.h
@@ -0,0 +1,28 @@
+/******************************************************************************
+ *
+ * Copyright 2019 (C) Alistair Francis <alistair.francis@wdc.com>
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to
+ * deal in the Software without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef _ASM_RISCV_STRING_H
+#define _ASM_RISCV_STRING_H
+
+#endif /* _ASM_RISCV_STRING_H */
diff --git a/xen/include/asm-riscv/sysregs.h b/xen/include/asm-riscv/sysregs.h
new file mode 100644
index 0000000000..ae0945d902
--- /dev/null
+++ b/xen/include/asm-riscv/sysregs.h
@@ -0,0 +1,16 @@
+#ifndef __ASM_RISCV_SYSREGS_H
+#define __ASM_RISCV_SYSREGS_H
+
+#include <asm/csr.h>
+
+#endif /* __ASM_RISCV_SYSREGS_H */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/system.h b/xen/include/asm-riscv/system.h
new file mode 100644
index 0000000000..276e7ba550
--- /dev/null
+++ b/xen/include/asm-riscv/system.h
@@ -0,0 +1,99 @@
+/*
+ * Based on arch/arm/include/asm/system.h
+ *
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2013 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ * Copyright (C) 2021 Connor Davis <connojd@pm.me>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _ASM_RISCV_BARRIER_H
+#define _ASM_RISCV_BARRIER_H
+
+#include <asm/csr.h>
+#include <xen/lib.h>
+
+#ifndef __ASSEMBLY__
+
+#define nop()		__asm__ __volatile__ ("nop")
+
+#define RISCV_FENCE(p, s) \
+	__asm__ __volatile__ ("fence " #p "," #s : : : "memory")
+
+/* These barriers need to enforce ordering on both devices and memory. */
+#define mb()		RISCV_FENCE(iorw,iorw)
+#define rmb()		RISCV_FENCE(ir,ir)
+#define wmb()		RISCV_FENCE(ow,ow)
+
+/* These barriers do not need to enforce ordering on devices, just memory. */
+#define smp_mb()	RISCV_FENCE(rw,rw)
+#define smp_rmb()	RISCV_FENCE(r,r)
+#define smp_wmb()	RISCV_FENCE(w,w)
+
+#define smp_mb__before_atomic()    smp_mb()
+#define smp_mb__after_atomic()     smp_mb()
+
+#define __smp_store_release(p, v)					\
+do {									\
+	compiletime_assert_atomic_type(*p);				\
+	RISCV_FENCE(rw,w);						\
+	WRITE_ONCE(*p, v);						\
+} while (0)
+
+#define __smp_load_acquire(p)						\
+({									\
+	typeof(*p) ___p1 = READ_ONCE(*p);				\
+	compiletime_assert_atomic_type(*p);				\
+	RISCV_FENCE(r,rw);						\
+	___p1;								\
+})
+
+static inline unsigned long local_save_flags(void)
+{
+	return csr_read(CSR_STATUS);
+}
+
+static inline void local_irq_enable(void)
+{
+	csr_set(CSR_STATUS, SR_IE);
+}
+
+static inline void local_irq_disable(void)
+{
+	csr_clear(CSR_STATUS, SR_IE);
+}
+
+#define local_irq_save(x)                                               \
+({                                                                      \
+	x = csr_read_clear(CSR_STATUS, SR_IE);                          \
+})
+
+static inline void local_irq_restore(unsigned long flags)
+{
+	csr_set(CSR_STATUS, flags & SR_IE);
+}
+
+static inline int local_irq_is_enabled(void)
+{
+	unsigned long flags = local_save_flags();
+
+	return !!(flags & SR_IE);
+}
+
+#define arch_fetch_and_add(x, v) __sync_fetch_and_add(x, v)
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_BARRIER_H */
diff --git a/xen/include/asm-riscv/time.h b/xen/include/asm-riscv/time.h
new file mode 100644
index 0000000000..af1a8ece45
--- /dev/null
+++ b/xen/include/asm-riscv/time.h
@@ -0,0 +1,31 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_TIMEX_H
+#define _ASM_RISCV_TIMEX_H
+
+typedef uint64_t cycles_t;
+
+#ifdef CONFIG_64BIT
+static inline cycles_t get_cycles(void)
+{
+	cycles_t n;
+
+	__asm__ __volatile__ (
+		"rdtime %0"
+		: "=r" (n));
+	return n;
+}
+#endif
+
+#endif /* _ASM_RISCV_TIMEX_H */
diff --git a/xen/include/asm-riscv/trace.h b/xen/include/asm-riscv/trace.h
new file mode 100644
index 0000000000..e06def61f6
--- /dev/null
+++ b/xen/include/asm-riscv/trace.h
@@ -0,0 +1,12 @@
+#ifndef __ASM_TRACE_H__
+#define __ASM_TRACE_H__
+
+#endif /* __ASM_TRACE_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/types.h b/xen/include/asm-riscv/types.h
new file mode 100644
index 0000000000..b1c76a59c2
--- /dev/null
+++ b/xen/include/asm-riscv/types.h
@@ -0,0 +1,60 @@
+#ifndef __RISCV_TYPES_H__
+#define __RISCV_TYPES_H__
+
+#ifndef __ASSEMBLY__
+
+typedef __signed__ char __s8;
+typedef unsigned char __u8;
+
+typedef __signed__ short __s16;
+typedef unsigned short __u16;
+
+typedef __signed__ int __s32;
+typedef unsigned int __u32;
+
+#if defined(__GNUC__) && !defined(__STRICT_ANSI__)
+#if defined(CONFIG_RISCV_64)
+typedef __signed__ long __s64;
+typedef unsigned long __u64;
+#endif
+#endif
+
+typedef signed char s8;
+typedef unsigned char u8;
+
+typedef signed short s16;
+typedef unsigned short u16;
+
+typedef signed int s32;
+typedef unsigned int u32;
+
+#if defined(CONFIG_RISCV_64)
+typedef signed long s64;
+typedef unsigned long u64;
+typedef u64 vaddr_t;
+#define PRIvaddr PRIx64
+typedef u64 paddr_t;
+#define INVALID_PADDR (~0UL)
+#define PRIpaddr "016lx"
+typedef u64 register_t;
+#define PRIregister "lx"
+#endif
+
+#if defined(__SIZE_TYPE__)
+typedef __SIZE_TYPE__ size_t;
+#else
+typedef unsigned long size_t;
+#endif
+typedef signed long ssize_t;
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* __RISCV_TYPES_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/vm_event.h b/xen/include/asm-riscv/vm_event.h
new file mode 100644
index 0000000000..92d24bc381
--- /dev/null
+++ b/xen/include/asm-riscv/vm_event.h
@@ -0,0 +1,55 @@
+/*
+ * vm_event.h: architecture specific vm_event handling routines
+ *
+ * Copyright (c) 2015 Tamas K Lengyel (tamas@tklengyel.com)
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __ASM_RISCV_VM_EVENT_H__
+#define __ASM_RISCV_VM_EVENT_H__
+
+#include <xen/sched.h>
+#include <xen/vm_event.h>
+#include <public/domctl.h>
+
+static inline int vm_event_init_domain(struct domain *d)
+{
+    return 0;
+}
+
+static inline void vm_event_cleanup_domain(struct domain *d)
+{
+}
+
+static inline void vm_event_toggle_singlestep(struct domain *d, struct vcpu *v,
+                                              vm_event_response_t *rsp)
+{
+}
+
+static inline
+void vm_event_register_write_resume(struct vcpu *v, vm_event_response_t *rsp)
+{
+}
+
+static inline
+void vm_event_emulate_check(struct vcpu *v, vm_event_response_t *rsp)
+{
+}
+
+static inline
+void vm_event_sync_event(struct vcpu *v, bool value)
+{
+}
+
+#endif /* __ASM_RISCV_VM_EVENT_H__ */
diff --git a/xen/include/asm-riscv/xenoprof.h b/xen/include/asm-riscv/xenoprof.h
new file mode 100644
index 0000000000..3db6ce3ab2
--- /dev/null
+++ b/xen/include/asm-riscv/xenoprof.h
@@ -0,0 +1,12 @@
+#ifndef __ASM_XENOPROF_H__
+#define __ASM_XENOPROF_H__
+
+#endif /* __ASM_XENOPROF_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/public/arch-riscv.h b/xen/include/public/arch-riscv.h
new file mode 100644
index 0000000000..29d7f5a9b7
--- /dev/null
+++ b/xen/include/public/arch-riscv.h
@@ -0,0 +1,183 @@
+/******************************************************************************
+ * arch-riscv.h
+ *
+ * Guest OS interface to RISC-V Xen.
+ * Initially based on the ARM implementation.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to
+ * deal in the Software without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Copyright 2019 (C) Alistair Francis <alistair.francis@wdc.com>
+ */
+
+#ifndef __XEN_PUBLIC_ARCH_RISCV_H__
+#define __XEN_PUBLIC_ARCH_RISCV_H__
+
+#include <xen/types.h>
+
+#define  int64_aligned_t  int64_t __attribute__((aligned(8)))
+#define uint64_aligned_t uint64_t __attribute__((aligned(8)))
+
+#ifndef __ASSEMBLY__
+#define ___DEFINE_XEN_GUEST_HANDLE(name, type)                  \
+    typedef union { type *p; unsigned long q; }                 \
+        __guest_handle_ ## name;                                \
+    typedef union { type *p; uint64_aligned_t q; }              \
+        __guest_handle_64_ ## name
+
+/*
+ * XEN_GUEST_HANDLE represents a guest pointer, when passed as a field
+ * in a struct in memory. On rv64 it is 8 bytes long and 8-byte aligned.
+ *
+ * XEN_GUEST_HANDLE_PARAM represents a guest pointer, when passed as a
+ * hypercall argument. It is 4 bytes on rv32 and 8 bytes on rv64.
+ */
+#define __DEFINE_XEN_GUEST_HANDLE(name, type) \
+    ___DEFINE_XEN_GUEST_HANDLE(name, type);   \
+    ___DEFINE_XEN_GUEST_HANDLE(const_##name, const type)
+#define DEFINE_XEN_GUEST_HANDLE(name)   __DEFINE_XEN_GUEST_HANDLE(name, name)
+#define __XEN_GUEST_HANDLE(name)        __guest_handle_64_ ## name
+#define XEN_GUEST_HANDLE(name)          __XEN_GUEST_HANDLE(name)
+#define XEN_GUEST_HANDLE_PARAM(name)    __guest_handle_ ## name
+#define set_xen_guest_handle_raw(hnd, val)                  \
+    do {                                                    \
+        typeof(&(hnd)) _sxghr_tmp = &(hnd);                 \
+        _sxghr_tmp->q = 0;                                  \
+        _sxghr_tmp->p = val;                                \
+    } while ( 0 )
+#define set_xen_guest_handle(hnd, val) set_xen_guest_handle_raw(hnd, val)
+
+#if defined(__GNUC__) && !defined(__STRICT_ANSI__)
+/* Anonymous union includes both 32- and 64-bit names (e.g., r0/x0). */
+# define __DECL_REG(n64, n32) union {          \
+        uint64_t n64;                          \
+        uint32_t n32;                          \
+    }
+#else
+/* Non-gcc sources must always use the proper 64-bit name (e.g., x0). */
+#define __DECL_REG(n64, n32) uint64_t n64
+#endif
+
+struct vcpu_guest_core_regs
+{
+    unsigned long zero;
+    unsigned long ra;
+    unsigned long sp;
+    unsigned long gp;
+    unsigned long tp;
+    unsigned long t0;
+    unsigned long t1;
+    unsigned long t2;
+    unsigned long s0;
+    unsigned long s1;
+    unsigned long a0;
+    unsigned long a1;
+    unsigned long a2;
+    unsigned long a3;
+    unsigned long a4;
+    unsigned long a5;
+    unsigned long a6;
+    unsigned long a7;
+    unsigned long s2;
+    unsigned long s3;
+    unsigned long s4;
+    unsigned long s5;
+    unsigned long s6;
+    unsigned long s7;
+    unsigned long s8;
+    unsigned long s9;
+    unsigned long s10;
+    unsigned long s11;
+    unsigned long t3;
+    unsigned long t4;
+    unsigned long t5;
+    unsigned long t6;
+    unsigned long sepc;
+    unsigned long sstatus;
+    unsigned long hstatus;
+    unsigned long sp_exec;
+
+    unsigned long hedeleg;
+    unsigned long hideleg;
+    unsigned long bsstatus;
+    unsigned long bsie;
+    unsigned long bstvec;
+    unsigned long bsscratch;
+    unsigned long bsepc;
+    unsigned long bscause;
+    unsigned long bstval;
+    unsigned long bsip;
+    unsigned long bsatp;
+};
+typedef struct vcpu_guest_core_regs vcpu_guest_core_regs_t;
+DEFINE_XEN_GUEST_HANDLE(vcpu_guest_core_regs_t);
+
+typedef uint64_t xen_pfn_t;
+#define PRI_xen_pfn PRIx64
+#define PRIu_xen_pfn PRIu64
+
+typedef uint64_t xen_ulong_t;
+#define PRI_xen_ulong PRIx64
+
+#if defined(__XEN__) || defined(__XEN_TOOLS__)
+
+struct vcpu_guest_context {
+};
+typedef struct vcpu_guest_context vcpu_guest_context_t;
+DEFINE_XEN_GUEST_HANDLE(vcpu_guest_context_t);
+
+struct xen_arch_domainconfig {
+};
+
+struct arch_vcpu_info {
+};
+typedef struct arch_vcpu_info arch_vcpu_info_t;
+
+struct arch_shared_info {
+};
+typedef struct arch_shared_info arch_shared_info_t;
+
+typedef uint64_t xen_callback_t;
+
+#endif
+
+/* Maximum number of virtual CPUs in legacy multi-processor guests. */
+/* Only one. All other vCPUs must use VCPUOP_register_vcpu_info. */
+#define XEN_LEGACY_MAX_VCPUS 1
+
+/* Currently supported guest vCPUs */
+#define GUEST_MAX_VCPUS 128
+
+#endif /* __ASSEMBLY__ */
+
+#ifndef __ASSEMBLY__
+/* Stub definition of PMU structure */
+typedef struct xen_pmu_arch { uint8_t dummy; } xen_pmu_arch_t;
+#endif
+
+#endif /*  __XEN_PUBLIC_ARCH_RISCV_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/public/arch-riscv/hvm/save.h b/xen/include/public/arch-riscv/hvm/save.h
new file mode 100644
index 0000000000..fa010f0315
--- /dev/null
+++ b/xen/include/public/arch-riscv/hvm/save.h
@@ -0,0 +1,39 @@
+/*
+ * Structure definitions for HVM state that is held by Xen and must
+ * be saved along with the domain's memory and device-model state.
+ *
+ * Copyright (c) 2012 Citrix Systems Ltd.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to
+ * deal in the Software without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+#ifndef __XEN_PUBLIC_HVM_SAVE_RISCV_H__
+#define __XEN_PUBLIC_HVM_SAVE_RISCV_H__
+
+#endif
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/public/hvm/save.h b/xen/include/public/hvm/save.h
index f72e3a9bc4..d7505f279c 100644
--- a/xen/include/public/hvm/save.h
+++ b/xen/include/public/hvm/save.h
@@ -106,6 +106,8 @@ DECLARE_HVM_SAVE_TYPE(END, 0, struct hvm_save_end);
 #include "../arch-x86/hvm/save.h"
 #elif defined(__arm__) || defined(__aarch64__)
 #include "../arch-arm/hvm/save.h"
+#elif defined(__riscv)
+#include "../arch-riscv/hvm/save.h"
 #else
 #error "unsupported architecture"
 #endif
diff --git a/xen/include/public/pmu.h b/xen/include/public/pmu.h
index cc2fcf8816..3fb1bcd900 100644
--- a/xen/include/public/pmu.h
+++ b/xen/include/public/pmu.h
@@ -28,6 +28,8 @@
 #include "arch-x86/pmu.h"
 #elif defined (__arm__) || defined (__aarch64__)
 #include "arch-arm.h"
+#elif defined (__riscv)
+#include "arch-riscv.h"
 #else
 #error "Unsupported architecture"
 #endif
diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
index e373592c33..1d80b64ee0 100644
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -33,6 +33,8 @@
 #include "arch-x86/xen.h"
 #elif defined(__arm__) || defined (__aarch64__)
 #include "arch-arm.h"
+#elif defined(__riscv)
+#include "arch-riscv.h"
 #else
 #error "Unsupported architecture"
 #endif
diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
index 1708c36964..fd0b75677c 100644
--- a/xen/include/xen/domain.h
+++ b/xen/include/xen/domain.h
@@ -60,6 +60,7 @@ void arch_vcpu_destroy(struct vcpu *v);
 int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned offset);
 void unmap_vcpu_info(struct vcpu *v);
 
+struct xen_domctl_createdomain;
 int arch_domain_create(struct domain *d,
                        struct xen_domctl_createdomain *config);
 
-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Thu Feb 25 15:39:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 15:39:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89834.169607 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFIjI-0008Ck-QR; Thu, 25 Feb 2021 15:39:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89834.169607; Thu, 25 Feb 2021 15:39:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFIjI-0008Cd-NR; Thu, 25 Feb 2021 15:39:04 +0000
Received: by outflank-mailman (input) for mailman id 89834;
 Thu, 25 Feb 2021 15:39:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qWJX=H3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lFIjH-0008CY-6O
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 15:39:03 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 914c8315-89fc-43ad-b121-3e50c5700f24;
 Thu, 25 Feb 2021 15:39:02 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 20A1BB10B;
 Thu, 25 Feb 2021 15:39:01 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 914c8315-89fc-43ad-b121-3e50c5700f24
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614267541; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=NFd034RDEJYXcq9r3tELC5CypWrYDcd0d0knOZGlyxM=;
	b=XgbQ5MYSf2y5OQAreaPj8wbV/1arB81ci6OiZNQSw0iecmcn9PeJAz3pxsL+lARSo7YFil
	FFLJYih+iUjnSrwCloncsniCXHAee0dIo8SgjzscYwdffwufiCiPBmbI5y8vuQWuYLhWUl
	+WdMBDaNIfKT8sciziRQSdpj+YQekDk=
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] xen-netback: use local var in xenvif_tx_check_gop() instead
 of re-calculating
To: Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "netdev@vger.kernel.org" <netdev@vger.kernel.org>
Message-ID: <6604dec2-4460-3339-f797-e5f8a7df848f@suse.com>
Date: Thu, 25 Feb 2021 16:39:01 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

shinfo already holds the result of skb_shinfo(skb) at this point - no
need to re-invoke the construct, let alone twice.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -557,8 +557,8 @@ check_frags:
 	}
 
 	if (skb_has_frag_list(skb) && !first_shinfo) {
-		first_shinfo = skb_shinfo(skb);
-		shinfo = skb_shinfo(skb_shinfo(skb)->frag_list);
+		first_shinfo = shinfo;
+		shinfo = skb_shinfo(shinfo->frag_list);
 		nr_frags = shinfo->nr_frags;
 
 		goto check_frags;


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 15:45:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 15:45:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89839.169620 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFIpK-0000lf-Fc; Thu, 25 Feb 2021 15:45:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89839.169620; Thu, 25 Feb 2021 15:45:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFIpK-0000lY-CN; Thu, 25 Feb 2021 15:45:18 +0000
Received: by outflank-mailman (input) for mailman id 89839;
 Thu, 25 Feb 2021 15:45:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qWJX=H3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lFIpJ-0000lS-2U
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 15:45:17 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a748a975-f9b6-4137-be63-5a6ca38ff5ca;
 Thu, 25 Feb 2021 15:45:16 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6DC26AC1D;
 Thu, 25 Feb 2021 15:45:15 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a748a975-f9b6-4137-be63-5a6ca38ff5ca
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614267915; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=N7RyTHr5dCnuLgWH3gPD0y8ZfhP7p0lH8Dt7rn8C4M4=;
	b=e6hrKJhYOyA6AG2uR5MBY6n05MdVGOyLyE80DKJNCd9D/l0ISCGg27tnMh3v4tPkrwDr7n
	auT82+kg3tZ4f52r5/oarETqdsQ3/ualAAZM2vEH0CmRYu1/iiuc4IQpgNj9dj/acHSLNa
	v5MWc5rq1vpHDPjaaD3A4YCZks9M5TY=
Subject: Re: [PATCH for-next 2/6] xen/common: Guard iommu symbols with
 CONFIG_HAS_PASSTHROUGH
To: Connor Davis <connojdavis@gmail.com>
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <cover.1614265718.git.connojdavis@gmail.com>
 <444658f690c81b9e93c2c709fa1032c049646763.1614265718.git.connojdavis@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <02f32a31-1c23-46af-54a7-7d44b5ffb613@suse.com>
Date: Thu, 25 Feb 2021 16:45:15 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <444658f690c81b9e93c2c709fa1032c049646763.1614265718.git.connojdavis@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 25.02.2021 16:24, Connor Davis wrote:
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -501,7 +501,9 @@ static int sanitise_domain_config(struct xen_domctl_createdomain *config)
>              return -EINVAL;
>          }
>  
> +#ifdef CONFIG_HAS_PASSTHROUGH
>          if ( !iommu_enabled )
> +#endif
>          {
>              dprintk(XENLOG_INFO, "IOMMU requested but not available\n");
>              return -EINVAL;

Where possible - to avoid such #ifdef-ary - the symbol instead should
get #define-d to a sensible value ("false" in this case) in the header.
The other cases here may indeed need to remain as you have them.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 15:50:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 15:50:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89842.169632 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFItx-0001To-50; Thu, 25 Feb 2021 15:50:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89842.169632; Thu, 25 Feb 2021 15:50:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFItw-0001TI-Ux; Thu, 25 Feb 2021 15:50:04 +0000
Received: by outflank-mailman (input) for mailman id 89842;
 Thu, 25 Feb 2021 15:50:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qWJX=H3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lFItw-0001Kf-1a
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 15:50:04 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0571cfaf-47d8-457c-bad8-be2127c852a5;
 Thu, 25 Feb 2021 15:50:03 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6E437AF6F;
 Thu, 25 Feb 2021 15:50:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0571cfaf-47d8-457c-bad8-be2127c852a5
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614268202; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=pMucLcc56VqB9XHOjU5pnfmJJn7cw8SR0wP2s0f41So=;
	b=m4E/THIfEmiLFUtrLisy3y4MulXF0Hn74tAG/ulYjoiWMHR8j7Kqr3yazdNAd7Xlvrxt6e
	IDWXmO0BW/9vX04082mZnyLtdqyUeqk65wddwHOC5BT6ZkXWnyQvhNdOOrS6uaLsE1cVIm
	4PivqC1PGRt1oxvss9LpdTAAEqmMmHY=
Subject: Re: [PATCH for-next 3/6] xen/sched: Fix build when NR_CPUS == 1
To: Connor Davis <connojdavis@gmail.com>
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
 George Dunlap <george.dunlap@citrix.com>, Dario Faggioli
 <dfaggioli@suse.com>, xen-devel@lists.xenproject.org
References: <cover.1614265718.git.connojdavis@gmail.com>
 <d0922adc698ab76223d76a0a7f328a72cedf00ad.1614265718.git.connojdavis@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b4ad0f83-e071-49f8-17a8-7fec0e226b9a@suse.com>
Date: Thu, 25 Feb 2021 16:50:02 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <d0922adc698ab76223d76a0a7f328a72cedf00ad.1614265718.git.connojdavis@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 25.02.2021 16:24, Connor Davis wrote:
> Return from cpu_schedule_up when either cpu is 0 or
> NR_CPUS == 1. This fixes the following:
> 
> core.c: In function 'cpu_schedule_up':
> core.c:2769:19: error: array subscript 1 is above array bounds
> of 'struct vcpu *[1]' [-Werror=array-bounds]
>  2769 |     if ( idle_vcpu[cpu] == NULL )
>       |
> 
> Signed-off-by: Connor Davis <connojdavis@gmail.com>
> ---
>  xen/common/sched/core.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
> index 9745a77eee..f5ec65bf9b 100644
> --- a/xen/common/sched/core.c
> +++ b/xen/common/sched/core.c
> @@ -2763,7 +2763,7 @@ static int cpu_schedule_up(unsigned int cpu)
>      cpumask_set_cpu(cpu, &sched_res_mask);
>  
>      /* Boot CPU is dealt with later in scheduler_init(). */
> -    if ( cpu == 0 )
> +    if ( cpu == 0 || NR_CPUS == 1 )
>          return 0;
>  
>      if ( idle_vcpu[cpu] == NULL )

I'm not convinced a compiler warning is due here, and in turn
I'm not sure we want/need to work around this the way you do.
First question is whether that's just a specific compiler
version that's flawed. If it's not just a special case (e.g.
some unreleased version) we may want to think of possible
alternatives - the addition looks really odd to me.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 15:53:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 15:53:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89846.169644 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFIxC-0001wc-IG; Thu, 25 Feb 2021 15:53:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89846.169644; Thu, 25 Feb 2021 15:53:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFIxC-0001wV-El; Thu, 25 Feb 2021 15:53:26 +0000
Received: by outflank-mailman (input) for mailman id 89846;
 Thu, 25 Feb 2021 15:53:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qWJX=H3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lFIxB-0001wQ-Hu
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 15:53:25 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3e1cf085-90d3-4dd1-b0de-652e0e76078c;
 Thu, 25 Feb 2021 15:53:24 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D322CADCD;
 Thu, 25 Feb 2021 15:53:23 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3e1cf085-90d3-4dd1-b0de-652e0e76078c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614268404; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=oTGI60kHpdbuznPYGpuBP4+R0awiYyVENxBUCin95sk=;
	b=RLwgYbitg3pHkmgDELMn2tH0JPrZt8STqA5WBBlN+UR9baxQp6+Gg5cFQ5e9tx0ps6KJAU
	sG6J8wAh8oBVtDfBscdIEzJYbpApRDB+lvxpWg8JomVJeopfZJfxFKAc2GCU9ftBxY1JKJ
	FoiKh8JM+9hRlyd7YZypvwGRFnp2Vgk=
Subject: Re: [PATCH for-next 4/6] xen: Fix build when !CONFIG_GRANT_TABLE
To: Connor Davis <connojdavis@gmail.com>
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <cover.1614265718.git.connojdavis@gmail.com>
 <eb2d1e911870f1662acfbc073447af2d29455750.1614265718.git.connojdavis@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <259e6dce-7fc4-1f2e-49b6-61a047012953@suse.com>
Date: Thu, 25 Feb 2021 16:53:23 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <eb2d1e911870f1662acfbc073447af2d29455750.1614265718.git.connojdavis@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 25.02.2021 16:24, Connor Davis wrote:
> --- a/xen/include/xen/grant_table.h
> +++ b/xen/include/xen/grant_table.h
> @@ -66,6 +66,8 @@ int gnttab_acquire_resource(
>  
>  #define opt_max_grant_frames 0
>  
> +struct grant_table {};
> +
>  static inline int grant_table_init(struct domain *d,
>                                     int max_grant_frames,
>                                     int max_maptrack_frames)

You shouldn't actually define the struct; all you need is to move the
forward declaration further up in the file, ahead of the #ifdef.

Jan



From xen-devel-bounces@lists.xenproject.org Thu Feb 25 15:57:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 15:57:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89850.169656 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFJ0p-00026g-2s; Thu, 25 Feb 2021 15:57:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89850.169656; Thu, 25 Feb 2021 15:57:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFJ0o-00026Z-VI; Thu, 25 Feb 2021 15:57:10 +0000
Received: by outflank-mailman (input) for mailman id 89850;
 Thu, 25 Feb 2021 15:57:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFJ0o-00026R-02; Thu, 25 Feb 2021 15:57:10 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFJ0n-0003YQ-OM; Thu, 25 Feb 2021 15:57:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFJ0n-0004Xd-Fn; Thu, 25 Feb 2021 15:57:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lFJ0n-0000x3-FF; Thu, 25 Feb 2021 15:57:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=B2Wnqwmo98iCp2sjBm4iqwn68YQvMLZBbg6/7zoi0jA=; b=Y16mMEeMhHMYJL1sOOelsoEkcD
	bomsvAD9QU2+tI+0H5LVXlJ/eYDZc9CVgOUr8E8LmY5nYuqHdfV0pgP0jBNKnj0md0PJ9LSux0lKX
	iqmpXn6Ntzdwqkt3OSjbVwklerlVGL5oSJegEIkxW6i4NI36wc0rZfgP8GjuZq/aQRMY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159668-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 159668: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=067935804a8e7a33ff7170a2db8ce94bb46d9a63
X-Osstest-Versions-That:
    xen=60390ccb8b9b2dbf85010f8b47779bb231aa2533
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 25 Feb 2021 15:57:09 +0000

flight 159668 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159668/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  067935804a8e7a33ff7170a2db8ce94bb46d9a63
baseline version:
 xen                  60390ccb8b9b2dbf85010f8b47779bb231aa2533

Last test of basis   159644  2021-02-24 19:01:47 Z    0 days
Testing same since   159668  2021-02-25 13:01:28 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <jgrall@amazon.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   60390ccb8b..067935804a  067935804a8e7a33ff7170a2db8ce94bb46d9a63 -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 16:23:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 16:23:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89859.169677 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFJQJ-0005gW-84; Thu, 25 Feb 2021 16:23:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89859.169677; Thu, 25 Feb 2021 16:23:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFJQJ-0005gP-4v; Thu, 25 Feb 2021 16:23:31 +0000
Received: by outflank-mailman (input) for mailman id 89859;
 Thu, 25 Feb 2021 16:23:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qDmu=H3=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1lFJQH-0005gK-Qc
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 16:23:29 +0000
Received: from mail-wm1-x32f.google.com (unknown [2a00:1450:4864:20::32f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a53316b5-1634-449e-b862-c754761e8d45;
 Thu, 25 Feb 2021 16:23:28 +0000 (UTC)
Received: by mail-wm1-x32f.google.com with SMTP id w7so5122898wmb.5
 for <xen-devel@lists.xenproject.org>; Thu, 25 Feb 2021 08:23:28 -0800 (PST)
Received: from ?IPv6:2a00:23c5:5785:9a01:ad9a:ab78:5748:a7ec?
 ([2a00:23c5:5785:9a01:ad9a:ab78:5748:a7ec])
 by smtp.gmail.com with ESMTPSA id l15sm8132275wmh.21.2021.02.25.08.23.26
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 25 Feb 2021 08:23:26 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a53316b5-1634-449e-b862-c754761e8d45
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:subject:to:cc:references:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=GlKFw17HlSadzsLvhkkdjanZrmggScCohCGFmx1NWfg=;
        b=C3jhe/BOKV6oECZihDXiEs05jbwYe6hP8PtR5d5EInbgy2LqWH/sD400gAo66HjCHt
         HG7kGGUhMqJI8YTo6QbS25buhtNUqvbWotZoNrjwsnO9DAnvT99fx3qjb+3IwJGtEI60
         fKMvxk9nB0eRfXwM1YuBOjl9WLZvvjoCeqt+FFUmh2hlccxJ3hWhFFs7qg2e/CnmAnvy
         lP9QmLoFafAhNgRIwJb/d/moC9BqgWa9VD5QM5AG21xiojs9WFg2vUZR+lJrY7iesf39
         jOa2XfJqQUwhbvl0eHEz3w7i2ch7Q/FAkMQ8bbcQJldL7pDJ3sdA9O5mKwyT4AEvSvPa
         qyDg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:subject:to:cc:references
         :message-id:date:user-agent:mime-version:in-reply-to
         :content-language:content-transfer-encoding;
        bh=GlKFw17HlSadzsLvhkkdjanZrmggScCohCGFmx1NWfg=;
        b=MvK9a2dXYgAmysD5YQuRAthdeE4NYzgBRrzS52tVmNKjdne+m5MUdrHolCHGuwe90b
         vYfdq2sixX8Wen3EfNaszIKjC3dNH9vlG9ouaCHXkjkdrLVXR+pNYc2tadeKgmTQfRYj
         FJTWPry+tUW5lSjPIqMFWCm+2mWEM0tmhBGgniPGI+RIE8b87CfMqWnRQKkLrkm7PNuZ
         u6J3TbGLcydY7cHz2KP3zTsu36DS7H+bkMgw8UmsEC6d3o9QkolHCv4miiigp3uNkT03
         gFO61WJp5rqgL4l6GKO10fKufMbcvI98yMjmPTqc5M83J+A9bVPEUPhrkSWBDc+XvyE8
         Vddg==
X-Gm-Message-State: AOAM530aXmr+77QnBwL11ZPjBfZ52MvYDDd7xePqaT8k1QuR70+VpEMT
	zkVWh1MA6TSDrBHslxBVUZE=
X-Google-Smtp-Source: ABdhPJxFJpbPBk0sDF7od5u+aATKNxXkfGITmm9sISjxD08kKnyIrfCoDPP7XJYEiZfweO2Ea6etnQ==
X-Received: by 2002:a1c:e903:: with SMTP id q3mr4034403wmc.100.1614270208015;
        Thu, 25 Feb 2021 08:23:28 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: Paul Durrant <paul@xen.org>
Reply-To: paul@xen.org
Subject: Re: [PATCH] xen-netback: correct success/error reporting for the
 SKB-with-fraglist case
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "netdev@vger.kernel.org" <netdev@vger.kernel.org>, Wei Liu <wl@xen.org>
References: <4dd5b8ec-a255-7ab1-6dbf-52705acd6d62@suse.com>
 <67bc0728-761b-c3dd-bdd5-1a850ff79fbb@xen.org>
 <76c94541-21a8-7ae5-c4c4-48552f16c3fd@suse.com>
 <17e50fb5-31f7-60a5-1eec-10d18a40ad9a@xen.org>
 <57580966-3880-9e59-5d82-e1de9006aa0c@suse.com>
Message-ID: <a26c1ecd-e303-3138-eb7e-96f0203ca888@xen.org>
Date: Thu, 25 Feb 2021 16:23:26 +0000
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <57580966-3880-9e59-5d82-e1de9006aa0c@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 25/02/2021 14:00, Jan Beulich wrote:
> On 25.02.2021 13:11, Paul Durrant wrote:
>> On 25/02/2021 07:33, Jan Beulich wrote:
>>> On 24.02.2021 17:39, Paul Durrant wrote:
>>>> On 23/02/2021 16:29, Jan Beulich wrote:
>>>>> When re-entering the main loop of xenvif_tx_check_gop() a 2nd time, the
>>>>> special considerations for the head of the SKB no longer apply. Don't
>>>>> mistakenly report ERROR to the frontend for the first entry in the list,
>>>>> even if - from all I can tell - this shouldn't matter much as the overall
>>>>> transmit will need to be considered failed anyway.
>>>>>
>>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>>>
>>>>> --- a/drivers/net/xen-netback/netback.c
>>>>> +++ b/drivers/net/xen-netback/netback.c
>>>>> @@ -499,7 +499,7 @@ check_frags:
>>>>>     				 * the header's copy failed, and they are
>>>>>     				 * sharing a slot, send an error
>>>>>     				 */
>>>>> -				if (i == 0 && sharedslot)
>>>>> +				if (i == 0 && !first_shinfo && sharedslot)
>>>>>     					xenvif_idx_release(queue, pending_idx,
>>>>>     							   XEN_NETIF_RSP_ERROR);
>>>>>     				else
>>>>>
>>>>
>>>> I think this will DTRT, but to my mind it would make more sense to clear
>>>> 'sharedslot' before the 'goto check_frags' at the bottom of the function.
>>>
>>> That was my initial idea as well, but
>>> - I think it is for a reason that the variable is "const".
>>> - There is another use of it which would then instead need further
>>>     amending (and which I believe is at least part of the reason for
>>>     the variable to be "const").
>>>
>>
>> Oh, yes. But now that I look again, don't you want:
>>
>> if (i == 0 && first_shinfo && sharedslot)
>>
>> ? (i.e no '!')
>>
>> The comment states that the error should be indicated when the first
>> frag contains the header in the case that the map succeeded but the
>> prior copy from the same ref failed. This can only possibly be the case
>> if this is the 'first_shinfo'
> 
> I don't think so, no - there's a difference between "first frag"
> (at which point first_shinfo is NULL) and first frag list entry
> (at which point first_shinfo is non-NULL).

Yes, I realise I got it backwards. It's a confusing name, but the comment 
above its declaration does explain it.

> 
>> (which is why I still think it is safe to unconst 'sharedslot' and
>> clear it).
> 
> And "no" here as well - this piece of code
> 
> 		/* First error: if the header haven't shared a slot with the
> 		 * first frag, release it as well.
> 		 */
> 		if (!sharedslot)
> 			xenvif_idx_release(queue,
> 					   XENVIF_TX_CB(skb)->pending_idx,
> 					   XEN_NETIF_RSP_OKAY);
> 
> specifically requires sharedslot to have the value that was
> assigned to it at the start of the function (this property
> doesn't go away when switching from fragments to frag list).
> Note also how it uses XENVIF_TX_CB(skb)->pending_idx, i.e. the
> value the local variable pending_idx was set from at the start
> of the function.
> 

True, we do have to deal with freeing up the header if the first map 
error comes on the frag list.
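To condense the two-pass structure being discussed into a toy predicate (this is not the netback source, just a model of the fixed condition): on the first pass through check_frags, first_shinfo is NULL and slot 0 is the frag that may share a slot with the header; after the "goto check_frags" for the frag list, first_shinfo is non-NULL, so slot 0 of that list must not be reported as sharing the header's slot.

```c
#include <stdbool.h>
#include <stddef.h>

/* Model of "if (i == 0 && !first_shinfo && sharedslot)": report ERROR
 * for the header-sharing slot only on the first pass (main frags),
 * where first_shinfo is still NULL. */
static bool report_error_for_shared_header(int i, const void *first_shinfo,
                                           bool sharedslot)
{
    return i == 0 && first_shinfo == NULL && sharedslot;
}
```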

Reviewed-by: Paul Durrant <paul@xen.org>

> Jan
> 



From xen-devel-bounces@lists.xenproject.org Thu Feb 25 17:10:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 17:10:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89868.169704 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFK9m-0002PI-C8; Thu, 25 Feb 2021 17:10:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89868.169704; Thu, 25 Feb 2021 17:10:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFK9m-0002PB-8v; Thu, 25 Feb 2021 17:10:30 +0000
Received: by outflank-mailman (input) for mailman id 89868;
 Thu, 25 Feb 2021 17:10:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3W+l=H3=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lFK9k-0002NX-Ue
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 17:10:28 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c3a0ebe1-0e34-4ef8-8561-8587438a6937;
 Thu, 25 Feb 2021 17:10:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c3a0ebe1-0e34-4ef8-8561-8587438a6937
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614273023;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=kNj715ReiVr9iSfrE07jiAXIxtnsUBFtABZQwDxEAto=;
  b=JZC/ui6zn5yR9PtebrpHjyfnZYyIMJe14PmNOuU9iR/HiIF6UW+IJ+T5
   FTyoY8X93BFA6HszRO93OrhLfYj26Hg2xnl/1RBg2sAVETZTPv3hBGPca
   Bz9PRRwr++/tWcLNJrRpVbZd+o3x0P9J/BrDTy3o3ENTPrtJ5GqjNMb3w
   w=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: COegb4ZToJqx916T8zRxHV6xoRQ9dWyjVxY4ybCI4Wamia0/emKp4ch94xhJY8KnYWQq8fJeaQ
 8N+hjidaM9JqLF9WwQkJ3k5/WiTuZTVSgQSNGDC2q8a3CPGWqr6ac05unLW4ytuG/KhoTKP18h
 QXkwoxaxeENybYC1GjPqj0LCIFixkzmjnUM1vG8eR0WySu/8LLSHY930kigrmzz3X4wBcGK0pV
 kM0E5g5N+9S1VTm4xnoN6mvAjFHQ+prBPhCHfPLa88MXJ5mU+v/hcsUrX65uSW0m71yGEsKodF
 Dsk=
X-SBRS: 5.2
X-MesageID: 38055921
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,206,1610427600"; 
   d="scan'208";a="38055921"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Ian Jackson
	<iwj@xenproject.org>, Paul Durrant <paul@xen.org>
Subject: [PATCH for-4.15] x86/dmop: Properly fail for PV guests
Date: Thu, 25 Feb 2021 17:09:36 +0000
Message-ID: <20210225170936.3008-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

The current code has an early exit for PV guests, but it returns 0 having done
nothing.

Fixes: 524a98c2ac5 ("public / x86: introduce __HYPERCALL_dm_op...")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Ian Jackson <iwj@xenproject.org>
CC: Paul Durrant <paul@xen.org>

For 4.15.  Found while testing XEN_DMOP_nr_vcpus.  Needs backporting to all
stable releases.
---
 xen/arch/x86/hvm/dm.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index 5bc172a5d4..612749442e 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -365,6 +365,7 @@ int dm_op(const struct dmop_args *op_args)
     if ( rc )
         return rc;
 
+    rc = -EINVAL;
     if ( !is_hvm_domain(d) )
         goto out;
 
-- 
2.11.0
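A condensed model of the bug and the fix (not the Xen sources; is_hvm is a stand-in flag): after the copy-in succeeded, rc was still 0, so the !is_hvm_domain() early exit via "goto out" reported success to PV callers despite doing nothing. Priming rc with -EINVAL before the check turns that exit into an error.

```c
#include <errno.h>
#include <stdbool.h>

/* Sketch of the dm_op() error path before/after the one-line fix. */
static int dm_op_model(bool is_hvm)
{
    int rc = 0;          /* rc == 0 after the successful copy-in */

    rc = -EINVAL;        /* the fix: assume failure from here on */
    if ( !is_hvm )
        goto out;

    rc = 0;              /* op dispatched successfully */

 out:
    return rc;
}
```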



From xen-devel-bounces@lists.xenproject.org Thu Feb 25 17:10:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 17:10:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89867.169692 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFK9i-0002Nj-14; Thu, 25 Feb 2021 17:10:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89867.169692; Thu, 25 Feb 2021 17:10:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFK9h-0002Nc-Tx; Thu, 25 Feb 2021 17:10:25 +0000
Received: by outflank-mailman (input) for mailman id 89867;
 Thu, 25 Feb 2021 17:10:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3W+l=H3=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lFK9g-0002NX-5i
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 17:10:24 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bc4c4aa5-b865-4005-a0d5-ec71647767c7;
 Thu, 25 Feb 2021 17:10:21 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bc4c4aa5-b865-4005-a0d5-ec71647767c7
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614273021;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=L+MZGORwjjtnsm6MVooKqC+Umtt9uS/ELwVwq9KToRU=;
  b=ANHDLKz1Dvfa1SMDiJ9yND/PIi2g5diLP+HQlkcb0UjpRwe6P6ZRC+MI
   v4XIbCq+sNjNxihay7+M89KwezssEA3H+njCFBTTy1Ipkzr9f76GBgp3F
   F9AJQX10U5w3oNvUxng2k7rqWVE+VIZH5EWE94XnmOY2fr6x9HLPn2U6t
   M=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 1QmICJ5+FmbZvYCr7Gn4YFTT2wy6ieFuBPOquzgfB50euNQqLBWnzyBtY5bQiYEiXn8UNBIqe2
 2A1dIC/zoMUjBd+pI9aGKOXhnnbLy8H+ztFtMcePxrWE3ZQEnqRHmmNMdyQXQXtjt72L9kq+Ca
 oMXaLMR4h/ouK53mDzMwvn7R0MQYGKaGVZj15/DF5U0qwc9WH2vtYi1aUAPuPpYJDmuAx6kclQ
 9RCOGLMLBsUloCC9cl5NI56rHzGrKb0pr8DCMfHugoVCeFRG8oukb54iBa9yzUPghJsNGpPvUD
 UCs=
X-SBRS: 5.2
X-MesageID: 38055929
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,206,1610427600"; 
   d="scan'208";a="38055929"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Ian Jackson
	<iwj@xenproject.org>
Subject: [PATCH for-4.15] dmop: Add XEN_DMOP_nr_vcpus
Date: Thu, 25 Feb 2021 17:09:46 +0000
Message-ID: <20210225170946.4297-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Curiously absent from the stable API/ABIs is an ability to query the number of
vcpus which a domain has.  Emulators need to know this information in
particular to know how many struct ioreq's live in the ioreq server mappings.

In practice, this forces all userspace to link against libxenctrl to use
xc_domain_getinfo(), which rather defeats the purpose of the stable libraries.

Introduce a DMOP to retrieve this information and surface it in
libxendevicemodel to help emulators shed their use of unstable interfaces.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Paul Durrant <paul@xen.org>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>
CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
CC: Ian Jackson <iwj@xenproject.org>

For 4.15.  This was a surprise discovery in the massive ABI untangling effort
I'm currently doing for XenServer's new build system.

This is one new read-only op to obtain information which isn't otherwise
available under a stable API/ABI.  As such, its risk for 4.15 is very low,
with a very real quality-of-life improvement for downstreams.

I realise this is technically a new feature and we're long past feature
freeze, but I'm hoping that "really lets some emulators move off the unstable
libraries" is a sufficiently convincing argument.

It's not sufficient to let Qemu move off unstable libraries yet - at a
minimum, the add_to_physmap hypercalls need stabilising to support PCI
Passthrough and BAR remapping.

I'd prefer not to duplicate the op handling between ARM and x86, and if this
weren't a release window, I'd submit a prereq patch to dedup the common dmop
handling.  That can wait until 4.16 at this point.  Also, this op ought to work
against x86 PV guests, but fixing that up will also need this rearrangement
into common code, so needs to wait.
---
 tools/include/xendevicemodel.h               | 10 ++++++++++
 tools/libs/devicemodel/core.c                | 15 +++++++++++++++
 tools/libs/devicemodel/libxendevicemodel.map |  1 +
 xen/arch/arm/dm.c                            | 10 ++++++++++
 xen/arch/x86/hvm/dm.c                        | 11 +++++++++++
 xen/include/public/hvm/dm_op.h               | 15 +++++++++++++++
 xen/include/xlat.lst                         |  1 +
 7 files changed, 63 insertions(+)

diff --git a/tools/include/xendevicemodel.h b/tools/include/xendevicemodel.h
index c06b3c84b9..33698d67f3 100644
--- a/tools/include/xendevicemodel.h
+++ b/tools/include/xendevicemodel.h
@@ -358,6 +358,16 @@ int xendevicemodel_pin_memory_cacheattr(
     uint32_t type);
 
 /**
+ * Query for the number of vCPUs that a domain has.
+ * @parm dmod a handle to an open devicemodel interface.
+ * @parm domid the domain id to be serviced.
+ * @parm vcpus Number of vcpus.
+ * @return 0 on success and fills @p vcpus, or -1 on failure.
+ */
+int xendevicemodel_nr_vcpus(
+    xendevicemodel_handle *dmod, domid_t domid, unsigned int *vcpus);
+
+/**
  * This function restricts the use of this handle to the specified
  * domain.
  *
diff --git a/tools/libs/devicemodel/core.c b/tools/libs/devicemodel/core.c
index 30bd79f8ba..8e619eeb0a 100644
--- a/tools/libs/devicemodel/core.c
+++ b/tools/libs/devicemodel/core.c
@@ -630,6 +630,21 @@ int xendevicemodel_pin_memory_cacheattr(
     return xendevicemodel_op(dmod, domid, 1, &op, sizeof(op));
 }
 
+int xendevicemodel_nr_vcpus(
+    xendevicemodel_handle *dmod, domid_t domid, unsigned int *vcpus)
+{
+    struct xen_dm_op op = {
+        .op = XEN_DMOP_nr_vcpus,
+    };
+
+    int rc = xendevicemodel_op(dmod, domid, 1, &op, sizeof(op));
+    if ( rc )
+        return rc;
+
+    *vcpus = op.u.nr_vcpus.vcpus;
+    return 0;
+}
+
 int xendevicemodel_restrict(xendevicemodel_handle *dmod, domid_t domid)
 {
     return osdep_xendevicemodel_restrict(dmod, domid);
diff --git a/tools/libs/devicemodel/libxendevicemodel.map b/tools/libs/devicemodel/libxendevicemodel.map
index 733549327b..f7f9e3d932 100644
--- a/tools/libs/devicemodel/libxendevicemodel.map
+++ b/tools/libs/devicemodel/libxendevicemodel.map
@@ -42,4 +42,5 @@ VERS_1.3 {
 VERS_1.4 {
 	global:
 		xendevicemodel_set_irq_level;
+		xendevicemodel_nr_vcpus;
 } VERS_1.3;
diff --git a/xen/arch/arm/dm.c b/xen/arch/arm/dm.c
index 785413372c..d689e336fd 100644
--- a/xen/arch/arm/dm.c
+++ b/xen/arch/arm/dm.c
@@ -38,6 +38,7 @@ int dm_op(const struct dmop_args *op_args)
         [XEN_DMOP_set_ioreq_server_state]           = sizeof(struct xen_dm_op_set_ioreq_server_state),
         [XEN_DMOP_destroy_ioreq_server]             = sizeof(struct xen_dm_op_destroy_ioreq_server),
         [XEN_DMOP_set_irq_level]                    = sizeof(struct xen_dm_op_set_irq_level),
+        [XEN_DMOP_nr_vcpus]                         = sizeof(struct xen_dm_op_nr_vcpus),
     };
 
     rc = rcu_lock_remote_domain_by_id(op_args->domid, &d);
@@ -122,6 +123,15 @@ int dm_op(const struct dmop_args *op_args)
         break;
     }
 
+    case XEN_DMOP_nr_vcpus:
+    {
+        struct xen_dm_op_nr_vcpus *data = &op.u.nr_vcpus;
+
+        data->vcpus = d->max_vcpus;
+        rc = 0;
+        break;
+    }
+
     default:
         rc = ioreq_server_dm_op(&op, d, &const_op);
         break;
diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index 612749442e..f4f0910463 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -359,6 +359,7 @@ int dm_op(const struct dmop_args *op_args)
         [XEN_DMOP_remote_shutdown]                  = sizeof(struct xen_dm_op_remote_shutdown),
         [XEN_DMOP_relocate_memory]                  = sizeof(struct xen_dm_op_relocate_memory),
         [XEN_DMOP_pin_memory_cacheattr]             = sizeof(struct xen_dm_op_pin_memory_cacheattr),
+        [XEN_DMOP_nr_vcpus]                         = sizeof(struct xen_dm_op_nr_vcpus),
     };
 
     rc = rcu_lock_remote_domain_by_id(op_args->domid, &d);
@@ -606,6 +607,15 @@ int dm_op(const struct dmop_args *op_args)
         break;
     }
 
+    case XEN_DMOP_nr_vcpus:
+    {
+        struct xen_dm_op_nr_vcpus *data = &op.u.nr_vcpus;
+
+        data->vcpus = d->max_vcpus;
+        rc = 0;
+        break;
+    }
+
     default:
         rc = ioreq_server_dm_op(&op, d, &const_op);
         break;
@@ -641,6 +651,7 @@ CHECK_dm_op_map_mem_type_to_ioreq_server;
 CHECK_dm_op_remote_shutdown;
 CHECK_dm_op_relocate_memory;
 CHECK_dm_op_pin_memory_cacheattr;
+CHECK_dm_op_nr_vcpus;
 
 int compat_dm_op(domid_t domid,
                  unsigned int nr_bufs,
diff --git a/xen/include/public/hvm/dm_op.h b/xen/include/public/hvm/dm_op.h
index 1f70d58caa..ee97997238 100644
--- a/xen/include/public/hvm/dm_op.h
+++ b/xen/include/public/hvm/dm_op.h
@@ -449,6 +449,20 @@ struct xen_dm_op_set_irq_level {
 };
 typedef struct xen_dm_op_set_irq_level xen_dm_op_set_irq_level_t;
 
+/*
+ * XEN_DMOP_nr_vcpus: Query the number of vCPUs a domain has.
+ *
+ * The number of vcpus a domain has is fixed from creation time.  This bound
+ * is applicable to e.g. the vcpuid parameter of XEN_DMOP_inject_event, or
+ * number of struct ioreq objects mapped via XENMEM_acquire_resource.
+ */
+#define XEN_DMOP_nr_vcpus 20
+
+struct xen_dm_op_nr_vcpus {
+    uint32_t vcpus; /* OUT */
+};
+typedef struct xen_dm_op_nr_vcpus xen_dm_op_nr_vcpus_t;
+
 struct xen_dm_op {
     uint32_t op;
     uint32_t pad;
@@ -472,6 +486,7 @@ struct xen_dm_op {
         xen_dm_op_remote_shutdown_t remote_shutdown;
         xen_dm_op_relocate_memory_t relocate_memory;
         xen_dm_op_pin_memory_cacheattr_t pin_memory_cacheattr;
+        xen_dm_op_nr_vcpus_t nr_vcpus;
     } u;
 };
 
diff --git a/xen/include/xlat.lst b/xen/include/xlat.lst
index 398993d5f4..cbbd20c958 100644
--- a/xen/include/xlat.lst
+++ b/xen/include/xlat.lst
@@ -107,6 +107,7 @@
 ?	dm_op_set_pci_intx_level	hvm/dm_op.h
 ?	dm_op_set_pci_link_route	hvm/dm_op.h
 ?	dm_op_track_dirty_vram		hvm/dm_op.h
+?	dm_op_nr_vcpus			hvm/dm_op.h
 !	hvm_altp2m_set_mem_access_multi	hvm/hvm_op.h
 ?	vcpu_hvm_context		hvm/hvm_vcpu.h
 ?	vcpu_hvm_x86_32			hvm/hvm_vcpu.h
-- 
2.11.0
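The hypervisor-side plumbing in the two dm.c hunks follows one pattern: a table of per-op payload sizes indexed by op number, then a switch on the op. A self-contained condensation (not Xen code; only nr_vcpus is modelled, and the domain's fixed vcpu count is a stand-in constant):

```c
#include <errno.h>
#include <stdint.h>
#include <stddef.h>

#define XEN_DMOP_nr_vcpus 20

struct xen_dm_op_nr_vcpus {
    uint32_t vcpus; /* OUT */
};

struct xen_dm_op {
    uint32_t op;
    uint32_t pad;
    union {
        struct xen_dm_op_nr_vcpus nr_vcpus;
    } u;
};

/* Per-op payload sizes, as in the dm.c op_size tables. */
static const uint8_t op_size[] = {
    [XEN_DMOP_nr_vcpus] = sizeof(struct xen_dm_op_nr_vcpus),
};

static const unsigned int max_vcpus = 4; /* stand-in for d->max_vcpus */

static int dm_op_model(struct xen_dm_op *op)
{
    if ( op->op >= sizeof(op_size) || !op_size[op->op] )
        return -EINVAL; /* unknown or unimplemented op */

    switch ( op->op )
    {
    case XEN_DMOP_nr_vcpus:
        op->u.nr_vcpus.vcpus = max_vcpus;
        return 0;
    }

    return -EOPNOTSUPP;
}
```

On the userspace side, per the core.c hunk, a consumer would call the new xendevicemodel_nr_vcpus(dmod, domid, &vcpus) entry point, which issues the op and copies op.u.nr_vcpus.vcpus back out.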



From xen-devel-bounces@lists.xenproject.org Thu Feb 25 17:21:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 17:21:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89875.169719 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFKKZ-0003db-Bg; Thu, 25 Feb 2021 17:21:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89875.169719; Thu, 25 Feb 2021 17:21:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFKKZ-0003dU-8V; Thu, 25 Feb 2021 17:21:39 +0000
Received: by outflank-mailman (input) for mailman id 89875;
 Thu, 25 Feb 2021 17:21:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lFKKY-0003dP-GP
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 17:21:38 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lFKKY-0005Tg-BR
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 17:21:38 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lFKKY-0001jJ-Ab
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 17:21:38 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lFKKV-00024q-2W; Thu, 25 Feb 2021 17:21:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=RDMUm1GIH6FXebhsBXlQjycVMdIC2gr2w/QW6l+NXSI=; b=34C8ujWZj6saD8zHFuHnDsxlk0
	sKaNbffvXIhMcU+8uDMo0OffI5PCZ8F05yqkVr6/9H2DXZixwdOyNwgbVEEG2ETerbv9NrKN+tPP3
	BbivHhnY9JAsGkdRQMEf5rg/Kt4zBMi2izMeV+BqLnFnGAj67j98dRZkw4RHJavKo3LQ=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24631.56478.814418.802877@mariner.uk.xensource.com>
Date: Thu, 25 Feb 2021 17:21:34 +0000
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
    Jan Beulich <JBeulich@suse.com>,
    Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
    Wei Liu <wl@xen.org>,
    Paul Durrant <paul@xen.org>,
    Stefano Stabellini <sstabellini@kernel.org>,
    Julien Grall <julien@xen.org>,
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH for-4.15] dmop: Add XEN_DMOP_nr_vcpus
In-Reply-To: <20210225170946.4297-1-andrew.cooper3@citrix.com>
References: <20210225170946.4297-1-andrew.cooper3@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Andrew Cooper writes ("[PATCH for-4.15] dmop: Add XEN_DMOP_nr_vcpus"):
> Curiously absent from the stable API/ABIs is an ability to query the number of
> vcpus which a domain has.  Emulators need to know this information in
> particular to know how many struct ioreq's live in the ioreq server mappings.
> 
> In practice, this forces all userspace to link against libxenctrl to use
> xc_domain_getinfo(), which rather defeats the purpose of the stable libraries.

Wat

> For 4.15.  This was a surprise discovery in the massive ABI untangling effort
> I'm currently doing for XenServer's new build system.

Given that this is a new feature at a late stage I am going to say
this:

I will R-A it subject to it getting *two* independent Reviewed-by.

I will try to do one of them myself :-).

...

> +/*
> + * XEN_DMOP_nr_vcpus: Query the number of vCPUs a domain has.
> + *
> + * The number of vcpus a domain has is fixed from creation time.  This bound
> + * is applicable to e.g. the vcpuid parameter of XEN_DMOP_inject_event, or
> + * number of struct ioreq objects mapped via XENMEM_acquire_resource.

AIUI from the code, the value is the maximum number of vcpus, in the
sense that they are not necessarily all online.  In which case I think
maybe you want to mention that here ?

> diff --git a/xen/include/xlat.lst b/xen/include/xlat.lst
> index 398993d5f4..cbbd20c958 100644
> --- a/xen/include/xlat.lst
> +++ b/xen/include/xlat.lst
> @@ -107,6 +107,7 @@
>  ?	dm_op_set_pci_intx_level	hvm/dm_op.h
>  ?	dm_op_set_pci_link_route	hvm/dm_op.h
>  ?	dm_op_track_dirty_vram		hvm/dm_op.h
> +?	dm_op_nr_vcpus			hvm/dm_op.h
>  !	hvm_altp2m_set_mem_access_multi	hvm/hvm_op.h
>  ?	vcpu_hvm_context		hvm/hvm_vcpu.h
>  ?	vcpu_hvm_x86_32			hvm/hvm_vcpu.h
> -- 

I have no idea what even.  I read the comment at the top of the file.

So for *everything except the xlat.lst change*
Reviewed-by: Ian Jackson <iwj@xenproject.org>

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 17:22:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 17:22:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89878.169731 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFKLS-0003jY-LS; Thu, 25 Feb 2021 17:22:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89878.169731; Thu, 25 Feb 2021 17:22:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFKLS-0003jR-IB; Thu, 25 Feb 2021 17:22:34 +0000
Received: by outflank-mailman (input) for mailman id 89878;
 Thu, 25 Feb 2021 17:22:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lFKLR-0003jL-7A
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 17:22:33 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lFKLR-0005WP-6H
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 17:22:33 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lFKLR-0001yS-49
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 17:22:33 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lFKLN-00025f-VR; Thu, 25 Feb 2021 17:22:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=5f4gmIamxphp8dyxysQf2Q6S9J5oPvwtHF2nfTWISZ4=; b=YymD/MaVbzBnYiQAppHKhkb4XB
	rbV9KL1x71akKZ3EfXWe/U3kvMWnIVSPm4KT/NB+DRkOUD2pS6AZxrEczcFXmjftxCjHho/MjI/5l
	N8lGCXbZp2ypXVWi4gbda3cUbTxlfNsDu4qErks61Oq2anyT2w0w6oyRr3n9HnRDpqSs=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24631.56533.776930.841094@mariner.uk.xensource.com>
Date: Thu, 25 Feb 2021 17:22:29 +0000
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
    Jan Beulich <JBeulich@suse.com>,
    Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
    Wei Liu <wl@xen.org>,
    Paul Durrant <paul@xen.org>
Subject: Re: [PATCH for-4.15] x86/dmop: Properly fail for PV guests
In-Reply-To: <20210225170936.3008-1-andrew.cooper3@citrix.com>
References: <20210225170936.3008-1-andrew.cooper3@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Andrew Cooper writes ("[PATCH for-4.15] x86/dmop: Properly fail for PV guests"):
> The current code has an early exit for PV guests, but it returns 0 having done
> nothing.

Reviewed-by: Ian Jackson <iwj@xenproject.org>

> diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
> index 5bc172a5d4..612749442e 100644
> --- a/xen/arch/x86/hvm/dm.c
> +++ b/xen/arch/x86/hvm/dm.c
> @@ -365,6 +365,7 @@ int dm_op(const struct dmop_args *op_args)
>      if ( rc )
>          return rc;
>  
> +    rc = -EINVAL;
>      if ( !is_hvm_domain(d) )
>          goto out;

Is this style, of setting rc outside the if, the standard hypervisor
style ?

Ian.


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 17:30:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 17:30:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89881.169742 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFKTA-0004nF-G3; Thu, 25 Feb 2021 17:30:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89881.169742; Thu, 25 Feb 2021 17:30:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFKTA-0004n8-D4; Thu, 25 Feb 2021 17:30:32 +0000
Received: by outflank-mailman (input) for mailman id 89881;
 Thu, 25 Feb 2021 17:30:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3W+l=H3=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lFKT9-0004n3-96
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 17:30:31 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4b4cc6a0-968e-4a6d-846e-ab33cdf14b86;
 Thu, 25 Feb 2021 17:30:30 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4b4cc6a0-968e-4a6d-846e-ab33cdf14b86
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614274230;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=zTXBL6wvcVLi1sVQaiP/6gIyNSA3VsLUSxgimK5THA0=;
  b=Teecy4+Djr4jY0V+TDDoPQGULhrI8RcvrM5j9hxoew2w788BtA7c7od3
   x02Ee4+/eeKTdCpl1Kzh2RMi60KyqUj37HG2qt7UDJeH52GUfwbgrohvn
   kOuQyqBF5kpX3ziEkAzTyheg/E8m8LihJQvFAkF8igvn3Cp0nDRvXYfv8
   k=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: gSIKiFCzRJmIea7cXePnFdofIoLVGidRly/gCaC4OlEvU1qi0R9BDQYqxUJiORYvbxbRRljVTq
 YmRpDoa93P3Q1G2SlATkF04ERHD75trrptGuDSvEJlIwZ1AcUzV3zqXPMTSUTuWZ0D4hrhY2kR
 xtQ2FYG6El97yygKqVlvXSQQbQq5h6qYiiwRfdcuQH7G6ADgjRiEA4uUnIPx8ioEwisOWUTMFX
 GFpZvq9BaeZXA+CTzWChQoaGyyqA7/ikln92tJeSAHVf50ZNfE71w5hcoAe0IYJ6cka2laKJzQ
 auA=
X-SBRS: 5.2
X-MesageID: 39434610
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,206,1610427600"; 
   d="scan'208";a="39434610"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=gn2yVX9MGqcWvT12cxG6paRC2mcjWd04qpzDltIRH4Wd9hS7cly58Wh97X6AqNUB1Mdw/6pE/cOl+7phkCdB9GJxZNRNtQCf7DNh18TogZ7WhkKf97KMsJnTC6PhZJchoqN4sU+pu6OJ2woqLgtdR6Ufhf/oEbVhz4oqK1Ps/MJ656l9kJFAT9OQIzYB2q4PltFe7+RSgxfxWoK3VEJxB9vKvqRuXjpZRQFwWD90FD4/PInasozaSs7VblVj5TSZCgKQxwFVboG3nwHTgndmafkpeCa+cehyZk6HxmNkJh5I7fZacpApMkcPp+D4EBtq14YWyT3mAKohlrk4/97H0w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=GJMxrc93AYfTb0W2zQjsUvm/jjcfAHB6Z4+rCGsFggo=;
 b=d5AoX1Q7nAeaxquOzSm19FDGjoCd2tbFtfT/pk5DzHwn1WW3vMUt7qaigWp7HbJX9P9ZkBYQvG4NeEVV3JAFwP78x5ugkSBMr0FRiVZxaOlwADahEn5wUjpsUq5AIrK6Jx7dgFX9X4BubVDjwOPADEI8myWyHE2tGAtOcBXJkFdGELEQAa6WtQ/rHTyWhrrqiT7dWaOyn/tzf1OFds1oyCA3LqmKxGO6P1NYE9Zb0Q3/Osn0PsjLIchR8N5U36kGrnbyLAWm0UNzpVa9ld0Dz/6kRAPjQF7SCKivxdGUBgrI3zcJ9vc9aNamBJzgLqESC3A5dTplmoUw5DtQ+wxxJg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=GJMxrc93AYfTb0W2zQjsUvm/jjcfAHB6Z4+rCGsFggo=;
 b=oD+8IfIhN7D3x9N14vxBPAclVw6tWqo6gJQBu89oIEoXvemZ0g310x8MwZ/fhmKsZYIOp7sj771UDiIfIJz8US2TWv72/Iuuen6DBsP8zewuf351AO2hPWxbKCUjcFaELeNTER+n/F0Ex6y1MKbGsrm0ONz/MAGtpuPP9x/VfyE=
Subject: Re: [PATCH for-4.15] dmop: Add XEN_DMOP_nr_vcpus
To: Ian Jackson <iwj@xenproject.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
References: <20210225170946.4297-1-andrew.cooper3@citrix.com>
 <24631.56478.814418.802877@mariner.uk.xensource.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <12e99519-201a-89c2-eb4e-b67e1d1d3849@citrix.com>
Date: Thu, 25 Feb 2021 17:30:18 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
In-Reply-To: <24631.56478.814418.802877@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0349.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:d::25) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: a7fd6819-730f-4c27-a33b-08d8d9b30a3c
X-MS-TrafficTypeDiagnostic: BYAPR03MB4808:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB4808462DA6C9F09606EF1B35BA9E9@BYAPR03MB4808.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 062pv/7wlOnHGKOrkI/7hO/8HVPx8L0ThWx3Lcstm15eKuF3TGSOicg3C1rxtTQ07m7D0A9vHBMuLXLwmC34US19Zhr+B7mwPBIBfj+mMrHqdEOO50VbOVw19ChvkATgbGVyadtOse+LxkRSjy5XxPyud8jMmbE3FTBEn1aDHx0/DyFFPAu6cNG4dGU27EUcPJGlJg+C8mQEM2Us7+9OdVEfYguFB5PEGr5i5T37Qt4Kqfbs6P408eE8fXbwdGzUIRjBRzr1Se9qspjauSBy4SOXiynaJ1g9wae5Jc9o9WGvrO1HOtBIjdmAoduT3ZSDC99OsIUgAgBwEHYdxv30XKjlT0BUV1oTML9PSz2S2CpZ1COv6d0mvtCWcKNS/46D8ykOY9FHuZT1rEasIaZjdN05BWHFymQZghgOC4X80VkGxzRMkqUz3usiee+Cg/EKg9Z6CeJx8c50hnLobb2lyWGYob0BS7u6KOEvVZwmNLWbXtlPnQBu1WeUyeJlIT+SgG3v+QCaFMkGAohlrfM8ddVpfRE+73gcIeGnUolFzs9WxdCeYGHrnQ5NKEEXC/E/hBrFHNjgomijhWJgNYZWoynlyPfexk9bHQGXY9VFREM=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(366004)(376002)(136003)(346002)(39860400002)(2616005)(6666004)(36756003)(31696002)(31686004)(6916009)(6486002)(26005)(83380400001)(86362001)(4326008)(16576012)(478600001)(66556008)(956004)(5660300002)(8936002)(53546011)(66476007)(2906002)(54906003)(66946007)(186003)(8676002)(16526019)(316002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?MUJON2diUkdLM0g3VFNwdGVCa3VYS1hWMFF4UmFDL3VUSW5md2lXOGh2Z3dD?=
 =?utf-8?B?cy93KzliaEcwVkpsZVExakRjdGFGSkhpZXRnbFFsK3g5QTNlZ1JYZmJGbUFv?=
 =?utf-8?B?VTErRGY5OGRQRCtGQ2cyRlBQQkVNS3dLN0R3SEwxODZXS0labXF4S1ZmL2VM?=
 =?utf-8?B?dDdNYnhXOW5qMExzeVhKTml2Z0t4UVpvMkJVNXVvL1k2ZEdHZlp4bTd6d2V6?=
 =?utf-8?B?SnBsOThSRTdOM2pDSjVoelAzcGppU0tkMjBhbzM4SmFOYWtCQkZOd0hTakVj?=
 =?utf-8?B?ckZxZkZDTFc4eGpVMkVOMGlSdUtwMmkxdHlMTHNXaTVIQTJKcGs4M1Flckhh?=
 =?utf-8?B?VDdYUytQUVc4Z1JXWjNWdEh4d1ZJVHhST09QVkNkOEpTYWVORVVBVk9MTVVt?=
 =?utf-8?B?enhBYzlVaWtGL1liUFJTS0dmVCtkcVlvU3Axa3QzUVdScUljeGF3bUZUY1NG?=
 =?utf-8?B?SklrK3d6cjhmcnArZkVOKzZwUUI4Ung4bnhzOWllSmpmLzZBVStyK1BFWVNh?=
 =?utf-8?B?Z1hycFRqVlpMaEcyWkVCdTVvSXZ2MFNKT05GeWZLVEo3WGpqZzJMQ1NjWHgx?=
 =?utf-8?B?a3oxOWZ6d3Vrc2V6SnVHeVZNVEt5SDlZTFJRTFF1cXIxekpKQ21WRmZhV0pS?=
 =?utf-8?B?ZDBTS3hmUjVUK1FkVkhHNG91MVFMYklSdWpONnFPdGcxUmhsaWl5QlNka21T?=
 =?utf-8?B?TDlwL1YwbkdSZkkwMWNXVE1lN0tvbUpBR3l3MEErTHR3V1grVk4yd2hHeCtI?=
 =?utf-8?B?SDNOTzhyY2xYbDJLOEJyTmlRT2ttQnpMR0xsNndudWFUNUVvcEVETW44TkZz?=
 =?utf-8?B?bHNTMFBSVUt0SmZLNUkxZjBRSWdiUGZvM05pTHFSTndlSEU4dUt4YXZjYlRI?=
 =?utf-8?B?cWt6Q2lXQ3IvQ1o1ajd2bjU3d3h0akdsaE5EY1RsN3A2SUxvOUhKa3lLOEI4?=
 =?utf-8?B?UVNCMWN2KzJFeDA5cWR4ZDkvMmI3cmROalMyWjl5UlRlZkl6V0xzODRKMllD?=
 =?utf-8?B?RXRjMnNkV2t5ak9BaDM2RW12dGJ0WklBMGdvNU1leXM1czNRWHIrM281NGp6?=
 =?utf-8?B?TlRKNUlHT0x3aG1HQXJFay82aXlEUWRjUmROVy9icFdkODdlWVJjZTZ1YzRT?=
 =?utf-8?B?YllxQlBYRkE1ZW5QQjNMVzVBSWpWOGxkcmhSZTgrNG5odUFITXY1dE5BOEZD?=
 =?utf-8?B?QklsOCtRd3RzVlM2TGpoL0pEcTlxVC9zb3dLYnVOYzZYVmxyWFM3THhhYjUx?=
 =?utf-8?B?aTRlQVJNUi9seEE4SnZ6bFRrOE9ua0Fvd2ZtR05iQnhSSlNmTnFlTTlRcCt4?=
 =?utf-8?B?TWhjNExuUjFVMG5PL1VORXMwRVVnbDZlMlJrN2hTUVIxTDNOK09hT1ZxQ2RC?=
 =?utf-8?B?YitjTXBVY29Vck5OZlcwVE1Kc3pEcFhzeUdmK25XMmJzNUd3MHRkK0JzclZX?=
 =?utf-8?B?YTdzY2NXR3JGYlk3TUVKTFNyM2tsRk50M1ZwOUxJdXpwZEc1U2xiS2NadEgz?=
 =?utf-8?B?cmtQZlBlTkp1eEcwMFl0ZkVjRTdSeHMwYTdudjJoZE8rUnRkOEk2NUUrVGZO?=
 =?utf-8?B?ZnNnVXMxUi9xVXpLQ3NobklRU0JwTytaVXhCcC8rRnNXdTkyTGdQUVFYUk1U?=
 =?utf-8?B?RUwvNmJwY1UyZ1ZRT1hoblZNMm1yWU5UM2tHZDVMZ2ladERqMHBpd1VuQnJh?=
 =?utf-8?B?aTJsdGxHN1RzbkhOcWNkaThyNnBoSFpGcUlaRnlxdUhFb2RRVXNCZnpPdk1i?=
 =?utf-8?Q?lXl6HhsYerw5ikNkLHemvzT3o7yyFw+Xvr3TJ5O?=
X-MS-Exchange-CrossTenant-Network-Message-Id: a7fd6819-730f-4c27-a33b-08d8d9b30a3c
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Feb 2021 17:30:26.8423
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: eBAHpfu2YAkNhtTlhA7O+lQtqihX0zdI4eOp0EGU2TgtDzNuAN9a3hfxxEZGdOK0Ag6oyMyKDLlm6m9K+OU+l2FUQ2yKZMh08gistzH0/mM=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4808
X-OriginatorOrg: citrix.com

On 25/02/2021 17:21, Ian Jackson wrote:
> Andrew Cooper writes ("[PATCH for-4.15] dmop: Add XEN_DMOP_nr_vcpus"):
>> Curiously absent from the stable API/ABIs is an ability to query the number of
>> vcpus which a domain has.  Emulators need to know this information in
>> particular to know how many struct ioreq's live in the ioreq server mappings.
>>
>> In practice, this forces all userspace to link against libxenctrl to use
>> xc_domain_getinfo(), which rather defeats the purpose of the stable libraries.
> Wat

Yeah...  My reaction was similar.

>
>> For 4.15.  This was a surprise discovery in the massive ABI untangling effort
>> I'm currently doing for XenServer's new build system.
> Given that this is a new feature at a late stage I am going to say
> this:
>
> I will R-A it subject to it getting *two* independent Reviewed-by.
>
> I will try to do one of them myself :-).
>
> ...
>
>> +/*
>> + * XEN_DMOP_nr_vcpus: Query the number of vCPUs a domain has.
>> + *
>> + * The number of vcpus a domain has is fixed from creation time.  This bound
>> + * is applicable to e.g. the vcpuid parameter of XEN_DMOP_inject_event, or
>> + * number of struct ioreq objects mapped via XENMEM_acquire_resource.
> AIUI from the code, the value is the maximum number of vcpus, in the
> sense that they are not necessarily all online.  In which case I think
> maybe you want to mention that here ?

Yeah - there is no guarantee that they're all online, or running.

Emulators tend to attach before the domain starts executing anyway.  The
important thing they need to do is loop through each struct ioreq in the
ioreq_server mapping to read the domid and bind the per-vcpu event
channel for notification of work to do.

The totally gross way of not needing this API is to scan through the
mapping and identify the first struct ioreq which has 0 listed for an
event channel, which is not a construct I wish to promote.

>
>> diff --git a/xen/include/xlat.lst b/xen/include/xlat.lst
>> index 398993d5f4..cbbd20c958 100644
>> --- a/xen/include/xlat.lst
>> +++ b/xen/include/xlat.lst
>> @@ -107,6 +107,7 @@
>>  ?	dm_op_set_pci_intx_level	hvm/dm_op.h
>>  ?	dm_op_set_pci_link_route	hvm/dm_op.h
>>  ?	dm_op_track_dirty_vram		hvm/dm_op.h
>> +?	dm_op_nr_vcpus			hvm/dm_op.h
>>  !	hvm_altp2m_set_mem_access_multi	hvm/hvm_op.h
>>  ?	vcpu_hvm_context		hvm/hvm_vcpu.h
>>  ?	vcpu_hvm_x86_32			hvm/hvm_vcpu.h
>> -- 
> I have no idea what even.  I read the comment at the top of the file.
>
> So for *everything except the xlat.lst change*
> Reviewed-by: Ian Jackson <iwj@xenproject.org>

Thanks.

This is the magic to make this hunk:

@@ -641,6 +651,7 @@ CHECK_dm_op_map_mem_type_to_ioreq_server;
 CHECK_dm_op_remote_shutdown;
 CHECK_dm_op_relocate_memory;
 CHECK_dm_op_pin_memory_cacheattr;
+CHECK_dm_op_nr_vcpus;
 
 int compat_dm_op(domid_t domid,
                  unsigned int nr_bufs,

work, to do a build time check that the structure is identical between
32bit and 64bit builds.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 17:32:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 17:32:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89884.169755 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFKV1-0004vh-Sx; Thu, 25 Feb 2021 17:32:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89884.169755; Thu, 25 Feb 2021 17:32:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFKV1-0004va-Pt; Thu, 25 Feb 2021 17:32:27 +0000
Received: by outflank-mailman (input) for mailman id 89884;
 Thu, 25 Feb 2021 17:32:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NYzK=H3=suse.com=jfehlig@srs-us1.protection.inumbo.net>)
 id 1lFKV0-0004vV-FM
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 17:32:26 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [62.140.7.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a15a9c4f-6334-457e-8279-e7dccf36af87;
 Thu, 25 Feb 2021 17:32:24 +0000 (UTC)
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05lp2176.outbound.protection.outlook.com [104.47.17.176])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-2-N4vtx34NP5auh_XSgffBEQ-1; Thu, 25 Feb 2021 18:32:21 +0100
Received: from AM8PR04MB7970.eurprd04.prod.outlook.com (2603:10a6:20b:24f::9)
 by AM0PR04MB3954.eurprd04.prod.outlook.com (2603:10a6:208:63::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3868.27; Thu, 25 Feb
 2021 17:32:18 +0000
Received: from AM8PR04MB7970.eurprd04.prod.outlook.com
 ([fe80::8532:12af:9c2:66ca]) by AM8PR04MB7970.eurprd04.prod.outlook.com
 ([fe80::8532:12af:9c2:66ca%7]) with mapi id 15.20.3890.020; Thu, 25 Feb 2021
 17:32:18 +0000
Received: from [192.168.0.4] (75.169.34.2) by
 AM4PR0202CA0010.eurprd02.prod.outlook.com (2603:10a6:200:89::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3890.19 via Frontend
 Transport; Thu, 25 Feb 2021 17:32:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a15a9c4f-6334-457e-8279-e7dccf36af87
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1614274343;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ryAn2ABAftELdbjkxWIfITKCus8zUPpxQ+W5FBSx3BE=;
	b=QBC0fAtFp/OQPUIc6FP/Oswx85JcKU51KSxOc7PX+VTo9j9CptGT/3mkxPeMgH9Qy5ER+q
	v8nHw/wsO40CgfgH88sIAUWjH9XeUEr/QzAmkHsWiBY3pYBNuIZaQE4Uwx/Tgdva36v7ds
	vYDiZV/sIm8nQsuAPEGkCr/ZeG/WMbA=
X-MC-Unique: N4vtx34NP5auh_XSgffBEQ-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=M8saZWZ85tESZqs/+Sgv/oi/kAlU/xApplnQvSWvSd6hhDVN22xby4EaXrujUF/+HBVldTLDBPS+yIVel1zjSDQ2nt+BMeNz9gNaOd88xDfEozAqvt4NvorF7zaSDfVYOXNzQe3XUpEMnCMmMK5w984Hp7ypWunJkSJI0pqqY/T/hNMPAPZx7h1dzX8pp0zO2my3WJlTeCQvB4pC3moYQbuORX8wcrq6O+JOIzSh3/EddpGuLQBtxiOwgsK5Xq2ffwkji3PriQ9KnfEH+XD9PDW2sdS7UUpTnE4lTVW9vKniJi8EzGCIun0M39hGsifm88cmNWxYuczNAPxhGLUWUA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=HxQ4+12qblRwW9qr4HnK4DjJExdkPAbV8eQ77AI1Cfw=;
 b=hRyqC/tVIqOZNNqaXCR4P56dv4Peto5qVBzUGbioDFWFNyernIRwy2dmxwCHw+W/KZ9RsDkZvAmMCOi0FGVmTo2Y1bBPaQy6RDyFpxYlKG2pZvT6zL/+AYg1mhqkyStgiMn89iDGnmZY3mVe+oul64TWT0hcaAVp/iGPc7OWh7AjzoDxbmY/OnbLl6IH6YkUz0VSHfqbugN3Y4X0aflDll+5Lv0JFZS6Jrl93G/d0yICs+jNnCec737FyK1cELAYSJhzHG3xbhXmbgrIajnLkXiAFZymb+qCPv2RtcWRIQhJW6m+sXnguUVJkUMPdHpBM7QJOoiQBZBoWdWZCMvogw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH 00/14] deprecations: remove many old deprecations
To: =?UTF-8?Q?Daniel_P=2e_Berrang=c3=a9?= <berrange@redhat.com>,
 qemu-devel@nongnu.org
CC: Fam Zheng <fam@euphon.net>, "Michael S. Tsirkin" <mst@redhat.com>,
 libvir-list@redhat.com, Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 Gerd Hoffmann <kraxel@redhat.com>, qemu-block@nongnu.org,
 Juan Quintela <quintela@redhat.com>,
 Wainer dos Santos Moschetta <wainersm@redhat.com>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 =?UTF-8?Q?Marc-Andr=c3=a9_Lureau?= <marcandre.lureau@redhat.com>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>,
 Artyom Tarasenko <atar4qemu@gmail.com>, Laurent Vivier <lvivier@redhat.com>,
 Thomas Huth <thuth@redhat.com>, Eduardo Habkost <ehabkost@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>, Greg Kurz <groug@kaod.org>,
 Cleber Rosa <crosa@redhat.com>, John Snow <jsnow@redhat.com>,
 David Gibson <david@gibson.dropbear.id.au>, Kevin Wolf <kwolf@redhat.com>,
 Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>,
 Max Reitz <mreitz@redhat.com>, qemu-ppc@nongnu.org,
 Paolo Bonzini <pbonzini@redhat.com>,
 xen-devel <xen-devel@lists.xenproject.org>
References: <20210224131142.1952027-1-berrange@redhat.com>
From: Jim Fehlig <jfehlig@suse.com>
Message-ID: <de4a241c-3cca-203e-62c2-bf2c19f9e7ce@suse.com>
Date: Thu, 25 Feb 2021 10:32:08 -0700
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
In-Reply-To: <20210224131142.1952027-1-berrange@redhat.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [75.169.34.2]
X-ClientProxiedBy: AM4PR0202CA0010.eurprd02.prod.outlook.com
 (2603:10a6:200:89::20) To AM8PR04MB7970.eurprd04.prod.outlook.com
 (2603:10a6:20b:24f::9)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 5e733b1a-a015-4abf-26bd-08d8d9b34cda
X-MS-TrafficTypeDiagnostic: AM0PR04MB3954:
X-Microsoft-Antispam-PRVS:
	<AM0PR04MB395415700CFD51063AFA323FC69E9@AM0PR04MB3954.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:2089;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	9f92ddEKIcQukIMcRDUmGBS3KnjB6OHZ+kkjNRd7ti3UiUxVtOIfwYcdv87KwS361yX0RHnKfJ+AbMtJGMMuwxrQgsBhhoV1XDDznm6WZeo9GKOv7KKm80mpWh7CX3gTfhAuZ1SOqfj4MFwBDoeTKBTRdSP3s/Hvk3wEsLAjoPjwzThm+niK8ZVLzcDMI1J3nOZUoH7p4qXObBHQvAw3WT84hgUBE6l0Rnz9dQTgi4ev6q9Z/JC3jJ4rip3eAWFQdPMIySNfUbO/qXo0ojBvvW/rlDgblI4xhccOwns199Xzbv4bj4hRG8gjPJn9KfNwgdaKeii/RbVYfRrp8VEfkMNtMfQ/Jhb1Q/6+Vps9Qvb7juy6aJzxPSTfySKqHUdeYKd+MTDNbVKHICfwKe18pT3MWa3bIhvvKqbn7nht/yXbxoI4NdHqZoi40G0zmUk7u1hVdsnA3bUWNwtRxPskhWktBxyD7LfprVKoyJnFo4l7n7y1l+vKgmF+3GfQRqZ/p/xQbC3VuWtPp+u9H45ZQccblxFQvK8XAc25Ecufji7bSEOerBRnns4Btwzw6Ndm+7JDCkxQimIR3cz9yr4yj5r9fzEXv1ztNaeyRRo3RhY=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM8PR04MB7970.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(366004)(136003)(396003)(376002)(346002)(39860400002)(8676002)(7416002)(66476007)(5660300002)(186003)(4326008)(16576012)(53546011)(36756003)(478600001)(86362001)(52116002)(2616005)(956004)(66556008)(31686004)(8936002)(16526019)(26005)(54906003)(316002)(31696002)(66946007)(83380400001)(6486002)(2906002)(6666004)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData:
	=?us-ascii?Q?iqrnShx0qzYiEYaUH04Ey6WNXT/kjy8aymoTl0FzpOKvVSC2MvQHcQ7gTn8x?=
 =?us-ascii?Q?4f8c1lmK5VM3s2jlNLY331BLby7YC/1ZR6I33sq+XPOuVPn2OIJPNZQn7WK5?=
 =?us-ascii?Q?9J4jKkOwv6N8FwQax0usyj/mXyJP4+ejV8+zSOQLESKz09/1YW2XiRwY+WLW?=
 =?us-ascii?Q?gtRmEh13ktdENlYYqfcyKtTuRcZl7JTRY3CYCgxTuyWLIYjGoYqa/hh+k9Cf?=
 =?us-ascii?Q?OXba9bhstZcL0140KDPJ3vaVVN5vsCcIO3wyTPX8h+9wbzjMggSMH7st53P8?=
 =?us-ascii?Q?eqM/aChboi+BH8pc+o2lWivv415oC/JmKKHuK+1dusiBCGCGeMibExcQuQoh?=
 =?us-ascii?Q?a46uJVtNRyUFkmFkcLRYfPSENe7GOgQILKbg1yKr9dvT46PH8Bem1W8GnPmO?=
 =?us-ascii?Q?SMZlDVqDNbjAiev3d6UTOt4aADcTn064U44zqkdgDv3Hk7Ks5BbzgRWTyLNI?=
 =?us-ascii?Q?bS3cMmlDWixiNLwuDWN6sAuMAX7SMiTVYLWQsBNWSl/ltULOYCoXdkY/UxO+?=
 =?us-ascii?Q?cpkwBGWTrvh3qiRTksGRyLgsyrgMZu2XFJ2Viyb6QrkJkGLZUy3++wXIl5Br?=
 =?us-ascii?Q?DadTeKhKbEpjxlZcI66WzPnaQS9EruEsXbN1DWLvSdSVbCt1HmyD9OHHR6gx?=
 =?us-ascii?Q?UZdh6NPbgyBC81DflIyNW2sk0DKGJUSOkcNlT2nQT/WqQzgdW03PvJuvXh/g?=
 =?us-ascii?Q?WcnEr5MhPPgsevKNANKa8F/IHi53garhQQHeJrod6wg/rbXI6XU/C39cRqkX?=
 =?us-ascii?Q?kGnTSv9bX2RkawlN4rgkSjQjGFqgFmqyJGUlWlpP9v3Kd5c1s8GVQgZBWJU1?=
 =?us-ascii?Q?juMhZPoLA1mNI85uBebEAu3pElH05866rdMxpdmLpg/FdytPTj2zTvoVp+uc?=
 =?us-ascii?Q?K23Ia5RauA6Cb9d/FTh8vAr0OIiUdaBrxC9KaYj7tNkZb/TZg1SMDTmQbeqq?=
 =?us-ascii?Q?ZF2bqRGit/6wp3EK9CqsI3hlgn2osT7jTDoGvnfHC3gNowzg46hdMrDcOqb1?=
 =?us-ascii?Q?ev7fghRsXNBpdOVJ4lHLEIKKb6E21oPA0tx2Ub8cTtEJru/Ha6dxqZ2S+8VP?=
 =?us-ascii?Q?bEmdRwCa0c/kGr+4sUu0LFBD9Yf4MP4k/ybpAgEf8PVdJF0o6CaS0oxsofEV?=
 =?us-ascii?Q?qX0j+wRDjwOOivyVrZhK3Za/xm8FEIC8CCarBHBtUhLkrVt8uGMy0lakQggK?=
 =?us-ascii?Q?xq0Q0jB0jSRYW4jsFLuoGItKZ7++10DVPRLNwCICpPhibj4QMZtTH2MmUd4U?=
 =?us-ascii?Q?aBYgCp+aAwut7LRiSLbleR+KvUaiXLZKMkuJOAphorJZj6V5pUylierkiNrX?=
 =?us-ascii?Q?9ZcexjukNnMWWdIr84kODQLl?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5e733b1a-a015-4abf-26bd-08d8d9b34cda
X-MS-Exchange-CrossTenant-AuthSource: AM8PR04MB7970.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Feb 2021 17:32:18.6266
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: cWO9/i63UGqiixFtjofSOUMA8pO8HQSYd1YtsCGTippXy943hdR/7FThI2YYFxDfHY6qzwRtkQ5v1GPcqvOTLw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR04MB3954

Adding xen-devel and Ian to cc.

On 2/24/21 6:11 AM, Daniel P. Berrangé wrote:
> The following features have been deprecated for well over the two
> release cycles we promise

This reminded me of a bug report we received late last year when updating to
5.2.0. 'virsh setvcpus' suddenly stopped working for Xen HVM guests. Turns out
libxl uses cpu-add under the covers.

> 
>    ``-usbdevice`` (since 2.10.0)
>    ``-drive file=json:{...{'driver':'file'}}`` (since 3.0)
>    ``-vnc acl`` (since 4.0.0)
>    ``-mon ...,control=readline,pretty=on|off`` (since 4.1)
>    ``migrate_set_downtime`` and ``migrate_set_speed`` (since 2.8.0)
>    ``query-named-block-nodes`` result ``encryption_key_missing`` (since 2.10.0)
>    ``query-block`` result ``inserted.encryption_key_missing`` (since 2.10.0)
>    ``migrate-set-cache-size`` and ``query-migrate-cache-size`` (since 2.11.0)
>    ``query-named-block-nodes`` and ``query-block`` result dirty-bitmaps[i].status (since 4.0)
>    ``query-cpus`` (since 2.12.0)
>    ``query-cpus-fast`` ``arch`` output member (since 3.0.0)
>    ``query-events`` (since 4.0)
>    chardev client socket with ``wait`` option (since 4.0)
>    ``acl_show``, ``acl_reset``, ``acl_policy``, ``acl_add``, ``acl_remove`` (since 4.0.0)
>    ``ide-drive`` (since 4.2)
>    ``scsi-disk`` (since 4.2)
> 
> AFAICT, libvirt has ceased to use all of these too.

A quick grep of the libxl code shows it uses -usbdevice, query-cpus, and
scsi-disk.

> There are many more similarly old deprecations not (yet) tackled.

The Xen tools maintainers will need to be more vigilant of the deprecations. I
don't follow Xen development closely enough to know if this topic has already
been discussed.

Regards,
Jim



From xen-devel-bounces@lists.xenproject.org Thu Feb 25 17:39:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 17:39:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89888.169767 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFKba-0005It-RC; Thu, 25 Feb 2021 17:39:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89888.169767; Thu, 25 Feb 2021 17:39:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFKba-0005Im-N4; Thu, 25 Feb 2021 17:39:14 +0000
Received: by outflank-mailman (input) for mailman id 89888;
 Thu, 25 Feb 2021 17:39:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3W+l=H3=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lFKbZ-0005Ih-Fn
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 17:39:13 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f15114b8-37f3-46e7-a75a-4735bbc69b2a;
 Thu, 25 Feb 2021 17:39:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f15114b8-37f3-46e7-a75a-4735bbc69b2a
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614274752;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=2KwKUd9T9DQaJi0yqu56bFBZny4mvCci2SiKDYl5bfo=;
  b=W9o1c9lMSGcW3100Y9/OMTCU6n3y1kzKLrCxta8mqEj7AMDf1TNoju9F
   bvrW1Nv7LgjKZjVTsJ2WmMaWjkddHmPKICcqYSZ6CsL4Poh430Czow02T
   oDXBLmI2G3IJjM5n/199MzG4Jt4UzY73iCkzbatQ0P0Fe3c2fol8iMNDh
   8=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: bMtk4e9rOrZT0xBDxAVYQ5yzeccnbxIcFTU90AV684VSp5F0tTsn/jYF60YvA/+7PDZlND68KR
 DY5iEZV1ZtMxBuxUe0ruGa+N3qOIZmH5Xs66l0B76dVpJxJdrw+qPclWmcp/qeOZDv+ojOhB4Z
 lf8gPD+72+cTJXWQ2aNpjjaLHuG4nR+OuKAMljJmTw/CtXOUSsXBUcX+KPBzjDwIsOmrtyEIZT
 zWXNS6PjuHVwI58Ei2KDh120WpEeHwft8ZtXO8mt/5u7c7toeG0Pz+XrMKyXx03RVhFMw1eAtU
 Lz8=
X-SBRS: 5.2
X-MesageID: 38232735
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,206,1610427600"; 
   d="scan'208";a="38232735"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=CupvdXhC/dUBbsRN6mZ4v63KH875YGIWBoSEg37CNxtFCOX0A06Bp9qDDe83emrdvnhSmDcVytQ/TlO9xUY6+voXFjNF3Z0fgUieATcF4O5kJ8rlWIwc9OqBVm9kEgdbW+/yW67VYWrbSw90WlZHJf2aNYz78sMCG9kOzn1Plxx8VMS2i1lFMvB8YRLAJDboHJSNE4MU6+ip7TGnnjEQODmdYOK0AP7PWE6U4r5ijkYdyNHlV29PnqfE6QT6YzyHn5WwA5oTaBC0DnrcJ1SkHDk/hZBYhubjyPOiORKugQ4nL1ps104GkW7tEIGdCPJVNbmfsvx70eTOf88oDtYWVA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Rzm2ZUwCo5v9VUZDH4J5k9fXJhtts7Pxz9WMJAjKWuw=;
 b=mgUmtF62APfw/qzUsIJs6ixaVsi3dttgMb8aqvYnArfLOvCHmaex5mzibTGO+8v4eF1W6KEe44+Wg2QKMrsNcqnNPJtoaOHrBDUsaWVftGIY8B3LLc+2mlMT1cdbF2uRTjwiEc2xgsvc2a/PpWv3xN5UFrrGuMjVoaj7zH10aQjHMba4FjGv/y/L2WpU41WVKoZfEcvldN/f7wjOuqIEJohfHdN8ko8qPK7kYTbmAAAffPJCCtnu7kavPpqtNMeSesWNKwwJAllWNx0ES9joJIX+7idiNxQbk3ShcFEJ8B1NkJMoVVEuNs/GLkJFNOt0hfcXV1ZZmBzhoRlCkxXYDw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Rzm2ZUwCo5v9VUZDH4J5k9fXJhtts7Pxz9WMJAjKWuw=;
 b=iYCMAEdarKb4I/+uCrJ7nR1tUcVzh80vNUSpDnD1kNB8xHF9Vetv3NvunPrl5FxBFMyEDvHzNjaEVJg3HUSpI+ijGkUtjzXHJrri2M3hzY1rQ2zk4U68+3HU8TGxivdQzDeRBHqq5K/xNPFawcMNOjtFjUJ5q3uqLyBczB4fmAk=
Subject: Re: [PATCH for-4.15] x86/dmop: Properly fail for PV guests
To: Ian Jackson <iwj@xenproject.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>
References: <20210225170936.3008-1-andrew.cooper3@citrix.com>
 <24631.56533.776930.841094@mariner.uk.xensource.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <a7ccd364-ce64-0784-150a-91e558da2aba@citrix.com>
Date: Thu, 25 Feb 2021 17:39:01 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
In-Reply-To: <24631.56533.776930.841094@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0374.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a3::26) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 50587421-8be2-4ab8-4c82-08d8d9b4415a
X-MS-TrafficTypeDiagnostic: SJ0PR03MB5727:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <SJ0PR03MB5727ED4F52E12E4B91596BC5BA9E9@SJ0PR03MB5727.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7691;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: i7nbJthVWr3dgj+jD3eieFdsVj0MO2aqxitpyXBA5i9GOn2p4UZ+6o6ltkFk+nYNu5D2O1kGGl3+JPm3G/xYxDZByLc2XhohCiURq4tNvKNEwYfndhIKQcKZGX62Omfl/QDb+XwkYoRBqRdB19MHFJ7tJ2doz0lBq9qMQV1Jm75Uzg2X1FCBVNGJTAuxWnKL2rRCv/7No1lUFp4nUrAebnqIcHoSNVgB8jwL3xXfqCIdTYI2EVeF+1b8e3v1zBKpGK0pPK6PDGttge+8vbZS1OxtjwMoEHS3283SaKfTC6cadZmfVFkAC8B1Yd6cnF0/mza4f6RTsjJGIsfYXq+Lu562ocI3SJ075U4HYFgqFbq7e3aLQhSdF9/HgQCVuU3y5hCwblKL03NhCH607LWD33ZsGosRMMuPnDFi/qIHBQeLNE40l/6jiHMG4hyclW818qrRbpezbIcGqQiLra34XxM+U1lwYtKyMnPi1jmgqQkaIX5Kzi6qxsWgsCsuOvScuYoZFsuxdWaZhKpZFy+8zj5kenyxYAtTca8hGcove9uWkIvtR+cCauQzKR3mR6e8xHnRFCW0MpdQqFl0Xq0rfm7EoJKi0HKyibNWYHsrHnM=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(136003)(346002)(366004)(376002)(39860400002)(66476007)(54906003)(66556008)(16576012)(316002)(31696002)(6916009)(26005)(8676002)(186003)(16526019)(956004)(4326008)(2616005)(6486002)(53546011)(6666004)(86362001)(36756003)(4744005)(478600001)(8936002)(31686004)(5660300002)(2906002)(66946007)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-CrossTenant-Network-Message-Id: 50587421-8be2-4ab8-4c82-08d8d9b4415a
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Feb 2021 17:39:08.8701
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ND9APoyhkXD34poY/RTJFOjVe77/5lt0EVNKKl6ed7gSbWNIEI45FIG7Kb5TAgx9cQQdCijSKDjbIEd1Kiy7gMqskNS+7Qo++Nz0nXX0Gp0=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5727
X-OriginatorOrg: citrix.com

On 25/02/2021 17:22, Ian Jackson wrote:
> Andrew Cooper writes ("[PATCH for-4.15] x86/dmop: Properly fail for PV guests"):
>> The current code has an early exit for PV guests, but it returns 0 having done
>> nothing.
> Reviewed-by: Ian Jackson <iwj@xenproject.org>

Thanks.

>
>> diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
>> index 5bc172a5d4..612749442e 100644
>> --- a/xen/arch/x86/hvm/dm.c
>> +++ b/xen/arch/x86/hvm/dm.c
>> @@ -365,6 +365,7 @@ int dm_op(const struct dmop_args *op_args)
>>      if ( rc )
>>          return rc;
>>  
>> +    rc = -EINVAL;
>>      if ( !is_hvm_domain(d) )
>>          goto out;
> Is this style, of setting rc outside the if, the standard hypervisor
> style ?

If you think the cyclomatic complexity is bad in libxl...

This is the prevailing style in this function.  It's a common, but not
universal, style.
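The style under discussion can be sketched in isolation (a minimal standalone C illustration; `fake_dm_op` and its argument are hypothetical stand-ins, not the hypervisor's actual code): the error value is loaded into `rc` *before* each conditional `goto out`, so every failure branch below shares whatever the last check left in `rc`.

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Illustrative only: mimics the "set rc, then conditionally goto out"
 * error-handling style from the quoted hunk. */
static int fake_dm_op(bool is_hvm)
{
    int rc = 0;

    rc = -EINVAL;      /* set the error first ...            */
    if (!is_hvm)
        goto out;      /* ... so the early exit reports it   */

    rc = 0;            /* HVM path: the real work would go here */
 out:
    return rc;
}
```

The benefit is that adding another check only costs two lines (a new `rc = -E...;` and an `if (...) goto out;`) without duplicating the exit path.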

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 17:42:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 17:42:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89890.169779 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFKeU-00068j-8f; Thu, 25 Feb 2021 17:42:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89890.169779; Thu, 25 Feb 2021 17:42:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFKeU-00068c-5R; Thu, 25 Feb 2021 17:42:14 +0000
Received: by outflank-mailman (input) for mailman id 89890;
 Thu, 25 Feb 2021 17:42:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lFKeT-00068X-Ba
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 17:42:13 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lFKeS-0005qf-CD; Thu, 25 Feb 2021 17:42:12 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lFKeR-00032g-Vg; Thu, 25 Feb 2021 17:42:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From;
	bh=hMmY1pZlwkcpsLVU2u7EXJeXlw3MMy5FmKya1ob/Nqw=; b=3ZDz04sUXadIkRDEJ3JECubTga
	O7Dpg4WwmkgBYmTLytupZgVj54glxTTx/Yjgi3SP+u4h0vomepH0uw1uKONXZefI2rwdeuDKs0ogS
	cb44sLI61Ul1TIdbOWeVRrZCnbQasYF5CBAKaHA/tUyA0j1TAY1mwR5jpYfVu0IEFzag=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk,
	iwj@xenproject.org,
	Julien Grall <jgrall@amazon.com>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH for-4.15 0/5] xenstore: Address coverity issues in the LiveUpdate code
Date: Thu, 25 Feb 2021 17:41:26 +0000
Message-Id: <20210225174131.10115-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1

From: Julien Grall <jgrall@amazon.com>

Hi all,

The AWS Coverity instance spotted a few issues that could either
leak memory or dereference a NULL pointer.

All the patches are candidates for 4.15 as they harden the XenStored
code. The changes are low risk.

Cheers,

Julien Grall (5):
  tools/xenstored: Avoid unnecessary talloc_strdup() in do_control_lu()
  tools/xenstored: Avoid unnecessary talloc_strdup() in do_lu_start()
  tools/xenstored: control: Store the save filename in lu_dump_state
  tools/xenstore-control: Don't leak buf in live_update_start()
  tools/xenstored: Silence coverity when using xs_state_* structures

 tools/xenstore/include/xenstore_state.h |  6 +++---
 tools/xenstore/xenstore_control.c       |  4 +++-
 tools/xenstore/xenstored_control.c      | 26 +++++++++++--------------
 3 files changed, 17 insertions(+), 19 deletions(-)

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Feb 25 17:42:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 17:42:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89891.169791 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFKeV-00069y-H5; Thu, 25 Feb 2021 17:42:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89891.169791; Thu, 25 Feb 2021 17:42:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFKeV-00069p-Ck; Thu, 25 Feb 2021 17:42:15 +0000
Received: by outflank-mailman (input) for mailman id 89891;
 Thu, 25 Feb 2021 17:42:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lFKeU-000690-CY
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 17:42:14 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lFKeT-0005qk-EC; Thu, 25 Feb 2021 17:42:13 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lFKeT-00032g-3I; Thu, 25 Feb 2021 17:42:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=AsXl1Pz/JngLBcmlXDlPm3M/+CU4TVWFsS8Ry3xt94k=; b=qIWgFwpRYq59HKUUqTUo+Gbls
	+bPbBxsbcwnQaXp8Y88/l2MGQ+hjpYH0x3IFMNPsdVzhXnYboqliZC3qdrXnPvuvHwjD6wC6eGYsC
	VLfHHzToFtXLrgGUHI7eIWU+DoPmXWl4EjxCu1s1eZXOS0HHvw7JvTG82PRo+YSgOtsBU=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk,
	iwj@xenproject.org,
	Julien Grall <jgrall@amazon.com>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH for-4.15 1/5] tools/xenstored: Avoid unnecessary talloc_strdup() in do_control_lu()
Date: Thu, 25 Feb 2021 17:41:27 +0000
Message-Id: <20210225174131.10115-2-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210225174131.10115-1-julien@xen.org>
References: <20210225174131.10115-1-julien@xen.org>

From: Julien Grall <jgrall@amazon.com>

At the moment, the return of talloc_strdup() is not checked. This means
we may dereference a NULL pointer if the allocation failed.

However, it is pointless to allocate the memory as send_reply() will
copy the data to a different buffer. So drop the use of talloc_strdup().
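The point can be sketched with a mock (this is an illustration, not the real xenstored API: `mock_send_reply` and `do_control` are hypothetical stand-ins): if the reply function copies the payload into its own buffer, as send_reply() does, the caller can safely pass a string literal with no duplication.

```c
#include <assert.h>
#include <string.h>

static char reply_buf[64];

/* Stand-in for send_reply(): copies the data, so the caller's
 * buffer need not outlive the call. */
static void mock_send_reply(const char *data, size_t len)
{
    memcpy(reply_buf, data, len);
}

/* Stand-in for do_control_lu(): passes a literal directly,
 * with no talloc_strdup() (and hence no allocation to check). */
static const char *do_control(int ok)
{
    const char *ret = ok ? "OK" : "EINVAL";

    mock_send_reply(ret, strlen(ret) + 1);
    return reply_buf;
}
```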

This bug was discovered and resolved using Coverity Static Analysis
Security Testing (SAST) by Synopsys, Inc.

Fixes: fecab256d474 ("tools/xenstore: add basic live-update command parsing")
Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 tools/xenstore/xenstored_control.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index f10beaf85eb4..e8a501acdb62 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -691,7 +691,6 @@ static const char *lu_start(const void *ctx, struct connection *conn,
 static int do_control_lu(void *ctx, struct connection *conn,
 			 char **vec, int num)
 {
-	const char *resp;
 	const char *ret = NULL;
 	unsigned int i;
 	bool force = false;
@@ -734,8 +733,7 @@ static int do_control_lu(void *ctx, struct connection *conn,
 
 	if (!ret)
 		ret = "OK";
-	resp = talloc_strdup(ctx, ret);
-	send_reply(conn, XS_CONTROL, resp, strlen(resp) + 1);
+	send_reply(conn, XS_CONTROL, ret, strlen(ret) + 1);
 	return 0;
 }
 #endif
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Feb 25 17:42:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 17:42:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89892.169803 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFKeW-0006C6-Ps; Thu, 25 Feb 2021 17:42:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89892.169803; Thu, 25 Feb 2021 17:42:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFKeW-0006By-LB; Thu, 25 Feb 2021 17:42:16 +0000
Received: by outflank-mailman (input) for mailman id 89892;
 Thu, 25 Feb 2021 17:42:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lFKeV-0006A2-F1
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 17:42:15 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lFKeU-0005qs-Fw; Thu, 25 Feb 2021 17:42:14 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lFKeU-00032g-7E; Thu, 25 Feb 2021 17:42:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=JuTt5+fVqpN12bzr+SQUOtZ9U/AniX5wCdCX0Q2WLao=; b=5i5uy41NOnXHp39CeUAjMnJZn
	Jvu+LoaUW7sd/DbtG/GNjoJ1frUp/JDVUQlRXaPw328ObUdirePteN4lkVfScrXb3vgH5ys6nglEN
	kSG18A/Lwbx+m4L4u7aikIeWRoTxrC5zcrBBL7Mo8C53CVi76QpxC5nlhHEXnptLGPIek=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk,
	iwj@xenproject.org,
	Julien Grall <jgrall@amazon.com>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH for-4.15 2/5] tools/xenstored: Avoid unnecessary talloc_strdup() in do_lu_start()
Date: Thu, 25 Feb 2021 17:41:28 +0000
Message-Id: <20210225174131.10115-3-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210225174131.10115-1-julien@xen.org>
References: <20210225174131.10115-1-julien@xen.org>

From: Julien Grall <jgrall@amazon.com>

At the moment, the return of talloc_strdup() is not checked. This means
we may dereference a NULL pointer if the allocation failed.

However, it is pointless to allocate the memory as send_reply() will
copy the data to a different buffer. So drop the use of talloc_strdup().

This bug was discovered and resolved using Coverity Static Analysis
Security Testing (SAST) by Synopsys, Inc.

Fixes: af216a99fb4a ("tools/xenstore: add the basic framework for doing the live update")
Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 tools/xenstore/xenstored_control.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index e8a501acdb62..8eb57827765c 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -638,7 +638,6 @@ static bool do_lu_start(struct delayed_request *req)
 {
 	time_t now = time(NULL);
 	const char *ret;
-	char *resp;
 
 	if (!lu_check_lu_allowed()) {
 		if (now < lu_status->started_at + lu_status->timeout)
@@ -660,8 +659,7 @@ static bool do_lu_start(struct delayed_request *req)
  out:
 	talloc_free(lu_status);
 
-	resp = talloc_strdup(req->in, ret);
-	send_reply(lu_status->conn, XS_CONTROL, resp, strlen(resp) + 1);
+	send_reply(lu_status->conn, XS_CONTROL, ret, strlen(ret) + 1);
 
 	return true;
 }
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Feb 25 17:42:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 17:42:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89893.169808 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFKeX-0006Cz-4Q; Thu, 25 Feb 2021 17:42:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89893.169808; Thu, 25 Feb 2021 17:42:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFKeW-0006CT-UQ; Thu, 25 Feb 2021 17:42:16 +0000
Received: by outflank-mailman (input) for mailman id 89893;
 Thu, 25 Feb 2021 17:42:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lFKeW-0006Bm-Aw
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 17:42:16 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lFKeV-0005qz-Js; Thu, 25 Feb 2021 17:42:15 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lFKeV-00032g-B9; Thu, 25 Feb 2021 17:42:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=Ouy6xY5aZ/aq/+pbqmrj55EhVefIirlCz92nwNMOsZA=; b=x+8bIRwILAv/nmOo1cc9xwPdm
	7qBOBZtcexmHlyNMvcxQsWnqcX9OqY/8HPnwk3AP1njlyD940IhnTQ7kfbGv7Sao2Ml4OJVzbK5Zt
	Uqi/WT+drUEonBHHtA1VUbCaEwWvKrEv2Iz5qTmpa6DZRSt17NnvoUaHxWuZF++9BLGnE=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk,
	iwj@xenproject.org,
	Julien Grall <jgrall@amazon.com>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH for-4.15 3/5] tools/xenstored: control: Store the save filename in lu_dump_state
Date: Thu, 25 Feb 2021 17:41:29 +0000
Message-Id: <20210225174131.10115-4-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210225174131.10115-1-julien@xen.org>
References: <20210225174131.10115-1-julien@xen.org>

From: Julien Grall <jgrall@amazon.com>

The function lu_close_dump_state() will use talloc_asprintf() without
checking whether the allocation succeeded. In the unlikely case we are
out of memory, we would dereference a NULL pointer.

As we already compute the filename in lu_get_dump_state(), we can store
it in lu_dump_state. This avoids having to deal with memory allocation
in the close path and also reduces the risk of using a different
filename.
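The shape of the change can be sketched as follows (a minimal standalone illustration using plain malloc instead of talloc; `dump_state`, `state_open` and `state_close` are hypothetical stand-ins, not the xenstored code): the name is built once when the state is opened, so the close path performs no allocation that could fail and cannot construct a different name.

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct dump_state {
    char *filename;
};

/* Build the filename once, at open time: a single point of
 * failure that is easy to check. */
static int state_open(struct dump_state *s, const char *rootdir)
{
    size_t n = strlen(rootdir) + sizeof("/state_dump");

    s->filename = malloc(n);
    if (!s->filename)
        return -1;
    snprintf(s->filename, n, "%s/state_dump", rootdir);
    return 0;
}

/* The close path reuses the stored name; nothing here can fail. */
static void state_close(struct dump_state *s)
{
    free(s->filename);
    s->filename = NULL;
}
```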

This bug was discovered and resolved using Coverity Static Analysis
Security Testing (SAST) by Synopsys, Inc.

Fixes: c0dc6a3e7c41 ("tools/xenstore: read internal state when doing live upgrade")
Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 tools/xenstore/xenstored_control.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index 8eb57827765c..653890f2d9e0 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -16,6 +16,7 @@ Interactive commands for Xen Store Daemon.
     along with this program; If not, see <http://www.gnu.org/licenses/>.
 */
 
+#include <assert.h>
 #include <ctype.h>
 #include <errno.h>
 #include <stdarg.h>
@@ -74,6 +75,7 @@ struct lu_dump_state {
 	unsigned int size;
 #ifndef __MINIOS__
 	int fd;
+	char *filename;
 #endif
 };
 
@@ -399,17 +401,16 @@ static void lu_dump_close(FILE *fp)
 
 static void lu_get_dump_state(struct lu_dump_state *state)
 {
-	char *filename;
 	struct stat statbuf;
 
 	state->size = 0;
 
-	filename = talloc_asprintf(NULL, "%s/state_dump", xs_daemon_rootdir());
-	if (!filename)
+	state->filename = talloc_asprintf(NULL, "%s/state_dump",
+					  xs_daemon_rootdir());
+	if (!state->filename)
 		barf("Allocation failure");
 
-	state->fd = open(filename, O_RDONLY);
-	talloc_free(filename);
+	state->fd = open(state->filename, O_RDONLY);
 	if (state->fd < 0)
 		return;
 	if (fstat(state->fd, &statbuf) != 0)
@@ -431,14 +432,13 @@ static void lu_get_dump_state(struct lu_dump_state *state)
 
 static void lu_close_dump_state(struct lu_dump_state *state)
 {
-	char *filename;
+	assert(state->filename != NULL);
 
 	munmap(state->buf, state->size);
 	close(state->fd);
 
-	filename = talloc_asprintf(NULL, "%s/state_dump", xs_daemon_rootdir());
-	unlink(filename);
-	talloc_free(filename);
+	unlink(state->filename);
+	talloc_free(state->filename);
 }
 
 static char *lu_exec(const void *ctx, int argc, char **argv)
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Feb 25 17:42:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 17:42:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89894.169827 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFKeZ-0006H4-Dy; Thu, 25 Feb 2021 17:42:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89894.169827; Thu, 25 Feb 2021 17:42:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFKeZ-0006Gu-9z; Thu, 25 Feb 2021 17:42:19 +0000
Received: by outflank-mailman (input) for mailman id 89894;
 Thu, 25 Feb 2021 17:42:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lFKeX-0006Eh-Lv
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 17:42:17 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lFKeW-0005r7-OD; Thu, 25 Feb 2021 17:42:16 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lFKeW-00032g-FC; Thu, 25 Feb 2021 17:42:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=MOM7smBMP9BCXNrEE1hf8/uoWjibhwZ1Ud0ejevMQus=; b=KDNoJwOGHmFC6HUQafwmQtJ5i
	XDPpKCkfQVLho8V22Sg9WziAsyJSl7e3NsLAAAI97Z9jSnBWKKbhSdFJtPCRolByzOk9IccxFbe6Y
	xWgZ1G5RrcXTIjyNJck3YtCec6T2w7Ic4ndI9qwc9hFeUyCZ4rEhh5Mh/rT3ewGZeu1xY=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk,
	iwj@xenproject.org,
	Julien Grall <jgrall@amazon.com>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH for-4.15 4/5] tools/xenstore-control: Don't leak buf in live_update_start()
Date: Thu, 25 Feb 2021 17:41:30 +0000
Message-Id: <20210225174131.10115-5-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210225174131.10115-1-julien@xen.org>
References: <20210225174131.10115-1-julien@xen.org>

From: Julien Grall <jgrall@amazon.com>

All the error paths but one will free buf. Cover the remaining path so
buf can't be leaked.

This bug was discovered and resolved using Coverity Static Analysis
Security Testing (SAST) by Synopsys, Inc.

Fixes: 7f97193e6aa8 ("tools/xenstore: add live update command to xenstore-control")
Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 tools/xenstore/xenstore_control.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/tools/xenstore/xenstore_control.c b/tools/xenstore/xenstore_control.c
index f6f4626c0656..548363ee7094 100644
--- a/tools/xenstore/xenstore_control.c
+++ b/tools/xenstore/xenstore_control.c
@@ -44,8 +44,10 @@ static int live_update_start(struct xs_handle *xsh, bool force, unsigned int to)
         return 1;
 
     ret = strdup("BUSY");
-    if (!ret)
+    if (!ret) {
+        free(buf);
         return 1;
+    }
 
     for (time_start = time(NULL); time(NULL) - time_start < to;) {
         free(ret);
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Feb 25 17:42:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 17:42:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89895.169833 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFKeZ-0006I4-UR; Thu, 25 Feb 2021 17:42:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89895.169833; Thu, 25 Feb 2021 17:42:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFKeZ-0006Hf-LW; Thu, 25 Feb 2021 17:42:19 +0000
Received: by outflank-mailman (input) for mailman id 89895;
 Thu, 25 Feb 2021 17:42:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lFKeY-0006GB-GZ
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 17:42:18 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lFKeX-0005rD-RK; Thu, 25 Feb 2021 17:42:17 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lFKeX-00032g-J5; Thu, 25 Feb 2021 17:42:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=53L5vgadSleOIYD3/pBTYYZNiwh9to3K/XeeeFHyx60=; b=Ozt6BqekBvI7kK5lApoC1yTg4
	+nbNKSPUsqRE18z9JlEmv01Q1xT3nZpyirFappNNH2i+mRlYe8QEInE1uKqg7sYADkZYOogd/a+8I
	d3NmCXx3WlX5wX2NVF5GemdLgFRwagXNx+5Qq9IO+3Zmts1TXwuTn8uFbMgmdx3+Ot4+w=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk,
	iwj@xenproject.org,
	Julien Grall <jgrall@amazon.com>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH for-4.15 5/5] tools/xenstored: Silence coverity when using xs_state_* structures
Date: Thu, 25 Feb 2021 17:41:31 +0000
Message-Id: <20210225174131.10115-6-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210225174131.10115-1-julien@xen.org>
References: <20210225174131.10115-1-julien@xen.org>

From: Julien Grall <jgrall@amazon.com>

Coverity will report uninitialized values for every use of the xs_state_*
structures in the save path. This can be prevented by using [0] rather
than [] to define the variable length arrays.

Coverity-ID: 1472398
Coverity-ID: 1472397
Coverity-ID: 1472396
Coverity-ID: 1472395
Signed-off-by: Julien Grall <jgrall@amazon.com>

---

From my understanding, the tools and the hypervisor already rely on GNU
extensions, so the change should be fine.

If not, we can use the same approach as XEN_FLEX_ARRAY_DIM.
---
 tools/xenstore/include/xenstore_state.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/tools/xenstore/include/xenstore_state.h b/tools/xenstore/include/xenstore_state.h
index ae0d053c8ffc..407d9e920c0f 100644
--- a/tools/xenstore/include/xenstore_state.h
+++ b/tools/xenstore/include/xenstore_state.h
@@ -86,7 +86,7 @@ struct xs_state_connection {
     uint16_t data_in_len;    /* Number of unprocessed bytes read from conn. */
     uint16_t data_resp_len;  /* Size of partial response pending for conn. */
     uint32_t data_out_len;   /* Number of bytes not yet written to conn. */
-    uint8_t  data[];         /* Pending data (read, written) + 0-7 pad bytes. */
+    uint8_t  data[0];        /* Pending data (read, written) + 0-7 pad bytes. */
 };
 
 /* Watch: */
@@ -94,7 +94,7 @@ struct xs_state_watch {
     uint32_t conn_id;       /* Connection this watch is associated with. */
     uint16_t path_length;   /* Number of bytes of path watched (incl. 0). */
     uint16_t token_length;  /* Number of bytes of watch token (incl. 0). */
-    uint8_t data[];         /* Path bytes, token bytes, 0-7 pad bytes. */
+    uint8_t data[0];        /* Path bytes, token bytes, 0-7 pad bytes. */
 };
 
 /* Transaction: */
@@ -125,7 +125,7 @@ struct xs_state_node {
 #define XS_STATE_NODE_TA_WRITTEN  0x0002
     uint16_t perm_n;        /* Number of permissions (0 in TA: node deleted). */
     /* Permissions (first is owner, has full access). */
-    struct xs_state_node_perm perms[];
+    struct xs_state_node_perm perms[0];
     /* Path and data follows, plus 0-7 pad bytes. */
 };
 #endif /* XENSTORE_STATE_H */
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Feb 25 17:47:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 17:47:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89909.169851 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFKjg-0006rk-SH; Thu, 25 Feb 2021 17:47:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89909.169851; Thu, 25 Feb 2021 17:47:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFKjg-0006rd-PB; Thu, 25 Feb 2021 17:47:36 +0000
Received: by outflank-mailman (input) for mailman id 89909;
 Thu, 25 Feb 2021 17:47:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3W+l=H3=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lFKjf-0006rT-MD
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 17:47:35 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bd27ccb3-2fae-499b-9739-f2bee1085892;
 Thu, 25 Feb 2021 17:47:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bd27ccb3-2fae-499b-9739-f2bee1085892
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614275254;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=p0bB5LXPhj/u6QZchLuLKzPAVBdmryQ3cjYIYo1x/6E=;
  b=NbFeWIV7hnMUrZWcAjHnLGr8X1QyDLOeqr0nb2UOGDdRD8CxYqk6YL8s
   97DjDvLqtW9Az9+f4lGD9V9FLWcfQH5OypM0mozg0tkhYgmGZxSjoYSIf
   1dJf5Rtbs8eqAoCN1YlJIOxBUqerHA4ilxq3zoJOMp6jxh0LdMGIXpqK3
   I=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 39436144
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,206,1610427600"; 
   d="scan'208";a="39436144"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=iwm6rvTyav1Ibe578hr45EgfNe2XIEMFMSBY3+cvySZLJ5dckCrIJ2B9LYR2GmiHx2L3LGsSzcMmF4OMm/5y++BXfdvXsfgOvYe4YmXIJ7zt6hQkjAZVEwn7l/ObEEcQe/V2TY2/TN45NDRV00MqZ+0BUHAou53f0KaWQVb+BKP8VrXe/CaJW4gXCGQ4u465KVZiYqsZa8TMMxRCJGXwDPuZS+7+Iu3caymDlfw/dhckhglXIIz+/WuPyAh2g7MPX+W261I2ZabvbEDbLi4CZe2BaAioBaaYy/peFIjC6vHXkpKfVEo1tmXUmRCZbt+RH3iDr2CLe95Eikouam2EBQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=p0bB5LXPhj/u6QZchLuLKzPAVBdmryQ3cjYIYo1x/6E=;
 b=KxossKtavnSTB0+1kPEfe3OsL2O5fZ5qyX+6fKOdIzssYccaX4L09lXmztK2TOOYjutl9oYFzfaXaVrtIyK+1bphnuh6sxBV+1IZZUMaHW9KxM3nZxGkV0nxporACJHOCdkr2U2fnaoxoTmXEO+Gp+yRPV391w5KMal0rso41TCmzEJay1MrCUt5LNzTqTlNVOk0XPP9z4A9gP8HL7HhcS996eJ4hAajh/Ir19o/T31x339gBsLpp82Il7nRJbSq8NTQwh2Rpw0NCRA/93J7wtbDvRCqkl/3yunejUkt0Ip+cBQPQe3bjdZF5HSDKkCJoxmCgnOzkYwZGIWpTD2MLg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=p0bB5LXPhj/u6QZchLuLKzPAVBdmryQ3cjYIYo1x/6E=;
 b=r69dUEzn8P9r4OlJ2c0dYAv7ZGcMeLXj8OIpTcyr3faC/7r50+RyOJHPlu6coSXk84wgInyE0TSji2uz5zldbxIxsddlqgt/dg16asvWCy3CvuPmYwXzyRHRurhpNHiKpWTlu4S6Ar5iEpzECRhk7kokRNaa9nEBVgg61OsAKFk=
Subject: Re: [PATCH for-4.15 5/5] tools/xenstored: Silence coverity when using
 xs_state_* structures
To: Julien Grall <julien@xen.org>, <xen-devel@lists.xenproject.org>
CC: <raphning@amazon.co.uk>, <iwj@xenproject.org>, Julien Grall
	<jgrall@amazon.com>, Wei Liu <wl@xen.org>, Juergen Gross <jgross@suse.com>
References: <20210225174131.10115-1-julien@xen.org>
 <20210225174131.10115-6-julien@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <1d456d07-a27e-aa3a-1a0e-46d7e71ba1e7@citrix.com>
Date: Thu, 25 Feb 2021 17:47:21 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
In-Reply-To: <20210225174131.10115-6-julien@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0008.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:150::13) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0

On 25/02/2021 17:41, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
>
> Coverity will report uninitialized values for every use of the xs_state_*
> structures in the save path. This can be prevented by using [0] rather
> than [] to define the variable length arrays.
>
> Coverity-ID: 1472398
> Coverity-ID: 1472397
> Coverity-ID: 1472396
> Coverity-ID: 1472395
> Signed-off-by: Julien Grall <jgrall@amazon.com>
>
> ---
>
> From my understanding, the tools and the hypervisor already rely on GNU
> extensions. So the change should be fine.
>
> If not, we can use the same approach as XEN_FLEX_ARRAY_DIM.

Linux has recently purged the use of [0] because it causes sizeof() to
do unsafe things.
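[Editor's note] As an illustration of the two spellings under discussion (a hypothetical example, not the Xen structures): both forms add no bytes to sizeof(), so the allocation pattern is identical; the difference is which dialects and tools accept each form.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical illustration: the C99 flexible array member ([]) and the
 * GNU zero-length array ([0]) both contribute no bytes to sizeof(), so
 * the usual allocation pattern, sizeof(struct) plus payload length, is
 * the same for both. */
struct rec_c99 {
    unsigned int len;
    unsigned char data[];    /* C99 flexible array member */
};

struct rec_gnu {
    unsigned int len;
    unsigned char data[0];   /* GNU zero-length array extension */
};

static struct rec_c99 *rec_alloc(const void *payload, unsigned int len)
{
    struct rec_c99 *r = malloc(sizeof(*r) + len);

    if (!r)
        return NULL;
    r->len = len;
    memcpy(r->data, payload, len);
    return r;
}
```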

Flexible array members are a C99 standard feature - are we sure that
Coverity is doing something wrong with them?

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 17:53:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 17:53:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89912.169863 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFKpF-0007tk-HH; Thu, 25 Feb 2021 17:53:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89912.169863; Thu, 25 Feb 2021 17:53:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFKpF-0007td-EB; Thu, 25 Feb 2021 17:53:21 +0000
Received: by outflank-mailman (input) for mailman id 89912;
 Thu, 25 Feb 2021 17:53:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lFKpE-0007tW-6M
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 17:53:20 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lFKpC-00062V-2V; Thu, 25 Feb 2021 17:53:18 +0000
Received: from 54-240-197-230.amazon.com ([54.240.197.230]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lFKpB-0003iN-Pt; Thu, 25 Feb 2021 17:53:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=Ie01WITm9WmXn52XbujX95vhQugB0mW2LjRGpKPcuCs=; b=a4qomH4pT7tTWEhpOQtm+C7fe6
	vGR466bHXIhcQ3h7gsJDTQfjKoqOGMXecVduZtRsSOP1H9jSX9woxIl4bMYeJJPEZmiD4KumRPTw0
	ZUnv/UGLjcM2SzeUNyy//GR4+KkP1ZgzRYXfBpMy/I7DtP4qcsCo27ONOriDG01WPrPc=;
Subject: Re: [PATCH for-4.15 5/5] tools/xenstored: Silence coverity when using
 xs_state_* structures
To: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Wei Liu <wl@xen.org>,
 Juergen Gross <jgross@suse.com>
References: <20210225174131.10115-1-julien@xen.org>
 <20210225174131.10115-6-julien@xen.org>
 <1d456d07-a27e-aa3a-1a0e-46d7e71ba1e7@citrix.com>
From: Julien Grall <julien@xen.org>
Message-ID: <1ff6526c-3d8b-f41f-793f-82d4327e15d0@xen.org>
Date: Thu, 25 Feb 2021 17:53:15 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <1d456d07-a27e-aa3a-1a0e-46d7e71ba1e7@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Andrew,

On 25/02/2021 17:47, Andrew Cooper wrote:
> On 25/02/2021 17:41, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> Coverity will report uninitialized values for every use of the xs_state_*
>> structures in the save path. This can be prevented by using [0] rather
>> than [] to define the variable length arrays.
>>
>> Coverity-ID: 1472398
>> Coverity-ID: 1472397
>> Coverity-ID: 1472396
>> Coverity-ID: 1472395
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>
>> ---
>>
>> From my understanding, the tools and the hypervisor already rely on GNU
>> extensions, so the change should be fine.
>>
>> If not, we can use the same approach as XEN_FLEX_ARRAY_DIM.
> 
> Linux has recently purged the use of [0] because it causes sizeof() to
> do unsafe things.

Do you have a link to the Linux thread?

> 
> Flexible array members are a C99 standard feature - are we sure that
> Coverity is doing something wrong with them?
I have run Coverity with one of the structures switched to [0] and it 
removed the uninitialized warning for that specific one.

So clearly Coverity is not happy with [], although I am not sure why.

Do you have a suggestion how to approach the problem?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 17:54:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 17:54:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89915.169874 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFKqF-000800-RV; Thu, 25 Feb 2021 17:54:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89915.169874; Thu, 25 Feb 2021 17:54:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFKqF-0007zt-OV; Thu, 25 Feb 2021 17:54:23 +0000
Received: by outflank-mailman (input) for mailman id 89915;
 Thu, 25 Feb 2021 17:54:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lFKqD-0007zl-Rt
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 17:54:21 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lFKqD-00063J-R2
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 17:54:21 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lFKqD-0003mD-P5
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 17:54:21 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lFKqA-0002Ah-Dj; Thu, 25 Feb 2021 17:54:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=BPTPd5kn0P1h32iZdd7ewLNHkk5RdLdb6Txgn6b8Qlw=; b=fMbPDRE19J9BoFXEjlAbq6DLVZ
	13PDKMXC/C5l2SiZBMCH80DaxhQzHuXwQgbNbxXjWce6aNt0uAzT+4wPFNQZ9A8iz+nrOJQH4m60T
	8A5d2ScnijC8AhrfhT/urjrmA/w+kjG9FekzhE1/bJc8uY8A9X0AwgbQjunepHNYPiqQ=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24631.58442.167560.663929@mariner.uk.xensource.com>
Date: Thu, 25 Feb 2021 17:54:18 +0000
To: Julien Grall <julien@xen.org>
Cc: xen-devel@lists.xenproject.org,
    raphning@amazon.co.uk,
    Julien Grall <jgrall@amazon.com>,
    Wei Liu <wl@xen.org>,
    Juergen Gross <jgross@suse.com>
Subject: [PATCH for-4.15 0/5] xenstore: Address coverity issues in the LiveUpdate code
In-Reply-To: <20210225174131.10115-1-julien@xen.org>
References: <20210225174131.10115-1-julien@xen.org>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Julien Grall writes ("[PATCH for-4.15 0/5] xenstore: Address coverity issues in the LiveUpdate code"):
>   tools/xenstored: Avoid unnecessary talloc_strdup() in do_control_lu()
>   tools/xenstored: Avoid unnecessary talloc_strdup() in do_lu_start()
>   tools/xenstored: control: Store the save filename in lu_dump_state
>   tools/xenstore-control: Don't leak buf in live_update_start()

These four are actual bugfixes:

Release-Acked-by: Ian Jackson <iwj@xenproject.org>

>   tools/xenstored: Silence coverity when using xs_state_* structures

For this I can't see a reason to give a release ack. See also Andy's
comments.

Ian.


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 17:55:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 17:55:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89917.169887 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFKrc-00087g-5u; Thu, 25 Feb 2021 17:55:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89917.169887; Thu, 25 Feb 2021 17:55:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFKrc-00087Z-2w; Thu, 25 Feb 2021 17:55:48 +0000
Received: by outflank-mailman (input) for mailman id 89917;
 Thu, 25 Feb 2021 17:55:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lFKrb-00087U-Nj
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 17:55:47 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lFKrb-00064t-My
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 17:55:47 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lFKrb-00041l-MM
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 17:55:47 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lFKrY-0002BL-IR; Thu, 25 Feb 2021 17:55:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=pdNuLA1NOf73LOd5SosZWLHig9jkmql5L+kmdjNJyTo=; b=rXzMU9hsmuXzdopKpXyGsjkPQQ
	sWNYqr78Ig4fK9t5RURiSRaoAHfR/PLqZywgew4+Puo+JSYFC6vd62GAw1dcJWhRtb5sUfUGTe+SJ
	HmXpg4CpmFvcWnMIOgqPqxNJhvRBzlmz+WH5QLHHXdAeHwBF8AzUklBCoLrcwjTFoBHw=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24631.58528.178916.321556@mariner.uk.xensource.com>
Date: Thu, 25 Feb 2021 17:55:44 +0000
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
    Jan Beulich <JBeulich@suse.com>,
    Roger Pau Monné <roger.pau@citrix.com>,
    Wei Liu <wl@xen.org>,
    Paul Durrant <paul@xen.org>
Subject: Re: [PATCH for-4.15] x86/dmop: Properly fail for PV guests
In-Reply-To: <a7ccd364-ce64-0784-150a-91e558da2aba@citrix.com>
References: <20210225170936.3008-1-andrew.cooper3@citrix.com>
	<24631.56533.776930.841094@mariner.uk.xensource.com>
	<a7ccd364-ce64-0784-150a-91e558da2aba@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Andrew Cooper writes ("Re: [PATCH for-4.15] x86/dmop: Properly fail for PV guests"):
> On 25/02/2021 17:22, Ian Jackson wrote:
> > Andrew Cooper writes ("[PATCH for-4.15] x86/dmop: Properly fail for PV guests"):
> >> The current code has an early exit for PV guests, but it returns 0 having done
> >> nothing.
> > Reviewed-by: Ian Jackson <iwj@xenproject.org>
> 
> Thanks.

Release-Acked-by: Ian Jackson <iwj@xenproject.org>


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 17:58:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 17:58:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89921.169899 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFKtq-0008HM-Jo; Thu, 25 Feb 2021 17:58:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89921.169899; Thu, 25 Feb 2021 17:58:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFKtq-0008HF-Gd; Thu, 25 Feb 2021 17:58:06 +0000
Received: by outflank-mailman (input) for mailman id 89921;
 Thu, 25 Feb 2021 17:58:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lFKtp-0008H9-31
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 17:58:05 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lFKtl-00068I-BO; Thu, 25 Feb 2021 17:58:01 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lFKtl-0004AD-31; Thu, 25 Feb 2021 17:58:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=3n9vVZeN9zT7qYBwHJw/YRFdCo4omBMafa8x+45549o=; b=EUmMsZ5kELnsjJexTht27mhvzQ
	XGSNy3RucZhIgY0TpbuR6BHgEsg4F53vHp6sL95aehX+Whsg781DUWSLvCwPoHV4xGwrHIkyMSf5W
	RXTxsyiBbOUgrvXnn6bddr+0dZMshUn9QwszWmoi7dswI9+yhYTvQCtVfZ7lPU3swrik=;
Subject: Re: [PATCH for-4.15 0/5] xenstore: Address coverity issues in the
 LiveUpdate code
To: Ian Jackson <iwj@xenproject.org>
Cc: xen-devel@lists.xenproject.org, raphning@amazon.co.uk,
 Julien Grall <jgrall@amazon.com>, Wei Liu <wl@xen.org>,
 Juergen Gross <jgross@suse.com>
References: <20210225174131.10115-1-julien@xen.org>
 <24631.58442.167560.663929@mariner.uk.xensource.com>
From: Julien Grall <julien@xen.org>
Message-ID: <ef866801-27f1-245c-74c6-0b1e08c627af@xen.org>
Date: Thu, 25 Feb 2021 17:57:59 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <24631.58442.167560.663929@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Ian,

On 25/02/2021 17:54, Ian Jackson wrote:
> Julien Grall writes ("[PATCH for-4.15 0/5] xenstore: Address coverity issues in the LiveUpdate code"):
>>    tools/xenstored: Avoid unnecessary talloc_strdup() in do_control_lu()
>>    tools/xenstored: Avoid unnecessary talloc_strdup() in do_lu_start()
>>    tools/xenstored: control: Store the save filename in lu_dump_state
>>    tools/xenstore-control: Don't leak buf in live_update_start()
> 
> These four are actual bugfixes:
> 
> Release-Acked-by: Ian Jackson <iwj@xenproject.org>

Thanks!

> 
>>    tools/xenstored: Silence coverity when using xs_state_* structures
> 
> For this I can't see a reason to give a release ack ?  See also Andy's
> comments.

I don't have a reason for this one as it is so far just silencing 
Coverity. Sorry, I should have mentioned that this one is not really 
4.15 material.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 18:02:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 18:02:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89926.169917 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFKxb-0000zf-5z; Thu, 25 Feb 2021 18:01:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89926.169917; Thu, 25 Feb 2021 18:01:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFKxb-0000zY-33; Thu, 25 Feb 2021 18:01:59 +0000
Received: by outflank-mailman (input) for mailman id 89926;
 Thu, 25 Feb 2021 18:01:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lFKxa-0000zT-8v
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 18:01:58 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lFKxa-0006JY-8A
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 18:01:58 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lFKxa-0004Z2-6J
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 18:01:58 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lFKxX-0002Ch-1A; Thu, 25 Feb 2021 18:01:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=akTlfYgJjovnCIPnUT9CvbsHD1PV0+yP04ewlvjQrfA=; b=mWDaXuNU6EiVQJ0tkdsuiFMDFN
	IWdRMFV5GMGxejx1mhXwIZKXO0XmmkLGHHoNkF6k3lBgg0LqJDt4Qw5/KfkG2TUtAG3l+zX8rpgFj
	Kk+wUWdclt68srSmQ+PvJjg1tc7h1+XPrhi8+Qh9Uaha6pH6H9IU6P/ALJ7MTDigwX5s=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24631.58898.784038.302778@mariner.uk.xensource.com>
Date: Thu, 25 Feb 2021 18:01:54 +0000
To: Julien Grall <julien@xen.org>
Cc: xen-devel@lists.xenproject.org,
    raphning@amazon.co.uk,
    Julien Grall <jgrall@amazon.com>,
    Wei Liu <wl@xen.org>,
    Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH for-4.15 0/5] xenstore: Address coverity issues in the
 LiveUpdate code
In-Reply-To: <ef866801-27f1-245c-74c6-0b1e08c627af@xen.org>
References: <20210225174131.10115-1-julien@xen.org>
	<24631.58442.167560.663929@mariner.uk.xensource.com>
	<ef866801-27f1-245c-74c6-0b1e08c627af@xen.org>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Julien Grall writes ("Re: [PATCH for-4.15 0/5] xenstore: Address coverity issues in the LiveUpdate code"):
> On 25/02/2021 17:54, Ian Jackson wrote:
> > Julien Grall writes ("[PATCH for-4.15 0/5] xenstore: Address coverity issues in the LiveUpdate code"):
> >>    tools/xenstored: Silence coverity when using xs_state_* structures
> > 
> > For this I can't see a reason to give a release ack ?  See also Andy's
> > comments.
> 
> I don't have a reason for this one as it is so far just silencing 
> Coverity. Sorry, I should have mentioned that this one is not really 
> 4.15 material.

No problem, thanks for the fixes!

Ian.


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 18:31:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 18:31:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89933.169935 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFLQH-00044I-LH; Thu, 25 Feb 2021 18:31:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89933.169935; Thu, 25 Feb 2021 18:31:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFLQH-00044B-HY; Thu, 25 Feb 2021 18:31:37 +0000
Received: by outflank-mailman (input) for mailman id 89933;
 Thu, 25 Feb 2021 18:31:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFLQF-000443-P9; Thu, 25 Feb 2021 18:31:35 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFLQF-0006m0-JL; Thu, 25 Feb 2021 18:31:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFLQF-0002V9-9C; Thu, 25 Feb 2021 18:31:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lFLQF-0004io-8i; Thu, 25 Feb 2021 18:31:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=LCCSfLxjRMp9HoFOiyVV0z0klTE9yozGKPSyZoAdE94=; b=Jy+v62J7M1q01jB1FP7FuRKQeh
	pFm3pOAjl+j/HaEBZmg4iiGpeI0C+7IY5xbANslfeHQG1nL/dsLJastlAmPpNWY/QnvA67CgpZchL
	Inr+cxXancblUZyySM0EcOGG3LRnA1X9jfmP+m9fYkNcBraQDPWnekpIIlFK5JR35Xw0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159660-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 159660: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=7ef8134565dccf9186d5eabd7dbb4ecae6dead87
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 25 Feb 2021 18:31:35 +0000

flight 159660 qemu-mainline real [real]
flight 159677 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/159660/
http://logs.test-lab.xenproject.org/osstest/logs/159677/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                7ef8134565dccf9186d5eabd7dbb4ecae6dead87
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  189 days
Failing since        152659  2020-08-21 14:07:39 Z  188 days  363 attempts
Testing same since   159563  2021-02-22 23:37:57 Z    2 days    5 attempts

------------------------------------------------------------
425 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 117355 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 18:42:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 18:42:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89939.169950 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFLaH-0005C5-QW; Thu, 25 Feb 2021 18:41:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89939.169950; Thu, 25 Feb 2021 18:41:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFLaH-0005By-Ls; Thu, 25 Feb 2021 18:41:57 +0000
Received: by outflank-mailman (input) for mailman id 89939;
 Thu, 25 Feb 2021 18:41:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFLaG-0005Bq-Sl; Thu, 25 Feb 2021 18:41:56 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFLaG-0006xZ-JS; Thu, 25 Feb 2021 18:41:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFLaG-0002pH-BD; Thu, 25 Feb 2021 18:41:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lFLaG-0000Dt-8Q; Thu, 25 Feb 2021 18:41:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=6r0/R7/EDcJ9eukepk8sHsMS3toQFADJPlu/H0nV6MY=; b=FN7it/B4V0jLJIaRl4Z2ywGwOU
	xq2GuujZcP5RWDInrE19A+N8Npa5jvjkkL5W1y49TDP9X43BVs4EgBZ9OnlKZl1RMOWja3fhB1m6b
	WAZHh3FYQgAqWl4sx6jDSLACurHzy9i8Lb80aJ901EiBT+ku4B4GeRcS2N0Wa+udm+gc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159674-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 159674: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=fc8fb368515391374e5f170a1e07205d914bc14a
X-Osstest-Versions-That:
    xen=067935804a8e7a33ff7170a2db8ce94bb46d9a63
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 25 Feb 2021 18:41:56 +0000

flight 159674 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159674/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  fc8fb368515391374e5f170a1e07205d914bc14a
baseline version:
 xen                  067935804a8e7a33ff7170a2db8ce94bb46d9a63

Last test of basis   159668  2021-02-25 13:01:28 Z    0 days
Testing same since   159674  2021-02-25 16:01:33 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Hubert Jasudowicz <hubert.jasudowicz@cert.pl>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   067935804a..fc8fb36851  fc8fb368515391374e5f170a1e07205d914bc14a -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 20:25:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 20:25:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89949.169977 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFNBq-0006s9-6V; Thu, 25 Feb 2021 20:24:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89949.169977; Thu, 25 Feb 2021 20:24:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFNBq-0006s2-3I; Thu, 25 Feb 2021 20:24:50 +0000
Received: by outflank-mailman (input) for mailman id 89949;
 Thu, 25 Feb 2021 20:24:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFNBp-0006ru-98; Thu, 25 Feb 2021 20:24:49 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFNBp-0000Hg-0I; Thu, 25 Feb 2021 20:24:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFNBo-0006HK-La; Thu, 25 Feb 2021 20:24:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lFNBo-0001vk-L6; Thu, 25 Feb 2021 20:24:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Nmfb49fi08c8O8/FPDE2gu3PRjKBoRcGbkjQZMRIxgc=; b=Bn1b0yAGt2JFnnR3zpoXbngUa1
	e48PBQWO3D9pUADf55XdWnka8Hj7bYrgD6CHwdKj9RYD+vdH2IRSawcvgP+TsSQPh0tI//4J8bAD/
	rV5vTd+Z4LrCUzGGVOYFkwrEYW68TYq7u/ir82+gLC7XLBGCBcuEJD2mRuFHpcorDG2I=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159662-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 159662: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-install:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=29c395c77a9a514c5857c45ceae2665e9bd99ac7
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 25 Feb 2021 20:24:48 +0000

flight 159662 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159662/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      12 debian-install           fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 linux                29c395c77a9a514c5857c45ceae2665e9bd99ac7
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  209 days
Failing since        152366  2020-08-01 20:49:34 Z  207 days  359 attempts
Testing same since   159662  2021-02-25 07:44:40 Z    0 days    1 attempts

------------------------------------------------------------
5079 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1251369 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 20:30:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 20:30:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89955.169994 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFNHL-0007vP-U8; Thu, 25 Feb 2021 20:30:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89955.169994; Thu, 25 Feb 2021 20:30:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFNHL-0007vI-Qs; Thu, 25 Feb 2021 20:30:31 +0000
Received: by outflank-mailman (input) for mailman id 89955;
 Thu, 25 Feb 2021 20:30:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3W+l=H3=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lFNHK-0007vD-M8
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 20:30:30 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id df6720d4-3744-49de-b4d1-09441e620fab;
 Thu, 25 Feb 2021 20:30:29 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: df6720d4-3744-49de-b4d1-09441e620fab
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614285029;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version;
  bh=2KPiebheJyKHjlqvLctYPjbzkFONFknSVoRbxujFVEE=;
  b=WtKRWOZOQE3NpbUysrR8U9Mf1hy6RIIFO86Ps8M8/8/I10xx+Hj7rtpO
   Sgm9N7e4PfB0ZdXphI0XKBB1tWWNMiOlox/8rdue6NMjhq7bPDHw/0zaW
   mN4CmGzMocBLNQ60FnkhL6RDZOMI6hD+2r+YThHFdUHRUdSHbvcPvXGPT
   Y=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: NePu1J4P/KYBJYnQPEvgTTrX62mpxFDZpqQPpwPq53wThMHrfGNYe9GkQgSnUt+MlZwPmZ2jnZ
 T1nwU86VaQccohBP6vqE1pswVZiisDvvug2bX6Rz8q2YqT/XyLfkQIxEG+7K44fU5QQ1rmhYQX
 WAHDpNi1OEeyjRkdxSi2+toy+Kwfku6D9OJi8NDP66+QhL/lCgVz6Gwn+5b4mrFWmu6s1IumCa
 zs1/5Kj146QGEVTx8aUN7XpL5kHniPgoM+CLZE0A7/4S8cO/YE2krv+9tQdRI62n55/153s6Hb
 Q/8=
X-SBRS: 5.2
X-MesageID: 37980454
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,207,1610427600"; 
   d="scan'208";a="37980454"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Doug Goldstein
	<cardoe@cardoe.com>, Wei Liu <wl@xen.org>, Ian Jackson <iwj@xenproject.org>
Subject: [PATCH 3/3] automation: Annotate that a 32bit libc is no longer a dependency
Date: Thu, 25 Feb 2021 20:30:09 +0000
Message-ID: <20210225203010.11378-4-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210225203010.11378-1-andrew.cooper3@citrix.com>
References: <20210225203010.11378-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain

We can't drop the 32bit libc from the existing containers, because they are
used on older Xen branches as well.

However, we can avoid the dependency being propagated into newer containers
derived from our dockerfiles.

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Doug Goldstein <cardoe@cardoe.com>
CC: Wei Liu <wl@xen.org>
CC: Ian Jackson <iwj@xenproject.org>

For 4.15.  Documentation changes only.
---
 automation/build/archlinux/current.dockerfile        | 1 +
 automation/build/centos/7.2.dockerfile               | 1 +
 automation/build/centos/7.dockerfile                 | 1 +
 automation/build/debian/jessie.dockerfile            | 1 +
 automation/build/debian/stretch.dockerfile           | 1 +
 automation/build/debian/unstable.dockerfile          | 1 +
 automation/build/fedora/29.dockerfile                | 1 +
 automation/build/suse/opensuse-leap.dockerfile       | 1 +
 automation/build/suse/opensuse-tumbleweed.dockerfile | 1 +
 automation/build/ubuntu/bionic.dockerfile            | 1 +
 automation/build/ubuntu/focal.dockerfile             | 1 +
 automation/build/ubuntu/trusty.dockerfile            | 1 +
 automation/build/ubuntu/xenial.dockerfile            | 1 +
 13 files changed, 13 insertions(+)

diff --git a/automation/build/archlinux/current.dockerfile b/automation/build/archlinux/current.dockerfile
index d8fbebaf79..d46fc9d9ca 100644
--- a/automation/build/archlinux/current.dockerfile
+++ b/automation/build/archlinux/current.dockerfile
@@ -20,6 +20,7 @@ RUN pacman -S --refresh --sysupgrade --noconfirm --noprogressbar --needed \
         iasl \
         inetutils \
         iproute \
+        # lib32-glibc for Xen < 4.15
         lib32-glibc \
         libaio \
         libcacard \
diff --git a/automation/build/centos/7.2.dockerfile b/automation/build/centos/7.2.dockerfile
index c2f46b694c..af672a0be1 100644
--- a/automation/build/centos/7.2.dockerfile
+++ b/automation/build/centos/7.2.dockerfile
@@ -34,6 +34,7 @@ RUN rpm --rebuilddb && \
         yajl-devel \
         pixman-devel \
         glibc-devel \
+        # glibc-devel.i686 for Xen < 4.15
         glibc-devel.i686 \
         make \
         binutils \
diff --git a/automation/build/centos/7.dockerfile b/automation/build/centos/7.dockerfile
index e37d9d743a..5f83c97d0c 100644
--- a/automation/build/centos/7.dockerfile
+++ b/automation/build/centos/7.dockerfile
@@ -32,6 +32,7 @@ RUN yum -y install \
         yajl-devel \
         pixman-devel \
         glibc-devel \
+        # glibc-devel.i686 for Xen < 4.15
         glibc-devel.i686 \
         make \
         binutils \
diff --git a/automation/build/debian/jessie.dockerfile b/automation/build/debian/jessie.dockerfile
index 1232b9e204..808d6272e4 100644
--- a/automation/build/debian/jessie.dockerfile
+++ b/automation/build/debian/jessie.dockerfile
@@ -31,6 +31,7 @@ RUN apt-get update && \
         bin86 \
         bcc \
         liblzma-dev \
+        # libc6-dev-i386 for Xen < 4.15
         libc6-dev-i386 \
         libnl-3-dev \
         ocaml-nox \
diff --git a/automation/build/debian/stretch.dockerfile b/automation/build/debian/stretch.dockerfile
index 32742f7f39..e3bace1f87 100644
--- a/automation/build/debian/stretch.dockerfile
+++ b/automation/build/debian/stretch.dockerfile
@@ -32,6 +32,7 @@ RUN apt-get update && \
         bin86 \
         bcc \
         liblzma-dev \
+        # libc6-dev-i386 for Xen < 4.15
         libc6-dev-i386 \
         libnl-3-dev \
         ocaml-nox \
diff --git a/automation/build/debian/unstable.dockerfile b/automation/build/debian/unstable.dockerfile
index aeb4f3448b..9a10ee08d6 100644
--- a/automation/build/debian/unstable.dockerfile
+++ b/automation/build/debian/unstable.dockerfile
@@ -32,6 +32,7 @@ RUN apt-get update && \
         bin86 \
         bcc \
         liblzma-dev \
+        # libc6-dev-i386 for Xen < 4.15
         libc6-dev-i386 \
         libnl-3-dev \
         ocaml-nox \
diff --git a/automation/build/fedora/29.dockerfile b/automation/build/fedora/29.dockerfile
index 6a4e5b0413..5482952523 100644
--- a/automation/build/fedora/29.dockerfile
+++ b/automation/build/fedora/29.dockerfile
@@ -25,6 +25,7 @@ RUN dnf -y install \
         yajl-devel \
         pixman-devel \
         glibc-devel \
+        # glibc-devel.i686 for Xen < 4.15
         glibc-devel.i686 \
         make \
         binutils \
diff --git a/automation/build/suse/opensuse-leap.dockerfile b/automation/build/suse/opensuse-leap.dockerfile
index c60c13c943..685dd5d7fd 100644
--- a/automation/build/suse/opensuse-leap.dockerfile
+++ b/automation/build/suse/opensuse-leap.dockerfile
@@ -26,6 +26,7 @@ RUN zypper install -y --no-recommends \
         git \
         glib2-devel \
         glibc-devel \
+        # glibc-devel-32bit for Xen < 4.15
         glibc-devel-32bit \
         gzip \
         hostname \
diff --git a/automation/build/suse/opensuse-tumbleweed.dockerfile b/automation/build/suse/opensuse-tumbleweed.dockerfile
index 084cce0921..061173e751 100644
--- a/automation/build/suse/opensuse-tumbleweed.dockerfile
+++ b/automation/build/suse/opensuse-tumbleweed.dockerfile
@@ -26,6 +26,7 @@ RUN zypper install -y --no-recommends \
         git \
         glib2-devel \
         glibc-devel \
+        # glibc-devel-32bit for Xen < 4.15
         glibc-devel-32bit \
         gzip \
         hostname \
diff --git a/automation/build/ubuntu/bionic.dockerfile b/automation/build/ubuntu/bionic.dockerfile
index 712b2e4722..408063698c 100644
--- a/automation/build/ubuntu/bionic.dockerfile
+++ b/automation/build/ubuntu/bionic.dockerfile
@@ -32,6 +32,7 @@ RUN apt-get update && \
         bin86 \
         bcc \
         liblzma-dev \
+        # libc6-dev-i386 for Xen < 4.15
         libc6-dev-i386 \
         libnl-3-dev \
         ocaml-nox \
diff --git a/automation/build/ubuntu/focal.dockerfile b/automation/build/ubuntu/focal.dockerfile
index c1c1f8d58f..90b4001a6a 100644
--- a/automation/build/ubuntu/focal.dockerfile
+++ b/automation/build/ubuntu/focal.dockerfile
@@ -31,6 +31,7 @@ RUN apt-get update && \
         bin86 \
         bcc \
         liblzma-dev \
+        # libc6-dev-i386 for Xen < 4.15
         libc6-dev-i386 \
         libnl-3-dev \
         ocaml-nox \
diff --git a/automation/build/ubuntu/trusty.dockerfile b/automation/build/ubuntu/trusty.dockerfile
index 397a28061d..fd377d948f 100644
--- a/automation/build/ubuntu/trusty.dockerfile
+++ b/automation/build/ubuntu/trusty.dockerfile
@@ -32,6 +32,7 @@ RUN apt-get update && \
         bin86 \
         bcc \
         liblzma-dev \
+        # libc6-dev-i386 for Xen < 4.15
         libc6-dev-i386 \
         libnl-3-dev \
         ocaml-nox \
diff --git a/automation/build/ubuntu/xenial.dockerfile b/automation/build/ubuntu/xenial.dockerfile
index ce0e84fa2f..57a71eb8c6 100644
--- a/automation/build/ubuntu/xenial.dockerfile
+++ b/automation/build/ubuntu/xenial.dockerfile
@@ -32,6 +32,7 @@ RUN apt-get update && \
         bin86 \
         bcc \
         liblzma-dev \
+        # libc6-dev-i386 for Xen < 4.15
         libc6-dev-i386 \
         libnl-3-dev \
         ocaml-nox \
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Thu Feb 25 20:30:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 20:30:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89956.170007 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFNHS-0007yW-AH; Thu, 25 Feb 2021 20:30:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89956.170007; Thu, 25 Feb 2021 20:30:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFNHS-0007yN-6s; Thu, 25 Feb 2021 20:30:38 +0000
Received: by outflank-mailman (input) for mailman id 89956;
 Thu, 25 Feb 2021 20:30:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3W+l=H3=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lFNHR-0007y5-Ak
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 20:30:37 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 35a11dac-1e2c-4877-81e7-fe1842d1a6e1;
 Thu, 25 Feb 2021 20:30:36 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 35a11dac-1e2c-4877-81e7-fe1842d1a6e1
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614285036;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=Uh5oHWzLVTHvmw2rDj/q17q9Xcu+TPFKLKfwevnoXC8=;
  b=AITUxbUocgEviIzCCwYAq50YRcXa8d58YdV2pzBf4T0OBmh/stsbvQ7p
   ujJSGJTRP+vUaSN252jc+FfYqb3y2E113RYLeST7GtzOZQV2FWVPMtj3Z
   t3L86WBrkxECpOfc1vEQH419lsvwzxPBMV1kZZvCZZPE1MgXWmS86o3Ev
   0=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: Y8IWb4cUrWteqAlcWBAK+2LHedV+A/E1UxLdUap5ImM4kz9rgw0Gt3n0pmPyXuORQI5cMMpqC2
 KmEtPczrqvo3dz9E6Ze+oNtkEVRXHYs3Rs+emr2vaEY9xYmdOdL2CubNmXa3P+8sLKcKWIxr2l
 bngbeERHmjiRt632rRAx9DNj7IAYZlccC8+lqa1VI4vZhb3nye11T9WOXVCe+1DpyVu98LgaBw
 tRtWI8rW7+e/7DpuK5HINrlSEaVcHn/ntfIFKkGiNMozU2+Tgpz7xKrKmOSAvHtrtCKGhmZqkf
 638=
X-SBRS: 5.2
X-MesageID: 38246981
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,207,1610427600"; 
   d="scan'208";a="38246981"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Ian Jackson
	<iwj@xenproject.org>, Doug Goldstein <cardoe@cardoe.com>
Subject: [PATCH for-4.15 0/3] Build firmware as freestanding
Date: Thu, 25 Feb 2021 20:30:06 +0000
Message-ID: <20210225203010.11378-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

This fixes a bug we've had for ages, which even ended up being documented as
an inappropriate build dependency.

For 4.15.  I'm tempted to suggest that it wants backporting to the stable
branches as well.

Andrew Cooper (3):
  tools/hvmloader: Drop machelf include as well
  tools/firmware: Build firmware as -ffreestanding
  automation: Annotate that a 32bit libc is no longer a dependency

 .travis.yml                                          | 1 -
 README                                               | 3 ---
 automation/build/archlinux/current.dockerfile        | 1 +
 automation/build/centos/7.2.dockerfile               | 1 +
 automation/build/centos/7.dockerfile                 | 1 +
 automation/build/debian/jessie.dockerfile            | 1 +
 automation/build/debian/stretch.dockerfile           | 1 +
 automation/build/debian/unstable.dockerfile          | 1 +
 automation/build/fedora/29.dockerfile                | 1 +
 automation/build/suse/opensuse-leap.dockerfile       | 1 +
 automation/build/suse/opensuse-tumbleweed.dockerfile | 1 +
 automation/build/ubuntu/bionic.dockerfile            | 1 +
 automation/build/ubuntu/focal.dockerfile             | 1 +
 automation/build/ubuntu/trusty.dockerfile            | 1 +
 automation/build/ubuntu/xenial.dockerfile            | 1 +
 tools/firmware/Rules.mk                              | 2 +-
 tools/firmware/hvmloader/32bitbios_support.c         | 5 +----
 17 files changed, 15 insertions(+), 9 deletions(-)

-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Thu Feb 25 20:30:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 20:30:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89957.170019 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFNHU-00080q-JU; Thu, 25 Feb 2021 20:30:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89957.170019; Thu, 25 Feb 2021 20:30:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFNHU-00080i-Fv; Thu, 25 Feb 2021 20:30:40 +0000
Received: by outflank-mailman (input) for mailman id 89957;
 Thu, 25 Feb 2021 20:30:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3W+l=H3=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lFNHT-00080O-Tw
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 20:30:39 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b336a773-bd63-4dcb-99ea-ee13014eb19f;
 Thu, 25 Feb 2021 20:30:38 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b336a773-bd63-4dcb-99ea-ee13014eb19f
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614285038;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=8hVN7Jd1M433kecw5Oj1hJjCQjf9Moi2LnCMo4tdjc8=;
  b=PnHBjyDuSIgH/mNyiTj8N5Dn61ciBVwhrAhRGtNCpEV9kp5uneIVr743
   FhBlvDLimJ9wWT7Y2P0+YuIe0J/aVo+w3uP3arhMdsab1GIeomss/5ECO
   XTwqEgol8x4ky/fbI2UMuG/J0Peu1GZt8WFkTC4XVmgMNaLTqyykS2/eX
   s=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: ie/i33Gei64ljV6YdDzbwAvj2LHGW1RD5fqq8cdzwZCw3SJMDR/sr0A65QKgKfL1KiYvRTaAma
 wWm1R6HZw5QFP1YaRMbFw+ae+L32Ztf4C0rFbN4hbO9ozDESgcg6FmYPZXvzqo51j3WMYzq4hl
 Jn/o/1bxZGEE9SPrJn+3/7G5ut7OMKeBJoCjZldZTwtJEY5L3a0YWM1j1Hc+2iq/wBgDq5d4Mj
 S6hwVnQNFvEaOgU5LNG0tAxf/H3A7Zb5sr0kbWdj5v7MepBt6s9FrlANNYrhCNLyOBW1uVEa02
 acs=
X-SBRS: 5.2
X-MesageID: 38050464
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,207,1610427600"; 
   d="scan'208";a="38050464"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Ian Jackson
	<iwj@xenproject.org>
Subject: [PATCH 2/3] tools/firmware: Build firmware as -ffreestanding
Date: Thu, 25 Feb 2021 20:30:08 +0000
Message-ID: <20210225203010.11378-3-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210225203010.11378-1-andrew.cooper3@citrix.com>
References: <20210225203010.11378-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

The firmware should always have been built -ffreestanding, as it doesn't
execute in the host environment.

inttypes.h isn't a freestanding header, but 32bitbios_support.c only wants
the stdint.h types, so switch to the more appropriate include.

This removes the build-time dependency on a 32bit libc just to compile
hvmloader and friends.

Update README and the TravisCI configuration.
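As an illustrative sketch (not part of the patch): a freestanding build may
only rely on the freestanding headers (stdint.h, stddef.h, and a few others),
which is all hvmloader needs. The checksum() helper below is hypothetical and
merely stands in for hvmloader-style code to show the kind of translation unit
that compiles with no libc at all:

```c
/* Hypothetical example: restricted to freestanding headers, so it can be
 * compiled with -ffreestanding and no (32-bit) libc installed. */
#include <stdint.h>
#include <stddef.h>

uint32_t checksum(const uint8_t *buf, size_t len)
{
    uint32_t sum = 0;

    for ( size_t i = 0; i < len; i++ )
        sum += buf[i];

    return sum;
}
```

Something like `gcc -m32 -ffreestanding -fno-builtin -c` would compile this on
a 64-bit host without libc6-dev-i386 / glibc-devel.i686 present.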

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Ian Jackson <iwj@xenproject.org>

For 4.15.  Build tested in Travis (Ubuntu) and XenServer (CentOS) - no change
in the compiled HVMLoader binary.  I'm currently rebuilding the containers
locally to check Arch, Debian and OpenSUSE, but don't anticipate any problems.

This does not resolve the build issue on Alpine.  Exactly what to do there is
still TBC, but Roger has opened a bug with Alpine concerning their GCC
packaging.
---
 .travis.yml                                  | 1 -
 README                                       | 3 ---
 tools/firmware/Rules.mk                      | 2 +-
 tools/firmware/hvmloader/32bitbios_support.c | 2 +-
 4 files changed, 2 insertions(+), 6 deletions(-)

diff --git a/.travis.yml b/.travis.yml
index 15ca9e9047..2362475f7a 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -58,7 +58,6 @@ addons:
             - acpica-tools
             - bin86
             - bcc
-            - libc6-dev-i386
             - libnl-3-dev
             - ocaml-nox
             - libfindlib-ocaml-dev
diff --git a/README b/README
index 33cdf6b826..5167bb1708 100644
--- a/README
+++ b/README
@@ -62,9 +62,6 @@ provided by your OS distributor:
     * GNU bison and GNU flex
     * GNU gettext
     * ACPI ASL compiler (iasl)
-    * Libc multiarch package (e.g. libc6-dev-i386 / glibc-devel.i686).
-      Required when building on a 64-bit platform to build
-      32-bit components which are enabled on a default build.
 
 In addition to the above there are a number of optional build
 prerequisites. Omitting these will cause the related features to be
diff --git a/tools/firmware/Rules.mk b/tools/firmware/Rules.mk
index 26bbddccd4..93abcabc67 100644
--- a/tools/firmware/Rules.mk
+++ b/tools/firmware/Rules.mk
@@ -16,4 +16,4 @@ CFLAGS += -Werror
 $(call cc-options-add,CFLAGS,CC,$(EMBEDDED_EXTRA_CFLAGS))
 
 # Extra CFLAGS suitable for an embedded type of environment.
-CFLAGS += -fno-builtin -msoft-float
+CFLAGS += -fno-builtin -msoft-float -ffreestanding
diff --git a/tools/firmware/hvmloader/32bitbios_support.c b/tools/firmware/hvmloader/32bitbios_support.c
index 6f28fb6bde..cee3804888 100644
--- a/tools/firmware/hvmloader/32bitbios_support.c
+++ b/tools/firmware/hvmloader/32bitbios_support.c
@@ -20,7 +20,7 @@
  * this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
-#include <inttypes.h>
+#include <stdint.h>
 #include <xen/libelf/elfstructs.h>
 
 #include "util.h"
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Thu Feb 25 20:30:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 20:30:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89958.170031 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFNHV-000831-Rw; Thu, 25 Feb 2021 20:30:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89958.170031; Thu, 25 Feb 2021 20:30:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFNHV-00082s-OL; Thu, 25 Feb 2021 20:30:41 +0000
Received: by outflank-mailman (input) for mailman id 89958;
 Thu, 25 Feb 2021 20:30:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3W+l=H3=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lFNHU-00080O-Tx
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 20:30:40 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9d24a89a-4b02-493c-a854-65cedcced3a9;
 Thu, 25 Feb 2021 20:30:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9d24a89a-4b02-493c-a854-65cedcced3a9
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614285040;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=7Qx8jBmkmdqJqWBNokGVben/Nln8ENewcLj1gGF0TLI=;
  b=QJWJwq84XWvyi8LiEMczFTRwHVnfkI6IDTMrrEPE1BG259+wdpbO6FFm
   WJLKYlW5Eu0eg69bXUPzbWENDPVT6xkuHzbdSpCgdfnQnMhgo1/AeCDeb
   gbgLfp3e1pfoBV8whPXy4PRSD97aHIQusIa64eolWzl5z1lFjRqEtnQyN
   Y=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: ttoJ9fpxkvZkkT8zQSkgPq1z9X8fNuC4qsaA13gavWAV1C+SqixUoeWTy69UYPRzf7lbpxcn9j
 22ImvXfA584+weNvU48sAnHwpULZPO9BQCuwmPub1OGLXD8rWQ1K7D+k/L6ei4Fg64l0/rJZ/c
 gpTjUcYYDnljBMRDSURiqb2agUnL7YeSusaR5mj7sUl1/rVI0XF/8b1PVgazsg7MLOYNJUv39y
 eTIp66EMAzq05LSz9bGkMpH5+x+7FtdizfgCpUoh+rKXzLbxb8bTAq2yMv4NE9ksgUzg95QGB9
 8N8=
X-SBRS: 5.2
X-MesageID: 38422276
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,207,1610427600"; 
   d="scan'208";a="38422276"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Ian Jackson
	<iwj@xenproject.org>
Subject: [PATCH 1/3] tools/hvmloader: Drop machelf include as well
Date: Thu, 25 Feb 2021 20:30:07 +0000
Message-ID: <20210225203010.11378-2-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210225203010.11378-1-andrew.cooper3@citrix.com>
References: <20210225203010.11378-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

The logic behind switching to elfstructs applies to Solaris (__sun__) builds
as well.

Fixes: 81b2b328a2 ("hvmloader: use Xen private header for elf structs")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Ian Jackson <iwj@xenproject.org>
---
 tools/firmware/hvmloader/32bitbios_support.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/tools/firmware/hvmloader/32bitbios_support.c b/tools/firmware/hvmloader/32bitbios_support.c
index e726946a7b..6f28fb6bde 100644
--- a/tools/firmware/hvmloader/32bitbios_support.c
+++ b/tools/firmware/hvmloader/32bitbios_support.c
@@ -22,9 +22,6 @@
 
 #include <inttypes.h>
 #include <xen/libelf/elfstructs.h>
-#ifdef __sun__
-#include <sys/machelf.h>
-#endif
 
 #include "util.h"
 #include "config.h"
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Thu Feb 25 20:52:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 20:52:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89969.170046 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFNc6-0001ya-Mq; Thu, 25 Feb 2021 20:51:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89969.170046; Thu, 25 Feb 2021 20:51:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFNc6-0001yT-J1; Thu, 25 Feb 2021 20:51:58 +0000
Received: by outflank-mailman (input) for mailman id 89969;
 Thu, 25 Feb 2021 20:51:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yjlP=H3=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lFNc5-0001yO-30
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 20:51:57 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ed53f322-5bf0-4ed8-8b39-a1597c8a3473;
 Thu, 25 Feb 2021 20:51:56 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 3DFF364DE9;
 Thu, 25 Feb 2021 20:51:55 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ed53f322-5bf0-4ed8-8b39-a1597c8a3473
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1614286315;
	bh=Zkkb2TYZmIvm+7VmqMVOuJGwIvUpWUmuKDrtfLmrGd4=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=sy0TrtSKaar9ZfS1yvgHWZUhU29ZmjN1i1IXpLucNSSoRHBUie0iymRTFbL9seZSH
	 OaY42zxvKmvwRQ74NryPnmfEQJ0pRTQl9kCStSq/XS4agK4YCnfZ9QXZ8T8myrJNAj
	 mUGO5r6hGJ+ainhFKD2mxHCPpqGH02VzfE6yeof9l9ctBg+OiFCv9qCC/9rhTaZgYh
	 kyrvke/IbEAUSPydZyeInKxRWRHiRAhWE07vdiZCHF3P6bAIk+0vfbeDNbPidar3it
	 dT0XNABk3kzan/lXAf3ymVqNZC8sgXZj2wyVbkmRrICkiDVVJ36f/GodU4zS6RkC/c
	 7yzWZBpmuqe1A==
Date: Thu, 25 Feb 2021 12:51:54 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Jan Beulich <jbeulich@suse.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Stefano Stabellini <stefano.stabellini@xilinx.com>, 
    andrew.cooper3@citrix.com, julien@xen.org, xen-devel@lists.xenproject.org
Subject: Re: [PATCH] xen: introduce XENFEAT_direct_mapped and
 XENFEAT_not_direct_mapped
In-Reply-To: <96d764b6-a719-711c-31ea-235381bfd0ce@suse.com>
Message-ID: <alpine.DEB.2.21.2102250948160.3234@sstabellini-ThinkPad-T480s>
References: <20210225012243.28530-1-sstabellini@kernel.org> <96d764b6-a719-711c-31ea-235381bfd0ce@suse.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 25 Feb 2021, Jan Beulich wrote:
> On 25.02.2021 02:22, Stefano Stabellini wrote:
> > --- a/xen/include/public/features.h
> > +++ b/xen/include/public/features.h
> > @@ -114,6 +114,13 @@
> >   */
> >  #define XENFEAT_linux_rsdp_unrestricted   15
> >  
> > +/*
> > + * A direct-mapped (or 1:1 mapped) domain is a domain for which its
> > + * local pages have gfn == mfn.
> > + */
> > +#define XENFEAT_not_direct_mapped       16
> > +#define XENFEAT_direct_mapped           17
> 
> Why two new values? Absence of XENFEAT_direct_mapped requires
> implying not-direct-mapped by the consumer anyway, doesn't it?

That's because, if we add both flags, we can avoid all the unpleasant guessing
games in the guest kernel.

If one flag or the other flag is set, we can make an informed decision.

But if neither flag is set, it means we are running on an older Xen,
and we fall back on the current checks.
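As a hedged sketch of that guest-side decision: the XENFEAT_* numbers come
from the patch, but xen_feature() and feature_map below are stand-ins for the
real hypercall-backed feature lookup, used here only so the three-way logic is
concrete:

```c
#include <stdbool.h>
#include <stdint.h>

/* Feature numbers as proposed in the patch under discussion. */
#define XENFEAT_not_direct_mapped 16
#define XENFEAT_direct_mapped     17

/* Fake feature bitmap standing in for the real XENVER_get_features data. */
static uint32_t feature_map;

static bool xen_feature(unsigned int f)
{
    return feature_map & (1u << (f - XENFEAT_not_direct_mapped));
}

/* 1 = direct mapped, 0 = not direct mapped,
 * -1 = neither flag set (older Xen): fall back to the current checks. */
static int direct_mapped(void)
{
    if ( xen_feature(XENFEAT_direct_mapped) )
        return 1;
    if ( xen_feature(XENFEAT_not_direct_mapped) )
        return 0;
    return -1;
}
```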


> Further, quoting xen/mm.h: "For a non-translated guest which
> is aware of Xen, gfn == mfn." This to me implies that PV would
> need to get XENFEAT_direct_mapped set; not sure whether this
> simply means x86'es is_domain_direct_mapped() is wrong, but if
> it is, uses elsewhere in the code would likely need changing.

That's a good point; I didn't think about x86 PV. I think the two flags
are needed for autotranslated guests. I don't know for sure what is best
for non-autotranslated guests.

Maybe we could say that XENFEAT_not_direct_mapped and
XENFEAT_direct_mapped only apply to XENFEAT_auto_translated_physmap
guests. And it would match the implementation of
is_domain_direct_mapped().

For non-XENFEAT_auto_translated_physmap guests we could either:

- set neither flag
- set XENFEAT_direct_mapped (without changing the implementation of
  is_domain_direct_mapped)

What do you think? I am happy either way.


> Also, nit: Please keep the right sides aligned with #define-s
> higher up in the file.

OK


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 21:01:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 21:01:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89972.170057 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFNku-00034A-JJ; Thu, 25 Feb 2021 21:01:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89972.170057; Thu, 25 Feb 2021 21:01:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFNku-000343-GA; Thu, 25 Feb 2021 21:01:04 +0000
Received: by outflank-mailman (input) for mailman id 89972;
 Thu, 25 Feb 2021 21:01:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFNkt-00033v-Rg; Thu, 25 Feb 2021 21:01:03 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFNkt-0000y0-LU; Thu, 25 Feb 2021 21:01:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFNkt-0007Qj-Dk; Thu, 25 Feb 2021 21:01:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lFNkt-0000re-DH; Thu, 25 Feb 2021 21:01:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Cnq5+9SWbjVJWKOkXN6AfJUfe+sw9uMuWHok9baBTeM=; b=BWgXqUl9e8Izs1ZSvaPmb+0DYA
	BqBAe/NzjbysW5hJAL68r+lTepEvrLKXs5KEtIqIE9TTrF3+n0wzu8yE1/xZfHOPZLLKEshYRHcfO
	nKHfz7cbtR4UjE7kPyEhb1d8uZqDv3K9RjNpGQveCs3NbuxV1Jxvgk2YXW8HqWbQ7ZE8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159676-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 159676: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=7f34681c488aee2563eaa2afcc6a2c8aa7c5b912
X-Osstest-Versions-That:
    ovmf=35f87da8a2debd443ac842db0a3794b17914a8f4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 25 Feb 2021 21:01:03 +0000

flight 159676 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159676/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 7f34681c488aee2563eaa2afcc6a2c8aa7c5b912
baseline version:
 ovmf                 35f87da8a2debd443ac842db0a3794b17914a8f4

Last test of basis   159640  2021-02-24 17:10:46 Z    1 days
Testing same since   159676  2021-02-25 16:11:45 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Li, Walon <walon.li@hpe.com>
  Walon Li <walon.li@hpe.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   35f87da8a2..7f34681c48  7f34681c488aee2563eaa2afcc6a2c8aa7c5b912 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 21:27:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 21:27:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89977.170073 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFOA7-0005FU-Kf; Thu, 25 Feb 2021 21:27:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89977.170073; Thu, 25 Feb 2021 21:27:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFOA7-0005FN-Hg; Thu, 25 Feb 2021 21:27:07 +0000
Received: by outflank-mailman (input) for mailman id 89977;
 Thu, 25 Feb 2021 21:27:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yjlP=H3=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lFOA5-0005FI-VG
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 21:27:05 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3792366e-7812-4c9a-84f6-3c3c241a99b1;
 Thu, 25 Feb 2021 21:27:04 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id A832364EBA;
 Thu, 25 Feb 2021 21:27:03 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3792366e-7812-4c9a-84f6-3c3c241a99b1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1614288424;
	bh=7xYQZSauNQ3JovgzGPinCB8YPIC6rxZc6dev38y6dsM=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=l3f7FBUXEYRCt8RfD47pxQL8JjGoKUpm7cQH9l8zGY/ZUTSQqnFJjJjMA4Wj+9AjD
	 js5tiNeORX4PbT1vHLaB9hsQjrJWYqIMMjVG5CQzUkvuaccH4arFP4ST2/a3VYrPWH
	 Tkjh2E6bpWPTz1T/92q0tWocPmF9AKdPHY3FnKUmy6eRoD0xdziocvmjdlt7rYMqjK
	 PB3zHFTmuMbsFgzcq9FUUSFgvFCJqpif26kVOjyJx+AwFLX/C6KpAPC2kRYBcvfzVF
	 PKfdTdk9EUWzxKI+xpAagq6t0DRuq4yKHAWu51i33LynHfvtDECqF6nrHkOk0o//xy
	 9j+AzLEvPINsg==
Date: Thu, 25 Feb 2021 13:27:02 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Bertrand Marquis <Bertrand.Marquis@arm.com>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    "iwj@xenproject.org" <iwj@xenproject.org>, 
    Julien Grall <jgrall@amazon.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH for-4.15] xen/vgic: Implement write to ISPENDR in vGICv{2,
 3}
In-Reply-To: <0c4e6015-f969-9b6b-91b5-bffa952d47d5@xen.org>
Message-ID: <alpine.DEB.2.21.2102251326050.3234@sstabellini-ThinkPad-T480s>
References: <20210220140412.31610-1-julien@xen.org> <F86904EB-91E9-475C-B60B-E08C5C9E76C3@arm.com> <0c4e6015-f969-9b6b-91b5-bffa952d47d5@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 25 Feb 2021, Julien Grall wrote:
> On 22/02/2021 13:45, Bertrand Marquis wrote:
> > Hi Julien,
> > 
> > > On 20 Feb 2021, at 14:04, Julien Grall <julien@xen.org> wrote:
> > > 
> > > From: Julien Grall <jgrall@amazon.com>
> > > 
> > > Currently, Xen will send a data abort to a guest trying to write to the
> > > ISPENDR.
> > > 
> > > Unfortunately, recent versions of Linux (at least 5.9+) will start
> > > writing to the register if the interrupt needs to be re-triggered
> > > (see the callback irq_retrigger). This can happen when a driver (such as
> > > the xgbe network driver on AMD Seattle) re-enables an interrupt:
> > > 
> > > (XEN) d0v0: vGICD: unhandled word write 0x00000004000000 to ISPENDR44
> > > [...]
> > > [   25.635837] Unhandled fault at 0xffff80001000522c
> > > [...]
> > > [   25.818716]  gic_retrigger+0x2c/0x38
> > > [   25.822361]  irq_startup+0x78/0x138
> > > [   25.825920]  __enable_irq+0x70/0x80
> > > [   25.829478]  enable_irq+0x50/0xa0
> > > [   25.832864]  xgbe_one_poll+0xc8/0xd8
> > > [   25.836509]  net_rx_action+0x110/0x3a8
> > > [   25.840328]  __do_softirq+0x124/0x288
> > > [   25.844061]  irq_exit+0xe0/0xf0
> > > [   25.847272]  __handle_domain_irq+0x68/0xc0
> > > [   25.851442]  gic_handle_irq+0xa8/0xe0
> > > [   25.855171]  el1_irq+0xb0/0x180
> > > [   25.858383]  arch_cpu_idle+0x18/0x28
> > > [   25.862028]  default_idle_call+0x24/0x5c
> > > [   25.866021]  do_idle+0x204/0x278
> > > [   25.869319]  cpu_startup_entry+0x24/0x68
> > > [   25.873313]  rest_init+0xd4/0xe4
> > > [   25.876611]  arch_call_rest_init+0x10/0x1c
> > > [   25.880777]  start_kernel+0x5b8/0x5ec
> > > 
> > > As a consequence, the OS may become unusable.
> > > 
> > > Implementing the write part of ISPENDR is somewhat easy. For a
> > > virtual interrupt, we only need to inject the interrupt again.
> > > 
> > > For a physical interrupt, we need to be more careful, as the de-activation
> > > of the virtual interrupt will be propagated to the physical distributor.
> > > For simplicity, the physical interrupt will be set pending so the
> > > workflow will not differ from a "real" interrupt.
> > > 
> > > Longer term, we could possibly activate the physical interrupt directly
> > > and avoid taking an exception to inject the interrupt to the domain.
> > > (This is the approach taken by the new vGIC based on KVM).
> > > 
> > > Signed-off-by: Julien Grall <jgrall@amazon.com>
> > 
> > This is something a guest will not do very often, so I think your
> > implementation actually keeps things simpler and reduces the possibility
> > of race conditions; I am not even sure the XXX comment is needed.
> 
> I think the XXX is useful: if someone notices an issue with the code, then
> they know what they could try.
> 
> I am open to suggestions on how we could keep track of potential improvements.

It is worth capturing somewhere. Maybe the commit message is a better
place than an in-code comment?

Either way is fine by me; feel free to make that kind of change on
commit.
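To make the semantics being discussed concrete, here is a rough standalone model of the behaviour the commit message describes for a guest write to ISPENDR: each set bit marks an interrupt pending, with hardware-mapped interrupts routed through the (modelled) physical distributor so the normal injection workflow runs. This is only an illustrative sketch; the names (vgic_model_ispendr_write, hw_mapped, etc.) are hypothetical and do not match Xen's actual vGIC code.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define NR_IRQS 96

static bool pending[NR_IRQS];    /* virtual pending state */
static bool hw_pending[NR_IRQS]; /* modelled physical distributor state */
static bool hw_mapped[NR_IRQS];  /* is this vIRQ backed by a physical IRQ? */

/* Model a 32-bit guest write to ISPENDR<reg>. */
static void vgic_model_ispendr_write(unsigned int reg, uint32_t val)
{
    unsigned int base = reg * 32; /* each ISPENDR register covers 32 IRQs */

    for ( unsigned int i = 0; i < 32; i++ )
    {
        if ( !(val & (1u << i)) || base + i >= NR_IRQS )
            continue; /* writing 0 to a bit has no effect */

        if ( hw_mapped[base + i] )
            /* Physical IRQ: set it pending in the distributor so the
             * workflow does not differ from a "real" interrupt. */
            hw_pending[base + i] = true;
        else
            /* Purely virtual IRQ: simply inject (mark pending) again. */
            pending[base + i] = true;
    }
}
```

A write of zero bits is a no-op, matching the architectural "write 1 to set" behaviour of ISPENDR.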


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 21:35:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 21:35:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89983.170088 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFOHt-0006Jg-GW; Thu, 25 Feb 2021 21:35:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89983.170088; Thu, 25 Feb 2021 21:35:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFOHt-0006JZ-DT; Thu, 25 Feb 2021 21:35:09 +0000
Received: by outflank-mailman (input) for mailman id 89983;
 Thu, 25 Feb 2021 21:35:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yjlP=H3=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lFOHr-0006JU-Po
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 21:35:07 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id df0ee3b6-c943-4e55-9e4b-d15c57dc9984;
 Thu, 25 Feb 2021 21:35:07 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id E4A5E64EC4;
 Thu, 25 Feb 2021 21:35:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: df0ee3b6-c943-4e55-9e4b-d15c57dc9984
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1614288906;
	bh=l47WobQyvLKVYH5ft90DfRxCmnuUHpzw5U2SngnPGNo=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=iSzHVol3WShxCPNFl3fEWhORnyCBqDA7PlQ5HHggcLtafy/tAqtZWgN4d8brS5V63
	 nIL45jRMPLDDga4ESeeWRbbONSNzjLu7jqY4mwQNQQwoHEupHl2XlLrU3ZAwVinPZA
	 LT3kAZRc21xUnHtDHfK6DjyStzKzCEI9IgNA20cHch/3v5x7uSFet4mfKGvENwGtxs
	 3App47MbTg2b2iiplxE9r0kmyJig4DWbbrigvYV/JvcsNEyKtSCcNUxkVDRoXlIAlD
	 1Qahlzqqf8Mu2FpletGVp5dtEBfYyunaarTnVvEWQbfVZ4maD64J7QWaHu6nWNHr6k
	 XqVEkt8/fEmtw==
Date: Thu, 25 Feb 2021 13:35:05 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Ash Wilding <ash.j.wilding@gmail.com>, dfaggioli@suse.com, 
    george.dunlap@citrix.com, iwj@xenproject.org, jbeulich@suse.com, 
    jgrall@amazon.com, sstabellini@kernel.org, xen-devel@lists.xenproject.org
Subject: Re: [PATCH for-4.15] xen/sched: Add missing memory barrier in
 vcpu_block()
In-Reply-To: <ee1d43f2-4c2c-66e0-8ad0-c32ca1c7969f@xen.org>
Message-ID: <alpine.DEB.2.21.2102251333200.3234@sstabellini-ThinkPad-T480s>
References: <20210220194701.24202-1-julien@xen.org> <20210223132408.10283-1-ash.j.wilding@gmail.com> <ee1d43f2-4c2c-66e0-8ad0-c32ca1c7969f@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 25 Feb 2021, Julien Grall wrote:
> On 23/02/2021 13:24, Ash Wilding wrote:
> > Hi Julien,
> 
> Hi Ash,
> 
> > Thanks for looking at this,
> > 
> > > vcpu_block() is now gaining an smp_mb__after_atomic() to prevent the
> > > CPU from reading any information about local events before the flag
> > > _VPF_blocked is set.
> > 
> > Reviewed-by: Ash Wilding <ash.j.wilding@gmail.com>
> 
> Thanks!
> 
> > 
> > 
> > As an aside,
> > 
> > > I couldn't convince myself whether the Arm implementation of
> > > local_events_need_delivery() contains enough barriers to prevent the
> > > re-ordering. However, I don't think we want to play with the devil
> > > here, as the function may be optimized in the future.
> > 
> > Agreed.
> > 
> > The vgic_vcpu_pending_irq() and vgic_evtchn_irq_pending() in the call
> > path of local_events_need_delivery() both call spin_lock_irqsave(),
> > which has an arch_lock_acquire_barrier() in its call path.
> > 
> > That just happens to map to a heavier smp_mb() on Arm right now, but
> > relying on this behaviour would be shaky; I can imagine a future update
> > to arch_lock_acquire_barrier() that relaxes it down to just acquire
> > semantics like its name implies (for example an LSE-based lock_acquire()
> > using LDUMAXA), in which case any code incorrectly relying on that full
> > barrier behaviour may break. I'm guessing this is what you meant by the
> > function may be optimized in future?
> 
> That's one of the optimizations I had in mind. The other is that we may find
> a way to remove the spinlocks, so the barriers would disappear completely.
> 
> > 
> > Do we know whether there is an expectation for previous loads/stores
> > to have been observed before local_events_need_delivery()? I'm wondering
> > whether it would make sense to have an smb_mb() at the start of the
> > *_nomask() variant in local_events_need_delivery()'s call path.
> 
> That's a good question :). For Arm, there are 4 users of
> local_events_need_delivery():
>   1) do_poll()
>   2) vcpu_block()
>   3) hypercall_preempt_check()
>   4) general_preempt_check()
> 
> 3 and 4 are used for breaking down long-running operations. I guess we would
> want an accurate view of the pending events, and therefore we would need a
> memory barrier to prevent the loads from happening too early.
> 
> In this case, I think the smp_mb() would want to be part of the
> hypercall_preempt_check() and general_preempt_check().
> 
> Therefore, I think we want to avoid the extra barrier in
> local_events_need_delivery(). Instead, we should require the caller to take
> care of the ordering if needed.
> 
> This would benefit any new architecture, as the common code would already
> contain the appropriate barriers.
> 
> @Stefano, what do you think?

I am thinking the same way as you, also because it is cleaner if the one
who writes also takes care of any barriers/flushes needed.

In this case it is vcpu_block that writes _VPF_blocked and knows that
the write has to be seen before local_events_need_delivery(). It is
easier to keep track of if the barrier is in vcpu_block together with
the set_bit call.
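The ordering being argued for can be sketched in a few lines. This is a standalone model using C11 atomics, not Xen's code: model_vcpu_block and model_events_need_delivery are hypothetical stand-ins for vcpu_block() and local_events_need_delivery(), and the seq_cst fence plays the role of smp_mb__after_atomic(). The point is that the _VPF_blocked store must be ordered before the event-pending loads, otherwise an event arriving in the window could be missed while the vCPU sleeps.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

#define VPF_BLOCKED 0x1u

static _Atomic unsigned int pause_flags;
static atomic_bool event_pending;

/* Stand-in for local_events_need_delivery(): just a pending-event load. */
static bool model_events_need_delivery(void)
{
    return atomic_load_explicit(&event_pending, memory_order_relaxed);
}

static bool model_vcpu_block(void)
{
    /* set_bit(_VPF_blocked, ...) */
    atomic_fetch_or_explicit(&pause_flags, VPF_BLOCKED,
                             memory_order_relaxed);

    /* smp_mb__after_atomic(): the flag must be visible before the
     * event check below is allowed to read anything. */
    atomic_thread_fence(memory_order_seq_cst);

    if ( model_events_need_delivery() )
    {
        /* An event raced in: undo the block instead of sleeping. */
        atomic_fetch_and_explicit(&pause_flags, ~VPF_BLOCKED,
                                  memory_order_relaxed);
        return false; /* did not block */
    }

    return true; /* would go to sleep here */
}
```

With the barrier next to the flag write, the race window is closed locally in the blocking path, which is the maintainability argument made above.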


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 22:58:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 22:58:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89996.170109 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFPam-00061d-0o; Thu, 25 Feb 2021 22:58:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89996.170109; Thu, 25 Feb 2021 22:58:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFPal-00061W-Tf; Thu, 25 Feb 2021 22:58:43 +0000
Received: by outflank-mailman (input) for mailman id 89996;
 Thu, 25 Feb 2021 22:58:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Mloc=H3=gmail.com=bobbyeshleman@srs-us1.protection.inumbo.net>)
 id 1lFPak-00061R-RG
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 22:58:42 +0000
Received: from mail-pl1-x631.google.com (unknown [2607:f8b0:4864:20::631])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5e8c302a-ed26-4765-a455-418dd2e5ed39;
 Thu, 25 Feb 2021 22:58:41 +0000 (UTC)
Received: by mail-pl1-x631.google.com with SMTP id z7so4048803plk.7
 for <xen-devel@lists.xenproject.org>; Thu, 25 Feb 2021 14:58:41 -0800 (PST)
Received: from ?IPv6:2601:1c2:4f80:d230::5? ([2601:1c2:4f80:d230::5])
 by smtp.gmail.com with ESMTPSA id d10sm7228664pgl.72.2021.02.25.14.58.39
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 25 Feb 2021 14:58:40 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5e8c302a-ed26-4765-a455-418dd2e5ed39
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:organization:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=6li7BCgu0BeaMv5H6dr1yOY/I5ecVdAEH7WnmzvBhtE=;
        b=M7jX6zSfTjE0FmzLS11Hoo2aVFu8QYTZp/uRaPQK7IiJLIZQojVFFpgJTOSNhYDt7j
         oyK5zLcm/Bl5q37yX1VcVr9nTs9tw3kRWkxnWSti02excFutI2/D/aZnKhwB1A7WEQSL
         KEgqNKQrUzlHeBuk9XPZH+mJpwUMoIGST5GWv8R8rIbYf8DbUEUp8SF3cz9VlZMispGd
         c3Jc1N7BTSsiL5WitEs1h7pOd3SjisONfLR5u5afOsTi6Use42u/Dy5EDl7dD0KbL9ND
         ZCBBzD4o7WnejDtP9WAjnx2aiWyrE5oS8oymbLpgNvFVH91Au1+tss7pUimy2A3VaXTf
         40Cw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:organization
         :message-id:date:user-agent:mime-version:in-reply-to
         :content-language:content-transfer-encoding;
        bh=6li7BCgu0BeaMv5H6dr1yOY/I5ecVdAEH7WnmzvBhtE=;
        b=e42OB1U3Av3e6InBMYdQkcPeBkufUde7AzGJ6M7wW0T4iP9B8hZr2TkyokoeS1EwtG
         012gBEkeJU49lZLugclpWPa1DzjloTuvz8QdO3T0TGPmHZ3NeiD6I8x+276oxsbkyRoT
         NgrfHlSNE7DnSpERlnoDuSeZcaADDDcFwzQPAk82/+ihrZbwpWew6kP57bQczNfTaxau
         t+qmHeX/mx1NbNoIgY6bYFNaise5KxclgRL/roUoxIKdWmszmfGeUouRlvLdujwjlRfL
         JTCdPaHoO00sD7L3HS5HW6sAaoxn1if2JKw/KMXVKSGBO2KmKmUdRDV09u8oupMUmUfK
         DVaQ==
X-Gm-Message-State: AOAM530T2Cmkvhz80CA+qA5WzpWeU/k7AeORxegcfWbzw4m/m24DZ3hd
	ie6ddCglPYkdThazE14dxf4=
X-Google-Smtp-Source: ABdhPJxOHoN3e8uu1fd4LQeW+bEGjeecBmRyRvJja23o4R/gi9IugVzYzClFVagfJ1NV+APziQs8iw==
X-Received: by 2002:a17:90a:ab10:: with SMTP id m16mr217923pjq.24.1614293920816;
        Thu, 25 Feb 2021 14:58:40 -0800 (PST)
Subject: Re: [PATCH for-next 3/6] xen/sched: Fix build when NR_CPUS == 1
To: Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>, Jan Beulich <jbeulich@suse.com>
References: <cover.1614265718.git.connojdavis@gmail.com>
 <d0922adc698ab76223d76a0a7f328a72cedf00ad.1614265718.git.connojdavis@gmail.com>
From: Bob Eshleman <bobbyeshleman@gmail.com>
Organization: Vates SAS
Message-ID: <71840112-790f-24b9-115c-9030c8521b65@gmail.com>
Date: Thu, 25 Feb 2021 14:55:45 -0800
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
MIME-Version: 1.0
In-Reply-To: <d0922adc698ab76223d76a0a7f328a72cedf00ad.1614265718.git.connojdavis@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 2/25/21 7:24 AM, Connor Davis wrote:
> Return from cpu_schedule_up when either cpu is 0 or
> NR_CPUS == 1. This fixes the following:
> 
> core.c: In function 'cpu_schedule_up':
> core.c:2769:19: error: array subscript 1 is above array bounds
> of 'struct vcpu *[1]' [-Werror=array-bounds]
>  2769 |     if ( idle_vcpu[cpu] == NULL )
>       |
> 
> Signed-off-by: Connor Davis <connojdavis@gmail.com>
> ---
>  xen/common/sched/core.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
> index 9745a77eee..f5ec65bf9b 100644
> --- a/xen/common/sched/core.c
> +++ b/xen/common/sched/core.c
> @@ -2763,7 +2763,7 @@ static int cpu_schedule_up(unsigned int cpu)
>      cpumask_set_cpu(cpu, &sched_res_mask);
>  
>      /* Boot CPU is dealt with later in scheduler_init(). */
> -    if ( cpu == 0 )
> +    if ( cpu == 0 || NR_CPUS == 1 )
>          return 0;
>  
>      if ( idle_vcpu[cpu] == NULL )
> 

Interesting.  I wonder when this changed in GCC.

I haven't yet seen this issue compiling with:
  NR_CPUS=1
  ARCH=riscv64
  riscv64-unknown-linux-gnu-gcc (GCC) 10.1.0

Which version of GCC are you seeing emit this?

- Bob
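For context on why the warning fires at all, here is a minimal illustration (not Xen code; model_cpu_schedule_up is a hypothetical reduction of cpu_schedule_up): after the `cpu == 0` early return, a compiler building with NR_CPUS == 1 can prove `cpu >= 1` on the fall-through path, so `idle_vcpu[cpu]` provably indexes past the one-element array even though that path never executes at runtime. Adding `NR_CPUS == 1` to the early return makes the offending access dead code.

```c
#include <assert.h>
#include <stddef.h>

#define NR_CPUS 1

struct vcpu;
static struct vcpu *idle_vcpu[NR_CPUS];

static int model_cpu_schedule_up(unsigned int cpu)
{
    /* The fix: also bail out when NR_CPUS == 1, so the array access
     * below is eliminated as dead code and -Warray-bounds stays quiet. */
    if ( cpu == 0 || NR_CPUS == 1 )
        return 0;

    /* Without the extra condition, GCC sees cpu >= 1 here and flags
     * "array subscript 1 is above array bounds of 'struct vcpu *[1]'". */
    if ( idle_vcpu[cpu] == NULL )
        return -1;

    return 0;
}
```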


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 23:15:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 23:15:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.89999.170121 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFPqw-00085V-Fq; Thu, 25 Feb 2021 23:15:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 89999.170121; Thu, 25 Feb 2021 23:15:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFPqw-00085O-Cn; Thu, 25 Feb 2021 23:15:26 +0000
Received: by outflank-mailman (input) for mailman id 89999;
 Thu, 25 Feb 2021 23:15:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3W+l=H3=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lFPqv-00085J-G9
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 23:15:25 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5ff15e28-a158-464a-b93e-90ff139074d5;
 Thu, 25 Feb 2021 23:15:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5ff15e28-a158-464a-b93e-90ff139074d5
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614294924;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=R10X9M1b0GBXkPrm7zHmbiYNEWQ49Xj8LQbQMY/Ungk=;
  b=Pu3ya1CnvTvoDxAr6/ZzPlc1nvDweUAxCvwylaBs61g2NDtohMvzmqBj
   UNBJed30TjE6KHZ9vVgLbVXXaywc3LK2J4CLM4+Y578sxd8Y3S136ALg3
   4d50nfC3rdwoK3LrL29BAdow5s689vT3YToUZ/55SJ2bTUSk81ejO+wcT
   I=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: RcA/4aZsQLkKV5IyqIhyU8SxcfoGrv7hOo4OoyFGs360lv21b//ZXoJTkJq9oLmalQE30gLMo1
 ghjePXEyRGh+0fS6Da1YuBdIt+2FokK77HNFhU8ORXkc48kGvCnhqVix9CXkCicD8Bmead4FKt
 +O/wI0XofRFDdDWE7cC5bjSvA6NH1LoJ4NMhEXGQyrCdRtwG9JVLow1og9Eu8CfFEUyGgYtQwg
 kl5fEnmTclyAiNW7gqDKFWGdbu/7H2dezXvXdUOef6rUMEmAV4YM2O+NA7eU0CF4CyVbakqsIn
 Rns=
X-SBRS: 5.2
X-MesageID: 39461986
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,207,1610427600"; 
   d="scan'208";a="39461986"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=AckXllqC29DG4hA2nAeGGje9HsewVmQJJS0GLP7ajbGT8blpWrnPdLKMzsN0k88UQPUX4y+h5hasUFgmZE0Qlpj4B/zGegMCG1eURdWjMS224xJFzGifq6nHcmmPPoE+UIPr/T34GW1zDKJbfPa6ip5rkmlsZ0Jd3vJFaN3g/URXj3oxcrbpuOT7bvpp+PgWt1hMr6ZjYoH/87Ub+YpCorqZN7e2msyeO1EYj2y2mn9RBGUjF8BX80XEAgP0tef0YnPPvMyXwy+JotwocWkFfTxQHyiZjs609/T9o3AdWYz4AUO5YfoVdYejOuiDKexant5auSWdzWy63TXM3sGl0Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=iS4d3V536nQHSLVDQ4f2Rad+5QcRgjHc9Atug2YiTxQ=;
 b=UH7uPkgTFDu+PuPSAFXbXzXJSoStclX91yKSw27c0u9RTJmVYMeFvFvM5IavlHgiSqSwhupz+3ibnNpMjILXhFnMOVt+FQwB8C9777UsgIhRUqxSoqo5SY1uEyjtDg0vVV27G+bD6DrlRwDV66bAeb7qJVwJgkHWsIN5AqwaSDJE/UlOnA+kqs4MTg22Ly5/OQiG6eZoSUHk4WPTS2B75FEOOc+xNbpC5Iw3UTWJI3niCpkgORPyvOHHJRdkfOKCjov7a7bh23zH6aUHJ55jdBA+6Wz/N9XD4IsDiD0YBdebiqaecM8/lplF4dd3BEsIz+nZtQeIOqHB1fRrcxGddw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=iS4d3V536nQHSLVDQ4f2Rad+5QcRgjHc9Atug2YiTxQ=;
 b=PF0x05R9u3S/uOLguJoyyrvsfwrp9TH10x/V5z6Qe1ZRIiaaoT5ItcV1uWSivNKlutVUOu9Oc/6NG+hgWcdbx2NxW35zo5FsNiXIVQlF7qC0GlJwFoHY8dD+iIyHERbm81Y+vCGRx9LApk5CzM1ZmBkIegQBM1PBQqReO3MV7qc=
Subject: Re: [PATCH for-next 5/6] xen: Add files needed for minimal riscv
 build
To: Connor Davis <connojdavis@gmail.com>, <xen-devel@lists.xenproject.org>
CC: Bobby Eshleman <bobbyeshleman@gmail.com>, George Dunlap
	<george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, Jan Beulich
	<jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Tamas K Lengyel
	<tamas@tklengyel.com>, Alexandru Isaila <aisaila@bitdefender.com>, Petre
 Pircalabu <ppircalabu@bitdefender.com>
References: <cover.1614265718.git.connojdavis@gmail.com>
 <7652ce3486c026a3a9f7d850170ea81ba8a18bdb.1614265718.git.connojdavis@gmail.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <9b441529-c5a4-4299-1155-869bcdab06b0@citrix.com>
Date: Thu, 25 Feb 2021 23:14:53 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
In-Reply-To: <7652ce3486c026a3a9f7d850170ea81ba8a18bdb.1614265718.git.connojdavis@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0449.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1a9::22) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 24421eee-5b93-4f70-dfea-08d8d9e336b1
X-MS-TrafficTypeDiagnostic: BYAPR03MB3862:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB38620941D34AA3E44D824D89BA9E9@BYAPR03MB3862.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:5516;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: t0Y3b4i61eQ3cIAozsIZT1fMSMdh5A47ISmqYFf1+agZjhd/zHBv9ticC3rmrpcqbdOGRROIIA7alU/jFIFvUwwJDp/BYBAPXjSvpqbgf2AfvAFKnr+eIuppdyao3Zoi4ipg8uS8tCI2loHKMihy34tSpqIvc+IiUs4BYvvDgFHVSk2zi5+XJIwk/WsLz9RPL6Qivu8wx0QIiPn60y62iyJA1484ZQflKYLo43UUbJgs0UN7pI0gt4lCxrD4sKtJ4SVqXzbLwQktwh4yhttivbvKEej1PRxGSGC0K+Knn1oreUNEFd+/UVpd1sr717K0fAUiee+jvfZ4JXsslQyjYUqfTtsBv5QaZ9Zo39kZOLb9PWGQzKBgf/ANKICTRhW+IdUV8C3pVMUoDQWNPydN6rbT3p4sbWr3GWtukngOqGOR5QhoUR6gyaAqa6lGQGlCsFJ9BLNpnFSn1NCDOASmJ0dlL6NXGA9BpgK3g65wmE1unjGOI4/uTXX1n/UWVPW5t8cy7P7WU5C+k5q0VrzWLsFWatLibq76haY1InwPrKgbpecbY5/o+05RLeRNMl52H1ufWvqSvX2Yc50Op0SCd+FlfTBbSYkDS/pfl0LYd04=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(39860400002)(366004)(346002)(396003)(376002)(54906003)(478600001)(66476007)(2616005)(6666004)(16526019)(53546011)(26005)(31696002)(31686004)(186003)(7416002)(16576012)(86362001)(4326008)(2906002)(83380400001)(6486002)(8676002)(956004)(316002)(8936002)(66946007)(36756003)(5660300002)(66556008)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-CrossTenant-Network-Message-Id: 24421eee-5b93-4f70-dfea-08d8d9e336b1
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Feb 2021 23:15:17.4503
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: yCtia5cu2otTdreFXOSuAluM5qlOaThOBCtbAKk9eXpvsUM4Ik9qmbHYGKcG9MWMoWyjqhF1NCufmD5OZGSmdEYu5l2UvmvQqKXRzN84P0g=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB3862
X-OriginatorOrg: citrix.com

On 25/02/2021 15:24, Connor Davis wrote:
> Add the minimum code required to get Xen to build with
> XEN_TARGET_ARCH=riscv64. It is minimal in the sense that every file and
> function added is required for a successful build, given the .config
> generated from riscv64_defconfig. The function implementations are just
> stubs; actual implementations will need to be added later.
>
> Signed-off-by: Connor Davis <connojdavis@gmail.com>
> ---
>  config/riscv64.mk                        |   7 +
>  xen/Makefile                             |   8 +-
>  xen/arch/riscv/Kconfig                   |  54 ++++
>  xen/arch/riscv/Kconfig.debug             |   0
>  xen/arch/riscv/Makefile                  |  57 ++++
>  xen/arch/riscv/README.source             |  19 ++
>  xen/arch/riscv/Rules.mk                  |  13 +
>  xen/arch/riscv/arch.mk                   |   7 +
>  xen/arch/riscv/configs/riscv64_defconfig |  12 +
>  xen/arch/riscv/delay.c                   |  16 +
>  xen/arch/riscv/domain.c                  | 144 +++++++++
>  xen/arch/riscv/domctl.c                  |  36 +++
>  xen/arch/riscv/guestcopy.c               |  57 ++++
>  xen/arch/riscv/head.S                    |   6 +
>  xen/arch/riscv/irq.c                     |  78 +++++
>  xen/arch/riscv/lib/Makefile              |   1 +
>  xen/arch/riscv/lib/find_next_bit.c       | 284 +++++++++++++++++
>  xen/arch/riscv/mm.c                      |  93 ++++++
>  xen/arch/riscv/p2m.c                     | 150 +++++++++
>  xen/arch/riscv/percpu.c                  |  17 +
>  xen/arch/riscv/platforms/Kconfig         |  31 ++
>  xen/arch/riscv/riscv64/asm-offsets.c     |  31 ++
>  xen/arch/riscv/setup.c                   |  27 ++
>  xen/arch/riscv/shutdown.c                |  28 ++
>  xen/arch/riscv/smp.c                     |  35 +++
>  xen/arch/riscv/smpboot.c                 |  34 ++
>  xen/arch/riscv/sysctl.c                  |  33 ++
>  xen/arch/riscv/time.c                    |  35 +++
>  xen/arch/riscv/traps.c                   |  35 +++
>  xen/arch/riscv/vm_event.c                |  39 +++
>  xen/arch/riscv/xen.lds.S                 | 113 +++++++
>  xen/drivers/char/serial.c                |   1 +
>  xen/include/asm-riscv/altp2m.h           |  39 +++
>  xen/include/asm-riscv/asm.h              |  77 +++++
>  xen/include/asm-riscv/asm_defns.h        |  24 ++
>  xen/include/asm-riscv/atomic.h           | 204 ++++++++++++
>  xen/include/asm-riscv/bitops.h           | 331 ++++++++++++++++++++
>  xen/include/asm-riscv/bug.h              |  54 ++++
>  xen/include/asm-riscv/byteorder.h        |  16 +
>  xen/include/asm-riscv/cache.h            |  24 ++
>  xen/include/asm-riscv/cmpxchg.h          | 382 +++++++++++++++++++++++
>  xen/include/asm-riscv/compiler_types.h   |  32 ++
>  xen/include/asm-riscv/config.h           | 110 +++++++
>  xen/include/asm-riscv/cpufeature.h       |  17 +
>  xen/include/asm-riscv/csr.h              | 219 +++++++++++++
>  xen/include/asm-riscv/current.h          |  47 +++
>  xen/include/asm-riscv/debugger.h         |  15 +
>  xen/include/asm-riscv/delay.h            |  15 +
>  xen/include/asm-riscv/desc.h             |  12 +
>  xen/include/asm-riscv/device.h           |  15 +
>  xen/include/asm-riscv/div64.h            |  23 ++
>  xen/include/asm-riscv/domain.h           |  50 +++
>  xen/include/asm-riscv/event.h            |  42 +++
>  xen/include/asm-riscv/fence.h            |  12 +
>  xen/include/asm-riscv/flushtlb.h         |  34 ++
>  xen/include/asm-riscv/grant_table.h      |  12 +
>  xen/include/asm-riscv/guest_access.h     |  41 +++
>  xen/include/asm-riscv/guest_atomics.h    |  60 ++++
>  xen/include/asm-riscv/hardirq.h          |  27 ++
>  xen/include/asm-riscv/hypercall.h        |  12 +
>  xen/include/asm-riscv/init.h             |  42 +++
>  xen/include/asm-riscv/io.h               | 283 +++++++++++++++++
>  xen/include/asm-riscv/iocap.h            |  13 +
>  xen/include/asm-riscv/iommu.h            |  46 +++
>  xen/include/asm-riscv/irq.h              |  58 ++++
>  xen/include/asm-riscv/mem_access.h       |   4 +
>  xen/include/asm-riscv/mm.h               | 246 +++++++++++++++
>  xen/include/asm-riscv/monitor.h          |  65 ++++
>  xen/include/asm-riscv/nospec.h           |  25 ++
>  xen/include/asm-riscv/numa.h             |  41 +++
>  xen/include/asm-riscv/p2m.h              | 218 +++++++++++++
>  xen/include/asm-riscv/page-bits.h        |  11 +
>  xen/include/asm-riscv/page.h             |  73 +++++
>  xen/include/asm-riscv/paging.h           |  15 +
>  xen/include/asm-riscv/pci.h              |  31 ++
>  xen/include/asm-riscv/percpu.h           |  33 ++
>  xen/include/asm-riscv/processor.h        |  59 ++++
>  xen/include/asm-riscv/random.h           |   9 +
>  xen/include/asm-riscv/regs.h             |  23 ++
>  xen/include/asm-riscv/setup.h            |  14 +
>  xen/include/asm-riscv/smp.h              |  46 +++
>  xen/include/asm-riscv/softirq.h          |  16 +
>  xen/include/asm-riscv/spinlock.h         |  12 +
>  xen/include/asm-riscv/string.h           |  28 ++
>  xen/include/asm-riscv/sysregs.h          |  16 +
>  xen/include/asm-riscv/system.h           |  99 ++++++
>  xen/include/asm-riscv/time.h             |  31 ++
>  xen/include/asm-riscv/trace.h            |  12 +
>  xen/include/asm-riscv/types.h            |  60 ++++
>  xen/include/asm-riscv/vm_event.h         |  55 ++++
>  xen/include/asm-riscv/xenoprof.h         |  12 +
>  xen/include/public/arch-riscv.h          | 183 +++++++++++
>  xen/include/public/arch-riscv/hvm/save.h |  39 +++
>  xen/include/public/hvm/save.h            |   2 +
>  xen/include/public/pmu.h                 |   2 +
>  xen/include/public/xen.h                 |   2 +
>  xen/include/xen/domain.h                 |   1 +

Well - this is orders of magnitude more complicated than it ought to
be.  An empty head.S doesn't (well - shouldn't) need the overwhelming
majority of this.

Do you know how all of this is being pulled in?  Is it from attempting
to compile common/ by any chance?

Now is also an excellent opportunity to nuke the x86isms which have
escaped into common code (debugger and xenoprof in particular), and
rethink some of our common/arch split.

When it comes to header files specifically, I want to start using
xen/arch/$ARCH/include/asm/ and retrofit this to x86 and ARM.  It has
two important properties - first, that you don't need to symlink the
tree to make compilation work, and second that patches touching multiple
architectures have hunks ordered in a more logical way.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Feb 25 23:22:02 2021
Subject: Re: [PATCH v1] xen: ACPI: Get rid of ACPICA message printing
To: "Rafael J. Wysocki" <rjw@rjwysocki.net>,
        Linux ACPI <linux-acpi@vger.kernel.org>
Cc: LKML <linux-kernel@vger.kernel.org>,
        Stefano Stabellini <sstabellini@kernel.org>,
        Juergen Gross
 <jgross@suse.com>, xen-devel@lists.xenproject.org
References: <1709720.Zl72FGBfpD@kreacher>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <f73290b7-9a50-829d-76f6-ba2db3d16305@oracle.com>
Date: Thu, 25 Feb 2021 18:21:41 -0500
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
In-Reply-To: <1709720.Zl72FGBfpD@kreacher>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 2/24/21 1:47 PM, Rafael J. Wysocki wrote:
> From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
>
> The ACPI_DEBUG_PRINT() macro is used in a few places in
> xen-acpi-cpuhotplug.c and xen-acpi-memhotplug.c for printing debug
> messages, but that is questionable, because that macro belongs to
> ACPICA and it should not be used elsewhere.  In addition,
> ACPI_DEBUG_PRINT() requires special enabling to allow it to actually
> print the message and the _COMPONENT symbol generally needed for
> that is not defined in any of the files in question.
>
> For this reason, replace all of the ACPI_DEBUG_PRINT() instances in
> the Xen code with acpi_handle_debug() (with the additional benefit
> that the source object can be identified more easily after this
> change) and drop the ACPI_MODULE_NAME() definitions that are only
> used by the ACPICA message printing macros from that code.
>
> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>





From xen-devel-bounces@lists.xenproject.org Thu Feb 25 23:27:15 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [xen-unstable bisection] complete test-xtf-amd64-amd64-4
Message-Id: <E1lFQ2F-0002kJ-Qo@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 25 Feb 2021 23:27:07 +0000

branch xen-unstable
xenbranch xen-unstable
job test-xtf-amd64-amd64-4
testid xtf/test-pv32pae-selftest

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Tree: xtf git://xenbits.xen.org/xtf.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  4dc1815991420b809ce18dddfdf9c0af48944204
  Bug not present: 2d824791504f4119f04f95bafffec2e37d319c25
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/159687/


  commit 4dc1815991420b809ce18dddfdf9c0af48944204
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Fri Feb 19 17:19:56 2021 +0100
  
      x86/PV: harden guest memory accesses against speculative abuse
      
      Inspired by
      https://lore.kernel.org/lkml/f12e7d3cecf41b2c29734ea45a393be21d4a8058.1597848273.git.jpoimboe@redhat.com/
      and prior work in that area of x86 Linux, suppress speculation with
      guest specified pointer values by suitably masking the addresses to
      non-canonical space in case they fall into Xen's virtual address range.
      
      Introduce a new Kconfig control.
      
      Note that it is necessary in such code to avoid using "m" kind operands:
      If we didn't, there would be no guarantee that the register passed to
      guest_access_mask_ptr is also the (base) one used for the memory access.
      
      As a minor unrelated change in get_unsafe_asm() the unnecessary "itype"
      parameter gets dropped and the XOR on the fixup path gets changed to be
      a 32-bit one in all cases: This way we avoid pointless REX.W or operand
      size overrides, or writes to partial registers.
      
      Requested-by: Andrew Cooper <andrew.cooper3@citrix.com>
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
      Release-Acked-by: Ian Jackson <iwj@xenproject.org>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable/test-xtf-amd64-amd64-4.xtf--test-pv32pae-selftest.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-unstable/test-xtf-amd64-amd64-4.xtf--test-pv32pae-selftest --summary-out=tmp/159687.bisection-summary --basis-template=159475 --blessings=real,real-bisect,real-retry xen-unstable test-xtf-amd64-amd64-4 xtf/test-pv32pae-selftest
Searching for failure / basis pass:
 159652 fail [host=godello0] / 159475 ok.
Failure / basis pass flights: 159652 / 159475
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Tree: xtf git://xenbits.xen.org/xtf.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 60390ccb8b9b2dbf85010f8b47779bb231aa2533 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 e8185c5f01c68f7d29d23a4a91bc1be1ff2cc1ca 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://xenbits.xen.org/qemu-xen.git#7ea428895af2840d85c524f0bd11a38\
 aac308308-7ea428895af2840d85c524f0bd11a38aac308308 git://xenbits.xen.org/xen.git#e8185c5f01c68f7d29d23a4a91bc1be1ff2cc1ca-60390ccb8b9b2dbf85010f8b47779bb231aa2533 git://xenbits.xen.org/xtf.git#8ab15139728a8efd3ebbb60beb16a958a6a93fa1-8ab15139728a8efd3ebbb60beb16a958a6a93fa1
Loaded 5001 nodes in revision graph
Searching for test results:
 159475 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 e8185c5f01c68f7d29d23a4a91bc1be1ff2cc1ca 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159487 [host=godello1]
 159491 [host=albana0]
 159508 [host=albana1]
 159526 [host=huxelrebe1]
 159540 [host=elbling0]
 159559 [host=fiano1]
 159576 [host=chardonnay0]
 159602 [host=elbling1]
 159626 [host=chardonnay1]
 159669 [host=chardonnay1]
 159652 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 60390ccb8b9b2dbf85010f8b47779bb231aa2533 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159670 [host=chardonnay1]
 159672 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 e8185c5f01c68f7d29d23a4a91bc1be1ff2cc1ca 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159673 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 60390ccb8b9b2dbf85010f8b47779bb231aa2533 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159675 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 2ff2adc61fcfa09118b76b4b64cbf8a78f7f2882 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159678 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 4dc1815991420b809ce18dddfdf9c0af48944204 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159679 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 6a1d72d3739e330caf728ea07d656d7bf568824b 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159680 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 2d824791504f4119f04f95bafffec2e37d319c25 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159682 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 4dc1815991420b809ce18dddfdf9c0af48944204 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159683 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 2d824791504f4119f04f95bafffec2e37d319c25 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159684 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 4dc1815991420b809ce18dddfdf9c0af48944204 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159686 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 2d824791504f4119f04f95bafffec2e37d319c25 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
 159687 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 4dc1815991420b809ce18dddfdf9c0af48944204 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
Searching for interesting versions
 Result found: flight 159475 (pass), for basis pass
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 2d824791504f4119f04f95bafffec2e37d319c25 8ab15139728a8efd3ebbb60beb16a958a6a93fa1, results HASH(0x562dd310a750) HASH(0x562dd3120f40) HASH(0x562dd311ec38) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05\
 e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 6a1d72d3739e330caf728ea07d656d7bf568824b 8ab15139728a8efd3ebbb60beb16a958a6a93fa1, results HASH(0x562dd3120940) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 e8185c5f01c68f7d29d23a4a91bc1be1ff2cc1ca 8ab15139728a8efd3ebbb60beb16a958a6a93fa1, results HASH(0x562dd310\
 f088) HASH(0x562dd3113840) Result found: flight 159652 (fail), for basis failure (at ancestor ~80)
 Repro found: flight 159672 (pass), for basis pass
 Repro found: flight 159673 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 2d824791504f4119f04f95bafffec2e37d319c25 8ab15139728a8efd3ebbb60beb16a958a6a93fa1
No revisions left to test, checking graph state.
 Result found: flight 159680 (pass), for last pass
 Result found: flight 159682 (fail), for first failure
 Repro found: flight 159683 (pass), for last pass
 Repro found: flight 159684 (fail), for first failure
 Repro found: flight 159686 (pass), for last pass
 Repro found: flight 159687 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  4dc1815991420b809ce18dddfdf9c0af48944204
  Bug not present: 2d824791504f4119f04f95bafffec2e37d319c25
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/159687/


  commit 4dc1815991420b809ce18dddfdf9c0af48944204
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Fri Feb 19 17:19:56 2021 +0100
  
      x86/PV: harden guest memory accesses against speculative abuse
      
      Inspired by
      https://lore.kernel.org/lkml/f12e7d3cecf41b2c29734ea45a393be21d4a8058.1597848273.git.jpoimboe@redhat.com/
      and prior work in that area of x86 Linux, suppress speculation with
      guest specified pointer values by suitably masking the addresses to
      non-canonical space in case they fall into Xen's virtual address range.
      
      Introduce a new Kconfig control.
      
      Note that it is necessary in such code to avoid using "m" kind operands:
      If we didn't, there would be no guarantee that the register passed to
      guest_access_mask_ptr is also the (base) one used for the memory access.
      
      As a minor unrelated change in get_unsafe_asm() the unnecessary "itype"
      parameter gets dropped and the XOR on the fixup path gets changed to be
      a 32-bit one in all cases: This way we avoid pointless REX.W or operand
      size overrides, or writes to partial registers.
      
      Requested-by: Andrew Cooper <andrew.cooper3@citrix.com>
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
      Release-Acked-by: Ian Jackson <iwj@xenproject.org>

Revision graph left in /home/logs/results/bisect/xen-unstable/test-xtf-amd64-amd64-4.xtf--test-pv32pae-selftest.{dot,ps,png,html,svg}.
----------------------------------------
159687: tolerable all pass

flight 159687 xen-unstable real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/159687/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-xtf-amd64-amd64-4     19 xtf/test-pv32pae-selftest fail baseline untested


jobs:
 test-xtf-amd64-amd64-4                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Thu Feb 25 23:52:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 Feb 2021 23:52:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90012.170162 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFQQO-0003y8-Uy; Thu, 25 Feb 2021 23:52:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90012.170162; Thu, 25 Feb 2021 23:52:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFQQO-0003y1-S2; Thu, 25 Feb 2021 23:52:04 +0000
Received: by outflank-mailman (input) for mailman id 90012;
 Thu, 25 Feb 2021 23:52:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yjlP=H3=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lFQQN-0003xw-K5
 for xen-devel@lists.xenproject.org; Thu, 25 Feb 2021 23:52:03 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d08a44e2-4d57-4784-b006-ef7536b343ea;
 Thu, 25 Feb 2021 23:52:02 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 7ADFC64F29;
 Thu, 25 Feb 2021 23:52:01 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d08a44e2-4d57-4784-b006-ef7536b343ea
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1614297121;
	bh=gKrc3LbPXjztB1RQb8Ri1kuoi7BqfPxmEstNJKugeKg=;
	h=From:To:Cc:Subject:Date:From;
	b=JHbF2ijUkzp1TFtMnJE0LhTEw3vqOrcDj9ulUyKy+7yXyhgUO6D3iUpWIoG8tHyvw
	 oAvCNIPwgg0F7XsXm6+3HIkfDSVZTQWEA4JVajQVqrPyCn9NYbguFElHaaFbZ/Py2x
	 sDLAXRg9Xna7mWSp7z+dGQeVeS+331XRryq52apYKufjqit4PcGO8McRoe5mEzisIG
	 PGKl3Ii7r6F7eHIM/3UzPeEt9jhoBuqrlzc5595TRJJwR7AVJlxjgES3Oth5gO6awN
	 9ssFnJL/EQwN3qmXwaMPcvuIGGr6tZoZR4GX5XC9tzzGbLAa1Rli9ok0i0EjUsE12u
	 +XUroRcE5Wl6g==
From: Stefano Stabellini <sstabellini@kernel.org>
To: jgross@suse.com
Cc: sstabellini@kernel.org,
	boris.ostrovsky@oracle.com,
	xen-devel@lists.xenproject.org,
	Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: [PATCH] xen/arm: introduce XENFEAT_direct_mapped and XENFEAT_not_direct_mapped
Date: Thu, 25 Feb 2021 15:51:58 -0800
Message-Id: <20210225235158.24001-1-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1

Newer Xen versions expose two Xen feature flags to tell us if the domain
is directly mapped or not. Only when a domain is directly mapped does it
make sense to enable swiotlb-xen on ARM.

Introduce a function on ARM to check the new Xen feature flags and also
to deal with the legacy case. Call the function xen_swiotlb_detect.

Also rename the existing pci_xen_swiotlb_detect on x86 to
xen_swiotlb_detect so that we can share a common function declaration.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---

This is the corresponding Xen patch under review:
https://marc.info/?l=xen-devel&m=161421618217686

We don't *have to* make the x86 function and the ARM function exactly
the same, but I thought it would be much nicer if we did. However, we
can't really call it pci_* on ARM, as PCI is not necessarily present there.

---
 arch/arm/xen/mm.c                      | 14 +++++++++++++-
 arch/arm64/mm/dma-mapping.c            |  2 +-
 arch/x86/include/asm/xen/swiotlb-xen.h |  4 ++--
 arch/x86/kernel/pci-swiotlb.c          |  2 +-
 arch/x86/xen/pci-swiotlb-xen.c         |  6 +++---
 include/xen/interface/features.h       |  7 +++++++
 include/xen/swiotlb-xen.h              |  6 ++++++
 7 files changed, 33 insertions(+), 8 deletions(-)

diff --git a/arch/arm/xen/mm.c b/arch/arm/xen/mm.c
index 467fa225c3d0..f8e5acbef05d 100644
--- a/arch/arm/xen/mm.c
+++ b/arch/arm/xen/mm.c
@@ -135,10 +135,22 @@ void xen_destroy_contiguous_region(phys_addr_t pstart, unsigned int order)
 	return;
 }
 
+int __init xen_swiotlb_detect(void)
+{
+	if (!xen_domain())
+		return 0;
+	if (xen_feature(XENFEAT_direct_mapped))
+		return 1;
+	/* legacy case */
+	if (!xen_feature(XENFEAT_not_direct_mapped) && xen_initial_domain())
+		return 1;
+	return 0;
+}
+
 static int __init xen_mm_init(void)
 {
 	struct gnttab_cache_flush cflush;
-	if (!xen_initial_domain())
+	if (!xen_swiotlb_detect())
 		return 0;
 	xen_swiotlb_init(1, false);
 
diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index 93e87b287556..4bf1dd3eb041 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
@@ -53,7 +53,7 @@ void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
 		iommu_setup_dma_ops(dev, dma_base, size);
 
 #ifdef CONFIG_XEN
-	if (xen_initial_domain())
+	if (xen_swiotlb_detect())
 		dev->dma_ops = &xen_swiotlb_dma_ops;
 #endif
 }
diff --git a/arch/x86/include/asm/xen/swiotlb-xen.h b/arch/x86/include/asm/xen/swiotlb-xen.h
index 6b56d0d45d15..494694744844 100644
--- a/arch/x86/include/asm/xen/swiotlb-xen.h
+++ b/arch/x86/include/asm/xen/swiotlb-xen.h
@@ -2,14 +2,14 @@
 #ifndef _ASM_X86_SWIOTLB_XEN_H
 #define _ASM_X86_SWIOTLB_XEN_H
 
+#include <xen/swiotlb-xen.h>
+
 #ifdef CONFIG_SWIOTLB_XEN
 extern int xen_swiotlb;
-extern int __init pci_xen_swiotlb_detect(void);
 extern void __init pci_xen_swiotlb_init(void);
 extern int pci_xen_swiotlb_init_late(void);
 #else
 #define xen_swiotlb (0)
-static inline int __init pci_xen_swiotlb_detect(void) { return 0; }
 static inline void __init pci_xen_swiotlb_init(void) { }
 static inline int pci_xen_swiotlb_init_late(void) { return -ENXIO; }
 #endif
diff --git a/arch/x86/kernel/pci-swiotlb.c b/arch/x86/kernel/pci-swiotlb.c
index c2cfa5e7c152..c18eb6629326 100644
--- a/arch/x86/kernel/pci-swiotlb.c
+++ b/arch/x86/kernel/pci-swiotlb.c
@@ -30,7 +30,7 @@ int __init pci_swiotlb_detect_override(void)
 	return swiotlb;
 }
 IOMMU_INIT_FINISH(pci_swiotlb_detect_override,
-		  pci_xen_swiotlb_detect,
+		  xen_swiotlb_detect,
 		  pci_swiotlb_init,
 		  pci_swiotlb_late_init);
 
diff --git a/arch/x86/xen/pci-swiotlb-xen.c b/arch/x86/xen/pci-swiotlb-xen.c
index 19ae3e4fe4e9..0a35657eeb85 100644
--- a/arch/x86/xen/pci-swiotlb-xen.c
+++ b/arch/x86/xen/pci-swiotlb-xen.c
@@ -21,12 +21,12 @@
 int xen_swiotlb __read_mostly;
 
 /*
- * pci_xen_swiotlb_detect - set xen_swiotlb to 1 if necessary
+ * xen_swiotlb_detect - set xen_swiotlb to 1 if necessary
  *
  * This returns non-zero if we are forced to use xen_swiotlb (by the boot
  * option).
  */
-int __init pci_xen_swiotlb_detect(void)
+int __init xen_swiotlb_detect(void)
 {
 
 	if (!xen_pv_domain())
@@ -90,7 +90,7 @@ int pci_xen_swiotlb_init_late(void)
 }
 EXPORT_SYMBOL_GPL(pci_xen_swiotlb_init_late);
 
-IOMMU_INIT_FINISH(pci_xen_swiotlb_detect,
+IOMMU_INIT_FINISH(xen_swiotlb_detect,
 		  NULL,
 		  pci_xen_swiotlb_init,
 		  NULL);
diff --git a/include/xen/interface/features.h b/include/xen/interface/features.h
index 6d1384abfbdf..f0d00bb0ac63 100644
--- a/include/xen/interface/features.h
+++ b/include/xen/interface/features.h
@@ -83,6 +83,13 @@
  */
 #define XENFEAT_linux_rsdp_unrestricted   15
 
+/*
+ * A direct-mapped (or 1:1 mapped) domain is a domain for which its
+ * local pages have gfn == mfn.
+ */
+#define XENFEAT_not_direct_mapped         16
+#define XENFEAT_direct_mapped             17
+
 #define XENFEAT_NR_SUBMAPS 1
 
 #endif /* __XEN_PUBLIC_FEATURES_H__ */
diff --git a/include/xen/swiotlb-xen.h b/include/xen/swiotlb-xen.h
index d5eaf9d682b8..6a2fc4e4b838 100644
--- a/include/xen/swiotlb-xen.h
+++ b/include/xen/swiotlb-xen.h
@@ -12,4 +12,10 @@ void xen_dma_sync_for_device(struct device *dev, dma_addr_t handle,
 extern int xen_swiotlb_init(int verbose, bool early);
 extern const struct dma_map_ops xen_swiotlb_dma_ops;
 
+#ifdef CONFIG_SWIOTLB_XEN
+extern int __init xen_swiotlb_detect(void);
+#else
+static inline int __init xen_swiotlb_detect(void) { return 0; }
+#endif
+
 #endif /* __LINUX_SWIOTLB_XEN_H */
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Feb 26 00:31:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 00:31:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90017.170174 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFR2N-0000DW-9b; Fri, 26 Feb 2021 00:31:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90017.170174; Fri, 26 Feb 2021 00:31:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFR2N-0000DP-6f; Fri, 26 Feb 2021 00:31:19 +0000
Received: by outflank-mailman (input) for mailman id 90017;
 Fri, 26 Feb 2021 00:31:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ECYH=H4=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lFR2L-0000DK-4N
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 00:31:17 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c8fcf235-cc93-4927-9ffb-a44e9af1f18a;
 Fri, 26 Feb 2021 00:31:16 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 0759964F1B;
 Fri, 26 Feb 2021 00:31:14 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c8fcf235-cc93-4927-9ffb-a44e9af1f18a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1614299475;
	bh=TLsaYaUNbkdsZYgpv6PcmHyRPTWbYAQZ+92LaE8FPdU=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=ZrMjXja7kOTV6uBjgJfkLBEa3Q23wzEv3xbm+D5W+sUUhVdMQ4RCGzeWdUcpmxWlR
	 jlGHB3L8K0fXuaSruoeUdgecbJUeuizqaa3xIY/JoBkmIuR6eEte8Mz61fy+ufr5y3
	 y5iTrO6QXW+Qkq/cNA0SMSwKEdfruGJIlxcl9lSdL2Lgh6ZZnyiLbm0eJ+v6avD2FA
	 s9NXLbvlQv51dQeZrUcXl0kLN7DJBo3kB1wDHGP+fY5dLwUS7Uhmr873NXtOZcFEdL
	 Hdyk/ApeFKBz5YkiMHeoxfcagdWVjMpFT8c07eEG91gIAYlWc2VH3jbjNda7tQb77A
	 HunHOIzRCLtOA==
Date: Thu, 25 Feb 2021 16:31:13 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Connor Davis <connojdavis@gmail.com>
cc: xen-devel@lists.xenproject.org, Bobby Eshleman <bobbyeshleman@gmail.com>, 
    Doug Goldstein <cardoe@cardoe.com>
Subject: Re: [PATCH for-next 6/6] automation: add container for riscv64
 builds
In-Reply-To: <a7829e62734a73993cd41cdbc18e1d16e4bb06d9.1614265718.git.connojdavis@gmail.com>
Message-ID: <alpine.DEB.2.21.2102251630382.3234@sstabellini-ThinkPad-T480s>
References: <cover.1614265718.git.connojdavis@gmail.com> <a7829e62734a73993cd41cdbc18e1d16e4bb06d9.1614265718.git.connojdavis@gmail.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 25 Feb 2021, Connor Davis wrote:
> Add a container for cross-compiling xen to riscv64.
> This just includes the cross-compiler and necessary packages for
> building xen itself (packages for tools, stubdoms, etc., can be
> added later).
> 
> To build xen in the container run the following:
> 
> $ make XEN_TARGET_ARCH=riscv64 SUBSYSTEMS=xen
> 
> Signed-off-by: Connor Davis <connojdavis@gmail.com>

The container build failed for me with:

Creating user git (git daemon user) with uid 977 and gid 977.
:: Running post-transaction hooks...
( 1/13) Creating system user accounts...
( 2/13) Updating journal message catalog...
( 3/13) Reloading system manager configuration...
  Skipped: Current root is not booted.
( 4/13) Updating udev hardware database...
( 5/13) Applying kernel sysctl settings...
  Skipped: Current root is not booted.
( 6/13) Creating temporary files...
/usr/lib/tmpfiles.d/journal-nocow.conf:26: Failed to resolve specifier: uninitialized /etc detected, skipping
All rules containing unresolvable specifiers will be skipped.
( 7/13) Reloading device manager configuration...
  Skipped: Device manager is not running.
( 8/13) Arming ConditionNeedsUpdate...
( 9/13) Rebuilding certificate stores...
(10/13) Reloading system bus configuration...
  Skipped: Current root is not booted.
(11/13) Warn about old perl modules
(12/13) Cleaning up package cache...
(13/13) Updating the info directory file...
Removing intermediate container 81e02adffada
 ---> 575bfaafc6af
Step 4/9 : RUN pacman --noconfirm -Syu     pixman     python     sh
 ---> Running in 9010bd7932b5
error: failed to initialize alpm library
(could not find or read directory: /var/lib/pacman/)
The command '/bin/sh -c pacman --noconfirm -Syu     pixman     python     sh' returned a non-zero code: 255


> ---
>  automation/build/archlinux/riscv64.dockerfile | 32 +++++++++++++++++++
>  automation/scripts/containerize               |  1 +
>  2 files changed, 33 insertions(+)
>  create mode 100644 automation/build/archlinux/riscv64.dockerfile
> 
> diff --git a/automation/build/archlinux/riscv64.dockerfile b/automation/build/archlinux/riscv64.dockerfile
> new file mode 100644
> index 0000000000..d94048b6c3
> --- /dev/null
> +++ b/automation/build/archlinux/riscv64.dockerfile
> @@ -0,0 +1,32 @@
> +FROM archlinux/base
> +LABEL maintainer.name="The Xen Project" \
> +      maintainer.email="xen-devel@lists.xenproject.org"
> +
> +# Packages needed for the build
> +RUN pacman --noconfirm -Syu \
> +    base-devel \
> +    gcc \
> +    git
> +
> +# Packages needed for QEMU
> +RUN pacman --noconfirm -Syu \
> +    pixman \
> +    python \
> +    sh
> +
> +# There is a regression in GDB that causes an assertion error
> +# when setting breakpoints, use this commit until it is fixed!
> +RUN git clone --recursive -j$(nproc) --progress https://github.com/riscv/riscv-gnu-toolchain && \
> +    cd riscv-gnu-toolchain/riscv-gdb && \
> +    git checkout 1dd588507782591478882a891f64945af9e2b86c && \
> +    cd  .. && \
> +    ./configure --prefix=/opt/riscv && \
> +    make linux -j$(nproc) && \
> +    rm -R /riscv-gnu-toolchain
> +
> +# Add compiler path
> +ENV PATH=/opt/riscv/bin/:${PATH}
> +
> +RUN useradd --create-home user
> +USER user
> +WORKDIR /build
> diff --git a/automation/scripts/containerize b/automation/scripts/containerize
> index da45baed4e..1901e8c0ef 100755
> --- a/automation/scripts/containerize
> +++ b/automation/scripts/containerize
> @@ -25,6 +25,7 @@ die() {
>  BASE="registry.gitlab.com/xen-project/xen"
>  case "_${CONTAINER}" in
>      _archlinux|_arch) CONTAINER="${BASE}/archlinux:current" ;;
> +    _riscv64) CONTAINER="${BASE}/archlinux:riscv64" ;;
>      _centos7) CONTAINER="${BASE}/centos:7" ;;
>      _centos72) CONTAINER="${BASE}/centos:7.2" ;;
>      _fedora) CONTAINER="${BASE}/fedora:29";;
> -- 
> 2.27.0
> 
> 


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 01:07:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 01:07:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90022.170189 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFRay-0005MP-2p; Fri, 26 Feb 2021 01:07:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90022.170189; Fri, 26 Feb 2021 01:07:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFRax-0005MI-Vg; Fri, 26 Feb 2021 01:07:03 +0000
Received: by outflank-mailman (input) for mailman id 90022;
 Fri, 26 Feb 2021 01:07:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ECYH=H4=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lFRaw-0005MD-RU
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 01:07:02 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4ae0193f-e224-4c71-9a87-56f6d90fdb51;
 Fri, 26 Feb 2021 01:07:02 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 9883564F21;
 Fri, 26 Feb 2021 01:06:52 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4ae0193f-e224-4c71-9a87-56f6d90fdb51
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1614301621;
	bh=zzkxH24CelpzugfovTPDIYBkRrfvBrnxQg0c0MkDvsU=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=bWut5ylG3bUFR1HXFQuXHQMhhCtxvjSWkqe90Osqv9k41HmF092UTJkAsVq5UwXlf
	 ZHNafQNGQXI7pNzBs5BPDYF2RC15CdS5nDpGWYNwSZ5XRjtxMUoNXiPdoVeSA7lda9
	 Io/M1ZxJso0BCFCeC3QspIG+TCWQPxviVdnIYYa+c/RRcYMhZWvH8xulxO5XIh4IPL
	 ZfeG6E05/AUIxEviWUlEoYYfg4isj/lfyNdEqBxMt9TPZxuQ7gu+85vrQeQnyQDAIV
	 lL8ING1aQ++e8fjdUU6+v69epEpfVFd2VryHWteDMUlSUcV9URRAy0E9xJx7p58JaE
	 qLXYfgUGYrFRw==
Date: Thu, 25 Feb 2021 17:06:46 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Andrew Cooper <andrew.cooper3@citrix.com>
cc: Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org, 
    Bobby Eshleman <bobbyeshleman@gmail.com>, 
    George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
    Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, 
    Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, 
    Tamas K Lengyel <tamas@tklengyel.com>, 
    Alexandru Isaila <aisaila@bitdefender.com>, 
    Petre Pircalabu <ppircalabu@bitdefender.com>
Subject: Re: [PATCH for-next 5/6] xen: Add files needed for minimal riscv
 build
In-Reply-To: <9b441529-c5a4-4299-1155-869bcdab06b0@citrix.com>
Message-ID: <alpine.DEB.2.21.2102251631220.3234@sstabellini-ThinkPad-T480s>
References: <cover.1614265718.git.connojdavis@gmail.com> <7652ce3486c026a3a9f7d850170ea81ba8a18bdb.1614265718.git.connojdavis@gmail.com> <9b441529-c5a4-4299-1155-869bcdab06b0@citrix.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-1727495200-1614299806=:3234"
Content-ID: <alpine.DEB.2.21.2102251638240.3234@sstabellini-ThinkPad-T480s>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1727495200-1614299806=:3234
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.21.2102251638241.3234@sstabellini-ThinkPad-T480s>

On Thu, 25 Feb 2021, Andrew Cooper wrote:
> On 25/02/2021 15:24, Connor Davis wrote:
> > Add the minimum code required to get xen to build with
> > XEN_TARGET_ARCH=riscv64. It is minimal in the sense that every file and
> > function added is required for a successful build, given the .config
> > generated from riscv64_defconfig. The function implementations are just
> > stubs; actual implementations will need to be added later.
> >
> > Signed-off-by: Connor Davis <connojdavis@gmail.com>

This is awesome, Connor! I am glad you are continuing this work and
I am really looking forward to having it in the tree.


> > ---
> >  config/riscv64.mk                        |   7 +
> >  xen/Makefile                             |   8 +-
> >  xen/arch/riscv/Kconfig                   |  54 ++++
> >  xen/arch/riscv/Kconfig.debug             |   0
> >  xen/arch/riscv/Makefile                  |  57 ++++
> >  xen/arch/riscv/README.source             |  19 ++
> >  xen/arch/riscv/Rules.mk                  |  13 +
> >  xen/arch/riscv/arch.mk                   |   7 +
> >  xen/arch/riscv/configs/riscv64_defconfig |  12 +
> >  xen/arch/riscv/delay.c                   |  16 +
> >  xen/arch/riscv/domain.c                  | 144 +++++++++
> >  xen/arch/riscv/domctl.c                  |  36 +++
> >  xen/arch/riscv/guestcopy.c               |  57 ++++
> >  xen/arch/riscv/head.S                    |   6 +
> >  xen/arch/riscv/irq.c                     |  78 +++++
> >  xen/arch/riscv/lib/Makefile              |   1 +
> >  xen/arch/riscv/lib/find_next_bit.c       | 284 +++++++++++++++++
> >  xen/arch/riscv/mm.c                      |  93 ++++++
> >  xen/arch/riscv/p2m.c                     | 150 +++++++++
> >  xen/arch/riscv/percpu.c                  |  17 +
> >  xen/arch/riscv/platforms/Kconfig         |  31 ++
> >  xen/arch/riscv/riscv64/asm-offsets.c     |  31 ++
> >  xen/arch/riscv/setup.c                   |  27 ++
> >  xen/arch/riscv/shutdown.c                |  28 ++
> >  xen/arch/riscv/smp.c                     |  35 +++
> >  xen/arch/riscv/smpboot.c                 |  34 ++
> >  xen/arch/riscv/sysctl.c                  |  33 ++
> >  xen/arch/riscv/time.c                    |  35 +++
> >  xen/arch/riscv/traps.c                   |  35 +++
> >  xen/arch/riscv/vm_event.c                |  39 +++
> >  xen/arch/riscv/xen.lds.S                 | 113 +++++++
> >  xen/drivers/char/serial.c                |   1 +
> >  xen/include/asm-riscv/altp2m.h           |  39 +++
> >  xen/include/asm-riscv/asm.h              |  77 +++++
> >  xen/include/asm-riscv/asm_defns.h        |  24 ++
> >  xen/include/asm-riscv/atomic.h           | 204 ++++++++++++
> >  xen/include/asm-riscv/bitops.h           | 331 ++++++++++++++++++++
> >  xen/include/asm-riscv/bug.h              |  54 ++++
> >  xen/include/asm-riscv/byteorder.h        |  16 +
> >  xen/include/asm-riscv/cache.h            |  24 ++
> >  xen/include/asm-riscv/cmpxchg.h          | 382 +++++++++++++++++++++++
> >  xen/include/asm-riscv/compiler_types.h   |  32 ++
> >  xen/include/asm-riscv/config.h           | 110 +++++++
> >  xen/include/asm-riscv/cpufeature.h       |  17 +
> >  xen/include/asm-riscv/csr.h              | 219 +++++++++++++
> >  xen/include/asm-riscv/current.h          |  47 +++
> >  xen/include/asm-riscv/debugger.h         |  15 +
> >  xen/include/asm-riscv/delay.h            |  15 +
> >  xen/include/asm-riscv/desc.h             |  12 +
> >  xen/include/asm-riscv/device.h           |  15 +
> >  xen/include/asm-riscv/div64.h            |  23 ++
> >  xen/include/asm-riscv/domain.h           |  50 +++
> >  xen/include/asm-riscv/event.h            |  42 +++
> >  xen/include/asm-riscv/fence.h            |  12 +
> >  xen/include/asm-riscv/flushtlb.h         |  34 ++
> >  xen/include/asm-riscv/grant_table.h      |  12 +
> >  xen/include/asm-riscv/guest_access.h     |  41 +++
> >  xen/include/asm-riscv/guest_atomics.h    |  60 ++++
> >  xen/include/asm-riscv/hardirq.h          |  27 ++
> >  xen/include/asm-riscv/hypercall.h        |  12 +
> >  xen/include/asm-riscv/init.h             |  42 +++
> >  xen/include/asm-riscv/io.h               | 283 +++++++++++++++++
> >  xen/include/asm-riscv/iocap.h            |  13 +
> >  xen/include/asm-riscv/iommu.h            |  46 +++
> >  xen/include/asm-riscv/irq.h              |  58 ++++
> >  xen/include/asm-riscv/mem_access.h       |   4 +
> >  xen/include/asm-riscv/mm.h               | 246 +++++++++++++++
> >  xen/include/asm-riscv/monitor.h          |  65 ++++
> >  xen/include/asm-riscv/nospec.h           |  25 ++
> >  xen/include/asm-riscv/numa.h             |  41 +++
> >  xen/include/asm-riscv/p2m.h              | 218 +++++++++++++
> >  xen/include/asm-riscv/page-bits.h        |  11 +
> >  xen/include/asm-riscv/page.h             |  73 +++++
> >  xen/include/asm-riscv/paging.h           |  15 +
> >  xen/include/asm-riscv/pci.h              |  31 ++
> >  xen/include/asm-riscv/percpu.h           |  33 ++
> >  xen/include/asm-riscv/processor.h        |  59 ++++
> >  xen/include/asm-riscv/random.h           |   9 +
> >  xen/include/asm-riscv/regs.h             |  23 ++
> >  xen/include/asm-riscv/setup.h            |  14 +
> >  xen/include/asm-riscv/smp.h              |  46 +++
> >  xen/include/asm-riscv/softirq.h          |  16 +
> >  xen/include/asm-riscv/spinlock.h         |  12 +
> >  xen/include/asm-riscv/string.h           |  28 ++
> >  xen/include/asm-riscv/sysregs.h          |  16 +
> >  xen/include/asm-riscv/system.h           |  99 ++++++
> >  xen/include/asm-riscv/time.h             |  31 ++
> >  xen/include/asm-riscv/trace.h            |  12 +
> >  xen/include/asm-riscv/types.h            |  60 ++++
> >  xen/include/asm-riscv/vm_event.h         |  55 ++++
> >  xen/include/asm-riscv/xenoprof.h         |  12 +
> >  xen/include/public/arch-riscv.h          | 183 +++++++++++
> >  xen/include/public/arch-riscv/hvm/save.h |  39 +++
> >  xen/include/public/hvm/save.h            |   2 +
> >  xen/include/public/pmu.h                 |   2 +
> >  xen/include/public/xen.h                 |   2 +
> >  xen/include/xen/domain.h                 |   1 +
> 
> Well - this is orders of magnitude more complicated than it ought to
> be.  An empty head.S doesn't (well - shouldn't) need the overwhelming
> majority of this.
> 
> Do you know how all of this is being pulled in?  Is it from attempting
> to compile common/ by any chance?

I'd love to see this patch split into several smaller patches. Ideally
one patch per header file or per group of headers. It is fine if it ends
up being a very large series. For patches imported from Linux, make sure
to say that they are coming from Linux commit XXX in the commit message.
It is going to make it a lot easier to ack them.

Also, I think we need to keep a concrete build target in mind: we don't
want to add a function stub that is not needed to build something. Make sure to
specify what you are building in patch 0.


I tried building this series. The container didn't build for me, so I
built the toolchain myself. I noticed a couple of things:

XEN_TARGET_ARCH=riscv64 works but XEN_TARGET_ARCH=riscv doesn't.
Maybe we should make XEN_TARGET_ARCH=riscv work too using the
xen/Makefile TARGET transformations.

It seems to be building quite a few things under common. However it
breaks with:

vm_event.c: In function 'vm_event_resume':
vm_event.c:428:17: error: implicit declaration of function 'vm_event_reset_vmtrace'; did you mean 'vm_event_resume'? [-Werror=implicit-function-declaration]
  428 |                 vm_event_reset_vmtrace(v);
      |                 ^~~~~~~~~~~~~~~~~~~~~~
      |                 vm_event_resume
vm_event.c:428:17: error: nested extern declaration of 'vm_event_reset_vmtrace' [-Werror=nested-externs]
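One plausible way past this (an assumption on my part; the actual hack
isn't shown here) is an empty inline stub in the RISC-V vm_event
header, since the architecture has no vmtrace support yet:

```c
/* Hypothetical stub for asm-riscv/vm_event.h: an empty inline
 * definition satisfies both the implicit-declaration and the
 * nested-extern errors until real vmtrace support exists. */
struct vcpu;                    /* opaque here; real type is in xen/sched.h */

static inline void vm_event_reset_vmtrace(struct vcpu *v)
{
    (void)v;                    /* no vmtrace state to reset yet */
}
```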


I got past that with a hack, but then I got this error:

ns16550.c: In function 'ns16550_init_preirq':
ns16550.c:353:42: error: implicit declaration of function 'ioremap'; did you mean 'ioremap_wc'? [-Werror=implicit-function-declaration]
  353 |         uart->remapped_io_base = (char *)ioremap(uart->io_base, uart->io_size);
      |                                          ^~~~~~~
      |                                          ioremap_wc
ns16550.c:353:42: error: nested extern declaration of 'ioremap' [-Werror=nested-externs]
ns16550.c:353:34: error: cast to pointer from integer of different size [-Werror=int-to-pointer-cast]
  353 |         uart->remapped_io_base = (char *)ioremap(uart->io_base, uart->io_size);
      |                                  ^
At top level:
ns16550.c:628:13: error: 'ns16550_init_common' defined but not used [-Werror=unused-function]
  628 | static void ns16550_init_common(struct ns16550 *uart)
      |             ^~~~~~~~~~~~~~~~~~~
ns16550.c:610:41: error: 'ns16550_driver' defined but not used [-Werror=unused-variable]
  610 | static struct uart_driver __read_mostly ns16550_driver = {
      |                                         ^~~~~~~~~~~~~~
ns16550.c:204:13: error: '__ns16550_poll' defined but not used [-Werror=unused-function]
  204 | static void __ns16550_poll(struct cpu_user_regs *regs)
      |             ^~~~~~~~~~~~~~
ns16550.c:76:3: error: 'ns16550_com' defined but not used [-Werror=unused-variable]
   76 | } ns16550_com[2] = { { 0 } };
      |   ^~~~~~~~~~~
cc1: all warnings being treated as errors


Which is strange given that ns16550.c shouldn't be built at all.  "QEMU
RISC-V virt machine support" is forcing CONFIG_HAS_NS16550=y on my
system. I chose "All Platforms" and CONFIG_HAS_NS16550=y went away. That
can't be right :-)


After that, I could actually finish the build:

sstabellini@sstabellini-ThinkPad-T480s:/local/repos/xen-upstream/xen$ find . -name \*.o
./common/spinlock.o
./common/irq.o
./common/sysctl.o
./common/sched/built_in.o
./common/sched/cpupool.o
./common/sched/credit2.o
./common/sched/core.o
./common/stop_machine.o
./common/gunzip.init.o
./common/multicall.o
./common/symbols.o
./common/rwlock.o
./common/event_channel.o
./common/guestcopy.o
./common/softirq.o
./common/virtual_region.o
./common/lib.o
./common/wait.o
./common/time.o
./common/notifier.o
./common/cpu.o
./common/page_alloc.o
./common/string.o
./common/vm_event.o
./common/tasklet.o
./common/version.o
./common/symbols-dummy.o
./common/memory.o
./common/warning.o
./common/xmalloc_tlsf.o
./common/kernel.o
./common/gunzip.o
./common/warning.init.o
./common/domain.o
./common/event_2l.o
./common/radix-tree.o
./common/timer.o
./common/built_in.o
./common/bitmap.o
./common/smp.o
./common/vsprintf.o
./common/keyhandler.o
./common/shutdown.o
./common/rcupdate.o
./common/rangeset.o
./common/vmap.o
./common/domctl.o
./common/preempt.o
./common/event_fifo.o
./common/monitor.o
./common/random.o
./lib/bsearch.o
./lib/rbtree.o
./lib/parse-size.o
./lib/ctype.o
./lib/ctors.o
./lib/list-sort.o
./lib/sort.o
./lib/built_in.o
./drivers/built_in.o
./drivers/char/serial.o
./drivers/char/built_in.o
./drivers/char/console.o
./arch/riscv/irq.o
./arch/riscv/sysctl.o
./arch/riscv/delay.o
./arch/riscv/lib/built_in.o
./arch/riscv/lib/find_next_bit.o
./arch/riscv/guestcopy.o
./arch/riscv/time.o
./arch/riscv/prelink.o
./arch/riscv/vm_event.o
./arch/riscv/setup.o
./arch/riscv/domain.o
./arch/riscv/traps.o
./arch/riscv/built_in.o
./arch/riscv/smp.o
./arch/riscv/mm.o
./arch/riscv/percpu.o
./arch/riscv/p2m.o
./arch/riscv/shutdown.o
./arch/riscv/head.o
./arch/riscv/domctl.o
./arch/riscv/smpboot.o

Which is absolutely astounding! Great job! I didn't imagine you already
managed to build the whole of common/!


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 02:54:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 02:54:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90029.170211 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFTGa-0008Rv-4u; Fri, 26 Feb 2021 02:54:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90029.170211; Fri, 26 Feb 2021 02:54:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFTGZ-0008Rn-VS; Fri, 26 Feb 2021 02:54:07 +0000
Received: by outflank-mailman (input) for mailman id 90029;
 Fri, 26 Feb 2021 02:54:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rwhq=H4=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lFTGY-0008Ri-AC
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 02:54:06 +0000
Received: from mail-oi1-x241.google.com (unknown [2607:f8b0:4864:20::241])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ece3d12b-4cb2-4ea1-9927-50937eb35806;
 Fri, 26 Feb 2021 02:54:05 +0000 (UTC)
Received: by mail-oi1-x241.google.com with SMTP id a13so8380764oid.0
 for <xen-devel@lists.xenproject.org>; Thu, 25 Feb 2021 18:54:05 -0800 (PST)
Received: from thewall (142-79-211-230.starry-inc.net. [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id l18sm1398983oii.56.2021.02.25.18.54.04
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 25 Feb 2021 18:54:04 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ece3d12b-4cb2-4ea1-9927-50937eb35806
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=P4OdgG6z7Av8k6lGMJPzVygrES10A/qyrnU/AHVrDFs=;
        b=ADg5Nv/HDtjmdlnzW3eVmOFKXZBbFKXd0UCM5nzSH3inqR6VHvGlzX5nMnMObZT83H
         mJiDxU83hdXzVMb/4hLi8Pwv+AmrCgt4hQ61qp1aZ0TDlsMm7jHa6uvT6RN1tt+xHh8V
         ulRmah5nI/ODKbX3ivcegDCPNLjaDWD9bhK0woky/puvClKwsdlGczTg7ofbVlfM9LEI
         KQFyLpxJQ+5I81A4RLRhZIsUyVGzku+lZ4gDqidrE4JKKWndu3CaaCfX6e+XEQEXM8Co
         1ZcirJDcSgUS7gZPPZ1FSFqd0XqgoqSftQeGh/YPpOAft1Vtx/4cKSnFHS8vKlPJfIM9
         IBFg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to;
        bh=P4OdgG6z7Av8k6lGMJPzVygrES10A/qyrnU/AHVrDFs=;
        b=cMEIrhaFnMBuA4DlctG6925m1I5ldp8I9Vb1dLYcsGg/Ig7i1YbJTVOpkohp3PbHdK
         ZOYYEZBi+pHkpwmvZtXKNxlWQNX/D+YQ2jUb4g+Q6vrglXXdkbdwwfuu2luuM9n4hd3b
         Wo0qsY/EpGHOYjhYM6XyEcQjCewlP0+PujEp89amzGZ3NmbEEdBBNbOHXStS63E85H9E
         1tMsl4dd5O0X9BGbhvCDOMo9t48Hw+QjqETae7BIh+7rRFS+0iH0uohuiabSYhvEdYIc
         plll5a40s7BHNPAwcKTfhLUiz83s1xW9pqdX9ltw3jXeh+ycK356vYw/JYB4dvXlSiaX
         GS0Q==
X-Gm-Message-State: AOAM531dFsxmDM7McK0Yqh5gyKhwixbYrbuENw2mu3arPwu+WqxbVKty
	z2OU6YQxQ2zolj/QQEA9lEg=
X-Google-Smtp-Source: ABdhPJw1hbw6xPP+wGBy3hN2q6gzdFJTscKcKI0DmTEj2zIcLcy0TjcKpnS1HVtSJR2V68/pJNPNWw==
X-Received: by 2002:a05:6808:341:: with SMTP id j1mr629180oie.19.1614308044693;
        Thu, 25 Feb 2021 18:54:04 -0800 (PST)
Date: Thu, 25 Feb 2021 19:54:02 -0700
From: Connor Davis <connojdavis@gmail.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH for-next 2/6] xen/common: Guard iommu symbols with
 CONFIG_HAS_PASSTHROUGH
Message-ID: <20210226025402.ryuxpicaqujmfxbu@thewall>
References: <cover.1614265718.git.connojdavis@gmail.com>
 <444658f690c81b9e93c2c709fa1032c049646763.1614265718.git.connojdavis@gmail.com>
 <02f32a31-1c23-46af-54a7-7d44b5ffb613@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <02f32a31-1c23-46af-54a7-7d44b5ffb613@suse.com>

On Thu, Feb 25, 2021 at 04:45:15PM +0100, Jan Beulich wrote:
> On 25.02.2021 16:24, Connor Davis wrote:
> > --- a/xen/common/domain.c
> > +++ b/xen/common/domain.c
> > @@ -501,7 +501,9 @@ static int sanitise_domain_config(struct xen_domctl_createdomain *config)
> >              return -EINVAL;
> >          }
> >  
> > +#ifdef CONFIG_HAS_PASSTHROUGH
> >          if ( !iommu_enabled )
> > +#endif
> >          {
> >              dprintk(XENLOG_INFO, "IOMMU requested but not available\n");
> >              return -EINVAL;
> 
> Where possible - to avoid such #ifdef-ary - the symbol instead should
> get #define-d to a sensible value ("false" in this case) in the header.
> The other cases here may indeed need to remain as you have them.
> 
Do you prefer the #define in the same function near the if or
somewhere near the top of the file?

Connor


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 03:01:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 03:01:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90032.170222 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFTNq-000188-SD; Fri, 26 Feb 2021 03:01:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90032.170222; Fri, 26 Feb 2021 03:01:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFTNq-000181-PD; Fri, 26 Feb 2021 03:01:38 +0000
Received: by outflank-mailman (input) for mailman id 90032;
 Fri, 26 Feb 2021 03:01:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rwhq=H4=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lFTNq-00017w-02
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 03:01:38 +0000
Received: from mail-oi1-x22e.google.com (unknown [2607:f8b0:4864:20::22e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a031f4d3-db69-4b53-ad69-665e2ab7306e;
 Fri, 26 Feb 2021 03:01:37 +0000 (UTC)
Received: by mail-oi1-x22e.google.com with SMTP id 18so8342943oiz.7
 for <xen-devel@lists.xenproject.org>; Thu, 25 Feb 2021 19:01:37 -0800 (PST)
Received: from thewall (142-79-211-230.starry-inc.net. [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id d26sm382988oos.32.2021.02.25.19.01.36
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 25 Feb 2021 19:01:36 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a031f4d3-db69-4b53-ad69-665e2ab7306e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=iINldmOWG2GYwKGJ8xJW8Kn/cIvnI4L/9E7gMqDXLGo=;
        b=Eko+GlZUJ4kImAI3OsL7br6qICAJb/CbZqo61bDXHqomMgOHQG6DpjKVamM0m58EHp
         puMGLkIlRcvd7DWM0ISfAxDABRx4PP5Qse1LG0htr5BSTIXWYRmGAocwiueO9l8I2ule
         4szYXjVBBjxD+uh3+lyood+qLejgRPYRmnlb1ywib3R/Xfv022a5RhSJFxSukzIN3Ho6
         asFSElfuOERGv9yfc708TsVfLB3TBG/q5ZHi9b667HhJcYFfCZKc4ypnuamaWkHForkz
         oe+pIhoMX/ij3Gb41dEfLbvxL3InTL7k18+h/Rm3tM9QqXniTGFpUGcUxHShOhT5rcsX
         OX5g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to;
        bh=iINldmOWG2GYwKGJ8xJW8Kn/cIvnI4L/9E7gMqDXLGo=;
        b=i71dydCcdnKxv+iokXMwAskm87ivehuSOArlFBCLDvyqZsTCblq5RcoBas9vg7p5x5
         WFnrioZcBzj9BIpS7rqxknKS9Q+JJEKBiFymnWIUr4h3jG3vSzRtNYQWLJMWhayWTQ1G
         FpTLupzFT+VCWgwdO77LvD44jTeUVavb7n2G9Pwiie8wLGo4WJhZ4lMm/4VzYtnYJhjw
         qcj69ts9FgJuhRUkuySDwxaBiyP1/6r8On/gAeRxd+Zs+Lkjp/NQvUzKm7s+bpnFBFYX
         jVmbQErpjzyMy3cUquMRf8y6rEV91T0JjnS/PWCrqehlsnN+W9+x33LIzg9QJ4mdvADN
         9dRQ==
X-Gm-Message-State: AOAM533+7tJAPrYmLhMLEk0G7ATtT38rNFnlUw9zKks/sFnSKOu3/rWa
	SM2MO7dHfWd9Vfnyuy5mk+M=
X-Google-Smtp-Source: ABdhPJx95cbTrIArfJ5MeJitpk/4PUk3B7Uh9rRWj77eb3j0vh3216/q5KYw5eCWpS3anxwUNog8wg==
X-Received: by 2002:a05:6808:130d:: with SMTP id y13mr628395oiv.167.1614308496737;
        Thu, 25 Feb 2021 19:01:36 -0800 (PST)
Date: Thu, 25 Feb 2021 20:01:34 -0700
From: Connor Davis <connojdavis@gmail.com>
To: Bob Eshleman <bobbyeshleman@gmail.com>
Cc: xen-devel@lists.xenproject.org,
	George Dunlap <george.dunlap@citrix.com>,
	Dario Faggioli <dfaggioli@suse.com>,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH for-next 3/6] xen/sched: Fix build when NR_CPUS == 1
Message-ID: <20210226030134.lj3zi3duf33shoz7@thewall>
References: <cover.1614265718.git.connojdavis@gmail.com>
 <d0922adc698ab76223d76a0a7f328a72cedf00ad.1614265718.git.connojdavis@gmail.com>
 <71840112-790f-24b9-115c-9030c8521b65@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <71840112-790f-24b9-115c-9030c8521b65@gmail.com>

On Thu, Feb 25, 2021 at 02:55:45PM -0800, Bob Eshleman wrote:
> On 2/25/21 7:24 AM, Connor Davis wrote:
> > Return from cpu_schedule_up when either cpu is 0 or
> > NR_CPUS == 1. This fixes the following:
> > 
> > core.c: In function 'cpu_schedule_up':
> > core.c:2769:19: error: array subscript 1 is above array bounds
> > of 'struct vcpu *[1]' [-Werror=array-bounds]
> >  2769 |     if ( idle_vcpu[cpu] == NULL )
> >       |
> > 
> > Signed-off-by: Connor Davis <connojdavis@gmail.com>
> > ---
> >  xen/common/sched/core.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
> > index 9745a77eee..f5ec65bf9b 100644
> > --- a/xen/common/sched/core.c
> > +++ b/xen/common/sched/core.c
> > @@ -2763,7 +2763,7 @@ static int cpu_schedule_up(unsigned int cpu)
> >      cpumask_set_cpu(cpu, &sched_res_mask);
> >  
> >      /* Boot CPU is dealt with later in scheduler_init(). */
> > -    if ( cpu == 0 )
> > +    if ( cpu == 0 || NR_CPUS == 1 )
> >          return 0;
> >  
> >      if ( idle_vcpu[cpu] == NULL )
> > 
> 
> Interesting.  I wonder when this changed in GCC.
> 
> I haven't yet seen this issue compiling with:
>   NR_CPUS=1
>   ARCH=riscv64
>   riscv64-unknown-linux-gnu-gcc (GCC) 10.1.0
> 
> Which version of GCC are you seeing emit this?

The one cloned from github.com/riscv/riscv-gnu-toolchain
in the docker container uses 10.2.0

    Connor


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 03:08:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 03:08:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90036.170235 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFTUc-0001JR-MC; Fri, 26 Feb 2021 03:08:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90036.170235; Fri, 26 Feb 2021 03:08:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFTUc-0001JK-Hb; Fri, 26 Feb 2021 03:08:38 +0000
Received: by outflank-mailman (input) for mailman id 90036;
 Fri, 26 Feb 2021 03:08:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rwhq=H4=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lFTUa-0001JF-UW
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 03:08:37 +0000
Received: from mail-oi1-x22e.google.com (unknown [2607:f8b0:4864:20::22e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e92d2932-b834-483c-8272-89de9c9e0b91;
 Fri, 26 Feb 2021 03:08:36 +0000 (UTC)
Received: by mail-oi1-x22e.google.com with SMTP id w69so8410526oif.1
 for <xen-devel@lists.xenproject.org>; Thu, 25 Feb 2021 19:08:36 -0800 (PST)
Received: from thewall (142-79-211-230.starry-inc.net. [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id 7sm1569685otd.46.2021.02.25.19.08.35
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 25 Feb 2021 19:08:35 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e92d2932-b834-483c-8272-89de9c9e0b91
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=u2FzwyCxUmpQHvHWXPD7VYrKvvcH9DULb+HSsylESE0=;
        b=G57tr7WocuA+HVLqfHi4Gt78eb+XUH/fvStV2yOEjF40RB3HG/iCaTgd0j8y1lvgj2
         GsGwm+h0xV957FtSatyiGK+yMqM8NZpxPfFM9cRY9cyJL8O/l4oGOVI9FTNmQ2YaZqPE
         qd/e6rPErwRISBB/uD+pVxGwCTTcHQJQ6vWxziFL12GEeX7IBaz3qx09RMe939+g6Sy0
         5ft33ILiaLzW6fGp0Mdymfydc7OMxaT10ekKT9LFSdTp2z8EJU8uqkkCuCE7xogj/xBO
         lrK6IDOGihvQlajV2d/4nky5FXlzlrqv1UqjzhUMqFP20dA7iHiuCnhFqEFEjd5ZOF0t
         03hw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to;
        bh=u2FzwyCxUmpQHvHWXPD7VYrKvvcH9DULb+HSsylESE0=;
        b=oKEk8l0fnYaHRONkrBVWVx1GGWyXWJ7ciCd95LZjCb6VDKFMFLp+AkFfHUP/uclU2q
         tucrCgyJ3e4dR40jRJfAcepBZ9C8/8cY/o5dmVLZ331VlnYiQDPqQ10967vhHy44DyQn
         44qqn8p1wWJn68jFgNV6sRIy8fn1PTE3EdKsOHcOAc4dPGChPsk0Wvkx3fHe1nG8PEjE
         oHN11ziXmbm0v1vOeO6nCIoyPsOR7+FLS7d8EXVx+ezbG3GwwjN3xhFOu8oMo0MTCxzn
         utjJpHJwJaozQPo7nerd285DfpFGuL3EGJHB4HD3hH37Hl+bMfoJsTyfRlHAaWQ4kWQ0
         G5UA==
X-Gm-Message-State: AOAM5317CdM4wqx3ud26ekq7o/gKQyUlqKdOMXJX2ZYQqOIhbI/q4YVJ
	IMvz2Fhozzhb6019+knNmPE=
X-Google-Smtp-Source: ABdhPJzQUAqZ9ttedxveHAm0mlZ/D7/A3GWf+WoiWjSeDKfVEl6STsz4pV6LquIQDSm3Gy+SNCgl7w==
X-Received: by 2002:a54:4803:: with SMTP id j3mr669494oij.124.1614308915821;
        Thu, 25 Feb 2021 19:08:35 -0800 (PST)
Date: Thu, 25 Feb 2021 20:08:33 -0700
From: Connor Davis <connojdavis@gmail.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Dario Faggioli <dfaggioli@suse.com>, xen-devel@lists.xenproject.org
Subject: Re: [PATCH for-next 3/6] xen/sched: Fix build when NR_CPUS == 1
Message-ID: <20210226030833.uugfojf5kkxhlpr7@thewall>
References: <cover.1614265718.git.connojdavis@gmail.com>
 <d0922adc698ab76223d76a0a7f328a72cedf00ad.1614265718.git.connojdavis@gmail.com>
 <b4ad0f83-e071-49f8-17a8-7fec0e226b9a@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <b4ad0f83-e071-49f8-17a8-7fec0e226b9a@suse.com>

On Thu, Feb 25, 2021 at 04:50:02PM +0100, Jan Beulich wrote:
> On 25.02.2021 16:24, Connor Davis wrote:
> > Return from cpu_schedule_up when either cpu is 0 or
> > NR_CPUS == 1. This fixes the following:
> > 
> > core.c: In function 'cpu_schedule_up':
> > core.c:2769:19: error: array subscript 1 is above array bounds
> > of 'struct vcpu *[1]' [-Werror=array-bounds]
> >  2769 |     if ( idle_vcpu[cpu] == NULL )
> >       |
> > 
> > Signed-off-by: Connor Davis <connojdavis@gmail.com>
> > ---
> >  xen/common/sched/core.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
> > index 9745a77eee..f5ec65bf9b 100644
> > --- a/xen/common/sched/core.c
> > +++ b/xen/common/sched/core.c
> > @@ -2763,7 +2763,7 @@ static int cpu_schedule_up(unsigned int cpu)
> >      cpumask_set_cpu(cpu, &sched_res_mask);
> >  
> >      /* Boot CPU is dealt with later in scheduler_init(). */
> > -    if ( cpu == 0 )
> > +    if ( cpu == 0 || NR_CPUS == 1 )
> >          return 0;
> >  
> >      if ( idle_vcpu[cpu] == NULL )
> 
> I'm not convinced a compiler warning is due here, and in turn
> I'm not sure we want/need to work around this the way you do.

It seems like a reasonable warning to me, but of course I'm open
to dealing with it in a different way.
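A minimal model of what the compiler sees (my approximation, not a
confirmed reproducer for GCC 10.2): with NR_CPUS == 1 the only valid
index into idle_vcpu is 0, so any path that reaches idle_vcpu[cpu]
with cpu >= 1 trips -Warray-bounds. Making the bound visible to the
compiler is one alternative to spelling out NR_CPUS == 1:

```c
#include <stddef.h>

#define NR_CPUS 1
struct vcpu;
static struct vcpu *idle_vcpu[NR_CPUS];

/* Sketch of an alternative guard: cpu >= NR_CPUS cannot occur at
 * runtime, but stating it lets the compiler prove the array access
 * below is in bounds, with the same effect as the NR_CPUS == 1
 * special case when the array has a single element. */
static int cpu_schedule_up(unsigned int cpu)
{
    /* Boot CPU is dealt with later in scheduler_init(). */
    if ( cpu == 0 || cpu >= NR_CPUS )
        return 0;

    if ( idle_vcpu[cpu] == NULL )
        return -1;              /* idle vcpu allocation would go here */

    return 0;
}
```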

> First question is whether that's just a specific compiler
> version that's flawed. If it's not just a special case (e.g.

The docker container uses gcc 10.2.0 from
https://github.com/riscv/riscv-gnu-toolchain

> some unreleased version) we may want to think of possible
> alternatives - the addition looks really odd to me.
> 
> Jan

    Connor


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 03:30:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 03:30:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90040.170250 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFTpG-0003by-Fj; Fri, 26 Feb 2021 03:29:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90040.170250; Fri, 26 Feb 2021 03:29:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFTpG-0003br-Bx; Fri, 26 Feb 2021 03:29:58 +0000
Received: by outflank-mailman (input) for mailman id 90040;
 Fri, 26 Feb 2021 03:29:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rwhq=H4=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lFTpE-0003bm-MA
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 03:29:56 +0000
Received: from mail-ot1-x330.google.com (unknown [2607:f8b0:4864:20::330])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a1304e1f-9fa0-4aa4-9fcb-c9c700ddf45b;
 Fri, 26 Feb 2021 03:29:55 +0000 (UTC)
Received: by mail-ot1-x330.google.com with SMTP id x9so3524637otp.8
 for <xen-devel@lists.xenproject.org>; Thu, 25 Feb 2021 19:29:55 -0800 (PST)
Received: from thewall (142-79-211-230.starry-inc.net. [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id v8sm442279oto.6.2021.02.25.19.29.53
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 25 Feb 2021 19:29:54 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a1304e1f-9fa0-4aa4-9fcb-c9c700ddf45b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:content-transfer-encoding:in-reply-to;
        bh=W1FQI9aWKNKJtvIyZ8F4EorqAzKoWuW76K5qdY1SWF8=;
        b=GQTx874fmSLocK1B45MefWBuW09+NMhrFEZCuja1QlGRUZ4zptlKTKe6s7wzwEnN4U
         tvkxnzfc97oHrABSNnAMN8aCdHK3emuFscoi+fO7bdGmy+QgrOEXpcgYm5OCvj3Org1C
         xkR8JdaZ+tpLE19yntZLMkziTmZq7JMSqjQKLbNwt1Q3fJShvH+lHSpQemS1Q297LIPC
         6DsA6egcrBQjt/Hrx12ygX62c0YgixoyrNXEqrzxx2Bh/TrO8boxIeWnc4IYC2FUF33c
         BBW4PiPL+K2K204KJhbJXqE0JWkNXa2IpiX5QjibfqjzvODz7HTSwGSP4kAJ5QnHCjU1
         ayCw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:content-transfer-encoding
         :in-reply-to;
        bh=W1FQI9aWKNKJtvIyZ8F4EorqAzKoWuW76K5qdY1SWF8=;
        b=Ldt3xYUCZqC9cVYpGKVe2Zvr+drNFr9/801X7CzOO28wg4g6NRCwly+JcVm2QaGJaL
         BAqYN+HUKDXIq29/boRNYWXstkJ4Jzb5BwIObiHe6u3WKggOHWw/WCUMA9z23SM+0api
         cxkQoJiRgn1YYD3oMe2E9BFhpt80eLgWQu0RFZxDQCk6B5NFbgtiVxvMr6QpT0v0t+i9
         0tfdG4vrL4uzXoJgOIStKGVfhF/fcwNJIAnvVU/909F34xd8u0yyHchpROi07zY2M7DE
         8vnCJoTs5VZfjuSObmJ5w8J7vHSln7TYYy13bmYDQVnALvKsTKJ07ZBBWFJmUvwpthN3
         MCIw==
X-Gm-Message-State: AOAM531e80xV8YZSBwkNXaBRp/ty4P4WUbM8kN6GfSg5hGV98hT2X84N
	qzAWF+EmCy68bQK7zRq41x0=
X-Google-Smtp-Source: ABdhPJzFyUBVWo3Iu7xW5KbObrIxZndKMMWSg+1yH8YOZ/oKWSTMCj2cgtF/a1ZlHFzoK1H5Rt2DnA==
X-Received: by 2002:a05:6830:18d9:: with SMTP id v25mr758451ote.231.1614310194590;
        Thu, 25 Feb 2021 19:29:54 -0800 (PST)
Date: Thu, 25 Feb 2021 20:29:52 -0700
From: Connor Davis <connojdavis@gmail.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: xen-devel@lists.xenproject.org,
	Bobby Eshleman <bobbyeshleman@gmail.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	Tamas K Lengyel <tamas@tklengyel.com>,
	Alexandru Isaila <aisaila@bitdefender.com>,
	Petre Pircalabu <ppircalabu@bitdefender.com>
Subject: Re: [PATCH for-next 5/6] xen: Add files needed for minimal riscv
 build
Message-ID: <20210226032952.dbi2rdkbqa533yme@thewall>
References: <cover.1614265718.git.connojdavis@gmail.com>
 <7652ce3486c026a3a9f7d850170ea81ba8a18bdb.1614265718.git.connojdavis@gmail.com>
 <9b441529-c5a4-4299-1155-869bcdab06b0@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <9b441529-c5a4-4299-1155-869bcdab06b0@citrix.com>

On Thu, Feb 25, 2021 at 11:14:53PM +0000, Andrew Cooper wrote:
> On 25/02/2021 15:24, Connor Davis wrote:
> > Add the minimum code required to get xen to build with
> > XEN_TARGET_ARCH=riscv64. It is minimal in the sense that every file and
> > function added is required for a successful build, given the .config
> > generated from riscv64_defconfig. The function implementations are just
> > stubs; actual implementations will need to be added later.
> >
> > Signed-off-by: Connor Davis <connojdavis@gmail.com>
> > ---
> >  config/riscv64.mk                        |   7 +
> >  xen/Makefile                             |   8 +-
> >  xen/arch/riscv/Kconfig                   |  54 ++++
> >  xen/arch/riscv/Kconfig.debug             |   0
> >  xen/arch/riscv/Makefile                  |  57 ++++
> >  xen/arch/riscv/README.source             |  19 ++
> >  xen/arch/riscv/Rules.mk                  |  13 +
> >  xen/arch/riscv/arch.mk                   |   7 +
> >  xen/arch/riscv/configs/riscv64_defconfig |  12 +
> >  xen/arch/riscv/delay.c                   |  16 +
> >  xen/arch/riscv/domain.c                  | 144 +++++++++
> >  xen/arch/riscv/domctl.c                  |  36 +++
> >  xen/arch/riscv/guestcopy.c               |  57 ++++
> >  xen/arch/riscv/head.S                    |   6 +
> >  xen/arch/riscv/irq.c                     |  78 +++++
> >  xen/arch/riscv/lib/Makefile              |   1 +
> >  xen/arch/riscv/lib/find_next_bit.c       | 284 +++++++++++++++++
> >  xen/arch/riscv/mm.c                      |  93 ++++++
> >  xen/arch/riscv/p2m.c                     | 150 +++++++++
> >  xen/arch/riscv/percpu.c                  |  17 +
> >  xen/arch/riscv/platforms/Kconfig         |  31 ++
> >  xen/arch/riscv/riscv64/asm-offsets.c     |  31 ++
> >  xen/arch/riscv/setup.c                   |  27 ++
> >  xen/arch/riscv/shutdown.c                |  28 ++
> >  xen/arch/riscv/smp.c                     |  35 +++
> >  xen/arch/riscv/smpboot.c                 |  34 ++
> >  xen/arch/riscv/sysctl.c                  |  33 ++
> >  xen/arch/riscv/time.c                    |  35 +++
> >  xen/arch/riscv/traps.c                   |  35 +++
> >  xen/arch/riscv/vm_event.c                |  39 +++
> >  xen/arch/riscv/xen.lds.S                 | 113 +++++++
> >  xen/drivers/char/serial.c                |   1 +
> >  xen/include/asm-riscv/altp2m.h           |  39 +++
> >  xen/include/asm-riscv/asm.h              |  77 +++++
> >  xen/include/asm-riscv/asm_defns.h        |  24 ++
> >  xen/include/asm-riscv/atomic.h           | 204 ++++++++++++
> >  xen/include/asm-riscv/bitops.h           | 331 ++++++++++++++++++++
> >  xen/include/asm-riscv/bug.h              |  54 ++++
> >  xen/include/asm-riscv/byteorder.h        |  16 +
> >  xen/include/asm-riscv/cache.h            |  24 ++
> >  xen/include/asm-riscv/cmpxchg.h          | 382 +++++++++++++++++++++++
> >  xen/include/asm-riscv/compiler_types.h   |  32 ++
> >  xen/include/asm-riscv/config.h           | 110 +++++++
> >  xen/include/asm-riscv/cpufeature.h       |  17 +
> >  xen/include/asm-riscv/csr.h              | 219 +++++++++++++
> >  xen/include/asm-riscv/current.h          |  47 +++
> >  xen/include/asm-riscv/debugger.h         |  15 +
> >  xen/include/asm-riscv/delay.h            |  15 +
> >  xen/include/asm-riscv/desc.h             |  12 +
> >  xen/include/asm-riscv/device.h           |  15 +
> >  xen/include/asm-riscv/div64.h            |  23 ++
> >  xen/include/asm-riscv/domain.h           |  50 +++
> >  xen/include/asm-riscv/event.h            |  42 +++
> >  xen/include/asm-riscv/fence.h            |  12 +
> >  xen/include/asm-riscv/flushtlb.h         |  34 ++
> >  xen/include/asm-riscv/grant_table.h      |  12 +
> >  xen/include/asm-riscv/guest_access.h     |  41 +++
> >  xen/include/asm-riscv/guest_atomics.h    |  60 ++++
> >  xen/include/asm-riscv/hardirq.h          |  27 ++
> >  xen/include/asm-riscv/hypercall.h        |  12 +
> >  xen/include/asm-riscv/init.h             |  42 +++
> >  xen/include/asm-riscv/io.h               | 283 +++++++++++++++++
> >  xen/include/asm-riscv/iocap.h            |  13 +
> >  xen/include/asm-riscv/iommu.h            |  46 +++
> >  xen/include/asm-riscv/irq.h              |  58 ++++
> >  xen/include/asm-riscv/mem_access.h       |   4 +
> >  xen/include/asm-riscv/mm.h               | 246 +++++++++++++++
> >  xen/include/asm-riscv/monitor.h          |  65 ++++
> >  xen/include/asm-riscv/nospec.h           |  25 ++
> >  xen/include/asm-riscv/numa.h             |  41 +++
> >  xen/include/asm-riscv/p2m.h              | 218 +++++++++++++
> >  xen/include/asm-riscv/page-bits.h        |  11 +
> >  xen/include/asm-riscv/page.h             |  73 +++++
> >  xen/include/asm-riscv/paging.h           |  15 +
> >  xen/include/asm-riscv/pci.h              |  31 ++
> >  xen/include/asm-riscv/percpu.h           |  33 ++
> >  xen/include/asm-riscv/processor.h        |  59 ++++
> >  xen/include/asm-riscv/random.h           |   9 +
> >  xen/include/asm-riscv/regs.h             |  23 ++
> >  xen/include/asm-riscv/setup.h            |  14 +
> >  xen/include/asm-riscv/smp.h              |  46 +++
> >  xen/include/asm-riscv/softirq.h          |  16 +
> >  xen/include/asm-riscv/spinlock.h         |  12 +
> >  xen/include/asm-riscv/string.h           |  28 ++
> >  xen/include/asm-riscv/sysregs.h          |  16 +
> >  xen/include/asm-riscv/system.h           |  99 ++++++
> >  xen/include/asm-riscv/time.h             |  31 ++
> >  xen/include/asm-riscv/trace.h            |  12 +
> >  xen/include/asm-riscv/types.h            |  60 ++++
> >  xen/include/asm-riscv/vm_event.h         |  55 ++++
> >  xen/include/asm-riscv/xenoprof.h         |  12 +
> >  xen/include/public/arch-riscv.h          | 183 +++++++++++
> >  xen/include/public/arch-riscv/hvm/save.h |  39 +++
> >  xen/include/public/hvm/save.h            |   2 +
> >  xen/include/public/pmu.h                 |   2 +
> >  xen/include/public/xen.h                 |   2 +
> >  xen/include/xen/domain.h                 |   1 +
> 
> Well - this is orders of magnitude more complicated than it ought to
> be.  An empty head.S doesn't (well - shouldn't) need the overwhelming
> majority of this.
> 
> Do you know how all of this is being pulled in?  Is it from attempting
> to compile common/ by any chance?
> 

Yes, IIRC most of it is pulled in from common code. If it would be
helpful, I could try building without common/ to see what the
difference is.

> Now is also an excellent opportunity to nuke the x86isms which have
> escaped into common code (debugger and xenoprof in particular), and
> rethink some of our common/arch split.
> 
> When it comes to header files specifically, I want to start using
> xen/arch/$ARCH/include/asm/ and retrofit this to x86 and ARM.  It has
> two important properties - first, that you don't need to symlink the
> tree to make compilation work, and second that patches touching multiple
> architectures have hunks ordered in a more logical way.

+1 for removing the symlink; that would have saved me an embarrassingly
large amount of time when I started this work and failed to clean up
the existing x86 symlink :).

    Connor


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 03:36:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 03:36:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90044.170265 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFTvy-0004WF-7X; Fri, 26 Feb 2021 03:36:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90044.170265; Fri, 26 Feb 2021 03:36:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFTvy-0004W8-4O; Fri, 26 Feb 2021 03:36:54 +0000
Received: by outflank-mailman (input) for mailman id 90044;
 Fri, 26 Feb 2021 03:36:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rwhq=H4=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lFTvx-0004W3-D3
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 03:36:53 +0000
Received: from mail-ot1-x32c.google.com (unknown [2607:f8b0:4864:20::32c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3b6c8a3e-a57e-4933-b36c-32aa2e14c796;
 Fri, 26 Feb 2021 03:36:52 +0000 (UTC)
Received: by mail-ot1-x32c.google.com with SMTP id b16so7946029otq.1
 for <xen-devel@lists.xenproject.org>; Thu, 25 Feb 2021 19:36:52 -0800 (PST)
Received: from thewall (142-79-211-230.starry-inc.net. [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id k18sm1605251ots.24.2021.02.25.19.36.51
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 25 Feb 2021 19:36:51 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3b6c8a3e-a57e-4933-b36c-32aa2e14c796
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=IqFdziwHDxVIOjGZX/oKUvNBbcd3HE80EMON7TdM10Q=;
        b=H4njjzxla+7jFyKNzQUp1uBAOvrWjqk9z1Neh9YzVlpWSJ9fU6sVHxkUac4FQZzPwH
         eosPqVotvN6xGC6eUuJc3ihqRTCoGMZ8zszqZpeTLOo+qjYf6KLtd65n7fxMYBYxtgw9
         uFcXTatyQC9uSuLe9455APqCDW9F0cnZ62ANW4I/K0NsFppA3gG0INLxwzp8yQ+a7ldP
         RRCQGc8i27q3DYeqc5fqopW3OcDZXE20t3MqZ9EdN39/TV6uuT7fSYwSDJPxIsxO2fy9
         JK6IcDPiJ+oB6X8+jhuQcfmzOcp979BwakKDZuf+W32tBZ8HAXfRjwAw9QzEBt6u1wPx
         +SGA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to;
        bh=IqFdziwHDxVIOjGZX/oKUvNBbcd3HE80EMON7TdM10Q=;
        b=tromzoaRz/uRjlWzwnpkT0wxQaoD/3wcRRx7o76KNj7CVVtGSuOSNLNH+SSYOTJScm
         VvIuDKPuNzmMK488E/NbkxA1atDRQG2Nn6vu5OP8wnAWz51oJzFq9JBM/TA0S6Y8aMmH
         /7Y00Zdyci6IaVAidvsPjpAZCdU9XYSO5qHKBHqDz3oMR6Nb0IPC6oWnoJYg4CaoKmNR
         XmcMgF1ZEWrXQQguNfMWKF0zZmAszT9NtKHPReXGvpiK09kbo5znsvsf5Ds50gex7rfb
         7E5UKdK0/jLNvfdiHPwc43WAj1MOl16GsGHx925O1Vx/rBOCqsPpoRxUTgM7LhaJhNcG
         fhxw==
X-Gm-Message-State: AOAM533NaLQAyv8k0NHpBtG35az+lTFNx5X8loVbw9y9JA9E6Bvse9dB
	WpVVixIYUFXF7l4HS+OR8jU=
X-Google-Smtp-Source: ABdhPJx1yb77m+oOkROwInIy0TQylzhb9Rk5GTpsKXfWrN9GyeZLsgWiplBx1IwLg2srnPVvzKiL5Q==
X-Received: by 2002:a05:6830:150c:: with SMTP id k12mr793866otp.104.1614310612287;
        Thu, 25 Feb 2021 19:36:52 -0800 (PST)
Date: Thu, 25 Feb 2021 20:36:50 -0700
From: Connor Davis <connojdavis@gmail.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH for-next 4/6] xen: Fix build when !CONFIG_GRANT_TABLE
Message-ID: <20210226033650.ou6eqdu7lfmuuhtk@thewall>
References: <cover.1614265718.git.connojdavis@gmail.com>
 <eb2d1e911870f1662acfbc073447af2d29455750.1614265718.git.connojdavis@gmail.com>
 <259e6dce-7fc4-1f2e-49b6-61a047012953@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <259e6dce-7fc4-1f2e-49b6-61a047012953@suse.com>

On Thu, Feb 25, 2021 at 04:53:23PM +0100, Jan Beulich wrote:
> On 25.02.2021 16:24, Connor Davis wrote:
> > --- a/xen/include/xen/grant_table.h
> > +++ b/xen/include/xen/grant_table.h
> > @@ -66,6 +66,8 @@ int gnttab_acquire_resource(
> >  
> >  #define opt_max_grant_frames 0
> >  
> > +struct grant_table {};
> > +
> >  static inline int grant_table_init(struct domain *d,
> >                                     int max_grant_frames,
> >                                     int max_maptrack_frames)
> 
> You shouldn't actually declare a struct, all you need is to
> move the forward decl further up in the file ahead of the
> #ifdef.

Thanks, will fix this in v2.

    Connor


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 03:51:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 03:51:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90048.170277 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFUAG-0006Zc-O7; Fri, 26 Feb 2021 03:51:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90048.170277; Fri, 26 Feb 2021 03:51:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFUAG-0006ZV-Jw; Fri, 26 Feb 2021 03:51:40 +0000
Received: by outflank-mailman (input) for mailman id 90048;
 Fri, 26 Feb 2021 03:51:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFUAF-0006Z4-5l; Fri, 26 Feb 2021 03:51:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFUAE-0001iX-SX; Fri, 26 Feb 2021 03:51:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFUAE-0000K9-JN; Fri, 26 Feb 2021 03:51:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lFUAE-0007UF-Iq; Fri, 26 Feb 2021 03:51:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=2wtiQLY+Ke0QJhMyFfcH/0Hw3OmFoqkeClDG9WlTvPA=; b=mxFviR84/5Wp06Aq3lCqrozT89
	YNwYMOkmYzyof/MqNImHlRVqXtv1CKucmugg2mjKFOMsm/ioTvw1tEyrolGsn19RO1gK4cF1PX+Np
	UCJwFd0vJ7x/FeFFfVuMnvKi/wnva0u2xwSeSVId7mAXx8dCDbfOD54FYSFkxW1i7F+s=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159671-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 159671: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-shadow:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-intel:xen-boot:fail:regression
    xen-unstable:test-xtf-amd64-amd64-3:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-xtf-amd64-amd64-1:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    xen-unstable:test-xtf-amd64-amd64-4:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-xtf-amd64-amd64-2:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-xtf-amd64-amd64-5:xtf/test-pv32pae-selftest:fail:regression
    xen-unstable:test-amd64-i386-libvirt-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-raw:xen-boot:fail:regression
    xen-unstable:test-amd64-coresched-i386-xl:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-migrupgrade:xen-boot/dst_host:fail:regression
    xen-unstable:test-amd64-i386-xl-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-libvirt-pair:xen-boot/src_host:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-libvirt-pair:xen-boot/dst_host:fail:regression
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-freebsd10-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-freebsd10-i386:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-examine:reboot:fail:regression
    xen-unstable:test-amd64-i386-libvirt:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-livepatch:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-pair:xen-boot/src_host:fail:regression
    xen-unstable:test-amd64-i386-pair:xen-boot/dst_host:fail:regression
    xen-unstable:test-amd64-i386-xl-pvshim:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=60390ccb8b9b2dbf85010f8b47779bb231aa2533
X-Osstest-Versions-That:
    xen=e8185c5f01c68f7d29d23a4a91bc1be1ff2cc1ca
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 26 Feb 2021 03:51:38 +0000

flight 159671 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159671/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-ws16-amd64  8 xen-boot          fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-debianhvm-amd64  8 xen-boot     fail REGR. vs. 159475
 test-amd64-i386-xl-shadow     8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-qemut-rhel6hvm-intel  8 xen-boot         fail REGR. vs. 159475
 test-xtf-amd64-amd64-3      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-xtf-amd64-amd64-1      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-amd64-i386-xl-qemut-debianhvm-amd64  8 xen-boot     fail REGR. vs. 159475
 test-xtf-amd64-amd64-4      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-xtf-amd64-amd64-2      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-xtf-amd64-amd64-5      19 xtf/test-pv32pae-selftest fail REGR. vs. 159475
 test-amd64-i386-libvirt-xsm   8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-xl-raw        8 xen-boot                 fail REGR. vs. 159475
 test-amd64-coresched-i386-xl  8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-migrupgrade  13 xen-boot/dst_host        fail REGR. vs. 159475
 test-amd64-i386-xl-xsm        8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-ovmf-amd64  8 xen-boot          fail REGR. vs. 159475
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 159475
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  8 xen-boot  fail REGR. vs. 159475
 test-amd64-i386-qemut-rhel6hvm-amd  8 xen-boot           fail REGR. vs. 159475
 test-amd64-i386-xl            8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-libvirt-pair 12 xen-boot/src_host        fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 159475
 test-amd64-i386-libvirt-pair 13 xen-boot/dst_host        fail REGR. vs. 159475
 test-amd64-i386-qemuu-rhel6hvm-amd  8 xen-boot           fail REGR. vs. 159475
 test-amd64-i386-xl-qemut-win7-amd64  8 xen-boot          fail REGR. vs. 159475
 test-amd64-i386-freebsd10-amd64  8 xen-boot              fail REGR. vs. 159475
 test-amd64-i386-freebsd10-i386  8 xen-boot               fail REGR. vs. 159475
 test-amd64-i386-examine       8 reboot                   fail REGR. vs. 159475
 test-amd64-i386-libvirt       8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  8 xen-boot  fail REGR. vs. 159475
 test-amd64-i386-livepatch     8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 159475
 test-amd64-i386-qemuu-rhel6hvm-intel  8 xen-boot         fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-ws16-amd64  8 xen-boot          fail REGR. vs. 159475
 test-amd64-i386-pair         12 xen-boot/src_host        fail REGR. vs. 159475
 test-amd64-i386-pair         13 xen-boot/dst_host        fail REGR. vs. 159475
 test-amd64-i386-xl-pvshim     8 xen-boot                 fail REGR. vs. 159475
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 159475
 test-amd64-i386-xl-qemuu-win7-amd64  8 xen-boot          fail REGR. vs. 159475

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 159475
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 159475
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 159475
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 159475
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 159475
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 159475
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 159475
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  60390ccb8b9b2dbf85010f8b47779bb231aa2533
baseline version:
 xen                  e8185c5f01c68f7d29d23a4a91bc1be1ff2cc1ca

Last test of basis   159475  2021-02-19 06:26:37 Z    6 days
Failing since        159487  2021-02-20 04:29:29 Z    5 days   11 attempts
Testing same since   159652  2021-02-25 01:07:42 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Norbert Manthey <nmanthey@amazon.de>
  Rahul Singh <rahul.singh@arm.com>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Tamas K Lengyel <tamas@tklengyel.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    fail    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 397 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 04:25:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 04:25:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90053.170292 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFUhD-0001M2-FN; Fri, 26 Feb 2021 04:25:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90053.170292; Fri, 26 Feb 2021 04:25:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFUhD-0001Lv-Bv; Fri, 26 Feb 2021 04:25:43 +0000
Received: by outflank-mailman (input) for mailman id 90053;
 Fri, 26 Feb 2021 04:25:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+p7+=H4=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lFUhB-0001Lq-E4
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 04:25:41 +0000
Received: from mail-qk1-x730.google.com (unknown [2607:f8b0:4864:20::730])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 02e20227-e1b6-4579-80af-699bbfbb792f;
 Fri, 26 Feb 2021 04:25:40 +0000 (UTC)
Received: by mail-qk1-x730.google.com with SMTP id x124so8018809qkc.1
 for <xen-devel@lists.xenproject.org>; Thu, 25 Feb 2021 20:25:40 -0800 (PST)
Received: from mail-qt1-f178.google.com (mail-qt1-f178.google.com.
 [209.85.160.178])
 by smtp.gmail.com with ESMTPSA id i5sm5561183qkg.32.2021.02.25.20.25.39
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 25 Feb 2021 20:25:39 -0800 (PST)
Received: by mail-qt1-f178.google.com with SMTP id o34so5856436qtd.11
 for <xen-devel@lists.xenproject.org>; Thu, 25 Feb 2021 20:25:39 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 02e20227-e1b6-4579-80af-699bbfbb792f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=fDTKQXWPODousjcFxfPrAdhXiScgfthwkHIVsnIoPbM=;
        b=abTtJIMMZ8KToSFWbjzFi32HOkPDN2rxZ+IfBv+eO62ap/dN702Uq0mU9oHC9j+snt
         vnVWV+5jH/p/0R+4U10lm2zTMAo+mTTld64KVC2G+prj4nocAzeugJM4VE5FsldHI9zg
         D677137O5m7d9bvVpQQ+IkqHWrA56M7YEa+Ng=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=fDTKQXWPODousjcFxfPrAdhXiScgfthwkHIVsnIoPbM=;
        b=ouTlASx9uD0nZPjGKOH9cok0OkWOAdAC88eUn9alnuhg1XqX0Ldgagi4QPI9O/CsrR
         /OKKkIfm/xAqlqPGK6yzccv9euwb7s7OPSxOvfje3WoD5t8trX1B+Kmkp9YPFZ/1iJR6
         PV1PrB2q4Ike2/pB6nN+1s8XAnRhxdkHSri1xa8eSNs9S6ggxnOyotZwqxAVIiEeEDKf
         AWm2YgyaRUSqcwYZPxKZ70HziUTbMYj84ihdwC+UazJq1HgbqNYgU9J0DrnSRmzR8tyr
         jxnOAILdiRyVSQ2jSrPjhGrFuzl79MTApsk5UxolUsMo3MvE63ep3coWWnMi1SA73lXP
         60Ag==
X-Gm-Message-State: AOAM531UQG1e2UrLAkaNyXumgbbTA4M6ci8HvFy6MHfu6BX74KoCSrmI
	bM/3mTVcX1fFvT59HSSi8pTHwZFjQqhgMg==
X-Google-Smtp-Source: ABdhPJyg2D2KQIGgWNwMvhc+5sTEsHjF0RTy6O/6dAd4uNWlJ6/+DnvydUHZNO8uVNaQtQrN5XcPew==
X-Received: by 2002:a37:a7c3:: with SMTP id q186mr1022939qke.475.1614313539774;
        Thu, 25 Feb 2021 20:25:39 -0800 (PST)
X-Received: by 2002:a05:6638:5:: with SMTP id z5mr1097392jao.84.1614313081344;
 Thu, 25 Feb 2021 20:18:01 -0800 (PST)
MIME-Version: 1.0
References: <20210209062131.2300005-1-tientzu@chromium.org> <20210209062131.2300005-13-tientzu@chromium.org>
In-Reply-To: <20210209062131.2300005-13-tientzu@chromium.org>
From: Claire Chang <tientzu@chromium.org>
Date: Fri, 26 Feb 2021 12:17:50 +0800
X-Gmail-Original-Message-ID: <CALiNf298+DLjTK6ALe0mYrRuCP_LtztMGuQQCS90ubDctbS0kw@mail.gmail.com>
Message-ID: <CALiNf298+DLjTK6ALe0mYrRuCP_LtztMGuQQCS90ubDctbS0kw@mail.gmail.com>
Subject: Re: [PATCH v4 12/14] swiotlb: Add restricted DMA alloc/free support.
To: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>, 
	Will Deacon <will@kernel.org>, Frank Rowand <frowand.list@gmail.com>, 
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, boris.ostrovsky@oracle.com, jgross@suse.com, 
	Christoph Hellwig <hch@lst.de>, Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org, paulus@samba.org, 
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>, sstabellini@kernel.org, 
	Robin Murphy <robin.murphy@arm.com>, grant.likely@arm.com, xypron.glpk@gmx.de, 
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org, bauerman@linux.ibm.com, 
	peterz@infradead.org, Greg KH <gregkh@linuxfoundation.org>, 
	Saravana Kannan <saravanak@google.com>, "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>, 
	heikki.krogerus@linux.intel.com, 
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>, Randy Dunlap <rdunlap@infradead.org>, 
	Dan Williams <dan.j.williams@intel.com>, Bartosz Golaszewski <bgolaszewski@baylibre.com>, 
	linux-devicetree <devicetree@vger.kernel.org>, lkml <linux-kernel@vger.kernel.org>, 
	linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, 
	Nicolas Boichat <drinkcat@chromium.org>, Jim Quinlan <james.quinlan@broadcom.com>
Content-Type: text/plain; charset="UTF-8"

> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index fd9c1bd183ac..8b77fd64199e 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -836,6 +836,40 @@ late_initcall(swiotlb_create_default_debugfs);
>  #endif
>
>  #ifdef CONFIG_DMA_RESTRICTED_POOL
> +struct page *dev_swiotlb_alloc(struct device *dev, size_t size, gfp_t gfp)
> +{
> +       struct swiotlb *swiotlb;
> +       phys_addr_t tlb_addr;
> +       unsigned int index;
> +
> +       /* dev_swiotlb_alloc can be used only in the context which permits sleeping. */
> +       if (!dev->dev_swiotlb || !gfpflags_allow_blocking(gfp))

Just noticed that !gfpflags_allow_blocking(gfp) shouldn't be here.

Hi Christoph,

Do you think I should fix this and rebase on the latest linux-next
now? I wonder if there is more refactoring and cleanup coming and
whether I should wait until after that.

Thanks,
Claire


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 04:48:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 04:48:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90058.170307 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFV2i-0003fS-H4; Fri, 26 Feb 2021 04:47:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90058.170307; Fri, 26 Feb 2021 04:47:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFV2i-0003fL-Da; Fri, 26 Feb 2021 04:47:56 +0000
Received: by outflank-mailman (input) for mailman id 90058;
 Fri, 26 Feb 2021 04:47:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WYNV=H4=gmail.com=akihiko.odaki@srs-us1.protection.inumbo.net>)
 id 1lFV2h-0003fG-FK
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 04:47:55 +0000
Received: from mail-ej1-x636.google.com (unknown [2a00:1450:4864:20::636])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 29902a70-2c87-474d-8198-d6b751e1a7e7;
 Fri, 26 Feb 2021 04:47:54 +0000 (UTC)
Received: by mail-ej1-x636.google.com with SMTP id b21so1589377eja.4
 for <xen-devel@lists.xenproject.org>; Thu, 25 Feb 2021 20:47:54 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 29902a70-2c87-474d-8198-d6b751e1a7e7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc:content-transfer-encoding;
        bh=eSegB5Tn100fjF6cMx01sLj7QAzt6mWuluXvHTB2+fo=;
        b=NdrDvZLAm+S+WbPm1+SkWGTWzukLJOHiAt/bPonkTB36VLmZRFCeyvW9oYX0NCxm3K
         Yk6LsUGmgndTGDv+ViVAjvcfIhn7yLqrD5yFYiUBkR5W6m08QKnXpb6lEL/1R+OgBFKT
         +Ui8rmv13ofNMtpZnlkBFFfTKe51pzhx6yC31AfisA5y6Ed6ynD6qYDVcHsI/MxSN7M3
         lZQIYQSIRmCbc9FnyiRNjmoS7wEfpxjzRQzujI6msHfDgqIAvfNWs9tjGR1e8Bq25W9o
         no8lfdwja4+nvU999f0DrSo/ErrUrOsohXkfwL6fbU1OfKyd03ySNqqXLeLlqf3Zqz7e
         VvLA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc:content-transfer-encoding;
        bh=eSegB5Tn100fjF6cMx01sLj7QAzt6mWuluXvHTB2+fo=;
        b=F66C2K+kTXej5xSD5A1rbZnkj/6a0Bj9NBUHSqbMq28c/f+lCPZH2FU0Fj6QwH2vej
         +6X6I9Ei+lKqHXbDShXmkRJz18C7P46iSPt8g04KGJyajLY6AsYFT05BGfa6MqgjWq/p
         kelK5RjJIOUAObkpAQcoy2AByk83KO+cI6xSNx9ckLHbzaQMNa9ROF9Lozo471uwz9A1
         egHo2sYQd2ONhWhZEbcGYfSfGZt/X/+wh+xLXoFTsSkKk7y81NAAkcLaMfXZNO9rgomw
         4EnflO2cuA1fywp8imLkbeij0JanslPGRxs96hoO4pBLHifGizEBQ4yWeqxO/GMWgKVB
         RlMQ==
X-Gm-Message-State: AOAM531e0aWUBJFbT0zFaJk6jkAEixaNPf7liMDjpqVNxDeCaHdM+L7f
	mFuCYDsp/9EpaWsS5VJ1B4X/jhtXmABu/aG+3dA=
X-Google-Smtp-Source: ABdhPJyVniYDvxXKn4lUYlhspOSbMbOOaPMZpGpQWzR0w7eap/HgCB4YP5247R22V8cQ2QJbAVjw+VDjDrVN9B4OBvM=
X-Received: by 2002:a17:907:778d:: with SMTP id ky13mr397591ejc.291.1614314873688;
 Thu, 25 Feb 2021 20:47:53 -0800 (PST)
MIME-Version: 1.0
References: <20210221133414.7262-1-akihiko.odaki@gmail.com>
 <20210222105738.w2q6vp5pi4p6bx5m@sirius.home.kraxel.org> <CAMVc7JVo_XJcGcxW0Wmqje3Y40fRZDY6T8dnQTc2=Ehasz4UHw@mail.gmail.com>
 <20210224111540.xd5a6yszql6wln7m@sirius.home.kraxel.org> <CAMVc7JXUXnrK_amhQsy=paMeqjMU_8r86Hj4UF5haZ+Oq15JkA@mail.gmail.com>
 <20210225114626.dn7wevr3fozp5rcu@sirius.home.kraxel.org>
In-Reply-To: <20210225114626.dn7wevr3fozp5rcu@sirius.home.kraxel.org>
From: Akihiko Odaki <akihiko.odaki@gmail.com>
Date: Fri, 26 Feb 2021 13:47:38 +0900
Message-ID: <CAMVc7JX-E_3fE9SCOaYFAtDBRHNmHxmHWiqcJDPE-4zq-QHJbQ@mail.gmail.com>
Subject: Re: [PATCH] virtio-gpu: Respect graphics update interval for EDID
To: Gerd Hoffmann <kraxel@redhat.com>
Cc: qemu Developers <qemu-devel@nongnu.org>, xen-devel@lists.xenproject.org, 
	"Michael S. Tsirkin" <mst@redhat.com>, Stefano Stabellini <sstabellini@kernel.org>, 
	Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, Feb 25, 2021 at 20:46, Gerd Hoffmann <kraxel@redhat.com> wrote:
>
>   Hi,
>
> > > Because of the wasted frames I'd like this to be an option you can
> > > enable when needed.  For the majority of use cases this seems to be
> > > no problem ...
> >
> > I see blinks with GNOME on Wayland on Ubuntu 20.04 and virtio-gpu with
> > the EDID change included in this patch.
>
> /me looks closely at the patch again.
>
> So you update the edid dynamically, each time the refresh rate changes.
> The problem with that approach is that software doesn't expect the edid
> to change dynamically, because on physical hardware it is static
> information about the connected monitor.
>
> So what the virtio-gpu guest driver does is emulate a monitor hotplug
> event to notify userspace.  If you resize the qemu window on the host
> it'll look like the monitor with the old window size was unplugged and
> a new monitor with the new window size got plugged in instead, so gnome
> shell adapts the display resolution to the new virtual monitor size.
>
> The blink you are seeing probably comes from gnome-shell processing the
> monitor hotplug event.
>
> We could try to skip generating a monitor hotplug event in case only the
> refresh rate did change.  That would fix the blink, but it would also
> have the effect that nobody will notice the update.
>
> Bottom line:  I think making the edid refresh rate configurable might be
> useful, but changing it dynamically most likely isn't.
>
> take care,
>   Gerd
>

The "hotplug" implementation is probably what other combinations of
devices and guests will do if they want to respond to changes of the
refresh rate, or of the display mode in general. That makes
communicating a dynamically changing refresh rate to guests infeasible.

As you wrote, making the refresh rate configurable should still be
useful, and I think matching it to the backend physical display is
even better. GTK, the sole implementer of gfx_update_interval in my
patch, reports the refresh rate of the physical display the window
resides in. This means the value may change when the physical display
changes its refresh rate (which should be rare) or when the window
moves to another physical display.

The former case is no different from implementing a physical display
driver for guests, so there should be no problem. The latter case is
similar to how changes of the window size, which is also part of the
display mode, are delivered to guests, and the two should be handled
consistently. The only inconsistency I see in my patch is that the
refresh rate change has no throttling while the window size change
does. I don't think that is a problem, because moving the window
across physical displays should be rare, but I can implement
throttling if you disagree or know of other cases where the refresh
rate changes frequently.

Regards,
Akihiko Odaki


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 05:17:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 05:17:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90065.170322 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFVVa-00076n-PZ; Fri, 26 Feb 2021 05:17:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90065.170322; Fri, 26 Feb 2021 05:17:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFVVa-00076g-MV; Fri, 26 Feb 2021 05:17:46 +0000
Received: by outflank-mailman (input) for mailman id 90065;
 Fri, 26 Feb 2021 05:17:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kQax=H4=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lFVVZ-00076b-CY
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 05:17:45 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cb6c6d5f-485f-43bb-b75d-1e5aa86a1d47;
 Fri, 26 Feb 2021 05:17:44 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 001F968BEB; Fri, 26 Feb 2021 06:17:40 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cb6c6d5f-485f-43bb-b75d-1e5aa86a1d47
Date: Fri, 26 Feb 2021 06:17:40 +0100
From: Christoph Hellwig <hch@lst.de>
To: Claire Chang <tientzu@chromium.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com, xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
	bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>
Subject: Re: [PATCH v4 12/14] swiotlb: Add restricted DMA alloc/free
 support.
Message-ID: <20210226051740.GB2072@lst.de>
References: <20210209062131.2300005-1-tientzu@chromium.org> <20210209062131.2300005-13-tientzu@chromium.org> <CALiNf298+DLjTK6ALe0mYrRuCP_LtztMGuQQCS90ubDctbS0kw@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CALiNf298+DLjTK6ALe0mYrRuCP_LtztMGuQQCS90ubDctbS0kw@mail.gmail.com>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Fri, Feb 26, 2021 at 12:17:50PM +0800, Claire Chang wrote:
> Do you think I should fix this and rebase on the latest linux-next
> now? I wonder if there is more refactoring and cleanup coming and
> whether I should wait until after that.

Here is my preferred plan:

 1) wait for my series to support the min alignment in swiotlb to
    land in Linus' tree
 2) I'll resend my series with the further swiotlb cleanup and
    refactoring, which includes a slightly rebased version of your
    patch to add the io_tlb_mem structure
 3) resend your series on top of that as a baseline

This is my current WIP tree for step 2:

  http://git.infradead.org/users/hch/misc.git/shortlog/refs/heads/swiotlb-struct


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 06:16:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 06:16:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90073.170334 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFWPr-0004ob-6Y; Fri, 26 Feb 2021 06:15:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90073.170334; Fri, 26 Feb 2021 06:15:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFWPr-0004oU-2v; Fri, 26 Feb 2021 06:15:55 +0000
Received: by outflank-mailman (input) for mailman id 90073;
 Fri, 26 Feb 2021 06:15:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rwhq=H4=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lFWPo-0004oP-V2
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 06:15:53 +0000
Received: from mail-oo1-xc32.google.com (unknown [2607:f8b0:4864:20::c32])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dfdbb505-eb10-486d-a58a-ffdeda7d05b6;
 Fri, 26 Feb 2021 06:15:51 +0000 (UTC)
Received: by mail-oo1-xc32.google.com with SMTP id x19so1964892ooj.10
 for <xen-devel@lists.xenproject.org>; Thu, 25 Feb 2021 22:15:51 -0800 (PST)
Received: from thewall (142-79-211-230.starry-inc.net. [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id w22sm1588812otm.73.2021.02.25.22.15.50
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 25 Feb 2021 22:15:50 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dfdbb505-eb10-486d-a58a-ffdeda7d05b6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=WXwFjyd+hV8iKbRjbKd2BJP8+p5hoDnRVTsshNgLObs=;
        b=PbV2OOVCRtDgN5eys1UpbeBoNmICjGn+EMWpq0uoFlULCi/a3wKdbnUtsZ/ygKtqGZ
         FIshySVFsH0gBGd+DPnkjH1XnEl+mIIZM42TSou9zcmEMNjFCXqp2YItkc08PbmS9vcx
         AUFy8G4I5rkhc4EWhnwb1e/MAMiC7r4XtS2IoSkdAKesbvpzDmJES/6QNIrVKSVBFn+d
         VfeICMtYHjKFYGA3xHVv4J6upnv0Vlb9dy9A47/YZQXgE57X9WiLV1CAfzVSKJJ9emDZ
         4da06aTUj2p1M8pm0zCrjfWm6i11rRV7XNwsGBbxzkIN36v9GcGQOzpXJ/O5gNK6d3PT
         PChg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to;
        bh=WXwFjyd+hV8iKbRjbKd2BJP8+p5hoDnRVTsshNgLObs=;
        b=DhXDtnfmtIfeWXgcuTyCoHuBJdL3UZEnGNWK2opfOad2tGu7p3aVNjVVe4GFwpD8aR
         RyFHv64i2proeIBSb7EZ7nW7eVu7IdJUos7nJvh4HkN7cXdrSxFFbi/RqS9OR7W4bMLY
         lDARu+IQ3XMv9EJltWd3F+1ULoLH7y7T60L2OZHcnBthDH4l/jbKN4M+t4+9dniXqTRA
         LL0gJvSkn7GH077kwIPfTPb9385i5yudRrRmUm/XXfwjrdCgv4mf9VCNyUycRoB0nx1i
         3KxaSyZGxU7Gm7rZPRZMHesZix1h8Lga8tYMqughJZqlYX0jIQuBZcx7XzZVAuAJ7nmr
         xKYQ==
X-Gm-Message-State: AOAM530I7n85IvXA5ueL+1lU0ffqLtHj1ywSyEYog+O+TCTyMvtx5uHl
	izbaYyCxL3K+qAC3gjwOPKY=
X-Google-Smtp-Source: ABdhPJx1ewpkQitZcrkIRSQQ8SYaMHDnbn+waiWgKJCq/pAzliJftdP9g6FTAUwG9vSlQ1nJhf/wuQ==
X-Received: by 2002:a4a:e093:: with SMTP id w19mr1242946oos.53.1614320151379;
        Thu, 25 Feb 2021 22:15:51 -0800 (PST)
Date: Thu, 25 Feb 2021 23:15:49 -0700
From: Connor Davis <connojdavis@gmail.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org,
	Bobby Eshleman <bobbyeshleman@gmail.com>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: Re: [PATCH for-next 6/6] automation: add container for riscv64 builds
Message-ID: <20210226061548.4dbvv772icl6pwmo@thewall>
References: <cover.1614265718.git.connojdavis@gmail.com>
 <a7829e62734a73993cd41cdbc18e1d16e4bb06d9.1614265718.git.connojdavis@gmail.com>
 <alpine.DEB.2.21.2102251630382.3234@sstabellini-ThinkPad-T480s>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="givbyxahiezow33j"
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.21.2102251630382.3234@sstabellini-ThinkPad-T480s>


--givbyxahiezow33j
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

On Thu, Feb 25, 2021 at 04:31:13PM -0800, Stefano Stabellini wrote:
> On Thu, 25 Feb 2021, Connor Davis wrote:
> > Add a container for cross-compiling xen to riscv64.
> > This just includes the cross-compiler and necessary packages for
> > building xen itself (packages for tools, stubdoms, etc., can be
> > added later).
> > 
> > To build xen in the container run the following:
> > 
> > $ make XEN_TARGET_ARCH=riscv64 SUBSYSTEMS=xen
> 
> The container build failed for me with:
> 
> error: failed to initialize alpm library
> (could not find or read directory: /var/lib/pacman/)
> The command '/bin/sh -c pacman --noconfirm -Syu     pixman     python     sh' returned a non-zero code: 255
> 

Ooof, apparently archlinux/base has been broken for a
few weeks now due to interactions between faccessat2 and the
host's libseccomp version [0]. This thread [1] suggests the next
docker release will have a fix.

As a temporary workaround, I got the attached patch to work
(based on this one [2]) if you want to give that a go.

    Connor

[0] https://bugs.archlinux.org/task/69563
[1] https://github.com/actions/virtual-environments/issues/2658
[2] https://github.com/MiguelNdeCarvalho/docker-baseimage-archlinux/pull/8/files

--givbyxahiezow33j
Content-Type: text/x-diff; charset=us-ascii
Content-Disposition: attachment; filename="docker-glibc.patch"

diff --git a/automation/build/archlinux/riscv64.dockerfile b/automation/build/archlinux/riscv64.dockerfile
index d94048b6c3..5b3c3b9e3b 100644
--- a/automation/build/archlinux/riscv64.dockerfile
+++ b/automation/build/archlinux/riscv64.dockerfile
@@ -2,6 +2,11 @@ FROM archlinux/base
 LABEL maintainer.name="The Xen Project" \
       maintainer.email="xen-devel@lists.xenproject.org"
 
+RUN patched_glibc=glibc-linux4-2.33-4-x86_64.pkg.tar.zst && \
+    curl -LO "https://repo.archlinuxcn.org/x86_64/$patched_glibc" && \
+    bsdtar -C / -xvf "$patched_glibc" && \
+    sed -i 's/#IgnorePkg   =/IgnorePkg   = glibc/' /etc/pacman.conf
+
 # Packages needed for the build
 RUN pacman --noconfirm -Syu \
     base-devel \
@@ -26,6 +31,9 @@ RUN git clone --recursive -j$(nproc) --progress https://github.com/riscv/riscv-g
 
 # Add compiler path
 ENV PATH=/opt/riscv/bin/:${PATH}
+ENV CROSS_COMPILE=riscv64-unknown-linux-gnu-
+ENV XEN_TARGET_ARCH=riscv64
+ENV SUBSYSTEMS=xen
 
 RUN useradd --create-home user
 USER user

--givbyxahiezow33j--


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 07:00:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 07:00:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90081.170357 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFX7G-0001TU-MD; Fri, 26 Feb 2021 07:00:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90081.170357; Fri, 26 Feb 2021 07:00:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFX7G-0001TN-JG; Fri, 26 Feb 2021 07:00:46 +0000
Received: by outflank-mailman (input) for mailman id 90081;
 Fri, 26 Feb 2021 07:00:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ScZz=H4=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lFX7E-0001Sn-RM
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 07:00:44 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a3b202d4-c3c2-48da-b36a-2b03f4bc3f48;
 Fri, 26 Feb 2021 07:00:40 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id ABDD8AAAE;
 Fri, 26 Feb 2021 07:00:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a3b202d4-c3c2-48da-b36a-2b03f4bc3f48
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614322839; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=V9ltK3tLHfA65LZDLA0UVxYPd2pNEVXAxTGBBkIZR6U=;
	b=OL2BMEPH5kX6K7riOtVyCuXnPPcm2++Krdms+KzNT2DQ6ZO6AUWHfuxnRAFJhoIbYNmJ/1
	JeSI2tTbNBh2O1vpLbO8M0xN4vCGDW79wXQC0e2OLuir6hO8gVTQVIZ4idsU1fW3UhUSvQ
	ckC5DIQtKgQjHS2DiSWGfuJ/kFPitrs=
Subject: Re: [PATCH for-4.15 1/5] tools/xenstored: Avoid unnecessary
 talloc_strdup() in do_control_lu()
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Wei Liu <wl@xen.org>
References: <20210225174131.10115-1-julien@xen.org>
 <20210225174131.10115-2-julien@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <8aff5264-ee89-fde7-14b9-bc2f7ece5406@suse.com>
Date: Fri, 26 Feb 2021 08:00:38 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210225174131.10115-2-julien@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="DhQxo18ivMnw4An1RYsnfYezu8SieLUON"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--DhQxo18ivMnw4An1RYsnfYezu8SieLUON
Content-Type: multipart/mixed; boundary="LdqCPhkPH065KA2Tfgz72EqAx49oRActW";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Wei Liu <wl@xen.org>
Message-ID: <8aff5264-ee89-fde7-14b9-bc2f7ece5406@suse.com>
Subject: Re: [PATCH for-4.15 1/5] tools/xenstored: Avoid unnecessary
 talloc_strdup() in do_control_lu()
References: <20210225174131.10115-1-julien@xen.org>
 <20210225174131.10115-2-julien@xen.org>
In-Reply-To: <20210225174131.10115-2-julien@xen.org>

--LdqCPhkPH065KA2Tfgz72EqAx49oRActW
Content-Type: multipart/mixed;
 boundary="------------9DCCA64CE8A936E7CF841E8D"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------9DCCA64CE8A936E7CF841E8D
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 25.02.21 18:41, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
>
> At the moment, the return of talloc_strdup() is not checked. This means
> we may dereference a NULL pointer if the allocation failed.
>
> However, it is pointless to allocate the memory as send_reply() will
> copy the data to a different buffer. So drop the use of talloc_strdup().
>
> This bug was discovered and resolved using Coverity Static Analysis
> Security Testing (SAST) by Synopsys, Inc.
>
> Fixes: fecab256d474 ("tools/xenstore: add basic live-update command parsing")
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen


--------------9DCCA64CE8A936E7CF841E8D
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------9DCCA64CE8A936E7CF841E8D--

--LdqCPhkPH065KA2Tfgz72EqAx49oRActW--

--DhQxo18ivMnw4An1RYsnfYezu8SieLUON
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmA4nJYFAwAAAAAACgkQsN6d1ii/Ey/4
aAgAmdNBD62jjOLNU3xNLjYSdrP1qUQjqRjfLsG3HAal99GZHpthzM4Rh1gxPynmLmry7Bu7whEw
tDN2LSOAftsgBB/qJ1AsadF4YE4dodxKSj7dIIMGnrAM/eAsHf9OHPrB6V/2eAJ+86eQHgGFQBoO
DRjQyOduptFmtByvaLaSZxKtkK28NMivNi31c/esUXO8VTj8f7EEDBxykKcCjXeIMxYIRrXmFvUl
VB7B0z7AfI5QBsQIOHEd05rA1jBpV4tcV5pBlMGJAMp1r4Q0qFCy/x2pjULW/aQc65tqaffKpKqq
cH7YalglWP0z5EmZ+CFyG/bendd3eXOLSetruNK4yQ==
=o9gS
-----END PGP SIGNATURE-----

--DhQxo18ivMnw4An1RYsnfYezu8SieLUON--


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 07:01:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 07:01:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90082.170370 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFX7i-0001aj-0t; Fri, 26 Feb 2021 07:01:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90082.170370; Fri, 26 Feb 2021 07:01:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFX7h-0001ac-T7; Fri, 26 Feb 2021 07:01:13 +0000
Received: by outflank-mailman (input) for mailman id 90082;
 Fri, 26 Feb 2021 07:01:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ScZz=H4=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lFX7f-0001Z4-Vg
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 07:01:12 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 891ae6c8-631f-40de-affe-c62c091a31a1;
 Fri, 26 Feb 2021 07:01:11 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4EEA9AAC5;
 Fri, 26 Feb 2021 07:01:10 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 891ae6c8-631f-40de-affe-c62c091a31a1
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614322870; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=OJeqOhZIPy5VJigOfn3Sk6y7uTgUZ5yKunKzc1bGxp0=;
	b=bEnqAtl8y4eHcjMu9MtGMazqISBHZt4cDQegqUqZzWAyfuYevmG30g5NvL4pSlG9v6fVZt
	3JdID9BXvRTXpF2z9MQFeKnFOv4cQCy5/JXq5NCruX3Kbbg3gpPEqQpgBZCvmrGpC4VBmF
	cxW76mnnVnJgPwIz0yt0vonMs3k/1lY=
Subject: Re: [PATCH for-4.15 2/5] tools/xenstored: Avoid unnecessary
 talloc_strdup() in do_lu_start()
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Wei Liu <wl@xen.org>
References: <20210225174131.10115-1-julien@xen.org>
 <20210225174131.10115-3-julien@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <1de2c7a3-946b-e6cb-253b-c8d2b9b21843@suse.com>
Date: Fri, 26 Feb 2021 08:01:09 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210225174131.10115-3-julien@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="1blxmMIFTVJCdHJMBXEYLayRtChrePUep"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--1blxmMIFTVJCdHJMBXEYLayRtChrePUep
Content-Type: multipart/mixed; boundary="rQgRiW24aIoGvqruCCqvI72spgVEYItty";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Wei Liu <wl@xen.org>
Message-ID: <1de2c7a3-946b-e6cb-253b-c8d2b9b21843@suse.com>
Subject: Re: [PATCH for-4.15 2/5] tools/xenstored: Avoid unnecessary
 talloc_strdup() in do_lu_start()
References: <20210225174131.10115-1-julien@xen.org>
 <20210225174131.10115-3-julien@xen.org>
In-Reply-To: <20210225174131.10115-3-julien@xen.org>

--rQgRiW24aIoGvqruCCqvI72spgVEYItty
Content-Type: multipart/mixed;
 boundary="------------29AD1AAF5D59772B7EFAAD78"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------29AD1AAF5D59772B7EFAAD78
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 25.02.21 18:41, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
>
> At the moment, the return of talloc_strdup() is not checked. This means
> we may dereference a NULL pointer if the allocation failed.
>
> However, it is pointless to allocate the memory as send_reply() will
> copy the data to a different buffer. So drop the use of talloc_strdup().
>
> This bug was discovered and resolved using Coverity Static Analysis
> Security Testing (SAST) by Synopsys, Inc.
>
> Fixes: af216a99fb4a ("tools/xenstore: add the basic framework for doing the live update")
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen


--------------29AD1AAF5D59772B7EFAAD78--

--rQgRiW24aIoGvqruCCqvI72spgVEYItty--

--1blxmMIFTVJCdHJMBXEYLayRtChrePUep
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmA4nLUFAwAAAAAACgkQsN6d1ii/Ey/Y
zgf/dmlUI5rXmv3MKciaz7wSzNTOwR6nmfDM32p9C5zJW2G/EhGKOPMhNAQow/oxuiwd0yN54/r0
jQPcGn1fdMpOq+MglNJfrEvsU0TjFqsW07dxrtGQMlT+fYfsyez03qSZGXc4Dju2TDion8ynG170
tit/LnQiBG8jqQh0i8oZ3WzdUtbhjiG6JeibbdITxYXOirJ94NGNCF5WXurOkoroDVcOjxQ8MTht
C2/aHNptJtecIGV4UJYgCeLLWsNZ8jaPhC4Beplp8V+eHdRq3eJOakiVkGJV7BLoYE5uc89u3zWf
wiZrHkSek7e+0QXuTZ0tcFwc8THNceQcCeOtlJI0Dg==
=y+Cj
-----END PGP SIGNATURE-----

--1blxmMIFTVJCdHJMBXEYLayRtChrePUep--


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 07:04:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 07:04:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90088.170382 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFXAk-0001mH-I0; Fri, 26 Feb 2021 07:04:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90088.170382; Fri, 26 Feb 2021 07:04:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFXAk-0001mA-Er; Fri, 26 Feb 2021 07:04:22 +0000
Received: by outflank-mailman (input) for mailman id 90088;
 Fri, 26 Feb 2021 07:04:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ScZz=H4=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lFXAj-0001m5-U0
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 07:04:21 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 02bb39d4-1922-4b50-96a9-b7136ce5c1e8;
 Fri, 26 Feb 2021 07:04:20 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1F2F3AE36;
 Fri, 26 Feb 2021 07:04:20 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 02bb39d4-1922-4b50-96a9-b7136ce5c1e8
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614323060; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=SnVYfUsTHPqazuGKJl1xKl40R27LNOLN7oUyrt07dQI=;
	b=gp9ZSwWa6f0u2Skwg7vz2rnMc12Vo4VaVEdS1WMkPhsDHDlYSgHD4z9xUfDr7SZABfF39/
	IUyaoV6bH+QRct/NHEPmTd/Y3tKJHIt0nk2juakb9wZTEw/ukBh9slOrJ08G+gu9D7uzA8
	04qDsbdUNFv60Zl4vLnGZGj9UECEMME=
Subject: Re: [PATCH for-4.15 3/5] tools/xenstored: control: Store the save
 filename in lu_dump_state
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Wei Liu <wl@xen.org>
References: <20210225174131.10115-1-julien@xen.org>
 <20210225174131.10115-4-julien@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <022c2ec1-3ac8-6e06-1f06-c6a548754d94@suse.com>
Date: Fri, 26 Feb 2021 08:04:19 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210225174131.10115-4-julien@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="SnGt1ZXvnj8tnWLswc4KM9hfOmv7TSPp5"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--SnGt1ZXvnj8tnWLswc4KM9hfOmv7TSPp5
Content-Type: multipart/mixed; boundary="rCwufZ9o2aig2JCe7I8JQXGnheoLoxbDo";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Wei Liu <wl@xen.org>
Message-ID: <022c2ec1-3ac8-6e06-1f06-c6a548754d94@suse.com>
Subject: Re: [PATCH for-4.15 3/5] tools/xenstored: control: Store the save
 filename in lu_dump_state
References: <20210225174131.10115-1-julien@xen.org>
 <20210225174131.10115-4-julien@xen.org>
In-Reply-To: <20210225174131.10115-4-julien@xen.org>

--rCwufZ9o2aig2JCe7I8JQXGnheoLoxbDo
Content-Type: multipart/mixed;
 boundary="------------97D65EABBD4182718B98A83E"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------97D65EABBD4182718B98A83E
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 25.02.21 18:41, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
>
> The function lu_close_dump_state() will use talloc_asprintf() without
> checking whether the allocation succeeded. In the unlikely case we are
> out of memory, we would dereference a NULL pointer.
>
> As we already computed the filename in lu_get_dump_state(), we can store
> the name in the lu_dump_state. This avoids dealing with a memory
> allocation in the close path and also reduces the risk of using a
> different filename.
>
> This bug was discovered and resolved using Coverity Static Analysis
> Security Testing (SAST) by Synopsys, Inc.
>
> Fixes: c0dc6a3e7c41 ("tools/xenstore: read internal state when doing live upgrade")
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen


--------------97D65EABBD4182718B98A83E--

--rCwufZ9o2aig2JCe7I8JQXGnheoLoxbDo--

--SnGt1ZXvnj8tnWLswc4KM9hfOmv7TSPp5
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmA4nXMFAwAAAAAACgkQsN6d1ii/Ey+u
hggAhtOKyVYJuz1x/IbQ4SjV3NR/1PofDi6vX8PqqUOr/lwvpeB3yFXOVbAYI+lKSnM7EWMefwdl
kTObag1i8Z4KeLUI7cXSbxbOWCEg4yPUqq/Ehlqadxu0cqxp4bt681cUQBOgulPQycqm93UcSkWP
emiWQLmpCadZ9BRJDVifqhuXx0ktKIO8tQcS2jSuV9Xd1fu1Q/cLDvfS5YJoEMXyell/INSNGMah
sxkd1j53Xot6GfvMfboX0f4fr/2NYldqeMzjNTLT7yGD7ziTNnLsWhf0OS01RaHBljIVKrOmI+t+
eWtmBpqABgenzXA3BJl9e0taJrz72rStPVb/iv0mPA==
=40zg
-----END PGP SIGNATURE-----

--SnGt1ZXvnj8tnWLswc4KM9hfOmv7TSPp5--


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 07:06:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 07:06:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90091.170394 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFXCW-0001uX-Vw; Fri, 26 Feb 2021 07:06:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90091.170394; Fri, 26 Feb 2021 07:06:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFXCW-0001uQ-Rk; Fri, 26 Feb 2021 07:06:12 +0000
Received: by outflank-mailman (input) for mailman id 90091;
 Fri, 26 Feb 2021 07:06:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ScZz=H4=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lFXCV-0001ta-Uk
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 07:06:11 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 62de8ea6-8e41-4709-a291-7125f5e83035;
 Fri, 26 Feb 2021 07:06:09 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A231EAD5C;
 Fri, 26 Feb 2021 07:06:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 62de8ea6-8e41-4709-a291-7125f5e83035
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614323168; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=Dnzy0+6YaAyNyoTpH9A5Nihrr0w5Jq58dVI33xQo1Ns=;
	b=HzJ8tqeAcSAODNUDelGTHmbQ5g9D3sQoxmKmgpuKAz9mznLI1K6FI2ZXIbf7pfiF03v/cL
	4s75OPug695jZbbm1zzzIRmrIAbtuGwsilgKBY3SDiJ+EumXqtUDu2a5seimeKMLe/0Dw1
	HFtGzaayAfDdRTQJaXbO7D3N0NMxStQ=
Subject: Re: [PATCH for-4.15 4/5] tools/xenstore-control: Don't leak buf in
 live_update_start()
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Wei Liu <wl@xen.org>
References: <20210225174131.10115-1-julien@xen.org>
 <20210225174131.10115-5-julien@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <74f3a7a7-e2f2-c1a9-f4d5-aa1fc818daba@suse.com>
Date: Fri, 26 Feb 2021 08:06:07 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210225174131.10115-5-julien@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="SNow14MrVG84zSb3g2AJH4NnTo1bPNEs2"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--SNow14MrVG84zSb3g2AJH4NnTo1bPNEs2
Content-Type: multipart/mixed; boundary="BDb8nV6Ot42JNFNNbpoj3suO3toq24d7G";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Wei Liu <wl@xen.org>
Message-ID: <74f3a7a7-e2f2-c1a9-f4d5-aa1fc818daba@suse.com>
Subject: Re: [PATCH for-4.15 4/5] tools/xenstore-control: Don't leak buf in
 live_update_start()
References: <20210225174131.10115-1-julien@xen.org>
 <20210225174131.10115-5-julien@xen.org>
In-Reply-To: <20210225174131.10115-5-julien@xen.org>

--BDb8nV6Ot42JNFNNbpoj3suO3toq24d7G
Content-Type: multipart/mixed;
 boundary="------------4EFB6B980FE9BDA8096EE81B"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------4EFB6B980FE9BDA8096EE81B
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 25.02.21 18:41, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> All the error paths but one will free buf. Cover the remaining path so
> buf can't be leaked.
> 
> This bug was discovered and resolved using Coverity Static Analysis
> Security Testing (SAST) by Synopsys, Inc.
> 
> Fixes: 7f97193e6aa8 ("tools/xenstore: add live update command to xenstore-control")
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen

--------------4EFB6B980FE9BDA8096EE81B
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------4EFB6B980FE9BDA8096EE81B--

--BDb8nV6Ot42JNFNNbpoj3suO3toq24d7G--

--SNow14MrVG84zSb3g2AJH4NnTo1bPNEs2
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmA4nd8FAwAAAAAACgkQsN6d1ii/Ey+8
lgf+JH5740AVg+KAGopXe6TS9C/8gXP9qUwCcH4maZYFSPEzVJWRZ3w62pcW/CWsjQZnIfeOoXq7
1fvUWx1sMSRiqlM9+SWv2EIpWgjLLha9G22KdW/UVGySyRPdzQgCDNH7IanJawV8bePM8GX0O4Xw
1WJ1uTTfs6KsZHtIrLQoi0dPOJ5E4DEyG6YiaRpr4m0Y0S7mo4OOTIuJ+u59OHV3cxoss6cloGf5
px2wDHXmUNGhvPjTXALKMhYpPUNDxgaqAk12e1eVpPklKU0kUbfr1G38J++SNnplFRhxq6r7VAJV
zsyE/AkfCt856Bs1HNGsSwtlS6E3t9yIUek3aUOELA==
=jFy1
-----END PGP SIGNATURE-----

--SNow14MrVG84zSb3g2AJH4NnTo1bPNEs2--


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 07:06:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 07:06:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90094.170406 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFXDE-00021K-8S; Fri, 26 Feb 2021 07:06:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90094.170406; Fri, 26 Feb 2021 07:06:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFXDE-00021C-5I; Fri, 26 Feb 2021 07:06:56 +0000
Received: by outflank-mailman (input) for mailman id 90094;
 Fri, 26 Feb 2021 07:06:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7C3z=H4=intel.com=kevin.tian@srs-us1.protection.inumbo.net>)
 id 1lFXDD-000217-40
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 07:06:55 +0000
Received: from mga02.intel.com (unknown [134.134.136.20])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f4284846-b40c-4c9c-8768-4b1dee8efdfa;
 Fri, 26 Feb 2021 07:06:51 +0000 (UTC)
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
 by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 25 Feb 2021 23:06:50 -0800
Received: from fmsmsx604.amr.corp.intel.com ([10.18.126.84])
 by fmsmga001.fm.intel.com with ESMTP; 25 Feb 2021 23:06:48 -0800
Received: from fmsmsx612.amr.corp.intel.com (10.18.126.92) by
 fmsmsx604.amr.corp.intel.com (10.18.126.84) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2106.2; Thu, 25 Feb 2021 23:06:47 -0800
Received: from fmsmsx607.amr.corp.intel.com (10.18.126.87) by
 fmsmsx612.amr.corp.intel.com (10.18.126.92) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2106.2; Thu, 25 Feb 2021 23:06:46 -0800
Received: from FMSEDG603.ED.cps.intel.com (10.1.192.133) by
 fmsmsx607.amr.corp.intel.com (10.18.126.87) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2106.2
 via Frontend Transport; Thu, 25 Feb 2021 23:06:46 -0800
Received: from NAM04-MW2-obe.outbound.protection.outlook.com (104.47.73.171)
 by edgegateway.intel.com (192.55.55.68) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.1.2106.2; Thu, 25 Feb 2021 23:06:46 -0800
Received: from MWHPR11MB1886.namprd11.prod.outlook.com (2603:10b6:300:110::9)
 by CO1PR11MB4852.namprd11.prod.outlook.com (2603:10b6:303:9f::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3890.20; Fri, 26 Feb
 2021 07:06:45 +0000
Received: from MWHPR11MB1886.namprd11.prod.outlook.com
 ([fe80::f1b4:bace:1e44:4a46]) by MWHPR11MB1886.namprd11.prod.outlook.com
 ([fe80::f1b4:bace:1e44:4a46%6]) with mapi id 15.20.3890.020; Fri, 26 Feb 2021
 07:06:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f4284846-b40c-4c9c-8768-4b1dee8efdfa
IronPort-SDR: rAAaw3KLm8Wi1H9bgSYeYFGMrCKvCi893UQdW9ptVStxSHAaZku95s1+zgCW+2kN8uBtAHQXex
 X+gTSuIge49Q==
X-IronPort-AV: E=McAfee;i="6000,8403,9906"; a="172933295"
X-IronPort-AV: E=Sophos;i="5.81,207,1610438400"; 
   d="scan'208";a="172933295"
IronPort-SDR: 81yUy2mej71ymMlOwA/ufYfSwQUlAFaD+v3nn0Q7R+LFJK/OMvjdgiZownspi0f1yLVsmVPjS8
 7mPrPya04acw==
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.81,207,1610438400"; 
   d="scan'208";a="501200034"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=NHErMpdkELMUUAO39HMOjWjIa1P9Kc4j44x4Hdj5+Fi3bTRmb11aswvJW7+Iil3suqrfrESVYeJG6hME0qMgCb3XqYoK3nI7aNkdwk2JCT1KiVtE2F3kLeuFqhT8AxUYItUk9LpT+jO7l7pFjAL8405i8X/QXTDlW1qbV0zccFkTeiMxCohPjb8duBHltl+FlbtTQ6tfFmYTzGklRE3WDbypXcN0JVoxsXuPA2kC9aTkkrDZ8CZERhmFC23BcVdsaw4EzZeg2OWPJ6SRhaZ4BYlVQnm+g2Dzk5dp1tAvWTuUpRVBttLZ/zFNzkRKsX6oWH/Wbn0sHmQM07LO3EEfdw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=MoXytlSl0K1AXeXOek857oZH2IypKcyG+OGvMWS2mww=;
 b=iFyIuH/g5ghD5Pekqn7NnajakTEdoTE8yxeVso/gD0UgSPdyPutObdEVH3NdTV9ItMZfVyZ3gOQCIbM3w6XOG/xqE7DaY8qr9JNnSUBXwFZnYW3RschUIvo08TSs2LdSfDSzBmw40rqmNC23GHfP/WqeZ0i/LCeGzbhSbdKMrdwUZh1niU8v5A2YsLlBBdji+phSN/SjE+9zkDvCCfYI4bhZz5z4JNzUqw64ZsVLTsscAvVCaC7A2D74UVr5d/1f+zw+aB5gDLHaehvdS0oORhH+Ab7PNI3uBZGyHIVVFG1sG26ldu0o357WS6zd9gP0h6xrSlrmpKP6rG7j/Me6SQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=intel.com; dmarc=pass action=none header.from=intel.com;
 dkim=pass header.d=intel.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=intel.onmicrosoft.com;
 s=selector2-intel-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=MoXytlSl0K1AXeXOek857oZH2IypKcyG+OGvMWS2mww=;
 b=Ql8H+PXjC5ZtyPP7T6RPnT0xI+t+GanlICgUvpOYSk1WwI3yiD2kzCt6iaxA8FW2zo585Qu/P4V7CWTvJGyqJGbXmXR8/9rsSjjVef+eslLU9ApJWqTZpdzBm2K27AdmtT9dbdZVro+iSlJHBO15Wk0JnJ1pcnMS2zzIZjAGT0s=
From: "Tian, Kevin" <kevin.tian@intel.com>
To: Jan Beulich <jbeulich@suse.com>, =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?=
	<roger.pau@citrix.com>, "Cooper, Andrew" <andrew.cooper3@citrix.com>,
	"Nakajima, Jun" <jun.nakajima@intel.com>
CC: Wei Liu <wl@xen.org>, Ian Jackson <iwj@xenproject.org>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: RE: [PATCH v3 1/2][4.15] VMX: delay p2m insertion of APIC access page
Thread-Topic: [PATCH v3 1/2][4.15] VMX: delay p2m insertion of APIC access
 page
Thread-Index: AQHXCQl5Ge7Qk928g0a5oin2oF6ik6poksSAgAF27OA=
Date: Fri, 26 Feb 2021 07:06:45 +0000
Message-ID: <MWHPR11MB1886328CDB4FDBC61EF2BF818C9D9@MWHPR11MB1886.namprd11.prod.outlook.com>
References: <4731a3a3-906a-98ac-11ba-6a0723903391@suse.com>
 <90271e69-c07e-a32c-5531-a79b10ef03dd@suse.com>
 <94ac725c-ae3a-5380-aaa0-c8523074b581@suse.com>
In-Reply-To: <94ac725c-ae3a-5380-aaa0-c8523074b581@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
dlp-version: 11.5.1.3
dlp-product: dlpe-windows
dlp-reaction: no-action
authentication-results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=intel.com;
x-originating-ip: [101.88.226.48]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 9b4122cf-427f-434c-e364-08d8da251406
x-ms-traffictypediagnostic: CO1PR11MB4852:
x-ld-processed: 46c98d88-e344-4ed4-8496-4ed7712e255d,ExtAddr
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: <CO1PR11MB48522215B2B3A90013D4AFC58C9D9@CO1PR11MB4852.namprd11.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:6790;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: NfAS1+e9NvJKVoJuPHSTyj8nPKEDLoIY4P70jYRrUFd5Pz24O6HB5q0Mh1aDFO0fS6sEeDcHwuM4D5KHitvNN3CIHe80i3Y6t0MSFpUm1Yj2Jo5OR9/2WQAH4oY6lGy1i63nhEqChVM53Na0rVO3hPM9b9cKVgFcxnWpjy2IsD8Pv/SIR076E52vUp175R9nAAORExg6O0O/4CWcldcZY7axe9Bj+du/TEG/wDlP6zA8lBhketCjqR8sLepcUA6BrhMrCdBsjrOHrVQ/diaj1TsKigSUvetmPJpzMlefyOtWF/32LEGxDWjzEWKBR6TLpCihdzE9fLAOfavGVyYXbR2jPskNrkwQAEzLh85JRUS7TTw7yHm63VJlTAzzApoj/GoNyehJZo3Pm6u/r7i/UBzkJZMHWf+Vkn2mzjQInbFOVz1F+EfbIJSdh4W/0eUEmuskLSQAm8ijN9wNFi71dQ8M5Y41RVMJ1dsiS0ppIgOcKKQ2mKPf1+M3DX95Ic9ag99BNIdoQS0lH0neb682Jw==
x-forefront-antispam-report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:MWHPR11MB1886.namprd11.prod.outlook.com;PTR:;CAT:NONE;SFS:(396003)(376002)(346002)(39860400002)(366004)(136003)(6506007)(53546011)(71200400001)(33656002)(110136005)(54906003)(66946007)(64756008)(66476007)(316002)(76116006)(4326008)(55016002)(9686003)(2906002)(7696005)(478600001)(66446008)(86362001)(5660300002)(52536014)(8936002)(26005)(66556008)(8676002)(4744005)(6636002)(186003)(83380400001);DIR:OUT;SFP:1102;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: MWHPR11MB1886.namprd11.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 9b4122cf-427f-434c-e364-08d8da251406
X-MS-Exchange-CrossTenant-originalarrivaltime: 26 Feb 2021 07:06:45.5007
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 46c98d88-e344-4ed4-8496-4ed7712e255d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: hkCjnHBsC88drcOS3XKHHhAU0mIfe8F1Qvcb58fUGvVTbryMUHl8tpCMewUnuXpWZH1iMUBFVfGP9DGFsd7//Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CO1PR11MB4852
X-OriginatorOrg: intel.com

> From: Jan Beulich <jbeulich@suse.com>
> Sent: Thursday, February 25, 2021 4:44 PM
> 
> On 22.02.2021 11:56, Jan Beulich wrote:
> > --- a/xen/arch/x86/hvm/vmx/vmx.c
> > +++ b/xen/arch/x86/hvm/vmx/vmx.c
> > @@ -428,6 +428,14 @@ static void vmx_domain_relinquish_resour
> >      vmx_free_vlapic_mapping(d);
> >  }
> >
> > +static void domain_creation_finished(struct domain *d)
> > +{
> > +    if ( has_vlapic(d) && !mfn_eq(d->arch.hvm.vmx.apic_access_mfn, _mfn(0)) &&
> > +         set_mmio_p2m_entry(d, gaddr_to_gfn(APIC_DEFAULT_PHYS_BASE),
> > +                            d->arch.hvm.vmx.apic_access_mfn, PAGE_ORDER_4K) )
> > +        domain_crash(d);
> > +}
> 
> Having noticed that in patch 2 I also need to arrange for
> ept_get_entry_emt() to continue to return WB for this page, I'm
> inclined to add a respective assertion here. Would anyone object
> to me doing so?
> 
> Kevin, Jun - I'd like this to also serve as a ping for an ack
> (with or without the suggested ASSERT() addition).

Reviewed-by: Kevin Tian <kevin.tian@intel.com>


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 07:10:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 07:10:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90097.170418 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFXGZ-000332-NO; Fri, 26 Feb 2021 07:10:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90097.170418; Fri, 26 Feb 2021 07:10:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFXGZ-00032v-KI; Fri, 26 Feb 2021 07:10:23 +0000
Received: by outflank-mailman (input) for mailman id 90097;
 Fri, 26 Feb 2021 07:10:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ScZz=H4=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lFXGY-00032q-BD
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 07:10:22 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f061eb7e-2dcf-4b0e-8f64-f96c4635e391;
 Fri, 26 Feb 2021 07:10:21 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4B88DAD5C;
 Fri, 26 Feb 2021 07:10:20 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f061eb7e-2dcf-4b0e-8f64-f96c4635e391
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614323420; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=gQKRD2mZy1LZevXZyA5w0flN+3E9nGvJDoetL244laM=;
	b=o/1ARpcODT38CQnJnlnIV1mqKGQe6poQaBbMk830V/DyKN8PwEkD87nDW2TDHMosKd9LwQ
	wwhy5d3+MEYxnNZaVKI8XJM7aEt0zf53tfz8NelwCl+QGy1l5un99KzIWQQZAEoeF/KJCi
	RY1m4PjUVt7WZzhDAoauBrtZCAsQI6k=
Subject: Re: [PATCH for-4.15 5/5] tools/xenstored: Silence coverity when using
 xs_state_* structures
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Wei Liu <wl@xen.org>
References: <20210225174131.10115-1-julien@xen.org>
 <20210225174131.10115-6-julien@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <0aa89914-8fae-3731-a5a0-ccf4316ce96b@suse.com>
Date: Fri, 26 Feb 2021 08:10:19 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210225174131.10115-6-julien@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="5CVZ0FJ0G3RmnqH7vgLQMgsmFxwEJPkfG"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--5CVZ0FJ0G3RmnqH7vgLQMgsmFxwEJPkfG
Content-Type: multipart/mixed; boundary="yGZAsfZcPasZlSmROHZHvDvBimiGSZPsP";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Wei Liu <wl@xen.org>
Message-ID: <0aa89914-8fae-3731-a5a0-ccf4316ce96b@suse.com>
Subject: Re: [PATCH for-4.15 5/5] tools/xenstored: Silence coverity when using
 xs_state_* structures
References: <20210225174131.10115-1-julien@xen.org>
 <20210225174131.10115-6-julien@xen.org>
In-Reply-To: <20210225174131.10115-6-julien@xen.org>

--yGZAsfZcPasZlSmROHZHvDvBimiGSZPsP
Content-Type: multipart/mixed;
 boundary="------------AB235C8C38FF36930B2925B3"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------AB235C8C38FF36930B2925B3
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 25.02.21 18:41, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Coverity will report uninitialized values for every use of the xs_state_*
> structures in the save part. This can be prevented by using [0] rather
> than [] to define the variable-length arrays.
> 
> Coverity-ID: 1472398
> Coverity-ID: 1472397
> Coverity-ID: 1472396
> Coverity-ID: 1472395
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Sorry, but Coverity is clearly wrong here.

Should we really modify our code to work around bugs in external
static code analyzers?


Juergen

> 
> ---
> 
>  From my understanding, the tools and the hypervisor already rely on GNU
> extensions. So the change should be fine.
> 
> If not, we can use the same approach as XEN_FLEX_ARRAY_DIM.
> ---
>   tools/xenstore/include/xenstore_state.h | 6 +++---
>   1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/tools/xenstore/include/xenstore_state.h b/tools/xenstore/include/xenstore_state.h
> index ae0d053c8ffc..407d9e920c0f 100644
> --- a/tools/xenstore/include/xenstore_state.h
> +++ b/tools/xenstore/include/xenstore_state.h
> @@ -86,7 +86,7 @@ struct xs_state_connection {
>       uint16_t data_in_len;    /* Number of unprocessed bytes read from conn. */
>       uint16_t data_resp_len;  /* Size of partial response pending for conn. */
>       uint32_t data_out_len;   /* Number of bytes not yet written to conn. */
> -    uint8_t  data[];         /* Pending data (read, written) + 0-7 pad bytes. */
> +    uint8_t  data[0];         /* Pending data (read, written) + 0-7 pad bytes. */
>   };
>  
>   /* Watch: */
> @@ -94,7 +94,7 @@ struct xs_state_watch {
>       uint32_t conn_id;       /* Connection this watch is associated with. */
>       uint16_t path_length;   /* Number of bytes of path watched (incl. 0). */
>       uint16_t token_length;  /* Number of bytes of watch token (incl. 0). */
> -    uint8_t data[];         /* Path bytes, token bytes, 0-7 pad bytes. */
> +    uint8_t data[0];        /* Path bytes, token bytes, 0-7 pad bytes. */
>   };
>  
>   /* Transaction: */
> @@ -125,7 +125,7 @@ struct xs_state_node {
>   #define XS_STATE_NODE_TA_WRITTEN  0x0002
>       uint16_t perm_n;        /* Number of permissions (0 in TA: node deleted). */
>       /* Permissions (first is owner, has full access). */
> -    struct xs_state_node_perm perms[];
> +    struct xs_state_node_perm perms[0];
>       /* Path and data follows, plus 0-7 pad bytes. */
>   };
>   #endif /* XENSTORE_STATE_H */
> 


--------------AB235C8C38FF36930B2925B3
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"


--------------AB235C8C38FF36930B2925B3--

--yGZAsfZcPasZlSmROHZHvDvBimiGSZPsP--

--5CVZ0FJ0G3RmnqH7vgLQMgsmFxwEJPkfG
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmA4ntsFAwAAAAAACgkQsN6d1ii/Ey8W
Ywf/f3yjOjwcL4BdlS5l76In2wRed9FMpWfpU3BJ6Y9vo86JlW4RPaKRu7TNzZwdDSSiPwAbdwPu
9N7jyhzpUK2xfB7uasPksijVebz8YpiRl4AwomD7pblqU2O21NccOohWAi6rTo2+hXW9r6KENeoj
No/aEqBJaz9Td0IO5eicNPiYWjCLOBT49FJ+bExdq/+k+LJWZjHO00nU/ethhsYmDzxfe6vGrrGs
wEvuMYi1M/LyoT7pc+CBtIl2mHtGi6j9wmkLj5gQ+1rrxr5v29xvMStHAOsoxmrilAphDQcTIilX
NOQGC8eFlQTWJj/DsiT8mxVsw5lyg6Ccg3hDdNi7EA==
=BcoI
-----END PGP SIGNATURE-----

--5CVZ0FJ0G3RmnqH7vgLQMgsmFxwEJPkfG--


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 07:41:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 07:41:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90101.170429 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFXkj-0006D6-As; Fri, 26 Feb 2021 07:41:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90101.170429; Fri, 26 Feb 2021 07:41:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFXkj-0006Cz-7q; Fri, 26 Feb 2021 07:41:33 +0000
Received: by outflank-mailman (input) for mailman id 90101;
 Fri, 26 Feb 2021 07:41:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=eEmz=H4=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lFXki-0006Cu-Ac
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 07:41:32 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2237a4f4-cbf6-44db-85f0-f634d1b520c0;
 Fri, 26 Feb 2021 07:41:30 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 27559ADDD;
 Fri, 26 Feb 2021 07:41:29 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2237a4f4-cbf6-44db-85f0-f634d1b520c0
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614325289; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=AyWuNdA7FXC5/tU0kRxD8yU7VkIW5TdtlUXQHM6JuME=;
	b=Mhde/J2MunZV90byjEEqb7635Sbb7JrcMRSVMKABcJb/89vqDMdrGI621v8TDWTE/3Bg2W
	SmQWjljlG9g2/wKj3eCFM53sDP9qX7+ym6NVuWyQv94dejPdMQIZA23jUyeZh+6gft86ar
	dyJXd0nhfV0v3I3DKR3d20dsPh5JfYg=
Subject: Re: [PATCH for-next 2/6] xen/common: Guard iommu symbols with
 CONFIG_HAS_PASSTHROUGH
To: Connor Davis <connojdavis@gmail.com>
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <cover.1614265718.git.connojdavis@gmail.com>
 <444658f690c81b9e93c2c709fa1032c049646763.1614265718.git.connojdavis@gmail.com>
 <02f32a31-1c23-46af-54a7-7d44b5ffb613@suse.com>
 <20210226025402.ryuxpicaqujmfxbu@thewall>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <34bb915a-be60-1cc7-c802-e0491b6741df@suse.com>
Date: Fri, 26 Feb 2021 08:41:25 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210226025402.ryuxpicaqujmfxbu@thewall>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 26.02.2021 03:54, Connor Davis wrote:
> On Thu, Feb 25, 2021 at 04:45:15PM +0100, Jan Beulich wrote:
>> On 25.02.2021 16:24, Connor Davis wrote:
>>> --- a/xen/common/domain.c
>>> +++ b/xen/common/domain.c
>>> @@ -501,7 +501,9 @@ static int sanitise_domain_config(struct xen_domctl_createdomain *config)
>>>              return -EINVAL;
>>>          }
>>>  
>>> +#ifdef CONFIG_HAS_PASSTHROUGH
>>>          if ( !iommu_enabled )
>>> +#endif
>>>          {
>>>              dprintk(XENLOG_INFO, "IOMMU requested but not available\n");
>>>              return -EINVAL;
>>
>> Where possible - to avoid such #ifdef-ary - the symbol instead should
>> get #define-d to a sensible value ("false" in this case) in the header.
>> The other cases here may indeed need to remain as you have them.
>>
> Do you prefer the #define in the same function near the if or
> somewhere near the top of the file?

Neither, if I understand you correctly. It should be in the same header
where the extern declaration lives, for the whole code base to consume.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 07:53:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 07:53:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90109.170447 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFXwS-0007Nd-H4; Fri, 26 Feb 2021 07:53:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90109.170447; Fri, 26 Feb 2021 07:53:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFXwS-0007NW-Du; Fri, 26 Feb 2021 07:53:40 +0000
Received: by outflank-mailman (input) for mailman id 90109;
 Fri, 26 Feb 2021 07:53:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFXwR-0007NN-4i; Fri, 26 Feb 2021 07:53:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFXwQ-0006F8-U6; Fri, 26 Feb 2021 07:53:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFXwQ-0003q8-Lo; Fri, 26 Feb 2021 07:53:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lFXwQ-0000l4-LN; Fri, 26 Feb 2021 07:53:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=1V7DTC9jyCc7clsAy5OheqHOdyhglJtctIT/tRnveOo=; b=tKBsiJiqpbJ9YW/wytrcL2Q9tk
	qkIrOQshSftl5u0WHKZobswxb5/ZEqoqtsy2aZWcF615yn9nYH5e/sBfbgBlTbhGZgYZjeE47l+p8
	rtHk1uh5eDdacoy/xm2RqpBuVyf8NlMk3ZV/YOjqKqE+YLce7fHU5XAqQcy6W4BVCbTQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159693-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 159693: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=eaaf9397f40a7a893a04ee86676478cca3c80d9d
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 26 Feb 2021 07:53:38 +0000

flight 159693 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159693/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              eaaf9397f40a7a893a04ee86676478cca3c80d9d
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  231 days
Failing since        151818  2020-07-11 04:18:52 Z  230 days  223 attempts
Testing same since   159693  2021-02-26 04:18:55 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 44231 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 08:31:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 08:31:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90120.170463 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFYWj-0003jJ-Ki; Fri, 26 Feb 2021 08:31:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90120.170463; Fri, 26 Feb 2021 08:31:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFYWj-0003jC-Gx; Fri, 26 Feb 2021 08:31:09 +0000
Received: by outflank-mailman (input) for mailman id 90120;
 Fri, 26 Feb 2021 08:31:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=eEmz=H4=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lFYWh-0003j7-PR
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 08:31:07 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 299e8d67-cced-4d79-bb36-3d15241eebce;
 Fri, 26 Feb 2021 08:31:03 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3F07EAAAE;
 Fri, 26 Feb 2021 08:31:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 299e8d67-cced-4d79-bb36-3d15241eebce
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614328262; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=zizc9mgtALmCUMLDX8SdCLIf5N1ESalejPn7NvmvnQE=;
	b=XmVTz74Y0YHUvQcuQf8EP8cDnqpZBL249xWLVnwv3o5PXCZuw6wGxv1cR7DR+1ggxPKd/D
	uMBdFe2Y+VBQ5z1x6p8ap+i3rhSKTfMdk8C/Q7KkSySm/iWJI3lP6fvJPnXJ88bCP11eYy
	xV7lPLynT913Ewtlc8fizPrHTEOiaDA=
Subject: Re: [PATCH for-next 3/6] xen/sched: Fix build when NR_CPUS == 1
To: Connor Davis <connojdavis@gmail.com>
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
 George Dunlap <george.dunlap@citrix.com>, Dario Faggioli
 <dfaggioli@suse.com>, xen-devel@lists.xenproject.org
References: <cover.1614265718.git.connojdavis@gmail.com>
 <d0922adc698ab76223d76a0a7f328a72cedf00ad.1614265718.git.connojdavis@gmail.com>
 <b4ad0f83-e071-49f8-17a8-7fec0e226b9a@suse.com>
 <20210226030833.uugfojf5kkxhlpr7@thewall>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <eb19a389-d2b3-d0cc-fd25-62bbb121cf98@suse.com>
Date: Fri, 26 Feb 2021 09:31:02 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210226030833.uugfojf5kkxhlpr7@thewall>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 26.02.2021 04:08, Connor Davis wrote:
> On Thu, Feb 25, 2021 at 04:50:02PM +0100, Jan Beulich wrote:
>> On 25.02.2021 16:24, Connor Davis wrote:
>>> Return from cpu_schedule_up when either cpu is 0 or
>>> NR_CPUS == 1. This fixes the following:
>>>
>>> core.c: In function 'cpu_schedule_up':
>>> core.c:2769:19: error: array subscript 1 is above array bounds
>>> of 'struct vcpu *[1]' [-Werror=array-bounds]
>>>  2769 |     if ( idle_vcpu[cpu] == NULL )
>>>       |
>>>
>>> Signed-off-by: Connor Davis <connojdavis@gmail.com>
>>> ---
>>>  xen/common/sched/core.c | 2 +-
>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
>>> index 9745a77eee..f5ec65bf9b 100644
>>> --- a/xen/common/sched/core.c
>>> +++ b/xen/common/sched/core.c
>>> @@ -2763,7 +2763,7 @@ static int cpu_schedule_up(unsigned int cpu)
>>>      cpumask_set_cpu(cpu, &sched_res_mask);
>>>  
>>>      /* Boot CPU is dealt with later in scheduler_init(). */
>>> -    if ( cpu == 0 )
>>> +    if ( cpu == 0 || NR_CPUS == 1 )
>>>          return 0;
>>>  
>>>      if ( idle_vcpu[cpu] == NULL )
>>
>> I'm not convinced a compiler warning is due here, and in turn
>> I'm not sure we want/need to work around this the way you do.
> 
> It seems like a reasonable warning to me, but of course I'm open
> to dealing with it in a different way.
> 
>> First question is whether that's just a specific compiler
>> version that's flawed. If it's not just a special case (e.g.
> 
> The docker container uses gcc 10.2.0 from
> https://github.com/riscv/riscv-gnu-toolchain

Ah yes, at -O2 I can observe the warning on e.g.

extern int array[N];

int test(unsigned i) {
	if(i == N - 1)
		return 0;
	return array[i];
}

when N=1. No warning appears when N=2 or higher, yet if it is
sensible to emit for N=1 then it would imo be similarly
sensible to emit in other cases. The only difference is that
when N=1, there's no i for which the array access would ever
be valid, while e.g. for N=2 there's exactly one such i.

I've tried an x86 build with NR_CPUS=1, and this hits the case
you found and a 2nd one, where behavior is even more puzzling.
For the case you've found I'd like to suggest as alternative

@@ -2769,6 +2769,12 @@ static int cpu_schedule_up(unsigned int
     if ( cpu == 0 )
         return 0;
 
+    /*
+     * Guard in particular also against the compiler suspecting out-of-bounds
+     * array accesses below when NR_CPUS=1.
+     */
+    BUG_ON(cpu >= NR_CPUS);
+
     if ( idle_vcpu[cpu] == NULL )
         vcpu_create(idle_vcpu[0]->domain, cpu);
     else

To fix the x86 build in this regard we'd additionally need
something along the lines of

--- unstable.orig/xen/arch/x86/genapic/x2apic.c
+++ unstable/xen/arch/x86/genapic/x2apic.c
@@ -54,7 +54,17 @@ static void init_apic_ldr_x2apic_cluster
     per_cpu(cluster_cpus, this_cpu) = cluster_cpus_spare;
     for_each_online_cpu ( cpu )
     {
-        if (this_cpu == cpu || x2apic_cluster(this_cpu) != x2apic_cluster(cpu))
+        if ( this_cpu == cpu )
+            continue;
+        /*
+         * Guard in particular against the compiler suspecting out-of-bounds
+         * array accesses below when NR_CPUS=1 (oddly enough with gcc 10 it
+         * is the 1st of these alone which actually helps, not the 2nd, nor
+         * are both required together there).
+         */
+        BUG_ON(this_cpu >= NR_CPUS);
+        BUG_ON(cpu >= NR_CPUS);
+        if ( x2apic_cluster(this_cpu) != x2apic_cluster(cpu) )
             continue;
         per_cpu(cluster_cpus, this_cpu) = per_cpu(cluster_cpus, cpu);
         break;

but the comment points out how strangely the compiler behaves here.
Even flipping around the two sides of the != doesn't change its
behavior. It is perhaps relevant to note here that there's no
special casing of smp_processor_id() in the NR_CPUS=1 case, so the
compiler can't infer this_cpu == 0.

Once we've settled on how to change common/sched/core.c I guess
I'll then adjust the x86-specific change accordingly and submit as
a separate fix (or I could of course also bundle both changes then).

Jan


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 08:45:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 08:45:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90124.170475 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFYkA-0004uG-Ss; Fri, 26 Feb 2021 08:45:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90124.170475; Fri, 26 Feb 2021 08:45:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFYkA-0004u9-OZ; Fri, 26 Feb 2021 08:45:02 +0000
Received: by outflank-mailman (input) for mailman id 90124;
 Fri, 26 Feb 2021 08:45:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFYk8-0004u1-PP; Fri, 26 Feb 2021 08:45:00 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFYk8-0007aY-Ge; Fri, 26 Feb 2021 08:45:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFYk8-0006AE-8y; Fri, 26 Feb 2021 08:45:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lFYk8-0008QJ-8W; Fri, 26 Feb 2021 08:45:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=dbiKeSrTw7vvxtbsgHf3rl8WHNI977j+E4klWGgQ010=; b=AcMNLaE5p9vamudw1eluO3yM44
	syNB/hLx5ALf6gXa5oB24CLizjHNIsIxNzpEWxQ23L4RXRnPvuB2mxjbiXTl4FJg7dgGPqR3j+1uv
	6fV3ItLAGSNZxptaI/u1WgTSbpAG7998DWLWYuUdHwGLBtF+ZKE1MMZ4q3keOalujBuk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159681-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 159681: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=7ef8134565dccf9186d5eabd7dbb4ecae6dead87
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 26 Feb 2021 08:45:00 +0000

flight 159681 qemu-mainline real [real]
flight 159697 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/159681/
http://logs.test-lab.xenproject.org/osstest/logs/159697/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                7ef8134565dccf9186d5eabd7dbb4ecae6dead87
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  189 days
Failing since        152659  2020-08-21 14:07:39 Z  188 days  364 attempts
Testing same since   159563  2021-02-22 23:37:57 Z    3 days    6 attempts

------------------------------------------------------------
425 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 117355 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 08:50:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 08:50:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90130.170490 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFYpB-0005yf-OQ; Fri, 26 Feb 2021 08:50:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90130.170490; Fri, 26 Feb 2021 08:50:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFYpB-0005yY-LA; Fri, 26 Feb 2021 08:50:13 +0000
Received: by outflank-mailman (input) for mailman id 90130;
 Fri, 26 Feb 2021 08:50:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=eEmz=H4=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lFYpA-0005yT-D5
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 08:50:12 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 593772d0-45b7-45ec-bdc2-e78ff85fe4ac;
 Fri, 26 Feb 2021 08:50:11 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BA752AAAE;
 Fri, 26 Feb 2021 08:50:10 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 593772d0-45b7-45ec-bdc2-e78ff85fe4ac
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614329410; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=kMTWP7CPpEzU78kPeUn3urXfusHO6i/meLXqHpCmV2U=;
	b=REhURiPkBh3+TBwGE7ccQaAMo1Lvtmn/tA41JbNsRUkWo2vFGw5JhEGsCXZUt6zr9SlRqT
	p5WutVSOjnc67+6fowghDDCW7f61Hz8KF6+OONow44D8aaFPbqH0rfq/lTtLE46MJSmIVw
	vTAeguqU4mrmKJqvskbkEAi8sF2LW+U=
Subject: Re: [PATCH for-4.15] dmop: Add XEN_DMOP_nr_vcpus
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Ian Jackson <iwj@xenproject.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210225170946.4297-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <dfdb04dd-f7ce-59f2-1dd8-532c5be8222f@suse.com>
Date: Fri, 26 Feb 2021 09:50:11 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210225170946.4297-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 25.02.2021 18:09, Andrew Cooper wrote:
> Curiously absent from the stable API/ABIs is an ability to query the number of
> vcpus which a domain has.  Emulators need to know this information in
> particular to know how many struct ioreq's live in the ioreq server mappings.
> 
> In practice, this forces all userspace to link against libxenctrl to use
> xc_domain_getinfo(), which rather defeats the purpose of the stable libraries.
> 
> Introduce a DMOP to retrieve this information and surface it in
> libxendevicemodel to help emulators shed their use of unstable interfaces.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Hypervisor part
Reviewed-by: Jan Beulich <jbeulich@suse.com>
with one small adjustment:

> --- a/xen/include/xlat.lst
> +++ b/xen/include/xlat.lst
> @@ -107,6 +107,7 @@
>  ?	dm_op_set_pci_intx_level	hvm/dm_op.h
>  ?	dm_op_set_pci_link_route	hvm/dm_op.h
>  ?	dm_op_track_dirty_vram		hvm/dm_op.h
> +?	dm_op_nr_vcpus			hvm/dm_op.h

We try to keep these sorted alphabetically, so please move the
insertion up a few lines.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 08:53:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 08:53:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90133.170502 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFYrv-000680-62; Fri, 26 Feb 2021 08:53:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90133.170502; Fri, 26 Feb 2021 08:53:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFYrv-00067t-2O; Fri, 26 Feb 2021 08:53:03 +0000
Received: by outflank-mailman (input) for mailman id 90133;
 Fri, 26 Feb 2021 08:53:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=eEmz=H4=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lFYrt-00067n-LG
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 08:53:01 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2a4a077c-346a-4ed7-a79d-bdcb62383fc0;
 Fri, 26 Feb 2021 08:53:00 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 64BFFAAAE;
 Fri, 26 Feb 2021 08:52:59 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2a4a077c-346a-4ed7-a79d-bdcb62383fc0
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614329579; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=FFALBeC8k+k/GI+FB/NkN2E3GR1U69a47Bjnc+yRKOw=;
	b=YBJeuDT1kn7hK94Ef2MhwrZ59bT1pky1vaQj4ye5qCXMm7T5lK/DMRPHwvSvpEONdG6ABv
	CUFaX0Qro1T482W0yDdHfobCyWXIIYSkl+jgmkr/zzl1flP1ZBweidSfhAhn2ByudmR4Ug
	AjWREYtj/blj4oy/e6GBUo/ckseaTq8=
Subject: Re: [PATCH for-4.15] x86/dmop: Properly fail for PV guests
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Ian Jackson <iwj@xenproject.org>,
 Paul Durrant <paul@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210225170936.3008-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <5e50e1df-23c9-92f7-0d57-d5c0589500b7@suse.com>
Date: Fri, 26 Feb 2021 09:52:59 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210225170936.3008-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 25.02.2021 18:09, Andrew Cooper wrote:
> The current code has an early exit for PV guests, but it returns 0 having done
> nothing.
> 
> Fixes: 524a98c2ac5 ("public / x86: introduce __HYPERCALL_dm_op...")
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 08:57:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 08:57:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90136.170514 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFYwF-0006I1-Nb; Fri, 26 Feb 2021 08:57:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90136.170514; Fri, 26 Feb 2021 08:57:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFYwF-0006Hu-Ka; Fri, 26 Feb 2021 08:57:31 +0000
Received: by outflank-mailman (input) for mailman id 90136;
 Fri, 26 Feb 2021 08:57:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lFYwE-0006Hp-Eq
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 08:57:30 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lFYwC-0007nx-3p; Fri, 26 Feb 2021 08:57:28 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lFYwB-0007tI-Ow; Fri, 26 Feb 2021 08:57:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=bfSRLRQrbGKnSidBkvM8jJR/POW5DzFBzZ5BkdttpWg=; b=cI29TdAIqO++Gz3Ro4q4Nsr658
	wXvmSacLrROH6ueKMU0JjiaPbL7AIH7TMtKM00LfHF6IPiYOb7/UMYkz9KvzBD2ZD6a9yMBRCLbbP
	TcRwMG+W1jc/6Yyv7f2+X1IzwRam9+1+M7bgDGqsF2bkVLN0Sls5rIaAB3Vt2STy79p0=;
Subject: Re: [PATCH for-4.15 5/5] tools/xenstored: Silence coverity when using
 xs_state_* structures
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Wei Liu <wl@xen.org>,
 "Manthey, Norbert" <nmanthey@amazon.de>
References: <20210225174131.10115-1-julien@xen.org>
 <20210225174131.10115-6-julien@xen.org>
 <0aa89914-8fae-3731-a5a0-ccf4316ce96b@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <37e219e6-a66b-383d-2a60-b61fdd1d66a8@xen.org>
Date: Fri, 26 Feb 2021 08:57:25 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <0aa89914-8fae-3731-a5a0-ccf4316ce96b@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Juergen,

On 26/02/2021 07:10, Jürgen Groß wrote:
> On 25.02.21 18:41, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> Coverity will report uninitialized values for every use of xs_state_*
>> structures in the save part. This can be prevented by using [0]
>> rather than [] to define the variable-length array.
>>
>> Coverity-ID: 1472398
>> Coverity-ID: 1472397
>> Coverity-ID: 1472396
>> Coverity-ID: 1472395
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> Sorry, but Coverity is clearly wrong here.
I saw what Andrew wrote, but neither of you really provided enough 
information for me to reach the same conclusion. Care to provide more details?

> 
> Should we really modify our code to work around bugs in external
> static code analyzers?

I don't think it is OK to have 866 issues (and counting) and keep 
ignoring them because Coverity may be wrong. We should fix them one way 
or another. If this means telling Coverity they are reporting false 
positives, then fine.

But for that, I first need a bit more detail on why they are clearly wrong.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 09:00:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 09:00:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90139.170526 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFYzJ-0007KY-6y; Fri, 26 Feb 2021 09:00:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90139.170526; Fri, 26 Feb 2021 09:00:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFYzJ-0007KR-30; Fri, 26 Feb 2021 09:00:41 +0000
Received: by outflank-mailman (input) for mailman id 90139;
 Fri, 26 Feb 2021 09:00:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zK4+=H4=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lFYzH-0007KM-M8
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 09:00:39 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 06285692-3f75-44d9-b64d-a74935c1d1f2;
 Fri, 26 Feb 2021 09:00:38 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 06285692-3f75-44d9-b64d-a74935c1d1f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614330038;
  h=from:to:cc:subject:date:message-id:
   content-transfer-encoding:mime-version;
  bh=gXbZzcVyPSH7VpANcMc9mc4rRjd993p7ZSowGV8SYGY=;
  b=Vx9PdrC23Ff2dyBUPteYrc+QrOQZdWx1K3o5iRBTBTAiurQ81zcp1bgQ
   7gWyO9AyOBRk7ud/MQomh1gMnZ0RlGSmaKrvXfZunpC6onyoabJ2RRsVb
   PcMrKO+Jf298RG/PvtY4k78bvQZKn73cs/+V5ctvYUSPTQnMRSOUvU5sg
   U=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: 0odw4RDyCKB1wbwD/7amzGcfHSxrbGua3dtSPSvZVsq6wO0PX4IVajTsXPHOmhaVHvi2Ts4/VZ
 awREbBGfIZlKgOIUFPM1BJbH0k/7oUbVNLKEUv84WX4ZZhBBjUqUSwE5o5i9aCGljbT2Lw1Gmh
 gu+VS6bWsh4+tT6a85encNUPdFrp4L2j2cuCpLFkNtT/Eit0KaU/YSghBRYip8JkXoGgI+p5Uo
 leeFPGgjapD3LYpIYnWJH5pyRdVwTs7v5gSa3Ssv02TRIqV4Rv5D6eDyx5LN8gt9AxmD2qEN6s
 olk=
X-SBRS: 5.2
X-MesageID: 38015329
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,208,1610427600"; 
   d="scan'208";a="38015329"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=L+3dau2enC6Vrd0BGIn8ZEFi6dL9QixsHT/yuKMsN9htHzqr1iEVy0M9/XWaeP47sqA6YrTfL1sEZNvn474r5SMnSp0XC3FAzmQo5XhXZy4mtXZkaS8q+eo7ivhD6bHipprYTagjLoeuGzVoc17AkXI/Pp2s0i5bqT++LVUvo9cgieLFxdf9MjtBv9dmIWmPdywe5xcKKzqSt+DAqRRYapQJJYp0v/8HhwsF9XfwSQEJYyC/PHB429dYacD+XeJtUrMiw/hSnzEBSHHqTudJ7bIRLJwPVR6zRNonDA6foPhRmIN6KTLgrZL421n8Q+hQ+977BGlK9I3gtuvIMsg66Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wPerxV7xu/IWz1CufDmUXDCXn9MszM2GBog3PpAeQX4=;
 b=OBv/jErmO4L3ak5qIBFMKBv1GOPSft6e4Q6ktlf1mDEgRH5yHnBLA1DHssNgcsEy7Fs4t5HfLPtDUmEa3KzfaDLxGuQejK3l26fkS2mEDUYyoDjrgW8d4FrlCkUnG9Dh244wnPXNGDsVBTW2dxBfBYcmzAtBynEY7mDwh8MUjmdCyGt0ICEswFHM/6I4Jk+bWbkvhLSJ9WdA8XJLSlfw31l+1UhXyjbWji318kHxYJaXk8NwmNM54cSa3tqoa4EmftM6bEtJpkdTo8BYdZuM3NpGg9rXz2AqJx5xVGDcKZFN0M3u1PvT2w5XUtcWIYVqsSU/ZivVQbwCxfts7FQ/FQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wPerxV7xu/IWz1CufDmUXDCXn9MszM2GBog3PpAeQX4=;
 b=Qfhs9sHPKMCMGmqk00d+JdFPxkzu3MVIAA5fd8qoBMY/qYwVnHvTSEvZ+GD8WppBZlHECf+0BKl3RZFCt6ux1OsUm0UEsmmptoazM3QKWI9Cvz7g2DjwTg8t+m/dnzvsPShZ7bjfLwVfW/eL+xlJwfJAHBwdyStatD+hEAdUjpU=
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Roger Pau Monne <roger.pau@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Ian Jackson
	<iwj@xenproject.org>, George Dunlap <george.dunlap@citrix.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Doug Goldstein
	<cardoe@cardoe.com>
Subject: [PATCH for-4.15 0/3] firmware: fix build on Alpine
Date: Fri, 26 Feb 2021 09:59:05 +0100
Message-ID: <20210226085908.21254-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.30.1
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: PR3P195CA0006.EURP195.PROD.OUTLOOK.COM
 (2603:10a6:102:b6::11) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 91ad3921-096f-439c-550b-08d8da34fa71
X-MS-TrafficTypeDiagnostic: DM6PR03MB4394:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB4394A18B1F2877862DA6B4F38F9D9@DM6PR03MB4394.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: hG9CH3f1MnwUkYH71w/GPpnWMZu82M0Nuwlr67HNXHCWo/HPItgfXrLB8ZVVXo1IAY1LkrAJ6HdBUOootu8pZnsX0G1Md0Nbw7aVh+4be/0GS9UYRovcDLOdoI7i3qQ5z6Z8uQdMyVSIUacVSHWFhDyLiY5XoldCZ1Uc9TL9WyULx8Ikt7LPf8gbTGLBEQmxlxUFWp/Y+Q722yzVBLP9p+b+4BWd2X9njgzfbeo3uKnSlBRB95MzRvMlfXb9xWaKz+4EV3kfPDAHtmJ2+YXzwPqEK/4SdMB2gy5/6sNS+fPcFT8I+X7+u34ZGUqmbbHi5gvSkBdJofGH5Rvp5B8gxKQXejM4iCM+e74hK8u/qM6uhJ0VWS26Nmbf7Zd2mcvfwmUqvqhOtfKAgbTPWa7hKSTSlKstXVQ0m3Cg9WTOHrTJrhBrKHyjs7z+RWTYByeKJGGiqtYZqed79AmQjtvHLlvW1LLz93tSp0RBln0i7s+bE1+6B+mwu2askQT1BJ0VI2XHDwwHQFcboOzk8ysG0j1ZDSMKZ9MY/YKkkPWdTrvVoEHkjIOwywFDXG5Z9QIrbgV4w+eftM3KNiyV6963xkKJzBi+5DtzHjCaHCJpbK4=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(346002)(376002)(136003)(366004)(396003)(39840400004)(6916009)(66556008)(66946007)(36756003)(316002)(478600001)(6496006)(6666004)(66476007)(86362001)(2616005)(26005)(4326008)(54906003)(186003)(1076003)(5660300002)(16526019)(966005)(2906002)(8936002)(956004)(83380400001)(8676002)(6486002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?cm9zS1VSckJHdlNmQ1NMSnVXNWdDNWl6aGFXRmNtcjRvR0lCSm8rWURSWFB2?=
 =?utf-8?B?OHJqWXJaVWc4Ui9kcnF4cVVCWjhpN3NOTFlRS2VmV1ZTYVNDV05sa2oyQk1J?=
 =?utf-8?B?YWQrNks2KzdIMUZyQUVkaTYvTFZmL2lLbDNPVjFPYi9TWHBMMTBCOXJLZXMx?=
 =?utf-8?B?eVBvNHFuNjIwQTdkemdrOE5WRDZ2QUw0SXRjbTROT2VRUG1NdnpHcHpMMUQv?=
 =?utf-8?B?eStyMzVuSmpoUFJleXJYN3lobVIwOFYvOVNVVThKR0lNTjNpR0ZmcTBCNHdm?=
 =?utf-8?B?ZXBteGJhUGdLVmV3V21mWk85NW5FaGU2eENkS0Yvb3FtSzcvb05HcHVvdjNp?=
 =?utf-8?B?Z2U4cG9RUFFydEhWNDJPd2tzaVByc2ZIdG9jOVNyaFQ1UHhVVmtwSldmdS9F?=
 =?utf-8?B?WDFteXhYQzgxaFVIR1Q4WUhabWFwUmdPdUdtY3pPS0NJREJFOTdieElxc1Yz?=
 =?utf-8?B?cTdKd3NkelltMXFzU0pYYkxhUkI2NEpHc3VPSjViaFVFY3VpSU5pWVNsYjNl?=
 =?utf-8?B?M1VmMjJGekJJSkt1REtQaURQVTg0TlNPem5tc3d2OVJRRGUxZ25ObjdoZFBh?=
 =?utf-8?B?L2tOSVdoUWlzV0tUMEwzSVVPRTMya2pEK2o4UU5KcmlRUzBUMEkxSzFnc1hp?=
 =?utf-8?B?eGRPVHlWUHpDMDRZbkQ2VXNQUGN6aVhjbkxPTzVpYk94cnVKZDAwSWpZN0VM?=
 =?utf-8?B?cCtYU0ExYUpNV2xVSy8xUW1yRG1DV3F2ZkJGTFJNRG1MSlZjTzhpVXpRTXVM?=
 =?utf-8?B?YzZXdzZSTmdhRG9wcU8va3UrK1owaHREcXNYVDJHVUxzWENleWpqTWtRYmFp?=
 =?utf-8?B?VU5JN0ZEZko2dkpzdWl5bU1XS2I2L0xFVTZPcHRjRjNVZ2F2ZklQWkFKejho?=
 =?utf-8?B?MWk3TkZ5bGZIWjJJOGZyVlJmYUxPb1EzbTI0VFMxRDNpTzBSbFNqaEVUU21o?=
 =?utf-8?B?cjVLSktWSEpFdEJQdUNLUFEvVDRhWXlpdmNJY3lyNExsTC8weFB4T2E3bmdC?=
 =?utf-8?B?d3hzbmQ3YVZUQjJDbjc1SjNQQnRmV1AxR3BXVmp0MXNxMVpTV0RrUUdWSDJQ?=
 =?utf-8?B?VnlUckJQVVNPc0grYWRnQXBLVjJkcS9KaFg5WDJZQzZkNVI5R0ZzcFhJcFZX?=
 =?utf-8?B?eGxqQ0VEVWh0bHMvcm8xRlNIUHlWOWVEVGd0UWc5ODkraXIxNGhzQ1lTZlk2?=
 =?utf-8?B?VmxQU09KMExhMDZNWVJJclpQM2k2cHVRSUNQVG1IS0ZMVHFaM1YwOFU4MGw5?=
 =?utf-8?B?akxVaGJQamFMK0ljL20wRWIybVFaaHRmdVNVMXk3a3MwZXFnRDhCcGZDVEpJ?=
 =?utf-8?B?Q2hVRyt2M1hHaitLYTRpMzdkWkh6MEtSN0NjT05ocDFWUC9kTlJQZ1hRaDNG?=
 =?utf-8?B?WnNIVUEyUE04TVZlcmh6VGRkT0NvM25VZStHTmd3YmdYejhwYkJtSU1PV0JQ?=
 =?utf-8?B?azdhRi9PVHJ6VlB2TTB2eXdUSCttNE5oZnR3TnB2VVRJVnRKdWhUWlpFeWV5?=
 =?utf-8?B?SUx6dE4vMHl5aXZQL2FMcXZ2TzYrVjM5ckhUZmg4VmVEeWJlaFBjeTd1Uml4?=
 =?utf-8?B?L3lxd24xRUE3MHAvclp3Q0xPM2lzcElnd25kOEZFQTVTSnhSTmRmMUVVT01Q?=
 =?utf-8?B?NzluUy9UTmUyNHUvbENsK2FVTUtPektuM3drbnptbldDblByelp0eDltQ0RP?=
 =?utf-8?B?SzU3RjRyWWNXMlZaaTZWYWw1Sk15OW94N0UxdUZ1Vjg1QVJVRXJ6MWhiTlhi?=
 =?utf-8?Q?wExs0Vs4eIAVNuu2hsTdXNUq4nKuKFUNI4K/aHQ?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 91ad3921-096f-439c-550b-08d8da34fa71
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Feb 2021 09:00:34.8866
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: zUKZo4PfLJAwMs4Umr07+7VD55sDogjTIELsYgN2/iK/Y6MSEvQQDxK5Tf+hP0QW22eP69rQraSaU0naRVR2YA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4394
X-OriginatorOrg: citrix.com

Hello,

While the series started as a build fix for Alpine, I think it is
interesting on its own for other OSes/distros, since it allows removing
the i386 libc as a build dependency.

This is done by providing a standalone set of headers suitable for the
needs of the firmware build. Patch 2 contains the full description of
why it's done this way.

The main risk for patches 1 and 2 is breaking the build on some obscure
distro/OS and toolchain combination. We aim to have this mostly covered
by gitlab CI. Patch 3's main risk is breaking the Alpine containers in
gitlab CI, but those are already failing.

I wanted to send this yesterday but was waiting on the gitlab CI output;
it's now all green:

https://gitlab.com/xen-project/people/royger/xen/-/pipelines/261928726

Thanks, Roger.

Roger Pau Monne (3):
  hvmloader: do not include inttypes.h
  firmware: provide a stand alone set of headers
  automation: enable rombios build on Alpine

 README                                        |  3 --
 automation/scripts/build                      |  5 +--
 tools/firmware/Rules.mk                       | 11 ++++++
 tools/firmware/hvmloader/32bitbios_support.c  |  2 +-
 tools/firmware/include/stdarg.h               | 10 +++++
 tools/firmware/include/stdbool.h              |  9 +++++
 tools/firmware/include/stddef.h               | 10 +++++
 tools/firmware/include/stdint.h               | 39 +++++++++++++++++++
 tools/firmware/rombios/32bit/rombios_compat.h |  4 +-
 tools/firmware/rombios/rombios.c              |  5 +--
 10 files changed, 85 insertions(+), 13 deletions(-)
 create mode 100644 tools/firmware/include/stdarg.h
 create mode 100644 tools/firmware/include/stdbool.h
 create mode 100644 tools/firmware/include/stddef.h
 create mode 100644 tools/firmware/include/stdint.h

-- 
2.30.1



From xen-devel-bounces@lists.xenproject.org Fri Feb 26 09:00:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 09:00:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90140.170538 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFYzP-0007NO-FC; Fri, 26 Feb 2021 09:00:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90140.170538; Fri, 26 Feb 2021 09:00:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFYzP-0007NC-Bk; Fri, 26 Feb 2021 09:00:47 +0000
Received: by outflank-mailman (input) for mailman id 90140;
 Fri, 26 Feb 2021 09:00:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zK4+=H4=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lFYzO-0007Mw-Ie
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 09:00:46 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7d1333be-ec27-4dca-9f4c-e6dacbe905d6;
 Fri, 26 Feb 2021 09:00:44 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7d1333be-ec27-4dca-9f4c-e6dacbe905d6
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614330044;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:content-transfer-encoding:mime-version;
  bh=BKIbao0IT5e1SwZzXMHrrFJKqyrW7kXtKWLoGUwv0qw=;
  b=DKtYxOjTPhJNCl/wm35CC9HvzbcKQhj+ha9h2eIuZK+ru9dawEGn1p/u
   GEaYTNp5qrXsE/z85KabSkbiUwpBhVMSrpTpg5JsSyme/8U2cf0jJ8WcM
   NTQFbhFe2ethBc8Eh06LHNgFBbQxn6dvBBkkzRPorpmaghqaP6MDHSwXe
   8=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: Rw6EQFzdd2pfg4ASxidV+NAkGTGdf1pNH/RLfORxx0pKkndWJPVFmhZTs1051UPGxj+Ij5eeeg
 Go8MQHviieXUteyUmIEjF1w4cB0fIQjRZjjUJ9jL3mtxEUp5x0614Vb+LFgdmH3i8Wbafm/QdO
 U29Kk0+nUf8pBur7aRSkWc2H63DNXFu2bmEbM++FVQPb0zWzy1XvT9sTIaXYY7C7Cgu8NgVKii
 GbT8AE3NMNkY6QuJtMibGwiShg5918YyCQBawzEm3tL4oYb3jgfB5Jae8io/3g9cvgtAhXZyHe
 0WQ=
X-SBRS: 5.2
X-MesageID: 38105673
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,208,1610427600"; 
   d="scan'208";a="38105673"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=U0SMC773LEOjlKGFk16pT1oQ8c8/jIUTpkghRo3e3JXhorS+wpaMNk5FFAKpSHfw7g5I/k6eq40dSOh4JJ/477gGK5ioZ6XjtauNZxho9PyTBCUgesfYcsQYaYMiY+SVnLHkR6MaghHkVlH5/ARCL9bcvwalIUqqwZ6wTPofpFzvMEmYDdrOIRVIjLteUD+DgBjJerZtRPfxZ6u9b7SMVfFDyPE4NDdksIGcFT2noXE0SDr38V4gx0V2YVyiUWkHCNMEg0POfgj/6Fg58xH7x+jeUe1UZH9FLj91qrFUnyHFMmTXdrJH2zBVBjr47VeId9FX2l/jZJfIXiJD1KPtgA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Hj37e+SNrz2nKwe5qTt02taxIT6WJpsEwV4bs2+RqYE=;
 b=HssAwkpMyNUvvcR/Z44J19Je98VDKm2pRoGvYtjmy5wtAfPDDdyICq2sbXLZtmDy60pnm/i+Fwd0JHY6jh9ThFz6OrLPjgb8oIFy3UcXqcDDCMGhS+Mm8sZ10c8A/pqBl2mEjzdm0N/65GFkIPH7CZ0ve2HfpJJ+bH0kRTgHrwJhu8NF/U5+kMTGR9gO+d+wCzi8jaiS28pHbCmVRidY9xMPaGYYr052/T4pGBUn0fLYg/8Xtu3DgvlZQmn9k2m0+e3CPtVogeeuyiMBVkW/O8I7KLuJpo0KcGhgDqN5WR43TXAZDFiZ9+SO8m46SxvRWI5laNhSCNcODhaNnCUC+Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Hj37e+SNrz2nKwe5qTt02taxIT6WJpsEwV4bs2+RqYE=;
 b=w/IOg6kkTz7PhS8T8onPKq5YsYPz/9FvqXHYMYJUeWJt0m6iSTy7+bwUZS5EumsPt33tfKOdA00uUrgkK/zWwnwXIYTDEST41oDEyg2zyjf2afJwbRvEpyV08S8Oi7oWDC+rwK9gO3SPkl2DDJ0L5W7t6vSfDpxE+KzE7ojxb3Q=
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Roger Pau Monne <roger.pau@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Ian Jackson
	<iwj@xenproject.org>
Subject: [PATCH for-4.15 1/3] hvmloader: do not include inttypes.h
Date: Fri, 26 Feb 2021 09:59:06 +0100
Message-ID: <20210226085908.21254-2-roger.pau@citrix.com>
X-Mailer: git-send-email 2.30.1
In-Reply-To: <20210226085908.21254-1-roger.pau@citrix.com>
References: <20210226085908.21254-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: PR0P264CA0135.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:100:1a::27) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: e7dd8cf7-8b11-4705-2e47-08d8da34fe72
X-MS-TrafficTypeDiagnostic: DM6PR03MB4394:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB4394CA38234D463768B965DD8F9D9@DM6PR03MB4394.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:5236;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: sb9kNTeIPFSUNuY4xcmv69YXRRrtBMoL60fYDPFi85HjGsAx6dP556hBH9uWEnufx3LM+wbAJiP56HJ44okc81z9EZQNlW5f55DS3S5reUdGVTIQzZQ1joawS6yJc7nbOV8NBTmLSC4k86HoYBSOr6IzUd+ErTT0hjkygVCLhUwXFU6wCs1y2FDJ/XnPU8rbr0g8j1ImHY9rFRRo26/Fybp/nbHV8M8qWIRKszI2wczy8YRiskjedwSjpEWIQAn2g9ISDHXkM/TiauxiuszjPtNAi1Q1V7bdyp6uaPKLs/hhJF7zDjeXp6PwLZW+NYW8XyuZmPrQ62goZ4M96fWRDF3EE+HdPFb+HbJYmv1gcXQayZB7a/d8xbe/78kU7RbFQBBvp6igcxoboukBOu8XXLQZjmru20QeftR2Y9NCpTIaRxotmF3ymO6615cDmijqhelsjmpwfxPa3e0hib+RDj7N50TrlCu0ftbNqfwhxqa1oDMHzbY3h0K9R1N/O3Z2FkRGuFZzzB/xTOkuNWs2TS+8Zg0101Z7nfXb1vrlbk+PJSWtthF2c498Pt/LPjCsCxeca2P5mJwvthzI7cjbDrRXmWVCXPZzm3XccrfXn1Y=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(346002)(376002)(136003)(366004)(396003)(39840400004)(6916009)(66556008)(66946007)(36756003)(316002)(478600001)(6496006)(6666004)(66476007)(86362001)(2616005)(26005)(4326008)(54906003)(186003)(1076003)(5660300002)(16526019)(2906002)(4744005)(8936002)(956004)(83380400001)(8676002)(6486002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?OHlqclMyY20xemVqclJyR2EwdDErRTRBNFhtQ0o3ZmgwWFVtT1BHNERDTWl6?=
 =?utf-8?B?MlpUdWV5TXdVZ29uWHJ0aUoydGU1anhlbG40c0FiS2c1SkdtTGc2dXlRR2pm?=
 =?utf-8?B?d3A3WnJzdTJVQlFocFNlOEQ1ZXZyQnZEckZQNElJcVRkR0RJZUxqRWZpbHJI?=
 =?utf-8?B?YTNXcklVZHh4VzNweXpHWm1VN2licUpXakdraEZnemFqeGFkQVhkaDhXN1ZD?=
 =?utf-8?B?TWExTFpxNDBUMHNlY1M3R09qMUN3eGkvZUo2a3I4RHp6TUxZV0FwOW5McHBt?=
 =?utf-8?B?VnU2MjV2VFRVczYrRzJWSFV2MGtudDV3TC9RRW8vU0ZxZkhySENKYk9HNVNH?=
 =?utf-8?B?S0VvVHB2MG0yNnd0dWxhNUE5dldOYUJNVU5ydkZTaTZneXVuVTZMNEM0WDF2?=
 =?utf-8?B?RDl6YU9vL2d3cmVOTDB3M3lpdWp6QjJKS0J1SzdyRG14bEFlWU9ITFFZUktr?=
 =?utf-8?B?WEROMVkrWUNMdmZiWGVOUnRRM0JsRWR5eGEvSVg4T2RhN0tKQ1Q4cENLRk5W?=
 =?utf-8?B?OENZRGZ4aXVadldnVEZjTnpmSmVFYm1WQmVocnozZWxLY213aUJOeklwbk42?=
 =?utf-8?B?TmFDT3JaaXg4aVZtKzR5amljY0tMak9PQ1NKV2w4c3Y0czEwVXFqbzZSQmRl?=
 =?utf-8?B?QWQ5cnorOEtHZ2Z3RFB3VFdSeDVocnR4ZEtnSzNjeWRPejQ0c3FlY0tLVTNO?=
 =?utf-8?B?ZXRlTWZwQjI4eHIxRnRMUGtpamVuMTJtSnh0ODdoai9BMElUS2hRSTA4bE9o?=
 =?utf-8?B?eTFMbmF5Q3dGeEkzbzVCVStWLzNzY1BiN2tQSlpucFhiRDFDK1VXYTI2M3BT?=
 =?utf-8?B?YnEzR1hUYndOVmtTK0RnTkk5cUZtZWdDYVBENUl6cU5zM0hqRnBIVkM0NGc3?=
 =?utf-8?B?bG4zQUtDeUpCUGNoQ2kyY1Fwd1V6L1pKeVNiM0F1U0hYamN4ZysrSlVjczc2?=
 =?utf-8?B?QmFpRFN6QXVuL2VkVzVvaERkcmE1b3A4S2xnMVhYMjd3R1M0VG94MzUzWk11?=
 =?utf-8?B?aXNyRk5vWVQrUURjK1p3QWlMQmR3b1MyZWE4WG93VktLVVByeTdjN0tyQm91?=
 =?utf-8?B?bVFyckZXS0lmZmp2S0xDKzl2QVc5WG9adE1DMXcrZUhjbzV2UXlLMm5LaVVz?=
 =?utf-8?B?ZFNlbWhsNGJBcUJhdzV6VDFsU3BCM05LYmhFYndxRjg5aXRpU3pTeUFyQTdt?=
 =?utf-8?B?SFJTRE5XZnRMRHhmMFdJbTJHNkRKQ2pGTEhMaEhJTVRBcURFa1NvRVNZM2lG?=
 =?utf-8?B?aVpUU3c2aHVnVllqVVFRRG41cWwyL3hIbnZCRzFiSGw4L2F1QXhjNDFWUjJ1?=
 =?utf-8?B?WUk3UTE4QjNRdUFMS3k3WkQzemNkZ3h6bW1oNm05bHdSeDB5QVRxQnZmakZk?=
 =?utf-8?B?dGZmVXlGZW9IbHUyeVpOTnRUOWFqV2NLTFluL0pNSm5CcGJqVXpYR0VRWm8y?=
 =?utf-8?B?QU1Nc25wRnFhNnBONmx3eVMrN3NwclNYRHVPaFdabjAvWHplYVJkSnd4Vkcz?=
 =?utf-8?B?bDhCdlc2TS9xZFdFV3FmeXJoei82TXJIUFRITWdGVzROanRaWkpVekdBUG1M?=
 =?utf-8?B?bmUrM296cm5Rd1FXTTlSS0phb0oxbVRIUzM1SHNqamNRaG95ZnAwUmdnbCtQ?=
 =?utf-8?B?ZGdPd3JzeVl0TWdHVHBpV1dGMUcwQzlzTUJqa3pTOUkxNUp5R3BhWURLbEVH?=
 =?utf-8?B?a2NkSm1XcTZhTkJ1OUhJblNURDdUdDUwaW9STjdoQ0R4OTVzQkhQUm54WlAy?=
 =?utf-8?Q?aNMeWxjvPICYzblaz6bk0JIBdIjhBHlD4xqpG4u?=
X-MS-Exchange-CrossTenant-Network-Message-Id: e7dd8cf7-8b11-4705-2e47-08d8da34fe72
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Feb 2021 09:00:41.6001
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: rvM0SuD9dzndUB8FQFIIUNhY901n/k/cQcpIGMzguoyCk7SDBQCbHqxGaVTdWf/Hk75e5yCgmAQPKHglQW8xAA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4394
X-OriginatorOrg: citrix.com

elfstructs.h doesn't require anything from inttypes.h: it's more
appropriate to include stdint.h instead, which contains the type
declarations required for the ELF types.
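
To illustrate the point (my own sketch, not code taken from
elfstructs.h; the type names merely follow the usual ELF naming
convention), ELF structure definitions only use the fixed-width types
that stdint.h declares, never the printf/scanf format macros (PRIx32
and friends) that inttypes.h layers on top:

```c
/* Sketch: a truncated ELF32 header built purely from stdint.h types.
 * The real elfstructs.h has more fields; nothing here needs inttypes.h. */
#include <stdint.h>

typedef uint32_t Elf32_Addr;   /* conventional ELF type names */
typedef uint16_t Elf32_Half;
typedef uint32_t Elf32_Off;
typedef uint32_t Elf32_Word;

typedef struct {
    unsigned char e_ident[16]; /* identification bytes */
    Elf32_Half    e_type;      /* object file type */
    Elf32_Half    e_machine;   /* target architecture */
    Elf32_Word    e_version;   /* ELF version */
    Elf32_Addr    e_entry;     /* entry point address */
} Elf32_Ehdr_sketch;           /* truncated for illustration */
```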

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 tools/firmware/hvmloader/32bitbios_support.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/firmware/hvmloader/32bitbios_support.c b/tools/firmware/hvmloader/32bitbios_support.c
index e726946a7b..d1ead1ec11 100644
--- a/tools/firmware/hvmloader/32bitbios_support.c
+++ b/tools/firmware/hvmloader/32bitbios_support.c
@@ -20,7 +20,7 @@
  * this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
-#include <inttypes.h>
+#include <stdint.h>
 #include <xen/libelf/elfstructs.h>
 #ifdef __sun__
 #include <sys/machelf.h>
-- 
2.30.1



From xen-devel-bounces@lists.xenproject.org Fri Feb 26 09:00:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 09:00:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90141.170550 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFYzV-0007Rn-U7; Fri, 26 Feb 2021 09:00:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90141.170550; Fri, 26 Feb 2021 09:00:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFYzV-0007Rg-Qp; Fri, 26 Feb 2021 09:00:53 +0000
Received: by outflank-mailman (input) for mailman id 90141;
 Fri, 26 Feb 2021 09:00:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zK4+=H4=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lFYzU-0007RG-RL
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 09:00:52 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e1a5ffe0-d036-4480-b216-b768df423c93;
 Fri, 26 Feb 2021 09:00:51 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e1a5ffe0-d036-4480-b216-b768df423c93
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614330051;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:content-transfer-encoding:mime-version;
  bh=5NtpMLJEx8ISDuyDwo32JNV2W/yTr2AO05voxqQUy2M=;
  b=SyLmLvTgqIUVyqtPR1o2VJiDh55Wnf3p4GuOEIM8leflfOcd2WBzhHlj
   KCoA4eLhOyj9PK204A4IMRUQ6JQFKVV5NNpvbc62MT0ZFUURmjWffedLr
   8S8965XnVhYEEtFIEy8h90xC94VXX1SFgVax0lnDEfZqBQ4nP22QU76g4
   w=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: jLi68Waa/N8LQ0PKHVnvgKUNu9Nu5VuifzXXhWDLOMrHKF/0yz7PG6Kr4AoX3uODmuluIY5YL8
 jFwU6LF5wGJ2Vb2MEiYNYnZZBzfH0MJmxHimH7fY25j0TL0WIs2sWJCdkgfupN7cIstcO2F/CM
 1bPFeF4bKMqBwctLN6bWLjtjwJhIqcOz06TN/o2kXMqD7YgSbhHZP424VeAI8YF3+JAM9Xy1Ad
 PQWQIvnBfdIp5lkhwXatTSfISSCnotd0PW+jmLpXflpOz7JMdq9HscSB2UUMVXq913zLh2kW6+
 0T0=
X-SBRS: 5.2
X-MesageID: 38456681
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,208,1610427600"; 
   d="scan'208";a="38456681"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Vlsow9uwpk4IXlP3nriwoSvcbEqRh5tEUOyUkRRwahuuMVJNyzT//PXm6LQtyBPkfi4OJQadgsA5YYBzxgsJlwXaT8alWIPthvMhYNJHk/7QKogK77XmFLnuP9b3LxDmXpPxt+p4d1YCfE/4GYGzUlzm9Bo1no7vK7yZk6anWZAFUzTSvrnsalwTG2K4ZnDc7N9qK3ReW8qIdU/pzzTWk0yh721tWZWRGSOYgDE0BmQGJD9LrkX6jujMrTYnJSkUKuRB8/0w7WqhU1y0E1EOBeitBv1f23DaOUbSURJYnIQWxW8h2ol5LHjvhG9/ZeWnBTHs5cvKdJM7NsgS5CkmoQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=RV5OFa57bqpUoSqK0ZdahTgCjVx4yyQ9MfBf1+OuoDs=;
 b=n5RfmP/+q8YmfbZ1gSJlqSxcxEmBkNtEwRohmyz5+/fowAnbinkrDY7gLe5V3bLu3qRtrj4j35I2R4MDYhjZAE5rWIKXQjDPqhcdU54H66n++t/loZvboubmYczctyLShiA60jyVRhiiIX7ZuAs0KiCTDuChzMTvLqjj+JtIZSBZ74M7tY4pZWwSIxE2seSIKaDPTnhYxvPuvH/jHhKM7X4HrY/A9cYDZC4Ijzjve8WrZlN3qL//mH1A7ytO1yo/SY28C1TUWlS4kEnGcfoHCJTAnVgwv7bptii+i7gep5GUcB1HoV+qWv137bVHq1k3ResferWt4NH98uDqLtcjCA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=RV5OFa57bqpUoSqK0ZdahTgCjVx4yyQ9MfBf1+OuoDs=;
 b=mFRJengj8YyQ2mGO6Z58ij9EI7TwBw/caUR9mwmWa2FIdbhdUmsXmhxa23NY/pecFZjiHWDLN0HjBF5myn30T3EF8A13erTGvxtT8U8eaOsPf5JeiB1w7tzNyaue6Eoi+TmOPy7YMbJNW3qpR1jQtbJ1s9rnEhN0r6Os1HuSk0c=
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Roger Pau Monne <roger.pau@citrix.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, "Ian
 Jackson" <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>
Subject: [PATCH for-4.15 2/3] firmware: provide a stand alone set of headers
Date: Fri, 26 Feb 2021 09:59:07 +0100
Message-ID: <20210226085908.21254-3-roger.pau@citrix.com>
X-Mailer: git-send-email 2.30.1
In-Reply-To: <20210226085908.21254-1-roger.pau@citrix.com>
References: <20210226085908.21254-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: PR3P195CA0015.EURP195.PROD.OUTLOOK.COM
 (2603:10a6:102:b6::20) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: c8b8e075-cbba-4edc-459d-08d8da35016d
X-MS-TrafficTypeDiagnostic: DM6PR03MB4394:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB43942219C73D866D936976478F9D9@DM6PR03MB4394.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: mfNptK3dKlvz0qZGKYN5WnE5lcA+PXEITzjnhJx0pnmCI076LXMXL2/MugNHNPHsK1Y2MgPXPwHISUTuoqw69NR6a42kY0UGqv8sx+h8FfevWpIIhF2iQRq1HMeT5iRIcquuRhD7YGKRtFGVhx6tjM1j+4j0up3speF1ytIZO/xaobcGzjlAkSS39EJkQGp+CllV5VtFrqvDLKtTy6BOvL9+ypSYd4DnpTg7KBOf4Y1LqQFNcz2zdTcqoilSUJAPVz8FPwpfYqKKUS26igAefQRE2ujZDhVZI0zQWYHjWBJExZqFbhACW9X5h51ebgeyR9D6ICT6UojnzVB/y39CqaiDQbAQLiYpLJYHMRN8lp+Qj+RXcw3un7ZEZKflat/a6tTQL6/tpSHhEno9N8YIqJReIwwOz33mL0EnXRTZxQfhoek23a4YqjfwEc44dGWsVbdeQu3tJxvg6oVkWevOScDm1npvAPKYoAkNINFURqP0CwNix9YIMRMl0NJNxzjX0O+uWXyBpC6zwW3TKkygOP3lvu/FZif+lXTKEIpkXdXlmK8QA3lTpdx9WIhP0ciLCKR8lVY4WNLrBFH4/G+KidZI9WPg7EMP9nzd1FlzoY4=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(346002)(39860400002)(376002)(136003)(366004)(396003)(6916009)(66556008)(66946007)(36756003)(316002)(478600001)(6496006)(6666004)(66476007)(86362001)(2616005)(26005)(4326008)(54906003)(186003)(1076003)(5660300002)(16526019)(966005)(2906002)(8936002)(956004)(83380400001)(8676002)(6486002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?WlBWK3FDUXdSeS9jNnVkL1JzSk5Ya1dUS056bDlqSEtiNlJUZ0Y1OTZvZ25F?=
 =?utf-8?B?Ym5NOHJ4S2tPTTE1TEI2SnR1THd4bFhreWkrMTZaRXVTdWpOSEJCY2lNQ3dJ?=
 =?utf-8?B?N0ZZTjdocnpjY3JJRHhEUTM1a0NibVFkRGlUY1gyMDZDdVYxdmwyR2hQemhC?=
 =?utf-8?B?QjRCRittVTQ4UFFtQ2o5emdyL2VHM1pxRjQrUWl5OHVZNWFZdWlDQ0ZMem8w?=
 =?utf-8?B?RmNBUFNhRVRsNDdLWmtGZEl2NVExVStjUlVSWWQxZ1hOZENEWG1BejNRZ1VD?=
 =?utf-8?B?SWY2TzRBbWN6Yk1rMUdKTXQ0d1A4bVdqeENSM3cvQzJET2RneVNyM2pVSFh6?=
 =?utf-8?B?R2d5c0lLTEwxamNXTGxPOXRKRGdMTmp0b2ZjUExuc2ZxVGN4M2dnSHRJREZi?=
 =?utf-8?B?OUdJUnJmR1hXZkVwVU5GbDA1MUk0Y0JDbTdVM0JUNUZVTWNzdU4relJzVzFT?=
 =?utf-8?B?dkgrQkh1MWMyS05JM1EzYnVoVkNGMW5RNWpUVEZ1cXBPdVJQZlR3WlVXYTVo?=
 =?utf-8?B?WUxRT1JkL09qNHlXdU1rcC9yb05LcEtJOW42WlNVTXFkUTZCQVJVQmtrSDdu?=
 =?utf-8?B?VmRkRzRLT1JLd2lqc2dheUplYVl6blAvMHFNeE9oWVpHRk5icU10Rk13NGZq?=
 =?utf-8?B?WFpzMGc1WVhGYVZjMWN1SEc2VFAyZWdPUWlXUHhpQnJOUzZ5Y05CeFpRWmlC?=
 =?utf-8?B?czRQeVNIWS9GKzdJQ2g0WEFXSGNIbzNHVkFWSU1lNHQ5SEdOa2xYYnkxdk1J?=
 =?utf-8?B?L1I4U1llR2ZZL0szcHkxWGRIbGRRN29nOFlCOXRuRi9Ub2thRnZacDBxaDZa?=
 =?utf-8?B?UTZ2emFZOGFLU1BJd0wvK2F0UzdsM1RENFUvcUZqUnJ6b3NZUTVDbEF4aER6?=
 =?utf-8?B?U0k2N2ZmRlJmQmgyY3ZDSzBBQWhtVHlOWWUvZjFWN3hlZG5jcDZzdE9FM09U?=
 =?utf-8?B?ZFlXcXNCOVFLVTZLT2F0RmpoamlGeXAxbGZHekpZUkJkNUZTVGFETmtMSHZC?=
 =?utf-8?B?b1NHcWpaUkxJelo0YjRTYnRxZlkxTGp5VlR2aVhBY2Fac2tFQnBhWXA4ZjRY?=
 =?utf-8?B?dTFXYUQ1TGlKZlNIamsrUHJMd0MyMEhnaGFydWQwUmRXZzRLcmJSckwxcjRn?=
 =?utf-8?B?RXgyNk9qeldhOUhrblhzcEQrOU5neUkwTytPSU5VbW1rNHUyaDVRSnA2Zk9V?=
 =?utf-8?B?YlhrcXZ5VUhRKzQ3dHIrVXRKTUl5b1phS3VNWmhiOE9qcUxsaHl4cUNkbEY4?=
 =?utf-8?B?NWJjZ01TT2pDK0kvZjdWbTRDUHpudWtPSzExdWtVbWIzUzJGa2NrK05YSDJZ?=
 =?utf-8?B?TjJpRGRLdDlqRXVOSWkzVElSZTZJS2owMGc1R3JoRVpNTlkydVB5aFdLQVMw?=
 =?utf-8?B?c240SHY0Q1I4N0FNU1FaMTZTdytMSGlzeHhVWTdWN0lQMmJSRkxJemR4UkFF?=
 =?utf-8?B?V0FjUEhiUUJxSjVlUWk5akJFTzVoYXAxZk5TRkVVS0c1NGYwRXM5VkZ5QUNP?=
 =?utf-8?B?SWZLc01jaytlb2wrdG1vS3MyL2pwSkpVZU5LanpwYmNMS1V6UU1IUzVzQlN0?=
 =?utf-8?B?aWFyN1VpWHBkRnJ6K0pQSDk2a3pPREsvQlA0WXVObGdBWUduOVVGTmtXcEJu?=
 =?utf-8?B?MlBHRWVTZnhCOXFIZzRCVHJrRzJTOWpiS3ZTN2RjRTlYeExyNXZzbVp1TUc3?=
 =?utf-8?B?bHV5eTkwZFpyZWl2YThIRHF0M1A5N2pIN3hEVkxmUHNsMTc3YllXbUU4U0Rs?=
 =?utf-8?Q?BPRxDhE5vm4VHnWTxC7319T/yZll8fpVyC7BkwN?=
X-MS-Exchange-CrossTenant-Network-Message-Id: c8b8e075-cbba-4edc-459d-08d8da35016d
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Feb 2021 09:00:46.6419
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: SDmOC1xhz686PaamtHRwgndd9eCuEW/a4JGYge28DZVxdkBN2nXGz8qU+nFtdFJaJ904kjelKf6Zl43b7TY/2Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4394
X-OriginatorOrg: citrix.com

The current firmware build relies on having 32bit-compatible headers
installed in order to build some of the 32bit firmware, but that
usually requires multilib support and installing an i386 libc when
building from an amd64 environment, which is cumbersome just to get
some headers.

Usually this could be solved by using the -ffreestanding compiler
option, which drops the system headers in favor of a private set of
freestanding headers provided by the compiler itself and not tied to
libc. However, that option is broken at least with the gcc compiler
provided in Alpine Linux, as the system include path (ie: /usr/include)
takes precedence over the gcc private include path:

#include <...> search starts here:
 /usr/include
 /usr/lib/gcc/x86_64-alpine-linux-musl/10.2.1/include

Since -ffreestanding is currently broken on at least that distro, and
for resilience against future compilers also having the option broken,
provide a stand-alone set of 32bit headers required for the firmware
build.

This allows dropping the build-time dependency on having an
i386-compatible set of libc headers on amd64.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
There's the argument for fixing gcc in Alpine and instead just using
-ffreestanding. I think that's more fragile than providing our own set
of stand-alone headers for the firmware bits. Having the include paths
wrongly sorted can easily make the system headers be picked up instead
of the gcc ones, and then the build can randomly fail because the
system headers could be amd64-only (like the musl ones).

I've also seen clang-9 on Debian with the following include paths:

#include "..." search starts here:
#include <...> search starts here:
 /usr/local/include
 /usr/lib/llvm-9/lib/clang/9.0.1/include
 /usr/include/x86_64-linux-gnu
 /usr/include

That also seems slightly dangerous, as /usr/local/include comes before
the compiler private path.

IMO using our own set of stand-alone headers is more resilient.
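
As a quick sanity check (my own sketch, not part of the series), the
standalone stdint.h has to agree with what a 32bit x86 compiler
expects; a file like the following, built against it (e.g. with
"gcc -m32 -nostdinc -Itools/firmware/include"), turns any mismatch into
a compile-time error via _Static_assert:

```c
/* Compile-time checks that the standalone stdint.h provides the usual
 * fixed-width types and limits; any mismatch fails the build. */
#include <stdint.h>

_Static_assert(sizeof(uint8_t)  == 1, "uint8_t must be 8 bits");
_Static_assert(sizeof(uint16_t) == 2, "uint16_t must be 16 bits");
_Static_assert(sizeof(uint32_t) == 4, "uint32_t must be 32 bits");
_Static_assert(sizeof(uint64_t) == 8, "uint64_t must be 64 bits");

/* INT64_MIN is spelled (-0x7fffffffffffffffll - 1) because the literal
 * 0x8000000000000000 would not fit in a signed 64bit type. */
_Static_assert(INT64_MIN + INT64_MAX == -1, "two's complement limits");
```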

Regarding the release risks, the main one would be breaking the build
(as it's currently broken on Alpine). I think there's a very low risk
of this change successfully producing a binary image that's broken,
and hence with enough build testing it should be safe to merge.
---
 README                                        |  3 --
 tools/firmware/Rules.mk                       | 11 ++++++
 tools/firmware/include/stdarg.h               | 10 +++++
 tools/firmware/include/stdbool.h              |  9 +++++
 tools/firmware/include/stddef.h               | 10 +++++
 tools/firmware/include/stdint.h               | 39 +++++++++++++++++++
 tools/firmware/rombios/32bit/rombios_compat.h |  4 +-
 7 files changed, 80 insertions(+), 6 deletions(-)
 create mode 100644 tools/firmware/include/stdarg.h
 create mode 100644 tools/firmware/include/stdbool.h
 create mode 100644 tools/firmware/include/stddef.h
 create mode 100644 tools/firmware/include/stdint.h

diff --git a/README b/README
index 33cdf6b826..5167bb1708 100644
--- a/README
+++ b/README
@@ -62,9 +62,6 @@ provided by your OS distributor:
     * GNU bison and GNU flex
     * GNU gettext
     * ACPI ASL compiler (iasl)
-    * Libc multiarch package (e.g. libc6-dev-i386 / glibc-devel.i686).
-      Required when building on a 64-bit platform to build
-      32-bit components which are enabled on a default build.
 
 In addition to the above there are a number of optional build
 prerequisites. Omitting these will cause the related features to be
diff --git a/tools/firmware/Rules.mk b/tools/firmware/Rules.mk
index 26bbddccd4..5d09ab06df 100644
--- a/tools/firmware/Rules.mk
+++ b/tools/firmware/Rules.mk
@@ -17,3 +17,14 @@ $(call cc-options-add,CFLAGS,CC,$(EMBEDDED_EXTRA_CFLAGS))
 
 # Extra CFLAGS suitable for an embedded type of environment.
 CFLAGS += -fno-builtin -msoft-float
+
+# Use our own set of library headers to build firmware.
+#
+# Ideally we would instead use -ffreestanding, but that relies on the compiler
+# having the right order for include paths (ie: compiler private headers before
+# system ones). This is not the case in Alpine at least which searches system
+# headers before compiler ones, and has been reported upstream:
+# https://gitlab.alpinelinux.org/alpine/aports/-/issues/12477
+# In the meantime (and for resilience against broken compilers) use our own set
+# of headers that provide what's needed for the firmware build.
+CFLAGS += -nostdinc -I$(XEN_ROOT)/tools/firmware/include
diff --git a/tools/firmware/include/stdarg.h b/tools/firmware/include/stdarg.h
new file mode 100644
index 0000000000..c5e3761cd2
--- /dev/null
+++ b/tools/firmware/include/stdarg.h
@@ -0,0 +1,10 @@
+#ifndef _STDARG_H_
+#define _STDARG_H_
+
+typedef __builtin_va_list va_list;
+#define va_copy(dest, src) __builtin_va_copy(dest, src)
+#define va_start(ap, last) __builtin_va_start(ap, last)
+#define va_end(ap) __builtin_va_end(ap)
+#define va_arg __builtin_va_arg
+
+#endif
diff --git a/tools/firmware/include/stdbool.h b/tools/firmware/include/stdbool.h
new file mode 100644
index 0000000000..0cf76b106c
--- /dev/null
+++ b/tools/firmware/include/stdbool.h
@@ -0,0 +1,9 @@
+#ifndef _STDBOOL_H_
+#define _STDBOOL_H_
+
+#define bool _Bool
+#define true 1
+#define false 0
+#define __bool_true_false_are_defined 1
+
+#endif
diff --git a/tools/firmware/include/stddef.h b/tools/firmware/include/stddef.h
new file mode 100644
index 0000000000..c7f974608a
--- /dev/null
+++ b/tools/firmware/include/stddef.h
@@ -0,0 +1,10 @@
+#ifndef _STDDEF_H_
+#define _STDDEF_H_
+
+typedef __SIZE_TYPE__ size_t;
+
+#define NULL ((void*)0)
+
+#define offsetof(t, m) __builtin_offsetof(t, m)
+
+#endif
diff --git a/tools/firmware/include/stdint.h b/tools/firmware/include/stdint.h
new file mode 100644
index 0000000000..7514096846
--- /dev/null
+++ b/tools/firmware/include/stdint.h
@@ -0,0 +1,39 @@
+#ifndef _STDINT_H_
+#define _STDINT_H_
+
+#ifdef __LP64__
+#error "32bit only header"
+#endif
+
+typedef unsigned char uint8_t;
+typedef signed char int8_t;
+
+typedef unsigned short uint16_t;
+typedef signed short int16_t;
+
+typedef unsigned int uint32_t;
+typedef signed int int32_t;
+
+typedef unsigned long long uint64_t;
+typedef signed long long int64_t;
+
+#define INT8_MIN        (-0x7f-1)
+#define INT16_MIN       (-0x7fff-1)
+#define INT32_MIN       (-0x7fffffff-1)
+#define INT64_MIN       (-0x7fffffffffffffffll-1)
+
+#define INT8_MAX        0x7f
+#define INT16_MAX       0x7fff
+#define INT32_MAX       0x7fffffff
+#define INT64_MAX       0x7fffffffffffffffll
+
+#define UINT8_MAX       0xff
+#define UINT16_MAX      0xffff
+#define UINT32_MAX      0xffffffffu
+#define UINT64_MAX      0xffffffffffffffffull
+
+typedef uint32_t uintptr_t;
+
+#define UINTPTR_MAX     UINT32_MAX
+
+#endif
diff --git a/tools/firmware/rombios/32bit/rombios_compat.h b/tools/firmware/rombios/32bit/rombios_compat.h
index 3fe7d67721..8ba4c17ffd 100644
--- a/tools/firmware/rombios/32bit/rombios_compat.h
+++ b/tools/firmware/rombios/32bit/rombios_compat.h
@@ -8,9 +8,7 @@
 
 #define ADDR_FROM_SEG_OFF(seg, off)  (void *)((((uint32_t)(seg)) << 4) + (off))
 
-typedef unsigned char uint8_t;
-typedef unsigned short int uint16_t;
-typedef unsigned int uint32_t;
+#include <stdint.h>
 
 typedef uint8_t  Bit8u;
 typedef uint16_t Bit16u;
-- 
2.30.1



From xen-devel-bounces@lists.xenproject.org Fri Feb 26 09:00:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 09:00:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90142.170562 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFYza-0007VI-8Q; Fri, 26 Feb 2021 09:00:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90142.170562; Fri, 26 Feb 2021 09:00:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFYza-0007VB-5C; Fri, 26 Feb 2021 09:00:58 +0000
Received: by outflank-mailman (input) for mailman id 90142;
 Fri, 26 Feb 2021 09:00:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zK4+=H4=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lFYzY-0007UE-FO
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 09:00:56 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d14fef2c-b6de-4b7c-b17b-f11b32700cfc;
 Fri, 26 Feb 2021 09:00:55 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d14fef2c-b6de-4b7c-b17b-f11b32700cfc
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614330055;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:content-transfer-encoding:mime-version;
  bh=hcHZxn/yloaKbz4HuwpCWh+4xZp5v7qk/JNM42WjKUM=;
  b=XDhsBgQ7pKWMmfxld8eBDabMT1BJUNIpiLLFuSddW0EqvL97+ckfAc/h
   W+T6+bpBHJpaj7H3sh1pfNsTcykfFYGJuEzwJRQSd8i5lkZd86z/LfsSM
   +Imogjyi+DlOMh7n8vSJUpQZzMgSw2wRVOXY484/iJnxwPju3KjBAJKy2
   Y=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 38281509
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,208,1610427600"; 
   d="scan'208";a="38281509"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=cydJsAIez600M/vb94JozzdggeHZXjVoZDJiSVI8vdk=;
 b=Qb3bkvoReEXHl1cTB3Wvl5VMr+wFUZFsIRA8+WGYwKrKQoU9qpmcnnVMP8kSwYpQDfErBYB7KVpjvsKA5XzwVQaQqnEnwsFov2RFgEm//7U7XgJDJUVZAe3v7JmW0aUNXNc26X4wocIZmaJhDpg+EekiDmKind9GD6z2V5Xdq28=
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Roger Pau Monne <roger.pau@citrix.com>, Doug Goldstein <cardoe@cardoe.com>
Subject: [PATCH for-4.15 3/3] automation: enable rombios build on Alpine
Date: Fri, 26 Feb 2021 09:59:08 +0100
Message-ID: <20210226085908.21254-4-roger.pau@citrix.com>
X-Mailer: git-send-email 2.30.1
In-Reply-To: <20210226085908.21254-1-roger.pau@citrix.com>
References: <20210226085908.21254-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: AM6P195CA0080.EURP195.PROD.OUTLOOK.COM
 (2603:10a6:209:86::21) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 24fd7f05-1b4b-450a-fd15-08d8da3504ec
X-MS-TrafficTypeDiagnostic: DM6PR03MB4394:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB4394659081E56D81C4CCC7058F9D9@DM6PR03MB4394.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:5797;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 24fd7f05-1b4b-450a-fd15-08d8da3504ec
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Feb 2021 09:00:52.4802
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: MaxM2rmnO+DLUV5YP63C9oIIsO4PN46OQgTV2zz25vShUTPEahZL9pIOGpy3dnjg0Trj0TZZsq5LQLYydIRZFA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4394
X-OriginatorOrg: citrix.com

It's now safe to enable the build of rombios on Alpine systems, as
hvmloader already builds fine there.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 automation/scripts/build | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/automation/scripts/build b/automation/scripts/build
index d8990c3bf4..87e44bb940 100755
--- a/automation/scripts/build
+++ b/automation/scripts/build
@@ -31,9 +31,8 @@ fi
 if ! test -z "$(ldd /bin/ls|grep musl|head -1)"; then
     # disable --disable-werror for QEMUU when building with MUSL
     cfgargs+=("--with-extra-qemuu-configure-args=\"--disable-werror\"")
-    # hvmloader doesn't build on MUSL systems
-    cfgargs+=("--disable-seabios")
-    cfgargs+=("--disable-rombios")
+    # SeaBIOS doesn't build on MUSL systems
+    cfgargs+=("--with-system-seabios=/bin/false")
 fi
 
 # Qemu requires Python 3.5 or later
-- 
2.30.1



From xen-devel-bounces@lists.xenproject.org Fri Feb 26 09:03:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 09:03:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90152.170574 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFZ22-0007nG-Pk; Fri, 26 Feb 2021 09:03:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90152.170574; Fri, 26 Feb 2021 09:03:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFZ22-0007n9-Kx; Fri, 26 Feb 2021 09:03:30 +0000
Received: by outflank-mailman (input) for mailman id 90152;
 Fri, 26 Feb 2021 09:03:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=eEmz=H4=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lFZ21-0007n4-J7
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 09:03:29 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id da0ca2f5-670c-4ea6-920e-6fb2e91452da;
 Fri, 26 Feb 2021 09:03:28 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E6335AC6E;
 Fri, 26 Feb 2021 09:03:27 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: da0ca2f5-670c-4ea6-920e-6fb2e91452da
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614330208; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=hLHHBQG+gRZqFcvISMbHh8pjmJkAMVV34j7OAarnsCE=;
	b=JY6v5gY+OguB/NB9fr1bs9t7kv9takJj6m6xunZgVYZIy4bv6UCdiRaM3cE0GwT0sRhF4G
	H94cauVIGQkSVfG+sLVONRC/9lDfihqOvGGzQQ6OIl7/Hea7TkdJuTLKoQhmG3NGxq5Ivc
	7cDzV1+4a+fWkkCO/IO0M+tMDFP7Aoo=
Subject: Re: [PATCH] xen: introduce XENFEAT_direct_mapped and
 XENFEAT_not_direct_mapped
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Stefano Stabellini <stefano.stabellini@xilinx.com>,
 andrew.cooper3@citrix.com, julien@xen.org, xen-devel@lists.xenproject.org
References: <20210225012243.28530-1-sstabellini@kernel.org>
 <96d764b6-a719-711c-31ea-235381bfd0ce@suse.com>
 <alpine.DEB.2.21.2102250948160.3234@sstabellini-ThinkPad-T480s>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <fe4f0f87-0b6a-c37a-7f17-e3cf40f739f1@suse.com>
Date: Fri, 26 Feb 2021 10:03:28 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2102250948160.3234@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 25.02.2021 21:51, Stefano Stabellini wrote:
> On Thu, 25 Feb 2021, Jan Beulich wrote:
>> On 25.02.2021 02:22, Stefano Stabellini wrote:
>>> --- a/xen/include/public/features.h
>>> +++ b/xen/include/public/features.h
>>> @@ -114,6 +114,13 @@
>>>   */
>>>  #define XENFEAT_linux_rsdp_unrestricted   15
>>>  
>>> +/*
>>> + * A direct-mapped (or 1:1 mapped) domain is a domain for which its
>>> + * local pages have gfn == mfn.
>>> + */
>>> +#define XENFEAT_not_direct_mapped       16
>>> +#define XENFEAT_direct_mapped           17
>>
>> Why two new values? Absence of XENFEAT_direct_mapped requires
>> implying not-direct-mapped by the consumer anyway, doesn't it?
> 
> That's because if we add both flags we can avoid all unpleasant guessing
> games in the guest kernel.
> 
> If one flag or the other flag is set, we can make an informed decision.
> 
> But if neither flag is set, it means we are running on an older Xen,
> and we fall back on the current checks.

Oh, okay - if there's guesswork to avoid, then I see the point.
Maybe mention in the description?

>> Further, quoting xen/mm.h: "For a non-translated guest which
>> is aware of Xen, gfn == mfn." This to me implies that PV would
>> need to get XENFEAT_direct_mapped set; not sure whether this
>> simply means x86'es is_domain_direct_mapped() is wrong, but if
>> it is, uses elsewhere in the code would likely need changing.
> 
> That's a good point, I didn't think about x86 PV. I think the two flags
> are needed for autotranslated guests. I don't know for sure what is best
> for non-autotranslated guests.
> 
> Maybe we could say that XENFEAT_not_direct_mapped and
> XENFEAT_direct_mapped only apply to XENFEAT_auto_translated_physmap
> guests. And it would match the implementation of
> is_domain_direct_mapped().

I'm having trouble understanding this last sentence, and hence I'm
not sure I understand the rest in the way you may mean it. Neither
x86'es nor Arm's is_domain_direct_mapped() has any check towards a
guest being translated (obviously such a check would be redundant
on Arm).

> For non XENFEAT_auto_translated_physmap guests we could either do:
> 
> - neither flag is set
> - set XENFEAT_direct_mapped (without changing the implementation of
>   is_domain_direct_mapped)
> 
> What do you think? I am happy either way.

I'm happy either way as well; suitably described, perhaps setting
XENFEAT_direct_mapped when !paging_mode_translate() would be
slightly more "natural". But a spelled-out and enforced
dependency upon XENFEAT_auto_translated_physmap would be fine
with me too.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 09:06:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 09:06:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90156.170586 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFZ4j-0007z6-AH; Fri, 26 Feb 2021 09:06:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90156.170586; Fri, 26 Feb 2021 09:06:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFZ4j-0007yz-6s; Fri, 26 Feb 2021 09:06:17 +0000
Received: by outflank-mailman (input) for mailman id 90156;
 Fri, 26 Feb 2021 09:06:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=eEmz=H4=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lFZ4h-0007yr-Ga
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 09:06:15 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c7bca5a7-6600-4c14-8008-b2cfa3ebc2f0;
 Fri, 26 Feb 2021 09:06:14 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 23725AC6E;
 Fri, 26 Feb 2021 09:06:14 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c7bca5a7-6600-4c14-8008-b2cfa3ebc2f0
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614330374; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=roAVk0WhxEwnFkeXO+KTMxzCtKGkFgaKn4OzMeMULG0=;
	b=r/RduEvMlIm5X1N9dK9wywQBPvu9cmXHJtLZvcjy8GyOjZIku4JV8Rirc0LiESz0JUlXa7
	LRdqYR4lh3X5RXf6nckuqTuov/6f+SKHsp7JsZgNKV4bwX5Z5Bs7XcRuUwgFcU6JbiQ8lM
	KU/FebEQ2OpNpUiMrKe+iLnbqS/c+Mg=
Subject: Re: [PATCH 1/3] tools/hvmloader: Drop machelf include as well
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Ian Jackson <iwj@xenproject.org>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20210225203010.11378-1-andrew.cooper3@citrix.com>
 <20210225203010.11378-2-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c09a2b2a-25bd-66bc-4f89-3cb9d622df20@suse.com>
Date: Fri, 26 Feb 2021 10:06:14 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210225203010.11378-2-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 25.02.2021 21:30, Andrew Cooper wrote:
> The logic behind switching to elfstructs applies to sun builds as well.
> 
> Fixes: 81b2b328a2 ("hvmloader: use Xen private header for elf structs")
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 09:10:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 09:10:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90159.170598 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFZ8h-0000bA-SA; Fri, 26 Feb 2021 09:10:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90159.170598; Fri, 26 Feb 2021 09:10:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFZ8h-0000b3-OU; Fri, 26 Feb 2021 09:10:23 +0000
Received: by outflank-mailman (input) for mailman id 90159;
 Fri, 26 Feb 2021 09:10:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=eEmz=H4=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lFZ8g-0000ay-A5
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 09:10:22 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1198110f-cde1-4b03-bf14-d656f123aa48;
 Fri, 26 Feb 2021 09:10:21 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C7F95AAAE;
 Fri, 26 Feb 2021 09:10:20 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1198110f-cde1-4b03-bf14-d656f123aa48
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614330620; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ZnPO0c8BqED9sNXUMyc5GizBTjaaFQPqZ40nV6grb5g=;
	b=icmCsrvMmX8/butHsBZyoJB5/mefB28V0+ueEZVf0Wqr3ujss+yjtI7XQHMLmehJ9G7C3E
	JNOmUhJriI/SUIrjcF0jz+M3CSs6c6LdE+v3ORct06m/mTo0rMRcv9a8k545BwjBl4XNis
	YXYyh67rb9PVMBLpnnxnh42xoBSpgss=
Subject: Re: [PATCH 2/3] tools/firmware: Build firmware as -ffreestanding
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Ian Jackson <iwj@xenproject.org>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20210225203010.11378-1-andrew.cooper3@citrix.com>
 <20210225203010.11378-3-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <23f64cfe-8908-79e1-253b-ad07b7aee00a@suse.com>
Date: Fri, 26 Feb 2021 10:10:21 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210225203010.11378-3-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 25.02.2021 21:30, Andrew Cooper wrote:
> firmware should always have been -ffreestanding, as it doesn't execute in the
> host environment.
> 
> inttypes.h isn't a freestanding header, but 32bitbios_support.c only wants
> the stdint.h types, so switch to the more appropriate include.
> 
> This removes the build time dependency on a 32bit libc just to compile the
> hvmloader and friends.
> 
> Update README and the TravisCI configuration.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
preferably with one further adjustment:

> --- a/tools/firmware/Rules.mk
> +++ b/tools/firmware/Rules.mk
> @@ -16,4 +16,4 @@ CFLAGS += -Werror
>  $(call cc-options-add,CFLAGS,CC,$(EMBEDDED_EXTRA_CFLAGS))
>  
>  # Extra CFLAGS suitable for an embedded type of environment.
> -CFLAGS += -fno-builtin -msoft-float
> +CFLAGS += -fno-builtin -msoft-float -ffreestanding

As per gcc doc -ffreestanding implies -fno-builtin, so I think you
want to replace that one instead of adding the new option on top.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 09:11:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 09:11:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90162.170610 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFZA6-0000ji-8Y; Fri, 26 Feb 2021 09:11:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90162.170610; Fri, 26 Feb 2021 09:11:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFZA6-0000jb-4A; Fri, 26 Feb 2021 09:11:50 +0000
Received: by outflank-mailman (input) for mailman id 90162;
 Fri, 26 Feb 2021 09:11:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=eEmz=H4=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lFZA5-0000jV-Mb
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 09:11:49 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 231b572b-398b-499d-9d77-8b41612e0d34;
 Fri, 26 Feb 2021 09:11:49 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5AA45AC6E;
 Fri, 26 Feb 2021 09:11:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 231b572b-398b-499d-9d77-8b41612e0d34
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614330708; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=s9diGcGB9KTOnhLCoAIJmaDokTyMSGo3fC8+l0NpU2g=;
	b=nPHt8qeUBAIOToW1sXRabAiikIzwVcMPVYiQFeJYbbSl0/qXXqwzqBLxzynn80mSPjJ3gD
	9q7dPrKX6HUIlCrRupIZWIRW/KRv7yI/+iQ7Lb80QUE1hB0zvX29H/dRl2Oez7HhZef4He
	s8umkbIKrcZrST2aCA/B4Ct/6wPRSs4=
Subject: Re: [PATCH for-4.15 1/3] hvmloader: do not include inttypes.h
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Ian Jackson <iwj@xenproject.org>, xen-devel@lists.xenproject.org
References: <20210226085908.21254-1-roger.pau@citrix.com>
 <20210226085908.21254-2-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <677a4654-950a-1419-849e-483dc9f19ea5@suse.com>
Date: Fri, 26 Feb 2021 10:11:48 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210226085908.21254-2-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 26.02.2021 09:59, Roger Pau Monne wrote:
> elfstructs.h doesn't require anything from inttypes.h: it's more
> appropriate to include stdint.h instead, which contains the type
> declarations required for the ELF types.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Let's go with Andrew's slightly larger change here.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 09:15:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 09:15:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90165.170622 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFZDI-0000s7-Mc; Fri, 26 Feb 2021 09:15:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90165.170622; Fri, 26 Feb 2021 09:15:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFZDI-0000s0-JT; Fri, 26 Feb 2021 09:15:08 +0000
Received: by outflank-mailman (input) for mailman id 90165;
 Fri, 26 Feb 2021 09:15:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ScZz=H4=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lFZDI-0000rv-1m
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 09:15:08 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7383fb95-ada9-4ace-883b-fabbe6cfdad6;
 Fri, 26 Feb 2021 09:15:07 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8A54CAE7F;
 Fri, 26 Feb 2021 09:15:06 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7383fb95-ada9-4ace-883b-fabbe6cfdad6
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614330906; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=MVqf+6P7NgMTIo1bBRbmI+TeVBA095XQsl4CCk2FjmQ=;
	b=Sp4F6hF3gWBgTQma3ANXsWbokAd8QsQrDfjaDYFbbXNhmu+cUB0wzzZJUmzaVirJ+AH+iT
	tsp4efWNk3hRvCdkBI+lvVerEH8V+3vEe0L4WmjWdhe1fS2hKwn+an6W8niJki0dm1+jtB
	UFky8ggr6rsl7EQQN+aNbR6bvdD0ef8=
Subject: Re: [PATCH for-4.15 5/5] tools/xenstored: Silence coverity when using
 xs_state_* structures
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Wei Liu <wl@xen.org>,
 "Manthey, Norbert" <nmanthey@amazon.de>
References: <20210225174131.10115-1-julien@xen.org>
 <20210225174131.10115-6-julien@xen.org>
 <0aa89914-8fae-3731-a5a0-ccf4316ce96b@suse.com>
 <37e219e6-a66b-383d-2a60-b61fdd1d66a8@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <0b1875a0-9c42-d778-6574-dbab73da0f8a@suse.com>
Date: Fri, 26 Feb 2021 10:15:05 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <37e219e6-a66b-383d-2a60-b61fdd1d66a8@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="voVPsjpDiK9b0IyDi9b6UtG5X38R1QPW9"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--voVPsjpDiK9b0IyDi9b6UtG5X38R1QPW9
Content-Type: multipart/mixed; boundary="yQPH7TVCtHo0Ergb7h0bnhRxhInnsbE8b";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Wei Liu <wl@xen.org>,
 "Manthey, Norbert" <nmanthey@amazon.de>
Message-ID: <0b1875a0-9c42-d778-6574-dbab73da0f8a@suse.com>
Subject: Re: [PATCH for-4.15 5/5] tools/xenstored: Silence coverity when using
 xs_state_* structures
References: <20210225174131.10115-1-julien@xen.org>
 <20210225174131.10115-6-julien@xen.org>
 <0aa89914-8fae-3731-a5a0-ccf4316ce96b@suse.com>
 <37e219e6-a66b-383d-2a60-b61fdd1d66a8@xen.org>
In-Reply-To: <37e219e6-a66b-383d-2a60-b61fdd1d66a8@xen.org>

--yQPH7TVCtHo0Ergb7h0bnhRxhInnsbE8b
Content-Type: multipart/mixed;
 boundary="------------250CA52837D796791E934D2B"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------250CA52837D796791E934D2B
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 26.02.21 09:57, Julien Grall wrote:
> Hi Juergen,
>
> On 26/02/2021 07:10, Jürgen Groß wrote:
>> On 25.02.21 18:41, Julien Grall wrote:
>>> From: Julien Grall <jgrall@amazon.com>
>>>
>>> Coverity will report uninitialized values for every use of xs_state_*
>>> structures in the save part. This can be prevented by using the [0]
>>> rather than [] to define the variable-length array.
>>>
>>> Coverity-ID: 1472398
>>> Coverity-ID: 1472397
>>> Coverity-ID: 1472396
>>> Coverity-ID: 1472395
>>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>
>> Sorry, but Coverity is clearly wrong here.
> I saw what Andrew wrote, but neither of you really provided enough
> information to infer the same. Care to provide more details?
>
>>
>> Should we really modify our code to work around bugs in external
>> static code analyzers?
>
> I don't think it is OK to have 866 issues (and counting) and keep
> ignoring them because Coverity may be wrong. We should fix them one way
> or another. If this means telling Coverity they are reporting false
> positives, then fine.
>
> But for that, I first need a bit more detail on why they are clearly wrong.

Let's put it this way: why is a[0] not critical, but a[] is?

Semantically there is no difference, so Coverity MUST be wrong in
some way (either a[] is really not critical, or a[0] should be
critical).

Juergen

--------------250CA52837D796791E934D2B--

--yQPH7TVCtHo0Ergb7h0bnhRxhInnsbE8b--

--voVPsjpDiK9b0IyDi9b6UtG5X38R1QPW9
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmA4vBkFAwAAAAAACgkQsN6d1ii/Ey9s
Hgf+NUil6kTVxq+F799YDlzgbnrOQrWfS04LDvRc894+QA+ZTUAtywxu+kSxxa/88m8ThAna+zs6
v3HGMKMXeDbVRjJCaB482uzL3YKV/4BiFmRAw9j0f/NvPZ7O/sOvNsHKtsosN0AR57u1SrSaHiND
DEBB1XdoMWoyP+scAlH7Z8441P9JRbS6enIe/RfnC53ALaiLdSI14d5tUd4oFPMQ/jgDKahbOJeo
DoDzHAQUDYGlj5AN0GdUgLGaxLIsgHBwcD5DrRmxohJL2OuDii0rZOlCgQnkCxPSjA8BL6POWud1
VIarJPAHn15nQW82LLEZpmYqjQyiU0HpFHWKBQC7/w==
=YY/q
-----END PGP SIGNATURE-----

--voVPsjpDiK9b0IyDi9b6UtG5X38R1QPW9--


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 09:38:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 09:38:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90171.170644 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFZaE-000350-NG; Fri, 26 Feb 2021 09:38:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90171.170644; Fri, 26 Feb 2021 09:38:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFZaE-00034t-Ja; Fri, 26 Feb 2021 09:38:50 +0000
Received: by outflank-mailman (input) for mailman id 90171;
 Fri, 26 Feb 2021 09:38:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFZaD-00034l-Qq; Fri, 26 Feb 2021 09:38:49 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFZaD-00005m-Ie; Fri, 26 Feb 2021 09:38:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFZaD-0000fY-99; Fri, 26 Feb 2021 09:38:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lFZaD-0001EH-8e; Fri, 26 Feb 2021 09:38:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=AqaOOWs3r4rl3wYa8T1+4ckh9u1HIO1RZ8opP1686RI=; b=bL00T0XBFwvfQFk8sjT4Ke6dp4
	ifk5XO9ki0fzWF/QQS1Iv6JPQd+M3dg+7CrIX8O7WD/Zm63CGtpvLzjypPuJ/25W/eiNmIP6Sfpfv
	FwHd0BpUIvu0A8KrHskF0ssp7WXpJZKtDgzCEbgZrEwyBeK4Wr7Z36rP989xpRpNIEAE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159685-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 159685: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=2c87f7a38f930ef6f6a7bdd04aeb82ce3971b54b
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 26 Feb 2021 09:38:49 +0000

flight 159685 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159685/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl          10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 linux                2c87f7a38f930ef6f6a7bdd04aeb82ce3971b54b
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  209 days
Failing since        152366  2020-08-01 20:49:34 Z  208 days  360 attempts
Testing same since   159685  2021-02-25 20:39:33 Z    0 days    1 attempts

------------------------------------------------------------
5105 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1258498 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 09:42:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 09:42:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90176.170659 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFZdq-00048x-CV; Fri, 26 Feb 2021 09:42:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90176.170659; Fri, 26 Feb 2021 09:42:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFZdq-00048q-8F; Fri, 26 Feb 2021 09:42:34 +0000
Received: by outflank-mailman (input) for mailman id 90176;
 Fri, 26 Feb 2021 09:42:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+p7+=H4=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lFZdo-00048k-Ep
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 09:42:32 +0000
Received: from mail-qk1-x735.google.com (unknown [2607:f8b0:4864:20::735])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dde87779-66cf-4d63-8c80-b8d76fdbea30;
 Fri, 26 Feb 2021 09:42:31 +0000 (UTC)
Received: by mail-qk1-x735.google.com with SMTP id f17so8504815qkl.5
 for <xen-devel@lists.xenproject.org>; Fri, 26 Feb 2021 01:42:31 -0800 (PST)
Received: from mail-qt1-f172.google.com (mail-qt1-f172.google.com.
 [209.85.160.172])
 by smtp.gmail.com with ESMTPSA id e18sm2022868qtr.69.2021.02.26.01.42.30
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 26 Feb 2021 01:42:30 -0800 (PST)
Received: by mail-qt1-f172.google.com with SMTP id f17so6227997qth.7
 for <xen-devel@lists.xenproject.org>; Fri, 26 Feb 2021 01:42:30 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dde87779-66cf-4d63-8c80-b8d76fdbea30
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=E8MT/YBSt9+GOAVKSt/z+/x/oAmJs4mba4MVCW+FEPI=;
        b=YlJGaB2HjeORjhsgYOy8pctb9+8JkGl/ZdIRVJcfQxCMoR62lL7XtrY95NgsqhINjE
         lSgvnnp9tfO6u1oKErjW0g+HTDOG3GFiOyn5d4dJiyFIJKbzFBupmPEB/V7jXmUb4d2q
         N9u5qLxflCWFXhFfzUGrtIkZV0aKvT2pOtERk=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=E8MT/YBSt9+GOAVKSt/z+/x/oAmJs4mba4MVCW+FEPI=;
        b=MMC2fobt7Lm6xF9B04NEfH07G+hH5o+ny3ZW9+FHet0g361WNIrfQ2NaWZnI1qrXh3
         m/ME4NJA1KGXsXwKldjwBSd0Rt6qANkaeSreSTzb5poi+i4k+aHhJAuyCCEjsy08d35w
         iE9wZdklyp1r4TZpb92i81HO3qIb6hSRAkYU8OHjwW9csxKpsk5vFrDFjREfzmrd4aeI
         uCyRHPNJwmw+yyQAP6sDr2KcFYWF2N7k1N3OkD2VZZt5TMfVy8P5JQbBhwiEnfiMOT21
         FvysLwp4eHawFKsEHc2ew8RJvcGqLFEg+jjZH8aOg8vlcAAhJXYhYae5KgaHDtrxDL6d
         cF2w==
X-Gm-Message-State: AOAM531JJOBydc6NrjekXjDuo9gf2wfUmQ4xbIkTkpSMaEb5Dpdhc078
	KsNSX11zA96NTkt4sffXecC7jSLmC9S7Iw==
X-Google-Smtp-Source: ABdhPJxSS17I3z7JFspWlipa+fPiAjwpNkli5sbyFLNN93Z+IBPiUvolmDdVhJWP+YAnudJ196tQiQ==
X-Received: by 2002:a37:8a04:: with SMTP id m4mr1752223qkd.78.1614332551112;
        Fri, 26 Feb 2021 01:42:31 -0800 (PST)
X-Received: by 2002:a02:b61a:: with SMTP id h26mr2057255jam.90.1614332168202;
 Fri, 26 Feb 2021 01:36:08 -0800 (PST)
MIME-Version: 1.0
References: <20210209062131.2300005-1-tientzu@chromium.org>
 <20210209062131.2300005-13-tientzu@chromium.org> <CALiNf298+DLjTK6ALe0mYrRuCP_LtztMGuQQCS90ubDctbS0kw@mail.gmail.com>
 <20210226051740.GB2072@lst.de>
In-Reply-To: <20210226051740.GB2072@lst.de>
From: Claire Chang <tientzu@chromium.org>
Date: Fri, 26 Feb 2021 17:35:57 +0800
X-Gmail-Original-Message-ID: <CALiNf29tSQ1R8zh35neQWuWqDPek+Jr8QzyPQQvTsW2cZBMEUw@mail.gmail.com>
Message-ID: <CALiNf29tSQ1R8zh35neQWuWqDPek+Jr8QzyPQQvTsW2cZBMEUw@mail.gmail.com>
Subject: Re: [PATCH v4 12/14] swiotlb: Add restricted DMA alloc/free support.
To: Christoph Hellwig <hch@lst.de>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>, 
	Will Deacon <will@kernel.org>, Frank Rowand <frowand.list@gmail.com>, 
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, boris.ostrovsky@oracle.com, jgross@suse.com, 
	Marek Szyprowski <m.szyprowski@samsung.com>, benh@kernel.crashing.org, paulus@samba.org, 
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>, sstabellini@kernel.org, 
	Robin Murphy <robin.murphy@arm.com>, grant.likely@arm.com, xypron.glpk@gmx.de, 
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org, bauerman@linux.ibm.com, 
	peterz@infradead.org, Greg KH <gregkh@linuxfoundation.org>, 
	Saravana Kannan <saravanak@google.com>, "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>, 
	heikki.krogerus@linux.intel.com, 
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>, Randy Dunlap <rdunlap@infradead.org>, 
	Dan Williams <dan.j.williams@intel.com>, Bartosz Golaszewski <bgolaszewski@baylibre.com>, 
	linux-devicetree <devicetree@vger.kernel.org>, lkml <linux-kernel@vger.kernel.org>, 
	linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, 
	Nicolas Boichat <drinkcat@chromium.org>, Jim Quinlan <james.quinlan@broadcom.com>
Content-Type: text/plain; charset="UTF-8"

On Fri, Feb 26, 2021 at 1:17 PM Christoph Hellwig <hch@lst.de> wrote:
>
> On Fri, Feb 26, 2021 at 12:17:50PM +0800, Claire Chang wrote:
> > Do you think I should fix this and rebase on the latest linux-next
> > now? I wonder if there is more refactoring and clean-up coming and
> > whether I should wait until after that.
>
> Here is my preferred plan:
>
>  1) wait for my series to support the min alignment in swiotlb to
>     land in Linus tree
>  2) I'll resend my series with the further swiotlb cleanup and
>     refactoring, which includes a slightly rebased version of your
>     patch to add the io_tlb_mem structure
>  3) resend your series on top of that as a baseline
>
> This is my current WIP tree for 2:
>
>   http://git.infradead.org/users/hch/misc.git/shortlog/refs/heads/swiotlb-struct

Sounds good to me. Thanks!


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 10:47:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 10:47:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90204.170697 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFaeV-00029e-Kq; Fri, 26 Feb 2021 10:47:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90204.170697; Fri, 26 Feb 2021 10:47:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFaeV-00029X-H4; Fri, 26 Feb 2021 10:47:19 +0000
Received: by outflank-mailman (input) for mailman id 90204;
 Fri, 26 Feb 2021 10:47:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lFaeU-00029S-IS
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 10:47:18 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lFaeR-0001IB-C8; Fri, 26 Feb 2021 10:47:15 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lFaeR-0006bN-3y; Fri, 26 Feb 2021 10:47:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=ipvQkVg4Bqcmvpp2yi2ILlarioKI8P+uinKEzPQxz0I=; b=fm/WwMQ5ZNGwcCO1/yCfLs/cMd
	ewSiCNnR80xlDWEyB6OED5CgpWgKdwe501KuRgZqJtWnhljGh266DAT5IvVBVOjcKYIUcjGS4Srcv
	Sj8VAKp0nJulPYx7BlR8s6wTDlZvHoVY6/E8TzErFT20ctWaCFQD1i1Zz7qZVhbyYBIw=;
Subject: Re: [for-4.15][RESEND PATCH v4 1/2] xen/x86: iommu: Ignore IOMMU
 mapping requests when a domain is dying
To: Jan Beulich <jbeulich@suse.com>
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>, Paul Durrant <paul@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210224094356.7606-1-julien@xen.org>
 <20210224094356.7606-2-julien@xen.org>
 <d5a09319-614d-398b-b911-bc2533bec587@suse.com>
 <7ce1deb9-e362-439c-dd14-a17dbb6fb1c8@xen.org>
 <2c1d2b05-7553-5f15-ad28-47aba5b9c47f@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <9d0423c8-2610-0bcc-8b88-3e13ee9e4888@xen.org>
Date: Fri, 26 Feb 2021 10:47:13 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <2c1d2b05-7553-5f15-ad28-47aba5b9c47f@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 25/02/2021 13:18, Jan Beulich wrote:
> On 25.02.2021 12:56, Julien Grall wrote:
>> On 24/02/2021 14:07, Jan Beulich wrote:
>>> On 24.02.2021 10:43, Julien Grall wrote:
>>>> --- a/xen/drivers/passthrough/x86/iommu.c
>>>> +++ b/xen/drivers/passthrough/x86/iommu.c
>>>> @@ -267,6 +267,12 @@ int iommu_free_pgtables(struct domain *d)
>>>>        struct page_info *pg;
>>>>        unsigned int done = 0;
>>>>    
>>>> +    if ( !is_iommu_enabled(d) )
>>>> +        return 0;
>>>
> >>> Why is this addition needed? Hitting a not yet initialized spin lock
> >>> is - afaict - no worse than a not yet initialized list, so it would
> >>> seem to me that this can't be the reason. No other reason looks to
> >>> be called out by the description.
>>
>> struct domain_iommu will be initially zeroed as it is part of struct domain.
>>
>> For the list, we are so far fine because page_list_remove_head()
>> tolerates NULL. If we were using the normal list operations (e.g.
>> list_del), then this code would have segfaulted.
> 
> And so we do, in the CONFIG_BIGMEM case. May I suggest then to split
> this out as a prereq patch, or add wording to the description
> mentioning this additional effect?

You are correct: I can crash the hypervisor when enabling 
CONFIG_BIGMEM=y and not using the IOMMU. I will move this chunk into a 
separate patch.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 10:56:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 10:56:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90208.170716 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFanf-0003Gq-TN; Fri, 26 Feb 2021 10:56:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90208.170716; Fri, 26 Feb 2021 10:56:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFanf-0003Gf-MM; Fri, 26 Feb 2021 10:56:47 +0000
Received: by outflank-mailman (input) for mailman id 90208;
 Fri, 26 Feb 2021 10:56:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lFane-0003GB-Jx
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 10:56:46 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lFanc-0001Pz-11; Fri, 26 Feb 2021 10:56:44 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lFanb-0007D9-GS; Fri, 26 Feb 2021 10:56:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From;
	bh=xqgSnLtlJc/ZuKheu9kG+Se2giatvvefwEf89aOzS+w=; b=nV6p8STrDayMAQirzquKLTjbKl
	CxQivydjclySbLsaBgdKBRLVnMqL1mvaSNgqmXcj+7SG9DGL5azphOoc9s93ZeLK/xMlDps0ueTPe
	HCLCvxQN1lfNyV9AExdVB3KbIWZQ8huyI630oAF6uCGTHzK+Gj2yjUWGBErGuZa9AwjY=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: hongyxia@amazon.co.uk,
	iwj@xenproject.org,
	Julien Grall <jgrall@amazon.com>,
	Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Kevin Tian <kevin.tian@intel.com>
Subject: [PATCH for-4.15 v5 0/3] xen/iommu: Collection of bug fixes for IOMMU teardown
Date: Fri, 26 Feb 2021 10:56:37 +0000
Message-Id: <20210226105640.12037-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1

From: Julien Grall <jgrall@amazon.com>

Hi all,

This series is a collection of bug fixes for the IOMMU teardown code.
All of them are candidates for 4.15, as they can either leak memory or
lead to a host crash or host corruption.

This is sent directly on xen-devel because all the issues were either
introduced in 4.15 or happen in the domain creation code.

Major changes since v4:
    - New patch added (it is a split of patch #1 in v4)

Major changes since v3:
    - Remove patch #3 "xen/iommu: x86: Harden the IOMMU page-table
    allocator" as it is not strictly necessary for 4.15.
    - Re-order the patches to avoid relying on a follow-up patch to
    fully fix the issue.

Major changes since v2:
    - patch #1 "xen/x86: p2m: Don't map the special pages in the IOMMU
    page-tables" has been removed. This requires Jan's patch [1] to
    fully mitigate memory leaks.

Release-Acked-by: Ian Jackson <iwj@xenproject.org>

@Ian, I assumed that the release-acked-by would stand even with the
patch split. Let me know if this is not the case.

Cheers,

[1] <90271e69-c07e-a32c-5531-a79b10ef03dd@suse.com>

Julien Grall (3):
  xen/iommu: x86: Don't try to free page tables if the IOMMU is not
    enabled
  xen/x86: iommu: Ignore IOMMU mapping requests when a domain is dying
  xen/iommu: x86: Clear the root page-table before freeing the
    page-tables

 xen/drivers/passthrough/amd/iommu_map.c     | 12 +++++++++++
 xen/drivers/passthrough/amd/pci_amd_iommu.c | 12 ++++++++++-
 xen/drivers/passthrough/vtd/iommu.c         | 24 ++++++++++++++++++++-
 xen/drivers/passthrough/x86/iommu.c         | 19 ++++++++++++++++
 xen/include/xen/iommu.h                     |  1 +
 5 files changed, 66 insertions(+), 2 deletions(-)

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Feb 26 10:56:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 10:56:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90207.170709 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFanf-0003GN-HI; Fri, 26 Feb 2021 10:56:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90207.170709; Fri, 26 Feb 2021 10:56:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFanf-0003GG-EJ; Fri, 26 Feb 2021 10:56:47 +0000
Received: by outflank-mailman (input) for mailman id 90207;
 Fri, 26 Feb 2021 10:56:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lFane-0003G6-3M
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 10:56:46 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lFand-0001Q1-0d; Fri, 26 Feb 2021 10:56:45 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lFanc-0007D9-L1; Fri, 26 Feb 2021 10:56:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=rKYi+qEerpdW0l9C/Ph4WyNn/j7ZcEQ39HY+quniLnM=; b=qmqpY3hlJJfHrB2Ax1D3efsBe
	yoN5ARL4gbcbIK5Dn9bBEsnTxyBAxbypT2+zuRR1UcrFCc0+Mfl9Dh+7XlsArwb2ceshWljLHAluA
	3X7ihp4IQxkw1KPLbwgWUZ/aW8GATDVM4c3uNTEoWRBOnogXSsCrF0jb+NQI+YIqmcPfY=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: hongyxia@amazon.co.uk,
	iwj@xenproject.org,
	Julien Grall <jgrall@amazon.com>,
	Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>
Subject: [PATCH for-4.15 v5 1/3] xen/iommu: x86: Don't try to free page tables if the IOMMU is not enabled
Date: Fri, 26 Feb 2021 10:56:38 +0000
Message-Id: <20210226105640.12037-2-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210226105640.12037-1-julien@xen.org>
References: <20210226105640.12037-1-julien@xen.org>

From: Julien Grall <jgrall@amazon.com>

When using CONFIG_BIGMEM=y, the page_list cannot be accessed whilst it
is uninitialized. However, iommu_free_pgtables() will be called even if
the domain is not using an IOMMU.

Consequently, Xen will try to go through the page list and dereference a
NULL pointer.

Bail out early if the domain is not using an IOMMU.

Fixes: 15bc9a1ef51c ("x86/iommu: add common page-table allocator")
Signed-off-by: Julien Grall <jgrall@amazon.com>

---
    Changes in v5:
        - Patch added. This was split from "xen/x86: iommu: Ignore
        IOMMU mapping requests when a domain is dying"
---
 xen/drivers/passthrough/x86/iommu.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index cea1032b3d02..58a330e82247 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -267,6 +267,9 @@ int iommu_free_pgtables(struct domain *d)
     struct page_info *pg;
     unsigned int done = 0;
 
+    if ( !is_iommu_enabled(d) )
+        return 0;
+
     while ( (pg = page_list_remove_head(&hd->arch.pgtables.list)) )
     {
         free_domheap_page(pg);
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Feb 26 10:56:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 10:56:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90209.170733 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFani-0003JE-2t; Fri, 26 Feb 2021 10:56:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90209.170733; Fri, 26 Feb 2021 10:56:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFanh-0003J7-Vk; Fri, 26 Feb 2021 10:56:49 +0000
Received: by outflank-mailman (input) for mailman id 90209;
 Fri, 26 Feb 2021 10:56:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lFang-0003I4-Hd
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 10:56:48 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lFane-0001Q9-9O; Fri, 26 Feb 2021 10:56:46 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lFane-0007D9-0b; Fri, 26 Feb 2021 10:56:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=+iKLLRCgdyK7ZcIpTbQEg0QmDjhs81xpS53/hC1UIvE=; b=WpNhpDTv4cWmLJ3MkfreZ7i/i
	H5fN4RFkQhtnKchIqL9lwYYm4X9hri3P7Zc++SmnR3r/PI2U1uwrJhSaWFrd/C6ZW4cyoYUu/07/X
	su+ouT+FC3m+eEkL1oMeHtKj3A3IB8WAsKM8JitYl2ZICV3dCSKHG9uASOteTc11reqmY=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: hongyxia@amazon.co.uk,
	iwj@xenproject.org,
	Julien Grall <jgrall@amazon.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Kevin Tian <kevin.tian@intel.com>,
	Paul Durrant <paul@xen.org>
Subject: [PATCH for-4.15 v5 2/3] xen/x86: iommu: Ignore IOMMU mapping requests when a domain is dying
Date: Fri, 26 Feb 2021 10:56:39 +0000
Message-Id: <20210226105640.12037-3-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210226105640.12037-1-julien@xen.org>
References: <20210226105640.12037-1-julien@xen.org>

From: Julien Grall <jgrall@amazon.com>

The new x86 IOMMU page-table allocator will release the pages when
relinquishing the domain resources. However, this is not sufficient
when the domain is dying, because nothing prevents page tables from
being allocated.

As the domain is dying, it is not necessary to keep modifying the
IOMMU page-tables, as they are going to be destroyed soon.

At the moment, page-table allocation only happens in iommu_map(). So
after this change no more page-table allocations can happen on a dying
domain, because we don't use superpage mappings yet when not sharing
page tables.

In order to observe d->is_dying correctly, we need to rely on per-arch
locking, so the check to ignore IOMMU mappings is added in the
per-driver map_page() callback.

Signed-off-by: Julien Grall <jgrall@amazon.com>

---

As discussed in v3, this only covers 4.15. We can discuss post-4.15
how to catch page-table allocations if another caller (e.g.
iommu_unmap(), if we ever decide to support superpages) starts to use
the page-table allocator.

Changes in v5:
    - Clarify in the commit message why fixing iommu_map() is enough
    - Split "if ( !is_iommu_enabled(d) )"  in a separate patch
    - Update the comment on top of the spin_barrier()

Changes in v4:
    - Move the patch to the top of the queue
    - Reword the commit message

Changes in v3:
    - Patch added. This is a replacement of "xen/iommu: iommu_map: Don't
    crash the domain if it is dying"
---
 xen/drivers/passthrough/amd/iommu_map.c | 12 ++++++++++++
 xen/drivers/passthrough/vtd/iommu.c     | 12 ++++++++++++
 xen/drivers/passthrough/x86/iommu.c     |  3 +++
 3 files changed, 27 insertions(+)

diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
index d3a8b1aec766..560af54b765b 100644
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -285,6 +285,18 @@ int amd_iommu_map_page(struct domain *d, dfn_t dfn, mfn_t mfn,
 
     spin_lock(&hd->arch.mapping_lock);
 
+    /*
+     * IOMMU mapping request can be safely ignored when the domain is dying.
+     *
+     * hd->arch.mapping_lock guarantees that d->is_dying will be observed
+     * before any page tables are freed (see iommu_free_pgtables()).
+     */
+    if ( d->is_dying )
+    {
+        spin_unlock(&hd->arch.mapping_lock);
+        return 0;
+    }
+
     rc = amd_iommu_alloc_root(d);
     if ( rc )
     {
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index d136fe36883b..b549a71530d5 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1762,6 +1762,18 @@ static int __must_check intel_iommu_map_page(struct domain *d, dfn_t dfn,
 
     spin_lock(&hd->arch.mapping_lock);
 
+    /*
+     * IOMMU mapping request can be safely ignored when the domain is dying.
+     *
+     * hd->arch.mapping_lock guarantees that d->is_dying will be observed
+     * before any page tables are freed (see iommu_free_pgtables())
+     */
+    if ( d->is_dying )
+    {
+        spin_unlock(&hd->arch.mapping_lock);
+        return 0;
+    }
+
     pg_maddr = addr_to_dma_page_maddr(d, dfn_to_daddr(dfn), 1);
     if ( !pg_maddr )
     {
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index 58a330e82247..ad19b7dd461c 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -270,6 +270,9 @@ int iommu_free_pgtables(struct domain *d)
     if ( !is_iommu_enabled(d) )
         return 0;
 
+    /* After this barrier, no new IOMMU mappings can be inserted. */
+    spin_barrier(&hd->arch.mapping_lock);
+
     while ( (pg = page_list_remove_head(&hd->arch.pgtables.list)) )
     {
         free_domheap_page(pg);
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Feb 26 10:56:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 10:56:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90210.170745 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFanj-0003Lg-Du; Fri, 26 Feb 2021 10:56:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90210.170745; Fri, 26 Feb 2021 10:56:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFanj-0003LX-AJ; Fri, 26 Feb 2021 10:56:51 +0000
Received: by outflank-mailman (input) for mailman id 90210;
 Fri, 26 Feb 2021 10:56:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lFani-0003Jn-BS
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 10:56:50 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lFanf-0001QJ-LT; Fri, 26 Feb 2021 10:56:47 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lFanf-0007D9-Ce; Fri, 26 Feb 2021 10:56:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=s4DkygruQ2xwDok1FHnySjDI4sCgf/AFga36jF2Uz3Q=; b=mnSWdB0erv1LSk4cmCT9aMeve
	iWdXwBMUmbQ6f2j7ZZHI8gDcaZID0wCIPsKWv456MvyAcQ2J+FEfmPmpkum7qRc1OSx3VXMHUBke8
	HhBnm4qLxMENW5GyGPy6Qaaidave/PDnEOICFzCk2OgOJkHg3z+RNSWvHiOUSSwqY/okI=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: hongyxia@amazon.co.uk,
	iwj@xenproject.org,
	Julien Grall <jgrall@amazon.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Kevin Tian <kevin.tian@intel.com>,
	Paul Durrant <paul@xen.org>
Subject: [PATCH for-4.15 v5 3/3] xen/iommu: x86: Clear the root page-table before freeing the page-tables
Date: Fri, 26 Feb 2021 10:56:40 +0000
Message-Id: <20210226105640.12037-4-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210226105640.12037-1-julien@xen.org>
References: <20210226105640.12037-1-julien@xen.org>

From: Julien Grall <jgrall@amazon.com>

The new per-domain IOMMU page-table allocator will now free the
page-tables when the domain's resources are relinquished. However, the
per-domain IOMMU structure will still contain a dangling pointer to
the root page-table.

Xen may access the IOMMU page-tables afterwards, at least in the case of
PV domains:

(XEN) Xen call trace:
(XEN)    [<ffff82d04025b4b2>] R iommu.c#addr_to_dma_page_maddr+0x12e/0x1d8
(XEN)    [<ffff82d04025b695>] F iommu.c#intel_iommu_unmap_page+0x5d/0xf8
(XEN)    [<ffff82d0402695f3>] F iommu_unmap+0x9c/0x129
(XEN)    [<ffff82d0402696a6>] F iommu_legacy_unmap+0x26/0x63
(XEN)    [<ffff82d04033c5c7>] F mm.c#cleanup_page_mappings+0x139/0x144
(XEN)    [<ffff82d04033c61d>] F put_page+0x4b/0xb3
(XEN)    [<ffff82d04033c87f>] F put_page_from_l1e+0x136/0x13b
(XEN)    [<ffff82d04033cada>] F devalidate_page+0x256/0x8dc
(XEN)    [<ffff82d04033d396>] F mm.c#_put_page_type+0x236/0x47e
(XEN)    [<ffff82d04033d64d>] F mm.c#put_pt_page+0x6f/0x80
(XEN)    [<ffff82d04033d8d6>] F mm.c#put_page_from_l2e+0x8a/0xcf
(XEN)    [<ffff82d04033cc27>] F devalidate_page+0x3a3/0x8dc
(XEN)    [<ffff82d04033d396>] F mm.c#_put_page_type+0x236/0x47e
(XEN)    [<ffff82d04033d64d>] F mm.c#put_pt_page+0x6f/0x80
(XEN)    [<ffff82d04033d807>] F mm.c#put_page_from_l3e+0x8a/0xcf
(XEN)    [<ffff82d04033cdf0>] F devalidate_page+0x56c/0x8dc
(XEN)    [<ffff82d04033d396>] F mm.c#_put_page_type+0x236/0x47e
(XEN)    [<ffff82d04033d64d>] F mm.c#put_pt_page+0x6f/0x80
(XEN)    [<ffff82d04033d6c7>] F mm.c#put_page_from_l4e+0x69/0x6d
(XEN)    [<ffff82d04033cf24>] F devalidate_page+0x6a0/0x8dc
(XEN)    [<ffff82d04033d396>] F mm.c#_put_page_type+0x236/0x47e
(XEN)    [<ffff82d04033d92e>] F put_page_type_preemptible+0x13/0x15
(XEN)    [<ffff82d04032598a>] F domain.c#relinquish_memory+0x1ff/0x4e9
(XEN)    [<ffff82d0403295f2>] F domain_relinquish_resources+0x2b6/0x36a
(XEN)    [<ffff82d040205cdf>] F domain_kill+0xb8/0x141
(XEN)    [<ffff82d040236cac>] F do_domctl+0xb6f/0x18e5
(XEN)    [<ffff82d04031d098>] F pv_hypercall+0x2f0/0x55f
(XEN)    [<ffff82d04039b432>] F lstar_enter+0x112/0x120

This will result in a use-after-free and possibly a host crash or
memory corruption.

It would not be possible to free the page-tables further down in
domain_relinquish_resources() because cleanup_page_mappings() will only
be called when the last reference on the page is dropped. This may happen
much later if another domain still holds a reference.

After all the PCI devices have been de-assigned, nobody should use the
IOMMU page-tables any more, and it is therefore pointless to try to
modify them.

So we can simply clear any reference to the root page-table in the
per-domain IOMMU structure. This requires introducing a new callback;
the exact method will depend on the IOMMU driver used.

Take the opportunity to add an ASSERT() in arch_iommu_domain_destroy()
to check that all the IOMMU page tables have been freed.

Fixes: 3eef6d07d722 ("x86/iommu: convert VT-d code to use new page table allocator")
Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

---
    Changes in v5:
        - Add Jan's reviewed-by
        - Fix typo
        - Use ! rather than == NULL

    Changes in v4:
        - Move the patch later in the series as we need to prevent
        iommu_map() from allocating memory first.
        - Add an ASSERT() in arch_iommu_domain_destroy().

    Changes in v3:
        - Move the patch earlier in the series
        - Reword the commit message

    Changes in v2:
        - Introduce clear_root_pgtable()
        - Move the patch later in the series
---
 xen/drivers/passthrough/amd/pci_amd_iommu.c | 12 +++++++++++-
 xen/drivers/passthrough/vtd/iommu.c         | 12 +++++++++++-
 xen/drivers/passthrough/x86/iommu.c         | 13 +++++++++++++
 xen/include/xen/iommu.h                     |  1 +
 4 files changed, 36 insertions(+), 2 deletions(-)

diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
index 42b5a5a9bec4..085fe2f5771e 100644
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -381,9 +381,18 @@ static int amd_iommu_assign_device(struct domain *d, u8 devfn,
     return reassign_device(pdev->domain, d, devfn, pdev);
 }
 
+static void amd_iommu_clear_root_pgtable(struct domain *d)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+
+    spin_lock(&hd->arch.mapping_lock);
+    hd->arch.amd.root_table = NULL;
+    spin_unlock(&hd->arch.mapping_lock);
+}
+
 static void amd_iommu_domain_destroy(struct domain *d)
 {
-    dom_iommu(d)->arch.amd.root_table = NULL;
+    ASSERT(!dom_iommu(d)->arch.amd.root_table);
 }
 
 static int amd_iommu_add_device(u8 devfn, struct pci_dev *pdev)
@@ -565,6 +574,7 @@ static const struct iommu_ops __initconstrel _iommu_ops = {
     .remove_device = amd_iommu_remove_device,
     .assign_device  = amd_iommu_assign_device,
     .teardown = amd_iommu_domain_destroy,
+    .clear_root_pgtable = amd_iommu_clear_root_pgtable,
     .map_page = amd_iommu_map_page,
     .unmap_page = amd_iommu_unmap_page,
     .iotlb_flush = amd_iommu_flush_iotlb_pages,
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index b549a71530d5..475efb3be3bd 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1726,6 +1726,15 @@ out:
     return ret;
 }
 
+static void iommu_clear_root_pgtable(struct domain *d)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+
+    spin_lock(&hd->arch.mapping_lock);
+    hd->arch.vtd.pgd_maddr = 0;
+    spin_unlock(&hd->arch.mapping_lock);
+}
+
 static void iommu_domain_teardown(struct domain *d)
 {
     struct domain_iommu *hd = dom_iommu(d);
@@ -1740,7 +1749,7 @@ static void iommu_domain_teardown(struct domain *d)
         xfree(mrmrr);
     }
 
-    hd->arch.vtd.pgd_maddr = 0;
+    ASSERT(!hd->arch.vtd.pgd_maddr);
 }
 
 static int __must_check intel_iommu_map_page(struct domain *d, dfn_t dfn,
@@ -2731,6 +2740,7 @@ static struct iommu_ops __initdata vtd_ops = {
     .remove_device = intel_iommu_remove_device,
     .assign_device  = intel_iommu_assign_device,
     .teardown = iommu_domain_teardown,
+    .clear_root_pgtable = iommu_clear_root_pgtable,
     .map_page = intel_iommu_map_page,
     .unmap_page = intel_iommu_unmap_page,
     .lookup_page = intel_iommu_lookup_page,
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index ad19b7dd461c..b90bb31bfeea 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -149,6 +149,13 @@ int arch_iommu_domain_init(struct domain *d)
 
 void arch_iommu_domain_destroy(struct domain *d)
 {
+    /*
+     * There should be no page-tables left allocated by the time the
+     * domain is destroyed. Note that arch_iommu_domain_destroy() is
+     * called unconditionally, so pgtables may be uninitialized.
+     */
+    ASSERT(!dom_iommu(d)->platform_ops ||
+           page_list_empty(&dom_iommu(d)->arch.pgtables.list));
 }
 
 static bool __hwdom_init hwdom_iommu_map(const struct domain *d,
@@ -273,6 +280,12 @@ int iommu_free_pgtables(struct domain *d)
     /* After this barrier, no new IOMMU mappings can be inserted. */
     spin_barrier(&hd->arch.mapping_lock);
 
+    /*
+     * Pages will be moved to the free list below. So we want to
+     * clear the root page-table to avoid any potential use after-free.
+     */
+    hd->platform_ops->clear_root_pgtable(d);
+
     while ( (pg = page_list_remove_head(&hd->arch.pgtables.list)) )
     {
         free_domheap_page(pg);
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 863a68fe1622..d59ed7cbad43 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -272,6 +272,7 @@ struct iommu_ops {
 
     int (*adjust_irq_affinities)(void);
     void (*sync_cache)(const void *addr, unsigned int size);
+    void (*clear_root_pgtable)(struct domain *d);
 #endif /* CONFIG_X86 */
 
     int __must_check (*suspend)(void);
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Feb 26 10:59:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 10:59:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90229.170762 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFaqY-0003va-4z; Fri, 26 Feb 2021 10:59:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90229.170762; Fri, 26 Feb 2021 10:59:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFaqY-0003vT-1m; Fri, 26 Feb 2021 10:59:46 +0000
Received: by outflank-mailman (input) for mailman id 90229;
 Fri, 26 Feb 2021 10:59:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Uova=H4=cert.pl=hubert.jasudowicz@srs-us1.protection.inumbo.net>)
 id 1lFaqW-0003rd-SC
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 10:59:44 +0000
Received: from mx.nask.net.pl (unknown [195.187.55.89])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fa822b0b-cb6d-4cd6-bd58-8919d5d719ee;
 Fri, 26 Feb 2021 10:59:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fa822b0b-cb6d-4cd6-bd58-8919d5d719ee
From: Hubert Jasudowicz <hubert.jasudowicz@cert.pl>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Micha=C5=82=20Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
Subject: [PATCH] tools: Improve signal handling in xen-vmtrace
Date: Fri, 26 Feb 2021 11:59:26 +0100
Message-Id: <26720bf5c8258e1b7b4600af3648039b5b9ee18d.1614336820.git.hubert.jasudowicz@cert.pl>
X-Mailer: git-send-email 2.30.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Make sure xen-vmtrace exits cleanly in case SIGPIPE is sent. This can
happen when piping the output to some other program.

Additionally, add a volatile qualifier to the interrupted flag to avoid
it being optimized away by the compiler.

Signed-off-by: Hubert Jasudowicz <hubert.jasudowicz@cert.pl>
---
 tools/misc/xen-vmtrace.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/tools/misc/xen-vmtrace.c b/tools/misc/xen-vmtrace.c
index 7572e880c5..e2da043058 100644
--- a/tools/misc/xen-vmtrace.c
+++ b/tools/misc/xen-vmtrace.c
@@ -43,7 +43,7 @@ static uint32_t domid, vcpu;
 static size_t size;
 static char *buf;
 
-static sig_atomic_t interrupted;
+static volatile sig_atomic_t interrupted;
 static void int_handler(int signum)
 {
     interrupted = 1;
@@ -81,6 +81,9 @@ int main(int argc, char **argv)
     if ( signal(SIGINT, int_handler) == SIG_ERR )
         err(1, "Failed to register signal handler\n");
 
+    if ( signal(SIGPIPE, int_handler) == SIG_ERR )
+        err(1, "Failed to register signal handler\n");
+
     if ( argc != 3 )
     {
         fprintf(stderr, "Usage: %s <domid> <vcpu_id>\n", argv[0]);
-- 
2.30.0



From xen-devel-bounces@lists.xenproject.org Fri Feb 26 11:03:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 11:03:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90237.170774 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFatb-0004sp-MC; Fri, 26 Feb 2021 11:02:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90237.170774; Fri, 26 Feb 2021 11:02:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFatb-0004si-IO; Fri, 26 Feb 2021 11:02:55 +0000
Received: by outflank-mailman (input) for mailman id 90237;
 Fri, 26 Feb 2021 11:02:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CSsz=H4=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lFatZ-0004sd-Vr
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 11:02:54 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9e6dc666-694f-4581-8b99-22baa0e9fbc2;
 Fri, 26 Feb 2021 11:02:52 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9e6dc666-694f-4581-8b99-22baa0e9fbc2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614337372;
  h=from:to:cc:subject:date:message-id:mime-version;
  bh=jWNWv9ZWxp5MxdNi+omfIDs1XJ19WhHHHl6Ktv2lyDw=;
  b=FsFxJfFXLJaW5nzxxLOO0WtPTCTOZIr9l69P1xaAwdcNP93QQqwNywxZ
   JXZi5MyTMfKgUwj6R+itdPvYZWj5Byj4t5+XcSqe6aZ/YBLTsHRdefJf9
   K3BtJlhNu5UGQZLQ6JFCBt4c6qZYRgoSXc4/uzsarfJ241HPrDnifz74Z
   c=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: IDhMIlL0wXLN/4l8nk39DDobQVsAXzdgGqgXTB1flrPcbJj1p4NekC2N1bQlZxHBY8MNhDfv2l
 dRNIRGr/aLUWmz3Kyb1mOQdnokG1apr0COy8CK5Yd7fVfoKkdSu3g4Zkyj8mIlzSGh5CWdfThr
 Teec625v7ZKTSY6dnaLFk3WowuLn0L7MofODyaWkXBqkSJ4Gk80cKy8sE/tS2dXUEtTT2vieYH
 TmYvlsX7QTFM45lkwL8sRbllXWm4DT7CN/ZDWQZuzJFh3wO43wwdwzqWvpsnBYgsToQVRctu4t
 Q2Q=
X-SBRS: 5.2
X-MesageID: 38090888
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,208,1610427600"; 
   d="scan'208";a="38090888"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Doug Goldstein
	<cardoe@cardoe.com>, Ian Jackson <iwj@xenproject.org>, Stefano Stabellini
	<sstabellini@kernel.org>
Subject: [PATCH for-4.15] automation: Fix the Alpine clang builds to use clang
Date: Fri, 26 Feb 2021 11:02:33 +0000
Message-ID: <20210226110233.27991-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain

Looks like a copy&paste error.

Fixes: f6e1d8515d7 ("automation: add alpine linux x86 build jobs")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Doug Goldstein <cardoe@cardoe.com>
CC: Ian Jackson <iwj@xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>
---
 automation/gitlab-ci/build.yaml | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/automation/gitlab-ci/build.yaml b/automation/gitlab-ci/build.yaml
index d00b8a5123..23ab81d892 100644
--- a/automation/gitlab-ci/build.yaml
+++ b/automation/gitlab-ci/build.yaml
@@ -443,13 +443,13 @@ alpine-3.12-gcc-debug:
   allow_failure: true
 
 alpine-3.12-clang:
-  extends: .gcc-x86-64-build
+  extends: .clang-x86-64-build
   variables:
     CONTAINER: alpine:3.12
   allow_failure: true
 
 alpine-3.12-clang-debug:
-  extends: .gcc-x86-64-build-debug
+  extends: .clang-x86-64-build-debug
   variables:
     CONTAINER: alpine:3.12
   allow_failure: true
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Feb 26 11:23:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 11:23:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90249.170785 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFbDm-000719-D0; Fri, 26 Feb 2021 11:23:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90249.170785; Fri, 26 Feb 2021 11:23:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFbDm-000712-A6; Fri, 26 Feb 2021 11:23:46 +0000
Received: by outflank-mailman (input) for mailman id 90249;
 Fri, 26 Feb 2021 11:23:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Uova=H4=cert.pl=hubert.jasudowicz@srs-us1.protection.inumbo.net>)
 id 1lFbDk-00070d-MY
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 11:23:44 +0000
Received: from mx.nask.net.pl (unknown [195.187.55.89])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 82f68d00-39a3-4f96-b122-169165b395cd;
 Fri, 26 Feb 2021 11:23:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 82f68d00-39a3-4f96-b122-169165b395cd
Date: Fri, 26 Feb 2021 12:23:40 +0100
From: Hubert Jasudowicz <hubert.jasudowicz@cert.pl>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	=?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
Subject: Re: [PATCH] tools: Improve signal handling in xen-vmtrace
Message-ID: <20210226112340.xiygnu4qclixswuc@arnold.localdomain>
Mail-Followup-To: xen-devel@lists.xenproject.org,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	=?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
References: <26720bf5c8258e1b7b4600af3648039b5b9ee18d.1614336820.git.hubert.jasudowicz@cert.pl>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <26720bf5c8258e1b7b4600af3648039b5b9ee18d.1614336820.git.hubert.jasudowicz@cert.pl>

On 2021-02-26, Hubert Jasudowicz wrote:
> Make sure xen-vmtrace exits cleanly in case SIGPIPE is sent. This can
> happen when piping the output to some other program.
> 
> Additionally, add a volatile qualifier to the interrupted flag to avoid
> it being optimized away by the compiler.
> 
> Signed-off-by: Hubert Jasudowicz <hubert.jasudowicz@cert.pl>
> ---
>  tools/misc/xen-vmtrace.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/tools/misc/xen-vmtrace.c b/tools/misc/xen-vmtrace.c
> index 7572e880c5..e2da043058 100644
> --- a/tools/misc/xen-vmtrace.c
> +++ b/tools/misc/xen-vmtrace.c
> @@ -43,7 +43,7 @@ static uint32_t domid, vcpu;
>  static size_t size;
>  static char *buf;
>  
> -static sig_atomic_t interrupted;
> +static volatile sig_atomic_t interrupted;
>  static void int_handler(int signum)
>  {
>      interrupted = 1;
> @@ -81,6 +81,9 @@ int main(int argc, char **argv)
>      if ( signal(SIGINT, int_handler) == SIG_ERR )
>          err(1, "Failed to register signal handler\n");
>  
> +    if ( signal(SIGPIPE, int_handler) == SIG_ERR )
> +        err(1, "Failed to register signal handler\n");
> +
>      if ( argc != 3 )
>      {
>          fprintf(stderr, "Usage: %s <domid> <vcpu_id>\n", argv[0]);
> -- 
> 2.30.0
> 
> 

Oops, forgot 4.15 tag. But IMO this should be included.

Thanks
Hubert Jasudowicz
CERT Polska


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 12:20:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 12:20:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90277.170803 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFc6K-0004o7-SC; Fri, 26 Feb 2021 12:20:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90277.170803; Fri, 26 Feb 2021 12:20:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFc6K-0004o0-Ol; Fri, 26 Feb 2021 12:20:08 +0000
Received: by outflank-mailman (input) for mailman id 90277;
 Fri, 26 Feb 2021 12:20:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFc6I-0004k2-O7; Fri, 26 Feb 2021 12:20:06 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFc6I-0002p2-Fz; Fri, 26 Feb 2021 12:20:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFc6I-0007XW-7S; Fri, 26 Feb 2021 12:20:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lFc6I-0002Gs-6y; Fri, 26 Feb 2021 12:20:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=8htBbgjrKD065zae/JVwgjmkfF+OMh/6oqB3mbRrojs=; b=U0oEZiNal6PB0N4nFbAZB9VSKY
	pgvN4vG/CNndLgvbrZSL1IPjhJaUmjvM+rnA6QNx9gbcX+WBbLdeVV7sQI2S2NnYJykcBHnS52KcU
	2NSIV6d10BONmjAkqogqVnnn+9mNn5h9sGRd8gBGdoLZwmPzA4hcW7ylaXRu3FJDLiFE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159695-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 159695: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=6ffbb3581ab7c25a35041bac03b760af54f852bf
X-Osstest-Versions-That:
    ovmf=7f34681c488aee2563eaa2afcc6a2c8aa7c5b912
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 26 Feb 2021 12:20:06 +0000

flight 159695 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159695/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 6ffbb3581ab7c25a35041bac03b760af54f852bf
baseline version:
 ovmf                 7f34681c488aee2563eaa2afcc6a2c8aa7c5b912

Last test of basis   159676  2021-02-25 16:11:45 Z    0 days
Testing same since   159695  2021-02-26 06:09:43 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Pierre Gondois <Pierre.Gondois@arm.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   7f34681c48..6ffbb3581a  6ffbb3581ab7c25a35041bac03b760af54f852bf -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 12:21:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 12:21:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90281.170819 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFc7e-0004vr-6n; Fri, 26 Feb 2021 12:21:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90281.170819; Fri, 26 Feb 2021 12:21:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFc7e-0004vk-3X; Fri, 26 Feb 2021 12:21:30 +0000
Received: by outflank-mailman (input) for mailman id 90281;
 Fri, 26 Feb 2021 12:21:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFc7d-0004vc-Kb; Fri, 26 Feb 2021 12:21:29 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFc7d-0002pi-H0; Fri, 26 Feb 2021 12:21:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFc7d-0007a8-9U; Fri, 26 Feb 2021 12:21:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lFc7d-0003B2-8y; Fri, 26 Feb 2021 12:21:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=zfcF/2z8ss3WcBH57BK2Rr0AXzQcEkdgO3u19+cpgnU=; b=h1o832fx9nI9dHbgVlPKrKD0z/
	glDhFpF48NYxUxZpSHUmg9BaM2GzpM5X464TWZ6tLrFe26BVlp3CFr302LK/IZFqOLORZMjZjW5iq
	SMDUzd3aXTuw2gqjqkctYrdiq3GxD2l0DccD5zjngUwBjYU93SpXelCyTRZ7CbkbjP0w=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159704-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 159704: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=109e8177fd4a225e7025c4c17d2c9537b550b4ed
X-Osstest-Versions-That:
    xen=fc8fb368515391374e5f170a1e07205d914bc14a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 26 Feb 2021 12:21:29 +0000

flight 159704 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159704/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  109e8177fd4a225e7025c4c17d2c9537b550b4ed
baseline version:
 xen                  fc8fb368515391374e5f170a1e07205d914bc14a

Last test of basis   159674  2021-02-25 16:01:33 Z    0 days
Testing same since   159704  2021-02-26 10:01:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Dario Faggioli <dfaggioli@suse.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   fc8fb36851..109e8177fd  109e8177fd4a225e7025c4c17d2c9537b550b4ed -> smoke


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 12:43:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 12:43:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90288.170837 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFcSS-00075O-VC; Fri, 26 Feb 2021 12:43:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90288.170837; Fri, 26 Feb 2021 12:43:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFcSS-00075H-Rw; Fri, 26 Feb 2021 12:43:00 +0000
Received: by outflank-mailman (input) for mailman id 90288;
 Fri, 26 Feb 2021 12:42:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CSsz=H4=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lFcSR-00075B-05
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 12:42:59 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2835eca6-d918-45ce-af5a-c95a4eb760f6;
 Fri, 26 Feb 2021 12:42:57 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2835eca6-d918-45ce-af5a-c95a4eb760f6
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614343377;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=Plcln42Bxmn51jEPOQUPEgQUTHPDDOSPKgUz/zOKVoo=;
  b=eo+hnzKB6fgTnFB2rhs45XpWgRCcioZlQ2EKhFG7gSdGO3d44sWFoLsg
   dH2q4kn2ARrUb4e02Fzg3xrAM1qJ7l9QtVW6WlsrG4XWdmUWypcef92NX
   /FADa6iK12xW9Qe61QoK4k74pQz5O/sHsjy6619RWOWMuOVCgKt0cXpIj
   E=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: g6cKc/XBmyRulc3N+O3hjjTNbEo9YqhlzPA1WlhInS2S60gA8wAXvN4PDMXgZHJ9JSJLsdcjMh
 +Fa5Uw4VFi6L/nbKYXluk4u1xEna/KtKmCUhv+KcpAFf20FHpVrj7fXEfmZ4zz1ja9ISstSIv9
 rMvWEn26L3kXTfxTE+0v/nDF6HObI9x3xWXq863LwoAQuuX98iYl2dCR3v1m2qCVBVfXI8340y
 p/9mgwy61PiMez5Cx3RVDTj45MIuxwcbrjCzivo0EsJcsonbGYE6zw6B4F61gkrPM7g1OB9cyD
 v2U=
X-SBRS: 5.2
X-MesageID: 38292476
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,208,1610427600"; 
   d="scan'208";a="38292476"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Cd4HtjIHt1Q7JqcAgGcgqlT9EI+Xp3EU/7KAs8mtRlgqt99nU4s8yse4eciIGMpK/D89RILWZLp481ISbbn6n+nvRZuiTNvHbBNOM+dF4ddt6WblM3U+cBEosHt3/oyBWYgL1yGkvQ7cdy7NoxV31H+g406pCJ0t91qTl/6FGAYj+7PsNCYNbp8mn79S84r1RyMYabLr5LgG6PEsKateCkQMGug9nxrPk+6PlWbhv5ojZk68EjvCAeHaPGBmy5xXWJ0087hv+XwzO3/+yZVAK9IDXmejugxvznBD0ZoShzNsMbQAJ50jY7aOX6Z0dnSfQIWQudqT1JZXfLDf2oLjsg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=qJkm1mjHEZI916fYf+gCBZniTqvzXzWhCGrKE9LdiVs=;
 b=ZMUZyVgjexr/xRXWeJ1+yCaLq8djm/sR9eKiEDactKycgVf9AquMu1p0lgCAsWNCUEz/4kmAVKPDWfjbgnWkpVSyLbGnpYnPxD30iFoUfFUUBR8GXgwXFpYFA+LoMAtRm6rcq26ydbh85XdAnJCigdkI3i11kQyZqtFoSBjGzfpgchO585m6TxdwfvJ559W3gUD1e+8t0U++OiX2o3IpIYaVrK2YmEuK0T2GEFoaYZOqkryR5EC+GEaldEkT9DwfTdVYDcg2YwKRoTzEiIGDh9F/8EqI9jxFzOwI4U6QiYbAX7guLPci7fna6haxsVkUiZyubB3fT5dxwQP5cl+JuQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=qJkm1mjHEZI916fYf+gCBZniTqvzXzWhCGrKE9LdiVs=;
 b=p5UQns8rTTgDzDPIZfVBED3o/XbaPKvsHJsV3HHJJ4UIKM90oXy8ra3VMal8+InUlIv/rttq3R9xJH+tYFSLBjukme3dZLapuEo+kdGfHQWaXtFebbo4ftnRMA6n4N7X54FhxUnuHdDNtISL0NZ18j2lH94cxLtVXxOC2tAZInU=
Subject: Re: [PATCH 2/3] tools/firmware: Build firmware as -ffreestanding
To: Jan Beulich <jbeulich@suse.com>
CC: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Ian Jackson <iwj@xenproject.org>, Xen-devel
	<xen-devel@lists.xenproject.org>
References: <20210225203010.11378-1-andrew.cooper3@citrix.com>
 <20210225203010.11378-3-andrew.cooper3@citrix.com>
 <23f64cfe-8908-79e1-253b-ad07b7aee00a@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <b7292a46-b5f4-fd94-af5b-31c709110219@citrix.com>
Date: Fri, 26 Feb 2021 12:42:41 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
In-Reply-To: <23f64cfe-8908-79e1-253b-ad07b7aee00a@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0148.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:188::9) To DM6PR03MB3627.namprd03.prod.outlook.com
 (2603:10b6:5:ab::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: f9bf0654-12eb-48fd-7cd1-08d8da540634
X-MS-TrafficTypeDiagnostic: DM5PR03MB2938:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM5PR03MB2938A99B41AAF7E555E1DDF4BA9D9@DM5PR03MB2938.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: f9bf0654-12eb-48fd-7cd1-08d8da540634
X-MS-Exchange-CrossTenant-AuthSource: DM6PR03MB3627.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Feb 2021 12:42:49.1554
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ta0NponF9oQye5WSaw54QrcP42RJJJ8K6Mv+wZn0Kf/n9V6uux32UgZI9nfedn6maS2PWqjwQmwrjgbzJLHNN0357JGl2UovF4seKIoZdWg=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB2938
X-OriginatorOrg: citrix.com

On 26/02/2021 09:10, Jan Beulich wrote:
> On 25.02.2021 21:30, Andrew Cooper wrote:
>> Firmware should always have been built -ffreestanding, as it doesn't execute
>> in the host environment.
>>
>> inttypes.h isn't a freestanding header, but 32bitbios_support.c only wants
>> the stdint.h types, so switch to the more appropriate include.
>>
>> This removes the build-time dependency on a 32-bit libc just to compile
>> hvmloader and friends.
>>
>> Update README and the TravisCI configuration.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> preferably with one further adjustment:
>
>> --- a/tools/firmware/Rules.mk
>> +++ b/tools/firmware/Rules.mk
>> @@ -16,4 +16,4 @@ CFLAGS += -Werror
>>  $(call cc-options-add,CFLAGS,CC,$(EMBEDDED_EXTRA_CFLAGS))
>>  
>>  # Extra CFLAGS suitable for an embedded type of environment.
>> -CFLAGS += -fno-builtin -msoft-float
>> +CFLAGS += -fno-builtin -msoft-float -ffreestanding
> As per the gcc documentation, -ffreestanding implies -fno-builtin, so I
> think you want to replace that one instead of adding the new option on top.

Oops yes - fixed.

Thanks.

~Andrew
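For readers unfamiliar with the option under discussion: -ffreestanding tells the compiler the code does not run in a hosted environment, so a translation unit may rely only on the headers the compiler itself provides (stdint.h, stddef.h and friends), never on libc. A minimal sketch of the kind of code this permits — checksum() here is an invented illustration, not actual hvmloader code:

```c
/* Freestanding-safe translation unit: it includes only compiler-provided
 * headers, so it builds with `gcc -ffreestanding -c` and needs no libc.
 * checksum() is a made-up example, not code from the Xen firmware. */
#include <stdint.h>
#include <stddef.h>

uint32_t checksum(const uint8_t *buf, size_t len)
{
    uint32_t sum = 0;

    for ( size_t i = 0; i < len; i++ )
        sum += buf[i];

    return sum;
}
```

Compiled with something like `gcc -m32 -ffreestanding -c`, such a file needs no 32-bit libc installed — which is exactly the build dependency the patch removes.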


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 12:47:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 12:47:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90293.170857 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFcWZ-0007Ho-K9; Fri, 26 Feb 2021 12:47:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90293.170857; Fri, 26 Feb 2021 12:47:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFcWZ-0007Hh-GS; Fri, 26 Feb 2021 12:47:15 +0000
Received: by outflank-mailman (input) for mailman id 90293;
 Fri, 26 Feb 2021 12:47:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CSsz=H4=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lFcWY-0007Hc-FD
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 12:47:14 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ac330070-f04c-4735-a967-284d75cf0018;
 Fri, 26 Feb 2021 12:47:13 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ac330070-f04c-4735-a967-284d75cf0018
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614343633;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=TTmReDTxShveKOqNK8OL+sYeF1+NYzAecet5R2MbJoQ=;
  b=Vvwtc42JnZyWdVfNAtCH4MU0l1czFFPuFEAR5S1UKuIqop2Qrs5orBbT
   FUYGleMyZ8s+jpa1N8FRXIzrUGOnz8MeEEslMHt0x0meajYQU9NxQEZTv
   BmKH4hYi62KZGokZsUD3M3vQzgwDLV926Ut4y9L/fvBfMlF4CqxJLoeR6
   w=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: v8APfeoICnLJYMl1h45CPp0NUX/+VzU7qZplYHZ8oFSXCiHK73o+vgjTdcOzZGugHa2RuFIThM
 I7SWBox6xaHu2fYt0yoN/tRWay5aY7sIHUDJHDAjIXZCf4gLjDQyU6UX/ttGSjEcCqLX4ZoPv0
 tzOAUaoMeAajbmpBIJBMOBCGU9q7sdJLdkyrOHL25ioXWf9CZL1a/E1o517e836GePZvDOsUEW
 vCz6lwZIhsYw386mbs4zw4nCpFNI3b1IYvNGU2UI2Q0JOmJAqTVcBBLN6IyjA4BDDlo/ltI94p
 PJ0=
X-SBRS: 5.2
X-MesageID: 38292703
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,208,1610427600"; 
   d="scan'208";a="38292703"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Ian Jackson
	<iwj@xenproject.org>
Subject: [PATCH for-4.15] cirrus-ci: Drop obsolete dependency
Date: Fri, 26 Feb 2021 12:46:47 +0000
Message-ID: <20210226124647.19596-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

The markdown dependency was dropped in Xen 4.12.

Fixes: 5d94433a66 ("cirrus-ci: introduce some basic FreeBSD testing")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Roger Pau MonnÃ© <roger.pau@citrix.com>
CC: Ian Jackson <iwj@xenproject.org>

https://cirrus-ci.com/build/6589407613419520 is a successful run with this
change in effect.
---
 .cirrus.yml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/.cirrus.yml b/.cirrus.yml
index 5e3e46368e..0efff6fa98 100644
--- a/.cirrus.yml
+++ b/.cirrus.yml
@@ -4,7 +4,7 @@ freebsd_template: &FREEBSD_TEMPLATE
     APPEND_LIB: /usr/local/lib
     APPEND_INCLUDES: /usr/local/include
 
-  install_script: pkg install -y seabios markdown gettext-tools gmake
+  install_script: pkg install -y seabios gettext-tools gmake
                                  pkgconf python libiconv bison perl5
                                  yajl lzo2 pixman argp-standalone
                                  libxml2 glib git
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Feb 26 13:16:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 13:16:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90299.170872 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFcz8-00020w-0x; Fri, 26 Feb 2021 13:16:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90299.170872; Fri, 26 Feb 2021 13:16:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFcz7-00020p-U9; Fri, 26 Feb 2021 13:16:45 +0000
Received: by outflank-mailman (input) for mailman id 90299;
 Fri, 26 Feb 2021 13:16:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ScZz=H4=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lFcz6-00020k-JP
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 13:16:44 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1aa8f9a8-fdac-4ffe-9899-35469cded309;
 Fri, 26 Feb 2021 13:16:43 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3B9DDAD74;
 Fri, 26 Feb 2021 13:16:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1aa8f9a8-fdac-4ffe-9899-35469cded309
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614345402; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=gPicBlvbf8VxlREpUzvDTkJ45cjDUMZKb5OA8osUY9A=;
	b=nO60bCQ37/R22NjlBvecJb14IxPqjYbIcmh0cm6ukT5HpHO/pzHFMDMwld1TrP3we9GW8d
	KSKfxxideYrGYBKl78uCdMFYrBdkJUHQkDU4SjH8jNdBuPSETwuCI1MvT9CUMOoH01pCRu
	rvtYzo5B9GvFKRG9O3y3dtBmd+bNMys=
From: Juergen Gross <jgross@suse.com>
To: torvalds@linux-foundation.org
Cc: linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com
Subject: [GIT PULL] xen: branch for v5.12-rc1
Date: Fri, 26 Feb 2021 14:16:41 +0100
Message-Id: <20210226131641.4309-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Linus,

Please git pull the following tag:

 git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.12b-rc1-tag

xen: branch for v5.12-rc1

It contains:

- A small series for Xen event channels adding some sysfs nodes for
  per-pv-device settings and statistics, plus two fixes of theoretical
  problems.

- Two minor fixes (one for an unlikely error path, one for a comment).

Thanks.

Juergen

 Documentation/ABI/testing/sysfs-devices-xenbus | 41 ++++++++++++++++
 drivers/xen/events/events_base.c               | 27 ++++++++++-
 drivers/xen/evtchn.c                           | 29 ++++++-----
 drivers/xen/xen-acpi-processor.c               |  3 +-
 drivers/xen/xen-front-pgdir-shbuf.c            | 11 ++++-
 drivers/xen/xenbus/xenbus_probe.c              | 66 ++++++++++++++++++++++++++
 include/xen/xenbus.h                           |  7 +++
 7 files changed, 168 insertions(+), 16 deletions(-)

Jan Beulich (1):
      xen-front-pgdir-shbuf: don't record wrong grant handle upon error

Juergen Gross (3):
      xen/events: add per-xenbus device event statistics and settings
      xen/evtchn: use smp barriers for user event ring
      xen/evtchn: use READ/WRITE_ONCE() for accessing ring indices

Kees Cook (1):
      xen: Replace lkml.org links with lore


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 13:25:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 13:25:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90302.170884 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFd6t-00035R-R2; Fri, 26 Feb 2021 13:24:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90302.170884; Fri, 26 Feb 2021 13:24:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFd6t-00035K-O6; Fri, 26 Feb 2021 13:24:47 +0000
Received: by outflank-mailman (input) for mailman id 90302;
 Fri, 26 Feb 2021 13:24:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=eEmz=H4=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lFd6s-00035F-SN
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 13:24:46 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 56aa94ae-98bf-4783-861e-817ff318cb2e;
 Fri, 26 Feb 2021 13:24:43 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 903F8AE03;
 Fri, 26 Feb 2021 13:24:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 56aa94ae-98bf-4783-861e-817ff318cb2e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614345882; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=OUpcpPMutVwL7WbdAb54o2pRIBfzT/3WyF0xRDCbN5U=;
	b=uI03xR78/W73PAWoxsNWqSWxiIq3+t67AQo+C4nQzaYDjTsHotzXAcumaL/L2gaeB3u8PN
	nl+hnVLX5zo4HNls/gyh9gK4vdP25VZ14O2BAmmxfgQ3EqlqhzVzH48lESoadfoq5LHJ//
	qWFKTkPfYrbWUQAR1/Scq+2Pnp8OdfQ=
Subject: Re: [PATCH for-4.15 2/3] firmware: provide a stand alone set of
 headers
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20210226085908.21254-1-roger.pau@citrix.com>
 <20210226085908.21254-3-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <2133ba4c-5120-30ca-1328-c8700fd2db94@suse.com>
Date: Fri, 26 Feb 2021 14:24:43 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210226085908.21254-3-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 26.02.2021 09:59, Roger Pau Monne wrote:
> The current build of the firmware relies on having 32-bit compatible
> headers installed in order to build some of the 32-bit firmware, but
> that usually requires multilib support and installing an i386 libc when
> building from an amd64 environment, which is cumbersome just to get
> some headers.
> 
> Usually this could be solved by using the -ffreestanding compiler
> option, which drops the usage of the system headers in favor of a
> private set of freestanding headers provided by the compiler itself
> that are not tied to libc. However, the option is broken at least
> in the gcc compiler provided in Alpine Linux, as the system include
> path (i.e. /usr/include) takes precedence over the gcc private include
> path:
> 
> #include <...> search starts here:
>  /usr/include
>  /usr/lib/gcc/x86_64-alpine-linux-musl/10.2.1/include
> 
> Since -ffreestanding is currently broken on at least that distro, and
> for resilience against future compilers also having the option broken,
> provide a set of standalone 32-bit headers required for the firmware
> build.
> 
> This allows dropping the build-time dependency on an i386-compatible
> set of libc headers on amd64.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
with possibly small adjustments:

> ---
> There's the argument of fixing gcc in Alpine and instead just using
> -ffreestanding. I think that's more fragile than providing our own set
> of standalone headers for the firmware bits. Having the include paths
> wrongly sorted can easily result in the system headers being picked up
> instead of the gcc ones, and then building can randomly fail because
> the system headers could be amd64-only (like the musl ones).
> 
> I've also seen clang-9 on Debian with the following include paths:
> 
> #include "..." search starts here:
> #include <...> search starts here:
>  /usr/local/include
>  /usr/lib/llvm-9/lib/clang/9.0.1/include
>  /usr/include/x86_64-linux-gnu
>  /usr/include
> 
> Which also seems slightly dangerous as local comes before the compiler
> private path.
> 
> IMO using our own set of standalone headers is more resilient.

I agree (in particular given the observations), but I don't view
this as an argument against use of -ffreestanding. In fact I'd
rather see this change re-based on top of Andrew's changes. Then ...

> --- a/tools/firmware/Rules.mk
> +++ b/tools/firmware/Rules.mk
> @@ -17,3 +17,14 @@ $(call cc-options-add,CFLAGS,CC,$(EMBEDDED_EXTRA_CFLAGS))
>  
>  # Extra CFLAGS suitable for an embedded type of environment.
>  CFLAGS += -fno-builtin -msoft-float
> +
> +# Use our own set of library headers to build firmware.
> +#
> +# Ideally we would instead use -ffreestanding, but that relies on the compiler
> +# having the right order for include paths (ie: compiler private headers before
> +# system ones). This is not the case in Alpine at least which searches system
> +# headers before compiler ones, and has been reported upstream:
> +# https://gitlab.alpinelinux.org/alpine/aports/-/issues/12477
> +# In the meantime (and for resilience against broken compilers) use our own set
> +# of headers that provide what's needed for the firmware build.
> +CFLAGS += -nostdinc -I$(XEN_ROOT)/tools/firmware/include

... the initial part of the comment here would want re-wording.

> --- /dev/null
> +++ b/tools/firmware/include/stdint.h
> @@ -0,0 +1,39 @@
> +#ifndef _STDINT_H_
> +#define _STDINT_H_
> +
> +#ifdef __LP64__
> +#error "32bit only header"
> +#endif

Could I talk you into extending this to also cover __P64__? (The
alternative I see would be to omit this altogether.)

Jan


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 13:27:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 13:27:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90306.170895 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFd9x-0003Fc-DR; Fri, 26 Feb 2021 13:27:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90306.170895; Fri, 26 Feb 2021 13:27:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFd9x-0003FV-Ad; Fri, 26 Feb 2021 13:27:57 +0000
Received: by outflank-mailman (input) for mailman id 90306;
 Fri, 26 Feb 2021 13:27:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=eEmz=H4=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lFd9v-0003FQ-ST
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 13:27:55 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 26d7d8ff-d4d4-4e2f-8e94-b68cfabbcbc7;
 Fri, 26 Feb 2021 13:27:55 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 55400AD74;
 Fri, 26 Feb 2021 13:27:54 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 26d7d8ff-d4d4-4e2f-8e94-b68cfabbcbc7
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614346074; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=VdfAgz1jJHl2MSMAUG70pDDursofdtrzfUF1wKdG9tU=;
	b=kxRvvhGrI/YWuY/TbnEQq3m/i/6hLakOtRoqLAT0EH1+muLvA1oVsRgm2G0yDZpA7LKmkH
	UCrmdAhQwTAF5N7aScRgWH5B2fNKLRuGSUyKsR1JmoWd1Hq4+1HVj4tyJDodePkQwxtRgZ
	aU3BaUITnVahrPwg/UCqIbbWHlNEfNU=
Subject: Re: [PATCH for-4.15 v5 1/3] xen/iommu: x86: Don't try to free page
 tables if the IOMMU is not enabled
To: Julien Grall <julien@xen.org>
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Paul Durrant <paul@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210226105640.12037-1-julien@xen.org>
 <20210226105640.12037-2-julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d3cd9b5c-a03a-a305-4133-7c2fe0546d97@suse.com>
Date: Fri, 26 Feb 2021 14:27:55 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210226105640.12037-2-julien@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 26.02.2021 11:56, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> When using CONFIG_BIGMEM=y, the page_list cannot be accessed whilst it
> is uninitialized. However, iommu_free_pgtables() will be called even if
> the domain is not using an IOMMU.
> 
> Consequently, Xen will try to go through the page list and dereference
> a NULL pointer.
> 
> Bail out early if the domain is not using an IOMMU.
> 
> Fixes: 15bc9a1ef51c ("x86/iommu: add common page-table allocator")
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>



From xen-devel-bounces@lists.xenproject.org Fri Feb 26 13:30:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 13:30:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90311.170907 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFdCZ-0004GM-S7; Fri, 26 Feb 2021 13:30:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90311.170907; Fri, 26 Feb 2021 13:30:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFdCZ-0004GF-P8; Fri, 26 Feb 2021 13:30:39 +0000
Received: by outflank-mailman (input) for mailman id 90311;
 Fri, 26 Feb 2021 13:30:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=eEmz=H4=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lFdCY-0004GA-07
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 13:30:38 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e12d9243-f8e1-454a-934c-e09427245e9a;
 Fri, 26 Feb 2021 13:30:37 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 531BBAE03;
 Fri, 26 Feb 2021 13:30:36 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e12d9243-f8e1-454a-934c-e09427245e9a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614346236; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=rhIVzR9cysCPi4UlThSyprTCyT1rNKm5mSqhjhmOypQ=;
	b=YSJc+8b/+7NT921hE1KZu5/TpLI4sHjkpo6q1D0/G/4tWJzZcuuiutxx/lfjXGcNi8I268
	lwaaAJuzCAEJIiou3OQxflw2FdPwMGYtjzBpr1KJQaciFaSlwOudVLMlxOXwQrP7KGh8cd
	QF43wJK1bFCN+5XOEgHHDRjfMbgjSrw=
Subject: Re: [PATCH for-4.15 v5 2/3] xen/x86: iommu: Ignore IOMMU mapping
 requests when a domain is dying
To: Julien Grall <julien@xen.org>
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org,
 Julien Grall <jgrall@amazon.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>, Paul Durrant <paul@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210226105640.12037-1-julien@xen.org>
 <20210226105640.12037-3-julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1a386635-9c7d-3880-7e99-d87722fe5075@suse.com>
Date: Fri, 26 Feb 2021 14:30:37 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210226105640.12037-3-julien@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 26.02.2021 11:56, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> The new x86 IOMMU page-tables allocator will release the pages when
> relinquishing the domain resources. However, this is not sufficient
> when the domain is dying because nothing prevents page-tables from
> being allocated.
> 
> As the domain is dying, it is not necessary to continue to modify the
> IOMMU page-tables as they are going to be destroyed soon.
> 
> At the moment, page-table allocations will only happen in iommu_map().
> So after this change there will be no more page-table allocations,
> because we don't use superpage mappings yet when not sharing page
> tables.
> 
> In order to observe d->is_dying correctly, we need to rely on per-arch
> locking, so the check to ignore IOMMU mapping is added in the
> per-driver map_page() callback.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

Does this also want a Fixes: tag (the same as patch 1)?

Jan


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 14:42:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 14:42:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90351.170956 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFeK1-0003E5-TA; Fri, 26 Feb 2021 14:42:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90351.170956; Fri, 26 Feb 2021 14:42:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFeK1-0003Dx-Px; Fri, 26 Feb 2021 14:42:25 +0000
Received: by outflank-mailman (input) for mailman id 90351;
 Fri, 26 Feb 2021 14:42:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=77G0=H4=amazon.de=prvs=684d0ac3b=nmanthey@srs-us1.protection.inumbo.net>)
 id 1lFeJz-0003Af-Vl
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 14:42:24 +0000
Received: from smtp-fw-6002.amazon.com (unknown [52.95.49.90])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fd023e65-989c-4b31-9181-f9799691722c;
 Fri, 26 Feb 2021 14:42:18 +0000 (UTC)
Received: from iad12-co-svc-p1-lb1-vlan2.amazon.com (HELO
 email-inbound-relay-2a-1c1b5cdd.us-west-2.amazon.com) ([10.43.8.2])
 by smtp-border-fw-out-6002.iad6.amazon.com with ESMTP;
 26 Feb 2021 14:42:11 +0000
Received: from EX13D37EUB004.ant.amazon.com
 (pdx1-ws-svc-p6-lb9-vlan2.pdx.amazon.com [10.236.137.194])
 by email-inbound-relay-2a-1c1b5cdd.us-west-2.amazon.com (Postfix) with ESMTPS
 id 3C548A180F; Fri, 26 Feb 2021 14:42:10 +0000 (UTC)
Received: from EX13MTAUEB002.ant.amazon.com (10.43.60.12) by
 EX13D37EUB004.ant.amazon.com (10.43.166.187) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Fri, 26 Feb 2021 14:42:08 +0000
Received: from u6fc700a6f3c650.ant.amazon.com (10.1.212.27) by
 mail-relay.amazon.com (10.43.60.234) with Microsoft SMTP Server id
 15.0.1497.2 via Frontend Transport; Fri, 26 Feb 2021 14:42:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fd023e65-989c-4b31-9181-f9799691722c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.de; i=@amazon.de; q=dns/txt; s=amazon201209;
  t=1614350539; x=1645886539;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version;
  bh=sj+epzrMKHexT+eQInL0yberMRcZJMZtWGnBvr9EvEc=;
  b=jgPmjlxNqlt0FICNlzHrJi1Dedm5zhk1r+Ikn0QU1H8+OXQ1Fbc7y/tO
   u0ULtomnF746ELbdc3fieIU+tsZNiQog/UclJa3MsRRu209ka4AEj8Kyr
   /fejyC+PyxnKaUwwBYH69DS2lrmPe2gmDzuSGXEPpU236Nu8aqeHC4IHJ
   E=;
X-IronPort-AV: E=Sophos;i="5.81,208,1610409600"; 
   d="scan'208";a="91300593"
From: Norbert Manthey <nmanthey@amazon.de>
To: <xen-devel@lists.xenproject.org>
CC: Ian Jackson <iwj@xenproject.org>, Juergen Gross <jgross@suse.com>, Wei Liu
	<wl@xen.org>, Julien Grall <jgrall@amazon.co.uk>, Michael Kurth
	<mku@amazon.de>, Norbert Manthey <nmanthey@amazon.de>
Subject: [PATCH XENSTORE v1 01/10] xenstore: add missing NULL check
Date: Fri, 26 Feb 2021 15:41:35 +0100
Message-ID: <20210226144144.9252-2-nmanthey@amazon.de>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210226144144.9252-1-nmanthey@amazon.de>
References: <20210226144144.9252-1-nmanthey@amazon.de>
MIME-Version: 1.0
Content-Type: text/plain
Precedence: Bulk

In case of allocation error, we should not dereference the obtained
NULL pointer. Hence, fail early.

This bug was discovered and resolved using Coverity Static Analysis
Security Testing (SAST) by Synopsys, Inc.

Signed-off-by: Norbert Manthey <nmanthey@amazon.de>
Reviewed-by: Thomas Friebel <friebelt@amazon.de>
Reviewed-by: Julien Grall <jgrall@amazon.co.uk>

---
 tools/xenstore/xenstored_core.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -1818,6 +1818,10 @@ static int check_store_(const char *name, struct hashtable *reachable)
 
 		struct hashtable * children =
 			create_hashtable(16, hash_from_key_fn, keys_equal_fn);
+		if (!children) {
+			log("check_store create table: ENOMEM");
+			return ENOMEM;
+		}
 
 		if (!remember_string(reachable, name)) {
 			hashtable_destroy(children, 0);
-- 
2.17.1




Amazon Development Center Germany GmbH
Krausenstr. 38
10117 Berlin
Geschaeftsfuehrung: Christian Schlaeger, Jonathan Weiss
Eingetragen am Amtsgericht Charlottenburg unter HRB 149173 B
Sitz: Berlin
Ust-ID: DE 289 237 879





From xen-devel-bounces@lists.xenproject.org Fri Feb 26 14:42:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 14:42:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90350.170944 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFeJw-0003B7-LT; Fri, 26 Feb 2021 14:42:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90350.170944; Fri, 26 Feb 2021 14:42:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFeJw-0003B0-Hk; Fri, 26 Feb 2021 14:42:20 +0000
Received: by outflank-mailman (input) for mailman id 90350;
 Fri, 26 Feb 2021 14:42:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=77G0=H4=amazon.de=prvs=684d0ac3b=nmanthey@srs-us1.protection.inumbo.net>)
 id 1lFeJv-0003Af-6K
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 14:42:19 +0000
Received: from smtp-fw-6002.amazon.com (unknown [52.95.49.90])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 544f7879-e262-4601-8e9d-32a2530603a7;
 Fri, 26 Feb 2021 14:42:18 +0000 (UTC)
Received: from iad12-co-svc-p1-lb1-vlan2.amazon.com (HELO
 email-inbound-relay-2b-4e24fd92.us-west-2.amazon.com) ([10.43.8.2])
 by smtp-border-fw-out-6002.iad6.amazon.com with ESMTP;
 26 Feb 2021 14:42:15 +0000
Received: from EX13D37EUA004.ant.amazon.com
 (pdx1-ws-svc-p6-lb9-vlan2.pdx.amazon.com [10.236.137.194])
 by email-inbound-relay-2b-4e24fd92.us-west-2.amazon.com (Postfix) with ESMTPS
 id A0F63A245E; Fri, 26 Feb 2021 14:42:13 +0000 (UTC)
Received: from EX13MTAUEB002.ant.amazon.com (10.43.60.12) by
 EX13D37EUA004.ant.amazon.com (10.43.165.124) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Fri, 26 Feb 2021 14:42:11 +0000
Received: from u6fc700a6f3c650.ant.amazon.com (10.1.212.27) by
 mail-relay.amazon.com (10.43.60.234) with Microsoft SMTP Server id
 15.0.1497.2 via Frontend Transport; Fri, 26 Feb 2021 14:42:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 544f7879-e262-4601-8e9d-32a2530603a7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.de; i=@amazon.de; q=dns/txt; s=amazon201209;
  t=1614350538; x=1645886538;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version;
  bh=z/DnOJlIa9cXjD7A0d8As04PDDD4dbiymrcM3U2q7us=;
  b=Xe+NLs7N90CvzPGfZD8LqZo0L0ttXEi1LaXREJgMw7QtYBvvAwR80Nlp
   dRH8DetfBcXHcyW650gqM0Nt1Qeo3KnbeZ3zavde2RFD7e+tyQtTTfQfe
   zOZT6/E5EjcHvnKuQG86Zyt0blWN+DT+gOe0SaQhV/wkvJF50yoOFIaV4
   E=;
X-IronPort-AV: E=Sophos;i="5.81,208,1610409600"; 
   d="scan'208";a="91300630"
From: Norbert Manthey <nmanthey@amazon.de>
To: <xen-devel@lists.xenproject.org>
CC: Ian Jackson <iwj@xenproject.org>, Juergen Gross <jgross@suse.com>, Wei Liu
	<wl@xen.org>, Julien Grall <jgrall@amazon.co.uk>, Michael Kurth
	<mku@amazon.de>, Norbert Manthey <nmanthey@amazon.de>
Subject: [PATCH XENSTORE v1 02/10] xenstore: fix print format string
Date: Fri, 26 Feb 2021 15:41:36 +0100
Message-ID: <20210226144144.9252-3-nmanthey@amazon.de>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210226144144.9252-1-nmanthey@amazon.de>
References: <20210226144144.9252-1-nmanthey@amazon.de>
MIME-Version: 1.0
Content-Type: text/plain
Precedence: Bulk

Use the correct format specifiers for unsigned values: %zu for the
size_t length and %u for the unsigned fields. Additionally, a cast is
dropped, as the new format specifier no longer requires it.

This was reported by analysis with cppcheck.

Signed-off-by: Norbert Manthey <nmanthey@amazon.de>
Reviewed-by: Thomas Friebel <friebelt@amazon.de>
Reviewed-by: Julien Grall <jgrall@amazon.co.uk>

---
 tools/xenstore/xs_tdb_dump.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/tools/xenstore/xs_tdb_dump.c b/tools/xenstore/xs_tdb_dump.c
--- a/tools/xenstore/xs_tdb_dump.c
+++ b/tools/xenstore/xs_tdb_dump.c
@@ -59,8 +59,8 @@ int main(int argc, char *argv[])
 			fprintf(stderr, "%.*s: BAD truncated\n",
 				(int)key.dsize, key.dptr);
 		else if (data.dsize != total_size(hdr))
-			fprintf(stderr, "%.*s: BAD length %i for %i/%i/%i (%i)\n",
-				(int)key.dsize, key.dptr, (int)data.dsize,
+			fprintf(stderr, "%.*s: BAD length %zu for %u/%u/%u (%u)\n",
+				(int)key.dsize, key.dptr, data.dsize,
 				hdr->num_perms, hdr->datalen,
 				hdr->childlen, total_size(hdr));
 		else {
@@ -69,7 +69,7 @@ int main(int argc, char *argv[])
 
 			printf("%.*s: ", (int)key.dsize, key.dptr);
 			for (i = 0; i < hdr->num_perms; i++)
-				printf("%s%c%i",
+				printf("%s%c%u",
 				       i == 0 ? "" : ",",
 				       perm_to_char(hdr->perms[i].perms),
 				       hdr->perms[i].id);
-- 
2.17.1









From xen-devel-bounces@lists.xenproject.org Fri Feb 26 14:42:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 14:42:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90349.170932 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFeJq-00039X-Bt; Fri, 26 Feb 2021 14:42:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90349.170932; Fri, 26 Feb 2021 14:42:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFeJq-00039Q-8r; Fri, 26 Feb 2021 14:42:14 +0000
Received: by outflank-mailman (input) for mailman id 90349;
 Fri, 26 Feb 2021 14:42:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=77G0=H4=amazon.de=prvs=684d0ac3b=nmanthey@srs-us1.protection.inumbo.net>)
 id 1lFeJo-00039L-Hd
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 14:42:12 +0000
Received: from smtp-fw-4101.amazon.com (unknown [72.21.198.25])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8e33ec5c-6a09-4a29-8acd-5a78c6b34fad;
 Fri, 26 Feb 2021 14:42:11 +0000 (UTC)
Received: from iad12-co-svc-p1-lb1-vlan2.amazon.com (HELO
 email-inbound-relay-2a-d0be17ee.us-west-2.amazon.com) ([10.43.8.2])
 by smtp-border-fw-out-4101.iad4.amazon.com with ESMTP;
 26 Feb 2021 14:42:03 +0000
Received: from EX13D37EUA004.ant.amazon.com
 (pdx1-ws-svc-p6-lb9-vlan3.pdx.amazon.com [10.236.137.198])
 by email-inbound-relay-2a-d0be17ee.us-west-2.amazon.com (Postfix) with ESMTPS
 id EDDA7A25E2; Fri, 26 Feb 2021 14:42:02 +0000 (UTC)
Received: from EX13MTAUEB002.ant.amazon.com (10.43.60.12) by
 EX13D37EUA004.ant.amazon.com (10.43.165.124) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Fri, 26 Feb 2021 14:42:01 +0000
Received: from u6fc700a6f3c650.ant.amazon.com (10.1.212.27) by
 mail-relay.amazon.com (10.43.60.234) with Microsoft SMTP Server id
 15.0.1497.2 via Frontend Transport; Fri, 26 Feb 2021 14:41:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8e33ec5c-6a09-4a29-8acd-5a78c6b34fad
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.de; i=@amazon.de; q=dns/txt; s=amazon201209;
  t=1614350531; x=1645886531;
  h=from:to:cc:subject:date:message-id:mime-version;
  bh=MrpNszt3xmQUZDbSv/CS/qvXEazMfVAoaD1nb9pai1Q=;
  b=cuv3sbHfYAMqB6rIQ5lWVSjPU5bTrG/7Mc9kMpsxGvjZu5uA6PA9vpPh
   JPilmWxzksfEIz1EprZGGSskK8YV1E1tT4SxQmk+1t4yfnUsjOZgEipy5
   jmMz5rxu91NbvHBpa/ZU7mkqg3GP3IZdKl9zxHgMHVlvYI2nFM1/+cGHn
   c=;
X-IronPort-AV: E=Sophos;i="5.81,208,1610409600"; 
   d="scan'208";a="88627962"
From: Norbert Manthey <nmanthey@amazon.de>
To: <xen-devel@lists.xenproject.org>
CC: Ian Jackson <iwj@xenproject.org>, Juergen Gross <jgross@suse.com>, Wei Liu
	<wl@xen.org>, Julien Grall <jgrall@amazon.co.uk>, Michael Kurth
	<mku@amazon.de>, Norbert Manthey <nmanthey@amazon.de>
Subject: [PATCH XENSTORE v1 00/10] Code analysis fixes
Date: Fri, 26 Feb 2021 15:41:34 +0100
Message-ID: <20210226144144.9252-1-nmanthey@amazon.de>
X-Mailer: git-send-email 2.17.1
MIME-Version: 1.0
Content-Type: text/plain
Precedence: Bulk

Dear all,

We have been running some code analysis tools on the xenstore code and
triaged the results. This series presents the robustness fixes we
identified.

Best,
Norbert

Michael Kurth (1):
  xenstore: add missing NULL check

Norbert Manthey (9):
  xenstore: add missing NULL check
  xenstore: fix print format string
  xenstore: check formats of trace
  xenstore_client: handle memory on error
  xenstore: handle daemon creation errors
  xenstored: handle port reads correctly
  xenstore: handle do_mkdir and do_rm failure
  xs: handle daemon socket error
  xs: add error handling

 tools/libs/store/xs.c            | 10 +++++++++-
 tools/xenstore/xenstore_client.c |  3 +++
 tools/xenstore/xenstored_core.c  | 16 ++++++++++++++++
 tools/xenstore/xenstored_core.h  |  2 +-
 tools/xenstore/xenstored_posix.c |  6 +++++-
 tools/xenstore/xs_tdb_dump.c     |  6 +++---
 6 files changed, 37 insertions(+), 6 deletions(-)

-- 
2.17.1









From xen-devel-bounces@lists.xenproject.org Fri Feb 26 14:42:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 14:42:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90352.170968 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFeK7-0003Ib-71; Fri, 26 Feb 2021 14:42:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90352.170968; Fri, 26 Feb 2021 14:42:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFeK7-0003IU-2D; Fri, 26 Feb 2021 14:42:31 +0000
Received: by outflank-mailman (input) for mailman id 90352;
 Fri, 26 Feb 2021 14:42:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=77G0=H4=amazon.de=prvs=684d0ac3b=nmanthey@srs-us1.protection.inumbo.net>)
 id 1lFeK4-0003Af-W5
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 14:42:29 +0000
Received: from smtp-fw-6002.amazon.com (unknown [52.95.49.90])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4c304731-8eb6-43a7-9e85-74d2e28eb149;
 Fri, 26 Feb 2021 14:42:19 +0000 (UTC)
Received: from iad12-co-svc-p1-lb1-vlan2.amazon.com (HELO
 email-inbound-relay-2a-e7be2041.us-west-2.amazon.com) ([10.43.8.2])
 by smtp-border-fw-out-6002.iad6.amazon.com with ESMTP;
 26 Feb 2021 14:42:18 +0000
Received: from EX13D37EUB001.ant.amazon.com
 (pdx1-ws-svc-p6-lb9-vlan2.pdx.amazon.com [10.236.137.194])
 by email-inbound-relay-2a-e7be2041.us-west-2.amazon.com (Postfix) with ESMTPS
 id 75A1AA2428; Fri, 26 Feb 2021 14:42:17 +0000 (UTC)
Received: from EX13MTAUEB002.ant.amazon.com (10.43.60.12) by
 EX13D37EUB001.ant.amazon.com (10.43.166.31) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Fri, 26 Feb 2021 14:42:15 +0000
Received: from u6fc700a6f3c650.ant.amazon.com (10.1.212.27) by
 mail-relay.amazon.com (10.43.60.234) with Microsoft SMTP Server id
 15.0.1497.2 via Frontend Transport; Fri, 26 Feb 2021 14:42:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4c304731-8eb6-43a7-9e85-74d2e28eb149
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.de; i=@amazon.de; q=dns/txt; s=amazon201209;
  t=1614350539; x=1645886539;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version;
  bh=a4CF3aP7V8BLgQT8wGjqFYA8omaY5bulrWv/CPPvqmk=;
  b=unrra1cdH1zvPCzI6P0P2cMWPp73xYGqhDhfSN50qex9j4IFOCS0Lwi6
   7ROdkwS8YMYBO0yFlQ5lb/VgLu9FyCQwWOwP+HCeEQbOchSbmDa/wb6V0
   aigyb4u0o3N9p8123l9AumFW14+gtPwbn1DjRhuPhKUfMRa4UPaEyT1Y0
   4=;
X-IronPort-AV: E=Sophos;i="5.81,208,1610409600"; 
   d="scan'208";a="91300657"
From: Norbert Manthey <nmanthey@amazon.de>
To: <xen-devel@lists.xenproject.org>
CC: Ian Jackson <iwj@xenproject.org>, Juergen Gross <jgross@suse.com>, Wei Liu
	<wl@xen.org>, Julien Grall <jgrall@amazon.co.uk>, Michael Kurth
	<mku@amazon.de>, Norbert Manthey <nmanthey@amazon.de>
Subject: [PATCH XENSTORE v1 03/10] xenstore: check formats of trace
Date: Fri, 26 Feb 2021 15:41:37 +0100
Message-ID: <20210226144144.9252-4-nmanthey@amazon.de>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210226144144.9252-1-nmanthey@amazon.de>
References: <20210226144144.9252-1-nmanthey@amazon.de>
MIME-Version: 1.0
Content-Type: text/plain
Precedence: Bulk

Annotate the trace() function so that gcc can analyze the format
strings passed to it and warn about mismatched arguments.

Signed-off-by: Norbert Manthey <nmanthey@amazon.de>
Reviewed-by: Thomas Friebel <friebelt@amazon.de>
Reviewed-by: Julien Grall <jgrall@amazon.co.uk>

---
 tools/xenstore/xenstored_core.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -217,7 +217,7 @@ int delay_request(struct connection *conn, struct buffered_data *in,
 /* Tracing infrastructure. */
 void trace_create(const void *data, const char *type);
 void trace_destroy(const void *data, const char *type);
-void trace(const char *fmt, ...);
+void trace(const char *fmt, ...) __attribute__ ((format (printf, 1, 2)));
 void dtrace_io(const struct connection *conn, const struct buffered_data *data, int out);
 void reopen_log(void);
 void close_log(void);
-- 
2.17.1









From xen-devel-bounces@lists.xenproject.org Fri Feb 26 14:42:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 14:42:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90353.170974 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFeK7-0003JN-KZ; Fri, 26 Feb 2021 14:42:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90353.170974; Fri, 26 Feb 2021 14:42:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFeK7-0003JA-CY; Fri, 26 Feb 2021 14:42:31 +0000
Received: by outflank-mailman (input) for mailman id 90353;
 Fri, 26 Feb 2021 14:42:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=77G0=H4=amazon.de=prvs=684d0ac3b=nmanthey@srs-us1.protection.inumbo.net>)
 id 1lFeK5-0003Hj-Sq
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 14:42:29 +0000
Received: from smtp-fw-6001.amazon.com (unknown [52.95.48.154])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2b51fa24-5831-4103-8c14-4e9df781b30f;
 Fri, 26 Feb 2021 14:42:29 +0000 (UTC)
Received: from iad12-co-svc-p1-lb1-vlan2.amazon.com (HELO
 email-inbound-relay-2c-2225282c.us-west-2.amazon.com) ([10.43.8.2])
 by smtp-border-fw-out-6001.iad6.amazon.com with ESMTP;
 26 Feb 2021 14:42:22 +0000
Received: from EX13D37EUA002.ant.amazon.com
 (pdx1-ws-svc-p6-lb9-vlan3.pdx.amazon.com [10.236.137.198])
 by email-inbound-relay-2c-2225282c.us-west-2.amazon.com (Postfix) with ESMTPS
 id 1A2F2A07DC; Fri, 26 Feb 2021 14:42:21 +0000 (UTC)
Received: from EX13MTAUEB002.ant.amazon.com (10.43.60.12) by
 EX13D37EUA002.ant.amazon.com (10.43.165.200) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Fri, 26 Feb 2021 14:42:19 +0000
Received: from u6fc700a6f3c650.ant.amazon.com (10.1.212.27) by
 mail-relay.amazon.com (10.43.60.234) with Microsoft SMTP Server id
 15.0.1497.2 via Frontend Transport; Fri, 26 Feb 2021 14:42:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2b51fa24-5831-4103-8c14-4e9df781b30f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.de; i=@amazon.de; q=dns/txt; s=amazon201209;
  t=1614350549; x=1645886549;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version;
  bh=ufqPi5XZ73oaK8Z3Lj+n3A5UsFVL/yuIxs1VZVOClrA=;
  b=ej8KE68Muqi19IZucueYR8pAS6Ex1Aq9/uUtE3nzc5gd2ez1E1IGB4VZ
   EvYgeEH067YjdNl4pwCP4MCVfVX54UcU0qAHjsoJYPiyTuMR6TySJYmQB
   hYLIoKvleBGeC35Otj9kz9nsmCE1X57cgXMv/wYZtKCNDwQ5zueEShrQf
   c=;
X-IronPort-AV: E=Sophos;i="5.81,208,1610409600"; 
   d="scan'208";a="92741411"
From: Norbert Manthey <nmanthey@amazon.de>
To: <xen-devel@lists.xenproject.org>
CC: Ian Jackson <iwj@xenproject.org>, Juergen Gross <jgross@suse.com>, Wei Liu
	<wl@xen.org>, Julien Grall <jgrall@amazon.co.uk>, Michael Kurth
	<mku@amazon.de>, Norbert Manthey <nmanthey@amazon.de>
Subject: [PATCH XENSTORE v1 04/10] xenstore_client: handle memory on error
Date: Fri, 26 Feb 2021 15:41:38 +0100
Message-ID: <20210226144144.9252-5-nmanthey@amazon.de>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210226144144.9252-1-nmanthey@amazon.de>
References: <20210226144144.9252-1-nmanthey@amazon.de>
MIME-Version: 1.0
Content-Type: text/plain
Precedence: Bulk

In case a command fails, also free the allocated memory. As this is
the CLI client, the leak is currently harmless in practice: the memory
is reclaimed right after receiving the error, because the application
terminates next.

Similarly, if the allocation itself fails, error out instead of using
the NULL pointer afterwards.
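
The pattern the patch enforces can be shown in a self-contained sketch.
Note that do_rm_stub is a hypothetical stand-in for the real do_rm();
only the allocation check and the free on the error path mirror the
patch:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for do_rm(): "fails" for relative paths. */
static int do_rm_stub(const char *path)
{
    return path[0] != '/';
}

/* Sketch of the corrected flow: check malloc's result before use,
 * and free the duplicated path on the error exit as well. */
static int remove_path(const char *path)
{
    char *p = malloc(strlen(path) + 1);

    if (!p)
        return 1;        /* allocation failed: never touch p */
    strcpy(p, path);

    if (do_rm_stub(p)) {
        free(p);         /* release the copy before bailing out */
        return 1;
    }

    free(p);
    return 0;
}
```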

This bug was discovered and resolved using Coverity Static Analysis
Security Testing (SAST) by Synopsys, Inc.

Signed-off-by: Norbert Manthey <nmanthey@amazon.de>
Reviewed-by: Thomas Friebel <friebelt@amazon.de>
Reviewed-by: Julien Grall <jgrall@amazon.co.uk>

---
 tools/xenstore/xenstore_client.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/tools/xenstore/xenstore_client.c b/tools/xenstore/xenstore_client.c
--- a/tools/xenstore/xenstore_client.c
+++ b/tools/xenstore/xenstore_client.c
@@ -382,11 +382,14 @@ perform(enum mode mode, int optind, int argc, char **argv, struct xs_handle *xsh
                 /* Copy path, because we can't modify argv because we will need it
                    again if xs_transaction_end gives us EAGAIN. */
                 char *p = malloc(strlen(path) + 1);
+                if (!p)
+                    return 1;
                 strcpy(p, path);
                 path = p;
 
             again:
                 if (do_rm(path, xsh, xth)) {
+                    free(path);
                     return 1;
                 }
 
-- 
2.17.1




Amazon Development Center Germany GmbH
Krausenstr. 38
10117 Berlin
Geschaeftsfuehrung: Christian Schlaeger, Jonathan Weiss
Eingetragen am Amtsgericht Charlottenburg unter HRB 149173 B
Sitz: Berlin
Ust-ID: DE 289 237 879





From xen-devel-bounces@lists.xenproject.org Fri Feb 26 14:42:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 14:42:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90354.170992 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFeKB-0003PZ-VZ; Fri, 26 Feb 2021 14:42:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90354.170992; Fri, 26 Feb 2021 14:42:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFeKB-0003PN-R8; Fri, 26 Feb 2021 14:42:35 +0000
Received: by outflank-mailman (input) for mailman id 90354;
 Fri, 26 Feb 2021 14:42:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=77G0=H4=amazon.de=prvs=684d0ac3b=nmanthey@srs-us1.protection.inumbo.net>)
 id 1lFeK9-0003Af-WF
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 14:42:34 +0000
Received: from smtp-fw-2101.amazon.com (unknown [72.21.196.25])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2894b2c9-2b51-40ed-9e89-1632ac5f3a26;
 Fri, 26 Feb 2021 14:42:33 +0000 (UTC)
Received: from iad12-co-svc-p1-lb1-vlan3.amazon.com (HELO
 email-inbound-relay-1d-16425a8d.us-east-1.amazon.com) ([10.43.8.6])
 by smtp-border-fw-out-2101.iad2.amazon.com with ESMTP;
 26 Feb 2021 14:42:25 +0000
Received: from EX13D02EUB004.ant.amazon.com
 (iad12-ws-svc-p26-lb9-vlan3.iad.amazon.com [10.40.163.38])
 by email-inbound-relay-1d-16425a8d.us-east-1.amazon.com (Postfix) with ESMTPS
 id 12398100C47; Fri, 26 Feb 2021 14:42:24 +0000 (UTC)
Received: from EX13MTAUEB002.ant.amazon.com (10.43.60.12) by
 EX13D02EUB004.ant.amazon.com (10.43.166.221) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Fri, 26 Feb 2021 14:42:22 +0000
Received: from u6fc700a6f3c650.ant.amazon.com (10.1.212.27) by
 mail-relay.amazon.com (10.43.60.234) with Microsoft SMTP Server id
 15.0.1497.2 via Frontend Transport; Fri, 26 Feb 2021 14:42:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2894b2c9-2b51-40ed-9e89-1632ac5f3a26
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.de; i=@amazon.de; q=dns/txt; s=amazon201209;
  t=1614350553; x=1645886553;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version;
  bh=aupOUlFlXdRu4KMtyn+0J4vkXjNtTUzjaeSxjmikpeI=;
  b=GlBHrEWZnEBl7CZdRlCi/dpGEnXRCr3W49Sr1Q+fcVKrwc3SiVTmdbqs
   8sBRbLbYPYv5VbC11GhmEhLN/0NKTHrnlp+FcFc5UxzUXJzWjX4rbWwKw
   yzsHP60SQ0dOKc1lP59qbFv4IMRU8qSd9QMQVC6BKZtshvLqowTz1Lclh
   8=;
X-IronPort-AV: E=Sophos;i="5.81,208,1610409600"; 
   d="scan'208";a="88504713"
From: Norbert Manthey <nmanthey@amazon.de>
To: <xen-devel@lists.xenproject.org>
CC: Ian Jackson <iwj@xenproject.org>, Juergen Gross <jgross@suse.com>, Wei Liu
	<wl@xen.org>, Julien Grall <jgrall@amazon.co.uk>, Michael Kurth
	<mku@amazon.de>, Norbert Manthey <nmanthey@amazon.de>
Subject: [PATCH XENSTORE v1 05/10] xenstore: handle daemon creation errors
Date: Fri, 26 Feb 2021 15:41:39 +0100
Message-ID: <20210226144144.9252-6-nmanthey@amazon.de>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210226144144.9252-1-nmanthey@amazon.de>
References: <20210226144144.9252-1-nmanthey@amazon.de>
MIME-Version: 1.0
Content-Type: text/plain
Precedence: Bulk

In rare cases, the path to the daemon socket cannot be created because
it would be longer than PATH_MAX. Instead of failing with a NULL
pointer dereference, terminate the application with an error message.

This bug was discovered and resolved using Coverity Static Analysis
Security Testing (SAST) by Synopsys, Inc.

Signed-off-by: Norbert Manthey <nmanthey@amazon.de>
Reviewed-by: Thomas Friebel <friebelt@amazon.de>
Reviewed-by: Julien Grall <jgrall@amazon.co.uk>

---
 tools/xenstore/xenstored_core.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -1996,6 +1996,9 @@ static void init_sockets(void)
 	struct sockaddr_un addr;
 	const char *soc_str = xs_daemon_socket();
 
+	if (!soc_str)
+		barf_perror("Failed to obtain xs domain socket");
+
 	/* Create sockets for them to listen to. */
 	atexit(destroy_fds);
 	sock = socket(PF_UNIX, SOCK_STREAM, 0);
-- 
2.17.1









From xen-devel-bounces@lists.xenproject.org Fri Feb 26 14:42:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 14:42:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90355.171004 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFeKG-0003VA-Ep; Fri, 26 Feb 2021 14:42:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90355.171004; Fri, 26 Feb 2021 14:42:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFeKG-0003Ux-Af; Fri, 26 Feb 2021 14:42:40 +0000
Received: by outflank-mailman (input) for mailman id 90355;
 Fri, 26 Feb 2021 14:42:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=77G0=H4=amazon.de=prvs=684d0ac3b=nmanthey@srs-us1.protection.inumbo.net>)
 id 1lFeKF-0003Af-0i
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 14:42:39 +0000
Received: from smtp-fw-2101.amazon.com (unknown [72.21.196.25])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f72aee7e-87fb-4250-a417-5e8e4e12b4e6;
 Fri, 26 Feb 2021 14:42:35 +0000 (UTC)
Received: from iad12-co-svc-p1-lb1-vlan3.amazon.com (HELO
 email-inbound-relay-2c-579b7f5b.us-west-2.amazon.com) ([10.43.8.6])
 by smtp-border-fw-out-2101.iad2.amazon.com with ESMTP;
 26 Feb 2021 14:42:35 +0000
Received: from EX13D02EUB003.ant.amazon.com
 (pdx1-ws-svc-p6-lb9-vlan3.pdx.amazon.com [10.236.137.198])
 by email-inbound-relay-2c-579b7f5b.us-west-2.amazon.com (Postfix) with ESMTPS
 id 5220AA15CF; Fri, 26 Feb 2021 14:42:34 +0000 (UTC)
Received: from EX13MTAUEB002.ant.amazon.com (10.43.60.12) by
 EX13D02EUB003.ant.amazon.com (10.43.166.172) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Fri, 26 Feb 2021 14:42:32 +0000
Received: from u6fc700a6f3c650.ant.amazon.com (10.1.212.27) by
 mail-relay.amazon.com (10.43.60.234) with Microsoft SMTP Server id
 15.0.1497.2 via Frontend Transport; Fri, 26 Feb 2021 14:42:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f72aee7e-87fb-4250-a417-5e8e4e12b4e6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.de; i=@amazon.de; q=dns/txt; s=amazon201209;
  t=1614350555; x=1645886555;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version;
  bh=vbgIE8H3XZ85DmgU6vJvOMOFjDGWkhUCPA58yeFxSQY=;
  b=UP2scPgBu89tj4LtgnzPw/7OVOsUe84+39oH/hPuS7V5B6rEWmtIXNtl
   7kGsVAfPKE6KPNdpqt9yIjzO+za0Zx6zvvDgZjIXiwDdbTZHCzDEVFU8D
   Mf39rjqw4HbxJo5FuJXhEBdmZSEoW9ii9kUgq8tBGNl+WCgKMC4uBkN2j
   U=;
X-IronPort-AV: E=Sophos;i="5.81,208,1610409600"; 
   d="scan'208";a="88504759"
From: Norbert Manthey <nmanthey@amazon.de>
To: <xen-devel@lists.xenproject.org>
CC: Ian Jackson <iwj@xenproject.org>, Juergen Gross <jgross@suse.com>, Wei Liu
	<wl@xen.org>, Julien Grall <jgrall@amazon.co.uk>, Michael Kurth
	<mku@amazon.de>, Norbert Manthey <nmanthey@amazon.de>
Subject: [PATCH XENSTORE v1 06/10] xenstored: handle port reads correctly
Date: Fri, 26 Feb 2021 15:41:40 +0100
Message-ID: <20210226144144.9252-7-nmanthey@amazon.de>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210226144144.9252-1-nmanthey@amazon.de>
References: <20210226144144.9252-1-nmanthey@amazon.de>
MIME-Version: 1.0
Content-Type: text/plain
Precedence: Bulk

The read value could be larger than a signed 32-bit integer. As -1 is
used as the error value, we should not rely on using the full 32 bits.
Hence, when reading the port number, we should make sure we only
return valid values.

This change sanity checks the input. The underlying issue is that the
port value is
 1. transmitted as a string, with a fixed number of digits,
 2. parsed by a function that can handle strings representing 64-bit
    integers, and
 3. returned as a 64-bit integer, which is truncated to its lower
    32 bits, resulting in a wrong port number (in case the sender of
    the string crafts a suitable 64-bit value).

The value is typically provided by the kernel, which hard codes it in
the proper range. As we use strtoul, non-digit characters are treated
as the end of the input and hence do not require checking. Therefore,
this change only covers the corner case of making sure we stay in the
32-bit range.
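
The truncation described above can be reproduced with a standalone
sketch. The parse_port helper below is illustrative, not the actual
xenbus_evtchn() code, and assumes an LP64 platform where strtoul
returns a 64-bit unsigned long:

```c
#include <stdint.h>
#include <stdlib.h>

/* Parse a port string the way xenbus_evtchn() does, but reject any
 * value that does not fit an unsigned 32-bit port.  Without the
 * check, e.g. "0x100000001" truncated to 32 bits would look like
 * port 1. */
static int64_t parse_port(const char *str)
{
    uint64_t port = strtoul(str, NULL, 0);

    if (port >= UINT32_MAX)
        return -1;      /* matches the error value used by callers */

    return (int64_t)port;
}
```

On a 32-bit platform strtoul already clamps out-of-range input to
ULONG_MAX, which the same comparison rejects.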

This bug was discovered and resolved using Coverity Static Analysis
Security Testing (SAST) by Synopsys, Inc.

Signed-off-by: Norbert Manthey <nmanthey@amazon.de>
Reviewed-by: Thomas Friebel <friebelt@amazon.de>
Reviewed-by: Julien Grall <jgrall@amazon.co.uk>

---
 tools/xenstore/xenstored_posix.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/tools/xenstore/xenstored_posix.c b/tools/xenstore/xenstored_posix.c
--- a/tools/xenstore/xenstored_posix.c
+++ b/tools/xenstore/xenstored_posix.c
@@ -116,7 +116,7 @@ evtchn_port_t xenbus_evtchn(void)
 {
 	int fd;
 	int rc;
-	evtchn_port_t port;
+	uint64_t port;
 	char str[20];
 
 	fd = open(XENSTORED_PORT_DEV, O_RDONLY);
@@ -136,6 +136,10 @@ evtchn_port_t xenbus_evtchn(void)
 	port = strtoul(str, NULL, 0);
 
 	close(fd);
+
+	if (port >= UINT32_MAX)
+		return -1;
+
 	return port;
 }
 
-- 
2.17.1









From xen-devel-bounces@lists.xenproject.org Fri Feb 26 14:42:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 14:42:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90357.171016 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFeKL-0003bm-PR; Fri, 26 Feb 2021 14:42:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90357.171016; Fri, 26 Feb 2021 14:42:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFeKL-0003bb-LP; Fri, 26 Feb 2021 14:42:45 +0000
Received: by outflank-mailman (input) for mailman id 90357;
 Fri, 26 Feb 2021 14:42:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=77G0=H4=amazon.de=prvs=684d0ac3b=nmanthey@srs-us1.protection.inumbo.net>)
 id 1lFeKK-0003Af-0e
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 14:42:44 +0000
Received: from smtp-fw-6001.amazon.com (unknown [52.95.48.154])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 332821f5-e44f-486c-9617-573c982777f0;
 Fri, 26 Feb 2021 14:42:42 +0000 (UTC)
Received: from iad12-co-svc-p1-lb1-vlan2.amazon.com (HELO
 email-inbound-relay-2a-1c1b5cdd.us-west-2.amazon.com) ([10.43.8.2])
 by smtp-border-fw-out-6001.iad6.amazon.com with ESMTP;
 26 Feb 2021 14:42:42 +0000
Received: from EX13D02EUB003.ant.amazon.com
 (pdx1-ws-svc-p6-lb9-vlan2.pdx.amazon.com [10.236.137.194])
 by email-inbound-relay-2a-1c1b5cdd.us-west-2.amazon.com (Postfix) with ESMTPS
 id 4D6FEA17CD; Fri, 26 Feb 2021 14:42:41 +0000 (UTC)
Received: from EX13MTAUEB002.ant.amazon.com (10.43.60.12) by
 EX13D02EUB003.ant.amazon.com (10.43.166.172) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Fri, 26 Feb 2021 14:42:39 +0000
Received: from u6fc700a6f3c650.ant.amazon.com (10.1.212.27) by
 mail-relay.amazon.com (10.43.60.234) with Microsoft SMTP Server id
 15.0.1497.2 via Frontend Transport; Fri, 26 Feb 2021 14:42:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 332821f5-e44f-486c-9617-573c982777f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.de; i=@amazon.de; q=dns/txt; s=amazon201209;
  t=1614350562; x=1645886562;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version;
  bh=waHFe5N5zD0cDJXNQ231nyWcAqRV9zOpFqGCBH9GqNQ=;
  b=qldJzvdj2I8bgEtNmUiBlNlXySANiaIgWRBf7zMTb8fVaV6EwaaB4bzK
   s/d2RASTOM5yGZ2RVD5+W36P5rjyzowJEIbApRjO2JfVVMnotjyrU3OPK
   H+OeJPsMYpCF2f+aMYlvqtT9XBbvrPiBQVXP6O7SoQwPBBu6Vlyg3vNQL
   c=;
X-IronPort-AV: E=Sophos;i="5.81,208,1610409600"; 
   d="scan'208";a="92741519"
From: Norbert Manthey <nmanthey@amazon.de>
To: <xen-devel@lists.xenproject.org>
CC: Ian Jackson <iwj@xenproject.org>, Juergen Gross <jgross@suse.com>, Wei Liu
	<wl@xen.org>, Julien Grall <jgrall@amazon.co.uk>, Michael Kurth
	<mku@amazon.de>, Norbert Manthey <nmanthey@amazon.de>
Subject: [PATCH XENSTORE v1 07/10] xenstore: handle do_mkdir and do_rm failure
Date: Fri, 26 Feb 2021 15:41:41 +0100
Message-ID: <20210226144144.9252-8-nmanthey@amazon.de>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210226144144.9252-1-nmanthey@amazon.de>
References: <20210226144144.9252-1-nmanthey@amazon.de>
MIME-Version: 1.0
Content-Type: text/plain
Precedence: Bulk

In the out-of-memory case, we might return a NULL pointer when
canonicalizing node names. This NULL pointer is not checked when
creating a directory or when removing a node. This change handles
the NULL pointer for these two cases.

This bug was discovered and resolved using Coverity Static Analysis
Security Testing (SAST) by Synopsys, Inc.

Signed-off-by: Norbert Manthey <nmanthey@amazon.de>
Reviewed-by: Thomas Friebel <friebelt@amazon.de>
Reviewed-by: Julien Grall <jgrall@amazon.co.uk>

---
 tools/xenstore/xenstored_core.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -1160,6 +1160,8 @@ static int do_mkdir(struct connection *conn, struct buffered_data *in)
 		/* No permissions? */
 		if (errno != ENOENT)
 			return errno;
+		if (!name)
+			return ENOMEM;
 		node = create_node(conn, in, name, NULL, 0);
 		if (!node)
 			return errno;
@@ -1274,6 +1276,8 @@ static int do_rm(struct connection *conn, struct buffered_data *in)
 	if (!node) {
 		/* Didn't exist already?  Fine, if parent exists. */
 		if (errno == ENOENT) {
+			if (!name)
+				return ENOMEM;
 			parentname = get_parent(in, name);
 			if (!parentname)
 				return errno;
-- 
2.17.1









From xen-devel-bounces@lists.xenproject.org Fri Feb 26 14:42:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 14:42:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90362.171028 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFeKV-0003lw-5s; Fri, 26 Feb 2021 14:42:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90362.171028; Fri, 26 Feb 2021 14:42:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFeKV-0003lo-07; Fri, 26 Feb 2021 14:42:55 +0000
Received: by outflank-mailman (input) for mailman id 90362;
 Fri, 26 Feb 2021 14:42:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=77G0=H4=amazon.de=prvs=684d0ac3b=nmanthey@srs-us1.protection.inumbo.net>)
 id 1lFeKU-0003Af-1D
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 14:42:54 +0000
Received: from smtp-fw-4101.amazon.com (unknown [72.21.198.25])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b9e2aa11-92ed-4c6f-b463-3924e0fdbdee;
 Fri, 26 Feb 2021 14:42:47 +0000 (UTC)
Received: from iad12-co-svc-p1-lb1-vlan2.amazon.com (HELO
 email-inbound-relay-2b-c7131dcf.us-west-2.amazon.com) ([10.43.8.2])
 by smtp-border-fw-out-4101.iad4.amazon.com with ESMTP;
 26 Feb 2021 14:42:46 +0000
Received: from EX13D02EUB002.ant.amazon.com
 (pdx1-ws-svc-p6-lb9-vlan2.pdx.amazon.com [10.236.137.194])
 by email-inbound-relay-2b-c7131dcf.us-west-2.amazon.com (Postfix) with ESMTPS
 id 254A4A1D59; Fri, 26 Feb 2021 14:42:45 +0000 (UTC)
Received: from EX13MTAUEB002.ant.amazon.com (10.43.60.12) by
 EX13D02EUB002.ant.amazon.com (10.43.166.170) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Fri, 26 Feb 2021 14:42:43 +0000
Received: from u6fc700a6f3c650.ant.amazon.com (10.1.212.27) by
 mail-relay.amazon.com (10.43.60.234) with Microsoft SMTP Server id
 15.0.1497.2 via Frontend Transport; Fri, 26 Feb 2021 14:42:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b9e2aa11-92ed-4c6f-b463-3924e0fdbdee
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.de; i=@amazon.de; q=dns/txt; s=amazon201209;
  t=1614350568; x=1645886568;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version;
  bh=INzRJ60EoL7cehlQhV+bX6hPIdepzG7ggFncfF3yBWA=;
  b=Q2MvJYJ6F2Rgl/e/N4BzVypB4Rm1SE1WMv+cTc39BxFizlRpfAtIAUKq
   tnR+ktRY2QBXgH6valgWP/gTOQHG4EYeIcP2QoQ+rNg/+XKesGiBNdmas
   OQWFgG0hc7fvHlYv4mQqvJ2CT00qHaqr+h/RBtJezb9Ch2ngWChJEP9th
   8=;
X-IronPort-AV: E=Sophos;i="5.81,208,1610409600"; 
   d="scan'208";a="88628276"
From: Norbert Manthey <nmanthey@amazon.de>
To: <xen-devel@lists.xenproject.org>
CC: Ian Jackson <iwj@xenproject.org>, Juergen Gross <jgross@suse.com>, Wei Liu
	<wl@xen.org>, Julien Grall <jgrall@amazon.co.uk>, Michael Kurth
	<mku@amazon.de>, Michael Kurth <mku@amazon.com>, Norbert Manthey
	<nmanthey@amazon.de>
Subject: [PATCH XENSTORE v1 08/10] xenstore: add missing NULL check
Date: Fri, 26 Feb 2021 15:41:42 +0100
Message-ID: <20210226144144.9252-9-nmanthey@amazon.de>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210226144144.9252-1-nmanthey@amazon.de>
References: <20210226144144.9252-1-nmanthey@amazon.de>
MIME-Version: 1.0
Content-Type: text/plain
Precedence: Bulk

From: Michael Kurth <mku@amazon.com>

In case of allocation error, we should not dereference the obtained
NULL pointer.

This bug was discovered and resolved using Coverity Static Analysis
Security Testing (SAST) by Synopsys, Inc.

Signed-off-by: Michael Kurth <mku@amazon.com>
Signed-off-by: Norbert Manthey <nmanthey@amazon.de>
Reviewed-by: Thomas Friebel <friebelt@amazon.de>
Reviewed-by: Julien Grall <jgrall@amazon.co.uk>

---
 tools/xenstore/xenstored_core.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -504,6 +504,11 @@ int write_node_raw(struct connection *conn, TDB_DATA *key, struct node *node,
 	}
 
 	data.dptr = talloc_size(node, data.dsize);
+	if (!data.dptr) {
+		errno = ENOMEM;
+		return errno;
+	}
+
 	hdr = (void *)data.dptr;
 	hdr->generation = node->generation;
 	hdr->num_perms = node->perms.num;
-- 
2.17.1









From xen-devel-bounces@lists.xenproject.org Fri Feb 26 14:43:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 14:43:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90364.171040 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFeKa-0003qp-Dt; Fri, 26 Feb 2021 14:43:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90364.171040; Fri, 26 Feb 2021 14:43:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFeKa-0003qf-Ab; Fri, 26 Feb 2021 14:43:00 +0000
Received: by outflank-mailman (input) for mailman id 90364;
 Fri, 26 Feb 2021 14:42:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=77G0=H4=amazon.de=prvs=684d0ac3b=nmanthey@srs-us1.protection.inumbo.net>)
 id 1lFeKZ-0003Af-18
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 14:42:59 +0000
Received: from smtp-fw-6001.amazon.com (unknown [52.95.48.154])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ad3255a7-ceb8-4214-be52-043c296c2896;
 Fri, 26 Feb 2021 14:42:50 +0000 (UTC)
Received: from iad12-co-svc-p1-lb1-vlan2.amazon.com (HELO
 email-inbound-relay-2b-c7131dcf.us-west-2.amazon.com) ([10.43.8.2])
 by smtp-border-fw-out-6001.iad6.amazon.com with ESMTP;
 26 Feb 2021 14:42:49 +0000
Received: from EX13D37EUB002.ant.amazon.com
 (pdx1-ws-svc-p6-lb9-vlan2.pdx.amazon.com [10.236.137.194])
 by email-inbound-relay-2b-c7131dcf.us-west-2.amazon.com (Postfix) with ESMTPS
 id EFEB6A21B1; Fri, 26 Feb 2021 14:42:48 +0000 (UTC)
Received: from EX13MTAUEB002.ant.amazon.com (10.43.60.12) by
 EX13D37EUB002.ant.amazon.com (10.43.166.116) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Fri, 26 Feb 2021 14:42:47 +0000
Received: from u6fc700a6f3c650.ant.amazon.com (10.1.212.27) by
 mail-relay.amazon.com (10.43.60.234) with Microsoft SMTP Server id
 15.0.1497.2 via Frontend Transport; Fri, 26 Feb 2021 14:42:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ad3255a7-ceb8-4214-be52-043c296c2896
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.de; i=@amazon.de; q=dns/txt; s=amazon201209;
  t=1614350570; x=1645886570;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version;
  bh=PoZZTY1JhoFNyUyWnPJ+qOgbzP4vOc+deh5S5ccFoAU=;
  b=al+WrVrFP/kMmEXS4N8LI45roAMxWvIcXsaBwsmFjrjToHum7nK8uxO8
   7dPrZL1kfZkokl91OxBa7ts8XldulN7HkFzpxau+rDuKrcjAvkkILoNKJ
   gGlttY6/iKAcJlgFUMZicPgtyJ+UEiohRTlWJc+5zmbg2sef18GYV88Nn
   c=;
X-IronPort-AV: E=Sophos;i="5.81,208,1610409600"; 
   d="scan'208";a="92741617"
From: Norbert Manthey <nmanthey@amazon.de>
To: <xen-devel@lists.xenproject.org>
CC: Ian Jackson <iwj@xenproject.org>, Juergen Gross <jgross@suse.com>, Wei Liu
	<wl@xen.org>, Julien Grall <jgrall@amazon.co.uk>, Michael Kurth
	<mku@amazon.de>, Norbert Manthey <nmanthey@amazon.de>
Subject: [PATCH XENSTORE v1 09/10] xs: handle daemon socket error
Date: Fri, 26 Feb 2021 15:41:43 +0100
Message-ID: <20210226144144.9252-10-nmanthey@amazon.de>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210226144144.9252-1-nmanthey@amazon.de>
References: <20210226144144.9252-1-nmanthey@amazon.de>
MIME-Version: 1.0
Content-Type: text/plain
Precedence: Bulk

When starting the daemon, we might see a NULL pointer instead of the
path to the socket.

This is only relevant if the process is started in a very deep
directory, with a path length close to PATH_MAX (4096), so that
appending "/socket" would exceed the limit. Such an error is therefore
unlikely, but should still be handled so it does not result in a NULL
pointer dereference.

This bug was discovered and resolved using Coverity Static Analysis
Security Testing (SAST) by Synopsys, Inc.

Signed-off-by: Norbert Manthey <nmanthey@amazon.de>
Reviewed-by: Thomas Friebel <friebelt@amazon.de>
Reviewed-by: Julien Grall <jgrall@amazon.co.uk>

---
 tools/libs/store/xs.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/tools/libs/store/xs.c b/tools/libs/store/xs.c
--- a/tools/libs/store/xs.c
+++ b/tools/libs/store/xs.c
@@ -240,6 +240,9 @@ static struct xs_handle *get_handle(const char *connect_to)
 	struct xs_handle *h = NULL;
 	int saved_errno;
 
+	if (!connect_to)
+		return NULL;
+
 	h = malloc(sizeof(*h));
 	if (h == NULL)
 		goto err;
-- 
2.17.1









From xen-devel-bounces@lists.xenproject.org Fri Feb 26 14:49:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 14:49:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90380.171051 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFeQT-0004XF-49; Fri, 26 Feb 2021 14:49:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90380.171051; Fri, 26 Feb 2021 14:49:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFeQT-0004X8-13; Fri, 26 Feb 2021 14:49:05 +0000
Received: by outflank-mailman (input) for mailman id 90380;
 Fri, 26 Feb 2021 14:49:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=77G0=H4=amazon.de=prvs=684d0ac3b=nmanthey@srs-us1.protection.inumbo.net>)
 id 1lFeKj-0003Af-1T
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 14:43:09 +0000
Received: from smtp-fw-2101.amazon.com (unknown [72.21.196.25])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 44022063-d5f2-4039-a073-964d796716bb;
 Fri, 26 Feb 2021 14:42:54 +0000 (UTC)
Received: from iad12-co-svc-p1-lb1-vlan3.amazon.com (HELO
 email-inbound-relay-2a-c5104f52.us-west-2.amazon.com) ([10.43.8.6])
 by smtp-border-fw-out-2101.iad2.amazon.com with ESMTP;
 26 Feb 2021 14:42:54 +0000
Received: from EX13D02EUB002.ant.amazon.com
 (pdx1-ws-svc-p6-lb9-vlan3.pdx.amazon.com [10.236.137.198])
 by email-inbound-relay-2a-c5104f52.us-west-2.amazon.com (Postfix) with ESMTPS
 id 27771A1E6A; Fri, 26 Feb 2021 14:42:53 +0000 (UTC)
Received: from EX13MTAUEB002.ant.amazon.com (10.43.60.12) by
 EX13D02EUB002.ant.amazon.com (10.43.166.170) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Fri, 26 Feb 2021 14:42:51 +0000
Received: from u6fc700a6f3c650.ant.amazon.com (10.1.212.27) by
 mail-relay.amazon.com (10.43.60.234) with Microsoft SMTP Server id
 15.0.1497.2 via Frontend Transport; Fri, 26 Feb 2021 14:42:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 44022063-d5f2-4039-a073-964d796716bb
From: Norbert Manthey <nmanthey@amazon.de>
To: <xen-devel@lists.xenproject.org>
CC: Ian Jackson <iwj@xenproject.org>, Juergen Gross <jgross@suse.com>, Wei Liu
	<wl@xen.org>, Julien Grall <jgrall@amazon.co.uk>, Michael Kurth
	<mku@amazon.de>, Norbert Manthey <nmanthey@amazon.de>
Subject: [PATCH XENSTORE v1 10/10] xs: add error handling
Date: Fri, 26 Feb 2021 15:41:44 +0100
Message-ID: <20210226144144.9252-11-nmanthey@amazon.de>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210226144144.9252-1-nmanthey@amazon.de>
References: <20210226144144.9252-1-nmanthey@amazon.de>
MIME-Version: 1.0
Content-Type: text/plain
Precedence: Bulk

In case of a failure deep in the call tree, we might return NULL as the
value of the domain. In that case, error out instead of dereferencing
the NULL pointer.

This bug was discovered and resolved using Coverity Static Analysis
Security Testing (SAST) by Synopsys, Inc.

Signed-off-by: Norbert Manthey <nmanthey@amazon.de>
Reviewed-by: Julien Grall <jgrall@amazon.co.uk>
Reviewed-by: Raphael Ning <raphning@amazon.co.uk>

---
 tools/libs/store/xs.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/tools/libs/store/xs.c b/tools/libs/store/xs.c
--- a/tools/libs/store/xs.c
+++ b/tools/libs/store/xs.c
@@ -1183,7 +1183,12 @@ bool xs_path_is_subpath(const char *parent, const char *child)
 bool xs_is_domain_introduced(struct xs_handle *h, unsigned int domid)
 {
 	char *domain = single_with_domid(h, XS_IS_DOMAIN_INTRODUCED, domid);
-	int rc = strcmp("F", domain);
+	bool rc = false;
+
+	if (!domain)
+		return rc;
+
+	rc = strcmp("F", domain) != 0;
 
 	free(domain);
 	return rc;
-- 
2.17.1




Amazon Development Center Germany GmbH
Krausenstr. 38
10117 Berlin
Geschaeftsfuehrung: Christian Schlaeger, Jonathan Weiss
Eingetragen am Amtsgericht Charlottenburg unter HRB 149173 B
Sitz: Berlin
Ust-ID: DE 289 237 879





From xen-devel-bounces@lists.xenproject.org Fri Feb 26 14:53:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 14:53:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90385.171064 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFeV3-0005Po-N3; Fri, 26 Feb 2021 14:53:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90385.171064; Fri, 26 Feb 2021 14:53:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFeV3-0005Ph-K3; Fri, 26 Feb 2021 14:53:49 +0000
Received: by outflank-mailman (input) for mailman id 90385;
 Fri, 26 Feb 2021 14:53:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lFeV2-0005Pc-1v
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 14:53:48 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lFeUz-0005Qq-Rz; Fri, 26 Feb 2021 14:53:45 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lFeUz-0006WC-Hk; Fri, 26 Feb 2021 14:53:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Subject: Re: [PATCH XENSTORE v1 10/10] xs: add error handling
To: Norbert Manthey <nmanthey@amazon.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Juergen Gross <jgross@suse.com>,
 Wei Liu <wl@xen.org>, Julien Grall <jgrall@amazon.co.uk>,
 Michael Kurth <mku@amazon.de>
References: <20210226144144.9252-1-nmanthey@amazon.de>
 <20210226144144.9252-11-nmanthey@amazon.de>
From: Julien Grall <julien@xen.org>
Message-ID: <78560c14-81dc-05de-26f8-15861459806d@xen.org>
Date: Fri, 26 Feb 2021 14:53:43 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210226144144.9252-11-nmanthey@amazon.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Norbert,

On 26/02/2021 14:41, Norbert Manthey wrote:
> In case of a failure deep in the call tree, we might return NULL as the
> value of the domain. In that case, error out instead of dereferencing
> the NULL pointer.
> 
> This bug was discovered and resolved using Coverity Static Analysis
> Security Testing (SAST) by Synopsys, Inc.

This commit message is not very descriptive. Internally, I suggested:

"
tools/xenstore: Harden xs_domain_is_introduced()

The function single_with_domid() may return NULL if something
went wrong (e.g. XenStored returns an error or the connection is
in bad state).

They are unlikely but not impossible, so it would be better to
return an error and allow the caller to handle it gracefully rather
than crashing.

In this case we should treat it as if the domain has disappeared (i.e.
return false), as the caller is unlikely to be able to communicate
with XenStored again.

This bug was discovered and resolved using Coverity Static Analysis
Security Testing (SAST) by Synopsys, Inc.
"

I would have expected this to be addressed given that...

> 
> Signed-off-by: Norbert Manthey <nmanthey@amazon.de>
> Reviewed-by: Julien Grall <jgrall@amazon.co.uk>
... you carried over my reviewed-by tag.


Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 15:08:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 15:08:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90391.171076 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFejN-0006e3-0Z; Fri, 26 Feb 2021 15:08:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90391.171076; Fri, 26 Feb 2021 15:08:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFejM-0006dw-Sj; Fri, 26 Feb 2021 15:08:36 +0000
Received: by outflank-mailman (input) for mailman id 90391;
 Fri, 26 Feb 2021 15:08:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=eEmz=H4=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lFejL-0006dr-1U
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 15:08:35 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 614ce6ce-a827-43ea-8fa2-428a65765be7;
 Fri, 26 Feb 2021 15:08:33 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1FBA1AD57;
 Fri, 26 Feb 2021 15:08:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 614ce6ce-a827-43ea-8fa2-428a65765be7
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Ian Jackson <iwj@xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH][4.15] x86/shadow: replace bogus return path in
 shadow_get_page_from_l1e()
Message-ID: <d6cf1205-d537-fafb-a082-e973bfe11315@suse.com>
Date: Fri, 26 Feb 2021 16:08:33 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Prior to be640b1800bb ("x86: make get_page_from_l1e() return a proper
error code") a positive return value did indicate an error. Said commit
failed to adjust this return path, but luckily the only caller has
always been inside a shadow_mode_refcounts() conditional.

Subsequent changes caused 1 to end up at the default (error) label in
the caller's switch() again, but the returning of 1 (== _PAGE_PRESENT)
is still rather confusing here, and a latent risk.

Convert to an ASSERT() instead, just in case any new caller would
appear.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -802,9 +802,7 @@ shadow_get_page_from_l1e(shadow_l1e_t sl
     struct domain *owner;
 
     ASSERT(!sh_l1e_is_magic(sl1e));
-
-    if ( !shadow_mode_refcounts(d) )
-        return 1;
+    ASSERT(shadow_mode_refcounts(d));
 
     res = get_page_from_l1e(sl1e, d, d);
 
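The change above — turning a supposedly impossible early return into an assertion — can be illustrated outside Xen. The sketch below is generic (names like `get_ref` and `refcounts_enabled` are hypothetical, not Xen code) and uses the standard `assert()` in place of Xen's ASSERT():

```c
#include <assert.h>

/* Illustration of the pattern: instead of returning a magic value
 * (here 1, which in the Xen case aliased _PAGE_PRESENT) for a caller
 * state that should never occur, assert the precondition so any new
 * caller violating it fails loudly in debug builds rather than
 * receiving a confusing, error-like return value. */
static int get_ref(int refcounts_enabled, int pfn)
{
	/* Before: if ( !refcounts_enabled ) return 1;  -- latent risk */
	assert(refcounts_enabled);   /* After: precondition, not a result */

	return pfn >= 0 ? 0 : -1;    /* 0 = success, negative = error */
}
```

The design point is that the return value space now carries only genuine results; the impossible case is documented and enforced at the function boundary instead of being smuggled through the caller's error path.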


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 15:14:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 15:14:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90395.171087 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFep6-0007ih-NO; Fri, 26 Feb 2021 15:14:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90395.171087; Fri, 26 Feb 2021 15:14:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFep6-0007ia-KA; Fri, 26 Feb 2021 15:14:32 +0000
Received: by outflank-mailman (input) for mailman id 90395;
 Fri, 26 Feb 2021 15:14:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=eEmz=H4=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lFep5-0007iV-9L
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 15:14:31 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 72f69eb1-0c91-449f-8077-94d82702036a;
 Fri, 26 Feb 2021 15:14:30 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 548EBAD57;
 Fri, 26 Feb 2021 15:14:29 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 72f69eb1-0c91-449f-8077-94d82702036a
Subject: Re: [PATCH][4.15] x86/shadow: replace bogus return path in
 shadow_get_page_from_l1e()
From: Jan Beulich <jbeulich@suse.com>
To: Ian Jackson <iwj@xenproject.org>
Cc: Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <d6cf1205-d537-fafb-a082-e973bfe11315@suse.com>
Message-ID: <8bb04a4d-70ad-c557-c334-e1e55a429353@suse.com>
Date: Fri, 26 Feb 2021 16:14:30 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <d6cf1205-d537-fafb-a082-e973bfe11315@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 26.02.2021 16:08, Jan Beulich wrote:
> Prior to be640b1800bb ("x86: make get_page_from_l1e() return a proper
> error code") a positive return value did indicate an error. Said commit
> failed to adjust this return path, but luckily the only caller has
> always been inside a shadow_mode_refcounts() conditional.
> 
> Subsequent changes caused 1 to end up at the default (error) label in
> the caller's switch() again, but the returning of 1 (== _PAGE_PRESENT)
> is still rather confusing here, and a latent risk.

The confusion on my part was so significant that I screwed up
the shadow mode fix for "VMX: use a single, global APIC access
page" (which turned out to be necessary) initially. Hence my
proposing this for 4.15. I'm on the edge at this point whether
I'd even consider this a backporting candidate.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 15:22:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 15:22:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90398.171100 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFewS-0000Nn-HO; Fri, 26 Feb 2021 15:22:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90398.171100; Fri, 26 Feb 2021 15:22:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFewS-0000Ng-E2; Fri, 26 Feb 2021 15:22:08 +0000
Received: by outflank-mailman (input) for mailman id 90398;
 Fri, 26 Feb 2021 15:22:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CSsz=H4=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lFewR-0000Nb-4r
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 15:22:07 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a19ecaa1-c0a7-45c1-8bf4-d551b96585f0;
 Fri, 26 Feb 2021 15:22:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a19ecaa1-c0a7-45c1-8bf4-d551b96585f0
Subject: Re: [PATCH][4.15] x86/shadow: replace bogus return path in
 shadow_get_page_from_l1e()
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>, "Wei
 Liu" <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
	Ian Jackson <iwj@xenproject.org>
References: <d6cf1205-d537-fafb-a082-e973bfe11315@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <1ef6a569-f6c1-f20c-6b63-a5577cf95e36@citrix.com>
Date: Fri, 26 Feb 2021 15:21:43 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
In-Reply-To: <d6cf1205-d537-fafb-a082-e973bfe11315@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0049.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:152::18) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0

On 26/02/2021 15:08, Jan Beulich wrote:
> Prior to be640b1800bb ("x86: make get_page_from_l1e() return a proper
> error code") a positive return value did indicate an error. Said commit
> failed to adjust this return path, but luckily the only caller has
> always been inside a shadow_mode_refcounts() conditional.
>
> Subsequent changes caused 1 to end up at the default (error) label in
> the caller's switch() again, but the returning of 1 (== _PAGE_PRESENT)
> is still rather confusing here, and a latent risk.
>
> Convert to an ASSERT() instead, just in case any new caller would
> appear.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Yikes, and only 9 years to notice.

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 15:25:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 15:25:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90401.171112 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFezG-0000Vg-02; Fri, 26 Feb 2021 15:25:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90401.171112; Fri, 26 Feb 2021 15:25:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFezF-0000VZ-TA; Fri, 26 Feb 2021 15:25:01 +0000
Received: by outflank-mailman (input) for mailman id 90401;
 Fri, 26 Feb 2021 15:25:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ro2M=H4=gmail.com=bobbyeshleman@srs-us1.protection.inumbo.net>)
 id 1lFezE-0000VT-RT
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 15:25:00 +0000
Received: from mail-pj1-x102a.google.com (unknown [2607:f8b0:4864:20::102a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e3a00cfd-a106-4ff8-9a8e-d1edf749c8dc;
 Fri, 26 Feb 2021 15:25:00 +0000 (UTC)
Received: by mail-pj1-x102a.google.com with SMTP id e9so6302104pjj.0
 for <xen-devel@lists.xenproject.org>; Fri, 26 Feb 2021 07:25:00 -0800 (PST)
Received: from ?IPv6:2601:1c2:4f80:d230::5? ([2601:1c2:4f80:d230::5])
 by smtp.gmail.com with ESMTPSA id o11sm10181377pfp.136.2021.02.26.07.24.58
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 26 Feb 2021 07:24:58 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e3a00cfd-a106-4ff8-9a8e-d1edf749c8dc
Subject: Re: [PATCH for-next 3/6] xen/sched: Fix build when NR_CPUS == 1
To: Connor Davis <connojdavis@gmail.com>
Cc: xen-devel@lists.xenproject.org, George Dunlap <george.dunlap@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>, Jan Beulich <jbeulich@suse.com>
References: <cover.1614265718.git.connojdavis@gmail.com>
 <d0922adc698ab76223d76a0a7f328a72cedf00ad.1614265718.git.connojdavis@gmail.com>
 <71840112-790f-24b9-115c-9030c8521b65@gmail.com>
 <20210226030134.lj3zi3duf33shoz7@thewall>
From: Bob Eshleman <bobbyeshleman@gmail.com>
Organization: Vates SAS
Message-ID: <bbb9415e-65f6-3930-f6c4-055cc5921cb0@gmail.com>
Date: Fri, 26 Feb 2021 07:21:58 -0800
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
MIME-Version: 1.0
In-Reply-To: <20210226030134.lj3zi3duf33shoz7@thewall>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 2/25/21 7:01 PM, Connor Davis wrote:
> On Thu, Feb 25, 2021 at 02:55:45PM -0800, Bob Eshleman wrote:
>>   riscv64-unknown-linux-gnu-gcc (GCC) 10.1.0
>>
>> Which version of GCC are you seeing emit this?
> 
> The one cloned from github.com/riscv/riscv-gnu-toolchain
> in the docker container uses 10.2.0
> 
>     Connor
> 

The commit I pinned in the container is actually for GDB only, since
more recent versions broke when used with QEMU at the time of writing
the dockerfile (this last June).

Since I built the container some months ago, and there is no commit
pinning for the compiler, it still contains 10.1.0 for me.

It _shouldn't_ be necessary...  but since there is a lot of dev done
on riscv-gcc, it might be worth talking about pinning the compiler
version in the container.

-Bob


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 15:26:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 15:26:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90404.171124 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFf0h-0000e9-Bs; Fri, 26 Feb 2021 15:26:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90404.171124; Fri, 26 Feb 2021 15:26:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFf0h-0000e2-8S; Fri, 26 Feb 2021 15:26:31 +0000
Received: by outflank-mailman (input) for mailman id 90404;
 Fri, 26 Feb 2021 15:26:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=77G0=H4=amazon.de=prvs=684d0ac3b=nmanthey@srs-us1.protection.inumbo.net>)
 id 1lFf0f-0000dw-RU
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 15:26:29 +0000
Received: from smtp-fw-2101.amazon.com (unknown [72.21.196.25])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 40a5bbcd-9e6c-4d00-a56f-f5bd6c81050e;
 Fri, 26 Feb 2021 15:26:29 +0000 (UTC)
Received: from iad12-co-svc-p1-lb1-vlan3.amazon.com (HELO
 email-inbound-relay-2a-90c42d1d.us-west-2.amazon.com) ([10.43.8.6])
 by smtp-border-fw-out-2101.iad2.amazon.com with ESMTP;
 26 Feb 2021 15:26:21 +0000
Received: from EX13D02EUB001.ant.amazon.com
 (pdx1-ws-svc-p6-lb9-vlan3.pdx.amazon.com [10.236.137.198])
 by email-inbound-relay-2a-90c42d1d.us-west-2.amazon.com (Postfix) with ESMTPS
 id 58299A1772; Fri, 26 Feb 2021 15:26:19 +0000 (UTC)
Received: from u6fc700a6f3c650.ant.amazon.com (10.43.160.244) by
 EX13D02EUB001.ant.amazon.com (10.43.166.150) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Fri, 26 Feb 2021 15:26:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 40a5bbcd-9e6c-4d00-a56f-f5bd6c81050e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.de; i=@amazon.de; q=dns/txt; s=amazon201209;
  t=1614353190; x=1645889190;
  h=to:cc:references:from:message-id:date:mime-version:
   in-reply-to:content-transfer-encoding:subject;
  bh=/mWILN4s552lqJMkiRuRB9qtEhhfkm9EpTvr62O1Zuw=;
  b=IQQ8JdXQ6q2n6LLMMQv2q3b/B2wr5kDSTKZjnnv94D2iO+8tkomLKSFg
   TQXNqMjzOY36dMMKQTdSl61jWxGfU3H1QzIuKtakKCWbXGaYqvyarq2aN
   89H7nmUkjpLAx+OkqM1UjfN9/eF6PyaSW8hJNxbOvG+pL7VKuCP8HsC6f
   4=;
X-IronPort-AV: E=Sophos;i="5.81,208,1610409600"; 
   d="scan'208";a="88523753"
Subject: Re: [PATCH XENSTORE v1 10/10] xs: add error handling
To: Julien Grall <julien@xen.org>, <xen-devel@lists.xenproject.org>
CC: Ian Jackson <iwj@xenproject.org>, Juergen Gross <jgross@suse.com>, Wei Liu
	<wl@xen.org>, Julien Grall <jgrall@amazon.co.uk>, Michael Kurth
	<mku@amazon.de>
References: <20210226144144.9252-1-nmanthey@amazon.de>
 <20210226144144.9252-11-nmanthey@amazon.de>
 <78560c14-81dc-05de-26f8-15861459806d@xen.org>
From: Norbert Manthey <nmanthey@amazon.de>
Autocrypt: addr=nmanthey@amazon.de; prefer-encrypt=mutual; keydata=
 xsFNBFoJQc0BEADM8Z7hB7AnW6ErbSMsYkKh4HLAPfoM+wt7Fd7axHurcOgFJEBOY2gz0isR
 /EDiGxYyTgxt5PZHJIfra0OqXRbWuLltbjhJACbu35eaAo8UM4/awgtYx3O1UCbIlvHGsYDg
 kXjF8bBrVjPu0+g55XizX6ot/YPAgmWTdH8qXoLYVZVWJilKlTqpYEVvarSn/BVgCbIsQIps
 K93sOTN9eJKDSqHvbkgKl9XG3WsZ703431egIpIZpfN0zZtzumdZONb7LiodcFHJ717vvd89
 3Hv2bYv8QLSfYsZcSnyU0NVzbPhb1WtaduwXwNmnX1qHJuExzr8EnRT1pyhVSqouxt+xkKbV
 QD9r+cWLChumg3g9bDLzyrOTlEfAUNxIqbzSt03CRR43dWgfgGiLDcrqC2b1QR886WDpz4ok
 xX3fdLaqN492s/3c59qCGNG30ebAj8AbV+v551rsfEba+IWTvvoQnbstc6vKJCc2uG8rom5o
 eHG/bP1Ug2ht6m/0uWRyFq9C27fpU9+FDhb0ZsT4UwOCbthe35/wBZUg72zDpT/h5lm64G6C
 0TRqYRgYcltlP705BJafsymmAXOZ1nTCuXnYAB9G9LzZcKKq5q0rP0kp7KRDbniirCUfp7jK
 VpPCOUEc3tS1RdCCSeWNuVgzLnJdR8W2h9StuEbb7hW4aFhwRQARAQABzSROb3JiZXJ0IE1h
 bnRoZXkgPG5tYW50aGV5QGFtYXpvbi5kZT7CwX0EEwEIACcFAloJQc0CGyMFCQlmAYAFCwkI
 BwIGFQgJCgsCBBYCAwECHgECF4AACgkQZ+8yS8zN62ajmQ/6AlChoY5UlnUaH/jgcabyAfUC
 XayHgCcpL1SoMKvc2rCA8PF0fza3Ep2Sw0idLqC/LyAYbI6gMYavSZsLcsvY6KYAZKeaEriG
 7R6cSdrbmRcKpPjwvv4iY6G0DBTeaqfNjGe1ECY8u522LprDQVquysJIf3YaEyxoK/cLSb0c
 kjzpqI1P9Vh+8BQb5H9gWpakbhFIwbRGHdAF1roT7tezmEshFS0IURJ2ZFEI+ZgWgtl1MBwN
 sBt65im7x5gDo25h8A5xC9gLXTc4j3tk+3huaZjUJ9mCbtI12djVtspjNvDyUPQ5Mxw2Jwar
 C3/ZC+Nkb+VlymmErpnEUZNltcq8gsdYND4TlNbZ2JhD0ibiYFQPkyuCVUiVtimXfh6po9Yt
 OkE0DIgEngxMYfTTx01Zf6iwrbi49eHd/eQQw3zG5nn+yZsEG8UcP1SCrUma8p93KiKOedoL
 n43kTg4RscdZMjj4v6JkISBcGTR4uotMYP4M0zwjklnFXPmrZ6/E5huzUpH9B7ZIe/SUu8Ur
 xww/4dN6rfqbNzMxmya8VGlEQZgUMWcck+cPrRLB09ZOk4zq9i/yaHDEZA1HNOfQ9UCevXV5
 7seXSX7PCY6WDAdsT3+FuaoQ7UoWN3rdpb+064QKZ0FsHeGzUd7MZtlgU4EKrh25mTSNZYRs
 nTz2zT/J33fOwU0EWglBzQEQAKioD1gSELj3Y47NE11oPkzWWdxKZdVr8B8VMu6nVAAGFRSf
 Dms4ZmwGY27skMmOH2srnZyTfm9FaTKr8RI+71Fh9nfB9PMmwzA7OIY9nD73/HqPywzTTleG
 MlALmnuY6xFRSDmqmvxDHgWyzB4TgPWt8+hW3+TJKCx2RgLAdSuULZla4lia+NlS8WNRUDGK
 sFJCCB3BW5I/cocfpBEUqLbbmnPuD9UKpEnFcYWD9YaDNcBTjSc7iDsvtpdrBXg5VETOz/TQ
 /CmVs9h/5zug8O4bXxHEEJpCAxs4cGKxowBqx/XJfkwdWeo/LdaeR+LRbXvq4A32HSkyj9sV
 vygwt2OFEk493JGik8qtAA/oPvuqVPJGacxmZ7zKR12c0mnKCHiexFJzFbC7MSiUhhe8nNiM
 p6Sl6EZmsTUXhV2bd2M12Bqcss3TTJ1AcW04T4HYHVCSxwl0dVfcf3TIaH0BSPiwFxz0FjMk
 10umoRvUhYYoYpPFCz8dujXBlfB8q2tnHltEfoi/EIptt1BMNzTYkHKArj8Fwjf6K+nQ3a8p
 1cWfkYpA5bRqbhbplzpa0u1Ex0hZk6pka0qcVgqmH31O2OcSsqeKfUfHkzj3Q6dmuwm1je/f
 HWH9N1gDPEp1RB5bIxPnOG1Z4SNl9oVQJhc4qoJiqbvkciivYcH7u2CBkboFABEBAAHCwWUE
 GAEIAA8FAloJQc0CGwwFCQlmAYAACgkQZ+8yS8zN62YU9Q//WTnN28aBX1EhDidVho80Ql2b
 tV1cDRh/vWTcM4qoM8vzW4+F/Ive6wDVAJ7zwAv8F8WPzy+acxtHLkyYk14M6VZ1eSy0kV0+
 RZQdQ+nPtlb1MoDKw2N5zhvs8A+WD8xjDIA9i21hQ/BNILUBINuYKyR19448/41szmYIEhuJ
 R2fHoLzNdXNKWQnN3/NPTuvpjcrkXKJm2k32qfiys9KBcZX8/GpuMCc9hMuymzOr+YlXo4z4
 1xarEJoPOQOXnrmxN4Y30/qmf70KHLZ0GQccIm/o/XSOvNGluaYv0ZVJXHoCnYvTbi0eYvz5
 OfOcndqLOfboq9kVHC6Yye1DLNGjIVoShJGSsphxOx2ryGjHwhzqDrLiRkV82gh6dUHKxBWd
 DXfirT8a4Gz/tY9PMxan67aSxQ5ONpXe7g7FrfrAMe91XRTf50G3rHb8+AqZfxZJFrBn+06i
 p1cthq7rJSlYCqna2FedTUT+tK1hU9O0aK4ZYYcRzuTRxjd4gKAWDzJ1F/MQ12ftrfCAvs7U
 sVbXv2TndGIleMnheYv1pIrXEm0+sdz5v91l2/TmvkyyWT8s2ksuZis9luh+OubeLxHq090C
 hfavI9WxhitfYVsfo2kr3EotGG1MnW+cOkCIX68w+3ZS4nixZyJ/TBa7RcTDNr+gjbiGMtd9
 pEddsOqYwOs=
Message-ID: <b035a1f0-70b5-9224-b8ac-a1dd9dff06da@amazon.de>
Date: Fri, 26 Feb 2021 16:26:10 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <78560c14-81dc-05de-26f8-15861459806d@xen.org>
Content-Type: text/plain; charset="utf-8"
Content-Language: en-US
X-Originating-IP: [10.43.160.244]
X-ClientProxiedBy: EX13D14UWC004.ant.amazon.com (10.43.162.99) To
 EX13D02EUB001.ant.amazon.com (10.43.166.150)
Precedence: Bulk
Content-Transfer-Encoding: 8bit

On 2/26/21 3:53 PM, Julien Grall wrote:
> Hi Norbert,
>
> On 26/02/2021 14:41, Norbert Manthey wrote:
>> In case of a failure deep in the call tree, we might return NULL as the
>> value of the domain. In that case, error out instead of dereferencing
>> the NULL pointer.
>>
>> This bug was discovered and resolved using Coverity Static Analysis
>> Security Testing (SAST) by Synopsys, Inc.
>
> This commit message is not very descriptive. Internally, I suggested:
>
> "
> tools/xenstore: Harden xs_domain_is_introduced()
>
> The function single_with_domid() may return NULL if something
> went wrong (e.g. XenStored returns an error or the connection is
> in bad state).
>
> They are unlikely but not impossible, so it would be better to
> return an error and allow the caller to handle it gracefully rather
> than crashing.
>
> In this case we should treat it as the domain has disappeared (i.e.
> return false) as the caller will not likely going to be able to
> communicate with XenStored again.
>
> This bug was discovered and resolved using Coverity Static Analysis
> Security Testing (SAST) by Synopsys, Inc.
> "
>
> I would have expected this to be addressed given that...

Understood. I will iterate.

Norbert

>
>>
>> Signed-off-by: Norbert Manthey <nmanthey@amazon.de>
>> Reviewed-by: Julien Grall <jgrall@amazon.co.uk
> ... you carried over my reviewed-by tag.
>
>
> Cheers,
>
> -- 
> Julien Grall



Amazon Development Center Germany GmbH
Krausenstr. 38
10117 Berlin
Geschaeftsfuehrung: Christian Schlaeger, Jonathan Weiss
Eingetragen am Amtsgericht Charlottenburg unter HRB 149173 B
Sitz: Berlin
Ust-ID: DE 289 237 879



From xen-devel-bounces@lists.xenproject.org Fri Feb 26 15:33:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 15:33:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90411.171136 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFf7E-0001iS-2k; Fri, 26 Feb 2021 15:33:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90411.171136; Fri, 26 Feb 2021 15:33:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFf7D-0001iL-Vp; Fri, 26 Feb 2021 15:33:15 +0000
Received: by outflank-mailman (input) for mailman id 90411;
 Fri, 26 Feb 2021 15:33:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ro2M=H4=gmail.com=bobbyeshleman@srs-us1.protection.inumbo.net>)
 id 1lFf7C-0001iG-Lz
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 15:33:14 +0000
Received: from mail-pl1-x62b.google.com (unknown [2607:f8b0:4864:20::62b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7c8ab0b9-3fa0-49c8-9e6d-934efea98b8a;
 Fri, 26 Feb 2021 15:33:13 +0000 (UTC)
Received: by mail-pl1-x62b.google.com with SMTP id a24so5466551plm.11
 for <xen-devel@lists.xenproject.org>; Fri, 26 Feb 2021 07:33:13 -0800 (PST)
Received: from ?IPv6:2601:1c2:4f80:d230::5? ([2601:1c2:4f80:d230::5])
 by smtp.gmail.com with ESMTPSA id v129sm9955277pfc.110.2021.02.26.07.33.11
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 26 Feb 2021 07:33:11 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7c8ab0b9-3fa0-49c8-9e6d-934efea98b8a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:organization:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=Kdw3Sjv+WXzbm/SAcdz2KeQ0wofI8c3b0xhYkexR8PM=;
        b=Pogd6tvRaDBWAjFxqTsWv4pxN5mC26G7WTyBMWjoH5vyzg7WCxLe5k4jC4m2lSEUAl
         7aPLoxdTllnmop78hTvyzZxfRZFCzM2AahO/55MTyRn27x69uYue3l/PXECOF93mJsjl
         11CD/h/WvpU2iLQIRHbF5wj2UHyIK377HarrqxU4cQ1gZ3h1M/AjMQml1R6JHSSXQO8W
         lrE/ZGOrL1ZDm4jeBjp88/geCwVaDGvOjPeHgcTke08GnI9NV9pFUCQ/0kMbBUjX2egZ
         w5eaB6W+V4jN3NeheCAK9suM5QhVtx6Ie5kOP+hEc65MhcNQurs9GTuHeOBvz6F7tgNc
         RasQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:organization
         :message-id:date:user-agent:mime-version:in-reply-to
         :content-language:content-transfer-encoding;
        bh=Kdw3Sjv+WXzbm/SAcdz2KeQ0wofI8c3b0xhYkexR8PM=;
        b=LAkrfUZ9sEBFbhNUG0T/jtaFvvWPCsGuWHaddieM6Ka9fGI0PboESp62nY+Tf+BM8d
         xmYVaV/5+iZrj9Wu4OHeqVrYE8TaWX+M40egpR2PaabzSbZVbqoyua3PBizGy9ln5y7+
         caqfsmZ6qJxre/RGjNRYA8gYG8REhiawcaOuUfz8Xuc2DMQDZAUGH5OgdQdE92+5jowT
         4oCBqaQ3hHWSZutqRwhAOM5mwfh8Rb6wKF9i48GXg/di2Xl8EsiiKYwqZb/kurXT7FGh
         /BWZFf7pBDKzsw3ys2w+766XyfYvolxpBxo81yziiBT7PNzwpcOQHbN0zz6qscz/Hvby
         X6IA==
X-Gm-Message-State: AOAM533Bv4gmjdbyWQaUTyiYkO6Bu6tGjxVykj+u9TVqyU1rIFYqBYpj
	r/BqyJG/BKwUJpINQEjAafo=
X-Google-Smtp-Source: ABdhPJyPy00Gja5UfZEBdttwHCJ/qYuf5xI7H4KJe4eyVXZV4zll+H/+sHr7dFiIMXEBXukngKd05Q==
X-Received: by 2002:a17:90b:4d05:: with SMTP id mw5mr3990126pjb.217.1614353592838;
        Fri, 26 Feb 2021 07:33:12 -0800 (PST)
Subject: Re: [PATCH for-next 5/6] xen: Add files needed for minimal riscv
 build
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Tamas K Lengyel <tamas@tklengyel.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>
References: <cover.1614265718.git.connojdavis@gmail.com>
 <7652ce3486c026a3a9f7d850170ea81ba8a18bdb.1614265718.git.connojdavis@gmail.com>
 <9b441529-c5a4-4299-1155-869bcdab06b0@citrix.com>
From: Bob Eshleman <bobbyeshleman@gmail.com>
Organization: Vates SAS
Message-ID: <0d58bca1-0998-1114-d023-0d8a5a193961@gmail.com>
Date: Fri, 26 Feb 2021 07:30:09 -0800
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
MIME-Version: 1.0
In-Reply-To: <9b441529-c5a4-4299-1155-869bcdab06b0@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 2/25/21 3:14 PM, Andrew Cooper wrote:
> 
> Well - this is orders of magnitude more complicated than it ought to
> be.  An empty head.S doesn't (well - shouldn't) need the overwhelming
> majority of this.
> 
> Do you know how all of this is being pulled in?  Is it from attempting
> to compile common/ by any chance?
> 
> Now is also an excellent opportunity to nuke the x86isms which have
> escaped into common code (debugger and xenoprof in particular), and
> rethink some of our common/arch split.
> 
> When it comes to header files specifically, I want to start using
> xen/arch/$ARCH/include/asm/ and retrofit this to x86 and ARM.  It has
> two important properties - first, that you don't need to symlink the
> tree to make compilation work, and second that patches touching multiple
> architectures have hunks ordered in a more logical way.
> 
> ~Andrew
> 

I think we may have envisioned different things here...  I was under
the impression that we wanted to pull in common, so that changes
there that broke the RISC-V build would show up in CI...
and to demonstrate which "arch_*" calls common expects to exist.

It sounds like you'd prefer to start with no common code and none of
the arch_* calls it relies on?


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 15:37:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 15:37:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90414.171147 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFfAz-0001t7-NC; Fri, 26 Feb 2021 15:37:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90414.171147; Fri, 26 Feb 2021 15:37:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFfAz-0001t0-K8; Fri, 26 Feb 2021 15:37:09 +0000
Received: by outflank-mailman (input) for mailman id 90414;
 Fri, 26 Feb 2021 15:37:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CSsz=H4=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lFfAy-0001sv-Hb
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 15:37:08 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 462c8f64-9c20-4562-9fc5-e994bec13f18;
 Fri, 26 Feb 2021 15:37:07 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 462c8f64-9c20-4562-9fc5-e994bec13f18
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614353827;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=rcOgp2VM5pLOrUgoppIFmbdPISEsnDb4/3Gx0avKyW8=;
  b=Es7ukiYtg3HdrtEUaOB6noTKYQjr73p2uZOdIXEFd4oZEL+6oVAuIL3+
   HHGb4pisKigi2UflKa0Pa9iL6GPFpweriEYYFX97woPz9jGHFo+o8RWHm
   /bKW1kIYnAmF0G2OJmeU7Jv6qAQgslpaXfIWKonlxF3ILh0LYzFbIfyj4
   E=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: stlaOehzP/NLdjGu6TWfEXkzrdYuz47NRhjCtgnHn7+7P5qWyAk2hdbY8DZz8Qc3wU4ZVkfvhC
 ToJeuM318K1Co/zLPPDBLQzXqHeeUSQzsac9HVNIVr7RtFXtoXg7SK8oh/WpQB67ajqA49rnbL
 JYVFCkANGV4P+glvRkZZCYGE/AGi+LxYiJdgX7J1li/m29nDikEASaMMjsMQYKqbQ6Lx97MKCi
 p2kD8TzJKxsFR0PVNrhee3t6DPrixsbljgSWWpFmqCLXk2KyEJJTOm5mX6a9Yetp4szca+ug3P
 diI=
X-SBRS: 5.2
X-MesageID: 38130836
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,208,1610427600"; 
   d="scan'208";a="38130836"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=esYQY1feebpvIO2kIyg6b2SR3Wv3LYdwUJXhi6uVZs6LbJOSZIl9L2CQ820HGO4r7YEgtn9g7w5tXU0aYSnQC5WaS7f9xHP2BtQ8+FQE9/rOf778W+b8eV4r2InLWLNs8acvvYO56Oahs436gZpxAxkBzwVZTVQLNs79O+yAqBBMeHZB1hOoADevF8yMtqGlDEawb73a6E20ktXGdkgwJVhtekQ3h6+ak7jkPtumzP4Y+PpM3EsIgRgl19mtS8rA1JJ8AdXIypnL1Xa4oH7m1xALIHzKyhU+IqR9bJ/HMr2GJwC6GbdCoZh1Vl9H1ljCRVcVZD7PuEs+HjpkNAt/IA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=07E0LxdgpUQqTVTGSukUDX52sFSOCZ0Rir1X2rkAvE0=;
 b=AvqkfzEqS4mIvQHCb0sQfiWK8ZvZwOY6C5baTFlCLPJQhdUbL/ZrdCuGulecc4sPTR2GYQY8lN3au511vFl5Izik8EsxJ+ypZZpjxAXkijj3Ud702sqR9eWlp6g4S9YZhO3Dvp3WAgKgZg9Vn74gqTSF9laeb27PwVBQwoTVzv3MAiI9yRuKkAqSTeliCQi1B0dYktCE8NYZP79EAWi+kYHxlJ2alRBkKxuOaGOON1AWw05oSSFvuKe9XCWgXZLocjhAO+SDflW3SwZVq2Yyw+imoYTW1RhiWgSP77Xm5WF6wDQtp7QE+F4sQU74NIYgksGqLOROchQ21NY349I82A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=07E0LxdgpUQqTVTGSukUDX52sFSOCZ0Rir1X2rkAvE0=;
 b=Vrw/Yblq4v00OTkilxVAkFiPovdH/p9PNgHq/+x4s6/phpFmZz4gZwV5X4KqVUg8PYsPf7VRdtXIGvN8D+kkL/vQjZuCsGHqr9I7Fd4vmTh/ovfdO5W2UN9RDSpV50w0W45383z8tdp9O5NYWCRtVh6mfQmnxwaENsXa1KcHWU0=
Subject: Re: [PATCH XENSTORE v1 06/10] xenstored: handle port reads correctly
To: Norbert Manthey <nmanthey@amazon.de>, <xen-devel@lists.xenproject.org>
CC: Ian Jackson <iwj@xenproject.org>, Juergen Gross <jgross@suse.com>, Wei Liu
	<wl@xen.org>, Julien Grall <jgrall@amazon.co.uk>, Michael Kurth
	<mku@amazon.de>
References: <20210226144144.9252-1-nmanthey@amazon.de>
 <20210226144144.9252-7-nmanthey@amazon.de>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <a840c4fa-148e-7617-20e5-9b286ace2c40@citrix.com>
Date: Fri, 26 Feb 2021 15:36:57 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
In-Reply-To: <20210226144144.9252-7-nmanthey@amazon.de>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LNXP265CA0086.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:76::26) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: c53f4c00-939a-4c5d-0b47-08d8da6c5deb
X-MS-TrafficTypeDiagnostic: BYAPR03MB4487:
X-Microsoft-Antispam-PRVS: <BYAPR03MB4487A6988D3750CBF83540F9BA9D9@BYAPR03MB4487.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: y8kyqGoflwafsbKdEcc4UBiTS8ADfjJrQFDK7Cn6BNfJgjsvUiwZ35+nQDu+Tlw35GyfNGX3i5+hdQxyq7tsKv5AdFvHi9EeXj8zRvnDHQDUUG8EwOwhxbIVDwyPX8eusAZwsYXSt0Jov+UFfYAXSrtqjPZ0y42aZ3WAEyaPPECrnUwJzSXdRIJpmKDuRmNXjlXVSwJmpOeUDR6Mg1Wej99MxhoFZvRMIpmrzHyGBm+a0VE29rqug7uRyp1y0CXjqIvYzRkiCmyLTsX2LmWdoV2uleHf4vnUtBb72qJPe9ol3+5cUefGxFuEFlBTE3S7k30g9xHWKAFoB9orLj6bsyFQ3acXHHNN/FE4P3R0cyNFI/uthPvlJW7uIzR+2QUwXSFctXnFnztuvl03LeERoEgjsminoURq2D/QEidmqC190Mx/0wX+OC1r9ixveW5vAcDyiLLq0xLUoTWlq71FQEFRYFJIieBlnets6+GOQzTe+6g1EunhZxLa+8OwleOP1P8npnwHB+Dg3lYtCdy1CdT/Ud2k/W1HdMA7SGi3cwpa7l5IbBIyrTpoXqmoIWLXRKDhjya8HSOFMwBGC/gwOxn4Ke+FECpkT6Saz4HO/Hk=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(366004)(346002)(39860400002)(396003)(376002)(136003)(2906002)(4326008)(8676002)(956004)(31686004)(36756003)(478600001)(186003)(26005)(53546011)(8936002)(83380400001)(5660300002)(16526019)(6486002)(86362001)(66476007)(6666004)(316002)(66946007)(31696002)(2616005)(16576012)(66556008)(54906003)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?TDVlYWJ6TitJeU03NnVlbm9RZmpZVXh5bndUdFJqMGVSYVFtR2k1cWx3dVBl?=
 =?utf-8?B?RzZsSHRES3EyQzVHbXBwUHdVYVBGMTdPcUFpbDFEMGlxV2dFTDd5bGtlcUxw?=
 =?utf-8?B?M1FsN20xQ0ZwY0p6WEVRNGFQdXRkRjJIRE1ZZmlIQVgraFF5bW1jYnV4cUpo?=
 =?utf-8?B?TGM5T2trS0pUQVd5S1dlNVl4VHgrODF4VjNHTmxOc0VWLzQ0WDBNci9Femg4?=
 =?utf-8?B?L2EwTkYwY3g0aGFJVmFyU3l6clVZR2Q5UmUzeXRZU0x2b3duNHpDSVZRVUsx?=
 =?utf-8?B?ZlJyN2NxZGxWV1ZKQkxwczdNT0tSR0QyQ2FaOHBVQTZ4RGJKeGYxdHRyTVBx?=
 =?utf-8?B?UWdoVldIS0FBRkxSY0hEME1JWWpIKzZyRldxTEVMS2F1bHV4eHhFOWVOcDFn?=
 =?utf-8?B?QjhZa2NydVQ1d3VOU0RsRkdhZ3NwVmFKazRxMGFKd24vcFhaNkYwb1VrQzA2?=
 =?utf-8?B?ekZZZmFRb2xjd1hSOVdhZjFTM0NqVy80Q2EyMXNLZWszem5Fci85aXFBRTBo?=
 =?utf-8?B?Y3RSbmNRSHBKc3JsUjJ6NmpSVGh4Qkdxd3prdVMyaU1kTTdLQWJBdjMxQnk1?=
 =?utf-8?B?dk5NQWUyWWZEQytYSEFzaU5Wd3pEcllvQWk3QUZKVUloZDVjdFd2eUhXZUpH?=
 =?utf-8?B?ZjhMWGlob1EzY1AxaE5UdEZxZHBLaW40THZYTnNtak5hWmJzTHdNSjVnK1RG?=
 =?utf-8?B?L1VrWWVKSS9RanV5SVFHNnlSVmIrYnlkdXp6c2ZPQUlqL09GUlFpWUdoUS8x?=
 =?utf-8?B?cTZSQ1pPY0ZOMVpVaGxGd1Uvam5FQUpLRUV3VEdmTEw2N3NWZ21NZEkyME92?=
 =?utf-8?B?ZUQ5R0VXRGU3cTd0VVpZdGZ2RnhOWTBYN0owQks4N3R3OWR0TndXMklhWkhN?=
 =?utf-8?B?TFJEOFVYd1JlS0FUT21OUUNSdEYzSnFoaWMrczJ0OXhVc1AwWEFHSnVPQmVn?=
 =?utf-8?B?WmZGcUdGc2RHSmRsL3hRcnhrQWdJVFNQZGFZVTdBdWF6dVZJbjZTUENFUVJs?=
 =?utf-8?B?L2JZVUN3OUh1U1JOeGtNdUVEczRHRmp5cTVHQnFROFNTblBmNmtZUHF6WjdM?=
 =?utf-8?B?b242SHR1bnVTYVUzSndEek1mdjRPdjQzZ2FGTSswWGJBTmt6YVpBQ0tmWDNy?=
 =?utf-8?B?L0x4L0VwMGljTTFkS3V1eW9wSVVBa2d4WDJHNENBVzRBT2gxOU1PZXdoU0pB?=
 =?utf-8?B?KzB3TUp5bjJrbFVoUklGMUwrRzI4QWdpdFFKZlgrSG5UbUh1UTRyRVFvaWw5?=
 =?utf-8?B?VTFxZVpSWmxvOW9qS2ZzVkVtWmRPVXM0UWxQKy8wVThydXl5dzROTkxXeVpZ?=
 =?utf-8?B?Z2g0T2ptVlp6VDZsd1NJaEFYZDE0NktTeE8zVHVETTQrTDJ5VU9haUdwb2xX?=
 =?utf-8?B?QlhCS2prQWtXUU03QU0vMGw1allYNCtiNEtxTzhDS09FUTVtTG9IcHk1Qkky?=
 =?utf-8?B?TWxscjBCUkN0UVYrbG9ZV1kxY0VoblprZG9uOVpYRlQxS3RCQXcxRVErQW44?=
 =?utf-8?B?S211aERaajdGWlVlVUFnT01zdGpaR1RKc0xxL0F2KzVMMzJsSjJTbXBYUGRw?=
 =?utf-8?B?TkVicmdWbmVHRG0zajFmOGJMYzRDNEg5T0RvckhUdGZ3ak52ZzhHWmpvZi9p?=
 =?utf-8?B?elpRQ1d4ZndRS1p5cjFnRUQ1eTlvajgrU1k0S0JpbUs0dFovVWxHamxIdlAz?=
 =?utf-8?B?cEFOb1FGZW9LanJlOVE3ckljSmJFZzF2OFBKOFVlQzhNT3p1YmlPR1I1OVNj?=
 =?utf-8?Q?38+UkEDL6YozBHNQnHpMSrzUIRUabSOZfR8b8Ah?=
X-MS-Exchange-CrossTenant-Network-Message-Id: c53f4c00-939a-4c5d-0b47-08d8da6c5deb
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Feb 2021 15:37:04.1792
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: sittZcy/VYoTLOsMrefjiUEiVP+eNfLlKkBD+rbOeL7L/uyVRjWfGIM6UJ3dI3R+jvwR+2yDgfyq2BQptq1rxJgLnjmUqYUE0dmOscgQJBQ=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4487
X-OriginatorOrg: citrix.com

On 26/02/2021 14:41, Norbert Manthey wrote:
> The read value could be larger than a signed 32bit integer. As -1 is
> used as error value, we should not rely on using the full 32 bits.
> Hence, when reading the port number, we should make sure we only return
> valid values.
>
> This change sanity checks the input.
> The issue is that the value for the port is
>  1. transmitted as a string, with a fixed amount of digits.
>  2. Next, this string is parsed by a function that can deal with strings
>     representing 64bit integers
>  3. A 64bit integer is returned, and will be truncated to its lower
>     32bits, resulting in a wrong port number (in case the sender of the
>     string decides to craft a suitable 64bit value).
>
> The value is typically provided by the kernel, which has this value hard
> coded in the proper range. As we use the function strtoul, non-digit
> characters are considered as the end of the input, and hence do not require
> checking. Therefore, this change only covers the corner case to make
> sure we stay in the 32 bit range.
>
> This bug was discovered and resolved using Coverity Static Analysis
> Security Testing (SAST) by Synopsys, Inc.
>
> Signed-off-by: Norbert Manthey <nmanthey@amazon.de>
> Reviewed-by: Thomas Friebel <friebelt@amazon.de>
> Reviewed-by: Julien Grall <jgrall@amazon.co.uk>

Port numbers are currently limited at 2^17, with easy extension to 2^29
(iirc), but the entire event channel infrastructure would have to
undergo another redesign to get beyond that.

I think we can reasonably make an ABI statement saying that a port
number will never exceed 2^31.  This is already pseudo-encoded in the
evtchn_port_or_error_t mouthful.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 15:56:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 15:56:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90421.171166 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFfT5-0003yq-CC; Fri, 26 Feb 2021 15:55:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90421.171166; Fri, 26 Feb 2021 15:55:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFfT5-0003yj-95; Fri, 26 Feb 2021 15:55:51 +0000
Received: by outflank-mailman (input) for mailman id 90421;
 Fri, 26 Feb 2021 15:55:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CSsz=H4=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lFfT3-0003ye-W0
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 15:55:50 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7a1a4ce8-5d0d-42e5-9ab6-fd419d9dffd3;
 Fri, 26 Feb 2021 15:55:47 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7a1a4ce8-5d0d-42e5-9ab6-fd419d9dffd3
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614354947;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=hJo0c/nJuTsO3JgxsMpT8abuaPjZ57s6f7/I+Ms2B7E=;
  b=hQ5UaYDSHy69jqvmtpzd4qEyf6QD9X+cb7mmmT7IgyB71IWQzWbvLEwv
   V+AO1RqJVv390Tfc+rklgTZvEdfXzr1DIuRWgb958v0CkNBDEQ/B/SwLv
   9GY8ceTBght4lTkfuOJhs1BS7nOxyjsYaIac5aVRnOUzLGTvozAWmqi9b
   g=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: 8wHP6e79kLa1tVYcmZrRqRVs/kDHvXEj8gM3B2aHXzMcwlENrIzUxkzPk1+Zu15M13jb9avYn5
 SdlMcMenshGkzhhGZI2GZgj9bDiBq3w7Memqlmq819IV1AVzmH9BZjYIjjaWFW3s9U2wlcE2nY
 JHpmi56iX53UZzdE1gUAf4ITTRnxYmxwE/lfwO1IJ557W3ICIQ0PWQsrnYUaUBN3tZwYevBCe9
 mesQIIMl8Zah4UykE1QstIcL3HuJGWe6vTw4r+cdlQjm3GnQUWnyiWjF79iymARG3/iZS1g/1B
 0gA=
X-SBRS: 5.2
X-MesageID: 38132286
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,208,1610427600"; 
   d="scan'208";a="38132286"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=htX/Opr//tpXCD4rQJFBsyDqsIjODgPoW9hA6SE6xp7IcVhFW+eMymoBk0RkJWc7uKvfejLHz+D4dL9Foy0/O3JXiBwG1NdM2k5H6KTsjpxflzd5jrDwpMzBXgZZl7sQzgCKA8wFVyKh/APBv/mXtw/3S5SW6EBladzBAj7sovvXS4qn93v/EbDlefZJD+l8qDx6b7epub70vrstTgRsmp+id5ZXqhuWthj1jwbZ6Mlf7WYIsEJo9jTfqPDtrFS5IJzUCn9p6uWfw2GPzXCyj+Jc+Y1bR5+OCDs5cFt56iWnRdm9XIZyEPzNudjeQRAAmwtLLq6jBdkXi358A090KQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=d97pyAgV5MagtV0sq9xH5n9k/J5bO4lCyXUNK6r1bRI=;
 b=ncevfVhZUde8aIPtCK9bS/l/1t4+ZjeIdQ3sHcen0R8NxnCZXBhQoBXhZGD/vkLDhiRyu/FyTOcPNlx6gLjexO8fXoTvfMJDPscKesDFxAucAP0RKwwTMteJx7VSbagVL5IlWTbjyI7ESVxRy0eHbQFcW+HZTFs3dOPtBu3iX95hdM6hI3THouCiMwNsv6yUdd2EUprSJBrWrfNDiScGu8PPJwnuEGisgrE02sNRtf+OECtEMN755S9557R9tO8aX2bsYnVEVbEzVJehC3RFwoE0nj+me7jMVQkVoUKem9VurG1k5MZAcSFRcmccRDgVDeEzKZngddGirudVWLm1Mg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=d97pyAgV5MagtV0sq9xH5n9k/J5bO4lCyXUNK6r1bRI=;
 b=UUvb8sBZqBugmlAfiYAfhqJzKvzAdXtPDC5DJkecXUu5lyv98GnKLIL6DB242ZzqMw9jLNuSp4uhDsGhPFKeBWqtqBDAE+uN0N422A5OKW8H1hHJ0LRw3UaGeHnXaosjY+mN7b4xo94RBbpki/MdXlsEw/gOT1exDe9f4eLeQdY=
Subject: Re: [PATCH for-next 5/6] xen: Add files needed for minimal riscv
 build
To: Bob Eshleman <bobbyeshleman@gmail.com>, Connor Davis
	<connojdavis@gmail.com>, <xen-devel@lists.xenproject.org>
CC: George Dunlap <george.dunlap@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>, Tamas K Lengyel <tamas@tklengyel.com>, Alexandru Isaila
	<aisaila@bitdefender.com>, Petre Pircalabu <ppircalabu@bitdefender.com>
References: <cover.1614265718.git.connojdavis@gmail.com>
 <7652ce3486c026a3a9f7d850170ea81ba8a18bdb.1614265718.git.connojdavis@gmail.com>
 <9b441529-c5a4-4299-1155-869bcdab06b0@citrix.com>
 <0d58bca1-0998-1114-d023-0d8a5a193961@gmail.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <c6ed745b-3847-b878-f683-2d1041be22a1@citrix.com>
Date: Fri, 26 Feb 2021 15:55:37 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
In-Reply-To: <0d58bca1-0998-1114-d023-0d8a5a193961@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0481.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:13a::6) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 7f98b264-0e7f-495a-c531-08d8da6ef9ad
X-MS-TrafficTypeDiagnostic: SJ0PR03MB5775:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <SJ0PR03MB57755014571C77F19360B28DBA9D9@SJ0PR03MB5775.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: AHLxogknrEE/ZPwjgqbm15MSCWcxcirmSxqb2jf4/BbSizcDWUmL7og/1FpkX+PMOyIwZoW7SXsx3esNAtFMB5EErn2b+MZX7ZceDbvPiTUUWaxbhKCPHnEIB1pY6rY+LoRykmIzjZeDvREWd1urIK8/+2H2pyJ3xjKjThOkrkMmSd3zrYdPyewjIHFBPiXfLYwS4djCE9luZCOXtsb/bfi23OUaFD8R7TVPqn24XM4r2b8Ld3ZJZSur01iz1amSS1JgQAtcJGSVTnwefjmZCsQ4dSZcLwQKoGpxgCPHTErsVXcMpNX1ula070ir1XKWZkm7dLkgmWYaGCIb3wf2h/kM6N+z/4LhJSxYgFk4HGTdvz3NFtBG8PZrOXkR20zUwNUexyeulJZQZJj/WGLU6pKRoAEX1kDlUHEdQpD85lkhJ1n/DBEWsaL6vetuCHoTLchLMxO3PaLz1rkO8uoW0VwRHTtpolcGfi/lawAOn2eZLfJFovoc2g2gBnh+YlSlGO7H92cBYuZ3CJaXoDKxExaO+hyzomkngVTxazCEC5pDsoKiZBHh+ZcBMtEPBXs7xLf6qfZOlgwud4QtpdmfxpkD0DqUXWl0AP33Oyo39zg=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(346002)(396003)(366004)(136003)(39860400002)(376002)(6666004)(8676002)(31686004)(956004)(31696002)(2616005)(86362001)(2906002)(186003)(4326008)(8936002)(54906003)(6486002)(66476007)(53546011)(5660300002)(16576012)(478600001)(66556008)(16526019)(316002)(26005)(83380400001)(66946007)(7416002)(36756003)(110136005)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?QVJBc2l3c0txL3J6SzV2VnZoekFocjNiczNZeUhFb1FaeU5uREhHWjl2bU80?=
 =?utf-8?B?d1B3QUd5TW1pK2dTdmFFNTlHakNLYWdLbVV4KzVScTRtWEtiRElEczk5VHJz?=
 =?utf-8?B?OVI5WlBldGJlV21jaDVXOElxb0h0NWNGK3E0TUxyMkt0Q0lSd1BNSUgrQmpp?=
 =?utf-8?B?aUlkRlJPWCtNVCt6dFNiZkxMWWlOMmFPUllGVVp4THE0VGZiTGZkbU9BNGtz?=
 =?utf-8?B?SVk0OWF4ODl6QlJaRkl6SXo1U0VRajhCVUp6SG54Tk5seXRxY1g1KzgrdGxq?=
 =?utf-8?B?OVdwNFkyL3dtalRieDFIaWJJd3NRMnNyd0dEdTludXNUS01xSVNTcWtJeEsy?=
 =?utf-8?B?c0xrVC9lSEI2RFFIUURRNUQwU3liNUVseEFSSFgxUzdkZGlMMGtPWjlad01V?=
 =?utf-8?B?bitwbGFMMXk5czBJcmZLcG41cGNaM2U4NmZQMUkvdkd4WXMrWnJsNzJkVytL?=
 =?utf-8?B?Sk5RZk5ITWw4Zjc0TzNBOEhaOFdwMkhaVmpwcmFVUGFMOUhOUU1FWVpMRWhG?=
 =?utf-8?B?WVhCRkYyNFZHK2UyU2h4cURJUjd5UnluMnI5cmtIOWUrdmNQRDFSQXNCYm41?=
 =?utf-8?B?VW5xM2p5SHJEL0NuRGtCOURCSFlaakxEV0dKMHdVbkljUmFlR0tKU0loSi9B?=
 =?utf-8?B?MjhCOW1YSWNKR0NybmpxZGhVbGJsODNVdUJybFpNNmRsWmxXSndud1E0Sy83?=
 =?utf-8?B?OTA4ZG1CWUxBbEgrU1h1d25vQUFZWVA2ZlF6MXNlNW9hQzhoVkgzbk12RGMz?=
 =?utf-8?B?cWIrL1Jtb1plVmRqZi9MNEFKMXVzYzR0NFdNRk9yenRRWjMxQlBOWFJiMGpW?=
 =?utf-8?B?Z0xmV3JUYW5kUzdVSkRheHBlUVRJbVZ2RS9hcWhIOVlMTGZlUlAxRlpWRis2?=
 =?utf-8?B?TXBXWU4vbVM1U3U1SU02bnA0aC9vZXcwTEJoMzJ3ZUhKZy9Zc1JFS25CQitm?=
 =?utf-8?B?K3h3VU5PWGwvNWdwbTJOQWNqcWdreGtQdXExdUs2SitZYStmZzJWY1h2bTZM?=
 =?utf-8?B?NXNTQk1ZSS9mVlhNUjZLNS9QMkxLSGpQMWJDMWVTR3paVHcrQkd0VzlrVXdx?=
 =?utf-8?B?U3JnTEdHRDdKdjJTempjTkh4R1pHc04rcWhTeEQ1WXRhMzZMaUtyZWdMVGpz?=
 =?utf-8?B?bm9vK29pTXZTK2V0djBwQys0VEgwM1BZQmRVdVZxWkFlV3hlSllGU2xSWFkx?=
 =?utf-8?B?c0FENDkrKy9tMnB5ZDAyTGNtMzE1ZEgwTlROZHFGNDMwVlBlczh4TDE1Mkds?=
 =?utf-8?B?cVBqNG5URUNBQVBhcVlQdElNMmVwYnQ3eGFuNHJmVllZMEZJM0tMRWFYSDFW?=
 =?utf-8?B?L1VEZ1g4aWRrNUZGVGlsWVhjelAybGpWTzVxZHdEdS9JcGxxYXRhdjZGY1lX?=
 =?utf-8?B?d1pQYzRBUENzbXZPS3Q2YlNWbnp0dDZhRU5BSmtSeVIrV29ieXJUR2dXanNN?=
 =?utf-8?B?ekhTVm5XMEJGUTF1eDNFSHkzR3BXb0JpZThwdkJ6OFpCYm41TXVYZTdXZUo0?=
 =?utf-8?B?bmg5a1dqVDdHOC9HZ0pVdjc3d1llTFY3OEN0TGZhZU96UXFLSWxWbHBlZVBn?=
 =?utf-8?B?ZmFnYm9veHJnSm5kQ3BwNkZ1RGhzM2Vya3JRNjRwWGs3RVd3ZitwOXp6U2pB?=
 =?utf-8?B?QmRJMXNoY3pBcDQ3NHcyN0NVZ2ZXVVlMOHpYeTh0Myt2bHg5Uy9ZZFBMK3J2?=
 =?utf-8?B?K3J1bDBLTUc0MjJVdnlDR0RJT3VCTGVOQjI4ZGI2T0tBYS90enFhcytPelZj?=
 =?utf-8?Q?Oms6ooZoLeenM4tcRVEv1cx9rpZyvLvhtf1BeUN?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 7f98b264-0e7f-495a-c531-08d8da6ef9ad
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Feb 2021 15:55:44.4976
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: P5pjS7b2J0IIs+RGgQqKWqzM+/lC6TdhVah29bFFixOP3BQ/KZOH1Vhjnst2PThk7J+m7/+8Xw9NwGFysZT18NRtfuTwRRXPDJJHoAZ3XNc=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5775
X-OriginatorOrg: citrix.com

On 26/02/2021 15:30, Bob Eshleman wrote:
> On 2/25/21 3:14 PM, Andrew Cooper wrote:
>> Well - this is orders of magnitude more complicated than it ought to
>> be.  An empty head.S doesn't (well - shouldn't) need the overwhelming
>> majority of this.
>>
>> Do you know how all of this is being pulled in?  Is it from attempting
>> to compile common/ by any chance?
>>
>> Now is also an excellent opportunity to nuke the x86isms which have
>> escaped into common code (debugger and xenoprof in particular), and
>> rethink some of our common/arch split.
>>
>> When it comes to header files specifically, I want to start using
>> xen/arch/$ARCH/include/asm/ and retrofit this to x86 and ARM.  It has
>> two important properties - first, that you don't need to symlink the
>> tree to make compilation work, and second that patches touching multiple
>> architectures have hunks ordered in a more logical way.
>>
>> ~Andrew
>>
> I think we may have envisioned different things here....  I was under
> the impression that we wanted to implicate common, so that changes
> there that broke the RISC-V build would present themselves on CI...
> and to demonstrate which "arch_*" calls common expects to exist.
>
> It sounds like you'd prefer no common to start and none of the
> arch_* calls it relies on?

We definitely want "stuff compiled under RISC-V" to be caught in CI, but
that doesn't mean "wedge all of common in with stubs to begin with".

Honestly - I want to see the build issues/failures in common, to help us
fix the rough corners of the Kconfig system and include hierarchy.

In light of this patch, there are definitely some things which should be
fixed as prerequisites, rather than forcing yet-more x86-isms into every
new arch.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 15:57:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 15:57:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90424.171177 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFfUJ-000463-Mb; Fri, 26 Feb 2021 15:57:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90424.171177; Fri, 26 Feb 2021 15:57:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFfUJ-00045w-JO; Fri, 26 Feb 2021 15:57:07 +0000
Received: by outflank-mailman (input) for mailman id 90424;
 Fri, 26 Feb 2021 15:57:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ro2M=H4=gmail.com=bobbyeshleman@srs-us1.protection.inumbo.net>)
 id 1lFfUI-00045q-7E
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 15:57:06 +0000
Received: from mail-pf1-x434.google.com (unknown [2607:f8b0:4864:20::434])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f0e33974-6835-4976-b57b-8aa2607d6a75;
 Fri, 26 Feb 2021 15:57:05 +0000 (UTC)
Received: by mail-pf1-x434.google.com with SMTP id 201so6529740pfw.5
 for <xen-devel@lists.xenproject.org>; Fri, 26 Feb 2021 07:57:05 -0800 (PST)
Received: from ?IPv6:2601:1c2:4f80:d230::5? ([2601:1c2:4f80:d230::5])
 by smtp.gmail.com with ESMTPSA id n4sm9674881pgg.68.2021.02.26.07.57.03
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 26 Feb 2021 07:57:04 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f0e33974-6835-4976-b57b-8aa2607d6a75
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:organization:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=PhjSFtO93otAqwm0y+OY5w6bOJL8h00QcgJZEv9C3mI=;
        b=tuDL3Ei/NRjcCyJqFu9l3vKl8RYx0jN19SxGPyVEiBBRDU3WXoGKAF4lcyJPysczuo
         APQoBjpI66oVpj+1Y27Xha74+G9ZZ8UpStdzQ9oCsL3fcjJl/4wK/mD/Y4w1R75s3a+E
         nbWKMmigdVp0n7YmldJfB4nosFLhh+TFAsUNASWsU2dDEXo/214IEfjN10P2e0pibrOD
         fMsaRtQlvfXG2zeHXrtmICC0UTn7KBl1OrFijvQQmd3UydGds5o+pDPPKcpuqztWmEKq
         v+JbqloGyhrRyHt3vY8EuCRlqJDLXYC+Zx5uycWJy+mIVQLQT7EtjSZaUKfAz7nj1VVQ
         k6dg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:organization
         :message-id:date:user-agent:mime-version:in-reply-to
         :content-language:content-transfer-encoding;
        bh=PhjSFtO93otAqwm0y+OY5w6bOJL8h00QcgJZEv9C3mI=;
        b=HCsynRg53gBIP0Og5m/5/33pwU1FePJSUy0xnbVmzFC9M/gRXIuC/xnBrX824NBYdX
         2L7uGmNGCwpw7SnvhA/23ENGYP2oQxIP8fFSeVNrktM6h3l/czfyPYAHiG0kEGEcRDh0
         3DVD0y6XSldNuGPBmf4BaSx+sTkt3MvWG0gRa8G7HK+txm3vnpjR3g4oqOBECiGqtSyC
         W3I/EuBlSw+0M/l59Re0vBl+Yvv7xqWmLk0tLt6mLcYnGBfrCJTYe8qIjJmm+ZhHeBkv
         PlAWGB5tNhFYtJXYMpSWq1ngD/MgcTidfRihFp5HHrBPX9Hu2fwW7crWTkLigG14NFlZ
         HWTQ==
X-Gm-Message-State: AOAM530v5nZOmkEQezCKKsc6z+MHV0CuRMCwS5MLR+9QX+t9laMx2qAB
	B1Zgzb4NSiN3ZUhWeA87g/I=
X-Google-Smtp-Source: ABdhPJybZgIZnxgsFn7QwLu3UtyS+xGL+BYnRh/Jz8j1wr9M0GcUEGkef7u97dzsZxQ+ACvdqQmMfA==
X-Received: by 2002:a63:504f:: with SMTP id q15mr3534743pgl.290.1614355024713;
        Fri, 26 Feb 2021 07:57:04 -0800 (PST)
Subject: Re: [PATCH for-next 0/6] Minimal build for RISCV
To: Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Dario Faggioli <dfaggioli@suse.com>, Tamas K Lengyel <tamas@tklengyel.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>,
 Doug Goldstein <cardoe@cardoe.com>
References: <cover.1614265718.git.connojdavis@gmail.com>
From: Bob Eshleman <bobbyeshleman@gmail.com>
Organization: Vates SAS
Message-ID: <ec2f36a3-6fb9-2f8d-098c-66eb15ba91d1@gmail.com>
Date: Fri, 26 Feb 2021 07:54:04 -0800
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
MIME-Version: 1.0
In-Reply-To: <cover.1614265718.git.connojdavis@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 2/25/21 7:23 AM, Connor Davis wrote:
> Hi all,
> 
> This series introduces a minimal build for RISCV. It is based on Bobby's
> previous work from last year[0]. I have worked to rebase onto current Xen,
> as well as update the various header files borrowed from Linux.
> 
> This series provides the patches necessary to get a minimal build
> working. The build is "minimal" in the sense that 1) it uses a
> minimal config and 2) files, functions, and variables are included if
> and only if they are required for a successful build based on the
> config. It doesn't run at all, as the functions just have stub
> implementations.
> 
> My hope is that this can serve as a useful example for future ports as
> well as inform the community of exactly what is imposed by common code
> onto new architectures.
> 
> The first 4 patches are mods to non-RISCV bits that enable building a
> config with:
> 
>   !CONFIG_HAS_NS16550
>   !CONFIG_HAS_PASSTHROUGH
>   NR_CPUS == 1
>   !CONFIG_GRANT_TABLE
> 
> respectively. The fifth patch adds the RISCV files, and the last patch
> adds a docker container for doing the build. To build from the docker
> container (after creating it locally), you can run the following:
> 
>   $ make XEN_TARGET_ARCH=riscv64 SUBSYSTEMS=xen 
> 

Thanks for getting this out!  Looking forward to continuing to work with
you on this.  Great job cleaning these up and fixing the rebase!

For anyone interested in peeking in on how other RISC-V developments
are advancing, the out-of-tree repo is located here:
   https://gitlab.com/xen-on-risc-v/xen

The parallel work has been on getting dom0 up and running.  We're just
beyond making the domain_create/construct_dom0 calls work (and getting all
of the common code and _init calls involved working appropriately), and are
currently working on mapping in and launching the created dom0 (this is
being done on the riscv-create-dom0 branch; be forewarned, there is still a
good amount of churn there).

I'm not sure of the optimal way to keep the ML informed about how things
are going, as I'm not sure I foresee many intermediate patches between
this initial build and seeing a dom0 console, save for common fixes and
questions.  I think Connor and I are both on IRC.  Any suggestions,
of course, are welcome.

Best,
Bob


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 16:24:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 16:24:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90439.171190 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFfuS-0007sB-05; Fri, 26 Feb 2021 16:24:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90439.171190; Fri, 26 Feb 2021 16:24:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFfuR-0007s4-Qn; Fri, 26 Feb 2021 16:24:07 +0000
Received: by outflank-mailman (input) for mailman id 90439;
 Fri, 26 Feb 2021 16:24:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ro2M=H4=gmail.com=bobbyeshleman@srs-us1.protection.inumbo.net>)
 id 1lFfuR-0007rz-9k
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 16:24:07 +0000
Received: from mail-pg1-x534.google.com (unknown [2607:f8b0:4864:20::534])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f8428e2f-41d7-461e-ac0c-03a172d434e0;
 Fri, 26 Feb 2021 16:24:06 +0000 (UTC)
Received: by mail-pg1-x534.google.com with SMTP id p21so6481815pgl.12
 for <xen-devel@lists.xenproject.org>; Fri, 26 Feb 2021 08:24:06 -0800 (PST)
Received: from ?IPv6:2601:1c2:4f80:d230::5? ([2601:1c2:4f80:d230::5])
 by smtp.gmail.com with ESMTPSA id k4sm9407710pfg.102.2021.02.26.08.24.03
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 26 Feb 2021 08:24:04 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f8428e2f-41d7-461e-ac0c-03a172d434e0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:organization:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=EgPbYrDjeBajl85MDkJIwUvxr9idnrsm7yPEQm+mh6U=;
        b=RY1ZZO3HnGWJQnOWEUQCtlRpZadlFsu12UaQYNLCe8NrUbhtDUogH5h9Q4Iv2mb9IF
         EFVGAH0zx3JG1M0lnsoMfneKYTwfym1jP5Tllp6bFAE391Om5MC4vyUdIqmC6B3U+6Br
         Vm/2VQ55ZtHQg2wmKIxyYPT3S3BohPpvhWoC8+PRDt4qJRub9Sm4R1gKoIeR2SMnxbwY
         +iHBQ+2eYm3my2OLBhL4lpBSFvRSZdE6QSAtXTBO1bZJoLZ6lybWxb8J7aNLk/KbJ7mB
         WY3vmMQhBDbSl7tyPRdfTd2oeYu+wyAiupdGZe3Aolh05hFdWee7caU5SI5mXMnj9ICK
         34Uw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:organization
         :message-id:date:user-agent:mime-version:in-reply-to
         :content-language:content-transfer-encoding;
        bh=EgPbYrDjeBajl85MDkJIwUvxr9idnrsm7yPEQm+mh6U=;
        b=XA9lTfdcBmTA7Gb4bBXLfb1uMMri81nf4Sqn0v85hN3wiJzTW9m7ALReC8LgSYSflP
         YOWLpoUUNHxYth688xUWrJXRmqVASniJm6Nx9dAO+vDzHCS4W09FFu3DQPHBoI49RuLy
         yKZGOAHdzDwAAvkyhyIWMjt8/MtfdPoLFh7uhOX5TGyQQw60b2wRolwwdOov7rtO60QX
         wNxGGLXLIBEZKVERqhpG90hgnHDkqzKgbG0/7uj0xUIZKFJnBDfphvvppnp7t2TB2Usi
         sTY3JeWKdXuoQlyhEQvyukA+XjDV4nb5QlrqiJlAUBntKxwNQQFpy/G2tmdn11LfHYoN
         xPFA==
X-Gm-Message-State: AOAM530IAkiXaJ70C2n1JDMqNcS/uOTFFF02uQkPwoDuVzDpdQhf+da6
	meqRbM0Nbki6e7IcRUhH5uM=
X-Google-Smtp-Source: ABdhPJzvnzdyf38O4f9AE1IDFZwo4yi1B5IiUJRzPmMQ2JPn0ItHMCgO6fypX5aRFiDUN4wXwd17SQ==
X-Received: by 2002:aa7:8a11:0:b029:1ee:42d8:a8f5 with SMTP id m17-20020aa78a110000b02901ee42d8a8f5mr2653938pfa.5.1614356645300;
        Fri, 26 Feb 2021 08:24:05 -0800 (PST)
Subject: Re: [PATCH for-next 5/6] xen: Add files needed for minimal riscv
 build
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Tamas K Lengyel <tamas@tklengyel.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>
References: <cover.1614265718.git.connojdavis@gmail.com>
 <7652ce3486c026a3a9f7d850170ea81ba8a18bdb.1614265718.git.connojdavis@gmail.com>
 <9b441529-c5a4-4299-1155-869bcdab06b0@citrix.com>
 <0d58bca1-0998-1114-d023-0d8a5a193961@gmail.com>
 <c6ed745b-3847-b878-f683-2d1041be22a1@citrix.com>
From: Bob Eshleman <bobbyeshleman@gmail.com>
Organization: Vates SAS
Message-ID: <247706ec-c011-5217-7b6e-deedf22cac84@gmail.com>
Date: Fri, 26 Feb 2021 08:21:04 -0800
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
MIME-Version: 1.0
In-Reply-To: <c6ed745b-3847-b878-f683-2d1041be22a1@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

>> On 2/25/21 3:14 PM, Andrew Cooper wrote:
>>
>> It sounds like you'd prefer no common to start and none of the
>> arch_* calls it relies on?
> 
> We definitely want "stuff compiled under RISC-V" to be caught in CI, but
> that doesn't mean "wedge all of common in with stubs to begin with".
> 
> Honestly - I want to see the build issues/failures in common, to help us
> fix the rough corners of the Kconfig system and include hierarchy.
> 
> In light of this patch, there are definitely some things which should be
> fixed as prerequisites, rather than forcing yet-more x86-isms into every
> new arch.
> 
> ~Andrew
> 

Ah, I see.  There's more that could be Kconfig'd away, and where something
can't be Kconfig'd away, there should be some commits to make it so it can be.

But things like, for example, `arch_domain_create()` would still
be stubbed, because it is, and always will be, required.

-Bobby


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 16:34:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 16:34:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90445.171202 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFg43-0000Z4-15; Fri, 26 Feb 2021 16:34:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90445.171202; Fri, 26 Feb 2021 16:34:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFg42-0000Yx-Te; Fri, 26 Feb 2021 16:34:02 +0000
Received: by outflank-mailman (input) for mailman id 90445;
 Fri, 26 Feb 2021 16:34:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFg41-0000Yp-JC; Fri, 26 Feb 2021 16:34:01 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFg41-0007fy-9T; Fri, 26 Feb 2021 16:34:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFg41-0002MT-1r; Fri, 26 Feb 2021 16:34:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lFg41-0007pI-1K; Fri, 26 Feb 2021 16:34:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=DZ3blG6LYlK0WHXW1LUbvStEfg3vRMB+2lU3///060Q=; b=59uRkxb0FJfYQ+N4VU378fuZ+8
	5A3foZ5qDzcN2zY33gNgeYXlF8ZV3TxEkqojNnmx6DhzMdkUtKPzHPAPYlhe1UljuQh7Fk/ix9xLj
	Ea8q1m5ISo1XdX1w8KVMoVKiv77MMBw1486RN6px7vUAQPywdhi1WIT1+jXAdUSR1Q4I=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159709-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 159709: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=c4441ab1f1d506a942002ccc55fdde2fe30ef626
X-Osstest-Versions-That:
    xen=109e8177fd4a225e7025c4c17d2c9537b550b4ed
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 26 Feb 2021 16:34:01 +0000

flight 159709 xen-unstable-smoke real [real]
flight 159713 xen-unstable-smoke real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/159709/
http://logs.test-lab.xenproject.org/osstest/logs/159713/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 159704

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  c4441ab1f1d506a942002ccc55fdde2fe30ef626
baseline version:
 xen                  109e8177fd4a225e7025c4c17d2c9537b550b4ed

Last test of basis   159704  2021-02-26 10:01:26 Z    0 days
Testing same since   159709  2021-02-26 13:00:30 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit c4441ab1f1d506a942002ccc55fdde2fe30ef626
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Feb 25 15:46:10 2021 +0000

    dmop: Add XEN_DMOP_nr_vcpus
    
    Curiously absent from the stable API/ABIs is an ability to query the number of
    vcpus which a domain has.  Emulators need to know this information in
    particular to know how many struct ioreq's live in the ioreq server mappings.
    
    In practice, this forces all userspace to link against libxenctrl to use
    xc_domain_getinfo(), which rather defeats the purpose of the stable libraries.
    
    Introduce a DMOP to retrieve this information and surface it in
    libxendevicemodel to help emulators shed their use of unstable interfaces.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Ian Jackson <iwj@xenproject.org>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>
    ---
    CC: Jan Beulich <JBeulich@suse.com>
    CC: Roger Pau Monné <roger.pau@citrix.com>
    CC: Wei Liu <wl@xen.org>
    CC: Paul Durrant <paul@xen.org>
    CC: Stefano Stabellini <sstabellini@kernel.org>
    CC: Julien Grall <julien@xen.org>
    CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
    CC: Ian Jackson <iwj@xenproject.org>
    
    For 4.15.  This was a surprise discovery in the massive ABI untangling effort
    I'm currently doing for XenServer's new build system.
    
    This is one new read-only op to obtain information which isn't otherwise
    available under a stable API/ABI.  As such, its risk for 4.15 is very low,
    with a very real quality-of-life improvement for downstreams.
    
    I realise this is technically a new feature and we're long past feature
    freeze, but I'm hoping that "really lets some emulators move off the unstable
    libraries" is sufficiently convincing argument.
    
    It's not sufficient to let QEMU move off the unstable libraries yet - at a
    minimum, the add_to_physmap hypercalls need stabilising to support PCI
    Passthrough and BAR remapping.
    
    I'd prefer not to duplicate the op handling between ARM and x86, and if this
    weren't a release window, I'd submit a prereq patch to dedup the common dmop
    handling.  That can wait until 4.16 at this point.  Also, this op ought to
    work against x86 PV guests, but fixing that up will also need this
    rearrangement into common code, so it needs to wait.
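
The DMOP buffer pattern such a query op follows (a fixed-size struct carrying an op number plus a union the hypervisor writes its result back into) can be sketched in self-contained C. Every name below is an illustrative stand-in, not Xen's real dm_op definitions:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical miniature of the DMOP buffer pattern: the caller fills in
 * the op number, the hypervisor side writes the answer into the union. */
#define DMOP_NR_VCPUS 1

struct dm_op_nr_vcpus {
    uint32_t out_nr;              /* written back by the hypervisor */
};

struct dm_op {
    uint32_t op;                  /* which sub-op to perform */
    union {
        struct dm_op_nr_vcpus nr_vcpus;
        uint8_t pad[56];          /* keep the buffer a fixed size */
    } u;
};

/* Stand-in for the hypervisor side of the hypercall. */
static int fake_dm_op(uint16_t domid, struct dm_op *buf)
{
    (void)domid;
    switch (buf->op) {
    case DMOP_NR_VCPUS:
        buf->u.nr_vcpus.out_nr = 4;   /* pretend the domain has 4 vcpus */
        return 0;
    default:
        return -1;                    /* unknown op */
    }
}

/* What a stable-library wrapper around the op would look like. */
int nr_vcpus(uint16_t domid, unsigned int *vcpus)
{
    struct dm_op op;

    memset(&op, 0, sizeof(op));
    op.op = DMOP_NR_VCPUS;

    int rc = fake_dm_op(domid, &op);
    if (rc == 0)
        *vcpus = op.u.nr_vcpus.out_nr;
    return rc;
}
```

Because the real op is surfaced through the stable libxendevicemodel library, an emulator can obtain the vcpu count without linking against libxenctrl.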

commit 615367b5275a5b0123f1f1ee86c985fab234a5a4
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Feb 25 16:54:17 2021 +0000

    x86/dmop: Properly fail for PV guests
    
    The current code has an early exit for PV guests, but it returns 0 having done
    nothing.
    
    Fixes: 524a98c2ac5 ("public / x86: introduce __HYPERCALL_dm_op...")
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Ian Jackson <iwj@xenproject.org>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>
(qemu changes not included)
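
The bug the second commit fixes, an early exit that reports success, is easy to see in a minimal sketch; the function names and the choice of error value here are illustrative, not Xen's actual dm_op code:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Before the fix: a dm_op against a PV domain (which has no device model)
 * bailed out early but still returned 0, so callers believed the op had
 * been carried out. */
static int dm_op_buggy(bool is_hvm)
{
    if (!is_hvm)
        return 0;            /* early exit, but signals success */
    return 0;                /* ... real op handling ... */
}

/* After the fix: the unsupported case reports a real error. */
static int dm_op_fixed(bool is_hvm)
{
    if (!is_hvm)
        return -EOPNOTSUPP;  /* PV guests are not a valid dm_op target */
    return 0;                /* ... real op handling ... */
}
```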


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 16:37:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 16:37:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90448.171217 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFg6v-0000ij-Fp; Fri, 26 Feb 2021 16:37:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90448.171217; Fri, 26 Feb 2021 16:37:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFg6v-0000ic-Ck; Fri, 26 Feb 2021 16:37:01 +0000
Received: by outflank-mailman (input) for mailman id 90448;
 Fri, 26 Feb 2021 16:37:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CSsz=H4=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lFg6t-0000iX-Of
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 16:37:00 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5b417ec2-1421-4f29-b858-3663f43cebf2;
 Fri, 26 Feb 2021 16:36:58 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5b417ec2-1421-4f29-b858-3663f43cebf2
Subject: Re: [xen-unstable-smoke test] 159709: regressions - FAIL
To: osstest service owner <osstest-admin@xenproject.org>,
	<xen-devel@lists.xenproject.org>, Jan Beulich <jbeulich@suse.com>, Ian
 Jackson <iwj@xenproject.org>
References: <osstest-159709-mainreport@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <a33d2c9a-00e3-caf2-afa6-48f70ef1202c@citrix.com>
Date: Fri, 26 Feb 2021 16:36:35 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
In-Reply-To: <osstest-159709-mainreport@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0095.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:8::35) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0

On 26/02/2021 16:34, osstest service owner wrote:
> flight 159709 xen-unstable-smoke real [real]
> flight 159713 xen-unstable-smoke real-retest [real]
> http://logs.test-lab.xenproject.org/osstest/logs/159709/
> http://logs.test-lab.xenproject.org/osstest/logs/159713/
>
> Regressions :-(
>
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  test-amd64-amd64-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 159704

Well - there's only one possibility here...

> commit 615367b5275a5b0123f1f1ee86c985fab234a5a4
> Author: Andrew Cooper <andrew.cooper3@citrix.com>
> Date:   Thu Feb 25 16:54:17 2021 +0000
>
>     x86/dmop: Properly fail for PV guests
>     
>     The current code has an early exit for PV guests, but it returns 0 having done
>     nothing.
>     
>     Fixes: 524a98c2ac5 ("public / x86: introduce __HYPERCALL_dm_op...")
>     Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>     Reviewed-by: Ian Jackson <iwj@xenproject.org>
>     Reviewed-by: Jan Beulich <jbeulich@suse.com>
>     Release-Acked-by: Ian Jackson <iwj@xenproject.org>

which means we've something very wonky going on somewhere.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 16:40:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 16:40:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90454.171229 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFg9x-0001ja-0d; Fri, 26 Feb 2021 16:40:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90454.171229; Fri, 26 Feb 2021 16:40:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFg9w-0001jT-Sc; Fri, 26 Feb 2021 16:40:08 +0000
Received: by outflank-mailman (input) for mailman id 90454;
 Fri, 26 Feb 2021 16:40:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3f7O=H4=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1lFg9v-0001fY-DJ
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 16:40:07 +0000
Received: from mail-wr1-x432.google.com (unknown [2a00:1450:4864:20::432])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 337976cf-1dc1-40f6-9141-39f268ce7e78;
 Fri, 26 Feb 2021 16:40:05 +0000 (UTC)
Received: by mail-wr1-x432.google.com with SMTP id v15so9197977wrx.4
 for <xen-devel@lists.xenproject.org>; Fri, 26 Feb 2021 08:40:05 -0800 (PST)
Received: from x1w.redhat.com (68.red-83-57-175.dynamicip.rima-tde.net.
 [83.57.175.68])
 by smtp.gmail.com with ESMTPSA id z9sm13754247wrv.56.2021.02.26.08.40.03
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 26 Feb 2021 08:40:03 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
X-Inumbo-ID: 337976cf-1dc1-40f6-9141-39f268ce7e78
Sender: Philippe Mathieu-Daudé <philippe.mathieu.daude@gmail.com>
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
To: qemu-devel@nongnu.org
Cc: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Daniel P. Berrangé <berrange@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	Claudio Fontana <cfontana@suse.de>,
	xen-devel@lists.xenproject.org,
	qemu-arm@nongnu.org,
	Paolo Bonzini <pbonzini@redhat.com>,
	Markus Armbruster <armbru@redhat.com>,
	Eduardo Habkost <ehabkost@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Peter Maydell <peter.maydell@linaro.org>,
	Philippe Mathieu-Daudé <f4bug@amsat.org>
Subject: [RFC PATCH] cpu: system_ops: move to cpu-system-ops.h, keep a pointer in CPUClass
Date: Fri, 26 Feb 2021 17:40:01 +0100
Message-Id: <20210226164001.4102868-1-f4bug@amsat.org>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Similarly to commit 78271684719 ("cpu: tcg_ops: move to tcg-cpu-ops.h,
keep a pointer in CPUClass"):

We cannot, in principle, make the SysEmu operations field definitions
conditional on CONFIG_SOFTMMU in code that is included by both
common_ss and specific_ss modules.

Therefore, what we can safely do to restrict the SysEmu fields to
system-emulation builds is to move all sysemu operations into a
separate header file, which is only included by system-specific code.

This leaves just a NULL pointer in cpu.h for user-mode builds.

Inspired-by: Claudio Fontana <cfontana@suse.de>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
RFC: Only ARM has been updated, to get review before continuing with the
other targets.

Based-on: <20210226163227.4097950-1-f4bug@amsat.org>
---
 include/hw/core/cpu-system-ops.h | 89 ++++++++++++++++++++++++++++++++
 include/hw/core/cpu.h            | 77 ++-------------------------
 cpu.c                            | 13 ++---
 hw/core/cpu.c                    | 43 +++++++--------
 hw/core/qdev.c                   |  1 +
 monitor/misc.c                   |  1 +
 softmmu/physmem.c                |  1 +
 stubs/xen-hw-stub.c              |  1 +
 target/arm/cpu.c                 | 24 ++++++---
 9 files changed, 141 insertions(+), 109 deletions(-)
 create mode 100644 include/hw/core/cpu-system-ops.h
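
The mechanical shape of the change, a class that holds a pointer to a static const table of hooks rather than embedding the table by value, can be sketched in self-contained C; the type and hook names below are illustrative stand-ins, not QEMU's real definitions:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Table of optional, system-emulation-only hooks. */
typedef struct SystemOps {
    bool (*paging_enabled)(void);
} SystemOps;

/* The class keeps only a pointer; user-mode builds leave it NULL. */
typedef struct CPUClassSketch {
    const SystemOps *system_ops;
} CPUClassSketch;

/* A target populates one static table and points its class at it. */
static bool arm_paging_enabled(void)
{
    return true;
}

static const SystemOps arm_system_ops = {
    .paging_enabled = arm_paging_enabled,
};

/* Callers guard both the table pointer and the individual hook
 * before dispatching, falling back to a default otherwise. */
static bool cpu_paging_enabled(const CPUClassSketch *cc)
{
    if (cc->system_ops && cc->system_ops->paging_enabled)
        return cc->system_ops->paging_enabled();
    return false;
}
```

Once the field is a pointer that is deliberately NULL in user-mode builds, every caller needs to check the table pointer itself as well as each hook, which is the main cost of this layout over the by-value struct it replaces.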

diff --git a/include/hw/core/cpu-system-ops.h b/include/hw/core/cpu-system-ops.h
new file mode 100644
index 00000000000..1554ccbdf07
--- /dev/null
+++ b/include/hw/core/cpu-system-ops.h
@@ -0,0 +1,89 @@
+/*
+ * CPU operations specific to system emulation
+ *
+ * Copyright (c) 2012 SUSE LINUX Products GmbH
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#ifndef CPU_SYSTEM_OPS_H
+#define CPU_SYSTEM_OPS_H
+
+#include "hw/core/cpu.h"
+
+/*
+ * struct CPUSystemOperations: System operations specific to a CPU class
+ */
+typedef struct CPUSystemOperations {
+    /**
+     * @get_memory_mapping: Callback for obtaining the memory mappings.
+     */
+    void (*get_memory_mapping)(CPUState *cpu, MemoryMappingList *list,
+                               Error **errp);
+    /**
+     * @get_paging_enabled: Callback for inquiring whether paging is enabled.
+     */
+    bool (*get_paging_enabled)(const CPUState *cpu);
+    /**
+     * @get_phys_page_debug: Callback for obtaining a physical address.
+     */
+    hwaddr (*get_phys_page_debug)(CPUState *cpu, vaddr addr);
+    /**
+     * @get_phys_page_attrs_debug: Callback for obtaining a physical address
+     *       and the associated memory transaction attributes to use for the
+     *       access.
+     * CPUs which use memory transaction attributes should implement this
+     * instead of get_phys_page_debug.
+     */
+    hwaddr (*get_phys_page_attrs_debug)(CPUState *cpu, vaddr addr,
+                                        MemTxAttrs *attrs);
+    /**
+     * @asidx_from_attrs: Callback to return the CPU AddressSpace to use for
+     *       a memory access with the specified memory transaction attributes.
+     */
+    int (*asidx_from_attrs)(CPUState *cpu, MemTxAttrs attrs);
+    /**
+     * @get_crash_info: Callback for reporting guest crash information in
+     * GUEST_PANICKED events.
+     */
+    GuestPanicInformation* (*get_crash_info)(CPUState *cpu);
+    /**
+     * @write_elf32_note: Callback for writing a CPU-specific ELF note to a
+     * 32-bit VM coredump.
+     */
+    int (*write_elf32_note)(WriteCoreDumpFunction f, CPUState *cpu,
+                            int cpuid, void *opaque);
+    /**
+     * @write_elf64_note: Callback for writing a CPU-specific ELF note to a
+     * 64-bit VM coredump.
+     */
+    int (*write_elf64_note)(WriteCoreDumpFunction f, CPUState *cpu,
+                            int cpuid, void *opaque);
+    /**
+     * @write_elf32_qemunote: Callback for writing a CPU- and QEMU-specific ELF
+     * note to a 32-bit VM coredump.
+     */
+    int (*write_elf32_qemunote)(WriteCoreDumpFunction f, CPUState *cpu,
+                                void *opaque);
+    /**
+     * @write_elf64_qemunote: Callback for writing a CPU- and QEMU-specific ELF
+     * note to a 64-bit VM coredump.
+     */
+    int (*write_elf64_qemunote)(WriteCoreDumpFunction f, CPUState *cpu,
+                                void *opaque);
+    /**
+     * @virtio_is_big_endian: Callback to return %true if a CPU which supports
+     *       runtime configurable endianness is currently big-endian.
+     * Non-configurable CPUs can use the default implementation of this method.
+     * This method should not be used by any callers other than the pre-1.0
+     * virtio devices.
+     */
+    bool (*virtio_is_big_endian)(CPUState *cpu);
+    /**
+     * @vmsd: State description for migration.
+     */
+    const VMStateDescription *vmsd;
+} CPUSystemOperations;
+
+#endif /* CPU_SYSTEM_OPS_H */
diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
index 29e1623f775..ef65011f206 100644
--- a/include/hw/core/cpu.h
+++ b/include/hw/core/cpu.h
@@ -80,79 +80,8 @@ struct TCGCPUOps;
 /* see accel-cpu.h */
 struct AccelCPUClass;
 
-/*
- * struct CPUSystemOperations: System operations specific to a CPU class
- */
-typedef struct CPUSystemOperations {
-    /**
-     * @get_memory_mapping: Callback for obtaining the memory mappings.
-     */
-    void (*get_memory_mapping)(CPUState *cpu, MemoryMappingList *list,
-                               Error **errp);
-    /**
-     * @get_paging_enabled: Callback for inquiring whether paging is enabled.
-     */
-    bool (*get_paging_enabled)(const CPUState *cpu);
-    /**
-     * @get_phys_page_debug: Callback for obtaining a physical address.
-     */
-    hwaddr (*get_phys_page_debug)(CPUState *cpu, vaddr addr);
-    /**
-     * @get_phys_page_attrs_debug: Callback for obtaining a physical address
-     *       and the associated memory transaction attributes to use for the
-     *       access.
-     * CPUs which use memory transaction attributes should implement this
-     * instead of get_phys_page_debug.
-     */
-    hwaddr (*get_phys_page_attrs_debug)(CPUState *cpu, vaddr addr,
-                                        MemTxAttrs *attrs);
-    /**
-     * @asidx_from_attrs: Callback to return the CPU AddressSpace to use for
-     *       a memory access with the specified memory transaction attributes.
-     */
-    int (*asidx_from_attrs)(CPUState *cpu, MemTxAttrs attrs);
-    /**
-     * @get_crash_info: Callback for reporting guest crash information in
-     * GUEST_PANICKED events.
-     */
-    GuestPanicInformation* (*get_crash_info)(CPUState *cpu);
-    /**
-     * @write_elf32_note: Callback for writing a CPU-specific ELF note to a
-     * 32-bit VM coredump.
-     */
-    int (*write_elf32_note)(WriteCoreDumpFunction f, CPUState *cpu,
-                            int cpuid, void *opaque);
-    /**
-     * @write_elf64_note: Callback for writing a CPU-specific ELF note to a
-     * 64-bit VM coredump.
-     */
-    int (*write_elf64_note)(WriteCoreDumpFunction f, CPUState *cpu,
-                            int cpuid, void *opaque);
-    /**
-     * @write_elf32_qemunote: Callback for writing a CPU- and QEMU-specific ELF
-     * note to a 32-bit VM coredump.
-     */
-    int (*write_elf32_qemunote)(WriteCoreDumpFunction f, CPUState *cpu,
-                                void *opaque);
-    /**
-     * @write_elf64_qemunote: Callback for writing a CPU- and QEMU-specific ELF
-     * note to a 64-bit VM coredump.
-     */
-    int (*write_elf64_qemunote)(WriteCoreDumpFunction f, CPUState *cpu,
-                                void *opaque);
-    /**
-     * @virtio_is_big_endian: Callback to return %true if a CPU which supports
-     *       runtime configurable endianness is currently big-endian.
-     * Non-configurable CPUs can use the default implementation of this method.
-     * This method should not be used by any callers other than the pre-1.0
-     * virtio devices.
-     */
-    bool (*virtio_is_big_endian)(CPUState *cpu);
-    /**
-     * @vmsd: State description for migration.
-     */
-    const VMStateDescription *vmsd;
-} CPUSystemOperations;
+/* see cpu-system-ops.h */
+struct CPUSystemOperations;
 
 /**
  * CPUClass:
@@ -224,7 +153,7 @@ struct CPUClass {
     struct AccelCPUClass *accel_cpu;
 
     /* when system emulation is not available, this pointer is NULL */
-    struct CPUSystemOperations system_ops;
+    struct CPUSystemOperations *system_ops;
 
     /* when TCG is not available, this pointer is NULL */
     struct TCGCPUOps *tcg_ops;
diff --git a/cpu.c b/cpu.c
index 619b8c14f94..9a1792edaec 100644
--- a/cpu.c
+++ b/cpu.c
@@ -36,6 +36,7 @@
 #include "sysemu/replay.h"
 #include "exec/translate-all.h"
 #include "exec/log.h"
+#include "hw/core/cpu-system-ops.h"
 
 uintptr_t qemu_host_page_size;
 intptr_t qemu_host_page_mask;
@@ -138,13 +139,13 @@ void cpu_exec_realizefn(CPUState *cpu, Error **errp)
 #endif /* CONFIG_TCG */
 
 #ifdef CONFIG_USER_ONLY
-    assert(cc->system_ops.vmsd == NULL);
+    assert(cc->system_ops->vmsd == NULL);
 #else
     if (qdev_get_vmsd(DEVICE(cpu)) == NULL) {
         vmstate_register(NULL, cpu->cpu_index, &vmstate_cpu_common, cpu);
     }
-    if (cc->system_ops.vmsd != NULL) {
-        vmstate_register(NULL, cpu->cpu_index, cc->system_ops.vmsd, cpu);
+    if (cc->system_ops->vmsd != NULL) {
+        vmstate_register(NULL, cpu->cpu_index, cc->system_ops->vmsd, cpu);
     }
 #endif /* CONFIG_USER_ONLY */
 }
@@ -154,10 +155,10 @@ void cpu_exec_unrealizefn(CPUState *cpu)
     CPUClass *cc = CPU_GET_CLASS(cpu);
 
 #ifdef CONFIG_USER_ONLY
-    assert(cc->system_ops.vmsd == NULL);
+    assert(cc->system_ops->vmsd == NULL);
 #else
-    if (cc->system_ops.vmsd != NULL) {
-        vmstate_unregister(NULL, cc->system_ops.vmsd, cpu);
+    if (cc->system_ops->vmsd != NULL) {
+        vmstate_unregister(NULL, cc->system_ops->vmsd, cpu);
     }
     if (qdev_get_vmsd(DEVICE(cpu)) == NULL) {
         vmstate_unregister(NULL, &vmstate_cpu_common, cpu);
diff --git a/hw/core/cpu.c b/hw/core/cpu.c
index 8bd7bda6b0b..0c58d81b6a5 100644
--- a/hw/core/cpu.c
+++ b/hw/core/cpu.c
@@ -21,6 +21,7 @@
 #include "qemu/osdep.h"
 #include "qapi/error.h"
 #include "hw/core/cpu.h"
+#include "hw/core/cpu-system-ops.h"
 #include "sysemu/hw_accel.h"
 #include "qemu/notify.h"
 #include "qemu/log.h"
@@ -71,8 +72,8 @@ bool cpu_paging_enabled(const CPUState *cpu)
 {
     CPUClass *cc = CPU_GET_CLASS(cpu);
 
-    if (cc->system_ops.get_paging_enabled) {
-        return cc->system_ops.get_paging_enabled(cpu);
+    if (cc->system_ops->get_paging_enabled) {
+        return cc->system_ops->get_paging_enabled(cpu);
     }
 
     return false;
@@ -83,8 +84,8 @@ void cpu_get_memory_mapping(CPUState *cpu, MemoryMappingList *list,
 {
     CPUClass *cc = CPU_GET_CLASS(cpu);
 
-    if (cc->system_ops.get_memory_mapping) {
-        cc->system_ops.get_memory_mapping(cpu, list, errp);
+    if (cc->system_ops->get_memory_mapping) {
+        cc->system_ops->get_memory_mapping(cpu, list, errp);
         return;
     }
 
@@ -96,12 +97,12 @@ hwaddr cpu_get_phys_page_attrs_debug(CPUState *cpu, vaddr addr,
 {
     CPUClass *cc = CPU_GET_CLASS(cpu);
 
-    if (cc->system_ops.get_phys_page_attrs_debug) {
-        return cc->system_ops.get_phys_page_attrs_debug(cpu, addr, attrs);
+    if (cc->system_ops->get_phys_page_attrs_debug) {
+        return cc->system_ops->get_phys_page_attrs_debug(cpu, addr, attrs);
     }
     /* Fallback for CPUs which don't implement the _attrs_ hook */
     *attrs = MEMTXATTRS_UNSPECIFIED;
-    return cc->system_ops.get_phys_page_debug(cpu, addr);
+    return cc->system_ops->get_phys_page_debug(cpu, addr);
 }
 
 hwaddr cpu_get_phys_page_debug(CPUState *cpu, vaddr addr)
@@ -116,8 +117,8 @@ int cpu_asidx_from_attrs(CPUState *cpu, MemTxAttrs attrs)
     CPUClass *cc = CPU_GET_CLASS(cpu);
     int ret = 0;
 
-    if (cc->system_ops.asidx_from_attrs) {
-        ret = cc->system_ops.asidx_from_attrs(cpu, attrs);
+    if (cc->system_ops->asidx_from_attrs) {
+        ret = cc->system_ops->asidx_from_attrs(cpu, attrs);
         assert(ret < cpu->num_ases && ret >= 0);
     }
     return ret;
@@ -151,10 +152,10 @@ int cpu_write_elf32_qemunote(WriteCoreDumpFunction f, CPUState *cpu,
 {
     CPUClass *cc = CPU_GET_CLASS(cpu);
 
-    if (!cc->system_ops.write_elf32_qemunote) {
+    if (!cc->system_ops->write_elf32_qemunote) {
         return 0;
     }
-    return (*cc->system_ops.write_elf32_qemunote)(f, cpu, opaque);
+    return (*cc->system_ops->write_elf32_qemunote)(f, cpu, opaque);
 }
 
 int cpu_write_elf32_note(WriteCoreDumpFunction f, CPUState *cpu,
@@ -162,10 +163,10 @@ int cpu_write_elf32_note(WriteCoreDumpFunction f, CPUState *cpu,
 {
     CPUClass *cc = CPU_GET_CLASS(cpu);
 
-    if (!cc->system_ops.write_elf32_note) {
+    if (!cc->system_ops->write_elf32_note) {
         return -1;
     }
-    return (*cc->system_ops.write_elf32_note)(f, cpu, cpuid, opaque);
+    return (*cc->system_ops->write_elf32_note)(f, cpu, cpuid, opaque);
 }
 
 int cpu_write_elf64_qemunote(WriteCoreDumpFunction f, CPUState *cpu,
@@ -173,10 +174,10 @@ int cpu_write_elf64_qemunote(WriteCoreDumpFunction f, CPUState *cpu,
 {
     CPUClass *cc = CPU_GET_CLASS(cpu);
 
-    if (!cc->system_ops.write_elf64_qemunote) {
+    if (!cc->system_ops->write_elf64_qemunote) {
         return 0;
     }
-    return (*cc->system_ops.write_elf64_qemunote)(f, cpu, opaque);
+    return (*cc->system_ops->write_elf64_qemunote)(f, cpu, opaque);
 }
 
 int cpu_write_elf64_note(WriteCoreDumpFunction f, CPUState *cpu,
@@ -184,10 +185,10 @@ int cpu_write_elf64_note(WriteCoreDumpFunction f, CPUState *cpu,
 {
     CPUClass *cc = CPU_GET_CLASS(cpu);
 
-    if (!cc->system_ops.write_elf64_note) {
+    if (!cc->system_ops->write_elf64_note) {
         return -1;
     }
-    return (*cc->system_ops.write_elf64_note)(f, cpu, cpuid, opaque);
+    return (*cc->system_ops->write_elf64_note)(f, cpu, cpuid, opaque);
 }
 
 static int cpu_common_gdb_read_register(CPUState *cpu, GByteArray *buf, int reg)
@@ -204,8 +205,8 @@ bool cpu_virtio_is_big_endian(CPUState *cpu)
 {
     CPUClass *cc = CPU_GET_CLASS(cpu);
 
-    if (cc->system_ops.virtio_is_big_endian) {
-        return cc->system_ops.virtio_is_big_endian(cpu);
+    if (cc->system_ops->virtio_is_big_endian) {
+        return cc->system_ops->virtio_is_big_endian(cpu);
     }
     return target_words_bigendian();
 }
@@ -220,8 +221,8 @@ GuestPanicInformation *cpu_get_crash_info(CPUState *cpu)
     CPUClass *cc = CPU_GET_CLASS(cpu);
     GuestPanicInformation *res = NULL;
 
-    if (cc->system_ops.get_crash_info) {
-        res = cc->system_ops.get_crash_info(cpu);
+    if (cc->system_ops->get_crash_info) {
+        res = cc->system_ops->get_crash_info(cpu);
     }
     return res;
 }
diff --git a/hw/core/qdev.c b/hw/core/qdev.c
index cefc5eaa0a9..b2de42ed6ce 100644
--- a/hw/core/qdev.c
+++ b/hw/core/qdev.c
@@ -38,6 +38,7 @@
 #include "hw/boards.h"
 #include "hw/sysbus.h"
 #include "hw/qdev-clock.h"
+#include "hw/core/cpu-system-ops.h"
 #include "migration/vmstate.h"
 #include "trace.h"
 
diff --git a/monitor/misc.c b/monitor/misc.c
index a7650ed7470..8feb34c1633 100644
--- a/monitor/misc.c
+++ b/monitor/misc.c
@@ -77,6 +77,7 @@
 #include "qapi/qmp-event.h"
 #include "sysemu/cpus.h"
 #include "qemu/cutils.h"
+#include "hw/core/cpu-system-ops.h"
 
 #if defined(TARGET_S390X)
 #include "hw/s390x/storage-keys.h"
diff --git a/softmmu/physmem.c b/softmmu/physmem.c
index 19e0aa9836a..06b72bee2d7 100644
--- a/softmmu/physmem.c
+++ b/softmmu/physmem.c
@@ -28,6 +28,7 @@
 #ifdef CONFIG_TCG
 #include "hw/core/tcg-cpu-ops.h"
 #endif /* CONFIG_TCG */
+#include "hw/core/cpu-system-ops.h"
 
 #include "exec/exec-all.h"
 #include "exec/target_page.h"
diff --git a/stubs/xen-hw-stub.c b/stubs/xen-hw-stub.c
index 15f3921a76b..2af9c1de9d9 100644
--- a/stubs/xen-hw-stub.c
+++ b/stubs/xen-hw-stub.c
@@ -7,6 +7,7 @@
  */
 
 #include "qemu/osdep.h"
+#include "hw/core/cpu-system-ops.h"
 #include "hw/xen/xen.h"
 #include "hw/xen/xen-x86.h"
 
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index 87a581fa47c..90fe3bfaaf3 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -2278,6 +2278,19 @@ static struct TCGCPUOps arm_tcg_ops = {
 };
 #endif /* CONFIG_TCG */
 
+#ifndef CONFIG_USER_ONLY
+#include "hw/core/cpu-system-ops.h"
+
+static struct CPUSystemOperations arm_sysemu_ops = {
+    .vmsd = &vmstate_arm_cpu,
+    .get_phys_page_attrs_debug = arm_cpu_get_phys_page_attrs_debug,
+    .asidx_from_attrs = arm_asidx_from_attrs,
+    .virtio_is_big_endian = arm_cpu_virtio_is_big_endian,
+    .write_elf64_note = arm_cpu_write_elf64_note,
+    .write_elf32_note = arm_cpu_write_elf32_note,
+};
+#endif
+
 static void arm_cpu_class_init(ObjectClass *oc, void *data)
 {
     ARMCPUClass *acc = ARM_CPU_CLASS(oc);
@@ -2296,14 +2309,6 @@ static void arm_cpu_class_init(ObjectClass *oc, void *data)
     cc->set_pc = arm_cpu_set_pc;
     cc->gdb_read_register = arm_cpu_gdb_read_register;
     cc->gdb_write_register = arm_cpu_gdb_write_register;
-#ifndef CONFIG_USER_ONLY
-    cc->system_ops.get_phys_page_attrs_debug = arm_cpu_get_phys_page_attrs_debug;
-    cc->system_ops.asidx_from_attrs = arm_asidx_from_attrs;
-    cc->system_ops.vmsd = &vmstate_arm_cpu;
-    cc->system_ops.virtio_is_big_endian = arm_cpu_virtio_is_big_endian;
-    cc->system_ops.write_elf64_note = arm_cpu_write_elf64_note;
-    cc->system_ops.write_elf32_note = arm_cpu_write_elf32_note;
-#endif
     cc->gdb_num_core_regs = 26;
     cc->gdb_core_xml_file = "arm-core.xml";
     cc->gdb_arch_name = arm_gdb_arch_name;
@@ -2314,6 +2319,9 @@ static void arm_cpu_class_init(ObjectClass *oc, void *data)
 #ifdef CONFIG_TCG
     cc->tcg_ops = &arm_tcg_ops;
 #endif /* CONFIG_TCG */
+#ifndef CONFIG_USER_ONLY
+    cc->system_ops = &arm_sysemu_ops;
+#endif
 }
 
 #ifdef CONFIG_KVM
-- 
2.26.2
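The refactoring in the patch above follows the usual "table of operations behind a const pointer" pattern. A minimal standalone sketch of that shape (hypothetical names, not actual QEMU code):

```c
#include <stddef.h>

/* Hook table shared by all instances of a class -- this sketch's
 * stand-in for QEMU's CPUSystemOperations (names are illustrative). */
typedef struct SystemOps {
    int (*asidx_from_attrs)(int attrs);   /* optional hook; may be NULL */
} SystemOps;

typedef struct Class {
    const SystemOps *system_ops;          /* a pointer, not an embedded struct */
} Class;

static int arm_asidx(int attrs)
{
    return attrs & 1;                     /* toy implementation */
}

/* One static const table per target, installed once at class init,
 * replacing per-class assignment of each individual member. */
static const SystemOps arm_ops = { .asidx_from_attrs = arm_asidx };

/* Callers dereference the pointer and still NULL-check each hook. */
int asidx_from_attrs(const Class *cc, int attrs)
{
    if (cc->system_ops && cc->system_ops->asidx_from_attrs) {
        return cc->system_ops->asidx_from_attrs(attrs);
    }
    return 0;                             /* default address space index */
}
```

The win is the same one the patch is after: the hook table becomes a single shared, read-only object per target rather than per-class copied members.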



From xen-devel-bounces@lists.xenproject.org Fri Feb 26 16:44:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 16:44:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90459.171244 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFgEO-0001uw-MQ; Fri, 26 Feb 2021 16:44:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90459.171244; Fri, 26 Feb 2021 16:44:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFgEO-0001up-JJ; Fri, 26 Feb 2021 16:44:44 +0000
Received: by outflank-mailman (input) for mailman id 90459;
 Fri, 26 Feb 2021 16:44:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=eEmz=H4=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lFgEN-0001uk-Ao
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 16:44:43 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ac001091-6edb-48e4-8c23-7db429185820;
 Fri, 26 Feb 2021 16:44:42 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id AA08DAC6E;
 Fri, 26 Feb 2021 16:44:41 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ac001091-6edb-48e4-8c23-7db429185820
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614357881; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=dqK8mTylXLft1nQhfFmEKN6OQ3lpRwHTJi9dr9srQfc=;
	b=JTAHXPQ11404kE7YL6pQtnRhs103zT2pwFSMh6BNYAzkVO9qJWLdz12kGDEj20L2wUKJKV
	NYMqL0Y80A1T9oDaEXRxi2YXXUMfQbgntlXyLYtBiDaDh+ue+Hgob+GJLuDecTc0EPrOdm
	PC4sS9iscCSZ8K66Zzx85x4nn57RCmk=
Subject: Re: [xen-unstable-smoke test] 159709: regressions - FAIL
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <osstest-159709-mainreport@xen.org>
 <a33d2c9a-00e3-caf2-afa6-48f70ef1202c@citrix.com>
Cc: osstest service owner <osstest-admin@xenproject.org>,
 xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <aa088007-af68-d4cc-6764-03e6efc9e0d5@suse.com>
Date: Fri, 26 Feb 2021 17:44:42 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <a33d2c9a-00e3-caf2-afa6-48f70ef1202c@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 26.02.2021 17:36, Andrew Cooper wrote:
> On 26/02/2021 16:34, osstest service owner wrote:
>> flight 159709 xen-unstable-smoke real [real]
>> flight 159713 xen-unstable-smoke real-retest [real]
>> http://logs.test-lab.xenproject.org/osstest/logs/159709/
>> http://logs.test-lab.xenproject.org/osstest/logs/159713/
>>
>> Regressions :-(
>>
>> Tests which did not succeed and are blocking,
>> including tests which could not be run:
>>  test-amd64-amd64-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 159704
> 
> Well - there's only one possibility here...
> 
>> commit 615367b5275a5b0123f1f1ee86c985fab234a5a4
>> Author: Andrew Cooper <andrew.cooper3@citrix.com>
>> Date:   Thu Feb 25 16:54:17 2021 +0000
>>
>>     x86/dmop: Properly fail for PV guests
>>     
>>     The current code has an early exit for PV guests, but it returns 0 having done
>>     nothing.
>>     
>>     Fixes: 524a98c2ac5 ("public / x86: introduce __HYPERCALL_dm_op...")
>>     Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>     Reviewed-by: Ian Jackson <iwj@xenproject.org>
>>     Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>     Release-Acked-by: Ian Jackson <iwj@xenproject.org>
> 
> which means we've something very wonky going on somewhere.

Or it's the heisenbug, and the next flight is going to be fine.
I've not spotted anything unusual in the logs so far.

Jan
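The shape of the fix under suspicion is small; a hypothetical standalone sketch (the struct layout and the exact error code are assumptions for illustration, not taken from this thread):

```c
#include <errno.h>
#include <stdbool.h>

struct domain {
    bool is_hvm;   /* simplified stand-in for is_hvm_domain(d) */
};

/* Sketch of a dm_op-style entry point: device-model ops only make
 * sense for HVM guests, so PV callers must get a real error back. */
long do_dm_op(const struct domain *d)
{
    /* Before the fix the PV path was a silent no-op: it returned 0
     * having done nothing, letting the guest believe it succeeded. */
    if (!d->is_hvm)
        return -EOPNOTSUPP;   /* assumed errno; illustrative only */

    return 0;                 /* real work would happen here */
}
```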


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 16:50:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 16:50:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90465.171259 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFgJX-0002XP-CV; Fri, 26 Feb 2021 16:50:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90465.171259; Fri, 26 Feb 2021 16:50:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFgJX-0002X1-7c; Fri, 26 Feb 2021 16:50:03 +0000
Received: by outflank-mailman (input) for mailman id 90465;
 Fri, 26 Feb 2021 16:50:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TsVJ=H4=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1lFgJW-0002QE-Fm
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 16:50:02 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6a1738df-d4cb-4b78-98c9-ed2f1e89001f;
 Fri, 26 Feb 2021 16:50:01 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id AC48BAC6E;
 Fri, 26 Feb 2021 16:50:00 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6a1738df-d4cb-4b78-98c9-ed2f1e89001f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1614358200; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=MQAut+uWR0IdJHP1Ov+l+NFEplJCkTn5IaOK0g2Xjyk=;
	b=ONBBvTbr6p8QbDbDiHLm5T5Kl8Bq6h2SLsqgGICCfeIHXcnzzrjnRNHY/PlcJxsXo4yBlQ
	SaxAH87OErZVODDmIut/D+dAXEfTTWk30colUwje5nH9x/KDXwz98HV7CkEp1ZvvkJVYnH
	tl1WaVWdg3pk5qQO6Fmz5g67gLhnLk4=
Message-ID: <8afb67d3185ecece6853eb04663be088a84d48c5.camel@suse.com>
Subject: Re: [PATCH for-next 3/6] xen/sched: Fix build when NR_CPUS == 1
From: Dario Faggioli <dfaggioli@suse.com>
To: Jan Beulich <jbeulich@suse.com>, Connor Davis <connojdavis@gmail.com>
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>, George Dunlap
	 <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Date: Fri, 26 Feb 2021 17:49:59 +0100
In-Reply-To: <eb19a389-d2b3-d0cc-fd25-62bbb121cf98@suse.com>
References: <cover.1614265718.git.connojdavis@gmail.com>
	 <d0922adc698ab76223d76a0a7f328a72cedf00ad.1614265718.git.connojdavis@gmail.com>
	 <b4ad0f83-e071-49f8-17a8-7fec0e226b9a@suse.com>
	 <20210226030833.uugfojf5kkxhlpr7@thewall>
	 <eb19a389-d2b3-d0cc-fd25-62bbb121cf98@suse.com>
Content-Type: multipart/signed; micalg="pgp-sha256";
	protocol="application/pgp-signature"; boundary="=-aaKHRH6qYFvpUc2r7pER"
User-Agent: Evolution 3.38.4 (by Flathub.org) 
MIME-Version: 1.0


--=-aaKHRH6qYFvpUc2r7pER
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

On Fri, 2021-02-26 at 09:31 +0100, Jan Beulich wrote:
> On 26.02.2021 04:08, Connor Davis wrote:
> > On Thu, Feb 25, 2021 at 04:50:02PM +0100, Jan Beulich wrote:
> > > On 25.02.2021 16:24, Connor Davis wrote:
> > > > index 9745a77eee..f5ec65bf9b 100644
> > > > --- a/xen/common/sched/core.c
> > > > +++ b/xen/common/sched/core.c
> > > > @@ -2763,7 +2763,7 @@ static int cpu_schedule_up(unsigned int cpu)
> > > >      cpumask_set_cpu(cpu, &sched_res_mask);
> > > > 
> > > >      /* Boot CPU is dealt with later in scheduler_init(). */
> > > > -    if ( cpu == 0 )
> > > > +    if ( cpu == 0 || NR_CPUS == 1 )
> > > >          return 0;
> > > > 
> > > >      if ( idle_vcpu[cpu] == NULL )
> 
> @@ -2769,6 +2769,12 @@ static int cpu_schedule_up(unsigned int cpu)
>      if ( cpu == 0 )
>          return 0;
> 
> +    /*
> +     * Guard in particular also against the compiler suspecting out-of-bounds
> +     * array accesses below when NR_CPUS=1.
> +     */
> +    BUG_ON(cpu >= NR_CPUS);
> +
>
I would be fine with this.

Actually, I do prefer it too, over the "is index 0 or is the array
length 1" check, which I also find confusing.
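For readers skimming the thread, the two guards being compared can be seen in a tiny standalone sketch (hypothetical harness: BUG_ON is modelled with assert, and the real code lives in xen/common/sched/core.c):

```c
#include <assert.h>
#include <stddef.h>

#define NR_CPUS 4                      /* build-time CPU limit, as in Xen */
#define BUG_ON(cond) assert(!(cond))   /* stand-in for Xen's BUG_ON() */

static void *idle_vcpu[NR_CPUS];

/* Variant proposed first: special-case NR_CPUS == 1 in the condition.
 * It works, but reads as an "is index 0 or is the array length 1" check. */
int cpu_schedule_up_v1(unsigned int cpu)
{
    if (cpu == 0 || NR_CPUS == 1)
        return 0;
    return idle_vcpu[cpu] == NULL;
}

/* Variant preferred in the thread: state the invariant once.  Besides
 * documenting intent, it tells the compiler that cpu < NR_CPUS, so it
 * stops suspecting out-of-bounds array accesses when NR_CPUS == 1. */
int cpu_schedule_up_v2(unsigned int cpu)
{
    if (cpu == 0)
        return 0;
    BUG_ON(cpu >= NR_CPUS);
    return idle_vcpu[cpu] == NULL;
}
```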

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)

--=-aaKHRH6qYFvpUc2r7pER
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAmA5JrcACgkQFkJ4iaW4
c+4F1w/9FQ8Y0D8cR4vdcVl6CzNI8uHXmXEvvjx0NiSID4wm8/gLYIJGhp6DeEv6
7wgI/L85EaYVdUCByR44uS8lnCOeu1SRQOd3Av/drGUtor3zs1f+u1h4umcVKPlr
DibSqQyi4CsjvIWVfakWiYNGzYubQ15YI6KVPfBlxm2iFt6MFuWi8WvQGQgnV9EZ
1DCD+lMkjYvzVbg+G/3B5CqC+6jDp7GrZsS2TmdKhU7A1/xOpQvYGs/GjWXgGO79
xtXO9kRvOVMNjhCwNkWZf620xOg9F8Pqz21xXijDogjv4Ugm7Vdc6vfsV36d8ddv
u8X5YB4GVsrOpU1IrD/GnoZmL6z7cGXzHkFBIb4ywvp4vZGbpzNAiwlgg5+fZa9l
RkCZac6iYZegNv4yY1VwNUI5HGxLBm89/64tO17f/baLdMHHA/mV7rBNtWIuGvD/
51G93wGBHWemZS0P00cOZxVIuG8pk7Hb9n0YsCGrbrX8GCiLEAnac0eVsDikUQ+a
nYvU9IHl7rk1RAli+fteTVaTsqPjmbHvhoo+OiAmeMT/Ssh0YlWef2rB375Bxf9H
VWJXXW56jAJ3tzeVIxd5q1RrYoNTEIpF6Tuoc+APONGU3vWmIpM3CUHGQbVBVnL1
OttplLJla25LOsvQDa1YlYJt9fvbAX5iqBlYLf9fLfZ7HppX2vs=
=S++2
-----END PGP SIGNATURE-----

--=-aaKHRH6qYFvpUc2r7pER--



From xen-devel-bounces@lists.xenproject.org Fri Feb 26 16:57:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 16:57:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90470.171274 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFgQ4-0003CZ-3J; Fri, 26 Feb 2021 16:56:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90470.171274; Fri, 26 Feb 2021 16:56:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFgQ4-0003CS-0M; Fri, 26 Feb 2021 16:56:48 +0000
Received: by outflank-mailman (input) for mailman id 90470;
 Fri, 26 Feb 2021 16:56:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CSsz=H4=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lFgQ2-0003CN-Pb
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 16:56:46 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d20dde0b-11a4-4d7a-bc24-7dc151004b72;
 Fri, 26 Feb 2021 16:56:45 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d20dde0b-11a4-4d7a-bc24-7dc151004b72
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614358605;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=cWYB9IsTD+qrlGJpq52pabGhea51goCLkz8tNK+CHcw=;
  b=S2Jdxjir5Z1dHejJaXm/T2HIF6UTnWTbO52DPeCGRyJ4TLPycpl/Nou5
   LjSdF/BdLFiS2kKNlYStfGVrHk6WLmPGNQxYp3YKefRZrC/nb0006NDLA
   xGvioLUtQPplDn+w6Da+LHeWP/KYcNNFqRd73cRhngXzhjYPmdzoY++Cy
   Y=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: FWMcR9mtOiyoi2rPVN6rkM/o8QDUh6FJXGbkHTy4n1rOA7ULk3UAswna5Du4Em7hdwksVej4ep
 ABVytrtdlkvxbxO7H5yPXuMQy3lneEUQZXhf3kpYd353DeT18yENZiK8BckkkmD4eIOA+I8ruz
 /9Lu1E5ckj1o44/6LwlYu9aaCMqv7STQwba+sHRiUus+Au602bupr3Qar2UiFmbqqM8VPfttal
 nzG2IyZJmMxivGMVgvx/Rc2F6P6OmqEkTkXAKkns5g3LcRTGPjlKAW8StL6blyaQcZvJuXPhbV
 kR4=
X-SBRS: 5.2
X-MesageID: 38137569
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,209,1610427600"; 
   d="scan'208";a="38137569"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=XXIZfLHZJOL3Vt43JqA67bidLDugjzm01WA74kYsh4hKPvtLRrJzWnPLkGrcmaWZDTWqs7touCtW/gpkdimHSacdcuNXFQzZ24gYjHCPXrzqtYhIF2syMxXXkLIEsCZvydEcCCv0dz85/u9VouTxz1fbpR4qdnsq8FXD/pa2eKZyL9y0iafV2jX5R/GS98PA/LUy+2bBRUrBSbpUPozQgZv997P30OnRThVDY13jhRjhwtviPPicVn6YXDccMyGxLgIvOfpzTfju4qRxjjFNms6DbdNZwEIqnftWeHmZC1fxDJpy1XRmFiA8SIowLAyY6ysh9eynZ4PfrqPYcR93Hw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=9hoPS/paFSn1bbFxqzXZ9/mrVuQvZQ+8Ti19QRzKOPg=;
 b=MW3HwasM8UlL9MBIUJ2TN1279A++OkIgSxngRFNeLZbe3Qty/1asv2radZgHTkcKnvymGTifLMYxpuTidqTaYtJN5+iHF2RWAiBkUc8eCkHueQqpxH0YgdNw+xZ9a8aemBbZeTDYXmzwYqfydSayzwY5ggWLLFm+tH2R30pKwpmzn3DXmZSd5cqzk3qXcxQKJbtdwZanjgWdR7NyozHFNAe/N9+36cOkTcNRfPdLhziNjb6ukvRgD6lLLmx+9NsoQhHJxpGUKpNuc9zjTfTMS81aGz3hJ99X0I6Rvekr56zdL/2LR4jT7HI5EnCOA4slGXMVHjz0PXFU1QwvMgcEjA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=9hoPS/paFSn1bbFxqzXZ9/mrVuQvZQ+8Ti19QRzKOPg=;
 b=h1ylYR4JwSbaeDzPpJdTp+nTokVWphsFsKFsDU2PCKvsLIY7E0i1X0T6xgED34b1MBaaZLTOU1MezAu+RkWNanwSb/1yNvQhfZiHDKEOlVngeXaykIPcKv1RDaC+ECEAWLkeBoSLE+HjIGJzM/f1cf2lXqkE2QiDDywDB2wvJuY=
Subject: Re: [PATCH for-next 5/6] xen: Add files needed for minimal riscv
 build
To: Bob Eshleman <bobbyeshleman@gmail.com>, Connor Davis
	<connojdavis@gmail.com>, <xen-devel@lists.xenproject.org>
CC: George Dunlap <george.dunlap@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>, Tamas K Lengyel <tamas@tklengyel.com>, Alexandru Isaila
	<aisaila@bitdefender.com>, Petre Pircalabu <ppircalabu@bitdefender.com>
References: <cover.1614265718.git.connojdavis@gmail.com>
 <7652ce3486c026a3a9f7d850170ea81ba8a18bdb.1614265718.git.connojdavis@gmail.com>
 <9b441529-c5a4-4299-1155-869bcdab06b0@citrix.com>
 <0d58bca1-0998-1114-d023-0d8a5a193961@gmail.com>
 <c6ed745b-3847-b878-f683-2d1041be22a1@citrix.com>
 <247706ec-c011-5217-7b6e-deedf22cac84@gmail.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <4834f606-d691-88ab-6a75-34ecd4721389@citrix.com>
Date: Fri, 26 Feb 2021 16:56:32 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
In-Reply-To: <247706ec-c011-5217-7b6e-deedf22cac84@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0421.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:18b::12) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 0935b7f2-7367-4a7c-41a4-08d8da777d42
X-MS-TrafficTypeDiagnostic: SJ0PR03MB5597:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <SJ0PR03MB559739B1BD03B38790ED038BBA9D9@SJ0PR03MB5597.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 3r9p77cGI2wiq8fLeoOQahGQE671LkdYBGNvm9yNFHIIzlslX1rL3LbP3TQCF4hnfIKRwY1QeoJPHxjdViQn+h9vEjqmFqHYX/NFBWQFUdyOvaEux9mWjOzIK1eY87eR9FZtxlE7jiw5aUMSwe1qhv9l0NWiPXYM/Gg5FA3y7ZPMUrwf79OuH+O5z7KFMPnR49ugoxawGU5/PEBH7Z2oiKYJKIhUh9uaS62dHMZua8TlsQwAHLVTZMH3x1Dm9Ftn2ulea9x1o1eymoSth9+JR1e4ug6TyTrUuIgFkfB69lcmuL2QN8WNGloECAdsBp2JwF9jNmROEwHe1B3obU6xspAxsXa6QpSnwRD8LAZCFHCbcOJZFobwUR86hEOCZd7r82YrQ0T0dvKPY/LSptjMLhv0S2UonhM8Po84uE3TVbPgozoXCr49LmonjqfjpLZQfFkJ+s69Atpun9uGs3wkYcUcTYojmIOuNjTJQJKBJ5NRwLmx7K2Y9HfDOa/f0upbJea9FNppZJ0ob/JLT7Ixf6wkAnweap7nLiqP1mfMN3NYv4YEdE+JZ7e7Pptihh0dAPfS9AZglc04QeV43cGQ4KaxA4udcq71obIAA2Tt0Gw=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(396003)(376002)(366004)(39860400002)(346002)(6666004)(66556008)(16526019)(186003)(956004)(478600001)(6486002)(26005)(2616005)(66946007)(36756003)(31686004)(83380400001)(5660300002)(86362001)(66476007)(4326008)(110136005)(7416002)(316002)(54906003)(2906002)(8676002)(16576012)(8936002)(31696002)(53546011)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?eDV5UmlRMWhTVHRVK1pjNGZRV2tCd0kwZzhTcXd5ZmpSTXZmRDVWUlhZU25S?=
 =?utf-8?B?dW50YkNwUWpBeVFlQ1FvTFpEdWpIbmxSRFMxUUJBN0tjVTY1UXYybUJCejhs?=
 =?utf-8?B?UlVyMXpQeVdWZ2VXQmw5QUwvTzAvZmo1dnhLWXNwT2xCWml0SnBmV3g5ZG9N?=
 =?utf-8?B?UE4wbzBLQ0ZqanZwZ0UyRnZKeUwyWW4vdjN4aFk0dWJFM3A1T05pMjEwL0x1?=
 =?utf-8?B?T2gxYVg2WW0zcHhITk0raE9lSHRjSFM1aVZsOXVCL0NOb25heGM3aE5QdjJD?=
 =?utf-8?B?ZWVjRWx2MVJNRUNiUm1vdUJqUUxGS2wrSnlsSk01MXV4eWljU1hBcG4wRncy?=
 =?utf-8?B?SEdRNnJjL01mMzJDNXpRVFlxNmpxaHdldGtjbHFYT2s2RnFBbndRMWM0a3ht?=
 =?utf-8?B?ZUpWZ0Eva1FpUVp6WWVtOTVjeUMwaWVrdjZFWGh4TmhYWVJxNUxxS2JrOEE3?=
 =?utf-8?B?SGV5alZ5eVRVL1g3dmx0ZzBCSGE0NkRsakcwS0Y4bU8yZG9wSGVsZmFmVGlO?=
 =?utf-8?B?YlFxYUV5aTJXK01kdU9heTdMWnUvMFN6VHZqSlVyY2c5dWZzSkxXQWpoMGl0?=
 =?utf-8?B?QTdCRGFucDBRRnptTDc3dFlWODdISjR5TW5QUEJpQ3A4OExZbTh1NTUySE1k?=
 =?utf-8?B?bld2czFCR29tdjZRK3JCVFBYTHNCSUlDRWpUcStXNWs4NkpyQTV2anY3N1J5?=
 =?utf-8?B?MTF4M0htSFJIU2tHQzdJQlR6UmZnL0dwWlR5NlBjem11SDNrL1BOMnNwTWZk?=
 =?utf-8?B?WVpsb21aTUJjQ25ZL09zTEM0YURtK0JucWEvS21rREw0Y1BiVU5VQi9LbHNC?=
 =?utf-8?B?Sk1mTWtUcXVvUjhhR0s4citRcyttcm91V3VzcVNJOWtaaTVlK1BoT2tXTmcv?=
 =?utf-8?B?TkluN21nZ0NWdjRDODVWMk1LdGIwUTlUMWRYRm03TTI2UEFYTzZsbmJaYUNG?=
 =?utf-8?B?L056WXlFWElyNzc0MWRFamJpY281QXB4bzNESFBKWEFIZkc1MXRBRUhJQm5z?=
 =?utf-8?B?V1V0aHdLOEhxc3hUZkpjdGRPemdzS2VOMjA3U0pQdjVRdEtNZFczZjY1elIr?=
 =?utf-8?B?aTNwT09qZGIvSkc4c2c1TmZadnBpODVUQ2lwY3QrL2Q1WkZObXdYRGNxbnVX?=
 =?utf-8?B?a01nR3JXa1ZFZXBJcTRGTWxMaWFrZjlDOUY1WTZsNkQ4WFVoNGtEY1RlNWQz?=
 =?utf-8?B?SXo1ZUpWS1luSEtMeThuQkhOYWdSQzJQVnZtQnRUcHlTanB4SkJjd1lhc2Z5?=
 =?utf-8?B?cjBuczZjeDBSZGhzeVRldHBBYjF0S1VrY1MzMnlSK1lBN0pHK3BrRkNFeXRy?=
 =?utf-8?B?aGRHU2dYd0VlRndaR1U1aUhGOThNbndpNmM0akFwZzBOcGEyUFNCZCtuNlBp?=
 =?utf-8?B?OXhDUll6aWVZOSt4dngweHdNSVhBYTZpZ29rTURhN0habDU4bVMxUHJVOFFJ?=
 =?utf-8?B?YXhwZDRhaFFYV01zTURkY0Q3bGpCc3VCUHNiNzV5K283NzdaME4weGRhelpq?=
 =?utf-8?B?MFRNRlk0ZTg3RVE2ZkJTSEd2a1VNNS9UWXErb3l3VUJmdnhtVUNrdEIwbGJm?=
 =?utf-8?B?ODY3aWhKNTUxVFZkbkJWWVg4anhtM2VQT09VTVcxZkxRT0lhcUNSRzgzeCtn?=
 =?utf-8?B?aXdCc1BFNWdnOFZZaHJzQlRzblQ2N2VybitHS2FqUStic2dUdjcvR0d0YUxR?=
 =?utf-8?B?OWVQWlJ3NEpMR1M1OU43QmlBa2hpQ3czSlNTSnJ1aGsycmdpR3hVQzFOdjBG?=
 =?utf-8?Q?MOMM6v8Uxlk9Ft6mw1GWtUJKy1Jf4+AVaTOuFzh?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 0935b7f2-7367-4a7c-41a4-08d8da777d42
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Feb 2021 16:56:41.1921
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 7RUdJwmWXOsDJmTllDREdZqMW0OhprdgufP53sJ7MG4RWu62C2WEUXHMc827TD4Td2IfJ++kurFccA/O6OySnrMsoduIgm5E/pQUlFobFMk=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5597
X-OriginatorOrg: citrix.com

On 26/02/2021 16:21, Bob Eshleman wrote:
>>> On 2/25/21 3:14 PM, Andrew Cooper wrote:
>>>
>>> It sounds like you'd prefer no common to start and none of the
>>> arch_* calls it relies on?
>> We definitely want "stuff compiled under RISC-V" to be caught in CI, but
>> that doesn't mean "wedge all of common in with stubs to begin with".
>>
>> Honestly - I want to see the build issues/failures in common, to help us
>> fix the rough corners on the Kconfig system and include hierarchy.
>>
>> In light of this patch, there are definitely some things which should be
>> fixed as prerequisites, rather than forcing yet-more x86-isms into every
>> new arch.
>>
>> ~Andrew
>>
> Ah I see.  There's more that could be Kconfig'd away and if they can't be
> Kconfig'd away, there should be some commits to make it so they can be.
>
> But things like, for example, `arch_domain_create()` would still
> be stubbed, because this is and always will be required.

Some bits are very mandatory (at the point you start compiling
domain.c), but it absolutely shouldn't be necessary to implement that
much in the way of stubs to bootstrap an architecture.

~Andrew
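A picture of the kind of stub under discussion, purely illustrative and not actual Xen code: a new port provides minimal implementations of mandatory hooks such as arch_domain_create() so that common code links, failing cleanly until the port is functional.

```c
#include <errno.h>
#include <stddef.h>

struct domain;   /* opaque here, as a fresh port would forward-declare it */

/* Mandatory arch hook: common/domain.c cannot link without it, even on
 * a port that cannot create domains yet.  Returning -ENOSYS keeps the
 * failure explicit instead of pretending success. */
int arch_domain_create(struct domain *d, unsigned int flags)
{
    (void)d;
    (void)flags;
    return -ENOSYS;
}
```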


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 16:57:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 16:57:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90472.171286 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFgQV-0003HJ-Dg; Fri, 26 Feb 2021 16:57:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90472.171286; Fri, 26 Feb 2021 16:57:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFgQV-0003HC-94; Fri, 26 Feb 2021 16:57:15 +0000
Received: by outflank-mailman (input) for mailman id 90472;
 Fri, 26 Feb 2021 16:57:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NRZt=H4=xen.org=tim@srs-us1.protection.inumbo.net>)
 id 1lFgQT-0003H1-Iv
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 16:57:13 +0000
Received: from deinos.phlegethon.org (unknown [2001:41d0:8:b1d7::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d62d0565-d87d-437c-bb4e-203d36ae2cd8;
 Fri, 26 Feb 2021 16:57:12 +0000 (UTC)
Received: from tjd by deinos.phlegethon.org with local (Exim 4.92.3 (FreeBSD))
 (envelope-from <tim@xen.org>)
 id 1lFgQP-000JpO-W3; Fri, 26 Feb 2021 16:57:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d62d0565-d87d-437c-bb4e-203d36ae2cd8
Date: Fri, 26 Feb 2021 16:57:09 +0000
From: Tim Deegan <tim@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	Roger Pau Monné <roger.pau@citrix.com>,
	Ian Jackson <iwj@xenproject.org>
Subject: Re: [PATCH][4.15] x86/shadow: replace bogus return path in
 shadow_get_page_from_l1e()
Message-ID: <YDkoZZnm7WN8r+67@deinos.phlegethon.org>
References: <d6cf1205-d537-fafb-a082-e973bfe11315@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
In-Reply-To: <d6cf1205-d537-fafb-a082-e973bfe11315@suse.com>
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on deinos.phlegethon.org); SAEximRunCond expanded to false

At 16:08 +0100 on 26 Feb (1614355713), Jan Beulich wrote:
> Prior to be640b1800bb ("x86: make get_page_from_l1e() return a proper
> error code") a positive return value did indicate an error. Said commit
> failed to adjust this return path, but luckily the only caller has
> always been inside a shadow_mode_refcounts() conditional.
> 
> Subsequent changes caused 1 to end up at the default (error) label in
> the caller's switch() again, but the returning of 1 (== _PAGE_PRESENT)
> is still rather confusing here, and a latent risk.
> 
> Convert to an ASSERT() instead, just in case any new caller would
> appear.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Tim Deegan <tim@xen.org>


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 17:07:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 17:07:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90480.171298 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFgae-0004SI-DJ; Fri, 26 Feb 2021 17:07:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90480.171298; Fri, 26 Feb 2021 17:07:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFgae-0004SB-9W; Fri, 26 Feb 2021 17:07:44 +0000
Received: by outflank-mailman (input) for mailman id 90480;
 Fri, 26 Feb 2021 17:07:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NRZt=H4=xen.org=tim@srs-us1.protection.inumbo.net>)
 id 1lFgad-0004S6-88
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 17:07:43 +0000
Received: from deinos.phlegethon.org (unknown [2001:41d0:8:b1d7::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id eea3539e-54db-4bee-83b4-664ea6cf34c1;
 Fri, 26 Feb 2021 17:07:42 +0000 (UTC)
Received: from tjd by deinos.phlegethon.org with local (Exim 4.92.3 (FreeBSD))
 (envelope-from <tim@xen.org>)
 id 1lFgaa-000JrX-Kt; Fri, 26 Feb 2021 17:07:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eea3539e-54db-4bee-83b4-664ea6cf34c1
Date: Fri, 26 Feb 2021 17:07:40 +0000
From: Tim Deegan <tim@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
	Ian Jackson <iwj@xenproject.org>
Subject: Re: [PATCH][4.15] x86/shadow: suppress "fast fault path"
 optimization without reserved bits
Message-ID: <YDkq3KwtfGZZTyLL@deinos.phlegethon.org>
References: <aefe5617-9f10-23a4-ee27-6ea66b62cdbe@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
In-Reply-To: <aefe5617-9f10-23a4-ee27-6ea66b62cdbe@suse.com>
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on deinos.phlegethon.org); SAEximRunCond expanded to false

Hi,

At 14:03 +0100 on 25 Feb (1614261809), Jan Beulich wrote:
> When none of the physical address bits in PTEs are reserved, we can't
> create any 4k (leaf) PTEs which would trigger reserved bit faults. Hence
> the present SHOPT_FAST_FAULT_PATH machinery needs to be suppressed in
> this case, which is most easily achieved by never creating any magic
> entries.
> 
> To compensate a little, eliminate sh_write_p2m_entry_post()'s impact on
> such hardware.
> 
> While at it, also avoid using an MMIO magic entry when that would
> truncate the incoming GFN.
> 
> Requested-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Tim Deegan <tim@xen.org>

> I wonder if subsequently we couldn't arrange for SMEP/SMAP faults to be
> utilized instead, on capable hardware (which might well be all having
> such large a physical address width).

I don't immediately see how, since we don't control the access type
that the guest will use.

> I further wonder whether SH_L1E_MMIO_GFN_MASK couldn't / shouldn't be
> widened. I don't see a reason why it would need confining to the low
> 32 bits of the PTE - using the full space up to bit 50 ought to be fine
> (i.e. just one address bit left set in the magic mask), and we wouldn't
> even need that many to encode a 40-bit GFN (i.e. the extra guarding
> added here wouldn't then be needed in the first place).

Yes, I think it could be reduced to use just one reserved address bit.
IIRC we just used such a large mask so the magic entries would be
really obvious in debugging, and there was no need to support arbitrary
address widths for emulated devices.

Cheers,

Tim.




From xen-devel-bounces@lists.xenproject.org Fri Feb 26 17:12:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 17:12:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90489.171338 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFgev-0005et-Bu; Fri, 26 Feb 2021 17:12:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90489.171338; Fri, 26 Feb 2021 17:12:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFgev-0005em-8Y; Fri, 26 Feb 2021 17:12:09 +0000
Received: by outflank-mailman (input) for mailman id 90489;
 Fri, 26 Feb 2021 17:12:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFgeu-0005ec-1z; Fri, 26 Feb 2021 17:12:08 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFget-0008Mt-UC; Fri, 26 Feb 2021 17:12:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFget-0004PF-Eh; Fri, 26 Feb 2021 17:12:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lFget-0001Wg-EA; Fri, 26 Feb 2021 17:12:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=mCk1rR4N8i7YpwxGtw3Kll8hDJg9ivJE1Hl63sb995k=; b=tI0N7YaHwZnRo/T64pF9z/kYBk
	3MbN0FT5tF3E7LWSVowFTzooyP48UiDlq79J7AeAf5arDFELD77ESVCHotvwZn4+1+3ChwH0EBzfV
	ZRxf7m4704OfXdLuFKhzvzVHIaiHEj3RtlfuOjFrmk0wfXIhvHTyC/X0jo5x7qVIadIQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159692-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 159692: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=fc8fb368515391374e5f170a1e07205d914bc14a
X-Osstest-Versions-That:
    xen=e8185c5f01c68f7d29d23a4a91bc1be1ff2cc1ca
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 26 Feb 2021 17:12:07 +0000

flight 159692 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159692/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 159475
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 159475
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 159475
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 159475
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 159475
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 159475
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 159475
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 159475
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 159475
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 159475
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 159475
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  fc8fb368515391374e5f170a1e07205d914bc14a
baseline version:
 xen                  e8185c5f01c68f7d29d23a4a91bc1be1ff2cc1ca

Last test of basis   159475  2021-02-19 06:26:37 Z    7 days
Failing since        159487  2021-02-20 04:29:29 Z    6 days   12 attempts
Testing same since   159692  2021-02-26 03:53:29 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Hubert Jasudowicz <hubert.jasudowicz@cert.pl>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Norbert Manthey <nmanthey@amazon.de>
  Rahul Singh <rahul.singh@arm.com>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Tamas K Lengyel <tamas@tklengyel.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   e8185c5f01..fc8fb36851  fc8fb368515391374e5f170a1e07205d914bc14a -> master


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 17:44:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 17:44:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90529.171440 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFhAK-0000qL-4f; Fri, 26 Feb 2021 17:44:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90529.171440; Fri, 26 Feb 2021 17:44:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFhAK-0000qE-1Z; Fri, 26 Feb 2021 17:44:36 +0000
Received: by outflank-mailman (input) for mailman id 90529;
 Fri, 26 Feb 2021 17:44:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CSsz=H4=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lFhAI-0000q8-3r
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 17:44:34 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b8c424bd-9097-479e-927a-7a76702572d2;
 Fri, 26 Feb 2021 17:44:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b8c424bd-9097-479e-927a-7a76702572d2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1614361472;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=YknpcX4TnKt0uq0H9eu7tiHhoQ5t7bkxn77kHUtLvT8=;
  b=ifJuR3YS1XqL+qv8NDtHjO6DPcIRZsbU4mFDP0ugX3vjiPYY0wT6H8J3
   NM1k20wGGeqxa1jT47pXC+w2L8dracukP0+/ek+dED+mfPlAjFm5IEvPK
   ZoURBCNkWRXmls/7OMbnxhl8SgwWke1IS8rdfr+yb7G7x8wiXwt6KtJOa
   I=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: Np7LvF285OgHQivTAT6IEhtxafGNLh7c3xMgDttjpC5FAtPnxLcLRirTDE0WVVUltHKFO7VbUe
 268R7MVDeWJPkMV2RNGueQ/K5DG+OfZ9M8TA1edDgt8CrUQl069MgQduQMlAIwtujxslk4TGsy
 UvQ+H62m3oH9GVCwVEQuoqc4nbyLCpnKph78igj85W1IWzX7UTrEQO2Z5Tdv9mOqIC1z8PArYj
 eQZVhsbvwcKEGfBPn1yFRDqTsAvtVx1p2RB25dtMdnRwU7ar690bAR+WbQE3yGPlPpl4jyn45E
 TzU=
X-SBRS: 5.2
X-MesageID: 39524274
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.81,209,1610427600"; 
   d="scan'208";a="39524274"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=IR55csTaO518pPo40zWRo63u93oXXTYDz9bbIsDMbadje3Ji/5tcUhZkitCVqTrjyl3qpsdCD3uSGHeAV2cKxFG4z7lVDtoHKb+QawHYj2eng854/3n2RYjNA+9eoF1FagPGEWkuRa/BEMcEnEZYszkkLaDb3GjHTMfINUrW8/OJgw/5tku+mS4KHfL72CGCy8W+Uji7dFf044P3hF3RW1xpP1TDHDOVRzjWaFfENhn7THJr/sPp5OMnnxdMqPTmrGNCxEd3iZGbzADRpSdLoH45/QTqPYDd5T/WQohzNFyyS3vUErMhtKpgoqVEZnXHvta+toH6kWPpm48Yv+liIA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=YknpcX4TnKt0uq0H9eu7tiHhoQ5t7bkxn77kHUtLvT8=;
 b=ZwqeY0f1AqywObCFQwsxVfQ0alJEVJcGMYCmFqEiuWAHa0k0uDqIC0QVV1qLpNRKO6pWJ/LAMdwD67zOzeDng3R+rYCbZXw34R8P4Hp1cvnrsE3baxzL6pXA5RqG9Jao9gkP5BPgSWQ0x6XNAuO438QzBeKhmTF+gS2dzbgNW7wEULxHOqsw5GRx/xvF1lzn1ElS2J/b6BOU4h8NLdsO1zpl6rruo9LeNWzAav2AP7rOvPSKRWVHDc4ucD72NviVSjgod110xxnZIda8qeRJQGF3/TPw9P4ZuLOS9TGUq5IXgzY75Aucr+PzyXGiF5cn2jdiOFBU+4FzB8tdXFi/5A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=YknpcX4TnKt0uq0H9eu7tiHhoQ5t7bkxn77kHUtLvT8=;
 b=UbRKwXQ2jnCX2eXGsR2DyZUFQtCY7o4qyYebztXiCyEvCyOSg1lQY1wmvQ+pTayUoPqLXXTr5OZ0WaVKUlKi9qSe3oBElyLE17AbacO8oM+B8q4sre70poohzMmpzemfACxQHf+orMNXh+4YHa7/pztbYdTW1cg6+RmDDh/+//o=
Subject: Re: [PATCH] tools: Improve signal handling in xen-vmtrace
To: Hubert Jasudowicz <hubert.jasudowicz@cert.pl>,
	<xen-devel@lists.xenproject.org>
CC: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	=?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>
References: <26720bf5c8258e1b7b4600af3648039b5b9ee18d.1614336820.git.hubert.jasudowicz@cert.pl>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <b04c6b3a-08a8-7507-7f3d-24d179358761@citrix.com>
Date: Fri, 26 Feb 2021 17:44:21 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
In-Reply-To: <26720bf5c8258e1b7b4600af3648039b5b9ee18d.1614336820.git.hubert.jasudowicz@cert.pl>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0084.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:8::24) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 6621352f-e990-446f-64f9-08d8da7e2a2d
X-MS-TrafficTypeDiagnostic: BY5PR03MB5156:
X-Microsoft-Antispam-PRVS: <BY5PR03MB515694FD429DF03A0031CF8ABA9D9@BY5PR03MB5156.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:6430;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: pUbGNbr08U1AA5S0CWrcItklIJpRy2vmUWKA7mAs//w8geaxCUm1jUBYET6O4LAeOka94MT6wWV9Sge2fhjJZJUB/2f0sXWXo7VvOifeLECI22OrcOKiWw3H2ztdRaCa2UgJujCvOiuUw9Dc7K+P99+c4xy+/1zEOkElYtY3zk5uRbq/fd6aZxAsTqWb7FmFLT7qv2qYEpgmM+w+5kCFG4Ai8TxVO7XFSOC+TOrJnec5Tmnfk2qik/M+eINk2M6N8ob5eBTMPfClsau6p071yvHZKbqyvi+QqRY+Xhr9AiDl0irFBrUFBs5sa9MMo3l8hI24/ajBeWyoOK49NA5XxipsIFL/wW+1XdxLaCDZJvFM+w2cjyxxI74ZXJwUBdkrxcv9E76R+naxJHpZKELxei+trg6mgo4xKPr2RCTmQMQfNDrdWoaZjyAdIBbFPEtERDyzKs5Rr6BlfoEg3mMBW+or+eeQcnXB/EgzC0T4RuROv9ir2w1cAf+CU2DDs/85pB6KKbbcNNSCvWJk4r7nDRVqsHAn01pG+QJJD/2iNjiwbodTQrmKnPTYryMhazIq/lnWcB3sEdUhdn4xr5zoQKLzHEyRpBj7UjkY7UGuZCY=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(346002)(136003)(39860400002)(366004)(376002)(2616005)(956004)(6486002)(5660300002)(54906003)(8676002)(478600001)(16576012)(8936002)(86362001)(26005)(36756003)(31686004)(316002)(66556008)(4326008)(31696002)(66946007)(66476007)(186003)(2906002)(4744005)(16526019)(53546011)(6666004)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?OElGdzdQL0xxYklXb0czNFk3RWxCZVZtVGNpTUxWS0ROYnQzY2VTNFN6azRv?=
 =?utf-8?B?VFJQN1FscFQ3WUtwV2FlcTIrOUZMdGdodzArUlJ0UDJKMEtWTFJuK3hZY2NF?=
 =?utf-8?B?c3E5M2NZVHN3MXRhUzNJeDVoaXlJd0RwM0txQU5jQXEyMFhzdkJ6K25wSWli?=
 =?utf-8?B?OGRPUnprTDBxZG5qemVHdW13SkZ4REZEM3kzeld0TkFIYUpuLy9zRmlSZEpk?=
 =?utf-8?B?QTkydXpvRTJDQlpLUVh6cjgyV1hya0pza2tLd0JSVTRMOUVISEh6ZHpoQmRC?=
 =?utf-8?B?RkxxdzU0S24yb0w1M3NGcnBReDR0YUhiSUM0aTZKRzhWQmN3Rlh4RHErbERT?=
 =?utf-8?B?WUc3OFl6cUtSeEhhdlRRUDVTZnNUeWtuNERPbXdKSXgzeUdzVW9nV2F2Zmdh?=
 =?utf-8?B?YllRZ2lxdkdVY1dCSktnaWZGS2dhZ2xpenhKRzBiaFluRDl2TGVwZkFuWkVF?=
 =?utf-8?B?YUpFc25IRTZJY3d4NG5BOEx3RDFzQTVONkZjY0VwUDdKYlFES0ZiVytSZXFJ?=
 =?utf-8?B?aVZmSytlSVhNWEhYbGZwZ2NUZlBVYXpZUzA5U3RjTDA4cHc3NXJBczRjRUJz?=
 =?utf-8?B?aUp4cVg0NzNjb1NUenhGS1FqWE0zYW5nSVRrT3B4b2tYTUQyQTJTUDFCUzdn?=
 =?utf-8?B?VnB1OXpGWUZ0VGFuU0o0UXdFMGNKVjcyckJFK0xBMzBBQmh0OG9UYUlkdTBI?=
 =?utf-8?B?NzNwcFlqZ1ZVT3lUaEdkdC9PdUJBWWJaa3M3QVdGSUJUdU5XK05iNGNCcjFl?=
 =?utf-8?B?UFhpZzhqRmxmUU8vNTRqYXQvbENROWZ1c1dBdlgxOXRWVW9vV2ViMVI2Q3ly?=
 =?utf-8?B?UGZGUmZjYTdjQm1kTGIySHUzQnlaOTVFeC9qdVdCZlBLNUVWT0ZzSGJFSDQv?=
 =?utf-8?B?UHl3OEY3WGNiODhFamRicU1aYUtGQVF5eGNVcHZmV0hFeFJiWEZKSld3ellN?=
 =?utf-8?B?SFBZWlpSRGdXK3hsbWxyUElvNDluVGlJdWo1bTRROVZ3L3lmdjJ1NVlobWJN?=
 =?utf-8?B?cG5vYkJaQ1hOZGRSZXB5Sm8ydFZNRHAzeTN1cUpEc2Jnc0pCWVBoVjd2V0pC?=
 =?utf-8?B?bmFMdDYvd25BTE42S0c1SWxWR3YyUG1sRVVzMDMyNTdjNWZ1SUtkb1N3MTVo?=
 =?utf-8?B?cmxjWm5kZ3BGNHVYdzFoMmxQeFNPOWNaSTNYNnhtNHBMQlQxa3Ria3JFaXNW?=
 =?utf-8?B?NEJzWGMwRlJlbkdXbVNwZTdSWkxoVzZvd1h1WnJuQ2VWNWFIcmYweEI5bW9s?=
 =?utf-8?B?d3g0ODlCK2VmOERxa2Zrc1hnR2tlK3paaE9LUTIrZTBydzZMV3hiK0lnemNw?=
 =?utf-8?B?aGdkRUJ0OWkyL2ttZWlKaU5oV2ppWXZhcmU4b3BsWGNtRm1DdVh2aHVhTGR3?=
 =?utf-8?B?R3JoNUlqM2JWOTFmUjhzUlVUbnlzMUdKQ1cvUngvTVVsUnN0WElLNHZqU0ts?=
 =?utf-8?B?RXpoOTc0aENFb21xSGd1RE1YNWNyMlNoMWJCZ0pKUHFjK05HcnNkVlgzL3Rl?=
 =?utf-8?B?ZkYwSFJPSFNYZ3A1cytWaWZ0ZjFJRklTbHEwT0Rxci9UVTFOc3pUY0M2Smxz?=
 =?utf-8?B?bDhmRVljUU42MnBUNU9wQXl5bWtoZWozb01Xd1RySVNYcEY1RjBRcnFMVnAr?=
 =?utf-8?B?dGZrYTJpSzZkNUd3Mk1DNlFaYWU0SzdUUzNKeCtiYVR2aU9pekY5TkJhYnBi?=
 =?utf-8?B?VFYrbElWeW1RTkdwTWRkeTkyRjUwWlRsMjBPK0prMTQ0TU5Gc3BybythTGNa?=
 =?utf-8?Q?M9K0z94YOWsO7oEAt04EDpR9S8Wq2/KczolHoAE?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 6621352f-e990-446f-64f9-08d8da7e2a2d
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Feb 2021 17:44:28.5965
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: SLzd3ErdpJ6fJ0wtCSPPl0EDlwtnZOh3dtZ2ULuKkQBS8r3Q5QXI2DkKYztvOquCy0RUISx8tzWBZK2RTM4IRThhvbXNOhOSuXLFIWKK8jA=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR03MB5156
X-OriginatorOrg: citrix.com

On 26/02/2021 10:59, Hubert Jasudowicz wrote:
> Make sure xen-vmtrace exits cleanly in case SIGPIPE is sent. This can
> happen when piping the output to some other program.
>
> Additionally, add a volatile qualifier to the interrupted flag to avoid
> it being optimized away by the compiler.
>
> Signed-off-by: Hubert Jasudowicz <hubert.jasudowicz@cert.pl>

Ok, so this is being used in production.

In which case, what other signals potentially need dealing with?  Let's
get them all fixed in one go.

When that's done, we should have it installed by default, to match its
expected use case.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 18:27:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 18:27:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90553.171462 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFhpN-0005FH-IQ; Fri, 26 Feb 2021 18:27:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90553.171462; Fri, 26 Feb 2021 18:27:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFhpN-0005FA-FM; Fri, 26 Feb 2021 18:27:01 +0000
Received: by outflank-mailman (input) for mailman id 90553;
 Fri, 26 Feb 2021 18:27:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lFhpM-0005F5-4s
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 18:27:00 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lFhpK-0001GE-TR; Fri, 26 Feb 2021 18:26:58 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lFhpK-00089O-G4; Fri, 26 Feb 2021 18:26:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From;
	bh=iNuTvkc4LGs0VPE3lG554tVwPfj5Ug5Vg9CmENqGtyQ=; b=hMRJuUFgprHCGy+sCM2Y8Xqu4P
	7eMu2yWG4Z2lOPbCotS9SYYOcT3936s+7ChDl4mRUlLBOkiImdiDX8KL9toL+8Sqab5TYXzEbkFD9
	2KH1g2hRlSGKfFyfIeuEpkRAYitNcolXn/UeXX+aYI8Eps26/g52CDQlP+rKTN1cpMsw=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk,
	iwj@xenproject.org,
	Julien Grall <jgrall@amazon.com>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH for-4.15] tools/xenstored: Avoid dereferencing a NULL pointer if LiveUpdate is failing
Date: Fri, 26 Feb 2021 18:26:55 +0000
Message-Id: <20210226182655.2499-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1

From: Julien Grall <jgrall@amazon.com>

In case of failure in do_lu_start(), XenStored will first free lu_start
and then try to dereference it.

This will result in a NULL dereference as the destruction callback will
set lu_start to NULL.

The crash can be avoided by freeing lu_start *after* the reply has been
set.

Fixes: af216a99fb4a ("tools/xenstore: add the basic framework for doing the live update")
Signed-off-by: Julien Grall <jgrall@amazon.com>

---

This is a bug fix candidate for 4.15. The easiest way to trigger it is
to have an XTF test that starts a transaction but never terminates it.

In this case, live-updating would fail and trigger a crash.
---
 tools/xenstore/xenstored_control.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index 653890f2d9e0..766b2438396a 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -657,9 +657,8 @@ static bool do_lu_start(struct delayed_request *req)
 
 	/* We will reach this point only in case of failure. */
  out:
-	talloc_free(lu_status);
-
 	send_reply(lu_status->conn, XS_CONTROL, ret, strlen(ret) + 1);
+	talloc_free(lu_status);
 
 	return true;
 }
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Feb 26 18:32:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 18:32:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90558.171475 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFhuT-0006Hz-8V; Fri, 26 Feb 2021 18:32:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90558.171475; Fri, 26 Feb 2021 18:32:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFhuT-0006Hs-2j; Fri, 26 Feb 2021 18:32:17 +0000
Received: by outflank-mailman (input) for mailman id 90558;
 Fri, 26 Feb 2021 18:32:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DeEJ=H4=kernel.org=pr-tracker-bot@srs-us1.protection.inumbo.net>)
 id 1lFhuS-0006Hn-2P
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 18:32:16 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 68878fca-0d0a-4c93-b298-212e2fabfb4a;
 Fri, 26 Feb 2021 18:32:15 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPS id 6A96964F2D;
 Fri, 26 Feb 2021 18:32:14 +0000 (UTC)
Received: from pdx-korg-docbuild-2.ci.codeaurora.org (localhost.localdomain
 [127.0.0.1])
 by pdx-korg-docbuild-2.ci.codeaurora.org (Postfix) with ESMTP id 66EFC609D0;
 Fri, 26 Feb 2021 18:32:14 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 68878fca-0d0a-4c93-b298-212e2fabfb4a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1614364334;
	bh=PctlNHE6xIGwxBRhX63v/+sbNK+dVx7QqNsavwojU3Q=;
	h=Subject:From:In-Reply-To:References:Date:To:Cc:From;
	b=Q+QiWlfpNXrUOkwGjFrF0Op2A35Nz4ytnCiUDg89IGTzvdmJV16/XioWxccv4fPo4
	 kho2xjpvWsdRrBeYxeIrHnQ6MaNTB3n5au/y/p7Dq5gtFO3d72IxLRU8MGPBdYivMX
	 s7XQI7CE/s496v8uurasKhjgQKv7JYoS20HB/GA+2AXJQ1U27V9P6EH7xTMXjLlMNZ
	 bOho3B09HGVGiA7hNIoV+yPvDaTB+ztYZDb4GHY7Py+NOJ/ayVr9qUC4Q4uX/SLXFq
	 6hnx3vhjn++WhBpUebYhijumozbccj8oCfcY1TC8ACchaX4K0Obaww9WRAoSKkjpBz
	 zLYL4verzAb5Q==
Subject: Re: [GIT PULL] xen: branch for v5.12-rc1
From: pr-tracker-bot@kernel.org
In-Reply-To: <20210226131641.4309-1-jgross@suse.com>
References: <20210226131641.4309-1-jgross@suse.com>
X-PR-Tracked-List-Id: <linux-kernel.vger.kernel.org>
X-PR-Tracked-Message-Id: <20210226131641.4309-1-jgross@suse.com>
X-PR-Tracked-Remote: git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.12b-rc1-tag
X-PR-Tracked-Commit-Id: 53f131c284e83c29c227c0938926a82b2ed4d7ba
X-PR-Merge-Tree: torvalds/linux.git
X-PR-Merge-Refname: refs/heads/master
X-PR-Merge-Commit-Id: 5c2e7a0af211cb7a3a24fcfe98f0ceb67560b53b
Message-Id: <161436433441.9780.2149169069373607984.pr-tracker-bot@kernel.org>
Date: Fri, 26 Feb 2021 18:32:14 +0000
To: Juergen Gross <jgross@suse.com>
Cc: torvalds@linux-foundation.org, linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com

The pull request you sent on Fri, 26 Feb 2021 14:16:41 +0100:

> git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.12b-rc1-tag

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/5c2e7a0af211cb7a3a24fcfe98f0ceb67560b53b

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/prtracker.html


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 19:23:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 19:23:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90566.171489 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFihS-0002z0-7S; Fri, 26 Feb 2021 19:22:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90566.171489; Fri, 26 Feb 2021 19:22:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFihS-0002yt-2g; Fri, 26 Feb 2021 19:22:54 +0000
Received: by outflank-mailman (input) for mailman id 90566;
 Fri, 26 Feb 2021 19:22:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFihR-0002yl-HB; Fri, 26 Feb 2021 19:22:53 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFihR-0002Bn-9R; Fri, 26 Feb 2021 19:22:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFihR-0003S5-2B; Fri, 26 Feb 2021 19:22:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lFihR-0003h9-1d; Fri, 26 Feb 2021 19:22:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=7zgv7FanEt6pYbFJBymDoX0OiFCkRhaloF3HjBMlDh4=; b=wEb4j53OnKwMP6F9EKk2gwv2wv
	s5XNDD3mZKi8/O55qLLo1HSaCM6YjF2IEX0wCsOe8wwYZTz8FbNNFa/5xGKAZlKu+Z0G5I1KmT2D9
	dunUJjhd9Vw7XHmZbKMuHlQTWx7YqZHqys3sCSTiWEHOs28NuNrlsploUttpdbZky+Bo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159716-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 159716: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=c4441ab1f1d506a942002ccc55fdde2fe30ef626
X-Osstest-Versions-That:
    xen=109e8177fd4a225e7025c4c17d2c9537b550b4ed
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 26 Feb 2021 19:22:53 +0000

flight 159716 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159716/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  c4441ab1f1d506a942002ccc55fdde2fe30ef626
baseline version:
 xen                  109e8177fd4a225e7025c4c17d2c9537b550b4ed

Last test of basis   159704  2021-02-26 10:01:26 Z    0 days
Testing same since   159709  2021-02-26 13:00:30 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   109e8177fd..c4441ab1f1  c4441ab1f1d506a942002ccc55fdde2fe30ef626 -> smoke


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 20:32:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 20:32:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90581.171503 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFjmR-0001iK-Eb; Fri, 26 Feb 2021 20:32:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90581.171503; Fri, 26 Feb 2021 20:32:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFjmR-0001iD-BZ; Fri, 26 Feb 2021 20:32:07 +0000
Received: by outflank-mailman (input) for mailman id 90581;
 Fri, 26 Feb 2021 20:32:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ECYH=H4=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lFjmQ-0001i8-5H
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 20:32:06 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d69564aa-1a9c-4633-ac61-06976fd09582;
 Fri, 26 Feb 2021 20:32:05 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id CE296601FB;
 Fri, 26 Feb 2021 20:32:03 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d69564aa-1a9c-4633-ac61-06976fd09582
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1614371524;
	bh=ZfhIgVSHPUaPB/O+CPuZ87bwXno9lSjUW14o5/FcNvI=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=XukvFgxv+/Cwm4wTKnUdIVAbzIeI08Mk98Q1Xk/GARO7ktOtqPl5pJEdZmqTmcNxl
	 4g3iDNnrJEa0GQ02XWfwK7+bAFILLFWbRy5WXAggqUj7RX9CMri0ZRrbXHPufCRHNt
	 C2xDstXLj8oMgmvqP29Y00k8x4HsI6d+lEheC/Q38GkWz83nhahkL9Q0FG0PzenxWt
	 4rR6LfBj3l+dC8no9/Xbtw+h1HhBEy5csPUUoOidu5A/xqUJVxz08fV3t5jPyGG8ec
	 UCNKVSj0zai1WVNfnIOjvCL82PEla8luhGajqVZAAapinowMUaa2y6zzOe+bHm10cP
	 9HdTm/ZmnfGqQ==
Date: Fri, 26 Feb 2021 12:32:03 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Roger Pau Monne <roger.pau@citrix.com>
cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, 
    Ian Jackson <iwj@xenproject.org>, George Dunlap <george.dunlap@citrix.com>, 
    Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
    Doug Goldstein <cardoe@cardoe.com>
Subject: Re: [PATCH for-4.15 0/3] firmware: fix build on Alpine
In-Reply-To: <20210226085908.21254-1-roger.pau@citrix.com>
Message-ID: <alpine.DEB.2.21.2102261230270.3234@sstabellini-ThinkPad-T480s>
References: <20210226085908.21254-1-roger.pau@citrix.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 26 Feb 2021, Roger Pau Monne wrote:
> Hello,
> 
> While the series started as a build fix for Alpine, I think the patches
> are interesting on their own for other OSes/distros, since they allow
> removing the i386 libc as a build dependency.
> 
> This is done by providing a set of standalone headers suitable for the
> needs of the firmware build. Patch 2 contains the full description on
> why it's done this way.
> 
> The main risk for patches 1 and 2 is breaking the build in some obscure
> distro/OS and toolchain combination. We aim to have this mostly covered
> by gitlab CI. Patch 3 main risk is breaking the Alpine containers in
> gitlab CI, but they are already failing.
> 
> Wanted to send this yesterday but was waiting on gitlab CI output, it's now
> all green:
> 
> https://gitlab.com/xen-project/people/royger/xen/-/pipelines/261928726

That's fantastic! Speaking with Andrew, I thought the Alpine Linux
hvmloader build issue was still unresolved, but obviously you found a
way to fix it. Great!



> Thanks, Roger.
> 
> Roger Pau Monne (3):
>   hvmloader: do not include inttypes.h
>   firmware: provide a stand alone set of headers
>   automation: enable rombios build on Alpine
> 
>  README                                        |  3 --
>  automation/scripts/build                      |  5 +--
>  tools/firmware/Rules.mk                       | 11 ++++++
>  tools/firmware/hvmloader/32bitbios_support.c  |  2 +-
>  tools/firmware/include/stdarg.h               | 10 +++++
>  tools/firmware/include/stdbool.h              |  9 +++++
>  tools/firmware/include/stddef.h               | 10 +++++
>  tools/firmware/include/stdint.h               | 39 +++++++++++++++++++
>  tools/firmware/rombios/32bit/rombios_compat.h |  4 +-
>  tools/firmware/rombios/rombios.c              |  5 +--
>  10 files changed, 85 insertions(+), 13 deletions(-)
>  create mode 100644 tools/firmware/include/stdarg.h
>  create mode 100644 tools/firmware/include/stdbool.h
>  create mode 100644 tools/firmware/include/stddef.h
>  create mode 100644 tools/firmware/include/stdint.h
> 
> -- 
> 2.30.1
> 


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 20:36:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 20:36:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90584.171516 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFjqp-0001sR-VC; Fri, 26 Feb 2021 20:36:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90584.171516; Fri, 26 Feb 2021 20:36:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFjqp-0001sK-Ry; Fri, 26 Feb 2021 20:36:39 +0000
Received: by outflank-mailman (input) for mailman id 90584;
 Fri, 26 Feb 2021 20:36:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ECYH=H4=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lFjqo-0001sF-VS
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 20:36:38 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 88c54186-65c5-416e-aa66-741a7ac32f38;
 Fri, 26 Feb 2021 20:36:38 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 493F764EFA;
 Fri, 26 Feb 2021 20:36:37 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 88c54186-65c5-416e-aa66-741a7ac32f38
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1614371797;
	bh=xex4eDlzhtDi9TdoA4FM7ulEpD14jeshElkv2Pk1l2s=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=WZDPD92+OkR74Vps1x0tcncaH/iCPEP1dndYdjidiWKqhwIYQ1LXxxoyF5+oLPjze
	 0APeb+4zziUKGguy1sVwaet2lyIcub+qbTUn3ndWY+tTyKoq0MwunjahAOT3KNmEzI
	 rxdiiL1dr4l47asMbukmJT2caj008cpBYrXqlZWS/WtVTFbWJ9EAIfhks21WOB5Ove
	 hgYZDB2GIaqM4LqRMclhw8p2eXvuIzTqCKRHvTxdt41jdM1gMfQOXGAT66xwp/zyJy
	 +2Khb0pKsuwyYBt9htmwK/zfdVZsVS9K8gzcqRW0RsuV1qEGnLIfgtzMzfuJ02KvSJ
	 IshRo2P+9lRBg==
Date: Fri, 26 Feb 2021 12:36:36 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Roger Pau Monne <roger.pau@citrix.com>
cc: xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
    Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, 
    Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH for-4.15 2/3] firmware: provide a stand alone set of
 headers
In-Reply-To: <20210226085908.21254-3-roger.pau@citrix.com>
Message-ID: <alpine.DEB.2.21.2102261233160.3234@sstabellini-ThinkPad-T480s>
References: <20210226085908.21254-1-roger.pau@citrix.com> <20210226085908.21254-3-roger.pau@citrix.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-1154894328-1614371797=:3234"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1154894328-1614371797=:3234
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Fri, 26 Feb 2021, Roger Pau Monne wrote:
> The current build of the firmware relies on having 32bit compatible
> headers installed in order to build some of the 32bit firmware, but
> that usually requires multilib support and installing an i386 libc when
> building from an amd64 environment which is cumbersome just to get
> some headers.
> 
> Usually this could be solved by using the -ffreestanding compiler
> option which drops the usage of the system headers in favor of a
> private set of freestanding headers provided by the compiler itself
> that are not tied to libc. However, such an option is broken at least
> in the gcc compiler provided in Alpine Linux, as the system include
> path (ie: /usr/include) takes precedence over the gcc private include
> path:
> 
> #include <...> search starts here:
>  /usr/include
>  /usr/lib/gcc/x86_64-alpine-linux-musl/10.2.1/include
> 
> Since -ffreestanding is currently broken on at least that distro, and
> for resilience against future compilers also having the option broken
> provide a set of stand alone 32bit headers required for the firmware
> build.
> 
> This allows dropping the build-time dependency on an i386-compatible
> set of libc headers on amd64.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
> There's the argument for fixing gcc in Alpine and instead just using
> -ffreestanding. I think that's more fragile than providing our own set
> of stand alone headers for the firmware bits. Having the include paths
> wrongly sorted can easily cause the system headers to be picked up
> instead of the gcc ones, and then building can randomly fail because
> the system headers could be amd64-only (like the musl ones).
> 
> I've also seen clang-9 on Debian with the following include paths:
> 
> #include "..." search starts here:
> #include <...> search starts here:
>  /usr/local/include
>  /usr/lib/llvm-9/lib/clang/9.0.1/include
>  /usr/include/x86_64-linux-gnu
>  /usr/include
> 
> Which also seems slightly dangerous, as /usr/local/include comes
> before the compiler private path.
> 
> IMO using our own set of standalone headers is more resilient.
> 
> Regarding the release risks, the main one would be breaking the build
> (as it's currently broken on Alpine). I think there's a very low risk
> of this change successfully producing a binary image that's broken,
> and hence with enough build testing it should be safe to merge.

This patch is a lot nicer and smaller than I thought it would be. It
looks like the best approach.
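
The headers this patch adds can stay tiny because they delegate everything
to compiler builtins rather than to libc. A minimal, standalone sketch of
the same stdarg approach (assuming a gcc/clang-style compiler that provides
the __builtin_va_* family; this is an illustration, not the patch itself):

```c
/* Thin stdarg wrapper over compiler builtins -- no libc involved.
 * This mirrors the shape of the tools/firmware/include/stdarg.h below. */
typedef __builtin_va_list va_list;
#define va_start(ap, last) __builtin_va_start(ap, last)
#define va_arg             __builtin_va_arg
#define va_end(ap)         __builtin_va_end(ap)

/* Sum 'count' int arguments, exercising the wrappers. */
static int sum(int count, ...)
{
    va_list ap;
    int i, total = 0;

    va_start(ap, count);
    for ( i = 0; i < count; i++ )
        total += va_arg(ap, int);
    va_end(ap);

    return total;
}
```

Because the builtins are part of the compiler itself, this compiles with
-nostdinc and no system headers at all, which is exactly what the firmware
build needs.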

In terms of testing, gitlab-ci has a pretty wide build test coverage, so
if we can pass those (and you have already provided a link with all
tests green in patch #0) then I am in favor of getting this in for 4.15.


> ---
>  README                                        |  3 --
>  tools/firmware/Rules.mk                       | 11 ++++++
>  tools/firmware/include/stdarg.h               | 10 +++++
>  tools/firmware/include/stdbool.h              |  9 +++++
>  tools/firmware/include/stddef.h               | 10 +++++
>  tools/firmware/include/stdint.h               | 39 +++++++++++++++++++
>  tools/firmware/rombios/32bit/rombios_compat.h |  4 +-
>  7 files changed, 80 insertions(+), 6 deletions(-)
>  create mode 100644 tools/firmware/include/stdarg.h
>  create mode 100644 tools/firmware/include/stdbool.h
>  create mode 100644 tools/firmware/include/stddef.h
>  create mode 100644 tools/firmware/include/stdint.h
> 
> diff --git a/README b/README
> index 33cdf6b826..5167bb1708 100644
> --- a/README
> +++ b/README
> @@ -62,9 +62,6 @@ provided by your OS distributor:
>      * GNU bison and GNU flex
>      * GNU gettext
>      * ACPI ASL compiler (iasl)
> -    * Libc multiarch package (e.g. libc6-dev-i386 / glibc-devel.i686).
> -      Required when building on a 64-bit platform to build
> -      32-bit components which are enabled on a default build.
>  
>  In addition to the above there are a number of optional build
>  prerequisites. Omitting these will cause the related features to be
> diff --git a/tools/firmware/Rules.mk b/tools/firmware/Rules.mk
> index 26bbddccd4..5d09ab06df 100644
> --- a/tools/firmware/Rules.mk
> +++ b/tools/firmware/Rules.mk
> @@ -17,3 +17,14 @@ $(call cc-options-add,CFLAGS,CC,$(EMBEDDED_EXTRA_CFLAGS))
>  
>  # Extra CFLAGS suitable for an embedded type of environment.
>  CFLAGS += -fno-builtin -msoft-float
> +
> +# Use our own set of library headers to build firmware.
> +#
> +# Ideally we would instead use -ffreestanding, but that relies on the compiler
> +# having the right order for include paths (ie: compiler private headers before
> +# system ones). This is not the case in Alpine at least which searches system
> +# headers before compiler ones, and has been reported upstream:
> +# https://gitlab.alpinelinux.org/alpine/aports/-/issues/12477
> +# In the meantime (and for resilience against broken compilers) use our own set
> +# of headers that provide what's needed for the firmware build.
> +CFLAGS += -nostdinc -I$(XEN_ROOT)/tools/firmware/include
> diff --git a/tools/firmware/include/stdarg.h b/tools/firmware/include/stdarg.h
> new file mode 100644
> index 0000000000..c5e3761cd2
> --- /dev/null
> +++ b/tools/firmware/include/stdarg.h
> @@ -0,0 +1,10 @@
> +#ifndef _STDARG_H_
> +#define _STDARG_H_
> +
> +typedef __builtin_va_list va_list;
> +#define va_copy(dest, src) __builtin_va_copy(dest, src)
> +#define va_start(ap, last) __builtin_va_start(ap, last)
> +#define va_end(ap) __builtin_va_end(ap)
> +#define va_arg __builtin_va_arg
> +
> +#endif
> diff --git a/tools/firmware/include/stdbool.h b/tools/firmware/include/stdbool.h
> new file mode 100644
> index 0000000000..0cf76b106c
> --- /dev/null
> +++ b/tools/firmware/include/stdbool.h
> @@ -0,0 +1,9 @@
> +#ifndef _STDBOOL_H_
> +#define _STDBOOL_H_
> +
> +#define bool _Bool
> +#define true 1
> +#define false 0
> +#define __bool_true_false_are_defined 1
> +
> +#endif
> diff --git a/tools/firmware/include/stddef.h b/tools/firmware/include/stddef.h
> new file mode 100644
> index 0000000000..c7f974608a
> --- /dev/null
> +++ b/tools/firmware/include/stddef.h
> @@ -0,0 +1,10 @@
> +#ifndef _STDDEF_H_
> +#define _STDDEF_H_
> +
> +typedef __SIZE_TYPE__ size_t;
> +
> +#define NULL ((void*)0)
> +
> +#define offsetof(t, m) __builtin_offsetof(t, m)
> +
> +#endif
> diff --git a/tools/firmware/include/stdint.h b/tools/firmware/include/stdint.h
> new file mode 100644
> index 0000000000..7514096846
> --- /dev/null
> +++ b/tools/firmware/include/stdint.h
> @@ -0,0 +1,39 @@
> +#ifndef _STDINT_H_
> +#define _STDINT_H_
> +
> +#ifdef __LP64__
> +#error "32bit only header"
> +#endif
> +
> +typedef unsigned char uint8_t;
> +typedef signed char int8_t;
> +
> +typedef unsigned short uint16_t;
> +typedef signed short int16_t;
> +
> +typedef unsigned int uint32_t;
> +typedef signed int int32_t;
> +
> +typedef unsigned long long uint64_t;
> +typedef signed long long int64_t;
> +
> +#define INT8_MIN        (-0x7f-1)
> +#define INT16_MIN       (-0x7fff-1)
> +#define INT32_MIN       (-0x7fffffff-1)
> +#define INT64_MIN       (-0x7fffffffffffffffll-1)
> +
> +#define INT8_MAX        0x7f
> +#define INT16_MAX       0x7fff
> +#define INT32_MAX       0x7fffffff
> +#define INT64_MAX       0x7fffffffffffffffll
> +
> +#define UINT8_MAX       0xff
> +#define UINT16_MAX      0xffff
> +#define UINT32_MAX      0xffffffffu
> +#define UINT64_MAX      0xffffffffffffffffull
> +
> +typedef uint32_t uintptr_t;
> +
> +#define UINTPTR_MAX     UINT32_MAX
> +
> +#endif
> diff --git a/tools/firmware/rombios/32bit/rombios_compat.h b/tools/firmware/rombios/32bit/rombios_compat.h
> index 3fe7d67721..8ba4c17ffd 100644
> --- a/tools/firmware/rombios/32bit/rombios_compat.h
> +++ b/tools/firmware/rombios/32bit/rombios_compat.h
> @@ -8,9 +8,7 @@
>  
>  #define ADDR_FROM_SEG_OFF(seg, off)  (void *)((((uint32_t)(seg)) << 4) + (off))
>  
> -typedef unsigned char uint8_t;
> -typedef unsigned short int uint16_t;
> -typedef unsigned int uint32_t;
> +#include <stdint.h>
>  
>  typedef uint8_t  Bit8u;
>  typedef uint16_t Bit16u;
> -- 
> 2.30.1
> 
--8323329-1154894328-1614371797=:3234--


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 20:52:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 20:52:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90590.171528 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFk5p-0003vg-DA; Fri, 26 Feb 2021 20:52:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90590.171528; Fri, 26 Feb 2021 20:52:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFk5p-0003vZ-9l; Fri, 26 Feb 2021 20:52:09 +0000
Received: by outflank-mailman (input) for mailman id 90590;
 Fri, 26 Feb 2021 20:52:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lFk5o-0003vU-3G
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 20:52:08 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lFk5n-0003mr-GT; Fri, 26 Feb 2021 20:52:07 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lFk5n-0000aM-4F; Fri, 26 Feb 2021 20:52:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From;
	bh=VsJfaR9k5iBd3+FDtR3JcGQ5njGJ38LtsD9QIHJUxzI=; b=zfqh0kVJ+B3dwAi/OSzhIcbUAw
	gwqoMVOaLsXXP0Lgeh8cAkOiY4FVSfzWCspd8LziD+DRsDUXKSIs4Yj/M9N+0jcRdnX2gSXCrvBYl
	VhpC/KtVUfmr4O12G7bTV1WgZymuiV6pvcct1+ESEu0OGbM0DA6Q75ddyJe6b45yXTZ0=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	ash.j.wilding@gmail.com,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH] xen/arm: Ensure the vCPU context is seen before clearing the _VPF_down
Date: Fri, 26 Feb 2021 20:51:58 +0000
Message-Id: <20210226205158.20991-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1

From: Julien Grall <jgrall@amazon.com>

A vCPU can get scheduled as soon as _VPF_down is cleared. As there is
currently no ordering guarantee in arch_set_info_guest(), the flag may
be observed as cleared before the new values of the vCPU registers are
observed.

Add an smp_mb() before the flag is cleared to prevent re-ordering.

Signed-off-by: Julien Grall <jgrall@amazon.com>

---

Barriers should work in pairs. However, I am not entirely sure where to
put the other half. Maybe at the beginning of context_switch_to()?

The issue described here is also quite theoretical, because hundreds of
instructions are executed between the time a vCPU is seen as runnable
and the time it is scheduled. But better safe than sorry :).
---
 xen/arch/arm/domain.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index bdd3d3e5b5d5..2b705e66be81 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -914,7 +914,14 @@ int arch_set_info_guest(
     v->is_initialised = 1;
 
     if ( ctxt->flags & VGCF_online )
+    {
+        /*
+         * The vCPU can be scheduled as soon as _VPF_down is cleared.
+         * So clear the bit *after* the context was loaded.
+         */
+        smp_mb();
         clear_bit(_VPF_down, &v->pause_flags);
+    }
     else
         set_bit(_VPF_down, &v->pause_flags);
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Feb 26 21:01:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 21:01:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90595.171540 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFkEh-000553-9o; Fri, 26 Feb 2021 21:01:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90595.171540; Fri, 26 Feb 2021 21:01:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFkEh-00054w-6g; Fri, 26 Feb 2021 21:01:19 +0000
Received: by outflank-mailman (input) for mailman id 90595;
 Fri, 26 Feb 2021 21:01:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ECYH=H4=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lFkEg-00054r-Gg
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 21:01:18 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3585c70b-e534-423a-9daf-6c0c7dcd748f;
 Fri, 26 Feb 2021 21:01:17 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id AD00B64F0E;
 Fri, 26 Feb 2021 21:01:16 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3585c70b-e534-423a-9daf-6c0c7dcd748f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1614373277;
	bh=mAq4FzubIn94I1dTVuxTmD/bmmz+4fpky/DYOFONgrA=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=bp65KSvOHRYfDF4P8WRx5jrOJ0mpmIZjSLdwW18FDalWC7ujV9bhijBzK/pILsB00
	 nse0cNXxtK65AMGAMlV9msmDBc2aq2HlGjAM988dP5Z/cROldSZLD0pM9ESLflfH9c
	 KQ5511E9+0rHzTs/BpyQVd6VZ8zp+sMURp5JxQzIFR4L8K1LO6DSbBUZS7raM/C5TL
	 uMCq/S3eJm6TpEBojfs1lN5iWAs5SNIVQlSao68nHPVxz/tbDCsz6I41Ic5Vxrym7A
	 a69CDt9a/QqReEf6fhgHfKcOPVGt9SBuAOuxPun6iwnIA8dPSrKTRE0gUBtF7d/jD+
	 c4LtUxbzYYm5A==
Date: Fri, 26 Feb 2021 13:01:15 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Jan Beulich <jbeulich@suse.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Stefano Stabellini <stefano.stabellini@xilinx.com>, 
    andrew.cooper3@citrix.com, julien@xen.org, xen-devel@lists.xenproject.org
Subject: Re: [PATCH] xen: introduce XENFEAT_direct_mapped and
 XENFEAT_not_direct_mapped
In-Reply-To: <fe4f0f87-0b6a-c37a-7f17-e3cf40f739f1@suse.com>
Message-ID: <alpine.DEB.2.21.2102261246580.3234@sstabellini-ThinkPad-T480s>
References: <20210225012243.28530-1-sstabellini@kernel.org> <96d764b6-a719-711c-31ea-235381bfd0ce@suse.com> <alpine.DEB.2.21.2102250948160.3234@sstabellini-ThinkPad-T480s> <fe4f0f87-0b6a-c37a-7f17-e3cf40f739f1@suse.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 26 Feb 2021, Jan Beulich wrote:
> On 25.02.2021 21:51, Stefano Stabellini wrote:
> > On Thu, 25 Feb 2021, Jan Beulich wrote:
> >> On 25.02.2021 02:22, Stefano Stabellini wrote:
> >>> --- a/xen/include/public/features.h
> >>> +++ b/xen/include/public/features.h
> >>> @@ -114,6 +114,13 @@
> >>>   */
> >>>  #define XENFEAT_linux_rsdp_unrestricted   15
> >>>  
> >>> +/*
> >>> + * A direct-mapped (or 1:1 mapped) domain is a domain for which its
> >>> + * local pages have gfn == mfn.
> >>> + */
> >>> +#define XENFEAT_not_direct_mapped       16
> >>> +#define XENFEAT_direct_mapped           17
> >>
> >> Why two new values? Absence of XENFEAT_direct_mapped requires
> >> implying not-direct-mapped by the consumer anyway, doesn't it?
> > 
> > That's because if we add both flags we can avoid all unpleasant guessing
> > games in the guest kernel.
> > 
> > If one flag or the other flag is set, we can make an informed decision.
> > 
> > But if neither flag is set, it means we are running on an older Xen,
> > and we fall back on the current checks.
> 
> Oh, okay - if there's guesswork to avoid, then I see the point.
> Maybe mention in the description?

Yes, I can do that.


> >> Further, quoting xen/mm.h: "For a non-translated guest which
> >> is aware of Xen, gfn == mfn." This to me implies that PV would
> >> need to get XENFEAT_direct_mapped set; not sure whether this
> >> simply means x86'es is_domain_direct_mapped() is wrong, but if
> >> it is, uses elsewhere in the code would likely need changing.
> > 
> > That's a good point, I didn't think about x86 PV. I think the two flags
> > are needed for autotranslated guests. I don't know for sure what is best
> > for non-autotranslated guests.
> > 
> > Maybe we could say that XENFEAT_not_direct_mapped and
> > XENFEAT_direct_mapped only apply to XENFEAT_auto_translated_physmap
> > guests. And it would match the implementation of
> > is_domain_direct_mapped().
> 
> I'm having trouble understanding this last sentence, and hence I'm
> not sure I understand the rest in the way you may mean it. Neither
> x86'es nor Arm's is_domain_direct_mapped() has any check towards a
> guest being translated (obviously such a check would be redundant
> on Arm).

I meant that we are not explicitly checking for auto_translated in
either version of is_domain_direct_mapped(), but it is sort of implied.


> > For non XENFEAT_auto_translated_physmap guests we could either do:
> > 
> > - neither flag is set
> > - set XENFEAT_direct_mapped (without changing the implementation of
> >   is_domain_direct_mapped)
> > 
> > What do you think? I am happy either way.
> 
> I'm happy either way as well; suitably described perhaps setting
> XENFEAT_direct_mapped when !paging_mode_translate() would be
> slightly more "natural". But a spelled out and enforced
> dependency upon XENFEAT_auto_translated_physmap would too be fine
> with me.

OK. I'll go with:

            if ( is_domain_direct_mapped(d) || !paging_mode_translate(d) )
                fi.submap |= (1U << XENFEAT_direct_mapped);
            else
                fi.submap |= (1U << XENFEAT_not_direct_mapped);


With an appropriate explanation.
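
The guest-side decision that the two flags enable can be sketched as
follows (a standalone illustration with hypothetical names; a real guest
would obtain the submap via the XENVER_get_features hypercall):

```c
#include <stdbool.h>

#define XENFEAT_not_direct_mapped 16
#define XENFEAT_direct_mapped     17

enum mapping { DIRECT_MAPPED, NOT_DIRECT_MAPPED, UNKNOWN_LEGACY };

/* With either flag set the decision is explicit; with neither set we
 * are running on an older Xen and must fall back to the guest's
 * existing heuristics -- the "unpleasant guessing games" above. */
static enum mapping classify(unsigned int submap)
{
    if ( submap & (1U << XENFEAT_direct_mapped) )
        return DIRECT_MAPPED;
    if ( submap & (1U << XENFEAT_not_direct_mapped) )
        return NOT_DIRECT_MAPPED;
    return UNKNOWN_LEGACY;
}
```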


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 22:39:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 22:39:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90602.171552 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFllk-0005z8-Km; Fri, 26 Feb 2021 22:39:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90602.171552; Fri, 26 Feb 2021 22:39:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFllk-0005z1-HM; Fri, 26 Feb 2021 22:39:32 +0000
Received: by outflank-mailman (input) for mailman id 90602;
 Fri, 26 Feb 2021 22:39:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=e85+=H4=strugglers.net=andy@srs-us1.protection.inumbo.net>)
 id 1lFllj-0005yw-AN
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 22:39:31 +0000
Received: from mail.bitfolk.com (unknown [2001:ba8:1f1:f019::25])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dd234f42-7f1a-4e3d-bcb3-e9c19d6a3d23;
 Fri, 26 Feb 2021 22:39:28 +0000 (UTC)
Received: from andy by mail.bitfolk.com with local (Exim 4.84_2)
 (envelope-from <andy@strugglers.net>) id 1lFllf-00033i-Ms
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 22:39:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dd234f42-7f1a-4e3d-bcb3-e9c19d6a3d23
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=bitfolk.com; s=alpha;
	h=Content-Type:MIME-Version:Message-ID:Subject:To:From:Date; bh=YvoH1l9k9s0wx7FCW2R9QVvFEOQ66BCYK+5ppEQvY2Q=;
	b=KoqeHF/UbgxO6zmaFtKIiT3kbN9vDiCHIb7n5xDgQ28zgqW27BFVsS0DdTk39LYk2eh2Ao76HvoDB1QwVsF7/Cp6P7zsAESKRIV6VtDdVvA1tm1h/x/m3DWRJ8nDe+ibiqDl3ADwoeCUUts4jriDtc9aQbRDertt3f5QfEd43Q4YoJ/sj81VDHs7AAjwkhxFqHeVskxYioixf2cvuHN3qkNwmCrogxTG9PGxtMumpgv58apGkvGTCmUS0m8kzQZwFrQ2Xw6dFemVmQHVpoHtk/psjUjC8DNQkjOHIg8WVMwy6oqzLRNUoyCdMLeG/J8lBKm7eZ5gZEIE1J5M9lT6aw==;
Date: Fri, 26 Feb 2021 22:39:27 +0000
From: Andy Smith <andy@strugglers.net>
To: xen-devel@lists.xenproject.org
Subject: dom0 suddenly blocking on all access to md device
Message-ID: <20210226223927.GQ29212@bitfolk.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
OpenPGP: id=BF15490B; url=http://strugglers.net/~andy/pubkey.asc
X-URL: http://strugglers.net/wiki/User:Andy
User-Agent: Mutt/1.5.23 (2014-03-12)
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: andy@strugglers.net
X-SA-Exim-Scanned: No (on mail.bitfolk.com); SAEximRunCond expanded to false

Hi,

I suspect this might be an issue in the dom0 kernel (Debian buster,
kernel 4.19.0-13-amd64), but just lately I've been sporadically
having issues where dom0 blocks or severely slows down on all access
to the particular md device that hosts all domU block devices.

Setup in dom0: an md RAID10 that is used as an LVM PV for an LVM volume
group, where all domU block devices are LVM logical volumes in that
group. So the relevant part of a domU config file might look like:

disk = [ "phy:/dev/myvg/domu_debtest1_xvda,xvda,w",
         "phy:/dev/myvg/domu_debtest1_xvdb,xvdb,w" ]

The guests are mostly PV, a sprinkling of PVH, no HVM.

There are 5 of these servers, but 3 of them have only recently been
upgraded to Xen 4.12.14 (on Debian buster) from Xen 4.10 (on Debian
jessie). The fact that all of them have been pretty stable in the
past, on differing hardware, makes me discount a hardware issue. The
fact that two of them have been buster / 4.12.x for a long time
without issue but are also now starting to see this does make me
think that it's a recent dom0 kernel issue.

When the problem occurs, inside every domU I see things like this:

Feb 26 20:02:34 backup4 kernel: [2530464.736085] INFO: task btrfs-transacti:333 blocked for more than 120 seconds.
Feb 26 20:02:34 backup4 kernel: [2530464.736107]       Not tainted 4.9.0-14-amd64 #1 Debian 4.9.246-2
Feb 26 20:02:34 backup4 kernel: [2530464.736117] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Feb 26 20:02:34 backup4 kernel: [2530464.736131] btrfs-transacti D    0   333      2 0x00000000
Feb 26 20:02:34 backup4 kernel: [2530464.736146]  0000000000000246 ffff8800f4e0c400 0000000000000000 ffff8800f8a7f100
Feb 26 20:02:34 backup4 kernel: [2530464.736168]  ffff8800fad18a00 ffff8800fa7dd000 ffffc90040b2f670 ffffffff8161a979
Feb 26 20:02:34 backup4 kernel: [2530464.736188]  ffff8800fa6d0200 0000000000000000 ffff8800fad18a00 0000000000000010
Feb 26 20:02:34 backup4 kernel: [2530464.736209] Call Trace:
Feb 26 20:02:34 backup4 kernel: [2530464.736223]  [<ffffffff8161a979>] ? __schedule+0x239/0x6f0
Feb 26 20:02:34 backup4 kernel: [2530464.736236]  [<ffffffff8161ae62>] ? schedule+0x32/0x80
Feb 26 20:02:34 backup4 kernel: [2530464.736248]  [<ffffffff8161e1fd>] ? schedule_timeout+0x1dd/0x380
Feb 26 20:02:34 backup4 kernel: [2530464.736263]  [<ffffffff8101c201>] ? xen_clocksource_get_cycles+0x11/0x20
Feb 26 20:02:34 backup4 kernel: [2530464.736275]  [<ffffffff8161a6dd>] ? io_schedule_timeout+0x9d/0x100
Feb 26 20:02:34 backup4 kernel: [2530464.736289]  [<ffffffff81367964>] ? __sbitmap_queue_get+0x24/0x90
Feb 26 20:02:34 backup4 kernel: [2530464.736302]  [<ffffffff81317f60>] ? bt_get.isra.6+0x160/0x220
Feb 26 20:02:34 backup4 kernel: [2530464.736338]  [<ffffffffc0148bf8>] ? __btrfs_map_block+0x6c8/0x11d0 [btrfs]
Feb 26 20:02:34 backup4 kernel: [2530464.736353]  [<ffffffff810bf010>] ? prepare_to_wait_event+0xf0/0xf0
Feb 26 20:02:34 backup4 kernel: [2530464.736364]  [<ffffffff813182d3>] ? blk_mq_get_tag+0x23/0x90
Feb 26 20:02:34 backup4 kernel: [2530464.736377]  [<ffffffff81313b6a>] ? __blk_mq_alloc_request+0x1a/0x220
Feb 26 20:02:34 backup4 kernel: [2530464.736390]  [<ffffffff81314a39>] ? blk_mq_map_request+0xd9/0x170
Feb 26 20:02:34 backup4 kernel: [2530464.736402]  [<ffffffff8131726b>] ? blk_mq_make_request+0xbb/0x580
Feb 26 20:02:34 backup4 kernel: [2530464.736429]  [<ffffffffc0148bf8>] ? __btrfs_map_block+0x6c8/0x11d0 [btrfs]
Feb 26 20:02:34 backup4 kernel: [2530464.736444]  [<ffffffff8130b0f5>] ? generic_make_request+0x115/0x2d0
Feb 26 20:02:34 backup4 kernel: [2530464.736456]  [<ffffffff8130b326>] ? submit_bio+0x76/0x140
Feb 26 20:02:34 backup4 kernel: [2530464.736481]  [<ffffffffc0149d9a>] ? btrfs_map_bio+0x19a/0x340 [btrfs]
Feb 26 20:02:34 backup4 kernel: [2530464.736505]  [<ffffffffc0111635>] ? btree_submit_bio_hook+0xf5/0x110 [btrfs]
Feb 26 20:02:34 backup4 kernel: [2530464.736535]  [<ffffffffc0138318>] ? submit_one_bio+0x68/0x90 [btrfs]
Feb 26 20:02:34 backup4 kernel: [2530464.736561]  [<ffffffffc013fd4d>] ? read_extent_buffer_pages+0x1cd/0x300 [btrfs]
Feb 26 20:02:34 backup4 kernel: [2530464.736587]  [<ffffffffc010fbe0>] ? free_root_pointers+0x60/0x60 [btrfs]
Feb 26 20:02:34 backup4 kernel: [2530464.736609]  [<ffffffffc010ff9c>] ? btree_read_extent_buffer_pages+0x8c/0x100 [btrfs]
Feb 26 20:02:34 backup4 kernel: [2530464.736635]  [<ffffffffc0111814>] ? read_tree_block+0x34/0x50 [btrfs]
Feb 26 20:02:34 backup4 kernel: [2530464.736655]  [<ffffffffc00ef9f3>] ? read_block_for_search.isra.36+0x133/0x320 [btrfs]
Feb 26 20:02:34 backup4 kernel: [2530464.736678]  [<ffffffffc00eabe4>] ? unlock_up+0xd4/0x180 [btrfs]
Feb 26 20:02:34 backup4 kernel: [2530464.736700]  [<ffffffffc00f1b8d>] ? btrfs_search_slot+0x3ad/0xa00 [btrfs]
Feb 26 20:02:34 backup4 kernel: [2530464.736723]  [<ffffffffc00f3a47>] ? btrfs_insert_empty_items+0x67/0xc0 [btrfs]
Feb 26 20:02:34 backup4 kernel: [2530464.736748]  [<ffffffffc00ffe24>] ? __btrfs_run_delayed_refs+0xfc4/0x13a0 [btrfs]
Feb 26 20:02:34 backup4 kernel: [2530464.736763]  [<ffffffff810164bd>] ? xen_mc_flush+0xdd/0x1d0
Feb 26 20:02:34 backup4 kernel: [2530464.736785]  [<ffffffffc01033ad>] ? btrfs_run_delayed_refs+0x9d/0x2b0 [btrfs]
Feb 26 20:02:34 backup4 kernel: [2530464.736811]  [<ffffffffc0119817>] ? btrfs_commit_transaction+0x57/0xa10 [btrfs]
Feb 26 20:02:34 backup4 kernel: [2530464.736837]  [<ffffffffc011a266>] ? start_transaction+0x96/0x480 [btrfs]
Feb 26 20:02:34 backup4 kernel: [2530464.736861]  [<ffffffffc011464c>] ? transaction_kthread+0x1dc/0x200 [btrfs]
Feb 26 20:02:34 backup4 kernel: [2530464.736886]  [<ffffffffc0114470>] ? btrfs_cleanup_transaction+0x590/0x590 [btrfs]
Feb 26 20:02:34 backup4 kernel: [2530464.736901]  [<ffffffff8109be69>] ? kthread+0xd9/0xf0
Feb 26 20:02:34 backup4 kernel: [2530464.736913]  [<ffffffff8109bd90>] ? kthread_park+0x60/0x60
Feb 26 20:02:34 backup4 kernel: [2530464.736926]  [<ffffffff8161f8f7>] ? ret_from_fork+0x57/0x70

It happens with all kinds of guest kernels, and the blocked processes
are basically anything that tries to access the guest's block devices.

Over in the dom0 at the time, I mostly haven't managed to get logs,
probably because its logging is on the same md device that is having
problems. Some of the servers are fortunate enough to have their dom0
operating system installed on separate devices from the guest devices,
and on one of those I got this:

Feb 20 00:58:44 talisker kernel: [5876461.472590] INFO: task md5_raid10:226 blocked for more than 120 seconds.
Feb 20 00:58:44 talisker kernel: [5876461.473105]       Not tainted 4.19.0-13-amd64 #1 Debian 4.19.160-2
Feb 20 00:58:44 talisker kernel: [5876461.473523] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Feb 20 00:58:44 talisker kernel: [5876461.473936] md5_raid10      D    0   226      2 0x80000000
Feb 20 00:58:44 talisker kernel: [5876461.474341] Call Trace:
Feb 20 00:58:44 talisker kernel: [5876461.474743]  __schedule+0x29f/0x840
Feb 20 00:58:44 talisker kernel: [5876461.475142]  ? _raw_spin_unlock_irqrestore+0x14/0x20
Feb 20 00:58:44 talisker kernel: [5876461.475554]  schedule+0x28/0x80
Feb 20 00:58:44 talisker kernel: [5876461.475964]  md_super_wait+0x6e/0xa0 [md_mod]
Feb 20 00:58:44 talisker kernel: [5876461.476372]  ? finish_wait+0x80/0x80
Feb 20 00:58:44 talisker kernel: [5876461.476817]  md_bitmap_wait_writes+0x93/0xa0 [md_mod]
Feb 20 00:58:44 talisker kernel: [5876461.477504]  ? md_bitmap_get_counter+0x42/0xd0 [md_mod]
Feb 20 00:58:44 talisker kernel: [5876461.478248]  md_bitmap_daemon_work+0x1f7/0x370 [md_mod]
Feb 20 00:58:44 talisker kernel: [5876461.478904]  md_check_recovery+0x41/0x530 [md_mod]
Feb 20 00:58:44 talisker kernel: [5876461.479309]  raid10d+0x62/0x1460 [raid10]
Feb 20 00:58:44 talisker kernel: [5876461.479722]  ? __switch_to_asm+0x41/0x70
Feb 20 00:58:44 talisker kernel: [5876461.480133]  ? finish_task_switch+0x78/0x280
Feb 20 00:58:44 talisker kernel: [5876461.480540]  ? _raw_spin_lock_irqsave+0x15/0x40
Feb 20 00:58:44 talisker kernel: [5876461.480987]  ? lock_timer_base+0x67/0x80
Feb 20 00:58:44 talisker kernel: [5876461.481719]  ? _raw_spin_unlock_irqrestore+0x14/0x20
Feb 20 00:58:44 talisker kernel: [5876461.482358]  ? try_to_del_timer_sync+0x4d/0x80
Feb 20 00:58:44 talisker kernel: [5876461.482768]  ? del_timer_sync+0x37/0x40
Feb 20 00:58:44 talisker kernel: [5876461.483162]  ? schedule_timeout+0x173/0x3b0
Feb 20 00:58:44 talisker kernel: [5876461.483553]  ? md_rdev_init+0xb0/0xb0 [md_mod]
Feb 20 00:58:44 talisker kernel: [5876461.483944]  ? md_thread+0x94/0x150 [md_mod]
Feb 20 00:58:44 talisker kernel: [5876461.484345]  ? r10bio_pool_alloc+0x20/0x20 [raid10]
Feb 20 00:58:44 talisker kernel: [5876461.484777]  md_thread+0x94/0x150 [md_mod]
Feb 20 00:58:44 talisker kernel: [5876461.485500]  ? finish_wait+0x80/0x80
Feb 20 00:58:44 talisker kernel: [5876461.486083]  kthread+0x112/0x130
Feb 20 00:58:44 talisker kernel: [5876461.486479]  ? kthread_bind+0x30/0x30
Feb 20 00:58:44 talisker kernel: [5876461.486870]  ret_from_fork+0x35/0x40
Feb 20 00:58:44 talisker kernel: [5876461.487260] INFO: task 1.xvda-0:4237 blocked for more than 120 seconds.
Feb 20 00:58:44 talisker kernel: [5876461.487644]       Not tainted 4.19.0-13-amd64 #1 Debian 4.19.160-2
Feb 20 00:58:44 talisker kernel: [5876461.488027] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Feb 20 00:58:44 talisker kernel: [5876461.488422] 1.xvda-0        D    0  4237      2 0x80000000
Feb 20 00:58:44 talisker kernel: [5876461.488842] Call Trace:
Feb 20 00:58:44 talisker kernel: [5876461.489530]  __schedule+0x29f/0x840
Feb 20 00:58:44 talisker kernel: [5876461.490149]  ? _raw_spin_unlock_irqrestore+0x14/0x20
Feb 20 00:58:44 talisker kernel: [5876461.490545]  schedule+0x28/0x80
Feb 20 00:58:44 talisker kernel: [5876461.490954]  md_super_wait+0x6e/0xa0 [md_mod]
Feb 20 00:58:44 talisker kernel: [5876461.491330]  ? finish_wait+0x80/0x80
Feb 20 00:58:44 talisker kernel: [5876461.491708]  md_bitmap_wait_writes+0x93/0xa0 [md_mod]
Feb 20 00:58:44 talisker kernel: [5876461.492101]  md_bitmap_unplug+0xc5/0x120 [md_mod]
Feb 20 00:58:44 talisker kernel: [5876461.492490]  raid10_unplug+0xd4/0x190 [raid10]
Feb 20 00:58:44 talisker kernel: [5876461.492926]  blk_flush_plug_list+0xcf/0x240
Feb 20 00:58:44 talisker kernel: [5876461.493648]  blk_finish_plug+0x21/0x2e
Feb 20 00:58:44 talisker kernel: [5876461.494277]  dispatch_rw_block_io+0x696/0x990 [xen_blkback]
Feb 20 00:58:44 talisker kernel: [5876461.494657]  ? inv_show+0x30/0x30
Feb 20 00:58:44 talisker kernel: [5876461.495043]  __do_block_io_op+0x30f/0x610 [xen_blkback]
Feb 20 00:58:44 talisker kernel: [5876461.495458]  ? _raw_spin_unlock_irqrestore+0x14/0x20
Feb 20 00:58:44 talisker kernel: [5876461.495871]  ? try_to_del_timer_sync+0x4d/0x80
Feb 20 00:58:44 talisker kernel: [5876461.496264]  xen_blkif_schedule+0xdb/0x650 [xen_blkback]
Feb 20 00:58:44 talisker kernel: [5876461.496784]  ? finish_wait+0x80/0x80
Feb 20 00:58:44 talisker kernel: [5876461.497418]  ? xen_blkif_be_int+0x30/0x30 [xen_blkback]
Feb 20 00:58:44 talisker kernel: [5876461.498041]  kthread+0x112/0x130
Feb 20 00:58:44 talisker kernel: [5876461.498668]  ? kthread_bind+0x30/0x30
Feb 20 00:58:44 talisker kernel: [5876461.499309]  ret_from_fork+0x35/0x40
Feb 20 00:58:44 talisker kernel: [5876461.499960] INFO: task 1.xvda-1:4238 blocked for more than 120 seconds.
Feb 20 00:58:44 talisker kernel: [5876461.500518]       Not tainted 4.19.0-13-amd64 #1 Debian 4.19.160-2
Feb 20 00:58:44 talisker kernel: [5876461.500943] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Feb 20 00:58:44 talisker kernel: [5876461.501609] 1.xvda-1        D    0  4238      2 0x80000000
Feb 20 00:58:44 talisker kernel: [5876461.501992] Call Trace:
Feb 20 00:58:44 talisker kernel: [5876461.502372]  __schedule+0x29f/0x840
Feb 20 00:58:44 talisker kernel: [5876461.502747]  schedule+0x28/0x80
Feb 20 00:58:44 talisker kernel: [5876461.503121]  io_schedule+0x12/0x40
Feb 20 00:58:44 talisker kernel: [5876461.503494]  wbt_wait+0x205/0x300
Feb 20 00:58:44 talisker kernel: [5876461.503863]  ? wbt_wait+0x300/0x300
Feb 20 00:58:44 talisker kernel: [5876461.504237]  rq_qos_throttle+0x31/0x40
Feb 20 00:58:44 talisker kernel: [5876461.504637]  blk_mq_make_request+0x111/0x530
Feb 20 00:58:44 talisker kernel: [5876461.505319]  generic_make_request+0x1a4/0x400
Feb 20 00:58:44 talisker kernel: [5876461.505999]  raid10_unplug+0xfd/0x190 [raid10]
Feb 20 00:58:44 talisker kernel: [5876461.506402]  blk_flush_plug_list+0xcf/0x240
Feb 20 00:58:44 talisker kernel: [5876461.506772]  blk_finish_plug+0x21/0x2e
Feb 20 00:58:44 talisker kernel: [5876461.507140]  dispatch_rw_block_io+0x696/0x990 [xen_blkback]
Feb 20 00:58:44 talisker kernel: [5876461.507792]  ? inv_show+0x30/0x30
Feb 20 00:58:44 talisker kernel: [5876461.508166]  __do_block_io_op+0x30f/0x610 [xen_blkback]
Feb 20 00:58:44 talisker kernel: [5876461.508549]  ? _raw_spin_unlock_irqrestore+0x14/0x20
Feb 20 00:58:44 talisker kernel: [5876461.508967]  ? try_to_del_timer_sync+0x4d/0x80
Feb 20 00:58:44 talisker kernel: [5876461.509673]  xen_blkif_schedule+0xdb/0x650 [xen_blkback]
Feb 20 00:58:44 talisker kernel: [5876461.510304]  ? finish_wait+0x80/0x80
Feb 20 00:58:44 talisker kernel: [5876461.510678]  ? xen_blkif_be_int+0x30/0x30 [xen_blkback]
Feb 20 00:58:44 talisker kernel: [5876461.511049]  kthread+0x112/0x130
Feb 20 00:58:44 talisker kernel: [5876461.511413]  ? kthread_bind+0x30/0x30
Feb 20 00:58:44 talisker kernel: [5876461.511776]  ret_from_fork+0x35/0x40

Administrators of the guests notice problems and try to shut down or
reboot, but that fails because dom0 can't write to its xenstore, so
for the most part domains can't be managed after this happens and the
server has to be forcibly rebooted.

These are all using the default scheduler, which I understand has
been credit2 since 4.12. SMT is enabled and I've limited dom0 to 2
cores, then pinned dom0 to cores 0 and 1, and pinned all other guests
to their choice out of the remaining cores. That is something I did
fairly recently though; for a long time there was no pinning, yet
this still started happening.
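
For reference, the dom0 limiting and pinning described above can be
expressed as follows. This is a sketch only: the guest name and the
core range 2-7 are illustrative placeholders, not taken from the
affected hosts.

```shell
# Xen command line (e.g. GRUB_CMDLINE_XEN_DEFAULT in /etc/default/grub):
# limit dom0 to 2 vCPUs and pin them to pCPUs at boot.
#   dom0_max_vcpus=2 dom0_vcpus_pin

# Runtime equivalent with xl: pin dom0's two vCPUs to cores 0 and 1,
# then restrict all vCPUs of a guest (name illustrative) to the rest.
xl vcpu-pin Domain-0 0 0
xl vcpu-pin Domain-0 1 1
xl vcpu-pin guest1 all 2-7    # "all" = every vCPU of guest1; 2-7 is an assumed core range
```

`xl vcpu-list` then shows the resulting affinity per vCPU.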

In a couple of cases I've been able to run "xentop" and see a
particular guest doing heavy block device reads. I've then done an
"xl destroy" on that guest and everything has returned to normal.
Unfortunately the times this has happened have been on dom0s without
useful logs. There's just a gap in the logs between when the problems
started and when the (apparently) problematic domU was destroyed. The
problematic domU can then be booted again and life goes on.
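
The "spot the heavy reader in xentop" step above can be scripted. A
minimal sketch: locate the VBD_RD (cumulative block reads) column
from the batch-mode header rather than hard-coding it, since xentop's
layout varies between versions. The sample data here is fabricated
for illustration; on a live host the heredoc would be replaced by
`xentop -b -i 1`.

```shell
# Fabricated xentop batch output (one snapshot, two domains).
sample='NAME STATE CPU(sec) CPU(%) MEM(k) MEM(%) MAXMEM(k) MAXMEM(%) VCPUS NETS NETTX(k) NETRX(k) VBDS VBD_OO VBD_RD VBD_WR VBD_RSECT VBD_WSECT SSID
Domain-0 -----r 12345 1.0 4096000 10.0 8192000 20.0 2 0 0 0 0 0 0 0 0 0 0
guest1 --b--- 999 5.0 2048000 5.0 2097152 5.1 2 1 100 200 1 0 9999999 1234 1000000 5000 0'

# Find the VBD_RD column in the header, then print the domain with
# the highest read count. Live use: xentop -b -i 1 | awk '...'
printf '%s\n' "$sample" | awk '
    NR == 1 { for (i = 1; i <= NF; i++) if ($i == "VBD_RD") col = i; next }
    NF >= col && $col + 0 > max { max = $col + 0; name = $1 }
    END { print name, max }'
# prints: guest1 9999999
```

After confirming the guest really is the culprit, `xl destroy <name>`
is the manual step described above.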

So, it could be totally unrelated to Xen, and as I investigate
further I will try different kernels in dom0. But the way that
destroying a domU frees things up makes me wonder if it could be
Xen-related, maybe scheduler-related? Also, it's always the md device
that the guest block devices are on that is stalled - IO to other
devices in dom0

Are there any hypervisor magic-sysrq debug keys that could provide
useful information to you in ruling a Xen issue in or out?
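
Xen does expose debug keys, reachable via `xl debug-keys` (or over
the serial console). A sketch of typical usage; key assignments can
vary between Xen versions, so listing the legend first with `h` is
the safe starting point.

```shell
# Ask the hypervisor to dump state; output goes to the hypervisor
# console log, which is read back with `xl dmesg`.
xl debug-keys h      # print the debug-key legend for this Xen version
xl debug-keys r      # dump scheduler run queues
xl debug-keys q      # dump domain and vCPU state
xl dmesg             # retrieve the hypervisor console log
```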

Should I try using the "credit" scheduler (instead of "credit2") at
next boot?

I *think* this has only been seen with kernel version
4.19.0-13-amd64. Some of these servers have now been rebooted into
4.19.0-14-amd64 (the latest available package) because of the issue,
and it has not yet recurred on them.

If it does recur with 4.19.0-14-amd64, what kernel version would you
advise I try at the next reboot so as to take the Debian kernel out
of the picture? I will download an upstream kernel release and build
a Debian package out of it, using my existing kernel config as a
base.
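
The upstream-kernel-as-Debian-package route described above is
supported by the kernel's own build system. A sketch, seeding the
config from the running kernel; the 4.19.177 version is illustrative
only, substitute whichever release is chosen.

```shell
# Fetch and unpack an upstream release (version illustrative).
wget https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.19.177.tar.xz
tar xf linux-4.19.177.tar.xz && cd linux-4.19.177

# Seed from the running kernel's config, take defaults for any
# options new in this release, then build installable .deb packages.
cp /boot/config-"$(uname -r)" .config
make olddefconfig
make -j"$(nproc)" bindeb-pkg   # produces ../linux-image-*.deb etc.
```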

As Debian buster is on the 4.19 series, should I pick the latest
4.19.x longterm to stay close to it, or the 5.10.x longterm, or the
5.11.x stable?

Thanks,
Andy


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 22:48:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 22:48:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90605.171563 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFluo-0006tT-HT; Fri, 26 Feb 2021 22:48:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90605.171563; Fri, 26 Feb 2021 22:48:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFluo-0006tM-Ee; Fri, 26 Feb 2021 22:48:54 +0000
Received: by outflank-mailman (input) for mailman id 90605;
 Fri, 26 Feb 2021 22:48:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ECYH=H4=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lFlum-0006t1-F9
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 22:48:52 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 305b196f-caa3-4da4-82a6-1074a6de6549;
 Fri, 26 Feb 2021 22:48:51 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id E3C1A64EE2;
 Fri, 26 Feb 2021 22:48:50 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 305b196f-caa3-4da4-82a6-1074a6de6549
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1614379731;
	bh=iia1f0tQSqDCaikMDFO+Bo40Eak+IQuimn8f9AjXh4k=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=unCcIjUj2qlJaPAD12PHZj3xkY9KHIqKQOUfaz5Z+7z9bOcweBCtutTAZ9DjautGG
	 3I3I9Oxgc94tatSbjh3efUdz2ZyRaSDVjAmrtO6EJbbDOba+yrDKEUCPL40m4P17p0
	 OVcXSZxgfcHHTy1lrIFI5H/3KPUfw4L//OLmzrx8vZKQRtoIerxTqpySGc7YKMr3nD
	 shfqm8qPYhw5vZN5vD9+D0dXNuafQCPcqxykPxPNUVpa5slHDXujOokyLchmjEHrL1
	 o+vY2Xye8YfNmhMyT8KFO41Xo2fzFE8+og9BzGfFW0RxZfY8zD29oqqyPYMholPbX2
	 vSd8mDwID0GlA==
Date: Fri, 26 Feb 2021 14:48:50 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Andrew Cooper <andrew.cooper3@citrix.com>
cc: Xen-devel <xen-devel@lists.xenproject.org>, 
    Doug Goldstein <cardoe@cardoe.com>, Ian Jackson <iwj@xenproject.org>, 
    Stefano Stabellini <sstabellini@kernel.org>, roger.pau@citrix.com
Subject: Re: [PATCH for-4.15] automation: Fix the Alpine clang builds to use
 clang
In-Reply-To: <20210226110233.27991-1-andrew.cooper3@citrix.com>
Message-ID: <alpine.DEB.2.21.2102261446490.3234@sstabellini-ThinkPad-T480s>
References: <20210226110233.27991-1-andrew.cooper3@citrix.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 26 Feb 2021, Andrew Cooper wrote:
> Looks like a copy&paste error.
> 
> Fixes: f6e1d8515d7 ("automation: add alpine linux x86 build jobs")
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Thanks for the patch, and of course it is correct.

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


However, unfortunately it breaks the Alpine Linux gitlab-ci again :-(
I created a branch with Roger's patches plus this patch. The two clang
Alpine Linux build jobs fail:

https://gitlab.com/xen-project/people/sstabellini/xen/-/jobs/1059686530
https://gitlab.com/xen-project/people/sstabellini/xen/-/jobs/1059686532


The error is the following:

<built-in>:3:10: fatal error: 'cstring' file not found
#include "cstring"
         ^~~~~~~~~
1 error generated.
make[10]: *** [Makefile:120: headers++.chk] Error 1
make[10]: *** Waiting for unfinished jobs....







> ---
> CC: Doug Goldstein <cardoe@cardoe.com>
> CC: Ian Jackson <iwj@xenproject.org>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> ---
>  automation/gitlab-ci/build.yaml | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/automation/gitlab-ci/build.yaml b/automation/gitlab-ci/build.yaml
> index d00b8a5123..23ab81d892 100644
> --- a/automation/gitlab-ci/build.yaml
> +++ b/automation/gitlab-ci/build.yaml
> @@ -443,13 +443,13 @@ alpine-3.12-gcc-debug:
>    allow_failure: true
>  
>  alpine-3.12-clang:
> -  extends: .gcc-x86-64-build
> +  extends: .clang-x86-64-build
>    variables:
>      CONTAINER: alpine:3.12
>    allow_failure: true
>  
>  alpine-3.12-clang-debug:
> -  extends: .gcc-x86-64-build-debug
> +  extends: .clang-x86-64-build-debug
>    variables:
>      CONTAINER: alpine:3.12
>    allow_failure: true
> -- 
> 2.11.0
> 


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 22:50:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 22:50:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90608.171576 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFlw1-0007tD-2T; Fri, 26 Feb 2021 22:50:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90608.171576; Fri, 26 Feb 2021 22:50:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFlw0-0007t6-Vn; Fri, 26 Feb 2021 22:50:08 +0000
Received: by outflank-mailman (input) for mailman id 90608;
 Fri, 26 Feb 2021 22:50:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=e85+=H4=strugglers.net=andy@srs-us1.protection.inumbo.net>)
 id 1lFlvz-0007p9-G4
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 22:50:07 +0000
Received: from mail.bitfolk.com (unknown [2001:ba8:1f1:f019::25])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c26bac83-4914-43fd-8aa6-a6e918576926;
 Fri, 26 Feb 2021 22:50:06 +0000 (UTC)
Received: from andy by mail.bitfolk.com with local (Exim 4.84_2)
 (envelope-from <andy@strugglers.net>) id 1lFlvy-0003Mm-9L
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 22:50:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c26bac83-4914-43fd-8aa6-a6e918576926
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=bitfolk.com; s=alpha;
	h=In-Reply-To:Content-Transfer-Encoding:Content-Type:MIME-Version:References:Message-ID:Subject:To:From:Date; bh=ZGaVSndFsOKgS7jXAKHojYKaJn6UDrftbvmw5NEdXwg=;
	b=lN1eTn0SI0ACXROLwEkdRQpBeIYIMHIDjiJ8TGiZfcZtPYngnplpemXSUz2iML63fLoC6bkAdl8Li+NvU46EnFEb/wYGYbVb0Wl6JGSN33gDcX6oj9lVPRNQMHMcXe/B5KhdKB7WqFbCgBLf9b2Jxbbv2aaxStBRBfBB6ElPJzUyqA8rtaBzPiLlMIIpUERFK7S6mMK1IzC82djWiYX21zHln+kltpksIi5XK9dZujz8LnXm+3O3SfCRq2XxfJ9Vnaw1xyt8R44Q/pqkifZVQuGs1ulCwaJL+2qT1yTecC1TRinZSECV86jps+dQQYlXwNiDPTjVT5VFbCGueQnEQw==;
Date: Fri, 26 Feb 2021 22:50:06 +0000
From: Andy Smith <andy@strugglers.net>
To: xen-devel@lists.xenproject.org
Subject: Re: dom0 suddenly blocking on all access to md device
Message-ID: <20210226225006.GR29212@bitfolk.com>
References: <20210226223927.GQ29212@bitfolk.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20210226223927.GQ29212@bitfolk.com>
OpenPGP: id=BF15490B; url=http://strugglers.net/~andy/pubkey.asc
X-URL: http://strugglers.net/wiki/User:Andy
User-Agent: Mutt/1.5.23 (2014-03-12)
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: andy@strugglers.net
X-SA-Exim-Scanned: No (on mail.bitfolk.com); SAEximRunCond expanded to false

Oops, I didn't finish this sentence before sending:

On Fri, Feb 26, 2021 at 10:39:27PM +0000, Andy Smith wrote:
> Also, it's always the md device that the guest block devices are
> on that is stalled - IO to other devices in dom0
…seems fine.

Thanks,
Andy


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 22:51:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 22:51:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90611.171588 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFlxE-00081x-Ck; Fri, 26 Feb 2021 22:51:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90611.171588; Fri, 26 Feb 2021 22:51:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFlxE-00081q-9n; Fri, 26 Feb 2021 22:51:24 +0000
Received: by outflank-mailman (input) for mailman id 90611;
 Fri, 26 Feb 2021 22:51:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFlxC-00081i-J4; Fri, 26 Feb 2021 22:51:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFlxC-0005hY-BM; Fri, 26 Feb 2021 22:51:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFlxC-0004TC-1p; Fri, 26 Feb 2021 22:51:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lFlxC-0004Pb-1J; Fri, 26 Feb 2021 22:51:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=EWhCaZZGJB9LtCOyGhUNdNN1CFuuBDsS3O0y6+iYL7g=; b=oMmt/iL2NiShMHubwna7huMYdw
	wDxnj+dscQMCpREivGhAGBtn+aNRJ1ePqt+AIFHpT9arsFY70jDrNQ1pjq0EiB5FCGNlsK/+osv/7
	3mSsKXxLAlnYXvthObbNIvZZEuvFX+dslEHnnT9nMVMB4u3FTmH807jXUiXUerDnuZ2w=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159700-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 159700: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=51db2d7cf26d05a961ec0ee0eb773594b32cc4a1
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 26 Feb 2021 22:51:22 +0000

flight 159700 qemu-mainline real [real]
flight 159719 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/159700/
http://logs.test-lab.xenproject.org/osstest/logs/159719/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                51db2d7cf26d05a961ec0ee0eb773594b32cc4a1
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  190 days
Failing since        152659  2020-08-21 14:07:39 Z  189 days  365 attempts
Testing same since   159700  2021-02-26 08:46:59 Z    0 days    1 attempts

------------------------------------------------------------
428 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 117941 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Feb 26 22:52:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 22:52:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90615.171602 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFlyA-000891-NQ; Fri, 26 Feb 2021 22:52:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90615.171602; Fri, 26 Feb 2021 22:52:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFlyA-00088u-KS; Fri, 26 Feb 2021 22:52:22 +0000
Received: by outflank-mailman (input) for mailman id 90615;
 Fri, 26 Feb 2021 22:52:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ECYH=H4=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lFly9-00088p-30
 for xen-devel@lists.xenproject.org; Fri, 26 Feb 2021 22:52:21 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 39986784-6fb7-4293-82a7-e79b8de519ac;
 Fri, 26 Feb 2021 22:52:20 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 6817264EDC;
 Fri, 26 Feb 2021 22:52:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 39986784-6fb7-4293-82a7-e79b8de519ac
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1614379939;
	bh=TP3PUri1nqJIGJ0ib6Y6jLpS6FE2eGzTgEFt783zVVs=;
	h=From:To:Cc:Subject:Date:From;
	b=noPsku21c4JKET3kCAZnaA7iJuRsJOBLu0bH5RnMQogrUvOwX1IMNXsEizNPf46tg
	 /+EmwMkHJuuKfJL2fC1IoIcDn8+w4KUJSuo2TcUL7BoRGp6EQthXpAGVT4qD0qMJ/b
	 1bg6NKZ11mroe2IQ3q5AyKmQBKJhtDdKedb5lcj0I1rJxXxTOeBP1ttYeKv9l5lEEw
	 eE9VceFTG92ZZZ8VR6Lem2kW5EimMfnWVfJTnpA7dPoMuWqePNfQqWTzPzSCa5wqhM
	 N9DheWyElw+LBrAs/rTIPGI9jlwij34T1xlMph8YqjzYVWcWKsEJ+Bh1PlpcJwjcJx
	 5wRpxCubEso2Q==
From: Stefano Stabellini <sstabellini@kernel.org>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	Stefano Stabellini <stefano.stabellini@xilinx.com>,
	jbeulich@suse.com,
	andrew.cooper3@citrix.com,
	julien@xen.org
Subject: [PATCH v2] xen: introduce XENFEAT_direct_mapped and XENFEAT_not_direct_mapped
Date: Fri, 26 Feb 2021 14:52:17 -0800
Message-Id: <20210226225217.2137-1-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1

Introduce two feature flags to tell the domain whether it is
direct-mapped or not. They allow the guest kernel to make informed
decisions on matters such as swiotlb-xen enablement.

The introduction of both flags (XENFEAT_direct_mapped and
XENFEAT_not_direct_mapped) lets the guest kernel avoid any
guesswork when one of the two is present, and fall back to the current
checks when neither is present.

XENFEAT_direct_mapped is always set for guests that are not auto-translated.

For auto-translated guests, only Dom0 on ARM is direct-mapped. Also
see is_domain_direct_mapped(), which applies to auto-translated guests:
xen/include/asm-arm/domain.h:is_domain_direct_mapped
xen/include/asm-x86/domain.h:is_domain_direct_mapped

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
CC: jbeulich@suse.com
CC: andrew.cooper3@citrix.com
CC: julien@xen.org
---
Changes in v2:
- code style improvements
- better comments
- better commit message
- non-auto-translated domains are direct_mapped
---
 xen/common/kernel.c           |  4 ++++
 xen/include/public/features.h | 12 ++++++++++++
 2 files changed, 16 insertions(+)

diff --git a/xen/common/kernel.c b/xen/common/kernel.c
index 7a345ae45e..431447326c 100644
--- a/xen/common/kernel.c
+++ b/xen/common/kernel.c
@@ -560,6 +560,10 @@ DO(xen_version)(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
                              (1U << XENFEAT_hvm_callback_vector) |
                              (has_pirq(d) ? (1U << XENFEAT_hvm_pirqs) : 0);
 #endif
+            if ( is_domain_direct_mapped(d) || !paging_mode_translate(d) )
+                fi.submap |= (1U << XENFEAT_direct_mapped);
+            else
+                fi.submap |= (1U << XENFEAT_not_direct_mapped);
             break;
         default:
             return -EINVAL;
diff --git a/xen/include/public/features.h b/xen/include/public/features.h
index 1613b2aab8..4aebfd359a 100644
--- a/xen/include/public/features.h
+++ b/xen/include/public/features.h
@@ -114,6 +114,18 @@
  */
 #define XENFEAT_linux_rsdp_unrestricted   15
 
+/*
+ * A direct-mapped (or 1:1 mapped) domain is a domain whose
+ * local pages have gfn == mfn. If a domain is direct-mapped,
+ * XENFEAT_direct_mapped is set; otherwise XENFEAT_not_direct_mapped
+ * is set.
+ *
+ * Domains that are not auto-translated are always direct-mapped. Also
+ * see XENFEAT_auto_translated_physmap.
+ */
+#define XENFEAT_not_direct_mapped         16
+#define XENFEAT_direct_mapped             17
+
 #define XENFEAT_NR_SUBMAPS 1
 
 #endif /* __XEN_PUBLIC_FEATURES_H__ */
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Feb 26 23:31:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 Feb 2021 23:31:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90624.171615 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFmZo-0003qt-Oh; Fri, 26 Feb 2021 23:31:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90624.171615; Fri, 26 Feb 2021 23:31:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFmZo-0003qm-Ld; Fri, 26 Feb 2021 23:31:16 +0000
Received: by outflank-mailman (input) for mailman id 90624;
 Fri, 26 Feb 2021 23:31:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFmZn-0003qe-Cv; Fri, 26 Feb 2021 23:31:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFmZn-0006Ma-7j; Fri, 26 Feb 2021 23:31:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFmZm-00065H-TS; Fri, 26 Feb 2021 23:31:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lFmZm-0005O3-Sx; Fri, 26 Feb 2021 23:31:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=4CJz0qUwZQdcHVtVgHBSABAuPkjpANCqg16FeELAiOA=; b=MAnHZ+c5l24XayZxNQpl+gg7qO
	UKWwv3363BsMSdFs7CMUqF7f4MIAQjJeFHGSXzyXJDia3xAsNbD4ToRF2CmtzB/DPjYsA7UfhTtxc
	VMY5VvGxUWNsnIzTlP8yLr/SGhQs/2F5TU5xP+WMLwUl8IWJhYxeySWEVPlbGrrVswO8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159708-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 159708: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=62f2cf57840e3b08122b6326b0ed9f4b25ce15d9
X-Osstest-Versions-That:
    ovmf=6ffbb3581ab7c25a35041bac03b760af54f852bf
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 26 Feb 2021 23:31:14 +0000

flight 159708 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159708/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 62f2cf57840e3b08122b6326b0ed9f4b25ce15d9
baseline version:
 ovmf                 6ffbb3581ab7c25a35041bac03b760af54f852bf

Last test of basis   159695  2021-02-26 06:09:43 Z    0 days
Testing same since   159708  2021-02-26 12:45:22 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Laszlo Ersek <lersek@redhat.com>
  Ray Ni <ray.ni@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   6ffbb3581a..62f2cf5784  62f2cf57840e3b08122b6326b0ed9f4b25ce15d9 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat Feb 27 01:41:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 27 Feb 2021 01:41:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90642.171630 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFobs-0002iY-1s; Sat, 27 Feb 2021 01:41:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90642.171630; Sat, 27 Feb 2021 01:41:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFobr-0002iR-Uj; Sat, 27 Feb 2021 01:41:31 +0000
Received: by outflank-mailman (input) for mailman id 90642;
 Sat, 27 Feb 2021 01:41:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFobq-0002iJ-5J; Sat, 27 Feb 2021 01:41:30 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFobp-00029A-T2; Sat, 27 Feb 2021 01:41:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFobp-0003QZ-Gn; Sat, 27 Feb 2021 01:41:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lFobp-0001NQ-G8; Sat, 27 Feb 2021 01:41:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=9Udic/X2xzwcVk/YIr1PAudqhb+OxAWT6sObRWUmasQ=; b=yzkdEO2J8WundlTqZaI1xMEsfw
	tMoLkajzYyPPiVk/SNJ2h2QTtx4EAOWe9esZTcRCAnD6wZtiniA/mnG66gfkxFelrJ5F4osckr7q3
	P1mcj9TQVsjgfkN0U1H4kqIUHArkTMAtA1ILzaJbfeKQYj2ImMMV6Dz0Wgg4E0nZkujo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159702-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 159702: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=ef1fcccf6e5fe3aabe7c3590964efac6d5220c43
X-Osstest-Versions-That:
    linux=fc944ddc0b4a019d4ece166909e65fa2a11c7e0e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 27 Feb 2021 01:41:29 +0000

flight 159702 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159702/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt   16 saverestore-support-check fail blocked in 159590
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 159590
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 159590
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 159590
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 159590
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 159590
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 159590
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 159590
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 159590
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 159590
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 159590
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                ef1fcccf6e5fe3aabe7c3590964efac6d5220c43
baseline version:
 linux                fc944ddc0b4a019d4ece166909e65fa2a11c7e0e

Last test of basis   159590  2021-02-23 14:12:33 Z    3 days
Testing same since   159702  2021-02-26 09:41:36 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexei Starovoitov <ast@kernel.org>
  Andrew Morton <akpm@linux-foundation.org>
  Anton Altaparmakov <anton@tuxera.com>
  Christoph Hellwig <hch@lst.de>
  Christoph Schemmel <christoph.schemmel@gmail.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Florian Fainelli <f.fainelli@gmail.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Jakub Kicinski <kuba@kernel.org>
  Jason Self <jason@bluehome.net>
  Jiri Kosina <jkosina@suse.cz>
  Johan Hovold <johan@kernel.org>
  Jon Hunter <jonathanh@nvidia.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Masahiro Yamada <masahiroy@kernel.org>
  Paolo Bonzini <pbonzini@redhat.com>
  Raju Rangoju <rajur@chelsio.com>
  Rolf Eike Beer <eb@emlix.com>
  Rong Chen <rong.a.chen@intel.com>
  Ross Schmidt <ross.schm.dev@gmail.com>
  Rustam Kovhaev <rkovhaev@gmail.com>
  Sameer Pujar <spujar@nvidia.com>
  Sasha Levin <sashal@kernel.org>
  Sean Christopherson <seanjc@google.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Shyam Prasad N <sprasad@microsoft.com>
  Stefan Ursella <stefan.ursella@wolfvision.net>
  Steve French <stfrench@microsoft.com>
  syzbot+c584225dabdea2f71969@syzkaller.appspotmail.com
  Thierry Reding <treding@nvidia.com>
  Will McVicker <willmcvicker@google.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   fc944ddc0b4a..ef1fcccf6e5f  ef1fcccf6e5fe3aabe7c3590964efac6d5220c43 -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Sat Feb 27 01:59:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 27 Feb 2021 01:59:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90647.171645 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFosh-0003u5-K5; Sat, 27 Feb 2021 01:58:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90647.171645; Sat, 27 Feb 2021 01:58:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFosh-0003ty-H7; Sat, 27 Feb 2021 01:58:55 +0000
Received: by outflank-mailman (input) for mailman id 90647;
 Sat, 27 Feb 2021 01:58:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ph90=H5=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lFosf-0003td-VW
 for xen-devel@lists.xenproject.org; Sat, 27 Feb 2021 01:58:54 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e55d613a-c953-4020-b94f-a78558af8258;
 Sat, 27 Feb 2021 01:58:53 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 5053964EE2;
 Sat, 27 Feb 2021 01:58:52 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e55d613a-c953-4020-b94f-a78558af8258
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1614391132;
	bh=sJ11UUMLMAOXLV7KBsrsf1WImOyN0UyQsOelVCL69Lg=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=OPZBlJLN3oW9zMbCEl1J969n8DRCEDCXhu4MMVWAtxp48MX+/kIWeoA7hwt/iZl+L
	 IqIXGFQI5eYBhKU/xWgb30v87XpYtvNtD8l8v/n8ZThOsZddXdY61oXFL2PFx5s9TG
	 XxsN2mPMcKQQBKZ1fQJ2awngq9xFSXbd9RByeEp/BYFXZ3H5Kp4AmnKZlaEUxKF1Ju
	 qlYaOkDpjU1EYOUwkHYhVdbrqJcQ26D/1g3k8BX/JNxRfGHilFOY6ggC4b/xHGaVP/
	 C8ZE5fViDTiV7b3elEvogJeKnFVUi8DjzAUHImcy5VOZ2cRUvQe4RLN/RqQcU1cPdv
	 jj0O8NKXymPAw==
Date: Fri, 26 Feb 2021 17:58:43 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, 
    ash.j.wilding@gmail.com, Julien Grall <jgrall@amazon.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: Ensure the vCPU context is seen before clearing
 the _VPF_down
In-Reply-To: <20210226205158.20991-1-julien@xen.org>
Message-ID: <alpine.DEB.2.21.2102261756280.2682@sstabellini-ThinkPad-T480s>
References: <20210226205158.20991-1-julien@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 26 Feb 2021, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> A vCPU can get scheduled as soon as _VPF_down is cleared. As there is
> currently no ordering guarantee in arch_set_info_guest(), the flag may
> be observed as cleared before the new values of the vCPU registers are
> observed.
> 
> Add an smp_mb() before the flag is cleared to prevent re-ordering.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> ---
> 
> Barriers should work in pairs. However, I am not entirely sure where to
> put the other half. Maybe at the beginning of context_switch_to()?

It should be right after VGCF_online is set or cleared, right? So it
would be:

xen/arch/arm/domctl.c:arch_get_info_guest
xen/arch/arm/vpsci.c:do_common_cpu_on

But I think it is impossible for either of them to be called at the same
time as arch_set_info_guest, which makes me wonder whether we actually
need the barrier...



> The issue described here is also quite theoretical because hundreds of
> instructions are executed between the time a vCPU is seen as runnable
> and the time it is scheduled. But better safe than sorry :).
> ---
>  xen/arch/arm/domain.c | 7 +++++++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index bdd3d3e5b5d5..2b705e66be81 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -914,7 +914,14 @@ int arch_set_info_guest(
>      v->is_initialised = 1;
>  
>      if ( ctxt->flags & VGCF_online )
> +    {
> +        /*
> +         * The vCPU can be scheduled as soon as _VPF_down is cleared.
> +         * So clear the bit *after* the context was loaded.
> +         */
> +        smp_mb();
>          clear_bit(_VPF_down, &v->pause_flags);
> +    }
>      else
>          set_bit(_VPF_down, &v->pause_flags);
>  
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Sat Feb 27 03:27:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 27 Feb 2021 03:27:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90678.171656 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFqFj-0004vw-3p; Sat, 27 Feb 2021 03:26:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90678.171656; Sat, 27 Feb 2021 03:26:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFqFj-0004vp-0u; Sat, 27 Feb 2021 03:26:47 +0000
Received: by outflank-mailman (input) for mailman id 90678;
 Sat, 27 Feb 2021 03:26:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFqFi-0004vh-9K; Sat, 27 Feb 2021 03:26:46 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFqFi-0004Gz-2a; Sat, 27 Feb 2021 03:26:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFqFh-0000jH-PA; Sat, 27 Feb 2021 03:26:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lFqFh-0001Dc-Og; Sat, 27 Feb 2021 03:26:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=bbMYUm0QanXwKVWbCMuzlYJGBKLrOYu3f7L2nxDvwkI=; b=Nm+WQZvDHSTuK6k8Uc+BhJp+6l
	mj94oEVbra5O/xiIdm6vE0zdYto4VwwSCP/wVZxTIGVTHtZ+2MgdFMhKNmdVIAvdz86Z8gApoyfBx
	505gdX6wdEYyq039OtugcD3fM3n6rIZZAXTyPJmOigRfion1wcPZyaXQm8DxckDntm/I=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159703-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 159703: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=2c87f7a38f930ef6f6a7bdd04aeb82ce3971b54b
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 27 Feb 2021 03:26:45 +0000

flight 159703 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159703/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl     10 host-ping-check-xen fail in 159685 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl           8 xen-boot                   fail pass in 159685
 test-amd64-amd64-examine      4 memdisk-try-append         fail pass in 159685

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 linux                2c87f7a38f930ef6f6a7bdd04aeb82ce3971b54b
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  210 days
Failing since        152366  2020-08-01 20:49:34 Z  209 days  361 attempts
Testing same since   159685  2021-02-25 20:39:33 Z    1 days    2 attempts

------------------------------------------------------------
5105 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1258498 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Feb 27 04:05:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 27 Feb 2021 04:05:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90684.171671 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFqqw-0000eX-8A; Sat, 27 Feb 2021 04:05:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90684.171671; Sat, 27 Feb 2021 04:05:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFqqw-0000eQ-5A; Sat, 27 Feb 2021 04:05:14 +0000
Received: by outflank-mailman (input) for mailman id 90684;
 Sat, 27 Feb 2021 04:05:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GzqE=H5=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lFqqu-0000eL-78
 for xen-devel@lists.xenproject.org; Sat, 27 Feb 2021 04:05:12 +0000
Received: from mail-ot1-x334.google.com (unknown [2607:f8b0:4864:20::334])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a431c4dd-d743-4aa7-ab2c-6997a79ea312;
 Sat, 27 Feb 2021 04:05:10 +0000 (UTC)
Received: by mail-ot1-x334.google.com with SMTP id f33so11105367otf.11
 for <xen-devel@lists.xenproject.org>; Fri, 26 Feb 2021 20:05:10 -0800 (PST)
Received: from thewall (142-79-211-230.starry-inc.net. [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id f29sm2164908ook.7.2021.02.26.20.05.09
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 26 Feb 2021 20:05:09 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a431c4dd-d743-4aa7-ab2c-6997a79ea312
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:content-transfer-encoding:in-reply-to;
        bh=U3GUvGQhE//spACJjXMuj5TJ38Bc8rINQlyfWMkdizY=;
        b=piZga88zF95a+aseMFBPmFFeMX+kgz4bPIrU3YoP31j6+b3pMfPCxm3qA1hakBPGEG
         yX4K4IrkMdopR+4ITrOxswAuGVhJMoeWrkYWyIARuhJbe4IFQWOpo9RenkefPFFyKb0f
         /bKszBs5uTJ42ney5j83jodLggq7VdE2EHua4EFJM2e0fCZEMvBr2pPbJWEPG/cwamOi
         h++UHCwmI6tAVh+epHU1sPx4A8yugxD6gTlmP/AkoMNNAW1Wj+cUsG9LdMr4bTnNLFF7
         2neyviNWw6aLq6bmWUT85zAyPOqkWSN9L7qcqD2EuVWRh/36IyG8ZK3M/Dd6bL5gjrGi
         R91w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:content-transfer-encoding
         :in-reply-to;
        bh=U3GUvGQhE//spACJjXMuj5TJ38Bc8rINQlyfWMkdizY=;
        b=PtHRwS78T8jwyhuMxLdOq1EQ9h+0nSPvYhU6Qchu2Vd6pUAUts3ODcsXFdTXq6u+nw
         7Bf/2q1I/VCTDNurOSSTvTGNNyyCszZo8d/ABHgxXhiWz2Ut5ngTwXwK72s5mtQjKIvp
         3cHwSKVorGtqOlpQPPQoG6feFB67q00+aawXbiFF8O+j756aciajQ/pA1wPM1wEu0BJZ
         cVdipaQcvbiAl/bUzSM9QkdaejCqHiY9/TtD7eHnoDKtisUjG8ohfentWZ35ZgD7vYD5
         swXo8Ny4pd+y4P7yTqIYdoMPlxJkY6U3fbHhGtzprzXI5I/5RUkAhygarTGlsSe9ydO0
         fiUQ==
X-Gm-Message-State: AOAM5302FqrzODyNoc2aWxSDltjCezKOt2FNUoOUH1M6Yho5ycNmPrvT
	UIHibgOnzH42TbztUG4aqo0=
X-Google-Smtp-Source: ABdhPJxZlL652HzFUBHzqVc/UgMqYIfXV/xg+T0Ia7A/9t4+e5d/OQ+bYS224QjIIhob18UcWO4Uew==
X-Received: by 2002:a9d:7482:: with SMTP id t2mr1094250otk.261.1614398710174;
        Fri, 26 Feb 2021 20:05:10 -0800 (PST)
Date: Fri, 26 Feb 2021 21:05:07 -0700
From: Connor Davis <connojdavis@gmail.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	xen-devel@lists.xenproject.org,
	Bobby Eshleman <bobbyeshleman@gmail.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
	Tamas K Lengyel <tamas@tklengyel.com>,
	Alexandru Isaila <aisaila@bitdefender.com>,
	Petre Pircalabu <ppircalabu@bitdefender.com>
Subject: Re: [PATCH for-next 5/6] xen: Add files needed for minimal riscv
 build
Message-ID: <20210227040507.wwdg5kgripcec2df@thewall>
References: <cover.1614265718.git.connojdavis@gmail.com>
 <7652ce3486c026a3a9f7d850170ea81ba8a18bdb.1614265718.git.connojdavis@gmail.com>
 <9b441529-c5a4-4299-1155-869bcdab06b0@citrix.com>
 <alpine.DEB.2.21.2102251631220.3234@sstabellini-ThinkPad-T480s>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <alpine.DEB.2.21.2102251631220.3234@sstabellini-ThinkPad-T480s>

On Thu, Feb 25, 2021 at 05:06:46PM -0800, Stefano Stabellini wrote:
> On Thu, 25 Feb 2021, Andrew Cooper wrote:
> > On 25/02/2021 15:24, Connor Davis wrote:
> > > Add the minimum code required to get xen to build with
> > > XEN_TARGET_ARCH=riscv64. It is minimal in the sense that every file and
> > > function added is required for a successful build, given the .config
> > > generated from riscv64_defconfig. The function implementations are just
> > > stubs; actual implementations will need to be added later.
> > >
> > > Signed-off-by: Connor Davis <connojdavis@gmail.com>
> 
> This is awesome, Connor! I am glad you are continuing this work and
> I am really looking forward to having it in the tree.
> 
> 
> > > ---
> > >  config/riscv64.mk                        |   7 +
> > >  xen/Makefile                             |   8 +-
> > >  xen/arch/riscv/Kconfig                   |  54 ++++
> > >  xen/arch/riscv/Kconfig.debug             |   0
> > >  xen/arch/riscv/Makefile                  |  57 ++++
> > >  xen/arch/riscv/README.source             |  19 ++
> > >  xen/arch/riscv/Rules.mk                  |  13 +
> > >  xen/arch/riscv/arch.mk                   |   7 +
> > >  xen/arch/riscv/configs/riscv64_defconfig |  12 +
> > >  xen/arch/riscv/delay.c                   |  16 +
> > >  xen/arch/riscv/domain.c                  | 144 +++++++++
> > >  xen/arch/riscv/domctl.c                  |  36 +++
> > >  xen/arch/riscv/guestcopy.c               |  57 ++++
> > >  xen/arch/riscv/head.S                    |   6 +
> > >  xen/arch/riscv/irq.c                     |  78 +++++
> > >  xen/arch/riscv/lib/Makefile              |   1 +
> > >  xen/arch/riscv/lib/find_next_bit.c       | 284 +++++++++++++++++
> > >  xen/arch/riscv/mm.c                      |  93 ++++++
> > >  xen/arch/riscv/p2m.c                     | 150 +++++++++
> > >  xen/arch/riscv/percpu.c                  |  17 +
> > >  xen/arch/riscv/platforms/Kconfig         |  31 ++
> > >  xen/arch/riscv/riscv64/asm-offsets.c     |  31 ++
> > >  xen/arch/riscv/setup.c                   |  27 ++
> > >  xen/arch/riscv/shutdown.c                |  28 ++
> > >  xen/arch/riscv/smp.c                     |  35 +++
> > >  xen/arch/riscv/smpboot.c                 |  34 ++
> > >  xen/arch/riscv/sysctl.c                  |  33 ++
> > >  xen/arch/riscv/time.c                    |  35 +++
> > >  xen/arch/riscv/traps.c                   |  35 +++
> > >  xen/arch/riscv/vm_event.c                |  39 +++
> > >  xen/arch/riscv/xen.lds.S                 | 113 +++++++
> > >  xen/drivers/char/serial.c                |   1 +
> > >  xen/include/asm-riscv/altp2m.h           |  39 +++
> > >  xen/include/asm-riscv/asm.h              |  77 +++++
> > >  xen/include/asm-riscv/asm_defns.h        |  24 ++
> > >  xen/include/asm-riscv/atomic.h           | 204 ++++++++++++
> > >  xen/include/asm-riscv/bitops.h           | 331 ++++++++++++++++++++
> > >  xen/include/asm-riscv/bug.h              |  54 ++++
> > >  xen/include/asm-riscv/byteorder.h        |  16 +
> > >  xen/include/asm-riscv/cache.h            |  24 ++
> > >  xen/include/asm-riscv/cmpxchg.h          | 382 +++++++++++++++++++++++
> > >  xen/include/asm-riscv/compiler_types.h   |  32 ++
> > >  xen/include/asm-riscv/config.h           | 110 +++++++
> > >  xen/include/asm-riscv/cpufeature.h       |  17 +
> > >  xen/include/asm-riscv/csr.h              | 219 +++++++++++++
> > >  xen/include/asm-riscv/current.h          |  47 +++
> > >  xen/include/asm-riscv/debugger.h         |  15 +
> > >  xen/include/asm-riscv/delay.h            |  15 +
> > >  xen/include/asm-riscv/desc.h             |  12 +
> > >  xen/include/asm-riscv/device.h           |  15 +
> > >  xen/include/asm-riscv/div64.h            |  23 ++
> > >  xen/include/asm-riscv/domain.h           |  50 +++
> > >  xen/include/asm-riscv/event.h            |  42 +++
> > >  xen/include/asm-riscv/fence.h            |  12 +
> > >  xen/include/asm-riscv/flushtlb.h         |  34 ++
> > >  xen/include/asm-riscv/grant_table.h      |  12 +
> > >  xen/include/asm-riscv/guest_access.h     |  41 +++
> > >  xen/include/asm-riscv/guest_atomics.h    |  60 ++++
> > >  xen/include/asm-riscv/hardirq.h          |  27 ++
> > >  xen/include/asm-riscv/hypercall.h        |  12 +
> > >  xen/include/asm-riscv/init.h             |  42 +++
> > >  xen/include/asm-riscv/io.h               | 283 +++++++++++++++++
> > >  xen/include/asm-riscv/iocap.h            |  13 +
> > >  xen/include/asm-riscv/iommu.h            |  46 +++
> > >  xen/include/asm-riscv/irq.h              |  58 ++++
> > >  xen/include/asm-riscv/mem_access.h       |   4 +
> > >  xen/include/asm-riscv/mm.h               | 246 +++++++++++++++
> > >  xen/include/asm-riscv/monitor.h          |  65 ++++
> > >  xen/include/asm-riscv/nospec.h           |  25 ++
> > >  xen/include/asm-riscv/numa.h             |  41 +++
> > >  xen/include/asm-riscv/p2m.h              | 218 +++++++++++++
> > >  xen/include/asm-riscv/page-bits.h        |  11 +
> > >  xen/include/asm-riscv/page.h             |  73 +++++
> > >  xen/include/asm-riscv/paging.h           |  15 +
> > >  xen/include/asm-riscv/pci.h              |  31 ++
> > >  xen/include/asm-riscv/percpu.h           |  33 ++
> > >  xen/include/asm-riscv/processor.h        |  59 ++++
> > >  xen/include/asm-riscv/random.h           |   9 +
> > >  xen/include/asm-riscv/regs.h             |  23 ++
> > >  xen/include/asm-riscv/setup.h            |  14 +
> > >  xen/include/asm-riscv/smp.h              |  46 +++
> > >  xen/include/asm-riscv/softirq.h          |  16 +
> > >  xen/include/asm-riscv/spinlock.h         |  12 +
> > >  xen/include/asm-riscv/string.h           |  28 ++
> > >  xen/include/asm-riscv/sysregs.h          |  16 +
> > >  xen/include/asm-riscv/system.h           |  99 ++++++
> > >  xen/include/asm-riscv/time.h             |  31 ++
> > >  xen/include/asm-riscv/trace.h            |  12 +
> > >  xen/include/asm-riscv/types.h            |  60 ++++
> > >  xen/include/asm-riscv/vm_event.h         |  55 ++++
> > >  xen/include/asm-riscv/xenoprof.h         |  12 +
> > >  xen/include/public/arch-riscv.h          | 183 +++++++++++
> > >  xen/include/public/arch-riscv/hvm/save.h |  39 +++
> > >  xen/include/public/hvm/save.h            |   2 +
> > >  xen/include/public/pmu.h                 |   2 +
> > >  xen/include/public/xen.h                 |   2 +
> > >  xen/include/xen/domain.h                 |   1 +
> > 
> > Well - this is orders of magnitude more complicated than it ought to
> > be.  An empty head.S doesn't (well - shouldn't) need the overwhelming
> > majority of this.
> > 
> > Do you know how all of this is being pulled in?  Is it from attempting
> > to compile common/ by any chance?
> 
> I'd love to see this patch split into several smaller patches. Ideally
> one patch per header file or per group of headers. It is fine if it ends
> up being a very large series. For patches imported from Linux, make sure
> to say that they are coming from Linux commit XXX in the commit message.
> It is going to make it a lot easier to ack them.
 
I was trying to keep the series bisectable so that every commit built
successfully. Not including any of the files above would have caused a build
error. Adding one file per patch would certainly make it easier to review,
but the series wouldn't build cleanly.

> Also, I think we need a concrete build target in mind: we don't want to
> add a function stub if it is not needed to build something. Make sure to
> specify what you are building in patch 0.

Agreed. Do you have a target in mind?

> I tried building this series. The container didn't build for me, so I
> built the toolchain myself. I noticed a couple of things:
> 
> XEN_TARGET_ARCH=riscv64 works but XEN_TARGET_ARCH=riscv doesn't.
> Maybe we should make XEN_TARGET_ARCH=riscv work too using the
> xen/Makefile TARGET transformations.
> 
> It seems to be building quite a few things under common. However it
> breaks with:
> 
> vm_event.c: In function 'vm_event_resume':
> vm_event.c:428:17: error: implicit declaration of function 'vm_event_reset_vmtrace'; did you mean 'vm_event_resume'? [-Werror=implicit-function-declaration]
>   428 |                 vm_event_reset_vmtrace(v);
>       |                 ^~~~~~~~~~~~~~~~~~~~~~
>       |                 vm_event_resume
> vm_event.c:428:17: error: nested extern declaration of 'vm_event_reset_vmtrace' [-Werror=nested-externs]

Yes, I failed to re-test the build after rebasing on the latest master.
Found this myself after I sent it out but then it was too late :/

> I got past that with a hack, but then I got this error:
> 
> ns16550.c: In function 'ns16550_init_preirq':
> ns16550.c:353:42: error: implicit declaration of function 'ioremap'; did you mean 'ioremap_wc'? [-Werror=implicit-function-declaration]
>   353 |         uart->remapped_io_base = (char *)ioremap(uart->io_base, uart->io_size);
>       |                                          ^~~~~~~
>       |                                          ioremap_wc
> ns16550.c:353:42: error: nested extern declaration of 'ioremap' [-Werror=nested-externs]
> ns16550.c:353:34: error: cast to pointer from integer of different size [-Werror=int-to-pointer-cast]
>   353 |         uart->remapped_io_base = (char *)ioremap(uart->io_base, uart->io_size);
>       |                                  ^
> At top level:
> ns16550.c:628:13: error: 'ns16550_init_common' defined but not used [-Werror=unused-function]
>   628 | static void ns16550_init_common(struct ns16550 *uart)
>       |             ^~~~~~~~~~~~~~~~~~~
> ns16550.c:610:41: error: 'ns16550_driver' defined but not used [-Werror=unused-variable]
>   610 | static struct uart_driver __read_mostly ns16550_driver = {
>       |                                         ^~~~~~~~~~~~~~
> ns16550.c:204:13: error: '__ns16550_poll' defined but not used [-Werror=unused-function]
>   204 | static void __ns16550_poll(struct cpu_user_regs *regs)
>       |             ^~~~~~~~~~~~~~
> ns16550.c:76:3: error: 'ns16550_com' defined but not used [-Werror=unused-variable]
>    76 | } ns16550_com[2] = { { 0 } };
>       |   ^~~~~~~~~~~
> cc1: all warnings being treated as errors
> 
> 
> Which is strange given that ns16550.c shouldn't be built at all.  "QEMU
> RISC-V virt machine support" is forcing CONFIG_HAS_NS16550=y on my
> system. I chose "All Platforms" and CONFIG_HAS_NS16550=y went away. That
> can't be right :-)

Hmm, did you apply patch 1? That patch should have taken care of this
error.

> 
> After that, I could actually finish the build:
> 
> sstabellini@sstabellini-ThinkPad-T480s:/local/repos/xen-upstream/xen$ find . -name \*.o
> ./common/spinlock.o
> ./common/irq.o
> ./common/sysctl.o
> ./common/sched/built_in.o
> ./common/sched/cpupool.o
> ./common/sched/credit2.o
> ./common/sched/core.o
> ./common/stop_machine.o
> ./common/gunzip.init.o
> ./common/multicall.o
> ./common/symbols.o
> ./common/rwlock.o
> ./common/event_channel.o
> ./common/guestcopy.o
> ./common/softirq.o
> ./common/virtual_region.o
> ./common/lib.o
> ./common/wait.o
> ./common/time.o
> ./common/notifier.o
> ./common/cpu.o
> ./common/page_alloc.o
> ./common/string.o
> ./common/vm_event.o
> ./common/tasklet.o
> ./common/version.o
> ./common/symbols-dummy.o
> ./common/memory.o
> ./common/warning.o
> ./common/xmalloc_tlsf.o
> ./common/kernel.o
> ./common/gunzip.o
> ./common/warning.init.o
> ./common/domain.o
> ./common/event_2l.o
> ./common/radix-tree.o
> ./common/timer.o
> ./common/built_in.o
> ./common/bitmap.o
> ./common/smp.o
> ./common/vsprintf.o
> ./common/keyhandler.o
> ./common/shutdown.o
> ./common/rcupdate.o
> ./common/rangeset.o
> ./common/vmap.o
> ./common/domctl.o
> ./common/preempt.o
> ./common/event_fifo.o
> ./common/monitor.o
> ./common/random.o
> ./lib/bsearch.o
> ./lib/rbtree.o
> ./lib/parse-size.o
> ./lib/ctype.o
> ./lib/ctors.o
> ./lib/list-sort.o
> ./lib/sort.o
> ./lib/built_in.o
> ./drivers/built_in.o
> ./drivers/char/serial.o
> ./drivers/char/built_in.o
> ./drivers/char/console.o
> ./arch/riscv/irq.o
> ./arch/riscv/sysctl.o
> ./arch/riscv/delay.o
> ./arch/riscv/lib/built_in.o
> ./arch/riscv/lib/find_next_bit.o
> ./arch/riscv/guestcopy.o
> ./arch/riscv/time.o
> ./arch/riscv/prelink.o
> ./arch/riscv/vm_event.o
> ./arch/riscv/setup.o
> ./arch/riscv/domain.o
> ./arch/riscv/traps.o
> ./arch/riscv/built_in.o
> ./arch/riscv/smp.o
> ./arch/riscv/mm.o
> ./arch/riscv/percpu.o
> ./arch/riscv/p2m.o
> ./arch/riscv/shutdown.o
> ./arch/riscv/head.o
> ./arch/riscv/domctl.o
> ./arch/riscv/smpboot.o
> 
> Which is absolutely astounding! Great job! I didn't imagine you already
> managed to build the whole of common/!



From xen-devel-bounces@lists.xenproject.org Sat Feb 27 04:17:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 27 Feb 2021 04:17:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90687.171684 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFr3A-0001ms-D9; Sat, 27 Feb 2021 04:17:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90687.171684; Sat, 27 Feb 2021 04:17:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFr3A-0001ml-8X; Sat, 27 Feb 2021 04:17:52 +0000
Received: by outflank-mailman (input) for mailman id 90687;
 Sat, 27 Feb 2021 04:17:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GzqE=H5=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lFr38-0001md-CV
 for xen-devel@lists.xenproject.org; Sat, 27 Feb 2021 04:17:50 +0000
Received: from mail-ot1-x32e.google.com (unknown [2607:f8b0:4864:20::32e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e42968a6-6706-437f-80e7-19f8c133cc78;
 Sat, 27 Feb 2021 04:17:49 +0000 (UTC)
Received: by mail-ot1-x32e.google.com with SMTP id v12so10034834ott.10
 for <xen-devel@lists.xenproject.org>; Fri, 26 Feb 2021 20:17:49 -0800 (PST)
Received: from thewall (142-79-211-230.starry-inc.net. [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id o18sm2082952ooi.16.2021.02.26.20.17.48
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 26 Feb 2021 20:17:48 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e42968a6-6706-437f-80e7-19f8c133cc78
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=wchmwZ+IITg0OsonbnclAtd4JkPy/jt61m7tN4ZQyI8=;
        b=OBYl8SXbKcRBbNcZ4QCX+Afj8KBYfYd5ovZfMUqvGQeEBlbro538xodlQojj6TjHwi
         Ip+L36ZMvAEI/3dotjBzsOaTqJoje6z4u8661HUwy/rsP+H3AcQvravRaoxdCCFcKYml
         AtuKYSt+YAqxLlxlfjwlo55ZJvZ7N2MiFsrPwDaxW7gMpfasNrvTeiwjF6J/Kuo6FBlI
         3R9/TGJA/dgz14+FKRbeuwJOAinME6dzKck++1EItjn5JqmRp+euyjv5H/GmhhV32qKy
         WMsNtSh4A8MBTet8WSuPoMURwb1h7SwGh/fkdGdbrSWhZafUyCVohzm0fvzXxrPFN9R9
         M83A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to;
        bh=wchmwZ+IITg0OsonbnclAtd4JkPy/jt61m7tN4ZQyI8=;
        b=XyPH+DMDuTb1IdxqfbXflmpXFMDXyWBsndVxOHLgN1vYsD0X81CxudvLe+GHSlUGGb
         u5Nbumv6g3DHnFFRIjiqgtSeBXTJtIG4NTSTCL09VxPSEXP1DP5f7Z/ZAKk3zI3lWpI+
         2KC2S/l221LEaOcxyi1d64EuZyvrXalYOmR8jxS9SfAs6Rl5Ln/VuMTAmR7JVNHyWW0e
         UB6wn9mf851seTVHLi/1v1UfLFUO5dXFlTN5UVo6DMwPjJVv/myR9jrXouEns8uOrF49
         Lx9nOVO6/OSXkqUs6cQzbsRJ3uDIXgGiNvyVhP77Jn2iF6waFzlOlCoNYLtStkRHa4zJ
         wrqA==
X-Gm-Message-State: AOAM533eM05mMrC31wqAtIz5b0WKfush2pyAX19NY49y69/OM9MDbgxx
	btSGApTdyRq6ueUPI7UBej0=
X-Google-Smtp-Source: ABdhPJzbKDeL2kEAFmyH56AphXrm8hRaLt/N1czTIrTnu5uIA2scZcjqdK4JTqYsCVR5OPVIcAUG+w==
X-Received: by 2002:a05:6830:41:: with SMTP id d1mr4985266otp.273.1614399468949;
        Fri, 26 Feb 2021 20:17:48 -0800 (PST)
Date: Fri, 26 Feb 2021 21:17:47 -0700
From: Connor Davis <connojdavis@gmail.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Dario Faggioli <dfaggioli@suse.com>, xen-devel@lists.xenproject.org
Subject: Re: [PATCH for-next 3/6] xen/sched: Fix build when NR_CPUS == 1
Message-ID: <20210227041747.p6kfrtvmkk5sbwof@thewall>
References: <cover.1614265718.git.connojdavis@gmail.com>
 <d0922adc698ab76223d76a0a7f328a72cedf00ad.1614265718.git.connojdavis@gmail.com>
 <b4ad0f83-e071-49f8-17a8-7fec0e226b9a@suse.com>
 <20210226030833.uugfojf5kkxhlpr7@thewall>
 <eb19a389-d2b3-d0cc-fd25-62bbb121cf98@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <eb19a389-d2b3-d0cc-fd25-62bbb121cf98@suse.com>

On Fri, Feb 26, 2021 at 09:31:02AM +0100, Jan Beulich wrote:
> On 26.02.2021 04:08, Connor Davis wrote:
> > On Thu, Feb 25, 2021 at 04:50:02PM +0100, Jan Beulich wrote:
> >> On 25.02.2021 16:24, Connor Davis wrote:
> >>> Return from cpu_schedule_up when either cpu is 0 or
> >>> NR_CPUS == 1. This fixes the following:
> >>>
> >>> core.c: In function 'cpu_schedule_up':
> >>> core.c:2769:19: error: array subscript 1 is above array bounds
> >>> of 'struct vcpu *[1]' [-Werror=array-bounds]
> >>>  2769 |     if ( idle_vcpu[cpu] == NULL )
> >>>       |
> >>>
> 
> Ah yes, at -O2 I can observe the warning on e.g.
> 
> extern int array[N];
> 
> int test(unsigned i) {
> 	if(i == N - 1)
> 		return 0;
> 	return array[i];
> }
> 
> when N=1. No warning appears when N=2 or higher, yet if it is
> sensible to emit for N=1 then it would imo be similarly
> sensible to emit in other cases. The only difference is that
> when N=1, there's no i for which the array access would ever
> be valid, while e.g. for N=2 there's exactly one such i.
> 
> I've tried an x86 build with NR_CPUS=1, and this hits the case
> you found and a 2nd one, where behavior is even more puzzling.
> For the case you've found I'd like to suggest as alternative
> 
> @@ -2769,6 +2769,12 @@ static int cpu_schedule_up(unsigned int
>      if ( cpu == 0 )
>          return 0;
>  
> +    /*
> +     * Guard in particular also against the compiler suspecting out-of-bounds
> +     * array accesses below when NR_CPUS=1.
> +     */
> +    BUG_ON(cpu >= NR_CPUS);

> +

Yeah, I like this better than my approach.

>      if ( idle_vcpu[cpu] == NULL )
>          vcpu_create(idle_vcpu[0]->domain, cpu);
>      else
> 
> To fix the x86 build in this regard we'd additionally need
> something along the lines of
> 
> --- unstable.orig/xen/arch/x86/genapic/x2apic.c
> +++ unstable/xen/arch/x86/genapic/x2apic.c
> @@ -54,7 +54,17 @@ static void init_apic_ldr_x2apic_cluster
>      per_cpu(cluster_cpus, this_cpu) = cluster_cpus_spare;
>      for_each_online_cpu ( cpu )
>      {
> -        if (this_cpu == cpu || x2apic_cluster(this_cpu) != x2apic_cluster(cpu))
> +        if ( this_cpu == cpu )
> +            continue;
> +        /*
> +         * Guard in particular against the compiler suspecting out-of-bounds
> +         * array accesses below when NR_CPUS=1 (oddly enough with gcc 10 it
> +         * is the 1st of these alone which actually helps, not the 2nd, nor
> +         * are both required together there).
> +         */
> +        BUG_ON(this_cpu >= NR_CPUS);
> +        BUG_ON(cpu >= NR_CPUS);
> +        if ( x2apic_cluster(this_cpu) != x2apic_cluster(cpu) )
>              continue;
>          per_cpu(cluster_cpus, this_cpu) = per_cpu(cluster_cpus, cpu);
>          break;
> 
> but the comment points out how strangely the compiler behaves here.
> Even flipping around the two sides of the != doesn't change its
> behavior. It is perhaps relevant to note here that there's no
> special casing of smp_processor_id() in the NR_CPUS=1 case, so the
> compiler can't infer this_cpu == 0.
> 
> Once we've settled on how to change common/sched/core.c I guess
> I'll then adjust the x86-specific change accordingly and submit as
> a separate fix (or I could of course also bundle both changes then).

Feel free to bundle both.

    Connor


From xen-devel-bounces@lists.xenproject.org Sat Feb 27 07:37:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 27 Feb 2021 07:37:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90700.171696 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFuA3-0005Uo-Ba; Sat, 27 Feb 2021 07:37:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90700.171696; Sat, 27 Feb 2021 07:37:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFuA3-0005Uh-7M; Sat, 27 Feb 2021 07:37:11 +0000
Received: by outflank-mailman (input) for mailman id 90700;
 Sat, 27 Feb 2021 07:37:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFuA1-0005UZ-KU; Sat, 27 Feb 2021 07:37:09 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFuA1-0000Rw-BZ; Sat, 27 Feb 2021 07:37:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFuA1-0004YY-1R; Sat, 27 Feb 2021 07:37:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lFuA1-0004p5-0x; Sat, 27 Feb 2021 07:37:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=DYeCbdRwCIjB+WJ690vVBrdWnc2jg3M1Y019hpP72rI=; b=ZEs+eD8/dyktgyS59xJF0tQq18
	DZU6UVyJn6zKPGDrHsVhNNxiURxcexxLjrouFnQur1ZHTEYmcJdOT9SvVWSLQ5AjZHO0ZqUlPbn8W
	YSyYmW/v1kgMvzWlK+kzmeLsjqgNrVnd8oeGOIzz1l2NvHXE3gl/dRpifoADHe9PLyz8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159717-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 159717: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-examine:memdisk-try-append:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=109e8177fd4a225e7025c4c17d2c9537b550b4ed
X-Osstest-Versions-That:
    xen=fc8fb368515391374e5f170a1e07205d914bc14a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 27 Feb 2021 07:37:09 +0000

flight 159717 xen-unstable real [real]
flight 159724 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/159717/
http://logs.test-lab.xenproject.org/osstest/logs/159724/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 159692

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 159692
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 159692
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 159692
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 159692
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 159692
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 159692
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 159692
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 159692
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 159692
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 159692
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 159692
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  109e8177fd4a225e7025c4c17d2c9537b550b4ed
baseline version:
 xen                  fc8fb368515391374e5f170a1e07205d914bc14a

Last test of basis   159692  2021-02-26 03:53:29 Z    1 days
Testing same since   159717  2021-02-26 17:38:57 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Dario Faggioli <dfaggioli@suse.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 109e8177fd4a225e7025c4c17d2c9537b550b4ed
Author: Julien Grall <jgrall@amazon.com>
Date:   Sat Feb 20 19:22:34 2021 +0000

    xen/sched: Add missing memory barrier in vcpu_block()
    
    The comment in vcpu_block() states that the events should be checked
    /after/ blocking to avoid the wakeup-waiting race. However, from a generic
    perspective, set_bit() doesn't prevent re-ordering, so the following
    could happen:
    
    CPU0 (blocking vCPU A)          |    CPU1 (unblocking vCPU A)
                                    |
    A <- read local events          |
                                    |   set local events
                                    |   test_and_clear_bit(_VPF_blocked)
                                    |       -> Bail out as the bit is not set
                                    |
    set_bit(_VPF_blocked)           |
                                    |
    check A                         |
    
    The variable A will be 0 and therefore the vCPU will be blocked when it
    should continue running.
    
    vcpu_block() now gains an smp_mb__after_atomic() to prevent the CPU
    from reading any information about local events before the flag
    _VPF_blocked is set.
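
A minimal C11 sketch of the pattern this commit fixes (the names vcpu_model, vcpu_block_model, and vcpu_unblock_model are illustrative, not Xen's actual code; Xen uses smp_mb__after_atomic() rather than a C11 fence):

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Simplified model: one "blocked" flag and a pending-events counter. */
struct vcpu_model {
    atomic_bool blocked;
    atomic_int  local_events;
};

/* Blocking side (CPU0): set the flag FIRST, fence, THEN re-check events.
 * The fence stops the event read from being reordered before the flag
 * store, which is the race described above. */
bool vcpu_block_model(struct vcpu_model *v)
{
    atomic_store_explicit(&v->blocked, true, memory_order_relaxed);
    atomic_thread_fence(memory_order_seq_cst);  /* the added barrier */
    if (atomic_load_explicit(&v->local_events, memory_order_relaxed)) {
        /* An event raced in: undo the flag and keep running. */
        atomic_store_explicit(&v->blocked, false, memory_order_relaxed);
        return false;   /* not blocked */
    }
    return true;        /* safely blocked */
}

/* Unblocking side (CPU1): post the event FIRST, then test-and-clear the
 * blocked flag. With both sides ordered this way, at least one side is
 * guaranteed to observe the other's write. */
bool vcpu_unblock_model(struct vcpu_model *v)
{
    atomic_fetch_add(&v->local_events, 1);
    atomic_thread_fence(memory_order_seq_cst);
    return atomic_exchange(&v->blocked, false); /* true if we woke it */
}
```

With the fence in place, either the blocker sees the posted event and bails out, or the unblocker sees _VPF_blocked set and clears it; the lost-wakeup window is closed.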
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Ash Wilding <ash.j.wilding@gmail.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>
    Acked-by: Dario Faggioli <dfaggioli@suse.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit 79b6574f8ecea39c14557bdd7049c7e2d21ddcbd
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Feb 25 17:08:49 2021 +0000

    tools/xenstore-control: Don't leak buf in live_update_start()
    
    All the error paths but one will free buf. Cover the remaining path so
    buf can't be leaked.
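
A hypothetical C sketch of the leak shape this commit fixes and the usual remedy (the names do_start, do_start_fixed, and make_request are illustrative, not xenstore-control's actual code):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Helper that hands ownership of a heap buffer to the caller. */
static char *make_request(const char *arg)
{
    char *buf = malloc(strlen(arg) + 16);
    if (!buf)
        return NULL;
    sprintf(buf, "lu-start %s", arg);
    return buf;
}

/* Buggy shape: one early return bypasses the free() that every
 * other exit path performs, leaking buf. */
int do_start(const char *arg, int precheck_ok)
{
    char *buf = make_request(arg);
    if (!buf)
        return -1;

    if (!precheck_ok)
        return -1;      /* BUG: leaks buf on this path */

    /* ... send buf ... */
    free(buf);
    return 0;
}

/* Fixed shape: route every exit after the allocation through a
 * single cleanup point, so no path can skip the free(). */
int do_start_fixed(const char *arg, int precheck_ok)
{
    int rc = -1;
    char *buf = make_request(arg);
    if (!buf)
        return -1;

    if (!precheck_ok)
        goto out;       /* buf is freed on this path too */

    /* ... send buf ... */
    rc = 0;
out:
    free(buf);
    return rc;
}
```

The goto-cleanup idiom is the conventional fix for this class of leak in C codebases, and is exactly the kind of path-sensitive defect Coverity's static analysis flags.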
    
    This bug was discovered and resolved using Coverity Static Analysis
    Security Testing (SAST) by Synopsys, Inc.
    
    Fixes: 7f97193e6aa8 ("tools/xenstore: add live update command to xenstore-control")
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Juergen Gross <jgross@suse.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit 11d9933f6bf0cdb69cdd82c5ad2213fcbe73502f
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Feb 25 16:33:23 2021 +0000

    tools/xenstored: control: Store the save filename in lu_dump_state
    
    The function lu_close_dump_state() will use talloc_asprintf() without
    checking whether the allocation succeeded. In the unlikely case we are
    out of memory, we would dereference a NULL pointer.
    
    As we already computed the filename in lu_get_dump_state(), we can store
    the name in the lu_dump_state. This avoids having to deal with memory
    allocation in the close path and also reduces the risk of using a
    different filename.
    
    This bug was discovered and resolved using Coverity Static Analysis
    Security Testing (SAST) by Synopsys, Inc.
    
    Fixes: c0dc6a3e7c41 ("tools/xenstore: read internal state when doing live upgrade")
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Juergen Gross <jgross@suse.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit 702b44be43b431695dd9ab49ca4a89ea50e31711
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Feb 25 15:43:04 2021 +0000

    tools/xenstored: Avoid unnecessary talloc_strdup() in do_lu_start()
    
    At the moment, the return of talloc_strdup() is not checked. This means
    we may dereference a NULL pointer if the allocation failed.
    
    However, it is pointless to allocate the memory as send_reply() will
    copy the data to a different buffer. So drop the use of talloc_strdup().
    
    This bug was discovered and resolved using Coverity Static Analysis
    Security Testing (SAST) by Synopsys, Inc.
    
    Fixes: af216a99fb4a ("tools/xenstore: add the basic framework for doing the live update")
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Juergen Gross <jgross@suse.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit 2fc7939e26d223b2a8ce37204ea479d013444b7f
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Feb 25 15:15:23 2021 +0000

    tools/xenstored: Avoid unnecessary talloc_strdup() in do_control_lu()
    
    At the moment, the return of talloc_strdup() is not checked. This means
    we may dereference a NULL pointer if the allocation failed.
    
    However, it is pointless to allocate the memory as send_reply() will
    copy the data to a different buffer. So drop the use of talloc_strdup().
    
    This bug was discovered and resolved using Coverity Static Analysis
    Security Testing (SAST) by Synopsys, Inc.
    
    Fixes: fecab256d474 ("tools/xenstore: add basic live-update command parsing")
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Juergen Gross <jgross@suse.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>

commit 68b92ee7fef27f85846ef892e3d2e1763807b72d
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Feb 26 10:18:59 2021 +0100

    VMX: delay p2m insertion of APIC access page
    
    Inserting the mapping at domain creation time leads to a memory leak
    when the creation fails later on and the domain uses separate CPU and
    IOMMU page tables - the latter requires intermediate page tables to be
    allocated, but there's no freeing of them at present in this case. Since
    we don't need the p2m insertion to happen this early, avoid the problem
    altogether by deferring it until the last possible point. This comes at
    the price of not being able to handle an error other than by crashing
    the domain.
    
    Reported-by: Julien Grall <julien@xen.org>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>
    Release-Acked-by: Ian Jackson <iwj@xenproject.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sat Feb 27 08:27:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 27 Feb 2021 08:27:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90717.171713 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFuwR-0002kD-MW; Sat, 27 Feb 2021 08:27:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90717.171713; Sat, 27 Feb 2021 08:27:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFuwR-0002k6-JO; Sat, 27 Feb 2021 08:27:11 +0000
Received: by outflank-mailman (input) for mailman id 90717;
 Sat, 27 Feb 2021 08:27:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFuwP-0002jy-Sf; Sat, 27 Feb 2021 08:27:09 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFuwP-0001lv-Ml; Sat, 27 Feb 2021 08:27:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFuwP-00079d-G2; Sat, 27 Feb 2021 08:27:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lFuwP-0006Wj-FZ; Sat, 27 Feb 2021 08:27:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=GCNbpwN5rAPct8ZPBvqI9M2yw1JLqvvvbH4IdmDFQzo=; b=4l1MZ3XSXTPMWt3Dct+vUjRuA/
	RHWpaRS5/Wb+2b4+bhVvp7b+57F1KIyw7eoZw+LdBk5Qz3lwNfmphMdRUopu+OfnxdhzIIfRL/q02
	vY8n653K9+a2y2h7cFqe/DSLrqsn5wglp/WZWl72JjgsA6ygHLGmbUGWHghjiFG2NsVU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159721-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 159721: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=cd14150c1594d8deeee6ecf80feb5751dcd7f315
X-Osstest-Versions-That:
    ovmf=62f2cf57840e3b08122b6326b0ed9f4b25ce15d9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 27 Feb 2021 08:27:09 +0000

flight 159721 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159721/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 cd14150c1594d8deeee6ecf80feb5751dcd7f315
baseline version:
 ovmf                 62f2cf57840e3b08122b6326b0ed9f4b25ce15d9

Last test of basis   159708  2021-02-26 12:45:22 Z    0 days
Testing same since   159721  2021-02-26 23:39:43 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Laszlo Ersek <lersek@redhat.com>
  Michael Kubacki <michael.kubacki@microsoft.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   62f2cf5784..cd14150c15  cd14150c1594d8deeee6ecf80feb5751dcd7f315 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat Feb 27 08:29:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 27 Feb 2021 08:29:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90723.171729 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFuyM-00033P-4E; Sat, 27 Feb 2021 08:29:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90723.171729; Sat, 27 Feb 2021 08:29:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFuyM-00033I-0d; Sat, 27 Feb 2021 08:29:10 +0000
Received: by outflank-mailman (input) for mailman id 90723;
 Sat, 27 Feb 2021 08:29:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFuyL-00033A-Mt; Sat, 27 Feb 2021 08:29:09 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFuyL-0001nP-F5; Sat, 27 Feb 2021 08:29:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFuyL-0007Gp-7M; Sat, 27 Feb 2021 08:29:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lFuyL-0007KY-6o; Sat, 27 Feb 2021 08:29:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=8CAZahckhm4Kk3yax6J0c2R67I2kzah/NFSgNCadAPs=; b=CSv/oCipmBkO8ayz5yLbZlajq+
	NiotWM5d9YYhm9aH4WKbuLmrguf41KJhssEtZXZkKjhsXGIeDEbxjt+0kGguHVtxUimA53mz3Ki7z
	sFfJiPwTDgulG96x10iXN4x41J2QsJFJkP9GYXhBgcmN201QufJPREEFLvYH5yOfLClc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159723-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 159723: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=eaaf9397f40a7a893a04ee86676478cca3c80d9d
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 27 Feb 2021 08:29:09 +0000

flight 159723 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159723/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              eaaf9397f40a7a893a04ee86676478cca3c80d9d
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  232 days
Failing since        151818  2020-07-11 04:18:52 Z  231 days  224 attempts
Testing same since   159693  2021-02-26 04:18:55 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 44231 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Feb 27 13:28:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 27 Feb 2021 13:28:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90830.171882 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFzdj-00087o-JG; Sat, 27 Feb 2021 13:28:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90830.171882; Sat, 27 Feb 2021 13:28:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lFzdj-00087h-FO; Sat, 27 Feb 2021 13:28:11 +0000
Received: by outflank-mailman (input) for mailman id 90830;
 Sat, 27 Feb 2021 13:28:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFzdh-00087Z-Qm; Sat, 27 Feb 2021 13:28:09 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFzdh-0006jb-Ia; Sat, 27 Feb 2021 13:28:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lFzdh-0003P8-8k; Sat, 27 Feb 2021 13:28:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lFzdh-0006ys-8G; Sat, 27 Feb 2021 13:28:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=FMBDIeNu1qzhUAjg9b7u7cNziQ963/5hTwGZJd031AA=; b=38TUPgDYPaoAdEGEjfFDP3da29
	pA4W4Yxg+LPf/92tWzlyI5bCaULmA6sdBfi2QuIu6u/xYQz4kuVDKwFQ012l7K8ZzfRNPlBJnJfH7
	D7ucjut//ec7GySymAx0Mqt8msRJT/lZ1HCwztrwnHpn+7os/z37RxY9b1aIuCUJD15o=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159720-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 159720: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=51db2d7cf26d05a961ec0ee0eb773594b32cc4a1
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 27 Feb 2021 13:28:09 +0000

flight 159720 qemu-mainline real [real]
flight 159747 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/159720/
http://logs.test-lab.xenproject.org/osstest/logs/159747/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                51db2d7cf26d05a961ec0ee0eb773594b32cc4a1
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  191 days
Failing since        152659  2020-08-21 14:07:39 Z  189 days  366 attempts
Testing same since   159700  2021-02-26 08:46:59 Z    1 days    2 attempts

------------------------------------------------------------
428 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 117941 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Feb 27 14:31:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 27 Feb 2021 14:31:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90853.171924 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lG0cU-0006hD-OV; Sat, 27 Feb 2021 14:30:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90853.171924; Sat, 27 Feb 2021 14:30:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lG0cU-0006h6-LP; Sat, 27 Feb 2021 14:30:58 +0000
Received: by outflank-mailman (input) for mailman id 90853;
 Sat, 27 Feb 2021 14:30:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lG0cT-0006h1-Pb
 for xen-devel@lists.xenproject.org; Sat, 27 Feb 2021 14:30:57 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lG0cP-0007nb-So; Sat, 27 Feb 2021 14:30:53 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lG0cP-0002Z8-FI; Sat, 27 Feb 2021 14:30:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=OPgarU1oyRZ/YOKdGPUmtw6i3lRcmWekV1Hz2zo/IQA=; b=e4I1hAlZx2Bv7kdN2SCRZtSGc/
	rrsyMbXfzKYPMVe55iw6vPWnX+TuH8CexgWXPp45W/5On1ci3XFXocQVNlJiGEiyrkmxkOlbl5GTK
	PNG/fA5FuHIZy/S7Ct8UGH81MDTzkbIsXUxZNz6DHZ82cylmRuTq1Zv+rZoEzXdb6Fnk=;
Subject: Re: [PATCH] xen/arm: Ensure the vCPU context is seen before clearing
 the _VPF_down
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com,
 ash.j.wilding@gmail.com, Julien Grall <jgrall@amazon.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Dario Faggioli <dfaggioli@suse.com>, George Dunlap <george.dunlap@citrix.com>
References: <20210226205158.20991-1-julien@xen.org>
 <alpine.DEB.2.21.2102261756280.2682@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <ca41bfbb-d942-d8fd-e96e-c464f6b3643f@xen.org>
Date: Sat, 27 Feb 2021 14:30:51 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2102261756280.2682@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

(+ Dario and George)

Hi Stefano,

I have added Dario and George to get some input from the scheduling side.

On 27/02/2021 01:58, Stefano Stabellini wrote:
> On Fri, 26 Feb 2021, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> A vCPU can get scheduled as soon as _VPF_down is cleared. As there is
>> currently no ordering guarantee in arch_set_info_guest(), the flag may
>> be observed as cleared before the new values of the vCPU registers are
>> observed.
>>
>> Add an smp_mb() before the flag is cleared to prevent re-ordering.
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>
>> ---
>>
>> Barriers should work in pairs. However, I am not entirely sure where to
>> put the other half. Maybe at the beginning of context_switch_to()?
> 
> It should be right after VGCF_online is set or cleared, right?

vcpu_guest_context_t is a variable allocated on the heap just for the 
purpose of this call. So ordering against VGCF_online is not going to 
achieve anything.

> So it
> would be:
> 
> xen/arch/arm/domctl.c:arch_get_info_guest
> xen/arch/arm/vpsci.c:do_common_cpu_on
> 
> But I think it is impossible that either of them get called at the same
> time as arch_set_info_guest, which makes me wonder if we actually need
> the barrier...
arch_get_info_guest() is called without the domain lock held, and I can't 
see any other lock that would prevent it from being called in parallel 
with arch_set_info_guest().

So you could technically get corrupted information from 
XEN_DOMCTL_getvcpucontext. For this case, we would want an smp_wmb() 
before writing to v->is_initialised. The corresponding read barrier 
would be in vcpu_pause() -> vcpu_sleep_sync() -> sync_vcpu_execstate().

But this is not the issue I was originally trying to solve. Currently, 
do_common_cpu_on() will roughly do:

  1) domain_lock(d)

  2) v->arch.sctlr = ...
     v->arch.ttbr0 = ...

  3) clear_bit(_VPF_down, &v->pause_flags);

  4) domain_unlock(d)

  5) vcpu_wake(v);

If we had only one pCPU on the system, then we would only wake the vCPU 
in step 5, and we would be fine. But that's not the interesting case.

If you add a second pCPU to the story, vcpu_wake() may happen in 
parallel (see more below). As there is no memory barrier, step 3 may be 
observed before step 2. So, assuming the vCPU is runnable, we could 
start to schedule it before any of the register updates (step 2) are 
observed.

This means that when context_switch_to() is called, we may end up 
restoring some stale values.

Now the question is: can vcpu_wake() be called in parallel from another 
pCPU? AFAICT, it would only be called if a given flag in v->pause_flags 
is cleared (e.g. _VPF_blocked). But can we rely on that?

Even if we can rely on it, v->pause_flags contains other flags too. I 
cannot rule out that _VPF_down may be set at the same time as another 
_VPF_* flag.

Therefore, I think a barrier is necessary to ensure the ordering.

Do you agree with this analysis?

> 
> 
> 
>> The issue described here is also quite theoretical because there are
>> hundreds of instructions executed between the time a vCPU is seen
>> runnable and scheduled. But better be safe than sorry :).
>> ---
>>   xen/arch/arm/domain.c | 7 +++++++
>>   1 file changed, 7 insertions(+)
>>
>> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
>> index bdd3d3e5b5d5..2b705e66be81 100644
>> --- a/xen/arch/arm/domain.c
>> +++ b/xen/arch/arm/domain.c
>> @@ -914,7 +914,14 @@ int arch_set_info_guest(
>>       v->is_initialised = 1;
>>   
>>       if ( ctxt->flags & VGCF_online )
>> +    {
>> +        /*
>> +         * The vCPU can be scheduled as soon as _VPF_down is cleared.
>> +         * So clear the bit *after* the context was loaded.
>> +         */
>> +        smp_mb();

 From the discussion above, I would move this barrier before 
v->is_initialised, so we also take care of the issue with 
arch_get_info_guest().

This barrier can also be reduced to an smp_wmb(), as we only need 
ordering between writes.

The barrier would be paired with the barrier in:
    - sync_vcpu_execstate() in the case of arch_get_info_guest().
    - context_switch_to() in the case of scheduling (the exact barrier 
is TBD).
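Concretely, the revised hunk would look something like this (a sketch of
the idea against the code quoted below, not a tested patch):

```c
    /*
     * Ensure the vCPU context is visible before v->is_initialised is set
     * and before _VPF_down is cleared, so that neither a parallel
     * XEN_DOMCTL_getvcpucontext nor the scheduler can observe stale state.
     */
    smp_wmb();

    v->is_initialised = 1;

    if ( ctxt->flags & VGCF_online )
        /* The vCPU can be scheduled as soon as _VPF_down is cleared. */
        clear_bit(_VPF_down, &v->pause_flags);
    else
        set_bit(_VPF_down, &v->pause_flags);
```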

>>           clear_bit(_VPF_down, &v->pause_flags);
>> +    }
>>       else
>>           set_bit(_VPF_down, &v->pause_flags);
>>   
>> -- 
>> 2.17.1
>>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat Feb 27 15:37:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 27 Feb 2021 15:37:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90868.171960 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lG1eo-0004R2-7f; Sat, 27 Feb 2021 15:37:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90868.171960; Sat, 27 Feb 2021 15:37:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lG1eo-0004Qv-4G; Sat, 27 Feb 2021 15:37:26 +0000
Received: by outflank-mailman (input) for mailman id 90868;
 Sat, 27 Feb 2021 15:37:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lG1em-0004Qn-Mj; Sat, 27 Feb 2021 15:37:24 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lG1em-0000Rw-HG; Sat, 27 Feb 2021 15:37:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lG1em-00019t-Az; Sat, 27 Feb 2021 15:37:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lG1em-0002UU-AS; Sat, 27 Feb 2021 15:37:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=YvW+6+Tjww1l/z7U+AZJqwAueuUhg8SwCEWmUfT+xnY=; b=XMSsbZwfwk5ihpo11A9oze5YtD
	fhyowoQG/ykYJFpwbO733qIkrWgfl7zehZIlNasYJR/TXqODwbxYLdQMCcjsgS7Q4lOKjDhcMRxJp
	87lfcda77/2hEYfOYPG2TtcQ0CnwqMwWpafnQyZEr0nAbuGvMofEb0cwplOo85JROct0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159722-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 159722: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-localmigrate/x10:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=3fb6d0e00efc958d01c2f109c8453033a2d96796
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 27 Feb 2021 15:37:24 +0000

flight 159722 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159722/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 19 guest-localmigrate/x10  fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 linux                3fb6d0e00efc958d01c2f109c8453033a2d96796
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  210 days
Failing since        152366  2020-08-01 20:49:34 Z  209 days  362 attempts
Testing same since   159722  2021-02-27 03:30:51 Z    0 days    1 attempts

------------------------------------------------------------
5128 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1267728 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Feb 27 18:59:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 27 Feb 2021 18:59:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90958.172058 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lG4oL-0008FB-T9; Sat, 27 Feb 2021 18:59:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90958.172058; Sat, 27 Feb 2021 18:59:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lG4oL-0008F4-Q1; Sat, 27 Feb 2021 18:59:29 +0000
Received: by outflank-mailman (input) for mailman id 90958;
 Sat, 27 Feb 2021 18:59:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0r77=H5=linaro.org=richard.henderson@srs-us1.protection.inumbo.net>)
 id 1lG4oK-0008Ez-Lx
 for xen-devel@lists.xenproject.org; Sat, 27 Feb 2021 18:59:28 +0000
Received: from mail-pg1-x532.google.com (unknown [2607:f8b0:4864:20::532])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 54b816fd-de20-40f4-8595-098e4636384d;
 Sat, 27 Feb 2021 18:59:27 +0000 (UTC)
Received: by mail-pg1-x532.google.com with SMTP id l2so8396757pgb.1
 for <xen-devel@lists.xenproject.org>; Sat, 27 Feb 2021 10:59:27 -0800 (PST)
Received: from [192.168.1.11] (174-21-84-25.tukw.qwest.net. [174.21.84.25])
 by smtp.gmail.com with ESMTPSA id ep17sm12598344pjb.19.2021.02.27.10.59.25
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Sat, 27 Feb 2021 10:59:26 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 54b816fd-de20-40f4-8595-098e4636384d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=fumoWEvufGUqXz2ydHpb2ku6RD/nJbXOXTgn0QKGw0Q=;
        b=g+OO9dYyoUUDorYRWQcecLLNqn1EH9CMFDUSg9rWCmsSjN7whn1x/YYFnPUVuUcccw
         aFoSqQTPcxqfF/wS4QxmXSFY+I9wUISRu9v2viM2z/b0z9tQ0mrKCo889rOAj755Chxm
         kO5mIxf4xKKPiUPW5mEQTQHKHwSRPrpDQLU9Wrf9by1TJcKaOhtdMcjMve8mUKUcTApQ
         40ClLostH5b6XwAYkzB2BGaZA9vzQ0bp6r17sxn7PgzfYVg72XWn4M6DjYfwIOtiY4EB
         Uav1r6mozVCwK2+2TC3S42BHj407YPn4H0agNNKMDI1mhhQ/uu72W+APDayF04cl1BER
         hinw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=fumoWEvufGUqXz2ydHpb2ku6RD/nJbXOXTgn0QKGw0Q=;
        b=fbUO+chL0HJ9e3XQOcemVC8GkSypzkpohmXvRoyLxcE0WqP368aPG7pkayXkmS4M0h
         qTvtQUeLMMpFgBtb6r4idSAx2ybMzDvwAyXHtWjNWO7WyAQ9glaqAC0lgGGfflyxuB0G
         ez3yZ2jz8btscGcpyMsRRdZiAdst232m2NC484R0k7MDAIPjXOFPOxnv0MSPNVmrgsGF
         e6EhEKeThLcigWjx951PsN8rMD0o5RW0RoQSBoWhit6G87GokusFCIWFlKn6ypkEeeDa
         27Xpnm4x42Eyt+g3uoU35ad0nR96psi64VCVuQMDXXZZHXog7Tn4UnSyPemfrYgL/qYA
         NeHQ==
X-Gm-Message-State: AOAM532gtNo5husAyHdSpUQWqJNbILZOmAHjgoHY/um11OV3tHNz8DO9
	NtFaaOZWPqAvTBIZC0Wd8aUMWw==
X-Google-Smtp-Source: ABdhPJyvNaVNSe53kCk5uJ0s3m4BeDlq7EKMbKPXl7BDoZl3wIfbtUKusGynjVh8y+Mh4G/sxUd31w==
X-Received: by 2002:a63:4e4c:: with SMTP id o12mr7711298pgl.143.1614452366715;
        Sat, 27 Feb 2021 10:59:26 -0800 (PST)
Subject: Re: [RFC PATCH] cpu: system_ops: move to cpu-system-ops.h, keep a
 pointer in CPUClass
To: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <f4bug@amsat.org>,
 qemu-devel@nongnu.org
Cc: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, Paul Durrant
 <paul@xen.org>, Anthony Perard <anthony.perard@citrix.com>,
 =?UTF-8?Q?Daniel_P=2e_Berrang=c3=a9?= <berrange@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Claudio Fontana <cfontana@suse.de>, xen-devel@lists.xenproject.org,
 qemu-arm@nongnu.org, Paolo Bonzini <pbonzini@redhat.com>,
 Markus Armbruster <armbru@redhat.com>, Eduardo Habkost
 <ehabkost@redhat.com>, Peter Maydell <peter.maydell@linaro.org>
References: <20210226164001.4102868-1-f4bug@amsat.org>
From: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <516603e7-d3ba-6958-41ec-ebcc52530d37@linaro.org>
Date: Sat, 27 Feb 2021 10:59:23 -0800
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210226164001.4102868-1-f4bug@amsat.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 2/26/21 8:40 AM, Philippe Mathieu-Daudé wrote:
> +++ b/include/hw/core/cpu-system-ops.h
> @@ -0,0 +1,89 @@
> +/*
> + * CPU operations specific to system emulation
> + *
> + * Copyright (c) 2012 SUSE LINUX Products GmbH
> + *
> + * This work is licensed under the terms of the GNU GPL, version 2 or later.
> + * See the COPYING file in the top-level directory.
> + */
> +
> +#ifndef CPU_SYSTEM_OPS_H
> +#define CPU_SYSTEM_OPS_H

I think you should create this header at the start of the series, so that you
don't have to move the structure later.

> +/* see cpu-system-ops.h */
> +struct CPUSystemOperations;

In the beginning, you'd actually #include the new header here, and in this
final patch you'd remove the include and insert the declaration.

>      /* when system emulation is not available, this pointer is NULL */
> -    struct CPUSystemOperations system_ops;
> +    struct CPUSystemOperations *system_ops;

Insert the comment at this point in the series, since the structure itself
can't be null.  ;-)
Also, make the pointer const.

>      /* when TCG is not available, this pointer is NULL */
>      struct TCGCPUOps *tcg_ops;

The only reason this one isn't const is hw/mips/jazz.  And I'm very tempted to
hack around that one.

> +static struct CPUSystemOperations arm_sysemu_ops = {
> +    .vmsd = &vmstate_arm_cpu,
> +    .get_phys_page_attrs_debug = arm_cpu_get_phys_page_attrs_debug,
> +    .asidx_from_attrs = arm_asidx_from_attrs,
> +    .virtio_is_big_endian = arm_cpu_virtio_is_big_endian,
> +    .write_elf64_note = arm_cpu_write_elf64_note,
> +    .write_elf32_note = arm_cpu_write_elf32_note,
> +};

const.


r~


From xen-devel-bounces@lists.xenproject.org Sat Feb 27 19:41:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 27 Feb 2021 19:41:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90962.172073 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lG5TI-0004YU-4k; Sat, 27 Feb 2021 19:41:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90962.172073; Sat, 27 Feb 2021 19:41:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lG5TI-0004YN-1q; Sat, 27 Feb 2021 19:41:48 +0000
Received: by outflank-mailman (input) for mailman id 90962;
 Sat, 27 Feb 2021 19:41:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lG5TG-0004YF-5N; Sat, 27 Feb 2021 19:41:46 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lG5TF-0004uU-U8; Sat, 27 Feb 2021 19:41:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lG5TF-0005rG-JD; Sat, 27 Feb 2021 19:41:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lG5TF-0002Zy-Ij; Sat, 27 Feb 2021 19:41:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=wYd3Z1Lem3pB2sbAgfrWf4Pr2amC6MpMNTaDehB09v0=; b=oGjtGinjWCaWeubkjsPVdxTqf0
	iC0H9FYGIZEhZ/Q1VEzla9aOGUy57rwiju8P2JGxZTS0y7gn+4sjz/GXUM5uL9lRNn9QgFw0omRxe
	yO0zm+LSsV8JVCcFaGbjGNR3/f3mKxcA/YCQykDzYwS1fu5tFfn9xAPDCZCRz5R5ha30=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159741-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 159741: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=31eaefd4df78d58ad4087a13f6fc7607b266d04e
X-Osstest-Versions-That:
    ovmf=cd14150c1594d8deeee6ecf80feb5751dcd7f315
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 27 Feb 2021 19:41:45 +0000

flight 159741 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159741/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 31eaefd4df78d58ad4087a13f6fc7607b266d04e
baseline version:
 ovmf                 cd14150c1594d8deeee6ecf80feb5751dcd7f315

Last test of basis   159721  2021-02-26 23:39:43 Z    0 days
Testing same since   159741  2021-02-27 11:09:44 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Sami Mujawar <sami.mujawar@arm.com>
  Sughosh Ganu <sughosh.ganu@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   cd14150c15..31eaefd4df  31eaefd4df78d58ad4087a13f6fc7607b266d04e -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat Feb 27 21:42:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 27 Feb 2021 21:42:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.90974.172089 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lG7Lr-0007xY-UH; Sat, 27 Feb 2021 21:42:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 90974.172089; Sat, 27 Feb 2021 21:42:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lG7Lr-0007xR-QZ; Sat, 27 Feb 2021 21:42:15 +0000
Received: by outflank-mailman (input) for mailman id 90974;
 Sat, 27 Feb 2021 21:42:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lG7Lq-0007xI-L6; Sat, 27 Feb 2021 21:42:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lG7Lq-0006wZ-Dm; Sat, 27 Feb 2021 21:42:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lG7Lq-00037S-5k; Sat, 27 Feb 2021 21:42:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lG7Lq-0003vE-5H; Sat, 27 Feb 2021 21:42:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=mhbqxWUUnZOfsACtyd0acy3iRdzyC1sDhVlSYT+1wt4=; b=KM1En1xNMMvoK3w/6rTqaQWjCI
	ld0biRH9JWawJ4MCXWnM9L6XHP56NXxA2RJ35/Rbnu886TK0pa/rhnu8XiXTlnmqOqA2MvXr7zH0p
	tj8e/egS8JMetjgCJxOqa33ceacRlTMWLdLWPuKx6aEb7SZLunVO1S0Ox3S8kLob2WMY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159726-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 159726: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-start/debianhvm.repeat:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-vhd:leak-check/check:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=c4441ab1f1d506a942002ccc55fdde2fe30ef626
X-Osstest-Versions-That:
    xen=fc8fb368515391374e5f170a1e07205d914bc14a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 27 Feb 2021 21:42:14 +0000

flight 159726 xen-unstable real [real]
flight 159776 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/159726/
http://logs.test-lab.xenproject.org/osstest/logs/159776/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 20 guest-start/debianhvm.repeat fail pass in 159776-retest
 test-armhf-armhf-xl-vhd      20 leak-check/check    fail pass in 159776-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 159692
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 159692
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 159692
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 159692
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 159692
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 159692
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 159692
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 159692
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 159692
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 159692
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 159692
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  c4441ab1f1d506a942002ccc55fdde2fe30ef626
baseline version:
 xen                  fc8fb368515391374e5f170a1e07205d914bc14a

Last test of basis   159692  2021-02-26 03:53:29 Z    1 days
Failing since        159717  2021-02-26 17:38:57 Z    1 days    2 attempts
Testing same since   159726  2021-02-27 07:41:37 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Dario Faggioli <dfaggioli@suse.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   fc8fb36851..c4441ab1f1  c4441ab1f1d506a942002ccc55fdde2fe30ef626 -> master


From xen-devel-bounces@lists.xenproject.org Sun Feb 28 01:48:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 28 Feb 2021 01:48:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.91000.172136 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lGBBp-0000Hp-Gq; Sun, 28 Feb 2021 01:48:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 91000.172136; Sun, 28 Feb 2021 01:48:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lGBBp-0000Hb-Al; Sun, 28 Feb 2021 01:48:09 +0000
Received: by outflank-mailman (input) for mailman id 91000;
 Sun, 28 Feb 2021 01:48:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ntq9=H6=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1lGBBo-0000HW-Km
 for xen-devel@lists.xenproject.org; Sun, 28 Feb 2021 01:48:08 +0000
Received: from aserp2120.oracle.com (unknown [141.146.126.78])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 347f4098-1add-4615-905f-33e403ad0590;
 Sun, 28 Feb 2021 01:48:07 +0000 (UTC)
Received: from pps.filterd (aserp2120.oracle.com [127.0.0.1])
 by aserp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 11S1jE5J157296;
 Sun, 28 Feb 2021 01:48:05 GMT
Received: from userp3020.oracle.com (userp3020.oracle.com [156.151.31.79])
 by aserp2120.oracle.com with ESMTP id 36ye1m12e2-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Sun, 28 Feb 2021 01:48:05 +0000
Received: from pps.filterd (userp3020.oracle.com [127.0.0.1])
 by userp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 11S1kL11142170;
 Sun, 28 Feb 2021 01:48:04 GMT
Received: from nam12-mw2-obe.outbound.protection.outlook.com
 (mail-mw2nam12lp2043.outbound.protection.outlook.com [104.47.66.43])
 by userp3020.oracle.com with ESMTP id 36yyup2177-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Sun, 28 Feb 2021 01:48:04 +0000
Received: from BYAPR10MB3288.namprd10.prod.outlook.com (2603:10b6:a03:156::21)
 by BY5PR10MB4226.namprd10.prod.outlook.com (2603:10b6:a03:210::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3890.23; Sun, 28 Feb
 2021 01:48:02 +0000
Received: from BYAPR10MB3288.namprd10.prod.outlook.com
 ([fe80::f489:4e25:63e0:c721]) by BYAPR10MB3288.namprd10.prod.outlook.com
 ([fe80::f489:4e25:63e0:c721%7]) with mapi id 15.20.3890.025; Sun, 28 Feb 2021
 01:48:02 +0000
Received: from [10.74.98.190] (138.3.200.62) by
 SJ0PR13CA0083.namprd13.prod.outlook.com (2603:10b6:a03:2c4::28) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3912.9 via Frontend
 Transport; Sun, 28 Feb 2021 01:48:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 347f4098-1add-4615-905f-33e403ad0590
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : in-reply-to : content-type :
 content-transfer-encoding : mime-version; s=corp-2020-01-29;
 bh=JRCG3BWxZVhuS9n46tYU+6TTJsKzBBEdKYKlrw9u8Y0=;
 b=QGHXtHIJrArjI+NPlkRr1EUU7Zth721PmlH8D1D/kxSV172xWxKr6yIku/Bhp8Sdd8LH
 IhVVU9vy0JnTEeUjGJ2rh5qapjmyynyVn9nUFJGa1iHIEGdw6T82ecvkhwY4TXghE/Oc
 5lWKak74sjeFupApI/88Erln9EfNCyYkbj4gJnJ5b7oWmCDDQqwm5HAOICG3tevMAyzK
 zcS62qM9DuzA11VQsmFuO02KLfQsZXSK3Zd7Yo3usd6gpHmGB2FSn3OvBtGPHQfG+E7b
 xLSYPsFx1ztvyxomtxFJrgG1y4l3QuPYX7uy9DEEHkaMCrAIRgnZn0hKQnf55yjO9fiF hw== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=VIAih/CfMpllZplrB2mlRB4v74bNQKFnBjaxYWUmjhPj/t4R45tSNNEDZeEbw9e0AAiqKZTFqZSJSxE/crPSIW2ppsDQMv8e54OsudvcMsXB8YW6RYNQDDNwXKLqeiXq9kcO0aNHox2LiYDv4RsD9jG55gzvbkL8bY09w/dK5gmCtEpk9EKgwouN/puAgJZ+N6Ts4pre8QNvf2tLUtRx1PrzcJqoAJPwP90Ae3csNT9ub+MttncRq0c2v/P1RMba8G+cIoBpfVv3xQxV78GESy7fmVr3hqFsQpXi5cUdo8qncvghlQE9Gc+djvILkJ8blpOU/6+LoZqR807soHO3vA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=JRCG3BWxZVhuS9n46tYU+6TTJsKzBBEdKYKlrw9u8Y0=;
 b=fKHYYbxdpxah3dvwFu2JTMyh0BKw6p+Rsf9ww541ikxmUkMNrZJI2/A6WrK4Vm9N+JbrqiJlq8e1e4BgLZwisPhZ7PzrweDrF4PeSxdewJy9WxGIGbKx7X2ru3x/NC0fDTdWOhuxbpPBsbtF7BikwKGtlnGjvg3k4jIQ5+yppdwNoBkHy7TVu37FnRJvdc+HLB+ZpJQjUMM/EvFyOkpBf6gYZEoaVgIj7iTQDcQcnHXd3kU7p84bm/cgdez0pzXWIkz/87VoYpLmCDz1e08Cmwd6myf/dhPpliZlBo+BdlgODXMb4SjEYLqOcIsE72bAeZ6QCIqbizfF6HYewEbfiA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=JRCG3BWxZVhuS9n46tYU+6TTJsKzBBEdKYKlrw9u8Y0=;
 b=fQUvBqvTuBrh+8yRT63Uq0gYBfVZF0YjeLy5OK+0vsnzxBJPF6hpWZ9MLay4cg6DwnoIky9/+uYwgjX2ZKjKZ057kVKPua+QcDGrKBfQjEAtvD4J+V4W48xgGMGzbj160ryT0xsIV/QYLdIvMuI0rd1Z+0qTvPlSaXjP2pAVJkw=
Subject: Re: [PATCH v1] xen: ACPI: Get rid of ACPICA message printing
To: "Rafael J. Wysocki" <rjw@rjwysocki.net>,
        Linux ACPI <linux-acpi@vger.kernel.org>
Cc: LKML <linux-kernel@vger.kernel.org>,
        Stefano Stabellini <sstabellini@kernel.org>,
        Juergen Gross
 <jgross@suse.com>, xen-devel@lists.xenproject.org,
        Konrad Wilk <konrad.wilk@oracle.com>
References: <1709720.Zl72FGBfpD@kreacher>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <eaeba4a0-7bb9-7b17-9ba6-49921f6e834c@oracle.com>
Date: Sat, 27 Feb 2021 20:47:57 -0500
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.7.1
In-Reply-To: <1709720.Zl72FGBfpD@kreacher>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Originating-IP: [138.3.200.62]
X-ClientProxiedBy: SJ0PR13CA0083.namprd13.prod.outlook.com
 (2603:10b6:a03:2c4::28) To BYAPR10MB3288.namprd10.prod.outlook.com
 (2603:10b6:a03:156::21)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 4aa07595-649b-4024-faf7-08d8db8ae258
X-MS-TrafficTypeDiagnostic: BY5PR10MB4226:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: 
	<BY5PR10MB4226A07E40ECF615230B45458A9B9@BY5PR10MB4226.namprd10.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:5797;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 4aa07595-649b-4024-faf7-08d8db8ae258
X-MS-Exchange-CrossTenant-AuthSource: BYAPR10MB3288.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Feb 2021 01:48:02.5335
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 5XG48lgiocDWM4bpDfiLyWTk/MGDVwOlREfw8n5OKRZ9Kv7JkgMRjX5i0m2lGjgar5a1YFqFXN4KgZ9rzxU+AZvcxk2Fd8wOILAZP/QdERg=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR10MB4226
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9908 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxscore=0 spamscore=0 suspectscore=0
 mlxlogscore=999 bulkscore=0 adultscore=0 phishscore=0 malwarescore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2102280010
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9908 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 phishscore=0 priorityscore=1501
 mlxlogscore=999 impostorscore=0 suspectscore=0 adultscore=0 malwarescore=0
 mlxscore=0 spamscore=0 bulkscore=0 lowpriorityscore=0 clxscore=1015
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2102280010


On 2/24/21 1:47 PM, Rafael J. Wysocki wrote:
> From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
>
> The ACPI_DEBUG_PRINT() macro is used in a few places in
> xen-acpi-cpuhotplug.c and xen-acpi-memhotplug.c for printing debug
> messages, but that is questionable, because that macro belongs to
> ACPICA and it should not be used elsewhere.  In addition,
> ACPI_DEBUG_PRINT() requires special enabling to allow it to actually
> print the message and the _COMPONENT symbol generally needed for
> that is not defined in any of the files in question.
>
> For this reason, replace all of the ACPI_DEBUG_PRINT() instances in
> the Xen code with acpi_handle_debug() (with the additional benefit
> that the source object can be identified more easily after this
> change) and drop the ACPI_MODULE_NAME() definitions that are only
> used by the ACPICA message printing macros from that code.
>
> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> ---
>  drivers/xen/xen-acpi-cpuhotplug.c |   12 +++++-------
>  drivers/xen/xen-acpi-memhotplug.c |   16 +++++++---------


While building with this patch I (re-)discovered that this code has depended on BROKEN since 2013 (commit 76fc253723add). Despite the commit message there saying it was a temporary measure, it seems to me by now that it is more than that.


And clearly no one has tried to build these files since at least 2015, because the memhotplug file doesn't compile due to commit cfafae940381207.


While this is easily fixable, the question is whether we want to keep these files at all. Apparently no one cares about this functionality.


-boris



From xen-devel-bounces@lists.xenproject.org Sun Feb 28 02:49:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 28 Feb 2021 02:49:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.91005.172148 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lGC95-0006iq-1P; Sun, 28 Feb 2021 02:49:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 91005.172148; Sun, 28 Feb 2021 02:49:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lGC94-0006ij-Ts; Sun, 28 Feb 2021 02:49:22 +0000
Received: by outflank-mailman (input) for mailman id 91005;
 Sun, 28 Feb 2021 02:49:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lGC93-0006ib-UX; Sun, 28 Feb 2021 02:49:21 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lGC93-000615-O2; Sun, 28 Feb 2021 02:49:21 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lGC93-0004N5-Bc; Sun, 28 Feb 2021 02:49:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lGC93-0001uZ-Au; Sun, 28 Feb 2021 02:49:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=P3IB8MFHObl4JAIugHu9NZlqBMiewf7KxNRPo+JRVMg=; b=hEUF2SiAfzdqIh3SDoleoBMR0T
	s4v08GU8d72x03EmJXriZ8yUB1cglbMKt50m7NhrwYqNygie6rj+LJZLKZrGY5Nm5AL+NuPVJfthC
	Kh+DdoXl+PRwo6F+YuOio1Us2auEFkH5BHNTGzM+51iETjCClCZwxQKPKm+gM9G+eEsw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159752-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 159752: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=51db2d7cf26d05a961ec0ee0eb773594b32cc4a1
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 28 Feb 2021 02:49:21 +0000

flight 159752 qemu-mainline real [real]
flight 159778 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/159752/
http://logs.test-lab.xenproject.org/osstest/logs/159778/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                51db2d7cf26d05a961ec0ee0eb773594b32cc4a1
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  191 days
Failing since        152659  2020-08-21 14:07:39 Z  190 days  367 attempts
Testing same since   159700  2021-02-26 08:46:59 Z    1 days    3 attempts

------------------------------------------------------------
428 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 117941 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Feb 28 03:32:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 28 Feb 2021 03:32:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.91010.172162 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lGCp7-00030s-DE; Sun, 28 Feb 2021 03:32:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 91010.172162; Sun, 28 Feb 2021 03:32:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lGCp7-00030l-AA; Sun, 28 Feb 2021 03:32:49 +0000
Received: by outflank-mailman (input) for mailman id 91010;
 Sun, 28 Feb 2021 03:32:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lGCp5-00030d-Mv; Sun, 28 Feb 2021 03:32:47 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lGCp5-0006jc-CZ; Sun, 28 Feb 2021 03:32:47 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lGCp5-0005aC-3H; Sun, 28 Feb 2021 03:32:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lGCp5-0001Th-2m; Sun, 28 Feb 2021 03:32:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=8AEhobPE3Y5Tb7boPDVtMsPIIhNK0IgzwxH4NAQDBtE=; b=h2fg3gGtAqnM+3IIsSZqgip0rF
	5tUVT5COWzS6mSVQRzsTKXN/uxkmm8QvEmU2MzYC0FI5stcy0eU/IyagyLtxV/1ItSxqS1dFALIh0
	HStKgMZG5UI7yBDBWNXW77puC3eQ5cyTlcvZQ3DGq+ZSCGr0X3W+QmG0x7BI2LiAiPEA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159762-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 159762: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=3fb6d0e00efc958d01c2f109c8453033a2d96796
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 28 Feb 2021 03:32:47 +0000

flight 159762 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159762/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 linux                3fb6d0e00efc958d01c2f109c8453033a2d96796
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  211 days
Failing since        152366  2020-08-01 20:49:34 Z  210 days  363 attempts
Testing same since   159722  2021-02-27 03:30:51 Z    0 days    2 attempts

------------------------------------------------------------
5128 people touched revisions under test,
not listing them all
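
(The revision log is omitted by the report. As a sketch, it can be
reconstructed locally from the two hashes given above, assuming a clone of
mainline Linux — the repository URL below is an assumption, not part of the
report.)

```shell
# Hashes copied from the "version targeted for testing" / "baseline version"
# section of this report.
BASELINE=deacdb3e3979979016fcd0ffd518c320a62ad166
TESTED=3fb6d0e00efc958d01c2f109c8453033a2d96796

# In a local clone (network commands left commented as a sketch):
# git clone https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
# cd linux
# git log --oneline "${BASELINE}..${TESTED}"       # the omitted revision log
# git shortlog -sn "${BASELINE}..${TESTED}" | wc -l  # the contributor count
```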

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1267728 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Feb 28 10:19:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 28 Feb 2021 10:19:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.91044.172178 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lGJ9s-0000wm-U2; Sun, 28 Feb 2021 10:18:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 91044.172178; Sun, 28 Feb 2021 10:18:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lGJ9s-0000wf-Q9; Sun, 28 Feb 2021 10:18:40 +0000
Received: by outflank-mailman (input) for mailman id 91044;
 Sun, 28 Feb 2021 10:18:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lGJ9r-0000wX-GU; Sun, 28 Feb 2021 10:18:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lGJ9r-00061B-Bu; Sun, 28 Feb 2021 10:18:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lGJ9r-0000u6-1q; Sun, 28 Feb 2021 10:18:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lGJ9r-0005db-1L; Sun, 28 Feb 2021 10:18:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=bWQNK+0eyJiECO2uemznfzQWSV7iHH7r80SPI/o9RNk=; b=t6nd6TYa90qZnh0qAGHzQHLIT2
	1HsRrOsrL369vSCDPPdkXILm7vdRf9UNvh5IPQJA2CCqcM/+zoIWmCD1OcToGofAL+3OPfGZ4wud4
	d7NQlIuSzXkKMKPkRFAMCetcDGhd1f0POBpFd8IRMnUW6ShK8lrgffIXBuo9g3N1Nxf4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159783-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 159783: all pass - PUSHED
X-Osstest-Versions-This:
    xen=c4441ab1f1d506a942002ccc55fdde2fe30ef626
X-Osstest-Versions-That:
    xen=5d94433a66df29ce314696a13bdd324ec0e342fe
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 28 Feb 2021 10:18:39 +0000

flight 159783 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159783/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  c4441ab1f1d506a942002ccc55fdde2fe30ef626
baseline version:
 xen                  5d94433a66df29ce314696a13bdd324ec0e342fe

Last test of basis   159620  2021-02-24 09:19:25 Z    4 days
Testing same since   159783  2021-02-28 09:19:28 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Dario Faggioli <dfaggioli@suse.com>
  Hubert Jasudowicz <hubert.jasudowicz@cert.pl>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/xen.git
   5d94433a66..c4441ab1f1  c4441ab1f1d506a942002ccc55fdde2fe30ef626 -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Sun Feb 28 10:22:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 28 Feb 2021 10:22:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.91048.172193 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lGJDJ-0001wH-Fn; Sun, 28 Feb 2021 10:22:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 91048.172193; Sun, 28 Feb 2021 10:22:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lGJDJ-0001wA-BF; Sun, 28 Feb 2021 10:22:13 +0000
Received: by outflank-mailman (input) for mailman id 91048;
 Sun, 28 Feb 2021 10:22:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lGJDI-0001w2-FO; Sun, 28 Feb 2021 10:22:12 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lGJDI-00065S-7A; Sun, 28 Feb 2021 10:22:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lGJDH-00018B-UE; Sun, 28 Feb 2021 10:22:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lGJDH-0007gV-Ti; Sun, 28 Feb 2021 10:22:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=tD5QgAckaLZXhdav/YDNIeyX4gjJwXS6v2igHSNpRmQ=; b=1H0A34GTK4v9KHnwLvzs0CC2DH
	UV6nOHhBkVIvJxKlKWOYZ8QMceKrsKuElMkKL2Ln4UFRxyT0WW+4JfsOO2iLN4ClykCZ51nZxbSUx
	pzY0WT97bSiIonFbdOjI6+ogkXGt1/fTwjLXqXAPDzA1PGsrZ2Kj2E/AjoFLJI93lvGM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159782-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 159782: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=eaaf9397f40a7a893a04ee86676478cca3c80d9d
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 28 Feb 2021 10:22:11 +0000

flight 159782 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159782/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              eaaf9397f40a7a893a04ee86676478cca3c80d9d
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  233 days
Failing since        151818  2020-07-11 04:18:52 Z  232 days  225 attempts
Testing same since   159693  2021-02-26 04:18:55 Z    2 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 44231 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Feb 28 10:50:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 28 Feb 2021 10:50:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.91065.172208 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lGJem-0004wM-U2; Sun, 28 Feb 2021 10:50:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 91065.172208; Sun, 28 Feb 2021 10:50:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lGJem-0004wF-Qp; Sun, 28 Feb 2021 10:50:36 +0000
Received: by outflank-mailman (input) for mailman id 91065;
 Sun, 28 Feb 2021 10:50:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lGJel-0004w7-9Q; Sun, 28 Feb 2021 10:50:35 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lGJel-0006Vb-19; Sun, 28 Feb 2021 10:50:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lGJek-0003Kv-O1; Sun, 28 Feb 2021 10:50:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lGJek-000197-NW; Sun, 28 Feb 2021 10:50:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=IvMyaJNFYeb8Jx2sACpIXx870tExvahLQ7pvKDZrz+g=; b=SriOiw7sPGVvsoG5+4K+DR061t
	dthPdmVJ2BKZEEA0GmKMHk0GxsU6egWesIRZwAlfOfGUAn/9SD7SNarNnglosQAVtUEwvx+XnvtHR
	XP7bKEKIjOmGT/3FF/oqkwgChbNIWEnjuoxhnSCmzXMxoytO3AvE3tWU+b1IwkfFrrqw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159779-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 159779: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=c4441ab1f1d506a942002ccc55fdde2fe30ef626
X-Osstest-Versions-That:
    xen=c4441ab1f1d506a942002ccc55fdde2fe30ef626
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 28 Feb 2021 10:50:34 +0000

flight 159779 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159779/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 159726
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 159726
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 159726
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 159726
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 159726
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 159726
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 159726
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 159726
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 159726
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 159726
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 159726
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  c4441ab1f1d506a942002ccc55fdde2fe30ef626
baseline version:
 xen                  c4441ab1f1d506a942002ccc55fdde2fe30ef626

Last test of basis   159779  2021-02-28 01:51:24 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun Feb 28 16:01:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 28 Feb 2021 16:01:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.91126.172223 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lGOVF-0001ey-0w; Sun, 28 Feb 2021 16:01:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 91126.172223; Sun, 28 Feb 2021 16:01:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lGOVE-0001er-To; Sun, 28 Feb 2021 16:01:04 +0000
Received: by outflank-mailman (input) for mailman id 91126;
 Sun, 28 Feb 2021 16:01:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lGOVC-0001ej-V2; Sun, 28 Feb 2021 16:01:02 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lGOVC-0003gJ-KT; Sun, 28 Feb 2021 16:01:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lGOVC-00073G-As; Sun, 28 Feb 2021 16:01:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lGOVC-00045m-AP; Sun, 28 Feb 2021 16:01:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=S1wff4ZTs+QEFKk5xNQqkP6av40kfpIb/7CluFWKy0A=; b=aTI5PfLF6ljVFRUzDwFSPqpVsv
	IKCmxEA0S63FTI5JiXSEA/cDY0j9BXBHovYER/fDsyXGZIvlhHwxt28QpIVeKMhYwWNDkgOFIUdvg
	Xip1M7T9IEZKrPiazjiIyXMEuAi16B5KcomWdOTp42nnQ5YGqKOP0EtCqtg1hkYBXTPA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159780-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 159780: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=51db2d7cf26d05a961ec0ee0eb773594b32cc4a1
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 28 Feb 2021 16:01:02 +0000

flight 159780 qemu-mainline real [real]
flight 159784 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/159780/
http://logs.test-lab.xenproject.org/osstest/logs/159784/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                51db2d7cf26d05a961ec0ee0eb773594b32cc4a1
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  192 days
Failing since        152659  2020-08-21 14:07:39 Z  191 days  368 attempts
Testing same since   159700  2021-02-26 08:46:59 Z    2 days    4 attempts

------------------------------------------------------------
428 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 117941 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Feb 28 18:30:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 28 Feb 2021 18:30:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.91166.172245 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lGQpi-0007eB-P5; Sun, 28 Feb 2021 18:30:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 91166.172245; Sun, 28 Feb 2021 18:30:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lGQpi-0007e4-Lr; Sun, 28 Feb 2021 18:30:22 +0000
Received: by outflank-mailman (input) for mailman id 91166;
 Sun, 28 Feb 2021 18:30:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lGQph-0007dd-DC; Sun, 28 Feb 2021 18:30:21 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lGQph-00067x-3w; Sun, 28 Feb 2021 18:30:21 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lGQpg-0004Nf-Q7; Sun, 28 Feb 2021 18:30:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lGQpg-0004PY-PW; Sun, 28 Feb 2021 18:30:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ILseuz5ZYmF32BcN/ycI4t+8wIL9glooxfzHXs6ABe4=; b=kvY6YdcvhxvS69G3eCj1yILCwu
	yRzSOF/BH1GtV5lqpikeSM+27b82S+BPKKNPwIvMDo505PjDj8r8jNr7HrWutyKdd3m0z8YweRFFT
	TPcQXOQiWOwdgPoxXLFIF2LcSpzmUU44RRR2Pk9XrcKBy653URW2R9ESDMJi3rGgAEPQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-159781-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 159781: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:guest-saverestore:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=5695e51619745d4fe3ec2506a2f0cd982c5e27a4
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 28 Feb 2021 18:30:20 +0000

flight 159781 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/159781/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-xl-credit1  17 guest-saverestore        fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 linux                5695e51619745d4fe3ec2506a2f0cd982c5e27a4
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  211 days
Failing since        152366  2020-08-01 20:49:34 Z  210 days  364 attempts
Testing same since   159781  2021-02-28 03:37:54 Z    0 days    1 attempts

------------------------------------------------------------
5130 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1268443 lines long.)